1 Introduction

Quantum Euclidean spaces were first introduced by a number of authors, including Groenewold [28] and Moyal [47], for the study of quantum mechanics in phase space. The constructions of Groenewold and Moyal were later abstracted into more general canonical commutation relation (CCR) algebras, and have since become fundamental in mathematical physics. Under the names Moyal planes or Moyal-Groenewold planes, these algebras play the role of a central and motivating example in noncommutative geometry [5, 22]. As geometrical spaces with noncommuting spatial coordinates, noncommutative Euclidean spaces have appeared frequently in the mathematical physics literature [21], in the contexts of string theory [61] and noncommutative field theory [48].

Quantum Euclidean spaces have also been studied as an interesting noncommutative setting for classical and harmonic analysis, and for this we refer the reader to recent work such as [24, 39, 46, 67].

Connes introduced the quantised calculus in [8] as a replacement for the algebra of differential forms in the noncommutative setting, and this point of view subsequently found application to mathematical physics [9]. Connes successfully applied his quantised calculus to provide a formula for the Hausdorff measure of Julia sets and of limit sets of quasi-Fuchsian groups in the plane [10, Chapter 4, Sect. 3.\(\gamma \)] (for more recent expositions see [14, 17]).

Following [8], quantised calculus may be defined in terms of a Fredholm module. The idea behind a Fredholm module has its origins in Atiyah’s work on K-homology [2], and further details can be found in, for example, [33, Chapter 8].

A Fredholm module can be defined with the following data: a separable Hilbert space H, a unitary self-adjoint operator F on H and a \(C^*\)-algebra \(\mathcal {A}\) represented on H such that the commutator [F, a] is a compact operator on H for all \(a \in \mathcal {A}\). The quantised differential of \(a \in \mathcal {A}\) is then defined to be the operator \({\bar{d}}a := [F,a]\).

It is suggestive to think of the compact operators on H as being analogous to “infinitesimals”, and one can measure the “size” or “order” of an infinitesimal T in terms of its singular value sequence:

$$\begin{aligned} \mu (n,T) := \inf \{\Vert T-R\Vert \;:\;\mathrm {rank}(R)\le n\} \end{aligned}$$

where \(\Vert \cdot \Vert \) is the operator norm.
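For instance, a positive compact operator T with eigenvalue sequence \(\mu (n,T) = (n+1)^{-1}\) may be regarded as an infinitesimal of order one: it is not trace class, but its singular values satisfy

$$\begin{aligned} \sum _{k=0}^{n}\mu (k,T) = O(\log n),\quad n\rightarrow \infty . \end{aligned}$$

Operators of this type populate the ideals \({{\mathcal {L}}}_{1,\infty }\) and \({{\mathcal {M}}}_{1,\infty }\) recalled below.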

A problem of particular interest in quantised calculus is to precisely quantify the asymptotics of the sequence \(\mu ({\bar{d}}a)\) in terms of a. In operator theoretic language, we seek conditions under which the operator \({\bar{d}}a\) belongs to some ideal of the algebra of bounded operators on H. Of the greatest importance are the Schatten-von Neumann ideals \({{\mathcal {L}}}_p\), the Schatten-Lorentz spaces \({{\mathcal {L}}}_{p,\infty }\) and the Macaev-Dixmier ideal \({{\mathcal {M}}}_{1,\infty }\) (cf. Sect. 2.1 and [42, Sect. 2.6]).

The link between quantised calculus and geometry is discussed by Connes in [9]. A model example for quantised calculus is to take a compact d-dimensional Riemannian spin manifold M (with \(d \ge 2\)) with Dirac operator D, and define H to be the Hilbert space of pointwise almost-everywhere equivalence classes of square integrable sections of the spinor bundle. The algebra \(\mathcal {A} = C(M)\) of continuous functions on M acts by pointwise multiplication on H, and one defines F as a difference of spectral projections:

$$\begin{aligned} F := \chi _{[0,\infty )}(D)-\chi _{(-\infty ,0)}(D). \end{aligned}$$

One then has \({\bar{d}}f = [F,M_f]\), where \(M_f\) is the operator on H of pointwise multiplication by \(f \in C(M)\). In quantised calculus the immediate question is to determine the relationship between the degree of differentiability of \(f \in C(M)\) and the rate of decay of the singular values of \({\bar{d}}f\). In general, we have the following inclusion [9, Theorem 3.1]:

This corresponds to the implication:

It is possible to specify even more precise details about the asymptotics of \({\bar{d}}f\). Suppose that \(\omega \) is an extended limit (a continuous linear functional on the space of bounded sequences \(\ell _\infty ({{\mathbb {N}}})\) which extends the limit functional). If \(\omega \) is invariant under dilations (in the sense of [42, Definition 6.2.4]) then [9, Theorem 3.3] states that:

(1.1)

where \(c_d\) is a known constant, d is the exterior differential and \(\star \) denotes the Hodge star operator associated to the orientation of M. The quantity on the left-hand side of (1.1) is precisely a Dixmier trace. According to Connes, this formula “shows how to pass from quantized 1-forms to ordinary forms, not by a classical limit, but by a direct application of the Dixmier trace” [9, Page 676].

When working with particular manifolds, rather than general compact manifolds, it is possible to specify with even greater precision the relationship between f and the singular values of \({\bar{d}}f\). In the one dimensional cases of the circle and the line, the appropriate choice for F turns out to be the Hilbert transform (see [10, Chapter 4, Sect. 3.\(\alpha \)]), and the commutators of pointwise multiplication operators with the Hilbert transform are very well understood. If f is a function on either the line \({{\mathbb {R}}}\) or the circle \({\mathbb {T}}\), necessary and sufficient conditions for \({\bar{d}}f\) to belong to virtually every named operator ideal are known (see the discussion at the end of Chapter 6 of [50]).

In higher dimensions (in particular \({{\mathbb {T}}}^d\) and \({\mathbb R}^d\) for \(d \ge 2\)), an appropriate choice for F is given by a linear combination of Riesz transforms [12, 41]. Commutators of pointwise multiplication operators and Riesz transforms are well studied in classical harmonic analysis, and Janson and Wolff [35] determined necessary and sufficient conditions for such a commutator to be in \({{\mathcal {L}}}_p\) for all \(p \in (0,\infty )\). An even more precise characterisation was obtained by Rochberg and Semmes [60].

If \(f \in C^\infty ({{\mathbb {T}}}^d)\), let \(\nabla f = (\partial _1f,\partial _2f,\ldots ,\partial _df)\) be the gradient vector of f, and let \(\Vert \nabla f\Vert _2 = \left( \sum _{j=1}^d |\partial _j f|^2\right) ^{\frac{1}{2}}\). Then as a special case of (1.1), we have the following:

(1.2)

where \(k_d > 0\) is a known constant, and m denotes the flat (Haar) measure on \({{\mathbb {T}}}^d\). A similar integral formula can also be obtained in the non-compact setting of \({{\mathbb {R}}}^d\) [41, Theorem 2].

Despite having been heavily studied in the commutative setting, quantum differentiability in the strictly noncommutative setting is still largely unexplored. Recently the authors established a characterisation of the \({{\mathcal {L}}}_{d,\infty }\)-ideal membership of quantised differentials for noncommutative tori [45]. The primary result of [45] is as follows. Let \(\theta \) be an antisymmetric real \(d\times d \) matrix with \(d>2\), and consider the noncommutative torus \({{\mathbb {T}}}_\theta ^d\). In this setting, there is a conventional choice of Fredholm module and an associated quantised calculus [25, Sect. 12.3]. An element \(x \in L_2({{\mathbb {T}}}^d_{\theta })\) belongs to the (noncommutative) homogeneous Sobolev space \({\dot{W}}_d^1({{\mathbb {T}}}_\theta ^d)\) if and only if its quantised differential \({\bar{d}}x\) has bounded extension in \({{\mathcal {L}}}_{d, \infty }\). The quantum torus analogue of (1.2) is also obtained as [45, Theorem 1.2]: for \(x\in {\dot{W}}_d^1({{\mathbb {T}}}_\theta ^d)\), there is a certain constant \(c_d\) such that for any continuous normalised trace \(\varphi \) on \({{\mathcal {L}}}_{1, \infty }\) we have

(1.3)

where \(\tau \) is the standard trace on the algebra \(L_\infty ({\mathbb T}^d_{\theta })\), and the integral is over the \((d-1)\)-sphere \(\mathbb {S}^{d-1}\) with respect to its rotation invariant measure ds. To the best of our knowledge, these results were the first concerning quantum differentiability in the strictly noncommutative setting.

The primary task of this paper is to determine similar results for noncommutative Euclidean spaces. A number of major obstacles make this task far more difficult than for noncommutative tori. In particular, the methods of [45] were facilitated by a well-developed theory of pseudodifferential operators on noncommutative tori [29, 30]. However, despite recent advances [24, 38, 46], the theory of pseudodifferential operators for noncommutative Euclidean spaces is still in its infancy and it is not clear how to directly adapt the existing theory to this problem. It has therefore been necessary for us to introduce new arguments based on operator theory rather than pseudodifferential operator theory (see Sect. 5).

Another difficulty with \({{\mathbb {R}}}^d_{\theta }\) compared to \({{\mathbb {T}}}^d_\theta \) is that the nature of the required analysis changes dramatically with \(\theta \). For example, the range of the canonical trace \(\tau \) on \(L_\infty ({\mathbb T}^d_\theta )\) on projections is [0, 1], while the range of the canonical trace on \(L_\infty ({{\mathbb {R}}}^d_\theta )\) on projections is \([0,\infty ]\) if \(\det (\theta )=0\), and consists of integer multiples of \((2\pi )^{d/2}|\det (\theta )|^{1/2}\) if \(\det (\theta )\ne 0\).

A noteworthy side effect of our self-contained approach is that we obtain in an abstract manner the following commutator estimates for quantum Euclidean spaces: Let \(\Delta _\theta \) be the Laplace operator associated to the noncommutative Euclidean space \({\mathbb {R}_\theta ^d}\) (see Sect. 2.2 for complete definitions). For an appropriate class of smooth elements \(x\in L_\infty ({\mathbb {R}_\theta ^d})\), if \(\alpha , \beta \in {{\mathbb {R}}}\) are such that \(\alpha < \beta +1 \), then we have

$$\begin{aligned} {[}(1-\Delta _\theta )^{\alpha /2}, x](1-\Delta _\theta )^{-\beta /2} \in {{\mathcal {L}}}_{\frac{d}{\beta -\alpha +1},\infty }. \end{aligned}$$

In the classical (commutative) case, this estimate follows almost immediately from the calculus and the mapping properties of pseudodifferential operators (see [41, Lemma 13]).

1.1 Main results on quantum differentiability

In this section we state the main results of this paper. Notation which we use here but have not yet explained will be defined in Sect. 2.

Let \(\theta \) be an antisymmetric real \(d\times d \) matrix, where \(d\ge 2\).

Our first main result provides sufficient conditions for \({\bar{d}}x \in {{\mathcal {L}}}_{d,\infty }\):

Theorem 1.1

If \(x \in L_p({\mathbb {R}_\theta ^d})\cap \dot{W}^1_d({\mathbb {R}_\theta ^d})\) for some \(d\le p < \infty \), then \({\bar{d}}x\) has bounded extension, and the extension is in \({{\mathcal {L}}}_{d,\infty }\).

The space \(\dot{W}^{1}_d({\mathbb {R}_\theta ^d})\) is a noncommutative homogeneous Sobolev space defined with respect to the partial derivatives \(\partial _j\), \(j=1,\ldots ,d\) (these notions will be defined and discussed in Sect. 3). The a priori assumption \(x \in L_p({\mathbb {R}_\theta ^d})\) for some \(d\le p < \infty \) may not be necessary; however, we have been unable to remove it. One reason for this difficulty is that there is no clear replacement for the use of the Poincaré inequality in the noncommutative situation. See Proposition 3.15.

With Theorem 1.1, we can prove our second main result, the following trace formula:

Theorem 1.2

Let \(x\in L_p({\mathbb {R}_\theta ^d})\cap \dot{W}^1_d({\mathbb {R}_\theta ^d})\) for some \(d\le p < \infty \). Then there is a constant \(c_d\) depending only on the dimension d such that for any continuous normalised trace \(\varphi \) on \({{\mathcal {L}}}_{1,\infty }\) we have:

Here, the integral over \(\mathbb {S}^{d-1}\) is taken with respect to the rotation-invariant measure ds on \(\mathbb {S}^{d-1}\), and \(s = (s_1,\ldots ,s_d)\).

Here \(\tau _{\theta }\) is the canonical trace on the algebra \(L_\infty ({\mathbb {R}_\theta ^d})\) (see Sect. 2.2). Although the above integral formula is identical in appearance to (1.3), the proof involves different techniques.

The next corollary is a direct application of Theorem 1.2. The proof is the same as [45, Corollary 1.3], so we omit the details.

Corollary 1.3

Let \(x\in L_p({\mathbb {R}_\theta ^d})\cap \dot{W}^1_d({\mathbb {R}_\theta ^d})\) for some \(d\le p < \infty \). Then there are constants \(c_d\) and \(C_d\) depending only on d such that for any continuous normalised trace \(\varphi \) on \({\mathcal L}_{1,\infty }\) we have

Since \(\varphi \) vanishes on the trace class \({{\mathcal {L}}}_1\), Corollary 1.3 immediately yields the following noncommutative version of the \(p\le d\) component of [35, Theorem 1]:

Corollary 1.4

If \(x \in L_{p}({\mathbb {R}_\theta ^d})\) for some \(d\le p < \infty \) and \({\bar{d}}x\) has bounded extension in \( {{\mathcal {L}}}_{q}\) for some \(q \le d\), then x is a constant.

As a converse to Theorem 1.1, we prove our third main result: the necessity of the condition \(x \in {\dot{W}}^1_d({\mathbb {R}_\theta ^d})\) for \({\bar{d}}x \in {{\mathcal {L}}}_{d,\infty }\).

Theorem 1.5

Suppose that \(d > 2\), and let \(x \in L_{d}({\mathbb {R}_\theta ^d})+L_\infty ({\mathbb {R}_\theta ^d})\). If \({\bar{d}}x\) has bounded extension in \({{\mathcal {L}}}_{d,\infty }\), then \(x \in {\dot{W}}^1_d({\mathbb {R}_\theta ^d})\), and there is a constant \(c_d>0\) depending only on d such that

For \(d = 2\), the same conclusion holds under the assumption that \(x \in L_\infty ({{\mathbb {R}}}^2_\theta )\).

Note that in the strictly noncommutative \(\det (\theta )\ne 0\) case, the assumed conditions on x in Theorem 1.5 are the same for \(d=2\) and \(d>2\), since \(L_d({\mathbb {R}_\theta ^d})\subset L_\infty ({\mathbb {R}_\theta ^d})\) in that case.

It is worth noting that one may consider the commutative (\(\theta = 0\)) case in Theorems 1.1, 1.2 and 1.5, and in this case the results obtained are very similar to those of [41]. The only difference is in the integrability assumptions: in [41], boundedness was assumed, while here we assume p-integrability for some \(d\le p < \infty \). Nonetheless, the proofs we give here are independent of those of [41].

1.2 Main commutator estimate

As a byproduct of the proof of Theorem 1.2, we obtain a commutator estimate on quantum Euclidean spaces. In Sect. 2.2 we will introduce a certain smooth subalgebra \({\mathcal {A}({\mathbb {R}_\theta ^d})}\) of \(L_\infty ({\mathbb {R}_\theta ^d})\) (see Proposition 2.5), and let \(J_\theta = (1-\Delta _\theta )^{1/2}\) denote the quantum Bessel potential defined in Sect. 3.

Theorem 1.6

Let \(\alpha ,\beta \in {{\mathbb {R}}}\), and let \(x \in {\mathcal {A}({\mathbb {R}_\theta ^d})}\). Then if \( \alpha < \beta +1\):

$$\begin{aligned} {[}J_\theta ^{\alpha }, x]J^{-\beta }_\theta \in {\mathcal L}_{\frac{d}{\beta -\alpha +1},\infty }. \end{aligned}$$

On the other hand if \(\alpha = \beta +1\), then the operator

$$\begin{aligned} {[}J^{\alpha }_\theta , x]J_\theta ^{-\beta } \end{aligned}$$

has bounded extension.

This estimate is to be compared with the Cwikel type estimates provided in [39]. Using the latter estimates, one can deduce that \( J_\theta ^{\alpha } xJ^{-\beta }_\theta \in {\mathcal L}_{\frac{d}{\beta -\alpha },\infty }\) and \(x J_\theta ^{\alpha -\beta } \in {{\mathcal {L}}}_{\frac{d}{\beta -\alpha },\infty }\), however showing that the difference of these two operators is in the smaller ideal \({{\mathcal {L}}}_{\frac{d}{\beta -\alpha +1},\infty }\) requires additional argument.

If we consider the classical (commutative) setting, the result of Theorem 1.6 would follow from a standard application of the pseudodifferential operator calculus: x is viewed as a pseudodifferential operator of order 0, while \(J _\theta ^{\alpha }\) is of order \(\alpha \). It follows that the commutator \([J_\theta ^{\alpha },x]\) is of order \(\alpha -1 \), and thus \([J_\theta ^{\alpha }, x]J^{-\beta }_\theta \) is of order \(\alpha -\beta -1\). From there, a short argument shows that the result of Theorem 1.6 holds (an argument of precisely this nature was used in [41, Lemma 13]). It is likely possible to carry out a similar argument in the noncommutative setting using the quantum pseudodifferential operator theory of [24]; however, we have found the direct argument to be insightful.

The layout of this paper is as follows. In Sects. 2 and 3 we introduce notation, terminology and the required background material concerning operator ideals and analysis on quantum Euclidean spaces, and we also recount some elementary properties such as the dilation action and Cwikel type estimates. Section 4 is devoted to the proof of Theorem 1.1. Section 5 concerns our proof of Theorem 1.6, and is the most technical component of the paper. The final section, Sect. 6, completes the proofs of Theorems 1.2 and 1.5.

2 Notation and Preliminary Results

We will occasionally use the notation \(A\lesssim B\) to indicate that \(A\le CB\) for some \(0 \le C < \infty \), and use subscripts to indicate dependence on constants. E.g., \(A\lesssim _d B\) means that \(A\le C_dB\) for a constant \(C_d\) depending on d.

2.1 Operators, ideals and traces

The following material is standard; for more details we refer the reader to [42, 63]. Let H be a complex separable Hilbert space, let \({{\mathcal {B}}}(H)\) denote the algebra of all bounded operators on H, and let \({{\mathcal {K}}}(H)\) denote the ideal of compact operators on H. Given \(T\in {{\mathcal {K}}}(H)\), the sequence of singular values \(\mu (T) = \{\mu (k,T)\}_{k=0}^\infty \) is defined as:

$$\begin{aligned} \mu (k,T) = \inf \{\Vert T-R\Vert \;:\;\mathrm {rank}(R) \le k\}. \end{aligned}$$

Equivalently, \(\mu (T)\) is the sequence of eigenvalues of |T| arranged in non-increasing order with multiplicities.

Let \(p \in (0,\infty ).\) The Schatten class \({{\mathcal {L}}}_p\) is the set of operators T in \({{\mathcal {K}}}(H)\) such that \(\mu (T)\) is p-summable, i.e. in the sequence space \(\ell _p\). If \(p \ge 1\) then the \({{\mathcal {L}}}_p\) norm is defined as:

$$\begin{aligned} \Vert T\Vert _p := \Vert \mu (T)\Vert _{\ell _p} = \left( \sum _{k=0}^\infty \mu (k,T)^p\right) ^{1/p}. \end{aligned}$$

With this norm \({{\mathcal {L}}}_p\) is a Banach space, and an ideal of \({{\mathcal {B}}}(H)\).

The weak Schatten class \({{\mathcal {L}}}_{p,\infty }\) is the set of operators T such that \(\mu (T)\) is in the weak \(L_p\)-space \(\ell _{p,\infty }\), with quasi-norm:

$$\begin{aligned} \Vert T\Vert _{p,\infty } = \sup _{k\ge 0}\, (k+1)^{1/p}\mu (k,T) < \infty . \end{aligned}$$

As with the \({{\mathcal {L}}}_p\) spaces, \({{\mathcal {L}}}_{p,\infty }\) is an ideal of \({{\mathcal {B}}}(H)\). We also have the following form of Hölder’s inequality,

$$\begin{aligned} \Vert TS\Vert _{r,\infty } \le c_{p,q}\Vert T\Vert _{p,\infty }\Vert S\Vert _{q,\infty } \end{aligned}$$

where \(\frac{1}{r}=\frac{1}{p}+\frac{1}{q}\), for some constant \(c_{p,q}\).
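It is also worth recording an elementary identity relating the \({{\mathcal {L}}}_{r,\infty }\) and \({{\mathcal {L}}}_{1,\infty }\) quasi-norms: since \(\mu (k,|T|^r) = \mu (k,T)^r\) for every compact operator T and every \(r>0\), we have

$$\begin{aligned} \Vert |T|^r\Vert _{1,\infty } = \sup _{k\ge 0}\,(k+1)\mu (k,T)^r = \Vert T\Vert _{r,\infty }^r. \end{aligned}$$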

An operator theoretic result which will be useful is the Araki-Lieb-Thirring inequality [1, Page 169] (see also [37, Theorem 2]) which states that if A and B are bounded operators and \(r\ge 1\), then:

$$\begin{aligned} |AB|^r \prec \prec _{\log } |A|^r|B|^r \end{aligned}$$

where \(\prec \prec _{\log }\) denotes logarithmic submajorisation. In particular this implies the following inequality for the \({\mathcal L}_{r,\infty }\) quasinorm, when \(r\ge 1\):

$$\begin{aligned} \Vert AB\Vert _{r,\infty }^r \le e\Vert |A|^r|B|^r\Vert _{1,\infty } \le e\Vert A\Vert _{\infty }^{r-1}\Vert A|B|^r\Vert _{1,\infty }. \end{aligned}$$
(2.1)

Among ideals of particular interest is \({{\mathcal {L}}}_{1,\infty }\), and we are concerned with traces on this ideal. For more details, see [42, Sect. 5.7] and [62]. A linear functional \(\varphi :{{\mathcal {L}}}_{1,\infty }\rightarrow {{\mathbb {C}}}\) is called a trace if it is unitarily invariant. That is, for all unitary operators U and all \(T\in {{\mathcal {L}}}_{1,\infty }\) we have \(\varphi (U^*TU) = \varphi (T)\). It follows that for all bounded operators B we have \(\varphi (BT)=\varphi (TB).\)

An important fact about traces is that any trace \(\varphi \) on \({{\mathcal {L}}}_{1,\infty }\) vanishes on \({{\mathcal {L}}}_1\) [42, Theorem 5.7.8]. A trace \(\varphi \) is called continuous if it is continuous with respect to the \({\mathcal L}_{1,\infty }\) quasi-norm. It is known that not all traces on \({{\mathcal {L}}}_{1,\infty }\) are continuous [43, Remark 3.1(3)]. Within the class of continuous traces on \({{\mathcal {L}}}_{1,\infty }\) there are the well-known Dixmier traces [42, Chapter 6].

Finally, we say that a trace \(\varphi \) on \({{\mathcal {L}}}_{1,\infty }\) is normalised if \(\varphi \) takes the value 1 on any compact positive operator with eigenvalue sequence \(\{\frac{1}{n+1}\}_{n=0}^\infty \) (any two such operators are unitarily equivalent, and so the particular choice of operator is inessential).
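To illustrate, one standard construction (see [42, Chapter 6]) runs as follows: if \(\omega \) is an extended limit on \(\ell _\infty ({{\mathbb {N}}})\) which is dilation invariant in the sense of [42, Definition 6.2.4], then setting

$$\begin{aligned} {\mathrm {Tr}}_\omega (T) := \omega \Big (\Big \{\frac{1}{\log (n+2)}\sum _{k=0}^{n}\mu (k,T)\Big \}_{n\ge 0}\Big ),\quad 0\le T \in {{\mathcal {L}}}_{1,\infty }, \end{aligned}$$

and extending by linearity yields a continuous normalised trace (a Dixmier trace) on \({{\mathcal {L}}}_{1,\infty }\).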

2.2 Quantum Euclidean spaces

2.2.1 Heuristic motivation.

The original motivation for noncommutative Euclidean spaces begins with the canonical commutation relations of quantum mechanics. Let \(\theta \) be a fixed antisymmetric \(d\times d\) matrix. We consider the associative \(*\)-algebra with d self-adjoint generators \(\{x_1,\ldots ,x_d\}\) satisfying the relation:

$$\begin{aligned} {[}x_j,x_k] = \mathrm{{i}}\theta _{j,k},\quad 1\le j,k\le d. \end{aligned}$$
(2.2)

These operators may be thought of as coordinates of some fictitious noncommutative d-dimensional space.

At a purely formal level, if one defines:

$$\begin{aligned} U(t) := \exp (\mathrm{{i}}(t_1x_1+t_2x_2+\cdots +t_dx_d)),\quad t \in \mathbb {R}^d, \end{aligned}$$

and formally applies the Baker-Campbell-Hausdorff formula, one is led to the following identity:

$$\begin{aligned} U(t)U(s) = \exp (\frac{\mathrm{{i}}}{2}( t,\theta s))U(t+s),\quad t,s \in \mathbb {R}^d. \end{aligned}$$
(2.3)

The above relation is often called the Weyl form of the canonical commutation relations, and its representation theory is summarised by the well-known Stone-von Neumann theorem: provided that \(\det (\theta )\ne 0\), any two \(C^*\)-algebras generated by a strongly continuous unitary family \(\{U(t)\}_{t\in {{\mathbb {R}}}^d}\) satisfying (2.3) are \(*\)-isomorphic [4, Sect. 5.2.2.2], [32, Theorem 14.8], [68, Chapter 2, Theorem 3.1].

After fixing a concrete Hilbert space representation of (2.3), we will define \(L_\infty ({\mathbb {R}_\theta ^d})\) as the von Neumann algebra generated by \(\{U(t)\}_{t \in {{\mathbb {R}}}^d}\).

2.2.2 Formal definition and elementary properties.

Noncommutative Euclidean spaces admit several equivalent definitions; here we follow the approach of [39], where the authors define \(L_\infty ({\mathbb {R}_\theta ^d})\) as a twisted group von Neumann algebra and then define function spaces on \({\mathbb {R}_\theta ^d}\) as operator spaces associated to that algebra. We refer the reader to [39] for more details on this approach, and give a brief introduction here. Alternative yet unitarily equivalent approaches to the definition of noncommutative Euclidean space may also be found in the literature; see [4, Sect. 5.2.2.2], [22, 24] and [32, Chapter 14].

Define the following family of unitary operators on \(L_2({\mathbb R}^d)\):

$$\begin{aligned} (U(t) \xi )(r) = e^{-\frac{\mathrm{{i}}}{2} ( t, \theta r ) } \xi (r-t), \quad \xi \in L_2({{\mathbb {R}}}^d) , \; r, t \in {{\mathbb {R}}}^d. \end{aligned}$$
(2.4)

It is easily verified that the family \(\{U(t)\}_{t\in {\mathbb R}^d}\) is strongly continuous, and satisfies the Weyl relation (2.3). We will write \(U_\theta \) when there is need to refer to the dependence on the matrix \(\theta \).

Definition 2.1

Let \(d\in {{\mathbb {N}}}\) and \(\theta \) be a fixed antisymmetric real \(d\times d\) matrix. The von Neumann subalgebra of \({\mathcal B}(L_2({{\mathbb {R}}}^d))\) generated by \(\{U(t)\}_{t\in {{\mathbb {R}}}^d}\) given in (2.4) is called a noncommutative Euclidean space, denoted by \(L_\infty ({\mathbb {R}_\theta ^d})\).

Taking \(\theta =0 \), this definition states that \(L_\infty ({\mathbb R}^d_0)\) is the von Neumann algebra generated by the unitary group of translations on \({{\mathbb {R}}}^d\), and this is \(*\)-isomorphic to \(L_\infty ({{\mathbb {R}}}^d)\). Therefore the algebra of essentially bounded functions on Euclidean space is recovered as a special case of Definition 2.1.

Remark 2.2

We caution the reader that the approach taken here is the “Fourier dual” of the approach in [22]. In the commutative case, U(t) is the operator on \(L_2({{\mathbb {R}}}^d)\) of translation by \(t \in {{\mathbb {R}}}^d\), and the Fourier transform provides an isomorphism with the algebra \(L_\infty ({{\mathbb {R}}}^d)\) of essentially bounded functions acting by pointwise multiplication.

The algebraic structure of \(L_\infty ({\mathbb {R}_\theta ^d})\) is determined by the dimension of the kernel of \(\theta \). If \(d=2\) and \(\theta \ne 0\), then up to an orthogonal conjugation \(\theta \) may be written as

$$\begin{aligned} \theta = \hbar \begin{pmatrix} 0 &{}\quad -1 \\ 1 &{}\quad 0 \end{pmatrix} \end{aligned}$$
(2.5)

for some constant \(\hbar > 0\). With \(\theta \) given as above, the algebra \(L_\infty ({\mathbb {R}_\theta ^d})\) is \(*\)-isomorphic to the algebra of bounded linear operators on \(L_2({{\mathbb {R}}})\). A \(*\)-isomorphism can be given explicitly by:

$$\begin{aligned} U(t) \mapsto \exp (\mathrm{{i}}t_1M_x+\mathrm{{i}}t_2\hbar \partial _x), \end{aligned}$$

where \(M_x\xi (t) = t\xi (t)\) for \(\xi \in L_2({{\mathbb {R}}})\) and \(\partial _x\xi = \xi '\) is the differentiation operator.

When \(d \ge 2\), we may up to orthogonal conjugation express an arbitrary \(d\times d\) antisymmetric real matrix as a direct sum of a zero matrix and matrices of the form (2.5) (see Sect. 6 of [39]), ultimately leading to the following \(*\)-isomorphism:

$$\begin{aligned} L_\infty ({\mathbb {R}_\theta ^d}) \cong L_\infty ({\mathbb R}^{\dim (\ker (\theta ))}){\overline{{\otimes }}} {{\mathcal {B}}}(L_2({\mathbb R}^{\mathrm {rank}(\theta )/2})) \end{aligned}$$
(2.6)

where \({\overline{{\otimes }}}\) is the von Neumann algebra tensor product. See [27] for detailed information about this isomorphism.

In the case where \(\det (\theta )\ne 0\), (2.6) reduces to:

$$\begin{aligned} L_\infty ({\mathbb {R}_\theta ^d}) \cong {{\mathcal {B}}}(L_2({{\mathbb {R}}}^{d/2})). \end{aligned}$$
(2.7)

2.2.3 Weyl quantisation.

Let \(f \in L_1({{\mathbb {R}}}^d)\). We will define \(U(f) \in L_\infty ({\mathbb {R}_\theta ^d})\) as the operator given by the absolutely convergent Bochner integral:

$$\begin{aligned} U(f)\xi = \int _{{{\mathbb {R}}}^d} f(t)U(t)\xi \,dt,\quad \xi \in L_2({{\mathbb {R}}}^d). \end{aligned}$$

It should be verified first that the above integral indeed exists in the Bochner sense, and secondly that \(U(f) \in L_\infty ({\mathbb {R}_\theta ^d})\) as claimed.

Lemma 2.3

For \(f \in L_1({{\mathbb {R}}}^d)\), the integral:

$$\begin{aligned} U(f)\xi = \int _{{{\mathbb {R}}}^d} f(t)U(t)\xi \,dt,\quad \xi \in L_2({{\mathbb {R}}}^d) \end{aligned}$$

is absolutely convergent in the Bochner sense, and defines a bounded linear operator \(U(f):L_2({{\mathbb {R}}}^d)\rightarrow L_2({{\mathbb {R}}}^d)\) such that \(U(f) \in L_\infty ({\mathbb {R}_\theta ^d})\).

Proof

Recall that \(t\mapsto U(t)\) is strongly continuous. It follows that for all \(\eta ,\xi \in L_2({{\mathbb {R}}}^d)\), the scalar-valued function \(t\mapsto f(t)\langle \eta ,U(t)\xi \rangle \) is measurable. Since \(L_2({{\mathbb {R}}}^d)\) is separable, the Pettis measurability theorem [19, Theorem II.1.2], [34, Theorem 1.19] implies that for all \(\xi \in L_2({{\mathbb {R}}}^d)\) the function \(t\mapsto f(t)U(t)\xi \) is measurable in the \(L_2({\mathbb R}^d)\)-valued Bochner sense.

Since \(\Vert f(t)U(t)\xi \Vert _{L_2({{\mathbb {R}}}^d)} \le |f(t)|\Vert \xi \Vert _{L_2({{\mathbb {R}}}^d)}\) and \(f \in L_1({{\mathbb {R}}}^d)\), the integrand is absolutely integrable, and this proves the claim that the integral is absolutely convergent in the Bochner sense.

To see that \(\xi \mapsto U(f)\xi \) is a bounded operator, one simply applies the triangle inequality for the Bochner integral to obtain

$$\begin{aligned} \Vert U(f)\xi \Vert _{L_2({{\mathbb {R}}}^d)} \le \Vert f\Vert _{L_1({\mathbb R}^d)}\Vert \xi \Vert _{L_2({{\mathbb {R}}}^d)} \end{aligned}$$

so that \(U(f) \in {{\mathcal {B}}}(L_2({{\mathbb {R}}}^d))\). Finally, to see that \(U(f) \in L_\infty ({\mathbb {R}_\theta ^d})\) we will use von Neumann’s bicommutant theorem. Suppose that \(X \in {{\mathcal {B}}}(L_2({{\mathbb {R}}}^d))\) is a bounded linear operator which commutes with every member of the family \(\{U(t)\}_{t \in {{\mathbb {R}}}^d}\). Since X is bounded, it can be moved under the integration sign:

$$\begin{aligned} XU(f)\xi = X\int _{{{\mathbb {R}}}^d}f(t)U(t)\xi \,dt = \int _{{\mathbb R}^d} f(t)XU(t)\xi \,dt = U(f)X\xi . \end{aligned}$$

Hence X commutes with U(f). That is, U(f) commutes with every operator in the commutant of \(\{U(t)\}_{t \in {{\mathbb {R}}}^d}\), so it follows from the bicommutant theorem that \(U(f)\in L_\infty ({\mathbb {R}_\theta ^d})\). \(\quad \square \)

We will denote \(U = U_\theta \) when there is a need to refer to the dependence on \(\theta \). The map U has other names and notations in the literature: for example composing U with the Fourier transform determines a mapping \({{\mathcal {S}}}({{\mathbb {R}}}^d)\rightarrow {\mathcal B}(L_2({{\mathbb {R}}}^{d/2}))\) which is also known as the Weyl quantisation map [32, Sect. 13.3]. In the \(\det (\theta ) \ne 0\) case, the map U is also essentially the same as the so-called Weyl transform [68, Page 138]. In [24], the map denoted there \(\lambda _\theta \) is very similar to U, the only difference being that \(U(t_1e_1)U(t_2e_2)\cdots U(t_de_d)\) is used in place of U(t).

Assume now that \(f \in {{\mathcal {S}}}({{\mathbb {R}}}^d)\). For \(\xi \in {{\mathcal {S}}}({{\mathbb {R}}}^d)\), by the definition of U(t) we have:

$$\begin{aligned} (U(f)\xi )(s) = \int _{{{\mathbb {R}}}^d} f(t)e^{-\frac{\mathrm{{i}}}{2}(t,\theta s)}\xi (s-t)\,dt. \end{aligned}$$
(2.8)

Since \(\xi \) is continuous, it is easy to see that \((U(f)\xi )(s)\) is continuous as a function of s. Evaluating \((U(f)\xi )(s)\) at \(s = 0\) yields:

$$\begin{aligned} (U(f)\xi )(0) = \int _{{{\mathbb {R}}}^d} f(t)\xi (-t)\,dt. \end{aligned}$$

Hence, if \(U(f) = U(g)\) for \(f,g \in L_1({{\mathbb {R}}}^d)\), it follows that:

$$\begin{aligned} \int _{{{\mathbb {R}}}^d} (f(t)-g(t))\xi (-t)\,dt = 0 \end{aligned}$$

for all \(\xi \in {{\mathcal {S}}}({{\mathbb {R}}}^d)\), and thus \(f = g\) pointwise almost everywhere. It follows that U is injective.

The class of Schwartz functions on \({\mathbb {R}_\theta ^d}\) is defined as the image of \({{\mathcal {S}}}({{\mathbb {R}}}^d)\) under U. That is,

$$\begin{aligned} {\mathcal {S}({\mathbb {R}_\theta ^d})}:= \{ x\in L_\infty ({\mathbb {R}_\theta ^d}): x= \int _{{{\mathbb {R}}}^d} f(s) U(s) ds ,\;\;\text{ for } \text{ some }\; f\in {{\mathcal {S}}}({\mathbb R}^d)\}. \end{aligned}$$
(2.9)

The Schwartz space \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) is equipped with the topology induced by the isomorphism \(U:{{\mathcal {S}}}({\mathbb R}^d)\rightarrow {\mathcal {S}({\mathbb {R}_\theta ^d})}\), where \({{\mathcal {S}}}({{\mathbb {R}}}^d)\) is equipped with its canonical Fréchet topology. It is important to note that the Fréchet topology of \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) is finer than the \(L_p({\mathbb {R}_\theta ^d})\) topology for every \(1\le p \le \infty \). This follows, for example, from Proposition 2.10 below.

It is worth emphasising that in the nondegenerate case (\(\det (\theta )\ne 0\)), the noncommutativity of \(L_\infty ({\mathbb {R}_\theta ^d})\) implies that \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) has a number of properties quite unlike the classical Schwartz space \({{\mathcal {S}}}({{\mathbb {R}}}^d)\) (for example, see Theorem 2.4 below). In terms of the isomorphism (2.7), it is possible to select a specific basis such that \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) is an algebra of infinite matrices whose entries have rapid decay ( [26, Theorem 6] and [56, Theorem 6.11]). While we will not need the specific details of the matrix description, we do make use of the following result, which is [22, Lemma 2.4].

Theorem 2.4

Assume that \(\det (\theta )\ne 0\). There exists a sequence \(\{p_n\}_{n\ge 0} \subset {\mathcal {S}({\mathbb {R}_\theta ^d})}\) such that:

  1. (i)

    Each \(p_n\) is a projection of rank n (considered as an operator on \(L_2({{\mathbb {R}}}^{d/2})\), via (2.7)).

  2. (ii)

    We have that \(p_n\uparrow 1\), where 1 is the identity operator in \(L_\infty ({\mathbb {R}_\theta ^d})\).

  3. (iii)

    \(\bigcup _{n\ge 0} p_n L_\infty ({\mathbb {R}_\theta ^d}) p_n\) is dense in \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) in its Fréchet topology.

The presence of smooth projections is a feature of analysis on quantum Euclidean spaces in the \(\det (\theta )\ne 0\) case entirely distinct from analysis on Euclidean space. For our purposes we do not need to know the precise form of the sequence \(\{p_n\}_{n\ge 0}\), however a description using the map U may be found in [22, Sect. 2].

One feature of the Schwartz class \({{\mathcal {S}}}({{\mathbb {R}}}^d)\) is factorisability: that is, every \(f \in {{\mathcal {S}}}({\mathbb R}^d)\) can be obtained as a product \(f = gh\) for \(g,h \in {\mathcal S}({{\mathbb {R}}}^d)\) (see e.g. [72]). There is a similar result in the case of \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) when \(\det (\theta )\ne 0\). For the mixed case, where \(\theta \ne 0\) but \(\det (\theta )=0\), the situation is less clear. We have found it more convenient to pass to a subalgebra of \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) for which we can verify (a very minor weakening of) the factorisation property.

Proposition 2.5

There is a dense \(*\)-subalgebra \({\mathcal {A}({\mathbb {R}_\theta ^d})}\subseteq {\mathcal {S}({\mathbb {R}_\theta ^d})}\) such that every \(x\in {\mathcal {A}({\mathbb {R}_\theta ^d})}\) can be expressed as a finite linear combination of products of elements of \({\mathcal {A}({\mathbb {R}_\theta ^d})}\). That is, \(x= \sum _{j=1}^n y_jz_j \) where each \(y_j, z_j \in {\mathcal {A}({\mathbb {R}_\theta ^d})}\).

Proof

In the case \(\det (\theta )\ne 0\), this result is provided by [26, pg. 877]. In the commutative (\(\theta =0\)) case, this is a classical result of harmonic analysis (see e.g. [72]).

Performing a change of variables if necessary, we assume that \(\theta \) is of the form:

$$\begin{aligned} \theta = \begin{pmatrix} 0 &{}\quad 0 \\ 0 &{}\quad \theta ' \end{pmatrix} \end{aligned}$$

where \(\det (\theta ')\ne 0\). Let \(d_1 = \dim (\ker (\theta ))\). If \(\det (\theta )\ne 0\), then we do not need to change variables.

Let \(f\in {{\mathcal {S}}}({{\mathbb {R}}}^{d_1})\) and \(g \in {\mathcal S}({{\mathbb {R}}}^{d-d_1})\), and let \(f\otimes g\) denote the function on \({{\mathbb {R}}}^d\) given by:

$$\begin{aligned} (f\otimes g)(t_1,\ldots ,t_d) = f(t_1,\ldots ,t_{d_1})g(t_{d_1+1},\ldots ,t_{d}),\quad t \in {\mathbb R}^d. \end{aligned}$$

Then it follows readily from the definition that:

$$\begin{aligned} U_{\theta }(f\otimes g) = U_0(f)U_{\theta '}(g). \end{aligned}$$

Every Schwartz class function \(\phi \in {{\mathcal {S}}}({{\mathbb {R}}}^d)\) can be written as an infinite linear combination:

$$\begin{aligned} \phi = \sum _{j=0}^\infty \lambda _j f_j\otimes g_j \end{aligned}$$

where \(\{f_j\}_{j=0}^\infty \) and \(\{g_j\}_{j=0}^\infty \) are vanishing sequences in \({{\mathcal {S}}}({{\mathbb {R}}}^{d_1})\) and \({{\mathcal {S}}}({{\mathbb {R}}}^{d-d_1})\) respectively, and \(\sum _{j=0}^\infty |\lambda _j| < \infty \) (see [69, Theorem 45.1, Theorem 51.6]).

It follows that every \(x \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\) can be written as a convergent series

$$\begin{aligned} x = \sum _{j=0}^\infty \lambda _jU_0(f_j)U_{\theta '}(g_j) \end{aligned}$$
(2.10)

for a summable sequence \(\{\lambda _j\}_{j=0}^\infty \).

We will define \({\mathcal {A}({\mathbb {R}_\theta ^d})}\) as the algebraic tensor product:

$$\begin{aligned} {\mathcal {A}({\mathbb {R}_\theta ^d})}= {{\mathcal {S}}}({{\mathbb {R}}}^{d_1})\otimes {{\mathcal {S}}}({\mathbb R}^{d-d_1}_{\theta '}). \end{aligned}$$

That is, we define \({\mathcal {A}({\mathbb {R}_\theta ^d})}\) to be the algebra of finite linear combinations of elements of the form \(U_0(f)U_{\theta '}(g)\), where \(f \in {{\mathcal {S}}}({{\mathbb {R}}}^{d_1})\) and \(g \in {\mathcal S}({{\mathbb {R}}}^{d-d_1})\).

Then \({\mathcal {A}({\mathbb {R}_\theta ^d})}\) clearly has the desired factorisation property, since \({{\mathcal {S}}}({{\mathbb {R}}}^{d_1})\) and \({{\mathcal {S}}}({\mathbb R}^{d-d_1}_{\theta '})\) do, and the density of \({\mathcal {A}({\mathbb {R}_\theta ^d})}\) in \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) follows from the expansion (2.10). \(\quad \square \)

From now on, we fix \({\mathcal {A}({\mathbb {R}_\theta ^d})}\) to be the dense subalgebra of \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) constructed in the proof of Proposition 2.5.

For \(f, g \in {{\mathcal {S}}}({{\mathbb {R}}}^d)\), we compute

$$\begin{aligned}\begin{aligned} U(f)^*&= \int _{{{\mathbb {R}}}^d} \overline{ f(s)} U(s)^* ds = \int _{{{\mathbb {R}}}^d} \overline{ f(s)} \, U(-s) ds \\&= \int _{{{\mathbb {R}}}^d} \overline{ f(-s)} \, U(s) ds\,, \end{aligned} \end{aligned}$$

and

$$\begin{aligned}\begin{aligned} U(f) U(g)&= \int _{{{\mathbb {R}}}^d} f(s) U(s) ds\,\cdot \int _{{{\mathbb {R}}}^d} g(t) U(t) dt \\&= \int _{{{\mathbb {R}}}^d} \int _{{{\mathbb {R}}}^d} f(s) g(t) \, e^{\frac{\mathrm{{i}}}{2} ( s , \theta t) } U(s+t) dt ds \\&= \int _{{{\mathbb {R}}}^d} \int _{{{\mathbb {R}}}^d} f(s-t) g(t) \, e^{\frac{\mathrm{{i}}}{2} ( s , \theta t) } U(s) dt ds \\&= \int _{{{\mathbb {R}}}^d} \int _{{{\mathbb {R}}}^d} e^{\frac{\mathrm{{i}}}{2} ( s , \theta t) } f(s-t) g(t) dt\, U(s) ds . \end{aligned} \end{aligned}$$

For this reason, we define the \(\theta \)-involution as

$$\begin{aligned} f^\theta (s)= \overline{ f(-s)} , \end{aligned}$$
(2.11)

and the \(\theta \)-convolution as

$$\begin{aligned} f*_\theta g(s)= \int _{{{\mathbb {R}}}^d} e^{\frac{\mathrm{{i}}}{2} ( s , \theta t) } f(s-t) g(t) dt. \end{aligned}$$
(2.12)

Then, the above calculation shows immediately \(U(f)^* = U(f^\theta )\), and

$$\begin{aligned} U(f) \, U(g) = U(f*_\theta g). \end{aligned}$$
(2.13)

It is straightforward to verify that \({{\mathcal {S}}}({\mathbb R}^d)*_\theta {{\mathcal {S}}}({{\mathbb {R}}}^d)\subseteq {\mathcal S}({{\mathbb {R}}}^d)\). The \(\theta \)-convolution \(*_\theta \) is essentially the same as the twisted convolution of [26, Definition 1], where it was the basis for an alternative definition of \({{\mathcal {S}}}({\mathbb {R}_\theta ^d})\) (as was done in [22]).
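For example, when \(\theta = 0\) the phase factor in (2.12) is identically equal to 1, so that \(*_0\) is the ordinary convolution on \({{\mathbb {R}}}^d\),

$$\begin{aligned} f*_0 g(s) = \int _{{{\mathbb {R}}}^d} f(s-t)g(t)\,dt = (f*g)(s), \end{aligned}$$

and (2.13) reduces to the familiar identity \(U_0(f)U_0(g) = U_0(f*g)\) for the group von Neumann algebra of \({{\mathbb {R}}}^d\).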

2.2.4 Measure and integration for \({\mathbb {R}_\theta ^d}\).

There is a canonical semifinite normal trace \(\tau _{\theta }\) on \(L_\infty ({\mathbb {R}_\theta ^d})\), essentially defined so that in the isomorphism (2.6), \(\tau _{\theta }\) corresponds to integration with respect to the Lebesgue measure on the commutative part and is the canonical operator trace \({\mathrm {tr}}\) on the noncommutative part.

Definition 2.6

If \(x \in {{\mathcal {S}}}({\mathbb {R}_\theta ^d})\) is given by \(x = U(f)\) for \(f \in {{\mathcal {S}}}({{\mathbb {R}}}^d)\), we define \(\tau _{\theta }(x)\) as:

$$\begin{aligned} \tau _{\theta }(x) = (2\pi )^df(0). \end{aligned}$$

Since U is injective, \(\tau _\theta \) is indeed well-defined. The factor of \((2\pi )^d\) is inserted so that \(\tau _{\theta }\) recovers the Lebesgue integral when \(\theta = 0\) in the following sense: let \(\iota \) denote the map:

$$\begin{aligned} {{\mathcal {S}}}({{\mathbb {R}}}^d_0)\rightarrow {{\mathcal {S}}}({{\mathbb {R}}}^d) \end{aligned}$$

given by:

$$\begin{aligned} U(f) \mapsto \big (s\mapsto \int _{{{\mathbb {R}}}^d} f(\xi )\exp (\mathrm{{i}}(s,\xi ))\,d\xi \big ). \end{aligned}$$

Then if \({\widehat{f}}\) denotes the Fourier transform of \(f \in {{\mathcal {S}}}({{\mathbb {R}}}^d)\), we have

$$\begin{aligned} \iota (U({\widehat{f}} )) = (2\pi )^{d/2}f. \end{aligned}$$

Since \(\int _{{{\mathbb {R}}}^d} f(s)\,ds = (2\pi )^{d/2}{\widehat{f}}(0)\), the integral of \(\iota (U({\widehat{f}}))\) is \((2\pi )^d{\widehat{f}}(0)\), which is precisely \(\tau _{\theta }(U({\widehat{f}}))\).

Lemma 2.7

The functional \(\tau _\theta :{\mathcal {S}({\mathbb {R}_\theta ^d})}\rightarrow {{\mathbb {C}}}\) admits an extension to a semifinite normal trace on \(L_\infty ({\mathbb {R}_\theta ^d})\). If \(\theta = \begin{pmatrix}0 &{}\quad 0 \\ 0 &{} \quad \theta '\end{pmatrix}\) where \(\det (\theta ')\ne 0\) then in terms of the isomorphism (2.6) we have:

$$\begin{aligned} \tau _{\theta } = \left( \int _{{\mathbb R}^{\dim (\ker (\theta ))}}dt\right) \otimes (2\pi )^{\mathrm {rank}(\theta )/2}|\det (\theta ')|^{1/2}{\mathrm {tr}} \end{aligned}$$

where \({\mathrm {tr}}\) is the classical trace on \({\mathcal B}(L_2({{\mathbb {R}}}^{\mathrm {rank}(\theta )/2}))\).

When \(\det (\theta )\ne 0\), we have:

$$\begin{aligned} \tau _{\theta }(U(f)) = (2\pi )^{d/2}|\det (\theta )|^{1/2}{\mathrm {tr}}(U(f)),\quad f \in {{\mathcal {S}}}({{\mathbb {R}}}^d). \end{aligned}$$
(2.14)

Hence in the \(\det (\theta )\ne 0\) case the range of \(\tau _{\theta }\) on projections consists of integer multiples of \((2\pi )^{d/2}|\det (\theta )|^{1/2}\). On the other hand, when \(\det (\theta )=0\) then the range of \(\tau _\theta \) on projections is \([0,\infty ]\).

For \(0< p <\infty \), the space \(L_p({\mathbb {R}_\theta ^d})\) is defined to be the noncommutative \({{\mathcal {L}}}_p\)-space associated to the von Neumann algebra \(L_\infty ({\mathbb {R}_\theta ^d})\). If we define:

$$\begin{aligned} N_p := \{ x\in L_\infty ({\mathbb {R}_\theta ^d}) : \tau _\theta (|x|^p) <\infty \} \end{aligned}$$

then the \(L_p\) space \(L_p({\mathbb {R}_\theta ^d})\) is defined as the completion of \(N_p\) with the (quasi)norm \(\Vert x\Vert _p = \tau _\theta (|x|^p) ^{1/p}\). This is a norm when \(p\ge 1\).

When \(\det (\theta )\ne 0\), since \(L_\infty ({\mathbb {R}_\theta ^d})\) is \(*\)-isomorphic to the algebra \({{\mathcal {B}}}(L_2({{\mathbb {R}}}^{d/2}))\) and \(\tau _\theta \) is a rescaling of the classical trace, the spaces \(L_p({\mathbb {R}_\theta ^d})\) are precisely the Schatten \({{\mathcal {L}}}_p\)-classes. Then in the nondegenerate case we have immediately \(L_p ({\mathbb {R}_\theta ^d}) \subset L_q({\mathbb {R}_\theta ^d})\) when \(p<q\), i.e.,

$$\begin{aligned} c_{\theta }\Vert x\Vert _q \le \Vert x\Vert _p,\quad x \in L_p({\mathbb {R}_\theta ^d}). \end{aligned}$$
(2.15)

for some constant \(c_{\theta }\). This is in great contrast to the classical case, where \(L_p({{\mathbb {R}}}^d)\) is not contained in \(L_q({{\mathbb {R}}}^d)\) for \(p\ne q\).

The preceding computations immediately yield that the mapping \((2\pi )^{-d/2}U\) extends to an isometry from \(L_2({{\mathbb {R}}}^d)\) to \(L_2({\mathbb {R}_\theta ^d})\) [68, Chapter 2, Lemma 3.1].

Proposition 2.8

Let \(f\in {{\mathcal {S}}}({{\mathbb {R}}}^d)\). Then we have

$$\begin{aligned} \Vert U(f)\Vert _2 = (2\pi )^{d/2}\Vert f\Vert _2\,. \end{aligned}$$
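Indeed, this is an instance of the preceding computations: by (2.11) and (2.13) we have \(U(f)^*U(f) = U(f^\theta *_\theta f)\), and

$$\begin{aligned} (f^\theta *_\theta f)(0) = \int _{{{\mathbb {R}}}^d} f^\theta (-t)f(t)\,dt = \int _{{{\mathbb {R}}}^d} |f(t)|^2\,dt, \end{aligned}$$

so that Definition 2.6 gives \(\Vert U(f)\Vert _2^2 = \tau _\theta (U(f)^*U(f)) = (2\pi )^d\Vert f\Vert _2^2\).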

Proposition 2.8 permits us to extend the domain of U from \(L_1({{\mathbb {R}}}^d)\) to \(L_1({{\mathbb {R}}}^d)+L_2({\mathbb R}^d)\).

Remark 2.9

It follows from Proposition 2.8 that the Schwartz class \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) is dense in \(L_2({\mathbb {R}_\theta ^d})\). Indeed, \((2\pi )^{-d/2}U\) effects an isometric isomorphism between \(L_2({{\mathbb {R}}}^d)\) and \(L_2({\mathbb {R}_\theta ^d})\), and since the classical Schwartz space \({{\mathcal {S}}}({{\mathbb {R}}}^d)\) is dense in \(L_2({{\mathbb {R}}}^d)\) the density of \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) in \(L_2({\mathbb {R}_\theta ^d})\) follows.

The following inequality may be thought of as the quantum Euclidean analogue of the Hausdorff-Young inequality.

Proposition 2.10

Let \(1\le p \le 2\) with \(\frac{1}{p} + \frac{1}{q} =1\). Then for every \(f \in L_p({{\mathbb {R}}}^d)\cap L_1({{\mathbb {R}}}^d)\), we have \(U(f)\in L_q({\mathbb {R}_\theta ^d})\), and

$$\begin{aligned} \Vert U(f)\Vert _q \le (2\pi )^{d/2}\Vert f \Vert _p \end{aligned}$$

and hence U has continuous extension from \(L_p({{\mathbb {R}}}^d)\) to \(L_q({\mathbb {R}_\theta ^d})\).

Proof

First consider the case \(p=1\) and \(q = \infty \). If \(\xi \in L_2({{\mathbb {R}}}^d)\), the triangle inequality for the Bochner integral gives us:

$$\begin{aligned} \Vert U(f)\xi \Vert _2 = \Vert \int _{{{\mathbb {R}}}^d} f(s) U(s)\xi ds \Vert _2 \le \Vert f\Vert _1\Vert \xi \Vert _2 \le (2\pi )^{d/2}\Vert f\Vert _1\Vert \xi \Vert _2 \end{aligned}$$

for all \(f \in L_1({{\mathbb {R}}}^d)\), and therefore,

$$\begin{aligned} \Vert U(f)\Vert _\infty \le (2\pi )^{d/2}\Vert f\Vert _1. \end{aligned}$$

The case \(p=2\) is provided by Proposition 2.8:

$$\begin{aligned} \Vert U(f)\Vert _2 = (2\pi )^{d/2}\Vert f\Vert _2. \end{aligned}$$

We may deduce the result for all \(1\le p\le 2\) by using complex interpolation for the couples \((L_1({{\mathbb {R}}}^d),L_2({\mathbb R}^d))\) and \((L_\infty ({\mathbb {R}_\theta ^d}),L_2({\mathbb {R}_\theta ^d}))\). The complex interpolation method for the latter couple is covered by the standard theory of interpolation of noncommutative \(L_p\)-spaces (see e.g. [52]). \(\quad \square \)

3 Calculus on \({\mathbb {R}_\theta ^d}\)

Now let us recall the differential structure on \({\mathbb {R}_\theta ^d}\). Let \({{\mathcal {D}}}_k\), \(1\le k \le d \) be the multiplication operators

$$\begin{aligned} ({{\mathcal {D}}}_k \xi )(r)= r_k \xi (r),\quad r \in {{\mathbb {R}}}^d \end{aligned}$$

defined on the domain \(\mathrm{{dom\,}} {{\mathcal {D}}}_k = \{\xi \in L_2({{\mathbb {R}}}^d): \xi \in L_2({{\mathbb {R}}}^d , r_k^2 dr)\}\). Fixing \(s\in {{\mathbb {R}}}^d\), it is easy to see that the unitary generator U(s) preserves \({\mathrm {dom}}({{\mathcal {D}}}_k)\), and we may compute:

$$\begin{aligned} {[}{{\mathcal {D}}}_k , U(s)] = s_k U(s),\quad \text{ and } \quad e^{\mathrm{{i}}t {{\mathcal {D}}}_k } U(s) e^{-\mathrm{{i}}t {{\mathcal {D}}}_k } = e^{\mathrm{{i}}t s_k } U(s) \in L_\infty ({\mathbb {R}_\theta ^d}), \quad t>0. \end{aligned}$$
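The first of these identities may be checked directly from (2.4): for \(\xi \in {\mathrm {dom}}({{\mathcal {D}}}_k)\),

$$\begin{aligned} ({{\mathcal {D}}}_k U(s)\xi )(r) - (U(s){{\mathcal {D}}}_k\xi )(r) = \big (r_k - (r_k - s_k)\big )\, e^{-\frac{\mathrm{{i}}}{2} ( s, \theta r ) }\, \xi (r-s) = s_k (U(s)\xi )(r), \end{aligned}$$

and the second identity follows by a similar direct computation.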

For general \(x\in L_\infty ({\mathbb {R}_\theta ^d})\), if \([{{\mathcal {D}}}_k , x ] \) extends to a bounded operator on \(L_2({{\mathbb {R}}}^d)\), then we can write

$$\begin{aligned} {[}{{\mathcal {D}}}_k , x ] =\lim _{t{\rightarrow }0 } \frac{e^{\mathrm{{i}}t {\mathcal D}_k } x e^{-\mathrm{{i}}t {{\mathcal {D}}}_k }- x }{\mathrm{{i}}t } \end{aligned}$$

with respect to the strong operator topology, and therefore \([{{\mathcal {D}}}_k , x] \in L_\infty ({\mathbb {R}_\theta ^d})\) (see [39, Proposition 6.12]). This operator \([{{\mathcal {D}}}_k , x]\) is then defined to be the derivative \(\partial _k x\) of \(x\in L_\infty ({\mathbb {R}_\theta ^d})\). Evidently, \(\partial _k\) anti-commutes with the adjoint operation:

$$\begin{aligned} \partial _k x^* = {{\mathcal {D}}}_k x^* - x^* {{\mathcal {D}}}_k = - [{{\mathcal {D}}}_k , x ]^* = -(\partial _k x)^* . \end{aligned}$$

For a multi-index \({\alpha }\in {{\mathbb {N}}}_0^d\) and \(x\in L_\infty ({\mathbb {R}_\theta ^d})\), the mixed partial derivative \(\partial ^{\alpha }x\) is defined by iterating this construction,

$$\begin{aligned} \partial ^{\alpha }x = \partial _1^{{\alpha }_1}\partial _2^{{\alpha }_2}\cdots \partial _d^{{\alpha }_d} x, \end{aligned}$$

provided that each of the iterated commutators with the operators \({{\mathcal {D}}}_k\), \(k=1,\ldots ,d\), arising in this expression extends to a bounded operator on \(L_2({{\mathbb {R}}}^d)\).

If \(\partial ^{{\alpha }}x\) is bounded for all \({\alpha }\), we say that x is smooth.

Note that the space of Schwartz functions \({{\mathcal {S}}}({\mathbb R}^d)\) is a core for every operator \({{\mathcal {D}}}_k\), \(k=1, \ldots , d\), and we may show that if \(x= \int _{{{\mathbb {R}}}^d} f(s) U(s) ds \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\), then we have

$$\begin{aligned} {[}{{\mathcal {D}}}_k , x]= \int _{{{\mathbb {R}}}^d} s_k f(s) U(s) ds \in {\mathcal {S}({\mathbb {R}_\theta ^d})}. \end{aligned}$$

Inductively, for any \({\alpha }\in {{\mathbb {N}}}_0^d\), \(\partial ^{\alpha }x \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\), and so by our definition the elements of \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) are smooth.

In terms of the isomorphism \(U:{{\mathcal {S}}}({{\mathbb {R}}}^d)\rightarrow {\mathcal {S}({\mathbb {R}_\theta ^d})}\), we can compute derivatives easily:

$$\begin{aligned} \partial ^{\alpha }U(\phi ) = U(t_1^{\alpha _1}\ldots t_d^{\alpha _d}\phi (t)). \end{aligned}$$
(3.1)

We now define the space \({\mathcal {S}'({\mathbb {R}_\theta ^d})}\) of tempered distributions, and the associated operations.

Definition 3.1

Let \({\mathcal {S}'({\mathbb {R}_\theta ^d})}\) be the space of continuous linear functionals on \({\mathcal {S}({\mathbb {R}_\theta ^d})}\), which may be called the space of quantum tempered distributions.

As in the classical case, denote the pairing of \(T \in {\mathcal {S}'({\mathbb {R}_\theta ^d})}\) with \(\phi \) in \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) by \((T,\phi )\), and \(L_1({\mathbb {R}_\theta ^d})+L_\infty ({\mathbb {R}_\theta ^d})\) is embedded into \({\mathcal {S}'({\mathbb {R}_\theta ^d})}\) by:

$$\begin{aligned} (x,\phi ) := \tau _\theta (x\phi ), \quad x \in L_1({\mathbb {R}_\theta ^d})+L_\infty ({\mathbb {R}_\theta ^d}),\,\phi \in {\mathcal {S}({\mathbb {R}_\theta ^d})}. \end{aligned}$$

For a multi-index \(\alpha \in {{\mathbb {N}}}_0^d\) and \(T \in {\mathcal {S}'({\mathbb {R}_\theta ^d})}\), define \(\partial ^\alpha T\) as the distribution given by \((\partial ^{\alpha }T,\phi ) = (-1)^{|\alpha |}(T,\partial ^{\alpha }\phi )\), \(\phi \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\).

It is not hard to verify that \(\partial ^{\alpha }\) on distributions extends \(\partial ^{\alpha }\) on \(L_\infty ({\mathbb {R}_\theta ^d})\), so there is no conflict of notation.
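To illustrate this in the simplest case, let \(x,\phi \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\) and \(|\alpha | = 1\). Since \(\partial _k(x\phi ) = (\partial _kx)\phi + x\,\partial _k\phi \) (the commutator Leibniz rule) and \(\tau _\theta (\partial _k y) = 0\) for every \(y = U(\psi )\in {\mathcal {S}({\mathbb {R}_\theta ^d})}\) (by (3.1) and Definition 2.6, since the function \(t_k\psi (t)\) vanishes at \(t=0\)), we obtain

$$\begin{aligned} 0 = \tau _\theta (\partial _k(x\phi )) = \tau _\theta ((\partial _k x)\phi ) + \tau _\theta (x\,\partial _k\phi ), \end{aligned}$$

which is exactly the identity \((\partial _k x,\phi ) = -(x,\partial _k\phi )\) required by the definition of \(\partial ^{\alpha }\) on distributions.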

By duality, we can extend the derivatives \({{\mathcal {D}}}_k\) to operators on \({\mathcal {S}'({\mathbb {R}_\theta ^d})}\). With these generalised derivatives, we are able to introduce the Sobolev spaces \(W_p^m ({\mathbb {R}_\theta ^d})\) associated to noncommutative Euclidean space.

Definition 3.2

For a positive integer m and \(1\le p \le \infty \), the space \(W_p^m ({\mathbb {R}_\theta ^d})\) is the space of \(x\in {\mathcal {S}'({\mathbb {R}_\theta ^d})}\) such that every partial derivative of x up to order m is in \(L_p({\mathbb {R}_\theta ^d})\), equipped with the norm

$$\begin{aligned} \Vert x\Vert _{W_p^m} = \sum _{|{\alpha }| \le m } \Vert \partial ^{\alpha }x \Vert _p\,. \end{aligned}$$

The homogeneous Sobolev space \({\dot{W}}_p^m ({\mathbb {R}_\theta ^d})\) consists of those \(x\in {\mathcal {S}'({\mathbb {R}_\theta ^d})}\) such that every partial derivative of x of order m is in \(L_p({\mathbb {R}_\theta ^d})\), equipped with the norm:

$$\begin{aligned} \Vert x\Vert _{\dot{W}_p^m} = \sum _{|{\alpha }| = m } \Vert \partial ^{\alpha }x \Vert _p\,. \end{aligned}$$

We shall now record a proof that \(W_p^m({\mathbb {R}_\theta ^d})\) is a Banach space. The proof given here largely replicates well-known arguments in the classical setting, so is only included for the sake of completeness.

Proposition 3.3

Equipped with the above norm, \(W_p^m ({\mathbb {R}_\theta ^d})\) is a Banach space for any \(1\le p \le \infty \) and \(m\in {{\mathbb {N}}}_0\).

Proof

It suffices to show that \(W_p^m ({\mathbb {R}_\theta ^d})\) is complete. Assume that \(\{x_n\}_{n=0}^\infty \subset W_p^m ({\mathbb {R}_\theta ^d})\) is a Cauchy sequence. Then for every \(|{\alpha }| \le m\), \(\{\partial ^{\alpha }x_n\}_n \) is a Cauchy sequence in \(L_p ({\mathbb {R}_\theta ^d})\), and so is convergent in the \(L_p\)-norm, so for each \({\alpha }\) there exists some \(y_{\alpha }\in L_p({\mathbb {R}_\theta ^d})\) such that \(\partial ^{\alpha }x_n{\rightarrow }y_{\alpha }\) in \(L_p ({\mathbb {R}_\theta ^d})\). In particular \( x_n{\rightarrow }y_0 \) in \(L_p ({\mathbb {R}_\theta ^d})\). Let us show that \(y_{\alpha }= \partial ^{\alpha }y_0\) for all \(|{\alpha }|\le m\), and this will complete the proof.

Let \(\phi \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\). Then by the definition of \(\partial ^{\alpha }\) on \({\mathcal {S}'({\mathbb {R}_\theta ^d})}\) we have:

$$\begin{aligned} (\partial ^{\alpha }x_n,\phi ) = (-1)^{|{\alpha }|}(x_n,\partial ^{\alpha }\phi ). \end{aligned}$$

Since \(x_n\rightarrow y_0\) and \(\partial ^{\alpha }x_n\rightarrow y_{\alpha }\) in the \(L_p\)-sense it follows that:

$$\begin{aligned} (-1)^{|{\alpha }|}(y_0,\partial ^{\alpha }\phi ) = \lim _{n\rightarrow \infty } (\partial ^{\alpha }x_n,\phi ) = (y_{\alpha },\phi ). \end{aligned}$$

Thus by definition, \(y_{\alpha }= \partial ^{\alpha }y_0\). \(\quad \square \)

The Laplacian \(\Delta _\theta \) associated with \(L_\infty ({\mathbb {R}_\theta ^d})\) is defined on the domain \({\mathrm {dom}}(\Delta _\theta ) = \{\xi \in L_2({{\mathbb {R}}}^d): \xi \in L_2({{\mathbb {R}}}^d, |t|^4\, dt)\}\) by

$$\begin{aligned} (-\Delta _\theta \xi )(t) = |t|^2 \xi (t). \end{aligned}$$

The gradient \({\nabla }_\theta \) associated with \(L_\infty ({\mathbb {R}_\theta ^d})\) is the operator

$$\begin{aligned} {\nabla }_\theta = (-\mathrm{{i}}{{\mathcal {D}}}_1, \ldots , -\mathrm{{i}}{{\mathcal {D}}}_d ), \end{aligned}$$

with the domain \({\mathrm {dom}}({{\mathcal {D}}}_1)\cap \cdots \cap {\mathrm {dom}}({{\mathcal {D}}}_d)\).

We can see that if \(t \in {{\mathbb {R}}}^d\), then \(\exp ((t,{\nabla }_\theta ))\) is the operator on \(L_2({{\mathbb {R}}}^d)\) given by:

$$\begin{aligned} (\exp ((t,{\nabla }_\theta ))\xi )(r) = \exp (\mathrm{{i}}(t,r))\xi (r),\quad r \in {{\mathbb {R}}}^d,\,\xi \in L_2({{\mathbb {R}}}^d). \end{aligned}$$

Strictly speaking, the operators \(\Delta _\theta \) and \({\nabla }_\theta \) do not depend on the matrix \(\theta \). However, we prefer to use notation with \(\theta \) to emphasise that these operators are associated with \(L_\infty ({\mathbb {R}_\theta ^d})\). We will have frequent need to refer to the operator \((1-\Delta _\theta )^{1/2}\), which we abbreviate as \(J_\theta \),

$$\begin{aligned} J_\theta := (1- \Delta _\theta )^{1/2}. \end{aligned}$$

That is, \(J_{\theta }\) is the operator on \(L_2({{\mathbb {R}}}^d)\) of pointwise multiplication by \((1+|t|^2)^{1/2}\), with domain \(L_2({{\mathbb {R}}}^d,(1+|t|^2)dt)\). Classically, the operator \(J_{\theta }\) is called the Bessel potential.

Definition 3.4

Let \(N= 2^{\lfloor d/2\rfloor }\) and \(\{\gamma _j\}_{1\le j \le d }\) be self-adjoint \(N\times N\) matrices satisfying \(\gamma _j \gamma _k +\gamma _k \gamma _j= 2 \delta _{j,k}\). The Dirac operator \({{\mathcal {D}}}\) associated with \(L_\infty ({\mathbb {R}_\theta ^d})\) is the operator on \({{\mathbb {C}}}^N \otimes L_2({{\mathbb {R}}}^d) \) defined by

$$\begin{aligned} {{\mathcal {D}}}:= \sum _{j=1} ^d \gamma _j \otimes {{\mathcal {D}}}_j . \end{aligned}$$
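For instance, when \(d=2\) one admissible choice is \(N=2\) with \(\gamma _1,\gamma _2\) the first two Pauli matrices, in which case

$$\begin{aligned} {{\mathcal {D}}}= \begin{pmatrix} 0 &{}\quad {{\mathcal {D}}}_1 - \mathrm{{i}}{{\mathcal {D}}}_2 \\ {{\mathcal {D}}}_1 + \mathrm{{i}}{{\mathcal {D}}}_2 &{}\quad 0 \end{pmatrix}. \end{aligned}$$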

In noncommutative geometric terms, the Dirac operator \({{\mathcal {D}}}\) may be used to define a spectral triple for \(L_\infty ({\mathbb {R}_\theta ^d})\) given by \(\big ( 1\otimes W_1^\infty ({\mathbb {R}_\theta ^d}), {{\mathbb {C}}}^N \otimes L_2({{\mathbb {R}}}^d) , {{\mathcal {D}}} \big )\). We refer the reader to [22, 67] for more details.

The main object in this note is the commutator

$$\begin{aligned} {\bar{d}}x := [{\mathrm{{sgn}}}({{\mathcal {D}}}), 1\otimes x],\quad x \in L_\infty ({\mathbb {R}_\theta ^d}), \end{aligned}$$
(3.2)

which is the quantised differential of x on quantum Euclidean spaces.
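In the commutative case \(\theta = 0\), this recovers the setting described in the introduction: since \({{\mathcal {D}}}^2 = 1\otimes (-\Delta _0)\), we have

$$\begin{aligned} {\mathrm{{sgn}}}({{\mathcal {D}}}) = \sum _{j=1}^d \gamma _j\otimes {{\mathcal {D}}}_j(-\Delta _0)^{-1/2}, \end{aligned}$$

and each \({{\mathcal {D}}}_j(-\Delta _0)^{-1/2}\) is, up to conjugation by the Fourier transform and a constant factor, a Riesz transform, so that \({\bar{d}}x\) is built from commutators of multiplication operators with Riesz transforms as in [12, 41].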

More generally, if x is not necessarily bounded we may still define \({\bar{d}}x\) on the dense subspace \({{\mathbb {C}}}^N\otimes C_c^\infty ({{\mathbb {R}}}^d)\). Suppose that \(x\in L_p({\mathbb {R}_\theta ^d})\) for some \(2\le p <\infty \). Then if \(\eta \in {{\mathbb {C}}}^N\otimes C_c^\infty ({{\mathbb {R}}}^d)\) has compact support K, we will have from Theorem 3.17 that \((1 \otimes x) \eta = (1\otimes x M_{\chi _K}) \eta \in {{\mathbb {C}}}^N\otimes L_2({{\mathbb {R}}}^d)\), where \(\chi _K\) is the characteristic function of K. It follows that \({\mathrm{{sgn}}}({{\mathcal {D}}})(1 \otimes x) \eta \in {{\mathbb {C}}}^N\otimes L_2({{\mathbb {R}}}^d)\). On the other hand, since \({\mathrm{{sgn}}}({{\mathcal {D}}}) \eta \) is still a compactly supported function in \({\mathbb C}^N\otimes L_2({{\mathbb {R}}}^d)\), the same argument gives \((1 \otimes x){\mathrm{{sgn}}}({{\mathcal {D}}}) \eta \in {{\mathbb {C}}}^N\otimes L_2({{\mathbb {R}}}^d)\). Thus \({\bar{d}}x\,\eta \) is a well-defined element of \({{\mathbb {C}}}^N\otimes L_2({{\mathbb {R}}}^d)\).

3.1 Dilation and translation

Since our quantum Euclidean spaces are generated by noncommuting operators, we cannot realise \(L_\infty ({\mathbb {R}_\theta ^d})\) as an algebra of functions on a space. While there are no underlying points, there are still natural actions of translation by \(t \in {{\mathbb {R}}}^d\) and dilation by \(\lambda \in (0,\infty )\).

Of the two, translation is simplest.

Definition 3.5

Suppose that \(x \in L_\infty ({\mathbb {R}_\theta ^d})\). For \(t \in {{\mathbb {R}}}^d\), define \(T_t(x)\) as:

$$\begin{aligned} T_t(x) = \exp ((t,\nabla _\theta ))x\exp (-(t,\nabla _{\theta })). \end{aligned}$$

More generally, if \(f \in {\mathcal {S}'({\mathbb {R}_\theta ^d})}\), define \(T_t(f)\) as the distribution given by

$$\begin{aligned} (T_t(f),\phi ) = (f,T_{-t}\phi ),\quad \phi \in {\mathcal {S}({\mathbb {R}_\theta ^d})}. \end{aligned}$$

That \(T_t(f)\) is a well-defined distribution for all \(f \in {\mathcal {S}'({\mathbb {R}_\theta ^d})}\) is a straightforward consequence of the observation that \(T_t\) is continuous in every seminorm which defines the topology of \({\mathcal {S}({\mathbb {R}_\theta ^d})}\). Moreover, it is a trivial matter to verify that \(T_t\) is an isometry in every \(L_p({\mathbb {R}_\theta ^d})\), for \(0 < p \le \infty \).

In terms of the map U, we have:

$$\begin{aligned} T_tU(\phi ) = U(e^{\mathrm{{i}}(t,\cdot )}\phi (\cdot )) \end{aligned}$$

for all \(\phi \in {{\mathcal {S}}}({{\mathbb {R}}}^d)\).

As we would expect from the classical case, \(\{T_t\}_{t \in {\mathbb R}^d}\) is continuous in the \(L_p\) norm for \(1\le p < \infty \).

Theorem 3.6

If \(x \in L_p({\mathbb {R}_\theta ^d})\) for \(1 \le p < \infty \), then \(T_t(x)\rightarrow x\) in the \(L_p\)-norm as \(t\rightarrow 0\).

Proof

Initially consider the case when \(x = U(f) \in L_2({\mathbb {R}_\theta ^d})\). It is straightforward to see that \(T_t(U(f)) = U(e^{\mathrm{{i}}(t,\cdot )}f(\cdot ))\) for all \(f \in L_2({{\mathbb {R}}}^d)\), and using Proposition 2.8 and the dominated convergence theorem:

$$\begin{aligned} \Vert T_t(U(f))-U(f)\Vert _2 = (2\pi )^{d/2}\Vert e^{\mathrm{{i}}(t,\cdot )}f(\cdot )-f(\cdot )\Vert _2 \rightarrow 0 \end{aligned}$$

as \(t\rightarrow 0\).

Suppose that \(2< p < \infty \) and \(x \in L_2({\mathbb {R}_\theta ^d})\cap L_\infty ({\mathbb {R}_\theta ^d})\). Using the Hölder inequality, it follows that:

$$\begin{aligned} \lim _{t\rightarrow 0}\Vert T_t(x)-x\Vert _{p}&\le \lim _{t\rightarrow 0} \Vert T_t(x)-x\Vert _\infty ^{1-\frac{2}{p}}\Vert T_t(x)-x\Vert _2^{\frac{2}{p}}\\&\le (2\Vert x\Vert _\infty )^{1-\frac{2}{p}}\lim _{t\rightarrow 0}\Vert T_t(x)-x\Vert _2^{\frac{2}{p}}\\&= 0. \end{aligned}$$

We can extend from \(x \in L_2({\mathbb {R}_\theta ^d})\cap L_\infty ({\mathbb {R}_\theta ^d})\) to all \(x \in L_p({\mathbb {R}_\theta ^d})\) by using the norm-density of \(L_2({\mathbb {R}_\theta ^d})\cap L_\infty ({\mathbb {R}_\theta ^d})\) in \(L_p({\mathbb {R}_\theta ^d})\). Namely, let \(\varepsilon >0\) and select \(y \in L_2({\mathbb {R}_\theta ^d})\cap L_\infty ({\mathbb {R}_\theta ^d})\) such that \(\Vert x-y\Vert _p<\varepsilon \). Then:

$$\begin{aligned} \lim _{t\rightarrow 0} \Vert T_t(x)-x\Vert _p&\le \lim _{t\rightarrow 0} \Vert T_t(x-y)\Vert _p+\lim _{t\rightarrow 0}\Vert T_ty-y\Vert _p+\Vert y-x\Vert _p\\&\le 2\varepsilon +\lim _{t\rightarrow 0}\Vert T_ty-y\Vert _p\\&= 2\varepsilon . \end{aligned}$$

Hence, \(T_tx\rightarrow x\) in the \(L_p\) norm.

On the other hand, if \(1\le p < 2\), consider \(x \in L_2({\mathbb {R}_\theta ^d})\cap L_{2p/(4-p)}({\mathbb {R}_\theta ^d})\). Then Hölder’s inequality and the fact that \(T_t\) is an isometry on every \(L_p({\mathbb {R}_\theta ^d})\) imply that:

$$\begin{aligned} \Vert T_t(x)-x\Vert _p&\le \Vert T_t(x)-x\Vert _2^{1/2}\,\Vert T_t(x)-x\Vert _{2p/(4-p)} ^{1/2}\\&\lesssim _p \Vert T_t(x)-x\Vert _2^{1/2}\Vert x\Vert _{2p/(4-p)} ^{1/2}. \end{aligned}$$

Thus \(\lim _{t\rightarrow 0}\Vert T_t(x)-x\Vert _p = 0\) for \(x \in L_2({\mathbb {R}_\theta ^d})\cap L_{2p/(4-p)}({\mathbb {R}_\theta ^d})\), and this may be extended to all \(x \in L_p({\mathbb {R}_\theta ^d})\) by a density argument similar to the \(p>2\) case. \(\quad \square \)
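
For the reader's convenience, the auxiliary exponent \(2p/(4-p)\) used in the last step is the one produced by log-convexity (Hölder interpolation) of the \(L_p\)-(quasi)norms:

$$\begin{aligned} \Vert y\Vert _p \le \Vert y\Vert _2^{1/2}\Vert y\Vert _q^{1/2} \quad \text {with}\quad \frac{1}{p} = \frac{1}{2}\cdot \frac{1}{2} + \frac{1}{2}\cdot \frac{1}{q},\qquad \text {i.e.}\quad \frac{1}{q} = \frac{2}{p}-\frac{1}{2} = \frac{4-p}{2p}, \end{aligned}$$

so that \(q = \frac{2p}{4-p}\). Note that for \(1\le p<2\) this exponent lies in \([\tfrac{2}{3},2)\), and in particular may be smaller than 1, which is why it matters that \(T_t\) is an isometry on every \(L_p({\mathbb {R}_\theta ^d})\) with \(0<p\le \infty \).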

Theorem 3.6 only discusses the cases \(1 \le p < \infty \) since we are not aware of any characterisation of the subspace of \(x \in L_\infty ({\mathbb {R}_\theta ^d})\) such that \(\lim _{t\rightarrow 0} \Vert T_tx-x\Vert _\infty = 0\). In the classical case, this corresponds to the space of bounded uniformly continuous functions. Using Theorem 2.10, it is possible to prove that \(\lim _{t\rightarrow 0}\Vert T_tx-x\Vert _\infty =0\) for all \(x \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\), and for all x in the closure of \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) in \(L_\infty ({\mathbb {R}_\theta ^d})\).

We now describe the “dilation” action of \({{\mathbb {R}}}^+\) on a quantum Euclidean space. A peculiarity of the noncommutative situation is that the natural dilation does not define a map from \(L_\infty ({\mathbb {R}_\theta ^d})\) to itself; instead, it changes the value of \(\theta \).

The heuristic motivation for the dilation mapping is as follows. Recall that we consider \({\mathbb {R}_\theta ^d}\) as being generated by elements \(\{x_1,\ldots ,x_d\}\) satisfying the commutation relation

$$\begin{aligned} {[}x_j,x_k]=\mathrm{{i}}\theta _{j,k}. \end{aligned}$$

However this relation is not invariant under rescaling. That is, if we let \(\lambda > 0\) then the family \(\{\lambda x_1,\ldots ,\lambda x_d\}\) satisfies the relation:

$$\begin{aligned} {[}\lambda x_j,\lambda x_k] = \mathrm{{i}}\lambda ^2\theta _{j,k}. \end{aligned}$$

It therefore becomes clear that if we wish to define a “dilation by \(\lambda \)” map on \({\mathbb {R}_\theta ^d}\), we should instead consider dilation as mapping between two different noncommutative spaces. That is, from \({\mathbb {R}_\theta ^d}\) to \({{\mathbb {R}}}^d_{\lambda ^2\theta }\).

The following rigorous definition of the “dilation by \({\lambda }\)” map follows [24]. Given \(\lambda >0\), define the map \(\Psi _{\lambda }\) from \(L_\infty ({\mathbb {R}_\theta ^d})\) to \(L_\infty ({{\mathbb {R}}}_{{\lambda }^2 \theta }^d)\) as

$$\begin{aligned} \Psi _{\lambda }: U_\theta (s) \mapsto U_{{\lambda }^2 \theta } (\frac{s}{{\lambda }}). \end{aligned}$$
(3.3)

Recall that we include a subscript \(\theta \) (or \({\lambda }^2\theta \)) to indicate the dependence on the matrix.

Denote by \(\sigma _{\lambda }\) the usual \(L_2\)-norm preserving dilation on Euclidean space:

$$\begin{aligned} \sigma _{\lambda }\xi (t) = {\lambda }^{d/ 2 } \xi ({\lambda }t ), \quad \xi \in L_2({{\mathbb {R}}}^d). \end{aligned}$$

We have \( \sigma _{\lambda }^* = \sigma _{{\lambda }^{-1}}\). It is standard to verify that

$$\begin{aligned} U_\theta (s) = \sigma ^*_{{\lambda }} \, U_{{\lambda }^2 \theta } (\frac{s}{{\lambda }})\, \sigma _{\lambda }. \end{aligned}$$
(3.4)

Moreover, by (3.4), it is evident that for every \({\lambda }>0\), \(\Psi _{\lambda }\) is a \(*\)-isomorphism from \(L_\infty ({\mathbb {R}_\theta ^d})\) to \(L_\infty ({{\mathbb {R}}}_{{\lambda }^2 \theta }^d)\).
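
A minimal check of multiplicativity on the generators is instructive, assuming the Weyl-type relation \(U_\theta (s)U_\theta (s') = e^{-\frac{\mathrm{{i}}}{2}(s',\theta s)}U_\theta (s+s')\) (a form consistent with the deformed convolution (2.12); the precise sign convention is the one fixed in Sect. 2):

$$\begin{aligned} \Psi _{\lambda }(U_\theta (s))\,\Psi _{\lambda }(U_\theta (s')) = U_{{\lambda }^2\theta }(\tfrac{s}{{\lambda }})\,U_{{\lambda }^2\theta }(\tfrac{s'}{{\lambda }}) = e^{-\frac{\mathrm{{i}}}{2}(\frac{s'}{{\lambda }},{\lambda }^2\theta \frac{s}{{\lambda }})}\,U_{{\lambda }^2\theta }(\tfrac{s+s'}{{\lambda }}) = e^{-\frac{\mathrm{{i}}}{2}(s',\theta s)}\,\Psi _{\lambda }(U_\theta (s+s')). \end{aligned}$$

The phase \((\frac{s'}{{\lambda }},{\lambda }^2\theta \frac{s}{{\lambda }}) = (s',\theta s)\) is unchanged under the substitution \((s,\theta )\mapsto (s/{\lambda },{\lambda }^2\theta )\), which is exactly why this rescaling of the generators is the one compatible with \(\theta \mapsto {\lambda }^2\theta \).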

The following proposition shows how the dilation \(\Psi _{\lambda }\) affects the \(L_p\) norms for quantum Euclidean spaces.

Proposition 3.7

Let \({\lambda }>0 \) and \(x\in L_p ({\mathbb {R}_\theta ^d})\), and denote \(\xi = {\lambda }^2\theta \). Then for all \(2 \le p < \infty \), we have:

$$\begin{aligned} \Vert \Psi _{\lambda }x\Vert _{L_p({{\mathbb {R}}}^d_\xi )} \le \lambda ^{d/p}\Vert x\Vert _{L_p({\mathbb {R}_\theta ^d})} \end{aligned}$$

and \(\Psi _{{\lambda }}\) is an isometry from \(L_\infty ({\mathbb {R}_\theta ^d})\) to \(L_\infty ({{\mathbb {R}}}^d_\xi )\).

If in addition \(x \in W^{1}_p({\mathbb {R}_\theta ^d})\), then:

$$\begin{aligned} \Vert \partial _j\Psi _{\lambda }(x) \Vert _{L_p({{\mathbb {R}}}^d_{\xi })} \le \lambda ^{d/p-1}\Vert \partial _jx\Vert _{L_p({{\mathbb {R}}}^d_{ \theta })} \,,\quad j=1,\ldots , d. \end{aligned}$$
(3.5)

Proof

As was already mentioned, \(\Psi _{{\lambda }}\) is a \(*\)-isomorphism between \(L_\infty ({\mathbb {R}_\theta ^d})\) and \(L_\infty ({{\mathbb {R}}}^d_\xi )\), and since a \(*\)-isomorphism of \(C^*\)-algebras is an isometry, it follows immediately that \(\Psi _{\lambda }:L_\infty ({\mathbb {R}_\theta ^d})\rightarrow L_\infty ({\mathbb R}^d_\xi )\) is an isometry.

For \(p=2\), recall from Proposition 2.8 that the mapping \((2\pi )^{-d/2}U_\theta \) (resp. \((2\pi )^{-d/2}U_\xi \)) defines an isometry from \(L_2({\mathbb {R}_\theta ^d})\) (resp. \(L_2({{\mathbb {R}}}^d_\xi )\)) to \(L_2({{\mathbb {R}}}^d)\). Denoting by \(d_{\lambda }\) the map \(d_\lambda f(t) = f(t/\lambda )\), we have:

$$\begin{aligned} \Psi _{{\lambda }} \circ U_\theta = U_\xi \circ d_{\lambda }, \quad \lambda > 0. \end{aligned}$$

Hence \(\Psi _{\lambda }\) has the same norm between \(L_2({\mathbb {R}_\theta ^d})\) and \(L_2({{\mathbb {R}}}^d_\xi )\) as \(d_{\lambda }\) does on \(L_2({{\mathbb {R}}}^d)\). This is easily computed to be \({\lambda }^{d/2}\).
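
Indeed, with the substitution \(s = t/{\lambda }\),

$$\begin{aligned} \Vert d_{\lambda }f\Vert _2^2 = \int _{{{\mathbb {R}}}^d}|f(t/{\lambda })|^2\,dt = {\lambda }^d\int _{{{\mathbb {R}}}^d}|f(s)|^2\,ds = {\lambda }^d\Vert f\Vert _2^2,\quad f \in L_2({{\mathbb {R}}}^d), \end{aligned}$$

so that \(\Vert d_{\lambda }\Vert _{L_2({{\mathbb {R}}}^d)\rightarrow L_2({{\mathbb {R}}}^d)} = {\lambda }^{d/2}\).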

Finally, the result for \(2< p < \infty \) follows from complex interpolation of the couples \((L_2({\mathbb {R}_\theta ^d}),L_\infty ({\mathbb {R}_\theta ^d}))\) and \((L_2({{\mathbb {R}}}^d_\xi ),L_\infty ({{\mathbb {R}}}^d_\xi ))\).

We recall that the complex interpolation space \((L_2({\mathbb {R}_\theta ^d}),L_\infty ({\mathbb {R}_\theta ^d}))_{\eta }\) is \(L_{2/\eta }({\mathbb {R}_\theta ^d})\), where \(\eta \in (0,1)\), and that we have:

$$\begin{aligned} \Vert \Psi _{\lambda }\Vert _{L_{2/\eta }\rightarrow L_{2/\eta }} \le \Vert \Psi _{{\lambda }}\Vert _{L_2\rightarrow L_2}^\eta \Vert \Psi _{{\lambda }}\Vert _{L_\infty \rightarrow L_\infty }^{1-\eta } \le {\lambda }^{d\eta /2}. \end{aligned}$$

Taking \(\eta = \frac{2}{p}\) yields the desired norm bound.

The second claim follows from the easily-verified identity:

$$\begin{aligned} \partial _j(\Psi _{{\lambda }}(x)) = \lambda ^{-1}\Psi _{{\lambda }}\partial _j(x). \end{aligned}$$

\(\square \)

3.2 Approximation by smooth functions for \({\mathbb {R}_\theta ^d}\)

For this section, we fix \(\psi \in {{\mathcal {S}}}({{\mathbb {R}}}^d)\) such that \(\int _{{{\mathbb {R}}}^d} \psi (s)\,ds = 1\). We do not assume that \(\psi \) is necessarily compactly supported or positive, since it will be convenient to have some freedom in choosing \(\psi \). For \(\varepsilon > 0\), define:

$$\begin{aligned} \psi _{\varepsilon }(t) = \varepsilon ^{-d}\psi (\frac{t}{\varepsilon }). \end{aligned}$$
(3.6)

By construction, \(\int _{{{\mathbb {R}}}^d}\psi _{\varepsilon }(t)\,dt = 1\). Moreover since \(\psi \) in particular has rapid decay at infinity, the \(L_1\)-norm of \(\psi _{\varepsilon }\) is primarily concentrated in the ball of radius \(\varepsilon ^{1/2}\) around zero. That is, for each \(N\ge 1\), there exists a constant \(C_N\) depending on \(\psi \) such that:

$$\begin{aligned} \int _{|t|>\varepsilon ^{1/2}} |\psi _{\varepsilon }(t)|\,dt \le C_N \varepsilon ^{N}. \end{aligned}$$
(3.7)
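
Indeed, (3.7) follows from the Schwartz decay of \(\psi \): for every \(M>d\) there is a constant \(C_M\) with \(|\psi (s)|\le C_M|s|^{-M}\) for \(|s|\ge 1\), so by the substitution \(s = t/\varepsilon \),

$$\begin{aligned} \int _{|t|>\varepsilon ^{1/2}} |\psi _{\varepsilon }(t)|\,dt = \int _{|s|>\varepsilon ^{-1/2}} |\psi (s)|\,ds \le C_M\int _{|s|>\varepsilon ^{-1/2}} |s|^{-M}\,ds \lesssim _{M,d} \varepsilon ^{\frac{M-d}{2}} \end{aligned}$$

for \(0<\varepsilon \le 1\) (the regime of interest); taking \(M = d+2N\) gives (3.7).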

Theorem 3.8

Let \(1\le p <\infty \). For all \(x \in L_p({\mathbb {R}_\theta ^d})\), we have that \(U(\psi _{\varepsilon })x \rightarrow x\) in the \(L_p({\mathbb {R}_\theta ^d})\) norm as \(\varepsilon \rightarrow 0\).

Proof

Let us first prove the result for \(p=2\) and \(x \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\). Thanks to Proposition 2.8 and (2.13), it suffices to show that for all \(f \in {{\mathcal {S}}}({{\mathbb {R}}}^d)\):

$$\begin{aligned} \psi _{\varepsilon }*_\theta f\rightarrow f \end{aligned}$$

in the norm of \(L_2({{\mathbb {R}}}^d)\), where \(*_\theta \) is the deformed convolution (2.12).

By definition (2.12), we have that:

$$\begin{aligned} \psi _{\varepsilon }*_{\theta } f (t) = \int _{{{\mathbb {R}}}^d} e^{-\frac{\mathrm{{i}}}{2}(t,\theta s)}\psi _{\varepsilon }(s)f(t-s)\,ds,\quad t \in {{\mathbb {R}}}^d. \end{aligned}$$
(3.8)

Since by definition \(\int _{{{\mathbb {R}}}^d} \psi _{\varepsilon }(s)\,ds = 1\), we have:

$$\begin{aligned} \psi _{\varepsilon }*_{\theta } f (t)-f(t) = \int _{{{\mathbb {R}}}^d} \big (e^{-\frac{\mathrm{{i}}}{2}(t,\theta s)}\psi _{\varepsilon }(s)f(t-s)-\psi _{\varepsilon }(s)f(t)\big )\,ds \end{aligned}$$

for all \(t \in {{\mathbb {R}}}^d\). Hence,

$$\begin{aligned} \psi _{\varepsilon }*_{\theta } f (t)-f(t) = \int _{{{\mathbb {R}}}^d} e^{-\frac{\mathrm{{i}}}{2}(t,\theta s)}\psi _{\varepsilon }(s)(f(t-s)-e^{\frac{\mathrm{{i}}}{2}(t,\theta s)}f(t))\,ds. \end{aligned}$$

Split the integral into the set \(|s|\le \varepsilon ^{1/2}\) and \(|s| > \varepsilon ^{1/2}\). Let \(N\ge 1\). Using (3.7) there is a constant \(C_N\) such that

$$\begin{aligned} |\psi _{\varepsilon }*_{\theta } f(t)-f(t)|&\le \int _{|s|\le \varepsilon ^{1/2}} |\psi _{\varepsilon }(s)| \,|f(t-s)-e^{\frac{\mathrm{{i}}}{2}(t,\theta s)}f(t)|\,ds\\&\quad + \int _{|s|> \varepsilon ^{1/2}} |\psi _{\varepsilon }(s)|\, |f(t-s)-e^{\frac{\mathrm{{i}}}{2}(t,\theta s)}f(t)|\,ds\\&\le \Vert \psi \Vert _1\sup _{|s|\le \varepsilon ^{1/2}} |f(t-s)-e^{\frac{\mathrm{{i}}}{2}(t,\theta s)}f(t)|\\&\quad + C_N\varepsilon ^N\Vert f\Vert _\infty . \end{aligned}$$

Since f is in Schwartz class (and in particular uniformly continuous and bounded), it follows that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} |\psi _{\varepsilon }*_{\theta } f(t)-f(t)| = 0 \end{aligned}$$
(3.9)

uniformly for \(t \in {{\mathbb {R}}}^d\).

Returning to (3.8), we can use the triangle inequality to deduce that:

$$\begin{aligned} |\psi _{\varepsilon }*_{\theta } f(t)| \le \int _{{{\mathbb {R}}}^d} |\psi _{\varepsilon }(s)||f(t-s)|\,ds. \end{aligned}$$

That is, \(|\psi _{\varepsilon }*_{\theta } f| \le |\psi _{\varepsilon }|*|f|\). Using Young’s convolution inequality, this implies that:

$$\begin{aligned} \Vert \psi _{\varepsilon }*_{\theta }f\Vert _2\le \Vert \psi \Vert _1\Vert f\Vert _2. \end{aligned}$$

Thus \(\psi _{\varepsilon }*_{\theta } f -f\in L_2({{\mathbb {R}}}^d)\). Let \(\delta >0\) and select a compact set \(K\subset {{\mathbb {R}}}^d\) such that \(\Vert (\psi _{\varepsilon }*_{\theta }f-f)\chi _{{\mathbb R}^d{\setminus } K}\Vert _2< \delta \) for all \(\varepsilon \in (0,1]\); such a choice is possible since \(|\psi _{\varepsilon }*_{\theta }f|\le |\psi _{\varepsilon }|*|f|\) and, by the rapid decay of \(\psi \) and f, the \(L_2\)-norm of \(|\psi _{\varepsilon }|*|f|\) outside a large ball is small uniformly in \(\varepsilon \le 1\). Since we have uniform pointwise convergence (3.9), it follows that:

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \Vert \psi _{\varepsilon }*_{\theta }f-f\Vert _2 \le \lim _{\varepsilon \rightarrow 0} \Vert (\psi _{\varepsilon }*_{\theta }f-f)\chi _K\Vert _2 + \delta = \delta . \end{aligned}$$

However \(\delta >0\) is arbitrary and therefore:

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \Vert \psi _{\varepsilon }*_{\theta } f-f\Vert _2 = 0. \end{aligned}$$

This completes the proof for \(x \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\).

Now we may complete the proof for \(p=2\) by using the density of \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) in \(L_2({\mathbb {R}_\theta ^d})\) (Remark 2.9). Suppose that \(x \in L_2({\mathbb {R}_\theta ^d})\), let \(\delta >0\), and choose \(y \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\) such that \(\Vert y-x\Vert _2 < \delta \). Note that we have \(\Vert U(\psi _{\varepsilon })\Vert _\infty \le \Vert \psi _\varepsilon \Vert _1 = \Vert \psi _1\Vert _1 < \infty \). Thus,

$$\begin{aligned} \Vert U(\psi _{\varepsilon })x-x\Vert _2&\le \Vert U(\psi _{\varepsilon })(x-y)\Vert _2 + \Vert U(\psi _{\varepsilon })y-y\Vert _2 + \Vert x-y\Vert _2\\&\le (\Vert \psi _1\Vert _1+1)\delta + \Vert U(\psi _{\varepsilon })y-y\Vert _2. \end{aligned}$$

Letting \(\varepsilon \rightarrow 0\) and then \(\delta \rightarrow 0\) completes the proof for \(p = 2\).

Now we may complete the proof for \(p \ne 2\) by following an identical argument to the proof of Theorem 3.6. \(\quad \square \)

The \(p=2\) component of Theorem 3.8 may be equivalently stated as: \(U(\psi _{\varepsilon }) \rightarrow 1\) in the strong operator topology of \(L_\infty ({\mathbb {R}_\theta ^d})\) in its representation on \(L_2({\mathbb {R}_\theta ^d})\).

There is another way in which we can approximate an element \(x \in L_p({\mathbb {R}_\theta ^d})\) using \(\psi _{\varepsilon }\). This uses the notion of convolution:

Definition 3.9

Let \(x \in L_p({\mathbb {R}_\theta ^d})\) for \(1\le p <\infty \). For \(\psi \in L_1({{\mathbb {R}}}^d)\) define:

$$\begin{aligned} \psi *x := \int _{{{\mathbb {R}}}^d} \psi (s)T_{-s}(x)\,ds \end{aligned}$$

as an absolutely convergent Bochner integral.

Some remarks are in order: First, Theorem 3.6 implies that the mapping \(s\mapsto T_{-s}(x)\) is continuous from \({{\mathbb {R}}}^d\) to \(L_p({\mathbb {R}_\theta ^d})\) with its norm topology, so for each \(y \in L_q({\mathbb {R}_\theta ^d})\), for \(\frac{1}{p}+\frac{1}{q}=1\), we have that \(s\mapsto \tau _{\theta }(yT_{-s}(x))\) is continuous and so the integrand is weakly measurable. Since \(L_p({\mathbb {R}_\theta ^d})\) is separable for \(p < \infty \), the Pettis measurability theorem ensures the Bochner measurability of the integrand. The triangle inequality then implies:

$$\begin{aligned} \Vert \psi *x\Vert _p \le \Vert \psi \Vert _1 \Vert x\Vert _p. \end{aligned}$$
(3.10)

If we instead consider \(p=\infty \), there may be issues with Bochner measurability of the integrand, however we will not need to be concerned with that case.

Another fact about convolution worth noting is that if \(x \in L_2({\mathbb {R}_\theta ^d})\) is given by \(x = U(f)\) for \(f \in L_2({{\mathbb {R}}}^d)\), then:

$$\begin{aligned} \psi *U(f) = U({\widehat{\psi }} f) \end{aligned}$$
(3.11)

where \({\widehat{\psi }} \) is the Fourier transform of \(\psi \).
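
This identity can be seen directly from Definition 3.9 and the formula for \(T_t\) on the image of U given above, as a sketch under the Fourier-transform convention \({\widehat{\psi }}(\xi ) = \int _{{{\mathbb {R}}}^d}\psi (s)e^{-\mathrm{{i}}(s,\xi )}\,ds\) (the normalisation is the one fixed in Sect. 2):

$$\begin{aligned} \psi *U(f) = \int _{{{\mathbb {R}}}^d}\psi (s)\,T_{-s}(U(f))\,ds = \int _{{{\mathbb {R}}}^d}\psi (s)\,U\big (e^{-\mathrm{{i}}(s,\cdot )}f\big )\,ds = U\Big (\Big (\int _{{{\mathbb {R}}}^d}\psi (s)e^{-\mathrm{{i}}(s,\cdot )}\,ds\Big )f\Big ) = U({\widehat{\psi }}\,f), \end{aligned}$$

where the integral is pulled inside U by linearity and \(L_2\)-continuity.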

Note at this stage that convolution with \(\psi \) commutes with each \(\partial _j\).

Theorem 3.10

Let \(x \in L_p({\mathbb {R}_\theta ^d})\) for \(1\le p < \infty \), and let \(\psi \) and \(\psi _\varepsilon \) be as in (3.6). Then:

$$\begin{aligned} \psi _\varepsilon *x \rightarrow x \end{aligned}$$

in the \(L_p\)-norm, as \(\varepsilon \rightarrow 0\).

Proof

By definition, and the fact that \(\int _{{\mathbb R}^d}\psi _{\varepsilon }(s)\,ds = 1\), we have:

$$\begin{aligned} \psi _{\varepsilon }*x-x = \int _{{{\mathbb {R}}}^d} \psi _{\varepsilon }(s)(T_{-s}(x)-x)\,ds. \end{aligned}$$

Using (3.7), let \(N\ge 1\) and split the integral into regions \(|s|\le \varepsilon ^{1/2}\) and \(|s|> \varepsilon ^{1/2}\) to obtain:

$$\begin{aligned} \Vert \psi _{\varepsilon }*x-x\Vert _p \le \Vert \psi \Vert _1\sup _{|s|< \varepsilon ^{1/2}}\Vert T_s(x)-x\Vert _p+2C_N\varepsilon ^N\Vert x\Vert _p \,. \end{aligned}$$

The result now follows from Theorem 3.6. \(\quad \square \)

We can now combine Theorems 3.8 and 3.10 to simultaneously approximate \(x\in L_p({\mathbb {R}_\theta ^d})\) with convolution and left multiplication by mollifying functions. The proof of the following is a straightforward consequence of the fact that \(\Vert U(\psi _{\varepsilon })\Vert _\infty \) is uniformly bounded in \(\varepsilon \), together with the inequality (3.10).

Corollary 3.11

Let \(x \in L_p({\mathbb {R}_\theta ^d})\), and suppose that we have a family \(\{x_{\varepsilon }\}_{\varepsilon >0} \subseteq L_p({\mathbb {R}_\theta ^d})\) such that \(x_{\varepsilon }\rightarrow x\) in the \(L_p\) sense as \(\varepsilon \rightarrow 0\). Then:

$$\begin{aligned} U(\psi _{\varepsilon })x_{\varepsilon }\rightarrow x,\quad \psi _{\varepsilon }*x_{\varepsilon }\rightarrow x \end{aligned}$$

in \(L_p({\mathbb {R}_\theta ^d})\), as \(\varepsilon \rightarrow 0\).

Proof

Both estimates follow from the fact that the \(L_1\)-norm of \(\psi _{\varepsilon }\) is uniformly bounded in \(\varepsilon \). Indeed, we have:

$$\begin{aligned} \Vert U(\psi _{\varepsilon })x_{\varepsilon }-x\Vert _p&\le \Vert U(\psi _{\varepsilon })\Vert _\infty \Vert x_{\varepsilon }-x\Vert _{p} + \Vert U(\psi _{\varepsilon })x-x\Vert _p\\&\le \Vert \psi \Vert _1\Vert x_{\varepsilon }-x\Vert _p + \Vert U(\psi _{\varepsilon })x-x\Vert _p \end{aligned}$$

which vanishes as \(\varepsilon \rightarrow 0\) thanks to Theorem 3.8. Similarly (3.10) implies:

$$\begin{aligned} \Vert \psi _{\varepsilon }*x_{\varepsilon }-x\Vert _p\le \Vert \psi _{\varepsilon }\Vert _1\Vert x_{\varepsilon }-x\Vert _p + \Vert \psi _{\varepsilon }*x-x\Vert _p \end{aligned}$$

which again vanishes as \(\varepsilon \rightarrow 0\), due to Theorem 3.10. \(\quad \square \)

Corollary 3.11 suffices to show that, for example, \(\psi _{\varepsilon }*(U(\phi _{\varepsilon })x)\rightarrow x\) as \(\varepsilon \rightarrow 0\) in the \(L_p\) sense, where \(\phi _{\varepsilon }\in {{\mathcal {S}}}({{\mathbb {R}}}^d)\) is defined similarly to \(\psi _{\varepsilon }\).

It is shown in [24] that \( {\mathcal {S}({\mathbb {R}_\theta ^d})}\) is weak-\(*\) dense in \(L_\infty ({\mathbb {R}_\theta ^d})\), and norm dense in \(L_p({\mathbb {R}_\theta ^d})\) for \(1\le p <\infty \). Corollary 3.11 combined with the following lemma gives us a specific sequence which approximates an arbitrary \(x\in L_p({\mathbb {R}_\theta ^d})\) by a sequence in \({\mathcal {S}({\mathbb {R}_\theta ^d})}\).

Lemma 3.12

There exist choices of \(\psi \), \(\phi \) and \(\chi \) in \({\mathcal S}({{\mathbb {R}}}^d)\) with integral equal to 1 such that for all \(x \in L_p({\mathbb {R}_\theta ^d})\) (\(2\le p \le \infty \)) and \(\varepsilon >0\) the element \(\psi _{\varepsilon }*\big (U(\phi _{\varepsilon })U(\chi _{\varepsilon })x \big )\) is in the Schwartz space \({\mathcal {S}({\mathbb {R}_\theta ^d})}\).

Proof

Let us first prove that we can select \(\chi \in {\mathcal S}({{\mathbb {R}}}^d)\) such that \(U(\chi _{\varepsilon })x \in L_2({\mathbb {R}_\theta ^d})\) for all \(x \in L_p({\mathbb {R}_\theta ^d})\).

We refer to the isomorphism (2.6). By a change of variables if necessary, we assume that \(\theta \) is of the form:

$$\begin{aligned} \theta = \begin{pmatrix} 0_{d_1} &{}\quad 0 \\ 0 &{}\quad {\widetilde{\theta }}\end{pmatrix}, \end{aligned}$$

where \(d_1 = \dim (\ker (\theta ))\) and \(\det ({\widetilde{\theta }}) \ne 0\). Let \(H = L_2({{\mathbb {R}}}^{\mathrm {rank}(\theta )/2})\), then \(L_p({\mathbb {R}_\theta ^d})\) can be identified with the Bochner space:

$$\begin{aligned} L_p({\mathbb {R}_\theta ^d}) = L_p({{\mathbb {R}}}^{d_1}; {{\mathcal {L}}}_p(H)). \end{aligned}$$

(see, e.g. [51, Chapter 3]).

Since \({\widetilde{\theta }}\) has trivial kernel, the corresponding Schwartz space \({{\mathcal {S}}}({\mathbb R}^{d-d_1}_{{\widetilde{\theta }}})\) has a dense subspace of finite rank elements as in Theorem 2.4. Select \(n > 0\) and \(z \in L_\infty ({{\mathbb {R}}}^{d-d_1}_{{\widetilde{\theta }}})\) such that \(p_nzp_n\) (which is in \({{\mathcal {S}}}({\mathbb R}^{d-d_1}_{{\widetilde{\theta }}})\)) is given by \(U_{{\widetilde{\theta }}}(\zeta )\), where \(\zeta \in {\mathcal S}({{\mathbb {R}}}^{d-d_1})\). We may choose \(p_nzp_n\) such that \(\zeta \) has nonzero integral, thanks to part (iii) of Theorem 2.4.

Now select \(\eta \in C^\infty _c({{\mathbb {R}}}^{d_1})\) with \(\eta (0) \ne 0\). We select \(\chi \in {{\mathcal {S}}}({{\mathbb {R}}}^d)\) such that:

$$\begin{aligned} U_\theta (\chi ) = M_{\eta }\otimes U_{{\widetilde{\theta }}}(\zeta ) = M_\eta \otimes p_nzp_n. \end{aligned}$$

Since \(\eta \) and \(p_nzp_n\) are in the Schwartz spaces for \({\mathbb R}^{d_1}\) and \({{\mathbb {R}}}^{d-d_1}_{{\widetilde{\theta }}}\) respectively, we may indeed choose \(\chi \) such that \(U_{\theta }(\chi ) = M_{\eta }\otimes p_nzp_n\). We will have \(\int _{{{\mathbb {R}}}^d}\chi (t)\,dt = \eta (0)\int _{{\mathbb R}^{d-d_1}}\zeta (t)\,dt\), which by construction is not zero. Thus, rescaling \(\eta \) if necessary, we may assume that \(\int _{{\mathbb R}^d}\chi (t)\,dt = 1\).

Then, if \(x \in L_p({{\mathbb {R}}}^{d_1},{{\mathcal {L}}}_p(H))\), it follows that \(U(\chi )x\) is compactly supported on \({\mathbb R}^{d_1}\), and takes values in \(p_n{{\mathcal {L}}}_p(H)\), a space of finite-rank operators. Therefore,

$$\begin{aligned} U(\chi )x \in L_2({{\mathbb {R}}}^{d_1}; {{\mathcal {L}}}_2(H)) = L_2({\mathbb {R}_\theta ^d}). \end{aligned}$$

One can then deduce that \(U(\chi _{\varepsilon })x \in L_2({\mathbb {R}_\theta ^d})\) via the dilation maps \(\Psi _{\varepsilon }\) and \(\Psi _{\varepsilon ^{-1}}\), since we have:

$$\begin{aligned} U_\theta (\chi _{\varepsilon }) = \varepsilon ^{-d}\Psi _{\varepsilon ^{-1}}U_{\varepsilon ^2\theta }(\chi )\Psi _{\varepsilon }. \end{aligned}$$

Since \(U(\chi _{\varepsilon })x \in L_2({\mathbb {R}_\theta ^d})\), from Proposition 2.8 there exists \(f \in L_2({{\mathbb {R}}}^d)\) such that \(U(\chi _{\varepsilon })x = U(f)\). Using (3.11) we have:

$$\begin{aligned} \psi _{\varepsilon }*\big (U(\phi _{\varepsilon })U(f)\big ) = U\big ({\widehat{\psi }}_{\varepsilon }(\phi _{\varepsilon }*_{\theta } f)\big ). \end{aligned}$$

It is easily shown that \(\phi _{\varepsilon }*_{\theta } f\) is smooth, and we may select \(\psi \) such that \({\widehat{\psi }}_{\varepsilon }\) is compactly supported. Then \({\widehat{\psi }}_{\varepsilon }(\phi _{\varepsilon }*_{\theta } f)\) is smooth and compactly supported, so by definition \(U({\widehat{\psi }}_{\varepsilon }(\phi _{\varepsilon }*_{\theta } f)) = \psi _{\varepsilon }*(U(\phi _{\varepsilon })U(f))\) is in \({\mathcal {S}({\mathbb {R}_\theta ^d})}\). That is,

$$\begin{aligned} \psi _{\varepsilon }*(U(\phi _{\varepsilon })U(\chi _{\varepsilon })x) \in {\mathcal {S}({\mathbb {R}_\theta ^d})}. \end{aligned}$$

\(\square \)

Note that in the proof of Lemma 3.12, the function \(\zeta \) was chosen such that \(U_{{\widetilde{\theta }}}(\zeta )\) satisfies certain conditions. It is for this reason that we avoided making the assumption that the function \(\psi \) appearing in the preceding lemmas is positive or compactly supported; the proof of Lemma 3.12 is simplified if we do not need to prove that \(\zeta \) has those properties.

3.3 Density of \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) and \({\mathcal {A}({\mathbb {R}_\theta ^d})}\) in Sobolev spaces

We now use the machinery of the previous subsection to prove that \({\mathcal {A}({\mathbb {R}_\theta ^d})}\) (and by extension, \({\mathcal {S}({\mathbb {R}_\theta ^d})}\)) is dense in \(W^{m}_{p}({\mathbb {R}_\theta ^d})\) for an appropriate range of indices (mp). Proving the density of \({\mathcal {A}({\mathbb {R}_\theta ^d})}\) in the homogeneous Sobolev space \({\dot{W}}^{m}_{p}({\mathbb {R}_\theta ^d})\), however, presents difficulties and we have been unable to achieve this for the full range of indices (mp).

As in Sect. 3.2, select a Schwartz class function \(\psi \) with \(\int _{{{\mathbb {R}}}^d} \psi (t)\,dt = 1\), and denote \(\psi _{\varepsilon }(t) = \varepsilon ^{-d}\psi (t/\varepsilon )\). We note one further property of \(U(\psi _{\varepsilon })\):

Lemma 3.13

Let \(1\le j\le d\). Then for all \(2\le p \le \infty \), we have:

$$\begin{aligned} \Vert \partial _j U(\psi _{\varepsilon })\Vert _p \le \varepsilon ^{1-\frac{d}{p}}\Vert t_j\psi _1(t)\Vert _q. \end{aligned}$$

where q satisfies \(\frac{1}{p}+\frac{1}{q}=1\).

Proof

Recall (from (3.1)) that:

$$\begin{aligned} \partial _j U(\psi _{\varepsilon }) = U(t_j\psi _{\varepsilon }(t)) \end{aligned}$$

so that we may apply Proposition 2.10 to bound \(\Vert \partial _j U(\psi _{\varepsilon })\Vert _p\) by:

$$\begin{aligned} \left( \int _{{{\mathbb {R}}}^d} |t_j|^q\varepsilon ^{-dq}|\psi (\tfrac{t}{\varepsilon })|^q\,dt\right) ^{1/q} \end{aligned}$$

where q is Hölder conjugate to p.

Applying the change of variable \(s = \frac{t}{\varepsilon }\), we get the norm bound:

$$\begin{aligned} \Vert \partial _j U(\psi _{\varepsilon })\Vert _p \le \varepsilon ^{1-d+\frac{d}{q}}\Vert t_j\psi _1(t)\Vert _q = \varepsilon ^{1-\frac{d}{p}}\Vert t_j\psi _1(t)\Vert _q. \end{aligned}$$

\(\square \)

Lemma 3.13 allows us to prove the density of \({\mathcal {A}({\mathbb {R}_\theta ^d})}\) in the Sobolev spaces associated to \({\mathbb {R}_\theta ^d}\). We achieve this by first using Lemma 3.12 to prove that \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) is dense in \(W^{m}_p({\mathbb {R}_\theta ^d})\).

Proposition 3.14

Let \(m\ge 0\), \(1\le p < \infty \) and \(x \in W^{m}_p({\mathbb {R}_\theta ^d})\), and let \(\{\phi _{\varepsilon }\}_{\varepsilon >0}\), \(\{\psi _{\varepsilon }\}_{\varepsilon >0}\) and \(\{\chi _{\varepsilon }\}_{\varepsilon >0}\) be chosen as in Sect. 3.2. Then

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \Vert \psi _{\varepsilon }*\big (U(\phi _{\varepsilon })U(\chi _{\varepsilon })x\big )-x\Vert _{W^{m}_p} = 0. \end{aligned}$$

In particular, \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) is norm-dense in \(W_p^m({\mathbb {R}_\theta ^d})\).

Proof

For \(m=0\), this is already implied by Corollary 3.11.

For \(m=1\), we use the Leibniz rule, recalling that differentiation commutes with convolution:

$$\begin{aligned}&\partial _j\Big (\psi _{\varepsilon }*\big (U(\phi _{\varepsilon })U(\chi _{\varepsilon })x \big ) \Big )-\partial _j x \\&\quad = \psi _{\varepsilon }*\Big ( \big (\partial _j U(\phi _{\varepsilon })\big )\, U(\chi _{\varepsilon })x\Big ) + \psi _{\varepsilon }*\Big ( U(\phi _{\varepsilon })\,\big (\partial _j U(\chi _{\varepsilon })\big ) \,x\Big )\\&\quad \quad + \Big (\psi _{\varepsilon }*\big ( U(\phi _{\varepsilon })U(\chi _{\varepsilon })\partial _j x\big )-\partial _j x\Big ). \end{aligned}$$

Due to Corollary 3.11, the latter term vanishes in the \(L_p\)-norm as \(\varepsilon \rightarrow 0\).

For the first two terms, we apply Hölder’s inequality and Lemma 3.13. For the first summand, we apply (3.10),

$$\begin{aligned} \Vert \psi _{\varepsilon }*\Big ( \big (\partial _j U(\phi _{\varepsilon })\big )\, U(\chi _{\varepsilon })x \Big )\Vert _p\le & {} \Vert \psi _{\varepsilon }\Vert _1\Vert \chi _{\varepsilon }\Vert _1\Vert \partial _j U(\phi _{\varepsilon })\Vert _\infty \Vert x\Vert _p\\\lesssim & {} \varepsilon \Vert \chi \Vert _1\Vert \psi \Vert _1\Vert \phi \Vert _1 \Vert x\Vert _p \end{aligned}$$

and this vanishes as \(\varepsilon \rightarrow 0\). The second summand also vanishes as \(\varepsilon \rightarrow 0\) due to an identical argument, and this completes the case \(m=1\).

The cases \(m\ge 2\) follow similarly. \(\quad \square \)

At the time of this writing, we are unable to prove that the inclusion \({\mathcal {A}({\mathbb {R}_\theta ^d})}\subset {\dot{W}}_p ^m ({\mathbb {R}_\theta ^d})\) is dense. In the classical (commutative) setting or on quantum tori, this can be achieved by an application of a Poincaré inequality (see, e.g., [31, Theorem 7]). To the best of our knowledge, no adequate replacement is known in the noncommutative setting. In the following proposition, to obtain the desired convergence in \({\dot{W}}^1_d({\mathbb {R}_\theta ^d})\) norm, we have to assume additionally that \(x\in L_p({\mathbb {R}_\theta ^d})\) for some \(d \le p <\infty \). This is the ultimate cause of the a priori assumption in the statements of Theorems 1.1, 1.2 and 1.5 that \(x \in L_p({\mathbb {R}_\theta ^d})\) for some \(d\le p < \infty \).

Proposition 3.15

If \(x\in {\dot{W}}_d^1 ({\mathbb {R}_\theta ^d})\cap L_p({\mathbb {R}_\theta ^d})\) for some \(d\le p<\infty \), then the sequence \(\psi _{\varepsilon }*(U(\phi _\varepsilon )U(\chi _{\varepsilon })x)\) converges to x in \({\dot{W}}_d^1 \)-seminorm when \(\varepsilon \rightarrow 0^+\).

Proof

Let \(1\le j\le d\). Applying the Leibniz rule:

$$\begin{aligned}&\partial _j\Big (\psi _{\varepsilon }*\big (U(\phi _{\varepsilon })U(\chi _{\varepsilon })x \big ) \Big )-\partial _jx \\&\quad = \psi _{\varepsilon }*\Big ( \big (\partial _j U(\phi _{\varepsilon })\big )\, U(\chi _{\varepsilon })x\Big ) + \psi _{\varepsilon }*\Big ( U(\phi _{\varepsilon })\,\big (\partial _j U(\chi _{\varepsilon })\big )x \Big )\\&\qquad + \Big (\psi _{\varepsilon }*\big (U(\phi _{\varepsilon })U(\chi _{\varepsilon })\partial _j x \big )-\partial _j x \Big ). \end{aligned}$$

The latter term vanishes as \(\varepsilon \rightarrow 0\), as a consequence of Theorem 3.10.

For the first two terms, since \(x\in L_p({\mathbb {R}_\theta ^d})\) for some \(p \ge d\) we can apply Hölder’s inequality. E.g. for the first term we have:

$$\begin{aligned} \Big \Vert \psi _{\varepsilon }*\Big ( \big (\partial _j U(\phi _{\varepsilon })\big )\, U(\chi _{\varepsilon })x\Big ) \Big \Vert _d \lesssim \Vert \partial _j U( \phi _\varepsilon ) \Vert _q\Vert x\Vert _p\,, \end{aligned}$$

where \(\frac{1}{d} = \frac{1}{p} + \frac{1}{q} \). Using Lemma 3.13, \(\Vert \partial _j U(\phi _{\varepsilon })\Vert _q\rightarrow 0\) as \(\varepsilon \rightarrow 0\). The second term is handled similarly. Therefore, \(\Big \Vert \partial _j\Big (\psi _{\varepsilon }*\big (U(\phi _{\varepsilon })U(\chi _{\varepsilon })x \big ) \Big ) -\partial _jx \Big \Vert _d {\rightarrow }0\) and this completes the proof. \(\quad \square \)

Now using the density of \({\mathcal {A}({\mathbb {R}_\theta ^d})}\) in \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) in its Fréchet topology, we may conclude the following key result:

Corollary 3.16

Let \(x \in L_p({\mathbb {R}_\theta ^d})\cap {\dot{W}}^1_d({\mathbb {R}_\theta ^d})\) for some \(d\le p < \infty \). There exists a sequence \(\{x_n\}_{n\ge 0}\subset {\mathcal {A}({\mathbb {R}_\theta ^d})}\) such that for all \(1\le j\le d\):

$$\begin{aligned} \lim _{n\rightarrow \infty } \Vert \partial _jx_n-\partial _j x\Vert _d = 0. \end{aligned}$$

3.4 Cwikel type estimates

Let \(x\in L_\infty ({\mathbb {R}_\theta ^d})\); then by definition, x is a bounded operator on \(L_2({{\mathbb {R}}}^d)\). On the other hand, for a (Borel) function g on \({{\mathbb {R}}}^d\), we may define:

$$\begin{aligned} M_g = g( {{\mathcal {D}}}_1,\ldots , {{\mathcal {D}}}_d )= g( \mathrm{{i}}\nabla _\theta ) \end{aligned}$$

via functional calculus. As \({{\mathcal {D}}}_k\) is merely the operator \(\xi (t) \mapsto t_k\xi (t)\), it follows that \(M_g\) is the multiplication operator:

$$\begin{aligned} M_g\xi (t) = g(t)\xi (t),\quad {\mathrm {dom}}(M_g) = L_2({\mathbb R}^d,|g(t)|^2\,dt). \end{aligned}$$
(3.12)

We call operators of the form \(M_g\) Fourier multipliers of \({\mathbb {R}_\theta ^d}\).

Note that if \(x \in L_2({\mathbb {R}_\theta ^d})\), we may still consider x as a (potentially unbounded) operator on \(L_2({{\mathbb {R}}}^d)\), with initial domain \({{\mathcal {S}}}({{\mathbb {R}}}^d)\).

The following theorem, quoted from [39], gives sufficient conditions for operators of the form \(x M_g\) to be in the Schatten class \({{\mathcal {L}}}_p (L_2({{\mathbb {R}}}^d))\) or the corresponding weak Schatten classes.

Theorem 3.17

Let \(x\in L_p({\mathbb {R}_\theta ^d})\) with \(2\le p <\infty \).

  1. (i)

    If \(g\in L_p({{\mathbb {R}}}^d)\), then \( x M_g\) is in \({{\mathcal {L}}}_p(L_2({{\mathbb {R}}}^d))\) and

    $$\begin{aligned} \Vert x M_g\Vert _{{{\mathcal {L}}}_p} \lesssim _p \Vert x\Vert _p \Vert g\Vert _p. \end{aligned}$$
  2. (ii)

    If \(g\in L_{p,\infty }({{\mathbb {R}}}^d)\) with \(p > 2\), then \( x M_g\) is in \({{\mathcal {L}}}_{p,\infty }(L_2({{\mathbb {R}}}^d)) \) and

    $$\begin{aligned} \Vert x M_g\Vert _{{{\mathcal {L}}}_{p,\infty }} \lesssim _p \Vert x\Vert _p \Vert g\Vert _{p,\infty }. \end{aligned}$$
  3. (iii)

    Let \(x \in W^{d}_1({\mathbb {R}_\theta ^d})\). Then \( x J_\theta ^{-d} \in {{\mathcal {L}}}_{1,\infty }\) and

    $$\begin{aligned} \Vert x J_\theta ^{-d}\Vert _{{{\mathcal {L}}}_{1,\infty }} \lesssim _d \Vert x\Vert _{W^d_1}. \end{aligned}$$

Proof

Theorem 7.2 in [39] says that

$$\begin{aligned} \Vert xM_g\Vert _{E({{\mathcal {B}}}(L_2({{\mathbb {R}}}^d)))} \lesssim _E \Vert x\otimes g\Vert _{E(L_\infty ({\mathbb {R}_\theta ^d})\otimes L_\infty ({{\mathbb {R}}}^d))} \end{aligned}$$
(3.13)

for any interpolation space E of the couple \((L_2,L_\infty )\). Taking \(E =L_p\) in (3.13), we get (i). For (ii), we take \(E =L_{p,\infty }\) and use the estimate

$$\begin{aligned} \Vert x\otimes g\Vert _{L_{p,\infty }(L_\infty ({\mathbb {R}_\theta ^d})\otimes L_\infty ({\mathbb R}^d))}\le \Vert x\Vert _p \Vert g\Vert _{p,\infty } \end{aligned}$$

to immediately conclude the proof.
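
For completeness, here is one way to see this last estimate; a sketch via distribution functions, where \(\nu _{|x|}\) denotes the spectral measure of \(|x|\) with respect to \(\tau _\theta \) (so that \(\int _0^\infty u^p\,d\nu _{|x|}(u) = \Vert x\Vert _p^p\)) and \(d_{x\otimes g}\) the distribution function of \(|x\otimes g| = |x|\otimes |g|\):

$$\begin{aligned} d_{x\otimes g}(s) = \int _0^\infty \big |\{t\in {{\mathbb {R}}}^d \,:\, u|g(t)|>s\}\big |\,d\nu _{|x|}(u) \le \frac{\Vert g\Vert _{p,\infty }^p}{s^p}\int _0^\infty u^p\,d\nu _{|x|}(u) = \frac{\Vert x\Vert _p^p\,\Vert g\Vert _{p,\infty }^p}{s^p},\quad s>0, \end{aligned}$$

so the generalised singular values satisfy \(\mu (t,x\otimes g)\le \Vert x\Vert _p\Vert g\Vert _{p,\infty }\,t^{-1/p}\), which is the claimed bound.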

(iii) is merely an application of [39, Theorem 7.6]. Since the function \( (1+|t|^2)^{-d/2}\) is in \(\ell _{1,\infty } (L_\infty ({{\mathbb {R}}}^d))\), it follows that \( x J_\theta ^{-d} \in {{\mathcal {L}}}_{1,\infty }\). \(\quad \square \)

4 Proof of Theorem 1.1

This section is devoted to the proof of Theorem 1.1, that is, that the condition \(x \in {\bigcup _{d\le p <\infty } L_p({\mathbb {R}_\theta ^d})}\cap W^{1}_d({\mathbb {R}_\theta ^d})\) is sufficient for \([{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes x] \in {{\mathcal {L}}}_{d,\infty }\), and with an explicit norm bound:

$$\begin{aligned} \Vert [{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes x]\Vert _{{{\mathcal {L}}}_{d,\infty }} \lesssim _d \Vert x\Vert _{{\dot{W}}^1_d({\mathbb {R}_\theta ^d})}. \end{aligned}$$

The proof given here is similar to the corresponding result on quantum tori [45], relying heavily on the Cwikel type estimate stated in the last section.

The following two lemmas are easily deduced from Theorem 3.17.

Consider the function on \({{\mathbb {R}}}^d\), \(\xi \mapsto (1+|\xi |^2)^{-\frac{d}{2}}\). When \(|\xi | > 1 \), we have \((1+|\xi | ^2)^{-\frac{d}{2}} \le |\xi |^{-d}\). For \(|\xi | \le 1\), \((1+|\xi | ^2)^{-\frac{d}{2}}\) is bounded from above by 1. Hence \(\xi \mapsto (1+|\xi | ^2)^{-\frac{d}{2}} \in L_{1,\infty } ({{\mathbb {R}}}^d)\), and so \(\xi \mapsto (1+|\xi | ^2)^{-\frac{{\beta }}{2}}\in L_{\frac{d}{{\beta }} , \infty } ({{\mathbb {R}}}^d)\). Recall \(J_\theta = (1- \Delta _\theta )^{1/2}\). Then we have:

Lemma 4.1

Consider the linear operator \(x J_\theta ^{- \beta } \) on \({{\mathbb {C}}}^N {\otimes }L_2({{\mathbb {R}}}^d)\). If \(x\in L_{\frac{d}{\beta }}({\mathbb {R}_\theta ^d})\) with \(\frac{d}{\beta }> 2\), then \(x J_\theta ^{- \beta } \in {{\mathcal {L}}}_{\frac{d}{\beta }, \infty } ,\) and

$$\begin{aligned} \Vert x J_\theta ^{- \beta }\Vert _{{{\mathcal {L}}}_{\frac{d}{\beta }, \infty }} \lesssim _{d,\beta } \Vert x\Vert _{\frac{d}{\beta }}. \end{aligned}$$

Lemma 4.2

Suppose that \(p>\frac{d}{2}\) and \(x\in L_p({\mathbb {R}_\theta ^d})\). If \(p\ge 2\), then:

$$\begin{aligned} \left\| \left[ {\mathrm{{sgn}}}({{\mathcal {D}}}) - \frac{{{\mathcal {D}}}}{\sqrt{1+{{\mathcal {D}}}^2}}, 1{\otimes }x \right] \right\| _{{{\mathcal {L}}}_p} \lesssim _{p,d} \Vert x\Vert _p . \end{aligned}$$

Proof

Let \(1\le j \le d\), and for \(\xi \in {{\mathbb {R}}}^d\) define

$$\begin{aligned} h_j(\xi ) := \frac{\xi _j}{|\xi |} -\frac{\xi _j}{(1+|\xi |^2)^{\frac{1}{2}}}. \end{aligned}$$

Thus,

$$\begin{aligned} M_{h_j} = h_j(\mathrm{{i}}\nabla _\theta ) = \frac{{{\mathcal {D}}}_j}{\sqrt{-\Delta _\theta }}-\frac{{{\mathcal {D}}}_j}{(1-\Delta _\theta )^{\frac{1}{2}}}. \end{aligned}$$

Note that there is no ambiguity in writing \(\frac{{{\mathcal {D}}}_j}{\sqrt{-\Delta _{\theta }}}\), as this is simply \(M_g\) for \(g(\xi ) = \frac{\xi _j}{|\xi |}\), and so,

$$\begin{aligned} {\mathrm{{sgn}}}({{\mathcal {D}}}) - \frac{{{\mathcal {D}}}}{\sqrt{1+{{\mathcal {D}}}^2}} = \sum _{j=1}^d \gamma _j\otimes \Big (\frac{{{\mathcal {D}}}_j}{\sqrt{-\Delta _\theta }}-\frac{{{\mathcal {D}}}_j}{(1-\Delta _\theta )^{\frac{1}{2}}}\Big ) = \sum _{j=1}^d \gamma _j\otimes M_{h_j}. \end{aligned}$$

One can easily check that \(|h_j(\xi )| \le \min \{1,|\xi |^{-2}\}\), so that \(h_j \in L_p({{\mathbb {R}}}^d)\) since \(p> \frac{d}{2}\). Expanding out the commutator,

$$\begin{aligned} \left[ {\mathrm{{sgn}}}({{\mathcal {D}}}) - \frac{{{\mathcal {D}}}}{\sqrt{1+{\mathcal D}^2}}, 1{\otimes }x \right] = \left[ \sum _{j=1}^d {\gamma }_j {\otimes }M_{h_j}, 1{\otimes }x \right] =\sum _{j=1}^d {\gamma }_j {\otimes }[M_{h_j}, x]. \end{aligned}$$

Hence,

$$\begin{aligned} \begin{aligned} \left\| \left[ {\mathrm{{sgn}}}({{\mathcal {D}}}) - \frac{{\mathcal D}}{\sqrt{1+{{\mathcal {D}}}^2}}, 1{\otimes }x \right] \right\| _{{\mathcal L}_p}&\le d \max _{1\le j \le d} \left\| \left[ M_{h_j}, x \right] \right\| _{{{\mathcal {L}}}_p}\\&\le d \max _{1\le j \le d} \left( \left\| M_{h_j} x \right\| _{{{\mathcal {L}}}_p}+\left\| x M_{h_j}\right\| _{{\mathcal L}_p}\right) \\&= d \max _{1\le j \le d} \left( \Vert x ^* M_{h_j}\Vert _{{\mathcal L}_p}+ \Vert x M_{h_j}\Vert _{{{\mathcal {L}}}_p}\right) . \end{aligned} \end{aligned}$$

The desired conclusion then follows from Theorem 3.17.(i). \(\quad \square \)

The proof of the next lemma is modeled on that of [45, Lemma 4.2] and [41, Lemma 10], via the technique of double operator integrals (see [49] and [54] and references therein). For the convenience of the reader, let us give a brief introduction to double operator integrals, and sketch the proof of the next lemma.

Let H be a (complex) separable Hilbert space. Let \(D_0\) and \(D_1\) be self-adjoint (potentially unbounded) operators on H, and \(E^0\) and \(E^1\) be the associated spectral measures. For all \(x, y \in {{\mathcal {L}}}_2(H)\), the measure \((\lambda , \mu ) \mapsto {\mathrm {tr}}(x\, dE^0(\lambda ) \, y \, dE^1(\mu ) )\) is a countably additive complex valued measure on \({{\mathbb {R}}}^2\). We say that \(\phi \in L_\infty ({{\mathbb {R}}}^2)\) is \(E^0 \otimes E^1\) integrable if there exists an operator \(T_\phi ^{D_0, D_1} \in \mathcal {B} ({{\mathcal {L}}}_2(H))\) such that for all \(x, y \in {{\mathcal {L}}}_2(H)\),

$$\begin{aligned} {\mathrm {tr}} (x\,T_\phi ^{D_0, D_1} y ) =\int _{{{\mathbb {R}}}^2} \phi (\lambda , \mu ) {\mathrm {tr}}(x\, dE^0(\lambda ) \, y \, dE^1(\mu ) ). \end{aligned}$$

The operator \(T_\phi ^{D_0, D_1} \) is called the transformer. For \(A\in {{\mathcal {L}}}_2(H) \), we define

$$\begin{aligned} T_\phi ^{D_0, D_1}(A)=\int _{{{\mathbb {R}}}^2} \phi (\lambda , \mu ) dE^0(\lambda ) \, A \, dE^1(\mu ) . \end{aligned}$$
(4.1)

This is called a double operator integral.
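
A special case worth recording, since it is the one used repeatedly below: if \(\phi \) factorises as \(\phi (\lambda ,\mu ) = a(\lambda )b(\mu )\) with a and b bounded Borel functions, then the transformer reduces to ordinary functional calculus,

$$\begin{aligned} T_{\phi }^{D_0,D_1}(A) = \int _{{{\mathbb {R}}}^2} a(\lambda )b(\mu )\,dE^0(\lambda )\,A\,dE^1(\mu ) = a(D_0)\,A\,b(D_1),\quad A \in {{\mathcal {L}}}_2(H), \end{aligned}$$

and in particular such transformers are bounded on \({{\mathcal {L}}}_1\) and on \({{\mathcal {L}}}_\infty \) with norm at most \(\Vert a\Vert _\infty \Vert b\Vert _\infty \). This is how \(T_{\psi _3}^{{{\mathcal {D}}},{{\mathcal {D}}}}\) (and, term by term, \(T_{\psi _1}^{{{\mathcal {D}}},{{\mathcal {D}}}}\)) is handled in the proof below.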

Lemma 4.3

Let \(x\in {\mathcal {S}({\mathbb {R}_\theta ^d})}\). Then

$$\begin{aligned} \big \Vert \big [\frac{{{\mathcal {D}}}}{\sqrt{1+{{\mathcal {D}}}^2}}, 1{\otimes }x \big ]\big \Vert _{{{\mathcal {L}}}_{d,\infty } } \lesssim _d \Vert x\Vert _{\dot{W}_d^1}. \end{aligned}$$

Proof

Set \(g(t)= t (1+t^2)^{-\frac{1}{2}}\) for \(t\in {{\mathbb {R}}}\). Since all of the derivatives of x are bounded, we may apply [3, Theorem 4.1], which asserts that:

$$\begin{aligned} {[}g({{\mathcal {D}}}), 1{\otimes }x] = T_{g^{[1]}}^{{{\mathcal {D}}},{\mathcal D}} ([{{\mathcal {D}}}, 1{\otimes }x]), \end{aligned}$$
(4.2)

where \(g^{[1]}(\lambda ,\mu ) := \frac{g(\lambda )-g(\mu )}{\lambda -\mu } ={\psi }_1(\lambda ,\mu ){\psi }_2(\lambda ,\mu ) {\psi }_3(\lambda ,\mu )\), with

$$\begin{aligned} \psi _1= & {} 1+ \frac{1-\lambda \mu }{(1+ \lambda ^2)^{\frac{1}{2} } (1+\mu ^2) ^{\frac{1}{2} } },\;\; \psi _2 = \frac{(1+\lambda ^2)^{\frac{1}{4}} (1+\mu ^2)^{\frac{1}{4}} }{(1+ \lambda ^2)^{\frac{1}{2} } + (1+\mu ^2) ^{\frac{1}{2} } },\;\;\\ \psi _3= & {} \frac{1 }{(1+ \lambda ^2)^{\frac{1}{4} } (1+\mu ^2) ^{\frac{1}{4} } }. \end{aligned}$$

It follows that

$$\begin{aligned} T_{g^{[1]}}^{{{\mathcal {D}}},{{\mathcal {D}}}} = T_{{\psi }_1}^{{\mathcal D},{{\mathcal {D}}}} T_{{\psi }_2}^{{{\mathcal {D}}},{{\mathcal {D}}}} T_{{\psi }_3}^{{{\mathcal {D}}},{{\mathcal {D}}}} . \end{aligned}$$
(4.3)

[41, Lemma 8] ensures the boundedness of the transformer \(T_{{\psi }_2}^{{{\mathcal {D}}},{{\mathcal {D}}}}\) on both \({{\mathcal {L}}}_1\) and \({{\mathcal {L}}}_\infty \). For \(k=1,3\) the function \(\psi _k\) can be written as a linear combination of products of bounded functions of \(\lambda \) and of \(\mu \), and from this it follows that \(T_{{\psi }_k}^{{{\mathcal {D}}},{{\mathcal {D}}}}\) is also a bounded linear map on \({{\mathcal {L}}}_{1}\) and \({{\mathcal {L}}}_\infty \); see e.g. [54, Corollary 2] and [58, Corollary 2.4]. Then by real interpolation of \(({{\mathcal {L}}}_1, {{\mathcal {L}}}_\infty )\) (see [20]), the transformers \(T_{{\psi }_k}^{{{\mathcal {D}}},{\mathcal D}} \) with \(k=1,2,3\) are bounded linear transformations from \({{\mathcal {L}}}_{d,\infty }\) to \({{\mathcal {L}}}_{d,\infty }\). Using (4.2) and the product representation of \(g^{[1]}\) in (4.3), we have

$$\begin{aligned}\begin{aligned} \Vert [g({{\mathcal {D}}}), 1{\otimes }x]\Vert _{{{\mathcal {L}}}_{d,\infty }}&\le \, \Vert T_{{\psi }_1}^{{{\mathcal {D}}},{{\mathcal {D}}}} \Vert _{{{\mathcal {L}}}_{d,\infty }{\rightarrow }{{\mathcal {L}}}_{d,\infty }} \Vert T_{{\psi }_2}^{{{\mathcal {D}}},{{\mathcal {D}}}} \Vert _{{{\mathcal {L}}}_{d,\infty }{\rightarrow }{{\mathcal {L}}}_{d,\infty }}\\&\quad \times \Vert T_{{\psi }_3}^{{{\mathcal {D}}},{{\mathcal {D}}}} ([{{\mathcal {D}}}, 1{\otimes }x])\Vert _{ {{\mathcal {L}}}_{d,\infty }} \\&\lesssim _d \Vert T_{{\psi }_3}^{{{\mathcal {D}}},{{\mathcal {D}}}} ([{{\mathcal {D}}}, 1{\otimes }x])\Vert _{ {{\mathcal {L}}}_{d,\infty }}. \end{aligned}\end{aligned}$$

Since \({\psi }_3(\lambda ,\mu )=(1+\lambda ^2)^{-1/4} (1+\mu ^2)^{-1/4}\), by (4.1), we have

$$\begin{aligned} T_{{\psi }_3}^{{{\mathcal {D}}},{{\mathcal {D}}}} ([{{\mathcal {D}}}, 1{\otimes }x]) = (1+{{\mathcal {D}}}^2)^{-1/4} [{{\mathcal {D}}}, 1{\otimes }x](1+{\mathcal D}^2)^{-1/4}. \end{aligned}$$

Recalling that \({{\mathcal {D}}} = \sum _{j=1}^d {\gamma }_j{\otimes }{{\mathcal {D}}}_j\),

$$\begin{aligned}\begin{aligned} \Vert [g({{\mathcal {D}}}), 1{\otimes }x]\Vert _{{{\mathcal {L}}}_{d,\infty }}&\lesssim _d \Vert (1+{{\mathcal {D}}}^2)^{-1/4} [{{\mathcal {D}}}, 1{\otimes }x](1+{\mathcal D}^2)^{-1/4}\Vert _{{{\mathcal {L}}}_{d,\infty }} \\&\lesssim _d \sum _{j=1}^d \Vert (1+{{\mathcal {D}}}^2)^{-1/4} [{\gamma }_j{\otimes }{{\mathcal {D}}}_j, 1{\otimes }x](1+{{\mathcal {D}}}^2)^{-1/4}\Vert _{{\mathcal L}_{d,\infty }}. \end{aligned}\end{aligned}$$

But by definition, \([{\gamma }_j{\otimes }{{\mathcal {D}}}_j, 1{\otimes }x] = {\gamma }_j {\otimes }\partial _j x \), thus we obtain

$$\begin{aligned} \Vert (1+{{\mathcal {D}}}^2)^{-1/4} [{\gamma }_j{\otimes }{{\mathcal {D}}}_j, 1{\otimes }x](1+{{\mathcal {D}}}^2)^{-1/4}\Vert _{{{\mathcal {L}}}_{d,\infty }} = \Vert J_\theta ^{-1/2} \, \partial _j x \, J_\theta ^{-1/2}\Vert _{{\mathcal L}_{d,\infty }}. \end{aligned}$$

Here the first norm \(\Vert \cdot \Vert _{{{\mathcal {L}}}_{d,\infty }}\) is the norm of \({{\mathcal {L}}}_{d,\infty }({{\mathbb {C}}}^N\otimes L_2({\mathbb R}^d))\), and the second one is the norm of \({\mathcal L}_{d,\infty }(L_2({{\mathbb {R}}}^d))\), and \(J_\theta = (1-\Delta _\theta )^{1/2} \). We are reduced to estimating the quantity \(\Vert J_\theta ^{-1/2} \, \partial _j x \, J_\theta ^{-1/2}\Vert _{{\mathcal L}_{d,\infty }}\). By polar decomposition, for every j, there is a partial isometry \(V_j\) on \(L_2({{\mathbb {R}}}^d)\) such that

$$\begin{aligned} \partial _j x = V_j |\partial _j x | = V_j |\partial _j x |^{\frac{1}{2} } |\partial _j x |^{\frac{1}{2} }. \end{aligned}$$

Taking \(\beta =\frac{1}{2} \), and recalling that x is such that \(\Vert V_j |\partial _j x|^{\frac{1}{2} } \Vert _{2d}\le \Vert \, |\partial _j x|^{\frac{1}{2} } \Vert _{2d} = \Vert \partial _j x \Vert _d^{\frac{1}{2} }<\infty \), since \(2d > 2\), we may apply Lemma 4.1 to get

$$\begin{aligned} \Vert |\partial _j x|^{\frac{1}{2}} J_\theta ^{-1/2}\Vert _{{\mathcal L}_{2d,\infty }}= \Vert J_\theta ^{-1/2} |\partial _j x|^{\frac{1}{2}} \Vert _{{{\mathcal {L}}}_{2d,\infty }}\lesssim _d \Vert \, |\partial _j x|^{\frac{1}{2} } \Vert _{2d} \end{aligned}$$

and

$$\begin{aligned} \Vert J_\theta ^{-1/2} V_j |\partial _j x|^{\frac{1}{2}}\Vert _{{\mathcal L}_{2d,\infty }}\lesssim _d \Vert V_j |\partial _j x|^{\frac{1}{2} } \Vert _{2d}\lesssim _d \Vert \, |\partial _j x|^{\frac{1}{2} } \Vert _{2d}. \end{aligned}$$

Thus, by Hölder’s inequality for weak Schatten classes,

$$\begin{aligned} \Vert J_\theta ^{-1/2} \partial _j x \, J_\theta ^{-1/2}\Vert _{{\mathcal L}_{d,\infty }}\lesssim _d \Vert \, |\partial _j x|^{\frac{1}{2} } \Vert _{2d}^2\lesssim _d \Vert \partial _j x \Vert _d. \end{aligned}$$
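
Here Hölder’s inequality for weak Schatten classes is used in the form \(\Vert AB\Vert _{{{\mathcal {L}}}_{d,\infty }}\lesssim _d \Vert A\Vert _{{{\mathcal {L}}}_{2d,\infty }}\Vert B\Vert _{{{\mathcal {L}}}_{2d,\infty }}\), applied with \(A = J_\theta ^{-1/2}V_j|\partial _jx|^{\frac{1}{2}}\) and \(B = |\partial _jx|^{\frac{1}{2}}J_\theta ^{-1/2}\); a minimal justification via singular values:

$$\begin{aligned} \mu (2n,AB) \le \mu (n,A)\,\mu (n,B) \le \Vert A\Vert _{{{\mathcal {L}}}_{2d,\infty }}\Vert B\Vert _{{{\mathcal {L}}}_{2d,\infty }}\,(n+1)^{-\frac{1}{d}},\quad n\ge 0. \end{aligned}$$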

Combining the preceding estimates, we arrive at

$$\begin{aligned} \Vert [g({{\mathcal {D}}}), 1{\otimes }x]\Vert _{{{\mathcal {L}}}_{d,\infty }} \lesssim _d \sum _{j=1}^d\Vert \partial _j x \Vert _d\lesssim _d \Vert x\Vert _{\dot{W}_d^1}, \end{aligned}$$

which completes the proof. \(\quad \square \)

Now we are able to complete the proof of Theorem 1.1.

Proof of Theorem 1.1

Lemmas 4.2, 4.3 and the inequality \(\Vert T\Vert _{d,\infty } \le \Vert T\Vert _d\) yield:

$$\begin{aligned} \Vert [{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes x]\Vert _{{{\mathcal {L}}}_{d,\infty }} \lesssim _d \Vert x\Vert _{{\dot{W}}^1_d({\mathbb {R}_\theta ^d})} + \Vert x\Vert _{L_d({\mathbb {R}_\theta ^d})} \end{aligned}$$
(4.4)

for all \(x \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\), and with constants independent of \(\theta \). We are going to get rid of the dependence on \(\Vert x\Vert _d \) by a dilation argument as follows. Let \({\lambda }>0 \) and \(\Psi _{\lambda }: L_\infty ({\mathbb {R}_\theta ^d}) \rightarrow L_\infty ({{\mathbb {R}}}^d_{{\lambda }^2\theta })\) be the \(*\)-isomorphism defined in (3.3). By (3.4), for \(x\in L_\infty ({\mathbb {R}_\theta ^d})\), we have \(\Psi _{\lambda }(x) = \sigma _{\lambda }x \sigma _{\lambda }^*\). Since the operator \( \frac{{{\mathcal {D}}}_j}{\sqrt{-\Delta _\theta }}\), viewed as a Fourier multiplier on \({{\mathbb {R}}}^d\), commutes with \(\sigma _{\lambda }\) (and \(\sigma _{\lambda }^*\)), we have

$$\begin{aligned} (1\otimes \sigma _{\lambda })\,[{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes x]\,(1\otimes \sigma _{\lambda }^*) = [{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes \Psi _{\lambda }(x)]. \end{aligned}$$

Whence, since \(1\otimes \sigma _{\lambda }\) is unitary, \(\Vert [{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes x]\Vert _{{{\mathcal {L}}}_{d,\infty }} = \Vert [{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes \Psi _{\lambda }(x)]\Vert _{{{\mathcal {L}}}_{d,\infty }}\). Applying (4.4) to \(\Psi _{\lambda }(x) \in L_\infty ({\mathbb R}^d_{{\lambda }^2\theta })\), we obtain

$$\begin{aligned} \Vert [{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes x]\Vert _{{{\mathcal {L}}}_{d,\infty }} \lesssim _d \Vert \Psi _{\lambda }(x)\Vert _{{\dot{W}}^1_d({{\mathbb {R}}}^d_{{\lambda }^2\theta })} + \Vert \Psi _{\lambda }(x)\Vert _{L_d({{\mathbb {R}}}^d_{{\lambda }^2\theta })}. \end{aligned}$$

By virtue of Proposition 3.7, we return back to \(x\in L_\infty ({\mathbb {R}_\theta ^d})\):

$$\begin{aligned} \Vert [{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes x]\Vert _{{{\mathcal {L}}}_{d,\infty }} \lesssim _d \Vert x\Vert _{{\dot{W}}^1_d({\mathbb {R}_\theta ^d})} + {\lambda }\Vert x\Vert _{L_d({\mathbb {R}_\theta ^d})}. \end{aligned}$$

Letting \({\lambda }\rightarrow 0\) completes the proof of Theorem 1.1 for \(x\in {\mathcal {S}({\mathbb {R}_\theta ^d})}\).

The general case \(x\in {\dot{W}}_d^1 ({\mathbb {R}_\theta ^d}) \cap {\bigcup _{d\le p <\infty } L_p({\mathbb {R}_\theta ^d})}\) is achieved by approximation. By Proposition 3.15, select a sequence \(\{x_n\}\) in \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) such that \(x_n {\rightarrow }x\) in \({\dot{W}}_d^1\) seminorm. Corollary 3.11 implies that we can choose this sequence such that we also have that \(x_n{\rightarrow }x\) in the \(L_p({\mathbb {R}_\theta ^d})\)-sense. For these Schwartz elements \(x_n\), we have \(\Vert [{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes (x_n-x_m)]\Vert _{{{\mathcal {L}}}_{d,\infty }} \lesssim _d \Vert x_n-x_m\Vert _{{\dot{W}}^1_d}\), so \(\{[{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes x_n]\}_{n\ge 0}\) is Cauchy in \({{\mathcal {L}}}_{d, \infty }\), and thus converges to some limit (say, L) in the \({{\mathcal {L}}}_{d, \infty } \) quasinorm.

Let \(\eta \in L_2({{\mathbb {R}}}^d)\) be compactly supported, and let \(K\subset {{\mathbb {R}}}^d\) be a compact set containing the support of \(\eta \). Then \((x_n-x)\eta = (x_n-x)M_{\chi _K}\eta \). We have:

$$\begin{aligned} \Vert (x_n-x)\eta \Vert _2 = \Vert (x_n-x)M_{\chi _K}\eta \Vert _2 \le \Vert (x_n-x)M_{\chi _K}\Vert _{\infty }\Vert \eta \Vert _2 \le \Vert (x_n-x)M_{\chi _K}\Vert _{{{\mathcal {L}}}_p}\Vert \eta \Vert _2. \end{aligned}$$

Theorem 3.17 implies that \(\Vert (x_n-x)M_{\chi _K}\Vert _{{{\mathcal {L}}}_p} \lesssim _{p,K} \Vert x_n-x\Vert _p\), and since we have selected the sequence to converge in the \(L_p({\mathbb {R}_\theta ^d})\) sense:

$$\begin{aligned} \lim _{n\rightarrow \infty } \Vert (x_n-x)\eta \Vert _2 = 0. \end{aligned}$$
(4.5)

Similarly, if \(\xi \in {{\mathbb {C}}}^N\otimes L_2({{\mathbb {R}}}^d)\) is compactly supported, then \({\mathrm{{sgn}}}({{\mathcal {D}}})\xi \) is still compactly supported and we have:

$$\begin{aligned} \lim _{n\rightarrow \infty } \Vert (1\otimes (x_n-x))\,{\mathrm{{sgn}}}({{\mathcal {D}}})\xi \Vert _2 = 0. \end{aligned}$$
(4.6)

Combining (4.5) and (4.6) implies that \([{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes x_n]\xi \rightarrow [{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes x]\xi \) for all compactly supported \(\xi \in {{\mathbb {C}}}^N\otimes L_2({{\mathbb {R}}}^d)\). Since we know that \([{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes x_n]\rightarrow L\) in the \({{\mathcal {L}}}_{d,\infty }\) topology (and hence in the operator norm), it follows that \(L = [{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes x]\), and therefore \([{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes x] \in {{\mathcal {L}}}_{d,\infty }\).

To complete the proof, we note that for these Schwartz elements \(x_n\),

$$\begin{aligned} \Vert [{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes x_n]\Vert _{{{\mathcal {L}}}_{d,\infty }} \lesssim _d \Vert x_n\Vert _{{\dot{W}}^1_d({\mathbb {R}_\theta ^d})}. \end{aligned}$$

Upon taking the limit \(n {\rightarrow }\infty \) we arrive at:

$$\begin{aligned} \Vert [{\mathrm{{sgn}}}({{\mathcal {D}}}),1\otimes x]\Vert _{{{\mathcal {L}}}_{d,\infty }} \lesssim _d \Vert x\Vert _{{\dot{W}}^1_d({\mathbb {R}_\theta ^d})}. \end{aligned}$$

\(\square \)

5 Commutator Estimates for \({\mathbb {R}_\theta ^d}\)

This section is devoted to the proof of Theorem 1.6, which is an essential ingredient for our proof of Theorem 1.2, i.e., the computation of \({\varphi }\big (|[{\mathrm{{sgn}}}({{\mathcal {D}}}),1{\otimes }x]|^d\big )\) when \(x\in L_\infty ({\mathbb {R}_\theta ^d}) \cap \dot{W} _d^1 ({\mathbb {R}_\theta ^d})\) and \({\varphi }\) is a continuous normalised trace on \({\mathcal L}_{1,\infty }\). One powerful tool used in [45] for quantum tori is the theory of noncommutative pseudodifferential operators. The proof in [45] proceeds by viewing the quantised differential as a pseudodifferential operator, then determining its (principal) symbol and order, and finally appealing to Connes’ trace formula as obtained in [46].

Despite the development of pseudodifferential operators on quantum Euclidean spaces in [24, 38], we have found it instructive to attempt a direct proof of Theorem 1.6. This has two main advantages: first, it makes the present text self-contained; second, and more importantly, the methods presented below are based only on operator theory and can be generalised to settings where no pseudodifferential calculus is available.

For potential future utility we will prove Theorem 1.6 for the full range of parameters \((\alpha ,\beta )\), although ultimately we will only need certain specific choices of \(\alpha \) and \(\beta \).

Let \({\mathcal {A}({\mathbb {R}_\theta ^d})}\subseteq {\mathcal {S}({\mathbb {R}_\theta ^d})}\) be a factorisable subalgebra as in Proposition 2.5.

The main target of this section is to give the proof of Theorem 1.6, which is technical and somewhat tedious, and so is divided into several steps presented in the following subsections.

5.1 Commutator identities

The following integral formula will be useful: let \(\zeta < 1\) and \(\eta > 1-\zeta \). Then for all \(t> 0\) we have

$$\begin{aligned} \int _0^\infty \frac{1}{\lambda ^{\zeta }(t+\lambda )^{\eta }}\, d\lambda = t^{1-\zeta -\eta }\, {\mathrm B}(\eta +\zeta -1,1-\zeta ) \end{aligned}$$
(5.1)

where \({\mathrm B}(\cdot ,\cdot )\) is the Beta function.
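
For the reader's convenience, (5.1) follows from the substitution \(\lambda = tu\) together with the standard Beta integral \(\int _0^\infty u^{a-1}(1+u)^{-a-b}\,du = {\mathrm B}(a,b)\):

$$\begin{aligned} \int _0^\infty \frac{d\lambda }{\lambda ^{\zeta }(t+\lambda )^{\eta }} = t^{1-\zeta -\eta }\int _0^\infty \frac{u^{-\zeta }}{(1+u)^{\eta }}\,du = t^{1-\zeta -\eta }\,{\mathrm B}(1-\zeta ,\eta +\zeta -1), \end{aligned}$$

with \(a = 1-\zeta \) and \(b = \eta +\zeta -1\), both positive precisely under the stated conditions \(\zeta <1\) and \(\eta >1-\zeta \); the symmetry \({\mathrm B}(a,b)={\mathrm B}(b,a)\) gives the form stated in (5.1).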

For an operator \(T\in {{\mathcal {B}}}(L_2({{\mathbb {R}}}^d))\), let \(L_\theta (T) := J_\theta ^{-1}[J_\theta ^2,T]\) whenever it is defined, and define \(\delta _\theta (T) := [J_\theta ,T]\) similarly. Inductively, for \(k\in {{\mathbb {N}}}\) we define \(L_\theta ^k (T)= L_\theta (L_\theta ^{k-1}(T))\) and \(\delta ^k _\theta (T) =\delta _\theta (\delta ^{k-1}_\theta (T))\). We also make the convention that \(L_\theta ^0(T)=T\) and \(\delta ^0_\theta (T) =T\). Note that \(L_{\theta }(T)J_{\theta }^{-1} = L_{\theta }(TJ_{\theta }^{-1})\).
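
The last identity is the following one-line computation, valid on any core where all the terms are defined:

$$\begin{aligned} L_\theta (T)J_\theta ^{-1} = J_\theta ^{-1}\big (J_\theta ^2T-TJ_\theta ^2\big )J_\theta ^{-1} = J_\theta TJ_\theta ^{-1}-J_\theta ^{-1}TJ_\theta = J_\theta ^{-1}\big (J_\theta ^2\,TJ_\theta ^{-1}-TJ_\theta ^{-1}\,J_\theta ^2\big ) = L_\theta (TJ_\theta ^{-1}). \end{aligned}$$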

The following theorem states that to prove that \(\delta _{\theta }(T)\) is in a certain ideal, it suffices to show that \(L^k_{\theta }(T)\) is in that ideal for all \(k\ge 0\). The essential idea behind the proof goes back to [15, Appendix B]. Here some extra care is needed for the quasi-Banach cases \(0 <p\le 1\). We make use of the theory of integration of functions valued in quasi-Banach spaces developed by Turpin and Waelbroeck [36, 70, 71]. We refer the reader to [40] for results in the precise form we need.

Theorem 5.1

Let T be an operator on \(L_2({{\mathbb {R}}}^d)\) which maps the Schwartz class \({{\mathcal {S}}}({{\mathbb {R}}}^d)\) into \({\mathcal S}({{\mathbb {R}}}^d)\). Assume that \(L_\theta ^k(T)\) is defined for all \(k\ge 0\).

  1. (i)

    If \(L_\theta ^k(T)\) has bounded extension for all \(k\ge 0\), then \(\delta _\theta ^k(T)\) has bounded extension for all \(k\ge 0\).

  2. (ii)

    Similarly if \(p > 0\) and \(L_\theta ^k(T) \in {{\mathcal {L}}}_{p,\infty }\) for all \(k\ge 0\), then \(\delta _\theta ^k(T)\in {{\mathcal {L}}}_{p,\infty }\) for all \(k\ge 0\).

Proof

Taking \(\eta = 1\) and \(\zeta = 1/2\) in (5.1) yields

$$\begin{aligned} J_\theta ^{-1} = \frac{1}{\pi } \int _0^\infty \frac{1}{\lambda ^{1/2}(\lambda + J_\theta ^2)}\,d\lambda . \end{aligned}$$
(5.2)

Here since \((\lambda + J_\theta ^2) ^{-1}\) has bounded extension for all \(\lambda \ge 0\), the integrand is a norm-continuous function of \({\lambda }\) and the integral converges in operator norm; see e.g. [6, pp 701]. Since by assumption T has bounded extension and maps \({{\mathcal {S}}}({{\mathbb {R}}}^d)\) to \({{\mathcal {S}}}({\mathbb R}^d)\), for any \(\xi \in {{\mathcal {S}}}({{\mathbb {R}}}^d)\subset {\mathrm {dom}}(J^2_\theta )\), multiplying by \(J_\theta ^2\) and taking the commutator with T gives us

$$\begin{aligned} {[}J_\theta ,T] \xi = \frac{1}{\pi }\int _0^\infty \lambda ^{-1/2}\left[ \frac{J_\theta ^2}{\lambda +J_\theta ^2},T \right] \, \xi \, d\lambda , \end{aligned}$$

where the integrand on the right converges in the \(L_2({\mathbb R}^d)\)-valued Bochner sense. We manipulate the integrand as follows

$$\begin{aligned} \delta _\theta (T)\xi =[J_\theta ,T] \, \xi&= \frac{1}{\pi }\int _0^\infty \lambda ^{1/2}(\lambda +J_\theta ^2)^{-1}[J_\theta ^2,T](\lambda +J_\theta ^2)^{-1} \, \xi \,d\lambda \\&= \frac{1}{\pi }\int _0^\infty \lambda ^{1/2}\frac{J_\theta }{\lambda +J_\theta ^2}L_\theta (T)(\lambda +J_\theta ^2)^{-1} \, \xi \,d\lambda \\&= \frac{1}{\pi }\int _0^\infty \lambda ^{1/2}\frac{J_\theta }{(\lambda +J_\theta ^2)^2}L_\theta (T) \, \xi \,d\lambda \\&\quad + \frac{1}{\pi }\int _0^\infty \lambda ^{1/2}\frac{J_\theta }{\lambda +J_\theta ^2}[L_\theta (T),(\lambda +J_\theta ^2)^{-1}] \, \xi \,d\lambda \\&= \frac{1}{\pi }\int _0^\infty \lambda ^{1/2}\frac{J_\theta }{(\lambda +J_\theta ^2)^2}\,d\lambda \cdot L_\theta (T) \, \xi \\&\quad + \frac{1}{\pi }\int _0^\infty \lambda ^{1/2}\frac{J_\theta ^2}{(\lambda +J_\theta ^2)^2}L_\theta ^2(T)\frac{1}{\lambda +J_\theta ^2}\, \xi \,d\lambda .\\&= \frac{1}{2}L_\theta (T) \, \xi +\frac{1}{\pi }\int _0^\infty \lambda ^{1/2}\frac{J_\theta ^2}{(\lambda +J_\theta ^2)^2}L_\theta ^2(T)\frac{1}{\lambda +J_\theta ^2} \, \xi \,d\lambda . \end{aligned}$$

In the last equality above, we have used the fact that

$$\begin{aligned} \int _0^\infty \lambda ^{1/2}\frac{J_\theta }{(\lambda +J_\theta ^2)^2}\,d\lambda = \frac{\pi }{2}\, 1_{{{\mathcal {B}}}(L_2({{\mathbb {R}}}^d))}, \end{aligned}$$

which is deduced from (5.1) by taking \(\zeta = -1/2\) and \(\eta =2\). Also note that all the integrands above converge in \(L_2({{\mathbb {R}}}^d)\).
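For the reader's convenience, here is the scalar computation behind this constant (the factor \(1/\pi \) in (5.2) is obtained in the same way, using \({\mathrm B}(1/2,1/2)=\pi \)): for \(s>0\), the substitution \(\lambda = s^2u\) gives

$$\begin{aligned} \int _0^\infty \lambda ^{1/2}\frac{s}{(\lambda +s^2)^2}\,d\lambda = \int _0^\infty \frac{u^{1/2}}{(1+u)^2}\,du = {\mathrm B}(3/2,1/2) = \frac{\pi }{2}, \end{aligned}$$

and applying this with \(s\) in the spectrum of \(J_\theta \) yields the operator identity above.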

Now if \(L_\theta ^2(T)\) has bounded extension, we have

$$\begin{aligned} \left\| \frac{J_\theta ^2}{(\lambda +J_\theta ^2)^2}L_\theta ^2(T)\frac{1}{\lambda +J_\theta ^2}\right\| _{\infty } \le \Vert L_\theta ^2(T)\Vert _{\infty } \frac{1}{4\lambda (\lambda +1)}. \end{aligned}$$

Hence,

$$\begin{aligned} \Vert \delta _\theta (T)\Vert _{\infty } \le \frac{1}{2}\Vert L_\theta (T)\Vert _{\infty } + C \Vert L_\theta ^2(T)\Vert _\infty , \end{aligned}$$

where \(C>0\) is a certain constant. So if \(L_\theta (T)\) and \(L_\theta ^2(T)\) have bounded extension, then \(\delta _\theta (T)\) has bounded extension. Inductively, if \(L_\theta ^k(T)\) has bounded extension for all \(k\ge 0\), then \(\delta _\theta ^k(T)\) has bounded extension for all \(k\ge 0\). This completes the proof of part (i).

We turn to the proof of part (ii). If \(p > 1\), then \({\mathcal L}_{p,\infty }\) can be given an equivalent norm making it a Banach ideal. Then we may run the same argument as in part (i), but with the operator norm replaced by a norm for \({{\mathcal {L}}}_{p,\infty }\). On the other hand, if \(p \le 1\), then \({{\mathcal {L}}}_{p,\infty }\) cannot be given a Banach norm and therefore a more delicate argument is needed. Taking yet more commutators, for all \(\xi \in {\mathcal S}({{\mathbb {R}}}^d)\) we have:

$$\begin{aligned} \delta _\theta (T)\xi&= \frac{1}{2}L_\theta (T)\xi + \frac{1}{\pi }\int _0^\infty \lambda ^{1/2} \frac{J_\theta ^2}{(\lambda +J_\theta ^2)^3} L_\theta ^2(T)\xi \,d\lambda \\&\quad + \frac{1}{\pi }\int _0^\infty \lambda ^{1/2}\frac{J_\theta ^2}{(\lambda +J_\theta ^2)^2}[L_\theta ^2(T),(\lambda +J_\theta ^2)^{-1}]\xi \,d\lambda \\&= \frac{1}{2}L_\theta (T)\xi +\frac{{\mathrm B}(3/2,3/2)}{\pi }J_\theta ^{-1}L_\theta ^2(T)\xi \\&\quad +\frac{1}{\pi }\int _0^\infty \lambda ^{1/2}\frac{J_\theta ^3}{(\lambda +J_\theta ^2)^3}L_\theta ^3(T)\frac{1}{\lambda +J_\theta ^2}\xi \,d\lambda . \end{aligned}$$

Iterating this process ultimately leads to the expansion, for each \(n\ge 1\),

$$\begin{aligned} \delta _\theta (T) = \sum _{j=1}^{n-1} \frac{1}{\pi }{\mathrm B}(j-1/2,3/2)J_\theta ^{1-j}L_\theta ^j(T) + \frac{1}{\pi }\int _0^\infty \lambda ^{1/2}\frac{J_\theta ^n}{(\lambda +J_\theta ^2)^n}L_\theta ^n(T)\frac{1}{\lambda +J_\theta ^2}\,d\lambda . \end{aligned}$$
(5.3)

The coefficients above are obtained by a choice of \(\eta = j+1\) and \(\zeta = -1/2\) in (5.1) yielding

$$\begin{aligned} \int _0^\infty \lambda ^{1/2}\frac{J_\theta ^j}{(\lambda +J_\theta ^2)^{j+1}}\,d\lambda = {\mathrm B}(j-1/2,3/2)\, J_\theta ^{1-j}, \end{aligned}$$

which are understood in the same sense as (5.2).

To complete the proof of (ii), we will show that for any \(p > 0\) we can choose n large enough that the integral remainder term in (5.3) belongs to \({{\mathcal {L}}}_{p,\infty }\). To this end we use the non-convex integration theory of [40]. Let \(n > 1\), and define:

$$\begin{aligned} {\mathcal {I}}_n(\lambda ) = \frac{J_\theta ^n}{(\lambda +J_\theta ^2)^n},\quad {\mathcal {J}} (\lambda ) = \frac{1}{\lambda +J_\theta ^2}. \end{aligned}$$

Let us show that we can choose n sufficiently large such that if \(X \in {{\mathcal {L}}}_{p,\infty }\), then \(\int _0^\infty \lambda ^{1/2}{\mathcal {I}}_n(\lambda )X {\mathcal {J}}(\lambda )\,d\lambda \) is in \({{\mathcal {L}}}_{p,\infty }\). Specifically, we use [40, Corollary 3.7] combined with [40, Proposition 3.8], which together imply that it suffices to have

$$\begin{aligned} \sum _{j\in {{\mathbb {N}}}_0} ((j+1)^{1/2}\Vert \mathcal I_n\Vert _{C^{2n}([j,j+1],{{\mathcal {B}}}(L_2({{\mathbb {R}}}^d)))}\Vert \mathcal J\Vert _{C^{2n}([j,j+1],{{\mathcal {B}}}(L_2({\mathbb R}^d)))})^{\frac{p}{p+1}} < \infty \end{aligned}$$
(5.4)

and \(n > \frac{1}{2p}\).

Now let us check (5.4). For \(0\le k\le 2n\), we have

$$\begin{aligned} \left\| \frac{\partial ^k}{\partial \lambda ^k}\mathcal I_n(\lambda )\right\|&= C_{k,n}\left\| \frac{J_\theta ^n}{(\lambda +J_\theta ^2)^{n+k}}\right\| \\&\le C_{k,n}\left\| \frac{J_\theta ^n}{(\lambda +J_\theta ^2)^n}\right\| \\&= C_{k,n}\left\| \frac{1}{(\lambda J_\theta ^{-1}+J_\theta )^n}\right\| . \end{aligned}$$

Since

$$\begin{aligned} \lambda J_\theta ^{-1}+J_\theta \ge \max \{1,2\lambda ^{1/2}\} \,1_{{{\mathcal {B}}}(L_2({\mathbb R}^d))} , \end{aligned}$$

it follows that

$$\begin{aligned} \left\| \frac{\partial ^k}{\partial \lambda ^k}\mathcal I_n(\lambda )\right\| \le C_{k,n}\min \{1,\lambda ^{-n/2}\}. \end{aligned}$$

For \({\mathcal {J}}({\lambda })\) the estimates are easier

$$\begin{aligned} \left\| \frac{\partial ^k}{\partial \lambda ^k}{\mathcal {J}}({\lambda })\right\| \le C_k\frac{1}{\lambda +1}. \end{aligned}$$
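Concretely, absorbing constants depending on \(n\) and \(p\), on the interval \([j,j+1]\) the two \(C^{2n}\)-norms appearing in (5.4) are dominated by \((j+1)^{-n/2}\) and \((j+1)^{-1}\) respectively, so the \(j\)th summand in (5.4) is of order at most

$$\begin{aligned} \big ((j+1)^{1/2}\,(j+1)^{-n/2}\,(j+1)^{-1}\big )^{\frac{p}{p+1}} = (j+1)^{-(\frac{n}{2}+\frac{1}{2})\frac{p}{p+1}}, \end{aligned}$$

which is summable as soon as \((\frac{n}{2}+\frac{1}{2})\frac{p}{p+1}>1\), that is, for \(n> 1+\frac{2}{p}\); such an \(n\) automatically satisfies \(n>\frac{1}{2p}\).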

So if we choose n large enough, (5.4) is satisfied. Thus, if \(L_\theta ^n(T) \in {{\mathcal {L}}}_{p,\infty }\) then

$$\begin{aligned} \frac{1}{\pi }\int _0^\infty \lambda ^{1/2}\frac{J_\theta ^n}{(\lambda +J_\theta ^2)^n}L_\theta ^n(T)\frac{1}{\lambda +J_\theta ^2}\,d\lambda \in {{\mathcal {L}}}_{p,\infty }. \end{aligned}$$

So, if all of \(L_\theta (T),L_\theta ^2(T),\ldots ,L_\theta ^n(T)\) are in \({{\mathcal {L}}}_{p,\infty }\) then (5.3) implies that \(\delta _\theta (T) \in {{\mathcal {L}}}_{p,\infty }\). Thus by induction, if \(L_\theta ^k(T) \in {{\mathcal {L}}}_{p,\infty }\) for every \(k\ge 0\), then \(\delta ^k_\theta (T) \in {{\mathcal {L}}}_{p,\infty }\) for every \(k\ge 0\). \(\quad \square \)

5.2 The case \({\alpha }=1\)

Now we commence the proof of Theorem 1.6 by first proving the case \({\alpha }=1\), which is the easiest case since we can directly apply Theorem 5.1 and the Cwikel-type estimates of [39].

Lemma 5.2

Let \(x \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\). The operators \(L_\theta ^k( x)\) have bounded extension for all \(k \ge 1\). Moreover, we have

$$\begin{aligned} L_\theta ^k( x)J_\theta ^{-d} \in {{\mathcal {L}}}_{1,\infty } \end{aligned}$$

for all \(k\ge 0\).

Proof

We have

$$\begin{aligned} L_\theta ( x)&= J_\theta ^{-1}\sum _{j=1}^d [{{\mathcal {D}}}_j^2, x]\\&= J_\theta ^{-1}\sum _{j=1}^d \big (2{{\mathcal {D}}}_j[{{\mathcal {D}}}_j, x]-[{{\mathcal {D}}}_j,[{{\mathcal {D}}}_j, x]]\big )\\&= \sum _{j=1}^d \big (2J_\theta ^{-1}{{\mathcal {D}}}_j\, \partial _j x -J_\theta ^{-1} \partial ^2_jx\big ) . \end{aligned}$$
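In the second equality we have used the elementary commutator identity

$$\begin{aligned} {[}{{\mathcal {D}}}_j^2, x] = {{\mathcal {D}}}_j[{{\mathcal {D}}}_j,x] + [{{\mathcal {D}}}_j,x]{{\mathcal {D}}}_j = 2{{\mathcal {D}}}_j[{{\mathcal {D}}}_j,x] - [{{\mathcal {D}}}_j,[{{\mathcal {D}}}_j,x]]. \end{aligned}$$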

Since \(J_\theta ^{-1}{{\mathcal {D}}}_j\) has bounded extension, it follows that \(L_\theta (x)\) also has bounded extension.

Since \(L_\theta \) commutes with \(J_\theta \) and each \({{\mathcal {D}}}_j\), for \(k\ge 2\) we have

$$\begin{aligned} L_\theta ^k( x) = \sum _{j=1}^d 2J_\theta ^{-1}{{\mathcal {D}}}_j L_\theta ^{k-1}( \partial _j x)-\sum _{j=1}^d J_\theta ^{-1}L_\theta ^{k-1}( \partial ^2_j x). \end{aligned}$$

So by induction on k, all \(L_\theta ^k(x)\) have bounded extension. Moreover, since by convention \(L_\theta ^0(T) = T\), for all \(k\ge 1\) we get

$$\begin{aligned} L_\theta ^k(x)J_\theta ^{-d} = \sum _{j=1}^d 2J_\theta ^{-1}{\mathcal D}_jL_\theta ^{k-1}( \partial _jx)J_\theta ^{-d} - \sum _{j=1}^d J_\theta ^{-1}L_\theta ^{k-1}( \partial ^2_jx)J_\theta ^{-d}. \end{aligned}$$

Hence Theorem 3.17(iii) ensures \(L_\theta ^k(x)J_\theta ^{-d} \in {{\mathcal {L}}}_{1,\infty }\). \(\quad \square \)

Combining Lemma 5.2 with Theorem 5.1(i) yields the following immediate corollary.

Corollary 5.3

For all \(x \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\) and \(k\ge 0\), the operator \(\delta _\theta ^k(x)\) has bounded extension.

The main technical underpinning of Theorem 1.6 is the following Lemma:

Lemma 5.4

Let \(x \in {\mathcal {A}({\mathbb {R}_\theta ^d})}\). Then for all \(\beta > 0\) and all \(k\ge 0\) we have

$$\begin{aligned} \delta _\theta ^k( x)J_\theta ^{-\beta } \in {\mathcal L}_{d/\beta ,\infty }. \end{aligned}$$

Proof

Let \(T = x J_\theta ^{-d}\). Then from Lemma 5.2 and the fact that \(J_{\theta }^{-1}\) commutes with \(L_{\theta }\), we have that \(L_\theta ^k(T) \in {{\mathcal {L}}}_{1,\infty }\) for all \(k\ge 0\). Thus, it follows from Theorem 5.1(ii) that \(\delta _\theta ^k(T) = \delta _\theta ^k( x)J_\theta ^{-d}\) is in \({{\mathcal {L}}}_{1,\infty }\), and this proves the result for \(\beta = d\).

Now if \(\beta < d\), we can apply (2.1) with \(r = d/\beta \), \(A = \delta _{\theta }^k(x)\) and \(B = J_{\theta }^{-\beta }\) to obtain:

$$\begin{aligned} \delta _\theta ^k( x)J_\theta ^{-\beta } \in {\mathcal L}_{d/\beta ,\infty } \end{aligned}$$

thus the result is proved for \(0 < \beta \le d\).

We will now complete the proof by an inductive argument, specifically by showing that if the result holds for \(\beta \) then it holds for \(\beta +1\).

Suppose that the result is true for some \(\beta > 0\). Then we write

$$\begin{aligned} \delta _\theta ^k( x)J_\theta ^{-\beta -1}&= [\delta _\theta ^k( x),J_\theta ^{-1}]J_\theta ^{-\beta } + J_\theta ^{-1}\delta _\theta ^k( x)J_\theta ^{-\beta }\\&= J_\theta ^{-1}[J_\theta ,\delta _\theta ^k( x)]J_\theta ^{-\beta -1}+J_\theta ^{-1}\delta _\theta ^k( x)J_\theta ^{-\beta }\\&= J_\theta ^{-1}\delta _\theta ^{k+1}( x)J_\theta ^{-\beta -1}+J_\theta ^{-1}\delta _\theta ^k( x)J_\theta ^{-\beta }. \end{aligned}$$
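Here the second equality uses the identity

$$\begin{aligned} {[}\delta _\theta ^k( x),J_\theta ^{-1}] = J_\theta ^{-1}\big (J_\theta \delta _\theta ^k( x)-\delta _\theta ^k( x)J_\theta \big )J_\theta ^{-1} = J_\theta ^{-1}[J_\theta ,\delta _\theta ^k( x)]J_\theta ^{-1} = J_\theta ^{-1}\delta _\theta ^{k+1}( x)J_\theta ^{-1}. \end{aligned}$$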

By the factorisation property of \({\mathcal {A}({\mathbb {R}_\theta ^d})}\) (see Proposition 2.5), we can write x as a finite linear combination of products, \(x = \sum _{j=1}^n y_jz_j\), where each \(y_j, z_j \in {\mathcal {A}({\mathbb {R}_\theta ^d})}\). Using the Leibniz rule on the jth summand, we deduce

$$\begin{aligned} \delta _\theta ^k(y_jz_j)J_\theta ^{-\beta -1}&=\sum _{l=0}^{k+1} \left( {\begin{array}{c}k+1\\ l\end{array}}\right) J_\theta ^{-1}\delta _\theta ^l( y_j)\delta _\theta ^{k+1-l}( z_j )J_\theta ^{-\beta }J_\theta ^{ -1}\\&\quad + \sum _{l=0}^k \left( {\begin{array}{c}k\\ l\end{array}}\right) J_\theta ^{-1}\delta _\theta ^l( y_j)\delta _\theta ^{k-l}( z_j )J_\theta ^{-\beta }. \end{aligned}$$

Hence by the Hölder inequality and the fact that \(J_\theta ^{ -1}\) is bounded,

$$\begin{aligned} \delta _\theta ^k(x)J_\theta ^{-\beta -1} \in {\mathcal L}_{d/\beta ,\infty }\cdot {{\mathcal {L}}}_{d,\infty }\subseteq {\mathcal L}_{d/(\beta +1),\infty }. \end{aligned}$$

Thus the result holds for \(\beta +1\), and this completes the proof. \(\quad \square \)

Observing that \(L_\theta (xy) = L_\theta (x) y + x L_\theta (y) - J_\theta ^{-1} \delta _\theta (x) L_\theta (y) \), the above proof works for \(L_\theta ^k (x) J_\theta ^{-\beta } \in {{\mathcal {L}}}_{d/\beta , \infty }\) as well. Moreover, using Proposition 2.5 and the Hölder inequality, we easily obtain the following “two-sided” variant of Lemma 5.4:

Corollary 5.5

Let \(x \in {\mathcal {A}({\mathbb {R}_\theta ^d})}\) and \(k\ge 0\). Then for all \(\gamma ,\beta > 0\) we have:

$$\begin{aligned} J_\theta ^{-\gamma } L_\theta ^k( x)J_\theta ^{-\beta } \in {\mathcal L}_{\frac{d}{\beta +\gamma },\infty } ,\quad J_\theta ^{-\gamma }\delta _\theta ^k( x)J_\theta ^{-\beta } \in {\mathcal L}_{\frac{d}{\beta +\gamma },\infty }. \end{aligned}$$
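For instance, for the second inclusion one can argue as follows (a sketch): writing \(x = \sum _i y_iz_i\) as in Proposition 2.5, the Leibniz rule gives

$$\begin{aligned} J_\theta ^{-\gamma }\delta _\theta ^k( x)J_\theta ^{-\beta } = \sum _i\sum _{l=0}^k \left( {\begin{array}{c}k\\ l\end{array}}\right) \big (J_\theta ^{-\gamma }\delta _\theta ^{l}( y_i)\big )\big (\delta _\theta ^{k-l}( z_i)J_\theta ^{-\beta }\big ), \end{aligned}$$

where \(\delta _\theta ^{k-l}( z_i)J_\theta ^{-\beta } \in {{\mathcal {L}}}_{d/\beta ,\infty }\) by Lemma 5.4 and \(J_\theta ^{-\gamma }\delta _\theta ^{l}( y_i) \in {{\mathcal {L}}}_{d/\gamma ,\infty }\) by taking adjoints in Lemma 5.4 applied to \(y_i^*\); the Hölder inequality then gives the claim.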

5.3 The case \(0\le \alpha \le \beta +1\)

For \(\zeta \in (0,1)\), taking \(\eta = 1\) in (5.1) yields

$$\begin{aligned} s^{-\zeta } = \frac{1}{{\mathrm B}(\zeta ,1-\zeta )}\int _0^\infty \frac{1}{\lambda ^{\zeta }(\lambda +s)}\,d\lambda . \end{aligned}$$

If \(\alpha = 1-\zeta \), we get the useful identity for \(\xi \in {{\mathcal {S}}}({{\mathbb {R}}}^d)\)

$$\begin{aligned} J_\theta ^{\alpha }\,\xi = \frac{1}{{\mathrm B} (1-\alpha ,\alpha )}\int _0^\infty \lambda ^{\alpha -1}\frac{J_\theta }{\lambda +J_\theta }\,\xi \,d\lambda , \end{aligned}$$
(5.5)

where the integrand on the right converges in \(L_2({{\mathbb {R}}}^d)\), as in the proof of Theorem 5.1.
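For a scalar \(s>0\) and \(\alpha \in (0,1)\), (5.5) is simply the classical identity obtained by substituting \(\lambda = su\):

$$\begin{aligned} \frac{1}{{\mathrm B}(1-\alpha ,\alpha )}\int _0^\infty \lambda ^{\alpha -1}\frac{s}{\lambda +s}\,d\lambda = \frac{s^{\alpha }}{{\mathrm B}(1-\alpha ,\alpha )}\int _0^\infty \frac{u^{\alpha -1}}{1+u}\,du = s^{\alpha }, \end{aligned}$$

since \(\int _0^\infty u^{\alpha -1}(1+u)^{-1}\,du = {\mathrm B}(\alpha ,1-\alpha ) = {\mathrm B}(1-\alpha ,\alpha )\); applying this with \(s\) in the spectrum of \(J_\theta \) gives (5.5).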

The following is the \(\alpha \in [0,1)\) and \(\beta \ge 0\) case of Theorem 1.6:

Theorem 5.6

Let \(x \in {\mathcal {A}({\mathbb {R}_\theta ^d})}\), and let \(\alpha \in [0,1)\) and \(\beta \ge 0\). Then for all \(k\ge 0\),

$$\begin{aligned} {[}J_\theta ^{\alpha },\delta _\theta ^k(x)]J_\theta ^{-\beta } \in {{\mathcal {L}}}_{\frac{d}{\beta -\alpha +1},\infty }. \end{aligned}$$

Proof

It follows from (5.5) that for \(\xi \in {{\mathcal {S}}}({{\mathbb {R}}}^d)\),

$$\begin{aligned} {[}J_\theta ^{\alpha },\delta _\theta ^k( x)]\,\xi&= \frac{1}{{\mathrm B}(1-\alpha ,\alpha )}\int _0^\infty \lambda ^{\alpha -1}\left[ \frac{J_\theta }{\lambda +J_\theta },\delta _\theta ^k( x)\right] \,\xi \,d\lambda \\&= \frac{1}{{\mathrm B}(1-\alpha ,\alpha )}\int _0^\infty \lambda ^{\alpha }(\lambda +J_\theta )^{-1}[J_\theta ,\delta _\theta ^k( x)](\lambda +J_\theta )^{-1}\,\xi \,d\lambda \\&= \frac{1}{{\mathrm B}(1-\alpha ,\alpha )}\int _0^\infty \lambda ^{\alpha }(\lambda +J_\theta )^{-1}\delta _\theta ^{k+1}( x)(\lambda +J_\theta )^{-1}\,\xi \,d\lambda \\&= \frac{1}{{\mathrm B}(1-\alpha ,\alpha )}\int _0^\infty \lambda ^{\alpha }(\lambda +J_\theta )^{-2}\delta _\theta ^{k+1}( x)\,\xi \,d\lambda \\&\quad - \frac{1}{{\mathrm B}(1-\alpha ,\alpha )}\int _0^\infty \lambda ^{\alpha }(\lambda +J_\theta )^{-1} [(\lambda +J_\theta )^{-1},\delta _\theta ^{k+1}( x)]\,\xi \,d\lambda \\&= \frac{1}{{\mathrm B}(1-\alpha ,\alpha )}\int _0^\infty \lambda ^{\alpha }(\lambda +J_\theta )^{-2}\delta _\theta ^{k+1}( x)\xi \,d\lambda \\&\quad + \frac{1}{{\mathrm B}(1-\alpha ,\alpha )}\int _0^\infty \lambda ^{\alpha }(\lambda +J_\theta )^{-2}\delta _\theta ^{k+2}( x)(\lambda +J_\theta )^{-1}\,\xi \,d\lambda . \end{aligned}$$

Since \(J_\theta ^{-\beta }\) maps \({{\mathcal {S}}}({{\mathbb {R}}}^d)\) into \({{\mathcal {S}}}({{\mathbb {R}}}^d)\), using the identity \(\int _0^\infty {\lambda }^{\alpha }\frac{t^{1-{\alpha }}}{({\lambda }+t )^2} d{\lambda }= {\mathrm B}(1-{\alpha }, 1+{\alpha })\) which is easily deduced from (5.1) again, we have

$$\begin{aligned} {[}J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\beta } \,\xi&= {\alpha }\, J_\theta ^{\alpha -1}\delta _\theta ^{k+1}( x)J_\theta ^{-\beta }\,\xi \\&\quad + \frac{1}{{\mathrm B}(1-\alpha ,\alpha )}\int _0^\infty \lambda ^{\alpha }(\lambda +J_\theta )^{-2}\delta _\theta ^{k+2}( x)J_\theta ^{-\beta }(\lambda +J_\theta )^{-1}\,\xi \,d\lambda \\&= {\alpha }\, J_\theta ^{\alpha -1}\delta _\theta ^{k+1}( x)J_\theta ^{-\beta } \,\xi \\&\quad + \frac{1}{{\mathrm B}(1-\alpha ,\alpha )}\int _0^\infty \lambda ^{\alpha }\frac{J_\theta ^{1-\alpha }}{(\lambda +J_\theta )^2}J_\theta ^{\alpha -1}\delta _\theta ^{k+2}( x)J_\theta ^{-\beta }\frac{1}{\lambda +J_\theta }\,\xi \,d\lambda . \end{aligned}$$

The operator \(J_\theta ^{\alpha -1}\delta _\theta ^{k+1}( x)J_\theta ^{-\beta } \) is in \({\mathcal L}_{\frac{d}{\beta -\alpha +1},\infty }\) due to Corollary 5.5. The second summand is treated as follows.

Assume initially that \(\frac{d}{\beta -\alpha +1} > 1\), or equivalently \(\alpha< \beta +1< d+{\alpha }\). Under this condition, the ideal \({{\mathcal {L}}}_{\frac{d}{\beta -\alpha +1},\infty }\) can be given a norm and we can estimate the second summand using the triangle inequality. We have

$$\begin{aligned} \left\| \frac{J_\theta ^{1-\alpha }}{(\lambda +J_\theta )^2}\right\| _{\infty } \le \sup _{t \ge 1} \frac{t^{1-\alpha }}{(t+\lambda )^2}= {\left\{ \begin{array}{ll} \frac{1}{(1+\lambda )^2},\quad \lambda \le \frac{\alpha +1}{1-\alpha }\\ \frac{C_{\alpha }}{\lambda ^{ \alpha +1}}\quad \lambda > \frac{\alpha +1}{1-\alpha }\,, \end{array}\right. } \end{aligned}$$
(5.6)

for a certain constant \(C_{\alpha }\). Thus,

$$\begin{aligned} \left\| \lambda ^{\alpha }\frac{J_\theta ^{1-\alpha }}{(J_\theta +\lambda )^2}\right\| _\infty \left\| \frac{1}{ \lambda +J_\theta }\right\| _{\infty } \le {\left\{ \begin{array}{ll} \frac{1}{(1+\lambda )^3},\quad \lambda \le \frac{\alpha +1}{1-\alpha }\\ \frac{C_{\alpha }}{\lambda (1+\lambda )}\quad \lambda > \frac{\alpha +1}{1-\alpha }\,, \end{array}\right. } \end{aligned}$$
(5.7)

which is integrable. If \(\alpha < \beta +1\), we get from the triangle inequality that

$$\begin{aligned} {[}J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\beta }\in {{\mathcal {L}}}_{\frac{d}{\beta -\alpha +1},\infty }. \end{aligned}$$

Thus the result is proved if \(\beta < d+\alpha -1\). In particular, since \(d\ge 2\) we have proved the result for \(0 < \beta \le 1\).

To complete the proof, we need an induction argument as in the proof of Lemma 5.4. Note first that by the assumed factorisation property of \({\mathcal {A}({\mathbb {R}_\theta ^d})}\), for any \(x \in {\mathcal {A}({\mathbb {R}_\theta ^d})}\) we can write x as a linear combination of products, \(x = \sum _{j=1}^n y_jz_j\) where each \(y_j,z_j \in {\mathcal {A}({\mathbb {R}_\theta ^d})}\). Suppose that \(\beta > 0\) is such that \([J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\beta } \in {{\mathcal {L}}}_{\frac{d}{\beta -\alpha +1},\infty }\) for all \(k\ge 0\) and all \(x \in {\mathcal {A}({\mathbb {R}_\theta ^d})}\). Then applying the Leibniz rule to the jth summand, we have:

$$\begin{aligned} J_\theta ^{-1}[J_\theta ^{\alpha },\delta _\theta ^k( y_jz_j)]J_\theta ^{-\beta }&= \sum _{l=0}^k \left( {\begin{array}{c}k\\ l\end{array}}\right) J_\theta ^{-1}[J_\theta ^{\alpha },\delta _\theta ^{k-l}(y_j)\delta _\theta ^l(z_j)]J_\theta ^{-\beta }\\&= \sum _{l=0}^k \left( {\begin{array}{c}k\\ l\end{array}}\right) J_\theta ^{-1}\delta _\theta ^{k-l}(y_j)[J_\theta ^{\alpha },\delta _\theta ^l(z_j)]J_\theta ^{-\beta }\\&\quad + \sum _{l=0}^k \left( {\begin{array}{c}k\\ l\end{array}}\right) J_\theta ^{-1}[J_\theta ^{\alpha },\delta _\theta ^{l}(y_j)]\delta _\theta ^{k-l}(z_j)J_\theta ^{-\beta }. \end{aligned}$$

Then applying the Hölder inequality, we have

$$\begin{aligned} J_\theta ^{-1}[J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\beta } \in {\mathcal L}_{\frac{d}{1+\beta -\alpha +1},\infty }. \end{aligned}$$
(5.8)

Now we complete the proof by showing that if the required assertion holds for \(\beta \), then it holds for \(\beta +1\). Indeed,

$$\begin{aligned} {[}J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\beta -1}&= [J_\theta ^{-1},[J_\theta ^{\alpha },\delta _\theta ^k( x)]]J_\theta ^{-\beta } + J_\theta ^{-1}[J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\beta }\\&= -J_\theta ^{-1}[J_\theta ^{\alpha },\delta _\theta ^{k+1}( x)]J_\theta ^{-\beta }J_\theta ^{-1} +J_\theta ^{-1}[J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\beta }. \end{aligned}$$

From (5.8), we conclude that

$$\begin{aligned} {[}J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\beta -1} \in {{\mathcal {L}}}_{\frac{d}{\beta +1-\alpha +1},\infty }. \end{aligned}$$

Hence the assertion holds for all \(\beta > 0\). \(\quad \square \)

The cases where \(\alpha \ge 1\) are handled by induction on \({\alpha }\):

Corollary 5.7

Let \(x \in {\mathcal {A}({\mathbb {R}_\theta ^d})}\). Let \(\alpha \ge 0\), \(\beta \ge 0\) satisfy \(\alpha < \beta +1\). Then for all \(k\ge 0\) we have

$$\begin{aligned} {[}J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\beta } \in {{\mathcal {L}}}_{\frac{d}{\beta -\alpha +1},\infty }. \end{aligned}$$

Proof

The case \(\alpha \le 1\) is provided by Theorem 5.6 (together with Lemma 5.4 for the endpoint \(\alpha = 1\)). We proceed by induction. Fix \(\alpha \ge 0\). Suppose that the claim is true for all \(k\ge 0\) and all \(\beta > \alpha -1\). Now let \(\beta > \alpha \). Then, using the Leibniz rule and Lemma 5.4,

$$\begin{aligned} {[}J_\theta ^{\alpha +1},\delta _\theta ^k(x)]J_\theta ^{-\beta }&= J_\theta ^{\alpha }[J_\theta ,\delta _\theta ^k(x)]J_\theta ^{-\beta }+[J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{1-\beta }\\&=[J_\theta ^{\alpha },[J_\theta ,\delta _\theta ^k( x)]]J_\theta ^{-\beta }+[J_\theta ,\delta _\theta ^k( x)]J_\theta ^{\alpha -\beta }+[J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{1-\beta }\\&= [J_\theta ^{\alpha },\delta _\theta ^{k+1}( x)]J_\theta ^{-\beta }+\delta _\theta ^{k+1}( x)J_\theta ^{\alpha -\beta }+[J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{1-\beta }\\&\in {{\mathcal {L}}}_{\frac{d}{\beta -\alpha +1},\infty } + {\mathcal L}_{\frac{d}{\beta -\alpha },\infty } + {\mathcal L}_{\frac{d}{\beta -1-\alpha +1},\infty }\\&= {{\mathcal {L}}}_{\frac{d}{\beta -\alpha },\infty }, \end{aligned}$$

thus proving the claim for \(\alpha +1\). \(\quad \square \)

Using the triangle inequality for the operator norm in place of the \({{\mathcal {L}}}_{\frac{d}{\beta -\alpha +1},\infty }\) norm, the first part of the proof of Theorem 5.6 can easily be adapted to the case \(0\le {\alpha }= \beta +1 \).

Theorem 5.8

Let \(x \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\), and \(\alpha \ge 0\). Then for all \(k\ge 0\) the operator:

$$\begin{aligned} {[}J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\alpha +1} \end{aligned}$$

has bounded extension.

Proof

Beginning with the integral formula from the proof of Theorem 5.6, we have

$$\begin{aligned} J_\theta ^{1-\alpha }[J_\theta ^{\alpha },\delta _\theta ^k( x)]&= {\alpha }\delta _\theta ^{k+1}( x) + \frac{1}{{\mathrm B}(1-\alpha ,\alpha )}\int _0^\infty \lambda ^{\alpha }\frac{J_\theta ^{1-\alpha }}{(J_\theta +\lambda )^2}\delta _\theta ^{k+2}( x)\frac{1}{ \lambda +J_\theta }\,d\lambda . \end{aligned}$$

Thus, since \(\delta _\theta ^{k+1}(x)\) and \(\delta _\theta ^{k+2}(x)\) are bounded (Corollary 5.3), we can use the triangle inequality for the operator norm and the estimates (5.6) and (5.7) from the proof of Theorem 5.6 to conclude that

$$\begin{aligned} J_\theta ^{1-\alpha }[J_\theta ^{\alpha },\delta _\theta ^k( x)] \end{aligned}$$

has bounded extension. Taking the adjoint yields the result. \(\quad \square \)

5.4 Proof of Theorem 1.6

So far, we have established that Theorem 1.6 holds in the following cases

$$\begin{aligned} 0 \le \alpha \le \beta +1. \end{aligned}$$

Indeed, Corollary 5.7 and Theorem 5.8 imply an even stronger statement: for all \(k\ge 0\), we have that

$$\begin{aligned} {\left\{ \begin{array}{ll} {[}J_\theta ^{\alpha },\delta _\theta ^{k}( x)]J_\theta ^{-\beta } \in {{\mathcal {L}}}_{\frac{d}{\beta -\alpha +1},\infty } ,\quad \text{ if }\quad 0\le {\alpha }<\beta +1,\\ {[}J_\theta ^{\alpha },\delta _\theta ^{k}( x)]J_\theta ^{-\beta }\quad \text{ has } \text{ bounded } \text{ extension, } \text{ if }\quad 0\le {\alpha }=\beta +1\,. \end{array}\right. } \end{aligned}$$
(5.9)

We can conclude the proof by showing that if (5.9) holds for \((\alpha ,\beta )\) and all \(k\ge 0\) then it holds for \((\alpha -1,\beta -1)\) and all \(k\ge 0\). This will complete the proof, since for any \(\alpha \le \beta +1\) we can find n large enough such that \(0\le \alpha +n \le \beta +n+1\) and hence (5.9) holds for \((\alpha +n,\beta +n)\) and all \(k\ge 0\).

To this end, suppose that (5.9) holds for some \((\alpha ,\beta )\) where \( \alpha \le \beta +1\) and for all \(k\ge 0\). From the Leibniz rule, we derive

$$\begin{aligned} {[}J_\theta ^{\alpha -1},\delta ^k_\theta ( x)]J_\theta ^{1-\beta }&= [J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\beta } + J_\theta ^{\alpha }[J_\theta ^{-1},\delta _\theta ^k( x)]J_\theta ^{1-\beta }\\&= [J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\beta } - J_\theta ^{\alpha -1}\delta _\theta ^{k+1}( x)J_\theta ^{-\beta }\\&= [J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\beta } - J_\theta ^{-1}[J_\theta ^{\alpha },\delta _\theta ^{k+1}( x)]J_\theta ^{-\beta }\\&\quad -J_\theta ^{-1}\delta _\theta ^{k+1}( x)J_\theta ^{\alpha -\beta }\\&= [J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\beta } - J_\theta ^{-1}[J_\theta ^{\alpha },\delta _\theta ^{k+1}( x)]J_\theta ^{-\beta }\\&\quad -[J_\theta ^{-1},\delta _\theta ^{k+1}( x)]J_\theta ^{\alpha -\beta }-\delta _\theta ^{k+1}( x)J_\theta ^{\alpha -\beta -1}\\&= [J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\beta } - J_\theta ^{-1}[J_\theta ^{\alpha },\delta _\theta ^{k+1}( x)]J_\theta ^{-\beta }\\&\quad +J_\theta ^{-1}\delta _\theta ^{k+2}( x)J_\theta ^{\alpha -\beta -1}-\delta _\theta ^{k+1}( x)J_\theta ^{\alpha -\beta -1}. \end{aligned}$$

Since \(\alpha \le \beta +1\), it follows from (5.9) and Lemma 5.4 that \([J_\theta ^{\alpha -1},\delta ^k_\theta ( x)]J_\theta ^{1-\beta }\) is in \({{\mathcal {L}}}_{\frac{d}{\beta -\alpha +1},\infty }\) if \(\alpha < \beta +1\) or \({{\mathcal {B}}}(L_{2}({{\mathbb {R}}}^d))\) if \(\alpha = \beta +1\).

Remark 5.9

We close this section with some useful remarks.

  1. (1)

    It is worth noting that, continuing the expansion in the proof of Theorem 5.6, one obtains the following: for all \(n\ge 1\) and \(\alpha \in [0,1]\),

    $$\begin{aligned} {[}J_\theta ^{\alpha },\delta _\theta ^k( x)]&= \sum _{j=1}^n \frac{{\mathrm B}(j-\alpha ,1+\alpha )}{{\mathrm B}(1-\alpha ,\alpha )}J_\theta ^{\alpha -j}\delta _\theta ^{k+j}( x)\\&\quad +\frac{1}{{\mathrm B}(1-\alpha ,\alpha )}\int _0^\infty \lambda ^{\alpha }(\lambda +J_\theta )^{-(n+1)}\delta _\theta ^{k+n+1}( x)(\lambda +J_\theta )^{-1}\,d\lambda . \end{aligned}$$

    Here the coefficients come from the choice of \(\zeta = -\alpha \) and \(\eta = j+1\) in (5.1).

  2. (2)

    Moreover one can easily deduce the “two-sided” result that:

    $$\begin{aligned} J_\theta ^{-\gamma }[J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\beta } \in {\mathcal L}_{\frac{d}{\beta +\gamma -\alpha +1},\infty } \end{aligned}$$
    (5.10)

    whenever \(\alpha < \beta +\gamma +1\), and that the above operator has bounded extension whenever \(\alpha = \beta +\gamma +1\). An easy way to see how (5.10) follows from Theorem 1.6 is to use the identity:

    $$\begin{aligned} J_\theta ^{-\gamma }[J_\theta ^{\alpha },\delta _\theta ^k( x)]J_\theta ^{-\beta } = [J_\theta ^{\alpha -\gamma },\delta _\theta ^k( x)]J_\theta ^{-\beta }-[J_\theta ^{-\gamma },\delta _\theta ^k( x)]J_\theta ^{\alpha -\beta }. \end{aligned}$$
  3. (3)

    The generalisation to \(\alpha ,\beta \in {{\mathbb {C}}} \) with \(\mathfrak {R}(\alpha )\le \mathfrak {R}(\beta )+1\) is immediate.

6 Proofs of Theorems 1.2 and 1.5

As in Sect. 5, we consider the dense subalgebra \({\mathcal {A}({\mathbb {R}_\theta ^d})}\subset {\mathcal {S}({\mathbb {R}_\theta ^d})}\) constructed in Proposition 2.5.

Using Theorem 1.6 and the commutator estimates developed in Sect. 5, we are able to establish the trace formula in Theorem 1.2, and finally prove Theorem 1.5. This will be done by showing that for all \(x \in {\mathcal {A}({\mathbb {R}_\theta ^d})}\)

for a certain bounded operator A on \({{\mathbb {C}}}^N\otimes L_2({{\mathbb {R}}}^d)\) (depending on x), and then applying the trace formula given by [46, Theorem 6.15] to \( |A|^d (1+{{\mathcal {D}}}^2) ^{-d/2} \).

6.1 Operator difference estimates

We begin with the construction of the above mentioned operator A. For \(1\le j, k \le d\), denote \(g_{j,k} (t)= \frac{t_j t_k }{|t|^2}\) on \({{\mathbb {R}}}^d\). Let \(x\in {\mathcal {S}({\mathbb {R}_\theta ^d})}\). Define the operator \(A_j\) on \(L_2({{\mathbb {R}}}^d)\) as

$$\begin{aligned} A_j\xi:= & {} (\partial _j x)\xi - \sum _{k=1}^d (M_{g_{j,k}} \partial _k x)\xi \nonumber \\= & {} (\partial _j x)\xi - \sum _{k=1}^d g_{j,k}( {{\mathcal {D}}}_1,\ldots , {{\mathcal {D}}}_d )(\partial _k x)\xi ,\quad \xi \in L_2({{\mathbb {R}}}^d) \end{aligned}$$
(6.1)

and define the operator A on \({{\mathbb {C}}} ^N {\otimes }L_2({\mathbb R}^d)\)

$$\begin{aligned} A:= \sum _{j =1} ^d \gamma _j {\otimes }A_j, \end{aligned}$$

where N and \(\gamma _j\) are the same as in Definition 3.4.

The main result in this subsection is the following theorem:

Theorem 6.1

Let \(x \in {\mathcal {A}({\mathbb {R}_\theta ^d})}\). Then we have:

Recall that \({{\mathcal {D}}} = \sum _{j=1}^d \gamma _j {\otimes }{\mathcal D}_j\), and write

By Lemma 4.2, \([{\mathrm{{sgn}}}({{\mathcal {D}}}) - g({{\mathcal {D}}}) , 1{\otimes }x ]\) belongs to \({{\mathcal {L}}}_{p}\) when \( p > \frac{d}{2} \), where \(g(t) = t(1+t^2)^{-1/2}\). Define the auxiliary operator \({\widetilde{A}}_j\) for \(1\le j\le d\) on \(L_2({{\mathbb {R}}}^d)\) as

$$\begin{aligned} {\widetilde{A}}_j := \partial _j x - \sum _{k=1}^d {{\mathcal {D}}}_j {{\mathcal {D}}}_k J_\theta ^{-2} \partial _k x \,. \end{aligned}$$
(6.2)

The following proposition connects the commutator \( [{{\mathcal {D}}}_j J_\theta ^{-1}, x] \) with \({\widetilde{A}}_j\).

Proposition 6.2

Let \(1\le j \le d\), and \(x \in {\mathcal {A}({\mathbb {R}_\theta ^d})}\). Then,

$$\begin{aligned} {[}{{\mathcal {D}}}_j J_\theta ^{-1}, x]- {\widetilde{A}}_jJ_\theta ^{-1} \in {{\mathcal {L}}}_{\frac{d}{2},\infty }. \end{aligned}$$

Proof

From the Leibniz rule, we have

$$\begin{aligned} {[}{{\mathcal {D}}}_j J_\theta ^{-1},x] = \partial _j x J_\theta ^{-1}+ {{\mathcal {D}}}_j[J_\theta ^{-1}, x] = \partial _j x J_\theta ^{-1} - {{\mathcal {D}}}_j J_\theta ^{-1}\delta _\theta ( x)J_\theta ^{-1}. \end{aligned}$$
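In the second equality we used that

$$\begin{aligned} {[}J_\theta ^{-1},x] = J_\theta ^{-1}\big (xJ_\theta -J_\theta x\big )J_\theta ^{-1} = -J_\theta ^{-1}[J_\theta ,x]J_\theta ^{-1} = -J_\theta ^{-1}\delta _\theta ( x)J_\theta ^{-1}. \end{aligned}$$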

Using the integral formula (5.3) from Theorem 5.1, we have for all \(n\ge 1\),

$$\begin{aligned} \delta _\theta (x)J_\theta ^{-1}&= \sum _{l=1}^{n-1} \frac{1}{\pi }{\mathrm B}(l-1/2,3/2)J_\theta ^{1-l}L_\theta ^l(x)J_\theta ^{-1} \\&\quad + \frac{1}{\pi }\int _0^\infty \lambda ^{1/2}\frac{J_\theta ^n}{(\lambda +J_\theta ^2)^n}L_\theta ^n(x)J_\theta ^{-2}\frac{J_\theta }{\lambda +J_\theta ^2}\,d\lambda . \end{aligned}$$

From Corollary 5.5, we have that \(J_\theta ^{1-l}L_\theta ^l( x)J_\theta ^{-1} \in {{\mathcal {L}}}_{d/l,\infty }\) for every \(l\ge 1\). By an argument similar to the proof of Theorem 5.1, we have that

$$\begin{aligned} \int _0^\infty \lambda ^{1/2} \frac{J_\theta ^n}{(\lambda +J_\theta ^2)^n}L_\theta ^n(x)J_\theta ^{-2}\frac{J_\theta }{\lambda +J_\theta ^2}\,d\lambda \in {{\mathcal {L}}}_{\frac{d}{2},\infty } \end{aligned}$$

provided n is sufficiently large. So (recalling that \({\mathrm B}(\frac{1}{2},\frac{3}{2}) = \frac{\pi }{2}\)) we obtain

$$\begin{aligned} {[}{{\mathcal {D}}}_j J_\theta ^{-1},x] \in \partial _j x J_\theta ^{-1}-\frac{1}{2}{\mathcal D}_jJ_\theta ^{-1}L_\theta (x)J_\theta ^{-1} + {\mathcal L}_{\frac{d}{2},\infty }. \end{aligned}$$
(6.3)

By the definition of \(L_\theta \), we have:

$$\begin{aligned} {{\mathcal {D}}}_j J_\theta ^{-1} L_\theta (x)J_\theta ^{-1}&= {{\mathcal {D}}}_j J_\theta ^{-2}[J_\theta ^2, x]J_\theta ^{-1}\\&= {{\mathcal {D}}}_j J_\theta ^{-2}\sum _{k=1}^d [{{\mathcal {D}}}_k ^2, x]J_\theta ^{-1}\\&= \sum _{k=1}^d {{\mathcal {D}}}_j J_\theta ^{-2}({{\mathcal {D}}}_k \partial _kx + \partial _k x \,{{\mathcal {D}}}_k) J_\theta ^{-1}\\&= \sum _{k=1}^d {{\mathcal {D}}}_j J_\theta ^{-2}(2{{\mathcal {D}}}_k \partial _kx - \partial _{k}^2 x ) J_\theta ^{-1}. \end{aligned}$$

From Corollary 5.5, we have \( {{\mathcal {D}}}_j J_\theta ^{-2} \partial _{k}^2 x J_\theta ^{-1} \in {\mathcal L}_{d/2,\infty }\), and therefore

$$\begin{aligned} {{\mathcal {D}}}_j J_\theta ^{-1}L_\theta (x)J _\theta ^{-1} \in 2 \sum _{k=1}^d {{\mathcal {D}}}_j {{\mathcal {D}}}_k J_\theta ^{-2} \partial _k x J_\theta ^{-1} + {{\mathcal {L}}}_{d/2,\infty }. \end{aligned}$$
(6.4)

Combining (6.3) and (6.4) yields:

$$\begin{aligned} {[}{{\mathcal {D}}}_jJ_{\theta }^{-1},x] \in \partial _j x J_{\theta }^{-1}-\sum _{k=1}^d {{\mathcal {D}}}_j{{\mathcal {D}}}_k J_{\theta }^{-2}\partial _k x J_{\theta }^{-1} + {{\mathcal {L}}}_{d/2,\infty } = \widetilde{A}_jJ_{\theta }^{-1}+{{\mathcal {L}}}_{d/2,\infty } \end{aligned}$$

as was claimed. \(\quad \square \)

Let us also compare \( {\widetilde{A}}_jJ_\theta ^{-1}\) with \( A_jJ_\theta ^{-1}\).

Proposition 6.3

Let \(1\le j \le d\), and \(x \in {\mathcal {A}({\mathbb {R}_\theta ^d})}\). Then,

$$\begin{aligned} A_jJ_\theta ^{-1}- {\widetilde{A}}_jJ_\theta ^{-1} \in {\mathcal L}_{\frac{d}{2},\infty }. \end{aligned}$$

Proof

By definition, \( A_j - {\widetilde{A}}_j = \sum _{k=1}^d (M_{{\widetilde{g}}_{j,k}} - M_{ g_{j,k}}) \partial _k x \), where \({\widetilde{g}}_{j,k} (t)= \frac{t_j t_k }{1+|t|^2}\). So we are reduced to estimating \(M_{ g_{j,k}} \partial _k x J_\theta ^{-1} - M_{{\widetilde{g}}_{j,k}} \partial _k x J_\theta ^{-1}\) for every k. Using the factorisation of x as a linear combination of products yz, \(y,z \in {\mathcal {A}({\mathbb {R}_\theta ^d})}\) (Proposition 2.5) and the Leibniz rule, we have

$$\begin{aligned}&M_{ g_{j,k}} \partial _k (yz) J_\theta ^{-1} - M_{\widetilde{g}_{j,k}} \partial _k (yz) J_\theta ^{-1}\\&\quad = (M_{ g_{j,k}} - M_{{\widetilde{g}}_{j,k}}) \partial _k y\, z J_\theta ^{-1}+ (M_{ g_{j,k}} - M_{{\widetilde{g}}_{j,k}}) y\, \partial _k z J_\theta ^{-1}. \end{aligned}$$

From Lemma 5.4, both \( z J_\theta ^{-1}\) and \( \partial _k z J_\theta ^{-1}\) belong to \({{\mathcal {L}}}_{d, \infty }\). On the other hand, one can easily check that \( g_{j,k}-{\widetilde{g}}_{j,k} \in L_p({{\mathbb {R}}}^d)\) for \(p> \frac{d}{2}\), which yields by Theorem 3.17(i) that

$$\begin{aligned} (M_{ g_{j,k}} - M_{{\widetilde{g}}_{j,k}}) y \in {\mathcal L}_p\subset {{\mathcal {L}}}_{d,\infty }, \quad (M_{ g_{j,k}} - M_{{\widetilde{g}}_{j,k}}) \partial _k y \in {{\mathcal {L}}}_p\subset {{\mathcal {L}}}_{d,\infty } . \end{aligned}$$
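Here the membership \(g_{j,k}-{\widetilde{g}}_{j,k} \in L_p({{\mathbb {R}}}^d)\) for \(p>\frac{d}{2}\) follows from the pointwise bound

$$\begin{aligned} g_{j,k}(t)-{\widetilde{g}}_{j,k}(t) = \frac{t_jt_k}{|t|^2(1+|t|^2)},\qquad |g_{j,k}(t)-{\widetilde{g}}_{j,k}(t)| \le \min \{1,|t|^{-2}\}, \end{aligned}$$

and the right-hand side belongs to \(L_p({{\mathbb {R}}}^d)\) precisely when \(2p>d\).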

Thus it follows from the Hölder inequality that

$$\begin{aligned} M_{ g_{j,k}} \partial _k x J_\theta ^{-1} - M_{{\widetilde{g}}_{j,k}} \partial _k x J_\theta ^{-1} \in {{\mathcal {L}}}_{d/2, \infty } , \end{aligned}$$

whence the proposition. \(\quad \square \)

For \(g(t) = t(1+t^2) ^{-1/2}\) on \({{\mathbb {R}}}\), Propositions 6.2 and 6.3 imply that

$$\begin{aligned} \mathrm{{i}}[g({{\mathcal {D}}}),1\otimes x] - A(1+{{\mathcal {D}}}^2)^{-1/2} \in {{\mathcal {L}}}_{\frac{d}{2},\infty }. \end{aligned}$$
(6.5)

This – combined with Lemma 4.2 – yields:

for all \(x \in {\mathcal {A}({\mathbb {R}_\theta ^d})}\).

Lemma 6.4

Let \(x\in {\mathcal {A}({\mathbb {R}_\theta ^d})}\). We have

Proof

We already know from Lemma 4.2 that , which together with (6.5) ensures that

Taking the adjoint:

Recall that by Theorem 1.1 (as has been proved in Sect. 4), so it follows that \(A(1+{\mathcal D}^2)^{-1/2} \in {{\mathcal {L}}}_{d,\infty }\). Using the Hölder inequality, we have

If \(d=2\), then we are done.

Now assume that \(d > 2\). We appeal to a recent result of E. Ricard [57, Theorem 3.4], which allows us to take the power 1/2 of each term in the preceding inclusion to get

Next we raise to the power d:

\(\square \)

By definition, \(|A|^2 = A^*A\), so we can write \(|A|^2\) as a polynomial in elements of \({\mathcal {A}({\mathbb {R}_\theta ^d})}\) and functions of \(D_j\), \(j=1,\ldots ,d\). It then follows from Theorem 1.6 that

$$\begin{aligned} {[}|A|^2,(1+ {{\mathcal {D}}}^2)^{\alpha /2}](1+{\mathcal D}^2)^{-\beta /2} \in {{\mathcal {L}}}_{\frac{d}{\beta -\alpha +1},\infty } \end{aligned}$$
(6.6)

for all \(\beta > 0\) and \(\alpha < 1\). Therefore, if \(d = 2\), letting \({\alpha }= -1\) and \(\beta =1 \) in (6.6), we have

$$\begin{aligned} {[}|A|^2,(1+{{\mathcal {D}}}^2)^{-1/2}](1+{{\mathcal {D}}}^2)^{-1/2} \in {{\mathcal {L}}}_{2/3,\infty } \subset {{\mathcal {L}}}_1. \end{aligned}$$

This inclusion can be combined with Lemma 6.4 to arrive at

which completes the proof of Theorem 6.1 for the \(d=2\) case.

For \(d>2\), we need

Proposition 6.5

Let \(d > 2\). Then

$$\begin{aligned} |A|^d(1+ {{\mathcal {D}}}^2)^{-d/2}-((1+ {\mathcal D}^2)^{-1/2}|A|^2(1+{{\mathcal {D}}}^2)^{-1/2})^{d/2} \in {\mathcal L}_1. \end{aligned}$$

Proof

From [13, Theorem B.1], it suffices to show the following four conditions:

  1. (i)

    \(|A|^{d-2}(1+ {{\mathcal {D}}}^2)^{1-\frac{d}{2}} \in {{\mathcal {L}}}_{\frac{d}{d-2},\infty }.\)

  2. (ii)

    \((1+{{\mathcal {D}}}^2)^{-1/2}|A|^2(1+ {\mathcal D}^2)^{-1/2} \in {{\mathcal {L}}}_{\frac{d}{2},\infty }.\)

  3. (iii)

    \([|A|^2(1+ {{\mathcal {D}}}^2)^{-1/2},(1+ {\mathcal D}^2)^{-1/2}] \in {{\mathcal {L}}}_{\frac{d}{2},1}.\)

  4. (iv)

    \(|A|^{d-2}[|A|^2,(1+{\mathcal D}^2)^{1-\frac{d}{2}}](1+ {{\mathcal {D}}}^2)^{-1} \in {{\mathcal {L}}}_1.\)

Since \(d > 2\), we have that \(|A|^{d-2} = |A|^{d-3}{\mathrm{{sgn}}}(A)A\), so (i) follows immediately from Lemma 5.4. Similarly, using \(|A|^2 = A^*A\), we also get (ii) immediately from the Hölder inequality and the fact that \(A(1+{\mathcal D}^2)^{-1/2}\) and its adjoint operator belong to \( {\mathcal L}_{d,\infty }\).

For (iii), we write:

$$\begin{aligned} {[}|A|^2(1+ {{\mathcal {D}}}^2)^{-1/2},(1+{{\mathcal {D}}}^2)^{-1/2}] = [|A|^2,(1+{{\mathcal {D}}}^2)^{-1/2}](1+{{\mathcal {D}}}^2)^{-1/2} \end{aligned}$$

which is in \({{\mathcal {L}}}_{\frac{d}{3},\infty }\) due to (6.6) (with \(\alpha =-1\) and \(\beta =1\)). Since \(\frac{d}{3} < \frac{d}{2}\), it follows that \({{\mathcal {L}}}_{d/3,\infty }\subset {{\mathcal {L}}}_{d/2,1}\) and this proves (iii). Finally, (iv) immediately follows from (6.6) with \(\alpha = 2-d\) and \(\beta = 2\). \(\quad \square \)

Lemma 6.4 and Proposition 6.5 yield Theorem 6.1 for the case \(d>2\), and thus complete the proof of Theorem 6.1.

6.2 Proof of Theorem 1.2

Let us quote [46, Theorem 6.15]. Let \(C_0 ({\mathbb {R}_\theta ^d}) \) be the norm closure of \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) in \({{\mathcal {B}}}(L_2({\mathbb R}^d))\). For every \(g\in C(\mathbb {S}^{d-1})\), as defined in (3.12), \(g\big ( \frac{\mathrm{{i}}\nabla _\theta }{(-\Delta _\theta )^{1/2}}\big )\) is the multiplication operator \(\xi (t)\mapsto g(\frac{t}{|t|}) \xi (t)\) in \({{\mathcal {B}}}(L_2({{\mathbb {R}}}^d))\). Moreover, the operators \(g\big ( \frac{\mathrm{{i}}\nabla _\theta }{(-\Delta _\theta )^{1/2}}\big )\) with \(g\in C(\mathbb {S}^{d-1})\) form a commutative \(C^*\)-subalgebra of \({{\mathcal {B}}}(L_2({{\mathbb {R}}}^d))\). Set \(\Pi (C_0 ({\mathbb {R}_\theta ^d})+{{\mathbb {C}}}, C(\mathbb {S}^{d-1}))\) to be the \(C^*\)-subalgebra of \({{\mathcal {B}}}(L_2({{\mathbb {R}}}^d))\) generated by \(C_0 ({\mathbb {R}_\theta ^d})+{{\mathbb {C}}}\) and all the operators \(g\big ( \frac{\mathrm{{i}}\nabla _\theta }{(-\Delta _\theta )^{1/2}}\big )\), \(g\in C(\mathbb {S}^{d-1})\). Theorem 3.3 of [46] implies that there exists a unique norm-continuous \(*\)-homomorphism

$$\begin{aligned} \mathrm{{sym}}: \Pi (C_0 ({\mathbb {R}_\theta ^d})+{{\mathbb {C}}}, C(\mathbb {S}^{d-1}))\longrightarrow \big (C_0 ({\mathbb {R}_\theta ^d})+{{\mathbb {C}}}\big ) {\otimes }_{\min } C(\mathbb {S}^{d-1}) \end{aligned}$$

which maps \(x \in C_0({\mathbb {R}_\theta ^d})\) to \(x\otimes 1\) and \(g\big (\frac{\mathrm{{i}}\nabla _{\theta }}{(-\Delta _{\theta })^{1/2}}\big )\) to \(1\otimes g\). Then [46, Theorem 6.15] says that for every continuous normalised trace \({\varphi }\) on \({{\mathcal {L}}}_{1,\infty }\), every \(x\in W^d_1({\mathbb {R}_\theta ^d})\), and every \(T \in \Pi (C_0 ({\mathbb {R}_\theta ^d})+{{\mathbb {C}}}, C(\mathbb {S}^{d-1}))\), we have

$$\begin{aligned} {\varphi }(T x (1-\Delta _\theta ) ^{-d/2} ) = C_d \Big ( \tau _\theta {\otimes }\int _{\mathbb {S}^{d-1}} \Big ) \big ( \mathrm{{sym}}(T) (x {\otimes }1) \big ) \end{aligned}$$
(6.7)

where \(C_d\) is a certain constant depending only on the dimension d.

Now we are able to give the proof of Theorem 1.2.

Proof of Theorem 1.2

We will assume initially that \(x\in {\mathcal {A}({\mathbb {R}_\theta ^d})}\). For a continuous normalised trace \({\varphi }\) on \({{\mathcal {L}}}_{1, \infty }\), Theorem 6.1 ensures that

But since \(A= \sum _j \gamma _j {\otimes }A_j\) with self-adjoint unitary matrices \(\gamma _j\), the only part that contributes to the trace on the right-hand side above is \( (1{\otimes }\sum _j A_j^* A_j )^{d/2} (1+ {\mathcal D}^2) ^{-d/2}\). Hence,

However, note that each \(A_j\) is a finite sum of products of operators of multiplication by elements of \({\mathcal {S}({\mathbb {R}_\theta ^d})}\) and Fourier multipliers by functions \(g \in C(\mathbb {S}^{d-1})\), and so is in the algebra \(\Pi (C_0({\mathbb {R}_\theta ^d})+{{\mathbb {C}}},C(\mathbb {S}^{d-1}))\), with symbol:

$$\begin{aligned} \mathrm{sym}(A_j) = \partial _j x\otimes 1- \sum _{k=1}^d \partial _k x\otimes s_js_k. \end{aligned}$$

Since \(\mathrm{{sym}}\) is a norm-continuous \(*\)-homomorphism, we have

$$\begin{aligned} \mathrm{{sym}} ( \sum _j A_j^* A_j )^{d/2} = \Big ( \sum _{j=1}^d\big | \partial _j x - s_j \sum _{k=1}^d s_k \partial _k x\big |^2 \Big )^{d/2}. \end{aligned}$$

Since \(d\ge 2\), we can write:

$$\begin{aligned} \left( \sum _j A_j^*A_j\right) ^{d/2} = \left( \sum _j A_j^*A_j\right) ^{(d-2)/2}(\sum _{j} A_j^*A_j). \end{aligned}$$

Recalling the definition of \(A_j\),

$$\begin{aligned} A_j = \partial _j x - \sum _{k=1}^d\frac{{{\mathcal {D}}}_j{{\mathcal {D}}}_k}{-\Delta _{\theta }}\partial _k x. \end{aligned}$$

We arrive at:

$$\begin{aligned} \left( \sum _j A_j^*A_j\right) ^{d/2} = \left( \sum _j A_j^*A_j\right) ^{(d-2)/2}\sum _{j=1}^d A_j^*\Big (\partial _j x - \sum _{k=1}^d\frac{{{\mathcal {D}}}_j{{\mathcal {D}}}_k}{-\Delta _\theta } \partial _k x\Big ). \end{aligned}$$

Since each \(\partial _j x\) is in \(W^{d}_1({\mathbb {R}_\theta ^d})\), we can apply (6.7) to arrive finally at:

$$\begin{aligned}&{\varphi }\big ( ( \sum _j A_j^* A_j )^{d/2} (1-\Delta _\theta )^{-d/2} \big )\\&\quad = C_d \Big ( \tau _\theta {\otimes }\int _{\mathbb {S}^{d-1}}ds\Big ) \Big (\mathrm{{sym}}\big ( \sum _j A_j^* A_j \big )^{(d-2)/2}\,\sum _{j=1}^d \mathrm{sym}(A_j)^*\big (\partial _j x-s_j\sum _{k=1}^d s_k\partial _k x\big )\Big )\\&\quad = C_d\int _{\mathbb {S}^{d-1}} \tau _\theta \Big (\left( \sum _{j=1}^d\big | \partial _j x - s_j \sum _{k=1}^d s_k \partial _k x\big |^2 \right) ^{d/2}\Big )\,ds. \end{aligned}$$

By virtue of Corollary 3.16, the general case of Theorem 1.2 follows via an approximation argument identical to the proof of [45, Theorem 1.2]. \(\quad \square \)

6.3 Proof of Theorem 1.5

Finally, we prove Theorem 1.5.

Recall from Theorem 1.1 that when \(y \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\) we have . Then if \(x \in L_\infty ({\mathbb {R}_\theta ^d})\), we have . The following lemma shows that for certain unbounded \(x\in L_d({\mathbb {R}_\theta ^d})\). Note that in the strictly noncommutative case of \(\det (\theta )\ne 0\), the following lemma is unnecessary as then we would have \(L_d({\mathbb {R}_\theta ^d})\subset L_\infty ({\mathbb {R}_\theta ^d})\).

Lemma 6.6

Let \(d > 2\), and take \(x \in L_d({\mathbb {R}_\theta ^d})\) and \(y \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\). Then has extension in the ideal \({{\mathcal {L}}}_{d,\infty }\), with a quasi-norm bound

Proof

On the dense subspace \(C^\infty _c({{\mathbb {R}}}^d)\), the operator of multiplication by x is meaningful, and since is bounded, the operator is well-defined on the subspace \(C^\infty _c({{\mathbb {R}}}^d)\). Let us show that there is a bounded extension in \({{\mathcal {L}}}_{d,\infty }\). Applying the Leibniz rule:

We know from Corollary 5.3 that \([J_{\theta },y]\) has bounded extension, and since \([{{\mathcal {D}}},y] = \sum _{j=1}^d -\mathrm{{i}}\gamma _j\otimes \partial _jy\), the commutator \([{{\mathcal {D}}},y]\) has bounded extension.

Let us first bound the terms \([{{\mathcal {D}}},y]J_{\theta }^{-1}x\) and \([J_\theta ,y]J_{\theta }^{-1}x\). Since \(d > 2\), we may apply Lemma 4.1 to obtain:

$$\begin{aligned} \Vert [{{\mathcal {D}}},y]J_{\theta }^{-1}x\Vert _{d,\infty } \le \Vert [{\mathcal D},y]\Vert \Vert J_\theta ^{-1}x\Vert _{d,\infty } \lesssim _d \Vert y\Vert _{{\dot{W}}^1_\infty }\Vert x\Vert _d. \end{aligned}$$

To bound \([J_\theta ,y]J_\theta ^{-1}x\), we use the fact that:

$$\begin{aligned} J_{\theta }-{{\mathcal {D}}} = \frac{1}{J_{\theta }+{{\mathcal {D}}}} \end{aligned}$$

is bounded, so again applying Lemma 4.1, it follows that:

$$\begin{aligned} \Vert [J_{\theta },y]\Vert \lesssim _{d} \Vert y\Vert _\infty +\Vert [{{\mathcal {D}}},y]\Vert \le \Vert y\Vert _{W^1_\infty }. \end{aligned}$$
(6.8)

Thus,

$$\begin{aligned} \Vert [J_\theta ,y]J_\theta ^{-1}x\Vert _{d,\infty }\lesssim _{d} \Vert y\Vert _{W^1_\infty }\Vert x\Vert _d. \end{aligned}$$

Denoting \(h({{\mathcal {D}}}) := {\mathrm{{sgn}}}({{\mathcal {D}}})- {{\mathcal {D}}} J_{\theta }^{-1}\), we have so far:

(6.9)

As was already noted in the proof of Lemma 4.2, we can write \(h({{\mathcal {D}}}) := \sum _{j=1}^d \gamma _j\otimes h_j(\mathrm{{i}}\nabla _\theta )\) where:

$$\begin{aligned} h_j(t) = \frac{t_j}{|t|(1+|t|^2)^{1/2}(|t|+(1+|t|^2)^{1/2})}, \quad 1\le j\le d. \end{aligned}$$
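This formula arises from rationalising the difference of the symbols of \({\mathrm{{sgn}}}({{\mathcal {D}}})\) and \({{\mathcal {D}}}J_\theta ^{-1}\):

$$\begin{aligned} \frac{t_j}{|t|}-\frac{t_j}{(1+|t|^2)^{1/2}} = \frac{t_j\big ((1+|t|^2)^{1/2}-|t|\big )}{|t|(1+|t|^2)^{1/2}} = \frac{t_j}{|t|(1+|t|^2)^{1/2}\big (|t|+(1+|t|^2)^{1/2}\big )} = h_j(t). \end{aligned}$$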

Thus,

$$\begin{aligned} \sup _{t \in {{\mathbb {R}}}^d} |h_j(t)|(1+|t|^2) < \infty . \end{aligned}$$
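Indeed, since \(|t_j|\le |t|\),

$$\begin{aligned} |h_j(t)| \le \frac{1}{(1+|t|^2)^{1/2}\big (|t|+(1+|t|^2)^{1/2}\big )} \le \frac{1}{1+|t|^2}. \end{aligned}$$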

It follows that \(h({{\mathcal {D}}})J_\theta \) has bounded extension. Lemma 4.1 then yields

$$\begin{aligned} \Vert yh({{\mathcal {D}}})x\Vert _{d,\infty }&\le \Vert y\Vert _\infty \Vert h({\mathcal D})J_\theta \Vert _\infty \Vert J_{\theta }^{-1}x\Vert _{d,\infty }\nonumber \\&\lesssim _d \Vert y\Vert _\infty \Vert x\Vert _d. \end{aligned}$$
(6.10)

Similarly,

$$\begin{aligned} \Vert h({{\mathcal {D}}})yx\Vert _{d,\infty } = \Vert h({\mathcal D})J_{\theta }J_\theta ^{-1}yJ_{\theta }J_\theta ^{-1}x\Vert _{d,\infty } \lesssim _{d} \Vert J_\theta ^{-1}yJ_\theta \Vert \Vert J_\theta ^{-1}x\Vert _{d,\infty }. \end{aligned}$$

We can write \(J_{\theta }^{-1}yJ_{\theta }\) as:

$$\begin{aligned} J_{\theta }^{-1}yJ_{\theta } = -J_{\theta }^{-1}[J_\theta ,y]+y. \end{aligned}$$

Applying (6.8) again allows us to bound the norm of the above by \(\Vert y\Vert _{W^1_\infty }\), so we arrive at the quasinorm bound:

$$\begin{aligned} \Vert h({{\mathcal {D}}})yx\Vert _{d,\infty } \lesssim _{d} \Vert y\Vert _{W^1_\infty }\Vert x\Vert _d. \end{aligned}$$
(6.11)

Combining (6.10), (6.11) and (6.8) with (6.9) yields as desired. \(\quad \square \)

Before proceeding to the proof of Theorem 1.5, we make the following remark concerning integration of operator-valued functions. Let \(\psi \in {{\mathcal {S}}}({{\mathbb {R}}}^d)\), and let \(x \in W^{1}_d({\mathbb {R}_\theta ^d})\). Then (formally), one has:

(6.12)

This formal computation is justified by the continuity of the mapping \(t\mapsto T_{-t}(x)\) in the \(W^{1}_d({\mathbb {R}_\theta ^d})\) norm (Theorem 3.6), which combines with Theorem 1.1 to imply that the mapping is continuous in the \({{\mathcal {L}}}_{d,\infty }\) topology. Since \(d > 1\), the ideal \({{\mathcal {L}}}_{d,\infty }\) can be equipped with an equivalent Banach norm, and so the functions:

are both Bochner measurable in the Banach spaces \(W^1_d({\mathbb {R}_\theta ^d})\) and \({{\mathcal {L}}}_{d,\infty }\) respectively. Theorem 1.1 implies that is a bounded linear map from \(W^1_d({\mathbb {R}_\theta ^d})\) to \({{\mathcal {L}}}_{d,\infty }\), and hence:

where both integrals are Bochner integrals. This justifies (6.12).

Noting that \(T_{-t}\) both commutes with Fourier multipliers and is unitary on \(L_2({\mathbb {R}_\theta ^d})\), it follows that:

and hence (6.12) implies:

(6.13)

(the constant which appears results from the necessity of switching to an equivalent norm for \({{\mathcal {L}}}_{d,\infty }\)).

We now proceed to the proof of Theorem 1.5.

Proof of Theorem 1.5

We assume that \(d > 2\) and \(x\in L_d({\mathbb {R}_\theta ^d})+L_\infty ({\mathbb {R}_\theta ^d})\). Suppose that .

From Corollary 3.11 and Lemma 3.12, we may select \(\{\psi _{\varepsilon }\}_{\varepsilon >0}\), \(\{\phi _{\varepsilon }\}_{\varepsilon > 0}\) and \(\{\chi _{\varepsilon }\}_{\varepsilon >0}\) such that \(\psi _{\varepsilon }*(U(\phi _{\varepsilon })U(\chi _{\varepsilon })x) \in {\mathcal {S}({\mathbb {R}_\theta ^d})}\).

The upper bound (6.13) implies:

Expanding the commutator using the Leibniz rule, the quasi-triangle inequality and Theorem 1.1:

By construction \(\Vert \psi _{\varepsilon }\Vert _1\) is constant as \(\varepsilon \rightarrow 0\), and applying Proposition 2.10, we also have that \(\Vert U(\phi _{\varepsilon })\Vert _\infty \) and \(\Vert U(\chi _{\varepsilon })\Vert _{\infty }\) are uniformly bounded as \(\varepsilon \rightarrow 0\). We now argue that and are also uniformly bounded as \(\varepsilon \rightarrow 0\). To see this, write x as \(x_0+x_1\), where \(x_0 \in L_\infty ({\mathbb {R}_\theta ^d})\) and \(x_1 \in L_d({\mathbb {R}_\theta ^d})\). Then Theorem 1.1 and Lemma 6.6 yield the bound:

and a similar bound for .

Due to Lemma 3.13, the seminorms \(\Vert U(\phi _{\varepsilon })\Vert _{{\dot{W}}^1_d}\) and \(\Vert U(\chi _{\varepsilon })\Vert _{{\dot{W}}^1_d}\) are uniformly bounded as \(\varepsilon \rightarrow 0\). Similarly, the \(W^1_\infty \)-norms of \(U(\chi _{\varepsilon })\) and \(U(\phi _{\varepsilon })\) are uniformly bounded as \(\varepsilon \rightarrow 0\).

It follows that is uniformly bounded in \({{\mathcal {L}}}_{d,\infty }\) as \(\varepsilon \rightarrow 0\). Now applying Corollary 1.3 to , it follows that \(\{\psi _{\varepsilon }*(U(\phi _{\varepsilon })U(\chi _{\varepsilon })x)\}_{\varepsilon >0}\) is uniformly bounded in \({\dot{W}}_d^1({\mathbb {R}_\theta ^d})\), so for every \(1\le j \le d\), \(\{\partial _j \big (\psi _{\varepsilon }*(U(\phi _{\varepsilon })U(\chi _{\varepsilon })x)\big )\}_{\varepsilon >0}\) is uniformly bounded in \(L_d({\mathbb {R}_\theta ^d})\). Since \(d\ge 2\), the space \(L_d({\mathbb {R}_\theta ^d})\) is reflexive and therefore \(\{\partial _j(\psi _{\varepsilon }*(U(\phi _{\varepsilon })U(\chi _{\varepsilon })x)) \}_{\varepsilon > 0}\) has a weak limit point in \(L_d({\mathbb {R}_\theta ^d})\). But we know from Theorem 3.11 that if \(y \in L_{d/(d-1)}({\mathbb {R}_\theta ^d})\) or \(y \in L_1({\mathbb {R}_\theta ^d})\), then \(U(\chi _{\varepsilon })U(\phi _{\varepsilon })(\psi _{\varepsilon }*y)\rightarrow y\) in the \(L_{d/(d-1)}({\mathbb {R}_\theta ^d})\) sense or in the \(L_1({\mathbb {R}_\theta ^d})\) sense respectively; hence \(\psi _{\varepsilon }*(U(\phi _{\varepsilon })U(\chi _{\varepsilon })x) {\rightarrow }x \) in the distributional sense. It follows that any weak limit point of \(\{\partial _j(\psi _{\varepsilon }*(U(\phi _{\varepsilon })U(\chi _{\varepsilon })x))\}_{\varepsilon >0}\) in \(L_d({\mathbb {R}_\theta ^d})\) must be \(\partial _j x\).

Therefore, \(\partial _j x \in L_d({\mathbb {R}_\theta ^d})\) for every \(1\le j \le d \). That is, \(x \in W^{1}_{d}({\mathbb {R}_\theta ^d})\).

Finally, we obtain the bound on the norm using Corollary 1.3. That result implies that there exists a constant \(c_d >0 \) such that for all continuous normalised traces \({\varphi }\) on \({{\mathcal {L}}}_{1,\infty }\),

Since \({\varphi }\) is continuous,

Selecting a continuous normalised trace \({\varphi }\) of norm 1 completes the proof for \(d > 2\).

For \(d = 2\), we make the stronger assumption that \(x \in L_\infty ({\mathbb {R}_\theta ^d})\). This permits us to carry out the same proof, but instead we use the bounds:

to prove that is uniformly bounded in \({{\mathcal {L}}}_{2,\infty }\). \(\quad \square \)