1 Introduction

The sampling problem deals with the recovery of bandlimited signals f from the collection of measurements \(\{f(\lambda )\}_{\lambda \in \Lambda }\) taken at the points of some uniformly discrete set \(\Lambda \subset {\mathbb R}^d\). The classical results deal with one-dimensional signals that are elements of the Paley-Wiener or Bernstein spaces over a fixed interval \([-\sigma ,\sigma ]\). The sets \(\Lambda \) that provide stable reconstruction are, in this case, completely described. For the Bernstein spaces, the answer is given in terms of a certain density of \(\Lambda \) and the bandwidth parameter \(\sigma \), see [5]. The result for the Paley-Wiener spaces is more complicated, see [16, 18]; it cannot be expressed in terms of a density of \(\Lambda \). We refer the reader to [5, 18] for a detailed exposition and the proofs.

The complexity of the task increases significantly in the multi-dimensional setting. Landau [11] proved that the necessary conditions for stable sampling remain valid for the Paley-Wiener spaces over any domain (see [13] for a much simpler proof). A sufficient condition for sampling of signals from the Bernstein space with spectrum in a ball was obtained by Beurling, see [6]. We also refer the reader to [15] for some extensions. However, there is a gap between the necessary and the sufficient conditions. Moreover, even for the simplest spectra, such as balls or cubes, examples show that no description of sampling sets is possible in terms of a density of \(\Lambda \), see Sect. 5.7 in [14].

Recently, the so-called dynamical sampling problem (in what follows, we will more often use the term space-time sampling problem) has attracted a lot of attention, see [2,3,4, 19] and the references therein.

In this paper, we consider the following variant of the dynamical sampling problem.

Main Problem

Let \(\Lambda \) be a uniformly discrete subset of \({\mathbb R}^n\) and let \(\{G_{\alpha }(x)\}_{\alpha \in I}\) be a collection of functions parametrized by \(\alpha \in I\). What assumptions should be imposed on the spatial set \(\Lambda \), the index set I, and the functions \(G_{\alpha }\) to enable the recovery of every band-limited signal f from its space-time samples \(\{f *G_{\alpha }(\lambda )\}_{\lambda \in \Lambda , \alpha \in I}\)?

For signals f from a Paley-Wiener space \(PW_{\sigma }\) (see the definition below) this means that the inequalities

$$\begin{aligned} D_1 \Vert f\Vert ^2_{2} \le \sum \limits _{\lambda \in \Lambda } \int \limits _{I} |f * G_{\alpha }(\lambda )|^2 \, d\alpha \le D_2 \Vert f\Vert ^2_{2} \quad \text {for every } f \in PW_{\sigma } \end{aligned}$$
(1)

are true with some constants \(D_1\) and \(D_2\). Here, as usual, \(\Vert \cdot \Vert \) denotes the \(L^2\)-norm.

Recall that a set \(\Lambda = \{\lambda _k\} \subset {\mathbb R}^n\) is called uniformly discrete (u.d.) if

$$\begin{aligned} \delta (\Lambda ) := \inf \limits _{\begin{array}{c} \lambda \ne \lambda ' \\ \lambda , \lambda ' \in \Lambda \end{array}}{|\lambda - \lambda '|} > 0. \end{aligned}$$

The constant \(\delta (\Lambda )\) is called the separation constant of \(\Lambda \).
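As a quick illustration (this is our own toy sketch, not part of the paper's argument), for a finite point configuration the separation constant is just the smallest nearest-neighbour distance, which can be computed with a k-d tree; the helper name below is ours.

```python
import numpy as np
from scipy.spatial import cKDTree

# Illustrative sketch: delta(Lambda) of a finite configuration equals the
# smallest nearest-neighbour distance.
def separation_constant(points):
    tree = cKDTree(points)
    dist, _ = tree.query(points, k=2)  # column 0 is each point itself (distance 0)
    return dist[:, 1].min()

rng = np.random.default_rng(0)
print(separation_constant(rng.uniform(0.0, 10.0, size=(200, 2))))
```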

In the one-dimensional setting, this problem appears, in particular, in connection with problems of mathematical physics. Several examples are presented in [4]. One of them is the initial value problem for the heat equation

$$\begin{aligned} \frac{\partial }{\partial \alpha } u(x, \alpha ) = \sigma ^2 \frac{\partial ^2 u}{\partial x^2} (x,\alpha ), \quad \quad \sigma \ne 0,\, x \in {\mathbb R}, \, \alpha > 0, \end{aligned}$$
(2)

with initial condition

$$\begin{aligned} u(x,0) = f(x). \end{aligned}$$
(3)

It is well-known that the solution is given by the formula

$$\begin{aligned} u(x,\alpha ) = f *g_{\alpha } (x)= \int \limits _{{\mathbb R}} g_{\alpha } (x - y) f(y) dy, \end{aligned}$$
(4)

where \( g_{\alpha }(x) = \frac{1}{\sqrt{4 \pi \alpha \sigma ^2}} \exp \left( -\frac{x^2}{4 \alpha \sigma ^2} \right) .\) Note that Main Problem applied to equation (2) provides the reconstruction of the initial function f from the states \(\{u(\lambda , \alpha )\}_{\lambda \in \Lambda , \alpha \in I}\).
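For illustration, the space-time samples \(\{u(\lambda ,\alpha )\}\) in this example are straightforward to compute numerically. The following sketch (a toy setup of our own: a sinc initial condition and ad hoc choices of \(\Lambda \) and I) approximates the convolution (4) by quadrature.

```python
import numpy as np

# Toy computation (illustrative, not from the paper) of the space-time
# samples u(lambda, alpha) = (f * g_alpha)(lambda) from formulas (2)-(4).
sigma = 1.0                         # diffusion coefficient in (2)
x = np.linspace(-60.0, 60.0, 8001)  # spatial quadrature grid
dx = x[1] - x[0]
f = np.sinc(x / np.pi)              # sin(x)/x, a band-limited initial condition

def heat_sample(lam, alpha):
    """u(lam, alpha): convolution of f with the heat kernel g_alpha."""
    g = np.exp(-(lam - x) ** 2 / (4.0 * alpha * sigma ** 2)) \
        / np.sqrt(4.0 * np.pi * alpha * sigma ** 2)
    return float(np.sum(f * g) * dx)

Lambda = 2.0 * np.arange(-5, 6)      # a uniformly discrete sampling set
alphas = np.linspace(0.5, 1.0, 5)    # the index set I
samples = np.array([[heat_sample(l, a) for a in alphas] for l in Lambda])
print(samples.shape)                 # (11, 5): time samples at each node
```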

A variant of Main Problem in the one-dimensional setting was considered by Aldroubi et al. in [4]. In particular, it was established that, unlike the classical sampling setting, the assumptions that should be imposed on the set \(\Lambda \) to solve Main Problem cannot be expressed in terms of a density of \(\Lambda \), see Example 4.1 in [4]. More precisely, one may construct a set with an arbitrarily small density that provides stable reconstruction of the initial signal. It was also shown in that paper that for the solution of Main Problem one has to require \(\Lambda \) to be relatively dense.

In the one-dimensional setting, for a large collection of kernels, a solution of Main Problem was presented in [19]: it turns out that stable recovery from the samples on \(\Lambda \) is possible if and only if \(\Lambda \) is not (in a certain sense) “close” to an arithmetic progression.

It seems natural to extend the results of [4, 19] to the multi-dimensional situation. Below we focus on the two-dimensional variant of the problem for the Gaussian kernel

$$\begin{aligned}&G_{\alpha }(x) = e^{- \alpha _1 x_1^2 - \alpha _2 x_2^2}, \quad \quad \alpha = (\alpha _1, \alpha _2) \in I, \,\\&I= (a,b) \times (c,d) \subset {\mathbb R}^2_+, \quad x = (x_1, x_2) \in {\mathbb R}^2. \end{aligned}$$

Our approach is similar to the one in [19]. However, this problem is considerably more involved than its one-dimensional counterpart, and one needs to apply some additional ideas. See Sect. 5 for some remarks on the case of dimension higher than 2.

We pass to the description of the geometry of the sets \(\Lambda \) that solve the planar Main Problem.

Definition 1

A curvilinear lattice in \({\mathbb R}^2\) defined by three vectors

$$\begin{aligned} t=(t_1,t_2) \in {\mathbb R}^2, \quad \xi =(\xi _1,\xi _2)\in {\mathbb R}^2, \quad \text{ and } \quad r=(r_1,r_2) \in {\mathbb R}^2,\, r_1^2 + r_2^2 = 1, \end{aligned}$$

is the set

$$\begin{aligned} l_{t,\xi ,r}:=\{\lambda =(\lambda _1,\lambda _2)\in {\mathbb R}^2 \; \Big | \; r_1\cos (\lambda _1\xi _1+\lambda _2\xi _2+t_1)=r_2\cos (-\lambda _1\xi _1+\lambda _2\xi _2+t_2)\}. \end{aligned}$$
Fig. 1 Curvilinear lattice defined by \(\cos (x+y) = 3 \cos (y-x)\)

The blue curves in Fig. 1 correspond to the curvilinear lattice \(l_{t, \xi , r}\) with \(t=(0,0)\), \(\xi = (1,1)\), and \(r = (1/\sqrt{10}, 3/\sqrt{10})\).
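The curve in Fig. 1 is easy to reproduce as the zero set of \(\cos (x+y) - 3\cos (y-x)\); a minimal plotting sketch (matplotlib is our choice here) follows.

```python
import numpy as np
import matplotlib.pyplot as plt

# Reproduces the curvilinear lattice of Fig. 1: the zero set of
# cos(x + y) - 3 cos(y - x), i.e. t = (0, 0), xi = (1, 1), r ∝ (1, 3).
xx, yy = np.meshgrid(np.linspace(-10, 10, 800), np.linspace(-10, 10, 800))
F = np.cos(xx + yy) - 3.0 * np.cos(yy - xx)

plt.contour(xx, yy, F, levels=[0.0], colors="blue")
plt.gca().set_aspect("equal")
plt.title(r"$\cos(x+y) = 3\cos(y-x)$")
plt.show()
```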

In what follows the notation \(W(\Lambda )\) stands for the collection of all weak limits of translates of a uniformly discrete set \(\Lambda \), see the definition in Sect. 2.

Condition (A): A uniformly discrete set \(\Lambda =\{\lambda =(\lambda _1,\lambda _2)\}\subset {\mathbb R}^2\) satisfies condition (A) if every set \(\Lambda ^*\in W(\Lambda )\) is non-empty and does not lie on any curvilinear lattice \(l_{t,\xi ,r}\).

Remark 1

A Delone set is a set that is both uniformly discrete and relatively dense. In particular, it is easy to check that every set that satisfies condition (A) is a Delone set.

We denote by \(PW^2_{\sigma }\) the space of functions square integrable on \({\mathbb R}^2\) with spectrum supported in the square \([-\sigma , \sigma ]^2\), i.e.

$$\begin{aligned} PW^2_{\sigma } = \{f \in L^2({\mathbb R}^2) \,\, \big | \,\, \mathrm{supp \,}\hat{f} \subset [-\sigma , \sigma ]^2 \}, \end{aligned}$$

where

$$\begin{aligned} \hat{f}(\xi _1, \xi _2) = \int \limits _{{\mathbb R}^2} e^{-i (\xi _1 x_1 + \xi _2 x_2)} f(x_1, x_2) \, dx_1 dx_2. \end{aligned}$$

Now, we are ready to formulate the main result.

Theorem 1

Given a u.d. set \(\Lambda \subset {\mathbb R}^2\) and a rectangle \(I = (a,b)\times (c, d)\) with \(0< a< b < \infty \), \(0< c< d < \infty \), the following statements are equivalent:

  1. (i)

    For every \(\sigma >0\) there are positive constants \(D_1 = D_1(\sigma , I, \Lambda )\) and \(D_2 = D_2(\sigma , I, \Lambda )\) such that (1) holds true.

  2. (ii)

    \(\Lambda \) satisfies condition (A).

The paper is organized as follows. In Sect. 2 we give all necessary definitions and fix notation. As mentioned above, we employ the approach from [19] and divide the solution into two parts. We start with solving Main Problem for the Bernstein spaces \(B_{\sigma }\) and prove an analogue of Theorem 1 in Sect. 3. In Sect. 4 we investigate the connection between sampling with the Gaussian kernel in the Paley-Wiener and Bernstein spaces; there we also prove the main result. The remarks on multi-dimensional cases and some open problems that puzzle us are placed in Sect. 5.

2 Notations and Preliminaries

In the present paper, we deal with signals that belong to the Bernstein and Paley-Wiener spaces. Since we investigate Main Problem simultaneously for all bandwidth parameters, we may restrict ourselves to functions with spectrum supported in squares. This leads us to

Definition 2

Given a positive number \(\sigma \), we denote by \(B_{\sigma }\) the space of all entire functions f in \({\mathbb C}^2\) satisfying the estimate

$$\begin{aligned} |f(z)|\le C e^{\sigma (|y_1|+|y_2|)},\quad z=(z_1,z_2)\in {\mathbb C}^2, \quad z_j=x_j+i y_j\in {\mathbb C}, \, j=1,2, \end{aligned}$$
(5)

where the constant \(C=C(f)\) depends only on f.

It is well-known that \(B_{\sigma }\) consists of the bounded continuous functions that are the inverse Fourier transforms of tempered distributions supported on the square \([-\sigma , \sigma ]^2\). We refer the reader to [12] for more information about Bernstein spaces.

For \(1\le p < \infty \) we may define the Paley-Wiener spaces by the formula

$$\begin{aligned} PW^p_{\sigma } = B_{\sigma } \cap L^p({\mathbb R}^2) \end{aligned}$$

or equivalently

$$\begin{aligned} PW^p_{\sigma } = \{f \in L^p({\mathbb R}^2) \,\, \big | \,\, \mathrm{supp \,}\hat{f} \subset [-\sigma , \sigma ]^2 \}. \end{aligned}$$

Following [5] (see also Chapter 3.4 in [10, 14, 17]), we introduce an auxiliary

Definition 3

Let \(\{\Lambda _k\}\) and \(\Lambda \) be u.d. subsets of \({\mathbb R}^n\) satisfying \(\delta (\Lambda _k) \ge \delta > 0, \, k \in {\mathbb N}\). We say that the sequence \(\{\Lambda _k\}\) converges weakly to \(\Lambda \) if for every large \( R > 0\) and small \(\varepsilon > 0\) there exists \(N = N(R, \varepsilon )\) such that

$$\begin{aligned} \Lambda _{k} \cap (-R, R)^n \subset \Lambda + (-\varepsilon , \varepsilon )^n,\\ \Lambda \cap (-R, R)^n \subset \Lambda _{k} + (-\varepsilon , \varepsilon )^n \end{aligned}$$

for all \(k \ge N\).
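For finite portions of the sets, the two inclusions of Definition 3 can be tested directly. The following sketch (the function name and the brute-force distance computation are ours) checks whether two finite configurations are \(\varepsilon \)-close inside the window \((-R,R)^2\).

```python
import numpy as np

# Illustrative check of the two inclusions in Definition 3 inside (-R, R)^2.
# A and B are (N, 2) arrays holding finite portions of Lambda_k and Lambda.
def weakly_close(A, B, R, eps):
    def one_sided(P, Q):
        P = P[np.max(np.abs(P), axis=1) < R]  # P ∩ (-R, R)^2
        if len(P) == 0:
            return True
        # sup-norm distance from each point of P to the nearest point of Q
        d = np.abs(P[:, None, :] - Q[None, :, :]).max(axis=2).min(axis=1)
        return bool((d < eps).all())
    return one_sided(A, B) and one_sided(B, A)
```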

Definition 4

By \(W(\Lambda )\) we denote the collection of all weak limits of the translates \(\Lambda _k:=\Lambda - x_k\), where \(\{x_k\} \subset {\mathbb R}^n\) is an arbitrary bounded or unbounded sequence.

We supplement these definitions with several examples concerning condition (A).

Example 1

To construct a set that does not satisfy condition (A), one may consider the following perturbation of the rectangular lattice:

$$\begin{aligned} \Lambda = \left\{ \left( 2 \pi n + \frac{1}{2^{m^2+n^2}}, 2 \pi m + \frac{1}{2^{|m| + |n|}} \right) , \quad m,n \in {\mathbb Z}\right\} . \end{aligned}$$

Taking, for instance, \(x_k = (2 \pi k, 0)\), one may check directly from the definition that the sequence \(\Lambda - x_k\) converges weakly to the set \(\Lambda ^* = \left\{ \left( 2 \pi n, 2 \pi m \right) , \quad m,n \in {\mathbb Z}\right\} \), which, clearly, lies in \(l_{t, \xi , r}\) with \(t = (0,0)\), \(\xi = (1,1)\), and \(r = \left( \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \right) \).
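This convergence is easy to observe numerically: the sketch below (with the concrete choice \(x_k = (2\pi k, 0)\) made above) measures, inside a fixed window, the distance from the translated points to the nearest node of \(2\pi {\mathbb Z}^2\).

```python
import numpy as np

# Illustrative check of Example 1: inside a fixed window, the translates
# Lambda - (2*pi*k, 0) approach the lattice 2*pi*Z^2 as k grows.
def lam(n, m):
    return (2 * np.pi * n + 2.0 ** -(m * m + n * n),
            2 * np.pi * m + 2.0 ** -(abs(m) + abs(n)))

R = 10.0
for k in (1, 5, 20):
    worst = 0.0
    for n in range(k - 2, k + 3):
        for m in range(-2, 3):
            qx, qy = lam(n, m)[0] - 2 * np.pi * k, lam(n, m)[1]
            if abs(qx) < R and abs(qy) < R:
                ex = qx - 2 * np.pi * round(qx / (2 * np.pi))
                ey = qy - 2 * np.pi * round(qy / (2 * np.pi))
                worst = max(worst, float(np.hypot(ex, ey)))
    print(k, worst)  # the deviation tends to 0
```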

The following example is inspired by the papers [10, 17], which considered planar mobile sampling problems.

Example 2

Set

$$\begin{aligned} D_{{\mathbb Z}} = \left\{ (x,y) \in {\mathbb R}^2 \quad | \quad x^2 + y^2 = 4 \pi ^2 k^2, \,\, k \in {\mathbb Z}\right\} , \end{aligned}$$

i.e. \(D_{{\mathbb Z}}\) is the collection of concentric equidistant circles centered at (0, 0). Now, one may consider \(\Lambda \) to be any u.d. set located on the circles \(D_{{\mathbb Z}}\). One may check (see the proofs in [10, 17]) that for every unbounded sequence \(\{x_n\}\) the corresponding weak limit of translates of \(D_{{\mathbb Z}}\) lies on a family of parallel lines. The argument is based on the simple observation that the traces of the translated circles in the square \([-R,R]^2\) (for a fixed \(R>0\)) get closer and closer to parallel lines as the value \(|x_n|\) increases; moreover, the distance between adjacent lines is \(2 \pi \). For instance, one may take \(x_n = (0,2\pi n)\) and pass to a weak limit \(\Lambda - x_n \rightarrow \Lambda ^{*}\) to obtain that \(\Lambda ^{*} \subset l_{t,\xi ,r}\) with \(t = (0,0)\), \(\xi = (1,1)\), and \(r = \left( \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \right) \).
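Again, one can observe this numerically: after translating by \((0, 2\pi n)\), the traces of the circles inside a fixed window flatten onto the horizontal lines \(y \in 2\pi {\mathbb Z}\). A rough sketch (the discretization is ours):

```python
import numpy as np

# Illustrative check of Example 2: after translating by (0, 2*pi*n), the
# traces of the circles |x| = 2*pi*k inside (-R, R)^2 approach the
# horizontal lines y in 2*pi*Z.
R = 10.0
for n in (5, 50, 500):
    worst = 0.0
    for k in range(max(n - 2, 1), n + 3):     # circles passing near (0, 2*pi*n)
        th = np.linspace(0.0, np.pi, 400000)  # upper half of the circle
        x = 2 * np.pi * k * np.cos(th)
        y = 2 * np.pi * k * np.sin(th) - 2 * np.pi * n
        mask = (np.abs(x) < R) & (np.abs(y) < R)
        if mask.any():
            d = np.abs(y[mask] - 2 * np.pi * np.round(y[mask] / (2 * np.pi)))
            worst = max(worst, float(d.max()))
    print(n, worst)  # decays roughly like 1/n
```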

The next example concerns sets that satisfy condition (A). In what follows, we skip some technical details.

Example 3

One may easily find a u.d. set \(\Lambda \) in \({\mathbb R}^2\) such that its projection \(P_{\Lambda }\) onto any rectangle \(P = [0, a] \times [0,b]\) is dense in P, where

$$\begin{aligned} P_{\Lambda } = \left\{ (\lambda _1 \, \mathrm{mod}\, a, \,\,\, \lambda _2\, \mathrm{mod}\, b) : \lambda = (\lambda _1, \lambda _2) \in \Lambda \right\} . \end{aligned}$$

For instance, one may take a u.d. subset of \((\sqrt{2}\, {\mathbb Z}\cup {\mathbb Z}) \times (\sqrt{3}\, {\mathbb Z}\cup {\mathbb Z})\) with a dense projection. Therefore, the set \(\Lambda \) and all its weak limits of translates do not lie on any curvilinear lattice, and \(\Lambda \) satisfies condition (A).

Below we will use the simple fact that for every sequence \(x_k\) there is a subsequence \(x_{k_j}\) such that \(\Lambda - x_{k_j}\) converges weakly.

Throughout this paper we will adopt the following notation:

  • Let \(x \in {\mathbb R}^n, \, y \in {\mathbb R}^n, \, n \in {\mathbb N}\). Define \(|x| := \sqrt{x_1^2 + \dots + x_n^2}\). The notation \(x \cdot y\) stands for the scalar product of the vectors x and y.

  • Set \(B_r(x) := \{y \in {\mathbb R}^n \, : \, |x-y| < r\},\,\,\) where \(x \in {\mathbb R}^n\) and \(r>0\).

  • Given \(\lambda =(\lambda _1, \dots , \lambda _n)\) and \(f \in L^{\infty }({\mathbb R}^n)\), we set

    $$\begin{aligned} f_\lambda (x):=f(x-\lambda ) = f(x_1 -\lambda _1, \dots , x_n - \lambda _n). \end{aligned}$$
  • By |A| we denote the n-dimensional Lebesgue measure of a set \(A \subset {\mathbb R}^n\).

  • By C we denote various positive constants that may change from line to line.

We will mainly focus on the two-dimensional case. It is convenient to fix the following notation.

  • Given a point \(x=(x_1,x_2)\in {\mathbb R}^2\), denote \(\tilde{x}:=(-x_1,x_2)\).

  • A symmetrization operator S is defined by the formula

    $$\begin{aligned} Sf(x):=f(x)+f(\tilde{x})+f(-\tilde{x})+f(-x), \quad f \in L^{\infty }({\mathbb R}^2). \end{aligned}$$
  • Set \({\mathbb T}:= \{x \in {\mathbb R}^2 \, : \, |x| = 1\}\).

3 Sampling with Gaussian Kernel in Bernstein Spaces

An analogue of Theorem 1 for the Bernstein spaces is as follows:

Theorem 2

Given a u.d. set \(\Lambda \subset {\mathbb R}^2\) and \(I = (a,b)\times (c,d)\) with \(0< a< b< \infty , 0< c< d < \infty \), the following statements are equivalent:

  1. (i)

    For every \(\sigma >0\) there is a constant \(K=K(\sigma )\) such that

    $$\begin{aligned} \Vert f\Vert _\infty \le K\sup _{\alpha \in I} \sup _{\lambda \in \Lambda } \Vert f*G_\alpha \Vert _\infty \quad \text { for every } f\in B_\sigma . \end{aligned}$$
  2. (ii)

    \(\Lambda \) satisfies condition (A).

Above, as usual, \(\Vert \cdot \Vert _{{\infty }}\) denotes the sup-norm

$$\begin{aligned} \Vert f\Vert _{\infty } := \sup _{x \in {\mathbb R}^2} |f(x)|. \end{aligned}$$

3.1 Proof of Theorem 2, Part I

(ii) \(\Rightarrow \) (i). In what follows we assume that (i) is not true. We have to show that (ii) fails, i.e. there is a set \(\Lambda ^*\in W(\Lambda )\) that lies on some curvilinear lattice. The proof is divided into 5 steps. For the convenience of the reader, we briefly describe them here and then pass to the argument.

In Step 1, using the standard Beurling technique, we find \(\Lambda ^{*} \in W(\Lambda )\) and \(g \in B_{\sigma }\) such that \(g *G_{\alpha }\) vanishes on \(\Lambda ^{*}\) for every \(\alpha \in I\). Our next step is to show that \(Sg_{\lambda } = 0\) for every \(\lambda \in \Lambda ^{*}\). In Step 3 we prove that \(\Lambda ^{*}\) lies on some curvilinear lattice under the additional assumption that \(g \in L^2({\mathbb R}^2)\). In Steps 4 and 5, using an approximation technique, we show how to get rid of the requirement that g is square integrable.

1. By the assumption made, one can find a sequence of Bernstein functions \(f_n\in B_\sigma \) satisfying

$$\begin{aligned} \Vert f_n\Vert _\infty =1,\quad \Vert f_n*G_\alpha |_\Lambda \Vert _\infty \le 1/n. \end{aligned}$$

We may then introduce a sequence of functions

$$\begin{aligned} g_n(z):=f_n(z-x(n)), \quad z=(z_1,z_2), \quad x(n)=(x_1(n),x_2(n)), \end{aligned}$$

where the points x(n) are chosen so that \(|f_n(x(n))| > 1 - \frac{1}{n}, \, n \in {\mathbb N}\). Then we have

$$\begin{aligned} \Vert g_n\Vert _{\infty }=1 \quad \text {and} \quad \left\| g_n*G_{\alpha }|_{\Lambda +x(n)}\right\| _{\infty } \le 1/n,\, n\in {\mathbb N}. \end{aligned}$$

Using the compactness property of the Bernstein space (see, e.g., [14], Proposition 2.19), we may assume that the sequence \(g_n\) converges (uniformly on compacts in \({\mathbb C}^2\)) to some function \(g\in B_\sigma \). Moreover, passing if necessary to a subsequence, we may assume that the translates \(\Lambda +x(n)\) converge weakly to some u.d. set \(\Lambda ^*\). We may also assume that \(\Lambda ^{*}\) is non-empty: otherwise condition (A) fails and there is nothing further to prove. Clearly, g satisfies

$$\begin{aligned} \Vert g\Vert _\infty =1, \text { and for every } \alpha \in I: \quad g*G_\alpha |_{\Lambda ^*}=0, \quad \Lambda ^*\in W(\Lambda ). \end{aligned}$$
(6)

For a point \(z = (z_1, z_2) \in {\mathbb C}^2\) we consider its complex conjugate point \(\bar{z} = (\bar{z_1},\bar{z_2})\).

Consider the decomposition \(g(z) = \varphi (z) + i \psi (z)\), where

$$\begin{aligned} \varphi (z):=\frac{g(z)+\overline{g(\bar{z})}}{2}, \quad \psi (z):=\frac{g(z)-\overline{g(\bar{z})}}{2i}. \end{aligned}$$

Then \(\varphi \) and \(\psi \) are real (on \({\mathbb R}^2\)) entire functions satisfying (5). Therefore, the functions \(\varphi \) and \(\psi \) belong to \(B_\sigma \), and since the kernel \(G_{\alpha }\) takes only real values on \({\mathbb R}^2\), we have \((\varphi *G_{\alpha })(\lambda )=0\) and \((\psi *G_{\alpha })(\lambda )=0\) for every \(\lambda \in \Lambda ^{*}\). Since \(\Vert g\Vert _\infty =1\), at least one of the functions \(\varphi , \psi \) has sup-norm at least 1/2, and after normalization we may continue the argument assuming that g is a real-valued function.

2. Recall that the notations \(\tilde{x}\) and Sf were introduced in Sect. 2.

Lemma 1

Assume a function \(g\in B_\sigma \) satisfies (6). Then for every \(\lambda \in \Lambda ^*\) the equality

$$\begin{aligned} S g_\lambda (x)=0 \end{aligned}$$
(7)

holds for a.e. \( x \in {\mathbb R}^2\).

Proof

Without loss of generality, we may assume that \(\lambda = (0,0)\) and \(I=\left( \frac{1}{2},1\right) ^2\). Observe that

$$\begin{aligned} (S g *G_{\alpha })(0,0) = 4 (g * G_{\alpha })(0,0) = 0 \quad \text { for every } \alpha \in I. \end{aligned}$$
(8)

Set

$$\begin{aligned} h(x_1, x_2) := Sg(x_1, x_2) \exp \left\{ - \frac{x_1^2 + x_2^2}{4} \right\} \quad \text { and } \quad I_+ := \left( \frac{1}{2}, \frac{3}{4}\right) ^2. \end{aligned}$$

Clearly, \(h \in L^2({\mathbb R}^2)\) and it is even in variables \(x_1\) and \(x_2\). Moreover, using (8), one can check that \((h *G_{\alpha })(0,0) = 0\) for any \(\alpha \in I_+\).

For every multi-index \(m = (m_1, m_2) \in \mathbb {N}^2\) and \(u \in I_+\) we have

$$\begin{aligned} \frac{\partial ^{m}}{\partial u_1^{m_1} \partial u_2^{m_2}} \int \limits _{{\mathbb R}^2} h(x_1,x_2) \exp \left\{ - u_1 x_1^2 - u_2 x_2^2 \right\} dx_1 dx_2 = 0. \end{aligned}$$

In particular, h is orthogonal to every monomial \(x_1^{\alpha _1} x_2^{\alpha _2}\) with even indices \(\alpha _1\) and \(\alpha _2\) in the weighted space \(L^2\left( {\mathbb R}^2, \exp \left\{ - \frac{1}{2} (x_1^2+x_2^2) \right\} \right) \). Moreover, since h is even in each variable, by symmetry we see that h is orthogonal to every polynomial in this space. To finish the proof we use the completeness of the multi-dimensional analogues of the Hermite polynomials. More precisely, we invoke Theorem 3.2.18 from [7] to deduce \(h = 0\). Consequently, \(Sg(x) =0\) for every \(x \in {\mathbb R}^2\), and the lemma follows. \(\square \)
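The differentiation step used above is elementary but worth seeing once: differentiating the Gaussian integral in the parameters brings down even monomials. A symbolic sanity check in one variable (sympy; the test function h is an arbitrary even choice of ours):

```python
import sympy as sp

# Symbolic sanity check (illustrative) of the differentiation step in
# Lemma 1: d^2/du^2 of  int h(x) e^{-u x^2} dx  equals  int h(x) x^4 e^{-u x^2} dx.
x = sp.symbols('x', real=True)
u = sp.symbols('u', positive=True)
h = sp.exp(-x**2)  # an arbitrary even test function standing in for h

inner = sp.integrate(h * sp.exp(-u * x**2), (x, -sp.oo, sp.oo))
lhs = sp.diff(inner, u, 2)
rhs = sp.integrate(h * x**4 * sp.exp(-u * x**2), (x, -sp.oo, sp.oo))
print(sp.simplify(lhs - rhs))  # 0
```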

3. We will need a simple technical

Lemma 2

Let \(F\in L^2({\mathbb R}^2)\) be such that its inverse Fourier transform f is a real function. Then

$$\begin{aligned} S f_\lambda (x)=2\int \limits _{{\mathbb R}^2}\cos (x \cdot t)\, \text{ Re }\left( e^{i\lambda \cdot t}F(t)+e^{i\tilde{\lambda }\cdot t}F(\tilde{t})\right) dt. \end{aligned}$$

Proof

Indeed, we may write

$$\begin{aligned} f(x)=\text{ Re }\int \limits _{{\mathbb R}^2}e^{i x\cdot t}F(t)dt. \end{aligned}$$

Therefore,

$$\begin{aligned} Sf_\lambda (x)= & {} \text{ Re }\int \limits _{{\mathbb R}^2}\left( e^{i x\cdot t}+e^{i\tilde{x}\cdot t}+e^{-i x\cdot t}+e^{-i\tilde{x}\cdot t}\right) e^{i\lambda \cdot t}F(t)dt\\= & {} 2\text{ Re }\int \limits _{{\mathbb R}^2}(\cos (x\cdot t)+\cos (\tilde{x}\cdot t))e^{i\lambda \cdot t}F(t)dt\\= & {} 2\text{ Re }\int \limits _{{\mathbb R}^2}\cos (x\cdot t)\left( e^{i\lambda \cdot t}F(t)+e^{i\tilde{\lambda }\cdot t}F(\tilde{t})\right) dt , \end{aligned}$$

which proves the lemma. \(\square \)

If we additionally assume that \(g\in L^2({\mathbb R}^2)\), the desired result follows from the next statement.

Lemma 3

Assume \(g\in L^2({\mathbb R}^2)\). Then \(\Lambda ^*\) lies on some curvilinear lattice.

Proof

Denote by G the function whose inverse Fourier transform is g, i.e. (up to a constant factor) the Fourier transform of g. Recall that g is real, whence

$$\begin{aligned} G(t) = \overline{G(-t)} \quad \text { and } \quad G(\tilde{t}) = \overline{G(-\tilde{t})}. \end{aligned}$$

Set \(U(t) := \text{ Re }\left( e^{i\lambda \cdot t}G(t)+e^{i\tilde{\lambda }\cdot t}G(\tilde{t})\right) \). Since

$$\begin{aligned} 2U(t)= & {} 2\text{ Re }\left( e^{i\lambda \cdot t}G(t)+e^{i\tilde{\lambda }\cdot t}G(\tilde{t})\right) \\= & {} \left( e^{i\lambda \cdot t}G(t)+e^{i\tilde{\lambda }\cdot t}G(\tilde{t})\right) + \overline{\left( e^{i\lambda \cdot t}G(t)+e^{i\tilde{\lambda }\cdot t}G(\tilde{t})\right) } \\= & {} e^{i\lambda \cdot t}G(t)+e^{i\tilde{\lambda }\cdot t}G(\tilde{t}) + e^{-i\lambda \cdot t}G(-t)+e^{-i\tilde{\lambda }\cdot t}G(-\tilde{t}), \end{aligned}$$

we deduce that \(U(t) = U(-t)\). Combining this observation with Lemmas 1 and 2, we see that equality

$$\begin{aligned} \text{ Re }\left( e^{i\lambda \cdot t}G(t)+e^{i\tilde{\lambda }\cdot t}G(\tilde{t})\right) =0 \end{aligned}$$

holds for a.e. \(t\in {\mathbb R}^2\) and for every \(\lambda \in \Lambda ^{*}\).

Recall that \(G=0\) a.e. outside \((-\sigma ,\sigma )^2\). For every \(\epsilon >0\), find a Schwartz function \(F_\epsilon \) whose support lies in \([-\sigma ,\sigma ]^2\) satisfying \(\Vert G-F_\epsilon \Vert _2<\epsilon \). Then

$$\begin{aligned} \Vert F_\epsilon \Vert _2\ge \Vert G\Vert _2-\epsilon \end{aligned}$$
(9)

and

$$\begin{aligned} \left( \,\, \int \limits _{{\mathbb R}^2} \left| \text{ Re }\left( e^{i\lambda \cdot t}F_\epsilon (t)+e^{i\tilde{\lambda }\cdot t}F_\epsilon (\tilde{t})\right) \right| ^2 d t \right) ^{1/2}<2\epsilon ,\quad \lambda \in \Lambda ^*. \end{aligned}$$
(10)

Using these inequalities, one can check that there are a point \(t_\epsilon \in [-\sigma , \sigma ]^2\) and constants C and c depending only on \(\sigma \) such that

$$\begin{aligned} |F_\epsilon (t_\epsilon )|> C \quad \text { and } \quad \Bigl |\text{ Re }\left( e^{i\lambda \cdot t_\epsilon }F_\epsilon (t_\epsilon )+e^{i\tilde{\lambda }\cdot t_\epsilon }F_\epsilon (\tilde{t_\epsilon })\right) \Bigr |< c \epsilon |F_\epsilon (t_\epsilon )|, \end{aligned}$$
(11)

for every small enough \(\epsilon \). Indeed, by the Plancherel theorem, we have \(\Vert G\Vert _2 = \Vert g\Vert _2\). Since \(\Vert g\Vert _{\infty } = 1\) and \(g \in B_{\sigma }\), using the Bernstein inequality, we deduce \(\Vert G\Vert _2 = \Vert g\Vert _2 \ge C(\sigma )\). Now, assuming that the inequalities (11) do not hold for any t, integrating with respect to the variable t and using estimate (9), for sufficiently small \(\epsilon \) we arrive at

$$\begin{aligned} \int \limits _{{\mathbb R}^2} \left| \text{ Re }\left( e^{i\lambda \cdot t}F_\epsilon (t)+e^{i\tilde{\lambda }\cdot t}F_\epsilon (\tilde{t})\right) \right| ^2 d t \ge c^2 \epsilon ^2 \Vert F_{\epsilon }\Vert ^2_2 \ge \frac{c^2}{2} \epsilon ^2 \Vert G\Vert ^2_2 \ge \frac{c^2}{2} \epsilon ^2 C^2(\sigma ), \end{aligned}$$

which contradicts estimate (10) when \(c > 2 \sqrt{2}/C(\sigma )\).

Write

$$\begin{aligned} F_\epsilon (t_\epsilon )=:R_\epsilon e^{iu_\epsilon },\quad F_\epsilon (\tilde{t}_\epsilon )=:r_\epsilon e^{iv_\epsilon }. \end{aligned}$$

Then we get

$$\begin{aligned} |R_\epsilon |>C,\quad \left| R_\epsilon \cos (\lambda \cdot t_{\epsilon }+u_\epsilon )+r_\epsilon \cos (\tilde{\lambda }\cdot t_\epsilon +v_\epsilon )\right| < c \epsilon R_\epsilon , \ \lambda \in \Lambda ^*. \end{aligned}$$

Then normalizing we arrive at

$$\begin{aligned} \left| \frac{R_\epsilon }{\sqrt{R_\epsilon ^2+r_\epsilon ^2}}\cos (\lambda \cdot t_{\epsilon }+u_\epsilon )+\frac{r_\epsilon }{\sqrt{R_\epsilon ^2+r_\epsilon ^2}}\cos (\tilde{\lambda }\cdot t_\epsilon +v_\epsilon )\right| < c\epsilon . \end{aligned}$$

Clearly, we may assume that \(u_{\epsilon } \in [0, 2 \pi ]\) and \(v_{\epsilon } \in [0, 2 \pi ]\). Recall that \(t_{\epsilon } \in [-\sigma , \sigma ]^2\) and, of course,

$$\begin{aligned} \frac{R_\epsilon }{\sqrt{R_\epsilon ^2+r_\epsilon ^2}} \in [0,1] \quad \text {and} \quad \frac{r_\epsilon }{\sqrt{R_\epsilon ^2+r_\epsilon ^2}} \in [0,1]. \end{aligned}$$

Taking \(\epsilon = \frac{1}{n}\) and passing if necessary to a subsequence (so that the points \(t_\epsilon \), the phases \(u_\epsilon , v_\epsilon \), and the normalized coefficients converge), we deduce that \(\Lambda ^*\) lies on some curvilinear lattice. \(\square \)

4. In what follows we assume that

$$\begin{aligned} g\in B_{\sigma } \setminus L^2({\mathbb R}^2). \end{aligned}$$
(12)

For \(\epsilon > 0\) we set

$$\begin{aligned} h_{\epsilon }(\xi ):=\frac{\sin (\epsilon \xi )}{ \epsilon \xi }, \quad \Phi _{\epsilon }(x_1, x_2) := h_{\epsilon }(x_1) h_{\epsilon }(x_2), \quad \text {and} \quad \delta _\epsilon :=\Vert g \Phi _{\epsilon }\Vert _2^{-1/2}. \end{aligned}$$


The next statement easily follows from (12).

Lemma 4

We have \(\delta _\epsilon \rightarrow 0\) as \(\epsilon \rightarrow 0.\)

We skip the simple proof.

Let us introduce auxiliary functions

$$\begin{aligned} \varphi _\epsilon (x):=\delta _\epsilon \Phi _{\epsilon }(x),\quad g_{\epsilon }(x):=g(x)\varphi _\epsilon (x), \quad x\in {\mathbb R}^2. \end{aligned}$$

By Lemma 4,

$$\begin{aligned} \Vert g_{\epsilon }\Vert _2=1/\delta _\epsilon \rightarrow \infty ,\quad \epsilon \rightarrow 0. \end{aligned}$$
(13)

Lemma 5

For every \(\lambda \in \Lambda ^*\) satisfying \(|\lambda |< 1/\sqrt{\delta _\epsilon }\) we have

$$\begin{aligned} \Vert S (g_\epsilon )_\lambda \Vert _2 \le C\sqrt{\delta _\epsilon }. \end{aligned}$$

Proof

By Lemma 1, \(S g_{\lambda } = 0\). Since the function \(\varphi _\epsilon \) is even with respect to each variable, we have

$$\begin{aligned} (S g_{\lambda })(x)\, \varphi _{\epsilon }(x) = g(x - \lambda ) \varphi _{\epsilon }(x) + g(\tilde{x} - \lambda ) \varphi _{\epsilon }(\tilde{x}) + g(-x - \lambda ) \varphi _{\epsilon }(-x) + g(-\tilde{x} - \lambda ) \varphi _{\epsilon }(-\tilde{x}) = 0. \end{aligned}$$

Hence,

$$\begin{aligned}&\left| S (g_\epsilon )_\lambda (x)\right| = \left| S \big ((g \varphi _\epsilon )(\cdot -\lambda )\big )(x) - (Sg_{\lambda })(x)\, \varphi _\epsilon (x) \right| \\&\le \left| g(x - \lambda )(\varphi _\epsilon (x-\lambda )-\varphi _\epsilon (x))\right| + \left| g(\tilde{x}-\lambda )(\varphi _\epsilon (\tilde{x}-\lambda )-\varphi _\epsilon (\tilde{x}))\right| \\&\quad + \left| g(-x - \lambda )(\varphi _\epsilon (-x-\lambda )-\varphi _\epsilon (-x))\right| +\left| g(-\tilde{x}-\lambda )(\varphi _\epsilon (-\tilde{x}-\lambda )-\varphi _\epsilon (-\tilde{x}))\right| . \end{aligned}$$

Below we focus on the estimate of the first term on the right-hand side of the inequality above. The remaining terms admit the same estimate.

Write \(\lambda =(\lambda _1,\lambda _2)\). Observe that

$$\begin{aligned} \left| \varphi _\epsilon (x-\lambda )-\varphi _\epsilon (x)\right|\le & {} \delta _\epsilon \Bigl (\left| h_{\epsilon }(x_1 - \lambda _1) - h_{\epsilon }(x_1)\right| \left| h_{\epsilon }(x_2 - \lambda _2)\right| \\&+\left| h_{\epsilon }(x_2-\lambda _2)-h_{\epsilon }(x_2)\right| |h_{\epsilon }(x_1)|\Bigr ). \end{aligned}$$

For \(j = 1,2\), using the Cauchy–Schwarz inequality, we have

$$\begin{aligned} \left( \int \limits _{{\mathbb R}} |h_{\epsilon }(x_j-\lambda _j)-h_{\epsilon }(x_j)|^2 dx_j \right) ^{1/2}= & {} \left( \int \limits _{{\mathbb R}} \left| \int \limits _{0}^{\lambda _j}h_{\epsilon }'(x_j-u) du \right| ^2 dx_j \right) ^{1/2}\\\le & {} C |\lambda _j| \Vert h_{\varepsilon }'\Vert _{2}. \end{aligned}$$

One may check that \(\Vert h_{\epsilon }\Vert _2=C/\sqrt{\epsilon }\) and \(\Vert h'_{\epsilon }\Vert _2=C\sqrt{\epsilon }\). Since \(\Vert g\Vert _\infty =1\) and \(|\lambda | \le 1/\sqrt{\delta _{\epsilon }}\), we arrive at

$$\begin{aligned} \Vert S (g_\epsilon )_\lambda \Vert _2 \le C \left( \,\,\int \limits _{{\mathbb R}^2} \left| \varphi _\epsilon (x-\lambda )-\varphi _\epsilon (x)\right| ^2 dx\right) ^{1/2} \le C \delta _{\epsilon } |\lambda | \Vert h_{\epsilon }\Vert _{2} \Vert h'_{\epsilon }\Vert _{2} \le C\sqrt{\delta _\epsilon }. \end{aligned}$$

That finishes the proof. \(\square \)

5. Set \(G_\epsilon :=\widehat{g\varphi _\epsilon }\). Then \(G_\epsilon \in L^2({\mathbb R}^2)\) vanishes a.e. outside some square \((-\sigma ^*,\sigma ^*)^2\) (it is easy to check that one may take \(\sigma ^*=\sigma +\epsilon \)).

Using Lemma 5, for \(|\lambda | \le 1/ \sqrt{\delta _{\epsilon }}\) we get

$$\begin{aligned} \left( \,\, \int \limits _{{\mathbb R}^2}\left| \text{ Re }\left( e^{i\lambda \cdot x}G_\epsilon (x)+e^{i\tilde{\lambda }\cdot x}G_\epsilon (\tilde{x})\right) \right| ^2 dx\right) ^{1/2} \le C\sqrt{\delta _\epsilon }. \end{aligned}$$

On the other hand, by (13), \(\Vert G_\epsilon \Vert _2\ge C\), for all small enough \(\epsilon \).

To finish the proof, we proceed as in the proof of Lemma 3.

3.2 Proof of Theorem 2, Part II

(i) \(\Rightarrow \) (ii). We will argue by contradiction. Assume that for every \(\sigma > 0\) there is a constant \(K=K(\sigma )\) such that

$$\begin{aligned} \Vert f\Vert _\infty \le K\sup _{\alpha \in I} \sup _{\lambda \in \Lambda } \Vert f*G_\alpha \Vert _\infty ,\quad f\in B_\sigma , \end{aligned}$$

but condition (ii) is not satisfied, i.e. there exists some \(\Lambda ' \in W(\Lambda )\) that lies on some curvilinear lattice. Clearly, to arrive at a contradiction it suffices to construct for every \(\varepsilon > 0\) a function \(f = f_{\varepsilon }\) such that

$$\begin{aligned} \Vert f\Vert _{\infty } \ge C, \quad \sup \limits _{\alpha \in I} \sup \limits _{\lambda \in \Lambda } |f * G_{\alpha }(\lambda )| \le C\varepsilon , \end{aligned}$$
(14)

and \(f \in B_{\sigma ^{*}}\) for some fixed \(\sigma ^{*}\).

Again, let us provide a brief description of the proof, which we divide into 4 steps. First, we build a function g such that \(g *G_{\alpha }\) vanishes on \(\Lambda '\) for every \(\alpha \in I\). A slight modification of g provides a function f which satisfies (14). To verify the second estimate in (14) we split the set \(\Lambda \) into the sets \( \Lambda _I = \Lambda \cap P\) and \(\Lambda _O = \Lambda \cap ({\mathbb R}^2 \setminus P)\) for an appropriate rectangle P. In Steps 3 and 4, we show that f satisfies the relations (14) for \(\lambda \in \Lambda _O\) and \(\lambda \in \Lambda _I\), respectively.

Now we pass to the proof.

1. By our assumption, there exist \(\Lambda ' \in W(\Lambda )\), \(\xi \in {\mathbb R}^2\), \((t_1,t_2) \in {\mathbb R}^2\), and \((r_1, r_2) \in {\mathbb T}\) such that for every \(\lambda ' \in \Lambda '\) the equality

$$\begin{aligned} r_1\cos (\lambda ' \cdot \xi - t_1) - r_2 \cos (\tilde{\lambda '} \cdot \xi - t_2) = 0 \end{aligned}$$
(15)

holds. Set

$$\begin{aligned} g(x) = r_1 \cos ( \xi \cdot x + t_1) - r_2 \cos ( \tilde{\xi } \cdot x + t_2). \end{aligned}$$

Clearly, \(g \in B_{\sigma }\) for \(\sigma = |\xi |\). Next, we will show that the symmetrization of the function \(g_{\lambda '}\) vanishes for every \(\lambda ' \in \Lambda '\).

Lemma 6

The equality

$$\begin{aligned} S g_{\lambda '}(x) = 0 \end{aligned}$$

holds for every \(x \in {\mathbb R}^2\) and \(\lambda ' \in \Lambda '\).

Proof

After some simple calculations, we have

$$\begin{aligned} S g_{\lambda '} (x) = r_1 \, \mathrm{Re \,}\left( e^{i ( t_1 - \xi \cdot \lambda ' )} S(e^{i \xi \cdot x}) \right) - r_2 \, \mathrm{Re \,}\left( e^{i (t_2 - \tilde{\xi } \cdot \lambda ')} S(e^{i \tilde{\xi } \cdot x}) \right) , \end{aligned}$$

where, as usual, we apply the symmetrization operator S with respect to the variable x. Clearly, \(S(e^{i \tilde{\xi } \cdot x}) = S(e^{i \xi \cdot x}) = 2 (\cos (\xi \cdot x) + \cos ( \tilde{\xi } \cdot x))\). Thus, using (15), we have

$$\begin{aligned} S g_{\lambda '} (x) = 2 (\cos (\xi \cdot x) + \cos ( \tilde{\xi } \cdot x)) \left( r_1 \, \mathrm{Re \,}e^{i (t_1 - \lambda ' \cdot \xi )} - r_2 \, \mathrm{Re \,}e^{i (t_2 - \tilde{\lambda '} \cdot \xi )} \right) = 0. \end{aligned}$$

\(\square \)

Consequently, for every \(\lambda ' \in \Lambda '\) and \(\alpha \in I\) we have

$$\begin{aligned} g *G_{\alpha }(\lambda ') = 0, \end{aligned}$$
(16)

since \(G_{\alpha }\) is even in every variable.

2. Fix a small \(\varepsilon > 0\) and take a large \(R=R(\varepsilon ) > 0\) (we will specify its value later). Recall that \(\Lambda ' \in W(\Lambda )\). In particular, this means that one can find \(v=(v_1, v_2) = v(R, \varepsilon ) \in {\mathbb R}^2\) such that inside the square \([-R,R]^2\) the set \(\Lambda - v\) is “close” to \(\Lambda '\):

$$\begin{aligned} \text {for every } \lambda \in \Lambda \cap (v + (-R,R)^2) \text { there is } \lambda ' \in \Lambda ' \cap (-R,R)^2: \quad \mathrm{dist \,}(\lambda - v, \lambda ') \le \varepsilon . \end{aligned}$$
(17)

Set \(P = [v_1-R,v_1+R]\times [v_2-R,v_2+R]\) and consider the decomposition

$$\begin{aligned} \Lambda = \Lambda _I \cup \Lambda _O := \left( \Lambda \cap P \right) \cup \left( \Lambda \cap ({\mathbb R}^2 \setminus P) \right) . \end{aligned}$$

Consider

$$\begin{aligned} \Phi _{\varepsilon }(t) = \Phi _{\varepsilon }(t_1, t_2) = \frac{\sin (\varepsilon t_1)}{\varepsilon t_1} \frac{\sin (\varepsilon t_2)}{\varepsilon t_2}. \end{aligned}$$

We define the function f by the formula

$$\begin{aligned} f(x) = \Phi _{\varepsilon }(x-v) g(x-v), \quad x\in {\mathbb R}^2. \end{aligned}$$

Clearly, \(\Vert f\Vert _{\infty } \ge C\), and it suffices to show that \(|f *G_{\alpha }(\lambda )| \le C \varepsilon \) for every \(\lambda \in \Lambda \). We will estimate the value \(|f *G_{\alpha }(\lambda )|\) for \(\lambda \in \Lambda _I\) and \(\lambda \in \Lambda _O\) separately.

3. Assume that \(\lambda \in \Lambda _O\). We may choose \(R = R(\varepsilon ) = \frac{1}{\varepsilon ^2}\). Set \(U = U_1 \times U_2 = [-\sqrt{R}, \sqrt{ R}]^2.\) For \(s\in U\) we have

$$\begin{aligned} |f(\lambda - s)| \le \frac{\Vert g\Vert _{\infty }}{\varepsilon ^2 |\lambda _1 - s_1 - v_1| |\lambda _2 - s_2-v_2|} \le \frac{\Vert g\Vert _{\infty }}{\varepsilon ^2 |R-\sqrt{R}|^2} \le C \varepsilon ^2 \Vert g\Vert _{\infty }, \end{aligned}$$
(18)

provided both \(|\lambda _1 - v_1| \ge R\) and \(|\lambda _2-v_2| \ge R\). For \(\lambda \in \Lambda _O\) at least one of these inequalities holds; if only one does, say \(|\lambda _1 - v_1| \ge R\), the same argument with the single factor \(|\lambda _1 - s_1 - v_1|\) and the trivial bound \(\left| \frac{\sin (\varepsilon t)}{\varepsilon t}\right| \le 1\) in the other variable yields \(|f(\lambda - s)| \le C \varepsilon \Vert g\Vert _{\infty }\), which is still sufficient. Next, it is easy to check that

$$\begin{aligned} J:=\int \limits _{{\mathbb R}}\int \limits _{{\mathbb R}\setminus U_1} G_{\alpha }(s_1,s_2) \, ds_1 ds_2+ \int \limits _{{\mathbb R}} \int \limits _{{\mathbb R}\setminus U_2} G_{\alpha }(s_1,s_2) \, ds_2 ds_1 \le C \varepsilon . \end{aligned}$$
(19)

Now, to estimate \(f *G_{\alpha } (\lambda )\) for \(\lambda \in \Lambda _O\), we write

$$\begin{aligned} |f*G_{\alpha }(\lambda )|\le & {} \int \limits _{U} |f(\lambda -s)| G_{\alpha } (s) \, ds + \int \limits _{{\mathbb R}} \int \limits _{{\mathbb R}\setminus U_1} |f(\lambda -s)| G_{\alpha } (s) \, ds \\&+ \int \limits _{{\mathbb R}} \int \limits _{{\mathbb R}\setminus U_2} |f(\lambda -s)| G_{\alpha } (s) \, ds. \end{aligned}$$

Applying \(\Vert f\Vert _{\infty } \le C\) and estimates (18) and (19), we arrive at

$$\begin{aligned} |f*G_{\alpha }(\lambda )| \le C\varepsilon \int \limits _{U} G_{\alpha } (s) \, ds + J \le C \varepsilon . \end{aligned}$$

4. Now, assume that \(\lambda \in \Lambda _I\). Take \(\lambda ' \in \Lambda '\) satisfying condition (17) for this \(\lambda \), i.e. \(\mathrm{dist \,}(\lambda - v, \lambda ' ) \le \varepsilon \). Since \(g*G_{\alpha }(\lambda ') = 0\), we may write

$$\begin{aligned} f*G_{\alpha }(\lambda )= & {} \int \limits _{{\mathbb R}^2} f(\lambda -s) G_{\alpha }(s) ds - \Phi _{\varepsilon }(\lambda -v) \int \limits _{{\mathbb R}^2} g(\lambda '-s) G_{\alpha }(s) ds \\= & {} \int \limits _{{\mathbb R}^2} \big ( \Phi _{\varepsilon }(\lambda -s-v)\left( g(\lambda -s-v) - g(\lambda '-s) \right) \\&+ g(\lambda '-s)\left( \Phi _{\varepsilon }(\lambda -v-s) -\Phi _{\varepsilon }(\lambda -v) \right) \big ) G_{\alpha }(s) ds. \end{aligned}$$

Set

$$\begin{aligned} H_1:=\left| \Phi _{\varepsilon }(\lambda - s - v) - \Phi _{\varepsilon }(\lambda - v) \right| ,\\ H_2:=\left| g(\lambda - s - v) - g(\lambda ' - s)\right| . \end{aligned}$$

Clearly,

$$\begin{aligned} |f*G_{\alpha }(\lambda )| \le \int \limits _{{\mathbb R}^2} \left( H_1 |g(\lambda ' - s)| + H_2 \left| \Phi _{\varepsilon }(\lambda - v) \right| \right) G_{\alpha }(s) ds. \end{aligned}$$
(20)

By the Bernstein inequality and relation (17), we have

$$\begin{aligned} |H_1| \le \varepsilon (|s_1| + |s_2|), \quad |H_2| \le \varepsilon \Vert \nabla g\Vert _{\infty } . \end{aligned}$$
(21)

Combining estimates (20) and (21) together, we obtain

$$\begin{aligned} |f*G_{\alpha }(\lambda )| \le C \varepsilon \int \limits _{{\mathbb R}^2} (\Vert \nabla g\Vert _{\infty } + |s_1|+|s_2|) G_{\alpha }(s_1, s_2) ds_1 ds_2 \le C \varepsilon , \end{aligned}$$

which finishes the proof. \(\square \)

The following statement easily follows from Theorem 2.

Lemma 7

Assume \(\Lambda \) and I satisfy the assumptions of Theorem 2 and condition (i) is fulfilled. Then for every \(\sigma > 0\) there is a constant C such that

$$\begin{aligned} \Vert f\Vert _{\infty }^2 \le C \int \limits _{I} \sup \limits _{\lambda \in \Lambda } | f *G_{\alpha } (\lambda )|^2 \, d\alpha \end{aligned}$$
(22)

for every \(f \in PW^2_{\sigma }\).

4 Sampling with Gaussian Kernel in Paley–Wiener Spaces

4.1 Auxiliary Statements

Recall that our aim is to describe the geometry of the sets \(\Lambda \subset {\mathbb R}^2\) such that for every \(f \in PW^2_{\sigma }\) the estimates

$$\begin{aligned} D_1 \Vert f\Vert ^2_{2} \le \sum \limits _{\lambda \in \Lambda } \int \limits _{I} |f * G_{\alpha }(\lambda )|^2 \, d\alpha \le D_2 \Vert f\Vert ^2_{2}, \end{aligned}$$
(23)

hold with some constants \(D_1\) and \(D_2\) independent of f.

4.1.1 Bessel-Type Inequality

We start by showing that the right-hand side inequality in (23) follows easily from classical sampling results for a u.d. set \(\Lambda \).

Proposition 1

Assume \(\Lambda \) is a u.d. set and \(I = (a,b) \times (c,d)\), where \(0<a<b<\infty , 0<c<d<\infty \). Then there is a constant \(D_2 = D_2(\sigma , I,\Lambda )\) such that

$$\begin{aligned} \sum \limits _{\lambda \in \Lambda } \int \limits _{I} |f * G_{\alpha }(\lambda )|^2 \, d\alpha \le D_2 \Vert f\Vert ^2_{2}, \end{aligned}$$

for every \(f \in PW^2_{\sigma }\).

Proof

Recall the Bessel inequality for the Paley-Wiener spaces: if \(\Lambda \) is a u.d. subset of \({\mathbb R}^n\), then there is a constant \(M=M(\Lambda , \sigma )\) such that

$$\begin{aligned} \sum \limits _{\lambda \in \Lambda }|g(\lambda )|^2 \le M \Vert g\Vert ^2_{2} \end{aligned}$$
(24)

for every \(g \in PW^2_{\sigma }\), see [20], Chapter 2, Theorem 17.

Note that convolution with the Gaussian kernel \(G_{\alpha }\) keeps the function in the Paley-Wiener space. Using Young's convolution inequality and the Bessel inequality, one can find a constant \(D_2\) such that for every \(f \in PW^2_{\sigma }\) the estimate

$$\begin{aligned} \sum \limits _{\lambda \in \Lambda } \int \limits _{I} |f * G_{\alpha }(\lambda )|^2 \, d\alpha \le M \int \limits _{I} \Vert f *G_{\alpha }\Vert ^2_{2} \, d\alpha \le M \Vert f\Vert _{2}^2 \int \limits _{I} \Vert G_{\alpha }\Vert _{1}^2 \, d\alpha \le D_2 \Vert f\Vert _{2}^2 \end{aligned}$$

is true. That finishes the proof of the proposition. \(\square \)
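The Bessel inequality (24) itself is easy to test numerically. In dimension one, with \(\Lambda = {\mathbb Z}\) and spectrum in \([-\pi ,\pi ]\), one even has equality (Parseval for the sinc basis); the sketch below (the test function is our toy choice) compares the two sides.

```python
import numpy as np

# Illustrative check of the Bessel inequality (24) in dimension one:
# for f in PW^2_pi and Lambda = Z one has sum |f(n)|^2 = ||f||_2^2.
x = np.linspace(-2000.0, 2000.0, 400001)
dx = x[1] - x[0]
f = np.sinc(x - 0.3) + 0.5 * np.sinc(x + 4.7)  # a function in PW^2_pi

l2_sq = float(np.sum(f ** 2) * dx)             # quadrature for ||f||_2^2
n = np.arange(-2000, 2001)
samples_sq = float(np.sum((np.sinc(n - 0.3) + 0.5 * np.sinc(n + 4.7)) ** 2))
print(l2_sq, samples_sq)                       # both close to 1.25
```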

4.1.2 Auxiliary Functions

In what follows we need some auxiliary functions with special properties: they should belong to the Paley-Wiener spaces and have a large \(L^2\)-norm together with a small \(L^2\)-norm of the gradient. Now, we specify these requirements.

Condition (B): Let \(\varepsilon \) be a small positive parameter. A family of functions \(\{\Phi _{\varepsilon }\}\) satisfies condition (B) if

(\(\beta _1\)):

\(\Phi _{\varepsilon }(0, 0) = 1, \quad \Vert \Phi _{\varepsilon }\Vert _{\infty } = 1;\)

(\(\beta _2\)):

\(\Phi _{\varepsilon } \in PW^2_{\varepsilon }\);

(\(\beta _3\)):

\(\Vert \Phi _{\varepsilon }\Vert _{2} \rightarrow \infty \) as \(\varepsilon \rightarrow 0\);

(\(\beta _4\)):

\(\Vert \nabla \Phi _{\varepsilon }\Vert _{2} \rightarrow 0\) as \(\varepsilon \rightarrow 0\).

Next, we provide a few examples to illustrate some additional difficulties that occur in the multi-dimensional setting. Then we present an example of functions \(\Phi _{\varepsilon }\) that satisfy condition (B).

Example 4

Let us return to the one-dimensional case. Consider

$$\begin{aligned} \Phi _{\varepsilon } (x) = \frac{\sin (\varepsilon x)}{\varepsilon x}. \end{aligned}$$

Observe that the functions \(\Phi _{\varepsilon }\) satisfy an analogue of condition (B) in the one-dimensional setting. Clearly, \(\Phi _{\varepsilon } (0) = 1, \Vert \Phi _{\varepsilon }\Vert _{\infty } = 1\), and \(\Phi _{\varepsilon } \in PW^2_{\varepsilon }\). One may easily check that

$$\begin{aligned} \Vert \Phi _{\varepsilon }\Vert _{2} = C \varepsilon ^{-1/2} \quad \text {and} \quad \Vert \Phi '_{\varepsilon }\Vert _{2} = C \varepsilon ^{1/2}. \end{aligned}$$

These relations prove the one-dimensional analogues of \((\beta _3)\) and \((\beta _4)\).

The passage from Bernstein to Paley-Wiener spaces and back in [19] was based on the properties of the functions in Example 4. One may try to construct functions \(\Phi _{\varepsilon }\) that satisfy condition (B) in the two-dimensional setting in the following natural way.

Example 5

Consider the function \(\Phi _{\varepsilon }\) defined by the formula

$$\begin{aligned} \Phi _{\varepsilon } (x, y) = \frac{\sin (\varepsilon x)}{\varepsilon x} \frac{\sin (\varepsilon y)}{\varepsilon y}. \end{aligned}$$

It is clear that conditions \((\beta _1)\) and \((\beta _2)\) hold. Property \((\beta _3)\) follows from

$$\begin{aligned} \Vert \Phi _{\varepsilon }\Vert _{2} = C \varepsilon ^{-1}. \end{aligned}$$

However, one may easily check that \(\Vert \nabla \Phi _{\varepsilon }\Vert _{2}\) does not converge to zero as \(\varepsilon \rightarrow 0\).
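Both claims are easy to confirm numerically: in one variable the two norms scale like \(\varepsilon ^{-1/2}\) and \(\varepsilon ^{1/2}\), so for the two-dimensional product the gradient norm \(\Vert \partial _x \Phi _{\varepsilon }\Vert _{2} = \Vert \Phi '_{\varepsilon }\Vert _{2} \Vert \Phi _{\varepsilon }\Vert _{2}\) stays of order one. A rough sketch (the discretization is ours):

```python
import numpy as np

# Illustrative check of Examples 4 and 5: ||Phi_eps||_2 ~ eps^(-1/2) and
# ||Phi_eps'||_2 ~ eps^(1/2) in 1D, so the 2D product kernel satisfies
# ||d/dx (Phi_eps(x) Phi_eps(y))||_2 = ||Phi_eps'||_2 * ||Phi_eps||_2 ~ const.
x = np.linspace(-20000.0, 20000.0, 2_000_001)
dx = x[1] - x[0]
for eps in (0.1, 0.05, 0.01):
    phi = np.sinc(eps * x / np.pi)          # sin(eps x)/(eps x)
    dphi = np.gradient(phi, dx)             # numerical derivative
    n2 = np.sqrt(np.sum(phi ** 2) * dx)     # ~ sqrt(pi/eps)
    dn2 = np.sqrt(np.sum(dphi ** 2) * dx)   # ~ C * sqrt(eps)
    print(eps, n2 * np.sqrt(eps), dn2 / np.sqrt(eps), n2 * dn2)
    # the first two ratios stabilize; the product n2*dn2 does not vanish
```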

Nevertheless, in the two-dimensional setting it is still possible to construct functions that satisfy condition (B). Now, we pass to the construction.

Lemma 8

Assume \(\varepsilon > 0\). There exist functions \(\Psi _{\varepsilon }\) such that

(P1):

\(\mathrm{supp \,}\Psi _{\varepsilon } \subset B_{\varepsilon }(0), \quad \Psi _{\varepsilon } \ge 0,\)

(P2):

\(C_1 \le \int \limits _{{\mathbb R}^2} \Psi _{\varepsilon }(x) \, dx \le C_2, \quad 0< C_1 \le C_2 < \infty ,\)

(P3):

\(\Vert \Psi _{\varepsilon }\Vert _{2} \ge \frac{C}{\varepsilon ^{3/4}},\)

(P4):

\(\left( \int \limits _{{\mathbb R}^2} |\Psi _{\varepsilon }(x)|^2 |x|^2 \, dx \right) ^{1/2} \le \frac{C}{\sqrt{ \log \frac{1}{\varepsilon }}}.\)

Proof

Fix a small \(0< \varepsilon < 1\) and denote by m the integer part of \(\log _2 \frac{1}{\varepsilon }\). For integers n in [m, 2m] we set \(a_n = 2^{2n}/n\). Next, we define the function \(\Psi _{\varepsilon }\) layer by layer by the formula

$$\begin{aligned} \Psi _{\varepsilon }(x) = a_n, \quad x \in B_{2^{-n}}(0) \setminus B_{2^{-n-1}}(0). \end{aligned}$$

For \(|x|>\varepsilon \) and \(|x| < \frac{\varepsilon ^2}{2}\) we set \(\Psi _{\varepsilon }(x)=0\). Note that the area of the ring \(B_{2^{-n}}(0) \setminus B_{2^{-n-1}}(0)\) is equal to \(\frac{3 \pi }{4} 2^{-2n}\).

Clearly, \(\Psi _{\varepsilon }\) satisfies (P1). To verify (P2) we write

$$\begin{aligned} \int \limits _{{\mathbb R}^2} \Psi _{\varepsilon }(x) \, dx = \frac{3\pi }{4} \sum \limits _{n = m}^{2m} 2^{-2n} a_n = C \sum \limits _{n = m}^{2m} \frac{1}{n}. \end{aligned}$$

Note that the right-hand side of this equation can be estimated with some fixed positive constants from above and below by

$$\begin{aligned} \int \limits _{\log \frac{1}{\varepsilon }}^{2\log \frac{1}{\varepsilon }} \frac{1}{t} \, dt = \log 2. \end{aligned}$$

Thus, condition (P2) follows. Next, we have

$$\begin{aligned} \Vert \Psi _{\varepsilon }\Vert ^2_{2} = \frac{3 \pi }{4} \sum \limits _{n = m}^{2m} 2^{-2n} a^2_n \ge C \sum \limits _{n = m}^{2m} \frac{2^{2n}}{n^2} \ge C \int \limits _{\log \frac{1}{\varepsilon }}^{2\log \frac{1}{\varepsilon }} 2^{2t} t^{-2}\, dt \ge \frac{C}{\varepsilon ^2 \log ^2 \frac{1}{\varepsilon }} \ge \frac{C}{\varepsilon ^{3/2}}, \end{aligned}$$

and (P3) follows. The estimate

$$\begin{aligned} \int \limits _{{\mathbb R}^2} |\Psi _{\varepsilon }(x)|^2 |x|^2 \, dx \le C \sum \limits _{n = m}^{2m} 2^{-4n} a^2_n \le C \sum \limits _{n = m}^{2m} \frac{1}{n^2} \le \frac{C}{ \log \frac{1}{\varepsilon }} \end{aligned}$$

implies (P4), which finishes the proof. \(\square \)
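Since \(\Psi _{\varepsilon }\) is radial and piecewise constant, the quantities in (P2)–(P4) reduce to finite sums over the layers, which makes the claimed scalings easy to check numerically; the bookkeeping below is our own sketch.

```python
import numpy as np

# Illustrative check of (P2)-(P4) for the layered function of Lemma 8:
# on the ring 2^-(n+1) < |x| <= 2^-n the function equals a_n = 2^(2n)/n.
for eps in (1e-2, 1e-4, 1e-6):
    m = int(np.log2(1.0 / eps))
    n = np.arange(m, 2 * m + 1, dtype=float)
    a = 4.0 ** n / n
    area = 0.75 * np.pi * 4.0 ** (-n)    # area of the n-th ring
    mass = np.sum(a * area)              # (P2): stays between two constants
    l2 = np.sqrt(np.sum(a ** 2 * area))  # (P3): at least C / eps^(3/4)
    mom = np.sqrt(np.sum(a ** 2 * area * 4.0 ** (-n)))  # (P4): |x| ~ 2^-n
    print(mass, l2 * eps ** 0.75, mom * np.sqrt(np.log2(1.0 / eps)))
    # columns 1 and 3 stay bounded; column 2 only grows as eps shrinks
```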

Corollary 1

There exist functions \(\Phi _\varepsilon \) satisfying condition (B).

Proof

Set \(c_{\Psi } := \int \limits _{{\mathbb R}^2} \Psi _{\varepsilon }(x) \, dx\). By \((P_2)\), \(c_{\Psi }\) is positive, finite, and bounded away from zero. Now, we may define \(\Phi _\varepsilon \) as the Fourier transform of \(\Psi _{\varepsilon }\) with a proper normalization:

$$\begin{aligned} \Phi _{\varepsilon }(x) = \frac{1}{c_{\Psi }} \int _{{\mathbb R}^2}e^{-i x\cdot t}\Psi _{\varepsilon }(t) \,dt. \end{aligned}$$

The property \((\beta _2)\) follows from \((P_1)\). Since \(\Psi _{\varepsilon } \ge 0\) and by the normalization, condition \((\beta _1)\) is fulfilled. By the Plancherel theorem, relations \((P_3)\) and \((P_4)\) imply estimates \((\beta _3)\) and \((\beta _4)\), respectively.

\(\square \)

4.2 From Bernstein to Paley–Wiener Spaces and Back

To prove Theorem 1, we will use the following statement, which describes the connection between sampling in Paley-Wiener and Bernstein spaces.

Theorem 3

Let \(\Lambda \) be a u.d. set in \({\mathbb R}^2\), \(I = (a,b) \times (c,d), 0< a< b< \infty , 0< c< d < \infty ,\) and \(\sigma '>\sigma >0\).

  1. (i)

    Assume the inequality

    $$\begin{aligned} \Vert f\Vert _\infty \le K\sup _{\alpha \in I} \sup _{\lambda \in \Lambda } \Vert f*G_\alpha \Vert _\infty \quad \text { for all } f\in B_{\sigma '} \end{aligned}$$
    (25)

    holds with some constant \(K = K(\sigma ', \Lambda )\). Then there exists a constant \(D_1 = D_1(\sigma , \Lambda )\) such that

    $$\begin{aligned} D_1 \Vert f\Vert ^2_{2} \le \sum \limits _{\lambda \in \Lambda } \int \limits _{I} |f * G_{\alpha }(\lambda )|^2 \, d\alpha \quad \text { for every } f \in PW^2_{\sigma } \end{aligned}$$
    (26)

    is true.

  2. (ii)

Assume that (26) holds with some constant \(D_1 = D_1(\sigma ', \Lambda )\) for all \(f \in PW^2_{\sigma '}\). Then there is a constant \(K = K(\sigma ', \Lambda )\) such that (25) is true for every \(f \in B_{\sigma }\).

Remark 2

For a similar result for space sampling see [15].

Remark 3

In this theorem we do not need to require I to be a rectangle. One may take \(I=(a,b) \subset {\mathbb R}\) with \(0< a< b < \infty \). In such a case, by \(G_{\alpha }(x)\) we mean \(G_\alpha (x_1,x_2) = e^{-\alpha (x_1^2+x_2^2)}\), and \(d \alpha \) is the standard one-dimensional Lebesgue measure.

The proof of Theorem 3 is similar to the proof of Theorem 3 in the paper [19]. We provide the argument for statement (i) and leave the proof of (ii) to the reader. The functions \(\Phi _{\varepsilon }\) that satisfy condition (B) play a crucial role in our argument.

Proof of Theorem 3

Take \(\varepsilon > 0\) such that \(\sigma + \varepsilon < \sigma '\). By our assumption, for every \(q \in B_{\sigma + \varepsilon }\) the estimate

$$\begin{aligned} \Vert q\Vert _{\infty } \le C \sup \limits _{\alpha \in I} \sup \limits _{ \lambda \in \Lambda } |q * G_{\alpha }(\lambda )|, \end{aligned}$$
(27)

is true and our aim is to prove (26). Consider functions \(\Phi _{\varepsilon }\) satisfying condition (B). Using \((\beta _1)\), we get

$$\begin{aligned} \Vert f\Vert ^2_{2} = \int \limits _{{\mathbb R}^2} |f(x)|^2 dx \le \int \limits _{{\mathbb R}^2} \sup \limits _{t \in {\mathbb R}^2} \left| \Phi _{\varepsilon }(x - t) f(t) \right| ^2 dx. \end{aligned}$$
(28)

Note that \(q(t) := \Phi _{\varepsilon }(x - t) f(t) \in PW^2_{\sigma + \varepsilon }\), and we can apply Lemma 7 to obtain

$$\begin{aligned} |q(t)|^2 \le C \int \limits _I \sup \limits _{\lambda \in \Lambda } \left| \,\int \limits _{{\mathbb R}^2} G_{\alpha }(\lambda - s) \Phi _{\varepsilon }(x - s) f(s) \, ds \right| ^2 d \alpha , \end{aligned}$$
(29)

where the constant C does not depend on t. To provide the estimate from above we may replace \(\sup \limits _{\lambda \in \Lambda }\) by \(\sum \limits _{\lambda \in \Lambda }\), and switch the order of integration and summation:

$$\begin{aligned} \Vert f\Vert ^2_{2} \le C \sum \limits _{\lambda \in \Lambda } \int \limits _I \int \limits _{{\mathbb R}^2} \left| \, \int \limits _{{\mathbb R}^2} G_{\alpha }(\lambda - s) \Phi _{\varepsilon }(x - s) f(s) \, ds \right| ^2 \, dx \, d \alpha . \end{aligned}$$
(30)

Set

$$\begin{aligned} Y_1 = \left| \Phi _{\varepsilon }(x - \lambda ) \,\int \limits _{{\mathbb R}^2} G_{\alpha }(\lambda - s) f(s) \, ds \right| ^2, \end{aligned}$$
(31)
$$\begin{aligned} Y_2 = \left| \,\int \limits _{{\mathbb R}^2} G_{\alpha }(\lambda - s) \left( \Phi _{\varepsilon } (x - \lambda ) - \Phi _{\varepsilon }(x - s) \right) f(s) \, ds \right| ^2. \end{aligned}$$
(32)

Using the inequality \(|a + b|^2 \le C(|a|^2 + |b|^2)\), we deduce from (30), (31), and (32) that

$$\begin{aligned} \Vert f\Vert ^2_{2} \le C \sum \limits _{\lambda \in \Lambda } \int \limits _I \int \limits _{{\mathbb R}^2} \left( Y_1 + Y_2\right) \, dx \, d\alpha . \end{aligned}$$
(33)

Next, we estimate the terms with \(Y_1\) and \(Y_2\) separately. The value of \(\sum \limits _{\lambda \in \Lambda } \int \limits _I \int \limits _{{\mathbb R}^2}Y_1\, dx \, d\alpha \) is majorized by

$$\begin{aligned} \sum \limits _{\lambda \in \Lambda } \int \limits _I \left( \,\, \int \limits _{{\mathbb R}^2} | \Phi _{\varepsilon }(x - \lambda )|^2 \, dx \right) |(f *G_{\alpha })(\lambda )|^2 \, d\alpha \le \Vert \Phi _{\varepsilon }\Vert _{2}^2 \int \limits _I \sum \limits _{\lambda \in \Lambda } |(f *G_{\alpha })(\lambda )|^2 d \alpha . \end{aligned}$$
(34)

The inequalities for the second term are more complicated. Set

$$\begin{aligned} H(x; \lambda ,s) = |\Phi _{\varepsilon }(x - \lambda ) - \Phi _{\varepsilon }(x - s)|. \end{aligned}$$

Writing, with a slight abuse of notation, \(x = (x, y)\), we start with the observation

$$\begin{aligned} H(x; \lambda ,s) \le \left| \int \limits _{s_1}^{\lambda _1} \frac{\partial \Phi _{\varepsilon }}{\partial x}(x - u_1, y - \lambda _2) du_1 \right| + \left| \int \limits _{s_2}^{\lambda _2} \frac{\partial \Phi _{\varepsilon }}{\partial y}(x - s_1, y - u_2) du_2 \right| . \end{aligned}$$

Using Cauchy-Schwarz inequality, we write

$$\begin{aligned} H^2(x; \lambda ,s) \le C \Bigg (|\lambda _1 - s_1| \left| \, \int \limits _{s_1}^{\lambda _1} \left| \frac{\partial \Phi _{\varepsilon }}{\partial x}(x - u_1, y - \lambda _2)\right| ^2 du_1 \right| \\ + |\lambda _2 - s_2| \left| \, \int \limits _{s_2}^{\lambda _2} \left| \frac{\partial \Phi _{\varepsilon }}{\partial y}(x - s_1, y - u_2)\right| ^2 du_2 \right| \Bigg ). \end{aligned}$$

Thereby, for \(\lambda = (\lambda _1, \lambda _2)\) and \(s = (s_1, s_2)\) we get

$$\begin{aligned} \int \limits _{{\mathbb R}^2} H^2(x; \lambda ,s) \, dx \le C |s_1 - \lambda _1| |s_2 - \lambda _2| \Vert \nabla \Phi _{\varepsilon }\Vert _{2}^2, \end{aligned}$$

whence

$$\begin{aligned} \Vert H(\,\cdot \,; \lambda , s)\Vert ^2_{2} \le C |s- \lambda |^{2} \Vert \nabla \Phi _{\varepsilon }\Vert _{2}^2. \end{aligned}$$
(35)

Now, we return to the estimation of the term with \(Y_2\) in the formula (33). Applying Cauchy–Schwarz inequality, we arrive at

$$\begin{aligned} \sum \limits _{\lambda \in \Lambda } \, \int \limits _{{\mathbb R}^2} Y_2 \, dx= & {} \sum \limits _{\lambda \in \Lambda }\,\, \int \limits _{{\mathbb R}^2} \left| \,\, \int \limits _{{\mathbb R}^2} f(s) G_{\alpha }(\lambda - s) H(x; \lambda ,s) \, ds \right| ^2 dx \\\le & {} \sum \limits _{\lambda \in \Lambda } \,\int \limits _{{\mathbb R}^2} \left( \,\, \int \limits _{{\mathbb R}^2} |f(s)|^2 G_{\alpha }(\lambda - s) ds \, \int \limits _{{\mathbb R}^2} G_{\alpha }(\lambda - s) H^2(x; \lambda ,s) ds \right) dx. \end{aligned}$$

With estimate (35) in hand, we continue

$$\begin{aligned} \sum \limits _{\lambda \in \Lambda }\, \int \limits _{{\mathbb R}^2} Y_2 \, dx\le & {} \sum \limits _{\lambda \in \Lambda } \left( \,\, \int \limits _{{\mathbb R}^2} |f(s)|^2 G_{\alpha }(\lambda - s) ds \,\int \limits _{{\mathbb R}^2} G_{\alpha }(\lambda - s) \Vert H(\,\cdot \,; \lambda ,s)\Vert ^2_{2} ds \right) \\\le & {} C \Vert \nabla \Phi _{\varepsilon }\Vert _{2}^2 \sum \limits _{\lambda \in \Lambda } \left( \,\, \int \limits _{{\mathbb R}^2} |f(s)|^2 G_{\alpha }(\lambda - s) ds \,\int \limits _{{\mathbb R}^2} G_{\alpha }(\lambda - s) |s - \lambda |^2 ds \right) . \end{aligned}$$

Clearly,

$$\begin{aligned} \int \limits _{{\mathbb R}^2} G_{\alpha }(\lambda - s) |s - \lambda |^2 ds \le C, \end{aligned}$$
(36)

and since \(\Lambda \) is a u.d. set, we have

$$\begin{aligned} \sum \limits _{\lambda \in \Lambda } G_{\alpha }(\lambda - s) \le C. \end{aligned}$$
(37)

Using relations (36) and (37), we finish the estimate of the term with \(Y_2\):

$$\begin{aligned} \sum \limits _{\lambda \in \Lambda }\; \int \limits _{I} \int \limits _{{\mathbb R}^2} Y_2\, dx \, d \alpha \le C |I| \Vert \nabla \Phi _{\varepsilon }\Vert _{2}^2 \Vert f\Vert _{2}^2. \end{aligned}$$
(38)

Combining (33), (34), and (38) together, we get

$$\begin{aligned} \Vert f\Vert ^2_{2} \le C_1 |I| \Vert \nabla \Phi _{\varepsilon }\Vert _{2}^2 \Vert f\Vert _{2}^2 + C_2 \Vert \Phi _{\varepsilon }\Vert _{2}^2 \int \limits _I \sum \limits _{\lambda \in \Lambda } |(f *G_{\alpha })(\lambda )|^2 d \alpha . \end{aligned}$$
(39)

To finish the proof we invoke properties \((\beta _3)\) and \((\beta _4)\). Indeed, taking sufficiently small \(\varepsilon > 0\) we make the first summand less than \(\frac{\Vert f\Vert ^2_{2}}{2}\), and (26) follows. \(\square \)

4.3 Proof of Theorem 1

Now, we are ready to prove the main result.

(i) \(\Rightarrow \) (ii)

Assume that for the set \(\Lambda \) condition (i) is satisfied. In particular, for any \(\sigma > 0\) inequality (26) is true for every \(f\in PW^2_{\sigma }\) with a constant \(D_1\) depending on \(\sigma \). Then Theorem 3 implies that for every \(\sigma > 0\) inequality (25) holds for every \(f \in B_{\sigma }\) with a constant K depending on \(\sigma \). By Theorem 2, we deduce that \(\Lambda \) satisfies condition (A).

(ii) \(\Rightarrow \) (i) Assume that condition (ii) is fulfilled. Recall that Proposition 1 ensures that the right-hand side estimate in (1) holds with a constant independent of f. Thus, it suffices to verify that inequality (26) is true for every \(\sigma > 0\) and every \(f \in PW^2_{\sigma }\) with some constant \(D_1 = D_1(\sigma )\). By our assumption and Theorem 2, inequality (25) is true for every \(\sigma > 0\) and every \(f \in B_{\sigma }\) with a constant K depending only on \(\sigma \). Applying Theorem 3, we see that (26) holds for every \(\sigma > 0\) and \(f \in PW^2_{\sigma }\) with a constant \(D_1\) depending only on \(\sigma \). Thus, condition (i) is true. That finishes the proof.

\(\square \)

5 Remarks

First, we would like to note that Theorems 1, 2, and 3 remain true for a wider collection of kernels that satisfy some additional assumptions similar to conditions \((\beta )\)–\((\theta )\) in [19].

Second, one may check that our approach provides a complete solution to Main Problem for the Bernstein spaces \(B_{[-\sigma , \sigma ]^n}\) with the Gaussian kernel in \({\mathbb R}^n\) and any index set \(I = \prod \limits _{i=1}^n[a_i,b_i]\). One may therefore formulate an analogue of Theorem 2 in the multi-dimensional setting. However, the passage to the Paley-Wiener spaces faces obstacles similar to those discussed in Example 5, Sect. 4.

Recall the frame inequalities for a continuous frame \(\{e_x\}_{x \in X}\):

$$\begin{aligned} D_1 \Vert f\Vert ^p_{p} \le \int \limits _{X} | \langle f, e_{x} \rangle |^p dx \, \le D_2 \Vert f\Vert ^p_{p} \end{aligned}$$
(40)

Inequalities (1) correspond to the case \(p=2\), \(X = I \times \Lambda \), and dx the product of the n-dimensional Lebesgue measure on I and the counting measure on \(\Lambda \). As was pointed out to me by a reviewer, the inequalities (40) typically hold for the whole range of Banach spaces \((X, \Vert \cdot \Vert _p), \, 1 \le p < \infty \), simultaneously, provided the frame \(\{e_x\}\) has sufficiently good localization, see [1, 8], and [9]. However, in our setting we did not manage to prove the analogue of Theorem 1 for all \(p \in [1, \infty )\) when the dimension \(n > 2\).

On the other hand, using our approach, one may check that for every \(n > 2\) there are a number p(n) and functions \(\Phi _{\varepsilon }\) such that for \(p \ge p(n)\) we have

$$\begin{aligned} \Vert \Phi _{\varepsilon }\Vert _{p} \rightarrow \infty , \quad \Vert \nabla \Phi _{\varepsilon }\Vert _{p} \rightarrow 0 \quad \text { as } \varepsilon \rightarrow 0. \end{aligned}$$

Thus, a modification of the proof of Theorem 3 leads to a complete solution of the Main Problem for \(PW^p_{[-\sigma , \sigma ]^n}\) spaces with \(p \ge p(n)\).