Abstract
Let \(I=(a,b)\times (c,d)\subset {\mathbb R}_{+}^2\) be an index set and let \(\{G_{\alpha }(x) \}_{\alpha \in I}\) be a collection of Gaussian functions, i.e. \(G_{\alpha }(x) = \exp (-\alpha _1 x_1^2 - \alpha _2 x_2^2)\), where \(\alpha = (\alpha _1, \alpha _2) \in I, \, x = (x_1, x_2) \in {\mathbb R}^2\). We present a complete description of the uniformly discrete sets \(\Lambda \subset {\mathbb R}^2\) such that every bandlimited signal f admits a stable reconstruction from the samples \(\{f *G_{\alpha } (\lambda )\}_{\lambda \in \Lambda }\).
1 Introduction
The sampling problem deals with recovery of bandlimited signals f from the collection of measurements \(\{f(\lambda )\}_{\lambda \in \Lambda }\) taken at the points of some uniformly discrete set \(\Lambda \subset {\mathbb R}^d\). The classical results deal with one dimensional signals that are elements of the Paley-Wiener or Bernstein spaces over a fixed interval \([-\sigma ,\sigma ]\). The sets \(\Lambda \) that provide the stable reconstruction, in this case, are completely described. For the Bernstein spaces, the answer is given in terms of a certain density of \(\Lambda \) and bandwidth parameter \(\sigma \), see [5]. The result for Paley-Wiener spaces is more complicated, see [16, 18]. It cannot be expressed in terms of a density of \(\Lambda \). We refer the reader to [5, 18] for the detailed exposition and the proofs.
The complexity of the task significantly increases in the multi-dimensional setting. Landau [11] proved that the necessary conditions for stable sampling remain valid for the Paley-Wiener spaces over any domain (see [13] for a much simpler proof). A sufficient condition for a sampling of signals from the Bernstein space with spectrum in a ball was obtained by Beurling, see [6]. We also refer the reader to [15] for some extensions. However, there is a gap between the necessary and sufficient conditions. Moreover, even for the simplest spectra as balls or cubes, examples show that no description of sampling sets is possible in terms of a density of \(\Lambda \), see Sect. 5.7 in [14].
Recently the so-called dynamical sampling problem (in what follows, we will more often use the term space-time sampling problem) attracted a lot of attention, see [2,3,4, 19], and references therein.
In this paper, we consider the following variant of the dynamical sampling problem.
Main Problem
Let \(\Lambda \) be a uniformly discrete subset of \({\mathbb R}^n\) and let \({G_{\alpha }(x)}\) be a collection of functions parametrized by \(\alpha \in I\). What assumptions should be imposed on the spatial set \(\Lambda \), index set I, and functions \(G_{\alpha }\) to enable the recovery of every band-limited signal f from its space-time samples \(\{f *G_{\alpha }(\lambda )\}_{\lambda \in \Lambda , \alpha \in I}\)?
For signals f from a Paley-Wiener space \(PW_{\sigma }\) (see the definition below) this means that the inequalities
$$\begin{aligned} D_1 \Vert f\Vert ^2 \le \sum \limits _{\lambda \in \Lambda } \int \limits _{I} |f * G_{\alpha }(\lambda )|^2 \, d\alpha \le D_2 \Vert f\Vert ^2 \end{aligned}$$(1)
hold with some positive constants \(D_1\) and \(D_2\). Here, as usual, \(\Vert \cdot \Vert \) denotes the \(L^2\)-norm.
Recall that a set \(\Lambda = \{\lambda _k\} \subset {\mathbb R}^n\) is called uniformly discrete (u.d.) if
$$\begin{aligned} \delta (\Lambda ) := \inf \limits _{k \ne l} |\lambda _k - \lambda _l| > 0. \end{aligned}$$
The constant \(\delta (\Lambda )\) is called the separation constant of \(\Lambda \).
In the one-dimensional setting, this problem appears in particular in connection with tasks of mathematical physics. Several examples are presented in [4]. One of them is the initial value problem for the heat equation
$$\begin{aligned} u_t = \sigma u_{xx}, \quad x \in {\mathbb R}, \ t > 0, \end{aligned}$$(2)
with initial condition
$$\begin{aligned} u(x, 0) = f(x), \quad x \in {\mathbb R}. \end{aligned}$$
It is well-known that the solution is given by the formula
$$\begin{aligned} u(x, \alpha ) = (f * g_{\alpha })(x), \end{aligned}$$
where \( g_{\alpha }(x) = \frac{1}{\sqrt{4 \pi \alpha \sigma }} \exp \left( -\frac{x^2}{4 \alpha \sigma } \right) .\) Note that the Main Problem applied to equation (2) provides the reconstruction of the initial function f from the states \(\{u(\lambda , \alpha )\}_{\lambda \in \Lambda , \alpha \in I}\).
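The heat kernels above form a convolution semigroup, \(g_s * g_t = g_{s+t}\), which is the analytic content of the solution formula. The following numerical sketch (ours, not part of the paper; it assumes \(\sigma = 1\)) checks this identity on a grid:

```python
import numpy as np

def g(x, t):
    """Heat kernel g_t(x) = exp(-x^2/(4 t sigma)) / sqrt(4 pi t sigma), with sigma = 1."""
    return np.exp(-x**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

dx = 0.02
x = np.arange(-15.0, 15.0 + dx / 2, dx)          # symmetric grid of odd length

# Discretized convolution (g_{0.3} * g_{0.5})(x), compared with g_{0.8}(x).
conv = np.convolve(g(x, 0.3), g(x, 0.5), mode="same") * dx
err = np.max(np.abs(conv - g(x, 0.8)))
print(err)  # discretization error only
```

Because Gaussians decay so fast, the Riemann-sum convolution reproduces \(g_{s+t}\) up to an error far below any plotting resolution.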
A variant of Main Problem for the one-dimensional setting was considered by Aldroubi et al. in [4]. In particular, it was established that, unlike the classical sampling setting, the assumptions that should be imposed on the set \(\Lambda \) to solve the Main Problem cannot be expressed in terms of some density of \(\Lambda \), see Example 4.1 in [4]. More precisely, one may construct a set with an arbitrarily small density that provides stable reconstruction of the initial signal. Also in that paper, it was shown that for the solution of the Main Problem we have to require \(\Lambda \) to be relatively dense.
In the one-dimensional setting, for a large collection of kernels, a solution of Main Problem was presented in [19]: It turns out that stable recovery from the samples on \(\Lambda \) is possible if and only if \(\Lambda \) is not (in a certain sense) “close” to an arithmetic progression.
It seems natural to extend the results of [4, 19] to the multi-dimensional situation. Below we focus on the two-dimensional variant of the problem for the case of Gaussian kernel
Our approach is similar to the one in [19]. However, this problem is considerably more involved than the one in the one-dimensional setting. One needs to apply some additional ideas. See Sect. 5 for some remarks on the case of dimension higher than 2.
We pass to the description of the geometry of the sets \(\Lambda \) that solve the planar Main Problem.
Definition 1
A curvilinear lattice in \({\mathbb R}^2\) defined by three vectors
is the set of all vectors \(\lambda =(\lambda _1,\lambda _2)\in {\mathbb R}^2\) satisfying
The blue curves in Fig. 1 correspond to the curvilinear lattice \(l_{t, \xi , r}\) with \(t=(0,0), \xi = (1,1)\), and \(r = (1/\sqrt{10}, 3/\sqrt{10})\).
In what follows the notation \(W(\Lambda )\) stands for the collection of all weak limits of translates of a uniformly discrete set \(\Lambda \), see the definition in Sect. 2.
Condition (A): A uniformly discrete set \(\Lambda =\{\lambda =(\lambda _1,\lambda _2)\}\subset {\mathbb R}^2\) satisfies condition (A) if every set \(\Lambda ^*\in W(\Lambda )\) is non-empty and does not lie on any curvilinear lattice \(l_{t,\xi ,r}\).
Remark 1
A Delone set is a set that is both uniformly discrete and relatively dense. In particular, it is easy to check that every set that satisfies condition (A) is a Delone set.
We denote by \(PW^2_{\sigma }\) the space of functions, square integrable on \({\mathbb R}^2\), whose spectrum is supported in the square \([-\sigma , \sigma ]^2\), i.e.
where
Now, we are ready to formulate the main result.
Theorem 1
Let \(\Lambda \subset {\mathbb R}^2\) be a u.d. set and \(I = (a,b)\times (c, d)\) a rectangle with \(0< a< b < \infty \), \(0< c< d < \infty \). The following statements are equivalent:
-
(i)
For every \(\sigma >0\) there are positive constants \(D_1 = D_1(\sigma , I, \Lambda )\) and \(D_2 = D_2(\sigma , I, \Lambda )\) such that (1) holds true.
-
(ii)
\(\Lambda \) satisfies condition (A).
The paper is organized as follows. In Sect. 2 we give all necessary definitions and fix some notation. As mentioned above, we employ the approach from [19] and divide the solution into two parts. We start by solving the Main Problem for the Bernstein spaces \(B_{\sigma }\) and prove an analogue of Theorem 1 in Sect. 3. In Sect. 4 we investigate the connection between sampling with the Gaussian kernel in the Paley-Wiener and Bernstein spaces; we also prove the main result there. Remarks on the multi-dimensional case and some open problems that puzzle us are placed in Sect. 5.
2 Notations and Preliminaries
In the present paper, we deal with signals that belong to the Bernstein and Paley-Wiener spaces. Since we investigate Main Problem simultaneously for all bandwidth parameters, we may consider only the functions with the spectrum supported in squares. This leads us to
Definition 2
Given a positive number \(\sigma \), we denote by \(B_{\sigma }\) the space of all entire functions f in \({\mathbb C}^2\) satisfying the estimate
where the constant \(C=C(f)\) depends only on f.
It is well-known that \(B_{\sigma }\) consists of the bounded continuous functions that are the inverse Fourier transforms of tempered distributions supported on the square \([-\sigma , \sigma ]^2\). We refer the reader to [12] for more information about Bernstein spaces.
For \(1\le p < \infty \) we may define the Paley-Wiener spaces by the formula
or equivalently
Following [5] (see also Chapter 3.4 in [10, 14, 17]), we introduce auxiliary
Definition 3
Let \(\{\Lambda _k\}\) and \(\Lambda \) be u.d. subsets of \({\mathbb R}^n\) satisfying \(\delta (\Lambda _k) \ge \delta > 0, \, k \in {\mathbb N}\). We say that the sequence \(\{\Lambda _k\}\) converges weakly to \(\Lambda \) if for every large \( R > 0\) and small \(\varepsilon > 0\) there exists \(N = N(R, \varepsilon )\) such that
$$\begin{aligned} \Lambda _k \cap [-R, R]^n \subset \Lambda + B_{\varepsilon }(0) \quad \text {and} \quad \Lambda \cap [-R, R]^n \subset \Lambda _k + B_{\varepsilon }(0) \end{aligned}$$
for all \(k \ge N\).
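For finite point configurations, the window criterion of Definition 3 is straightforward to test. The sketch below (illustrative only; the helper names are ours) checks the two-sided \(\varepsilon \)-closeness inside \([-R,R]^n\):

```python
import numpy as np

def contained_up_to_eps(A, B, R, eps):
    """True if every point of A inside [-R, R]^n lies within eps of some point of B."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    window = A[np.all(np.abs(A) <= R, axis=1)]
    return all(np.min(np.linalg.norm(B - a, axis=1)) < eps for a in window)

def weakly_close(L1, L2, R, eps):
    """Both inclusions of the weak-convergence criterion on the window [-R, R]^n."""
    return contained_up_to_eps(L1, L2, R, eps) and contained_up_to_eps(L2, L1, R, eps)

# A lattice and a slightly perturbed copy are close on the window [-10, 10]^2 ...
k = np.arange(-4, 5) * 2 * np.pi
lattice = np.array([(a, b) for a in k for b in k])
perturbed = lattice + 0.01
print(weakly_close(lattice, perturbed, R=10.0, eps=0.05))      # True
# ... while a copy shifted by (1, 1) is not.
print(weakly_close(lattice, lattice + 1.0, R=10.0, eps=0.05))  # False
```

Running such a check for a growing sequence of windows \(R_k \rightarrow \infty \) and tolerances \(\varepsilon _k \rightarrow 0\) mimics weak convergence of the translates.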
Definition 4
By \(W(\Lambda )\) we denote the collection of all weak limits of the translates \(\Lambda _k:=\Lambda - x_k\), where \(\{x_k\} \subset {\mathbb R}^n\) is an arbitrary (bounded or unbounded) sequence.
We supply these definitions with several examples concerning the condition (A).
Example 1
To construct the set that does not satisfy condition (A), one may consider the following perturbation of the rectangle lattice:
Taking any sequence \(\{x_k\} \subset {\mathbb R}^2\) such that \(|x_k| \rightarrow \infty \), one may check by the definition that the sequence \(\Lambda - x_k\) converges weakly to the set \(\left\{ \left( 2 \pi n, 2 \pi m \right) , \quad m,n \in {\mathbb Z}\right\} \), which, clearly, lies on \(l_{t, \xi , r}\) with \(t = (0,0)\), \(\xi = (1,1)\), and \(r = \left( \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \right) \).
The following example is inspired by the papers [10, 17], which considered planar mobile sampling problems.
Example 2
Set
i.e. \(D_{{\mathbb Z}}\) is a collection of concentric equidistant circles centered at (0, 0). Now, one may consider \(\Lambda \) to be any u.d. set located on the circles \(D_{{\mathbb Z}}\). One may check (see the proofs in [10, 17]) that, for every unbounded sequence \(\{x_n\}\), any weak limit of the translates \(D_{{\mathbb Z}} - x_n\) lies on parallel lines. The argument is based on the simple observation that the traces of the translated circles in the rectangle \([-R,R]^2\) (for a fixed \(R>0\)) get closer and closer to parallel lines as the value \(|x_n|\) increases. Moreover, the distance between these lines is \(2 \pi k\). For instance, one may take \(x_n = (0,2\pi n)\) and pass to a weak limit \(\Lambda - x_n \rightarrow \Lambda ^{*}\) to obtain that \(\Lambda ^{*} \subset l_{t,\xi ,r}\) with \(t = (0,0)\), \(\xi = (1,1)\), and \(r = \left( \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \right) \).
The next example concerns sets that satisfy condition (A). In what follows, we skip some technical details.
Example 3
One may easily find a u.d. set \(\Lambda \) in \({\mathbb R}^2\) such that its projection \(P_{\Lambda }\) onto any rectangle \(P = [0, a] \times [0,b]\) is dense in P, where
For instance, one may take a u.d. subset of \((\sqrt{2}\, {\mathbb Z}\cup {\mathbb Z}) \times (\sqrt{3}\, {\mathbb Z}\cup {\mathbb Z})\) (with a dense projection). Therefore, the set \(\Lambda \) and all its weak limits of translates do not lie on any curvilinear lattice, and \(\Lambda \) satisfies condition (A).
Below we will use the simple fact that for every sequence \(x_k\) there is a subsequence \(x_{k_j}\) such that \(\Lambda - x_{k_j}\) converges weakly.
Throughout this paper we will adopt the following notations:
-
Let \(x \in {\mathbb R}^n, \, y \in {\mathbb R}^n, \, n \in {\mathbb N}\). Define \(|x| := \sqrt{x_1^2 + \dots + x_n^2}\). Notation \(x \cdot y\) stands for the scalar product of vectors x and y.
-
Set \(B_r(x) := \{y \in {\mathbb R}^n \, : \, |x-y| < r\},\,\,\) where \(x \in {\mathbb R}^n\) and \(r>0\).
-
Given \(\lambda =(\lambda _1, \dots , \lambda _n)\) and \(f \in L^{\infty }({\mathbb R}^n)\), we set
$$\begin{aligned} f_\lambda (x):=f(x-\lambda ) = f(x_1 -\lambda _1, \dots , x_n - \lambda _n). \end{aligned}$$ -
By |A| we denote the n-th dimensional Lebesgue measure of a set \(A \subset {\mathbb R}^n\).
-
By C we denote various positive constants whose values may change from line to line.
In what follows, we will mostly focus on the two-dimensional case. It is convenient to fix the following notations.
-
Given a point \(x=(x_1,x_2)\in {\mathbb R}^2\), denote \(\tilde{x}:=(-x_1,x_2)\).
-
A symmetrization operator S is defined by the formula
$$\begin{aligned} Sf(x):=f(x)+f(\tilde{x})+f(-\tilde{x})+f(-x), \quad f \in L^{\infty }({\mathbb R}^2). \end{aligned}$$ -
Set \({\mathbb T}:= \{x \in {\mathbb R}^2 \, : \, |x| = 1\}\).
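As a quick numerical sanity check of this notation (the code is ours), the symmetrization of a complex exponential can be computed directly; the identity \(S(e^{i \xi \cdot x}) = 2(\cos (\xi \cdot x) + \cos (\tilde{\xi } \cdot x))\), used later in Sect. 3, follows since \(\xi \cdot \tilde{x} = \tilde{\xi } \cdot x\):

```python
import numpy as np

def tilde(x):
    """x~ = (-x1, x2)."""
    return np.array([-x[0], x[1]])

def S(f, x):
    """Symmetrization operator: Sf(x) = f(x) + f(x~) + f(-x~) + f(-x)."""
    x = np.asarray(x, dtype=float)
    return f(x) + f(tilde(x)) + f(-tilde(x)) + f(-x)

xi = np.array([1.0, 1.0])
exp_xi = lambda x: np.exp(1j * (xi @ x))

x0 = np.array([0.3, -0.7])
lhs = S(exp_xi, x0)
rhs = 2 * (np.cos(xi @ x0) + np.cos(tilde(xi) @ x0))
print(abs(lhs - rhs))   # ~ 0
```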
3 Sampling with Gaussian Kernel in Bernstein Spaces
An analogue of Theorem 1 for the Bernstein spaces is as follows:
Theorem 2
Let \(\Lambda \subset {\mathbb R}^2\) be a u.d. set and \(I = (a,b)\times (c,d)\) with \(0< a< b< \infty ,\, 0< c< d < \infty \). The following statements are equivalent:
-
(i)
For every \(\sigma >0\) there is a constant \(K=K(\sigma )\) such that
$$\begin{aligned} \Vert f\Vert _\infty \le K\sup _{\alpha \in I} \sup _{\lambda \in \Lambda } \Vert f*G_\alpha \Vert _\infty \quad \text { for every } f\in B_\sigma . \end{aligned}$$ -
(ii)
\(\Lambda \) satisfies condition (A).
Above, as usual, \(\Vert \cdot \Vert _{{\infty }}\) denotes the sup-norm
$$\begin{aligned} \Vert f\Vert _{\infty } = \sup \limits _{x \in {\mathbb R}^2} |f(x)|. \end{aligned}$$
3.1 Proof of Theorem 2, Part I
(ii) \(\Rightarrow \) (i). In what follows we assume that (i) does not hold. We have to show that (ii) fails, i.e. that there is a set \(\Lambda ^*\in W(\Lambda )\) that lies on some curvilinear lattice. The proof is divided into 5 steps. For the convenience of the reader, we briefly describe them here and then pass to the argument.
In Step 1, using the standard Beurling technique, we find \(\Lambda ^{*} \in W(\Lambda )\) and \(g \in B_{\sigma }\) such that \(g *G_{\alpha }\) vanishes on \(\Lambda ^{*}\) for every \(\alpha \in I\). Our next step is to show that \(Sg_{\lambda } = 0\) for every \(\lambda \in \Lambda ^{*}\). In Step 3 we prove that \(\Lambda ^{*}\) lies on some curvilinear lattice under the assumption that \(g \in L^2({\mathbb R}^2)\). In Steps 4 and 5, using some approximation technique, we show how to get rid of the requirement that g is square integrable.
1. By our assumption, one can find a sequence of functions \(f_n\in B_\sigma \) satisfying
We may then introduce a sequence of functions
where x(n) are chosen so that \(|f_n(x(n))| > 1 - \frac{1}{n}, \, n \in {\mathbb N}\). Then we have
Using the compactness property of the Bernstein space (see, e.g., [14], Proposition 2.19), we may assume that the sequence \(g_n\) converges (uniformly on compacts in \({\mathbb C}^2\)) to some function \(g\in B_\sigma \). Moreover, passing if necessary to a subsequence, we may assume that the translates \(\Lambda +x(n)\) converge weakly to some u.d. set \(\Lambda ^*\). Of course, we may assume that \(\Lambda ^{*}\) is non-empty. Otherwise, we would arrive at a contradiction with condition (A). Clearly, g satisfies
For a point \(z = (z_1, z_2) \in {\mathbb C}^2\) we consider its complex conjugate point \(\bar{z} = (\bar{z_1},\bar{z_2})\).
Consider the decomposition \(g(z) = \varphi (z) + i \psi (z)\), where
Then \(\varphi \) and \(\psi \) are real (on \({\mathbb R}^2\)) entire functions satisfying (5). Thereby, the functions \(\varphi \) and \(\psi \) belong to \(B_\sigma \), and since the kernel \(G_{\alpha }\) takes only real values on \({\mathbb R}^2\), we have \((\varphi *G_{\alpha })(\lambda )=0\) and \((\psi *G_{\alpha })(\lambda )=0\) for every \(\lambda \in \Lambda ^{*}\). Thus, we may continue the argument assuming that g is real-valued.
2. Recall that the notations \(\tilde{x}\) and Sf were introduced in Sect. 2.
Lemma 1
Assume a function \(g\in B_\sigma \) satisfies (6). Then for every \(\lambda \in \Lambda ^*\) the equality
holds for a.e. \( x \in {\mathbb R}^2\).
Proof
Without loss of generality, we may assume that \(\lambda = (0,0)\) and \(I=\left( \frac{1}{2},1\right) ^2\). Observe that
Set
Clearly, \(h \in L^2({\mathbb R}^2)\) and it is even in variables \(x_1\) and \(x_2\). Moreover, using (8), one can check that \((h *G_{\alpha })(0,0) = 0\) for any \(\alpha \in I_+\).
For every multi-index \(m = (m_1, m_2) \in \mathbb {N}^2\) and \(u \in I_+\) we have
In particular, h is orthogonal to every monomial \(x_1^{2m_1} x_2^{2m_2}\) with even exponents in the weighted space \(L^2\left( {\mathbb R}^2, \exp \left\{ - \frac{1}{2} (x_1^2+x_2^2) \right\} \right) \). Moreover, since h is even in each variable, by symmetry we see that h is orthogonal to every polynomial in this space. To finish the proof we use the completeness property of multi-dimensional analogues of Hermite polynomials. More precisely, we invoke Theorem 3.2.18 from [7] to deduce \(h = 0\). Consequently, \(Sg(x) =0\) for every \(x \in {\mathbb R}^2\), and the lemma follows. \(\square \)
We will need a simple technical
Lemma 2
Let \(F\in L^2({\mathbb R}^2)\) be such that its inverse Fourier transform f is a real function. Then
Proof
Indeed, we may write
Therefore,
which proves the lemma. \(\square \)
3. If we additionally assume that \(g\in L^2({\mathbb R}^2)\), the result follows from the next statement.
Lemma 3
Assume \(g\in L^2({\mathbb R}^2)\). Then \(\Lambda ^*\) lies on some curvilinear lattice.
Proof
Denote by G the inverse Fourier transform of g. Recall that g is real, whence
Set \(U(t) := \text{ Re }\left( e^{i\lambda \cdot t}G(t)+e^{i\tilde{\lambda }\cdot t}G(\tilde{t})\right) \). Since
we deduce that \(U(t) = U(-t)\). Combining this observation with Lemmas 1 and 2, we see that equality
holds for a.e. \(t\in {\mathbb R}^2\) and for every \(\lambda \in \Lambda ^{*}\).
Recall that \(G=0\) a.e. outside \((-\sigma ,\sigma )^2\). For every \(\epsilon >0\), find a real Schwartz function \(F_\epsilon \) whose support lies in \([-\sigma ,\sigma ]^2\) satisfying \(\Vert G-F_\epsilon \Vert _2<\epsilon \). Then
and
Using these inequalities, one can check that there are a point \(t_\epsilon \in [-\sigma , \sigma ]^2\) and constants C and c depending only on \(\sigma \) such that
for every small enough \(\epsilon \). Indeed, by the Plancherel theorem, we have \(\Vert G\Vert _2 = \Vert g\Vert _2\). Since \(\Vert g\Vert _{\infty } = 1\) and \(g \in B_{\sigma }\), using the Bernstein inequality we deduce \(\Vert G\Vert _2 = \Vert g\Vert _2 \ge C(\sigma )\). Now, assuming that the inequalities (11) fail for all t, integrating with respect to the variable t and using the estimate (9), for sufficiently small \(\epsilon \) we arrive at
which contradicts estimate (10) when \(c > 2 \sqrt{2}/C(\sigma )\).
Write
Then we get
Then normalizing we arrive at
Clearly, we may assume that \(u_{\epsilon } \in [0, 2 \pi ]\) and \(v_{\varepsilon } \in [0, 2 \pi ]\). Recall that \(t_{\epsilon } \in [-\sigma , \sigma ]^2\) and, of course,
Taking \(\epsilon = \frac{1}{n}\) and passing if necessary to a subsequence, we deduce that \(\Lambda ^{*}\) lies on some curvilinear lattice. \(\square \)
4. In what follows we assume that
For \(\epsilon > 0\) we set
The next statement easily follows from (12).
Lemma 4
We have \(\delta _\epsilon \rightarrow 0\) as \(\epsilon \rightarrow 0.\)
We skip the simple proof.
Let us introduce auxiliary functions
By Lemma 4,
Lemma 5
For every \(\lambda \in \Lambda ^*\) satisfying \(|\lambda |< 1/\sqrt{\delta _\epsilon }\) we have
Proof
By Lemma 1, \(S g_{\lambda } = 0\). Since the function \(\varphi _\epsilon \) is even with respect to each variable, we have
Hence,
Below we focus on the estimate of the first term on the right-hand side of the inequality above. The remaining terms admit the same estimate.
Write \(\lambda =(\lambda _1,\lambda _2)\). Observe that
For \(j = 1,2\), using the Cauchy-Schwarz inequality, we have
One may check that \(\Vert h_{\epsilon }\Vert _2=C/\sqrt{\epsilon }\) and \(\Vert h'_{\epsilon }\Vert _2=C\sqrt{\epsilon }\). Since \(\Vert g\Vert _\infty =1\) and \(|\lambda | \le 1/\sqrt{\delta _{\epsilon }}\), we arrive at
That finishes the proof. \(\square \)
5. Set \(G_\epsilon :=\widehat{g\varphi _\epsilon }\). Then \(G_\epsilon \in L^2({\mathbb R}^2)\) vanishes a.e. outside some square \((-\sigma ^*,\sigma ^*)^2\) (it is easy to check that one may take \(\sigma ^*=\sigma +\epsilon \)).
Using Lemma 5, for \(|\lambda | \le 1/ \sqrt{\delta _{\epsilon }}\) we get
On the other hand, by (13), \(\Vert G_\epsilon \Vert _2\ge C\), for all small enough \(\epsilon \).
To finish the proof, we proceed as in the proof of Lemma 3.
3.2 Proof of Theorem 2, Part II
(i) \(\Rightarrow \) (ii). We will argue by contradiction. Assume that for every \(\sigma > 0\) there is a constant \(K=K(\sigma )\) such that
but condition (ii) is not satisfied, i.e. there exists some \(\Lambda ' \in W(\Lambda )\) that lies on some curvilinear lattice. Clearly, to arrive at a contradiction it suffices to construct, for every \(\varepsilon > 0\), a function \(f = f_{\varepsilon }\) such that
and \(f \in B_{\sigma ^{*}}\) for some fixed \(\sigma ^{*}\).
Again, let us provide a brief outline. We divide the proof into 4 steps. First, we build a function g such that \(g *G_{\alpha }\) vanishes on \(\Lambda '\) for every \(\alpha \in I\). A slight modification of g provides a function f that satisfies (14). To verify the second estimate in (14), we split the set \(\Lambda \) into the sets \( \Lambda _I = \Lambda \cap P\) and \(\Lambda _O = \Lambda \cap ({\mathbb R}^2 \setminus P)\) for an appropriate rectangle P. In Steps 3 and 4, we show that f satisfies the relations (14) for \(\lambda \in \Lambda _O\) and \(\lambda \in \Lambda _I\), respectively.
Now we pass to the proof.
1. By our assumption, there exist \(\Lambda ' \in W(\Lambda )\), \(\xi \in {\mathbb R}^2\), \((t_1,t_2) \in {\mathbb R}^2\), and \((r_1, r_2) \in {\mathbb T}\) such that for every \(\lambda ' \in \Lambda '\) the equality
holds. Set
Clearly, \(g \in B_{\sigma }\) for \(\sigma = |\xi |\). Next, we will show that the symmetrization of the function \(g_{\lambda '}\) vanishes for every \(\lambda ' \in \Lambda '\).
Lemma 6
The equality
holds for every \(x \in {\mathbb R}^2\) and \(\lambda ' \in \Lambda '\).
Proof
After some simple calculations, we have
where we, as usual, apply the symmetrization operator S with respect to the variable x. Clearly, \(S(e^{i \tilde{\xi } \cdot x}) = S(e^{i \xi \cdot x}) = 2 (\cos (\xi \cdot x) + \cos ( \tilde{\xi } \cdot x))\). Thus, using (15), we have
\(\square \)
Consequently, for every \(\lambda ' \in \Lambda '\) and \(\alpha \in I\) we have
since \(G_{\alpha }\) is even in every variable.
2. Fix small \(\varepsilon > 0\) and take large \(R=R(\varepsilon ) > 0\) (we will specify its value later). Recall that \(\Lambda ' \in W(\Lambda )\). In particular, that means that one can find \(v=(v_1, v_2) = v(R, \varepsilon ) \in {\mathbb R}^2\) such that inside the square \([-R,R]^2\), the set \(\Lambda - v\) is "close" to \(\Lambda '\):
Set \(P = [v_1-R,v_1+R]\times [v_2-R,v_2+R]\) and consider the decomposition
Consider
We define the function f by the formula
Clearly, \(\Vert f\Vert _{\infty } \ge C\), and it suffices to show that \(|f *G_{\alpha }(\lambda )| \le C \varepsilon \) for every \(\lambda \in \Lambda \). We will estimate the value \(|f *G_{\alpha }(\lambda )|\) for \(\lambda \in \Lambda _I\) and \(\lambda \in \Lambda _O\) separately.
3. Assume that \(\lambda \in \Lambda _O\). We may choose \(R = R(\varepsilon ) = \frac{1}{\varepsilon ^2}\). Set \(U = U_1 \times U_2 = [-\sqrt{R}, \sqrt{ R}]^2.\) For \(s\in U\) we have
since \(|\lambda _1 - v_1| \ge R\) and \(|\lambda _2-v_2| \ge R\). Next, it is easy to check that
Now, to estimate \(f *G_{\alpha } (\lambda )\) for \(\lambda \in \Lambda _O\), we write
Applying \(\Vert f\Vert _{\infty } \le 1\) and estimates (18) and (19), we arrive at
4. Now, assume that \(\lambda \in \Lambda _I\). Take \(\lambda ' \in \Lambda '\) satisfying condition (17) for this \(\lambda \), i.e. \(\mathrm{dist \,}(\lambda - v, \lambda ' ) < \varepsilon \). Since \(g*G_{\alpha }(\lambda ') = 0\), we may write
Set
Clearly,
By Bernstein inequality and relation (17), we have
Combining estimates (20) and (21) together, we obtain
which finishes the proof.
The following statement easily follows from Theorem 2.
Lemma 7
Assume \(\Lambda \) and I satisfy the assumptions of Theorem 2 and condition (i) is fulfilled. Then for every \(\sigma > 0\) there is a constant C such that
for every \(f \in PW^2_{\sigma }\).
4 Sampling with Gaussian Kernel in Paley–Wiener Spaces
4.1 Auxiliary Statements
Recall that our aim is to describe the geometry of the sets \(\Lambda \subset {\mathbb R}^2\) such that for every \(f \in PW^2_{\sigma }\) the estimates
$$\begin{aligned} D_1 \Vert f\Vert ^2_{2} \le \sum \limits _{\lambda \in \Lambda } \int \limits _{I} |f * G_{\alpha }(\lambda )|^2 \, d\alpha \le D_2 \Vert f\Vert ^2_{2} \end{aligned}$$(23)
hold with some constants \(D_1\) and \(D_2\) independent of f.
4.1.1 Bessel-Type Inequality
We start by showing that the right-hand side of (23) follows easily from classical sampling results for a u.d. set \(\Lambda \).
Proposition 1
Assume \(\Lambda \) is a u.d. set, \(I = (a,b) \times (c,d)\), where \(0<a<b<\infty , 0<c<d<\infty \). Then there is a constant \(D_2 = D_2(I,\Lambda )\) such that
for every \(f \in PW^2_{\sigma }\).
Proof
Recall the Bessel inequality for Paley-Wiener spaces: if \(\Lambda \) is a u.d. subset of \({\mathbb R}^d\) then there is a constant \(M=M(\Lambda , \sigma )\) such that
for every \(g \in PW^2_{\sigma }\), see [20], Chapter 2, Theorem 17.
Note that convolution with the Gaussian kernel \(G_{\alpha }\) keeps the function in the Paley-Wiener space. Using Young's convolution inequality and the Bessel inequality, one can find a constant \(D_2\) such that for every \(f \in PW^2_{\sigma }\) the estimate
is true. This finishes the proof of the proposition. \(\square \)
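The quantitative step here is Young's inequality \(\Vert f * G_{\alpha }\Vert _2 \le \Vert G_{\alpha }\Vert _1 \Vert f\Vert _2\), where \(\Vert G_{\alpha }\Vert _1 = \pi /\sqrt{\alpha _1 \alpha _2}\). A numerical sketch (ours; it uses a periodic FFT convolution on a truncated grid as a stand-in for convolution on \({\mathbb R}^2\)) illustrates the bound:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 256, 0.1                     # grid points per axis and spacing
x = (np.arange(N) - N // 2) * d
X, Y = np.meshgrid(x, x)

a1, a2 = 0.7, 1.3
G = np.exp(-a1 * X**2 - a2 * Y**2)  # Gaussian kernel G_alpha
f = rng.standard_normal((N, N))     # a generic grid function

# Periodic convolution via FFT, scaled to approximate the integral.
conv = np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(np.fft.ifftshift(G))).real * d**2

lhs = np.sqrt(np.sum(conv**2)) * d
rhs = (np.pi / np.sqrt(a1 * a2)) * np.sqrt(np.sum(f**2)) * d
print(lhs <= rhs)   # True: Young's convolution inequality
```

For white-noise data the bound is far from saturated, since the Fourier multiplier of \(G_{\alpha }\) damps all but the lowest frequencies.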
4.1.2 Auxiliary Functions
In what follows we need some auxiliary functions with special properties. These functions should belong to Paley-Wiener spaces and have a large \(L^2\)-norm together with a small \(L^2\)-norm of the gradient. Now, we specify these requirements.
Condition (B): Let \(\varepsilon \) be a small positive parameter. A family of functions \(\{\Phi _{\varepsilon }\}\) satisfies condition (B) if
- (\(\beta _1\)):
-
\(\Phi _{\varepsilon }(0, 0) = 1, \quad \Vert \Phi _{\varepsilon }\Vert _{\infty } = 1;\)
- (\(\beta _2\)):
-
\(\Phi _{\varepsilon } \in PW^2_{\varepsilon }\);
- (\(\beta _3\)):
-
\(\Vert \Phi _{\varepsilon }\Vert _{2} \rightarrow \infty \) as \(\varepsilon \rightarrow 0\);
- (\(\beta _4\)):
-
\(\Vert \nabla \Phi _{\varepsilon }\Vert _{2} \rightarrow 0\) as \(\varepsilon \rightarrow 0\).
Next, we provide a few examples to illustrate some additional difficulties that occur in the multi-dimensional setting. Then we present an example of functions \(\Phi _{\varepsilon }\) that satisfy condition (B).
Example 4
Let us return to the one-dimensional case. Consider
Observe that functions \(\Phi _{\varepsilon }\) satisfy an analogue of condition (B) in the one-dimensional setting. Clearly, \(\Phi _{\varepsilon } (0) = 1, \Vert \Phi _{\varepsilon }\Vert _{\infty } = 1\), and \(\Phi _{\varepsilon } \in PW^2_{\varepsilon }\). One may easily check that
These relations prove the one-dimensional analogues of \((\beta _3)\) and \((\beta _4)\).
The passage from Bernstein to Paley-Wiener spaces and back in [19] was based on the properties of the functions in Example 4. One may try to construct functions \(\Phi _{\varepsilon }\) that satisfy condition (B) in the two-dimensional setting in the following natural way.
Example 5
Consider the function \(\Phi _{\varepsilon }\) defined by the formula
It is clear that conditions \((\beta _1)\) and \((\beta _2)\) are true. Property \((\beta _3)\) follows from
However, one may easily check that \(\Vert \nabla \Phi _{\varepsilon }\Vert _{2}\) does not converge to zero as \(\varepsilon \rightarrow 0\).
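The failure of \((\beta _4)\) for tensor products can be seen numerically. Assuming, for illustration, the classical one-dimensional choice \(\Phi _{\varepsilon }(x) = \sin (\varepsilon x)/(\varepsilon x)\) (our stand-in, since the explicit function of Example 4 is not reproduced here), one has \(\Vert \Phi _{\varepsilon }\Vert _2^2 = \pi /\varepsilon \) and \(\Vert \Phi _{\varepsilon }'\Vert _2^2 = \pi \varepsilon /3\), so for the product \(\Phi _{\varepsilon }(x_1)\Phi _{\varepsilon }(x_2)\) the gradient norm \(\Vert \nabla \Phi \Vert _2^2 = 2 \cdot \frac{\pi \varepsilon }{3} \cdot \frac{\pi }{\varepsilon } = \frac{2\pi ^2}{3}\) is independent of \(\varepsilon \):

```python
import numpy as np

def norms_1d(eps, dx=0.005, X=4000.0):
    """Squared L^2-norms of sin(eps x)/(eps x) and its derivative on [-X, X]."""
    x = np.arange(dx / 2, X, dx)                      # integrands are even
    phi = np.sin(eps * x) / (eps * x)
    dphi = np.cos(eps * x) / x - np.sin(eps * x) / (eps * x**2)
    return 2 * np.sum(phi**2) * dx, 2 * np.sum(dphi**2) * dx

for eps in (0.2, 0.1, 0.05):
    n2, g2 = norms_1d(eps)
    grad_2d_sq = 2 * g2 * n2        # |grad(phi x phi)|_2^2 by separability
    print(eps, n2, g2, grad_2d_sq)  # n2 grows, g2 -> 0, grad_2d_sq stays ~ 2*pi^2/3
```

The separability identity \(\Vert \partial _1 (\varphi \otimes \varphi )\Vert _2^2 = \Vert \varphi '\Vert _2^2 \Vert \varphi \Vert _2^2\) is what makes the two one-dimensional scalings cancel.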
Nevertheless, in the two-dimensional setting it is still possible to construct functions that satisfy condition (B). Now, we pass to the construction.
Lemma 8
Assume \(\varepsilon > 0\). There exist functions \(\Psi _{\varepsilon }\) such that
- (P1):
-
\(\mathrm{supp \,}\Psi _{\varepsilon } \subset B_{\varepsilon }(0), \quad \Psi _{\varepsilon } \ge 0,\)
- (P2):
-
\(C_1 \le \int \limits _{{\mathbb R}^2} \Psi _{\varepsilon }(x) \, dx \le C_2, \quad 0< C_1 \le C_2 < \infty ,\)
- (P3):
-
\(\Vert \Psi _{\varepsilon }\Vert _{2} \ge \frac{C}{\varepsilon ^{3/4}},\)
- (P4):
-
\(\left( \int \limits _{{\mathbb R}^2} |\Psi _{\varepsilon }(x)|^2 |x|^2 \, dx \right) ^{1/2} \le \frac{C}{\sqrt{ \log \frac{1}{\varepsilon }}}.\)
Proof
Fix small \(0< \varepsilon < 1\) and denote by m the integer part of \(\log \frac{1}{\varepsilon }\). For integers \(n \in [m, 2m]\) we set \(a_n = 2^{2n}/n\). Next, we define the function \(\Psi _{\varepsilon }\) layer by layer by the formula
$$\begin{aligned} \Psi _{\varepsilon }(x) = a_n \quad \text {for } 2^{-n-1} \le |x| < 2^{-n}, \quad n = m, m+1, \dots , 2m. \end{aligned}$$
For \(|x|>\varepsilon \) and \(|x| < \frac{\varepsilon ^2}{2}\) we set \(\Psi _\varepsilon (x)=0\). Note that the area of the ring \(B_{2^{-n}}(0) \setminus B_{2^{-n-1}}(0)\) is equal to \(\frac{3 \pi }{4} 2^{-2n}\).
Clearly, \(\Psi _{\varepsilon }\) satisfies (P1). To verify (P2) we write
Note that the right-hand side of this equation can be estimated from above and below, with some fixed positive constants, by
Thus, condition (P2) follows. Next, we have
and (P3) follows. The estimate
implies (P4), which finishes the proof. \(\square \)
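The three integral estimates can be checked by exact summation over the layers. The sketch below (ours) assumes the layered form described in the proof, i.e. the value \(a_n = 2^{2n}/n\) on the ring \(2^{-n-1} \le |x| < 2^{-n}\) for \(m \le n \le 2m\), and evaluates the integrals ring by ring:

```python
import numpy as np

def layer_moments(m):
    """Exact ring-by-ring integrals for the layered function with value
    a_n = 2^(2n)/n on the ring 2^(-n-1) <= |x| < 2^(-n), n = m..2m."""
    n = np.arange(m, 2 * m + 1, dtype=float)
    area = (3 * np.pi / 4) * 4.0 ** (-n)       # area of the n-th ring
    a = 4.0 ** n / n
    mass = np.sum(a * area)                    # integral of Psi      -> (P2)
    l2 = np.sqrt(np.sum(a**2 * area))          # L^2-norm             -> (P3)
    # Upper bound for the weighted norm, using |x|^2 <= 4^(-n) on the ring.
    wl2 = np.sqrt(np.sum(a**2 * area * 4.0 ** (-n)))   #             -> (P4)
    return mass, l2, wl2

for m in (5, 10, 20, 40):                      # eps ~ 2^(-m)
    mass, l2, wl2 = layer_moments(m)
    print(m, round(mass, 3), round(wl2, 3))    # mass stays bounded, wl2 decays
```

The mass reduces to \(\frac{3\pi }{4}\sum _{n=m}^{2m} \frac{1}{n} \approx \frac{3\pi }{4}\log 2\), while the weighted norm is controlled by \(\sum _{n=m}^{2m} \frac{1}{n^2} \le \frac{C}{m}\), in line with (P2) and (P4).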
Corollary 1
There exist functions \(\Phi _\varepsilon \) satisfying condition (B).
Proof
Set \(c_{\Psi } := \int \limits _{{\mathbb R}^2} \Psi _{\varepsilon }(x) \, dx\). By \((P_2)\), \(c_{\Psi }\) is positive, finite, and bounded away from zero. Now, we may define \(\Phi _\varepsilon \) as the Fourier transform of \(\Psi _{\varepsilon }\) with a proper normalization:
The property \((\beta _2)\) follows from \((P_1)\). Due to \(\Psi _{\varepsilon } \ge 0\) and the normalization, condition \((\beta _1)\) is fulfilled. Relations \((P_3)\) and \((P_4)\) imply estimates \((\beta _3)\) and \((\beta _4)\), respectively.
\(\square \)
4.2 From Bernstein to Paley–Wiener Spaces and Back
To prove Theorem 1, we will use the following statement, which describes the connection between sampling in Paley-Wiener and Bernstein spaces.
Theorem 3
Let \(\Lambda \) be a u.d. set in \({\mathbb R}^2\), \(I = (a,b) \times (c,d), 0< a< b< \infty , 0< c< d < \infty ,\) and \(\sigma '>\sigma >0\).
-
(i)
Assume the inequality
$$\begin{aligned} \Vert f\Vert _\infty \le K\sup _{\alpha \in I} \sup _{\lambda \in \Lambda } \Vert f*G_\alpha \Vert _\infty \quad \text { for all } f\in B_{\sigma '} \end{aligned}$$(25)holds with some constant \(K = K(\sigma ', \Lambda )\). Then there exists a constant \(D_1 = D_1(\sigma , \Lambda )\) such that
$$\begin{aligned} D_1 \Vert f\Vert ^2_{2} \le \sum \limits _{\lambda \in \Lambda } \int \limits _{I} |f * G_{\alpha }(\lambda )|^2 \, d\alpha \quad \text { for every } f \in PW^2_{\sigma } \end{aligned}$$(26)is true.
-
(ii)
Assume that (26) holds with some constant \(D_1 = D_1(\sigma ', \Lambda )\) for all \(f \in PW^2_{\sigma '}\). Then there is a constant \(K = K(\sigma ', \Lambda )\) such that (25) is true for every \(f \in B_{\sigma }\).
Remark 2
For a similar result for space sampling see [15].
Remark 3
In this theorem we do not need to require I to be a rectangle. One may take \(I=(a,b) \subset {\mathbb R}\) with \(0< a< b < \infty \). In such a case, by \(G_{\alpha }(x)\) we mean \(G_\alpha (x_1,x_2) = e^{-\alpha (x_1^2+x_2^2)}\) and \(d \alpha \) is the standard one-dimensional Lebesgue measure.
The proof of Theorem 3 is similar to the proof of Theorem 3 in the paper [19]. We provide the argument for statement (i) and leave the proof of (ii) to the reader. The functions \(\Phi _{\varepsilon }\) that satisfy condition (B) play a crucial role in our argument.
Proof of Theorem 3
Take \(\varepsilon > 0\) such that \(\sigma + \varepsilon < \sigma '\). By our assumption, for every \(q \in B_{\sigma }\) the estimate
is true and our aim is to prove (26). Consider functions \(\Phi _{\varepsilon }\) satisfying condition (B). Using \((\beta _1)\), we get
Note that \(q(t) := \Phi _{\varepsilon }(x - t) f(t) \in B_{\sigma + \varepsilon }\), and we can apply Lemma 7 to obtain
where the constant C does not depend on t. To estimate from above, we may replace \(\sup \limits _{\lambda \in \Lambda }\) by \(\sum \limits _{\lambda \in \Lambda }\) and switch the order of integration and summation:
Denote by
Using the inequality \(|a + b|^2 \le C(|a|^2 + |b|^2)\), we deduce from (30), (31), and (32) that
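The elementary inequality invoked here holds with \(C = 2\), since

```latex
|a + b|^2 \le \bigl(|a| + |b|\bigr)^2
            = |a|^2 + 2|a|\,|b| + |b|^2
            \le 2\bigl(|a|^2 + |b|^2\bigr),
```

where the last step uses \(2|a|\,|b| \le |a|^2 + |b|^2\).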
Next, we estimate the terms with \(Y_1\) and \(Y_2\) separately. The value of \(\sum \limits _{\lambda \in \Lambda } \int \limits _I \int \limits _{{\mathbb R}^2}Y_1\, dx \, d\alpha \) is majorized by
The inequalities for the second term are more complicated. Set
We start with the observation
Using the Cauchy–Schwarz inequality, we write
Thereby, for \(\lambda = (\lambda _1, \lambda _2)\) and \(s = (s_1, s_2)\) we get
whence
Now, we return to the estimation of the term with \(Y_2\) in formula (33). Applying the Cauchy–Schwarz inequality, we arrive at
With estimate (35) in hand, we continue
Clearly,
and since \(\Lambda \) is a u.d. set, we have
Using relations (36) and (37), we finish the estimate of the term with \(Y_2\):
Combining (33), (34), and (38) together, we get
To finish the proof we invoke properties \((\beta _3)\) and \((\beta _4)\). Indeed, taking sufficiently small \(\varepsilon > 0\) we make the first summand less than \(\frac{\Vert f\Vert ^2_{2}}{2}\), and (26) follows. \(\square \)
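Schematically, this absorption step runs as follows (we write \(A(\varepsilon )\) and \(C(\varepsilon )\) for the constants produced by the preceding estimates; these abstract names are ours, not the paper's notation). The combined bound has the form

```latex
\|f\|_2^2 \;\le\; A(\varepsilon)\,\|f\|_2^2
  \;+\; C(\varepsilon)\sum_{\lambda\in\Lambda}\int_I |f*G_\alpha(\lambda)|^2\,d\alpha,
\qquad A(\varepsilon)\to 0 \ \text{as}\ \varepsilon\to 0,
```

so choosing \(\varepsilon \) with \(A(\varepsilon ) \le 1/2\) and subtracting \(A(\varepsilon )\Vert f\Vert _2^2\) from both sides yields (26) with \(D_1 = 1/(2C(\varepsilon ))\).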
4.3 Proof of Theorem 1
Now, we are ready to prove the main result.
(i) \(\Rightarrow \) (ii)
Assume that the set \(\Lambda \) satisfies condition (i). In particular, for any \(\sigma > 0\) inequality (26) is true for every \(f\in PW^2_{\sigma }\) with a constant \(D_1\) depending on \(\sigma \). Then, Theorem 3 implies that for every \(\sigma > 0\) inequality (25) holds for every \(f \in B_{\sigma }\) with a constant K depending on \(\sigma \). By Theorem 2, we deduce that \(\Lambda \) satisfies condition (A).
(ii) \(\Rightarrow \) (i) Assume that condition (ii) is fulfilled. Recall that Proposition 1 ensures that the right-hand side estimate in (1) holds with some universal constant. Thus, it suffices to verify that inequality (26) is true for every \(\sigma > 0\) and every \(f \in PW^2_{\sigma }\) with some constant \(D_1 = D_1(\sigma )\). By our assumption and Theorem 2, inequality (25) is true for every \(\sigma > 0\) and every \(f \in B_{\sigma }\) with a constant K depending only on \(\sigma \). Applying Theorem 3, we see that (26) holds for every \(\sigma > 0\) and \(f \in PW^2_{\sigma }\) with a constant \(D_1\) depending only on \(\sigma \). Thus, condition (i) is true. This finishes the proof.
\(\square \)
5 Remarks
First, we note that Theorems 1, 2, and 3 remain true for a wider class of kernels satisfying additional assumptions similar to conditions \((\beta )\)–\((\theta )\) in [19].
Second, one may check that our approach provides a complete solution to the Main Problem for the Bernstein spaces \(B_{[-\sigma , \sigma ]^n}\) for the Gaussian kernel in \({\mathbb R}^n\) with any index set \(I = \prod \limits _{i=1}^n[a_i,b_i]\). One may therefore formulate an analogue of Theorem 2 in the multi-dimensional setting. However, the passage to Paley–Wiener spaces faces obstacles similar to those discussed in Example 5, Sect. 4.
Recall the frame inequalities for a continuous frame \(\{e_x\}_{x \in X}\):
Inequalities (1) correspond to the case \(p=2\), \(X = I \times \Lambda \), with dx the product of the n-dimensional Lebesgue measure on I and the counting measure on \(\Lambda \). As was pointed out to me by a reviewer, the inequalities (40) typically hold for the whole range of Banach spaces \((X, \Vert \cdot \Vert _p), \, 1 \le p < \infty ,\) simultaneously, provided the frame \(\{e_x\}\) is sufficiently well localized, see [1, 8], and [9]. However, in our setting we did not manage to prove the analogue of Theorem 1 for all \(p \in [1, \infty )\) when the dimension \(n > 2\).
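The frame inequalities (40) can be illustrated in a finite-dimensional toy model (our own construction, unrelated to the Gaussianian kernel of this paper): for finitely many vectors \(e_k \in {\mathbb R}^n\) and \(p = 2\), the optimal frame bounds are the extreme squared singular values of the analysis matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 10
E = rng.standard_normal((m, n))       # rows of E are the frame vectors e_k
s = np.linalg.svd(E, compute_uv=False)
A, B = s[-1] ** 2, s[0] ** 2          # optimal lower/upper frame bounds

f = rng.standard_normal(n)
energy = float(np.sum((E @ f) ** 2))  # sum_k |<f, e_k>|^2
norm2 = float(f @ f)
# Frame inequality: A ||f||^2 <= sum_k |<f, e_k>|^2 <= B ||f||^2
assert A * norm2 - 1e-9 <= energy <= B * norm2 + 1e-9
```

Since ten random rows in \({\mathbb R}^4\) have full rank almost surely, the lower bound A is strictly positive, i.e. the rows form a frame.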
On the other hand, using our approach, one may check that for every \(n > 2\) there exist a number p(n) and functions \(\Phi _{\varepsilon }\) such that for \(p \ge p(n)\) we have
Thus, a modification of the proof of Theorem 3 leads to a complete solution of the Main Problem for \(PW^p_{[-\sigma , \sigma ]^n}\) spaces with \(p \ge p(n)\).
Notes
Sometimes, the term uniformly separated is used.
References
Aldroubi, A., Baskakov, A., Krishtal, I.: Slanted matrices, Banach frames, and sampling. J. Funct. Anal. 255(7), 1667–1691 (2008). https://doi.org/10.1016/j.jfa.2008.06.024
Aldroubi, A., Cabrelli, C., Çakmak, A.F., Molter, U., Petrosyan, A.: Iterative actions of normal operators. J. Funct. Anal. 272(3), 1121–1146 (2017). https://doi.org/10.1016/j.jfa.2016.10.027
Aldroubi, A., Cabrelli, C., Molter, U., Tang, S.: Dynamical sampling. Appl. Comput. Harmon. Anal. 42(3), 378–401 (2017). https://doi.org/10.1016/j.acha.2015.08.014
Aldroubi, A., Gröchenig, K., Huang, L., Jaming, P., Krishtal, I., Romero, J.L.: Sampling the flow of a bandlimited function. J. Geom. Anal. 31, 9241–9275 (2021). https://doi.org/10.1007/s12220-021-00617-0
Beurling, A.: Balayage of Fourier-Stieltjes Transforms, The collected Works of Arne Beurling. Harmonic Analysis, vol. 2. Birkhäuser, Boston (1989)
Beurling, A.: Local Harmonic Analysis with Some Applications to Differential Operators, The collected Works of Arne Beurling. Harmonic Analysis, vol. 2. Birkhäuser, Boston (1989)
Dunkl, C.F., Xu, Y.: Orthogonal Polynomials of Several Variables. Encyclopedia of Mathematics and Its Applications, vol. 155, 2nd edn. Cambridge University Press, Cambridge (2014)
Gröchenig, K.: Localization of frames, Banach frames, and the invertibility of the frame operator. J. Fourier Anal. Appl. 10(2), 105–132 (2004). https://doi.org/10.1007/s00041-004-8007-1
Gröchenig, K., Romero, J.L., Stöckler, J.: Sharp results on sampling with derivatives in shift-invariant spaces and multi-window Gabor frames. Constr. Approx. 51(1), 1–25 (2020). https://doi.org/10.1007/s00365-019-09456-3
Jaming, P., Negreira, F., Romero, J.L.: The Nyquist sampling rate for spiraling curves. Appl. Comput. Harmon. Anal. 52, 198–230 (2021). https://doi.org/10.1016/j.acha.2020.01.005
Landau, H.J.: Necessary density conditions for sampling and interpolation of certain entire functions. Acta Math. 117, 37–52 (1967). https://doi.org/10.1007/BF02395039
Levin, B. Ya.: Lectures on Entire Functions. Translations of Mathematical Monographs, vol. 150. Amer. Math. Soc., Providence, RI (1996)
Nitzan, S., Olevskii, A.: Revisiting Landau’s density theorems for Paley-Wiener spaces. C. R. Math. Acad. Sci. Paris 350(9–10), 509–512 (2012). https://doi.org/10.1016/j.crma.2012.05.003
Olevskii, A., Ulanovskii, A.: Functions with Disconnected Spectrum: Sampling, Interpolation, Translates. University Lecture Series, vol. 65. AMS, Providence (2016)
Olevskii, A., Ulanovskii, A.: On multi-dimensional sampling and interpolation. Anal. Math. Phys. 2(2), 149–170 (2012). https://doi.org/10.1007/s13324-012-0027-4
Ortega-Cerdà, J., Seip, K.: Fourier frames. Ann. Math. (2) 155(3), 789–806 (2002). https://doi.org/10.2307/3062132
Rashkovskii, A., Ulanovskii, A., Zlotnikov, I.: On 2-dimensional mobile sampling. Preprint, arXiv:2005.11193
Seip, K.: Interpolation and Sampling in Spaces of Analytic Functions. University Lecture Series, vol. 33. AMS, Providence (2004)
Ulanovskii, A., Zlotnikov, I.: Reconstruction of bandlimited functions from space-time samples. J. Funct. Anal. 280(9), 108962 (2021). https://doi.org/10.1016/j.jfa.2021.108962
Young, R.M.: An Introduction to Nonharmonic Fourier Series. Academic Press, New York (2001)
Acknowledgements
I am grateful to A. Ulanovskii for stimulating discussions and to D. Stolyarov for the proof of Lemma 8. I also thank the anonymous referees for the suggestions and constructive remarks.
Communicated by Akram Aldroubi.
This research was supported by the Russian Science Foundation (Grant No. 18-11-00053), https://rscf.ru/project/18-11-00053/
Zlotnikov, I. On Planar Sampling with Gaussian Kernel in Spaces of Bandlimited Functions. J Fourier Anal Appl 28, 55 (2022). https://doi.org/10.1007/s00041-022-09948-0