
Why Are RKHSs Suitable Spaces for Sampling Purposes?

Roughly speaking, sampling theory deals with the reconstruction of functions from their values (samples) on an appropriate sequence of points by means of sampling expansions involving these values. This is not always possible: for instance, a continuous function f on \(\mathbb{R}\) is not completely determined by a sequence \(\{f(t_{n})\}\) of its samples. As a consequence, one needs to impose some additional condition on the function f; in other words, f must belong to some suitable space. For example, assume that the function f belongs to a Hilbert space \(\mathcal{H}\) of functions on \(\Omega \) (generally, a subset of \(\mathbb{R}\) or \(\mathbb{C}\)) such that every evaluation functional \(E_{t}: f \in \mathcal{H}\mapsto f(t) \in \mathbb{C}\) is bounded, i.e., the space \(\mathcal{H}\) is a reproducing kernel Hilbert space (RKHS henceforth). By the Riesz representation theorem, for each \(t \in \Omega \) there exists a unique \(k_{t} \in \mathcal{H}\) such that \(f(t) =\langle f,k_{t}\rangle _{\mathcal{H}}\) for every \(f \in \mathcal{H}\). In this manner, the stable reconstruction of any \(f \in \mathcal{H}\) from the sequence of samples \(\{f(t_{n})\}\) at \(\{t_{n}\} \subset \Omega \) depends on whether the sequence \(\{k_{t_{n}}\}\) is a frame for \(\mathcal{H}\). Recall that a sequence \(\{x_{n}\}\) is a frame for a separable Hilbert space \(\mathcal{H}\) if there exist two constants A, B > 0 (frame bounds) such that

$$\displaystyle{A\|x\|^{2} \leq \sum _{ n}\vert \langle x,x_{n}\rangle \vert ^{2} \leq B\|x\|^{2}\,\,\text{ for all }x \in \mathcal{H}\,.}$$

Given a frame \(\{x_{n}\}\) for \(\mathcal{H}\), the representation property of any vector \(x \in \mathcal{H}\) as a series \(x =\sum _{n}c_{n}x_{n}\) is retained, but, unlike the case of Riesz (in particular, orthonormal) bases, the uniqueness of this representation is sacrificed. Suitable frame coefficients \(c_{n}\) which depend continuously and linearly on x are obtained by using a dual frame \(\{y_{n}\}\) of \(\{x_{n}\}\), i.e., \(\{y_{n}\}_{n\in \mathbb{Z}}\) is another frame for \(\mathcal{H}\) such that

$$\displaystyle{ x =\sum _{n}\langle x,y_{n}\rangle x_{n} =\sum _{n}\langle x,x_{n}\rangle y_{n}\quad \mbox{ for each $x \in \mathcal{H}$}\,. }$$
(5.1)

In particular, frames in \(\mathcal{H}\) include orthonormal and Riesz bases for \(\mathcal{H}\). Recall that a Riesz basis in a separable Hilbert space \(\mathcal{H}\) is the image of an orthonormal basis under a bounded invertible operator. Any Riesz basis \(\{x_{n}\}\) has a unique biorthogonal (dual) Riesz basis \(\{y_{n}\}\), i.e., \(\langle x_{n},y_{m}\rangle _{\mathcal{H}} =\delta _{n,m}\), such that the expansion (5.1) holds for every \(x \in \mathcal{H}\). An orthonormal basis is a self-dual Riesz basis. For more details and proofs, see [8, 39].

In case the sequence \(\{k_{t_{n}}\}\) forms a frame for the RKHS \(\mathcal{H}\) and a dual frame \(\{S_{n}\}\) is available (a difficult problem in general), the sampling formula in \(\mathcal{H}\)

$$\displaystyle{f(t) =\sum _{n}\langle f,k_{t_{n}}\rangle _{\mathcal{H}}\,S_{n}(t) =\sum _{n}f(t_{n})\,S_{n}(t)\,,\quad t \in \Omega }$$

holds. Notice that convergence in an RKHS \(\mathcal{H}\) of functions defined on \(\Omega \) implies pointwise convergence in \(\Omega \). For simplicity, in what follows only orthonormal and Riesz bases will be considered. An easy and straightforward sampling result involving orthonormal bases is the following:

Theorem 1 (Sampling Theorem in an RKHS).

Let \(\mathcal{H}\) be an RKHS of functions defined on a subset \(\Omega \) with reproducing kernel k. Assume that there exists a sequence \(\{t_{n}\}_{n=1}^{\infty }\subset \Omega \) such that \(\{k(\cdot,t_{n})\}_{n=1}^{\infty }\) is an orthogonal basis for \(\mathcal{H}\). Then, any \(f \in \mathcal{H}\) can be expanded as

$$\displaystyle{ f(t) =\sum _{ n=1}^{\infty }f(t_{ n}) \frac{k(t,t_{n})} {k(t_{n},t_{n})}\,,\quad t \in \Omega \,, }$$
(5.2)

with convergence absolute and uniform on subsets of \(\Omega \) where the function t ↦ k(t,t) is bounded.

Proof.

This result follows from the expansion of \(f \in \mathcal{H}\) in the orthonormal basis \(\big\{k(\cdot,t_{n})/\sqrt{k(t_{n }, t_{n } )}\big\}_{n=1}^{\infty }\). Indeed, for each \(f \in \mathcal{H}\) we obtain

$$\displaystyle{f =\sum _{ n=1}^{\infty }\Bigg\langle f, \frac{k(\cdot,t_{n})} {\sqrt{k(t_{n }, t_{n } )}}\Bigg\rangle _{\mathcal{H}}\, \frac{k(\cdot,t_{n})} {\sqrt{k(t_{n }, t_{n } )}} =\sum _{ n=1}^{\infty }f(t_{ n})\ \frac{k(\cdot,t_{n})} {k(t_{n},t_{n})}\quad \text{in }\,\,\mathcal{H}\,.}$$

Now, convergence in the norm of an RKHS \(\mathcal{H}\) implies pointwise convergence in \(\Omega \), which is uniform on subsets of \(\Omega \) where the function \(t\mapsto k(t,t)\) is bounded. Moreover, since an orthonormal basis is an unconditional basis, the above sampling series is pointwise unconditionally convergent for each \(t \in \Omega \) and hence absolutely convergent. □

The standard Hilbert space \(\ell^{2}(\mathbb{N})\) is an RKHS whose reproducing kernel k is the Kronecker delta, i.e., \(k(m,n) =\delta _{m,n}\), \(m,n \in \mathbb{N}\). In this case, for any \(\{x(m)\}_{m=1}^{\infty }\in \ell^{2}(\mathbb{N})\) formula (5.2) trivially reads: \(x(m) =\sum _{ n=1}^{\infty }x(n)\,\delta _{m,n}\), \(m \in \mathbb{N}\).

Any finite-dimensional Euclidean space of functions defined on \(\Omega \) is an RKHS; next we give two interesting examples in this finite-dimensional setting:

Trigonometric Polynomials

Consider the space \(\mathcal{H}_{N}\) of \(2\pi\) -periodic trigonometric polynomials of degree ≤ N; \(\mathcal{H}_{N}\) is a closed subspace of L 2[−π, π] endowed with the usual inner product. An orthonormal basis for \(\mathcal{H}_{N}\) is given by the set of complex exponentials \(\{e^{ikt}/\sqrt{2\pi }\}_{k=-N}^{N}\). Therefore, the reproducing kernel for \(\mathcal{H}_{N}\) is

$$\displaystyle{k_{N}(t,s) = \frac{1} {2\pi }\sum _{k=-N}^{N}e^{ik(t-s)} = \frac{1} {2\pi }D_{N}(t - s)\,,}$$

where D N denotes the Nth Dirichlet kernel [28, p. 9]

$$\displaystyle{D_{N}(t):=\sum _{ k=-N}^{N}e^{ikt} = \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{\sin (N+\frac{1} {2} )t} {\sin \frac{t} {2} } \quad &\mbox{ if $t \in \mathbb{R}\setminus 2\pi \mathbb{Z}$} \\ 2N + 1\quad &\mbox{ if $t \in 2\pi \mathbb{Z}$}\\ \quad \end{array} \right.}$$

At the points \(s_{n} = \frac{2\pi n} {2N+1} \in [-\pi,\pi ]\), − NnN, the sequence \(\big\{k_{N}(\cdot,s_{n})\big\}_{n=-N}^{N}\) is an orthogonal basis for \(\mathcal{H}_{N}\) since

$$\displaystyle{\big\langle k_{N}(\cdot,s_{n}),k_{N}(\cdot,s_{m})\big\rangle _{L^{2}[-\pi,\pi ]} = k_{N}(s_{m},s_{n}) = \frac{1} {2\pi } \frac{\sin \pi (m - n)} {\sin \frac{\pi (m-n)} {2N+1} } = \frac{2N + 1} {2\pi } \,\delta _{mn}\,.}$$

A direct application of the sampling formula (5.2) gives

$$\displaystyle{p(t) = \frac{1} {2N + 1}\sum _{n=-N}^{N}p\Big( \frac{2\pi n} {2N + 1}\Big)\frac{\sin \big(\frac{2N+1} {2} \big)\big(t - \frac{2\pi n} {2N+1}\big)} {\sin \frac{1} {2}\big(t - \frac{2\pi n} {2N+1}\big)} \,,\quad t \in [-\pi,\pi )}$$

for every trigonometric polynomial \(p(t) =\sum _{ k=-N}^{N}c_{k}e^{ikt}\) in \(\mathcal{H}_{N}\). This interpolation formula goes back to [7].
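
A quick numerical check of this interpolation formula is sketched below (in Python/NumPy); the degree N and the random coefficients are arbitrary choices made only for the illustration.

```python
# Verify the trigonometric interpolation formula for a random polynomial of degree N.
import numpy as np

rng = np.random.default_rng(0)
N = 6
c = rng.standard_normal(2 * N + 1) + 1j * rng.standard_normal(2 * N + 1)

def p(t):
    """p(t) = sum_{k=-N}^{N} c_k e^{ikt}."""
    k = np.arange(-N, N + 1)
    return np.sum(c * np.exp(1j * np.outer(np.atleast_1d(t), k)), axis=-1)

def dirichlet(u):
    """Dirichlet kernel D_N(u) = sin((N+1/2)u)/sin(u/2), equal to 2N+1 on 2*pi*Z."""
    u = np.asarray(u, dtype=float)
    small = np.isclose(np.sin(u / 2), 0.0)
    out = np.empty_like(u)
    out[small] = 2 * N + 1
    out[~small] = np.sin((N + 0.5) * u[~small]) / np.sin(u[~small] / 2)
    return out

s = 2 * np.pi * np.arange(-N, N + 1) / (2 * N + 1)     # sampling points in [-pi, pi]
t = np.linspace(-np.pi, np.pi, 7, endpoint=False)
recon = np.array([np.sum(p(s) * dirichlet(ti - s)) for ti in t]) / (2 * N + 1)
print(np.max(np.abs(recon - p(t))))                    # ~ 1e-14: exact up to rounding
```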

Orthogonal Polynomials

Another important class of examples is given by finite families of orthogonal polynomials on an interval of the real line. Consider, as an example, the particular case of the Legendre polynomials \(\{P_{n}\}_{n=0}^{\infty }\) defined, for instance, by means of the Rodrigues formula

$$\displaystyle{P_{n}(t) = \dfrac{1} {2^{n}n!} \dfrac{d^{n}} {dt^{n}}\big[(t^{2} - 1)^{n}\big]\,,\quad n = 0,1,\ldots \,.}$$

It is known that they form an orthogonal basis for L 2[−1, 1] and that \(\|P_{n}\|^{2} = (n + \frac{1} {2})^{-1}\).

Consider the finite-dimensional subspace \(\mathcal{H}_{N}\) of L 2[−1, 1] spanned by \(\{P_{0},P_{1},\ldots,P_{N}\}\). The Christoffel–Darboux formula for Legendre polynomials gives its reproducing kernel

$$\displaystyle{k_{N}(t,s)\,=\,\sum _{n=0}^{N}\left (n\,+\,\frac{1} {2}\right )\,P_{n}(t)P_{n}(s)\,=\,\frac{2N\,+\,1} {2} \,\frac{P_{N+1}(t)P_{N}(s)\,-\,P_{N}(t)P_{N+1}(s)} {t - s} \,.}$$

Note that

$$\displaystyle{k_{N}(t,t) = \frac{2N + 1} {2} \,\big[P_{N+1}'(t)P_{N}(t) - P_{N}'(t)P_{N+1}(t)\big]\,.}$$

We seek points \(\{s_{n}\}_{n=0}^{N}\) in [−1, 1] such that \(k_{N}(s_{m},s_{n}) = 0\) for m ≠ n, i.e.,

$$\displaystyle{\frac{P_{N+1}(s_{m})} {P_{N}(s_{m})} = \frac{P_{N+1}(s_{n})} {P_{N}(s_{n})} \,.}$$

In particular, we can take for \(\{s_{n}\}_{n=0}^{N}\) the N + 1 simple roots of \(P_{N+1}\) in (−1, 1). Thus, for every \(f(t) =\sum _{ k=0}^{N}c_{k}\sqrt{k + \frac{1} {2}}\,P_{k}(t)\) the finite sampling formula

$$\displaystyle{f(t) =\sum _{ n=0}^{N}f(s_{ n}) \frac{P_{N+1}(t)} {(t - s_{n})P_{N+1}'(s_{n})}\,,\quad t \in \mathbb{R}}$$

holds. This formula is nothing but the Lagrange interpolation formula for the samples \(\{f(s_{n})\}_{n=0}^{N}\). In general, one can take as sampling points \(\{s_{n}\}_{n=0}^{N}\) the N + 1 simple roots of the polynomial \(P_{N+1}(t) - cP_{N}(t)\) in (−1, 1), where \(c \in \mathbb{R}\). Details on orthogonal polynomials can be found in [31, 35].
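
The following sketch checks this finite sampling formula numerically; the degree N and the random Legendre coefficients are arbitrary choices.

```python
# Recover a random element of span{P_0,...,P_N} from its values at the roots of P_{N+1}.
import numpy as np
from numpy.polynomial.legendre import Legendre

rng = np.random.default_rng(1)
N = 7
f = Legendre(rng.standard_normal(N + 1))     # f = sum_k c_k P_k, k = 0..N

P_next = Legendre.basis(N + 1)               # P_{N+1}
dP_next = P_next.deriv()                     # P'_{N+1}
s = P_next.roots()                           # the N+1 simple roots in (-1, 1)

def f_rec(t):
    """Lagrange-type reconstruction from the samples f(s_n)."""
    return sum(f(sn) * P_next(t) / ((t - sn) * dP_next(sn)) for sn in s)

t = np.linspace(-1, 1, 9)
print(np.max(np.abs(f_rec(t) - f(t))))       # ~ 1e-13
```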

A Paradigmatic Example: Paley–Wiener Spaces

A function \(f \in L^{2}(\mathbb{R})\) is said to be band-limited to the interval [−π, π] if its Fourier transform \(\hat{f}\) vanishes outside [−π, π], i.e., \(\hat{f}\) is supported in [−π, π]. The space of band-limited functions to [−π, π] is known in the mathematical literature as the Paley–Wiener space and denoted by PW π . That is,

$$\displaystyle{PW_{\pi }:=\Big\{ f \in L^{2}(\mathbb{R})\,:\,\mathop{ supp}\nolimits \hat{f} \subseteq [-\pi,\pi ]\Big\}\,.}$$
  • The space PW π is a closed subspace of \(L^{2}(\mathbb{R})\) since the Fourier transform \(\mathcal{F}: L^{2}(\mathbb{R})\longrightarrow L^{2}(\mathbb{R})\) is a unitary operator and \(PW_{\pi } = \mathcal{F}^{-1}\big(L^{2}[-\pi,\pi ]\big)\); the space L 2[−π, π] is identified with a closed subspace of \(L^{2}(\mathbb{R})\) by extending the functions of L 2[−π, π] by 0 to the whole of \(\mathbb{R}\). Here, the Fourier transform is defined in \(L^{1}(\mathbb{R}) \cap L^{2}(\mathbb{R})\) as \(\hat{f}(w):= \frac{1} {\sqrt{2\pi }}\int _{-\infty }^{\infty }f(t)\,\mathrm{e}^{-iwt}\,dt\), and extended to \(L^{2}(\mathbb{R})\) in the usual way.

  • By using the inverse Fourier transform, any f ∈ PW π can be expressed as

    $$\displaystyle{ f(t) = \frac{1} {\sqrt{2\pi }}\int _{-\pi }^{\pi }\hat{f}(w)\mathrm{e}^{iwt}\ dw =\bigg\langle \hat{ f}, \frac{\mathrm{e}^{-iwt}} {\sqrt{2\pi }} \bigg\rangle _{L^{2}[-\pi,\pi ]}\,,\quad t \in \mathbb{R}\,. }$$
    (5.3)

    Cauchy–Schwarz’s inequality and Parseval’s equality, \(\|f\|_{L^{2}(\mathbb{R})} =\|\hat{ f}\|_{L^{2}[-\pi,\pi ]}\), give, for every \(t \in \mathbb{R}\),

    $$\displaystyle{\vert f(t)\vert \leq \|\hat{ f}\|_{L^{2}[-\pi,\pi ]}\|\frac{\mathrm{e}^{-iwt}} {\sqrt{2\pi }} \|_{L^{2}[-\pi,\pi ]} =\| f\|_{L^{2}(\mathbb{R})}\,,\quad f \in PW_{\pi }\,.}$$

    In other words, evaluation functionals are bounded on PW π , which is consequently an RKHS. Its reproducing kernel is \(k_{\pi }(t,s) = \dfrac{\sin \pi (t - s)} {\pi (t - s)}\), \(t,s \in \mathbb{R}\); indeed, using the Plancherel–Parseval theorem,

    $$\displaystyle{f(s) =\bigg\langle \hat{ f}, \frac{\mathrm{e}^{-iws}} {\sqrt{2\pi }} \bigg\rangle _{L^{2}[-\pi,\pi ]} =\bigg\langle f, \frac{\sin \pi (\cdot - s)} {\pi (\cdot - s)}\bigg\rangle _{L^{2}(\mathbb{R})}\,,\quad s \in \mathbb{R}\,.}$$

    Notice that

    $$\displaystyle{\mathcal{F}^{-1}\Big(\frac{\mathrm{e}^{-iws}} {\sqrt{2\pi }} \chi _{[-\pi,\pi ]}(w)\Big)(t) = \dfrac{\sin \pi (t - s)} {\pi (t - s)}\,,\quad t \in \mathbb{R}\,.}$$
  • Since the sequence \(\Big\{\frac{\mathrm{e}^{-inw}} {\sqrt{2\pi }} \Big\}_{n\in \mathbb{Z}}\) is an orthonormal basis for L 2[−π, π] and \(\mathcal{F}^{-1}\) is a unitary operator we obtain that the sequence

    $$\displaystyle{ \Big\{\dfrac{\sin \pi (t - n)} {\pi (t - n)}\Big\}_{n\in \mathbb{Z}} }$$
    (5.4)

    is an orthonormal basis for the Paley–Wiener space PW π .

  • Moreover, having in mind that k π (t, t) = 1 for all \(t \in \mathbb{R}\), formula (5.2) yields Shannon’s famous sampling theorem [33]:

Theorem 2 (Shannon’s Sampling Theorem).

Any function f ∈ PW π, i.e., band-limited to [−π,π], can be recovered from the sequence of its samples \(\big\{f(n)\big\}_{n\in \mathbb{Z}}\) by means of the formula

$$\displaystyle{ f(t) =\sum _{ n=-\infty }^{\infty }f(n)\,\frac{\sin \pi (t - n)} {\pi (t - n)}\,,\quad t \in \mathbb{R}\,. }$$
(5.5)

The convergence of the series is absolute and uniform on \(\mathbb{R}\).

Another proof of the above theorem is the following [18]: Given fPW π , the expansion of its Fourier transform \(\hat{f} \in L^{2}[-\pi,\pi ]\) with respect to the orthonormal basis \(\Big\{\mathrm{e}^{-inw}/\sqrt{2\pi }\Big\}_{n=-\infty }^{\infty }\) for L 2[−π, π] gives

$$\displaystyle{ \hat{f} =\sum _{ n=-\infty }^{\infty }\bigg\langle \hat{f}, \frac{\mathrm{e}^{-inw}} {\sqrt{2\pi }} \bigg\rangle \,\frac{\mathrm{e}^{-inw}} {\sqrt{2\pi }} =\sum _{ n=-\infty }^{\infty }f(n)\,\frac{\mathrm{e}^{-inw}} {\sqrt{2\pi }} \quad \text{ in }\,L^{2}[-\pi,\pi ]\,. }$$
(5.6)

The inverse Fourier transform \(\mathcal{F}^{-1}\) in (5.6) gives

$$\displaystyle{f\,=\,\sum _{n=-\infty }^{\infty }f(n)\mathcal{F}^{-1}\Big(\frac{\mathrm{e}^{-inw}} {\sqrt{2\pi }} \chi _{[-\pi,\pi ]}(w)\Big)\,=\,\sum _{n=-\infty }^{\infty }f(n)\,\frac{\sin \pi (t - n)} {\pi (t - n)}\ \ \text{ in }\,L^{2}(\mathbb{R})\,.}$$

The convergence properties follow again from the fact that PW π is an RKHS.
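
As a quick numerical illustration of formula (5.5), the following sketch sums a truncated cardinal series; the test function and the truncation range are arbitrary choices.

```python
# Truncated Shannon series for a function band-limited to [-pi, pi].
import numpy as np

def f(t):
    return np.sinc(t / 2) ** 2               # np.sinc(x) = sin(pi x)/(pi x); f is in PW_pi

n = np.arange(-2000, 2001)
t = np.array([0.3, 1.7, -4.25, 10.1])
approx = np.array([np.sum(f(n) * np.sinc(ti - n)) for ti in t])
print(np.max(np.abs(approx - f(t))))         # small; limited only by the truncation
```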

  • Shannon’s sampling formula is an orthonormal expansion in PW π ; Parseval’s identity says that \(\|f\|^{2} =\sum _{ n=-\infty }^{\infty }\vert f(n)\vert ^{2}\) for all f ∈ PW π . In other words, the energy \(E_{f}:=\| f\|^{2}\) of the band-limited function f ∈ PW π is contained in its samples \(\big\{f(n)\big\}_{n\in \mathbb{Z}}\). The following commutative diagram illustrates the meaning of the sampling formula in PW π :

    All mappings included in this diagram are unitary operators:

    1. (a)

      \(\mathcal{S}\) denotes the sampling mapping with sampling period T s = 1.

    2. (b)

      \(\mathcal{P}\) is the 2π-periodization mapping which extends a function \(\hat{f}\) in [−π, π] to the whole \(\mathbb{R}\) with period 2π.

    3. (c)

      The other two mappings are, respectively, the Fourier transform in \(L^{2}(\mathbb{R})\) and the Fourier transform in \(\ell^{2}(\mathbb{Z})\), the latter defined as \(\mathcal{F}(\{a_{n}\})(w):=\sum _{ n=-\infty }^{\infty }a_{n}\,\frac{\mathrm{e}^{-inw}} {\sqrt{2\pi }}\).

    The situation described by the diagram is depicted in Fig. 5.1.

  • As the space PW π is an RKHS contained in the Hilbert space \(L^{2}(\mathbb{R})\), the reproducing formula when applied to any \(f \in L^{2}(\mathbb{R})\) gives its orthogonal projection onto PW π

    $$\displaystyle{P_{PW_{\pi }}f(s) =\big\langle f, \frac{\sin \pi (\cdot - s)} {\pi (\cdot - s)}\big\rangle _{L^{2}(\mathbb{R})} =\big (f {\ast}\mathop{ sinc}\nolimits \big)(s)\,,\quad s \in \mathbb{R}\,,}$$

    where ∗ means the convolution operator and \(\mathop{sinc}\nolimits\) denotes the cardinal sine function \(\mathop{sinc}\nolimits t:=\sin \pi t/\pi t\), \(t \in \mathbb{R}\).

  • The crucial feature in PW π is the sampling period T s = 1; the particular points where the samples are taken are not relevant. In fact, any function f ∈ PW π can be recovered from the sequence of samples \(\big\{f(n + a)\big\}_{n=-\infty }^{\infty }\), for a fixed \(a \in \mathbb{R}\). Observe that \(\Big\{\mathrm{e}^{-i(n+a)w}/\sqrt{2\pi }\Big\}_{n\in \mathbb{Z}}\) is also an orthonormal basis for L 2[−π, π] which goes, via \(\mathcal{F}^{-1}\), onto the orthonormal basis \(\Big\{\dfrac{\sin \pi (t - n - a)} {\pi (t - n - a)}\Big\}_{n\in \mathbb{Z}}\) for PW π . The expansion of any f ∈ PW π with respect to this basis yields the new sampling formula

    $$\displaystyle{f(t) =\sum _{ n=-\infty }^{\infty }f(n + a)\frac{\sin \pi (t - n - a)} {\pi (t - n - a)}\,,\quad t \in \mathbb{R}\,.}$$
  • Shannon’s sampling formula (5.5) is nothing but a Lagrange-type interpolation series. Indeed, formula (5.5) can be rewritten as

    $$\displaystyle{f(t) =\sum _{ n=-\infty }^{\infty }f(n)\frac{(-1)^{n}\sin \pi t} {\pi (t - n)} =\sum _{ n=-\infty }^{\infty }f(n) \frac{P(t)} {P'(n)(t - n)}\,,\quad t \in \mathbb{R}\,,}$$

    where P(t): = sinπ t, \(\,t \in \mathbb{R}\).

  • In general, one can consider the Paley–Wiener space PW π σ , σ > 0, of functions band-limited to [−π σ, π σ], defined as

    $$\displaystyle{PW_{\pi \sigma }:=\Big\{ f \in L^{2}(\mathbb{R})\,:\,\mathop{ supp}\nolimits \hat{f} \subseteq [-\pi \sigma,\pi \sigma ]\Big\}\,.}$$

    In this case the associated sampling period is \(T_{s} = 1/\sigma\). Indeed, for f ∈ PW π σ define \(g(t):= f(t/\sigma )\). Since \(\hat{g}(w) =\sigma \hat{ f}(\sigma w)\), the function g belongs to PW π . Therefore

    $$\displaystyle{g(t) = f(t/\sigma ) =\sum _{ n=-\infty }^{\infty }f(n/\sigma )\frac{\sin \pi (t - n)} {\pi (t - n)}\,,\quad t \in \mathbb{R}\,.}$$

    The change of variable \(t/\sigma = s\) gives, for any f ∈ PW π σ , the sampling formula

    $$\displaystyle{f(s) =\sum _{ n=-\infty }^{\infty }f(n/\sigma )\frac{\sin \pi (\sigma s - n)} {\pi (\sigma s - n)}\,,\quad s \in \mathbb{R}\,.}$$

    The reproducing kernel for PW π σ is \(k_{\pi \sigma }(t,s) =\sigma \mathop{ sinc}\nolimits \sigma (t - s)\), \(t,s \in \mathbb{R}\).

  • Usually, the band of frequencies is centered at 0 since this is the case for real band-limited functions. Indeed, for a real-valued function f one has \(\vert \hat{f}(w)\vert ^{2} =\hat{ f}(w)\overline{\hat{f}(w)} =\hat{ f}(w)\hat{f}(-w)\), i.e., \(\vert \hat{f}\vert ^{2}\) is an even function. Let f be a function in \(L^{2}(\mathbb{R})\) band-limited to the interval \([w_{0}-\pi,w_{0}+\pi ]\). The function \(g(t):= \mathrm{e}^{-iw_{0}t}f(t)\) is band-limited to the interval [−π, π], since \(\hat{g}(w) =\hat{ f}(w + w_{0})\). As a consequence,

    $$\displaystyle{g(t) = \mathrm{e}^{-iw_{0}t}f(t) =\sum _{ n=-\infty }^{\infty }\mathrm{e}^{-iw_{0}n}f(n)\,\frac{\sin \pi (t - n)} {\pi (t - n)}\,,\quad t \in \mathbb{R}\,,}$$

    from which the sampling formula for f reads

    $$\displaystyle{f(t) =\sum _{ n=-\infty }^{\infty }f(n)\,\mathrm{e}^{iw_{0}(t-n)}\,\frac{\sin \pi (t - n)} {\pi (t - n)}\,,\quad t \in \mathbb{R}\,.}$$
Fig. 5.1 Time-frequency interpretation of Shannon’s sampling theorem

Undersampling and Oversampling

If one samples a function f in PW π with a general sampling period T s > 0, the question arises whether it is possible to reconstruct it from its samples \(\{f(nT_{s})\}\). It is indeed possible in the case where 0 < T s ≤ 1, i.e., when sampling the signal at a rate at least as high as that determined by its bandwidth [−π, π]. For sampling periods T s > 1, we cannot reconstruct the signal, due to the aliasing phenomenon explained below.

Firstly, it is easy to describe the relationship between sampling f and periodizing its Fourier transform \(\hat{f}\). To this end, consider the sequence of samples \(\{f(nT_{s})\}_{n\in \mathbb{Z}}\) taken from a function f ∈ PW π with a sampling period T s > 0. Let \(\hat{f}_{p}\) be the \(\frac{2\pi } {T_{s}}\) -periodized version of \(\hat{f}\), i.e., \(\hat{f}_{p}(\omega ) =\sum _{ n=-\infty }^{\infty }\hat{f}\big(\omega + \frac{2\pi } {T_{s}}n\big)\).

Obviously, \(\hat{f}_{p}\) is a \(\frac{2\pi } {T_{s}}\) -periodic function which belongs to \(L^{2}[0, \frac{2\pi } {T_{s}}]\). Its Fourier expansion with respect to the orthonormal basis \(\big\{\sqrt{ \frac{T_{s}} {2\pi }} \,\mathrm{e}^{-imT_{s}\omega }\big\}_{ m\in \mathbb{Z}}\) of \(L^{2}[0, \frac{2\pi } {T_{s}}]\) has Fourier coefficients

$$\displaystyle\begin{array}{rcl} c_{m}& =& \sqrt{\frac{T_{s } } {2\pi }} \int _{0}^{ \frac{2\pi } {T_{s}} }\hat{f}_{p}(\omega )\mathrm{e}^{imT_{s}\omega }d\omega = \sqrt{\frac{T_{s } } {2\pi }} \int _{0}^{ \frac{2\pi } {T_{s}} }\sum _{n=-\infty }^{\infty }\hat{f}\bigg(\omega + \frac{2\pi } {T_{s}}n\bigg)\mathrm{e}^{imT_{s}\omega }d\omega {}\\ & =& \sqrt{\frac{T_{s } } {2\pi }} \sum _{n=-\infty }^{\infty }\int _{ 0}^{ \frac{2\pi } {T_{s}} }\hat{f}\bigg(\omega + \frac{2\pi } {T_{s}}n\bigg)\mathrm{e}^{imT_{s}\omega }d\omega \,,\quad m \in \mathbb{Z}\,. {}\\ \end{array}$$

The change of variable \(\omega + \frac{2\pi } {T_{s}}n = x\) allows us to obtain

$$\displaystyle\begin{array}{rcl} c_{m}& =& \sqrt{\frac{T_{s } } {2\pi }} \sum _{n=-\infty }^{\infty }\int _{ \frac{ 2\pi } {T_{s}}n}^{ \frac{2\pi } {T_{s}}(n+1)}\hat{f}(x)\mathrm{e}^{imT_{s}x}dx = \sqrt{\frac{T_{s } } {2\pi }} \int _{-\pi }^{\pi }\hat{f}(x)\mathrm{e}^{imT_{s}x}dx {}\\ & =& \sqrt{T_{s}}f(mT_{s})\,. {}\\ \end{array}$$

Thus, the Fourier expansion for \(\hat{f}_{p}\) is

$$\displaystyle{ \hat{f}_{p}(\omega ) =\sum _{ n=-\infty }^{\infty }\hat{f}\bigg(\omega + \frac{2\pi } {T_{s}}n\bigg) = T_{s}\sum _{m=-\infty }^{\infty }f(mT_{ s})\,\frac{\mathrm{e}^{-imT_{s}\omega }} {\sqrt{2\pi }} \,. }$$
(5.7)

Formula (5.7) is the so-called Poisson summation formula applied to \(\hat{f}\) with period \(\frac{2\pi } {T_{s}}\). It says that:

The Fourier transform of the sequence \(\{f(mT_{s})\}_{m\in \mathbb{Z}}\), i.e., the sampled function, is precisely (up to a scale factor) the \(\frac{2\pi } {T_{s}}\) -periodized version of the Fourier transform \(\hat{f}\) of f.

  • In the oversampling case, where 0 < T s ≤ 1, the Fourier transform \(\hat{f}\) of f can be recovered from the Fourier transform of the sampled function. Hence, the function f can also be recovered. In terms of Shannon’s sampling theorem, the explanation is easy: if a function is band-limited to the interval [−π, π], it is also band-limited to any interval [−π σ, π σ] with σ ≥ 1. This situation is depicted in Fig. 5.2.

  • In the undersampling case, where T s > 1, we cannot obtain the Fourier transform of f from the Fourier transform of the sampled function because the copies of \(\hat{f}\) overlap in \(\hat{f}_{p}\). Hence, it is impossible to recover the function from its samples. The alluded overlap produces the aliasing phenomenon, i.e., some frequencies appear under the guise of other ones. As pointed out in [17], this is a familiar phenomenon to watchers of TV and western movies. As the stagecoach starts up, the wheels turn faster and faster, but then they seem to gradually slow down, stop, go backwards, slow down, stop, go forward, etc. This effect is due solely to the sampling of the real scene performed by the camera. The undersampling situation is depicted in Fig. 5.3.

Fig. 5.2 Oversampling case

Fig. 5.3 Undersampling case

This undersampling/oversampling discussion clarifies the crucial role of the critical Nyquist period which is given by \(T_{s} = 1/\sigma\) whenever \(\mathop{supp}\nolimits \hat{f} \subseteq [-\pi \sigma,\pi \sigma ]\).
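
The role of the Nyquist period can be illustrated numerically. In the sketch below (test function, sampling periods, and truncation are arbitrary choices) the scaled cardinal series reproduces f up to the truncation error when T s ≤ 1, while for T s > 1 a large aliasing error remains.

```python
# Reconstruction of a function in PW_pi with different sampling periods T_s.
import numpy as np

def f(t):
    return np.sinc(t / 2) ** 2                           # band-limited to [-pi, pi]

t = np.linspace(-5, 5, 201)
n = np.arange(-4000, 4001)
for Ts in (0.5, 1.0, 1.5):                               # oversampling, critical, undersampling
    rec = np.array([np.sum(f(n * Ts) * np.sinc((ti - n * Ts) / Ts)) for ti in t])
    print(Ts, np.max(np.abs(rec - f(t))))
# T_s = 0.5 and T_s = 1.0 give a tiny error; T_s = 1.5 does not, because the
# periodized spectrum overlaps (aliasing).
```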

Robust Reconstruction: Oversampling Technique The actual computation of the cardinal series presents some numerical difficulties since the cardinal sine function behaves like \(1/t\) as \(\vert t\vert \rightarrow \infty \). An easy example is given by the numerical calculation of f(1∕2), for a function f in PW π , from a noisy sequence of samples \(\{f(n) +\delta _{n}\}\). The error in this case, \(\Big\vert \sum _{n}\frac{(-1)^{n}\delta _{ n}} {\pi (n-\frac{1} {2} )} \Big\vert \), could be infinite even when all \(\vert \delta _{n}\vert \leq \delta \).

One way to overcome this difficulty is the oversampling technique, i.e., sampling the signal at a frequency higher than that given by its bandwidth. In this way we obtain sampling functions converging to zero at infinity faster than the cardinal sine functions. Indeed, consider the band-limited function

$$\displaystyle{f(t) = \frac{1} {\sqrt{2\pi }}\int _{-\pi \sigma }^{\pi \sigma }F(\omega )\,\mathrm{e}^{i\omega t}\,d\omega \,\,\mbox{ with }F \in L^{2}[-\pi \sigma,\pi \sigma ]\,\,\mbox{ and }\sigma < 1\,.}$$

Extending F to be zero in \([-\pi,\pi ]\setminus [-\pi \sigma,\pi \sigma ]\), we have

$$\displaystyle{F(\omega ) =\sum _{ n=-\infty }^{\infty }f(n)\frac{\mathrm{e}^{-in\omega }} {\sqrt{2\pi }} \quad \mbox{ in }L^{2}[-\pi,\pi ]\,.}$$

Let \(\theta (\omega )\) be a smooth function taking the value 1 in [−π σ, π σ], and 0 outside [−π, π]. As a consequence,

$$\displaystyle{F(\omega ) =\theta (\omega )F(\omega ) =\sum _{ n=-\infty }^{\infty }f(n)\theta (\omega )\frac{\mathrm{e}^{-in\omega }} {\sqrt{2\pi }} \quad \mbox{ in }L^{2}[-\pi,\pi ]\,,}$$

and the sampling expansion

$$\displaystyle{f(t) =\sum _{ n=-\infty }^{\infty }f(n)S_{\theta }(t - n)\,,\quad t \in \mathbb{R}\,,}$$

holds, where S θ is the inverse Fourier transform \(\mathcal{F}^{-1}\) of the function \(\theta (w)/\sqrt{2\pi }\); consequently, \(S_{\theta }(t - n) = \mathcal{F}^{-1}[\theta (\omega )\,\mathrm{e}^{-in\omega }/\sqrt{2\pi }](t)\). Furthermore, by the properties of the Fourier transform, the smoother θ is, the faster S θ decays as \(\vert t\vert \rightarrow \infty \). However, the new sampling functions \(\{S_{\theta }(t - n)\}_{n=-\infty }^{\infty }\) are no longer orthogonal and they do not belong to PW π σ .

Next, let us consider an illustrative example. Take \(\sigma = 1-\epsilon\) with 0 < ε < 1, and consider for θ(w) the trapezoidal function

$$\displaystyle{\theta (\omega ) = \left \{\begin{array}{@{}l@{\quad }l@{}} 1 \quad &\mbox{ if $\vert \omega \vert \leq \pi (1-\epsilon )$}, \\ \dfrac{1} {\epsilon } \big(1 -\dfrac{\vert \omega \vert } {\pi } \big)\quad &\mbox{ if $\pi (1-\epsilon ) \leq \vert \omega \vert \leq \pi $}, \\ 0 \quad &\mbox{ if $\vert \omega \vert \geq \pi $}. \end{array} \right.}$$

One can easily obtain \(S_{\theta }(t) = \frac{\sin \frac{\epsilon \pi t} {2} } {\frac{\epsilon \pi t} {2} } \,\frac{\sin \big(1 -\frac{\epsilon } {2}\big)\pi t} {\pi t} = \frac{\cos (1-\epsilon )\pi t -\cos \pi t} {\epsilon \pi ^{2}t^{2}}\), \(\,t \in \mathbb{R}\), which behaves like \(1/t^{2}\) as \(\vert t\vert \rightarrow \infty \). The corresponding sampling expansion takes the form

$$\displaystyle{f(t) =\sum _{ n=-\infty }^{\infty }f(n)\,\frac{\sin \frac{\epsilon \pi (t - n)} {2} } {\frac{\epsilon \pi (t - n)} {2} } \,\frac{\sin \big(1 -\frac{\epsilon } {2}\big)\pi (t - n)} {\pi (t - n)}\,,\quad t \in \mathbb{R}\,.}$$

In this example, if each sample f(n) is subject to an error δ n such that | δ n | ≤ δ, then the total error in the above calculated f(t) is bounded by a constant depending only on δ and ε [28, p. 211].
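
A brief numerical check of this oversampling reconstruction follows; the value of ε, the test function, and the truncation range are arbitrary choices.

```python
# Reconstruction with the faster-decaying sampling function S_theta of the example.
import numpy as np

eps = 0.25
sigma = 1 - eps

def f(t):
    return np.sinc(sigma * t / 2) ** 2        # band-limited to [-pi*sigma, pi*sigma]

def S_theta(t):
    # S_theta(t) = [cos((1-eps)*pi*t) - cos(pi*t)]/(eps*(pi*t)^2), decaying like 1/t^2
    t = np.asarray(t, dtype=float)
    out = np.full(t.shape, 1 - eps / 2)       # limiting value at t = 0
    nz = t != 0
    out[nz] = (np.cos((1 - eps) * np.pi * t[nz]) - np.cos(np.pi * t[nz])) / (eps * (np.pi * t[nz]) ** 2)
    return out

n = np.arange(-500, 501)
t = np.array([0.4, 2.3, -7.8])
rec = np.array([np.sum(f(n) * S_theta(ti - n)) for ti in t])
print(np.max(np.abs(rec - f(t))))             # small truncation error
```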

The Paley–Wiener Space PW π as an RKHS of Entire Functions

Any function f ∈ PW π can be extended to any \(z \in \mathbb{C}\) as

$$\displaystyle{ f(z) = \frac{1} {\sqrt{2\pi }}\int _{-\pi }^{\pi }\hat{f}(\omega )\mathrm{e}^{iz\omega }d\omega \,. }$$
(5.8)

This extended function f is proved to be continuous on \(\mathbb{C}\) by a standard argument allowing the interchange of the limit with the integral. Taking a closed curve \(\gamma: [a,b]\longrightarrow \mathbb{C}\), the integral

$$\displaystyle{\int _{\gamma }f(z)\,dz = \frac{1} {\sqrt{2\pi }}\int _{a}^{b}\bigg(\int _{ -\pi }^{\pi }\hat{f}(\omega )e^{i\gamma (t)\omega }d\omega \bigg)\gamma '(t)dt}$$

is shown to be zero by interchanging the order of the integrals. Hence, Morera’s theorem says that f is an entire function.

Moreover, f is a function of exponential type at most π, i.e., f satisfies an inequality \(\vert f(z)\vert \leq A\mathrm{e}^{\pi \vert z\vert }\) for all \(z \in \mathbb{C}\) and some positive constant A. This follows from (5.8) by using the Cauchy–Schwarz inequality. Indeed, for \(z = x + iy \in \mathbb{C}\) one has

$$\displaystyle{\vert f(x + iy)\vert \leq \frac{1} {\sqrt{2\pi }}\int _{-\pi }^{\pi }\vert \hat{f}(\omega )\vert \,e^{-y\omega }\,d\omega \leq \frac{e^{\pi \vert y\vert }} {\sqrt{2\pi }}\int _{-\pi }^{\pi }\vert \hat{f}(\omega )\vert \,d\omega \leq e^{\pi \vert z\vert }\|f\|_{ PW_{\pi }}.}$$

Conversely, the Paley–Wiener theorem, whose proof can be found, for instance, in [39, p. 101], tells us that these properties characterize the space PW π :

Theorem 3 (Paley–Wiener Theorem).

Let f be an entire function such that \(\vert f(z)\vert \leq C\mathrm{e}^{\pi \vert z\vert }\), for any \(z \in \mathbb{C}\), and \(f\vert _{\mathbb{R}} \in L^{2}(\mathbb{R})\). Then there exists a function F ∈ L 2 [−π,π] such that

$$\displaystyle{f(z) = \frac{1} {\sqrt{2\pi }}\int _{-\pi }^{\pi }F(w)\,\mathrm{e}^{izw}dw\,,\quad z \in \mathbb{C}\,.}$$

Consequently, \(PW_{\pi } \equiv \Big\{ f \in H(\mathbb{C})\,:\,\vert f(z)\vert \leq A\mathrm{e}^{\pi \vert z\vert },\ f\vert _{\mathbb{R}} \in L^{2}(\mathbb{R})\Big\}\). Considering the space PW π as an RKHS of entire functions, its reproducing kernel is given by

$$\displaystyle{k_{\pi }(z,w) =\mathop{ sinc}\nolimits (z -\overline{w})\,,\quad z,w \in \mathbb{C}\,,}$$

since

$$\displaystyle{f(w) =\bigg\langle \hat{ f}, \frac{\mathrm{e}^{-i\overline{w}\cdot }} {\sqrt{2\pi }} \bigg\rangle _{L^{2}[-\pi,\pi ]} =\bigg\langle f, \frac{\sin \pi (\cdot -\overline{w})} {\pi (\cdot -\overline{w})}\bigg\rangle _{L^{2}(\mathbb{R})}\quad \mbox{ for any $w \in \mathbb{C}$}\,.}$$

Irregular Sampling: Paley–Wiener–Levinson’s Theorem

Let \(\{t_{n}\}_{n\in \mathbb{Z}}\) be a sequence of real numbers such that \(D:=\sup _{n\in \mathbb{Z}}\vert t_{n} - n\vert < 1/4\); hence, by Kadec’s \(\frac{1} {4}\)-theorem [39, p. 42], the sequence \(\big\{\mathrm{e}^{-it_{n}w}/\sqrt{2\pi }\big\}_{n\in \mathbb{Z}}\) is a Riesz basis for L 2[−π, π]. Consider its dual Riesz basis \(\{h_{n}\}_{n\in \mathbb{Z}}\) in L 2[−π, π]; given f ∈ PW π , expanding its Fourier transform \(\hat{f} \in L^{2}[-\pi,\pi ]\) with respect to \(\{h_{n}\}_{n\in \mathbb{Z}}\) we obtain

$$\displaystyle{\hat{f} =\sum _{ n=-\infty }^{\infty }\langle \hat{f},\mathrm{e}^{-it_{n}w}/\sqrt{2\pi }\rangle \,h_{ n} =\sum _{ n=-\infty }^{\infty }f(t_{ n})\,h_{n}\quad \mbox{ in $L^{2}[-\pi,\pi ]$}\,.}$$

The inverse Fourier transform \(\mathcal{F}^{-1}\) gives in PW π the sampling formula

$$\displaystyle{ f(t) =\sum _{ n=-\infty }^{\infty }f(t_{ n})\,(\mathcal{F}^{-1}h_{ n})(t)\,,\quad t \in \mathbb{R}\,. }$$
(5.9)

The problem consists of identifying the sampling functions \((\mathcal{F}^{-1}h_{n})(t)\). By using entire-function techniques, Paley, Wiener, and Levinson [24] proved that

$$\displaystyle{(\mathcal{F}^{-1}h_{ n})(t) = \frac{G(t)} {(t - t_{n})G'(t_{n})}\quad \mbox{ where}\quad G(t) = (t - t_{0})\prod _{n=1}^{\infty }\bigg(1 - \frac{t} {t_{n}}\bigg)\bigg(1 - \frac{t} {t_{-n}}\bigg)\,.}$$

In other words, sampling formula (5.9) is again a Lagrange-type interpolation series. As a consequence, the sequences

$$\displaystyle{\Big\{\frac{\sin \pi (t - t_{n})} {\pi (t - t_{n})}\Big\}_{n\in \mathbb{Z}}\quad \mbox{ and}\quad \Big\{ \frac{G(t)} {(t - t_{n})G'(t_{n})}\Big\}_{n\in \mathbb{Z}}}$$

form a pair of dual Riesz bases for \(PW_{\pi }\).

The irregular sampling studied here corresponds to that associated with the time-jitter error, i.e., \(t_{n} = n +\delta _{n}\), \(n \in \mathbb{Z}\). The general case concerns real sequences \(\{t_{n}\}_{n\in \mathbb{Z}}\) for which there exist constants \(0 < A \leq B < \infty \) such that

$$\displaystyle{A\|f\|^{2} \leq \sum _{ n\in \mathbb{Z}}\vert f(t_{n})\vert ^{2} \leq B\|f\|^{2}\quad \mbox{ for all $f \in PW_{\pi }$}\,.}$$

This means that the sequence \(\big\{\mathrm{e}^{-it_{n}w}/\sqrt{2\pi }\big\}_{n\in \mathbb{Z}}\) is a frame for L 2[−π, π]. See, for instance, [4, 9, 10].
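
A rough numerical sketch of the Lagrange-type functions \(G(t)/\big((t - t_{n})G'(t_{n})\big)\) is given below for jittered nodes; the truncation of the infinite product, the jitter, and the finite-difference approximation of G′ are arbitrary choices, so this only illustrates the interpolation property.

```python
# Interpolation property of the (truncated) Paley-Wiener-Levinson sampling functions.
import numpy as np

rng = np.random.default_rng(2)
M = 2000                                            # truncation of the infinite product
k = np.arange(-M, M + 1)
t_nodes = k + 0.2 * (2 * rng.random(k.size) - 1)    # |t_n - n| <= 0.2 < 1/4

def G(t):
    t = np.atleast_1d(t).astype(float)
    t0 = t_nodes[k == 0][0]
    pos = t_nodes[k > 0]                            # t_1, t_2, ...
    neg = t_nodes[k < 0][::-1]                      # t_{-1}, t_{-2}, ...
    factors = (1 - t[:, None] / pos) * (1 - t[:, None] / neg)
    return (t - t0) * np.prod(factors, axis=1)

def lagrange(n_index, t, h=1e-5):
    """G(t) / ((t - t_n) G'(t_n)), with G'(t_n) approximated by a central difference."""
    tn = t_nodes[k == n_index][0]
    dG = (G(tn + h) - G(tn - h)) / (2 * h)
    return G(t) / ((t - tn) * dG)

print(lagrange(3, t_nodes[k == 3][0] + 1e-9))       # ~ 1 at its own node
print(lagrange(3, t_nodes[k == 5][0]))              # ~ 0 at the other nodes
```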

Sampling by Using Samples of the Derivative

It is possible to recover any function f ∈ PW π by using its samples \(\{f(2n)\}_{n\in \mathbb{Z}}\), taken at half the required sampling rate, along with the samples \(\{f'(2n)\}_{n\in \mathbb{Z}}\) of its first derivative. Namely: Any function f ∈ PW π can be recovered from the two sets of samples \(\{f(2n)\}_{n\in \mathbb{Z}}\) and \(\{f'(2n)\}_{n\in \mathbb{Z}}\) by means of the formula

$$\displaystyle{f(t) =\sum _{ n=-\infty }^{\infty }\big\{f(2n) + (t - 2n)f'(2n)\big\}\left [\frac{\sin \frac{\pi } {2}(t - 2n)} { \frac{\pi }{2}(t - 2n)}\right ]^{2}\,,\quad t \in \mathbb{R}\,.}$$

For the proof, let \(\hat{f} \in L^{2}[-\pi,\pi ]\) be the Fourier transform of f; having in mind its 2π-periodic extension, the following Fourier expansions in L 2[−π, π] hold

$$\displaystyle{\hat{f}(\omega ) =\sum _{ n=-\infty }^{\infty }f(n)\frac{\mathrm{e}^{-in\omega }} {\sqrt{2\pi }} \quad \mbox{ and}\quad \hat{f}(\omega -\pi ) =\sum _{ n=-\infty }^{\infty }(-1)^{n}f(n)\frac{\mathrm{e}^{-in\omega }} {\sqrt{2\pi }} \,.}$$

As a consequence, the function \(S(\omega ) = \dfrac{1} {2}\big[\hat{f}(\omega ) +\hat{ f}(\omega -\pi )\big]\) admits the Fourier expansion

$$\displaystyle{S(\omega ) =\sum _{ n=-\infty }^{\infty }f(2n)\frac{\mathrm{e}^{-i2n\omega }} {\sqrt{2\pi }} \quad \mbox{ in}\,\,L^{2}[0,\pi ]\,.}$$

In a similar way, since

$$\displaystyle{f'(t) = \frac{1} {\sqrt{2\pi }}\int _{-\pi }^{\pi }i\omega \hat{f}(\omega )\mathrm{e}^{it\omega }d\omega \,,\quad t \in \mathbb{R}\,,}$$

the following expansions in L 2[−π, π] hold

$$\displaystyle{i\omega \hat{f}(\omega ) =\sum _{ n=-\infty }^{\infty }f'(n)\frac{\mathrm{e}^{-in\omega }} {\sqrt{2\pi }} \ \text{and}\ i(\omega -\pi )\hat{f}(\omega -\pi ) =\sum _{ n=-\infty }^{\infty }(-1)^{n}f'(n)\frac{\mathrm{e}^{-in\omega }} {\sqrt{2\pi }} \,.}$$

Hence, the function \(R(\omega ) = \dfrac{i} {2}[\omega \hat{f}(\omega ) + (\omega -\pi )\hat{f}(\omega -\pi )]\) admits the Fourier expansion

$$\displaystyle{R(\omega ) =\sum _{ n=-\infty }^{\infty }f'(2n)\frac{\mathrm{e}^{-i2n\omega }} {\sqrt{2\pi }} \quad \mbox{ in}\,\,L^{2}[0,\pi ]\,.}$$

Grouping both expansions, for ω ∈ [0, π], we have

$$\displaystyle{\left (\begin{array}{*{10}c} S(\omega )\\ R(\omega ) \end{array} \right ) = \frac{1} {2}\left (\begin{array}{*{10}c} 1& 1\\ i\omega &i(\omega -\pi ) \end{array} \right )\left (\begin{array}{*{10}c} \hat{f}(\omega )\\ \hat{f}(\omega -\pi ) \end{array} \right )\,,}$$

or, inverting the matrix

$$\displaystyle{ \left (\begin{array}{*{10}c} \hat{f}(\omega )\\ \hat{f}(\omega -\pi ) \end{array} \right ) = \frac{2i} {\pi } \left (\begin{array}{*{10}c} i(\omega -\pi )&-1\\ -i\omega & 1 \end{array} \right )\left (\begin{array}{*{10}c} S(\omega )\\ R(\omega ) \end{array} \right ). }$$
(5.10)

Introducing this splitting of \(\hat{f}\) into (5.3) yields after some calculations

$$\displaystyle\begin{array}{rcl} f(t)& =& \frac{1} {\sqrt{2\pi }}\int _{-\pi }^{\pi }\hat{f}(\omega )\mathrm{e}^{it\omega }d\omega {}\\ & =& \frac{1} {\sqrt{2\pi }}\int _{-\pi }^{0}\sum _{ n=-\infty }^{\infty }\bigg[\frac{2} {\pi } (\omega +\pi )f(2n) + \frac{2i} {\pi } f'(2n)\bigg]\frac{\mathrm{e}^{-i2n\omega }} {\sqrt{2\pi }} \mathrm{e}^{it\omega }d\omega {}\\ & & + \frac{1} {\sqrt{2\pi }}\int _{0}^{\pi }\sum _{ n=-\infty }^{\infty }\bigg[\frac{2} {\pi } (\pi -\omega )f(2n) -\frac{2i} {\pi } f'(2n)\bigg]\frac{\mathrm{e}^{-i2n\omega }} {\sqrt{2\pi }} \mathrm{e}^{it\omega }d\omega {}\\ & =& \frac{1} {\sqrt{2\pi }}\sum _{n=-\infty }^{\infty }\Big\{\int _{ -\pi }^{\pi }\sqrt{ \frac{2} {\pi }} \bigg(1 -\frac{\vert \omega \vert } {\pi } \bigg)f(2n)\mathrm{e}^{i(t-2n)\omega }d\omega {}\\ & & +\frac{2} {\pi } \int _{-\pi }^{\pi }\frac{(-i\mathop{sgn}\nolimits \omega )} {\sqrt{2\pi }} f'(2n)\mathrm{e}^{i(t-2n)\omega }d\omega \Big\}\,,\quad t \in \mathbb{R}\,. {}\\ \end{array}$$

The desired result comes by using the Fourier duals

$$\displaystyle{\mathop{sinc}\nolimits \bigg( \frac{t} {2}\bigg)\sin \bigg( \frac{\pi t} {2}\bigg) = \frac{1} {\sqrt{2\pi }}\int _{-\pi }^{\pi }\frac{(-i\mathop{sgn}\nolimits \omega )} {\sqrt{2\pi }} \mathrm{e}^{it\omega }d\omega }$$

and

$$\displaystyle{\mathop{sinc}\nolimits ^{2}\bigg( \frac{t} {2}\bigg) = \frac{1} {\sqrt{2\pi }}\int _{-\pi }^{\pi }\sqrt{\frac{2} {\pi }} \Big(1 -\frac{\vert \omega \vert } {\pi } \Big)\mathrm{e}^{it\omega }d\omega \,.}$$

For derivative sampling, see [21] and references therein.
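
The derivative sampling formula above can be checked numerically; in the sketch below the test function, the truncation, and the finite-difference approximation of f′ are arbitrary choices.

```python
# Recover f in PW_pi from the samples {f(2n)} and {f'(2n)}.
import numpy as np

def f(t):
    return np.sinc((t - 0.5) / 2) ** 2              # band-limited to [-pi, pi]

def fprime(t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)          # accurate enough for this demo

n = np.arange(-500, 501)
tn = 2 * n                                          # sampling points: the even integers
t = np.array([0.7, 3.2, -5.9])
rec = np.array([
    np.sum((f(tn) + (ti - tn) * fprime(tn)) * np.sinc((ti - tn) / 2) ** 2)
    for ti in t
])
print(np.max(np.abs(rec - f(t))))                   # small
```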

Generalizing Paley–Wiener Spaces

Paley–Wiener spaces can be generalized in different ways; two of these generalizations are briefly developed:

  1. 1.

    The first one consists in substituting the Hilbert space L 2[−π, π] and the Fourier kernel in expression (5.3) by an arbitrary Hilbert space \(\mathcal{H}\) and a kernel \(K: \Omega \ni t\mapsto K(t) \in \mathcal{H}\), and thus considering, for each \(x \in \mathcal{H}\), the function \(f_{x}(t) =\big\langle x,K(t)\big\rangle _{\mathcal{H}}\), \(t \in \Omega \).

  2. 2.

    According to Shannon’s sampling theorem, the Paley–Wiener space PW π is a shift-invariant subspace of \(L^{2}(\mathbb{R})\) generated by the \(\mathop{sinc}\nolimits\) function, i.e., it can be described as \(PW_{\pi } \equiv \big\{\sum _{n\in \mathbb{Z}}a_{n}\,\mathop{ sinc}\nolimits (t - n)\,\,\mbox{ where $\{a_{n}\} \in \ell^{2}(\mathbb{Z})$}\big\}\). Another generalization consists of replacing the \(\mathop{sinc}\nolimits\) function by another generating function \(\varphi \in L^{2}(\mathbb{R})\) having better properties.

Of course, these generalizations do not cover all the possible situations. For example, de Branges spaces are RKHSs of entire functions which also generalize Paley–Wiener spaces. Sampling results in de Branges spaces can be found in [15, 27].

RKHSs Obtained by Duality from an Arbitrary Hilbert Space

Let \(\mathcal{H}\) be a separable Hilbert space and let \(K: \Omega \longrightarrow \mathcal{H}\) be an \(\mathcal{H}\)-valued mapping. Assume that there exists a sequence \(\{t_{n}\}_{n=1}^{\infty }\) in \(\Omega \) such that the sequence \(\{K(t_{n})\}_{n=1}^{\infty }\) is an orthogonal basis for \(\mathcal{H}\). Under these circumstances:

  1. 1.

    Consider the set of functions defined on \(\Omega \)

    $$\displaystyle{\mathcal{H}_{K}:=\big\{ f_{x}: \Omega \rightarrow \mathbb{C}\,:\, f_{x}(t) =\langle x,K(t)\rangle _{\mathcal{H}}\,\text{ where }\,x \in \mathcal{H}\big\}\,.}$$

    The map \(\mathcal{T}_{K}: \mathcal{H}\rightarrow \mathcal{H}_{K}\) defined by \(\mathcal{T}_{K}(x):= f_{x}\) is a linear and bijective mapping. To see that \(\mathcal{T}_{K}\) is one-to-one, suppose that \(f_{x} = 0\) in \(\mathcal{H}_{K}\). In particular,

    $$\displaystyle{f_{x}(t_{n}) = 0 =\langle x,K(t_{n})\rangle _{\mathcal{H}}\quad \mbox{ for all $n \in \mathbb{N}$}\,,}$$

    which implies x = 0 since the sequence \(\{K(t_{n})\}_{n=1}^{\infty }\) is a complete set in \(\mathcal{H}\).

  2. 2.

    The space \(\mathcal{H}_{K}\) endowed with the inner product \(\langle f_{x},f_{y}\rangle _{\mathcal{H}_{K}}:=\langle x,y\rangle _{\mathcal{H}}\) is a Hilbert space which inherits the Hilbertian structure of \(\mathcal{H}\). Moreover, it is an RKHS; indeed, for each \(t \in \Omega \), the evaluation functional at t is bounded since the Cauchy–Schwarz inequality gives

    $$\displaystyle{\vert f_{x}(t)\vert = \vert \langle x,K(t)\rangle _{\mathcal{H}}\vert \leq \| x\|_{\mathcal{H}}\|K(t)\|_{\mathcal{H}} =\| f_{x}\|_{\mathcal{H}_{K}}\|K(t)\|_{\mathcal{H}}\,,\quad f_{x} \in \mathcal{H}_{K}\,.}$$

    Besides, the mapping \(\mathcal{T}_{K}\) is, obviously, a unitary operator between the spaces \(\mathcal{H}\) and \(\mathcal{H}_{K}\). The reproducing kernel in \(\mathcal{H}_{K}\) is

    $$\displaystyle{k(t,s) =\langle K(s),K(t)\rangle _{\mathcal{H}}\,,\quad t,s \in \Omega \,.}$$

    Indeed, for each fixed \(s \in \Omega \), the function \(k(\cdot,s) = \mathcal{T}_{K}\big(K(s)\big)\) belongs to \(\mathcal{H}_{K}\), and the reproducing property

    $$\displaystyle{f_{x}(s) =\langle x,K(s)\rangle _{\mathcal{H}} =\langle \mathcal{T}_{K}(x),\mathcal{T}_{K}\big(K(s)\big)\rangle _{\mathcal{H}_{K}} =\langle f_{x},k(\cdot,s)\rangle _{\mathcal{H}_{K}}\,,\quad s \in \Omega \,,\ \ f_{x} \in \mathcal{H}_{K}\,,}$$

    holds. One can find these spaces, for instance, in [29, 30].

  3. 3.

    Since \(\langle k(\cdot,t_{n}),k(\cdot,t_{m})\rangle _{\mathcal{H}_{K}} = k(t_{m},t_{n}) =\langle K(t_{n}),K(t_{m})\rangle _{\mathcal{H}}\), it is easy to check that the sequence \(\big\{K(t_{n})\big\}_{n=1}^{\infty }\) is an orthogonal basis for the auxiliary Hilbert space \(\mathcal{H}\) if and only if the sequence \(\big\{k(\cdot,t_{n})\big\}_{n=1}^{\infty }\) is an orthogonal basis for the RKHS \(\mathcal{H}_{K}\). Thus, in this context, for any \(f \in \mathcal{H}_{K}\) the sampling formula (5.2) reads

    $$\displaystyle{f(t) =\sum _{ n=1}^{\infty }f(t_{ n})\frac{\langle K(t_{n}),K(t)\rangle _{\mathcal{H}}} {\|K(t_{n})\|^{2}} \,,\quad t \in \Omega \,.}$$

    The convergence is absolute and uniform on subsets of \(\Omega \) where the function \(t\mapsto \|K(t)\|_{\mathcal{H}}\) is bounded. The above sampling formula is nothing but an abstract version of the Kramer sampling theorem; see [16, 20, 40], for instance.

Some Illustrative Examples

Next, some examples following the above construction are exhibited; for the omitted details see [12, 13]:

  1. 1.

    Consider the Hilbert space \(\mathcal{H}:= L^{2}[0,\pi ]\), the mapping \(K_{c}: \mathbb{R}\longrightarrow L^{2}[0,\pi ]\) such that \(K_{c}(t)(w):=\cos tw\), w ∈ [0, π], and the sequence \(\{t_{n}\} =\{ 0\} \cup \mathbb{N}\). Then, any function f defined as

    $$\displaystyle{f(t) =\big\langle F,K_{c}(t)\big\rangle _{L^{2}[0,\pi ]} =\int _{ 0}^{\pi }F(w)\cos tw\,dw\,,\quad t \in \mathbb{R}\,,}$$

    for some F ∈ L 2[0, π], can be recovered from the sampling formula (a numerical check is sketched after this list)

    $$\displaystyle{f(t) = f(0)\frac{\sin \pi t} {\pi t} + \frac{2} {\pi } \sum _{n=1}^{\infty }f(n)\frac{(-1)^{n}\,t\sin \pi t} {t^{2} - n^{2}} \,,\quad t \in \mathbb{R}\,.}$$

    The reproducing kernel of the corresponding \(\mathcal{H}_{K_{c}}\) space is

    $$\displaystyle{k_{c}(t,s) = \frac{1} {t^{2} - s^{2}}\big[t\sin \pi t\cos \pi s - s\cos \pi t\sin \pi s\big]\,,\quad t,s \in \mathbb{R}\,.}$$
  2. 2.

    Analogously, considering \(K_{s}(t)(w):=\sin tw\), w ∈ [0, π], and the sequence \(\{t_{n}\} = \mathbb{N}\), one obtains the sampling formula

    $$\displaystyle{f(t) = \frac{2} {\pi } \sum _{n=1}^{\infty }f(n)\frac{(-1)^{n}n\sin \pi t} {t^{2} - n^{2}} \,,\quad t \in \mathbb{R}\,,}$$

    for any function f having the form

    $$\displaystyle{f(t) =\big\langle F,K_{s}(t)\big\rangle _{L^{2}[0,\pi ]} =\int _{ 0}^{\pi }F(w)\sin tw\,dw\,,\quad t \in \mathbb{R}\,,}$$

    where F ∈ L 2[0, π].

    Functions in example (1) coincide with the even functions in the Paley–Wiener space PW π , whilst functions in example (2) coincide with the odd functions in PW π . In fact, the orthogonal sum \(PW_{\pi } = \mathcal{H}_{K_{c}} \oplus \mathcal{H}_{K_{s}}\) holds.

  3. 3.

    The Fourier–Bessel set \(\big\{\sqrt{w}\,J_{\nu }(wt_{n})\big\}_{n=1}^{\infty }\) is known to be an orthogonal basis for L 2[0, 1], where \(t_{n}\) is the nth positive zero of the Bessel function \(J_{\nu }\), ν > −1. The Bessel function of order ν is given by

    $$\displaystyle{J_{\nu }(t) = \frac{t^{\nu }} {2^{\nu }\Gamma (\nu +1)}\Big[1 +\sum _{ n=1}^{\infty } \frac{(-1)^{n}} {n!(1+\nu )\cdots (n+\nu )}\left ( \frac{t} {2}\right )^{2n}\Big]\,,\quad t \in \mathbb{R}\,.}$$

    For any \(t \in \mathbb{R}\), consider K ν (t) ∈ L 2[0, 1] defined by \(K_{\nu }(t)(w):= \sqrt{wt}\,J_{\nu }(wt)\), w ∈ [0, 1], and the sequence of zeros \(\{t_{n}\}_{n=1}^{\infty }\). Any function f defined as

    $$\displaystyle{f(t) =\big\langle F,K_{\nu }(t)\big\rangle _{L^{2}[0,1]} =\int _{ 0}^{1}F(w)\sqrt{wt}\,J_{\nu }(wt)\,dw\,,\quad t \in \mathbb{R}\,,}$$

    where F ∈ L 2[0, 1], can be recovered by means of the sampling formula

    $$\displaystyle{f(t) =\sum _{ n=1}^{\infty }f(t_{ n}) \frac{2\sqrt{t\,t_{n}}J_{\nu }(t)} {J'_{\nu }(t_{n})(t^{2} - t_{n}^{2})}\,,\quad t \in \mathbb{R}\,.}$$

    The reproducing kernel of the corresponding RKHS \(\mathcal{H}_{\nu }\) is

    $$\displaystyle{k_{\nu }(s,t) = \frac{\sqrt{st}} {t^{2} - s^{2}}\big[tJ_{\nu +1}(t)J_{\nu }(s) - sJ_{\nu +1}(s)J_{\nu }(t)\big]\,,\quad t,s \in \mathbb{R}\,.}$$
  4. 4.

    Finally, consider \(K: \mathbb{R}\longrightarrow L^{2}[-\pi,\pi ]\) defined by \(K(t)(w):= \mathrm{e}^{i(t^{2}+w^{2}-wt) }\), w ∈ [−π, π]. For the sampling points \(\{t_{n}\} = \mathbb{Z}\), the sequence \(\{K(t_{n})\}_{n\in \mathbb{Z}}\) is an orthogonal basis for L 2[−π, π]. Hence, any function f given as

    $$\displaystyle{f(t) =\big\langle F,K(t)\big\rangle _{L^{2}[-\pi,\pi ]} =\int _{ -\pi }^{\pi }F(w)\,\mathrm{e}^{-i(t^{2}+w^{2}-wt) }\,dw\,,\quad t \in \mathbb{R}\,,}$$

    where F ∈ L 2[−π, π], can be expressed as the sampling series

    $$\displaystyle{f(t) =\sum _{ n=-\infty }^{\infty }f(n)\,\mathrm{e}^{-i(t^{2}-n^{2}) }\frac{\sin \pi (t - n)} {\pi (t - n)}\,,\quad t \in \mathbb{R}\,.}$$

    The above formula is the sampling formula for functions band-limited to the interval [−π, π] in the sense of the fractional Fourier transform (FRFT).
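
As announced in example (1), a numerical check of the cosine-kernel sampling formula is sketched below; the choice F(w) = w (for which f has a closed form) and the truncation are arbitrary.

```python
# Sampling formula of example (1) for f(t) = \int_0^pi w cos(tw) dw, i.e., F(w) = w.
import numpy as np

def f(t):
    t = np.asarray(t, dtype=float)
    out = np.full(t.shape, np.pi ** 2 / 2)          # limiting value at t = 0
    nz = t != 0
    out[nz] = (np.pi * np.sin(np.pi * t[nz]) / t[nz]
               + (np.cos(np.pi * t[nz]) - 1.0) / t[nz] ** 2)
    return out

N = 2000
n = np.arange(1, N + 1)
t = np.array([0.37, 2.6, -5.4])
rec = (f(np.zeros(1)) * np.sinc(t)
       + (2 / np.pi) * np.array([np.sum(f(n) * (-1.0) ** n * ti * np.sin(np.pi * ti)
                                        / (ti ** 2 - n ** 2)) for ti in t]))
print(np.max(np.abs(rec - f(t))))                   # small truncation error
```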

Shift-Invariant Subspaces in \(L^{2}(\mathbb{R})\)

Although Shannon’s sampling theory has had an enormous impact, it has a number of problems, as pointed out in [36]: it relies on the use of ideal filters (in other words, in Fig. 5.1, \(\hat{f}\) can be obtained from \(\hat{f}_{p}\) by multiplying by the characteristic function χ [−π, π]); the band-limited hypothesis is in contradiction with the idea of a finite-duration signal (f is an entire function); the band-limiting operation generates Gibbs oscillations; and finally, the \(\mathop{sinc}\nolimits\) function has a very slow decay at infinity, which makes computation in the signal domain very inefficient. Moreover, many applied problems impose different a priori constraints on the type of signals. For this reason, sampling and reconstruction problems have been investigated in spline spaces, wavelet spaces, and general shift-invariant spaces; signals are assumed to belong to some shift-invariant space of the form \(V _{\varphi }^{2}:= \overline{\mathop{span}\nolimits }_{L^{2}}\{\varphi (t - n)\}_{n\in \mathbb{Z}}\), where the function φ in \(L^{2}(\mathbb{R})\) is called the generator of V φ 2.

Let \(V _{\varphi }^{2}:= \overline{\mathop{span}\nolimits }\big\{\varphi (\cdot - n)\big\}_{n\in \mathbb{Z}}\) be a shift-invariant space with stable generator \(\varphi \in L^{2}(\mathbb{R})\) which means that the sequence \(\{\varphi (\cdot - n)\}_{n\in \mathbb{Z}}\) is a Riesz basis for V φ 2.

The sequence \(\{\varphi (\cdot - n)\}_{n\in \mathbb{Z}}\) is a Riesz sequence in \(L^{2}(\mathbb{R})\), i.e., a Riesz basis for V φ 2 if and only if there exist two positive constants 0 < AB such that

$$\displaystyle{A \leq \sum _{k\in \mathbb{Z}}\vert \hat{\varphi }(w + k)\vert ^{2} \leq B\,,\quad \text{a.e.}\,\,w \in [0,1]\,,}$$

where \(\hat{\varphi }\) stands for the Fourier transform of \(\varphi\) (here, it is defined in \(L^{1}(\mathbb{R}) \cap L^{2}(\mathbb{R})\) as \(\hat{\varphi }(w):=\int _{ -\infty }^{\infty }\varphi (t)\,\mathrm{e}^{-2\pi \mathrm{i}wt}dt\)) [8, p. 143]. Thus we have that

$$\displaystyle{V _{\varphi }^{2} =\bigg\{\sum _{ n\in \mathbb{Z}}a_{n}\ \varphi (\cdot - n)\,:\,\{ a_{n}\} \in \ell^{2}(\mathbb{Z})\bigg\} \subset L^{2}(\mathbb{R})\,.}$$

It is also assumed that the functions in the shift-invariant space \(V _{\varphi }^{2}\) are continuous on \(\mathbb{R}\). This is equivalent to saying that the generator φ is continuous on \(\mathbb{R}\) and the function \(t\mapsto \sum _{n\in \mathbb{Z}}\vert \varphi (t - n)\vert ^{2}\) is bounded on \(\mathbb{R}\), as proved in [42]. Thus, any f ∈ V φ 2 is defined on \(\mathbb{R}\) as the pointwise sum \(f(t) =\sum _{n\in \mathbb{Z}}a_{n}\varphi (t - n)\) for each \(t \in \mathbb{R}\).

On the other hand, the space V φ 2 is the image of the Hilbert space L 2[0, 1] by means of the isomorphism

$$\displaystyle{\begin{array}{rccl} \mathcal{T}_{\varphi }:& L^{2}[0,1] &\longrightarrow &V _{\varphi }^{2} \\ & \{\mathrm{e}^{-2\pi inx}\}_{n\in \mathbb{Z}} & \longmapsto & \{\varphi (t - n)\}_{n\in \mathbb{Z}}\,, \end{array} }$$

which maps the orthonormal basis \(\{\mathrm{e}^{-2\pi inw}\}_{n\in \mathbb{Z}}\) for L 2[0, 1] onto the Riesz basis \(\{\varphi (t - n)\}_{n\in \mathbb{Z}}\) for V φ 2. For any f ∈ V φ 2, there exists F ∈ L 2[0, 1] such that

$$\displaystyle\begin{array}{rcl} f(t)& =& \mathcal{T}_{\varphi }F(t) =\sum _{n\in \mathbb{Z}}\langle F,\mathrm{e}^{-2\pi inx}\rangle \varphi (t - n) =\bigg\langle F,\sum _{ n\in \mathbb{Z}}\overline{\varphi (t - n)}\mathrm{e}^{-2\pi inx}\bigg\rangle \\ & =& \big\langle F,K_{t}\big\rangle \,,\quad t \in \mathbb{R}\,, {}\end{array}$$
(5.11)

where, for each \(t \in \mathbb{R}\), the function \(K_{t} \in L^{2}[0,1]\) is given by

$$\displaystyle{ K_{t}(x):=\sum _{n\in \mathbb{Z}}\overline{\varphi (t - n)}\mathrm{e}^{-2\pi inx} = \overline{\sum _{ n\in \mathbb{Z}}\varphi (t + n)\mathrm{e}^{-2\pi inx}} = \overline{Z\varphi (t,x)}\,. }$$
(5.12)

Here, \(Z\varphi (t,x):=\sum _{n\in \mathbb{Z}}\varphi (t + n)\mathrm{e}^{-2\pi inx}\) is just the Zak transform of the function φ; see [8, p. 215] for properties and uses of the Zak transform. As a consequence, the shift-invariant space V φ 2 is an RKHS in \(L^{2}(\mathbb{R})\). The mapping \(\mathcal{T}_{\varphi }\) has the shifting property \(\mathcal{T}_{\varphi }(\mathrm{e}^{-2\pi imx}F)(t) = (\mathcal{T}_{\varphi }F)(t - m)\), \(t \in \mathbb{R}\) and \(m \in \mathbb{Z}\).

From (5.11), for a ∈ [0, 1) fixed and \(m \in \mathbb{Z}\) we have

$$\displaystyle{f(a + m) =\langle F,K_{a+m}\rangle _{L^{2}[0,1]} =\langle F,\mathrm{e}^{-2\pi imx}K_{ a}\rangle _{L^{2}[0,1]}\,,\quad F = \mathcal{T}_{\varphi }^{-1}f\,.}$$

In order to obtain a sampling formula in V φ 2, we look for sampling points of the form \(t_{m}:= a + m\), \(m \in \mathbb{Z}\), such that the sequence \(\big\{\mathrm{e}^{-2\pi imx}K_{a}(x)\big\}_{m\in \mathbb{Z}}\) is a Riesz basis for L 2[0, 1].

Recalling that the multiplication operator \(m_{g}: L^{2}[0,1] \rightarrow L^{2}[0,1]\) given by the product \(m_{g}(f) = gf\) is well defined if and only if \(g \in L^{\infty }[0,1]\), in which case it is bounded with norm \(\|m_{g}\| =\| g\|_{\infty }\), the following result comes out:

The sequence of functions \(\big\{\mathrm{e}^{-2\pi imx}K_{a}(x)\big\}_{m\in \mathbb{Z}}\) is a Riesz basis for L 2 [0,1] if and only if the inequalities \(0 <\| K_{a}\|_{0} \leq \| K_{a}\|_{\infty } < \infty \) hold, where \(\|K_{a}\|_{0}:=\mathop{ \mathrm{ess\,inf}}\limits _{x\in [0,1]}\vert K_{a}(x)\vert \) and \(\|K_{a}\|_{\infty }:=\mathop{ \mathrm{ess\,sup}}\limits _{x\in [0,1]}\vert K_{a}(x)\vert \). Moreover, its dual Riesz basis is \(\big\{\mathrm{e}^{-2\pi imx}/\overline{K_{a}(x)}\big\}_{m\in \mathbb{Z}}\).

In particular, the sequence \(\big\{\mathrm{e}^{-2\pi imx}K_{a}(x)\big\}_{m\in \mathbb{Z}}\) is an orthonormal basis in L 2[0, 1] if and only if | K a (x) | = 1 a.e. in [0, 1].

Let a be a real number in [0, 1) such that \(0 <\| K_{a}\|_{0} \leq \| K_{a}\|_{\infty } < \infty \). Any F ∈ L 2[0, 1] can be expanded as

$$\displaystyle{ F =\sum _{m\in \mathbb{Z}}\langle F,\mathrm{e}^{-2\pi imx}\,K_{ a}\rangle \frac{\mathrm{e}^{-2\pi imx}} {\overline{K_{a}}(x)} =\sum _{m\in \mathbb{Z}}f(a + m)\,\frac{\mathrm{e}^{-2\pi imx}} {\overline{K_{a}}(x)} \quad \mbox{ in $L^{2}[0,1]$}\,. }$$
(5.13)

Having in mind the shifting property of \(\mathcal{T}_{\varphi }\),

$$\displaystyle{\big\langle \mathrm{e}^{-2\pi imx}/\overline{K_{ a}}(x),K_{t}(x)\big\rangle _{L^{2}[0,1]} = \mathcal{T}_{\varphi }\big(\mathrm{e}^{-2\pi imx}/\overline{K_{ a}}(x)\big)(t) = S_{a}(t - m)\,,\quad t \in \mathbb{R}\,,}$$

where \(S_{a}:= \mathcal{T}_{\varphi }\big(1/\overline{K_{a}}\big) \in V _{\varphi }^{2}\). Thus, the isomorphism \(\mathcal{T}_{\varphi }\) acting in formula (5.13) gives the sampling result in V φ 2:

Any function f ∈ V φ 2 can be expanded as the sampling series

$$\displaystyle{ f(t) =\sum _{ n=-\infty }^{\infty }f(a + n)S_{ a}(t - n)\,,\quad t \in \mathbb{R}\,. }$$
(5.14)

The convergence of the series in (5.14) is absolute and uniform on \(\mathbb{R}\) since the function \(t\mapsto \|K_{t}\|^{2} =\sum _{n\in \mathbb{Z}}\vert \varphi (t - n)\vert ^{2}\) is bounded on \(\mathbb{R}\).

Some Examples Involving B-Splines

Consider the space V φ 2 for the generator φ: = N m , where N m is the B-spline of degree m − 1, i.e., \(N_{m}:= N_{1} {\ast} N_{1} {\ast}\cdots {\ast} N_{1}\) (m times), N 1 being the characteristic function \(\chi _{[0,1]}\) of the interval [0, 1]. It is known that the sequence \(\big\{N_{m}(t - n)\big\}_{n\in \mathbb{Z}}\) is a Riesz basis for \(V _{N_{m}}^{2}\) [8, p. 69]. For example, the following sampling formulas hold:

  1. 1.

    For the quadratic spline N 3, we have \(ZN_{3}(t,x) = \frac{t^{2}} {2} +\big [\frac{3} {4} - (t -\frac{1} {2})^{2}\big]z + \frac{(1-t)^{2}} {2} z^{2}\) where \(z = \mathrm{e}^{-2\pi ix}\). Thus, for t = 0 we have \(ZN_{3}(0,x) = \frac{z} {2}(1 + z)\) which vanishes at \(x = 1/2\). However, for \(t = 1/2\) we have \(ZN_{3}(1/2,x) = \frac{1} {8}(1 + 6z + z^{2})\); according to (5.12) we deduce \(0 <\| K_{1/2}\|_{0} \leq \| K_{1/2}\|_{\infty } < \infty \). Hence, for any \(f \in V _{N_{3}}^{2}\), we have

    $$\displaystyle{f(t) =\sum _{ n=-\infty }^{\infty }f(n + \frac{1} {2})\,S_{1/2}(t - n)\,,\quad t \in \mathbb{R}\,,}$$

    where \(S_{1/2}(t) = \sqrt{2}\sum _{n=-\infty }^{\infty }(2\sqrt{2} - 3)^{\vert n+1\vert }\ N_{3}(t - n)\). This function has been obtained from the Laurent expansion of the function \(8(1 + 6z + z^{2})^{-1}\) in the annulus \(3 - 2\sqrt{2} < \vert z\vert < 3 + 2\sqrt{2}\); a numerical check of these coefficients is sketched after this list.

  2. 2.

    Since \(ZN_{4}(0,x) = \frac{z} {6}\big(1 + 4z + z^{2}\big) = \frac{z} {6}(z-\lambda )(z - 1/\lambda )\) where \(z = \mathrm{e}^{-2\pi ix}\) and \(\lambda = \sqrt{3} - 2\), according to (5.12) we deduce that \(0 <\| K_{0}\|_{0} \leq \| K_{0}\|_{\infty } < \infty \). Thus, for any \(f \in V _{N_{4}}^{2}\) we have

    $$\displaystyle{f(t) =\sum _{ n=-\infty }^{\infty }f(n)\,S_{ 0}(t - n)\,,\quad t \in \mathbb{R}\,,}$$

    where \(S_{0}(t) = \sqrt{3}\sum _{n=-\infty }^{\infty }(-1)^{n}(2 -\sqrt{3})^{\vert n\vert }\ N_{4}(t - n + 2)\). To obtain the function S 0, we have used the Laurent expansion of the function \(6(z + 4z^{2} + z^{3})^{-1}\) in the annulus \(2 -\sqrt{3} < \vert z\vert < 2 + \sqrt{3}\).
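
As announced in example (1) above, the coefficients of S 1∕2 can be checked numerically: they are the Fourier coefficients of \(1/\overline{K_{1/2}(x)} = 8(1 + 6z + z^{2})^{-1}\), \(z = \mathrm{e}^{-2\pi ix}\), which can be computed with the FFT (the grid size below is an arbitrary choice).

```python
# Check the coefficients of S_{1/2} = sum_n c_n N_3(. - n) against the Laurent expansion.
import numpy as np

M = 4096
x = np.arange(M) / M
z = np.exp(-2j * np.pi * x)
g = 8.0 / (1.0 + 6.0 * z + z ** 2)           # 1 / conj(K_{1/2}(x))

# g(x) = sum_n c_n e^{-2*pi*i*n*x}  =>  c_n = (1/M) sum_j g(x_j) e^{+2*pi*i*n*j/M}
c = np.fft.ifft(g)
n = np.array([-3, -2, -1, 0, 1, 2, 3])
numeric = c[n % M]
claimed = np.sqrt(2) * (2 * np.sqrt(2) - 3.0) ** np.abs(n + 1)
print(np.max(np.abs(numeric - claimed)))     # ~ 1e-15
```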

Conclusion

In this introductory work, the basic sampling theory in an RKHS has been exhibited. The leitmotiv was the classical sampling theory in Paley–Wiener spaces, which includes Shannon’s well-known sampling theorem, and some of its generalizations, including shift-invariant spaces in \(L^{2}(\mathbb{R})\). In the literature one can find nice surveys [5, 19, 22, 27, 38] or books [20, 40] on this subject.

Although sampling theory is not exclusive to RKHSs [6, 20, 25, 26, 41], this is the setting where the theory becomes most natural. Besides, another important topic concerns sampling and interpolation in spaces of analytic functions, including, in particular, RKHSs of entire functions; see, for instance, [32] and the references therein.

The first sampling result in shift-invariant spaces was published in 1982 [37]; it was the beginning of a significant literature on sampling and reconstruction problems in spline spaces, wavelet spaces, and general shift-invariant spaces. Moreover, in many common situations, the available data are samples of some filtered (convolved) versions \(f {\ast} h_{j}\), \(j = 1,2,\ldots,s\), of the function f itself, where each averaging function h j reflects the characteristics of an acquisition device. This leads to generalized or average sampling in shift-invariant spaces; notice that derivative sampling in Paley–Wiener spaces is a particular case. See [13, 11, 14, 23, 34, 42] and the references therein.
