Abstract
In this paper, we discuss some numerical realizations of Shannon’s sampling theorem. First we show the poor convergence of classical Shannon sampling sums by presenting sharp upper and lower bounds on the norm of the Shannon sampling operator. In addition, it is known that in the presence of noise in the samples of a bandlimited function, the convergence of Shannon sampling series may even break down completely. To overcome these drawbacks, one can use oversampling and regularization with a convenient window function. Such a window function can be chosen either in frequency domain or in time domain. We especially put emphasis on the comparison of these two approaches in terms of error decay rates. It turns out that the best numerical results are obtained by oversampling and regularization in time domain using a \(\sinh \)-type window function or a continuous Kaiser–Bessel window function, which results in an interpolating approximation with localized sampling. Several numerical experiments illustrate the theoretical results.
1 Introduction
The classical Whittaker–Kotelnikov–Shannon sampling theorem (see [11, 25, 32]) plays a fundamental role in signal processing, since it describes the close relation between a bandlimited function and its equidistant samples. A function \(f \in L^2({\mathbb {R}})\) is called bandlimited with bandwidth \(\frac{N}{2}\), if the support of its Fourier transform
is contained in \(\big [- \tfrac{N}{2},\,\tfrac{N}{2}\big ]\). Let the space of all bandlimited functions with bandwidth \(\frac{N}{2}\) be denoted by
which is also known as the Paley–Wiener space. By definition (1.1), the Paley–Wiener space \({{\mathcal {B}}}_{N/2}(\mathbb {R})\) consists of equivalence classes of functions that are equal almost everywhere. However, it can be shown (see e.g. [10, p. 6]) that each class always contains a smooth representative, since for each \(r\in \mathbb {N}_0\) we have by the inverse Fourier transform that
and \((2\pi \mathrm i \,\cdot )^r \hat{f} \in L^1\big (\big [- \tfrac{N}{2},\,\tfrac{N}{2}\big ]\big )\) and therefore \(f^{(r)}\in C_0(\mathbb {R})\), where \(C_0({\mathbb {R}})\) denotes the Banach space of continuous functions \(f :\mathbb {R}\rightarrow \mathbb {C}\) vanishing as \(|t| \rightarrow \infty \) with the norm
That is to say, we have \({{\mathcal {B}}}_{N/2}(\mathbb {R}) \subseteq L^2(\mathbb {R}) \cap C_0(\mathbb {R}) \cap C^\infty (\mathbb {R})\).
Then the Whittaker–Kotelnikov–Shannon sampling theorem states that any bandlimited function \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) can be recovered from its equispaced samples \(f\big (\tfrac{k}{L}\big )\), \(k \in {\mathbb {Z}}\), with \(L \ge N\) as
where the \({\textrm{sinc}}\) function is given by
It is well known that the series in (1.2) converges absolutely and uniformly on all of \({\mathbb {R}}.\) Unfortunately, the practical use of this sampling theorem is limited, since it requires infinitely many samples, which are never available in practice. Furthermore, the \({\textrm{sinc}}\) function decays so slowly that the Shannon sampling series
with \(L \ge N\) has rather poor convergence, as can be seen from the sharp upper and lower bounds on the norm of the Shannon sampling operator (see Theorem 2.2). In addition, it is known (see [6]) that in the presence of noise in the samples \(f\big (\frac{k}{L}\big )\), \(k \in {\mathbb {Z}}\), of a bandlimited function \(f\in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\), the convergence of the Shannon sampling series may even break down completely. To overcome these drawbacks, many applications employ oversampling, i.e., a function \(f \in L^2({\mathbb {R}})\) of bandwidth \(\frac{N}{2}\) is sampled on a finer grid \(\frac{1}{L}\, {\mathbb {Z}}\) with \(L > N\), where the oversampling is measured by the oversampling parameter \(\lambda :=\frac{L-N}{N} > 0\). In addition, we consider several regularization techniques, where a so-called window function is used. Since this window function can be chosen in frequency domain or in time domain, we study both approaches and compare the theoretical and numerical approximation properties in terms of decay rates.
On the one hand, we investigate the regularization with a window function in frequency domain (called frequency window function), cf. e.g. [5, 15, 17, 24, 28]. Here we use a suitable function of the form
where \(\chi :\big [\tfrac{N}{2},\,\tfrac{L}{2}\big ] \rightarrow [0,\,1]\) is frequently chosen as some monotonically decreasing, continuous function with \(\chi \big (\frac{N}{2}\big ) = 1\) and \(\chi \big (\frac{L}{2}\big ) = 0\). Applying the inverse Fourier transform, we determine the corresponding function \(\psi \) in time domain. Since \(\hat{\psi }\) is compactly supported, the uncertainty principle (cf. [18, p. 103, Lemma 2.39]) yields \({\textrm{supp}}\,\psi = {\mathbb {R}}\). Then it is known that the function f can be represented in the form
Using uniform truncation, we approximate a function \(f \in {\mathcal {B}}_{N/2}({\mathbb {R}})\) by the T-th partial sum
On the other hand, we examine the regularization with a window function in time domain (called time window function), cf. e.g. [10, 13, 14, 21]. Here a suitable window function \(\varphi :{\mathbb {R}} \rightarrow [0,\,1]\) with compact support \(\big [- \frac{m}{L},\,\frac{m}{L}\big ]\) belongs to the set \(\Phi _{m/L}\) (as defined in Sect. 4) with some \(m \in {\mathbb {N}} {\setminus } \{1\}\). Then we recover a function \(f \in {\mathcal {B}}_{N/2}({\mathbb {R}})\) by the regularized Shannon sampling formula
with \(L \ge N\). Due to the definition of the set \(\Phi _{m/L}\) of window functions \(\varphi ,\) only small truncation parameters m are needed to achieve high accuracy, so that the resulting short sums can be computed very fast. In other words, this approach uses localized sampling. Moreover, this method is an interpolating approximation, since for all \(n,\,k \in {\mathbb {Z}}\) we have
In this paper we propose new estimates of the uniform approximation errors
where we apply several window functions \(\hat{\psi }\) and \(\varphi \). Note that for the frequency window functions \(\hat{\psi }\) only error estimates on a compact interval (such as \([-1,\,1]\)) can be given, while results on all of \({\mathbb {R}}\) are not feasible. It is shown in the subsequent sections that the uniform approximation error decays algebraically with respect to T, if \(\hat{\psi }\) is a frequency window function. If instead \(\varphi \in \Phi _{m/L}\) is chosen as a time window function such as the \(\sinh \)-type or continuous Kaiser–Bessel window function, then the uniform approximation error decays exponentially with respect to m.
This paper is organized as follows. First, in Sect. 2 we show the poor convergence of classical Shannon sampling sums and improve results on the upper and lower bounds of the norm of the Shannon sampling operator. Then we study the different regularization techniques. In Sect. 3 we start with the regularization using a frequency window function. After recapitulating a general result in Theorem 3.3, we consider window functions of different regularity and present the corresponding algebraic decay results in Theorems 3.4 and 3.7. Subsequently, in Sect. 4 we proceed with the regularization using a time window function. Here we also review the known general result in Theorem 4.1 and afterwards demonstrate the exponential decay of the considered \(\sinh \)-type and continuous Kaiser–Bessel window functions in Theorems 4.2 and 4.3. Finally, in Sect. 5 we compare the previously considered approaches from Sects. 3 and 4 to illustrate our theoretical results.
2 Poor convergence of Shannon sampling sums
In order to show that the Shannon sampling series (1.3) has rather poor convergence, we truncate the series (1.3) with \(T\in {\mathbb {N}}\), and consider the T-th Shannon sampling sum
Obviously, this operator is well-defined for each \(f \in C_0({\mathbb {R}})\).
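For illustration, the T-th Shannon sampling sum can be evaluated directly. The following Python sketch does so (the test function and the parameters are chosen purely for illustration); note that NumPy's `np.sinc` is the normalized sinc, so \({\textrm{sinc}}(L\pi t - \pi k)\) corresponds to `np.sinc(L*t - k)`.

```python
import numpy as np

def shannon_partial_sum(samples, L, t, T):
    """T-th Shannon sampling sum (S_T f)(t) = sum_{k=-T}^{T} f(k/L) sinc(L pi t - pi k).

    `samples(k)` returns the sample f(k/L); since np.sinc(x) = sin(pi x)/(pi x),
    sinc(L pi t - pi k) equals np.sinc(L t - k)."""
    k = np.arange(-T, T + 1)
    return np.sum(samples(k) * np.sinc(L * t - k))

# illustration: f(t) = sinc(pi t)^2 is bandlimited with bandwidth 1 (N = 2),
# so any L >= 2 is admissible
L = 2
f = lambda t: np.sinc(t) ** 2
approx = shannon_partial_sum(lambda k: f(k / L), L, t=0.3, T=1000)
exact = f(0.3)
```

Even for this well-behaved test function, many terms are needed, reflecting the slow decay of the \({\textrm{sinc}}\) kernel discussed above.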
Lemma 2.1
The linear operator \(S_T :\,C_0({\mathbb {R}}) \rightarrow C_0({\mathbb {R}})\) has the norm
Proof
For each \(f \in C_0({\mathbb {R}})\) and \(t\in {\mathbb {R}}\) we have
such that
By defining the even nonnegative function
which is contained in \(C_0({\mathbb {R}})\), and assuming that \(s_T\) has its maximum in \(t_0 \in {\mathbb {R}}\), this yields
Conversely, we consider the linear spline \(g\in C_0({\mathbb {R}})\) with nodes in \(\frac{1}{L}\,{\mathbb {Z}}\), where
Obviously, we have \(\Vert g\Vert _{C_0({\mathbb {R}})} = 1\) and
Then
implies
and hence (2.2). \(\square \)
Now we show that the norm \(\Vert S_T \Vert \) is unbounded with respect to T.
Theorem 2.2
The norm of the operator \(S_T :\,C_0({\mathbb {R}}) \rightarrow C_0({\mathbb {R}})\) can be estimated by
where we use Euler’s constant
Proof
As suggested in [27, p. 142, Problem 3.1.5], we represent (2.3) in the form
with
Since (2.3) is even, we estimate the maximum of \(s_T(t)\) only for \(t \ge 0\). For \(k= {1},\, \ldots ,\,T,\) we have \(a_k(0) = a_k\big (\frac{1}{L}\big )=0\) and by trigonometric identities we obtain for \(t \in \big (0,\, \tfrac{1}{L}\big )\) that
We define the functions \(b_k :\,(0,\,1) \rightarrow {\mathbb {R}}\), \(k = {1},\ldots ,T\), via
such that the symmetry relation \(b_k(x) = b_k(1-x)\) is fulfilled, i.e., each \(b_k\) is symmetric about \(\frac{1}{2}\). Furthermore, since \(b_k'(x) \ge 0\) for \(x \in {\big (}0,\,\frac{1}{2}\big ]\), the function \(b_k\) is non-decreasing on \({\big (}0,\,\frac{1}{2}\big ]\) and therefore has its maximum at \(x = \tfrac{1}{2}\). Thus, the function \(a_k :\,\big [0,\, \tfrac{1}{L}\big ]\rightarrow {\mathbb {R}}\) has its maximum at \(t = \tfrac{1}{2L}\), i.e.,
Since \(a_{T+1}(t)\) can be written as
for \(t\in \big [0,\, \tfrac{1}{L}\big ]\), we obtain
Hence, in summary this yields
For \(t\in \big [\tfrac{n}{L},\,\tfrac{n+1}{L}\big ]\) with arbitrary \(n \in {\mathbb {N}}\), the sum \(s_T(t)\) is less than it is for \(t\in \big [0,\, \tfrac{1}{L}\big ]\), since for each \(n\in {\mathbb {N}}\) and \(t\in \big (0, \, \tfrac{1}{L}\big )\) we have
and therefore \(s_T\big (\tfrac{n}{L} + t \big ) < s_T(t)\). Thus, we obtain
Note that for \(T \gg 1\) the value
is a good approximation of the norm \(\Vert S_T\Vert \).
Now we estimate \(\Vert S_T\Vert \) by \(\ln T\). For this purpose we denote the T-th harmonic number by
such that
Using Euler’s constant
the estimates
are known (see [33]). From (2.7) and (2.8) we conclude that
Therefore, applying (2.5), (2.7), and (2.9) yields the assertion (2.4). \(\square \)
Note that Theorem 2.2 improves a former result of [27, p. 142, Problem 3.1.5], which only contains a coarse proof sketch for the upper bound, whereas (2.4) gives a very tight two-sided bound, see also Fig. 1. Additionally, we remark that Theorem 2.2 immediately implies
Remark 2.3
By Theorem 2.2 and the theorem of Banach–Steinhaus, an arbitrary function \(f \in C_0(\mathbb {R})\) cannot be represented in the form (1.3), since the norm of the linear operator \(S_T :\,C_0(\mathbb {R}) \rightarrow C_0(\mathbb {R})\) is unbounded with respect to T. However, as known from the Whittaker–Kotelnikov–Shannon sampling theorem for bandlimited functions \(f \in {{\mathcal {B}}}_{N/2}(\mathbb {R})\), the series (1.3) converges absolutely and uniformly on all of \(\mathbb {R}\).
Nevertheless, this convergence is very slow due to the poor decay of the \({\textrm{sinc}}\) function, as can be seen from the sharp upper and lower bounds on the norm of the Shannon sampling operator, see Theorem 2.2. More precisely, a rigorous analysis of the approximation error, cf. [8, Theorem 1], shows that on the fixed interval \([-1,\,1]\) we have a convergence rate of only \((T-L)^{-1/2}\), as was also mentioned in [29, 34].
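The logarithmic growth of \(\Vert S_T\Vert \) established in Theorem 2.2 is easy to observe numerically. Although the explicit bounds (2.4) are not restated here, the proof locates the (approximate) maximum of \(s_T\) at \(t = \tfrac{1}{2L}\), where one computes \(s_T\big (\tfrac{1}{2L}\big ) = \tfrac{2}{\pi }\sum _{k=-T}^{T} \frac{1}{|2k-1|}\); hence the increment from T to 4T tends to \(\tfrac{2}{\pi }\ln 4\), consistent with the \(\ln T\) growth. A short sketch (parameters chosen for illustration):

```python
import numpy as np

def s_T_at_max(T, L=1.0):
    # s_T(t) = sum_{k=-T}^{T} |sinc(L pi t - pi k)|, evaluated at t = 1/(2L),
    # where the proof of Theorem 2.2 locates the maximum of s_T
    k = np.arange(-T, T + 1)
    t = 1.0 / (2.0 * L)
    return np.sum(np.abs(np.sinc(L * t - k)))

# the increment of s_T(1/(2L)) from T to 4T approaches (2/pi) * ln 4
increment = s_T_at_max(4000) - s_T_at_max(1000)
```

This confirms numerically that the operator norm grows like a multiple of \(\ln T\), i.e., it is unbounded with respect to T.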
Note that general results on all of \(\mathbb {R}\) are not feasible for arbitrary \(f \in {{\mathcal {B}}}_{N/2}(\mathbb {R})\). Rather, the uniform approximation error \(\Vert f - S_T f\Vert _{C_0(\mathbb {R})}\) can only be studied under the additional assumption that \(f \in {{\mathcal {B}}}_{N/2}(\mathbb {R})\) satisfies certain decay conditions, see [9, 12].
Now let \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) be a given bandlimited function with bandwidth \(\tfrac{N}{2}\) and let \(T \in {\mathbb {N}}\) be sufficiently large. For given samples \(f\big (\tfrac{k}{L}\big )\) with \(k\in {\mathbb {Z}}\) and \(L\ge N\) we consider finitely many erroneous samples
with error terms \(\varepsilon _k\) which are bounded by \(|\varepsilon _k| \le \varepsilon \) for \(k = -T,\, \ldots ,\,T\). Then for the approximation
the following error bounds can be shown.
Theorem 2.4
Let \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) be an arbitrary bandlimited function with bandwidth \(\tfrac{N}{2}\). Further let \(L \ge N,\) \(T \in {\mathbb {N}},\) and \(\varepsilon > 0\) be given. Then we have
Moreover, for the special error terms
we have
such that the Shannon sampling series is not numerically robust in a worst-case analysis.
Proof
By (2.3), (2.5) and (2.9) we obtain the upper error bound
Since
and \(\mu _T :=\frac{T+2}{T+1} = 1+\frac{1}{T+1}\) is monotonically decreasing with \(\displaystyle \max _{T\in {\mathbb {N}}} \mu _T = \mu _1 = \tfrac{3}{2}\), we have
which yields (2.10).
Next, we proceed with the lower bound. Due to the special choice of the error terms \(\varepsilon _k\) we obtain
By (2.6) and (2.9) we conclude that
such that (2.12) completes the proof. \(\square \)
Note that (2.10) makes a corresponding remark of [6, p. 681] precise, since it states the constant explicitly. In addition, we remark that the norm \(\Vert \tilde{f} - f \Vert _{C_0({\mathbb {R}})}\) does not depend on the special choice of the function f or the oversampling parameter \(\lambda ,\) see Fig. 1. Furthermore, Fig. 1 also illustrates that for \(T \rightarrow \infty \) the error behavior shown in Theorem 2.4 is not satisfactory. Thus, in the presence of noise in the samples \(f\big (\tfrac{{k}}{L}\big )\), \({k} \in \mathbb {Z}\), the convergence of the Shannon sampling series (1.3) may even break down completely.
Remark 2.5
In the above worst-case analysis we have seen that the approximation of \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) by the T-th partial sum (2.1) of its Shannon sampling series with \(L \ge N\) is not numerically robust in the deterministic sense. In contrast, a simple average-case study (see [31]) shows that this approximation is numerically robust in the stochastic sense. To see this, we write (2.1) as an inner product of the real \((2T+1)\)-dimensional vectors
Now assume that instead of the exact samples \(f\big (\frac{k}{L}\big )\), \(k = - T, \ldots ,T\), only perturbed samples \({\tilde{f}}\big (\frac{k}{L}\big ) = f\big (\frac{k}{L}\big ) + X_k\), \(k = - T, \ldots ,T\), are given, where the real random variables \(X_k\) are uncorrelated, each having expectation \({{\mathbb {E}}}(X_k) = 0\) and constant variance \({{\mathbb {V}}}(X_k) = {{\mathbb {E}}}(|X_k|^2) = \rho ^2\) with \(\rho > 0\). Then we consider the stochastic approximation error
Obviously, this error term \(\Delta _T\) has the expectation
and the variance
From [27, p. 89, Problem 1.10.9] it follows that
such that \({{\mathbb {V}}}(\Delta _T) \le \rho ^2\).
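The bound \({{\mathbb {V}}}(\Delta _T) \le \rho ^2\) rests on the fact that, for fixed t, the variance of \(\sum _{k=-T}^{T} X_k \,{\textrm{sinc}}(L\pi t - \pi k)\) with uncorrelated \(X_k\) equals \(\rho ^2 \sum _{k=-T}^{T} {\textrm{sinc}}^2(L\pi t - \pi k)\), and this sum of squared sinc translates is at most 1 (presumably the inequality quoted from [27] above). The following sketch, with parameters chosen for illustration, checks this numerically:

```python
import numpy as np

# Check numerically that sum_{k=-T}^{T} sinc(L pi t - pi k)^2 <= 1, which yields
# V(Delta_T) = rho^2 * sum_k sinc(L pi t - pi k)^2 <= rho^2 for uncorrelated noise.
L, T = 2, 200
t = np.linspace(-1.0, 1.0, 1001)                      # evaluation grid on [-1, 1]
k = np.arange(-T, T + 1)
factor = np.sum(np.sinc(L * t[:, None] - k[None, :]) ** 2, axis=1)
max_factor = float(factor.max())
```

On the grid points \(t \in \tfrac{1}{L}{\mathbb {Z}}\) the factor equals 1 (only one term survives), while it stays below 1 everywhere else, so the stochastic error never amplifies the noise level \(\rho \).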
3 Regularization with a frequency window function
To overcome the drawbacks of poor convergence and numerical instability, one can apply regularization with a convenient window function either in the frequency domain or in the time domain. Often one employs oversampling, i.e., a bandlimited function \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) of bandwidth \(\tfrac{N}{2}\) is sampled on a finer grid \(\frac{1}{L}\, {\mathbb {Z}}\) with \(L =N\,(1+\lambda )\), where \(\lambda >0\) is the oversampling parameter.
First, together with oversampling, we consider the regularization with a frequency window function of the form
cf. [5, 17, 28], where \(\chi :\big [\tfrac{N}{2},\,\tfrac{L}{2}\big ] \rightarrow [0,\,1]\) is frequently chosen as some monotonically decreasing, continuous function with \(\chi \big (\frac{N}{2}\big ) = 1\) and \(\chi \big (\frac{L}{2}\big ) = 0\). Applying the inverse Fourier transform, we determine the corresponding function in time domain as
Example 3.1
A simple example of a frequency window function is the linear frequency window function (cf. [5, pp. 18–19] or [17, pp. 210–212])
Note that in a trigonometric setting a function of the form (3.3) is also often referred to as a trapezoidal or de la Vallée Poussin type window function. Obviously, \({\hat{\psi }_{{\textrm{lin}}}}(v)\) is a continuous linear spline supported on \(\big [-\tfrac{L}{2},\,\tfrac{L}{2}\big ]\), see Fig. 2a. By (3.2) we obtain \(\psi _{{\textrm{lin}}}(0) = \tfrac{N+L}{2}\). For \(t\in {\mathbb {R}} {\setminus } \{0\}\) we have
This function \(\tfrac{1}{L}\, \psi _{{\textrm{lin}}}\) is even, supported on all of \({\mathbb {R}},\) and attains its maximum at \(t=0\) such that
In addition, \(\tfrac{1}{L}\, \psi _{{\textrm{lin}}}(t)\) has a faster decay than \({\textrm{sinc}}(N \pi t)\) for \(|t| \rightarrow \infty \), cf. Fig. 2b. Note that we have
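The explicit expression for \(\psi _{{\textrm{lin}}}\) is elided in this excerpt; however, since \(\hat{\psi }_{{\textrm{lin}}}\) is the trapezoid (3.3), its inverse Fourier transform (3.2) can be computed in closed form. The following sketch implements this closed form, which reproduces the stated value \(\psi _{{\textrm{lin}}}(0) = \tfrac{N+L}{2}\), and verifies it against a direct discretization of (3.2) (the parameters \(N,\,L,\,t_0\) are chosen for illustration only):

```python
import numpy as np

def psi_lin(t, N, L):
    """Inverse Fourier transform (3.2) of the trapezoidal window (3.3); the
    closed form (cos(pi N t) - cos(pi L t)) / ((L - N) pi^2 t^2) is consistent
    with the value psi_lin(0) = (N + L)/2 stated in the text."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.full_like(t, (N + L) / 2.0)
    nz = t != 0.0
    out[nz] = (np.cos(np.pi * N * t[nz]) - np.cos(np.pi * L * t[nz])) / (
        (L - N) * np.pi ** 2 * t[nz] ** 2)
    return out

# cross-check at t0 against a Riemann sum for the integral of
# hat(psi)_lin(v) * cos(2 pi v t0) over [-L/2, L/2]
N, L, t0 = 2.0, 3.0, 0.7
v = np.linspace(-L / 2, L / 2, 200001)
hat_psi = np.clip((L / 2 - np.abs(v)) / ((L - N) / 2), 0.0, 1.0)
numeric = np.sum(hat_psi * np.cos(2 * np.pi * v * t0)) * (v[1] - v[0])
```

The quadratic decay of this closed form in t is exactly the decay condition used in the proof of Theorem 3.4 below.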
Remark 3.2
Note that \(\Big \{\tfrac{1}{L}\,\psi _{{\text {lin}}}\big (\cdot \, - \tfrac{k}{L}\big )\Big \}_{k \in {\mathbb {Z}}}\) is a Bessel sequence in \(L^2({\mathbb {R}})\) with the bound \(\tfrac{1}{L}\), i.e., for all \(f \in L^2({\mathbb {R}})\) we have
However, \(\Big \{\tfrac{1}{L}\,\psi _{{\text {lin}}}\big (\cdot \, - \tfrac{k}{L}\big )\Big \}_{k \in {\mathbb {Z}}}\) is not an orthonormal sequence and also not a Riesz sequence. To see this, we consider the 1-periodic function
By (3.3) we have
i.e., \(\Psi (v) \le \frac{1}{L^2}\) for all \(v \in {\mathbb {R}}\) and \(\Psi (v) = 0\) for \(v \in \tfrac{1}{2} + {\mathbb {Z}}\). Then [4, Theorem 9.2.5] yields the result.
Analogous to [5, p. 19] and [17, Theorem 7.2.5], we obtain the following representation result, see also [28, p. 4].
Theorem 3.3
Let \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) be a bandlimited function with bandwidth \(\tfrac{N}{2},\) \(N \in \mathbb {N},\) and let \(L = N\,(1+\lambda )\) with \(\lambda > 0\) be given. Assume that the samples \(f\big (\frac{k}{L}\big ),\) \(k \in {\mathbb {Z}},\) fulfill the condition
Using oversampling and regularization with a frequency window function \(\hat{\psi }\) of the form (3.1), the function f can be represented as
where the series (3.6) converges absolutely and uniformly on \({\mathbb {R}}.\)
Proof
Since by assumption \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\), we have \({\textrm{supp}}\,\hat{f} \subseteq \big [- \tfrac{N}{2},\,\tfrac{N}{2}\big ] \subset \big [- \tfrac{L}{2},\,\tfrac{L}{2}\big ]\), and therefore the function \({{\hat{f}}}\) restricted to \(\big [- \tfrac{L}{2},\,\tfrac{L}{2}\big ]\) can be represented by its L-periodic Fourier series
with the Fourier coefficients
Using the inverse Fourier transform, we see that
Hence, we may write \(\hat{f}\) given by (3.7) as
By the Weierstrass M-test and assumption (3.5), the Fourier series (3.8) converges absolutely and uniformly on \(\big [- \tfrac{L}{2},\,\tfrac{L}{2}\big ]\). Additionally, we have \({\hat{\psi }}(v) = 1\) for \(v \in \big [- \tfrac{N}{2},\, \tfrac{N}{2}\big ]\) by (3.1) as well as \({\textrm{supp}}\,\hat{f} \subseteq \big [- \tfrac{N}{2},\,\tfrac{N}{2}\big ]\) by assumption, such that \({\hat{f}}(v) = {\hat{f}}(v)\,{\hat{\psi }}(v)\) for all \(v \in {\mathbb {R}}\). Therefore, we obtain
where summation and integration may be interchanged by the theorem of Fubini–Tonelli, since we have (3.5) and \(|\hat{\psi }(v)|\le 1\) by definition (3.1). Additionally, note that from (3.1) and (3.2) it follows that
Hence, we have \(\frac{1}{L}\,\Big | \psi \big (t - \frac{k}{L}\big )\Big | < 1\) for all \(t \in {\mathbb {R}}\) and \(k \in {\mathbb {Z}}\), and consequently the series (3.6) converges absolutely and uniformly on \({\mathbb {R}}\) by (3.5) and the Weierstrass M-test. \(\square \)
Note that (3.6) is not an interpolating approximation, since in general we have
Moreover, since the frequency window function \(\hat{\psi }\) in (3.1) is compactly supported, the uncertainty principle (cf. [18, p. 103, Lemma 2.39]) yields \({\textrm{supp}}\,\psi = {\mathbb {R}}\), such that (3.6) does not imply localized sampling for any choice of \(\hat{\psi }\). In other words, the representation (3.6) still requires infinitely many samples \(f\big (\tfrac{k}{L}\big )\). Thus, for practical realizations we need to consider a truncated version of (3.6) and hence for \(T\in {\mathbb {N}}\) we introduce the T-th partial sum
Then for the linear frequency window function (3.3) we show the following convergence result.
Theorem 3.4
Let \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) be a bandlimited function with bandwidth \(\tfrac{N}{2},\) \(N \in \mathbb {N},\) and let \(L = N\,(1+\lambda )\) with \(\lambda > 0\) be given. Assume that the samples \(f\big (\tfrac{k}{L}\big ),\) \(k \in {\mathbb {Z}},\) fulfill the condition (3.5). Using oversampling and regularization with the linear frequency window function (3.3), the T-th partial sums \(P_{{{\textrm{lin}}},T} f\) converge uniformly to f on \([-1,\,1]\) as \(T \rightarrow \infty \). For \(T > L\) the following estimate holds
Proof
such that the Cauchy–Schwarz inequality implies
For \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) the Parseval equality implies
see [10, Formula (3.8)], and thereby we have
It can easily be seen that (3.4) satisfies the decay condition
and thereby
Thus, for \(T > L\) and \(t \in [-1,\,1]\) we obtain
Using the integral test for convergence of series, we conclude
which yields
Therefore, (3.11), (3.12), and (3.13) imply the estimate (3.10). \(\square \)
Example 3.5
Next, we visualize the error bound of Theorem 3.4, i.e., for a given function \(f\in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) with \(L= N\,(1+\lambda )\), \(\lambda > 0\), we show that the approximation error satisfies (3.10). For this purpose, the error
is estimated by evaluating the given function f as well as its approximation \(P_{{\textrm{lin}},T} f\), cf. (3.9), at equidistant points \(t_s\in [-1,\,1]\), \(s=1,\ldots ,S\), with \(S=10^5\). Here we study the function \(f(t) = \sqrt{N} \,{\textrm{sinc}}(N \pi t)\), \(t \in {\mathbb {R}}\), such that \(\Vert f\Vert _{L^2({\mathbb {R}})}=1\). We fix \(N=128\) and consider the error behavior for increasing \(T \in {\mathbb {N}}\). More specifically, in this experiment we choose several oversampling parameters \(\lambda \in \{0.5,1,2\}\) and truncation parameters \(T = 2^c\) with \(c \in \{0,\ldots ,15\}\). The corresponding results are depicted in Fig. 3. Note that the error bound in (3.10) is only valid for \(T>L\). Therefore, we have additionally marked the point \(T=L\) for each \(\lambda \) as a vertical dash-dotted line. It can clearly be seen that the errors are much smaller once \(T>L\). Note, however, that increasing the oversampling parameter \(\lambda \) requires a much larger truncation parameter T to obtain errors of the same size.
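This experiment can be reproduced in a few lines. The sketch below uses a coarser evaluation grid (\(S = 201\)) and a single oversampling parameter \(\lambda = 1\) to keep the runtime small; the closed form of \(\psi _{{\textrm{lin}}}\) is reconstructed from (3.2)–(3.3), since its explicit expression is elided in this excerpt.

```python
import numpy as np

def psi_lin(t, N, L):
    # inverse Fourier transform of the trapezoidal window (3.3), cf. Example 3.1
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.full_like(t, (N + L) / 2.0)
    nz = t != 0.0
    out[nz] = (np.cos(np.pi * N * t[nz]) - np.cos(np.pi * L * t[nz])) / (
        (L - N) * np.pi ** 2 * t[nz] ** 2)
    return out

def P_lin(f, N, L, T, t):
    # T-th partial sum (3.9): sum_{k=-T}^{T} f(k/L) * (1/L) * psi_lin(t - k/L)
    k = np.arange(-T, T + 1)
    kernel = psi_lin(t[:, None] - k[None, :] / L, N, L)
    return (1.0 / L) * kernel @ f(k / L)

N = 128
L = 2 * N                                    # oversampling parameter lambda = 1
f = lambda t: np.sqrt(N) * np.sinc(N * t)    # the test function of Example 3.5
t = np.linspace(-1.0, 1.0, 201)
err = float(np.max(np.abs(f(t) - P_lin(f, N, L, 2048, t))))   # T = 2048 > L
```

With \(T = 2048 > L = 256\), the maximum error on \([-1,\,1]\) is already far below the function scale \(\sqrt{N} \approx 11.3\), in line with Fig. 3.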
In order to obtain convergence rates better than the one in Theorem 3.4, one may consider frequency window functions (3.1) of higher smoothness.
Example 3.6
Next, we construct a continuously differentiable frequency window function by polynomial interpolation. Since the frequency window function (3.1) is even, it suffices to prescribe interpolation conditions for \(\chi :\big [\tfrac{N}{2},\,\tfrac{L}{2}\big ] \rightarrow [0,\,1]\) at the interval endpoints \(\tfrac{N}{2}\) and \(\tfrac{L}{2}\). Clearly, the linear frequency window function \(\hat{\psi }_{{\textrm{lin}}}\) in (3.3) fulfills
Thus, to obtain a smoother frequency window function, we need to additionally satisfy the first order conditions
Then the corresponding interpolation polynomial yields the cubic frequency window function
see Fig. 4a. By (3.2) we see that the inverse Fourier transform of (3.15) is given by
for \(t\in \mathbb {R}{\setminus } \{0\}\) and \(\psi _{\textrm{cub}}(0) = \frac{L+N}{2}\), cf. Fig. 4b.
Analogous to Theorem 3.4, the following error estimate can be derived.
Theorem 3.7
Let \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) be a bandlimited function with bandwidth \(\tfrac{N}{2},\) \(N \in \mathbb {N},\) and let \(L = N\,(1+\lambda )\) with \(\lambda > 0\) be given. Assume that the samples \(f\big (\tfrac{k}{L}\big ),\) \(k \in {\mathbb {Z}},\) fulfill the condition (3.5). Using oversampling and regularization with the cubic frequency window function (3.15), the T-th partial sums \(P_{{\textrm{cub}},T} f\) converge uniformly to f on \([-1,\,1]\) as \(T \rightarrow \infty \). For \(T > L\) the following estimate holds
Example 3.8
Another continuously differentiable frequency window function is given in [24] as the raised cosine frequency window function
see Fig. 4a. By (3.2) the corresponding function in time domain can be determined as
and \(\psi _{\cos }\left( \pm \tfrac{1}{L-N}\right) = \tfrac{L-N}{4}\, \cos \big (\tfrac{N\pi }{L-N}\big )\), see Fig. 4b. Note that since \({\hat{\psi }_{\cos }}\) in (3.18) possesses the same regularity as \({\hat{\psi }_{\textrm{cub}}}\) in (3.15), both frequency window functions meet the same error bound (3.17), cf. Fig. 7.
Note that by (3.4) and the convolution property of the Fourier transform, for \(L > N\) the linear frequency window function (3.3) can be written as
Therefore, instead of determining smooth frequency window functions of the form (3.1) by means of interpolation as in Example 3.6, they can also be constructed by convolution, cf. [15].
Lemma 3.9
Let \(L>N\) be given. Assume that \(\rho :{\mathbb {R}} \rightarrow [0,\,\infty )\) is an even integrable function with \({\textrm{supp}}\,\rho = \big [-\frac{L-N}{4},\,\frac{L-N}{4}\big ]\) and \(\int _{{\mathbb {R}}} \rho (v)\,{\textrm{d}}v = 1.\) Then the convolution
is a frequency window function of the form (3.1).
Proof
By assumption we have
Since the convolution of two even functions is again an even function, it suffices to consider (3.21) only for \(v \ge 0\). For \(v \in \big [0,\,\frac{N}{2}\big ]\) we have \(v - \frac{L+N}{4}\le - \frac{L-N}{4} < \frac{L-N}{4} \le v + \frac{L+N}{4}\) and therefore
For \(v \in \big [\frac{N}{2},\, \frac{L}{2}\big ]\) we can write
such that \({\hat{\psi }}_{\textrm{conv}}\big (\frac{N}{2}\big ) = 1\), \({\hat{\psi }}_{\textrm{conv}}\big (\frac{L}{2}\big ) = 0\), and \({\hat{\psi }}_{\textrm{conv}} :\big [\frac{N}{2},\, \frac{L}{2}\big ] \rightarrow [0,\,1]\) is monotonically non-increasing, since \(\rho (w)\ge 0\) by assumption. For \(v \in \big [\frac{L}{2},\,\infty \big )\) we have \(v - \frac{L+N}{4} \ge \frac{L-N}{4}\), which, by the assumption \({\textrm{supp}}\,\rho = \big [-\frac{L-N}{4},\,\frac{L-N}{4}\big ]\), implies that
This completes the proof. \(\square \)
Given such a frequency window function \(\hat{\psi }_{\textrm{conv}}\) as in (3.20), its inverse Fourier transform (3.2) is known by the convolution property of the Fourier transform as
Thus, to obtain a suitable window function (3.20), we need to ensure that the inverse Fourier transform \(\check{\rho }\) of \(\rho \) is explicitly known. Since \(\rho \) is even by assumption, we have \(\check{\rho }= \hat{\rho }\) with
Note that the convolutional approach (3.20) has the substantial advantage that the smoothness of (3.20) is determined by the smoothness of the chosen function \(\rho .\)
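The construction of Lemma 3.9 is easy to verify numerically: discretely convolving the indicator function of \(\big [-\frac{L+N}{4},\,\frac{L+N}{4}\big ]\) with a normalized, even \(\rho \) supported on \(\big [-\frac{L-N}{4},\,\frac{L-N}{4}\big ]\) (here simply a box, i.e., the B-spline case \(n=1\)) yields a function that equals 1 on \(\big [-\frac{N}{2},\,\frac{N}{2}\big ]\), is non-increasing on \(\big [\frac{N}{2},\,\frac{L}{2}\big ]\), and vanishes beyond \(\frac{L}{2}\). The parameters below are for illustration only.

```python
import numpy as np

N, L = 2.0, 4.0
v = np.linspace(-L, L, 8001)
h = v[1] - v[0]
# indicator of [-(L+N)/4, (L+N)/4], cf. the proof of Lemma 3.9
ind = (np.abs(v) <= (L + N) / 4).astype(float)
# rho: even, nonnegative box supported on [-(L-N)/4, (L-N)/4] with integral 1
rho = np.where(np.abs(v) <= (L - N) / 4, 2.0 / (L - N), 0.0)
# discrete version of the convolution (3.20)
psi_hat = np.convolve(ind, rho, mode="same") * h

at = lambda x: float(psi_hat[np.argmin(np.abs(v - x))])
```

For this box-shaped \(\rho \) the result is, of course, just the trapezoid (3.3) again; replacing `rho` by a smoother kernel directly yields a smoother frequency window, which is precisely the advantage noted above.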
Remark 3.10
The frequency window functions \({\hat{\psi }}_\textrm{cub}\) in (3.15) and \({\hat{\psi }}_{\cos }\) in (3.18) lack a convolutional representation (3.20). Although the corresponding functions (3.16) and (3.19) in spatial domain are of the form (3.22), for both frequency windows the Fourier transform of the respective function \(\check{\rho }\) is only known in the sense of tempered distributions.
Example 3.11
For the special choice of \(\rho (v) =\frac{2 n}{L - N}\,M_n\big (\frac{2n}{L - N}\,v \big )\) with \(n \in {\mathbb {N}}\), where \(M_n\) is the centered cardinal B-spline of order n, we have
Using \(n=1\) this again yields (3.4), whereas for \(n=2\) we obtain
Note that the frequency window function \(\hat{\psi }_{{\textrm{conv}},2}\), cf. (3.23), possesses the same regularity as \(\hat{\psi }_{\textrm{cub}}\) in (3.15) and \(\hat{\psi }_{\textrm{cos}}\) in (3.18), and therefore they all meet the same error bound (3.17), cf. Fig. 7.
Example 3.12
In [15] the infinitely differentiable function
with the scaling factor
is considered. The corresponding frequency window function (3.20) is denoted by \(\hat{\psi }_{\infty }\). However, since for this function \(\rho _{\infty }\) the inverse Fourier transform \(\check{\rho }_{\infty }\) cannot be stated explicitly, the function (3.22) in time domain can only be approximated, which was done by a piecewise rational approximation \(\check{\rho }_{\textrm{rat}}\) in [15]. We remark that because of this additional approximation a numerical decay at the expected rate is doubtful, since the issue of robustness of the corresponding regularized Shannon series remains unclear. This effect can also be seen in Fig. 7, where the corresponding frequency window function (3.20), denoted by \(\hat{\psi }_{\textrm{rat}}\), shows similar behavior as the classical Shannon sampling sums (2.1).
The same comment also applies to [28], where an infinitely differentiable window function \(\hat{\psi }\) is sought as well. Since such a \(\hat{\psi }\) with explicit inverse Fourier transform (3.2) is not known, in [28] the function \(\psi \) is estimated by some Gabor approximation. Although an efficient computation scheme via fast Fourier transform (FFT) was introduced in [29], the numerical non-robustness caused by this approximation seems to be neglected in that work.
Finally, we remark that already in [5, p. 19] it was stated that a faster decay than for \(\hat{\psi }_{{\textrm{lin}}}\) from (3.3) can be obtained by choosing \(\hat{\psi }\) in (3.1) smoother, but at the price of a very large constant. This can also be seen in Fig. 7, where the results for the window functions \({\hat{\psi }_{\textrm{cub}}}\) in (3.15), \({\hat{\psi }_{\cos }}\) in (3.18), \({\hat{\psi }_{{\textrm{conv}},2}}\) in (3.23), and \(\hat{\psi }_{\textrm{rat}}\) from Example 3.12 are plotted as well. For this reason many authors restricted themselves to the linear frequency window function \(\hat{\psi }_{{\textrm{lin}}}\) in (3.3). Furthermore, the numerical results in Fig. 7 suggest that in practice only algebraic decay rates are achievable by regularization with a frequency window function.
4 Regularization with a time window function
In order to obtain better decay rates, we now consider a second regularization technique, namely regularization with a convenient window function in the time domain. To this end, for \(L > N\) and any \(m \in {{\mathbb {N}}} {\setminus } \{1\}\) with \(2m \ll L\), we introduce the set \(\Phi _{m/L}\) of all window functions \(\varphi :\,{\mathbb {R}} \rightarrow [0,\,1]\) with the following properties:
-
Each \(\varphi \in \Phi _{m/L}\) is supported on \(\big [-\frac{m}{L},\, \frac{m}{L}\big ]\). Further, \(\varphi \) is even and continuous on \(\big [-\frac{m}{L},\, \frac{m}{L}\big ]\).
-
Each \(\varphi \in \Phi _{m/L}\) restricted to \(\big [0,\,\frac{m}{L}\big ]\) is monotonically non-increasing with \(\varphi (0) = 1\).
As examples of such window functions we consider the \(\sinh \)-type window function
with \(\beta = \frac{\pi m\,(L-N)}{L}\), and the continuous Kaiser–Bessel window function
with \(\beta = \frac{\pi m\,(L-N)}{L}\), where \(I_0(x)\) denotes the modified Bessel function of the first kind given by
Both window functions are well-studied in the context of the nonequispaced fast Fourier transform (NFFT), see e.g. [19] and references therein.
A visualization of the continuous Kaiser–Bessel window function (4.2) as well as the corresponding regularization \({\xi }_{\textrm{cKB}}(t) :={\textrm{sinc}}(L\pi t) \, \varphi _{\textrm{cKB}}(t)\) of the \({\textrm{sinc}}\) function can be found in Fig. 5. We remark that in contrast to Fig. 2, here the function \({\xi }_{\textrm{cKB}}\) in time domain is compactly supported and its Fourier transform \({\hat{\xi }}_{\textrm{cKB}}\) is supported on all of \({\mathbb {R}},\) whereas for the frequency window function (3.3) it is the other way around (see [18, p. 103, Lemma 2.39]). Note that a visualization for the \(\sinh \)-type window function (4.1) would look the same as Fig. 5.
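The explicit formulas for (4.1) and (4.2) are not restated in this excerpt; the sketch below therefore uses the standard forms from the NFFT literature, \(\varphi _{\sinh }(t) = \sinh \big (\beta \sqrt{1-(Lt/m)^2}\big )/\sinh (\beta )\) and \(\varphi _{\textrm{cKB}}(t) = \big (I_0\big (\beta \sqrt{1-(Lt/m)^2}\big )-1\big )/(I_0(\beta )-1)\) on \(\big [-\frac{m}{L},\,\frac{m}{L}\big ]\), together with the power series for \(I_0\) quoted above, and verifies the defining properties of the set \(\Phi _{m/L}\).

```python
import numpy as np

def bessel_i0(x, terms=60):
    # modified Bessel function of the first kind, I_0(x) = sum_k (x/2)^{2k} / (k!)^2
    x = np.asarray(x, dtype=float)
    term = np.ones_like(x)
    out = np.ones_like(x)
    for k in range(1, terms):
        term = term * (x / 2.0) ** 2 / k ** 2   # ratio of consecutive series terms
        out = out + term
    return out

def window(t, N, L, m, kind="sinh"):
    # sinh-type (4.1) and continuous Kaiser-Bessel (4.2) window functions
    # in their standard NFFT forms, with beta = pi m (L - N) / L
    beta = np.pi * m * (L - N) / L
    t = np.atleast_1d(np.asarray(t, dtype=float))
    u = np.sqrt(np.maximum(1.0 - (L * t / m) ** 2, 0.0))
    if kind == "sinh":
        vals = np.sinh(beta * u) / np.sinh(beta)
    else:
        vals = (bessel_i0(beta * u) - 1.0) / (bessel_i0(np.array(beta)) - 1.0)
    return np.where(np.abs(t) <= m / L, vals, 0.0)
```

Both functions are even, equal 1 at \(t=0\), decrease monotonically on \(\big [0,\,\frac{m}{L}\big ]\), and vanish continuously at \(t = \pm \frac{m}{L}\), so they indeed belong to \(\Phi _{m/L}\).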
Then we approximate a bandlimited function \(f \in {{\mathcal {B}}}_{N/2}({{\mathbb {R}}})\) by the regularized Shannon sampling formula
with \(L \ge N\). Since \({\textrm{sinc}}(\pi (n-k)) = \delta _{n,k}\) for all \(n,\,k\in {\mathbb {Z}}\), with the Kronecker delta \(\delta _{n,k}\), and \(\varphi (0) = 1\) by assumption, this procedure is an interpolating approximation of f, because
Furthermore, the use of the compactly supported window function \(\varphi \in \Phi _{m/L}\) leads to localized sampling of the bandlimited function \(f \in {{\mathcal {B}}}_{N/2}({{\mathbb {R}}})\), i.e., the computation of \((R_{\varphi ,m}f)(t)\) for \(t \in {{\mathbb {R}}} {\setminus } \tfrac{1}{L}\,{{\mathbb {Z}}}\) requires only \(2m+1\) samples \(f\big (\frac{k}{L}\big )\), where \(k \in {{\mathbb {Z}}}\) fulfills the condition \(|k - L t| \le m\). Consequently, for given \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) and \(L \ge N\), the reconstruction of f on the interval \([-1,\,1]\) requires \(2m+ 2L+1\) samples \(f\big (\frac{k}{L}\big )\) with \(k = -m - L,\, \dots ,\,m + L\). In addition, we again employ oversampling of the bandlimited function \(f \in {{\mathcal {B}}}_{N/2}({{\mathbb {R}}})\), i.e., f is sampled on a finer grid \(\tfrac{1}{L}\,{{\mathbb {Z}}}\) with \(L >N\).
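A direct transcription of (4.3) makes the localized sampling explicit: only the \(2m+1\) samples with \(|k - Lt| \le m\) enter the sum. The Python sketch below uses the \(\sinh \)-type window (4.1) and an assumed bandlimited test function (not the test function of Section 5); note that NumPy's `sinc` uses the convention \(\mathrm{sinc}(u) = \sin(\pi u)/(\pi u)\).

```python
import numpy as np

def phi_sinh(x, m, L, N):
    """sinh-type window (4.1), supported on [-m/L, m/L]."""
    beta = np.pi * m * (L - N) / L
    u = np.clip(1.0 - (L * x / m) ** 2, 0.0, None)
    return np.where(np.abs(L * x / m) <= 1,
                    np.sinh(beta * np.sqrt(u)) / np.sinh(beta), 0.0)

def reg_shannon(f, t, m, L, N):
    """(R_{phi,m} f)(t) as in (4.3): a localized sum over |k - Lt| <= m."""
    k0 = int(np.floor(L * t))
    ks = np.arange(k0 - m, k0 + m + 2)   # superset of {k : |k - Lt| <= m};
    x = t - ks / L                       # the window kills the extra terms
    # np.sinc(L*x) equals sinc(L pi (t - k/L)) in the paper's convention
    return float(np.sum(f(ks / L) * np.sinc(L * x) * phi_sinh(x, m, L, N)))

# assumed example: f(t) = sinc(N pi t) is bandlimited with bandwidth N/2
N, lam, m = 256, 1.0, 5
L = int(N * (1 + lam))
f = lambda t: np.sinc(N * t)
err = max(abs(reg_shannon(f, t, m, L, N) - f(t)) for t in np.linspace(-1, 1, 1001))
```

On grid points \(t = n/L\) the sum reproduces \(f\big (\frac{n}{L}\big )\) exactly (the interpolation property above), while off the grid the uniform error is governed by Theorem 4.2 below.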
This concept of regularized Shannon sampling formulas with localized sampling and oversampling has already been studied by various authors. A survey of different approaches for window functions can be found in [22], while the prominent Gaussian window function was considered, e.g., in [13, 21, 23, 26, 30]. Since this Gaussian window function has also been studied in [10], where the superiority of the \(\sinh \)-type window function (4.1) was shown, we now focus on time window functions \(\varphi \in \Phi _{m/L}\), such as (4.1) and (4.2).
As in [10], for given \(f \in {{\mathcal {B}}}_{N/2}({{\mathbb {R}}})\) and \(\varphi \in \Phi _{m/L}\) the uniform approximation error \(\Vert f - R_{\varphi ,m}f \Vert _{C_0({{\mathbb {R}}})}\) of the regularized Shannon sampling formula can be estimated as follows.
Theorem 4.1
Let \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) be a bandlimited function with bandwidth \(\tfrac{N}{2},\) \(N \in \mathbb {N},\) and let \(L > N,\) and \(m \in {{\mathbb {N}}}{\setminus } \{1\}\) be given. Further let \(\varphi \in \Phi _{m/L}\). Then the regularized Shannon sampling formula (4.3) satisfies
with the corresponding error constants
Proof
For a proof of Theorem 4.1 see [10, Theorem 3.2]. \(\square \)
Note that it is especially beneficial for the estimation of the error constant (4.4) if the Fourier transform
of \(\varphi \in \Phi _{m/L}\) is explicitly known.
Now we specify the result of Theorem 4.1 for certain window functions. To this end, assume that \(f \in {{\mathcal {B}}}_{N/2}({{\mathbb {R}}})\) with \(N\in {\mathbb {N}}\) and \(L= N\,(1 + \lambda )\), \(\lambda > 0\). Additionally, we choose \(m \in {\mathbb {N}} {\setminus } \{1\}\). We demonstrate that for the window functions (4.1) and (4.2) the approximation error of the regularized Shannon sampling formula (4.3) decreases exponentially with respect to m. For the \(\sinh \)-type window function (4.1) the following result is already known.
Theorem 4.2
Let \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) be a bandlimited function with bandwidth \(\tfrac{N}{2},\) \(N \in \mathbb {N},\) and let \(L = N\,(1+\lambda )\) with \(\lambda > 0\) and \(m \in {\mathbb {N}} {\setminus } \{1\}\) be given.
Then the regularized Shannon sampling formula (4.3) with the \(\sinh \)-type window function (4.1) satisfies the error estimate
Proof
For a proof of Theorem 4.2 see [10, Theorem 6.1]. \(\square \)
Next, we continue with the continuous Kaiser–Bessel window function (4.2).
Theorem 4.3
Let \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) be a bandlimited function with bandwidth \(\tfrac{N}{2},\) \(N \in \mathbb {N},\) and let \(L = N\,(1+\lambda )\) with \(\lambda \ge \frac{1}{m-1}\) and \(m \in {\mathbb {N}} {\setminus } \{1\}\) be given.
Then the regularized Shannon sampling formula (4.3) with the continuous Kaiser–Bessel window function (4.2) satisfies the error estimate
Proof
By means of Theorem 4.1 we only have to compute the error constants (4.4) and (4.5). Note that (4.2) implies \(\varphi _{\textrm{cKB}}\big (\tfrac{m}{L}\big ) = 0\), such that the error constant (4.5) vanishes. For computing the error constant (4.4) we introduce the function \(\eta :\,\big [- \tfrac{N}{2},\,\tfrac{N}{2}\big ] \rightarrow {\mathbb {R}}\) given by
As known by [16, p. 3, 1.1, and p. 95, 18.31], the Fourier transform of (4.2) has the form
with the scaled frequency \(w = \frac{2 \pi m}{\beta L}\,v\). Thus, substituting \(w = \frac{2 \pi m}{\beta L}\,u\) in (4.9) yields
with the increasing linear function \(a(v) :=\tfrac{2 m \pi }{\beta L}\,\big (v + \tfrac{L}{2}\big )\). By the choice of the parameter \(\beta = \tfrac{m \pi \lambda }{1 + \lambda }\) with \(\lambda \ge \frac{1}{m-1}\) we have \(a\big (- \tfrac{N}{2}\big ) = 1\) and \(a(v) \ge 1\) for all \(v \in \big [-\tfrac{N}{2},\,\tfrac{N}{2}\big ]\). Using (4.10), we decompose \(\eta (v)\) in the form
with
By [7, 3.997–1] we have
where \({{\textbf {L}}}_0(x)\) denotes the modified Struve function given by (see [1, 12.2.1])
Note that the function \(I_0(x) - {{\textbf {L}}}_0(x)\) is completely monotonic on \([0,\,\infty )\) (see [3, Theorem 1]) and tends to zero as \(x\rightarrow \infty \). Applying the sine integral function
implies
Hence, we obtain
Note that for suitable \(\beta = \tfrac{m \pi \lambda }{1 + \lambda }\) we find \(\big [ I_0(\beta ) - {{\textbf {L}}}_0(\beta ) - 1 + \frac{2}{\pi }\,{\textrm{Si}}(\beta ) \big ] \in (0,\,1)\), cf. Fig. 6. In addition, it is known that \(I_0(x) \ge 1\), \(x\in \mathbb {R}\), such that \(\eta _1(v) > 0\).
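The claim that this bracket lies in \((0,\,1)\) for the relevant values of \(\beta \) (cf. Fig. 6) is easy to check numerically. A sketch using SciPy's implementations of \(I_0\), the modified Struve function \({{\textbf {L}}}_0\), and the sine integral \({\textrm{Si}}\); the tested values of \(\beta \) are illustrative choices, since \(\beta = \tfrac{\pi m \lambda }{1+\lambda } \ge \pi \) under the assumptions of Theorem 4.3.

```python
import numpy as np
from scipy.special import i0, modstruve, sici

def bracket(beta):
    """I_0(beta) - L_0(beta) - 1 + (2/pi) Si(beta), as in the proof above."""
    si, _ci = sici(beta)                      # sici returns (Si(x), Ci(x))
    return i0(beta) - modstruve(0, beta) - 1.0 + (2.0 / np.pi) * si

# for beta >= pi the value stays strictly between 0 and 1 (cf. Fig. 6)
vals = [bracket(b) for b in (np.pi, 1.5 * np.pi, 2.5 * np.pi)]
```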
Now we estimate \(\eta _2(v)\) for \(v \in \big [- \tfrac{N}{2},\,\tfrac{N}{2}\big ]\) by the triangle inequality as
By [20, Lemma 4] we have for \(|w| \ge 1\) that
Thus, we conclude
and using Fig. 6 we therefore obtain
By [2] the function \(\frac{I_0(x)}{{\textrm{e}}^x}\) is strictly decreasing on \([0,\infty )\) and tends to zero as \(x \rightarrow \infty \). Numerical experiments have shown that \(\frac{{\textrm{e}}^{x}}{x\,(I_0(x) - 1)}\) is strictly decreasing on \([\pi , \infty )\), too. By the assumption \(\lambda \ge \frac{1}{m-1}\) we have \(\beta = \frac{\pi m \lambda }{1 + \lambda } \ge \pi \) for all \(m \in {\mathbb {N}} {\setminus } \{1\}\). Hence, it follows that
and thus we conclude that
for all \(m \in {\mathbb {N}} {\setminus } \{1\}\). This completes the proof. \(\square \)
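The two monotonicity properties invoked at the end of the proof can likewise be checked numerically. The sketch below evaluates \(I_0(x)\,{\textrm{e}}^{-x}\) (the decreasing form of the first ratio, which tends to zero) and \({\textrm{e}}^{x}/\big (x\,(I_0(x)-1)\big )\) on \([\pi ,\,50]\); the interval endpoint 50 is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.special import i0

x = np.linspace(np.pi, 50.0, 2000)
g1 = i0(x) * np.exp(-x)               # strictly decreasing, tends to 0, cf. [2]
g2 = np.exp(x) / (x * (i0(x) - 1.0))  # observed to decrease strictly on [pi, inf)
```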
As seen in Theorem 2.4, if the samples \(f\big (\frac{k}{L}\big )\), \(k \in \mathbb {Z}\), of a bandlimited function \(f\in {{\mathcal {B}}}_{N/2}(\mathbb {R})\) are not known exactly, i.e., only erroneous samples \({\tilde{f}}_{k} :=f\big (\frac{k}{L}\big ) + \varepsilon _{k}\) with \(|\varepsilon _{k}| \le \varepsilon \), \(k\in \mathbb {Z}\), for some \(\varepsilon >0\) are known, the corresponding Shannon sampling series (1.3) may differ appreciably from f. Here we denote the regularized Shannon sampling formula with erroneous samples \({\tilde{f}}_{k}\) by
Then, in contrast to the Shannon sampling series (1.3), the regularized Shannon sampling formula (4.3) is numerically robust in the worst case analysis, i.e., the uniform perturbation error \(\Vert R_{\varphi ,m}{\tilde{f}} - R_{\varphi ,m}f \Vert _{C_0(\mathbb {R})}\) is small.
Theorem 4.4
Let \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) be a bandlimited function with bandwidth \(\tfrac{N}{2},\) \(N \in \mathbb {N},\) and let \(L > N\) and \(m \in {{\mathbb {N}}}{\setminus } \{1\}\) be given. Further let \(\varphi \in \Phi _{m/L}\) as well as \({\tilde{f}}_{k} = f\big (\frac{k}{L}\big ) + \varepsilon _{k},\) where \(|\varepsilon _{k}| \le \varepsilon \) for all \(k\in \mathbb {Z},\) with \(0<\varepsilon \ll 1\). Then the regularized Shannon sampling formula (4.3) satisfies
Proof
For a proof of Theorem 4.4 see [10, Theorem 3.4]. \(\square \)
Note that an explicitly known Fourier transform (4.6) of \(\varphi \in \Phi _{m/L}\) is again especially beneficial for obtaining explicit error estimates. In the following, we demonstrate that for the window functions (4.1) and (4.2) the perturbation error of the regularized Shannon sampling formula (4.3) only grows as \({\mathcal {O}}(\sqrt{m}).\)
Theorem 4.5
Let \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) be a bandlimited function with bandwidth \(\tfrac{N}{2},\) \(N \in \mathbb {N},\) and let \(L= N\,(1+\lambda )\) with \(\lambda > 0\) and \(m \in {{\mathbb {N}}}{\setminus } \{1\}\) be given. Further let \({\tilde{f}}_{k} = f\big (\frac{k}{L}\big ) + \varepsilon _{k},\) with \(|\varepsilon _{k}| \le \varepsilon \) for all \(k\in \mathbb {Z}\) and \(0 <\varepsilon \ll 1.\)
Then the regularized Shannon sampling formula (4.3) with the \(\sinh \)-type window function (4.1) satisfies the error estimate
Proof
For a proof of Theorem 4.5 see [10, Theorem 6.3]. \(\square \)
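A small experiment illustrates this numerical robustness: perturbing the samples by a uniform noise of size \(\varepsilon \) perturbs the output of (4.3) by at most a small multiple of \(\varepsilon \), in line with the \({\mathcal {O}}(\sqrt{m})\,\varepsilon \) growth above. The test function below is an assumed bandlimited example, not the test function of Section 5.

```python
import numpy as np

N, lam, m = 256, 1.0, 5
L = int(N * (1 + lam))
beta = np.pi * m * lam / (1 + lam)

def phi_sinh(x):
    """sinh-type window (4.1), supported on [-m/L, m/L]."""
    u = np.clip(1.0 - (L * x / m) ** 2, 0.0, None)
    return np.where(np.abs(L * x / m) <= 1,
                    np.sinh(beta * np.sqrt(u)) / np.sinh(beta), 0.0)

def R(sample, t):
    """Localized regularized Shannon sum (4.3) with sample values sample(ks)."""
    k0 = int(np.floor(L * t))
    ks = np.arange(k0 - m, k0 + m + 2)
    x = t - ks / L
    return float(np.sum(sample(ks) * np.sinc(L * x) * phi_sinh(x)))

rng = np.random.default_rng(1)
eps = 1e-3
exact = lambda ks: np.sinc(N * ks / L)   # assumed bandlimited example
offsets = {}                             # fixed noise value per sample index k
def noisy(ks):
    e = np.array([offsets.setdefault(int(k), rng.uniform(-1.0, 1.0)) for k in ks])
    return exact(ks) + eps * e

worst = max(abs(R(noisy, t) - R(exact, t)) for t in np.linspace(-1, 1, 400))
# worst stays on the order of eps; trivially worst <= (2m + 2) * eps here
```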
Theorem 4.6
Let \(f \in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) be a bandlimited function with bandwidth \(\tfrac{N}{2},\) \(N \in \mathbb {N},\) and let \(L= N\,(1+\lambda )\) with \(\lambda > 0\) and \(m \in {{\mathbb {N}}}{\setminus } \{1\}\) be given. Further let \({\tilde{f}}_{k} = f\big (\frac{k}{L}\big ) + \varepsilon _{k},\) with \(|\varepsilon _{k}| \le \varepsilon \) for all \(k\in \mathbb {Z}\) and \(0 <\varepsilon \ll 1\).
Then the regularized Shannon sampling formula (4.3) with the continuous Kaiser–Bessel window function (4.2) satisfies the error estimate
Proof
By Theorem 4.4 we have to compute \(\hat{\varphi }_{\textrm{cKB}}(0)\) for the continuous Kaiser–Bessel window function (4.2), which is given by (4.10) as
By [1, 9.7.1] we have
and hence
Moreover, for \(\beta > 0\) the term \(\frac{\sinh (\beta )-\beta }{\sqrt{\beta }\,\big (I_0(\beta ) - 1\big )}\) is monotonically increasing with
such that (4.11) and \(\beta = \frac{\pi m \lambda }{1+\lambda }\) yield the assertion (4.12). \(\square \)
5 Comparison of the two regularization methods
Finally, we compare the behavior of the regularization methods presented in Sects. 3 and 4 to the classical Shannon sampling sums (2.1). For a given function \(f\in {{\mathcal {B}}}_{N/2}({\mathbb {R}})\) and oversampling parameter \(L= N\,(1+\lambda )\), \(\lambda > 0\), we consider the approximation errors
for \(\psi \in \{\psi _{{\textrm{lin}}},\, \psi _{\textrm{cub}},\, \psi _{\cos }, \, {\psi }_{{\textrm{conv}},2},\, \psi _{\textrm{rat}}\}\), cf. (3.4), (3.16), (3.19), (3.23), and Example 3.12, as well as the corresponding error constants (3.10) and (3.17). In addition, we study the approximation error
with \(\varphi \in \{\varphi _{\sinh }, \,\varphi _{\textrm{cKB}}\}\), cf. (4.1) and (4.2), and the corresponding error constants (4.7) and (4.8). By the definition of the regularized Shannon sampling formula in (4.3) we have
with the regularized \({\textrm{sinc}}\) function
Thus, to compare (5.3) to \(S_{T} f\) in (2.1) and \(P_{\psi ,T} f\) in (3.9), we set \(T = L+m\), so that all approximations use the same number of samples. As in Example 3.5, the errors (5.1) and (5.2) are estimated by evaluating a given function f and its approximation at equidistant points \(t_s\in [-1,\,1]\), \(s=1,\ldots ,S,\) with \(S=10^5\). Analogous to [16, Section IV, C], we choose the function
with \(\Vert f\Vert _{L^2({\mathbb {R}})} = 1\). We fix \(N=256\) and consider several values of \(m \in {\mathbb {N}} {\setminus } \{1\}\) and \(\lambda >0\).
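The error measurement (5.2) can be sketched as follows; here we substitute an assumed bandlimited example for the paper's test function from [16, Section IV, C] and use a coarser evaluation grid than \(S = 10^5\), which already exhibits the exponential decay in m of the \(\sinh \)-type window.

```python
import numpy as np

N, lam = 256, 1.0
L = int(N * (1 + lam))
f = lambda t: np.sinc(N * t)   # assumed example with bandwidth N/2

def max_err(m):
    """Uniform error of (4.3) with the sinh-type window (4.1) on [-1, 1]."""
    beta = np.pi * m * lam / (1 + lam)
    def phi(x):
        u = np.clip(1.0 - (L * x / m) ** 2, 0.0, None)
        return np.where(np.abs(L * x / m) <= 1,
                        np.sinh(beta * np.sqrt(u)) / np.sinh(beta), 0.0)
    err = 0.0
    for t in np.linspace(-1.0, 1.0, 2001):
        k0 = int(np.floor(L * t))
        ks = np.arange(k0 - m, k0 + m + 2)
        x = t - ks / L
        approx = float(np.sum(f(ks / L) * np.sinc(L * x) * phi(x)))
        err = max(err, abs(approx - f(t)))
    return err

# error shrinks roughly like exp(-pi*m*lam/(1+lam)) as m grows
errs = [max_err(m) for m in (2, 4, 6)]
```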
The associated results are displayed in Fig. 7. We see that for all window functions the theoretical error behavior coincides very well with the numerical outcomes. In this regard, see also Table 1, which summarizes the theoretical results. Moreover, it can clearly be seen that for a larger oversampling parameter \(\lambda \) and a larger truncation parameter m, the errors of (4.3) become much smaller than those of (3.6), due to the exponential error decay rate shown for (4.3).
In other words, our numerical results show that regularization with a time window function performs much better than regularization with a frequency window function, since an exponential decay can (up to now) only be realized using a time window function. Furthermore, they highlight the great importance of an explicit representation of the Fourier transform of the regularizing window function, cf. Example 3.12.
Note that the code files for this and all the other experiments are available on https://github.com/melaniekircheis/On-numerical-realizations-of-Shannons-sampling-theorem.
In summary, we found that the regularized Shannon sampling formula with the \(\sinh \)-type time window function is the best of the considered methods, since this approach is the most accurate, easy to compute, robust in the worst case error, and requires less data (for comparable accuracy) than the classical Shannon sampling sums or the regularization with a frequency window function.
References
Abramowitz, M., Stegun, I.A. (eds.): Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover, New York (1972)
Baricz, Á.: Bounds for modified Bessel functions of the first and second kinds. Proc. Edinb. Math. Soc. 2(53), 575–599 (2010)
Baricz, Á., Pogány, T.K.: Functional inequalities for modified Struve functions II. Math. Inequal. Appl. 17, 1387–1398 (2014)
Christensen, O.: An Introduction to Frames and Riesz Bases, 2nd edn. Birkhäuser/Springer, Basel (2016)
Daubechies, I.: Ten Lectures on Wavelets. SIAM, Philadelphia (1992)
Daubechies, I., DeVore, R.: Approximating a bandlimited function using very coarsely quantized data: a family of stable sigma-delta modulators of arbitrary order. Ann. Math. 2(158), 679–710 (2003)
Gradshteyn, I.S., Ryzhik, I.M.: Table of Integrals, Series, and Products. Academic Press, New York (1980)
Jagerman, D.: Bounds for truncation error of the sampling expansion. SIAM J. Appl. Math. 14(4), 714–723 (1966)
Jingfan, L., Gensun, F.: On uniform truncation error bounds and aliasing error for multidimensional sampling expansion. Sampl. Theory Signal Image Process. 2(2), 103–115 (2003)
Kircheis, M., Potts, D., Tasche, M.: On regularized Shannon sampling formulas with localized sampling. Sampl. Theory Signal Process. Data Anal. 20(2), Paper No. 20, 34 pp. (2022)
Kotelnikov, V.A.: On the transmission capacity of the “ether” and wire in electrocommunications. In: Modern Sampling Theory: Mathematics and Application, pp. 27–45. Birkhäuser, Boston (2001). Translated from Russian
Li, X.M.: Uniform bounds for sampling expansions. J. Approx. Theory 93(1), 100–113 (1998)
Lin, R., Zhang, H.: Convergence analysis of the Gaussian regularized Shannon sampling formula. Numer. Funct. Anal. Optim. 38(2), 224–247 (2017)
Micchelli, C.A., Xu, Y., Zhang, H.: Optimal learning of bandlimited functions from localized sampling. J. Complex. 25(2), 85–114 (2009)
Natterer, F.: Efficient evaluation of oversampled functions. J. Comput. Appl. Math. 14(3), 303–309 (1986)
Oberhettinger, F.: Tables of Fourier Transforms and Fourier Transforms of Distributions. Springer, Berlin (1990)
Partington, J.R.: Interpolation, Identification, and Sampling. Clarendon Press, New York (1997)
Plonka, G., Potts, D., Steidl, G., Tasche, M.: Numerical Fourier Analysis, 2nd edn. Birkhäuser/Springer, Cham (2023)
Potts, D., Tasche, M.: Uniform error estimates for nonequispaced fast Fourier transforms. Sampl. Theory Signal Process. Data Anal. 19(2), Paper No. 17 (2021)
Potts, D., Tasche, M.: Continuous window functions for NFFT. Adv. Comput. Math. 47(2), Paper No. 53, 34 pp. (2021)
Qian, L.: On the regularized Whittaker–Kotelnikov–Shannon sampling formula. Proc. Am. Math. Soc. 131(4), 1169–1176 (2003)
Qian, L.: The regularized Whittaker–Kotelnikov–Shannon sampling theorem and its application to the numerical solutions of partial differential equations. PhD thesis, National Univ. Singapore (2004)
Qian, L., Creamer, D.B.: Localized sampling in the presence of noise. Appl. Math. Lett. 19, 351–355 (2006)
Rappaport, T.S.: Wireless Communications: Principles and Practice. Prentice Hall, New Jersey (1996)
Shannon, C.E.: Communication in the presence of noise. Proc. I.R.E. 37, 10–21 (1949)
Schmeisser, G., Stenger, F.: Sinc approximation with a Gaussian multiplier. Sampl. Theory Signal Image Process. 6(2), 199–221 (2007)
Stenger, F.: Numerical Methods Based on Sinc and Analytic Functions. Springer, New York (1993)
Strohmer, T., Tanner, J.: Implementations of Shannon’s sampling theorem, a time-frequency approach. Sampl. Theory Signal Image Process. 4(1), 1–17 (2005)
Strohmer, T., Tanner, J.: Fast reconstruction methods for bandlimited functions from periodic nonuniform sampling. SIAM J. Numer. Anal. 44(3), 1071–1094 (2006)
Tanaka, K., Sugihara, M., Murota, K.: Complex analytic approach to the sinc-Gauss sampling formula. Jpn. J. Ind. Appl. Math. 25, 209–231 (2008)
Tasche, M., Zeuner, H.: Worst and average case roundoff error analysis for FFT. BIT 41(3), 563–581 (2001)
Whittaker, E.T.: On the functions which are represented by the expansions of the interpolation theory. Proc. R. Soc. Edinb. 35, 181–194 (1915)
Young, R.M.: Euler’s constant. Math. Gaz. 75, 187–190 (1991)
Zayed, A.I.: Advances in Shannon’s Sampling Theory. CRC Press, Boca Raton (1993)
Acknowledgements
Melanie Kircheis gratefully acknowledges the support from the BMBF grant 01\(\mid \)S20053A (project SA\(\ell \)E). Moreover, the authors thank the referees and the editor for their very useful suggestions for improvements.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Additional information
Communicated by Martin Ehler.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Kircheis, M., Potts, D. & Tasche, M. On numerical realizations of Shannon’s sampling theorem. Sampl. Theory Signal Process. Data Anal. 22, 13 (2024). https://doi.org/10.1007/s43670-024-00087-9
Keywords
- Shannon sampling sums
- Whittaker–Kotelnikov–Shannon sampling theorem
- Bandlimited function
- Regularization with window function
- Regularized Shannon sampling formulas
- Error estimates
- Numerical robustness