
1 Introduction

Efficient data processing is essential in large-scale data applications, whether the phenomenon of interest is sound, heat, electrostatics, electrodynamics, fluid dynamics, elasticity, or quantum mechanics. The spatial and temporal distribution of such phenomena can be described in terms of PDEs. When solving a PDE of interest, we need to know the initial conditions, described by some function [5, 6, 8]. However, in real-life applications, full knowledge of the initial conditions is often impossible due to the unavailability of a large number of sensors [1, 7]. The way to overcome this impairment is to exploit the evolutionary nature of the sampling environment while working with a reduced number of sensors, i.e., to employ the concept of dynamical sampling [2, 3, 4].

The concept of dynamical sampling is beneficial in setups where the available sensing devices are limited due to access constraints. In such an undersampled case, we use the coarse system of sensors multiple times to compensate for the lack of samples at a single time instance. Our focus is on developing methods which efficiently approximate solutions of important PDEs by engaging the dynamical nature of the setup dictated by the initial conditions. We develop the theory and algorithms for a new sampling and approximation framework. This framework combines spatial samples of various states of approximations and eventually provides an exact reconstruction of the solution. We assume that the initial state of the solution is in a selected Sobolev class.

Recent results [7] show that only one sensor employed at a crucial location at multiple time instances leads to a sequence of approximate solutions, which converges to the exact solution of the heat equation:

$$\displaystyle \begin{aligned}u_t = u_{xx},\end{aligned}$$
$$\displaystyle \begin{aligned}u(0, t) = u(\pi, t) =0,\end{aligned}$$
$$\displaystyle \begin{aligned}u(x, 0) = f(x),\end{aligned} $$

under the assumption that the initial condition function f is in a compact class of Sobolev type. As a result, the sine basis decomposition coefficients of the initial function have controlled decay. We apply this approach to solve other PDEs, while using one spatial sensor multiple times for data collection: We use an appropriate basis decomposition, and work under the assumption that the basis decomposition coefficients of the initial state function have controlled decay. In other words, we assume that the initial state of the solution is in a selected Sobolev class.

2 Laplace Equation

We study the problem of solving an initial value problem (IVP) from discrete measurements made at appropriate instances/locations; thus, the initial conditions are not known in full detail. We aim to show that with a carefully selected placement and activation of the sensing devices, the unknown initial conditions can be completely determined by the discrete set of measurements; thus, the general solution to the IVP of interest is derived.

Under some initial and boundary conditions, the Laplace equation

$$\displaystyle \begin{aligned} u_{xx} + u_{yy} =0, \quad x \in [0, 1], \quad y \ge 0 \end{aligned} $$
(1)
$$\displaystyle \begin{aligned}u_x(0, y) = u_x(1, y) =0, \quad \lim_{y\to \infty} u(x, y) =0, \quad u(x, 0) = f(x),\end{aligned}$$

has a general solution

$$\displaystyle \begin{aligned} u(x,y) = \sum_{k=0}^\infty a_k \cos (k \pi x) \, e^{-k \pi y}, \quad \text{ where } \; a_k = 2 \int_0^1 f(x) \cos (k\pi x) \, dx. \end{aligned} $$
(2)

The solution to (1) is the steady-state temperature u(x, y) in the semi-infinite plate 0 ≤ x ≤ 1, y ≥ 0, under the assumptions that the left and right sides are insulated and that the solution is bounded. The temperature along the bottom side is a known function f(x).

In case the values f(x) are not fully known at all x ∈ [0, 1], we propose to take samples \(u_k := u(x_0, y_k)\), k ≥ 0, at an array of space-time locations \((x_0, y_k)\), such that \(|\cos {}(k\pi x_0)|\geq d_0 k^{-1}\) for some \(d_0 > 0\) and for all integers k ≠ 0. To ensure this condition, we choose α ∈ (0, 3∕2) so that

$$\displaystyle \begin{aligned}dist \left( \alpha, \left\{\frac{1}{2k}, \frac{3}{2k}, \ldots, \frac{2k+1}{2k} \,\right\}\right) \ge \frac{c_0}{k^2}, \, k =1, 2, \ldots,\end{aligned}$$

with c 0 an absolute constant. Then we have

$$\displaystyle \begin{aligned}dist \left( \alpha k \pi, \left\{\frac{\pi}{2}, \frac{3\pi}{2}, \ldots, \frac{ (2k+1) \pi }{2} \,\right\}\right) \ge \frac{c_0 \pi }{k}, \, k =1, 2, \ldots,\end{aligned}$$

We then take \(x_0 = \alpha\). We further assume that \(y_1 < y_2 < \ldots\). We work with \((c_k)_{k\geq 0}\) such that for some r > 0,

$$\displaystyle \begin{aligned} \sum c_k^2 k^{2r} \le 1 . \end{aligned} $$
(3)
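The sensor-placement condition \(|\cos {}(k\pi x_0)|\geq d_0 k^{-1}\) can be sanity-checked numerically for a candidate location. The sketch below is a finite scan, not a proof; the candidate \(x_0 = 1/\sqrt{2}\) is an illustrative assumption (quadratic irrationals are badly approximable, so \(k\,|\cos(k\pi x_0)|\) stays bounded away from zero), and all names are ours, not from the source.

```python
import math

def min_scaled_cos(x0, kmax):
    """Return min over 1 <= k <= kmax of k * |cos(k*pi*x0)|.

    If this minimum stays bounded away from 0, the candidate x0 satisfies
    |cos(k*pi*x0)| >= d0 / k on the scanned range, with d0 equal to the
    returned value."""
    return min(k * abs(math.cos(k * math.pi * x0)) for k in range(1, kmax + 1))

# Hypothetical candidate: x0 = 1/sqrt(2).  For this x0 one can check that
# k*|cos(k*pi*x0)| >= 1/(2*sqrt(2)+1) ~ 0.26 for all k, so the scan should
# stay above that level.
d0 = min_scaled_cos(1 / math.sqrt(2), 2000)
assert d0 > 0.25
```

By contrast, a rational location generically fails: for example, for \(x_0 = 1/2\) the cosine vanishes (up to rounding) at every odd k, so the scanned minimum collapses to zero.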

The function

$$\displaystyle \begin{aligned} F_0(z):=\sum_{k=0}^{\infty}c_k z ^{k } \end{aligned} $$
(4)

is an analytic function in the unit disk D = {z ∈ C : |z| < 1}, which is uniquely determined by the set of coefficients \((c_k)_{k\geq 0}\). Furthermore, for the choice of \(z = e^{-\pi y}\) and \(c_k = a_k \cos (k\pi x_0)\), k ≥ 0, we have: \(F_0(e^{-\pi y}) = u(x_0, y)\).

Note that the evaluations

$$\displaystyle \begin{aligned} F_0(z_k) = u_k, \; k\geq 0, \end{aligned} $$
(5)

where \(z_k = e^{-\pi y_k}\), fully determine the function F 0. If there were another analytic function on the open disk, G 0, which satisfied G 0(z k) = u k, k ≥ 0, then F 0 − G 0 would be an analytic function whose zeros z k accumulate at a point of D (for the choice below, z k → 0 as y k →∞); by the identity theorem, F 0 − G 0 must be the zero function. This implies that {u k|k = 0, 1, 2, ..} uniquely determines (2).

Next, we sample u(x, y) at locations \((x_0, y_k)\), k ≥ 0, where

$$\displaystyle \begin{aligned}y_0 >0, \; y_n = \rho^n y_0, \; n \geq 1,\end{aligned}$$

for some ρ > 2. The samples have an expansion

$$\displaystyle \begin{aligned} u_j= \sum_{k=0}^{\infty}c_k e^{-k\pi y_j } = \sum_{k=0}^{\infty}c_k e^{-k\pi \rho^j y_0 }, \;\; j=0,1,\ldots,n. \end{aligned} $$
(6)

Notice that by (6) it holds

$$\displaystyle \begin{aligned}c_0 = u_n - \sum_{k=1}^{\infty}c_k e^{-k\pi \rho^n y_0 },\end{aligned}$$
$$\displaystyle \begin{aligned}c_1 = u_{n-1} e^{\pi \rho^{n-1}y_0} - c_0 e^{\pi \rho^{n-1}y_0} - \sum_{k=2}^{\infty}c_k e^{-k\pi \rho^{n-1}y_0 },\end{aligned}$$
$$\displaystyle \begin{aligned}c_2 = u_{n-2} e^{2\pi \rho^{n-2}y_0} - c_0 e^{2\pi \rho^{n-2}y_0} - c_1 e^{\pi \rho^{n-2}y_0}- \sum_{j=3}^{\infty}c_j e^{-(j-2)\pi \rho^{n-2}y_0 },\end{aligned}$$
$$\displaystyle \begin{aligned}\ldots\end{aligned}$$
$$\displaystyle \begin{aligned}c_n = u_{0} e^{n\pi y_0} - c_0 e^{n\pi y_0} - c_1 e^{(n-1)\pi y_0}- \ldots - \sum_{j=n+1}^{\infty}c_j e^{-(j-n)\pi y_0 }.\end{aligned}$$

We take n + 1 samples, and aim at approximating the initial value f, and respectively the solution (2). We define

$$\displaystyle \begin{aligned}\bar{c}_0 := u_n,\end{aligned}$$
$$\displaystyle \begin{aligned}\bar{c}_1 := u_{n-1} e^{\pi \rho^{n-1}y_0} - \bar{c}_0 e^{\pi \rho^{n-1}y_0} ,\end{aligned}$$
$$\displaystyle \begin{aligned}\bar{c}_2 := u_{n-2} e^{2\pi \rho^{n-2}y_0} - \bar{c}_0 e^{2\pi \rho^{n-2}y_0} - \bar{c}_1 e^{\pi \rho^{n-2}y_0} ,\end{aligned}$$
$$\displaystyle \begin{aligned}\ldots\end{aligned}$$
$$\displaystyle \begin{aligned}\bar{c}_n := u_{0} e^{n\pi y_0} - \bar{c}_0 e^{n\pi y_0} - \bar{c}_1 e^{(n-1)\pi y_0}- \ldots - \bar{c}_{n-1} e^{\pi y_0}.\end{aligned}$$
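The triangular recursion above can be run numerically as a sanity check. The sketch below assumes a finite model (the coefficients \(c_k\) vanish beyond k = K, so the samples are exact) and illustrative parameter values; it then compares the recovery errors against the bound of Lemma 1 below.

```python
import math

def take_samples(c, y0, rho, n):
    """Samples u_j = u(x0, y_j) = sum_k c_k exp(-k*pi*y_j), with y_j = rho**j * y0."""
    ys = [rho**j * y0 for j in range(n + 1)]
    u = [sum(ck * math.exp(-k * math.pi * y) for k, ck in enumerate(c))
         for y in ys]
    return u, ys

def recover(u, ys, n):
    """Triangular recursion: cbar_k is built from the sample taken at y_{n-k}."""
    cbar = [u[n]]                                  # cbar_0 := u_n
    for k in range(1, n + 1):
        y = ys[n - k]
        s = u[n - k] * math.exp(k * math.pi * y)
        s -= sum(cbar[j] * math.exp((k - j) * math.pi * y) for j in range(k))
        cbar.append(s)
    return cbar

# Illustrative setup (assumed values): c_k = (k+1)**-2, so |c_k| <= k**-2
# (i.e. r = 2); y0 = 0.5, rho = 3 > 2, n + 1 = 4 samples, K = 8 modes.
y0, rho, n, K = 0.5, 3.0, 3, 8
c = [(k + 1) ** -2 for k in range(K + 1)]
u, ys = take_samples(c, y0, rho, n)
cbar = recover(u, ys, n)

# Lemma 1 bound: E_k <= 2**k * exp(-pi * rho**(n-k) * y0) / (1 - exp(-pi*y0)).
for k in range(n + 1):
    bound = 2**k * math.exp(-math.pi * rho**(n - k) * y0) / (1 - math.exp(-math.pi * y0))
    assert abs(cbar[k] - c[k]) <= bound
```

The check makes the trade-off in the bound visible: coefficients recovered from later samples (large y) are extremely accurate, while each further coefficient, recovered from an earlier sample, admits a roughly doubled error.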

For each j = 0, 1, …, n, we denote the error in recovering c j by \(E_j := |\bar { c}_j - c_j|.\) Since ρ > 2, \(|c_j| \leq j^{-r} \leq k^{-r}\) for j > k ≥ 1 (by (3)), and \(\frac {1}{1-e^{-\pi \rho ^n y_0 }} \leq \frac {1}{1-e^{-\pi y_0 }}\), we estimate

$$\displaystyle \begin{aligned}E_0 \leq \sum_{j=1}^{\infty}|c_j| e^{-j\pi \rho^n y_0 } \leq \sum_{j=1}^{\infty} e^{-j\pi \rho^n y_0 } = \frac{e^{- \pi \rho^n y_0 } }{1-e^{-\pi \rho^n y_0 } }\leq \frac{e^{- \pi \rho^n y_0 } }{1-e^{- \pi y_0 } }.\end{aligned}$$

Lemma 1

For each j ≥ 0, we have

$$\displaystyle \begin{aligned}E_j \leq 2^j \frac{e^{-\pi \rho^{n-j} y_0}}{1-e^{-\pi y_0}}.\end{aligned}$$

Proof

We use mathematical induction. The case j = 0 is exactly the estimate for E 0 obtained above. Suppose the claim holds true for all j ≤ k − 1 for some k ≥ 1. Then

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle E_{k} \\ &\displaystyle \leq &\displaystyle E_0 e^{\pi \rho^{n-k} y_0 k } + E_1 e^{\pi \rho^{n-k} y_0 (k-1) } + \ldots + E_{k-1} e^{\pi \rho^{n-k} y_0 } + \left(\frac{1}{k+1}\right)^r \frac{e^{-\pi \rho^{n-k} y_0}}{1-e^{-\pi y_0}} \\ &\displaystyle \leq &\displaystyle \sum_{j=0}^{k-1} 2^j \frac{e^{-\pi \rho^{n-j} y_0}}{1-e^{-\pi y_0}} e^{\pi \rho^{n-k} y_0 (k-j)} + \left(\frac{1}{k+1}\right)^r \frac{e^{-\pi \rho^{n-k} y_0}}{1-e^{-\pi y_0}} \\ &\displaystyle = &\displaystyle \frac{e^{-\pi \rho^{n-k} y_0}}{1-e^{-\pi y_0}} \left[ \sum_{j=0}^{k-1} 2^j e^{-\pi \rho^{n-j} y_0 + (k-j+1)\pi \rho^{n-k} y_0 } + \left(\frac{1}{k+1}\right)^r \right]. \end{array} \end{aligned} $$

For 0 ≤ j < k, the exponent satisfies

$$\displaystyle \begin{aligned} -\pi \rho^{n-j} y_0 + (k-j+1)\pi \rho^{n-k} y_0 = -\pi y_0 \rho^{n-k} \left(\rho^{k-j} -(k-j+1)\right) < 0, \end{aligned} $$
(7)

since ρ > 2 implies ρ m > m + 1 for every m ≥ 1: the case m = 1 is ρ > 2, and if ρ m > m + 1, then ρ m+1 > 2(m + 1) ≥ m + 2. Hence every exponential inside the brackets in the last line is at most 1, and, since (k + 1)−r ≤ 1,

$$\displaystyle \begin{aligned}\sum_{j=0}^{k-1} 2^j e^{-\pi \rho^{n-j} y_0 + (k-j+1)\pi \rho^{n-k} y_0 } + \left(\frac{1}{k+1}\right)^r \le \sum_{j=0}^{k-1} 2^j + 1 = 2^k,\end{aligned}$$

which gives \(E_k \leq 2^k \frac{e^{-\pi \rho^{n-k} y_0}}{1-e^{-\pi y_0}}\), as claimed.

We define an approximation \(F_n(x)\) to f(x) as

$$\displaystyle \begin{aligned}F_n (x) := \sum_{j=0}^m \frac{\bar{c}_j}{ \cos (j \pi x_0)} \cos (j \pi x), \quad m := \left\lceil \frac{n}{2} \right\rceil .\end{aligned}$$

Theorem 1

Given any fixed choice of y 0 > 0 and ρ > 2, let y k := ρ k y 0, k ≥ 1. Then for \( f \in \{ \sum_{k \ge 0} a_k \cos (k \pi x) \in L_2([0, 1]) \,:\, \sum _{k=1 }^\infty k^{2r} |a_k |{ }^2 \le 1 \},\) whenever

$$\displaystyle \begin{aligned}e^{-\pi y_k} \le 2^{-k} k^{-r-1},\end{aligned}$$

we have

$$\displaystyle \begin{aligned}\lim_{n\to \infty }\left\| f-F_n \right\| =0.\end{aligned}$$

Proof

By Lemma 1, the bound \(| \cos (j\pi x_0)| \ge d_0 (j+1)^{-1}\) for all j ≥ 0 (valid since d 0 ≤ 1), the assumption \(e^{-\pi y_m} \le 2^{-m} m^{-r-1}\), and \(\sum_{j=0}^m 4^j \le \frac{4}{3}\, 4^m\), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \left\|f-F_n\right\|{}^2 &\displaystyle \le&\displaystyle \sum_{j=0}^m \frac{ E_j^2 }{| \cos (j \pi x_0) |{}^2 } + \sum_{j=m+1}^\infty |a_j|{}^2 \\ &\displaystyle \le&\displaystyle \sum_{j=0}^m \left(\frac{j+1 }{d_0 }\right)^2 \left(2^j \frac{e^{-\pi \rho^{n-j} y_0}}{1-e^{-\pi y_0}} \right)^2 + m^{-2r} \\ &\displaystyle \le&\displaystyle \left(\frac{m+1 }{d_0 }\right)^2 \left(\frac{e^{-\pi y_m}}{1-e^{-\pi y_0}} \right)^2 \sum_{j=0}^m 2^{2j} + m^{-2r} \\ &\displaystyle \le &\displaystyle \left(\frac{ 16}{3 d_0^2 (1-e^{-\pi y_0})^2 } + 1\right) m^{-2r} \to 0 ,\\ \end{array} \end{aligned} $$

as n → ∞.
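The whole pipeline of this section (sample u(x 0, ·) along the geometric grid, run the triangular recursion for the \(\bar c_j\), divide by \(\cos(j\pi x_0)\) to assemble \(F_n\)) can be exercised numerically. The sketch below uses a finite model f with three cosine modes, so the samples are exact; the location \(x_0 = 1/\sqrt 2\) and the values \(y_0 = 0.05\), ρ = 3 are illustrative assumptions, and the coefficient-space ℓ 2 distance serves as a proxy for ‖f − F n‖:

```python
import math

def approx_coeffs(a, x0, y0, rho, n):
    """Sample u(x0, y_j), j = 0..n, then return abar_j ~ a_j for j = 0..ceil(n/2)."""
    c = [ak * math.cos(k * math.pi * x0) for k, ak in enumerate(a)]
    ys = [rho**j * y0 for j in range(n + 1)]
    u = [sum(ck * math.exp(-k * math.pi * y) for k, ck in enumerate(c)) for y in ys]
    cbar = [u[n]]                                  # triangular recursion, as above
    for k in range(1, n + 1):
        y = ys[n - k]
        s = u[n - k] * math.exp(k * math.pi * y)
        s -= sum(cbar[j] * math.exp((k - j) * math.pi * y) for j in range(k))
        cbar.append(s)
    m = math.ceil(n / 2)
    return [cbar[j] / math.cos(j * math.pi * x0) for j in range(m + 1)]

def coeff_error(a, abar):
    """l2 distance between the cosine coefficients of f and those of F_n."""
    head = sum((aj - bj) ** 2 for aj, bj in zip(a, abar))
    tail = sum(aj ** 2 for aj in a[len(abar):])
    return math.sqrt(head + tail)

# Assumed example: f has three cosine modes; parameters are illustrative.
a = [1.0, 0.125, 1.0 / 27.0]
x0, y0, rho = 1.0 / math.sqrt(2.0), 0.05, 3.0
err2 = coeff_error(a, approx_coeffs(a, x0, y0, rho, 2))
err4 = coeff_error(a, approx_coeffs(a, x0, y0, rho, 4))
assert err4 < 0.02 < err2
```

Going from n = 2 to n = 4 moves the cut-off m from 1 to 2, so the last mode of this particular f enters F n and the error drops by more than an order of magnitude.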

3 Variable Coefficient Wave Equation

In this section, we consider the following generalization of the wave equation:

$$\displaystyle \begin{aligned} u_{xx} +(1+t)^2u_{tt} +(1+t) u_t=0, \,\, t \ge 0, \end{aligned} $$
(9)

where x ∈ [0, π] and t ≥ 0. A simple calculation shows that the solution of this equation is

$$\displaystyle \begin{aligned}u(x,t) = \sum_{k\geq 1} a_k \sin{}(kx) \frac{1}{(1+t)^k} ,\end{aligned}$$

where \((a_k)_{k=1}^{\infty }\) are the Fourier sine coefficients of f(x) = u(x, 0). Thus, if the initial function f(x) = u(x, 0) is given, then we can obtain u = u(x, t). In case f(x) is not known at all x ∈ [0, π], we use later time samples, which are available at one fixed location \(x_0\) and at time instances

$$\displaystyle \begin{aligned} t_1<t_2<\ldots<t_s<\ldots \end{aligned}$$

to recover the initial datum f, and consequently u. To do this, we first choose \(x_0\) using a similar argument as in Sect. 2, so that we have \(|\sin {}(k x_0)|\geq d_0 k^{-1}\) for some \(d_0 > 0\) and for all k ≥ 1.

We note that the samples satisfy

$$\displaystyle \begin{aligned} u_s:= u(x_0, t_s) = \sum_{k\geq 1} a_k \sin{}(k x_0) \frac{1}{(1+t_s)^k} = \sum_{k\geq 1} c_k \frac{1}{(1+t_s)^k}, \end{aligned} $$
(10)

where \(c_k:= a_k \sin {}(k x_0)\). We further assume that \(\sum _k c_k^2 k^{2r} \leq 1\). We will impose conditions on the time instances employed so that we can construct an approximation of the initial datum and thus recover u(x, t). As we will see, the choice \(t_1=\rho >\sqrt {2}\) and \(t_k \geq \rho ^{2^{k-1}} -1\) for k ≥ 2 provides a good convergence rate. We set up the algorithm as follows:

$$\displaystyle \begin{aligned}\bar{c}_1 = u_n (1+t_n),\end{aligned}$$

and for 2 ≤ k ≤ n we set

$$\displaystyle \begin{aligned}\bar{c}_k = u_{n-k+1} (1+t_{n-k+1})^k - \sum_{j=1}^{k-1} \bar{c}_j \frac{ (1+t_{n-k+1})^k }{(1+t_{n-k+1})^j }.\end{aligned}$$
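This recursion can also be exercised numerically on a finite model (c k = 0 beyond k = K, so the samples are exact). The sketch below takes the growth condition with equality, \(1+t_s = \rho^{2^{s-1}}\) (a simplifying assumption that slightly differs from t 1 = ρ in the text), follows the convention that \(\bar c_k\) is built from the sample taken at time \(t_{n-k+1}\), and checks the recovered coefficients against the bound of Lemma 2 below; all parameter values are illustrative.

```python
import math

def wave_samples(c, rho, n):
    """u_s = sum_{k>=1} c_k (1+t_s)^{-k}, with 1 + t_s = rho**(2**(s-1)), s = 1..n."""
    ts = [rho ** (2 ** (s - 1)) - 1.0 for s in range(1, n + 1)]
    u = [sum(ck / (1.0 + t) ** k for k, ck in enumerate(c, start=1)) for t in ts]
    return u, ts

def wave_recover(u, ts, n):
    """cbar_k is built from the sample taken at t_{n-k+1} (1-based indexing)."""
    cbar = []
    for k in range(1, n + 1):
        t = ts[n - k]                    # t_{n-k+1} in the text's numbering
        s = u[n - k] * (1.0 + t) ** k
        s -= sum(cb * (1.0 + t) ** (k - j) for j, cb in enumerate(cbar, start=1))
        cbar.append(s)
    return cbar

# Illustrative setup (assumed values): c_k = (k+1)**-2 for k = 1..K, so
# |c_k| <= k**-2 (r = 2); rho = 1.5 > sqrt(2); n = 4 recovered coefficients.
rho, n, K = 1.5, 4, 10
c = [(k + 1) ** -2 for k in range(1, K + 1)]
u, ts = wave_samples(c, rho, n)
cbar = wave_recover(u, ts, n)

# Lemma 2 bound: E_k <= 2**(k-1) * A0 / (1 + t_{n-k+1}),
# with A0 = 2**-r / (1 - (1+t_1)**-1) and r = 2 here.
A0 = 0.25 / (1.0 - 1.0 / (1.0 + ts[0]))
for k in range(1, n + 1):
    assert abs(cbar[k - 1] - c[k - 1]) <= 2 ** (k - 1) * A0 / (1.0 + ts[n - k])
```

As in Sect. 2, the doubly exponential growth of the sampling times makes the first recovered coefficients very accurate, with a controlled (geometric) loss for each subsequent one.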

Lemma 2

For every n ≥ 1 and 1 ≤ k ≤ n, we have

$$\displaystyle \begin{aligned} E_k:=|c_k - \bar{c}_k| \leq 2^{k-1}A_0 \frac{1}{ 1+t_{n-k+1}}, \end{aligned} $$
(11)

where \(A_0 = 2^{-r} \frac {1}{1- (1+t_1)^{-1}}.\)

Proof

First, we note that for the choice of t k it holds:

$$\displaystyle \begin{aligned} \frac{1}{ 1+t_{n-j+1}} \; \frac{ (1+t_{n-k})^{k+1} }{(1+t_{n-k})^j } \leq \frac{1}{ 1+t_{n-k}} \; \text{ when } \; j\leq k. \end{aligned} $$
(12)

Then

$$\displaystyle \begin{aligned}E_1 \leq \sum_{j>1} |c_j| \frac{ 1+t_{n } }{(1+t_{n })^j } \leq 2^{-r} \sum_{j>1} (1+t_{n})^{-(j-1) }\end{aligned}$$
$$\displaystyle \begin{aligned}= 2^{-r}(1+t_{n })^{-1 } \frac{1}{1-(1+t_{n })^{-1 } }\leq 2^{-r}(1+t_{n })^{-1 } \frac{1}{1-(1+t_{1})^{-1 } }.\end{aligned}$$

Suppose that for every j ≤ k it holds \(E_j \leq 2^{j-1}A_0 \frac {1}{ 1+t_{n-j+1}}.\) Then

$$\displaystyle \begin{aligned}E_{k+1} \leq \sum_{j<k+1} E_j \frac{ (1+t_{n-k})^{k+1} }{(1+t_{n-k})^j } + \sum_{j>k+1} |c_j| \frac{ (1+t_{n-k})^{k+1} }{(1+t_{n-k})^j }\end{aligned}$$
$$\displaystyle \begin{aligned}\leq \sum_{j<k+1} 2^{j-1}A_0 \frac{1}{ 1+t_{n-j+1}}\frac{ (1+t_{n-k})^{k+1} }{(1+t_{n-k})^j } + \frac{1}{(k+1)^r}\sum_{i\geq 1} \frac{ 1 }{(1+t_{n-k})^{i }}.\end{aligned}$$

By (12), and since \(\frac{1}{1-(1+t_{n-k})^{-1 } } \leq \frac{1}{1-(1+t_{1})^{-1 } } = 2^r A_0\) with \((k+1)^{-r}\, 2^r \leq 1\) for k ≥ 1, it holds

$$\displaystyle \begin{aligned}E_{k+1} \leq \sum_{j<k+1} 2^{j-1}A_0 \frac{1}{ 1+t_{n-k}} + \frac{1}{(k+1)^r}\frac{ 1 }{ 1+t_{n-k}} \, 2^r A_0 \leq 2^k A_0 \frac{1}{ 1+t_{n-k}} .\end{aligned}$$

To simplify our calculations, we assume we always take n = 2m samples, and, in analogy with Sect. 2, define

$$\displaystyle \begin{aligned} F_n(x) := \sum_{k=1}^m \frac{\bar{c}_k}{\sin (k x_0)} \sin (kx). \end{aligned} $$
(13)

Theorem 2

Let \(t_1=\rho >\sqrt {2}\) and \(t_k \geq \rho ^{2^{k-1}} -1\) when k ≥ 2. Then, whenever \( f \in \{ \sum a_k \sin (k x) \in L_2([0, \pi]) \,:\, \sum _{k=1 }^\infty k^{2r} |a_k |{ }^2 \le 1 \},\) we have

$$\displaystyle \begin{aligned}\lim_{n\to \infty }\left\| f-F_n \right\| =0.\end{aligned}$$

Proof

By the decay assumption on \((c_k)_{k\geq 1}\), Lemma 2, and the choice of \(x_0\), we obtain

$$\displaystyle \begin{aligned}\Vert f - F_n\Vert_2 ^2 \leq \sum_{k=1}^m \frac{|c_k - \bar{c}_k |{}^2}{|\sin (k x_0)|^2} + \sum_{k>m } |a_k|{}^2 \leq \sum_{k=1}^m \left(\frac{k}{d_0}\right)^2 \left( 2^{k-1}A_0 \frac{1}{ 1+t_{n-k+1}} \right)^2 + \frac{1}{m^{2r}}.\end{aligned}$$

Since \(1 + t_{n-k+1} \geq \rho ^{2^{n-k}} \), for k = 1, 2, …, m we have

$$\displaystyle \begin{aligned}\frac{2^{k-1}}{ 1+t_{n-k+1}} \leq \frac{2^{k-1}}{ \rho^{2^{n-k}}} \leq \frac{2^{k-1}}{ \rho^{2^{m}}}.\end{aligned}$$

Thus