1 Introduction

The classical Sampling Theorem on the line has been extended to many different contexts. We refer the reader to [1, 3–8, 14–20, 25] for just a sampling of such results. There are also some related works on filter banks and wavelets on graphs, for example [12, 13, 23, 24]. In general there is a space X where functions are defined and a sampling set \(E\subseteq X\) where the values of a function are given. The problem is to recover the values of a function f on the whole space X in terms of its sample values on E, under the assumption that a suitable spectral decomposition of f is entirely contained in a suitable low frequency band. Many of these results are of an “overdetermined” type, meaning that there are implicit “consistency conditions” on the sample values \(f|_E\) if f is bandlimited, and the algorithm for recovering the function is not unique and often involves infinite series or iterations. The results in this paper are of a more precise nature, in that there are no restrictions on the sample values, and the reconstruction algorithm is unique (so the sampling set is also an interpolation set) and precisely given.

We consider a locally finite bipartite graph G, whose vertices split \(V=V_0\cup V_1\) (disjoint) into even \(V_0\) and odd \(V_1\) vertices, and all edges \(x\sim y\) connect one even vertex to one odd vertex. (It is not necessary to assume that G is connected; on the other hand the disconnected case reduces to the analogous results on each connected component.) To define a Laplacian on G we assign positive weights c(x, y) to all edges, set \(\mu (x)=\sum _{y\sim x} c(x,y)\) and

$$\begin{aligned} -\Delta f(x) = \frac{1}{\mu (x)} \sum _{y\sim x} c(x,y) (f(x)-f(y)) . \end{aligned}$$
(1.1)

For simplicity we discuss first the case when G is finite. Then \(-\Delta \) is a nonnegative self-adjoint operator with respect to the measure \(\mu \), so there exists an orthonormal basis of eigenfunctions \(u_k\) with eigenvalues \(\lambda _k\),

$$\begin{aligned} -\Delta u_k = \lambda _k u_k . \end{aligned}$$
(1.2)

In fact we have the estimate

$$\begin{aligned} 0\leqslant \lambda _k \leqslant 2 , \end{aligned}$$
(1.3)

and both endpoints occur, as 0 is the eigenvalue of the constant function and 2 is the eigenvalue of the function that is 1 on \(V_0\) and \(-1\) on \(V_1\). The midfrequency \(\lambda =1\) may or may not occur as an eigenvalue. If it does not then we easily define a half bandlimited function to be a function

$$\begin{aligned} f(x) = \sum _{\lambda _k<1} a_k u_k(x) \end{aligned}$$
(1.4)

that is expanded as a linear combination of low frequency eigenfunctions. Of course \(a_k=\langle f,u_k\rangle \) (the inner product with respect to the measure \(\mu \)) in (1.4), and (1.4) is equivalent to

$$\begin{aligned} \langle f,u_k\rangle = 0 \text { if }\lambda _k>1 . \end{aligned}$$
(1.5)

If there are eigenfunctions with \(\lambda _k=1\), we need to split up the 1-eigenspace into low frequency and high frequency parts. The details are given in Sect. 2. The main result proved in Sect. 2 is that a half bandlimited function is uniquely determined by its samples on \(V_0\), with no restrictions on the samples, and the reconstruction is given by an explicit formula involving a low-pass projection operator. We also obtain a relationship between the original spectral decomposition and the spectral decomposition of a Laplacian defined on \(V_0\), but this requires an additional symmetry hypothesis.

In Sect. 3 we discuss analogous results when G is an infinite graph. Here the spectral resolution of the Laplacian in general involves projection valued measures on a spectrum contained in [0, 2]. We make the additional assumption that it may be written in the form

$$\begin{aligned} f = \int _0^2 P_\lambda f\, \mathrm {d}\nu (\lambda ) \end{aligned}$$
(1.6)

where \(\nu \) is a finite positive measure supported on the spectrum and \(P_\lambda f\) is an eigenfunction,

$$\begin{aligned} -\Delta P_\lambda f = \lambda P_\lambda f . \end{aligned}$$
(1.7)

Here (1.6) holds for every \(f\in \ell ^2(\mu )\), but \(P_\lambda f\) is not assumed to be in \(\ell ^2(\mu )\). This form of the spectral resolution is known to hold for many interesting examples. Then we define f to be half bandlimited if the integral in (1.6) is limited to [0, 1]. Again, if \(\nu \) has a point mass at \(\lambda =1\) we have to split the 1-eigenspace. Then again a half bandlimited function is uniquely determined from its samples on \(V_0\) via an explicit formula.

While none of the above results are surprising or difficult, they set the stage for the discussion of a nontrivial example in Sect. 4. We denote by \(T_q\) the connected \((q+1)\)-regular tree, equipped with its completely symmetric Laplacian. The spectral resolution of this Laplacian in the form (1.6) was given explicitly by Cartier [2]. An exposition may be found in [9], and generalizations to other Laplacians may be found in [10]. A related “Plancherel formula” in the case \(q=2\) is given in [22].

It turns out that the spectrum is the smaller interval \(\left[ 1-\frac{2\sqrt{q}}{q+1},1+\frac{2\sqrt{q}}{q+1}\right] \), and \(\nu \) is an explicit absolutely continuous measure. The spectral resolution \(P_\lambda f\) is given by a spherical convolution operator

$$\begin{aligned} P_\lambda f(x) = \sum _y \varphi _\lambda (d(x,y)) f(y) \end{aligned}$$
(1.8)

for an explicit spherical function \(\varphi _\lambda \) (here d(x, y) denotes the graph distance). The spherical function just fails to be in \(\ell ^2(G)\). We give here a simple proof of the spectral resolution based on the Cauchy integral formula. Using the same reasoning we obtain an explicit formula for the low-pass operator that gives the reconstruction formula. It is also a spherical convolution operator with a kernel that is now just barely in \(\ell ^2(G)\).

In Sect. 5 we discuss a related sampling problem on graphs \(G_H\) that are generated by edge substitution. That is, we are given graphs G and H, with Laplacians, and produce \(G_H\) by substituting a copy of H for each edge of G. Under some minimal hypotheses on H, we show how to define low-pass bandlimited functions on \(G_H\) (in terms of a natural Laplacian) so that every such function can be reconstructed from its samples on G, and all functions on G arise in this manner. The reconstruction algorithm is given explicitly, but it is not as simple as in the half sampling problem.

The results in this paper are motivated by pure mathematical curiosity. We are not aware of any potential applications, but as usual this does not preclude the possibility of future applications.

2 Finite Graphs

For simplicity we first describe the case of finite bipartite graphs G. We assume we are given a set of positive weights c(x, y) on the edges of G, so that \(c(x,y)=c(y,x)>0\) if \(x\sim y\) and \(c(x,y)=0\) otherwise. Define

$$\begin{aligned} \mu (x) = \sum _j c(x,y_j) \end{aligned}$$
(2.1)

where \(\{y_j\}\) is the set of neighboring vertices of x. We consider \(\mu \) as a measure on the vertices V of G. Then the Laplacian is defined by

$$\begin{aligned} -\Delta f(x)&= \frac{1}{\mu (x)} \sum _j c(x,y_j)(f(x)-f(y_j)) \nonumber \\&= f(x) - \sum _j \frac{c(x,y_j)}{\mu (x)} f(y_j) . \end{aligned}$$
(2.2)

Since \(-\Delta \) is a nonnegative self-adjoint operator with respect to the measure \(\mu \), there exists an orthonormal basis \(\{u_k\}\) of eigenfunctions, with

$$\begin{aligned} -\Delta u_k = \lambda _k u_k . \end{aligned}$$
(2.3)

We order the eigenvalues in increasing order, \(0=\lambda _1 \leqslant \lambda _2 \leqslant \cdots \leqslant \lambda _N\leqslant 2\), with \(N=\# V\). Note that \(u_1\) is constant, and if we assume that G is connected then \(\lambda _2>0\).

We define the twist operator T by

$$\begin{aligned} T f(x) = (-1)^{p(x)} f(x) \end{aligned}$$
(2.4)

where p(x) is the parity function

$$\begin{aligned} p(x) = {\left\{ \begin{array}{ll} 0 &{} \text {if }x\in V_0 \\ 1 &{} \text {if }x\in V_1 \\ \end{array}\right. } . \end{aligned}$$

Lemma 1

If u is an eigenfunction with eigenvalue \(\lambda \), then T u is an eigenfunction with eigenvalue \(2-\lambda \).

Proof

If \(x\in V_0\) then all \(y_j\in V_1\), so

$$\begin{aligned} -\Delta T u(x)&= u(x) + \sum _j \frac{c(x,y_j)}{\mu (x)} u(y_j) \\&= 2u(x) + \Delta u(x) \\&= (2-\lambda ) u(x) \\&= (2-\lambda ) T u(x) . \end{aligned}$$

A similar computation works if \(x\in V_1\). \(\square \)

We may divide up the spectrum into low frequency eigenvalues with \(\lambda _j<1\) and high frequency eigenvalues with \(\lambda _j>1\), and these are interchanged by the twist operator. Thus

$$\begin{aligned} \lambda _{N-j+1} = 2-\lambda _j \end{aligned}$$
(2.5)

and we may redefine the eigenfunctions so that

$$\begin{aligned} u_{N-j+1} = T u_j \end{aligned}$$
(2.6)

for low frequency eigenfunctions. Of course it may happen (and often does) that there are midfrequency eigenvalues with \(\lambda _j=1\). If we denote this midfrequency eigenspace by \(E_1\) then the twist operator maps \(E_1\) to itself.
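As a quick numerical illustration (a minimal numpy sketch; the random complete bipartite graph and its weights are our own choices and not part of the text), one can assemble the Laplacian (2.2) for a small weighted bipartite graph and check that its spectrum lies in [0, 2] and is symmetric about 1, as Lemma 1 and (2.5) predict.

```python
# Sketch: the spectrum of the weighted bipartite Laplacian (2.2) lies in [0,2]
# and is symmetric about 1 (Lemma 1, (2.5)).
import numpy as np

rng = np.random.default_rng(0)
n0, n1 = 4, 3                         # sizes of V_0 and V_1
C = rng.uniform(0.5, 2.0, (n0, n1))   # positive conductances c(x,y), x in V_0, y in V_1
c = np.block([[np.zeros((n0, n0)), C], [C.T, np.zeros((n1, n1))]])
mu = c.sum(axis=1)                    # mu(x) = sum_{y ~ x} c(x,y)

# -Delta is similar to the symmetric matrix I - D^{-1/2} c D^{-1/2}, D = diag(mu)
S = np.eye(n0 + n1) - c / np.sqrt(np.outer(mu, mu))
lam = np.sort(np.linalg.eigvalsh(S))

print(np.round(lam, 4))                                # eigenvalues in increasing order
print(np.allclose(lam + lam[::-1], 2.0))               # pairing lambda <-> 2 - lambda
print(lam.min() >= -1e-12 and lam.max() <= 2 + 1e-12)  # spectrum contained in [0,2]
```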

Lemma 2

\(E_1=E_1^+\oplus E_1^-\) where the functions in \(E_1^+\) are supported in \(V_0\) and the functions in \(E_1^-\) are supported in \(V_1\). Moreover

$$\begin{aligned} \dim E_1^+-\dim E_1^- = \# V_0-\# V_1 . \end{aligned}$$
(2.7)

Proof

Since \(T^2=I\) on \(E_1\), and T is self-adjoint, \(E_1\) splits into an orthogonal sum of eigenspaces of T with eigenvalues \(\pm 1\), and these are exactly \(E_1^+\) and \(E_1^-\), since \(T u=u\) if and only if u is supported in \(V_0\), and \(T u = - u\) if and only if u is supported in \(V_1\).

To prove (2.7) we note that

$$\begin{aligned} \dim E_1^+ + M&= \# V_0 \text { and} \\ \dim E_1^- + M&= \# V_1 \end{aligned}$$

where M denotes the number of low frequency eigenvalues. Indeed, the restrictions to \(V_0\) of the M low frequency eigenfunctions together with the \(E_1^+\) eigenfunctions span the functions on \(V_0\), since the high frequency eigenfunctions agree on \(V_0\) with the corresponding low frequency ones (as \(T u=u\) there) and the \(E_1^-\) eigenfunctions vanish on \(V_0\); thus \(\dim E_1^++M\geqslant \# V_0\), and similarly \(\dim E_1^-+M\geqslant \# V_1\). Adding these and comparing with \(2M+\dim E_1=\# V\) shows that both are equalities. \(\square \)

We choose the orthonormal basis of \(E_1\) so that the \(E_1^+\) eigenfunctions come before the \(E_1^-\) eigenfunctions.

Definition 1

  1. (a)

    A function is said to be half-bandlimited if it is a linear combination of low frequency eigenfunctions and midfrequency eigenfunctions in \(E_1^+\). Specifically,

    $$\begin{aligned} f = \sum _{k=1}^{\# V_0} a_k u_k . \end{aligned}$$
    (2.8)
  2. (b)

    The half-sampling operator H f is defined to be

    $$\begin{aligned} H f(x) = {\left\{ \begin{array}{ll} f(x) &{} \,\,\text {for }x\in V_0 \\ 0 &{} \text { for }x\in V_1 \end{array}\right. }, \end{aligned}$$
    (2.9)

    so that \(H=\frac{1}{2} (I+T)\). Note that we could also have defined an operator \(\widetilde{H}\) from functions on V to functions on \(V_0\) by restriction. Clearly H and \(\widetilde{H}\) contain the same information.

  3. (c)

    We define the low-pass operator L by

    $$\begin{aligned} L\left( \sum _{k=1}^{\# V} a_k u_k\right) = \sum _{k=1}^{\# V_0} a_k u_k . \end{aligned}$$
    (2.10)

    Note that L is given explicitly as

    $$\begin{aligned} L f(x)&= \sum _{y\in V} \mathscr {L}(x,y) f(y) \mu (y) \quad \text {for}\end{aligned}$$
    (2.11)
    $$\begin{aligned} \mathscr {L}(x,y)&= \sum _{k=1}^{\# V_0} u_k(x) u_k(y) . \end{aligned}$$
    (2.12)

Theorem 1

A half-bandlimited function f is uniquely determined from its half-samples \(\widetilde{H} f\), and every function on \(V_0\) arises as \(\widetilde{H} f\) for some unique half-bandlimited function f. In other words, \(V_0\) is both a sampling set and an interpolation set for the space of half-bandlimited functions. Explicitly

$$\begin{aligned} f(x)&= \sum _{y\in V_0} K(x,y) H f(y) \mu (y) \quad \text {for} \end{aligned}$$
(2.13)
$$\begin{aligned} K(x,y)&= 2 \sum _{k=1}^M u_k(x) u_k(y) + \sum _{k=M+1}^{\# V_0} u_k(x) u_k(y) . \end{aligned}$$
(2.14)

Proof

Suppose \(f=u_k\) for \(1\leqslant k\leqslant \# V_0\). Then \(H u_k = \frac{1}{2} u_k + \frac{1}{2} T u_k\). If \(1\leqslant k\leqslant M\) so that \(u_k\) is low frequency then \(T u_k\) is high frequency so \(L T u_k=0\) and \(L u_k = u_k\). Thus \(L H u_k = \frac{1}{2} u_k\). On the other hand, if \(M+1\leqslant k\leqslant \# V_0\) so that \(u_k\) is mid-frequency in \(E_1^+\), then \(T u_k = u_k\) and \(L u_k=u_k\). Thus \(L H u_k = u_k\). Taking linear combinations we arrive at (2.13) and (2.14).

Conversely, if g is any function on \(V_0\), extend g to \(\widetilde{g}\) on V in any way. Then \(\widetilde{g}\) is a linear combination of all the eigenfunctions. But the eigenfunctions in \(E_1^-\) vanish on \(V_0\) and the high frequency eigenfunctions \(u_{N-j+1} = T u_j\) are equal to the low frequency eigenfunctions \(u_j\) when restricted to \(V_0\). So g is equal to a half-bandlimited function restricted to \(V_0\). \(\square \)
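To see Theorem 1 in action, the following sketch (again a numpy experiment, on a random weighted complete bipartite graph of our own choosing, for which \(E_1^-\) is trivial and no reordering of the \(\lambda =1\) eigenspace is needed) generates a random half-bandlimited function, discards its values on \(V_1\), and recovers it with the kernel (2.14).

```python
# Sketch: reconstruction of a half-bandlimited function from its samples on V_0
# via (2.13)-(2.14), on a random weighted complete bipartite graph.
import numpy as np

rng = np.random.default_rng(1)
n0, n1 = 4, 3
C = rng.uniform(0.5, 2.0, (n0, n1))
c = np.block([[np.zeros((n0, n0)), C], [C.T, np.zeros((n1, n1))]])
mu = c.sum(axis=1)

d = np.sqrt(mu)
lam, W = np.linalg.eigh(np.eye(n0 + n1) - c / np.outer(d, d))
U = W / d[:, None]                     # columns u_k, orthonormal in ell^2(mu)
M = int(np.sum(lam < 1 - 1e-9))        # number of low frequency eigenvalues

f = U[:, :n0] @ rng.normal(size=n0)    # a random half-bandlimited function, cf. (2.8)
K = 2 * U[:, :M] @ U[:n0, :M].T + U[:, M:n0] @ U[:n0, M:n0].T   # the kernel (2.14)
f_rec = K @ (f[:n0] * mu[:n0])         # the reconstruction (2.13), using samples on V_0 only
print(np.allclose(f_rec, f))           # True
```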

Remark 1

We could reverse the roles of the even and odd vertices, and consider half-sampling on the odd vertices. We would then obtain the same sort of results, but we would have to replace \(E_1^+\) by \(E_1^-\) in the definition of half-bandlimited. Only in graphs where \(E_1\) is trivial would we have a single class of bandlimited functions that would allow half-sampling on either even or odd vertices.

Next we consider the half graph \(G_0\) whose vertices are \(V_0\) and with edges \(x\sim z\) if and only if there exists \(y\in V_1\) such that \(x\sim y\) and \(y\sim z\) are edges in G. Note that such y need not be unique. We consider the question of whether we can define weights \(c_0(x,z)\) on the edges of \(G_0\) so that the restrictions \(\widetilde{H} u_k\) for \(1\leqslant k\leqslant \# V_0\) give a complete set of eigenfunctions for the Laplacian \(-\Delta _0\) on \(G_0\). It appears that we need an additional weak symmetry assumption on the original weights in order to do this.

Definition 2

We say that the graph G and its weights \(\{c(x,y)\}\) are weakly symmetric if there exists a constant A independent of \(x\in V_0\) such that

$$\begin{aligned} \sum _{y\sim x} \frac{c(y,x)^2}{\mu (x)\mu (y)} = A \end{aligned}$$
(2.15)

for all \(x\in V_0\). In that case we define weights \(c_0(x,z)\) on \(G_0\) by

$$\begin{aligned} c_0(x,z) = \sum _{\begin{array}{c} y\in V_1 \\ x\sim y\sim z \end{array}} \frac{c(x,y) c(y,z)}{\mu (y)(1-A)} . \end{aligned}$$
(2.16)

Theorem 2

Assume that G and its weights are weakly symmetric, and consider the Laplacian \(-\Delta _0\) on \(G_0\) with weights given by (2.16). Then \(\widetilde{H} u_k\) for \(1\leqslant k\leqslant \# V_0\) is an eigenfunction of \(-\Delta _0\) with eigenvalue

$$\begin{aligned} \widetilde{\lambda }_k = \frac{\lambda _k(2-\lambda _k)}{1-A} , \end{aligned}$$
(2.17)

and these give a complete set of eigenfunctions for \(-\Delta _0\).

Proof

Fix \(x\in V_0\) and let \(\{y_j\}\) enumerate the vertices in \(V_1\) that are neighbors of x. Let \(\{z_{j \ell }\}\) enumerate the vertices in \(V_0\) that are neighbors of \(y_j\). Note that x is one of these neighbors and we choose the enumeration so \(x=z_{j 1}\). The union over j of the \(z_{j \ell }\) with \(\ell \geqslant 2\) gives all \(z\in V_0\) that are neighbors of x in \(G_0\), but there may be more than one representation of a fixed z. The eigenvalue equation for \(-\Delta \) at x yields

$$\begin{aligned} \sum _j c(x,y_j) u_k(y_j) = \mu (x)(1-\lambda _k) u_k(x) \end{aligned}$$
(2.18)

and similarly at \(y_j\) we have

$$\begin{aligned} \sum _\ell \frac{c(y_j,z_{j\ell })u_k(z_{j\ell })}{\mu (y_j)} = (1-\lambda _k) u_k(y_j) . \end{aligned}$$
(2.19)

Now we multiply (2.19) by \(c(x,y_j)\) and sum over j to obtain

$$\begin{aligned} \sum _j \sum _\ell \frac{c(x,y_j) c(y_j,z_{j\ell }) u_k(z_{j\ell })}{\mu (y_j)}&= (1-\lambda _k) \sum _j c(x,y_j) u_k(y_j) \nonumber \\&= \mu (x) (1-\lambda _k)^2 u_k(x) . \end{aligned}$$
(2.20)

Note that the terms corresponding to \(\ell =1\) on the left side of (2.20) sum to exactly \(\mu (x) A u_k(x)\) while the other terms sum to exactly \((1-A) \sum _z c_0(x,z) u_k(z)\) where the sum is over all z that are neighbors of x in \(G_0\). Dividing by \((1-A)\mu (x)\), we obtain

$$\begin{aligned} \sum _z \frac{c_0(x,z)}{\mu (x)} u_k(z)&= \left( \frac{(1-\lambda _k)^2}{1-A} - \frac{A}{1-A}\right) u_k(x) \\&= (1-\widetilde{\lambda }_k) u_k(x) , \end{aligned}$$

which is equivalent to \(-\Delta _0 \widetilde{H} u_k = \widetilde{\lambda }_k \widetilde{H} u_k\). Note that choosing \(k=1\), so that \(u_1\) is constant, verifies that \(\mu (x) = \sum _z c_0(x,z)\) as required. \(\square \)
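For a concrete check of Theorem 2 (a toy example for illustration, not one from the text), take the even cycle \(C_8\) with unit conductances. This graph is weakly symmetric with \(A=\frac{1}{2}\), the half graph \(G_0\) is the cycle \(C_4\), and the spectrum of \(-\Delta _0\) is exactly the one predicted by (2.17).

```python
# Sketch: Theorem 2 on the even cycle C_8 with unit conductances (weakly symmetric,
# A = 1/2); the half graph G_0 is C_4 and its spectrum is given by (2.17).
import numpy as np

n = 8
c = np.zeros((n, n))
for i in range(n):
    c[i, (i + 1) % n] = c[(i + 1) % n, i] = 1.0
mu = c.sum(axis=1)

V0, V1 = range(0, n, 2), range(1, n, 2)
A = sum(c[0, y] ** 2 / (mu[0] * mu[y]) for y in V1)   # (2.15); the same for every x in V_0

c0 = np.zeros((len(V0), len(V0)))                     # the weights (2.16) on G_0
for i, x in enumerate(V0):
    for j, z in enumerate(V0):
        if x != z:
            c0[i, j] = sum(c[x, y] * c[y, z] / (mu[y] * (1 - A)) for y in V1)

def spec(cond):
    m = cond.sum(axis=1)
    return np.sort(np.linalg.eigvalsh(np.eye(len(m)) - cond / np.sqrt(np.outer(m, m))))

lam, lam0 = spec(c), spec(c0)
predicted = np.sort(lam[: len(V0)] * (2 - lam[: len(V0)]) / (1 - A))   # (2.17)
print(np.round(lam0, 4), np.round(predicted, 4))      # both equal [0, 1, 1, 2]
```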

3 Infinite Graphs

If G is infinite we assume that there is an upper bound to the order of each vertex. Then the operator \(-\Delta \) on \(\ell ^2(G)\) (with respect to the measure given by the weights \(\mu (x)\)) is a bounded, nonnegative self-adjoint operator with spectrum contained in [0, 2]. By the spectral theorem it has a spectral resolution. For simplicity we assume this resolution can be given in the form

$$\begin{aligned} f = \int _0^2 P_\lambda f\, \mathrm {d}\nu (\lambda ) \end{aligned}$$
(3.1)

where \(\nu \) is a finite positive measure on the spectrum, and \(P_\lambda f\) is an eigenfunction

$$\begin{aligned} -\Delta P_\lambda f = \lambda P_\lambda f . \end{aligned}$$
(3.2)

Note that \(P_\lambda f\) is not assumed to be in \(\ell ^2(G)\) (usually it is not). The usual spectral projection onto an interval [a, b] is then \(\int _a^b P_\lambda f\, \mathrm {d}\nu (\lambda )\). Note that \(P_\lambda \) and \(\nu \) are not unique, since one could multiply \(P_\lambda \) by a positive function \(\varphi (\lambda )\) and \(\mathrm {d}\nu (\lambda )\) by \(\frac{1}{\varphi (\lambda )}\). The general case may be treated by similar arguments, but the notation is more awkward.

For simplicity we consider first the case when the measure \(\nu \) does not have an atom at \(\lambda =1\). We define the twist operator T as before, and note that \(-\Delta T P_\lambda f = (2-\lambda ) T P_\lambda f\). We may then normalize so that \(P_{2-\lambda } f = T P_\lambda T f\). We define the lowpass operator L by

$$\begin{aligned} L f = \int _0^1 P_\lambda f\, \mathrm {d}\nu (\lambda ) \end{aligned}$$
(3.3)

and define f to be half-bandlimited if \(L f=f\). The half-sampling operators H f and \(\widetilde{H} f\) are defined as before. Then the analogue of Theorem 1 holds with

$$\begin{aligned} f = 2 L H f . \end{aligned}$$
(3.4)

The proof is essentially the same.
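For the reader's convenience, here is the short computation behind (3.4) (a sketch, using the normalization \(P_{2-\lambda } f = T P_\lambda T f\) fixed above): if \(L f = f\) then

$$\begin{aligned} L H f = \tfrac{1}{2} L f + \tfrac{1}{2} L T f = \tfrac{1}{2} f + \tfrac{1}{2} \int _0^1 P_\lambda (T f)\, \mathrm {d}\nu (\lambda ) = \tfrac{1}{2} f , \end{aligned}$$

since \(P_\lambda (T f) = T P_{2-\lambda } f\), and \(P_{2-\lambda } f = 0\) for \(\nu \)-almost every \(\lambda \in [0,1]\) because \(L f = f\) and \(\nu \) has no atom at \(\lambda =1\).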

In the case that \(\nu \) has an atom at \(\lambda =1\), we define the space \(E_1\) to be the set of 1-eigenfunctions in \(\ell ^2(G)\). If \(\{\varphi _j\}\) is an orthonormal basis for \(E_1\) then we define the projection onto \(E_1\) by

$$\begin{aligned} P_{E_1} f(x) = \sum _j \sum _y \varphi _j(x) \varphi _j(y) f(y) \mu (y) . \end{aligned}$$
(3.5)

We then split \(E_1 = E_1^+\oplus E_1^-\) into eigenfunctions supported on \(V_0\) and \(V_1\) respectively, and obtain similar projections \(P_{E_1^+}\) and \(P_{E_1^-}\). We define \(P_\lambda \) as before for \(\lambda \ne 1\), and denote by \(\nu _0\) the measure \(\nu \) with the atom at 1 removed. Then the analogue of (3.1) is

$$\begin{aligned} f = \int _0^2 P_\lambda f\, \mathrm {d}\nu _0(\lambda ) + P_{E_1^+} f + P_{E_1^-} f , \end{aligned}$$
(3.6)

and the analogue of (3.3) is

$$\begin{aligned} L f = \int _0^1 P_\lambda f\, \mathrm {d}\nu _0(\lambda ) + P_{E_1^+} f . \end{aligned}$$
(3.7)

We say that f is half-bandlimited if \(L f=f\), in which case we can recover f from H f by

$$\begin{aligned} f&= 2\int _0^1 P_\lambda H f\, \mathrm {d}\nu _0(\lambda ) + P_{E_1^+} H f \nonumber \\&= 2 L H f - P_{E_1^+} H f . \end{aligned}$$
(3.8)

The survey paper [11] discusses many examples of infinite graphs where the spectral resolution (3.1) may be given explicitly.

4 Regular Trees

We follow the standard convention and denote by \(T_q\) the connected \((q+1)\)-regular tree. All trees are bipartite: choose one vertex x and define \(V_0\) to consist of all vertices that are at an even distance from x. Every vertex in \(T_q\) has exactly \(q+1\) neighbors, and we place weights \(\frac{1}{q+1}\) on each edge so that \(\mu (x)=1\). Note that \(T_1=\mathbb {Z}\), so we are mainly interested in the case \(q\geqslant 2\). We note that \(T_q\) may be realized as the Cayley graph of a group in several different ways. For example, take the group with \(q+1\) generators \(a_0,a_1,\dots ,a_q\) and relations \(a_0^2=a_1^2=\cdots = a_q^2 = \mathrm {id}\). For some reason, the group theory plays no role in the spectral resolution of the Laplacian on \(T_q\). The theory that we outline is due to Cartier [2] and is described in detail in [9]. We will also provide a proof of the main theorem here.

Everything is described in terms of what might be called spherical convolution operators \(f\mapsto \sum f(y) G({{\mathrm{dist}}}(x,y))\) for a function G defined on \(\mathbb {Z}^+\). We begin by defining the c-function on \(\mathbb {C}\) by

$$\begin{aligned} c(z) = \left( \frac{1}{q+1}\right) \frac{q^{1-z}-q^{z-1}}{q^{-z} - q^{z-1}} \end{aligned}$$
(4.1)

and the spherical functions

$$\begin{aligned} \varphi _z(n) = c(z) q^{-n z} + c(1-z) q^{-n(1-z)} . \end{aligned}$$
(4.2)

A direct computation shows that for any fixed vertex \(x_0\), the function \(\varphi _z(d(x,x_0))\) is an eigenfunction of \(-\Delta \) with eigenvalue

$$\begin{aligned} \lambda = 1-\frac{q^z+q^{1-z}}{q+1} . \end{aligned}$$
(4.3)

Indeed, if \(x\ne x_0\) then x has q neighbors y with \(d(y,x_0)=d(x,x_0)+1\) and one neighbor with \(d(y,x_0)=d(x,x_0)-1\), so \(-\Delta q^{-d(x,x_0)z} = \lambda q^{-d(x,x_0) z}\) and similarly for z replaced by \(1-z\). In other words, the eigenvalue equation holds separately for each of the functions \(q^{-n z}\) and \(q^{-n(1-z)}\) that form \(\varphi _z(n)\) by linear combination. But \(x_0\) has \(q+1\) neighbors y with \(d(x_0,y)=1\) so \(\left. -\Delta \varphi _z(d(x,x_0))\right| _{x=x_0} = \varphi _z(0)-\varphi _z(1)\), and the specific linear combination in (4.2) yields \(\varphi _z(0)-\varphi _z(1)=\lambda \varphi _z(0)\) as required.
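Since \(\varphi _z\) is radial, the eigenvalue equation reduces to a recursion in \(n=d(x,x_0)\), which is easy to test numerically. The following sketch (our own check; the values of q and t are arbitrary) verifies (4.2)-(4.4) in this way.

```python
# Sketch: the spherical function (4.2) solves the radial form of the eigenvalue
# equation on T_q with eigenvalue (4.3)/(4.4):
#   F(0) - F(1) = lambda F(0),   F(n) - (q F(n+1) + F(n-1))/(q+1) = lambda F(n) for n >= 1.
import numpy as np

q, t = 3, 0.37                         # arbitrary test values; z = 1/2 + i t
z = 0.5 + 1j * t

def cfun(z):                           # the c-function (4.1)
    return (q ** (1 - z) - q ** (z - 1)) / ((q + 1) * (q ** (-z) - q ** (z - 1)))

def phi(n):                            # the spherical function (4.2)
    return cfun(z) * q ** (-n * z) + cfun(1 - z) * q ** (-n * (1 - z))

lam = 1 - (q ** z + q ** (1 - z)) / (q + 1)          # the eigenvalue (4.3)
F = np.array([phi(n) for n in range(12)])

assert np.isclose(F[0] - F[1], lam * F[0])
assert all(np.isclose(F[n] - (q * F[n + 1] + F[n - 1]) / (q + 1), lam * F[n])
           for n in range(1, 11))
print(np.isclose(lam, 1 - 2 * np.sqrt(q) / (q + 1) * np.cos(t * np.log(q))))   # (4.4): True
```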

We will actually only be interested in the choice \(z=\frac{1}{2}+\mathrm {i}t\) for \(0\leqslant t\leqslant \frac{\pi }{\log q}\). This yields

$$\begin{aligned} \lambda = 1-\frac{2\sqrt{q}}{q+1}\cos (t\log q) , \end{aligned}$$
(4.4)

and we will see that \(\left[ 1-\frac{2\sqrt{q}}{q+1},1+\frac{2\sqrt{q}}{q+1}\right] \) is the spectrum of \(-\Delta \). Note that we may write

$$\begin{aligned} c\left( \frac{1}{2}+\mathrm {i}t\right) = \left( \frac{1}{q+1}\right) \left( \frac{q \mathrm {e}^{-\mathrm {i}t\log q} - \mathrm {e}^{\mathrm {i}t\log q}}{\mathrm {e}^{-\mathrm {i}t\log q} - \mathrm {e}^{\mathrm {i}t\log q}}\right) . \end{aligned}$$
(4.5)

Now define

$$\begin{aligned} P_\lambda f(x) = \sum _y \varphi _{\frac{1}{2}+\mathrm {i}t}(d(x,y)) f(y) \end{aligned}$$
(4.6)

where \(\lambda \) and t are related by (4.4). We may initially assume that f has finite support, since such functions are dense in \(\ell ^2(T_q)\). This eliminates all questions of convergence of sums, and we see from the above computation that

$$\begin{aligned} -\Delta P_\lambda f = \lambda P_\lambda f . \end{aligned}$$
(4.7)

We will take \(P_\lambda \) to be the spectral projection. It turns out that the associated measure is

$$\begin{aligned} \mathrm {d}\nu (\lambda ) = \frac{q\log q}{2\pi (q+1)} \left| c\left( \frac{1}{2}+\mathrm {i}t\right) \right| ^{-2}\, \mathrm {d}t = \frac{4 q(q+1)\log q\sin ^2(t\log q)}{2\pi ((q-1)^2+4 q \sin ^2(t\log q))}\, \mathrm {d}t . \end{aligned}$$
(4.8)

Theorem 3

(Cartier) The spectral resolution of \(-\Delta \) is

$$\begin{aligned} f = \int _{1-\frac{2\sqrt{q}}{q+1}}^{1+\frac{2\sqrt{q}}{q+1}} P_\lambda f\, \mathrm {d}\nu (\lambda ) . \end{aligned}$$
(4.9)

Proof

We need to verify (4.9), which follows easily from

$$\begin{aligned} \int _0^{\pi /\log q} \varphi _{\frac{1}{2}+\mathrm {i}t}(n) \frac{q\log q}{2\pi (q+1)}\left| c\left( \frac{1}{2}+\mathrm {i}t\right) \right| ^{-2}\, \mathrm {d}t = \delta _{n 0} . \end{aligned}$$
(4.10)

Since \(c(\frac{1}{2}-\mathrm {i}t)= \overline{c(\frac{1}{2}+\mathrm {i}t)}\), the integrand in (4.10) is

$$\begin{aligned} \left( \frac{q\log q}{2\pi (q+1)}\right) q^{-n/2} \left( \frac{\mathrm {e}^{-\mathrm {i}n t \log q}}{c(\frac{1}{2}-\mathrm {i}t)} + \frac{\mathrm {e}^{\mathrm {i}n t\log q}}{c(\frac{1}{2}+\mathrm {i}t)}\right) . \end{aligned}$$

Thus the integral in (4.10) is

$$\begin{aligned} \left( \frac{q\log q}{2\pi (q+1)}\right) q^{-n/2} \int _{-\frac{\pi }{\log q}}^{\frac{\pi }{\log q}} \frac{\mathrm {e}^{\mathrm {i}n t\log q}}{c(\frac{1}{2}+\mathrm {i}t)}\, \mathrm {d}t . \end{aligned}$$
(4.11)

We introduce the variable \(w=\mathrm {e}^{\mathrm {i}t\log q}\). Note that \(\frac{1}{c(\frac{1}{2}+\mathrm {i}t)}=(q+1)\left( \frac{1-w^2}{q-w^2}\right) \) and \(\mathrm {d}t = \frac{\mathrm {e}^{-\mathrm {i}t\log q}}{\mathrm {i}\log q}\, \mathrm {d}w\), so (4.11) becomes the contour integral over the unit circle

$$\begin{aligned} q^{1-\frac{n}{2}} \frac{1}{2\pi \mathrm {i}} \oint \frac{w^{n-1}(1-w^2)}{q-w^2}\, \mathrm {d}w . \end{aligned}$$
(4.12)

When \(n\geqslant 1\) the integrand is holomorphic inside the unit disk so the integral is zero. When \(n=0\) we obtain

$$\begin{aligned} q\frac{1}{2\pi \mathrm {i}} \oint \frac{1-w^2}{q-w^2}\, \frac{\mathrm {d}w}{w} = 1 \end{aligned}$$

by Cauchy’s integral formula. \(\square \)
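The orthogonality relation (4.10) is also easy to test numerically; the following sketch (our own sanity check) integrates it by the midpoint rule, which avoids evaluating the c-function at the endpoints where the formula (4.1) degenerates.

```python
# Sketch: numerical check of (4.10); the integral is 1 for n = 0 and 0 for n >= 1.
import numpy as np

q = 2
def cfun(z):                                   # the c-function (4.1)
    return (q ** (1 - z) - q ** (z - 1)) / ((q + 1) * (q ** (-z) - q ** (z - 1)))

def phi(n, t):                                 # spherical function (4.2) at z = 1/2 + it
    z = 0.5 + 1j * t
    return (cfun(z) * q ** (-n * z) + cfun(1 - z) * q ** (-n * (1 - z))).real

N = 20000
h = (np.pi / np.log(q)) / N
t = (np.arange(N) + 0.5) * h                   # midpoints of [0, pi / log q]
weight = q * np.log(q) / (2 * np.pi * (q + 1)) * np.abs(cfun(0.5 + 1j * t)) ** -2
for n in range(4):
    print(n, round(float(np.sum(phi(n, t) * weight) * h), 6))   # 1.0 for n = 0, approximately 0 otherwise
```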

It is clear that \([1-\frac{2\sqrt{q}}{q+1},1]\) is the lower half of the spectrum, so that the lowpass operator is

$$\begin{aligned} L f = \int _{1-\frac{2\sqrt{q}}{q+1}}^1 P_\lambda f\, \mathrm {d}\nu (\lambda ), \end{aligned}$$
(4.13)

and a function is half-bandlimited if \(L f=f\). If f is half-bandlimited it can be reconstructed from its half-samples H f via \(f=2 L H f\). We can get an explicit expression for L f by following the reasoning in the proof of Theorem 3 to write

$$\begin{aligned} L f(x) = \sum _y \mathscr {L}(d(x,y)) f(y) \end{aligned}$$
(4.14)

with

$$\begin{aligned} \mathscr {L}(n) = q^{1-\frac{n}{2}}\frac{1}{2\pi \mathrm {i}}\int _{-\mathrm {i}}^\mathrm {i}\frac{w^{n-1}(1-w^2)}{q-w^2}\, \mathrm {d}w \end{aligned}$$
(4.15)

since the low frequencies correspond to \(-\frac{\pi }{2\log q}\leqslant t\leqslant \frac{\pi }{2\log q}\). The integral in (4.15) is over the contour that is the right half of the unit circle, but for \(n\geqslant 1\) the integrand is holomorphic so the integral is independent of the contour. When n is even it follows by symmetry that (4.15) is half of (4.12), so \(2 L H f(x) = f(x)\) for \(x\in V_0\) as required.

When \(n=2m+1\) is odd we define

$$\begin{aligned} \alpha _m = \frac{1}{2\pi \mathrm {i}}\int _{-\mathrm {i}}^\mathrm {i}\frac{w^{2m}}{q-w^2}\, \mathrm {d}w \end{aligned}$$
(4.16)

so that

$$\begin{aligned} \mathscr {L}(2m+1) = q^{\frac{1}{2}-m}(\alpha _m-\alpha _{m+1}) . \end{aligned}$$
(4.17)

We evaluate \(\alpha _m\) by standard methods of calculus by writing \(\frac{1}{q-w^2} = \frac{1}{2\sqrt{q}}\left( \frac{1}{\sqrt{q}+w}+\frac{1}{\sqrt{q}-w}\right) \) and \(w^{2m}=((\sqrt{q}\pm w)-\sqrt{q})^{2m} = \sum _{k=0}^{2m} \left( {\begin{array}{c}2m\\ k\end{array}}\right) (-1)^k q^{m-\frac{k}{2}} (\sqrt{q}\pm w)^k\) so that the integrand in (4.16) is

$$\begin{aligned} \frac{1}{2\sqrt{q}} \sum _{k=0}^{2m} \left( {\begin{array}{c}2m\\ k\end{array}}\right) (-1)^k q^{m-\frac{k}{2}} \left( (\sqrt{q}+w)^{k-1} + (\sqrt{q}-w)^{k-1}\right) . \end{aligned}$$

When we integrate the \(k=0\) term we obtain

$$\begin{aligned} \frac{1}{2\pi \mathrm {i}} \int _{-\mathrm {i}}^\mathrm {i}\left( \frac{1}{\sqrt{q}+w} + \frac{1}{\sqrt{q}-w}\right) \, \mathrm {d}w = \frac{1}{\pi \mathrm {i}}\log \frac{\sqrt{q}+\mathrm {i}}{\sqrt{q}-\mathrm {i}} = \frac{2}{\pi } \tan ^{-1}\left( \frac{1}{\sqrt{q}}\right) , \end{aligned}$$

while for \(k\geqslant 1\), we obtain

$$\begin{aligned}&\frac{1}{2\pi \mathrm {i}} \int _{-\mathrm {i}}^\mathrm {i}\left( (\sqrt{q}+w)^{k-1} + (\sqrt{q}-w)^{k-1}\right) \, \mathrm {d}w = \frac{1}{\pi \mathrm {i}k} \left( (\sqrt{q}+\mathrm {i})^k - (\sqrt{q}-\mathrm {i})^k\right) \\&\quad = \frac{2}{\pi k} \sum _{j=0}^{\left\lfloor \frac{k-1}{2}\right\rfloor } \left( {\begin{array}{c}k\\ 2j+1\end{array}}\right) (-1)^j q^{\frac{k}{2}-j-\frac{1}{2}} . \end{aligned}$$

This leads to

$$\begin{aligned} \alpha _m = \frac{q^{m-\frac{1}{2}}}{\pi } \tan ^{-1}\left( \frac{1}{\sqrt{q}}\right) + \sum _{k=1}^{2 m} \sum _{j=0}^{\left\lfloor \frac{k-1}{2}\right\rfloor } \frac{1}{\pi k} \left( {\begin{array}{c}2 m\\ k\end{array}}\right) \left( {\begin{array}{c}k\\ 2j+1\end{array}}\right) (-1)^{j+k} q^{m-j-1} . \end{aligned}$$
(4.18)
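The closed form (4.18) can be cross-checked against the defining contour integral (4.16); the following sketch (our own verification, using a midpoint rule along the right half of the unit circle) does this for one sample value of m.

```python
# Sketch: compare the closed form (4.18) for alpha_m with direct quadrature of (4.16).
import numpy as np
from math import comb, atan, pi, sqrt

q, m = 3, 4

# (4.16) by quadrature: parametrize w = e^{i theta}, theta in (-pi/2, pi/2)
N = 20000
theta = (np.arange(N) + 0.5) * pi / N - pi / 2
w = np.exp(1j * theta)
alpha_quad = float(np.real(np.sum(w ** (2 * m) / (q - w ** 2) * 1j * w) * (pi / N) / (2j * pi)))

# the closed form (4.18)
alpha_closed = q ** (m - 0.5) / pi * atan(1 / sqrt(q))
for k in range(1, 2 * m + 1):
    for j in range((k - 1) // 2 + 1):
        alpha_closed += comb(2 * m, k) * comb(k, 2 * j + 1) * (-1) ** (j + k) * q ** (m - j - 1) / (pi * k)

print(alpha_quad, alpha_closed, np.isclose(alpha_quad, alpha_closed))   # the last entry is True
```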

Although (4.17) and (4.18) give an explicit formula for \(\mathscr {L}(2m+1)\), it is not clear how to obtain size estimates from this. However, we may rewrite (4.15) as

$$\begin{aligned} \mathscr {L}(n) = q^{1-\frac{n}{2}} \frac{1}{2\pi } \int _{-\frac{\pi }{2}}^{\frac{\pi }{2}} \mathrm {e}^{\mathrm {i}n \theta } \left( \frac{1-\mathrm {e}^{2\mathrm {i}\theta }}{q-\mathrm {e}^{2\mathrm {i}\theta }}\right) \, \mathrm {d}\theta . \end{aligned}$$
(4.19)

The integral gives the Fourier coefficients of a function with jump discontinuities at \(\theta =\pm \frac{\pi }{2}\), so

$$\begin{aligned} \mathscr {L}(n) = O\left( \frac{q^{-n/2}}{n}\right) \end{aligned}$$
(4.20)

and this estimate is sharp. This shows that for fixed y, \(f(x)=\mathscr {L}(d(x,y))\) is in \(\ell ^2(G)\) uniformly in y, since there are \((q+1) q^{n-1}\) points x with \(d(x,y)=n\). However, this is essentially best possible, as \(f(x)\notin L^p\) for any \(p<2\). This is in contrast to the sinc function on \(T_1=\mathbb {Z}\) which belongs to every \(L^p\) with \(p>1\).
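Numerically, the integral representation (4.19) is easy to evaluate, and the following sketch (our own check) confirms that \(\mathscr {L}(0)=\frac{1}{2}\) and \(\mathscr {L}(n)=0\) for even \(n\geqslant 2\), and illustrates that the decay rate (4.20) is sharp along odd n.

```python
# Sketch: evaluate the low-pass kernel via (4.19); check L(0) = 1/2, L(even) = 0,
# and the decay rate (4.20) along odd n.
import numpy as np

q = 2
def kernelL(n, N=20000):
    theta = (np.arange(N) + 0.5) * np.pi / N - np.pi / 2          # midpoints of [-pi/2, pi/2]
    g = np.exp(1j * n * theta) * (1 - np.exp(2j * theta)) / (q - np.exp(2j * theta))
    return q ** (1 - n / 2) / (2 * np.pi) * float(np.real(np.sum(g))) * (np.pi / N)

print(round(kernelL(0), 6))                                       # 0.5
print([round(kernelL(n), 6) for n in (2, 4, 6)])                  # all (numerically) 0
print([round(abs(kernelL(n)) * n * q ** (n / 2), 4) for n in (5, 15, 45)])  # roughly equal, cf. (4.20)
```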

We conclude this section by showing how the ideas in Theorem 2 lead to the explicit spectral resolution for the symmetric Laplacian on the graph \(G_0\). Note that \(G_0\) is not a tree, but it is homogeneous and \(q(q+1)\) regular, with each vertex lying on the intersection of \(q+1\) complete graphs on \(q+1\) vertices. Let us fix a pair of distinct vertices x and y with \(d_0(x,y)=n\) (we denote the distance on \(G_0\) by \(d_0\), and of course \(2 d_0(x,y)=d(x,y)\) regarding x and y as vertices of G). Then y has one neighbor z with \(d_0(x,z)=n-1\), \(q-1\) neighbors z with \(d_0(x,z)=n\) and \(q^2\) neighbors z with \(d_0(x,z)=n+1\). Thus if \(f(y)=q^{-2(\frac{1}{2}+\mathrm {i}t)d_0(x,y)}\), an easy computation shows

$$\begin{aligned} -\Delta _0 f(y)&= \widetilde{\lambda }f(y) \quad \hbox {for } y\ne x \hbox { with} \end{aligned}$$
(4.21)
$$\begin{aligned} \widetilde{\lambda }&= 1-\frac{q-1}{q(q+1)} - \frac{q^{2\mathrm {i}t}+q^{-2\mathrm {i}t}}{q+1} = \frac{q^2+1}{q^2+q} - \frac{2 q\cos (2t\log q)}{q^2+q}, \end{aligned}$$
(4.22)

and this agrees with (2.17) for \(\lambda \) given by (4.4). Again we can verify directly that \(\varphi _{\frac{1}{2}+\mathrm {i}t}(d(x,y))\) satisfies the eigenvalue equation (4.21) also at \(y=x\). Thus we may define

$$\begin{aligned} P_{\widetilde{\lambda }}^0 f(x) = \sum _{y\in G_0} \varphi _{\frac{1}{2}+\mathrm {i}t}(2 d_0(x,y)) f(y) \end{aligned}$$
(4.23)

as our spectral projection onto the \(\widetilde{\lambda }\)-eigenspace, where t and \(\widetilde{\lambda }\) are related by (4.22). Note that this will give us a spectrum

$$\begin{aligned} \left[ \frac{(q-1)^2}{q^2+q},1+\frac{1}{q}\right] \end{aligned}$$
(4.24)

for \(-\Delta _0\).
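For completeness, the agreement between (4.22) and (2.17) can be checked directly: the tree weights give \(A=\frac{1}{q+1}\) in (2.15), and with \(\lambda \) as in (4.4),

$$\begin{aligned} \frac{\lambda (2-\lambda )}{1-A} = \frac{q+1}{q}\left( 1-\frac{4q\cos ^2(t\log q)}{(q+1)^2}\right) = \frac{q+1}{q} - \frac{2(1+\cos (2t\log q))}{q+1} = \frac{q^2+1}{q^2+q} - \frac{2 q\cos (2t\log q)}{q^2+q} , \end{aligned}$$

which is exactly \(\widetilde{\lambda }\) in (4.22).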

Theorem 4

The spectral resolution of \(-\Delta _0\) is

$$\begin{aligned} f = \int _{\frac{(q-1)^2}{q^2-q}}^{1+\frac{1}{q}} P_{\widetilde{\lambda }}^0 f\, \mathrm {d}\nu _0(\widetilde{\lambda }) \end{aligned}$$
(4.25)

where

$$\begin{aligned} \mathrm {d}\nu _0(\widetilde{\lambda }) = 2\mathrm {d}\nu (\lambda ) . \end{aligned}$$
(4.26)

Proof

Let g denote the extension of f to V obtained by setting \(g=0\) on \(V_1\). Then \(g = \int _{1-\frac{2\sqrt{q}}{q+1}}^{1+\frac{2\sqrt{q}}{q+1}} P_\lambda g\, \mathrm {d}\nu (\lambda )\) on V by (4.9). Since \(P_\lambda g = P_{2-\lambda } g\) on \(V_0\) we obtain \(f=g=2\int _{1-\frac{2\sqrt{q}}{q+1}}^1 P_\lambda g\, \mathrm {d}\nu (\lambda )\) on \(V_0\). But \(P_\lambda g = P_{\widetilde{\lambda }}^0 f\) on \(V_0\). \(\square \)

5 Edge Substitution Graphs

In this section we consider edge substitution graphs as discussed in [21]. We consider two graphs G and H, and define a graph \(G_H\) obtained from G by substituting a copy of H for each edge in G. We assume that H is finite (with \(\# H\geqslant 3\)) and has two distinguished boundary points \(q_0\) and \(q_1\). For simplicity we assume that H has a symmetry that permutes \(q_0\) and \(q_1\). The vertices of \(G_H\) consist of \(V_\mathrm {old}\), the old vertices, that are just vertices of G, and \(V_\mathrm {new}\), the new vertices, denoted v(x, y, h), where \(x\sim y\) is an edge in G and \(h\in V_H\backslash \{q_0,q_1\}\). The edges of \(G_H\) are exactly the edges \(E_H\) of H in each copy of H, so

$$\begin{aligned} {\left\{ \begin{array}{ll} v(x,y,h) \sim v(x,y,h') &{} \text {if}\quad h\sim h' \quad \text {in}\quad H \\ v(x,y,h) \sim x &{} \text {if}\quad h\sim q_0 \quad \text {in}\quad H, \,\text {and} \\ v(x,y,h) \sim y &{} \text {if}\quad h\sim q_1\quad \text {in}\quad H. \end{array}\right. } \end{aligned}$$
(5.1)

Because of the symmetry assumption it doesn’t matter which order we take for \(x\sim y\).

In this section we are not assuming that any of the graphs are bipartite. The sampling problem we consider is to recover a function on \(G_H\) from its values on \(V_\mathrm {old}\), under an appropriate bandlimited assumption. To do this we need to consider Laplacians \(\Delta _G\), \(\Delta _H\) and \(\Delta _{G_H}\) on the three graphs. We assume we are given a set of positive conductances c(xy) on the edges of G, with \(\mu (x) =\sum _{y\sim x} c(x,y)\) and positive conductances \(c_H(h,h')\) on the edges of H, with \(\nu (h) = \sum _{h'\sim h} c_H(h,h')\), and we assume that the H conductances are invariant under the symmetry that permutes \(q_0\) and \(q_1\). Then

$$\begin{aligned} -\Delta _G f(x)&= f(x)-\sum _{y\sim x}\frac{c(x,y)}{\mu (x)} f(y) \quad \text {and} \end{aligned}$$
(5.2)
$$\begin{aligned} -\Delta _H f(h)&= f(h) - \sum _{h'\sim h} \frac{c_H(h,h')}{\nu (h)} f(h') . \end{aligned}$$
(5.3)

For the graph \(G_H\) we assign weights multiplicatively, so

$$\begin{aligned} {\left\{ \begin{array}{ll} c(v(x,y,h),v(x,y,h')) &{}= c(x,y) c_H(h,h') \\ c(v(x,y,h),x) &{}= c(x,y) c_H(q_0,h) \\ c(v(x,y,h),y) &{}= c(x,y) c_H(q_1,h) . \end{array}\right. } \end{aligned}$$
(5.4)

We may assume that \(\nu (q_0)=\nu (q_1)=1\) by multiplying all the H conductances by the appropriate constant. That means that the weights \(\mu (x)\) at the old vertices are unchanged, and the weight at the new vertex v(x, y, h) is \(c(x,y)\nu (h)\). Thus, at new vertices

$$\begin{aligned} -\Delta _{G_H} f(v(x,y,h))&= f(v(x,y,h)) - \sum _{h'\sim h} \frac{c_H(h,h')}{\nu (h)} f(v(x,y,h')) \nonumber \\&= -\Delta _H f(v(x,y,h)) , \end{aligned}$$
(5.5)

while at old vertices

$$\begin{aligned} -\Delta _{G_H} f(x) = f(x)-\sum _{y\sim x} \sum _{h\sim q_0} \frac{c(x,y)}{\mu (x)} c_H(q_0,h) f(v(x,y,h)) . \end{aligned}$$
(5.6)

We want to compare the \(\lambda '\)-eigenfunction equation on \(G_H\) with the \(\lambda \)-eigenfunction equation on G. Note that (5.5) implies that \(\lambda '\)-eigenfunctions on \(G_H\), when restricted to the new points v(x, y, h), for fixed edge \(x\sim y\), will be \(\lambda '\)-eigenfunctions on the interior of H.

We first discuss the case when G is finite. Assume that \(\lambda '\) is not a Dirichlet eigenvalue of \(-\Delta _H\). Then interior values are determined by boundary values, so

$$\begin{aligned} f(v(x,y,h)) = a_0(\lambda ',h) f(x) + a_1(\lambda ',h) f(y) \end{aligned}$$
(5.7)

for certain rational functions \(a_i(\lambda ',h)\) of \(\lambda '\), whose denominator is a polynomial of degree \(N=\# H-2\) and whose numerator is a polynomial of degree \(N-1\) (see [21] p. 2052). The zeros of the denominator are the Dirichlet eigenvalues. We then define

$$\begin{aligned} A_i(\lambda ') = \sum _{h\sim q_0} c_H(q_0,h) a_i(\lambda ',h),\quad i=0,1 \end{aligned}$$
(5.8)

so that

$$\begin{aligned} -\Delta _{G_H} f(x) = (1-A_0(\lambda '))f(x) - A_1(\lambda ') (f(x)+\Delta _G f(x)) . \end{aligned}$$
(5.9)

In other words, f is a \(\lambda '\)-eigenfunction of \(-\Delta _{G_H}\) if and only if its restriction to \(V_\mathrm {old}\) is a \(\lambda \)-eigenfunction of \(-\Delta _G\) where the eigenvalues are related by

$$\begin{aligned} A_1(\lambda ')(1-\lambda ) = 1-\lambda '-A_0(\lambda ') . \end{aligned}$$
(5.10)

It is clear from (5.10) that each \(\lambda '\) determines a unique value of \(\lambda \), but there are multiple solutions for \(\lambda '\) in terms of \(\lambda \). Note that if f is constant then \(\lambda =\lambda '=0\).
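To make (5.7)-(5.10) concrete, here is the simplest worked example (ours, not taken from [21]): let H be the path \(q_0\sim h\sim q_1\) with unit conductances, so that \(G_H\) is G with every edge subdivided once. The interior eigenvalue equation at h gives, for \(\lambda '\ne 1\) (the only Dirichlet eigenvalue),

$$\begin{aligned} f(v(x,y,h)) = \frac{f(x)+f(y)}{2(1-\lambda ')} , \qquad a_0(\lambda ',h)=a_1(\lambda ',h)=A_0(\lambda ')=A_1(\lambda ')=\frac{1}{2(1-\lambda ')} , \end{aligned}$$

and (5.10) reduces to \((1-\lambda ')^2 = 1-\frac{\lambda }{2}\), or \(\lambda = 2\lambda '(2-\lambda ')\). On the branch through the origin this gives \(\lambda ' = 1-\sqrt{1-\lambda /2}\), so \(\lambda =0\) corresponds to \(\lambda '=0\), \(\lambda =2\) corresponds to \(\lambda _1'=1\), and \(\lambda \) is indeed an increasing function of \(\lambda '\) on \([0,\lambda _1']\).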

Note that the relationship (5.10) depends on the graph H and its Laplacian, not on G. Suppose \(\lambda '\) is a Neumann eigenvalue of \(-\Delta _H\) corresponding to an odd eigenfunction g with \(g(q_0)=1\), \(g(q_1)=-1\). (Recall that a Neumann eigenfunction means that if we glue together two copies of H at \(q_0\) and \(q_1\) and extend g symmetrically then the eigenvalue equation continues to hold at \(q_0\) and \(q_1\).) It is clear that such eigenfunctions exist. If G is bipartite then we can take the \(-\Delta _G\) eigenfunction \(f|_{V_0}=1\) and \(f|_{V_1}=-1\) with \(\lambda =2\), extend it to \(G_H\) by interpolating g in every copy of H, and obtain an eigenfunction of \(-\Delta _{G_H}\) with eigenvalue \(\lambda '\). This shows that (5.10) has a solution \(\lambda '\) for \(\lambda =2\).

Denote by \(\lambda _1'\) the smallest solution of (5.10) with \(\lambda =2\). Then by continuity there is a solution \(\lambda '\) to (5.10) in the interval \([0,\lambda _1']\) for every \(\lambda \in [0,2]\). We make the additional assumption that \(\lambda \) is an increasing function of \(\lambda '\) in the interval \([0,\lambda _1']\). This implies that the solution of (5.10) in this interval is unique. We denote it \(\lambda '(\lambda )\).

We may now define the low frequency spectrum on \(G_H\) to correspond, roughly speaking, to the eigenfunctions of \(-\Delta _{G_H}\) with eigenvalues in \([0,\lambda _1']\). Each such eigenfunction restricts to a \(\lambda \)-eigenfunction of \(-\Delta _G\), and all such eigenfunctions arise in this way. The only exception is that we have to exclude eigenfunctions that restrict to zero on G. These, of course, correspond to Dirichlet eigenvalues on H. For each Dirichlet eigenvalue \(\lambda _\mathrm {D}'\) in \([0,\lambda _1']\), we must consider all such eigenfunctions as high frequency. Note that it may happen that there still remain some low frequency eigenfunctions with eigenvalue \(\lambda _\mathrm {D}'\), and in fact this happens exactly when \(\lambda _\mathrm {D}'\) has multiplicity greater than one in the spectrum of \(-\Delta _H\). In that case we are simply fine-tuning the distinction between low and high frequency at the endpoint of the band.

Now define a low bandlimited function on \(G_H\) to be one whose spectral expansion in eigenfunctions of \(-\Delta _{G_H}\) contains only low frequency spectrum

$$\begin{aligned} f=\sum b_k u_k , \end{aligned}$$
(5.11)

where the sum extends over all low frequency eigenfunctions. Then \(-\Delta _{G_H} u_k = \lambda _k' u_k\), but also the restriction of \(u_k\) to G satisfies \(-\Delta _G u_k = \lambda _k u_k\) for \(\lambda '_k=\lambda '(\lambda _k)\). Then the restriction of (5.11) to G just gives the spectral expansion of \(f|_{V_\mathrm {old}}\) for \(-\Delta _G\). In this manner we obtain all functions on G as restrictions of low bandlimited functions to G. We may then uniquely solve the sampling problem as follows. Given the sample values \(F=f|_{V_\mathrm {old}}\), expand F in the \(-\Delta _G\) eigenfunctions

$$\begin{aligned} F&= \sum c_k \varphi _k \quad \text {with} \end{aligned}$$
(5.12)
$$\begin{aligned} -\Delta _G \varphi _k&= \lambda _k \varphi _k . \end{aligned}$$
(5.13)

Solve for \(\lambda _k'=\lambda '(\lambda _k)\), and then the restriction of \(u_k\) to \(V_\mathrm {old}\) will be a multiple of \(\varphi _k\), and so (5.12) yields (5.11). (In the case that the eigenvalue \(\lambda _k\) has nontrivial multiplicity it may be necessary to reshuffle the eigenspaces to line up \(u_k\) and \(\varphi _k\).) The extension from \(\varphi _k\) to \(u_k\) is given explicitly by (5.7). The coefficients in the expansions (5.11) and (5.12) are given by different inner products. The relationship between the inner products is discussed in detail in [21] pp. 2055–2057. The situation when G is an infinite graph is similar to the discussion for half sampling in Sect. 3. We leave the details to the interested reader.