Jorgensen and Pedersen [10] demonstrated that there exist singular measures \(\nu \) which are spectral—that is, they possess a sequence of exponential functions which form an orthonormal basis in \(L^2(\nu )\). The canonical example of such a singular and spectral measure is the uniform measure on the Cantor 4-set defined as follows:

$$\begin{aligned} C_{4} = \left\{ x \in [0,1] : x = \sum _{k=1}^{\infty } \dfrac{a_{k}}{4^{k}}, \ a_{k} \in \{0,2\} \right\} . \end{aligned}$$

This is analogous to the standard middle third Cantor set where \(4^{k}\) replaces \(3^{k}\). The set \(C_{4}\) can also be described as the attractor set of the following iterated function system on \(\mathbb {R}\):

$$\begin{aligned} \tau _{0}(x) = \dfrac{x}{4}, \qquad \tau _{2}(x) = \dfrac{x + 2}{4}. \end{aligned}$$

The uniform measure on the set \(C_{4}\) then is the unique probability measure \(\mu _{4}\) which is invariant under this iterated function system:

$$\begin{aligned} \int f(x) d \mu _{4}(x) = \dfrac{1}{2} \left( \int f( \tau _{0} (x) ) d \mu _{4}(x) + \int f( \tau _{2} (x) ) d \mu _{4}(x) \right) \end{aligned}$$

for all \(f \in C(\mathbb {R})\), see [9] for details. The standard spectrum for \(\mu _{4}\) is \(\Gamma _{4} = \big \{ \sum _{n=0}^{N} l_{n} 4^{n} : l_{n} \in \{0,1\} \big \}\), though there are many spectra [2, 4]. Extensions to larger classes of available spectra and other considerations can be found, for example, in [11, 12, 14].
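As a quick numerical illustration (not part of the argument), one can approximate \(\mu _{4}\) by truncating the digit expansion \(x = \sum a_{k}/4^{k}\) to m base-4 digits and verify that the exponentials with frequencies in a finite portion of \(\Gamma _{4}\) are orthonormal; the depth \(m = 8\) and the frequency range below are arbitrary illustrative choices.

```python
import itertools
import numpy as np

# Approximate mu_4 by truncating x = sum a_k / 4^k, a_k in {0, 2}, to m
# digits; this gives 2^m equally weighted atoms.
m = 8
points = np.array([sum(a / 4.0**k for k, a in zip(range(1, m + 1), digits))
                   for digits in itertools.product([0, 2], repeat=m)])

# The spectrum Gamma_4, truncated to combinations of 4^0, 4^1, 4^2.
gammas = [sum(l * 4**n for n, l in zip(range(3), bits))
          for bits in itertools.product([0, 1], repeat=3)]

# Gram matrix of the exponentials e^{2 pi i gamma x} against the atomic measure.
E = np.exp(2j * np.pi * np.outer(gammas, points))  # rows indexed by gamma
G = E @ E.conj().T / len(points)

print(np.max(np.abs(G - np.eye(len(gammas)))))  # essentially 0
```

For distinct \(\gamma , \gamma ' \in \Gamma _{4}\) whose digits sit in the first m positions, the orthogonality is exact even for the truncated measure, so the reported deviation is pure floating-point error.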

Remarkably, Jorgensen and Pedersen proved that the uniform measure \(\mu _{3}\) on the standard middle third Cantor set is not spectral. Indeed, there are no three mutually orthogonal exponentials in \(L^2(\mu _{3})\). Consequently, much attention has been devoted to whether there exists a Fourier frame for \(L^2(\mu _{3})\); the problem is still unresolved, but see [5, 6] for progress in this regard. In this paper, we will construct Fourier frames for \(L^2(\mu _{4})\) using a dilation theory type argument. The motivation is whether the construction we demonstrate here for \(\mu _{4}\) will be applicable to \(\mu _{3}\). Fourier frames for \(\mu _{4}\) were constructed in [6] using a duality type construction.

A frame for a Hilbert space H is a sequence \(\{ x_{n} \}_{n \in I} \subset H\) for which there exist constants \(A, B > 0\) such that for all \(v \in H\),

$$\begin{aligned} A \Vert v \Vert ^2 \le \sum _{n \in I} | \langle v , x_{n} \rangle |^2 \le B \Vert v \Vert ^2. \end{aligned}$$

The largest A and smallest B which satisfy these inequalities are called the frame bounds. The frame is called a Parseval frame if both frame bounds are 1. The sequence \(\{x_{n} \}_{n \in I}\) is a Bessel sequence if there exists a constant B which satisfies the second inequality, whether or not the first inequality holds; B is called the Bessel bound. A Fourier frame for \(L^2(\mu _{4})\) is a sequence of frequencies \(\{ \lambda _{n} \}_{n \in I} \subset \mathbb {R}\) together with a sequence of “weights” \(\{ d_{n} \}_{n \in I} \subset \mathbb {C}\) such that \(x_{n} = d_{n} e^{2 \pi i \lambda _{n} x}\) is a frame. Fourier frames (unweighted) for Lebesgue measure were introduced by Duffin and Schaeffer [3], see also Ortega-Cerdà and Seip [13].
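In finite dimensions these notions can be sketched concretely: the optimal frame bounds are the extreme eigenvalues of the frame operator \(S = \sum _{n} x_{n} x_{n}^{*}\). The snippet below (our illustration; the Mercedes-Benz frame is a standard example, not taken from the text) verifies that three equiangular unit vectors in \(\mathbb {R}^2\), suitably scaled, form a Parseval frame.

```python
import numpy as np

# Mercedes-Benz frame: three unit vectors in R^2 at 120-degree angles,
# scaled by sqrt(2/3). The frame operator sum_n x_n x_n^T is then the
# identity, i.e. the frame is Parseval (A = B = 1).
angles = 2 * np.pi * np.arange(3) / 3
X = np.sqrt(2 / 3) * np.column_stack([np.cos(angles), np.sin(angles)])  # rows are x_n
S = X.T @ X                             # frame operator
A, B = np.linalg.eigvalsh(S)[[0, -1]]   # optimal frame bounds (min and max eigenvalue)
print(A, B)  # both 1 up to rounding
```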

It was proven in [8] that a frame for a Hilbert space can be dilated to a Riesz basis for a bigger space, that is to say, any frame is the image under a projection of a Riesz basis. Moreover, a Parseval frame is the image of an orthonormal basis under a projection. This result is now known to be a consequence of the Naimark dilation theory. This will be our recipe for constructing a Fourier frame: constructing a basis in a bigger space and then projecting onto a subspace. We require the following result along these lines [1]:

Lemma 1

Let H be a Hilbert space, let \(V, K \subseteq H\) be closed subspaces, and let \(P_{V}\) be the projection onto V. If \(\{ x_{n} \}_{n \in I}\) is a frame in K with frame bounds A, B, then:

  1. \(\{ P_{V} x_{n} \}_{n \in I}\) is a Bessel sequence in V with Bessel bound no greater than B;

  2. if the projection \(P_{V} : K \rightarrow V\) is onto, then \(\{P_{V} x_{n} \}_{n \in I}\) is a frame in V;

  3. if \(V \subset K\), then \(\{P_{V} x_{n} \}_{n \in I}\) is a frame in V with frame bounds between A and B.

Note that if \(V \subset K\) and \(\{x_{n} \}_{n \in I}\) is a Parseval frame for K, then \(\{ P_V x_{n} \}_{n \in I}\) is a Parseval frame for V. In the second item above, it is possible that the lower frame bound for \(\{ P_{V} x_{n} \}\) is smaller than A, but the upper frame bound is still no greater than B.
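Lemma 1 and this remark can be illustrated in miniature (an example of ours, in the setting of item 3): projecting an orthonormal basis of \(\mathbb {R}^3\) onto a two-dimensional subspace produces a Parseval frame for that subspace.

```python
import numpy as np

# Projecting the standard ONB of R^3 onto a 2-dimensional subspace V
# yields a Parseval frame for V.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 2)))  # orthonormal basis of a random V
P = Q @ Q.T                                       # orthogonal projection onto V
projected = [P[:, n] for n in range(3)]           # the vectors P e_n
v = Q @ rng.standard_normal(2)                    # an arbitrary vector of V
total = sum(float(np.dot(v, x))**2 for x in projected)
print(abs(total - float(v @ v)) < 1e-12)          # Parseval: sum |<v, P e_n>|^2 = ||v||^2
```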

The foundation of our construction is a dilation theory type argument. Our first step, described in Sect. 1, is to consider the fractal-like set \(C_{4} \times [0,1]\), which we will view in terms of an iterated function system. This IFS will give rise to a representation of the Cuntz algebra \(\mathcal {O}_{4}\) on \(L^2(\mu _{4} \times \lambda )\) since \(\mu _{4} \times \lambda \) is the invariant measure under the IFS. Then in Sect. 2, we will generate via the action of \(\mathcal {O}_{4}\) an orthonormal set in \(L^2(\mu _{4} \times \lambda )\) whose vectors have a particular structure. In Sect. 3, we consider a subspace V of \(L^2(\mu _{4} \times \lambda )\) which can be naturally identified with \(L^2(\mu _{4})\), and then project the orthonormal set onto V to, ultimately, obtain a frame. Of paramount importance will be whether the orthonormal set generated by \(\mathcal {O}_{4}\) spans the subspace V so that the projection yields a Parseval frame. Section 4 demonstrates concrete constructions in which this occurs, and identifies all possible Fourier frames that can be constructed using this method.

We note here that there may be Fourier frames for \(L^2(\mu _{4})\) which cannot be constructed in this manner, but we are unaware of such an example.

1 Dilation of the Cantor-4 Set

We wish to construct a Hilbert space H which contains \(L^2(\mu _{4})\) as a subspace in a natural way. We will do this by making the fractal \(C_{4}\) bigger as follows. We begin with an iterated function system on \(\mathbb {R}^2\) given by:

$$\begin{aligned} \Upsilon _{0} (x,y)&= \Big ( \frac{x}{4}, \frac{y}{2} \Big ), \qquad \Upsilon _{1} (x,y) = \Big ( \frac{x+2}{4}, \frac{y}{2} \Big ), \\ \Upsilon _{2} (x,y)&= \Big (\frac{x}{4}, \frac{y+1}{2} \Big ), \qquad \Upsilon _{3} (x,y) = \Big ( \frac{x+2}{4}, \frac{y+1}{2} \Big ). \end{aligned}$$

As these are contractions on \(\mathbb {R}^2\), there exists a compact attractor set, which is readily verified to be \(C_{4} \times [0,1]\). Likewise, by Hutchinson [9], there exists an invariant probability measure supported on \(C_{4} \times [0,1]\); it is readily verified that this invariant measure is \(\mu _{4} \times \lambda \), where \(\lambda \) denotes the Lebesgue measure restricted to [0, 1]. Thus, for every continuous function \(f : \mathbb {R}^2 \rightarrow \mathbb {C}\),

$$\begin{aligned}&\int f(x,y) \ d (\mu _{4} \times \lambda ) = \dfrac{1}{4} \left( \int f\Bigg (\frac{x}{4},\frac{y}{2}\Bigg ) \ d (\mu _{4} \times \lambda ) + \int f\Bigg (\frac{x+2}{4},\frac{y}{2}\Bigg ) \ d (\mu _{4} \times \lambda ) \right. \nonumber \\&\quad \left. + \int f\Bigg (\frac{x}{4},\frac{y+1}{2}\Bigg ) \ d (\mu _{4} \times \lambda ) + \int f\Bigg (\frac{x+2}{4},\frac{y+1}{2}\Bigg ) \ d (\mu _{4} \times \lambda ) \right) . \end{aligned}$$
(1)

Each of the maps \(\Upsilon _{j}\) of the iterated function system has the same left inverse on \(C_{4} \times [0,1]\), given by

$$\begin{aligned} R: C_{4} \times [0,1] \rightarrow C_{4} \times [0,1] : (x,y) \mapsto (4x, 2y) \mod 1, \end{aligned}$$

so that \(R \circ \Upsilon _{j} (x,y) = (x,y)\) for \(j=0,1,2,3\).
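A short sanity check (illustrative only) that R undoes each \(\Upsilon _{j}\); dyadic sample points are used so the floating-point arithmetic below is exact.

```python
import math

# R(x, y) = (4x mod 1, 2y mod 1) composed with each Upsilon_j is the identity.
def upsilon(j, x, y):
    return ((x + 2 * (j % 2)) / 4, (y + j // 2) / 2)

def R(x, y):
    return ((4 * x) % 1, (2 * y) % 1)

x, y = 0.125, 0.75  # dyadic points: every intermediate value is exact
ok = all(all(math.isclose(a, b) for a, b in zip(R(*upsilon(j, x, y)), (x, y)))
         for j in range(4))
print(ok)  # True
```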

We will use the iterated function system to define an action of the Cuntz algebra \(\mathcal {O}_{4}\) on \(L^2(\mu _{4} \times \lambda )\). To do so, we choose filters

$$\begin{aligned} m_{0}(x,y)&= H_{0}(x,y) \\ m_{1}(x,y)&= e^{2 \pi i x} H_{1}(x,y)\\ m_{2}(x,y)&= e^{4 \pi i x} H_{2}(x,y) \\ m_{3}(x,y)&= e^{6 \pi i x} H_{3}(x,y) \end{aligned}$$

where

$$\begin{aligned} H_{j}(x,y) = \sum _{k=0}^{3} a_{jk} \chi _{\Upsilon _{k}(C_{4} \times [0,1])}(x,y) \end{aligned}$$

for some choice of scalar coefficients \(a_{jk}\). In order to obtain a representation of \(\mathcal {O}_{4}\) on \(L^2(\mu _{4} \times \lambda )\), we require that the above filters satisfy the matrix equation \(\mathcal {M}^{*}(x,y) \mathcal {M}(x,y) = I\) for \(\mu _{4} \times \lambda \) almost every (xy), where

$$\begin{aligned} \mathcal {M}(x,y) = \begin{pmatrix} m_{0}(\Upsilon _{0}(x,y)) &{} m_{0}(\Upsilon _{1}(x,y)) &{} m_{0}(\Upsilon _{2}(x,y)) &{} m_{0}(\Upsilon _{3}(x,y)) \\ m_{1}(\Upsilon _{0}(x,y)) &{} m_{1}(\Upsilon _{1}(x,y)) &{} m_{1}(\Upsilon _{2}(x,y)) &{} m_{1}(\Upsilon _{3}(x,y)) \\ m_{2}(\Upsilon _{0}(x,y)) &{} m_{2}(\Upsilon _{1}(x,y)) &{} m_{2}(\Upsilon _{2}(x,y)) &{} m_{2}(\Upsilon _{3}(x,y)) \\ m_{3}(\Upsilon _{0}(x,y)) &{} m_{3}(\Upsilon _{1}(x,y)) &{} m_{3}(\Upsilon _{2}(x,y)) &{} m_{3}(\Upsilon _{3}(x,y)) \end{pmatrix} \end{aligned}$$

For our choice of filters, the matrix \(\mathcal {M}\) becomes

$$\begin{aligned} \mathcal {M}(x,y) = \left( \begin{array} {llll} a_{00} &{} a_{01} &{} a_{02} &{} a_{03} \\ e^{\pi i x/2} a_{10} &{} -e^{\pi i x/2} a_{11} &{} e^{\pi i x/2} a_{12} &{} -e^{\pi i x/2} a_{13} \\ e^{\pi i x} a_{20} &{} e^{\pi i x} a_{21} &{} e^{\pi i x} a_{22} &{} e^{\pi i x} a_{23} \\ e^{3 \pi i x/2} a_{30} &{} -e^{3 \pi i x/2}a_{31} &{} e^{3 \pi i x/2}a_{32} &{} -e^{3 \pi i x/2}a_{33} \end{array} \right) , \end{aligned}$$

which is unitary if and only if the matrix

$$\begin{aligned} H = \left( \begin{array} {llll} a_{00} &{} a_{01} &{} a_{02} &{} a_{03} \\ a_{10} &{} -a_{11} &{} a_{12} &{} -a_{13} \\ a_{20} &{} a_{21} &{} a_{22} &{} a_{23} \\ a_{30} &{} -a_{31} &{} a_{32} &{} -a_{33} \end{array} \right) \end{aligned}$$

is unitary. For the remainder of this section, we assume that H is unitary.
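The equivalence above can be sanity-checked numerically: \(\mathcal {M}(x,y) = \mathrm {diag}(1, e^{\pi i x/2}, e^{\pi i x}, e^{3 \pi i x/2}) H\), so \(\mathcal {M}\) is unitary precisely when H is. In the sketch below, H is the real \(4 \times 4\) Hadamard matrix scaled by \(\frac{1}{2}\), one admissible choice used only for this check.

```python
import numpy as np

# M(x, y) = diag(e^{pi i j x / 2}, j = 0..3) @ H, so M is unitary for
# every x exactly when H is. Here H is the real Hadamard matrix over 2.
H = 0.5 * np.array([[1, 1, 1, 1],
                    [1, -1, 1, -1],
                    [1, 1, -1, -1],
                    [1, -1, -1, 1]], dtype=complex)
x = 0.37  # an arbitrary sample point
D = np.diag(np.exp(np.pi * 1j * np.arange(4) * x / 2))
M = D @ H
print(np.allclose(M.conj().T @ M, np.eye(4)))  # True
```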

Lemma 2

The operator \(S_{j} : L^2(\mu _{4} \times \lambda ) \rightarrow L^2(\mu _{4} \times \lambda )\) given by

$$\begin{aligned}{}[S_{j}f] (x,y) = \sqrt{4} m_{j}(x,y) f(R(x,y)) \end{aligned}$$

is an isometry.

Proof

We calculate:

$$\begin{aligned} \Vert S_{j}f \Vert ^2&= \int |\sqrt{4} m_{j}(x,y) f(R(x,y))|^2 \ d(\mu _{4} \times \lambda ) \\&= \dfrac{1}{4} \sum _{k=0}^{3} \int 4 | m_{j}(\Upsilon _{k}(x,y)) f(R(\Upsilon _{k}(x,y))) |^2 \ d(\mu _{4} \times \lambda ) \\&= \int \left( \sum _{k=0}^{3} | m_{j}(\Upsilon _{k}(x,y))|^2 \right) |f(x,y)|^2 \ d(\mu _{4} \times \lambda ). \end{aligned}$$

We used Eq. (1) in the second line. The sum in the integral is the square of the Euclidean norm of the jth row of the matrix \(\mathcal {M}\), which is unitary. Hence, the sum is 1, so the integral is \(\Vert f \Vert ^2\), as required. \(\square \)

Lemma 3

The adjoint is given by

$$\begin{aligned}{}[S_{j}^{*} f] (x,y) = \dfrac{1}{2} \sum _{k=0}^{3} \overline{m_{j} (\Upsilon _{k}(x,y)) } f(\Upsilon _{k}(x,y)). \end{aligned}$$

Proof

Let \(f,g \in L^2(\mu _{4} \times \lambda )\). We calculate

$$\begin{aligned} \langle S_{j}f, g \rangle&= \int \sqrt{4} m_{j}(x,y) f(R(x,y)) \overline{ g(x,y) } \ d(\mu _{4} \times \lambda ) \\&= \dfrac{1}{4} \sum _{k=0}^{3} \int \sqrt{4} m_{j}(\Upsilon _{k}(x,y)) f(R(\Upsilon _{k}(x,y))) \overline{ g(\Upsilon _{k} (x,y)) } \ d(\mu _{4} \times \lambda ) \\&= \int f(x,y) \overline{ \left( \dfrac{1}{2} \sum _{k=0}^{3} \overline{m_{j}(\Upsilon _{k}(x,y))} g(\Upsilon _{k} (x,y)) \right) } \ d(\mu _{4} \times \lambda ) \end{aligned}$$

where we use Eq. (1) and the fact that R is a left inverse of \(\Upsilon _{k}\). \(\square \)

Lemma 4

The isometries \(S_{j}\) satisfy the Cuntz relations:

$$\begin{aligned} S_{j}^{*} S_{k} = \delta _{jk} I, \qquad \sum _{k=0}^{3} S_{k} S_{k}^{*} = I. \end{aligned}$$

Proof

We consider the orthogonality relation first. Let \(f \in L^2(\mu _{4} \times \lambda )\). We calculate:

$$\begin{aligned}{}[S_{j}^{*} S_{k} f](x,y)&= \dfrac{1}{2} \sum _{\ell = 0}^{3} \overline{ m_{j}(\Upsilon _{\ell }(x,y))} [S_{k} f]( \Upsilon _{\ell }(x,y) ) \\&= \dfrac{1}{2} \sum _{\ell = 0}^{3} \overline{ m_{j}(\Upsilon _{\ell }(x,y))} \sqrt{4} m_{k}(\Upsilon _{\ell }(x,y)) f( R(\Upsilon _{\ell }(x,y) ) ) \\&= \left( \sum _{\ell = 0}^{3} \overline{ m_{j}(\Upsilon _{\ell }(x,y))} m_{k}(\Upsilon _{\ell }(x,y)) \right) f(x,y). \end{aligned}$$

Note that the sum is the scalar product of the kth row with the jth row of the matrix \(\mathcal {M}\), which is unitary. Hence, the sum is \(\delta _{jk}\) as required.

Now for the identity relation, let \(f,g \in L^2(\mu _{4} \times \lambda )\). We calculate:

$$\begin{aligned}&\Big \langle \sum _{k=0}^{3} S_{k} S_{k}^{*} f, g \Big \rangle = \sum _{k=0}^{3} \Big \langle S_{k}^{*} f, S_{k}^{*} g \Big \rangle \nonumber \\&\quad = \sum _{k=0}^{3} \int \left( \dfrac{1}{2} \sum _{\ell = 0}^{3} \overline{ m_{k}(\Upsilon _{\ell } (x,y)) } f(\Upsilon _{\ell }(x,y)) \right) \\&\qquad \times \left( \overline{ \dfrac{1}{2} \sum _{n = 0}^{3} \overline{ m_{k}(\Upsilon _{n} (x,y)) } g(\Upsilon _{n}(x,y))} \right) \ d(\mu _{4} \times \lambda ) \\&\quad \!= \!\sum _{\ell =0}^{3} \sum _{n=0}^{3} \dfrac{1}{4} \int \! \left( \sum _{k = 0}^{3} \overline{ m_{k}(\Upsilon _{\ell } (x,y)) }m_{k}(\Upsilon _{n} (x,y)) \right) f(\Upsilon _{\ell }(x,y)) \overline{ g(\Upsilon _{n}(x,y))} \ d(\mu _{4} \!\times \! \lambda ) \\&\quad = \dfrac{1}{4} \sum _{n=0}^{3} \int f(\Upsilon _{n}(x,y)) \overline{ g(\Upsilon _{n}(x,y))} \ d(\mu _{4} \times \lambda ) \\&\quad = \int f(x,y) \overline{ g(x,y)} \ d(\mu _{4} \times \lambda ) = \langle f, g \rangle . \end{aligned}$$

Note that the sum over k in the third line is the scalar product of the \(\ell \)th column with the nth column of \(\mathcal {M}\), so the sum collapses to \(\delta _{\ell n}\). The sum on n in the fourth line collapses by Eq. (1). \(\square \)

2 Orthonormal Sets in \(L^2(\mu _{4} \times \lambda )\)

Since the isometries \(S_{j}\) satisfy the Cuntz relations, we can use them to generate orthonormal sets in the space \(L^2(\mu _{4} \times \lambda )\). We do so by having the isometries act on a generating vector. We consider words in the alphabet \(\{0,1,2,3\}\); let \(W_{4}\) denote the set of all such words. For a word \(\omega = j_{K} j_{K-1} \dots j_{1}\), we denote by \(| \omega | = K\) the length of the word, and define

$$\begin{aligned} S_{\omega } f = S_{j_{K}} S_{j_{K-1}} \dots S_{j_{1}} f. \end{aligned}$$

Definition 1

Let

$$\begin{aligned} X_{4} = \{ \omega \in W_{4}: | \omega | = 1 \} \cup \{ \omega \in W_{4} : | \omega | \ge 2, \ j_{1} \ne 0 \}. \end{aligned}$$

For convenience, we allow the empty word \(\omega _{\emptyset }\) with length 0, and define \(S_{\omega _{\emptyset }} = I\), the identity.

Lemma 5

Suppose \(f \in L^2(\mu _{4} \times \lambda )\) with \(\Vert f\Vert = 1\), and that \(S_{0} f = f\). Then,

$$\begin{aligned} \{ S_{\omega } f : \omega \in X_{4} \} \end{aligned}$$

is an orthonormal set.

Proof

Suppose \(\omega , \omega ' \in X_{4}\) with \(\omega \ne \omega '\). First consider \(| \omega | = | \omega ' |\), with \(\omega = j_{K} \dots j_{1}\) and \(\omega ' = i_{K} \dots i_{1}\). Suppose that \(\ell \) is the largest index such that \(j_{\ell } \ne i_{\ell }\). Then we have

$$\begin{aligned} \langle S_{\omega } f , S_{\omega '} f \rangle = \langle S_{j_{\ell }} \dots S_{j_{1}} f , S_{i_{\ell }} \dots S_{i_{1}} f \rangle = \langle S_{i_{\ell }}^{*} S_{j_{\ell }} \dots S_{j_{1}} f , S_{i_{\ell - 1}} \dots S_{i_{1}} f \rangle = 0 \end{aligned}$$

by the orthogonality condition of the Cuntz relations.

Now, if \(K = | \omega | > | \omega ' | = M\), with \(\omega ' = i_{M} \dots i_{1}\), we define the word \(\rho = i_{M} \dots i_{1} 0 \dots 0\) so that \(| \rho | = K\). Note that \(\rho \notin X_{4}\) so \(\omega \ne \rho \). Note further that \(S_{\omega '} f = S_{\rho } f\). Thus, by a similar argument to that above, we have

$$\begin{aligned} \langle S_{\omega } f , S_{\omega '} f \rangle = 0. \end{aligned}$$

\(\square \)

Remark 1

The set \(\{ S_{\omega } f : \omega \in X_{4} \}\) need not be complete. We will provide an example of this in Example 1 in Sect. 4.

Our goal is to project the set \(\{ S_{\omega } f : \omega \in X_{4} \}\) onto some subspace V of \(L^2(\mu _{4} \times \lambda )\) to obtain a frame. To that end, we need to know when the projection \(\{ P_{V} S_{\omega } f : \omega \in X_{4} \}\) is a frame, which by Lemma 1 requires the projection \(P_{V} : K \rightarrow V\) to be onto, where K is the subspace spanned by \(\{ S_{\omega } f : \omega \in X_{4} \}\). The tool we will use is the following result, which is a minor adaptation of Theorem 3.1 from [7].

Theorem 1

Let \(\mathcal {H}\) be a Hilbert space, \(\mathcal {K} \subset \mathcal {H}\) a closed subspace, and \((S_i)_{i=0}^{N-1}\) be a representation of the Cuntz algebra \(\mathcal {O}_N\). Let \(\mathcal {E}\) be an orthonormal set in \(\mathcal {H}\) and \(f:X\rightarrow \mathcal {K}\) a norm continuous function on a topological space X with the following properties:

  (i) \(\mathcal {E}=\cup _{i=0}^{N-1} S_i\mathcal {E}\), where the union is disjoint.

  (ii) \(\overline{span}\{ f(t): t\in X\}=\mathcal {K}\) and \(\Vert f(t)\Vert =1\) for all \(t\in X\).

  (iii) There exist functions \(\mathfrak {m}_i: X\rightarrow \mathbb {C}\), \(g_i:X\rightarrow X\), \(i=0,\dots , N-1\), such that

    $$\begin{aligned} S_i^*f(t)=\mathfrak {m}_i(t)f(g_i(t)), \quad t\in X. \end{aligned}$$
    (2)

  (iv) There exists \(c_0\in X\) such that \(f(c_0)\in \overline{span} \mathcal {E}\).

  (v) The only functions \(h\in \mathcal {C}(X)\) with \(h\ge 0\), \(h(c)=1\) for all \(c\in \{ x\in X:f(x)\in \overline{span}\mathcal {E}\}\), and

    $$\begin{aligned} h(t)=\sum _{i=0}^{N-1}\left| \mathfrak {m}_i(t) \right| ^2 h(g_i(t)),\quad t\in X \end{aligned}$$
    (3)

    are the constant functions.

Then \(\mathcal {K} \subset \overline{span}\mathcal {E}\).

3 The Projection

Recall the definition of the filters \(m_{j}(x,y) = e^{2 \pi i j x} H_{j} (x,y)\) from Sect. 1. We choose the filter coefficients \(a_{j k}\) so that the matrix H is unitary. We place the additional constraint that

$$\begin{aligned} a_{00} = a_{01} = a_{02} = a_{03} = \dfrac{1}{2}, \end{aligned}$$

so that \(S_{0} \mathbbm {1} = \mathbbm {1}\), where \(\mathbbm {1}\) is the function in \(L^2(\mu _{4} \times \lambda )\) which is identically 1. As \(S_{0} \mathbbm {1} = \mathbbm {1}\), by Lemma 5, the set \(\{ S_{\omega } \mathbbm {1} : \omega \in X_{4} \}\) is orthonormal. We also place the additional constraint that for every j, \(a_{j0} + a_{j2} = a_{j1} + a_{j3}\), which will be required for our calculation of the projection.

Definition 2

We define the subspace \(V = \{ f \in L^2(\mu _{4} \times \lambda ) : f(x,y) = g(x) \chi _{[0,1]}(y), \ g \in L^2(\mu _{4}) \}\). Note that the subspace V can be identified with \(L^2(\mu _{4})\) via the isometric isomorphism \(g \mapsto g(x) \chi _{[0,1]}(y)\). We will suppress the y variable in the future.

Definition 3

We define a function \(c : X_{4} \rightarrow \mathbb {N}_{0}\) as follows: for a word \(\omega = j_{K} j_{K-1} \dots j_{1}\),

$$\begin{aligned} c(\omega ) = \sum _{k=1}^{K} j_{k} 4^{K-k}. \end{aligned}$$

Here \(\mathbb {N}_{0} = \mathbb {N} \cup \{0\}\). It is readily verified that c is a bijection.
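In code (an illustrative sketch, with a word stored as the tuple \((j_{K}, \dots , j_{1})\)), c and its inverse are base-4 encoding and decoding:

```python
def c(word):
    """c for a word (j_K, ..., j_1): the letter j_k carries the weight 4^(K-k)."""
    return sum(j * 4**i for i, j in enumerate(word))

def c_inverse(n):
    """Base-4 digits of n, recovered least significant first, as (j_K, ..., j_1)."""
    if n == 0:
        return (0,)
    digits = []
    while n:
        digits.append(n % 4)
        n //= 4
    return tuple(digits)

print(c((2, 0, 3)), c_inverse(50))  # 50 (2, 0, 3)
```

Note that \(c_{}^{-1}(n)\) ends in a nonzero letter \(j_{1}\) whenever \(n \ne 0\) (it is the leading base-4 digit of n), so the decoded word indeed lies in \(X_{4}\).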

Lemma 6

For a word \(\omega = j_{K} j_{K-1} \dots j_{1}\),

$$\begin{aligned} S_{\omega } \mathbbm {1} = e^{2 \pi i c(\omega ) x} \left( \prod _{k=1}^{K} 2 H_{j_{k}}(R^{K-k}(x,y)) \right) . \end{aligned}$$

Proof

We proceed by induction on the length of the word \(\omega \). The equality is readily verified for \(| \omega | =1\). Let \(\omega _{0} = j_{K-1} j_{K-2} \dots j_{1}\). We have

$$\begin{aligned} S_{\omega } \mathbbm {1}&= S_{j_{K}} S_{\omega _{0}} \mathbbm {1} \\&= S_{j_{K}} \left[ e^{2 \pi i c(\omega _{0}) x } \left( \prod _{k=1}^{K-1} 2 H_{j_{k}}(R^{K-1-k}(x,y)) \right) \right] \\&= 2 e^{2 \pi i j_{K} x} H_{j_{K}}(x,y) e^{2 \pi i c(\omega _{0}) \cdot 4x } \left( \prod _{k=1}^{K-1} 2 H_{j_{k}}(R^{K-k}(x,y)) \right) \\&= e^{2 \pi i (j_{K} + 4c(\omega _{0}) ) x} \, 2 H_{j_{K}}(R^{K-K}(x,y)) \left( \prod _{k=1}^{K-1} 2 H_{j_{k}}(R^{K-k}(x,y))\right) \\&= e^{2 \pi i c(\omega )x}\left( \prod _{k=1}^{K} 2 H_{j_{k}}(R^{K-k}(x,y))\right) . \end{aligned}$$

The last line above is justified by the following calculation:

$$\begin{aligned} j_{K} + 4 c(\omega _{0})&= j_{K} + 4 \left( \sum _{k=1}^{K-1} j_{k} 4^{K-1-k} \right) \\&= j_{K} 4^{K-K} + \sum _{k=1}^{K-1} j_{k} 4^{K-k} \\&= \sum _{k=1}^{K} j_{k} 4^{K-k} \\&= c(\omega ). \end{aligned}$$

\(\square \)

We wish to project the vectors \(S_{\omega } \mathbbm {1}\) onto the subspace V. The following lemma calculates that projection, where \(P_{V}\) denotes the projection onto the subspace V.

Lemma 7

If \(f(x,y) = g(x)h(x,y)\) with \(g \in L^{2}(\mu _{4})\) and \(h \in L^{\infty }(\mu _{4} \times \lambda )\), then

$$\begin{aligned}{}[P_{V} f](x,y) = g(x) G(x) \end{aligned}$$

where \(G(x) = \int _{[0,1]} h(x,y) d \lambda (y)\).

Proof

We verify that for every \(F(x) \in L^2(\mu _{4})\), \(f(x,y) - g(x) G(x)\) is orthogonal to F(x). We calculate utilizing Fubini’s theorem:

$$\begin{aligned} \langle f - gG , F \rangle \!&= \!\int \int g(x) h(x,y) \overline{F(x)} \ d (\mu _{4} \times \lambda ) \!- \!\int \int g(x)G(x)\overline{F(x)} \ d (\mu _{4} \times \lambda ) \\&= \int _{C_{4}} g(x)\overline{F(x)} \left( \int _{[0,1]} h(x,y) - G(x) \ d \lambda (y) \right) \ d \mu _{4}(x) \\&= \int _{C_{4}} g(x)\overline{F(x)} \left( G(x) - G(x) \right) \ d \mu _{4}(x) \\&= 0. \end{aligned}$$

\(\square \)

For the purposes of the following lemma, \(\alpha x\) and \(\beta y\) are understood to be modulo 1.

Lemma 8

For any word \(\omega = j_{K} j_{K-1} \dots j_{1}\),

$$\begin{aligned} \int \prod _{k=1}^{K} 2 H_{j_{k}}(R^{k-1}(x,y)) \ d \lambda (y) = \prod _{k=1}^{K} 2 \int H_{j_{k}}(4^{k-1}x,y) \ d \lambda (y). \end{aligned}$$

Proof

Let \(F_{m}(x,y) = \prod _{k=m}^{K} 2 H_{j_{k}}(4^{k-1}x, 2^{k-m}y)\). Note that

$$\begin{aligned} F_{m}\Big (x, \frac{y}{2}\Big )= & {} 2 H_{j_{m}}\Big (4^{m-1}x, \dfrac{y}{2}\Big ) \left( \prod _{k=m+1}^{K} 2 H_{j_{k}}(4^{k-1}x, 2^{k-(m+1)}y) \right) \\= & {} 2H_{j_{m}}\left( 4^{m-1}x, \dfrac{y}{2}\right) F_{m+1}(x, y). \end{aligned}$$

Likewise for \(F_{m}\big (x, \frac{y+1}{2}\big )\).

Since \(\lambda \) is the invariant measure for the iterated function system \(y \mapsto \frac{y}{2}\), \(y \mapsto \frac{y+1}{2}\), we calculate:

$$\begin{aligned} \int _{0}^{1} F_{m}(x,y) \ d \lambda (y)&= \dfrac{1}{2} \left[ \int _{0}^{1} F_{m}\Big (x, \frac{y}{2}\Big ) \ d \lambda (y) + \int _{0}^{1} F_{m}\Big (x, \frac{y+1}{2}\Big ) \ d \lambda (y) \right] \\&= \dfrac{1}{2} \left[ \int _{0}^{1} 2 H_{j_{m}}\Big (4^{m-1}x, \dfrac{y}{2}\Big ) F_{m+1}(x, y)\right. \\&\left. + 2 H_{j_{m}}\Big ( 4^{m-1}x, \dfrac{y+1}{2}\Big ) F_{m+1}(x, y) \ d \lambda (y) \right] \\&= \dfrac{1}{2} \left[ \int _{0}^{1} 2a_{j_{m},q} F_{m+1}(x, y) + 2a_{j_{m},q+2} F_{m+1}(x, y) \ d \lambda (y) \right] \\&= \dfrac{1}{2} \left[ 2a_{j_{m},q} + 2a_{j_{m},q+2} \right] \cdot \left[ \int _{0}^{1} F_{m+1}(x, y) \ d \lambda (y) \right] \\&= \left[ \int _{0}^{1} 2 H_{j_{m}}(4^{m-1}x, y) \ d \lambda (y) \right] \cdot \left[ \int _{0}^{1} F_{m+1}(x, y) \ d \lambda (y) \right] \end{aligned}$$

where \(q = 0\) if \(0 \le 4^{m-1} x < \frac{1}{2}\), and \(q=1\) if \(\frac{1}{2} \le 4^{m-1} x < 1\).

The result now follows by a standard induction argument. \(\square \)
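Lemma 8 can be checked numerically for a concrete choice of coefficients satisfying the standing assumptions (our illustration; the word and the point x below are arbitrary, and the coefficient matrix is the one with first row \(\frac{1}{2}\) and \(\rho = i\) used later in Sect. 4):

```python
# Numerical check of the factorization in Lemma 8.
rho = 1j
A = [[0.5, 0.5, 0.5, 0.5],
     [0.5, 0.5, rho / 2, rho / 2],
     [0.5, 0.5, -0.5, -0.5],
     [0.5, 0.5, -rho / 2, -rho / 2]]

def H(j, x, y):
    """H_j(x, y) as a step function: the quadrant of (x mod 1, y mod 1) picks a_jk."""
    q = 0 if (x % 1) < 0.5 else 1
    return A[j][q] if (y % 1) < 0.5 else A[j][q + 2]

word = (1, 3, 0)        # the letters j_1, j_2, j_3 (an arbitrary example)
x = 2 / 4 + 2 / 16      # a point of C_4: base-4 digits 2, 2, 0, 0, ...
K = len(word)

# Left side: the integrand is constant on dyadic intervals of length 2^-K,
# so sampling midpoints integrates it exactly.
lhs = 0.0
for i in range(2**K):
    y = (i + 0.5) / 2**K
    p = 1.0
    for k in range(1, K + 1):
        p *= 2 * H(word[k - 1], 4**(k - 1) * x, 2**(k - 1) * y)
    lhs += p / 2**K

# Right side: each factor 2 * integral of H_{j_k}(4^{k-1} x, y) dy
# equals a_{j, q} + a_{j, q+2}.
rhs = 1.0
for k in range(1, K + 1):
    rhs *= H(word[k - 1], 4**(k - 1) * x, 0.25) + H(word[k - 1], 4**(k - 1) * x, 0.75)

print(abs(lhs - rhs) < 1e-12)  # True
```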

Proposition 1

Suppose the filters \(m_{j}(x,y)\) are chosen so that

  (i) the matrix H is unitary,

  (ii) \(a_{00} = a_{01} = a_{02} = a_{03} = \frac{1}{2}\), and

  (iii) for \(j=0,1,2,3\), \(a_{j0} + a_{j2} = a_{j1} + a_{j3}\).

Then for any word \(\omega = j_{K} \dots j_{1}\),

$$\begin{aligned} P_{V} S_{\omega } \mathbbm {1} = d_{\omega } e^{2 \pi i c(\omega ) x}, \end{aligned}$$

where

$$\begin{aligned} d_{\omega } = \prod _{k=1}^{K} \left( a_{j_{k}0} + a_{j_{k}2} \right) . \end{aligned}$$
(4)

Proof

We apply the previous three Lemmas to obtain

$$\begin{aligned}{}[P_{V} S_{\omega } \mathbbm {1}](x,y)&= e^{2 \pi i c(\omega ) x} \int \prod _{k=1}^{K} 2 H_{j_{k}}(R^{k-1}(x,y)) d \lambda (y) \\&= e^{2 \pi i c(\omega ) x} \prod _{k=1}^{K} 2 \int H_{j_{k}}(4^{k-1}x,y) d \lambda (y) \end{aligned}$$

By assumption (iii), the integral \(\int H_{j_{k}}(4^{k-1} x, y) d \lambda (y)\) is independent of x, and the value of the integral is \(\frac{1}{2} \left( a_{j_{k}0} + a_{j_{k}2} \right) \). Eq. (4) now follows. \(\square \)

4 Concrete Constructions

We now turn to concrete constructions of Fourier frames for \(\mu _{4}\). The hypotheses of Lemma 5 and Proposition 1 require H to be unitary and require the matrix

$$\begin{aligned} A = \left( \begin{array} {llll} a_{00} &{} a_{01} &{} a_{02} &{} a_{03} \\ a_{10} &{} a_{11} &{} a_{12} &{} a_{13} \\ a_{20} &{} a_{21} &{} a_{22} &{} a_{23} \\ a_{30} &{} a_{31} &{} a_{32} &{} a_{33} \end{array} \right) \end{aligned}$$

to have every entry of its first row equal to \(\frac{1}{2}\) and to have the vector \(\begin{pmatrix} 1&-1&1&-1 \end{pmatrix}^{T}\) in its kernel.

We can use Hadamard matrices to construct examples of such a matrix A. Every \(4 \times 4\) Hadamard matrix is a permutation of the following matrix:

$$\begin{aligned} U_{\rho } = \dfrac{1}{2} \left( \begin{array} {llll} 1 &{} 1 &{} 1 &{} 1 \\ 1 &{} -1 &{} \rho &{} -\rho \\ 1 &{} 1 &{} -1 &{} -1 \\ 1 &{} -1 &{} -\rho &{} \rho \end{array} \right) \end{aligned}$$

where \(\rho \) is any complex number of modulus 1.

If we set \(H = U_{\rho }\), we obtain

$$\begin{aligned} A = \dfrac{1}{2} \left( \begin{array} {llll} 1 &{} 1 &{} 1 &{} 1 \\ 1 &{} 1 &{} \rho &{} \rho \\ 1 &{} 1 &{} -1 &{} -1 \\ 1 &{} 1 &{} -\rho &{} -\rho \end{array} \right) \end{aligned}$$
(5)

which has the requisite properties to apply Lemma 5 and Proposition 1.

We define, for \(k = 1,2,3\), functions \(l_{k} : \mathbb {N}_{0} \rightarrow \mathbb {N}_{0}\) by letting \(l_{k}(n)\) be the number of digits equal to k in the base 4 expansion of n. Note that \(l_{k}(0) = 0\), and we follow the convention that \(0^{0} = 1\).
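These counting functions, and the weights they produce in Theorem 2 below, are easy to compute (an illustrative sketch; Python's convention that \(0^{0} = 1\) matches the one just stated):

```python
def l(k, n):
    """Number of base-4 digits of n equal to k (so l(k, 0) == 0)."""
    count = 0
    while n:
        count += (n % 4 == k)
        n //= 4
    return count

def weight(n, rho):
    """The weight attached to e^{2 pi i n x} in the sequence (6) of Theorem 2."""
    return ((1 + rho) / 2) ** l(1, n) * 0 ** l(2, n) * ((1 - rho) / 2) ** l(3, n)

print(weight(9, 1j))  # 9 = (21) in base 4 has a digit 2, so the weight is 0
```

For instance, \(n = 5 = (11)_{4}\) has two digits equal to 1, so its weight for \(\rho = i\) is \(\big (\frac{1+i}{2}\big )^2 = \frac{i}{2}\).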

Theorem 2

For the choice A as in Eq. (5) with \(\rho \ne -1\), the sequence

$$\begin{aligned} \left\{ \left( \dfrac{1 + \rho }{2}\right) ^{l_{1}(n)} 0^{l_{2}(n)} \left( \dfrac{1 - \rho }{2}\right) ^{l_{3}(n)} e^{2 \pi i n x} : n \in \mathbb {N}_{0} \right\} \end{aligned}$$
(6)

is a Parseval frame in \(L^2(\mu _{4})\).

Proof

By Lemma 5, we have that \( \{ S_{\omega } \mathbbm {1} : \omega \in X_{4} \}\) is an orthonormal set. For a word \(\omega = j_{K} j_{K-1} \dots j_{1}\), Proposition 1 yields that

$$\begin{aligned} P_{V} S_{\omega } \mathbbm {1} = e^{2 \pi i c(\omega ) x} \prod _{k=1}^{K} \left( a_{j_{k}0} + a_{j_{k}2} \right) . \end{aligned}$$

Then, setting \(n = c(\omega )\), we obtain

$$\begin{aligned} P_{V} S_{\omega } \mathbbm {1} = e^{2 \pi i n x} \left( a_{00} + a_{02} \right) ^{K-l_{1}(n) - l_{2}(n) - l_{3}(n)} \prod _{j=1}^{3} \left( a_{j0} + a_{j2} \right) ^{l_{j}(n)}. \end{aligned}$$

Since

$$\begin{aligned} a_{00} + a_{02} = 1, \quad a_{10} + a_{12} = \dfrac{ 1 + \rho }{2}, \quad a_{20} + a_{22} = 0, \quad a_{30} + a_{32} = \dfrac{ 1 - \rho }{2}, \end{aligned}$$

it follows that

$$\begin{aligned} P_{V} S_{\omega } \mathbbm {1} = \left( \dfrac{1 + \rho }{2}\right) ^{l_{1}(n)} 0^{l_{2}(n)} \left( \dfrac{1 - \rho }{2}\right) ^{l_{3}(n)} e^{2 \pi i n x}. \end{aligned}$$

Since c is a bijection, the set \(\{ P_{V} S_{\omega } \mathbbm {1} : \omega \in X_{4} \}\) coincides with the set in (6).

In order to establish that the set (6) is a Parseval frame, we wish to apply Lemma 1, which requires that the subspace V is contained in the closed span of \(\{ S_{\omega } \mathbbm {1} : \omega \in X_{4} \}\). Denote the closed span by \(\mathcal {K}\). We will proceed in a manner nearly identical to the proof of Theorem 1 and its inspiration [7, Theorem 3.1]. Define the function \(f: \mathbb {R} \rightarrow V\) by \(f(t) = e_{t}\) where \(e_{t}(x,y) = e^{2 \pi i x t}\). Note that \(f(0) = \mathbbm {1} \in \mathcal {K}\). Likewise, define a function \(h_{X} : \mathbb {R} \rightarrow \mathbb {R}\) by

$$\begin{aligned} h_{X}(t) = \sum _{\omega \in X_{4}} | \langle f(t) , S_{\omega } \mathbbm {1} \rangle |^2 = \Vert P_{\mathcal {K}} f(t) \Vert ^2. \end{aligned}$$


Claim 1

We have \(h_{X} \equiv 1\).

Assuming for the moment that the claim holds, we deduce that \(f(t) \in \mathcal {K}\) for every \(t \in \mathbb {R}\). Since \(\{ f(\gamma ) : \gamma \in \Gamma _{4} \}\) is an orthonormal basis for V, it follows that the closed span of \(\{ f(t) : t \in \mathbb {R} \}\) is all of V. We conclude that \(V \subset \mathcal {K}\), and so Lemma 1 implies that \(\{ P_{V} S_{\omega } \mathbbm {1} : \omega \in X_{4} \}\) is a Parseval frame for V, from which the Theorem follows.

Thus, we turn to the proof of Claim 1. First, we show that \(\{ S_{\omega } \mathbbm {1} : \omega \in X_{4} \} = \cup _{j=0}^{3} \{ S_{j} S_{\omega } \mathbbm {1} : \omega \in X_{4} \}\), where the union is disjoint. Clearly, the RHS is a subset of the LHS, and the union is disjoint. Consider an element \(S_{\omega } \mathbbm {1}\) of the LHS. If \(| \omega | \ge 2\), we write \(S_{\omega } \mathbbm {1} = S_{j} S_{\omega _{0}} \mathbbm {1}\) for some j and some \(\omega _{0} \in X_{4}\), whence \(S_{\omega } \mathbbm {1}\) is in the RHS. If \( | \omega | = 1\), then we write \(S_{\omega } \mathbbm {1} = S_{j} \mathbbm {1} = S_{j} S_{0} \mathbbm {1}\), which is again an element of the RHS. Equality now follows.

As a consequence,

$$\begin{aligned} h_{X}(t)&= \sum _{\omega \in X_{4}} | \langle f(t) , S_{\omega } \mathbbm {1} \rangle |^2 \\&= \sum _{j=0}^{3} \sum _{\omega \in X_{4}} | \langle f(t) , S_{j} S_{\omega } \mathbbm {1} \rangle |^2 \\&= \sum _{j=0}^{3} \sum _{\omega \in X_{4}} | \langle S^{*}_{j} f(t) , S_{\omega } \mathbbm {1} \rangle |^2. \end{aligned}$$

We calculate, using Lemma 3 with \(f(t) = e_{t}\):

$$\begin{aligned}{}[S_{j}^{*} f(t)] (x,y)&= \dfrac{1}{2} \sum _{k=0}^{3} \overline{m_{j} (\Upsilon _{k}(x,y)) } e_{t}(\Upsilon _{k}(x,y)) \\&= \left( \dfrac{1}{2} \left( \overline{a_{j0}} + \overline{a_{j2}} \right) + \dfrac{e^{- \pi i j}}{2} \left( \overline{a_{j1}} + \overline{a_{j3}} \right) e^{\pi i t} \right) e^{2 \pi i \frac{t-j}{4} x}. \end{aligned}$$

Thus, we define

$$\begin{aligned} \mathfrak {m}_{j}(t) = \frac{1}{2} \left( \overline{a_{j0}} + \overline{a_{j2}} \right) + \frac{e^{- \pi i j}}{2} \left( \overline{a_{j1}} + \overline{a_{j3}} \right) e^{\pi i t}, \end{aligned}$$

and

$$\begin{aligned} g_{j}(t) = \frac{t-j}{4}. \end{aligned}$$

As a consequence, we obtain

$$\begin{aligned} h_{X}(t)&= \sum _{j=0}^{3} \sum _{\omega \in X_{4}} | \langle S^{*}_{j} f(t) , S_{\omega } \mathbbm {1} \rangle |^2 \nonumber \\&= \sum _{j=0}^{3} \sum _{\omega \in X_{4}} | \langle \mathfrak {m}_{j} (t) f(g_{j}(t)) , S_{\omega } \mathbbm {1} \rangle |^2 \nonumber \\&= \sum _{j=0}^{3} | \mathfrak {m}_{j} (t) |^2 h_{X} (g_{j}(t)). \end{aligned}$$
(7)

Because of our choice of coefficients in the matrix A, which has the vector \(\begin{pmatrix} 1&-1&1&-1 \end{pmatrix}^{T}\) in the kernel, we have for every j: \(a_{j0} + a_{j2} = a_{j1} + a_{j3}\). Thus, if we let \(b_{j} = \overline{a_{j0}} + \overline{a_{j2}}\), the functions \(\mathfrak {m}_{j}\) simplify to

$$\begin{aligned} \mathfrak {m}_{j}(t) = b_{j} e^{\pi i \frac{t}{2} } \cos \Big (\pi \frac{t}{2} \Big ) \end{aligned}$$

for \(j=0,2,\) and

$$\begin{aligned} \mathfrak {m}_{j}(t) = - i b_{j} e^{\pi i \frac{t}{2} } \sin \Big (\pi \frac{t}{2} \Big ) \end{aligned}$$

for \(j = 1,3\). Substituting these into Eq. (7),

$$\begin{aligned} h_{X}(t)= & {} \cos ^2\left( \frac{\pi t}{2}\right) h_{X} \left( \frac{t}{4}\right) + \sin ^2\left( \frac{\pi t}{2}\right) \frac{|1+ \overline{\rho }|^2}{4} h_{X} \left( \frac{t-1}{4}\right) \\&+ \sin ^2\left( \frac{\pi t}{2}\right) \frac{|1- \overline{\rho }|^2}{4} h_{X} \left( \frac{t-3}{4}\right) .\nonumber \end{aligned}$$
(8)
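Note that the constant functions do satisfy Eq. (8): since \(| \rho | = 1\), we have \(\frac{|1+ \overline{\rho }|^2}{4} + \frac{|1- \overline{\rho }|^2}{4} = 1\), so with \(h_{X} \equiv 1\) the right-hand side is \(\cos ^2(\pi t/2) + \sin ^2(\pi t/2) = 1\). A numerical spot check (illustrative):

```python
import cmath
import math
import random

# For |rho| = 1, |1 + conj(rho)|^2/4 + |1 - conj(rho)|^2/4 = 1, so the
# right side of Eq. (8) with h = 1 reduces to cos^2 + sin^2 = 1.
rho = cmath.exp(2.1j)       # an arbitrary unimodular rho
t = random.uniform(-5, 5)   # an arbitrary real t
c2 = math.cos(math.pi * t / 2) ** 2
s2 = math.sin(math.pi * t / 2) ** 2
rhs = (c2
       + s2 * abs(1 + rho.conjugate()) ** 2 / 4
       + s2 * abs(1 - rho.conjugate()) ** 2 / 4)
print(abs(rhs - 1) < 1e-12)  # True
```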

Claim 2

The function \(h_{X}\) can be extended to an entire function.

Assuming for the moment that Claim 2 holds, we finish the proof of Claim 1. If \(h_{X}(t) = 1\) for \(t \in [-1,0]\), then \(h_{X}(z) = 1\) for all \(z \in \mathbb {C}\) by the identity theorem, and Claim 1 holds.

Now, assume to the contrary that \(h_{X}(t)\) is not identically 1 on \([-1,0]\). Since \(0 \le h_{X}(t) \le 1\) for real t, \(\beta = \min \{h_{X}(t) : t \in [-1,0] \} < 1\). Because constant functions satisfy Eq. (8), \(h_1 := h_{X} - \beta \) also satisfies Eq. (8). There exists \(t_0 \in [-1,0]\) such that \(h_1(t_0) = 0\), and \(t_{0} \ne 0\) since \(h_{X}(0) = 1\). Since \(h_1 \ge 0\), each of the terms on the right-hand side of (8), evaluated at \(t_0\), must vanish:

$$\begin{aligned} \cos ^2\left( \frac{\pi t_0}{2}\right) h_1\left( \frac{t_0}{4}\right) =0 \end{aligned}$$
(9)
$$\begin{aligned} \sin ^2\left( \frac{\pi t_0}{2}\right) \frac{|1+ \overline{\rho }|^2}{4}h_1\left( \frac{t_0-1}{4}\right) =0 \end{aligned}$$
(10)
$$\begin{aligned} \sin ^2\left( \frac{\pi t_0}{2}\right) \frac{|1- \overline{\rho }|^2}{4}h_1\left( \frac{t_0-3}{4}\right) =0 \end{aligned}$$
(11)

Our hypothesis is that \(\rho \ne -1\), so in Eq. (10), the coefficient \(\frac{| 1 + \overline{\rho } |^2}{4} \ne 0\).

Case 1 If \(t_0 \ne -1\), then \(\cos ^2 ( \pi t_0 / 2 ) \ne 0\), so Eq. (9) implies \(h_1(t_0/4) = 0 = h_1(g_0(t_0))\). Let \(t_1 := g_0(t_0) \in (-1,0)\); iterating the argument gives \(h_{1}(g_0(t_{1})) = 0\), and so on. Thus, we obtain an infinite sequence \(t_{n} = t_{0}/4^{n}\) of zeroes of \(h_1\) accumulating at 0.

Case 2 If \(t_0 = -1\), the previous argument does not apply, since \(\cos ^2( \pi t_{0}/2 ) = 0\). However, we can construct another zero of \(h_1\), \(t_0' \in (-1,0)\), to which the previous argument does apply. Indeed, if \(t_{0} = -1\), then since \(\rho \ne -1\), Eq. (10) implies \(h_1((t_0-1)/4) = h_1(-1/2) = 0\). Let \(t_0' = -1/2\) and continue as in Case 1.

In either case, \(h_1\) vanishes on a countable set with an accumulation point, and since \(h_1\) is analytic it follows that \(h_1 \equiv 0\), i.e. \(h_{X} \equiv \beta < 1\), contradicting \(h_{X}(0) = 1\). Hence Claim 1 holds.

Now, to prove Claim 2, we follow the proof of Lemma 4.2 of [10]. For a fixed \(\omega \in X_{4}\), define \(f_{\omega } : \mathbb {C} \rightarrow \mathbb {C}\) by

$$\begin{aligned} f_{\omega } (z) = \langle e_{z} , S_{\omega } \mathbbm {1} \rangle = \int e^{2 \pi i z x} \overline{[S_{\omega } \mathbbm {1}] (x,y)} \ d(\mu _{4} \times \lambda ). \end{aligned}$$

Since the complex measure \(\overline{[S_{\omega } \mathbbm {1}] (x,y)} \ d(\mu _{4} \times \lambda )\) is compactly supported, a standard convergence argument demonstrates that \(f_{\omega }\) is entire. Likewise, \(f^{*}_{\omega } (z) = \overline{ f_{\omega } (\overline{z})}\) is entire, and for t real,

$$\begin{aligned} f_{\omega }(t) f^{*}_{\omega }(t) = \left( \langle e_{t}, S_{\omega } \mathbbm {1} \rangle \right) \left( \overline{ \langle e_{t}, S_{\omega } \mathbbm {1} \rangle } \right) = | \langle e_{t}, S_{\omega } \mathbbm {1} \rangle |^2. \end{aligned}$$

Thus,

$$\begin{aligned} h_{X} (t) = \sum _{\omega \in X_{4} } f_{\omega } (t) f^{*}_{\omega }(t). \end{aligned}$$

For \(n \in \mathbb {N}\), let \(h_{n}(z) = \sum _{ |\omega | \le n } f_{\omega } (z) f^{*}_{\omega }(z)\), which is entire. By Hölder’s inequality,

$$\begin{aligned} \sum _{\omega \in X_{4}} | f_{\omega } (z) f^{*}_{\omega }(z) |&\le \left( \sum _{\omega \in X_{4}} | \langle e_{z} , S_{\omega } \mathbbm {1} \rangle |^2 \right) ^{1/2} \left( \sum _{\omega \in X_{4}} | \langle e_{\overline{z}} , S_{\omega } \mathbbm {1} \rangle |^2 \right) ^{1/2} \\&\le \Vert e_{z} \Vert \Vert e_{\overline{z}} \Vert \le e^{K | Im(z) |} \end{aligned}$$

for some constant K. Thus, the sequence \(h_{n}(z)\) converges pointwise to a function h(z) and is uniformly bounded on each strip \(| Im(z) | \le C\). By the theorems of Montel and Vitali, the limit function h is entire; it coincides with \(h_{X}\) for real t, and Claim 2 is proved.
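The constant K can be made explicit (a short side computation, not needed for the argument): since \(\mu _{4} \times \lambda \) is supported in the unit square, writing \(z = t + is\),

$$\begin{aligned} \Vert e_{z} \Vert ^2 = \int | e^{2 \pi i z x} |^2 \ d(\mu _{4} \times \lambda ) = \int e^{- 4 \pi s x} \ d(\mu _{4} \times \lambda ) \le e^{4 \pi |s|}, \end{aligned}$$

and similarly \(\Vert e_{\overline{z}} \Vert ^2 \le e^{4 \pi |s|}\), so \(\Vert e_{z} \Vert \Vert e_{\overline{z}} \Vert \le e^{4 \pi |s|}\) and one may take \(K = 4 \pi \).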

Example 1

As mentioned in Sect. 2, in general, \(\{ S_{\omega } \mathbbm {1} \}\) need not be complete, and the exceptional point \(\rho = -1\) in Theorem 2 provides an example. In the case \(\rho = -1\), the set (6) becomes

$$\begin{aligned} \{ d_{n} e^{2 \pi i n x} : n \in \mathbb {N}_{0} \} \end{aligned}$$

where the coefficients \(d_{n} = 1\) if \(n \in \Gamma _{3}\) and 0 otherwise. Here,

$$\begin{aligned} \Gamma _{3} = \left\{ \sum _{n=0}^{N} l_{n} 4^{n} : l_{n} \in \{0, 3\} \right\} \end{aligned}$$

and it is known [4] that the sequence \(\{ e^{2 \pi i n x} : n \in \Gamma _{3} \}\) is incomplete in \(L^2(\mu _{4})\). Thus, \(\{P_V S_{\omega } \mathbbm {1}\}\) is incomplete in V, so \(\{ S_{\omega } \mathbbm {1} \}\) is incomplete in \(L^2(\mu _{4} \times \lambda )\).
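Although completeness fails, the mutual orthogonality of \(\{ e^{2 \pi i n x} : n \in \Gamma _{3} \}\) is easy to check numerically on a finite-level approximation of \(\mu _{4}\): replace \(\mu _{4}\) by the uniform measure on the \(2^{m}\) points \(\sum _{k=1}^{m} a_{k}/4^{k}\), \(a_{k} \in \{0,2\}\), and use the frequencies in \(\Gamma _{3}\) built from digits \(l_{0}, \dots , l_{m-1}\). A Python sketch (the level \(m = 5\) and the restriction to the first 8 frequencies are arbitrary choices):

```python
import cmath
import itertools

m = 5  # approximation level (arbitrary choice)
# atoms of the level-m approximation of mu_4, each with weight 2^-m
atoms = [sum(a / 4**k for k, a in enumerate(digits, start=1))
         for digits in itertools.product((0, 2), repeat=m)]
# frequencies in Gamma_3 with m base-4 digits from {0, 3}
freqs = [sum(l * 4**n for n, l in enumerate(digits))
         for digits in itertools.product((0, 3), repeat=m)]

def inner(p, q):
    """<e_p, e_q> against the discrete approximation of mu_4."""
    return sum(cmath.exp(2j * cmath.pi * (p - q) * x) for x in atoms) / len(atoms)

# the Gram matrix of these exponentials is the identity at this level
for p in freqs[:8]:
    for q in freqs[:8]:
        expected = 1.0 if p == q else 0.0
        assert abs(inner(p, q) - expected) < 1e-9
```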

We can generalize the construction of Theorem 2 as follows. We want to choose a matrix

$$\begin{aligned} A = \begin{pmatrix} \frac{1}{2} &{} \frac{1}{2} &{} \frac{1}{2} &{} \frac{1}{2} \\ h_{10} &{} h_{11} &{} h_{12} &{} h_{13} \\ h_{20} &{} h_{21} &{} h_{22} &{} h_{23} \\ h_{30} &{} h_{31} &{} h_{32} &{} h_{33} \end{pmatrix} \end{aligned}$$

such that \(\begin{pmatrix} 1&-1&1&-1 \end{pmatrix}^{T}\) is in the kernel of A and the matrix

$$\begin{aligned} H = \left( \begin{array} {llll} \frac{1}{2} &{} \frac{1}{2} &{} \frac{1}{2} &{} \frac{1}{2} \\ h_{10} &{} -h_{11} &{} h_{12} &{} -h_{13} \\ h_{20} &{} h_{21} &{} h_{22} &{} h_{23} \\ h_{30} &{} -h_{31} &{} h_{32} &{} -h_{33} \end{array} \right) \end{aligned}$$

is unitary. This yields a system of nonlinear equations in the 12 unknowns \(h_{jk}\). To parametrize all solutions, we consider the following row vectors:

$$\begin{aligned} \vec {v}_{0}&= \frac{1}{2}\begin{pmatrix} 1&1&1&1 \end{pmatrix}&\vec {w}_{0}&= \frac{1}{2}\begin{pmatrix} 1&-1&1&-1 \end{pmatrix} \end{aligned}$$
(12)
$$\begin{aligned} \vec {v}_{1}&= \frac{1}{2}\begin{pmatrix} 1&-1&-1&1 \end{pmatrix}&\vec {w}_{1}&= \frac{1}{2}\begin{pmatrix} 1&1&-1&-1 \end{pmatrix} \end{aligned}$$
(13)
$$\begin{aligned} \vec {v}_{2}&= \frac{1}{2}\begin{pmatrix} 1&1&-1&-1 \end{pmatrix}&\vec {w}_{2}&= \frac{1}{2}\begin{pmatrix} 1&-1&-1&1 \end{pmatrix} \end{aligned}$$
(14)

If we construct the matrix A so that the rows are linear combinations of \(\{ \vec {v}_{0}, \vec {v}_{1}, \vec {v}_{2} \}\), then A will satisfy the desired condition on the kernel. Note that if the jth row of A is \(\alpha _{j0} \vec {v}_{0} + \alpha _{j1} \vec {v}_{1} + \alpha _{j2} \vec {v}_{2}\) for \(j = 1,3\), then the jth row of H is \(\alpha _{j0} \vec {w}_{0} + \alpha _{j1} \vec {w}_{1} + \alpha _{j2} \vec {w}_{2}\), whereas if \(j=0,2\), then the jth row of H is equal to the jth row of A.
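The inner products among the vectors (12)–(14) are what produce the conditions below: each family \(\{ \vec {v}_{i} \}\), \(\{ \vec {w}_{i} \}\) is orthonormal, \(\vec {v}_{0}\) is orthogonal to every \(\vec {w}_{j}\), and the only nonvanishing cross pairings are \(\vec {w}_{1} \cdot \vec {v}_{2} = \vec {w}_{2} \cdot \vec {v}_{1} = 1\) (indeed \(\vec {w}_{1} = \vec {v}_{2}\) and \(\vec {w}_{2} = \vec {v}_{1}\)). A quick numerical confirmation in Python (a sketch, not part of the proof):

```python
import numpy as np

# the row vectors from Eqs. (12)-(14)
v = np.array([[1, 1, 1, 1], [1, -1, -1, 1], [1, 1, -1, -1]]) / 2.0
w = np.array([[1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]) / 2.0

# each family is orthonormal
assert np.allclose(v @ v.T, np.eye(3))
assert np.allclose(w @ w.T, np.eye(3))

# cross pairings: v0 is orthogonal to every w_j, and the only nonzero
# products are w1.v2 = w2.v1 = 1 (indeed w1 = v2 and w2 = v1)
cross = w @ v.T
expected = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]])
assert np.allclose(cross, expected)
```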

Thus, we want to choose coefficients \(\alpha _{jk}\), \(j=0,1,2,3\), \(k=0,1,2\) so that the matrix

$$\begin{aligned} H = \begin{pmatrix} \alpha _{00} \vec {v}_{0} + \alpha _{01} \vec {v}_{1} + \alpha _{02} \vec {v}_{2} \\ \alpha _{10} \vec {w}_{0} + \alpha _{11} \vec {w}_{1} + \alpha _{12} \vec {w}_{2} \\ \alpha _{20} \vec {v}_{0} + \alpha _{21} \vec {v}_{1} + \alpha _{22} \vec {v}_{2} \\ \alpha _{30} \vec {w}_{0} + \alpha _{31} \vec {w}_{1} + \alpha _{32} \vec {w}_{2} \end{pmatrix} \end{aligned}$$
(15)

is unitary. To satisfy the requirement on the first row, we choose \(\alpha _{00} = 1\) and \(\alpha _{01} = \alpha _{02} = 0\). Calculating the inner products of the rows of H, we obtain the following necessary and sufficient conditions (with Eq. (16) required for each \(j = 0,1,2,3\)):

$$\begin{aligned} | \alpha _{j0} |^2 + | \alpha _{j1} |^2 + | \alpha _{j2} |^2&= 1 \end{aligned}$$
(16)
$$\begin{aligned} \alpha _{00} \overline{\alpha _{20}}&= 0 \end{aligned}$$
(17)
$$\begin{aligned} \alpha _{11} \overline{\alpha _{22}} + \alpha _{12} \overline{\alpha _{21}}&= 0 \end{aligned}$$
(18)
$$\begin{aligned} \alpha _{10} \overline{\alpha _{30}} + \alpha _{11} \overline{\alpha _{31}} + \alpha _{12} \overline{\alpha _{32}}&= 0 \end{aligned}$$
(19)
$$\begin{aligned} \alpha _{21} \overline{\alpha _{32}} + \alpha _{22} \overline{\alpha _{31}}&= 0 \end{aligned}$$
(20)

Proposition 2

Fix \(\alpha _{00} = 1\) and let \(\alpha _{10}, \alpha _{30} \in \mathbb {C}\). There exists a solution to Eqs. (16)–(20) if and only if

$$\begin{aligned} |\alpha _{10}|^2 + |\alpha _{30}|^2 = 1. \end{aligned}$$
(21)

Proof

(\(\Leftarrow \)) If \(|\alpha _{10}| = 1\) (so \(\alpha _{30} = 0\)), then we choose \(\alpha _{21} = \alpha _{31} = 1\) and all remaining coefficients to be 0 to obtain a solution to Eqs. (16)–(20). Likewise, if \(\alpha _{10} = 0\) (so \(|\alpha _{30}| = 1\)), then choose \(\alpha _{11} = \alpha _{21} = 1\) and all remaining coefficients to be 0.

Now suppose that \(0< |\alpha _{10}| < 1\), and choose \(\lambda = \dfrac{- \overline{\alpha _{10}} \alpha _{30}}{1 - |\alpha _{10}|^2}\). Then choose \(\alpha _{11}\) and \(\alpha _{12}\) such that \(|\alpha _{11}|^2 + |\alpha _{12}|^2 = 1 - |\alpha _{10}|^2\), and set \(\alpha _{31} = \lambda \alpha _{11}\) and \(\alpha _{32} = \lambda \alpha _{12}\). We have

$$\begin{aligned} \alpha _{10} \overline{\alpha _{30}} + \alpha _{11} \overline{\alpha _{31}} + \alpha _{12} \overline{\alpha _{32}}&= \alpha _{10} \overline{\alpha _{30}} + \overline{\lambda } |\alpha _{11}|^2 + \overline{\lambda } |\alpha _{12}|^2 \nonumber \\&= \alpha _{10} \overline{\alpha _{30}} + \overline{\lambda } (1 - | \alpha _{10}|^2) \\&= 0, \nonumber \end{aligned}$$
(22)

so Eq. (19) is satisfied.

Equation (17) forces \(\alpha _{20} = 0\); choose \(\alpha _{21}\) and \(\alpha _{22}\) such that \(|\alpha _{21}|^2 + |\alpha _{22}|^2 = 1\) and \(\alpha _{11} \overline{\alpha _{22}} + \alpha _{12} \overline{\alpha _{21}} = 0\). Thus, Eq. (18) is satisfied, and Eq. (20) follows as well, since \(\alpha _{31} = \lambda \alpha _{11}\) and \(\alpha _{32} = \lambda \alpha _{12}\) make its left-hand side equal to \(\overline{\lambda }\) times the conjugate of the left-hand side of Eq. (18). Finally, regarding Eq. (16), it is satisfied for \(j=0,1,2\) by construction. For \(j=3\), we calculate:

$$\begin{aligned} |\alpha _{30}|^2 + |\alpha _{31}|^2 + |\alpha _{32}|^2&= | \alpha _{30} |^2 + |\lambda |^2 \left( |\alpha _{11}|^2 + | \alpha _{12}|^2 \right) \nonumber \\&= |\alpha _{30}|^2+ \dfrac{|\alpha _{10}|^2 | \alpha _{30} |^2}{(1 - | \alpha _{10}|^2)^2} \left( 1 - | \alpha _{10}|^2 \right) \nonumber \\&= | \alpha _{30}|^2 \left( 1 + \dfrac{ | \alpha _{10} |^2 }{ 1 - | \alpha _{10} |^2} \right) \nonumber \\&= \dfrac{ | \alpha _{30} |^2 }{ 1 - | \alpha _{10} |^2 } \\&= 1 \nonumber \end{aligned}$$
(23)

as required.

(\(\Rightarrow \)) Suppose that we have a solution to Eqs. (16)–(20). If \(|\alpha _{10}| = 1\), then we must have \(\alpha _{11} = \alpha _{12} = 0\), and thus Eq. (19) requires \(\alpha _{30} = 0\), so Eq. (21) holds.

Now suppose \(|\alpha _{10}| < 1\). Since \(\alpha _{20} = 0\), we must have that \(| \alpha _{21} |^2 + | \alpha _{22} |^2 = 1\). Combining this with Eqs. (18) and (20) implies that the matrix

$$\begin{aligned} \begin{pmatrix} \alpha _{11} &{} \alpha _{12} \\ \alpha _{31} &{} \alpha _{32} \end{pmatrix} \end{aligned}$$

is singular. Thus, there exists a \(\lambda \) such that \(\alpha _{31} = \lambda \alpha _{11}\) and \(\alpha _{32} = \lambda \alpha _{12}\); note that \((\alpha _{11}, \alpha _{12}) \ne (0,0)\) since \(|\alpha _{10}| < 1\). Using the same computation as in Eq. (22), we conclude that \(\lambda = \dfrac{- \overline{\alpha _{10}} \alpha _{30}}{1 - |\alpha _{10}|^2}\); then Eq. (23) implies (21). \(\square \)
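The construction in the proof is easy to verify numerically: starting from \(\alpha _{10}, \alpha _{30}\) with \(|\alpha _{10}|^2 + |\alpha _{30}|^2 = 1\), build the remaining coefficients as in the proof and assemble H from Eq. (15). A Python sketch (the particular values of \(\alpha _{10}, \alpha _{30}\) and the splitting of \(\alpha _{11}, \alpha _{12}\) are arbitrary choices subject to the stated constraints):

```python
import numpy as np

# the row vectors from Eqs. (12)-(14)
v = np.array([[1, 1, 1, 1], [1, -1, -1, 1], [1, 1, -1, -1]]) / 2.0
w = np.array([[1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]) / 2.0

# sample alpha_10, alpha_30 with |a10|^2 + |a30|^2 = 1 (illustrative values)
a10, a30 = 0.6 * np.exp(0.3j), 0.8 * np.exp(-1.1j)
lam = -np.conj(a10) * a30 / (1 - abs(a10) ** 2)

# |a11|^2 + |a12|^2 = 1 - |a10|^2 (arbitrary split), then row 3 via lambda
r = np.sqrt(1 - abs(a10) ** 2)
a11, a12 = r * np.cos(0.4), r * np.sin(0.4)
a31, a32 = lam * a11, lam * a12

# a20 = 0; choose a21, a22 with |a21|^2 + |a22|^2 = 1 and
# a11 * conj(a22) + a12 * conj(a21) = 0, satisfying Eq. (18)
c = 1 / np.sqrt(abs(a11) ** 2 + abs(a12) ** 2)
a21, a22 = np.conj(a11) * c, -np.conj(a12) * c

# assemble H as in Eq. (15) and check unitarity
H = np.array([
    1 * v[0],
    a10 * w[0] + a11 * w[1] + a12 * w[2],
    a21 * v[1] + a22 * v[2],
    a30 * w[0] + a31 * w[1] + a32 * w[2],
])
assert np.allclose(H @ H.conj().T, np.eye(4))
```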

The coefficient matrix we obtain from this construction is

$$\begin{aligned} A \!=\! \dfrac{1}{2} \left( \begin{array} {llll} 1 &{} 1 &{} 1 &{} 1 \\ \alpha _{10} + \alpha _{11} + \alpha _{12} &{} \alpha _{10} - \alpha _{11} + \alpha _{12} &{} \alpha _{10} - \alpha _{11} - \alpha _{12} &{} \alpha _{10} + \alpha _{11} - \alpha _{12} \\ \alpha _{21} + \alpha _{22} &{} \ - \alpha _{21} + \alpha _{22} &{} \ - \alpha _{21} - \alpha _{22} &{} \alpha _{21} - \alpha _{22} \\ \alpha _{30} + \lambda \alpha _{11} + \lambda \alpha _{12} &{} \alpha _{30} - \lambda \alpha _{11} + \lambda \alpha _{12} &{} \alpha _{30} - \lambda \alpha _{11} - \lambda \alpha _{12} &{} \alpha _{30} + \lambda \alpha _{11} - \lambda \alpha _{12} \end{array} \right) \end{aligned}$$

where we are allowed to choose \(\alpha _{11}\), \(\alpha _{12}\), \(\alpha _{21}\) and \(\alpha _{22}\) subject to the conditions in Eqs. (16) and (18). However, those choices do not affect the construction: if we apply Proposition 1 and the calculation from Theorem 2, we obtain

$$\begin{aligned} P_{V} S_{\omega } \mathbbm {1} = (\alpha _{10})^{\ell _{1}(n)} \cdot (0)^{\ell _{2}(n)} \cdot (\alpha _{30})^{\ell _{3}(n)} e^{2 \pi i n x}. \end{aligned}$$
(24)

This will in fact be a Parseval frame for \(L^2(\mu _{4})\), provided \(V \subset \mathcal {K}\), as in the proof of Theorem 2.

Theorem 3

Suppose \(p, q \in \mathbb {C}\) with \(|p|^2 + |q|^2 = 1\) and \(p \ne 0\). Then

$$\begin{aligned} \{ p^{\ell _{1}(n)} \cdot 0^{\ell _{2}(n)} \cdot q^{\ell _{3}(n)} e^{2 \pi i n x} : n \in \mathbb {N}_{0} \} \end{aligned}$$

is a Parseval frame for \(L^2(\mu _{4})\).

Proof

Substitute \(\alpha _{10} = p\) and \(\alpha _{30} = q\) in Proposition 2 and Eq. (24). As noted, we only need to verify \(V \subset \mathcal {K}\). We proceed as in the proof of Theorem 2; indeed, define f, \(h_{X}\), \(\mathfrak {m}_{j}\) and \(g_{j}\) as previously. We obtain \(b_{0} = 1\), \(b_{1} = \overline{p}\), \(b_{2} = 0\), and \(b_{3} = \overline{q}\), so Eq. (8) becomes

$$\begin{aligned} h_{X}(t)= & {} \cos ^2\left( \frac{\pi t}{2}\right) h_{X} \left( \frac{t}{4}\right) + | \overline{p} |^2 \sin ^2\left( \frac{\pi t}{2}\right) h_{X} \left( \frac{t-1}{4}\right) \\&+ | \overline{q} |^2 \sin ^2\left( \frac{\pi t}{2}\right) h_{X} \left( \frac{t-3}{4}\right) . \end{aligned}$$

From here, the same argument shows that \(h_{X} \equiv 1\), and \(V \subset \mathcal {K}\). \(\square \)
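To make the weight pattern in Theorem 3 concrete: \(\ell _{i}(n)\) counts the base-4 digits of n equal to i, so the coefficient \(p^{\ell _{1}(n)} \cdot 0^{\ell _{2}(n)} \cdot q^{\ell _{3}(n)}\) vanishes exactly when some base-4 digit of n equals 2 (with the convention \(0^{0} = 1\)). A Python sketch (the values p = 0.6, q = 0.8 are illustrative):

```python
def digit_counts(n):
    """Count the base-4 digits of n equal to 0, 1, 2, 3; digit_counts(0) is all zeros."""
    counts = [0, 0, 0, 0]
    while n:
        counts[n % 4] += 1
        n //= 4
    return counts

def coefficient(n, p, q):
    """The weight p^{l1(n)} * 0^{l2(n)} * q^{l3(n)} from Theorem 3 (0**0 == 1 in Python)."""
    l = digit_counts(n)
    return p ** l[1] * 0 ** l[2] * q ** l[3]

p, q = 0.6, 0.8  # sample values with p**2 + q**2 = 1 and p != 0
nonzero = [n for n in range(64) if coefficient(n, p, q) != 0]
# the surviving frequencies are exactly those whose base-4 digits avoid 2
assert nonzero == [n for n in range(64) if digit_counts(n)[2] == 0]
```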

5 Concluding Remarks

We remark here that the construction given above for \(\mu _{4}\) does not work for \(\mu _{3}\). Indeed, we have the following no-go result. To obtain the measure \(\mu _{3} \times \lambda \), we consider the iterated function system:

$$\begin{aligned} \Upsilon _{0} (x,y) = \left( \frac{x}{3}, \frac{y}{2} \right) , \ \Upsilon _{1} (x,y) = \Big ( \frac{x+2}{3}, \frac{y}{2} \Big ), \\ \Upsilon _{2} (x,y) = \Big ( \frac{x}{3}, \frac{y+1}{2} \Big ), \Upsilon _{3} (x,y) = \Big (\frac{x+2}{3}, \frac{y+1}{2} \Big ). \end{aligned}$$

Using the same choice of filters, the matrix \(\mathcal {M}(x,y)\) reduces to

$$\begin{aligned} H = \left( \begin{array} {llll} a_{00} &{} a_{01} &{} a_{02} &{} a_{03} \\ a_{10} &{} e^{4 \pi i/3} a_{11} &{} a_{12} &{} e^{4 \pi i /3}a_{13} \\ a_{20} &{} e^{2 \pi i/3} a_{21} &{} a_{22} &{} e^{2 \pi i/3} a_{23} \\ a_{30} &{} a_{31} &{} a_{32} &{} a_{33} \end{array} \right) \end{aligned}$$

which we require to be unitary. Additionally, we require the same conditions as for \(\mu _{4}\), namely, the first row of H must have all entries \(\frac{1}{2}\), and \(a_{j0} + a_{j2} = a_{j1} + a_{j3}\). The inner product of the first two rows must be 0. Hence,

$$\begin{aligned} \dfrac{1}{2} \left( a_{10} + e^{4 \pi i/3} a_{11} + a_{12} + e^{4 \pi i /3}a_{13} \right) = \dfrac{1}{2} \left( a_{10} + a_{12} \right) ( 1 + e^{4 \pi i/3} ) = 0. \end{aligned}$$

Consequently, \(a_{10} + a_{12} = 0\). Likewise, \(a_{20} + a_{22} = a_{30} + a_{32} = 0\). As a result,

$$\begin{aligned} H \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} a_{00} + a_{02} \\ a_{10} + a_{12} \\ a_{20} + a_{22} \\ a_{30} + a_{32} \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} \end{aligned}$$

and so H cannot be unitary.
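The norm collapse can also be seen numerically: any rows satisfying the constraints derived above annihilate the vector \((1,0,1,0)^{T}\) except in the first coordinate, so H sends a vector of norm \(\sqrt{2}\) to one of norm 1. A Python sketch (the random row entries are arbitrary and not normalized; the demonstration only uses the forced relations \(a_{j0} + a_{j2} = a_{j1} + a_{j3} = 0\)):

```python
import numpy as np

rng = np.random.default_rng(0)
# phases carried by columns 1 and 3 in rows 1, 2, 3 of the matrix above
phases = [np.exp(4j * np.pi / 3), np.exp(2j * np.pi / 3), 1.0]

rows = [np.full(4, 0.5)]  # first row: all entries 1/2
for phi in phases:
    # a_{j0} + a_{j2} = a_{j1} + a_{j3} = 0, as forced by orthogonality
    # to the first row (sample complex values, not normalized)
    r = rng.standard_normal() + 1j * rng.standard_normal()
    s = rng.standard_normal() + 1j * rng.standard_normal()
    rows.append(np.array([r, phi * s, -r, -phi * s]))
H = np.array(rows)

# each lower row is orthogonal to the first row of H...
assert np.allclose(H[1:] @ H[0].conj(), 0)
# ...so H maps (1,0,1,0)^T (norm sqrt(2)) to (1,0,0,0)^T (norm 1)
assert np.allclose(H @ np.array([1, 0, 1, 0]), [1, 0, 0, 0])
```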

It may be possible to extend the construction for \(\mu _{4}\) to \(\mu _{3}\) by considering a representation of \(\mathcal {O}_{n}\) for some sufficiently large n, or by considering \(\mu _{3} \times \rho \) for some other fractal measure \(\rho \) rather than \(\lambda \).