1 Introduction

The purpose of this paper is to construct open quantum random walks on crystal lattices and to investigate their asymptotic behavior, namely central limit theorems.

Unitary quantum walks have been developed and applied as a tool for quantum algorithms, and they have succeeded through their power to speed up certain search algorithms [1, 7, 8, 18]. Since their mathematical formulation, many properties of quantum walks have become known; in particular, the asymptotic behavior of the quantum walks was established [2, 9, 10, 12, 13]. More precisely, it was proved that quantum walks, when scaled by 1/n, have limit distributions with certain densities, which are drastically different from the Gaussian limit distributions of classical random walks resulting from the central limit theorem [9, 12, 13].

Recently, a new type of quantum walk, the so-called open quantum random walk (OQRW hereafter), was introduced [3,4,5]. The OQRWs were developed to formulate dissipative quantum computing algorithms and dissipative quantum state preparation [5]. Decoherence and dissipation occur through the interaction of a system with its environment, and one needs a proper quantum walk in which this dissipativity can be implemented. The works [3,4,5] aim to fulfill this requirement. The OQRWs are not unitary evolutions of states, in contrast to the earlier unitary quantum walks (hence the name). By the procedure of quantum trajectories, which amounts to a repeated measurement of the particle's position at each step followed by an application of a completely positive map, the OQRWs are simulated by Markov chains on the product of the position and state spaces [3,4,5] (see Sect. 2 for the details).

In the paper [3], Attal et al. proved central limit theorems for OQRWs on the integer lattice \({\mathbb {Z}}^d\). This result shows that the behaviors of OQRWs and unitary quantum walks are very different. On the other hand, when we consider dynamics on the integer lattices, we can develop Fourier transforms. In [14], Konno and Yoo developed a Fourier transform theory for the OQRWs on the integer lattices, and with it the so-called dual process was constructed. It is, in a sense, the process of Fourier transforms of probability distributions. Related work on central limit theorems for OQRWs can be found in the references [6, 16].

In this paper we construct OQRWs on crystal lattices. Crystal lattices are structures which, like integer lattices, have global regularity, but which may have further local structure (see Sect. 2.1 for the definition). Therefore, not only the integer lattices belong to this class, but richer structures can be considered as well. The goal of the paper is two-fold: one is to show central limit theorems for the OQRWs on the crystal lattices, and the other is to construct the dual processes by using a Fourier transform theory on the crystal lattices. Following the method developed in [3], we prove the central limit theorems. We provide some examples on the hexagonal lattice. We then develop a Fourier transform theory and construct the dual processes as was done in [14]. By revisiting the examples we will see that the central limit theorems can also be obtained via the dual processes. In some examples this even provides a better understanding of the dynamics. We remark that recently the present authors considered the orbits, i.e., the support of the scaled unitary quantum walks, on the crystal lattices [11].

This paper is organized as follows. In Sect. 2 we introduce the crystal lattices and construct OQRWs on them. In Sect. 3 we show the central limit theorems (Theorem 3.5). Section 4 is devoted to examples; we mainly consider hexagonal lattices, giving two examples which have nonzero and zero covariances, respectively, in the limit. In Sect. 5 we construct dual processes after a short introduction to Fourier analysis on the crystal lattices; the examples mentioned above are revisited for comparison. Appendix A gives a proof of the central limit theorem, following the methods in [3] with suitable modifications. In Appendices B and C we provide analytic proofs of some technical results that are used in the examples.

2 OQRWs on the Crystal Lattices

2.1 Crystal Lattices

In this subsection we introduce the crystal lattices as was done in [11]. Let \(G_0=(V_0,E_0)\) be a finite graph, which may have multiple edges and self-loops. We use the notation \(A(G_0)\) for the set of symmetric arcs induced by \(E_0\). The homology group of \(G_0\) with integer coefficients is denoted by \(H_1(G_0,{\mathbb {Z}})\). The abstract periodic lattice \({\mathbb {L}}\) induced by a subgroup \(H\subset H_1(G_0,{\mathbb {Z}})\) is denoted by \(H_1(G_0,{\mathbb {Z}})/H\) [17].

Let \(\{C_1,C_2,\dots ,C_{b_1}\}\) be a basis of \(H_1(G_0,{\mathbb {Z}})\) corresponding to fundamental cycles of \(G_0\), where \(b_1\) is the first Betti number of \(G_0\). The spanning tree associated with \(\{C_1,C_2,\ldots ,C_{b_1}\}\) is denoted by \({\mathbb {T}}_0\). We can take a one-to-one correspondence between \(\{C_1,C_2,\ldots ,C_{b_1}\}\) and \(A({\mathbb {T}}_0)^{c}\): we write \(C(e)\in \{C_1,C_2,\ldots ,C_{b_1}\}\) for the fundamental cycle corresponding to \(e\in A({\mathbb {T}}_0)^{c}\), so that C(e) is the cycle generated by adding e to \({\mathbb {T}}_0\). Let d be the number of generators of the quotient group \(H_1(G_0,{\mathbb {Z}})/H\). By taking a set of generating vectors \(\{{{\widehat{\theta }}}(e):e\in A({\mathbb {T}}_0)^{c}\}\) (we suppose \({\widehat{\theta }}({\bar{e}})=-{\widehat{\theta }}(e)\), where \({\bar{e}}\) denotes the reversed arc of e), we may consider \({\mathbb {L}}\) as a subset of \({\mathbb {R}}^d\) isomorphic to \({\mathbb {Z}}^d\). In other words, we may write

$$\begin{aligned} {\mathbb {L}}=\left\{ \sum n_e{{\widehat{\theta }}}(e):e\in A({\mathbb {T}}_0)^{c},\,\,n_e\in {\mathbb {Z}}\right\} . \end{aligned}$$

Let us define a covering graph \(G=(V,A)\) of \(G_0\) by the lattice \({\mathbb {L}}\). To this end, define \(\phi : A({\mathbb {T}}_0) \rightarrow {\mathbb {R}}^d\) so that \( \phi ({\bar{e}})=-\phi (e)\) for every \(e\in A_0\). We also define \(\phi _0: V_0\rightarrow {\mathbb {R}}^d\) so that \( \phi (e)=\phi _0(\mathrm t(e))-\phi _0(\mathrm o(e)) \) for every \(e\in A({\mathbb {T}}_0)\), by fixing a point \(\phi _0(v_0)\) at some vertex \(v_0\in V_0\). Here \(\mathrm t(e)\) and \(\mathrm o(e)\) denote the terminal and origin of the arc e, respectively. Now the covering graph \(G=(V,A)\) is defined as follows.

$$\begin{aligned} V= & {} {\mathbb {L}}+\phi _0(V_0) \cong {\mathbb {L}}\times \phi _0(V_0); \\ A= & {} \cup _{x\in {\mathbb {L}}} \left\{ \left( (x,\mathrm o(e)),(x,\mathrm t(e))\right) \;|\; e\in A({\mathbb {T}}_0) \right\} \\&\quad \cup \left( \cup _{x\in {\mathbb {L}}} \left\{ \left( (x,\mathrm o(e)),(x+{{\widehat{\theta }}}(e),\mathrm t(e))\right) \;|\; e\in A^{c}({\mathbb {T}}_0) \right\} \right) . \end{aligned}$$

The covering graph \(G=(V,A)\) is called a crystal lattice.

We take \({{\widehat{\theta }}}(e)\equiv 0\) for \(e\in A({\mathbb {T}}_0)\) and choose \(e_{i_1},\ldots ,e_{i_d}\) from \(A({\mathbb {T}}_0)^c\) so that \({\widehat{\theta }}_1:={\widehat{\theta }}(e_{i_1}),\ldots ,{\widehat{\theta }}_d:={\widehat{\theta }}(e_{i_d})\) span \({\mathbb {R}}^d\). We further suppose that for all \(e\in A(G_0)\), \({{\widehat{\theta }}}(e)\in \{\sum _{i=1}^dn_i{\widehat{\theta _i}}:n_i\in \mathbb Z,\,i=1,\ldots ,d\}\), and for any two arcs \(e_i\) and \(e_j\) in \(A({\mathbb {T}}_0)^c\), \({{\widehat{\theta }}}(e_i)\) and \({{\widehat{\theta }}}(e_j)\) are linearly independent unless \(e_j=\overline{e}_i\). We define a \(d\times d\) matrix by

$$\begin{aligned} \Theta := \left( [{\widehat{\theta }}_1,\ldots ,{\widehat{\theta }}_{d}]^{-1}\right) ^T. \end{aligned}$$
(2.1)

Notice that if \(\{\mathbf{e}_i:i=1,\ldots ,d\}\) is the canonical basis for \({\mathbb {R}}^d\), then we have

$$\begin{aligned} \mathbf{e}_i=\sum _{j=1}^d\Theta _{ij}{\widehat{\theta _j}}. \end{aligned}$$
(2.2)

The matrix \(\Theta \) will play a crucial role when we consider Fourier transforms.
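As a quick numerical illustration of (2.1) and (2.2), the following sketch computes \(\Theta \) for the generating vectors \({\widehat{\theta }}_1=\frac{1}{\sqrt{2}}(1,1)\), \({\widehat{\theta }}_2=\frac{1}{\sqrt{2}}(-1,1)\) used for the hexagonal lattice in Sect. 4; the code is an illustrative sketch, not part of the construction.

```python
import numpy as np

# Generating vectors of the lattice (the hexagonal example of Sect. 4):
# rows are theta_1 and theta_2.
theta_hat = np.array([[1.0, 1.0],
                      [-1.0, 1.0]]) / np.sqrt(2)

# Theta := ([theta_1, ..., theta_d]^{-1})^T with the theta_i as columns, cf. (2.1)
M = theta_hat.T
Theta = np.linalg.inv(M).T

# Relation (2.2): e_i = sum_j Theta_ij theta_j, i.e. Theta @ theta_hat = I
print(np.round(Theta @ theta_hat, 10))
```

For the hexagonal vectors above this reproduces \(\Theta =\frac{1}{\sqrt{2}}\left[ \begin{matrix}1&-1\\ 1&1\end{matrix}\right] \), the matrix used in Sect. 4.2.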

2.2 OQRWs on the Crystal Lattices

We let \({\mathcal {K}}:=l^2({\mathbb {L}})\) and by \(\{|x\rangle :x\in {\mathbb {L}}\}\) we denote the canonical orthonormal basis of \(\mathcal K\). Let \({\mathcal {H}}\) be a finite dimensional Hilbert space and for each \(u\in V_0\), let \({\mathcal {H}}_u\) be a copy of \({\mathcal {H}}\). Define

$$\begin{aligned} {\mathfrak {h}}:=\oplus _{u\in V_0}{\mathcal {H}}_u. \end{aligned}$$

\({\mathfrak {h}}\) represents an intrinsic structure at each site of \({\mathbb {L}}\). The Hilbert space \({\mathfrak {h}}\otimes {\mathcal {K}}\) is the base Hilbert space on which our OQRWs act. For each \(e\in A(G_0)\), \(e=(u,v)\), we let B(e) be a bounded linear operator from \({\mathcal {H}}_u\) to \({\mathcal {H}}_v\) satisfying

$$\begin{aligned} \sum _{\begin{array}{c} e\in A(G_0);\\ \mathrm {o}(e)=u \end{array}}B^*(e)B(e)=I_{{\mathcal {H}}_u}\quad \text {for all }u\in V_0. \end{aligned}$$
(2.3)

Whenever there is no danger of confusion we also understand \({\mathcal {H}}_u\) as a subspace of \({\mathfrak {h}}\). With this convention, B(e) (using the same symbol by abuse of notation) is a bounded linear operator on \({\mathfrak {h}}\) and satisfies

$$\begin{aligned} \sum _{e\in A(G_0)}B^*(e)B(e)=\sum _{u\in V_0}\sum _{\begin{array}{c} e\in A(G_0);\\ \mathrm {o}(e)=u \end{array}}B^*(e)B(e)=\sum _{u\in V_0}I_{\mathcal H_u}=I_{\mathfrak {h}}. \end{aligned}$$
(2.4)

The operators \(\{B(e):e\in A(G_0)\}\) will constitute the Kraus representation of our OQRWs on the crystal lattices. For that we define for each \(x\in {\mathbb {L}}\) and \(e\in A(G_0)\), a bounded linear operator \(L_x^e\) on \({\mathfrak {h}}\otimes {\mathcal {K}}\) by

$$\begin{aligned} L_x^e:=B(e)\otimes |x+{{\widehat{\theta }}}(e)\rangle \langle x|. \end{aligned}$$
(2.5)

We can check the following property.

Lemma 2.1

$$\begin{aligned} \sum _{x\in {\mathbb {L}}}\sum _{e\in A(G_0)}\left( L_x^e\right) ^*L_x^e=I_{{\mathfrak {h}}\otimes {\mathcal {K}}}. \end{aligned}$$
(2.6)

Proof

By (2.4),

$$\begin{aligned} \sum _{x\in {\mathbb {L}}}\sum _{e\in A(G_0)}\left( L_x^e\right) ^*L_x^e= & {} \sum _{x\in {\mathbb {L}}}\sum _{e\in A(G_0)}B(e)^*B(e)\otimes |x\rangle \langle x|\\= & {} \sum _{x\in {\mathbb {L}}} I_{{\mathfrak {h}}}\otimes |x\rangle \langle x|\\= & {} I_{{\mathfrak {h}}\otimes {\mathcal {K}}}. \end{aligned}$$

\(\square \)

The OQRW is a completely positive linear operator on the ideal \({\mathcal {I}}_1\) of trace class operators on \({{\mathfrak {h}}\otimes {\mathcal {K}}}\) defined by

$$\begin{aligned} {\mathcal {M}}(\rho ):=\sum _{x\in {\mathbb {L}}}\sum _{e\in A(G_0)}L_{x}^{e}\rho (L_{x}^{e})^*. \end{aligned}$$
(2.7)

Let us consider a special class of states (density operators) on \({\mathfrak {h}}\otimes {\mathcal {K}}\) of the form

$$\begin{aligned} \rho =\sum _{x\in {\mathbb {L}} }\left( \oplus _{u\in V_0}\rho _{(x,u)}\right) \otimes |x\rangle \langle x|. \end{aligned}$$
(2.8)

Here, for each pair \((x,u)\in {\mathbb {L}}\times V_0\), \(\rho _{(x,u)}\) is a positive definite operator on \({\mathcal {H}}_u\) and satisfies

$$\begin{aligned} \sum _{x\in {\mathbb {L}}}\sum _{u\in V_0}\text {Tr}(\rho _{(x,u)})=1. \end{aligned}$$

The value \(\sum _{u\in V_0}\text {Tr}(\rho _{(x,u)})\) is understood as the probability of finding the particle at site \(x\in {\mathbb {L}}\) when the state is \(\rho \). One checks that if the state \(\rho \) has the form (2.8), then \(\mathcal M(\rho )\) has the form

$$\begin{aligned} {\mathcal {M}}(\rho )= \sum _{x\in {\mathbb {L}} }\left( \oplus _{u\in V_0}\rho '_{(x,u)}\right) \otimes |x\rangle \langle x|, \end{aligned}$$
(2.9)

where

$$\begin{aligned} \rho '_{(x,u)}=\sum _{ \begin{array}{c} e\in A(G_0);\\ \mathrm {t}(e)=u \end{array}}B(e)\rho _{(x-{{\widehat{\theta }}}(e),\mathrm {o}(e))}B(e)^*. \end{aligned}$$

From now on we assume that \({\mathcal {M}}\) is defined on the set of states of the form in (2.8).

Let X denote the random variable representing the position of the particle, or the walker. Starting from the initial state \(\rho \) in (2.8), the probability of finding the particle at site \(x\in {\mathbb {L}}\) after a one-step evolution is given by

$$\begin{aligned} {\mathbb {P}}(X=x)=\sum _{u\in V_0}\text {Tr}\left( \rho '_{(x,u)}\right) . \end{aligned}$$

As was introduced in [3, 4], let \((\rho _n,X_n)_{n\ge 0}\) denote the Markov chain of the quantum trajectory procedure. It is obtained by repeatedly applying the completely positive map \({\mathcal {M}}\) followed by a measurement of the position on \({\mathcal {K}}\). More precisely, denoting by \({\mathcal {E}}({\mathfrak {h}})\) the space of states on \({\mathfrak {h}}\), \((\rho _n,X_n)_{n\ge 0}\) is a Markov chain on the state space \({\mathcal {E}}({\mathfrak {h}})\times {\mathbb {L}}\) whose transition rule is defined as follows: from a point \((\rho ,x)\in {\mathcal {E}}({\mathfrak {h}})\times {\mathbb {L}}\) it jumps to the point

$$\begin{aligned} \left( \frac{1}{p(e)}B(e)\rho B(e)^*, x+{{\widehat{\theta }}}(e)\right) \in {\mathcal {E}}({\mathfrak {h}})\times {\mathbb {L}}, \end{aligned}$$

with probability

$$\begin{aligned} p(e)=\text {Tr}(B(e)\rho B(e)^*). \end{aligned}$$
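The transition rule above can be sketched in code. The following is a minimal simulation of one quantum trajectory, for a toy OQRW on \({\mathbb {Z}}\) with two \(2\times 2\) Kraus operators; the particular matrices below are illustrative choices satisfying the normalization (2.3), not the crystal-lattice model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Kraus operators for a walk on Z: B(+1), B(-1) with
# B(+1)^* B(+1) + B(-1)^* B(-1) = I, cf. (2.3).
B = {+1: np.array([[1.0, 1.0], [0.0, 0.0]]) / np.sqrt(2),
     -1: np.array([[0.0, 0.0], [1.0, -1.0]]) / np.sqrt(2)}

def trajectory_step(rho, x):
    """One jump of (rho_n, X_n): pick a move e with probability
    p(e) = Tr(B(e) rho B(e)^*), then renormalize the internal state."""
    moves = list(B)
    probs = [np.trace(B[e] @ rho @ B[e].conj().T).real for e in moves]
    e = moves[rng.choice(len(moves), p=probs)]
    rho_new = B[e] @ rho @ B[e].conj().T / probs[moves.index(e)]
    return rho_new, x + e

rho, x = np.eye(2) / 2, 0          # initial internal state I/2 at the origin
for _ in range(10):
    rho, x = trajectory_step(rho, x)
print(x, np.trace(rho).real)       # the internal state keeps trace 1
```

Since the Kraus condition makes the probabilities p(e) sum to one for any trace-one state, each step is a genuine Markov transition on \({\mathcal {E}}({\mathfrak {h}})\times {\mathbb {Z}}\).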

3 Central Limit Theorem

In this section we discuss the central limit theorem for the OQRWs on the crystal lattices. The same study for the OQRWs on the integer lattices \({\mathbb {Z}}^d\) was done in [3]. Here we follow the same lines as [3] with slight modifications.

3.1 Preparation

We let

$$\begin{aligned} {\mathcal {L}}(\rho ):=\sum _{e\in A(G_0)}B(e)\rho B(e)^*,\quad \rho \in {\mathcal {E}}({\mathfrak {h}}). \end{aligned}$$
(3.1)

We assume the following hypothesis.

  1. (H)

    \({\mathcal {L}}\) admits a unique invariant state \(\rho _\infty \).

Remark 3.1

The existence of an invariant state for the map \({\mathcal {L}}\) in (3.1) follows from an ergodic theorem [15]. In fact, for any initial state \(\rho _0\), the time average

$$\begin{aligned} \frac{1}{n}\sum _{k=0}^{n-1}{\mathcal {L}}^k(\rho _0) \end{aligned}$$

converges to an invariant state \(\rho _\infty \in {\mathcal {E}}({\mathfrak {h}})\) (see also [3]).

Let us define

$$\begin{aligned} m:=\sum _{e\in A(G_0)}\text {Tr}\left( B(e)\rho _\infty B(e)^*\right) {{\widehat{\theta }}}(e). \end{aligned}$$
(3.2)

Lemma 3.2

For any \(l\in {\mathbb {R}}^d\), the equation

$$\begin{aligned} L-{{\mathcal {L}}}^*(L)=\sum _{ e\in A(G_0) }B(e)^*B(e)\left( {{\widehat{\theta }}}(e)\cdot l\right) -(m\cdot l)I \end{aligned}$$
(3.3)

admits a solution. The difference between any two solutions of (3.3) is a multiple of the identity.

Proof

By (3.2) we have for any \(l\in {\mathbb {R}}^d\),

$$\begin{aligned} \sum _{ e\in A(G_0) }\text {Tr}\left( B(e)\rho _\infty B(e)^*\right) {{\widehat{\theta }}}(e)\cdot l=m\cdot l. \end{aligned}$$

Hence

$$\begin{aligned} \text {Tr}\Big (\rho _\infty \Big (\sum _{ e\in A(G_0) }B (e)^*B(e){{\widehat{\theta }}}(e)\cdot l-\big (m\cdot l\big )I\Big )\Big )=0. \end{aligned}$$

Thus

$$\begin{aligned} \sum _{ e\in A(G_0) }B(e)^*B(e)\left( {{\widehat{\theta }}}(e)\cdot l\right) -\big (m \cdot l\big )I\in \{\rho _\infty \}^\perp =\mathrm {Ran}(I-{{\mathcal {L}}}^*). \end{aligned}$$

The last equality follows from the fact that

$$\begin{aligned} \{\rho _\infty \}^\perp =\mathrm {Ker}(I-{\mathcal {L}} )^\perp =\overline{\mathrm {Ran}(I-{{\mathcal {L}}^* })}=\mathrm {Ran}(I-{{\mathcal {L}}}^*), \end{aligned}$$

since \({\mathfrak {h}}\) is finite dimensional. This proves the first part. The second part can be proven by the same argument as in [3, Lemma 5.1]. \(\square \)

Let us denote by \(L_l\) the solution of (3.3) corresponding to l. In particular, for the basis vectors \(\{{\widehat{\theta }}_1,\ldots ,{\widehat{\theta }}_d\}\) of the lattice \({\mathbb {L}}\), we write \(L_i\) for \(L_{{\widehat{\theta }}_i}\), \(i=1,\ldots ,d\). Note that

$$\begin{aligned} L_l =\sum _{i=1}^dl_iL_i, \end{aligned}$$
(3.4)

where \(\{l_i\}\) are the coordinates of l w.r.t. \(\{{\widehat{\theta }}_i\}\).

Recall the Markov chain \( (\rho _n,X_n)_{n\ge 0}\) on the state space \({\mathcal {E}}({\mathfrak {h}})\times {\mathbb {L}}\). We introduce a related Markov chain \((\rho _n,Y_n)_{n\ge 0}\), defined on the state space \({\mathcal {E}}({\mathfrak {h}})\times A(G_0)\). The transition probabilities are given as follows: from the state \((\rho , e)\), it jumps to \((\rho ',e')\) with probability \(\text {Tr}(B(e')\rho B(e')^*)\), where \(\rho '=\frac{1}{\text {Tr}(B(e')\rho B(e')^*)}B(e')\rho B(e')^*\). Notice that if we put \(\Delta X_n:=X_n-X_{n-1}\in \{{{\widehat{\theta }}}(e):\,e\in A(G_0)\}\), then \((\rho _n,\Delta X_n)_{n\ge 0}\) is a Markov chain that is equivalent to \((\rho _n,{{\widehat{\theta }}}(Y_n))_{n\ge 0}\). The Markov operator (transition operator) for the Markov chain \((\rho _n,Y_n)_{n\ge 0}\) is denoted by P.

Remark 3.3

We emphasize here that if \((\rho ,e)\) is the present state for the Markov chain \((Y_n)\) and, in particular, if \(\rho \) is supported on \({\mathcal {H}}_u\) for some \(u\in V_0\) (recall that \({\mathfrak h}=\oplus _{u\in V_0}{\mathcal {H}}_u\)), then it jumps to some \((\rho ',e')\) where \(e'\) must satisfy \(\mathrm {o}(e')=u\), since \(B(e')\rho B(e')^*=0\) if \(\mathrm {o}(e')\ne u\).

Let us consider the Poisson equation [3]:

$$\begin{aligned} (I-P)f(\rho ,e)={{\widehat{\theta }}}(e)\cdot l-m \cdot l. \end{aligned}$$
(3.5)

Lemma 3.4

The equation (3.5) admits a solution which is

$$\begin{aligned} f(\rho ,e)=\mathrm {Tr}(\rho L_l)+{{\widehat{\theta }}}(e)\cdot l. \end{aligned}$$

Proof

For the function \(f(\rho ,e)\) in the statement, we have

$$\begin{aligned} (I-P)f(\rho ,e)= & {} \mathrm {Tr}(\rho L_l)+{{\widehat{\theta }}}(e)\cdot l\\&-\sum _{ e'\in A(G_0) }\Big (\text {Tr}\big (B(e')\rho B(e')^*L_l\big )+ \text {Tr}\big (B(e')\rho B(e')^*\big ){{\widehat{\theta }}}(e')\cdot l\Big )\\= & {} \text {Tr}\Big (\rho \Big (L_l-{{\mathcal {L}}}^*(L_l)-\sum _{e'\in A(G_0)}B(e')^*B(e'){{\widehat{\theta }}}(e')\cdot l\Big )\Big )+{{\widehat{\theta }}}(e)\cdot l\\= & {} {{\widehat{\theta }}}(e)\cdot l-m\cdot l. \end{aligned}$$

The proof is completed. \(\square \)

3.2 Central Limit Theorem

In this subsection we present the central limit theorem for the OQRWs on the crystal lattices. All the ingredients needed to show the central limit theorem are prepared in the previous subsection. The main result of this paper is the following theorem.

Theorem 3.5

Consider the open quantum random walk on a crystal lattice (embedded in \({\mathbb {R}}^d\)). Assume that the completely positive map

$$\begin{aligned} {\mathcal {L}}(\rho )=\sum _{e\in A(G_0)}B(e)\rho B(e)^* \end{aligned}$$

admits a unique invariant state \(\rho _\infty \) on \({\mathfrak {h}}\). Let \((\rho _n,X_n)_{n\ge 0}\) be the quantum trajectory process associated to this OQRW. Then,

$$\begin{aligned} \frac{X_n-nm}{\sqrt{n}} \end{aligned}$$

converges in law to the Gaussian distribution \(N(0,\Sigma )\) in \({\mathbb {R}}^d\), with covariance matrix \(\Sigma =(C_{ij})_{i,j=1}^d\) given by

$$\begin{aligned} C_{ij}= & {} -m_im_j+\sum _{e\in A(G_0)}\text {Tr}(B(e)\rho _\infty B(e)^*)\left( {{\widehat{\theta }}}(e)\right) _i\left( {{\widehat{\theta }}}(e)\right) _j\nonumber \\&+2\sum _{e\in A(G_0)}\mathrm {Tr}(B(e)\rho _\infty B(e)^*L_{\mathbf{e}_i})({{\widehat{\theta }}}(e))_j-2m_i\mathrm {Tr}(\rho _\infty L_{\mathbf{e}_j}). \end{aligned}$$
(3.6)

Remark 3.6

Recall that \(\{\mathbf{e}_i\}\) is the canonical basis of \({\mathbb {R}}^d\) and \(L_i=L_{{\widehat{\theta _i}}}\). Since \(\mathbf{e}_i=\sum _{j=1}^d\Theta _{ij}{\widehat{\theta _j}}\) (see (2.2)), we can compute \(L_{\mathbf{e}_i}\) by using \(L_j\)’s:

$$\begin{aligned} L_{\mathbf{e}_i}=\sum _{j=1}^d{\Theta }_{ij}L_j. \end{aligned}$$

In concrete problems, it is generally easier to compute the \(L_i\)'s than the \(L_{\mathbf{e}_i}\)'s.

The proof of the above theorem follows exactly the same methods as in [3]; we only have a graph structure different from the integer lattices, and the arguments need only be modified to suit the new structure. For the readers' convenience, however, we present the full proof in Appendix A.

4 Examples: Hexagonal Lattice

In this section we provide some examples. We will consider OQRWs on the hexagonal lattice, shown in Fig. 1.

Fig. 1 Hexagonal lattice: underlying graph \(G_0\) for the hexagonal lattice (left), hexagonal lattice (right)

4.1 Preparation

We let \(V_0=\{u,v\}\) and let \(\{e_i\}_{i=1,2,3}\) be the three edges in \(G_0\) with \(\mathrm {o}(e_i)=u\) and \(\mathrm {t}(e_i)=v\). (See Fig. 1.) The reversed edges are \(\overline{e}_i\), \(i=1,2,3\). We let

$$\begin{aligned} {{\widehat{\theta }}}(e_1)=\frac{1}{\sqrt{2}}(1,1),\quad {{\widehat{\theta }}}(e_2)=\frac{1}{\sqrt{2}}(-1,1),\quad {{\widehat{\theta }}}(e_3)=0, \end{aligned}$$

and \({{\widehat{\theta }}}(\overline{e}_i)=-{{\widehat{\theta }}}(e_i)\), \(i=1,2,3\). In order to define the operators B(e), \(e\in A(G_0)\), let \(\mathcal H_u={\mathcal {H}}_v={\mathbb {C}}^3\), and \({\mathfrak {h}}={\mathcal {H}}_u\oplus {\mathcal {H}}_v\simeq {\mathbb {C}}^6\). Let \(U=\left[ \begin{matrix}{} \mathbf{u}_1&\mathbf{u}_2&\mathbf{u}_3\end{matrix}\right] \) and \(V=\left[ \begin{matrix}{} \mathbf{v}_1&\mathbf{v}_2&\mathbf{v}_3\end{matrix}\right] \) be \(3\times 3\) unitary matrices with column vectors \(\mathbf{u}_i= [u_{1i}, u_{2i},u_{3i}]^T\) and \(\mathbf{v}_i= [ v_{1i},v_{2i},v_{3i} ]^T\), \(i=1,2,3\). For \(i=1,2,3\), let \(U_i\) be a \(3\times 3\) matrix whose ith column is \(\mathbf{u}_i\) and remaining columns are zeros. Similarly, let \(V_i\) be the \(3\times 3\) matrix, whose ith column is the vector \(\mathbf{v}_i\) and other columns are zeros. For \(i=1,2,3\), let \({{\widetilde{U}}}_i\) and \({{\widetilde{V}}}_i\) be \(6\times 6\) matrices whose block matrices are given as follows:

$$\begin{aligned} {{\widetilde{U}}}_i=\left[ \begin{matrix}0&{}0\\ U_i&{}0\end{matrix}\right] , \quad {{\widetilde{V}}}_i=\left[ \begin{matrix}0&{}V_i\\ 0&{}0\end{matrix}\right] . \end{aligned}$$

Now we define

$$\begin{aligned} B(e_i):={{\widetilde{U}}}_i,\quad \text {and}\quad B(\overline{e}_i):=\widetilde{V}_i,\quad i=1,2,3. \end{aligned}$$
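To see that this block construction automatically yields the Kraus normalization (2.3)–(2.4) for any unitaries U and V, one can run a small numerical check; the random unitaries below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n):
    # A random unitary from the QR decomposition of a complex Gaussian matrix.
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

def column_matrix(W, i):
    """3x3 matrix keeping only the i-th column of W (the U_i / V_i above)."""
    M = np.zeros_like(W)
    M[:, i] = W[:, i]
    return M

U, V = random_unitary(3), random_unitary(3)
Z = np.zeros((3, 3), dtype=complex)

Bs = []
for i in range(3):
    Ui, Vi = column_matrix(U, i), column_matrix(V, i)
    Bs.append(np.block([[Z, Z], [Ui, Z]]))   # B(e_i) = tilde(U)_i
    Bs.append(np.block([[Z, Vi], [Z, Z]]))   # B(ebar_i) = tilde(V)_i

total = sum(b.conj().T @ b for b in Bs)
print(np.allclose(total, np.eye(6)))         # Kraus normalization (2.4)
```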

It is easy to check that a state \(\rho =\rho _u\oplus \rho _v\in {\mathcal {E}}({\mathfrak {h}})\) is an invariant state of the equation \({\mathcal {L}}(\rho )=\rho \), where \({\mathcal {L}}\) is defined in (3.1), if and only if

$$\begin{aligned} \rho _u= & {} \sum _{i=1}^3V_i\rho _v V_i^*, \end{aligned}$$
(4.1)
$$\begin{aligned} \rho _v= & {} \sum _{i=1}^3U_i\rho _u U_i^*. \end{aligned}$$
(4.2)

Consider the following (doubly) stochastic matrices.

$$\begin{aligned} P_u:=\left[ \begin{matrix}|u_{11}|^2&{}|u_{21}|^2&{}|u_{31}|^2\\ |u_{12}|^2&{}|u_{22}|^2&{}|u_{32}|^2\\ |u_{13}|^2&{}|u_{23}|^2&{}|u_{33}|^2\end{matrix}\right] ,\quad P_v:=\left[ \begin{matrix}|v_{11}|^2&{}|v_{21}|^2&{}|v_{31}|^2\\ |v_{12}|^2&{}|v_{22}|^2&{}|v_{32}|^2\\ |v_{13}|^2&{}|v_{23}|^2&{}|v_{33}|^2\end{matrix}\right] . \end{aligned}$$
(4.3)

Proposition 4.1

If the stochastic matrices \(P_uP_v\) and \(P_vP_u\) are irreducible, then the equation \({\mathcal {L}}(\rho )=\rho \) has a unique solution \(\rho =\rho _u\oplus \rho _v\) with \(\rho _u=\rho _v=\frac{1}{6}I\). Conversely, suppose that \(P_uP_v\) and \(P_vP_u\) are reducible such that the corresponding Markov chains have a common decomposition into communicating classes. Then, the equation \(\mathcal L(\rho )=\rho \) has infinitely many different solutions.

Proof

Since \(U_i^*U_j=\delta _{ij}P_i\), where \(P_i\) is the projection onto the ith component, multiplying both sides of Eq. (4.2) by \(U^*\) from the left and by U from the right gives

$$\begin{aligned} U^*\rho _vU=\mathrm {diag}\left( (\rho _u)_{11},(\rho _u)_{22},(\rho _u)_{33}\right) , \end{aligned}$$
(4.4)

where \(\mathrm {diag}(a,b,c)\) denotes the diagonal matrix with entries \(a,\,\,b\), and c. Multiplying Eq. (4.4) by U from the left and \(U^*\) from the right, we get

$$\begin{aligned} \rho _v=U\left( \mathrm {diag}\left( (\rho _u)_{11},(\rho _u)_{22},(\rho _u)_{33}\right) \right) U^*, \end{aligned}$$
(4.5)

and similarly we have

$$\begin{aligned} \rho _u=V\left( \mathrm {diag}\left( (\rho _v)_{11},(\rho _v)_{22},(\rho _v)_{33}\right) \right) V^*. \end{aligned}$$
(4.6)

Comparing the diagonal components in (4.5) and (4.6), we get

$$\begin{aligned} \left[ \begin{matrix}(\rho _v)_{11}&(\rho _v)_{22}&(\rho _v)_{33}\end{matrix}\right] =\left[ \begin{matrix}(\rho _u)_{11}&(\rho _u)_{22}&(\rho _u)_{33}\end{matrix}\right] P_u, \end{aligned}$$
(4.7)

and

$$\begin{aligned} \left[ \begin{matrix}(\rho _u)_{11}&(\rho _u)_{22}&(\rho _u)_{33}\end{matrix}\right] =\left[ \begin{matrix}(\rho _v)_{11}&(\rho _v)_{22}&(\rho _v)_{33}\end{matrix}\right] P_v. \end{aligned}$$
(4.8)

Inserting Eqs. (4.7) and (4.8) into each other, we obtain

$$\begin{aligned} \left[ \begin{matrix}(\rho _u)_{11}&(\rho _u)_{22}&(\rho _u)_{33}\end{matrix}\right] =\left[ \begin{matrix}(\rho _u)_{11}&(\rho _u)_{22}&(\rho _u)_{33}\end{matrix}\right] P_uP_v, \end{aligned}$$
(4.9)

and

$$\begin{aligned} \left[ \begin{matrix}(\rho _v)_{11}&(\rho _v)_{22}&(\rho _v)_{33}\end{matrix}\right] =\left[ \begin{matrix}(\rho _v)_{11}&(\rho _v)_{22}&(\rho _v)_{33}\end{matrix}\right] P_vP_u. \end{aligned}$$
(4.10)

Therefore, \(\left[ \begin{matrix}(\rho _u)_{11}&(\rho _u)_{22}&(\rho _u)_{33}\end{matrix}\right] \) is a stationary vector for the stochastic matrix \(P_uP_v\), and \(\left[ \begin{matrix}(\rho _v)_{11}&(\rho _v)_{22}&(\rho _v)_{33}\end{matrix}\right] \) is a stationary vector for the stochastic matrix \(P_vP_u\).

Suppose that \(P_uP_v\) and \(P_vP_u\) are irreducible. Notice that, since \(P_uP_v\) and \(P_vP_u\) are doubly stochastic matrices, the uniform distribution is always a stationary distribution for both of them. Since the uniform distribution has full support, it follows that the three points (states) are all positive recurrent for the Markov chains. Being irreducible and positive recurrent, the Markov chains with stochastic matrices \(P_uP_v\) and \(P_vP_u\) each have a unique stationary distribution, which must therefore be the uniform distribution. Therefore, we have

$$\begin{aligned} \mathrm {diag}\left( (\rho _u)_{11},(\rho _u)_{22},(\rho _u)_{33}\right) =c_uI\quad \text {and}\quad \mathrm {diag}\left( (\rho _v)_{11},(\rho _v)_{22},(\rho _v)_{33}\right) =c_vI, \end{aligned}$$
(4.11)

where \(c_u\) and \(c_v\) are positive constants satisfying \(c_u+c_v=1/3\). Inserting (4.11) into (4.5) and (4.6), we conclude that \(\rho _u=\rho _v=\frac{1}{6}I\).

Now suppose that \(P_uP_v\) and \(P_vP_u\) are reducible with a common decomposition of the state space, say \(\{1,2,3\}\), into communicating classes. Without loss of generality, we may assume that \(\{\{1,2\},\{3\}\}\) is the common decomposition into communicating classes, and thus \(P_uP_v\) and \(P_vP_u\) have the matrix forms:

$$\begin{aligned} P_uP_v=\left[ \begin{matrix}*&{}*&{}0\\ *&{}*&{}0\\ 0&{}0&{}1\end{matrix}\right] ,\quad P_vP_u=\left[ \begin{matrix}\star &{}\star &{}0\\ \star &{}\star &{}0\\ 0&{}0&{}1\end{matrix}\right] . \end{aligned}$$
(4.12)

In this case, we will show in Appendix B that U and V are of the following forms:

$$\begin{aligned} U=\left[ \begin{matrix} u_{11}&{}u_{12}&{}0\\ u_{21}&{}u_{22}&{}0\\ 0&{}0&{}u_{33}\end{matrix}\right] ,\quad V=\left[ \begin{matrix} v_{11}&{}v_{12}&{}0\\ v_{21}&{}v_{22}&{}0\\ 0&{}0&{}v_{33}\end{matrix}\right] . \end{aligned}$$
(4.13)

Let us then show that for any \(\lambda \in [0,1]\), \(\rho ^{(\lambda )}=\rho _u^{(\lambda )}\oplus \rho _v^{(\lambda )}\) with \(\rho _u^{(\lambda )}=\rho _v^{(\lambda )}=\frac{1}{2}\mathrm {diag}(\lambda /2,\lambda /2,(1-\lambda ))\) are all solutions to the equation \({\mathcal {L}}(\rho )=\rho \), that is, they satisfy the Eqs. (4.1) and (4.2). First notice that

$$\begin{aligned} \sum _{i=1}^3U_iU_i^*=I\quad \text {and}\quad \sum _{i=1}^3V_iV_i^*=I. \end{aligned}$$

In fact, if \(i\ne j\), a direct computation shows that \(U_iU_j^*=0\) and \(V_iV_j^*=0\). Therefore,

$$\begin{aligned} \sum _{i=1}^3U_iU_i^*=\left( \sum _{i=1}^3U_i\right) \left( \sum _{i=1}^3U_i^*\right) =UU^*=I, \end{aligned}$$

and similarly we show the second equation. We rewrite

$$\begin{aligned} \rho _u^{(\lambda )}=\frac{\lambda }{4}I+\frac{2-3\lambda }{4}\left[ \begin{matrix}0&{}0&{}0\\ 0&{}0&{}0\\ 0&{}0&{}1\end{matrix}\right] . \end{aligned}$$

Then, by the above observation,

$$\begin{aligned} \sum _iU_i\rho _u^{(\lambda )}U_i^*= & {} \frac{\lambda }{4}I+\frac{2-3\lambda }{4}\sum _iU_i\left[ \begin{matrix}0&{}0&{}0\\ 0&{}0&{}0\\ 0&{}0&{}1\end{matrix}\right] U_i^*\\= & {} \frac{\lambda }{4}I+\frac{2-3\lambda }{4}|u_{33}|^2\left[ \begin{matrix}0&{}0&{}0\\ 0&{}0&{}0\\ 0&{}0&{}1\end{matrix}\right] =\rho _v^{(\lambda )}. \end{aligned}$$

Here we have used the fact that \(|u_{33}|^2=1\) from the form of unitary U in (4.13). Similarly we can show that the equation \(\rho _u^{(\lambda )}=\sum _iV_i\rho _v^{(\lambda )}V_i^*\) holds. This completes the proof. \(\square \)
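The two branches of Proposition 4.1 can be checked numerically. In the sketch below we take \(U_G\) to be the \(3\times 3\) Grover matrix and \(U_H\) the Hadamard matrix padded by 1, matching the matrices of Example 4.2 (an assumption consistent with the stochastic products displayed there).

```python
import numpy as np

UG = np.array([[-1.0, 2.0, 2.0], [2.0, -1.0, 2.0], [2.0, 2.0, -1.0]]) / 3
UH = np.eye(3)
UH[:2, :2] = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def col(W, i):
    # Matrix keeping only the i-th column of W (the U_i / V_i of Sect. 4.1)
    M = np.zeros_like(W)
    M[:, i] = W[:, i]
    return M

def L_map(rho_u, rho_v, U, V):
    """One application of L in block form, cf. (4.1)-(4.2)."""
    new_u = sum(col(V, i) @ rho_v @ col(V, i).conj().T for i in range(3))
    new_v = sum(col(U, i) @ rho_u @ col(U, i).conj().T for i in range(3))
    return new_u, new_v

# Irreducible case U = V = U_G: rho_u = rho_v = I/6 is invariant.
ru_g, rv_g = L_map(np.eye(3) / 6, np.eye(3) / 6, UG, UG)

# Reducible case U = V = U_H: a one-parameter family of invariant states.
lam = 0.3
r_lam = 0.5 * np.diag([lam / 2, lam / 2, 1 - lam])
ru_h, rv_h = L_map(r_lam, r_lam, UH, UH)

print(np.allclose(ru_g, np.eye(3) / 6), np.allclose(ru_h, r_lam))
```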

Example 4.2

Let us consider the following two unitary matrices:

$$\begin{aligned} U_G=\frac{1}{3}\left[ \begin{matrix}-1&{}2&{}2\\ 2&{}-1&{}2\\ 2&{}2&{}-1\end{matrix}\right] ,\quad U_H=\left[ \begin{matrix}\frac{1}{\sqrt{2}}&{}\frac{1}{\sqrt{2}}&{}0\\ \frac{1}{\sqrt{2}}&{}-\frac{1}{\sqrt{2}}&{}0\\ 0&{}0&{}1\end{matrix}\right] . \end{aligned}$$
(4.14)

For the choices of \((U,V)\) we consider three cases.

  1. (i)

    \((U,V)=(U_G,U_G)\). In this case we have

    $$\begin{aligned} P_uP_v=P_vP_u=\frac{1}{81}\left[ \begin{matrix}33&{}24&{}24\\ 24&{}33&{}24\\ 24&{}24&{}33 \end{matrix}\right] . \end{aligned}$$

    Thus \(P_uP_v=P_vP_u\) is irreducible and we have a unique invariant state \(\rho =\rho _u\oplus \rho _v\) with \(\rho _u=\rho _v=\frac{1}{6}I\) for the equation \({\mathcal {L}}(\rho )=\rho \).

  2. (ii)

    \((U,V)=(U_G,U_H)\). In this case we have

    $$\begin{aligned} P_uP_v=P_vP_u=\frac{1}{18}\left[ \begin{matrix}5&{}5&{}8\\ 5&{}5&{}8\\ 8&{}8&{}2\end{matrix}\right] . \end{aligned}$$

    Thus again \(P_uP_v=P_vP_u\) is irreducible and there is a unique invariant state \(\rho =\rho _u\oplus \rho _v\) with \(\rho _u=\rho _v=\frac{1}{6}I\).

  3. (iii)

    \((U,V)=(U_H,U_H)\). In this case we have

    $$\begin{aligned} P_uP_v=P_vP_u=\frac{1}{2}\left[ \begin{matrix}1&{}1&{}0\\ 1&{}1&{}0\\ 0&{}0&{}2 \end{matrix}\right] . \end{aligned}$$

    Here the stochastic matrix \(P_uP_v=P_vP_u\) is not irreducible and the equation \({\mathcal {L}}(\rho )=\rho \) has many different solutions. We can check that for any \(\lambda \in [0,1]\), \(\rho ^{(\lambda )}=\rho _u^{(\lambda )}\oplus \rho _v^{(\lambda )}\) with \(\rho _u^{(\lambda )}=\rho _v^{(\lambda )}=\frac{1}{2}\mathrm {diag}(\lambda /2,\lambda /2,(1-\lambda ))\) are all invariant states.
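The three products above are easy to reproduce numerically, assuming \(U_G\) is the \(3\times 3\) Grover matrix and \(U_H\) the Hadamard matrix padded by 1 (an assumption consistent with the displayed stochastic matrices):

```python
import numpy as np

UG = np.array([[-1.0, 2.0, 2.0], [2.0, -1.0, 2.0], [2.0, 2.0, -1.0]]) / 3
UH = np.array([[1.0, 1.0, 0.0], [1.0, -1.0, 0.0],
               [0.0, 0.0, np.sqrt(2)]]) / np.sqrt(2)

def P(W):
    # Stochastic matrix of (4.3): the (i, j) entry is |w_{ji}|^2
    return np.abs(W.T) ** 2

PG, PH = P(UG), P(UH)

print(np.round(81 * PG @ PG))   # case (i)
print(np.round(18 * PG @ PH))   # case (ii)
print(np.round(2 * PH @ PH))    # case (iii)
```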

4.2 Example: Nonzero Covariance

From now on let us focus on a fixed model by taking \(U=V=U_G\), with \(U_G\) as in (4.14). We want to determine the mean m and covariance matrix \(\Sigma \) in Theorem 3.5. Since the unique invariant state of the equation \({\mathcal {L}}(\rho )=\rho \) is \(\rho _\infty =\frac{1}{6}I\), it is easy to see from Eq. (3.2) that \(m=0\). By computing directly from (3.3), we see that, up to an additive constant multiple of the identity,

$$\begin{aligned} L_1=L_{1,u}\oplus L_{1,v}, \,\,\text {with }L_{1,u}=-L_{1,v}=\frac{1}{6}\left[ \begin{matrix}7&{}0&{}0\\ 0&{}-2&{}0\\ 0&{}0&{}-2 \end{matrix}\right] , \end{aligned}$$

and

$$\begin{aligned} L_2=L_{2,u}\oplus L_{2,v}, \,\,\text {with }L_{2,u}=-L_{2,v}=\frac{1}{6}\left[ \begin{matrix}-2&{}0&{}0\\ 0&{}7&{}0\\ 0&{}0&{}-2 \end{matrix}\right] . \end{aligned}$$

Notice that

$$\begin{aligned} \Theta =\frac{1}{\sqrt{2}}\left[ \begin{matrix}1&{}-1\\ 1&{}1\end{matrix}\right] . \end{aligned}$$

Therefore, we get

$$\begin{aligned} L_{\mathbf{e}_1}=\Theta _{11}L_1+\Theta _{12}L_2=L_{\mathbf{e}_1,u}\oplus L_{\mathbf{e}_1,v}, \,\,\text {with }L_{\mathbf{e}_1,u}=-L_{\mathbf{e}_1,v}=\frac{3}{2\sqrt{2}}\left[ \begin{matrix}1&{}0&{}0\\ 0&{}-1&{}0\\ 0&{}0&{}0 \end{matrix}\right] , \end{aligned}$$

and

$$\begin{aligned} L_{\mathbf{e}_2}=\Theta _{21}L_1+\Theta _{22}L_2=L_{\mathbf{e}_2,u}\oplus L_{\mathbf{e}_2,v}, \,\,\text {with }L_{\mathbf{e}_2,u}=-L_{\mathbf{e}_2,v}=\frac{1}{6\sqrt{2}}\left[ \begin{matrix}5&{}0&{}0\\ 0&{}5&{}0\\ 0&{}0&{}-4 \end{matrix}\right] . \end{aligned}$$

Now we are ready to compute the covariance matrix \(\Sigma \) given in (3.6). Since the mean m is zero, we are left with

$$\begin{aligned} C_{ij}= & {} \sum _{e\in A(G_0)}\text {Tr}(B(e)\rho _\infty B(e)^*)\left( {{\widehat{\theta }}}(e)\right) _i\left( {{\widehat{\theta }}}(e)\right) _j\nonumber \\&+2\sum _{e\in A(G_0)}\mathrm {Tr}(B(e)\rho _\infty B(e)^*L_{\mathbf{e}_i})\left( {{\widehat{\theta }}}(e)\right) _j\nonumber \\=: & {} C^{(1)}_{ij}+C_{ij}^{(2)}. \end{aligned}$$
(4.15)

For the first term, the trace factor equals \(1/6\) for every \(e\), and thus we get

$$\begin{aligned} C^{(1)}=\frac{1}{3}I. \end{aligned}$$

For the second term, since \(\rho _\infty =\frac{1}{6}I\oplus \frac{1}{6}I\), we first compute the products \(B(e)\rho _\infty B(e)^*L_{\mathbf{e}_i}\) before taking the trace. Using this we get

$$\begin{aligned} C^{(2)}=\frac{1}{9}\left[ \begin{matrix}3&{}0\\ 0&{}-1\end{matrix}\right] . \end{aligned}$$

Thus, summing the two terms, we get the covariance matrix

$$\begin{aligned} \Sigma =C^{(1)}+C^{(2)}=\frac{2}{9}\left[ \begin{matrix}3&{}0\\ 0&{}1\end{matrix}\right] . \end{aligned}$$
(4.16)

Remark 4.3

The movements between the points u and v within a single site do not contribute to the actual displacement. This is reflected in the fact that the variance along the vertical direction (y-axis) is smaller than that along the horizontal direction (x-axis) in (4.16).

Notice that the characteristic function for the Gaussian random variable X with mean zero and covariance \(\Sigma \) in (4.16) is

$$\begin{aligned} {\mathbb {E}}(e^{i\langle \mathbf{t},X\rangle })=e^{-\frac{1}{9}(3t_1^2+t_2^2)}. \end{aligned}$$
(4.17)
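The arithmetic behind (4.16) and (4.17) can be verified with a few lines of NumPy (a check of our own; the test points \(\mathbf{t}\) are chosen arbitrarily): the sum \(C^{(1)}+C^{(2)}\) and the Gaussian exponent \(-\frac{1}{2}\mathbf{t}^T\Sigma \mathbf{t}\) are confirmed.

```python
import numpy as np

# Sigma = C1 + C2 as in (4.16).
C1 = np.eye(2) / 3
C2 = np.diag([3.0, -1.0]) / 9
Sigma = C1 + C2
assert np.allclose(Sigma, (2 / 9) * np.diag([3.0, 1.0]))

# Gaussian characteristic-function exponent -t.Sigma.t/2, as in (4.17).
for t in [np.array([1.0, 0.0]), np.array([0.5, 2.0])]:
    assert np.isclose(-0.5 * t @ Sigma @ t, -(3 * t[0]**2 + t[1]**2) / 9)
```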

4.3 Example: Zero Covariance

Let us give one more example. This example, together with the former one, will be considered again from a different viewpoint, namely via the dual process, in the next section.

For the model on the Hexagonal lattice, let us take \(U=U_G\) in (4.14) and \(V=I\). In this case, since \(P_uP_v=P_vP_u=P_u\) is irreducible, the equation \(\mathcal L(\rho )=\rho \) has a unique invariant state \(\rho _\infty =\frac{1}{6}I\oplus \frac{1}{6}I\). As before, the solutions of (3.3) are, up to a constant multiple of the identity,

$$\begin{aligned} L_1=L_{1,u}\oplus L_{1,v}, \,\,\text {with }L_{1,u}=\left[ \begin{matrix}1&{}0&{}0\\ 0&{}0&{}0\\ 0&{}0&{}0 \end{matrix}\right] , \,\,L_{1,v}=0, \end{aligned}$$

and

$$\begin{aligned} L_2=L_{2,u}\oplus L_{2,v}, \,\,\text {with }L_{2,u}=\left[ \begin{matrix}0&{}0&{}0\\ 0&{}1&{}0\\ 0&{}0&{}0 \end{matrix}\right] , \,\,L_{2,v}=0. \end{aligned}$$

We then get

$$\begin{aligned} L_{\mathbf{e}_1}=\Theta _{11}L_1+\Theta _{12}L_2=L_{\mathbf{e}_1,u}\oplus L_{\mathbf{e}_1,v}, \,\,\text {with }L_{\mathbf{e}_1,u}=\frac{1}{\sqrt{2}}\left[ \begin{matrix}1&{}0&{}0\\ 0&{}-1&{}0\\ 0&{}0&{}0 \end{matrix}\right] ,\,\,L_{\mathbf{e}_1,v}=0, \end{aligned}$$

and

$$\begin{aligned} L_{\mathbf{e}_2}=\Theta _{21}L_1+\Theta _{22}L_2=L_{\mathbf{e}_2,u}\oplus L_{\mathbf{e}_2,v}, \,\,\text {with }L_{\mathbf{e}_2,u}=\frac{1}{\sqrt{2}}\left[ \begin{matrix}1&{}0&{}0\\ 0&{}1&{}0\\ 0&{}0&{}0 \end{matrix}\right] ,\,\,L_{\mathbf{e}_2,v}=0. \end{aligned}$$

In this model, the mean and covariance matrix can be computed in the same way as before, and we get

$$\begin{aligned} m=0,\quad \Sigma =0. \end{aligned}$$
(4.18)

This means that the limit distribution is a Dirac measure at the origin.

5 Dual Processes

In this section we consider the dual processes for the OQRWs on the crystal lattices. The concept of dual processes was introduced in [14]; a dual process is an evolution on the dual space, namely the Fourier-transform space of the lattice. Since crystal lattices are intrinsically regular, like the integer lattices, a Fourier analysis can be developed on them.

5.1 Fourier Transform on the Crystal Lattices

Let us denote the usual inner product in \({\mathbb {R}}^d\) by \(\langle \cdot ,\cdot \rangle \). The points of the integer lattice \({\mathbb {Z}}^d\) and the crystal lattice \({\mathbb {L}}\) are naturally embedded in \({\mathbb {R}}^d\). Recall that \(\{{\widehat{\theta }}_1,\ldots ,{\widehat{\theta }}_d\}\) is a basis for \({\mathbb {L}}\). In general it is not orthonormal. We define a one-to-one mapping \(J:{\mathbb {Z}}^d \rightarrow {\mathbb {L}}\) by

$$\begin{aligned} J(\mathbf{x})=\sum _{i=1}^dx_i{\widehat{\theta }}_i,\quad \mathbf{x}=(x_1,\ldots ,x_d)\in {\mathbb {Z}}^d. \end{aligned}$$
(5.1)

Embedded in \({\mathbb {R}}^d\), we see that

$$\begin{aligned} J(\mathbf{x})=(\Theta ^{-1})^T\mathbf{x}, \end{aligned}$$

that is,

$$\begin{aligned} J=\left( \Theta ^{-1}\right) ^T. \end{aligned}$$
(5.2)
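For the hexagonal lattice used later (Sect. 4, with \({\widehat{\theta }}_1=\frac{1}{\sqrt{2}}[1,\,1]^T\), \({\widehat{\theta }}_2=\frac{1}{\sqrt{2}}[-1,\,1]^T\)), \(\Theta \) is orthogonal, so \((\Theta ^{-1})^T=\Theta \). The following sketch (a numerical check of our own) verifies (5.1) and (5.2) in that case.

```python
import numpy as np

# Hexagonal basis vectors and the matrix Theta with columns theta_1, theta_2.
theta1 = np.array([1.0, 1.0]) / np.sqrt(2)
theta2 = np.array([-1.0, 1.0]) / np.sqrt(2)
Theta = np.column_stack([theta1, theta2])
assert np.allclose(Theta.T @ Theta, np.eye(2))  # Theta is orthogonal here

# J = (Theta^{-1})^T agrees with J(x) = sum_i x_i theta_i, as in (5.1)-(5.2).
J = np.linalg.inv(Theta).T
for x in [np.array([1, 0]), np.array([0, 1]), np.array([3, -2])]:
    assert np.allclose(J @ x, x[0] * theta1 + x[1] * theta2)
```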

For a function \(g:{\mathbb {Z}}^d\rightarrow {\mathbb {C}}\), we also make a transformation of g as a function on \({\mathbb {L}}\) by

$$\begin{aligned} J(g)({x}):=g\circ J^{-1}({x}),\quad x\in {\mathbb {L}}. \end{aligned}$$
(5.3)

Let \({\mathbb {T}}:=[0,2\pi ]\). Recall that for a function \(g:\mathbb Z^d\rightarrow {\mathbb {C}}\), its Fourier transform is defined by

$$\begin{aligned} {{\widehat{g}}}(\mathbf{k})=\sum _{\mathbf{x}\in {\mathbb {Z}}^d}e^{-i\langle \mathbf{k},\mathbf{x}\rangle }g(\mathbf{x}),\quad \mathbf{k}\in {\mathbb {T}}^d, \end{aligned}$$

and its inverse Fourier transform is

$$\begin{aligned} g(\mathbf{x})=\frac{1}{(2\pi )^d}\int _{{{\mathbb {T}}}^d}e^{i\langle \mathbf{k},\mathbf{x}\rangle }{{\widehat{g}}}(\mathbf{k})d\mathbf{k}. \end{aligned}$$

For a function \(f:{\mathbb {L}}\rightarrow {\mathbb {C}}\), we also define its Fourier transform (abusing notation) \({{\widehat{f}}}:\Theta ({\mathbb T}^d)\rightarrow {\mathbb {C}}\) by

$$\begin{aligned} {{\widehat{f}}}(\mathbf{k}):= & {} \sum _{x\in {\mathbb {L}}}e^{-i\langle \mathbf{k},x\rangle }f(x)\nonumber \\= & {} \sum _{\mathbf{x}\in {\mathbb {Z}}^d}e^{-i\langle \mathbf{k},J\mathbf{x}\rangle }f\circ J(\mathbf{x})\nonumber \\= & {} \sum _{\mathbf{x}\in {\mathbb {Z}}^d}e^{-i\langle \Theta ^{-1} \mathbf{k},\mathbf{x}\rangle }f\circ J(\mathbf{x}) = \widehat{f\circ J}\left( \Theta ^{-1}\mathbf{k}\right) ,\quad \mathbf{k}\in \Theta \left( \mathbb {T}^d\right) . \end{aligned}$$
(5.4)

On the other hand, for \(x=J(\mathbf{x})\in {\mathbb {L}}\),

$$\begin{aligned} f(x)= & {} f\circ J(\mathbf x)\nonumber \\= & {} \frac{1}{(2\pi )^d}\int _{{{\mathbb {T}}}^d}e^{i\langle \mathbf{k},\mathbf{x}\rangle }\widehat{f\circ J}(\mathbf{k})d\mathbf{k}\nonumber \\= & {} \frac{1}{(2\pi )^d}\int _{{{\mathbb {T}}}^d}e^{i\langle \Theta ^{-1}\Theta \mathbf{k},\mathbf{x}\rangle }\widehat{f\circ J}\left( \Theta ^{-1}\Theta \mathbf{k}\right) d\mathbf{k}\nonumber \\= & {} \frac{1}{|\det \Theta |}\frac{1}{(2\pi )^d}\int _{\Theta ({{\mathbb {T}}}^d)}e^{i\langle \Theta ^{-1}{} \mathbf{k},\mathbf{x}\rangle }\widehat{f\circ J}\left( \Theta ^{-1}\mathbf{k}\right) d\mathbf{k}\nonumber \\= & {} \frac{1}{|\det \Theta |}\frac{1}{(2\pi )^d}\int _{\Theta ({\mathbb {T}}^d)}e^{i\langle \mathbf{k},{x}\rangle }{\widehat{f}}(\mathbf{k})d\mathbf{k},\quad x\in {\mathbb {L}}. \end{aligned}$$
(5.5)

5.2 Dual Processes

In this subsection, we consider dual processes, which were introduced in [14]. The space \({\mathcal {B}}({\mathfrak {h}})\) is equipped with an inner product,

$$\begin{aligned} \langle A,B\rangle :=\text {Tr}(A^*B),\quad A,\,B\in {\mathcal {B}}(\mathfrak h). \end{aligned}$$
(5.6)

We let \({\mathcal {A}}:=\oplus _{x\in {{\mathbb {L}}}}{\mathcal {B}}(\mathfrak h)\) be the direct sum Hilbert space. Taking \({\mathcal {A}}_\mathbf{k}:={\mathcal {B}}({\mathfrak {h}})\) for each \(\mathbf{k}\in \Theta (\mathbb T^d)\), we also introduce the following direct integral Hilbert space:

$$\begin{aligned} \widehat{{\mathcal {A}}}:=\frac{1}{|\det \Theta |}\int _{\Theta (\mathbb T^d)}^{\oplus }{\mathcal {A}}_\mathbf{k}\frac{1}{(2\pi )^d}d\mathbf{k}. \end{aligned}$$

For each \(e\in A(G_0)\), we let \(T(e)\) be the translation on \(l^2({\mathbb {L}})\) defined, for each \(a=(a_x)_{x\in {\mathbb {L}}}\), by

$$\begin{aligned} (T(e)a)_x=a_{x-{\widehat{\theta }}(e)}. \end{aligned}$$

For any \(B\in {\mathcal {B}}({\mathfrak {h}})\), we let \(L_B\) and \(R_B\) be the left and right multiplication operators, respectively, on \({\mathcal {B}}({\mathfrak {h}})\):

$$\begin{aligned} L_B(A):=BA,\quad R_B(A):=AB,\quad A\in {\mathcal {B}}({\mathfrak {h}}). \end{aligned}$$

Slightly abusing the notations, we also let \(L_B\) and \(R_B\) be the left and right multiplication operators, respectively, on \(\mathcal A\) and on \(\widehat{{\mathcal {A}}}\): for \(a=(a_x)_{x\in {\mathbb {L}}}\in {\mathcal {A}}\) and \({{\widehat{a}}}=(a(\mathbf{k}))_{\mathbf{k}\in \Theta ({\mathbb {T}}^d)}\in \widehat{{\mathcal {A}}} \),

$$\begin{aligned} L_B(a):= & {} (Ba_x)_{x\in {\mathbb {L}}},\quad R_B(a):=(a_xB)_{x\in {\mathbb {L}}},\\ L_B({{\widehat{a}}}):= & {} (Ba(\mathbf{k}))_{\mathbf{k}\in \Theta ({\mathbb {T}}^d)},\quad R_B({{\widehat{a}}}):=(a(\mathbf{k})B)_{\mathbf{k}\in \Theta ({\mathbb {T}}^d)}. \end{aligned}$$

Recall that the OQRWs on the crystal lattices are the evolution of the states of the form in (2.8):

$$\begin{aligned} \rho =\sum _{x\in {\mathbb {L}} }\left( \oplus _{u\in V_0}\rho _{(x,u)}\right) \otimes |x\rangle \langle x|. \end{aligned}$$

Letting \(\rho _x:=\oplus _{u\in V_0}\rho _{(x,u)}\in \mathcal B({\mathfrak {h}}) \), we regard the above state as \(\rho =(\rho _x)_{x\in {\mathbb {L}}}\in {\mathcal {A}}\). Then, the dynamics of the OQRWs on the crystal lattices are represented as

$$\begin{aligned} \rho ^{(n)}=\left( \sum _{e\in A(G_0)}T(e)L_{B(e)}R_{B(e)^*}\right) ^n\rho ^{(0)}. \end{aligned}$$
(5.7)
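The structure of (5.7), translations composed with left and right multiplications, preserves the total trace at every step because the Kraus matrices satisfy the normalization condition. The sketch below illustrates this on the one-dimensional lattice \({\mathbb {Z}}\) with generic Kraus matrices of our own choosing (not the \(B(e)\) of the hexagonal model); the crystal-lattice case is the same with vector-valued jumps.

```python
import numpy as np

# Kraus matrices of our choosing with B_+^* B_+ + B_-^* B_- = I.
p = 0.3
B_plus = np.sqrt(p) * np.eye(2)
B_minus = np.sqrt(1 - p) * np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose(B_plus.conj().T @ B_plus + B_minus.conj().T @ B_minus, np.eye(2))

def step(rho):
    """One step of (5.7): rho is a dict position -> 2x2 block; apply
    sum_e T(e) L_{B(e)} R_{B(e)^*}."""
    out = {}
    for x, r in rho.items():
        for dx, B in [(+1, B_plus), (-1, B_minus)]:
            out[x + dx] = out.get(x + dx, np.zeros((2, 2), complex)) + B @ r @ B.conj().T
    return out

rho = {0: np.eye(2) / 2}  # initial state concentrated at the origin
for _ in range(10):
    rho = step(rho)
total = sum(np.trace(r).real for r in rho.values())
assert abs(total - 1.0) < 1e-12  # total trace (total probability) is preserved
```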

Taking the Fourier transform, we find that the evolution is given by

$$\begin{aligned} \widehat{\rho ^{(n)}}(\mathbf{k})=\left( \sum _{e\in A(G_0)}e^{-i\langle \mathbf{k},{{\widehat{\theta }}}(e)\rangle }L_{B(e)}R_{B(e)^*}\right) ^n\widehat{\rho ^{(0)}}(\mathbf{k}),\quad \mathbf{k}\in \Theta ({\mathbb {T}}^d). \end{aligned}$$
(5.8)

As in [14], we define the dual process as the process \((Y_n(\mathbf{k}))_{\mathbf{k}\in \Theta ({\mathbb {T}}^d)}\in \widehat{\mathcal A}\) given by

$$\begin{aligned} Y_n(\mathbf{k}):=\left( \sum _{e\in A(G_0)}e^{-i\langle \mathbf{k},{{\widehat{\theta }}}(e)\rangle }L_{B(e)^*}R_{B(e)}\right) ^n(I_{\mathfrak h}). \end{aligned}$$
(5.9)

Notice that the positions of B(e) and \(B(e)^*\) are interchanged between Eqs. (5.8) and (5.9). The usefulness of the dual process is shown by the following theorem, which was observed in [14, Theorem 2.3]. For a proof we refer to [14]; one only needs to apply the Fourier transform on the crystal lattice \({\mathbb {L}}\) introduced in the previous subsection.

Theorem 5.1

The probability distribution of the OQRW at time n is given by

$$\begin{aligned} p_x^{(n)}=\frac{1}{|\det \Theta |}\frac{1}{(2\pi )^d}\int _{\Theta (\mathbb T^d)} e^{i\langle \mathbf{k},x\rangle }\mathrm {Tr}\left( \widehat{\rho ^{(0)}}(\mathbf{k})Y_n(\mathbf{k})\right) d\mathbf{k}, \quad x\in {\mathbb {L}}. \end{aligned}$$

That is, the Fourier transform of \((p_x^{(n)})_{x\in {\mathbb {L}}}\) is

$$\begin{aligned} \widehat{p_\cdot ^{(n)}}(\mathbf{k})=\mathrm {Tr}\left( \widehat{\rho ^{(0)}}(\mathbf{k})Y_n(\mathbf{k})\right) ,\quad \mathbf{k}\in \Theta ({\mathbb {T}}^d). \end{aligned}$$

Example 5.2

Let us consider the OQRW on the Hexagonal lattice introduced in Sect. 4.3. In this case

$$\begin{aligned} P_uP_v=P_vP_u=P_u=: P=\frac{1}{9}\left[ \begin{matrix}1&{}4&{}4\\ 4&{}1&{}4\\ 4&{}4&{}1\end{matrix}\right] \end{aligned}$$
(5.10)

is irreducible, and so by Proposition 4.1 the equation \({\mathcal {L}}(\rho )=\rho \) has a unique solution and Theorem 3.5 applies. Here we have \({\widehat{\theta }}_1=1/\sqrt{2}[1,\,1]^T\), \({\widehat{\theta }}_2=1/\sqrt{2}[-1,\,1]^T\), \(\Theta =\frac{1}{\sqrt{2}}\left[ \begin{matrix}1&{}-1\\ 1&{}1\end{matrix}\right] \), and hence \(\det \Theta =1\). Let us define diagonal matrices

$$\begin{aligned} D(\mathbf{k}):=\mathrm {diag}\left( e^{-i\langle \mathbf{k},{\widehat{\theta }}_1\rangle },e^{-i\langle \mathbf{k},{\widehat{\theta }}_2\rangle },1\right) ,\quad \mathbf{k}\in \Theta ({\mathbb {T}}^2). \end{aligned}$$
(5.11)

It is readily computed that

$$\begin{aligned}&Y_n(\mathbf{k})=A_n(\mathbf{k})\oplus B_n(\mathbf{k});\\&A_n(\mathbf{k})=\mathrm {diag}(a_{n,1}(\mathbf{k}),a_{n,2}(\mathbf{k}),a_{n,3}(\mathbf{k})),\quad B_n(\mathbf{k})=\mathrm {diag}(b_{n,1}(\mathbf{k}),b_{n,2}(\mathbf{k}),b_{n,3}(\mathbf{k})), \end{aligned}$$

where the components satisfy the following recurrence relations.

$$\begin{aligned} \left[ \begin{matrix}a_{n,1}(\mathbf{k})\\ a_{n,2}(\mathbf{k})\\ a_{n,3}(\mathbf{k})\end{matrix}\right] =D(\mathbf{k})P\left[ \begin{matrix}b_{n-1,1}(\mathbf{k})\\ b_{n-1,2}(\mathbf{k})\\ b_{n-1,3}(\mathbf{k})\end{matrix}\right] ,\quad \left[ \begin{matrix}b_{n,1}(\mathbf{k})\\ b_{n,2}(\mathbf{k})\\ b_{n,3}(\mathbf{k})\end{matrix}\right] =D(\mathbf{k})^*\left[ \begin{matrix}a_{n-1,1}(\mathbf{k})\\ a_{n-1,2}(\mathbf{k})\\ a_{n-1,3}(\mathbf{k})\end{matrix}\right] . \end{aligned}$$
(5.12)

Solving the equations (5.12) with initial conditions \(A_0(\mathbf{k})=I\) and \(B_0(\mathbf{k})=I\), we get

$$\begin{aligned} \left[ \begin{matrix}a_{n,1}(\mathbf{k})\\ a_{n,2}(\mathbf{k})\\ a_{n,3}(\mathbf{k})\end{matrix}\right] =\widetilde{A_n}(\mathbf{k})\left[ \begin{matrix}1\\ 1\\ 1\end{matrix}\right] ,\quad \left[ \begin{matrix}b_{n,1}(\mathbf{k})\\ b_{n,2}(\mathbf{k})\\ b_{n,3}(\mathbf{k})\end{matrix}\right] =\widetilde{B_n}(\mathbf{k})\left[ \begin{matrix}1\\ 1\\ 1\end{matrix}\right] . \end{aligned}$$
(5.13)

Here the matrices \({\widetilde{A}}_n(\mathbf{k})\) and \({\widetilde{B}}_n(\mathbf{k})\) are computed as

$$\begin{aligned} {\widetilde{A}}_n(\mathbf{k})={\left\{ \begin{array}{ll} D(\mathbf{k})P^mD(\mathbf{k})^*,&{}n=2m\\ D(\mathbf{k})P^m,&{}n=2m-1\end{array}\right. },\quad {\widetilde{B}}_n(\mathbf{k})={\left\{ \begin{array}{ll} P^m,&{}n=2m\\ P^{m-1}D(\mathbf{k})^*,&{}n=2m-1.\end{array}\right. } \end{aligned}$$
(5.14)
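The closed form (5.14) can be checked against the recursion (5.12) directly. The following sketch (a verification of our own, at a randomly chosen \(\mathbf{k}\); recall \(V=I\) here, so the b-update carries no factor P) iterates (5.12) and compares with \({\widetilde{A}}_n(\mathbf{k})\), \({\widetilde{B}}_n(\mathbf{k})\) applied to \([1,1,1]^T\).

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[1, 4, 4], [4, 1, 4], [4, 4, 1]]) / 9.0
th1 = np.array([1.0, 1.0]) / np.sqrt(2)
th2 = np.array([-1.0, 1.0]) / np.sqrt(2)
k = rng.uniform(0, 2 * np.pi, 2)
D = np.diag([np.exp(-1j * k @ th1), np.exp(-1j * k @ th2), 1.0])

a, b = np.ones(3, complex), np.ones(3, complex)
for n in range(1, 9):
    a, b = D @ P @ b, D.conj() @ a          # the recursion (5.12)
    m = (n + 1) // 2                        # n = 2m or n = 2m - 1
    if n % 2 == 0:
        A_t = D @ np.linalg.matrix_power(P, m) @ D.conj()
        B_t = np.linalg.matrix_power(P, m)
    else:
        A_t = D @ np.linalg.matrix_power(P, m)
        B_t = np.linalg.matrix_power(P, m - 1) @ D.conj()
    assert np.allclose(a, A_t @ np.ones(3))  # closed form (5.14), a-part
    assert np.allclose(b, B_t @ np.ones(3))  # closed form (5.14), b-part
```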

Notice that P is diagonalized as

$$\begin{aligned} P=Q+\left( -\frac{1}{3}\right) (I-Q),\quad Q:=\frac{1}{3}\left[ \begin{matrix}1&{}1&{}1\\ 1&{}1&{}1\\ 1&{}1&{}1\end{matrix}\right] , \end{aligned}$$

where Q is the projection onto the span of \([1,\,1,\,1]^T\). We thus get

$$\begin{aligned} P^m=Q+\left( -\frac{1}{3}\right) ^m(I-Q),\quad m\ge 1, \end{aligned}$$

and

$$\begin{aligned} P^m\left[ \begin{matrix}1\\ 1\\ 1\end{matrix}\right] =\left[ \begin{matrix}1\\ 1\\ 1\end{matrix}\right] . \end{aligned}$$

Having found \(Y_n(\mathbf{k})\), we can compute the probability density \(p_x^{(n)}\) explicitly by Theorem 5.1. Let us take

$$\begin{aligned} \rho ^{(0)}:=\left( \frac{1}{6}I\oplus \frac{1}{6}I\right) \otimes |0\rangle \langle 0|. \end{aligned}$$

Then, by Theorem 5.1, using the above computations we see that

$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {E}}\left[ e^{i\langle {\mathbf{t}},\frac{X_n}{\sqrt{n}}\rangle }\right]= & {} \lim _{n\rightarrow \infty }\sum _{x\in {\mathbb {L}}}e^{i\langle {\mathbf{t}},\frac{x}{\sqrt{n}}\rangle } p_x^{(n)}\\= & {} \lim _{n\rightarrow \infty } {\widehat{p_\cdot ^{(n)}}}(-\frac{\mathbf{t}}{\sqrt{n}})\\= & {} (1)_{{\mathbf{k}}\in \Theta ({{\mathbb {T}}}^2)}, \end{aligned}$$

that is, the limit is the constant function 1. This means that the limit distribution of \(X_n/\sqrt{n}\) is a Dirac measure at the origin, in agreement with the result of Sect. 4.3. In fact, we see from (5.14) that for \(\rho ^{(0)}=\left( \frac{1}{6}I\oplus \frac{1}{6}I\right) \otimes |0\rangle \langle 0|\) and \(n=2m\),

$$\begin{aligned} \mathrm {Tr}(\widehat{\rho ^{(0)}}(\mathbf{k})Y_n(\mathbf{k}))= & {} \left( \frac{2}{3}+\frac{1}{3}\left( -\frac{1}{3}\right) ^m\right) +\frac{1}{18} \left( 1-\left( -\frac{1}{3}\right) ^m\right) \Big (e^{i\langle \mathbf{k},{\widehat{\theta }}_1\rangle }+e^{-i\langle \mathbf{k},{\widehat{\theta }}_1\rangle }\\&+e^{i\langle \mathbf{k},{\widehat{\theta }}_2\rangle }+e^{-i\langle \mathbf{k},{\widehat{\theta }}_2\rangle }+e^{i\langle \mathbf{k},{\widehat{\theta }}_2-{\widehat{\theta }}_1\rangle }+e^{-i\langle \mathbf{k},{\widehat{\theta }}_2-{\widehat{\theta }}_1\rangle }\Big ), \end{aligned}$$

and similarly for \(n=2m-1\). By Theorem 5.1, this means that the OQRW in this model remains localized at sites near the origin, its starting point. Hence the central limit theorem yields a Dirac measure.

Next we revisit the example in Sect. 4.2, where the covariance matrix was nontrivial.

Example 5.3

We consider the OQRW on the Hexagonal lattice with \(U=V=U_G\) in Sect. 4.2. Recall the diagonal matrices \(D(\mathbf k)\) in (5.11) and the stochastic matrix P in (5.10). As in the former example, we see that

$$\begin{aligned}&Y_n(\mathbf{k})=A_n(\mathbf{k})\oplus B_n(\mathbf{k});\nonumber \\&A_n(\mathbf{k})=\mathrm {diag}(a_{n,1}(\mathbf{k}),a_{n,2}(\mathbf{k}),a_{n,3}(\mathbf{k})),\quad B_n(\mathbf{k})=\mathrm {diag}(b_{n,1}(\mathbf{k}),b_{n,2}(\mathbf{k}),b_{n,3}(\mathbf{k})),\nonumber \\ \end{aligned}$$
(5.15)

where the components satisfy the following recurrence relations.

$$\begin{aligned} \left[ \begin{matrix}a_{n,1}(\mathbf{k})\\ a_{n,2}(\mathbf{k})\\ a_{n,3}(\mathbf{k})\end{matrix}\right] =D(\mathbf{k})P\left[ \begin{matrix}b_{n-1,1}(\mathbf{k})\\ b_{n-1,2}(\mathbf{k})\\ b_{n-1,3}(\mathbf{k})\end{matrix}\right] ,\quad \left[ \begin{matrix}b_{n,1}(\mathbf{k})\\ b_{n,2}(\mathbf{k})\\ b_{n,3}(\mathbf{k})\end{matrix}\right] =D(\mathbf{k})^*P\left[ \begin{matrix}a_{n-1,1}(\mathbf{k})\\ a_{n-1,2}(\mathbf{k})\\ a_{n-1,3}(\mathbf{k})\end{matrix}\right] . \end{aligned}$$
(5.16)

In order to solve the recurrence relation, let us define

$$\begin{aligned} D(\mathbf{k})^{1/2}:=\mathrm {diag}(e^{-i\langle \mathbf{k},{\widehat{\theta }}_1\rangle /2},e^{-i\langle \mathbf{k},{\widehat{\theta }}_2\rangle /2},1), \end{aligned}$$

so that \((D(\mathbf{k})^{1/2})^2=D(\mathbf{k})\). Solving the equations (5.16) with initial conditions \(A_0(\mathbf{k})=I\) and \(B_0(\mathbf{k})=I\), we get

$$\begin{aligned} \left[ \begin{matrix}a_{n,1}(\mathbf{k})\\ a_{n,2}(\mathbf{k})\\ a_{n,3}(\mathbf{k})\end{matrix}\right] =\widetilde{A_n}(\mathbf{k})\left[ \begin{matrix}1\\ 1\\ 1\end{matrix}\right] ,\quad \left[ \begin{matrix}b_{n,1}(\mathbf{k})\\ b_{n,2}(\mathbf{k})\\ b_{n,3}(\mathbf{k})\end{matrix}\right] =\widetilde{B_n}(\mathbf{k})\left[ \begin{matrix}1\\ 1\\ 1\end{matrix}\right] . \end{aligned}$$
(5.17)

Here the matrices \({\widetilde{A}}_n(\mathbf{k})\) and \({\widetilde{B}}_n(\mathbf{k})\) are given by (putting \(D(\mathbf{k})=:D\), for simplicity)

$$\begin{aligned} {\widetilde{A}}_n(\mathbf{k})= & {} {\left\{ \begin{array}{ll} D^{1/2}\left( D^{1/2}PD^*PD^{1/2}\right) ^mD^{1/2},&{}n=2m+1,\\ D^{1/2}\left( D^{1/2}PD^*PD^{1/2}\right) ^{m-1}D^{1/2}PD^*, &{}n=2m,\end{array}\right. } \end{aligned}$$
(5.18)
$$\begin{aligned} {\widetilde{B}}_n(\mathbf{k})= & {} {\left\{ \begin{array}{ll} (D^*)^{1/2}\left( (D^*)^{1/2}PDP(D^*)^{1/2}\right) ^m(D^*)^{1/2},&{}n=2m+1,\\ (D^*)^{1/2}\left( (D^*)^{1/2}PDP(D^*)^{1/2}\right) ^{m-1}(D^*)^{1/2}PD,&{}n=2m.\end{array}\right. } \end{aligned}$$
(5.19)
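As with (5.14), the closed forms (5.18)–(5.19) can be checked against the recursion (5.16). The sketch below (a verification of our own, at a randomly chosen \(\mathbf{k}\)) iterates (5.16) and compares with \({\widetilde{A}}_n(\mathbf{k})\), \({\widetilde{B}}_n(\mathbf{k})\) applied to \([1,1,1]^T\).

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[1, 4, 4], [4, 1, 4], [4, 4, 1]]) / 9.0
th1 = np.array([1.0, 1.0]) / np.sqrt(2)
th2 = np.array([-1.0, 1.0]) / np.sqrt(2)
k = rng.uniform(0, 2 * np.pi, 2)
D = np.diag([np.exp(-1j * k @ th1), np.exp(-1j * k @ th2), 1.0])
Dh = np.diag([np.exp(-1j * k @ th1 / 2), np.exp(-1j * k @ th2 / 2), 1.0])  # D^{1/2}
Dc, Dhc = D.conj(), Dh.conj()
mp = np.linalg.matrix_power

a, b = np.ones(3, complex), np.ones(3, complex)
for n in range(1, 9):
    a, b = D @ P @ b, Dc @ P @ a            # the recursion (5.16)
    if n % 2 == 1:                          # n = 2m + 1
        m = (n - 1) // 2
        A_t = Dh @ mp(Dh @ P @ Dc @ P @ Dh, m) @ Dh
        B_t = Dhc @ mp(Dhc @ P @ D @ P @ Dhc, m) @ Dhc
    else:                                   # n = 2m
        m = n // 2
        A_t = Dh @ mp(Dh @ P @ Dc @ P @ Dh, m - 1) @ Dh @ P @ Dc
        B_t = Dhc @ mp(Dhc @ P @ D @ P @ Dhc, m - 1) @ Dhc @ P @ D
    assert np.allclose(a, A_t @ np.ones(3))  # closed form (5.18)
    assert np.allclose(b, B_t @ np.ones(3))  # closed form (5.19)
```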

Let us take the initial state \(\rho ^{(0)}=\left( \frac{1}{6}I\oplus \frac{1}{6}I\right) \otimes |0\rangle \langle 0|\). We then get \(\widehat{\rho ^{(0)}}(\mathbf{k})=\frac{1}{6}I\oplus \frac{1}{6}I\). We want to get the limit

$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {E}}(e^{i\langle \mathbf{t},\frac{X_n}{\sqrt{n}}\rangle })= & {} \lim _{n\rightarrow \infty }\widehat{p_\cdot ^{(n)}}(-\frac{\mathbf{t}}{\sqrt{n}})\nonumber \\= & {} \frac{1}{6}\lim _{n\rightarrow \infty }\mathrm {Tr}\left( Y_n(-\frac{\mathbf{t}}{\sqrt{n}})\right) . \end{aligned}$$
(5.20)

Using (5.16)–(5.19), we can find the limit in (5.20). One may use Mathematica to evaluate it, but an analytic proof is given in Appendix C. The limit is as follows:

$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {E}}(e^{i\langle \mathbf{t},\frac{X_n}{\sqrt{n}}\rangle })=e^{-\frac{1}{9}(3t_1^2+t_2^2)}. \end{aligned}$$
(5.21)

Notice that this is the same as that obtained in (4.17), Sect. 4.2. That is, the process \(X_n/\sqrt{n}\) converges in distribution to a Gaussian measure with mean zero and covariance \(\Sigma \) in (4.16).
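The limit (5.21) can also be checked numerically. The sketch below (our own choices: the even-step closed forms \(a_{2m}=(DPD^*P)^m\mathbf{1}\) and \(b_{2m}=(D^*PDP)^m\mathbf{1}\) obtained by composing two steps of (5.16), \(n=10^8\), three test points \(\mathbf{t}\), and a tolerance of \(10^{-2}\)) compares \(\frac{1}{6}\mathrm {Tr}\,Y_n(-\mathbf{t}/\sqrt{n})\) with the Gaussian characteristic function.

```python
import numpy as np

P = np.array([[1, 4, 4], [4, 1, 4], [4, 4, 1]]) / 9.0
th1 = np.array([1.0, 1.0]) / np.sqrt(2)   # hat{theta}_1
th2 = np.array([-1.0, 1.0]) / np.sqrt(2)  # hat{theta}_2

def D(k):
    return np.diag([np.exp(-1j * k @ th1), np.exp(-1j * k @ th2), 1.0])

def char_fn(t, n):
    """(1/6) Tr Y_n(-t/sqrt(n)) for even n, via the two-step recursions
    a_{2m} = (D P D* P)^m 1 and b_{2m} = (D* P D P)^m 1 from (5.16)."""
    k = -t / np.sqrt(n)
    Dk, Dc = D(k), D(k).conj()
    m = n // 2
    ones = np.ones(3)
    a = np.linalg.matrix_power(Dk @ P @ Dc @ P, m) @ ones
    b = np.linalg.matrix_power(Dc @ P @ Dk @ P, m) @ ones
    return (a.sum() + b.sum()).real / 6.0

n = 10**8
for t in [np.array([0.5, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]:
    limit = np.exp(-(3 * t[0]**2 + t[1]**2) / 9.0)  # the Gaussian limit (5.21)
    assert abs(char_fn(t, n) - limit) < 1e-2
```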