1 Introduction

Recently, Attal et al. [3, 4] introduced and investigated open quantum random walks (OQRW's) on graphs, which show various dynamical behaviors different from those of the usual quantum walks and include the classical random walk as a special case. They then considered limit theorems for OQRW's, proving a central limit theorem [2]. The purpose of this paper is to further investigate the distributions and limit theorems of OQRW's.

A quantum analog of the classical random walk, called the quantum walk (QW), has been intensively studied for the last decade (see [6, 7, 11, 13]). The most remarkable difference between the classical random walk and the quantum walk appears in the central limit theorem. In the classical random walk, the limit distribution is Gaussian with scaling \(\sqrt{n}\). On the other hand, in the quantum walk the scaling is linear in time, i.e., n, and moreover the limit distribution is far from Gaussian [1, 5, 8–10]. The main feature that makes the quantum walk different from the classical walk is interference.

The OQRW is different from the usual discrete-time QW. It was introduced in order to model quantum efficiency in biological systems and quantum computing, and it is based on the non-unitary dynamics induced by local environments [3, 4]. These random walks deal with density matrices instead of pure states. In [4], Attal et al. developed the quantum trajectory approach for OQRW's, and using this concept they proved the central limit theorem for OQRW's on the d-dimensional integer lattice ℤd [2].

In this paper we focus on OQRW's on ℤ. We introduce the concept of a dual process to the original OQRW. In this picture, the OQRW is a process in which the environment, rather than the walker itself, evolves as time goes on. This is particularly useful when we compute the distribution of the walk, because the initial state is not altered during the evolution; see Sect. 2 for the details. Moreover, for many examples we can compute the distribution of the walk very concretely and thereby also obtain the limit distributions. This paper is organized as follows. In Sect. 2, we introduce the concept of OQRW's following [4] and the new concept of a dual process; then we give our main result. Section 3 is devoted to the proof of our main result and to some preparation that is useful for the examples. In Sect. 4, we consider several concrete examples.

2 Open Quantum Random Walks

In this section, we give a brief definition of the OQRW and define its dual process; then we state the main result. For comparison, we first briefly review the usual quantum walks, the so-called unitary quantum walks.

2.1 Unitary Quantum Walks

The discrete-time QW is a quantum version of the classical random walk with an additional degree of freedom called chirality. The chirality takes values left and right, and it determines the direction of the motion of the walker. At each time step, if the walker has the left chirality, it moves one step to the left, and if it has the right chirality, it moves one step to the right. In this paper, we put

$$|L\rangle=\left[ \begin{array}{c} 1\\0 \end{array} \right],\qquad |R\rangle=\left[ \begin{array}{c} 0\\1 \end{array} \right], $$

where \(|L\rangle\) and \(|R\rangle\) refer to the left and right chirality states, respectively.

For the general setting, the time evolution of the walk is determined by a 2×2 unitary matrix U,

$$U=\left[ \begin{array}{c@{\quad}c} a&b\\ c&d \end{array} \right], $$

with a,b,c,d∈ℂ, where ℂ is the set of complex numbers. The matrix U rotates the chirality before the displacement, which defines the dynamics of the walk. To describe the evolution of our model, we divide U into two matrices:

$$P=\left[ \begin{array}{c@{\quad}c} a&b\\ 0&0 \end{array} \right],\qquad Q=\left[ \begin{array}{c@{\quad}c} 0&0\\ c&d \end{array} \right], $$

with U=P+Q. The important point is that P (resp. Q) makes the walker move to the left (resp. right) at position x at each time step. For example, the Hadamard walk is determined by the Hadamard gate U=H:

$$H=\frac{1}{\sqrt{2}}\left[ \begin{array}{c@{\quad}c} 1&1\\ 1&-1 \end{array} \right]. $$
The Hadamard walk has been intensively investigated in the study of QWs.

Let \(\Xi_n(l,m)\) denote the sum of all paths starting from the origin consisting of l steps to the left and m steps to the right at time n, with n=l+m. For example,

Let ℤ+={0,1,2,…}. The probability that our quantum walker, starting from the origin with initial qubit state φ=t[α,β], α,β∈ℂ, |α|2+|β|2=1, is at position x∈ℤ at time n∈ℤ+ is defined by

$$ P(X_n=x)=\bigl\|\Xi_n(l,m)\varphi\bigr\|^2 $$
(1)

with n=l+m and x=−l+m, where t[α,β] means the transpose of [α,β].

The reason that we call this quantum walk the unitary quantum walk is that the evolution of the walk is a unitary transformation of a state in a Hilbert space. More concretely, let us denote by \(\mathcal {H}_{C}:=\mathbb{C}^{2}\) the space of the intrinsic structure, namely the chirality, and let \(\mathcal{H}_{S}:=l^{2}(\mathbb{Z})\) be the space of positions. The Hilbert space on which the quantum walk evolves is given by

$$\mathcal{H}:=\mathcal{H}_C\otimes\mathcal{H}_S \thickapprox l^2\bigl(\mathbb{Z}, \mathbb{C}^2\bigr) $$

and any state, i.e., a unit vector of \(\mathcal{H}\) is given by

$$\psi=\bigl(\psi(x)\bigr)_{x\in\mathbb{Z}}, \quad\psi(x)\in \mathbb{C}^2. $$

Let us denote by T the left translation on \(l^2(\mathbb{Z})\):

$$(Ta) (x)=a(x+1), \quad\text{for }a=\bigl(a(x)\bigr)_{x\in\mathbb{Z}}. $$

T is a unitary map whose adjoint is the right translation:

$$\bigl(T^*a\bigr) (x)=a(x-1), \quad\text{for }a=\bigl(a(x)\bigr )_{x\in\mathbb{Z}}. $$

Let \(\psi_{0}\in\mathcal{H}\) be any initial state. Then the dynamics of 1-dimensional quantum walk driven by the unitary operator U=P+Q in the above is represented as

$$\psi_n=\bigl(PT+QT^*\bigr)^n\psi_0, $$

where \(\psi_{n}=(\psi_{n}(x))_{x\in\mathbb{Z}}\in\mathcal{H}\) is the state at time n. It is easy to see that the operator \(PT+QT^*\) is a unitary operator (here tacitly we understand the operators P and Q as the natural extensions of the original ones to \(\bigoplus_{x\in\mathbb {Z}}\mathcal{B} (\mathcal{H}_{C})\)), and hence the quantum walk is a unitary evolution on the Hilbert space \(\mathcal{H}\). The probability distribution of the walk is then nothing but

$$P(X_n=x)=\bigl\|\psi_n(x)\bigr\|^2, $$

with initial state \(\psi_0=\varphi\otimes|0\rangle\). Here \(\{|x\rangle: x\in\mathbb{Z}\}\) denotes the canonical orthonormal basis of \(l^2(\mathbb{Z})\).

The limit distribution of quantum walk is very different from that of classical random walk. It is ballistic instead of diffusive in the sense that X n /n converges weakly to a limit whose density function, rigorously shown by Konno [9, 10], is given by (in the case abcd≠0)

$$x\mapsto\frac{\sqrt{1-|a|^2}(1-\beta x)}{\pi(1-x^2)\sqrt {|a|^2-x^2}}1_{(-|a|,|a|)}(x), $$

where β is a constant depending on the initial state φ and the unitary matrix U. Here \(1_A(x)=1\) if \(x\in A\) and \(1_A(x)=0\) if \(x\notin A\).

2.2 Open Quantum Random Walks

In this subsection we introduce OQRW's following [4]. OQRW's can be defined on integer lattices of any dimension, as well as on general graphs, but here we confine ourselves to the 1-dimensional lattice ℤ for simplicity.

As before \(\mathcal{H}_{C}=\mathbb{C}^{2}\) is the Hilbert space for the intrinsic structure and \(\mathcal{H}_{S}=l^{2}(\mathbb{Z})\) for positions. Let B and C be two linear operators on \(\mathcal{H}_{C}\), i.e., 2×2 matrices, such that

$$B^*B+C^*C=I. $$

Define a completely positive map on the density matrices of \(\mathcal {H}_{C}\) by

$$ \mathcal{L}(\rho):=B\rho B^*+C\rho C^*. $$

The OQRW lifts this map to \(\mathcal{H}=\mathcal{H}_{C}\otimes\mathcal {H}_{S}\) in the following way. We consider density matrices on \(\mathcal{H}\) of the type:

$$\rho=\sum_{x\in\mathbb{Z}}\rho_x\otimes|x\rangle \langle x|, $$

where each \(\rho_x\) is a positive matrix and the family satisfies

$$\sum_{x\in\mathbb{Z}}\operatorname{Tr} (\rho_x)=1. $$

Let \(\widetilde{\mathcal{L}}\) be the operator acting on such density matrices as follows:

$$ \widetilde{\mathcal{L}}(\rho)=\sum _{x\in\mathbb{Z}}\bigl(B\rho_{x+1}B^*+C\rho_{x-1}C^* \bigr)\otimes|x\rangle\langle x|. $$
(2)

The OQRW is an evolution obtained by iteration of the map \(\widetilde {\mathcal{L}}\). Let ρ (0):=ρ 0⊗|0〉〈0| be the initial state, i.e., a density matrix on \(\mathcal{H}\). Then, the evolution of the OQRW on ℤ generated by B and C is given by

$$ \rho^{(n)}:=\widetilde{\mathcal{L}}^n\bigl(\rho^{(0)}\bigr)=\sum_{x\in\mathbb{Z}}\rho_x^{(n)}\otimes|x\rangle\langle x|, $$
(3)
$$ \rho_x^{(n+1)}=B\rho_{x+1}^{(n)}B^*+C\rho_{x-1}^{(n)}C^*. $$
(4)

The probability of finding the walker at site x at time n is given by

$$ p_x^{(n)}= \operatorname{Tr}\bigl(\rho_x^{(n)}\bigr), \quad x\in\mathbb{Z},\ n\ge0. $$
(5)
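The iteration (2)–(5) is easy to realize numerically. The following sketch (Python with NumPy) evolves the site matrices ρ_x and extracts the distribution; the pair B, C below is a hypothetical choice satisfying B*B+C*C=I, not one taken from the paper.

```python
import numpy as np

# Hypothetical Kraus pair with B*B + C*C = I (an assumption for illustration).
B = np.diag([np.sqrt(0.6), np.sqrt(0.3)])
C = np.array([[0.0, np.sqrt(0.7)], [np.sqrt(0.4), 0.0]])
assert np.allclose(B.conj().T @ B + C.conj().T @ C, np.eye(2))

def step(states):
    """One application of the map (2): B sends rho_x to site x-1, C to x+1."""
    out = {}
    for x, r in states.items():
        out[x - 1] = out.get(x - 1, 0) + B @ r @ B.conj().T
        out[x + 1] = out.get(x + 1, 0) + C @ r @ C.conj().T
    return out

states = {0: np.diag([0.5, 0.5])}      # rho^(0) = rho_0 ⊗ |0><0|
for _ in range(10):
    states = step(states)

p = {x: np.trace(r).real for x, r in states.items()}   # formula (5)
assert abs(sum(p.values()) - 1.0) < 1e-12              # total trace is preserved
```

Each ρ_x stays positive semi-definite and the total trace stays 1, which is exactly the normalization behind (5).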

In [4], Attal et al. introduced the concept of quantum trajectories for OQRW's, from which they could show the central limit theorem for OQRW's [2]. Applied to OQRW's on the 1-dimensional space ℤ, it says that one can introduce a Markov process on \(\mathcal{E}(\mathcal{H}_{C})\times\mathbb{Z}\), where \(\mathcal{E}(\mathcal{H}_{C})\) is the space of density matrices on \(\mathcal{H}_{C}\), such that the distribution of the space component of the process coincides with that of the OQRW. In more detail, it is a Markov chain \((\rho_n, X_n)_{n\in\mathbb{N}}\) with values in \(\mathcal{E}(\mathcal{H}_{C})\times\mathbb{Z}\) and with the following transition rule: from any state (ρ,X) it jumps to one of two states, to \((\frac{1}{p_{B}}B\rho B^{*}, X-1)\) with probability \(p_{B}:=\operatorname{Tr}(B\rho B^{*})\), or to \((\frac{1}{p_{C}}C\rho C^{*}, X+1)\) with probability \(p_{C}:=\operatorname{Tr}(C\rho C^{*})\). The central limit theorem shown in [2] then reads as follows (stated in the 1-dimensional case).
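The quantum trajectory rule above is directly simulable. A minimal sketch follows; the pair B, C is again a hypothetical choice with B*B+C*C=I, and the initial state is an assumption.

```python
import numpy as np

rng = np.random.default_rng(7)
B = np.diag([np.sqrt(0.6), np.sqrt(0.3)])            # hypothetical pair, B*B+C*C=I
C = np.array([[0.0, np.sqrt(0.7)], [np.sqrt(0.4), 0.0]])

rho, X = np.diag([1.0, 0.0]), 0                      # (rho_0, X_0)
for _ in range(1000):
    pB = np.trace(B @ rho @ B.conj().T).real         # p_B = Tr(B rho B*)
    pC = np.trace(C @ rho @ C.conj().T).real         # p_C = Tr(C rho C*)
    assert abs(pB + pC - 1.0) < 1e-10                # holds since B*B + C*C = I
    if rng.random() < pB:                            # jump left with probability p_B
        rho, X = B @ rho @ B.conj().T / pB, X - 1
    else:                                            # jump right with probability p_C
        rho, X = C @ rho @ C.conj().T / pC, X + 1
```

Averaging the position over many such trajectories reproduces the distribution of the space component of the OQRW.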

Theorem 2.1

[2, Theorem 5.2]

Consider the stationary open quantum random walk on ℤ associated to the operators {B,C}. We assume that the completely positive map

$$ \mathcal{L}(\rho)=B\rho B^*+C\rho C^* $$
(6)

admits a unique invariant state \(\rho_\infty\). Let \((\rho_n, X_n)_{n\ge0}\) be the quantum trajectory process associated with this open quantum random walk. Then

$$\frac{X_n-nm}{\sqrt{n}} $$

converges in law to the Gaussian distribution \(N(0,\sigma^2)\) on ℝ, with mean

$$m=\operatorname{Tr}\bigl(C\rho_\infty C^*\bigr)-\operatorname {Tr}\bigl(B \rho_\infty B^*\bigr) $$

and variance

$$\sigma^2=\operatorname{Tr}\bigl(B\rho_\infty B^*+C \rho_\infty C^*\bigr)-m^2 +2\operatorname{Tr}\bigl[\bigl(C \rho_\infty C^*-B\rho_\infty B^*\bigr)L\bigr]-2m \operatorname{Tr}( \rho_\infty L), $$

where L is the solution of the equation

$$L-\mathcal{L}^*(L)=C^*C-B^*B-mI. $$
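For a concrete pair {B,C}, all the quantities in Theorem 2.1 can be computed numerically: ρ_∞ as the fixed point of (6), then m, then L from the last equation (which determines L up to multiples of I; such shifts do not change σ²). The pair below is a hypothetical example, not one from the paper.

```python
import numpy as np

B = np.diag([np.sqrt(0.6), np.sqrt(0.3)])                # hypothetical pair
C = np.array([[0.0, np.sqrt(0.7)], [np.sqrt(0.4), 0.0]])
assert np.allclose(B.conj().T @ B + C.conj().T @ C, np.eye(2))

# Superoperator of L(rho) = B rho B* + C rho C* in the column-stacked
# vec convention, vec(X rho Y) = (Y^T ⊗ X) vec(rho).
K = np.kron(B.conj(), B) + np.kron(C.conj(), C)
w, V = np.linalg.eig(K)
v = V[:, np.argmin(np.abs(w - 1.0))]                     # eigenvector for eigenvalue 1
rho = v.reshape(2, 2, order='F')
rho = (rho + rho.conj().T) / 2
rho = rho / np.trace(rho)                                # invariant state rho_inf

m = (np.trace(C @ rho @ C.conj().T) - np.trace(B @ rho @ B.conj().T)).real

# Solve L - L*(L) = C*C - B*B - mI; the adjoint map L* has superoperator K†,
# and lstsq handles the one-dimensional kernel spanned by vec(I).
rhs = (C.conj().T @ C - B.conj().T @ B - m * np.eye(2)).reshape(-1, order='F')
Lm = np.linalg.lstsq(np.eye(4) - K.conj().T, rhs, rcond=None)[0].reshape(2, 2, order='F')

sigma2 = (np.trace(B @ rho @ B.conj().T + C @ rho @ C.conj().T) - m**2
          + 2 * np.trace((C @ rho @ C.conj().T - B @ rho @ B.conj().T) @ Lm)
          - 2 * m * np.trace(rho @ Lm)).real
```

For this pair one finds ρ_∞ = diag(7/11, 4/11), m = 1/55, and σ² ≈ 1.151, so that (X_n − nm)/√n is asymptotically N(0, σ²).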

2.3 Dual Processes and Main Result

In this subsection we introduce the concept of dual process to the OQRW and state our main result. Recall that \(\mathcal{H}_{C}=\mathbb{C}^{2}\) and so \(\mathcal{B} (\mathcal{H}_{C})=\mathcal{M}_{2}\), the algebra of all 2×2 matrices. From now on we regard \(\mathcal{M}_{2}\) as a Hilbert space equipped with the Hilbert-Schmidt inner product:

$$ \langle A,B\rangle:= \operatorname{Tr}\bigl(A^*B\bigr), \quad A,B\in\mathcal{M}_2. $$
(7)

Let \(\mathcal{M}:=\bigoplus_{x\in\mathbb{Z}}\mathcal{M}_{2}\) be the direct sum Hilbert space. Recall the left and right translation operators T and \(T^*\) on \(l^2(\mathbb{Z})\). The operators T and \(T^*\) naturally extend to \(\mathcal{M}\); we will use the same symbols whenever there is no danger of confusion. Let K:=(−π,π], understood as a unit circle in the plane. The Fourier and inverse Fourier transforms between \(l^2(\mathbb{Z})\) and \(L^{2}(K, \frac{1}{2\pi}dk)\) are defined as usual:

$$\hat{a}(k):=\sum_{x\in\mathbb{Z}}a(x)e^{-ikx}, \qquad a(x)=\frac{1}{2\pi}\int_K\hat{a}(k)e^{ikx}\,dk. $$

For each kK, let \({\mathcal{M}}_{k}\) be the copy of the Hilbert space \(\mathcal{M}_{2}\) and let

$$\hat{\mathcal{M}}:=\int_{K}^{\oplus}\mathcal{M}_k\frac{1}{2\pi}dk $$

be the direct integral Hilbert space. It is nothing but the Hilbert space \(L^{2}(K,\mathcal{M}_{2})\) with K equipped with the probability measure \(\frac{1}{2\pi}dk\). The Fourier transform also naturally extends to a transform between \(\mathcal{M}\) and \(\hat{\mathcal{M}}\): for \(A=(A(x))_{x\in\mathbb{Z}}\in\mathcal{M}\),

$$\hat{A}(k):=\sum_{x\in\mathbb{Z}}A(x)e^{-ikx}, $$

and similarly for the inverse transform.

Now let us introduce the left and right multiplication operators on the Hilbert space \(\mathcal{M}_{2}\). For any \(B\in\mathcal{M}_{2}\), the left multiplication L B and the right multiplication R B are defined on \(\mathcal{M}_{2}\) by

$$ L_B(A):=BA, \qquad R_B(A):=AB, \quad A\in\mathcal{M}_2. $$
(8)

Notice that \((L_{B})^{*}=L_{B^{*}}\) and \((R_{B})^{*}=R_{B^{*}}\), and for any B and C in \(\mathcal{M}_{2}\), L B and R C commute: L B R C =R C L B . However, L B and L C do not commute in general. The operators L B and R B are positive definite if B≥0. Without mentioning further, we will use the same symbols L B and R B for the extensions to \(\mathcal{M}\). Thus, as an example, for \(A=(A(x))_{x\in \mathbb{Z}}\in\mathcal{M}\),

$$L_BR_C(A)=\bigl(BA(x)C\bigr)_{x\in\mathbb{Z}}\in\mathcal{M}. $$
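In matrix form (column-stacked vectorization), these multiplication operators are Kronecker products, which makes the stated commutation and adjoint relations easy to check numerically; the matrices B and C below are arbitrary random choices.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
C = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
I = np.eye(2)

LB = np.kron(I, B)      # vec(BA) = (I ⊗ B) vec(A): left multiplication L_B
RC = np.kron(C.T, I)    # vec(AC) = (C^T ⊗ I) vec(A): right multiplication R_C

A = rng.standard_normal((2, 2))
vec = lambda M: M.reshape(-1, order='F')
assert np.allclose(LB @ RC @ vec(A), vec(B @ A @ C))     # (L_B R_C)(A) = BAC
assert np.allclose(LB @ RC, RC @ LB)                     # L_B and R_C commute
assert np.allclose(LB.conj().T, np.kron(I, B.conj().T))  # (L_B)* = L_{B*}

LC = np.kron(I, C)
assert not np.allclose(LB @ LC, LC @ LB)  # L_B and L_C generically do not commute
```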

Let us now come back to the OQRW’s on ℤ. The state ρ (n) at time n can be understood as an element of \(\mathcal{M}\). Then the dynamics (4) of OQRW’s becomes an evolution on \(\mathcal{M}\) given by

$$ \rho^{(n+1)}= \bigl(L_BR_{B^*}T+L_CR_{C^*}T^* \bigr)\rho^{(n)}. $$
(9)

Therefore the solution to (9) becomes simply

$$ \rho^{(n)}=\bigl(L_BR_{B^*}T+L_CR_{C^*}T^* \bigr)^n\rho^{(0)}. $$
(10)

If we look at the evolution in the Fourier transform space, then it becomes

$$ \widehat{\rho^{(n)}}(k)=\bigl(e^{ik}L_BR_{B^*}+e^{-ik}L_CR_{C^*} \bigr)^n\widehat{\rho^{(0)}}(k). $$
(11)

Notice that \(\widehat{\rho^{(0)}}(k)\) is the constant (operator-valued) function ρ 0 because ρ (0)=ρ 0⊗|0〉〈0| (of course, we could take a quite general initial state ρ (0), not localized at the origin).

In order to get the probability distribution, let us define a “dual process”:

Definition 2.2

The dual process to the OQRW generated by B and C is the process \(Y_{n}:=(Y_{n}(k))_{k\in K}\in\hat{\mathcal{M}}\) defined by

$$ Y_n(k):= \bigl(e^{ik}L_{B^*}R_B+e^{-ik}L_{C^*}R_C \bigr)^n(I). $$
(12)

The reason for the name becomes clear from the following relation, which is our main result.

Theorem 2.3

The probability distribution of the OQRW is given by

$$ p_x^{(n)}=\frac{1}{2\pi}\int_Ke^{ikx} \operatorname{Tr} \bigl(\rho_0 Y_n(k) \bigr)dk. $$
(13)

Remark 2.4

Notice that in the formula for the distribution of the walker in the above theorem, the initial state ρ 0 does not change at all as time goes on. Instead, the environment, represented by B and C, evolves.

The easy proof of Theorem 2.3 will be given in the next section.
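Theorem 2.3 can be checked numerically: compute Y_n(k) on a uniform k-grid (the integrand is a trigonometric polynomial, so for N > 4n the N-point Riemann sum is exact) and compare with the direct iteration of (2). The pair B, C below is a hypothetical example with B*B + C*C = I.

```python
import numpy as np

B = np.diag([np.sqrt(0.6), np.sqrt(0.3)])              # hypothetical pair
C = np.array([[0.0, np.sqrt(0.7)], [np.sqrt(0.4), 0.0]])
rho0 = np.array([[0.7, 0.2], [0.2, 0.3]])
n, N = 6, 64

# Direct evolution of (2).
states = {0: rho0}
for _ in range(n):
    out = {}
    for x, r in states.items():
        out[x - 1] = out.get(x - 1, 0) + B @ r @ B.conj().T
        out[x + 1] = out.get(x + 1, 0) + C @ r @ C.conj().T
    states = out
direct = {x: np.trace(r).real for x, r in states.items()}

# Dual process (12) and formula (13).
ks = -np.pi + 2 * np.pi * np.arange(N) / N
f = np.empty(N, dtype=complex)
for j, k in enumerate(ks):
    A = np.eye(2, dtype=complex)
    for _ in range(n):   # apply e^{ik} L_{B*}R_B + e^{-ik} L_{C*}R_C once
        A = np.exp(1j * k) * B.conj().T @ A @ B + np.exp(-1j * k) * C.conj().T @ A @ C
    f[j] = np.trace(rho0 @ A)

dual = {x: (np.exp(1j * ks * x) * f).mean().real for x in range(-n, n + 1)}
assert max(abs(dual[x] - direct.get(x, 0.0)) for x in dual) < 1e-12
```

Note that, exactly as Remark 2.4 says, the Fourier-side computation never modifies ρ_0; only the environment operators are iterated.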

3 Proof and Some Analytic Preparation

In this section we provide the proof of Theorem 2.3 and give some analytic preparation that will be useful when we consider the asymptotic behavior of certain functions. We start with

Proof of Theorem 2.3

Recall the inner product in (7). By using the Fourier transform and formula (11), we see that

$$\begin{aligned} p_x^{(n)}=\operatorname{Tr}\bigl(\rho_x^{(n)}\bigr) &=\frac{1}{2\pi}\int_Ke^{ikx}\operatorname{Tr}\bigl(\widehat{\rho^{(n)}}(k)\bigr)\,dk\\ &=\frac{1}{2\pi}\int_Ke^{ikx}\bigl\langle I,\bigl(e^{ik}L_BR_{B^*}+e^{-ik}L_CR_{C^*}\bigr)^n\rho_0\bigr\rangle\, dk\\ &=\frac{1}{2\pi}\int_Ke^{ikx}\bigl\langle\bigl(e^{-ik}L_{B^*}R_B+e^{ik}L_{C^*}R_C\bigr)^n(I),\rho_0\bigr\rangle\, dk\\ &=\frac{1}{2\pi}\int_Ke^{ikx}\bigl\langle Y_n(-k),\rho_0\bigr\rangle\, dk =\frac{1}{2\pi}\int_Ke^{ikx}\operatorname{Tr}\bigl(\rho_0Y_n(k)\bigr)\,dk, \end{aligned}$$

where in the last equality we used \(\langle Y_n(-k),\rho_0\rangle=\operatorname{Tr}(Y_n(-k)^*\rho_0)\) together with the fact that each term of \(Y_n(-k)\) is self-adjoint, so that \(Y_n(-k)^*=Y_n(k)\).
 □

In the next section we will consider several examples and compute the distribution concretely by using Theorem 2.3.

Before going into the examples, we present an analytic result which is not only interesting in itself but also useful for studying the asymptotic behavior of functions. It may be well known in analysis, so we omit the easy proof.

Proposition 3.1

Let [a,b]⊂ℝ be a finite interval and let f:[a,b]→ℝ be a continuous function such that |f| has a unique maximum at a point c∈[a,b]. Then for any continuous g:[a,b]→ℝ,

$$\lim_{n\to\infty} \frac{1}{\alpha_n}\int_a^bf(x)^ng(x)dx=g(c), $$

where \(\alpha_{n}=\int_{a}^{b}f(x)^{n}dx\).
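A quick numerical illustration of Proposition 3.1, with f(x)=cos x on [−1,1] (so |f| has its unique maximum at c=0) and g(x)=2+sin x:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 200001)
f, g = np.cos(x), 2.0 + np.sin(x)
n = 4000

w = f**n                            # f^n concentrates at the maximizer c = 0
ratio = (w * g).sum() / w.sum()     # Riemann-sum version of (1/alpha_n) ∫ f^n g dx
assert abs(ratio - 2.0) < 1e-2      # g(c) = g(0) = 2
```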

4 Examples

In this section, in order to see the usefulness of our theorem we consider several examples. We also compare with the result of [2].

4.1 Example 1

For p and q such that p+q=1, p,q∈[0,1], let

Since B and C are both diagonal matrices, the operators \(L_B\), \(L_C\), \(R_B\), and \(R_C\) all commute. So it is very easy to compute

Therefore, since , by Theorem 2.3, we have

Thus a standard argument implies

Proposition 4.1

Consider OQRW with initial state ρ 0. If p∈(0,1), then as n→∞,

  1. (i)
  2. (ii)

We notice that in this example Eq. (6) has infinitely many invariant states.

4.2 Example 2

We take

(14)

such that U=B+CU(2), where U(2) is the set of 2×2 unitary matrices. We suppose that

Let \(P_1\) and \(P_2\) be the projections onto the first and the second coordinates, respectively. From the definition of B and C it is easily shown that

$$ B^*B=P_1,\qquad B^*P_1B=pP_1, \qquad B^*P_2B=(1-p)P_1 $$
(15)

and

$$ C^*C=P_2,\qquad C^*P_1C=(1-p)P_2, \qquad C^*P_2C=pP_2. $$
(16)

Therefore, \(Y_{n}(k)= (e^{ik}L_{B^{*}}R_{B}+e^{-ik}L_{C^{*}}R_{C} )^{n}(I_{2})\) becomes a linear combination of P 1 and P 2.

Lemma 4.2

Let

$$ Y_n(k):=a^{(n)}_1(k)P_1+a^{(n)}_2(k)P_2. $$
(17)

Then the coefficients are determined by

Proof

We have \(Y_{n+1}(k)= (e^{ik}L_{B^{*}}R_{B}+e^{-ik}L_{C^{*}}R_{C} )Y_{n}(k)\). Inserting (17) and using the relations (15) and (16), we get the recurrence relation

(18)

Since \(Y_0(k)=I_2\), i.e., \(a^{(0)}_1=a^{(0)}_2=1\), the result follows. □

Proposition 4.3

The OQRW generated by B and C in (14) is a correlated random walk and as n→∞,

Proof

By Theorem 2.3 and Lemma 4.2, we have

$$ p^{(n)}_x=ap^{(n)}_1(x)+bp^{(n)}_2(x), $$
(19)

where

$$p^{(n)}_j(x):=\frac{1}{2\pi}\int_Ke^{ikx}a^{(n)}_j(k)dk, \quad j=1,2. $$

Since , from (18) we get

We multiply both sides of the above equation by \(e^{ikx}\) and integrate with respect to k. Then we obtain

or rewriting it we get

Equation (19) and the above relation show that the walk is a correlated random walk. That is, the rate of jumping to the left or right is governed by an intrinsic degree of freedom (represented by \(p_{1}^{(n)}(x)\) and \(p_{2}^{(n)}(x)\) in (19)), and the jumps are correlated. From the above relations and (19), we get the result (see [12], for example). □

We see that this example falls into the class to which the result of [2] applies. We compute that (6) has a unique invariant state

4.3 Example 3

Let us define

with p+q=1, p,q∈[0,1] and \(0 < \gamma\le\min\{\sqrt{2p}, \sqrt{2q}\}\). It is readily shown that

$$ B^*B=P_1+\tilde{p}P_2,\qquad B^*P_1B=P_1, \qquad B^*P_2B=\tilde{p}P_2 $$
(20)

and

$$ C^*C=\tilde{q}P_2,\qquad C^*P_1C=\gamma^2P_2, \qquad C^*P_2C=\tilde{q}P_2, $$
(21)

where \(\tilde{p}=p-\frac{\gamma^{2}}{2}\) and \(\tilde{q}=q-\frac{\gamma^{2}}{2}\). Therefore, \(Y_{n}(k)= (e^{ik}L_{B^{*}}R_{B}+e^{-ik}L_{C^{*}}R_{C} )^{n}(I_{2})\) is again a linear combination of P 1 and P 2. By denoting \(Y_{n}(k)=a^{(n)}_{1}(k)P_{1}+a^{(n)}_{2}(k)P_{2}\), we repeat the method used in Lemma 4.2, now with the relations (20) and (21). Then we easily get

(22)

Lemma 4.4

We have

Proof

Recall that \(Y_{n}(k)=a^{(n)}_{1}(k)P_{1}+a^{(n)}_{2}(k)P_{2}\) with \(a^{(n)}_{j}(k)\), j=1,2, being defined by (22). By Theorem 2.3 we see that

$$ p^{(n)}_x=a\frac{1}{2\pi} \int_Ke^{ikx}a^{(n)}_1(k)dk+b \frac{1}{2\pi}\int_Ke^{ikx}a^{(n)}_2(k)dk. $$
(23)

Inserting the formulas for \(a^{(n)}_{j}(k)\), j=1,2, in (22) into (23), we easily get the result. □

Proposition 4.5

Consider the OQRW with initial state \(\rho_0=aP_1+bP_2\). Then

as n→∞.

Proof

Let us compute the characteristic function \(\phi _{X_{n}/n}(t):=E (e^{itX_{n}/n} )\). By using Lemma 4.4 we get

as n→∞. In the last line we have used the fact that \(( \tilde{p}+\tilde{q} e^{2it/n} )^{n} \to0\) because \(\vert \tilde{p}+\tilde{q} e^{2it/n} \vert\le\tilde{p}+\tilde{q}<1\). Similarly, \(( \tilde{p}e^{-2it/n}+\tilde{q} )^{n} \to0\). Notice that \(e^{-it}\) is the characteristic function of the distribution δ −1. □

Remark 4.6

By the same method as above, we can show in general that for all α>0, as n→∞,

$$\frac{X_n+n}{n^\alpha}\Longrightarrow\delta_0. $$

If we rely on Theorem 2.1, we can show that m=−1 and σ 2=0 and the result says that as n→∞

$$\frac{X_n+n}{\sqrt{n}}\Longrightarrow N\bigl(0,\sigma^2\bigr) $$

with σ 2=0.

4.4 Example 4

Here we will consider an example of an OQRW whose distribution is a mixture of normal distributions. Let ϵ>0 be a small number such that 2ϵa(ϵ)<1/2, where \(a(\epsilon):=\sqrt {1/2-\epsilon^{2}}\), and let θ∈ℝ. Define

(24)

It is straightforward to see that the matrices B, \(B^*\), C, and \(C^*\) all commute with each other and

(25)

and thus

$$B^*B+C^*C=I_2. $$

The two matrices \(B^*B\) and \(C^*C\) are simultaneously diagonalized as

(26)

where the eigenvalues are

$$ \lambda_{\pm}(\epsilon, \theta)=1/2\pm2\epsilon a(\epsilon)\cos\theta $$
(27)

and U is the unitary matrix given by

$$ U=\frac{1}{\sqrt{2}}\left[ \begin{array}{c@{\quad}c} 1&1\\1&-1 \end{array} \right]. $$
(28)

Because of the commuting properties of the matrices, we can easily compute that

(29)

Now by using Theorem 2.3, we get the following result.

Proposition 4.7

Let B and C be the matrices in (24), \(\lambda_\pm\equiv\lambda_{\pm}(\epsilon,\theta)\) the eigenvalues of \(B^*B\) and \(C^*C\) in (27), and let U be the unitary matrix in (28). Let \(\rho_0\) be the initial state on \(\mathcal{M}_{2}\). Then the probability distribution of the OQRW defined by B and C is given by

$$p^{(n)}_x=a_1p^{(n)}_{1,x}+a_2p^{(n)}_{2,x}, $$

where \(a_1=(U^*\rho_0U)_{11}\) and \(a_2=(U^*\rho_0U)_{22}\), and \(p^{(n)}_{j,x}\), j=1,2, are the distributions of random variables \(X_{j,n}\), j=1,2, respectively, whose asymptotic behaviors are as follows: as n→∞,

Proof

From (29), we see that

$$\operatorname{Tr}\bigl(\rho_0Y_n(k)\bigr)=\sum _{l=0}^n{n\choose l}e^{-ik(n-2l)} \bigl(a_1\lambda_+^l\lambda_-^{n-l}+a_2 \lambda_-^l\lambda_+^{n-l}\bigr), $$

where \(a_1=(U^*\rho_0U)_{11}\) and \(a_2=(U^*\rho_0U)_{22}\). Therefore, by Theorem 2.3, we have

$$p^{(n)}_x=\sum_{l=0}^n{n \choose l}\bigl(a_1\lambda_+^l\lambda_-^{n-l}+a_2 \lambda_-^l\lambda_+^{n-l}\bigr)\delta_{x,n-2l}. $$

Now the result follows from the standard arguments. □
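Since the explicit matrices of (24) are not reproduced in this copy, the following check uses hedged stand-ins: the commuting positive square roots B = U diag(√λ₊,√λ₋)U*, C = U diag(√λ₋,√λ₊)U*, which have the same B*B and C*C as in (26). Because all the matrices commute, the distribution depends only on these products, so the binomial mixture above can be compared against a direct iteration of (2); the parameter values and ρ_0 are assumptions.

```python
import numpy as np
from math import comb

eps, theta = 0.1, 0.3                        # assumed parameter values
a_eps = np.sqrt(0.5 - eps**2)
lp = 0.5 + 2 * eps * a_eps * np.cos(theta)   # lambda_+
lm = 0.5 - 2 * eps * a_eps * np.cos(theta)   # lambda_-
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# Commuting stand-ins with the B*B and C*C of (26); not the B, C of (24) itself.
B = U @ np.diag([np.sqrt(lp), np.sqrt(lm)]) @ U
C = U @ np.diag([np.sqrt(lm), np.sqrt(lp)]) @ U
assert np.allclose(B.conj().T @ B + C.conj().T @ C, np.eye(2))

rho0 = np.array([[0.7, 0.2], [0.2, 0.3]])
n = 8

states = {0: rho0}                           # direct iteration of the map (2)
for _ in range(n):
    out = {}
    for x, r in states.items():
        out[x - 1] = out.get(x - 1, 0) + B @ r @ B.conj().T
        out[x + 1] = out.get(x + 1, 0) + C @ r @ C.conj().T
    states = out
direct = {x: np.trace(r).real for x, r in states.items()}

a1, a2 = (U @ rho0 @ U)[0, 0], (U @ rho0 @ U)[1, 1]   # U is self-adjoint here
mixture = {n - 2*l: comb(n, l) * (a1 * lp**l * lm**(n-l) + a2 * lm**l * lp**(n-l))
           for l in range(n + 1)}
assert max(abs(direct[x] - mixture[x]) for x in mixture) < 1e-12
```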

We remark that in [2], Attal et al. considered OQRW's in which the matrices are block diagonal. In that case they showed that the quantum trajectories are a mixture of OQRW's, and thereby the limit distributions are mixtures of Gaussians. See [2, Sect. 6] for details.

4.5 Example 5

Define

(30)

We let

(31)

By directly computing we get the recursion relation:

where

$$ Y(k):=\frac{1}{3} \left[ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} 2 \cos k & -e^{-ik} & -e^{-ik} & e^{-ik} \\ e^{ik} & 2 \cos k & 0 & -e^{-ik} \\ e^{ik} & 0 & 2 \cos k & -e^{-ik} \\ e^{ik} & e^{ik} & e^{ik} & 2 \cos k \end{array} \right]. $$
(32)

Thus we have the solution

(33)

By Theorem 2.3 we have

(34)

We can introduce a combinatorial way to compute the distribution. Notice that

(35)

Thus

$$ {B^*}^nB^n+{C^*}^nC^n= \frac{n^2+2}{3^n}I_2. $$
(36)

We notice also that

$$\operatorname{Tr} {B^*}^nB^n=\operatorname{Tr} {C^*}^nC^n= \frac{n^2+2}{3^n}. $$

Since \(Y_{n}(k)= (e^{ik}L_{B^{*}}R_{B}+e^{-ik}L_{C^{*}}R_{C} )^{n}(I_{2})\), by expanding the power and using Theorem 2.3, we see that \(p^{(n)}_{x}\) is the sum of all the contributions from the terms of the type

(37)

where \(\sum_{j=1}^{s}(l_{j}-r_{j})=x\) and r 1≥0, r j ≥1, j=2,…,s, l j ≥1, j=1,…,s−1, and l s ≥0. We would like to expand (37). By (36), the term \({B^{*}}^{r_{1}}B^{r_{1}}\) in the middle is equal to \(\frac {r_{1}^{2}+2}{3^{r_{1}}}I_{2}-{C^{*}}^{r_{1}}C^{r_{1}}\). Inserting this into (37), we get a sum of two terms, whose middle factors are \({C^{*}}^{l_{1}}C^{l_{1}}\) multiplied by the factor \(\frac{r_{1}^{2}+2}{3^{r_{1}}}\) and \({C^{*}}^{(l_{1}+r_{1})}C^{(l_{1}+r_{1})}\) multiplied by the factor −1, respectively. We continue this process successively.

For this, it is very convenient to view (37) as a random walk path (assume all r j 's and l j 's are greater than 0 for simplicity): the walker goes upward r 1 units, then goes l 1 units downward; then it goes r 2 units upward and l 2 units downward, and so on. Thus the path is a continuously connected line consisting of 2s segments (of lengths r 1, l 1, etc.). We simplify the notation further by writing the path as the sequence (l s ,r s ,…,l 1 ,r 1). We shorten the sequence step by step by applying the operations of “cutting” and “unfolding”. For example, at the first step, if we cut, we get the sequence (l s ,r s ,…,l 1) with the weight \(\frac{r_{1}^{2}+2}{3^{r_{1}}}\); if instead we unfold, we get the new sequence (l s ,r s ,…,l 1+r 1) with the weight −1. Either operation shortens the sequence by length 1. We continue this process until a sequence of length one remains, or, in terms of the random walk path, until a single segment of length l s +l′, say, remains. The resulting matrix is nothing but \({C^{*}}^{l_{s}+l'}C^{l_{s}+l'}\), and we need to compute the trace \(\operatorname{Tr}(\rho_{0}{C^{*}}^{l_{s}+l'}C^{l_{s}+l'})\), which is simply \(\frac{1}{3^{l_{s}+l'}} (a ((l_{s}+l')^{2}+1 )+b )\). We summarize this process as a theorem.

Below, \(\sum_{l_{1},\ldots,l_{s}, r_{1},\ldots, r_{s}}^{(x)}\) means the sum over sequences such that \(\sum_{j=1}^{s}(l_{j}-r_{j})=x\) and r 1≥0, r j ≥1, j=2,…,s, l j ≥1, j=1,…,s−1, and l s ≥0. \(\mathcal{C}\mathcal{U}(l_{s},r_{s},\ldots, l_{1},r_{1})\) denotes the set of all shortening processes by cutting and unfolding down to a single term; for \(\pi\in\mathcal{C}\mathcal{U}(l_{s},r_{s},\ldots, l_{1},r_{1})\), \(l(\pi)\) is the length of the remaining single segment of the process π, and \(\omega(\pi)\) is the product of the weights of π.

Theorem 4.8

The probability distribution for the OQRW determined by B and C in (30) is given as follows:

$$p^{(n)}_x=\sum_{ l_1,\ldots,l_s, r_1,\ldots, r_s}^{(x)} \sum_{\pi \in\mathcal{C}\mathcal{U}(l_s,r_s,\ldots,l_1,r_1)}\omega(\pi)\mathrm {Tr} \bigl( \rho_0{{\overline{C}}^*}^{l(\pi)}{\overline{C}}^{l(\pi )} \bigr), $$

where \({\overline{C}}=C\) if l s ≠0 and \({\overline{C} }=B\) if l s =0.

As an example, let us compute the distribution of \(X_4\) for the case a=b=1/2, i.e., the initial state is \(\rho_0=\frac{1}{2}I_2\). Since \(\operatorname{Tr}({B^{*}}^{l}B^{l})=\operatorname{Tr}({C^{*}}^{l}C^{l})\) for all l≥0, it is easy to see that under this assumption the distribution is symmetric. So we only need to compute \(P(X_4=4)\) and \(P(X_4=2)\). The random walk path leading to \(X_4=4\) is a single segment consisting of 4 upward units; in the notation of finite sequences, it is just (r 1)=(4). Thus, we get

$$P(X_4=4)=\operatorname{Tr}\bigl(\rho_0{B^*}^4B^4 \bigr)=\frac{1}{2}\frac {4^2+2}{3^4}=\frac{1}{9}. $$

For \(X_4=2\), we have four paths: (−+++), (+−++), (++−+), (+++−), or, in the sequence notation, \((\overset{l_{1}}{1},\overset{r_{1}}{3})\), \((\overset{r_{2}}{1}, \overset {l_{1}}{1},\overset{r_{1}}{2})\), \((\overset{r_{2}}{2},\overset{l_{1}}{1},\overset {r_{1}}{1})\), and \((\overset{r_{2}}{3},\overset{l_{1}}{1})\), respectively. To each sequence we apply the cutting-unfolding process.

Thus summing all the contributions we get \(P(X_{4}=2)=\frac{2}{9}\). Using the symmetry, we see that μ 4, the distribution of X 4, is equal to

$$\mu_4=\frac{1}{9}\delta_{-4}+\frac{2}{9}\delta_{-2}+ \frac{3}{9}\delta_{0}+\frac{2}{9}\delta_{2}+\frac{1}{9} \delta_{4}. $$

From now on we discuss the asymptotic behavior of the distribution. Recall the matrix Y n (k) in (31) and its representation in (33). The eigen-equation of Y(k) is

so the eigenvalues of Y(k) are

(38)
(39)

where

(40)

Let

Then a direct computation gives

where S:=R T with

and diag[a 0,a 1,…,a n ] denotes the diagonal matrix whose (i,i)-component is a i . Here

Recall the density formula in (34). We need to compute \(a^{(n)}_{jj}(k)\), j=1,2, which we can obtain from the diagonalization of Y(k). We note that \(\det R = 4 \sqrt{3(4 \cos^{2} k +1)} \sin k/9\). After a little computation we have

(41)

In order to get information on the asymptotic behavior of \(p^{(n)}_{x}\) as n→∞, we need to investigate the eigenvalues more carefully; for this we will rely on Proposition 3.1. For that purpose we will rewrite the eigenvalues. Recall the eigenvalues in (38) and (39), and the function ξ(k) in (40). Since cosk appears in the eigenvalues, we let u:=cosk. Then u varies in the interval [−1,1] and we have

$$\xi=\xi(u)= \bigl(2u+\sqrt{4u^2+1}\bigr)^{1/3}. $$

Further, we define

$$s=s(u):=\xi(u)-\frac{1}{\xi(u)}, \quad-1\le u\le1. $$

It is not hard to show that

Moreover, on the interval [−1,1], the function s(u) is increasing with

$$s(-1)=-1 \quad\text{and}\quad s(1)=1. $$

We can also check that

$$\xi^3-\frac{1}{\xi^3}=4u. $$

Therefore, the eigenvalues can be rewritten as

Since −1≤s≤1, we see that

$$ | \lambda_0|\le2/3,\qquad|\lambda_2|\le\sqrt{2/3}. $$
(42)

Only the eigenvalue \(\lambda_1\) ranges over the whole interval [−1,1]: \(\lambda_1(u=-1)=-1\), \(\lambda_1(u=1)=1\). Regarding K:=(−π,π] as a unit circle in the plane, as usual, it is not hard to see that the eigenvalue \(\lambda_1\equiv\lambda_1(k)\) is anti-symmetric in the sense that \(\lambda_1(k+\pi)=-\lambda_1(k)\). Moreover, it behaves very much like the cosine function. In particular, \(\lambda_1(k)\ge0\) on [−π/2,π/2] and it is negative on K∖[−π/2,π/2]. Let us define a scaling constant \(\alpha_n\) by

$$ \alpha_n:=\int _{-\pi/2}^{\pi/2}\lambda_1(k)^ndk. $$
(43)

Lemma 4.9

For j=0,1,2,3, let g j (k) be continuous functions on K. Then

$$\lim_{n\to\infty}\frac{1}{\alpha_n}\int_K \sum_{j\neq1} g_j(k)\lambda_j(k)^ndk=0, $$

and

Proof

First, since λ 1(k) is continuous and λ 1(0)=1, as in the proof of Proposition 3.1, it is very easy to see that for any \(\sqrt{2/3}<q<1\),

$$ \lim_{n\to\infty} \frac{q^n}{\alpha_n}=0. $$
(44)

On the other hand, by (42)

$$\biggl\vert\int_K\sum_{j\neq1} g_j(k)\lambda_j(k)^ndk\biggr\vert=O \bigl((2/3)^{n/2}\bigr). $$

From this and by (44), the first assertion follows. For the second assertion, we divide the integral:

$$\frac{1}{\alpha_{n}}\int_K g_1(k) \lambda_1(k)^{n}dk=\frac{1}{\alpha_{n}}\int _{[-\pi/2,\pi/2]} g_1(k)\lambda_1(k)^{n}dk+ \frac{1}{\alpha_{n}}\int_{K\setminus[-\pi/2,\pi/2]} g_1(k) \lambda_1(k)^{n}dk. $$

Noticing the anti-symmetry of λ 1(k), i.e., λ 1(k+π)=−λ 1(k), the result follows from Proposition 3.1. □
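The spectral facts used here are easy to confirm numerically from the matrix Y(k) in (32): λ_1(0)=1, the anti-symmetry Y(k+π)=−Y(k) (every entry of (32) flips sign), and the Taylor expansion λ_1(k)=1−(4/9)k²+⋯ that enters the proof of Theorem 4.11 below.

```python
import numpy as np

def Y(k):
    e, c = np.exp, 2 * np.cos(k)
    return np.array([
        [c,        -e(-1j*k), -e(-1j*k),  e(-1j*k)],
        [e(1j*k),   c,         0,        -e(-1j*k)],
        [e(1j*k),   0,         c,        -e(-1j*k)],
        [e(1j*k),   e(1j*k),   e(1j*k),   c]]) / 3.0

def lam1(k):
    w = np.linalg.eigvals(Y(k))
    return w[np.argmax(np.abs(w))].real      # dominant eigenvalue lambda_1(k)

assert abs(lam1(0.0) - 1.0) < 1e-12          # lambda_1(0) = 1
assert np.allclose(Y(0.7 + np.pi), -Y(0.7))  # anti-symmetry of Y, hence of lambda_1
h = 0.02
assert abs((1.0 - lam1(h)) / h**2 - 4.0/9.0) < 1e-3   # lambda_1 ≈ 1 - (4/9)k^2
```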

Looking at the formula (41), by Lemma 4.9, we see that asymptotically the term containing \(\lambda_{1}^{n}\) dominates. By direct computation we have

Now we can get the proper asymptotics for the density \(p_{x}^{(n)}\).

Theorem 4.10

As n→∞, the asymptotic behavior of \(p_{x}^{(n)}\) is as follows.

Proof

The proof will follow from Lemma 4.9 with

$$g_1(k)=C(k) \bigl(aB_1(k)+bB_2(k)\bigr), $$

where

We need to know the values g 1(0) and g 1(π). Let us define a symbol η by

$$\eta(0):=1, \qquad\eta(\pi):=-1. $$

By directly computing, we get

$$C(0)=\frac{3}{4\pi}\quad\mathrm{and}\quad C(\pi)=\frac{3}{4\pi}\cos \pi x. $$

Also it is easy to check that when restricted to {0,π}, \(A_{1}=\frac{1}{3}\eta\) and the quantities in the brackets (⋯) of B 1 and B 2 are equal to 2η. Thus we see that when restricted to {0,π}

$$B_1=B_2=2/3. $$

Combining these we use Lemma 4.9 to get the result. □

Concerning the central limit theorem of this example we have the following result.

Theorem 4.11

For the example of this subsection, we have as n→∞

$$\frac{X_n}{\sqrt{n}}\Longrightarrow N(0,8/9). $$

Proof

Let us consider the characteristic function

$$\mathbb{E}\bigl[e^{itX_n/\sqrt{n}}\bigr]=\sum_{x\in\mathbb {Z}}e^{itx/\sqrt {n}}p_x^{(n)}. $$

Here, \(p_{x}^{(n)}\) is given in (41), but since the eigenvalue λ 1 dominates we have

$$p_x^{(n)}\sim\int_{-\pi}^\pi e^{ikx}f^{(n)}(k)dk, $$

where

$$f^{(n)}(k)=g(k)\lambda_1(k)^n $$

with

$$g(k)=\frac{1}{12\pi}\frac{1}{\sqrt{4\cos^2k+1}} \biggl( \xi^2-\frac{1}{\xi^2} \biggr) \biggl\{\cos k \biggl(\xi+ \frac{1}{\xi} \biggr)^2+\xi-\frac{1}{\xi} \biggr\}. $$

By putting \(y=x/\sqrt{n}\) and taking a change of variable \(m=\sqrt {n}k\) we have

$$\mathbb{E}\bigl[e^{itX_n/\sqrt{n}}\bigr]\sim\sum_{y\in\mathbb {Z}/\sqrt {n}}e^{ity} \frac{1}{\sqrt{n}}\int_{-\sqrt{n}\pi}^{\sqrt{n}\pi}e^{imy}g(m/{ \sqrt{n}})\lambda_1(m/{\sqrt{n}})^n \,{dm}. $$

Notice that the function g(k) is bounded and continuous and that λ 1(k) behaves very much like the cosine function on the interval [−π,π]. Now we expand the interval to \([-\sqrt{n}\pi,\sqrt{n}\pi]\) and raise λ 1 to the power n. As n grows, the function \(\lambda_{1}(m/\sqrt{n})^{n}\), when integrated against the mild function \(g(m/\sqrt{n})\), picks up the values of \(g(m/{\sqrt{n}})\) at m=0 and \(m=\pm\sqrt{n}\pi\), where its own factors dominate. Notice that \(g(0)=g(\pm\pi)=\frac{1}{2\pi}\). We first consider the behavior at m=0. For that we notice that λ 1(k) has the Taylor expansion at k=0

$$\lambda_1(k)=1-\frac{4}{9}k^2+\frac{1}{54}k^4+o \bigl(k^4\bigr). $$

Therefore,

$$\lambda_1(m/\sqrt{n})^n\sim e^{-\frac{4}{9}m^2}. $$

Thus, the contribution to \(\mathbb{E}[e^{itX_{n}/\sqrt{n}}]\) is evaluated as

(45)

where σ 2=8/9. Next we consider the effect coming from the factor λ 1(k)n at k=π. For this, it is convenient to shift the integration interval as

$$p_x^{(n)}\sim\int_{0}^{2\pi} e^{ikx}f^{(n)}(k)dk. $$

Now λ 1(π)=−1 and by a similar argument as above the contribution to \(\mathbb{E}[e^{itX_{n}/\sqrt{n}}]\) is

Now as n→∞, by a Riemann–Lebesgue-type argument, the last term converges to 0. Combining this with (45), we conclude that for this example,

$$\frac{X_n}{\sqrt{n}}\Rightarrow N(0, 8/9). $$

 □

This example was also dealt with in [2]. There, the invariant state of (6) was computed to be \(\rho_{\infty}=\frac{1}{2}I\), together with the mean m=0 and the variance \(\sigma^{2}=\frac{8}{9}\), the same results as we obtained here.