Abstract
Quantum many-body systems exhibit a rich and diverse range of exotic behaviours, owing to their underlying non-classical structure. These systems present a deep structure beyond that which can be captured by measures of correlation and entanglement alone. Using tools from complexity science, we characterise such structure. We investigate the structural complexities found within the patterns that manifest in the observational data of these systems. In particular, using two prototypical quantum many-body systems as test cases—the one-dimensional quantum Ising and Bose–Hubbard models—we explore how different information-theoretic measures of complexity identify different features of such patterns. This work furthers the understanding of fully-quantum notions of structure and complexity in quantum systems and dynamics.
1 Introduction
Quantum many-body systems hold a distinguished position in modern physics, playing a vital role in providing insight into the physical world. On the one hand, they constitute an excellent platform for studying a range of phenomena through their utility in quantum simulation [1,2,3,4]. Conversely, the properties intrinsic to these systems are interesting in their own right, and they thus form a target of simulation and modelling themselves [5,6,7,8,9,10,11,12]. Understanding the structures present in these systems, and the resources needed to characterise, study and emulate them, is thus of paramount importance.
Quantum entanglement [13, 14] captures the quantum correlations present in a system, and so plays a significant role in identifying structure in quantum systems. In particular, the half-chain entanglement quantifies the amount of information shared between the left and right halves of a one-dimensional quantum system; it provides an indicator of the amount of classical resources needed to simulate such systems [15,16,17,18], and related quantum mutual information-based quantities have previously been associated with structural complexity [19]. Nevertheless, complex systems possess structure beyond such correlations; we turn to tools from complexity science to identify and quantify this structure. The field of computational mechanics [20,21,22] adopts an information-theoretic approach to this task, and equates structure in a stochastic process with the minimal amount of information that must be stored by a model that replicates its behaviour. Moreover, it offers a systematic approach to determining such minimal models.
Impelled by the growth of quantum technologies, recent efforts have extended the computational mechanics framework into the quantum regime, finding that classical limits on the information that models must store can be overcome [23,24,25,26,27,28,29,30]. This fundamentally changes how we might perceive and characterise structure—two prominent examples being the ambiguities of simplicity [31,32,33] and optimality [30, 34], which highlight that properties of complexity that might be considered truisms classically no longer hold in the quantum domain. Practically, these quantum models can provide memory savings for stochastic simulation, and the resource gap between minimal quantum and classical simulators can even grow unbounded [28, 29, 35,36,37,38,39].
It is natural then to ask how these measures of complexity look—and what they can tell us about structure—when applied to quantum many-body systems. In this article, we apply this framework to study their structure and complexity. To this end, we look at the structure manifest in the measurement statistics of quantum states. This serves as a crucial first step in identifying the structure in the quantum processes that gave rise to the states. We begin by reviewing in Sect. 2 the relevant details of causal models and measures of complexity that form the background to our work. In Sect. 3, we introduce the mapping from states of quantum chains to stochastic processes through the statistics of observation sequences, and apply it to quantify structure in the one-dimensional quantum Ising and Bose–Hubbard models. We discuss the implications of our results and the future directions to which our framework may be applied in Sect. 4.
2 Framework
2.1 Stochastic Processes
We consider discrete-event, discrete-time stationary stochastic processes [40]. At each timestep \(t\in \mathbb {Z}\) such a process emits a symbol \(r_t\) drawn from a configuration space \(\mathcal {R}\). We use \(R_t\) to denote the random variable governing output \(r_t\). We also designate the semi-infinite strings \(\overleftarrow{R} _t:= \dots R_{t-1}R_{t}\) and \(\vec {R} _t:= R_{t+1}R_{t+2}\dots \) as the random variables associated with the past and future observation sequences \(\overleftarrow{r} _t:= \dots r_{t-1}r_{t}\) and \(\vec {r} _t:= r_{t+1}r_{t+2}\dots \) at time t respectively (throughout, upper case indicates random variables, and lower case the corresponding variates). The output symbol statistics are then described by a conditional probability distribution \(P(\vec {R}\,|\,\overleftarrow{R})\) over these strings, detailing how future observations are correlated with past observations. We use the notation \(r_{t:t+n}=r_tr_{t+1}\dots r_{t+n-1}\) to represent the sequence of outputs between t and \(t+n-1\). Stationarity of a stochastic process is defined by \(P(R_{0:n})=P(R_{L:L+n})\) \(\forall n,L \in \mathbb {Z}\); this allows us to drop the subscript t from semi-infinite strings.
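As a concrete illustration of such a process, consider the 'Golden Mean' process, a standard example in computational mechanics: a binary process in which the subword 00 never occurs. A minimal Python sketch (the sampler and state names are our own illustration, not from the text) generates a long realisation:

```python
import random

# Transition structure of the Golden Mean process: after emitting a 0 the
# next symbol must be a 1. States: 'A' (unconstrained), 'B' (just emitted 0).
TRANSITIONS = {
    'A': [(0.5, 1, 'A'), (0.5, 0, 'B')],  # (probability, symbol, next state)
    'B': [(1.0, 1, 'A')],
}

def sample_sequence(n, seed=0):
    """Sample n symbols from the Golden Mean process."""
    rng = random.Random(seed)
    state, out = 'A', []
    for _ in range(n):
        r, acc = rng.random(), 0.0
        for p, sym, nxt in TRANSITIONS[state]:
            acc += p
            if r < acc:
                out.append(sym)
                state = nxt
                break
    return out

seq = sample_sequence(100000)
# The forbidden word '00' never appears in the output.
assert all(not (a == 0 and b == 0) for a, b in zip(seq, seq[1:]))
```

Over a long realisation the empirical symbol frequencies approach the stationary values (here \(P(1)=2/3\)), reflecting the stationarity condition above.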
2.2 Causal Models
A causal model of a stationary stochastic process [20, 21, 37] is tasked with replicating its future output statistics according to the distribution \(P(\vec {R}\,|\,\overleftarrow{R})\); it stores information about the past of a stochastic process in its internal memory, and uses this to predict the future output statistics. Crucially, a causal model contains no information about the future that cannot be inferred from the past (i.e., the mutual information between the memory and future outputs given the past outputs is zero). Causal models use an encoding function f to map pasts to states \(m\in \mathcal {M}\) according to \(m=f(\overleftarrow{r})\), such that \(P(\vec {R}\,|\,M=f(\overleftarrow{r}))=P(\vec {R}\,|\,\overleftarrow{R}=\overleftarrow{r})\). At each timestep t, the model produces an output r following \(P(R_{t+1}=r\,|\,M_t=m_t)\). At time \(t+1\), r becomes part of the past observation sequence, and the memory state is updated to \(m_{t+1}=f(\overleftarrow{r}_{t+1})\), where the new past \(\overleftarrow{r}_{t+1}=\overleftarrow{r}_{t}r\) is the concatenation of the previous past with the new output symbol. The information stored by such a model is given by the Shannon entropy of its set of internal memory states:

\(C=-\sum _{m\in \mathcal {M}}{P(m)\log _2{P(m)}},\)   (1)

where \(P(m)=P(f(\overleftarrow{R})=m)\).
For any given stationary stochastic process, one can construct myriad causal models of the process that will faithfully replicate the future statistics. The field of computational mechanics provides us with a systematic way to identify and construct the provably optimal classical causal model of a given process [20,21,22]. By optimal, we here mean the model that stores the least possible amount of past information [Eq. (1)] while accurately simulating the process; the optimal classical causal model is referred to as the \(\varepsilon \)-machine. In this framework, sets of pasts are grouped into equivalence classes called causal states according to the relation

\(\overleftarrow{r}\sim \overleftarrow{r}' \iff P(\vec {R}\,|\,\overleftarrow{R}=\overleftarrow{r})=P(\vec {R}\,|\,\overleftarrow{R}=\overleftarrow{r}').\)   (2)
Equation (2) mandates that past observations leading to statistically identical futures belong to the same causal state. Let us denote \(\mathcal {S}\) as the set of causal states, where each state \(s\in \mathcal {S}\) is given by an associated encoding function

\(\varepsilon (\overleftarrow{r}):= \{\overleftarrow{r}'\,|\,\overleftarrow{r}'\sim \overleftarrow{r}\}.\)   (3)

At each time step the \(\varepsilon \)-machine transitions from causal state j to k whilst emitting an output \(r\in \mathcal {R}\), with transition probability \(T_{kj}^r=P(R_{t+1}=r,S_{t+1}=k|S_t=j)\). The encoding function enforces unifilarity of the \(\varepsilon \)-machine, where, given the current causal state j and the emitted output symbol r, the subsequent causal state k of the model is uniquely specified [21]. We denote this mapping by a function \(\lambda (j,r)\) that outputs the label of the subsequent causal state. This allows us to express

\(T_{kj}^r=P(R_{t+1}=r\,|\,S_t=j)\,\delta _{k\lambda (j,r)},\)   (4)

where \(\delta _{jk}\) is the Kronecker-\(\delta \) function. The amount of information required to track the dynamics of causal states has been widely employed as a measure of structure [20, 41,42,43,44,45,46,47,48,49], designated as the statistical complexity:
\(C_\mu := -\sum _j{P_j\log _2{P_j}},\)   (5)

where \(P_j=P(S_t=j)\) is the steady-state probability of causal state j. The statistical complexity is often compared to the mutual information between the past and future of the system,

\(E:= I(\overleftarrow{R};\vec {R}),\)   (6)

a quantity known as the excess entropy. It quantifies the amount of information in the past that correlates with future statistics. The excess entropy (also sometimes called the ‘predictive information’) is also used as a measure of complexity [50, 51]. The data processing inequality [52] ensures that the excess entropy represents a lower bound on the amount of information a simulator of a process must store in any physical theory, and thus \(C_\mu \ge E\).
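The causal-state construction and the statistical complexity can be estimated numerically from data. The following Python sketch (our own illustration; the tolerance-based merging of conditional distributions mirrors the kind of numerical check described in Sect. 3) groups length-1 pasts of the Golden Mean process, a binary process forbidding the subword 00, into causal states and estimates \(C_\mu \):

```python
import math
import random
from collections import Counter, defaultdict

def golden_mean(n, seed=1):
    """Sample the Golden Mean process (no '00' subwords)."""
    rng, state, out = random.Random(seed), 'A', []
    for _ in range(n):
        if state == 'B' or rng.random() < 0.5:
            out.append(1); state = 'A'
        else:
            out.append(0); state = 'B'
    return out

def causal_states(seq, L=1, tol=0.05):
    """Group length-L pasts with numerically identical conditional
    next-symbol statistics into causal states; return (C_mu, #states)."""
    counts = defaultdict(Counter)
    for i in range(L, len(seq)):
        counts[tuple(seq[i - L:i])][seq[i]] += 1
    # empirical conditional distribution P(next | past) for each observed past
    dists = {p: tuple(c[s] / sum(c.values()) for s in (0, 1))
             for p, c in counts.items()}
    # merge pasts whose conditional distributions agree within tol
    states = []  # entries: [representative distribution, total weight]
    for p, d in dists.items():
        w = sum(counts[p].values())
        for st in states:
            if max(abs(a - b) for a, b in zip(st[0], d)) < tol:
                st[1] += w
                break
        else:
            states.append([d, w])
    total = sum(w for _, w in states)
    c_mu = -sum(w / total * math.log2(w / total) for _, w in states)
    return c_mu, len(states)

c_mu, n = causal_states(golden_mean(200000), L=1)
```

For this process the \(\varepsilon \)-machine has two causal states, occupied with steady-state probabilities 2/3 and 1/3, giving \(C_\mu \approx 0.918\) bits.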
2.3 Quantum Causal Models
Recently, computational mechanics has been extended into the quantum regime [23], where it has been shown that quantum effects allow one to construct causal models that require a lower amount of information than is classically possible. The present state-of-the-art systematic construction methods for quantum models [26, 27, 30] involve step-wise unitary interactions between the model memory and a probe system. The memory of such quantum models stores a member of a set of quantum memory states \(\{ \left| s_j \right\rangle \}\) that are in one-to-one correspondence with the causal states of the process. The memory states are then used to produce the future outputs of the process sequentially. Starting from state \(\left| s_j \right\rangle \) and a blank ancilla, there exists a unitary operator U that satisfies

\(U\left| s_j \right\rangle \left| 0 \right\rangle =\sum _r{\sqrt{P(r|j)}\left| s_{\lambda (j,r)} \right\rangle \left| r \right\rangle }.\)   (7)
The stationary state of the model memory is given by \(\rho =\sum _j{P_j \left| s_j \right\rangle \left\langle s_j \right| }\), where \(P_j\) is as defined for Eq. (5). The amount of information stored internally is given by the von Neumann entropy of \(\rho \):

\(C_q:= -\text {Tr}\left( \rho \log _2{\rho } \right) .\)   (8)
\(C_q\) is referred to as the quantum statistical memory. In general, the memory states are non-orthogonal (i.e., \(\left\langle s_j|s_k\right\rangle \ne 0\) for \(j\ne k\)), due to a quantum model being able to encode pasts with partially overlapping futures into partially overlapping states. Therefore, \(C_q \le C_\mu \); a quantum causal model can utilise less memory than the \(\varepsilon \)-machine of the same process [23,24,25,26,27,28,29,30]. While the quantum models presented here require less memory than \(\varepsilon \)-machines, they are in general not optimal over all quantum models [30]. However, in certain specific cases, this construction has been shown to be the provably optimal quantum model [31, 37]. Paralleling \(C_\mu \), the minimal possible memory cost across all quantum models is called the quantum statistical complexity. \(C_q\) thus upper bounds the quantum statistical complexity, and due to the difficulty associated with finding the true quantum minimum has been suggested as a potential measure of complexity in itself [23, 53,54,55]. As with \(C_\mu \), the data processing inequality mandates that \(C_q\ge E\).
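The memory cost \(C_q\) can be computed without writing out the memory states explicitly: the non-zero eigenvalues of \(\rho \) coincide with those of the matrix with entries \(\sqrt{P_jP_k}\left\langle s_j|s_k\right\rangle \). A Python sketch for the Golden Mean process, a two-causal-state process whose memory-state overlap works out to \(\sqrt{1/2}\) under the unitary construction above (the concrete numbers are our illustration):

```python
import numpy as np

# Memory-state overlaps obey the self-consistency relation implied by Eq.-style
# unitary construction: <s_j|s_k> = sum_r sqrt(P(r|j)P(r|k)) <s_l(j,r)|s_l(k,r)>.
# For the Golden Mean process this gives <s_A|s_B> = sqrt(1/2).
P = np.array([2/3, 1/3])                 # steady-state causal-state probabilities
gram = np.array([[1.0, np.sqrt(0.5)],
                 [np.sqrt(0.5), 1.0]])   # overlaps <s_j|s_k>

# rho and the matrix sqrt(P_j P_k) <s_j|s_k> share their non-zero eigenvalues,
# so the von Neumann entropy follows from the small Gram-weighted matrix.
M = np.sqrt(np.outer(P, P)) * gram
evals = np.linalg.eigvalsh(M)
evals = evals[evals > 1e-12]
C_q = float(-np.sum(evals * np.log2(evals)))

# Classical comparison: C_mu is the Shannon entropy of the same distribution.
C_mu = float(-np.sum(P * np.log2(P)))
assert C_q < C_mu  # quantum memory advantage
```

For these numbers \(C_q \approx 0.55\) bits, below \(C_\mu \approx 0.918\) bits, illustrating the compression gained from non-orthogonal memory states.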
2.4 Measures of Complexity
We focus our attention on the following measures of complexity:

- Statistical complexity \(C_\mu \).
- Quantum statistical memory \(C_q\).
- Excess entropy E.
- Half-chain entanglement entropy \(S_{\frac{1}{2}}\).
\(C_\mu \), \(C_q\), and E have been formally introduced above. The half-chain entanglement entropy of a one-dimensional quantum many-body system quantifies the amount of quantum correlations (entanglement) between the left and right halves of the system [13]. It is defined as

\(S_{\frac{1}{2}}=-\text {Tr}\left( \rho _A\log _2{\rho _A} \right) =-\text {Tr}\left( \rho _B\log _2{\rho _B} \right) ,\)   (9)
where \(\rho _A=\text {Tr}_B\left( \rho _{AB} \right) \) and \(\rho _B=\text {Tr}_A\left( \rho _{AB} \right) \) are the density matrices describing the states of the left and right halves of the quantum system respectively. \(S_{\frac{1}{2}}\) is measurement basis-independent, and depends only on the state of the quantum many-body system. \(S_\frac{1}{2}\) is very closely related to the quantum mutual information (\(I_q\left( A, B \right) =2S_{\frac{1}{2}}\) for pure states), and is loosely analogous to the excess entropy for quantum processes. Beyond identifying correlations, entanglement has also been suggested as an indicator of critical points of phase transitions in quantum many-body systems [56,57,58]. However, experimental measurement of \(S_\frac{1}{2}\) in real quantum systems is a highly non-trivial task [59,60,61,62].
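For a pure state of a finite chain, \(S_{\frac{1}{2}}\) follows directly from the Schmidt decomposition across the half-chain cut. A minimal numerical sketch (function name ours):

```python
import numpy as np

def half_chain_entropy(psi, n_sites, d=2):
    """Half-chain entanglement entropy S_1/2 of a pure state |psi> of
    n_sites qudits (local dimension d), via the Schmidt decomposition."""
    cut = n_sites // 2
    mat = np.reshape(psi, (d**cut, d**(n_sites - cut)))
    s = np.linalg.svd(mat, compute_uv=False)
    p = s**2                     # squared Schmidt values = spectrum of rho_A
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# Sanity checks on two qubits:
product = np.kron([1, 0], [1, 0])           # |00>: no entanglement
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # Bell state: one ebit
```

Here `half_chain_entropy(product, 2)` vanishes while the Bell state yields exactly one bit, consistent with Eq.-level expectations for unentangled and maximally entangled bipartitions.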
3 Results
3.1 Stochastic Processes from Quantum Many-Body Systems
The measures of structural complexity used in this manuscript are defined in terms of classical stochastic processes. To apply them to quantum systems, we need some method to meaningfully extract such a process from a quantum system. Since measurements of a quantum system are inherently probabilistic, a series of measurements yields outputs that form a stochastic sequence. We can then analyse the structure in this sequence using the above measures. The structure in this sequence can embody structure present in the underlying quantum state that gave rise to the observations. Taking this as a proxy for structure in the quantum system itself, the approach can be described as a ‘semi-classical’ method for describing the structural complexity of quantum systems and processes.
There are many possible measurement protocols that could extract a classical time-series from an infinite length one-dimensional quantum many-body chain. For instance, one could measure the same site at multiple points in time, allowing the system to evolve between measurements. Alternatively, as done here, one can take measurements sequentially across consecutive sites of the chain, sweeping from left to right. This effectively measures one different site per timestep, and the outcomes of this form a time-series, where the temporal indices of the time-series are in one-to-one correspondence with the spatial indices of the sites in the chain. This is illustrated in Fig. 1.
Specifically, we consider non-degenerate, site-local measurement operators such that on site j, a set of measurement outcomes \(r_j\in \mathcal {R}\) are associated with unique eigenstates \(\left| r_j \right\rangle \) of the measurement operator. This measurement outcome is taken to be the corresponding j-th observation in the constructed stochastic process. As we sweep the output sequence from left to right, it follows then that the output sequence \(\overleftrightarrow {r}\) occurs with probability

\(P\left( \overleftrightarrow {r} \right) =\left| \left\langle \overleftrightarrow {r} | \psi \right\rangle \right| ^2, \quad \text {with } \left| \overleftrightarrow {r} \right\rangle =\bigotimes _j{\left| r_j \right\rangle },\)   (10)
where \(\left| \psi \right\rangle \) is the quantum state of the one-dimensional system being investigated. For the examples considered here, we will take the quantum chains to be in the ground states of their respective Hamiltonians, with no temporal evolution between measurements. We emphasise that there is no dynamical evolution of the systems considered here; the temporal dynamics of the extracted stochastic process manifest from a mapping of spatial position in the underlying chain.
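For small chains this mapping can be carried out exactly: the Born rule assigns each full left-to-right measurement record its joint probability. A Python sketch (helper name ours), using the three-qubit GHZ state measured in the z basis as an example:

```python
import numpy as np

def sequence_probabilities(psi, n_sites, d=2):
    """Joint probability P(r_1 ... r_N) = |<r_1...r_N|psi>|^2 of a full
    left-to-right site-local measurement record, for a small chain."""
    probs = np.abs(psi)**2
    return {tuple(np.unravel_index(i, (d,) * n_sites)): p
            for i, p in enumerate(probs) if p > 1e-12}

# Example: GHZ state (all up + all down). In the z basis only two
# measurement sequences can occur, each with probability 1/2.
ghz = np.zeros(8)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
seqs = sequence_probabilities(ghz, 3)
```

This highly correlated example already shows how order in the state translates into order in the extracted stochastic process: only the records 000 and 111 appear.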
3.2 Numerical Considerations
For numerical tractability, we make two approximations regarding the system size and the measures of complexity. Firstly, rather than an infinite chain, we study large finite-size quantum chains of length N. Secondly, we introduce the truncated Markov memory order L. It represents the number of sites from which past information may be obtained. That is, we approximate \(P(\vec {R}_t\,|\,\overleftarrow{R}_t)\approx P(\vec {R}_t\,|\,R_{t-L+1:t+1})\). We take \(N\gg L\). We employ tensor network methods [19] (see Appendix) to obtain near-exact ground states of the example systems we study, as well as to extract the corresponding measurement sequences that form the stochastic processes.
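The truncated Markov approximation can be checked empirically by estimating the conditional entropy of the next outcome given the L most recent ones; once L exceeds the effective memory of the process, this quantity stops decreasing. A minimal Python sketch (our illustration, on a period-2 sequence that becomes perfectly predictable at \(L\ge 1\)):

```python
import math
from collections import Counter, defaultdict

def conditional_entropy(seq, L):
    """Empirical H(R_{t+1} | R_{t-L+1:t+1}): entropy of the next symbol
    given the L most recent symbols (the truncated Markov approximation)."""
    joint = defaultdict(Counter)
    for i in range(L, len(seq)):
        joint[tuple(seq[i - L:i])][seq[i]] += 1
    n = sum(sum(c.values()) for c in joint.values())
    h = 0.0
    for past, c in joint.items():
        tot = sum(c.values())
        for k in c.values():
            h -= (k / n) * math.log2(k / tot)
    return h

# With no memory (L=0) the alternating sequence looks like a fair coin;
# with L=1 it is fully determined by the previous symbol.
seq = [0, 1] * 5000
```

Here `conditional_entropy(seq, 0)` is 1 bit while `conditional_entropy(seq, 1)` vanishes, showing how increasing L exposes the predictive structure.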
We now apply our framework to explore the structures of two paradigmatic quantum many-body chains in their respective ground states.
3.3 Quantum Ising Chain
A quantum Ising chain [63] describes the physics of a system of interacting quantum spins subject to the influence of a magnetic field. It is governed by the Hamiltonian:

\(H=-\frac{J}{2}\sum _l{\sigma _l^x\sigma _{l+1}^x}+B\sum _l{\sigma _l^z},\)   (11)
where J is a coupling parameter, B is the external magnetic field strength, and \(\sigma _l^w\), \(w\in \{x,y,z\}\), are the Pauli operators at site l. The system undergoes a quantum phase transition at \(B/J=0.5\). At \(B \gg J\), the field along the z-direction dominates the correlations in the system; the ground state of the system is \(\left| \psi _g \right\rangle = \left| \dots \downarrow _{i-1}\downarrow _{i}\downarrow _{i+1} \dots \right\rangle \), fully-polarised along the z-axis. On the other hand, when \(B \ll J\), the field is much weaker than the spin-spin coupling along the x-direction. There are then two degenerate ground states, with all spins either parallel or anti-parallel with the x-axis, \(\left| \psi _g \right\rangle =\left| \dots \rightarrow _{i-1}\rightarrow _{i}\rightarrow _{i+1}\dots \right\rangle \) or \(\left| \dots \leftarrow _{i-1}\leftarrow _{i}\leftarrow _{i+1}\dots \right\rangle \).
We investigate the structure of the model at a range of truncated Markov memory orders \(L\in \{ 1, 3, 5, 7, 9 \}\), with \(N=500\). The causal states are then determined by the L rightmost spins of the past, as defined in Eq. (2). We find that each length-L spin configuration belongs to a unique causal state. To ensure that there is no spurious splitting of causal states, we verify that the conditional probability distributions differ by more than \(\mathcal {O}\left( 10^{-12}\right) \) from each other, while the ground states \(\left| \psi _g \right\rangle \) are accurate to \(\mathcal {O}\left( 10^{-14}\right) \); i.e., the error in the ground state is significantly smaller than the distance between conditional distributions of the spin configurations.
Using the framework above, we study the structural complexity of the quantum Ising chain through its measurement sequences as obtained from \(\sigma _\theta \)-basis measurements, where
\(\sigma _\theta =\cos {(\theta )}\,\sigma _z+\sin {(\theta )}\,\sigma _x,\)   (12)

with \(\theta \in [0,{\pi }/{2}]\) the angle measured from the z-axis of the Bloch sphere. Angles \(\theta =0\) and \(\theta =\pi /2\) correspond to the z- and x-axes respectively. Intuitively, \(\sigma _z\) measurement in the \(B \gg J\) regime will result in a highly-ordered stochastic process, while \(\sigma _x\) will return a near-random process. Conversely, in the \(B \ll J\) regime, measurement sequences along \(\sigma _z\) will yield a near-random stochastic process, while \(\sigma _x\) will result in a highly-ordered process.
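The interpolated observable is straightforward to construct numerically; for every \(\theta \) it remains a valid two-outcome measurement with eigenvalues \(\pm 1\), and its eigenbasis defines the measured sequences. A short Python sketch:

```python
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=float)
sigma_x = np.array([[0, 1], [1, 0]], dtype=float)

def sigma_theta(theta):
    """Measurement operator interpolating between sigma_z (theta=0)
    and sigma_x (theta=pi/2) on the Bloch sphere."""
    return np.cos(theta) * sigma_z + np.sin(theta) * sigma_x

# The eigenvalues stay +-1 for every theta, so sigma_theta is always a
# valid two-outcome observable between the z- and x-axis limits.
for theta in np.linspace(0, np.pi / 2, 5):
    evals = np.sort(np.linalg.eigvalsh(sigma_theta(theta)))
    assert np.allclose(evals, [-1.0, 1.0])
```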
Figure 2 compares \(C_\mu \), \(C_q\), E, and \(S_{\frac{1}{2}}\) for measurements of \(\sigma _x\) and \(\sigma _z\) at different B/J, where we observe interesting differences between these measures of structure and complexity. Firstly, in all measurement bases studied, \(C_q\), E, and \(S_{\frac{1}{2}}\) reach maximal values close to the phase transition \(B/J\approx 0.5\). In contrast, \(C_\mu \) exhibits its largest gradient near the phase transition.
We also observe that \(S_{\frac{1}{2}}> C_q\) for all measurement bases. This is because projecting a quantum state onto a specific basis effectively destroys information about other measurement bases, while \(S_{\frac{1}{2}}\) takes into account the full information contained in the quantum state. The relation \(S_{\frac{1}{2}}> C_q\) shows that simulating a quantum system measured in a specific basis can require less information than is carried by the quantum correlations present in the system; if replication of all measurement bases is not required, the state is not the most efficient simulator of itself.
Figure 2 also shows that when sequences from measurement outcomes of the quantum Ising chain are more ordered (\(B\gg J\) for \(\sigma _{z}\)-basis, \(B\ll J\) for \(\sigma _{x}\)-basis), the corresponding values of \(C_\mu \), \(C_q\), and E are lower. In a highly-ordered stochastic process, the corresponding \(\varepsilon \)-machine consists of a single dominant causal state that is occupied with very high probability, and other causal states arise with very low probabilities. Thus the resulting \(\varepsilon \)-machine requires little information to be stored to accurately simulate the corresponding stochastic process. Our quantum model behaves similarly in this regime, hence \(C_q\) mirrors \(C_\mu \). Further, in this parameter regime the more ordered the sequences are, the less information is carried forward from the left half to the right half of the system, resulting in low values of E.
On the other hand, when the observation sequences are near-random (\(B\gg J\) for \(\sigma _{x}\)-basis, \(B\ll J\) for \(\sigma _{z}\)-basis), \(C_\mu \) exhibits drastically different qualitative behaviour compared to \(C_q\) and E, as seen in Fig. 2. Both \(C_q\) and E are lower when the sequences appear more random, unlike \(C_\mu \), which saturates in this regime. This is because in the near-random limit the past configurations have different-yet-strongly-overlapping conditional future probability distributions, and thus they are mapped into different causal states. As a classical model, the \(\varepsilon \)-machine can only store information in distinguishable states, despite multiple causal states having significantly overlapping conditional future statistics; consequently, \(C_\mu \) is high in this regime. In contrast, quantum models have the ability to store information in non-orthogonal states. Thus, causal states with highly-overlapping future conditional probability distributions will be encoded into highly-overlapping quantum states, resulting in low \(C_q\) in the near-random regime.
We also observe differing behaviour of \(C_\mu \), \(C_q\), and E with respect to the truncated Markov memory order, as illustrated in Figs. 3, 4, and 5. As L is increased, \(C_q\) also increases, but at a decreasing rate, indicating that the value of \(C_q\) converges as larger L is considered (Fig. 3). This is consistent with the decrease in correlation strength between spins as the distance between them increases. At high L, increasing the Markov memory order further thus adds little predictive information. These weak additional correlations cannot be utilised effectively by the classical \(\varepsilon \)-machine, which has access only to orthogonal states; as L increases, so too does the number of causal states—resulting in a higher \(C_\mu \), as illustrated by Fig. 4.
We see that the behaviour of \(C_\mu \) with respect to B/J changes as we rotate the measurement angle \(\theta \) from \(\sigma _{z}\) to \(\sigma _{x}\) (Fig. 6b). As \(\theta \) changes from 0 to \(\pi /2\), the measurement outcome sequences sweep between near-random and near-deterministic in the region \(B\ll J\), and between near-deterministic and near-random in the region \(B\gg J\). Figure 6a shows the counterpart behaviour of \(C_q\) with respect to the different measurement angles. As the measurement basis changes from \(\sigma _{z}\) to \(\sigma _{x}\), we observe that the peak value of \(C_q\) increases. In the parameter region that is neither random nor deterministic, our quantum model stores less information when the measurement basis is closer to the z-axis. This captures the underlying structure of the quantum Ising chain very well, as measurement outcomes along the z-axis are less dependent on the past spins, because the inter-spin coupling of the system is along the x-axis.
3.4 Bose–Hubbard Chain
We now look at structure in the one-dimensional Bose–Hubbard model, which describes the physics of interacting spinless bosons on a lattice [3, 5, 6]. It is governed by:

\(H=-J\sum _l{\left( b_l^\dagger b_{l+1}+b_{l+1}^\dagger b_l \right) }+\frac{U}{2}\sum _l{n_l\left( n_l-1 \right) },\)   (13)
where \(b^\dagger _l\) and \(b_l\) are bosonic creation and annihilation operators and \(n_l=b^\dagger _l b_l\) is the number of bosons at site l. The variable J denotes the hopping amplitude, describing the kinetic energy of the bosons, and U is the on-site repulsive interaction strength. The filling factor \(\nu \) is defined as the average number of bosons per site; we consider a chain with \(\nu =1\). In our numerical calculations, we use \(N=300\) and truncated Markov memory orders of \(L\in \{1,2,3,4\}\). We also enforce that each site has a maximum occupation number \(n_{\text {max}}=4\)—higher values of occupation number have very low probability of occurrence (and thus have little impact on the state), yet demand significantly more computational power. In the ground state, when \(J\gg U\) all bosons occupy zero-momentum eigenstates, wherein they are completely delocalised across the lattice; this is the superfluid phase. On the other hand, when \(J\ll U\), the system is in the Mott insulator phase, where at integer filling factors \(\nu \), each site contains \(\nu \) highly-localised bosons. The one-dimensional Bose–Hubbard model undergoes a quantum phase transition at \(U/J\approx 3.1\) [6, 64] for the case of \(\nu =1\).
We calculate the structural complexity measures for measurement outcome sequences of the Bose–Hubbard chain in the number basis, wherein n is measured sequentially on every site. Figure 7 shows the behaviour of \(C_\mu \), \(C_q\), and E for these sequences, as well as \(S_{\frac{1}{2}}\). We observe that E peaks close to the phase transition, while \(C_q\) peaks earlier. This indicates that E may identify the phase transition, while \(C_q\) and \(C_\mu \) do not. This is in contrast to the quantum Ising chain, where both E and \(C_q\) (in multiple measurement bases) reach a maximum value in the vicinity of the phase transition.
In the superfluid phase \((J\gg U)\), \(C_\mu \) behaves very differently to \(C_q\) and E: \(C_\mu \) increases as \(U/J \rightarrow 0\), while \(C_q\) and E decrease. This is because the bosons are delocalised, leading to near-random measurement sequences for the site occupations. Hence, the models occupy all available (quantum) causal states with almost equal probabilities. As with the quantum Ising chain, however, \(C_\mu \) and \(C_q\) behave very differently from each other, due to the (non-)orthogonality of (quantum) memory states.
On the other hand, in the Mott insulator phase \((J\ll U)\), \(C_\mu \), \(C_q\), and E decrease as U/J increases. In this regime, the bosons in the system are highly localised, and so measurement in the n-basis yields a highly-ordered stochastic process since the number of bosons at each site tends to \(\nu \) as \(U/J \rightarrow \infty \). Similar to the analogous limit in the quantum Ising chain, measurement sequences that are highly-ordered have a single causal state that manifests with very high probability, while other causal states all occur with low probabilities. This results in low values of both \(C_\mu \) and \(C_q\).
Figure 7 also shows that \(S_{\frac{1}{2}}\) is larger than \(C_q\) for the measurement outcome sequences; as with the quantum Ising chain, this is because \(S_{\frac{1}{2}}\) quantifies the full quantum correlations between two halves of the system, while \(C_q\) results from projecting the quantum state onto one specific basis, and discarding information about all other bases. Notably, in the superfluid parameter region \(S_{\frac{1}{2}}\) behaves very differently to \(C_q\) and E: \(S_{\frac{1}{2}}\) increases, while \(C_q\) and E decrease. This is because projecting the state into the n-basis removes the structure in the conjugate basis (i.e., momentum) that would otherwise have manifested as large half-chain entanglement.
Finally, Fig. 8 shows how \(C_\mu \) and \(C_q\) scale with increasing L. As with the quantum Ising chain, we see that \(C_q\) displays signs of convergence as L is increased, while \(C_\mu \) does not. Again, this is due to the decay in the strength of correlations between site occupations with increasing distance.
4 Discussion
Our results show that classical and quantum measures of structural complexity can exhibit drastically different qualitative behaviour when applied to sequences generated by measurement outcomes of quantum systems. In particular, it is evident that the measures interpret near-randomness very differently; quantum models are typically able to capture the predictive features in near-random sequences without storing a large amount of information about the past, in contrast to corresponding minimal classical models. The quantum measures also appear to signal proximity to a phase transition. For a given Hamiltonian, the most informative (i.e., highest complexity) basis appears to vary with the Hamiltonian parameters; by considering maximisations of \(C_q\) over all measurement bases we may obtain a stronger indicator of phase transitions and other related phenomena—we leave this question for future work. Another interesting open question in this direction is the study of universality classes of quantum systems—can quantum systems be categorised into universality classes according to their structural complexities? Members of the same universality class have identical critical behaviour despite possibly having radically diverse microscopic behaviour.
Moving forward in the field of computational mechanics, the transducer framework [39, 65, 66] provides a natural extension to study quantum systems as input–output processes, and thus lines up as a natural next step to this work. In such processes, the choice of the measurement basis would form the input, and the resulting measurement sequence is the output, allowing for the fully-quantum nature of non-commuting measurements to be considered. It would be interesting to make this extension and study the dynamics of quantum processes by measuring them at different bases at different timesteps, capturing the structure and complexity within an evolving quantum system. An ultimate goal of quantum computational mechanics is to construct quantum causal models that can simulate any quantum stochastic processes without restriction in choice of measurement bases; the results in this manuscript serve as a crucial first step towards this direction.
References
Feynman, R.P.: Simulating physics with computers. Int. J. Theor. Phys. 21, 467 (1982)
Bloch, I., Dalibard, J., Nascimbene, S.: Quantum simulations with ultracold quantum gases. Nat. Phys. 8, 267 (2012)
Lewenstein, M., Sanpera, A., Ahufinger, V.: Ultracold Atoms in Optical Lattices: Simulating Quantum Many-Body Systems. Oxford University Press, Oxford (2012)
Johnson, T.H., Clark, S.R., Jaksch, D.: What is a quantum simulator? EPJ Quant. Technol. 1, 1 (2014)
Jaksch, D., Bruder, C., Cirac, J.I., Gardiner, C.W., Zoller, P.: Cold bosonic atoms in optical lattices. Phys. Rev. Lett. 81, 3108 (1998)
Greiner, M., Mandel, O., Esslinger, T., Hänsch, T.W., Bloch, I.: Quantum phase transition from a superfluid to a Mott insulator in a gas of ultracold atoms. Nature 415, 39 (2002)
Jaksch, D., Zoller, P.: Creation of effective magnetic fields in optical lattices: the Hofstadter butterfly for cold neutral atoms. New J. Phys. 5, 56 (2003)
Bloch, I., Dalibard, J., Zwerger, W.: Many-body physics with ultracold gases. Rev. Mod. Phys. 80, 885 (2008)
Baumann, K., Guerlin, C., Brennecke, F., Esslinger, T.: Dicke quantum phase transition with a superfluid gas in an optical cavity. Nature 464, 1301 (2010)
Ritsch, H., Domokos, P., Brennecke, F., Esslinger, T.: Cold atoms in cavity-generated dynamical optical potentials. Rev. Mod. Phys. 85, 553 (2013)
Aidelsburger, M., Atala, M., Lohse, M., Barreiro, J.T., Paredes, B., Bloch, I.: Realization of the Hofstadter Hamiltonian with ultracold atoms in optical lattices. Phys. Rev. Lett. 111, 185301 (2013)
Miyake, H., Siviloglou, G.A., Kennedy, C.J., Burton, W.C., Ketterle, W.: Realizing the Harper Hamiltonian with laser-assisted tunneling in optical lattices. Phys. Rev. Lett. 111, 185302 (2013)
Amico, L., Fazio, R., Osterloh, A., Vedral, V.: Entanglement in many-body systems. Rev. Mod. Phys. 80, 517 (2008)
Horodecki, R., Horodecki, P., Horodecki, M., Horodecki, K.: Quantum entanglement. Rev. Mod. Phys. 81, 865 (2009)
Vidal, G.: Efficient classical simulation of slightly entangled quantum computations. Phys. Rev. Lett. 91, 147902 (2003)
Verstraete, F., Cirac, J.I.: Matrix product states represent ground states faithfully. Phys. Rev. B 73, 094423 (2006)
Schollwöck, U.: The density-matrix renormalization group in the age of matrix product states. Ann. Phys. 326, 96 (2011)
Orús, R.: A practical introduction to tensor networks: matrix product states and projected entangled pair states. Ann. Phys. 349, 117 (2014)
Valdez, M.A., Jaschke, D., Vargas, D.L., Carr, L.D.: Quantifying complexity in quantum phase transitions via mutual information complex networks. Phys. Rev. Lett. 119, 225301 (2017)
Crutchfield, J.P., Young, K.: Inferring statistical complexity. Phys. Rev. Lett. 63, 105 (1989)
Shalizi, C.R., Crutchfield, J.P.: Computational mechanics: Pattern and prediction, structure and simplicity. J. Stat. Phys. 104, 817 (2001)
Crutchfield, J.P.: Between order and chaos. Nat. Phys. 8, 17 (2012)
Gu, M., Wiesner, K., Rieper, E., Vedral, V.: Quantum mechanics can reduce the complexity of classical models. Nat. Commun. 3, 762 (2012)
Mahoney, J.R., Aghamohammadi, C., Crutchfield, J.P.: Occam’s quantum strop: Synchronizing and compressing classical cryptic processes via a quantum channel. Sci. Rep. 6, 20495 (2016)
Palsson, M.S., Gu, M., Ho, J., Wiseman, H.M., Pryde, G.J.: Experimentally modeling stochastic processes with less memory by the use of a quantum processor. Sci. Adv. 3, e1601302 (2017)
Aghamohammadi, C., Loomis, S.P., Mahoney, J.R., Crutchfield, J.P.: Extreme quantum memory advantage for rare-event sampling. Phys. Rev. X 8, 011025 (2018)
Binder, F.C., Thompson, J., Gu, M.: Practical unitary simulator for non-Markovian complex processes. Phys. Rev. Lett. 120, 240502 (2018)
Elliott, T.J., Gu, M.: Superior memory efficiency of quantum devices for the simulation of continuous-time stochastic processes. NPJ Quant. Inf. 4, 18 (2018)
Elliott, T.J., Garner, A.J.P., Gu, M.: Memory-efficient tracking of complex temporal and symbolic dynamics with quantum simulators. New J. Phys. 21, 013021 (2019)
Liu, Q., Elliott, T.J., Binder, F.C., Di Franco, C., Gu, M.: Optimal stochastic modeling with unitary quantum dynamics. Phys. Rev. A 99, 062110 (2019)
Suen, W.Y., Thompson, J., Garner, A.J.P., Vedral, V., Gu, M.: The classical-quantum divergence of complexity in modelling spin chains. Quantum 1, 25 (2017)
Aghamohammadi, C., Mahoney, J.R., Crutchfield, J.P.: The ambiguity of simplicity in quantum and classical simulation. Phys. Lett. A 381, 1223 (2017)
Jouneghani, F.G., Gu, M., Ho, J., Thompson, J., Suen, W.Y., Wiseman, H.M., Pryde, G.J.: Observing the ambiguity of simplicity via quantum simulations of an Ising spin chain. arXiv:1711.03661 (2017)
Loomis, S.P., Crutchfield, J.P.: Strong and weak optimizations in classical and quantum models of stochastic processes. J. Stat. Phys. 176, 1317 (2019)
Garner, A.J., Liu, Q., Thompson, J., Vedral, V., et al.: Provably unbounded memory advantage in stochastic simulation using quantum mechanics. New J. Phys. 19, 103009 (2017)
Aghamohammadi, C., Mahoney, J.R., Crutchfield, J.P.: Extreme quantum advantage when simulating classical systems with long-range interaction. Sci. Rep. 7, 6735 (2017)
Thompson, J., Garner, A.J.P., Mahoney, J.R., Crutchfield, J.P., Vedral, V., Gu, M.: Causal asymmetry in a quantum world. Phys. Rev. X 8, 031013 (2018)
Elliott, T.J., Yang, C., Binder, F.C., Garner, A.J.P., Thompson, J., Gu, M.: Extreme dimensionality reduction with quantum modeling. Phys. Rev. Lett. 125, 260501 (2020)
Elliott, T.J., Gu, M., Garner, A.J.P., Thompson, J.: Quantum adaptive agents with efficient long-term memories. Phys. Rev. X 12, 011007 (2022)
Khintchine, A.: Korrelationstheorie der stationären stochastischen Prozesse. Math. Ann. 109, 604 (1934)
Crutchfield, J.P., Feldman, D.P.: Statistical complexity of simple 1D spin systems. Phys. Rev. E 55, R1239 (1997)
Palmer, A.J., Fairall, C.W., Brewer, W.A.: Complexity in the atmosphere. IEEE Trans. Geosci. Remote Sens. 38, 2056 (2000)
Varn, D.P., Canright, G.S., Crutchfield, J.P.: Discovering planar disorder in close-packed structures from X-ray diffraction: beyond the fault model. Phys. Rev. B 66, 174110 (2002)
Clarke, R.W., Freeman, M.P., Watkins, N.W.: Application of computational mechanics to the analysis of natural data: an example in geomagnetism. Phys. Rev. E 67, 016203 (2003)
Park, J.B., Lee, J.W., Yang, J.-S., Jo, H.-H., Moon, H.-T.: Complexity analysis of the stock market. Physica A 379, 179 (2007)
Li, C.-B., Yang, H., Komatsuzaki, T.: Multiscale complex network of protein conformational fluctuations in single-molecule time series. Proc. Natl. Acad. Sci. 105, 536 (2008)
Haslinger, R., Klinkner, K.L., Shalizi, C.R.: The computational structure of spike trains. Neural Comput. 22, 121 (2010)
Kelly, D., Dillingham, M., Hudson, A., Wiesner, K.: A new method for inferring hidden Markov models from noisy time sequences. PLoS ONE 7, e29703 (2012)
Muñoz, R.N., Leung, A., Zecevik, A., Pollock, F.A., Cohen, D., van Swinderen, B., Tsuchiya, N., Modi, K.: General anesthesia reduces complexity and temporal asymmetry of the informational structures derived from neural recordings in Drosophila. Phys. Rev. Res. 2, 023219 (2020)
Shaw, R.: The dripping faucet as a model chaotic system. Aerial Press, London (1984)
Bialek, W., Nemenman, I., Tishby, N.: Predictability, complexity, and learning. Neural Comput. 13, 2409 (2001)
Nielsen, M.A., Chuang, I.L.: Quantum computation and quantum information. Cambridge University Press, Cambridge (2010)
Tan, R., Terno, D.R., Thompson, J., Vedral, V., Gu, M.: Towards quantifying complexity with quantum mechanics. Eur. Phys. J. Plus 129, 1 (2014)
Ho, M., Gu, M., Elliott, T.J.: Robust inference of memory structure for efficient quantum modeling of stochastic processes. Phys. Rev. A 101, 032327 (2020)
Ho, M., Pradana, A., Elliott, T.J., Chew, L.Y., Gu, M.: Quantum-inspired identification of complex cellular automata. arXiv:2103.14053 (2021)
Osterloh, A., Amico, L., Falci, G., Fazio, R.: Scaling of entanglement close to a quantum phase transition. Nature 416, 608 (2002)
Osborne, T.J., Nielsen, M.A.: Entanglement in a simple quantum phase transition. Phys. Rev. A 66, 032110 (2002)
Vidal, G., Latorre, J.I., Rico, E., Kitaev, A.: Entanglement in quantum critical phenomena. Phys. Rev. Lett. 90, 227902 (2003)
Daley, A.J., Pichler, H., Schachenmayer, J., Zoller, P.: Measuring entanglement growth in quench dynamics of bosons in an optical lattice. Phys. Rev. Lett. 109, 020505 (2012)
Abanin, D.A., Demler, E.: Measuring entanglement entropy of a generic many-body system with a quantum switch. Phys. Rev. Lett. 109, 020504 (2012)
Islam, R., Ma, R., Preiss, P.M., Tai, M.E., Lukin, A., Rispoli, M., Greiner, M.: Measuring entanglement entropy in a quantum many-body system. Nature 528, 77 (2015)
Elliott, T.J., Kozlowski, W., Caballero-Benitez, S.F., Mekhov, I.B.: Multipartite entangled spatial modes of ultracold atoms generated and controlled by quantum measurement. Phys. Rev. Lett. 114, 113604 (2015)
Mattis, D.C.: The Theory of Magnetism Made Simple: An Introduction to Physical Concepts and to Some Useful Mathematical Methods. World Scientific, Singapore (2006)
Pino, M., Prior, J., Somoza, A.M., Jaksch, D., Clark, S.R.: Reentrance and entanglement in the one-dimensional Bose–Hubbard model. Phys. Rev. A 86, 023631 (2012)
Barnett, N., Crutchfield, J.P.: Computational mechanics of input-output processes: structured transformations and the \(\epsilon \)-transducer. J. Stat. Phys. 161, 404 (2015)
Thompson, J., Garner, A.J.P., Vedral, V., Gu, M.: Using quantum theory to simplify input-output processes. NPJ Quant. Inf. 3, 6 (2017)
Al-Assam, S., Clark, S.R., Jaksch, D.: The tensor network theory library. J. Stat. Mech. 2017, 093102 (2017)
Pirvu, B., Murg, V., Cirac, J.I., Verstraete, F.: Matrix product operator representations. New J. Phys. 12, 025012 (2010)
White, S.R.: Density matrix formulation for quantum renormalization groups. Phys. Rev. Lett. 69, 2863 (1992)
White, S.R.: Density-matrix algorithms for quantum renormalization groups. Phys. Rev. B 48, 10345 (1993)
White, S.R., Huse, D.A.: Numerical renormalization-group study of low-lying eigenstates of the antiferromagnetic \(S=1\) Heisenberg chain. Phys. Rev. B 48, 3844 (1993)
Lanczos, C.: An Iteration Method for the Solution of the Eigenvalue Problem of Linear Differential and Integral Operators. United States Government Press Office, Los Angeles (1950)
Davidson, E.R.: The iterative calculation of a few of the lowest eigenvalues and corresponding eigenvectors of large real-symmetric matrices. J. Comput. Phys. 17, 87 (1975)
De Chiara, G., Rizzi, M., Rossini, D., Montangero, S.: Density matrix renormalization group for dummies. J. Comput. Theor. Nanosci. 5, 1277 (2008)
Acknowledgements
We thank Felix Binder, Yang Chengran, Jirawat Tangpanitanon, and Benjamin Yadin for enlightening discussions, and the University of Oxford Advanced Research Computing department for providing us with access to their platform to run our numerical simulations. We are also grateful to Sarah Al-Assam, Stephen Clark, and Dieter Jaksch for their permission to use their tensor network library [67]. This work was funded by grant FQXi-RFP-1809 from the Foundational Questions Institute and Fetzer Franklin Fund (a donor advised fund of Silicon Valley Community Foundation), the Singapore Quantum Engineering Program QEP-SF3, the Singapore National Research Foundation Fellowship NRF-NRFF2016-02, the Imperial College Borland Fellowship in Mathematics, the Lee Kuan Yew Endowment Fund (Postdoctoral Fellowship), and the Singapore Ministry of Education Tier 1 grant RG162/19. M.G. thanks the FQXi-funded workshop ‘Workshop on Agency at the Interface of Quantum and Complexity Science’ for catalyzing the research. T.J.E. thanks the Centre for Quantum Technologies for their hospitality.
Communicated by Chris Jarzynski.
Technical Appendix
Tensor Network Theory (TNT) [67] is a set of powerful and efficient numerical methods for classically simulating quantum many-body systems. In this Appendix, we briefly review matrix product states (MPS), matrix product operators (MPO), and the density matrix renormalisation group (DMRG) in the context of our work. MPS and MPO provide efficient descriptions of states and operators of quantum many-body systems respectively, while DMRG is an iterative procedure that variationally minimises the energy of Hamiltonians to obtain the ground states of quantum many-body systems.
MPS [18] are widely used as efficient representations of low-energy states of one-dimensional quantum systems. In a quantum many-body chain, each lattice site is represented by a tensor, and the tensors are connected to their neighbours. Consider a quantum many-body chain of size \(N\) in a quantum state

\[ \left| \psi \right\rangle = \sum _{i_1,i_2,\dots ,i_N} c_{i_1i_2 \dots i_N} \left| i_1 \right\rangle \left| i_2 \right\rangle \cdots \left| i_N \right\rangle , \]

where \(\{ \left| i_j \right\rangle \}\) are the local orthonormal basis states. We can perform repeated Schmidt decompositions [52] at each site, splitting the tensor \(c_{i_1i_2 \dots i_N}\) into local tensors \(\Gamma ^{[j]}\) and Schmidt coefficients \(\lambda ^{[j]}\) that quantify the entanglement across each split; this yields the canonical form of the MPS representation of the state:

\[ c_{i_1i_2 \dots i_N} = \sum _{\alpha _1,\dots ,\alpha _{N-1}} \Gamma ^{[1]i_1}_{\alpha _1} \lambda ^{[1]}_{\alpha _1} \Gamma ^{[2]i_2}_{\alpha _1\alpha _2} \lambda ^{[2]}_{\alpha _2} \cdots \Gamma ^{[N]i_N}_{\alpha _{N-1}} , \]
where \(\alpha _j\) takes positive integer values up to the rank of \(\Gamma ^{[j]}\). By contracting the Schmidt coefficient tensors \(\lambda ^{[j]}\) into the local tensors \(\Gamma ^{[j]}\), we obtain the more generic form

\[ \left| \psi \right\rangle = \sum _{x_1,\dots ,x_N} A^{x_1} A^{x_2} \cdots A^{x_N} \left| x_1 x_2 \dots x_N \right\rangle , \]
where \(A^{x_j}\) denotes a matrix for each value of the local basis index \(x_j\), so that each site carries as many matrices as there are local basis states.
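The repeated Schmidt decompositions described above can be made concrete with a minimal numpy sketch (an illustration only, not the tensor network library used in this work): a dense state vector is split into MPS tensors via successive SVDs, and contracting the tensors recovers the original state.

```python
import numpy as np

def state_to_mps(psi, d, N):
    """Split a dense state vector over N sites of local dimension d
    into MPS tensors of shape (chi_left, d, chi_right) by repeated
    SVDs, mirroring the repeated Schmidt decompositions above."""
    tensors = []
    chi = 1
    rest = np.asarray(psi).reshape(chi * d, -1)
    for _ in range(N - 1):
        u, s, vh = np.linalg.svd(rest, full_matrices=False)
        rank = int(np.sum(s > 1e-12))   # drop numerically zero Schmidt values
        u, s, vh = u[:, :rank], s[:rank], vh[:rank, :]
        tensors.append(u.reshape(chi, d, rank))
        chi = rank
        rest = (np.diag(s) @ vh).reshape(chi * d, -1)
    tensors.append(rest.reshape(chi, d, 1))
    return tensors

def mps_to_state(tensors):
    """Contract the MPS tensors back into a dense state vector."""
    out = tensors[0]
    for A in tensors[1:]:
        out = np.tensordot(out, A, axes=([out.ndim - 1], [0]))
    return out.reshape(-1)
```

Truncating the singular values at a fixed cutoff \(\chi \) instead of at machine precision yields the compressed, approximate MPS on which the efficiency of these methods rests.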
In a similar fashion, a quantum operator can be written in MPO form [68]:

\[ \mathcal {H} = \sum _{i_1,\dots ,i_N} \sum _{k_1,\dots ,k_N} H^{i_1,k_1} H^{i_2,k_2} \cdots H^{i_N,k_N} \left| i_1 \dots i_N \right\rangle \left\langle k_1 \dots k_N \right| , \]
where \(H^{i_j,k_j}\) denotes a matrix for each pair of local basis indices. With quantum states and Hamiltonians represented in MPS and MPO forms respectively, the ground state \(\left| \psi _g \right\rangle \) may then be obtained by minimising \(\left\langle \psi \right| \mathcal {H}\left| \psi \right\rangle \) over all normalised states using the DMRG algorithm.
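As a concrete illustration (a hand-rolled numpy sketch, not the library used in the paper), the transverse-field Ising Hamiltonian \(H = -J\sum _j Z_j Z_{j+1} - h\sum _j X_j\) admits a well-known MPO with bond dimension 3; contracting copies of the MPO tensor on a small chain reproduces the dense Hamiltonian.

```python
import numpy as np

# Pauli matrices and identity (local dimension 2)
I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def ising_mpo_tensor(J, h):
    """Bulk MPO tensor W of shape (3, 3, 2, 2) for the transverse-field
    Ising Hamiltonian H = -J sum_j Z_j Z_{j+1} - h sum_j X_j."""
    W = np.zeros((3, 3, 2, 2))
    W[0, 0] = I
    W[1, 0] = Z
    W[2, 0] = -h * X
    W[2, 1] = -J * Z
    W[2, 2] = I
    return W

def mpo_to_dense(W, N):
    """Contract N copies of W into the dense Hamiltonian (small N only),
    closing with the boundary row (2) on the left and column (0) on the right."""
    op = W[2]                                   # shape (3, 2, 2)
    for _ in range(N - 1):
        op = np.einsum('aij,abkl->bikjl', op, W)
        dim = op.shape[1] * op.shape[2]
        op = op.reshape(op.shape[0], dim, dim)
    return op[0]
```

The bond index threads the "state" of the sum through the chain (nothing placed yet, a \(ZZ\) term half-built, or a completed term), which is why a bond dimension of 3 suffices for this nearest-neighbour Hamiltonian.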
The DMRG algorithm [69, 70] is an iterative, variational method that truncates the degrees of freedom of the system, retaining only the most significant features required to accurately describe the physics of a target state. The algorithm achieves remarkable precision in describing one-dimensional quantum many-body systems [71].
In the DMRG algorithm, the elementary unit is a site, described by the states \(d_i\), where \(i=1,\dots , D\) labels the states accessible to a given site. A block \(B(L,v_L)\) consists of \(L\) sites and has total dimension \(v_L\); \(H_B\) is the Hamiltonian of the block, containing only terms that involve the sites inside the block. Whenever a block is enlarged, a site is added to it, forming an enlarged block \(B^e\) whose Hilbert space dimension is the product of the dimensions of \(B(L,v_L)\) and a site, i.e. \(v_L \times D\). An important step in the algorithm is the formation of superblock Hamiltonians, consisting of two enlarged blocks connected to each other. The superblock ground state is calculated using the Lanczos [72] or Davidson [73] methods. The enlarged-block basis is then truncated by discarding the least probable eigenstates of its reduced density matrix.
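The truncation step can be sketched in a few lines (a schematic numpy illustration, assuming the superblock ground state has already been found): diagonalise the left reduced density matrix, keep only its \(\chi \) most probable eigenstates, and report the discarded weight as the truncation error.

```python
import numpy as np

def truncate_block(psi, dim_left, chi):
    """DMRG-style truncation: psi is the superblock ground state, with
    rows indexing the (enlarged) left block. Diagonalise the left reduced
    density matrix, keep its chi most probable eigenstates, and return
    the projector together with the discarded weight (truncation error)."""
    M = psi.reshape(dim_left, -1)
    rho = M @ M.conj().T                        # left reduced density matrix
    evals, evecs = np.linalg.eigh(rho)          # eigenvalues in ascending order
    keep = evecs[:, -chi:]                      # chi largest-eigenvalue states
    discarded = float(evals[:-chi].sum()) if chi < dim_left else 0.0
    return keep, discarded
```

The columns of `keep` define the renormalised block basis used in the next iteration; the discarded weight bounds how much of the state's norm is lost in the truncation.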
The algorithm itself consists of two parts: the warm-up cycle and the finite-system algorithm. The warm-up cycle grows a system block to the desired length, with dimension at most \(\chi \), before the finite-system algorithm is applied to compute the ground state. Starting from a block \(B(1, D)\), each step of the warm-up cycle is carried out as follows [74]:
1. Start from a left block \(B(L,v_L)\), and enlarge the block by adding a single site.
2. Form a superblock by adding a reflected copy of the enlarged block to its right.
3. Obtain the ground state of the superblock, and the \(v_{L+1}=\min (v_L D,\chi )\) eigenstates of the reduced density matrix of the left enlarged block with the largest eigenvalues.
4. Use the truncated left enlarged block for the next iteration.
5. Renormalise all operators to obtain the block \(B(L+1,v_{L+1})\).
These steps are repeated until the desired length \(L_{\mathrm {max}}\) is reached, at which point the system consists of two blocks \(B(L_{\text {max}}/2 - 1,\chi )\) and two free sites. The subsequent step is the “sweep procedure”, the goal of which is to enhance the convergence of the target state. The sweep procedure consists of enlarging the left block by one site and shrinking the right block correspondingly, keeping the total length fixed. While the left block is constructed by the usual enlarging steps, the right block is recalled from memory, having been built and saved during the warm-up cycle. This procedure is repeated until the left block reaches length \(L_{\text {max}}-4\). At this point the right block \(B(1, D)\) is constructed from scratch and the left block \(B(L_{\text {max}},\chi )\) is obtained through renormalisation. The sweep procedure is then repeated from right to left, and at each iteration the renormalised block is stored in memory. The procedure stops when the system energy converges.
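The bookkeeping of block dimensions during the warm-up cycle follows directly from the truncation rule \(v_{L+1} = \min (v_L D, \chi )\); a small helper (illustrative only) traces how the block dimension grows and saturates at \(\chi \).

```python
def warmup_dimensions(D, chi, n_blocks):
    """Block dimensions v_L during the warm-up cycle: each enlargement
    multiplies the dimension by the site dimension D, after which
    truncation caps it at chi (v_{L+1} = min(v_L * D, chi))."""
    dims = [D]                    # initial block B(1, D)
    for _ in range(n_blocks - 1):
        dims.append(min(dims[-1] * D, chi))
    return dims
```

For the Ising chain considered here, \(D = 2\) and \(\chi = 150\), so the dimension first hits the cap at block length 8 (\(2^8 = 256 > 150\)); beyond that point every block is described by at most \(\chi \) states.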
In this manuscript, we use the implementations of these algorithms as described in [67], and the ground states are computed with \(\chi =150\) for the one-dimensional quantum Ising chain, and \(\chi =80\) for the one-dimensional Bose–Hubbard chain. The resulting ground states are accurate up to \(\mathcal {O}(10^{-14})\).
Suen, W.Y., Elliott, T.J., Thompson, J. et al. Surveying Structural Complexity in Quantum Many-Body Systems. J Stat Phys 187, 4 (2022). https://doi.org/10.1007/s10955-022-02895-6