1 Introduction

Quantum many-body systems hold a distinguished position in modern physics, playing a vital role in providing insight into the physical world. On the one hand, they constitute an excellent platform for studying a range of phenomena through their utility in quantum simulation [1,2,3,4]. On the other hand, the properties intrinsic to these systems are interesting in their own right, and they thus form a target of simulation and modelling themselves [5,6,7,8,9,10,11,12]. Understanding the structures present in these systems, and the resources needed to characterise, study and emulate them, is thus of paramount importance.

Quantum entanglement [13, 14] captures the quantum correlations present in a system, and so plays a significant role in identifying structure in quantum systems. In particular, the half-chain entanglement quantifies the amount of information shared between the left and right halves of a one-dimensional quantum system; it provides an indicator of the amount of classical resources needed to simulate such systems [15,16,17,18], and related quantum mutual information-based quantities have previously been associated with structural complexity [19]. Nevertheless, complex systems possess structure beyond such correlations; we turn to tools from complexity science to identify and quantify this structure. The field of computational mechanics [20,21,22] adopts an information-theoretic approach to this task, and equates structure in a stochastic process with the minimal amount of information that must be stored by a model that replicates its behaviour. Moreover, it offers a systematic approach to determining such minimal models.

Impelled by the growth of quantum technologies, recent efforts have extended the computational mechanics framework into the quantum regime, finding that classical limits on the information that models must store can be overcome [23,24,25,26,27,28,29,30]. This fundamentally changes how we might perceive and characterise structure; two prominent examples are the ambiguities of simplicity [31,32,33] and optimality [30, 34], which highlight that properties of complexity considered truisms classically no longer hold in the quantum domain. Practically, these quantum models can provide memory savings for stochastic simulation, and the resource gap between minimal quantum and classical simulators can even grow unbounded [28, 29, 35,36,37,38,39].

It is natural then to ask how these measures of complexity look—and what they can tell us about structure—when applied to quantum many-body systems. In this article, we apply this framework to study their structure and complexity. To this end, we look at the structure manifest in the measurement statistics of quantum states. This serves as a crucial first step in identifying the structure in the quantum processes that gave rise to the states. We begin by reviewing in Sect. 2 the relevant details of causal models and measures of complexity that form the background to our work. In Sect. 3, we introduce the mapping from states of quantum chains to stochastic processes through the statistics of observation sequences, and apply it to quantify structure in the one-dimensional quantum Ising and Bose–Hubbard models. We discuss the implications of our results and the future directions to which our framework may be applied in Sect. 4.

2 Framework

2.1 Stochastic Processes

We consider discrete-event, discrete-time stationary stochastic processes [40]. At each timestep \(t\in \mathbb {Z}\) such a process emits a symbol \(r_t\) drawn from a configuration space \(\mathcal {R}\). We use \(R_t\) to denote the random variable governing output \(r_t\). We also designate the semi-infinite strings \(\overleftarrow{R}_t:=\dots R_{t-1}R_t\) and \(\vec {R} _t:= R_{t+1}R_{t+2}\dots \) as the random variables associated with the past and future observation sequences \(\overleftarrow{r}_t:=\dots r_{t-1}r_t\) and \(\vec {r} _t:= r_{t+1}r_{t+2}\dots \) at time t respectively (throughout, upper case indicates random variables, and lower case the corresponding variates). The output symbol statistics are then described by a conditional probability distribution \(P(\vec {R}|\overleftarrow{R})\) over these strings, detailing how future observations are correlated with past observations. We use the notation \(r_{t:t+n}=r_tr_{t+1}\dots r_{t+n-1}\) to represent the sequence of outputs between t and \(t+n-1\). Stationarity of a stochastic process is defined by \(P(R_{0:n})=P(R_{L:L+n})\) \(\forall n,L \in \mathbb {Z}\); this allows us to drop the subscript t from semi-infinite strings.

2.2 Causal Models

A causal model of a stationary stochastic process [20, 21, 37] is tasked with replicating its future output statistics according to the distribution \(P(\vec {R}|\overleftarrow{R})\); it stores information about the past of a stochastic process in its internal memory, and uses this to predict the future output statistics. Crucially, a causal model contains no information about the future that cannot be inferred from the past (i.e., the mutual information between the memory and future outputs given the past outputs is zero). Causal models use an encoding function f to map pasts to states \(m\in \mathcal {M}\) according to \(m=f(\overleftarrow{r})\), such that \(P(\vec {R}|M=f(\overleftarrow{r}))=P(\vec {R}|\overleftarrow{R}=\overleftarrow{r})\). At each timestep t, the model produces an output r following \(P(R_{t+1}=r|M_t=f(\overleftarrow{r}_t))\). At time \(t+1\), r becomes part of the past observation sequence, and the memory state is updated to \(f(\overleftarrow{r}_t r)\), where the new past is the concatenation of the previous past with the new output symbol. The information stored by such a model is given by the Shannon entropy of its set of internal memory states:

$$\begin{aligned} H(M)=\sum _{m\in \mathcal {M}}{-P_m\log {P_m}}, \end{aligned}$$
(1)

where \(P_m:=P(M=m)\) is the steady-state probability of memory state m.

For any given stationary stochastic process, one can construct myriad causal models of the process that will faithfully replicate the future statistics. The field of computational mechanics provides us with a systematic way to identify and construct the provably optimal classical causal model of a given process [20,21,22]. By optimal, we here mean the model that stores the least possible amount of past information [Eq. (1)] while accurately simulating the process; the optimal classical causal model is referred to as the \(\varepsilon \)-machine. In this framework, sets of pasts are grouped into equivalence classes called causal states according to the relation

$$\begin{aligned} \overleftarrow{r}\sim _\varepsilon \overleftarrow{r}' \iff P(\vec {R}|\overleftarrow{R}=\overleftarrow{r})=P(\vec {R}|\overleftarrow{R}=\overleftarrow{r}'). \end{aligned}$$
(2)

Equation (2) mandates that past observations leading to statistically identical futures belong to the same causal state. Let us denote \(\mathcal {S}\) as the set of causal states, where each state \(s\in \mathcal {S}\) is given by an associated encoding function \(\varepsilon (\overleftarrow{r})=s\). At each time step the \(\varepsilon \)-machine transitions from causal state j to k whilst emitting an output \(r\in \mathcal {R}\), with transition probability \(T_{kj}^r=P(R_{t+1}=r,S_{t+1}=k|S_t=j)\). The encoding function enforces unifilarity of the \(\varepsilon \)-machine, where, given the current causal state j and the emitted output symbol r, the subsequent causal state k of the model is uniquely specified [21]. We denote this mapping by a function \(\lambda (j,r)\) that outputs the label of the subsequent causal state. This allows us to express

$$\begin{aligned} T_{kj}^r&= P(R_{t+1}=r,S_{t+1}=k|S_t=j)\end{aligned}$$
(3)
$$\begin{aligned}&= P\left( r| j \right) \delta _{k,\lambda (j,r)} \end{aligned}$$
(4)

where \(\delta _{k,\lambda (j,r)}\) is the Kronecker-\(\delta \) function. The amount of information required to track the dynamics of causal states has been widely employed as a measure of structure [20, 41,42,43,44,45,46,47,48,49], designated as the statistical complexity:

$$\begin{aligned} C_\mu :=-\sum _{j}P_j\log {P_j}, \end{aligned}$$
(5)

where \(P_j:=P(S=j)\) is the steady-state probability of causal state j. The statistical complexity is often compared to the mutual information between the past and future of the system,

$$\begin{aligned} E:=I(\overleftarrow{R};\vec {R}), \end{aligned}$$
(6)

a quantity known as the excess entropy. It quantifies the amount of information in the past that correlates with future statistics. The excess entropy (also sometimes called the ‘predictive information’) is also used as a measure of complexity [50, 51]. The data processing inequality [52] ensures that the excess entropy represents a lower bound on the amount of information a simulator of a process must store in any physical theory, and thus \(C_\mu \ge E\).
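To make these definitions concrete, the following is a minimal Python sketch of how length-L pasts can be grouped into causal states via Eq. (2), with \(C_\mu \) [Eq. (5)] then following as the Shannon entropy of the causal-state distribution. The joint block distribution `p_joint` is a hypothetical user-supplied function; the sketch is illustrative only, and the probabilities used for the results below are instead obtained with the tensor-network methods described in Sect. 3.2.

```python
import numpy as np
from itertools import product

def causal_states(p_joint, n_past, n_future, alphabet, tol=1e-10):
    """Group length-n_past histories into causal states by comparing their
    conditional distributions over length-n_future futures (Eq. (2))."""
    futures = list(product(alphabet, repeat=n_future))
    cond, weight = {}, {}
    for past in product(alphabet, repeat=n_past):
        joint = np.array([p_joint(past + fut) for fut in futures])
        w = joint.sum()
        if w > tol:                               # ignore pasts that never occur
            cond[past], weight[past] = joint / w, w
    states = []                                   # [(representative distribution, weight)]
    assignment = {}
    for past, dist in cond.items():
        for k, (rep, w) in enumerate(states):
            if np.max(np.abs(dist - rep)) < tol:  # same conditional future statistics
                states[k] = (rep, w + weight[past])
                assignment[past] = k
                break
        else:                                     # no match: a new causal state
            assignment[past] = len(states)
            states.append((dist, weight[past]))
    p_states = np.array([w for _, w in states])
    return assignment, p_states / p_states.sum()

def shannon(p):
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

# Example: an i.i.d. biased coin has a single causal state, so C_mu = 0.
def p_coin(word, p=0.7):
    return float(np.prod([p if r == 1 else 1 - p for r in word]))

_, p_states = causal_states(p_coin, n_past=3, n_future=3, alphabet=(0, 1))
C_mu = shannon(p_states)   # ~ 0
```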

2.3 Quantum Causal Models

Recently, computational mechanics has been extended into the quantum regime [23], where it has been shown that quantum effects allow one to construct causal models that require a lower amount of information than is classically possible. The present state-of-the-art systematic construction methods for quantum models [26, 27, 30] involve step-wise unitary interactions between the model memory and a probe system. The memory of such a quantum model stores a member of a set of quantum memory states \(\{ \left| s_j \right\rangle \}\) that are in one-to-one correspondence with the causal states of the process. The memory states are then used to produce the future outputs of the process sequentially. Starting from state \(\left| s_j \right\rangle \) and a blank ancilla, there exists a unitary operator U that satisfies

$$\begin{aligned} U\left| s_j \right\rangle \left| 0 \right\rangle&= \sum _{r\in \mathcal {R}}\sqrt{P(r|j)}\left| s_{\lambda (j,r)} \right\rangle \left| r \right\rangle . \end{aligned}$$
(7)

The stationary state of the model memory is given by \(\rho =\sum _j{P_j \left| s_j \right\rangle \left\langle s_j \right| }\), where \(P_j\) is as defined for Eq. (5). The amount of information stored internally is given by the von Neumann entropy of \(\rho \):

$$\begin{aligned} C_q := -\text {Tr}\left( \rho \log _2 {\rho } \right) . \end{aligned}$$
(8)

\(C_q\) is referred to as the quantum statistical memory. In general, the memory states are non-orthogonal (i.e., \(\left\langle s_j|s_k\right\rangle \ne 0\) for \(j\ne k\)), due to a quantum model being able to encode pasts with partially overlapping futures into partially overlapping states. Therefore, \(C_q \le C_\mu \); a quantum causal model can utilise less memory than the \(\varepsilon \)-machine of the same process [23,24,25,26,27,28,29,30]. While the quantum models presented here require less memory than \(\varepsilon \)-machines, they are in general not optimal over all quantum models [30]. However, in certain specific cases, this construction has been shown to be the provably optimal quantum model [31, 37]. Paralleling \(C_\mu \), the minimal possible memory cost across all quantum models is called the quantum statistical complexity. \(C_q\) thus upper bounds the quantum statistical complexity, and, due to the difficulty associated with finding the true quantum minimum, has been suggested as a potential measure of complexity in itself [23, 53,54,55]. As with \(C_\mu \), the data processing inequality mandates that \(C_q\ge E\).
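As an illustrative sketch, Eq. (8) can be evaluated numerically from the overlaps of the memory states, here approximated (our working assumption) by superposing square-root amplitudes of future words of finite length \(\ell \), with the approximation improving as \(\ell \) grows. The function `p_future_given_state`, returning \(P(w|j)\) for a length-\(\ell \) word w emitted from causal state j, is a hypothetical user-supplied ingredient, and `p_states` is the steady-state causal-state distribution (e.g., as returned by the earlier sketch).

```python
import numpy as np
from itertools import product

def quantum_statistical_memory(p_future_given_state, p_states, alphabet, ell=8):
    """Approximate C_q (Eq. (8)): represent each memory state |s_j> by the
    amplitudes sqrt(P(w|j)) over length-ell future words w, then take the
    von Neumann entropy of rho = sum_j P_j |s_j><s_j|."""
    words = list(product(alphabet, repeat=ell))
    n = len(p_states)
    A = np.array([[np.sqrt(p_future_given_state(w, j)) for w in words]
                  for j in range(n)])
    A /= np.linalg.norm(A, axis=1, keepdims=True)        # normalise each |s_j>
    # the non-zero eigenvalues of rho coincide with those of the weighted Gram matrix
    G = (A @ A.T) * np.sqrt(np.outer(p_states, p_states))
    evals = np.linalg.eigvalsh(G)
    evals = evals[evals > 1e-15]
    return float(-np.sum(evals * np.log2(evals)))
```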

2.4 Measures of Complexity

We focus our attention on the following measures of complexity:

  • Statistical complexity \(C_\mu \).

  • Quantum statistical memory \(C_q\).

  • Excess entropy E.

  • Half-chain entanglement entropy \(S_{\frac{1}{2}}\).

\(C_\mu \), \(C_q\), and E have been formally introduced above. The half-chain entanglement entropy of a one-dimensional quantum many-body system quantifies the amount of quantum correlations (entanglement) between the left and right halves of the system [13]. It is defined as

$$\begin{aligned} S_{\frac{1}{2}}&:=-\text {Tr}(\rho _A \log _2 \rho _A) =-\text {Tr}(\rho _B \log _2 \rho _B), \end{aligned}$$
(9)

where \(\rho _A=\text {Tr}_B\left( \rho _{AB} \right) \) and \(\rho _B=\text {Tr}_A\left( \rho _{AB} \right) \) are the density matrices describing the states of the left and right halves of the quantum system respectively. \(S_{\frac{1}{2}}\) is measurement basis-independent, and depends only on the state of the quantum many-body system. \(S_\frac{1}{2}\) is very closely related to the quantum mutual information (\(I_q\left( A, B \right) =2S_{\frac{1}{2}}\) for pure states), and is loosely analogous to the excess entropy for quantum processes. Beyond identifying correlations, entanglement has also been suggested as an indicator of critical points of phase transitions in quantum many-body systems [56,57,58]. However, experimental measurement of \(S_\frac{1}{2}\) in real quantum systems is a highly non-trivial task [59,60,61,62].
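For pure states, Eq. (9) can be computed directly from the Schmidt (singular value) decomposition of the state vector reshaped across the half-chain cut. The minimal sketch below assumes the system is small enough to store as a full state vector; the dimensions `dim_A` and `dim_B` of the two halves are inputs.

```python
import numpy as np

def half_chain_entropy(psi, dim_A, dim_B):
    """Half-chain entanglement entropy (Eq. (9)) of a pure state on A (x) B:
    the squared singular values of the reshaped state vector are the
    eigenvalues of rho_A (and rho_B)."""
    M = np.asarray(psi).reshape(dim_A, dim_B)
    s = np.linalg.svd(M, compute_uv=False)
    p = s**2
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

# Example: a Bell pair across the cut carries exactly 1 bit of entanglement.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
assert abs(half_chain_entropy(bell, 2, 2) - 1.0) < 1e-12
```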

3 Results

3.1 Stochastic Processes from Quantum Many-Body Systems

The measures of structural complexity used in this manuscript are defined in terms of classical stochastic processes. To apply them to quantum systems, we need some method to meaningfully extract such a process from a quantum system. Since measurements of a quantum system are inherently probabilistic, a series of measurements on the system yields a stochastic sequence of outcomes. We can then analyse the structure in this sequence using the above measures. The structure in this sequence can embody structure present in the underlying quantum state that gave rise to the observations. Taking this as a proxy for structure in the quantum system itself, the approach can be described as a ‘semi-classical’ method for describing the structural complexity of quantum systems and processes.

There are many possible measurement protocols that could extract a classical time-series from an infinite length one-dimensional quantum many-body chain. For instance, one could measure the same site at multiple points in time, allowing the system to evolve between measurements. Alternatively, as done here, one can take measurements sequentially across consecutive sites of the chain, sweeping from left to right. This effectively measures a different site at each timestep, and the outcomes form a time-series, where the temporal indices of the time-series are in one-to-one correspondence with the spatial indices of the sites in the chain. This is illustrated in Fig. 1.

Specifically, we consider non-degenerate, site-local measurement operators such that on site j, a set of measurement outcomes \(r_j\in \mathcal {R}\) are associated with unique eigenstates \(\left| r_j \right\rangle \) of the measurement operator. This measurement outcome is taken to be the corresponding j-th observation in the constructed stochastic process. As we sweep the output sequence from left to right, it follows then that the output sequence \(\overleftrightarrow {r}\) occurs with probability

$$\begin{aligned} P(\overleftrightarrow {r})=\left\langle \psi \right| \bigotimes _{j=-\infty }^{\infty }\left| r_j \right\rangle \left\langle r_j|\psi \right\rangle \end{aligned}$$
(10)

where \(\left| \psi \right\rangle \) is the quantum state of the one-dimensional system being investigated. For the examples considered here, we will take the quantum chains to be in the ground states of their respective Hamiltonians, with no temporal evolution between measurements. We emphasise that there is no dynamical evolution of the systems considered here; the temporal dynamics of the extracted stochastic process correspond to a sweep across spatial positions in the underlying chain.
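For a chain small enough to store as a full state vector, Eq. (10) reduces to contracting \(\left| \psi \right\rangle \) with a product of single-site eigenstates. The sketch below is purely illustrative (the results in this paper are obtained with the tensor-network methods of Sect. 3.2); the `local_basis` argument is a hypothetical unitary whose columns are the measurement eigenstates of a single site.

```python
import numpy as np
from functools import reduce

def sequence_probability(psi, outcomes, local_basis):
    """P(r_1 ... r_N) as in Eq. (10) for a small N-site chain: overlap of |psi>
    with the product of single-site eigenstates |r_j> (columns of local_basis),
    squared."""
    bra = reduce(np.kron, [local_basis[:, r].conj() for r in outcomes])
    return float(np.abs(bra @ psi) ** 2)

# Example: a two-qubit Bell state measured in the computational (sigma_z) basis
# yields the strings 00 and 11 with probability 1/2 each.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
assert abs(sequence_probability(bell, (0, 0), np.eye(2)) - 0.5) < 1e-12
```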

Fig. 1

Extracting stochastic processes from quantum chains: By measuring a quantum chain with local operators sequentially at each site a time-series is generated, with temporal indices in the series corresponding directly to spatial indices of the sites. The structure in these sequences can be used as a proxy of structure in the underlying quantum chain

3.2 Numerical Considerations

For numerical tractability, we make two approximations regarding the system size and the measures of complexity. Firstly, rather than an infinite chain, we study large finite-size quantum chains of length N. Secondly, we introduce the truncated Markov memory order L. It represents the number of sites from which past information may be obtained. That is, we approximate \(P(\vec {R}_t|\overleftarrow{R}_t)\approx P(\vec {R}_t|R_{t-L+1:t+1})\), conditioning only on the L most recent outcomes. We take \(N\gg L\). We employ tensor network methods [19] (see Appendix) to obtain near exact ground states of the example systems we study, as well as to extract the corresponding measurement sequences that form the stochastic processes.
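As a sketch of what the truncation means in practice, the conditional distributions entering the causal-state construction are estimated from finite blocks: given length-\((L+1)\) block probabilities (supplied here by a hypothetical `block_prob` function), the next outcome is conditioned only on the L most recent outcomes; longer futures are handled analogously.

```python
import numpy as np
from itertools import product

def truncated_conditionals(block_prob, L, alphabet, tol=1e-12):
    """Markov-order-L truncation: approximate P(R_{t+1} | entire past) by
    P(R_{t+1} | last L outcomes), built from length-(L+1) block probabilities.
    `block_prob(word)` should return P(word) for a length-(L+1) tuple."""
    cond = {}
    for past in product(alphabet, repeat=L):
        joint = np.array([block_prob(past + (r,)) for r in alphabet])
        w = joint.sum()
        if w > tol:                 # skip pasts of negligible probability
            cond[past] = joint / w
    return cond
```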

We now apply our framework to explore the structures of two paradigmatic quantum many-body chains in their respective ground states.

3.3 Quantum Ising Chain

A quantum Ising chain [63] describes the physics of a system of interacting quantum spins subject to the influence of a magnetic field. It is governed by the Hamiltonian:

$$\begin{aligned} \mathcal {H}_{QI}=\sum _{l}{-J\sigma _l^x \sigma _{l+1}^x - B \sigma _l^z} \end{aligned}$$
(11)

where J is a coupling parameter, B is the external magnetic field strength, and \(\sigma _l^w\), \(w\in \{x,y,z\}\), are the Pauli operators at site l. The system undergoes a quantum phase transition at \(B/J=0.5\). At \(B \gg J\), the field along the z-direction dominates the correlations in the system; the ground state of the system is \(\left| \psi _g \right\rangle = \left| \dots \downarrow _{i-1}\downarrow _{i}\downarrow _{i+1} \dots \right\rangle \), fully-polarised along the z-axis. On the other hand, when \(B \ll J\), the field is much weaker than the spin-spin coupling along the x-direction. There are then two degenerate ground states, with all spins either parallel or anti-parallel with the x-axis, \(\left| \psi _g \right\rangle =\left| \dots \rightarrow _{i-1}\rightarrow _{i}\rightarrow _{i+1}\dots \right\rangle \) or \(\left| \dots \leftarrow _{i-1}\leftarrow _{i}\leftarrow _{i+1}\dots \right\rangle \).
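As a point of reference, Eq. (11) can be checked for a handful of spins by exact diagonalisation. The toy sketch below (open boundaries, dense matrices) is not the tensor-network approach used for our \(N=500\) results; it merely illustrates the Hamiltonian and its ground state for a small chain.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
id2 = np.eye(2)

def op_at(op, site, N):
    """Embed a single-site operator at `site` in an N-spin chain."""
    return reduce(np.kron, [op if l == site else id2 for l in range(N)])

def ising_hamiltonian(N, J, B):
    """Open-boundary, dense-matrix version of Eq. (11) for a small chain."""
    H = np.zeros((2**N, 2**N))
    for l in range(N - 1):
        H -= J * op_at(sx, l, N) @ op_at(sx, l + 1, N)   # -J sigma^x_l sigma^x_{l+1}
    for l in range(N):
        H -= B * op_at(sz, l, N)                         # -B sigma^z_l
    return H

# Ground state of a 10-spin chain (exact diagonalisation; small N only).
H = ising_hamiltonian(10, J=1.0, B=0.5)
evals, evecs = np.linalg.eigh(H)
psi_ground = evecs[:, 0]
```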

We investigate the structure of the model at a range of truncated Markov memory orders \(L\in \{ 1, 3, 5, 7, 9 \}\), with \(N=500\). The causal states are then determined by the L rightmost spins of the past, as per Eq. (2). We find that each length-L spin configuration belongs to a unique causal state. To ensure that there is no spurious splitting of causal states, we verify that the conditional probability distributions of distinct configurations differ from each other by more than \(\mathcal {O}\left( 10^{-12}\right) \), while the ground states \(\left| \psi _g \right\rangle \) are accurate to \(\mathcal {O}\left( 10^{-14}\right) \); i.e., the error in the ground state is significantly smaller than the distance between conditional distributions of the spin configurations.

Using the framework above, we study the structural complexity of the quantum Ising chain through its measurement sequences as obtained from \(\sigma _\theta \)-basis measurements, where

$$\begin{aligned} \sigma _\theta = \begin{pmatrix} \cos {\theta } & \sin {\theta } \\ \sin {\theta } & -\cos {\theta } \end{pmatrix}, \end{aligned}$$
(12)

with \(\theta \in [0,{\pi }/{2}]\) the angle measured from the z-axis of the Bloch sphere. Angles \(\theta =0\) and \(\theta =\pi /2\) correspond to the z- and x-axes respectively. Intuitively, \(\sigma _z\) measurement in the \(B \gg J\) regime will result in a highly-ordered stochastic process, while \(\sigma _x\) will return a near-random process. Conversely in the \(B \ll J\) regime, measurement sequences along \(\sigma _z\) will yield a near-random stochastic process, while \(\sigma _x\) will result in a highly-ordered process.
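The measurement basis of Eq. (12) is straightforward to construct numerically; the sketch below builds \(\sigma _\theta \) and its eigenbasis, whose columns can be fed into a sequence-probability calculation like the one sketched earlier. It is an illustrative snippet, not part of the numerical pipeline described in the Appendix.

```python
import numpy as np

def sigma_theta(theta):
    """Measurement operator of Eq. (12), interpolating between sigma_z
    (theta = 0) and sigma_x (theta = pi/2)."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def measurement_basis(theta):
    """Columns are the eigenstates |r> of sigma_theta; eigenvalues are +/-1."""
    evals, evecs = np.linalg.eigh(sigma_theta(theta))
    return evals, evecs

# theta = 0 recovers the z-basis, theta = pi/2 the x-basis.
vals, basis = measurement_basis(np.pi / 2)
```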

Fig. 2

Comparison between E, \(C_q\), \(C_\mu \) in the a \(\sigma _{z}\), and b \(\sigma _{x}\) measurement bases, and \(S_{\frac{1}{2}}\) of the one-dimensional quantum Ising system for \(L=9\). The vertical dashed line demarcates the phase transition of the system

Figure 2 compares \(C_\mu \), \(C_q\), E, and \(S_{\frac{1}{2}}\) for measurements of \(\sigma _x\) and \(\sigma _z\) at different B/J, where we observe interesting differences between these measures of structure and complexity. Firstly, in all measurement bases studied, \(C_q\), E, and \(S_{\frac{1}{2}}\) reach maximal values close to the phase transition \(B/J\approx 0.5\). In contrast, \(C_\mu \) exhibits its largest gradient near the phase transition.

We also observe that \(S_{\frac{1}{2}}> C_q\) for all measurement bases. This is because projecting a quantum state onto a specific basis effectively destroys information about other measurement bases, while \(S_{\frac{1}{2}}\) takes into account the full information contained in the quantum state. The relation \(S_{\frac{1}{2}}> C_q\) shows that simulating a quantum system measured in a specific basis can require less information than the quantum correlations present in the system; if we do not require replication of all measurement bases, the state is not the most efficient simulator of itself.

Figure 2 also shows that when sequences from measurement outcomes of the quantum Ising chain are more ordered (\(B\gg J\) for \(\sigma _{z}\)-basis, \(B\ll J\) for \(\sigma _{x}\)-basis), the corresponding values of \(C_\mu \), \(C_q\), and E are lower. In a highly-ordered stochastic process, the corresponding \(\varepsilon \)-machine consists of a single dominant causal state that is occupied with very high probability, and other causal states arise with very low probabilities. Thus the resulting \(\varepsilon \)-machine requires little information to be stored to accurately simulate the corresponding stochastic process. Our quantum model behaves similarly in this regime, hence \(C_q\) mirrors \(C_\mu \). Further, in this parameter regime the more ordered the sequences are, the less information is carried forward from the left half to the right half of the system, resulting in low values of E.

On the other hand, when the observation sequences are near-random (\(B\gg J\) for \(\sigma _{x}\)-basis, \(B\ll J\) for \(\sigma _{z}\)-basis), \(C_\mu \) exhibits drastically different qualitative behaviour compared to \(C_q\) and E, as seen in Fig. 2. Both \(C_q\) and E are lower when the sequences appear more random, unlike \(C_\mu \), which saturates in this regime. This is because in the near-random limit the past configurations have different-yet-strongly-overlapping conditional future probability distributions, and thus they are mapped into different causal states. As a classical model, the \(\varepsilon \)-machine can only store information in distinguishable states, despite multiple causal states having significantly overlapping conditional future statistics; consequently, \(C_\mu \) is high in this regime. In contrast, quantum models have the ability to store information in non-orthogonal states. Thus, causal states with highly-overlapping future conditional probability distributions will be encoded into highly-overlapping quantum states, resulting in low \(C_q\) in the near-random regime.

Fig. 3

\(C_q\) plotted against B/J for sequences of measurements of the quantum Ising chain along a \(\sigma _{z}\), b \(\sigma _{x}\). The vertical dashed lines demarcate the phase transition of the system

Fig. 4

\(C_\mu \) plotted against B/J for sequences of measurements of the quantum Ising chain along a \(\sigma _{z}\), b \(\sigma _{x}\). The vertical dashed lines demarcate the phase transition of the system

Fig. 5

E plotted against B/J for sequences of measurements of the quantum Ising chain along a \(\sigma _{z}\), b \(\sigma _{x}\). The vertical dashed lines demarcate the phase transition of the system

Fig. 6

Plot of how the behaviours of \(C_q\), \(C_\mu \), and E change with respect to \(\theta \) (rad)

We also observe differing behaviour of \(C_\mu \), \(C_q\), and E with respect to the truncated Markov memory order, as illustrated in Figs. 3, 4, and 5. As L is increased, \(C_q\) also increases, but at a decreasing rate, indicating that the value of \(C_q\) converges as larger L is considered (Fig. 3). This is consistent with the decay of correlation strength between spins as the distance between them increases: at high L, increasing the Markov memory order further adds little predictive information. The classical \(\varepsilon \)-machine, with access only to orthogonal memory states, cannot exploit this diminishing return; as L increases, so too does the number of causal states, resulting in a higher \(C_\mu \), as illustrated by Fig. 4.

We see that the behaviour of \(C_\mu \) with respect to B/J changes as we rotate the measurement angle \(\theta \) from \(\sigma _{z}\) to \(\sigma _{x}\) (Fig. 6b). As \(\theta \) changes from 0 to \(\pi /2\), the measurement outcome sequences sweep between near-random and near-deterministic in the region \(B\ll J\), and between near-deterministic and near-random in the region \(B\gg J\). Figure 6a shows the counterpart behaviour of \(C_q\) for the different measurement angles. As the measurement basis rotates from \(\sigma _{z}\) to \(\sigma _{x}\), the peak value of \(C_q\) increases. In the parameter region that is neither highly ordered nor near-random, our quantum model thus stores less information when the measurement basis is closer to the z-axis. This captures the underlying structure of the quantum Ising chain well, as measurement outcomes along the z-axis are less dependent on the past spins, because the inter-spin coupling of the system is along the x-axis.

3.4 Bose–Hubbard Chain

We now look at structure in the one-dimensional Bose–Hubbard model, which describes the physics of interacting spinless bosons on a lattice [3, 5, 6]. It is governed by:

$$\begin{aligned} \mathcal {H}_{BH} = -J\sum _l{\left( b^\dagger _l b_{l+1} + \mathrm {h.c.}\right) } +\frac{U}{2}\sum _l{n_l(n_l - 1)}, \end{aligned}$$
(13)

where \(b^\dagger _l\) and \(b_l\) are bosonic creation and annihilation operators and \(n_l=b^\dagger _l b_l\) is the number of bosons at site l. The variable J denotes the hopping amplitude, describing the kinetic energy of the bosons, and U is the on-site repulsive interaction strength. The filling factor \(\nu \) is defined as the average number of bosons per site; we consider a chain with \(\nu =1\). In our numerical calculations, we use \(N=300\) and truncated Markov memory orders of \(L\in \{1,2,3,4\}\). We also enforce that each site has a maximum occupation number \(n_{\text {max}}=4\)—higher values of occupation number have very low probability of occurrence (and thus have little impact on the state), yet demand significantly more computational power. In the ground state, when \(J\gg U\) all bosons occupy zero-momentum eigenstates, wherein they are completely delocalised across the lattice; this is the superfluid phase. On the other hand, when \(J\ll U\), the system is in the Mott insulator phase, where at integer filling factors \(\nu \), each site contains \(\nu \) highly-localised bosons. The one-dimensional Bose–Hubbard model undergoes a quantum phase transition at \(U/J\approx 3.1\) [6, 64] for the case of \(\nu =1\).
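To make the occupation-number cutoff concrete, the sketch below builds the truncated bosonic operators with \(n_{\text {max}}=4\) and a dense, open-boundary version of Eq. (13) for a very small chain. It is illustrative only; the \(N=300\) results below rely on the tensor-network methods of Sect. 3.2 at filling \(\nu =1\).

```python
import numpy as np
from functools import reduce

n_max = 4                      # local occupation cutoff used in the text
d = n_max + 1                  # truncated local Hilbert-space dimension
b = np.diag(np.sqrt(np.arange(1, d)), k=1)   # annihilation operator
bdag = b.T                     # creation operator (real matrices)
n_op = bdag @ b                # number operator, diag(0, 1, ..., n_max)

def op_at(op, site, N):
    """Embed a single-site operator at `site` in an N-site chain."""
    return reduce(np.kron, [op if l == site else np.eye(d) for l in range(N)])

def bose_hubbard_hamiltonian(N, J, U):
    """Open-boundary, dense-matrix version of Eq. (13) on a very small chain."""
    D = d**N
    H = np.zeros((D, D))
    for l in range(N - 1):
        hop = op_at(bdag, l, N) @ op_at(b, l + 1, N)
        H -= J * (hop + hop.T)                        # -J (b_l^dag b_{l+1} + h.c.)
    for l in range(N):
        nl = op_at(n_op, l, N)
        H += 0.5 * U * nl @ (nl - np.eye(D))          # (U/2) n_l (n_l - 1)
    return H

# Example: 3 sites; studying filling nu = 1 would mean restricting to the
# total-number sector, but here we simply build the full (125 x 125) matrix.
H = bose_hubbard_hamiltonian(3, J=1.0, U=3.1)
```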

We calculate the structural complexity measures for measurement outcome sequences of the Bose-Hubbard chain in the number basis, wherein n is measured sequentially on every site. Figure 7 shows the behaviour of \(C_\mu \), \(C_q\), and E for these sequences, as well as \(S_{\frac{1}{2}}\). We observe that E peaks close to the phase transition, while \(C_q\) peaks at a smaller value of U/J. This indicates that E may identify the phase transition, while \(C_q\) and \(C_\mu \) do not. This is in contrast to the quantum Ising chain, where both E and \(C_q\) (in multiple measurement bases) reach a maximum value in the vicinity of the phase transition.

In the superfluid phase \((J\gg U)\), \(C_\mu \) behaves very differently to \(C_q\) and E: \(C_\mu \) increases as \(U/J \rightarrow 0\), while \(C_q\) and E decrease. This is because the bosons are delocalised, leading to near-random measurement sequences for the site occupations. Hence, the models occupy all available (quantum) causal states with almost equal probabilities. As with the quantum Ising chain, however, \(C_\mu \) and \(C_q\) behave very differently from one another, due to the (non-)orthogonality of the (quantum) memory states.

On the other hand, in the Mott insulator phase \((J\ll U)\), \(C_\mu \), \(C_q\), and E decrease as U/J increases. In this regime, the bosons in the system are highly localised, and so measurement in the n-basis yields a highly-ordered stochastic process since the number of bosons at each site tends to \(\nu \) as \(U/J \rightarrow \infty \). Similar to the analogous limit in the quantum Ising chain, measurement sequences that are highly-ordered have a single causal state that manifests with very high probability, while other causal states all occur with low probabilities. This results in low values of both \(C_\mu \) and \(C_q\).

Fig. 7

E, \(C_q\) and \(C_\mu \) at \(L=4\), and \(S_{\frac{1}{2}}\), plotted against U/J, for sequences of measurements of n in the Bose-Hubbard chain. The vertical dashed line demarcates the phase transition of the system

Figure 7 also shows that \(S_{\frac{1}{2}}\) is larger than \(C_q\) for the measurement outcome sequences; as with the quantum Ising chain, this is because \(S_{\frac{1}{2}}\) quantifies the full quantum correlations between the two halves of the system, while \(C_q\) results from projecting the quantum state onto one specific basis, discarding information about all other bases. Notably, in the superfluid parameter region \(S_{\frac{1}{2}}\) behaves very differently to \(C_q\) and E: \(S_{\frac{1}{2}}\) increases, while \(C_q\) and E decrease. This is because projecting the state onto the n-basis removes the structure in the conjugate (momentum) basis that would otherwise manifest as large half-chain entanglement.

Finally, Fig. 8 shows how \(C_\mu \) and \(C_q\) scale with increasing L. As with the quantum Ising chain, we see that \(C_q\) displays signs of convergence as L is increased, while \(C_\mu \) does not. Again, this is due to the decay in the strength of correlations between site occupations with increasing distance.

4 Discussion

Our results show that classical and quantum measures of structural complexity can exhibit drastically different qualitative behaviour when applied to sequences generated by measurement outcomes of quantum systems. In particular, it is evident that the measures interpret near-randomness very differently; quantum models are typically able to capture the predictive features in near-random sequences without storing a large amount of information about the past, in contrast to corresponding minimal classical models. The quantum measures also appear to signal proximity to a phase transition. For a given Hamiltonian, the most informative (i.e., highest complexity) basis appears to vary with the Hamiltonian parameters; by considering maximisations of \(C_q\) over all measurement bases we may obtain a stronger indicator of phase transitions and other related phenomena—we leave this question for future work. Another interesting open question in this direction is the study of universality classes of quantum systems—can quantum systems be categorised into universality classes according to their structural complexities? Members of the same universality class have identical critical behaviour despite possibly having radically diverse microscopic behaviour.

Fig. 8

Scaling with L of a \(C_\mu \) and b \(C_q\) against U/J for sequences of measurements of n in the Bose-Hubbard chain. The vertical dashed lines demarcate the phase transition of the system

Moving forward, the transducer framework of computational mechanics [39, 65, 66] provides a natural next step for this work, allowing quantum systems to be studied as input–output processes. In such processes, the choice of the measurement basis would form the input, and the resulting measurement sequence the output, allowing the fully-quantum nature of non-commuting measurements to be considered. It would be interesting to make this extension and study the dynamics of quantum processes by measuring them in different bases at different timesteps, capturing the structure and complexity within an evolving quantum system. An ultimate goal of quantum computational mechanics is to construct quantum causal models that can simulate any quantum stochastic process without restriction on the choice of measurement bases; the results in this manuscript serve as a crucial first step in this direction.