Introduction

Biological systems frequently contain repetitions of similar structures. One of the most striking examples is the mammalian neocortex. The typical neocortex is composed of six layers, each containing specific neuron subtypes. Although similar structures have been observed in most of the neocortex, each subregion carries out a different type of information processing (Mountcastle 1997; Felleman and Van Essen 1991). Moreover, the existence of certain degrees of anatomical and functional hierarchies among subregions is widely known (Felleman and Van Essen 1991). Anatomical studies (Rockland and Pandya 1979; Felleman and Van Essen 1991) revealed that ascending (or feedforward) and descending (or feedback) pathways have different topologies: feedforward connections mainly originate in layer III neurons and terminate in layer IV, while feedback connections originate in layer V and VI neurons and terminate in layers I/II and VI. Felleman and Van Essen (1991) used this layer specificity of projections as a criterion for identifying hierarchical relationships between subregions.

It is hypothesized that such heterogeneity of connections may contribute to functional differentiation between higher and lower regions. Nevertheless, it remains unclear how these heterogeneous connections relate to functional differences among subregions.

Complex dynamic behaviors of interacting elementary systems have been intensively studied in the field of nonlinear dynamics (e.g., Kuramoto 1984; Kaneko and Tsuda 2001). In most cases, however, the types of connections between elementary systems are assumed to be uniform throughout the network. Even in the case of heterogeneous connections, studies have been limited to uniformly distributed random connections, or to random connections with Gaussian distributions. Yet differences in dynamic behavior between subregions in the neocortex are expected to be induced by the asymmetry of the connections between subregions. Functional differentiation may thereby emerge.

In this paper, we investigate a mathematical mechanism of differentiation of dynamic behaviors induced by asymmetric couplings, which we call hetero-interactions. For this purpose, we take a constructive approach in which we try to extract the essence of the dynamics by investigating simpler but typical dynamical systems. In particular, we study the origin of role differentiation into driver and receiver. Estimating the structure of communication among neuronal groups from multiple observed dynamic signals has become an active topic in the neurosciences. Coherent oscillation is one of the key mechanisms assumed to subserve neuronal communication (Tass et al. 1998; Engel et al. 2001; Varela et al. 2001; Fries 2005; Rodriguez et al. 1999; Klausberger et al. 2003). In this paper, we utilize information-theoretic methods to investigate the phase relationship between two chaotic oscillators.

In section "Models and methods", we describe coupled chaotic oscillator models, and describe an information-theoretic method for analyzing information flow. In sections "Uni-directional connections" and "Bi-directional connections", we study the dynamics of uni-directionally coupled systems and bi-directionally connected systems, respectively. Section "Selective switching of external information" describes a selective switching of external signals. Section "Summary and discussion" presents a summary and discussion.

Models and methods

Heterogeneously interacting Rössler chaotic oscillators

As a first example, consider two interacting Rössler systems as a simple hetero-interactive system. Assume two chaotic oscillators whose states at time t are represented by three-dimensional vectors \(X_{j}(t) = (x_{j}(t), y_{j}(t), z_{j}(t))^{T}\) (j = 1, 2) (Fig. 1).

Fig. 1 A schematic diagram of heterogeneously-connected Rössler systems

These dynamics are given by

$$ \dot{X}_{j} = F(X_{j}) +K_{kj}(X_{k}-\bar{X} ), $$
(1)

where

$$ F(X_{j}) = \left( \begin{array}{c} -(y_{j}+z_{j}) \\ x_{j}+ay_{j} \\ b+z_{j}(x_{j}-c) \end{array} \right) $$
(2)

represents the dynamics of an uncoupled Rössler system, where a = 0.15, b = 0.2, and c = 10 are constants. In the interaction term, k = 2 for j = 1 and k = 1 for j = 2. The matrices \(K_{12}\) and \(K_{21}\) represent the feedforward (ascending) and feedback (descending) connections, respectively. To keep the time averages of the interaction terms close to zero, we introduce a vector \(\bar{X}\), the time average of the state vector under the uncoupled condition. To normalize the interaction terms we use the diagonal matrix \(S = \hbox{diag}(\sigma_{x}, \sigma_{y}, \sigma_{z})\), whose elements are the standard deviations of the respective variables in the uncoupled system. The feedforward connection matrix is set to \(K_{12} = k_{12} S C_{12} S^{-1}\), where \(k_{12}\) is a control parameter for the connection strength, and the (m, n) component of the matrix \(C_{12}\) is 1 if and only if there exists a connection from the n-th variable of system 1 to the m-th variable of system 2, and 0 otherwise. The matrix \(K_{21}\) is determined in a similar way with the control parameter \(k_{21}\). For simplicity, we study two chaotic oscillators that have at most one connection in each direction. Thus, only one element is nonzero in \(C_{12}\) and in \(C_{21}\).

For numerical integration of Eq. (1), we used the fourth-order Runge–Kutta method with a time step of 0.01, which is small enough to yield robust results and has been used in previous studies of coupled Rössler oscillators (e.g., Wilmer et al. 2010).
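To make the setup concrete, the following is a minimal Python/NumPy sketch of Eqs. (1) and (2) and of the coupling matrices; the helper names (rossler, coupling_matrix, rk4_step) are our own illustrative choices, and the statistics sigma and Xbar are assumed to have been measured beforehand from a long uncoupled run.

```python
import numpy as np

a, b, c = 0.15, 0.2, 10.0                      # Rössler parameters of Eq. (2)

def rossler(X):
    """Uncoupled Rössler vector field F(X) of Eq. (2)."""
    x, y, z = X
    return np.array([-(y + z), x + a * y, b + z * (x - c)])

def coupling_matrix(k, C, sigma):
    """Normalized coupling matrix K = k S C S^{-1}; sigma holds the standard
    deviations of (x, y, z) measured in the uncoupled system."""
    S = np.diag(sigma)
    return k * S @ C @ np.linalg.inv(S)

def coupled_rhs(X1, X2, K12, K21, Xbar):
    """Right-hand side of Eq. (1); Xbar is the uncoupled time average."""
    dX1 = rossler(X1) + K21 @ (X2 - Xbar)      # j = 1 receives the feedback term
    dX2 = rossler(X2) + K12 @ (X1 - Xbar)      # j = 2 receives the feedforward term
    return dX1, dX2

def rk4_step(X1, X2, K12, K21, Xbar, dt=0.01):
    """One fourth-order Runge-Kutta step with the time step 0.01 used in the text."""
    f = lambda u, v: coupled_rhs(u, v, K12, K21, Xbar)
    k1a, k1b = f(X1, X2)
    k2a, k2b = f(X1 + 0.5 * dt * k1a, X2 + 0.5 * dt * k1b)
    k3a, k3b = f(X1 + 0.5 * dt * k2a, X2 + 0.5 * dt * k2b)
    k4a, k4b = f(X1 + dt * k3a, X2 + dt * k3b)
    return (X1 + dt / 6.0 * (k1a + 2 * k2a + 2 * k3a + k4a),
            X2 + dt / 6.0 * (k1b + 2 * k2b + 2 * k3b + k4b))
```

For instance, the feedforward connection \(x_{1}\rightarrow y_{2}\) studied below would correspond to a matrix C12 whose only nonzero entry is C12[1, 0] = 1.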

Wavelet transform

The phase relation between two oscillatory systems provides useful information about the dynamic relationship between interacting systems (Tass et al. 1998; Rodriguez et al. 1999; Lachaux et al. 2000; Rosenblum and Pikovsky 2001; Mizuhara et al. 2005). Here we determine the instantaneous phases \(\theta_{j}(t)\) (j = 1, 2) of the chaotic oscillators by means of a wavelet transform with a complex Morlet wavelet (Lachaux et al. 2000). The wavelet coefficient as a function of time t and frequency f is defined as follows:

$$ W_{j}(t,f) = \int\limits_{-\infty}^{+\infty} \varPsi_{t,f}(s)x_{j}(s) ds, $$

where \(\varPsi_{t,f}(s)\) is the complex conjugate of the Morlet wavelet defined at frequency f and time t by:

$$ \varPsi_{t,f}(s) = \sqrt{\frac{2}{\pi \sigma_{w}^{2}}} \exp(i2\pi f (s-t)) \exp\left( - \frac{(s-t)^{2}}{2\sigma_{w}^{2}}\right), $$
(3)

where \(i=\sqrt{-1}\) and \(\sigma_{w}\) determines the half width of the Gaussian function. We use \(\sigma_{w} = n_{c}/(2\pi f)\) with \(n_{c} = 8\), which is approximately the number of waves appearing in the function defined by Eq. (3) (Lachaux et al. 2000). The parameter \(n_{c}\) also determines the frequency resolution of the analysis, as it provides the width of the frequency interval for which phases are measured. The instantaneous phase \(\theta_{j}(t) \in S^{1}\) and the amplitude \(A_{j}(t) \geq 0\) are defined as the argument and the amplitude of the complex value \(W_{j}(t, f)\), respectively:

$$ A_{j}(t)\exp(i\theta_{j}(t)) \equiv W_{j}(t,f). $$
(4)

The phase difference between two chaotic oscillators is defined as

$$ \phi(t) \equiv \theta_{2}(t)-\theta_{1}(t) \quad \quad (\hbox{mod}\quad 2\pi). $$
(5)

To measure the degree of phase locking between two systems, independent of their amplitudes, we use a phase-locking value (PLV) r, defined as

$$ r \equiv \left| \langle e^{i \phi(t)} \rangle_{t} \right|, $$
(6)

where \(\langle \cdot \rangle_{t}\) denotes the time average over the entire time series (Lachaux et al. 2000). The mean phase difference is defined by

$$ \langle \phi \rangle \equiv \arg \langle e^{i \phi(t)} \rangle_{t}. $$
(7)

We use f = 0.16, but the results in this paper are not sensitive to f as long as its deviation from the intrinsic frequency of the chaotic oscillations is not large. Several methods exist for determining the instantaneous phase from a time series, including the Hilbert transform, the wavelet transform, and a simple method using the angle in the (x, y) plane. Among them, the Hilbert transform and the wavelet transform are often used in neuroscience because of their usefulness for analyzing noisy experimental data. Le Van Quyen et al. (2001) compared the two methods and concluded that the differences are minor and that the two methods are essentially equivalent. Consistent with Le Van Quyen et al. (2001), when we used the Hilbert transform instead of the wavelet transform to estimate the phase, we obtained essentially the same results as those shown in this paper.
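As an illustration of this procedure, a minimal sketch of the Morlet-wavelet phase extraction and the PLV computation might look as follows (the function names and the finite wavelet support of four standard deviations are our own choices, and edge effects near the boundaries of the time series are ignored):

```python
import numpy as np

def morlet_phase(x, dt, f=0.16, n_c=8):
    """Instantaneous phase and amplitude of signal x via convolution with a
    complex Morlet wavelet at frequency f (Eqs. 3-4), sigma_w = n_c / (2 pi f)."""
    sigma_w = n_c / (2.0 * np.pi * f)
    s = np.arange(-4 * sigma_w, 4 * sigma_w, dt)      # finite wavelet support
    psi = (np.sqrt(2.0 / (np.pi * sigma_w**2))
           * np.exp(1j * 2.0 * np.pi * f * s)
           * np.exp(-s**2 / (2.0 * sigma_w**2)))
    # correlation with the conjugate wavelet approximates W(t, f)
    W = np.convolve(x, np.conj(psi), mode="same") * dt
    return np.angle(W), np.abs(W)                     # theta(t), A(t)

def plv_and_mean_phase(theta1, theta2):
    """Phase-locking value r (Eq. 6) and mean phase difference (Eq. 7)."""
    z = np.exp(1j * (theta2 - theta1)).mean()
    return np.abs(z), np.angle(z)
```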

Transfer entropy

Information-theoretic approaches are widely used to analyze nonlinear systems (Shaw 1981; Fraser and Swinney 1986; Kaneko 1986; Matsumoto and Tsuda 1985, 1987, 1988; Schreiber 2000). Mutual information is often used as a measure of interdependency between coupled systems (Kaneko 1986; Matsumoto and Tsuda 1987, 1988). Due to its symmetric nature, mutual information is not always sufficient to determine the direction of information flow between systems, although the quantity of information flowing from a certain state to other states can be estimated by introducing time-dependent mutual information. Schreiber (2000) extended the concept of mutual information to define a measure called transfer entropy (TE) that captures the amount of directed information exchanged between two time series.

In this paper, the information flow between two phase variables is calculated using TE. The phase variables are first discretized into one of 64 bins of uniform size. We use the notation \(\theta_{1}^{l}(t)\equiv (\theta_{1}(t),\ldots, \theta_{1}(t-(l-1)\tau))\) and \(\theta_{2}^{m}(t)\equiv (\theta_{2}(t),\ldots, \theta_{2}(t-(m-1)\tau))\) for vectors of dimensions l and m, respectively, made by delay reconstruction with the time step τ. Note that \(\theta_{1}^{l}(t)\) and \(\theta_{2}^{m}(t)\) have \(64^{l}\) and \(64^{m}\) distinct states, respectively. Let \(p_{ijk}(\theta_{1}(t+\tau), \theta_{1}^{l}(t), \theta_{2}^{m}(t))\) be the joint probability that \(\theta_{1}(t+\tau)\) lies in the i-th state, \(\theta_{1}^{l}(t)\) in the j-th state, and \(\theta_{2}^{m}(t)\) in the k-th state. Let \(p_{jk}(\theta_{1}^{l}(t), \theta_{2}^{m}(t))\) be the marginal probability distribution of \(\theta_{1}^{l}(t)\) and \(\theta_{2}^{m}(t)\), and \(p_{ijk}(\theta_{1}(t+\tau)|\theta_{1}^{l}(t), \theta_{2}^{m}(t))\equiv p_{ijk}(\theta_{1}(t+\tau), \theta_{1}^{l}(t), \theta_{2}^{m}(t))/p_{jk}(\theta_{1}^{l}(t), \theta_{2}^{m}(t))\) be the conditional probability. Also, let \(p_{ij}(\theta_{1}(t+\tau)|\theta_{1}^{l}(t))\) be the similarly defined conditional probability. TE from \(\theta_{2}\) to \(\theta_{1}\) with time step τ is defined by a form of Kullback entropy:

$$ T_{2 \rightarrow 1}(\tau) \equiv \sum_{i,j,k} p_{ijk}( \theta_{1}(t+\tau), \theta^{l}_{1}(t), \theta^{m}_{2}(t)) \log_2 \frac{p_{ijk}(\theta_{1}(t+\tau)|\theta^{l}_{1}(t),\theta^{m}_{2}(t))} {p_{ij}(\theta_{1}(t+\tau)|\theta^{l}_{1}(t))}. $$
(8)

Using the conditional Shannon entropy \(H(\cdot|\cdot)\), TE is also represented by the difference of conditional Shannon entropies:

$$ T_{2 \rightarrow 1}(\tau) = H\left(\theta_{1}(t+\tau)| \theta^{l}_{1}(t)\right) - H\left(\theta_{1}(t+\tau)| \theta^{l}_{1}(t),\theta^{m}_{2}(t) \right), $$

where the first term on the right represents the uncertainty about the state transition of \(\theta_{1}\) given its current state, and the second term represents the uncertainty about the state transition of \(\theta_{1}\) given the current states of both itself and \(\theta_{2}\). If the transition probability of \(\theta_{1}\) does not depend on \(\theta_{2}\), information about the current state of \(\theta_{2}\) does not improve predictions about the future state of \(\theta_{1}\); the second term then has the same value as the first, and TE becomes zero. On the other hand, if the uncertainty decreases when information about \(\theta_{2}\) is added, the state of \(\theta_{2}\) is considered to be correlated with the state transition of \(\theta_{1}\). In this sense, TE quantifies the amount of directed information transfer.

Hereafter, we treat the case l = m = 1. It has been reported that the values of TE are sensitive to the choice of the time step τ, and that this dependence is not trivial, particularly for calculations from continuous signals (Kaiser and Schreiber 2002). Hence, we do not fix the value of τ, but treat TE as a function of τ.

Because numerical estimation of TE for continuous variables depends on the size of the partitions, we have to choose an appropriate range of partition sizes carefully when analyzing finite data. The theoretical basis and practical issues of numerically estimating TE via discretization of continuous variables are discussed in detail by Kaiser and Schreiber (2002). A simple method using a uniform bin size is computationally less demanding than more sophisticated methods such as adaptive bin sizing and kernel estimation, but it can become problematic, especially when the available amount of data is limited. In the present study, because our model is relatively simple and a sufficient amount of data is easy to acquire, we adopt a simple fixed-size bin method. If infinite data were available, the estimate in Eq. (8) would converge to its theoretical value as the bin width decreases. When the data size is finite, however, a finite-sample effect appears if too small a bin size is used. Thus, we have to choose an intermediate bin width around which a plateau region appears when TE is drawn as a function of bin width (Kaiser and Schreiber 2002). We graphed TE as a function of the logarithm of the bin size, and determined the bin size used for the analysis as 2π/64, because of the fairly flat slope of the graph around this value. In the numerical simulations, after discarding a transient trajectory (\(10^{4}\) time units), a time series of length \(8.0 \times 10^{5}\) time units with a sampling interval of 0.2 was used for calculating TE.
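A minimal sketch of this fixed-bin TE estimator for l = m = 1 is given below; the function name and the optional mask argument (used later for the state-dependent TE) are our own choices, not part of the original implementation.

```python
import numpy as np

def transfer_entropy(theta_src, theta_dst, lag, n_bins=64, mask=None):
    """Fixed-bin estimate of T_{src->dst}(tau) in Eq. (8) for l = m = 1.
    `lag` is tau expressed in samples; `mask`, if given, is a boolean array
    selecting the times t to include (used below for state-dependent TE)."""
    # discretize phases (assumed to lie in (-pi, pi]) into uniform bins
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    d = np.clip(np.digitize(theta_dst, edges) - 1, 0, n_bins - 1)  # target theta_1
    s = np.clip(np.digitize(theta_src, edges) - 1, 0, n_bins - 1)  # source theta_2

    future, past_d, past_s = d[lag:], d[:-lag], s[:-lag]
    if mask is not None:
        keep = mask[:-lag]
        future, past_d, past_s = future[keep], past_d[keep], past_s[keep]

    # joint histogram estimate of p(theta_1(t+tau), theta_1(t), theta_2(t))
    joint, _ = np.histogramdd(np.column_stack([future, past_d, past_s]),
                              bins=(n_bins,) * 3, range=[(0, n_bins)] * 3)
    p_ijk = joint / joint.sum()
    p_jk = p_ijk.sum(axis=0)          # p(theta_1(t), theta_2(t))
    p_ij = p_ijk.sum(axis=2)          # p(theta_1(t+tau), theta_1(t))
    p_j = p_ijk.sum(axis=(0, 2))      # p(theta_1(t))

    te = 0.0
    for i, j, k in zip(*np.nonzero(p_ijk)):
        te += p_ijk[i, j, k] * np.log2((p_ijk[i, j, k] / p_jk[j, k])
                                       / (p_ij[i, j] / p_j[j]))
    return te
```

The maxima \(\max_{\tau} T(\tau)\) reported in the following sections could then be obtained by evaluating such a function over a range of lags τ and taking the largest value.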

Uni-directional connections

To verify the reliability of the present analysis, we studied information transmission between two chaotic oscillators with a uni-directional connection. In the case that only a connection from \(x_{1}\) to \(y_{2}\) exists, which we denote by \(x_{1}\rightarrow y_{2}\), a small connection strength (\(k_{12}\) < 0.01) is enough to establish phase synchronization (r > 0.9) between the two chaotic oscillators. Figure 2 shows the PLV, the mean phase difference, and the frequency ratio as functions of the connection strength \(k_{12}\). When the connection strength \(k_{12}\) was increased slightly from zero, strong phase synchronization between the two oscillators with a phase difference of about π/2 took place (Fig. 2a, b). Figure 2c indicates that one-to-one frequency locking was established at small connection strengths. When the coupling is strong, 1:1 phase locking breaks down and the frequency of the driven oscillator (\(X_{2}\)) becomes larger than that of the driving one (\(X_{1}\)). In this case, so-called phase slips, i.e., rapid jumps of the relative phase by about 2π, occur.

Fig. 2 a The phase locking value, b the mean phase difference, and c the frequency ratio as functions of the connection strength in a uni-directionally connected system. For each \(k_{12}\), the average values of eight samples from different initial conditions are plotted. In a, b, c and all figures below, error bars indicate standard errors of means

Figure 3a shows typical trajectories of \(x_{1}(t)\) and \(x_{2}(t)\) in the case where \(k_{12}\) = 0.04. Due to the presence of chaotic dynamics, the distribution of the phase difference did not show a sharp delta-like peak but rather a Gaussian-like shape (Fig. 3b). We calculated TE between the two phase variables \(\theta_{1}\) and \(\theta_{2}\) (Fig. 3c). In the figure, the abscissa denotes \(\tau/\langle T\rangle\), which is the time step normalized by the mean period of a cycle \(\langle T\rangle \approx 6.04\). Transfer entropy from \(\theta_{1}\) to \(\theta_{2}\) was larger than in the opposite direction, regardless of the choice of time step. The value of TE in the opposite direction, along which there is no actual connection, was negligible (\(T_{1 \rightarrow 2}(\tau) > T_{2 \rightarrow 1}(\tau)\approx 0\)). Figure 3d shows the maximums of TE, \(\max\nolimits_{\tau} T_{1\rightarrow 2}(\tau)\) and \(\max\nolimits_{\tau} T_{2\rightarrow 1}(\tau)\), in both directions as functions of the feedforward connection strength \(k_{12}\). The direction of information flow between the two systems was correctly detected by TE whether 1:1 phase locking was present or absent. These maximums were found at a relatively large \(\tau/\langle T\rangle \approx 20\) (Fig. 3c), which may reflect the existence of slow dynamics of the phase variables.

Fig. 3 a Typical trajectories of \(x_{1}\) and \(x_{2}\), b distribution of the phase difference ϕ, and c transfer entropy (TE), in the case of \(k_{12}\) = 0.04. The abscissa is the normalized time step \(\tau/\langle T \rangle\), where \(\langle T \rangle\) is the mean period of the oscillation. d TE maximums as a function of connection strength \(k_{12}\)

Bi-directional connections

Feedback effects on phase synchronization

Next, we investigate the phase dynamics in the case of heterogeneous bi-directional connections \(z_{1}\rightarrow z_{2}\) and \(x_{2}\rightarrow x_{1}\). We denote this type of connection as zz–xx.

We fixed the strength of the feedforward connection \(k_{12}\) at 0.01 and changed the strength of the feedback connection \(k_{21}\). Figure 4a shows the distribution of the phase difference ϕ(t) as a function of \(k_{21}\).

Fig. 4 The case of bi-directional couplings. Each panel is depicted as a function of the strength of the feedback connection \(k_{21}\). a The distribution of phase differences. Brighter shading indicates higher density. b Phase locking values. c TE maximums

In the case that only the feedforward connection \(z_{1}\rightarrow z_{2}\) exists (\(k_{12}\) = 0.01 and \(k_{21}\) = 0, the left boundary of Fig. 4a), the degree of phase locking is not strong: the phase difference is broadly distributed on the unit circle, although the mean frequencies of the two oscillators are the same. With a small increase in the strength of the feedback connection, however, phase locking between the chaotic oscillators with a phase difference of about −π/2 emerges. On the other hand, when the feedback strength is strong enough (\(k_{21} \approx\) 0.005), nearly in-phase synchronization appears. Figure 4b shows the dependence of the PLV on \(k_{21}\). The PLV takes a local maximum value at \(k_{21} \approx\) 0.0019, then decreases, and again increases to nearly 1 at \(k_{21}\) = 0.005.

To quantify the information flow between the two systems and to estimate the driver-response relation, we calculated the TEs and their maximums, \(\max\nolimits_{\tau} T_{1\rightarrow 2}(\tau)\) and \(\max\nolimits_{\tau} T_{2\rightarrow 1}(\tau)\), which are depicted in Fig. 4c by crosses with solid lines and open circles with dashed lines, respectively. The figure shows that the increase in the strength of the feedback connection enhanced information transmission not only in the feedback direction but also in the feedforward direction in the parameter range \(k_{21}\) < 0.002. In this range, TE in the feedforward direction was larger than in the opposite direction. On the other hand, for stronger feedback (\(k_{21}\) > 0.003), the dominant direction of information flow was reversed (\(\max T_{2\rightarrow 1}(\tau) > \max T_{1\rightarrow 2}(\tau)\)).

Spontaneous switching of the phase difference and the direction of information flow

Intermediate connection strengths around \(k_{21}\) = 0.003 resulted in intermittent transitions of the phase difference ϕ(t) between two states located near 0 and −π/2 (Fig. 5a). As suggested by the scale of the horizontal axis in the figure, such switching dynamics have an extremely slow timescale compared with the inherent periods of the chaotic oscillations. The phase difference shows a bimodal distribution (Fig. 5b). Figure 5c shows TEs calculated from the whole time series, and indicates that information flows comparably in both directions.

Fig. 5 Intermittent transitions of the phase difference, accompanied by switching of the direction of information flow. Here, \(k_{12}\) = 0.01 and \(k_{21}\) = 0.003. a Trajectories of ϕ(t), b distribution of the phase difference, c TEs calculated from the whole time series, and d state-dependent TEs (see text for definitions)

To investigate whether the dominant direction of information transfer changes with the transitions of the phase difference, we calculated a state-dependent transfer entropy as follows. First, we divided the whole time series into two sub-time series, which are the collections of sub-intervals denoted by \(S^A=\{t|\phi_{th}< \phi(t)\leq \phi_{th}+\pi \pmod{2\pi}\}\) and \(S^B=\{t|-\pi < \phi(t)\leq \phi_{th} \hbox{ or } \phi_{th}+\pi < \phi(t) \leq \pi \}\), respectively, where \(\phi_{th} = -0.9\) is marked by a star in Fig. 5b. TEs were then calculated separately for each sub-time series, which we denote by \(T^{A}_{\cdot \rightarrow \cdot}(\tau)\) and \(T^{B}_{\cdot \rightarrow \cdot}(\tau)\), respectively (Fig. 5d). The state-dependent TE was calculated by replacing the probabilities in Eq. (8) with conditional probabilities. Although these conditional probabilities were estimated from data in discontinuous subsections, the simulation time was long enough that robust results were obtained. A striking feature was observed: the amount of information flow is asymmetric, and the dominant directions of information flow differ between the two sub-time series \(S^A\) and \(S^B\), that is, \(T^{A}_{2\rightarrow 1} > T^{A}_{1\rightarrow 2}\) and \(T^{B}_{2\rightarrow 1} < T^{B}_{1\rightarrow 2}\). These results suggest that even when the physical strength of a connection is unchanged, the effective direction of information flow and the driver-response relation between two systems can change, induced by a change in the phase relation.
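A sketch of this state-dependent estimate, reusing the transfer_entropy function sketched earlier, is shown below; the splitting logic and the function name are our own illustrative choices.

```python
import numpy as np

def state_dependent_te(theta1, theta2, phi, lag, phi_th=-0.9, n_bins=64):
    """State-dependent TE in both directions, estimated separately within the
    sub-time series S^A and S^B defined by the threshold phi_th (see text)."""
    # S^A: phi_th < phi(t) <= phi_th + pi (mod 2*pi); S^B: the complement
    shifted = np.mod(phi - phi_th, 2.0 * np.pi)
    in_A = (shifted > 0.0) & (shifted <= np.pi)
    out = {}
    for label, mask in (("A", in_A), ("B", ~in_A)):
        out[label] = {
            "1->2": transfer_entropy(theta1, theta2, lag, n_bins, mask=mask),
            "2->1": transfer_entropy(theta2, theta1, lag, n_bins, mask=mask),
        }
    return out
```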

When the connections were stronger, different types of behavior appeared. For example, in the case that \(k_{12}\) = 0.03 and \(k_{21}\) = 0.0045, the dynamics of the two oscillators fell into a periodic attractor, where estimation of the direction of information flow is difficult (Kaiser and Schreiber 2002). With a further increase in connection strength, a chaotic attractor appears again. In the case that \(k_{12}\) = 0.04, the dominant direction of information flow again switched as the feedback strength was increased; however, unlike Fig. 4a, a shift of the phase difference was not observed. A detailed analysis of the complex sensitivity of the dynamics to the parameters is beyond the scope of the present study.

Similar chaotic transitions of the phase difference and similar features of the information flows were also observed for other connection types, such as xy–xy connections (Fig. 6) and yx–yx connections (figure not shown). In all three cases (zz–xx, xy–xy, yx–yx), there is a common relation between the phase difference and the directions of information flow: information flow in a given direction is larger when the phase difference falls into the state that is observed when the connection in that direction is the stronger one. For example, in the zz–xx case, the direction \(1\rightarrow 2\) corresponds to the phase difference −π/2 and the direction \(2\rightarrow 1\) corresponds to the in-phase state. This correspondence between the phase difference, the connection strength, and the direction of information flow holds in all three cases.

Fig. 6 A typical transition behavior observed in the case of xy–xy type connections. \(k_{12} = k_{21}\) = 0.02. a Dynamics of ϕ(t), b the distribution of the phase differences, c TEs, and d state-dependent TEs. \(S^A\) = {t| − π < ϕ(t) ≤ 0} and \(S^B\) = {t|0 < ϕ(t) ≤ π}

This commonality of the switching phenomena in the coupled chaotic systems suggests that switching of information flow may generally be observed when a coupled system has several quasi-stable states (quasi-attractors or attractor ruins) and transitions among them, as in chaotic itinerancy (Tsuda 1992, 2001). A study of the detailed mechanism of such transitions in the present system will be published elsewhere.

Selective switching of external information

In the previous section, it was shown that the direction of information flow can change as a result of transitions of the phase difference between two chaotic oscillators. When we introduce external inputs and regard the system's behavior as outputs, it is interesting to investigate whether the transitions of the phase difference between the chaotic oscillators affect the input-output relations. In this section, we consider two independent external inputs and study the dependence of the input-output relations on the phase difference, regarding this dynamical system as an information channel.

We consider the two mutually coupled chaotic oscillators \(X_{1}\), \(X_{2}\) and two additional input chaotic oscillators \(Y_{1}\), \(Y_{2}\), whose connections are shown in Fig. 7a. The two independent systems \(Y_{1}\) and \(Y_{2}\) send input signals to systems \(X_{1}\) and \(X_{2}\), respectively, via the connection matrix \(K_{\rm in}\). The coupled dynamics of these systems are described by

$$ \dot{Y}_{j} = F(Y_{j}), $$
(9)
$$ \dot{X}_{j} = F(X_{j}) +K_{kj}(X_{k}-\bar{X} )+K_{{\rm in}}(Y_{j}-\bar{X} ), $$
(10)

where j = 1 or 2, and accordingly k = 2 or 1. In the following, we used an \(x \rightarrow y\) connection for both \(K_{12}\) and \(K_{21}\), and an \(x \rightarrow x\) connection for \(K_{\rm in}\).
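A minimal sketch of the extended system of Eqs. (9) and (10) is given below, assuming the rossler and coupling_matrix helpers from the earlier sketch; the connection patterns C_xy and C_xx encode the x → y and x → x connections named above, and the variable names are our own.

```python
import numpy as np

# C[m, n] = 1 encodes a connection from the n-th variable of the source to the
# m-th variable of the target, with (x, y, z) indexed as (0, 1, 2).
C_xy = np.zeros((3, 3)); C_xy[1, 0] = 1.0      # x -> y, used for K12 and K21
C_xx = np.zeros((3, 3)); C_xx[0, 0] = 1.0      # x -> x, used for K_in

def four_oscillator_rhs(X1, X2, Y1, Y2, K12, K21, K_in, Xbar):
    """Right-hand sides of Eqs. (9) and (10): the autonomous inputs Y1, Y2 drive
    the mutually coupled pair X1, X2 through K_in."""
    dY1, dY2 = rossler(Y1), rossler(Y2)                           # Eq. (9)
    dX1 = rossler(X1) + K21 @ (X2 - Xbar) + K_in @ (Y1 - Xbar)    # Eq. (10), j = 1
    dX2 = rossler(X2) + K12 @ (X1 - Xbar) + K_in @ (Y2 - Xbar)    # Eq. (10), j = 2
    return dX1, dX2, dY1, dY2
```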

Fig. 7 a A schematic diagram showing the connections among the four chaotic oscillators. b Trajectories of phase differences among the four chaotic oscillators. c Distribution of the phase differences between the chaotic oscillator \(X_{1}\) and the other systems. d Distribution of the phase differences between \(X_{1}\) and the other systems in sub-time series \(S^A\) (upper) and \(S^B\) (lower), respectively. e Phase locking values and f state-dependent TEs between chaotic oscillations, calculated from the whole time series (black), from sub-time series \(S^A\) (gray), and from \(S^B\) (white), respectively. Error bars indicate standard errors of means from 12 samples

Figure 7c shows the distributions of the phase differences between \(X_{1}\) and the other systems. The phase difference between \(X_{1}\) and \(X_{2}\), denoted by \(\phi_{X_{1}X_{2}}\), shows a bimodal distribution as in the previous section, while the phase differences between \(X_{1}\) and the two input systems show single peaks. As in the previous section, we split the time series into two sub-time series, defined by \(S^{A} = \{t| -\pi < \phi_{X_{1}X_{2}}(t) \le 0\}\) and \(S^{B} = \{t| 0 < \phi_{X_{1}X_{2}}(t) \le \pi\}\), and separately calculated several measures for each sub-time series. Distributions of the phase differences between \(X_{1}\) and the other systems in each sub-time series are shown in Fig. 7d. The lower panel of Fig. 7d shows that when the phase difference between \(X_{1}\) and \(X_{2}\) was around π/2, the phase of \(X_{1}\) tended to lock to that of \(Y_{1}\), which sends an input signal to \(X_{1}\) directly. On the other hand, the upper panel of Fig. 7d shows that when the phase difference between \(X_{1}\) and \(X_{2}\) was around −π/2, the phase of \(X_{1}\) was strongly correlated with that of \(Y_{2}\), which is not directly connected to \(X_{1}\), rather than with that of \(Y_{1}\). This tendency becomes evident when comparing the PLVs between the chaotic oscillators calculated from the sub-time series \(S^A\) and \(S^B\) (Fig. 7e). We also calculated the state-dependent TE separately for the sub-time series \(S^A\) and \(S^B\), the maximums of which are depicted in Fig. 7f. In the sub-time series \(S^B\), information transmission from \(X_{1}\) to \(X_{2}\) is larger than in the opposite direction, and \(X_{1}\) receives more information from \(Y_{1}\) than from \(Y_{2}\). In contrast, in \(S^A\), information transmission from \(X_{2}\) to \(X_{1}\) is larger than in the opposite direction, and \(X_{1}\) receives more information from \(Y_{2}\) than from \(Y_{1}\). As expected from the symmetry between \(X_{1}\) and \(X_{2}\), \(X_{2}\) also received more information from \(Y_{1}\) than from \(Y_{2}\) in \(S^B\), and vice versa in \(S^A\) (figure not shown to avoid redundancy). These results suggest that the two mutually connected chaotic oscillators dynamically select one of the two input systems to receive information from, by changing the phase difference between \(X_{1}\) and \(X_{2}\). Thus, the phase difference acts as a dynamic switch, through which one of the two inputs is selected as the output.
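A sketch of this state-dependent PLV analysis, reusing the plv_and_mean_phase helper sketched earlier, might look as follows; the function name and the phase arrays passed in are illustrative assumptions.

```python
import numpy as np

def state_dependent_plv(theta_x1, theta_x2, theta_y1, theta_y2):
    """PLV between X1 and each input system, computed within the sub-time series
    S^A = {t | -pi < phi_{X1 X2}(t) <= 0} and S^B = {t | 0 < phi_{X1 X2}(t) <= pi}."""
    phi = np.angle(np.exp(1j * (theta_x2 - theta_x1)))   # wrap phase difference to (-pi, pi]
    in_A = phi <= 0.0
    out = {}
    for label, keep in (("A", in_A), ("B", ~in_A)):
        out[label] = {
            "X1-Y1": plv_and_mean_phase(theta_x1[keep], theta_y1[keep])[0],
            "X1-Y2": plv_and_mean_phase(theta_x1[keep], theta_y2[keep])[0],
        }
    return out
```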

Summary and discussion

Using information-theoretic measures, we investigated the directions of information flow in heterogeneously connected Rössler systems. We observed that the direction of information transmission spontaneously switched in an intermittent manner, depending on the phase difference between the two systems. When two independent chaotic inputs were added to the connected system, the system dynamically selected one of the two inputs to receive, effectively through synchronization, depending on the internal phase difference between the two mutually connected chaotic oscillators. These results indicate that the effective direction of information transmission can change dynamically, induced by a switching of the phase difference between two systems, without modifying the strength of the connections.

Much attention has been paid to the phase dynamics of coupled chaotic oscillators (e.g., Rosenblum et al. 1996, 1997; Belykh et al. 2001; Osipov et al. 2003; Li and Zheng 2007; Wilmer et al. 2010; Ouchi et al. 2011). A number of methods have been proposed for estimating driver-response relationships from observed time series (Quiroga et al. 2000; Schreiber 2000; Rosenblum and Pikovsky 2001; Paluš and Vejmelka 2007; Wilmer et al. 2010). One of the novelties of our results lies in the finding that the driver-response relation switches in time without any change in the connection strength between the subsystems. These results shed light on the distinction between 'effective' or 'functional' connectivity and anatomical connectivity (Aertsen et al. 1989; Fujii et al. 1996).

The information-theoretic approach presented here will be useful for understanding dynamic changes in the structure of information flow in the brain. Dynamic switching of information flow induced by shifts of the phase difference may be related to the flexibility of the patterns of neuronal interactions. Related findings have been reported in the hippocampus (Klausberger et al. 2003): some distinct brain states are characterized by rhythmic oscillations such as slow waves and theta oscillations, and some types of interneurons dynamically change their firing activities, and hence their phase relations to their postsynaptic target neurons, in a state-dependent manner. Furthermore, Womelsdorf et al. (2007) recently discovered that mutual interaction among neuronal groups depends on the phase relations between rhythmic activities within the groups. Although the direction of information flow was not discussed in detail in their study, our present study and further studies along this line may provide a theoretical basis for such dependence of neuronal interactions on phase relations and its relation to heterogeneous neuronal structure. In this respect, studies of dynamic alternations of the driver-response relation and the direction of information flow among different neuronal populations are quite important and deserve further study.

In addition, spontaneous shifts between perceptual states have been reported in the brain, e.g., perceptual rivalry such as ambiguous figure perception and binocular rivalry (Inoue and Nakamoto 1994; Murata et al. 2003). Because the present study provides a simple model yielding spontaneous switching of information flow, it may be applicable to explaining such complex phenomena in future studies.