
Open systems are often oscillatory in nature because their dynamics are determined by a balance between energy inflow, outflow and usage which, in general, do not match. Their lack of isolation means that such systems often interact with each other. The strength, the direction and the functional relationships of the interactions can give rise to qualitatively distinct states, such as synchronization between the oscillators. The time-variability that characterizes the oscillators and their interactions can cause transitions between these qualitative states.

In order to investigate and study interactions, one usually obtains observable measurements of the oscillating dynamics in the form of time-series. Through analysis of these readout signals one can detect and quantify the interaction phenomena. In such an inverse approach, the source of a time-variability often cannot be uniquely determined. Additionally, the observable time-series can contain a stochastic, indeterministic component, arising for example from the influence of the environment on the dynamics, or from measurement noise.

For these reasons there is a need for a technique that can infer parameters, functional relationships and transitions between states of the interactions, starting from time-series observations. Due to the nature of the dynamics, the inference should be able to trace the time-variability of the intrinsic parameters and, at the same time, to deduce the effect of the noise. By offering such a complete and comprehensive description of the dynamics within a single formalism, the technique can be of wide applicability.

3.1 Phase Dynamics Decomposition

This section outlines the basic theoretical background for the implementation of the inferential framework. At the core of the technique lies the Bayesian inferential framework for stochastic dynamics, utilized to infer the time-evolution of the intrinsic parameters.

3.1.1 Main Concept

The methodological approach proposed in this study exploits the Bayesian inferential technique for inference of noisy time-varying phase dynamics. The parameters reconstructed for the base functions allow the interactions, and the respective states of the oscillators, to be determined. The method can be summarized as follows:


The starting point, i.e. the inputs for the inference, are multivariate phase time-series that encapsulate the dynamics of the interacting oscillators. The actual observable time-series represent instantaneous phases from the measured state signals, pre-estimated using appropriate phase detection methods (e.g. the Hilbert transform, the angle variable, or the wavelet synchrosqueezed transform).

Decomposition of the phase dynamics embedded within the Bayesian framework is accomplished through the use of periodic base functions, represented in the form of a finite Fourier series. The probabilistic apparatus of Bayesian theory enables the parameters’ distribution to be inferred. Furthermore, the Bayesian probability lying at the core of the method is itself time-dependent, with the prior probability acting as a time-dependent informational process. The outcome of the inference, i.e. the time-varying parameters, is then employed to estimate, quantify and describe the underlying oscillatory interactions. By reconstructing the dynamics in terms of a set of base functions, we evaluate the probability that the oscillators are driven by a set of equations which are intrinsically synchronized, thus distinguishing phase-slips of dynamical origin from those attributable to noise.

Estimation of the coupling is directly linked to the parametrization of the base functions: for oscillators which are similar enough to share the same base functions, comparison of the parameters is sufficient to evaluate which oscillator drives which. Examination of the interacting base functions as a group can reveal the functional relationship that describes the interactions among the oscillators.

3.1.2 Base Functions

When two noisy, \(N\)-dimensional, self-sustained oscillators interact weakly [1], their motion can be described by their phase dynamics:

$$\begin{aligned} \dot{\phi }_i= \omega _i + f_i(\phi _i) + g_i(\phi _i,\phi _j) + \xi _i(t), \end{aligned}$$
(3.1)

leaving all other coordinates expressed as functions of the phase: \(\mathbf{r_i} \equiv \mathbf{r_i} (\phi _i)\) [2]. The constant terms \(\omega _i\) represent the angular frequencies of the oscillations, the functions \(f_i(\phi _i)\) describe the inner oscillating dynamics, while the functions \(g_i(\phi _i,\phi _j)\) characterize the dynamics of the interactions between the oscillators. (The latter functions \(g_i(\phi _i,\phi _j)\) are often referred to as coupling functions.) \(\mathbf{\xi }\) is a two-dimensional spatially correlated noise, usually assumed to be Gaussian and white: \(\langle \xi _i(t) \xi _j(\tau )\rangle = \delta (t-\tau ) E_{\textit{ij}}\). Reliable evaluation of the interaction phenomena must rely on precise inference of \(f_i\) and \(g_i\) and of the noise matrix \(E_{\textit{ij}}\). The periodic nature of the systems suggests periodic base functions, hence the use of Fourier terms for the decomposition:

$$\begin{aligned} f_i(\phi _i)&= \sum _{k=-\infty }^ \infty \tilde{c}_{i,2k} \sin (k\phi _i) + \tilde{c}_{i,2k+1}\cos (k\phi _{i}) \nonumber \\ g_i(\phi _i,\phi _j)&= \sum _{s=-\infty }^\infty \sum _{r=-\infty }^\infty \tilde{c}_{i;r,s}\, e^{i r \phi _i} e^{i s \phi _j}. \end{aligned}$$
(3.2)

The inference of an underlying phase model through the use of Fourier series has formed the functional basis of several techniques for inferring the nature of phase-resetting curves and interactions, viz. the structure of networks, or for synchronization prediction [3–8]. However, these techniques inferred neither the noise dynamics nor the parameters characterising the noise.

It might seem natural at this point to consider the phase difference of the two oscillators, as in the case of synchronization determination. But, due to the need to extract as much information as possible from the whole dynamical space, the two dynamical fields \(\phi _1\) and \(\phi _2\) are modeled separately.

Assuming that the dynamics are adequately described by a finite number \(K\) of Fourier terms, one can rewrite the phase dynamics of (3.1) as a finite sum of base functions:

$$\begin{aligned} \dot{\phi }_l= \sum _{k=-K}^{K} c^{(l)}_k \, \Phi _{l,k}(\phi _1,\phi _2) + \xi _l(t), \end{aligned}$$
(3.3)

where \(l=1,2\), \(\Phi _{1,0}=\Phi _{2,0}=1\), \(c^{(l)}_0=\omega _l\), and the other \(\Phi _{l,k}\) and \(c^{(l)}_k\) are the \(K\) most important Fourier components.
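
To make the decomposition concrete, the sketch below (a minimal illustration in Python, not the thesis code; the term ordering and the truncation order `K_ORDER` are our own assumptions) evaluates one possible finite set of base functions \(\Phi _{l,k}(\phi _1,\phi _2)\) at a single time point, with \(\Phi _{l,0}=1\) and sine/cosine pairs for the combinations \(n\phi _1+m\phi _2\):

```python
import numpy as np

K_ORDER = 2  # highest Fourier order retained (an illustrative choice)

def base_functions(phi1, phi2):
    """Evaluate a finite set of base functions Phi_{l,k}(phi1, phi2)."""
    cols = [1.0]  # Phi_{l,0} = 1 carries the natural frequency omega_l
    for n in range(0, K_ORDER + 1):
        for m in range(-K_ORDER, K_ORDER + 1):
            if n == 0 and m <= 0:
                continue  # skip the constant term and the redundant (-n,-m) pairs
            arg = n * phi1 + m * phi2
            cols.extend([np.sin(arg), np.cos(arg)])
    return np.array(cols)
```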

It is important to note that the use of Fourier series for the phase dynamics is a general and model-independent decomposition. This follows from the fact that the phase inputs used are monotonically increasing with time, regardless of the dimension and the complexity of the signals. The phase \(\phi _i(t)\) possesses sufficient information for the measures required to be inferred: synchronization, directionality and time-varying phase dynamics. If one were to decompose the oscillatory interactions in state space instead, the dynamics would have to be inferred using specific, non-general and model-dependent (e.g. polynomial) base functions. The use of state base functions is discussed in detail toward the end of this chapter in Sect. 3.7.

3.1.3 Bayesian Inference

The following outlines a general inferential framework for stochastic dynamical processes. An \(M\)-dimensional time-series of observational data \(\mathcal{X} = \lbrace \mathbf y _n \equiv \mathbf y (t_{n}) \rbrace \), defined over the time-grid \(t_n=nh\), is provided. It is assumed that an underlying dynamic exists, described by an \(L\)-dimensional (\(L\ge M\)) stochastic process \({\varvec{\phi }}(t)\). The underlying dynamics can be described by a set of \(L\)-dimensional stochastic differential equations of the form:

$$\begin{aligned} \varvec{ \dot{\phi }}(t) = \mathbf f ({\varvec{\phi }}|\mathbf c ) + \mathbf{z }(t), \end{aligned}$$
(3.4)

where \(\mathbf c \) is a set of parameters embedded in the dynamical field \(\mathbf f \), and \(\mathbf z (t)\) is considered to be an \(L\)-dimensional white Gaussian noise process. It is assumed that the measurement noise is negligible and that a unique relationship exists: \(y(t)=\phi (t) \, \forall t\), i.e. the readout data is also the dynamical variable. A Bayesian inference technique that includes inference of measurement noise, together with detailed derivations of a similar inferential framework, is discussed in [9–12].

The fundamental question for the inference is: “given the readout data \(\mathcal X \), what information can one obtain about the functions \(\mathbf f \), about their parameters \(\mathbf c \) and about the noisy processes \(\mathbf z \)?”.

Due to the stochastic nature of the dynamics, the process of information extraction involves the building of theoretical models that cannot be verified directly, but whose probability can be estimated. For these reasons, one can employ Bayesian probability—an approach in statistical inference where the probability is intended as a subjective measure of belief in an event or in the state of a variable [13–16]. In particular, Bayes’ theorem states:

$$\begin{aligned} P_{\text {post}}(\mathcal M | \mathcal X ) = \frac{ P ( \mathcal X | \mathcal M ) \, P_{\text {prior}}(\mathcal M ) }{ \int {P ( \mathcal X | \mathcal M ) \, P_{\text {prior}}(\mathcal M ) d \mathcal M } } , \, \end{aligned}$$
(3.5)

where \( \mathcal M \) is a set of parameters on which the probabilities are assigned; \( \mathcal X \) represents the observational data. \( P_{\text {prior}}(\mathcal M ) \) is the prior probability of \( \mathcal M \): the measure of belief on the particular values of \( \mathcal M \) before the data \( \mathcal X \) was observed. \( P(\mathcal X |\mathcal M ) \) (also called the likelihood) is the conditional probability of observing \( \mathcal X \) given \( \mathcal M \). The desired result \( P_{\text {post}}(\mathcal M |\mathcal X ) \) is the posterior probability: the probability that the hypothesis (or parameters) are true, given the data and the previous state of belief about the hypothesis. Such a framework is ideal for applications with sequential data—the current posterior probability can act as a prior for the next sequence of data.

Thus within the Bayesian framework, the problem is reduced to the calculation of the likelihood function and the optimization of the posterior distribution with respect to \(\mathcal M = \left\{ \mathbf c , \mathbf E \right\} \, \).

In order to construct the expression for the likelihood function, an additional assumption is made: that the sampling scheme \(\{t_n=nh\}\) is sufficiently dense with respect to the dynamics, i.e. that the time interval \(h\) is small enough for the Euler midpoint approximation to be valid. If this is the case, Eq. (3.4) can be approximated by:

$$\begin{aligned} {\varvec{\phi }}_{n+1}&= {\varvec{\phi }}_n + h\,\mathbf f ({\varvec{\phi }}_n^*|\mathbf c ) + \mathbf{z _n}, \end{aligned}$$
(3.6)

where \({\varvec{\phi }}_{n}^*\) is the average between two consecutive states of the dynamical variable \({\varvec{\phi }}\):

$$\begin{aligned} {\varvec{\phi }}_{n}^*=\frac{({\varvec{\phi }}_{n+1}+{\varvec{\phi }}_n)}{2} \,. \end{aligned}$$

In Eq. (3.6) the term \(\mathbf z _n\) is the stochastic integral:

$$\begin{aligned} \mathbf z _n \equiv \int _{t_n}^{t_{n+1}} \mathbf z (t) \, dt = \sqrt{h}\,\mathbf H \, {{\varvec{\xi }}_n} \, . \end{aligned}$$
(3.7)

\(\mathbf H \) is the matrix that satisfies \(\mathbf H \mathbf H ^{\text {T}}= \mathbf E \), and \({\varvec{\xi }}_n\) is a zero-average \( \langle {\varvec{\xi }}_n \rangle = 0 \) and unitary-variance normal variable \(\langle {\varvec{\xi }}_n\,\, {\varvec{\xi }}_m \rangle = \mathbb I \, \delta _{n\,m} \, \).
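
As a small illustration, one realization of the noise increment \(\mathbf z _n\) in Eq. (3.7) could be drawn as follows (a sketch; the values of \(h\) and \(\mathbf E \) are arbitrary examples, and \(\mathbf H \) is obtained by a Cholesky factorization):

```python
import numpy as np

h = 0.01                                 # sampling step (illustrative value)
E = np.array([[0.5, 0.1],
              [0.1, 0.5]])               # noise matrix E (illustrative value)
H = np.linalg.cholesky(E)                # H satisfies H @ H.T == E
xi = np.random.standard_normal(2)        # zero-mean, unit-variance xi_n
z_n = np.sqrt(h) * H @ xi                # integrated noise over one step, Eq. (3.7)
```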

The main idea is to calculate the probability of \( {\varvec{\phi }}_{n+1} - {\varvec{\phi }}_n - h\,\mathbf f ({\varvec{\phi }}_n^*|\mathbf c ) \) for each single \(n\) as a function of the probability of the realization of the whole process \(\left\{ \mathbf z _n \right\} \). The Gaussian probability of a single \(\mathbf z _n\) is:

$$\begin{aligned} P \left( \mathbf z _n \right) = \frac{ d \mathbf z _n}{ \sqrt{ {(2\pi )}^{L} h^L \det (E) } } \exp {\left\{ -\frac{ \mathbf z _n^T \mathbf E ^{-1} \mathbf z _n}{2h} \right\} } \, . \end{aligned}$$

Thanks to the assumption that the noise under consideration is white, \(\mathbf z _n\) is statistically independent of \(\mathbf z _m\) for \(n\ne m\), and one can write the joint probability of the process \(\left\{ \mathbf z _n \right\} \) as a product of the probabilities of each single \(\mathbf z _n\):

$$\begin{aligned} P \left( \left\{ \mathbf z _n \right\} \right) = \prod _{i=0}^{N-1} P \left( \mathbf z _i \right) = \prod _{i=0}^{N-1} \frac{ d \mathbf z _i}{ \sqrt{ {(2\pi )}^{L} h^L \det (E) } } \exp {\left\{ -\frac{ \mathbf z _i^T \mathbf E ^{-1} \mathbf z _i}{2h} \right\} } \,. \end{aligned}$$
(3.8)

The likelihood probability \( P ( \mathcal X | \mathcal M ) \) over a time grid can be expressed as the probability density of a particular realization of the dynamical system \( P ( \mathcal X | \mathcal M ) = P \left( \left\{ {\varvec{\phi }}_n \right\} \right) = \rho _0\left( {\varvec{\phi }}_0 \right) \prod _{i\,=\,0}^{N} \rho \left( {\varvec{\phi }}_i \right) \) . The expression of \(P ( \mathcal X | \mathcal M )\) was decomposed in this way because of the need for \(\prod _{i\,=\,0}^{N} \rho \left( {\varvec{\phi }}_i \right) \) to be expressed directly in terms of \(\left\{ \mathbf z _n \right\} \).

Thanks to the change of variable from \(\mathbf z _n\) to \({\varvec{\phi }}_{n+1}(\mathbf z _n)\), and the introduction of its subsequent Jacobian term \(\left[ \mathbf J \right] _{\textit{ij}} = \delta _{\textit{ij}} - \frac{h}{2} \frac{ \partial \left[ \mathbf f \left( {\varvec{\phi }}_n^* | \mathbf c \right) \right] _i }{\partial \left[ {\varvec{\phi }}_n^* \right] _j}\), one obtains the probability of realization of the whole process \(\left\{ {\varvec{\phi }}_n \right\} \):

$$\begin{aligned} P&\left( \left\{ {\varvec{\phi }}_{n+1} \right\} \right) \nonumber \\&=\frac{ d {\varvec{\phi }}_{n+1} \det \left( \mathbf J \right) }{ \sqrt{ {(2\pi h)}^{L} \det (\mathbf E ) } } \exp {\left\{ -\frac{h}{2} \left( \varvec{ \dot{\phi }}_n - \mathbf f \left( {\varvec{\phi }}_n^* | \mathbf c \right) \right) ^T \mathbf E ^{-1} \left( \varvec{ \dot{\phi }}_n - \mathbf f \left( {\varvec{\phi }}_n^* | \mathbf c \right) \right) \right\} }\!, \end{aligned}$$
(3.9)

where the following definition was used: \( \varvec{ \dot{\phi }}_n \equiv \frac{{\varvec{\phi }}_{n+1} - {\varvec{\phi }}_n }{h}\). The determinant of the Jacobian can be further approximated, since the off-diagonal elements of the Jacobian matrix are close to zero. The resulting probability density leads to the complete expression for the likelihood function, given (for convenience in logarithmic form, with \(\partial \mathbf f /\partial {\varvec{\phi }}_n^*\) understood as the divergence \(\sum _l \partial f_l/\partial [{\varvec{\phi }}_n^*]_l\)) as:

$$\begin{aligned} -&\frac{2}{N} \ln \big ( P ( \mathcal X | \mathcal M ) \big )\, = \, \ln \big ( \det (\mathbf E ) \big )\nonumber \\&+ \frac{h}{N} \sum _{n\,=\,0}^{N-1} \left[ \frac{\partial \mathbf f \left( {\varvec{\phi }}_n^* | \mathbf c \right) }{\partial {\varvec{\phi }}_n^*} + \left( \varvec{ \dot{\phi }}_n - \mathbf f \left( {\varvec{\phi }}_n^* | \mathbf c \right) \right) ^T \mathbf E ^{-1} \left( \varvec{ \dot{\phi }}_n - \mathbf f \left( {\varvec{\phi }}_n^* | \mathbf c \right) \right) \right] \!. \end{aligned}$$
(3.10)

The next task is to maximize the posterior probability, i.e. to substitute the likelihood Eq. (3.10) into Bayes’ theorem, in order to find the optimal probability of the parameter set \(\mathcal M \) given the data \(\mathcal X \).

The prior probability \(P_{\text {prior}}(\mathcal M )\) was chosen to be a multivariate normal distribution with respect to the parameters \(\mathbf c \); if \(\mathbf c \) is an \(M\)-dimensional vector, its prior probability is written as:

$$\begin{aligned} P_{\text {prior}}(\mathbf c ) \, = \, \frac{1}{\sqrt{ (2\pi )^M \det (\varvec{\Sigma }_{\text {pr}}) }} \exp \left[ -\frac{1}{2} (\mathbf c - \mathbf c _{\text {pr}})^T \varvec{\Sigma }_{\text {pr}}^{-1} (\mathbf c - \mathbf c _{\text {pr}}) \right] \!, \end{aligned}$$
(3.11)

where \(\mathbf c _{\text {pr}}\) is the vector of a priori coefficients and \(\varvec{\Sigma }_{\text {pr}}\) is its covariance matrix. The two expressions Eqs. (3.10) and (3.11) give the required probabilities, from which (using Bayes’ theorem) the posterior probability can be estimated.

Before moving forward, the explicit dependence of \(\mathbf f \) on the parameter vector \(\,\mathbf c \,\) needs to be defined, and the following parametrization is introduced:

$$\begin{aligned} \mathbf f ({\varvec{\phi }}|\mathbf c ) \, \, = {\varvec{\Phi }}({\varvec{\phi }})\,\mathbf c , \end{aligned}$$
(3.12)

where \({\varvec{\Phi }}({\varvec{\phi }})\) is an \(L\times M\) matrix of Fourier base functions, as described in Sect. 3.1.2. With this linear parametrization of \(\mathbf f \), one obtains a log-likelihood function that is quadratic with respect to the parameter vector \(\,\mathbf c \,\). Hence, using a multivariate normal distribution for the prior probability immediately leads to a multivariate normal distribution for the posterior. This is highly desirable because the Gaussian posterior (described only by its mean and covariance) is computationally convenient and can easily be used again as a prior for the next sequential block.

Finally, taking the discussed expressions into account, the stationary point of the log-likelihood (and thus the posterior) can be calculated recursively with the following equations:

$$\begin{aligned} \mathbf E&= \frac{h}{N} \sum _{n=0}^{N-1} {\left[ \dot{\varvec{\phi }}_n - {\varvec{\Phi }}_n\mathbf c \right] }^\mathrm{{T}}{\left[ \dot{\varvec{\phi }}_n - {\varvec{\Phi }}_n\mathbf c \right] } ,\end{aligned}$$
(3.13)
$$\begin{aligned} \mathbf{w}_\mathcal{X }(\mathbf E )&= \varvec{\Sigma }_{\text {pr}}^{-1} \, \mathbf c _\mathrm{pr } + h\sum _{n= 0}^{N - 1}\left[ {\varvec{\Phi }}_n^{\text {T}}\, \mathbf E ^{-1} \, \dot{\varvec{\phi }}_n - \frac{1}{2} \frac{\partial {\varvec{\Phi }} \left( {\varvec{\phi }}_n^* \right) }{\partial {\varvec{\phi }}_n^*}\right] ,\end{aligned}$$
(3.14)
$$\begin{aligned} \mathbf{\Xi _\mathcal{X }}(\mathbf E )&= \varvec{\Sigma }_{\text {pr}}^{-1} + h \, \sum _{n = 0}^{N - 1} {\varvec{\Phi }}_n^{\text {T}}\, \mathbf E ^{-1} \, {\varvec{\Phi }}_n , \end{aligned}$$
(3.15)
$$\begin{aligned} \mathbf c&= \mathbf \Xi _\mathcal{X }^{-1}(\mathbf E )\mathbf w _\mathcal{X }(\mathbf E ) , \end{aligned}$$
(3.16)

where \(\mathbf \Xi \) is the inverse of the covariance matrix \(\mathbf \Xi =\varvec{\Sigma }^{-1}\) (often called concentration or precision matrix).

In terms of an optimal algorithm for computational calculation, this makes sense: starting from an initial prior \(\varvec{\Sigma }_{\text {pr}}^{-1}\, \) and \(\mathbf c _\mathrm{pr }\), the noise matrix \(\mathbf E \) can be calculated from Eq. (3.13); then, given this \(\mathbf E \), the parameter vector \(\mathbf c \) can be evaluated using Eqs. (3.14)–(3.16). The same procedure is repeated recursively until \(\mathbf c \) and \(\mathbf E \) converge. In the absence of any prior knowledge about the system, a non-informative initial prior can be used: \(\varvec{\Sigma }_{\text {pr}}^{-1}=0 \, \) and \(\mathbf c _\mathrm{pr }=0\). For details about the implementation and programming see [17].
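
A compact sketch of this recursion is given below (Python; the function name and the `Phi_mat`/`Phi_div` interfaces, returning the \(L\times M\) base-function matrix and the \(M\)-vector \(\sum _l \partial \Phi _{l,k}/\partial \phi _l\) respectively, are assumptions of this illustration, not the implementation of [17]):

```python
import numpy as np

def infer_block(phi, h, c_pr, Xi_pr, Phi_mat, Phi_div, n_iter=10):
    """One data block: posterior mean c, precision Xi and noise E (Eqs. 3.13-3.16)."""
    phid = np.diff(phi, axis=0) / h           # dot-phi_n of the midpoint scheme
    phis = 0.5 * (phi[1:] + phi[:-1])         # midpoints phi_n^*
    N, M = phid.shape[0], len(c_pr)
    c = np.linalg.solve(Xi_pr + 1e-12 * np.eye(M), Xi_pr @ c_pr)  # start from prior
    for _ in range(n_iter):                   # repeat until c and E converge
        res = phid - np.stack([Phi_mat(p) @ c for p in phis])
        E = (h / N) * res.T @ res             # Eq. (3.13), noise from residuals
        E_inv = np.linalg.inv(E)
        w, Xi = Xi_pr @ c_pr, Xi_pr.copy()    # prior contributions
        for pd, ps in zip(phid, phis):
            P = Phi_mat(ps)                   # L x M base-function matrix
            w += h * (P.T @ E_inv @ pd - 0.5 * Phi_div(ps))    # Eq. (3.14)
            Xi += h * P.T @ E_inv @ P                          # Eq. (3.15)
        c = np.linalg.solve(Xi, w)            # Eq. (3.16)
    return c, Xi, E
```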

The proposed Bayesian inferential framework can be summarized as follows. Thanks to the choice of the linear parametrization of the vector field \(\mathbf f ({\varvec{\phi }}|\mathbf c ) = {\varvec{\Phi }}({\varvec{\phi }}) \mathbf c \,\), a log-likelihood that is quadratic in the parameters has been obtained. The choice of a multivariate normal distribution for the prior \(P_{\text {prior}}(\mathbf c )\) leads to a posterior which is still a multivariate normal distribution. Therefore, given a realization of \(\mathcal X \,\), with two input quantities, \(\mathbf c _{\text {pr}}\) and \(\varvec{\Sigma }_{\text {pr}}\), respectively the mean and the covariance of the prior \(P_{\text {pr}}(\mathbf c )\), the set of parameters that best describes the system, together with their correlations, is described by only two other quantities: \(\mathbf c _{\text {post}}\) and \(\varvec{\Sigma }_{\text {post}}\), respectively the mean and the covariance of the posterior \(P_{\text {post}}(\mathbf c )\). The posterior probability density is thus:

$$\begin{aligned} P_{\text {post}}(\{c\}) = \frac{1}{{(2\pi )}^{M/2} |\mathbf \Xi _{\text {post}}|^{-1/2}} \exp \left[ -\frac{1}{2} (\mathbf c -\mathbf c _{\text {post}})^{\text {T}}\mathbf \Xi _{\text {post}} (\mathbf c -\mathbf c _{\text {post}}) \right] \!. \end{aligned}$$
(3.17)

If a new sequential data-block \(\mathcal X \,\) (generated from the same dynamics) is then given, we can use the posterior information from the first data-block as the prior for the second one. This procedure constitutes the information propagation process, whose utilization for time-varying dynamics is discussed in the following section.

3.1.4 Time-varying Information Propagation

The multivariate probability Eq. (3.17), described by \(\mathcal{N }_{\mathcal{X }}(c|\bar{c},\Xi )\) for the given time series \(\mathcal{X } = \lbrace {{\phi }}_{n} \equiv \phi (t_{n}) \rbrace \), explicitly defines the probability density of each parameter set of the dynamical system. When the sequential data come from a stream of measurements providing multiple blocks of information, one applies (3.13)–(3.16) to each block. Within Bayes’ theorem, the evaluation of the current distribution relies on the evaluation of the previous block of data, i.e. the current prior depends on the previous posterior. Thus the inference defined in this way is not simple windowing: each stationary posterior depends on the history of the evaluations from previous blocks of data.

In classical Bayesian inference, if the system is known to be non-time-varying, the posterior density of each block is taken as the prior of the next one: \(\Sigma _{\text {prior}}^{n+1} =\Sigma _{\text {post}}^n\). This full propagation of the covariance matrix allows good separation of the noise, and the uncertainties in the parameters steadily decrease with time as more data are included. But if time-variability exists, this propagation acts as a strong constraint on the inference and fails to follow the time-variability of the parameters. This situation is illustrated in Fig. 3.1a.

On the other hand, if the noisy dynamical system has time-variability, one can consider the processes between each block of data to be independent (i.e. to consider them as Markovian processes). Then there is no propagation between the blocks of data and each inference starts from a flat distribution: \(\Sigma _{\text {prior}}^{n+1} =0\). The inference will now follow the time-variability of the parameters more closely, but the effect of the noise and the uncertainty of the inference will be larger (Fig. 3.1b).

If the system has time dependence, however, the method of propagating knowledge about the state of the parameters obviously has to be improved and refined. Our framework prescribes the prior to be multinormal, so we synthesize our knowledge into a square, symmetric, positive-definite matrix. We assume that the probability of each parameter diffuses normally with a known diffusion matrix \(\Sigma _{\text {diff}}\). Thus, the probability density of the parameters is the convolution of two normal multivariate distributions, with covariances \(\Sigma _{\text {post}}\) and \(\Sigma _{\text {diff}}\):

$$\begin{aligned} \Sigma _{\text {prior}}^{n+1} = \Sigma _{\text {post}}^n + \Sigma _{\text {diff}}^n. \end{aligned}$$

The particular form of \(\Sigma _{\text {diff}}\) describes which part of the dynamical fields defining the oscillators can change, and the size of the change. In general \((\Sigma _{\text {diff}})_{\textit{i,j}} = \rho _{\textit{ij}}\sigma _i \sigma _j\), where \(\sigma _i\) is the standard deviation of the diffusion of \(c_i\) in the time window \(t_w\), and \(\rho _{\textit{ij}}\) is the correlation between the change in the parameters \(c_i\) and \(c_j\):

$$\begin{aligned} \Sigma _{\text {diff(i,j)}}^n = \left[ \begin{matrix} \ddots &{} \cdots &{}\rho _{\textit{ij}}\sigma _i \sigma _j \\ \vdots &{} \rho _{\textit{ii}}\sigma _i \sigma _i &{} \vdots \\ \cdots &{} \cdots &{} \ddots \end{matrix} \right] \end{aligned}$$
(3.18)
Fig. 3.1

Inference of a steeply time-varying coupling parameter from the coupled noisy oscillators (3.22). The gray line represents the intrinsic parameter (as in the numerical simulation), while the black line is the inferred time-varying parameter, for: (a) full propagation: \(\Sigma _{\text {prior}}^{n+1} =\Sigma _{\text {post}}^n\), (b) no propagation: \(\Sigma _{\text {prior}}^{n+1} =0\), and (c) propagation for time-varying processes: \(\Sigma _{\text {prior}}^{n+1} = \Sigma _{\text {post}}^n + \Sigma _{\text {diff}}^n\). From [18], Copyright (2012) by the American Physical Society

A particular example of \(\Sigma _{\text {diff}}\) will be considered: it is assumed that there is no change of correlation between the parameters (\(\rho _{\textit{ij}}=\delta _{\textit{ij}}\)) and that each standard deviation \(\sigma _i\) on the main diagonal is a known fraction of the relevant parameter (or standard deviation), \(\sigma _i = p_w c_i\), where \(p_w\) indicates that the parameter \(p\) refers to a window of length \(t_w\). It is important to note that this particular example is rather general, because it assumes that all of the parameters (from the \(\Sigma _{\text {post}}^{n}\) diagonal) can have a time-varying nature—which resembles inference of real (experimental) systems with a priori unknown time-variability. The resulting inference in Fig. 3.1c demonstrates that the time-variability is captured correctly and that the uncertainty is reduced with time as more data are included.

If one knows beforehand that only one parameter is varying (or, at most, a small number of parameters), then \(\Sigma _{\text {diff}}\) can be customized to allow tracking of the time-variability of that parameter specifically. This selective propagation can be achieved if, for example, only the selected diagonal element \(\rho _{ii}\) has a non-zero value. In the remaining presentation of the thesis, however, the general propagation for time-varying processes (with all the diagonal elements) will be used.
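
For concreteness, the propagation step between consecutive blocks, with the diagonal choice \(\sigma _i = p_w c_i\) discussed above, can be sketched as follows (the function name and the default value of \(p_w\) are illustrative assumptions; since the recursion (3.13)–(3.16) works with the precision matrix \(\Xi =\Sigma ^{-1}\), one converts to the covariance before propagating and back afterwards):

```python
import numpy as np

def propagate_prior(c_post, Sigma_post, p_w=0.2):
    """Sigma_prior(n+1) = Sigma_post(n) + Sigma_diff(n), with sigma_i = p_w * c_i."""
    Sigma_diff = np.diag((p_w * c_post) ** 2)   # independent diffusion of each c_i
    return c_post.copy(), Sigma_post + Sigma_diff
```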

3.2 Synchronization Detection

After performing the inference, one can use the reconstructed parameters, given in the form of a multivariate normal distribution \(\mathcal{N }_{\mathcal{X }}(c|\bar{c},\Xi )\), to study the interactions between the oscillators under study. One of the major points of interest is to detect whether the dynamics described by the inferred parameters undergo synchronization, and whether transitions exist between the qualitative states. The particular information propagation for tracing time-varying parameters allows the synchronization state and its transitions to be observed in time.

It is important to note that non-zero noise can induce phase slips in a system that would be synchronized in the noiseless limit. However, the currently proposed methods for synchronization detection are based on the presence and statistics of phase-slips, rather than on the nature of the phase-slip itself [19–21]. The novelty embedded in this study is that it proposes evaluation of the probability that the equations that drive the dynamics are intrinsically synchronized, and of whether any observed phase-slips are dynamics-related or noise-induced.

Every parameter set can be classified according to whether it belongs to the Arnold tongue region, i.e. whether it belongs to the synchronization parameter space. For the inferred parameters one needs a criterion for determining whether the dynamics governed by the base phase functions are in a synchronized state. This binary property is called \( s(c^{(l)}_k)=\{1,0\}\). Thus the posterior probability of the system being synchronized (or not) is obtained by evaluating the probability of \(s\):

$$\begin{aligned} p_{\text {sync}} \equiv p_\mathcal{X }(s=1)= \int s(c)\, \mathcal{N }_{\mathcal{X }}(c|\bar{c},\Xi ) \, \text {d} c \, . \end{aligned}$$
(3.19)

In general, the border of the Arnold tongue might not have an analytic form and, even if it did, the integral has no analytic solution and must be evaluated numerically. A practical way to proceed is to estimate \(p_{\text {sync}}\) numerically by sampling many realizations from the parameter space \(\{c^{(l)}_k\}_m\), where \(m\) labels each testing parameter vector, and computing the synchronization state \(s(c_m)\) for every set \(c_m\). The probability sampling is discussed in more detail in Sect. 3.4.4.

But how can one detect the binary property \( s(c)=\{1,0\}\) describing whether a single set of parameters makes the phase dynamics synchronized or not? For a simple form of the base functions \(\Phi _{l,k}\) (e.g. the phase model Eq. (3.1) described in Sect. 3.5.1) an analytic solution might exist—then \( s(c)\) is explicitly defined. But in order to preserve the generality of the method, a technique is needed that can detect synchronization of phase dynamics described by any number and any general form of the base functions \(\Phi _{l,k}\).

3.2.1 Torus Dynamics and Map Representation

In this section a simple technique for recognizing whether a phase oscillatory system is synchronized or not is presented. The technique itself is a simple check through numerical integration of the ordinary differential equation system (defined by Eq. (3.1) without the inferred noise) through one cycle of the dynamics, testing whether the synchronization condition \(|\psi (t)|=|\phi _1(t)-\phi _2(t)|< K \) always holds.

Fig. 3.2

Torus representation of the phase dynamics, given with toroidal coordinate \(\zeta (\phi _1(t),\phi _2(t))\) and polar coordinate \(\psi (\phi _1(t),\phi _2(t))\). The white circle denotes the Poincaré cross section. From [18], Copyright (2012) by the American Physical Society

Let us assume we are observing the motion on the torus \(\mathbb T ^2\) defined by the toroidal coordinate \(\zeta (\phi _1(t),\phi _2(t))=(\phi _1(t)+\phi _2(t))/2\), and the polar coordinate \(\psi (t)\). For the determination of synchronization, the phase difference \(\psi (t)\) will be defined as \(\psi (\phi _1(t),\phi _2(t))=\phi _1(t)-\phi _2(t)\). A schematic representation of the phase dynamics on the torus is shown in Fig. 3.2. Let us consider a Poincaré section defined by \(\zeta =0\) and assume that \(d\zeta (t)/dt |_{\zeta =0} > 0\) for any \(\psi \). This means that the direction of motion along the toroidal coordinate is the same for every point of the section. Ideally one would follow the time-evolution of every point in the section and check whether there is a periodic orbit; if a periodic orbit exists and its winding number is zero, then the system is synchronized. If such a periodic orbit exists, then there is at least one other periodic orbit, one of them being stable and the other unstable.

The solution of the dynamical system over the torus induces a map \(M:[0,2\pi ]\rightarrow [0,2\pi ]\) that defines, for each \(\psi _n\) on the Poincaré section, the next phase \(\psi _{n+1}\) after one round of the toroidal coordinate: \(\psi _{n+1}=M(\psi _n)\). The map \(M\) is continuous, periodic, and has two fixed points (one stable and one unstable) if and only if there is a pair of periodic orbits for the dynamical system, i.e. synchronization is verified if \(\psi _e\) exists such that \(\psi _e=M(\psi _e)\) and \( \left| {\frac{dM(\psi )}{d \psi }}|_{\psi _e} \right| < 1\).

3.2.2 Synchronization Discrimination

The procedure for detecting synchronization between the two oscillators that generate the phase time-series reduces to investigating synchronization of the synthetic phase model, using the parameters returned by the Bayesian machine. To calculate \(s(c)\) for any of the sampled parameter sets, one can proceed as follows (a numerical sketch is given after the list):

(i) From an arbitrary fixed \(\zeta \), and for an arbitrary \(\psi _0\), integrate numerically (with the standard fourth-order Runge–Kutta algorithm) the dynamical system prescribed by the phase base functions (Eq. (3.3) without the noise) for one cycle of the toroidal coordinate, obtaining the mapped point \(M(\psi _0)\);

(ii) the same integration is repeated for multiple \(\psi _i\) coordinates next to the initial one, obtaining the map \(M(\psi _i)\);

(iii) with \(dM/d\psi \) evaluated by finite differences, a modified version of Newton’s root-finding method is employed on the function \(M(\psi )-\psi \). The method is modified by calculating \(M\) at the next point \(\psi _{n+1}\) such that

$$\begin{aligned} \psi _{n+1}= \psi _n + 0.8\left| \frac{M(\psi _n)-\psi _n}{M'(\psi _n)-1}\right| . \end{aligned}$$

Note that in this version Newton’s method can only test the function by moving forward; in fact (a) the existence of the root is not guaranteed; (b) we are not interested in the root itself but only in its existence;

(iv) if a root is found, \(s(c)=1\) is returned; otherwise \(s(c)=0\) is returned.
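
The sketch below assembles steps (i)–(iv) (Python; the integration step, the tolerances and the `rhs` interface, returning \((\dot{\phi }_1,\dot{\phi }_2)\) from the inferred parameters, are illustrative assumptions rather than the thesis code):

```python
import numpy as np

def poincare_map(rhs, psi0, dt=1e-3):
    """Return M(psi0): psi after zeta = (phi1 + phi2)/2 advances by 2*pi."""
    phi = np.array([psi0 / 2.0, -psi0 / 2.0])   # start on the section zeta = 0
    zeta = 0.0
    while zeta < 2 * np.pi:                     # one round of the toroidal coordinate
        k1 = rhs(phi)                           # standard fourth-order Runge-Kutta
        k2 = rhs(phi + 0.5 * dt * k1)
        k3 = rhs(phi + 0.5 * dt * k2)
        k4 = rhs(phi + dt * k3)
        phi = phi + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        zeta = (phi[0] + phi[1]) / 2.0
    return phi[0] - phi[1]                      # psi = phi1 - phi2

def s_of_c(rhs, n_steps=100, dpsi=1e-4):
    """s(c): 1 if a stable root psi_e = M(psi_e) is found, else 0."""
    psi = 0.0
    for _ in range(n_steps):
        m0 = poincare_map(rhs, psi)
        m1 = poincare_map(rhs, psi + dpsi)
        deriv = (m1 - m0) / dpsi                        # finite-difference dM/dpsi
        f = (m0 - psi + np.pi) % (2 * np.pi) - np.pi    # wrapped M(psi) - psi
        if abs(f) < 1e-3 and abs(deriv) < 1.0:
            return 1                                    # stable fixed point found
        if abs(deriv - 1.0) < 1e-9:
            return 0                                    # degenerate step, give up
        psi += 0.8 * abs(f / (deriv - 1.0))             # forward-only Newton move
        if psi > 2 * np.pi:
            return 0                                    # scanned one full period
    return 0
```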

3.3 Interactions Description

One of the main goals of this work is to infer and describe the interactions between oscillators in a dynamical environment subject to external deterministic and stochastic influences. The interactions characterize the inner relationships between several oscillators (or a large population of them), and represent a base that defines phenomenological states (such as synchronization) and the flow of information, i.e. the structure of the connectivity.

The nature of an interaction mainly depends on the physical properties of the oscillating systems, their functionality and how they react to perturbations. The central idea is to use the inferred parameters from \(\mathcal{N }_\mathcal{X }(c,\Sigma )\) to describe the interacting properties. Because the dynamics are reconstructed separately as described by Eq. (3.1), usage can be made only of those inferred parameters from the base functions which are linked to the influences between the oscillators. The influence of one oscillator on the other can either be direct through \(f_i (\phi _j)\), or can arise through the combined interacting base functions \(g_i (\phi _i,\phi _j)\). In what follows, the base functions \(f_i (\phi _j)\) and \(g_i (\phi _i,\phi _j)\) are described with a common notation \(q_i(\phi _i,\phi _j)\).

One can seek to determine the properties that characterize the interaction in terms of the strength of coupling, the predominant direction of coupling, or even the form of the coupling function. As the use of information propagation allows inference of time-varying dynamics, the interaction properties can be traced in time as well. This is especially important for inference of open interacting oscillatory processes, which are often found in nature, where time-varying interactions can lead to transitions between qualitative states, such as synchronization or oscillation death.

3.3.1 Directionality Estimation

The interaction strength, or coupling amplitude, quantifies the net information flow between the oscillators. It has been found useful in many investigations, including the determination of causality relationships [22, 23] and the reconstruction of the structure of networks [4, 8]. Several approaches have been proposed for quantification of the couplings, including information-theoretic measures [24, 25], phase dynamics decomposition [26, 27], wavelet bispectrum [28] and perturbation techniques [3, 8, 29]. However, these techniques did not infer explicitly the noise dynamics or the parameters characterizing the noise, and not all of them were able to cope with time-variability of the intrinsic parameters.

The coupling amplitude quantifies the total influence between the oscillators in some direction: for example, how much the dynamics of the first oscillator affect the dynamical behavior of the second oscillator (\(1 \rightarrow 2\)). If the coupling is in only one or in both directions, we speak of unidirectional or bidirectional coupling, respectively. In the proposed inferential framework, the coupling amplitudes are evaluated as normalized measures from the interacting parameters inferred for the coupling base functions \(q_i(\phi _i,\phi _j)\). The quantification is calculated as a Euclidean norm:

$$\begin{aligned} \epsilon _{21}&= \Vert q_1(\phi _1,\phi _2) \Vert \equiv \sqrt{c_1^2+c_3^2+\ldots } \nonumber \\ \epsilon _{12}&= \Vert q_2(\phi _1,\phi _2) \Vert \equiv \sqrt{c_2^2+c_4^2+\ldots }, \end{aligned}$$
(3.20)

where, e.g., in the proposed implementation the odd-indexed inferred parameters were assigned to the base functions \(q_1(\phi _1,\phi _2)\) for the coupling from the second oscillator to the first (\(\epsilon _{21}: 2 \rightarrow 1\)), and the even-indexed parameters for the coupling from the first oscillator to the second (\(\epsilon _{12}: 1 \rightarrow 2\)).

The direction of coupling often gives useful information about the interactions [26], and is defined as the normalized difference of the coupling amplitudes:

$$\begin{aligned} \text {D}=\frac{\epsilon _{12}-\epsilon _{21}}{\epsilon _{12}+\epsilon _{21}}. \end{aligned}$$
(3.21)

If \(\text {D}\in (0,1]\) the first oscillator drives the second (\(1 \rightarrow 2\)), while if \(\text {D}\in [-1,0)\) the second drives the first (\(2 \rightarrow 1\)). The quantified values of the coupling strengths \(\epsilon _i\) and the directionality \(D\) represent measures of combined relationships between the oscillators. Thus, a non-zero value can be inferred even when there are no interactions. This discrepancy can be overcome by careful surrogate testing [30, 31]—by rejecting values below an acceptance surrogate threshold, which can be determined as the mean plus two standard deviations of many realizations of the measures.
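
In code, Eqs. (3.20) and (3.21) amount to a few lines; in this sketch the index bookkeeping (which entries of the inferred vector \(c\) belong to which coupling base functions) is an assumption that has to match the chosen ordering of the basis:

```python
import numpy as np

def coupling_and_direction(c, idx_21, idx_12):
    """idx_21 / idx_12: indices of the parameters of q1 (2->1) and q2 (1->2)."""
    eps_21 = np.linalg.norm(c[idx_21])   # influence of oscillator 2 on oscillator 1
    eps_12 = np.linalg.norm(c[idx_12])   # influence of oscillator 1 on oscillator 2
    D = (eps_12 - eps_21) / (eps_12 + eps_21)   # Eq. (3.21)
    return eps_21, eps_12, D
```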

Fig. 3.3

Schematic representation of a coupling function. (a) The coupling as a function of the phase difference \(\psi =\phi _2-\phi _1\), and its implications for synchronization transitions. The full line is for the unsynchronized and the dashed line for the synchronized case—the white circle corresponds to the stable and the black circle to the unstable equilibrium solution. (b) The coupling as a function of both phase variables

3.3.2 Coupling Function Reconstruction

Besides the coupling strength and the directionality, one can also infer the function that characterizes the interactions. This coupling function defines the law describing the functional relationships between the oscillators. Its characteristic form results from the nature of the oscillators and how their dynamics react under perturbations. The inference of an underlying phase model has formed the basis for techniques to infer coupling functions [4, 5, 7, 27]. However, these techniques inferred neither the noise dynamics nor the parameters characterising it, and they did not treat time-varying dynamics.

The coupling function is defined as the law through which the interactions undergo transitions to synchronization, i.e. transitions to stable states. This physical meaning is illustrated schematically in Fig. 3.3a for the case of simple phase oscillators with a sine coupling function (following Kuramoto [2]). The full lines represent situations where the oscillators are not synchronized and there is no stable solution for the phase difference. For certain parameters (frequency mismatch and coupling amplitudes) the coupling function intersects the equilibrium axis (\(\dot{\psi }=0\)), two solutions appear, one stable and one unstable, and the oscillators are synchronized. To determine synchronization, it is sufficient to analyze the coupling function through the phase difference alone. In general, however, one can study the function with respect to both phases (Fig. 3.3b). Winfree [32] used a function that is defined by both phases, rather than just the phase difference, while Daido and Crawford [33–35] used a more general form where the function was expanded in its Fourier series.

The coupling function should be \(2\pi \)-periodic. In the inferential framework under study, the coupling functions were decomposed into a finite number of Fourier components. The function describing the interactions between the two oscillators was decomposed through the odd-indexed parameters \(q_1(\phi _1,\phi _2)\in \{c_1,c_3,\ldots \}\) and the corresponding base functions \(\Phi _n[q_1(\phi _1,\phi _2)]\in \) \(\{\sin (\phi _1,\phi _2),\) \(\cos (\phi _1,\phi _2)\}\) up to order \(n\) of the decomposition. The other function \(q_2(\phi _1,\phi _2)\) \(\in \) \(\{c_2,c_4,\ldots \}\) was decomposed similarly.

The propagation of time-variability allows the coupling function to be inferred in time. This constitutes one of the novelties of the approach, because one can now trace the time-evolution of these functional relationships. From Chap. 2 and Sect. 3.5.4 it is clear that the latter is very important, and can act as a cause of transitions to synchronization. The importance of studying time-varying coupling functions is even greater given that they are a property observed in real-life oscillatory systems—such as the cardiorespiratory system.

3.4 Technical Aspects of the Bayesian Inference

Before applying the inference method presented theoretically in the previous sections, some attention is given to the technical properties, capabilities and limitations of the technique. Understanding the technical aspects is crucially important for appropriate and correct applications, especially because the final framework is a combination of several concepts, whose joint functioning must be set up correctly.

There are a number of technical aspects characterizing the technique, which include inference of stochastic dynamics and of parameters with time-varying nature, where the resulting measures are probabilistic distributions. For these reasons, the following questions are considered: how does the number of base functions affect the inference; how does the inference behave under different noise strengths; what time-resolutions of the time-varying parameters can be traced; and how should the combined measures of the resulting probability distributions be sampled. Many other technical aspects exist, but the ones presented here are considered sufficient for a proper understanding of this (and similar) implementations of the inferential technique.

3.4.1 Number of Base Functions

In this section, the discussion is focussed on the question of the optimal number of base functions to be used. The problem is basically an interplay between achieving the desired precision and computational speed. To infer the dynamics more precisely, we need to use a larger number of base functions. This is even more pronounced when one tries to infer properties (like time-varying frequencies, coupling functions, \(\ldots \)) that have a ‘non-sine’, steep form. Then, in order to trace the higher harmonics, the inference needs to include an expansion of the Fourier components up to higher orders. On the other hand, having a large number of base functions reduces the computational speed of the algorithm, and the functions that are not part of the actual dynamics can pick up components from the noise. The base functions within the inferential framework are represented by a multivariate Gaussian distribution in matrix form. Thus a large number of base functions increases the parameter space vastly, and the iterative calculations (especially the evaluation of a matrix inverse) slow the processing down substantially. It is worth noting that, even though Bayesian inference is popular for its real-time applications, the proposed inference framework for general phase dynamics does not allow (in a computational-speed sense) real-time applications.

Fig. 3.4

Inference of a time-varying coupling amplitude with different numbers of base functions, applied to a signal from numerical simulation of model (3.22); parameters are given in the text. The particular numbers of base functions are shown in the legend. The difference in precision is mostly observed around the local maxima—also enlarged in the inset

In order to demonstrate the inference precision for time-varying parameters, the technique was applied to a numerically simulated signal. The simulation was performed on a model of two coupled Poincaré oscillators subject to white noise:

$$\begin{aligned} \dot{x}_1&= - \Big (\sqrt{x_1^2+y_1^2}-1 \Big ) x_1 -\omega _1(t) y_1 + \varepsilon _{21}(t) (x_2-x_1)+\xi _1(t)\nonumber \\ \dot{y}_1&= - \Big (\sqrt{x_1^2+y_1^2}-1 \Big ) y_1 +\omega _1(t) x_1 + \varepsilon _{21}(t) (y_2-y_1)+\xi _1(t)\nonumber \\ \nonumber \\ \dot{x}_2&= - \Big (\sqrt{x_2^2+y_2^2}-1 \Big ) x_2 -\omega _2(t) y_2 + \varepsilon _{12}(t) (x_1-x_2)+\xi _2(t)\nonumber \\ \dot{y}_2&= - \Big (\sqrt{x_2^2+y_2^2}-1 \Big ) y_2 +\omega _2(t) x_2 + \varepsilon _{12}(t) (y_1-y_2)+\xi _2(t), \end{aligned}$$
(3.22)

where the frequency parameters \(\omega _i(t)\) and the coupling amplitudes \(\varepsilon _{\textit{ij}}(t)\) were allowed to be time-varying. The same model will be used for the remaining discussion of this section. The coupling function is a linear state difference (\(x_j-x_i\), \(y_j-y_i\)) and at this point is considered to have a constant (non-time-varying) form.
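
A sketch of how such a signal can be generated is given below (an Euler–Maruyama scheme with the parameter values quoted in the text; the step size, duration, seed and initial conditions are illustrative choices, and, as in the printed equations, the same \(\xi _i\) enters both the \(x_i\) and \(y_i\) equations):

```python
import numpy as np

def simulate_poincare(T=400.0, h=0.001, E=0.5, seed=0):
    """Euler-Maruyama integration of model (3.22) with eps_21 = 0."""
    rng = np.random.default_rng(seed)
    w1, w2 = 2 * np.pi * 1.1, 2 * np.pi * 2.77
    x = np.array([1.0, 0.0, 0.0, 1.0])            # (x1, y1, x2, y2)
    out = np.empty((int(T / h), 4))
    for i in range(out.shape[0]):
        eps12 = 1.7 + 1.3 * np.sin(2 * np.pi * 0.0025 * i * h)   # epsilon_12(t)
        x1, y1, x2, y2 = x
        r1, r2 = np.hypot(x1, y1) - 1.0, np.hypot(x2, y2) - 1.0
        drift = np.array([
            -r1 * x1 - w1 * y1,                       # oscillator 1 (eps_21 = 0)
            -r1 * y1 + w1 * x1,
            -r2 * x2 - w2 * y2 + eps12 * (x1 - x2),   # oscillator 2, driven by 1
            -r2 * y2 + w2 * x2 + eps12 * (y1 - y2),
        ])
        xi = np.sqrt(E) * rng.standard_normal(2)      # xi_1(t), xi_2(t)
        x = x + h * drift + np.sqrt(h) * np.array([xi[0], xi[0], xi[1], xi[1]])
        out[i] = x
    return out
# The unwrapped phases phi_i = np.unwrap(np.arctan2(y_i, x_i)) feed the inference.
```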

A particular case was considered, where the coupling amplitude from the first oscillator was periodically time-varying: \(\varepsilon _{12}(t)=\varepsilon _{12}+\tilde{A} \sin (\tilde{\omega }t)\). The parameters were: \(\omega _1(t)\equiv \omega _1=2\pi \,1.1\), \(\omega _2(t)\equiv \omega _2=2\pi \,2.77\), \(\varepsilon _{21}=0\), \(\varepsilon _{12}=1.7\), \(\tilde{\omega }=2\pi \,0.0025\), \(\tilde{A} = 1.3\) and noise strength \(E_1=E_2=0.5 \,\). Evaluation of the coupling amplitude is done through calculation of the norm (Eq. 3.20) from the inferred coupling parameters. Results of the \(\varepsilon _{12}(t)\) inference from the same signal, for three cases with different numbers of base functions, are presented in Fig. 3.4. From the parameter estimates around the local maxima (enlarged in the inset), one can notice that the inference does not follow the sine form promptly. This can be due to a particular effect of the noise, or to the two oscillators having become more coherent around these parameter values. The figure demonstrates that the three cases differ, and that the inference with a larger number of base functions gets closer to the intrinsic parameter values.

3.4.2 Effect of Noise Intensity

The proposed technique aims to infer the dynamics of coupled oscillators subject to noise. One of the main tasks is to decompose what is considered to be the intrinsic dynamics from the effect of the noise. The question posed here is: how well can the parameters be inferred when the dynamics are subject to noise of different strengths? The answer partially depends on how the propagation of information is achieved. The results will be slightly better for full propagation and constant parameters but, because the objective is inference of time-varying dynamics, the following investigation is performed with the propagation that can trace time-varying parameters.

Fig. 3.5

Statistical properties of the inferred parameters for different noise intensities \(E\). The dotted line shows the intrinsic values of the parameters, presented with boxplots. The boxplots indicate: the median with a black thick line; the lower and upper quartiles within the gray box; and the range (minimum, maximum) with the vertical dashed line. Outliers are not shown. (a) The influence of noise on the frequency, (b) on the coupling parameters

The same numerical example (3.22) is considered, but with constant parameters and different noise strengths. The parameters were: \(\omega _1=2\pi \,1.1\), \(\omega _2=2\pi \,1.77\), \(\varepsilon _{21}=0.05\), \(\varepsilon _{12}=1.17\) and \(E_1=E_2=E\). The main idea is to investigate how much the parameters deviate from their intrinsic values. The frequency \(\omega _1\) and the coupling amplitude \(\varepsilon _{12}\) were followed in the same simulation, performed for each value of the noise intensity \(E_i\). Fig. 3.5 shows the statistical properties in terms of boxplots for different noise intensities. It is easy to notice that the inference of the parameters is worse, i.e. their values deviate more, as the noise intensity \(E\) is increased. Another feature is that the coupling amplitude \(\varepsilon _{12}\) has larger deviations than the frequency parameter \(\omega _1\). This is probably because \(\varepsilon _{12}\) results from the evaluation of the norm as a combination of several inferred parameters, and the noise effect from all of them contributes to the final deviation. Finally, it is worth pointing out that in experiments (cardiorespiratory and electronically simulated interactions), the inferred noise strength was usually not very high (\(0.01\le E \le 0.2\)).

3.4.3 Time Resolution

The main objective of this work is to infer time-varying dynamics. The issue addressed here is: how fast or slow a dynamics can be traced by the proposed technique, and what precision is achieved? The problem is related to the size of the sequential windows, i.e. the amount of information included within one block of data. The issue also depends implicitly on the time-resolution (i.e. the frequencies) of the dynamics of the interacting oscillators.

Fig. 3.6

Inference of a time-varying frequency (a) and coupling parameter (b) from model (3.22) for four different lengths of the inference windows. The window sizes are shown in the legend

Using the numerical model (3.22), the time-resolution was investigated for a case where the frequency \(\omega _1(t)=\omega _1+\tilde{A}_1 \sin (\tilde{\omega }t)\) and the coupling amplitude \(\varepsilon _{12}(t)=\varepsilon _{12}+\tilde{A}_2 \sin (\tilde{\omega }t)\) were varying periodically at the same time. The parameters were: \(\omega _1=2\pi \,1.1\), \(\omega _2=2\pi \,2.77\), \(\varepsilon _{21}=0\), \(\varepsilon _{12}=1\), \(\tilde{\omega }=2\pi \,0.002\), \(\tilde{A}_1 = 0.1\), \(\tilde{A}_2 = 0.5\) and noise strengths \(E_1=E_2=0.15 \,\). The parameters were reconstructed using four different lengths of the inference windows. The results presented in Fig. 3.6 demonstrate that for small windows (0.5 s) the inferred parameters are sparse and sporadic, while for very large windows (100 s) the time-variability is faster than the window size and the form of the variability is cut off. A better-suited window size lies between these two. Another interesting feature is that, for the smallest window (0.5 s), the coupling amplitude improves with information propagation as time progresses, while the inferred frequency (as a constant component without a base function) remains sparse throughout the whole time interval.
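
The sequential windowing itself can be sketched as follows, reusing the hypothetical `infer_block` and `propagate_prior` helpers from the earlier sketches (the block layout, the one-sample overlap and the non-informative initial prior are assumptions of this illustration):

```python
import numpy as np

def run_windows(phi, h, t_w, M, Phi_mat, Phi_div, p_w=0.2):
    """Sequential inference over windows of length t_w seconds."""
    n_w = int(round(t_w / h))                 # samples per window
    c, Xi = np.zeros(M), np.zeros((M, M))     # non-informative initial prior
    history = []
    for start in range(0, len(phi) - n_w, n_w):
        block = phi[start:start + n_w + 1]    # one extra sample for the derivatives
        c, Xi, E = infer_block(block, h, c, Xi, Phi_mat, Phi_div)
        c, Sigma = propagate_prior(c, np.linalg.inv(Xi), p_w)
        Xi = np.linalg.inv(Sigma)             # back to a precision for the next prior
        history.append(c.copy())
    return np.array(history)
```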

3.4.4 Probability Sampling

The final result of the inference is given by the set \(\mathcal{N }_{\mathcal{X }}(c|\bar{c},\Sigma )\). Every inferred parameter has the nature of a Gaussian distribution, and is part of a multivariate Gaussian distribution over the whole parameter space, given by the mean vector \(\bar{c}\) and the covariance matrix \(\Sigma \,\). If one needs to infer a measure that is evaluated from a combination of the inferred parameters then, in theory, one needs to evaluate the probability of the measure from the multivariate Gaussian distribution \(\mathcal{N }_{\mathcal{X }}(c|\bar{c},\Sigma )\). Assume that a binary property of the measure \( m(c)=\{1,0\}\) is given. For example, \(m(c)\) can be the synchronization index \( s(c)=\{1,0\}\) presented in Sect. 3.2.1, a normalized evaluation of the directionality index, or some other measure. Then the posterior probability of the measure can be evaluated as:

$$\begin{aligned} p_{m} \equiv p_\mathcal{X }(m=1)= \int m(c)\, \mathcal{N }_{\mathcal{X }}(c|\bar{c},\Sigma ) \, \text {d} c \, . \end{aligned}$$
(3.23)

This integral may not have an analytic solution and, in order to preserve the generality and practicality of the approach, one can solve it by numerical evaluation. Proceeding in a Monte Carlo manner over the parameter space, one can sample many realizations \(m_k\), where \(k\) labels each testing parameter vector. Fig. 3.7 shows several examples of sampling distributions from the inference of model (3.22). Fig. 3.7a shows the Gaussian-like distribution of a single frequency parameter after sampling of \(\mathcal{N }_{\mathcal{X }}(c|\bar{c},\Sigma )\), while Fig. 3.7b, c demonstrate the correlated distributions of two inferred parameters. The two latter bivariate distributions only hint at the complexity of the full multivariate normal distribution \(\mathcal{N }_{\mathcal{X }}(c|\bar{c},\Sigma )\), which can have many more dimensions.

Fig. 3.7

Probability distributions of the inferred parameters of model (3.22). (a) Gaussian-like distribution of the frequency \(\omega _1\). Bivariate distributions of two inferred (b) coupling parameters, (c) frequency parameters. Note the high (blade-like) correlation in (c). There was no time-variability, and the parameters were \(\omega _1=1.27\), \(\omega _2=0.67\), \(\varepsilon _{21}=0.05\), \(\varepsilon _{12}=0.25\), with the rest as in Fig. 3.4

To find \(p_{m}\) arbitrarily precisely, it is enough to generate a number \(K\) of parameter vectors \(c_k\), \(k=1,\dots ,K\), sampled from \(\mathcal{N }_{\mathcal{X }}(c|\bar{c},\Sigma )\), since \(p_m= \lim _{K\rightarrow \infty }{\frac{1}{K}\sum _k^K} m(c_k) \). However, this high-dimensional integration quickly becomes inefficient as the number of Fourier components increases. On the other hand, if the posterior probability \(p_\mathcal{X }\) is sharply peaked around the mean value \(\bar{c}\), then \(p_m\) will be indistinguishable from \(m(\bar{c})\), and evaluating only \(m(\bar{c})\) would suffice.
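
A minimal Monte Carlo sketch of this estimate (assuming some binary measure `m`, for instance the synchronization check of Sect. 3.2.2) is:

```python
import numpy as np

def sample_probability(m, c_bar, Sigma, K=1000, seed=0):
    """Estimate p_m of Eq. (3.23) by sampling the posterior N(c | c_bar, Sigma)."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(c_bar, Sigma, size=K)
    return np.mean([m(c_k) for c_k in samples])   # p_m ~ (1/K) * sum_k m(c_k)
```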

3.5 Application Examples

Having laid down the theoretical and technical aspects of the inferential framework, here we proceed with the application of the technique to several characteristic models. This section demonstrates all the aspects, and shows how one can optimally exploit and benefit from the method. It also reveals the novelties brought by this approach with respect to the application of earlier known methods.

The only requirements (inputs) for the method are phase time-series of interacting oscillators. As long as they are properly defined and detected, the phases are not model-dependent and they can come from any general form of oscillator. This contributes to the generality of the method and its wide applicability. In the following, different types of models are used to demonstrate particular features of the method.

3.5.1 Phase Oscillators Model

In order to be systematic, and before moving on to more complicated, realistic models, the technique is first applied to a simple phase oscillators model. This provides a sufficient base model for the description of synchronization, which at the same time is analytically tractable. Moreover, the base functions embedded in the inferential framework are a perfect match for the inference of the interacting phase model.

The main objective in this section is to demonstrate how the synchronization detection works, and to investigate the implications when it is applied to noisy time-series. In this sense, synchronization detection means whether the examination of the constructed map \(M(\psi )\) (following the Bayesian inference) can distinguish synchronized (\(s(c)=1\)) from unsynchronized dynamics (\(s(c)=0\)), i.e. whether the root \(M(\psi _e)=\psi _e\) exists or not. It is important to note that non-zero noise can induce phase slips in a system that would be synchronized in the noiseless limit. Therefore, a genuine inference should not only detect the presence of phase-slips, but also describe the nature of the phase-slip itself: whether it is noise-induced or dynamics-related. The latter means describing the dynamics in parameter space in relation to the inferred parameters, without the contribution of the noise. The parameter space of the synchronization phenomenon can effectively be described by Arnold tongues [1]. Fig. 3.8a illustrates schematically a particular situation: in the noiseless case the systems are synchronized (black circle inside the Arnold tongue), and phase-slips occur only because of the effect of the noise (white circle outside the Arnold tongue). Thus the main goal is to detect whether the systems are intrinsically synchronized, and whether the existence of phase slips is due to the effect of the noise.

Fig. 3.8
figure 8

Synchronization discrimination for the coupled phase oscillators (3.24). a Schematic Arnold tongue to illustrate synchronization [1]. b Map of \(M(\psi )\) for \(\epsilon _{12}=0.25\), demonstrating that the oscillators are not synchronized. c Map of \(M(\psi )\) for the case shown in (d), demonstrating that a root of \(M(\psi )=\psi \) exists, i.e. that the state is, in fact, synchronized. d Phase difference, exhibiting two phase slips. From [18], Copyright (2012) by the American Physical Society

The model used to generate numerical phase signals for analysis consists of two coupled phase oscillators subject to white noise:

$$\begin{aligned} \dot{\phi }_1&=\omega _1 +\epsilon _{21} \sin (\phi _2-\phi _1) + \xi _1(t) \\ \dot{\phi }_2&=\omega _2 +\epsilon _{12} \sin (\phi _1-\phi _2) + \xi _2(t).\end{aligned}$$
(3.24)

The parameters were \(\epsilon _{21}=0.1\), \(\omega _1=1.2\), \(\omega _2=0.8\) and \(E_1=E_2=2\). Note that there is no time-variability, i.e. all of the parameters are constant in time. The discussion will therefore focus on the effect of the noise, and the inference will be applied to a single block of data.
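For concreteness, a minimal sketch of how such phase time-series can be generated is given below, using a simple Euler–Maruyama discretization of (3.24); the time step, duration and random seed are illustrative assumptions.

```python
import numpy as np

# Euler-Maruyama simulation of model (3.24); the noise convention
# <xi_i(t) xi_j(tau)> = delta(t - tau) E_i gives increments of standard
# deviation sqrt(E_i * dt) per step.
rng = np.random.default_rng(1)
w1, w2 = 1.2, 0.8          # natural frequencies
e21, e12 = 0.1, 0.25       # coupling amplitudes (first case in the text)
E1 = E2 = 2.0              # noise intensities
dt, n_steps = 0.01, 100_000

phi = np.zeros((n_steps, 2))
for n in range(1, n_steps):
    p1, p2 = phi[n - 1]
    dW = rng.standard_normal(2) * np.sqrt(dt)
    phi[n, 0] = p1 + (w1 + e21 * np.sin(p2 - p1)) * dt + np.sqrt(E1) * dW[0]
    phi[n, 1] = p2 + (w2 + e12 * np.sin(p1 - p2)) * dt + np.sqrt(E2) * dW[1]
```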

The dynamics of the phase difference \(\psi =\phi _2-\phi _1\) can be described by \(\dot{\psi }=\Delta \omega -\epsilon \sin (\psi )+\xi _1(t)+ \xi _2(t)\), where \( \Delta \omega =\omega _2-\omega _1\) is the frequency mismatch and \(\epsilon =\epsilon _{21}+\epsilon _{12}\) is the resultant coupling. In the noiseless case, the analytic condition for synchronization, i.e. the existence of a stable equilibrium solution \(\dot{\psi }=0\), reduces to \(|\Delta \omega / \epsilon |<1\). Next, characteristic cases of numerically simulated signals from model (3.24) were analyzed. For a coupling amplitude of \(\epsilon _{12}=0.25\) the reconstructed map \(M(\psi )\) (Fig. 3.8b) shows that the root \(M(\psi _e)=\psi _e\) does not exist and the oscillators are not synchronized, \(s(c)=0\). To demonstrate the novelty of the method, the parameters were then chosen such that the oscillators were only just inside the Arnold tongue. This was achieved by enlarging the coupling amplitude to \(\epsilon _{12}=0.35\): the analytic condition for synchronization, \(\Delta \omega / \epsilon =0.4/0.45<1\), is then fulfilled and the systems should be synchronized. However, due to the effect of the moderate noise, phase slips occurred, see Fig. 3.8d. The application of earlier methods based on the statistics of the phase difference [19–21] suggests that the oscillators are not synchronized. In contrast, the proposed technique shows that the oscillators are intrinsically synchronized, as illustrated in Fig. 3.8c: the phase slips are attributable purely to noise (the intensity of which is inferred in the matrix \(E_{\textit{i,j}}\)), and not to the deterministic interactions between the oscillators. The ability to identify noise-induced phase slips could be important in a number of contexts, including both noise-induced synchronization [36–38] and desynchronization [39].
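In the noiseless limit this condition can be checked directly; a small sketch using the parameter values quoted above:

```python
import numpy as np

# Noiseless synchronization check for (3.24): the phase difference obeys
# psi' = dw - eps*sin(psi), which has a stable equilibrium iff |dw/eps| < 1.
def intrinsic_sync(dw, eps):
    if np.abs(dw / eps) < 1.0:
        psi_e = np.arcsin(dw / eps)  # stable root of dw - eps*sin(psi) = 0
        return True, psi_e
    return False, None

print(intrinsic_sync(0.8 - 1.2, 0.1 + 0.25))  # eps_12 = 0.25: no root
print(intrinsic_sync(0.8 - 1.2, 0.1 + 0.35))  # eps_12 = 0.35: synchronized
```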

3.5.2 Limit-Cycle Oscillators Model

The proposed inferential framework offers the possibility of comprehensive analysis within a single formalism. The following discussion explores this, investigating how the proposed method can trace time-varying parameters, coupling functions, directionality and synchronization.

The model under consideration consisted of two coupled non-autonomous Poincaré oscillators subject to white noise:

$$\begin{aligned} \dot{x}_1&= - \Big (\sqrt{x_1^2+y_1^2}-1 \Big ) x_1 -\omega _1(t) y_1 + \varepsilon _{1}(t) q_1(x_1,x_2,t)+\xi _1(t)\\ \dot{y}_1&= - \Big (\sqrt{x_1^2+y_1^2}-1 \Big ) y_1 +\omega _1(t) x_1 + \varepsilon _{1}(t) q_1(y_1,y_2,t)+\xi _1(t)\\ \dot{x}_2&= - \Big (\sqrt{x_2^2+y_2^2}-1 \Big ) x_2 -\omega _2(t) y_2 + \varepsilon _{2}(t) q_2(x_1,x_2,t)+\xi _2(t)\\ \dot{y}_2&= - \Big (\sqrt{x_2^2+y_2^2}-1 \Big ) y_2 +\omega _2(t) x_2 + \varepsilon _{2}(t) q_2(y_1,y_2,t)+\xi _2(t).\end{aligned}$$
(3.25)

All of the parameters can be time-varying, and the coupling functions can take different forms, with or without time-variability.

First, we consider unidirectional coupling (1\(\rightarrow \)2), where the natural frequency of the first oscillator and its coupling strength to the second one vary periodically at the same time: \(\omega _1(t)=\omega _1+\tilde{A}_1\sin ({\tilde{\omega }_1}t)\) and \(\varepsilon _{2}(t)=\varepsilon _{2}+\tilde{A}_2\sin ({\tilde{\omega }_2}t)\). The other parameters were: \(\varepsilon _{2}=0.1\), \(\omega _1=2\pi 1\), \(\omega _2=2\pi 1.14\), \(\tilde{A}_1=0.2\), \(\tilde{A}_2=0.13\), \(\tilde{\omega }_1=2\pi 0.002\), \(\tilde{\omega }_2=2\pi 0.0014\) and noise \(E_{11}=E_{22}=0.1\). The coupling function was a simple linear difference of the state variables: \(q_i(x_i,x_j,t)=x_i-x_j\) and \(q_i(y_i,y_j,t)=y_i-y_j\). The phases were estimated as the angle variable \(\phi _i = \arctan (y_i/x_i)\). With \(\varepsilon _{1}=0.1\) there is no synchronization and the time-varying parameters (\(f_1(t)\) and \(\varepsilon _{2}(t)\)) are accurately traced: see the full lines of Fig. 3.9a, b. The form and the speed of the inferred parameter variations demonstrate the precision of the method and the benefits of the time-varying information propagation. For a coupling amplitude of \(\varepsilon _{1}=0.3\) the two oscillators are synchronized for part of the time, resulting in intermittent synchronization. The time-variability of the parameters in the non-synchronized intervals is again determined correctly, while in the synchronized intervals the inferred values differ from the intrinsic parameters, Fig. 3.9a, b, dashed lines. Within these synchronized intervals, all of the base functions are highly correlated, with values lying within the Arnold tongue. The latter was detected as synchronized (\(s(c)=1\)) intervals, Fig. 3.9a, b, grey shaded regions.
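A brief sketch of the phase-estimation step used here, under the assumption that the state time-series \(x_i\), \(y_i\) are available as arrays:

```python
import numpy as np

# Angle-variable phase estimate phi_i = arctan(y_i / x_i) used above,
# implemented with atan2 so the quadrant is resolved correctly, and
# unwrapped to obtain a monotonically growing phase.
def angle_phase(x, y):
    return np.unwrap(np.arctan2(y, x))

# e.g. phi1 = angle_phase(x1, y1); phi2 = angle_phase(x2, y2)
```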

Fig. 3.9
figure 9

Extraction of time-varying parameters, synchronization and coupling functions from numerical data created by (3.25). The frequency \(f_1(t)\) (a) and coupling \(\varepsilon _2(t)\) (b) are independently varied. The dotted and full lines plot the parameters when the two oscillators are synchronized for part of the time (\(\varepsilon _{1}=0.3\)), and not synchronized at all (\(\varepsilon _{1}=0.1\)), respectively. The regions of synchronization, found by calculation of the synchronization index, are indicated by the gray shaded regions. c–f show the coupling functions \(q_1(\phi _1,\phi _2)\) and \(q_2(\phi _1,\phi _2)\) for time windows centered at different times: (c, d) at \(t=350\) s; (e, f) at \(t=1000\) s. The window length was \(t_w=50\) s, and the coupling strength \(\varepsilon _{12}=0.1\) in both cases. Note the similarity in forms of (c, e), and of (d, f)

The reconstructed sine-like functions \(q_1(\phi _1,\phi _2)\) and \(q_2(\phi _1,\phi _2)\) are shown in Fig. 3.9c, d for the first and second oscillators, respectively. They describe the functional form of the interactions between the two Poincaré systems (3.25). Application of the proposed approach suggests that the form of the coupling functions does not evolve with time: \(q_1\) and \(q_2\) evaluated for later time segments are presented in Fig. 3.9e, f, respectively. Comparing Fig. 3.9c, e, or Fig. 3.9d, f, we see that the coupling functions are time-invariant: they did not change qualitatively, even though the parameters were time-varying and weak noise was present.

Next, the method was applied to detect the predominant direction of coupling, expressed through a quantitative measure evaluated from the norms of the inferred coupling base parameters. To illustrate the detection and precision of directionality, the frequencies were now held constant, while both of the coupling strengths were discretely time-varying. The parameters were \(\omega _1=2\pi 1.3\), \(\omega _2=2\pi 1.7\), \(E_{11}=E_{22}=0.2\), and the coupling functions were as in the previous example: \(q_i(x_i,x_j,t)=x_i-x_j\) and \(q_i(y_i,y_j,t)=y_i-y_j\). Synchronization, however, was not reached for these parameters. The couplings alternated (over the time intervals depicted in Fig. 3.10) from unidirectional (\(1\rightarrow 2\)), to bidirectional (\(1\rightarrow 2\)), then bidirectional (\(2\rightarrow 1\)), finishing with zero coupling in both directions (\(1=2\)). The detected directionality index was consistent with the prescribed values. Note that the value for unidirectional coupling did not reach \(1\), owing to the noise disturbance.
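A hedged sketch of one common convention for such a directionality index is given below; the assumption, consistent with the description above, is that the Euclidean norms of the inferred coupling parameters quantify the two coupling strengths.

```python
import numpy as np

# Directionality index from inferred base parameters (illustrative sketch).
# c12: inferred coefficients of the coupling terms acting 1 -> 2;
# c21: those acting 2 -> 1. D = +1 for purely 1 -> 2, D = -1 for purely
# 2 -> 1, and D ~ 0 for symmetric bidirectional coupling.
def directionality(c12, c21):
    s12, s21 = np.linalg.norm(c12), np.linalg.norm(c21)
    return (s12 - s21) / (s12 + s21)
```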

The oscillatory models used for studying interactions and synchronization are usually assumed to have time-invariant coupling functions (for example, the coupling functions in Fig. 3.9c–f). However, when the oscillators are open by nature, the functions defining their interactions can themselves be time-varying processes. Moreover, as discussed in the previous chapter, variation of the form of a coupling function can alone be the reason for synchronization transitions to occur.

Fig. 3.10
figure 10

Directionality of coupling for discretely time-varying coupling strengths. The different unidirectional and bidirectional cases are realized by different values of the coupling amplitudes \(\varepsilon _1\) and \(\varepsilon _2\), as indicated by the square insets. From [18], Copyright (2012) by the American Physical Society

Fig. 3.11
figure 11

Time-evolution of the coupling function from model (3.25) with the time-varying exponent of (3.26). a–d Coupling function \(q_2(\phi _1,\phi _2)\) of the second oscillator for four consecutive time windows (the window length was \(t_w=50\) s). For simplicity and clarity only the function \(q_2(\phi _1,\phi _2)\) is shown (the behavior of \(q_1(\phi _1,\phi _2)\) of the first oscillator was similar). From [18], Copyright (2012) by the American Physical Society

To investigate the issue of time-varying coupling functions, and the implications when the inferential technique is applied, the same model (3.25) was used, but now the coupling functions were the absolute value of the state difference raised to the power of a time-varying exponent:

$$\begin{aligned} q_i(x_i,x_j,t)=|(x_j-x_i)^{\nu (t)}|; \, \, q_i(y_i,y_j,t)=|(y_j-y_i)^{\nu (t)}|, \end{aligned}$$
(3.26)

where \(i,j=\{1,2\}\) with \(i\ne j\). The exponent varied linearly in time, \(\nu (t)=\{1\rightarrow 3\}\), and the rest of the parameters were constant: \(\omega _1=2\pi 1\), \(\omega _2=2\pi 2.14\), \(\varepsilon _{1}=0.2\), \(\varepsilon _{2}=0.3\) and \(E_{11}=E_{22}=0.05\).

Following the Bayesian inference, the phase coupling functions \(q_i(\phi _1,\phi _2)\) were calculated from the base parameters of the interacting terms. The results for four consecutive windows are presented in Fig. 3.11. Observing the inferred coupling functions, one readily notices that their complex form is not constant, but varies with time. Comparing them in neighboring (consecutive) pairs, (a) and (b), then (b) and (c), then (c) and (d), one can follow the time-evolution of the functions' form. Even though the variability between neighbors is gradual, the two most distant functions, Fig. 3.11a, d, have substantially different forms. Note also that, besides the form, the functions' norm, i.e. the coupling strength, varies too (compare e.g. the scale of the maxima in Fig. 3.11a, d). This probably happens because the coupling functions were varied in state space, and the way the oscillators react to this perturbation affects the coupling strength. The latter can be even more significant for inducing synchronization transitions.
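A sketch of how such a phase coupling function can be evaluated on a grid from inferred Fourier coefficients is given below; the coefficient layout a[k, l], b[k, l] is an assumption of this illustration rather than the exact data structure used in the inference.

```python
import numpy as np

# Evaluate an inferred coupling function q(phi1, phi2) on a 2*pi-periodic
# grid, given coefficients a, b of shape (2K+1, 2K+1) multiplying
# cos(k*phi1 + l*phi2) and sin(k*phi1 + l*phi2), respectively.
def coupling_function(a, b, K, n=100):
    phi1, phi2 = np.meshgrid(np.linspace(0, 2 * np.pi, n),
                             np.linspace(0, 2 * np.pi, n), indexing="ij")
    q = np.zeros_like(phi1)
    for k in range(-K, K + 1):
        for l in range(-K, K + 1):
            arg = k * phi1 + l * phi2
            q += a[k + K, l + K] * np.cos(arg) + b[k + K, l + K] * np.sin(arg)
    return q
```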

The proposed method for inference of phase dynamics enables the evolution of the system under study to be tracked continuously. Unlike earlier methods that only detect the occurrence of transitions to/from synchronization, the method reveals details of the phase dynamics, thus describing the inherent nature of the transitions, and at the same time deducing the characteristics of the noise responsible for stimulating them. It can identify the time-varying nature of the functions that characterize interactions between open oscillatory systems. It was shown that not only the parameters, but also the functional relationships, can be time-varying, and the new technique can effectively follow their evolution.

3.5.3 Analogue Simulations

In the previous sections the method was applied to signals generated by synthetic numerical models. In the following, attention turns to signals emanating from real oscillatory systems. Here the noise embedded in the signals has a more realistic meaning, and is usually attributable to environmental disturbances or to imperfections in some properties of the systems. Additionally, during the process of data acquisition and discretization, some amount of measurement noise can be introduced, a noise that has no link with the actual dynamics of the interacting oscillators.

The following example analyzes data from an experimental analogue simulation of two coupled van der Pol oscillators. Details of the electronic implementation and further analysis are presented in Chap. 5. The noise here emanates from imperfections of the electronic components (set by their tolerances), from thermal heating due to internal dissipation, and partly from measurement noise.

The phase portrait of the first oscillator, whose frequency is time-varying, is shown in Fig. 3.12a. The first oscillator drives the second oscillator:

$$\begin{aligned} \frac{1}{c^2}\ddot{x}_{1}-\mu _1(1-x_1^2)\frac{1}{c}\dot{x}_{1}+[ \omega _1+\tilde{\omega }_1(t)]^2x_1&=0,\\ \frac{1}{c^2}\ddot{x}_{2}-\mu _2(1-x_2^2)\frac{1}{c}\dot{x}_{2} +\omega _2^2x_2+\varepsilon (x_1-x_2)&=0,\end{aligned}$$
(3.27)

where the periodic time-variability \(\tilde{\omega }_1(t)=\tilde{A}_1 \sin ({\tilde{\omega }}t)\) (Fig. 3.12b) comes from an external signal generator. The parameters were \(\varepsilon =0.7\), \(\omega _1=2\pi 15.9\), \(\omega _2=2\pi 17.5\), \(\tilde{A}_1=0.03\), \(\tilde{\omega }=2\pi 0.2\), and \(c\) is a constant resulting from the analogue integration. The phases were estimated as \(\phi _i=\arctan (\dot{x}_i/x_i)\).

Fig. 3.12
figure 12

Analysis of signals from analogue simulation of system (5.1). a Phase portrait from the oscilloscope; b Frequency \(\tilde{\omega }_1(t)\) from the external signal generator; c Detected frequency \(\omega _2(t)\) of the second driven oscillator; d Fast Fourier Transform (FFT) of the detected frequency \(\omega _2(t)\). From [18], Copyright (2012) by the American Physical Society

The oscillators were synchronized for the given parameters and dynamical properties. Due to the synchronization, the frequency of the second, driven oscillator changed from constant to time-varying (as discussed in Chap. 2). Applying the inferential technique and examining the detected synchronization showed that the oscillators were synchronized (\(s(c)=1\)) throughout the whole time period. The frequency of the second, driven oscillator was inferred as time-varying, as shown in Fig. 3.12c. A simple FFT (Fig. 3.12d) showed that \(\omega _2(t)\) is periodic with frequency \(0.2\) Hz (exactly as set on the signal generator). The technique therefore revealed information about the nature and dynamics of the time-variability of the parameters.
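The FFT step is straightforward; a minimal sketch, assuming the windowed frequency estimates \(\omega _2(t)\) are available as an array sampled at a known rate fs:

```python
import numpy as np

# Locate the dominant spectral peak of the inferred time-varying frequency
# omega_2(t); for the analogue experiment above this should sit near 0.2 Hz.
def dominant_frequency(w2_t, fs):
    w2_t = w2_t - np.mean(w2_t)            # remove the DC component
    spectrum = np.abs(np.fft.rfft(w2_t))
    freqs = np.fft.rfftfreq(len(w2_t), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]
```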

3.5.4 Cardiorespiratory Interactions

Another interesting example, given its real-life nature, is the cardiorespiratory interaction. The analysis of physiological signals to detect and quantify cardiorespiratory interactions has already been found useful in relation to several diseases and physiological states (see [23] and references therein). Additionally, transitions in cardiorespiratory synchronization have been studied in relation to anaesthesia [40], sleep cycles [41] and exercise [42].

It is well known that modulations and time-varying sources are present and can affect the synchronization between biological oscillators [23, 43, 44]. For a comprehensive and genuine analysis there is a need for a technique that can not only identify the time-varying information, but also allow the evaluation of interaction measures (such as synchronization and directionality) to be based solely on that inferred information.

To demonstrate the method on real biological data, cardiorespiratory measurements from a human subject under anaesthesia were analyzed. During the experiment, the breathing was paced at a constant rate by a respirator, which acted as an external source of energy. For such systems the analytic model is not known (in contrast to the analogue and numerical examples), but the oscillatory nature of the signals is easily observed. The instantaneous cardiac phase was estimated by wavelet synchrosqueezed decomposition [45] of the ECG signal. Details of instantaneous phase detection, and the respective problems and advantages, are discussed in Chap. 4. Similarly, the respiratory phase was extracted from the respiration signal. The final phase time-series were obtained after a protophase-to-phase transformation [6].
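As a sketch of the latter step, the transformation of Kralemann et al. [6] maps an observable-dependent protophase \(\theta \) onto a uniformly rotating phase \(\phi \) via \(\phi (\theta )=\theta +\sum _{n\ne 0}\frac{S_n}{in}(e^{in\theta }-1)\), where \(S_n\) are Fourier coefficients of the protophase distribution; a minimal implementation, assuming \(\theta \) is available as an unwrapped array:

```python
import numpy as np

# Protophase-to-phase transformation (Kralemann et al. [6]). The Fourier
# coefficients S_n of the protophase density are estimated as time averages
# S_n = <exp(-i n theta)>; combining the +n and -n terms of the correction
# series gives the purely real form used below.
def proto_to_phase(theta, n_max=20):
    phi = np.array(theta, dtype=float)
    for n in range(1, n_max + 1):
        S_n = np.mean(np.exp(-1j * n * theta))
        phi += 2.0 * np.imag(S_n * (np.exp(1j * n * theta) - 1.0)) / n
    return phi
```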

Fig. 3.13
figure 13

Synchronization and time-varying parameters in the cardiorespiratory interaction. a Standard 2:\(N\) synchrogram. b Synchronization index for ratios 2:8 and 2:9, as indicated. c Time-evolution of the cardiac \(f_h(t)\) and respiratory \(f_r(t)\) frequencies. Note the detected constant pacing of the breathing frequency. From [18], Copyright (2012) by the American Physical Society

By applying the inferential technique, one reconstructs the phase parameters that govern the interacting dynamics. Figure 3.13c shows the time-evolution of the cardiac and respiratory frequencies. It is easy to see that the (approximately) constant pacing of the breathing is well inferred, and that the cardiac frequency, i.e. the heart rate variability, increases with time. The set of inferred parameters, and how they are correlated, can be used to determine whether cardiorespiratory synchronization exists and, if so, in what ratio. The synchronization evaluation \(I_{sync}=s(c)\in \{0,1\}\), shown in Fig. 3.13b, reveals several transitions between synchronized and non-synchronized states, as well as transitions between different ratios: from 2:8 (i.e. 1:4) at the beginning to 2:9 synchronization in the later intervals. Because the evaluation of the synchronization state is based on all of the inferred details of the phase dynamics, the proposed method not only detects the occurrence of transitions, but also describes their inherent nature. The synchronization detection (\(I_{sync}\)) was in good agreement with the corresponding synchrogram shown in Fig. 3.13a.
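For comparison with the inference-based evaluation \(s(c)\), a standard statistical \(n{:}m\) synchronization index can be computed directly from the two phase time-series; a windowed sketch, in which the window length and the labelling of the two phases are assumptions of the illustration:

```python
import numpy as np

# Windowed n:m phase synchronization index. For an n:m locking ratio
# (e.g. 2 respiratory to 9 cardiac cycles) the generalized phase difference
# m*phi_r - n*phi_h stays bounded, and |<exp(i*psi)>| approaches 1.
def sync_index(phi_r, phi_h, n, m, win):
    index = []
    for start in range(0, len(phi_r) - win + 1, win):
        psi = m * phi_r[start:start + win] - n * phi_h[start:start + win]
        index.append(np.abs(np.mean(np.exp(1j * psi))))
    return np.array(index)
```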

Fig. 3.14
figure 14

Coupling functions in the cardiorespiratory interaction, calculated at different times. a–c Coupling function \(q_1(\phi _1,\phi _2)\) of the first oscillator; d–f \(q_2(\phi _1,\phi _2)\) of the second oscillator. The windows were centered at \(t=725\) s for (a, d), \(t=1200\) s for (b, e), and \(t=1250\) s for (c, f). The window length was \(t_w=50\) s. From [18], Copyright (2012) by the American Physical Society

The cardiorespiratory coupling functions, evaluated for three different time windows, are presented in Fig. 3.14. Figure 3.14a–c shows the coupling function \(q_1(\phi _1,\phi _2)\) of the first oscillator, and Fig. 3.14d–f shows \(q_2(\phi _1,\phi _2)\) of the second oscillator. Note that the interactions are now described by complex functions whose form changes qualitatively over time; compare, for example, Fig. 3.14a with b, c, or d with e, f. This implies that, in contrast to many systems with time-invariant coupling functions, the functional relationships of an open (biological) system can themselves be time-varying processes. By analyzing consecutive time windows, we can even follow the time-evolution of the coupling functions; compare the similar, gradually evolving forms in Fig. 3.14b, c, or e, f.

Thus, the proposed method identified the time-varying nature of the functions that characterize interactions between open oscillatory systems. The cardiorespiratory analysis demonstrated that not only the parameters, but also the functional relationships, can be time-varying, and the new technique can effectively follow their evolution. This discovery immediately invites many new questions and points out that in future studies and modeling of such open systems, the time-varying coupling functions should be taken into account.

3.6 Generalization to Networks of Oscillators

Networks of many coupled dynamical systems can describe a large number of processes and systems in nature; examples include chemical reactions, ecological systems, the electrical power grid, populations of synchronized crickets, the Internet, and many others [46]. Especially important and relevant to this study are networks of complex oscillatory systems. Such networks often require reconstruction of the coupling links, i.e. the structure of the network, or the detection and study of qualitative phenomena such as synchronization [47].

For the sake of simplicity and clarity, all of the demonstrations in the previous sections were performed on systems of two interacting oscillatory processes. In fact, the proposed inference procedure can be applied with only minimal modification to any number \(N\) of interacting oscillators within the general coupled-network structure [18].

The general notation of Eq. (3.1) is readily generalized to \(N\) oscillators, and the inference procedure is then applied to the corresponding \(N\)-dimensional phase observable. For example, if one wants to include all \(k\)-tuple interactions with \(k\le 4\), then Eq. (3.1) is generalized to

$$\begin{aligned} \dot{\phi }_i=&\omega _i + f_i(\phi _i) + \sum _j g^{(2)}_{ij}(\phi _i,\phi _j) + \sum _{jk} g^{(3)}_{ijk}(\phi _i,\phi _j,\phi _k)\\ +&\sum _{jkl} g^{(4)}_{ijkl}(\phi _i,\phi _j,\phi _{k},\phi _{l}) + \xi _i. \end{aligned}$$
(3.28)

Every function \(g^{(k)}\) is periodic on the \(k\)-dimensional torus, and can be decomposed into a \(k\)-dimensional Fourier series of trigonometric functions. Although this decomposition is theoretically possible, it becomes less and less feasible in practice as the number of oscillators and the number of \(k\)-tuples increase. The computational power required grows very quickly with \(N\), which makes the method unsuitable for the inference of large-scale networks. As a general approach, one can limit the base functions to the most significant Fourier terms per function \(g^{(k)}\). Automatic selection of the most important Fourier terms is hard to achieve on a network of more than just a few oscillators; instead, known information about the system can be used to reduce the number of base functions so that only those terms relevant to the \(N\)-oscillator dynamics are included. The other sub-procedures, such as the time-varying propagation and the noise inference, apply exactly as before.
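To make the combinatorial growth concrete, the short sketch below counts only the coupling terms \(g^{(k)}\), \(k\le 4\), of Eq. (3.28), before any Fourier expansion (which multiplies each count further by the number of retained harmonics):

```python
from itertools import combinations

# Count the k-tuple coupling terms g^(k), k <= 4, in Eq. (3.28) for a
# network of N oscillators: for each target i, one term per (k-1)-subset
# of the remaining oscillators.
def n_coupling_terms(N, k_max=4):
    total = 0
    for i in range(N):
        others = [j for j in range(N) if j != i]
        for k in range(2, k_max + 1):
            total += len(list(combinations(others, k - 1)))
    return total

for N in (2, 4, 8, 16):
    print(N, n_coupling_terms(N))   # 2, 28, 504, 9200
```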

The strength of the method is that it allows one to follow the time-variability of the structural and functional connectivity within the network. This is especially important when inferring the interactions of biological oscillators, for which the dynamics is known to be time-varying [48–50]. To illustrate this, we infer the following network of four phase oscillators subject to white Gaussian noise:

$$\begin{aligned} \dot{\phi }_1&= \omega _1+a \sin (\phi _1)+\varepsilon _{13}(t) \sin (\phi _3)+\varepsilon _{14}(t) \sin (\phi _4)+\xi _1(t)\\ \dot{\phi }_2&= \omega _2+a \sin (\phi _2)+\varepsilon _{21}(t) \sin (\phi _2-\phi _1)+\xi _2(t)\\ \dot{\phi }_3&= \omega _3+a \sin (\phi _3)+\varepsilon _{324}(t) \sin (\phi _2-\phi _4)+\xi _3(t)\\ \dot{\phi }_4&= \omega _4+a \sin (\phi _4)+\varepsilon _{42}(t) \sin (\phi _2)+\xi _4(t).\end{aligned}$$
(3.29)

Note that, because the coupling strengths are functions of time, varying their values effectively changes the structural connectivity of the network. The parameter values for the simulations were: \(\omega _1=2\pi \,1.11\), \(\omega _2=2\pi \,2.13\), \(\omega _3=2\pi \,2.97\), \(\omega _4=2\pi \,0.8\), \(a=0.2\), and noise strengths \(E_i=0.1\). The couplings were varied discretely over three time-segments, as follows. (i) For 0–500 s: \(\varepsilon _{13}=0.4\), \(\varepsilon _{14}=0.0\), \(\varepsilon _{324}=0.4\) and \(\varepsilon _{42}=0.4\). (ii) For 500–1000 s: \(\varepsilon _{13}=0\), \(\varepsilon _{14}=0.35\), \(\varepsilon _{324}=0\) and \(\varepsilon _{42}=0.4\). (iii) For 1000–1500 s: \(\varepsilon _{13}=0.45\), \(\varepsilon _{14}=0.35\), \(\varepsilon _{324}=0\) and \(\varepsilon _{42}=0\). The coupling \(\varepsilon _{21}\) was varied continuously from \(0.5\) to \(0.3\). Note also that in Eq. (3.29) the coupling functions are qualitatively different, i.e. the arguments of the sine functions are not the same for each oscillator. For example, the coupling functions for \(\varepsilon _{13}\), \(\varepsilon _{14}\) and \(\varepsilon _{42}\) have a single phase as argument, while those for \(\varepsilon _{21}\) and \(\varepsilon _{324}\) have a phase difference as argument. The last two differ further because the coupling function with \(\varepsilon _{21}\), for the second oscillator, contains its own phase \(\phi _2\) in the phase difference.

Fig. 3.15
figure 15

Inference of time-varying coupling structure for the network (3.29). The color/grayscale code for the couplings is presented in the box at the top, where \(\varepsilon _{21}\) is represented by a dotted line, \(\varepsilon _{13}\) by a dashed line, \(\varepsilon _{14}\) by a dash-dotted line, \(\varepsilon _{324}\) by a bold full line and \(\varepsilon _{42}\) by a light full line. The four couplings \(\varepsilon _{13}\), \(\varepsilon _{14}\), \(\varepsilon _{324}\) and \(\varepsilon _{42}\) were held constant at different values within three time segments each of length 500 s. However, \(\varepsilon _{21}\) was varied continuously through the whole time interval. For each segment the structure of the network is presented schematically on the diagrams in the dashed grey boxes. The parameters are given in the text. From [18], Copyright (2012) by the American Physical Society

The results presented in Fig. 3.15 demonstrate that the method follows the time-variability of the couplings effectively and precisely. The dynamical variations take the network structure through various connectivity states, and the different topologies are detected reliably throughout their time-evolution.

3.7 State Space Inference

In the previous sections of this chapter, an inferential technique for the reconstruction of phase dynamics was presented. Starting from phase time-series and using phase base functions, the method inferred and described the interactions between the oscillators. This section, in contrast, presents the case of inference in state space, where the starting points are the state time-series and the base functions are likewise in the state domain. The objective is to describe the interacting oscillatory dynamics by inference of the state variables.

3.7.1 Main Concept

Given the state time-series \(x_i(t)\), the estimation of the instantaneous phases \(\phi _i(t)\) is often not a trivial task. Many procedures for phase extraction are problematic when the state signals come from complex mixed-mode dynamics, or they leave part of the measurement information unused (or interpolated). When inferring from the state signals directly, the technique exploits all of the measurement information. Moreover, if one can work effectively with the state variables, then there is no need for phase extraction, and one step (sub-procedure) of the inferential framework can be avoided.

The construction of the Bayesian technique now involves a set of base functions that describe the state dynamics, \(\Phi =\{\mathbf{x }_n^i\}\). For example, the base functions can be a finite number of polynomial functions. In general, the choice of functions is not unique, and is usually model-dependent. The biggest disadvantage comes from not knowing the right number of dimensions, because often the only available input is a one-dimensional readout signal. One can choose, for example, a large set of many combinations of base functions [51, 52], but this will incorporate contributions from base functions that are not present in the actual dynamics, and the computational expense and the parameter space will be unnecessarily enlarged.

On the other hand, if the model is known a priori, then fewer base functions are needed, the processing is faster and more efficient, and the separation of the noise is more effective. This approach makes sense because many processes in nature can be described by models; examples include models in biology, chemistry and climate science. Additionally, many situations exist in which the model is known and the objective is to determine the dynamical state at any point in time, for example in interacting technical systems and communications [53], or in chemical Belousov–Zhabotinsky oscillators [5].

The previously proposed Bayesian technique is one of the first to infer phase oscillatory dynamics, while most of the known Bayesian techniques actually infer in state space [9–11]. Especially relevant is the work by Smelyanskiy et al. [54], where the authors used Bayesian inference to reconstruct the cardiorespiratory interactions in state space. However, their analysis was performed on a single stationary block of data, time-variability was not taken into account, and synchronization was not studied.

The main idea of the following discussion is this: starting from the state time-series as inputs, and given the model's state base functions, to infer the multivariate state dynamics of the interacting oscillators using the same Bayesian framework as discussed in Sect. 3.1.3. The use of the particular information propagation of Sect. 3.1.4 again allows time-varying dynamics to be followed. Defined in this way, and assuming that the model is known, the technique gives explicit inference information about the coupling strengths and coupling functions. Synchronization in the state domain, also known as generalized synchronization, has not, however, been studied in this manner, and in the following section special attention is given to this issue.

3.7.2 Detection of Synchronization

When two oscillators synchronize, their behaviour can easily be explained in terms of phase relationships: synchronization occurs if there exists a bounded phase shift, i.e. if the equilibrium solutions of the phase difference are stable [1]. But how is synchronization reflected in the state dynamics of the oscillators? Essentially, when synchronization is reached, the state trajectories become mutually dependent as a result of the interactions. Thus, by investigating the stability of the individual oscillators with respect to the interactions, one can effectively determine the synchronization entrainment.

At the beginning of the chaos-synchronization era, the concept of identical synchronization was one of the first established forms of state-space synchronization. It defines two oscillators as synchronized if their states become identical, i.e. if the Lissajous curve is a diagonal line [55]. Not long afterwards, a more general description, called generalized synchronization, was given for cases of state synchronization where the trajectories do not necessarily become identical [56]. A more specific definition of generalized synchronization, in terms of asymptotic stability, was also proposed [57].

Directional coupling has been studied in depth, and can be viewed as a generalization of the periodic or quasiperiodic driving that has long been used in physics, mathematics and engineering. Unidirectionally coupled systems can be represented with a skew-product structure:

$$\begin{aligned} {\dot{\mathbf{x}}}&= \mathbf{f(x) } \nonumber \\ {\dot{\mathbf{y}}}&= \mathbf{g(y,u) }= \mathbf{g(y,h(x)) }, \end{aligned}$$
(3.30)

where \(\mathbf{ x\in R^n }\), \(\mathbf{y\in R^m }\), a subset \(\mathbf{B=B_x\times B_y \subset R^n \times R^m }\) is given, and the state coupling functions are \(\mathbf{u(t)=(u_1(t),\ldots ,u_k(t) ) }\) with \(\mathbf{ u(t)=h(x(t)) }\). The first and second systems in (3.30) are referred to as the drive and the driven oscillator, respectively. The question of the conditions under which generalized synchronization occurs for the unidirectionally coupled system (3.30) is addressed in the following theorem (see [57] for the proof):

Theorem: Generalized synchronization occurs in system (3.30) if, for all \((\mathbf{x }_0, \mathbf{y }_0) \in \mathbf{B }\), the driven system \({\dot{\mathbf{y}}}= \mathbf{g(y,u) }=\mathbf{g(y,h(x)) }\) is asymptotically stable [i.e. \(\forall \mathbf{y }_{10}, \mathbf{y }_{20} \in \mathbf{B_y }:\) \({\lim _{t\rightarrow \infty } ||\mathbf{y(t, x_0, y_{10})- y(t, x_0, y_{20} })||}=0\)].

The physical meaning of the theorem is that, due to the interaction, the driven oscillator changes its stability, for example from marginally stable to asymptotically stable, because of its entrainment to the drive oscillator. In fact, the vector field \({\dot{\mathbf{y}}} =\mathbf{g(y,h(x)) }\) is non-autonomous with respect to \(\mathbf{x }(t)\), to which it is entrained.

One of the basic techniques for establishing asymptotic stability is numerical evaluation of the conditional Lyapunov exponents of the driven oscillator. In this case, generalized synchronization occurs if all of the conditional Lyapunov exponents of the driven oscillator are negative.
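A minimal numerical sketch of this criterion uses the standard two-copy (Benettin-style) procedure: two nearby copies of the driven system are evolved under the same drive, and the average logarithmic growth rate of their separation estimates the largest conditional exponent. The integrator, step size and renormalization distance below are illustrative assumptions.

```python
import numpy as np

# Largest conditional Lyapunov exponent of a driven system y' = g(y, u(t)):
# evolve two nearby copies under the SAME drive sequence, accumulate the
# log of their separation growth, and renormalize at each step.
def conditional_lyapunov(g, y0, drive, dt, d0=1e-8):
    rng = np.random.default_rng(2)
    y_a = np.asarray(y0, dtype=float)
    pert = rng.standard_normal(y_a.size)
    y_b = y_a + d0 * pert / np.linalg.norm(pert)
    log_sum = 0.0
    for u in drive:                         # drive[n] = h(x(t_n)), precomputed
        y_a = y_a + g(y_a, u) * dt          # Euler step of both copies
        y_b = y_b + g(y_b, u) * dt
        d = np.linalg.norm(y_b - y_a)
        log_sum += np.log(d / d0)
        y_b = y_a + (d0 / d) * (y_b - y_a)  # renormalize the separation
    return log_sum / (len(drive) * dt)      # lambda < 0 => asymptotic stability
```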

Several techniques have been proposed for the detection of generalized synchronization from time-series. The most popular are based on mutual false nearest neighbors [56], mutual information [58, 59] or the generalized angle [60]. These methods, however, rely on statistics and information flows; they take into account neither the intrinsic dynamics of the systems nor the noise embedded in the interacting dynamics.

In the following, the discussion focuses on a generalized-synchronization detection technique that uses the Bayesian framework to infer the interacting state dynamics and the noise, and that determines the existence of synchronization from whether the driven oscillator is asymptotically stable, i.e. whether its largest conditional Lyapunov exponent is negative.

3.7.2.1 Application Example

To demonstrate the main concept about the detection of generalized synchronization, a model of two coupled van der Pol oscillators subject to weak noise is considered:

$$\begin{aligned} \ddot{x}&-\mu _1(1-x^2)\dot{x} +\omega _1^2x+\varepsilon _1(t)y+\xi _1(t)=0\nonumber \\ \ddot{y}&-\mu _2(1-y^2)\dot{y} +\omega _2^2y+\varepsilon _2(t)x+\xi _2(t)=0, \end{aligned}$$
(3.31)

where the noise is assumed to be white Gaussian: \(\langle \xi _i(t) \xi _j(\tau )\rangle = \delta (t-\tau ) E_{\textit{ij}}\).

In order to apply the inferential technique, one needs first to prescribe appropriate base functions. Each oscillator can be described in two dimensions by a simple variable change: \(x_1=x\), \(x_2=\dot{x}\) and \(y_1=y\), \(y_2=\dot{y}\). Assuming the models are known beforehand, the following base functions were chosen for reconstruction of system (3.31):

$$\begin{aligned} \Phi = \left\{ \begin{array}{ll} x_2\\ x_1, \, x_2, \, x_1^2x_2, \, y_1\\ y_2 \\ y_1, \, y_2, \, y_1^2y_2, \, x_1 \end{array} \right\} , \end{aligned}$$
(3.32)

where each row corresponds to the respective dimension of system (3.31).
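In implementation terms, each row of (3.32) supplies the regressors for one state equation; a sketch of the corresponding design matrices, assuming the state time-series are available as NumPy arrays:

```python
import numpy as np

# Design matrices built from the base functions (3.32): one matrix per
# state equation of (3.31), each column being one base function evaluated
# along the measured trajectory.
def design_matrices(x1, x2, y1, y2):
    return [
        np.column_stack([x2]),                      # regressors for x1-dot
        np.column_stack([x1, x2, x1**2 * x2, y1]),  # regressors for x2-dot
        np.column_stack([y2]),                      # regressors for y1-dot
        np.column_stack([y1, y2, y1**2 * y2, x1]),  # regressors for y2-dot
    ]
```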

The coupled system (3.31) was simulated numerically for a specific case: the coupling was taken to be unidirectional (\(1\rightarrow 2\)), i.e. \(\varepsilon _1(t)=0\), and the rest of the parameters were set to \(\omega _1=1.1\), \(\omega _2=0.9\), \(\mu _1=1\), \(\mu _2=0.7\) and noise strength \(E_1=E_2=0.2\). To demonstrate the properties and precision of the inference in state space, the coupling was first set to a constant value \(\varepsilon _2(t)=0.15\) (for which the oscillators were not synchronized). The Bayesian inferential technique of Sect. 3.1.3, exploiting the state base functions (3.32), was applied to the time-series of the two noisy oscillators. The inferred parameters, acting as coefficients of the corresponding base functions, are summarized together with the intrinsic parameters in Table 3.1. Comparing the last two columns, one observes the validity and precision with which the intrinsic parameters were inferred. The full and the inferred dynamics can be visualized and compared in Fig. 3.16a, b. Figure 3.16a shows the phase portrait of the first oscillator from the numerical simulation of (3.31) affected by noise, while Fig. 3.16b shows the phase portrait of the same system simulated with the inferred parameters, without the effect of noise.

Table 3.1 Results from the inference of numerically simulated system (3.31)

But how can one use the inferred parameters to determine whether the two oscillators are synchronized? When not synchronized, the second, driven oscillator \(y(t)\) has limit-cycle dynamics with marginal stability, i.e. its largest Lyapunov exponent is zero. According to the theorem for generalized synchronization, when synchronization occurs the driven oscillator becomes asymptotically stable, with a negative largest Lyapunov exponent. Thus, by following the Lyapunov exponents of the inferred driven oscillator, one can detect whether synchronization exists. Moreover, using the discussed information propagation within the Bayesian framework, one can follow the generalized synchronization in time.

To demonstrate the latter, system (3.31) was simulated for the unidirectionally interacting case where the coupling was a non-autonomous function varying discretely between two predefined values, \(\varepsilon _2(t)=\varepsilon =\{0,0.4 \}\), for which the two oscillators were intermittently synchronized. The application of the technique and the detection of generalized synchronization are presented in Fig. 3.16c. It can be seen that, when the oscillators are not synchronized, the largest Lyapunov exponent [61] \(\lambda \) is zero, and when synchronization occurs (for \(\varepsilon =0.4\)) the driven oscillator becomes asymptotically stable and \(\lambda \) becomes negative. Thus the largest Lyapunov exponent \(\lambda \) can act as a synchronization index for the detection of generalized synchronization in time.
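As an illustration of this windowed detection, a hypothetical use of the conditional_lyapunov() sketch from above for the driven van der Pol oscillator of (3.31), with parameter values standing in for the per-window inferred ones:

```python
import numpy as np

# Driven van der Pol oscillator of (3.31) in first-order form, written with
# illustrative per-window parameters (mu2, w2, eps2 stand in for the values
# inferred in a given window); u is the drive sample x(t) from oscillator 1.
def driven_vdp(y, u, mu2=0.7, w2=0.9, eps2=0.4):
    y1, y2 = y
    return np.array([y2, mu2 * (1.0 - y1**2) * y2 - w2**2 * y1 - eps2 * u])

# Hypothetical per-window usage, given a precomputed drive series x1_series:
#   lam = conditional_lyapunov(driven_vdp, [0.1, 0.0], x1_series, dt=0.01)
#   lam ~ 0  -> not synchronized;  lam < 0 -> generalized synchronization
```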

Fig. 3.16
figure 16

Inferred state dynamics and detection of intermittent generalized synchronization. a Phase portrait of the numerically simulated noisy van der Pol oscillator \(x(t)\). b Phase portrait of the van der Pol oscillator numerically simulated with the inferred parameters. c Largest Lyapunov exponent \(\lambda \), indicating non-synchronized intervals by zero values and synchronized intervals by negative values. The coupling amplitude \(\varepsilon \) varied discretely between intermittent intervals, as indicated at the top of the figure

Many of the concepts discussed above for the detection of phase synchronization remain valid and can be applied to the detection of state synchronization. The identification of synchronization from the inferred dynamics through the Lyapunov exponent \(\lambda \) can be seen as the state-space equivalent of the map reconstruction for the torus phase dynamics. Using the information-propagation procedure, generalized synchronization and the respective transitions can be traced in time as well. Because the noise is decomposed separately, the proposed method is able to detect noise-induced phase slips, i.e. noise-induced transitions to generalized non-synchronized states, if they exist. In view of this, the inferential technique is anticipated to be a useful tool for describing the time-varying nature and transitions of state synchronization in the presence of noise.