1 Introduction

The many sources of uncertainty in engineering are generally classified as aleatory or epistemic. Design under uncertainty needs to account for both the former, such as variability in material properties, and the latter, which include errors due to imperfect analysis tools. In real-life applications, both uncertainties are present simultaneously, and their combined effect should be considered for a safe design. In mathematics, uncertainty is often characterized in terms of a probability distribution, with epistemic uncertainty meaning not being sure of the assumed distribution, and aleatory uncertainty meaning not being sure of a random sample drawn from it [1]. In physical problems, uncertainties can be of parametric type, where parameter values are known only to some degree; of structural type, reflecting a lack of knowledge of the underlying mechanics; of algorithmic type, coming from numerical errors and approximations in the computer model; or of experimental type, arising from measurement variability and/or interpolation errors due to a lack of available data.

In the context of structural engineering, the need to include parameter uncertainties and noise in dynamic analyses has long been recognized [2, 3]. Here, uncertainties come from material and geometric parameters, boundary conditions, manufacturing tolerances, and external loads. In addition, deterioration or evolution of the structure during its lifetime leads to increasing uncertainties, which can affect vibration behavior. Parameters such as natural frequencies or damping are subject to uncertainty stemming from a lack of knowledge of parameter values and a lack of understanding of the system's actual behavior. The problem can be stated in a probabilistic framework to account for the uncertainty in system parameters, which leads to differential equations with coefficients modeled as random variables.

Various techniques have been developed for the analysis of uncertainties in structural problems. For an overview of classical methodologies, such as Monte Carlo sampling, perturbation analysis, moment equations, operator-based methods for the governing equations, Generalized Polynomial Chaos (GPC), stochastic Galerkin, and collocation, refer to Xiu [4]. More recent developments were devoted to mitigating the loss of accuracy in long-time integration that arises with the usual expansions of the random space, and included the time-dependent GPC methodology [5], stochastic time-warping polynomial chaos, nonlinear autoregressive polynomial chaos [6], and GPC with flow map composition [7]. Time-dependent uncertainty is also important in structural dynamics, for representing noisy loads and parametric excitations. Various sampling-based methods, where the governing systems are reformulated as stochastic differential equations, have been developed. Arnold [8] presents the mathematical foundation of the theory of random dynamical systems, stochastic bifurcations, and their multiplicative ergodic theory. Han and Kloeden [9] discuss the numerical simulation and analysis of random ordinary differential equations. These works point out that noisy excitation represents a major difficulty in uncertainty analysis, requiring the analyst to ponder the meaning of the results, whether numerical or analytical.

When considering nondeterministic effects, a physical problem may present many possible outcomes distributed in a probability space. Such distributions may evolve in time for dynamical systems, a phenomenon that is a dynamical system in itself, governed by a linear, positive, and density-conserving transfer operator [10] of Markov type. Ulam [11] hypothesized that such transfer operators could be discretized and distributions approximated by histograms, formulating what is known as the Ulam method. Later, Hsu [12, 13] adopted an algorithmic perspective, developing the generalized cell-mapping, later proven to be equivalent to the Ulam method [14].

Several advances followed. Hsu and Chiu combined generalized cell-mapping and the previously developed simple cell-mapping into the so-called hybrid cell-mapping [15, 16]. In these works, there is already a separation between stochastic and parametric uncertainties, with specific methodologies to deal with them focused on global dynamics. However, a proper probabilistic framework is missing. Sun and Hsu [17] developed a short-time Gaussian approximation for nonlinear random vibration analysis. Han and coworkers explored this strategy extensively, considering nonautonomous cases [18] under colored noise [19], stochastic bifurcations in a turbulent swirling flow [20], and a combination with digraph algorithms [21]. Simple and generalized cell-mappings were recently reformulated by Yue et al. [22] into the so-called compatible cell-mapping, which employs adaptive refinement of the phase-space to increase the resolution of global attractors of random dynamical systems. In [23], this method was shown to refine stable and unstable manifolds, similar to the subdivision and selection method by Dellnitz and coworkers [24,25,26] but with digraph algorithms instead. Another cell-mapping method is designed with two distinct scales of cell spaces [27,28,29]. Similarities between the transfer probability distributions by Yue et al. [29] and the generalized committor functions by Lindner and Hellmann [30] are evident. However, the latter is adequate for transient analysis, describing how distributions evolve with time. Finally, the phase-space dimension of engineering problems demands High-Performance Computing (HPC), as described in [31,32,33]. Parallel computing strategies are fundamental, employing even general-purpose graphics cards (GPUs) to this end [34].

The Ulam method was the focus of various works. Klus et al. [35] compared different numerical approximations of the Perron-Frobenius operator and its dual, the Koopman operator. Dellnitz and coworkers [24,25,26] developed a subdivision strategy with box-covering to approximate complicated numerical behavior, implemented in the software package GAIO [36]. Further developments include the detection of transport barriers [37], the analysis of dynamical systems with parameter uncertainty [38], invariant sets of infinite-dimensional dynamical systems [39, 40], and a set-oriented path-following method for the computation of parameter-dependent attractors [41]. Koltai and coworkers developed methods for global analysis without trajectory integration, focused on basins of attraction [42,43,44] and nonautonomous systems [45]. A comparison of data-driven model reductions for dynamical systems based on the approximation of the transfer operators is given in [46]. Froyland et al. [47] applied the Ulam method to the analysis of surface ocean dynamics, obtaining attractors and basins from real data. Ding and coworkers investigated the original Ulam method and approximations of the Perron-Frobenius operator by piecewise linear and quadratic functions [48] and higher-order approximations [49]. Junge et al. [50] investigated the spectrum of transfer operators of stochastically perturbed conservative maps. Most recently, Jin and Ding [51] and Bangura et al. [52] applied spline and least-squares approximation for random maps as well, specifically considering the Foias operator, which governs the average flow of random maps [53]. One crucial limitation of phase-space discretization-based methods is the resulting numerical diffusion of the flow [42, 44]. Indeed, depending on the dynamics, a high resolution is necessary, increasing the computational cost significantly.

In systems displaying coexisting solutions with distinct basins of attraction, uncertainties and noise may cause jumps between competing attractors, as well as global bifurcations such as basin merging and basin instability. In global dynamic terms, the interaction between previously separated basins is the focus of stochastic resonance. Depending on the basins' topology, predicting the system's outcome can be difficult even in the deterministic context, especially when highly intertwined basins or fractal boundaries are present. Uncertainties and noise are expected to induce further global changes, with the emergence of new dynamic phenomena, which may directly influence the concept of dynamic integrity [54,55,56]. For a safe analysis, it must be ensured that initial conditions lie indeed in the basins of attraction of the corresponding attractors, even in the presence of uncertainties and noise.

This work aims to present an adaptive set-oriented phase-space discretization method for the global analysis of nonlinear dynamical systems with competing attractors. Global phase-space operators are presented by considering (i) deterministic, (ii) stochastic, and (iii) parametric uncertainty dynamics, extending the results in [57]. Their discretization is conducted through the Ulam method [53] for the deterministic and stochastic cases. Mean results are obtained for parametric uncertainty dynamics through a discretization of their probability space. The adaptive discretization results in a sequence of operators with increasing refinement of important regions, here taken as the supports of the attractors' distributions and the boundaries of the basins' observables. This local discretization reduces the computational cost, making it advantageous in comparison to a full phase-space discretization.

This paper is organized as follows. Section 2 summarizes basic concepts of stochastic global dynamics with parameter uncertainty and noise, based on the definitions of (random) dynamical systems theory, presenting the general phase-space operators and their discretization. Section 3 describes the proposed boundary and attractor refinement strategies for both deterministic and stochastic systems, and outlines the procedure for obtaining mean results for dynamical systems with parameter uncertainty. Section 4 deals with the forced Helmholtz oscillator as an archetypal model for the analysis of escape from a potential well, with applications ranging from ship capsize [58] to structures liable to asymmetric buckling [59, 60]. Effects of noise and parametric uncertainty are discussed, with an evaluation of the computational advantages of adaptive discretizations, validation through Monte Carlo experiments, and assessment of global dynamics via a newly defined nondeterministic integrity measure. The final section provides concluding remarks.

2 Stochastic global dynamics of systems with parameter uncertainty and noise

In this section, some concepts of stochastic global dynamics with parameter uncertainty or noise are briefly summarized, based on the definitions of dynamical systems theory. Specifically, definitions of stochastic attractors and stochastic basins, operator formulation, phase-space discretization, and probability space discretization for the parametric uncertainty case are illustrated. Concepts and definitions already present in the literature are generalized by introducing a random dependence on the system parameters, which are commonly considered fixed and deterministic.

2.1 Dynamical systems: a few aspects

Following Mezić and Runolfsson [61], attention is restricted to discrete time cases. This choice is motivated by the fact that information on continuous systems under periodic excitation can be obtained through Poincaré maps for both deterministic and noise-driven systems, see Lasota and Mackey [53], Sect. 8.1. Stroboscopic maps can be used in the analysis of parameter uncertainty dynamics, as well, when there is a periodic excitation. Therefore, the following discrete dynamical system is considered,

$$ \begin{gathered} \varphi :\Omega \times {\mathbb{L}} \times {\mathbb{X}} \to {\mathbb{X}}, \hfill \\ \left( {\omega ,\lambda ,x} \right) \mapsto \varphi \left( {\theta \omega ,\lambda } \right)x, \hfill \\ \end{gathered} $$
(1)

where \(x \in {\mathbb{X}}\) is the system state, \(\omega \in {\Omega }\) is the noise, and \(\lambda \in {\mathbb{L}}\) is the uncertain parameter. The usual depiction of this system is of an iterated map, \(x_{t + 1} = \varphi \left( {\theta^{t} \omega ,\lambda } \right)x_{t}\), with the state evolving from instant \(t\) to instant \(t + 1\), and the stochastic parameter \(\omega\) governed by a noise-model \(\theta^{t}\), while \(\lambda\) is fixed in time. It is useful to define the system state after t iterations through the composition of maps. For initial condition x and t iterations, the system state is given by \(\varphi^{t} \left( {\omega ,\lambda } \right)x = \varphi \left( {\theta^{t - 1} \omega ,\lambda } \right) \circ \ldots \circ \varphi \left( {\omega ,\lambda } \right)x\). The sequence \(\left\{ {\varphi^{t} \left( {\omega ,\lambda } \right)x|t = 0,1,2, \ldots } \right\}\) defines an orbit of the dynamical system (1) over \({\mathbb{X}}\) for each sample \(\omega\) and \(\lambda\), and initial condition x.
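As a concrete illustration of this iterated-map notation, the sketch below advances a one-dimensional state under a fixed parameter \(\lambda\) and a fresh noise sample per step, composing the maps as in \(\varphi^{t}\). The specific map, noise model, and all names are illustrative assumptions, not taken from this work.

```python
import numpy as np

def iterate_orbit(phi, x0, lam, noise, t_max, rng):
    """Orbit {phi^t(omega, lam) x0 : t = 0..t_max} of the discrete system (1):
    lam is fixed in time, while theta^t omega supplies a fresh noise sample
    at every step."""
    orbit = [x0]
    x = x0
    for _ in range(t_max):
        w = noise(rng)       # sample of theta^t omega
        x = phi(w, lam, x)   # one application of the map
        orbit.append(x)
    return np.array(orbit)

# Hypothetical example: a logistic map with weak additive noise.
phi = lambda w, lam, x: lam * x * (1.0 - x) + w
noise = lambda rng: rng.normal(0.0, 1e-3)
rng = np.random.default_rng(0)
orbit = iterate_orbit(phi, 0.3, 3.6, noise, 500, rng)
```

Each choice of seed produces one noise sample path \(\omega\) and hence one realization of the random orbit.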

Some formalism is necessary to understand each particular case of dynamical system (1), i.e., deterministic, stochastic, and parametric uncertainty. It is assumed that all spaces are compact, metric, with corresponding Borel σ-algebras [30]. The phase-space, stochastic space, and parameter uncertainty space are completely defined as \(\left( {{\mathbb{X}},{\mathfrak{B}},P_{x} } \right)\), \(\left( {{\Omega },{\mathfrak{F}},P_{\omega } } \right)\), and \(\left( {{\mathbb{L}},{\mathfrak{S}},P_{\lambda } } \right)\), respectively, with σ-algebras \(\mathfrak{B},\mathfrak{F},\mathfrak{S}\), Lebesgue measure \({P}_{x}\), and probability measures \({ P}_{\omega }\,\mathrm{ and}\, {P}_{\lambda }\). For fixed \(\omega \), a dynamical system \(\varphi \left( {\omega , \cdot } \right):{\mathbb{L}} \times {\mathbb{X}} \to {\mathbb{X}}\) is defined, with product measure \({P}_{x}\times {P}_{\lambda }\). The parameter \(\lambda \) is fixed, although randomly chosen according to \({P}_{\lambda }\), and the system evolution is deterministic. In the case of fixed \(\lambda \) (with a given value, not randomly chosen), the flow map \(\varphi \left( { \cdot ,\lambda } \right):\Omega\times{\mathbb{X}} \to {\mathbb{X}}\) is defined as a random dynamical system, forming a cocycle over \({\theta }^{t}\) with product measure \({P}_{x}\times {P}_{\omega }\). The randomness evolves with the system, changing at each time-step t. This last case is much more involved, and the reader can find technical details in Arnold [8]. If both \(\omega \) and \(\lambda \) are fixed, the system becomes deterministic, with flow map \(\varphi \left( {\omega ,\lambda } \right):{\mathbb{X}} \to {\mathbb{X}}\) and Lebesgue measure \({P}_{x}\). Finally, the notion of phase-space volume given by the measure \({P}_{x}\) is crucial for the definition of Milnor attractors [62], minimal attractors [63], set attractors [64], ε-committor functions [30], or any set-attractive phase-space region.

2.2 Random attractors and basins

In a global dynamic analysis, the coexisting attractors and their basins are the main tools to understand the system behavior and safety. Without going into technical details, we can define attractors \(A\) as subsets of \({\mathbb{X}}\) that attract some or all initial conditions asymptotically and are resilient to infinitesimal perturbations (Lyapunov stable) [30]. Another important definition is given by Milnor [62], where the stability criterion is dropped in favor of measurability of the basin of attraction. In this case, attractors are sets whose basins are observable, with generalized volume greater than zero. Milnor attractors were extended to random dynamical systems \(\varphi \left(\cdot ,\lambda \right)\) pointwise by Ashwin [63]. That is, attractors \(A\left(\cdot ,\lambda \right)\) are functions of the noise sample \(\omega \), and therefore are random variables. Arnold [8] and Ochs [64] expanded the classic definition by imposing convergence in probability in the pullback and pushforward sense, respectively, but still pointwise with respect to the noise sample \(\omega \). For the parametric uncertainty case, no similar definitions are found in the literature. This could be motivated by the fact that an uncertain parameter system is a collection of deterministic dynamics for each \(\lambda \in {\mathbb{L}}\), with attractors' and basins' statistics being obtained through Monte Carlo or other techniques, see [65]. Still, it is important to emphasize the distinction between these two cases by explicitly writing attractors \(A\left(\omega ,\cdot \right)\) as functions of the random parameter \(\lambda \).

Lindner and Hellmann [30] also explored the implications of stochasticity for the definition of a basin of attraction. They noticed the relation between basins of attraction and expected mean sojourn time (expected time that a system spends in a certain state) and focused on how to quantify the transient stability of stochastic systems. The procedure starts from the phase-space region of an attractor’s distribution \({f}_{A}\left(x;\lambda \right)\), given by \({\mathrm{id}}_{A\left(\lambda \right)}=\mathrm{supp}\left\{{f}_{A}\left(x;\lambda \right)\right\}\), for a fixed \(\lambda \). The probability that the (Hausdorff semi-) distance between a trajectory \({\varphi }^{t}\left(\omega ,\lambda \right)x\) and \({\mathrm{id}}_{A\left(\lambda \right)}\) vanishes after \(1/\varepsilon - 1\) iterations is given by an ε-committor function,

$$ P_{\omega } \left[ {d\left( {\varphi^{1/\varepsilon - 1} \left( {\omega ,\lambda } \right)x,{\text{id}}_{A\left( \lambda \right)} } \right) = 0} \right] = g_{A} \left( {\varepsilon ,x;\lambda } \right) $$
(2)

In other words, this is the probability that \(x\) converges to \(A\left( \lambda \right)\) after \(1/\varepsilon - 1\) iterations. It is a viable definition of the basin of attraction, differing from Eq. (30) of Lindner and Hellmann [30] by the inclusion of the random parameter \(\lambda \). They also defined the quantity \(1/\varepsilon \) as the mean time-horizon, so transient states can be checked by varying \(\varepsilon \). This is important because attractors can become long transients under stochastic excitation, that is, \(\underset{\varepsilon \to 0}{\mathrm{lim}}\,{g}_{A}\left(\varepsilon ,x;\lambda \right)=0\), see [30, 57, 66]. Furthermore, the asymptotic case \({g}_{A}\left(0,x;\lambda \right)={g}_{A}\left(x;\lambda \right)\) corresponds to the classical, deterministic basin of attraction, with value 1 for \(x\) inside the basin and 0 otherwise. Finally, functions \({g}_{A}\left(\varepsilon ,x;\lambda \right)\) are observables in the \(L^{\infty } \left( {\mathbb{X}} \right)\) space, a fact that is explored in the transfer operator formulation later in the text. Throughout this work, this is the adopted definition of basin of attraction.
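A brute-force reading of the ε-committor is straightforward to sketch: fix \(x\) and \(\lambda\), draw noise realizations, iterate \(1/\varepsilon - 1\) steps, and count how often the final state lands in the attractor region. The map, region, and names below are hypothetical stand-ins, not the method of this work.

```python
import numpy as np

def committor_mc(phi, x, lam, in_A, eps, n_samples, noise, rng):
    """Monte Carlo estimate of g_A(eps, x; lam), Eq. (2): the probability over
    the noise that the orbit of x lies in supp{f_A} after 1/eps - 1 steps."""
    t_final = int(round(1.0 / eps)) - 1
    hits = 0
    for _ in range(n_samples):
        y = x
        for _ in range(t_final):
            y = phi(noise(rng), lam, y)
        hits += in_A(y)          # distance to id_A(lam) vanished at t_final
    return hits / n_samples

# Hypothetical contracting map: almost every orbit reaches A = [-0.1, 0.1].
phi = lambda w, lam, x: lam * x + w
noise = lambda rng: rng.normal(0.0, 0.01)
rng = np.random.default_rng(1)
g = committor_mc(phi, 1.0, 0.5, lambda y: abs(y) <= 0.1,
                 eps=1.0 / 11.0, n_samples=200, noise=noise, rng=rng)
```

Varying `eps` scans the mean time-horizon \(1/\varepsilon\), which is how long transients can be distinguished from attractors.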

2.3 Generalized transfer operators: attractor distribution and basin observable

As stated by Ashwin [63], attractors and basins can be interpreted pointwise for systems with stochastic and uncertain parameters \(\left(\omega ,\lambda \right)\). The definition (2) is a statistic of the basins with respect to the noise, but the dependence on the parameter \(\lambda \) remains. The global view of such systems, computing mean results in the product space \({\Omega } \times {\mathbb{L}}\), is explained here.

The suitability of transfer operators to obtain attractors and basins, and therefore a global view of the dynamics of deterministic and stochastic systems, has been highlighted in recent years [30, 43, 57, 67,68,69], replacing the usual algorithm-based descriptions, such as grids of starts, Monte Carlo, simple and generalized cell-mappings, etc. Here, the transfer and composition operators are generalized to systems with both noise and parametric uncertainty by assuming a one-to-one relation between dynamics \(\varphi \left(\omega ,\lambda \right)\) and elements of \(\left( {{\mathbb{L}},{\mathfrak{S}},P_{\lambda } } \right)\). This assumption allows the definition of one time-step transfer operators over the space of distributions \(L^{1} ({\mathbb{X}})\) associated with (1), given by

$$ \begin{gathered} {\mathcal{P}}\left( \lambda \right):L^{1} \left( {\mathbb{X}} \right) \to L^{1} \left( {\mathbb{X}} \right), \hfill \\ \int\limits_{B} {{\mathcal{P}}\left( \lambda \right)\left[ {f\left( x \right)} \right]dx} = \int\limits_{{\mathbb{X}}} {\left\{ {\int\limits_{{\Omega_{x} \left( {\lambda ;B} \right)}} {dP_{\omega } } } \right\}f\left( x \right)dx} , \hfill \\ \Omega_{x} \left( {\lambda ;B} \right) = \{ \omega \in \Omega :\varphi (\omega ,\lambda )x \in B\} , \hfill \\ \end{gathered} $$
(3)

where \({\Omega }_{x} \left( {\lambda ;B} \right) \subseteq {\Omega }\) is the set of all \(\omega \)-values for which the flow is in \(B \in {\mathfrak{B}}\), for any \(\lambda \in {\mathbb{L}}\), and \(f:{\mathbb{X}} \to {\mathbb{R}}^{ + }\) is an absolutely integrable function over \({\mathbb{X}}\), called a distribution, which for our applications will be a probability density function. For any \(\lambda \in {\mathbb{L}}\), \({\mathcal{P}}\left( \lambda \right)\) is a Markov operator, being positivity-preserving, linear, and norm-preserving, with spectral radius equal to one [53]. For systems with only noise, there is only a single \(\lambda \)-value, and \(\mathcal{P}\left(\lambda \right)\equiv \mathcal{F}\) is a Foias operator [53, 57]; for systems with only parametric uncertainty, there is only a single \(\omega \)-value, and \(\mathcal{P}\left(\lambda \right)\) is a Perron-Frobenius operator that is also a function of \(\lambda \); for deterministic systems, only single values of \(\lambda \) and \(\omega \) are defined, and \(\mathcal{P}\left(\lambda \right)\equiv \mathcal{P}\) is a single Perron-Frobenius operator. Therefore, \(\mathcal{P}\left(\lambda \right)\) is a generalization of the Foias operator [53, 57], covering deterministic, stochastic, and parametric uncertainty dynamics.

The dual operator of \(\mathcal{P}\left(\lambda \right)\) can also be obtained by generalizing its usual definition for stochastic systems to parametric uncertainty systems. Specifically, this composition operator, which is referred also as a Koopman operator, is defined over the space of observables \(L^{\infty } \left( {\mathbb{X}} \right)\) and given by

$$ \begin{gathered} {\mathcal{K}}\left( \lambda \right):L^{\infty } \left( {\mathbb{X}} \right) \to L^{\infty } \left( {\mathbb{X}} \right), \hfill \\ {\mathcal{K}}\left( \lambda \right)\left[ {g\left( x \right)} \right] = \int\limits_{\Omega } {g \circ \varphi \left( {\omega ,\lambda } \right)xdP_{\omega } } , \hfill \\ \end{gathered} $$
(4)

for any \(\lambda \in {\mathbb{L}}\), and any \(g:{\mathbb{X}} \to {\mathbb{R}}^{ + }\), which is an essentially bounded function over \({\mathbb{X}}\), called an observable. The duality relation is defined pointwise, given by

$$ \begin{gathered} \int\limits_{{\mathbb{X}}} {g\left( x \right){\mathcal{P}}\left( \lambda \right)\left[ {f\left( x \right)} \right]dx} = \int\limits_{{\mathbb{X}}} {{\mathcal{K}}\left( \lambda \right)\left[ {g\left( x \right)} \right]f\left( x \right)dx} , \hfill \\ \forall f \in L^{1} \left( {\mathbb{X}} \right),g \in L^{\infty } \left( {\mathbb{X}} \right), \hfill \\ \end{gathered} $$
(5)

for any \(\lambda \in {\mathbb{L}}\). The operators \(\mathcal{P}\left(\lambda \right)\) and \(\mathcal{K}\left(\lambda \right)\) define linear functional maps over \(L^{1} \left( {\mathbb{X}} \right)\) and \(L^{\infty } \left( {\mathbb{X}} \right)\), respectively, written as

$$ f\left( {t + 1,x} \right) = {\mathcal{P}}\left( \lambda \right)\left[ {f\left( {t,x} \right)} \right], $$
(6)
$$ g\left( {t + 1,x} \right) = {\mathcal{K}}\left( \lambda \right)\left[ {g\left( {t,x} \right)} \right], $$
(7)

for any \(\lambda \in {\mathbb{L}},\,t \in {\mathbb{N}}\). Systems (6) and (7) offer a global view of trajectories over \({\mathbb{X}}\), governing mean results with respect to the noise (space \(\left(\Omega ,\mathfrak{F},{P}_{\omega }\right)\)), but still distributed according to the uncertain parameter \(\lambda \). Therefore, we can think of parameter-dependent trajectories of distributions \(f\left(t,x\right)\) and observables \(g\left(t,x\right)\). Finally, a connection of Eq. (7) with the ε-committor functions given in [30] and in Eq. (2) can be obtained by defining an observable of an attractor region at \(t=0\) and iterating it. That is, by setting \(g\left(0,x\right)={\mathrm{id}}_{A\left(\lambda \right)}\), we obtain the equality \(g\left(1/\varepsilon -1,x\right)={g}_{A}\left(\varepsilon ,x;\lambda \right)\).

The asymptotic behavior of systems (6) and (7) is of particular importance. Invariant distributions describe attractors [8, 30], whereas the invariant observables characterize the basins’ structures [30, 68]. They are given by

$$ f\left( {x;\lambda } \right) = {\mathcal{P}}\left( \lambda \right)\left[ {f\left( {x;\lambda } \right)} \right] $$
(8)
$$ g\left( {x;\lambda } \right) = {\mathcal{K}}\left( \lambda \right)\left[ {g\left( {x;\lambda } \right)} \right] $$
(9)

respectively. The noise is accounted for in both structures thanks to the full formulation in Eq. (3), resulting in attractors’ regular distributions and basin boundary diffusion [30, 57]. Solutions \(f\left(x;\lambda \right)\) and \(g\left(x;\lambda \right)\) of Eqs. (8) and (9) depend explicitly on the operators \(\mathcal{P}\left(\lambda \right)\) and \(\mathcal{K}\left(\lambda \right)\), and, therefore, also depend on the parameter \(\lambda \). In the case of deterministic systems, \(g\left(x;\lambda \right)\) becomes an indicator function of the basin, with value 1 over it and 0 otherwise. Finally, mean invariant structures over \(\left( {{\mathbb{L}},{\mathfrak{S}},P_{\lambda } } \right)\) are obtained by simple integration [65],

$$ \overline{f}\left( x \right) = \int\limits_{{\mathbb{L}}} {f\left( {x;\lambda } \right)dP_{\lambda } } , $$
(10)
$$ \overline{g}\left( x \right) = \int\limits_{{\mathbb{L}}} {g\left( {x;\lambda } \right)dP_{\lambda } } . $$
(11)

2.4 Generalized Ulam discretization

The discretization of transfer operators \(\mathcal{P}\left(\lambda \right)\) is given by the Ulam method [30, 35, 69, 70], equivalent to the generalized cell-mapping [14]. Following [57], the discretization process starts by adopting a disjoint partition of the phase-space \({\mathbb{X}}\) as \({\mathbb{B}} = \left\{ {b_{1} , \ldots ,b_{i} } \right\}\). Consider also the subspace \(\Delta_{h} \subset L^{1} \left( {\mathbb{X}} \right)\) spanned by the normalized indicator functions of \({\mathbb{B}}\), i.e., with basis \(\left\{{1}_{1},\dots ,{1}_{i}\right\}\), where \({1}_{i}={\mathrm{id}}_{{b}_{i}}/{P}_{x}\left({b}_{i}\right)\), \({P}_{x}\left({b}_{i}\right)\) being the Lebesgue measure (generalized volume) of \({b}_{i}\) and h the characteristic size of the partition. A projection operator \({Q}_{h}\) is defined such that a distribution \(f\left( x \right) \in L^{1} \left( {\mathbb{X}} \right)\) is projected onto the subspace \({\Delta }_{h}\), that is,

$$ \begin{gathered} Q_{h} :L^{1} \left( {\mathbb{X}} \right) \to \Delta_{h} , \hfill \\ Q_{h} f\left( x \right) = \mathop \sum \limits_{i} 1_{i} \int\limits_{{b_{i} }} {f\left( x \right)dx} . \hfill \\ \end{gathered} $$
(12)
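For a one-dimensional partition, the projection (12) amounts to computing the mass of \(f\) in each cell. A midpoint-rule sketch, with all names illustrative:

```python
import numpy as np

def project_Qh(f, edges, n_quad=400):
    """Project a 1-D density f onto the indicator basis {1_i} of Eq. (12):
    the coefficient of cell b_i = [edges[i], edges[i+1]) is the integral of
    f over b_i, approximated here by the midpoint rule."""
    masses = []
    for a, b in zip(edges[:-1], edges[1:]):
        dx = (b - a) / n_quad
        mid = a + dx * (np.arange(n_quad) + 0.5)
        masses.append(np.sum(f(mid)) * dx)  # integral of f over the cell
    return np.array(masses)  # f_h = sum_i masses[i] * 1_i

# Hypothetical check: a uniform density on [0, 1] over four equal cells.
edges = np.linspace(0.0, 1.0, 5)
fh = project_Qh(lambda x: np.ones_like(x), edges)
```

Since each coefficient is a cell mass, the projection conserves total probability, consistent with \(Q_h\) mapping densities to densities.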

A projected distribution over \({\Delta }_{h}\) is generically denoted \({Q}_{h}f\left(x\right)={f}_{h}\). Following [70], the projection of \(\mathcal{P}\left(\lambda \right)\) is defined from the composition of \({Q}_{h}\) and \(\mathcal{P}\left(\lambda \right)\). The resulting projected operator is \({Q}_{h}\mathcal{P}\left(\lambda \right)={P}_{h}\left(\lambda \right)\), that is,

$$ \begin{aligned} &P_{h} \left( \lambda \right):\Delta_{h} \to \Delta_{h} , \hfill \\ &f_{h} P_{h} \left( \lambda \right) = \mathop \sum \limits_{i,j = 1} f_{i} p_{ij} \left( \lambda \right){\mathbf{1}}_{j} \hfill \\ \end{aligned} $$
(13)

for any \(\lambda \in {\mathbb{L}}\), where the row vector \({f}_{i}\) and matrix \({p}_{ij}\left(\lambda \right)\) are

$$ \begin{gathered} f_{i} = \int\limits_{{b_{i} }} {f\left( x \right)dx} , \hfill \\ p_{ij} \left( \lambda \right) = \frac{1}{{P_{x} \left( {b_{i} } \right)}}\int\limits_{{b_{i} }} {\left\{ {\int\limits_{{\Omega_{x} \left( {\lambda ;b_{j} } \right)}} {dP_{\omega } } } \right\}} dx, \hfill \\ \Omega_{x} \left( {\lambda ;b_{j} } \right) = \left\{ {\omega \in \Omega :\varphi \left( {\omega ,\lambda } \right)x \in b_{j} } \right\}. \hfill \\ \end{gathered} $$
(14)

\(\mathcal{P}\left(\lambda \right)\) has spectral radius equal to one [30], and \({p}_{ij}\left(\lambda \right)\) is a row stochastic matrix. Row vectors \({f}_{i}\left(\lambda \right)\) in the fixed space of \({p}_{ij}\left(\lambda \right)\), identified as \(\mathrm{fix}\left({p}_{ij}\left(\lambda \right)\right)\), are solutions of

$$ f_{i} \left( \lambda \right)\delta_{ij} = f_{i} \left( \lambda \right)p_{ij} \left( \lambda \right), $$
(15)

where \({\delta }_{ij}\) is the Kronecker delta. Equation (15) is the discretized version of Eq. (8), and its solutions are discretized vector representations \({f}_{i}\left(\lambda \right)\) of invariant distributions \(f\left(x;\lambda \right)\) of the system (1), with values in \(\left[ {0;1} \right]\). Each such vector represents an attractor and is referred to as a discretized attractor's distribution. Finally, the stochastic matrix \({p}_{ij}\left(\lambda \right)\) can be understood as the proportion of states in \({b}_{j}\) after one iteration, starting in \({b}_{i}\). This yields the simplified representation

$$ p_{ij} \left( \lambda \right) \approx \frac{{\#\,{\text{states in}}\;b_{j} \;{\text{due to}}\;\varphi \left( {\omega ,\lambda } \right)\;{\text{and with i.c. in}}\;b_{i} }}{{\#\,{\text{i.c. in}}\;b_{i} }}. $$
(16)

The general definitions in Eq. (14) reduce to the deterministic, parameter-uncertainty, or stochastic case, depending on the parameter space \(\left( {{\mathbb{L}},{\mathfrak{S}},P_{\lambda } } \right)\) and on the probability space \(\left(\Omega ,\mathfrak{F},{P}_{\omega }\right)\). The matrix representation of the projected Koopman operator \({K}_{h}\left(\lambda \right)\) is given by the transpose of \({p}_{ij}\left(\lambda \right)\), thanks to the dual relation (5).
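The counting rule (16) and the fixed-space equation (15) translate directly into code: sample initial conditions in each cell, apply the map once per sample, and extract the invariant distribution as a left fixed vector of the resulting row-stochastic matrix. The sketch below assumes a one-dimensional phase space and an illustrative contracting noisy map; none of it is the paper's implementation.

```python
import numpy as np

def ulam_matrix(phi, edges, lam, noise, n_ic, rng):
    """Monte Carlo estimate of the row-stochastic matrix p_ij(lam), Eq. (16):
    spread n_ic initial conditions over each cell b_i, apply the map once
    with a fresh noise sample, and count landings in each cell b_j."""
    n = len(edges) - 1
    P = np.zeros((n, n))
    for i in range(n):
        x0 = rng.uniform(edges[i], edges[i + 1], n_ic)
        x1 = np.array([phi(noise(rng), lam, x) for x in x0])
        j = np.clip(np.searchsorted(edges, x1, side="right") - 1, 0, n - 1)
        np.add.at(P[i], j, 1.0)  # histogram of landing cells for row i
    return P / n_ic

def invariant_distribution(P, n_iter=2000):
    """Solve Eq. (15) by power iteration on the left eigenvector of p_ij."""
    f = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(n_iter):
        f = f @ P
    return f

# Hypothetical noisy contraction on [-1, 1]: mass accumulates near x = 0.
phi = lambda w, lam, x: lam * x + w
noise = lambda rng: rng.normal(0.0, 0.01)
rng = np.random.default_rng(2)
edges = np.linspace(-1.0, 1.0, 11)
P = ulam_matrix(phi, edges, 0.5, noise, 500, rng)
f = invariant_distribution(P)
```

Left multiplication `f @ P` mirrors the row-vector convention of Eq. (13); the clipping at the domain edges stands in for an absorbing "sink" cell, which a production code would handle explicitly.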

Observables of the basins are computed by solving the ill-conditioned system [30]

$$ \left[ {\delta_{ij} - \left( {1 - \varepsilon } \right)p_{ij} \left( \lambda \right)} \right]g_{j} \left( {\varepsilon ;\lambda } \right) = \varepsilon\,\,{\rm{ id}}_{A\left( \lambda \right)} , $$
(17)

where \({\delta }_{ij}\) is the Kronecker delta, \({\mathrm{id}}_{A\left(\lambda \right)}\) is the indicator function of the region of attraction \(A\left(\lambda \right)\) in the vector representation, and \(\varepsilon \in \left(0;1\right]\) is a control variable. In other words, it gives the probability that a state in \({b}_{j}\) maps to \(A\left(\lambda \right)\) after \(1/\varepsilon -1\) iterations. The vector representation \({g}_{j}\left(\varepsilon ;\lambda \right)\) corresponds to the invariant time-dependent observable given by Eq. (2), with component values in \(\left[ {0;1} \right]\). Finally, averages of both \({f}_{i}\left(\lambda \right)\) and \({g}_{j}\left(\varepsilon ;\lambda \right)\) in \(\left( {{\mathbb{L}},{\mathfrak{S}},P_{\lambda } } \right)\) can be obtained as in Eqs. (10) and (11), respectively. These integrals can be further discretized through polynomial chaos [65] or by a simple weighted sum [71].
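Once \({p}_{ij}\left(\lambda \right)\) is assembled, system (17) is a single linear solve. The sketch below uses a hypothetical two-cell chain with an absorbing attractor cell to show the shape of the computation; the example matrix is not from the paper.

```python
import numpy as np

def basin_observable(P, id_A, eps):
    """Solve Eq. (17), [delta_ij - (1 - eps) p_ij] g_j = eps * id_A, for the
    discretized epsilon-committor g_j(eps; lam)."""
    n = P.shape[0]
    lhs = np.eye(n) - (1.0 - eps) * P
    return np.linalg.solve(lhs, eps * np.asarray(id_A, dtype=float))

# Hypothetical two-cell chain: cell 0 is absorbing (the attractor region),
# cell 1 moves to cell 0 with probability 1/2 per step.
P = np.array([[1.0, 0.0],
              [0.5, 0.5]])
g = basin_observable(P, id_A=[1.0, 0.0], eps=0.01)
```

As the text notes, the system becomes ill-conditioned for small ε, so a careful implementation would monitor the conditioning or use an iterative solver with a suitable stopping criterion.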

3 Phase-space refinement and parameter space discretization

3.1 The phase-space adaptative algorithm

The computation of matrices \({p}_{ij}\left(\lambda \right)\), Eq. (14), involves a considerable number of time integrations when a Monte Carlo [30, 38] or quasi-Monte Carlo [70] strategy is employed, resulting in a slow convergence of \({P}_{h}\left(\lambda \right)\) to \(\mathcal{P}\left(\lambda \right)\) as h → 0 [70]. Furthermore, the discretization introduces a numerical diffusion in the dynamical system [42, 44] and inevitably changes the dynamics, a fact remedied by high-resolution partitions at the expense of a significant increase in computational cost.

A possible efficient strategy is to adopt an irregular adaptive partition, with a smaller cell size h in regions of interest, such as attractor supports and basin boundaries. Such a strategy is possible because the operator \({Q}_{h}\), Eq. (12), is not limited to cells of equal size, but only to disjoint partitions. A sequence of n phase-space partitions can be constructed, \({\mathbb{B}}_{0} \left( \lambda \right),{\mathbb{B}}_{1} \left( \lambda \right), \ldots ,{\mathbb{B}}_{n} \left( \lambda \right)\), where \(i<j\) implies that \({\mathbb{B}}_{j} \left( \lambda \right)\) has a higher resolution than \({\mathbb{B}}_{i} \left( \lambda \right)\). The corresponding matrix sequence, \({p}_{ij}^{\left(0\right)}\left(\lambda \right),{p}_{ij}^{\left(1\right)}\left(\lambda \right),\dots ,{p}_{ij}^{\left(n\right)}\left(\lambda \right)\), approximates the continuous transfer operator \(\mathcal{P}\left(\lambda \right)\) as n increases, for all \(\lambda \). Similar alternatives were proposed for the refinement of basins’ boundaries [72, 73] and SBR measures [74, 75], but restricted to deterministic dynamics. The proposed strategy is described below, and a graphical depiction is provided in Fig. 1.

Fig. 1

Depiction of the three phases of the proposed algorithm. a Cells for subdivision at partition \({\mathbb{B}}_{n}\) are identified; b identified cells are refined; c the flow map is updated over the new partition. (colors in the online version)

Algorithm 1: Start from a partition \({\mathbb{B}}_{n} \left( \lambda \right)\) covering the phase-space region \({\mathbb{X}}\), and a flow map \({p}_{ij}^{\left(n\right)}\left(\lambda \right)\). The next partition \({\mathbb{B}}_{n + 1} \left( \lambda \right)\) and flow map \({p}_{ij}^{\left(n+1\right)}\left(\lambda \right)\) are constructed through the following procedure:

1. Identification of the cells to be subdivided

From a given partition \({\mathbb{B}}_{n} \left( \lambda \right)\) and a flow map \({p}_{ij}^{\left(n\right)}\left(\lambda \right)\), we identify the cells for subdivision, namely those satisfying Eq. (18) (refinement of the attractor) or Eq. (19) (refinement of the boundary); there is no need to distinguish between the two, since both have to be refined, even if for different reasons. We denote them by \({\mathbb{S}}_{{n + \frac{1}{2}}}\); they are reported in green in Fig. 1(a)

2. Refinement of the cells

The cells \({\mathbb{S}}_{{n + \frac{1}{2}}}\) previously identified are subdivided into two, forming a new set of cells named \({\mathbb{S}}_{n + 1}\). The other cells \({\mathbb{B}}_{n} \left( \lambda \right)\backslash {\mathbb{S}}_{{n + \frac{1}{2}}}\) are unchanged. This gives the updated partition \({\mathbb{B}}_{n + 1} \left( \lambda \right) ={\mathbb{S}}_{n + 1} \cup \left( {{\mathbb{B}}_{n} \left( \lambda \right)\backslash {\mathbb{S}}_{{n + \frac{1}{2}}} } \right)\), which is the union of refined and unrefined cells (Fig. 1(b))

3. Update of the flow map \({p}_{ij}^{\left(n\right)}\left(\lambda \right)\) on \({\mathbb{B}}_{n + 1} \left( \lambda \right)\)

Compute the new entries of \({p}_{ij}^{\left(n+1\right)}\left(\lambda \right)\). The flow map must be recomputed (updated) for all the subdivided cells \({\mathbb{S}}_{{n + \frac{1}{2}}}\) (cyan cells in Fig. 1(c.1)) and for their preimages, i.e., for all cells \(\varphi^{ - 1} \left( {\omega ,\lambda } \right){\mathbb{S}}_{{n + \frac{1}{2}}}\) (exemplified by the red cells in Fig. 1(c.1)) that, at the previous subdivision n, have an image under the flow in \({\mathbb{S}}_{{n + \frac{1}{2}}}\). Indeed, their images have been subdivided, and thus the flow is no longer defined over them. The flow in the remaining cells \({\mathbb{B}}_{n + 1} \left( \lambda \right)\backslash \left( {{\mathbb{S}}_{n + 1} \cup \varphi^{ - 1} \left( {\omega ,\lambda } \right){\mathbb{S}}_{{n + \frac{1}{2}}} } \right)\) (magenta in Fig. 1(c.2)) is unchanged

At the first iteration of the algorithm, \(n=0\), the flow map \({p}_{ij}^{\left(0\right)}\left(\lambda \right)\) of the entire initial partition \({\mathbb{B}}_{0}\) is computed. As the algorithm progresses, it is expected that the ratio between the generalized volumes of \({\mathbb{S}}_{{n + \frac{1}{2}}}\) and \({\mathbb{B}}_{n + 1} (\lambda )\) diminishes, namely, \(P_{x} \left( {{\mathbb{S}}_{{n + \frac{1}{2}}} } \right)/P_{x} \left( {{\mathbb{B}}_{n + 1} \left( \lambda \right)} \right) \to 0\) as \(n \to \infty\). In cases where this is true, the algorithm reduces the total computational cost. The process stops after a predefined number of iterations.
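The bookkeeping of step 3 can be sketched as follows. Here a small dense matrix stands in for the sparse flow map, and the helper (whose name is ours) returns the cells whose transition entries must be reintegrated, i.e., the subdivided cells plus their preimages:

```python
import numpy as np

def cells_to_recompute(P, refined):
    """Step 3 bookkeeping: the rows of the flow map to be recomputed are
    the refined cells themselves plus their preimages, i.e., every cell i
    with p_ij > 0 for some refined cell j (a dense-matrix sketch)."""
    refined = np.asarray(refined)
    preimages = np.flatnonzero(P[:, refined].sum(axis=1) > 0)
    return np.union1d(refined, preimages)

# Toy 3-cell flow map: cell 1 maps partly into cell 0.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0]])
rows = cells_to_recompute(P, refined=[0])
```

Refining cell 0 forces recomputation of cells 0 and 1, while cell 2 keeps its old entries, mirroring the magenta cells of Fig. 1(c.2).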

At a given partition n, the algorithm identifies the regions to be refined at step 1. Equation (15) is solved, resulting in the left fixed space of \({p}_{ij}^{\left(n\right)}\left(\lambda \right)\). This is a computationally difficult problem since the transfer matrix \({p}_{ij}^{\left(n\right)}\left(\lambda \right)\) is sparse, generally asymmetric, indefinite, and large, requiring specialized algorithms. Additionally, Eq. (15) can have multiple solutions for a multistable dynamical system. In other words, the unit eigenvalue has geometric multiplicity greater than one, and the corresponding eigenvectors spanning \(\mathrm{fix}\left({p}_{ij}^{\left(n\right)}\left(\lambda \right)\right)\) are not uniquely defined. To circumvent this problem, the methodology proposed in Theorem 2.6 and Lemma 5.2 of [26] is applied to transform a general set of solutions of Eq. (15) into a meaningful set of attractors’ distributions, with properties \(0\le {f}_{i}^{\left(n\right)}\left(\lambda \right)\le 1,\sum_{i}{f}_{i}^{\left(n\right)}\left(\lambda \right)=1\), and independent of each other. With the correct description of \(\mathrm{fix}\left({p}_{ij}^{\left(n\right)}\left(\lambda \right)\right)\), the corresponding regions of attraction \(A\left(\lambda \right)\) are defined, and the basins’ observables \({g}_{j}^{\left(n\right)}\left(\varepsilon ;\lambda \right)\) at a predefined time horizon \(1/\varepsilon -1\) are computed by solving Eq. (17).
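The multiplicity of Eq. (15) in a multistable system can be illustrated with a toy example. Instead of the specialized sparse eigensolvers mentioned above, the sketch below uses plain left power iteration (names and the 4-cell chain are ours); two absorbing cells give a unit eigenvalue of geometric multiplicity two, and different starting densities select different fixed densities:

```python
import numpy as np

def invariant_distribution(P, f0, iters=200):
    """Left power iteration f <- f P: approximates a fixed density of the
    row-stochastic matrix p_ij (a solution of Eq. (15)) reachable from the
    starting density f0.  A sketch only; large sparse problems require
    dedicated eigensolvers."""
    f = np.asarray(f0, dtype=float)
    for _ in range(iters):
        f = f @ P
    return f / f.sum()

# Bistable toy chain: cells 0 and 3 are absorbing attractors, so the unit
# eigenvalue of P has geometric multiplicity two.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.6, 0.4, 0.0, 0.0],
              [0.0, 0.0, 0.4, 0.6],
              [0.0, 0.0, 0.0, 1.0]])
fa = invariant_distribution(P, [0, 1, 0, 0])   # mass flows to cell 0
fb = invariant_distribution(P, [0, 0, 1, 0])   # mass flows to cell 3
```

The two limits are independent probability vectors spanning the fixed space, in the spirit of the cleanup of Theorem 2.6 and Lemma 5.2 of [26].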

Once \({f}_{i}^{\left(n\right)}\left(\lambda \right)\) and \({g}_{j}^{\left(n\right)}\left(\varepsilon ;\lambda \right)\) are known, the region of interest \({\mathbb{S}}_{{n + \frac{1}{2}}}\) can be defined. First, attractor regions have high density, and the corresponding entries of \({f}_{i}^{\left(n\right)}\left(\lambda \right)\) also have high values. Therefore, the heuristic constraint

$$ f_{i}^{\left( n \right)} \left( \lambda \right) \ge c_{f} $$
(18)

is adopted to identify such regions. This strategy is straightforward since it only depends on the computed distribution in the ith box, whereas strategies considering the local upper-bound L1 error need information about neighboring boxes [76]. For basin boundaries, it can be shown that if a saddle’s stable manifold passes through a box \({b}_{i}\), then the corresponding entries of the observable \({g}_{j}^{\left(n\right)}\left(\varepsilon ;\lambda \right)\) take values between 0 and 1. That is, trajectories passing through such boxes \({b}_{i}\) can converge to distinct attractors. This effect is also known as numeric diffusion [77] for deterministic systems, caused by the discretization. For nondeterministic systems, both numeric and real diffusion can occur, and such regions enlarge as the uncertainty increases. Therefore, a second constraint is defined,

$$ 0 < c_{g}^{\left( 1 \right)} \le g_{j}^{\left( n \right)} \left( {\varepsilon ;\lambda } \right) \le c_{g}^{\left( 2 \right)} < 1 $$
(19)

identifying boxes that can converge to more than one attractor with significant probability. Again, this strategy depends only on the computed observable in the ith box. Other methodologies consider information from neighboring boxes [78], but are computationally more involved. To the best of the authors’ knowledge, there is no local upper-bound error definition for basins’ observables analogous to the upper-bound L1 error for distributions presented in [76], justifying the adoption of a stopping criterion at a certain iteration.
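The two heuristics of Eqs. (18)–(19) amount to simple elementwise tests. A sketch follows, using as defaults the threshold values \(c_f = 10^{-10}\) and (0.03, 0.99) adopted later in Sect. 4 (the helper name is ours):

```python
import numpy as np

def cells_to_refine(f, g, c_f=1e-10, c_g=(0.03, 0.99)):
    """Selection heuristics of Eqs. (18)-(19): a cell is flagged when its
    invariant-distribution entry reaches c_f (attractor support) or its
    basin observable lies inside [c_g1, c_g2] (boundary cell that may
    converge to more than one attractor)."""
    f = np.asarray(f)
    g = np.asarray(g)
    on_attractor = f >= c_f
    on_boundary = (g >= c_g[0]) & (g <= c_g[1])
    return np.flatnonzero(on_attractor | on_boundary)

# Four illustrative cells: cell 0 is certainly escaping, cells 1-2 carry
# attractor mass, and cell 3 is an uncertain boundary cell.
idx = cells_to_refine(f=[0.0, 0.7, 0.3, 0.0],
                      g=[0.0, 1.0, 0.95, 0.5])
```

Here cells 1 and 2 are flagged by the density test and cell 3 by the boundary test, so the refinement set is {1, 2, 3}.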

The set \({\mathbb{S}}_{{n + \frac{1}{2}}}\) is refined in step 2, forming \({\mathbb{S}}_{n + 1}\). Each box \(b_{i}^{\left( n \right)} \in {\mathbb{S}}_{{n + \frac{1}{2}}}\) is subdivided into two smaller ones, such that \({b}_{i}^{\left(n\right)}={b}_{2i}^{\left(n+1\right)}\cup {b}_{2i+1}^{\left(n+1\right)}\) and \({b}_{2i}^{\left(n+1\right)}\cap {b}_{2i+1}^{\left(n+1\right)}=\varnothing \). The new boxes form the set \({\mathbb{S}}_{n + 1}\), marked in cyan in Fig. 1b. Unrefined boxes in \({\mathbb{B}}_{n} \left( \lambda \right)\backslash {\mathbb{S}}_{{n + \frac{1}{2}}}\), marked in white in Fig. 1b, are renamed, such that \({b}_{2i}^{\left(n+1\right)}={b}_{i}^{\left(n\right)}\). The union of refined and unrefined boxes forms the new partition \({\mathbb{S}}_{n + 1} \cup \left( {{\mathbb{B}}_{n} \left( \lambda \right)\backslash {\mathbb{S}}_{{n + \frac{1}{2}}} } \right) = {\mathbb{B}}_{n + 1} \left( \lambda \right)\). The adopted refinement strategy not only guarantees by construction that no two cells overlap, but also allows optimal storage, subdivision, and search of elements in a binary tree data structure, as previously used in the software GAIO [36].
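A bisection step compatible with such binary-tree storage can be sketched as follows. Splitting along the coordinate dimensions alternately with tree depth is one common choice, consistent with the box sizes reported in Sect. 4, though it is an assumption here rather than necessarily the authors' exact convention:

```python
def bisect(box, depth):
    """Bisect a 2-D box along one dimension, alternating with tree depth.
    A box is ((x0, y0), (wx, wy)); returns the two children b_2i, b_2i+1.
    The alternating-dimension rule is an illustrative GAIO-like choice."""
    (x0, y0), (wx, wy) = box
    if depth % 2 == 0:  # even depth: split along x
        return (((x0, y0), (wx / 2, wy)),
                ((x0 + wx / 2, y0), (wx / 2, wy)))
    # odd depth: split along y
    return (((x0, y0), (wx, wy / 2)),
            ((x0, y0 + wy / 2), (wx, wy / 2)))

b0, b1 = bisect(((0.0, 0.0), (1.0, 1.0)), depth=0)
```

The two children are disjoint by construction and their union recovers the parent, matching \({b}_{i}^{(n)}={b}_{2i}^{(n+1)}\cup {b}_{2i+1}^{(n+1)}\).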

The final step 3 constitutes the update of the transfer matrix to the new phase-space partition. New entries of \({p}_{ij}^{\left(n+1\right)}\left(\lambda \right)\) corresponding to cells \(b_{i}^{{\left( {n + 1} \right)}} \in {\mathbb{S}}_{n + 1}\) are computed. However, the flow map of the preimage region \(\varphi^{ - 1} \left( {\omega ,\lambda } \right){\mathbb{S}}_{{n + \frac{1}{2}}}\), marked in red in Fig. 1(c.1), must also be recomputed. Since cells in \({\mathbb{S}}_{{n + \frac{1}{2}}}\) no longer have corresponding entries in \({p}_{ij}^{\left(n+1\right)}\left(\lambda \right)\), the entries of their preimages lose meaning, and new entries must be calculated. The flow map of the remaining cells, marked in magenta in Fig. 1(c.2), is unaltered, with \({p}_{\left(2i,2j\right)}^{\left(n+1\right)}\left(\lambda \right)={p}_{ij}^{\left(n\right)}\left(\lambda \right)\). This ends iteration n of the adaptive discretization. The algorithm then proceeds to the next iteration, whose starting partition is \({\mathbb{B}}_{n + 1}\).

3.2 Mean structures for dynamical systems with parametric uncertainty

The previous exposition outlined the main subdivision algorithm of the phase-space. Still, the mean distributions and observables of parametric uncertainty cases, given by integrals (10) and (11), must be addressed. Since the aim is to deal with general nonlinear maps, Eq. (1), sparse sampling strategies of the parameter space may not be adequate, particularly close to bifurcation points, where the dynamical system may depend strongly on the parameter values, as shown by Le Maître and Knio [65]. Therefore, general discretization strategies must be considered.

Assuming the parameter space \({\mathbb{L}}\) to be bounded, we can consider a discretization into a number of points \({\lambda }_{k}\in\Lambda \), spaced by \(\Delta \lambda ={\lambda }_{k}-{\lambda }_{k-1}\), and obtain a discretization of the probability \({P}_{\lambda }\) as

$$ P_{\Lambda } \left( {\lambda_{k} } \right) = P_{\lambda } \left[ {\left( {\lambda_{k} - \frac{\Delta \lambda }{2}} \right) \le \lambda \le \left( {\lambda_{k} + \frac{\Delta \lambda }{2}} \right)} \right]. $$
(20)

Therefore, the bounded continuous probability space \(\left( {{\mathbb{L}},{\mathfrak{S}},P_{\lambda } } \right)\) can be approximated by the bounded discrete probability space \(\left(\Lambda ,{\mathfrak{S}}_{\Lambda },{P}_{\Lambda }\right)\), where \({\mathfrak{S}}_{\Lambda }\) is a σ-algebra over \(\Lambda \). The original dynamical system \(\varphi \left(\omega ,\lambda \right)\) becomes a collection of deterministic or stochastic dynamical systems, weighted by the discrete probability \({P}_{\Lambda }\). Finally, Algorithm 1 is applied for each \({\lambda }_{k}\in\Lambda \), and statistics are computed according to \({P}_{\Lambda }\). We restrict our focus to averages, calculated according to the rectangle rule,

$$ {\mathbb{E}}\left[ {f\left( \lambda \right)} \right] = \mathop \int \limits_{{}}^{{}} f\left( {\lambda ;x} \right)dP_{\lambda } \approx \mathop \sum \limits_{k} f\left( {\lambda_{k} ;x} \right)P_{{\Lambda }} \left( {\lambda_{k} } \right), $$
(21)

where \(f\left(\lambda ;x\right)\) represents any dynamical structure dependent on the parameter \(\lambda \), such as attractors’ distributions, basins of attraction, or manifolds. Equation (21) is an approximation of an integral by a weighted sum, a strategy that has been used in uncertainty quantification [65]. No continuation procedure is necessary if one chooses a window large enough to contain all attractors (which is not always the case, in particular when one wants to “zoom” around certain attractors/basins of interest). In this case, the discretization methodology identifies all existent attractors. Given a parameter \({\lambda }_{k}\), it is only required to determine to which branch each identified attractor belongs. Once this correspondence is established, the mean distributions and basins are computed.
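As an illustration of Eqs. (20)–(21) for a standard normal parameter, the cell probabilities follow from differences of the normal CDF, and the average is a weighted sum. Function names are ours, and the second moment of λ is used as a stand-in for a generic structure f:

```python
import numpy as np
from math import erf, sqrt

def discretize_normal(lam, dlam):
    """Eq. (20) for a standard normal parameter: cell probabilities
    P_Lambda(lam_k) = Phi(lam_k + dlam/2) - Phi(lam_k - dlam/2)."""
    Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return np.array([Phi(l + dlam / 2) - Phi(l - dlam / 2) for l in lam])

def mean_structure(f_k, P_L):
    """Rectangle rule of Eq. (21): E[f] ~ sum_k f(lam_k) P_Lambda(lam_k).
    Works for scalar or vector-valued structures f_k."""
    return np.tensordot(P_L, np.asarray(f_k), axes=1)

lam = np.linspace(-3.0, 3.0, 31)
P_L = discretize_normal(lam, lam[1] - lam[0])
P_L /= P_L.sum()                 # renormalize after truncation to [-3, 3]
m = mean_structure(lam**2, P_L)  # approximates the truncated-normal variance
```

With 31 points the weighted sum reproduces the variance of the truncated standard normal (about 0.97) to within the rectangle-rule error.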

The distance between the distributions of two attractors is calculated through the Lukaszyk-Karmowski metric [79] to identify the corresponding attractor branch. Given a distribution \({f}_{{A}_{m}}\left({\lambda }_{k};x\right)\) in a known attractor branch \({A}_{m}\), the branch of a distribution \({f}_{A}\left({\lambda }_{k+1};x\right)\) for the next parameter value \({\lambda }_{k+1}\) is identified according to the expression

$$ D_{m} \left( {A_{m} \left( {\lambda_{k} } \right),A\left( {\lambda_{k + 1} } \right)} \right) = \mathop \int \limits_{{\mathbb{X}}} \mathop \int \limits_{{\mathbb{X}}} d\left( {x,y} \right)f_{{A_{m} }} \left( {\lambda_{k} ;x} \right)f_{A} \left( {\lambda_{k + 1} ;y} \right)dxdy, $$
(22)

where \(d\left(x,y\right)\) is the metric of \({\mathbb{X}}\). If, for a certain m, \({D}_{m}\left({A}_{m}\left({\lambda }_{k}\right),A\left({\lambda }_{k+1}\right)\right)\) is minimal and smaller than a predefined threshold, then \({f}_{A}\left({\lambda }_{k+1};x\right)\) belongs to the mth branch of existent solutions. If no \({D}_{m}\) value is small, then the existence of possible new branches must be investigated. After all attractors are identified, the mean distributions \({\overline{f} }_{A}\) and observables \({\overline{g} }_{A}\left(\varepsilon \right)\) are computed over each branch.
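A discrete version of Eq. (22) on a one-dimensional grid is immediate (note that the Lukaszyk-Karmowski distance is not a true metric: D(f, f) vanishes only for concentrated distributions):

```python
import numpy as np

def lk_distance(x, f, g):
    """Discrete Lukaszyk-Karmowski distance of Eq. (22) on a 1-D grid:
    D = sum_ij |x_i - x_j| f_i g_j for probability vectors f and g."""
    d = np.abs(x[:, None] - x[None, :])
    return float(f @ d @ g)

x = np.array([0.0, 1.0, 2.0])
f = np.array([1.0, 0.0, 0.0])   # distribution concentrated at x = 0
g = np.array([0.0, 0.0, 1.0])   # distribution concentrated at x = 2
D = lk_distance(x, f, g)        # distance between the two supports
```

Comparing D against the predefined threshold for each known branch then assigns the new distribution to an existing branch or flags a possible new one.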

If a phase-space window is too small, attractors outside it are flagged as escape solutions. Escape solutions are identified beforehand and do not enter the metric calculation, Eq. (22). Furthermore, there is no distinction between actual escape solutions and attractors outside the window, and basin structures that would belong to those attractors are also flagged as escape solutions. If an attractor branch moved outside the phase-space window for a given parameter value \(\lambda \), statistics such as the mean distributions and observables, Eqs. (10) and (11), would yield erroneous results. Care must therefore be taken in the evaluation of escaped solutions.

One point still must be addressed. The adaptive partition resulting from Algorithm 1 is parameter dependent, \({\mathbb{B}} _{n} \left( \lambda \right)\). Thus, not only are the discretized structures, such as attractors’ distributions, basins, and manifolds, parameter dependent, but the discretized spaces in which they are defined are also distinct from each other. To properly apply Eq. (21), we must define a common partition, \(\overline{{\mathbb{B}}}_{n}\), over which all structures are discretized. We start from a partition \({\mathbb{B}}_{n} \left( \lambda \right) = \left\{ {b_{0}^{\left( n \right)} ,b_{1}^{\left( n \right)} , \ldots , b_{i}^{\left( n \right)} } \right\}\) at a given iteration n. Given that not all boxes are subdivided by the adaptive Algorithm 1, we have \(i\le {2}^{n}-1\), and the index list \(\left\{ {0, \ldots ,i} \right\}\) possibly has holes (i.e., nonconsecutive numbers). The common partition \(\overline{{\mathbb{B}}}_{n}\) is the set of boxes whose index list \(\left\{ {0, \ldots ,\overline{i}} \right\}\) contains the index lists of the partitions \({\mathbb{B}} _{n} \left( \lambda \right)\), for all \(\lambda\)-values. Intuitively, this means that \(\overline{{\mathbb{B}}}_{n}\) is the selection of the smallest boxes from the set of partitions \({\mathbb{B}}_{n} \left( {\lambda_{k} } \right)\), covering the phase-space \({\mathbb{X}}\). An example of common partition construction is given in Fig. 2, at iteration n = 2. The index list of \(\overline{{\mathbb{B}}}_{2}\) contains the lists of the other two partitions, with its boxes being the most refined ones.

Fig. 2

Example of partition trees at iteration n = 2, for two \(\lambda \)-values (a, b), and corresponding common partition (c). The lists of cell numbers are also given, with the last containing the most refined boxes of all previous partitions

With the common partition \(\overline{{\mathbb{B}}}_{n}\) defined, the next step is to project the intermediate structures over the new partition. For attractors’ distributions, this is done by applying the operator of Eq. (12), corresponding to the common partition \(\overline{{\mathbb{B}}}_{n}\) with basis functions in \({\Delta }_{{h}^{^{\prime}}}\), over an already discretized distribution over \({\mathbb{B}}_{n} \left( {\lambda_{k} } \right)\) with basis functions in \({\Delta }_{h}\). Denoting it by \({Q}_{{h}^{^{\prime}}}{Q}_{h}\equiv {Q}_{h\left(\lambda \right)}^{{h}^{^{\prime}}}\), we have

$$ \begin{aligned} &Q_{h\left( \lambda \right)}^{{h^{\prime}}} :L^{1} ({\mathbb{X}}) \to {\Delta }_{{h^{\prime}}}, \\ &Q_{h\left( \lambda \right)}^{{h^{\prime}}} f\left( x \right) = \mathop \sum \limits_{{\overline{i}}} 1_{{\overline{i}}} \frac{{P_{x} \left( {b_{{\overline{i}}} } \right)}}{{P_{x} \left( {b_{i} } \right)}}\int\limits_{{b_{i} }} {f\left( x \right)dx} , \\ \end{aligned} $$
(23)

acting on each entry \({f}_{i}={\int }_{{b}_{i}}f\left(x\right)dx\) of the discretized distribution \({f}_{h}\) over the partition \({\mathbb{B}}_{n} \left( {\lambda_{k} } \right)\). It is implicitly assumed that cells in \(\overline{{\mathbb{B}}}_{n}\) are always contained in cells of some \({\mathbb{B}}_{n} \left( {\lambda_{k} } \right)\), that is, \({b}_{\overline{i}}\subseteq {b}_{i}\). The fraction \({P}_{x}\left({b}_{\overline{i}}\right)/{P}_{x}\left({b}_{i}\right)\) is the proportional generalized volume (area in bidimensional cases) between boxes \({b}_{\overline{i}}\) and \({b}_{i}\). The entries of the vector representation \({f}_{\overline{i}}\left({\lambda }_{k}\right)\) in the common partition are

$$ f_{{\overline{i}}} \left( {\lambda_{k} } \right) = \frac{{P_{x} \left( {b_{{\overline{i}}} } \right)}}{{P_{x} \left( {b_{i} } \right)}}\mathop \int \limits_{{b_{i} }}^{{}} f\left( {\lambda_{k} ;x} \right)dx, $$
(24)

from which the average in Eq. (21) is computed, resulting in the mean discretized distribution

$$ \overline{f}_{{\overline{i}}} \approx \mathop \sum \limits_{k} f_{{\overline{i}}} \left( {\lambda_{k} } \right)P_{{\Lambda }} \left( {\lambda_{k} } \right). $$
(25)
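Equations (24)–(25) amount to splitting each coarse-cell integral among its children in proportion to volume, and then averaging across the parameter values. A sketch for equal bisection, with illustrative partitions and uniform parameter weights:

```python
import numpy as np

def project_to_common(f_parent, children_per_parent):
    """Eq. (24) for equal bisection: each parent-cell integral f_i is
    redistributed among its children in proportion to volume, i.e. the
    ratio P_x(b_ibar)/P_x(b_i) equals 1/(number of children)."""
    out = []
    for fi, n_child in zip(f_parent, children_per_parent):
        out.extend([fi / n_child] * n_child)
    return np.array(out)

# lambda_1: coarse partition [b0, b1] with b0 split in the common partition;
# lambda_2: already discretized on the common partition [b00, b01, b1].
fA = project_to_common([0.6, 0.4], [2, 1])   # becomes [0.3, 0.3, 0.4]
fB = np.array([0.5, 0.1, 0.4])
f_mean = 0.5 * fA + 0.5 * fB                 # Eq. (25), P_Lambda = [1/2, 1/2]
```

Both projected vectors live on the same common partition, so the weighted sum of Eq. (25) is a plain elementwise average and total probability is preserved.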

For the projection of the basins’ observables in the vector representation \({g}_{i}\left(\varepsilon ;{\lambda }_{k}\right)\), obtained from Eq. (17) for \({\lambda }_{k}\), we start from the dual space \(\Delta_{h}^{*} \subset L^{\infty } \left( {\mathbb{X}} \right)\), spanned by the indicator functions \(\left\{{\mathrm{id}}_{{b}_{1}},\dots ,{\mathrm{id}}_{{b}_{i}}\right\}\) [35, 80]. Given that the projection of a function \(g\left(\varepsilon ,{\lambda }_{k};x\right)\) over \({\Delta }_{h}^{*}\) is given by \({g}_{i}\left(\varepsilon ;{\lambda }_{k}\right) {\mathrm{id}}_{{b}_{i}}\), the entries of the vector representation \({g}_{\overline{i}}\left(\varepsilon ;{\lambda }_{k}\right)\) are

$$ g_{{\overline{i}}} \left( {\varepsilon ;\lambda_{k} } \right) = g_{i} \left( {\varepsilon ;\lambda_{k} } \right) {\text{id}}_{{b_{{\overline{i}}} }} , $$
(26)

given that \({b}_{\overline{i}}\subseteq {b}_{i}\). The average in Eq. (21) is computed, resulting in the mean discretized observable

$$ \overline{g}_{{\overline{i}}} \left( \varepsilon \right) \approx \mathop \sum \limits_{k} g_{{\overline{i}}} \left( {\varepsilon ;\lambda_{k} } \right)P_{\Lambda } \left( {\lambda_{k} } \right). $$
(27)

The procedure can be summarized in the following algorithm:

Algorithm 2: Start from a refinement \(\left(\Lambda ,{\mathfrak{S}}_{\Lambda },{P}_{\Lambda }\right)\) of the parameter space \({\mathbb{L}}\) containing the values \({\lambda }_{k}\). The mean structures are computed through the following steps:

1. Discrete adaptive analysis

Apply Algorithm 1 for each \({\lambda }_{k}\)

2. Common projection

Obtain the common discrete space \(\overline{{\mathbb{B}}}_{n}\) and project onto it the attractors’ distributions and basins’ observables, applying Eqs. (24) and (26), respectively

3. Compute averages

Compute the mean structures, given by Eqs. (25) and (27)

The parameter space subdivision procedure can be further extended to multidimensional parameter spaces, only requiring that the cells of the parameter partition \(\Lambda \) are disjoint and cover all of \({\mathbb{L}}\). Adaptive discretization of the parameter space [65] can reduce the computational cost of the discretization refinement.

4 Helmholtz oscillator with harmonic excitation

The standard dimensionless form of the damped harmonically excited Helmholtz oscillator is

$$ \ddot{x} + \delta \dot{x} + \left( {\alpha + \sigma \lambda } \right)x + \beta x^{2} = A\sin {\Omega }t + s\dot{W}, $$
(28)

where \(\alpha \) is the mean linear vibration frequency, the random variable \(\lambda \) is a truncated standard normal with distribution \(f\left(\lambda ;0,1,-3,3\right)\), \(\sigma \) is a scaling factor, \(\dot{W}\) is a standard white noise process, and s is the noise standard deviation. The system is deterministic for σ = 0 and s = 0, stochastic for σ = 0 and s ≠ 0, parametrically uncertain for σ ≠ 0 and s = 0, and general for σ ≠ 0 and s ≠ 0.

For a normal distribution, the probability density converges asymptotically to zero as the distance from the mean increases, but is positive for every value in the range (− ∞, + ∞), although the actual probability of an extreme event is very low. Since the range of a given parameter is bounded, a truncated normal distribution in which the range of definition is finite at one or both ends of the interval is considered [81], thus avoiding extreme values.
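Such a truncated standard normal can be sampled by simple rejection, as sketched below (library routines for truncated normals would serve equally well; the function name is ours):

```python
import numpy as np

def truncated_normal(rng, size, lo=-3.0, hi=3.0):
    """Sample the truncated standard normal f(lambda; 0, 1, -3, 3) by
    rejection: redraw any standard-normal sample falling outside [lo, hi]."""
    out = rng.standard_normal(size)
    bad = (out < lo) | (out > hi)
    while bad.any():
        out[bad] = rng.standard_normal(bad.sum())
        bad = (out < lo) | (out > hi)
    return out

rng = np.random.default_rng(0)
lam = truncated_normal(rng, 10_000)
```

All samples lie in [-3, 3] by construction, so extreme parameter values are excluded while the bulk of the normal density is preserved.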

The Helmholtz oscillator has one potential well, with two different classes of oscillations: bounded periodic nonlinear oscillations within the well and unbounded nonperiodic solutions [54]. This is a useful archetypal model, presenting escape, basin erosion, and integrity loss, and may describe the behavior of various dynamical systems (see, e.g., [58,59,60, 82]). The values of Table 1 are adopted, resulting in three possible outcomes: a small amplitude (i.e., nonresonant) oscillation, a large amplitude (i.e., resonant) oscillation, and escape solutions.

Table 1 Helmholtz oscillator parameters

The analyzed phase-space window is \({\mathbb{X}} = \left[ { - 0.7,1.8} \right] \times \left[ { - 1,1} \right]\). The initial box partition is defined as a division of \(2^{5}=32\) in each dimension, totaling 32 × 32 = 1024 boxes of size {0.0781, 0.0625} at iteration 0, with one additional sink box that attracts unbounded trajectories. Algorithm 1 is conducted through ten subsequent iterations, with a final box size of {0.0024, 0.0020} (only for the cells that are refined at each iteration). Also, the number of initial conditions per box (used to compute \({p}_{ij}(\lambda )\)) depends on the box size, decreasing with refinement. The number of collocation points for each iteration is presented in Table 2. For the deterministic and parametric uncertainty cases, the usual Perron-Frobenius operator governs the phase-space distribution, and its matrix representation \(p_{ij} \left( \lambda \right)\) in Eq. (14) reduces to

$$ p_{ij} \left( \lambda \right) = \frac{{P_{x} \left( {b_{i} \cap \varphi^{ - 1} \left( \lambda \right)b_{j} } \right)}}{{P_{x} \left( {b_{i} } \right)}}, $$
(29)

for all \(\lambda \in {\mathbb{L}}\). Of course, all \({p}_{ij}\left(\lambda \right)\) are equal for σ = 0, the purely deterministic case.
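A Monte Carlo estimate of Eq. (29) counts, for each box, the fraction of sampled initial conditions mapped into each image box. A one-dimensional sketch with an illustrative contracting map and a sink column for escapes (the partition and map are ours, not the oscillator's):

```python
import numpy as np

def transfer_matrix(phi, edges, n_samples, rng):
    """Monte Carlo estimate of Eq. (29) on a 1-D box partition: entry
    p_ij is the fraction of samples drawn in box b_i that the map phi
    sends into box b_j; the extra last column is the sink box that
    collects escaping samples."""
    n = len(edges) - 1
    P = np.zeros((n, n + 1))
    for i in range(n):
        x = rng.uniform(edges[i], edges[i + 1], n_samples)
        j = np.searchsorted(edges, phi(x), side='right') - 1
        for jj in j:
            P[i, jj if 0 <= jj < n else n] += 1.0
    return P / n_samples

rng = np.random.default_rng(1)
edges = np.linspace(0.0, 1.0, 5)                       # four boxes on [0, 1]
P = transfer_matrix(lambda x: x / 2, edges, 500, rng)  # contraction toward 0
```

For this contraction each box maps entirely inside a single coarser image box, so the estimated rows are exact despite the sampling.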

Table 2 Discretization data for the Helmholtz oscillator

The Helmholtz oscillator, Eq. (28), is a continuous time problem. To construct the map \(\varphi \left(\lambda \right)\) and obtain a discrete time evolution in the form of Eq. (1), we considered stroboscopic Poincaré sections at the period of excitation T = 2π/Ω, with Ω as the forcing frequency. The flow \(\varphi \left(\lambda \right)\) maps the system state from one section to the other, as usual [83]. The time evolution of Eq. (28) for one period is obtained through the fourth-order Runge–Kutta method, with time-step T/200. This strategy is adopted in the deterministic and the following parametric uncertainty analyses.
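The stroboscopic map described above can be sketched as one period of classical RK4 integration with step T/200. The parameter values below are purely illustrative, since Table 1 is not reproduced here; for the parametric-uncertainty case one would pass α + σλ as the stiffness:

```python
import numpy as np

def helmholtz_map(state, t0, delta, alpha, beta, A, Omega, nsteps=200):
    """Stroboscopic Poincare map of Eq. (28) with s = 0: advance one
    forcing period T = 2*pi/Omega using classical RK4 with step T/200."""
    def rhs(t, y):
        x, v = y
        return np.array([v,
                         -delta * v - alpha * x - beta * x**2
                         + A * np.sin(Omega * t)])
    T = 2.0 * np.pi / Omega
    h = T / nsteps
    y, t = np.asarray(state, dtype=float), t0
    for _ in range(nsteps):
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, y + h / 2 * k1)
        k3 = rhs(t + h / 2, y + h / 2 * k2)
        k4 = rhs(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Illustrative parameter values only (Table 1 is not reproduced here).
y1 = helmholtz_map([0.0, 0.0], 0.0,
                   delta=0.1, alpha=1.0, beta=1.0, A=0.06, Omega=0.85)
```

Iterating this map yields the discrete-time evolution of Eq. (1); initial conditions sampled inside each box feed the estimate of \(p_{ij}(\lambda)\).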

4.1 Deterministic case

The evolution of the basins of attraction of the small and large amplitude solutions of the deterministic Helmholtz oscillator is shown in Fig. 3 as a function of the excitation magnitude \(\left(A\in \left[0.05,0.08\right]\right)\). The attractors are marked in red. The color scale differentiates regions converging to the depicted attractor with probability ranging from zero to one. There is only one attractor for A = 0.05, whose basin is surrounded by the (black) escape region. As expected for the deterministic case, the probability is either zero or one, with the exception of the folded fractal regions close to the boundaries, which have values between zero and one, as in Fig. 3(c, d). This results from numerical diffusion, since initial conditions in the same cell may converge to one of the two attractors or escape in such regions. After the emergence of the large amplitude attractor in the resonant region, the evolution of the basins’ boundaries shows increasing competition. The loss of integrity of the basins with increasing load is witnessed by their decreasing area. The algorithm proved robust enough to discretize the boundaries of highly fractal and intertwined basins. The set of initial conditions outside the two coexisting basins corresponds to solutions diverging to infinity [54].

Fig. 3

Evolution of the deterministic Helmholtz oscillator attractor’s basin (color bars) with the forcing magnitude. (colors in the online version)

Figure 4 presents the final box partition, \({\mathbb{B}}_{10}\), for an increasing excitation amplitude. It is evident that more boxes are needed to discretize the boundaries as the basin topology becomes more intricate. The partitions \({\mathbb{B}}_{0} ,{\mathbb{B}}_{2}\), and \({\mathbb{B}}_{4}\) are depicted in Fig. 5 for A = 0.06 to demonstrate the refinement procedure. The green boxes satisfy one of conditions (18) and (19), being either attractor boxes or boundary boxes. Specifically, the distribution threshold of Eq. (18) is adopted as \({c}_{f}={10}^{-10}\), while the boundary thresholds of Eq. (19) are calculated as \({c}_{g}^{\left(1\right)}=\mathrm{min}g+0.03\Delta g\) and \({c}_{g}^{\left(2\right)}=\mathrm{max}g-0.01\Delta g\), where \(\Delta g=\mathrm{max}g-\mathrm{min}g\). This permits the boundary boxes to be subdivided, allowing long transient solutions due to crude initial discretization to be refined as well. For example, in Fig. 5 the thresholds for the escape basin and the nonresonant basin are (0.03; 0.99) for all iterations, while the resonant basin has (0.0299; 0.9874) at iteration 0 and only attains the limits (0.03; 0.99) for higher iterations of discretization. Additionally, for discretization iterations equal to or lower than 1, the eigenvalues of \({p}_{ij}\) show that the resonant solution behaves like a long transient solution. This could lead to the wrong assumption that there is no resonant solution unless the analysis continues through additional iterations.

Fig. 4

Dependence of the final partition \({\mathbb{B}}_{10}\) of the Helmholtz oscillator as a function of the excitation magnitude A

Fig. 5

Iterative partition evolution of the Helmholtz oscillator for A = 0.06. Green cells are marked for subdivision, red cells for recalculation. (colors in the online version)

Red boxes are preimages of the green boxes, recalculated in each subsequent iteration, as explained in Sect. 3.1. The partition refinement is conducted by subdividing the green boxes, thus locally refining the phase-space near attractors and boundaries. As the algorithm progresses, the green boxes concentrate at the basins’ boundaries and the attractor, refining these regions of the phase-space, as desired. Finally, the total box count for each step and A = 0.06 is given in Table 3. A comparison of the current box count with a full discretization at a given iteration (maximum box count, corresponding to the hypothetical case in which all cells would have been subdivided) is shown, with the last column representing the decrease in computational cost, defined as the ratio between the maximum-to-current box count difference and the maximum box count; lower values imply higher computational cost. This efficiency increases with the iterations, exceeding 90% from iteration 8 onwards.

Table 3 Box count for the deterministic Helmholtz oscillator for A = 0.06

4.2 Effects of parameter uncertainty

Before addressing the influence of parameter uncertainty, it is advantageous to understand the implications of considering an uncertain parameter near a bifurcation point. To this end, Fig. 6 presents both the dependence of the stable responses on the stiffness parameter α for the excitation magnitude A = 0.06 and the normalized probability distributions of α + σ λ. There is a clear interval of α where the resonant and nonresonant responses coexist. Two saddle-node bifurcations limit this interval, with two possible jumps under a continuous change of α, forming a hysteretic cycle. Only one of the responses exists outside this region: the resonant for α < -1.1 and the nonresonant for α > -0.92. Three cases are chosen to investigate the parameter uncertainty, varying the scaling factor σ. For σ < 0.04, the probability of α + σ λ lying outside the hysteresis cycle is negligible. However, for σ ≥ 0.04, the uncertainty’s effect on the results cannot be neglected.

Fig. 6

Bifurcation diagram of the Helmholtz oscillator as a function of the stiffness parameter α, for A = 0.06, and normalized probability distributions of α + σ λ for selected values of the scaling factor σ

The parametric analysis of the influence of parameter uncertainty on the global dynamics is conducted through iterations 0 to 8 (see Table 4), alleviating the computational cost without compromising the quality of the result. To focus only on the uncertainty in the parameter, the noise is set to zero and the time evolution of the dynamical system is deterministic. The parameter space is discretized into 30 values, and the mean basins of attraction and mean attractors’ distributions are calculated through weighted sums, following Algorithm 2 in Sect. 3.2. Since the system is deterministic for a fixed parameter, the same time integrator of the previous analysis is considered, i.e., the fourth-order Runge–Kutta method with time-step T/200.

Table 4 Box count for the Helmholtz oscillator with uncertainty at iteration 8 for A = 0.06

Figure 7 presents the mean distributions (first color bar) and basins (second color bar) for increasing levels of the scaling factor \(\sigma \), demonstrating the effect of the probability distribution. According to the adopted color scheme, the response for a set of initial conditions converges to the expected attractor in the mean sense. The first and second columns refer to the small and large amplitude coexisting attractors, respectively. The effect is small for σ = 0.02, with only a slight spreading of both the attractors’ distributions and their basins’ boundaries, the latter concentrated near the internal saddle on the basin boundary. Furthermore, the basin regions with probability equal to one (yellow) almost coincide with the deterministic result. As the scaling parameter increases, the attractor distribution elongates (it is a one-dimensional structure embedded in the phase-space, an expected result according to the bifurcation diagram, Fig. 6) and approaches the boundary. The uncertain basin regions spread over the phase-space, and for σ ≥ 0.06, there is no region certainly converging to the resonant attractor in the mean sense (i.e., with probability equal to one); the probability is lower than 0.8 for σ = 0.06. Also, the portion of the nonresonant basin with probability equal to one decreases steadily, indicating a decrease in its dynamic integrity.

Fig. 7
figure 7

Helmholtz oscillator mean attractor distributions (first color bar) and mean basins of attraction (second color bar) for A = 0.06 and increasing values of the scaling parameter σ. (colors in the online version)

The final box set for the three initial scaling parameter values is given in Fig. 8, corresponding to the last iteration and to the common partition \(\overline{{\mathbb{B}}}_{8}\) of all 30 \(\lambda \)-values, for each σ-value. Table 4 presents a comparison of the total box count for all σ-values. As the uncertainty parameter increases, the discretization procedure results in an increasing number of boxes, implying a higher computational cost, as confirmed by the final box count. For σ ≥ 0.03, the final box count does not change significantly, since almost all of the potential well is discretized to the highest resolution in the final iteration. The computational efficiency decreases, as expected, as σ increases, since higher σ-values result in larger basin areas with a probability smaller than one, which require a more refined discretization. A significant economy would be observed if further iterations were considered in such cases. However, the probability space should also be refined; otherwise, the quality of the results would not improve.

Fig. 8
figure 8

Dependence of the final common partition \(\overline{{\mathbb{B}}}_{8}\) of the Helmholtz oscillator for all \(\lambda \)-values as a function of the scaling parameter σ for A = 0.06

Figure 9 shows the variation of the Helmholtz oscillator normalized basins’ areas as a function of the scaling parameter σ for A = 0.06 and selected probability thresholds, quantifying the integrity of the system with parameter uncertainty. The weighted normalized basins’ areas are computed as

$$ \frac{\int\limits_{{\mathbb{X}}} \mathrm{id}_{\left\{ p;1 \right\}} \left( g \right) g \, dx}{\int\limits_{{\mathbb{X}}} dx}, $$
(30)

where g is a stochastic basin of attraction, \({\mathrm{id}}_{\left\{p;1\right\}}\left(g\right)\) is an indicator function equal to 1 if \(g\in \left\{p;1\right\}\) and zero otherwise, and p is the assumed probability threshold, between 0 and 1. In the deterministic limit (no uncertainty or noise, and infinite resolution), g itself reduces to the indicator function of the basin, and Eq. (30) reduces to the GIM definition in [84]. Furthermore, this expression is a particular case of Eq. (44) of [30], with \({\rho }_{pert}(x)\) taken as a uniform density over the phase-space window \({\mathbb{X}}\).
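On a uniform grid over \({\mathbb{X}}\), Eq. (30) reduces to a thresholded average; a minimal sketch, with hypothetical names:

```python
import numpy as np

def weighted_basin_area(g, p):
    """Eq. (30) on a uniform grid: cells where the stochastic basin g is at
    least the threshold p contribute with weight g; normalization by the
    cell count plays the role of the phase-space window area."""
    g = np.asarray(g, float)
    return np.where(g >= p, g, 0.0).sum() / g.size
```

In the deterministic limit g is 0/1-valued, and for any p in (0, 1] this returns the classical GIM, i.e., the fraction of the window occupied by the basin.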

Fig. 9
figure 9

Variation of the Helmholtz oscillator basins area as a function of the scaling parameter σ for A = 0.06, showing various probability thresholds (color bar). (colors in the online version)

A probability threshold close to 1 is a conservative selection in terms of evaluation of actual integrity, while a threshold of 0 would provide the area of the entire phase-space \({\mathbb{X}}\). Of course, a probability threshold close to 1 actually corresponds to the maximal integrity only for vanishing parameter uncertainty (σ = 0), i.e., in the deterministic case. When the parameter uncertainty increases, the conservative probability-1 threshold provides notably reduced values of integrity, with correspondingly higher values being attained only with meaningfully lower (and thus not conservative) probability thresholds. This result shows the importance of such an analysis in real applications, where parametric variability is almost surely present.

Finally, Fig. 10 presents a validation of the results obtained so far. Figure 10a shows the probability density estimated from a Monte Carlo experiment considering 100,000 initial conditions uniformly distributed over the phase-space window with σ = 0.04. Each response is integrated up to t = 1000 T, demonstrating the influence of the parameter uncertainty on the Poincaré sections of the two attractors. The results agree with the attractors’ distribution, Fig. 10b, and with the bifurcation diagram with respect to the support α of the uncertainty parameter, Fig. 10c, in terms of the attractor shape (plane curves), size, and probability distribution, thus matching the operator results and confirming the present methodology.

Fig. 10
figure 10

a Probability density estimated from a Poincaré section at t = 1000 T using 100,000 trajectories of the Helmholtz oscillator initially uniformly distributed over \({\mathbb{X}}\), b attractors’ mean distributions, and c bifurcation diagram, for A = 0.06 and σ = 0.04. (colors in the online version)

Lastly, a remark on why special care is needed for systems with parametric uncertainty. We also considered a basic formulation in which a mean transfer matrix is computed, from which a candidate mean fixed space is obtained. This hypothetical mean transfer matrix is given by

$$ \overline{p}_{ij} = \int\limits_{{\mathbb{L}}} {\frac{{P_{x} \left( {b_{i} \cap \varphi^{ - 1} \left( \lambda \right)b_{j} } \right)}}{{P_{x} \left( {b_{i} } \right)}}d\lambda } , $$
(31)

and Eqs. (15) and (17) become parameter independent. Algorithm 1 of Sect. 3.1 is applied to a case with A = 0.06 and σ = 0.04, giving the result in Fig. 11, which is completely different from Fig. 7b.
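After discretizing the parameter space, the averaging in Eq. (31) amounts to a weighted mean of the per-parameter transition matrices; a hedged sketch (function name and the uniform rectangle-rule weights are assumptions):

```python
import numpy as np

def mean_transfer_matrix(P_list, weights=None):
    """Average the row-stochastic matrices P(lam_k), one per parameter value,
    with rectangle-rule weights; the result is again row-stochastic."""
    P = np.asarray(P_list, float)
    if weights is None:
        weights = np.full(len(P), 1.0 / len(P))  # uniform weights by default
    return np.tensordot(weights, P, axes=1)
```

Iterating this single averaged matrix effectively redraws the parameter at every period, which is why the formulation produces noise-like diffusion instead of the mean structures of the original parametrically uncertain problem.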

Fig. 11
figure 11

Helmholtz oscillator mean attractor distributions (first color bar) and mean basins of attraction (second color bar) for A = 0.06, σ = 0.04. Obtained from mean transfer matrix formulation. (colors in the online version)

A diffusion pattern, characteristic of stochastic dynamics, is observed, and only one distribution is obtained, instead of the two expected solutions (nonresonant and resonant). Moreover, this result does not match the Monte Carlo experiment of Fig. 10, showing that formulation (31) does not actually represent the original problem with parametric uncertainty. Instead, this noise-like behavior suggests that formulation (31) resamples the random parameter at each iteration of the map, akin to a stochastic process where a new value is randomly selected at each period. This demonstrates why parametric uncertainty must be addressed distinctly, with the methodology proposed in Sect. 3.2.

4.3 Effects of additive white noise

The noise-induced dynamics is considered next. The same time-integration parameters are adopted; however, the noise requires specialized integrators, so a stochastic version of the fourth-order Runge–Kutta method is adopted for the construction of the flow \(\varphi \left(\omega \right)\) [66], again with time-step T/200. The metric dynamical system \({\theta }_{i}\) driving the stochastic flow is given by the integral of the standard white noise \(\dot{W}\). From the discrete point of view, this integral results in a normal random variable with variance T, sampled and added to the system state at each section [53]. The transfer matrix of the noise-driven system is the Foias operator, whose matrix representation \({p}_{ij}\left(\lambda \right)\) in Eq. (14) reduces to

$$ \begin{gathered} p_{ij} = \frac{1}{P_{x} \left( b_{i} \right)}\int\limits_{b_{i}} \left\{ \int\limits_{\Omega_{x} \left( b_{j} \right)} dP_{\omega} \right\} dx, \hfill \\ \Omega_{x} \left( b_{j} \right) = \left\{ \omega \in \Omega : \varphi \left( \omega \right) x \in b_{j} \right\} \hfill \\ \end{gathered} $$
(32)

where the dependency on \(\lambda \in {\mathbb{L}}\) is suppressed since σ = 0. The probability integral in Eq. (32) is solved by the Monte Carlo method: ten noise samples for each initial condition in each box are considered to compute \({p}_{ij}\). Again, a sink box is defined to detect escaped solutions. This procedure is adopted in all stochastic problems in this study.
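The Monte Carlo estimate of Eq. (32) can be sketched as follows. The box representation, the `flow` callback advancing the state one excitation period under a sampled noise path, and the sample counts are illustrative placeholders; only the sampling logic follows the text:

```python
import numpy as np

def estimate_transfer_matrix(boxes, flow, n_ic=5, n_noise=10, seed=None):
    """Monte Carlo sketch of Eq. (32): sample initial conditions uniformly in
    each box, draw noise realizations, and count the landing boxes. The last
    index is an absorbing sink box collecting escaped solutions."""
    rng = np.random.default_rng(seed)
    n = len(boxes)
    P = np.zeros((n + 1, n + 1))
    P[n, n] = 1.0                        # sink box is absorbing
    for i, (lo, hi) in enumerate(boxes):
        for _ in range(n_ic):
            x0 = rng.uniform(lo, hi)     # uniform initial condition in box i
            for _ in range(n_noise):     # noise samples per initial condition
                x1 = flow(x0, rng)
                j = next((k for k, (l, h) in enumerate(boxes)
                          if np.all(x1 >= l) and np.all(x1 < h)), n)
                P[i, j] += 1.0
        P[i] /= n_ic * n_noise           # rows are stochastic by construction
    return P
```

In the text, ten noise samples per initial condition are used, corresponding to `n_noise=10`.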

Figure 12 shows the results for the standard deviations s = 0.002 and s = 0.004. The influence of noise on the basin boundary is small. The basin structures present a pattern similar to the mean parameter results, with uncertainty associated only with initial conditions close to the boundaries. The crucial difference is the diffusion of the attractors’ distributions over the phase space as the standard deviation increases. Again, the resonant solution is more affected than the nonresonant one, with the attractor spreading over a larger area and approaching the basin boundary, thus indicating a decrease in dynamic integrity and possible disappearance under increasing noise. For s = 0.006, the resonant solution is destroyed, see Fig. 13a, and only the nonresonant solution and basin remain, now including all initial conditions previously occupied by the two coexisting basins, with a sudden but localized increase of dynamic integrity. Indeed, as the noise intensity increases even further, s = 0.010, solutions starting in the former resonant region begin to escape, as indicated by the gray area in Fig. 13(b.2), which corresponds to the area with a probability lower than one in Fig. 13(b.1). Figures 12 and 13 also show the steady spreading of the nonresonant attractor with increasing white noise standard deviation.

Fig. 12
figure 12

Influence of increasing white noise standard deviation \(s\) on the stochastic basins of attraction (second color bar) and attractors distribution (first color bar) of the Helmholtz oscillator for A = 0.06. Nonresonant vs resonant. (colors in the online version)

Fig. 13
figure 13

Influence of increasing white noise standard deviation s on the stochastic basins of attraction (second color bar), attractors distribution (first color bar), and escape regions (third color bar) of the Helmholtz oscillator for A = 0.06. Bounded attractor vs escape. (colors in the online version)

The effect of noise on time responses and power spectra is now addressed. For comparison, Fig. 14 shows the deterministic case, with A = 0.06 and s = 0, for both attractors. Both power spectra present peaks at the fundamental excitation frequency, ω = 0.81, and its superharmonics. The resonant solution, Fig. 14b, presents a richer spectrum with a higher number of excited harmonics. Figure 15 displays, for s = 0.002 and 0.004, the sample means, in black, and ten sampled time responses in gray. The results show that the white noise masks the lower-power higher harmonics of individual samples, while they remain present, although with reduced power, in the sample means. The nonresonant results for s = 0.006 and s = 0.010 are displayed in Fig. 16. The effect of increasing noise is observed, masking both the fundamental frequency and its harmonics. The resonant attractor for these cases is destroyed, as demonstrated by the basins of attraction in Fig. 13, and, therefore, it does not have a stationary power spectrum.
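The comparison between individual noisy spectra and the sample-mean spectrum can be reproduced with a plain FFT periodogram (a sketch; the authors' exact spectral estimator is not specified in the text):

```python
import numpy as np

def sample_and_mean_spectra(samples, dt):
    """Power spectra of individual noisy responses (rows of `samples`) and of
    their sample mean. Incoherent noise partially averages out in the mean,
    so harmonics masked in single samples can reappear there."""
    samples = np.asarray(samples, float)
    freqs = np.fft.rfftfreq(samples.shape[1], dt)
    psd_each = np.abs(np.fft.rfft(samples, axis=1)) ** 2  # one PSD per sample
    psd_mean = np.abs(np.fft.rfft(samples.mean(axis=0))) ** 2
    return freqs, psd_each, psd_mean
```

Plotting `psd_each` in light gray and `psd_mean` in black reproduces the presentation style of Figs. 15 and 16.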

Fig. 14
figure 14

Time responses and power spectrum of the Helmholtz oscillator for A = 0.06 and s = 0. Nonresonant initial condition: (1.0; 0.13), resonant initial condition: (0.3; − 0.13)

Fig. 15
figure 15

Power spectrum of the Helmholtz oscillator for A = 0.06 and varying noise intensity. Light gray: 10 sample solutions; black: sample mean solution. Nonresonant initial condition: (1.0; 0.13), resonant initial condition: (0.3; − 0.13)

Fig. 16
figure 16

Power spectrum of the Helmholtz oscillator for A = 0.06 and increasing noise intensity. Light gray: 10 sample solutions; black: sample mean solution. Nonresonant initial condition: (1.0; 0.13)

The loss of stability of the resonant solution is identified by eigenvalues of \({p}_{ij}\) slightly smaller than one. They correspond to long-transient solutions, that is, solutions taking a long time to converge to a given attractor. The influence of noise on the transient responses can be observed in Fig. 17. For small noise intensity, s = 0.006, the resonant solution takes a rather long time to converge to the nonresonant solution, see Fig. 17a. This corresponds to an eigenvalue of \({p}_{ij}\) very close to one; the value obtained for the corresponding case, Fig. 13a, is 0.999990835. For s = 0.010, the convergence time is reduced. However, solutions starting from the resonant region can either converge to the nonresonant solution, Fig. 17b, or escape, Fig. 17c, with different probabilities. Again, this result corresponds to the one observed in the basin analysis, Fig. 13b. The eigenvalue is smaller, with a value of 0.993246847, corroborating the observed reduction in convergence time.
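The connection between a subunit eigenvalue and the transient duration can be made quantitative: the corresponding eigendistribution decays by a factor equal to the eigenvalue at each map iteration (one excitation period), so its e-folding time is \(-1/\ln \lambda \) periods. A one-line sketch of this standard estimate, which is not spelled out in the text:

```python
import numpy as np

def transient_lifetime(eigenvalue):
    """e-folding time (in excitation periods) of the transient associated
    with a transfer-matrix eigenvalue slightly smaller than one: the mode
    decays as eigenvalue**n, hence the timescale -1/ln(eigenvalue)."""
    return -1.0 / np.log(eigenvalue)
```

With the eigenvalues quoted above, this gives roughly 1.1 × 10⁵ periods for 0.999990835 and about 148 periods for 0.993246847, consistent with the observed reduction in convergence time.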

Fig. 17
figure 17

Helmholtz oscillator’s resonant attractor long-time transient response due to high noise intensity for A = 0.06. Resonant initial condition: (0.3; − 0.13)

As shown by the previous results, the noise leads to uncertainty along the basin boundary, where the probability is less than one. As in the deterministic case, the noisy transient response becomes longer as initial conditions are placed farther from the attractor. The time-dependency of the basins of attraction is demonstrated in Fig. 18 for A = 0.06 and s = 0.010. Values of ε ≈ 1 (respectively, ε ≈ 0) correspond to a small (respectively, large) time-horizon, identifying regions where the time response converges in the mean sense to a given attractor after a short (respectively, long) time interval. The former corresponds to a small region surrounding the attractor, see Fig. 18(a.1, b.1). As ε decreases, the time horizon increases, and the obtained basin approaches its maximum size asymptotically. This is clear in Fig. 18(a.2, a.3) and Fig. 18(c.2, c.3), where the basin stabilizes at its final configuration. For this noise intensity, there is no resonant attractor in the classical sense, with solutions decaying to the nonresonant attractor or escaping. Figure 18b demonstrates what happens with the resonant region. Initially, solutions converge to the region where the resonant attractor exists for lower noise intensities, as demonstrated by the increase in basin area from Fig. 18(b.1) to Fig. 18(b.2). However, for large time-horizons, the supposed resonant basin decays to zero, see Fig. 18(b.3). To obtain the asymptotic basin of attraction for this noise level with methods based on time integration, the number of periods of integration would be prohibitively large. Furthermore, if time-horizons smaller than \({10}^{4}\) (\(\varepsilon >{10}^{-4}\)) were considered, the resonant region would mistakenly be interpreted as a basin, when it is in fact a set of initial conditions with a long transient.
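The time-horizon dependence of the basins can be emulated by iterating the discretized transfer matrix a finite number of times; a minimal sketch (names are hypothetical, and the horizon is taken as roughly 1/ε iterations, matching the convention that small ε means a long horizon):

```python
import numpy as np

def finite_horizon_basin(P, target_boxes, eps):
    """Probability of occupying `target_boxes` after about 1/eps iterations
    of the row-stochastic transfer matrix P, for each starting box."""
    n_steps = int(round(1.0 / eps))
    g = np.zeros(P.shape[0])
    g[target_boxes] = 1.0
    for _ in range(n_steps):
        g = P @ g                 # backward iteration: g_i <- sum_j P_ij g_j
    return g
```

For a long-transient region taken as the target, the returned probability first grows with the horizon and then decays toward zero as ε decreases further, reproducing the behavior seen in Fig. 18b.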

Fig. 18
figure 18

Dependency of the stochastic basins of attraction (color bars) on the final time horizon 1/ε for A = 0.06, s = 0.010. (colors in the online version)

Long transients lead to large computation times when the asymptotic response is obtained by usual time integration techniques. However, the proposed phase-space subdivision procedure can identify and separate these solutions from the true asymptotic behavior. Figure 19 contains the corresponding eigendistribution for the resonant solution, which, however, is not strictly a distribution but a long transient. This is shown in Fig. 19b, where negative (blue) and positive (red) regions, each with absolute value \(\left|f\right|=0.5\), are separated. The former represent regions where solutions stay for a long time before decaying to the permanent solution (nonresonant attractor, in red, or escape), as already observed in the basins of attraction, Fig. 13b, and time responses, Fig. 17(b, c). Indeed, according to Dellnitz and Junge [26], there are two scenarios in which almost invariant sets can be observed. The first case occurs when cyclic components of a periodic attractor collide; specifically, the cyclic components’ eigenvalues change from an absolute value of one to less than one. Only one attractor is involved in this process, changing its periodicity to an almost periodicity. The second case refers to the collision of two or more attractors, with at least one of them changing its eigenvalue from an absolute value of one to less than one. The attractor whose eigenvalue changes loses stability, exhibiting a long transient solution. In this example, the resonant attractor loses stability by colliding, with different probabilities (see Fig. 18(a, c) for long time horizons), with both the nonresonant attractor and the escape solution. A possible triple collision between the three distinct solutions, after which only two remain stable, may also occur for a very specific (i.e., coincident) probability value.
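The sign-structure criterion underlying Fig. 19b can be sketched as follows (a minimal illustration of the Dellnitz–Junge idea; distributions evolve as left eigenvectors of the row-stochastic matrix, i.e., eigenvectors of its transpose):

```python
import numpy as np

def almost_invariant_split(P):
    """Split the boxes into two almost invariant sets by the sign of the
    second eigendistribution of the row-stochastic matrix P (the eigenvector
    of P.T whose eigenvalue has the second-largest real part)."""
    vals, vecs = np.linalg.eig(P.T)
    order = np.argsort(-vals.real)       # eigenvalues sorted by real part
    v2 = vecs[:, order[1]].real          # second eigendistribution
    return v2 >= 0, v2 < 0               # the two almost invariant sets
```

When the second eigenvalue is close to one, as for the long-transient resonant solution above, the two sign regions exchange probability only slowly, which is exactly the almost invariance observed in the eigendistribution.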

Fig. 19
figure 19

Helmholtz oscillator’s almost permanent eigendistribution for A = 0.06, s = 0.010. (colors in the online version)

The proposed measure to quantify the system’s integrity under various noise intensities is presented in Fig. 20, for \(\varepsilon ={10}^{-8}\). Again, for each attractor, the integrity is computed according to Eq. (30). The resilience of the nonresonant attractor against the noise and the integrity loss of the resonant attractor for s ≥ 0.006 are clearly observed. Therefore, the proposed procedure can be used to quantify the influence of noise on any integrity measure.

Fig. 20
figure 20

Variation of the Helmholtz oscillator basins area as a function of the noise intensity s for A = 0.06, showing various probability thresholds (color bar). Time-horizon 1/ε = \({10}^{8}\). (colors in the online version)

A comparison with a Monte Carlo experiment is presented in Fig. 21. The probability density estimated through 10,000 initial conditions uniformly distributed over the phase-space window with s = 0.004, integrated up to t = 100000 T, is presented in Fig. 21a. The black areas represent high-density regions. They agree with the attractors’ distribution, Fig. 21b, obtained from the proposed methodology, validating the present strategy.

Fig. 21
figure 21

a Probability density estimated from a Poincaré section at t = 100000 T of 10,000 trajectories of the Helmholtz oscillator initially uniformly distributed over \({\mathbb{X}}\), b attractors’ mean distributions, for A = 0.06 and s = 0.004. (colors in the online version)

5 Conclusions

The presence of uncertainties in engineering systems is unavoidable and can drastically change their behavior. Furthermore, noise is inevitable in the operational stages. Here, an adaptive phase-space discretization strategy for the global analysis of deterministic, parametrically uncertain, and stochastic nonlinear dynamical systems with competing attractors was developed to quantify the uncertainty effects in such systems.

Rudiments of global dynamics were presented, and the implications of nondeterminism on global structures, namely attractors and basins, were addressed. Then, generalized global operators were presented. Their fixed space corresponds to attractors’ distributions and basins observables in the mean sense over the phase-space only, obtained by integrating over the nondeterministic spaces. The generalized strategy of operator discretization was presented, resulting in a row-stochastic matrix, together with vector representations of invariant distributions and basins, that is, attractors’ distributions and basins observables, respectively.

The Ulam method is known to display numerical diffusion due to the phase-space discretization. This can be remedied by refining the discretization, at the expense of computational cost. Here, an adaptive discretization scheme that refines only the most impactful regions, namely basins observables’ boundaries and attractors distributions’ supports, was proposed. The strategy was summarized into three main steps: identification, refinement, and update. Simple heuristics were adopted for the identification of basins observables’ boundaries and attractors distributions’ supports, requiring only the computed stochastic matrix and fixed space. The refinement is the most complicated step, and a technique based on a tree data structure to organize the phase-space subdivision was adopted. Flow maps of refined regions can be calculated, and the corresponding dynamical system’s transfer operator can be updated. The procedure is conducted for a predefined number of iterations, resulting in a phase-space discretization with adaptive resolution. It is easily applied to stochastic dynamics through Monte Carlo, whereas the mean structures for parametric uncertainty dynamics are only attained through integration of the attractors and basins over the parametric uncertainty space. In this last respect, a simple numerical procedure, based on the rectangle rule and branch identification through the Lukaszyk-Karmowski metric, was adopted to solve the integral over the parameter space and compute the mean structures.

Lastly, the Helmholtz oscillator under harmonic excitation was investigated. The deterministic analysis displayed three possible outcomes depending on the excitation amplitude: a small amplitude (i.e., nonresonant) attractor, a large amplitude (i.e., resonant) attractor, and escape solutions. The adaptive discretization procedure was able to obtain attractors and basins’ boundaries with high fidelity, even when the latter become fractal and intermingled. The subdivision strategy has proven capable of mitigating the numerical diffusion, a common hindrance inherent to many phase-space discretization procedures. A comparison with the initially refined discretization showed that the economy achieved by the proposed procedure can be as high as 90% for highly refined phase-spaces. Next, the Helmholtz oscillator with a random stiffness parameter was considered, with the uncertainty parameter defined as a truncated normal variable to prevent large spurious values. Mean basins and distributions were obtained for varying uncertainty intensity, with the attractors’ distributions described by one-dimensional structures in the phase-space. As the uncertainty increases, broader regions along the basins’ boundaries need to be refined. Here, the economy of the proposed methodology was verified through a box count procedure. The results quantify the decrease of the safe basin area of both attractors, particularly the resonant one, with increasing uncertainty. For high uncertainty values, no set of initial conditions has a 100% probability of converging to the resonant attractor. The results were validated by a Monte Carlo analysis, demonstrating the efficiency of the proposed methodology. In turn, increasing the excitation noise entails a two-dimensional diffusion of the attractors, particularly affecting the resonant one, which approaches the basin boundary. This leads to a global bifurcation due to a connection between the resonant attractor and the hilltop saddle. After this bifurcation, the resonant basin vanishes and solutions either converge to the nonresonant attractor or escape. The detailed analysis of the global bifurcation shows that formerly resonant solutions become long transients above a critical noise intensity. Long transient solutions are detected by the almost invariant eigendistributions, which identify regions where solutions stay for a long time, with basins of attraction varying with the final time horizon. The variation of the resonant basin area with noise intensity displays a characteristic Dover cliff profile, with a sudden drop to zero. Overall, considering parametric uncertainty and noise meaningfully affects the basin area and compactness, directly influencing the system’s global stability, with effects to be carefully evaluated from a design perspective. The matter will be explored in future investigations, along with the possible exploitation of control strategies (see, e.g., [85]) to increase the dynamic integrity of given attractors.

The investigated system shows that the adaptive discretization procedure can efficiently address dynamics with multiple attractors. Additionally, the methodologies for parametric uncertainty and stochasticity are essential to correctly analyze each case and understand the observed phenomena. Finally, the weighted basin area is able to quantify the integrity of nondeterministic cases and is also the most natural generalization of the global integrity concept. We expect to apply these strategies to dynamical systems representing real engineering problems, also addressing the convergence and limitations of the proposed algorithms.