Abstract
An adaptive phase-space discretization strategy for the global analysis of stochastic nonlinear dynamical systems with competing attractors, considering parameter uncertainty and noise, is proposed. The strategy is based on the classical Ulam method. The appropriate transfer operators for a given dynamical system are derived and applied to obtain and refine the boundaries of the basins of attraction and the attractor distributions. A review of the main concepts of parameter uncertainty and stochasticity from a global dynamics perspective is given, and the necessary modifications to the Ulam method are addressed. The definition of stochastic basin of attraction used here replaces the usual basin concept: it quantifies the probability that the response associated with a given set of initial conditions converges to a particular attractor. For the case of parameter uncertainty, the phase-space dimension is augmented to include the extra dimensions associated with the parameter space, its size being a function of the number of uncertain parameters. The expanded space is discretized, resulting in a collection of transfer operators that enable obtaining the required statistics. For the stochastic case, a Monte Carlo procedure is conducted to construct the proper transfer operator. An archetypal nonlinear oscillator with noise and uncertainty is investigated in depth through the proposed strategy, showing a significant reduction in computational cost.
1 Introduction
The many sources of uncertainty in engineering are generally classified as aleatory or epistemic. Design under uncertainty needs to account for both the former, such as variability in material properties, and the latter, which includes errors due to imperfect analysis tools. In real-life applications, both uncertainties are present simultaneously, and their combined effect should be considered for a safe design. In mathematics, uncertainty is often characterized in terms of a probability distribution, with epistemic uncertainty meaning not being sure of the assumed distribution, and aleatory uncertainty meaning not being sure of a random sample drawn from it [1]. In physical problems, uncertainties can be of parametric type, where parameter values are unknown to some degree; structural type, meaning a lack of knowledge of the underlying mechanics; algorithmic type, coming from numerical errors/approximations in the computer model; or experimental type, arising from measurement variability and/or interpolation errors due to a lack of available data.
In the context of structural engineering, the need to include parameter uncertainties and noise in dynamic analyses has long been recognized [2, 3]. Here, uncertainties come from material and geometric parameters, boundary conditions, manufacturing tolerances, and external loads. In addition, deterioration or evolution of the structure during its lifetime leads to increasing uncertainties, which can affect vibration behavior. Parameters such as natural frequencies or damping are subject to uncertainty stemming from a lack of knowledge of parameter values and a lack of understanding of the system's actual behavior. The problem can be stated in a probabilistic framework to account for the uncertainty in system parameters, which leads to differential equations with coefficients modeled as random variables.
Various techniques have been developed for the analysis of uncertainties in structural problems. For an overview of classical methodologies, such as Monte Carlo sampling, perturbation analysis, moment equations, operator-based methods, Generalized Polynomial Chaos (GPC), stochastic Galerkin, and collocation, refer to Xiu [4]. More recent developments were devoted to mitigating the loss of accuracy of long time integration when employing usual expansions of the random space, and included the time-dependent GPC methodology [5], stochastic time-warping polynomial chaos, as well as nonlinear autoregressive polynomial chaos [6] and GPC with flow map composition [7]. Time-dependent uncertainty is also important in structural dynamics, for representing noisy loads and parametric excitations. Various sampling-based methods, where the governing systems are reformulated as stochastic differential equations, have been developed. Arnold [8] presents the mathematical foundation of the theory of random dynamical systems, stochastic bifurcations, and their multiplicative ergodic theory. Han and Kloeden [9] discuss the numerical simulation and analysis of random ordinary differential equations. These works point out that noisy excitation represents a major difficulty in uncertainty analysis, requiring the analyst to ponder the meaning of the results, whether numeric or analytic.
When considering nondeterministic effects, a physical problem may present many possible outcomes distributed in a probability space. Such distributions may evolve in time for dynamical systems, a phenomenon that is a dynamical system in itself, governed by a linear, positive, and density conserving transfer operator [10] of Markov type. Ulam [11] hypothesized that such transfer operators could be discretized and distributions approximated by histograms, formulating what is known as the Ulam method. Later, Hsu [12, 13] adopted an algorithmic perspective, developing the generalized cell-mapping, then proven to be equivalent to the Ulam method [14].
Several advances followed. Hsu and Chiu combined generalized cell-mapping and the previously developed simple cell-mapping into the so-called hybrid cell-mapping [15, 16]. In these works, there is already a separation between stochastic and parametric uncertainties, with specific methodologies to deal with them focused on global dynamics. However, a proper probabilistic framework is missing. Sun and Hsu [17] developed a short-time Gaussian approximation for nonlinear random vibration analysis. Han and coworkers explored this strategy extensively, considering nonautonomous cases [18] under colored noise [19], stochastic bifurcations in a turbulent swirling flow [20], and a combination with digraph algorithms [21]. Simple and generalized cell-mappings were recently reformulated by Yue et al. [22] into the so-called compatible cell-mapping, which employs adaptive refinement of the phase-space to increase the resolution of global attractors of random dynamical systems. In [23], this method was shown to refine stable and unstable manifolds, similar to the subdivision and selection method by Dellnitz and coworkers [24,25,26] but with digraph algorithms instead. Another cell-mapping method is designed with two distinct scales of cell spaces [27,28,29]. Similarities between the transfer probability distributions by Yue et al. [29] and the generalized committor functions by Lindner and Hellmann [30] are evident. However, the latter is adequate for transient analysis, describing how distributions evolve with time. Finally, the phase-space dimension of engineering problems demands High-Performance Computing (HPC), as described in [31,32,33]. Parallel computing strategies are fundamental, even employing general-purpose graphics cards (GPUs) to this end [34].
The Ulam method was the focus of various works. Klus et al. [35] compared different numerical approximations of the Perron-Frobenius operator and its dual, the Koopman operator. Dellnitz and coworkers [24,25,26] developed a subdivision strategy with box-covering to approximate complicated numerical behavior, implemented in the software package GAIO [36]. Further developments include the detection of transport barriers [37], the analysis of dynamical systems with parameter uncertainty [38], invariant sets of infinite-dimensional dynamical systems [39, 40], and a set-oriented path-following method for computation of parameter-dependent attractors [41]. Koltai and coworkers developed methods for global analysis without trajectory integration, focused on basins of attraction [42,43,44] and nonautonomous systems [45]. A comparison of data-driven model reductions for dynamical systems based on the approximation of the transfer operators is given in [46]. Froyland et al. [47] applied the Ulam method to the analysis of surface ocean dynamics, obtaining attractors and basins from real data. Ding and coworkers investigated the original Ulam method and approximations of the Perron-Frobenius operator by piecewise linear and quadratic functions [48] and higher-order approximations [49]. Junge et al. [50] investigated the spectrum of transfer operators of stochastically perturbed conservative maps. Most recently, Jin and Ding [51] and Bangura et al. [52] applied spline and least-squares approximation for random maps as well, specifically considering the Foias operator which governs the average flow of random maps [53]. One crucial limitation of phase-space discretization-based methods is the resulting numerical diffusion of the flow [42, 44]. Indeed, depending on the dynamics, a high resolution is necessary, increasing the computational cost significantly.
In systems displaying coexisting solutions with distinct basins of attraction, uncertainties and noise may cause jumping between competing attractors, as well as global bifurcations such as basin merging and basin instability. The interaction between previously separated basins is, in global dynamic terms, the focus of stochastic resonance. Depending on the basins' topology, predicting a system's outcome can be difficult even in the deterministic context, especially when highly intertwined basins or fractal boundaries are present. Uncertainties and noise are expected to induce further global changes, with the emergence of new dynamic phenomena, which may directly influence the concept of dynamic integrity [54,55,56]. For a safe analysis, it must be ensured that initial conditions indeed lie in the basins of attraction of the corresponding attractors, even in the presence of uncertainties and noise.
This work aims at presenting an adaptive set-oriented phase-space discretization method for the global analysis of nonlinear dynamical systems with competing attractors. Global phase-space operators are presented by considering (i) deterministic, (ii) stochastic, and (iii) parametric uncertainty dynamics, extending the results in [57]. Their discretization is conducted through the Ulam method [53] for the deterministic and stochastic cases. Mean results are obtained for parametric uncertainty dynamics through a discretization of their probability space. The adaptive discretization results in a sequence of operators with increasing refinement of the important regions, here taken as the supports of the attractors' distributions and the boundaries of the basins' observables. This local discretization reduces the computational cost and is therefore advantageous in comparison with a full phase-space discretization.
This paper is organized as follows. Section 2 summarizes basic concepts of stochastic global dynamics with parameter uncertainty and noise, based on the definitions of (random) dynamical systems theory, presenting the general phase-space operators and their discretization. Section 3 describes the proposed boundary and attractor refinement strategies for both deterministic and stochastic systems, and outlines the procedure for obtaining mean results for dynamical systems with parameter uncertainty. Section 4 deals with the forced Helmholtz oscillator as an archetypal model for the analysis of escape from a potential well, with applications ranging from ship capsize [58] to structures liable to asymmetric buckling [59, 60]. Effects of noise and parametric uncertainty are discussed, with an evaluation of the computational advantages of adaptive discretizations, validation through Monte Carlo experiments, and assessment of global dynamics via a newly defined nondeterministic integrity measure. The final section provides concluding remarks.
2 Stochastic global dynamics of systems with parameter uncertainty and noise
In this section, some concepts of stochastic global dynamics with parameter uncertainty or noise are briefly summarized, based on the definitions of dynamical systems theory. Specifically, definitions of stochastic attractors and stochastic basins, the operator formulation, the phase-space discretization, and the probability space discretization for the parametric uncertainty case are illustrated. Concepts and definitions already present in the literature are generalized by introducing the random dependence on the system parameters, which are commonly considered to be fixed and deterministic in nature.
2.1 Dynamical systems: a few aspects
Following Mezić and Runolfsson [61], attention is restricted to discrete time cases. This choice is motivated by the fact that information on continuous systems under periodic excitation can be obtained through Poincaré maps for both deterministic and noise-driven systems, see Lasota and Mackey [53], Sect. 8.1. Stroboscopic maps can be used in the analysis of parameter uncertainty dynamics, as well, when there is a periodic excitation. Therefore, the following discrete dynamical system is considered,
where \(x \in {\mathbb{X}}\) is the system state, \(\omega \in {\Omega }\) is the noise, and \(\lambda \in {\mathbb{L}}\) is the uncertain parameter. The usual depiction of this system is of an iterated map, \(x_{t + 1} = \varphi \left( {\theta^{t} \omega ,\lambda } \right)x_{t}\), with the state evolving from instant \(t\) to instant \(t + 1\), and the stochastic parameter \(\omega\) governed by a noise-model \(\theta^{t}\), while \(\lambda\) is fixed in time. It is useful to define the system state after t iterations through the composition of maps. For initial condition x and t iterations, the system state is given by \(\varphi^{t} \left( {\omega ,\lambda } \right)x = \varphi \left( {\theta^{t - 1} \omega ,\lambda } \right) \circ \ldots \circ \varphi \left( {\omega ,\lambda } \right)x\). The sequence \(\left\{ {\varphi^{t} \left( {\omega ,\lambda } \right)x|t = 0,1,2, \ldots } \right\}\) defines an orbit of the dynamical system (1) over \({\mathbb{X}}\) for each sample \(\omega\) and \(\lambda\), and initial condition x.
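To fix ideas, the orbit construction \(\varphi^{t}\left( {\omega ,\lambda } \right)x\) can be sketched in a few lines of code. The map below is a hypothetical noisy logistic-type map chosen purely for illustration (it is not the oscillator studied later); `lam` plays the role of the fixed uncertain parameter \(\lambda\), and the Gaussian increments stand in for the noise path \(\theta^{t}\omega\).

```python
import numpy as np

def orbit(x0, lam, noise, steps, rng):
    """Sample orbit {phi^t(omega, lam) x0 : t = 0, ..., steps} of an
    illustrative noisy map x_{t+1} = lam * x_t * (1 - x_t) + noise * xi_t."""
    xs = [x0]
    for _ in range(steps):
        xi = rng.normal()   # one increment of the noise path theta^t omega
        xs.append(lam * xs[-1] * (1.0 - xs[-1]) + noise * xi)
    return np.array(xs)

rng = np.random.default_rng(0)
path = orbit(x0=0.3, lam=3.2, noise=0.01, steps=200, rng=rng)
print(path.shape)   # (201,): the initial condition plus 200 iterations
```

Setting `noise=0` recovers the deterministic case, with a single orbit per initial condition; each new `rng` seed produces one sample \(\omega\).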
Some formalism is necessary to understand each particular case of dynamical system (1), i.e., deterministic, stochastic, and parametric uncertainty. It is assumed that all spaces are compact, metric, with corresponding Borel σ-algebras [30]. The phase-space, stochastic space, and uncertain parameter space are completely defined as \(\left( {{\mathbb{X}},{\mathfrak{B}},P_{x} } \right)\), \(\left( {{\Omega },{\mathfrak{F}},P_{\omega } } \right)\), and \(\left( {{\mathbb{L}},{\mathfrak{S}},P_{\lambda } } \right)\), respectively, with σ-algebras \(\mathfrak{B},\mathfrak{F},\mathfrak{S}\), Lebesgue measure \({P}_{x}\), and probability measures \({P}_{\omega }\) and \({P}_{\lambda }\). For fixed \(\omega \), a dynamical system \(\varphi \left( {\omega , \cdot } \right):{\mathbb{L}} \times {\mathbb{X}} \to {\mathbb{X}}\) is defined, with product measure \({P}_{x}\times {P}_{\lambda }\). The parameter \(\lambda \) is fixed, although randomly chosen according to \({P}_{\lambda }\), and the system evolution is deterministic. In the case of fixed \(\lambda \) (with a given value, not randomly chosen), the flow map \(\varphi \left( { \cdot ,\lambda } \right):\Omega\times{\mathbb{X}} \to {\mathbb{X}}\) is defined as a random dynamical system, forming a cocycle over \({\theta }^{t}\) with product measure \({P}_{x}\times {P}_{\omega }\). The randomness evolves with the system, changing at each time-step t. This last case is much more involved, and the reader can find technical details in Arnold [8]. If both \(\omega \) and \(\lambda \) are fixed, the system becomes deterministic, with flow map \(\varphi \left( {\omega ,\lambda } \right):{\mathbb{X}} \to {\mathbb{X}}\) and Lebesgue measure \({P}_{x}\). Finally, the notion of phase-space volume given by the measure \({P}_{x}\) is crucial for the definition of Milnor attractors [62], minimal attractors [63], set attractors [64], ε-committor functions [30], or any set-attractive phase-space region.
2.2 Random attractors and basins
In a global dynamic analysis, the coexisting attractors and their basins are the main tools to understand the system behavior and safety. Without going into technical details, we can define attractors \(A\) as subsets of \({\mathbb{X}}\) that attract some or all initial conditions asymptotically and are resilient to infinitesimal perturbations (Lyapunov stable) [30]. Another important definition is given by Milnor [62], where the stability criterion is dropped in favor of measurability of the basin of attraction. In this case, attractors are sets whose basins are observable, with generalized volume greater than zero. Milnor attractors were extended pointwise to random dynamical systems \(\varphi \left(\cdot ,\lambda \right)\) by Ashwin [63]. That is, attractors \(A\left(\cdot ,\lambda \right)\) are functions of the noise sample \(\omega \) and are therefore random variables. Arnold [8] and Ochs [64] expanded the classic definition by imposing convergence in probability in the pullback and pushforward sense, respectively, but still pointwise with respect to the noise sample \(\omega \). For the parametric uncertainty case, no similar definitions are found in the literature. This could be motivated by the fact that an uncertain-parameter system is a collection of deterministic dynamics, one for each \(\lambda \in {\mathbb{L}}\), with attractor and basin statistics obtained through Monte Carlo or other techniques, see [65]. Still, it is important to emphasize the distinction between these two cases by explicitly writing attractors \(A\left(\omega ,\cdot \right)\) as functions of the random parameter \(\lambda \).
Lindner and Hellmann [30] also explored the implications of stochasticity for the definition of a basin of attraction. They noticed the relation between basins of attraction and expected mean sojourn time (expected time that a system spends in a certain state) and focused on how to quantify the transient stability of stochastic systems. The procedure starts from the phase-space region of an attractor’s distribution \({f}_{A}\left(x;\lambda \right)\), given by \({\mathrm{id}}_{A\left(\lambda \right)}=\mathrm{supp}\left\{{f}_{A}\left(x;\lambda \right)\right\}\), for a fixed \(\lambda \). The probability that the (Hausdorff semi-) distance between a trajectory \({\varphi }^{t}\left(\omega ,\lambda \right)x\) and \({\mathrm{id}}_{A\left(\lambda \right)}\) vanishes after \(1/\varepsilon - 1\) iterations is given by an ε-committor function,
In other words, this is the probability that \(x\) converges to \(A\left( \lambda \right)\) after \(1/\varepsilon - 1\) iterations. It is a viable definition of the basin of attraction, differing from Eq. (30) of Lindner and Hellmann [30] by the inclusion of the random parameter \(\lambda \). They also defined the quantity \(1/\varepsilon \) as the mean time-horizon, so transient states can be checked by varying \(\varepsilon \). This is important because attractors can become long transients under stochastic excitation, that is, \(\underset{\varepsilon \to 0}{\mathrm{lim}}\,{g}_{A}\left(\varepsilon ,x;\lambda \right)=0\), see [30, 57, 66]. Furthermore, the asymptotic case \({g}_{A}\left(0,x;\lambda \right)={g}_{A}\left(x;\lambda \right)\) corresponds to the classical, deterministic basin of attraction, with value 1 for \(x\) inside the basin and 0 otherwise. Finally, the functions \({g}_{A}\left(\varepsilon ,x;\lambda \right)\) are observables in the \(L^{\infty } \left( {\mathbb{X}} \right)\) space, a fact that is explored in the transfer operator formulation later in the text. Throughout this work, this is the adopted definition of the basin of attraction.
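The ε-committor of Eq. (2) can be estimated directly by Monte Carlo: sample noise paths, iterate \(1/\varepsilon - 1\) steps, and count arrivals in the support of the attractor distribution. The sketch below assumes an illustrative double-well map (not the paper's oscillator) whose right well near \(x = +1\) plays the role of \(A\left( \lambda \right)\); the step size, window width, and noise level are arbitrary choices.

```python
import numpy as np

def committor_estimate(x0, eps, sigma, n_samples, rng):
    """Monte Carlo estimate of g_A(eps, x; lambda): the fraction of noise
    samples for which the trajectory started at x0 lies in the support of
    the attractor distribution after 1/eps - 1 iterations."""
    steps = int(round(1.0 / eps)) - 1      # mean time-horizon 1/eps
    h = 0.1                                # illustrative Euler step size
    hits = 0
    for _ in range(n_samples):
        x = x0
        for _ in range(steps):
            x = x + h * (x - x**3) + sigma * rng.normal()
        if abs(x - 1.0) < 0.2:             # inside supp{f_A}: the right well
            hits += 1
    return hits / n_samples

rng = np.random.default_rng(1)
g = committor_estimate(x0=0.8, eps=0.01, sigma=0.05, n_samples=200, rng=rng)
print(0.0 <= g <= 1.0)   # a probability in [0, 1]
```

In the deterministic limit `sigma=0` the estimate collapses to 0 or 1, recovering the classical basin indicator; decreasing `eps` probes longer time-horizons.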
2.3 Generalized transfer operators: attractor distribution and basin observable
As stated by Ashwin [63], attractors and basins can be interpreted pointwise for systems with stochastic and uncertain parameters \(\left(\omega ,\lambda \right)\). Definition (2) is a statistic of the basins with respect to the noise, but the dependence on the parameter \(\lambda \) still exists. The global view of such systems, computing mean results in the product space \({\Omega } \times {\mathbb{L}}\), is explained here.
The suitability of transfer operators to obtain attractors and basins, and therefore a global view of the dynamics of deterministic and stochastic systems, has been highlighted in recent years [30, 43, 57, 67,68,69], replacing the usual algorithm-based descriptions, such as grid of starts, Monte Carlo, and simple and generalized cell-mappings. Here, the transfer and composition operators are generalized to systems with both noise and parametric uncertainty by assuming a one-to-one relation between dynamics \(\varphi \left(\omega ,\lambda \right)\) and elements of \(\left( {{\mathbb{L}},{\mathfrak{S}},P_{\lambda } } \right)\). This assumption allows the definition of one time-step transfer operators over the space of distributions \(L^{1} ({\mathbb{X}})\) associated with (1), given by
where \({\Omega }_{x} \left( {\lambda ;B} \right) \subseteq {\Omega }\) is the set of all \(\omega \)-values for which the flow is in \(B \in {\mathfrak{B}}\), for any \(\lambda \in {\mathbb{L}}\), and \(f:{\mathbb{X}} \to {\mathbb{R}}^{ + }\) is an absolutely integrable function over \({\mathbb{X}}\), called a distribution, which for our applications will be a probability density function. For any \(\lambda \in {\mathbb{L}}\), \({\mathcal{P}}\left( \lambda \right)\) is a Markov operator, being linear, positivity-preserving, and norm-preserving, with spectral radius equal to one [53]. For systems with only noise, there is only a single \(\lambda \)-value, and \(\mathcal{P}\left(\lambda \right)\equiv \mathcal{F}\) is a Foias operator [53, 57]; for systems with only parametric uncertainty, there is only a single \(\omega \)-value, and \(\mathcal{P}\left(\lambda \right)\) is a Perron-Frobenius operator that is also a function of \(\lambda \); for deterministic systems, only single values of \(\lambda \) and \(\omega \) are defined, and \(\mathcal{P}\left(\lambda \right)\equiv \mathcal{P}\) is a single Perron-Frobenius operator. Therefore, \(\mathcal{P}\left(\lambda \right)\) is a generalization of the Foias operator [53, 57], covering deterministic, stochastic, and parametric uncertainty dynamics.
The dual operator of \(\mathcal{P}\left(\lambda \right)\) can also be obtained by generalizing its usual definition for stochastic systems to parametric uncertainty systems. Specifically, this composition operator, which is referred also as a Koopman operator, is defined over the space of observables \(L^{\infty } \left( {\mathbb{X}} \right)\) and given by
for any \(\lambda \in {\mathbb{L}}\), and any \(g:{\mathbb{X}} \to {\mathbb{R}}^{ + }\), which is an essentially bounded function over \({\mathbb{X}}\), called an observable. The duality relation is defined pointwise, given by
for any \(\lambda \in {\mathbb{L}}\). The operators \(\mathcal{P}\left(\lambda \right)\) and \(\mathcal{K}\left(\lambda \right)\) define linear functional maps over \(L^{1} \left( {\mathbb{X}} \right)\) and \(L^{\infty } \left( {\mathbb{X}} \right)\), respectively, written as
for any \(\lambda \in {\mathbb{L}},\,t \in {\mathbb{N}}\). Systems (6) and (7) offer a global view of trajectories over \({\mathbb{X}}\), governing mean results with respect to the noise (space \(\left(\Omega ,\mathfrak{F},{P}_{\omega }\right)\)), but still distributed according to the uncertain parameter \(\lambda \). Therefore, we can think of parameter-dependent trajectories of distributions \(f\left(t,x\right)\) and observables \(g\left(t,x\right)\). Finally, a connection of Eq. (7) with the ε-committor functions given in [30] and in Eq. (2) can be obtained by defining an observable of an attractor region at \(t=0\) and iterating it. That is, by setting \(g\left(0,x\right)={\mathrm{id}}_{A\left(\lambda \right)}\), we obtain the equality \(g\left(1/\varepsilon -1,x\right)={g}_{A}\left(\varepsilon ,x;\lambda \right)\).
The asymptotic behavior of systems (6) and (7) is of particular importance. Invariant distributions describe attractors [8, 30], whereas the invariant observables characterize the basins’ structures [30, 68]. They are given by
respectively. The noise is accounted for in both structures thanks to the full formulation in Eq. (3), resulting in attractors’ regular distributions and basin boundary diffusion [30, 57]. Solutions \(f\left(x;\lambda \right)\) and \(g\left(x;\lambda \right)\) of Eqs. (8) and (9) depend explicitly on the operators \(\mathcal{P}\left(\lambda \right)\) and \(\mathcal{K}\left(\lambda \right)\), and, therefore, also depend on the parameter \(\lambda \). In the case of deterministic systems, \(g\left(x;\lambda \right)\) becomes an indicator function of the basin, with value 1 over it and 0 otherwise. Finally, mean invariant structures over \(\left( {{\mathbb{L}},{\mathfrak{S}},P_{\lambda } } \right)\) are obtained by simple integration [65],
2.4 Generalized Ulam discretization
The discretization of the transfer operators \(\mathcal{P}\left(\lambda \right)\) is given by the Ulam method [30, 35, 69, 70], equivalent to the generalized cell-mapping [14]. Following [57], the discretization process starts by adopting a disjoint partition of the phase-space \({\mathbb{X}}\) as \({\mathbb{B}} = \left\{ {b_{1} , \ldots ,b_{i} } \right\}\). Consider also the subspace \(\Delta_{h} \subset L^{1} \left( {\mathbb{X}} \right)\) spanned by the normalized indicator functions of \({\mathbb{B}}\), i.e., with basis \(\left\{{1}_{1},\dots ,{1}_{i}\right\}\), where \({1}_{i}={\mathrm{id}}_{{b}_{i}}/{P}_{x}\left({b}_{i}\right)\), \({P}_{x}\left({b}_{i}\right)\) being the Lebesgue measure (generalized volume) of \({b}_{i}\) and h the characteristic size of the partition. A projection operator \({Q}_{h}\) is defined such that a distribution \(f\left( x \right) \in L^{1} \left( {\mathbb{X}} \right)\) is projected onto the subspace \({\Delta }_{h}\), that is,
A projected distribution over \({\Delta }_{h}\) is generically denominated \({Q}_{h}f\left(x\right)={f}_{h}\). Following [70], the projection of \(\mathcal{P}\left(\lambda \right)\) is defined from the composition of \({Q}_{h}\) and \(\mathcal{P}\left(\lambda \right)\). The resulting projected operator is \({Q}_{h}\mathcal{P}\left(\lambda \right)={P}_{h}\left(\lambda \right)\), that is,
for any \(\lambda \in {\mathbb{L}}\), where the row vector \({f}_{i}\) and matrix \({p}_{ij}\left(\lambda \right)\) are
\(\mathcal{P}\left(\lambda \right)\) has spectral radius equal to one [30], and \({p}_{ij}\left(\lambda \right)\) is a row stochastic matrix. Row vectors \({f}_{i}\left(\lambda \right)\) in the fixed space of \({p}_{ij}\left(\lambda \right)\), identified as \(\mathrm{fix}\left({p}_{ij}\left(\lambda \right)\right)\), are solutions of
where \({\delta }_{ij}\) is the Kronecker delta. Equation (15) is the discretized version of Eq. (8), and its solutions are discretized vector representations \({f}_{i}\left(\lambda \right)\) of the invariant distributions \(f\left(x,\lambda \right)\) of system (1), with values in \(\left[ {0;1} \right]\). Each solution represents an attractor and is referred to as a discretized attractor distribution. Finally, the stochastic matrix \({p}_{ij}\left(\lambda \right)\) can be understood as the proportion of states starting in \({b}_{i}\) that reach \({b}_{j}\) after one iteration. This reduces to a simplified representation, that is,
The general definitions in Eq. (14) reduce to the deterministic, parameter uncertainty, or stochastic cases, depending on the parameter space \(\left( {{\mathbb{L}},{\mathfrak{S}},P_{\lambda } } \right)\) and on the probability space \(\left(\Omega ,\mathfrak{F},{P}_{\omega }\right)\). The matrix representation of the projected Koopman operator \({K}_{h}\left(\lambda \right)\) is given by the transpose of \({p}_{ij}\left(\lambda \right)\), thanks to the dual relation (5).
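As a concrete illustration of the projected operator \({P}_{h}\left(\lambda \right)\), the sketch below assembles a row-stochastic matrix \({p}_{ij}\) for a deterministic 1D example by sampling each cell and binning the images, and then power-iterates \(f \leftarrow fP\) toward the fixed space of Eq. (15). The tent map, the cell count, and the sample size are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

def ulam_matrix(phi, a, b, n_cells, samples_per_cell, rng):
    """Ulam/generalized cell-mapping estimate of p_ij: the fraction of
    sample points drawn from cell i whose image under phi lands in cell j."""
    edges = np.linspace(a, b, n_cells + 1)
    p = np.zeros((n_cells, n_cells))
    for i in range(n_cells):
        xs = rng.uniform(edges[i], edges[i + 1], samples_per_cell)
        ys = np.clip(phi(xs), a, b - 1e-12)       # keep images inside [a, b)
        js = np.searchsorted(edges, ys, side="right") - 1
        for j in js:
            p[i, j] += 1.0
    return p / samples_per_cell                   # row-stochastic by construction

# Illustrative deterministic map (an assumption, not the paper's system):
tent = lambda x: 1.0 - 2.0 * np.abs(x - 0.5)      # tent map on [0, 1]

rng = np.random.default_rng(2)
P = ulam_matrix(tent, 0.0, 1.0, n_cells=64, samples_per_cell=200, rng=rng)
assert np.allclose(P.sum(axis=1), 1.0)            # each row is a probability vector

# Discretized invariant distribution: power iteration toward fix(p_ij).
f = np.full(64, 1.0 / 64)
for _ in range(500):
    f = f @ P
print(abs(f.sum() - 1.0) < 1e-9)   # mass is preserved by the Markov matrix
```

For the tent map the invariant density is uniform (Lebesgue), so the power-iterated `f` approaches the flat vector up to sampling error; in the stochastic case the same construction applies with noisy images of the sample points.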
Observables of the basins are computed by solving the ill-conditioned system [30]
where \({\delta }_{ij}\) is the Kronecker delta, \({\mathrm{id}}_{A\left(\lambda \right)}\) is the indicator function of the region of attraction \(A\left(\lambda \right)\) in the vector representation, and \(\varepsilon \in \left(0;1\right]\) is a control variable. In other words, it gives the probability that a state in \({b}_{j}\) maps to \(A\left(\lambda \right)\) after \(1/\varepsilon -1\) iterations. The vector representation \({g}_{j}\left(\varepsilon ;\lambda \right)\) corresponds to the invariant time-dependent observable given by Eq. (2), with component values in \(\left[ {0;1} \right]\). Finally, averages of both \({f}_{i}\left(\lambda \right)\) and \({g}_{j}\left(\varepsilon ;\lambda \right)\) in \(\left( {{\mathbb{L}},{\mathfrak{S}},P_{\lambda } } \right)\) can be obtained as in Eqs. (10) and (11), respectively. These integrals can be further discretized through polynomial chaos [65] or by a simple weighted sum [71].
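The discretized basin observables can be sketched by iterating the cell-level Koopman action \(g_{i} \leftarrow \sum_{j} p_{ij} g_{j}\) on the indicator of the attractor cells for \(1/\varepsilon - 1\) steps. The small Markov chain below is a hypothetical stand-in with two absorbing "attractor" cells and a 90/10 drift/diffusion split, chosen only to make the computation concrete.

```python
import numpy as np

def basin_observable(P, attractor_cells, eps):
    """Iterate the cell-level Koopman action g <- P g for 1/eps - 1 steps,
    starting from the indicator of the attractor cells; g_j then estimates
    the probability that a state in cell j reaches the attractor region."""
    g = np.zeros(P.shape[0])
    g[attractor_cells] = 1.0
    for _ in range(int(round(1.0 / eps)) - 1):
        g = P @ g
    return g

# Hypothetical 21-cell chain with absorbing "attractor" cells at both ends
# and a drift toward the nearer end in between (illustrative only).
n = 21
P = np.zeros((n, n))
P[0, 0] = P[n - 1, n - 1] = 1.0
for i in range(1, n - 1):
    step = i - 1 if i < n // 2 else i + 1        # drift toward the nearer end
    off = i + 1 if step == i - 1 else i - 1      # small noise-induced spread
    P[i, step] = 0.9
    P[i, off] = 0.1

g = basin_observable(P, attractor_cells=[n - 1], eps=0.01)
print(g[0], g[-1])   # 0.0 and 1.0: the two deterministic extremes
```

Interior cells take graded values in (0, 1), the discrete analogue of the diffuse stochastic basin; `eps` controls the time-horizon exactly as in Eq. (2).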
3 Phase-space refinement and parameter space discretization
3.1 The adaptive phase-space algorithm
The computation of the matrices \({p}_{ij}\left(\lambda \right)\), Eq. (14), involves a considerable number of time integrations when a Monte Carlo [30, 38] or quasi-Monte Carlo [70] strategy is employed, resulting in a slow convergence of \({P}_{h}\left(\lambda \right)\) to \(\mathcal{P}\left(\lambda \right)\) as h → 0 [70]. Furthermore, the discretization introduces a numerical diffusion in the dynamical system [42, 44] and inevitably changes the dynamics, a fact remedied by high-resolution partitions at the expense of increasing the computational cost significantly.
A possible efficient strategy is to adopt an irregular adaptive partition, with a smaller cell-size h in regions of interest, such as attractor supports and basin boundaries. Such a strategy is possible because the operator \({Q}_{h}\), Eq. (12), is not limited to cells of equal size, but only to disjoint partitions. A sequence of n phase-space partitions can be constructed, \({\mathbb{B}}_{0} \left( \lambda \right),{\mathbb{B}}_{1} \left( \lambda \right), \ldots ,{\mathbb{B}}_{n} \left( \lambda \right)\), where \(i<j\) implies that \({\mathbb{B}}_{j} \left( \lambda \right)\) has a higher resolution than \({\mathbb{B}}_{i} \left( \lambda \right)\). The corresponding matrix sequence, \({p}_{ij}^{\left(0\right)}\left(\lambda \right),{p}_{ij}^{\left(1\right)}\left(\lambda \right),\dots ,{p}_{ij}^{\left(n\right)}\left(\lambda \right)\), approximates the continuous transfer operator \(\mathcal{P}\left(\lambda \right)\) as n increases, for all \(\lambda \). Similar alternatives were proposed for the refinement of basin boundaries [72, 73] and SBR measures [74, 75], but restricted to deterministic dynamics. The proposed strategy is described below, and a graphical depiction is provided in Fig. 1.
Algorithm 1: Start from a partition \({\mathbb{B}}_{n} \left( \lambda \right)\) covering the phase-space region \({\mathbb{X}}\), and a flow map \({p}_{ij}^{\left(n\right)}\left(\lambda \right)\). The next partition \({\mathbb{B}}_{n + 1} \left( \lambda \right)\) and flow map \({p}_{ij}^{\left(n+1\right)}\left(\lambda \right)\) are constructed through the following procedure:
1. Identification of the cells to be subdivided | From a given partition \({\mathbb{B}}_{n} \left( \lambda \right)\) and a flow map \({p}_{ij}^{\left(n\right)}\left(\lambda \right)\), we identify the cells for subdivision, namely those satisfying Eq. (18) (refinement of the attractor) or Eq. (19) (refinement of the boundary); there is no need to distinguish between them, since both have to be refined, even if for different reasons. We indicate them by \({\mathbb{S}}_{{n + \frac{1}{2}}}\); they are reported in green in Fig. 1(a) |
2. Refinement of the cells: the cells \({\mathbb{S}}_{{n + \frac{1}{2}}}\) previously identified are each subdivided into two, forming a new set of cells named \({\mathbb{S}}_{n + 1}\). The other cells \({\mathbb{B}}_{n} \left( \lambda \right)\backslash {\mathbb{S}}_{{n + \frac{1}{2}}}\) are unchanged. This gives the updated partition \({\mathbb{B}}_{n + 1} \left( \lambda \right) ={\mathbb{S}}_{n + 1} \cup \left( {{\mathbb{B}}_{n} \left( \lambda \right)\backslash {\mathbb{S}}_{{n + \frac{1}{2}}} } \right)\), which is the union of refined and unrefined cells (Fig. 1(b)).
3. Update of the flow map \({p}_{ij}^{\left(n\right)}\left(\lambda \right)\) on \({\mathbb{B}}_{n + 1} \left( \lambda \right)\): compute the new entries of \({p}_{ij}^{\left(n+1\right)}\left(\lambda \right)\). The flow map must be recomputed (updated) for all the subdivided cells \({\mathbb{S}}_{{n + \frac{1}{2}}}\) (cyan cells in Fig. 1(c.1)) and for their preimages, i.e., for all cells \(\varphi^{ - 1} \left( {\omega ,\lambda } \right){\mathbb{S}}_{{n + \frac{1}{2}}}\) (exemplified by the red cells in Fig. 1(c.1)) that, at the previous subdivision n, have an image under the flow in \({\mathbb{S}}_{{n + \frac{1}{2}}}\). Indeed, their images have been subdivided, and thus the flow is no longer defined over them. The flow in the remaining cells \({\mathbb{B}}_{n + 1} \left( \lambda \right)\backslash \left( {{\mathbb{S}}_{n + 1} \cup \varphi^{ - 1} \left( {\omega ,\lambda } \right){\mathbb{S}}_{{n + \frac{1}{2}}} } \right)\) (magenta in Fig. 1(c.2)) is unchanged.
At the first iteration of the algorithm, \(n=0\), the flow map \({p}_{ij}^{\left(0\right)}\left(\lambda \right)\) of the entire initial partition \({\mathbb{B}}_{0} \left( \lambda \right)\) is computed. As the algorithm progresses, it is expected that the ratio between the generalized volumes of \({\mathbb{S}}_{{n + \frac{1}{2}}}\) and \({\mathbb{B}}_{n + 1} (\lambda )\) diminishes, namely, \(P_{x} \left( {{\mathbb{S}}_{{n + \frac{1}{2}}} } \right)/P_{x} \left( {{\mathbb{B}}_{n + 1} \left( \lambda \right)} \right) \to 0\) as \(n \to \infty\). In cases where this holds, the algorithm reduces the total computational cost. The process stops after a predefined number of iterations.
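To fix ideas, the core Ulam-type construction of a flow-map matrix on a regular initial partition can be sketched as follows (a minimal illustration in Python; the regular grid, the placeholder map `phi`, and the sampling counts are assumptions, not the paper's implementation):

```python
import numpy as np

def ulam_matrix(phi, edges_x, edges_y, samples_per_box=4, seed=0):
    """Estimate the transfer matrix p[i, j] = P(phi(x) in box i | x in box j)
    by mapping random sample points from each box of a regular partition.
    An extra sink box (last index) collects trajectories leaving the window."""
    rng = np.random.default_rng(seed)
    nx, ny = len(edges_x) - 1, len(edges_y) - 1
    n = nx * ny
    p = np.zeros((n + 1, n + 1))
    for j in range(n):
        jx, jy = j % nx, j // nx
        xs = rng.uniform(edges_x[jx], edges_x[jx + 1], samples_per_box)
        ys = rng.uniform(edges_y[jy], edges_y[jy + 1], samples_per_box)
        for x0, y0 in zip(xs, ys):
            x1, y1 = phi(x0, y0)
            ix = np.searchsorted(edges_x, x1) - 1
            iy = np.searchsorted(edges_y, y1) - 1
            i = iy * nx + ix if (0 <= ix < nx and 0 <= iy < ny) else n
            p[i, j] += 1.0 / samples_per_box
    p[n, n] = 1.0  # the sink box is absorbing
    return p
```

By construction each column is a probability distribution over the image boxes, so the matrix is column stochastic, which is the property the fixed-space computation below relies on.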
At a given partition n, the algorithm identifies regions to be refined at step 1. Equation (15) is solved, resulting in the left fixed space of \({p}_{ij}^{\left(n\right)}\left(\lambda \right)\). This is a computationally difficult problem since the transfer matrix \({p}_{ij}^{\left(n\right)}\left(\lambda \right)\) is sparse, generally asymmetric, indefinite, and large, requiring specialized algorithms. Additionally, Eq. (15) can have multiple solutions for a multistable dynamical system. In other words, the unitary eigenvalue has geometric multiplicity greater than one, and the corresponding eigenvectors spanning \(\mathrm{fix}\left({p}_{ij}^{\left(n\right)}\left(\lambda \right)\right)\) are not uniquely defined. To circumvent this problem, the methodology proposed in Theorem 2.6 and Lemma 5.2 of [26] is applied to transform a general set of solutions of Eq. (15) into a meaningful set of attractors’ distributions, with the properties \(0\le {f}_{i}^{\left(n\right)}\left(\lambda \right)\le 1\) and \(\sum_{i}{f}_{i}^{\left(n\right)}\left(\lambda \right)=1\), and independent of each other. With the correct description of \(\mathrm{fix}\left({p}_{ij}^{\left(n\right)}\left(\lambda \right)\right)\), corresponding regions of attraction \(A\left(\lambda \right)\) are defined and basins’ observables \({g}_{j}^{\left(n\right)}\left(\varepsilon ;\lambda \right)\) at a predefined time horizon \(1/\varepsilon -1\) are computed by solving Eq. (17).
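When a specialized sparse eigensolver is not at hand, a fixed vector of a column-stochastic matrix can be approximated, for illustration, by simple power iteration (a sketch only; for multistable systems, where the unit eigenvalue has geometric multiplicity greater than one, several starting vectors and the basis transformation of [26] would be needed):

```python
import numpy as np

def invariant_density(p, iters=200):
    """Approximate a fixed point f = p @ f of a column-stochastic transfer
    matrix by power iteration. Starting from the uniform density, repeated
    application of p converges to (a combination of) invariant densities."""
    n = p.shape[0]
    f = np.full(n, 1.0 / n)
    for _ in range(iters):
        f = p @ f
    return f / f.sum()
```

For a two-box chain in which the second box leaks into the first, the invariant density concentrates entirely on the first box, as the test below verifies.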
Once \({f}_{i}^{\left(n\right)}\left(\lambda \right)\) and \({g}_{j}^{\left(n\right)}\left(\varepsilon ;\lambda \right)\) are known, the region of interest \({\mathbb{S}}_{{n + \frac{1}{2}}}\) can be defined. Firstly, attractor regions have high density values, and the corresponding entries of \({f}_{i}^{\left(n\right)}\left(\lambda \right)\) are likewise high. Therefore, the heuristic constraint
is adopted to identify such regions. This strategy is straightforward since it only depends on the computed distribution in the ith box, whereas strategies considering the local upper-bound L1 error need information about neighboring boxes [76]. For basin boundaries, it can be shown that if a saddle’s stable manifold passes through a box \({b}_{i}\), then the ith element of the observable \({g}_{j}^{\left(n\right)}\left(\varepsilon ;\lambda \right)\) assumes values between 0 and 1. That is, trajectories passing through box \({b}_{i}\) can converge to distinct attractors. For deterministic systems, this effect is known as numerical diffusion [77] and is caused by the discretization. For nondeterministic systems, both numerical and real diffusion can occur, and such regions enlarge as the uncertainty increases. Therefore, a second constraint is defined,
identifying boxes that can converge to more than one attractor with significant probability. Again, this strategy depends only on the computed observable in the ith box. Other methodologies consider the neighbors’ information [78], but are computationally more involved. To the best of the authors’ knowledge, there is no local upper-bound error definition for basins’ observables analogous to the upper-bound L1 error for distributions presented in [76], justifying the adoption of a stopping criterion at a given iteration.
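A possible reading of the two refinement constraints, using as defaults the threshold values quoted later in Sect. 4.1 (an illustrative reconstruction, not the authors' code; `f` is the discretized distribution and `g_list` collects the basin observables of the coexisting attractors):

```python
import numpy as np

def cells_to_refine(f, g_list, c_f=1e-10, frac_lo=0.03, frac_hi=0.01):
    """Flag boxes for subdivision: attractor boxes carry distribution mass
    of at least c_f; boundary boxes have a basin observable strictly
    between the lower and upper thresholds, i.e., away from both the
    minimum and the maximum observed probability."""
    flag = f >= c_f
    for g in g_list:
        dg = g.max() - g.min()
        lo = g.min() + frac_lo * dg
        hi = g.max() - frac_hi * dg
        flag |= (g > lo) & (g < hi)
    return np.flatnonzero(flag)
```

Both tests are purely local, needing only the values already stored in the ith box, which is what makes the criterion cheap compared with neighbor-based error estimates.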
The set \({\mathbb{S}}_{{n + \frac{1}{2}}}\) is refined in step 2, forming \({\mathbb{S}}_{n + 1}\). Each box \(b_{i}^{\left( n \right)} \in {\mathbb{S}}_{{n + \frac{1}{2}}}\) is subdivided into two smaller ones, such that \({b}_{i}^{\left(n\right)}={b}_{2i}^{\left(n+1\right)}\cup {b}_{2i+1}^{\left(n+1\right)}\) and \({b}_{2i}^{\left(n+1\right)}\cap {b}_{2i+1}^{\left(n+1\right)}=\varnothing \). The new boxes form the set \({\mathbb{S}}_{n + 1}\), marked in cyan in Fig. 1b. Unrefined boxes in \({\mathbb{B}}_{n} \left( \lambda \right)\backslash {\mathbb{S}}_{{n + \frac{1}{2}}}\), marked in white in Fig. 1b, are renamed, such that \({b}_{2i}^{\left(n+1\right)}={b}_{i}^{\left(n\right)}\). The union of refined and unrefined boxes forms the new partition \({\mathbb{S}}_{n + 1} \cup \left( {{\mathbb{B}}_{n} \left( \lambda \right)\backslash {\mathbb{S}}_{{n + \frac{1}{2}}} } \right) = {\mathbb{B}}_{n + 1} \left( \lambda \right)\). The adopted refinement strategy not only guarantees by definition that no two cells overlap, but also allows optimal storage, subdivision, and search of elements in a binary tree data structure, as previously used in the software GAIO [36].
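The bisection of a single box can be sketched as follows (alternating the split dimension with the tree depth is an assumption for a two-dimensional phase-space):

```python
def subdivide(box, depth):
    """Split an axis-aligned box ((lo_x, hi_x), (lo_y, hi_y)) into two
    children along dimension depth % 2, mirroring the binary-tree storage
    used by GAIO-style codes: the children of box i take indices 2i and
    2i + 1 at the next level."""
    d = depth % 2
    lo, hi = box[d]
    mid = 0.5 * (lo + hi)
    left, right = list(box), list(box)
    left[d] = (lo, mid)
    right[d] = (mid, hi)
    return tuple(left), tuple(right)
```

Since children are addressed by doubling the parent index, membership queries and subdivision both run in time logarithmic in the number of boxes.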
The final step 3 constitutes the update of the transfer matrix to the new phase-space partition. New entries of \({p}_{ij}^{\left(n+1\right)}\left(\lambda \right)\) corresponding to cells \(b_{i}^{{\left( {n + 1} \right)}} \in {\mathbb{S}}_{n + 1}\) are computed. However, the flow map of the preimage region \(\varphi^{ - 1} \left( {\omega ,\lambda } \right){\mathbb{S}}_{{n + \frac{1}{2}}}\), marked in red in Fig. 1(c.1), must also be recomputed. Since cells in \({\mathbb{S}}_{{n + \frac{1}{2}}}\) no longer have a corresponding entry in \({p}_{ij}^{\left(n+1\right)}\left(\lambda \right)\), their preimages lose meaning, and new entries must be calculated. The flow map of the remaining cells, marked in magenta in Fig. 1(c.2), is unaltered, with \({p}_{\left(2i,2j\right)}^{\left(n+1\right)}\left(\lambda \right)={p}_{ij}^{\left(n\right)}\left(\lambda \right)\). This ends iteration n of the adaptive discretization. The algorithm then proceeds to the next iteration, whose starting partition is \({\mathbb{B}}_{n + 1}\).
3.2 Mean structures for dynamical systems with parametric uncertainty
The previous exposition outlined the main subdivision algorithm of the phase-space. Still, the mean distributions and observables of parametric uncertainty cases, given by integrals (10) and (11), must be addressed. Since the aim is to deal with general nonlinear maps, Eq. (1), sparse sampling strategies of the parameter space may not be adequate, particularly close to bifurcation points, where the dynamical system may depend strongly on the parameter values, as shown by Le Maître and Knio [65]. Therefore, general discretization strategies must be considered.
Assuming the parameter space \({\mathbb{L}}\) to be bounded, we can consider a discretization into a number of points \({\lambda }_{k}\in\Lambda \), spaced by \(\Delta \lambda ={\lambda }_{k}-{\lambda }_{k-1}\), and obtain a discretization of the probability \({P}_{\lambda }\) as
Therefore, the bounded continuous probability space \(\left( {{\mathbb{L}},{\mathfrak{S}},P_{\lambda } } \right)\) can be approximated by the bounded discrete probability space \(\left(\Lambda ,{\mathfrak{S}}_{\Lambda },{P}_{\Lambda }\right)\), where \({\mathfrak{S}}_{\Lambda }\) is a σ-algebra over \(\Lambda \). The original dynamical system \(\varphi \left(\omega ,\lambda \right)\) becomes a collection of deterministic or stochastic dynamical systems, weighted by the discrete probability \({P}_{\Lambda }\). Finally, Algorithm 1 is applied for all \(\Lambda \), and statistics are computed according to \({P}_{\Lambda }\). We restrict our focus to averages, calculated according to the rectangle rule,
where \(f\left(\lambda ;x\right)\) represents any dynamical structure dependent on the parameter \(\lambda \), such as attractors’ distributions, basins of attraction, or manifolds. Equation (21) is an approximation of an integral by a weighted sum, a strategy that has been used in uncertainty quantification [65]. No continuation procedure is necessary if one chooses a window large enough to contain all attractors (which is not always the case, in particular when one wants to “zoom” around certain attractors/basins of interest). In this case, the discretization methodology identifies all existent attractors. Given a parameter \({\lambda }_{k}\), it is only required to determine to which branch each identified attractor belongs. Once this correspondence is established, the mean distributions and basins are computed.
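The weighted sum of Eq. (21) over a uniform parameter grid can be sketched as (the renormalization of the truncated weights is an assumption):

```python
import numpy as np

def mean_structure(f_of_lambda, lambdas, pdf):
    """Rectangle-rule average of a parameter-dependent structure
    (distribution, basin observable, ...) over a uniform grid of lambda
    values, weighted by a probability density evaluated at the grid points.
    The weights are renormalized so that they sum to one on the truncated
    parameter range."""
    dlam = lambdas[1] - lambdas[0]
    w = pdf(lambdas) * dlam
    w = w / w.sum()
    return sum(wk * f_of_lambda(lk) for wk, lk in zip(w, lambdas))
```

With a uniform density the rule reduces to a plain arithmetic mean over the grid, which the test below checks.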
The distance between the distributions of two attractors is calculated through the Lukaszyk-Karmowski metric [79] to identify the corresponding attractor branch. Given a distribution \({f}_{{A}_{m}}\left({\lambda }_{k};x\right)\) in a known attractor branch \({A}_{m}\), the branch of a distribution \({f}_{A}\left({\lambda }_{k+1};x\right)\) for the next parameter value \({\lambda }_{k+1}\) is identified according to the expression
where \(d\left(x,y\right)\) is the metric of \({\mathbb{X}}\). If, for a certain m, \({D}_{m}\left({A}_{m}\left({\lambda }_{k}\right),A\left({\lambda }_{k+1}\right)\right)\) is a minimum and is smaller than a predefined threshold, then \({f}_{A}\left({\lambda }_{k+1};x\right)\) belongs to the mth branch of existent solutions. If no \({D}_{m}\) value is small, then the existence of possible new branches must be investigated. After all attractors are identified, the mean distributions \({\overline{f} }_{A}\) and observables \({\overline{g} }_{A}\left(\varepsilon \right)\) are computed over each branch.
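For distributions discretized over boxes, the Lukaszyk-Karmowski distance reduces to a double weighted sum over box centers (a sketch assuming the Euclidean metric for d):

```python
import numpy as np

def lk_distance(f, g, centers):
    """Lukaszyk-Karmowski distance between two discretized distributions
    f and g supported on boxes with the given center coordinates:
    D = sum_ij d(x_i, x_j) f_i g_j, with d the Euclidean metric. Note that,
    unlike a usual metric, D of a spread-out distribution with itself is
    nonzero."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    return float(f @ d @ g)
```

Branch matching then amounts to evaluating this distance against every known branch at \({\lambda }_{k}\) and keeping the minimizer, provided it falls below the predefined threshold.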
If a phase-space window is too small, attractors outside it are flagged as escape solutions. Escape solutions are identified beforehand and do not enter the metric calculation, Eq. (22). Furthermore, there is no distinction between escape solutions and attractors outside the window, and basin structures that would belong to those attractors are also flagged as escape solutions. If an attractor branch moves outside the phase-space window for a given parameter value \(\lambda \), statistics such as the mean distributions and observables, Eqs. (10) and (11), yield erroneous results. Care must therefore be taken in the evaluation of escaped solutions.
One point still must be addressed. The adaptive partition resulting from Algorithm 1 is parameter dependent, \({\mathbb{B}} _{n} \left( \lambda \right)\). Thus, not only are the discretized structures, such as attractors’ distributions, basins, and manifolds, parameter dependent, but the discretized spaces in which they are defined are distinct from each other. To properly apply Eq. (21), we must define a common partition, \(\overline{{\mathbb{B}}}_{n}\), over which all structures are discretized. We start from a partition \({\mathbb{B}}_{n} \left( \lambda \right) = \left\{ {b_{0}^{\left( n \right)} ,b_{1}^{\left( n \right)} , \ldots , b_{i}^{\left( n \right)} } \right\}\) at a given iteration n. Given that not all boxes are subdivided by the adaptive Algorithm 1, we have \(i\le {2}^{n}-1\), and the index list \(\left\{ {0, \ldots ,i} \right\}\) possibly has holes (i.e., missing indices). The common partition \(\overline{{\mathbb{B}}}_{n}\) will be the set of boxes whose index list \(\left\{ {0, \ldots ,\overline{i}} \right\}\) contains the index lists of the partitions \({\mathbb{B}} _{n} \left( \lambda \right)\) for all \(\lambda\)-values. Intuitively, this means that \(\overline{{\mathbb{B}}}_{n}\) is the selection of the smallest boxes from the set of partitions \({\mathbb{B}}_{n} \left( {\lambda_{k} } \right)\), covering the phase-space \({\mathbb{X}}\). An example of common partition construction is given in Fig. 2, at iteration n = 2. It is clear that the index list of \(\overline{{\mathbb{B}}}_{2}\) contains the lists of the other two partitions, with its boxes being the most refined ones.
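The common-partition construction can be sketched with 1-based heap indices (root 1, children 2i and 2i + 1), a slight deviation from the per-level indexing of the paper, adopted here only for compactness:

```python
def common_partition(leaf_sets):
    """Common refinement of several binary-tree partitions: from the union
    of all leaf indices, keep only boxes that are not further subdivided in
    any partition, i.e., indices with no descendant in the union. An index
    is discarded exactly when it is a proper ancestor of another index."""
    union = set().union(*map(set, leaf_sets))
    ancestors = set()
    for j in union:
        k = j // 2
        while k >= 1:
            ancestors.add(k)
            k //= 2
    return sorted(union - ancestors)
```

In the test, the second partition has split box 3 into 6 and 7, so the common partition keeps box 2 together with the two finer children.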
With the common partition \(\overline{{\mathbb{B}}}_{n}\) defined, the next step is to project the intermediate structures onto the new partition. For attractors’ distributions, this is done by applying the operator of Eq. (12), corresponding to the common partition \(\overline{{\mathbb{B}}}_{n}\) with basis functions in \({\Delta }_{{h}^{^{\prime}}}\), to a distribution already discretized over \({\mathbb{B}}_{n} \left( {\lambda_{k} } \right)\) with basis functions in \({\Delta }_{h}\). Denoting it by \({Q}_{{h}^{^{\prime}}}{Q}_{h}\equiv {Q}_{h\left(\lambda \right)}^{{h}^{^{\prime}}}\), we have
acting on each entry \({f}_{i}={\int }_{{b}_{i}}f\left(x\right)dx\) of the discretized distribution \({f}_{h}\) over partition \({\mathbb{B}}_{n} \left( {\lambda_{k} } \right)\). It is implicitly assumed that cells in \(\overline{{\mathbb{B}}}_{n}\) are always contained in cells of some \({\mathbb{B}}_{n} \left( {\lambda_{k} } \right)\), that is, \({b}_{\overline{i}}\subseteq {b}_{i}\). The fraction \({P}_{x}\left({b}_{\overline{i}}\right)/{P}_{x}\left({b}_{i}\right)\) is the proportional generalized volume (area in bidimensional cases) between boxes \({b}_{\overline{i}}\) and \({b}_{i}\). Entries of the vector representation \({f}_{\overline{i}}\left({\lambda }_{k}\right)\) in the common partition are
from which the average in Eq. (21) is computed, resulting in the mean discretized distribution
For the projection of the basins’ observables in the vector representation \({g}_{i}\left(\varepsilon ;{\lambda }_{k}\right)\), obtained from Eq. (17) for \({\lambda }_{k}\), we start from the dual space \(\Delta_{h}^{*} \subset L^{\infty } \left( {\mathbb{X}} \right)\), spanned by the indicator functions \(\left\{{\mathrm{id}}_{{b}_{1}},\dots ,{\mathrm{id}}_{{b}_{i}}\right\}\) [35, 80]. Given that the projection of a function \(g\left(\varepsilon ,{\lambda }_{k};x\right)\) onto \({\Delta }_{h}^{*}\) is given by \({g}_{i}\left(\varepsilon ;{\lambda }_{k}\right) {\mathrm{id}}_{{b}_{i}}\), the entries of the vector representation \({g}_{\overline{i}}\left(\varepsilon ;{\lambda }_{k}\right)\) are
given that \({b}_{\overline{i}}\subseteq {b}_{i}\). The average in Eq. (21) is computed, resulting in the mean discretized observable
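Both projections onto the common partition can be sketched together, assuming precomputed arrays mapping each common box to its parent box in \({\mathbb{B}}_{n} \left( {\lambda_{k} } \right)\) and giving the corresponding volume fractions (hypothetical helper arrays, for illustration):

```python
import numpy as np

def project_to_common(f, g, parent_of, vol_ratio):
    """Project a discretized distribution f and basin observable g from a
    coarse partition onto a finer common partition. parent_of[k] is the
    index of the coarse box containing common box k, and vol_ratio[k] is
    P_x(b_k) / P_x(b_parent). Distribution entries are split in proportion
    to volume; observable entries are simply inherited, by duality with the
    indicator functions."""
    f_bar = f[parent_of] * vol_ratio
    g_bar = g[parent_of]
    return f_bar, g_bar
```

Note that the projected distribution keeps total mass one, while the observable keeps its pointwise probability values, consistent with the two different function spaces involved.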
The procedure can be summarized in the following algorithm:
Algorithm 2: Start from a refinement \(\left(\Lambda ,{\mathfrak{S}}_{\Lambda },{P}_{\Lambda }\right)\) of the parameter space \({\mathbb{L}}\) containing k values of \(\lambda \). The mean structures are computed through the following steps:
The parameter space subdivision procedure can be further extended to multidimensional parameter spaces, only requiring that the cells associated with the parameter set \(\Lambda \) are disjoint and cover all of \({\mathbb{L}}\). Adaptive discretization of the parameter space [65] can further reduce the computational cost of the discretization refinement.
4 Helmholtz oscillator with harmonic excitation
The standard dimensionless form of the damped harmonically excited Helmholtz oscillator is
where \(\alpha \) is the mean linear vibration frequency, the random variable \(\lambda \) is a truncated standard normal with distribution \(f\left(\lambda ;0,1,-3,3\right)\) (zero mean, unit standard deviation, truncation bounds \(\pm 3\)), \(\sigma \) is a scaling factor, \(\dot{W}\) is a standard white noise process, and s is the noise standard deviation. The system is deterministic for σ = 0 and s = 0, stochastic for σ = 0 and s ≠ 0, parametrically uncertain for σ ≠ 0 and s = 0, and general for σ ≠ 0 and s ≠ 0.
For a normal distribution, the probability density converges asymptotically to zero as the distance from the mean increases, but is positive for every value in the range (− ∞, + ∞), although the actual probability of an extreme event is very low. Since the range of a given parameter is bounded, a truncated normal distribution in which the range of definition is finite at one or both ends of the interval is considered [81], thus avoiding extreme values.
The Helmholtz oscillator has one potential well, with two different classes of oscillations, bounded periodic nonlinear oscillations within the well and unbounded nonperiodic solutions [54]. This is a useful archetypal model, presenting escape, basin erosion, and integrity loss, and may describe the behavior of various dynamical systems (see, e.g., [58,59,60, 82]). The values of Table 1 are adopted, resulting in three possible outcomes, a small amplitude (i.e., nonresonant) oscillation, a large amplitude (i.e., resonant) oscillation, and escape solutions.
The analyzed phase-space window is \({\mathbb{X}} = \left[ { - 0.7,1.8} \right] \times \left[ { - 1,1} \right]\). The initial box partition is defined as a division into \(2^{5}\) cells in each dimension, totaling 32 × 32 = 1024 boxes of size {0.0781, 0.0625} at iteration 0, with one additional sink box that attracts unbounded trajectories. Algorithm 1 is conducted through ten subsequent iterations, with a final box size of {0.0024, 0.0020} (only for the cells that are refined at each iteration). Also, the number of initial conditions per box (used to compute \({p}_{ij}(\lambda )\)) depends on the box size, decreasing with refinement. The number of collocation points for each iteration is presented in Table 2. For the deterministic and parametric uncertainty cases, the usual Perron-Frobenius operator governs the phase-space distribution, where its matrix representation \(p_{ij} \left( \lambda \right)\) in Eq. (14) reduces to
for all \(\lambda \in {\mathbb{L}}\). Of course, all \({p}_{ij}\left(\lambda \right)\) will be equal for σ = 0, the pure deterministic case.
The Helmholtz oscillator, Eq. (28), is a continuous-time problem. To construct the map \(\varphi \left(\lambda \right)\) and obtain a discrete time evolution in the form of Eq. (1), we considered stroboscopic Poincaré sections at the period of excitation T = 2π/Ω, with Ω the forcing frequency. The flow \(\varphi \left(\lambda \right)\) maps the system state from one section to the next, as usual [83]. The time evolution of Eq. (28) over one period is obtained through the fourth-order Runge–Kutta method, with time-step T/200. This strategy is adopted in the deterministic and in the following parametric uncertainty analyses.
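The stroboscopic map construction can be sketched generically, with the right-hand side of the first-order system passed in as a function (classical RK4 over one forcing period with step T/200, as in the text; the specific Helmholtz right-hand side is omitted, since it is given by Eq. (28)):

```python
import numpy as np

def poincare_map(rhs, state, t0, period, n_steps=200):
    """Advance a nonautonomous system dy/dt = rhs(t, y) over one forcing
    period with classical fourth-order Runge-Kutta (step period/n_steps),
    yielding the stroboscopic map between successive Poincare sections."""
    h = period / n_steps
    y, t = np.asarray(state, float), t0
    for _ in range(n_steps):
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, y + h / 2 * k1)
        k3 = rhs(t + h / 2, y + h / 2 * k2)
        k4 = rhs(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y
```

As a sanity check, for a linear oscillator with unit natural frequency sampled at its own period, the map should be close to the identity, which the test verifies.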
4.1 Deterministic case
The evolution of basins of attraction of the small and large amplitude solutions of the deterministic Helmholtz oscillator is shown in Fig. 3 as a function of the excitation magnitude \(\left(A\in \left[0.05,0.08\right]\right)\). The attractors are marked in red. The color scale gives the probability, from zero to one, that a region converges to the depicted attractor. There is only one attractor for A = 0.05, whose basin is surrounded by the (black) escape region. As expected for the deterministic case, the probability is either zero or one, with the exception of the folded fractal regions close to the boundaries, which have values between zero and one, such as in Fig. 3(c, d). This results from numerical diffusion, since initial conditions in the same cell may, in such regions, converge to either of the two attractors or escape. After the emergence of the large amplitude attractor in the resonant region, the evolution of the basins’ boundaries shows increasing competition. The loss of integrity of the basins with increasing load is witnessed by their decreasing area. The algorithm has proven robust enough to discretize the boundaries of highly fractal and intertwined basins. The set of initial conditions outside the two coexisting basins corresponds to solutions diverging to infinity [54].
Figure 4 presents the final box partition, \({\mathbb{B}}_{10}\), for increasing excitation amplitude. It is evident that more boxes are needed to discretize the boundaries as the basin topology becomes more intricate. The partitions \({\mathbb{B}}_{0} ,{\mathbb{B}}_{2}\), and \({\mathbb{B}}_{4}\) are depicted in Fig. 5 for A = 0.06 to demonstrate the refinement procedure. The green boxes satisfy one of conditions (18) and (19), being either attractor boxes or boundary boxes. Specifically, the distribution threshold of Eq. (18) is adopted as \({c}_{f}={10}^{-10}\), while the boundary thresholds of Eq. (19) are calculated as \({c}_{g}^{\left(1\right)}=\mathrm{min}g+0.03\Delta g\) and \({c}_{g}^{\left(2\right)}=\mathrm{max}g-0.01\Delta g\), where \(\Delta g=\mathrm{max}g-\mathrm{min}g\). This permits the boundary boxes to be subdivided, allowing long-transient solutions due to the crude initial discretization to be refined as well. For example, in Fig. 5 the thresholds for the escape basin and the nonresonant basin are (0.03; 0.99) for all iterations, while the resonant basin has (0.0299; 0.9874) at iteration 0 and only attains the limits (0.03; 0.99) at higher discretization iterations. Additionally, for discretization iterations equal to or lower than 1, the eigenvalues of \({p}_{ij}\) show that the resonant solution behaves like a long transient. This could lead to the wrong conclusion that there is no resonant solution unless the analysis continues through additional iterations.
Red boxes are preimages of the green boxes, recalculated in each subsequent iteration, as explained in Sect. 3.1. The partition refinement is conducted by subdividing green boxes, thus locally refining the phase-space near attractors and boundaries. As the algorithm progresses, the green boxes concentrate at the basins’ boundary and the attractor, refining these regions in the phase-space, as desired. Finally, the total box count for each step and A = 0.06 is given in Table 3. A comparison of the current box count with a full discretization at a given iteration (maximum box count, that corresponds to the hypothetical case in which all cells would have been subdivided) is shown, with the last column representing the decrease of computational cost defined as the ratio between the maximum-to-current box count difference and the maximum box count. Lower values imply higher computational costs. This efficiency increases with the iterations, being over 90% from iteration 8 onwards.
4.2 Effects of parameter uncertainty
Before addressing the influence of parameter uncertainty, it is advantageous to understand the implications of considering an uncertain parameter near a bifurcation point. To this end, Fig. 6 presents both the dependency of the stable responses on varying stiffness parameter α for the excitation magnitude A = 0.06 and the normalized probability distributions of α + σ λ. There is a clear interval of α where the resonant and nonresonant responses coexist. Two saddle-node bifurcations limit the interval, with two possible jumps for a continuous change of α, forming a hysteretic cycle. Only one of the responses exists outside this region, the resonant for α < -1.1 and the nonresonant for α > -0.92. Three cases are chosen to investigate the parameter uncertainty, varying the scaling factor σ. For σ < 0.04, the probability of α + σ λ being outside of the hysteresis cycle is negligible. However, for σ ≥ 0.04, the uncertainty’s effect on the results cannot be neglected.
The parametric analysis of the influence of parameter uncertainty on the global dynamics is conducted through iterations 0 to 8 (see Table 4), alleviating the computational cost without compromising the quality of the result. To focus only on the uncertainty in the parameter, the noise is set to zero and the time evolution of the dynamical system is deterministic. The parameter space is discretized into 30 values, and the mean basins of attraction and mean attractors’ distributions are calculated through weighted sums, following Algorithm 2 in Sect. 3.2. Since the system is deterministic for a fixed parameter, the same time integrator of the previous analysis is considered, i.e., the fourth-order Runge–Kutta method with time-step T/200.
Figure 7 presents the mean distributions (first color bar) and basins (second color bar) for increasing levels of the scaling factor \(\sigma \), demonstrating the effect of the probability distribution. According to the adopted color scheme, the response for a set of initial conditions will converge to the expected attractor in the mean sense. The first and second columns refer to the small and large amplitude coexisting attractors, respectively. The effect is small for σ = 0.02, with only a slight spreading of both the attractors’ distributions and their basins’ boundaries. The latter concentrates near the internal saddle on the basin boundary. Furthermore, basins’ regions with a probability equal to one (yellow) almost coincide with the deterministic result. As the scaling parameter increases, the attractor distribution elongates (it is a one-dimensional structure embedded in the phase-space, an expected result according to the bifurcation diagram, Fig. 6) and approaches the boundary. The uncertain basin regions spread over the phase-space, and for σ ≥ 0.06, there is no region certainly converging to the resonant attractor in the mean sense (i.e., with a probability equal to one). The probability is lower than 0.8 for σ = 0.06. Also, the nonresonant basin with a probability equal to one decreases steadily, indicating a decrease in its dynamic integrity.
The final box sets for the first three scaling parameters are given in Fig. 8, corresponding to the last iteration and the common partition \(\overline{{\mathbb{B}}}_{8}\) of all 30 \(\lambda \)-values, for each σ-value. Table 4 presents a comparison of the total box count for all σ-values. As the uncertainty parameter increases, the discretization procedure results in an increasing number of boxes, implying a higher computational cost, as confirmed by the final box counting. For σ ≥ 0.03, the final box counting changes little, since almost the entire potential well is discretized at the highest resolution in the final iteration. The computational efficiency decreases, as expected, as σ increases, since higher σ-values result in larger basin areas with probability smaller than one, which require a more refined discretization. A significant economy would be observed if further iterations were considered in such cases. However, the probability space should also be refined; otherwise, the quality of the results would not improve.
Figure 9 shows the variation of the Helmholtz oscillator normalized basins’ areas as a function of the scaling parameter σ for A = 0.06 and selected probability thresholds, quantifying the integrity of the system with parameter uncertainty. The weighted normalized basins’ areas are computed as
where g is a stochastic basin of attraction, \({\mathrm{id}}_{\left\{p;1\right\}}\left(g\right)\) is an indicator function equal to 1 if \(g\in \left\{p;1\right\}\) and zero otherwise, and p is the assumed probability threshold, between 0 and 1. In the deterministic limit (no uncertainty or noise, and infinite resolution), g reduces to the indicator function of the basin, and Eq. (30) reduces to the GIM definition in [84]. Furthermore, this expression is a particular case of Eq. (44) of [30], with \({\rho }_{pert}(x)\) a uniform density over the phase-space window \({\mathbb{X}}\).
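The thresholded integrity measure of Eq. (30) can be sketched discretely over the box partition (a minimal reading; box volumes, observable values, and the threshold p are the inputs):

```python
import numpy as np

def basin_integrity(g, box_volumes, p):
    """Normalized weighted basin area: total volume of the boxes whose
    convergence probability g is at least the threshold p, divided by the
    volume of the whole phase-space window."""
    mask = g >= p
    return float(box_volumes[mask].sum() / box_volumes.sum())
```

Sweeping p from 1 down to 0 recovers the trade-off discussed next: a conservative threshold near 1 counts only almost-sure boxes, while p = 0 returns the full window area.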
A probability threshold close to 1 is a conservative selection in terms of evaluation of the actual integrity, while a threshold of 0 would simply return the area of the entire phase-space window \({\mathbb{X}}\). Of course, a probability threshold close to 1 corresponds to the maximal integrity only for vanishing parameter uncertainty (σ = 0), i.e., in the deterministic case. When the parameter uncertainty increases, the conservative probability-1 threshold provides notably reduced values of integrity, with higher values attained only at meaningfully lower (and thus nonconservative) probability thresholds. This result shows the importance of such analysis in real applications, where parametric variability is almost surely present.
Finally, Fig. 10 presents a validation of the results obtained so far. Figure 10a shows the probability density estimated from a Monte Carlo experiment considering 100,000 initial conditions uniformly distributed over the phase-space window with σ = 0.04. Each response is integrated up to t = 1000 T, demonstrating the influence of the parameter uncertainty on the Poincaré sections of the two attractors. The results agree with the attractors’ distribution, Fig. 10b, and with the bifurcation diagram with respect to the support α of the uncertainty parameter, Fig. 10c, in terms of attractor shape (plane curves), size, and probability distribution, thus matching the operator results and confirming the present methodology.
Lastly, a remark on why special care is needed for systems with parametric uncertainty is in order. We also considered a basic formulation in which a mean transfer matrix is computed, from which a candidate mean fixed space is obtained. Such a hypothetical mean transfer matrix is given by
and Eqs. (15) and (17) become parameter independent. Algorithm 1 of Sect. 3.1 is applied for a case with A = 0.06 and σ = 0.04, giving the result in Fig. 11, which is completely different from Fig. 7b.
A diffusion pattern, characteristic of stochastic dynamics, is observed, and only one distribution is obtained, instead of the two expected solutions (nonresonant and resonant). Moreover, this result does not match the Monte Carlo experiment of Fig. 10, showing that formulation (31) does not represent the original problem with parametric uncertainty. Instead, this noise-like behavior suggests that formulation (31) resamples the random parameter at each iteration of the map, akin to a stochastic process in which a new value is randomly drawn at each period. This demonstrates why the parametric uncertainty must be addressed distinctively, with the methodology proposed in Sect. 3.2.
4.3 Effects of additive white noise
The noise-induced dynamics is considered next. The same time-integration parameters are adopted. However, the noise requires specialized integrators, so a stochastic version of the fourth-order Runge–Kutta method is adopted for the construction of the flow \(\varphi \left(\omega \right)\) [66], with time-step T/200. The metric dynamical system \({\theta }_{i}\) driving the stochastic flow is given by the integral of the standard white noise \(\dot{W}\). From the discrete point of view, this integral results in a normal random variable with variance T, sampled and added to the system state at each section [53]. The transfer operator of the noise-driven system is the Foias operator, whose matrix representation \({p}_{ij}\left(\lambda \right)\) in Eq. (14) reduces to
where the dependency on \(\lambda \in {\mathbb{L}}\) is suppressed since σ = 0. The probability integral in Eq. (2) is solved by the Monte Carlo method. Ten noise samples for each initial condition in each box are considered to compute \({p}_{ij}\). Again, a sink box is defined to detect escaped solutions. This is the procedure conducted in all stochastic problems in this study.
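The Monte Carlo construction of \({p}_{ij}\) can be sketched as follows for a one-dimensional toy map with additive Gaussian noise (the map, grid, and noise level are illustrative, not the Helmholtz flow); as in the text, several noise samples are drawn per initial condition in each box, and a sink index collects escaped solutions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ulam estimate of a noisy transfer matrix on a 1-D grid (toy example)
n_boxes, ics_per_box, samples_per_ic = 50, 5, 10
edges = np.linspace(-1.0, 1.0, n_boxes + 1)
sink = n_boxes                               # extra index for escaped solutions
P = np.zeros((n_boxes + 1, n_boxes + 1))

def step(x, s=0.05):
    # toy flow map over one period plus a white-noise increment
    return 0.8 * x + 0.1 * np.sin(3.0 * x) + s * rng.standard_normal()

for i in range(n_boxes):
    for x in rng.uniform(edges[i], edges[i + 1], size=ics_per_box):
        for _ in range(samples_per_ic):      # noise samples per initial condition
            j = np.searchsorted(edges, step(x)) - 1
            P[i, j if 0 <= j < n_boxes else sink] += 1

P[sink, sink] = 1.0                          # the sink box is absorbing
P[:n_boxes] /= P[:n_boxes].sum(axis=1, keepdims=True)
```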
Figure 12 shows the results for the standard deviations s = 0.002 and s = 0.004. The influence of noise on the basin boundary is small. The basin structures present a pattern similar to the mean-parameter results, with uncertainty associated only with initial conditions close to the boundaries. The crucial difference is the diffusion of the attractors’ distributions over the phase space as the standard deviation increases. Again, the resonant solution is more affected than the nonresonant one, with the attractor spreading over a larger area and approaching the basin boundary, thus indicating a decrease in dynamic integrity and possible disappearance under increasing noise. For s = 0.006, the resonant solution is destroyed, see Fig. 13a, and only the nonresonant solution and basin remain, now including all initial conditions previously occupied by the two coexisting basins, with a sudden but localized increase of dynamic integrity. Indeed, as the noise intensity increases further, s = 0.010, initial conditions formerly in the resonant region start to escape, as indicated by the gray area in Fig. 13(b.2), which corresponds to the area with a probability lower than one in Fig. 13(b.1). Figures 12 and 13 also show the steady spreading of the nonresonant attractor with increasing white noise standard deviation.
The effect of noise on time responses and power spectra is now addressed. For comparison, Fig. 14 shows the deterministic case, with A = 0.06 and s = 0, for both attractors. Both power spectra present peaks at the fundamental excitation frequency, ω = 0.81, and its superharmonics. The resonant solution, Fig. 14b, presents a richer spectrum with a higher number of excited harmonics. Figure 15 displays, for s = 0.002 and 0.004, the sample means, in black, and ten sampled time responses in gray. The results show that the white noise masks the lower-power higher harmonics in individual samples, while they remain present, although with reduced power, in the sample means. The nonresonant results for s = 0.006 and s = 0.010 are displayed in Fig. 16. The effect of increasing noise is observed, masking both the fundamental frequency and its harmonics. The resonant attractor for these cases is destroyed, as demonstrated by the basins of attraction in Fig. 13, and, therefore, it does not have a stationary power spectrum.
The loss of stability of the resonant solution is identified by eigenvalues of \({p}_{ij}\) slightly less than one. These correspond to long-transient solutions, that is, solutions taking a long time to converge to a given attractor. The influence of noise on the transient responses can be observed in Fig. 17. For small noise intensity, s = 0.006, the resonant solution takes a rather long time to converge to the nonresonant solution, see Fig. 17a. This corresponds to an eigenvalue of \({p}_{ij}\) very close to one; the value obtained for the corresponding case, Fig. 13a, is 0.999990835. For s = 0.010, the convergence time is reduced. However, solutions in the formerly resonant region can either converge to the nonresonant solution, Fig. 17b, or escape, Fig. 17c, with different probabilities. Again, this result corresponds to the one observed in the basin analysis, Fig. 13b. The eigenvalue is smaller, with a value of 0.993246847, corroborating the observed reduction in convergence time.
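The link between an eigenvalue slightly below one and a long transient can be quantified: a component along the corresponding eigendistribution decays as \({\lambda }^{n}\), so its e-folding time is \(-1/\mathrm{ln}\,\lambda\) iterations (excitation periods). A minimal check with the two eigenvalues reported above:

```python
import numpy as np

def e_folding_time(lam):
    # number of map iterations n such that lam**n = 1/e
    return -1.0 / np.log(lam)

t_small = e_folding_time(0.999990835)  # eigenvalue reported for s = 0.006
t_large = e_folding_time(0.993246847)  # eigenvalue reported for s = 0.010
print(t_small, t_large)                # the transient shortens by three orders of magnitude
```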
As shown by the previous results, noise leads to uncertainty along the basin boundary, where the probability is less than one. As in the deterministic case, the transient noisy response becomes longer as initial conditions move farther away from the attractor. The time-dependency of the basins of attraction is demonstrated in Fig. 18 for A = 0.06 and s = 0.010. Values of ε ≈ 1 (respectively, ε ≈ 0) correspond to a small (respectively, large) time-horizon, identifying regions where the time response converges in the mean sense to a given attractor after a short (respectively, long) time interval. The former corresponds to a small region surrounding the attractor, see Fig. 18(a.1, b.1). As ε decreases, the time horizon increases, and the obtained basin approaches its maximum size asymptotically. This is clear in Fig. 18(a.2, a.3) and Fig. 18(c.2, c.3), where the basin stabilizes at its final configuration. For this noise intensity, there is no resonant attractor in the classical sense, with solutions decaying to the nonresonant attractor or escaping. Figure 18b demonstrates what happens with the resonant region. Initially, solutions converge to the region where the resonant attractor exists for lower noise intensities, as demonstrated by the increase in basin area from Fig. 18(b.1) to Fig. 18(b.2). However, for large time-horizons, the supposed resonant basin decays to zero, see Fig. 18(b.3). To obtain the asymptotic basin of attraction for this noise level with methods based on time integration, the number of periods of integration would be prohibitively large. Furthermore, if time-horizons smaller than \({10}^{4}\) (\(\varepsilon >{10}^{-4}\)) were considered, the resonant region would mistakenly be identified as a basin, when it is in fact a set of initial conditions with long transients.
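The finite-time behavior described above can be mimicked with a toy three-box chain (the transition rates are hypothetical): the attractor box and the escape sink are absorbing, while the intermediate box leaks slowly, so the basin observable of the attractor grows with the time horizon toward its asymptotic value.

```python
import numpy as np

# box 0: attractor (absorbing); box 1: slow-leaking transient region;
# box 2: escape sink (absorbing) -- rates are illustrative only
P = np.array([[1.0, 0.0, 0.0],
              [0.3, 0.6, 0.1],
              [0.0, 0.0, 1.0]])
chi = np.array([1.0, 0.0, 0.0])  # indicator of the attractor's support

# b_n[i]: probability that a solution started in box i has reached the
# attractor within n steps; it saturates at 0.3 / (0.3 + 0.1) = 0.75
for n in (1, 10, 100):
    b = np.linalg.matrix_power(P, n) @ chi
    print(n, b[1])
```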
Long transients lead to large computation times when obtaining the asymptotic response by usual time-integration techniques. However, the proposed phase-space subdivision procedure can identify and separate these solutions from the true asymptotic behavior. Figure 19 contains the corresponding eigendistribution for the resonant solution, which, however, is not strictly a distribution but a long transient. This is shown in Fig. 19b, where negative (blue) and positive (red) regions, each with absolute value \(\left|f\right|=0.5\), are separated. The former represent regions where solutions stay for a long time before decaying to the permanent solution (nonresonant attractor, in red, or escape), as already observed in the basins of attraction, Fig. 13b, and time responses, Fig. 17(b, c). Indeed, according to Dellnitz and Junge [26], there are two scenarios where almost invariant sets can be observed. The first occurs when cyclic components of a periodic attractor collide: the cyclic components’ eigenvalues change from an absolute value of one to less than one. Only one attractor is involved in this process, its periodicity becoming an almost periodicity. The second refers to the collision of two or more attractors, with at least one of them changing its eigenvalue from an absolute value of one to less than one. The attractor whose eigenvalue changes loses stability, exhibiting a long-transient solution. In this example, the resonant attractor loses stability by colliding, with different probabilities (see Fig. 18(a, c) for long time horizons), with both the nonresonant attractor and the escape solution. A possible triple collision between the three distinct solutions, after which only two remain stable, may also occur for a very specific (i.e., coincident) probability value.
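The sign-based separation used in Fig. 19 can be reproduced on a toy two-box chain with weak coupling (illustrative rates): the eigendistribution of the second-largest eigenvalue changes sign between the two almost invariant regions.

```python
import numpy as np

# two weakly coupled regions; the small off-diagonal entries are the
# leakage rates between them (hypothetical values)
P = np.array([[0.99, 0.01],
              [0.02, 0.98]])

w, V = np.linalg.eig(P.T)
order = np.argsort(-np.real(w))          # sort eigenvalues, largest first
lam2 = float(np.real(w[order[1]]))       # second eigenvalue, slightly below one
f = np.real(V[:, order[1]])              # its eigendistribution

# opposite signs of f flag the two almost invariant sets
print(lam2, np.sign(f))
```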
The proposed measure to quantify the system’s integrity under various noise intensities is presented in Fig. 20, for \(\varepsilon ={10}^{-8}\). Again, for each attractor, the integrity is computed according to Eq. (30). The resilience of the nonresonant attractor against noise and the integrity loss of the resonant attractor for s ≥ 0.006 are clearly observed. Therefore, the proposed procedure can be used to quantify the influence of noise on any integrity measure.
A comparison with a Monte Carlo experiment is presented in Fig. 21. The probability density estimated through 10,000 initial conditions uniformly distributed over the phase-space window with s = 0.004, integrated up to t = 100000 T, is presented in Fig. 21a. The black areas represent high-density regions. They agree with the attractors’ distribution, Fig. 21b, obtained from the proposed methodology, validating the present strategy.
5 Conclusions
The presence of uncertainties in engineering systems is unavoidable and can drastically change their behavior. Furthermore, noise is inevitable in the operational stages. Here, an adaptative phase-space discretization strategy for the global analysis of deterministic, parametrically uncertain, and stochastic nonlinear dynamical systems with competing attractors was developed to quantify the effects of uncertainty on such systems.
Rudiments of global dynamics were presented. The implications of nondeterminism for global structures, namely attractors and basins, were addressed. Then, generalized global operators were presented. Their fixed space corresponds to attractors’ distributions and basins observables in the mean sense over the phase space only, integrating over the nondeterministic spaces. The generalized strategy of operator discretization was presented, resulting in a row-stochastic matrix together with vector representations of invariant distributions and basins, that is, of attractors’ distributions and basins observables, respectively.
The Ulam method is known for displaying numerical diffusion due to the phase-space discretization. This can be remedied by refining the discretization, at the expense of computational cost. Here, an adaptative discretization scheme was proposed to refine only the most impactful regions, namely the basins observables’ boundaries and the attractors distributions’ supports. The strategy was summarized into three main steps: identification, refinement, and update. Simple heuristics were adopted for the identification of basins observables’ boundaries and attractors distributions’ supports, requiring only the computed stochastic matrix and fixed space. The refinement is the most intricate step, and a technique based on a tree data structure was adopted to organize the phase-space subdivision. Flow maps of refined regions can be calculated, and the corresponding dynamical system’s transfer operator updated. The procedure is conducted for a predefined number of iterations, resulting in a phase-space discretization with adaptative resolution. It is easily applied to stochastic dynamics through Monte Carlo, whereas the mean structures for parametric-uncertainty dynamics are only attained through integration of the attractors and basins over the parametric uncertainty space. In this last respect, a simple numerical procedure, based on the rectangle rule and branch identification through the Lukaszyk-Karmowski metric, was adopted to solve the integral over the parameter space and compute the mean structures.
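The tree organization of the refinement step can be sketched as follows (a hypothetical Box class, not the paper's implementation): each flagged box is bisected along its longest edge, and the leaves of the tree form the current discretization.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Box:
    lo: Tuple[float, ...]
    hi: Tuple[float, ...]
    children: List["Box"] = field(default_factory=list)

    def split(self) -> None:
        # bisect along the longest edge, storing the halves as children
        d = max(range(len(self.lo)), key=lambda k: self.hi[k] - self.lo[k])
        mid = 0.5 * (self.lo[d] + self.hi[d])
        hi1, lo2 = list(self.hi), list(self.lo)
        hi1[d], lo2[d] = mid, mid
        self.children = [Box(self.lo, tuple(hi1)), Box(tuple(lo2), self.hi)]

def leaves(box: Box) -> List[Box]:
    # the leaves of the tree are the boxes of the current discretization
    return [box] if not box.children else [l for c in box.children for l in leaves(c)]

root = Box((0.0, 0.0), (1.0, 1.0))
for _ in range(2):                   # two passes; here every leaf is flagged for refinement
    for leaf in leaves(root):
        leaf.split()
print(len(leaves(root)))             # each pass doubles the flagged boxes
```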
Lastly, the Helmholtz oscillator under harmonic excitation was investigated. The deterministic analysis displayed three possible outcomes depending on the excitation amplitude: a small-amplitude (i.e., nonresonant) attractor, a large-amplitude (i.e., resonant) attractor, and escape solutions. The adaptative discretization procedure was able to obtain attractors and basins’ boundaries with high fidelity, even when the latter become fractal and intermingled. The subdivision strategy proved capable of mitigating numerical diffusion, a common hindrance inherent to many phase-space discretization procedures. A comparison with an initially refined discretization showed that the economy achieved by the proposed procedure can be as high as 90% for highly refined phase spaces. Next, the Helmholtz oscillator with a random stiffness parameter was considered, with the uncertain parameter defined as a truncated normal variable to prevent large spurious values. Mean basins and distributions were obtained for varying uncertainty intensity, with the attractors’ distributions described by one-dimensional structures in the phase space. As the uncertainty increases, broader regions along the basins’ boundaries need to be refined. Here, the economy of the proposed methodology was verified through a box-count procedure. The results quantify the decrease of the safe basin area of both attractors, particularly the resonant one, with increasing uncertainty. For high uncertainty values, no set of initial conditions has a 100% probability of converging to the resonant attractor. The results were validated by a Monte Carlo analysis, demonstrating the efficiency of the proposed methodology. In turn, increasing the excitation noise entails a two-dimensional diffusion of the attractors, affecting particularly the resonant one, which approaches the basin boundary. This leads to a global bifurcation due to a connection between the resonant attractor and the hilltop saddle.
After this bifurcation, the resonant basin vanishes and solutions either converge to the nonresonant attractor or escape. The detailed analysis of the global bifurcation shows that formerly resonant solutions become long transients above a critical noise intensity. Long-transient solutions are detected by the almost invariant eigendistributions, which identify regions where solutions stay for a long time, with basins of attraction varying with the final time horizon. The variation of the resonant basin area with noise intensity displays a characteristic Dover-cliff profile, with a sudden drop to zero. Overall, considering parametric uncertainty and noise meaningfully affects the basin area and compactness, directly influencing the system’s global stability, with effects to be carefully evaluated from a design perspective. The matter will be explored in future investigations, along with the possible exploitation of control strategies (see, e.g., [85]) to increase the dynamic integrity of given attractors.
The investigated system shows that the adaptative discretization procedure can efficiently address dynamics with multiple attractors. Additionally, the dedicated methodologies for parametric uncertainty and stochasticity are essential to correctly analyze each case and understand the observed phenomena. Finally, the weighted basin area is able to quantify the integrity of nondeterministic cases and is the most natural generalization of the global integrity concept. We expect to apply these strategies to dynamical systems representing real engineering problems, also addressing the convergence and limitations of the proposed algorithms.
Data Availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
References
Kiureghian, A.D., Ditlevsen, O.: Aleatory or epistemic? Does it matter? Struct. Saf. 31, 105–112 (2009). https://doi.org/10.1016/j.strusafe.2008.06.020
Papadrakakis, M., Stefanou, G., Papadopoulos, V. (eds.): Computational Methods in Stochastic Dynamics, vol. 1. Springer, Dordrecht (2011)
Papadrakakis, M., Stefanou, G., Papadopoulos, V. (eds.): Computational Methods in Stochastic Dynamics, vol. 2. Springer, Dordrecht (2013)
Xiu, D.: Fast numerical methods for stochastic computations: a review. Commun. Comput. Phys. 5, 242–272 (2009)
Gerritsma, M., van der Steen, J.-B., Vos, P., Karniadakis, G.: Time-dependent generalized polynomial chaos. J. Comput. Phys. 229, 8333–8363 (2010). https://doi.org/10.1016/j.jcp.2010.07.020
Mai, C.V.: Polynomial Chaos Expansions for Uncertain Dynamical Systems - Applications In Earthquake Engineering. PhD thesis, ETH Zürich, Zürich (2016)
Luchtenburg, D.M., Brunton, S.L., Rowley, C.W.: Long-time uncertainty propagation using generalized polynomial chaos and flow map composition. J. Comput. Phys. 274, 783–802 (2014). https://doi.org/10.1016/j.jcp.2014.06.029
Arnold, L.: Random Dynamical Systems. Springer, Berlin (1998)
Han, X., Kloeden, P.E.: Random Ordinary Differential Equations and their Numerical Solution, Vol. 85. Springer, Singapore (2017)
Mezić, I.: Koopman operator, geometry, and learning of dynamical systems. Not. Am. Math. Soc. 68, 1 (2021). https://doi.org/10.1090/noti2306
Ulam, S.M.: Problems in Modern Mathematics. John Wiley & Sons, New York (1964)
Hsu, C.S.: A generalized theory of cell-to-cell mapping for nonlinear dynamical systems. J. Appl. Mech. 48, 634–642 (1981). https://doi.org/10.1115/1.3157686
Hsu, C.S.: A theory of cell-to-cell mapping dynamical systems. J. Appl. Mech. 47, 931–939 (1980). https://doi.org/10.1115/1.3153816
Guder, R., Kreuzer, E.J.: Using generalized cell mapping to approximate invariant measures on compact manifolds. Int. J. Bifurc. Chaos. 07, 2487–2499 (1997). https://doi.org/10.1142/S0218127497001667
Hsu, C.S., Chiu, H.M.: A cell mapping method for nonlinear deterministic and stochastic systems—part I: the method of analysis. J. Appl. Mech. 53, 695 (1986). https://doi.org/10.1115/1.3171833
Chiu, H.M., Hsu, C.S.: A cell mapping method for nonlinear deterministic and stochastic systems—part II: examples of application. J. Appl. Mech. 53, 702 (1986). https://doi.org/10.1115/1.3171834
Sun, J.-Q., Hsu, C.S.: The generalized cell mapping method in nonlinear random vibration based upon short-time Gaussian approximation. J. Appl. Mech. 57, 1018–1025 (1990). https://doi.org/10.1115/1.2897620
Han, Q., Xu, W., Sun, J.-Q.: Stochastic response and bifurcation of periodically driven nonlinear oscillators by the generalized cell mapping method. Phys. A Stat. Mech. its Appl. 458, 115–125 (2016). https://doi.org/10.1016/j.physa.2016.04.006
Yue, X., Wang, Y., Han, Q., Xu, Y., Xu, W.: Transient responses of nonlinear dynamical systems under colored noise. EPL. Europhys. Lett. 127, 24004 (2019). https://doi.org/10.1209/0295-5075/127/24004
Yue, X., Wang, Y., Han, Q., Xu, Y., Xu, W.: Probabilistic response and stochastic bifurcation in a turbulent swirling flow. J. Comput. Nonlinear Dyn. (2019). https://doi.org/10.1115/1.4044500
Han, Q., Xu, W., Hao, H., Yue, X.: Global analysis of stochastic systems by the digraph cell mapping method based on short-time gaussian approximation. Int. J. Bifurc. Chaos. 30, 2050071 (2020). https://doi.org/10.1142/S0218127420500716
Yue, X., Xu, Y., Xu, W., Sun, J.-Q.: Probabilistic response of dynamical systems based on the global attractor with the compatible cell mapping method. Phys. A Stat. Mech. its Appl. 516, 509–519 (2019). https://doi.org/10.1016/j.physa.2018.10.034
Yue, X., Xu, Y., Xu, W., Sun, J.-Q.: Global invariant manifolds of dynamical systems with the compatible cell mapping method. Int. J. Bifurc. Chaos. 29, 1950105 (2019). https://doi.org/10.1142/S0218127419501050
Dellnitz, M., Hohmann, A.: A subdivision algorithm for the computation of unstable manifolds and global attractors. Numer. Math. 75, 293–317 (1997). https://doi.org/10.1007/s002110050240
Dellnitz, M., Hohmann, A., Junge, O., Rumpf, M.: Exploring invariant sets and invariant measures. Chaos An Interdiscip. J. Nonlinear Sci. 7, 221–228 (1997). https://doi.org/10.1063/1.166223
Dellnitz, M., Junge, O.: On the approximation of complicated dynamical behavior. SIAM J. Numer. Anal. 36, 491–515 (1999). https://doi.org/10.1137/S0036142996313002
Yue, X., Xu, W., Zhang, Y.: Global bifurcation analysis of Rayleigh-Duffing oscillator through the composite cell coordinate system method. Nonlinear Dyn. 69, 437–457 (2012). https://doi.org/10.1007/s11071-011-0276-z
Yue, X., Lv, G., Zhang, Y.: Rare and hidden attractors in a periodically forced Duffing system with absolute nonlinearity. Chaos Solitons Fractals 150, 111108 (2021). https://doi.org/10.1016/j.chaos.2021.111108
Yue, X., Xiang, Y., Zhang, Y., Xu, Y.: Global analysis of stochastic bifurcation in shape memory alloy supporter with the extended composite cell coordinate system method. Chaos An Interdiscip. J. Nonlinear Sci. 31, 013133 (2021). https://doi.org/10.1063/5.0024992
Lindner, M., Hellmann, F.: Stochastic basins of attraction and generalized committor functions. Phys. Rev. E. 100, 022124 (2019). https://doi.org/10.1103/PhysRevE.100.022124
Andonovski, N., Lenci, S.: Six-dimensional basins of attraction computation on small clusters with semi-parallelized SCM method. Int. J. Dyn. Control. (2019). https://doi.org/10.1007/s40435-019-00557-2
Belardinelli, P., Lenci, S.: A first parallel programming approach in basins of attraction computation. Int. J. Non. Linear. Mech. 80, 76–81 (2016). https://doi.org/10.1016/j.ijnonlinmec.2015.10.016
Belardinelli, P., Lenci, S.: An efficient parallel implementation of cell mapping methods for MDOF systems. Nonlinear Dyn. 86, 2279–2290 (2016). https://doi.org/10.1007/s11071-016-2849-3
Sun, J.-Q., Xiong, F.-R., Schütze, O., Hernández, C.: Cell Mapping Methods. Springer, Singapore (2019)
Klus, S., Koltai, P., Schütte, C.: On the numerical approximation of the Perron-Frobenius and Koopman operator. J. Comput. Dyn. (2015). https://doi.org/10.3934/jcd.2016003
Dellnitz, M., Froyland, G., Junge, O.: The algorithms behind GAIO — set oriented numerical methods for dynamical systems. In: Fiedler, B. (ed.) Ergodic Theory, Analysis, and Efficient Simulation of Dynamical Systems, pp. 145–174. Springer, Berlin (2001)
Padberg, K., Thiere, B., Preis, R., Dellnitz, M.: Local expansion concepts for detecting transport barriers in dynamical systems. Commun. Nonlinear Sci. Numer. Simul. 14, 4176–4190 (2009). https://doi.org/10.1016/j.cnsns.2009.03.018
Dellnitz, M., Klus, S., Ziessler, A.: A set-oriented numerical approach for dynamical systems with parameter uncertainty. SIAM J. Appl. Dyn. Syst. 16, 120–138 (2017). https://doi.org/10.1137/16M1072735
Gerlach, R., Koltai, P., Dellnitz, M.: Revealing the intrinsic geometry of finite dimensional invariant sets of infinite dimensional dynamical systems. (2019). https://doi.org/10.48550/arXiv.1902.08824
Ziessler, A., Dellnitz, M., Gerlach, R.: The numerical computation of unstable manifolds for infinite dimensional dynamical systems by embedding techniques. SIAM J. Appl. Dyn. Syst. 18, 1265–1292 (2019). https://doi.org/10.1137/18M1204395
Gerlach, R., Ziessler, A., Eckhardt, B., Dellnitz, M.: A set-oriented path following method for the approximation of parameter dependent attractors. SIAM J. Appl. Dyn. Syst. 19, 705–723 (2020). https://doi.org/10.1137/19M1247139
Koltai, P.: A stochastic approach for computing the domain of attraction without trajectory simulation. Conf. Publ. 2011, 854–863 (2011). https://doi.org/10.3934/proc.2011.2011.854
Koltai, P., Volf, A.: Optimizing the stable behavior of parameter-dependent dynamical systems—maximal domains of attraction, minimal absorption times. J. Comput. Dyn. 1, 339–356 (2014). https://doi.org/10.3934/jcd.2014.1.339
Froyland, G., Junge, O., Koltai, P.: Estimating long-term behavior of flows without trajectory integration: the infinitesimal generator approach. SIAM J. Numer. Anal. 51, 223–247 (2013). https://doi.org/10.1137/110819986
Froyland, G., Koltai, P.: Estimating long-term behavior of periodically driven flows without trajectory integration. Nonlinearity 30, 1948–1986 (2017). https://doi.org/10.1088/1361-6544/aa6693
Klus, S., Nüske, F., Koltai, P., Wu, H., Kevrekidis, I., Schütte, C., Noé, F.: Data-driven model reduction and transfer operator approximation. J. Nonlinear Sci. 28, 985–1010 (2018). https://doi.org/10.1007/s00332-017-9437-7
Froyland, G., Stuart, R.M., van Sebille, E.: How well-connected is the surface of the global ocean? Chaos (2014). https://doi.org/10.1063/1.4892530
Ding, J., Li, T.Y.: Markov finite approximation of Frobenius–Perron operator. Nonlinear Anal. Theory Methods Appl. 17, 759–772 (1991). https://doi.org/10.1016/0362-546X(91)90211-I
Ding, J., Du, Q., Li, T.Y.: High order approximation of the Frobenius-Perron operator. Appl. Math. Comput. 53, 151–171 (1993). https://doi.org/10.1016/0096-3003(93)90099-Z
Junge, O., Marsden, J.E., Mezic, I.: Uncertainty in the dynamics of conservative maps. Proc. IEEE Conf. Decis. Control. 2, 2225–2230 (2004). https://doi.org/10.1109/cdc.2004.1430379
Jin, C., Ding, J.: A linear spline Markov approximation method for random maps with position dependent probabilities. Int. J. Bifurc. Chaos. 30, 2050046 (2020). https://doi.org/10.1142/S0218127420500467
Bangura, R.M., Jin, C., Ding, J.: The norm convergence of a least squares approximation method for random maps. Int. J. Bifurc. Chaos. 31, 2150068 (2021). https://doi.org/10.1142/S0218127421500681
Lasota, A., Mackey, M.C.: Chaos, Fractals, and Noise, vol. 97. Springer, New York (1994)
Lenci, S., Rega, G.: Optimal control of homoclinic bifurcation: theoretical treatment and practical reduction of safe basin erosion in the Helmholtz oscillator. J. Vib. Control. 9, 281–315 (2003). https://doi.org/10.1177/107754603030753
Lenci, S., Rega, G.: Optimal control of nonregular dynamics in a Duffing oscillator. Nonlinear Dyn. 33, 71–86 (2003). https://doi.org/10.1023/A:1025509014101
Gonçalves, P.B., da Silva, F.M.A., Rega, G., Lenci, S.: Global dynamics and integrity of a two-dof model of a parametrically excited cylindrical shell. Nonlinear Dyn. 63, 61–82 (2011). https://doi.org/10.1007/s11071-010-9785-4
Benedetti, K.C.B., Gonçalves, P.B., Lenci, S., Rega, G.: An operator methodology for the global dynamic analysis of stochastic nonlinear systems. Theor. Appl. Mech. Lett. (2022). https://doi.org/10.1016/j.taml.2022.100419
Thompson, J.M.T.: Designing against capsize in beam seas: recent advances and new insights. Appl. Mech. Rev. 50, 307 (1997). https://doi.org/10.1115/1.3101710
Soliman, M.S., Gonçalves, P.B.: Chaotic behaviour resulting in transient and steady-state instabilities of pressure loaded shallow spherical shells. J. Sound Vib. 259, 497–512 (2003). https://doi.org/10.1006/jsvi.2002.5163
da Silva, F.M.A., Gonçalves, P.B.: The influence of uncertainties and random noise on the dynamic integrity analysis of a system liable to unstable buckling. Nonlinear Dyn. 81, 707–724 (2015). https://doi.org/10.1007/s11071-015-2021-5
Mezić, I., Runolfsson, T.: Uncertainty propagation in dynamical systems. Automatica 44, 3003–3013 (2008). https://doi.org/10.1016/j.automatica.2008.04.020
Milnor, J.: On the concept of attractor. In: The theory of chaotic attractors. Springer, New York, pp. 243–264 (1985)
Ashwin, P.: Minimal attractors and bifurcations of random dynamical systems. Proc. R. Soc. London. Ser. A Math. Phys. Eng. Sci. 455, 2615–2634 (1999). https://doi.org/10.1098/rspa.1999.0419
Ochs, G.: Random attractors: robustness, numerics and chaotic dynamics. In: Ergodic theory, analysis, and efficient simulation of dynamical systems, pp. 1–30. Springer, Berlin (2001)
Le Maître, O.P., Knio, O.M.: Spectral Methods for uncertainty Quantification. Springer, Dordrecht (2010)
Benedetti, K.C.B., Gonçalves, P.B.: Nonlinear response of an imperfect microcantilever static and dynamically actuated considering uncertainties and noise. Nonlinear Dyn. 107, 1725–1754 (2022). https://doi.org/10.1007/s11071-021-06600-2
Takeishi, N., Kawahara, Y., Yairi, T.: Learning Koopman invariant subspaces for dynamic mode decomposition. Adv. Neural Inf. Process. Syst., pp. 1131–1141 (2017)
Mauroy, A., Mezić, I.: Global stability analysis using the Eigenfunctions of the Koopman operator. IEEE Trans. Automat. Contr. 61, 3356–3369 (2016). https://doi.org/10.1109/TAC.2016.2518918
Brunton, S.L., Budišić, M., Kaiser, E., Kutz, J.N.: Modern Koopman theory for dynamical systems. SIAM Rev. 64, 229–340 (2022). https://doi.org/10.1137/21M1401243
Ding, J., Yien Li, T., Zhou, A.: Finite approximations of Markov operators. J. Comput. Appl. Math. 147, 137–152 (2002). https://doi.org/10.1016/S0377-0427(02)00429-6
Benedetti, K.C.B.: Global Analysis of Stochastic Nonlinear Dynamical Systems: an Adaptative Phase-Space Discretization Strategy. PhD thesis, Pontifical Catholic University of Rio de Janeiro, (2022)
Grüne, L.: Subdivision techniques for the computation of domains of attractions and reachable sets. IFAC Proc. 34, 729–734 (2001). https://doi.org/10.1016/s1474-6670(17)35265-5
Grüne, L.: Asymptotic Behavior of Dynamical and Control Systems Under Perturbation and Discretization, vol. 1783. Springer, Berlin (2002)
Dellnitz, M., Junge, O.: An adaptive subdivision technique for the approximation of attractors and invariant measures. Comput. Vis. Sci. 1, 63–68 (1998). https://doi.org/10.1007/s007910050006
Junge, O.: An adaptive subdivision technique for the approximation of attractors and invariant measures: proof of convergence. Dyn. Syst. 16, 213–222 (2001). https://doi.org/10.1080/14689360110060708
Guder, R., Kreuzer, E.J.: Control of an adaptive refinement technique of generalized cell mapping by system dynamics. Nonlinear Dyn. 20, 21–32 (1999). https://doi.org/10.1023/A:1008352418599
Koltai, P.: Efficient Approximation Methods for the Global Long-Term Behavior of Dynamical Systems—Theory, Algorithms and Examples. PhD thesis, Technische Universität München, Munich (2010)
Guder, R., Kreuzer, E.: Basin boundaries and robustness of nonlinear dynamic systems. Arch. Appl. Mech. 69, 569–583 (1999). https://doi.org/10.1007/s004190050244
Lukaszyk, S.: A new concept of probability metric and its applications in approximation of scattered data sets. Comput. Mech. 33, 299–304 (2004). https://doi.org/10.1007/s00466-003-0532-2
Goswami, D., Thackray, E., Paley, D.A.: Constrained ulam dynamic mode decomposition: approximation of the perron-frobenius operator for deterministic and stochastic systems. IEEE Control Syst. Lett. 2, 809–814 (2018). https://doi.org/10.1109/LCSYS.2018.2849552
Burkardt, J.: The Truncated Normal Distribution. Department of Scientific Computing, Florida State University (2014)
Gonçalves, P.B., Santee, D.M.: Influence of uncertainties on the dynamic buckling loads of structures liable to asymmetric postbuckling behavior. Math. Probl. Eng. 2008, 1–24 (2008). https://doi.org/10.1155/2008/490137
Nayfeh, A.H., Balachandran, B.: Applied Nonlinear Dynamics. Wiley, New York (1995)
Benedetti, K.C.B., Gonçalves, P.B., da Silva, F.M.A.: Nonlinear oscillations and bifurcations of a multistable truss and dynamic integrity assessment via a Monte Carlo approach. Meccanica 55, 2623–2657 (2020). https://doi.org/10.1007/s11012-020-01202-5
Lenci, S., Orlando, D., Rega, G., Gonçalves, P.B.: Controlling practical stability and safety of mechanical systems by exploiting chaos properties. Chaos (2012). https://doi.org/10.1063/1.4746094
Acknowledgements
The authors are grateful to Prof. Americo Barbosa da Cunha Junior for the discussions regarding the mathematical background, and to the anonymous reviewers. The authors also acknowledge the financial support of the Brazilian research agencies, CNPq [Grant Numbers 301355/2018-5 and 200198/2022-0], FAPERJ-CNE [Grant Number E-26/202.711/2018], FAPERJ Nota 10 [Grant Number E-26/200.357/2020] and CAPES [finance code 001 and 88881.310620/2018-01].
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Cite this article
Benedetti, K.C.B., Gonçalves, P.B., Lenci, S. et al. Global analysis of stochastic and parametric uncertainty in nonlinear dynamical systems: adaptative phase-space discretization strategy, with application to Helmholtz oscillator. Nonlinear Dyn 111, 15675–15703 (2023). https://doi.org/10.1007/s11071-023-08667-5