
1 Introduction

The purpose of this paper is to study a type of stochastic stability of invariant measures, which we call “empiric stochastic stability”, for continuous maps \(f: M \mapsto M\) on a compact Riemannian manifold M of finite dimension, with or without boundary. In particular, we are interested in the empirically stochastically stable measures of one-dimensional continuous dynamical systems, and among them, the \(C^1\)-expanding maps on the circle.

Let us denote by (M, f) the deterministic (zero-noise) dynamical system obtained by iteration of f, and by \((M, f, P_{\varepsilon })\) the randomly perturbed system whose noise amplitude is \(\varepsilon \). Even though we will work in a wide scenario which includes any continuous dynamical system (M, f), we restrict the stochastic system \((M, f, P_{\varepsilon })\) by assuming that the noise probability distribution is uniform (i.e. it has constant density) on all the balls of radius \(\varepsilon >0\) of M (for a precise statement of this assumption see formula (1) below). We call \(\varepsilon \) the noise level, or also the amplitude of the random perturbation. To define the empiric stochastic stability we will take \(\varepsilon \rightarrow 0^+ \).

In the stochastic system \((M, f, P_{\varepsilon })\), the symbol \(P_{\varepsilon }\) denotes the family of probability distributions, called transition probabilities, according to which the noise is added to f(x) for each \(x \in M\). Precisely, each transition probability is, for all \(n \in \mathbb {N}\), the distribution of the state \(x_{n+1}\) of the noisy orbit conditioned on \(x_n = x\), for each \(x \in M\). As said above, the transition probability is supported on the ball with center at f(x) and radius \(\varepsilon >0\). So, the zero-noise system (M, f) is recovered by taking \(\varepsilon = 0\); namely, \((M, f) =(M, f, P_0)\). The observer naturally expects that if the amplitude \(\varepsilon >0\) of the random perturbation were small enough, then the ergodic properties of the stochastic system would “remember” those of the zero-noise system.

The foundations and tools to study the random perturbations of dynamical systems were provided early in [4, 19, 28]. Stochastic stability appears in the literature mostly defined through the stationary measures \(\mu _{\varepsilon }\) of the stochastic system \((M, f, P_{\varepsilon })\). Classically, the authors prove and describe, under particular conditions, the existence and properties of the f-invariant measures that are the weak\(^*\)-limit of ergodic stationary measures as \(\varepsilon \rightarrow 0^+\). See for instance the early results of [8, 20,21,22, 30], and the later works of [1,2,3, 25]. For a review on stochastic and statistical stability of randomly perturbed dynamical systems, see for instance [29] and Appendix D of [7].

The stationary measures of the random perturbations provide the probabilistic behaviour of the noisy system asymptotically in the future. Nevertheless, from a rather practical or experimental point of view, the concept of stochastic stability should not require the a priori knowledge of the limit measures of the perturbed system as \(n \rightarrow + \infty \). For instance, [15] presents numerical experiments on the stability of one-dimensional noisy systems in a finite time. The ergodic stationary measure is in fact substituted by an empirical (i.e. obtained after a finite-time observation of the system) probability. Also in other applications of the theory of random systems (see for instance [16, 18]), the stationary measures are usually unknown and not directly obtained from the experiments; they are substituted by the finite-time empiric probabilities, which approximate the stationary measures if the observations last long enough.

Summarizing, for a certain type of stochastically stable properties, one should not need the infinite-time noisy orbits. Instead, one may take the noisy orbits up to a large finite time n, which are indeed those that the experimenter observes and predicts. The statistics of the observations and predictions of the noisy orbits still reflect, for the experimenter and the predictor, the behaviour of the stochastic system, but only up to some finite horizon.

Motivated by the above arguments, in Sect. 2 we will define the empiric stochastic stability. Roughly speaking, an f-invariant probability for the zero-noise system (M, f) is empirically stochastically stable if it approximates, up to an arbitrarily small error \(\rho >0\), the statistics of sufficiently large pieces of the noisy orbits, for some fixed time n, provided that the noise level \(\varepsilon >0\) is small enough (see Definition 4). This concept is a reformulation in a finite-time scenario of one of the usual definitions of infinite-time stochastic stability (see for instance [1, 8, 30]).

1.1 Setting the Problem

Let \(\varepsilon > 0\) and \(x \in M\). Denote by \(B_{\varepsilon }(x) \subset M\) the open ball of radius \(\varepsilon \) centered at x. Consider the Lebesgue measure m, i.e. the finite measure obtained from the volume form induced by the Riemannian structure of the manifold. For each point \(x \in M\), we take the restriction of m to the ball \(B_ \varepsilon (f(x))\). Precisely, we define the probability measure \(p_{\varepsilon } (x, \cdot )\) by the following equality:

$$\begin{aligned} p_{\varepsilon }(x, A) := \frac{m \big (A \cap B_{\varepsilon }(f(x))\big )}{m\big (B_{\varepsilon }(f(x))\big )} \ \ \forall \ A \in {\mathscr {A}},\end{aligned}$$
(1)

where \({\mathscr {A}}\) is the Borel sigma-algebra in M.

Definition 1

(Stochastic system with noise-level \(\varepsilon \).) For each value of \(\varepsilon > 0\), consider the stochastic process or Markov chain \(\{x_n\}_{n \in \mathbb {N}} \subset M^{\mathbb {N}}\) in the measurable space \({(M, {\mathscr {A}})}\) such that, for all \(A \in {\mathscr {A}}\):

$$\text{ prob }(x_0 \in A) = m(A), \ \ \text{ prob }(x_{n+1} \in A | x_n= x) = p_{\varepsilon }(x, A),$$

where \(p_{\varepsilon }(x, \cdot )\) is defined by equality (1).

The system whose stochastic orbits are the Markov chains as above is called stochastic system with noise-level \(\varepsilon \). We denote it by \((M, f, P_{\varepsilon })\), where

$$P_{\varepsilon } := \{p_{\varepsilon }(x, \cdot )\}_{x \in M}.$$
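To fix ideas, here is a minimal Python sketch (an illustration added to this text, not part of the formal development) that simulates a finite piece of one such Markov chain. It assumes, only as an example, that M is the circle \(S^1 \simeq [0,1)\) with its arc-length distance, that the zero-noise map is the doubling map \(f(x) = 2x \ (\text{mod } 1)\), and that the transition probability \(p_{\varepsilon }(x, \cdot )\) is the uniform distribution on the arc of radius \(\varepsilon \) centered at f(x), as in equality (1).

```python
import random

def f(x):
    # assumed zero-noise map: the doubling map on the circle [0, 1)
    return (2.0 * x) % 1.0

def noisy_step(x, eps):
    # one transition of the Markov chain of Definition 1:
    # x_{n+1} is uniformly distributed on the arc of radius eps centered at f(x)
    return (f(x) + random.uniform(-eps, eps)) % 1.0

def noisy_orbit(x0, n, eps):
    # a noisy orbit x_0, x_1, ..., x_n with noise level eps
    orbit = [x0]
    for _ in range(n):
        orbit.append(noisy_step(orbit[-1], eps))
    return orbit

if __name__ == "__main__":
    random.seed(0)
    x0 = random.random()  # initial state x_0 distributed according to m
    print(noisy_orbit(x0, n=10, eps=0.01))
```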

The stochastic systems with noise-level \(\varepsilon >0\) are usually studied by assuming certain regularity of the zero-noise systems (M, f), and by taking the ergodic stationary measures \(\mu _{\varepsilon }\) of the stochastic system \((M, f, P_{\varepsilon })\) (see for instance [30]). When assuming that the transition probabilities satisfy equality (1), all the stationary probability measures become absolutely continuous with respect to the Lebesgue measure m (see for instance [6]). Therefore, if a property holds for the noisy orbits for \(\mu _{\varepsilon }\)-a.e. initial state \(x \in M\), it also holds for a Lebesgue-positive set of states.

When looking at the noisy system, the experimenter usually obtains the values of several bounded measurable functions \(\varphi \), which are called observables, along the stochastic orbits \(\{x_n\}_{n \in \mathbb {N}}\). From Definition 1, the expected value of \(\varphi \) at instant 0 is \(E(\varphi )_0= \int \varphi (x_0) \, dm (x_0) \). Besides, from the definition of the transition probabilities by equality (1), for any given state \(x \in M\) the expected value of \(\varphi (x_{n+1})\) conditioned on \(x_n = x\) is \(\int \varphi (y) \, p_{\varepsilon }(x, dy).\) So, in particular at instant 1 the expected value of \(\varphi \) is

$$E(\varphi )_1= \int \!\!\!\int \varphi (x_1)\, p_{\varepsilon }(x_0, dx_1) \, dm(x_0),$$

and its expected value at instant 2 is

$$ E(\varphi )_2= \int \! \! \!\int \! \!\! \int \varphi (x_2) \, p_{\varepsilon }(x_1, dx_2)\, p_{\varepsilon }(x_0, dx_1) \, dm(x_0).$$

Analogously, by induction on n we obtain that for all \(n \ge 1\), the expected value \(E(\varphi )_n \) of the observable \(\varphi \) is

$$\begin{aligned} E(\varphi )_n= \int \! \! \!\int \! \!\! \int ...\int \varphi (x_n) p_{\varepsilon }(x_{n-1}, dx_{n})...\, p_{\varepsilon }(x_1, dx_2)\, p_{\varepsilon }(x_0, dx_1) \, dm(x_0).\end{aligned}$$
(2)

Since the Lebesgue measure m is not necessarily stationary for the system \((M, f, P_{\varepsilon })\), the expected value of the same function \(\varphi \) at each instant n, if the initial distribution is m, may change with n.
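For illustration only, and under the same assumed circle model as in the sketch after Definition 1, formula (2) can be estimated by a plain Monte Carlo average: draw many independent noisy orbits with \(x_0\) distributed according to m and average \(\varphi (x_n)\).

```python
import math
import random

def f(x):
    return (2.0 * x) % 1.0                           # assumed zero-noise map

def noisy_step(x, eps):
    return (f(x) + random.uniform(-eps, eps)) % 1.0   # uniform noise on the circle

def expected_value(phi, n, eps, samples=100_000):
    # Monte Carlo estimate of E(phi)_n in formula (2): average phi(x_n) over
    # independent noisy orbits whose initial state x_0 is drawn from the
    # Lebesgue (here: uniform) measure on [0, 1)
    total = 0.0
    for _ in range(samples):
        x = random.random()
        for _ in range(n):
            x = noisy_step(x, eps)
        total += phi(x)
    return total / samples

if __name__ == "__main__":
    print(expected_value(lambda x: math.cos(2 * math.pi * x), n=5, eps=0.05))
```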

As said at the beginning, we assume that the experimenter only sees the values of the observable functions along finite pieces of the noisy orbits, because his experiment and his empiric observations cannot last forever. When analyzing the statistics of the observed data, he considers for instance the time average of the collected observations along those finitely elapsed pieces of randomly perturbed orbits. These time averages can be computed by the integrals of the observable functions with respect to certain probability measures, which are called empiric stochastic probabilities for finite time n (see Definition 3). Precisely, for any fixed time \(n \ge 1\) and for any initial state \(x_0 \in M\), the empiric stochastic probability \(\sigma _{\varepsilon , n, x_0}\) is defined such that the time average of the expected values of any observable \(\varphi \) at instants \(1, 2, \ldots , n\) along the noisy orbit initiating at \(x_0\), can be computed by the following equality:

$$ \frac{1}{n} \sum _{j= 1}^n E(\varphi (x_j)| {x_0}) = \int \varphi (y) d \sigma _{\varepsilon , n, x_0}(y),$$

where

$$\begin{aligned} E(\varphi (x_j)| {x_0}) = \int \!\!\! \int \ldots \int \varphi (x_j) \, p_{\varepsilon }(x_{j-1}, dx_j) \ldots p_{\varepsilon } (x_1, dx_2) p_{\varepsilon } (x_0, dx_1).\end{aligned}$$
(3)

We also assume that the experimenter only sees Lebesgue-positive sets in the phase space M. So, when analyzing the statistics of the observed data in the noisy system, he will not observe all the empiric stochastic distributions \(\sigma _{\varepsilon , n, x}\), but only those for Lebesgue-positive sets of initial states \(x \in M\). If besides he can only manage a finite set of continuous observable functions, then he will not see the exact probability distributions, but some weak\(^*\) approximations to them up to an error \(\rho >0\), in the metric space \({\mathscr {M}}\) of probability measures.

For some classes of mappings on the manifold M, even with high regularity (for instance Morse-Smale \(C^{\infty }\) diffeomorphisms with two or more hyperbolic sinks), one single measure \(\mu \) is not enough to approximate the empiric stochastic probabilities of the noisy orbits for Lebesgue-a.e. \(x \in M\). The experimenter may need a set \({\mathscr {K}} \) composed of several probability measures instead of a single measure. Motivated by this phenomenon, we define the empiric stochastic stability of a weak\(^*\)-compact set \({\mathscr {K}}\) of f-invariant probability measures (see Definition 8). This concept is similar to the empiric stochastic stability of a single measure, with two main changes: first, it substitutes the measure \(\mu \) by a weak\(^*\)-compact set \(\mathscr {K}\) of probabilities; and second, it requires \(\mathscr {K}\) to be minimal with the property of empiric stochastic stability, when restricting the stochastic system to a fixed Lebesgue-positive set of noisy orbits. In particular, a globally empirically stochastically stable set \({\mathscr {K}}\) of invariant measures minimally approximates the statistics of Lebesgue-a.e. noisy orbits. We will prove that it exists and is unique.

1.2 Main Results

A classical concept in the ergodic theory of zero-noise dynamical systems is that of physical measures [14]. In brief, a physical measure is an f-invariant measure \(\mu \) whose basin of statistical attraction has positive Lebesgue measure. This basin is composed of the zero-noise orbits such that the time average probability up to time n converges to \(\mu \) in the weak\(^*\)-topology as \(n \rightarrow + \infty \) (see Definitions 11 and 12).

One of the main purposes of this paper is to answer the following question:

Question 1. Is there some relation between the empirically stochastically stable measures and the physical measures? If yes, how are they related?

We will give an answer to this question in Theorem 1 and Corollary 1 (see Sect. 2.1 for their precise statements). In particular, we will prove the following result:

Theorem. An f-invariant measure is empirically stochastically stable if and only if it is physical.

A generalization of physical measures is the concept of pseudo-physical probability measures, which are sometimes also called SRB-like measures [10,11,12]. They are defined such that, for all \(\rho >0\), their weak\(^*\) \(\rho \)-neighborhood has a (weak) basin of statistical attraction with positive Lebesgue measure (see Definitions 11 and 12).

To study this more general scenario of pseudo-physics, our second main purpose is to answer the following question:

Question 2. Do empirically stochastically stable sets of measures relate with pseudo-physical measures? If yes, how do they relate?

We will give an answer to this question in Theorem 2 and its corollaries, whose precise statements are in Sect. 2.1. In particular, we will prove the following result:

Theorem. A weak\(^*\)-compact set of invariant probability measures is empirically stochastically stable only if all its measures are pseudo-physical. Conversely, any pseudo-physical measure belongs to the unique globally empirically stochastically stable set of measures.

2 Definitions and Statements

We denote by \({\mathscr {M}}\) the space of Borel probability measures on the manifold M, endowed with the weak\(^*\)-topology; and by \({\mathscr {M}}_f\) the subspace of f-invariant probabilities, where (M, f) is the zero-noise dynamical system. Since the weak\(^*\) topology in \(\mathscr {M}\) is metrizable, we can choose and fix a metric \(\text{ dist }^*\) that endows that topology.

To make formula (2) and other computations concise, it is convenient to introduce the following definition:

Definition 2

(The transfer operators \({\mathscr {L}}_{\varepsilon }\) and \({\mathscr {L}}^*_{\varepsilon }\)). Denote by \(C^0(M, \mathbb {C})\) the space of complex continuous functions defined in M. For the stochastic system \((M, f, P_{\varepsilon })\), we define the transfer operator \({\mathscr {L}}_{\varepsilon }: C^0(M, \mathbb {C}) \mapsto C^0(M, \mathbb {C})\) as follows:

$$\begin{aligned} ({\mathscr {L}}_{\varepsilon } \varphi ) (x) := \int \varphi (y) \, p_{\varepsilon }(x, dy) \ \ \forall \ x \in M, \ \ \forall \ \varphi \in C^0(M, \mathbb {C}).\end{aligned}$$
(4)

From equality (1) it is easy to prove that \(p_{\varepsilon }(x, \cdot )\) depends continuously on \(x \in M\) in the weak\(^*\) topology. So, \({\mathscr {L}}_{\varepsilon } \varphi \) is a continuous function for any \(\varphi \in C^0(M, \mathbb {C})\).

By the Riesz representation theorem, for any measure \(\mu \in {\mathscr {M}}\) there exists a unique measure, which we denote by \( {\mathscr {L}}^*_{\varepsilon } \mu \), such that

$$\begin{aligned} \int \varphi d({\mathscr {L}}_{\varepsilon }^* \mu ) := \int ({\mathscr {L}}_{\varepsilon } \varphi )\, d \mu \ \ \ \forall \ \varphi \in C^0(M, \mathbb {C}). \end{aligned}$$
(5)

We call \({\mathscr {L}}_{\varepsilon }^*:{\mathscr {M}} \mapsto {\mathscr {M}}\) the dual transfer operator or also, the transfer operator in the space of measures.
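As a concrete illustration of equality (4) (again under assumptions that are not part of the paper: the circle \(S^1 \simeq [0,1)\), the doubling map as f, and the uniform kernel (1)), \(({\mathscr {L}}_{\varepsilon } \varphi )(x)\) is just the average of \(\varphi \) over the arc of radius \(\varepsilon \) centered at f(x); the following sketch approximates it by a midpoint rule.

```python
def transfer(phi, x, eps, grid=1000):
    # midpoint-rule approximation of (L_eps phi)(x) = ∫ phi(y) p_eps(x, dy)
    # for the uniform noise kernel (1) on the circle: the average of phi
    # over the arc of radius eps centered at f(x)
    fx = (2.0 * x) % 1.0   # assumed zero-noise map f(x) = 2x (mod 1)
    total = 0.0
    for k in range(grid):
        y = (fx - eps + 2.0 * eps * (k + 0.5) / grid) % 1.0
        total += phi(y)
    return total / grid
```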

From the above definition, we obtain the following property for any observable function \(\varphi \in C^0(M, \mathbb {C}) \): its expected value at the instant n along the stochastic orbits with noise level \(\varepsilon \) is

$$E(\varphi )_n = \int ({{\mathscr {L}}_{\varepsilon }}^n \varphi ) \, d m = \int \varphi \, d({{\mathscr {L}}^*_{\varepsilon }}^n m). $$

We are not only interested in the expected values of the observables \(\varphi \), but also in the statistics (i.e. time averages of the observables) along the individual noisy orbits. With such a purpose, we first consider the following equality:

$$\begin{aligned} ({{\mathscr {L}}_{\varepsilon }} ^n \varphi ) (x) = \int \varphi \, d({{\mathscr {L}}^*_{\varepsilon }}^n \delta _x) \ \ \ \forall \ x \in M, \end{aligned}$$
(6)

where \(\delta _x\) denotes the Dirac probability measure supported on \(\{x\}\). Second, we introduce the following concept of empiric probabilities for the stochastic system:

Definition 3

(Empiric stochastic probabilities) For any fixed instant \(n \ge 1\), and for any initial state \(x \in M\), we define the empiric stochastic probability \(\sigma _{\varepsilon , n,x} \) of the noisy orbit with noise-level \(\varepsilon >0\), with initial state x, and up to time n, as follows:

$$\begin{aligned} \sigma _{\varepsilon , n,x} := \frac{1}{n} \sum _{j= 1}^{n} {{\mathscr {L}}_{\varepsilon }^*}^j \delta _x. \end{aligned}$$
(7)

Note that the empiric stochastic probabilities for Lebesgue-almost every \(x \in M\) allow the computation of the time averages of the observable \(\varphi \) along the noisy orbits. Precisely,

$$\begin{aligned} \frac{1}{n} \sum _{j= 1}^{n} ({\mathscr {L}}_{\varepsilon }^j \varphi ) (x) = \int \varphi (y) \, d \sigma _{\varepsilon , n,x}(y) \ \ \ \forall \ \varphi \in C^0(M, \mathbb {C}). \end{aligned}$$
(8)
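Continuing the illustrative circle example (with the same assumed map and noise model as in the previous sketches), the right-hand side of (8) can be estimated by Monte Carlo: run many independent noisy orbits starting at the same state x and average \(\varphi \) over the instants \(1, \ldots , n\).

```python
import random

def f(x):
    return (2.0 * x) % 1.0                           # assumed zero-noise map

def noisy_step(x, eps):
    return (f(x) + random.uniform(-eps, eps)) % 1.0   # uniform noise on the circle

def empiric_average(phi, x, n, eps, orbits=10_000):
    # Monte Carlo estimate of ∫ phi dσ_{eps,n,x} in formula (8): the time
    # average of phi over the instants 1, ..., n, further averaged over
    # independent noisy orbits that all start at the same state x
    total = 0.0
    for _ in range(orbits):
        y = x
        for _ in range(n):
            y = noisy_step(y, eps)
            total += phi(y)
    return total / (orbits * n)
```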

Definition 4

(Empiric stochastic stability of a measure) We call a probability measure \(\mu \in {\mathscr {M}}_f\) empirically stochastically stable if there exists a measurable set \(\widehat{A} \subset M\) with positive Lebesgue measure such that:

For all \(\rho > 0\) and for all \(n \in \mathbb {N}^+\) large enough there exists \(\varepsilon _0>0\) (which may depend on \(\rho \) and on n but not on x) satisfying

$$\text{ dist }^*(\sigma _{\varepsilon , n, x}, \ \mu )< \rho \ \ \ \ \forall \ 0 <\varepsilon \le \varepsilon _0 , \text{ for } \text{ Lebesgue } \text{ a.e. } x \in \widehat{A}.$$

Definition 5

(Basin of empiric stochastic stability of a measure) For any probability measure \(\mu \), we construct the following (maybe empty) set in the ambient manifold M:

$$\begin{aligned} \widehat{A}_{\mu }&:=\Big \{x \in M :\ \ \forall \rho>0 \ \exists \ N= N(\rho ) \text { such that } \forall \ n \ge N \ \exists \ \varepsilon _0 = \varepsilon _0(\rho , n) >0 \text { satisfying } \nonumber \\&\qquad \qquad \quad \qquad \text {dist}^*(\sigma _{\varepsilon , n, x}, \ \mu )< \rho \ \ \ \ \forall \ 0 <\varepsilon \le \varepsilon _0 \Big \}.\end{aligned}$$
(9)

We call the set \(\widehat{A}_{\mu } \subset M\) the basin of empiric stochastic stability of \(\mu \). Note that it is defined for any probability measure \(\mu \in {\mathscr {M}}\), but it may be empty; and even if nonempty, it may have zero Lebesgue measure when \(\mu \) is not empirically stochastically stable.

The set \(\widehat{A}_{\mu }\) is measurable (see Lemma 2). According to Definition 4, a probability measure \(\mu \) is empirically stochastically stable if and only if the set \(\widehat{A}_{\mu }\) has positive Lebesgue measure (see Lemma 3).

Definition 6

(Global empiric stochastic stability of a measure) We say that \(\mu \in {\mathscr {M}}_f\) is globally empirically stochastically stable if it is empirically stochastically stable, and besides its basin \(\widehat{A}_{\mu }\) of empiric stability has full Lebesgue measure.

Definition 7

(Basin of empiric stochastic stability of a set of measures) For any nonempty weak\(^*\)-compact set \(\mathscr {K} \subset {\mathscr {M}}\), we construct the following (maybe empty) set in the ambient manifold M:

$$\begin{aligned} \widehat{A}_{\mathscr {K}}&:= \{x \in M :\ \ \forall \rho>0 \ \exists \ N= N (\rho ) \text { such that } \forall \ n \ge N \ \exists \ \varepsilon _0 = \varepsilon _0(\rho , n) >0 \text { satisfying } \nonumber \\&\qquad \quad \qquad \qquad \text {dist}^*(\sigma _{\varepsilon , n, x}, \ \mathscr {K})< \rho \ \ \ \forall \ 0 <\varepsilon \le \varepsilon _0 \}.\end{aligned}$$
(10)

We call \(\widehat{A}_{\mathscr {K}} \subset M\) the basin of empiric stochastic stability of \(\mathscr {K}\).

Note that \(\widehat{A}_{\mathscr {K}} \) is defined for any nonempty weak\(^*\)-compact set \({\mathscr {K}} \subset {\mathscr {M}}\). But it may be empty, or even if nonempty, it may have zero Lebesgue measure when \({\mathscr {K}}\) is not empirically stochastically stable, according to the following definition:

Definition 8

(Empiric stochastic stability of a set of measures) We call a nonempty weak\(^*\)-compact set \({\mathscr {K}} \subset {\mathscr {M}}_f\) of f-invariant probability measures empirically stochastically stable if:

  1. (a)

    There exists a measurable set \(\widehat{A} \subset M\) with positive Lebesgue measure, such that: For all \(\rho > 0\) and for all \(n \in \mathbb {N}^+\) large enough, there exists \(\varepsilon _0>0\) (which may depend on \(\rho \) and n, but not on x), satisfying:

    $$\text{ dist }^*(\sigma _{\varepsilon , n, x}, \ {\mathscr {K}})< \rho \ \ \ \ \forall \ 0 <\varepsilon \le \varepsilon _0, \ \ \ \forall \ x \in \widehat{A}.$$
  2. (b)

    \({\mathscr {K}}\) is minimal in the following sense: if \({\mathscr {K}}' \subset {\mathscr {M}}_f\) is nonempty and weak\(^*\)-compact, and if \({\widehat{A}}_{\mathscr {K}} \subset {\widehat{A}}_{{\mathscr {K}}'}\) Lebesgue-a.e., then \({\mathscr {K} } \subset {\mathscr {K}}'\).

By definition, if \(\mathscr {K}\) is empirically stochastically stable, then the set \(\widehat{A} \subset M\) satisfying condition (a) has positive Lebesgue measure and is contained in \(\widehat{A}_{\mathscr {K}}\). Since \(\widehat{A}_{\mathscr {K}}\) is measurable (see Lemma 4), we conclude that it has positive Lebesgue measure.

Nevertheless, for a nonempty weak\(^*\)-compact set \({\mathscr {K}}\) to be empirically stochastically stable, it is not enough that \(\widehat{A}_{\mathscr {K}} \) have positive Lebesgue measure. In fact, to prevent the whole set \({\mathscr {M}}_f\) of f-invariant measures from always being an empirically stochastically stable set, we ask \({\mathscr {K}}\) to satisfy condition (b). In brief, we require a property of minimality of \({\mathscr {K}}\) with respect to Lebesgue-a.e. point of its basin \(\widehat{A}_{\mathscr {K}}\) of empiric stochastic stability.

Definition 9

(Global empiric stochastic stability of a set of measures) We say that a nonempty weak\(^*\)-compact set \({\mathscr {K}} \subset {\mathscr {M}}_f\) is globally empirically stochastically stable if it is empirically stochastically stable, and besides its basin \(\widehat{A}_{\mathscr {K}}\) of empiric stability has full Lebesgue measure.

We recall the following definitions from [11]:

Definition 10

(Empiric zero-noise probabilities and \(p\omega \)-limit sets) For any fixed natural number \(n \ge 1\), the empiric probability \(\sigma _{n, x} \) of the orbit with initial state \(x \in M\) and up to time n of the zero-noise system (M, f), is defined by the following equality:

$$\sigma _{ n, x} := \frac{1}{n} \sum _{j= 1}^n \delta _{f^j(x)}.$$

It is standard to check, from the construction of the empiric stochastic probabilities in Definition 3, that \(\sigma _{\varepsilon , n, x}\) is absolutely continuous with respect to the Lebesgue measure m. In contrast, the empiric probability \(\sigma _{n,x}\) for the zero-noise orbits is atomic, since it is supported on a finite number of points.

The p-omega limit set \(p\omega _x\) in the space \({\mathscr {M}}\) of probability measures, corresponding to the orbit of \(x \in M\), is defined by:

$$p\omega _x := \{\mu \in {\mathscr {M}}: \ \exists \ n_i \rightarrow + \infty \text{ such } \text{ that } {\lim }^*_{i \rightarrow + \infty } \sigma _{n_i, x} = \mu \}, $$

where \({\lim }^*\) is taken in the weak\(^*\)-topology of \({\mathscr {M}}\). It is standard to check that \(p \omega _x \subset {\mathscr {M}}_f\) for all \(x \in M\).
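For instance (an elementary example added here only for illustration): if \(p \in M\) is a fixed point of f, then \(f^j(p) = p\) for every \(j \ge 1\), so \(\sigma _{n, p} = \delta _p\) for every n, and therefore \(p\omega _p = \{\delta _p\} \subset {\mathscr {M}}_f\).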

Definition 11

(Strong and \(\rho \)-weak basin of statistical attraction) For any f-invariant probability measure \(\mu \in {\mathscr {M}}_f\), the (strong) basin of statistical attraction of \(\mu \) is the (maybe empty) set

$$\begin{aligned} A_{\mu } := \big \{x \in M :\ \ p\omega _x = \{\mu \} \big \}.\end{aligned}$$
(11)

For any f-invariant probability measure \(\mu \in {\mathscr {M}}_f\), and for any \(\rho >0\), the \(\rho \)-weak basin of statistical attraction of \(\mu \) is the (maybe empty) set

$$A^{\rho }_{\mu } := \big \{x \in M :\ \ \text{ dist }^*(p\omega _x , \{\mu \}) < \rho \big \}.$$

Definition 12

(Physical and pseudo-physical measures) For the zero-noise dynamical system (M, f), an f-invariant probability measure \(\mu \) is physical if its strong basin of statistical attraction \(A_{\mu }\) has positive Lebesgue measure.

An f-invariant probability measure \(\mu \) is pseudo-physical if for all \(\rho >0\), its \(\rho \)-weak basin of statistical attraction \(A^{\rho }_{\mu }\) has positive Lebesgue measure.

It is standard to check that, even if the \(\rho \)-weak basin of statistical attraction \(A^{\rho }_{\mu }\) depends on the chosen weak\(^*\)-metric in the space \({\mathscr {M}}\) of probabilities, the set of pseudo-physical measures remains the same when changing this metric (provided that the new metric also induces the weak\(^*\)-topology).

Note that the strong basin of statistical attraction of any measure is always contained in the \(\rho \)-weak basin of the same measure. Hence, any physical measure (if there exists some) is pseudo-physical. But not all the pseudo-physical measures are necessarily physical (see for instance example 5 of [10]).

We remark that we do not require the ergodicity of \(\mu \) for it to be physical or pseudo-physical. In fact, in [17] it is proved that the \(C^{\infty }\) diffeomorphism popularly known as the Bowen Eye exhibits a segment of pseudo-physical measures whose extremes, and so all the measures in the segment, are non-ergodic. Also, for some \(C^0\)-version of the Bowen Eye (see example 5 B of [10]) there is a unique pseudo-physical measure; it is physical and non-ergodic.

2.1 Statement of the Results

Theorem 1

(Characterization of empirically stochastically stable measures) Let \(f: M \mapsto M\) be a continuous map on a compact Riemannian manifold M. Let \(\mu \) be an f-invariant probability measure. Then, \(\mu \) is empirically stochastically stable if and only if it is physical.

Besides, if \(\mu \) is physical, then its basin \(\widehat{A}_{\mu } \subset M \) of empiric stochastic stability equals Lebesgue-a.e. its strong basin \(A_{\mu } \subset M\) of statistical attraction.

We will prove Theorem 1 and the following corollaries in Sect. 3.

Corollary 1

Let \(f: M \mapsto M\) be a continuous map on a compact Riemannian manifold M. Then, the following conditions are equivalent:

  1. (i)

    There exists an f-invariant probability measure \(\mu _1\) that is globally empirically stochastically stable.

  2. (ii)

    There exists an f-invariant probability measure \(\mu _2\) that is physical and such that its strong basin of statistical attraction has full Lebesgue measure.

  3. (iii)

    There exists a unique f-invariant probability measure \(\mu _3\) that is pseudo-physical.

Besides, if (i), (ii) or (iii) holds, then \(\mu _1= \mu _2 = \mu _3\), this measure is the unique empirically stochastically stable measure, and the set \(\{\mu _1\}\) is the unique weak\(^*\)-compact set in the space of probability measures that is empirically stochastically stable.

Before stating the next corollary, we fix the following definition: we say that a property of the maps on M is \(C^1\)-generic if it holds for a countable intersection of open and dense sets of maps in the \(C^1\)- topology.

Corollary 2

For \(C^1\)-generic expanding maps of the circle, and for all \(C^2\)-expanding maps of the circle, there exists a unique ergodic measure \(\mu \) that is empirically stochastically stable. Besides, \(\mu \) is globally empirically stochastically stable and it is the unique measure that satisfies the following Pesin Entropy Formula [23, 24]:

$$\begin{aligned} h_{\mu }(f) = \int \log |f'| \, d \mu . \end{aligned}$$
(12)
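For example (an illustration added to this text): for the \(C^{\infty }\) expanding map \(f(x) = 2x \ (\text{mod } 1)\) of the circle, the Lebesgue measure m is invariant and ergodic, \(|f'| \equiv 2\), and

$$h_{m}(f) = \log 2 = \int \log |f'| \, dm,$$

so m satisfies formula (12); by Corollary 2 it is then the globally empirically stochastically stable measure for this map.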

Theorem 1 is a particular case of the following result.

Theorem 2

(Empirically stochastically stable sets and pseudo-physics)

Let \(f: M \mapsto M\) be a continuous map on a compact Riemannian manifold M.

(a) If \({\mathscr {K}}\) is a nonempty weak\(^*\)-compact set of f-invariant measures that is empirically stochastically stable, then any \(\mu \in {\mathscr {K}}\) is pseudo-physical.

(b) A set \({\mathscr {K}}\) of f-invariant measures is globally empirically stochastically stable if and only if it coincides with the set of all the pseudo-physical measures.

We will prove Theorem 2 and the following corollaries in Sect. 4.

Corollary 3

For any continuous map \(f: M \mapsto M\) on a compact Riemannian manifold M, there exists a unique nonempty weak\(^*\)-compact set \({\mathscr {K}}\) of f-invariant measures that is globally empirically stochastically stable. Besides, \(\mu \in {\mathscr {K}}\) if and only if \(\mu \) is pseudo-physical.

Corollary 4

If a pseudo-physical measure \(\mu \) is isolated in the set of pseudo-physical measures, then it is empirically stochastically stable; hence physical.

Corollary 5

Let \(f: M \mapsto M\) be a continuous map on a compact Riemannian manifold M. Then, the following conditions are equivalent:

  1. (i)

    The set of pseudo-physical measures is finite.

  2. (ii)

There exists a finite number of (individually) empirically stochastically stable measures, hence physical measures, and the union of their strong basins of statistical attraction covers Lebesgue-a.e. point of M.

Corollary 6

If the set of pseudo-physical measures is countable, then there exist countably many empirically stochastically stable measures, hence physical, and the union of their strong basins of statistical attraction covers Lebesgue-a.e. point of M.

Corollary 7

For all \(C^1\)-expanding maps of the circle, all the measures of any empirically stochastically stable set \({\mathscr {K}}\) satisfy Pesin Entropy Formula (12).

Corollary 8

For \(C^0\)-generic maps of the interval, the globally empirically stochastically stable set \({\mathscr {K}}\) of invariant measures includes all the ergodic measures but is meager in the whole space of invariant measures.

3 Proof of Theorem 1 and its Corollaries

We decompose the proof of Theorem 1 into several lemmas:

Lemma 1

For \(\varepsilon >0\) small enough:

(a) The transformation \(x \in M \mapsto p_{\varepsilon } (x, \cdot ) \in {\mathscr {M}}\) is continuous.

(b) The transfer operator \({\mathscr {L}}_{\varepsilon }^*: {\mathscr {M}} \mapsto {\mathscr {M}}\) is continuous.

(c) The transformation \(x \in M \mapsto \sigma _{\varepsilon , n,x} \in {\mathscr {M}}\) is continuous.

(d) \({\lim }^*_{\varepsilon \rightarrow 0^+} p_{\varepsilon }(x, \cdot ) = \delta _{f(x)}\) uniformly on M.

(e) \({\lim }^*_{\varepsilon \rightarrow 0^+} {{\mathscr {L}}^*_{\varepsilon }}^n \delta _x = \delta _{f^n(x)} \) uniformly on M.

(f) \({\lim }^*_{\varepsilon \rightarrow 0^+} \sigma _{\varepsilon , n, x} = \sigma _{n, x} \) uniformly on M.

Proof

(a): It is immediate from the construction of the probability measure \(p_{\varepsilon }(x, \cdot )\) by equality (1), and taking into account that the Lebesgue measure restricted to a ball of radius \(\varepsilon \) depends continuously on the center of the ball.

(b): Take a convergent sequence \(\{\mu _i\}_{i \in \mathbb {N}} \subset {\mathscr {M}}\) and denote \(\mu = \lim _i^* \mu _i\). For any continuous function \(\varphi : M \mapsto \mathbb {C}\), we have

$$\begin{aligned} \int \varphi d {\mathscr {L}}_{\varepsilon }^* \mu _i = \int {\mathscr {L}}_{\varepsilon } \varphi \, d \mu _i . \end{aligned}$$
(13)

Since \((\mathscr {L}_{\varepsilon } \varphi ) (x) = \int \varphi (y) p_{\varepsilon }(x, dy)\) and \(p_{\varepsilon }(x, \cdot )\) depends continuously on x, we deduce that \({\mathscr {L}}_{\varepsilon } \varphi \) is a continuous function. So, from (13) and the definition of the weak\(^*\) topology in \({\mathscr {M}}\), we obtain:

$$ \lim _{i \rightarrow + \infty } \int \varphi d {\mathscr {L}}_{\varepsilon }^* \mu _i = \lim _{i \rightarrow + \infty }\int {\mathscr {L}}_{\varepsilon } \varphi \, d \mu _i = \int {\mathscr {L}}_{\varepsilon } \varphi \, d \mu = \int \varphi d {\mathscr {L}}_{\varepsilon }^* \mu . $$

We conclude that \(\lim _i^*{\mathscr {L}}_{\varepsilon }^* \mu _i = {\mathscr {L}}_{\varepsilon }^* \mu , \) hence \({\mathscr {L}}^*_{\varepsilon }\) is a continuous operator on \({\mathscr {M}}\).

(c): Since the composition of continuous operators is continuous, we have that \({{\mathscr {L}}_{\varepsilon }^*}^j: {\mathscr {M}} \mapsto {\mathscr {M}}\) is continuous for each fixed \(j \in {\mathbb {N}^+}\). Besides, it is immediate to check that the transformation \(x \in M \mapsto \delta _x \in {\mathscr {M}}\) is continuous. Thus, also the transformation \(x \in M \mapsto {{\mathscr {L}}_{\varepsilon }^*}^j \delta _x \in {\mathscr {M}} \) is continuous. We conclude that, for fixed \(\varepsilon >0\) and fixed \(n \in \mathbb {N}^+\), the transformation

$$x \in M \ \mapsto \ \sigma _{\varepsilon , n, x} = \frac{1}{n} \sum _{j= 1}^n {{\mathscr {L}}_{\varepsilon }^*}^j \delta _x \in {\mathscr {M}}$$

is continuous.

(d): For any given \(\rho >0\) we shall find \(\varepsilon _0 >0\) (independent of \(x \in M\)) such that \(\text{ dist }^*(p_{\varepsilon } (x, \cdot ), \ \delta _{f(x)}) <\rho \) for all \(0<\varepsilon <\varepsilon _0\) and for all \(x \in M\). For any metric \(\text{ dist }^*\) that endows the weak\(^*\) topology in \({\mathscr {M}}\), the inequality \(\text{ dist }^*(p_{\varepsilon } (x, \cdot ), \ \delta _{f(x)}) <\rho \) holds if and only if, for a finite number (which depends on \(\rho \) and on the metric) of continuous functions \(\varphi : M \mapsto \mathbb {C}\), the difference \(|\int \varphi (y) \, p_{\varepsilon }(x, dy) - \varphi (f(x)) |\) is smaller than a certain \(\varepsilon '>0\) (which depends on \(\rho \) and on the metric). Let us fix such a continuous function \(\varphi \). Since M is compact, \(\varphi \) is uniformly continuous on M. Thus, for any \(\varepsilon ' >0\) there exists \(\varepsilon _0\) such that, if \(\text{ dist }(y_1, y_2) <\varepsilon \le \varepsilon _0\), then \(|\varphi (y_1) - \varphi (y_2) | < \varepsilon '.\) Since \(p_{\varepsilon }(x, \cdot )\) is supported on the ball \(B_{\varepsilon }(f(x))\), we deduce:

$$\Big |\int \varphi (y) p_{\varepsilon }(x, dy) - \varphi (f(x))\Big | \le \int \big | \varphi (y) - \varphi (f(x))\big | \, p_{\varepsilon }(x, dy) \le \varepsilon ',$$

because \(\text{ dist } (y, f(x)) < \varepsilon \le \varepsilon _0\) for \(p_{\varepsilon }(x, \cdot )\)- a.e. \(y \in M\).

Since \(\varepsilon _0\) does not depend on x, we have proved that \({\lim }^*_{\varepsilon \rightarrow 0^+} p_{\varepsilon }(x, \cdot ) = \delta _{f(x)}\) uniformly for all \(x \in M\).

(e): Let us prove that \(\lim _{\varepsilon \rightarrow 0^+} {{\mathscr {L}}_{\varepsilon }^*}^n \delta _x = \delta _{f^n(x)}\) uniformly on \(x \in M\). By induction on \(n \in \mathbb {N}^+\):

If \(n= 1\), for any continuous function \(\varphi : M \mapsto {\mathbb {C}}\) we compute the following integral

$$\int \varphi \, d{{\mathscr {L}}_{\varepsilon }^*} \delta _x = \int ({{\mathscr {L}}_{\varepsilon } }\varphi ) \, d \delta _x = ({{\mathscr {L}}_{\varepsilon } }\varphi ) (x) = \int \varphi (y) \, p_{\varepsilon }(x, dy). $$

From the uniqueness of the probability measure in the Riesz Representation Theorem, we obtain \({{\mathscr {L}}_{\varepsilon }^*} \delta _x = p_{\varepsilon }(x, \cdot )\). Applying part (d), we conclude

$${\lim }^*_{\varepsilon \rightarrow 0^+}{{\mathscr {L}}_{\varepsilon }^*} \delta _x = {\lim }^*_{\varepsilon \rightarrow 0^+}p_{\varepsilon }(x, \cdot ) = \delta _{f(x)}, \text{ uniformly } \text{ on } x \in M.$$

Now, assume that, for some \(n \in \mathbb {N}^+\), the following assertion holds:

$$\begin{aligned} {\lim }^*_{\varepsilon \rightarrow 0^+}{{\mathscr {L}}_{\varepsilon }^*}^n \delta _x = \delta _{f^n(x)}, \text{ uniformly } \text{ on } x \in M. \end{aligned}$$
(14)

Let us prove the same assertion for \(n+1\) instead of n: Fix a continuous function \(\varphi : M \mapsto \mathbb {C}\). As proved in part (d), for any \(\varepsilon ' >0\), there exists \(\varepsilon _0 >0\) (independent of \(x \in M\)) such that

$$|({\mathscr {L}}_{\varepsilon } \varphi ) (x) - \varphi (f(x))| = |\int \varphi (y) p_{\varepsilon }(x, dy) - \varphi (f(x)) |< \frac{\varepsilon '}{2} \ \ \ \forall \ 0 <\varepsilon \le \varepsilon _0, \ \ \forall \ x \in M.$$

Thus

$$\Big | \int \varphi \, d {{\mathscr {L}}_{\varepsilon }^*}^{n+1} \delta _x - \int (\varphi \circ f) \, d {{\mathscr {L}}_{\varepsilon }^*}^{n } \delta _x \Big | = \Big | \int ({\mathscr {L}}_{\varepsilon } \varphi ) \, d {{\mathscr {L}}_{\varepsilon }^*}^{n} \delta _x - \int (\varphi \circ f) \, d {{\mathscr {L}}_{\varepsilon }^*}^{n } \delta _x \Big |$$
$$\begin{aligned} \le \int \big |({\mathscr {L}}_{\varepsilon } \varphi ) -\varphi \circ f \big | \, d {{\mathscr {L}}_{\varepsilon }^*}^{n } \delta _x <\frac{\varepsilon '}{2} \ \ \ \forall \ 0 <\varepsilon \le \varepsilon _0, \ \ \forall \ x \in M. \end{aligned}$$
(15)

Besides, the induction assumption (14) implies that, if \(\varepsilon _0\) is chosen small enough, then for the continuous function \(\varphi \circ f\) the following inequality holds:

$$ \Big | \int (\varphi \circ f ) \, d {{\mathscr {L}}_{\varepsilon }^*}^{n } \delta _x - \varphi ( f^{n+1}(x) ) \Big | = $$
$$\begin{aligned} = \Big | \int (\varphi \circ f ) \, d {{\mathscr {L}}_{\varepsilon }^*}^{n } \delta _x - \int (\varphi \circ f ) \, d \delta _{f^n(x)} \Big |<\frac{\varepsilon '}{2} \ \ \ \forall \ 0 <\varepsilon \le \varepsilon _0, \ \ \forall \ x \in M. \end{aligned}$$
(16)

Joining inequalities (15) and (16) we deduce that for all \(\varepsilon ' >0\), there exists \(\varepsilon _0>0\) (independent of x) such that

$$\Big | \int \varphi \, d {{\mathscr {L}}_{\varepsilon }^*}^{n+1} \delta _x - \int \varphi \, d \delta _{f^{n+1}(x)} \Big |< \varepsilon ' \ \ \ \ \forall \ 0 <\varepsilon \le \varepsilon _0, \ \ \forall \ x \in M. $$

In other words:

$${\lim }^*_{\varepsilon \rightarrow 0^+} {{\mathscr {L}}_{\varepsilon }^*}^{n+1} \delta _x = \delta _{f^{n+1}(x)} \ \ \text{ uniformly } \text{ on } x \in M,$$

ending the proof of part (e).

(f): Since \(\sigma _{\varepsilon , n, x} = \frac{1}{n} \sum _{j=1}^{n} {{\mathscr {L}}^*_{\varepsilon }}^j \delta _{x}\), applying part (e) to each probability measure \({{\mathscr {L}}^*_{\varepsilon }}^j \delta _{x}\), we deduce that

$${\lim }^*_{\varepsilon \rightarrow 0^+} \sigma _{\varepsilon , n, x} = \frac{1}{n} \sum _{j= 1}^n\delta _{f^{j}(x)} = \sigma _{n,x} \ \ \text{ uniformly } \text{ on } x \in M,$$

ending the proof of Lemma 1. \(\square \)

Lemma 2

For any probability measure \(\mu \) consider the (maybe empty) basin of empiric stochastic stability \(\widehat{A}_{\mu }\) defined by equality (9), and the (maybe empty) strong basin of statistical attraction \(A_{\mu }\) defined by equality (11).

Then, \(\widehat{A}_{\mu }\) and \(A_{\mu }\) are measurable sets and coincide. Besides, they satisfy the following equality:

$$\begin{aligned} \widehat{A}_{\mu } = A_{\mu } = \bigcap _{k \in \mathbb {N}^+} \bigcup _{N \in \mathbb {N}^+} \bigcap _{n \ge N} C_{n, \ 1/k } (\mu ), \end{aligned}$$
(17)

where, for any real number \(\rho >0\) and any natural number \(n \ge 1\), the set \(C_{n, \ \rho }(\mu ) \) is defined by

$$ C_{n, \ \rho }(\mu ) := \{ x \in M :\ \ \text{ dist }^*(\sigma _{n, x}, \ \mu ) < \rho \}.$$

Proof

From equality (11), we re-write the strong basin of statistical attraction of \(\mu \) as follows:

$$\begin{aligned} A_{\mu } = \Big \{x \in M: \ \ {\lim }^*_{n \rightarrow + \infty } \sigma _{n, x} = \mu \Big \} = \bigcap _{\rho >0} \bigcup _{N \in \mathbb {N}^+ }\bigcap _{n \ge N} C_{n, \rho }(\mu ). \end{aligned}$$
(18)

From equality (9) we have:

$$\begin{aligned} \widehat{A}_{\mu } =\bigcap _{\rho >0} \bigcup _{N \in \mathbb {N}^+} \bigcap _{n \ge N} D_{n, \rho }(\mu ), \end{aligned}$$
(19)

where \(D_{n, \rho }(\mu )\) is defined by

$$D_{n, \rho }(\mu ) : = \bigcup _{\varepsilon _0>0} \bigcap _{0<\varepsilon \le \varepsilon _0} \{x \in M:\ \ \text{ dist }^*(\sigma _{\varepsilon , n, x}, \ \mu ) < \rho \}.$$

The assertion \(\text{ dist }^*(\sigma _{\varepsilon , n, x}, \ \mu ) < \rho \) for all \(0 <\varepsilon \le \varepsilon _0\) implies

$$\lim _{\varepsilon \rightarrow 0^+}\text{ dist }^*(\sigma _{\varepsilon , n, x}, \ \mu ) \le \rho < 2 \rho . $$

Thus, applying part (f) of Lemma 1, we deduce that \(\text{ dist }^* (\sigma _{n,x}, \mu ) < 2 \rho \) for all \(x \in D_{n, \rho }(\mu )\). In other words,

$$ D_{n, \rho }(\mu ) \subset C_{n, 2 \rho }(\mu ),$$

which, joint with equalities (18) and (19), implies:

$$\widehat{A}_{\mu } \subset A_{\mu }.$$

To prove the converse inclusion, we apply again part (f) of Lemma 1 to write:

$$C_{n, \rho }(\mu ) = \{x \in M: \ \ \text{ dist }^* ({\lim }^*_{\varepsilon \rightarrow 0^+} \sigma _{\varepsilon , n, x}, \mu ) < \rho \}.$$

Therefore

$$\lim _{\varepsilon \rightarrow 0^+} \text{ dist }^* (\sigma _{\varepsilon , n, x}, \mu ) < \rho \ \ \ \forall \ x \in C_{n, \rho }(\mu ).$$

Thus,

$$C_{n, \rho } (\mu ) \subset \bigcup _{\varepsilon _0 >0} \bigcap _{0< \varepsilon \le \varepsilon _0} \{ x \in M: \ \text{ dist }^* (\sigma _{\varepsilon , n, x}, \mu ) < \rho \} = D_{n, \rho }(\mu ).$$

The above inclusion, joint with equalities (18) and (19), implies

$$A_{\mu } \subset \widehat{A}_{\mu }.$$

We have proved that

$$\widehat{A}_{\mu } = A_{\mu } = \bigcap _{\rho >0} \bigcup _{N \in {\mathbb {N}^+}} \bigcap _{n \ge N} C_{n, \rho }(\mu ).$$

Since the set \(C_{n, \rho }(\mu )\) decreases when \(\rho \) decreases (with n and \(\mu \) fixed), the family

$$\Big \{ \bigcup _{N \in {\mathbb {N}^+}} \bigcap _{n \ge N} C_{n, \rho }(\mu ) \Big \}_{\rho >0},$$

whose intersection is \(A_{\mu }\), is decreasing when \(\rho \) decreases. Therefore, its intersection is equal to the intersection of its countable subfamily

$$\Big \{ \bigcup _{N \in {\mathbb {N}^+}} \bigcap _{n \ge N} C_{n, \ 1/k}(\mu ) \Big \}_{k \in \mathbb {N}^+}.$$

We have proved equality (17) of Lemma 2.

Finally, note that the set \(C_{n, \ 1/k}(\mu ) \subset M\) is open, because \(\sigma _{n,x} = (1/n) \sum _{j= 1}^n\)\(\delta _{f^j(x)}\) (with fixed n) depends continuously on x. Since equality (17) states that \(\widehat{A}_{\mu } = A_{\mu }\) is the countable intersection of a countable union of a countable intersection of open sets, we conclude that it is a measurable set, ending the proof of Lemma 2. \(\square \)

Lemma 3

A probability measure \(\mu \) is empirically stochastically stable, according to Definition 4, if and only if its basin \(\widehat{A}_{\mu }\) of empiric stability, defined by equality (9), has positive Lebesgue measure.

Proof

If \(\mu \) is empirically stochastically stable, then from Definition 4, there exists a Lebesgue-positive set \(\widehat{A} \subset M\) such that \(\widehat{A} \subset \widehat{A}_{\mu }\). Hence \(m(\widehat{A}_{\mu }) >0\).

To prove the converse assertion, assume that \(m(\widehat{A}_{\mu }) = \alpha >0.\) Let us construct a Lebesgue-positive set \(\widehat{A} \subset \widehat{A}_{\mu }\) such that for any \(\rho >0\), there exists \(N \in \mathbb {N}^+\) (uniform on \(x \in \widehat{A}\)), such that for all \(n \ge N\) there exists \(\varepsilon _0 >0\) (uniform on \(x \in \widehat{A}\)) satisfying

$$\begin{aligned} \text{ dist }^*(\sigma _{\varepsilon , n, x}, \mu )< \rho \ \ \ \forall \ 0<\varepsilon \le \varepsilon _0, \ \ \forall \ x \in \widehat{A} \ \text{(to } \text{ be } \text{ proved) }. \end{aligned}$$
(20)

Applying Lemma 2 we have

$$\widehat{A}_{\mu } = \bigcap _{k \in \mathbb {N}^+} \bigcup _{N \in \mathbb {N}^+} E_{N, 1/ k}, \ \ \text{ where } \ \ E_{N,1/k} := \bigcap _{n \ge N} C_{n, 1/k}(\mu ). $$

For fixed \(k \in \mathbb {N}^+\) we have \(E_{N+1, 1/ k}\subset E_{N, 1/k}\) for all \(N \ge 1\), and

$$\widehat{A}_{\mu } = \bigcup _{N \in \mathbb {N}^+} ( E_{N, 1/ k} \cap \widehat{A}_{\mu }). \ \ \ \text{ Then } \ \ \lim _{N \rightarrow + \infty } m(E_{N, 1/ k} \cap \widehat{A}_{\mu }) = m (\widehat{A}_{\mu }) = \alpha .$$

Therefore, for each \(k \ge 1\) there exists \(N(k) \ge 1\) such that

$$\alpha (1- 1/3^k) \le m(E_{N(k), 1/k}\cap \widehat{A}_{\mu }) \le \alpha .$$

We construct

$$\widehat{A} := \bigcap _{k \in {\mathbb {N}^+}}(E_{N(k), 1/k}\cap \widehat{A}_{\mu }). $$

We will prove that \(\widehat{A}\) has positive Lebesgue measure and that assertion (20) is satisfied uniformly for all \(x \in \widehat{A}\). First,

$$m( \widehat{A}_{\mu } \setminus \widehat{A}) = m \Big (\bigcup _{k\ge 1} ( \widehat{A}_{\mu } \setminus E_{N(k), 1/k} )\Big ) \le \sum _{k= 1}^{+ \infty } (\alpha -m( E_{N(k), 1/k} \cap \widehat{A}_{\mu })) \le \sum _{k= 1}^{+ \infty } \frac{\alpha }{3^k} = \frac{\alpha }{2}, $$

from where

$$ m( \widehat{A}) = m (\widehat{A}_{\mu } ) - m(\widehat{A}_{\mu } \setminus \widehat{A}) \ge \alpha - \frac{\alpha }{2} = \frac{\alpha }{2} >0. $$

Second, for all \(\rho >0\), there exists a natural number \(k \ge 2/\rho \), and a set \(E_{N(k), 1/k} \supset \widehat{A}\) such that

$$x \in C_{n, 1/k}(\mu ) \ \ \ \forall \ n \ge N(k), \ \ \ \ \forall \ \ \ x \in E_{N(k), 1/k}. $$

Therefore, for all \(n \ge N(k)\) (which is independent of x) we obtain:

$$\begin{aligned} \text{ dist }^*(\sigma _{n, x}, \ \mu ) < \frac{1}{k} \le \frac{\rho }{2} \ \ \ \ \forall \ x \in \widehat{A}. \end{aligned}$$
(21)

Finally, applying part (f) of Lemma 1, for each fixed \(n \ge N(k)\) there exists \(\varepsilon _0>0\) (independent of x), such that

$$\begin{aligned} \text{ dist }^*(\sigma _{\varepsilon , n, x}, \ \sigma _{n, x})< \frac{\rho }{2} \ \ \ \ \forall \ 0 < \varepsilon \le \varepsilon _0, \ \ \forall \ x \in \widehat{A}. \end{aligned}$$
(22)

Inequalities (21) and (22) end the proof of inequality (20); hence Lemma 3 is proved. \(\square \)

End of the proof of Theorem 1.

Proof

From Lemma 3, \(\mu \) is empirically stochastically stable if and only if \(m(\widehat{A}_{\mu }) >0\). From Definition 12, \(\mu \) is physical if and only if \(m(A_{\mu }) >0\). Applying Lemma 2 we have \(\widehat{A}_{\mu } = A_{\mu }\). We conclude that \(\mu \) is empirically stochastically stable if and only if \(\mu \) is physical. \(\square \)

Before proving Corollary 1, we recall the following theorem taken from [11]:

Theorem 3

Let \(f: M \mapsto M\) be a continuous map on a compact Riemannian manifold M. Then, the set \({\mathscr {O}}_f\) of pseudo-physical measures for f is nonempty and weak\(^*\)-compact, and contains \(p \omega _x\) for Lebesgue-a.e. \(x \in M\).

Moreover, \({\mathscr {O}}_f\) is the minimal nonempty weak\(^*\)-compact set of probability measures that contains \(p \omega _x\) for Lebesgue-a.e. \(x \in M\).

Proof

See [11, Theorem 1.5].

Proof of Corollary 1.

Proof

(i) implies (ii): If \(\mu _1\) is globally empirically stochastically stable, then by Definition 6 \(m(\widehat{A}_{\mu _1}) = m(M)\). Applying Theorem 1, \(\mu _1\) is physical. Besides, from Lemma 2, we know \(\widehat{A}_{\mu _1} = A_{\mu _1}\). Then \(m(A_{\mu _1}) = m (M)\). So, there exists \(\mu _2 = \mu _1\) that is physical and whose strong basin of statistical attraction has full Lebesgue measure, as wanted.

(ii) implies (iii): If \(\mu _2\) is physical and \(m(A_{\mu _2})=m(M)\), then from Definitions 10 and 11, we deduce that the set \(\{\mu _2\}\) contains \(p\omega _x\) for Lebesgue-a.e. \(x \in M\). Besides, \(\{\mu _2\}\) is nonempty and weak\(^*\)-compact. Hence, applying the last assertion of Theorem 3, we deduce that \(\{\mu _2\}\) is the whole set \({\mathscr {O}}_f\) of pseudo-physical measures for f. In other words, there exists a unique measure \(\mu _3 = \mu _2\) that is pseudo-physical, as wanted.

(iii) implies (i): If there exists a unique measure \(\mu _3\) that is pseudo-physical for f, then, applying Theorem 3, we know that the set \(\{\mu _3\}\) contains \(p\omega _x\) for Lebesgue-a.e. \(x \in M\). From Definitions 10 and 11, we deduce that the strong basin \(A_{\mu _3}\) of statistical attraction of \(\mu _3\) has full Lebesgue measure. Then, \(\mu _3\) is physical, and applying Theorem 1, \(\mu _3\) is empirically stochastically stable. Besides, from Lemma 2, we obtain that the basin \(\widehat{A}_{\mu _3}\) of empiric stochastic stability of \(\mu _3\) coincides with \(A_{\mu _3}\); hence it has full Lebesgue measure. From Definition 6 we conclude that there exists a measure \(\mu _1 = \mu _3\) that is globally empirically stochastically stable, as wanted.

We have proved that (i), (ii) and (iii) are equivalent conditions. Besides, we have proved that if these conditions hold, the three measures \(\mu _1\), \(\mu _2 \) and \(\mu _3\) coincide. This ends the proof of Corollary 1. \(\square \)

Proof of Corollary 2.

Proof

On the one hand, a classical theorem by Ruelle states that any \(C^2\) expanding map f of the circle \(S^1\) has a unique invariant measure \(\mu \) that is ergodic and absolutely continuous with respect to the Lebesgue measure. Thus, from Pesin’s Theory [26, 27], it is the unique invariant measure that satisfies Pesin Entropy Formula (12).

On the other hand, Campbell and Quas [9] have proved that \(C^1\)-generic expanding maps of the circle have a unique invariant measure \(\mu \) that satisfies Pesin Entropy Formula, but nevertheless \(\mu \) is mutually singular with the Lebesgue measure (see also [5]).

Applying the above known results, to prove this corollary we will first show that for any \(C^1\) expanding map f, if it exhibits a unique invariant measure \(\mu \) that satisfies (12), then \(\mu \) is the unique empirically stochastically stable measure. In fact, in [12] it is proved that any pseudo-physical measure of any \(C^1\) expanding map of \(S^1\) satisfies Pesin Entropy Formula (12). Hence, we deduce that, for our map f, \(\mu \) is the unique pseudo-physical measure. Besides, in [11] it is proved that if the set of pseudo-physical or SRB-like measures is finite, then all the pseudo-physical measures are physical. We deduce that our map f has a unique physical measure \(\mu \). Applying Theorem 1, \(\mu \) is the unique empirically stochastically stable measure, as wanted.

Now, to end the proof of this corollary, let us show that the measure \(\mu \) that was considered above is globally empirically stochastically stable. From Theorem 3, the set \({\mathscr {O}}_f\) of all the pseudo-physical measures is the minimal weak\(^*\)-compact set of invariant measures such that \(p\omega _x \subset {\mathscr {O}}_f\) for Lebesgue-a.e. \(x \in S^1\). But, in our case, we have \({\mathscr {O}}_f = \{\mu \}\); hence \(p\omega _x = \{\mu \}\) for Lebesgue-a.e. \(x \in S^1\). Applying Definition 11, we conclude that the strong basin of statistical attraction \(A_{\mu }\) has full Lebesgue measure; and so, by Theorem 1, the basin \(\widehat{A}_{\mu }\) of empiric stochastic stability of \(\mu \) covers Lebesgue-a.e. the space; hence \(\mu \) is globally empirically stochastically stable. \(\square \)

4 Proof of Theorem 2 and its Corollaries

For any nonempty weak\(^*\)-compact set \({\mathscr {K}} \) of f-invariant measures, recall Definition 7 of the (maybe empty) basin \(\widehat{A}_{\mathscr {K}} \subset M\) of empiric stochastic stability of \({\mathscr {K}}\) constructed by equality (10).

Similarly to Definition 11, in which the strong basin \(A_{\mu }\) of statistical attraction of a single measure \(\mu \) is constructed, we define now the (maybe empty) strong basin of statistical attraction \(A_{\mathscr {K}} \subset M\) of the set \({\mathscr {K}} \subset {\mathscr {M}}\), as follows:

$$\begin{aligned} A_{\mathscr {K}} := \{x \in M, \ \ p \omega _x \subset {\mathscr {K}}\}, \end{aligned}$$
(23)

where \(p \omega _x\) is the p-omega limit set (limit set in the space \({\mathscr {M}}\) of probabilities) for the empiric probabilities along the orbit with initial state in \(x \in M\) (recall Definition 10).

We will prove the following property of the basins \(\widehat{A}_{\mathscr {K}} \) and \(A_{\mathscr {K}}\):

Lemma 4

For any nonempty weak\(^*\)-compact set \({\mathscr {K}}\) in the space \({\mathscr {M}}\) of probability measures, the basins \(\widehat{A}_{\mathscr {K}} \subset M\) and \(A_{\mathscr {K}} \subset M\), defined by equalities (10) and (23) respectively, are measurable sets and coincide. Moreover

$$\widehat{A}_{\mathscr {K}} = A_{\mathscr {K}} = \bigcap _{k \in {\mathbb {N}^+}} \bigcup _{N \in {\mathbb {N}^+}} \bigcap _{n \ge N} C_{n, 1/k}({\mathscr {K}}), $$

where, for all \(\rho >0\) the set \(C_{n, \rho }({\mathscr {K}}) \subset M\) is defined by

$$C_{n, \rho }({\mathscr {K}}) = \{x \in M :\ \ \text{ dist }^*(\sigma _{n, x}, \ {\mathscr {K}}) < \rho \}.$$

Proof

Repeat the proof of Lemma 2, with the set \({\mathscr {K}}\) instead of the single measure \(\mu \), and using equalities (10) and (23), instead of (9) and (11) respectively. \(\square \)

Lemma 5

The set \({\mathscr {O}}_f\) of all pseudo-physical measures is globally empirically stochastically stable.

Proof

From Theorem 3, \(p\omega _x \subset {\mathscr {O}}_f\) for Lebesgue-a.e. \(x \in M\). Thus, the strong basin of statistical attraction \( A_{{\mathscr {O}}_f}\) of \({\mathscr {O}}_f\), defined by equality (23), has full Lebesgue measure. After Lemma 4, the basin \(\widehat{A}_{{\mathscr {O}}_f}\) of empiric stochastic stability of \({\mathscr {O}}_f\) has full Lebesgue measure. Therefore, if we prove that \({\mathscr {O}}_f\) is empirically stochastically stable, it must be globally so.

We now repeat the proof of Lemma 3, using \({\mathscr {O}}_f\) instead of a single measure \(\mu \), to construct a Lebesgue-positive set \(\widehat{A} \subset M\) such that, for all \(\rho >0\) and for all n large enough, there exists \(\varepsilon _0>0\) (independently of \(x \in \widehat{A}\)) such that

$$\text{ dist }^* (\sigma _{\varepsilon , n, x}, \ {\mathscr {O}}_f)< \rho \ \ \ \forall \ 0 <\varepsilon \le \varepsilon _0, \ \ \forall \ x \in \widehat{A}.$$

Thus, \({\mathscr {O}}_f \) satisfies condition (a) of Definition 8, to be empirically stochastically stable. Let us prove that \({\mathscr {O}}_f \) also satisfies condition (b):

Assume that \({\mathscr {K}} \subset {\mathscr {M}}_f\) is nonempty and weak\(^*\)-compact and \({\widehat{A}}_{{\mathscr {O}}_f }\subset {\widehat{A}}_{\mathscr {K}}\) Lebesgue-a.e. We shall prove that \({\mathscr {O}}_f \subset {\mathscr {K}}\). Arguing by contradiction, assume that there exists a probability measure \(\nu \in {\mathscr {O}}_f \setminus {\mathscr {K}}\). Choose

$$\begin{aligned} 0<\rho < \frac{\text{ dist }^* (\nu , \ {\mathscr {K}})}{2} \end{aligned}$$
(24)

On the one hand, since \(\nu \) is pseudo-physical, applying Definitions 11 and 12, the \(\rho \)-weak basin \(A^{\rho }_{\nu }\) of statistical attraction of \(\nu \) has positive Lebesgue measure. In brief:

$$\begin{aligned} m (\{x \in M: \liminf _{n \rightarrow + \infty } \text{ dist }^*(\sigma _{n,x}, \ \nu ) < \rho \}) >0. \end{aligned}$$
(25)

From inequalities (24) and (25), and applying equality (23), we deduce that

$$\begin{aligned} m (\{x \in M:\ p\omega _x \not \subset {\mathscr {K}}\}) >0, \ \ \ m(A_{\mathscr {K}}) < m (M). \end{aligned}$$
(26)

On the other hand, applying Lemma 4 and the hypothesis \({\widehat{A}}_{{\mathscr {O}}_f }\subset {\widehat{A}}_{\mathscr {K}}\) Lebesgue-a.e., we deduce

$$A_{{\mathscr {O}}_f} \subset A_{\mathscr {K}} \text{ Lebesgue } \text{ a.e. }.$$

Applying Theorem 3 and equality (23), we have

$$m(A_{{\mathscr {O}}_f}) = m (M), \ \ \text{ from } \text{ where } \text{ we } \text{ deduce } m(A_{{\mathscr {K}}}) = m (M),$$

contradicting the inequality on the right of (26).

We have proved that \({\mathscr {O}}_f \subset {\mathscr {K}}\). Thus \({\mathscr {O}}_f\) satisfies condition (b) of Definition 8, ending the proof of Lemma 5. \(\square \)

End of the proof of Theorem 2.

Proof

We denote by \({\mathscr {O}}_f\) the set of all pseudo-physical measures.

(a) Let \({\mathscr {K}} \subset {\mathscr {M}}_f\) be empirically stochastically stable, according to Definition 8. We shall prove that \({\mathscr {K}} \subset {\mathscr {O}}_f\). Assume by contradiction that there exists \(\nu \in {\mathscr {K}} \setminus {\mathscr {O}}_f\). So, \(\nu \) is not pseudo-physical, and applying Definition 12, there exists \(\rho >0\) such that the \(\rho \)-weak basin \(A_{\nu }^\rho \) of statistical attraction of \(\nu \) has zero Lebesgue measure. In brief, after Definition 11, we have

$$m(\{ x \in M:\ \ \text{ dist }^*(p\omega _x, \ \nu ) <\rho \})= 0,$$

from which we deduce that

$$\begin{aligned} p\omega _x \ \ \subset \ \ {\mathscr {M}}_f \setminus {\mathscr {B}}_{\rho }(\nu ) \ \ \ \text{ Lebesgue-a.e. } x \in M, \end{aligned}$$
(27)

where \({\mathscr {B}}_{\rho }(\nu )\) is the open ball in the space \({\mathscr {M}}\) of probability measures, with center at \(\nu \) and radius \(\rho \).
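In other words, reading \(\text{ dist }^*(p\omega _x, \nu )\) as the minimal distance from \(\nu \) to the compact set \(p\omega _x\) (the reading consistent with Definition 11), the step leading to (27) is the equivalence

$$\text{ dist }^*(p\omega _x, \ \nu ) \ge \rho \ \ \Longleftrightarrow \ \ p\omega _x \cap {\mathscr {B}}_{\rho }(\nu ) = \emptyset \ \ \Longleftrightarrow \ \ p\omega _x \subset {\mathscr {M}} \setminus {\mathscr {B}}_{\rho }(\nu ),$$

combined with the inclusion \(p\omega _x \subset {\mathscr {M}}_f\), which holds for every \(x \in M\) because the weak\(^*\)-limits of the empirical probabilities \(\sigma _{n,x}\) are f-invariant.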

Applying Lemma 4 and equality (23) we have

$$\widehat{A}_{\mathscr {K}} = A_{\mathscr {K}} = \{x \in M :\ \ p\omega _x \subset {\mathscr {K}}\}. $$

Combining this with assertion (27), we deduce that \(A_{\mathscr {K}} \subset A_{{\mathscr {K}} \setminus {\mathscr {B}}_{\rho }(\nu )} \ \ \text{ Lebesgue-a.e. }\); and applying Lemma 4 again we deduce:

$$ \widehat{A}_{\mathscr {K}} \subset \widehat{A}_{{\mathscr {K}} \setminus {\mathscr {B}}_{\rho }(\nu )} \ \ \text{ Lebesgue-a.e. }$$
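In more detail, the first of these two inclusions follows pointwise from equality (23) and assertion (27) (a sketch): for Lebesgue-a.e. \(x \in A_{\mathscr {K}}\),

$$p\omega _x \subset {\mathscr {K}} \ \ \text{ and } \ \ p\omega _x \cap {\mathscr {B}}_{\rho }(\nu ) = \emptyset \ \ \Longrightarrow \ \ p\omega _x \subset {\mathscr {K}} \setminus {\mathscr {B}}_{\rho }(\nu ) \ \ \Longrightarrow \ \ x \in A_{{\mathscr {K}} \setminus {\mathscr {B}}_{\rho }(\nu )}.$$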

But, by hypothesis, \({\mathscr {K}}\) is empirically stochastically stable. Thus it satisfies condition (b) of Definition 8, and we conclude that \({\mathscr {K}} \ \subset \ {\mathscr {K}} \setminus {\mathscr {B}}_{\rho }(\nu )\), which is a contradiction because \(\nu \in {\mathscr {K}}\) but \(\nu \not \in {\mathscr {K}} \setminus {\mathscr {B}}_{\rho }(\nu )\). This ends the proof of part (a) of Theorem 2.

(b) According to Lemma 5, if \({\mathscr {K}} = {\mathscr {O}}_f\), then \({\mathscr {K}}\) is globally empirically stochastically stable. Now, let us prove the converse assertion. Assume that \({\mathscr {K}} \) is globally empirically stochastically stable. We shall prove that \({\mathscr {K}} = {\mathscr {O}}_f\). Applying part (a) of Theorem 2, we know that \({\mathscr {K}} \subset {\mathscr {O}}_f\). So, it is enough to prove now that \({\mathscr {O}}_f \subset {\mathscr {K}}\).

By hypothesis \(m(\widehat{A}_{\mathscr {K}}) = m(M)\). From Lemma 4 we have \(\widehat{A}_{\mathscr {K}} = A_{\mathscr {K}}\). We deduce that \(m(A_{\mathscr {K}}) = m(M)\). From this latter assertion and equality (23), we obtain

$$p\omega _x \subset {\mathscr {K}} \ \ \text{ for } \text{ Lebesgue-a.e. } x \in M.$$

Finally, we apply the last assertion of Theorem 3 to conclude that \({\mathscr {O}}_f \subset {\mathscr {K}}\), as wanted. This ends the proof of Theorem 2. \(\square \)

Proof of Corollary 3.

Proof

This corollary is immediate from Theorem 2 and Lemma 5. In fact, Lemma 5 states that the set \({\mathscr {O}}_f\), which consists of all the pseudo-physical measures, is globally empirically stochastically stable, and part (b) of Theorem 2 states that \({\mathscr {O}}_f\) is the unique set of f-invariant measures that is globally empirically stochastically stable. \(\square \)

Before proving Corollaries 4, 5 and 6, we recall the following known result:

Theorem 4

For all \(x \in M\) the p-omega limit set \(p\omega _x\) has the following property:

For any pair of measures \(\mu _0, \mu _1 \in p\omega _x\) and for every real number \(0 \le \lambda \le 1\) there exists a measure \(\mu _{\lambda } \in p\omega _x\) such that \(\text{ dist }^*(\mu _0, \mu _{\lambda }) = \lambda \text{ dist }^*(\mu _0, \mu _1)\).

Proof

See [11, Theorem 2.1]. \(\square \)

Proof of Corollary 4.

Proof

Assume that \(\mu \) is pseudo-physical and isolated in the set \({\mathscr {O}}_f\) of all pseudo-physical measures. Then, there exists \(\rho >0\) such that:

$$\begin{aligned} \text{ if } \nu \in {\mathscr {O}}_f \text{ and } \text{ dist }^*(\nu , \mu ) < \rho , \ \ \text{ then } \ \ \nu = \mu . \end{aligned}$$
(28)

Since \(\mu \) is pseudo-physical, from Definition 12 we know that the \(\rho \)-weak basin \(A^{\rho }_{\mu }\) of statistical attraction of \(\mu \) has positive Lebesgue measure. From Definition 11 we deduce that

$$\begin{aligned} m(A^{\rho }_{\mu }) = m (\{ x \in M :\ \ \text{ dist } ^*(p \omega _x, \mu ) < \rho \})>0. \end{aligned}$$
(29)

Applying Theorem 3, we know that \(p \omega _x \subset {\mathscr {O}}_f\) for Lebesgue-a.e. \(x \in M\). Combining the latter assertion with (28) and (29), we deduce that

$$\{\mu \} = p \omega _x \cap {\mathscr {B}}_{\rho }(\mu ) \ \ \text{ for } \text{ Lebesgue-a.e. } x \in A^{\rho } _{\mu },$$

where \({\mathscr {B}}_{\rho }(\mu )\) is the ball in the space of probability measures, with center at \(\mu \) and radius \(\rho \).
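The way Theorem 4 is used in the next step can be spelled out as follows (a sketch; \(\mu _1\) denotes a hypothetical second element of \(p\omega _x\)): if, for such an \(x\), there existed \(\mu _1 \in p\omega _x\) with \(\mu _1 \ne \mu \), then Theorem 4, applied to the pair \(\mu , \mu _1 \in p\omega _x\), would provide for every small \(\lambda > 0\) a measure \(\mu _{\lambda } \in p\omega _x\) with

$$0 < \text{ dist }^*(\mu , \mu _{\lambda }) = \lambda \, \text{ dist }^*(\mu , \mu _1) < \rho ,$$

so that \(\mu _{\lambda } \in p\omega _x \cap {\mathscr {B}}_{\rho }(\mu ) = \{\mu \}\), forcing \(\mu _{\lambda } = \mu \) and contradicting \(\text{ dist }^*(\mu , \mu _{\lambda }) > 0\).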

Besides, from Theorem 4 we deduce that \(p \omega _x = \{\mu \}\) for Lebesgue-a.e. \(x \in A^{\rho } _{\mu } \), hence for a Lebesgue-positive set of points \(x \in M\). Applying Definition 12, we conclude that the given pseudo-physical measure \(\mu \) is physical; hence, from Theorem 1, \(\mu \) is empirically stochastically stable. \(\square \)

Proof of Corollary 5.

Proof

(i) implies (ii): If the set \(\mathscr {O}_f\) of pseudo-physical measures is finite, then every pseudo-physical measure is isolated in \(\mathscr {O}_f\); hence all of them are physical, due to Corollary 4. Then, applying Theorem 1, all of them are (individually) empirically stochastically stable. Besides, the union of their strong basins of statistical attraction has full Lebesgue measure: in fact, applying Definition 11 and equality (23), that union is the set \(A_{\mathscr {O}_f}\); and, due to Theorem 3, the set \(A_{{\mathscr {O}}_f}\) has full Lebesgue measure. So, assertion (ii) is proved.

(ii) implies (i): Assume that there exists a finite number \(r \ge 1\) of empirically stochastically stable measures \(\mu _1, \mu _2, \ldots , \mu _r\) (hence physical measures, due to Theorem 1), and assume also that the union \(\bigcup _{i=1}^r A_{\mu _i}\) of their strong basins of statistical attraction covers Lebesgue-a.e. point of M. Applying Definition 11 and equality (23), we deduce that \(A_{\{\mu _1, \ldots , \mu _r\}} = \bigcup _{i=1}^r A_{\mu _i}\) has full Lebesgue measure. So, from the last assertion of Theorem 3, \({\mathscr {O}}_f \subset \{\mu _1, \ldots , \mu _r\}\). In other words, the set \({\mathscr {O}}_f\) of pseudo-physical measures is finite, proving assertion (i). \(\square \)

Proof of Corollary 6.

Proof

If the set \({\mathscr {O}}_f\) is finite, then we apply Corollary 5 to deduce that there exists a finite number of empirically stochastically stable measures, hence physical measures, such that the union of their strong basins of statistical attraction has full Lebesgue measure.

Now let us consider the case for which, by hypothesis, the set \({\mathscr {O}}_f\) of pseudo-physical measures is countably infinite. In brief: \({\mathscr {O}}_f = \{\mu _i\}_{i \in \mathbb {N}}\).

Applying Theorem 3, the p-omega limit sets \(p\omega _x\) are contained in \({\mathscr {O}}_f \) for Lebesgue-a.e. \(x \in M\). But, from Theorem 4, we know that \(p\omega _x\) is either a single measure or uncountably infinite (see the sketch after the next displayed formula). Since it is contained in the countable set \({\mathscr {O}}_f\), we deduce that \(p\omega _x \) consists of a single measure of \({\mathscr {O}}_f\) for Lebesgue-a.e. \(x \in M\). Now, recalling Definition 11 and equality (23), and noting that the strong basins \(A_{\mu _i} = \{x \in M :\ p\omega _x = \{\mu _i\}\}\) are pairwise disjoint, we deduce that

$$A_{\mathscr {O}_{f}} = \bigcup _{i= 1}^{+ \infty } A_{\mu _i}, \ \ \ \ \ \ \ \ \ \ \ \sum _{i= 1}^{+ \infty } m(A_{\mu _i}) = m(M).$$
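The dichotomy invoked above (either a single measure or uncountably many) is a direct consequence of Theorem 4, sketched as follows: if \(p\omega _x\) contained two distinct measures \(\mu _0 \ne \mu _1\), then for every \(\lambda \in [0,1]\) Theorem 4 would give a measure \(\mu _{\lambda } \in p\omega _x\) with

$$\text{ dist }^*(\mu _0, \mu _{\lambda }) = \lambda \, \text{ dist }^*(\mu _0, \mu _1),$$

and since these distances are pairwise distinct for distinct values of \(\lambda \), the measures \(\mu _{\lambda }\) are pairwise distinct; hence \(p\omega _x\) would contain the uncountable family \(\{\mu _{\lambda } :\ \lambda \in [0,1]\}\).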

Therefore, from the equality on the right above, there exist finitely many or countably infinitely many pseudo-physical measures \(\mu _{i_n}\), \(1 \le n \le r\), with \(r \in \mathbb {N}^+ \cup \{+ \infty \}\), such that

$$\begin{aligned} m (A_{\mu _{i_n}}) >0 \ \ \forall \ 1 \le n \le r, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \sum _{n= 1}^{r} m(A_{\mu _{i_n}}) = m(M). \end{aligned}$$
(30)

From Definition 12, each measure \(\mu _{i_n}\) is physical; hence empirically stochastically stable, due to Theorem 1. Besides, from the equality on the right of (30), we deduce that the union \(\bigcup _{n= 1}^r A_{\mu _{i_n}}\) has full Lebesgue measure, as wanted.

Finally, to end the proof of Corollary 6, let us show that the set \(\{\mu _{i_n} :\ 1 \le n \le r\}\) of physical measures constructed above cannot be finite; in brief, let us prove that \(r= + \infty \). In fact, if there existed a finite number \(r \in \mathbb {N}^+\) of physical measures whose basins of statistical attraction have a union with full Lebesgue measure, then we would apply Corollary 5 and deduce that the set \({\mathscr {O}}_f\) of pseudo-physical measures is finite. But in our case, by hypothesis, \({\mathscr {O}}_f\) is countably infinite. This contradiction ends the proof of Corollary 6. \(\square \)

Proof of Corollary 7.

Proof

From part (a) of Theorem 2 we know that all the measures of any empirically stochastically stable set \({\mathscr {K}} \subset {\mathscr {M}}_f\) are pseudo-physical. Besides, in [12] it is proved that, for any \(C^1\)-expanding map f of the circle, any pseudo-physical or SRB-like measure satisfies the Pesin Entropy Formula (12). We conclude that all the measures of \({\mathscr {K}}\) satisfy this formula. \(\square \)

Proof of Corollary 8.

Proof

From part (b) of Theorem 2 we know that the globally empirically stochastically stable set \({\mathscr {K}}\) coincides with the set \({\mathscr {O}}_f\) of pseudo-physical measures. Besides, in [13] it is proved that, for \(C^0\)-generic maps f of the interval, any ergodic measure belongs to \({\mathscr {O}}_f\); nevertheless, \({\mathscr {O}}_f\) is weak\(^*\)-closed with empty interior in the space \({\mathscr {M}}_f\) of invariant measures. We conclude that all ergodic measures belong to the globally empirically stochastically stable set \({\mathscr {K}}\) and that this set of invariant measures is meager in \({\mathscr {M}}_f\), as wanted. \(\square \)