1 Introduction

The so-called spectral method is a powerful approach to establishing limit theorems. It was introduced by Nagaev [40, 41] in the context of Markov chains, and by Guivarc’h and Hardy [27] as well as Rousseau-Egele [45] for deterministic dynamical systems. We refer to [33] for a detailed presentation of this method. In the case of deterministic dynamics, we have a map T on the state space X which preserves a probability measure \(\mu \) on X. Then, for a suitable class of observables g, we want to obtain limit laws for the process \((g\circ T^n)_{n\in \mathbb {N}}\). In other words, we wish to study the distribution of the Birkhoff sums \(S_n g=\sum _{i=0}^{n-1}g\circ T^i\), \(n\in \mathbb {N}\). Let \(\mathcal {L}\) be the transfer operator (acting on a suitable Banach space \(\mathcal B\)) associated with T and, for each complex parameter \(\theta \), let \(\mathcal {L}^\theta \) be the so-called twisted transfer operator given by \(\mathcal {L}^\theta f=\mathcal {L}(e^{\theta g}\cdot f)\), \(f\in \mathcal B\). The core of the spectral method consists of the following steps:

  • rewriting the characteristic function of \(S_n g\) in terms of the powers of the twisted transfer operators \(\mathcal {L}^\theta \);

  • applying the classical perturbation theory of Kato to show that for \(\theta \) sufficiently close to 0, \(\mathcal {L}^\theta \) inherits nice spectral properties from \(\mathcal {L}\). More precisely, one usually works under assumptions which ensure that \(\mathcal {L}\) is a quasi-compact operator of spectral radius 1 such that 1 is its only eigenvalue on the unit circle and has multiplicity one (with the eigenspace essentially corresponding to \(\mu \)). Then, for \(\theta \) sufficiently close to 0, \(\mathcal {L}^\theta \) is again a quasi-compact operator with an isolated eigenvalue of multiplicity one such that both the eigenvalue and the corresponding eigenspace (as well as other related objects) depend analytically on \(\theta \).
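The two steps above can be illustrated numerically. The sketch below is a toy example of ours (not taken from the works cited): it uses the doubling map \(T(x)=2x \bmod 1\), which preserves Lebesgue measure, a crude Ulam (piecewise-constant) discretization of its transfer operator, and the centered observable \(g(x)=\cos (2\pi x)\), and it checks the first step, namely that the characteristic function \(\int e^{itS_n g}\, \mathrm{d}m\) coincides with \(\int (\mathcal {L}^{it})^n \mathbf {1}\, \mathrm{d}m\), an integral of a power of the twisted operator.

```python
import numpy as np

N = 4096                      # number of Ulam cells (even)
x = (np.arange(N) + 0.5) / N  # cell midpoints
g = np.cos(2 * np.pi * x)     # observable, centered w.r.t. Lebesgue

def L(f):
    # Ulam transfer operator of T(x) = 2x mod 1 on piecewise constants:
    # (Lf)_j = (f_{j//2} + f_{j//2 + N/2}) / 2
    j = np.arange(N)
    return 0.5 * (f[j // 2] + f[j // 2 + N // 2])

def char_fn_spectral(t, n):
    # phi_n(t) = integral of (L^{it})^n 1 dm, with L^{it} f = L(e^{itg} f)
    f = np.ones(N, dtype=complex)
    for _ in range(n):
        f = L(np.exp(1j * t * g) * f)
    return f.mean()           # integral w.r.t. m on the grid

def char_fn_direct(t, n, M=200_000):
    # phi_n(t) = integral of exp(it S_n g(x)) dx by direct quadrature
    y = (np.arange(M) + 0.5) / M
    S = np.zeros(M)
    for _ in range(n):
        S += np.cos(2 * np.pi * y)
        y = (2 * y) % 1.0
    return np.exp(1j * t * S).mean()

t, n = 0.7, 6
a, b = char_fn_spectral(t, n), char_fn_direct(t, n)
print(abs(a - b))  # small: the two representations agree
```

The agreement is only up to discretization error, but it makes the representation of characteristic functions through twisted-operator powers concrete.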

This method has been used to establish a variety of limit laws for broad classes of chaotic deterministic systems exhibiting some degree of hyperbolicity: large deviation principles [33, 44], central limit theorems [3, 11, 33, 45], Berry–Esseen bounds [23, 27], local central limit theorems [23, 33, 45], as well as the almost sure invariance principle [25]. We refer to the excellent survey paper [26] for more details and further references.

Very recently, the spectral method was extended to broad classes of random dynamical systems. More precisely, the first author et al. adapted the spectral method in order to obtain several quenched limit theorems for random piecewise expanding as well as random hyperbolic dynamics [15, 17]. In particular, they proved the first version of the quenched local central limit theorem in the context of random dynamics. A similar task was independently accomplished for random distance expanding dynamics by the second author and Kifer [31]. We stress that the study of the statistical properties of random or time-dependent dynamical systems was initiated by Bakhtin [7, 8] and Kifer [34, 35] using techniques different from those in [15, 17] (and the present paper). Indeed, the methods in [7, 8] rely on the use of real Birkhoff cones (and share some similarities with the approach in [31]), although Bakhtin does not discuss the local central limit theorem and the dynamics he considered does not allow the presence of singularities. Moreover, his results do not include the large deviations principles obtained in [15, 17]. On the other hand, all the results in [35] rely on the martingale method which, although also very powerful, cannot, for example, be used to obtain a local central limit theorem.

Let us now briefly discuss the main ideas from [15, 17, 31]. Instead of a single map as in the deterministic setting, we now have a collection of maps \((T_\omega )_{\omega \in \varOmega }\) acting on a state space X, where \((\varOmega , \mathcal F, \mathbb P)\) is a probability space. We consider random compositions of the form

$$\begin{aligned} T_\omega ^{(n)}=T_{\sigma ^{n-1}\omega }\circ \cdots \circ T_\omega \quad \text {for } \omega \in \varOmega \text { and } n\in \mathbb {N}, \end{aligned}$$

where \(\sigma :\varOmega \rightarrow \varOmega \) is an invertible \(\mathbb P\)-preserving transformation. Under appropriate conditions, there exists a unique family of probability measures \((\mu _\omega )_{\omega \in \varOmega }\) on X such that \(T_\omega ^*\mu _\omega =\mu _{\sigma \omega }\) for \(\mathbb P\)-a.e. \(\omega \in \varOmega \). Then, for a suitable class of observables \(g:\varOmega \times X\rightarrow \mathbb {R}\), we wish to establish limit laws for the process \((g_{\sigma ^n \omega }\circ T_\omega ^{(n)})_{n\in \mathbb {N}}\) with respect to \(\mu _\omega \), where \(g_\omega :=g(\omega , \cdot )\), \(\omega \in \varOmega \). Let \(\mathcal {L}_\omega \) denote the transfer operator associated with \(T_\omega \) (acting on a suitable Banach space \(\mathcal B\)). As in the deterministic case, for each \(\theta \in \mathbb {C}\) and \(\omega \in \varOmega \) we consider the twisted transfer operator \(\mathcal {L}_\omega ^\theta \) on \(\mathcal B\) defined by \(\mathcal {L}_\omega ^\theta f=\mathcal {L}_\omega (e^{\theta g(\omega , \cdot )}f)\), \(f\in \mathcal B\). Then, the arguments in [15, 17] proceed as follows:

  • we represent the characteristic functions of the random Birkhoff sums

    $$\begin{aligned} S_n g(\omega , \cdot )=\sum _{i=0}^{n-1} g_{\sigma ^i \omega }(T_\omega ^{(i)}(\cdot )) \end{aligned}$$

    in terms of twisted transfer operators;

  • in the language of the multiplicative ergodic theory, for \(\theta \) sufficiently close to 0, the twisted cocycle \((\mathcal {L}_\omega ^\theta )_{\omega \in \varOmega }\) is quasi-compact, its largest Lyapunov exponent has multiplicity one (i.e., the associated Oseledets subspace is one dimensional) and, similarly to the deterministic case, all these objects exhibit sufficiently regular behavior with respect to \(\theta \).

Although Lyapunov exponents and the associated Oseledets subspaces are precisely the nonautonomous analogues of eigenvalues and eigenspaces, we emphasize that the methods in [15, 17] require highly nontrivial adjustments of the classical spectral method for deterministic dynamics.

The goal of the present paper is twofold. In one direction, we wish to extend the main results from [15, 17] by establishing quenched versions of the large deviations principle, the central limit theorem and the local central limit theorem for vector-valued observables. We stress that in [15, 17] the authors dealt only with scalar-valued observables. Although we heavily rely on the previous work in order to accomplish this, we stress that the treatment of vector-valued observables requires several nontrivial changes when compared to the previous papers.

In another direction, we show that the spectral method developed in [15, 17] can be used to establish a variety of new limit laws (either for scalar or vector-valued observables) that have not been considered previously in the literature (at least for the classes of dynamics that are considered in the present paper). Indeed, here we discuss for the first time a moderate deviations principle, Berry–Esseen bounds, concentration inequalities, Edgeworth and certain large deviations expansions for random piecewise expanding and hyperbolic dynamics. We emphasize that each of these results requires a nontrivial adaptation of the techniques developed in [15, 17]. We stress, in particular, that similarly to [15, 17], none of our results requires any mixing assumptions for the base map \(\sigma \).

Finally, we would like to briefly mention some other works devoted to statistical properties of random dynamical systems. We particularly mention the works of Ayyer, Liverani and Stenlund [3] as well as Aimino, Nicol and Vaienti [1] that preceded [15]. They also discuss limit laws for random toral automorphisms and random piecewise expanding maps, respectively, but under the restrictive assumption that the base space \((\varOmega , \sigma )\) is a Bernoulli shift. Furthermore, we mention the recent interesting papers by Bahsoun and collaborators [2, 4, 5] as well as Su [47] concerned with the decay of correlations and limit laws for systems which can be modeled by random Young towers. Further relevant contributions to the study of statistical properties of random or time-dependent dynamics have been made by Nándori, Szász, and Varjú [42], Nicol, Török and Vaienti [43], Hella and Stenlund [32], Leppänen and Stenlund [36, 37] as well as the second author [29, 30]. We also refer the readers to corresponding results for inhomogeneous Markov chains, including ones arising as almost sure realizations of Markov chains in random (dynamical) environments, due to Dolgopyat and Sarig [14] and Kifer and the second author [31].

2 Preliminaries

In this section, we recall basic notions and results from multiplicative ergodic theory which will be used in the subsequent sections. The material is essentially taken from [15], but we include it for the reader’s convenience.

2.1 Multiplicative Ergodic Theorem

In this subsection, we recall the recently established versions of the multiplicative ergodic theorem which can be applied to the study of cocycles of transfer operators and will play an important role in the present paper. We begin by recalling some basic notions.

A tuple \(\mathcal {R}=(\varOmega , \mathcal {F}, \mathbb {P}, \sigma , \mathcal {B}, \mathcal {L})\) will be called a linear cocycle, or simply a cocycle, if \(\sigma \) is an invertible ergodic measure-preserving transformation on a probability space \((\varOmega ,\mathcal F,\mathbb P)\), \((\mathcal {B}, \Vert {\cdot } \Vert )\) is a Banach space and \(\mathcal L:\varOmega \rightarrow L(\mathcal {B})\) is a family of bounded linear operators such that \(\log ^+\Vert \mathcal L(\omega )\Vert \in L^1(\mathbb P)\). Sometimes, we will also use \(\mathcal {L}\) to refer to the full cocycle \(\mathcal {R}\). In order to obtain sufficient measurability conditions, we assume the following:

  1. (C0)

    \(\varOmega \) is a Borel subset of a separable, complete metric space, \(\sigma \) is a homeomorphism and \(\mathcal {L}\) is either \(\mathbb {P}\)-continuous (that is, \(\mathcal {L}\) is continuous on each of countably many Borel sets whose union is \(\varOmega \)) or strongly measurable (that is, the map \(\omega \mapsto \mathcal {L}_\omega f\) is measurable for each \(f\in \mathcal {B}\)) and \(\mathcal {B}\) is separable.

For each \(\omega \in \varOmega \) and \(n\ge 0\), let \( \mathcal {L}_\omega ^{(n)}\) be the linear operator given by

$$\begin{aligned} \mathcal {L}_\omega ^{(n)}:= \mathcal {L}_{\sigma ^{n-1}\omega }\circ \cdots \circ \mathcal {L}_{\sigma \omega } \circ \mathcal {L}_\omega . \end{aligned}$$

Condition (C0) implies that the map \(\omega \mapsto \log \Vert \mathcal {L}_\omega ^{(n)}\Vert \) is measurable for each \(n\in \mathbb {N}\). Thus, Kingman’s sub-additive ergodic theorem ensures that the following limits exist and are independent of \(\omega \) for \(\mathbb {P}\text {-a.e. } \omega \in \varOmega \):

$$\begin{aligned} \varLambda (\mathcal {R})&:= \lim _{n\rightarrow \infty } \frac{1}{n}\log \Vert \mathcal {L}_\omega ^{(n)}\Vert \\ \kappa (\mathcal {R})&:= \lim _{n\rightarrow \infty } \frac{1}{n}\log \text {ic}( \mathcal {L}_\omega ^{(n)}), \end{aligned}$$

where

$$\begin{aligned} \text {ic}(A):=\inf \Big \{r>0 : \ A(B_\mathcal {B}) \text { can be covered with finitely many balls of radius }r \Big \}, \end{aligned}$$

and \(B_\mathcal {B}\) is the unit ball of \(\mathcal {B}\). The cocycle \(\mathcal {R}\) is called quasi-compact if \(\varLambda (\mathcal {R})> \kappa (\mathcal {R})\). The quantity \(\varLambda (\mathcal {R})\) is called the top Lyapunov exponent of the cocycle and generalizes the notion of (logarithm of) spectral radius of a linear operator. Furthermore, \(\kappa (\mathcal {R})\) generalizes the notion of essential spectral radius to the context of cocycles.
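For a concrete, hedged illustration (a toy example of ours, not from the cited works): for an i.i.d. cocycle of positive diagonal matrices, \(\Vert \mathcal {L}_\omega ^{(n)}\Vert \) is the larger of the two diagonal products, so the limit defining \(\varLambda (\mathcal {R})\) can be computed in closed form and checked against a simulation of Kingman's theorem.

```python
import numpy as np

rng = np.random.default_rng(0)

# i.i.d. cocycle of positive diagonal 2x2 matrices diag(a_k, b_k):
# here Lambda = max(E[log a], E[log b]) is known in closed form
n = 5000
a = rng.uniform(0.5, 2.0, size=n)     # first diagonal entry per step
b = rng.uniform(0.1, 1.0, size=n)     # second diagonal entry per step

# ||L_omega^{(n)}|| for the diagonal cocycle is max(prod a, prod b);
# work with logs to avoid overflow, as in Kingman's theorem
log_norm = max(np.sum(np.log(a)), np.sum(np.log(b)))
Lambda_est = log_norm / n

# closed form: E[log U(l,u)] = (u log u - l log l)/(u - l) - 1
def mean_log_uniform(l, u):
    return (u * np.log(u) - l * np.log(l)) / (u - l) - 1.0

Lambda_true = max(mean_log_uniform(0.5, 2.0), mean_log_uniform(0.1, 1.0))
print(Lambda_est, Lambda_true)
```

Here quasi-compactness is trivial (the space is finite dimensional, so \(\kappa (\mathcal {R})=-\infty \)); the point is only to make the subadditive limit tangible.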

Remark 2.1

We refer to [15, Lemma 2.1] for useful criteria which can be used to verify that the cocycle is quasi-compact.

A spectral-type decomposition for quasi-compact cocycles can be obtained via the following multiplicative ergodic theorem.

Theorem 2.2

(Multiplicative ergodic theorem, MET [10, 21, 22]). Let \(\mathcal R=(\varOmega ,\mathcal F,\mathbb P,\sigma ,\mathcal {B},\mathcal L)\) be a quasi-compact cocycle and suppose that condition (C0) holds. Then, there exist \(1\le l\le \infty \) and a sequence of exceptional Lyapunov exponents

$$\begin{aligned} \varLambda (\mathcal {R})=\lambda _1>\lambda _2>\cdots>\lambda _l>\kappa (\mathcal {R}) \quad (\text {if } 1\le l<\infty ) \end{aligned}$$

or

$$\begin{aligned} \varLambda (\mathcal {R})=\lambda _1>\lambda _2>\cdots \quad \text {and} \quad \lim _{n\rightarrow \infty } \lambda _n=\kappa (\mathcal {R}) \quad (\text {if } l=\infty ), \end{aligned}$$

and for \(\mathbb P\)-a.e. \(\omega \in \varOmega \), there exists a unique splitting (called the Oseledets splitting) of \(\mathcal {B}\) into closed subspaces

$$\begin{aligned} \mathcal {B}=V(\omega )\oplus \bigoplus _{j=1}^l Y_j(\omega ), \end{aligned}$$
(1)

depending measurably on \(\omega \) and such that:

  1. (I)

    For each \(1\le j \le l\), \(Y_j(\omega )\) is finite-dimensional (\(m_j:=\dim Y_j(\omega )<\infty \)), \(Y_j\) is equivariant, i.e., \(\mathcal {L}_\omega Y_j(\omega )= Y_j(\sigma \omega )\) and for every \(y\in Y_j(\omega ){\setminus }\{0\}\),

    $$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\log \Vert \mathcal L_\omega ^{(n)}y\Vert =\lambda _j. \end{aligned}$$

    (Throughout this paper, we will also refer to \(Y_1(\omega )\) as simply \(Y(\omega )\) or \(Y_\omega \).)

  2. (II)

    V is equivariant, i.e., \(\mathcal {L}_\omega V(\omega )\subseteq V(\sigma \omega )\) and for every \(v\in V(\omega )\),

    $$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\log \Vert \mathcal L_\omega ^{(n)}v\Vert \le \kappa (\mathcal {R}). \end{aligned}$$
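A minimal finite-dimensional illustration (ours; for a constant cocycle the MET reduces to the spectral decomposition of a single matrix): for \(\mathcal {L}(\omega )\equiv \mathrm {diag}(2, 1/2)\) the Oseledets spaces are the coordinate axes, with exponents \(\log 2\) and \(-\log 2\), and a generic vector picks up the top exponent.

```python
import numpy as np

A = np.diag([2.0, 0.5])          # constant cocycle: L_omega = A for all omega

def lyap(v, n=60):
    # (1/n) log ||A^n v||, computed stably with running renormalization
    w = np.asarray(v, dtype=float)
    s = 0.0
    for _ in range(n):
        w = A @ w
        r = np.linalg.norm(w)
        s += np.log(r)
        w /= r
    return s / n

# span{e1} realizes lambda_1 = log 2 (the top Oseledets space Y_1),
# span{e2} realizes lambda_2 = -log 2, and a generic vector sees lambda_1
l_top = lyap(np.array([1.0, 0.0]))
l_bot = lyap(np.array([0.0, 1.0]))
l_gen = lyap(np.array([1.0, 1.0]))
print(l_top, l_bot, l_gen)
```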

The adjoint cocycle associated with \(\mathcal {R}\) is the cocycle \(\mathcal {R}^*:=(\varOmega , \mathcal {F}, \mathbb {P}, \sigma ^{-1}, \mathcal {B}^*, \mathcal {L}^*)\), where \((\mathcal {L}^*)_\omega := (\mathcal {L}_{\sigma ^{-1}\omega })^*\). In a slight abuse of notation which should not cause confusion, we will often write \(\mathcal {L}^*_\omega \) instead of \((\mathcal {L}^*)_\omega \), so \(\mathcal {L}^*_\omega \) will denote the operator adjoint to \(\mathcal {L}_{\sigma ^{-1}\omega }\).

The following two results are taken from [15].

Corollary 2.3

Under the assumptions of Theorem 2.2, the adjoint cocycle \(\mathcal {R}^*\) has a unique, measurable, equivariant Oseledets splitting

$$\begin{aligned} \mathcal {B}^*=V^*(\omega )\oplus \bigoplus _{j=1}^l Y^*_j(\omega ), \end{aligned}$$
(2)

with the same exceptional Lyapunov exponents \(\lambda _j\) and multiplicities \(m_j\) as \(\mathcal {R}\).

Let the simplified Oseledets decomposition for the cocycle \(\mathcal {L}\) (resp. \(\mathcal {L}^*\)) be

$$\begin{aligned} \mathcal B=Y(\omega )\oplus H(\omega ) \quad (\text {resp. } \mathcal {B}^*=Y^*(\omega ) \oplus H^*(\omega ) ), \end{aligned}$$
(3)

where \(Y(\omega )\) (resp. \(Y^*(\omega )\)) is the top Oseledets subspace for \(\mathcal {L}\) (resp. \(\mathcal {L}^*\)) and \(H(\omega )\) (resp. \(H^*(\omega )\)) is a direct sum of all other Oseledets subspaces.

For a subspace \(S\subset \mathcal B\), we set \( S^\circ =\{\phi \in \mathcal {B}^*: \phi (f)=0 \quad \text {for every } f\in S\}\) and similarly for a subspace \(S^* \subset \mathcal {B}^*\) we define \( (S^*)^\circ =\{f\in \mathcal {B}: \phi (f)=0 \quad \text {for every } \phi \in S^*\}. \)

Lemma 2.4

(Relation between Oseledets splittings of \(\mathcal {R}\) and \(\mathcal {R}^*\)). The following relations hold for \(\mathbb {P}\text {-a.e. } \omega \in \varOmega \):

$$\begin{aligned} H^*(\omega )=Y(\omega )^\circ \quad \text {and} \quad H(\omega )=Y^*(\omega )^\circ . \end{aligned}$$
(4)

3 Piecewise Expanding Dynamics

In this section, we introduce the class of random piecewise expanding dynamics we plan to study (which is the same as considered in [15]). We then proceed by introducing a class of vector-valued observables to which our limit theorems will apply. Furthermore, for \(\theta \in \mathbb {C}^d\), we introduce the corresponding twisted cocycle of transfer operators \((\mathcal {L}_\omega ^\theta )_{\omega \in \varOmega }\). Finally, we study the regularity (with respect to \(\theta \)) of the largest Lyapunov exponent and the corresponding top Oseledets space of the cocycle \((\mathcal {L}_\omega ^\theta )_{\omega \in \varOmega }\). Our arguments in this section follow closely the approach developed in [15]. We refer as much as possible to [15], discussing in detail only the arguments which require substantial changes (when compared to [15]).

3.1 Notions of Variation

Let \((X, \mathcal G)\) be a measurable space endowed with a probability measure m and a notion of a variation \({{\,\mathrm{var}\,}}:L^1(X, m) \rightarrow [0, \infty ]\) which satisfies the following conditions:

  1. (V1)

    \({{\,\mathrm{var}\,}}(th)=|t|{{\,\mathrm{var}\,}}(h)\);

  2. (V2)

    \({{\,\mathrm{var}\,}}(g+h)\le {{\,\mathrm{var}\,}}(g)+{{\,\mathrm{var}\,}}(h)\);

  3. (V3)

    \(\Vert h\Vert _{L^\infty } \le C_{{{\,\mathrm{var}\,}}}(\Vert h\Vert _1+{{\,\mathrm{var}\,}}(h))\) for some constant \(1\le C_{{{\,\mathrm{var}\,}}}<\infty \);

  4. (V4)

    for any \(C>0\), the set \(\{h:X \rightarrow \mathbb R: \Vert h\Vert _1+{{\,\mathrm{var}\,}}(h) \le C\}\) is \(L^1(m)\)-compact;

  5. (V5)

    \({{\,\mathrm{var}\,}}(1_X) <\infty \), where \(1_X\) denotes the function equal to 1 on X;

  6. (V6)

    \(\{h :X \rightarrow \mathbb R_+: \Vert h\Vert _1=1 \ \text {and} \ {{\,\mathrm{var}\,}}(h)<\infty \}\) is \(L^1(m)\)-dense in \(\{h:X \rightarrow \mathbb R_+: \Vert h\Vert _1=1\}\).

  7. (V7)

    for any \(f\in L^1(X, m)\) such that \({{\,\mathrm{ess\ inf}\,}}f>0\), we have \({{\,\mathrm{var}\,}}(1/f) \le \frac{{{\,\mathrm{var}\,}}(f)}{({{\,\mathrm{ess\ inf}\,}}f)^2}\).

  8. (V8)

    \({{\,\mathrm{var}\,}}(fg)\le \Vert f\Vert _{L^\infty }\cdot {{\,\mathrm{var}\,}}(g)+\Vert g\Vert _{L^\infty }\cdot {{\,\mathrm{var}\,}}(f)\).

  9. (V9)

    for \(M>0\), \(f:X \rightarrow \overline{B}_{\mathbb {R}^d} (0, M)\) measurable and every \(C^1\) function \(h:\overline{B}_{\mathbb {R}^d} (0, M) \rightarrow \mathbb {C}\), we have \({{\,\mathrm{var}\,}}(h\circ f)\le \sup \{ \Vert Dh(P)\Vert : P\in \overline{B}_{\mathbb {R}^d}(0, M) \} \cdot {{\,\mathrm{var}\,}}(f)\). Here, \(\overline{B}_{\mathbb {R}^d}(0, M)\) denotes the closed ball in \(\mathbb {R}^d\) centered at 0 with radius M.

We define

$$\begin{aligned} \mathcal {B}:=BV=BV(X,m)=\{g\in L^1(X, m): {{\,\mathrm{var}\,}}(g)<\infty \}. \end{aligned}$$

Then, \(\mathcal {B}\) is a Banach space with respect to the norm

$$\begin{aligned} \Vert g\Vert _{\mathcal {B}} =\Vert g\Vert _1+ {{\,\mathrm{var}\,}}(g). \end{aligned}$$

From now on, in this section, we will use \(\mathcal {B}\) to denote a Banach space of this type, and \( \Vert g\Vert _{\mathcal {B}} \), or simply \(\Vert g\Vert \) will denote the corresponding norm.

We note that the main examples of such a notion of variation correspond to the case where X is a subset of \(\mathbb {R}^n\). In the one-dimensional case, we use the classical notion of variation given by

$$\begin{aligned} {{\,\mathrm{var}\,}}(g)=\inf _{h=g\ (\mathrm{mod}\ m)} \sup _{0=s_0<s_1<\cdots <s_n=1}\sum _{k=1}^n |h(s_k)-h(s_{k-1})|\end{aligned}$$
(5)

for which it is well known that properties (V1)–(V9) hold. On the other hand, in the multidimensional case (see [46]), we let \(m=Leb\) and define

$$\begin{aligned} {{\,\mathrm{var}\,}}(f)=\sup _{0<\epsilon \le \epsilon _0}\frac{1}{\epsilon ^\alpha }\int _{\mathbb {R}^n}\text {osc} (f, B_\epsilon (x))\, \mathrm{d}x, \end{aligned}$$
(6)

where

$$\begin{aligned} \text {osc} (f, B_\epsilon (x))={{\,\mathrm{ess\ sup}\,}}_{x_1, x_2 \in B_\epsilon (x)}|f(x_1)-f(x_2)|\end{aligned}$$

and where \({{\,\mathrm{ess\ sup}\,}}\) is taken with respect to product measure \(m\times m\). It has been discussed in [15] that in this case, \(\text {var}(\cdot )\) again satisfies properties (V1)–(V9).
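As a sanity check (a toy computation of ours, not from the paper), the classical one-dimensional variation (5) can be approximated by partition sums over a fine uniform grid; for smooth functions the supremum in (5) is approached in the limit of fine partitions, and the product bound (V8) can be observed numerically.

```python
import numpy as np

# grid approximation of the classical 1D variation (5): partition sums
# over an increasingly fine uniform partition of [0, 1]
def var_grid(h, n=100_000):
    s = np.linspace(0.0, 1.0, n + 1)
    return np.abs(np.diff(h(s))).sum()

f = lambda x: np.sin(2 * np.pi * x)
g = lambda x: np.cos(2 * np.pi * x)

vf = var_grid(f)                       # exact value is 4 (0 -> 1 -> -1 -> 0)
vg = var_grid(g)                       # exact value is 4
vfg = var_grid(lambda x: f(x) * g(x))  # f*g = 0.5 sin(4 pi x); var = 4

print(vf, vg, vfg)
# (V8): var(fg) <= ||f||_inf var(g) + ||g||_inf var(f) = 8 here
```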

In another direction, by taking \(\text {var}(\cdot )\) to be a Hölder constant and X to be a compact metric space, our framework also includes distance expanding maps considered in [31, 38] which are nonsingular with respect to a given measure m. (In particular, we consider the case of identical fiber spaces \(X_\omega =X\).)

3.2 Cocycles of Transfer Operators

Let \((\varOmega , \mathcal {F}, \mathbb P, \sigma )\) be as in Sect. 2.1, and X and \(\mathcal {B}\) as in Sect. 3.1. Let \(T_{\omega } :X \rightarrow X\), \(\omega \in \varOmega \) be a collection of nonsingular transformations (i.e., \(m\circ T_\omega ^{-1}\ll m\) for each \(\omega \)) acting on X. The associated skew product transformation \(\tau :\varOmega \times X \rightarrow \varOmega \times X\) is defined by

$$\begin{aligned} \tau (\omega , x)=( \sigma (\omega ), T_{\omega }(x)), \quad \omega \in \varOmega , \ x\in X. \end{aligned}$$
(7)

Each transformation \(T_{\omega }\) induces the corresponding transfer operator \(\mathcal L_{\omega }\) acting on \(L^1(X, m)\) and defined by the following duality relation

$$\begin{aligned} \int _X(\mathcal L_{\omega } \phi )\psi \, \mathrm{d}m=\int _X\phi (\psi \circ T_{\omega })\, \mathrm{d}m, \quad \phi \in L^1(X, m), \ \psi \in L^\infty (X, m). \end{aligned}$$

For each \(n\in \mathbb N\) and \(\omega \in \varOmega \), set

$$\begin{aligned} T_{\omega }^{(n)}=T_{\sigma ^{n-1} \omega } \circ \cdots \circ T_{\omega } \quad \text {and} \quad \mathcal L_{\omega }^{(n)}=\mathcal L_{\sigma ^{n-1} \omega } \circ \cdots \circ \mathcal L_{\omega }. \end{aligned}$$
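For the toy full-branch map \(T(x)=2x \bmod 1\) (an illustration of ours; this particular map is not prescribed by the paper), the duality relation defining the transfer operator can be verified exactly on piecewise-constant functions: the Ulam discretization of \(\mathcal {L}\) computes \(\int _{T^{-1}B}\phi \, \mathrm{d}m\) without error for cell-wise constant \(\phi \).

```python
import numpy as np

N = 64                          # number of uniform cells; N even
rng = np.random.default_rng(1)
phi = rng.normal(size=N)        # piecewise-constant phi on the cells
psi = rng.normal(size=N)        # piecewise-constant psi

# transfer operator of T(x) = 2x mod 1 on piecewise-constant functions
j = np.arange(N)
Lphi = 0.5 * (phi[j // 2] + phi[j // 2 + N // 2])

# left side:  integral of (L phi) psi dm  (m = Lebesgue, cell mass 1/N)
lhs = (Lphi * psi).sum() / N

# right side: integral of phi (psi o T) dm ; T maps cell i onto cells
# 2i mod N and 2i+1 mod N, half the mass of cell i landing in each
i = np.arange(N)
rhs = (phi * 0.5 * (psi[(2 * i) % N] + psi[(2 * i + 1) % N])).sum() / N

print(lhs, rhs)  # equal up to floating-point rounding
```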

Definition 3.1

(Admissible cocycle). We call the transfer operator cocycle \(\mathcal {R}=(\varOmega , \mathcal F, \mathbb {P}, \sigma , \mathcal {B}, \mathcal L)\) admissible if the following conditions hold:

  1. (C1)

    \(\mathcal {R}\) is \(\mathbb P\)-continuous (i.e., \(\mathcal L\) is continuous in \(\omega \) on each of countably many Borel sets whose union is \(\varOmega \));

  2. (C2)

    there exists \(K>0\) such that

    $$\begin{aligned} \Vert \mathcal L_\omega f\Vert _{\mathcal {B}} \le K\Vert f\Vert _{\mathcal {B}}, \quad \text {for every } f\in \mathcal {B}\text { and } \mathbb {P}\text {-a.e. } \omega \in \varOmega . \end{aligned}$$
  3. (C3)

    there exist \(N\in \mathbb N\) and measurable \(\alpha ^N, \beta ^N :\varOmega \rightarrow (0, \infty )\), with \( \int _\varOmega \log \alpha ^N (\omega )\, \mathrm{d}\mathbb P(\omega )<0\), such that for every \(f\in \mathcal {B}\) and \(\mathbb {P}\text {-a.e. } \omega \in \varOmega \),

    $$\begin{aligned} \Vert \mathcal {L}_\omega ^{(N)} f\Vert _{\mathcal {B}} \le \alpha ^N(\omega )\Vert f\Vert _{\mathcal {B}}+\beta ^N(\omega )\Vert f\Vert _1. \end{aligned}$$
  4. (C4)

    there exist \(K', \lambda >0\) such that for every \(n\ge 0\), every \(f\in \mathcal {B}\) with \(\int f\, \mathrm{d}m=0\) and \(\mathbb {P}\text {-a.e. } \omega \in \varOmega \),

    $$\begin{aligned} \Vert \mathcal L_{\omega }^{(n)} (f)\Vert _{\mathcal {B}} \le K'e^{-\lambda n}\Vert f\Vert _{\mathcal {B}}. \end{aligned}$$
  5. (C5)

    there exist \(N\in \mathbb {N}, c>0\) such that for each \(a>0\) and any sufficiently large \(n\in \mathbb N\),

    $$\begin{aligned} {{\,\mathrm{ess\ inf}\,}}\mathcal L_\omega ^{(Nn)} f\ge c \Vert f\Vert _1, \quad \text {for every } f\in C_a\text { and } \mathbb {P}\text {-a.e. } \omega \in \varOmega , \end{aligned}$$

    where \(C_a:=\{ f \in \mathcal {B}: f\ge 0 \text { and } {{\,\mathrm{var}\,}}(f)\le a\int f\, \mathrm{d}m \}.\)

Remark 3.2

We note that we have imposed condition (C1) since in this setting \(\mathcal {B}\) is not separable.

Remark 3.3

We refer to [15, Sect. 2.3.1] for explicit examples of admissible cocycles of transfer operators associated with piecewise expanding maps both in dimension 1 and in higher dimensions.

The following result is established in [15, Lemma 2.9].

Lemma 3.4

An admissible cocycle of transfer operators \(\mathcal R=(\varOmega , \mathcal F, \mathbb P, \sigma , \mathcal {B}, \mathcal {L})\) is quasi-compact. Furthermore, the top Oseledets space is one dimensional. That is, \(\dim Y(\omega )=1\) for \(\mathbb {P}\text {-a.e. } \omega \in \varOmega \).

The following result established in [15, Lemma 2.10] shows that in this context, the top Oseledets space is spanned by the unique random absolutely continuous invariant measure (a.c.i.m. for short). We recall that a random a.c.i.m. is a measurable map \(v^0: \varOmega \times X\rightarrow \mathbb {R}^+\) such that for \(\mathbb {P}\text {-a.e. } \omega \in \varOmega \), \(v^0_\omega := v^0(\omega , \cdot ) \in \mathcal {B}\), \(\int v^0_\omega (x)\, \mathrm{d}m=1\) and

$$\begin{aligned} \mathcal L_\omega v_\omega ^0=v_{\sigma \omega }^0, \quad \text {for } \mathbb {P}\text {-a.e. } \omega \in \varOmega . \end{aligned}$$
(8)

Lemma 3.5

(Existence and uniqueness of a random acim). Let \(\mathcal R=(\varOmega ,\mathcal F,\mathbb P,\sigma ,\mathcal {B},\mathcal L)\) be an admissible cocycle of transfer operators. Then, there exists a unique random absolutely continuous invariant measure for \(\mathcal R\).

For an admissible transfer operator cocycle \(\mathcal {R}\), we let \(\mu \) be the invariant probability measure given by

$$\begin{aligned} \mu (A \times B)=\int _{A\times B} v^0(\omega , x)\, d (\mathbb P \times m)(\omega , x), \quad \text {for } A\in \mathcal F\text { and } B\in \mathcal G, \end{aligned}$$
(9)

where \(v^0\) is the unique random a.c.i.m. for \(\mathcal {R}\) and \(\mathcal {G}\) is the Borel \(\sigma \)-algebra of X. We note that \(\mu \) is \(\tau \)-invariant, because of (8). Furthermore, for each \(G\in L^1(\varOmega \times X, \mu )\) we have that

$$\begin{aligned} \int _{\varOmega \times X} G\, \mathrm{d}\mu =\int _{\varOmega } \int _X G(\omega , x)\, \mathrm{d}\mu _\omega (x)\, \mathrm{d}\mathbb P(\omega ), \end{aligned}$$

where \(\mu _\omega \) is a measure on X given by \(\mathrm{d}\mu _\omega =v^0(\omega , \cdot )\mathrm{d}m\).

Let us recall the following result established in [15, Lemma 2.11].

Lemma 3.6

The unique random a.c.i.m. \(v^0\) of an admissible cocycle of transfer operators satisfies the following:

  1. 1.
    $$\begin{aligned} {{\,\mathrm{ess\ sup}\,}}_{\omega \in \varOmega } \Vert v_\omega ^0\Vert _{\mathcal {B}} <\infty ; \end{aligned}$$
    (10)
  2. 2.

    there exists \(c>0\) such that

    $$\begin{aligned} {{\,\mathrm{ess\ inf}\,}}v_\omega ^0 (\cdot )\ge c, \quad \text {for } \mathbb {P}\text {-a.e. } \omega \in \varOmega ; \end{aligned}$$
    (11)
  3. 3.

    there exist \(K>0\) and \(\rho \in (0, 1)\) such that

    $$\begin{aligned} \bigg |\int _X \mathcal L_\omega ^{(n)}(f v_\omega ^0)h\, \mathrm{d}m -\int _X f \, \mathrm{d}\mu _\omega \cdot \int _X h \, \mathrm{d}\mu _{\sigma ^n \omega } \bigg |\le K\rho ^n \Vert h\Vert _{L^\infty } \cdot \Vert f \Vert _{\mathcal {B}}, \end{aligned}$$
    (12)

    for \(n\ge 0\), \(h \in L^\infty (X, m)\), \(f \in \mathcal {B}\) and \(\mathbb {P}\text {-a.e. } \omega \in \varOmega \).
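The exponential decay in (12) can be observed numerically in the simplest deterministic special case (all \(T_\omega \) equal; a toy illustration of ours): for the doubling map \(T(x)=2x \bmod 1\), Lebesgue measure is the a.c.i.m. (\(v_\omega ^0\equiv 1\)), and for the centered BV observable \(f(x)=x-1/2\) the correlation integrals decay exactly like \(2^{-n}\), i.e., (12) holds with \(\rho =1/2\).

```python
import numpy as np

N = 4096
x = (np.arange(N) + 0.5) / N
f = x - 0.5                     # centered BV observable: integral f dm = 0

def L(v):
    # Ulam transfer operator of the doubling map T(x) = 2x mod 1
    j = np.arange(N)
    return 0.5 * (v[j // 2] + v[j // 2 + N // 2])

# correlations C_n = integral of L^n(f) * f dm; for this f, C_n ~ 2^{-n}/12
C = []
v = f.copy()
for n in range(9):
    C.append((v * f).mean())
    v = L(v)

print([round(c, 6) for c in C])
```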

3.3 The Observable

Let us now introduce a class of observables to which our limit theorems will apply (although in some cases we will require additional assumptions).

Definition 3.7

(Observable). An observable is a measurable map \(g :\varOmega \times X \rightarrow \mathbb R^d\), \(g=(g^1, \ldots , g^d)\) satisfying the following properties:

  • Regularity:

    $$\begin{aligned} \Vert g(\omega , x)\Vert _{L^\infty (\varOmega \times X)}=: M<\infty \quad \text {and} \quad {{\,\mathrm{ess\ sup}\,}}_{\omega \in \varOmega } {{\,\mathrm{var}\,}}(g_\omega ) <\infty , \end{aligned}$$
    (13)

    where \(g_\omega =g (\omega , \cdot )\) and \({{\,\mathrm{var}\,}}(g_\omega ):=\max _{1\le i\le d}{{\,\mathrm{var}\,}}(g_\omega ^i)\), \(\omega \in \varOmega \).

  • Fiberwise centering:

    $$\begin{aligned} \int _X g^i(\omega , x) \, \mathrm{d}\mu _\omega (x)= \int _X g^i(\omega , x)v^0_\omega (x) \, \mathrm{d}m(x)=0 \quad \text {for } \mathbb P\text {-a.e. } \omega \in \varOmega , 1\le i\le d, \end{aligned}$$
    (14)

    where \(v^0\) is the density of the unique random a.c.i.m., satisfying (8).

Remark 3.8

The observables considered in [15] are scalar-valued, i.e., they correspond to the case \(d=1\).

We also introduce the corresponding random Birkhoff sums. More precisely, for \(n\in \mathbb {N}\) and \((\omega , x)\in \varOmega \times X\), set

$$\begin{aligned} S_n g(\omega , x):=\sum _{i=0}^{n-1}g(\sigma ^i \omega , T_\omega ^{(i)}(x)). \end{aligned}$$

3.4 Basic Properties of Twisted Transfer Operator Cocycles

Throughout this section, \(\mathcal {R}=(\varOmega , \mathcal {F}, \mathbb {P}, \sigma , \mathcal {B}, \mathcal {L})\) will denote an admissible transfer operator cocycle. Furthermore, by \(x\cdot y\) we will denote the scalar product of \(x, y\in \mathbb C^d\) and \(|x|\) will denote the norm of x.

For an observable g as in Definition 3.7 and \(\theta \in \mathbb C^d\), the twisted transfer operator cocycle (or simply a twisted cocycle) \(\mathcal {R}^\theta \) is defined as \(\mathcal {R}^\theta =(\varOmega , \mathcal {F}, \mathbb {P}, \sigma , \mathcal {B}, \mathcal {L}^\theta )\), where for each \(\omega \in \varOmega \), we define

$$\begin{aligned} \mathcal L_\omega ^{\theta }(f)=\mathcal L_\omega (e^{\theta \cdot g(\omega , \cdot )}f), \quad f\in \mathcal {B}. \end{aligned}$$
(15)

For convenience of notation, we will also use \( \mathcal {L}^\theta \) to denote the cocycle \(\mathcal {R}^\theta \). For each \(\theta \in \mathbb {C}^d\), set \(\varLambda (\theta ):=\varLambda (\mathcal {R}^\theta )\) and

$$\begin{aligned} \mathcal L_\omega ^{\theta , \, (n)}=\mathcal {L}^{\theta }_{\sigma ^{n-1}\omega }\circ \cdots \circ \mathcal {L}^{\theta }_\omega , \quad \text {for } \omega \in \varOmega \text { and } n\in \mathbb {N}. \end{aligned}$$
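As a hedged numerical illustration (a toy cocycle of ours: random compositions of the Lebesgue-preserving maps \(x\mapsto 2x \bmod 1\) and \(x\mapsto 3x \bmod 1\), so that \(v_\omega ^0\equiv 1\), with Ulam-discretized transfer operators and \(d=1\)), one can check along a fixed \(\omega \)-word that \(\int _X \mathcal {L}^{\theta , \, (n)}_\omega (f)\, \mathrm{d}m=\int _X e^{\theta \cdot S_n g(\omega , \cdot )}f\, \mathrm{d}m\), which is the exponential-weighting identity underlying all moment-generating computations below.

```python
import numpy as np

N = 6144                          # number of cells; divisible by 2 and 3
x = (np.arange(N) + 0.5) / N
g = np.cos(2 * np.pi * x)         # observable, centered for Lebesgue
j = np.arange(N)

def L2(f):                        # Ulam operator of T(x) = 2x mod 1
    return 0.5 * (f[j // 2] + f[j // 2 + N // 2])

def L3(f):                        # Ulam operator of T(x) = 3x mod 1
    return (f[j // 3] + f[j // 3 + N // 3] + f[j // 3 + 2 * N // 3]) / 3.0

word = [2, 3, 2, 2, 3]            # one sampled omega-word, n = 5
theta = 0.3

# cocycle side: L_omega^{theta,(n)} 1, with L^theta f = L(e^{theta g} f)
f = np.ones(N)
for m in word:
    f = (L2 if m == 2 else L3)(np.exp(theta * g) * f)
lhs = f.mean()                    # integral of L_omega^{theta,(n)} 1 dm

# direct side: integral of exp(theta * S_n g(omega, .)) dm by quadrature
M = 300_000
y = (np.arange(M) + 0.5) / M
S = np.zeros(M)
for m in word:
    S += np.cos(2 * np.pi * y)
    y = (m * y) % 1.0
rhs = np.exp(theta * S).mean()

print(lhs, rhs)  # agree up to discretization error
```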

Lemma 3.9

For \(\mathbb P\)-a.e. \(\omega \in \varOmega \) and \(\theta \in \mathbb C^d\),

$$\begin{aligned} {{\,\mathrm{var}\,}}(e^{\theta \cdot g(\omega , \cdot )}) \le |\theta |e^{|\theta |M}{{\,\mathrm{var}\,}}(g(\omega , \cdot )). \end{aligned}$$

Proof

The conclusion of the lemma follows directly from (V9) applied to \(f=g(\omega , \cdot )\) and h given by \(h(z)=e^{\theta \cdot z}\), taking into account (13). \(\square \)

Lemma 3.10

There exists a continuous function \(K:\mathbb C^d \rightarrow (0, \infty )\) such that

$$\begin{aligned} \Vert \mathcal L_\omega ^\theta h\Vert _{\mathcal {B}} \le K(\theta )\Vert h\Vert _{\mathcal {B}}, \quad \text {for } h\in \mathcal {B}, \theta \in \mathbb C^d \text { and } \mathbb {P}\text {-a.e. } \omega \in \varOmega . \end{aligned}$$
(16)

Proof

It follows from (13) that for any \(h\in \mathcal {B}\), \(|e^{\theta \cdot g(\omega , \cdot )}h|_1 \le e^{|\theta |M}|h|_1\). Furthermore, (V8) implies that

$$\begin{aligned} {{\,\mathrm{var}\,}}( e^{\theta \cdot g(\omega , \cdot )}h )\le \Vert e^{\theta \cdot g(\omega , \cdot )}\Vert _{L^\infty }\cdot {{\,\mathrm{var}\,}}(h)+{{\,\mathrm{var}\,}}(e^{\theta \cdot g(\omega , \cdot )})\cdot \Vert h\Vert _{L^\infty }, \end{aligned}$$

which together with (V3) and Lemma 3.9 yields that

$$\begin{aligned} \Vert e^{\theta \cdot g(\omega , \cdot )}h \Vert _{\mathcal {B}}&={{\,\mathrm{var}\,}}(e^{\theta \cdot g(\omega , \cdot )}h) +|e^{\theta \cdot g(\omega , \cdot )}h |_1 \\&\le e^{|\theta |M}\Vert h\Vert _{\mathcal {B}}+|\theta |e^{|\theta |M} {{\,\mathrm{ess\ sup}\,}}_{\omega \in \varOmega }{{\,\mathrm{var}\,}}(g(\omega , \cdot )) \Vert h\Vert _{L^\infty } \\&\le (e^{|\theta |M}+C_{{{\,\mathrm{var}\,}}}|\theta |e^{|\theta |M} {{\,\mathrm{ess\ sup}\,}}_{\omega \in \varOmega }{{\,\mathrm{var}\,}}(g(\omega , \cdot )))\Vert h\Vert _{\mathcal {B}}. \end{aligned}$$

Thus, denoting by K the constant from (C2), we conclude that (16) holds with

$$\begin{aligned} K(\theta )=K\left( e^{|\theta |M}+C_{{{\,\mathrm{var}\,}}}|\theta |e^{|\theta |M} {{\,\mathrm{ess\ sup}\,}}_{\omega \in \varOmega }{{\,\mathrm{var}\,}}(g(\omega , \cdot ))\right) . \end{aligned}$$

\(\square \)

Lemma 3.11

The following statements hold:

  1.

    for every \(\phi \in \mathcal {B}^*, f \in \mathcal {B}\), \(\omega \in \varOmega \), \(\theta \in \mathbb C^d\) and \(n\in \mathbb {N}\) we have that

    $$\begin{aligned} \mathcal {L}_\omega ^{\theta , (n)}(f)=\mathcal {L}_\omega ^{(n)}(e^{\theta \cdot S_{n}g(\omega , \cdot )}f), \quad \text {and} \quad \mathcal {L}_\omega ^{\theta *,(n)}(\phi ) = e^{\theta \cdot S_ng(\omega , \cdot )} \mathcal {L}_\omega ^{*(n)}(\phi ), \end{aligned}$$
    (17)

    where \((e^{\theta \cdot S_ng(\omega , \cdot )} \phi ) (f):= \phi (e^{\theta \cdot S_ng(\omega , \cdot )} f)\);

  2.

    for every \(f\in \mathcal {B}\), \(\omega \in \varOmega \) and \(n\in \mathbb {N}\) we have that

    $$\begin{aligned} \int _X \mathcal {L}^{\theta , \, (n)}_\omega (f)\ \mathrm{d}m=\int _X e^{\theta \cdot S_ng(\omega , \cdot )}f\ \mathrm{d}m. \end{aligned}$$
    (18)

Proof

We establish the first identity in (17) by induction on n. The case \(n=1\) follows from the definition of \(\mathcal {L}_\omega ^{\theta }\). We recall that for every \(f, \tilde{f}\in \mathcal {B}\),

$$\begin{aligned} \mathcal {L}_\omega ^{(n)}((\tilde{f} \circ T_\omega ^{(n)}) \cdot f) = \tilde{f}\cdot \mathcal {L}_\omega ^{(n)}( f). \end{aligned}$$
(19)

Let us assume that the claim holds for some n. Then, using (19) we have that

$$\begin{aligned} \mathcal {L}_{\omega }^{(n+1)} (e^{\theta \cdot S_{n+1}g(\omega , \cdot )}f)&= \mathcal {L}_{\sigma ^n\omega } \big (\mathcal {L}_{\omega }^{(n)} ( e^{\theta \cdot g(\sigma ^{n} \omega , \cdot )\circ T_\omega ^{(n)}} e^{\theta \cdot S_{n}g(\omega , \cdot )}f) \big ) \\&= \mathcal {L}_{\sigma ^n\omega } \big ( e^{\theta \cdot g(\sigma ^{n} \omega , \cdot )} \mathcal {L}_{\omega }^{(n)} ( e^{\theta \cdot S_{n}g(\omega , \cdot )}f) \big )\\&= \mathcal {L}_{\sigma ^n\omega }^{\theta }\mathcal {L}_\omega ^{\theta , (n)}(f) = \mathcal {L}_\omega ^{\theta , (n+1)}(f). \end{aligned}$$

The second identity in (17) follows directly from duality. Finally, (18) follows by integrating the first equality in (17). \(\square \)
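For completeness, the duality step can be spelled out as follows (assuming, as the notation suggests, that \(\mathcal {L}_\omega ^{\theta *,(n)}\) and \(\mathcal {L}_\omega ^{*(n)}\) denote the adjoints of \(\mathcal {L}_\omega ^{\theta ,(n)}\) and \(\mathcal {L}_\omega ^{(n)}\), respectively): for \(\phi \in \mathcal {B}^*\) and \(f\in \mathcal {B}\), the first identity in (17) gives

$$\begin{aligned} \mathcal {L}_\omega ^{\theta *,(n)}(\phi )(f)&=\phi (\mathcal {L}_\omega ^{\theta ,(n)}f)=\phi (\mathcal {L}_\omega ^{(n)}(e^{\theta \cdot S_ng(\omega , \cdot )}f)) \\&=\mathcal {L}_\omega ^{*(n)}(\phi )(e^{\theta \cdot S_ng(\omega , \cdot )}f)=(e^{\theta \cdot S_ng(\omega , \cdot )}\mathcal {L}_\omega ^{*(n)}(\phi ))(f), \end{aligned}$$

which is precisely the second identity in (17).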

3.5 An Auxiliary Existence and Regularity Result

We now recall the construction of Banach spaces introduced in [15] that play an important role in the spectral analysis of the twisted cocycle.

Let \(\mathcal {S}'\) denote the set of all measurable functions \(\mathcal {V}:\varOmega \times X\rightarrow \mathbb C\) such that:

  • for \(\mathbb P\)-a.e. \(\omega \in \varOmega \), we have that \(\mathcal {V}(\omega , \cdot )\in \mathcal {B}\);

  • $$\begin{aligned} {{\,\mathrm{ess\ sup}\,}}_{\omega \in \varOmega } \Vert \mathcal {V}(\omega , \cdot )\Vert _{\mathcal {B}}<\infty ; \end{aligned}$$

Then, \(\mathcal {S}'\) is a Banach space with respect to the norm

$$\begin{aligned} \Vert \mathcal {V}\Vert _{\infty }:={{\,\mathrm{ess\ sup}\,}}_{\omega \in \varOmega }\Vert \mathcal {V}(\omega , \cdot )\Vert _{\mathcal {B}}. \end{aligned}$$

Furthermore, let \(\mathcal {S}\) consist of all \(\mathcal {V}\in \mathcal {S}'\) such that for \(\mathbb P\)-a.e. \(\omega \in \varOmega \),

$$\begin{aligned} \int _X \mathcal {V} (\omega , \cdot )\, \mathrm{d}m=0. \end{aligned}$$

Then, \(\mathcal {S}\) is a closed subspace of \(\mathcal {S}'\) and therefore also a Banach space.

For \(\theta \in \mathbb {C}^d\) and \(\mathcal {W} \in \mathcal {S}\), set

$$\begin{aligned} F(\theta , \mathcal {W})(\omega , \cdot )= \frac{\mathcal {L}_{\sigma ^{-1}\omega }^\theta (\mathcal {W}(\sigma ^{-1}\omega , \cdot ) + v_{\sigma ^{-1}\omega }^0(\cdot ))}{\int \mathcal {L}_{\sigma ^{-1}\omega }^\theta (\mathcal {W}(\sigma ^{-1}\omega , \cdot ) + v_{\sigma ^{-1}\omega }^0(\cdot )) \mathrm{d}m} - \mathcal {W}(\omega ,\cdot ) - v_\omega ^0(\cdot ). \end{aligned}$$
(20)

Lemma 3.12

There exist \(\epsilon , R>0\) such that \(F :\mathcal {D} \rightarrow \mathcal {S}\) is a well-defined analytic map on \(\mathcal {D}:=\{ \theta \in \mathbb {C}^d : |\theta |<\epsilon \} \times B_{\mathcal {S}}(0,R)\), where \(B_{\mathcal {S}}(0,R)\) denotes the ball of radius R in \(\mathcal {S}\) centered at 0.

Proof

Let \(G :B_{\mathbb C^d}(0, 1) \times \mathcal S \rightarrow \mathcal S'\) and \(H :B_{\mathbb C^d} (0, 1) \times \mathcal {S} \rightarrow L^\infty (\varOmega )\) be defined by (73), where \(B_{\mathbb {C}^d}(0,1)\) denotes the unit ball in \(\mathbb {C}^d\). It follows from (10) and Lemma 3.10 that G and H are well defined. Furthermore, by arguing as in [17, Lemma 5.1] we have that G and H are analytic.

Moreover, since \(H(0,0)(\omega )=1\) for \(\omega \in \varOmega \), we have that

$$\begin{aligned} |H(\theta , \mathcal {W})(\omega )|\ge 1-|H(0,0)(\omega )-H(\theta , \mathcal {W})(\omega )|\ge 1-\Vert H(0,0)-H(\theta , \mathcal {W})\Vert _{L^\infty }, \end{aligned}$$
(21)

for \(\mathbb {P}\text {-a.e. } \omega \in \varOmega \). Hence, the continuity of H implies that \(\Vert H(0,0)-H(\theta , \mathcal {W})\Vert _{L^\infty } \le 1/2\) for all \((\theta , \mathcal {W})\) in a neighborhood of \((0,0)\in \mathbb C^d \times \mathcal {S}\). We observe that it follows from (21) that in such a neighborhood,

$$\begin{aligned} {{\,\mathrm{ess\ inf}\,}}_{\omega } |H(\theta , \mathcal {W})(\omega )|\ge 1/2. \end{aligned}$$

The above inequality together with (10) yields the desired conclusion. \(\square \)

The proof of the following result follows closely the proof of [15, Lemma 3.5].

Lemma 3.13

Let \(\mathcal {D}=\{ \theta \in \mathbb {C}^d : |\theta |<\epsilon \} \times B_{\mathcal {S}}(0,R)\) be as in Lemma 3.12. Then, after shrinking \(\epsilon >0\) if necessary, there exists an analytic map \(O:\{ \theta \in \mathbb {C}^d: |\theta |<\epsilon \} \rightarrow \mathcal {S}\) such that

$$\begin{aligned} F(\theta , O(\theta ))=0. \end{aligned}$$
(22)

Proof

We notice that \(F(0,0)=0\). Moreover, Proposition 6.4 implies that

$$\begin{aligned} (D_{d+1}F(0,0) \mathcal X)(\omega , \cdot )=\mathcal L_{\sigma ^{-1} \omega }(\mathcal X(\sigma ^{-1}\omega , \cdot ))-\mathcal X(\omega , \cdot ) \quad \text {for } \omega \in \varOmega \text { and } \mathcal X\in \mathcal S, \end{aligned}$$

where \(D_{d+1}F\) denotes the derivative of F with respect to \(\mathcal {W}\). We now prove that \(D_{d+1} F(0,0)\) is a bijective operator.

For injectivity, observe that if \(D_{d+1}F(0, 0)\mathcal X=0\) for some nonzero \(\mathcal {X}\in \mathcal {S}\), then \(\mathcal {L}_\omega \mathcal {X}_\omega = \mathcal {X}_{\sigma \omega }\) for \(\mathbb {P}\text {-a.e. } \omega \in \varOmega \). Notice that \(\mathcal {X}_\omega \notin \langle v^0_\omega \rangle \) because \(\int \mathcal {X}_\omega (\cdot ) \mathrm{d}m=0\) and \( \mathcal {X}_\omega \ne 0\). This contradicts the one-dimensionality of the top Oseledets space of the cocycle \(\mathcal L\), given by Lemma 3.4. Therefore, \(D_{d+1}F(0,0)\) is injective. To prove surjectivity, take \(\mathcal {X}\in \mathcal {S}\) and let

$$\begin{aligned} \tilde{\mathcal {X}}(\omega , \cdot ):= - \sum _{j=0}^\infty \mathcal {L}_{\sigma ^{-j}\omega }^{(j)} \mathcal {X}(\sigma ^{-j}\omega , \cdot ). \end{aligned}$$
(23)

It follows from (C4) that \(\tilde{\mathcal {X}} \in \mathcal S\) and it is easy to verify that \(D_{d+1} F(0,0)\tilde{\mathcal {X}}=\mathcal {X}\). Thus, \(D_{d+1}F(0,0)\) is surjective.
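The verification that \(D_{d+1} F(0,0)\tilde{\mathcal {X}}=\mathcal {X}\) is a telescoping computation; we sketch it here, with the convention that \(\mathcal {L}^{(0)}_\omega \) is the identity:

$$\begin{aligned} (D_{d+1}F(0,0)\tilde{\mathcal {X}})(\omega , \cdot )&=\mathcal L_{\sigma ^{-1}\omega }(\tilde{\mathcal {X}}(\sigma ^{-1}\omega , \cdot ))-\tilde{\mathcal {X}}(\omega , \cdot ) \\&=-\sum _{j=0}^\infty \mathcal {L}_{\sigma ^{-(j+1)}\omega }^{(j+1)} \mathcal {X}(\sigma ^{-(j+1)}\omega , \cdot )+\sum _{j=0}^\infty \mathcal {L}_{\sigma ^{-j}\omega }^{(j)} \mathcal {X}(\sigma ^{-j}\omega , \cdot ) \\&=\mathcal {X}(\omega , \cdot ), \end{aligned}$$

since the terms with \(j\ge 1\) in the two sums cancel in pairs and the \(j=0\) term of the second sum is \(\mathcal {X}(\omega , \cdot )\).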

Combining the previous arguments, we conclude that \(D_{d+1}F(0,0)\) is bijective. The conclusion of the lemma now follows directly from the complex analytic implicit function theorem in Banach spaces (see, for instance, the appendix in [49]). \(\square \)

3.6 On the Top Lyapunov Exponent for the Twisted Cocycle

Let \(\varLambda (\theta )\) be the largest Lyapunov exponent associated with the twisted cocycle \(\mathcal {L}^\theta \). Let \(0<\epsilon <1\) and \(O(\theta )\) be as in Lemma 3.13. Let

$$\begin{aligned} v_\omega ^\theta (\cdot ):= v_\omega ^0(\cdot ) +O(\theta )(\omega ,\cdot ). \end{aligned}$$
(24)

We notice that \(\int v_\omega ^\theta (\cdot )\ \mathrm{d}m =1\) and by Lemma 3.13, \(\theta \mapsto v^\theta \) is analytic. Let us define

$$\begin{aligned} \hat{\varLambda } (\theta ) := \int _\varOmega \log \Big |\int _X e^{\theta \cdot g(\omega , x)} v_\omega ^\theta (x) \,d m(x) \Big |\, \mathrm{d}\mathbb {P}(\omega ), \end{aligned}$$
(25)

and

$$\begin{aligned} \lambda _\omega ^\theta := \int _X e^{\theta \cdot g(\omega , x)} v_\omega ^\theta (x) \,d m(x) = \int _X \mathcal {L}_\omega ^\theta v_\omega ^\theta (x) \,d m(x), \end{aligned}$$
(26)

where the last identity follows from (18).

The proof of the following result is identical to the proof of [15, Lemma 3.8].

Lemma 3.14

For every \(\theta \in B_{\mathbb {C}^d}(0,\epsilon ):= \{ \theta \in \mathbb {C}^d: |\theta |<\epsilon \}\), \( \hat{\varLambda } (\theta )\le \varLambda (\theta )\).

The proof of the following result can be established by repeating the arguments in the proof of [15, Lemma 3.9].

Lemma 3.15

We have that \(\hat{\varLambda }\) is differentiable on a neighborhood of 0, and for each \(i\in \{1, \ldots , d\}\), we have that

$$\begin{aligned} D_i\hat{\varLambda } (\theta )= \mathfrak {R}\Bigg ( \int _\varOmega \frac{ \overline{\lambda _\omega ^\theta } ( \int _X g^i(\omega , \cdot )e^{\theta \cdot g(\omega , \cdot )}v_\omega ^\theta (\cdot )\, \mathrm{d}m+\int _X e^{\theta \cdot g(\omega , \cdot )}(D_i O(\theta ))_\omega (\cdot )\, \mathrm{d}m )}{|\lambda _\omega ^\theta |^2}\, \mathrm{d}\mathbb {P}(\omega ) \Bigg ), \end{aligned}$$

where \(\mathfrak {R}(z)\) denotes the real part of a complex number z and \(\overline{z}\) the complex conjugate of z. Here, \(D_i\) denotes the derivative with respect to \(\theta _i\), where \(\theta =(\theta _1, \ldots , \theta _d)\).

Lemma 3.16

For \(i\in \{1, \ldots , d\}\), we have that \(D_i \hat{\varLambda }(0)=0\).

Proof

Since \(\lambda _\omega ^0=1\), it follows from the previous lemma that

$$\begin{aligned} D_i \hat{\varLambda }(0)=\mathfrak {R}\Bigg ( \int _\varOmega \int _X \left( g^i(\omega , \cdot )v_\omega ^0(\cdot )+ (D_i O(0))_\omega (\cdot ) \right) \, \mathrm{d}m \, \mathrm{d}\mathbb {P}(\omega ) \Bigg ). \end{aligned}$$
(27)

On the other hand, it follows from the implicit function theorem that

$$\begin{aligned} D_i O(0)=-\,D_{d+1}F(0,0)^{-1}( D_iF(0,0)). \end{aligned}$$

It was proved in Lemma 3.13 that \(D_{d+1}F(0,0) :\mathcal S \rightarrow \mathcal S\) is bijective. Thus, \(D_{d+1}F(0,0)^{-1} :\mathcal S \rightarrow \mathcal S\) is well defined and therefore \(D_iO(0) \in \mathcal S\), which implies that

$$\begin{aligned} \int _X D_iO(0)_\omega \, \mathrm{d}m=0 \text { for } \mathbb {P}\text {-a.e. } \omega \in \varOmega . \end{aligned}$$
(28)

The conclusion of the lemma now follows directly from (14), (27) and (28). \(\square \)

The proofs of the following two results are identical to the proofs of [15, Theorem 3.12] and [15, Corollary 3.14], respectively.

Theorem 3.17

(Quasi-compactness of twisted cocycles, \(\theta \) near 0). Assume that the cocycle \(\mathcal {R}=(\varOmega , \mathcal F, \mathbb {P}, \sigma , \mathcal {B}, \mathcal L)\) is admissible. For \(\theta \in \mathbb {C}^d\) sufficiently close to 0, we have that the twisted cocycle \(\mathcal L^\theta \) is quasi-compact. Furthermore, for such \(\theta \), the top Oseledets space of \(\mathcal L^\theta \) is one dimensional. That is, \(\dim Y^\theta (\omega )=1\) for \(\mathbb {P}\text {-a.e. } \omega \in \varOmega \).

Lemma 3.18

For \(\theta \in \mathbb {C}^d\) near 0, we have that \(\varLambda (\theta )=\hat{\varLambda }(\theta )\). In particular, \(\varLambda (\theta )\) is differentiable near 0 and \(D_i \varLambda (0)=0\), for every \(i\in \{1, \ldots , d\}\).

By arguing as in the proof of [18, Proposition 2], we have that there exists a positive semi-definite \(d\times d\) matrix \(\varSigma ^2\) such that for \(\mathbb P\)-a.e. \(\omega \in \varOmega \), we have that

$$\begin{aligned} \varSigma ^2=\lim _{n\rightarrow \infty }\frac{1}{n} \text {Cov}_\omega (S_n g(\omega , \cdot )), \end{aligned}$$
(29)

where \(\text {Cov}_\omega \) denotes the covariance with respect to the probability measure \(\mu _\omega \). Moreover, the entries \(\varSigma ^2_{ij}\) of \(\varSigma ^2\) are given by

$$\begin{aligned} \varSigma ^2_{ij}&=\int _{\varOmega \times X}g^i(\omega , x)g^j (\omega , x)\, \mathrm{d}\mu (\omega , x)+\sum _{n=1}^\infty \int _{\varOmega \times X} g^i (\omega , x)g^j (\tau ^n (\omega , x))\, \mathrm{d}\mu (\omega , x)\nonumber \\&\quad +\sum _{n=1}^\infty \int _{\varOmega \times X} g^j (\omega , x)g^i (\tau ^n (\omega , x))\, \mathrm{d}\mu (\omega , x). \end{aligned}$$
(30)

We also recall that \(\varSigma ^2\) fails to be positive definite if and only if there exist \(v\in \mathbb {R}^d\), \(v\ne 0\), and \(r\in L^2_{\mu }(\varOmega \times X)\) such that

$$\begin{aligned} v\cdot g=r-r\circ \tau \quad \mu \text {-a.e.} \end{aligned}$$

Lemma 3.19

We have that \(\varLambda \) is of class \(C^2\) on a neighborhood of 0 and \(D^2 \varLambda (0)=\varSigma ^2\), where \(D^2\varLambda (0)\) denotes the Hessian of \(\varLambda \) at 0.

Proof

By repeating the arguments in the proof of [15, Lemma 3.15], one can show that \(\varLambda \) is of class \(C^2\) and that

$$\begin{aligned} D_{ij} \varLambda (\theta )=\mathfrak {R}\bigg ( \int _\varOmega \bigg ( \frac{D_{ij}\lambda _\omega ^\theta }{\lambda _\omega ^\theta }-\frac{D_i \lambda _\omega ^\theta D_j \lambda _\omega ^\theta }{(\lambda _\omega ^\theta )^2} \bigg ) \, \mathrm{d}\mathbb P(\omega ) \bigg ), \end{aligned}$$

where \(D_i\lambda _\omega ^\theta \) denotes the derivative of \(\theta \mapsto \lambda _\omega ^\theta \) with respect to \(\theta _i\) and \(D_{ij}\lambda _\omega ^\theta \) is the derivative of \(\theta \mapsto D_j \lambda _\omega ^\theta \) with respect to \(\theta _i\). Moreover, using (26), the same arguments as in the proof of [15, Lemma 3.15] yield that

$$\begin{aligned} D_i \lambda _\omega ^\theta =\int _X (g^i (\omega , x)e^{\theta \cdot g(\omega , x)}v_\omega ^\theta (x)+e^{\theta \cdot g(\omega , x) }(D_i O(\theta ))_\omega (x))\, \mathrm{d}m(x) \end{aligned}$$

and

$$\begin{aligned} D_{ij}\lambda _\omega ^\theta&=\int _X(g^i (\omega ,x)g^j (\omega , x)e^{\theta \cdot g(\omega , x)}v_\omega ^\theta (x) +g^j (\omega , x)e^{\theta \cdot g(\omega , x)}(D_i O(\theta ))_\omega (x))\, \mathrm{d}m(x) \\&\quad +\int _X (g^i (\omega , x)e^{\theta \cdot g(\omega , x)}(D_j O(\theta ))_\omega (x)+e^{\theta \cdot g(\omega , x)} (D_{ij}O(\theta ))_\omega (x))\, \mathrm{d}m(x). \end{aligned}$$

Since \(D_{ij}O(0)\in \mathcal {S}\) for \(i, j\in \{1, \ldots , d\}\), we have that

$$\begin{aligned} \int _X (D_{ij}O(0))_\omega \, \mathrm{d}m=0, \quad \text {for } \mathbb P\text {-a.e. } \omega \in \varOmega \text { and } i, j\in \{1, \ldots , d\}. \end{aligned}$$
(31)

From (14) and (28), we conclude that \(D_i \lambda _\omega ^\theta |_{\theta =0}=0\) and

$$\begin{aligned} D_{ij}\lambda _\omega ^\theta |_{\theta =0}&=\int _X g^i (\omega ,x)g^j (\omega , x)\, \mathrm{d}\mu _\omega (x) \\&\quad +\int _X (g^j (\omega , x) (D_i O(0))_\omega (x)+g^i(\omega , x)(D_j O(0))_\omega (x))\, \mathrm{d}m(x). \end{aligned}$$

Hence,

$$\begin{aligned} D_{ij} \varLambda (0)&= \mathfrak {R}\bigg (\int _{\varOmega \times X} g^i (\omega ,x)g^j (\omega , x)\, \mathrm{d}\mu (\omega , x)\\&\quad +\int _{\varOmega } \int _X g^j (\omega , x) (D_i O(0))_\omega (x)\, \mathrm{d}m(x)\, \mathrm{d}\mathbb P(\omega ) \\&\quad +\int _{\varOmega } \int _X g^i (\omega , x) (D_j O(0))_\omega (x)\, \mathrm{d}m(x)\, \mathrm{d}\mathbb P(\omega ) \bigg ). \end{aligned}$$

On the other hand, by the implicit function theorem, we have that

$$\begin{aligned} D_i O(0)_\omega =-(D_{d+1} F(0,0)^{-1}(D_i F(0,0)))_\omega . \end{aligned}$$

Furthermore, (23) implies that

$$\begin{aligned} (D_{d+1}F(0,0)^{-1}\mathcal W)_\omega =-\sum _{n=0}^\infty \mathcal L_{\sigma ^{-n}\omega }^{(n)} (\mathcal {W}_{\sigma ^{-n} \omega }), \end{aligned}$$

for each \(\mathcal {W}\in \mathcal {S}\). Hence, it follows from Proposition 6.4 that

$$\begin{aligned} D_i O(0)_\omega =\sum _{n=1}^\infty \mathcal L_{\sigma ^{-n} \omega }^{(n)} (g^i(\sigma ^{-n} \omega , \cdot ) v_{\sigma ^{-n} \omega }^0(\cdot )). \end{aligned}$$

Consequently, since \(\sigma \) preserves \(\mathbb P\), we have that

$$\begin{aligned}&\int _{\varOmega } \int g^j (\omega , x) (D_i O(0))_\omega (x)\, \mathrm{d}m(x)\, \mathrm{d}\mathbb P(\omega ) \\&\quad =\sum _{n=1}^\infty \int _{\varOmega } \int _X g^j (\omega , x)\mathcal L_{\sigma ^{-n} \omega }^{(n)} (g^i(\sigma ^{-n} \omega , \cdot ) v_{\sigma ^{-n} \omega }^0)\, \mathrm{d}m(x) \, \mathrm{d}\mathbb P(\omega ) \\&\quad =\sum _{n=1}^\infty \int _{\varOmega } \int _X g^j (\omega , T_{\sigma ^{-n} \omega }^{(n)}x)g^i (\sigma ^{-n} \omega , x)\, d \mu _{\sigma ^{-n} \omega } (x)\, \mathrm{d}\mathbb P(\omega ) \\&\quad =\sum _{n=1}^\infty \int _{\varOmega } \int _X g^j (\sigma ^n \omega , T_{ \omega }^{(n)}x)g^i ( \omega , x)\, d \mu _{ \omega } (x)\, \mathrm{d}\mathbb P(\omega ) \\&\quad =\sum _{n=1}^\infty \int _{\varOmega \times X} g^i(\omega , x)g^j (\tau ^n (\omega , x))\, \mathrm{d}\mu (\omega , x). \end{aligned}$$

Thus, \(D_{ij}\varLambda (0)=\varSigma _{ij}^2\) and the conclusion of the lemma follows. \(\square \)

4 Limit Theorems

In this section, we establish the main results of our paper. More precisely, we prove a number of limit laws for broad classes of random piecewise dynamics and for vector-valued observables. In particular, we prove the large deviations principle, the central limit theorem and the local limit theorem, thus extending the main results in [15] from scalar to vector-valued observables. In addition, we prove a number of limit laws that have not been discussed earlier. Namely, we establish the moderate deviations principle, concentration inequalities, self-normalized Berry–Esseen bounds as well as Edgeworth and large deviations (LD) expansions.

4.1 Choice of Bases for Top Oseledets Spaces \(Y_\omega ^\theta \) and \(Y_\omega ^{*\theta }\)

We recall that \(Y_\omega ^\theta \) and \(Y_\omega ^{*\theta }\) are top Oseledets subspaces for twisted and adjoint twisted cocycle, \(\mathcal {L}^\theta \) and \(\mathcal {L}^{\theta *}\), respectively. The Oseledets decomposition for these cocycles can be written in the form

$$\begin{aligned} \mathcal {B}=Y^\theta _\omega \oplus H^\theta _\omega \quad \text { and } \quad \mathcal {B}^* = Y^{*\,\theta }_\omega \oplus H^{*\,\theta }_\omega , \end{aligned}$$
(32)

where \(H^\theta _\omega =V^\theta (\omega )\oplus \bigoplus _{j=2}^{l_\theta } Y^\theta _j(\omega )\) is the equivariant complement to \(Y^\theta _\omega := Y_1^\theta (\omega )\), and \(H^{*\,\theta }_\omega \) is defined similarly. Furthermore, Lemma 2.4 shows that the following duality relations hold:

$$\begin{aligned} \psi (y)&=0 \text { whenever } y \in Y^\theta _\omega \text { and } \psi \in H^{*\,\theta }_\omega ,\quad \text { and }\nonumber \\ \phi (f)&=0 \text { whenever } \phi \in Y^{*\, \theta }_\omega \text { and } f \in H^{\theta }_\omega . \end{aligned}$$
(33)

Let us fix convenient choices for elements of the one-dimensional top Oseledets spaces \(Y^\theta _\omega \) and \(Y^{*\,\theta }_\omega \), for \(\theta \in \mathbb {C}^d\) close to 0. Let \(v_\omega ^\theta \in Y^\theta _\omega \) be as in (24), so that \(\int v_\omega ^\theta (\cdot )\mathrm{d}m=1\). We recall that

$$\begin{aligned} \mathcal {L}_\omega ^\theta v_\omega ^\theta =\lambda _\omega ^\theta v_{\sigma \omega }^\theta \quad \text {for } \mathbb P\text {-a.e. } \omega \in \varOmega , \end{aligned}$$

where

$$\begin{aligned} \lambda _\omega ^\theta =\int e^{\theta \cdot g(\omega , \cdot )}v_\omega ^\theta \, \mathrm{d}m(x). \end{aligned}$$

Let us fix \(\phi ^\theta _\omega \in Y^{*\,\theta }_\omega \) so that \(\phi ^\theta _\omega (v^\theta _\omega )=1\). We note that this selection is possible and unique, because of (33). Moreover, as in [15] we easily conclude that

$$\begin{aligned} (\mathcal {L}^{\theta }_\omega )^*\phi ^\theta _{\sigma \omega }=\lambda ^\theta _\omega \phi ^\theta _{\omega }, \quad \text {for } \mathbb P\text {-a.e. } \omega \in \varOmega . \end{aligned}$$

4.2 Large Deviations Properties

The proof of the following result is identical to the proof of [15, Lemma 4.2].

Lemma 4.1

Let \(\theta \in \mathbb {C}^d\) be sufficiently close to 0, so that the results of Sect. 4.1 apply. Let \(f\in \mathcal {B}\) be such that \(f\notin H_\omega ^\theta \), i.e., \(\phi ^\theta _\omega (f) \ne 0\). Then,

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n} \log \Big | \int e^{\theta \cdot S_ng(\omega ,\cdot )}f\ \mathrm{d}m \Big | = \varLambda (\theta ) \quad \text {for } \mathbb {P}\text {-a.e. } \omega \in \varOmega . \end{aligned}$$

Next, suppose that \(\varSigma ^2\) is positive definite and let \(B\subset \mathbb R^d\) be a closed ball around the origin so that \(D^2\varLambda (t)\) is positive definite for any \(t\in B\) and set

$$\begin{aligned} \varLambda ^*(x)=\sup _{t\in B}\left( t\cdot x-\varLambda (t)\right) . \end{aligned}$$

Observe that the existence of B follows from Lemma 3.19. By combining Lemma 4.1 with Theorem 6.7, we obtain the following local large deviations principle.
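As a consistency check (not needed in the sequel), recall that \(\varLambda (0)=0\), \(\nabla \varLambda (0)=0\) and \(D^2\varLambda (0)=\varSigma ^2\) by Lemmas 3.18 and 3.19. A standard Taylor expansion and Legendre transform computation, sketched here for small \(|t|\) and \(|x|\), then gives

$$\begin{aligned} \varLambda (t)=\tfrac{1}{2}\,t^{\mathrm{T}}\varSigma ^2 t+o(|t|^2) \quad \text {and} \quad \varLambda ^*(x)=\tfrac{1}{2}\,x^{\mathrm{T}}\varSigma ^{-2}x+o(|x|^2), \end{aligned}$$

where the supremum defining \(\varLambda ^*(x)\) is attained near \(t=\varSigma ^{-2}x\). This matches the Gaussian rate function appearing in Theorem 4.4.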

Theorem 4.2

For \(\mathbb P\)-a.e. \(\omega \in \varOmega \), we have:

  (i)

    for any closed set \(A\subset \mathbb R^d\),

    $$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{1}{n}\log \mu _\omega (\{S_n g(\omega ,\cdot )/n\in A \})\le -\inf _{x\in A}\varLambda ^*(x); \end{aligned}$$
  (ii)

    there exists a closed ball \(B_0\) around the origin (which does not depend on \(\omega \)) so that for any open subset A of \(B_0\) we have

    $$\begin{aligned} \liminf _{n\rightarrow \infty }\frac{1}{n}\log \mu _\omega (\{S_n g(\omega ,\cdot )/n\in A \})\ge -\inf _{x\in A}\varLambda ^*(x). \end{aligned}$$

Remark 4.3

In the scalar case, for \(\mathbb P\)-a.e. \(\omega \in \varOmega \) and for any sufficiently small \({\varepsilon }>0\), we have (see [33, Lemma XIII.2]) that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n} \log \mu _\omega (\{x: S_ng(\omega ,x)>n{\varepsilon }\})=-\varLambda ^*({\varepsilon }). \end{aligned}$$

The above conclusion was already obtained in [15, Theorem A].

In the multidimensional case, we can apply [48, Theorem 3.2] and conclude that for any box A around the origin with a sufficiently small diameter,

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n} \log \mu _\omega (\{S_ng(\omega ,\cdot )/n\notin A \})=-\inf _{a\in \partial A}\varLambda ^*(a). \end{aligned}$$

We also refer the reader to [48, Theorem 3.1] which, in particular, deals with the asymptotic behavior of probabilities of the form \(\mu _\omega (\{S_ng(\omega ,\cdot )/n\in C\})\), where C is a cone with a nonempty interior.

Next, we establish the following (optimal) global moderate deviations principle. Let \((a_n)_n\) be a sequence in \(\mathbb {R}\) such that \(\lim _{n\rightarrow \infty }\frac{a_n}{\sqrt{n}}=\infty \) and \(\lim _{n\rightarrow \infty }\frac{a_n}{n}=0\).

Theorem 4.4

For \(\mathbb P\)-a.e. \(\omega \in \varOmega \) and any \(\theta \in \mathbb R^d\), we have that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{a_n^2/n}\log \mathbb E[e^{\theta \cdot S_ng (\omega , \cdot )/c_n}]=\frac{1}{2}\theta ^{\mathrm{T}}\varSigma ^2\theta , \end{aligned}$$

where \(c_n=n/a_n\). Consequently, when \(\varSigma ^2\) is positive definite, we have that:

  (i)

    for any closed set \(A\subset \mathbb R^d\),

    $$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{1}{a_n^2/n}\log \mu _\omega (\{S_n g(\omega ,\cdot )/a_n\in A\})\le -\frac{1}{2} \inf _{x\in A}x^{\mathrm{T}}\varSigma ^{-2} x; \end{aligned}$$
  (ii)

    for any open set \(A\subset \mathbb R^d\), we have

    $$\begin{aligned} \liminf _{n\rightarrow \infty }\frac{1}{a_n^2/n}\log \mu _\omega (\{S_n g(\omega ,\cdot )/a_n\in A\})\ge -\frac{1}{2} \inf _{x\in A}x^{\mathrm{T}}\varSigma ^{-2} x, \end{aligned}$$

    where \(\varSigma ^{-2}\) denotes the inverse of \(\varSigma ^2\).

Proof

Let \(\varPi _\omega (\theta )\) be an analytic branch of \(\log \lambda _\omega ^\theta \) around 0 so that \(\varPi _\omega (0)=0\) and \(|\varPi _\omega (\theta )|\le c\) for some \(c>0\). Note that it is indeed possible to construct such functions \(\varPi _\omega \) in a deterministic neighborhood of 0 since \(\lambda _\omega ^0=1\) and \({\theta }\mapsto \lambda _\omega ^{\theta }\) is analytic and uniformly bounded around the origin. Set \(\varPi _{\omega ,n}(\theta )=\sum _{j=0}^{n-1}\varPi _{\sigma ^j\omega }(\theta )\). Then \(\nabla \varPi _\omega (0)=\nabla \lambda _\omega ^{\theta }|_{\theta =0}=0\) (see the proof of Lemma 3.19) and hence

$$\begin{aligned} \nabla \varPi _{\omega ,n}(0)=0. \end{aligned}$$
(34)

By applying \(\mathcal {L}_\omega ^{\theta , (n)}\) to the identity \(v_\omega ^0=\phi _\omega ^\theta (v_\omega ^0)v_\omega ^\theta + (v_\omega ^0-\phi _\omega ^\theta (v_\omega ^0)v_\omega ^\theta )\) and integrating with respect to m, we obtain that

$$\begin{aligned}&\int _X e^{\theta \cdot S_ng(\omega ,\cdot )}\mathrm{d}\mu _\omega =\int _X \mathcal {L}_\omega ^{\theta , (n)} v_\omega ^0\, \mathrm{d}m= \phi _\omega ^{\theta }(v_\omega ^0)e^{\varPi _{\omega ,n}(\theta )}\nonumber \\&\quad +\int _X\mathcal L^{\theta ,(n)}_\omega (v_\omega ^{0}-\phi _\omega ^{\theta }(v_\omega ^0)v_\omega ^\theta )\mathrm{d}m. \end{aligned}$$
(35)

By Lemma 4.7, the second term on the right-hand side of (35) is \(O(r^n)\) uniformly in \(\omega \) and \(\theta \) (around the origin), for some \(0<r<1\). Using the Cauchy integral formula, we get that

$$\begin{aligned} \left| D^2\varPi _{\omega ,n}(0)-\text {Cov}_{\mu _\omega }(S_ng(\omega ,\cdot ))\right| \le C, \end{aligned}$$
(36)

where C is some constant which does not depend on \(\omega \) and n. In the derivation of (36), we have also used that the function \(\theta \rightarrow \phi _\omega ^\theta (v_\omega ^0)\) is analytic and uniformly bounded in \(\omega \), which can be proved as in [15, Appendix C], using again the complex analytic implicit function theorem.

Next, let \(\theta \in \mathbb R^d\) and set \(\theta _n=\theta /c_n\), where \(c_n=n/a_n\) and \((a_n)_n\) is the sequence from the statement of the theorem. Then, \(\lim _{n\rightarrow \infty }c_n=\infty \) and \(\lim _{n\rightarrow \infty }c_n^2/n=0\). Set \(\varSigma ^2_{\omega ,n}=\text {Cov}_{\mu _\omega }(S_ng(\omega ,\cdot ))\). By (36), when n is sufficiently large, we can write

$$\begin{aligned} \varPi _{\omega ,n}(\theta _n)=\frac{1}{2}\theta _n^{\mathrm{T}}\varSigma ^2_{\omega ,n}\theta _n+\mathcal O(|\theta _n|^2)+\mathcal O(n|\theta _n|^3). \end{aligned}$$
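After multiplying by \(c_n^2/n\), each term of this expansion can be tracked separately (recall that \(\theta _n=\theta /c_n\), \(c_n\rightarrow \infty \) and \(c_n^2/n\rightarrow 0\)):

$$\begin{aligned} \frac{c_n^2}{n}\cdot \frac{1}{2}\theta _n^{\mathrm{T}}\varSigma ^2_{\omega ,n}\theta _n=\frac{1}{2}\,\theta ^{\mathrm{T}}\Big (\frac{\varSigma ^2_{\omega ,n}}{n}\Big )\theta \rightarrow \frac{1}{2}\,\theta ^{\mathrm{T}}\varSigma ^2\theta , \quad \frac{c_n^2}{n}\,\mathcal O(|\theta _n|^2)=\mathcal O(1/n), \quad \frac{c_n^2}{n}\,\mathcal O(n|\theta _n|^3)=\mathcal O(1/c_n), \end{aligned}$$

where the first convergence uses (29).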

Therefore,

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{c_n^2}{n}\varPi _{\omega ,n}(\theta _n)=\frac{1}{2}\theta ^{\mathrm{T}}\varSigma ^2\theta . \end{aligned}$$

This together with (35) implies that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{c_n^2}{n}\log \mathbb E[e^{\theta \cdot S_n g(\omega , \cdot )/c_n}]=\lim _{n\rightarrow \infty }\frac{c_n^2}{n}\varPi _{\omega ,n}(\theta _n)=\frac{1}{2}\theta ^{\mathrm{T}}\varSigma ^2\theta . \end{aligned}$$

The upper and lower large deviations bounds now follow from the Gärtner–Ellis theorem (see [12, Theorem 2.3.6]). \(\square \)

Theorems 4.2 and 4.4 deal with the asymptotic behavior of probabilities of rare events on an exponential scale. We will also obtain more explicit (but not tight) exponential upper bounds.

Proposition 4.5

There exist constants \(c_1,c_2>0\) such that for \(\mathbb P\)-a.e. \(\omega \in \varOmega \), for any \(\varepsilon >0\) and \(n\in \mathbb {N}\) we have

$$\begin{aligned} \mu _\omega (\{x\in X: |S_ng(\omega ,x)|\ge \varepsilon n+c_1\} )\le 2d\mathrm{e}^{-c_2 \varepsilon ^2 n}. \end{aligned}$$

Proof

It is sufficient to establish the desired conclusion in the case when g is real-valued. Then, by [18, (51)] there is a reverse martingale \(\mathbf{M}_n=X_1+\cdots +X_n\) (which depends on \(\omega \)) with the following properties:

  • there exists \(c>0\) independent of \(\omega \) such that \(\Vert X_i\Vert _{L^\infty (m)} \le c\);

  • there exists \(C>0\) independent of n and \(\omega \) such that

    $$\begin{aligned} \sup _n \Vert S_n g(\omega ,\cdot )-\mathbf{M}_n(\cdot )\Vert _{L^\infty (m)}\le C. \end{aligned}$$
    (37)

The proof of the proposition is completed now using the Chernoff bounding method. More precisely, by applying the Azuma–Hoeffding inequality with the martingale differences \(Y_{k}=X_{n-k}\) we get that for any \(\lambda >0\),

$$\begin{aligned} \mathbb E_\omega [e^{\lambda \mathbf{M}_n}]\le e^{\lambda ^2c^2 n}. \end{aligned}$$

Therefore, by the Markov inequality we have that

$$\begin{aligned} \mu _\omega (\{\mathbf{M}_n\ge \varepsilon n \})=\mu _\omega (\{e^{\lambda \mathbf{M}_n}\ge e^{\lambda \varepsilon n}\})\le e^{n(\lambda ^2 c^2-\lambda \varepsilon )}. \end{aligned}$$

By taking \(\lambda =\frac{\varepsilon }{2c^2}\), we obtain that \(\mu _\omega (\{\mathbf{M}_n\ge \varepsilon n \})\le e^{-\frac{\varepsilon ^2}{4c^2}n}\). Furthermore, by replacing \(\mathbf{M}_n\) with \(-\mathbf{M}_n\) we derive that

$$\begin{aligned} \mu _\omega (\{|\mathbf{M}_n|\ge \varepsilon n \})\le 2e^{-\frac{\varepsilon ^2}{4c^2}n}. \end{aligned}$$

The proof of the proposition is completed using (37). \(\square \)

Remark 4.6

We remark that we can get upper bounds on the constants c and C appearing in the above proof, and so we can express \(c_1\) and \(c_2\) in terms of the parameters appearing in (V1)–(V8) and (C1)–(C5).

4.3 Central Limit Theorem

We need the following lemma.

Lemma 4.7

There exist \(C>0\) and \(0<r<1\) such that for every \(\theta \in \mathbb {C}^d\) sufficiently close to 0, every \(n\in \mathbb {N}\) and \(\mathbb {P}\text {-a.e. } \omega \in \varOmega \), we have

$$\begin{aligned} \Big | \int \mathcal L_\omega ^{\theta , (n)}(v_\omega ^0 -\phi _\omega ^{\theta }(v_\omega ^0) v_{\omega }^{\theta })\, \mathrm{d}m \Big | \le Cr^n |\theta |. \end{aligned}$$
(38)

Proof

The left-hand side of (38) is \(\mathcal O(r^n)\) uniformly in \(\omega \) and \({\theta }\) (around the origin), is analytic in \(\theta \) and vanishes at \(\theta =0\). Therefore, by the Cauchy integral formula, its derivative in \(\theta \) is of order \(\mathcal O(r^n)\) as well, which yields (38). \(\square \)
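The Cauchy-estimate step can be sketched as follows. Write \(F_{\omega ,n}(\theta )\) for the left-hand side of (38) and suppose that \(|F_{\omega ,n}|\le Kr^n\) on \(\{|\theta |<\rho \}\) with \(F_{\omega ,n}(0)=0\), where \(K,\rho >0\) do not depend on \(\omega \) or n. Then, for \(|\theta |\le \rho /4\),

$$\begin{aligned} |F_{\omega ,n}(\theta )|=|F_{\omega ,n}(\theta )-F_{\omega ,n}(0)|\le |\theta | \sup _{|\zeta |\le \rho /2}\Vert DF_{\omega ,n}(\zeta )\Vert \le |\theta |\, \frac{C'Kr^n}{\rho }, \end{aligned}$$

since, by the Cauchy integral formula applied on circles of radius \(\rho /2\) in each variable, the derivative of \(F_{\omega ,n}\) on \(\{|\zeta |\le \rho /2\}\) is bounded by a multiple of \(\sup |F_{\omega ,n}|/\rho \); here \(C'>0\) is a constant depending only on d.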

Theorem 4.8

Assume the transfer operator cocycle \(\mathcal {R}\) is admissible, and the observable g satisfies conditions (13) and (14). Assume also that the asymptotic covariance matrix \(\varSigma ^2\) is positive definite. Then, for every bounded and continuous function \(\phi :\mathbb {R}^d \rightarrow \mathbb {R}\) and \(\mathbb {P}\text {-a.e. } \omega \in \varOmega \), we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\int \phi \left( \frac{S_n g(\omega , x)}{ \sqrt{n}}\right) \, \mathrm{d}\mu _\omega (x)=\int \phi \, \mathrm{d}\mathcal N(0, \varSigma ^2). \end{aligned}$$

Proof

It follows from Lévy's continuity theorem that it is sufficient to prove that, for every \(t\in \mathbb {R}^d\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\int e^{\frac{i}{\sqrt{n}}t^{\mathrm{T}}S_n g(\omega , \cdot )} \, \mathrm{d}\mu _\omega =e^{-\frac{1}{2}t^{\mathrm{T}} \varSigma ^2t} \quad \text {for } \mathbb {P}\text {-a.e. } \omega \in \varOmega , \end{aligned}$$

where \(t^{\mathrm{T}}\) denotes the transpose of t. Substituting \({\theta }=it/\sqrt{n}\) in (35) and taking into account that \(\lim _{{\theta }\rightarrow 0}\phi _\omega ^{\theta }(v_\omega ^0)=\phi _\omega ^0(v_\omega ^0)=1\), we conclude that it is sufficient to prove that

$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{j=0}^{n-1} \log \lambda _{\sigma ^j \omega }^{\frac{it}{\sqrt{n}}} = -\frac{1}{2} t^{\mathrm{T}} \varSigma ^2 t, \quad \text {for } \mathbb {P}\text {-a.e. } \omega \in \varOmega . \end{aligned}$$
(39)
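
To see why (39) suffices, recall the shape of the decomposition (35), which is used again in Sect. 4.4 below: with \(\theta =it/\sqrt{n}\), the characteristic function splits as (a sketch)

```latex
\int e^{it^{\mathrm{T}}\frac{S_n g(\omega ,\cdot )}{\sqrt{n}}}\, \mathrm{d}\mu _\omega
  = \phi _\omega ^{\theta }(v_\omega ^0)\prod _{j=0}^{n-1}\lambda _{\sigma ^j\omega }^{\theta }
    + \int \mathcal L_\omega ^{\theta ,(n)}\big (v_\omega ^0-\phi _\omega ^{\theta }(v_\omega ^0)v_\omega ^{\theta }\big )\, \mathrm{d}m .
```

By Lemma 4.7, the second summand is \(\mathcal O(r^n|t|/\sqrt{n})\) and thus negligible, while \(\phi _\omega ^{\theta }(v_\omega ^0)\rightarrow 1\); hence the characteristic function converges to \(e^{-\frac{1}{2}t^{\mathrm{T}}\varSigma ^2 t}\) precisely when the sum of logarithms of the eigenvalues in (39) converges to \(-\frac{1}{2}t^{\mathrm{T}}\varSigma ^2 t\).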

We recall that \(\lambda _\omega ^\theta =H(\theta , O(\theta ))(\sigma \omega )\), where H is again given by (73). We define \(\tilde{H}\) on a neighborhood of \(0\in \mathbb {C}^d\) with values in \(L^\infty (\varOmega )\) by

$$\begin{aligned} \tilde{H}(\theta ) (\omega )=\log H(\theta , O(\theta )) (\omega ), \quad \omega \in \varOmega . \end{aligned}$$

Observe that \(\tilde{H}(0)(\omega )=0\) for \(\mathbb P\)-a.e. \(\omega \in \varOmega \) and that, in the notation of the proof of Theorem 4.4, we have \(\tilde{H}(\theta ) (\omega )=\varPi _{\sigma ^{-1}\omega }({\theta })\). Therefore, as at the beginning of the proof of Theorem 4.4, we find that \(\tilde{H}\) is analytic on a neighborhood of 0. Furthermore, by proceeding as in the proof of [15, Lemma 4.5], we find that

$$\begin{aligned} D_i \tilde{H}(\theta )(\omega )=\frac{1}{H(\theta , O(\theta )) (\omega )}[D_i H(\theta , O(\theta )) (\omega )+ (D_{d+1}H(\theta , O(\theta ))D_i O(\theta ))(\omega )]. \end{aligned}$$

In particular, using Lemmas 6.1 and 6.3 we obtain that

$$\begin{aligned} D_i \tilde{H}(0)(\omega )=\int g^i (\sigma ^{-1}\omega , \cdot )v_{\sigma ^{-1}\omega }^0 \, \mathrm{d}m+\int (D_i O (0))_{\sigma ^{-1} \omega }\, \mathrm{d}m. \end{aligned}$$

Thus, it follows from (14) and (28) that \(D_i \tilde{H}(0)(\omega )=0\) for \(i\in \{1, \ldots , d\}\) and for \(\mathbb P\)-a.e. \(\omega \in \varOmega \).

Moreover, by taking into account that \(D_{d+1,d+1}H\) vanishes, we have that

$$\begin{aligned}&D_{ji}\tilde{H}(\theta )(\omega ) \\&\quad =\frac{-E_i (\omega ) E_j(\omega )}{[H(\theta , O(\theta )) (\omega )]^2} \\&\qquad +\frac{1}{H(\theta , O(\theta )) (\omega )}[D_{ji}H(\theta , O(\theta )) (\omega )+(D_{d+1, i}H(\theta , O(\theta )) D_j O(\theta ))(\omega )]\\&\qquad +\frac{1}{H(\theta , O(\theta )) (\omega )} [(D_{j,d+1}H(\theta , O(\theta ))D_i O(\theta ))(\omega )\\&\quad +(D_{d+1}H(\theta , O(\theta ))D_{ji} O(\theta ))(\omega )], \end{aligned}$$

where

$$\begin{aligned} E_i(\omega )=D_i H(\theta , O(\theta )) (\omega )+ (D_{d+1}H(\theta , O(\theta ))D_i O(\theta ))(\omega ). \end{aligned}$$

By applying Lemma 6.6 and using (31), we find that

$$\begin{aligned} D_{ji}\tilde{H}(0)(\omega )= & {} \int \Big (g^i(\sigma ^{-1}\omega , \cdot )g^j(\sigma ^{-1}\omega , \cdot )v_{\sigma ^{-1}\omega }^0+g^i (\sigma ^{-1}\omega , \cdot ) (D_j O(0))_{\sigma ^{-1} \omega }\\&+g^j(\sigma ^{-1}\omega , \cdot )(D_i O(0))_{\sigma ^{-1}\omega }\Big )\, \mathrm{d}m. \end{aligned}$$

Developing \(\tilde{H}\) in a Taylor series around 0, we have that

$$\begin{aligned} \tilde{H}(\theta )(\omega )= \log H(\theta , O(\theta ))(\omega ) =\frac{1}{2} \theta ^{\mathrm{T}} D^2 \tilde{H}(0)(\omega ) \theta + R(\theta )(\omega ), \end{aligned}$$

where R denotes the remainder. Therefore,

$$\begin{aligned} \log H \left( \frac{it}{\sqrt{n}}, O(\frac{it}{\sqrt{n}})\right) (\sigma ^{ j+1} \omega )=-\frac{1}{2n} t^{\mathrm{T}} D^2 \tilde{H}(0)(\sigma ^{j+1} \omega )t+R(it/\sqrt{n})(\sigma ^{j+1} \omega ), \end{aligned}$$

which implies that

$$\begin{aligned}&\sum _{j=0}^{n-1} \log H \left( \frac{it}{\sqrt{n}}, O(\frac{it}{\sqrt{n}})\right) (\sigma ^{ j+1} \omega )\nonumber \\&\qquad =-\frac{1 }{2}\cdot \frac{1}{n}\sum _{j=0}^{n-1}t^{\mathrm{T}} D^2 \tilde{H}(0)(\sigma ^{j+1} \omega ) t+\sum _{j=0}^{n-1}R(it/\sqrt{n})(\sigma ^{j+1} \omega ). \end{aligned}$$
(40)

By Birkhoff’s ergodic theorem, we have that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n} \sum _{j=0}^{n-1}t^{\mathrm{T}} D^2 \tilde{H}(0)(\sigma ^{j+1} \omega ) t&= \lim _{n\rightarrow \infty }\frac{1}{n} \sum _{j=0}^{n-1}\sum _{k,l=1}^d t_k D_{kl}\tilde{H}(0) (\sigma ^{j+1}\omega ) t_l \\&=\sum _{k,l=1}^d t_k D_{kl} \varLambda (0)t_l \\&=t^{\mathrm{T}} \varSigma ^2 t, \end{aligned}$$

for \(\mathbb P\)-a.e. \(\omega \in \varOmega \), where we have used the penultimate equality in the proof of Lemma 3.19. Furthermore, since \(\tilde{H}({\theta })\) is analytic in \({\theta }\) and uniformly bounded in \(\omega \), we have that when \(|t/\sqrt{n}|\) is sufficiently small, \(|R(it/\sqrt{n})(\omega )|\le C|t/\sqrt{n}|^3\), where \(C>0\) is some constant which does not depend on \(\omega \) and n, and hence

$$\begin{aligned} \lim _{n\rightarrow \infty }\sum _{j=0}^{n-1}R(it/\sqrt{n})(\sigma ^{j+1} \omega ) =0, \quad \text {for } \mathbb P\text {-a.e. } \omega \in \varOmega . \end{aligned}$$

Thus, (40) implies that

$$\begin{aligned} \lim _{n\rightarrow \infty }\sum _{j=0}^{n-1} \log H \left( \frac{it}{\sqrt{n}}, O(\frac{it}{\sqrt{n}})\right) (\sigma ^{ j+1} \omega )=-\frac{1}{2} t^{\mathrm{T}}\varSigma ^2 t \quad \text {for } \mathbb P\text {-a.e. } \omega \in \varOmega , \end{aligned}$$

and therefore (39) holds. This completes the proof of the theorem. \(\square \)

4.4 Berry–Esseen Bounds

In this subsection, we restrict to the case when \(d=1\), i.e., we consider real-valued observables. In this case, \(\varSigma ^2\) is a nonnegative number and in fact,

$$\begin{aligned} \varSigma ^2=\int _{\varOmega \times X}g(\omega , x)^2\, \mathrm{d}\mu (\omega , x)+2\sum _{n=1}^\infty \int _{\varOmega \times X}g(\omega , x)g(\tau ^n (\omega , x))\, \mathrm{d}\mu (\omega , x). \end{aligned}$$
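
This formula has the same structure as the classical Green–Kubo formula for stationary sequences: the variance of a single term plus twice the sum of the autocovariances. A minimal numerical sketch of this mechanism for a toy stationary moving-average sequence (purely illustrative; it does not involve the skew product \(\tau \), and all names below are our own):

```python
import math
import random

random.seed(1)

# Toy stationary sequence: Y_k = Z_k + Z_{k+1} with Z_k i.i.d. standard normal.
# Autocovariances: gamma(0) = 2, gamma(1) = 1, gamma(k) = 0 for k >= 2, so the
# Green-Kubo series evaluates to gamma(0) + 2 * gamma(1) = 4.
n, trials = 200, 10000
sums = []
for _ in range(trials):
    z = [random.gauss(0.0, 1.0) for _ in range(n + 1)]
    sums.append(sum(z[k] + z[k + 1] for k in range(n)))

mean = sum(sums) / trials
var_per_step = sum((s - mean) ** 2 for s in sums) / trials / n
assert abs(var_per_step - 4.0) < 0.5  # Var(S_n)/n -> gamma(0) + 2 * gamma(1)
```

The exact value here is \(\mathrm{Var}(S_n)/n=(4n-2)/n\rightarrow 4\), so the single-term variance 2 underestimates the limiting variance by a factor of two; the cross-covariance series supplies the rest.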

In this section, we assume that \(\varSigma ^2>0\) which means that g is not an \(L^2(\mu )\) coboundary with respect to the skew product \(\tau \) (see [16, Proposition 3]). For \(\omega \in \varOmega \) and \(n\in \mathbb {N}\), set

$$\begin{aligned} \alpha _{\omega , n}:={\left\{ \begin{array}{ll} \sum _{j=0}^{n-1}\tilde{H}''(0)(\sigma ^{j+1}\omega ) &{} \quad \text {if } \sum _{j=0}^{n-1}\tilde{H}''(0)(\sigma ^{j+1}\omega ) \ne 0; \\ n\varSigma ^2 &{} \quad \text {if } \sum _{j=0}^{n-1}\tilde{H}''(0)(\sigma ^{j+1}\omega )=0, \end{array}\right. } \end{aligned}$$

where \(\tilde{H}\) is introduced in the previous subsection. Then,

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\alpha _{\omega , n}}{n}=\varSigma ^2, \quad \text {for } \mathbb P\text {-a.e. } \omega \in \varOmega . \end{aligned}$$
(41)

Take now \(\omega \in \varOmega \) such that (41) holds. Set

$$\begin{aligned} a_n:=\frac{\alpha _{\omega , n}}{n} \quad \text {and} \quad r_n=\sqrt{a_n}, \end{aligned}$$

for \(n\in \mathbb {N}\). Observe that \(a_n\) and \(r_n\) depend on \(\omega \) but in order to simplify the notation, we will not make this explicit. Taking \({\theta }=tn^{-1/2}/r_n\) in (35), we have that

$$\begin{aligned} \int _X e^{it\frac{S_ng(\omega , \cdot )}{r_n\sqrt{n}}}\, \mathrm{d}\mu _\omega&=\phi _\omega ^{\frac{it}{r_n\sqrt{n}}}(v_\omega ^0)\prod _{j=0}^{n-1} \lambda _{\sigma ^j \omega }^{\frac{it}{r_n \sqrt{n}}}\\&\quad +\int _X\mathcal L_\omega ^{\frac{it}{r_n \sqrt{n}}, (n)}(v_\omega ^0-\phi _\omega ^{\frac{it}{r_n\sqrt{n}}}(v_\omega ^0)v_\omega ^{\frac{it}{r_n \sqrt{n}}})\, \mathrm{d}m. \end{aligned}$$

Hence,

$$\begin{aligned} \bigg |\int _X e^{it\frac{S_ng(\omega , \cdot )}{r_n\sqrt{n}}}\, \mathrm{d}\mu _\omega -e^{-\frac{1}{2} t^2} \bigg |&\le \bigg |\phi _\omega ^{\frac{it}{r_n\sqrt{n}}}(v_\omega ^0)\prod _{j=0}^{n-1} \lambda _{\sigma ^j \omega }^{\frac{it}{r_n \sqrt{n}}}-e^{-\frac{1}{2} t^2} \bigg |\\&\quad +\bigg |\int _X\mathcal L_\omega ^{\frac{it}{r_n \sqrt{n}}, (n)}(v_\omega ^0-\phi _\omega ^{\frac{it}{r_n\sqrt{n}}}(v_\omega ^0)v_\omega ^{\frac{it}{r_n \sqrt{n}}})\, \mathrm{d}m \bigg |. \end{aligned}$$

Observe that

$$\begin{aligned} \bigg |\phi _\omega ^{\frac{it}{r_n\sqrt{n}}}(v_\omega ^0)\prod _{j=0}^{n-1} \lambda _{\sigma ^j \omega }^{\frac{it}{r_n \sqrt{n}}}-e^{-\frac{1}{2} t^2} \bigg |&\le |\phi _\omega ^{\frac{it}{r_n\sqrt{n}}}(v_\omega ^0)-1|\cdot \bigg |\prod _{j=0}^{n-1} \lambda _{\sigma ^j \omega }^{\frac{it}{r_n \sqrt{n}}} \bigg |\\&\quad +\bigg |\prod _{j=0}^{n-1} \lambda _{\sigma ^j \omega }^{\frac{it}{r_n \sqrt{n}}}-e^{-\frac{1}{2} t^2}\bigg |\\&=:I_1+I_2. \end{aligned}$$

By [15, Lemma 4.6], we have that

$$\begin{aligned} \bigg |\prod _{j=0}^{n-1} \lambda _{\sigma ^j \omega }^{\frac{it}{r_n \sqrt{n}}} \bigg |\le e^{-\frac{\varSigma ^2}{8}r_n^{-2}t^2}, \end{aligned}$$

for each \(n\in \mathbb {N}\) sufficiently large and t such that \(|\frac{t}{r_n \sqrt{n}}|\) is sufficiently small. Moreover, using the analyticity of the map \(\theta \mapsto \phi ^\theta \) (which, as we already commented, can be obtained by repeating the arguments in [15, Appendix C]) and the fact that \(\frac{d}{\mathrm{d}\theta }\phi _\omega ^\theta |_{\theta =0} (v_\omega ^0)=0\), there exists \(A>0\) (independent of \(\omega \) and n) such that

$$\begin{aligned} |\phi _\omega ^{\frac{it}{r_n\sqrt{n}}}(v_\omega ^0)-1|=|\phi _\omega ^{\frac{it}{r_n\sqrt{n}}}(v_\omega ^0)-\phi _\omega ^0(v_\omega ^0)|\le At^2r_n^{-2}n^{-1}, \end{aligned}$$
(42)

whenever \(|\frac{t}{r_n \sqrt{n}}|\) is sufficiently small. Consequently, for n sufficiently large and if \(|\frac{t}{r_n \sqrt{n}}|\) is sufficiently small,

$$\begin{aligned} I_1 \le At^2r_n^{-2}n^{-1}e^{-\frac{\varSigma ^2}{8}r_n^{-2}t^2}. \end{aligned}$$

On the other hand, we have that

$$\begin{aligned} I_2&=\bigg |\prod _{j=0}^{n-1} \lambda _{\sigma ^j \omega }^{\frac{it}{r_n \sqrt{n}}}-e^{-\frac{1}{2} t^2}\bigg |\\&=\bigg |e^{\sum _{j=0}^{n-1}\log \lambda _{\sigma ^j \omega }^{\frac{it}{r_n \sqrt{n}}}}-e^{-\frac{1}{2} t^2}\bigg |\\&=e^{-\frac{1}{2} t^2}\bigg |\exp \bigg (\sum _{j=0}^{n-1}\log \lambda _{\sigma ^j \omega }^{\frac{it}{r_n \sqrt{n}}}+\frac{1}{2} t^2 \bigg )-1\bigg |. \end{aligned}$$

Observe that for n sufficiently large,

$$\begin{aligned} \sum _{j=0}^{n-1}\log \lambda _{\sigma ^j \omega }^{\frac{it}{r_n \sqrt{n}}}&=\sum _{j=0}^{n-1}\tilde{H}(\frac{it}{r_n \sqrt{n}})(\sigma ^{j+1}\omega )\\&=\sum _{j=0}^{n-1} \left( \frac{-t^2 \tilde{H}''(0)(\sigma ^{j+1} \omega )}{2nr_n^2}+R(\frac{it}{r_n\sqrt{n}})(\sigma ^{j+1} \omega ) \right) \\&=-\frac{t^2}{2}+\sum _{j=0}^{n-1}R(\frac{it}{r_n\sqrt{n}})(\sigma ^{j+1} \omega ), \end{aligned}$$

and therefore,

$$\begin{aligned} \sum _{j=0}^{n-1}\log \lambda _{\sigma ^j \omega }^{\frac{it}{r_n \sqrt{n}}}+\frac{t^2}{2}=\sum _{j=0}^{n-1}R(\frac{it}{r_n\sqrt{n}})(\sigma ^{j+1} \omega ). \end{aligned}$$

Using that \(R(\frac{it}{r_n\sqrt{n}})=\frac{\tilde{H}'''(p_t)}{3!}(\frac{it}{r_n\sqrt{n}})^3\), for some \(p_t\) between 0 and \(\frac{it}{r_n\sqrt{n}}\), we conclude that there exists \(M>0\) such that

$$\begin{aligned} \bigg |\sum _{j=0}^{n-1}\log \lambda _{\sigma ^j \omega }^{\frac{it}{r_n \sqrt{n}}}+\frac{t^2}{2} \bigg |\le nM|\frac{it}{r_n \sqrt{n}}|^3 =\frac{M|t|^3}{r_n^3 \sqrt{n}}. \end{aligned}$$

Since \(|e^z-1|\le 2|z|\) whenever \(|z|\) is sufficiently small, we conclude that

$$\begin{aligned} I_2\le 2Me^{-\frac{t^2}{2}}|t|^3r_n^{-3}n^{-1/2}. \end{aligned}$$

Observe that Lemma 4.7 implies that

$$\begin{aligned} \bigg |\int _X\mathcal L_\omega ^{\frac{it}{r_n \sqrt{n}}, (n)}(v_\omega ^0-\phi _\omega ^{\frac{it}{r_n\sqrt{n}}}(v_\omega ^0)v_\omega ^{\frac{it}{r_n \sqrt{n}}})\, \mathrm{d}m \bigg |\le C\frac{r^n |t|}{r_n \sqrt{n}}, \end{aligned}$$

for some \(C>0\) and whenever \(|\frac{t}{r_n \sqrt{n}}|\) is sufficiently small.

Let \(F_n :\mathbb {R}\rightarrow \mathbb {R}\) be the distribution function of \(\frac{S_ng(\omega , \cdot )}{r_n \sqrt{n}}=\frac{S_n g(\omega , \cdot )}{\sqrt{\alpha _{\omega , n}}}\). Furthermore, let \(F:\mathbb {R}\rightarrow \mathbb {R}\) be the distribution function of \(\mathcal N(0, 1)\). Then, it follows from the Berry–Esseen inequality that

$$\begin{aligned} \sup _{x\in \mathbb {R}}|F_n(x)-F(x)|\le \frac{2}{\pi }\int _0^{T}\bigg |\frac{\mu _\omega (e^{\frac{itS_ng(\omega , \cdot )}{r_n \sqrt{n}}})-e^{-\frac{1}{2} t^2}}{t}\bigg |\, \mathrm{d}t +\frac{24}{\pi T}\sup _{x\in \mathbb {R}}|F'(x)|, \end{aligned}$$
(43)

for any \(T>0\). It follows from the estimates we established that there exists \(\rho >0\) such that

$$\begin{aligned} \int _0^{\rho r_n \sqrt{n}}\bigg |\frac{\mu _\omega (e^{\frac{itS_ng(\omega , \cdot )}{r_n \sqrt{n}}})-e^{-\frac{1}{2} t^2}}{t}\bigg |\, \mathrm{d}t&\le Ar_n^{-2}n^{-1}\int _0^\infty te^{-\frac{\varSigma ^2}{8}r_n^{-2}t^2}\, \mathrm{d}t \\&\quad +2Mr_n^{-3}n^{-\frac{1}{2}}\int _0^\infty t^2 e^{-\frac{t^2}{2}}\, \mathrm{d}t \\&\quad +C\rho r^n, \end{aligned}$$

for sufficiently large n. Since

$$\begin{aligned} \sup _n \int _0^\infty te^{-\frac{\varSigma ^2}{8}r_n^{-2}t^2}\, \mathrm{d}t<\infty \quad \text {and} \quad \int _0^\infty t^2 e^{-\frac{t^2}{2}}\, \mathrm{d}t <\infty , \end{aligned}$$

we conclude that

$$\begin{aligned} \sup _{x\in \mathbb {R}}|F_n(x)-F(x)|\le R(\omega )n^{-\frac{1}{2}}, \end{aligned}$$
(44)

for some random variable R.
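
For completeness, the two integrals appearing in the estimate above can be evaluated in closed form (the first by the substitution \(u=t^2\), the second being a standard Gaussian moment):

```latex
\int _0^\infty t e^{-\frac{\varSigma ^2}{8} r_n^{-2} t^2}\, \mathrm{d}t
  = \frac{4 r_n^2}{\varSigma ^2},
\qquad
\int _0^\infty t^2 e^{-\frac{t^2}{2}}\, \mathrm{d}t = \sqrt{\frac{\pi }{2}} .
```

In particular, the first expression is bounded in n since \(r_n^2=a_n\rightarrow \varSigma ^2\) by (41), which is why the supremum over n is finite.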

Next, notice that in the notation of the proof of Theorem 4.4 we have

$$\begin{aligned} \alpha _{\omega ,n}=\varPi _{\omega ,n}''(0). \end{aligned}$$

Set \(\sigma _{\omega ,n}^2={{\,\mathrm{var}\,}}_{\mu _\omega }\big (S_ng(\omega ,\cdot )\big )\). Then by (36) we have

$$\begin{aligned} |\sigma _{\omega ,n}^2-\alpha _{\omega ,n}|\le C, \end{aligned}$$

where C is some constant which does not depend on n. Since \(\alpha ^{-\frac{1}{2}}-\sigma ^{-\frac{1}{2}}=\frac{\sigma -\alpha }{\sqrt{\alpha \sigma }(\sqrt{\alpha }+\sqrt{\sigma })}\) for any positive \(\alpha \) and \(\sigma \), taking into account (13) we have

$$\begin{aligned} \left| S_ng(\omega ,\cdot )/\sqrt{\alpha _{\omega ,n}}-S_ng(\omega ,\cdot )/\sigma _{\omega ,n}\right| \le C_1n^{-\frac{1}{2}} \end{aligned}$$

for some constant \(C_1\) which does not depend on n. By applying [28, Lemma 3.3] with \(a=\infty \), we conclude from (44) that the following self-normalized version of the Berry–Esseen theorem holds true:

$$\begin{aligned} \sup _{x\in \mathbb {R}}|\bar{F}_n(x)-F(x)|\le R_1(\omega )n^{-\frac{1}{2}} \end{aligned}$$
(45)

for some random variable \(R_1\), where \(\bar{F}_n\) is a distribution function of \(\frac{S_n g(\omega , \cdot )}{\sigma _{\omega , n}}\).

Remark 4.9

We stress that an analogous result (obtained using different techniques) for random expanding dynamics appears in [31, Theorem 7.1.1]. In Theorem 4.13, we will give a somewhat different proof of (45), as well as establish certain Edgeworth expansions of order one.

4.5 Local Limit Theorem

Theorem 4.10

Suppose that \(\varSigma ^2\) is positive definite and that for any compact set \(J\subset \mathbb R^d{\setminus }\{0\}\) there exist \(\rho \in (0,1)\) and a random variable \(C:\varOmega \rightarrow (0, \infty )\) such that

$$\begin{aligned} \Vert \mathcal {L}_\omega ^{it, (n)}\Vert \le C(\omega ) \rho ^n, \quad \text {for } \mathbb P\text {-a.e. } \omega \in \varOmega , t\in J\text { and } n\in \mathbb {N}. \end{aligned}$$
(46)

Then, for \(\mathbb P\)-a.e. \(\omega \in \varOmega \) we have that

$$\begin{aligned} \lim _{n\rightarrow \infty }\sup _{s\in \mathbb {R}^d} \bigg ||\varSigma |n^{d/2}\mu _\omega (s+S_n g(\omega , \cdot )\in J)-\frac{1}{(2\pi )^{d/2}}e^{-\frac{1}{2n}s^{\mathrm{T}}\varSigma ^{-2}s}|J|\bigg |=0, \end{aligned}$$

where \(|\varSigma |=\sqrt{\det \varSigma ^2}\), \(\varSigma ^{-2}\) is the inverse of \(\varSigma ^2\) and \(|J|\) denotes the volume of J.

Proof

The proof is analogous to the proof of [15, Theorem C]. Using a density argument (analogous to that in [39]), it is sufficient to show that

$$\begin{aligned} \sup _{s\in \mathbb {R}^d} \bigg ||\varSigma |n^{d/2}\int h(s+S_ng(\omega , \cdot ))\, \mathrm{d}\mu _\omega -\frac{1}{(2\pi )^{d/2}}e^{-\frac{1}{2n}s^{\mathrm{T}}\varSigma ^{-2}s}\int _{\mathbb {R}^d} h(u)\, \mathrm{d}u\bigg |\rightarrow 0, \end{aligned}$$
(47)

when \(n\rightarrow \infty \) for every \(h\in L^1(\mathbb {R}^d)\) whose Fourier transform \(\hat{h}\) has compact support. By using the inversion formula

$$\begin{aligned} h(x)=\frac{1}{(2\pi )^d}\int _{\mathbb {R}^d} \hat{h}(t)e^{it\cdot x}\, \mathrm{d}t, \end{aligned}$$

and Fubini’s theorem, we have that

$$\begin{aligned} |\varSigma |n^{d/2}\int h(s+S_ng(\omega ,\cdot ))\, \mathrm{d}\mu _\omega&=\frac{|\varSigma |n^{d/2} }{(2\pi )^d} \int \int _{\mathbb {R}^d}\hat{h}(t)e^{it \cdot (s+S_n g(\omega , \cdot ))}\, \mathrm{d}t \, \mathrm{d}\mu _\omega \\&=\frac{|\varSigma |n^{d/2} }{(2\pi )^d} \int _{\mathbb {R}^d} e^{it\cdot s}\hat{h}(t)\int e^{it \cdot S_n g(\omega , \cdot )}\, \mathrm{d}\mu _\omega \, \mathrm{d}t \\&=\frac{|\varSigma |n^{d/2} }{(2\pi )^d} \int _{\mathbb {R}^d} e^{it\cdot s}\hat{h}(t)\int e^{it \cdot S_n g(\omega , \cdot )}v_\omega ^0\, \mathrm{d}m \, \mathrm{d}t \\&=\frac{|\varSigma |n^{d/2} }{(2\pi )^d} \int _{\mathbb {R}^d} e^{it\cdot s}\hat{h}(t)\int \mathcal L_{\omega }^{it, (n)}v_\omega ^0\, \mathrm{d}m \, \mathrm{d}t \\&=\frac{|\varSigma |}{(2\pi )^d}\int _{\mathbb {R}^d} e^{\frac{it \cdot s}{\sqrt{n}}}\hat{h}\left( \frac{t}{\sqrt{n}}\right) \int \mathcal L_{\omega }^{\frac{it}{\sqrt{n}}, (n)}v_\omega ^0\, \mathrm{d}m \, \mathrm{d}t. \end{aligned}$$

Recalling that the Fourier transform of \(f(x)=e^{-\frac{1}{2}x^{\mathrm{T}}\varSigma ^2 x}\) is given by \(\hat{f}(t)=\frac{(2\pi )^{d/2}}{|\varSigma |}e^{- \frac{1}{2} t^{\mathrm{T}}\varSigma ^{-2}t}\), we have that

$$\begin{aligned} \frac{1}{(2\pi )^{d/2}}e^{-\frac{1}{2n}s^{\mathrm{T}}\varSigma ^{-2}s}\int _{\mathbb {R}^d} h(u)\, \mathrm{d}u= & {} \frac{\hat{h}(0)}{(2\pi )^{d/2}}e^{-\frac{1}{2n}s^{\mathrm{T}}\varSigma ^{-2}s}\\= & {} \frac{\hat{h}(0) |\varSigma |}{(2\pi )^d} \hat{f}(-s/\sqrt{n}) \\= & {} \frac{\hat{h}(0)|\varSigma |}{(2\pi )^d} \int _{\mathbb {R}^d} e^{\frac{it\cdot s}{\sqrt{n}}} \cdot e^{-\frac{1}{2}t^{\mathrm{T}}\varSigma ^2 t}\, \mathrm{d}t. \end{aligned}$$
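
In dimension \(d=1\), the transform pair just used can be sanity-checked numerically; the sketch below (the helper `gauss_ft`, the step size and the truncation length are illustrative choices of ours) compares a plain Riemann sum against the closed form:

```python
import math

def gauss_ft(t, sigma2, h=1e-3, L=40.0):
    """Riemann-sum approximation of int exp(-sigma2*x^2/2) * cos(t*x) dx over
    [-L, L]; the integrand is even, so the Fourier transform is real."""
    total, x = 0.0, -L
    while x <= L:
        total += math.exp(-0.5 * sigma2 * x * x) * math.cos(t * x) * h
        x += h
    return total

sigma2 = 2.0  # plays the role of Sigma^2 in dimension d = 1
for t in (0.0, 0.5, 1.3):
    # Closed form: sqrt(2*pi)/Sigma * exp(-t^2 / (2*Sigma^2)).
    exact = math.sqrt(2.0 * math.pi / sigma2) * math.exp(-t * t / (2.0 * sigma2))
    assert abs(gauss_ft(t, sigma2) - exact) < 1e-6
```

The cosine alone suffices because \(e^{-\frac{1}{2}\varSigma ^2 x^2}\) is even, so the imaginary (sine) part of the transform vanishes.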

Therefore, in order to complete the proof of the theorem we need to show that

$$\begin{aligned} \sup _{s\in \mathbb {R}^d} \bigg |\frac{|\varSigma |}{(2\pi )^d}\int _{\mathbb {R}^d} e^{\frac{it \cdot s}{\sqrt{n}}}\hat{h}\left( \frac{t}{\sqrt{n}}\right) \int \mathcal L_{\omega }^{\frac{it}{\sqrt{n}}, (n)}v_\omega ^0\, \mathrm{d}m \, \mathrm{d}t-\frac{\hat{h}(0)|\varSigma |}{(2\pi )^d} \int _{\mathbb {R}^d} e^{\frac{it\cdot s}{\sqrt{n}}} \cdot e^{-\frac{1}{2}t^{\mathrm{T}}\varSigma ^2 t}\, \mathrm{d}t \bigg |\rightarrow 0, \end{aligned}$$

when \(n\rightarrow \infty \) for \(\mathbb P\)-a.e. \(\omega \in \varOmega \). Choose \(\delta >0\) such that the support of \(\hat{h}\) is contained in \(\{t\in \mathbb {R}^d: |t|\le \delta \}\). Then, for any \(\tilde{\delta } \in (0, \delta )\), we have that

$$\begin{aligned}&\frac{|\varSigma |}{(2\pi )^d}\int _{\mathbb {R}^d} e^{\frac{it\cdot s}{\sqrt{n}}}\hat{h}\left( \frac{t}{\sqrt{n}}\right) \int \mathcal L_{\omega }^{\frac{it}{\sqrt{n}}, (n)}v_\omega ^0\, \mathrm{d}m \, \mathrm{d}t - \frac{\hat{h}(0)|\varSigma |}{(2\pi )^d} \int _{\mathbb {R}^d} e^{\frac{it\cdot s}{\sqrt{n}}} \cdot e^{-\frac{1}{2}t^{\mathrm{T}}\varSigma ^2 t}\, \mathrm{d}t \\&\quad =\frac{|\varSigma |}{(2\pi )^d} \int _{|t|< \tilde{\delta } \sqrt{n}} e^{\frac{it\cdot s}{\sqrt{n}}} \Big (\hat{h}\left( \frac{t}{\sqrt{n}}\right) \prod _{j=0}^{n-1}\lambda _{\sigma ^j \omega }^{\frac{it}{\sqrt{n}}}-\hat{h}(0)e^{-\frac{1}{2}t^{\mathrm{T}}\varSigma ^2 t} \Big )\, \mathrm{d}t \\&\qquad +\frac{|\varSigma |}{(2\pi )^d}\int _{|t|< \tilde{\delta } \sqrt{n}}e^{\frac{it\cdot s}{\sqrt{n}}}\hat{h}\left( \frac{t}{\sqrt{n}}\right) \int \prod _{j=0}^{n-1}\lambda _{\sigma ^j \omega }^{\frac{it}{\sqrt{n}}} \Big ( \phi _\omega ^{\frac{it}{\sqrt{n}}}( v_\omega ^0 ) v_{\sigma ^{n}\omega }^{\frac{it}{\sqrt{n}}}-1 \Big )\,\mathrm{d}m \, \mathrm{d}t \\&\qquad +\frac{|\varSigma |n^{d/2}}{(2\pi )^d}\int _{|t|<\tilde{\delta }}e^{it\cdot s}\hat{h}(t)\int \mathcal L_{\omega }^{it, (n)} (v_\omega ^0 - \phi _\omega ^{it}( v_\omega ^0 ) v_{\omega }^{it}) \, \mathrm{d}m\, \mathrm{d}t \\&\qquad +\frac{|\varSigma |n^{d/2}}{(2\pi )^d}\int _{\tilde{\delta } \le |t|< \delta }e^{it\cdot s}\hat{h}(t)\int \mathcal L_\omega ^{it, (n)}v_\omega ^0\, \mathrm{d}m\, \mathrm{d}t \\&\qquad -\frac{|\varSigma |}{(2\pi )^d}\hat{h}(0) \int _{|t|\ge \tilde{\delta } \sqrt{n}}e^{\frac{it\cdot s}{\sqrt{n}}} \cdot e^{-\frac{1}{2} t^{\mathrm{T}}\varSigma ^2 t}\, \mathrm{d}t=: (I)+(II)+(III)+(IV)+(V). \end{aligned}$$

One can now proceed as in the proof of [15, Theorem C] and show that each of the terms (I)–(V) converges to zero as \(n\rightarrow \infty \). For the convenience of the reader, we give here complete arguments for terms (I) (which is most involved) and (IV) (since this is the only part of the proof that requires (46)). \(\square \)

Control of (I) We claim that for \(\mathbb {P}\text {-a.e. } \omega \in \varOmega \),

$$\begin{aligned} \lim _{n\rightarrow \infty } \sup _{s\in \mathbb {R}^d} \bigg |\int _{|t|< \tilde{\delta } \sqrt{n}}e^{\frac{it\cdot s}{\sqrt{n}}} \Big (\hat{h}\left( \frac{t}{\sqrt{n}}\right) \prod _{j=0}^{n-1} \lambda _{\sigma ^j \omega }^{\frac{it}{\sqrt{n}}}-\hat{h}(0)e^{-\frac{1}{2} t^{\mathrm{T}}\varSigma ^2 t}\Big )\, \mathrm{d}t \bigg |= 0. \end{aligned}$$

Observe that

$$\begin{aligned}&\sup _{s\in \mathbb {R}^d} \bigg |\int _{|t|< \tilde{\delta } \sqrt{n}}e^{\frac{it\cdot s}{\sqrt{n}}} \Big (\hat{h}\left( \frac{t}{\sqrt{n}}\right) \prod _{j=0}^{n-1} \lambda _{\sigma ^j \omega }^{\frac{it}{\sqrt{n}}}-\hat{h}(0)e^{-\frac{1}{2} t^{\mathrm{T}}\varSigma ^2 t} \Big )\, \mathrm{d}t \bigg |\\&\quad \le \int _{|t|< \tilde{\delta } \sqrt{n}}\bigg |\hat{h}\left( \frac{t}{\sqrt{n}}\right) \prod _{j=0}^{n-1} \lambda _{\sigma ^j \omega }^{\frac{it}{\sqrt{n}}}-\hat{h}(0)e^{-\frac{1}{2} t^{\mathrm{T}}\varSigma ^2 t} \bigg |\, \mathrm{d}t. \end{aligned}$$

It follows from the continuity of \(\hat{h}\) and (39) that for \(\mathbb {P}\text {-a.e. } \omega \in \varOmega \) and every t,

$$\begin{aligned} \hat{h}\left( \frac{t}{\sqrt{n}}\right) \prod _{j=0}^{n-1} \lambda _{\sigma ^j \omega }^{\frac{it}{\sqrt{n}}}-\hat{h}(0)e^{-\frac{1}{2} t^{\mathrm{T}}\varSigma ^2 t}\rightarrow 0, \quad \text {when } n\rightarrow \infty . \end{aligned}$$
(48)

The desired conclusion will follow from the dominated convergence theorem once we establish the following lemma.

Lemma 4.11

For \(\tilde{\delta } >0\) sufficiently small and \(\mathbb P\)-a.e. \(\omega \in \varOmega \), there exists \(n_0=n_0(\omega )\in \mathbb {N}\) such that for all \(n\ge n_0\) and t such that \(|t|< \tilde{\delta } \sqrt{n}\),

$$\begin{aligned} \bigg |\prod _{j=0}^{n-1} \lambda _{\sigma ^j \omega }^{\frac{it}{\sqrt{n}}} \bigg |\le e^{-\frac{1}{8} t^{\mathrm{T}}\varSigma ^2 t}. \end{aligned}$$

Proof of the lemma

We will use the same notation as in the proof of Theorem 4.8. We have that

$$\begin{aligned} \bigg |\prod _{j=0}^{n-1} \lambda _{\sigma ^j \omega }^{\frac{it}{\sqrt{n}}} \bigg |=e^{-\frac{1}{2n} \mathfrak {R}( \sum _{j=0}^{n-1} t^{\mathrm{T}} D^2 \tilde{H}(0)(\sigma ^{j+1} \omega ) t)}\cdot e^{\mathfrak {R}(\sum _{j=0}^{n-1} R(it/\sqrt{n})(\sigma ^{j+1} \omega ))}. \end{aligned}$$

In the proof of Theorem 4.8, we have shown that

$$\begin{aligned} \frac{1}{n} \sum _{j=0}^{n-1} D^2 \tilde{H}(0)(\sigma ^{j+1} \omega ) \rightarrow \varSigma ^2 \quad \text {for } \mathbb {P}\text {-a.e. } \omega \in \varOmega . \end{aligned}$$

Therefore, for \(\mathbb P\)-a.e. \(\omega \in \varOmega \) there exists \(n_0=n_0(\omega ) \in \mathbb {N}\) such that

$$\begin{aligned} e^{-\frac{1}{2n} \mathfrak {R}( \sum _{j=0}^{n-1} t^{\mathrm{T}} D^2 \tilde{H}(0)(\sigma ^{j+1} \omega ) t)} \le e^{-\frac{1}{4} t^{\mathrm{T}}\varSigma ^2 t}, \quad \text {for } n\ge n_0\text { and } t\in \mathbb {R}^d. \end{aligned}$$

Finally, recall that \(|R(it/\sqrt{n})(\omega )|\le C|t/\sqrt{n}|^3\), where \(C>0\) is some constant which does not depend on \(\omega \) and n when \(|t/\sqrt{n}|\) is small enough. Therefore, if \(|t|\le \sqrt{n}\tilde{\delta }\) and \(\tilde{\delta }\) is small enough, we have

$$\begin{aligned} e^{\mathfrak {R}(\sum _{j=0}^{n-1} R(it/\sqrt{n})(\sigma ^{j+1} \omega ))} \le e^{C|t|^3n^{-\frac{1}{2}}} \le e^{\frac{1}{8} t^{\mathrm{T}}\varSigma ^2 t}. \end{aligned}$$

Here, we have used that \(|t|^3n^{-1/2}\le \tilde{\delta } |t|^2\) and that \(t^{\mathrm{T}}\varSigma ^2 t\ge a|t|^2\) for some \(a>0\) and all \(t\in \mathbb {R}^d\), so that \(C|t|^3 n^{-\frac{1}{2}}\le \frac{1}{8} t^{\mathrm{T}}\varSigma ^2 t\) once \(C\tilde{\delta }\le a/8\). The conclusion of the lemma follows by multiplying the last two estimates. \(\square \)

Control of (IV) By (46),

$$\begin{aligned}&\sup _{s\in \mathbb {R}^d}\frac{|\varSigma |n^{d/2}}{(2\pi )^d} \bigg |\int _{\tilde{\delta } \le |t|\le \delta } e^{it \cdot s}\hat{h}(t)\int \mathcal L_{\omega }^{it, (n)}v_\omega ^0\, \mathrm{d}m \, \mathrm{d}t \bigg |\\&\quad \le CV_{\delta , \tilde{\delta }}\frac{ |\varSigma |n^{d/2}}{(2\pi )^d}\Vert \hat{h}\Vert _{L^\infty } \cdot \rho ^n \cdot \Vert v^0\Vert _\infty \rightarrow 0, \end{aligned}$$

when \(n\rightarrow \infty \), by (10) and the fact that \(\hat{h}\) is continuous. Here, \(V_{\delta , \tilde{\delta }}\) denotes the volume of \(\{t\in \mathbb {R}^d: \tilde{\delta } \le |t|\le \delta \}\). \(\square \)

Let us now discuss conditions under which (46) holds.

Lemma 4.12

Assume that:

  1. 1.

    \(\mathcal F\) is a Borel \(\sigma \)-algebra on \(\varOmega \);

  2. 2.

    \(\sigma \) has a periodic point \(\omega _0\) (whose period is denoted by \(n_0\)), and \(\sigma \) is continuous at each point that belongs to the orbit of \(\omega _0\);

  3. 3.

    \(\mathbb P(U)>0\) for any open set U that intersects the orbit of \(\omega _0\);

  4. 4.

    for any compact set \(J\subset \mathbb {R}^d\), the family of maps \(\omega \rightarrow \mathcal {L}_\omega ^{it},\,t\in J\) is uniformly continuous at the orbit points of \(\omega _0\);

  5. 5.

    for any \(t\not =0\), the spectral radius of \(\mathcal {L}_{\omega _0}^{it,(n_0)}\) is smaller than 1;

  6. 6.

    for any compact set \(J\subset \mathbb {R}^d\), there exists a constant \(B(J)>0\) such that

    $$\begin{aligned} \sup _{t\in J}\sup _{n\ge 1}\Vert \mathcal {L}_\omega ^{it,(n)}\Vert \le B(J). \end{aligned}$$
    (49)

Then, for any compact \(J\subset \mathbb {R}^d{\setminus }\{0\}\) there exist a random variable \(C:\varOmega \rightarrow (0, \infty )\) and a constant \(d=d(J)>0\) such that for \(\mathbb P\)-a.e. \(\omega \in \varOmega \) and for any \(n\ge 1\), we have that

$$\begin{aligned} \sup _{t\in J}\Vert \mathcal {L}_\omega ^{it,(n)}\Vert \le C(\omega ) e^{-nd}. \end{aligned}$$

The proof of Lemma 4.12 is identical to the proof of [31, Lemma 2.10.4]. We also refer the reader to the arguments in the proof of Lemma 4.17. Condition (49) is satisfied for the distance expanding maps considered in [31, Chapter 5] (assuming they are nonsingular). Indeed, the proof of the Lasota–Yorke inequality (see [31, Lemma 5.6.1]) proceeds similarly for vectors \(z\in \mathbb {C}^d\) instead of complex numbers. Therefore, there exists a constant \(C>0\) so that \(\mathbb P\)-almost surely, for any \(t\in \mathbb {R}^d\) and \(n\ge 1\) we have

$$\begin{aligned} \Vert \mathcal {L}_\omega ^{it,(n)}\Vert \le C(1+|t|)\sup |\mathcal {L}_\omega ^{(n)}\mathbf{1}| \end{aligned}$$

where \(\mathbf{1}\) is the function which takes the constant value 1. Note that in the circumstances of [31], \({{\,\mathrm{var}\,}}(\cdot )=v_\alpha (\cdot )\) is the Hölder constant corresponding to some exponent \(\alpha \in (0,1]\). In particular, \(\mathcal B\) contains only Hölder continuous functions and the norm \(\Vert {\cdot }\Vert _{\mathcal B}\) is equivalent to the norm \(\Vert g\Vert _\alpha =v_\alpha (g)+\sup |g|\). Therefore, by (C4), for \(\mathbb P\)-almost every \(\omega \) we have

$$\begin{aligned} \sup |\mathcal {L}_\omega ^{(n)}\mathbf{1}|=\Vert \mathcal {L}_\omega ^{(n)}\mathbf{1}\Vert _{L^\infty }\le C \end{aligned}$$

for some C which does not depend on \(\omega \) and n and hence (49) holds true.

4.6 Edgeworth and LD Expansions

Let us restrict ourselves again to the scalar case \(d=1\). Our main result here is the following Edgeworth expansion of order 1.

Theorem 4.13

Suppose that \(\varSigma ^2>0\).

  1. (i)

    The following self-normalized version of the Berry–Esseen theorem holds true:

    $$\begin{aligned} \sup _{t\in \mathbb {R}}\left| \mu _\omega (\{S_ng(\omega ,\cdot )\le t\sigma _n\})-\varPhi (t)\right| \le R_\omega n^{-\frac{1}{2}}, \end{aligned}$$
    (50)

    for some random variable \(R_\omega \), where \(\varPhi (t)\) is the standard normal distribution function and \(\sigma _n^2=\sigma _{\omega ,n}^2=\text {Var}_{\mu _\omega }(S_ng(\omega ,\cdot ))\).

  2. (ii)

    Assume, in addition, that for any compact set \(J\subset \mathbb R{\setminus }\{0\}\) we have

$$\begin{aligned} \lim _{n\rightarrow \infty }n^{1/2}\left| \int _J\int _X e^{\frac{it}{\sqrt{n}} S_n g(\omega ,x)}\mathrm{d}\mu _\omega (x)\mathrm{d}t\right| =0,\, \mathbb {P}\text {-a.s.} \end{aligned}$$
    (51)

    Let \(A_{\omega ,n}\) be a function whose derivative’s Fourier transform is \(e^{-\frac{1}{2} t^2}(1+\mathcal {P}_{\omega ,n}(t))\), where

    $$\begin{aligned} \mathcal {P}_{\omega ,n}(t)=-\frac{1}{2}\varPi _{\omega ,n}''(0)\left( \frac{t}{\sigma _n}\right) ^2+\frac{1}{2}t^2-\frac{i}{6}\varPi _{\omega ,n}'''(0)\left( \frac{t}{\sigma _n}\right) ^3. \end{aligned}$$

    Then,

    $$\begin{aligned} \lim _{n\rightarrow \infty }\sqrt{n}\sup _{t\in \mathbb R}\left| \mu _\omega (\{S_ng(\omega ,\cdot )\le t\sigma _n\})-A_{\omega ,n}(t)\right| =0. \end{aligned}$$

Before proving Theorem 4.13, let us introduce some additional notation and make some observations. It is clear that \(A'_{\omega ,n}\) has the form \(A'_{\omega ,n}(t)=Q_{\omega ,n}(t)e^{-\frac{1}{2} t^2}\), where \(Q_{\omega ,n}(t)\) is a polynomial of degree 3. In fact, if we set \(a_{\omega ,n}=\frac{1}{2}\big (1-\varPi _{\omega ,n}''(0)/\sigma _n^2\big )\) and \(b_{\omega ,n}=\frac{1}{6}\varPi _{\omega ,n}'''(0)/\sigma _n^3\), we have that

$$\begin{aligned} \sqrt{2\pi }Q_{\omega ,n}(t)=1+a_{\omega ,n}+3b_{\omega ,n}t-a_{\omega ,n}t^2-b_{\omega ,n}t^3. \end{aligned}$$
(52)

By (36), we have \(a_{\omega ,n}=\mathcal O(1/n)\), while \(b_{\omega ,n}=\mathcal O(1/\sqrt{n})\) (since \(|\varPi _{\omega ,n}'''(0)|\le cn\)). Set \(\varphi (t)=\frac{1}{\sqrt{2\pi }}e^{-\frac{1}{2} t^2}\) and \(u_{\omega ,n}=\frac{\varPi _{\omega ,n}^{(3)}(0)}{\sigma _{n}^2}\), which converges to \(\varSigma ^{-2}\int \varPi _\omega ^{(3)}(0)\mathrm{d}P(\omega )\) as \(n\rightarrow \infty \). Using the above formula for \(Q_{\omega ,n}\) together with \(a_{\omega ,n}=\mathcal O(1/n)\), we conclude that

$$\begin{aligned} \lim _{n\rightarrow \infty }\sqrt{n}\sup _{t\in \mathbb R}\left| \mu _\omega (\{S_ng(\omega ,\cdot )\le t\sigma _n\})-\varPhi (t)-u_{\omega ,n}\sigma _n^{-1}(t^2-1)\varphi (t)\right| =0. \end{aligned}$$
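
In the classical i.i.d. setting, the effect of such an order-one correction is easy to observe numerically. The sketch below (the sample sizes and the choice of centered \(\mathrm{Exp}(1)\) summands, whose third standardized cumulant equals 2, are illustrative and not tied to the random dynamical setting) compares the plain normal approximation with the corrected one:

```python
import bisect
import math
import random

def phi(t):
    """Standard normal density."""
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def Phi(t):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

random.seed(0)
n, trials, gamma3 = 20, 100000, 2.0  # gamma3: skewness of Exp(1)

# Empirical distribution of the normalized sum (sum of n Exp(1) draws - n)/sqrt(n).
samples = sorted(
    (sum(random.expovariate(1.0) for _ in range(n)) - n) / math.sqrt(n)
    for _ in range(trials)
)

def ecdf(t):
    return bisect.bisect_right(samples, t) / trials

grid = [i / 10.0 for i in range(-30, 31)]
err_clt = max(abs(ecdf(t) - Phi(t)) for t in grid)
err_edge = max(
    abs(ecdf(t) - (Phi(t) - gamma3 / (6.0 * math.sqrt(n)) * (t * t - 1.0) * phi(t)))
    for t in grid
)
assert err_edge < err_clt  # the order-one term visibly shrinks the maximal error
```

Here the correction term \(-\frac{\gamma _3}{6\sqrt{n}}(t^2-1)\varphi (t)\) plays the role of the \((t^2-1)\varphi (t)\) term in the expansion above, with the skewness entering through the third derivative of the pressure-like function.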

Remark 4.14

We remark that in the deterministic case (i.e., when \(\varOmega \) is a singleton), we have \(a_{\omega ,n}=0\) and \(\varPi _{\omega ,n}'''(0)=n\kappa _3\) for some \(\kappa _3\) which does not depend on n. Therefore, \(u_{\omega ,n}=\kappa _3\varSigma ^{-2}\) and we recover the order one deterministic Edgeworth expansion that was established in [19]. It seems unlikely that we can get the same results in the random case since this would imply that

$$\begin{aligned} \left| \varPi _{\omega ,n}^{(3)}(0)/n-\int \varPi _\omega ^{(3)}(0)\mathrm{d}P(\omega )\right| =o(n^{-\frac{1}{2}}). \end{aligned}$$

The term \(\varPi _{\omega ,n}^{(3)}(0)/n\) is an ergodic average, but such a fast rate of convergence in the strong law of large numbers fails in general even for sums of independent and identically distributed random variables. However, we note that under certain mixing assumptions on the base map \(\sigma \), a rate of order \(n^{-\frac{1}{2}}\ln n\) was obtained in [29] (see also [32]).

Remark 4.15

We note that condition (51) holds whenever (46) is satisfied.

Proof of Theorem 4.13

The purpose of the following arguments is to prove the second statement of Theorem 4.13; the proof of the first statement (the self-normalized Berry–Esseen theorem) is a by-product of these arguments. In particular, we will be using Taylor polynomials of order three of the function \(\varPi _{\omega ,n}(\cdot )\), although in order to prove the self-normalized Berry–Esseen theorem alone, second-order approximations would suffice.

Let \(t\in \mathbb {R}\). Then, by (35) and Lemma 4.7, when \(t_n=t/\sigma _{n}\) is sufficiently small, uniformly in \(\omega \) we have

$$\begin{aligned} \int e^{it_n \cdot S_n g(\omega ,\cdot )}\mathrm{d}\mu _\omega =\phi _\omega ^{it_n}(v_\omega ^0)e^{\varPi _{\omega ,n}(it_n)}+|t_n|\mathcal O(r^n). \end{aligned}$$
(53)

As in (42), since \(\phi _\omega ^{0}(v_\omega ^0)=1\) and the derivative of \(z\rightarrow \phi _\omega ^{z}(v_\omega ^0)\) vanishes at \(z=0\), we have

$$\begin{aligned} |\phi _\omega ^{it_n}(v_\omega ^0)-1|\le Ct_n^2. \end{aligned}$$

Using Lemma 4.11 and that \(\sigma _n \sim n^{\frac{1}{2}}\varSigma \), we conclude that when n is sufficiently large and \(t_n=t/\sigma _n\) is sufficiently small,

$$\begin{aligned} \left| \int e^{it_n \cdot S_n g(\omega ,\cdot )}\mathrm{d}\mu _\omega -e^{\varPi _{\omega ,n}(it_n)}\right| \le C\big (t_n^2e^{-ct^2}+|t_n|r^n\big ), \end{aligned}$$
(54)

where \(c,C>0\) are some constants. Next, by applying the first-order Taylor formula with integral remainder to the function \(t\mapsto e^{zt}\), where z is a fixed complex number, we derive that

$$\begin{aligned} \left| e^{z}-1-z\right| \le |z|^2e^{\max \{0,\mathfrak {R}(z)\}}. \end{aligned}$$
(55)
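Inequality (55) follows from the integral form of the Taylor remainder, \(e^{z}-1-z=z^2\int _0^1(1-s)e^{sz}\,\mathrm{d}s\), together with \(|e^{sz}|\le e^{\max \{0,\mathfrak {R}(z)\}}\) for \(s\in [0,1]\). A quick numerical spot check on random complex points (an illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.normal(0.0, 2.0, size=(500, 2))  # random points (x, y) -> z = x + iy

ok = True
for x, y in pts:
    z = complex(x, y)
    lhs = abs(np.exp(z) - 1.0 - z)
    rhs = abs(z) ** 2 * np.exp(max(0.0, x))  # |z|^2 e^{max(0, Re z)}
    ok = ok and lhs <= rhs + 1e-12
print(ok)  # True
```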

Since \(\sigma _n \sim n^{\frac{1}{2}}\varSigma \), Lemma 4.11 together with the fact that \(\varSigma >0\) yields that \(\mathfrak {R}(\varPi _{\omega ,n}(it_n))\le -ct^2\) when \(|t_n|\) is sufficiently small and n is large enough, where \(c>0\) is a constant which does not depend on \(\omega \), t and n. (We can clearly assume that \(c<\frac{1}{2}\).) It follows that \(\max \{0,\mathfrak {R}(t^2/2+\varPi _{\omega ,n}(it_n))\}\le (\frac{1}{2}-c)t^2\). Applying (55) with \(z=t^2/2+\varPi _{\omega ,n}(it_n)\) yields that when n is sufficiently large and \(|t_n|\) is sufficiently small,

$$\begin{aligned} \left| e^{\varPi _{\omega ,n}(it_n)}-e^{-\frac{1}{2} t^2}(1+\varPi _{\omega ,n}(it_n)+\frac{1}{2} t^2)\right| =e^{-\frac{1}{2} t^2}|e^{z}-1-z|\le e^{-ct^2}\left| \varPi _{\omega ,n}(it_n)+\frac{1}{2} t^2\right| ^2. \end{aligned}$$
(56)

Next, using the formula for the Taylor remainder of order three, we have that

$$\begin{aligned} |\varPi _{\omega ,n}(it_n)+\frac{1}{2} t^2-\mathcal {P}_{\omega ,n}(t)|\le Cnt_n^4. \end{aligned}$$
(57)

Observe also that

$$\begin{aligned} \mathcal {P}_{\omega ,n}(t)=\frac{1}{2}t^2\big (1-\varPi _{\omega ,n}''(0)/\sigma _n^2\big )-\varPi _{\omega ,n}'''(0)t^3/\sigma _n^3. \end{aligned}$$

The second term on the right-hand side is \(\mathcal O(|t|^3)n^{-\frac{1}{2}}\), while by (36) the first term is \(\mathcal O(t^2/\sigma _n^2)=\mathcal O(t^2)/n\). We conclude that

$$\begin{aligned} \left| \varPi _{\omega ,n}(it_n)+\frac{1}{2} t^2\right| \le C\max (t^2,|t|^3)n^{-\frac{1}{2}} \end{aligned}$$
(58)

and hence

$$\begin{aligned} \left| e^{\varPi _{\omega ,n}(it_n)}-e^{-\frac{1}{2} t^2}(1+\varPi _{\omega ,n}(it_n)+\frac{1}{2} t^2)\right| =e^{-\frac{1}{2} t^2}|e^{z}-1-z|\le Ce^{-ct^2}\max (t^4,t^6)/n. \end{aligned}$$
(59)

From (57) and (59), we conclude that

$$\begin{aligned} \left| e^{\varPi _{\omega ,n}(it_n)}-e^{-\frac{1}{2} t^2}(1+\mathcal {P}_{\omega ,n}(t))\right| \le C''e^{-ct^2}\max (t^4,t^6)/n. \end{aligned}$$
(60)

Finally, using the Berry–Esseen inequality we derive that

$$\begin{aligned}&\sup _{t\in \mathbb R}\left| \mu _\omega (\{S_ng(\omega ,\cdot )\le t\sigma _n\})-A_{\omega ,n}(t)\right| \nonumber \\&\le \int _0^{T}\bigg |\frac{\mu _\omega (e^{\frac{it \cdot S_ng(\omega , \cdot )}{\sigma _n}})-e^{-\frac{1}{2} t^2}(1+\mathcal {P}_{\omega ,n}(t))}{t}\bigg |\, \mathrm{d}t +C/T, \end{aligned}$$
(61)

where C is some constant. We have used here the fact that the derivative of \(A_{\omega ,n}\) is bounded by some constant (since the coefficients of the polynomial \(\mathcal {P}_{\omega ,n}\) are bounded in \(\omega \) and n). In order to establish the first assertion of the theorem, we choose T of the form \(T=\delta _0\sqrt{n}\), where \(\delta _0>0\) is sufficiently small. Indeed, observe that the above estimates imply that

$$\begin{aligned} \sup _{t\in \mathbb R}\left| \mu _\omega (\{S_ng(\omega ,\cdot )\le t\sigma _n \})-A_{\omega ,n}(t)\right| =\mathcal O(n^{-\frac{1}{2}}). \end{aligned}$$

Set \(\varphi (t)=\frac{1}{\sqrt{2\pi }}e^{-\frac{1}{2} t^2}\). Integrating both sides of the equation \(A'_{\omega ,n}(t)=Q_{\omega ,n}(t)e^{-\frac{1}{2} t^2}\), where \(Q_{\omega ,n}\) satisfies (52), and using that \(a_{\omega ,n}=\mathcal O(1/n)\), we conclude that

$$\begin{aligned} \sup _{t\in \mathbb R}\left| \mu _\omega (\{S_ng(\omega ,\cdot )\le t\sigma _n \})-\varPhi (t)-u_{\omega ,n}\sigma _n^{-1}(t^2-1)\varphi (t)\right| =\mathcal O(n^{-\frac{1}{2}}). \end{aligned}$$
(62)

Recall that \(u_{\omega ,n}=\frac{\varPi _{\omega ,n}^{(3)}(0)}{\sigma ^2_n}\), which converges to \(\varSigma ^{-2}\int \varPi _\omega ^{(3)}(0)\mathrm{d}P(\omega )\) as \(n\rightarrow \infty \), and in particular, it is bounded. Therefore, \(\sup _{t\in \mathbb {R}}|u_{\omega ,n}\sigma _n^{-1}(t^2-1)\varphi (t)|=\mathcal O(n^{-\frac{1}{2}})\), which together with (62) yields (50).

Next, in order to prove the second item, fix some \({\varepsilon }>0\) and choose T of the form \(C/T={{\varepsilon }}n^{-\frac{1}{2}}\). We then have that

$$\begin{aligned}&\int _0^{T}\bigg |\frac{\mu _\omega (e^{\frac{it \cdot S_ng(\omega , \cdot )}{\sigma _n}})-e^{-\frac{1}{2} t^2}(1+\mathcal {P}_{\omega ,n}(t))}{t}\bigg |\, \mathrm{d}t +C/T \\&\quad \le \int _0^{\delta _0 \sqrt{n}}\bigg |\frac{\mu _\omega (e^{\frac{it \cdot S_ng(\omega , \cdot )}{\sigma _n}})-e^{-\frac{1}{2} t^2}(1+\mathcal {P}_{\omega ,n}(t))}{t}\bigg |\, \mathrm{d}t \\&\qquad +\int _{\delta _0 \sqrt{n} \le |t|\le \frac{C}{{\varepsilon }}\sqrt{n} }\bigg |\frac{\mu _\omega (e^{\frac{it \cdot S_ng(\omega , \cdot )}{\sigma _n}})-e^{-\frac{1}{2} t^2}(1+\mathcal {P}_{\omega ,n}(t))}{t}\bigg |\, \mathrm{d}t +{\varepsilon }n^{-\frac{1}{2}}. \end{aligned}$$

Using (60), we see that the first integral on the above right-hand side is of order \(\mathcal O(n^{-1})\), while the second integral is \(o(n^{-\frac{1}{2}})\) by (51). \(\square \)
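The smoothing bound (61) used in the proof is an instance of the classical Esseen inequality \(\sup _x|F(x)-G(x)|\le \frac{1}{\pi }\int _{-T}^{T}\big |\frac{\hat{F}(t)-\hat{G}(t)}{t}\big |\,\mathrm{d}t+\frac{24m}{\pi T}\), where \(m=\sup |G'|\). A numerical sanity check in a toy case (uniform law versus standard normal, with the classical constants rather than those implicit in (61)):

```python
import numpy as np
from math import erf, sqrt, pi

# F: uniform on [-sqrt(3), sqrt(3)] (mean 0, variance 1); G: standard normal CDF
s3 = sqrt(3.0)
xs = np.linspace(-6.0, 6.0, 20001)
F = np.clip((xs + s3) / (2.0 * s3), 0.0, 1.0)
G = np.array([0.5 * (1.0 + erf(x / sqrt(2.0))) for x in xs])
lhs = np.abs(F - G).max()  # sup-distance between the two CDFs

T = 50.0
ts = np.linspace(1e-6, T, 500001)
dt = ts[1] - ts[0]
f_hat = np.sin(s3 * ts) / (s3 * ts)  # characteristic function of the uniform law
g_hat = np.exp(-0.5 * ts ** 2)       # characteristic function of N(0, 1)
integral = np.sum(np.abs(f_hat - g_hat) / ts) * dt  # Riemann sum over (0, T]
m = 1.0 / sqrt(2.0 * pi)             # sup of the standard normal density
rhs = (2.0 / pi) * integral + 24.0 * m / (pi * T)
print(lhs <= rhs)  # True
```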

Remark 4.16

In [29], expansions of order larger than one were obtained for some classes of interval maps under the assumption that the modulus of the characteristic function \(\varphi _n(t)\) of \(S_n g(\omega ,\cdot )\) does not exceed \(n^{-r_1}\) when \(|t|\in [K,n^{r_2}]\), where \(K,r_1,r_2\) are some constants. Of course, under such conditions we can obtain higher-order expansions also in our setup, but since we do not have examples for which this condition holds true (except for the example covered in [29]), the proof (which is very close to [29]) is omitted.

4.6.1 Some Asymptotic Expansions for Large Deviations

In this section, we again consider the scalar case when \(d=1\). We will also assume that there exist constants \(C_1,C_2,r>0\) so that for \(\mathbb P\)-a.e. \(\omega \in \varOmega \), \(z\in \mathbb {C}\) with \(|z|\le r\) and all sufficiently large \(n\in \mathbb {N}\), we have

$$\begin{aligned} C_1\le \Vert \mathcal L_\omega ^{z,(n)}\Vert /|\lambda _\omega ^{z,(n)}|\le C_2, \end{aligned}$$
(63)

where \(\lambda _\omega ^{z, (n)}=\prod _{i=0}^{n-1}\lambda _{\sigma ^i \omega }^z\). Moreover, we assume that there exists a constant \(C>0\) such that for \(\mathbb P\)-a.e. \(\omega \in \varOmega \) and for any \(t,s\in \mathbb {R}\), we have that

$$\begin{aligned} \sup _{n\in \mathbb N}\Vert \mathcal {L}_{\omega }^{{\theta }+is,(n)}\Vert /\Vert \mathcal {L}_{\omega }^{{\theta },(n)}\Vert \le C(1+|{\theta }|+|s|). \end{aligned}$$
(64)

These conditions are satisfied in the setup of [31, Chapter 5]. (The second condition follows from the arguments in the Lasota–Yorke inequality which was proved in [31, Lemma 5.6.1].)

Our results in this subsection will rely on the following lemma.

Lemma 4.17

Suppose that:

  1. 1.

    \(\mathcal F\) is the Borel \(\sigma \)-algebra on \(\varOmega \);

  2. 2.

    \(\sigma \) has a periodic point \(\omega _0\) (whose period is denoted by \(n_0\)), and \(\sigma \) is continuous at each point that belongs to the orbit of \(\omega _0\);

  3. 3.

    \(\mathbb P(U)>0\) for any open set U that intersects the orbit of \(\omega _0\);

  4. 4.

    for any compact set \(K\subset \mathbb {R}\), the family of maps \(\omega \rightarrow \mathcal {L}_\omega ^z,\,z\in K\) is uniformly continuous at the orbit points of \(\omega _0\);

  5. 5.

    for any sufficiently small \(\theta \) and \(s\not =0\), the spectral radius of \(\mathcal {L}_{\omega _0}^{\theta +is,(n_0)}\) is smaller than the spectral radius of \(\mathcal {L}_{\omega _0}^{\theta ,(n_0)}\).

Then, there exists \(r>0\) with the following property: for \(\mathbb P\)-a.e. \(\omega \) and for any compact set \(J\subset \mathbb {R}{\setminus }\{0\}\) there exist constants \(C_J(\omega )\) and \(c_J(\omega )>0\) so that for any sufficiently large n, \(\theta \in [-r,r]\) and \(s\in J\) we have

$$\begin{aligned} \Vert \mathcal {L}_{\omega }^{\theta +is,(n)}\Vert \le C_J(\omega )e^{-c_J(\omega ) n}\Vert \mathcal {L}_{\omega }^{\theta ,(n)}\Vert . \end{aligned}$$

Proof

Denote by \( r(z),\,z\in \mathbb {C}\) the spectral radius of the deterministic transfer operator \(\mathcal R_{z}:=\mathcal L_{\omega _0}^{z,(n_0)}\). Let \(J\subset \mathbb {R}{\setminus }\{0\}\) be a compact set. Since \(\mathcal R_{z}\) is continuous in z and \(r({\theta })\) is continuous around the origin, there exist \(\delta ,d_0>0\) which depend on J so that for any \({\theta }\in [-r,r]\), \(s\in J\) and \(d\ge d_0\) we have

$$\begin{aligned} \Vert \mathcal R_{{\theta }+is}^d\Vert \le (1-\delta )^{d}r({\theta })^d. \end{aligned}$$

Observe that we have also taken into account the last assumption in the statement of the lemma. Note that a deterministic version of (63) holds true with the operators \(\mathcal R_z\) and thus there is a constant \(C>0\) such that

$$\begin{aligned} \Vert \mathcal R_{\theta }^d\Vert \ge C r({\theta })^d \end{aligned}$$

for any \({\theta }\in [-r,r]\). Let \(K\subset \mathbb {R}\) be a bounded closed interval around the origin which contains J. Fix some \(d>d_0\) and let \({\varepsilon }\in (0,1/2)\) and \(\omega _1\in \varOmega \) be so that

$$\begin{aligned} \Vert \mathcal {L}_{\omega _1}^{{\theta }+is, (dn_0)}-\mathcal {L}_{\omega _0}^{{\theta }+is, (dn_0)}\Vert =\Vert \mathcal {L}_{\omega _1}^{{\theta }+is, (dn_0)}-\mathcal R_{{\theta }+is}^{d}\Vert <\varepsilon \min \{r({\theta })^d,1\}, \end{aligned}$$
(65)

for any \({\theta }\in [-r,r]\) and \(s\in K\). By (63), we have

$$\begin{aligned} 0<C_1\le \Vert \mathcal {L}_{\omega }^{{\theta }, (n)}\Vert / |\lambda _{\omega }^{{\theta }, (n)}|\le C_2<\infty , \end{aligned}$$
(66)

for some constants \(C_1\) and \(C_2\) which do not depend on \(\omega \) and n. Therefore, if \({\varepsilon }\) is small enough, then

$$\begin{aligned} 1/|\lambda _{\omega _1}^{{\theta },(dn_0)} |\le C/(\Vert \mathcal R_{{\theta }}^{d}\Vert -\varepsilon \min \{r({\theta })^d,1\})\le C'/(r({\theta })^d-\varepsilon \min \{r({\theta })^d,1\}), \end{aligned}$$
(67)

for some constants \(C, C'>0\). We conclude that

$$\begin{aligned} \Vert \mathcal {L}_{\omega _1}^{{\theta }+is, (dn_0)}\Vert / |\lambda _{\omega _1}^{{\theta }, (dn_0)} |\le C\,\frac{{\varepsilon }\min \{r({\theta })^d,1\}+(1-\delta )^{d}r({\theta })^d}{r({\theta })^d-\varepsilon \min \{r({\theta })^d,1\}}\le C''({\varepsilon }+(1-\delta )^d), \end{aligned}$$
(68)

where \(C''>0\) is another constant. By (64), we have that

$$\begin{aligned} {{\,\mathrm{ess\ sup}\,}}_{\omega \in \varOmega }\sup _{s\in J,{\theta }\in [-r,r],n\in \mathbb N}\Vert \mathcal {L}_{\omega }^{{\theta }+is,(n)}\Vert /\Vert \mathcal {L}_{\omega }^{{\theta },(n)}\Vert \le B_J<\infty , \end{aligned}$$
(69)

for some constant \(B_J\) which depends only on J. Fixing a sufficiently large d and then a sufficiently small \({\varepsilon }\), we conclude that for any \({\theta }\in [-r,r]\), \(s\in J\) and n, we have that

$$\begin{aligned} \frac{\Vert \mathcal {L}_{\omega }^{{\theta }+is, (n)}\mathcal {L}_{\omega _1}^{{\theta }+is,(dn_0)}\Vert }{|\lambda _\omega ^{{\theta },(n)}\lambda _{\omega _1}^{{\theta },(dn_0)}|}\le C_2 C'' B_J({\varepsilon }+(1-\delta )^d)<\frac{1}{2}. \end{aligned}$$

Indeed,

$$\begin{aligned} \Vert \mathcal {L}_{\omega }^{{\theta }+is,(n)}\mathcal {L}_{\omega _1}^{{\theta }+is,(dn_0)}\Vert\le & {} \Vert \mathcal {L}_{\omega }^{{\theta }+is,(n)}\Vert {\cdot }\Vert \mathcal {L}_{\omega _1}^{{\theta }+is,(dn_0)}\Vert \\&\le B_J\Vert \mathcal {L}_{\omega }^{{\theta },(n)}\Vert {\cdot } |\lambda _{\omega _1}^{{\theta },(dn_0)}|C''({\varepsilon }+(1-\delta )^d)\\\le & {} B_JC_2 |\lambda _{\omega }^{{\theta },(n)}|\cdot |\lambda _{\omega _1}^{{\theta },(dn_0)}|C''({\varepsilon }+(1-\delta )^d), \end{aligned}$$

where in the first inequality we have used the submultiplicativity of the operator norm, in the second we have used (68) and (69), and in the third we have used (66).

Finally, because of the fifth condition in the statement of the lemma and since \(r({\theta })\) is continuous in \({\theta }\) (around the origin), when r is small enough we have that (65) holds true for any \(\omega _1\in U\), \({\theta }\in [-r_0,r_0]\) and \(s\in K\), where U is a sufficiently small open neighborhood of the periodic point \(\omega _0\) and \(r_0\) depends only on the function \(r({\theta })\). By ergodicity of \(\sigma \), for \(\mathbb P\)-a.e. \(\omega \in \varOmega \) we have an infinite strictly increasing sequence \(a_n=a_n(\omega )\) of visiting times to U so that \(a_n/n\) converges to 1/P(U) as \(n\rightarrow \infty \). Thus, by considering the subsequence \(b_n=a_{ndn_0}(\omega )\) we can write \(\mathcal {L}_{\omega }^{{\theta }+is,(n)}\) as a composition of blocks of the form \(\mathcal {L}_{\omega '}^{{\theta }+is,(m)}\mathcal {L}_{\omega _1}^{{\theta }+is,(dn_0)}\) (and perhaps a single block of the form \(\mathcal {L}_{\omega ''}^{{\theta }+is,(m)}\)), where \(m\ge 0\) and \(\omega _1\in U\). The number of blocks is approximately \(nP(U)/dn_0\) (i.e., when divided by n it converges to \(P(U)/dn_0\) as \(n\rightarrow \infty \)). Therefore,

$$\begin{aligned} \Vert \mathcal {L}_{\omega }^{{\theta }+is,(n)}\Vert \le C_J(\omega )|\lambda _{\omega }^{{\theta },(n)}|2^{-nP(U)/2dn_0}, \end{aligned}$$

which together with (66) completes the proof of the lemma. \(\square \)
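The final step of the proof uses only that, by ergodicity, the orbit of a typical \(\omega \) visits U with asymptotic frequency P(U). A minimal sketch of this visit-frequency fact, with an irrational rotation standing in for the base map \(\sigma \) (an assumption made purely for illustration):

```python
import numpy as np

alpha = np.sqrt(2.0) - 1.0            # irrational rotation number: the rotation is ergodic
n = 200_000
orbit = (np.arange(n) * alpha) % 1.0  # orbit of 0 under the rotation by alpha
visits = np.flatnonzero(orbit < 0.1)  # visit times a_1 < a_2 < ... to U = [0, 0.1)
freq = len(visits) / n
print(abs(freq - 0.1) < 0.005)  # True: visit frequency ~ P(U) = Leb(U) = 0.1
```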

Our main result here is the following theorem.

Theorem 4.18

Suppose that the conclusion of Lemma 4.17 holds true and that \(\varSigma ^2>0\). Then, for any sufficiently small \(a>0\) and for \(\mathbb P\)-a.e. \(\omega \in \varOmega \) we have

$$\begin{aligned} \mu _\omega (\{x: S_n g(\omega ,x)\ge an\})\cdot e^{nI_{\omega ,n}(a)}=\frac{\phi _\omega ^{{\theta }_{\omega ,n,a}}(v_\omega ^0)\sqrt{I_{\omega ,n}''(a)}}{{\theta }_{\omega ,n,a}\sqrt{2\pi n}}(1+o(1)). \end{aligned}$$

Here,

$$\begin{aligned} I_{\omega ,n}(a)=\sup _{t\in [0,r]}(t\cdot a-\varPi _{\omega ,n}(t)/n)={\theta }_{\omega ,n,a}\cdot a-\varPi _{\omega ,n}({\theta }_{\omega ,n,a})/n, \end{aligned}$$

where \(r>0\) is any sufficiently small number.

Remark 4.19

Set

$$\begin{aligned} I(a)=\sup _{t\in [0,r]}(t\cdot a-\varLambda (t))={\theta }_a\cdot a-\varLambda ({\theta }_a). \end{aligned}$$

Then,

$$\begin{aligned} \lim _{n\rightarrow \infty } I_{\omega ,n}(a)=I(a)\,\,\text { and }\,\,\lim _{n\rightarrow \infty } {\theta }_{\omega ,n,a}={\theta }_a. \end{aligned}$$

Furthermore, we have that \(\lim _{n\rightarrow \infty } I''_{\omega ,n}(a)=I''(a)\) (using the duality of Fenchel–Legendre transforms).
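The duality invoked here states that for a smooth strictly convex \(\varLambda \), the Legendre transform \(I(a)=\sup _t(t\cdot a-\varLambda (t))\) satisfies \(I''(a)=1/\varLambda ''({\theta }_a)\). A numerical sketch with \(\varLambda (t)=\log \cosh t\), the cumulant generating function of a Rademacher variable (chosen only for illustration):

```python
import numpy as np

Lam = lambda t: np.log(np.cosh(t))  # cumulant generating function, convex
ts = np.linspace(-5.0, 5.0, 200001)

def I(a):
    # Fenchel-Legendre transform computed by maximizing over a fine grid
    return np.max(a * ts - Lam(ts))

a, h = 0.3, 1e-3
I2 = (I(a + h) - 2.0 * I(a) + I(a - h)) / h ** 2  # central difference for I''(a)
theta_a = np.arctanh(a)                           # maximizer: Lam'(theta_a) = a
Lam2 = 1.0 - np.tanh(theta_a) ** 2                # Lam''(theta_a)
print(abs(I2 * Lam2 - 1.0) < 1e-2)  # True: I''(a) * Lam''(theta_a) = 1
```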

Proof

The proof follows the general scheme used in the proof of [20, Theorem 2.2] together with arguments similar to the ones in the proof of Theorem 4.13. Therefore, we will only provide a sketch of the arguments. Let a be sufficiently small. Denote by \(F_{\omega ,n}\) the distribution of \(S_n g(\omega ,\cdot )\) and set

$$\begin{aligned} \mathrm{d}\tilde{F}_{\omega ,n}(x)=\left( e^{\theta _{\omega ,n,a}x}/\lambda _{\omega }^{\theta _{\omega ,n,a},(n)}\right) \,\mathrm{d}F_{\omega ,n}(x). \end{aligned}$$

Note that \(\tilde{F}_{\omega ,n}\) is a finite measure, which in general is not a probability measure. Set \(G_{\omega ,n}(x)=\tilde{F}_{\omega ,n}((-\infty ,x\sqrt{n}+an])\). Arguing as in the proof of [20, Theorem 2.3] (and using the conclusion of Lemma 4.17), it is enough to show that the non-normalized distribution functions \(G_{\omega ,n}\) admit Edgeworth expansions of order one (see Lemmas 3.2 and 3.3 in [20]). Observe that (when |s| is sufficiently small),

$$\begin{aligned} \hat{G}_{\omega ,n}(s\sqrt{n})= & {} (e^{-isna}/\lambda _{\omega }^{{\theta }_{\omega ,n,a},(n)})\int e^{({\theta }_{\omega ,n,a}+is) S_n g(\omega ,\cdot )}\mathrm{d}\phi _\omega ^0\nonumber \\= & {} \bar{\mu }_{\omega ,n}(s)\phi _{\omega }^{{\theta }_{\omega ,n,a}+is}(v_\omega ^0)+\delta _{\omega ,n}(s), \end{aligned}$$
(70)

where

$$\begin{aligned} \bar{\mu }_{\omega ,n}(s)&=e^{-iasn}\lambda _\omega ^{{\theta }_{\omega ,n,a}+is,(n)}/\lambda _\omega ^{{\theta }_{\omega ,n,a},(n)}\\&=e^{\varPi _{\omega ,n}({\theta }_{\omega ,n,a}+is)-\varPi _{\omega ,n}({\theta }_{\omega ,n,a})-ians}\\&=e^{\varPi _{\omega ,n}({\theta }_{\omega ,n,a}+is)-\varPi _{\omega ,n}({\theta }_{\omega ,n,a})-i\varPi _{\omega ,n}'(0)s}, \end{aligned}$$

and \(\delta _{\omega ,n}(z)\) is a holomorphic function of z such that, uniformly in \(\omega \), we have \(\delta _{\omega ,n}(z)=\mathcal O(r^n)\) for some \(r\in (0,1)\) (and hence all of the derivatives of \(\delta _{\omega ,n}\) at zero are at most of the same order). By arguing as in the proof of Theorem 4.13, we obtain Edgeworth expansions of order one for \(G_{\omega ,n}\). \(\square \)
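The change of measure \(\mathrm{d}\tilde{F}_{\omega ,n}(x)=\big (e^{{\theta }x}/\lambda \big )\,\mathrm{d}F_{\omega ,n}(x)\) used above is a form of exponential tilting: it shifts the mass of the distribution toward the large-deviation level. A hedged sketch in the iid Gaussian case (outside the operator-theoretic setting of the proof, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 0.5
X = rng.normal(0.0, 1.0, 100_000)      # iid standard normal increments
lam = np.mean(np.exp(theta * X))       # empirical normalizer, ~ E[e^{theta X}]
w = np.exp(theta * X) / (lam * len(X)) # tilted probability weights
tilted_mean = np.sum(w * X)            # mean under the tilted measure
# for N(0,1) the tilted mean is Lambda'(theta) = theta
print(abs(tilted_mean - theta) < 0.05)
```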

5 Hyperbolic Dynamics

The purpose of this section is to briefly indicate that almost all of our main results can be extended to the class of random hyperbolic dynamics introduced in [17, Sect. 2]. We stress that the spectral approach developed in [15] for random piecewise expanding dynamics was extended to the random hyperbolic case in [17] for real-valued observables. By combining the techniques developed in the present paper with those in [17], we can now treat the case of vector-valued observables. In addition, we are not only able to provide versions of the results in [17, Sects. 7 and 8] for vector-valued observables, but we can also establish versions of almost all other results covered in the present paper (which have not been established previously even for real-valued observables).

Let X be a finite-dimensional \(C^\infty \) compact connected Riemannian manifold. Furthermore, let T be a topologically transitive Anosov diffeomorphism of class \(C^{r+1}\) for \(r>2\). As before, let \((\varOmega , \mathcal F, \mathbb P)\) be a probability space such that \(\varOmega \) is a Borel subset of a separable, complete metric space. Furthermore, let \(\sigma :\varOmega \rightarrow \varOmega \) be a homeomorphism. As in [17, Sect. 3], we now build a cocycle \((T_\omega )_{\omega \in \varOmega }\) such that all \(T_\omega \)’s are Anosov diffeomorphisms that belong to a sufficiently small neighborhood of T in the \(C^{r+1}\) topology on X. Furthermore, we require that \(\omega \rightarrow T_\omega \) is measurable. Let \(\mathcal {L}_\omega \) be the transfer operator associated with \(T_\omega \). It was verified in [17, Sect. 3] that conditions (C0) and (C2)–(C4) hold, with:

  • \(\mathcal B=(\mathcal B, \Vert \cdot \Vert _{1,1})\) is the space \(\mathcal B^{1,1}\), which belongs to the class of anisotropic Banach spaces introduced by Gouëzel and Liverani [24]. We stress that in this setting, the second alternative in (C0) holds. Namely, \(\mathcal B\) is separable and the cocycle of transfer operators is strongly measurable;

  • (C3) holds with constants \(\alpha ^N\) and \(\beta ^N\).

We recall that elements of \(\mathcal B\) are distributions of order 1. By \(h(\varphi )\), we will denote the action of \(h\in \mathcal B\) on a test function \(\varphi \). We note that in this setting, it was proved in [17, Lemma 3.5 and Proposition 3.6] that the version of Lemma 3.4 holds true. Moreover, one can show (see [17, Propositions 3.3 and 3.6]) that the top Oseledets space \(Y(\omega )\) is spanned by a Borel probability measure \(\mu _\omega \) on X.

We now consider a suitable class of observables. Let us fix a measurable map \(g:\varOmega \times X\rightarrow \mathbb {R}^d\) such that:

  • \(g(\omega , \cdot )\in C^r\) and \({{\,\mathrm{ess\ sup}\,}}_{\omega \in \varOmega } \Vert g(\omega , \cdot )\Vert _{C^r}<\infty \);

  • for \(\mathbb P\)-a.e. \(\omega \in \varOmega \) and \(1\le i\le d\),

    $$\begin{aligned} \int _X g^i(\omega , \cdot ) \, \mathrm{d}\mu _\omega =0. \end{aligned}$$

We recall (see [17, p. 634]) that for \(h\in \mathcal B\) and \(g\in C^r(X, \mathbb {C})\), we can define \(g\cdot h\in \mathcal B\). Furthermore, the action of \(g\cdot h\) as a distribution is given by

$$\begin{aligned} (g\cdot h)(\varphi )=h(g\varphi ), \quad \varphi \in C^1(X, \mathbb C). \end{aligned}$$

This enables us to introduce twisted transfer operators. Indeed, for \(\theta \in \mathbb {C}^d\) we introduce \(\mathcal {L}_\omega ^\theta :\mathcal B\rightarrow \mathcal B\) by

$$\begin{aligned} \mathcal {L}_\omega ^\theta h=\mathcal {L}_\omega (e^{\theta \cdot g(\omega , \cdot )}\cdot h), \quad h\in \mathcal B. \end{aligned}$$

By arguing as in the proof of [17, Proposition 4.3], one can establish the version of Lemma 3.10 in this setting.

Let us now introduce appropriate versions of spaces \(\mathcal S\) and \(\mathcal S'\) from Sect. 3.5 in the present context. Let \(\mathcal S'\) denote the space of all measurable maps \(\mathcal V:\varOmega \rightarrow \mathcal B\) such that

$$\begin{aligned} \Vert \mathcal V\Vert _\infty :={{\,\mathrm{ess\ sup}\,}}_{\omega \in \varOmega }\Vert \mathcal V(\omega )\Vert _{1,1}<\infty . \end{aligned}$$

Then, \((\mathcal S', \Vert \cdot \Vert _\infty )\) is a Banach space. Let \(\mathcal S\) consist of all \(\mathcal V\in \mathcal S'\) with the property that \(\mathcal V(\omega )(1)=0\) for \(\mathbb P\)-a.e. \(\omega \in \varOmega \), where 1 denotes the observable taking the value 1 at all points. Then, \(\mathcal S\) is a closed subspace of \(\mathcal S'\) (see [17, p. 641]).

For \(\theta \in \mathbb C^d\) and \(\mathcal W\in \mathcal S\), set

$$\begin{aligned} F(\theta , \mathcal W)(\omega )=\frac{\mathcal L_{\sigma ^{-1}\omega }^\theta (\mathcal W(\sigma ^{-1}\omega )+\mu _{\sigma ^{-1}\omega })}{\mathcal L_{\sigma ^{-1}\omega }^\theta (\mathcal W(\sigma ^{-1}\omega )+\mu _{\sigma ^{-1}\omega })(1)}-\mathcal W(\omega )-\mu _\omega , \quad \omega \in \varOmega . \end{aligned}$$

By arguing as in the proofs of Lemma 3.12 and [17, Lemma 5.3], we find that F is a well-defined analytic map on \(\mathcal D=\{\theta \in \mathbb C^d: |\theta | \le \epsilon \} \times B_{\mathcal S}(0, R)\) for some \(\epsilon , R>0\), where \(B_{\mathcal S}(0, R)\) denotes the open ball in \(\mathcal S\) of radius R centered at the origin.

The following is a version of Lemma 3.13 in the present setting.

Lemma 5.1

By shrinking \(\epsilon >0\) if necessary, there exists a map \(O:\{ \theta \in \mathbb {C}^d: |\theta |<\epsilon \} \rightarrow \mathcal {S}\), analytic in \(\theta \), such that

$$\begin{aligned} F(\theta , O(\theta ))=0. \end{aligned}$$
(71)

Proof

We first note (see [17, p. 636]) that there exist \(D, \lambda >0\) such that

$$\begin{aligned} \Vert \mathcal {L}_\omega ^{(n)} h\Vert _{1,1}\le De^{-\lambda n}\Vert h\Vert _{1,1}, \quad \text {for } h\in \mathcal B, h(1)=0\text { and } n\in \mathbb {N}. \end{aligned}$$

Moreover, the same arguments as in the proof of Proposition 6.4 (see also [17, Proposition 5.4]) yield that

$$\begin{aligned} (D_{d+1}F(0,0) \mathcal W)(\omega )=\mathcal {L}_{\sigma ^{-1}\omega }\mathcal W(\sigma ^{-1}\omega )-\mathcal W(\omega ), \quad \text {for } \omega \in \varOmega \text { and } \mathcal W\in \mathcal S. \end{aligned}$$

Now by arguing exactly as in the proof of Lemma 3.13, we conclude that \(D_{d+1}F(0, 0)\) is invertible and thus the desired conclusion follows from the implicit function theorem.\(\square \)

Let \(\varLambda (\theta )\) be the largest Lyapunov exponent associated with the twisted cocycle \(\mathcal {L}^\theta =(\mathcal {L}_\omega ^\theta )_{\omega \in \varOmega }\). Let

$$\begin{aligned} \mu _\omega ^\theta := \mu _\omega +O(\theta )(\omega ), \quad \text {for } \theta \in \mathbb C^d, |\theta |<\epsilon . \end{aligned}$$

Observe that \(\mu _\omega ^\theta (1) =1\) and by the previous lemma, \(\theta \mapsto \mu _\omega ^\theta \) is analytic. Let us define

$$\begin{aligned} \hat{\varLambda } (\theta ) := \int _\varOmega \log \Big |\mu _\omega ^\theta ( e^{\theta \cdot g(\omega , \cdot )} ) \Big |\, \mathrm{d}\mathbb {P}(\omega ), \end{aligned}$$

and

$$\begin{aligned} \lambda _\omega ^\theta := \mu _\omega ^\theta ( e^{\theta \cdot g(\omega , \cdot )}) =( \mathcal {L}_\omega ^\theta \mu _\omega ^\theta )(1). \end{aligned}$$

The proof of the following result is analogous to the proof of [17, Lemma 6.1] (see also the Lemmas in Sect. 3.6).

Lemma 5.2

  1. 1.

    For every \(\theta \in B_{\mathbb {C}^d}(0,\epsilon ):= \{ \theta \in \mathbb {C}^d: |\theta |<\epsilon \}\), we have \( \hat{\varLambda } (\theta )\le \varLambda (\theta )\).

  2. 2.

    \(\hat{\varLambda }\) is differentiable on a neighborhood of 0, and for each \(i\in \{1, \ldots , d\}\), we have that

    $$\begin{aligned} D_i\hat{\varLambda } (\theta )= \mathfrak {R}\Bigg ( \int _\varOmega \frac{ \overline{\lambda _\omega ^\theta } ( \mu _\omega ^\theta ( g^i(\omega , \cdot )e^{\theta \cdot g(\omega , \cdot )})+ (D_i O(\theta )) (\omega ) (e^{\theta \cdot g(\omega , \cdot )}) )}{|\lambda _\omega ^\theta |^2}\, \mathrm{d}\mathbb {P}(\omega ) \Bigg ), \end{aligned}$$

    where \(D_i\) denotes the derivative with respect to the ith component of \(\theta \).

  3. 3.

    For \(i\in \{1, \ldots , d\}\), we have that \(D_i \hat{\varLambda }(0)=0\).

Lemma 5.3

  1. 1.

    For \(\theta \in \mathbb C^d\) sufficiently close to 0, the twisted cocycle \(\mathcal L^\theta =(\mathcal {L}_\omega ^\theta )_{\omega \in \varOmega }\) is quasi-compact. Furthermore, the top Oseledets space of \(\mathcal L^\theta \) is one dimensional.

  2. 2.

    The map \(\theta \mapsto \varLambda (\theta )\) is differentiable near 0 and \(D_i \varLambda (0)=0\) for \(i\in \{1, \ldots , d\}\).

Proof

The quasi-compactness of \(\mathcal L^\theta \) for \(\theta \) close to 0, as well as one dimensionality of the associated top Oseledets space, can be obtained by repeating the arguments in the proof of [15, Theorem 3.12] (which require the Lasota–Yorke inequalities obtained in [18, Lemma 3]). Furthermore, the same argument as in the proof of [15, Corollary 3.14] implies that \(\varLambda \) and \(\hat{\varLambda }\) coincide on a neighborhood of 0, which gives the second statement of the lemma. \(\square \)

By [18, Proposition 2], we have that there exists a positive semi-definite \(d\times d\) matrix \(\varSigma ^2\) such that for \(\mathbb P\)-a.e. \(\omega \in \varOmega \), (29) holds. Furthermore, the elements of \(\varSigma ^2\) are given by (30).

The following is a version of Lemma 3.19 in the present context.

Lemma 5.4

We have that \(\varLambda \) is of class \(C^2\) on a neighborhood of 0 and \(D^2 \varLambda (0)=\varSigma ^2\), where \(D^2\varLambda (0)\) denotes the Hessian of \(\varLambda \) in 0.

Proof

The proof is completely analogous to that of Lemma 3.19 and thus we only point out the small adjustments that need to be made. Namely, in the present context we have that

$$\begin{aligned} D_i \lambda _\omega ^\theta = \mu _\omega ^\theta (g^i (\omega , \cdot )e^{\theta \cdot g(\omega , \cdot )}) +D_i O(\theta ) (\omega ) (e^{\theta \cdot g(\omega , \cdot ) }), \end{aligned}$$

and

$$\begin{aligned} D_{ij}\lambda _\omega ^\theta&= \mu _\omega ^\theta (g^i (\omega ,\cdot )g^j (\omega , \cdot )e^{\theta \cdot g(\omega , \cdot )} ) + D_i O(\theta ) (\omega ) (g^j (\omega , \cdot )e^{\theta \cdot g(\omega , \cdot )}) \\&\quad + D_j O(\theta ) (\omega ) (g^i (\omega , \cdot )e^{\theta \cdot g(\omega , \cdot )} )+ D_{ij}O(\theta ) (\omega ) (e^{\theta \cdot g(\omega , \cdot )}), \end{aligned}$$

for \(1\le i, j\le d\). Due to the centering condition for g and the fact that \(D_iO(0)\in \mathcal S\), we have that \(D_i \lambda _\omega ^\theta |_{\theta =0}=0\) for \(1\le i\le d\). In addition, since \(D_{ij}O(0)\in \mathcal S\), we have that

$$\begin{aligned} D_{ij}\lambda _\omega ^\theta |_{\theta =0} =\mu _\omega ( g^i (\omega ,\cdot )g^j (\omega , \cdot )) + D_i O(0) (\omega ) (g^j (\omega , \cdot ))+D_j O(0) (\omega ) (g^i(\omega , \cdot )), \end{aligned}$$

and therefore,

$$\begin{aligned} D_{ij} \varLambda (0)&= \mathfrak {R}\bigg (\int _{\varOmega \times X} g^i (\omega ,x)g^j (\omega , x)\, \mathrm{d}\mu (\omega , x) +\int _{\varOmega } D_i O(0) (\omega ) (g^j (\omega , \cdot )) \, \mathrm{d}\mathbb P(\omega ) \\&\quad +\int _{\varOmega } D_j O(0) (\omega ) (g^i (\omega , \cdot ))\, \mathrm{d}\mathbb P(\omega ) \bigg ), \end{aligned}$$

for \(1\le i, j\le d\). The rest of the proof proceeds exactly as the proof of Lemma 3.19, by taking into account that

$$\begin{aligned} D_i O(0) (\omega )=\sum _{n=1}^\infty \mathcal L_{\sigma ^{-n} \omega }^{(n)} (g^i(\sigma ^{-n} \omega , \cdot ) \cdot \mu _{\sigma ^{-n} \omega }), \quad 1\le i\le d. \end{aligned}$$

\(\square \)

The choice of bases for the top Oseledets spaces \(Y_\omega ^\theta \) and \(Y_\omega ^{*\theta }\) can now be made as in Sect. 4.1.

5.1 Limit Theorems

In the preceding discussion, we have established all the preparatory material (analogous to that for the piecewise expanding case) needed for limit theorems in the context of random hyperbolic dynamics. The following is a version of Lemma 4.1 in the present context. The proof is again the same as the proof of [15, Lemma 4.2] (and relies only on the Oseledets decomposition). We sketch it for the reader's convenience.

Lemma 5.5

Let \(\theta \in \mathbb {C}^d\) be sufficiently close to 0. Furthermore, let \(h\in \mathcal B\) be such that \(\phi ^\theta _\omega (h) \ne 0\). Then,

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n} \log \Big | h( e^{\theta \cdot S_ng(\omega ,\cdot )}) \Big | = \varLambda (\theta ) \quad \text {for } \mathbb {P}\text {-a.e. } \omega \in \varOmega . \end{aligned}$$

Proof

We use the notation of Sect. 4.1 adapted to the present setting. Given \(h\in \mathcal {B}\), we write \(h=\phi ^\theta _\omega (h) \mu ^\theta _\omega +h^\theta _\omega \), where \(h^\theta _\omega \in H^\theta _\omega \). Then,

$$\begin{aligned} \mathcal {L}^{\theta ,(n)}_\omega h=\left( \prod _{i=0}^{n-1}\lambda _{\sigma ^i\omega }^\theta \right) \phi ^\theta _\omega (h) \mu ^\theta _{\sigma ^{n-1}\omega }+ \mathcal {L}^{\theta ,(n)}_\omega h_\omega ^\theta . \end{aligned}$$

By the multiplicative ergodic theorem, we have for \(\mathbb P\)-a.e. \(\omega \in \varOmega \) that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\log \Vert \mathcal {L}^{\theta ,(n)}_{\omega }|_{H_\omega ^\theta }\Vert <\varLambda (\theta ). \end{aligned}$$
(72)

Thus, we have that for \(\mathbb {P}\text {-a.e. } \omega \in \varOmega \)  (since \(\phi ^\theta _\omega (h) \ne 0\)),

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n} \log \Big | h( e^{\theta \cdot S_ng(\omega ,\cdot )} ) \Big |&= \lim _{n\rightarrow \infty }\frac{1}{n}\log \Big | \mathcal {L}^{\theta ,(n)}_\omega h (1) \Big | \\&= \max \bigg \{ \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{i=0}^{n-1}\log |\lambda _{\sigma ^i\omega }^\theta |, \lim _{n\rightarrow \infty } \frac{1}{n} \log | \mathcal {L}^{\theta ,(n)}_\omega h_\omega ^\theta (1) | \bigg \}\\&=\varLambda (\theta ), \end{aligned}$$

where in the last step we have used (72) and the equality

$$\begin{aligned} \varLambda (\theta )=\lim _{n\rightarrow \infty } \frac{1}{n} \log \Vert \mathcal {L}_\omega ^{\theta , (n)} \mu _\omega ^\theta \Vert _{1,1}= \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{i=0}^{n-1}\log |\lambda _{\sigma ^i\omega }^\theta |. \end{aligned}$$

\(\square \)

The previous lemma readily implies the version of Theorem 4.2 in the present context. Moreover, we have the following version of Theorem 4.4.

Theorem 5.6

Let \((a_n)_n\) be a sequence in \(\mathbb {R}\) such that \(\lim _{n\rightarrow \infty }\frac{a_n}{\sqrt{n}}=\infty \) and \(\lim _{n\rightarrow \infty }\frac{a_n}{n}=0\). Then, for \(\mathbb P\)-a.e. \(\omega \in \varOmega \) and any \(\theta \in \mathbb R^d\), we have that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{a_n^2/n}\log \mathbb E[e^{\theta \cdot S_ng (\omega , \cdot )/c_n}]=\frac{1}{2}\theta ^{\mathrm{T}}\varSigma ^2\theta , \end{aligned}$$

where \(c_n=n/a_n\). Consequently, when \(\varSigma ^2\) is positive definite, we have that:

(i)

    for any closed set \(A\subset \mathbb R^d\),

    $$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{1}{a_n^2/n}\log \mu _\omega (\{S_n g(\omega ,\cdot )/a_n\in A\})\le -\frac{1}{2} \inf _{x\in A}x^{\mathrm{T}}\varSigma ^{-2} x; \end{aligned}$$
(ii)

    for any open set \(A\subset \mathbb R^d\) we have

    $$\begin{aligned} \liminf _{n\rightarrow \infty }\frac{1}{a_n^2/n}\log \mu _\omega (\{S_n g(\omega ,\cdot )/a_n\in A\})\ge -\frac{1}{2} \inf _{x\in A}x^{\mathrm{T}}\varSigma ^{-2} x, \end{aligned}$$

    where \(\varSigma ^{-2}\) denotes the inverse of \(\varSigma ^2\).
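As a purely illustrative sanity check (not part of the proof), the normalization in Theorem 5.6 can be tested on a hypothetical toy model in which \(S_ng(\omega ,\cdot )\) is replaced by a sum of n i.i.d. centered Gaussian variables with variance \(\sigma ^2\) (so \(d=1\) and \(\varSigma ^2=\sigma ^2\)). Using the exact Gaussian moment generating function, the scaled quantity equals \(\frac{1}{2}\theta ^2\sigma ^2\) for every n, consistent with the stated limit:

```python
def scaled_log_mgf(n, theta, sigma2, a_exponent=0.75):
    """Toy stand-in for (1/(a_n^2/n)) * log E[exp(theta * S_n / c_n)],
    where S_n is a sum of n i.i.d. N(0, sigma2) variables,
    a_n = n**a_exponent (so that sqrt(n) << a_n << n), and c_n = n / a_n.
    The exact Gaussian MGF, E[exp(t * S_n)] = exp(n * sigma2 * t**2 / 2),
    is used in closed form."""
    a_n = n ** a_exponent
    c_n = n / a_n
    log_mgf = n * sigma2 * (theta / c_n) ** 2 / 2.0
    return log_mgf / (a_n ** 2 / n)

# In the Gaussian toy model the scaled quantity is constant in n and
# already equals theta**2 * sigma2 / 2 (up to rounding):
for n in (10, 1000, 10**6):
    print(scaled_log_mgf(n, theta=1.3, sigma2=2.0))
```

For non-Gaussian summands the equality would of course hold only in the limit; the point of the toy computation is merely the bookkeeping of the normalizations \(a_n\) and \(c_n=n/a_n\).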

Proof

The proof proceeds exactly as the proof of Theorem 4.4 by replacing (35) with

$$\begin{aligned} \int _X e^{\theta \cdot S_ng(\omega ,\cdot )}\mathrm{d}\mu _\omega = \mathcal {L}_\omega ^{\theta , (n)} \mu _\omega (1)= \phi _\omega ^{\theta }(\mu _\omega )e^{\varPi _{\omega ,n}(\theta )}+ \mathcal L^{\theta ,(n)}_\omega (\mu _\omega -\phi _\omega ^{\theta }(\mu _\omega )\mu _\omega ^\theta )(1). \end{aligned}$$

\(\square \)
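Heuristically (a sketch only; the complete argument is that of Theorem 4.4), the normalization in Theorem 5.6 can be read off from the second-order Taylor expansion of \(\theta \mapsto \varPi _{\omega ,n}(\theta )\) at 0: assuming, as in Sect. 4, that g is fiberwise centered, the first-order term vanishes and the Hessian is governed by \(\varSigma ^2\), so that after the substitution \(\theta \mapsto \theta /c_n\),

$$\begin{aligned} \frac{1}{a_n^2/n}\varPi _{\omega ,n}(\theta /c_n)\approx \frac{1}{a_n^2/n}\cdot \frac{n}{2c_n^2}\,\theta ^{\mathrm{T}}\varSigma ^2\theta =\frac{1}{2}\theta ^{\mathrm{T}}\varSigma ^2\theta , \end{aligned}$$

while the remaining term in the identity displayed in the proof is exponentially negligible due to the spectral gap.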

One can now establish the Berry–Esseen theorem, Edgeworth expansions, the local CLT, and large and moderate deviations exactly as in the case of random piecewise expanding dynamics, with almost identical proofs. We remark that Lemma 4.12 holds true for general cocycles \(\mathcal {L}_\omega ^{it}\) acting on a Banach space (see [31, Lemma 2.10.4]). We also note that (49) holds true in our case without any additional assumptions. Indeed, this follows exactly as in the scalar case [17, Lemma 9.3] (see the arguments in the proof of [18, Lemma 4]).

Regarding exponential concentration inequalities, in the present setting we are currently unable to obtain a version of Proposition 4.5. The reason is that the proof of Proposition 4.5 relies on the martingale approach. At present, there exists only one paper (namely, [13]) that explores the martingale method in the context of anisotropic Banach spaces adapted to hyperbolic dynamics; however, it is restricted to the case of deterministic dynamics, and it is not clear whether its techniques can be extended to random dynamics. The other limit theorem which we cannot obtain for random Anosov maps is the large deviation expansion of Theorem 4.18. The issue here is that, in contrast to the case of expanding maps, it is not clear to us when the additional assumption (64) holds true.

Remark 5.7

We emphasize that it was convenient for us to use the class of anisotropic Banach spaces introduced in [24], since we could refer to the previous work in [17, 18]. In principle, one could use any class of separable anisotropic Banach spaces (in the nonseparable case, we would need to restrict to the first alternative in (C0)) which are stable under small perturbations: the anisotropic Banach spaces associated with two Anosov diffeomorphisms T and \(T'\) coincide whenever T and \(T'\) are sufficiently close. We refer to [9] for an excellent survey on anisotropic Banach spaces for hyperbolic dynamics, and to [6] for yet another interesting class of spaces introduced recently.