1 Introduction

Observe n i.i.d. random variables \(X_1,\ldots ,X_n\) with values in a measurable space \((E,{\mathcal {E}})\) and assume that their common distribution \(P^{\star }\) belongs to a family \({\mathscr {M}}\) of candidate probabilities, or at least lies close enough to it in a suitable sense. We consider the problem of estimating \(P^{\star }\) from the observation of \({\varvec{X}}=(X_1,\ldots ,X_n)\) and we evaluate the performance of an estimator with values in \({\mathscr {M}}\) by means of a given loss function \(\ell :{\mathscr {P}}\times {\mathscr {M}}\rightarrow {\mathbb {R}}_{+}\), where \({\mathscr {P}}\) denotes a set of probabilities containing \(P^{\star }\).

Our approach to solve this estimation problem has a Bayesian flavour. We endow \({\mathscr {M}}\) with a \(\sigma \)-algebra \({\mathcal {A}}\) and a probability measure \(\pi \) that plays the same role as the prior in the classical Bayes paradigm. Our aim is to design a posterior distribution \({\widehat{\pi }}_{{\varvec{X}}}\), solely based on \({\varvec{X}}\) and the choice of \(\ell \), that concentrates its mass, with a probability close to one, on an \(\ell \)-ball, namely a set of the form

$$\begin{aligned} {\mathscr {B}}(P^{\star },r)=\left\{ {P\in {\mathscr {M}},\; \ell (P^{\star },P)\leqslant r}\right\} \quad \text {with}\quad r>0. \ \end{aligned}$$
(1)

This means that with a probability close to 1, a point \({{\widehat{P}}}\) which is randomly drawn according to our (random) distribution \({\widehat{\pi }}_{{\varvec{X}}}\) is likely to estimate \(P^{\star }\) with an accuracy (with respect to the chosen loss \(\ell \)) not larger than r. Our objective is to design \({\widehat{\pi }}_{{\varvec{X}}}\) in such a way that this concentration property holds for small values of r and under mild assumptions on \(P^{\star }\) and \({\mathscr {M}}\).

In the literature, many authors have studied the concentration properties of the classical Bayes posterior distribution on Hellinger balls. We refer to the pioneering papers by van der Vaart and his co-authors—see for example Ghosal, Ghosh and van der Vaart [19]. They show that the concentration property around \(P^{\star }\) holds, as n tends to infinity, provided that the prior \(\pi \) puts enough mass on sets of the form \({\mathcal {K}}(P^{\star },{\varepsilon })=\{P\in {\mathscr {M}},\; K(P^{\star },P)<{\varepsilon }\}\) where \({\varepsilon }\) is a positive number and \(K(P^{\star },P)\) the Kullback–Leibler divergence between \(P^{\star }\) and P. This assumption may, however, be quite restrictive even in the favorable situation where \(P^{\star }\) belongs to the model \({\mathscr {M}}\). Such sets may indeed be empty, and the condition therefore unsatisfied, when the probabilities in \({\mathscr {M}}\) are not equivalent. This is for example the case when \({\mathscr {M}}\) is the set of all uniform distributions \(P_{\theta }\) on \([\theta -1/2,\theta +1/2]\), with \(\theta \in {\mathbb {R}}\), although the problem of estimating \(P^{\star }\in {\mathscr {M}}\) in this setting is quite easy, even in the Bayesian paradigm. The assumption appears even more restrictive when the probability \(P^{\star }\) does not belong to \({\mathscr {M}}\), that is, when the model is misspecified. For example, if the distributions in \({\mathscr {M}}\) are all equivalent and R is a probability that is singular with respect to \({\overline{P}}\in {\mathscr {M}}\), \({\mathcal {K}}(P^{\star },{\varepsilon })\) is empty for \(P^{\star }=(1-10^{-10}){\overline{P}} +10^{-10}R\) although \(P^{\star }\) and \({\overline{P}}\) are statistically indistinguishable from any n-sample of realistic size.

Unfortunately, it is in general impossible to get rid of the restrictive conditions we have mentioned above. It is well known that the Bayes posterior distribution can be unstable in case of a misspecification of the model. Examples that illustrate this weakness have been given in Jiang and Tanner [21] and Baraud and Birgé [6] for instance. This instability is due to the fact that the Bayes posterior distribution is based on the log-likelihood function and similar issues are known for the maximum likelihood estimator.

In order to obtain the concentration and stability properties we look for, we replace the log-likelihood function by a more stable one. Substituting another function for the log-likelihood is not new in the literature and leads to what are called quasi-posterior distributions. The resulting estimators, called quasi-Bayesian estimators or Laplace-type estimators, have been studied by various statisticians, among which Chernozhukov and Hong [18] and Bissiri et al. [16] (we also refer to the references therein). These papers, however, do not address the problem of misspecification. In contrast, it is addressed in Jiang and Tanner [21] for performing variable selection in the logistic model. The authors show that the classical Bayesian approach is no longer reliable when the model is slightly misspecified while their Gibbs posterior distribution performs well and thus offers a much safer alternative. The problem of estimating a high-dimensional parameter \(\theta \in {\mathbb {R}}^{d}\) under a sparsity condition was considered in Atchadé [2]. His quasi-posterior distribution is obtained by replacing the joint density of the data by a more suitable one and by using some specific prior that forces sparsity. He proves that the resulting posterior distribution contracts around the true parameter \(\theta ^{\star }\) at rate \(\sqrt{(s^{\star }\log d)/n}\) (where \(s^{\star }\) is the number of nonzero coordinates of \(\theta ^{\star }\)) when both d and n tend to infinity. A common feature of the papers we have cited above lies in their asymptotic nature. This is not the case for Bhattacharya et al. [8] who replaced the likelihood function in the expression of the posterior distribution by the fractional likelihood, that is, a suitable power of the likelihood function. The authors also consider the situation where the model is possibly misspecified, but their result involves the \(\alpha \)-divergence which, like the Kullback–Leibler divergence, can be infinite even when the true distribution of the data is close to the model for the total variation or the Hellinger distance.

Baraud and Birgé [6] propose a surrogate to the Bayes posterior distribution that is called the \(\rho \)-posterior distribution, in reference to the theory of \(\rho \)-estimation developed in Baraud et al. [7] and Baraud and Birgé [5]. In the frequentist paradigm, this theory aimed at solving the various problems connected to the instability of the maximum likelihood method. The \(\rho \)-posterior distribution preserves some of the nice features of the classical Bayes one but also possesses the robustness property we are interested in. The authors show that their posterior distribution concentrates on a Hellinger ball around \(P^{\star }\) as soon as the prior puts enough mass around a point which is close enough to \(P^{\star }\). However, their approach applies only to specific dominated models \({\mathscr {M}}=\{P=p\cdot \mu ,\; p\in {\mathcal {M}}\}\). They assume that the family \({\mathcal {M}}\) of densities that defines their model possesses some special combinatorial structure which is met either when \({\mathcal {M}}\) is finite or when it satisfies some VC-type condition (see their Section 5). As a consequence, the concentration radius they obtain not only depends on the choice of the prior but also on a complexity term that is linked to this structure. Unlike theirs, our approach makes no such assumptions on \({\mathcal {M}}\) and we are therefore able to get rid of this unpleasant complexity term while retaining a similar dependency with respect to the choice of the prior. Baraud and Birgé’s posterior distribution also has the drawback of involving the supremum of an empirical process over the family \({\mathcal {M}}\). Their posterior distribution is therefore difficult to calculate in practice, unless \({\mathcal {M}}\) is finite and of reasonable size. From a more theoretical point of view, it also raises some unpleasant issues with regard to the measurability of this supremum when the family \({\mathcal {M}}\) is uncountable, which is the typical case. Finally, Baraud and Birgé’s approach is restricted to the squared Hellinger loss, while ours applies to many other losses.

Closer to our approach are the aggregation methods and PAC-Bayesian techniques that have been popularized by Olivier Catoni in statistical learning (see Catoni [17]). This approach has mainly been applied for the purpose of empirical risk minimization and statistical learning (see for example Alquier [1]). Our aim is to extend these techniques toward a versatile tool that can solve our Bayes-like estimation problem for various loss functions simultaneously.

The problem of designing a good estimator of \(P^{\star }\) for a given loss function \(\ell \) was tackled in the frequentist paradigm in Baraud [4]. There, the author provides a general framework that enables one to deal with various loss functions of interest, among which the total variation, 1-Wasserstein, Hellinger and \({\mathbb {L}}_{j}\)-losses. His approach relies on the construction of a suitable family of robust tests and lies in the line of the earlier works of Le Cam [22], Birgé [9] and Birgé [11]. The aim of the present paper is to transpose this theory from the frequentist to the Bayesian paradigm. If \(\ell \) is the Kullback–Leibler divergence, our construction recovers the classical Bayes posterior distribution, even though this is not a choice we would recommend, for the reasons we have explained before.

Quite surprisingly, the concentration properties that we establish here require almost no assumption on \({\mathscr {M}}\) and the distribution of the data (apart from independence). They mostly depend on the choices of the prior \(\pi \) and the loss function \(\ell \). For a suitable element P which belongs to the model \({\mathscr {M}}\) and lies close enough to \(P^{\star }\), these concentration properties depend on the minimal value of the radius r beyond which the log-ratio \(V(P,r)=\log \left[ {\pi ({\mathscr {B}}(P,2r))/\pi ({\mathscr {B}}(P,r))}\right] \) (with \({\mathscr {B}}\) defined in (1)) increases at most linearly with r. This log-ratio was introduced in Birgé [12] for the purpose of analyzing the behaviour of the classical Bayes posterior distribution. In our Bayes-like paradigm, we show that the behaviour of the quantities \(V(P,r)\) for \(P\in {\mathscr {M}}\) and \(r>0\) completely encapsulates the complexity of the model \({\mathscr {M}}\). We prove that our posterior distribution \({\widehat{\pi }}_{{\varvec{X}}}\) concentrates on an \(\ell \)-ball centered at \(P^{\star }\) whose radius \(r=r(n)\) is usually of minimax order, as n tends to infinity, when the model is well-specified. From a nonasymptotic point of view, we show that \({\widehat{\pi }}_{{\varvec{X}}}\) retains its nice concentration properties as long as \(P^{\star }\) remains close enough to an element P in \({\mathscr {M}}\) around which the prior puts enough mass, that is, even in the situation where the model is slightly misspecified. Actually, we establish the stronger result that even when the data are only independent but not i.i.d., the above conclusion remains true for the average \({\overline{P}}^{\star }\) of their marginal distributions in place of \(P^{\star }\). We therefore show that the posterior distribution \({\widehat{\pi }}_{{\varvec{X}}}\) enjoys some robustness with respect to a possible departure from the equidistribution assumption we started from. The main theorems involve explicit numerical constants as much as possible. We illustrate our results with examples which are deliberately chosen to be as general and simple as possible. Our aim is to give a flavour of the results that can be established with our Bayes-like posterior, avoiding as much as possible the technicalities that would result from the choice of ad-hoc priors introduced to solve specific problems. Instead, we wish to discuss the optimality and robustness properties of our construction for solving general parametric and nonparametric estimation problems in the density framework under assumptions that we wish to be as weak as possible. These posterior distributions will therefore provide a benchmark for comparison with other methods. Their practical implementation will be the subject of future work.

Of special interest is the choice of \(\ell \) given by the total variation distance or the Hellinger one. As we shall see, for such losses the stability of our posterior distribution automatically leads to estimators \({{\widehat{P}}}\sim {\widehat{\pi }}_{{\varvec{X}}}\) that are naturally robust to the presence of outliers or contaminating data among the sample. These results contrast sharply with the instability of the classical Bayes posterior distribution we underlined earlier. Nevertheless, our posterior distribution also shares some similarities with the classical Bayes one. When the model is well-specified and one uses the squared Hellinger loss, we show that the credible regions of our posterior distribution asymptotically possess the same ellipsoidal shapes and approximately the same sizes as the ones we derive from the classical Bayes posterior by means of the Bernstein–von Mises theorem. Establishing an analogue of this theorem for our Bayes-like posterior distribution is, however, beyond the scope of the present paper.

Our paper is organized as follows. We present our statistical setting in Sect. 2. We consider there independent but not necessarily i.i.d. data in order to analyse later on the behaviour of our posterior distribution with respect to a possible departure from equidistribution. The construction of the posterior distribution is described in Sect. 3. In this section, we also show how more classical constructions based on the likelihood or the fractional likelihoods are particular cases of ours. We complete this section with some heuristics which, we hope, help in understanding the main ideas of our approach. In particular, we connect there the problem of designing robust posterior distributions to that of testing between two disjoint \(\ell \)-balls. Section 4 is devoted to the main theorems. We describe there the concentration properties of our posterior distribution. The applications of these results to classical loss functions are presented in Sect. 5. We put a special emphasis on the cases of the total variation distance and the squared Hellinger loss. In the remaining part of the paper, we only focus on these two losses. In Sect. 6 we highlight some similarities and differences between the classical Bayes posterior and ours for the squared Hellinger loss. In Sect. 7 we explain how our posterior distribution can be used to solve the problem of estimating a density, or a parameter associated with it, in several statistical frameworks of interest. We discuss there how the concentration properties of our posterior distribution deteriorate in the case of a misspecification of the model by the prior. We also consider the problems of estimating a density in a location-scale family and a high-dimensional parameter in a parametric model under a sparsity constraint. We also show how our estimation strategy leads to unusual rates of convergence for estimating a translation parameter in a non-regular statistical model. In Sect. 8, we provide an evaluation of the concentration radius of our posterior distributions in the parametric framework. Finally, Sect. 9 is devoted to the proofs of the main theorems and Sect. 10 to the other proofs.

2 The statistical setting

Let \({\varvec{X}}=(X_1,\ldots ,X_n)\) be an n-tuple of independent random variables with values in a measurable space \((E,{\mathcal {E}})\) and joint distribution \({\textbf{P}}^{\star }=\bigotimes _{i=1}^{n}P_{i}^{\star }\). Even though this might not be true, we pretend that the \(X_{i}\) are i.i.d. and our aim is to estimate their (presumed) common distribution \(P^{\star }\) from the observation of \({\varvec{X}}\). To do so, we introduce a family \({\mathscr {M}}\) that consists of candidate probabilities (or merely finite signed measures in the case of the \({\mathbb {L}}_{j}\)-loss). The reason for considering finite signed measures lies in the fact that statisticians sometimes estimate probability densities by integrable functions that are not necessarily densities but elements of a suitable linear space for instance (think of the case of projection estimators). We endow \({\mathscr {M}}\) with a \(\sigma \)-algebra \({\mathcal {A}}\) and a probability measure \(\pi \), that we call a prior by analogy to the classical Bayesian framework, and we refer to the resulting pair \(({\mathscr {M}},\pi )\) as our model. The model \(({\mathscr {M}},\pi )\) plays here a similar role as in the classical Bayes paradigm. It encapsulates the a priori information that the statistician has on \(P^{\star }\). Nevertheless, we do not assume that \(P^{\star }\), if it ever exists, belongs to \({\mathscr {M}}\) nor that the true marginals \(P_{i}^{\star }\) do. We rather assume that the model \(({\mathscr {M}},\pi )\) is approximately correct in the sense that the average distribution

$$\begin{aligned} {\overline{P}}^{\star }=\frac{1}{n}\sum _{i=1}^{n}P_{i}^{\star }\end{aligned}$$

is close enough to some point P in \({\mathscr {M}}\) around which the prior \(\pi \) puts enough mass. We assume that \({\overline{P}}^{\star }\) belongs to a given set \({\mathscr {P}}\) of probability measures on \((E,{\mathcal {E}})\) and we measure the estimation accuracy by means of a loss function \(\ell :({\mathscr {M}}\cup {\mathscr {P}})\times {\mathscr {M}}\rightarrow {\mathbb {R}}_{+}\) which is not identical to 0 in order to avoid trivialities. Even though \(\ell \) may not be a genuine distance in general, we assume that it shares some similar features and we interpret it as if it were. For this reason, we call \(\ell \)-ball (or ball for short) centered at \(P\in {\mathscr {P}}\cup {\mathscr {M}}\) with radius \(r>0\) the subset of \({\mathscr {M}}\)

$$\begin{aligned} {\mathscr {B}}(P,r)=\left\{ {Q\in {\mathscr {M}},\; \ell (P,Q)\leqslant r}\right\} . \end{aligned}$$

Our aim is to build a posterior distribution (or posterior for short) \({\widehat{\pi }}_{{\varvec{X}}}\) on \(({\mathscr {M}},{\mathcal {A}})\), depending on our observation \({\varvec{X}}\), which concentrates with a probability close to 1 on an \(\ell \)-ball of the form \({\mathscr {B}}({\overline{P}}^{\star },r_{n})\) where we wish the value of \(r_{n}>0\) to be small.

2.1 The special case of parametrized models

In many situations we consider statistical models \({\mathscr {M}}=\{P_{\theta },\; \theta \in \Theta \}\) which are parametrized via a one-to-one mapping \(\theta \mapsto P_{\theta }\). When \((\Theta , \mathfrak {B},\nu )\) is a probability space, we endow \({\mathscr {M}}\) with the \(\sigma \)-algebra \({\mathcal {A}}=\{A,\; \{\theta \in \Theta ,\; P_{\theta }\in A\}\in \mathfrak {B}\}\). This choice possesses several advantages. First, the mapping \(\theta \mapsto P_{\theta }\) is measurable from \((\Theta ,\mathfrak {B})\) onto \(({\mathscr {M}},{\mathcal {A}})\) and we may therefore define the prior \(\pi \) on \(({\mathscr {M}},{\mathcal {A}})\) as the image of \(\nu \) by this mapping. Besides, a function F is measurable on \(({\mathscr {M}},{\mathcal {A}})\) if and only if the mapping \(\theta \mapsto F(P_{\theta })\) is measurable on \((\Theta ,\mathfrak {B})\). This property makes the measurability of F easier to check in general. In particular, the mapping \(F:P_{\theta }\mapsto \theta \) is measurable on \(({\mathscr {M}},{\mathcal {A}})\) because \(\theta \mapsto F(P_{\theta })=\theta \) is measurable on \((\Theta ,\mathfrak {B})\) and we may then define a posterior \(\widehat{\nu }_{{\varvec{X}}}\) on \((\Theta ,\mathfrak {B})\) as the image by F of our posterior \({\widehat{\pi }}_{{\varvec{X}}}\) on \(({\mathscr {M}},{\mathcal {A}})\). By definition of \({\widehat{\nu }}_{{\varvec{X}}}\), for all \(\theta \in \Theta \) and \(r>0\)

$$\begin{aligned} {\widehat{\pi }}_{{\varvec{X}}}\left( {{\mathscr {B}}(P_{\theta },r)}\right) =\widehat{\nu }_{{\varvec{X}}}\left( {\left\{ {\theta '\in \Theta ,\; \ell (\theta ,\theta ')\leqslant r}\right\} }\right) \end{aligned}$$
(2)

where \(\ell (\theta ,\theta ')\) denotes, slightly abusively, \(\ell (P_{\theta },P_{\theta '})\) for \(\theta ,\theta '\in \Theta \). The concentration of \({\widehat{\pi }}_{{\varvec{X}}}\) on an \(\ell \)-ball centered at \(P_{\theta }\) with radius \(r>0\) is then equivalent to the concentration of \({\widehat{\nu }}_{{\varvec{X}}}\) on the set \(\{\theta '\in \Theta ,\; \ell (\theta ,\theta ')\leqslant r\}\). Every time we consider a parametrized model, we assume that it is identifiable and implicitly use the construction that we presented above as well as its consequences.

2.2 Notation and conventions

Throughout this paper, we use the following notation and conventions. For \(a,b\in {\mathbb {R}}\), \(a\vee b \) and \(a\wedge b\) denote \(\max \{a,b\}\) and \(\min \{a,b\}\) respectively. For \(x\in {\mathbb {R}}\), \((x)_{+}=x\vee 0\) while \((x)_{-}=(-x)\vee 0\). The Euclidean spaces \({\mathbb {R}}^{k}\) with \(k\geqslant 1\) are equipped with their Borel \(\sigma \)-algebras. The cardinality of a set A is denoted |A| and its complement \({^\textsf{c}}{\!{A}}{}\). In particular, for \(P\in {\mathscr {P}}\cup {\mathscr {M}}\) and \(r>0\), \({^\textsf{c}}{\!{{\mathscr {B}}}}{}(P,r)=\left\{ {Q\in {\mathscr {M}},\; \ell (P,Q)> r}\right\} \). The elements of \({\mathbb {R}}^{k}\) with \(k>1\) are denoted with bold letters, e.g. \({\varvec{x}}=(x_{1},\ldots ,x_{k})\) and \(\varvec{0}=(0,\ldots ,0)\). For \({\varvec{x}}\in {\mathbb {R}}^{k}\), \(|{\varvec{x}}|_{\infty }=\max _{i\in \{1,\ldots ,k\}}|x_{i}|\) while \(\left| {{\varvec{x}}}\right| \) denotes the Euclidean norm of \({\varvec{x}}\). The inner product of \({\mathbb {R}}^{k}\) is denoted by \(\langle \cdot ,\cdot \rangle \) and the closed Euclidean ball centered at \({\varvec{x}}\) with radius \(r\geqslant 0\) by \({\mathcal {B}}({\varvec{x}},r)\). By convention \(\inf _{{\varnothing }}=+\infty \) unless otherwise specified. We write \(f\equiv c\) when a function f is constant and equals c on its domain. For all suitable functions f on \((E^{n},{\mathcal {E}}^{\otimes n})\), \({\mathbb {E}}\left[ {f({\varvec{X}})}\right] \) means \(\int _{E^{n}}fd{\textbf{P}}^{\star }\) while for f on \((E,{\mathcal {E}})\), \({\mathbb {E}}_{S}\left[ {f(X)}\right] \) denotes the integral \(\int _{E}fdS\) with respect to the measure S on \((E,{\mathcal {E}})\). For \(j\in [1,+\infty )\), we denote by \({\mathscr {L}}_{j}(E,{\mathcal {E}},\mu )\), the set of measurable functions f on \((E,{\mathcal {E}})\) such that \(\left\| {f}\right\| _{j,\mu }=[\int _{E}|f|^{j}d\mu ]^{1/j}<+\infty \) while \(\left\| {f}\right\| _{\infty }=\sup _{x\in E}|f(x)|\) is the supremum norm of a function f on E. If \(\pi '\) is a distribution on \(({\mathscr {M}},{\mathcal {A}})\), \(Q\sim \pi '\) means that Q is a random variable with distribution \(\pi '\). Finally, all the measures that we consider are implicitly assumed to be \(\sigma \)-finite.

3 Construction of the posterior distribution

Throughout this section, the model \(({\mathscr {M}},\pi )\) is assumed to be fixed.

3.1 The properties of our loss functions

The construction of the posterior not only depends on the prior \(\pi \) but also on the choice of the loss function. We first assume that \(\ell \) satisfies some basic properties which are described below.

Assumption 1

For all \(S\in {\mathscr {P}}\cup {\mathscr {M}}\), the mapping

$$\begin{aligned} \begin{array}{l|rcl} \ell (S,\cdot ): &{} ({\mathscr {M}},{\mathcal {A}}) &{} \longrightarrow &{} {\mathbb {R}}_{+} \\ &{} P &{} \longmapsto &{} \ell (S,P) \end{array} \end{aligned}$$

is measurable.

Under such an assumption, \(\ell \)-balls are measurable and the quantities \(\pi ({\mathscr {B}}(P,r))\) for \(P\in {\mathscr {P}}\cup {\mathscr {M}}\) and \(r>0\) are therefore well-defined.

Assumption 2

There exists a positive number \(\tau \) such that, for all \(S\in {\mathscr {P}}\) and \(P,Q\in {\mathscr {M}}\),

$$\begin{aligned} \ell (S,Q)&\leqslant \tau \left[ {\ell (S,P)+\ell (P,Q)}\right] \end{aligned}$$
(3)
$$\begin{aligned} \ell (S,Q)&\geqslant \tau ^{-1}\ell (P,Q)-\ell (S,P). \end{aligned}$$
(4)

When \(\ell \) is a genuine distance, inequalities (3) and (4) are satisfied with \(\tau =1\) since they correspond to the triangle inequality. When \(\ell \) is the square of a distance, these inequalities are satisfied with \(\tau =2\).

Importantly, we assume that \(\ell \) is associated with a family \({\mathscr {T}}(\ell ,{\mathscr {M}})=\big \{t_{(P,Q)},\; (P,Q)\in {\mathscr {M}}^{2}\big \}\) of test statistics on \((E,{\mathcal {E}})\) which possesses the properties below. We shall see in Sect. 5 that many classical loss functions (among which the total variation distance, the squared Hellinger distance, etc.) can be associated with families \({\mathscr {T}}(\ell ,{\mathscr {M}})\) satisfying the following assumptions.

Assumption 3

The elements \(t_{(P,Q)}\) of \({\mathscr {T}}(\ell ,{\mathscr {M}})\) satisfy:

  (i)

    The mapping

    $$\begin{aligned} \begin{array}{l|rcl} t: &{} (E\times {\mathscr {M}}\times {\mathscr {M}},{\mathcal {E}}\otimes {\mathcal {A}}\otimes {\mathcal {A}}) &{} \longrightarrow &{} {\mathbb {R}} \\ &{} (x,P,Q) &{} \longmapsto &{} t_{(P,Q)}(x) \end{array} \end{aligned}$$

    is measurable.

  (ii)

    For all \(P,Q\in {\mathscr {M}}\), \(t_{(P,Q)}=-t_{(Q,P)}\).

  (iii)

    There exist positive numbers \(a_{0},a_{1}\) such that, for all \(S\in {\mathscr {P}}\) and \(P,Q\in {\mathscr {M}}\),

    $$\begin{aligned} {\mathbb {E}}_{S}\left[ {t_{(P,Q)}(X)}\right] \leqslant a_{0}\ell (S,P)-a_{1}\ell (S,Q). \end{aligned}$$
    (5)
  (iv)

    For all \(P,Q\in {\mathscr {M}}\),

    $$\begin{aligned} \sup _{x\in E}t_{(P,Q)}(x)-\inf _{x\in E}t_{(P,Q)}(x)\leqslant 1. \end{aligned}$$

Under assumption (ii), \(t_{(P,P)}=0\) and, taking \(Q=P\) in (5), we deduce that \( (a_{0}-a_{1})\ell (S,P)\geqslant 0\), hence that \(a_{0}\geqslant a_{1}\) since \(\ell \) is not constantly equal to 0.
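To give a concrete idea of what such a family may look like, consider the total variation loss \(\ell (P,Q)=\left\| {P-Q}\right\| =\sup _{A\in {\mathcal {E}}}\left[ {P(A)-Q(A)}\right] \) on a model dominated by \(\mu \) with densities \(p=dP/d\mu \) and \(q=dQ/d\mu \). The following sketch is in the spirit of the families built in Baraud [4]; the exact families and constants used in the present paper are those of Sect. 5. Taking the Scheffé set \(A=\{q>p\}\), for which \(\left\| {P-Q}\right\| =Q(A)-P(A)\), and setting

$$\begin{aligned} t_{(P,Q)}={\mathbb {1}}_{A}-\frac{P(A)+Q(A)}{2}, \end{aligned}$$

we obtain, for all \(S\in {\mathscr {P}}\),

$$\begin{aligned} {\mathbb {E}}_{S}\left[ {t_{(P,Q)}(X)}\right]&=\left[ {S(A)-P(A)}\right] -\frac{1}{2}\left\| {P-Q}\right\| \\&\leqslant \left\| {S-P}\right\| -\frac{1}{2}\left[ {\left\| {S-Q}\right\| -\left\| {S-P}\right\| }\right] =\frac{3}{2}\left\| {S-P}\right\| -\frac{1}{2}\left\| {S-Q}\right\| , \end{aligned}$$

so that (5) holds with \(a_{0}=3/2\) and \(a_{1}=1/2\). Moreover, \(\sup _{x\in E}t_{(P,Q)}(x)-\inf _{x\in E}t_{(P,Q)}(x)=1\) and, up to the handling of the ties \(\{p=q\}\), \(t_{(Q,P)}=-t_{(P,Q)}\).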

Some families \({\mathscr {T}}(\ell ,{\mathscr {M}})\) may satisfy the stronger

Assumption 4

In addition to Assumption 3, there exists \(a_{2}>0\) such that

  (v)

    for all \(S\in {\mathscr {P}}\) and \(P,Q\in {\mathscr {M}}\),

    $$\begin{aligned} {\text {Var}}_{S}\left[ {t_{(P,Q)}(X)}\right] \leqslant a_{2}\left[ {\ell (S,P)+\ell (S,Q)}\right] . \end{aligned}$$

3.2 Construction of the posterior

Let \({\mathscr {T}}(\ell ,{\mathscr {M}})\) be a family of test statistics that satisfies our Assumption 3 and let \(\beta \) and \(\lambda \) be two positive numbers such that

$$\begin{aligned} \lambda =(1+{c})\beta \quad \text {with}\quad {c}>0 \,\text { satisfying }\, {c}_{0}=(1+{c})-{c}(a_{0}/a_{1})>0. \end{aligned}$$
(6)

We set

$$\begin{aligned} {\textbf{T}}({\varvec{X}},P,Q)=\sum _{i=1}^{n}t_{(P,Q)}(X_{i})\quad \text {for all } P,Q\in {\mathscr {M}}\end{aligned}$$

and define \({\widetilde{\pi }}_{{\varvec{X}}}(\cdot |P)\) as the probability on \(({\mathscr {M}},{\mathcal {A}})\) with density

$$\begin{aligned} \frac{d{\widetilde{\pi }}_{{\varvec{X}}}(\cdot |P)}{d\pi }:Q\mapsto \frac{\exp \left[ { \lambda {\textbf{T}}({\varvec{X}},P,Q)}\right] }{\int _{{\mathscr {M}}}\exp \left[ {\lambda {\textbf{T}}({\varvec{X}},P,Q)}\right] d\pi (Q)}. \end{aligned}$$

Then, for \(P\in {\mathscr {M}}\) we set

$$\begin{aligned} {\textbf{T}}({\varvec{X}},P)&=\int _{{\mathscr {M}}}{\textbf{T}}({\varvec{X}},P,Q)d{\widetilde{\pi }}_{{\varvec{X}}}(Q|P)\\&=\int _{{\mathscr {M}}}{\textbf{T}}({\varvec{X}},P,Q)\frac{\exp \left[ {\lambda {\textbf{T}}({\varvec{X}},P,Q)}\right] }{\int _{{\mathscr {M}}}\exp \left[ {\lambda {\textbf{T}}({\varvec{X}},P,Q)}\right] d\pi (Q)}d\pi (Q). \end{aligned}$$

Finally, we define \({\widehat{\pi }}_{{\varvec{X}}}\) as the posterior distribution on \(({\mathscr {M}},{\mathcal {A}})\) with density

$$\begin{aligned} \frac{d{\widehat{\pi }}_{{\varvec{X}}}}{d\pi }:P\mapsto \frac{\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }{\int _{{\mathscr {M}}}\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] d\pi (P)}. \end{aligned}$$
(7)

Our Assumption 3-(i) ensures that \(d\widetilde{\pi }_{{\varvec{X}}}(\cdot |P)/d\pi \) is a measurable function of \(({\varvec{X}},P,Q)\) and \(d{\widehat{\pi }}_{{\varvec{X}}}/d\pi \) a measurable function of \(({\varvec{X}},P)\).

The posterior distribution depends on our choice of \(\beta \) and \(\lambda \) (or equivalently of c) even though we drop this dependency in the notation \({\widehat{\pi }}_{{\varvec{X}}}\).

3.3 Monte Carlo computation of functions of the posterior

Even though we focus on the concentration properties of the posterior \({\widehat{\pi }}_{{\varvec{X}}}\), one may alternatively be interested in some estimators derived from it. For example, estimators of the form

$$\begin{aligned} I=\int _{{\mathscr {M}}}F(P)d{\widehat{\pi }}_{{\varvec{X}}}(P) \end{aligned}$$

where F is a real-valued \(\pi \)-integrable function on \(({\mathscr {M}},{\mathcal {A}})\). For typical choices of F, I provides the mean, mode or median of the posterior whenever these quantities make sense. One may also choose \(F:P\mapsto {\mathbb {1}}_{P\in {\mathscr {B}}(P_{0},{\varepsilon })}\) with \(P_{0}\in {\mathscr {M}}\) and \({\varepsilon }>0\) in order to compute the (posterior) probability that \(\ell (P_{0}, {{\widehat{P}}})\) is not larger than \({\varepsilon }\) when \({{\widehat{P}}}\sim {\widehat{\pi }}_{{\varvec{X}}}\).

Interestingly, the integral I can be approximated by Monte Carlo as follows. Assume that the prior \(\pi \) admits a density of the form \(C^{-1}\Pi \) with respect to a given probability measure \(\mathfrak {m}\), where \(\Pi \) is a nonnegative \(\mathfrak {m}\)-integrable function on \(({\mathscr {M}},{\mathcal {A}})\) and \(C=\int _{{\mathscr {M}}}\Pi (P)d\mathfrak {m}(P)>0\) a positive normalizing constant (that will not be involved in our calculation). Let \(P_{1},\ldots ,P_{N}\) be an N-sample with distribution \(\mathfrak {m}\) and for each \(i\in \{1,\ldots ,N\}\), \(Q_{i}^{(1)},\ldots ,Q_{i}^{(N')}\) an independent \(N'\)-sample with the same distribution. We may approximate I by

$$\begin{aligned} {{\widehat{I}}}_{N,N'}=\sum _{i=1}^{N}F(P_{i})\frac{\exp \left[ {-\beta W_{i,N'}(P_{i})}\right] \Pi (P_{i})}{\sum _{i'=1}^{N}\exp \left[ {-\beta W_{i',N'}(P_{i'})}\right] \Pi (P_{i'})} \end{aligned}$$

where for all \(i\in \{1,\ldots ,N\}\),

$$\begin{aligned} W_{i,N'}(P_{i})=\sum _{j=1}^{N'}{\textbf{T}}({\varvec{X}},P_{i},Q_{i}^{(j)})\frac{\exp \left[ {\lambda {\textbf{T}}({\varvec{X}},P_{i},Q_{i}^{(j)})}\right] \Pi (Q_{i}^{(j)})}{\sum _{j'=1}^{N'}\exp \left[ {\lambda {\textbf{T}}({\varvec{X}},P_{i},Q_{i}^{(j')})}\right] \Pi (Q_{i}^{(j')})}. \end{aligned}$$

It is then easy to check that, by the law of large numbers,

$$\begin{aligned} \lim _{N\rightarrow +\infty }\left[ {\lim _{N'\rightarrow +\infty }{{\widehat{I}}}_{N,N'}}\right] =I. \end{aligned}$$
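For concreteness, the following sketch implements this nested Monte Carlo scheme. It is only illustrative: the callables T, log_Pi and sample_m, which stand for the statistic \({\textbf{T}}({\varvec{X}},P,Q)\), the logarithm of the unnormalized prior density \(\Pi \) and a sampler from \(\mathfrak {m}\) respectively, are placeholders to be supplied for the model at hand.

```python
import numpy as np

def I_hat(X, F, T, log_Pi, sample_m, beta, lam, N, N_prime):
    """Nested Monte Carlo approximation of I = int F(P) d pi_hat_X(P)."""
    Ps = [sample_m() for _ in range(N)]
    log_w = np.empty(N)
    for i, P in enumerate(Ps):
        Qs = [sample_m() for _ in range(N_prime)]
        t = np.array([T(X, P, Q) for Q in Qs])
        lp = np.array([log_Pi(Q) for Q in Qs])
        u = lam * t + lp
        u -= u.max()                 # stabilize the inner exponentials
        w = np.exp(u)
        w /= w.sum()                 # self-normalized weights of pi_tilde_X(.|P_i)
        log_w[i] = -beta * np.sum(t * w) + log_Pi(P)   # -beta W_{i,N'} + log Pi(P_i)
    log_w -= log_w.max()             # stabilize the outer exponentials
    w = np.exp(log_w)
    w /= w.sum()
    return float(np.sum(np.array([F(P) for P in Ps]) * w))
```

The inner loop computes the self-normalized approximation \(W_{i,N'}(P_{i})\) of \({\textbf{T}}({\varvec{X}},P_{i})\); the outer weights then approximate the density (7) up to the normalizing constant C, which cancels in the normalization.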

3.4 Connection with the classical Bayes posterior distribution

The classical Bayes posterior turns out to be a particular case of the posterior-type ones introduced in Sect. 3.2. As we shall see now, it is associated with the Kullback–Leibler divergence loss. We recall that the Kullback–Leibler divergence \(\ell (P,Q)=K(P,Q)\) between two probabilities \(P,Q\) on \((E,{\mathcal {E}})\) is defined by

$$\begin{aligned} K(P,Q)= {\left\{ \begin{array}{ll} \displaystyle {\int _{E}\log \left( {\frac{dP}{dQ}}\right) dP} &{}\text {when } P\ll Q; \\ +\infty &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$

Let us now consider a family \({\mathscr {M}}\) of probabilities that satisfies, for some \(a>0\) and suitable versions of the densities \(dP/dQ\), the following inequalities:

$$\begin{aligned} e^{-a}\leqslant \frac{dP}{dQ}(x)\leqslant e^{a}\quad \text {for all }x\in E\text { and } P,Q\in {\mathscr {M}}. \end{aligned}$$
(8)

It follows from Baraud [4, Proposition 12] that the family of functions

$$\begin{aligned} {\mathscr {T}}(\ell ,{\mathscr {M}})=\left\{ {t_{(P,Q)}=\frac{1}{2a}\log \left( {\frac{dQ}{dP}}\right) ,\; P,Q\in {\mathscr {M}}}\right\} \end{aligned}$$
(9)

satisfies our Assumptions 3 and 4 with \(a_{0}=a_{1}=1/(2a)\) and \(a_{2}=2a/[\tanh (a/2)]\). Note that given \(P,Q\in {\mathscr {M}}\), \(P\ne Q\), the test based on the sign of \(t_{(P,Q)}\) is the classical likelihood ratio test between P and Q.

If we apply the construction described in Sect. 3.2 to the family \({\mathscr {T}}(\ell ,{\mathscr {M}})\) we obtain that for all \(P,Q,P_{0}\in {\mathscr {M}}\),

$$\begin{aligned} {\textbf{T}}({\varvec{X}},P,Q)={\textbf{T}}({\varvec{X}},P_{0},Q)-{\textbf{T}}({\varvec{X}},P_{0},P). \end{aligned}$$

For all \(\lambda >0\), the density of \({\widetilde{\pi }}_{{\varvec{X}}}(\cdot |P)\)

$$\begin{aligned} Q\mapsto \frac{\exp \left[ { \lambda {\textbf{T}}({\varvec{X}},P,Q)}\right] }{\int _{{\mathscr {M}}}\exp \left[ {\lambda {\textbf{T}}({\varvec{X}},P,Q)}\right] d\pi (Q)}=\frac{\exp \left[ { \lambda {\textbf{T}}({\varvec{X}},P_{0},Q)}\right] }{\int _{{\mathscr {M}}}\exp \left[ {\lambda {\textbf{T}}({\varvec{X}},P_{0},Q)}\right] d\pi (Q)} \end{aligned}$$

is independent of P. Writing \({\widetilde{\pi }}_{{\varvec{X}}}(\cdot )\) in place of \({\widetilde{\pi }}_{{\varvec{X}}}(\cdot |P)\), we obtain that

$$\begin{aligned} {\textbf{T}}({\varvec{X}},P)&=\int _{{\mathscr {M}}}{\textbf{T}}({\varvec{X}},P,Q)d{\widetilde{\pi }}_{{\varvec{X}}}(Q)\\&=\int _{{\mathscr {M}}}{\textbf{T}}({\varvec{X}},P_{0},Q)d{\widetilde{\pi }}_{{\varvec{X}}}(Q)-{\textbf{T}}({\varvec{X}},P_{0},P)\\&=C-\frac{1}{2a}\sum _{i=1}^{n}\log \left( {\frac{dP}{dP_{0}}}\right) (X_{i})\text { with }C=\int _{{\mathscr {M}}}{\textbf{T}}({\varvec{X}},P_{0},Q)d{\widetilde{\pi }}_{{\varvec{X}}}(Q). \end{aligned}$$

Finally, the density of our posterior \({\widehat{\pi }}_{{\varvec{X}}}\) at \(P\in {\mathscr {M}}\) is given by

$$\begin{aligned} \frac{d{\widehat{\pi }}_{{\varvec{X}}}}{d\pi }(P)&=\frac{\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }{\int _{{\mathscr {M}}}\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] d\pi (P)}= \frac{\left[ {\prod _{i=1}^{n}(dP/dP_{0})(X_{i})}\right] ^{\beta /(2a)}}{\int _{{\mathscr {M}}}\left[ {\prod _{i=1}^{n}(dP/dP_{0})(X_{i})}\right] ^{\beta /(2a)}d\pi (P)}. \end{aligned}$$

This is the density of the classical Bayes posterior when \(\beta =2a\), while for other values of \(\beta \) it is that of a fractional Bayes posterior.
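This identity is easy to check numerically on a toy finite model. In the sketch below (our own illustration, not part of the paper), the model consists of three Bernoulli distributions, a is the smallest constant for which (8) holds on this model, and the two-step construction of Sect. 3.2 with \(\beta =2a\) is compared with the classical Bayes posterior; the assertion passes for any choice of \(\lambda >0\).

```python
import numpy as np

rng = np.random.default_rng(1)
thetas = np.array([0.3, 0.5, 0.7])          # P_theta = Bernoulli(theta)
prior = np.array([0.2, 0.5, 0.3])           # the prior pi on M
X = rng.binomial(1, 0.55, size=50)          # the observed sample

def loglik(theta):                          # sum_i log p_theta(X_i)
    return np.sum(np.where(X == 1, np.log(theta), np.log(1 - theta)))

L = np.array([loglik(t) for t in thetas])

# smallest a such that e^{-a} <= dP/dQ <= e^{a} on the whole model
ratios = [np.log(p / q) for p in thetas for q in thetas]
ratios += [np.log((1 - p) / (1 - q)) for p in thetas for q in thetas]
a = np.max(np.abs(ratios))

T_PQ = (L[None, :] - L[:, None]) / (2 * a)  # T(X, P, Q), rows indexed by P

lam, beta = 1.0, 2 * a                      # any lam > 0; beta = 2a
w = np.exp(lam * T_PQ) * prior              # unnormalized pi_tilde_X(.|P)
w /= w.sum(axis=1, keepdims=True)
T_P = np.sum(T_PQ * w, axis=1)              # T(X, P)

post = np.exp(-beta * T_P) * prior
post /= post.sum()                          # our posterior pi_hat_X

bayes = np.exp(L - L.max()) * prior
bayes /= bayes.sum()                        # classical Bayes posterior
assert np.allclose(post, bayes)
```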

Nevertheless, in the present paper we restrict our study to loss functions that satisfy some triangle-type inequality – see Assumption 2. This excludes the Kullback–Leibler divergence unless one is ready to make strong assumptions on the unknown distribution of the data, which we do not want to do here.

3.5 Some heuristics

In this section, we present the basic ideas that underlie our approach. In particular, we shall see how the estimation problem we want to solve is linked to that of testing between two disjoint \(\ell \)-balls \({\mathscr {B}}(P,r)\) and \({\mathscr {B}}(Q,r)\) with \(P,Q\in {\mathscr {M}}\).

In order to avoid unnecessary details, we assume here that we observe i.i.d. data \(X_1,\ldots ,X_n\) with distribution \(P^{\star }\in {\mathscr {P}}\) and that we have at our disposal a family \({\mathscr {T}}(\ell ,{\mathscr {M}})\) of functions that satisfies our Assumption 3. In particular, it follows from Assumption 3-(iii) that

$$\begin{aligned} {\mathbb {E}}\left[ {\frac{{\textbf{T}}({\varvec{X}},P,Q)}{n}}\right] =\frac{1}{n}\sum _{i=1}^{n}{\mathbb {E}}\left[ {t_{(P,Q)}(X_{i})}\right] \leqslant a_{0}\ell (P^{\star },P)-a_{1}\ell (P^{\star },Q). \end{aligned}$$

The antisymmetric property required by Assumption 3-(ii) entails that

$$\begin{aligned} {\textbf{T}}({\varvec{X}},P,Q)=-{\textbf{T}}({\varvec{X}},Q,P) \end{aligned}$$

and leads to the lower bound

$$\begin{aligned} {\mathbb {E}}\left[ {\frac{{\textbf{T}}({\varvec{X}},P,Q)}{n}}\right] \geqslant a_{1}\ell (P^{\star },P)-a_{0}\ell (P^{\star },Q). \end{aligned}$$

Assuming for the sake of simplicity that \(a_{0}=a_{1}=1\), these calculations show that \(n^{-1}{\textbf{T}}({\varvec{X}},P,Q)=n^{-1}\sum _{i=1}^{n}t_{(P,Q)}(X_{i})\) is an unbiased and consistent estimator of \(\ell (P^{\star },P)-\ell (P^{\star },Q)\). In particular, if the two \(\ell \)-balls \({\mathscr {B}}(P,r)\), \({\mathscr {B}}(Q,r)\) are disjoint and \(P^{\star }\) belongs to one of them, the sign of \(n^{-1}{\textbf{T}}({\varvec{X}},P,Q)\) provides a consistent test for deciding which one contains \(P^{\star }\). In fact, the test does not depend on the value of r and consequently chooses the element among \(\{P,Q\}\) which is the closest to \(P^{\star }\) (with respect to \(\ell \)), at least when n is large enough. As compared to the classical likelihood ratio test between P and Q, this test has the advantage of not assuming that \(P^{\star }\) is either P or Q, but only that it lies in a small enough \(\ell \)-vicinity of one of these two probabilities. The test is said to be robust with respect to the model \(\{P,Q\}\). Its nonasymptotic properties have been studied in Baraud [4].
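The following toy simulation (ours, using the Scheffé-type statistic sketched in Sect. 3.1 for the total variation loss) illustrates this robustness: replacing 5% of a Gaussian sample by gross outliers flips the sign of the log-likelihood ratio but not that of \({\textbf{T}}({\varvec{X}},P,Q)\).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
P, Q = norm(0.0, 1.0), norm(2.0, 1.0)
PA, QA = P.sf(1.0), Q.sf(1.0)           # A = {q > p} = {x > 1} here

def T(x):                               # Scheffe-type statistic, summed over the sample
    return np.sum((x > 1.0) - (PA + QA) / 2)

def LLR(x):                             # log-likelihood ratio of Q against P
    return np.sum(Q.logpdf(x) - P.logpdf(x))

n = 200
x = P.rvs(size=n, random_state=rng)     # data drawn from P
x_out = x.copy()
x_out[rng.choice(n, size=n // 20, replace=False)] = 50.0   # 5% gross outliers

print(np.sign(T(x)), np.sign(T(x_out)))      # both negative: the sign test keeps P
print(np.sign(LLR(x)), np.sign(LLR(x_out)))  # the outliers flip the likelihood ratio
```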

Let us now explain how such families \(\{{\textbf{T}}({\varvec{X}},P,Q), (P,Q)\in {\mathscr {M}}^{2}\}\) of test statistics can be used to build robust estimators and not only tests. In the frequentist paradigm, the construction of \(\ell \)-estimators is based on the following heuristics. If, with a probability close to 1, \(n^{-1}{\textbf{T}}({\varvec{X}},P,Q)\) is close to its expectation \(\ell (P^{\star },P)-\ell (P^{\star },Q)\) uniformly with respect to \((P,Q)\in {\mathscr {M}}^{2}\) then \(n^{-1}{\textbf{T}}'({\varvec{X}},P)=\sup _{Q\in {\mathscr {M}}}\left[ {n^{-1}{\textbf{T}}({\varvec{X}},P,Q)}\right] \) is close to

$$\begin{aligned} \sup _{Q\in {\mathscr {M}}}\left[ {\ell (P^{\star },P)-\ell (P^{\star },Q)}\right] =\ell (P^{\star },P)-\inf _{Q\in {\mathscr {M}}}\ell (P^{\star },Q). \end{aligned}$$

We therefore expect a minimizer over \({\mathscr {M}}\) of the function \(P\in {\mathscr {M}}\mapsto n^{-1}{\textbf{T}}'({\varvec{X}},P)\) to be close to a minimizer over \({\mathscr {M}}\) of the function \(P\in {\mathscr {M}}\mapsto \ell (P^{\star },P)-\inf _{Q\in {\mathscr {M}}}\ell (P^{\star },Q)\), that is, an element that minimizes the loss \(\ell (P^{\star },P)\) among the probabilities \(P\in {\mathscr {M}}\).

In the Bayesian paradigm, we may argue in a similar way as follows. Replacing \(n^{-1}{\textbf{T}}({\varvec{X}},P,Q)\) by its expectation \(\ell (P^{\star },P)-\ell (P^{\star },Q)\), as we did before, amounts to replacing \({\textbf{T}}({\varvec{X}},P)\) by

$$\begin{aligned}&{\overline{{\textbf{T}}}}({\varvec{X}},P)\\&\quad =n\int _{{\mathscr {M}}}\left( {\ell (P^{\star },P)-\ell (P^{\star },Q)}\right) \frac{\exp \left[ {n\lambda \left( {\ell (P^{\star },P) -\ell (P^{\star },Q)}\right) }\right] }{\int _{{\mathscr {M}}}\exp \left[ {n\lambda \left( {\ell (P^{\star },P)-\ell (P^{\star },Q)}\right) }\right] d\pi (Q)}d\pi (Q)\\&\quad =n\ell (P^{\star },P)-n\int _{{\mathscr {M}}}\ell (P^{\star },Q) \frac{\exp \left[ {-n\lambda \ell (P^{\star },Q)}\right] }{\int _{{\mathscr {M}}} \exp \left[ {-n\lambda \ell (P^{\star },Q)}\right] d\pi (Q)}d\pi (Q). \end{aligned}$$

Note that the second term on the right-hand side does not depend on P. Consequently, replacing \({\textbf{T}}({\varvec{X}},P)\) by \(\overline{{\textbf{T}}}({\varvec{X}},P)\) in the expression (7) of the density of \({\widehat{\pi }}_{{\varvec{X}}}\) leads to the density

$$\begin{aligned} P\mapsto \frac{\exp \left[ {-\beta \overline{{\textbf{T}}}({\varvec{X}},P)}\right] }{\int _{{\mathscr {M}}}\exp \left[ {-\beta \overline{{\textbf{T}}}({\varvec{X}},P)}\right] d\pi (P)}=\frac{\exp \left[ {-n\beta \ell (P^{\star },P) }\right] }{\int _{{\mathscr {M}}}\exp \left[ {-n\beta \ell (P^{\star },P)}\right] d\pi (P)}. \end{aligned}$$

We recognize here the density of a Gibbs measure associated with the energy \(\ell (P^{\star },P)\) at point \(P\in {\mathscr {M}}\) and inverse temperature \(n\beta >0\). We know that when the temperature goes to 0 (or equivalently \(n\beta \) to infinity), Gibbs measures concentrate their masses in vicinities of low energy points in \({\mathscr {M}}\). In our case, these low energy points are those for which \(\ell (P^{\star },P)\) is minimal.
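A two-line numerical illustration of this concentration phenomenon, on an arbitrary four-point model of our own choosing:

```python
import numpy as np

energies = np.array([0.0, 0.1, 0.5, 1.0])   # values of ell(P_star, P) on a 4-point model
for nb in (1, 10, 100):                     # inverse temperature n * beta
    w = np.exp(-nb * energies)
    print(nb, np.round(w / w.sum(), 3))     # the mass drifts to the energy minimizer
```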

Similar ideas can be found in Catoni’s work and more specifically in his construction of Gibbs estimators—see Catoni [17, Chapter 4]. There, Catoni shows how to aggregate a continuous family of estimators in order to minimize a risk. In the present paper, we do not aim at aggregating estimators but we use similar ideas and tools that are due to Catoni and his co-authors for the construction of our robust posterior distribution.

4 The main results

4.1 Linking the prior to the complexity of the model

For \(P\in {\mathscr {M}}\) and \(r>0\), we recall that

$$\begin{aligned} V(P,r)=\log \left( {\frac{\pi ({\mathscr {B}}(P,2r))}{\pi ({\mathscr {B}}(P,r))}}\right) \end{aligned}$$

where we use the convention \(a/0=+\infty \) for all \(a\geqslant 0\). We said in the Introduction that such quantities encapsulate in some sense the complexity of the model \(({\mathscr {M}},\pi )\) and we shall now explain why. If \({\mathscr {M}}=\{P_{{\varvec{\theta }}},\; {\varvec{\theta }}\in {\mathbb {R}}^{k}\}\) is a parametric model endowed with a loss \(\ell \) such that \(\ell ({\varvec{\theta }},{\varvec{\theta }}')=\left| {{\varvec{\theta }}-{\varvec{\theta }}'}\right| \), so that \(({\mathscr {M}},\ell )\) is isometric to \(({\mathbb {R}}^{k},\left| {\cdot }\right| )\), and if the prior \(\nu \) on \(\Theta ={\mathbb {R}}^{k}\) is improper and given by the Lebesgue measure, we obtain that for all \(P\in {\mathscr {M}}\) and \(r>0\)

$$\begin{aligned} V(P,r)&=\log \left( {\frac{\pi ({\mathscr {B}}(P,2r))}{\pi ({\mathscr {B}}(P,r))}}\right) =\log \left( {\frac{(2r)^{k}}{r^{k}}}\right) =k\log 2. \end{aligned}$$
(10)

We observe that \(V(P,r)\) corresponds in this case to the usual dimension of \({\mathbb {R}}^{k}\) (up to the factor \(\log 2\)). For more general models \(({\mathscr {M}},\pi )\) and loss functions \(\ell \), we may interpret \(V(P,r)\) as some notion of dimension (or complexity) associated with the element \(P\in {\mathscr {M}}\) at the scale \(r>0\). As we do not consider improper priors but probability distributions, \(\lim _{r\rightarrow +\infty }\pi ({\mathscr {B}}(P,r))=1\) and consequently \(\lim _{r\rightarrow +\infty }V(P,r)=0\). This means that the connection with the notion of “dimension” is only relevant for values of r which are not too large.

Given \(\gamma \in (0,1]\), the set

$$\begin{aligned} {\mathcal {R}}(\beta ,P)=\left\{ {{r}\geqslant \frac{1}{n\beta a_{1}},\text { such that } \sup _{r'\geqslant r}\frac{V(P,r')}{r'}\leqslant \gamma n\beta a_{1}}\right\} \end{aligned}$$

is the subinterval of \({\mathbb {R}}_{+}\) on which the mapping \(r\mapsto V(P,r)\) is not larger than \(r\mapsto \gamma n\beta a_{1}{r}\). We denote by

$$\begin{aligned} {r}_{n}(\beta ,P)=\inf {\mathcal {R}}(\beta ,P) \end{aligned}$$
(11)

the left endpoint of \({\mathcal {R}}(\beta ,P)\). Since \({\mathcal {R}}(\beta ,P)\) is increasing with \(\beta \) with respect to set inclusion, \({r}_{n}(\beta ,P)\) is a nonincreasing function of \(\beta \). For example, in the ideal situation given in (10) where \(V(P,r)\equiv k\log 2\) with \(k\log 2\geqslant 1\), \({r}_{n}(\beta ,P)=(\gamma a_{1})^{-1}[k\log 2/(n\beta )]\). When the model \({\mathscr {M}}=\{P_{{\varvec{\theta }}},\; {\varvec{\theta }}\in \Theta \}\) is parametric and the parameter space \(\Theta \) is an open subset of \({\mathbb {R}}^{k}\) endowed with a prior \(\nu \), we shall see in Sect. 8.2 that under suitable assumptions \({r}_{n}(\beta ,P_{{\varvec{\theta }}})\) is indeed of order \(k/(n\beta )\), at least for n sufficiently large.
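To make the definition concrete, here is a small sketch (a toy setting of our own, not one studied in the paper) that computes \({r}_{n}(\beta ,P)\) numerically when \(\Theta ={\mathbb {R}}^{k}\) carries a standard Gaussian prior, \(\ell ({\varvec{\theta }},{\varvec{\theta }}')=\left| {{\varvec{\theta }}-{\varvec{\theta }}'}\right| \) and P sits at the center of the prior, so that \(\pi ({\mathscr {B}}(P,r))\) is the value at r of the chi distribution function with k degrees of freedom; the supremum in the definition of \({\mathcal {R}}(\beta ,P)\) is approximated on a grid.

```python
import numpy as np
from scipy.stats import chi

k, n = 4, 1000
a1, gamma = 1.0, 1.0                       # placeholder constants
beta = 1 / np.sqrt(n)

pi_ball = chi(k).cdf                       # r -> pi(B(P, r)) in this toy setting

rs = np.geomspace(1 / (n * beta * a1), 50.0, 2000)
V = np.log(pi_ball(2 * rs) / pi_ball(rs))  # V(P, r) on the grid
sup_ratio = np.maximum.accumulate((V / rs)[::-1])[::-1]   # sup over r' >= r of V/r'
ok = sup_ratio <= gamma * n * beta * a1
r_n = rs[ok][0] if ok.any() else np.inf    # left endpoint of R(beta, P)
print(r_n)  # close to k log 2 / (gamma n beta a1), as in the ideal situation (10)
```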

The Bayesian paradigm offers the possibility of favouring some elements of \({\mathscr {M}}\) as compared to others. The order of magnitude of \({r}_{n}(\beta ,P)\) allows one to quantify how much the prior \(\pi \) advantages or disadvantages \(P\in {\mathscr {M}}\). It follows from the definition of \({r}_{n}(\beta ,P)\) that

$$\begin{aligned} 0<\pi \left( {{\mathscr {B}}(P,2{r})}\right) \leqslant \exp \left( {\gamma n\beta a_{1}{r}}\right) \pi \left( {{\mathscr {B}}(P,{r})}\right) \quad \text {for all }{r}>{r}_{n}(\beta ,P). \end{aligned}$$
(12)

Letting \({r}\) decrease to \({r}_{n}(\beta ,P)\), we derive that (12) also holds for \({r}={r}_{n}(\beta ,P)\). In particular, \(\pi \left( {{\mathscr {B}}(P,r)}\right) >0\) for \({r}={r}_{n}(\beta ,P)\). If the prior puts no mass on the \(\ell \)-ball \({\mathscr {B}}(P,{r})\), which clearly corresponds to a situation where the prior disadvantages P, \({r}_{n}(\beta ,P)>{r}\) and \({r}_{n}(\beta ,P)\) is therefore large if \({r}\) is large. In the opposite case, if the prior puts enough mass on \({\mathscr {B}}(P,{r})\) in the sense that

$$\begin{aligned} \pi \left( {{\mathscr {B}}(P,{r})}\right) \geqslant \exp \left( {-\gamma n\beta a_{1}{r}}\right) , \end{aligned}$$
(13)

then for all \({r}'\geqslant {r}\),

$$\begin{aligned} \pi \left( {{\mathscr {B}}(P,{r}')}\right)&\geqslant \exp \left( {-\gamma n\beta a_{1}{r}}\right) \geqslant \exp \left( {-\gamma n\beta a_{1}{r}'}\right) \\&\geqslant \exp \left( {-\gamma n\beta a_{1}{r}'}\right) \pi \left( {{\mathscr {B}}(P,2{r}')}\right) \end{aligned}$$

hence,

$$\begin{aligned} \frac{\pi \left( {{\mathscr {B}}(P,2{r}')}\right) }{\pi \left( {{\mathscr {B}}(P,{r}')}\right) }\leqslant \exp \left( {\gamma n\beta a_{1}{r}'}\right) \quad \text {and }{r}_{n}(\beta ,P)\leqslant {r}. \end{aligned}$$

The quantity \({r}_{n}(\beta ,P)\) is therefore small if \({r}\) is small. Although (13) is not equivalent to (12) (it is actually stronger), the previous arguments provide a partial view on the relationship between \(\pi \) and \({r}_{n}\) and conditions to decide whether P is favoured by \(\pi \) or not, according to the size of \({r}_{n}(\beta ,P)\).

4.2 A general result on the concentration property of the posterior distribution

According to the discussion of Sect. 4.1, we see that, when the set

$$\begin{aligned} {\mathscr {M}}(\beta )=\left\{ {P\in {\mathscr {M}},\; {r}_{n}(\beta ,P)\leqslant a_{1}^{-1}\beta }\right\} \end{aligned}$$
(14)

is nonempty, it contains the most favoured elements of the model \(({\mathscr {M}},\pi )\) at level \(a_{1}^{-1}\beta \). Since \({r}_{n}(\beta ,P)\) is nonincreasing with \(\beta \), the set \({\mathscr {M}}(\beta )\) is increasing with \(\beta \) with respect to set inclusion. If \(a_{1}^{-1}\beta \geqslant (n\beta a_{1})^{-1}\) or equivalently \(\beta \geqslant 1/\sqrt{n}\), the set \({\mathscr {M}}(\beta )\) can alternatively be defined from V(Pr) as follows:

$$\begin{aligned} {\mathscr {M}}(\beta )&=\left\{ {P\in {\mathscr {M}},\; V(P,r)\leqslant \gamma n\beta a_{1}r \text { for all }{r}\geqslant a_{1}^{-1}\beta }\right\} . \end{aligned}$$
(15)

This set plays a crucial role in our first result.

Theorem 1

Assume that the model \(({\mathscr {M}},\pi )\) and the loss \(\ell \) satisfy Assumptions 1 and 2 and the family \({\mathscr {T}}(\ell ,{\mathscr {M}})\) Assumption 3. Let \(\gamma <(c_{0}\wedge {c})/(2\tau )\) and \(\beta \geqslant 1/\sqrt{n}\) be chosen in such a way that the set \({\mathscr {M}}(\beta )\) defined by (14) is not empty. Then, the posterior \(\widehat{\pi }_{{\varvec{X}}}\) defined by (7) possesses the following property. There exists \(\kappa _{0}>0\) only depending on \({c},\tau ,\gamma \) and the ratio \(a_{0}/a_{1}\) such that, for all \(\xi >0\) and any distribution \({\textbf{P}}^{\star }\) with marginals in \({\mathscr {P}}\),

$$\begin{aligned} {\mathbb {E}}\left[ {{\widehat{\pi }}_{{\varvec{X}}}\left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}(\overline{P}^{\star },\kappa _{0}{r})}\right) }\right] \leqslant 2e^{-\xi } \end{aligned}$$
(16)

with

$$\begin{aligned} {r}=\inf _{P\in {\mathscr {M}}(\beta )}\ell (\overline{P}^{\star },P)+\frac{1}{a_{1}}\left( {\beta +\frac{2\xi }{n\beta }}\right) . \end{aligned}$$
(17)

In particular,

$$\begin{aligned} {\mathbb {P}}\left[ {{\widehat{\pi }}_{{\varvec{X}}}\left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}(\overline{P}^{\star },\kappa _{0}{r})}\right) \geqslant e^{-\xi /2}}\right] \leqslant 2e^{-\xi /2}. \end{aligned}$$

The value of \(\kappa _{0}\) is given by (119) in the proof. It only depends on the choice of the family \({\mathscr {T}}(\ell ,{\mathscr {M}})\) but not on the prior \(\pi \). Hence, for a given family \({\mathscr {T}}(\ell ,{\mathscr {M}})\), \(\kappa _{0}\) is a numerical constant.

Let us now comment on Theorem 1. When \(X_1,\ldots ,X_n\) are truly i.i.d. with distribution \(P^{\star }\) and the prior puts enough mass around \(P^{\star }\), in the sense that \(P^{\star }\in {\mathscr {M}}(\beta )\), then \(r=a_{1}^{-1}[\beta +2\xi /(n\beta )]\) in (17). When this ideal situation is not met, either because the data are not identically distributed or because \(P^{\star }\) does not belong to \({\mathscr {M}}(\beta )\), r increases by at most an additive term of order \(\inf _{P\in {\mathscr {M}}(\beta )}\ell ({\overline{P}}^{\star },P)\). When this approximation term remains small as compared to \(a_{1}^{-1}\beta \), the value of r does not deteriorate too much as compared to the previous situation.

The value of \({r}\) given by (17) depends on the choice of the parameter \(\beta \). Since the set \({\mathscr {M}}(\beta )\) is increasing (with respect to set inclusion) as \(\beta \) gets larger, the two terms \(\inf _{P\in {\mathscr {M}}(\beta )}\ell ({\overline{P}}^{\star },P)\) and \(a_{1}^{-1}\beta \) vary in opposite directions as \(\beta \) increases. The set \({\mathscr {M}}(\beta )\) must be large enough to provide a suitable approximation of \({\overline{P}}^{\star }\) while \(\beta \) must not be too large in order to keep \(a_{1}^{-1}\beta \) to a reasonable size. Practically, we recommend choosing \(\beta =\beta (\alpha )\geqslant 1/\sqrt{n}\) such that

$$\begin{aligned} \pi \left( {{\mathscr {M}}(\beta )}\right) \geqslant 1-\alpha \quad \text {for }\alpha \in (0,1/10). \end{aligned}$$
(18)

In Example 1 below and in Sect. 7.1, we give some examples of choices of \(\beta \).

Example 1

Let \(({\mathscr {M}},\pi )\) be a model where the prior \(\pi \) satisfies for some \(k\geqslant 1\) and constants \(0<A\leqslant (2/e)B\),

$$\begin{aligned} \left( {A{r}}\right) ^{k}\wedge 1\leqslant \pi \left( {{\mathscr {B}}(P,{r})}\right) \leqslant \left( {B{r}}\right) ^{k}\wedge 1\quad \text {for all }P\in {\mathscr {M}}\text { and }{r}>0. \end{aligned}$$
(19)

This means that the prior \(\pi \) behaves like the Lebesgue measure on a Euclidean space of dimension k for small enough values of r. Then,

$$\begin{aligned} V(P,r)=\log \frac{\pi \left( {{\mathscr {B}}(P,2{r})}\right) }{ \pi \left( {{\mathscr {B}}(P,{r})}\right) }\leqslant k\log \left( {\frac{2B}{A}}\right) \quad \text {for all }P\in {\mathscr {M}}\text { and }r>0 \end{aligned}$$
(20)

which implies that for all \(P\in {\mathscr {M}}\)

$$\begin{aligned} r_{n}(\beta ,P)\leqslant \frac{k}{\gamma a_{1} n \beta }\log \left( {\frac{2B}{A}}\right) . \end{aligned}$$
(21)

The right-hand side is not larger than \(a_{1}^{-1}\beta \) for

$$\begin{aligned} \beta =\sqrt{\frac{k\log (2B/A)}{\gamma n }} \end{aligned}$$
(22)

which is larger than \(1/\sqrt{n}\) since \((2B/A)\geqslant e\) and \(\gamma \in (0,1]\). For such a value of \(\beta \), which does not depend on the distribution of the data, the element P belongs to \({\mathscr {M}}(\beta )\) given by (15), and since P is arbitrary we derive that \({\mathscr {M}}(\beta )={\mathscr {M}}\). Applying Theorem 1 we conclude that the distribution \({\widehat{\pi }}_{{\varvec{X}}}\) concentrates on an \(\ell \)-ball centered at \({\overline{P}}^{\star }\) with a radius \({r}\) of order

$$\begin{aligned} {\overline{{r}}}_{n}=\inf _{P\in {\mathscr {M}}}\ell (\overline{P}^{\star },P)+\frac{1}{a_{1}}\left( {\sqrt{\frac{k}{n}}+\frac{2\xi }{\sqrt{nk}}}\right) . \end{aligned}$$
(23)

4.3 A refined result under Assumption 4

Let us assume now that the family \({\mathscr {T}}(\ell ,{\mathscr {M}})\) satisfies the stronger Assumption 4. We introduce the mapping

$$\begin{aligned} \begin{array}{l|rcl} \phi : &{} (0,+\infty ) &{} \longrightarrow &{} {\mathbb {R}}_{+} \\ &{} z &{} \longmapsto &{} \displaystyle {\phi (z)=\frac{2\left( {e^{z}-1-z}\right) }{z^{2}}}. \end{array} \end{aligned}$$
(24)

The function \(\phi \) is increasing on \((0,+\infty )\) and tends to 1 when z tends to 0. Given \(\beta >0\) and a family \({\mathscr {T}}(\ell ,{\mathscr {M}})\) that satisfies Assumption 4, we define

$$\begin{aligned} {\overline{{c}}}_{1}&={c}_{0}-\beta a_{2}a_{1}^{-1}\tau ^{2}\phi \left[ {\beta (1+2{c})}\right] (1+2{c}(1+{c})); \end{aligned}$$
(25)
$$\begin{aligned} {\overline{{c}}}_{2}&= {c}-\beta a_{2}a_{1}^{-1}\tau ^{2}\phi \left[ {\beta (1+2{c})}\right] {c}^{2}; \end{aligned}$$
(26)
$$\begin{aligned} {\overline{{c}}}_{3}&=(2+{c}) -\beta a_{2}a_{1}^{-1}\tau ^{2}\phi \left[ {\beta (3+2{c})}\right] (2+{c})^{2}. \end{aligned}$$
(27)

Note that the value of \({\overline{{c}}}_{1}\wedge {\overline{{c}}}_{2} \wedge {\overline{{c}}}_{3}\) is positive for \(\beta =0\) and decreases continuously to \(-\infty \) when \(\beta \) grows to infinity. Consequently, there exists some \(\beta _{0}>0\) for which \({\overline{{c}}}_{1}\wedge {\overline{{c}}}_{2} \wedge {\overline{{c}}}_{3}=0\) and \({\overline{{c}}}_{1}\wedge {\overline{{c}}}_{2} \wedge {\overline{{c}}}_{3}\) is positive for all values \(\beta \in (0,\beta _{0})\).
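For a given family \({\mathscr {T}}(\ell ,{\mathscr {M}})\), the threshold \(\beta _{0}\) is easily computed numerically, for instance by bisection as in the sketch below; the constants passed in the last line are placeholders, not those of a family studied in this paper.

```python
import numpy as np

def phi(z):                                # the function defined in (24)
    return 2 * (np.expm1(z) - z) / z**2

def c_min(beta, c, tau, a0, a1, a2):
    """min(c1_bar, c2_bar, c3_bar) as in (25)-(27)."""
    c0 = (1 + c) - c * (a0 / a1)           # the constant c_0 of (6)
    k = (a2 / a1) * tau**2
    c1 = c0 - beta * k * phi(beta * (1 + 2 * c)) * (1 + 2 * c * (1 + c))
    c2 = c - beta * k * phi(beta * (1 + 2 * c)) * c**2
    c3 = (2 + c) - beta * k * phi(beta * (3 + 2 * c)) * (2 + c) ** 2
    return min(c1, c2, c3)

def beta0(c, tau, a0, a1, a2, tol=1e-12):
    lo, hi = 0.0, 1.0
    while c_min(hi, c, tau, a0, a1, a2) > 0:   # bracket the zero crossing
        hi *= 2
    while hi - lo > tol:                       # bisection
        mid = (lo + hi) / 2
        if c_min(mid, c, tau, a0, a1, a2) > 0:
            lo = mid
        else:
            hi = mid
    return lo

print(beta0(c=1.0, tau=2.0, a0=1.0, a1=1.0, a2=1.0))   # placeholder constants
```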

Let us now present our second result on the concentration property of our posterior \({\widehat{\pi }}_{{\varvec{X}}}\).

Theorem 2

Assume that the model \(({\mathscr {M}},\pi )\) and the loss \(\ell \) satisfy Assumptions 1 and 2 and the family \({\mathscr {T}}(\ell ,{\mathscr {M}})\) Assumption 4. For \(\beta \in (0,\beta _{0})\) and \(\gamma <({\overline{{c}}}_{1}\wedge {\overline{{c}}}_{2} \wedge {\overline{{c}}}_{3})/(2\tau )\), the posterior \(\widehat{\pi }_{{\varvec{X}}}\) defined by (7) satisfies the following property. There exists \(\kappa _{0}>0\) only depending on \(a_{0}/a_{1},a_{2}/a_{1},{c},\tau ,\beta \) and \(\gamma \) such that, for all \(\xi >0\) and any distribution \({\textbf{P}}^{\star }\) with marginals in \({\mathscr {P}}\),

$$\begin{aligned} {\mathbb {E}}\left[ {{\widehat{\pi }}_{{\varvec{X}}}\left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}(\overline{P}^{\star },\kappa _{0}{r})}\right) }\right] \leqslant 2e^{-\xi } \end{aligned}$$
(28)

with

$$\begin{aligned} {r}=\inf _{P\in {\mathscr {M}}}\left[ {\ell (\overline{P}^{\star },P)+{r}_{n}(\beta ,P)}\right] +\frac{2\xi }{n\beta a_{1}}. \end{aligned}$$
(29)

In particular,

$$\begin{aligned} {\mathbb {P}}\left[ {{\widehat{\pi }}_{{\varvec{X}}}\left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}(\overline{P}^{\star },\kappa _{0}{r})}\right) \geqslant e^{-\xi /2}}\right] \leqslant 2e^{-\xi /2}. \end{aligned}$$

The value of \(\kappa _{0}\) is given by (132) in the proof. Note that the constraints on \(\beta \) and \(\gamma \) required in our Theorem 2, as well as that on \({c}\) given in (6), only depend on \(a_{0},a_{1}\) and \(a_{2}\), hence on the choice of the family \({\mathscr {T}}(\ell ,{\mathscr {M}})\). When \(a_{0},a_{1}\) and \(a_{2}\) do not depend on \({\mathscr {M}}\), the value of \(\beta \) can be chosen as a universal constant. In particular, it depends neither on the model \(({\mathscr {M}},\pi )\) nor on the sample size n.

Example 2

(Example 1 continued) Let us go back to the framework of our Example 1 and assume that \({\mathscr {T}}(\ell ,{\mathscr {M}})\) satisfies the requirements of Theorem 2, hence Assumption 4. Applying our construction with some numerical value of \(\beta \) which satisfies the constraint of our Theorem 2, we deduce from (21) that \({\widehat{\pi }}_{{\varvec{X}}}\) concentrates on an \(\ell \)-ball with radius of order

$$\begin{aligned} {\overline{{r}}}=\inf _{P\in {\mathscr {M}}}\ell (\overline{P}^{\star },P)+\frac{\log (2B/A)}{\gamma a_{1} \beta } \frac{k}{n}+\frac{2}{a_{1}\beta }\frac{\xi }{n}. \end{aligned}$$
(30)

When the model is well-specified, \(\inf _{P\in {\mathscr {M}}}\ell (\overline{P}^{\star },P)=0\) and the ball \({\mathscr {B}}(P^{\star },\kappa _{0}{\overline{{r}}})\) with radius \({\overline{{r}}}={\overline{{r}}}(n)\) contracts at the rate 1/n. Applying our Theorem 1 under Assumption 3, ignoring the fact that the family \({\mathscr {T}}(\ell ,{\mathscr {M}})\) also satisfies Assumption 4, would lead to the weaker result that when the model is well-specified the posterior concentrates on an \(\ell \)-ball with radius of order \(\sqrt{k/n}\), hence at a rate \(1/\sqrt{n}\), as shown by (23).

4.4 Concentrated priors

Theorems 1 and 2 show that, starting from a prior \(\pi \) that puts enough mass around most of the elements of \({\mathscr {M}}\), the posterior \({\widehat{\pi }}_{{\varvec{X}}}\) concentrates on an \(\ell \)-ball with radius of order \(\inf _{P\in {\mathscr {M}}}\ell ({\overline{P}}^{\star },P)+r_{n}\) where \(r_{n}\) is small, at least under suitable assumptions and for n sufficiently large. We now investigate what happens when the prior is very concentrated on a small \(\ell \)-ball with radius \({\varepsilon }>0\) around an element \({{\overline{Q}}}\in {\mathscr {M}}\) that might not be the true distribution of the data. More precisely, we assume the following.

Assumption 5

For \({{\overline{Q}}}\in {\mathscr {M}}\) and \({\varepsilon }>0\),

$$\begin{aligned} \pi \left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},{\varepsilon })}\right) \leqslant e^{-(2\xi +1)}\pi \left( {{\mathscr {B}}({{\overline{Q}}},{\varepsilon })}\right) \quad \text {with }\xi >0. \end{aligned}$$

In this case, we establish the following result.

Theorem 3

Assume that the model \(({\mathscr {M}},\pi )\) and the loss \(\ell \) satisfy Assumptions 1 and 2 and the family \({\mathscr {T}}(\ell ,{\mathscr {M}})\) Assumption 3. If Assumption 5 is satisfied, there exists \(\kappa _{0}>0\) only depending on \({c},\tau \) and the ratio \(a_{0}/a_{1}\) such that for any distribution \({\textbf{P}}^{\star }\) with marginals in \({\mathscr {P}}\),

$$\begin{aligned} {\mathbb {E}}\left[ {{\widehat{\pi }}_{{\varvec{X}}}\left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}({\overline{P}}^{\star },\kappa _{0} {r})}\right) }\right] \leqslant 2e^{-\xi }\quad \text {with}\quad {r}=\ell (\overline{P}^{\star },{{\overline{Q}}})\vee \frac{\beta }{a_{1}}\vee {\varepsilon }. \end{aligned}$$
(31)

In particular, for the choice \(\beta =a_{1} {\varepsilon }\), \(r=\ell (\overline{P}^{\star },{{\overline{Q}}})\vee {\varepsilon }\).

If furthermore, Assumption 4 is satisfied and \(\beta \in (0,\beta _{0})\) (where \(\beta _{0}\) is defined in Sect. 4.3), there exists \(\kappa _{0}'>0\) only depending on \(\tau ,\beta , a_{0}/a_{1}\) and \(a_{2}/a_{1}\) such that for any distribution \({\textbf{P}}^{\star }\) with marginals in \({\mathscr {P}}\),

$$\begin{aligned} {\mathbb {E}}\left[ {{\widehat{\pi }}_{{\varvec{X}}}\left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}({\overline{P}}^{\star },\kappa _{0}' {r})}\right) }\right] \leqslant 2e^{-\xi }\quad \text {with}\quad {r}=\ell (\overline{P}^{\star },{{\overline{Q}}})\vee {\varepsilon }. \end{aligned}$$
(32)

This result shows that for a suitable choice of \(\beta \), the posterior \({\widehat{\pi }}_{{\varvec{X}}}\) also concentrates on an \(\ell \)-ball centred at \({\overline{P}}^{\star }\) with radius of order \({\varepsilon }\) when the model is well-specified, that is, when the data are i.i.d. with distribution \({\overline{P}}^{\star }={{\overline{Q}}}\). When the model is misspecified, the radius of the ball is of order \(\ell ({\overline{P}}^{\star },{{\overline{Q}}})\vee {\varepsilon }\) and therefore inflates by no more than the distance from \({\overline{P}}^{\star }\) to the centre \({{\overline{Q}}}\). This illustrates the stability of the posterior \({\widehat{\pi }}_{{\varvec{X}}}\) with respect to misspecification.

5 Applications to classical loss functions

The aim of this section is to show how our general construction can be applied to loss functions \(\ell \) of interest. The propositions contained in this section about the corresponding families \({\mathscr {T}}(\ell ,{\mathscr {M}})\) have been established in Baraud [4] except for the squared Hellinger loss for which we refer to Baraud and Birgé [5, Proposition 3]. The list of loss functions we present here is not exhaustive. Our results also apply to all loss functions that derive from a variational formula of the form

$$\begin{aligned} \ell (P,Q)=\sup _{f\in {\mathscr {F}}}\left[ {\int _{E}fdP-\int _{E}fdQ}\right] \end{aligned}$$

where \({\mathscr {F}}\) is a suitable class of bounded functions. For such losses, we refer the reader to Baraud [4].

In this section, we consider models \({\mathscr {M}}=\{P=p\cdot \mu , p\in {\mathcal {M}}\}\) which are dominated by a measure \(\mu \) on \((E,{\mathcal {E}})\) and we denote by \({\mathcal {M}}\subset {\mathscr {L}}_{1}(E,{\mathcal {E}},\mu )\) the corresponding families of densities with respect to \(\mu \). Elements \(P,Q,\ldots \) in \({\mathscr {M}}\) are associated with their densities in \({\mathcal {M}}\) by using lower case letters \(p,q,\ldots \). In all the cases we consider, \(t_{(P,Q)}(x)\) is a measurable function of (p(x), q(x)) for \(P,Q\in {\mathscr {M}}\) and \(x\in E\). In order to satisfy our measurability Assumption 3-(i), it is therefore sufficient to assume that

$$\begin{aligned} \begin{array}{rcl} (E\times {\mathscr {M}},{\mathcal {E}}\otimes {\mathcal {A}}) &{} \longrightarrow &{} {\mathbb {R}}\\ (x,P) &{} \longmapsto &{} p(x) \end{array} \end{aligned}$$

is measurable. In the case of a parametrized model \({\mathscr {M}}=\{P_{\theta }=p_{\theta }\cdot \mu , \theta \in \Theta \}\), as described in Sect. 2.1, this condition is satisfied as soon as the mapping

$$\begin{aligned} \begin{array}{l|rcl} p: &{} (E\times \Theta ,{\mathcal {E}}\otimes \mathfrak {B}) &{} \longrightarrow &{} {\mathbb {R}}_{+} \\ &{} (x,\theta ) &{} \longmapsto &{} p_{\theta }(x) \end{array} \end{aligned}$$

is measurable. Throughout this section, we assume that such measurability assumptions are satisfied.

5.1 The case of the total variation distance

In this section, \({\mathscr {P}}\) is the set of all probability measures on \((E,{\mathcal {E}})\) and

$$\begin{aligned} \left\| {P-Q}\right\| =\frac{1}{2}\int _{E}\left| {p-q}\right| d\mu \end{aligned}$$
(33)

denotes the total variation loss (TV-loss for short) between \(P,Q\in {\mathscr {P}}\).

Proposition 1

The family \({\mathscr {T}}(\ell ,{\mathscr {M}})\) which consists of all the functions \(t_{(P,Q)}\) defined for \(P=p\cdot \mu \) and \(Q=q\cdot \mu \) in \({\mathscr {M}}\) by

$$\begin{aligned} t_{(P,Q)}=\frac{1}{2}\left[ {{\mathbb {1}}_{q>p}-Q(q>p)}\right] -\frac{1}{2}\left[ {{\mathbb {1}}_{p>q}-P(p>q)}\right] \end{aligned}$$
(34)

satisfies Assumption 2 with \(\tau =1\) and Assumption 3 with \(a_{0}=3/2\) and \(a_{1}=1/2\).
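
To fix ideas, here is a minimal numerical sketch, in Python, of the family (34). The pair of Gaussian candidates and the grid quadrature used to compute \(Q(q>p)\) and \(P(p>q)\) are illustrative choices of ours, not part of the construction itself.

```python
import numpy as np
from scipy.stats import norm

# Illustrative pair P = N(0,1), Q = N(1,1); any dominated pair would do.
grid = np.linspace(-10.0, 11.0, 200_001)
dx = grid[1] - grid[0]
p, q = norm.pdf(grid), norm.pdf(grid, loc=1.0)

Q_q_gt_p = np.sum(q[q > p]) * dx   # Q(q > p), by quadrature
P_p_gt_q = np.sum(p[p > q]) * dx   # P(p > q), by quadrature

def t_PQ(y):
    """Evaluate t_(P,Q)(y) as in (34)."""
    py, qy = norm.pdf(y), norm.pdf(y, loc=1.0)
    return 0.5 * (float(qy > py) - Q_q_gt_p) - 0.5 * (float(py > qy) - P_p_gt_q)

# The statistic takes values in [-1, 1] and changes sign at the midpoint x = 1/2:
print(t_PQ(0.2), t_PQ(0.8))   # ~ -0.5 and ~ +0.5 here
```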

It follows from Proposition 1 that we may apply our general construction to the so-defined family \({\mathscr {T}}(\ell ,{\mathscr {M}})\) with the values \({c}={c}_{0}=1/3\) (hence \(\lambda =4/3\)). The reader can check that the value \(\gamma =1/100\) satisfies the requirement of our Theorem 1 and that (16) is satisfied with \(\kappa _{0}=220\). Theorem 1 can therefore be rephrased as follows.

Corollary 1

Let \(\beta \geqslant 1/\sqrt{n}\), \({c}=1/3\) and \({\widehat{\pi }}_{{\varvec{X}}}^{{{\text {TV}}}}\) be the posterior defined by (7) and associated with the family \({\mathscr {T}}(\ell ,{\mathscr {M}})\) given in Proposition 1. For all \(\xi >0\) and any distribution \({\textbf{P}}^{\star }\), with a probability at least \(1-2e^{-\xi /2}\), the posterior \({\widehat{\pi }}_{{\varvec{X}}}^{{\text {TV}}}\) satisfies

$$\begin{aligned} {\widehat{\pi }}_{{\varvec{X}}}^{{\text {TV}}}&\left( {\left\{ {P\in {\mathscr {M}},\; \ell ({\overline{P}}^{\star },P)\leqslant 220\left[ {\inf _{P'\in {\mathscr {M}}(\beta )}\ell ({\overline{P}}^{\star },P')+2\left( {\beta +\frac{2\xi }{n\beta }}\right) }\right] }\right\} }\right) \nonumber \\&\geqslant 1-e^{-\xi /2} \end{aligned}$$
(35)

where

$$\begin{aligned} {\mathscr {M}}(\beta )=\left\{ {P\in {\mathscr {M}},\; \sup _{r\geqslant 2\beta }\left[ {\frac{200}{n{r}}\log \left( {\frac{\pi \left( {{\mathscr {B}}(P,2{r})}\right) }{\pi \left( {{\mathscr {B}}(P,{r})}\right) }}\right) }\right] \leqslant \beta }\right\} . \end{aligned}$$

By convexity, we may write that

$$\begin{aligned} \inf _{P\in {\mathscr {M}}(\beta )}\left\| {P-{\overline{P}}^{\star }}\right\| \leqslant \inf _{P\in {\mathscr {M}}(\beta )}\left[ {\frac{1}{n}\sum _{i=1}^{n}\left\| {P-P_{i}^{\star }}\right\| }\right] \end{aligned}$$

and the left-hand side is therefore small when there exists \(P\in {\mathscr {M}}(\beta )\) that approximates well enough most of the marginals of \({\textbf{P}}^{\star }\). The concentration properties of \(\widehat{\pi }_{{\varvec{X}}}^{{\text {TV}}}\) thus remain stable with respect to a possible misspecification of the model and a departure from the equidistribution assumption.

In fact, as we shall see in Example 3 below, the average distribution \({\overline{P}}^{\star }\) may belong to \({\mathscr {M}}(\beta )\) even when none of the marginals \(P_{i}^{\star }\) does. This means that in good situations the posterior may concentrate around \(\overline{P}^{\star }\), as it would in the i.i.d. case with a distribution in \({\mathscr {M}}(\beta )\), even though the data are non-i.i.d. and their marginals lie outside \({\mathscr {M}}(\beta )\).

Example 3

(Example 1 continued) Going back to Example 1 and taking for \(\ell \) the TV-loss (then \(a_{1}=1/2\)), we deduce from (23) that

$$\begin{aligned} {\overline{{r}}}_{n}=\inf _{P\in {\mathscr {M}}}\left\| {\overline{P}^{\star }-P}\right\| +2\left( {\sqrt{\frac{k}{n}}+\frac{2\xi }{\sqrt{n k}}}\right) . \end{aligned}$$

In particular, if for each \(i\in \{1,\ldots ,n\}\), \(P_{i}^{\star }\) is the uniform distribution on \([(i-1)/n,i/n]\) and \({\mathscr {M}}\) contains the uniform distribution \({\mathcal {U}}([0,1])\) on [0, 1], then \({\overline{P}}^{\star }={\mathcal {U}}([0,1])\) belongs to \({\mathscr {M}}\) even though none of the marginals \(P_{i}^{\star }\) does. We then get that

$$\begin{aligned} {\overline{{r}}}_{n}=2\left( {\sqrt{\frac{k}{n}}+\frac{2\xi }{\sqrt{n k}}}\right) \end{aligned}$$

and the posterior concentrates around \({\overline{P}}^{\star }\) at a parametric rate.
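
This phenomenon is elementary to check numerically. The sketch below, under the illustrative choice \(n=50\), computes the TV-loss (33) by grid quadrature and confirms that the average of the marginal densities is that of \({\mathcal {U}}([0,1])\) while each marginal is at TV-distance \(1-1/n\) from it.

```python
import numpy as np

n = 50                                   # illustrative sample size
grid = np.linspace(0.0, 1.0, 100_001)
dx = grid[1] - grid[0]

def p_i(i):
    """Density of the i-th marginal: uniform on [(i-1)/n, i/n]."""
    return np.where((grid >= (i - 1) / n) & (grid < i / n), float(n), 0.0)

p_bar = np.mean([p_i(i) for i in range(1, n + 1)], axis=0)   # average density
u = np.ones_like(grid)                                       # density of U([0,1])

tv = lambda f, g: 0.5 * np.sum(np.abs(f - g)) * dx           # TV-loss (33)
print(tv(p_bar, u))    # ~ 0: the average distribution is U([0,1])
print(tv(p_i(1), u))   # = 1 - 1/n: each marginal is far from U([0,1])
```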

5.2 The case of the \({\mathbb {L}}_{j}\)-loss

Let \(j\in (1,+\infty )\). We denote by \({\mathscr {P}}_{j}\) the set of all finite and signed measures on \((E,{\mathcal {E}},\mu )\) which are of the form \(P=p\cdot \mu \) with \(p\in {\mathscr {L}}_{j}(E,\mu )\cap {\mathscr {L}}_{1}(E,\mu )\). Let \(\ell _{j}\) be the loss defined by \(\ell _{j}(P,Q)=\left\| {p-q}\right\| _{\mu ,j}\) for all \(P=p\cdot \mu \) and \(Q=q\cdot \mu \) in \({\mathscr {P}}_{j}\). In this section, \({\mathscr {P}}\) is the subset that consists of all the probability measures in \({\mathscr {P}}_{j}\).

Proposition 2

Let \({\mathscr {M}}=\left\{ {P=p\cdot \mu ,\; p\in {\mathcal {M}}}\right\} \) be a subset of \({\mathscr {P}}_{j}\) for which \({\mathcal {M}}\) satisfies for some \(R>0\)

$$\begin{aligned} \hspace{8mm}\left\| {p-q}\right\| _{\infty }\leqslant R\left\| {p-q}\right\| _{\mu ,j}\quad \text {for all }p,q\in {\mathcal {M}}. \end{aligned}$$
(36)

Define for \(P=p\cdot \mu \) and \(Q=q\cdot \mu \) in \({\mathscr {M}}\),

$$\begin{aligned} f_{(P,Q)}=\frac{\left( {p-q}\right) _{+}^{j-1}-\left( {p-q}\right) _{-}^{j-1}}{\left\| {p-q}\right\| _{\mu ,j}^{j-1}}\quad \text {when }P\ne Q\quad \text {and}\quad f_{(P,P)}= 0. \end{aligned}$$

Then, the family \({\mathscr {T}}(\ell _{j},{\mathscr {M}})\) which contains the functions \(t_{(P,Q)}\) defined for \(P,Q\in {\mathscr {M}}\) by

$$\begin{aligned} t_{(P,Q)}=\frac{1}{2R^{j-1}}\left[ {\int _{E}f_{(P,Q)}\frac{dP+dQ}{2}-f_{(P,Q)}}\right] \end{aligned}$$
(37)

satisfies Assumption 2 with \(\tau =1\) and Assumption 3 with \(a_{0}=3/(4R^{j-1})\) and \(a_{1}=1/(4R^{j-1})\).

When \(j=2\), (36) is typically satisfied when \({\mathcal {M}}\) is a subset of a linear space enjoying good connections between the \({\mathbb {L}}_{2}(\mu )\) and the supremum norms. Many finite dimensional linear spaces with good approximation properties do satisfy such connections (e.g. piecewise polynomials of a fixed degree on a regular partition of [0, 1], trigonometric polynomials on [0, 1) etc.). We refer the reader to Birgé and Massart [14, Section 3] for additional examples. The property may also hold for infinite dimensional linear spaces as proven in Baraud [4].

It follows from Proposition 2 that one may choose \({c}={c}_{0}=1/3\) in (6) and \(\gamma =1/100\) in Theorem 1. Besides, Theorem 1 applies with \(\kappa _{0}=220\).

Example 4

(Example 1 continued) Let us go back to our Example 1 with \(\ell =\ell _{j}\) and \({\mathscr {T}}(\ell ,{\mathscr {M}})\) given in Proposition 2. For the choice of \(\beta \) given in (22) and \(\gamma =1/100\), we deduce from (23) (with \(a_{1}=1/(4R^{j-1})\)) that the resulting posterior \({\widehat{\pi }}_{{\varvec{X}}}\) concentrates on an \(\ell _{j}\)-ball around \({\overline{P}}^{\star }\) with a radius of order

$$\begin{aligned} \overline{r}_{n}=\inf _{p\in {\mathcal {M}}}\left\| {\frac{1}{n}\sum _{i=1}^{n}p_{i}^{\star }-p}\right\| _{\mu ,j} +4R^{j-1}\left( {\sqrt{\frac{k}{n}}+\frac{2\xi }{\sqrt{n k}}}\right) . \end{aligned}$$

5.3 The case of the squared Hellinger loss

Here, \({\mathscr {P}}\) is the set of all probability measures on \((E,{\mathcal {E}})\) and

$$\begin{aligned} \ell (P,Q)=h^{2}(P,Q)=\frac{1}{2}\int _{E}\left( {\sqrt{p}-\sqrt{q}}\right) ^{2}d\mu , \end{aligned}$$
(38)

is the squared Hellinger distance between two probabilities \(P,Q\in {\mathscr {P}}\).

Proposition 3

Let \(\psi \) be the function defined by

$$\begin{aligned} \begin{array}{l|rcl} \psi : &{} [0,+\infty ] &{} \longrightarrow &{} [-1,1] \\ &{} x &{} \longmapsto &{} { {\left\{ \begin{array}{ll} \displaystyle {\frac{x-1}{x+1}} &{}\text { if }x\in [0,+\infty )\\ 1 &{}\text { if }x=+\infty . \end{array}\right. } } \end{array} \end{aligned}$$

The family \({\mathscr {T}}(\ell ,{\mathscr {M}})\) containing the functions \(t_{(P,Q)}\) defined for \(P=p\cdot \mu \) and \(Q=q\cdot \mu \) in \({\mathscr {M}}\) by

$$\begin{aligned} t_{(P,Q)}=\frac{1}{2}\psi \left( {\sqrt{\frac{q}{p}}}\right) \end{aligned}$$
(39)

(with the conventions \(0/0=1\) and \(x/0=+\infty \) for all \(x>0\)) satisfies Assumption 2 with \(\tau =2\) and Assumption 4 with \(a_{0}=2\), \(a_{1}=3/16\), \(a_{2}=3\sqrt{2}/4\).
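
For concreteness, here is a short Python sketch of the family (39); the two triangular densities used at the end are an arbitrary illustration, and the conventions \(0/0=1\) and \(x/0=+\infty \) are handled explicitly.

```python
import numpy as np

def psi(x):
    """The map psi of Proposition 3: psi(x) = (x-1)/(x+1) with psi(+inf) = 1."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    finite = np.isfinite(x)
    out[finite] = (x[finite] - 1.0) / (x[finite] + 1.0)
    return out

def t_PQ(p_x, q_x):
    """t_(P,Q)(x) = psi(sqrt(q(x)/p(x)))/2 as in (39), with 0/0 = 1, x/0 = +inf."""
    p_x, q_x = np.asarray(p_x, float), np.asarray(q_x, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where((p_x == 0.0) & (q_x == 0.0), 1.0, q_x / p_x)
    return 0.5 * psi(np.sqrt(ratio))

# |t| <= 1/2; e.g. for the triangular densities p(x) = 2x, q(x) = 2(1-x) on [0,1]:
x = np.linspace(0.0, 1.0, 5)
print(t_PQ(2 * x, 2 * (1 - x)))   # 0.5 at x=0, 0 at x=1/2, -0.5 at x=1
```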

With such a choice of the family \({\mathscr {T}}(\ell ,{\mathscr {M}})\), (6) is satisfied with \({c}=1/125\), so that \({c}_{0}\in [0.922,0.923]\), and the requirements of Theorem 2 are met with \(\beta =2\gamma =1/500\). The value \(\kappa _{0}=1694\) is then suitable. The definition (11) of \({r}_{n}(\beta ,P)\) for \(P\in {\mathscr {M}}\) becomes

$$\begin{aligned}&{r}_{n}(\beta ,P)\nonumber \\&\quad =\inf \left\{ {{r}\geqslant \frac{8000}{3n},\; \frac{\pi \left( {{\mathscr {B}}(P,2{r}')}\right) }{\pi \left( {{\mathscr {B}}(P,{r}')}\right) }\leqslant \exp \left( {\frac{3n{r}'}{8\cdot 10^{6}}}\right) \text { for all }{r}'\geqslant {r}}\right\} , \end{aligned}$$
(40)

with the convention \(\sup {\varnothing }=8000/(3n)\). Theorem 2 can then be rephrased as follows.

Corollary 2

Let \({\widehat{\pi }}_{{\varvec{X}}}^{h}\) be the posterior defined by (7) and associated with the family \({\mathscr {T}}(\ell ,{\mathscr {M}})\) given in Proposition 3 and the choices \({c}=1/125\) and \(\beta =1/500\). For all \(\xi >0\) and any distribution \({\textbf{P}}^{\star }\), with a probability at least \(1-2e^{-\xi /2}\),

$$\begin{aligned} {\widehat{\pi }}_{{\varvec{X}}}^{h}\left( {\left\{ {P\in {\mathscr {M}},\; h^{2}\left( {\overline{P}^{\star },P}\right) \leqslant 1694{r}}\right\} }\right) \geqslant 1-e^{-\xi /2} \end{aligned}$$

where

$$\begin{aligned} {r}=\inf _{P\in {\mathscr {M}}}\left[ {h^{2}\left( {\overline{P}^{\star },P}\right) +{r}_{n}(\beta ,P)}\right] +\frac{5334\xi }{n} \end{aligned}$$

and \({r}_{n}(\beta ,P)\) is given by (40).

As for the total variation distance, we may write that

$$\begin{aligned} \inf _{P\in {\mathscr {M}}}h^{2}\left( {{\overline{P}}^{\star },P}\right) \leqslant \inf _{P\in {\mathscr {M}}}\left[ {\frac{1}{n}\sum _{i=1}^{n}h^{2}\left( {P_{i}^{\star },P}\right) }\right] . \end{aligned}$$

The left-hand side is small when there exists an element \(P\in {\mathscr {M}}\) that approximates well most of the marginal distributions \(P_{i}^{\star }\). If for such a P, the quantity \({r}_{n}(\beta ,P)\) is small enough, the posterior concentrates around \({\overline{P}}^{\star }\) just as it would do if the data were truly i.i.d. with distribution \(P\in {\mathscr {M}}\).

Example 5

(Example 1 continued) Let us go back to Example 1, more precisely Example 2, with \(\ell =h^{2}\) and \({\mathscr {T}}(\ell ,{\mathscr {M}})\) given in Proposition 3. Inequality (21) is satisfied with \(\beta =2\gamma =1/500\) and \(a_{1}=3/16\). It follows from (30) that \({\widehat{\pi }}_{{\varvec{X}}}^{h}\) concentrates on an \(h^{2}\)-ball around \({\overline{P}}^{\star }\) with a radius of order

$$\begin{aligned} {\overline{r}}=\inf _{P\in {\mathscr {M}}}h^{2}\left( {\overline{P}^{\star },P}\right) +\frac{k+\xi }{n}. \end{aligned}$$

6 Comparing the classical Bayesian approach to ours

In this section, our aim is to highlight some similarities and differences between the Bayesian posterior and ours. Throughout this section, we consider the squared Hellinger loss \(\ell =h^{2}\) and denote by \({\widehat{\pi }}_{{\varvec{X}}}^{K}\) the Bayes posterior associated with the model \(({\mathscr {M}},\pi )\). The letter K in the notation \(\widehat{\pi }_{{\varvec{X}}}^{K}\) refers to the fact that the Bayesian posterior can be obtained from our general construction by using the Kullback–Leibler divergence, as explained in Sect. 3.4. When \({\mathscr {M}}=\{P_{{\varvec{\theta }}},\; {\varvec{\theta }}\in \Theta \}\) is parametric with \(\Theta \subset {\mathbb {R}}^{k}\), we denote by \({\widehat{\nu }}_{{\varvec{X}}}^{K}\) the Bayesian posterior on the parameter space \(\Theta \) and by \({\widehat{\nu }}_{{\varvec{X}}}^{h}\) the posterior on \(\Theta \) associated with \({\widehat{\pi }}_{{\varvec{X}}}^{h}\).

6.1 Some classical concentration results for the Bayes posterior distribution

Most of the results that have been established about the concentration properties of the Bayesian posterior are asymptotic in nature, and it seems difficult to establish a general nonasymptotic version of them, as we do for our posterior. One of the few exceptions we are aware of is Birgé [13].

When the data are i.i.d. with a distribution \(P^{\star }\in {\mathscr {M}}\), a typical asymptotic form of these results is the following one (see Ghosal et al. [19], Theorems 2.1 and 2.4, for example). Let \({\varepsilon }_{n}\) be a sequence of positive numbers that converges to zero as n goes to infinity. If \(P^{\star }\) fulfils some suitable conditions, which we shall discuss later on and which depend on the prior \(\pi \) and on \({\varepsilon }_{n}\), the following convergence in probability holds true:

$$\begin{aligned} {\widehat{\pi }}_{{\varvec{X}}}^{K}\left( {\left\{ {P\in {\mathscr {M}},\; h^{2}\left( {P^{\star },P}\right) \geqslant M_{n}{\varepsilon }_{n}^{2}}\right\} }\right) {\mathop {\,\displaystyle {\mathop {\longrightarrow }_{n\rightarrow +\infty }}\,}\limits ^{\textrm{P}}}0. \end{aligned}$$
(41)

In (41), \(M_{n}=M\) denotes some large enough positive constant if \(n{\varepsilon }_{n}^{2}\rightarrow +\infty \) as \(n\rightarrow +\infty \), while \(M_{n}\) increases to infinity when \(\liminf n{\varepsilon }_{n}^{2}>0\). The first condition on \({\varepsilon }_{n}\) is typically satisfied when \({\mathscr {M}}\) is a nonparametric model while the second one generally applies to parametric ones.

In comparison, in this well-specified framework, our Corollary 2 leads to the following result. For all \(P^{\star }\in {\mathscr {M}}\) and \(\xi >0\)

$$\begin{aligned}&{\mathbb {P}}\left[ {{\widehat{\pi }}_{{\varvec{X}}}^{h}\left( {\left\{ {P\in {\mathscr {M}},\; h^{2}(P^{\star },P)\geqslant \kappa _{0}'\left( {r_{n}(\beta ,P^{\star })+\frac{\xi }{n}}\right) }\right\} }\right) \geqslant e^{-\xi /2}}\right] \nonumber \\&\quad \leqslant 2e^{-\xi /2} \end{aligned}$$
(42)

for some numerical constant \(\kappa _{0}'>0\). If \(P^{\star }\) satisfies \(r_{n}(\beta ,P^{\star })\leqslant {\varepsilon }_{n}^{2}\), we recover (41) by setting \(\xi =\xi _{n}=(M_{n}/(\kappa _{0}')-1)n{\varepsilon }_{n}^{2}\). However, our condition that \(r_{n}(\beta ,P^{\star })\leqslant {\varepsilon }_{n}^{2}\) is not equivalent to that imposed on \(P^{\star }\) by Ghosal, Ghosh and van der Vaart [19]. It is actually weaker. In their paper, this condition is fulfilled when the prior puts enough mass on Kullback–Leibler type balls around \(P^{\star }\). Our approach allows one to consider Hellinger balls only, which are larger and make our assumption weaker. In fact, as already underlined in the Introduction, these Kullback–Leibler type balls could be empty, and the condition unsatisfied, while our theorem would still apply.

The result established by Birgé [13] provides an improvement over the one presented above and established by Ghosal, Ghosh and van der Vaart. Birgé shows that it is essentially possible to get rid of the Kullback–Leibler divergence (see his Theorem 2), but only when the model is parametric and well-specified. Apart from the nonparametric framework, this result leaves little room for improvement since we know that the Bayesian posterior may fail to concentrate around the true parameter when the model becomes even slightly misspecified.

Another consequence of our Corollary 2, as compared to (41), is that it allows one to control

$$\begin{aligned} {\widehat{\pi }}_{{\varvec{X}}}^{h}\left( {\left\{ {P\in {\mathscr {M}},\; h^{2}(P^{\star },P)\geqslant \kappa _{0}'\left( {{\varepsilon }_{n}^{2}+\frac{\xi }{n}}\right) }\right\} }\right) \end{aligned}$$

uniformly over the set \(\{P^{\star }\in {\mathscr {M}}, r_{n}(\beta ,P^{\star })\leqslant {\varepsilon }_{n}^{2}\}\). For example, in the framework of Example 2, for the choice \({\varepsilon }_{n}^{2}=ck/n\) with \(c=\log (2B/A)/(\gamma a_{1}\beta )\), we know that \(r_{n}(\beta ,P^{\star })\leqslant {\varepsilon }_{n}^{2}\) for all \(P^{\star }\in {\mathscr {M}}\) and we deduce from (42) that

$$\begin{aligned} \sup _{P^{\star }\in {\mathscr {M}}}{\mathbb {P}}\left[ {{\widehat{\pi }}_{{\varvec{X}}}^{h}\left( {\left\{ {P\in {\mathscr {M}},\; h^{2}(P^{\star },P)\geqslant \kappa _{0}'\left( {{\varepsilon }_{n}^{2}+\frac{\xi }{n}}\right) }\right\} }\right) \geqslant e^{-\xi /2}}\right] \leqslant 2e^{-\xi /2}. \end{aligned}$$

The concentration properties of our posterior are therefore uniform over the statistical model \({\mathscr {M}}\).

6.2 About the shapes and sizes of the credible regions

A nice feature of the Bayesian approach lies in the fact that it allows one to build credible regions. In practice, they often play the same role as the confidence regions in the frequentist paradigm. When the data are i.i.d. with distribution \(P^{\star }=P_{{\varvec{\theta }}^{\star }}\) in a parametric model \({\mathscr {M}}=\{P_{{\varvec{\theta }}},\; {\varvec{\theta }}\in \Theta \}\), \(\Theta \subset {\mathbb {R}}^{k}\), a credible set for the parameter \({\varvec{\theta }}^{\star }\) is a subset \({\widehat{\Theta }}_{n,{\varvec{X}}}\subset \Theta \) (only depending on observable quantities) that satisfies \(\widehat{\nu }_{{\varvec{X}}}^{K}({\widehat{\Theta }}_{n,{\varvec{X}}})\geqslant 1-e^{-\xi }\) for some choice of \(\xi >0\). When \({\mathscr {M}}\) is a regular parametric model with a nonsingular Fisher information matrix \({\textbf{J}}\), and provided that it satisfies additional assumptions—see van der Vaart [24]—the Bernstein–von Mises theorem applies and tells us that

$$\begin{aligned} \left\| {{\widehat{\nu }}_{{\varvec{X}}}^{K}-{\mathcal {N}}\left( {\widehat{\varvec{\theta }}_{n},(n{\textbf{J}}({\varvec{\theta }}^{\star }))^{-1}}\right) }\right\| {\mathop {\,\displaystyle {\mathop {\longrightarrow }_{n\rightarrow +\infty }}\,}\limits ^{\textrm{P}}} 0\quad \text {under }P_{{\varvec{\theta }}^{\star }} \end{aligned}$$

where \({\widehat{{\varvec{\theta }}}}_{n}\) denotes the Maximum Likelihood Estimator (MLE for short). Denoting by \(\overline{\chi }_{k}^{-1}(\xi )\) the \((1-e^{-\xi })\)-quantile of a chi-square random variable with k degrees of freedom and

$$\begin{aligned} \Theta _{n,{\varvec{X}}}=\left\{ {{\varvec{\theta }}\in \Theta ,\; n\left| {{\textbf{J}}^{1/2}({\varvec{\theta }}^{\star })\left( {{\widehat{{\varvec{\theta }}}}_{n}-{\varvec{\theta }}}\right) }\right| ^{2}\leqslant {\overline{\chi }}_{k}^{-1}(\xi )}\right\} , \end{aligned}$$
(43)

we deduce that

$$\begin{aligned} \left| {{\widehat{\nu }}_{{\varvec{X}}}^{K}\left( {\Theta _{n,{\varvec{X}}}}\right) -(1-e^{-\xi })}\right| \leqslant \left\| {{\widehat{\nu }}_{{\varvec{X}}}^{K}-{\mathcal {N}}\left( {\widehat{\varvec{\theta }}_{n},(n{\textbf{J}}({\varvec{\theta }}^{\star }))^{-1}}\right) }\right\| {\mathop {\,\displaystyle {\mathop {\longrightarrow }_{n\rightarrow +\infty }}\,}\limits ^{\textrm{P}}} 0 \end{aligned}$$

hence

$$\begin{aligned} {\widehat{\nu }}_{{\varvec{X}}}^{K}\left( {\Theta _{n,{\varvec{X}}}}\right) {\mathop {\,\displaystyle {\mathop {\longrightarrow }_{n\rightarrow +\infty }}\,}\limits ^{\textrm{P}}}1-e^{-\xi }\quad \text {under }P_{{\varvec{\theta }}^{\star }}. \end{aligned}$$

The asymptotic level of “credibility” of the set \(\Theta _{n,{\varvec{X}}}\) is therefore \(1-e^{-\xi }\). This set is not, however, a genuine credible region since it depends on the unknown parameter \({\varvec{\theta }}^{\star }\). We would obtain a genuine credible region by replacing \({\varvec{\theta }}^{\star }\) with \({\widehat{{\varvec{\theta }}}}_{n}\) in the expression of \(\Theta _{n,{\varvec{X}}}\). This substitution would change the level of credibility but not the shape of the region, which is an ellipsoid centred at \({\widehat{{\varvec{\theta }}}}_{n}\) whose axes are given by the eigenvectors of the Fisher information matrix.

The aim of this section is to show that our posterior concentrates its mass on regions that have the same shape and approximately the same size. The size of \(\Theta _{n,{\varvec{X}}}\) is determined by the value of the quantile \({\overline{\chi }}_{k}^{-1}(\xi )\). The aim of the following lemma is to specify the order of magnitude of this quantile as a function of k and \(\xi \). In fact, we consider below the more general case of the quantiles of a gamma distribution \(\gamma (s,\sigma )\) with parameters \(s,\sigma >0\), that is, the distribution with density \(x\mapsto (x^{s-1}e^{-x/\sigma })/(\sigma ^{s}\Gamma (s))\) with respect to the Lebesgue measure on \({\mathbb {R}}_{+}\). The proof is postponed to Sect. 10.1.

Lemma 1

For \(s,\sigma ,\xi >0\), let \({\overline{\gamma }}_{s,\sigma }^{-1}(\xi )\) be the \((1-e^{-\xi })\)-quantile of the gamma distribution \(\gamma (s,\sigma )\) and \({\overline{\Phi }}^{-1}(\xi )\) that of a standard Gaussian random variable. Then,

$$\begin{aligned} {\overline{\gamma }}_{s,\sigma }^{-1}(\xi )\leqslant \sigma \left( {\sqrt{s}+\sqrt{\xi }}\right) ^{2} \end{aligned}$$
(44)

and for all \(s=t+1>1\) and \(\xi \geqslant \log 2+1/(12t)\),

$$\begin{aligned} {\overline{\gamma }}_{s,\sigma }^{-1}(\xi )\geqslant \sigma \left[ {t+\left[ {\sqrt{t}\;\overline{\Phi }^{-1}\left( {\xi -\frac{1}{12t}}\right) }\right] \vee \left[ {\xi +\log \left( {\frac{e^{-1/(12t)}}{\sqrt{2\pi t}}}\right) }\right] }\right] . \end{aligned}$$
(45)

Since \({\overline{\Phi }}^{-1}(\xi )\) is equivalent to \(\sqrt{2\xi }\) for large values of \(\xi >0\), these two inequalities show that for s and \(\xi \) large enough, \({\overline{\gamma }}_{s,\sigma }^{-1}(\xi )\) is of order \(\sigma \left[ {s+\xi }\right] \). In particular, \(\overline{\chi }_{k}^{-1}(\xi )={\overline{\gamma }}_{k/2,2}^{-1}(\xi )\) is of order \(k+\xi \) for k and \(\xi \) large enough.
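
The upper bound (44) is straightforward to probe numerically, for instance with the quantile functions of SciPy; the parameter values in the sketch below are arbitrary illustrations.

```python
import numpy as np
from scipy.stats import chi2, gamma

# Check the upper bound (44) on the (1 - e^{-xi})-quantile of gamma(s, sigma);
# the parameter values are arbitrary.
for s, sigma, xi in [(2.0, 1.0, 1.0), (10.0, 0.5, 3.0), (50.0, 2.0, 5.0)]:
    quant = gamma.ppf(1.0 - np.exp(-xi), a=s, scale=sigma)
    bound = sigma * (np.sqrt(s) + np.sqrt(xi)) ** 2
    print(f"s={s:5.1f}  quantile={quant:9.3f}  bound (44)={bound:9.3f}")

# Special case: chi2 with k degrees of freedom is gamma(k/2, 2),
# of order k + xi for k and xi large enough.
k, xi = 10, 4.0
print(chi2.ppf(1.0 - np.exp(-xi), df=k), "vs", k + xi)
```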

To compare our approach with the classical Bayesian paradigm, we prove in Sect. 10.2 the result below for our posterior. This result is based on the assumption that the statistical model \({\mathscr {M}}\) is regular in the sense defined in Ibragimov and Has’minskiĭ [20]. In order to avoid too many technicalities here, we refer the reader to our Sect. 8.3, more precisely Corollary 4, for a complete description of the assumptions on the statistical model \({\mathscr {M}}\).

Theorem 4

Assume that the statistical model \({\mathscr {M}}\) satisfies the assumptions of Corollary 4. If \(X_{1},\ldots ,X_{n}\) are i.i.d. with distribution \(P_{{\varvec{\theta }}^{\star }}\in {\mathscr {M}}\), for all \(\xi >0\) and n large enough, with a probability at least \(1-2e^{-\xi }\),

$$\begin{aligned} {\widehat{\nu }}_{{\varvec{X}}}^{h}\left( {\left\{ {{\varvec{\theta }}\in \Theta ,\; n\left| {{\textbf{J}}^{1/2}({\varvec{\theta }}^{\star })\left( {{\varvec{\theta }}-{\varvec{\theta }}^{\star }}\right) }\right| ^{2}\leqslant \kappa ^{\star }\left( {k+\xi }\right) }\right\} }\right) \geqslant 1-e^{-\xi } \end{aligned}$$
(46)

where \(\kappa ^{\star }\) is a positive numerical constant.

The set

$$\begin{aligned} \left\{ {{\varvec{\theta }}\in \Theta ,\; n\left| {{\textbf{J}}^{1/2}({\varvec{\theta }}^{\star })\left( {{\varvec{\theta }}-{\varvec{\theta }}^{\star }}\right) }\right| ^{2}\leqslant \kappa ^{\star }\left( {k+\xi }\right) }\right\} \end{aligned}$$

possesses the same shape and, by Lemma 1, approximately the same size as the set \(\Theta _{n,{\varvec{X}}}\) defined by (43). We deduce from Theorem 4 that the classical Bayes posterior and ours both concentrate on similar sets. If \({\widehat{{\varvec{\theta }}}}_{n}\) is an asymptotically efficient estimator of \({\varvec{\theta }}^{\star }\), it is therefore reasonable to look for a credible region of the form

$$\begin{aligned} \left\{ {{\varvec{\theta }}\in \Theta ,\;n\left| {{\textbf{J}}^{1/2}(\widehat{\varvec{\theta }}_{n})\left( {{\varvec{\theta }}-{\widehat{{\varvec{\theta }}}}_{n}}\right) }\right| ^{2}\leqslant t}\right\} , t>0 \end{aligned}$$

for \({\widehat{\nu }}_{{\varvec{X}}}^{h}\) as we would do for the classical Bayes one.

6.3 Robustness

As already mentioned, our approach allows the statistician to design robust posteriors by choosing as a loss function the squared Hellinger loss or the total variation one. In this section, we illustrate this property with a concrete example. Consider the statistical model \({\mathscr {M}}=\{P_{\theta }={\mathcal {N}}(\theta ,1),\; \theta \in {\mathbb {R}}\}\) and the prior \(\pi \) associated with the distribution \(\nu ={\mathcal {N}}(0,1)\) on \(\Theta ={\mathbb {R}}\). Then, the Bayes posterior on \(\Theta \) is \(\widehat{\nu }_{{\varvec{X}}}^{K}={\mathcal {N}}({{\widehat{m}}}_{n}, \sigma _{n}^{2})\) with \(\widehat{m}_{n}=(n+1)^{-1}\sum _{i=1}^{n} X_{i}\) and \(\sigma _{n}^{2}=1/(n+1)\). It concentrates on intervals of the form \([\widehat{m}_{n}-c/\sqrt{n+1},{{\widehat{m}}}_{n}+c/\sqrt{n+1}]\) for \(c>0\) large enough. If the distribution of the data is contaminated so that \(X_{1},\ldots ,X_{n}\) are i.i.d. with distribution

$$\begin{aligned} P^{\star }=\left( {1-\frac{1}{n}}\right) P_{0}+\frac{1}{n}{\mathcal {N}}\left( {10^{4}(n+1),1/n}\right) , \end{aligned}$$

then with a probability at least \(1-(1-1/n)^{n}\geqslant 1-1/e>63\%\), the posterior concentrates around \({{\widehat{m}}}_{n}\approx 10^{4}\), hence far away from 0, even though \(P^{\star }\) and \(P_{0}\) are close: \(\left\| {P^{\star }-P_{0}}\right\| \leqslant 1/n\).
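
A minimal simulation sketch of this contamination mechanism, under the illustrative choice \(n=100\), shows the effect on \({{\widehat{m}}}_{n}\): a single observation drawn from the contaminating component is enough to push the posterior mean near \(10^{4}\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100   # illustrative sample size

# Each observation is contaminated independently with probability 1/n.
contaminated = rng.random(n) < 1.0 / n
X = np.where(contaminated,
             rng.normal(1e4 * (n + 1), np.sqrt(1.0 / n), size=n),
             rng.normal(0.0, 1.0, size=n))

m_hat = X.sum() / (n + 1)   # mean of the Bayes posterior N(m_hat, 1/(n+1))
print(contaminated.sum(), m_hat)   # if an outlier occurs, m_hat lands near 10^4
```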

In this specific framework, the model \({\mathscr {M}}\) is regular, the Fisher information is constant and positive, and \(\nu \) admits a positive density which is continuous at \(\theta ^{\star }=0\); moreover, for all \(\theta ,\theta '\in \Theta \), \(h^{2}(\theta ,\theta ')=1-e^{-|\theta -\theta '|^{2}/8}\). We shall see in Sect. 8, more precisely in Corollary 4, that for such regular statistical models \(r_{n}(\beta ,P_{0})\leqslant \kappa ^{\star }/n\) for some numerical constant \(\kappa ^{\star }>0\), at least for n large enough. Since \(h^{2}(P^{\star },P_{0})\leqslant \left\| {P^{\star }-P_{0}}\right\| \leqslant 1/n\), we deduce from Corollary 2 that the posterior \(\widehat{\nu }_{{\varvec{X}}}^{h}\) concentrates on a set of the form

$$\begin{aligned} \left\{ {\theta \in {\mathbb {R}},\; h^{2}(\theta ,0)\leqslant \frac{c}{n}}\right\} =\left\{ {\theta \in {\mathbb {R}},\; |\theta |\leqslant \sqrt{8\log \left( {\frac{1}{1-c/n}}\right) }}\right\} \end{aligned}$$

with \(c>0\). This set is an interval around 0 of length of order \(1/\sqrt{n}\), at least for n sufficiently large. Despite the contamination of the data, the concentration property of \(\widehat{\nu }_{{\varvec{X}}}^{h}\) thus remains the same as in the well-specified case.

7 Applications

7.1 How to choose \(\beta \) in Theorem 1 for a translation model?

In this section, we consider the translation model \({\mathscr {M}}=\{P_{\theta }=p(\cdot -\theta )\cdot \mu ,\; \theta \in {\mathbb {R}}\}\) where p is a density on \({\mathbb {R}}\) with respect to the Lebesgue measure \(\mu \). Our aim is to estimate the translation parameter \(\theta \) by using a prior \(\nu _{\sigma }\) on \(\Theta ={\mathbb {R}}\) with a density (with respect to \(\mu \)) of the form \(q(\cdot /\sigma )/\sigma \) for some density q and positive number \(\sigma \). We evaluate the estimation error by means of the total variation loss. In order to use our construction we need to tune the parameter \(\beta \). In Sect. 4.2, we suggested choosing \(\beta \geqslant 1/\sqrt{n}\) satisfying (18). In order to find such a value of \(\beta =\beta (\alpha )\), we may proceed as follows. Consider a symmetric bounded interval \(I=[-l/2,l/2]\subset {\mathbb {R}}\) of length \(l>0\) satisfying \(\nu _{\sigma }(I)\geqslant 1-\alpha \), hence concentrating most of the mass of the prior \(\nu _{\sigma }\). If the set \({\mathscr {M}}(\beta )\) is large enough to contain \(\{P_{\theta },\; \theta \in I\}\),

$$\begin{aligned} \pi \left( {{\mathscr {M}}(\beta )}\right) \geqslant \pi \left( {\{P_{\theta },\; \theta \in I\}}\right) =\nu _{\sigma }(I)\geqslant 1-\alpha \end{aligned}$$
(47)

and \(\beta \) satisfies (18). We deduce from our Corollary 1 that the corresponding posterior \(\widehat{\pi }_{{\varvec{X}}}^{{\text {TV}}}\) concentrates with a probability at least \(1-2e^{-\xi /2}\) on a TV-ball with a radius of order

$$\begin{aligned} \inf _{P'\in {\mathscr {M}}(\beta )}\ell (\overline{P}^{\star },P')+2\left( {\beta +\frac{2\xi }{n\beta }}\right) \leqslant \inf _{\theta \in I}\ell ({\overline{P}}^{\star },P_{\theta })+ 2\beta +\frac{4\xi }{\sqrt{n}}=r( \beta ). \end{aligned}$$
(48)

The approximation term \(\inf _{\theta \in I}\ell (\overline{P}^{\star },P_{\theta })\) is small as soon as \({\overline{P}}^{\star }\) is close enough to a distribution \(P_{\theta ^{\star }}\) whose parameter \(\theta ^{\star }\) belongs to I. If we want to guard against the situation where \(\textrm{argmin}_{\theta \in \Theta }\ell ({\overline{P}}^{\star },P_{\theta })\) is far from 0, we need to enlarge I (or, equivalently, diminish \(\alpha \)). What would be the consequence for the value of \( \beta =\beta (\alpha )\)? What if we increase \(\sigma \), to make the prior distribution flatter, or diminish \(\sigma \) to make it more peaked? Finally, what is the influence of the choice of the density q on the size of \( \beta \)?

These are the questions we want to answer in this section. In order to simplify the presentation of our results and avoid technicalities, we make the change of variables \(l=2\sigma t\), or equivalently \(t=l/(2\sigma )>0\), and assume the following.

Assumption 6

The density q is positive, symmetric and decreasing on \({\mathbb {R}}_{+}\). There exists some nonnegative and nondecreasing function \(\varphi :[0,1)\rightarrow {\mathbb {R}}_{+}\) such that

$$\begin{aligned} \left\| {P_{0}-P_{\theta }}\right\| \leqslant r\iff \left| {\theta }\right| \leqslant \varphi (r)\quad \text {for all }r\in [0,1). \end{aligned}$$

When p is symmetric and nonincreasing on \({\mathbb {R}}_{+}\), the total variation distance between \(P_{0}\) and \(P_{\theta }\) is given by

$$\begin{aligned} \left\| {P_{0}-P_{\theta }}\right\| =2P_{0}\left( {[0,|\theta |/2]}\right) \quad \text {for all } \theta \in {\mathbb {R}}. \end{aligned}$$

Our Assumption 6 is then satisfied with \(\varphi (r)=2F_{0}^{-1}[(r+1)/2]\) for all \(r\in [0,1)\), where \(F_{0}^{-1}\) denotes the quantile function of the distribution \(P_{0}\). We set

$$\begin{aligned} {\overline{\Gamma }}=\max \left\{ {\left[ {\sup _{0<r\leqslant 1/4}\frac{\varphi (2r)}{\varphi (r)}}\right] q(0),\frac{1}{2\varphi (1/4)}}\right\} \end{aligned}$$
(49)

and assume that this quantity is finite. Note that it only depends on q(0) and p. For example, if p is the density \(x\mapsto (1/2)e^{-|x|}\),

$$\begin{aligned} \left\| {P_{0}-P_{\theta }}\right\| =1-\exp \left[ {-|\theta |/2}\right] \quad \text {and }\quad \varphi : r\mapsto -2\log (1-r). \end{aligned}$$

Since the mapping \(r\mapsto [\varphi (2r)/\varphi (r)]\) is increasing, we obtain in this case

$$\begin{aligned} {\overline{\Gamma }}=\frac{1}{\log (4/3)}\max \left\{ {q(0)\log 2,\frac{1}{4}}\right\} . \end{aligned}$$
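
These computations can be checked numerically. In the sketch below, the value \(q(0)=1/2\) (that of a standard Laplace prior density) is an illustrative choice, and the TV-loss is evaluated by grid quadrature.

```python
import numpy as np

# Laplace translation family: check the TV formula and the value of Gamma_bar.
theta = 1.3   # arbitrary shift
x = np.linspace(-30.0, 30.0, 600_001)
dx = x[1] - x[0]
p0, pt = 0.5 * np.exp(-np.abs(x)), 0.5 * np.exp(-np.abs(x - theta))
print(0.5 * np.sum(np.abs(p0 - pt)) * dx, 1.0 - np.exp(-abs(theta) / 2))

phi = lambda r: -2.0 * np.log1p(-r)            # phi(r) = -2 log(1 - r)
r = np.linspace(1e-6, 0.25, 10_000)
sup_ratio = np.max(phi(2 * r) / phi(r))        # = log 2 / log(4/3), at r = 1/4
q0 = 0.5                                       # illustrative value of q(0)
print(max(sup_ratio * q0, 1.0 / (2.0 * phi(0.25))),
      (1.0 / np.log(4.0 / 3.0)) * max(q0 * np.log(2.0), 0.25))   # equal, cf. (49)
```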

If now \(p:x\mapsto (s/2)(1-|x|)^{s-1}{\mathbb {1}}_{|x|<1}\) with \(s>0\),

$$\begin{aligned} \left\| {P_{0}-P_{\theta }}\right\| =1-(1-|\theta |/2)^{s}\quad \text {and}\quad \varphi : r\mapsto 2[1-(1-r)^{1/s}]. \end{aligned}$$

The mapping \(r\mapsto \varphi (2r)/\varphi (r)\) has a continuous extension to [0, 1/4] and is therefore bounded. Given q(0), the quantity \({\overline{\Gamma }}\) is thus finite.

The following result is proven in Sect. 10.3.

Proposition 4

Assume that Assumption 6 is satisfied and \(\overline{\Gamma }\) is finite. Let t be a \((1-\alpha /2)\)-quantile of q with \(\alpha \leqslant 1/2\). The set \({\mathscr {M}}(\beta )\) contains the subset \(\left\{ {P_{\theta }, \theta \in [-\sigma t,\sigma t]}\right\} \) and therefore satisfies (47) if

$$\begin{aligned} \beta \geqslant \overline{\beta }=\sqrt{\frac{1}{n\gamma }\max \left\{ {\log \left( {\frac{{\overline{\Gamma }} \left( {\sigma \vee 1}\right) }{q(2 t)}}\right) ,\log 4}\right\} }. \end{aligned}$$
(50)

Let us now comment on this result. The quantity \({\overline{\beta }}\) may be written as \(C/\sqrt{n}\) with

$$\begin{aligned} C=\sqrt{\frac{1}{\gamma }\max \left\{ {\log \left( {\frac{{\overline{\Gamma }} \left( {\sigma \vee 1}\right) }{q(2 t)}}\right) ,\log 4}\right\} }. \end{aligned}$$

Increasing the value of \(\sigma \) or that of t enlarges the interval \(I=[-\sigma t,\sigma t]\). It also makes the value of \(C=C(\sigma ,t)\) larger. Increasing \(\sigma \) makes the prior \(\nu _{\sigma }\) flatter and, for a fixed value of \(t>0\), \(C=C(\sigma )\) increases as \(\sqrt{\log \sigma }\) when \(\sigma \) is larger than 1. Conversely, for a fixed value of \(\sigma \), \(C=C(t)\) increases as \(\sqrt{\log (1/q(2t))}\). For example, when q is the density of a standard Gaussian random variable, \(\sqrt{\log (1/q(2t))}\) is of order t, while for the Laplace and the Cauchy distributions it is of order \(\sqrt{t}\) and \(\sqrt{\log t}\) respectively. This illustrates the fact that it is safer to use priors with heavy tails when the size of the location parameter is uncertain. In case of a light-tailed prior, it may be wise to introduce a scaling parameter \(\sigma >1\). By taking \(\sigma =10\), the concentration radius only increases by a factor less than 1.6, while the interval I is ten times longer.
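
The tail comparison above amounts to the following elementary computation, here for the standard Gaussian, Laplace and Cauchy densities as illustrations.

```python
import numpy as np

# Growth of sqrt(log(1/q(2t))) in t for three standard prior shapes.
t = np.array([2.0, 5.0, 10.0, 50.0])

q_gauss   = lambda u: np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)
q_laplace = lambda u: 0.5 * np.exp(-np.abs(u))
q_cauchy  = lambda u: 1.0 / (np.pi * (1.0 + u ** 2))

for name, q in [("Gaussian", q_gauss), ("Laplace", q_laplace), ("Cauchy", q_cauchy)]:
    print(name, np.sqrt(np.log(1.0 / q(2 * t))))
# orders: ~ t (Gaussian), ~ sqrt(t) (Laplace), ~ sqrt(log t) (Cauchy)
```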

7.2 Fast rates

We go back to the statistical framework described in Sect. 7.1 and consider the special case of the density \(p:x\mapsto s x^{s-1}{\mathbb {1}}_{(0,1]}\) with \(s\in (0,1]\). As before, we choose the TV-loss. In this specific situation,

$$\begin{aligned} \left\| {P_{\theta }-P_{\theta '}}\right\| =\left| {\theta -\theta '}\right| ^{s}\wedge 1\quad \text {for all }\theta ,\theta '\in {\mathbb {R}}\end{aligned}$$
(51)

and consequently, \(\varphi (r)=r^{1/s}\) for all \(r\in [0,1)\). Besides, the family \({\mathscr {T}}(\ell ,{\mathscr {M}})\) given by (34) satisfies not only Assumption 3 but also Assumption 4 with \(a_{2}=1\). These two facts are proven in Baraud [4, Examples 5 and 6]. As a consequence, Theorem 2 applies. The reader can check that the constants \({c}=\beta =0.1\) and \(\gamma =0.01\) satisfy the requirements of Theorem 2 and that its conclusion holds true with \(\kappa _{0}=144\).

In order to be more specific about the concentration radius of our posterior \({\widehat{\pi }}_{{\varvec{X}}}^{{\text {TV}}}\), the following proposition provides an upper bound for the quantity \(r_{n}(\beta ,P_{\theta })\). The proof is postponed to Sect. 10.4.

Proposition 5

Let \(t_{0}\) be the third quartile of \(\nu _{1}\). If the density q is positive, symmetric and decreasing on \([0,+\infty )\), for all \(\theta \in {\mathbb {R}}\) the quantity \(r_{n}(\beta ,P_{\theta })\) is not larger than

$$\begin{aligned} {\overline{r}}_{n}(\beta ,P_{\theta })= \frac{2000}{ n}\max \left\{ {\log \left( {\frac{{\overline{\Gamma }} \left( {\sigma \vee 1}\right) }{{q\left[ {2\left( {\frac{|\theta |}{\sigma }\vee t_{0}}\right) }\right] }}}\right) ,\log 4}\right\} . \end{aligned}$$
(52)

Then, our Theorem 2 tells us that for all \(\xi >0\), with a probability at least \(1-2e^{-\xi /2}\), the posterior satisfies

$$\begin{aligned} {\widehat{\pi }}_{{\varvec{X}}}^{{\text {TV}}}\left( {{\mathscr {B}}({\overline{P}}^{\star },144r)}\right) \geqslant 1-e^{-\xi /2} \end{aligned}$$

with

$$\begin{aligned} {r}\leqslant \inf _{\theta \in {\mathbb {R}}}\left[ {\left\| {\overline{P}^{\star }-P_{\theta }}\right\| +{\overline{r}}_{n}(\beta ,P_{\theta })}\right] +\frac{40\xi }{n}. \end{aligned}$$
(53)

When the data are i.i.d. with distribution \(P_{\theta ^{\star }}\), a randomized estimator \(P_{{\widehat{\theta }}}\) drawn according to \({\widehat{\pi }}_{{\varvec{X}}}^{{\text {TV}}}\) satisfies, with a probability close to 1,

$$\begin{aligned} \left| {\theta ^{\star }-{\widehat{\theta }}}\right| ^{s}\wedge 1&=\left\| {P_{\theta ^{\star }}-P_{{\widehat{\theta }}}}\right\| \leqslant \frac{C(\xi ,s,q,\theta ^{\star },\sigma )}{n}. \end{aligned}$$

This inequality implies, at least for n large enough, that

$$\begin{aligned} \left| {\theta ^{\star }-{\widehat{\theta }}}\right| \leqslant \frac{C^{1/s}(\xi ,s,q,\theta ^{\star },\sigma )}{n^{1/s}}, \end{aligned}$$

which means that the parameter \(\theta ^{\star }\) is estimated at the rate \(n^{-1/s}\). This rate is much faster than the usual \((1/\sqrt{n})\)-parametric one that is reached by an estimator based on a moment method for instance. For example, when \(s=1/3\) and \(n=100\), a moment estimator provides an accuracy of order \(10^{-1}\) while that of \({\widehat{\theta }}\) is of order \(10^{-6}\). Note that since p is unbounded when \(s<1\), the maximum likelihood estimator of \(\theta ^{\star }\) does not exist and is therefore useless.
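
The comparison above reduces to simple arithmetic:

```python
# Accuracy orders for s = 1/3 and n = 100, as in the discussion above.
n, s = 100, 1 / 3
print(n ** -0.5)       # moment method:   1/sqrt(n) = 0.1
print(n ** (-1 / s))   # posterior-based: n^{-1/s}  = 1e-06
```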

It follows from the work of Le Cam that in a translation model \({\mathscr {M}}\) of the form \(\{P_{\theta }=p(\cdot -\theta )\cdot \mu , \theta \in {\mathbb {R}}\}\), where p is a density with respect to the Lebesgue measure \(\mu \), it is impossible to estimate a distribution \(P^{\star }\in {\mathscr {M}}\) from an n-sample at a rate faster than 1/n for the TV-loss. Because of (51), the rate we get is not only optimal for estimating the distribution \(P_{\theta ^{\star }}\) but also for estimating the parameter \(\theta ^{\star }\) with respect to the Euclidean distance.

An alternative rate-optimal estimator of \(\theta ^{\star }\) is the minimum of the observations. This estimator is unfortunately non-robust to the presence of a single outlier in the sample. Our construction provides an estimator that is both rate-optimal and robust.

It is also interesting to see how the quantity \(\overline{r}_{n}(\beta ,P_{\theta })\) given in (52) deteriorates under a misspecification of the prior \(\nu _{\sigma }\), that is, when the size of the parameter \(\theta ^{\star }\) is large compared to \(\sigma \). When q is Gaussian, \(\overline{r}_{n}(\beta ,P_{\theta ^{\star }})\) increases by a factor of order \((\theta ^{\star }/\sigma )^{2}\) while for the Laplace and Cauchy distributions it is of order \(|\theta ^{\star }|/\sigma \) and \(\log (|\theta ^{\star }|/\sigma )\) respectively. From these results, we conclude as before that the Cauchy distribution possesses some advantages over the other two when little information is available on the location of the parameter \(\theta ^{\star }\).

7.3 A general result under entropy

In this section, we equip \(E={\mathbb {R}}^{k}\) with the Lebesgue measure \(\mu \) and the norm \(\left| {\cdot }\right| _{\infty }\). We consider the TV-loss and the location-scale family

$$\begin{aligned} {\mathscr {M}}=\left\{ {P_{(p,{\textbf{m}},\sigma )}=\frac{1}{\sigma ^{k}}p\left( {\frac{\cdot -{\textbf{m}}}{\sigma }}\right) \cdot \mu ,\ p\in {\mathcal {M}}_{0},\; {\textbf{m}}\in {\mathbb {R}}^{k}, \sigma >0}\right\} , \end{aligned}$$
(54)

where \({\mathcal {M}}_{0}\) is a set of densities on \({\mathbb {R}}^{k}\). Given independent observations \(X_1,\ldots ,X_n\) with presumed distribution \(P^{\star }=P_{(p^{\star },{\textbf{m}}^{\star },\sigma ^{\star })}\in {\mathscr {M}}\), our aim is to estimate the density \(p^{\star }\in {\mathcal {M}}_{0}\), the location parameter \({\textbf{m}}^{\star }\in {\mathbb {R}}^{k}\) and the scale parameter \(\sigma ^{\star }>0\), hence the parameter \(\theta ^{\star }=(p^{\star },{\textbf{m}}^{\star },\sigma ^{\star })\in \Theta ={\mathcal {M}}_{0}\times {\mathbb {R}}^{k}\times (0,+\infty )\). We assume that the set of densities \({\mathcal {M}}_{0}\) satisfies the following conditions.

Assumption 7

Let \({{\widetilde{D}}}\) be a continuous nonincreasing mapping from \((0,+\infty )\) to \([1,+\infty )\) such that \(\lim _{\eta \rightarrow +\infty }\eta ^{-2}{{\widetilde{D}}}(\eta )=0\). For all \(\eta >0\), there exists a finite subset \({\mathcal {M}}_{0}[\eta ]\subset {\mathcal {M}}_{0}\) satisfying

$$\begin{aligned} {\left| {{\mathcal {M}}_{0}[\eta ]}\right| }\leqslant \exp \left[ {{{\widetilde{D}}}(\eta )}\right] \end{aligned}$$
(55)

such that for all \(p\in {\mathcal {M}}_{0}\), there exists \({\overline{p}}\in {\mathcal {M}}_{0}[\eta ]\) that satisfies

$$\begin{aligned} \left\| {P_{(p,\varvec{0},1)}-P_{(\overline{p},\varvec{0},1)}}\right\| =\frac{1}{2}\int _{{\mathbb {R}}^{k}}\left| {p-{\overline{p}}}\right| d\mu \leqslant \eta . \end{aligned}$$
(56)

Besides, we assume that there exist \(A,s>0\) such that for all \(p\in {\mathcal {M}}_{0}\), \({\textbf{m}}\in {\mathbb {R}}^{k}\) and \(\sigma \geqslant 1\),

$$\begin{aligned} \left\| {P_{(p,\varvec{0},1)}-P_{(p,{\textbf{m}},\sigma )}}\right\| \leqslant \left[ {A\left( {\left( \left| {\frac{{\textbf{m}}}{\sigma }}\right| _{\infty }\right) ^{s}+\left( {1-\frac{1}{\sigma }}\right) ^{s}}\right) }\right] \bigwedge 1. \end{aligned}$$
(57)

The first part of Assumption 7, which corresponds to inequalities (55) and (56), aims at measuring the size of the set \({\mathcal {M}}_{0}\) by means of its entropy. The entropy of a set controls its metric dimension and usually determines the minimax rate of convergence over it as shown in Birgé [9]. With the second part of Assumption 7, namely inequality (57), we require some regularity properties of the TV-loss with respect to the location and scale parameters. It will be commented on later. We shall see that this condition may be satisfied even when the densities in \({\mathcal {M}}_{0}\) are not smooth.

Let us now turn to the choice of our prior. We first consider a countable subset of the parameter space \(\Theta \) that will be proven to possess good approximation properties. Namely, we define for \(\eta ,\delta >0\)

$$\begin{aligned} \Theta [\eta ,\delta ]=\left\{ {\left( {\overline{p},(1+\delta )^{j_{0}}\delta {\textbf{j}},(1+\delta )^{j_{0}}}\right) ,\; (\overline{p},j_{0},{\textbf{j}})\in {\mathcal {M}}_{0}[\eta ]\times {\mathbb {Z}}\times {\mathbb {Z}}^{k}}\right\} \end{aligned}$$

and we associate a positive weight \(L_{\theta }\) with any element \(\theta =\theta ({\overline{p}},j_{0},{\textbf{j}})\in \Theta [\eta ,\delta ]\) as follows

$$\begin{aligned} L_{\theta }=(k+1)L+\log \left| {{\mathcal {M}}_{0}[\eta ]}\right| +2\sum _{i=0}^{k}\log (1+|j_{i}|) \end{aligned}$$
(58)

with \(L=\log \left[ {(\pi ^{2}/3)-1}\right] \). It is not difficult to check that \(\sum _{\theta \in \Theta [\eta ,\delta ]}e^{-L_{\theta }}=1\), and we may therefore endow \({\mathscr {M}}\) with the (discrete) prior \(\pi \) defined as

$$\begin{aligned} \pi (\left\{ {P_{\theta }}\right\} )=e^{-L_{\theta }}\quad \text {for all }\theta \in \Theta [\eta ,\delta ]. \end{aligned}$$
(59)
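
The normalization of the weights (58) can be verified directly: each of the \(k+1\) integer indices contributes a factor \(e^{-L}\sum _{j\in {\mathbb {Z}}}(1+|j|)^{-2}=1\), while the term \(\log \left| {{\mathcal {M}}_{0}[\eta ]}\right| \) makes the weights sum to 1 over the net. A truncated numerical check of the first identity:

```python
import numpy as np

# Truncated check that e^{-L} * sum_{j in Z} (1+|j|)^{-2} = 1
# with L = log(pi^2/3 - 1), since sum_{j in Z} (1+|j|)^{-2} = pi^2/3 - 1.
L = np.log(np.pi ** 2 / 3.0 - 1.0)
j = np.arange(-10 ** 6, 10 ** 6 + 1)
print(np.exp(-L) * np.sum((1.0 + np.abs(j)) ** -2.0))   # ~ 1
```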

With such a prior, our posterior \({\widehat{\pi }}_{{\varvec{X}}}^{{\text {TV}}}\) given in Corollary 1 possesses the following properties.

Corollary 3

Let \(K>1\). Assume that \({\mathcal {M}}_{0}\) satisfies Assumption 7 and define

$$\begin{aligned} \eta&=\eta _{n}=\inf {\mathscr {D}}_{n}\quad \text {with}\quad {\mathscr {D}}_{n}=\left\{ {\eta >0,\; {{\widetilde{D}}}(\eta )\leqslant \frac{n\eta ^{2}}{24}}\right\} \end{aligned}$$
(60)
$$\begin{aligned} \delta&=\delta _{n}=\left( {\frac{\eta _{n}}{2A}}\right) ^{1/s}, \end{aligned}$$
(61)
$$\begin{aligned} \beta&=\beta _{n}= \frac{1}{2}\left[ {K\eta _{n}+2\sqrt{\frac{18.6(k+1)}{n}}}\right] \end{aligned}$$
(62)

and the subset \({\mathscr {M}}_{n}(K)\) of \({\mathscr {M}}\) that consists of the elements \(P_{(p,{\textbf{m}},\sigma )}\) for which

$$\begin{aligned} |\log \sigma |\vee \left| {\frac{{\textbf{m}}}{\sigma }}\right| _{\infty }\leqslant \Lambda _{n}=\exp \left[ {\frac{(K^{2}-1)n\eta _{n}^{2}}{48(k+1)}+\log \log (1+\delta _{n})}\right] . \end{aligned}$$
(63)

Then, the posterior \({\widehat{\pi }}_{{\varvec{X}}}^{{\text {TV}}}\) satisfies the following property: there exists a numerical constant \(\kappa _{0}'>0\) such that for all \(\xi >0\),

$$\begin{aligned} {\mathbb {E}}\left[ {{\widehat{\pi }}_{{\varvec{X}}}\left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}(\overline{P}^{\star },\kappa _{0}'{r}_{n})}\right) }\right] \leqslant 2e^{-\xi } \end{aligned}$$
(64)

with

$$\begin{aligned} {r}_{n}=\inf _{P\in {\mathscr {M}}_{n}(K)}\ell (\overline{P}^{\star },P)+K\eta _{n}+\sqrt{\frac{k+1}{n}}+\frac{\xi }{\sqrt{n(k+1)}}\wedge \frac{\xi }{Kn\eta _{n}}. \end{aligned}$$
(65)

Let us now comment on this result. The radius \({r}_{n}\) is the sum of three main terms, omitting the dependency with respect to \(\xi \). The first one, \(\inf _{P\in {\mathscr {M}}_{n}(K)}\ell ({\overline{P}}^{\star },P)\), corresponds to the approximation of \({\overline{P}}^{\star }\) by an element of \({\mathscr {M}}\) whose location and scale parameters satisfy the constraints given in (63). The quantity \(\eta _{n}\), involved in the second term, usually corresponds to the minimax rate for solely estimating a density \(p\in {\mathcal {M}}_{0}\) from an n-sample. Finally, the third term \(\sqrt{(k+1)/n}\) corresponds to the rate we would get for solely estimating the location and scale parameters \(({\textbf{m}},\sigma )\in {\mathbb {R}}^{k+1}\) when the density p is known.
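
In practice, \(\eta _{n}\) defined by (60) can be computed by bisection, since \({{\widetilde{D}}}\) is nonincreasing while \(\eta \mapsto n\eta ^{2}/24\) is increasing. A sketch, taking as an illustration the entropy bound \({{\widetilde{D}}}(\eta )=(1/\eta )\vee 1\) of Example 6 below:

```python
import numpy as np

def eta_n(n, D_tilde, lo=1e-12, hi=1e6, it=200):
    """eta_n = inf{eta > 0 : D_tilde(eta) <= n eta^2 / 24}, cf. (60), by bisection."""
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if D_tilde(mid) <= n * mid ** 2 / 24.0:
            hi = mid   # condition holds: the infimum lies at or below mid
        else:
            lo = mid
    return hi

D = lambda eta: max(1.0 / eta, 1.0)      # illustrative entropy bound (Example 6)
for n in [10 ** 3, 10 ** 6, 10 ** 9]:
    print(n, eta_n(n, D), (24.0 / n) ** (1.0 / 3.0))   # ~ n^{-1/3}
```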

Let us now provide some examples for which our condition (57) is satisfied. We start with an example where the densities in \({\mathcal {M}}_{0}\) are smooth.

Lemma 2

Assume that the set \({\mathcal {M}}_{0}\) consists of densities p that are supported on \([0,1]^{k}\), satisfy \(\sup _{p\in {\mathcal {M}}_{0}}\left\| {p}\right\| _{\infty }\leqslant L_{0}\) and

$$\begin{aligned} \sup _{p\in {\mathcal {M}}_{0}}\left| {p({\varvec{x}})-p({\varvec{x}}')}\right| \leqslant L_{1}\left| {{\varvec{x}}-{\varvec{x}}'}\right| ^{s}\quad \text {for all }{\varvec{x}},{\varvec{x}}'\in {\mathbb {R}}^{k}, \end{aligned}$$
(66)

with constants \(L_{0},L_{1}>0\) and \(s\in (0,1]\). Then (57) is satisfied with \(A=L_{1}\vee [(1+L_{1}k^{s/2}+L_{0})/2]\).

Nevertheless, condition (57) may also be satisfied for families \({\mathcal {M}}_{0}\) of densities which are not smooth, as shown in Lemma 3 below. This makes it possible to consider the following example.

Example 6

We consider here the situation where \(k=1\) and \({\mathcal {M}}_{0}\) is the set of all nonincreasing densities on [0, 1] that are bounded by \(B>1\). Then, \({\mathscr {M}}\) consists of all the probabilities whose densities are supported on intervals I with positive lengths, nonincreasing on I and bounded by \(B/\mu (I)\). Birman and Solomjak [15] proved that \({\mathcal {M}}_{0}\) satisfies Assumption 7 with \({{\widetilde{D}}}(\eta )\) of order \((1/\eta )\vee 1\) (up to some constant that depends on B). We deduce from (60) that \(\eta _{n}\) is of order \(n^{-1/3}\). Besides, it follows from Lemma 3 below that (57) is satisfied with \(A=B\) and \(s=1\). We may therefore apply Corollary 3. For K large enough, \(\Lambda _{n}\) defined by (63) is larger than \(\exp \left[ {CK^{2}n^{1/3}}\right] \) for some constant \(C>0\) (depending on A). In particular, if \(X_{1},\ldots ,X_{n}\) are i.i.d. with a density of the form

$$\begin{aligned} x\mapsto p^{\star }(x)=\frac{1}{\sigma ^{\star }}p\left( {\frac{x-m^{\star }}{\sigma ^{\star }}}\right) \end{aligned}$$

where \(p\in {\mathcal {M}}_{0}\), \(|m^{\star }/\sigma ^{\star }|\leqslant \exp \left[ {CK^{2}n^{1/3}}\right] \) and

$$\begin{aligned} \exp \left[ {-\exp \left[ {CK^{2}n^{1/3}}\right] }\right] \leqslant \sigma ^{\star }\leqslant \exp \left[ {\exp \left[ {CK^{2}n^{1/3}}\right] }\right] , \end{aligned}$$

(64) is satisfied with \(r_{n}\) of order \(C'n^{-1/3}\) where the constant \(C'>0\) only depends on \(\xi ,K,B\) but not on \(m^{\star }\) and \(\sigma ^{\star }\). This means that the concentration properties of \({\widehat{\pi }}_{{\varvec{X}}}\) hold true uniformly over a huge range of translation and scale parameters \({\textbf{m}}\) and \(\sigma \) when n is large enough.

Lemma 3

Let p be a nonincreasing density on \((0,+\infty )\). For all \(\sigma \geqslant 1\)

$$\begin{aligned} \frac{1}{2}\int _{{\mathbb {R}}}\left| {\frac{1}{\sigma }p\left( {\frac{x}{\sigma }}\right) -p(x)}\right| dx\leqslant \left( {1-\frac{1}{\sigma }}\right) . \end{aligned}$$
(67)

If, furthermore, p is bounded by \(B\geqslant 1\), for all \(m\in {\mathbb {R}}\),

$$\begin{aligned} \frac{1}{2}\int _{{\mathbb {R}}}\left| {p(x)-p(x-m)}\right| dx\leqslant (|m|B)\wedge 1. \end{aligned}$$
(68)

In particular, for all \(m\in {\mathbb {R}}\) and \(\sigma \geqslant 1\),

$$\begin{aligned} \frac{1}{2}\int _{{\mathbb {R}}}\left| {\frac{1}{\sigma }p\left( {\frac{x-m}{\sigma }}\right) -p(x)}\right| dx\leqslant \left[ {B\left| {\frac{m}{\sigma }}\right| +\left( {1-\frac{1}{\sigma }}\right) }\right] \wedge 1. \end{aligned}$$
(69)
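
Lemma 3 is easy to probe numerically. The sketch below checks (69) by grid quadrature for the illustrative nonincreasing density \(p(x)=2(1-x){\mathbb {1}}_{(0,1]}(x)\), which is bounded by \(B=2\); the values of m and \(\sigma \geqslant 1\) are arbitrary.

```python
import numpy as np

# Check (69) for the nonincreasing density p(x) = 2(1-x) on (0,1], bounded by B = 2.
x = np.linspace(-0.5, 2.5, 600_001)
dx = x[1] - x[0]
p = lambda u: np.where((u > 0.0) & (u <= 1.0), 2.0 * (1.0 - u), 0.0)

B, m, sigma = 2.0, 0.15, 1.4          # arbitrary location and scale (sigma >= 1)
lhs = 0.5 * np.sum(np.abs(p((x - m) / sigma) / sigma - p(x))) * dx
rhs = min(B * abs(m / sigma) + (1.0 - 1.0 / sigma), 1.0)
print(lhs, "<=", rhs)                 # the TV distance lies below the bound
```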

7.4 Estimating a parameter under sparsity

Let us consider a parametric dominated model \({\mathscr {M}}=\left\{ {P_{{\varvec{\theta }}}=p_{{\varvec{\theta }}}\cdot \mu ,\; \varvec{\theta }\in {\mathbb {R}}^{k}}\right\} \) where the dimension k of the parameters is large. We presume, even though this might not be true, that the data are i.i.d. with distribution \(P_{\varvec{\theta }^{\star }}\in {\mathscr {M}}\) and that the coordinates of the true parameter \(\varvec{\theta }^{\star }=(\theta _{1}^{\star },\ldots ,\theta _{k}^{\star })\) are all zero except for a small number of them. Our aim is to estimate \(P_{\varvec{\theta }^{\star }}\) from the observation of \(X_1,\ldots ,X_n\) by using the squared Hellinger loss.

To tackle this problem, we partition the model \({\mathscr {M}}\) into the sub-models \(\{{\mathscr {M}}_{m},\; m\subset \{1,\ldots ,k\}\}\) where \({\mathscr {M}}_{m}\) consists of those distributions \(P_{\varvec{\theta }}\in {\mathscr {M}}\) for which the coordinates of \(\varvec{\theta }=(\theta _{1},\ldots ,\theta _{k})\) are all zero except those with an index \(i\in m\). We denote by \(\Theta _{m}\) the set of such parameters, so that \({\mathscr {M}}_{m}=\{P_{\varvec{\theta }},\; \varvec{\theta }\in \Theta _{m}\}\), and we use the conventions \(\Theta _{{\varnothing }}=\{\textbf{0}\}\) and \({\mathscr {M}}_{{\varnothing }}=\{P_{\textbf{0}}\}\). Given some positive number \(R>0\), we equip each parameter space \(\Theta _{m}\), \(m\subset \{1,\ldots ,k\}\), with the uniform distribution \(\nu _{m}\) on \(\Theta _{m}(R)=[-R,R]^{k}\cap \Theta _{m}\) when \(m\ne {\varnothing }\) and the Dirac mass \(\nu _{{\varnothing }}=\delta _{\textbf{0}}\) at \(\textbf{0}\in {\mathbb {R}}^{k}\) when \(m={\varnothing }\). We may then define on \({\mathbb {R}}^{k}=\bigcup _{m\subset \{1,\ldots ,k\}}\Theta _{m}\), the hierarchical prior

$$\begin{aligned} \nu =\sum _{m\subset \{1,\ldots ,k\}}e^{-L_{m}}\nu _{m}\quad \text {with}\quad L_{m}=|m|\log k+k\log \left( {1+\frac{1}{k}}\right) . \end{aligned}$$
(70)
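
That the weights (70) define a probability follows from the binomial theorem: \(\sum _{m}e^{-L_{m}}=(1+1/k)^{-k}\sum _{j=0}^{k}\binom{k}{j}k^{-j}=1\), since the number of subsets m with \(|m|=j\) is \(\binom{k}{j}\). A one-line numerical check, with an arbitrary value of k:

```python
from math import comb

# sum_m e^{-L_m} = (1 + 1/k)^{-k} * sum_{j=0}^{k} C(k,j) k^{-j} = 1 for any k.
k = 12
total = sum(comb(k, j) * k ** (-j) for j in range(k + 1)) * (1 + 1 / k) ** (-k)
print(total)   # 1.0 up to rounding
```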

We endow \({\mathscr {M}}\) with the \(\sigma \)-algebra and the prior \(\pi \) as described in Sect. 2.1. Besides, we assume that there exist \(s\in (0,1]\) and a positive number \(B_{k}=B_{k}(R)\), possibly depending on k and R (although we drop the dependency with respect to R), such that

$$\begin{aligned} h^{2}\left( {P_{\varvec{\theta }},P_{\varvec{\theta }'}}\right) \leqslant B_{k}\left| {\varvec{\theta }-\varvec{\theta }'}\right| _{\infty }^{s}\quad \text {for all }\varvec{\theta },\varvec{\theta }'\in [-R,R]^{k}. \end{aligned}$$
(71)

The following result is proven in Sect. 10.8.

Proposition 6

Assume that

$$\begin{aligned} \begin{array}{l|rcl} p: &{} E\times {\mathbb {R}}^{k} &{} \longrightarrow &{} {\mathbb {R}}_{+} \\ &{} (x,\varvec{\theta }) &{} \longmapsto &{} p_{\varvec{\theta }}(x) \end{array} \end{aligned}$$

is measurable. If \(RB_{k}^{1/s}\geqslant 1\), there exists a numerical constant \(\kappa _{0}'>0\) such that for any distribution \({\textbf{P}}^{\star }\) and \(\xi >0\)

$$\begin{aligned} {\mathbb {E}}\left[ {{\widehat{\pi }}_{{\varvec{X}}}^{h}\left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}(\overline{P}^{\star },\kappa _{0}'{r})}\right) }\right] \leqslant 2e^{-\xi } \end{aligned}$$

where

$$\begin{aligned} r=\inf _{m\subset \{1,\ldots ,k\}}\left[ {\inf _{\varvec{\theta }\in \Theta _{m}(R)}\ell (\overline{P}^{\star },P_{\varvec{\theta }})+\frac{|m|\log \left( {2kR(nB_{k})^{1/s}}\right) +\xi }{n}}\right] . \end{aligned}$$
(72)

Let us now comment on this result. First of all, the mapping

$$\begin{aligned} R\mapsto \sup \left\{ {\frac{h^{2}\left( {P_{\varvec{\theta }},P_{\varvec{\theta }'}}\right) }{\left| {\varvec{\theta }-\varvec{\theta }'}\right| _{\infty }^{s}},\; \varvec{\theta }\ne \varvec{\theta }',\; \varvec{\theta },\varvec{\theta }'\in [-R,R]^{k}}\right\} \end{aligned}$$

being nondecreasing, our condition \(RB_{k}^{1/s}=R[B_{k}(R)]^{1/s}\geqslant 1\) is always satisfied for a value of R sufficiently large.

When \(B_{k}\) does not increase faster than a power of k, the radius r given in (72) only depends logarithmically on the dimension k of the parameter space, as expected.

Let us now illustrate Proposition 6 by choosing some specific models \({\mathscr {M}}=\{P_{\varvec{\theta }},\; \varvec{\theta }\in {\mathbb {R}}^{k}\}\). If \(P_{\varvec{\theta }}\) is the Gaussian distribution with mean \(\varvec{\theta }\in {\mathbb {R}}^{k}\) and covariance matrix \(\sigma ^{2} I_{k}\), where \(I_{k}\) denotes the \(k\times k\) identity matrix,

$$\begin{aligned} h^{2}(P_{\varvec{\theta }},P_{\varvec{\theta }'})=1-\exp \left[ {-\frac{\left| {\varvec{\theta }-\varvec{\theta }'}\right| ^{2}}{8\sigma ^{2}}}\right] \leqslant \frac{\left| {\varvec{\theta }-\varvec{\theta }'}\right| ^{2}}{8\sigma ^{2}}\leqslant \frac{k\left| {\varvec{\theta }-\varvec{\theta }'}\right| _{\infty }^{2}}{8\sigma ^{2}}. \end{aligned}$$

Then, inequality (71) is satisfied with \(B_{k}=k/(8\sigma ^{2})\) and \(s=2\). In particular, our condition \(RB_{k}^{1/s}\geqslant 1\) is equivalent to \(R\geqslant 2\sigma \sqrt{2/k}\). In this case, the value of r given by (72) is of order

$$\begin{aligned} \inf _{m\subset \{1,\ldots ,k\}}\left[ {\inf _{\varvec{\theta }\in \Theta _{m}(R)}\ell (\overline{P}^{\star },P_{\varvec{\theta }})+\frac{|m|\log \left( {knR/\sigma }\right) +\xi }{n}}\right] . \end{aligned}$$
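The closed-form expression for the squared Hellinger distance between two Gaussian distributions with a common covariance matrix, used above, is easy to confirm by quadrature. The sketch below (a numerical illustration, with parameters of our own choosing) does so in dimension \(k=1\):

```python
import numpy as np

# Quadrature check, in dimension k = 1, of
#   h^2(P_theta, P_theta') = 1 - exp(-|theta - theta'|^2 / (8 sigma^2)).
sigma = 1.3
x = np.linspace(-40.0, 40.0, 400_001)

def hellinger_sq(theta, theta_p):
    p = np.exp(-((x - theta) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    q = np.exp(-((x - theta_p) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    y = 0.5 * (np.sqrt(p) - np.sqrt(q)) ** 2
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))   # trapezoidal rule

for delta in (0.1, 1.0, 3.0):
    exact = 1 - np.exp(-(delta**2) / (8 * sigma**2))
    print(f"|theta-theta'|={delta:4.1f}: quadrature={hellinger_sq(0.0, delta):.6f}, "
          f"closed form={exact:.6f}, upper bound={delta**2 / (8 * sigma**2):.6f}")
```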

More generally, if \({\mathscr {M}}=\{P_{\varvec{\theta }},\varvec{\theta }\in {\mathbb {R}}^{k}\}\) is a regular statistical model with a nonsingular Fisher information matrix \({\textbf{J}}(\varvec{\theta })\) for all \(\varvec{\theta }\in {\mathbb {R}}^{k}\), we know from the book of Ibragimov and Has’minskiĭ [20, Theorem 7.1, p. 81] that for all \(\varvec{\theta },\varvec{\theta }'\in [-R,R]^{k}\)

$$\begin{aligned} h^{2}(P_{\varvec{\theta }},P_{\varvec{\theta }'})\leqslant \frac{\left| {\varvec{\theta }-\varvec{\theta }'}\right| ^{2}}{8}\sup _{\varvec{\theta }''\in {\mathbb {R}}^{k}, \left| {\varvec{\theta }''}\right| _{\infty }\leqslant R}\textrm{tr}\left( {{\textbf{J}}(\varvec{\theta }'')}\right) . \end{aligned}$$

Then, Assumption (71) holds with \(s=2\) and we may take

$$\begin{aligned} B_{k}=\frac{k^{2}}{8}\sup _{\varvec{\theta }''\in {\mathbb {R}}^{k}, \left| {\varvec{\theta }''}\right| _{\infty }\leqslant R}\varrho \left( {{\textbf{J}}(\varvec{\theta }'')}\right) \end{aligned}$$

where \(\varrho \left( {{\textbf{J}}(\varvec{\theta }'')}\right) \) denotes the largest eigenvalue of the matrix \({\textbf{J}}(\varvec{\theta }'')\). This value is independent of \(\varvec{\theta }''\) when \({\mathscr {M}}\) is a translation model.

Finally note that the second term in (72) only increases logarithmically with respect to R, at least when \(B_{k}=B_{k}(R)\) does not increase faster than a power of R. By taking larger values of R one may therefore considerably enlarge the sizes of the cubes \(\Theta _{m}(R)\), and therefore diminish the approximation term in (72), while only slightly increasing the second term \([|m|\log (2kR(nB_{k})^{1/s})+\xi ]/n\).
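The trade-off discussed in the preceding paragraph can be made concrete with a few lines of code. In the sketch below, `approx_error` is a purely hypothetical stand-in for the approximation term \(\inf _{\varvec{\theta }\in \Theta _{m}(R)}\ell (\overline{P}^{\star },P_{\varvec{\theta }})\), which is unknown in practice; the remaining quantities follow (72) with \(s=2\) and illustrative values of \(n,k,R,\xi \) and \(B_{k}\) of our own choosing.

```python
from math import log

# Stylized evaluation of the infimum in (72). The penalty depends on m only
# through d = |m|, so it suffices to optimise over the support size d.
n, k, R, xi, s, B_k = 10_000, 1_000, 10.0, 5.0, 2, 125.0   # assumed values

def approx_error(d):
    # Hypothetical approximation term; in practice it is unknown.
    return 2.0 ** (-d)

def radius(d):
    penalty = (d * log(2 * k * R * (n * B_k) ** (1 / s)) + xi) / n
    return approx_error(d) + penalty

best = min(range(k + 1), key=radius)
print(f"optimal support size |m| = {best}, radius r ~ {radius(best):.5f}")
```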

8 Some tools for evaluating \({r}_{n}(\beta ,P)\)

The aim of this section is to provide some mathematical results that allow one to bound the quantity \({r}_{n}(\beta ,P)\) from above, or at least evaluate its order of magnitude, when n is sufficiently large. Throughout this section, we consider a parametric statistical model \({\mathscr {M}}=\{P_{{\varvec{\theta }}},\; {\varvec{\theta }}\in \Theta \}\) where the parameter space \(\Theta \subset {\mathbb {R}}^{k}\) is endowed with a prior \(\nu \) which admits a density q with respect to the Lebesgue measure on \({\mathbb {R}}^{k}\). In order to use the definition (11) of the quantity \({r}_{n}(\beta ,P)\), we assume that we have at our disposal a family \({\mathscr {T}}(\ell ,{\mathscr {M}})\) that satisfies our Assumption 3, which provides us with values \(a_{1}>0\) and \(\gamma \) that satisfy the requirements of our main theorems. Our aim is to bound \({r}_{n}(\beta ,P)\) as a function of \(a_{1},\gamma ,\beta , k\) and n under suitable assumptions on the density q and the behaviour of the loss \(\ell \). Once \(\ell \) and \({\mathscr {T}}(\ell ,{\mathscr {M}})\) are given, \(a_{1}\) and \(\gamma \) can be considered as fixed numerical constants. The value of \(\beta \) can also be considered as a numerical constant when Theorem 2 applies. Otherwise, it can be chosen of order \(\sqrt{k/n}\) as in our Example 1.

8.1 Bounding \({r}_{n}(\beta ,P_{{\varvec{\theta }}})\) in parametric models

In what follows, \(\left| {\cdot }\right| _{*}\) denotes some arbitrary norm on \({\mathbb {R}}^{k}\) and \({\mathcal {B}}_{*}({\varvec{x}},z)\) the corresponding closed ball centered at \({\varvec{x}}\in {\mathbb {R}}^{k}\) with radius \(z\geqslant 0\).

Assumption 8

Let \({\varvec{\theta }}^{\star }\) be an element of \(\Theta \subset {\mathbb {R}}^{k}\).

  1. (i)

    There exist positive numbers \({\underline{a}},{\overline{a}}\) and \({s}\) such that

    $$\begin{aligned} {\underline{a}}\left| {{\varvec{\theta }}-{\varvec{\theta }}^{\star }}\right| _{*}^{s}\leqslant \ell ({\varvec{\theta }},{\varvec{\theta }}^{\star })\leqslant \overline{a}\left| {{\varvec{\theta }}-{\varvec{\theta }}^{\star }}\right| _{*}^{s}\quad \text{ for } \text{ all } {\varvec{\theta }}\in \Theta . \end{aligned}$$
    (73)
  2. (ii)

    There exists a positive nonincreasing function \(\upsilon _{{\varvec{\theta }}}\) on \({\mathbb {R}}_{+}\) such that

    $$\begin{aligned} \nu ({\mathcal {B}}_{*}({\varvec{\theta }}^{\star },2x))\leqslant \upsilon _{{\varvec{\theta }}^{\star }}(x)\nu ({\mathcal {B}}_{*}({\varvec{\theta }}^{\star },x))\quad \text{ for } \text{ all } x>0. \end{aligned}$$
    (74)

Under Assumption 8-(i), the loss function behaves like a power of a norm between the parameters.

The following result is an extension of Proposition 10 in Baraud and Birgé [6]. It was established there for the special case of the squared Hellinger loss and we provide here an extension to an arbitrary one. Since the proof follows the same lines, we omit it.

Proposition 7

Under Assumption 8,

$$\begin{aligned} {r}_{n}(\beta ,P_{{\varvec{\theta }}^{\star }})\leqslant \inf \left\{ {r\geqslant \frac{1}{n\beta a_{1}},\; r\geqslant \frac{\varrho _{0}\log \left[ {\upsilon _{{\varvec{\theta }}^{\star }}\left( {[r/\overline{a}]^{1/{s}}}\right) }\right] }{\gamma n \beta a_{1}}}\right\} \end{aligned}$$
(75)

with \(\varrho _{0}=1+\log (2{\overline{a}}/{\underline{a}})/[{s}\log 2]\). If \(\upsilon _{{\varvec{\theta }}^{\star }}\equiv \upsilon >0\), then

$$\begin{aligned} {r}_{n}(\beta ,P_{{\varvec{\theta }}^{\star }})\leqslant \frac{\left( {\varrho _{0}\log \upsilon }\right) \vee 1}{a_{1} n \gamma \beta }. \end{aligned}$$
(76)

If Assumption 8-(i) is satisfied and if the parameter space \({\Theta }\) is convex and q satisfies

$$\begin{aligned} {\underline{b}}\leqslant q({\varvec{\theta }})\leqslant {\overline{b}}\quad \text{ for } \text{ all } {\varvec{\theta }}\in {\Theta }\quad \text {with }0<{\underline{b}}\leqslant {\overline{b}}, \end{aligned}$$
(77)

then Assumption 8-(ii) holds with \(\upsilon _{{\varvec{\theta }}^{\star }}\equiv 2^{k}({\overline{b}}/{\underline{b}})\). Consequently,

$$\begin{aligned} {r}_{n}(\beta ,P_{{\varvec{\theta }}^{\star }})\leqslant \frac{\varrho _{1}}{a_{1}\gamma }\frac{k}{n\beta }\quad \text {with}\quad \varrho _{1}=\left[ {\varrho _{0}\log \left( 2\left[ {\overline{b}}/\underline{b}\right] ^{1/k}\right) }\right] \vee 1. \end{aligned}$$
(78)

When Assumption 8-(i) is satisfied and \(\nu \) admits a density which is bounded away from 0 and infinity on a convex parameter space \(\Theta \subset {\mathbb {R}}^{k}\), \({r}_{n}(\beta ,P_{{\varvec{\theta }}})\) is of order \(k/(n\beta )\) for all \({\varvec{\theta }}\in \Theta \). This result may also hold true when the density is not bounded away from infinity, as shown in the following example. If \(k=1\), \(\Theta =[-1,1]\) and \(q:\theta \mapsto (t/2)|\theta |^{t-1}{\mathbb {1}}_{[-1,1]}(\theta )\) with \(t\in (0,1)\), Assumption 8-(ii) holds with \(\upsilon _{\theta }\equiv 2^{1+t}\left( 2^{t}-1\right) ^{-1}\) for all \(\theta \in [-1,1]\)—see Baraud and Birgé [6, Proposition 11]. Then (76) still applies.

In the other direction, when the density q takes very small values in the neighbourhood of the parameter \({\varvec{\theta }}\), the function \(\upsilon _{{\varvec{\theta }}}\) may take large values around 0. This is for example the case when q is proportional to \(\theta \mapsto \exp \left[ -1/\left( 2|\theta |^{t}\right) \right] {\mathbb {1}}_{[-1,1]}(\theta )\), \(t>0\), and \(\theta =0\). It follows from Baraud and Birgé [6, Proposition 12] (and its proof) that Assumption 8-(ii) is satisfied with \(\upsilon _{\theta }:x\mapsto \exp (c(t)/x^{t})\) for some quantity \(c(t)>0\). Applying (75) then leads to an upper bound on \({r}_{n}(\beta ,P_{\theta })\) of order \((n \beta )^{-s/(s+t)}\).
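The order \((n\beta )^{-s/(s+t)}\) claimed in this last example can be recovered by solving the defining inequality in (75) numerically. In the sketch below, \(\upsilon _{\theta }(x)=\exp (c(t)/x^{t})\) as above, and the constants \(a_{1},\gamma ,\varrho _{0},{\overline{a}}\) and \(c(t)\) are set to illustrative values of our own choosing.

```python
# Numerical solution of the fixed-point bound (75) for upsilon(x) = exp(c_t / x**t).
a1, gamma, rho0, abar, s, t, c_t = 1.0, 0.5, 2.0, 1.0, 1.0, 0.5, 1.0

def bound(n, beta):
    lo = 1.0 / (n * beta * a1)                       # first constraint in (75)
    def ok(r):                                       # second constraint in (75)
        return r >= rho0 * c_t * (abar / r) ** (t / s) / (gamma * n * beta * a1)
    r_hi = 1.0
    while not ok(r_hi):
        r_hi *= 2.0
    r_lo = lo
    for _ in range(200):                             # bisection for the smallest r
        mid = 0.5 * (r_lo + r_hi)
        r_lo, r_hi = (r_lo, mid) if ok(mid) else (mid, r_hi)
    return max(r_hi, lo)

beta = 0.1
for n in (10**3, 10**4, 10**5):
    print(f"n={n:>6d}: bound={bound(n, beta):.6f}, "
          f"(n*beta)^(-s/(s+t))={(n * beta) ** (-s / (s + t)):.6f}")
```

The ratio between the two printed columns stabilises as n grows, in accordance with the announced order.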

8.2 Some asymptotic order of magnitude

In Sect. 8.1, we have given some general tools for controlling the quantity \(r_{n}(\beta ,P_{{\varvec{\theta }}})\) for a given value of n. In this section, we present some sufficient conditions under which \(r_{n}(\beta ,P_{{\varvec{\theta }}})\) is of order \(k/(n\beta )\), at least when n is large enough. These conditions are not the weakest possible ones but they have the advantage of being relatively easy to check on many examples.

Assumption 9

The density q is continuous and positive at \({\varvec{\theta }}^{\star }\in \Theta \). The loss function \(\ell \) satisfies the following properties for some \(s>0\) and some norm \(\left| {\cdot }\right| _{*}\) on \({\mathbb {R}}^{k}\).

  1. (i)

    For all \({\varepsilon }>0\), there exists \(z=z({\varepsilon })>0\) such that

    $$\begin{aligned} (1-{\varepsilon })\left| {{\varvec{\theta }}-{\varvec{\theta }}^{\star }}\right| _{*}^{s}\leqslant \ell ({\varvec{\theta }},{\varvec{\theta }}^{\star })\leqslant (1+{\varepsilon })\left| {{\varvec{\theta }}-{\varvec{\theta }}^{\star }}\right| _{*}^{s}\quad \text {for all } {\varvec{\theta }}\in {\mathcal {B}}_{*}({\varvec{\theta }}^{\star },z). \end{aligned}$$
  2. (ii)

    There exists a subset \({\mathcal {K}}\subset \Theta \), the interior of which contains \({\varvec{\theta }}^{\star }\), that satisfies for some positive numbers \({\underline{a}}_{{\mathcal {K}}}\) and \(\eta \):

    $$\begin{aligned} {\underline{a}}_{{\mathcal {K}}}\left| {{\varvec{\theta }}-{\varvec{\theta }}^{\star }}\right| _{*}^{s}\leqslant \ell ({\varvec{\theta }},{\varvec{\theta }}^{\star })\quad \text {for }{\varvec{\theta }}\in {\mathcal {K}}\qquad \text {and}\qquad \ell ({\varvec{\theta }},{\varvec{\theta }}^{\star })\geqslant \eta >0\quad \text {for } {\varvec{\theta }}\not \in {\mathcal {K}}. \end{aligned}$$
    (79)

Under these assumptions, we establish the following proposition, the proof of which is postponed to Sect. 10.10.

Proposition 8

Under Assumption 9, at least for n sufficiently large,

$$\begin{aligned} r_{n}\left( {\beta ,P_{{\varvec{\theta }}^{\star }}}\right) \leqslant \frac{(1+1/s)}{a_{1}\gamma }\frac{k}{n\beta }. \end{aligned}$$
(80)

8.3 The case of the squared Hellinger loss on a regular statistical model

Of particular interest is the situation where the statistical model \({\mathscr {M}}=\{P_{{\varvec{\theta }}},\; {\varvec{\theta }}\in \Theta \}\), \(\Theta \subset {\mathbb {R}}^{k}\), is regular. There exist several ways of defining a regular model in statistics and we adopt here the definition of Ibragimov and Has’minskiĭ [20].

Definition 1

Let \(\mu \) be a measure on \((E,{\mathcal {E}})\) and \(\Theta \) an open subset of \({\mathbb {R}}^{k}\). The statistical model \({\mathscr {M}}=\{P_{{\varvec{\theta }}}=p_{{\varvec{\theta }}}\cdot \mu ,\; {\varvec{\theta }}\in \Theta \}\) is said to be regular if the family of functions \(\{\zeta _{{\varvec{\theta }}}=\sqrt{p_{{\varvec{\theta }}}},\; {\varvec{\theta }}\in \Theta \}\subset {\mathscr {L}}_{2}(E,{\mathcal {E}},\mu )\) satisfies the following properties.

  1. (i)

    For \(\mu \)-almost all \(x\in E\), \({\varvec{\theta }}\mapsto \zeta _{{\varvec{\theta }}}(x)\) is continuous.

  2. (ii)

    For all \({\varvec{\theta }}\in \Theta \), there exists \(\dot{\varvec{\zeta }}_{{\varvec{\theta }}}=({\dot{\zeta }}_{{\varvec{\theta }},1},\ldots ,{\dot{\zeta }}_{{\varvec{\theta }},k}):E\rightarrow {\mathbb {R}}^{k}\) such that

    $$\begin{aligned} \int _{E}\left| {\dot{\varvec{\zeta }}_{{\varvec{\theta }}}(x)}\right| ^{2}d\mu (x)<+\infty \end{aligned}$$

    and

    $$\begin{aligned} \int _{E}\left| {\zeta _{{\varvec{\theta }}+\varvec{\epsilon }}(x)-\zeta _{{\varvec{\theta }}}(x) -\langle \dot{\varvec{\zeta }}_{{\varvec{\theta }}}(x),\varvec{\epsilon }\rangle }\right| ^{2}d\mu (x)=o(\left| {\varvec{\epsilon }}\right| ^{2})\quad \text {when }\left| {\varvec{\epsilon }}\right| \rightarrow 0. \end{aligned}$$
  3. (iii)

    For all \(i\in \{1,\ldots ,k\}\), the mapping \({\varvec{\theta }}\mapsto {\dot{\zeta }}_{{\varvec{\theta }},i}\) is continuous in \( {\mathscr {L}}_{2}(E,{\mathcal {E}},\mu )\).

When the model is regular, the matrix

$$\begin{aligned} {\textbf{J}}({\varvec{\theta }})=\left( {4\int _{E}{\dot{\zeta }}_{{\varvec{\theta }},i}(x){\dot{\zeta }}_{{\varvec{\theta }},j}(x)d\mu (x)}\right) _{\begin{array}{c} 1\leqslant i\leqslant k\\ 1\leqslant j\leqslant k \end{array}}, \end{aligned}$$

is called the Fisher information matrix.

The matrix \({\textbf{J}}({\varvec{\theta }})\) is symmetric and nonnegative and we may therefore consider its square root \({\textbf{J}}^{1/2}({\varvec{\theta }})\), that is, the symmetric \((k\times k)\)-nonnegative matrix that satisfies \({\textbf{J}}^{1/2}({\varvec{\theta }}){\textbf{J}}^{1/2}({\varvec{\theta }})={\textbf{J}}({\varvec{\theta }})\).

Regular statistical models enjoy nice metric properties that are described in Proposition 9 below. For a proof we refer the reader to Ibragimov and Has’minskiĭ [20]—Lemma 7.1 page 65, Theorem 7.6 page 81 and its proof.

Proposition 9

Let \(\Theta \) be an open subset of \({\mathbb {R}}^{k}\) and \({\varvec{\theta }}^{\star }\in \Theta \). If \({\mathscr {M}}=\{P_{{\varvec{\theta }}}=p_{{\varvec{\theta }}}\cdot \mu ,\; {\varvec{\theta }}\in \Theta \}\) is regular and the Fisher information matrix \({\textbf{J}}({\varvec{\theta }}^{\star })\) is nonsingular at \({\varvec{\theta }}^{\star }\in \Theta \), Assumption 9-(i) is satisfied with \(\ell =h^{2}\), \(s=2\) and for the norm \(\left| {\cdot }\right| _{*}\) defined by

$$\begin{aligned} \left| {{\varvec{x}}}\right| _{*}=\frac{1}{\sqrt{8}}\left| {{\textbf{J}}^{1/2}({\varvec{\theta }}^{\star }){\varvec{x}}}\right| \quad \text {for all }{\varvec{x}}\in {\mathbb {R}}^{k}. \end{aligned}$$
(81)

Besides, for any compact subset \({\mathcal {K}}\subset \Theta \) there exist positive numbers \( {\overline{a}}_{{\mathcal {K}}},{\underline{a}}_{{\mathcal {K}}}\) such that

$$\begin{aligned} {\underline{a}}_{{\mathcal {K}}}\left| {{\varvec{\theta }}-{\varvec{\theta }}^{\star }}\right| _{*}^{2}\leqslant h^{2}\left( {{\varvec{\theta }},{\varvec{\theta }}^{\star }}\right) \leqslant \overline{a}_{{\mathcal {K}}}\left| {{\varvec{\theta }}-{\varvec{\theta }}^{\star }}\right| _{*}^{2}\quad \text {for all } {\varvec{\theta }}\in {\mathcal {K}}. \end{aligned}$$
(82)
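For the Gaussian location model \(P_{\theta }\) with mean \(\theta \in {\mathbb {R}}\) and variance \(\sigma ^{2}\), one has \({\textbf{J}}(\theta )=1/\sigma ^{2}\), so the norm (81) is \(|x|_{*}=|x|/(\sigma \sqrt{8})\), and the local equivalence asserted by Proposition 9 can be observed directly. A small sketch (with an illustrative value of \(\sigma \) of our own choosing):

```python
import numpy as np

# For N(theta, sigma^2): J = 1/sigma^2 and |x|_*^2 = x^2 / (8 sigma^2).  The ratio
# h^2 / |.|_*^2 should tend to 1 as theta -> theta*, as in Proposition 9.
sigma, theta_star = 2.0, 0.0
for delta in (1.0, 0.3, 0.1, 0.03):
    h2 = 1 - np.exp(-(delta**2) / (8 * sigma**2))    # closed form used in Sect. 7.4
    norm_sq = delta**2 / (8 * sigma**2)
    print(f"|theta - theta*| = {delta:5.2f}: h^2 / |.|_*^2 = {h2 / norm_sq:.6f}")
```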

Using Proposition 8, we immediately infer the following result.

Corollary 4

Let \(\Theta \) be an open subset of \({\mathbb {R}}^{k}\). Assume that \({\mathscr {M}}=\{P_{{\varvec{\theta }}}=p_{{\varvec{\theta }}}\cdot \mu ,\; {\varvec{\theta }}\in \Theta \}\) is regular and that the Fisher information matrix \({\textbf{J}}({\varvec{\theta }}^{\star })\) is nonsingular at \({\varvec{\theta }}^{\star }\in \Theta \). Assume that there exists a compact set \({\mathcal {K}}\subset \Theta \), containing \({\varvec{\theta }}^{\star }\) in its interior, such that \(h({\varvec{\theta }},{\varvec{\theta }}^{\star })\geqslant \eta >0\) for all \({\varvec{\theta }}\not \in {\mathcal {K}}\). Assume furthermore that the density q is continuous and positive at \({\varvec{\theta }}^{\star }\). Then, \(r_{n}(\beta ,P_{{\varvec{\theta }}^{\star }})\leqslant [3/(2a_{1}\gamma \beta )] (k/n)\), at least for n sufficiently large.

9 Proofs of Theorems 1, 2 and 3

Throughout this proof we fix some \({{\overline{Q}}}\in {\mathscr {M}}\), \({r},\beta >0\) and use the following notation: \({c}_{1}=1+{c}\), \({c}_{2}=2+{c}\),

$$\begin{aligned} {\mathcal {V}}(\pi ,{{\overline{Q}}})=\left\{ {{r}>0,\pi \left( {{\mathscr {B}}({{\overline{Q}}},{r})}\right) >0}\right\} \end{aligned}$$

and for \({r}\in {\mathcal {V}}(\pi ,{{\overline{Q}}})\), \({\mathscr {B}}={\mathscr {B}}({{\overline{Q}}},{r})\) and \(\pi _{{\mathscr {B}}}=\left[ {\pi ({\mathscr {B}})}\right] ^{-1}{\mathbb {1}}_{{\mathscr {B}}}\cdot \pi \).

9.1 Main parts of the proofs of Theorems 1 and 2

Throughout the proofs of these two theorems we fix a positive number z, which will be chosen later on, a number \(r\geqslant {r}_{n}(\beta ,{{\overline{Q}}})\), and set

$$\begin{aligned} A=\left\{ {\int _{{\mathscr {M}}}\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] d\pi (P)>z}\right\} . \end{aligned}$$

It follows from the definition (7) of \(\widehat{\pi }_{{\varvec{X}}}\) that for all \(J\in {\mathbb {N}}\)

$$\begin{aligned} {\mathbb {E}}\left[ {{\widehat{\pi }}_{{\varvec{X}}}\left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}\right) }\right]&={\mathbb {E}}\left[ {{\widehat{\pi }}_{{\varvec{X}}}\left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}\right) {\mathbb {1}}_{{^\textsf{c}}{\!{A}}{}}}\right] +{\mathbb {E}}\left[ {{\widehat{\pi }}_{{\varvec{X}}}\left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}\right) {\mathbb {1}}_{A}}\right] \nonumber \\&\leqslant {\mathbb {P}}({^\textsf{c}}{\!{A}}{})+\frac{1}{z}{\mathbb {E}}\left[ {\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] d\pi (P)}\right] \nonumber \\&= {\mathbb {P}}({^\textsf{c}}{\!{A}}{})+\frac{1}{z}\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}{\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] d\pi (P). \end{aligned}$$
(83)

In a first step, we prove that for some well-chosen values of \(\beta ,z,{r}\) and for J large enough, each of the two terms on the right-hand side of (83) is not larger than \(e^{-\xi }\). To achieve this goal, we bound the first term on the right-hand side of (83) by applying Markov’s inequality

$$\begin{aligned} {\mathbb {P}}({^\textsf{c}}{\!{A}}{})&={\mathbb {P}}\left[ {\int _{{\mathscr {M}}}\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] d\pi (P)\leqslant z}\right] \\&={\mathbb {P}}\left[ {\left[ {\int _{{\mathscr {M}}}\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] d\pi (P)}\right] ^{-1}\geqslant z^{-1}}\right] \\&\leqslant z{\mathbb {E}}\left[ {\frac{1}{\int _{{\mathscr {M}}}\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] d\pi (P)}}\right] \end{aligned}$$

and then by using Lemma 6, we obtain that

$$\begin{aligned} {\mathbb {P}}({^\textsf{c}}{\!{A}}{})\leqslant \frac{z}{\pi ^{2}({\mathscr {B}})} \left[ {\int _{{\mathscr {B}}^{2}}\exp \left[ {-{\textbf{L}}(P,Q)}\right] d\pi _{{\mathscr {B}}}(P)d\pi _{{\mathscr {B}}}(Q)}\right] ^{-1}. \end{aligned}$$
(84)

We may therefore control \({\mathbb {P}}({^\textsf{c}}{\!{A}}{})\) by choosing z small enough. We bound the second term of (83) by using Lemma 5.

We then finish the proofs of Theorems 1 and 2 as follows. In the context of Theorem 1, we finally establish that for a suitable value of J and all \({{\overline{Q}}}\in {\mathscr {M}}(\beta )\),

$$\begin{aligned} {\mathbb {E}}\left[ {{\widehat{\pi }}_{{\varvec{X}}}\left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}\right) }\right] \leqslant 2e^{-\xi }\quad \text {with}\quad r=r({{\overline{Q}}})=\ell (\overline{P}^{\star },{{\overline{Q}}})+a_{1}^{-1}\left( {\beta +\frac{2\xi }{n\beta }}\right) . \end{aligned}$$

By (3), \({\mathscr {B}}({{\overline{Q}}},2^{J}r)\subset {\mathscr {B}}(\overline{P}^{\star },\tau \ell ({\overline{P}}^{\star },{{\overline{Q}}})+\tau 2^{J}r)\) for all \({{\overline{Q}}}\in {\mathscr {M}}(\beta )\), and consequently \({\mathbb {E}}\left[ {\widehat{\pi }_{{\varvec{X}}}\left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}({\overline{P}}^{\star },{\overline{{r}}})}\right) }\right] \leqslant 2e^{-\xi }\) with

$$\begin{aligned} {\overline{{r}}}={\overline{{r}}}({{\overline{Q}}})=\tau \left[ {\ell (\overline{P}^{\star },{{\overline{Q}}})+2^{J}{r}}\right] =\tau \left[ {(1+2^{J})\ell (\overline{P}^{\star },{{\overline{Q}}})+2^{J}a_{1}^{-1}\left( {\beta +\frac{2\xi }{n\beta }}\right) }\right] . \end{aligned}$$

We obtain (16) by monotone convergence, taking a sequence \(({{\overline{Q}}}_{N})_{N\geqslant 0}\subset {\mathscr {M}}(\beta )\) such that \(\ell (\overline{P}^{\star },{{\overline{Q}}}_{N})\) decreases to \(\inf _{P\in {\mathscr {M}}(\beta )}\ell ({\overline{P}}^{\star },P)\), so that

$$\begin{aligned} \lim _{N\rightarrow +\infty }{\overline{{r}}}({{\overline{Q}}}_{N})&=\tau \left[ {(1+2^{J})\inf _{{{\overline{Q}}}\in {\mathscr {M}}(\beta )}\ell ({\overline{P}} ^{\star },{{\overline{Q}}})+2^{J}a_{1}^{-1}\left( {\beta +\frac{2\xi }{n\beta }}\right) }\right] \\&\leqslant \tau (1+2^{J})\left[ {\inf _{{{\overline{Q}}}\in {\mathscr {M}}(\beta )}\ell (\overline{P}^{\star },{{\overline{Q}}})+a_{1}^{-1}\left( {\beta +\frac{2\xi }{n\beta }}\right) }\right] \end{aligned}$$

and (16) holds provided that \(\kappa _{0}\geqslant \tau (2^{J}+1)\).

In the context of Theorem 2, we show that for some suitable value of J and all \({{\overline{Q}}}\in {\mathscr {M}}\),

$$\begin{aligned} {\mathbb {E}}\left[ {{\widehat{\pi }}_{{\varvec{X}}}\left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}\right) }\right] \leqslant 2e^{-\xi }\quad \text {with}\quad r=\ell (\overline{P}^{\star },{{\overline{Q}}})+{r}_{n}(\beta ,{{\overline{Q}}})+ \frac{2\xi }{n\beta a_{1}}, \end{aligned}$$

and we get (28) by arguing similarly.

9.2 Preliminary results

In the proofs of Theorems 1 and 2, we use the following consequence of our Assumption 3. We may write

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^{n}{\mathbb {E}}\left[ {t_{(P,Q)}(X_{i})}\right] ={\mathbb {E}}_{S}\left[ {t_{(P,Q)}(X)}\right] \quad \text {with }S=\overline{P}^{\star }=\frac{1}{n}\sum _{i=1}^{n}P_{i}^{\star }\in {\mathscr {P}}\end{aligned}$$

and we deduce from (5) that for all \(P,Q\in {\mathscr {M}}\),

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^{n}{\mathbb {E}}\left[ {t_{(P,Q)}(X_{i})}\right] \leqslant a_{0}\ell ({\overline{P}}^{\star },P)-a_{1}\ell ({\overline{P}}^{\star },Q). \end{aligned}$$
(85)

Besides, using the antisymmetry property (ii) we also obtain that

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^{n}{\mathbb {E}}\left[ {t_{(P,Q)}(X_{i})}\right] \geqslant a_{1}\ell ({\overline{P}}^{\star },P)-a_{0}\ell ({\overline{P}}^{\star },Q). \end{aligned}$$
(86)

For the proof of Theorem 2, we additionally use the following consequence of our Assumption 4. By taking \(S={\overline{P}}^{\star }\) and using the convexity of the mapping \(u\mapsto u^{2}\), we deduce that for all \(P,Q\in {\mathscr {M}}\)

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^{n}{\text {Var}}\left[ {t_{(P,Q)}(X_{i})}\right]&={\mathbb {E}}_{S}\left[ {t_{(P,Q)}^{2}(X)}\right] -\frac{1}{n}\sum _{i=1}^{n}\left( {{\mathbb {E}}\left[ {t_{(P,Q)}(X_{i})}\right] }\right) ^{2}\\&\leqslant {\mathbb {E}}_{S}\left[ {t_{(P,Q)}^{2}(X)}\right] -\left( {{\mathbb {E}}_{S}\left[ {t_{(P,Q)}(X)}\right] }\right) ^{2}\\&={\text {Var}}_{S}\left[ {t_{(P,Q)}(X)}\right] \end{aligned}$$

and it follows then from Assumption 4 (iv) that for all \(P,Q\in {\mathscr {M}}\)

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^{n}{\text {Var}}\left[ {t_{(P,Q)}(X_{i})}\right] \leqslant a_{2}\left[ {\ell ({\overline{P}}^{\star },P)+\ell ({\overline{P}}^{\star },Q)}\right] . \end{aligned}$$
(87)

The proofs of our main results rely on the following lemmas.

Lemma 4

Let \((U,V)\) be a pair of random variables with values in a product space \((E\times F,{\mathcal {E}}\otimes {\mathcal {F}})\) and marginal distributions \(P_{U}\) and \(P_{V}\) respectively. For any measurable function h on \((E\times F,{\mathcal {E}}\otimes {\mathcal {F}})\),

$$\begin{aligned} {\mathbb {E}}_{U}\left[ {\frac{1}{{\mathbb {E}}_{V}\left[ {\exp \left[ {-h(U,V)}\right] }\right] }}\right] \leqslant \left[ {{\mathbb {E}}_{V}\left[ {\frac{1}{{\mathbb {E}}_{U}\left[ {\exp \left[ {h(U,V)}\right] }\right] }}\right] }\right] ^{-1}. \end{aligned}$$

This lemma is proven in Audibert and Catoni [3, Lemma 4.2, p. 28].
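For independent U and V — the situation in which the lemma is applied below, with \(U={\varvec{X}}\) and V drawn from the prior — the inequality is easy to check numerically on finite distributions. A minimal sketch (the discrete laws and the function h are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite check of Lemma 4 for independent discrete U and V.
u_probs = np.array([0.2, 0.5, 0.3])                    # law of U on 3 points
v_probs = np.array([0.6, 0.4])                         # law of V on 2 points
h = rng.normal(size=(u_probs.size, v_probs.size))      # arbitrary h(u_i, v_j)

lhs = u_probs @ (1.0 / (np.exp(-h) @ v_probs))         # E_U[ 1 / E_V[e^{-h}] ]
rhs = 1.0 / (v_probs @ (1.0 / (u_probs @ np.exp(h))))  # (E_V[ 1 / E_U[e^{h}] ])^{-1}
print(f"lhs = {lhs:.6f} <= rhs = {rhs:.6f}: {bool(lhs <= rhs)}")
```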

Lemma 5

For \(P,Q\in {\mathscr {M}}\), we set

$$\begin{aligned} {\textbf{M}}(P,Q)=\log \left[ {\int _{{\mathscr {M}}}{\mathbb {E}}\left[ {\exp \left[ {\beta \left( {{c}{\textbf{T}}({\varvec{X}},P,Q')-{c}_{1}{\textbf{T}}({\varvec{X}},P,Q)}\right) }\right] }\right] d\pi (Q')}\right] . \end{aligned}$$

For all \({r}\in {\mathcal {V}}(\pi ,{{\overline{Q}}})\) and \(P\in {\mathscr {M}}\),

$$\begin{aligned} {\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] \leqslant \frac{1}{\pi ({\mathscr {B}})}\left[ {\int _{{\mathscr {B}}}\exp \left[ {-{\textbf{M}}(P,Q)}\right] d\pi _{{\mathscr {B}}}(Q)}\right] ^{-1}. \end{aligned}$$
(88)

Proof

Let \({r}\in {\mathcal {V}}(\pi ,{{\overline{Q}}})\). For \(P,Q\in {\mathscr {M}}\), we set

$$\begin{aligned} I({\varvec{X}},P,Q)={c}_{1}\beta {\textbf{T}}({\varvec{X}},P,Q)-\log \int _{{\mathscr {M}}}\exp \left[ {{c}\beta {\textbf{T}}({\varvec{X}},P,Q')}\right] d\pi (Q'). \end{aligned}$$

Then,

$$\begin{aligned}&{\mathbb {E}}\left[ {\exp \left[ {-I({\varvec{X}},P,Q)}\right] }\right] \nonumber \\&\quad ={\mathbb {E}}\left[ {\exp \left[ {-{c}_{1}\beta {\textbf{T}}({\varvec{X}},P,Q)+\log \int _{{\mathscr {M}}}\exp \left[ {{c}\beta {\textbf{T}}({\varvec{X}},P,Q')}\right] d\pi (Q')}\right] }\right] \nonumber \\&\quad ={\mathbb {E}}\left[ {\int _{{\mathscr {M}}}\exp \left[ {{c}\beta {\textbf{T}}({\varvec{X}},P,Q')-{c}_{1}\beta {\textbf{T}}({\varvec{X}},P,Q)}\right] d\pi (Q')}\right] \nonumber \\&\quad =\exp \left[ {{\textbf{M}}(P,Q)}\right] . \end{aligned}$$
(89)

Since \(\lambda ={c}_{1}\beta =(1+{c})\beta \), it follows from the convexity of the exponential that

$$\begin{aligned} {\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right]&={\mathbb {E}}\left[ {\exp \left[ {\int _{{\mathscr {M}}}[-\beta {\textbf{T}}({\varvec{X}},P,Q)]d{\widetilde{\pi }}_{{\varvec{X}}}(Q|P)}\right] }\right] \\&\leqslant {\mathbb {E}}\left[ {{\int _{{\mathscr {M}}}{\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P,Q)}\right] d{\widetilde{\pi }}_{{\varvec{X}}}(Q|P)}}}\right] \\&={\mathbb {E}}\left[ {\frac{\int _{{\mathscr {M}}}\exp \left[ {{c}\beta {\textbf{T}}({\varvec{X}},P,Q)}\right] d\pi (Q)}{\int _{{\mathscr {M}}}\exp \left[ {{c}_{1}\beta {\textbf{T}}({\varvec{X}},P,Q)}\right] d\pi (Q)}}\right] \\&\leqslant {\mathbb {E}}\left[ {\frac{\int _{{\mathscr {M}}}\exp \left[ {{c}\beta {\textbf{T}}({\varvec{X}},P,Q)}\right] d\pi (Q)}{\int _{{\mathscr {B}}}\exp \left[ {{c}_{1} \beta {\textbf{T}}({\varvec{X}},P,Q)}\right] d\pi (Q)}}\right] . \end{aligned}$$

Hence,

$$\begin{aligned} {\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right]&\leqslant {\mathbb {E}}\left[ {\frac{1}{\int _{{\mathscr {B}}}\exp \left[ {I({\varvec{X}},P,Q)}\right] d\pi (Q)}}\right] \\&=\frac{1}{\pi ({\mathscr {B}})}{\mathbb {E}}\left[ {\frac{1}{\int _{{\mathscr {B}}}\exp \left[ {I({\varvec{X}},P,Q)}\right] d\pi _{{\mathscr {B}}}(Q)}}\right] . \end{aligned}$$

Applying Lemma 4 with \(U={\varvec{X}}\), \(V=Q\) with distribution \(\pi _{{\mathscr {B}}}\), and \(h(U,V)=-I({\varvec{X}},P,Q)\), we obtain that

$$\begin{aligned} {\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right]&\leqslant \frac{1}{\pi ({\mathscr {B}})} \left[ {\int _{{\mathscr {B}}}\frac{1}{{\mathbb {E}}\left[ {\exp \left[ {-I({\varvec{X}},P,Q)}\right] }\right] }d\pi _{{\mathscr {B}}}(Q)}\right] ^{-1} \end{aligned}$$

and (88) follows from (89). \(\square \)

Lemma 6

For \(P,Q\in {\mathscr {M}}\), we set

$$\begin{aligned} {\textbf{L}}(P,Q)=\log \int _{{\mathscr {M}}}{\mathbb {E}}\left[ {\exp \left[ {\beta \left( {{c}_{2}{\textbf{T}}({\varvec{X}},P,Q')-{c}_{1}{\textbf{T}}({\varvec{X}},P,Q)}\right) }\right] }\right] d\pi (Q'). \end{aligned}$$

For all \({r}\in {\mathcal {V}}(\pi ,{{\overline{Q}}})\),

$$\begin{aligned}&{\mathbb {E}}\left[ {\frac{1}{\int _{{\mathscr {M}}}\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] d\pi (P)}}\right] \\&\leqslant \frac{1}{\pi ^{2}({\mathscr {B}})} \left[ {\int _{{\mathscr {B}}^{2}}\exp \left[ {-{\textbf{L}}(P,Q)}\right] d\pi _{{\mathscr {B}}}(P)d\pi _{{\mathscr {B}}}(Q)}\right] ^{-1}. \end{aligned}$$

Proof

For \(P,Q\in {\mathscr {M}}\), we set

$$\begin{aligned} H({\varvec{X}},P,Q)=\beta {c}_{1}{\textbf{T}}({\varvec{X}},P,Q)-\log \left[ {\int _{{\mathscr {M}}}\exp \left[ {{c}_{2}\beta {\textbf{T}}({\varvec{X}},P,Q')}\right] d\pi (Q')}\right] . \end{aligned}$$

Then,

$$\begin{aligned}&{\mathbb {E}}\left[ {\exp \left[ {-H({\varvec{X}},P,Q)}\right] }\right] \nonumber \\&\quad ={\mathbb {E}}\left[ {\exp \left[ {-\beta {c}_{1}{\textbf{T}}({\varvec{X}},P,Q)}\right] \int _{{\mathscr {M}}}\exp \left[ {{c}_{2}\beta {\textbf{T}}({\varvec{X}},P,Q')}\right] d\pi (Q')}\right] \nonumber \\&\quad ={\mathbb {E}}\left[ {\int _{{\mathscr {M}}}\exp \left[ {\beta \left( {{c}_{2}{\textbf{T}}({\varvec{X}},P,Q')-{c}_{1}{\textbf{T}}({\varvec{X}},P,Q)}\right) }\right] d\pi (Q')}\right] \nonumber \\&\quad =\exp \left[ {{\textbf{L}}(P,Q)}\right] . \end{aligned}$$
(90)

It follows from the convexity of the exponential and the fact that \(\lambda ={c}_{1}\beta \) that for all \(P\in {\mathscr {M}}\),

$$\begin{aligned} {\mathbb {E}}\left[ {\exp \left[ {\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right]&={\mathbb {E}}\left[ {\exp \left[ {\int _{{\mathscr {M}}}[\beta {\textbf{T}}({\varvec{X}},P,Q)] d{\widetilde{\pi }}_{{\varvec{X}}}(Q|P)}\right] }\right] \\&\leqslant {\mathbb {E}}\left[ {{\int _{{\mathscr {M}}}{\exp \left[ {\beta {\textbf{T}}({\varvec{X}},P,Q)}\right] d{\widetilde{\pi }}_{{\varvec{X}}}(Q|P)}}}\right] \\&={\mathbb {E}}\left[ {\frac{\int _{{\mathscr {M}}}\exp \left[ {{c}_{2}\beta {\textbf{T}}({\varvec{X}},P,Q)}\right] d\pi (Q)}{\int _{{\mathscr {M}}}\exp \left[ {{c}_{1}\beta {\textbf{T}}({\varvec{X}},P,Q)}\right] d\pi (Q)}}\right] \\&={\mathbb {E}}\left[ {\frac{1}{\int _{{\mathscr {M}}}\exp \left[ {H({\varvec{X}},P,Q)}\right] d\pi (Q)}}\right] . \end{aligned}$$

Applying Lemma 4 with \(U={\varvec{X}}\), \(V=Q\) with distribution \(\pi \), and \(h(U,V)=-H({\varvec{X}},P,Q)\) we obtain that

$$\begin{aligned} {\mathbb {E}}\left[ {\exp \left[ {\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] \leqslant \left[ {\int _{{\mathscr {M}}}\frac{1}{{\mathbb {E}}\left[ {\exp \left[ {-H({\varvec{X}},P,Q)}\right] }\right] }d\pi (Q)}\right] ^{-1}. \end{aligned}$$

We deduce from (90) that for all \(P\in {\mathscr {M}}\)

$$\begin{aligned} {\mathbb {E}}\left[ {\exp \left[ {\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right]&\leqslant \left[ {\int _{{\mathscr {M}}}\exp \left[ {-{\textbf{L}}(P,Q)}\right] d\pi (Q)}\right] ^{-1}\nonumber \\&\leqslant \frac{1}{\pi ({\mathscr {B}})}\left[ {\int _{{\mathscr {B}}}\exp \left[ {-{\textbf{L}}(P,Q)}\right] d\pi _{{\mathscr {B}}}(Q)}\right] ^{-1}. \end{aligned}$$
(91)

Applying Lemma 4 with \(U={\varvec{X}}\), \(V=P\) with distribution \(\pi \) and \(h(U,V)=\beta {\textbf{T}}({\varvec{X}},P)\) gives

$$\begin{aligned} {\mathbb {E}}\left[ {\frac{1}{\int _{{\mathscr {M}}}\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] d\pi (P)}}\right]&\leqslant \left[ {\int _{{\mathscr {M}}}\frac{1}{{\mathbb {E}}\left[ {\exp \left[ {\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] }d\pi (P)}\right] ^{-1}\\&\leqslant \frac{1}{\pi ({\mathscr {B}})}\left[ {\int _{{\mathscr {B}}}\frac{1}{{\mathbb {E}}\left[ {\exp \left[ {\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] }d\pi _{{\mathscr {B}}}(P)}\right] ^{-1} \end{aligned}$$

which together with (91) leads to the result. \(\square \)

The proofs of Theorems 1 and 2 rely on suitable bounds on the Laplace transforms of sums of independent random variables and on a summation lemma. These results are presented below.

Lemma 7

For all \(\beta \in {\mathbb {R}}\) and random variable U with values in an interval of length \(l\in (0,+\infty )\),

$$\begin{aligned} \log {\mathbb {E}}\left[ {\exp \left[ {\beta U}\right] }\right] \leqslant \beta {\mathbb {E}}\left[ {U}\right] +\frac{\beta ^{2}l^{2}}{8}. \end{aligned}$$
(92)

Lemma 8

Let U be a square-integrable random variable not larger than \(b> 0\). For all \(\beta >0\),

$$\begin{aligned} \log {\mathbb {E}}\left[ {\exp \left[ {\beta U}\right] }\right] \leqslant \beta {\mathbb {E}}\left[ {U}\right] +\beta ^{2}{\mathbb {E}}\left[ {U^{2}}\right] \frac{\phi (\beta b)}{2}, \end{aligned}$$
(93)

where \(\phi \) is defined by (24).

The proofs of Lemmas 7 and 8 can be found on pages 21 and 23 in Massart [23] (where our function \(\phi \) is defined as twice his).
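Inequality (92) is Hoeffding’s lemma; it can be verified directly for, say, a Bernoulli variable, whose values lie in an interval of length \(l=1\). A small sketch (the parameters p and \(\beta \) are illustrative choices of ours):

```python
import numpy as np

# Check of (92) for U ~ Bernoulli(p), whose values lie in an interval of length l = 1.
p, l = 0.3, 1.0
for beta in (-2.0, 0.5, 3.0):
    log_mgf = np.log(1 - p + p * np.exp(beta))       # log E[exp(beta * U)]
    bound = beta * p + beta**2 * l**2 / 8
    print(f"beta={beta:5.1f}: log-MGF = {log_mgf:.4f} <= bound = {bound:.4f}")
```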

Lemma 9

Let \(J\in {\mathbb {N}}\), \(\gamma >0\) and \({{\overline{Q}}}\in {\mathscr {M}}\). If \({r}\) satisfies \(n\beta a_{1}{r}\geqslant 1\) and (12), for all \(\gamma _{0}>2\gamma \)

$$\begin{aligned}&\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}\exp \left[ {-\gamma _{0}n\beta a_{1}\ell ({{\overline{Q}}},P)}\right] d\pi (P)\nonumber \\&\quad \leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {\Xi -\left( {\gamma _{0}-2\gamma }\right) n \beta a_{1}2^{J}{r}}\right] \end{aligned}$$
(94)

with

$$\begin{aligned} \Xi&= -\gamma +\log \left[ {\frac{1}{1-\exp \left[ {-\left( {\gamma _{0}-2\gamma }\right) }\right] }}\right] . \end{aligned}$$

Besides,

$$\begin{aligned} \int _{{\mathscr {M}}}\exp \left[ {-\gamma _{0}n\beta a_{1}\ell ({{\overline{Q}}},P)}\right] d\pi (P)\leqslant \pi ({\mathscr {B}})\exp \left[ {\Xi '}\right] \end{aligned}$$
(95)

with

$$\begin{aligned} \Xi '=\log \left[ {1+\frac{\exp \left[ {-\left( {\gamma _{0}-\gamma }\right) }\right] }{1-\exp \left[ {-\left( {\gamma _{0}-2\gamma }\right) }\right] }}\right] . \end{aligned}$$

Proof

From (12), we deduce by induction that for all \(j\geqslant 0\)

$$\begin{aligned} \pi \left( {{\mathscr {B}}({{\overline{Q}}},2^{j+1}{r})}\right)&\leqslant \exp \left[ {\gamma n \beta a_{1}{r}\sum _{k=0}^{j}2^{k}}\right] \pi \left( {{\mathscr {B}}}\right) \\&= \exp \left[ {(2^{j+1}-1)\gamma n \beta a_{1}{r}}\right] \pi \left( {{\mathscr {B}}}\right) . \end{aligned}$$

Consequently,

$$\begin{aligned}&\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}\exp \left[ {-\gamma _{0}n\beta a_{1}\ell ({{\overline{Q}}},P)}\right] d\pi (P)\\&\quad =\sum _{j\geqslant J}\int _{{\mathscr {B}}({{\overline{Q}}},2^{j+1}{r}){\setminus } {\mathscr {B}}({{\overline{Q}}},2^{j}{r})} \exp \left[ {-\gamma _{0}\beta n a_{1}\ell ({{\overline{Q}}},P)}\right] d\pi (P)\\&\quad \leqslant \pi \left( {{\mathscr {B}}}\right) \sum _{j\geqslant J}\frac{\pi \left( {{\mathscr {B}}({{\overline{Q}}},2^{j+1}{r})}\right) }{\pi \left( {{\mathscr {B}}}\right) } \exp \left[ {-\gamma _{0}n\beta a_{1} 2^{j}{r}}\right] \\&\quad \leqslant \pi \left( {{\mathscr {B}}}\right) \sum _{j\geqslant J}\exp \left[ {\gamma n\beta a_{1}(2^{j+1}-1){r}-\gamma _{0}n\beta a_{1} 2^{j}{r}}\right] \\&\quad = \pi \left( {{\mathscr {B}}}\right) \exp \left[ {-\gamma n \beta a_{1}{r}}\right] \sum _{j\geqslant J} \exp \left[ {-\left( {\gamma _{0}-2\gamma }\right) n\beta a_{1} 2^{j}{r}}\right] \\&\quad = \pi \left( {{\mathscr {B}}}\right) \exp \left[ {-\gamma n \beta a_{1}{r}}\right] \sum _{j\geqslant 0}\exp \left[ {-\left( {\gamma _{0}-2\gamma }\right) n\beta a_{1} 2^{j}2^{J}{r}}\right] . \end{aligned}$$

Since \(2^{j}\geqslant j+1\) for all \(j\geqslant 0\) we obtain that

$$\begin{aligned}&\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}}, 2^{J}r)}\exp \left[ {-\gamma _{0}n\beta a_{1}\ell ({{\overline{Q}}},P)}\right] d\pi (P)\\&\quad \leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {-\gamma n \beta a_{1}{r}}\right] \sum _{j\geqslant 0}\exp \left[ {-\left( {\gamma _{0}-2\gamma }\right) n\beta a_{1} (j+1)2^{J}{r}}\right] \\&\quad \leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {-\gamma n \beta a_{1}{r}-\left( {\gamma _{0}-2\gamma }\right) n\beta a_{1}2^{J}{r}}\right] \sum _{j\geqslant 0}\exp \left[ {-j\left( {\gamma _{0}-2\gamma }\right) n\beta a_{1}2^{J}{r}}\right] \\&\quad = \pi \left( {{\mathscr {B}}}\right) \frac{\exp \left[ {-\gamma n \beta a_{1}{r}}\right] }{1-\exp \left[ {-\left( {\gamma _{0}-2\gamma }\right) n\beta a_{1}2^{J}{r}}\right] }\exp \left[ {-\left( {\gamma _{0}-2\gamma }\right) n \beta a_{1}2^{J}{r}}\right] . \end{aligned}$$

This leads to (94), since \(n\beta a_{1}2^{J}{r}\geqslant n\beta a_{1}{r}\geqslant 1\). Finally, by applying this inequality with \(J=0\), we obtain that

$$\begin{aligned}&\int _{{\mathscr {M}}}\exp \left[ {-\gamma _{0}\beta n a_{1}\ell ({{\overline{Q}}},P)}\right] d\pi (P)\\&\quad =\int _{{\mathscr {B}}}\exp \left[ {-\gamma _{0}\beta n a_{1}\ell ({{\overline{Q}}},P)}\right] d\pi (P) +\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}}\exp \left[ {-\gamma _{0}\beta n a_{1}\ell ({{\overline{Q}}},P)}\right] d\pi (P)\\&\quad \leqslant \pi ({\mathscr {B}})\left[ {1+\frac{\exp \left[ {-\gamma -\left( {\gamma _{0} -2\gamma }\right) n \beta a_{1}{r}}\right] }{1-\exp \left[ {-\left( {\gamma _{0}-2\gamma }\right) }\right] }}\right] \\&\quad \leqslant \pi ({\mathscr {B}})\left[ {1+\frac{\exp \left[ {-\left( {\gamma _{0}-\gamma }\right) }\right] }{1-\exp \left[ {-\left( {\gamma _{0}-2\gamma }\right) }\right] }}\right] , \end{aligned}$$

which is (95). \(\square \)

9.3 Proof of Theorem 1

For all \(i\in \{1,\ldots ,n\}\) and \(P,Q,Q'\in {\mathscr {M}}\), let us set

$$\begin{aligned} U_{i}&={c}\left( {t_{(P,Q')}(X_{i})-{\mathbb {E}}\left[ {t_{(P,Q')}(X_{i})}\right] }\right) \nonumber \\&\quad -{c}_{1}\left( {t_{(P,Q)}(X_{i})-{\mathbb {E}}\left[ {t_{(P,Q)}(X_{i})}\right] }\right) \end{aligned}$$
(96)
$$\begin{aligned} V_{i}&={c}_{2}\left( {t_{(P,Q')}(X_{i})-{\mathbb {E}}\left[ {t_{(P,Q')}(X_{i})}\right] }\right) \nonumber \\&\quad -{c}_{1}\left( {t_{(P,Q)}(X_{i})-{\mathbb {E}}\left[ {t_{(P,Q)}(X_{i})}\right] }\right) . \end{aligned}$$
(97)

The random variables \(U_{i}\) are independent and, under Assumption 3-(iv), they take their values in an interval of length \(l_{1}={c}+{c}_{1}=1+2{c}\). The \(V_{i}\) are also independent and they take their values in an interval of length \(l_{2}={c}_{1}+{c}_{2}=3+2{c}\). Applying Lemma 7, we obtain that

$$\begin{aligned} \prod _{i=1}^{n}{\mathbb {E}}\left[ {\exp \left[ {\beta U_{i}}\right] }\right]&\leqslant \exp \left[ {\frac{l_{1}^{2} n\beta ^{2}}{8}}\right] \end{aligned}$$
(98)

and

$$\begin{aligned} \prod _{i=1}^{n}{\mathbb {E}}\left[ {\exp \left[ {\beta V_{i}}\right] }\right]&\leqslant \exp \left[ {\frac{l_{2}^{2} n\beta ^{2}}{8}}\right] . \end{aligned}$$
(99)

By using Assumption 2 and the fact that \({c}_{0}={c}_{1}-{c}a_{0}/a_{1}>0\),

$$\begin{aligned}&{c}\left( {a_{0}\ell ({\overline{P}}^{\star },P)- a_{1}\ell ({\overline{P}}^{\star },Q')}\right) -{c}_{1}\left( {a_{1}\ell ({\overline{P}}^{\star },P)-a_{0} \ell ({\overline{P}}^{\star },Q)}\right) \nonumber \\&\quad = -\left( {{c}_{1}a_{1}-{c}a_{0}}\right) \ell ({\overline{P}}^{\star },P)-{c}a_{1}\ell ({\overline{P}}^{\star },Q')+{c}_{1}a_{0}\ell ({\overline{P}}^{\star },Q)\nonumber \\&\quad \leqslant -{c}_{0}a_{1}\left[ {\tau ^{-1}\ell ({{\overline{Q}}},P)-\ell ({\overline{P}} ^{\star },{{\overline{Q}}})}\right] -{c}a_{1}\left[ {\tau ^{-1}\ell ({{\overline{Q}}},Q')-\ell ({\overline{P}}^{\star },{{\overline{Q}}})}\right] \nonumber \\&\qquad +\tau {c}_{1}a_{0}\left[ {\ell ({\overline{P}}^{\star },{{\overline{Q}}})+\ell ({{\overline{Q}}},Q)}\right] \nonumber \\&\quad =e_{0}a_{1}\ell (\overline{P}^{\star },{{\overline{Q}}})-\tau ^{-1}{c}_{0}a_{1}\ell ({{\overline{Q}}},P)-\tau ^{-1}{c}a_{1}\ell ({{\overline{Q}}},Q')+\tau {c}_{1}a_{0}\ell ({{\overline{Q}}},Q) \end{aligned}$$
(100)

with

$$\begin{aligned} e_{0}={c}_{0}+{c}+\frac{\tau {c}_{1} a_{0}}{a_{1}}. \end{aligned}$$
(101)

It follows from (100) and Assumption 3-(iii), more precisely its consequences (85) and (86), that

$$\begin{aligned}&n^{-1}\left\{ {{c}{\mathbb {E}}\left[ {{\textbf{T}}({\varvec{X}},P,Q')}\right] -{c}_{1}{\mathbb {E}}\left[ {{\textbf{T}}({\varvec{X}},P,Q)}\right] }\right\} \nonumber \\&\leqslant {c}\left[ {a_{0}\ell ({\overline{P}}^{\star },P)- a_{1} \ell ({\overline{P}}^{\star },Q')}\right] -{c}_{1}\left[ {a_{1}\ell ({\overline{P}}^{\star },P)-a_{0}\ell ({\overline{P}}^{\star },Q)}\right] \nonumber \\&\leqslant e_{0}a_{1}\ell (\overline{P}^{\star },{{\overline{Q}}})-\tau ^{-1}{c}_{0}a_{1}\ell ({{\overline{Q}}},P)-\tau ^{-1}{c}a_{1}\ell ({{\overline{Q}}},Q')+\tau {c}_{1}a_{0}\ell ({{\overline{Q}}},Q). \end{aligned}$$
(102)

Since \(a_{0}\geqslant a_{1}\) and \({c}_{2}>{c}_{1}\), \({c}_{0}'={c}_{2}(a_{0}/a_{1})-{c}_{1}>0\) and by arguing as above, we obtain similarly that

$$\begin{aligned}&n^{-1}\left\{ {{c}_{2}{\mathbb {E}}\left[ {{\textbf{T}}({\varvec{X}},P,Q')}\right] -{c}_{1}{\mathbb {E}}\left[ {{\textbf{T}}({\varvec{X}},P,Q)}\right] }\right\} \nonumber \\&\quad \leqslant {c}_{2} \left( {a_{0}\ell ({\overline{P}}^{\star },P)- a_{1}\ell ({\overline{P}}^{\star },Q')}\right) -{c}_{1}\left( {a_{1}\ell ({\overline{P}}^{\star },P)-a_{0}\ell ({\overline{P}}^{\star },Q)}\right) \nonumber \\&\quad ={c}_{0}'a_{1}\ell ({\overline{P}}^{\star },P)-{c}_{2}a_{1}\ell ({\overline{P}}^{\star },Q')+{c}_{1}a_{0}\ell ({\overline{P}}^{\star },Q)\nonumber \\&\quad \leqslant \tau {c}_{0}'a_{1}\left[ {\ell ({\overline{P}}^{\star },{{\overline{Q}}})+\ell ({{\overline{Q}}},P)}\right] -{c}_{2}a_{1}\left[ {\tau ^{-1}\ell ({{\overline{Q}}},Q')-\ell ({\overline{P}}^{\star },{{\overline{Q}}})}\right] \nonumber \\&\qquad +\tau {c}_{1}a_{0}\left[ {\ell ({\overline{P}}^{\star },{{\overline{Q}}})+\ell ({{\overline{Q}}},Q)}\right] \nonumber \\&\quad \leqslant \left( {e_{1}+{c}_{2}}\right) a_{1}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+\tau {c}_{0}'a_{1}\ell ({{\overline{Q}}},P)\nonumber \\&\qquad -\tau ^{-1}{c}_{2} a_{1}\ell ({{\overline{Q}}},Q')+\tau {c}_{1}a_{0}\ell ({{\overline{Q}}},Q), \end{aligned}$$
(103)

with

$$\begin{aligned} e_{1}=\tau \left[ {{c}_{0}'+{c}_{1} a_{0}/a_{1}}\right] =\tau \left[ {{c}_{2}(a_{0}/a_{1})+{c}_{1}\left( {a_{0}/a_{1}-1}\right) }\right] . \end{aligned}$$
(104)

Using (98) and (102), we deduce that for all \(P,Q,Q'\in {\mathscr {M}}\)

$$\begin{aligned}&{\mathbb {E}}\left[ {\exp \left[ {\beta \left( {{c}{\textbf{T}}({\varvec{X}},P,Q')-{c}_{1}{\textbf{T}}({\varvec{X}},P,Q)}\right) }\right] }\right] \nonumber \\&\quad =\prod _{i=1}^{n}{\mathbb {E}}\left[ {\exp \left[ {\beta \left( {{c}t_{(P,Q')}(X_{i})-{c}_{1}t_{(P,Q)}(X_{i})}\right) }\right] }\right] \nonumber \\&\quad =\exp \left[ {\beta \left( {{c}{\mathbb {E}}\left[ {{\textbf{T}}({\varvec{X}},P,Q')}\right] -{c}_{1}{\mathbb {E}}\left[ {{\textbf{T}}({\varvec{X}},P,Q)}\right] }\right) }\right] \prod _{i=1}^{n}{\mathbb {E}}\left[ {\exp \left[ {\beta U_{i}}\right] }\right] \nonumber \\&\quad \leqslant \exp \left[ {n\beta \left[ {\Delta _{1}(P,Q)-\tau ^{-1}{c}a_{1}\ell ({{\overline{Q}}},Q')}\right] }\right] \end{aligned}$$
(105)

with

$$\begin{aligned}&\Delta _{1}(P,Q)=e_{0}a_{1}\ell (\overline{P}^{\star },{{\overline{Q}}})+\tau {c}_{1}a_{0}\ell ({{\overline{Q}}},Q)+\frac{l_{1}^{2} \beta }{8}-\tau ^{-1}{c}_{0}a_{1}\ell ({{\overline{Q}}},P). \end{aligned}$$
(106)

Using (99) and (103), we obtain similarly that for all \(P,Q,Q'\in {\mathscr {M}}\)

$$\begin{aligned}&{\mathbb {E}}\left[ {\exp \left[ {\beta \left( {{c}_{2}{\textbf{T}}({\varvec{X}},P,Q')-{c}_{1}{\textbf{T}}({\varvec{X}},P,Q)}\right) }\right] }\right] \nonumber \\&\quad \leqslant \exp \left[ {n\beta \left[ {\Delta _{2}(P,Q)-\tau ^{-1}{c}_{2}a_{1}\ell ({{\overline{Q}}},Q')}\right] }\right] \end{aligned}$$
(107)

with

$$\begin{aligned} \Delta _{2}(P,Q)&=\left( {e_{1}+{c}_{2}}\right) a_{1}\ell ({\overline{P}}^{\star },{{\overline{Q}}}) +\tau {c}_{0}'a_{1}\ell ({{\overline{Q}}},P)+\tau {c}_{1}a_{0}\ell ({{\overline{Q}}},Q)\nonumber \\&\quad +\frac{l_{2}^{2}\beta }{8}. \end{aligned}$$
(108)

Since \(2\gamma<\tau ^{-1}{c}<\tau ^{-1}{c}_{2}\), we may apply Lemma 9 with \(\gamma _{0}=\tau ^{-1}{c}\) and \(\gamma _{0}=\tau ^{-1}{c}_{2}\) successively, which leads to

$$\begin{aligned} \int _{{\mathscr {M}}}\exp \left[ {-\tau ^{-1}{c}n \beta a_{1}\ell ({{\overline{Q}}},Q')}\right] d\pi (Q')\leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {\Xi _{1}}\right] \end{aligned}$$
(109)

and

$$\begin{aligned} \int _{{\mathscr {M}}}\exp \left[ {-\tau ^{-1}{c}_{2}n \beta a_{1}\ell ({{\overline{Q}}},Q')}\right] d\pi (Q')\leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {\Xi _{1}}\right] \end{aligned}$$
(110)

with

$$\begin{aligned} \Xi _{1}&=\log \left[ {1+\frac{\exp \left[ {-\left( {\tau ^{-1}{c}-\gamma }\right) }\right] }{1-\exp \left[ {-\left( {\tau ^{-1}{c}-2\gamma }\right) }\right] }}\right] \nonumber \\&\geqslant \log \left[ {1+\frac{\exp \left[ {-\left( {\tau ^{-1}{c}_{2}-\gamma }\right) }\right] }{1-\exp \left[ {-\left( {\tau ^{-1}{c}_{2}-2\gamma }\right) }\right] }}\right] . \end{aligned}$$
(111)

Putting (107) and (110) together leads to

$$\begin{aligned} \exp \left[ {{\textbf{L}}(P,Q)}\right]&=\int _{{\mathscr {M}}}{\mathbb {E}}\left[ {\exp \left[ {\beta \left( {{c}_{2}{\textbf{T}}({\varvec{X}},P,Q')-{c}_{1}{\textbf{T}}({\varvec{X}},P,Q)}\right) }\right] }\right] d\pi (Q')\\&\leqslant \exp \left[ {n\beta \Delta _{2}(P,Q)}\right] \int _{{\mathscr {M}}}\exp \left[ {-\tau ^{-1}{c}_{2}n \beta a_{1}\ell ({{\overline{Q}}},Q')}\right] d\pi (Q')\\&\leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {\Xi _{1}+n\beta \Delta _{2}(P,Q)}\right] , \end{aligned}$$

and since, for all \((P,Q)\in {\mathscr {B}}^{2}\), by definition (108) of \(\Delta _{2}(P,Q)\),

$$\begin{aligned} \Delta _{2}(P,Q)&\leqslant \left( {e_{1}+{c}_{2}}\right) a_{1} \ell ({\overline{P}}^{\star },{{\overline{Q}}})+\left[ {\tau {c}_{0}'a_{1}+\tau {c}_{1}a_{0}}\right] {r}+\frac{l_{2}^{2}\beta }{8}\nonumber \\&=\left( {e_{1}+{c}_{2}}\right) a_{1}\ell (\overline{P}^{\star },{{\overline{Q}}})+e_{1}a_{1}{r}+\frac{l_{2}^{2}\beta }{8}=\Delta _{2} \end{aligned}$$
(112)

we derive that

$$\begin{aligned}&\left[ {\int _{{\mathscr {B}}^{2}}\exp \left[ {-{\textbf{L}}(P,Q)}\right] d\pi _{{\mathscr {B}}}(P)d\pi _{{\mathscr {B}}}(Q)}\right] ^{-1}\leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {\Xi _{1}+n\beta \Delta _{2}}\right] . \end{aligned}$$

We deduce from (84) that

$$\begin{aligned} {\mathbb {P}}({^\textsf{c}}{\!{A}}{})&\leqslant \frac{z}{\pi \left( {{\mathscr {B}}}\right) }\exp \left[ {\Xi _{1}+n\beta \Delta _{2}}\right] . \end{aligned}$$

In particular, \({\mathbb {P}}({^\textsf{c}}{\!{A}}{})\leqslant e^{-\xi }\) for z satisfying

$$\begin{aligned} \log \left( {\frac{1}{z}}\right) =\xi +\log \frac{1}{\pi ({\mathscr {B}})}+\Xi _{1}+n\beta \Delta _{2}. \end{aligned}$$
(113)

Putting (105) and (109) together, we obtain that

$$\begin{aligned}&\exp \left[ {{\textbf{M}}(P,Q)}\right] \\&\quad =\int _{{\mathscr {M}}}{\mathbb {E}}\left[ {\exp \left[ {\beta \left( {{c}{\textbf{T}}({\varvec{X}},P,Q')-{c}_{1}{\textbf{T}}({\varvec{X}},P,Q)}\right) }\right] }\right] d\pi (Q')\\&\quad \leqslant \exp \left[ {n\beta \Delta _{1}(P,Q)}\right] \int _{{\mathscr {M}}}\exp \left[ {-\tau ^{-1}{c}n \beta a_{1}\ell ({{\overline{Q}}},Q')}\right] d\pi (Q')\\&\quad \leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {\Xi _{1}+n\beta \Delta _{1}(P,Q)}\right] . \end{aligned}$$

It follows from the definition (106) of \(\Delta _{1}(P,Q)\) that for all \(P\in {\mathscr {M}}\) and for all \(Q\in {\mathscr {B}}\),

$$\begin{aligned} \Delta _{1}(P,Q)&\leqslant e_{0}a_{1}\ell (\overline{P}^{\star },{{\overline{Q}}})+\tau {c}_{1}a_{0}{r}+\frac{l_{1}^{2} \beta }{8}-\tau ^{-1}{c}_{0}a_{1}\ell ({{\overline{Q}}},P), \end{aligned}$$

and consequently, for all \(P\in {\mathscr {M}}\) and \(Q\in {\mathscr {B}}\)

$$\begin{aligned}&\exp \left[ {{\textbf{M}}(P,Q)}\right] \\&\quad \leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {\Xi _{1}+n\beta \left( {e_{0}a_{1}\ell (\overline{P}^{\star },{{\overline{Q}}})+\tau {c}_{1}a_{0}{r}+\frac{l_{1}^{2} \beta }{8}-\tau ^{-1}{c}_{0}a_{1}\ell ({{\overline{Q}}},P)}\right) }\right] . \end{aligned}$$

We derive from Lemma 5 that

$$\begin{aligned}&{\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] \\&\quad \leqslant \frac{1}{\pi ({\mathscr {B}})}\left[ {\int _{{\mathscr {B}}}\exp \left[ {-{\textbf{M}}(P,Q)}\right] d\pi _{{\mathscr {B}}}(Q)}\right] ^{-1}\\&\quad \leqslant \exp \left[ {\Xi _{1}+n\beta \left( {e_{0}a_{1}\ell (\overline{P}^{\star },{{\overline{Q}}})+\tau {c}_{1}a_{0}{r}+\frac{l_{1}^{2} \beta }{8}-\tau ^{-1}{c}_{0}a_{1}\ell ({{\overline{Q}}},P)}\right) }\right] , \end{aligned}$$

hence,

$$\begin{aligned} \int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}&{\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] d\pi (P)\nonumber \\&\leqslant \exp \left[ {\Xi _{1}+n\beta \left( { e_{0}a_{1}\ell ({\overline{P}} ^{\star },{{\overline{Q}}})+\tau {c}_{1}a_{0}{r}+\frac{l_{1}^{2} \beta }{8}}\right) }\right] \nonumber \\&\quad \times \int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}\exp \left[ {-\tau ^{-1}{c}_{0} n\beta a_{1} \ell ({{\overline{Q}}},P)}\right] d\pi (P). \end{aligned}$$
(114)

Applying Lemma 9 with \(\gamma _{0}=\tau ^{-1}{c}_{0}>2\gamma \) and setting \(e_{2}=\tau ^{-1}{c}_{0}-2\gamma \), we get

$$\begin{aligned} \int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}\exp \left[ {-\tau ^{-1}{c}_{0} n\beta a_{1} \ell ({{\overline{Q}}},P)}\right] d\pi (P)\leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {\Xi _{2}-e_{2}n\beta a_{1}2^{J}{r}}\right] \end{aligned}$$

with

$$\begin{aligned} \Xi _{2}=-\gamma +\log \left[ {\frac{1}{1-\exp \left[ {-e_{2}}\right] }}\right] , \end{aligned}$$
(115)

which together with (114) leads to

$$\begin{aligned}&\log \int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}{\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] d\pi (P)\nonumber \\&\quad \leqslant \log \left[ { \pi \left( {{\mathscr {B}}}\right) }\right] +\Xi _{1}+\Xi _{2}\nonumber \\&\qquad +n\beta \left[ {e_{0}a_{1}\ell (\overline{P}^{\star },{{\overline{Q}}})+\tau {c}_{1}a_{0}{r}+\frac{l_{1}^{2} \beta }{8}-e_{2}a_{1}2^{J}{r}}\right] . \end{aligned}$$
(116)

Using the definitions (113) of z and (112) of \(\Delta _{2}\) we deduce from (116) that

$$\begin{aligned}&\log \left[ {\frac{1}{z}\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}{\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] d\pi (P)}\right] \nonumber \\&\quad \leqslant \log \left( {\frac{1}{z}}\right) +\log \left[ { \pi \left( {{\mathscr {B}}}\right) }\right] +\Xi _{1}+\Xi _{2}\nonumber \\&\quad \quad +n\beta \left[ {e_{0}a_{1}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+\tau {c}_{1}a_{0}{r}+\frac{l_{1}^{2} \beta }{8} -e_{2}a_{1}2^{J}{r}}\right] \nonumber \\&\quad =\xi +\log \frac{1}{\pi ({\mathscr {B}})}+\Xi _{1}+n\beta \Delta _{2}+\log \left[ { \pi \left( {{\mathscr {B}}}\right) }\right] +\Xi _{1}+\Xi _{2}\nonumber \\&\qquad +n\beta \left[ {e_{0}a_{1}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+\tau {c}_{1}a_{0}{r}+\frac{l_{1}^{2} \beta }{8} -e_{2}a_{1}2^{J}{r}}\right] \nonumber \\&\quad =n\beta \left[ {\left( {e_{1}+{c}_{2}+e_{0}}\right) a_{1}\ell ({\overline{P}}^{\star },{{\overline{Q}}}) +e_{1}a_{1}{r}+\frac{l_{2}^{2}\beta }{8}+\tau {c}_{1}a_{0}{r}+\frac{l_{1}^{2} \beta }{8}}\right] \nonumber \\&\quad \quad +\xi +2\Xi _{1}+\Xi _{2}-e_{2}n \beta a_{1}2^{J}{r}\nonumber \\&\quad =n\beta \left[ {\left( {e_{0}+e_{1}+{c}_{2}}\right) a_{1}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+\left[ {e_{1} +\frac{\tau {c}_{1}a_{0}}{a_{1}}}\right] a_{1}{r}+\frac{(l_{1}^{2}+l_{2}^{2}) \beta }{8}}\right] \nonumber \\&\quad \quad +\xi +2\Xi _{1}+\Xi _{2}-e_{2}n \beta a_{1}2^{J}{r}. \end{aligned}$$
(117)

Setting

$$\begin{aligned} C_{1}=e_{0}+e_{1}+{c}_{2}\quad \text {and}\quad C_{2}=e_{1}+\frac{\tau {c}_{1}a_{0}}{a_{1}}, \end{aligned}$$

we see that the right-hand side of (117) is not larger than \(-\xi \), provided that

$$\begin{aligned} e_{2}n\beta a_{1}2^{J}{r}&\geqslant 2\xi +2\Xi _{1}+\Xi _{2}+n\beta \left[ {C_{1}a_{1}\ell (\overline{P}^{\star },{{\overline{Q}}})+C_{2}a_{1}{r}+\frac{(l_{1}^{2}+l_{2}^{2}) \beta }{8}}\right] \end{aligned}$$

or equivalently if

$$\begin{aligned} 2^{J}&\geqslant \frac{1}{e_{2}}\left[ {\frac{2\xi +2\Xi _{1}+\Xi _{2}}{\beta na_{1}{r}}+\frac{C_{1}\ell (\overline{P}^{\star },{{\overline{Q}}})}{{r}}+C_{2}+\frac{\left[ {l_{1}^{2}+l_{2}^{2}}\right] \beta }{8a_{1}{r}}}\right] . \end{aligned}$$
(118)

Choosing \({{\overline{Q}}}\) in \({\mathscr {M}}(\beta )\) and using the inequalities \(a_{1}^{-1}\beta \geqslant {r}_{n}(\beta ,{{\overline{Q}}})\geqslant 1/(\beta na_{1})\), for

$$\begin{aligned} {r}=\ell (\overline{P}^{\star },{{\overline{Q}}})+\frac{1}{a_{1}}\left( {\beta +\frac{2\xi }{n\beta }}\right) \geqslant \frac{1}{\beta n a_{1}} \end{aligned}$$

we obtain that the right-hand side of (118) satisfies

$$\begin{aligned}&\frac{1}{e_{2}}\left[ {\frac{2\xi +2\Xi _{1}+\Xi _{2}}{\beta na_{1}{r}}+\frac{C_{1}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+C_{2}{r}}{{r}}+\frac{\left[ {l_{1}^{2}+l_{2}^{2}}\right] \beta }{8a_{1}{r}}}\right] \\&\quad \leqslant \frac{1}{e_{2}}\left[ {C_{2}+2\Xi _{1}+\Xi _{2}+\frac{C_{3}}{{r}}\left( {\ell ({\overline{P}}^{\star },{{\overline{Q}}})+\frac{1}{a_{1}}\left( {\beta +\frac{2\xi }{n\beta }}\right) }\right) }\right] \\&\quad = \frac{1}{e_{2}}\left[ {C_{2}+2\Xi _{1}+\Xi _{2}+C_{3}}\right] \end{aligned}$$

with \(C_{3}=\max \{1,C_{1},\left[ {l_{1}^{2}+l_{2}^{2}}\right] /8\}\). Inequality (118) is therefore satisfied for \(J\in {\mathbb {N}}\) such that

$$\begin{aligned} 2^{J}\geqslant \frac{C_{2}+2\Xi _{1}+\Xi _{2}+C_{3}}{e_{2}}\vee 1> 2^{J-1}, \end{aligned}$$

and we may take

$$\begin{aligned} \kappa _{0}=\tau \left[ { \frac{2\left( {C_{2}+2\Xi _{1}+\Xi _{2}+C_{3}}\right) }{e_{2}}\vee 1+1}\right] \geqslant \tau \left( {2^{J}+1}\right) . \end{aligned}$$
(119)

We recall below the list of constants, depending on \(a_{0},a_{1},c,\tau \) and \(\gamma \), that we have used throughout the proof.

$$\begin{aligned} c_{0}&=1+c-\frac{ca_{0}}{a_{1}},&c_{1}&=1+c,&c_{2}&=2+c,\\ c'_{0}&=\frac{c_{2}a_{0}}{a_{1}}-c_{1},&l_{1}&=1+2c,&l_{2}&=3+2c,\\ e_{0}&=c_{0}+c+\frac{\tau c_{1}a_{0}}{a_{1}},&e_{1}&=\tau \left[ {c_{0}'+c_{1}\frac{a_{0}}{a_{1}}}\right] ,&e_{2}&=\tau ^{-1}c_{0}-2\gamma ,\\ C_{1}&=e_{0}+e_{1}+c_{2},&C_{2}&=e_{1}+\frac{\tau c_{1}a_{0}}{a_{1}},&C_{3}&=\max \left\{ {1,C_{1},\frac{l_{1}^{2}+l_{2}^{2}}{8}}\right\} , \end{aligned}$$

and

$$\begin{aligned} \Xi _{1}&=\log \left[ {1+\frac{\exp \left[ {-\left( {\tau ^{-1}{c}-\gamma }\right) }\right] }{1-\exp \left[ {-\left( {\tau ^{-1}{c}-2\gamma }\right) }\right] }}\right] ,\quad \Xi _{2}=-\gamma +\log \left[ {\frac{1}{1-\exp \left[ {-e_{2}}\right] }}\right] . \end{aligned}$$
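
All of these quantities are elementary functions of \(a_{0},a_{1},c,\tau \) and \(\gamma \), and it is straightforward to evaluate them numerically. The short Python sketch below recomputes them and checks the positivity requirements \(e_{2}>0\) and \(\tau ^{-1}c-2\gamma >0\) without which \(\Xi _{2}\) and \(\Xi _{1}\) would be infinite; the parameter values are illustrative placeholders of our own choosing, not values prescribed by the assumptions of the theorem.

```python
# Sanity check of the constants used in the proof of Theorem 1.
# The parameter values below are illustrative placeholders only.
import math

a0, a1, c, tau, gamma = 1.0, 1.0, 0.5, 1.1, 0.05

c0, c1, c2 = 1 + c - c * a0 / a1, 1 + c, 2 + c
c0p = c2 * a0 / a1 - c1
l1, l2 = 1 + 2 * c, 3 + 2 * c
e0 = c0 + c + tau * c1 * a0 / a1
e1 = tau * (c0p + c1 * a0 / a1)
e2 = c0 / tau - 2 * gamma
C1, C2 = e0 + e1 + c2, e1 + tau * c1 * a0 / a1
C3 = max(1.0, C1, (l1**2 + l2**2) / 8)

# e2 > 0 and c/tau - 2*gamma > 0 make Xi_2 and Xi_1 finite
assert e2 > 0 and c / tau - 2 * gamma > 0
Xi1 = math.log(1 + math.exp(-(c / tau - gamma))
               / (1 - math.exp(-(c / tau - 2 * gamma))))
Xi2 = -gamma + math.log(1 / (1 - math.exp(-e2)))
kappa0 = tau * (max(2 * (C2 + 2 * Xi1 + Xi2 + C3) / e2, 1) + 1)  # as in (119)
print(f"e2={e2:.3f}  Xi1={Xi1:.3f}  Xi2={Xi2:.3f}  kappa0={kappa0:.2f}")
```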

9.4 Proof of Theorem 2

The proof follows the same lines as that of Theorem 1. Under Assumption 3-(iv), the random variables \(U_{i}\) and \(V_{i}\) defined by (96) and (97) are not larger than \(b\), with \(b={c}+{c}_{1}=l_{1}\) and \(b={c}_{2}+{c}_{1}=l_{2}\) respectively. Since, by Assumption 4, more precisely its consequence (87),

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^{n}{\mathbb {E}}\left[ {U_{i}^{2}}\right]&\leqslant 2\left[ {\frac{{c}^{2}}{n} \sum _{i=1}^{n}{\text {Var}}\left[ {t_{(P,Q')}(X_{i})}\right] +\frac{{c}_{1}^{2}}{n}\sum _{i=1}^{n}{\text {Var}}\left[ {t_{(P,Q)}(X_{i})}\right] }\right] \\&\leqslant 2a_{2}\left[ {({c}^{2}+{c}_{1}^{2})\ell (\overline{P}^{\star },P)+{c}^{2}\ell ({\overline{P}}^{\star },Q')+{c}_{1}^{2}\ell (\overline{P}^{\star },Q)}\right] \end{aligned}$$

and

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^{n}{\mathbb {E}}\left[ {V_{i}^{2}}\right]&\leqslant 2a_{2}\left[ {({c}_{2}^{2}+{c}_{1}^{2})\ell (\overline{P}^{\star },P)+{c}_{2}^{2}\ell ({\overline{P}}^{\star },Q')+{c}_{1}^{2}\ell (\overline{P}^{\star },Q)}\right] \end{aligned}$$

we may apply Lemma 8. Using the notation \(\Lambda _{1}=\tau \phi (\beta l_{1})\), \(\Lambda _{2}=\tau \phi (\beta l_{2})\) and Assumption 1, we get

$$\begin{aligned}&\frac{1}{n\beta }\log \left[ {\prod _{i=1}^{n}{\mathbb {E}}\left[ {\exp \left[ {\beta U_{i}}\right] }\right] }\right] \nonumber \\&\quad \leqslant {\phi (\beta l_{1})}\beta a_{2}\left[ {({c}^{2}+{c}_{1}^{2})\ell ({\overline{P}} ^{\star },P)+{c}^{2}\ell ({\overline{P}}^{\star },Q')+{c}_{1}^{2}\ell ({\overline{P}}^{\star },Q)}\right] \nonumber \\&\quad \leqslant 2\Lambda _{1} \beta a_{2}\left[ {{c}^{2}+{c}_{1}^{2}}\right] \ell ({\overline{P}}^{\star },{{\overline{Q}}})\nonumber \\&\quad \quad +\Lambda _{1} \beta a_{2}\left[ {({c}^{2}+{c}_{1}^{2})\ell ({{\overline{Q}}},P)+{c}^{2}\ell ({{\overline{Q}}},Q') +{c}_{1}^{2}\ell ({{\overline{Q}}},Q)}\right] \end{aligned}$$
(120)

and similarly

$$\begin{aligned}&\frac{1}{n\beta }\log \left[ {\prod _{i=1}^{n}{\mathbb {E}}\left[ {\exp \left[ {\beta V_{i}}\right] }\right] }\right] \nonumber \\&\quad \leqslant 2\Lambda _{2} \beta a_{2}\left[ {{c}_{2}^{2}+{c}_{1}^{2}}\right] \ell ({\overline{P}}^{\star },{{\overline{Q}}})\nonumber \\&\quad \quad +\Lambda _{2} \beta a_{2}\left[ {({c}_{2}^{2}+{c}_{1}^{2})\ell ({{\overline{Q}}},P)+{c}_{2}^{2}\ell ({{\overline{Q}}},Q') +{c}_{1}^{2}\ell ({{\overline{Q}}},Q)}\right] . \end{aligned}$$
(121)

It follows from (102) that

$$\begin{aligned} E_{1}&=n^{-1}\left\{ {{c}{\mathbb {E}}\left[ {{\textbf{T}}({\varvec{X}},P,Q')}\right] -{c}_{1}{\mathbb {E}}\left[ {{\textbf{T}}({\varvec{X}},P,Q)}\right] }\right\} \\&\quad +2\Lambda _{1} \beta a_{2}\left[ {{c}^{2}+{c}_{1}^{2}}\right] \ell ({\overline{P}}^{\star },{{\overline{Q}}})\nonumber \\&\quad +\Lambda _{1} \beta a_{2}\left[ {({c}^{2}+{c}_{1}^{2})\ell ({{\overline{Q}}},P)+{c}^{2}\ell ({{\overline{Q}}},Q')+{c}_{1}^{2}\ell ({{\overline{Q}}},Q)}\right] \\&\leqslant \left[ {e_{0}a_{1}+2\Lambda _{1} \beta a_{2}\left( {{c}^{2}+{c}_{1}^{2}}\right) }\right] \ell ({\overline{P}}^{\star },{{\overline{Q}}})\\&\quad -\left[ {\tau ^{-1}{c}_{0}a_{1}-\Lambda _{1} \beta a_{2}({c}^{2}+{c}_{1}^{2})}\right] \ell ({{\overline{Q}}},P)\\&\quad - \left[ {\tau ^{-1}{c}a_{1}-\Lambda _{1} \beta a_{2}{c}^{2}}\right] \ell ({{\overline{Q}}},Q')\\&\quad + \left[ {\tau {c}_{1}a_{0}+\Lambda _{1} \beta a_{2}{c}_{1}^{2}}\right] \ell ({{\overline{Q}}},Q). \end{aligned}$$

Using the definitions (25) of \({\overline{{c}}}_{1}\) and (26) of \({\overline{{c}}}_{2}\), that is,

$$\begin{aligned} {\overline{{c}}}_{1}={c}_{0}-\tau \Lambda _{1}\beta a_{2}a_{1}^{-1}({c}^{2}+{c}_{1}^{2})\quad \text {and}\quad \overline{{c}}_{2}={c}-\tau \Lambda _{1} \beta a_{2}a_{1}^{-1}{c}^{2} \end{aligned}$$

and setting

$$\begin{aligned} e_{3}&=e_{0}+2\Lambda _{1} \beta \frac{a_{2}\left( {{c}^{2}+{c}_{1}^{2}}\right) }{a_{1}}\\ e_{4}&=\frac{1}{a_{1}}\left[ {\tau {c}_{1}a_{0}+\Lambda _{1} \beta a_{2}{c}_{1}^{2}}\right] \end{aligned}$$

and arguing as in the proof of inequality (105), we deduce from (120) that

$$\begin{aligned}&\log {\mathbb {E}}\left[ {\exp \left[ {\beta \left( {{c}{\textbf{T}}({\varvec{X}},P,Q')-{c}_{1}{\textbf{T}}({\varvec{X}},P,Q)}\right) }\right] }\right] \nonumber \\&\quad \leqslant n\beta E_{1}\nonumber \\&\quad \leqslant n\beta a_{1}\left[ {e_{3}\ell (\overline{P}^{\star },{{\overline{Q}}})-\tau ^{-1}\left[ {{\overline{{c}}}_{1}\ell ({{\overline{Q}}},P)+\overline{{c}}_{2}\ell ({{\overline{Q}}},Q')}\right] +e_{4}\ell ({{\overline{Q}}},Q)}\right] . \end{aligned}$$
(122)

It follows from (103) that

$$\begin{aligned} E_{2}&=n^{-1}\left\{ {{c}_{2}{\mathbb {E}}\left[ {{\textbf{T}}({\varvec{X}},P,Q')}\right] -{c}_{1}{\mathbb {E}}\left[ {{\textbf{T}}({\varvec{X}},P,Q)}\right] }\right\} \\&\quad +2\Lambda _{2} \beta a_{2}\left[ {{c}_{2}^{2}+{c}_{1}^{2}}\right] \ell ({\overline{P}}^{\star },{{\overline{Q}}})\nonumber \\&\quad +\Lambda _{2} \beta a_{2}\left[ {({c}_{2}^{2}+{c}_{1}^{2})\ell ({{\overline{Q}}},P)+{c}_{2}^{2}\ell ({{\overline{Q}}},Q')+{c}_{1}^{2}\ell ({{\overline{Q}}},Q)}\right] \\&\leqslant \left( {e_{1}+{c}_{2}}\right) a_{1}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+\tau {c}_{0}'a_{1}\ell ({{\overline{Q}}},P)-\tau ^{-1}{c}_{2} a_{1}\ell ({{\overline{Q}}},Q')\\&\quad +\tau {c}_{1}a_{0}\ell ({{\overline{Q}}},Q)+2\Lambda _{2} \beta a_{2}\left[ {{c}_{2}^{2}+{c}_{1}^{2}}\right] \ell ({\overline{P}}^{\star },{{\overline{Q}}})\nonumber \\&\quad +\Lambda _{2} \beta a_{2}\left[ {({c}_{2}^{2}+{c}_{1}^{2})\ell ({{\overline{Q}}},P)+{c}_{2}^{2}\ell ({{\overline{Q}}},Q')+{c}_{1}^{2}\ell ({{\overline{Q}}},Q)}\right] \\&=\left[ {\left( {e_{1}+{c}_{2}}\right) a_{1}+2\Lambda _{2} \beta a_{2}\left( {{c}_{2}^{2}+{c}_{1}^{2}}\right) }\right] \ell ({\overline{P}}^{\star },{{\overline{Q}}})\nonumber \\&\quad + \left[ {\tau {c}_{0}'a_{1}+\Lambda _{2} \beta a_{2}({c}_{2}^{2}+{c}_{1}^{2})}\right] \ell ({{\overline{Q}}},P)\\&\quad -\left[ {\tau ^{-1}{c}_{2} a_{1}-\Lambda _{2} \beta a_{2}{c}_{2}^{2}}\right] \ell ({{\overline{Q}}},Q')\\&\quad + \left[ {\tau {c}_{1}a_{0}+\Lambda _{2} \beta a_{2}{c}_{1}^{2}}\right] \ell ({{\overline{Q}}},Q). \end{aligned}$$

Using the definition (27) of \({\overline{{c}}}_{3}\), that is,

$$\begin{aligned} {\overline{{c}}}_{3}={c}_{2}-\tau \Lambda _{2} \beta a_{2}a_{1}^{-1}{c}_{2}^{2}, \end{aligned}$$

and setting

$$\begin{aligned} e_{5}&=e_{1}+{c}_{2}+2\Lambda _{2} \beta \frac{a_{2}\left( {{c}_{2}^{2} +{c}_{1}^{2}}\right) }{a_{1}}, \qquad e_{6}=\tau {c}_{0}'+\Lambda _{2} \beta \frac{a_{2}({c}_{2}^{2}+{c}_{1}^{2})}{a_{1}}\\ e_{7}&=\frac{1}{a_{1}}\left[ {\tau {c}_{1}a_{0}+\Lambda _{2} \beta a_{2}{c}_{1}^{2}}\right] , \end{aligned}$$

and arguing as in the proof of (107), we deduce from (121) that

$$\begin{aligned}&\log {\mathbb {E}}\left[ {\exp \left[ {\beta \left( {{c}_{2}{\textbf{T}}({\varvec{X}},P,Q')-{c}_{1}{\textbf{T}}({\varvec{X}},P,Q)}\right) }\right] }\right] \leqslant n\beta E_{2}\nonumber \\&\quad \leqslant n\beta a_{1}\left( {e_{5}\ell (\overline{P}^{\star },{{\overline{Q}}})+e_{6}\ell ({{\overline{Q}}},P)-\tau ^{-1}\overline{{c}}_{3}\ell ({{\overline{Q}}},Q')+e_{7}\ell ({{\overline{Q}}},Q)}\right) . \end{aligned}$$
(123)

Under our assumption on \(\beta \), we know that the quantities \({\overline{{c}}}_{2}\) and \({\overline{{c}}}_{3}\) are positive and that \(2\gamma <\tau ^{-1}\left( {{\overline{{c}}}_{2}\wedge {\overline{{c}}}_{3}}\right) \). We may therefore apply Lemma 9 with \(\gamma _{0}=\tau ^{-1} {\overline{{c}}}_{2}\) and \(\gamma _{0}=\tau ^{-1} {\overline{{c}}}_{3}\) successively and get

$$\begin{aligned} \int _{{\mathscr {M}}}\exp \left[ {-\tau ^{-1} {\overline{{c}}}_{2}n\beta a_{1}\ell ({{\overline{Q}}},Q')}\right] d\pi (Q')&\leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {\overline{\Xi }_{1}}\right] \end{aligned}$$
(124)

and

$$\begin{aligned} \int _{{\mathscr {M}}}\exp \left[ {-\tau ^{-1} {\overline{{c}}}_{3}n\beta a_{1}\ell ({{\overline{Q}}},Q')}\right] d\pi (Q')&\leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {\overline{\Xi }_{1}}\right] \end{aligned}$$
(125)

with

$$\begin{aligned} {\overline{\Xi }}_{1}=\log \left[ {1+\frac{\exp \left[ {-\left( {\tau ^{-1}( {\overline{{c}}}_{2}\wedge {\overline{{c}}}_{3})-\gamma }\right) }\right] }{1-\exp \left[ {-\left( {\tau ^{-1}( {\overline{{c}}}_{2}\wedge {\overline{{c}}}_{3})-2\gamma }\right) }\right] }}\right] . \end{aligned}$$
(126)

Putting (123) and (125) together, we obtain that for all \((P,Q)\in {\mathscr {B}}^{2}\)

$$\begin{aligned}&\exp \left[ {{\textbf{L}}(P,Q)}\right] =\int _{{\mathscr {M}}}{\mathbb {E}}\left[ {\exp \left[ {\beta \left( {{c}_{2}{\textbf{T}}({\varvec{X}},P,Q')-{c}_{1}{\textbf{T}}({\varvec{X}},P,Q)}\right) }\right] }\right] d\pi (Q')\\&\quad \leqslant \exp \left[ {n\beta a_{1}\left( {e_{5}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+e_{6}\ell ({{\overline{Q}}},P)+e_{7}\ell ({{\overline{Q}}},Q)}\right) }\right] \\&\quad \quad \times \int _{{\mathscr {M}}}\exp \left[ {-\tau ^{-1}{\overline{{c}}}_{3}n \beta a_{1}\ell ({{\overline{Q}}},Q')}\right] d\pi (Q')\\&\quad \leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {{\overline{\Xi }}_{1}+n\beta a_{1}\left( {e_{5}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+e_{6}\ell ({{\overline{Q}}},P)+e_{7}\ell ({{\overline{Q}}},Q)}\right) }\right] \\&\quad \leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {{\overline{\Xi }}_{1}+n\beta a_{1}\left( {e_{5}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+(e_{6}+e_{7}){r}}\right) }\right] . \end{aligned}$$

Consequently,

$$\begin{aligned}&\left[ {\int _{{\mathscr {B}}^{2}}\exp \left[ {-{\textbf{L}}(P,Q)}\right] d\pi _{{\mathscr {B}}}(P)d\pi _{{\mathscr {B}}}(Q)}\right] ^{-1}\\&\quad \leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {{\overline{\Xi }}_{1}+n\beta a_{1}\left( {e_{5}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+(e_{6}+e_{7}){r}}\right) }\right] . \end{aligned}$$

We deduce from (84) that

$$\begin{aligned} {\mathbb {P}}({^\textsf{c}}{\!{A}}{})&\leqslant \frac{z}{\pi \left( {{\mathscr {B}}}\right) }\exp \left[ {\overline{\Xi }_{1}+n\beta a_{1}\left( {e_{5}\ell (\overline{P}^{\star },{{\overline{Q}}})+(e_{6}+e_{7}){r}}\right) }\right] . \end{aligned}$$

In particular, \({\mathbb {P}}({^\textsf{c}}{\!{A}}{})\leqslant e^{-\xi }\) for z satisfying

$$\begin{aligned} \log \left( {\frac{1}{z}}\right) =\xi +\log \frac{1}{\pi ({\mathscr {B}})}+\overline{\Xi }_{1}+n\beta a_{1}\left[ {e_{5}\ell (\overline{P}^{\star },{{\overline{Q}}})+(e_{6}+e_{7}){r}}\right] . \end{aligned}$$
(127)

Putting (122) and (124) together, we obtain that for all \(Q\in {\mathscr {B}}\)

$$\begin{aligned}&\exp \left[ {{\textbf{M}}(P,Q)}\right] \\&\quad =\int _{{\mathscr {M}}}{\mathbb {E}}\left[ {\exp \left[ {\beta \left( {{c}{\textbf{T}}({\varvec{X}},P,Q')-{c}_{1}{\textbf{T}}({\varvec{X}},P,Q)}\right) }\right] }\right] d\pi (Q')\\&\quad \leqslant \exp \left[ {n\beta a_{1}\left( {e_{3}\ell ({\overline{P}}^{\star },{{\overline{Q}}}) -\tau ^{-1} {\overline{{c}}}_{1}\ell ({{\overline{Q}}},P)+e_{4}\ell ({{\overline{Q}}},Q)}\right) }\right] \\&\quad \quad \times \int _{{\mathscr {M}}}\exp \left[ {-\tau ^{-1} {\overline{{c}}}_{2} n \beta a_{1}\ell ({{\overline{Q}}},Q')}\right] d\pi (Q')\\&\quad \leqslant \pi ({\mathscr {B}})\exp \left[ {{\overline{\Xi }}_{1}+n\beta a_{1}\left( {e_{3}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+e_{4}{r}-\tau ^{-1} {\overline{{c}}}_{1}\ell ({{\overline{Q}}},P)}\right) }\right] . \end{aligned}$$

We derive from Lemma 5 that

$$\begin{aligned}&{\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] \\&\quad \leqslant \frac{1}{\pi ({\mathscr {B}})}\left[ {\int _{{\mathscr {B}}}\exp \left[ {-{\textbf{M}}(P,Q)}\right] d\pi _{{\mathscr {B}}}(Q)}\right] ^{-1}\\&\quad \leqslant \exp \left[ {{\overline{\Xi }}_{1}+n\beta a_{1}\left( {e_{3}\ell (\overline{P}^{\star },{{\overline{Q}}})+e_{4}{r}-\tau ^{-1} {\overline{{c}}}_{1}\ell ({{\overline{Q}}},P)}\right) }\right] , \end{aligned}$$

and consequently,

$$\begin{aligned}&\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}{\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] d\pi (P)\nonumber \\&\quad \leqslant \exp \left[ {{\overline{\Xi }}_{1}+n\beta a_{1}\left( {e_{3}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+e_{4}{r}}\right) }\right] \nonumber \\&\quad \quad \times \int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}\exp \left[ {-\tau ^{-1} {\overline{{c}}}_{1}n\beta a_{1} \ell ({{\overline{Q}}},P)}\right] d\pi (P). \end{aligned}$$
(128)

Since, under our assumptions, \( {\overline{{c}}}_{1}>0\) and \(2\gamma <\tau ^{-1} {\overline{{c}}}_{1}\), we may apply Lemma 9 with \(\gamma _{0}=\tau ^{-1} {\overline{{c}}}_{1}\). Setting \(e_{8}=\tau ^{-1} {\overline{{c}}}_{1}-2\gamma \), this leads to

$$\begin{aligned}&\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}\exp \left[ {-\tau ^{-1} {\overline{{c}}}_{1} n\beta a_{1} \ell ({{\overline{Q}}},P)}\right] d\pi (P)\leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {{\overline{\Xi }}_{2}-e_{8}n\beta a_{1}2^{J}{r}}\right] \end{aligned}$$

with

$$\begin{aligned} {\overline{\Xi }}_{2}=-\gamma +\log \left[ {\frac{1}{1-\exp \left[ {-e_{8}}\right] }}\right] , \end{aligned}$$
(129)

which together with (128) leads to

$$\begin{aligned}&\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}{\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] d\pi (P)\nonumber \\&\quad \leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {{\overline{\Xi }}_{1}+{\overline{\Xi }}_{2}+n\beta a_{1}\left( {e_{3}\ell (\overline{P}^{\star },{{\overline{Q}}})+e_{4}{r}-e_{8}2^{J}{r}}\right) }\right] . \end{aligned}$$
(130)

Using the definition (127) of z, we deduce that

$$\begin{aligned}&\log \left[ {\frac{1}{z}\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}{\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] d\pi (P)}\right] \\&\quad \leqslant \log \left( {\frac{1}{z}}\right) +\log \pi ({\mathscr {B}})+{\overline{\Xi }}_{1}+{\overline{\Xi }}_{2} +n\beta a_{1}\left( {e_{3}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+e_{4}{r}-e_{8}2^{J}{r}}\right) \\&\quad =\xi +\log \frac{1}{\pi ({\mathscr {B}})}+{\overline{\Xi }}_{1}+n\beta a_{1}\left[ {e_{5}\ell ({\overline{P}} ^{\star },{{\overline{Q}}})+(e_{6}+e_{7}){r}}\right] \\&\quad \quad +\log \pi ({\mathscr {B}})+{\overline{\Xi }}_{1}+{\overline{\Xi }}_{2} +n\beta a_{1}\left( {e_{3}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+e_{4}{r}-e_{8}2^{J}{r}}\right) \\&\quad = \xi +2{\overline{\Xi }}_{1}+{\overline{\Xi }}_{2}+n\beta a_{1} \left[ {\left( {e_{3}+e_{5}}\right) \ell ({\overline{P}}^{\star },{{\overline{Q}}})+(e_{4}+e_{6}+e_{7}){r}}\right] \\&\quad \quad - e_{8}n\beta a_{1}2^{J}{r}. \end{aligned}$$

The right-hand side is not larger than \(-\xi \) provided that

$$\begin{aligned} 2^{J}&\geqslant \frac{1}{e_{8}}\left[ {\frac{2\xi +2\overline{\Xi }_{1}+{\overline{\Xi }}_{2}}{n\beta a_{1}{r}}+\left[ {\left( {e_{3}+e_{5}}\right) \frac{\ell (\overline{P}^{\star },{{\overline{Q}}})}{{r}}+e_{4}+e_{6}+e_{7}}\right] }\right] . \end{aligned}$$
(131)

Using the fact that \({r}_{n}(\beta ,{{\overline{Q}}})\geqslant 1/(n \beta a_{1})\), with the choice

$$\begin{aligned} {r}=\ell (\overline{P}^{\star },{{\overline{Q}}})+{r}_{n}(\beta ,{{\overline{Q}}})+\frac{2\xi }{n\beta a_{1}}\geqslant \ell ({\overline{P}}^{\star },{{\overline{Q}}})+ \frac{1+2\xi }{n \beta a_{1}}\geqslant \frac{1}{n\beta a_{1}}, \end{aligned}$$

the right-hand side of (131) satisfies

$$\begin{aligned}&\frac{1}{e_{8}}\left[ {\frac{2\xi +2{\overline{\Xi }}_{1}+{\overline{\Xi }}_{2}}{n\beta a_{1}{r}} +\left[ {\left( {e_{3}+e_{5}}\right) \frac{\ell ({\overline{P}}^{\star },{{\overline{Q}}})}{{r}}+e_{4}+e_{6}+e_{7}}\right] }\right] \\&\quad \leqslant \frac{1}{e_{8}}\left[ {2{\overline{\Xi }}_{1}+{\overline{\Xi }}_{2} +e_{4}+e_{6}+e_{7}+\frac{(e_{3}+e_{5})\vee 1}{{r}}\left( {\ell ({\overline{P}}^{\star },{{\overline{Q}}})+\frac{2\xi }{n\beta a_{1}}}\right) }\right] \\&\quad \leqslant \frac{2{\overline{\Xi }}_{1}+\overline{\Xi }_{2}+e_{4}+e_{6}+e_{7}+(e_{3}+e_{5})\vee 1}{e_{8}}. \end{aligned}$$

Inequality (131) holds for \(J\in {\mathbb {N}}\) such that

$$\begin{aligned} 2^{J}\geqslant \frac{2{\overline{\Xi }}_{1}+\overline{\Xi }_{2}+e_{4}+e_{6}+e_{7}+(e_{3}+e_{5})\vee 1}{e_{8}}\vee 1>2^{J-1}, \end{aligned}$$

and we may take

$$\begin{aligned} \kappa _{0}&= \tau \left[ {\frac{2\left[ {2{\overline{\Xi }}_{1}+{\overline{\Xi }}_{2}+e_{4}+e_{6}+e_{7}+(e_{3}+e_{5})\vee 1}\right] }{e_{8}}\vee 1+1}\right] \nonumber \\&\geqslant \tau \left( {2^{J}+1}\right) . \end{aligned}$$
(132)

In complement to the constants listed at the end of the proof of Theorem 1, we recall that

$$\begin{aligned} \Lambda _{1}=\tau \phi (\beta l_{1}),\quad \Lambda _{2}=\tau \phi (\beta l_{2}) \end{aligned}$$
$$\begin{aligned} {\overline{{c}}}_{1}&={c}_{0}-\tau \Lambda _{1}\beta \frac{a_{2}({c}^{2}+{c}_{1}^{2})}{a_{1}},&\overline{{c}}_{2}&={c}-\tau \Lambda _{1} \beta \frac{a_{2}{c}^{2}}{a_{1}},&\overline{{c}}_{3}&={c}_{2}-\tau \Lambda _{2} \beta \frac{a_{2}{c}_{2}^{2}}{a_{1}}, \end{aligned}$$
$$\begin{aligned} e_{3}&=e_{0}+2\Lambda _{1} \beta \frac{a_{2}\left( {{c}^{2}+{c}_{1}^{2}}\right) }{a_{1}},&e_{4}&=\frac{1}{a_{1}}\left[ {\tau {c}_{1}a_{0}+\Lambda _{1} \beta a_{2}{c}_{1}^{2}}\right] ,\\ e_{5}&=e_{1}+{c}_{2}+2\Lambda _{2} \beta \frac{a_{2}\left( {{c}_{2}^{2}+{c}_{1}^{2}}\right) }{a_{1}},&e_{6}&=\tau {c}_{0}'+\Lambda _{2} \beta \frac{a_{2}({c}_{2}^{2}+{c}_{1}^{2})}{a_{1}},\\ e_{7}&=\frac{1}{a_{1}}\left[ {\tau {c}_{1}a_{0}+\Lambda _{2} \beta a_{2}{c}_{1}^{2}}\right] ,&e_{8}&=\tau ^{-1} {\overline{{c}}}_{1}-2\gamma , \end{aligned}$$

and

$$\begin{aligned} {\overline{\Xi }}_{1}&=\log \left[ {1+\frac{\exp \left[ {-\left( {\tau ^{-1}( {\overline{{c}}}_{2}\wedge {\overline{{c}}}_{3})-\gamma }\right) }\right] }{1-\exp \left[ {-\left( {\tau ^{-1}( {\overline{{c}}}_{2}\wedge {\overline{{c}}}_{3})-2\gamma }\right) }\right] }}\right] ,\\ {\overline{\Xi }}_{2}&=-\gamma +\log \left[ {\frac{1}{1-\exp \left[ {-e_{8}}\right] }}\right] . \end{aligned}$$
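
These constants can be evaluated in the same way as those of Theorem 1. In the sketch below, the function \(\phi \) of Lemma 8 is not reproduced; the Bernstein-type expression used for it is a hypothetical stand-in, as are all the parameter values. The script checks the positivity of \({\overline{{c}}}_{1},{\overline{{c}}}_{2},{\overline{{c}}}_{3}\) and \(e_{8}\) under which Lemma 9 is applied in the proof.

```python
# Sanity check of the additional constants of Theorem 2; all parameter
# values are illustrative, and phi below is a hypothetical stand-in for
# the function phi of Lemma 8 (not reproduced in this section).
import math

a0, a1, a2, c, tau, gamma, beta = 1.0, 1.0, 1.0, 0.5, 1.1, 0.05, 0.01

c0, c1, c2 = 1 + c - c * a0 / a1, 1 + c, 2 + c
c0p = c2 * a0 / a1 - c1
l1, l2 = 1 + 2 * c, 3 + 2 * c
e0 = c0 + c + tau * c1 * a0 / a1
e1 = tau * (c0p + c1 * a0 / a1)

phi = lambda x: (math.exp(x) - 1 - x) / x**2   # assumed shape only
L1, L2 = tau * phi(beta * l1), tau * phi(beta * l2)

cb1 = c0 - tau * L1 * beta * a2 * (c**2 + c1**2) / a1
cb2 = c - tau * L1 * beta * a2 * c**2 / a1
cb3 = c2 - tau * L2 * beta * a2 * c2**2 / a1
e3 = e0 + 2 * L1 * beta * a2 * (c**2 + c1**2) / a1
e4 = (tau * c1 * a0 + L1 * beta * a2 * c1**2) / a1
e5 = e1 + c2 + 2 * L2 * beta * a2 * (c2**2 + c1**2) / a1
e6 = tau * c0p + L2 * beta * a2 * (c2**2 + c1**2) / a1
e7 = (tau * c1 * a0 + L2 * beta * a2 * c1**2) / a1
e8 = cb1 / tau - 2 * gamma

# conditions under which Lemma 9 is applied in the proof
assert min(cb1, cb2, cb3) > 0 and 2 * gamma < min(cb2, cb3) / tau and e8 > 0
print(f"cb1={cb1:.3f}  cb2={cb2:.3f}  cb3={cb3:.3f}  e8={e8:.3f}")
```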

9.5 Proof of Theorem 3

Let us take \({r}\geqslant {\varepsilon }\) and set \(\varpi =2\xi +1\) so that

$$\begin{aligned} \pi \left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}}\right) \leqslant \pi \left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},{\varepsilon })}\right) \leqslant e^{-\varpi }\pi \left( {{\mathscr {B}}({{\overline{Q}}},{\varepsilon })}\right) \leqslant e^{-\varpi }\pi \left( {{\mathscr {B}}}\right) . \end{aligned}$$

In order to prove the first part, let us go back to the proof of Theorem 1. Clearly,

$$\begin{aligned} \int _{{\mathscr {M}}}\exp \left[ {-\tau ^{-1}{c}n \beta a_{1}\ell ({{\overline{Q}}},Q')}\right] d\pi (Q')\leqslant 1=\pi \left( {{\mathscr {B}}}\right) +\pi \left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}}\right) \leqslant \pi \left( {{\mathscr {B}}}\right) (1+e^{-\varpi }) \end{aligned}$$

and similarly,

$$\begin{aligned} \int _{{\mathscr {M}}}\exp \left[ {-\tau ^{-1}{c}_{2}n \beta a_{1}\ell ({{\overline{Q}}},Q')}\right] d\pi (Q')\leqslant \pi \left( {{\mathscr {B}}}\right) (1+e^{-\varpi }). \end{aligned}$$

Inequalities (109) and (110) are therefore satisfied with \(\Xi _{1}=\log (1+e^{-1})\). Moreover,

$$\begin{aligned}&\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}\exp \left[ {-\tau ^{-1}{c}_{0} n\beta a_{1} \ell ({{\overline{Q}}},P)}\right] d\pi (P)\\&\quad \leqslant \exp \left[ {-\tau ^{-1}{c}_{0} n\beta a_{1}2^{J}{r}}\right] \pi \left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}}\right) \leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {-\varpi -\tau ^{-1}{c}_{0} n\beta a_{1}2^{J}{r}}\right] . \end{aligned}$$

We deduce from (114) that

$$\begin{aligned}&\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}{\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] d\pi (P)\\&\quad \leqslant \exp \left[ {\Xi _{1}+n\beta \left( { e_{0}a_{1}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+\tau {c}_{1}a_{0}{r}+\frac{l_{1}^{2} \beta }{8}}\right) }\right] \\&\qquad \times \pi \left( {{\mathscr {B}}}\right) \exp \left[ {-\varpi -\tau ^{-1}{c}_{0} n\beta a_{1}2^{J}{r}}\right] , \end{aligned}$$

and consequently,

$$\begin{aligned}&\log \int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}{\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] d\pi (P)\\&\quad \leqslant \log \pi \left( {{\mathscr {B}}}\right) + \Xi _{1}-\varpi \\&\qquad +n\beta \left[ {e_{0}a_{1}\ell (\overline{P}^{\star },{{\overline{Q}}})+\tau {c}_{1}a_{0}{r}+\frac{l_{1}^{2} \beta }{8}-\tau ^{-1}{c}_{0} a_{1}2^{J}{r}}\right] . \end{aligned}$$

Using the definitions (113) of z and (112) of \(\Delta _{2}\), we deduce that

$$\begin{aligned}&\log \left[ {\frac{1}{z}\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}{\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] d\pi (P)}\right] \\&\quad \leqslant \xi +\log \frac{1}{\pi ({\mathscr {B}})}+\Xi _{1}+n\beta \Delta _{2}+\log \pi \left( {{\mathscr {B}}}\right) + \Xi _{1}-\varpi \\&\quad \quad +n\beta \left[ {e_{0}a_{1}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+\tau {c}_{1}a_{0}{r}+\frac{l_{1}^{2} \beta }{8}-\tau ^{-1}{c}_{0}a_{1}2^{J}{r}}\right] \\&\quad =\xi +2\Xi _{1}+n\beta \left[ {\left( {e_{1}+{c}_{2}}\right) a_{1}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+e_{1}a_{1}{r}+\frac{l_{2}^{2}\beta }{8}}\right] -\varpi \\&\quad \quad +n\beta \left[ {e_{0}a_{1}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+\tau {c}_{1}a_{0}{r}+\frac{l_{1}^{2} \beta }{8}-\tau ^{-1}{c}_{0} a_{1}2^{J}{r}}\right] \\&\quad =\xi +2\Xi _{1}-\varpi \\&\quad \quad +n\beta a_{1}\left[ {C_{1}\ell (\overline{P}^{\star },{{\overline{Q}}})+C_{2}{r}+\frac{(l_{1}^{2}+l_{2}^{2}) \beta }{8 a_{1}}-\tau ^{-1}{c}_{0} 2^{J}{r}}\right] , \end{aligned}$$

where the constants \(C_{1}\) and \(C_{2}\) are the same as those defined in the proof of Theorem 1. If we choose \(r=\ell ({\overline{P}}^{\star },{{\overline{Q}}})\vee (\beta /a_{1})\vee {\varepsilon }\) and J such that \(\tau ^{-1}c_{0}2^{J}\geqslant C_{1}+C_{2}+(l_{1}^{2}+l_{2}^{2})/8\), we obtain that

$$\begin{aligned} \log \left[ {\frac{1}{z}\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}{\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] d\pi (P)}\right] \leqslant \xi +2\Xi _{1}-\varpi \leqslant -\xi \end{aligned}$$

since \(\varpi =2\xi +1\geqslant 2(\xi +\Xi _{1})\). We conclude as in the proof of Theorem 1.

In order to prove the second part of Theorem 3, we go back to the proof of Theorem 2. The arguments are similar. As before,

$$\begin{aligned} \int _{{\mathscr {M}}}\exp \left[ {-\tau ^{-1} {\overline{{c}}}_{2}n\beta a_{1}\ell ({{\overline{Q}}},Q')}\right] d\pi (Q')&\leqslant \pi \left( {{\mathscr {B}}}\right) (1+e^{-\varpi }) \end{aligned}$$

and

$$\begin{aligned} \int _{{\mathscr {M}}}\exp \left[ {-\tau ^{-1} {\overline{{c}}}_{3}n\beta a_{1}\ell ({{\overline{Q}}},Q')}\right] d\pi (Q')&\leqslant \pi \left( {{\mathscr {B}}}\right) (1+e^{-\varpi }). \end{aligned}$$

Inequalities (124) and (125) are therefore both satisfied with \({\overline{\Xi }}_{1}=\log (1+e^{-1})\). Moreover

$$\begin{aligned}&\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}\exp \left[ {-\tau ^{-1} {\overline{{c}}}_{1} n\beta a_{1} \ell ({{\overline{Q}}},P)}\right] d\pi (P)\\&\quad \leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {-\varpi -\tau ^{-1} {\overline{{c}}}_{1} n\beta a_{1}2^{J}{r}}\right] , \end{aligned}$$

and we deduce from (128) that

$$\begin{aligned}&\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}{\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] d\pi (P)\\&\quad \leqslant \exp \left[ {{\overline{\Xi }}_{1}+n\beta a_{1}\left( {e_{3}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+e_{4}{r}}\right) }\right] \\&\quad \quad \times \int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}\exp \left[ {-\tau ^{-1} {\overline{{c}}}_{1}n\beta a_{1} \ell ({{\overline{Q}}},P)}\right] d\pi (P)\\&\quad \leqslant \pi \left( {{\mathscr {B}}}\right) \exp \left[ {{\overline{\Xi }}_{1}+n\beta a_{1}\left[ {e_{3}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+e_{4}{r}-\tau ^{-1} {\overline{{c}}}_{1} 2^{J}{r}}\right] -\varpi }\right] . \end{aligned}$$

Using the definition (127) of z, we deduce that

$$\begin{aligned}&\log \left[ {\frac{1}{z}\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}{\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] d\pi (P)}\right] \\&\quad \leqslant \xi +\log \frac{1}{\pi ({\mathscr {B}})}+{\overline{\Xi }}_{1}+n\beta a_{1}\left[ {e_{5}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+(e_{6}+e_{7}){r}}\right] \\&\quad \quad +\log \pi \left( {{\mathscr {B}}}\right) + {\overline{\Xi }}_{1}+n\beta a_{1}\left[ {e_{3}\ell ({\overline{P}}^{\star },{{\overline{Q}}})+e_{4}{r}-\tau ^{-1} {\overline{{c}}}_{1} 2^{J}{r}}\right] -\varpi \\&\quad =\xi +2{\overline{\Xi }}_{1}-\varpi \\&\quad \quad +n\beta a_{1}\left[ {(e_{3}+e_{5})\ell (\overline{P}^{\star },{{\overline{Q}}})+(e_{4}+e_{6}+e_{7}){r}-\tau ^{-1} {\overline{{c}}}_{1} 2^{J}{r}}\right] .\end{aligned}$$

Taking \(r=\ell ({\overline{P}}^{\star },{{\overline{Q}}})\vee {\varepsilon }\geqslant {\varepsilon }\) and \(J\geqslant 0\) such that

$$\begin{aligned} \tau ^{-1} {\overline{{c}}}_{1} 2^{J}\geqslant e_{3}+e_{5}+ e_{4}+e_{6}+e_{7} \end{aligned}$$

we obtain that

$$\begin{aligned} \log \left[ {\frac{1}{z}\int _{{^\textsf{c}}{\!{{\mathscr {B}}}}{}({{\overline{Q}}},2^{J}{r})}{\mathbb {E}}\left[ {\exp \left[ {-\beta {\textbf{T}}({\varvec{X}},P)}\right] }\right] d\pi (P)}\right] \leqslant -\xi \end{aligned}$$

and we conclude as before.

10 Other proofs

10.1 Proof of Lemma 1

Let Y be a random variable with gamma distribution \(\gamma (s,1)\). Since \(\sigma Y\sim \gamma (s,\sigma )\), it is sufficient to prove the result for \(\sigma =1\). Using the inequality \(\log (1-x)\geqslant -x/(1-x)\) which holds for all \(x\in [0,1)\), we obtain that

$$\begin{aligned} \log {\mathbb {E}}\left[ {e^{\beta (Y-s)}}\right] =-s\left[ {\log (1-\beta )+\beta }\right] \leqslant \frac{s \beta ^{2}}{1-\beta }\quad \text {for all }\beta \in [0,1). \end{aligned}$$

Applying Lemma 8.2 in Birgé [10] with \(a=\sqrt{s}\) and \(b=1\), we obtain that

$$\begin{aligned} {\mathbb {P}}\left[ {Y\geqslant s+2\sqrt{s \xi }+\xi }\right] \leqslant e^{-\xi }\quad \text {for all } \xi \geqslant 0 \end{aligned}$$

which proves (44). Let us now turn to the lower bound. For \(x\geqslant 0\), let us set

$$\begin{aligned} g(x)=x-\log (1+x)\leqslant \left( {\frac{x^{2}}{2}}\right) \wedge x. \end{aligned}$$

For all \(t,u\geqslant 0\),

$$\begin{aligned} \int _{t+u}^{+\infty }x^{t}e^{-x}dx&=\int _{u}^{+\infty }(t+y)^{t}e^{-t-y}dy=t^{t}e^{-t}\int _{u}^{+\infty }e^{-tg(y/t)}dy\\&\geqslant t^{t}e^{-t}\left( {\int _{u}^{+\infty }e^{-y^{2}/(2t)}dy\vee \int _{u}^{+\infty }e^{-y}dy}\right) \\&=t^{t}e^{-t}\left[ {\left( {\sqrt{2\pi t}\;\overline{F}\left( {\frac{u}{\sqrt{t}}}\right) }\right) \vee e^{-u}}\right] , \end{aligned}$$

where \({\overline{F}}(z)={\mathbb {P}}\left[ {{\mathcal {N}}(0,1)\geqslant z}\right] \) for all \(z\in {\mathbb {R}}\). Using the following inequalities

$$\begin{aligned} t^{t-1/2}e^{-t}\sqrt{2\pi }\leqslant \Gamma (t)\leqslant t^{t-1/2}e^{-t}\sqrt{2\pi }\exp [1/(12t)], \end{aligned}$$
(133)

that can be found in Whittaker and Watson [25, p. 253], with \(t=s-1>0\), we deduce that

$$\begin{aligned} {\mathbb {P}}\left[ {Y\geqslant t+u}\right]&=\frac{1}{\Gamma (t+1)}\int _{t+u}^{+\infty }x^{t}e^{-x}dx =\frac{1}{t\Gamma (t)}\int _{t+u}^{+\infty }x^{t}e^{-x}dx\\&\geqslant \left[ {{\overline{F}}\left( {\frac{u}{\sqrt{t}}}\right) e^{-1/(12t)}}\right] \vee \left[ {\frac{e^{-u-1/(12t)}}{\sqrt{2\pi t}}}\right] . \end{aligned}$$

Using the fact that \({\overline{F}}({\overline{\Phi }}^{-1}(z))=e^{-z}\) for all \(z\geqslant 0\), we obtain that for the choice

$$\begin{aligned} u=\left[ {\sqrt{t}\;{\overline{\Phi }}^{-1}\left( {\xi -\frac{1}{12t}}\right) }\right] \vee \log \left( {\frac{e^{\xi -1/(12t)}}{\sqrt{2\pi t}}}\right) , \end{aligned}$$

which is nonnegative for \(\xi \geqslant \log 2+1/(12t)\), the quantity \({\mathbb {P}}\left[ {Y\geqslant t+u}\right] \) is at least \(e^{-\xi }\), which proves (45).
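
The two main steps of this proof, namely the moment bound on \(\log {\mathbb {E}}[e^{\beta (Y-s)}]\) and the deviation inequality (44), can be confirmed against the exact gamma distribution. A minimal numerical check, assuming SciPy is available:

```python
# Check of the moment bound and of the deviation inequality (44) against
# the exact gamma distribution (SciPy assumed available).
import numpy as np
from scipy.stats import gamma

for s in (0.5, 2.0, 10.0):
    # -s*(log(1-b)+b) <= s*b^2/(1-b) on a grid of b in (0,1)
    b = np.linspace(0.0, 0.99, 200)[1:]
    assert np.all(-s * (np.log1p(-b) + b) <= s * b**2 / (1 - b) + 1e-12)
    # (44): P[Y >= s + 2*sqrt(s*xi) + xi] <= exp(-xi) for Y ~ gamma(s, 1)
    for xi in (0.1, 1.0, 5.0):
        assert gamma.sf(s + 2 * np.sqrt(s * xi) + xi, a=s) <= np.exp(-xi)
print("moment and tail bounds verified on the test grid")
```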

10.2 Proof of Theorem 4

Throughout this proof, \(a_{0}=2\), \(a_{1}=3/16\), \(\beta =2\gamma =1/500\) and \(\kappa \) denotes a positive numerical constant that may vary from line to line. It follows from Corollary 4 that for n large enough, \(r_{n}(\beta ,P_{\theta ^{\star }})\leqslant r_{n}^{\star }=\kappa k/n\). Applying our Corollary 2 with \(\ell =h^{2}\) (and \(2\xi \) in place of \(\xi \)), we obtain that for n large enough, with a probability at least \(1-2e^{-\xi }\),

$$\begin{aligned} 1-e^{-\xi }\leqslant {\widehat{\nu }}_{{\varvec{X}}}^{h}\left( {\left\{ {{\varvec{\theta }}\in \Theta ,\; h^{2}({\varvec{\theta }},{\varvec{\theta }}^{\star })\leqslant r_{n}(\xi )}\right\} }\right) \text { with } r_{n}(\xi )=\frac{\kappa (k+\xi )}{n}. \end{aligned}$$

We know by Proposition 9 that under the assumptions of Corollary 4, Assumption 9-(i) is satisfied with \(s=2\), \(\left| {\cdot }\right| _{*}\) given by (81) and \({\varepsilon }=1/2\). This implies that for n large enough

$$\begin{aligned} \left\{ {{\varvec{\theta }}\in \Theta ,\; h^{2}({\varvec{\theta }},{\varvec{\theta }}^{\star })\leqslant r_{n}(\xi )}\right\} \subset \left\{ {{\varvec{\theta }}\in \Theta ,\; \left| {{\varvec{\theta }}-{\varvec{\theta }}^{\star }}\right| _{*}^{2}\leqslant 2r_{n}(\xi )}\right\} , \end{aligned}$$

which leads to (46).

10.3 Proof of Proposition 4

Let us denote by \(F_{\sigma }\) the distribution function of \(\nu _{\sigma }\). Throughout this proof, we fix some \(\theta ^{\star }\in [-\sigma t, \sigma t]\). Our aim is to prove that \(P_{\theta ^{\star }}\) belongs to \({\mathscr {M}}({\overline{\beta }})\).

Since the total variation distance is translation invariant, \(\left\| {P_{\theta }-P_{\theta ^{\star }}}\right\| = \left\| {P_{\theta -\theta ^{\star }}-P_{0}}\right\| =\left\| {P_{\theta ^{\star }-\theta }-P_{0}}\right\| \) and consequently,

$$\begin{aligned} \left\{ {\theta \in \Theta ,\; \left\| {P_{\theta }-P_{\theta ^{\star }}}\right\| \leqslant r}\right\} =\left\{ {\theta \in \Theta ,\; \left| {\theta ^{\star }-\theta }\right| \leqslant \varphi (r)}\right\} \quad \text {for all }r\in [0,1) \end{aligned}$$

while for \(r\geqslant 1\), \(\left\{ {\theta \in \Theta ,\; \left\| {P_{\theta }-P_{\theta ^{\star }}}\right\| \leqslant r}\right\} =\Theta ={\mathbb {R}}\).

We set \(r_{0}=\sup \{r>0,\; \varphi (r)\leqslant \sigma t\}\) and distinguish between two cases.

Case 1 Assume \(r_{0}\leqslant 1/4\). For all \(r<r_{0}\), \(\varphi (r)<\sigma t\), \(2r<1\), and since q is symmetric, positive and decreasing on \({\mathbb {R}}_{+}\),

$$\begin{aligned} \frac{\pi ({\mathscr {B}}(P_{\theta ^{\star }},2r))}{\pi ({\mathscr {B}}(P_{\theta ^{\star }},r))}&=\frac{\nu _{\sigma }\left( {\left\{ {\theta \in {\mathbb {R}},\; \left\| {P_{\theta }-P_{ \theta ^{\star }}}\right\| \leqslant 2r}\right\} }\right) }{\nu _{\sigma }\left( {\left\{ {\theta \in {\mathbb {R}},\; \left\| {P_{\theta }-P_{\theta ^{\star }}}\right\| \leqslant r}\right\} }\right) }\\&=\frac{\nu _{\sigma }\left( {\left\{ {\theta \in {\mathbb {R}},\;\left| {\theta -\theta ^{\star }}\right| \leqslant \varphi (2r)}\right\} }\right) }{\nu _{\sigma }\left( {\left\{ {\theta \in {\mathbb {R}},\; \left| {\theta -\theta ^{\star }}\right| \leqslant \varphi (r)}\right\} }\right) }\leqslant \frac{2q_{\sigma }(0)\varphi (2r)}{2q_{\sigma }(|\theta ^{\star }|+\varphi (r))\varphi (r)}\\&\leqslant \frac{q_{\sigma }(0)\varphi (2r)}{q_{\sigma }(|\theta ^{\star }|+\sigma t)\varphi (r)}\leqslant \frac{q_{\sigma }(0)\varphi (2r)}{q_{\sigma }(2\sigma t)\varphi (r)}\\&=\frac{q(0)\varphi (2r)}{q(2 t)\varphi (r)}\leqslant \frac{\overline{\Gamma }}{q(2 t)}. \end{aligned}$$

For all \(r_{0}<r<1\), \(|\theta ^{\star }|\leqslant \sigma t< \varphi (r)\), hence \(F_{\sigma }(|\theta ^{\star }|-\varphi (r))\leqslant F_{\sigma }(0)=1/2\) and \(F_{\sigma }(|\theta ^{\star }|+\varphi (r))\geqslant F_{\sigma }(\varphi (r))\geqslant F_{\sigma }(\sigma t)= F_{1}(t)\geqslant 3/4\) under our assumption on t. Consequently,

$$\begin{aligned} \frac{\pi ({\mathscr {B}}(P_{\theta ^{\star }},2r))}{\pi ({\mathscr {B}}(P_{\theta ^{\star }},r))}&\leqslant \frac{1}{\nu _{\sigma }\left( {\left\{ {\theta \in {\mathbb {R}},\; \left| {\theta -\theta ^{\star }}\right| \leqslant \varphi (r)}\right\} }\right) }\\&=\frac{1}{F_{\sigma }\left( {|\theta ^{\star }|+\varphi (r)}\right) -F_{\sigma }(|\theta ^{\star }|-\varphi (r))}\\&\leqslant \frac{1}{3/4-1/2}= 4. \end{aligned}$$

Note that the result also holds for \(r=r_{0}\) by letting r decrease to \(r_{0}\).

Case 2 Assume that \(r_{0}>1/4\). Then \(\varphi (1/4)\leqslant \sigma t\) and arguing as before, we obtain that for all \(r\leqslant 1/4<r_{0}\),

$$\begin{aligned} \frac{\pi ({\mathscr {B}}(P_{\theta ^{\star }},2r))}{\pi ({\mathscr {B}}(P_{\theta ^{\star }},r))}&\leqslant \frac{2q_{\sigma }(0)\varphi (2r)}{2q_{\sigma }(|\theta ^{\star }| +\varphi (r))\varphi (r)}\leqslant \frac{q_{\sigma }(0)\varphi (2r)}{q_{\sigma }(|\theta ^{\star }|+\varphi (1/4))\varphi (r)}\\&\leqslant \frac{q_{\sigma }(0)\varphi (2r)}{q_{\sigma }(2\sigma t)\varphi (r)}\leqslant \frac{{\overline{\Gamma }} }{q(2 t)}. \end{aligned}$$

For all \(r\in (1/4,1)\), \(\varphi (r)\geqslant \varphi (1/4)\) and

$$\begin{aligned} \frac{\pi ({\mathscr {B}}(P_{\theta ^{\star }},2r))}{\pi ({\mathscr {B}}(P_{\theta ^{\star }},r))}&\leqslant \frac{1}{\nu _{\sigma }\left( {\left\{ {\theta \in {\mathbb {R}},\; \left| {\theta -\theta ^{\star }}\right| \leqslant \varphi (r)}\right\} }\right) }\\&\leqslant \frac{1}{\nu _{\sigma }\left( {\left\{ {\theta \in {\mathbb {R}},\; \left| {\theta -\theta ^{\star }}\right| \leqslant \varphi (1/4)}\right\} }\right) }\\&\leqslant \frac{1}{2q_{\sigma }(|\theta ^{\star }|+\varphi (1/4))\varphi (1/4)}\\&\leqslant \frac{1}{2q_{\sigma }(2\sigma t)\varphi (1/4)}\leqslant \frac{\overline{\Gamma }\sigma }{q(2t)}. \end{aligned}$$

We obtain that in any case, for all \(r\in (0,1)\) and \(\theta ^{\star }\in [-\sigma t,\sigma t]\),

$$\begin{aligned} \log \left( {\frac{\pi ({\mathscr {B}}(P_{\theta ^{\star }},2r))}{\pi ({\mathscr {B}}(P_{\theta ^{\star }},r))}}\right) \leqslant \max \left\{ {\log \left( {\frac{{\overline{\Gamma }} \left( {\sigma \vee 1}\right) }{q(2 t)}}\right) ,\log 4}\right\} . \end{aligned}$$
(134)

The inequality is also clearly true for \(r\geqslant 1\) since then \(\pi ({\mathscr {B}}(P_{\theta ^{\star }},2r))=\pi ({\mathscr {B}}(P_{\theta ^{\star }},r))=1\). Hence, for all \(r\geqslant a_{1}^{-1}\beta \)

$$\begin{aligned} \frac{1}{n \gamma a _{1} r}\log \left( {\frac{\pi ({\mathscr {B}}(P_{\theta ^{\star }},2r))}{\pi ({\mathscr {B}}(P_{\theta ^{\star }},r))}}\right)&\leqslant \frac{1}{n \gamma \beta }\sup _{r>0}\log \left( {\frac{\pi ({\mathscr {B}}(P_{\theta ^{\star }},2r))}{\pi ({\mathscr {B}}(P_{\theta ^{\star }},r))}}\right) \\&\leqslant \frac{1}{n \gamma \beta }\max \left\{ {\log \left( {\frac{{\overline{\Gamma }} \left( {\sigma \vee 1}\right) }{q(2 t)}}\right) ,\log 4}\right\} . \end{aligned}$$

The right-hand side is not larger than \(\beta \) provided that \(\beta \) satisfies (50), and the resulting lower bound on \(\beta \) is not smaller than \(1/\sqrt{n}\) since \(\gamma \leqslant 1\). We conclude by using (15).

10.4 Proof of Proposition 5

Under our assumption on q, Assumption 6 is satisfied and

$$\begin{aligned} {\overline{\Gamma }}=2^{1/s}\max \left\{ {q(0),2^{(1/s)-1}}\right\} . \end{aligned}$$

Let \(t=(|\theta |/\sigma )\vee t_{0}\). Then, \(\theta \in [-\sigma t,\sigma t]\), \(\nu _{1}([t,+\infty ))\leqslant 1/4\) and inequality (134) holds true. We deduce from (11) that

$$\begin{aligned} r_{n}(\beta ,P_{\theta })\leqslant \frac{1}{\gamma n a_{1}\beta }\max \left\{ {\log \left( {\frac{{\overline{\Gamma }} \left( {\sigma \vee 1}\right) }{q(2 t)}}\right) ,\log 4}\right\} \end{aligned}$$

and the result follows from our specific choices of \(a_{1},\gamma \) and \(\beta \).

10.5 Proof of Corollary 3

We set for short \(\Theta =\Theta [\eta ,\delta ]\) with the parameters \(\eta \) and \(\delta \) defined by (60) and (61) respectively and also define

$$\begin{aligned} J_{n}=\exp \left[ {\frac{(K^{2}-1)\gamma \tau ^{4}a_{1}^{2} n\eta _{n}^{2}}{2(k+1)}}\right] \end{aligned}$$
(135)

so that \({\mathscr {M}}_{n}(K)\) contains the elements \(P=P_{(p,{\textbf{m}},\sigma )}\) of \({\mathscr {M}}\) such that

$$\begin{aligned} |\log \sigma |\vee \left| {\frac{{\textbf{m}}}{\sigma }}\right| _{\infty }\leqslant \log (1+\delta )J_{n}. \end{aligned}$$

Hereafter we fix \(P=P_{(p,{\textbf{m}},\sigma )}\in {\mathscr {M}}_{n}(K)\). There exists \(\theta =\theta (P)=({{\overline{Q}}},{\overline{{\textbf{m}}}},{\overline{\sigma }})\in \Theta \) with \({\overline{\sigma }}=(1+\delta )^{j_{0}}\), \({\overline{{\textbf{m}}}}=\overline{\sigma }\delta {\textbf{j}}\), \((j_{0},{\textbf{j}})\in {\mathbb {Z}}\times {\mathbb {Z}}^{k}\) such that

$$\begin{aligned} \frac{{\overline{\sigma }}}{(1+\delta )}\leqslant \sigma<{\overline{\sigma }}\quad \text {and}\quad {\overline{m}}_{i}=j_{i}{\overline{\sigma }}\delta \leqslant m_{i}<{\overline{m}}_{i}+{\overline{\sigma }}\delta , \end{aligned}$$
(136)

for all \(i\in \{1,\ldots ,k\}\). Consequently,

$$\begin{aligned} 0\leqslant \left( {1-\frac{\sigma }{{\overline{\sigma }}}}\right) \leqslant \frac{\delta }{1+\delta }< \delta \quad \text {and}\quad \left| {\frac{{\textbf{m}}-{\overline{{\textbf{m}}}}}{{\overline{\sigma }}}}\right| _{\infty }\leqslant \delta , \end{aligned}$$
(137)

and we infer from (56) and (57) and the fact that the total variation loss is translation and scale invariant that \(P_{\theta }\) satisfies

$$\begin{aligned} \ell \left( {P_{(p,{\textbf{m}},\sigma )},P_{\theta }}\right)&\leqslant \ell \left( {P_{(p,{\textbf{m}},\sigma )},P_{({{\overline{Q}}},{\textbf{m}},\sigma )}}\right) +\ell \left( {P_{({{\overline{Q}}},{\textbf{m}},\sigma )},P_{({{\overline{Q}}},{\overline{{\textbf{m}}}}, {\overline{\sigma }})}}\right) \\&\leqslant \ell \left( {P_{(p,\varvec{0},1)},P_{({{\overline{Q}}},\varvec{0},1)}}\right) +\ell \left( {P_{({{\overline{Q}}},\varvec{0},1)},P_{({{\overline{Q}}},\frac{{\overline{{\textbf{m}}}} -{\textbf{m}}}{\sigma },\frac{{\overline{\sigma }}}{\sigma })}}\right) \\&\leqslant \eta +\left[ {A\left( {\left| {\frac{{\textbf{m}}-{\overline{{\textbf{m}}}}}{{\overline{\sigma }}}}\right| _{\infty }^{s} +\left( {1-\frac{\sigma }{{\overline{\sigma }}}}\right) ^{s}}\right) }\right] \wedge 1\\&\leqslant \eta +2A\delta ^{s}=2\eta . \end{aligned}$$
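
The rounding of \((\sigma ,{\textbf{m}})\) onto the grid point \(({\overline{\sigma }},{\overline{{\textbf{m}}}})\) is elementary to implement. A minimal Python sketch, with illustrative inputs \(\delta =0.05\), \(\sigma =2.7\) and \({\textbf{m}}=(1.3,-0.4)\) (demonstration values of our own, not the quantities defined by (60) and (61)), verifies (136) and (137):

```python
# Illustration of the rounding scheme (136)-(137); delta, sigma and m are
# arbitrary demonstration values, not the quantities defined by (60)-(61).
import math

delta, sigma, m = 0.05, 2.7, [1.3, -0.4]

# smallest j0 with (1+delta)^{j0} > sigma,
# so that sigma_bar/(1+delta) <= sigma < sigma_bar
j0 = math.floor(math.log(sigma) / math.log(1 + delta)) + 1
sigma_bar = (1 + delta) ** j0
assert sigma_bar / (1 + delta) <= sigma < sigma_bar        # (136)
assert 0 <= 1 - sigma / sigma_bar < delta                  # (137)

j = [math.floor(mi / (sigma_bar * delta)) for mi in m]
m_bar = [ji * sigma_bar * delta for ji in j]
for mi, mbi in zip(m, m_bar):
    assert mbi <= mi < mbi + sigma_bar * delta             # (136)
    assert abs(mi - mbi) / sigma_bar <= delta              # (137)
print(f"j0={j0}, sigma_bar={sigma_bar:.3f}, j={j}")
```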

Besides, the parameters \((j_{0},{\textbf{j}})\in {\mathbb {Z}}\times {\mathbb {Z}}^{k}\) can be controlled in the following way. Using that \(\sigma \leqslant \overline{\sigma }\), the inequality \(\log (1+\delta )\leqslant \delta \) and (137), we obtain that for all \(i\in \{1,\ldots ,k\}\),

$$\begin{aligned} \left| {j_{i}}\right|&=\left| {\frac{{\overline{m}}_{i}}{\overline{\sigma }\delta }}\right| =\frac{1}{{\overline{\sigma }}\delta }\left| {\overline{m}_{i}-m_{i}+m_{i}}\right| \leqslant \frac{1}{{\overline{\sigma }}\delta }\left[ {\overline{\sigma }\delta + \sigma \left| {\frac{m_{i}}{\sigma }}\right| }\right] \leqslant 1+\frac{1}{\log (1+\delta )}\left| {\frac{m_{i}}{\sigma }}\right| . \end{aligned}$$

Besides,

$$\begin{aligned} j_{0}&=\frac{\log {\overline{\sigma }}}{\log (1+\delta )}=\frac{1}{\log (1+\delta )} \left[ {-\log \left( {1+\frac{\sigma }{{\overline{\sigma }}}-1}\right) +\log \sigma }\right] \\&\leqslant \frac{1}{\log (1+\delta )}\left[ {-\log \left( {1-\frac{\delta }{1+\delta }}\right) +|\log \sigma |}\right] \\&=\frac{1}{\log (1+\delta )}\left[ {\log \left( {1+\delta }\right) +|\log \sigma |}\right] \leqslant 1+\frac{|\log \sigma |}{\log (1+\delta )} \end{aligned}$$

and using the inequality \(\log (1+2x)\leqslant 2\log (1+x)\), which holds for all \(x\geqslant 0\), we obtain that

$$\begin{aligned} j_{0}&\geqslant \frac{\log \sigma }{\log (1+\delta )}\geqslant -\frac{|\log \sigma |}{\log (1+\delta )}\geqslant -\left[ {1+\frac{|\log \sigma |}{\log (1+\delta )}}\right] . \end{aligned}$$

Putting these inequalities together and using the fact that \(P\in {\mathscr {M}}_{n}(K)\), we get

$$\begin{aligned} \left| {(j_{0},{\textbf{j}})}\right| _{\infty }\leqslant 1+\frac{1}{\log (1+\delta )}\left[ {|\log \sigma |\vee \left| {\frac{{\textbf{m}}}{\sigma }}\right| _{\infty }}\right] \leqslant 1+J_{n}. \end{aligned}$$
(138)

For all \(r>0\), \(e^{-L_{\theta }}\leqslant \pi \left( {{\mathscr {B}}(P_{\theta },r)}\right) \leqslant 1\) and these two inequalities together with the definition (60) of \(\eta \) and Assumption 7 imply that for all \(r>0\)

$$\begin{aligned} \frac{\pi \left( {{\mathscr {B}}(P_{\theta },2r)}\right) }{\pi \left( {{\mathscr {B}}(P_{\theta },r)}\right) }&\leqslant \exp \left[ {L_{\theta }}\right] \leqslant \exp \left[ {{{\widetilde{D}}}(\eta )+2\sum _{i=0}^{k}\left[ {\frac{L}{2}+\log (1+|j_{i}|)}\right] }\right] \\&\leqslant \exp \left[ {\gamma \tau ^{4}a_{1}^{2} n\eta ^{2}+(k+1)\left[ {L+2\log (1+|(j_{0},{\textbf{j}})|_{\infty })}\right] }\right] . \end{aligned}$$

Using (138), the definition (135) of \(J_{n}\) and the fact that \(\log (2+x)\leqslant \log 3+\log x\) for all \(x\geqslant 1\), we derive that

$$\begin{aligned} \frac{\pi \left( {{\mathscr {B}}(P_{\theta },2r)}\right) }{\pi \left( {{\mathscr {B}}(P_{\theta },r)}\right) }&\leqslant \exp \left[ {\gamma \tau ^{4}a_{1}^{2} n\eta ^{2}+(k+1)L+2(k+1)\log (2+J_{n})}\right] \\&\leqslant \exp \left[ {K^{2}\gamma \tau ^{4}a_{1}^{2} n\eta ^{2}+(k+1)\left( {L+\log 9}\right) }\right] \end{aligned}$$

and since \(\gamma =1/6\leqslant L'=L+\log 9<3.1\),

$$\begin{aligned} \frac{1}{n\beta a_{1}}\leqslant r_{n}(\beta ,P_{\theta })&\leqslant \frac{1}{\gamma n\beta a_{1}}\left[ {K^{2}\gamma \tau ^{4}a_{1}^{2} n\eta ^{2}+(k+1)L'}\right] \\&=\frac{1}{a_{1}\beta }\left[ {K^{2}\tau ^{4}a_{1}^{2} \eta ^{2}+\frac{(k+1)L'}{\gamma n}}\right] . \end{aligned}$$

For the choice of \(\beta =\beta _{n}\) given by (62),

$$\begin{aligned} \beta \geqslant \sqrt{K^{2}\tau ^{4}a_{1}^{2} \eta ^{2}+\frac{(k+1)L'}{\gamma n}}\geqslant \sqrt{\frac{k+1}{n}}\vee \frac{K\eta }{2} \end{aligned}$$

hence, \(r_{n}(\beta ,P_{\theta })\leqslant a_{1}^{-1}\beta \) and \(P_{\theta }\in {\mathscr {M}}(\beta )\). This implies that

$$\begin{aligned} \inf _{P'\in {\mathscr {M}}(\beta )}\ell ({\overline{P}}^{\star },P')+a_{1}^{-1}\beta&\leqslant \ell ({\overline{P}}^{\star },P_{\theta })+a_{1}^{-1}\beta \\&\leqslant \ell ({\overline{P}}^{\star },P)+\ell (P,P_{\theta })+a_{1}^{-1}\beta \\&\leqslant \ell ({\overline{P}}^{\star },P)+ 2\eta +\left[ {K\tau ^{2}\eta +\frac{1}{a_{1}}\sqrt{\frac{(k+1)L'}{\gamma n}}}\right] , \end{aligned}$$

and the result follows by applying Corollary 1 and by using the fact that P is arbitrary in \({\mathscr {M}}_{n}(K)\).

10.6 Proof of Lemma 2

For all \(p\in {\mathcal {M}}_{0}\), \(\sigma \geqslant 1\) and \({\textbf{m}}\in {\mathbb {R}}^{k}\), the supports of the functions \({\varvec{x}}\mapsto p({\varvec{x}}/\sigma )\) and \({\varvec{x}}\mapsto p(({\varvec{x}}-{\textbf{m}})/\sigma )\) are included in the set \({\mathcal {K}}=[0,\sigma ]^{k}\cup \{{\textbf{m}}+{\varvec{x}},\; {\varvec{x}}\in [0,\sigma ]^{k}\}\), the Lebesgue measure of which is not larger than \(2\sigma ^{k}\). Consequently, using (66), we deduce that for all \(p\in {\mathcal {M}}_{0}\), \(\sigma \geqslant 1\) and \({\textbf{m}}\in {\mathbb {R}}^{k}\),

$$\begin{aligned}&\left\| {P_{(p,\varvec{0},1)}-P_{(p,{\textbf{m}},\sigma )}}\right\| \\&\quad \leqslant \left\| {P_{(p,\varvec{0},1)}-P_{(p,\varvec{0}, \sigma )}}\right\| +\left\| {P_{(p,\varvec{0},\sigma )}-P_{(p,{\textbf{m}},\sigma )}}\right\| \\&\quad =\frac{1}{2}\int _{{\mathbb {R}}^{k}}\left| {p({\varvec{x}})-\frac{1}{\sigma ^{k}}p\left( {\frac{{\varvec{x}}}{\sigma }}\right) }\right| d{\varvec{x}}+\frac{1}{2\sigma ^{k}}\int _{{\mathbb {R}}^{k}} \left| {p\left( {\frac{{\varvec{x}}}{\sigma }}\right) -p\left( {\frac{{\varvec{x}}-{\textbf{m}}}{\sigma }}\right) }\right| d{\varvec{x}}\\&\quad \leqslant \frac{1}{2}\int _{{\mathbb {R}}^{k}}\left| {p({\varvec{x}}) -\frac{1}{\sigma ^{k}}p\left( {{\varvec{x}}}\right) }\right| d{\varvec{x}}+\frac{1}{2\sigma ^{k}}\int _{{\mathbb {R}}^{k}}\left| {p({\varvec{x}})-p\left( {\frac{{\varvec{x}}}{\sigma }}\right) }\right| d{\varvec{x}}\\&\quad \quad +\frac{1}{2\sigma ^{k}}\int _{{\mathbb {R}}^{k}}\left| {p\left( {\frac{{\varvec{x}}}{\sigma }}\right) -p\left( {\frac{{\varvec{x}}-{\textbf{m}}}{\sigma }}\right) }\right| d{\varvec{x}}\\&\quad \leqslant \frac{1}{2}\int _{{\mathbb {R}}^{k}}\left| {p({\varvec{x}}) -\frac{1}{\sigma ^{k}}p\left( {{\varvec{x}}}\right) }\right| d{\varvec{x}}+\frac{1}{2\sigma ^{k}}\int _{[0,1]^{k}}\left| {p({\varvec{x}})-p\left( {\frac{{\varvec{x}}}{\sigma }}\right) }\right| d{\varvec{x}}\\&\quad \quad +\frac{1}{2\sigma ^{k}}\int _{[0,\sigma ]^{k}{\setminus }[0,1]^{k}}\left| {p\left( {\frac{{\varvec{x}}}{\sigma }}\right) }\right| d{\varvec{x}}+\frac{1}{2\sigma ^{k}}\int _{{\mathcal {K}}}\left| {p\left( {\frac{{\varvec{x}}}{\sigma }}\right) -p\left( {\frac{{\varvec{x}}-{\textbf{m}}}{\sigma }}\right) }\right| d{\varvec{x}}\\&\quad \leqslant \frac{1}{2}\left( {1-\frac{1}{\sigma ^{k}}}\right) +\frac{1}{2\sigma ^{k}}\int _{[0,1]^{k}}L_{1}\left( {1-\frac{1}{\sigma }}\right) ^{s}\left| {{\varvec{x}}}\right| ^{s}d{\varvec{x}}\\&\quad \quad + \frac{1}{2}\int _{[0,1]^{k}{\setminus }[0,1/\sigma ]^{k}}\left| {p({\varvec{x}})}\right| d{\varvec{x}}+\frac{L_{1}}{2\sigma ^{k}}\int _{{\mathcal {K}}}\left| {\frac{{\textbf{m}}}{\sigma }}\right| ^{s}d{\varvec{x}}\\&\quad \leqslant \frac{1}{2}\left( {1-\frac{1}{\sigma ^{k}}}\right) +\frac{L_{1} k^{s/2}}{2\sigma ^{k}}\left( {1-\frac{1}{\sigma }}\right) ^{s}+\frac{L_{0}}{2}\left( {1-\frac{1}{\sigma ^{k}}}\right) +L_{1}\left| {\frac{{\textbf{m}}}{\sigma }}\right| ^{s}\\&\quad \leqslant \frac{1}{2}\left[ {1+L_{1}k^{s/2}+L_{0}}\right] \left( {1-\frac{1}{\sigma }}\right) ^{s}+L_{1}\left| {\frac{{\textbf{m}}}{\sigma }}\right| ^{s} \end{aligned}$$

and (57) is therefore satisfied with \(A=L_{1}\vee [(1+L_{1}k^{s/2}+L_{0})/2]\).

10.7 Proof of Lemma 3

By the change of variables \(u=x-m\) in (68) if necessary, we may assume with no loss of generality that \(m>0\). Then, since p is nonincreasing on \((0,+\infty )\) and vanishes elsewhere, \(p(x-m)\geqslant p(x)\) for all \(x\geqslant m\) and \(p(x)\geqslant p(x-m)=0\) for all \(x\in (0,m)\). Consequently,

$$\begin{aligned} \int _{{\mathbb {R}}}\left| {p(x)-p(x-m)}\right| dx&=\int _{0}^{m}p(x)dx+\int _{m}^{+\infty }\left[ {p(x-m)-p(x)}\right] dx\\&=2\int _{0}^{m}p(x)dx+\int _{m}^{+\infty }p(x-m)dx-\int _{0}^{+\infty }p(x)dx\\&\leqslant 2mB+1-1, \end{aligned}$$

and we obtain (68).

Since \(\sigma \geqslant 1\), \(p(x/\sigma )\geqslant p(x)\) and \(p(x)/\sigma \leqslant p(x)\) for all \(x>0\). Hence,

$$\begin{aligned}&\int _{{\mathbb {R}}}\left| {\frac{1}{\sigma }p\left( {\frac{x}{\sigma }}\right) -p(x)}\right| dx\\&\quad \leqslant \int _{{\mathbb {R}}}\left| {\frac{1}{\sigma }p\left( {\frac{x}{\sigma }}\right) -\frac{1}{\sigma }p(x)}\right| dx+\int _{{\mathbb {R}}}\left| {\frac{1}{\sigma }p\left( {x}\right) -p(x)}\right| dx\\&\quad =\frac{1}{\sigma }\int _{{\mathbb {R}}}\left( {p\left( {\frac{x}{\sigma }}\right) -p(x)}\right) dx+\int _{{\mathbb {R}}}\left( {p(x)-\frac{1}{\sigma }p\left( {x}\right) }\right) dx\\&\quad =2\left( {1-\frac{1}{\sigma }}\right) , \end{aligned}$$

which leads to (67).

Finally, by combining (68) and (67) we deduce that for all \(m\in {\mathbb {R}}\) and \(\sigma \geqslant 1\)

$$\begin{aligned}&\frac{1}{2}\int _{{\mathbb {R}}}\left| {\frac{1}{\sigma }p\left( {\frac{x-m}{\sigma }}\right) -p(x)}\right| dx\\&\quad \leqslant \frac{1}{2}\int _{{\mathbb {R}}}\left| {\frac{1}{\sigma }p\left( {\frac{x-m}{\sigma }}\right) -\frac{1}{\sigma }p\left( {\frac{x}{\sigma }}\right) }\right| dx+\frac{1}{2}\int _{{\mathbb {R}}}\left| {\frac{1}{\sigma }p\left( {\frac{x}{\sigma }}\right) -p(x)}\right| dx\\&\quad =\frac{1}{2}\int _{{\mathbb {R}}}\left| {p\left( {u-\frac{m}{\sigma }}\right) -p(u)}\right| du+\frac{1}{2}\int _{{\mathbb {R}}}\left| {\frac{1}{\sigma }p\left( {\frac{x}{\sigma }}\right) -p(x)}\right| dx\\&\quad \leqslant B\left| {\frac{m}{\sigma }}\right| +\left( {1-\frac{1}{\sigma }}\right) \end{aligned}$$

which yields (69).
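
A concrete density makes the bounds (67)-(69) easy to visualise: the unit exponential density is nonincreasing on \((0,+\infty )\), vanishes elsewhere and is bounded by \(B=1\). A short numerical check (the choice of density, grid and tolerance are ours):

```python
# Numerical illustration of (67)-(69) for the unit exponential density,
# which is nonincreasing on (0, +inf), vanishes elsewhere and is bounded
# by B = 1.
import numpy as np

x = np.linspace(0.0, 60.0, 600_001)
dx = x[1] - x[0]

def p(u):
    return np.where(u > 0, np.exp(-np.maximum(u, 0.0)), 0.0)

def tv(f, g):  # (1/2) * integral of |f - g|, by a Riemann sum
    return 0.5 * float(np.sum(np.abs(f - g)) * dx)

B = 1.0
for m, s in [(0.3, 1.0), (0.0, 2.5), (0.7, 4.0)]:
    d = tv(p(x), p((x - m) / s) / s)
    bound = B * abs(m / s) + (1 - 1 / s)   # right-hand side of (69)
    assert d <= bound + 1e-3
    print(f"m={m}, sigma={s}: TV={d:.4f} <= {bound:.4f}")
```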

10.8 Proof of Proposition 6

This proposition is a consequence of Corollary 2. Let us first check that the assumptions of this corollary are satisfied. For all \(S\in {\mathscr {P}}\), the mapping \({\varvec{\theta }}\mapsto h(S,P_{\varvec{\theta }})\) is continuous because of (71). It is therefore measurable and it follows from the definition of the \(\sigma \)-algebra \({\mathcal {A}}\) that Assumption 1 is satisfied. Since the mapping \((x,{\varvec{\theta }})\mapsto p(x,{\varvec{\theta }})\) is measurable, so are the mappings

$$\begin{aligned} \begin{array}{l|rcl} {\textbf{p}}: &{} E\times {\mathbb {R}}^{k}\times {\mathbb {R}}^{k} &{} \longrightarrow &{} {\mathbb {R}}_{+}^{2} \\ &{} (x,\varvec{\theta },\varvec{\theta }') &{} \longmapsto &{} (p_{\varvec{\theta }}(x),p_{\varvec{\theta }'}(x)) \end{array} \end{aligned}$$

and \( (x,\varvec{\theta },\varvec{\theta }')\mapsto \psi \left( {\sqrt{p_{\varvec{\theta }'}(x)/p_{\varvec{\theta }}(x)}}\right) \), since \(\psi \) is measurable. We deduce that \((x,P,P')\mapsto t_{(P,P')}(x)\) is measurable on \((E\times {\mathscr {M}}\times {\mathscr {M}},{\mathcal {E}}\otimes {\mathcal {A}}\otimes {\mathcal {A}})\), which proves that Assumption 3-(i) holds true. The requirements of Corollary 2 are therefore satisfied and we may apply it. In order to evaluate the quantity \(r_{n}(\beta ,P_{{\varvec{\theta }}})\) for \({\varvec{\theta }}\in {\mathbb {R}}^{k}\), we use the following lemma, the proof of which is postponed to Sect. 10.9.

Lemma 10

Let \(\varvec{\theta }\in [-R,R]^{k}\). For all \(m\subset \{1,\ldots ,k\}\) and \(r>0\)

$$\begin{aligned}&\nu _{m}\left( {\left\{ {\varvec{\theta }'\in {\mathbb {R}}^{k},\, \left| {\varvec{\theta }'-\varvec{\theta }}\right| _{\infty }\leqslant r}\right\} }\right) \\&\quad = {\left\{ \begin{array}{ll} \displaystyle {\frac{1}{2^{|m|}}\prod _{i\in m}\left[ {\left( {1-\frac{|\theta _{i}|}{R}}\right) \wedge \frac{r}{R}+\left( {1+\frac{|\theta _{i}|}{R}}\right) \wedge \frac{r}{R}}\right] } &{} \text {if }\left| {\theta _{i}}\right| \leqslant r\text { for all }i\not \in m \\ 0 &{} \text {otherwise,} \end{array}\right. } \end{aligned}$$

with the convention \(\prod _{{\varnothing }}=1\). In particular, if \(\varvec{\theta }\in \Theta _{m}(R)\),

$$\begin{aligned} \nu _{m}\left( {\left\{ {\varvec{\theta }'\in {\mathbb {R}}^{k},\, \left| {\varvec{\theta }'-\varvec{\theta }}\right| _{\infty }\leqslant r}\right\} }\right) \geqslant \frac{1}{2^{|m|}}\left( {\frac{r}{R}\wedge 1}\right) ^{|m|} \end{aligned}$$
(139)

and for all \(K>1\)

$$\begin{aligned} \frac{\nu _{m}\left( {\left\{ {\varvec{\theta }'\in {\mathbb {R}}^{k},\, \left| {\varvec{\theta }'-\varvec{\theta }}\right| _{\infty }\leqslant Kr}\right\} }\right) }{\nu _{m}\left( {\left\{ {\varvec{\theta }'\in {\mathbb {R}}^{k},\, \left| {\varvec{\theta }'-\varvec{\theta }}\right| _{\infty }\leqslant r}\right\} }\right) }\leqslant K^{|m|}. \end{aligned}$$
(140)

Let us set \(B=B_{k}\) for short and define \(m^{\star }\) as a minimizer over the subsets \(m\subset \{1,\ldots ,k\}\) of the mapping

$$\begin{aligned} m\mapsto \inf _{\varvec{\theta }\in \Theta _{m}(R)}\ell (\overline{P}^{\star },P_{\varvec{\theta }})+\frac{|m|\log \left( {2kR(nB)^{1/s}}\right) +1}{\gamma n\beta a_{1}}. \end{aligned}$$

Finally, let \(\varvec{\theta }^{\star }\) be an arbitrary element of \(\Theta _{m^{\star }}(R)\). It follows from (71) and (139) that for all \(r>0\),

$$\begin{aligned} 1&\geqslant \pi _{m}\left( {{\mathscr {B}}(P_{\varvec{\theta }^{\star }},r)}\right) \nonumber \\&=\nu _{m}\left( {\left\{ {\varvec{\theta }\in {\mathbb {R}}^{k},\; h^{2}(P_{\varvec{\theta }^{\star }},P_{\varvec{\theta }})\leqslant r}\right\} }\right) \nonumber \\&\geqslant \nu _{m}\left( {\left\{ {\varvec{\theta }\in {\mathbb {R}}^{k},\; \left| {\varvec{\theta }-\varvec{\theta }^{\star }}\right| _{\infty }\leqslant (r/B)^{1/s}}\right\} }\right) \nonumber \\&\geqslant \frac{1}{2^{|m|}}\left( {\frac{(r/B)^{1/s}}{R}\wedge 1}\right) ^{|m|}\geqslant \frac{1}{2^{|m|}}\left( {\frac{(r\wedge 1)^{1/s}}{RB^{1/s}}}\right) ^{|m|}, \end{aligned}$$
(141)

where the last inequality holds true under the assumption that \(RB^{1/s}\geqslant 1\).

We deduce from (141) that for all \(r>0\)

$$\begin{aligned}&\frac{\pi \left( {{\mathscr {B}}(P_{\varvec{\theta }^{\star }},2r)}\right) }{\pi \left( {{\mathscr {B}}(P_{\varvec{\theta }^{\star }},r)}\right) }\leqslant \frac{1}{\pi \left( {{\mathscr {B}}(P_{\varvec{\theta }^{\star }},r)}\right) }\nonumber \\&\quad \leqslant \frac{1}{\sum _{m\subset \{1,\ldots ,k\}}e^{-L_{m}}\nu _{m}\left( {\left\{ {\varvec{\theta }\in {\mathbb {R}}^{k},\; \left| {\varvec{\theta }-\varvec{\theta }^{\star }}\right| _{\infty }\leqslant (r/B)^{1/s}}\right\} }\right) }\nonumber \\&\quad \leqslant \frac{e^{L_{m^{\star }}}}{\nu _{m^{\star }}\left( {\left\{ {\varvec{\theta }\in \Theta _{m^{\star }},\; \left| {\varvec{\theta }-\varvec{\theta }^{\star }}\right| _{\infty }\leqslant (r/B)^{1/s}}\right\} }\right) }\nonumber \\&\quad \leqslant \exp \left[ {L_{m^{\star }}+|m^{\star }|\log \left( {\frac{2RB^{1/s}}{(r\wedge 1)^{1/s}}}\right) }\right] \nonumber \\&\quad =\exp \left[ {|m^{\star }|\log \left( {2kRB^{1/s}}\right) +k\log \left( {1+\frac{1}{k}}\right) +\frac{|m^{\star }|}{s}\log \left( {\frac{1}{r}\vee 1}\right) }\right] . \end{aligned}$$
(142)

Provided that

$$\begin{aligned} r\geqslant \frac{|m^{\star }|\log \left( {2kR(nB)^{1/s}}\right) +1}{\gamma n\beta a_{1}}\geqslant \frac{1}{n}, \end{aligned}$$

we obtain

$$\begin{aligned}&|m^{\star }|\log \left( {2kRB^{1/s}}\right) +k\log \left( {1+\frac{1}{k}}\right) +\frac{|m^{\star }|}{s}\log \left( {\frac{1}{r}\vee 1}\right) \\&\quad \leqslant |m^{\star }|\log \left( {2kRB^{1/s}}\right) +k\log \left( {1+\frac{1}{k}}\right) +|m^{\star }|\log \left( {n^{1/s}}\right) \\&\quad \leqslant |m^{\star }|\log \left( {2kR(nB)^{1/s}}\right) +1\leqslant \gamma n\beta a_{1}r \end{aligned}$$

and deduce from (142) that \( r_{n}(\beta ,P_{\varvec{\theta }^{\star }})\) defined by (11) satisfies

$$\begin{aligned} \frac{1}{n\beta a_{1}}\leqslant r_{n}(\beta ,P_{\varvec{\theta }^{\star }})&\leqslant \frac{|m^{\star }|\log \left( {2kR(nB)^{1/s}}\right) +1}{\gamma n\beta a_{1}}. \end{aligned}$$
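
The chain of inequalities above can be spot-checked numerically. A quick sweep over \(r\geqslant 1/n\), with illustrative values of \(k,R,B,s,n\) and \(|m^{\star }|\) that are assumptions for the check only:

```python
# Spot-check of the inequality chain above over r >= 1/n, with
# illustrative parameter values (assumptions for the check only).
import numpy as np

k, R, B, s, mstar, n = 5, 2.0, 3.0, 1.0, 3, 10**4

r = np.geomspace(1.0 / n, 10.0, 200)
lhs = (mstar * np.log(2 * k * R * B ** (1 / s))
       + k * np.log(1 + 1 / k)
       + (mstar / s) * np.log(np.maximum(1.0 / r, 1.0)))
rhs = mstar * np.log(2 * k * R * (n * B) ** (1 / s)) + 1
assert np.all(lhs <= rhs + 1e-12)
print("bound verified on the grid of r")
```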

Applying Corollary 2, we obtain that for some numerical constant \(\kappa _{0}'>0\),

$$\begin{aligned} {\mathbb {E}}\left[ {{\widehat{\pi }}_{{\varvec{X}}}\left( {{^\textsf{c}}{\!{{\mathscr {B}}}}{}(\overline{P}^{\star },\kappa _{0}'{r}(m^{\star },\varvec{\theta }^{\star }))}\right) }\right] \leqslant 2e^{-\xi } \end{aligned}$$

with

$$\begin{aligned} r(m^{\star },\varvec{\theta }^{\star })&=\ell (\overline{P}^{\star },P_{\varvec{\theta }^{\star }})+\frac{|m^{\star }|\log \left( {2kR(nB)^{1/s}}\right) +\xi }{\gamma n\beta a_{1}}. \end{aligned}$$

Finally, the conclusion follows from the definition of \(m^{\star }\) and the fact that \(\varvec{\theta }^{\star }\) is arbitrary in \(\Theta _{m^{\star }}(R)\).

10.9 Proof of Lemma 10

Let \(\nu \) be the uniform distribution on \([-R,R]\). For all \(\theta \in [-R,R]\) and \(r>0\),

$$\begin{aligned} \nu \left( {[\theta -r,\theta +r]}\right)&=\frac{1}{2R}\left[ {(\theta +r)\wedge R-(\theta -r)\vee (-R)}\right] _{+}\\&=\frac{1}{2R}\left[ {(r+\theta )\wedge R+(r-\theta )\wedge R}\right] _{+}\\&=\frac{1}{2R}\left[ {(r+|\theta |)\wedge R+(r-|\theta |)\wedge R}\right] _{+}\\&=\frac{1}{2}\left[ {\left( {1-\frac{|\theta |}{R}}\right) \wedge \frac{r}{R}+\left( {1+\frac{|\theta |}{R}}\right) \wedge \frac{r}{R}}\right] . \end{aligned}$$

Let now \(\varvec{\theta }\in {\mathbb {R}}^{k}\) such that \(\left| {\varvec{\theta }}\right| _{\infty }\leqslant R\). For all \(m\subset \{1,\ldots ,k\}\), \(m\ne {\varnothing }\),

$$\begin{aligned} \nu _{m}\left( {\left\{ {\varvec{\theta }'\in \Theta _{m},\; \left| {\varvec{\theta }'-\varvec{\theta }}\right| _{\infty }\leqslant r}\right\} }\right)&= 0 \end{aligned}$$

if there exists \(i\not \in m\) such that \(|\theta _{i}|>r\), since the coordinates \(\theta _{i}'\) with \(i\not \in m\) vanish on \(\Theta _{m}\). Otherwise

$$\begin{aligned} \nu _{m}\left( {\left\{ {\varvec{\theta }'\in {\mathbb {R}}^{k},\; \left| {\varvec{\theta }'-\varvec{\theta }}\right| _{\infty }\leqslant r}\right\} }\right)&= \nu _{m}\left( {\left\{ {\varvec{\theta }'\in \Theta _{m},\; \max _{i\in m}\left| {\theta _{i}'-\theta _{i}}\right| \leqslant r}\right\} }\right) \\&=\prod _{i\in m}\nu \left( {\left[ {\theta _{i}-r,\theta _{i}+r}\right] }\right) \\&=\frac{1}{2^{|m|}}\prod _{i\in m}\left[ {\left( {1-\frac{|\theta _{i}|}{R}}\right) \wedge \frac{r}{R}+\left( {1+\frac{|\theta _{i}|}{R}}\right) \wedge \frac{r}{R}}\right] . \end{aligned}$$

If \(m={\varnothing }\),

$$\begin{aligned} \nu _{{\varnothing }}\left( {\left\{ {\varvec{\theta }'\in {\mathbb {R}}^{k},\; \left| {\varvec{\theta }'-\varvec{\theta }}\right| _{\infty }\leqslant r}\right\} }\right) ={\mathbb {1}}_{\left| {\varvec{\theta }}\right| _{\infty }\leqslant r}. \end{aligned}$$
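
As a quick sanity check of the product formula, take for instance \(k=2\), \(m=\{1\}\), \(R=1\), \(\varvec{\theta }=(1/2,0)\) and \(r=1/4\); since \(|\theta _{2}|=0\leqslant r\),

$$\begin{aligned} \nu _{m}\left( {\left\{ {\varvec{\theta }'\in \Theta _{m},\; \left| {\varvec{\theta }'-\varvec{\theta }}\right| _{\infty }\leqslant \tfrac{1}{4}}\right\} }\right) =\frac{1}{2}\left[ {\left( {1-\tfrac{1}{2}}\right) \wedge \tfrac{1}{4}+\left( {1+\tfrac{1}{2}}\right) \wedge \tfrac{1}{4}}\right] =\frac{1}{4}, \end{aligned}$$

in agreement with the direct computation \(\nu ([1/4,3/4])=(1/2)\times (1/2)=1/4\).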

Let us now turn to the proof of (140). Since \(\varvec{\theta }\in \Theta _{m}(R)\), for all \(K'\in \{1,K\}\)

$$\begin{aligned} \nu _{m}&\left( {\left\{ {\varvec{\theta }'\in {\mathbb {R}}^{k},\; \left| {\varvec{\theta }'-\varvec{\theta }}\right| _{\infty }\leqslant K'r}\right\} }\right) \\&=\nu _{m}\left( {\left\{ {\varvec{\theta }'\in \Theta _{m},\; \max _{i\in m}\left| {\theta _{i}'-\theta _{i}}\right| \leqslant K'r}\right\} }\right) \\&=\prod _{i\in m}\nu \left( {[\theta _{i}-K'r,\theta _{i}+K'r]}\right) . \end{aligned}$$

It is therefore enough to show that for all \(r>0\) and \(\theta \in [0,R]\)

$$\begin{aligned} \Delta (r)=\frac{\nu \left( {\left[ {\theta -Kr,\theta +Kr}\right] }\right) }{\nu \left( {\left[ {\theta -r,\theta +r}\right] }\right) }\leqslant K. \end{aligned}$$
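
The restriction to \(\theta \in [0,R]\) entails no loss of generality: \(\nu \) being symmetric about 0,

$$\begin{aligned} \nu \left( {[\theta -K'r,\theta +K'r]}\right) =\nu \left( {[-\theta -K'r,-\theta +K'r]}\right) \quad \text {for }K'\in \{1,K\}, \end{aligned}$$

so that \(\Delta (r)\) is unchanged when \(\theta \) is replaced by \(-\theta \).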

We now do so by distinguishing several cases.

When \(\theta +Kr\leqslant R\), \(\theta -Kr\geqslant 2\theta -R\geqslant -R\) and consequently, \(\Delta (r)=K\). When \(\theta +Kr>R\) and \(-R\leqslant \theta -Kr\),

$$\begin{aligned} \Delta (r)=\frac{R-(\theta -Kr)}{(\theta +r)\wedge R-(\theta -r)}= {\left\{ \begin{array}{ll} \displaystyle {\frac{R-\theta +Kr}{R-\theta +r}}&{}\text {when }\theta +r>R\\ \ \\ \displaystyle {\frac{R-\theta +Kr}{2r}}&{} \text {when }\theta +r\leqslant R, \end{array}\right. } \end{aligned}$$

and the conclusion follows from the fact that \(0\leqslant R-\theta \leqslant Kr\). When \(\theta +Kr> R\) and \(\theta -Kr< -R\), \(r\geqslant (\theta +R)/K\geqslant R/K\), hence \(R+r-\theta \geqslant 2R/K\) and \(R\leqslant Kr\). Consequently,

$$\begin{aligned} \Delta (r)&=\frac{2R}{(\theta +r)\wedge R-(\theta -r)\vee (-R)}\\&= {\left\{ \begin{array}{ll} \displaystyle {\frac{2R}{2R}}=1&{}\text {when }\theta +r>R\text { and }\theta -r<-R\\ \ \\ \displaystyle {\frac{2R}{R+r-\theta }}\leqslant K&{}\text {when }\theta +r>R\text { and }\theta -r\geqslant -R\\ \ \\ \displaystyle {\frac{2R}{2r}\leqslant K} &{} \text {when }\theta +r\leqslant R, \end{array}\right. } \end{aligned}$$

which concludes the proof, once we note that in the last case \(\theta +r\leqslant R\) entails \(r\leqslant R\), hence \(\theta -r\geqslant -R\) and the denominator is indeed \(2r\).
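
As an illustration of the middle case, take for instance \(R=1\), \(K=2\), \(\theta =3/4\) and \(r=1/2\), so that \(\theta +Kr=7/4>R\), \(\theta -Kr=-1/4\geqslant -R\) and \(\theta +r=5/4>R\). Then

$$\begin{aligned} \Delta (r)=\frac{R-\theta +Kr}{R-\theta +r}=\frac{1/4+1}{1/4+1/2}=\frac{5}{3}\leqslant 2=K, \end{aligned}$$

in agreement with the direct computation \(\nu ([-1/4,7/4])/\nu ([1/4,5/4])=(5/8)/(3/8)=5/3\).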

10.10 Proof of Proposition 8

Let \({\varepsilon }\) be a small enough positive number. Since q is continuous and positive at \({\varvec{\theta }}^{\star }\) and since \({\mathcal {K}}\) has a nonempty interior, there exists \(z^{\star }>0\) such that \(\Theta ^{\star }={\mathcal {B}}_{*}({\varvec{\theta }}^{\star },z^{\star })\subset {\mathcal {K}}\),

$$\begin{aligned} 0<{\underline{b}}^{\star }\leqslant q({\varvec{\theta }})\leqslant {\overline{b}}^{\star }\quad \text {with } {\overline{b}}^{\star }/{\underline{b}}^{\star }\leqslant 1+{\varepsilon }, \end{aligned}$$
(143)

for all \({\varvec{\theta }}\in \Theta ^{\star }\) and

$$\begin{aligned} (1-{\varepsilon }) |{\varvec{\theta }}-{\varvec{\theta }}^{\star }|_{*}^{s}\leqslant \ell ({\varvec{\theta }},{\varvec{\theta }}^{\star })\leqslant (1+{\varepsilon }) |{\varvec{\theta }}-{\varvec{\theta }}^{\star }|_{*}^{s}. \end{aligned}$$
(144)

In particular, \(\nu (\Theta ^{\star })>0\) and we may define the distribution \(\nu ^{\star }=\nu (\cdot \cap \Theta ^{\star })/\nu (\Theta ^{\star })\) on \(\Theta ^{\star }\) with density \(q^{\star }=q{\mathbb {1}}_{\Theta ^{\star }}/\nu (\Theta ^{\star })\). Let \({\mathscr {M}}^{\star }=\{P_{{\varvec{\theta }}},\; {\varvec{\theta }}\in \Theta ^{\star }\}\) and \(\pi ^{\star }\) be the prior on \({\mathscr {M}}^{\star }\) associated with \(\nu ^{\star }\). The parameter space \(\Theta ^{\star }\) is convex and it follows from (144) that \((\Theta ^{\star },{\varvec{\theta }}^{\star },\ell , \nu ^{\star })\) satisfies Assumption 8-(i) with \(\overline{a}=1+{\varepsilon }\) and \({\underline{a}}=1-{\varepsilon }\). Besides, it follows from (143) that the density \(q^{\star }\) satisfies condition (77) on \(\Theta ^{\star }\). We may apply Proposition 7 and deduce that for the model \(({\mathscr {M}}^{\star },\pi ^{\star })\), \(r_{n}^{\star }=r_{n}^{\star }(\beta ,P_{{\varvec{\theta }}^{\star }})\) is not larger than \(\kappa _{0}^{\star }k/(\beta n)\) with

$$\begin{aligned} \kappa _{0}^{\star }=\frac{1}{a_{1}\gamma }\left\{ {\left[ {1+\frac{\log \left[ {2(1+{\varepsilon })/(1-{\varepsilon })}\right] }{s\log 2}}\right] \log \left( {2(1+{\varepsilon })}\right) }\right\} \vee 1<\frac{(1+s^{-1})}{a_{1}\gamma } \end{aligned}$$

for \({\varepsilon }\) small enough, since the quantity between curly brackets tends to \((1+s^{-1})\log 2<1+s^{-1}\) as \({\varepsilon }\) tends to 0. Consequently, by definition of \(r_{n}^{\star }\), for all \(r\geqslant r_{n}^{\star }\)

$$\begin{aligned} \pi ^{\star }\left( {{\mathscr {B}}(P_{{\varvec{\theta }}^{\star }},2r)}\right)&=\frac{1}{\nu (\Theta ^{\star })}\nu \left( {\{{\varvec{\theta }}\in \Theta ,\; \ell ({\varvec{\theta }},{\varvec{\theta }}^{\star })\leqslant 2r\}\cap \Theta ^{\star }}\right) \nonumber \\&\leqslant \frac{\exp \left( {\gamma n\beta a_{1}r}\right) }{\nu (\Theta ^{\star })}\nu \left( {\{{\varvec{\theta }}\in \Theta ,\; \ell ({\varvec{\theta }},{\varvec{\theta }}^{\star })\leqslant r\}\cap \Theta ^{\star }}\right) \nonumber \\&\leqslant \frac{\exp \left( {\gamma n\beta a_{1}r}\right) }{\nu (\Theta ^{\star })}\nu \left( {\{{\varvec{\theta }}\in \Theta ,\; \ell ({\varvec{\theta }},{\varvec{\theta }}^{\star })\leqslant r\}}\right) . \end{aligned}$$
(145)

Let \(r_{1}=[(z^{\star })^{s}{\underline{a}}_{{\mathcal {K}}}\wedge \eta ]/2\). If \(r\in (0,r_{1})\) and the parameter \({\varvec{\theta }}\in \Theta \) satisfies \(\ell ({\varvec{\theta }},{\varvec{\theta }}^{\star })\leqslant 2r\), then \(\ell ({\varvec{\theta }},{\varvec{\theta }}^{\star })<\eta \) and \({\varvec{\theta }}\) necessarily belongs to \({\mathcal {K}}\) under Assumption 9-(ii). Applying (79) we deduce that for such a parameter \({\varvec{\theta }}\in \Theta \)

$$\begin{aligned} {\underline{a}}_{{\mathcal {K}}}\left| {{\varvec{\theta }}-{\varvec{\theta }}^{\star }}\right| _{*}^{s}\leqslant \ell \left( {{\varvec{\theta }},{\varvec{\theta }}^{\star }}\right) \leqslant 2r < 2r_{1}\leqslant \underline{a}_{{\mathcal {K}}}(z^{\star })^{s}, \end{aligned}$$

which implies that \({\varvec{\theta }}\in \Theta ^{\star }\). For n large enough, \(r_{n}^{\star }=\kappa _{0}^{\star }k/n<r_{1}\) and for \(r\in (r_{n}^{\star },r_{1})\) we may therefore write, using (145),

$$\begin{aligned} \pi \left( {{\mathscr {B}}(P_{{\varvec{\theta }}^{\star }},2r)}\right)&=\nu \left( {\{{\varvec{\theta }}\in \Theta ,\; \ell ({\varvec{\theta }},{\varvec{\theta }}^{\star })\leqslant 2r\}}\right) \nonumber \\&=\nu \left( {\{{\varvec{\theta }}\in \Theta ,\; \ell ({\varvec{\theta }},{\varvec{\theta }}^{\star })\leqslant 2r\}\cap \Theta ^{\star }}\right) \nonumber \\&\leqslant \exp \left( {\gamma n\beta a_{1}r}\right) \nu \left( {\{{\varvec{\theta }}\in \Theta ,\; \ell ({\varvec{\theta }},{\varvec{\theta }}^{\star })\leqslant r\}}\right) \nonumber \\&=\exp \left( {\gamma n\beta a_{1}r}\right) \pi \left( {{\mathscr {B}}(P_{{\varvec{\theta }}^{\star }},r)}\right) . \end{aligned}$$
(146)
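
The third line of (146) is precisely (145): since \(\pi ^{\star }\left( {{\mathscr {B}}(P_{{\varvec{\theta }}^{\star }},2r)}\right) =\nu \left( {\{{\varvec{\theta }}\in \Theta ,\; \ell ({\varvec{\theta }},{\varvec{\theta }}^{\star })\leqslant 2r\}\cap \Theta ^{\star }}\right) /\nu (\Theta ^{\star })\), multiplying (145) through by \(\nu (\Theta ^{\star })\) gives

$$\begin{aligned} \nu \left( {\{{\varvec{\theta }}\in \Theta ,\; \ell ({\varvec{\theta }},{\varvec{\theta }}^{\star })\leqslant 2r\}\cap \Theta ^{\star }}\right) \leqslant \exp \left( {\gamma n\beta a_{1}r}\right) \nu \left( {\{{\varvec{\theta }}\in \Theta ,\; \ell ({\varvec{\theta }},{\varvec{\theta }}^{\star })\leqslant r\}}\right) . \end{aligned}$$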

Since q is bounded away from 0 in a neighbourhood of \({\varvec{\theta }}^{\star }\), \(\pi \left( {{\mathscr {B}}(P_{{\varvec{\theta }}^{\star }},r_{1})}\right) >0\) and we may also write that for \(r\geqslant r_{1}\) and n large enough

$$\begin{aligned} \pi \left( {{\mathscr {B}}(P_{{\varvec{\theta }}^{\star }},r)}\right)&\geqslant \pi \left( {{\mathscr {B}}(P_{{\varvec{\theta }}^{\star }},r_{1})}\right) \nonumber \\&=\exp \left[ {\log \pi \left( {{\mathscr {B}}(P_{{\varvec{\theta }}^{\star }},r_{1})}\right) +\gamma n\beta a_{1}r_{1}-\gamma n\beta a_{1}r_{1}}\right] \nonumber \\&\geqslant \exp \left[ {-\gamma n\beta a_{1}r_{1}}\right] \geqslant \exp \left[ {-\gamma n\beta a_{1}r}\right] \nonumber \\&\geqslant \exp \left[ {-\gamma n\beta a_{1}r}\right] \pi \left( {{\mathscr {B}}(P_{{\varvec{\theta }}^{\star }},2r)}\right) . \end{aligned}$$
(147)
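
The third line of (147) is the only place where n needs to be large: it holds as soon as \(\pi \left( {{\mathscr {B}}(P_{{\varvec{\theta }}^{\star }},r_{1})}\right) \geqslant e^{-\gamma n\beta a_{1}r_{1}}\), that is for

$$\begin{aligned} n\geqslant \frac{1}{\gamma \beta a_{1}r_{1}}\log \frac{1}{\pi \left( {{\mathscr {B}}(P_{{\varvec{\theta }}^{\star }},r_{1})}\right) }. \end{aligned}$$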

Putting (146) and (147) together we obtain that for n large enough

$$\begin{aligned} \pi \left( {{\mathscr {B}}(P_{{\varvec{\theta }}^{\star }},2r)}\right) \leqslant \exp \left( {\gamma n\beta a_{1}r}\right) \pi \left( {{\mathscr {B}}(P_{{\varvec{\theta }}^{\star }},r)}\right) \quad \text {for all }r\geqslant r_{n}^{\star }\end{aligned}$$

and consequently that \(r_{n}(\beta ,P_{{\varvec{\theta }}^{\star }})\leqslant r_{n}^{\star }\leqslant \kappa _{0}^{\star }k/(\beta n)\).