1 Introduction

Concentration inequalities are central in the analysis of adaptive nonparametric statistics. They lead to sharp penalized criteria for model selection [20], to select bandwidths and even approximation kernels for Parzen’s estimators in high dimension [17], to aggregate estimators [24] and to properly calibrate thresholds [9].

In the present work, we are interested in the selection of a general kernel estimator based on a least-squares density estimation approach. The problem has been considered in \(L^{1}\)-loss by Devroye and Lugosi [8]. Other methods combining log-likelihood and roughness/smoothness penalties have also been proposed in [10–12]. However these estimators are usually quite difficult to compute in practice. We propose here to minimize penalized least-squares criteria and obtain from them more easily computable estimators. Sharp concentration inequalities for U-statistics [1, 16, 18] control the variance term of the kernel estimators, whose asymptotic behavior has been precisely described, for instance in [7, 21, 22]. We derive from these bounds (see Proposition (4.1)) a penalization method to select a kernel which satisfies an asymptotically optimal oracle inequality, i.e. with leading constant asymptotically equal to 1.

In the spirit of [14], we use an extended definition of kernels that allows us to deal simultaneously with classical collections of estimators such as projection estimators, weighted projection estimators, or Parzen’s estimators. This method can be used for example to select an optimal model in model selection (in accordance with [20]) or to select an optimal bandwidth together with an optimal approximation kernel among a finite collection of Parzen’s estimators. In this sense, our method deals, in particular, with the same problem as that of Goldenshluger and Lepski [17] and we establish in this framework that a leading constant 1 in the oracle inequality is indeed possible.

Another main consequence of concentration inequalities is to prove the existence of a minimal level of penalty, under which no oracle inequalities can hold. Birgé and Massart shed light on this phenomenon in a Gaussian setting for model selection [5]. Moreover in this setting, they prove that the optimal penalty is twice the minimal one. In addition, there is a sharp phase transition in the dimension of the selected models leading to an estimate of the optimal penalty in their case (which is known up to a multiplicative constant). Indeed, starting from the idea that in many models the optimal penalty is twice the minimal one (this is the slope heuristic), Arlot and Massart [3] propose to detect the minimal penalty by the phase transition and to apply the “ × 2” rule (this is the slope algorithm). They prove that this algorithm works at least in some regression settings.

In the present work, we also show that minimal penalties exist in the density estimation setting. In particular, we exhibit a sharp “phase transition” of the behavior of the selected estimator around this minimal penalty. The analysis of this last result is not standard however. First, the “slope heuristic” of [5] only holds in particular cases such as the selection of projection estimators, see also [19]. As in the selection of a linear estimator in a regression setting [2], the heuristic can sometimes be corrected: for example for the selection of a bandwidth when the approximation kernel is fixed. In general, since there is no simple relation between the minimal penalty and the optimal one, the slope algorithm of [3] should only be used with care for kernel selection. Surprisingly, our work reveals that the minimal penalty can be negative. In this case, minimizing an unpenalized criterion leads to oracle estimators. To our knowledge, such a phenomenon has only been noticed previously in a very particular classification setting [13]. We illustrate all of these different behaviors by means of a simulation study.

In Sect. (2), after fixing the main notation, providing some examples and defining the framework, we explain our goal, describe what we mean by an oracle inequality and state the exponential inequalities that we shall need. Then we derive optimal penalties in Sect. (3) and study the problem of minimal penalties in Sect. (4). All of these results are illustrated for our three main examples: projection kernels, approximation kernels and weighted projection kernels. In Sect. (5), some simulations are performed in the approximation kernel case. The main proofs are detailed in Sect. (6) and technical results are discussed in the appendix.

2 Kernel Selection for Least-Squares Density Estimation

2.1 Setting

Let \(X,Y,X_{1},\ldots,X_{n}\) denote i.i.d. random variables taking values in the measurable space \((\mathbb{X},\mathcal{X},\mu )\), with common distribution P. Assume P has density s with respect to μ and s is uniformly bounded. Hence, s belongs to \(L^{2}\), where, for any p ≥ 1,

$$\displaystyle{L^{p}:= \left \{\left.t: \mathbb{X} \rightarrow \mathbb{R},\,\mbox{ s.t. }\,\left \Vert t\right \Vert _{ p}^{p}:=\int \left \vert t\right \vert ^{p}d\mu <\infty \right.\right \}.}$$

Moreover, \(\left \Vert \cdot \right \Vert = \left \Vert \cdot \right \Vert _{2}\) and 〈⋅ , ⋅ 〉 denote respectively the \(L^{2}\)-norm and the associated inner product and \(\left \Vert \cdot \right \Vert _{\infty }\) is the supremum norm. We systematically use \(x \vee y\) and \(x \wedge y\) for max(x, y) and min(x, y) respectively, and denote | A | the cardinality of the set A. Recall that \(x_{+} = x \vee 0\) and, for any \(y \in \mathbb{R}^{+}\), \(\left \lfloor y\right \rfloor =\sup \{ n \in \mathbb{N}\,\mbox{ s.t. }\,n \leq y\}\).

Let \(\left \{\left.k\right.\right \}_{k\in \mathcal{K}}\) denote a collection of symmetric functions \(k: \mathbb{X}^{2} \rightarrow \mathbb{R}\) indexed by some given finite set \(\mathcal{K}\) such that

$$\displaystyle{\sup _{x\in \mathbb{X}}\ \int _{\mathbb{X}}k(x,y)^{2}d\mu (y)\ \vee \sup _{ (x,y)\in \mathbb{X}^{2}}\left \vert k(x,y)\right \vert <\infty.}$$

A function k satisfying these assumptions is called a kernel, in the sequel. A kernel k is associated with an estimator \(\hat{s}_{k}\) of s defined for any \(x \in \mathbb{X}\) by

$$\displaystyle\begin{array}{rcl} \hat{s}_{k}(x):= \frac{1} {n}\sum _{i=1}^{n}k(X_{ i},x).& & {}\\ \end{array}$$
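As an illustration, here is a minimal sketch (in Python; the function name is ours, not from the original text) of how such an estimator can be evaluated for a user-supplied symmetric kernel function.

```python
import numpy as np

def kernel_estimator(k, X):
    """Return the function x -> (1/n) * sum_i k(X_i, x) for a symmetric kernel k.

    k : vectorized callable of two arguments (broadcasting over numpy arrays).
    X : 1-D array containing the observations X_1, ..., X_n.
    """
    X = np.asarray(X, dtype=float)
    n = len(X)

    def s_hat(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        values = k(X[:, None], x[None, :])   # matrix (k(X_i, x_j))_{i,j}
        return values.sum(axis=0) / n

    return s_hat
```

The kernels of the three examples below can all be plugged into this generic function.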

Our aim is to select a “good” \(\hat{s}_{\hat{k}}\) in the family \(\{\hat{s}_{k},k \in \mathcal{K}\}\). Our results are expressed in terms of a constant \(\Gamma \geq 1\) such that for all \(k \in \mathcal{K}\),

$$\displaystyle{ \sup _{x\in \mathbb{X}}\ \int _{\mathbb{X}}k(x,y)^{2}d\mu (y)\ \vee \sup _{ (x,y)\in \mathbb{X}^{2}}\left \vert k(x,y)\right \vert \leq \Gamma n. }$$
(2.1)

This condition plays the same role as \(\int \vert k(x,y)\vert s(y)d\mu (y) <\infty\), the milder condition used in [8] when working with L 1-losses. Before describing the method, let us give three examples of such estimators that are used for density estimation, and see how they can naturally be associated to some kernels. Section A of the appendix gives the computations leading to the corresponding \(\Gamma\)’s.

Example 1 (Projection Estimators)

Projection estimators are among the most classical density estimators. Given a linear subspace \(S \subset L^{2}\), the projection estimator on S is defined by

$$\displaystyle{\hat{s}_{S} =\arg \min _{t\in S}\left \{\left.\left \Vert t\right \Vert ^{2} - \frac{2} {n}\sum _{i=1}^{n}t(X_{ i})\right.\right \}.}$$

Let \(\mathcal{S}\) be a family of linear subspaces S of \(L^{2}\). For any \(S \in \mathcal{S}\), let \((\varphi _{\ell})_{\ell\in \mathcal{I}_{S}}\) denote an orthonormal basis of S. The projection estimator \(\hat{s}_{S}\) can be computed and is equal to

$$\displaystyle{\hat{s}_{S} =\sum _{\ell\in \mathcal{I}_{S}}\left (\left.\frac{1} {n}\sum _{i=1}^{n}\varphi _{ \ell}(X_{i})\right.\right )\varphi _{\ell}.}$$

It is therefore easy to see that it is the estimator associated to the projection kernel k S defined for any x and y in \(\mathbb{X}\) by

$$\displaystyle{k_{S}(x,y):=\sum _{\ell\in \mathcal{I}_{S}}\varphi _{\ell}(x)\varphi _{\ell}(y).}$$

Notice that \(k_{S}\) actually depends on the basis \((\varphi _{\ell})_{\ell\in \mathcal{I}_{S}}\) even if \(\hat{s}_{S}\) does not. In the sequel, we always assume that some orthonormal basis \((\varphi _{\ell})_{\ell\in \mathcal{I}_{S}}\) is given with S. Given a finite collection \(\mathcal{S}\) of linear subspaces of \(L^{2}\), one can choose the following constant \(\Gamma\) in (2.1) for the collection \((k_{S})_{S\in \mathcal{S}}\)

$$\displaystyle{ \Gamma = 1 \vee \frac{1} {n}\sup _{S\in \mathcal{S}}\sup _{f\in S,\left \Vert f\right \Vert =1}\left \Vert \,f\right \Vert _{\infty }^{2}. }$$
(2.2)
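To make the projection case concrete, here is a minimal sketch (ours) of the projection kernel associated with the regular histogram space on [0, 1] with \(D_{S}\) bins, using the basis \(\varphi _{i} = \sqrt{D_{S}}\mathbf{1}_{i}\) recalled in Sect. (3.3); the function name is hypothetical.

```python
import numpy as np

def histogram_projection_kernel(D):
    """Projection kernel k_S(x, y) = sum_i phi_i(x) phi_i(y) for the regular
    histogram space on [0, 1] with D bins and phi_i = sqrt(D) * 1_{bin i}.
    Equivalently, k_S(x, y) = D if x and y fall in the same bin, 0 otherwise."""
    def k(x, y):
        bx = np.minimum((np.asarray(x) * D).astype(int), D - 1)
        by = np.minimum((np.asarray(y) * D).astype(int), D - 1)
        return D * (bx == by)
    return k
```

Plugged into the generic `kernel_estimator` above, this returns the classical regular histogram estimator.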

Example 2 (Parzen’s Estimators)

Given a bounded symmetric integrable function \(K: \mathbb{R} \rightarrow \mathbb{R}\) such that \(\int _{\mathbb{R}}K(u)du = 1\) and a bandwidth h > 0, the Parzen estimator is defined by

$$\displaystyle{\forall x \in \mathbb{R},\quad \hat{s}_{K,h}(x) = \frac{1} {nh}\sum _{i=1}^{n}K\left (\left.\frac{x - X_{i}} {h} \right.\right ).}$$

It can also naturally be seen as a kernel estimator, associated to the function k K, h defined for any x and y in \(\mathbb{R}\) by

$$\displaystyle{k_{K,h}(x,y):= \frac{1} {h}K\left (\left.\frac{x - y} {h} \right.\right ).}$$

We shall call the function k K, h an approximation or Parzen kernel.

Given a finite collection of pairs \((K,h) \in \mathcal{H}\), one can choose \(\Gamma = 1\) in (2.1) if,

$$\displaystyle{ h \geq \frac{\left \Vert K\right \Vert _{\infty }\left \Vert K\right \Vert _{1}} {n} \quad \mbox{ for any }(K,h) \in \mathcal{H}. }$$
(2.3)
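A minimal sketch (ours) of the corresponding construction, together with a helper that only checks condition (2.3); both names are hypothetical.

```python
import numpy as np

def parzen_kernel(K, h):
    """Approximation (Parzen) kernel k_{K,h}(x, y) = K((x - y) / h) / h."""
    def k(x, y):
        return K((np.asarray(x) - np.asarray(y)) / h) / h
    return k

def gamma_one_is_valid(K_sup, K_l1, h, n):
    """Check the bandwidth condition (2.3), under which Gamma = 1 can be
    chosen in (2.1): h >= ||K||_inf * ||K||_1 / n."""
    return h >= K_sup * K_l1 / n
```

For instance, `parzen_kernel(lambda u: np.exp(-u**2 / 2) / np.sqrt(2 * np.pi), h)` combined with `kernel_estimator` gives the Gaussian kernel estimator with bandwidth h.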

Example 3 (Weighted Projection Estimators)

Let \((\varphi _{i})_{i=1,\ldots,p}\) denote an orthonormal system in \(L^{2}\) and let \(w = (w_{i})_{i=1,\ldots,p}\) denote real numbers in [0, 1]. The associated weighted kernel projection estimator of s is defined by

$$\displaystyle{\hat{s}_{w} =\sum _{ i=1}^{p}w_{ i}\left (\left.\frac{1} {n}\sum _{j=1}^{n}\varphi _{ i}(X_{j})\right.\right )\varphi _{i}.}$$

These estimators are used to derive very sharp adaptive results. In particular, Pinsker’s estimators are weighted kernel projection estimators (see for example [23]). When \(w \in \left \{\left.0,1\right.\right \}^{p}\), we recover a classical projection estimator. A weighted projection estimator is associated to the weighted projection kernel defined for any x and y in \(\mathbb{X}\) by

$$\displaystyle{k_{w}(x,y):=\sum _{ i=1}^{p}w_{ i}\varphi _{i}(x)\varphi _{i}(y).}$$

Given any finite collection \(\mathcal{W}\) of weights, one can choose in (2.1)

$$\displaystyle{ \Gamma = 1 \vee \left (\left.\frac{1} {n}\sup _{x\in \mathbb{X}}\sum _{i=1}^{p}\varphi _{ i}(x)^{2}\right.\right ). }$$
(2.4)
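A minimal sketch (ours) of the weighted projection kernel, for a user-supplied orthonormal system; the name is hypothetical.

```python
import numpy as np

def weighted_projection_kernel(phis, w):
    """Weighted projection kernel k_w(x, y) = sum_i w_i * phi_i(x) * phi_i(y).

    phis : list of vectorized functions (an orthonormal system in L^2).
    w    : weights in [0, 1], one per basis function.
    """
    def k(x, y):
        x, y = np.asarray(x), np.asarray(y)
        return sum(w_i * phi(x) * phi(y) for w_i, phi in zip(w, phis))
    return k
```

With weights in {0, 1} this reduces to the projection kernel of Example 1.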

2.2 Oracle Inequalities and Penalized Criterion

The goal is to estimate s in the best possible way using a finite collection of kernel estimators \((\hat{s}_{k})_{k\in \mathcal{K}}\). In other words, the purpose is to select among \((\hat{s}_{k})_{k\in \mathcal{K}}\) an estimator \(\hat{s}_{\hat{k}}\) from the data such that \(\left \Vert \hat{s}_{\hat{k}} - s\right \Vert ^{2}\) is as close as possible to \(\inf _{k\in \mathcal{K}}\left \Vert \hat{s}_{k} - s\right \Vert ^{2}\). More precisely our aim is to select \(\hat{k}\) such that, with high probability,

$$\displaystyle{ \left \Vert \hat{s}_{\hat{k}} - s\right \Vert ^{2} \leq C_{ n}\inf _{k\in \mathcal{K}}\left \Vert \hat{s}_{k} - s\right \Vert ^{2} + R_{ n}, }$$
(2.5)

where \(C_{n} \geq 1\) is the leading constant and \(R_{n}> 0\) is usually a remaining term. In this case, \(\hat{s}_{\hat{k}}\) is said to satisfy an oracle inequality, as long as \(R_{n}\) is small compared to \(\inf _{k\in \mathcal{K}}\left \Vert \hat{s}_{k} - s\right \Vert ^{2}\) and \((C_{n})\) is a bounded sequence. This means that the selected estimator does as well as the best estimator in the family up to some multiplicative constant. The best case one can expect is to get \(C_{n}\) close to 1. This is why, when \(C_{n} \rightarrow 1\) as \(n \rightarrow \infty\), the corresponding oracle inequality is called asymptotically optimal. To do so, we study minimizers of penalized least-squares criteria. Note that in our three examples choosing \(\hat{k} \in \mathcal{K}\) amounts to choosing the smoothing parameter, that is respectively to choosing \(\widehat{S} \in \mathcal{S}\), \((\widehat{K},\hat{h}) \in \mathcal{H}\) or \(\hat{w} \in \mathcal{W}\).

Let P n denote the empirical measure, that is, for any real valued function t,

$$\displaystyle{P_{n}(t):= \frac{1} {n}\sum _{i=1}^{n}t(X_{ i}).}$$

For any t ∈ L 2, let also \(P(t):=\int _{\mathbb{X}}t(x)s(x)d\mu (x).\)

The least-squares contrast is defined, for any t ∈ L 2, by

$$\displaystyle{\gamma (t):= \left \Vert t\right \Vert ^{2} - 2t.}$$

Then for any given function \(\mathop{\mathrm{pen}}\nolimits: \mathcal{K}\rightarrow \mathbb{R}\), the least-squares penalized criterion is defined by

$$\displaystyle{ \mathcal{C}_{\mathop{\mathrm{pen}}\nolimits }(k):= P_{n}\gamma (\hat{s}_{k}) +\mathop{ \mathrm{pen}}\nolimits (k). }$$
(2.6)

Finally the selected \(\hat{k} \in \mathcal{K}\) is given by any minimizer of \(\mathcal{C}_{\mathop{\mathrm{pen}}\nolimits }(k)\), that is,

$$\displaystyle{ \hat{k} \in \arg \min _{k\in \mathcal{K}}\left \{\left.\mathcal{C}_{\mathop{\mathrm{pen}}\nolimits }(k)\right.\right \}. }$$
(2.7)

As \(P\gamma (t) = \left \Vert t - s\right \Vert ^{2} -\left \Vert s\right \Vert ^{2}\), it is equivalent to minimize \(\left \Vert \hat{s}_{k} - s\right \Vert ^{2}\) or \(P\gamma (\hat{s}_{k})\). As our goal is to select \(\hat{s}_{\hat{k}}\) satisfying an oracle inequality, an ideal penalty \(\mathop{\mathrm{pen}}\nolimits _{\mathrm{id}}\) should satisfy \(\mathcal{C}_{\mathop{\mathrm{pen}}\nolimits _{\mathrm{id}}}(k) = P\gamma (\hat{s}_{k})\), i.e. criterion (2.6) with

$$\displaystyle{\mathop{\mathrm{pen}}\nolimits _{\mathrm{id}}(k):= (P - P_{n})\gamma (\hat{s}_{k}) = 2(P_{n} - P)(\hat{s}_{k}).}$$

To identify the main quantities of interest, let us introduce some notation and develop \(\mathop{\mathrm{pen}}\nolimits _{\mathrm{id}}(k)\). For all \(k \in \mathcal{K}\), let

$$\displaystyle\begin{array}{rcl} s_{k}(x):=\int _{\mathbb{X}}k(y,x)s(y)d\mu (y) = \mathbb{E}\left [\left.k(X,x)\right.\right ],\qquad \forall x \in \mathbb{X},& & {}\\ \end{array}$$

and

$$\displaystyle{U_{k}:=\sum _{ i\neq j=1}^{n}\left (\left.k(X_{ i},X_{j}) - s_{k}(X_{i}) - s_{k}(X_{j}) + \mathbb{E}\left [\left.k(X,Y )\right.\right ]\right.\right ).}$$

Because those quantities are fundamental in the sequel, let us also define \(\Theta _{k}(x) = A_{k}(x,x)\) where for \((x,y) \in \mathbb{X}^{2}\)

$$\displaystyle{ A_{k}(x,y):=\int _{\mathbb{X}}k(x,z)k(z,y)d\mu (z). }$$
(2.8)

Denoting

$$\displaystyle{\mbox{ for all }x \in \mathbb{X},\quad \chi _{k}(x) = k(x,x),}$$

the ideal penalty is then equal to

$$\displaystyle\begin{array}{rcl} & & \mathop{\mathrm{pen}}\nolimits _{\mathrm{id}}(k) = 2(P_{n} - P)(\hat{s}_{k} - s_{k}) + 2(P_{n} - P)s_{k} \\ & & = 2\left (\left.\frac{P\chi _{k} - Ps_{k}} {n} + \frac{(P_{n} - P)\chi _{k}} {n} + \frac{U_{k}} {n^{2}} + \left (\left.1 - \frac{2} {n}\right.\right )(P_{n} - P)s_{k}\right.\right ).{}\end{array}$$
(2.9)
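For the reader’s convenience, here is a short sketch of the computation leading to (2.9). By symmetry of k, \(P\hat{s}_{k} = P_{n}s_{k}\), so that \(\mathop{\mathrm{pen}}\nolimits _{\mathrm{id}}(k) = 2(P_{n}\hat{s}_{k} - P_{n}s_{k})\). Moreover

$$\displaystyle{nP_{n}\hat{s}_{k} = \frac{1} {n}\sum _{i,j=1}^{n}k(X_{ i},X_{j}) = P_{n}\chi _{k} + \frac{1} {n}\sum _{i\neq j}k(X_{i},X_{j}),}$$

while, by definition of U k and since \(\mathbb{E}\left [\left.k(X,Y )\right.\right ] = Ps_{k}\),

$$\displaystyle{\frac{1} {n}\sum _{i\neq j}k(X_{i},X_{j}) = \frac{U_{k}} {n} + 2(n - 1)P_{n}s_{k} - (n - 1)Ps_{k}.}$$

Hence

$$\displaystyle{\mathop{\mathrm{pen}}\nolimits _{\mathrm{id}}(k) = 2\left (\left.\frac{P_{n}\chi _{k}} {n} + \frac{U_{k}} {n^{2}} + \frac{n - 2} {n} P_{n}s_{k} -\frac{n - 1} {n} Ps_{k}\right.\right ),}$$

which is (2.9) after writing \(P_{n}\chi _{k} = P\chi _{k} + (P_{n} - P)\chi _{k}\) and \(\frac{n-2} {n} P_{n}s_{k} -\frac{n-1} {n} Ps_{k} = \left (\left.1 - \frac{2} {n}\right.\right )(P_{n} - P)s_{k} -\frac{Ps_{k}} {n}\).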

The main point is that by using concentration inequalities, we obtain:

$$\displaystyle{\mathop{\mathrm{pen}}\nolimits _{\mathrm{id}}(k) \simeq 2\left (\left.\frac{P\chi _{k} - Ps_{k}} {n} \right.\right ).}$$

The term \(Ps_{k}/n\) depends on s, which is unknown. Fortunately, it can be easily controlled as detailed in the sequel. Therefore one can hope that the choice

$$\displaystyle{\mathop{\mathrm{pen}}\nolimits (k) = 2\frac{P\chi _{k}} {n} }$$

is convenient. In general, this choice still depends on the unknown density s but it can be easily estimated in a data-driven way by

$$\displaystyle{\mathop{\mathrm{pen}}\nolimits (k) = 2\frac{P_{n}\chi _{k}} {n}.}$$

The goal of Sect. (3) is to prove this heuristic and to show that \(2P\chi _{k}/n\) and \(2P_{n}\chi _{k}/n\) are optimal choices for the penalty, that is, they lead to an asymptotically optimal oracle inequality.
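The resulting selection rule is straightforward to implement once the functions k and \(A_{k}\) of (2.8) are available: one can check that \(\left \Vert \hat{s}_{k}\right \Vert ^{2} = n^{-2}\sum _{i,j}A_{k}(X_{i},X_{j})\), so that \(P_{n}\gamma (\hat{s}_{k}) = \left \Vert \hat{s}_{k}\right \Vert ^{2} - 2P_{n}\hat{s}_{k}\) only involves the two matrices \((k(X_{i},X_{j}))_{i,j}\) and \((A_{k}(X_{i},X_{j}))_{i,j}\). Here is a minimal sketch (ours, with hypothetical names) using the data-driven penalty \(2P_{n}\chi _{k}/n\); recall that \(A_{k_{S}} = k_{S}\) for a projection kernel (see Sect. (3.3)), and one can check that \(A_{k_{K,h}}(x,y) = (K \ast K)((x - y)/h)/h\) for a Parzen kernel.

```python
import numpy as np

def select_kernel(kernels, A_kernels, X):
    """Select k_hat minimizing the penalized criterion (2.6) with the
    data-driven penalty pen(k) = 2 * P_n(chi_k) / n.

    kernels   : dict name -> k(x, y)    (vectorized symmetric kernel)
    A_kernels : dict name -> A_k(x, y)  (the function defined in (2.8))
    X         : 1-D array of observations.
    """
    X = np.asarray(X, dtype=float)
    n = len(X)
    crit = {}
    for name, k in kernels.items():
        K = k(X[:, None], X[None, :])        # matrix (k(X_i, X_j))_{i,j}
        A = A_kernels[name](X[:, None], X[None, :])
        sq_norm = A.sum() / n**2             # ||s_hat_k||^2
        P_n_s_hat = K.sum() / n**2           # P_n(s_hat_k)
        pen = 2 * np.trace(K) / n**2         # 2 * P_n(chi_k) / n
        crit[name] = sq_norm - 2 * P_n_s_hat + pen
    k_hat = min(crit, key=crit.get)
    return k_hat, crit
```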

2.3 Concentration Tools

To derive sharp oracle inequalities, we only need two fundamental concentration tools, namely a weak Bernstein’s inequality and the concentration bounds for degenerate U-statistics of order two. We cite them here under their most suitable form for our purpose.

A Weak Bernstein’s Inequality

Proposition 2.1

For any bounded real valued function f and any \(X_{1},\ldots,X_{n}\) i.i.d. with distribution P, for any u > 0,

$$\displaystyle{\Pr \left (\left.(P_{n} - P)f \geq \sqrt{\frac{2P\left (\left.f^{2 } \right. \right ) u} {n}} + \frac{\left \Vert \,f\right \Vert _{\infty }u} {3n}\right.\right ) \leq \exp (-u).}$$

The proof is straightforward and can be derived from either Bennett’s or Bernstein’s inequality [6].

Concentration of Degenerate U-Statistics of Order 2

Proposition 2.2

Let \(X,X_{1},\ldots,X_{n}\) be i.i.d. random variables defined on a Polish space \(\mathbb{X}\) equipped with its Borel σ-algebra and let \(\left (\left.f_{i,j}\right.\right )_{1\leq i\not =j\leq n}\) denote bounded real valued symmetric measurable functions defined on \(\mathbb{X}^{2}\) , such that for any i≠j, \(f_{i,j} = f_{j,i}\) and

$$\displaystyle{ \forall \ i,j\mbox{ s.t. }1 \leq i\neq j \leq n,\qquad \mathbb{E}\left [\left.f_{i,j}(x,X)\right.\right ] = 0\qquad \mbox{ for a.e.}x\mbox{ in }\mathbb{X}. }$$
(2.10)

Let U be the following totally degenerate U-statistic of order 2,

$$\displaystyle{U =\sum _{1\leq i\neq j\leq n}f_{i,j}(X_{i},X_{j}).}$$

Let A be an upper bound of \(\left \vert f_{i,j}(x,y)\right \vert\) for any i,j,x,y and

$$\displaystyle\begin{array}{rcl} B^{2}& =& \max \left (\left.\sup _{ i,x\in \mathbb{X}}\sum _{j=1}^{i-1}\mathbb{E}\left [\left.f_{ i,j}(x,X_{j})^{2}\right.\right ],\sup _{ j,t\in \mathbb{X}}\sum _{i=j+1}^{n}\mathbb{E}\left [\left.f_{ i,j}(X_{i},t)^{2}\right.\right ]\right.\right ) {}\\ C^{2}& =& \sum _{ 1\leq i\neq j\leq n}\mathbb{E}\left [\left.f_{i,j}(X_{i},X_{j})^{2}\right.\right ] {}\\ D& =& \sup _{(a,b)\in \mathcal{A}}\mathbb{E}\left [\left.\sum _{1\leq i<j\leq n}f_{i,j}(X_{i},X_{j})a_{i}(X_{i})b_{j}(X_{j})\right.\right ], {}\\ \end{array}$$

where \(\mathcal{A} = \left \{\left.(a,b),\,\mbox{ s.t. }\,\mathbb{E}\left [\left.\sum _{i=1}^{n-1}a_{ i}(X_{i})^{2}\right.\right ] \leq 1,\; \mathbb{E}\left [\left.\sum _{ j=2}^{n}b_{ j}(X_{j})^{2}\right.\right ] \leq 1\right.\right \}.\)

Then there exists some absolute constant κ > 0 such that for any u > 0, with probability larger than \(1 - 2.7e^{-u}\) ,

$$\displaystyle\begin{array}{rcl} U \leq \kappa \left (C\sqrt{u} + Du + Bu^{3/2} + Au^{2}\right ).& & {}\\ \end{array}$$

The present result is a simplification of Theorem 3.4.8 in [15], which provides explicit constants for any variables defined on a Polish space. It is mainly inspired by Houdré and Reynaud-Bouret [18], where the result therein has been stated only for real variables. This inequality actually dates back to Giné et al. [16]. This result has been further generalized by Adamczak to U-statistics of any order [1], though the constants are not explicit.

3 Optimal Penalties for Kernel Selection

The main aim of this section is to show that \(2P\chi _{k}/n\) is a theoretical optimal penalty for kernel selection, which means that if pen(k) is close to \(2P\chi _{k}/n\), the selected kernel \(\hat{k}\) satisfies an asymptotically optimal oracle inequality.

3.1 Main Assumptions

To express our results in a simple form, a positive constant \(\Upsilon\) is assumed to control, for any k and k′ in \(\mathcal{K}\), all the following quantities.

$$\displaystyle\begin{array}{rcl} \left (\left.\Gamma (1 + \left \Vert s\right \Vert _{\infty })\right.\right ) \vee \sup _{k\in \mathcal{K}}\left \Vert s_{k}\right \Vert ^{2}& \leq \Upsilon,&{}\end{array}$$
(3.11)
$$\displaystyle\begin{array}{rcl} P\left (\left.\chi _{k}^{2}\right.\right )& \leq & \Upsilon nP\Theta _{ k},{}\end{array}$$
(3.12)
$$\displaystyle\begin{array}{rcl} \left \Vert s_{k} - s_{k'}\right \Vert _{\infty }& \leq & \Upsilon \vee \sqrt{\Upsilon n}\left \Vert s_{k} - s_{k'}\right \Vert,{}\end{array}$$
(3.13)
$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [\left.A_{k}(X,Y )^{2}\right.\right ]& \leq & \Upsilon P\Theta _{ k},{}\end{array}$$
(3.14)
$$\displaystyle\begin{array}{rcl} \sup _{x\in \mathbb{X}}\ \mathbb{E}\left [\left.A_{k}(X,x)^{2}\right.\right ]& \leq & \Upsilon n,{}\end{array}$$
(3.15)
$$\displaystyle\begin{array}{rcl} v_{k}^{2}:=\sup _{ t\in \mathbb{B}_{k}}Pt^{2}& \leq & \Upsilon \vee \sqrt{\Upsilon P\Theta _{ k}},{}\end{array}$$
(3.16)

where \(\mathbb{B}_{k}\) is the set of functions t that can be written t(x) = ∫ a(z)k(z, x)d μ(z) for some a ∈ L 2 with \(\left \Vert a\right \Vert \leq 1\).

These assumptions may seem very intricate. They are actually fulfilled by our three main examples under very mild conditions (see Sect. (3.3)).

3.2 The Optimal Penalty Theorem

In the sequel, □ denotes a positive absolute constant whose value may change from line to line; when it carries indices, as in □  θ , it denotes a positive function of these indices only, whose value may also change from line to line.

Theorem 3.1

If Assumptions  (3.11) – (3.16) hold, then, for any x ≥ 1, with probability larger than \(1 -\square \vert \mathcal{K}\vert ^{2}e^{-x}\) , for any θ ∈ (0,1), any minimizer \(\hat{k}\) of the penalized criterion  (2.6) satisfies the following inequality

$$\displaystyle\begin{array}{rcl} \forall k \in \mathcal{K},\qquad (1 - 4\theta )\left \Vert s -\hat{s}_{\hat{k}}\right \Vert ^{2}& & \leq (1 + 4\theta )\left \Vert s -\hat{s}_{ k}\right \Vert ^{2} + \left (\left.\mathop{\mathrm{pen}}\nolimits (k) - 2\frac{P\chi _{k}} {n} \right.\right ) \\ & & -\left (\left.\mathop{\mathrm{pen}}\nolimits \left (\left.\hat{k}\right.\right ) - 2\frac{P\chi _{\hat{k}}} {n} \right.\right ) + \square \frac{\Upsilon x^{2}} {\theta n}. {}\end{array}$$
(3.17)

Assume moreover that there exist C > 0, δ′ ≥ δ > 0 and r ≥ 0 such that for any x ≥ 1, with probability larger than \(1 - Ce^{-x}\) , for any \(k \in \mathcal{K}\) ,

$$\displaystyle{ (\delta -1)\frac{P\Theta _{k}} {n} -\square r\frac{\Upsilon x^{2}} {n} \leq \mathop{\mathrm{pen}}\nolimits (k) -\frac{2P\chi _{k}} {n} \leq (\delta ' - 1)\frac{P\Theta _{k}} {n} + \square r\frac{\Upsilon x^{2}} {n}. }$$
(3.18)

Then for all θ ∈ (0,1) and all x ≥ 1, the following holds with probability at least \(1 -\square (C + \vert \mathcal{K}\vert ^{2})e^{-x}\) ,

$$\displaystyle\begin{array}{rcl} \frac{(\delta \wedge 1) - 5\theta } {(\delta ' \vee 1) + (4 +\delta ')\theta }\left \Vert s -\hat{s}_{\hat{k}}\right \Vert ^{2}& \leq \inf _{ k\in \mathcal{K}}\left \Vert s -\hat{s}_{k}\right \Vert ^{2} + \square \left (\left.r + \frac{1} {\theta ^{3}} \right.\right )\frac{\Upsilon x^{2}} {n}.& {}\\ \end{array}$$

Let us make some remarks.

  • First, this is an oracle inequality (see (2.5)) with leading constant C n and remaining term R n given by

    $$\displaystyle{C_{n} = \frac{(\delta ' \vee 1) + (4 +\delta ')\theta } {(\delta \wedge 1) - 5\theta } \quad \mbox{ and}\quad R_{n} = \square C_{n}(r +\theta ^{-3})\frac{\Upsilon x^{2}} {n},}$$

    as long as

    • θ is small enough for C n to be positive,

    • x is large enough for the probability to be large and

    • n is large enough for R n to be negligible.

    Typically, r, δ, δ′, θ and \(\Upsilon\) are bounded w.r.t. n and x has to be of the order of \(\log (\vert \mathcal{K}\vert \vee n)\) for the remainder to be negligible. In particular, \(\mathcal{K}\) may grow with n as long as (i) \(\log (\vert \mathcal{K}\vert \vee n)^{2}\) remains negligible with respect to n and (ii) \(\Upsilon\) does not depend on n.

  • If \(\mathop{\mathrm{pen}}\nolimits (k) = 2P\chi _{k}/n\), that is if δ = δ′ = 1 and r = C = 0 in (3.18), the estimator \(\hat{s}_{\hat{k}}\) satisfies an asymptotically optimal oracle inequality, i.e. \(C_{n} \rightarrow 1\) as \(n \rightarrow \infty\), since θ can be chosen as close to 0 as desired. Take for instance \(\theta = (\log n)^{-1}\).

  • In general \(P\chi _{k}\) depends on the unknown s and this last penalty cannot be used in practice. Fortunately, its empirical counterpart \(\mathop{\mathrm{pen}}\nolimits (k) = 2P_{n}\chi _{k}/n\) satisfies (3.18) with δ = 1 −θ, δ′ = 1 +θ, r = 1∕θ and \(C = 2\vert \mathcal{K}\vert\) for any θ ∈ (0, 1) and in particular \(\theta = (\log n)^{-1}\) (see (6.34) in Proposition (B.1)). Hence, the estimator \(\hat{s}_{\hat{k}}\) selected with this choice of penalty also satisfies an asymptotically optimal oracle inequality, by the same argument.

  • Finally, we only get an oracle inequality when δ > 0, that is when pen(k) is larger than \((2P\chi _{k} - P\Theta _{k})/n\) up to some residual term. We discuss the necessity of this condition in Sect. (4).

3.3 Main Examples

This section shows that Theorem (3.1) can be applied in the examples. In addition, it provides the computation of \(2P\chi _{k}/n\) in some specific cases of special interest.

Example 1 (Continued)

Proposition 3.2

Let \(\left \{\left.k_{S},S \in \mathcal{S}\right.\right \}\) be a collection of projection kernels. Assumptions  (3.11), (3.12), (3.14), (3.15) and  (3.16) hold for any \(\Upsilon \geq \Gamma (1 + \left \Vert s\right \Vert _{\infty })\) , where \(\Gamma\) is given by  (2.2) . In addition, Assumption  (3.13) is satisfied under either of the following classical assumptions (see [20, Chap.  7 ]):

$$\displaystyle{ \forall S,S' \in \mathcal{S},\qquad \mbox{ either }S \subset S'\mbox{ or }S' \subset S, }$$
(3.19)

or

$$\displaystyle{ \forall S \in \mathcal{S},\qquad \left \Vert s_{k_{S}}\right \Vert _{\infty }\leq \frac{\Upsilon } {2}. }$$
(3.20)

These particular projection kernels satisfy for all \((x,y) \in \mathbb{X}^{2}\)

$$\displaystyle\begin{array}{rcl} A_{k_{S}}(x,y)& =& \int _{\mathbb{X}}k_{S}(x,z)k_{S}(y,z)d\mu (z) =\sum _{(i,j)\in \mathcal{I}_{S}^{2}}\varphi _{i}(x)\varphi _{j}(y)\int _{\mathbb{X}}\varphi _{i}(z)\varphi _{j}(z)d\mu (z) = k_{S}(x,y). {}\\ \end{array}$$

In particular, \(\Theta _{k_{S}} =\chi _{k_{S}} =\sum _{i\in \mathcal{I}_{S}}\varphi _{i}^{2}\) and \(2P\chi _{k_{S}} - P\Theta _{k_{S}} = P\chi _{k_{S}}\).

Moreover, it appears that the function \(\Theta _{k_{S}}\) is constant for some linear spaces S of interest (see [19] for more details). Let us mention one particular case studied further on in the sequel. Suppose \(\mathcal{S}\) is a collection of regular histogram spaces S on \(\mathbb{X}\), that is, any \(S \in \mathcal{S}\) is a space of piecewise constant functions on a partition \(\mathcal{I}_{S}\) of \(\mathbb{X}\) such that \(\mu (i) = 1/D_{S}\) for any i in \(\mathcal{I}_{S}\). Assumption (3.20) is satisfied for this collection as soon as \(\Upsilon \geq 2\left \Vert s\right \Vert _{\infty }\). The family \((\varphi _{i})_{i\in \mathcal{I}_{S}}\), where \(\varphi _{i} = \sqrt{D_{S}}\mathbf{1}_{i}\), is an orthonormal basis of S and

$$\displaystyle{\chi _{k_{S}} =\sum _{i\in \mathcal{I}_{S}}\varphi _{i}^{2} = D_{ S}.}$$

Hence, \(P\chi _{k_{S}} = D_{S}\) and \(2D_{S}/n\) can actually be used as a penalty to ensure that the selected estimator satisfies an asymptotically optimal oracle inequality. Moreover, in this example it is actually necessary to choose a penalty larger than \(D_{S}/n\) to get an oracle inequality (see [19] or Sect. (4) for more details).
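For regular histograms the whole selection procedure has a closed form: since \(\hat{s}_{S} =\sum _{i}(P_{n}\varphi _{i})\varphi _{i}\), one has \(P_{n}\gamma (\hat{s}_{S}) = -\sum _{i}(P_{n}\varphi _{i})^{2} = -\frac{D_{S}} {n^{2}}\sum _{i}N_{i}^{2}\), where \(N_{i}\) denotes the number of observations falling in bin i. Here is a minimal sketch (ours) of the corresponding dimension selection, assuming the data lie in [0, 1]; the function name is hypothetical.

```python
import numpy as np

def select_histogram_dimension(X, dims):
    """Select the number of bins D_S of a regular histogram on [0, 1] by
    minimizing P_n(gamma(s_hat_S)) + 2 * D_S / n, i.e. the penalized
    criterion (2.6) with the penalty 2 * P(chi_{k_S}) / n = 2 * D_S / n."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    best_D, best_crit = None, np.inf
    for D in dims:
        counts, _ = np.histogram(X, bins=D, range=(0.0, 1.0))
        crit = -D * np.sum(counts.astype(float) ** 2) / n**2 + 2.0 * D / n
        if crit < best_crit:
            best_D, best_crit = D, crit
    return best_D
```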

Example 2 (Continued)

Proposition 3.3

Let \(\left \{\left.k_{K,h},(K,h) \in \mathcal{H}\right.\right \}\) be a collection of approximation kernels. Assumptions  (3.11) – (3.16) hold with \(\Gamma = 1\) , for any

$$\displaystyle{\Upsilon \geq \max _{K}\left \{\left.\frac{\vert K(0)\vert } {\left \Vert K\right \Vert ^{2}} \vee \left (\left.1 + 2\left \Vert s\right \Vert _{\infty }\left \Vert K\right \Vert _{1}^{2}\right.\right )\right.\right \},}$$

as soon as  (2.3) is satisfied.

These approximation kernels satisfy, for all \(x \in \mathbb{R}\),

$$\displaystyle\begin{array}{rcl} \chi _{k_{K,h}}(x)& =& k_{K,h}(x,x) = \frac{K(0)} {h}, {}\\ \Theta _{k_{K,h}}(x)& =& A_{k_{K,h}}(x,x) = \frac{1} {h^{2}}\int _{\mathbb{R}}K\left (\left.\frac{x - y} {h} \right.\right )^{2}dy = \frac{\left \Vert K\right \Vert ^{2}} {h}. {}\\ \end{array}$$

Therefore, the optimal penalty \(2P\chi _{k_{K,h}}/n = 2K(0)/(nh)\) can be computed in practice and yields an asymptotically optimal selection criterion. Surprisingly, the lower bound \(2P\chi _{k_{K,h}}/n - P\Theta _{k_{K,h}}/n = (2K(0) -\left \Vert K\right \Vert ^{2})/(nh)\) can be negative if \(\left \Vert K\right \Vert ^{2}> 2K(0)\), even if K(0) > 0, which is usually the case for Parzen kernels. In this case, a minimizer of (2.6) satisfies an oracle inequality, even if this criterion is not penalized. This remarkable fact is illustrated in the simulation study in Sect. (5).

Example 3 (Continued)

Proposition 3.4

Let \(\left \{\left.k_{w},w \in \mathcal{W}\right.\right \}\) be a collection of weighted projection kernels. Assumption  (3.11) is valid for \(\Upsilon \geq \Gamma (1 + \left \Vert s\right \Vert _{\infty })\) , where \(\Gamma\) is given by  (2.4) . Moreover  (3.11) and  (2.1) imply  (3.12) – (3.16) .

For these weighted projection kernels, for all \(x \in \mathbb{X}\)

$$\displaystyle\begin{array}{rcl} \chi _{k_{w}}(x)& =& \sum _{i=1}^{p}w_{ i}\varphi _{i}(x)^{2},\qquad \mbox{ hence}\qquad P\chi _{ k_{w}} =\sum _{ i=1}^{p}w_{ i}P\varphi _{i}^{2} \qquad \mbox{ and} {}\\ \Theta _{k_{w}}(x)& =& \sum _{i,j=1}^{p}w_{ i}w_{j}\varphi _{i}(x)\varphi _{j}(x)\int _{\mathbb{X}}\varphi _{i}(z)\varphi _{j}(z)d\mu (z) =\sum _{ i=1}^{p}w_{ i}^{2}\varphi _{ i}(x)^{2} \leq \chi _{ k_{w}}(x). {}\\ \end{array}$$

In this case, the optimal penalty \(2P\chi _{k_{w}}/n\) has to be estimated in general. However, in the following example it can still be directly computed.

Let \(\mathbb{X} = [0,1]\), let μ be the Lebesgue measure. Let \(\varphi _{0} \equiv 1\) and, for any j ≥ 1,

$$\displaystyle{\varphi _{2j-1}(x) = \sqrt{2}\cos (2\pi jx),\qquad \varphi _{2j}(x) = \sqrt{2}\sin (2\pi jx).}$$

Consider some odd p and a family of weights \(\mathcal{W} = \left \{\left.w_{i},i = 0,\ldots,p\right.\right \}\) such that, for any \(w \in \mathcal{W}\) and any \(i = 1,\ldots,p/2\), \(w_{2i-1} = w_{2i} =\tau _{i}\). In this case, the values of the functions of interest do not depend on x

$$\displaystyle{\chi _{k_{w}} = w_{0} +\sum _{ j=1}^{p/2}\tau _{ j},\qquad \Theta _{k_{w}} = w_{0}^{2} +\sum _{ j=1}^{p/2}\tau _{ j}^{2}.}$$

In particular, this family includes Pinsker’s and Tikhonov’s weights.

4 Minimal Penalties for Kernel Selection

The purpose of this section is to see whether the lower bound \(\mathop{\mathrm{pen}}\nolimits _{\mathrm{min}}(k):= (2P\chi _{k} - P\Theta _{k})/n\) is sharp in Theorem (3.1). To do so we first need the following result which links \(\left \Vert s -\hat{s}_{k}\right \Vert\) to deterministic quantities, thanks to concentration tools.

4.1 Bias-Variance Decomposition with High Probability

Proposition 4.1

Assume \(\left \{\left.k\right.\right \}_{k\in \mathcal{K}}\) is a finite collection of kernels satisfying Assumptions  (3.11) – (3.16) . For all x > 1, for all η in (0,1], with probability larger than \(1 -\square \vert \mathcal{K}\vert e^{-x}\)

$$\displaystyle\begin{array}{rcl} & \left \Vert s_{k} -\hat{s}_{k}\right \Vert ^{2} \leq (1+\eta )\frac{P\Theta _{k}} {n} + \square \frac{\Upsilon x^{2}} {\eta n},& {}\\ & \frac{P\Theta _{k}} {n} \leq (1+\eta )\left \Vert s_{k} -\hat{s}_{k}\right \Vert ^{2} + \square \frac{\Upsilon x^{2}} {\eta n}.& {}\\ \end{array}$$

Moreover, for all x > 1 and for all η in (0,1), with probability larger than \(1 -\square \vert \mathcal{K}\vert e^{-x}\) , for all \(k \in \mathcal{K}\) , each of the following inequalities holds

$$\displaystyle\begin{array}{rcl} & \left \Vert s -\hat{s}_{k}\right \Vert ^{2} \leq (1+\eta )\left (\left.\left \Vert s - s_{k}\right \Vert ^{2} + \frac{P\Theta _{k}} {n} \right.\right ) + \square \frac{\Upsilon x^{2}} {\eta ^{3}n},& {}\\ & \left \Vert s - s_{k}\right \Vert ^{2} + \frac{P\Theta _{k}} {n} \leq (1+\eta )\left \Vert s -\hat{s}_{k}\right \Vert ^{2} + \square \frac{\Upsilon x^{2}} {\eta ^{3}n}. & {}\\ \end{array}$$

This means that, not only in expectation but also with high probability, the term \(\left \Vert s -\hat{s}_{k}\right \Vert ^{2}\) can be decomposed into a bias term \(\left \Vert s - s_{k}\right \Vert ^{2}\) and a “variance” term \(P\Theta _{k}/n\). The bias term measures the capacity of the kernel k to approximate s whereas \(P\Theta _{k}/n\) is the price to pay for replacing s k by its empirical version \(\hat{s}_{k}\). In this sense, \(P\Theta _{k}/n\) measures the complexity of the kernel k in a way which is completely adapted to our problem of density estimation. Even if it does not seem like a natural measure of complexity at first glance, note that in the previous examples it is indeed always linked to a natural complexity. When dealing with regular histograms defined on [0, 1], \(P\Theta _{k_{S}}\) is the dimension of the considered space S, whereas for approximation kernels \(P\Theta _{k_{K,h}}\) is proportional to the inverse of the considered bandwidth h.

4.2 Some General Results About the Minimal Penalty

In this section, we assume that we are in the asymptotic regime where the number of observations \(n \rightarrow \infty\). In particular, the asymptotic notation refers to this regime.

From now on, the family \(\mathcal{K} = \mathcal{K}_{n}\) may depend on n as long as both \(\Gamma\) and \(\Upsilon\) remain absolute constants that do not depend on n. Indeed, in the previous examples, this seems a reasonable regime. Since \(\mathcal{K}_{n}\) now depends on n, our selected \(\hat{k} =\hat{ k}_{n}\) also depends on n.

To prove that the lower bound \(\mathop{\mathrm{pen}}\nolimits _{\mathrm{min}}(k)\) is sharp, we need to show that the estimator chosen by minimizing (2.6) with a penalty smaller than \(\mathop{\mathrm{pen}}\nolimits _{\mathrm{min}}\) does not satisfy an oracle inequality. This is only possible if the \(\left \Vert s -\hat{s}_{k}\right \Vert ^{2}\)’s are not all of the same order and if they are larger than the remaining term \(\square (r +\theta ^{-3})\Upsilon x^{2}/n\). From an asymptotic point of view, we rewrite this, thanks to Proposition (4.1), as follows: for all n ≥ 1, there exist \(k_{0,n}\) and \(k_{1,n}\) in \(\mathcal{K}_{n}\) such that

$$\displaystyle{ \left \Vert s - s_{k_{1,n}}\right \Vert ^{2}\! +\! \frac{P\Theta _{k_{1,n}}} {n} \! \gg \!\left \Vert s - s_{k_{0,n}}\right \Vert ^{2}\! +\! \frac{P\Theta _{k_{0,n}}} {n} \! \gg \!\square \left (\left.\!r + \frac{1} {\theta ^{3}} \!\right.\right )\frac{\Upsilon x^{2}} {n}, }$$
(4.21)

where \(a_{n} \gg b_{n}\) means that \(b_{n}/a_{n} \rightarrow 0\) as \(n \rightarrow \infty\). More explicitly, denoting by o(1) a sequence only depending on n, tending to 0 as n tends to infinity and whose value may change from line to line, one assumes that there exist positive constants \(c_{s}\) and \(c_{R}\) such that, for all n ≥ 1, there exist \(k_{0,n}\) and \(k_{1,n}\) in \(\mathcal{K}_{n}\) satisfying

$$\displaystyle\begin{array}{rcl} \left \Vert s - s_{k_{0,n}}\right \Vert ^{2} + \frac{P\Theta _{k_{0,n}}} {n} \leq c_{s}\ \mathrm{o}(1)\left (\left.\left \Vert s - s_{k_{1,n}}\right \Vert ^{2} + \frac{P\Theta _{k_{1,n}}} {n} \right.\right )& &{}\end{array}$$
(4.22)
$$\displaystyle\begin{array}{rcl} \frac{(\log (\vert \mathcal{K}_{n}\vert \vee n))^{3}} {n} \leq c_{R}\ \mathrm{o}(1)\left (\left.\left \Vert s - s_{k_{0,n}}\right \Vert ^{2} + \frac{P\Theta _{k_{0,n}}} {n} \right.\right ).& &{}\end{array}$$
(4.23)

We put a log-cube factor in the remaining term to allow some choices of \(\theta =\theta _{n} \rightarrow 0\) and \(r = r_{n} \rightarrow +\infty\) as \(n \rightarrow \infty\).

But (4.22) and (4.23) (or (4.21)) are not sufficient. Indeed, the following result explains what happens when the bias terms are always the leading terms.

Corollary 4.2

Let \((\mathcal{K}_{n})_{n\geq 1}\) be a sequence of finite collections of kernels k satisfying Assumptions  (3.11) – (3.16) for a positive constant \(\Upsilon\) independent of n and such that

$$\displaystyle{ \frac{1} {n} = c_{b}\ \mathrm{o}(1)\inf _{k\in \mathcal{K}_{n}}\frac{\left \Vert s - s_{k}\right \Vert ^{2}} {P\Theta _{k}}, }$$
(4.24)

for some positive constant c b .

Assume that there exist real numbers of any sign δ′ ≥δ and a sequence (r n ) n≥1 of nonnegative real numbers such that, for all n ≥ 1, with probability larger than 1 −□∕n 2 , for all \(k \in \mathcal{K}_{n}\) ,

$$\displaystyle\begin{array}{rcl} \delta \frac{P\Theta _{k}} {n} -& &\square _{\delta,\delta ',\Upsilon }\frac{r_{n}\log (n \vee \vert \mathcal{K}_{n}\vert )^{2}} {n} {}\\ & & \quad \leq \mathop{\mathrm{pen}}\nolimits (k) -\frac{2P\chi _{k} - P\Theta _{k}} {n} \leq \delta '\frac{P\Theta _{k}} {n} + \square _{\delta,\delta ',\Upsilon }\frac{r_{n}\log (n \vee \vert \mathcal{K}_{n}\vert )^{2}} {n}. {}\\ \end{array}$$

Then, with probability larger than 1 −□∕n 2 ,

$$\displaystyle\begin{array}{rcl} & & \left \Vert s -\hat{s}_{\hat{k}_{n}}\right \Vert ^{2} \leq {}\\ & &\quad (1 + \square _{\delta,\delta ',\Upsilon,c_{b}}\ \mathrm{o}(1))\inf _{k\in \mathcal{K}_{n}}\left \Vert s -\hat{s}_{k}\right \Vert ^{2} + \square _{\delta,\delta ',\Upsilon }\left (\left.r_{n} +\log n\right.\right )\frac{\log (n \vee \vert \mathcal{K}_{n}\vert )^{2}} {n}. {}\\ \end{array}$$

The proof easily follows by taking θ = (logn)−1 in (3.17), η = 2 for instance in Proposition (4.1) and by using Assumption (4.24) and the bounds on pen(k). This result shows that the estimator \(\hat{s}_{\hat{k}_{n}}\) satisfies an asymptotically optimal oracle inequality when condition (4.24) holds, whatever the values of δ and δ′ even when they are negative. This proves that the lower bound penmin is not sharp in this case.

Therefore, we have to assume that at least one bias \(\left \Vert s - s_{k}\right \Vert ^{2}\) is negligible with respect to \(P\Theta _{k}/n\). Actually, to conclude, we assume that this happens for k 1, n in (4.21).

Theorem 4.3

Let \((\mathcal{K}_{n})_{n\geq 1}\) be a sequence of finite collections of kernels satisfying Assumptions  (3.11) – (3.16) , with \(\Upsilon\) not depending on n. Each \(\mathcal{K}_{n}\) is also assumed to satisfy  (4.22) and  (4.23) with a kernel \(k_{1,n} \in \mathcal{K}_{n}\) in  (4.22) such that

$$\displaystyle{ \left \Vert s - s_{k_{1,n}}\right \Vert ^{2} \leq c\ \mathrm{o}(1)\frac{P\Theta _{k_{1,n}}} {n}, }$$
(4.25)

for some fixed positive constant c. Suppose that there exist δ ≥ δ′ > 0 and a sequence (r n ) n≥1 of nonnegative real numbers such that \(r_{n} \leq \square \log (\vert \mathcal{K}_{n}\vert \vee n)\) and such that for all n ≥ 1, with probability larger than \(1 -\square n^{-2}\) , for all \(k \in \mathcal{K}_{n}\) ,

$$\displaystyle\begin{array}{rcl} \frac{2P\chi _{k} - P\Theta _{k}} {n} & -& \delta \frac{P\Theta _{k}} {n} -\square _{\delta,\delta ',\Upsilon }\frac{r_{n}\log (\vert \mathcal{K}_{n}\vert \vee n)^{2}} {n} \leq \mathop{\mathrm{pen}}\nolimits (k) \\ & \leq & \frac{2P\chi _{k} - P\Theta _{k}} {n} -\delta '\frac{P\Theta _{k}} {n} + \square _{\delta,\delta ',\Upsilon }\frac{r_{n}\log (\vert \mathcal{K}_{n}\vert \vee n)^{2}} {n}.{}\end{array}$$
(4.26)

Then, with probability larger than 1 −□∕n 2 , the following holds

$$\displaystyle{ P\Theta _{\hat{k}_{n}} \geq \left (\left.\frac{\delta '} {\delta } + \square _{\delta,\delta ',\Upsilon,c,c_{s},c_{R}}\ \mathrm{o}(1)\right.\right )P\Theta _{k_{1,n}}\quad \mbox{ and} }$$
(4.27)
$$\displaystyle\begin{array}{rcl} \left \Vert s -\hat{s}_{\hat{k}_{n}}\right \Vert ^{2} \geq & &\left (\left.\frac{\delta '} {\delta } + \square _{\delta,\delta ',\Upsilon,c,c_{s},c_{R}}\ \mathrm{o}(1)\right.\right )\left \Vert s -\hat{s}_{k_{1,n}}\right \Vert ^{2} \gg \left \Vert s -\hat{s}_{k_{0,n}}\right \Vert ^{2} \geq \inf _{ k\in \mathcal{K}_{n}}\left \Vert s -\hat{s}_{k}\right \Vert ^{2}.{}\end{array}$$
(4.28)

By (4.28), under the conditions of Theorem (4.3), the estimator \(\hat{s}_{\hat{k}_{ n}}\) cannot satisfy an oracle inequality, hence, the lower bound \((2P\chi _{k} - P\Theta _{k})/n\) in Theorem (3.1) is sharp. This shows that \((2P\chi _{k} - P\Theta _{k})/n\) is a minimal penalty in the sense of [5] for kernel selection. When

$$\displaystyle{\mathop{\mathrm{pen}}\nolimits (k) = \frac{2P\chi _{k} - P\Theta _{k}} {n} +\kappa \frac{P\Theta _{k}} {n},}$$

the complexity \(P\Theta _{\hat{k}_{n}}\) presents a sharp phase transition when κ becomes positive. Indeed, when κ < 0 it follows from (4.27) that the complexity \(P\Theta _{\hat{k}_{n}}\) is asymptotically larger than \(P\Theta _{k_{1,n}}\). But on the other hand, as a consequence of Theorem (3.1), when κ > 0, this complexity becomes smaller than

$$\displaystyle\begin{array}{rcl} \square _{\kappa }n\inf _{k\in \mathcal{K}_{n}}\left (\left.\left \Vert s - s_{k}\right \Vert ^{2} + \frac{P\Theta _{k}} {n} \right.\right )& \leq & \square _{\kappa }\left (\left.n\left \Vert s - s_{k_{0,n}}\right \Vert ^{2} + P\Theta _{ k_{0,n}}\right.\right ) {}\\ & \ll & \square _{\kappa }\left (\left.n\left \Vert s - s_{k_{1,n}}\right \Vert ^{2} + P\Theta _{ k_{1,n}}\right.\right ) \leq \square _{\kappa }P\Theta _{k_{1,n}}. {}\\ \end{array}$$

4.3 Examples

Example 1 (Continued)

Let \(\mathcal{S} = \mathcal{S}_{n}\) be the collection of spaces of regular histograms on [0, 1] with dimensions \(\left \{\left.1,\ldots,n\right.\right \}\) and let \(\hat{S}= \hat{S} _{n}\) be the space selected by the penalized criterion. Recall that, for any \(S \in \mathcal{S}_{n}\), the orthonormal basis is defined by \(\varphi _{i} = \sqrt{D_{S}}\mathbf{1}_{i}\) and \(P\Theta _{k_{S}} = D_{S}\). Assume that s is α-Hölderian, with α ∈ (0, 1] and α-Hölderian norm L. It is well known (see for instance Section 1.3.3 of [4]) that the bias is bounded above by

$$\displaystyle{\left \Vert s - s_{k_{S}}\right \Vert ^{2} \leq \square _{ L}D_{S}^{-2\alpha }.}$$

In particular, if \(D_{S_{1}} = n\),

$$\displaystyle{\left \Vert s - s_{k_{S_{ 1}}}\right \Vert ^{2} \leq \square _{ L}n^{-2\alpha } \ll 1 = \frac{D_{S_{1}}} {n} = \frac{P\Theta _{k_{S_{ 1}}}} {n}.}$$

Thus, (4.25) holds for kernel \(k_{S_{1}}\). Moreover, if \(D_{S_{0}} = \lfloor \sqrt{n}\rfloor\),

$$\displaystyle\begin{array}{rcl} \frac{(\log (n \vee \vert \mathcal{S}_{n}\vert ))^{3}} {n} \ll \left \Vert s - s_{k_{S_{ 0}}}\right \Vert ^{2} + \frac{D_{S_{0}}} {n} \leq & &\square _{L}\left (\left.\frac{1} {n^{\alpha }} + \frac{1} {\sqrt{n}}\right.\right ) \ll \left \Vert s - s_{k_{S_{ 1}}}\right \Vert ^{2} + \frac{D_{S_{1}}} {n}. {}\\ \end{array}$$

Hence, (4.21) holds with \(k_{0,n} = k_{S_{0}}\) and \(k_{1,n} = k_{S_{1}}\). Therefore, Theorem (4.3) and Theorem (3.1) apply in this example. If \(\mathop{\mathrm{pen}}\nolimits (k_{S}) = (1-\delta )D_{S}/n\), the dimension \(D_{\widehat{S}_{n}} \geq \square _{\delta }n\) and \(\hat{s}_{k_{\widehat{S}_{n}}}\) is not consistent and does not satisfy an oracle inequality. On the other hand, if \(\mathop{\mathrm{pen}}\nolimits (k_{S}) = (1+\delta )D_{S}/n\),

$$\displaystyle{D_{\,\widehat{S}_{n}} \leq \square _{L,\delta }\left (\left.n^{1-\alpha } + \sqrt{n}\right.\right ) \ll D_{ S_{1}} = n}$$

and \(\hat{s}_{k_{\widehat{S}_{n}}}\) satisfies an oracle inequality which implies that, with probability larger than \(1 -\square /n^{2}\),

$$\displaystyle{\left \Vert s -\hat{s}_{k\,_{\,\widehat{S}_{n}}}\right \Vert ^{2} \leq \square _{\alpha,L,\delta }n^{-2\alpha /(2\alpha +1)},}$$

by taking \(D_{S} \simeq n^{1/(2\alpha +1)}\). It achieves the minimax rate of convergence over the class of α-Hölderian functions.
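For completeness, the rate follows from balancing the bias and the variance in the oracle bound:

$$\displaystyle{\inf _{S\in \mathcal{S}_{n}}\left (\left.\square _{L}D_{S}^{-2\alpha } + \frac{D_{S}} {n} \right.\right ) \leq \square _{L}\,n^{-2\alpha /(2\alpha +1)},}$$

both terms being of order \(n^{-2\alpha /(2\alpha +1)}\) when \(D_{S} \simeq n^{1/(2\alpha +1)}\).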

From Theorem (3.1), the penalty \(\mathop{\mathrm{pen}}\nolimits (k_{S}) = 2D_{S}/n\) provides an estimator \(\hat{s}_{k_{\widehat{S}_{n}}}\) that achieves an asymptotically optimal oracle inequality. Therefore the optimal penalty is equal to twice the minimal one. In particular, the slope heuristic of [5] holds in this example, as already noticed in [19].

Finally, to illustrate Corollary (4.2), let us take s(x) = 2x and the collection of regular histograms with dimension in \(\{1,\ldots,\lfloor n^{\beta }\rfloor \}\), with β < 1∕3. Simple calculations show that

$$\displaystyle{\frac{\left \Vert s - s_{k_{S}}\right \Vert ^{2}} {D_{S}} \geq \square D_{S}^{-3} \geq \square n^{-3\beta } \gg n^{-1}.}$$

Hence (4.24) applies and the penalized estimator with penalty \(\mathop{\mathrm{pen}}\nolimits (k_{S}) \simeq \delta \frac{D_{S}} {n}\) always satisfies an oracle inequality, even if δ = 0 or δ < 0. This was actually expected, since the criterion is likely to select the largest dimension, which is also the oracle choice in this case.

Example 2 (Continued)

Let K be a fixed function, let \(\mathcal{H} = \mathcal{H}_{n}\) denote the following grid of bandwidths

$$\displaystyle{\mathcal{H} = \left \{\left.\frac{\left \Vert K\right \Vert _{\infty }\left \Vert K\right \Vert _{1}} {i} \quad /\quad i = 1,\ldots,n\right.\right \} }$$

and let \(\hat{h} =\hat{ h}_{n}\) be the selected bandwidth. Assume as before that s is a density on [0, 1] that belongs to the Nikol’ski class \(\mathcal{N}(\alpha,L)\) with α ∈ (0, 1] and L > 0. By Proposition 1.5 in [25], if K satisfies \(\int \left \vert u\right \vert ^{\alpha }\left \vert K(u)\right \vert du <\infty\)

$$\displaystyle{\left \Vert s - s_{k_{K,h}}\right \Vert ^{2} \leq \square _{\alpha,K,L}h^{2\alpha }.}$$

In particular, when \(h_{1} = \left \Vert K\right \Vert _{\infty }\left \Vert K\right \Vert _{1}/n\),

$$\displaystyle{\left \Vert s - s_{k_{K,h_{ 1}}}\right \Vert ^{2} \leq \square _{\alpha,K,L}n^{-2\alpha } \ll \frac{P\Theta _{k_{K,h_{1}}}} {n} = \frac{\left \Vert K\right \Vert ^{2}} {\left \Vert K\right \Vert _{\infty }\left \Vert K\right \Vert _{1}}.}$$

On the other hand, for \(h_{0} = \left \Vert K\right \Vert _{\infty }\left \Vert K\right \Vert _{1}/\left \lfloor \sqrt{n}\right \rfloor\),

$$\displaystyle\begin{array}{rcl} \frac{(\log (n \vee \vert \mathcal{H}_{n}\vert ))^{2}} {n} & \ll & \left \Vert s - s_{k_{K,h_{ 0}}}\right \Vert ^{2} + \frac{P\Theta _{k_{K,h_{0}}}} {n} {}\\ & & \qquad \leq \square _{K,\alpha,L}\left (\left.\frac{1} {n^{\alpha }} + \frac{1} {\sqrt{n}}\right.\right ) \ll \left \Vert s - s_{k_{K,h_{ 1}}}\right \Vert ^{2} + \frac{P\Theta _{k_{K,h_{1}}}} {n}. {}\\ \end{array}$$

Hence, (4.21) and (4.25) hold with kernels \(k_{0,n} = k_{K,h_{0}}\) and \(k_{1,n} = k_{K,h_{1}}\). Therefore, Theorems (4.3) and (3.1) apply in this example. If for some δ > 0 we set \(\mathop{\mathrm{pen}}\nolimits (k_{K,h}) = (2K(0) -\left \Vert K\right \Vert ^{2} -\delta \left \Vert K\right \Vert ^{2})/(nh)\), then \(\hat{h}_{n} \leq \square _{\delta,K}n^{-1}\) and \(\hat{s}_{k_{K,\hat{h}_{n}}}\) is not consistent and does not satisfy an oracle inequality. On the other hand, if \(\mathop{\mathrm{pen}}\nolimits (k_{K,h}) = (2K(0) -\left \Vert K\right \Vert ^{2} +\delta \left \Vert K\right \Vert ^{2})/(nh)\), then

$$\displaystyle{\hat{h}_{n} \geq \square _{\delta,K,L}\left (\left.n^{1-\alpha } + \sqrt{n}\right.\right )^{-1} \gg \square _{\delta,K,L}n^{-1},}$$

and \(\hat{s}_{k_{K,\hat{h}_{n}}}\) satisfies an oracle inequality which implies that, with probability larger than \(1 -\square /n^{2}\),

$$\displaystyle{\left \Vert s -\hat{s}_{k_{K,\hat{h}_{n}}}\right \Vert ^{2} \leq \square _{\alpha,K,L,\delta }n^{-2\alpha /(2\alpha +1)},}$$

by taking \(h = \left \Vert K\right \Vert _{\infty }\left \Vert K\right \Vert _{1}/\left \lfloor n^{1/(2\alpha +1)}\right \rfloor \in \mathcal{H}.\) In particular it achieves the minimax rate of convergence over the class \(\mathcal{N}(\alpha,L)\). Finally, if \(\mathop{\mathrm{pen}}\nolimits (k_{K,h}) = 2K(0)/(nh)\), \(\hat{s}_{k_{K,\hat{h}_{n}}}\) achieves an asymptotically optimal oracle inequality, thanks to Theorem (3.1).

The minimal penalty is therefore

$$\displaystyle{\mathop{\mathrm{pen}}\nolimits _{\mathrm{min}}(k_{K,h}) = \frac{2K(0) -\left \Vert K\right \Vert ^{2}} {nh}.}$$

In this case, the optimal penalty \(\mathop{\mathrm{pen}}\nolimits _{\mathrm{opt}}(k_{K,h}) = 2K(0)/(nh)\) derived from Theorem (3.1) is not twice the minimal one, but one still has, if \(2K(0)\neq \left \Vert K\right \Vert ^{2}\),

$$\displaystyle{\mathop{\mathrm{pen}}\nolimits _{\mathrm{opt}}(k_{K,h}) = \frac{2K(0)} {2K(0) -\left \Vert K\right \Vert ^{2}}\mathop{ \mathrm{pen}}\nolimits _{\mathrm{min}}(k_{K,h}),}$$

even though they can be of opposite signs depending on K. This type of nontrivial relationship between optimal and minimal penalties has already been underlined in [2], in a regression framework, for the selection of linear estimators.

Note that if one allows two kernel functions K 1 and K 2 in the family of kernels such that \(2K_{1}(0)\neq \left \Vert K_{1}\right \Vert ^{2}\), \(2K_{2}(0)\neq \left \Vert K_{2}\right \Vert ^{2}\) and

$$\displaystyle{ \frac{2K_{1}(0)} {2K_{1}(0) -\left \Vert K_{1}\right \Vert ^{2}}\neq \frac{2K_{2}(0)} {2K_{2}(0) -\left \Vert K_{2}\right \Vert ^{2}},}$$

then there is no absolute constant multiplicative factor linking the minimal penalty and the optimal one.

5 Small Simulation Study

In this section we illustrate Theorems (3.1) and (4.3) on simulated data. We focus on approximation kernels only, since projection kernels have already been discussed in [19].

We observe an i.i.d. sample of size n = 100 from the standard Gaussian distribution. For a fixed parameter a ≥ 0, we consider the family of kernels

$$\displaystyle{k_{K_{a},h}(x,y) = \frac{1} {h}K_{a}\left (\left.\frac{x - y} {h} \right.\right )\qquad \mbox{ with}\quad h \in \mathcal{H} = \left \{\left. \frac{1} {2i},\ i = 1,\ldots,50\right.\right \},}$$

where for \(x \in \mathbb{R},\quad K_{a}(x) = \frac{1} {2\sqrt{2\pi }}\left (\left.e^{-\frac{(x-a)^{2}} {2} } + e^{-\frac{(x+a)^{2}} {2} }\right.\right ).\)

In particular the kernel estimator with a = 0 is the classical Gaussian kernel estimator. Moreover

$$\displaystyle{K_{a}(0) = \frac{1} {\sqrt{2\pi }}\exp \left (\left.-\frac{a^{2}} {2} \right.\right )\quad \mbox{ and}\quad \left \Vert K_{a}\right \Vert ^{2} = \frac{1 + e^{-a^{2} }} {4\sqrt{\pi }}.}$$

Thus, depending on the value of a, the minimal penalty \((2K_{a}(0) -\left \Vert K_{a}\right \Vert ^{2})/(nh)\) may be negative. We study the behavior of the penalized criterion

$$\displaystyle{\mathcal{C}_{\mathop{\mathrm{pen}}\nolimits }\left (\left.k_{K_{a},h}\right.\right ) = P_{n}\gamma (\hat{s}_{k_{K_{a},h}}) +\mathop{ \mathrm{pen}}\nolimits (k_{K_{a},h})}$$

with penalties of the form

$$\displaystyle{ \mathop{\mathrm{pen}}\nolimits \left (\left.k_{K_{a},h}\right.\right ) = \frac{2K_{a}(0) -\left \Vert K_{a}\right \Vert ^{2}} {nh} +\kappa \frac{\left \Vert K_{a}\right \Vert ^{2}} {nh}, }$$
(5.29)

for different values of κ (κ = −1, 0, 1) and a (a = 0, 1.5, 2, 3). Figure (1) displays the estimators selected with the optimal penalty \(2K_{a}(0)/(nh)\) for the different values of a, and Fig. (2) shows the evolution of the different penalized criteria as a function of 1∕h.
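Here is a minimal sketch (ours) of this experiment, assuming the closed forms of \(K_{a}(0)\) and \(\left \Vert K_{a}\right \Vert ^{2}\) given above and approximating \(\left \Vert \hat{s}\right \Vert ^{2}\) numerically on a grid; all function names are hypothetical.

```python
import numpy as np

def K_a(x, a):
    """The two-bump kernel K_a of the simulation study."""
    return (np.exp(-(x - a)**2 / 2) + np.exp(-(x + a)**2 / 2)) / (2 * np.sqrt(2 * np.pi))

def criterion_curve(X, a, kappa, bandwidths, grid=np.linspace(-8, 8, 3201)):
    """Penalized criterion (2.6) with the penalty (5.29), for each bandwidth."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    dx = grid[1] - grid[0]
    K0 = np.exp(-a**2 / 2) / np.sqrt(2 * np.pi)       # K_a(0)
    sqK = (1 + np.exp(-a**2)) / (4 * np.sqrt(np.pi))  # ||K_a||^2
    crits = []
    for h in bandwidths:
        s_hat = K_a((grid[:, None] - X[None, :]) / h, a).sum(axis=1) / (n * h)
        sq_norm = np.sum(s_hat**2) * dx               # ||s_hat||^2 (Riemann sum)
        P_n_s_hat = K_a((X[:, None] - X[None, :]) / h, a).sum() / (n**2 * h)
        pen = (2 * K0 - sqK + kappa * sqK) / (n * h)
        crits.append(sq_norm - 2 * P_n_s_hat + pen)
    return np.array(crits)

# Example: n = 100 standard Gaussian observations, a = 2, kappa = 1 (optimal penalty)
rng = np.random.default_rng(0)
X = rng.standard_normal(100)
H = 1 / (2 * np.arange(1, 51))
crit = criterion_curve(X, a=2.0, kappa=1.0, bandwidths=H)
h_hat = H[np.argmin(crit)]
```

Note that, with these formulas, the numerator \(2K_{a}(0) -\left \Vert K_{a}\right \Vert ^{2}\) of the minimal penalty is positive for a = 0 and negative for a = 2 or a = 3, which matches the behaviors discussed below.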

Fig. 1

Selected approximation kernel estimators when the penalty is the optimal one, i.e. \(2K_{a}(0)/(nh)\)

Fig. 2

Behavior of \(P_{n}\gamma (\hat{s}_{k_{K_{a},h}})\) (solid blue line) and \(\mathcal{C}_{\mathop{\mathrm{pen}}\nolimits }\left (\left.k_{K_{a},h}\right.\right )\) as a function of 1∕h, which is proportional to the complexity \(P\Theta _{k_{K_{a},h}}\)

The contrast curves for a = 0 on Fig. (2) are classical. Without penalization, the criterion decreases and leads to the selection of the smallest bandwidth. At the minimal penalty, the curve is flat, and at the optimal penalty one selects a meaningful bandwidth, as shown on Fig. (1).

When a > 0, despite the choice of those unusual kernels, the reconstructions on Fig. (1) for the optimal penalty are also meaningful. However when a = 2 or a = 3, the criterion with minimal penalty is smaller than the unpenalized criterion, meaning that minimizing the latter criterion leads by Theorem (3.1) to an oracle inequality. In our simulation, when a = 3, the curves for the optimal criterion and the unpenalized one are so close that the same estimator is selected by both methods.

Fig. 3

Behavior of \(1/\hat{h}\), which is proportional to the complexity \(P\Theta _{k_{K_{a},h}}\), for the estimator selected by the criterion whose penalty is given by (5.29), as a function of κ

Finally, Fig. (3) shows that in all cases the complexity of the selected estimator indeed exhibits a sharp phase transition around κ = 0, i.e. at the minimal penalty.

6 Proofs

6.1 Proof of Theorem (3.1)

The starting point to prove the oracle inequality is to notice that any minimizer \(\hat{k}\) of \(\mathcal{C}_{\mathop{\mathrm{pen}}\nolimits }\) satisfies

$$\displaystyle{\left \Vert s -\hat{s}_{\hat{k}}\right \Vert ^{2} \leq \left \Vert s -\hat{s}_{ k}\right \Vert ^{2} + \left (\left.\mathop{\mathrm{pen}}\nolimits (k) -\mathop{\mathrm{pen}}\nolimits _{\mathrm{ id}}(k)\right.\right ) -\left (\left.\mathop{\mathrm{pen}}\nolimits \left (\left.\hat{k}\right.\right ) -\mathop{\mathrm{pen}}\nolimits _{\mathrm{id}}\left (\left.\hat{k}\right.\right )\right.\right ).}$$

Using the expression of the ideal penalty (2.9) we find

$$\displaystyle\begin{array}{rcl} \left \Vert s -\hat{s}_{\hat{k}}\right \Vert ^{2}& & \leq \left \Vert s -\hat{s}_{ k}\right \Vert ^{2} + \left (\left.\mathop{\mathrm{pen}}\nolimits (k) - 2\frac{P\chi _{k}} {n} \right.\right ) -\left (\left.\mathop{\mathrm{pen}}\nolimits \left (\left.\hat{k}\right.\right ) - 2\frac{P\chi _{\hat{k}}} {n} \right.\right ) \\ & & +2\frac{P(s_{k} - s_{\hat{k}})} {n} + 2\left (\left.1 - \frac{2} {n}\right.\right )(P_{n} - P)(s_{\hat{k}} - s_{k}) \\ & & +2\frac{(P_{n} - P)(\chi _{\hat{k}} -\chi _{k})} {n} + 2\frac{U_{\hat{k}} - U_{k}} {n^{2}}. {}\end{array}$$
(6.30)

By Proposition (B.1) (see the appendix), for all x > 1, for all θ in (0, 1), with probability larger than \(1 - (7.4\vert \mathcal{K}\vert + 2\vert \mathcal{K}\vert ^{2})e^{-x}\),

$$\displaystyle\begin{array}{rcl} \left \Vert s -\hat{s}_{\hat{k}}\right \Vert ^{2}& & \leq \left \Vert s -\hat{s}_{ k}\right \Vert ^{2} + \left (\left.\mathop{\mathrm{pen}}\nolimits (k) - 2\frac{P\chi _{k}} {n} \right.\right ) -\left (\left.\mathop{\mathrm{pen}}\nolimits \left (\left.\hat{k}\right.\right ) - 2\frac{P\chi _{\hat{k}}} {n} \right.\right ) {}\\ & & +\theta \left \Vert s - s_{\hat{k}}\right \Vert ^{2} +\theta \left \Vert s - s_{ k}\right \Vert ^{2} + \square \frac{\Upsilon } {\theta n} {}\\ & & +\left (\left.1 - \frac{2} {n}\right.\right )\theta \left \Vert s - s_{\hat{k}}\right \Vert ^{2} + \left (\left.1 - \frac{2} {n}\right.\right )\theta \left \Vert s - s_{k}\right \Vert ^{2} + \square \frac{\Upsilon x^{2}} {\theta n} {}\\ & & +\theta \frac{P\Theta _{k}} {n} +\theta \frac{P\Theta _{\hat{k}}} {n} + \square \frac{\Upsilon x} {\theta n} +\theta \frac{P\Theta _{k}} {n} +\theta \frac{P\Theta _{\hat{k}}} {n} + \square \frac{\Upsilon x^{2}} {\theta n} {}\\ \end{array}$$

Hence

$$\displaystyle\begin{array}{rcl} \left \Vert s -\hat{s}_{\hat{k}}\right \Vert ^{2}& & \leq \left \Vert s -\hat{s}_{ k}\right \Vert ^{2} + \left (\left.\mathop{\mathrm{pen}}\nolimits (k) - 2\frac{P\chi _{k}} {n} \right.\right ) -\left (\left.\mathop{\mathrm{pen}}\nolimits \left (\left.\hat{k}\right.\right ) - 2\frac{P\chi _{\hat{k}}} {n} \right.\right ) {}\\ & & +2\theta \left [\left.\left \Vert s - s_{\hat{k}}\right \Vert ^{2} + \frac{P\Theta _{\hat{k}}} {n} \right.\right ] + 2\theta \left [\left.\left \Vert s - s_{k}\right \Vert ^{2} + \frac{P\Theta _{k}} {n} \right.\right ] + \square \frac{\Upsilon x^{2}} {\theta n}. {}\\ \end{array}$$

This bound holds using (3.11)– (3.13) only. Now by Proposition (4.1) applied with η = 1, we have for all x > 1, for all θ ∈ (0, 1), with probability larger than \(1 - (16.8\vert \mathcal{K}\vert + 2\vert \mathcal{K}\vert ^{2})e^{-x}\),

$$\displaystyle\begin{array}{rcl} \left \Vert s -\hat{s}_{\hat{k}}\right \Vert ^{2}& & \leq \left \Vert s -\hat{s}_{ k}\right \Vert ^{2} + \left (\left.\mathop{\mathrm{pen}}\nolimits (k) - 2\frac{P\chi _{k}} {n} \right.\right ) -\left (\left.\mathop{\mathrm{pen}}\nolimits \left (\left.\hat{k}\right.\right ) - 2\frac{P\chi _{\hat{k}}} {n} \right.\right ) {}\\ & & +4\theta \left \Vert s -\hat{s}_{\hat{k}}\right \Vert ^{2} + 4\theta \left \Vert s -\hat{s}_{ k}\right \Vert ^{2} + \square \frac{\Upsilon x^{2}} {\theta n}. {}\\ \end{array}$$

This gives the first part of the theorem.

For the second part, by the condition (3.18) on the penalty, we find for all x > 1, for all θ in (0, 1), with probability larger than \(1 - (C + 16.8\vert \mathcal{K}\vert + 2\vert \mathcal{K}\vert ^{2})e^{-x}\),

$$\displaystyle\begin{array}{rcl} & & (1 - 4\theta )\left \Vert s -\hat{s}_{\hat{k}}\right \Vert ^{2} \leq {}\\ & &\quad (1 + 4\theta )\left \Vert s -\hat{s}_{k}\right \Vert ^{2} + (\delta ' - 1)_{ +}\frac{P\Theta _{k}} {n} + (1-\delta )_{+}\frac{P\Theta _{\hat{k}}} {n} + \square \left (\left.r + \frac{1} {\theta } \right.\right )\frac{\Upsilon x^{2}} {n}. {}\\ \end{array}$$

By Proposition (4.1) applied with η = θ, we have with probability larger than \(1 - (C + 26.2\vert \mathcal{K}\vert + 2\vert \mathcal{K}\vert ^{2})e^{-x}\),

$$\displaystyle\begin{array}{rcl} (1 - 4\theta )\left \Vert s -\hat{s}_{\hat{k}}\right \Vert ^{2}& \leq & (1 + 4\theta )\left \Vert s -\hat{s}_{ k}\right \Vert ^{2} + (\delta ' - 1)_{ +}(1+\theta )\left \Vert s -\hat{s}_{k}\right \Vert ^{2} {}\\ & & \quad + (1-\delta )_{+}(1+\theta )\left \Vert s -\hat{s}_{\hat{k}}\right \Vert ^{2} + \square \left (\left.r + \frac{1} {\theta ^{3}} \right.\right )\frac{\Upsilon x^{2}} {n}, {}\\ \end{array}$$

that is

$$\displaystyle\begin{array}{rcl} & & \left (\left.(\delta \wedge 1) -\theta (4 + (1-\delta )_{+})\right.\right )\left \Vert s -\hat{s}_{\hat{k}}\right \Vert ^{2} \leq \left (\left.(\delta ' \vee 1) +\theta (4 + (\delta ' - 1)_{+})\right.\right )\left \Vert s -\hat{s}_{k}\right \Vert ^{2} + \square \left (\left.r + \frac{1} {\theta ^{3}} \right.\right )\frac{\Upsilon x^{2}} {n}. {}\\ \end{array}$$

Hence, because \(1 \leq (\delta ' \vee 1) + (4 + (\delta ' - 1)_{+})\theta \leq (\delta ' \vee 1) + (4 +\delta ')\theta\), we obtain the desired result.

6.2 Proof of Proposition (4.1)

First, let us define, for all \(x \in \mathbb{X}\),

$$\displaystyle{F_{A,k}(x):= \mathbb{E}\left [\left.A_{k}(X,x)\right.\right ],\qquad \zeta _{k}(x):=\int \left (\left.k(y,x) - s_{k}(y)\right.\right )^{2}d\mu (y),}$$

and

$$\displaystyle{U_{A,k}:=\sum _{ i\neq j=1}^{n}\left (\left.A_{ k}(X_{i},X_{j}) - F_{A,k}(X_{i}) - F_{A,k}(X_{j}) + \mathbb{E}\left [\left.A_{k}(X,Y )\right.\right ]\right.\right ).}$$

Some easy computations then provide the following useful equality

$$\displaystyle{\left \Vert s_{k} -\hat{s}_{k}\right \Vert ^{2} = \frac{1} {n}P_{n}\zeta _{k} + \frac{1} {n^{2}}U_{A,k}.}$$
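
For the reader's convenience, here is a sketch of these computations, under the natural reading (consistent with the definitions of \(\zeta _{k}\), \(F_{A,k}\) and \(U_{A,k}\) above) that \(\hat{s}_{k}\) is the empirical mean of the functions \(k(\cdot,X_{i})\) and that \(A_{k}(x,y) =\langle k(\cdot,x),k(\cdot,y)\rangle\):

$$\displaystyle{\left \Vert s_{k} -\hat{s}_{k}\right \Vert ^{2} = \Big\Vert \frac{1} {n}\sum _{i=1}^{n}\big(k(\cdot,X_{i}) - s_{k}\big)\Big\Vert ^{2} = \frac{1} {n^{2}}\sum _{i=1}^{n}\zeta _{k}(X_{i}) + \frac{1} {n^{2}}\sum _{i\neq j}\langle k(\cdot,X_{i}) - s_{k},k(\cdot,X_{j}) - s_{k}\rangle,}$$

and each off-diagonal inner product equals \(A_{k}(X_{i},X_{j}) - F_{A,k}(X_{i}) - F_{A,k}(X_{j}) + \mathbb{E}\left [\left.A_{k}(X,Y )\right.\right ]\), because \(\langle k(\cdot,x),s_{k}\rangle = F_{A,k}(x)\) and \(\left \Vert s_{k}\right \Vert ^{2} = \mathbb{E}\left [\left.A_{k}(X,Y )\right.\right ]\); summing over \(i\neq j\) yields \(U_{A,k}/n^{2}\).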

We need only treat the terms on the right-hand side, thanks to the probability tools of Sect. (2.3). Applying Proposition (2.1), we get, for any x ≥ 1, with probability larger than \(1 - 2\left \vert \mathcal{K}\right \vert e^{-x}\),

$$\displaystyle{\left \vert (P_{n} - P)\zeta _{k}\right \vert \leq \sqrt{\frac{2x} {n} P\zeta _{k}^{2}} + \frac{\left \Vert \zeta _{k}\right \Vert _{\infty }x} {3n}.}$$

One can then check the following link between \(\zeta _{k}\) and \(\Theta _{k}\):

$$\displaystyle{P\zeta _{k} =\int \left (\left.k(y,x) - s_{k}(x)\right.\right )^{2}s(y)d\mu (x)d\mu (y) = P\Theta _{ k} -\left \Vert s_{k}\right \Vert ^{2}.}$$

Next, by (2.1) and (3.11)

$$\displaystyle\begin{array}{rcl} \left \Vert \zeta _{k}\right \Vert _{\infty }& =& \sup _{y\in \mathbb{X}}\int \left (\left.k(y,x) - \mathbb{E}\left [\left.k(X,x)\right.\right ]\right.\right )^{2}d\mu (x) {}\\ & \leq & 4\sup _{y\in \mathbb{X}}\int k(y,x)^{2}d\mu (x) \leq 4\Upsilon n. {}\\ \end{array}$$

In particular, since \(\zeta _{k} \geq 0\),

$$\displaystyle{P\zeta _{k}^{2} \leq \left \Vert \zeta _{ k}\right \Vert _{\infty }P\zeta _{k} \leq 4\Upsilon nP\Theta _{k}.}$$

It follows from these computations and from (3.11) that there exists an absolute constant □ such that, for any x ≥ 1, with probability larger than \(1 - 2\left \vert \mathcal{K}\right \vert e^{-x}\), for any θ ∈ (0, 1),

$$\displaystyle{\left \vert P_{n}\zeta _{k} - P\Theta _{k}\right \vert \leq \theta P\Theta _{k} + \square \frac{\Upsilon x} {\theta }.}$$
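
In more detail, here is a sketch of how this follows from the three previous bounds, using \(P\zeta _{k} = P\Theta _{k} -\left \Vert s_{k}\right \Vert ^{2}\), the elementary inequality \(2\sqrt{ab} \leq \theta a + b/\theta\), and the bound \(\left \Vert s_{k}\right \Vert ^{2} \leq \square \Upsilon\) provided by (3.11): for x ≥ 1 and θ ∈ (0, 1),

$$\displaystyle\begin{array}{rcl} \left \vert P_{n}\zeta _{k} - P\Theta _{k}\right \vert & \leq & \left \vert (P_{n} - P)\zeta _{k}\right \vert + \left \Vert s_{k}\right \Vert ^{2} \leq \sqrt{8\Upsilon xP\Theta _{k}} + \frac{4\Upsilon x} {3} + \square \Upsilon {}\\ & \leq & \theta P\Theta _{k} + \frac{2\Upsilon x} {\theta } + \frac{4\Upsilon x} {3} + \square \Upsilon x \leq \theta P\Theta _{k} + \square \frac{\Upsilon x} {\theta }. {}\\ \end{array}$$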

We now need to control the term \(U_{A,k}\). From Proposition (2.2), for any x ≥ 1, with probability larger than \(1 - 5.4\left \vert \mathcal{K}\right \vert e^{-x}\),

$$\displaystyle{\frac{\left \vert U_{A,k}\right \vert } {n^{2}} \leq \frac{\square } {n^{2}}\left (\left.C\sqrt{x} + Dx + Bx^{3/2} + Ax^{2}\right.\right ).}$$

By (2.1), (3.11) and the Cauchy-Schwarz inequality,

$$\displaystyle{A = 4\sup _{(x,y)\in \mathbb{X}^{2}}\int k(x,z)k(y,z)d\mu (z) \leq 4\sup _{x\in \mathbb{X}}\int k(x,z)^{2}d\mu (z) \leq 4\Upsilon n.}$$

In addition, by (3.15), \(B^{2} \leq 16\sup _{x\in \mathbb{X}}\mathbb{E}\left [\left.A_{k}(X,x)^{2}\right.\right ] \leq 16\Upsilon n.\)

Moreover, applying Assumption (3.14),

$$\displaystyle{C^{2} \leq \sum _{ i\neq j=1}^{n}\mathbb{E}\left [\left.A_{ k}(X_{i},X_{j})^{2}\right.\right ] = n^{2}\mathbb{E}\left [\left.A_{ k}(X,Y )^{2}\right.\right ] \leq n^{2}\Upsilon P\Theta _{ k}.}$$

Finally, applying the Cauchy-Schwarz inequality and proceeding as for \(C^{2}\), the quantity used to define \(D\) can be bounded above as follows:

$$\displaystyle{\mathbb{E}\left [\left.\sum _{i=1}^{n-1}\sum _{ j=i+1}^{n}a_{ i}(X_{i})b_{j}(X_{j})A_{k}(X_{i},X_{j})\right.\right ] \leq n\sqrt{\mathbb{E}\left [\left.A_{k } (X, Y )^{2 } \right. \right ]} \leq n\sqrt{\Upsilon P\Theta _{k}}.}$$

Hence for any x ≥ 1, with probability larger than \(1 - 5.4\left \vert \mathcal{K}\right \vert e^{-x}\),

$$\displaystyle{\mbox{ for any }\theta \in (0,1),\quad \frac{\left \vert U_{A,k}\right \vert } {n^{2}} \leq \theta \frac{P\Theta _{k}} {n} + \square \frac{\Upsilon x^{2}} {\theta n}.}$$
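
In more detail, the four terms combine as follows (a sketch; recall that \(A \leq 4\Upsilon n\), \(B \leq 4\sqrt{\Upsilon n}\), \(C \leq n\sqrt{\Upsilon P\Theta _{k}}\) and \(D \leq n\sqrt{\Upsilon P\Theta _{k}}\), and the absolute constant in front of the bracket is absorbed by applying these inequalities with θ replaced by θ∕□): for x ≥ 1 and θ ∈ (0, 1),

$$\displaystyle\begin{array}{rcl} \frac{C\sqrt{x} + Dx} {n^{2}} & \leq & \frac{\sqrt{\Upsilon P\Theta _{k}}\left (\sqrt{x} + x\right )} {n} \leq \theta \frac{P\Theta _{k}} {n} + \frac{\Upsilon (x + x^{2})} {2\theta n}, {}\\ \frac{Bx^{3/2} + Ax^{2}} {n^{2}} & \leq & \frac{4\sqrt{\Upsilon n}\,x^{3/2}} {n^{2}} + \frac{4\Upsilon x^{2}} {n} \leq \square \frac{\Upsilon x^{2}} {n}, {}\\ \end{array}$$

since \(\Upsilon \geq 1\), \(x \geq 1\) and \(\sqrt{\Upsilon n}\,x^{3/2} \leq \Upsilon nx^{2}\).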

Therefore, for all θ ∈ (0, 1),

$$\displaystyle{\left \vert \left \Vert \hat{s}_{k} - s_{k}\right \Vert ^{2} -\frac{P\Theta _{k}} {n} \right \vert \leq 2\theta \frac{P\Theta _{k}} {n} + \square \frac{\Upsilon x^{2}} {\theta n},}$$

and the first part of the result follows by choosing \(\theta =\eta /2\). Concerning the two remaining inequalities appearing in the proposition, we begin by developing the loss. For all \(k \in \mathcal{K}\),

$$\displaystyle{\left \Vert \hat{s}_{k} - s\right \Vert ^{2} = \left \Vert \hat{s}_{ k} - s_{k}\right \Vert ^{2} + \left \Vert s_{ k} - s\right \Vert ^{2} + 2\langle \hat{s}_{ k} - s_{k},s_{k} - s\rangle.}$$

Then, for all \(x \in \mathbb{X}\)

$$\displaystyle\begin{array}{rcl} F_{A,k}(x) - s_{k}(x)& =& \int s(y)\int k(x,z)k(z,y)d\mu (z)d\mu (y) -\int s(z)k(z,x)d\mu (z) {}\\ & =& \int \left (\left.\int s(y)k(z,y)d\mu (y) - s(z)\right.\right )k(x,z)d\mu (z) {}\\ & =& \int \left (\left.s_{k}(z) - s(z)\right.\right )k(z,x)d\mu (z). {}\\ \end{array}$$

Moreover, since \(PF_{A,k} = \left \Vert s_{k}\right \Vert ^{2}\), we find

$$\displaystyle\begin{array}{rcl} \langle \hat{s}_{k} - s_{k},s_{k} - s\rangle & =& \int \left (\left.\hat{s}_{k}(x)\left (\left.s_{k}(x) - s(x)\right.\right )\right.\right )d\mu (x) + \mathbb{E}\left [\left.s_{k}(X)\right.\right ] -\left \Vert s_{k}\right \Vert ^{2} {}\\ & =& \frac{1} {n}\sum _{i=1}^{n}\int \left (\left.k(x,X_{ i})\left (\left.s_{k}(x) - s(x)\right.\right )\right.\right )d\mu (x) + P(s_{k} - F_{A,k}) {}\\ & =& \frac{1} {n}\sum _{i=1}^{n}\left (\left.F_{ A,k}(X_{i}) - s_{k}(X_{i})\right.\right ) + P(s_{k} - F_{A,k}) {}\\ & =& (P_{n} - P)(F_{A,k} - s_{k}). {}\\ \end{array}$$

This expression motivates us to apply Proposition (2.1) again to this term. We find, by (2.1), (3.11) and the Cauchy-Schwarz inequality,

$$\displaystyle\begin{array}{rcl} \sup _{x\in \mathbb{X}}\left \vert F_{A,k}(x) - s_{k}(x)\right \vert & \leq & \left \Vert s - s_{k}\right \Vert \sup _{x\in \mathbb{X}}\int \frac{\left \vert s(z) - s_{k}(z)\right \vert } {\left \Vert s - s_{k}\right \Vert } k(x,z)d\mu (z) {}\\ & \leq & \left \Vert s - s_{k}\right \Vert \sqrt{ \sup _{x\in \mathbb{X}}\int k(x,z)^{2}d\mu (z)} \leq \left \Vert s - s_{k}\right \Vert \sqrt{\Upsilon n}. {}\\ \end{array}$$

Moreover,

$$\displaystyle\begin{array}{rcl} P\left (\left.F_{A,k} - s_{k}\right.\right )^{2}& \leq & \left \Vert s - s_{ k}\right \Vert ^{2}P\left (\left.\int \frac{\left \vert s(z) - s_{k}(z)\right \vert } {\left \Vert s - s_{k}\right \Vert } k(.,z)d\mu (z)\right.\right )^{2} {}\\ & \leq & \left \Vert s - s_{k}\right \Vert ^{2}v_{ k}^{2}. {}\\ \end{array}$$

Thus by (3.16), for any θ, u > 0,

$$\displaystyle\begin{array}{rcl} & & \sqrt{\frac{2P\left (\left.F_{A,k } - s_{k } \right. \right ) ^{2 } x} {n}} \leq \theta \left \Vert s - s_{k}\right \Vert ^{2} + \frac{\left (\left.\Upsilon \vee \sqrt{\Upsilon P\Theta _{k}}\right.\right )x} {2\theta n} {}\\ & & \phantom{dfdffda} \leq \theta \left \Vert s - s_{k}\right \Vert ^{2} + \frac{\Upsilon x} {\theta n} \vee \left (\left.\frac{u} {\theta } \frac{P\Theta _{k}} {n} + \frac{\Upsilon x^{2}} {16\theta un}\right.\right ). {}\\ \end{array}$$

Hence, for any θ ∈ (0, 1) and x ≥ 1, taking \(u =\theta ^{2}\),

$$\displaystyle\begin{array}{rcl} \sqrt{ \frac{2P\left (\left.F_{A,k} - s_{k}\right.\right )^{2}x} {n}} \leq \theta \left (\left.\left \Vert s - s_{k}\right \Vert ^{2} + \frac{P\Theta _{k}} {n} \right.\right ) + \square \frac{\Upsilon x^{2}} {\theta ^{3}n}.& & {}\\ \end{array}$$

By Proposition (2.1), for all θ in (0, 1), for all x > 0 with probability larger than \(1 - 2\vert \mathcal{K}\vert e^{-x}\),

$$\displaystyle\begin{array}{rcl} 2\left \vert \langle \hat{s}_{k} - s_{k},s_{k} - s\rangle \right \vert & \leq & 2\sqrt{\frac{2P\left (\left.F_{A,k } - s_{k } \right. \right ) ^{2 } x} {n}} + 2\left \Vert s - s_{k}\right \Vert \sqrt{\Upsilon n} \frac{x} {3n} {}\\ & \leq & 3\theta \left (\left.\left \Vert s - s_{k}\right \Vert ^{2} + \frac{P\Theta _{k}} {n} \right.\right ) + \square \frac{\Upsilon x^{2}} {\theta ^{3}n}. {}\\ \end{array}$$

Putting together all of the above, one concludes that for all θ in (0, 1), for all x > 1, with probability larger than \(1 - 9.4\vert \mathcal{K}\vert e^{-x}\)

$$\displaystyle{\left \Vert \hat{s}_{k} - s\right \Vert ^{2} -\left \Vert s_{ k} - s\right \Vert ^{2} \leq 3\theta \left \Vert s - s_{ k}\right \Vert ^{2} + (1 + 4\theta )\frac{P\Theta _{k}} {n} + \square \frac{\Upsilon x^{2}} {\theta ^{3}n} }$$

and

$$\displaystyle{\left \Vert \hat{s}_{k} - s\right \Vert ^{2} -\left \Vert s_{ k} - s\right \Vert ^{2} \geq -3\theta \left (\left.\left \Vert s - s_{ k}\right \Vert ^{2} + \frac{P\Theta _{k}} {n} \right.\right ) + (1-\theta )\frac{P\Theta _{k}} {n} -\square \frac{\Upsilon x^{2}} {\theta ^{3}n}.}$$

Choosing \(\theta =\eta /4\) leads to the second part of the result.

6.3 Proof of Theorem (4.3)

It follows from (3.17) (applied with \(\theta = \square (\log n)^{-1}\) and \(x = \square \log (n \vee \vert \mathcal{K}_{n}\vert )\)) and Assumption (4.26) that, with probability larger than \(1 - \square n^{-2}\), we have for any \(k \in \mathcal{K}\) and any n ≥ 2

$$\displaystyle\begin{array}{rcl} \left \Vert \hat{s}_{\hat{k}_{n}} - s\right \Vert ^{2}& \leq & \left (\left.1 + \frac{\square } {\log n} \right.\right )\left \Vert \hat{s}_{k} - s\right \Vert ^{2} - (1 +\delta ')\left (\left.1 + \frac{\square } {\log n} \right.\right )\frac{P\Theta _{k}} {n} \\ & +& (1+\delta )\left (\left.1 + \frac{\square } {\log n} \right.\right )\frac{P\Theta _{\hat{k}_{n}}} {n} + \square _{\delta,\delta ',\Upsilon }\frac{\log (\vert \mathcal{K}_{n}\vert \vee n)^{3}} {n}.{}\end{array}$$
(6.31)
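
As a side remark on the size of the remainder: with \(\theta = \square (\log n)^{-1}\) and \(x = \square \log (n \vee \vert \mathcal{K}_{n}\vert )\) as above, a remainder of order \(\Upsilon x^{2}/(\theta n)\), as in the first part of the theorem, is indeed of the announced order, since

$$\displaystyle{\frac{\Upsilon x^{2}} {\theta n} = \square _{\Upsilon }\,\frac{\log (n \vee \vert \mathcal{K}_{n}\vert )^{2}\log n} {n} \leq \square _{\Upsilon }\,\frac{\log (\vert \mathcal{K}_{n}\vert \vee n)^{3}} {n}.}$$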

Applying this inequality with \(k = k_{1,n}\) and using Proposition (4.1) with \(\eta = \square (\log n)^{-1/3}\) and \(x = \square \log (\vert \mathcal{K}_{n}\vert \vee n)\) as a lower bound for \(\left \Vert \hat{s}_{\hat{k}_{n}} - s\right \Vert ^{2}\) and as an upper bound for \(\left \Vert \hat{s}_{k_{1,n}} - s\right \Vert ^{2}\), we obtain asymptotically that, with probability larger than \(1 - \square n^{-2}\),

$$\displaystyle\begin{array}{rcl} -\delta (1 + \square _{\delta }\ \mathrm{o}(1))\frac{P\Theta _{\hat{k}_{n}}} {n} \leq \left (\left.1 +\mathrm{ o}(1)\right.\right )\left \Vert s_{k_{1,n}} - s\right \Vert ^{2} -\delta '(1 + \square _{\delta '}\ \mathrm{o}(1))\frac{P\Theta _{k_{1,n}}} {n} & & {}\\ +\square _{\delta,\delta ',\Upsilon }\frac{\log (\vert \mathcal{K}_{n}\vert \vee n)^{3}} {n}.& & {}\\ \end{array}$$

By Assumption (4.25), \(\left \Vert s_{k_{1,n}} - s\right \Vert ^{2} \leq c\ \mathrm{o}(1)\frac{P\Theta _{k_{1,n}}} {n}\) and by (4.22),

$$\displaystyle{\frac{\left (\left.\log (\vert \mathcal{K}_{n}\vert \vee n)\right.\right )^{3}} {n} \leq c_{R}c_{s}\ \mathrm{o}(1)\frac{P\Theta _{k_{1,n}}} {n}.}$$

This gives (4.27). In addition, starting from the event where (6.31) holds and using Proposition (4.1), we also have, with probability larger than \(1 - \square n^{-2}\),

$$\displaystyle\begin{array}{rcl} \left \Vert \hat{s}_{\hat{k}_{n}} - s\right \Vert ^{2}& \leq & \left (\left.1 + \frac{\square } {\log n} \right.\right )\left \Vert \hat{s}_{k_{1,n}} - s\right \Vert ^{2} - (1 +\delta ')\frac{P\Theta _{k_{1,n}}} {n} {}\\ & & \qquad + (1+\delta )\left (\left.1 +\mathrm{ o}(1)\right.\right )\left \Vert \hat{s}_{\hat{k}_{n}} - s\right \Vert ^{2} + \square _{\delta,\delta ',\Upsilon }\frac{\log (\vert \mathcal{K}_{n}\vert \vee n)^{3}} {n}. {}\\ \end{array}$$

Since \(\left \Vert \hat{s}_{k_{1,n}} - s\right \Vert ^{2} \simeq \frac{P\Theta _{k_{1,n}}} {n}\), this leads to

$$\displaystyle\begin{array}{rcl} & & (-\delta + \square _{\delta }\ \mathrm{o}(1))\left \Vert \hat{s}_{\hat{k}} - s\right \Vert ^{2} \leq {}\\ & &\phantom{dsfgdsdfdfdlglfg} - (\delta ' + \square _{\delta ',c}\ \mathrm{o}(1))\left \Vert \hat{s}_{k_{1,n}} - s\right \Vert ^{2} + \square _{\delta,\delta ',\Upsilon }\ \frac{\log (\vert \mathcal{K}_{n}\vert \vee n)^{3}} {n}. {}\\ \end{array}$$

This leads to (4.28) by (4.21).

Appendix 1: Proofs for the Examples

Computation of the Constant \(\Gamma\) for the Three Examples

We have to show for each family \(\left \{\left.k\right.\right \}_{k\in \mathcal{K}}\) (see (2.8) and (2.1)) that there exists a constant \(\Gamma \geq 1\) such that for all \(k \in \mathcal{K}\)

$$\displaystyle{\sup _{x\in \mathbb{X}}\ \left \vert \Theta _{k}(x)\right \vert \leq \Gamma n,\quad \text{and}\quad \sup _{(x,y)\in \mathbb{X}^{2}}\left \vert k(x,y)\right \vert \leq \Gamma n.}$$

Example 1 (Projection Kernels)

First, notice that from the Cauchy-Schwarz inequality we have, for all \((x,y) \in \mathbb{X}^{2}\), \(\left \vert k_{S}(x,y)\right \vert \leq \sqrt{\chi _{k_{S}}(x)\chi _{k_{S}}(y)}\), and by orthonormality, for any \((x,x') \in \mathbb{X}^{2}\),

$$\displaystyle{ A_{k_{S}}(x,x') =\sum _{(i,j)\in \mathcal{I}_{S}^{2}}\varphi _{i}(x)\varphi _{j}(x')\int _{\mathbb{X}}\varphi _{i}(y)\varphi _{j}(y)d\mu (y) = k_{S}(x,x'). }$$

In particular, for any \(x \in \mathbb{X}\), \(\Theta _{k_{S}}(x) =\chi _{k_{S}}(x)\). Hence, projection kernels satisfy (2.1) for \(\Gamma = 1 \vee n^{-1}\sup _{S\in \mathcal{S}}\left \Vert \chi _{k_{S}}\right \Vert _{\infty }\). We conclude by writing

$$\displaystyle{\left \Vert \chi _{k_{S}}\right \Vert _{\infty } =\sup _{x\in \mathbb{X}}\sum _{i\in \mathcal{I}_{S}}\varphi _{i}(x)^{2} =\sup _{ \begin{array}{c}(a_{i})_{i\in \mathcal{I}}\,\mbox{ s.t. }\, \\ \sum _{i\in \mathcal{I}_{S}}a_{i}^{2}=1 \end{array}}\sup _{x\in \mathbb{X}}\left (\left.\sum _{i\in \mathcal{I}_{S}}a_{i}\varphi _{i}(x)\right.\right )^{2}.}$$

For f ∈ S we have \(\left \Vert \,f\right \Vert ^{2} =\sum _{i\in \mathcal{I}_{S}}\langle \,f,\varphi _{i}\rangle ^{2}\). Hence, with \(a_{i} =\langle \,f,\varphi _{i}\rangle\),

$$\displaystyle{\left \Vert \chi _{k_{S}}\right \Vert _{\infty } =\sup _{f\in S,\left \Vert \,f\right \Vert =1}\left \Vert \,f\right \Vert _{\infty }^{2}.}$$
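
As a concrete illustration (a standard example, stated here only for orientation): for the regular histogram basis \(\varphi _{i} = \sqrt{D}\,\mathbf{1}_{[(i-1)/D,i/D)}\), \(1 \leq i \leq D\), on \(\mathbb{X} = [0,1]\),

$$\displaystyle{\left \Vert \chi _{k_{S}}\right \Vert _{\infty } =\sup _{x\in [0,1]}\sum _{i=1}^{D}D\,\mathbf{1}_{[(i-1)/D,i/D)}(x) = D,}$$

so that \(\Gamma = 1 \vee (D/n)\) works for this model, and \(\Gamma = 1\) as soon as every model in the collection has at most n bins.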

Example 2 (Approximation Kernels)

First, \(\sup _{(x,y)\in \mathbb{X}^{2}}\left \vert k_{K,h}(x,y)\right \vert \leq \left \Vert K\right \Vert _{\infty }/h.\) Second, since \(K \in L^{1}\),

$$\displaystyle{\Theta _{k_{K,h}}(x) = \frac{1} {h^{2}}\int _{\mathbb{X}}K\left (\left.\frac{x - y} {h} \right.\right )^{2}dy = \frac{\left \Vert K\right \Vert ^{2}} {h} \leq \frac{\left \Vert K\right \Vert _{\infty }\left \Vert K\right \Vert _{1}} {h}.}$$

Now \(K \in L^{1}\) and ∫ K(u)du = 1 imply \(\left \Vert K\right \Vert _{1} \geq 1\); hence (2.1) holds with \(\Gamma = 1\) if one assumes that \(h \geq \left \Vert K\right \Vert _{\infty }\left \Vert K\right \Vert _{1}/n\).
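
For instance (a classical example, given here only as an illustration), take the Gaussian kernel \(K(u) = e^{-u^{2}/2}/\sqrt{2\pi }\); then

$$\displaystyle{\left \Vert K\right \Vert _{1} = 1,\qquad \left \Vert K\right \Vert _{\infty } = \frac{1} {\sqrt{2\pi }},\qquad \left \Vert K\right \Vert ^{2} =\int _{\mathbb{R}}\frac{e^{-u^{2}}} {2\pi }\,du = \frac{1} {2\sqrt{\pi }},}$$

so that \(\Theta _{k_{K,h}}(x) = 1/(2\sqrt{\pi }h)\) for every x, and \(\Gamma = 1\) as soon as \(h \geq 1/(\sqrt{2\pi }\,n)\).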

Example 3 (Weighted Projection Kernels)

For all \(x \in \mathbb{X}\)

$$\displaystyle{\Theta _{k_{w}}(x) =\sum _{ i,j=1}^{p}w_{ i}\varphi _{i}(x)w_{j}\varphi _{j}(x)\int _{\mathbb{X}}\varphi _{i}(y)\varphi _{j}(y)d\mu (y) =\sum _{ i=1}^{p}w_{ i}^{2}\varphi _{ i}(x)^{2}.}$$

From the Cauchy-Schwarz inequality, for any \((x,y) \in \mathbb{X}^{2}\),

$$\displaystyle{\left \vert k_{w}(x,y)\right \vert \leq \sqrt{\Theta _{k_{w } } (x)\Theta _{k_{w } } (y)}.}$$

We thus find that \(k_{w}\) verifies (2.1) with \(\Gamma \geq 1 \vee n^{-1}\sup _{w\in \mathcal{W}}\left \Vert \Theta _{k_{w}}\right \Vert _{\infty }\). Since \(w_{i} \leq 1\), we find the announced result, which is independent of \(\mathcal{W}\).
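
For instance (an illustration, not needed for the proof), taking weights \(w_{i} \in \{ 0,1\}\) recovers the projection kernel onto the span of \(\{\varphi _{i}:\,w_{i} = 1\}\), and for general weights

$$\displaystyle{\left \Vert \Theta _{k_{w}}\right \Vert _{\infty } =\sup _{x\in \mathbb{X}}\sum _{i=1}^{p}w_{i}^{2}\varphi _{i}(x)^{2} \leq \sup _{x\in \mathbb{X}}\sum _{i=1}^{p}\varphi _{i}(x)^{2},}$$

so that the same \(\Gamma\) works for every \(w \in [0,1]^{p}\).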

Proof of Proposition (3.2)

Since \(\left \Vert s_{k_{S}}\right \Vert ^{2} \leq \left \Vert s\right \Vert ^{2} \leq \left \Vert s\right \Vert _{\infty }\), we find that (3.11) only requires \(\Upsilon \geq \Gamma (1 + \left \Vert s\right \Vert _{\infty })\). Assumption (3.12) holds: this follows from \(\Upsilon \geq \Gamma\) and

$$\displaystyle{\mathbb{E}\left [\left.\chi _{k_{S}}(X)^{2}\right.\right ] \leq \left \Vert \chi _{ k_{S}}\right \Vert _{\infty }P\chi _{k_{S}} \leq \Gamma nP\Theta _{k_{S}}.}$$

Now, to prove Assumption (3.14), we write

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [\left.A_{k_{S}}(X,Y )^{2}\right.\right ]& =& \mathbb{E}\left [\left.k_{ S}(X,Y )^{2}\right.\right ] =\int _{ \mathbb{X}}\mathbb{E}\left [\left.k_{S}(X,x)^{2}\right.\right ]s(x)d\mu (x) {}\\ & \leq & \left \Vert s\right \Vert _{\infty }\sum _{(i,j)\in \mathcal{I}_{S}^{2}}\mathbb{E}\left [\left.\varphi _{i}(X)\varphi _{j}(X)\right.\right ]\int _{\mathbb{X}}\varphi _{i}(x)\varphi _{j}(x)d\mu (x) {}\\ & =& \left \Vert s\right \Vert _{\infty }P\Theta _{k_{S}} \leq \Upsilon P\Theta _{k_{S}}. {}\\ \end{array}$$

In the same way, Assumption (3.15) follows from \(\left \Vert s\right \Vert _{\infty }\Gamma \leq \Upsilon\). Suppose (3.19) holds with S = S + S′, so that the basis \((\varphi _{i})_{i\in \mathcal{I}}\) of S′ is included in the basis \((\varphi _{i})_{i\in \mathcal{J}}\) of S. Since \(\left \Vert \chi _{k_{S}}\right \Vert _{\infty }\leq \Gamma n\), we have

$$\displaystyle\begin{array}{rcl} s_{k_{S}}(x) - s_{k_{S'}}(x)& =& \sum _{j\in \mathcal{J}\setminus \mathcal{I}}\left (\left.P\varphi _{j}\right.\right )\varphi _{j}(x) \leq \sqrt{\sum _{j\in \mathcal{J}\setminus \mathcal{I} } \left (\left.P\varphi _{j } \right. \right ) ^{2 } \sum _{j\in \mathcal{J}\setminus \mathcal{I} } \varphi _{j } (x)^{2}} {}\\ & \leq & \left \Vert s_{k_{S}} - s_{k_{S'}}\right \Vert \left \Vert \chi _{k_{S}}\right \Vert _{\infty }^{1/2} \leq \left \Vert s_{ k_{S}} - s_{k_{S'}}\right \Vert \sqrt{\Gamma n}. {}\\ \end{array}$$

Hence, (3.13) holds in this case. Under (3.20), (3.13) also holds since

$$\displaystyle{\left \Vert s_{k_{S}} - s_{k_{S'}}\right \Vert _{\infty }\leq \left \Vert s_{k_{S}}\right \Vert _{\infty } + \left \Vert s_{k_{S'}}\right \Vert _{\infty }\leq \Upsilon.}$$

Finally, for (3.16), for any \(a \in L^{2}\),

$$\displaystyle{\int _{\mathbb{X}}a(x)k_{S}(x,y)d\mu (x) =\sum _{i\in \mathcal{I}_{S}}\langle a,\varphi _{i}\rangle \varphi _{i}(y) = \Pi _{S}(a)(y),}$$

where \(\Pi _{S}\) denotes the orthogonal projection onto S. Therefore, \(\mathbb{B}_{k_{S}}\) is the unit ball in S for the \(L^{2}\)-norm and, for any \(t \in \mathbb{B}_{k_{S}}\), \(\mathbb{E}\left [\left.t(X)^{2}\right.\right ] \leq \left \Vert s\right \Vert _{\infty }\left \Vert t\right \Vert ^{2} \leq \left \Vert s\right \Vert _{\infty }.\)

Proof of Proposition (3.3)

First, since \(\left \Vert K\right \Vert _{1} \geq 1\)

$$\displaystyle\begin{array}{rcl} \left \Vert s_{k_{K,h}}\right \Vert ^{2}& =& \int _{ \mathbb{X}}\left (\left.\int _{\mathbb{X}}s(y)\frac{1} {h}K\left (\left.\frac{x - y} {h} \right.\right )dy\right.\right )^{2}dx {}\\ & =& \int _{\mathbb{X}}\left (\left.\int _{\mathbb{X}}s(x + hz)K\left (\left.z\right.\right )dz\right.\right )^{2}dx {}\\ & \leq & \left \Vert K\right \Vert _{1}^{2}\int _{ \mathbb{X}}\left (\left.\int _{\mathbb{X}}s(x + hz)\frac{\left \vert K\left (\left.z\right.\right )\right \vert } {\left \Vert K\right \Vert _{1}} dz\right.\right )^{2}dx {}\\ & \leq & \left \Vert K\right \Vert _{1}^{2}\int _{ \mathbb{X}^{2}}s(x + hz)^{2}\frac{\left \vert K\left (\left.z\right.\right )\right \vert } {\left \Vert K\right \Vert _{1}} dxdz \leq \left \Vert s\right \Vert _{\infty }\left \Vert K\right \Vert _{1}^{2}. {}\\ \end{array}$$

Hence, Assumption (3.11) holds if \(\Upsilon \geq 1 + \left \Vert s\right \Vert _{\infty }\left \Vert K\right \Vert _{1}^{2}\). Now, we have

$$\displaystyle{P\left (\left.\chi _{k_{K,h}}^{2}\right.\right ) = \frac{K(0)^{2}} {h^{2}} = P\Theta _{k_{K,h}}\frac{K(0)^{2}} {\left \Vert K\right \Vert ^{2}h} \leq nP\Theta _{k_{K,h}} \frac{K(0)^{2}} {\left \Vert K\right \Vert ^{2}\left \Vert K\right \Vert _{\infty }\left \Vert K\right \Vert _{1}},}$$

so it is sufficient to have \(\Upsilon \geq \vert K(0)\vert /\left \Vert K\right \Vert ^{2}\) (since \(\vert K(0)\vert \leq \left \Vert K\right \Vert _{\infty }\)) to ensure (3.12). Moreover, for any \(h \in \mathcal{H}\) and any \(x \in \mathbb{X}\),

$$\displaystyle{s_{k_{K,h}}(x) =\int _{\mathbb{X}}s(y)\frac{1} {h}K\left (\left.\frac{x - y} {h} \right.\right )dy =\int _{\mathbb{X}}s(x + zh)K(z)dz \leq \left \Vert s\right \Vert _{\infty }\left \Vert K\right \Vert _{1}.}$$

Therefore, Assumption (3.13) holds for \(\Upsilon \geq 2\left \Vert s\right \Vert _{\infty }\left \Vert K\right \Vert _{1}\). Then, on the one hand,

$$\displaystyle\begin{array}{rcl} \left \vert A_{k_{K,h}}(x,y)\right \vert & \leq & \frac{1} {h^{2}}\int _{\mathbb{X}}\left \vert K\left (\left.\frac{x - z} {h} \right.\right )K\left (\left.\frac{y - z} {h} \right.\right )\right \vert dz {}\\ & \leq & \frac{1} {h}\int _{\mathbb{X}}\left \vert K\left (\left.\frac{x - y} {h} - u\right.\right )K\left (\left.u\right.\right )\right \vert du {}\\ & \leq & \frac{\left \Vert K\right \Vert ^{2}} {h} \wedge \frac{\left \Vert K\right \Vert _{\infty }\left \Vert K\right \Vert _{1}} {h} \leq P\Theta _{k_{K,h}} \wedge n. {}\\ \end{array}$$

On the other hand,

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [\left.\left \vert A_{k_{K,h}}(X,x)\right \vert \right.\right ]& \leq & \frac{1} {h}\int _{\mathbb{X}^{2}}\left \vert K\left (\left.\frac{x - y} {h} - u\right.\right )K\left (\left.u\right.\right )\right \vert du\ s(y)dy {}\\ & =& \int _{\mathbb{X}^{2}}\left \vert K\left (\left.v\right.\right )K\left (\left.u\right.\right )\right \vert s(x + h(v - u))dudv \leq \left \Vert s\right \Vert _{\infty }\left \Vert K\right \Vert _{1}^{2}. {}\\ \end{array}$$

Therefore,

$$\displaystyle\begin{array}{rcl} \sup _{x\in \mathbb{X}}\ \mathbb{E}\left [\left.A_{k_{K,h}}(X,x)^{2}\right.\right ]& \leq & \sup _{ (x,y)\in \mathbb{X}^{2}}\left \vert A_{k_{K,h}}(x,y)\right \vert \ \sup _{x\in \mathbb{X}}\ \mathbb{E}\left [\left.\left \vert A_{k_{K,h}}(X,x)\right \vert \right.\right ] {}\\ & & \phantom{lfdsllldsfl;sdkg;dsglfgk} \leq \left (\left.P\Theta _{k_{K,h}} \wedge n\right.\right )\left \Vert s\right \Vert _{\infty }\left \Vert K\right \Vert _{1}^{2},{}\\ \end{array}$$

and \(\mathbb{E}\left [\left.A_{k_{K,h}}(X,Y )^{2}\right.\right ] \leq \sup _{x\in \mathbb{X}}\ \mathbb{E}\left [\left.A_{k_{K,h}}(X,x)^{2}\right.\right ] \leq \left \Vert s\right \Vert _{\infty }\left \Vert K\right \Vert _{1}^{2}P\Theta _{k_{K,h}}.\) Hence Assumptions (3.14) and (3.15) hold when \(\Upsilon \geq \left \Vert s\right \Vert _{\infty }\left \Vert K\right \Vert _{1}^{2}\). Finally, let us prove that Assumption (3.16) is satisfied. Let \(t \in \mathbb{B}_{k_{K,h}}\) and \(a \in L^{2}\) be such that \(\left \Vert a\right \Vert = 1\) and \(t(y) =\int _{\mathbb{X}}a(x)\frac{1} {h}K\left (\left.\frac{x-y} {h} \right.\right )dx\) for all \(y \in \mathbb{X}\). Then, by the Cauchy-Schwarz inequality,

$$\displaystyle{t(y) \leq \frac{1} {h}\sqrt{\int _{\mathbb{X} } a(x)^{2 } dx}\sqrt{\int _{\mathbb{X} } K\left (\left.\frac{x - y} {h} \right.\right )^{2}dx} \leq \frac{\left \Vert K\right \Vert } {\sqrt{h}}.}$$

Thus for any \(t \in \mathbb{B}_{k_{K,h}}\)

$$\displaystyle{Pt^{2} \leq \left \Vert t\right \Vert _{ \infty }\langle \left \vert t\right \vert,s\rangle \leq \frac{\left \Vert K\right \Vert } {\sqrt{h}}\left \Vert s\right \Vert = \left \Vert s\right \Vert \sqrt{P\Theta _{k_{K,h }}} \leq \sqrt{\Upsilon P\Theta _{k_{K,h }}}.}$$

We conclude that all the assumptions hold if

$$\displaystyle{\Upsilon \geq \left (\left.\vert K(0)\vert /\left \Vert K\right \Vert ^{2}\right.\right ) \vee \left (\left.1 + 2\left \Vert s\right \Vert _{ \infty }\left \Vert K\right \Vert _{1}^{2}\right.\right ).}$$
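
To fix ideas (continuing the Gaussian illustration above, an example rather than a statement from the proof), for \(K(u) = e^{-u^{2}/2}/\sqrt{2\pi }\) one has \(\vert K(0)\vert /\left \Vert K\right \Vert ^{2} = \sqrt{2}\) and \(\left \Vert K\right \Vert _{1} = 1\), so the condition reads

$$\displaystyle{\Upsilon \geq \sqrt{2} \vee \left (1 + 2\left \Vert s\right \Vert _{\infty }\right ).}$$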

Proof of Proposition (3.4)

Let us define for convenience \(\Phi (x):=\sum _{ i=1}^{p}\varphi _{i}(x)^{2}\), so \(\Gamma \geq 1 \vee n^{-1}\left \Vert \Phi \right \Vert _{\infty }\). Then we have for these kernels: \(\Phi (x) \geq \chi _{k_{w}}(x) \geq \Theta _{k_{w}}(x)\) for all \(x \in \mathbb{X}\). Moreover, denoting by \(\Pi s\) the orthogonal projection of s onto the linear span of \((\varphi _{i})_{i=1,\ldots,p}\),

$$\displaystyle\begin{array}{rcl} \left \Vert s_{k_{w}}\right \Vert ^{2} =\sum _{ i=1}^{p}w_{ i}^{2}\left (\left.P\varphi _{ i}\right.\right )^{2} \leq \left \Vert \Pi s\right \Vert ^{2} \leq \left \Vert s\right \Vert ^{2} \leq \left \Vert s\right \Vert _{ \infty }.& & {}\\ \end{array}$$

Assumption (3.11) holds for this family if \(\Upsilon \geq \Gamma (1 + \left \Vert s\right \Vert _{\infty })\). We prove in what follows that all the remaining assumptions are valid using only (2.1) and (3.11).

First, it follows from the Cauchy-Schwarz inequality that, for any \(x \in \mathbb{X}\), \(\chi _{k_{w}}(x)^{2} \leq \Phi (x)\Theta _{k_{w}}(x)\). Assumption (3.12) is then automatically satisfied from the definition of \(\Gamma\):

$$\displaystyle{\mathbb{E}\left [\left.\chi _{k_{w}}(X)^{2}\right.\right ] \leq \left \Vert \Phi \right \Vert _{ \infty }P\Theta _{k_{w}} \leq \Gamma nP\Theta _{k_{w}}.}$$

Now let w and w′ be any two vectors in \([0,1]^{p}\); we have

$$\displaystyle{s_{k_{w}} =\sum _{ i=1}^{p}w_{ i}(P\varphi _{i})\varphi _{i},\qquad s_{k_{w}} - s_{k_{w'}} =\sum _{ i=1}^{p}(w_{ i} - w_{i}')\left (\left.P\varphi _{i}\right.\right )\varphi _{i}.}$$

Hence \(\left \Vert s_{k_{w}} - s_{k_{w'}}\right \Vert ^{2} =\sum _{ i=1}^{p}(w_{i} - w_{i}')^{2}\left (\left.P\varphi _{i}\right.\right )^{2}\) and, by the Cauchy-Schwarz inequality, for any \(x \in \mathbb{X}\),

$$\displaystyle{\left \vert s_{k_{w}}(x) - s_{k_{w'}}(x)\right \vert \leq \left \Vert s_{k_{w}} - s_{k_{w'}}\right \Vert \sqrt{\Phi (x)} \leq \left \Vert s_{k_{w}} - s_{k_{w'}}\right \Vert \sqrt{\Gamma n}.}$$

Assumption (3.13) follows using (3.11). Concerning Assumptions (3.14) and (3.15), let us first notice that by orthonormality, for any \((x,x') \in \mathbb{X}^{2}\),

$$\displaystyle{A_{k_{w}}(x,x') =\sum _{ i=1}^{p}w_{ i}^{2}\varphi _{ i}(x)\varphi _{i}(x').}$$

Therefore, Assumption (3.15) holds since

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [\left.A_{k_{w}}(X,x)^{2}\right.\right ]& =& \int _{ \mathbb{X}}\left (\left.\sum _{i=1}^{p}w_{ i}^{2}\varphi _{ i}(y)\varphi _{i}(x)\right.\right )^{2}s(y)d\mu (y) {}\\ & \leq & \left \Vert s\right \Vert _{\infty }\sum _{1\leq i,j\leq p}w_{i}^{2}w_{ j}^{2}\varphi _{ i}(x)\varphi _{j}(x)\int _{\mathbb{X}}\varphi _{i}(y)\varphi _{j}(y)d\mu (y) {}\\ & =& \left \Vert s\right \Vert _{\infty }\sum _{i=1}^{p}w_{ i}^{4}\varphi _{ i}(x)^{2} \leq \left \Vert s\right \Vert _{ \infty }\Phi (x) \leq \left \Vert s\right \Vert _{\infty }\Gamma n. {}\\ \end{array}$$

Assumption (3.14) also follows from similar computations:

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [\left.A_{k_{w}}(X,Y )^{2}\right.\right ]& =& \int _{ \mathbb{X}}\mathbb{E}\left [\left.\left (\left.\sum _{i=1}^{p}w_{ i}^{2}\varphi _{ i}(X)\varphi _{i}(x)\right.\right )^{2}\right.\right ]s(x)d\mu (x) {}\\ & \leq & \left \Vert s\right \Vert _{\infty }\sum _{1\leq i,j\leq p}w_{i}^{2}w_{ j}^{2}\mathbb{E}\left [\left.\varphi _{ i}(X)\varphi _{j}(X)\right.\right ]\int _{\mathbb{X}}\varphi _{i}(x)\varphi _{j}(x)d\mu (x) {}\\ & \leq & \left \Vert s\right \Vert _{\infty }P\Theta _{k_{w}}. {}\\ \end{array}$$

We finish with the proof of (3.16). Let us prove that \(\mathbb{B}_{k_{w}} = \mathcal{E}_{k_{w}}\), where

$$\displaystyle{ \mathcal{E}_{k_{w}} = \left \{\left.t =\sum _{ i=1}^{p}w_{ i}t_{i}\varphi _{i},\,\mbox{ s.t. }\,\sum _{i=1}^{p}t_{ i}^{2} \leq 1\right.\right \}. }$$

First, notice that any \(t \in \mathbb{B}_{k_{w}}\) can be written

$$\displaystyle\begin{array}{rcl} \int _{\mathbb{X}}a(x)k_{w}(x,y)d\mu (x) =\sum _{ i=1}^{p}w_{ i}\langle a,\varphi _{i}\rangle \varphi _{i}(y).& & {}\\ \end{array}$$

Then, consider some \(t \in \mathcal{E}_{k_{w}}\). By definition, there exists a collection \((t_{i})_{i=1,\ldots,p}\) such that \(t =\sum _{i=1}^{p}w_{i}t_{i}\varphi _{i}\) and \(\sum _{i=1}^{p}t_{i}^{2} \leq 1\). If \(a =\sum _{i=1}^{p}t_{i}\varphi _{i}\), \(\left \Vert a\right \Vert ^{2} =\sum _{i=1}^{p}t_{i}^{2} \leq 1\) and \(\langle a,\varphi _{i}\rangle = t_{i}\), hence \(t \in \mathbb{B}_{k_{w}}\). Conversely, for \(t \in \mathbb{B}_{k_{w}}\), there exists some function \(a \in L^{2}\) such that \(\left \Vert a\right \Vert ^{2} \leq 1\) and \(t =\sum _{i=1}^{p}w_{i}\langle a,\varphi _{i}\rangle \varphi _{i}\). Since \((\varphi _{i})_{i=1,\ldots,p}\) is an orthonormal system, one can take \(a =\sum _{i=1}^{p}\langle a,\varphi _{i}\rangle \varphi _{i}\). With \(t_{i} =\langle a,\varphi _{i}\rangle\), we find \(\left \Vert a\right \Vert ^{2} =\sum _{i=1}^{p}t_{i}^{2}\) and \(t \in \mathcal{E}_{k_{w}}\). For any \(t \in \mathbb{B}_{k_{w}} = \mathcal{E}_{k_{w}}\), \(\left \Vert t\right \Vert ^{2} =\sum _{i=1}^{p}w_{i}^{2}t_{i}^{2} \leq \sum _{i=1}^{p}t_{i}^{2} \leq 1\). Hence \(Pt^{2} \leq \left \Vert s\right \Vert _{\infty }\left \Vert t\right \Vert ^{2} \leq \left \Vert s\right \Vert _{\infty }.\)

Appendix 2: Concentration of the Residual Terms

The following proposition gathers the concentration bounds of the remaining terms appearing in (6.30).

Proposition B.1

Let \(\left \{\left.k\right.\right \}_{k\in \mathcal{K}}\) denote a finite collection of kernels satisfying (2.1) and suppose that Assumptions (3.11)–(3.13) hold. Then

$$\displaystyle{ \forall \theta \in (0,1),\qquad 2\frac{P(s_{\hat{k}} - s_{k})} {n} \leq \theta \left \Vert s - s_{\hat{k}}\right \Vert ^{2} +\theta \left \Vert s - s_{ k}\right \Vert ^{2} + \frac{2\Upsilon } {\theta n}. }$$
(6.32)

For any x ≥ 1, with probability larger than \(1 - 2\left \vert \mathcal{K}\right \vert ^{2}e^{-x}\), for any \((k,k') \in \mathcal{K}^{2}\), for any θ ∈ (0,1),

$$\displaystyle{ \left \vert 2(P_{n} - P)(s_{k} - s_{k'})\right \vert \leq \theta \left (\left.\left \Vert s - s_{k'}\right \Vert ^{2} + \left \Vert s - s_{ k}\right \Vert ^{2}\right.\right ) + \square \frac{\Upsilon x^{2}} {\theta n}. }$$
(6.33)

For any x ≥ 1, with probability larger than \(1 - 2\left \vert \mathcal{K}\right \vert e^{-x}\), for any \(k \in \mathcal{K}\),

$$\displaystyle{ \forall \theta \in (0,1),\qquad \left \vert 2(P_{n} - P)\chi _{k}\right \vert \leq \theta P\Theta _{k} + \square \frac{\Upsilon x} {\theta }. }$$
(6.34)

For any x ≥ 1, with probability larger than \(1 - 5.4\left \vert \mathcal{K}\right \vert e^{-x}\), for any \(k \in \mathcal{K}\),

$$\displaystyle{ \forall \theta \in (0,1),\qquad \frac{2\left \vert U_{k}\right \vert } {n^{2}} \leq \theta \frac{P\Theta _{k}} {n} + \square \frac{\Upsilon x^{2}} {\theta n}. }$$
(6.35)

Proof

First, for (6.32), notice that, by (3.13), for any θ ∈ (0, 1),

$$\displaystyle\begin{array}{rcl} 2\frac{P(s_{\hat{k}} - s_{k})} {n} & \leq & 2\frac{\left \Vert s_{\hat{k}} - s_{k}\right \Vert _{\infty }} {n} \leq \frac{2} {n}\left (\left.\Upsilon \vee \left (\left. \frac{\theta } {4}n\left \Vert s_{k} - s_{\hat{k}}\right \Vert ^{2} + \frac{\Upsilon } {\theta } \right.\right )\right.\right ) {}\\ & \leq & \frac{\theta } {2}\left \Vert s_{k} - s_{\hat{k}}\right \Vert ^{2} + \frac{2\Upsilon } {\theta n} \leq \theta \left \Vert s - s_{\hat{k}}\right \Vert ^{2} +\theta \left \Vert s - s_{ k}\right \Vert ^{2} + \frac{2\Upsilon } {\theta n}. {}\\ \end{array}$$

Then, by Proposition (2.1), with probability larger than \(1 -\left \vert \mathcal{K}\right \vert ^{2}e^{-x}\),

$$\displaystyle{\mbox{ for any }(k,k') \in \mathcal{K}^{2},\quad (P_{ n} -P)(s_{k} -s_{k'}) \leq \sqrt{\frac{2P\left (\left.s_{k } - s_{k' } \right. \right ) ^{2 } x} {n}} + \frac{\left \Vert s_{k} - s_{k'}\right \Vert _{\infty }x} {3n}.}$$

Since by (3.11) \(P\left (\left.s_{k} - s_{k'}\right.\right )^{2} \leq \left \Vert s\right \Vert _{\infty }\left \Vert s_{k} - s_{k'}\right \Vert ^{2} \leq \Upsilon \left \Vert s_{k} - s_{k'}\right \Vert ^{2},\)

$$\displaystyle{\sqrt{\frac{2P\left (\left.s_{k } - s_{k' } \right. \right ) ^{2 } x} {n}} \leq \frac{\theta } {4}\left \Vert s_{k} - s_{k'}\right \Vert ^{2} + \frac{2\Upsilon x} {\theta n}.}$$

Moreover, by (3.13) \(\frac{\left \Vert s_{k}-s_{k'}\right \Vert _{\infty }x} {3n} \leq \frac{\theta } {4}\left \Vert s_{k} - s_{k'}\right \Vert ^{2} + \square \frac{\Upsilon x^{2}} {\theta n}.\) Hence, for x ≥ 1, with probability larger than \(1 -\left \vert \mathcal{K}\right \vert ^{2}e^{-x}\)

$$\displaystyle\begin{array}{rcl} (P_{n} - P)(s_{k} - s_{k'})& \leq & \frac{\theta } {2}\left \Vert s_{k} - s_{k'}\right \Vert ^{2} + \square \frac{\Upsilon x^{2}} {\theta n} {}\\ & \leq & \theta \left (\left.\left \Vert s - s_{k'}\right \Vert ^{2} + \left \Vert s - s_{ k}\right \Vert ^{2}\right.\right ) + \square \frac{\Upsilon x^{2}} {\theta n}, {}\\ \end{array}$$

which gives (6.33). Now, using again Proposition (2.1), with probability larger than \(1 -\left \vert \mathcal{K}\right \vert e^{-x}\), for any \(k \in \mathcal{K}\),

$$\displaystyle{(P_{n} - P)\chi _{k} \leq \sqrt{\frac{2P\left (\left.\chi _{k } \right. \right ) ^{2 } x} {n}} + \frac{\left \Vert \chi _{k}\right \Vert _{\infty }x} {3n}.}$$

By (2.1) and (3.11), for any \(k \in \mathcal{K}\), \(\left \Vert \chi _{k}\right \Vert _{\infty } \leq \sup _{(x,y)\in \mathbb{X}^{2}}\left \vert k(x,y)\right \vert \leq \Gamma n \leq \Upsilon n.\)

Concerning (6.34), by (3.12) we get \(P\chi _{k}^{2} \leq \Upsilon nP\Theta _{k}\); hence, for any x ≥ 1, with probability larger than \(1 -\left \vert \mathcal{K}\right \vert e^{-x}\),

$$\displaystyle{(P_{n} - P)\chi _{k} \leq \theta P\Theta _{k} + \left (\left.\frac{1} {3} + \frac{1} {2\theta }\right.\right )\Upsilon x.}$$
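
In more detail, the square-root term from the Bernstein bound above was handled as follows (a sketch, using \(2\sqrt{ab} \leq \theta a + b/\theta\) and \(\left \Vert \chi _{k}\right \Vert _{\infty } \leq \Upsilon n\)):

$$\displaystyle{\sqrt{\frac{2P\chi _{k}^{2}x} {n}} \leq \sqrt{2\Upsilon xP\Theta _{k}} \leq \theta P\Theta _{k} + \frac{\Upsilon x} {2\theta },\qquad \frac{\left \Vert \chi _{k}\right \Vert _{\infty }x} {3n} \leq \frac{\Upsilon x} {3}.}$$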

For (6.35), we apply Proposition (2.2) to obtain with probability larger than \(1 - 2.7\left \vert \mathcal{K}\right \vert e^{-x}\), for any \(k \in \mathcal{K}\),

$$\displaystyle\begin{array}{rcl} \frac{U_{k}} {n^{2}} \leq \frac{\square } {n^{2}}\left (\left.C\sqrt{x} + Dx + Bx^{3/2} + Ax^{2}\right.\right ),& & {}\\ \end{array}$$

where A, B, C, D are defined according to Proposition (2.2). Let us evaluate all these terms. First, \(A \leq 4\sup _{(x,y)\in \mathbb{X}^{2}}\left \vert k(x,y)\right \vert \leq 4\Upsilon n\) by (2.1) and (3.11). Next, \(C^{2} \leq \square n^{2}\mathbb{E}\left [\left.k(X,Y )^{2}\right.\right ] \leq \square n^{2}\left \Vert s\right \Vert _{\infty }P\Theta _{k} \leq \square n^{2}\Upsilon P\Theta _{k}.\)

Using (2.1), we find \(B^{2} \leq 4n\sup _{x\in \mathbb{X}}\int k(x,y)^{2}s(y)d\mu (y) \leq 4n\left \Vert s\right \Vert _{\infty }\Gamma.\)

By (3.11), we consequently have \(B^{2} \leq 4\Upsilon n\). Finally, using the Cauchy-Schwarz inequality and proceeding as for \(C^{2}\),

$$\displaystyle{\mathbb{E}\left [\left.\sum _{i=1}^{n-1}\sum _{ j=i+1}^{n}a_{ i}(X_{i})b_{j}(X_{j})k(X_{i},X_{j})\right.\right ] \leq n\sqrt{\mathbb{E}\left [\left.k(X, Y )^{2 } \right. \right ]} \leq n\sqrt{\Upsilon P\Theta _{k}}.}$$

Hence, \(D \leq n\sqrt{\Upsilon P\Theta _{k}}\), which gives (6.35).