1 INTRODUCTION

Symmetric models are frequently studied in statistical inference. A fundamental example is the family of normal distributions. We consider \(n\) non-degenerate independent identically distributed random variables \(X_{1},\ldots,X_{n}\) (iid, for short) whose common marginal distribution function \(F\), say, is symmetric. This means that there exists \(\mu\in\mathbb{R}\), called the symmetry center of \(F\), such that

$$F(\mu+x)=1-F((\mu-x)-),\quad x\in\mathbb{R}.$$
(1.1)

We assume that \(F\) has a finite expectation, which immediately implies that \(\mathbb{E}X_{1}=\mu\). Under this assumption, all the order statistics \(X_{i:n}\), \(i=1,\ldots,n\), based on \(X_{1},\ldots,X_{n}\) have finite expectations as well.

The purpose of this paper is to provide the sharp upper bounds on the expectations \(\mathbb{E}\sum_{i=1}^{n}c_{i}(X_{i:n}-\mu)\) of linear combinations of order statistics (\(L\)-statistics, for short) centered about \(\mu\), where \(\mathbf{c}=(c_{1},\ldots,c_{n})\in\mathbb{R}^{n}\) is an arbitrary vector of combination coefficients. The bounds are expressed in the scale units \(\sigma_{p}=(\sigma_{p}^{p})^{1/p}\) based on the absolute central moments

$$\sigma_{p}^{p}=\mathbb{E}|X_{1}-\mu|^{p},\quad 1\leq p<\infty,$$

of the parent distribution. Writing \(\sigma_{p}\) below, we tacitly assume that \(X_{1}\) has a positive and finite \(p\)th absolute central moment. If we additionally assume that \(X_{1}\) has a bounded support, we also use the following scale measure

$$\sigma_{\infty}=\lim_{p\rightarrow\infty}\sigma_{p}=\max\{\mu-F^{-1}(0),F^{-1}(1)-\mu\},$$

where

$$F^{-1}(0)=\inf\,\{x\in\mathbb{R}:\ F(x)>0\},$$
$$F^{-1}(1)=\sup\{x\in\mathbb{R}:\ F(x)<1\}$$

denote the left and right end-points of the support of \(F\), respectively. Note that for the symmetric distribution function \(F\)

$$\sigma_{\infty}=\mu-F^{-1}(0)=F^{-1}(1)-\mu.$$

Bounds on the expectations of order statistics and their linear combinations were studied by a number of researchers. The first contribution to the topic was due to Plackett [11], who determined the sharp upper bounds on the expectations of sample ranges \(\mathbb{E}\frac{X_{n:n}-X_{1:n}}{\sigma_{2}}\) expressed in the standard deviation units \(\sigma_{2}\). The respective bounds were valid for general parent distributions as well as under the restriction to symmetric ones. Moriguti [8] presented the optimal bound on \(\mathbb{E}\frac{X_{n:n}-\mu}{\sigma_{2}}\). His result was extended in Arnold [1] to more general scale units \(\sigma_{p}\), \(1<p<\infty\). The positive sharp bounds on \(\mathbb{E}\frac{X_{i:n}-\mu}{\sigma_{2}}\) for high ranks \(\frac{n}{2}+1\leq i\leq n-1\) were established in Rychlik [13]. The order statistics with low ranks \(1\leq i\leq\frac{n+1}{2}\) were treated in Rychlik [14]. The general idea of calculating positive bounds on \(\mathbb{E}\sum_{i=1}^{n}c_{i}\,\frac{X_{i:n}-\mu}{\sigma_{p}}\) for various \(p\) was proposed by Rychlik [12]. We develop this idea here more precisely, and present a method of establishing analogous non-positive bounds. We also mention the more precise evaluations for order statistics, obtained under the more stringent condition that the parent distribution is symmetric and unimodal, which were presented in Gajek and Rychlik [3] and Rychlik [14]. Sharp bounds on the variances of order statistics coming from symmetric populations were described in Moriguti [8], Papadatos [10], and Jasiński and Rychlik [6].

It is well known that when \(U_{1},\ldots,U_{n}\) are independent identically uniformly distributed on the interval \([0,1]\), then the distribution function and the density function of the \(i\)th order statistic \(U_{i:n}\) have the forms

$$\mathbb{P}(U_{i:n}\leq x)=F_{i:n}(x)=\sum_{k=i}^{n}B_{k,n}(x),$$
$$f_{i:n}(x)=F^{\prime}_{i:n}(x)=nB_{i-1,n-1}(x),\quad 0<x<1,\quad i=1,\ldots,n,$$

respectively, where

$$B_{k,m}(x)=\binom{m}{k}x^{k}(1-x)^{m-k},\quad 0<x<1,\quad k=0,\ldots,m,$$

denote the Bernstein polynomials of degree \(m\). More generally, if \(X_{1},\ldots,X_{n}\) are iid with a parent distribution function \(F\), then the order statistics have the distribution functions

$$\mathbb{P}(X_{i:n}\leq x)=F_{i:n}(F(x))=\sum_{k=i}^{n}B_{k,n}(F(x)),\quad x\in\mathbb{R},\quad i=1,\ldots,n.$$
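
The identities above are convenient to verify numerically. The following Python sketch (a mere illustration, not part of the formal development) evaluates \(B_{k,m}\), \(F_{i:n}\), and \(f_{i:n}\), and checks that \(f_{i:n}\) is indeed the derivative of \(F_{i:n}\).

```python
# Sketch (illustration only): F_{i:n} as a Bernstein tail sum, and its
# density f_{i:n}(x) = n * B_{i-1,n-1}(x).
from math import comb

def bernstein(k, m, x):
    """Bernstein polynomial B_{k,m}(x) = C(m,k) x^k (1-x)^(m-k)."""
    return comb(m, k) * x**k * (1 - x)**(m - k)

def F_ord(i, n, x):
    """P(U_{i:n} <= x) = sum_{k=i}^{n} B_{k,n}(x)."""
    return sum(bernstein(k, n, x) for k in range(i, n + 1))

def f_ord(i, n, x):
    """Density of U_{i:n}: n * B_{i-1,n-1}(x)."""
    return n * bernstein(i - 1, n - 1, x)

# Sanity check: a central difference of F_ord matches f_ord.
i, n, x, h = 3, 5, 0.4, 1e-6
print((F_ord(i, n, x + h) - F_ord(i, n, x - h)) / (2 * h))  # ~1.728
print(f_ord(i, n, x))                                       # 1.728
```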

Given a distribution function \(F\), let

$$F^{-1}(u)=\inf\{x\in\mathbb{R}:\ F(x)\geq u\},\quad 0<u<1,$$

denote the left-continuous version of the quantile function of \(F\). It is also commonly known that if \(U_{1},\ldots,U_{n}\) are iid standard uniform, then \(X_{1}=F^{-1}(U_{1}),\ldots,X_{n}=F^{-1}(U_{n})\) are iid \(F\)-distributed, and \(F^{-1}(U_{1:n})\leq\ldots\leq F^{-1}(U_{n:n})\) are the respective order statistics. Therefore

$$\mu=\mathbb{E}X_{1}=\int\limits_{0}^{1}F^{-1}(x)\,dx,$$
$$\sigma_{p}^{p}=\mathbb{E}|X_{1}-\mu|^{p}=\int\limits_{0}^{1}|F^{-1}(x)-\mu|^{p}\,dx,$$

and

$$\mathbb{E}X_{i:n}=\int\limits_{0}^{1}F^{-1}(x)f_{i:n}(x)\,dx,$$
$$\mathbb{E}\sum_{i=1}^{n}c_{i}(X_{i:n}-\mu)=\int\limits_{0}^{1}[F^{-1}(x)-\mu]\sum_{i=1}^{n}c_{i}f_{i:n}(x)\,dx.$$

The assumption (1.1) implies that

$$F^{-1}(x)-\mu=\mu-F^{-1}((1-x)+),\quad 0<x<1.$$
(1.2)

It follows that \(F^{-1}(x)-\mu\) is non-positive for \(0<x\leq\frac{1}{2}\) and nonnegative for \(\frac{1}{2}<x<1\). Since for every symmetric \(F\) the set of points \(0<x<1\) such that \(F^{-1}(x)-\mu\neq\mu-F^{-1}(1-x)\) has Lebesgue measure \(0\), for every \(\mathbf{c}\in\mathbb{R}^{n}\) we can write

$$\int\limits_{0}^{\frac{1}{2}}[F^{-1}(x)-\mu]\sum_{i=1}^{n}c_{i}f_{i:n}(x)\,dx=\int\limits_{\frac{1}{2}}^{1}[F^{-1}(1-x)-\mu]\sum_{i=1}^{n}c_{i}f_{i:n}(1-x)\,dx$$
$${}=\int\limits_{\frac{1}{2}}^{1}[F^{-1}(1-x)-\mu]\sum_{i=1}^{n}c_{i}f_{n+1-i:n}(x)\,dx=-\int\limits_{\frac{1}{2}}^{1}[F^{-1}(x)-\mu]\sum_{i=1}^{n}c_{n+1-i}f_{i:n}(x)\,dx.$$

In consequence,

$$\mathbb{E}\sum_{i=1}^{n}c_{i}(X_{i:n}-\mu)=\int\limits_{0}^{1}[F^{-1}(x)-\mu]\sum_{i=1}^{n}c_{i}f_{i:n}(x)\,dx$$
$${}=\int\limits_{\frac{1}{2}}^{1}[F^{-1}(x)-\mu]\sum_{i=1}^{n}(c_{i}-c_{n+1-i})f_{i:n}(x)\,dx$$
$${}=\int\limits_{\frac{1}{2}}^{1}[F^{-1}(x)-\mu]f_{\mathbf{c}^{s}:n}(x)\,dx,$$
(1.3)

where \(\mathbf{c}^{s}=(c_{1}-c_{n},c_{2}-c_{n-1},\ldots,c_{n}-c_{1})\). Notice that

$$f_{\mathbf{c}^{s}:n}(x)=\sum_{i=1}^{n}(c_{i}-c_{n+1-i})f_{i:n}(x)=\sum_{i=1}^{n}c_{i}[f_{i:n}(x)-f_{i:n}(1-x)]$$
(1.4)

is antisymmetric about \(\frac{1}{2}\), where it vanishes. It is the derivative of the function

$$F_{\mathbf{c}^{s}:n}(x)=\sum_{i=1}^{n}(c_{i}-c_{n+1-i})F_{i:n}(x)=\sum_{i=1}^{n}c_{i}[F_{i:n}(x)+F_{i:n}(1-x)-1],$$
(1.5)

which is symmetric about \(\frac{1}{2}\), and vanishes at \(0\) and \(1\).
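
For a concrete symmetric parent, the reduction (1.3) can be checked numerically. The sketch below (our illustration; the coefficient vector \(\mathbf{c}\) is an arbitrary example) compares a Monte Carlo estimate of the left-hand side with a quadrature of the right-hand side for the standard normal parent (\(\mu=0\)).

```python
# Sketch (illustration only): check identity (1.3) for a standard normal
# parent (mu = 0) and an arbitrary example coefficient vector c.
import numpy as np
from math import comb
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 4
c = np.array([-1.0, 0.0, 0.5, 2.0])             # example coefficients

def f_ord(i, x):                                 # density of U_{i:n}
    return n * comb(n - 1, i - 1) * x**(i - 1) * (1 - x)**(n - i)

# LHS: Monte Carlo estimate of E sum_i c_i (X_{i:n} - mu).
lhs = (np.sort(rng.standard_normal((200_000, n)), axis=1) @ c).mean()

# RHS: trapezoidal quadrature of (1.3) over (1/2, 1), avoiding the pole at 1.
x = np.linspace(0.5, 1, 100_001)[:-1] + 2.5e-6
cs = c - c[::-1]                                 # the vector c^s
y = norm.ppf(x) * sum(cs[i - 1] * f_ord(i, x) for i in range(1, n + 1))
rhs = float(((y[1:] + y[:-1]) / 2 * np.diff(x)).sum())

print(lhs, rhs)                                  # agree up to sampling error
```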

We use formula (1.3) to establish the optimal upper bounds on the expectations of centered \(L\)-statistics \(\mathbb{E}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{p}}\) gauged in various scale units \(\sigma_{p}\), \(1\leq p\leq\infty\). It turns out that the signs of the bounds depend merely on the coefficients \(c_{1},\ldots,c_{n}\), and not on the scale units. The positive and non-positive bounds are deduced by different methods. In Section 2, we determine the conditions on \(\mathbf{c}\in\mathbb{R}^{n}\) assuring positivity of the bounds, the bound values, and the conditions of their attainability. In Sections 3 and 4 we consider analogous problems for the \(L\)-statistics which have non-positive bounds. Precisely, in Section 3 we describe the conditions on the coefficient vectors \(\mathbf{c}\) under which the upper bound on the expectation of the respective centered \(L\)-statistic amounts to \(0\) and is attained by some symmetric parent distributions. We also determine the corresponding optimal distributions. We analyze the remaining cases in Section 4. Then the optimal bounds either are equal to \(0\) but attained only in the limit, or take on strictly negative values. In all the cases, we deduce the bounds for single order statistics from the general results.

The lower bounds are immediately deduced from the upper ones with the use of the transformation \(Y_{i}=2\mu-X_{i}\), \(i=1,\ldots,n\). We notice that the transformation neither changes the marginal symmetric distribution nor affects the independence. We also have \(Y_{i:n}-\mu=-(X_{n+1-i:n}-\mu)\), \(i=1,\ldots,n\). It follows that the lower bound on \(\mathbb{E}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{p}}\) is just the negative of the upper bound on \(\mathbb{E}\sum_{i=1}^{n}c_{n+1-i}\frac{X_{i:n}-\mu}{\sigma_{p}}\), and their attainability conditions are identical. Therefore we concentrate below on calculating the upper bounds only.
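
The reflection argument is elementary but worth seeing in action; a toy sketch (ours):

```python
# Sketch (illustration only) of the reflection Y_i = 2*mu - X_i: the ordered
# values satisfy Y_{i:n} - mu = -(X_{n+1-i:n} - mu) exactly.
import numpy as np

rng = np.random.default_rng(1)
mu = 1.0
X = mu + rng.laplace(size=5)          # any symmetric parent works here
Y = 2 * mu - X
print(np.sort(Y) - mu)                # coincides with the next line
print(-(np.sort(X)[::-1] - mu))
```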

2 POSITIVE BOUNDS

For establishing the positive bounds, we use the tools described in the following three lemmas. The first one is a simplified version of Theorem 1 in Moriguti [9].

Lemma 1. Suppose that a real function \(h\) defined on \([a,b]\) has a finite integral. Let \(\underline{h}\) denote (the right-continuous version of, say) the derivative of the greatest convex minorant \(\underline{H}\) of the antiderivative \(H(x)=\int_{a}^{x}h(t)\,dt\), \(a\leq x\leq b\), of \(h\). Then for every nondecreasing function \(g:[a,b]\mapsto\mathbb{R}\) we have

$$\int\limits_{a}^{b}g(x)h(x)\,dx\leq\int\limits_{a}^{b}g(x)\underline{h}(x)\,dx$$
(2.1)

under the assumption that both the integrals exist. The equality in (2.1) is attained if \(g\) is constant on every interval contained in the set \(\{x\in[a,b]:\,\underline{H}(x)<H(x)\}\).

The greatest convex minorant \(\underline{H}\) of \(H\) is defined as the supremum of all convex functions not greater than \(H\). By definition, it is convex as well. It can be shown that if \(h\) is square integrable over \([a,b]\), then \(\underline{h}\) is the projection of \(h\) onto the family of all nondecreasing functions in \(L^{2}([a,b],dx)\) (see, e.g., Rychlik [13]).
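
Moriguti's projection is also easy to compute numerically. The sketch below (ours, assuming a discretization of \(H\) on a finite grid) recovers \(\underline{h}\) as the slopes of the lower convex hull of the points \((x_{k},H(x_{k}))\); the resulting step function is nondecreasing, as it must be.

```python
# Sketch (illustration only): derivative of the greatest convex minorant of
# H on a grid, via the lower convex hull of the points (x_k, H(x_k)).
import numpy as np

def gcm_derivative(x, H):
    hull = [0]                                  # indices of lower-hull knots
    for k in range(1, len(x)):
        hull.append(k)
        # Pop the middle knot while the last three lie in concave position.
        while len(hull) >= 3:
            a, b, c = hull[-3:]
            if (H[b] - H[a]) * (x[c] - x[b]) <= (H[c] - H[b]) * (x[b] - x[a]):
                break
            del hull[-2]
    slopes = np.empty(len(x) - 1)
    for a, b in zip(hull, hull[1:]):            # constant slope between knots
        slopes[a:b] = (H[b] - H[a]) / (x[b] - x[a])
    return slopes                               # nondecreasing by convexity

# Example: H = F_{2:3}; its density 6x(1-x) decreases beyond x = 1/2, and the
# projection replaces that decreasing part by a constant.
x = np.linspace(0, 1, 2001)
print(gcm_derivative(x, 3 * x**2 - 2 * x**3)[-5:])   # flat near the right end
```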

The other tool is the classic Hölder inequality, see, e.g., Mitrinović [7].

Lemma 2. Let \(g\) and \(h\) belong to the Banach spaces \(L^{p}([a,b],dx)\) and \(L^{q}([a,b],dx)\), respectively, for some \(1<p<\infty\) and \(1<q=\frac{p}{p-1}<\infty\). Then

$$\int\limits_{a}^{b}g(x)h(x)\,dx\leq\left[\int\limits_{a}^{b}|g(x)|^{p}\,dx\right]^{1/p}\left[\int\limits_{a}^{b}|h(x)|^{q}\,dx\right]^{1/q},$$

and the equality holds if either \(g(x)=0\), or \(h(x)=0\), or

$$g(x)=\alpha|h(x)|^{q/p}\mathrm{sgn}\{h(x)\}$$

almost everywhere on \([a,b]\) for some positive \(\alpha\).

The last auxiliary result is called the variation diminishing property of Bernstein polynomials.

Lemma 3 (cf. [13, p. 66]). The number of zeros of a given nonzero linear combination

$$B(x)=\sum_{i=0}^{n}a_{i}B_{i,n}(x),\quad 0<x<1,$$

of Bernstein polynomials of a fixed degree \(n\) is not greater than the number of sign changes in the sequence \(a_{0},\ldots,a_{n}\). Moreover, the signs of \(B\) in the right neighborhood of \(0\) and the left neighborhood of \(1\) coincide with the signs of the first and last nonzero elements among \(a_{0},\ldots,a_{n}\), respectively.

The first statement was proved in Schoenberg [15]. The second claim is trivial.
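
Lemma 3 can be probed numerically as well; the following sketch (ours, with an arbitrary coefficient sequence) compares the sign changes of the coefficients with those of the combination sampled on a fine grid.

```python
# Sketch (illustration only) of the variation diminishing property: B cannot
# change sign more often than its coefficient sequence does.
import numpy as np
from math import comb

def sign_changes(v):
    s = np.sign(v[np.abs(v) > 1e-12])           # drop (near-)zero entries
    return int(np.sum(s[1:] != s[:-1]))

n = 6
a = np.array([-1.0, -0.5, 2.0, 1.0, -0.3, 4.0, 5.0])   # 3 sign changes
x = np.linspace(1e-3, 1 - 1e-3, 10_000)
B = sum(a[k] * comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1))
print(sign_changes(a), sign_changes(B))         # 3 and at most 3
```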

We are now in a position to prove the following proposition.

Proposition 1. Suppose that \(X_{1},\ldots,X_{n}\) are non-degenerate iid symmetrically distributed about \(\mu\), and they have a finite moment of order \(1<p<\infty\). Then for every \(\mathbf{c}=(c_{1},\ldots,c_{n})\in\mathbb{R}^{n}\)

$$\mathbb{E}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{p}}\leq\frac{1}{2^{1/p}}\left[\int\limits_{\frac{1}{2}}^{1}[(\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)]^{q}dx\right]^{\frac{1}{q}},$$
(2.2)

where \((\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)=\max\{\underline{f}_{\mathbf{c}^{s}:n}(x),0\}\) is the positive part of the derivative \(\underline{f}_{\mathbf{c}^{s}:n}(x)\) of the greatest convex minorant \(\underline{F}_{\mathbf{c}^{s}:n}(x)\) of (1.5) restricted to the interval \(\left(\frac{1}{2},1\right)\). Moreover, if \(\mathbf{c}\) satisfies

$$F_{\mathbf{c}^{s}:n}(x)<0\quad for\ some\quad\frac{1}{2}<x<1,$$
(2.3)

then the bound (2.2) is positive and sharp. It is then attained by the distribution function \(F\) determined from the relation

$$\frac{F^{-1}(x)-\mu}{\sigma_{p}}=\begin{cases}-\frac{[(\underline{f}_{\mathbf{c}^{s}:n})_{+}(1-x)]^{q/p}}{\left[2\int\limits_{\frac{1}{2}}^{1}(\underline{f}_{\mathbf{c}^{s}:n})^{q}_{+}(x)dx\right]^{1/p}},\quad 0<x\leq\frac{1}{2}\\ \frac{[(\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)]^{q/p}}{\left[2\int\limits_{\frac{1}{2}}^{1}(\underline{f}_{\mathbf{c}^{s}:n})^{q}_{+}(x)dx\right]^{1/p}},\quad\frac{1}{2}<x<1.\end{cases}$$
(2.4)

Proof. The bound (2.2) follows from the relations

$$\mathbb{E}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{p}}=\int\limits_{\frac{1}{2}}^{1}\frac{F^{-1}(x)-\mu}{\sigma_{p}}f_{\mathbf{c}^{s}:n}(x)\,dx$$
$${}\leq\int\limits_{\frac{1}{2}}^{1}\frac{F^{-1}(x)-\mu}{\sigma_{p}}\underline{f}_{\mathbf{c}^{s}:n}(x)\,dx\leq\int\limits_{\frac{1}{2}}^{1}\frac{F^{-1}(x)-\mu}{\sigma_{p}}(\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)\,dx$$
$${}\leq\left[\int\limits_{\frac{1}{2}}^{1}\frac{|F^{-1}(x)-\mu|^{p}}{\sigma_{p}^{p}}\,dx\right]^{\frac{1}{p}}\left[\int\limits_{\frac{1}{2}}^{1}[(\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)]^{q}\,dx\right]^{\frac{1}{q}}$$
$${}=\frac{1}{2^{\frac{1}{p}}}\left[\int\limits_{\frac{1}{2}}^{1}[(\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)]^{q}\,dx\right]^{\frac{1}{q}}.$$
(2.5)

The first two inequalities follow from the facts that \(\frac{F^{-1}(x)-\mu}{\sigma_{p}}\) is nondecreasing (see Lemma 1) and nonnegative on \(\left[\frac{1}{2},1\right)\), respectively. The third one is the Hölder inequality of Lemma 2, and the final equality follows from the symmetry of \(F\), which gives \(\int_{\frac{1}{2}}^{1}|F^{-1}(x)-\mu|^{p}\,dx=\frac{\sigma_{p}^{p}}{2}\).

Under the assumption (2.3), \(\underline{f}_{\mathbf{c}^{s}:n}\) is positive on some neighborhood of \(1\). It follows that the RHS of (2.2) is strictly positive. We now show that the corresponding bound is attainable. The equality in the last inequality of (2.5) is attained under the assumption

$$\frac{F^{-1}(x)-\mu}{\sigma_{p}}=\alpha\left[(\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)\right]^{\frac{q}{p}},\quad\frac{1}{2}<x<1,$$
(2.6)

for some positive \(\alpha\) (\(\alpha=0\) is excluded here, because \(F\) is non-degenerate). By the assumption, the LHS of (2.6) has the \(p\)th norm equal to \(\frac{1}{2^{1/p}}\). The RHS has the same norm for

$$\alpha=\left[2\int\limits_{\frac{1}{2}}^{1}[(\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)]^{q}\,dx\right]^{-\frac{1}{p}}.$$
(2.7)

Note that under the conditions (2.6) with (2.7) the second inequality in (2.5) becomes the equality, because \(\frac{F^{-1}(x)-\mu}{\sigma_{p}}\) vanishes on the (possibly empty) interval where \(\underline{f}_{\mathbf{c}^{s}:n}(x)\) is negative. Moreover, the conditions imply the equality in the first inequality, because the constancy intervals of (2.6) contain all the intervals where \(\underline{F}_{\mathbf{c}^{s}:n}(x)<F_{\mathbf{c}^{s}:n}(x)\) (cf. Lemma 1). By the symmetry condition (1.2) and continuity of (2.6), we define the missing part of the symmetric quantile function attaining the bound in (2.5)

$$\frac{F^{-1}(x)-\mu}{\sigma_{p}}=\frac{\mu-F^{-1}(1-x)}{\sigma_{p}}=-\frac{[(\underline{f}_{\mathbf{c}^{s}:n})_{+}(1-x)]^{q/p}}{\left[2\int\limits_{\frac{1}{2}}^{1}(\underline{f}_{\mathbf{c}^{s}:n})^{q}_{+}(x)dx\right]^{1/p}},\quad 0<x\leq\frac{1}{2}.$$

This shows that the parent distribution described in (2.4) actually provides the equality in (2.2). \(\blacksquare\)

In the classic subcase \(p=2\) the formulae simplify slightly.

Corollary 1. Let \(X_{1},\ldots,X_{n}\) be iid symmetrically distributed about \(\mu\), and have a positive and finite variance \(\sigma_{2}^{2}\). Then for every \(\mathbf{c}\in\mathbb{R}^{n}\)

$$\mathbb{E}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{2}}\leq\left[\frac{1}{2}\int\limits_{\frac{1}{2}}^{1}[(\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)]^{2}dx\right]^{\frac{1}{2}}.$$

Under the condition (2.3), the above bound is positive, and it is attained by the distribution function satisfying

$$\frac{F^{-1}(x)-\mu}{\sigma_{2}}=\begin{cases}-\frac{(\underline{f}_{\mathbf{c}^{s}:n})_{+}(1-x)}{\left[2\int\limits_{\frac{1}{2}}^{1}[(\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)]^{2}dx\right]^{1/2}},\quad 0<x\leq\frac{1}{2}\\ \frac{(\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)}{\left[2\int\limits_{\frac{1}{2}}^{1}[(\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)]^{2}dx\right]^{1/2}},\quad\frac{1}{2}<x<1.\end{cases}$$

Now we consider the extreme cases \(p=1\) and \(p=\infty\).

Proposition 2. If \(X_{1},\ldots,X_{n}\) are iid symmetrically distributed about \(\mu\), have a finite first absolute moment \(\sigma_{1}\), and \(\mathbf{c}\in\mathbb{R}^{n}\), then

$$\mathbb{E}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{1}}\leq\frac{1}{2}(\underline{f}_{\mathbf{c}^{s}:n})_{+}(1).$$
(2.8)

Under condition (2.3) on the vector of coefficients \(\mathbf{c}\) the RHS is positive. Define then

$$\alpha_{*}^{s}=\alpha_{*}^{s}(1,\mathbf{c})=\min\left\{\frac{1}{2}\leq\alpha\leq 1:\ \underline{f}_{\mathbf{c}^{s}:n}(\alpha)=\underline{f}_{\mathbf{c}^{s}:n}(1)\right\}.$$

If \(\alpha_{*}^{s}<1\) then the equality in (2.8) holds for the marginal distribution

$$\mathbb{P}\left(X_{1}=\mu-\frac{\sigma_{1}}{2(1-\alpha)}\right)=\mathbb{P}\left(X_{1}=\mu+\frac{\sigma_{1}}{2(1-\alpha)}\right)=1-\alpha,$$
$$\mathbb{P}\left(X_{1}=\mu\right)=2\alpha-1$$
(2.9)

with \(\alpha=\alpha_{*}^{s}\). If \(\alpha_{*}^{s}=1\), then the bound (2.8) is attained in the limit by the marginal distributions (2.9) with \(\alpha\rightarrow 1\).

Proof. The inequality follows from the chain of relations

$$\mathbb{E}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{1}}\leq\int\limits_{\frac{1}{2}}^{1}\frac{F^{-1}(x)-\mu}{\sigma_{1}}\underline{f}_{\mathbf{c}^{s}:n}(x)\,dx$$
$${}\leq\int\limits_{\frac{1}{2}}^{1}\frac{F^{-1}(x)-\mu}{\sigma_{1}}(\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)\,dx$$
$${}\leq(\underline{f}_{\mathbf{c}^{s}:n})_{+}(1)\int\limits_{\frac{1}{2}}^{1}\frac{F^{-1}(x)-\mu}{\sigma_{1}}\,dx=\frac{1}{2}(\underline{f}_{\mathbf{c}^{s}:n})_{+}(1).$$
(2.10)

If (2.3) holds then \((\underline{f}_{\mathbf{c}^{s}:n})_{+}(1)=\underline{f}_{\mathbf{c}^{s}:n}(1)>0\). If moreover \(\alpha_{*}^{s}<1\), then the last inequality in (2.10) is attained if

$$\frac{F^{-1}(x)-\mu}{\sigma_{1}}=\begin{cases}0,\quad\frac{1}{2}<x\leq\alpha_{*}^{s}\\ d>0,\quad\alpha_{*}^{s}<x<1.\end{cases}$$
(2.11)

Note that all the intervals where \(\underline{F}_{\mathbf{c}^{s}:n}(x)<F_{\mathbf{c}^{s}:n}(x)\), and so \(\underline{f}_{\mathbf{c}^{s}:n}(x)\) is constant, are contained in the intervals \(\left(\frac{1}{2},\alpha_{*}^{s}\right)\) and \((\alpha_{*}^{s},1)\). This assures the equality in the first inequality of (2.10). Moreover, \(\{x:\ \underline{f}_{\mathbf{c}^{s}:n}(x)<0\}\) is contained in \(\left(\frac{1}{2},\alpha_{*}^{s}\right)\), which guarantees the equality in the middle inequality. By the symmetry assumption, (2.11) is extended to \(\left(0,\frac{1}{2}\right)\) by

$$\frac{F^{-1}(x)-\mu}{\sigma_{1}}=\begin{cases}-d,\quad 0<x\leq 1-\alpha_{*}^{s}\\ 0,\quad 1-\alpha_{*}^{s}<x\leq\frac{1}{2}.\end{cases}$$
(2.12)

The first absolute central moment condition forces \(d=\frac{1}{2(1-\alpha_{*}^{s})}\). Then the conditions (2.11) and (2.12) determine (2.9) with \(\alpha=\alpha_{*}^{s}\).

If \(\alpha_{*}^{s}=1\), then \((\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)=f_{\mathbf{c}^{s}:n}(x)\) is strictly increasing on some neighborhood of \(1\). For (2.9) with \(\alpha\) sufficiently close to \(1\), all the intervals where \(\underline{f}_{\mathbf{c}^{s}:n}\) is constant and \(\underline{f}_{\mathbf{c}^{s}:n}<0\) are contained in \(\left(\frac{1}{2},\alpha\right)\), which gives the equalities in the first two inequalities of (2.10), respectively. By continuity of \((\underline{f}_{\mathbf{c}^{s}:n})_{+}\), the last inequality becomes the equality in the limit as \(\alpha\rightarrow 1\). \(\blacksquare\)

Observe that in the case \(\alpha_{*}^{s}=1\), the RHS of (2.8) takes on the simple form

$$\frac{1}{2}f_{\mathbf{c}^{s}:n}(1)=\frac{n}{2}(c_{n}-c_{1}).$$

Proposition 3. Suppose that \(X_{1},\ldots,X_{n}\) are independent, have a common distribution symmetric about \(\mu\), and a nondegenerate bounded support. Let

$$\alpha_{**}^{s}=\alpha_{**}^{s}(\infty,\mathbf{c})=\max\left\{\frac{1}{2}\leq\alpha\leq 1:\ \underline{f}_{\mathbf{c}^{s}:n}(\alpha)\leq 0\right\}$$
(2.13)

for some fixed \(\mathbf{c}\in\mathbb{R}^{n}\). Then

$$\mathbb{E}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{\infty}}\leq-F_{\mathbf{c}^{s}:n}(\alpha_{**}^{s}).$$

Under the assumption (2.3) the bound is positive and it is attained by the parent distribution

$$\mathbb{P}(X_{1}=\mu-\sigma_{\infty})=\mathbb{P}(X_{1}=\mu+\sigma_{\infty})=1-\alpha_{**}^{s},$$
$$\mathbb{P}(X_{1}=\mu)=2\alpha_{**}^{s}-1.$$
(2.14)

Proof. As usual, we start with proving the inequality

$$\mathbb{E}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{\infty}}\leq\int\limits_{\frac{1}{2}}^{1}\frac{F^{-1}(x)-\mu}{\sigma_{\infty}}\underline{f}_{\mathbf{c}^{s}:n}(x)\,dx$$
$${}\leq\int\limits_{\frac{1}{2}}^{1}\frac{F^{-1}(x)-\mu}{\sigma_{\infty}}(\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)\,dx\leq\int\limits_{\frac{1}{2}}^{1}(\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)\,dx$$
$${}=-\underline{F}_{\mathbf{c}^{s}:n}(\alpha_{**}^{s})=-F_{\mathbf{c}^{s}:n}(\alpha_{**}^{s}),$$
(2.15)

where the restrictions \(0\leq\frac{F^{-1}(x)-\mu}{\sigma_{\infty}}\leq 1\) assure the validity of the last inequality. Note that \(\underline{f}_{\mathbf{c}^{s}:n}(x)\neq f_{\mathbf{c}^{s}:n}(x)\) only inside the intervals where \(\underline{F}_{\mathbf{c}^{s}:n}(x)<F_{\mathbf{c}^{s}:n}(x)\), and \(\int_{\alpha_{1}}^{\alpha_{2}}\underline{f}_{\mathbf{c}^{s}:n}(x)dx=\int_{\alpha_{1}}^{\alpha_{2}}f_{\mathbf{c}^{s}:n}(x)dx\) if \(\underline{f}_{\mathbf{c}^{s}:n}(\alpha_{i})=f_{\mathbf{c}^{s}:n}(\alpha_{i})\), \(i=1,2\). Therefore \(\underline{F}_{\mathbf{c}^{s}:n}(\alpha_{**}^{s})=F_{\mathbf{c}^{s}:n}(\alpha_{**}^{s})\).

The ultimate and penultimate inequalities are attained under the conditions

$$\frac{F^{-1}(x)-\mu}{\sigma_{\infty}}=1\quad\mathrm{if}\quad\underline{f}_{\mathbf{c}^{s}:n}(x)>0,$$
(2.16)
$$\frac{F^{-1}(x)-\mu}{\sigma_{\infty}}=0\quad\mathrm{if}\quad\underline{f}_{\mathbf{c}^{s}:n}(x)<0,$$
(2.17)

respectively. These intervals together with \(\{x:\underline{f}_{\mathbf{c}^{s}:n}(x)=0\}\) (if non-degenerate) contain all the intervals where \(\underline{F}_{\mathbf{c}^{s}:n}(x)<F_{\mathbf{c}^{s}:n}(x)\). This means that under the conditions (2.16) and (2.17) the bound (2.15) is attained.

Assume now that (2.3) holds, and the condition (2.16) is satisfied on a non-degenerate interval (otherwise the bound is zero, and it is attained by a degenerate parent distribution which contradicts our assumptions). Then the bound is positive, and

$$\frac{F^{-1}(x)-\mu}{\sigma_{\infty}}=\begin{cases}0,\quad\frac{1}{2}<x\leq\alpha_{**}^{s}\\ 1,\quad\alpha_{**}^{s}<x<1\end{cases}$$

satisfies (2.16) and (2.17), which means that the bound is attained by the distribution (2.14). \(\blacksquare\)

Observe that if \(\underline{f}_{\mathbf{c}^{s}:n}(x)>0\), \(\frac{1}{2}<x<1\), then the optimal marginal distribution (2.14) is supported on two points. We cannot exclude the possibility that \(\underline{f}_{\mathbf{c}^{s}:n}(x)=0\) on some non-degenerate interval \([\beta_{**}^{s},\alpha_{**}^{s}]\), say. Then the bound is attained by the five-point symmetric distributions

$$\mathbb{P}(X_{1}=\mu)=2\beta_{**}^{s}-1,$$
$$\mathbb{P}(X_{1}=\mu\mp d)=\alpha_{**}^{s}-\beta_{**}^{s},$$
$$\mathbb{P}(X_{1}=\mu\mp\sigma_{\infty})=1-\alpha_{**}^{s}$$

for any \(0<d<\sigma_{\infty}\) as well. Obviously, the atom at \(\mu\) disappears if \(\beta_{**}^{s}=\frac{1}{2}\).

Now we focus on the special case of single order statistics. For \(X_{i:n}\), \(1\leq i\leq n\), we have

$$f^{s}_{i:n}(x)=f_{i:n}(x)-f_{n+1-i:n}(x)=n{n-1\choose i-1}[x^{i-1}(1-x)^{n-i}-x^{n-i}(1-x)^{i-1}].$$
(2.18)

This is negative, equal to \(0\), and positive on \(\left(\frac{1}{2},1\right)\) for \(i\leq\frac{n}{2}\), \(i=\frac{n+1}{2}\) and \(i\geq\frac{n}{2}+1\), respectively. It always vanishes at \(\frac{1}{2}\). It follows that the respective antiderivative

$$F^{s}_{i:n}(x)=F_{i:n}(x)-F_{n+1-i:n}(x),\quad\frac{1}{2}<x\leq 1,$$

is nonnegative for \(i\leq\frac{n+1}{2}\), and we can obtain sharp positive upper bounds of Propositions 1–3 only for \(i\geq\frac{n}{2}+1\).

If \(i=n\), then

$$f^{s}_{n:n}(x)=n[x^{n-1}-(1-x)^{n-1}],\quad\frac{1}{2}<x\leq 1,$$

is positive and increasing, which implies

$$(\underline{f}^{s}_{n:n})_{+}(x)=f^{s}_{n:n}(x)=n[x^{n-1}-(1-x)^{n-1}].$$

For \(\frac{n}{2}+1\leq i\leq n-1\), (2.18) is negative on \(\left(0,\frac{1}{2}\right)\), positive on \(\left(\frac{1}{2},1\right)\), and equal to \(0\) at \(0\), \(\frac{1}{2}\), and \(1\). It has at least one local maximum in \(\left(\frac{1}{2},1\right)\), where its derivative changes the sign from plus to minus. Also, it has at least one minimum in \(\left(0,\frac{1}{2}\right)\), and its derivative is negative and positive to the left and right of it, respectively. We have

$$(f^{s}_{i:n})^{\prime}(x)=n(n-1)[-B_{n-i-1,n-2}(x)+B_{n-i,n-2}(x)+B_{i-2,n-2}(x)-B_{i-1,n-2}(x)].$$

By Lemma 3, \(F^{s}_{i:n}(x)\) is first concave, then convex, and ultimately concave, because it has at least one convexity interval. Since it is symmetric about \(\frac{1}{2}\), it is first convex and then concave on \(\left(\frac{1}{2},1\right)\). Its greatest convex minorant \(\underline{F}^{s}_{i:n}\) is first identical with \(F^{s}_{i:n}\) and then linear. Linearity of \(\underline{F}^{s}_{i:n}\) on the whole interval is excluded, because \(F^{s}_{i:n}\left(\frac{1}{2}\right)<F^{s}_{i:n}(1)\) and \(f^{s}_{i:n}\left(\frac{1}{2}\right)=0\). The shape changing point \(\alpha_{*}^{s}=\alpha_{*}^{s}(i,n)\) is determined by the equation

$$f^{s}_{i:n}(\alpha)=f_{i:n}(\alpha)-f_{n+1-i:n}(\alpha)=\frac{F^{s}_{i:n}(1)-F^{s}_{i:n}(\alpha)}{1-\alpha}=\frac{F_{n+1-i:n}(\alpha)-F_{i:n}(\alpha)}{1-\alpha}.$$
(2.19)

Consequently,

$$(\underline{f}^{s}_{i:n})_{+}(x)=\underline{f}^{s}_{i:n}(x)=f^{s}_{i:n}(\min\{x,\alpha_{*}^{s}\})$$
$${}=f_{i:n}(\min\{x,\alpha_{*}^{s}\})-f_{n+1-i:n}(\min\{x,\alpha_{*}^{s}\}).$$

This leads us to the following conclusions.

Corollary 2. Assume that \(X_{1},\ldots,X_{n}\) are iid, symmetrically distributed about \(\mu\), and \(0<\sigma_{p}<\infty\) for some \(1\leq p\leq\infty\). Then for \(p=1\)

$$\mathbb{E}\frac{X_{n:n}-\mu}{\sigma_{1}}\leq\frac{n}{2},$$

and the equality is attained in the limit by the marginal distributions

$$\mathbb{P}\left(X_{1}=\mu\mp\frac{\sigma_{1}}{2(1-\alpha)}\right)=1-\alpha,$$
$$\mathbb{P}(X_{1}=\mu)=2\alpha-1$$

as \(\alpha\rightarrow 1\).

For \(1<p<\infty\)

$$\mathbb{E}\frac{X_{n:n}-\mu}{\sigma_{p}}\leq\frac{n}{2^{1/p}}\left[\int\limits_{\frac{1}{2}}^{1}[x^{n-1}-(1-x)^{n-1}]^{q}dx\right]^{1/q},$$
(2.20)

and the equality condition is

$$\frac{F^{-1}(x)-\mu}{\sigma_{p}}=\begin{cases}-\frac{[(1-x)^{n-1}-x^{n-1}]^{q/p}}{\left[2\int\limits_{\frac{1}{2}}^{1}[x^{n-1}-(1-x)^{n-1}]^{q}dx\right]^{1/p}},\quad 0<x\leq\frac{1}{2}\\ \frac{[x^{n-1}-(1-x)^{n-1}]^{q/p}}{\left[2\int\limits_{\frac{1}{2}}^{1}[x^{n-1}-(1-x)^{n-1}]^{q}dx\right]^{1/p}},\quad\frac{1}{2}<x<1.\end{cases}$$

For \(p=\infty\)

$$\mathbb{E}\frac{X_{n:n}-\mu}{\sigma_{\infty}}\leq 1-\frac{1}{2^{n-1}},$$

with the equality condition

$$\mathbb{P}(X_{1}=\mu\mp\sigma_{\infty})=\frac{1}{2}.$$

The statement of the corollary for \(1<p<\infty\) can be found in Arnold [1]. In the case \(p=2\), inequality (2.20) simplifies to

$$\mathbb{E}\frac{X_{n:n}-\mu}{\sigma_{2}}\leq n\sqrt{\frac{1}{4n-2}\left(1-\frac{[(n-1)!]^{2}}{(2n-2)!}\right)}.$$

This inequality was established by Moriguti [8].
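
The closed form is easily confirmed against a direct quadrature of (2.20) with \(p=q=2\); a quick sketch (ours):

```python
# Quick check (illustration only): the closed-form p = 2 bound agrees with
# direct quadrature of (2.20) for p = q = 2.
import numpy as np
from math import factorial, sqrt

n = 5
closed = n * sqrt((1 - factorial(n - 1)**2 / factorial(2*n - 2)) / (4*n - 2))

x = np.linspace(0.5, 1, 200_001)
y = (x**(n - 1) - (1 - x)**(n - 1))**2
integral = float(((y[1:] + y[:-1]) / 2 * np.diff(x)).sum())
numeric = n / sqrt(2) * sqrt(integral)

print(closed, numeric)                  # both ~ 1.17007 for n = 5
```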

Corollary 3. Under the assumptions of Corollary 2, for \(\frac{n}{2}+1\leq i\leq n-1\) the following inequalities are sharp.

For \(p=1\) we have

$$\mathbb{E}\frac{X_{i:n}-\mu}{\sigma_{1}}\leq\frac{1}{2}\,f^{s}_{i:n}(\alpha_{*}^{s}),$$

which becomes the equality for

$$\mathbb{P}\left(X_{1}=\mu\mp\frac{\sigma_{1}}{2(1-\alpha_{*}^{s})}\right)=1-\alpha_{*}^{s},$$
$$\mathbb{P}(X_{1}=\mu)=2\alpha_{*}^{s}-1,$$

where \(\alpha_{*}^{s}\) is defined by Eq. (2.19).

For \(1<p<\infty\)

$$\mathbb{E}\frac{X_{i:n}-\mu}{\sigma_{p}}\leq\frac{B_{q}}{2^{1/p}},$$
(2.21)

where

$$B^{q}_{q}=\int\limits_{\frac{1}{2}}^{\alpha_{*}^{s}}[f_{i:n}(x)-f_{n+1-i:n}(x)]^{q}dx+(1-\alpha_{*}^{s})[f_{i:n}(\alpha_{*}^{s})-f_{n+1-i:n}(\alpha_{*}^{s})]^{q}.$$

The parent distribution assuring the equality in (2.21) is defined by the relations

$$\frac{F^{-1}(x)-\mu}{\sigma_{p}}=\frac{1}{(2B_{q}^{q})^{1/p}}\times\begin{cases}f_{n+1-i:n}(\alpha_{*}^{s})-f_{i:n}(\alpha_{*}^{s}),\quad 0<x\leq 1-\alpha_{*}^{s}\\ f_{n+1-i:n}(x)-f_{i:n}(x),\quad 1-\alpha_{*}^{s}<x\leq\frac{1}{2}\\ f_{i:n}(x)-f_{n+1-i:n}(x),\quad\frac{1}{2}<x\leq\alpha_{*}^{s}\\ f_{i:n}(\alpha_{*}^{s})-f_{n+1-i:n}(\alpha_{*}^{s}),\quad\alpha_{*}^{s}<x<1.\end{cases}$$

For \(p=\infty\)

$$\mathbb{E}\frac{X_{i:n}-\mu}{\sigma_{\infty}}\leq\sum_{j=n+1-i}^{i-1}B_{j,n}\left(\frac{1}{2}\right),$$

and the equality holds for

$$\mathbb{P}(X_{1}=\mu\mp\sigma_{\infty})=\frac{1}{2}.$$
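
For concrete \(i\) and \(n\), the quantities appearing in Corollary 3 are straightforward to compute. The sketch below (ours; it relies on scipy root bracketing, and takes \(i=4\), \(n=5\) as an example) finds \(\alpha_{*}^{s}\) from Eq. (2.19) and evaluates the \(\sigma_{1}\) and \(\sigma_{\infty}\) bounds.

```python
# Sketch (illustration only): alpha_*^s from Eq. (2.19) by root bracketing,
# and the resulting bounds of Corollary 3 for i = 4, n = 5.
from math import comb
from scipy.optimize import brentq

i, n = 4, 5

def f_s(x):        # f^s_{i:n}(x) = f_{i:n}(x) - f_{n+1-i:n}(x)
    return n * comb(n - 1, i - 1) * (
        x**(i - 1) * (1 - x)**(n - i) - x**(n - i) * (1 - x)**(i - 1))

def F_ord(j, x):   # F_{j:n}(x)
    return sum(comb(n, k) * x**k * (1 - x)**(n - k) for k in range(j, n + 1))

# Eq. (2.19) in root form: f^s(a)(1-a) - (F_{n+1-i:n}(a) - F_{i:n}(a)) = 0;
# the bracket below encloses a sign change for this choice of i and n.
g = lambda a: f_s(a) * (1 - a) - (F_ord(n + 1 - i, a) - F_ord(i, a))
alpha = brentq(g, 0.5 + 1e-9, 0.999)

print(alpha, 0.5 * f_s(alpha))                          # alpha_*^s, p = 1 bound
print(sum(comb(n, j) * 0.5**n for j in range(n + 1 - i, i)))  # p = inf bound
```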

3 ATTAINABLE ZERO BOUNDS

It remains to consider the linear combinations of order statistics with coefficient vectors \(\mathbf{c}\) which do not satisfy (2.3), i.e., for which

$$F_{\mathbf{c}^{s}:n}(x)=\sum_{i=1}^{n}(c_{i}-c_{n+1-i})F_{i:n}(x)\geq 0,\quad\frac{1}{2}\leq x<1.$$
(3.1)

It is expected that under the condition (3.1) the bounds take on non-positive values. We first focus on the case when the bounds are equal to \(0\) and attained for some fixed parent distributions. To this end we assume that (3.1) holds together with

$$F_{\mathbf{c}^{s}:n}(\alpha_{*})=0\quad\mathrm{for\;some\;}\frac{1}{2}\leq\alpha_{*}=\alpha_{*}(\mathbf{c})<1.$$
(3.2)

Since \(F_{\mathbf{c}^{s}:n}(1)=0\) as well, we obtain \(\underline{F}_{\mathbf{c}^{s}:n}(x)=\underline{f}_{\mathbf{c}^{s}:n}(x)=0\), \(\alpha_{*}\leq x\leq 1\), and this is the maximal value of the derivative of the greatest convex minorant. We have

$$\mathbb{E}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{p}}=\int\limits_{\frac{1}{2}}^{1}\frac{F^{-1}(x)-\mu}{\sigma_{p}}f_{\mathbf{c}^{s}:n}(x)dx\leq\int\limits_{\frac{1}{2}}^{1}\frac{F^{-1}(x)-\mu}{\sigma_{p}}\underline{f}_{\mathbf{c}^{s}:n}(x)dx$$
$${}\leq\int\limits_{\frac{1}{2}}^{1}\frac{F^{-1}(x)-\mu}{\sigma_{p}}(\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)dx=0$$
(3.3)

for arbitrary \(1\leq p\leq\infty\) for which \(\sigma_{p}\) is finite and positive.

If \(F_{\mathbf{c}^{s}:n}\left(\frac{1}{2}\right)=0\), then we have \((\underline{f}_{\mathbf{c}^{s}:n})_{+}(x)=\underline{f}_{\mathbf{c}^{s}:n}(x)=0\), \(\frac{1}{2}\leq x<1\), and the last inequality is the equality. The first one is the equality as well if \(\frac{F^{-1}(x)-\mu}{\sigma_{p}}\) is constant on \(\left[\frac{1}{2},1\right)\). By assumption, it is nonnegative. If we assume that it is strictly positive, we obtain a two-point symmetric distribution supported on \(\mu-d\) and \(\mu+d\), which satisfies \(0<\sigma_{p}<\infty\) for every \(1\leq p\leq\infty\).

Otherwise we take the minimal \(\alpha_{*}>\frac{1}{2}\) which satisfies \(F_{\mathbf{c}^{s}:n}(\alpha_{*})=0\). Then \(\underline{f}_{\mathbf{c}^{s}:n}(x)\) is negative and equal to \(0\) on the intervals \(\left[\frac{1}{2},\alpha_{*}\right)\) and \([\alpha_{*},1)\), respectively. The latter inequality in (3.3) becomes the equality if \(\frac{F^{-1}(x)-\mu}{\sigma_{p}}=0\) for \(\frac{1}{2}\leq x<\alpha_{*}\). So does the previous one if \(\frac{F^{-1}(x)-\mu}{\sigma_{p}}=\mathrm{const}\) for \(\alpha_{*}\leq x<1\). If the constant is strictly positive, taking into account the symmetry condition (1.2) we determine the three-point symmetric distribution

$$\mathbb{P}(X_{1}=\mu\mp d)=1-\alpha_{*},$$
$$\mathbb{P}(X_{1}=\mu)=2\alpha_{*}-1,$$

which has a bounded support (and all the moments finite in consequence), and attains the zero bound in (3.3).

Summing up, we proved the following.

Proposition 4. If \(X_{1},\ldots,X_{n}\) are iid symmetrically distributed about \(\mu\) with \(0<\sigma_{p}<\infty\) for some \(1\leq p\leq\infty\), and \(\mathbf{c}\in\mathbb{R}^{n}\) satisfies (3.1) with (3.2), then

$$\mathbb{E}\sum_{i=1}^{n}c_{i}\,\frac{X_{i:n}-\mu}{\sigma_{p}}\leq 0.$$

The equality is attained by the parent distribution

$$\mathbb{P}\left(X_{1}=\mu\pm\frac{\sigma_{p}}{[2(1-\alpha_{*})]^{1/p}}\right)=1-\alpha_{*},$$
$$\mathbb{P}(X_{1}=\mu)=2\alpha_{*}-1$$

(under the convention that \(a^{1/\infty}=a^{0}=1\)) for minimal \(\alpha_{*}\) satisfying (3.2).

The construction of the distribution functions attaining the zero bounds presented above is unique up to scale transformations if there is a single \(\frac{1}{2}\leq\alpha_{*}<1\) satisfying (3.2). If there are more such points, then the extreme distributions may have more support points. A specific family of coefficients consists of those satisfying \(c_{i}=c_{n+1-i}\), \(i=1,\ldots,n\), i.e., \(\mathbf{c}^{s}=\mathbf{0}\). The corresponding \(L\)-statistics are linear combinations of quasi-midranges \(\frac{1}{2}(X_{i:n}+X_{n+1-i:n})\), \(1\leq i\leq\frac{n+1}{2}\) (see, e.g., David and Nagaraja [2, p. 242]). Then \(f_{\mathbf{c}^{s}:n}(x)=0\), \(\frac{1}{2}\leq x<1\) (see (1.4)), and the equality

$$\mathbb{E}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{p}}=\int\limits_{\frac{1}{2}}^{1}\frac{F^{-1}(x)-\mu}{\sigma_{p}}f_{\mathbf{c}^{s}:n}(x)dx=0$$

is attained by an arbitrary symmetric parent distribution with mean \(\mu\) and \(0<\sigma_{p}<\infty\). This confirms the well known fact that the convex combinations of order statistics with the coefficients satisfying \(c_{i}=c_{n+1-i}\), \(i=1,\ldots,n\), are unbiased estimators of the symmetry centers of symmetric distributions. In particular, the expectation of the sample median (equal to \(X_{\frac{n+1}{2}:n}\) and \(\frac{1}{2}(X_{\frac{n}{2}:n}+X_{\frac{n}{2}+1:n})\) when the sample size is odd and even, respectively) is identical with the population mean.
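
A quick Monte Carlo sanity check of this unbiasedness (our illustration, with an arbitrary symmetric weight vector summing to \(1\)):

```python
# Sketch (illustration only): an L-statistic with symmetric coefficients
# summing to 1 estimates the symmetry center mu without bias.
import numpy as np

rng = np.random.default_rng(2)
n, mu = 6, 3.0
c = np.array([0.1, 0.15, 0.25, 0.25, 0.15, 0.1])    # c_i = c_{n+1-i}, sum 1
samples = np.sort(mu + rng.laplace(size=(500_000, n)), axis=1)
print((samples @ c).mean())                         # ~3.0
```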

4 OTHER NON-POSITIVE BOUNDS

We finally focus on the case of \(L\)-statistics with the coefficients \(c_{1},\ldots,c_{n}\) satisfying the condition

$$F_{\mathbf{c}^{s}:n}(x)=\sum_{i=1}^{n}(c_{i}-c_{n+1-i})F_{i:n}(x)>0,\quad\frac{1}{2}\leq x<1.$$
(4.1)

We show that then the optimal upper bounds either are equal to \(0\) but attained only in the limit by sequences of parent distributions, or are strictly negative. Proposition 5 shows that the sharp bounds cannot be negative if they are measured in the \(\sigma_{p}\) units with parameter \(p\) different from \(1\).

Proposition 5. Suppose that \(X_{1},\ldots,X_{n}\) are non-degenerate iid random variables, symmetrically distributed about \(\mu\in\mathbb{R}\), with a finite moment of order \(1<p<\infty\). If the vector \(\mathbf{c}=(c_{1},\ldots,c_{n})\) satisfies the condition (4.1), then the bound

$$\mathbb{E}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{p}}\leq 0$$
(4.2)

is sharp. It is attained in the limit by the family of three-point distributions

$$\mathbb{P}\left(X_{1}=\mu\mp\frac{\sigma_{p}}{[2(1-\alpha)]^{1/p}}\right)=1-\alpha,$$
$$\mathbb{P}(X_{1}=\mu)=2\alpha-1$$
(4.3)

as \(\alpha\rightarrow 1\).

Proof. The upper bound for the LHS of (4.2) under the assumption (4.1) cannot be greater than \(0\) due to Proposition 1. Our aim is to show that the zero bound cannot be improved. We consider the parametric family of one-dimensional probability distributions with the quantile functions satisfying

$$F^{-1}_{\alpha}(x)-\mu=\begin{cases}-1,\quad 0<x\leq 1-\alpha\\ 0,\quad 1-\alpha<x\leq\alpha\\ 1,\quad\alpha<x<1,\end{cases}\quad\frac{1}{2}<\alpha<1.$$
(4.4)

The respective \(p\)th absolute central moments and their \(p\)th roots are equal to \(\sigma_{p}^{p}=\mathbb{E}_{\alpha}|X_{1}-\mu|^{p}=2(1-\alpha)\) and \(\sigma_{p}=[2(1-\alpha)]^{1/p}\), respectively. Furthermore

$$\mathbb{E}_{\alpha}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{p}}=\int\limits_{\frac{1}{2}}^{1}\frac{F_{\alpha}^{-1}(x)-\mu}{\sigma_{p}}f_{\mathbf{c}^{s}:n}(x)dx$$
$${}=\frac{\int\limits_{\alpha}^{1}f_{\mathbf{c}^{s}:n}(x)dx}{[2(1-\alpha)]^{1/p}}=-\frac{F_{\mathbf{c}^{s}:n}(\alpha)}{[2(1-\alpha)]^{1/p}}.$$
(4.5)

Note that this is negative by (4.1). We represent the numerator as follows

$$F_{\mathbf{c}^{s}:n}(\alpha)=\sum_{i=1}^{n}(c_{i}-c_{n+1-i})\sum_{j=i}^{n}B_{j,n}(\alpha)=\sum_{j=1}^{n}\left[\sum_{i=1}^{j}(c_{i}-c_{n+1-i})\right]B_{j,n}(\alpha)$$
$${}=\sum_{j=1}^{n-1}\left[\sum_{i=1}^{j}(c_{i}-c_{n+1-i})\right]{n\choose j}\alpha^{j}(1-\alpha)^{n-j}.$$
(4.6)

The last summand can be dropped because \(\sum_{i=1}^{n}(c_{i}-c_{n+1-i})=0\). Therefore

$$\mathbb{E}_{\alpha}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{p}}=-\frac{F_{\mathbf{c}^{s}:n}(\alpha)}{[2(1-\alpha)]^{1/p}}$$
$${}=-\frac{1}{2^{1/p}}\sum_{j=1}^{n-1}\left[\sum_{i=1}^{j}(c_{i}-c_{n+1-i})\right]{n\choose j}\alpha^{j}(1-\alpha)^{n-j-1/p}$$

tends to \(0\) as \(\alpha\rightarrow 1\), because \(j\leq n-1<n-\frac{1}{p}\) for all \(j=1,\ldots,n-1\). Since our bound problem is location-scale invariant, we simply normalize the distributions satisfying (4.4), transforming them into (4.3), which satisfy the moment conditions. \(\blacksquare\)
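
The rate of convergence in the proof is also visible numerically; a sketch (ours) evaluates (4.5) via the representation (4.6) for an example vector satisfying (4.1).

```python
# Sketch (illustration only): the normalized expectation (4.5) for the
# three-point family is negative and tends to 0 as alpha -> 1.
from math import comb

def F_cs(c, a):        # F_{c^s:n}(a) via representation (4.6)
    n = len(c)
    cs = [c[i] - c[n - 1 - i] for i in range(n)]
    return sum(sum(cs[:j]) * comb(n, j) * a**j * (1 - a)**(n - j)
               for j in range(1, n))

c, p = [0.0, 1.0, 0.0, 0.0], 2.0     # c = e_2, n = 4: here (4.1) holds
for a in (0.6, 0.9, 0.99, 0.999):
    print(a, -F_cs(c, a) / (2 * (1 - a))**(1 / p))   # negative, -> 0
```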

Corollary 4. Under the assumptions of Proposition 5 with the extra condition that \(X_{1},\ldots,X_{n}\) have a bounded support, the following bound holds

$$\mathbb{E}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{\infty}}\leq 0$$

and it is attained in the limit by the parent distributions

$$\mathbb{P}\left(X_{1}=\mu\mp\sigma_{\infty}\right)=1-\alpha,$$
$$\mathbb{P}(X_{1}=\mu)=2\alpha-1$$

as \(\alpha\rightarrow 1\).

Proof. The result is easily deduced from the preceding one. For the distribution functions defined in (4.4) we have \(\sigma_{\infty}=1\). In consequence

$$\mathbb{E}_{\alpha}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{\infty}}=-F_{\mathbf{c}^{s}:n}(\alpha)$$
(4.7)

(cf. (4.5)). Looking at (4.6) we immediately conclude that (4.7) tends to \(0\) when \(\alpha\) approaches \(1\). \(\blacksquare\)

Proposition 6. Under the assumptions of Proposition 5 with \(p=1\), we have

$$\mathbb{E}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{1}}\leq-\min_{\frac{1}{2}\leq\alpha\leq 1}U_{\mathbf{c}^{s}:n}(\alpha),$$
(4.8)

where

$$U_{\mathbf{c}^{s}:n}(\alpha)=\frac{n}{2}\sum_{j=1}^{n-1}\frac{1}{n-j}\left[\sum_{i=1}^{j}(c_{i}-c_{n+1-i})\right]B_{j,n-1}(\alpha).$$
(4.9)

If the minimum is attained at \(\frac{1}{2}<\alpha_{*}<1\) (\(\alpha_{*}=\frac{1}{2}\), respectively), then the bound is attained by the three-point (two-point, respectively) symmetric marginal distribution

$$\mathbb{P}\left(X_{1}=\mu\mp\frac{\sigma_{1}}{2(1-\alpha_{*})}\right)=1-\alpha_{*},$$
$$\mathbb{P}(X_{1}=\mu)=2\alpha_{*}-1.$$
(4.10)

If \(1\) is the minimum point of (4.9), then the bound (4.8) is attained in the limit by (4.10) with \(\alpha_{*}\) replaced by \(\alpha\rightarrow 1\).

Proof. For \(\mathbf{c}\) satisfying (4.1) we have

$$\mathbb{E}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{1}}=T_{h}(g)$$
$${}=\int\limits_{\frac{1}{2}}^{1}g(x)h(x)\;dx=\int\limits_{\frac{1}{2}}^{1}\frac{F^{-1}(x)-\mu}{\sigma_{1}}\,f_{\mathbf{c}^{s}:n}(x)\,dx<0$$
(4.11)

for fixed \(h(x)=f_{\mathbf{c}^{s}:n}(x)\) and every \(g(x)=\frac{F^{-1}(x)-\mu}{\sigma_{1}}\) which is necessarily nonnegative, nondecreasing, and satisfies

$$||g||_{1}=\int\limits_{\frac{1}{2}}^{1}|g(x)|\,dx=\int\limits_{\frac{1}{2}}^{1}\frac{|F^{-1}(x)-\mu|}{\sigma_{1}}\,dx=\frac{1}{2}.$$

We apply here the norm maximization method proposed in Goroncy and Rychlik [5] (see also Goroncy [4]). Under the simple transformation \(\tilde{g}=\frac{g}{-T_{h}(g)}\), we obtain the family of functions \(\tilde{g}\) which constitute the convex set \(\mathcal{S}\subset L^{1}\left(\left[\frac{1}{2},1\right],dx\right)\) of nonnegative, nondecreasing functions such that

$$T_{h}(\tilde{g})=-\frac{T_{h}(g)}{T_{h}(g)}=-1$$

with the corresponding norms

$$||\tilde{g}||_{1}=\frac{||g||_{1}}{|T_{h}(g)|}=-\frac{1}{2T_{h}(g)}.$$

We see that our original problem of maximizing (4.11) is equivalent to maximizing the norm over the functions \(\tilde{g}\) from the convex set \(\mathcal{S}\).

Every element of \(\mathcal{S}\) can be approximated with arbitrary accuracy by stepwise nonnegative nondecreasing elements of \(\mathcal{S}\) (indeed, it suffices to apply standard piecewise constant interpolations \(g_{k}\), \(k=1,2,\ldots\), with an increasing number of knots and uniformly decreasing distances among them, and take their scale transformations \(\tilde{g}_{k}\) assuring that \(T_{h}(\tilde{g}_{k})=-1\)). Every stepwise element of \(\mathcal{S}\) can be represented as a convex mixture of two-valued elements of \(\mathcal{S}\), with the first value equal to \(0\). Since the norm functional is convex, the norm of a mixture cannot exceed the maximal norm of its elements. It follows that in maximizing the norm over \(\mathcal{S}\) we can restrict ourselves to two-valued members of the set. Equivalently, we can just maximize the original functional (4.11) over the family of two-valued functions \(g(x)=\frac{F^{-1}(x)-\mu}{\sigma_{1}}\), \(\frac{1}{2}\leq x<1\), with the first value equal to \(0\). It follows that we can focus our attention on the symmetric distributions with quantile functions satisfying (4.4). Then \(\sigma_{1}=2(1-\alpha)\) and

$$\mathbb{E}_{\alpha}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{1}}=-\frac{F_{\mathbf{c}^{s}:n}(\alpha)}{2(1-\alpha)}$$
$${}=-\frac{n}{2}\sum_{j=1}^{n-1}\frac{1}{n-j}\left[\sum_{i=1}^{j}(c_{i}-c_{n+1-i})\right]B_{j,n-1}(\alpha).$$
(4.12)

This assures that the formulae (4.8) and (4.9) represent the upper bound for the LHS of (4.8). The attainability condition is then easily established. \(\blacksquare\)
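
For a concrete coefficient vector, the bound of Proposition 6 reduces to a one-dimensional minimization; a sketch (ours, with the example \(\mathbf{c}=(2,1,0,0)\), which satisfies (4.1) and has \(c_{1}\neq c_{n}\)):

```python
# Sketch (illustration only): grid minimization of U_{c^s:n} from (4.9) for
# c = (2, 1, 0, 0); the resulting sigma_1 bound is strictly negative.
import numpy as np
from math import comb

def U(c, a):                                    # Eq. (4.9)
    n = len(c)
    cs = [c[i] - c[n - 1 - i] for i in range(n)]
    return 0.5 * n * sum(
        sum(cs[:j]) / (n - j) * comb(n - 1, j) * a**j * (1 - a)**(n - 1 - j)
        for j in range(1, n))

c = [2.0, 1.0, 0.0, 0.0]
grid = np.linspace(0.5, 1, 100_001)
vals = np.array([U(c, a) for a in grid])
k = int(np.argmin(vals))
print(grid[k], -vals[k])     # minimizer alpha_* = 1/2, bound -17/8 = -2.125
```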

Corollary 5. For \(X_{1},\ldots,X_{n}\) satisfying the assumptions of Proposition 6 the following bound

$$\mathbb{E}\frac{X_{1:n}-\mu}{\sigma_{1}}\leq-1+\frac{1}{2^{n-1}}$$
(4.13)

is valid, and it is attained by the two-point symmetric distribution

$$\mathbb{P}(X_{1}=\mu-\sigma_{1})=\mathbb{P}(X_{1}=\mu+\sigma_{1})=\frac{1}{2}.$$

Proof. We apply the conclusion of Proposition 6 for \(\mathbf{c}=(1,0,\ldots,0)\). Then

$$\sum_{i=1}^{j}(c_{i}-c_{n+1-i})=1,\quad j=1,\ldots,n-1,$$

and we need to maximize the function

$$-U^{s}_{1:n}(\alpha)=-\frac{n}{2}\sum_{j=1}^{n-1}\frac{1}{n-j}B_{j,n-1}(\alpha),\quad\frac{1}{2}\leq\alpha\leq 1.$$
(4.14)

Elementary calculations show that

$$-(U^{s}_{1:n})^{\prime}(\alpha)=-\frac{n}{2}B_{0,n-2}(\alpha)-\frac{n(n-1)}{2}\sum_{j=1}^{n-2}\frac{1}{(n-j)(n-j-1)}B_{j,n-2}(\alpha),$$

which is negative on the whole unit interval \([0,1]\). It follows that (4.14) is maximized at \(\frac{1}{2}\). For calculating the respective extreme value it is more convenient to use the representation in the first line of (4.12), which in the case of the sample minimum takes on the form

$$-\frac{F^{s}_{1:n}(\alpha)}{2(1-\alpha)}=-\frac{F_{1:n}(\alpha)-F_{n:n}(\alpha)}{2(1-\alpha)}=-\frac{1-(1-\alpha)^{n}-\alpha^{n}}{2(1-\alpha)},$$

and whose value at \(\frac{1}{2}\) equals the RHS of (4.13). The attainability condition immediately follows from Proposition 6. \(\blacksquare\)

The above result was proved in Rychlik [14] with the use of a different method.
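
A quick numerical confirmation of the maximization step (ours):

```python
# Quick check (illustration only): -(1 - (1-a)^n - a^n) / (2(1-a)) attains its
# maximum over [1/2, 1) at a = 1/2, where it equals -1 + 1/2^(n-1).
import numpy as np

n = 5
a = np.linspace(0.5, 1 - 1e-9, 1_000_001)
vals = -(1 - (1 - a)**n - a**n) / (2 * (1 - a))
print(a[np.argmax(vals)], vals.max(), -1 + 0.5**(n - 1))
```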

Corollary 6. Assume that \(X_{1},\ldots,X_{n}\) are independent and identically distributed, symmetric about their mean \(\mu\). Suppose that \(\mathbf{c}=(c_{1},\ldots,c_{n})\) satisfies (4.1), and moreover \(c_{1}=c_{n}\). Then

$$\mathbb{E}\sum_{i=1}^{n}c_{i}\frac{X_{i:n}-\mu}{\sigma_{1}}\leq 0,$$

and this bound is attained in the limit by the parent distributions

$$\mathbb{P}\left(X_{1}=\mu\mp\frac{\sigma_{1}}{2(1-\alpha)}\right)=1-\alpha,$$
$$\mathbb{P}(X_{1}=\mu)=2\alpha-1$$
(4.15)

as \(\alpha\rightarrow 1\).

Proof. Under the assumption we have

$$\sum_{i=1}^{n-1}(c_{i}-c_{n+1-i})=c_{1}-c_{n}=0,$$

so that the last summand in (4.9) vanishes, and (4.9) can be represented as

$$U_{\mathbf{c}^{s}:n}(\alpha)=\frac{n}{2}\sum_{j=1}^{n-2}\frac{1}{n-j}\left[\sum_{i=1}^{j}(c_{i}-c_{n+1-i})\right]B_{j,n-1}(\alpha).$$

Notice that the function is positive on \(\left[\frac{1}{2},1\right)\) by (4.1), and tends to \(0\) as \(\alpha\rightarrow 1\), because so do all \(B_{j,n-1}(\alpha)\), \(j=1,\ldots,n-2\). Hence the minimum of (4.9) over \(\left[\frac{1}{2},1\right]\) equals \(0\), it is attained at \(\alpha=1\) only, and Proposition 6 yields the assertion. \(\blacksquare\)

Corollary 7. For iid random variables \(X_{1},\ldots,X_{n}\) symmetrically distributed about their mean \(\mu\), for every \(2\leq i\leq\frac{n}{2}\) the following inequality is sharp

$$\mathbb{E}\frac{X_{i:n}-\mu}{\sigma_{1}}\leq 0.$$

The bound is attained in the limit by the marginal distribution (4.15) as \(\alpha\rightarrow 1\).