1 Introduction

The \(r\mathrm{th}\) central moment \(\mu _{r}\) of a random variable X taking values in \(\left[ m,M\right] \) is defined, for the continuous and discrete cases respectively, as

$$\begin{aligned} \mu _{r}=\int \limits _{m}^{M}\left( x-\mu _{1}^{{\prime }}\right) ^{r}f\left( x\right) \, \mathrm{d}x \, \text { or }\,\mu _{r}=\sum _{i=1}^{n}p_{i}\left( x_{i}-\mu _{1}^{{\prime }}\right) ^{r}, \end{aligned}$$
(1.1)

where

$$\begin{aligned} \mu _{1}^{{\prime }}=\int \limits _{m}^{M}xf\left( x\right) \, \mathrm{d}x\text { or } \, \mu _{1}^{{\prime }}=\sum _{i=1}^{n}p_{i}x_{i}, \end{aligned}$$
(1.2)

\(f\left( x\right) \) and \(p_{i}\) denote the probability density function and the probability mass function, respectively, such that

$$\begin{aligned} \int \limits _{m}^{M}f\left( x\right) \, \mathrm{d}x=1 \, \text { or } \, \sum _{i=1}^{n}p_{i}=1. \end{aligned}$$
(1.3)

We denote by \(m_{r}\) the \(r\mathrm{th}\) central moment of n real numbers \( x_{1},x_{2}, \ldots ,x_{n}\),

$$\begin{aligned} m_{r}=\frac{1}{n}\sum _{i=1}^{n}\left( x_{i}-m_{1}^{{\prime }}\right) ^{r}, \end{aligned}$$
(1.4)

where \(m_{1}^{{\prime }}=\frac{1}{n}\underset{i=1}{\overset{n}{\sum }}x_{i}\) is the arithmetic mean.
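
For illustration, the sample central moments in (1.4) can be computed directly; the following minimal sketch (Python with NumPy, using an arbitrary data vector rather than data from the paper) does so.

```python
import numpy as np

# Sketch: the sample central moments m_r of (1.4) for an arbitrary data
# vector (not taken from the paper).
def central_moment(x, r):
    x = np.asarray(x, dtype=float)
    return np.mean((x - x.mean()) ** r)

x = [1.0, 2.0, 2.5, 4.0, 7.0]
print([central_moment(x, r) for r in (2, 3, 4)])
```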

Bounds on the variance (\(\sigma ^{2}=\mu _{2},\ S^{2}=m_{2}\)), their extensions and applications have been studied extensively in the literature; see [5, 16] and [18]. The well-known Popoviciu inequality [16] gives an upper bound for the variance of a random variable:

$$\begin{aligned} \mu _{2}\le \frac{\left( M-m\right) ^{2}}{4}. \end{aligned}$$
(1.5)

A complementary inequality due to Nagy [14] says that for the variance of n real numbers \(x_{i}\), where \(m\) and \(M\) denote the smallest and largest of the \(x_{i}\), we have

$$\begin{aligned} S^{2}\ge \frac{\left( M-m\right) ^{2}}{2n}. \end{aligned}$$
(1.6)

For more details, see [1]. Such inequalities are also useful in many other contexts. For example, Wolkowicz and Styan [23] have observed that if the eigenvalues of an \(n\times n\) complex matrix are all real, as in the case of Hermitian matrices, the inequalities (1.5) and (1.6) provide bounds for the spread of a matrix, spd\(\left( A\right) =\underset{i,j }{\max }\left| \lambda _{i}-\lambda _{j}\right| \). Let \(B=A-\frac{ \text {tr}A}{n}I\), where \(\hbox {tr}A\) denotes the trace of A. Then,

$$\begin{aligned} \frac{4}{n}\text {tr}B^{2}\le \text {spd}\left( A\right) ^{2}\le 2\text {tr} B^{2}. \end{aligned}$$
(1.7)
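
For illustration, the inequality (1.7) is easy to check numerically; a minimal sketch (Python with NumPy, using an arbitrary randomly generated Hermitian matrix, not an example from [23]):

```python
import numpy as np

# Sketch: numerical check of (1.7) for a randomly generated Hermitian matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = (X + X.conj().T) / 2              # Hermitian, so all eigenvalues are real
n = A.shape[0]
B = A - (np.trace(A) / n) * np.eye(n)
eigs = np.linalg.eigvalsh(A)
spd2 = (eigs.max() - eigs.min()) ** 2
trB2 = np.trace(B @ B).real
print(4 * trB2 / n <= spd2, spd2 <= 2 * trB2)
```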

Further, let \(\mathbb {M}(n)\) denote the \(C^{*}\)-algebra of all \(n\times n\) complex matrices and let \(\Phi :\mathbb {M}(n)\rightarrow \mathbb {M}(k)\) be a positive unital linear map; see [6]. The inequality of Bhatia and Davis [5] says that if the spectrum of a Hermitian matrix A is contained in the interval \(\left[ m,M\right] \), then

$$\begin{aligned} \Phi \left( A^{2}\right) -\Phi \left( A\right) ^{2}\le \frac{\left( M-m\right) ^{2}}{4}=\frac{\text {spd}\left( A\right) ^{2}}{4}, \end{aligned}$$
(1.8)

for every positive unital linear map \(\Phi \). This gives a noncommutative analogue of the inequality (1.5) and yields many old and new bounds for the spread of a matrix, see [7]. The inequality (1.8) provides an inequality complementary to Kadison’s inequality [9],

$$\begin{aligned} \Phi \left( A^{2}\right) \ge \Phi \left( A\right) ^{2}. \end{aligned}$$

The Cauchy–Schwarz and some related inequalities in a semi-inner product module over a \(C^{*}\)-algebra have been studied by Arambasic et al.; see [2] and references therein.

Likewise, the inequalities (1.5) and (1.6) provide bounds for the span of a polynomial; see [17] and Sect. 4, below.

Such basic inequalities, their further refinements, extensions and alternative proofs have been studied by several authors. In particular, Sharma et al. [18] and [19] have proved that

$$\begin{aligned} \frac{\mu _{2}^{2}-\left( \mu _{1}^{{\prime }}-m\right) ^{2}\mu _{2}}{\mu _{1}^{{\prime }}-m}\le \mu _{3}\le \frac{\left( M-\mu _{1}^{{\prime }}\right) ^{2}\mu _{2}-\mu _{2}^{2}}{M-\mu _{1}^{{\prime }}} \end{aligned}$$
(1.9)

and

$$\begin{aligned} \sigma ^{2}+\left( \frac{\mu _{3}}{2\sigma ^{2}}\right) ^{2}\le \frac{ \left( M-m\right) ^{2}}{4}. \end{aligned}$$
(1.10)

The inequality (1.10) provides a refinement of the Popoviciu inequality (1.5). The inequalities, (1.9) and (1.10), yield bounds for the eigenvalues and spread of a Hermitian matrix. Likewise, these inequalities provide bounds for the roots of polynomial equations, see [18].

We focus here on inequalities involving the fourth central moment (\(\mu _{4}\)). One such inequality in the literature, due to Pearson [15], gives an interesting relation between two important parameters of statistical distributions, namely skewness (\(\alpha _{3}\)) and kurtosis (\( \alpha _{4}\)),

$$\begin{aligned} \alpha _{4}\ge 1+\alpha _{3}^{2}\ , \end{aligned}$$

where

$$\begin{aligned} \alpha _{3}=\sqrt{\frac{m_{3}^{2}}{m_{2}^{3}}} \quad \text {and } \quad \alpha _{4}= \frac{m_{4}}{m_{2}^{2}}. \end{aligned}$$
(1.11)

For more details, see [19, 20] and references therein.

We derive some inequalities involving the fourth central moment and discuss related extensions and applications. We prove an analogue of the Popoviciu inequality (1.5) for the fourth central moment (Theorem 2.1, below). Our main result (Theorem 2.2) gives bounds for the fourth central moment in terms of the second and third central moments. Inequalities involving the first four central moments and the range of the random variable are obtained (Corollaries 2.3–2.4). These also provide a relation among skewness, kurtosis and the studentized range (Corollary 2.5). It is shown that the inequality (1.10) yields an upper bound for the third central moment in terms of the range of the random variable (Theorem 2.6). A generalization of the Nagy inequality (1.6) is proved for the \(s\mathrm{th}\) central moment, \(s=2r\) (Theorem 2.7). We obtain bounds for the spread of a Hermitian matrix (Theorems 3.1 and 3.3). Likewise, bounds for the span of a polynomial are discussed (Theorems 4.1–4.2).

2 Main Results

It is enough to prove the following results for the case when X is a discrete random variable taking finitely many values \( x_{1},x_{2}, \ldots ,x_{n}\) with probabilities \(p_{1},p_{2}, \ldots ,p_{n}\), respectively. The arguments are similar for the case when X is a continuous random variable.

Theorem 2.1

Let X be a discrete or continuous random variable taking values in \(\left[ m,M\right] \). Then

$$\begin{aligned} \mu _{4}\le \frac{\left( M-m\right) ^{4}}{12}. \end{aligned}$$
(2.1)

Proof

For \(\alpha \le y\le \beta \), we have

$$\begin{aligned} \left( y-\alpha \right) \left( y-\beta \right) \left( \left( y+\frac{\alpha +\beta }{2}\right) ^{2}+\frac{2\left( \alpha ^{2}+\beta ^{2}\right) +\left( \alpha +\beta \right) ^{2}}{4}\right) \le 0. \end{aligned}$$
(2.2)

Putting \(y=x_{i}-\mu _{1}^{{\prime }},\ \alpha =m-\mu _{1}^{{\prime }}\) and \( \beta =M-\mu _{1}^{{\prime }}\) in (2.2), multiplying both sides by \( p_{i} \), summing the resulting n inequalities over \(i=1,2, \ldots ,n\), and using (1.1)–(1.3), we see that

$$\begin{aligned} \mu _{4}\le \left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) \left( \left( \mu _{1}^{{\prime }}-m\right) ^{2}+\left( M-\mu _{1}^{{\prime }}\right) ^{2}-\left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) \right) . \end{aligned}$$
(2.3)

The inequality (2.1) now follows from (2.3) and the fact that the function

$$\begin{aligned} h(x)=\left( x-m\right) \left( M-x\right) \left( \left( x-m\right) ^{2}+\left( M-x\right) ^{2}-\left( x-m\right) \left( M-x\right) \right) , \end{aligned}$$

achieves its maximum at

$$\begin{aligned} x=\frac{m+M}{2}\pm \frac{m-M}{2\sqrt{3}}, \end{aligned}$$

where

$$\begin{aligned} h(x)\le \frac{\left( M-m\right) ^{4}}{12}. \end{aligned}$$

\(\square \)

The sign of equality holds in (2.3) if and only if \(n=2\). In this case, \(m_{4}=\frac{\left( M-m\right) ^{4}}{16}\). Equality holds in (2.1) for \(n=2;\ x_{1}=m\) and \(x_{2}=M\) with \(p_{1}=\frac{1 }{2}\pm \frac{1}{2\sqrt{3}}\) and \(p_{2}=\frac{1}{2}\mp \frac{1}{2\sqrt{3}}\).
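
As a quick numerical illustration, the following sketch (Python with NumPy; the interval \([0,1]\) is an arbitrary choice) evaluates \(\mu _{4}\) for the two-point distribution above and compares it with the bound (2.1).

```python
import numpy as np

# Sketch: mu_4 of the two-point distribution at m and M with the weights
# given above; equality in (2.1) should hold up to rounding ([0, 1] is an
# arbitrary choice of interval).
m, M = 0.0, 1.0
p1 = 0.5 + 1 / (2 * np.sqrt(3))
p2 = 1.0 - p1
mean = p1 * m + p2 * M
mu4 = p1 * (m - mean) ** 4 + p2 * (M - mean) ** 4
print(mu4, (M - m) ** 4 / 12)   # both approximately 0.0833
```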

Pearson [15] gives a lower bound for the fourth central moment:

$$\begin{aligned} \mu _{4}\ge \frac{\mu _{3}^{2}}{\mu _{2}}+\mu _{2}^{2}. \end{aligned}$$
(2.4)

We derive a complementary upper bound in the following theorem.

Theorem 2.2

Let X be a discrete or continuous random variable taking values in \(\left[ m,M\right] \). Then,

$$\begin{aligned}&\mu _{4}\le \left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) \mu _{2}+\left( m+M-2\mu _{1}^{{\prime }}\right) \mu _{3} \nonumber \\&\qquad \quad -\frac{\left( \mu _{3}-\left( m+M-2\mu _{1}^{{\prime }}\right) \mu _{2}\right) ^{2}}{\left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) -\mu _{2}}, \end{aligned}$$
(2.5)

where\(\ \mu _{2}\ne \left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) \).

Proof

Let \(\alpha \le y\le \beta \). Then, for any real number \(\gamma \),

$$\begin{aligned} \left( y-\alpha \right) \left( y-\beta \right) \left( y-\gamma \right) ^{2}\le 0. \end{aligned}$$
(2.6)

Putting \(y=x_{i}-\mu _{1}^{{\prime }},\ \alpha =m-\mu _{1}^{{\prime }}\) and \(\beta =M-\mu _{1}^{{\prime }}\) in (2.6), multiplying both sides by \( p_{i} \), summing the resulting n inequalities over \(i=1,2, \ldots ,n\), and using (1.1)–(1.3), we get

$$\begin{aligned} \mu _{4}\le & {} \left( \left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) -\mu _{2}\right) \gamma ^{2}-2\left( \left( m+M-2\mu _{1}^{{\prime }}\right) \mu _{2}-\mu _{3}\right) \gamma \nonumber \\&+\left( m+M-2\mu _{1}^{{\prime }}\right) \mu _{3}+\left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) \mu _{2}. \end{aligned}$$
(2.7)

The inequality (2.7) is valid for every real number \(\gamma \); its right-hand side is least for

$$\begin{aligned} \gamma =\frac{\left( m+M-2\mu _{1}^{{\prime }}\right) \mu _{2}-\mu _{3}}{ \left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) -\mu _{2}}. \end{aligned}$$
(2.8)

Substituting the value of \(\gamma \) from (2.8) into (2.7), a little calculation leads to (2.5). \(\square \)

Note that \(\mu _{2}=\left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) \) if and only if every \( x_{i}\) is equal either to m or to M, see [5]. Hence, (2.5) is not valid for \(n=2\). Equality holds in (2.5) when \(n=3\) and

$$\begin{aligned} x_{1}=m,\ x_{2}=\mu _{1}^{{\prime }}+\frac{\left( m+M-2\mu _{1}^{{\prime }}\right) \mu _{2}-\mu _{3}}{\left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) -\mu _{2}}\,\text { and }\ x_{3}=M. \end{aligned}$$
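
A quick numerical spot-check of (2.5) on an arbitrary randomly generated discrete distribution (a sketch in Python with NumPy, not part of the argument):

```python
import numpy as np

# Sketch: spot-check of (2.5) for an arbitrary randomly generated discrete
# distribution on six points.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, size=6))
p = rng.dirichlet(np.ones(6))
m, M = x[0], x[-1]
mu1 = p @ x
mu2, mu3, mu4 = (p @ (x - mu1) ** r for r in (2, 3, 4))
P = (mu1 - m) * (M - mu1)
bound = P * mu2 + (m + M - 2 * mu1) * mu3 \
        - (mu3 - (m + M - 2 * mu1) * mu2) ** 2 / (P - mu2)
print(mu4 <= bound)
```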

Pearson’s inequality (2.4) implies that \(\mu _{2}\mu _{4}-\mu _{2}^{3}-\mu _{3}^{2}\ge 0\). We prove a complementary upper bound in the following corollary.

Corollary 2.3

Under the conditions of Theorem 2.2, we have

$$\begin{aligned} \mu _{2}\mu _{4}-\mu _{2}^{3}-\mu _{3}^{2}\le \frac{\left( \left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) \left( M-m\right) \right) ^{2}}{27}\le \frac{\left( M-m\right) ^{6}}{432}. \end{aligned}$$
(2.9)

Proof

From (2.5), we have

$$\begin{aligned} \mu _{2}\mu _{4}-\mu _{2}^{3}-\mu _{3}^{2}\le & {} \left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) \mu _{2}^{2}+\left( m+M-2\mu _{1}^{{\prime }}\right) \mu _{2}\mu _{3} \nonumber \\&+\,\frac{\mu _{2}\left( \mu _{3}-\left( m+M-2\mu _{1}^{{\prime }}\right) \mu _{2}\right) ^{2}}{\mu _{2}-\left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) }-\mu _{2}^{3}-\mu _{3}^{2}\ . \end{aligned}$$
(2.10)

One can easily see, using derivatives, that the right-hand side of (2.10) is maximal at

$$\begin{aligned} \mu _{3}=\frac{1}{2}\frac{\left( m+M-2\mu _{1}^{{\prime }}\right) \mu _{2}}{ \left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) } \left( \mu _{2}+\left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) \right) . \end{aligned}$$

So,

$$\begin{aligned} \mu _{2}\mu _{4}-\mu _{2}^{3}-\mu _{3}^{2}\le \left( 1-\frac{\mu _{2}}{ \left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) } \right) \left( \frac{M-m}{2}\right) ^{2}\mu _{2}^{2}. \end{aligned}$$
(2.11)

The first inequality in (2.9) now follows from (2.11) and the fact that the right-hand side of (2.11) is maximal at

$$\begin{aligned} \mu _{2}=\frac{2}{3}\left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) . \end{aligned}$$

Using the arithmetic–geometric mean inequality, we have

$$\begin{aligned} \left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) \le \left( \frac{M-m}{2}\right) ^{2}. \end{aligned}$$
(2.12)

The second inequality in (2.9) follows from (2.12). \(\square \)

We now prove one more inequality complementary to Pearson’s inequality \(\mu _{4}-\mu _{2}^{2}-\frac{\mu _{3}^{2}}{\mu _{2}} \ge 0\) in the following corollary.

Corollary 2.4

Under the conditions of Theorem 2.2, we have

$$\begin{aligned} \mu _{4}-\mu _{2}^{2}-\frac{\mu _{3}^{2}}{\mu _{2}}\le \left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) \left( \frac{M-m}{4}\right) ^{2}\le \frac{\left( M-m\right) ^{4}}{64}. \end{aligned}$$
(2.13)

Proof

From (2.11), we have

$$\begin{aligned} \mu _{4}-\mu _{2}^{2}-\frac{\mu _{3}^{2}}{\mu _{2}}\le \left( 1-\frac{\mu _{2}}{\left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) }\right) \left( \frac{M-m}{2}\right) ^{2}\mu _{2}. \end{aligned}$$
(2.14)

The first inequality in (2.13) follows from the fact that the right-hand side of (2.14) is maximal at

$$\begin{aligned} \mu _{2}=\frac{1}{2}\left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) . \end{aligned}$$

The second inequality in (2.13) follows from (2.12). \(\square \)
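
The bounds (2.9) and (2.13) can likewise be spot-checked numerically; a minimal sketch (Python with NumPy, equal weights, arbitrary random data):

```python
import numpy as np

# Sketch: spot-check of the range bounds in (2.9) and (2.13) on arbitrary
# data with equal weights.
rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 2.0, size=50)
m, M = x.min(), x.max()
mu2, mu3, mu4 = (np.mean((x - x.mean()) ** r) for r in (2, 3, 4))
print(mu2 * mu4 - mu2 ** 3 - mu3 ** 2 <= (M - m) ** 6 / 432)
print(mu4 - mu2 ** 2 - mu3 ** 2 / mu2 <= (M - m) ** 4 / 64)
```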

The studentized range q of n real numbers \(x_{i};\, m\le x_{i}\le M,\ i=1,2, \ldots ,n\) is defined as

$$\begin{aligned} q=\frac{M-m}{S}, \end{aligned}$$
(2.15)

where S is the standard deviation. We now find an interesting relation among the studentized range, skewness and kurtosis.

Corollary 2.5

For \(m\le x_{i}\le M,\ i=1,2, \ldots ,n\), we have

$$\begin{aligned} \alpha _{4}-\alpha _{3}^{2}\le \frac{q^{2}}{4}, \end{aligned}$$
(2.16)

where \(\alpha _{3},\alpha _{4}\) and q are respectively defined by (1.11) and (2.15).

Proof

Dividing both sides of (2.11) by \( \mu _{2}^{3}\), we see that

$$\begin{aligned} \frac{\mu _{4}}{\mu _{2}^{2}}-\frac{\mu _{3}^{2}}{\mu _{2}^{3}}\le \frac{ \left( M-m\right) ^{2}}{4\mu _{2}}+\left( 1-\frac{\left( M-m\right) ^{2}}{ 4\left( \mu _{1}^{{\prime }}-m\right) \left( M-\mu _{1}^{{\prime }}\right) }\right) . \end{aligned}$$
(2.17)

Combining (2.12) and (2.17), we get that

$$\begin{aligned} \frac{\mu _{4}}{\mu _{2}^{2}}-\frac{\mu _{3}^{2}}{\mu _{2}^{3}}\le \frac{ \left( M-m\right) ^{2}}{4\mu _{2}}. \end{aligned}$$
(2.18)

The inequality (2.18), together with (1.11) and (2.15), implies (2.16). \(\square \)
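
For illustration, (2.16) can be verified on arbitrary data; a short sketch (Python with NumPy, using the population standard deviation):

```python
import numpy as np

# Sketch: check of (2.16) relating kurtosis, skewness and studentized range.
rng = np.random.default_rng(3)
x = rng.normal(size=200)
m2, m3, m4 = (np.mean((x - x.mean()) ** r) for r in (2, 3, 4))
alpha3_sq = m3 ** 2 / m2 ** 3            # squared skewness, as in (1.11)
alpha4 = m4 / m2 ** 2                    # kurtosis, as in (1.11)
q = (x.max() - x.min()) / np.sqrt(m2)    # studentized range (2.15), S = sqrt(m_2)
print(alpha4 - alpha3_sq <= q ** 2 / 4)
```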

Remark 1

The \(r\mathrm{th}\)-order moment about the origin is defined as

$$\begin{aligned} \mu _{r}^{{\prime }}=\int \limits _{m}^{M}x^{r}f\left( x\right) \, \mathrm{d}x\,\text { or }\,\mu _{r}^{{\prime }}=\sum _{i=1}^{n}p_{i}x_{i}^{r}. \end{aligned}$$

On using the well-known relations, \(\mu _{2}=\mu _{2}^{{\prime }}-\mu _{1}^{\prime ^{2}},\ \mu _{3}=\mu _{3}^{{\prime }}-3\mu _{1}^{{\prime }}\mu _{2}^{\prime }+2\mu _{1}^{\prime ^{3}}\) and \(\mu _{4}=\mu _{4}^{{\prime }}-4\mu _{1}^{{\prime }}\mu _{3}^{\prime }+6\mu _{1}^{\prime ^{2}}\mu _{2}^{{\prime }}-3\mu _{1}^{\prime ^{4}}\) in the above inequalities, we can write the inequalities involving moments about the origin of discrete and continuous distributions. For example, the inequalities (2.5), (2.9) and (2.13), respectively, give

$$\begin{aligned} \mu _{4}^{{\prime }}\le & {} \left( m+M\right) \mu _{3}^{{\prime }}-mM\mu _{2}^{{\prime }}-\frac{\left( \mu _{3}^{{\prime }}-\left( m+M\right) \mu _{2}^{{\prime }}+mM\mu _{1}^{{\prime }}\right) ^{2}}{\left( m+M\right) \mu _{1}^{{\prime }}-\mu _{2}^{{\prime }}-mM}, \\&\left( \mu _{4}^{{\prime }}-\mu _{2}^{\prime ^{2}}\right) \left( \mu _{2}^{{\prime }}-\mu _{1}^{\prime ^{2}}\right) -\left( \mu _{3}^{{\prime }}-\mu _{1}^{{\prime }}\mu _{2}^{\prime }\right) ^{2}\le \frac{\left( M-m\right) ^{6}}{432} \end{aligned}$$

and

$$\begin{aligned} \mu _{4}^{{\prime }}-\mu _{2}^{\prime ^{2}}-\frac{\left( \mu _{3}^{{\prime }}-\mu _{1}^{{\prime }}\mu _{2}^{\prime }\right) ^{2}}{\mu _{2}^{{\prime }}-\mu _{1}^{\prime ^{2}}}\le \frac{\left( M-m\right) ^{4}}{64}. \end{aligned}$$
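
As a spot-check, the first of the three raw-moment inequalities above can be tested on arbitrary data with equal weights; a minimal sketch in Python with NumPy:

```python
import numpy as np

# Sketch: spot-check of the first displayed raw-moment inequality above on
# arbitrary data in [m, M] with equal weights.
rng = np.random.default_rng(4)
x = rng.uniform(0.5, 3.0, size=40)
m, M = x.min(), x.max()
mu1p, mu2p, mu3p, mu4p = (np.mean(x ** r) for r in (1, 2, 3, 4))
num = (mu3p - (m + M) * mu2p + m * M * mu1p) ** 2
den = (m + M) * mu1p - mu2p - m * M
print(mu4p <= (m + M) * mu3p - m * M * mu2p - num / den)
```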

The inequalities (1.5) and (2.1), respectively, give upper bounds for \(\mu _{2}\) and \(\mu _{4}\) in terms of the range of the random variable, \(M-m\). It is interesting to note that an analogous upper bound for the third central moment \(\mu _{3}\) follows easily from the inequality (1.10).

Theorem 2.6

Let X be a discrete or continuous random variable taking values in \(\left[ m,M\right] \). Then

$$\begin{aligned} \left| \mu _{3}\right| \le \frac{\left( M-m\right) ^{3}}{6\sqrt{3}}. \end{aligned}$$
(2.19)

Proof

From the inequality (1.10), we have

$$\begin{aligned} \mu _{3}^{2}\le \left( M-m\right) ^{2}\sigma ^{4}-4\sigma ^{6}. \end{aligned}$$
(2.20)

The inequality (2.19) follows from (2.20) and the fact that the function

$$\begin{aligned} h\left( x\right) =\left( M-m\right) ^{2}x^{4}-4x^{6}, \end{aligned}$$

achieves its maximum at \(x=\frac{M-m}{\sqrt{6}}\) where \(h\left( x\right) \le \frac{\left( M-m\right) ^{6}}{108}\). \(\square \)

Equality holds in (2.19) for \(n=2;\ x_{1}=m\) and \(x_{2}=M\) with \(p_{1}=\frac{1}{2}\pm \frac{1}{2\sqrt{3}}\) and \(p_{2}=\frac{ 1}{2}\mp \frac{1}{2\sqrt{3}}\).

It remains to prove an analogue of the Nagy inequality (1.6) for the fourth central moment. We show that a generalization of the Nagy inequality (1.6) follows easily for the central moment \(m_{2r}\).

Theorem 2.7

Let \(m_{2r}\) be the \(2r\mathrm{th}\) central moment of n real numbers \(x_{i}\), where \(m\) and \(M\) denote the smallest and largest of the \(x_{i}\); then for \( n\ge 3\), we have

$$\begin{aligned} m_{2r}\ge \frac{\left( M-m\right) ^{2r}}{2^{2r-1}n}+\left( \frac{n}{n-2} \right) ^{r-1}\left( m_{2}-\frac{\left( M-m\right) ^{2}}{2n}\right) ^{r}. \end{aligned}$$
(2.21)

Proof

Arrange the \(x_{i}\) so that \(x_{1}=m\le x_{2}\le \cdots \le x_{n}=M\). Then, from (1.4), we have

$$\begin{aligned} m_{2r}=\frac{\left( M-m_{1}^{{\prime }}\right) ^{2r}+\left( m_{1}^{{\prime }}-m\right) ^{2r}}{n}+\frac{n-2}{n}\left( \frac{1}{n-2}\sum \limits _{i=2}^{n-1}\left( x_{i}-m_{1}^{{\prime }}\right) ^{2r}\right) . \end{aligned}$$
(2.22)

It is well known that for \(N\) nonnegative real numbers \(y_{1},y_{2}, \ldots ,y_{N}\),

$$\begin{aligned} \frac{1}{N}\sum \limits _{i=1}^{N}y_{i}^{k}\ge \left( \frac{1}{N} \sum \limits _{i=1}^{N}y_{i}\right) ^{k}, \quad k=1,2, \ldots . \end{aligned}$$
(2.23)

Applying (2.23) to the \(n-2\) nonnegative real numbers \(\left( x_{i}-m_{1}^{{\prime }}\right) ^{2},\ i=2, \ldots ,n-1\), we get

$$\begin{aligned} \frac{1}{n-2}\sum \limits _{i=2}^{n-1}\left( x_{i}-m_{1}^{{\prime }}\right) ^{2r}\ge \left( \frac{1}{n-2}\sum \limits _{i=2}^{n-1}\left( x_{i}-m_{1}^{{\prime }}\right) ^{2}\right) ^{r}. \end{aligned}$$
(2.24)

We also have

$$\begin{aligned} \sum _{i=2}^{n-1}\left( x_{i}-m_{1}^{{\prime }}\right) ^{2}=nm_{2}-\left( m-m_{1}^{{\prime }}\right) ^{2}-\left( M-m_{1}^{{\prime }}\right) ^{2}. \end{aligned}$$
(2.25)

Combining (2.22), (2.24) and (2.25), we have

$$\begin{aligned}&m_{2r}\ge \frac{\left( M-m_{1}^{{\prime }}\right) ^{2r}+\left( m_{1}^{{\prime }}-m\right) ^{2r}}{n}+\frac{1}{n\left( n-2\right) ^{r-1}} \left( nm_{2}-\left( m-m_{1}^{{\prime }}\right) ^{2} \right. \nonumber \\&\quad \quad \quad \quad \left. -\left( M-m_{1}^{{\prime }}\right) ^{2}\right) ^{r}. \end{aligned}$$
(2.26)

The right-hand side of (2.26) is minimized at \(m_{1}^{{\prime }}=\frac{m+M}{2}\), and so (2.21) follows from (2.26). \(\square \)

The inequality (2.21) provides a generalization of the Nagy inequality (1.6):

$$\begin{aligned} m_{2r}\ge \frac{\left( M-m\right) ^{2r}}{2^{2r-1}n}. \end{aligned}$$
(2.27)

When \(n=2\), the inequality (2.27) becomes an equality. For \(n\ge 3\), equality holds when \(x_{1}=m,\ x_{2}=x_{3}= \cdots =x_{n-1}=\frac{m+M}{2}\) and\(\ x_{n}=M\). Also, for \(r=2\) and \(n=3\), the inequalities (1.6) and (2.27) give equal estimates.
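
For illustration, (2.21) can be spot-checked numerically; a minimal sketch (Python with NumPy, \(r=2\), arbitrary random data, with \(m\) and \(M\) the smallest and largest observations):

```python
import numpy as np

# Sketch: check of (2.21) for r = 2 on arbitrary data, with m and M taken
# as the smallest and largest of the x_i.
rng = np.random.default_rng(5)
n, r = 12, 2
x = rng.uniform(0.0, 5.0, size=n)
m, M = x.min(), x.max()
m2 = np.mean((x - x.mean()) ** 2)
m2r = np.mean((x - x.mean()) ** (2 * r))
rhs = (M - m) ** (2 * r) / (2 ** (2 * r - 1) * n) \
      + (n / (n - 2)) ** (r - 1) * (m2 - (M - m) ** 2 / (2 * n)) ** r
print(m2r >= rhs)
```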

3 Bounds on the Spread of a Matrix

Let \(\mathbb {M}(n)\) be the space of all \(n\times n\) complex matrices. A linear functional \(\varphi :\mathbb {M}(n)\rightarrow \mathbb {C}\) is said to be positive if \(\varphi \left( A\right) \) is non-negative whenever A is positive semidefinite. It is unital if \(\varphi \left( I\right) =1\). For more details, see [6]. Let \(A=\left( a_{ij}\right) \) be an element of \(\mathbb {M}(n)\) with eigenvalues \(\lambda _{i},\, i=1,2, \ldots ,n\). The spread of A is defined as

$$\begin{aligned} \text {spd}\left( A\right) =\max _{i,j}\left| \lambda _{i}-\lambda _{j}\right| . \end{aligned}$$

Bhatia and Sharma [7] and [8] have shown how positive unital linear maps can be used to derive many inequalities for the spread of a matrix. Enhancing this technique, we derive here some more inequalities for positive unital linear functionals and obtain bounds for the spread of a Hermitian matrix.

Beginning with Mirsky [12], several authors have obtained bounds for the spread of a matrix A in terms of the functions of its entries. Mirsky [13] proves that for every Hermitian matrix A,

$$\begin{aligned} \text {spd}\left( A\right) ^{2}\ge \max _{i\ne j}\left( \left( a_{ii}-a_{jj}\right) ^{2}+4\left| a_{ij}\right| ^{2}\right) . \end{aligned}$$

Barnes and Hoffman [3] prove the following sharper bound,

$$\begin{aligned} \text {spd}\left( A\right) ^{2}\ge \max _{i,j}\left( \left( a_{ii}-a_{jj}\right) ^{2}+2\underset{k\ne i}{\sum }\left| a_{ik}\right| ^{2}+2\underset{k\ne j}{\sum }\left| a_{jk}\right| ^{2}\right) \ . \end{aligned}$$
(3.1)

One more inequality of interest here is, see [7],

$$\begin{aligned} \text {spd}\left( A\right) ^{2}\ge 4\max _{j}\underset{k\ne j}{\sum } \left| a_{jk}\right| ^{2}. \end{aligned}$$
(3.2)

The inequalities (3.1) and (3.2) are independent. Bhatia and Sharma [7] and [8] have shown that such inequalities follow easily from the inequalities for positive linear maps. We pursue this topic further and obtain bounds for the spread in the following theorems.

Theorem 3.1

Let \(\varphi :\mathbb {M} (n)\rightarrow \mathbb {C}\) be a positive unital linear functional and A be any Hermitian element of \(\mathbb {M}(n)\). Then

$$\begin{aligned} \varphi \left( B^{4}\right) \le \frac{\text {spd}\left( A\right) ^{4}}{12} \end{aligned}$$
(3.3)

and

$$\begin{aligned} \varphi \left( B^{2}\right) \varphi \left( B^{4}\right) -\varphi \left( B^{2}\right) ^{3}-\varphi \left( B^{3}\right) ^{2}\le \frac{\text {spd} \left( A\right) ^{6}}{432}, \end{aligned}$$
(3.4)

where \(B=A-\varphi \left( A\right) I\).

Proof

Let \(\lambda _{i}\) be the eigenvalues of A, \(i=1,2, \ldots ,n\). By the spectral theorem,

$$\begin{aligned} B=\underset{i=1}{\overset{n}{\sum }}\left( \lambda _{i}-\varphi \left( A\right) \right) P_{i}, \end{aligned}$$

where \(\lambda _{i}-\varphi \left( A\right) \) are the eigenvalues of B and \(P_{i}\) the corresponding projections with \(\underset{i=1}{\overset{n}{\sum } }P_{i}=I\), see [6]. Then, for \(r=1,2, \ldots \), we have

$$\begin{aligned} B^{r}=\underset{i=1}{\overset{n}{\sum }}\left( \lambda _{i}-\varphi \left( A\right) \right) ^{r}P_{i}. \end{aligned}$$
(3.5)

Applying \(\varphi \) to both sides of (3.5), we get

$$\begin{aligned} \varphi \left( B^{r}\right) =\underset{i=1}{\overset{n}{\sum }}\left( \lambda _{i}-\varphi \left( A\right) \right) ^{r}\varphi \left( P_{i}\right) . \end{aligned}$$

Since \(\lambda _{i}-\varphi \left( A\right) \) are real numbers and \(\varphi \left( P_{i}\right) \) are non-negative real numbers such that \(\underset{i=1}{\overset{n}{\sum }}\varphi \left( P_{i}\right) =1\), the inequalities, (3.3) and (3.4), follow, respectively, from (2.1) and (2.9). \(\square \)

Note that an equivalent form of (3.4) says that the determinant

$$\begin{aligned} \left| \begin{array}{c@{\quad }c@{\quad }c} 1 &{} \varphi \left( A\right) &{} \varphi \left( A^{2}\right) \\ \varphi \left( A\right) &{} \varphi \left( A^{2}\right) &{} \varphi \left( A^{3}\right) \\ \varphi \left( A^{2}\right) &{} \varphi \left( A^{3}\right) &{} \varphi \left( A^{4}\right) \end{array} \right| \le \frac{\text {spd}\left( A\right) ^{6}}{432}. \end{aligned}$$

Also, lower bounds for the spread in terms of the traces of A and \(A^{2}\) are studied in [21]. Our bound of current interest, (3.4), involves the entries of A, \(A^{2}\), \(A^{3}\) and \(A^{4}\).

Likewise, it follows from (2.19) that

$$\begin{aligned} \left| \varphi \left( B^{3}\right) \right| \le \frac{\text {spd}\left( A\right) ^{3}}{6 \sqrt{3}}. \end{aligned}$$

In this connection, we prove one more inequality in the following theorem.

Theorem 3.2

Let \(\varphi :\mathbb {M} (n)\rightarrow \mathbb {C} \) be a positive unital linear functional. Then for \(0\le A\le MI\), we have

$$\begin{aligned} 0\le \left| \begin{array}{c@{\quad }c} \varphi \left( A\right) &{} \varphi \left( A^{2}\right) \\ \varphi \left( A^{2}\right) &{} \varphi \left( A^{3}\right) \end{array} \right| \le \frac{M^{4}}{27}. \end{aligned}$$
(3.6)

Proof

On using arguments similar to those used in the proof of the above theorem, it follows from the second inequality (1.9) that

$$\begin{aligned} \varphi \left( A^{3}\right) \le M\varphi \left( A^{2}\right) -\frac{\left( M\varphi \left( A\right) -\varphi \left( A^{2}\right) \right) ^{2}}{ M-\varphi \left( A\right) }. \end{aligned}$$
(3.7)

Since \(\varphi \left( A\right) \ge 0\), the inequality (3.7) implies that for \(A<MI\),

$$\begin{aligned} \varphi \left( A^{3}\right) \varphi \left( A\right) -\varphi \left( A^{2}\right) ^{2}\le & {} M\varphi \left( A^{2}\right) \varphi \left( A\right) \nonumber \\&-\, \frac{\left( M\varphi \left( A\right) -\varphi \left( A^{2}\right) \right) ^{2}\varphi \left( A\right) }{M-\varphi \left( A\right) }-\varphi \left( A^{2}\right) ^{2}. \end{aligned}$$
(3.8)

The inequality (3.6) follows from (3.8) and the fact that the function

$$\begin{aligned} h\left( x,y\right) = bxy-\frac{\left( bx-y\right) ^{2}}{b-x}x-y^{2}, \end{aligned}$$

achieves its maximum at \(x=\frac{2}{3}b\) and \(y=\frac{x\left( b+x\right) }{2}\), where \(h\left( x,y\right) \le \frac{b^{4}}{27}\). \(\square \)

We now consider an upper bound for the spread of a matrix. Mirsky [12] proves that for any \(n\times n\) matrix A,

$$\begin{aligned} \text {spd}\left( A\right) ^{2}\le 2\text {tr}A^{*}A-\frac{2}{n} \left| \text {tr}A\right| ^{2}. \end{aligned}$$

See also [4] and [23]. We prove an extension of this inequality in the following theorem.

Theorem 3.3

Let A be a complex \(n\times n\) matrix and let \(B=A-\frac{\text {tr}A}{n}I\). Then the inequality

$$\begin{aligned} \text {spd}\left( A\right) ^{2r}\le 2^{2r-1}\text {tr}\left( B^{r}\left( B^{*}\right) ^{r}\right) , \end{aligned}$$
(3.9)

holds true for \(r=1,2, \ldots \) .

Proof

Let \(\lambda _{i}\) be the eigenvalues of \(A,\ i=1,2, \ldots ,n\). Then

$$\begin{aligned} \frac{1}{n}\text {tr}\left( B^{r}\left( B^{*}\right) ^{r}\right) \ge \frac{1}{n}\sum _{i=1}^{n}\left| \lambda _{i}-\frac{\text {tr}A}{n} \right| ^{2r}. \end{aligned}$$
(3.10)

From (3.10), we see that the inequality

$$\begin{aligned} \frac{1}{n}\text {tr}\left( B^{r}\left( B^{*}\right) ^{r}\right) \ge \frac{1}{n}\left( \left| \lambda _{j}-\frac{\text {tr}A}{n}\right| ^{2r}+\left| \lambda _{k}-\frac{\text {tr}A}{n}\right| ^{2r}\right) , \end{aligned}$$
(3.11)

holds for any \(j,k=1,2, \ldots ,n\) with \(j\ne k\). Also, for two positive real numbers \(x_{1}\) and \(x_{2}\), \(2^{r-1}\left( x_{1}^{r}+x_{2}^{r}\right) \ge \left( x_{1}+x_{2}\right) ^{r}\); therefore,

$$\begin{aligned} \left| \lambda _{j}-\frac{\text {tr}A}{n}\right| ^{2r}+\left| \lambda _{k}-\frac{\text {tr}A}{n}\right| ^{2r}\ge \frac{1}{2^{2r-1}} \left( \left| \lambda _{j}-\frac{\text {tr}A}{n}\right| +\left| \lambda _{k}-\frac{\text {tr}A}{n}\right| \right) ^{2r}. \end{aligned}$$
(3.12)

On using the triangle inequality for complex numbers, we have

$$\begin{aligned} \left| \lambda _{j}-\frac{\text {tr}A}{n}\right| +\left| \lambda _{k}-\frac{\text {tr}A}{n}\right| \ge \left| \lambda _{j}-\lambda _{k}\right| . \end{aligned}$$
(3.13)

Combining (3.11)–(3.13), we immediately get (3.9). \(\square \)
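
The inequality (3.9) is easy to check numerically; a minimal sketch (Python with NumPy, an arbitrary randomly generated non-Hermitian matrix):

```python
import numpy as np

# Sketch: check of (3.9) for r = 1, 2 on an arbitrary complex matrix.
rng = np.random.default_rng(6)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = A - (np.trace(A) / n) * np.eye(n)
lam = np.linalg.eigvals(A)
spd = max(abs(li - lj) for li in lam for lj in lam)
for r in (1, 2):
    Br = np.linalg.matrix_power(B, r)
    print(spd ** (2 * r) <= 2 ** (2 * r - 1) * np.trace(Br @ Br.conj().T).real)
```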

Several inequalities for the spread can be obtained from (3.3) and (3.4). For example, if we choose \(\varphi \left( A\right) =\frac{\text {tr}A}{n}\), then we have

$$\begin{aligned} \text {spd}\left( A\right) ^{4}\ge \frac{12}{n}\text {tr}B^{4} \end{aligned}$$
(3.14)

and

$$\begin{aligned} \text {spd}\left( A\right) ^{6}\ge \frac{432}{n^{3}}\left( n\text {tr}B^{2} \text {tr}B^{4}-\left( \text {tr}B^{2}\right) ^{3}-n\left( \text {tr} B^{3}\right) ^{2}\right) . \end{aligned}$$
(3.15)

Note that (1.8) yields the first inequality in (1.7) for \(\varphi \left( A\right) =\frac{\text {tr}A}{n}\). Also, (1.8) gives (3.1) and (3.2) for \(\varphi \left( A\right) =\frac{a_{ii}+a_{jj}}{2}\) and \(\varphi \left( A\right) =a_{ii}\), respectively. The corresponding estimates for the spread from (3.3) and (3.4) can be calculated numerically; see Example 1, below.

We give examples and compare the bound (1.7) in terms of the traces with our corresponding bounds (3.9), (3.14) and (3.15). Likewise, we compare (3.1) and (3.2) with (3.3) and (3.4), respectively.

Example 1

Let

$$\begin{aligned} A=\left[ \begin{array}{c@{\quad }c@{\quad }c} 3 &{} 2 &{} 1 \\ 2 &{} 0 &{} 2 \\ 1 &{} 2 &{} 3 \end{array} \right] . \end{aligned}$$

Then, from the bound (1.7), spd\(\left( A\right) \ge 5.6569\), while our bounds (3.14) and (3.15) give \(\hbox {spd}\left( A\right) \ge 5.8259\) and \(\hbox {spd}\left( A\right) \ge 6.9282\), respectively. Here \(n=3\); the inequalities (1.7) and (3.9) therefore give the equal estimates \(\hbox {spd} \left( A\right) \le 6.9282\) (\(r=2\)). Further, from (3.1), \(\hbox {spd}\left( A\right) \ge 5.9161\), while our bounds (3.3) and (3.4) with \( \varphi \left( A\right) =\frac{a_{ii}+a_{jj}}{2}\) give \(\hbox {spd}\left( A\right) \ge 6.0181\) and \(\hbox {spd}\left( A\right) \ge 6.8252\). Likewise, from (3.2), \(\hbox {spd}\left( A\right) \ge 5.6569\), while from (3.3) and (3.4) with \(\varphi \left( A\right) =a_{ii}\) we have \(\hbox {spd}\left( A\right) \ge 6.9282\) and \(\hbox {spd}\left( A\right) \ge 6.2947\). Hence, our bounds give better estimates.
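
The trace-based estimates quoted above can be reproduced with a short computation; a sketch in Python with NumPy:

```python
import numpy as np

# Sketch reproducing the trace-based estimates quoted in Example 1.
A = np.array([[3, 2, 1], [2, 0, 2], [1, 2, 3]], dtype=float)
n = A.shape[0]
B = A - (np.trace(A) / n) * np.eye(n)
t2, t3, t4 = (np.trace(np.linalg.matrix_power(B, k)) for k in (2, 3, 4))
print((4 * t2 / n) ** 0.5)          # 5.6569, lower bound from (1.7)
print((12 * t4 / n) ** 0.25)        # 5.8259, lower bound from (3.14)
rhs = 432 * (n * t2 * t4 - t2 ** 3 - n * t3 ** 2) / n ** 3
print(rhs ** (1 / 6))               # 6.9282, lower bound from (3.15)
```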

Example 2

Let

$$\begin{aligned} A_{1}=\left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 6 &{} 3 &{} 4 &{} 2 \\ 3 &{} 1 &{} 0 &{} 3 \\ 4 &{} 0 &{} 2 &{} 1 \\ 2 &{} 3 &{} 1 &{} 2 \end{array} \right] \ \ \text {and }\, \, A_{2}=\left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 6 &{} 0 &{} 4 &{} 2 \\ 3 &{} 1 &{} 0 &{} 3 \\ 4 &{} 0 &{} 2 &{} 1 \\ 2 &{} 3 &{} 1 &{} 2 \end{array} \right] . \end{aligned}$$

For the Hermitian matrix \(A_{1}\), (1.7) gives spd\(\left( A_{1}\right) \le 13.620\), while our bound (3.9) gives spd\(\left( A_{1}\right) \le 13.559\) for \(r=2\). Likewise, for the arbitrary matrix \(A_{2}\), the Mirsky bound ((3.9) with \(r=1\)) gives \(\hbox {spd}\left( A_{2}\right) \le 12.227\), while our bound (3.9) with \(r=2\) gives \(\hbox {spd}\left( A_{2}\right) \le 11.934\).

4 Bounds for the Span of a Polynomial

In the theory of polynomial equations, the study of polynomials with real roots is of special interest; see [11] and [17]. The span of a polynomial is the length \(b-a\) of the smallest interval \(\left[ a,b\right] \) containing all the zeros of the polynomial. It is also of interest to find bounds on the roots and span of a polynomial in terms of its coefficients; see [10, 17] and [18]. We obtain here some bounds for the span of a polynomial.

It is sufficient to consider the polynomial equation in which the coefficient of \(x^{n-1}\) is zero,

$$\begin{aligned} f\left( x\right) =x^{n}+a_{2}x^{n-2}+a_{3}x^{n-3}+ \cdots +a_{n-1}x+a_{n}=0. \end{aligned}$$
(4.1)

Let \(x_{1},x_{2}, \ldots ,x_{n}\) be the roots of (4.1). On using the well-known Newton identities,

$$\begin{aligned} \alpha _{k}+a_{1}\alpha _{k-1}+a_{2}\alpha _{k-2}+ \cdots +a_{k-1}\alpha _{1}+ka_{k}=0, \end{aligned}$$

where \(\alpha _{k}=\sum \limits _{i=1}^{n}x_{i}^{k}\) and \(k=1,2, \ldots n\), we have

$$\begin{aligned} m_{1}=\frac{1}{n}\sum _{i=1}^{n}x_{i}=0,\, m_{2}=\frac{1}{n} \sum _{i=1}^{n}x_{i}^{2}=-\frac{2}{n}a_{2},\, m_{3}=\frac{1}{n} \sum _{i=1}^{n}x_{i}^{3}=-\frac{3}{n}a_{3} \end{aligned}$$
(4.2)

and

$$\begin{aligned} m_{4}=\frac{1}{n}\sum _{i=1}^{n}x_{i}^{4}=\frac{2}{n}\left( a_{2}^{2}-2a_{4}\right) . \end{aligned}$$
(4.3)
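
The relations (4.2)–(4.3) can be checked by comparing the moments computed from the coefficients with the moments of the actual roots; a minimal sketch (Python with NumPy, using an arbitrary polynomial with real roots summing to zero, not an example from the paper):

```python
import numpy as np

# Sketch: checking (4.2)-(4.3) on an arbitrary polynomial with real roots
# whose sum is zero (so that a_1 = 0).
roots = np.array([-3.0, -1.0, 0.5, 3.5])
n = roots.size
a2, a3, a4 = np.poly(roots)[2:5]          # monic coefficients a_2, a_3, a_4
print(np.mean(roots ** 2), -2 * a2 / n)                 # m_2
print(np.mean(roots ** 3), -3 * a3 / n)                 # m_3
print(np.mean(roots ** 4), 2 * (a2 ** 2 - 2 * a4) / n)  # m_4
```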

The span of the polynomial (4.1) is \(\hbox {spn}\left( f\right) =\underset{i,j}{ \max }\left| x_{i}-x_{j}\right| \). Then, from (1.6), we get

$$\begin{aligned} \text {spn}\left( f\right) \le 2\sqrt{-a_{2}}, \end{aligned}$$
(4.4)

see [14] and [17]. Likewise, from (1.5) we have

$$\begin{aligned} \text {spn}\left( f\right) \ge 2\sqrt{\frac{-2a_{2}}{n}}, \end{aligned}$$
(4.5)

see [17].

In a similar spirit, we obtain some further estimates for \(\hbox {spn}\left( f\right) \) in the following theorems.

Theorem 4.1

If the roots of the polynomial (4.1) are all real, then for \(n\ge 5\), we have

$$\begin{aligned} \left( \frac{24}{n}\left( a_{2}^{2}-2a_{4}\right) \right) ^{\frac{1}{4}}\le \mathrm{spn}\left( f\right) \le 2\left( a_{2}^{2}-2a_{4}\right) ^{\frac{1}{4} }. \end{aligned}$$
(4.6)

Proof

Let \(x_{i}\) be the roots of polynomial (4.1) such that \(x_{1}\le x_{i}\le x_{n},\ i=1,2, \ldots ,n\). Then from the inequality (2.1), we have

$$\begin{aligned} \left( x_{n}-x_{1}\right) ^{4}\ge 12m_{4}. \end{aligned}$$
(4.7)

Combining (4.3) and (4.7), we immediately get the first inequality in (4.6), since \(x_{n}-x_{1}= \hbox {spn}\left( f\right) \). Similarly, the inequality (2.27) gives \(x_{n}-x_{1}\le \left( 8nm_{4}\right) ^{\frac{1}{4}}\) for \(r=2\). This implies the second inequality in (4.6). \(\square \)

Theorem 4.2

Under the conditions of Theorem 4.1, we have

$$\begin{aligned} \mathrm{spn}\left( f\right) \ge \left( \frac{432}{n^{3}}\left( 4\left( 2-n\right) a_{2}^{3}-9na_{3}^{2}+8na_{2}a_{4}\right) \right) ^{\frac{1}{6}}. \end{aligned}$$
(4.8)

Proof

As in the proof of above theorem, it follows from the inequality (2.9) that

$$\begin{aligned} \left( x_{n}-x_{1}\right) ^{6}\ge 432\left( m_{2}m_{4}-m_{2}^{3}-m_{3}^{2}\right) . \end{aligned}$$
(4.9)

Combining (4.2), (4.3) and (4.9), we immediately get (4.8). \(\square \)

Example 3

Let

$$\begin{aligned} f\left( x\right) =x^{5}+80x^{4}+1500x^{3}+5000x^{2}+3750x+\frac{1}{5}=0. \end{aligned}$$
(4.10)

The roots \(x_{i}\) of (4.10) are real, \(i=1,2, \ldots ,5\); see [22]. Let \(y_{i}=x_{i}+16\) be the roots of the diminished equation:

$$\begin{aligned} f\left( y\right) =y^{5}-1060y^{3}+14{,}920y^{2}+12{,}710y-\frac{3{,}648{,}479}{5}=0. \end{aligned}$$

The Nagy inequality (4.4) gives \(\mathrm{spn}\left( f\right) \le 65.116\), while from (4.6) \(\mathrm{spn}\left( f\right) \le 64.744\). Also, the Popoviciu inequality (4.5) gives \(\mathrm{spn}\left( f\right) \ge 41.183\), while from our bounds, (4.6) and (4.8), we have \(\mathrm{spn}\left( f\right) \ge 47.916\) and \(\mathrm{spn}\left( f\right) \ge 48.435\).
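
The estimates quoted above can be reproduced from the coefficients of the diminished equation; a short sketch in Python with NumPy:

```python
import numpy as np

# Sketch reproducing the estimates of Example 3 from the coefficients of
# the diminished equation: a_2 = -1060, a_3 = 14920, a_4 = 12710, n = 5.
n, a2, a3, a4 = 5, -1060.0, 14920.0, 12710.0
print(2 * np.sqrt(-a2))                          # 65.116, from (4.4)
print(2 * (a2 ** 2 - 2 * a4) ** 0.25)            # 64.744, from (4.6)
print(2 * np.sqrt(-2 * a2 / n))                  # 41.183, from (4.5)
print((24 * (a2 ** 2 - 2 * a4) / n) ** 0.25)     # 47.916, from (4.6)
D = 4 * (2 - n) * a2 ** 3 - 9 * n * a3 ** 2 + 8 * n * a2 * a4
print((432 * D / n ** 3) ** (1 / 6))             # 48.435, from (4.8)
```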