1 Introduction

Let \(X_{k:n}\) be the k-th smallest order statistic of random variables \(X_1,\ldots ,X_n\). In reliability theory, \(X_{k:n}\) characterizes the lifetime of an \((n-k+1)\)-out-of-n system, which works if at least \(n-k+1\) of the n components function normally. In particular, \(X_{1:n}\), \(X_{n:n}\) and \(X_{2:n}\) represent the lifetimes of the series, parallel and fail-safe systems, respectively. The sample extremes and the second smallest order statistic also have natural applications in auction theory: the maximum and the minimum determine the final price in the first-price sealed-bid auction and the first-price procurement auction, respectively, and the second smallest order statistic is exactly the final price in the second-price procurement auction. One may refer to Milgrom (2004), Menezes and Monteiro (2005) and Krishna (2010) for comprehensive expositions of auction theory, and to Li (2005) and Fang and Li (2015) for applications of order statistics in auction theory. Order statistics also play a role in statistical inference, operations research, economics and many other fields of applied probability. Order statistics have been studied extensively during the past several decades, and a large part of this literature concerns stochastic comparisons of order statistics from heterogeneous and homogeneous samples. Due to the complexity of the distribution theory, most existing research assumes mutual independence among the random variables concerned. For comprehensive references one may refer to David and Nagaraja (2003), Kochar (2012) and Balakrishnan and Zhao (2013).

Let \(F(x,\lambda )\) be a distribution function with parameter \(\lambda \) and \(\bar{F}(x,\lambda )=1-F(x,\lambda )\) be the corresponding survival function. Denote \({\mathcal {I}}_n=\{1,\ldots ,n\}\), \({\varvec{\lambda }}=(\lambda _1,\ldots ,\lambda _n)\) and \({\varvec{\mu }}=(\mu _1,\ldots ,\mu _n)\). For two groups of mutually independent random variables \(X_i\sim F(x,\lambda _i)\), \(i\in {\mathcal {I}}_n\) and \(Y_i\sim F(x,\mu _i)\), \(i\in {\mathcal {I}}_n\), Pledger and Proschan (1971, Theorems 2.8 and 2.9) first showed that, if \(\bar{F}(x,\lambda )\) (\(F(x,\lambda )\)) is differentiable, monotone and log-convex with respect to \(\lambda \ge 0\) for \(x\ge 0\), then

$$\begin{aligned} \varvec{\lambda }\mathop {\preceq }\limits ^{\mathrm{m}}\varvec{\mu }\Longrightarrow X_{k:n}\le _{\mathrm{st}}(\ge _{\mathrm{st}})\, Y_{k:n},\quad k\in {\mathcal {I}}_n, \end{aligned}$$
(1.1)

and if \(F(x,\lambda )\) (\(\bar{F}(x,\lambda )\)) is differentiable and log-concave in \(\lambda \ge 0\) for \(x\ge 0\), then

$$\begin{aligned} {\varvec{\lambda }}\mathop {\preceq }\limits ^{\mathrm{m}}{\varvec{\mu }}\Longrightarrow X_{n:n}\le _{\mathrm{st}}Y_{n:n}\quad (X_{1:n}\ge _{\mathrm{st}}Y_{1:n}), \end{aligned}$$
(1.2)

where ‘\(\mathop {\preceq }\limits ^{\mathrm{m}}\)’ and ‘\(\le _{\mathrm{st}}\)’ respectively denote the majorization and the usual stochastic order (see Sect. 2 for their definitions).

The random variables \(X_1,\ldots ,X_n\) are said to follow the scale model if \(X_i\sim F(\lambda _i x)\) for \(i\in {\mathcal {I}}_n\), i.e., \(X_i\) has the distribution function \(F(\lambda _i x)\), where F is the baseline distribution and the \(\lambda _i\)'s are the scale parameters. Many commonly used distributions have scale parameters; for example, the exponential distribution \(\mathcal {E}(\lambda )\) with density \(\lambda e^{-\lambda x}\) for \(x>0,\lambda >0\), the Weibull distribution \(\mathcal {W}(\alpha ,\lambda )\) with density \(\alpha x^{\alpha -1}\lambda ^\alpha e^{-(\lambda x)^\alpha }\) for \(x>0,\alpha >0,\lambda >0\), the Lomax distribution \({\mathcal {L}}(\alpha ,\lambda )\) with density \(\alpha \lambda (1+\lambda x)^{-\alpha -1}\) for \(x>0,\alpha >0,\lambda >0\), the Fréchet distribution \({\mathcal {F}}(\alpha ,\lambda )\) with density \(\alpha \lambda ^{-\alpha }x^{-\alpha -1} e^{-(\lambda x)^{-\alpha }}\) for \(x>0,\alpha >0,\lambda >0\), and the Gamma distribution \({\mathcal {G}}(\alpha ,\lambda )\) with density \(\frac{\lambda ^\alpha }{\Gamma (\alpha )}x^{\alpha -1}e^{-\lambda x}\) for \(x>0,\alpha >0,\lambda >0\) all have the scale parameter \(\lambda \).
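As an illustrative aside (not part of the original analysis), the scale model admits the representation \(X_i\mathop {=}\limits ^\mathrm{{st}}Z/\lambda _i\) with \(Z\sim F\), which is easy to check numerically. The Weibull baseline, the seed and all numeric values below are our own choices.

```python
import math
import random

random.seed(1)

# Scale-model sketch: if the baseline Z has distribution F, then X = Z / lam
# has distribution F(lam * x).
def sample_scale(baseline_sampler, lam, size):
    """Draw from the scale family F(lam * x) given a sampler for the baseline F."""
    return [baseline_sampler() / lam for _ in range(size)]

# Weibull(alpha, lam) as a scale family: the baseline is Weibull(alpha, 1),
# sampled by inversion: F(x) = 1 - exp(-x**alpha), F^{-1}(u) = (-ln(1-u))**(1/alpha).
alpha = 2.0
baseline = lambda: (-math.log(1.0 - random.random())) ** (1.0 / alpha)

lam = 3.0
xs = sample_scale(baseline, lam, 200_000)

# Empirical check: P(X > x0) should be close to exp(-(lam*x0)**alpha).
x0 = 0.2
emp = sum(x > x0 for x in xs) / len(xs)
theo = math.exp(-(lam * x0) ** alpha)
print(abs(emp - theo))  # small
```

The same inversion device works for any of the scale families listed above once the baseline quantile function is available.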

For the exponential samples \(X_i\sim \mathcal {E}(\lambda _i)\), \(i\in {\mathcal {I}}_n\), and \(Y_i\sim \mathcal {E}(\mu _i)\), \(i\in {\mathcal {I}}_n\), both mutually independent, Pledger and Proschan (1971, Corollary 2.7) showed that

$$\begin{aligned} {\varvec{\lambda }}\mathop {\preceq }\limits ^{\mathrm{m}}{\varvec{\mu }}\Longrightarrow X_{1:n}\mathop {=}\limits ^\mathrm{{st}}Y_{1:n}\quad \text{ and } \quad X_{k:n}\le _{\mathrm{st}}Y_{k:n},\quad k=2,\ldots ,n, \end{aligned}$$
(1.3)

where ‘\(\mathop {=}\limits ^\mathrm{{st}}\)’ means both sides have a common distribution. Also for \(X_i\sim \mathcal {E}(\lambda _i)\), \(i\in {\mathcal {I}}_n\), and \(Z_i\sim \mathcal {E}(\lambda )\), \(i\in {\mathcal {I}}_n\), both mutually independent, Dykstra et al. (1997, Theorem 2.1) verified that

$$\begin{aligned} \lambda =\frac{1}{n}\sum _{i=1}^n \lambda _i\Longrightarrow X_{n:n}\ge _{\mathrm{disp}}Z_{n:n}, \end{aligned}$$

where ‘\(\le _{\mathrm{disp}}\)’ denotes the dispersive order (see Sect. 2 for the definition). Later on, Khaledi and Kochar (2000, Theorems 2.2 and 2.1) strengthened the result on the maxima of the exponential samples in (1.2) as

$$\begin{aligned} {\varvec{\lambda }}\mathop {\preceq }\limits ^{\mathrm{p}}{\varvec{\mu }}\Longrightarrow X_{n:n}\le _{\mathrm{st}}Y_{n:n}, \end{aligned}$$
(1.4)

and showed that

$$\begin{aligned} \lambda =\bigg (\prod _{i=1}^n\lambda _i\bigg )^{\frac{1}{n}}\Longrightarrow X_{n:n}\ge _{\mathrm{disp}}Z_{n:n}, \end{aligned}$$

where ‘\(\mathop {\preceq }\limits ^{\mathrm{p}}\)’ denotes the p-larger order (see Sect. 2 for the definition).
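For the exponential case these comparisons can be verified directly from the exact distribution functions; the following sketch (ours, not from the cited papers) checks (1.3) for the maxima on a specific majorized pair, using \(\mathrm {P}(X_{n:n}\le x)=\prod _i(1-e^{-\lambda _i x})\) and the fact that the minimum of independent exponentials is exponential with the summed rate.

```python
import math

# Sanity check of the exponential comparisons via exact CDFs.
def max_cdf(lams, x):
    """P(max <= x) for independent Exp(lam_i) lifetimes."""
    p = 1.0
    for lam in lams:
        p *= 1.0 - math.exp(-lam * x)
    return p

lam_vec = (2.0, 2.0)   # majorized by ...
mu_vec = (3.0, 1.0)    # ... this more heterogeneous rate vector

# X_{n:n} <=_st Y_{n:n}, i.e. P(X_{n:n} <= x) >= P(Y_{n:n} <= x) for all x.
grid = [0.1 * k for k in range(1, 60)]
ok = all(max_cdf(lam_vec, x) >= max_cdf(mu_vec, x) for x in grid)
print(ok)  # True

# X_{1:n} =_st Y_{1:n}: both minima are Exp(sum of the rates), and the sums agree.
print(sum(lam_vec) == sum(mu_vec))  # True
```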

As for Weibull samples \(X_i\sim \mathcal {W}(\alpha ,\lambda _i)\), \(i\in {\mathcal {I}}_n\) and \(Y_i\sim \mathcal {W}(\alpha ,\mu _i)\), \(i\in {\mathcal {I}}_n\), both mutually independent, Khaledi and Kochar (2006, Corollaries 3.2 and 3.1) further proved that

$$\begin{aligned} {\varvec{\lambda }}\mathop {\preceq }\limits ^{\mathrm{p}}{\varvec{\mu }}\Longrightarrow X_{n:n}\le _{\mathrm{st}}Y_{n:n}, \end{aligned}$$
(1.5)

and for \(X_i\sim \mathcal {W}(\alpha ,\lambda _i)\), \(i\in {\mathcal {I}}_n\) and \(Z_i\sim \mathcal {W}(\alpha ,\lambda )\), \(i\in {\mathcal {I}}_n\), both mutually independent,

$$\begin{aligned} \lambda =\bigg (\prod _{i=1}^n\lambda _i\bigg )^{\frac{1}{n}}\Longrightarrow X_{n:n}\ge _{\mathrm{disp}}Z_{n:n},\quad \text{ for } 0<\alpha \le 1. \end{aligned}$$

In the case of Gamma samples \(X_i\sim {\mathcal {G}}(\alpha ,\lambda _i)\), \(i\in {\mathcal {I}}_n\), and \(Y_i\sim {\mathcal {G}}(\alpha ,\mu _i)\), \(i\in {\mathcal {I}}_n\), both mutually independent, Sun and Zhang (2005, Theorem 1.2) proved that

$$\begin{aligned} {\varvec{\lambda }}\mathop {\preceq }\limits ^{\mathrm{m}}{\varvec{\mu }}\Longrightarrow X_{1:n}\ge _{\mathrm{st}}Y_{1:n}\,\ \text{ and }\,\ X_{n:n}\le _{\mathrm{st}}Y_{n:n},\quad \text{ for } \alpha \ge 1, \end{aligned}$$
(1.6)

which actually follows immediately from (1.2) by setting the distribution function \(F(x,\lambda )=\int _0^x \frac{\lambda ^\alpha }{\Gamma (\alpha )}t^{\alpha -1}e^{-\lambda t}\,\mathrm {d}t\) for \(\alpha \ge 1\).

As is known, both the exponential and the Weibull distributions follow the scale model and are of decreasing proportional reversed hazard rate (DPRHR) (see Sect. 2 for the definition). For mutually independent \(X_i\sim F(\lambda _ix)\), \(i\in {\mathcal {I}}_n\) and \(Y_i\sim F(\mu _ix)\), \(i\in {\mathcal {I}}_n\), Khaledi et al. (2011, Theorem 3.2) strengthened (1.4) and (1.5) to

$$\begin{aligned} {\varvec{\lambda }}\mathop {\preceq }\limits ^{\mathrm{p}}{\varvec{\mu }}\Longrightarrow X_{n:n}\le _{\mathrm{st}}Y_{n:n}\quad \text{ whenever } F\hbox { is DPRHR}. \end{aligned}$$
(1.7)

Recently, some authors have turned their attention to comparing order statistics of dependent samples. For instance, Navarro and Spizzichino (2010) studied stochastic orders of series and parallel systems whose components' lifetimes share a common copula, Rezapour and Alamatsaz (2014) obtained stochastic orders on order statistics from samples with different Archimedean survival copulas, and Li and Fang (2015) investigated stochastic orders of the maxima of samples having proportional hazards and coupled by an Archimedean copula. In particular, for Weibull samples \(X_i\sim \mathcal {W}(\alpha ,\lambda _i)\), \(i\in {\mathcal {I}}_n\), and \(Y_i\sim \mathcal {W}(\alpha ,\mu _i)\), \(i\in {\mathcal {I}}_n\), both sharing a common Archimedean survival copula with generator \(\psi \), Li and Li (2015, Corollary 4.3) showed that

$$\begin{aligned}&{\varvec{\lambda }}\preceq _{\mathrm{w}}{\varvec{\mu }}\Longrightarrow X_{1:n}\ge _{\mathrm{st}}Y_{1:n},\quad \hbox {for log-convex }\psi \hbox { and } \alpha \ge 1, \end{aligned}$$
(1.8)
$$\begin{aligned}&{\varvec{\lambda }}\preceq ^{\mathrm{w}}{\varvec{\mu }}\Longrightarrow X_{1:n}\le _{\mathrm{st}}Y_{1:n}, \quad \hbox {for log-concave }\psi \text{ and } \alpha \le 1, \end{aligned}$$
(1.9)

and for \(X_i\sim \mathcal {W}(\alpha ,\lambda _i)\), \(i\in {\mathcal {I}}_n\) and \(Z_i\sim \mathcal {W}(\alpha ,\lambda )\), \(i\in {\mathcal {I}}_n\), both sharing a common Archimedean survival copula with generator \(\psi \), Li and Li (2015, Theorem 4.4) proved that

$$\begin{aligned} \lambda ^\alpha \le (\ge )\, \frac{1}{n}\sum \limits _{k=1}^n \lambda _k^\alpha \Longrightarrow X_{1:n}\le _{\mathrm{disp}}(\ge _{\mathrm{disp}})\, Z_{1:n} \end{aligned}$$

whenever \(\ln \psi \) and \(\frac{\psi \ln \psi }{\psi '}\) are both convex (concave), where ‘\(\preceq _{\mathrm{w}}\)’ and ‘\(\preceq ^{\mathrm{w}}\)’ denote the weak submajorization and supermajorization (see Sect. 2 for definitions), respectively.

Along this line of research, this note is further devoted to studying stochastic orders on sample extremes and the second smallest order statistic from random variables following the scale model and coupled by Archimedean copulas or survival copulas. The rest of this paper is organized as follows: Sect. 2 recalls the important concepts related to our study and some relevant results in the literature. In Sect. 3 we present several useful lemmas that will be utilized to develop the main theorems in the sequel. Section 4 investigates the usual stochastic order on sample extremes and the second smallest order statistic, and Sect. 5 studies the dispersive order and the star order on sample extremes.

For convenience, from now on we denote the sets \({\mathbb {R}}=(-\infty ,+\infty )\), \({\mathbb {R}}_+=(0,+\infty )\), \({\mathcal {I}}_n=\{1,\ldots ,n\}\), the real vectors \({\varvec{\lambda }}=(\lambda _1,\ldots ,\lambda _n)\), \({\varvec{\mu }}=(\mu _1,\ldots ,\mu _n)\), \(\varvec{1}=(1,\ldots ,1)\), and the random vectors \({\varvec{X}}=(X_1,\ldots ,X_n)\), \({\varvec{Y}}=(Y_1,\ldots ,Y_n)\), \({\varvec{Z}}=(Z_1,\ldots ,Z_n)\). Throughout this note, all random variables are assumed to be nonnegative and absolutely continuous, and the terms increasing and decreasing mean nondecreasing and nonincreasing, respectively.

2 Preliminaries

For ease of reference, in this section we recall some related concepts and present several lemmas that will be used in deriving the main results in the sequel.

A distribution function F with hazard rate \(s(\cdot )\) and reversed hazard rate \(r(\cdot )\) is said to be of

  1. (i)

    Decreasing reversed hazard rate (denoted as DRHR) if r(x) is decreasing;

  2. (ii)

    Increasing/decreasing hazard rate (denoted as IHR/DHR) if s(x) is increasing/decreasing;

  3. (iii)

    Decreasing proportional reversed hazard rate (denoted as DPRHR) if xr(x) is decreasing;

  4. (iv)

    Increasing proportional hazard rate (denoted as IPHR) if xs(x) is increasing.

For more on the above aging properties, one may refer to Barlow and Proschan (1975), Marshall and Olkin (2007), Righter et al. (2009) and Li and Fang (2015).
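The aging classes above are easy to inspect for a concrete family. The following sketch (our illustration, not from the paper) checks numerically that the Weibull distribution with \(\alpha \ge 1\) is IHR, and that it is IPHR for any \(\alpha >0\), since \(s(x)=\alpha \lambda ^\alpha x^{\alpha -1}\) and \(xs(x)=\alpha (\lambda x)^\alpha \); the parameter values are our own choices.

```python
import math

# Hazard rate of Weibull(alpha, lam): s(x) = f(x) / Fbar(x).
def hazard(x, alpha, lam):
    f = alpha * x ** (alpha - 1) * lam ** alpha * math.exp(-(lam * x) ** alpha)
    sf = math.exp(-(lam * x) ** alpha)
    return f / sf

alpha, lam = 2.0, 1.5
grid = [0.1 * k for k in range(1, 50)]
s_vals = [hazard(x, alpha, lam) for x in grid]
xs_vals = [x * hazard(x, alpha, lam) for x in grid]

ihr_ok = all(a <= b for a, b in zip(s_vals, s_vals[1:]))    # s(x) increasing: IHR
iphr_ok = all(a <= b for a, b in zip(xs_vals, xs_vals[1:]))  # x*s(x) increasing: IPHR
print(ihr_ok, iphr_ok)  # True True
```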

For two random variables X and Y with distribution functions F and G, denote by \(F^{-1}\) and \(G^{-1}\) their respective right continuous inverses, and by \(\bar{F}\) and \(\bar{G}\) their respective survival functions. Then, X is said to be smaller than Y in the

  1. (i)

    Usual stochastic order (denoted as \(X\le _{\mathrm{st}}Y)\) if \(\bar{F}(t)\le \bar{G}(t)\) for all t;

  2. (ii)

    Dispersive order (denoted as \(X\le _{\mathrm{disp}}Y)\) if \(F^{-1}(\beta )-F^{-1}(\alpha )\le G^{-1}(\beta )-G^{-1}(\alpha )\) for all \(0<\alpha \le \beta <1\);

  3. (iii)

    Star order (denoted as \(X\le _{*}Y)\) if \(G^{-1}\big (F(t)\big )/t\) increases in \(t\ge 0\).

For more on these stochastic orders one may refer to Müller and Stoyan (2002), Shaked and Shanthikumar (2007) and Li and Li (2013).
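For instance, the dispersive order in (ii) can be checked directly from quantile functions. The following sketch (our example, not from the paper) uses the exponential family, where \(F^{-1}(p)=-\ln (1-p)/\lambda \), so a larger rate contracts every quantile spacing and \(\mathcal {E}(\lambda )\le _{\mathrm{disp}}\mathcal {E}(\mu )\) whenever \(\lambda \ge \mu \).

```python
import math

# Quantile function of Exp(lam).
def exp_quantile(p, lam):
    return -math.log(1.0 - p) / lam

lam, mu = 3.0, 1.0  # lam >= mu, so Exp(lam) <=_disp Exp(mu)
pairs = [(0.1, 0.2), (0.25, 0.9), (0.5, 0.99)]  # arbitrary 0 < a <= b < 1
ok = all(
    exp_quantile(b, lam) - exp_quantile(a, lam)
    <= exp_quantile(b, mu) - exp_quantile(a, mu)
    for a, b in pairs
)
print(ok)  # True
```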

For two real vectors \(\varvec{x}=(x_1,\ldots ,x_n)\) and \(\varvec{y}=(y_1,\ldots ,y_n)\) in \({\mathbb {R}}^n\), denote \(x_{(1)}\le \cdots \le x_{(n)}\) the increasing arrangement of \(x_1,\ldots ,x_n\). Then, \(\varvec{x}\) is said to be

  1. (i)

    Majorized by \(\varvec{y}\) (denoted as \(\varvec{x}\mathop {\preceq }\limits ^{\mathrm{m}} \varvec{y}\)) if \(\sum _{i=1}^n x_i=\sum _{i=1}^n y_i\) and \(\sum _{i=1}^j x_{(i)}\ge \sum _{i=1}^j y_{(i)}\) for \(j\in {\mathcal {I}}_{n-1}\);

  2. (ii)

    Weakly submajorized by \(\varvec{y}\) (denoted as \(\varvec{x}\preceq _{\mathrm{w}}\varvec{y}\)) if \(\sum _{i=j}^n x_{(i)}\le \sum _{i=j}^n y_{(i)}\) for \(j\in {\mathcal {I}}_n\);

  3. (iii)

    Weakly supermajorized by \(\varvec{y}\) (denoted as \(\varvec{x}\preceq ^{\mathrm{w}} \varvec{y}\)) if \(\sum _{i=1}^j x_{(i)}\ge \sum _{i=1}^j y_{(i)}\) for \(j\in {\mathcal {I}}_n\).

Also a vector \(\varvec{y}\in {\mathbb {R}}_+^n\) is said to be p-larger than \(\varvec{x}\in {\mathbb {R}}_+^n\) (denoted as \(\varvec{x}\overset{\text {p}}{\preceq } \varvec{y}\)) if \(\prod _{i=1}^j x_{(i)}\ge \prod _{i=1}^j y_{(i)}\) for \(j\in {\mathcal {I}}_n\). It is well-known that, for \(\varvec{x},\varvec{y}\in {\mathbb {R}}_{+}^n\),

$$\begin{aligned} \varvec{x}\overset{\text {p}}{\preceq }\varvec{y}\Longleftarrow \varvec{x}\preceq ^{\mathrm{w}}\varvec{y}\Longleftarrow \varvec{x}\mathop {\preceq }\limits ^{\mathrm{m}}\varvec{y} \Longrightarrow \varvec{x}\preceq _{\mathrm{w}}\varvec{y}. \end{aligned}$$
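The four vector orders transcribe directly into code; the sketch below (our illustrative helper functions, not from the paper) verifies the implication chain above on one majorized pair.

```python
import math

def inc(v):
    """Increasing arrangement x_(1) <= ... <= x_(n)."""
    return sorted(v)

def majorized(x, y):  # x majorized by y
    xs, ys = inc(x), inc(y)
    if not math.isclose(sum(xs), sum(ys)):
        return False
    return all(sum(xs[:j]) >= sum(ys[:j]) for j in range(1, len(xs)))

def weakly_submajorized(x, y):  # x weakly submajorized by y
    xs, ys = inc(x), inc(y)
    return all(sum(xs[j:]) <= sum(ys[j:]) for j in range(len(xs)))

def weakly_supermajorized(x, y):  # x weakly supermajorized by y
    xs, ys = inc(x), inc(y)
    return all(sum(xs[:j]) >= sum(ys[:j]) for j in range(1, len(xs) + 1))

def p_larger(x, y):  # y is p-larger than x (positive vectors)
    xs, ys = inc(x), inc(y)
    return all(math.prod(xs[:j]) >= math.prod(ys[:j]) for j in range(1, len(xs) + 1))

x, y = (2.0, 2.0, 2.0), (1.0, 2.0, 3.0)
print(majorized(x, y))              # True
print(weakly_submajorized(x, y))    # True, implied by majorization
print(weakly_supermajorized(x, y))  # True, implied by majorization
print(p_larger(x, y))               # True, implied by weak supermajorization
```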

A real function \(\hbar \) defined on \({\mathcal {A}}\subseteq {\mathbb {R}}^n\) is said to be Schur-convex (Schur-concave) on \({\mathcal {A}}\) if

$$\begin{aligned} \varvec{x}\mathop {\preceq }\limits ^{\mathrm{m}} \varvec{y}\hbox { on }{\mathcal {A}}\Longrightarrow \hbar (\varvec{x})\le (\ge )\,\hbar (\varvec{y}). \end{aligned}$$

Clearly, \(\hbar \) is Schur-concave on \({\mathcal {A}}\) if and only if \(-\hbar \) is Schur-convex. For more details on the above partial orders of real vectors, Schur-convexity and Schur-concavity we refer readers to Bon and Pǎltǎnea (1999) and Marshall et al. (2011).

The following three lemmas concerning majorization and Schur-convex or Schur-concave functions are useful in developing our main results in the sequel.

Lemma 2.1

(Marshall et al. (2011), Theorem 3.A.4) For an open interval \(I\subseteq {\mathbb {R}}\), a continuously differentiable \(\hbar :I^n\rightarrow {\mathbb {R}}\) is Schur-convex if and only if \(\hbar \) is symmetric on \(I^n\) and

$$\begin{aligned} (x_i-x_j)\Big (\frac{\partial \hbar (\varvec{x})}{\partial x_i}-\frac{\partial \hbar (\varvec{x})}{\partial x_j}\Big )\ge 0, \quad \text{ for } \text{ all } 1\le i\ne j\le n \text{ and } \varvec{x}\in I^n. \end{aligned}$$
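As a concrete illustration of this criterion (our example, not from Marshall et al.), take the symmetric function \(\hbar (\varvec{x})=\sum _i x_i^2\): its partial derivatives are \(2x_i\), so \((x_i-x_j)(2x_i-2x_j)=2(x_i-x_j)^2\ge 0\) and \(\hbar \) is Schur-convex.

```python
# h(x) = sum of squares, a symmetric, continuously differentiable function.
def h(v):
    return sum(t * t for t in v)

# Lemma 2.1's condition: (x_i - x_j) * (dh/dx_i - dh/dx_j) >= 0 for all i != j.
def criterion_holds(v):
    n = len(v)
    return all(
        (v[i] - v[j]) * (2 * v[i] - 2 * v[j]) >= 0
        for i in range(n) for j in range(n) if i != j
    )

print(criterion_holds((1.0, 5.0, 2.0)))  # True

# Consistency with majorization: (2,2,2) is majorized by (1,2,3),
# and Schur-convexity gives h(2,2,2) <= h(1,2,3), i.e. 12 <= 14.
print(h((2.0, 2.0, 2.0)) <= h((1.0, 2.0, 3.0)))  # True
```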

Lemma 2.2

(Marshall et al. (2011), Theorem 3.A.8) For a real function \(\hbar \) on \({\mathcal {A}}\subseteq {\mathbb {R}}^n\),

  1. (i)

    \(\varvec{x}\preceq _{\mathrm{w}}\varvec{y}\) implies \(\hbar (\varvec{x})\le \hbar (\varvec{y})\) if and only if \(\hbar \) is increasing and Schur-convex on \({\mathcal {A}}\), and

  2. (ii)

    \(\varvec{x}\preceq ^{\mathrm{w}}\varvec{y}\) implies \(\hbar (\varvec{x})\le \hbar (\varvec{y})\) if and only if \(\hbar \) is decreasing and Schur-convex on \({\mathcal {A}}\).

Lemma 2.3

(Khaledi and Kochar (2002), Lemma 2.1) For a function \(\hbar :{\mathbb {R}}_+^n\mapsto {\mathbb {R}}\),

$$\begin{aligned} \varvec{x}\mathop {\preceq }\limits ^{\mathrm{p}}\varvec{y}\Longrightarrow \hbar (\varvec{x})\le \hbar (\varvec{y}) \end{aligned}$$

if and only if \(\hbar (e^{a_1},\ldots ,e^{a_n})\) is decreasing in \(a_i=\ln x_i\), \(i\in {\mathcal {I}}_n\), and Schur-convex in \((a_1,\ldots ,a_n)\).

For a random vector \({\varvec{X}}=(X_1,\dots ,X_n)\) with joint distribution function F, joint survival function \(\bar{F}\), univariate marginal distribution functions \(F_1,\ldots ,F_n\) and univariate survival functions \(\bar{F}_1,\ldots ,\bar{F}_n\), if there exist \(C:[0,1]^n\mapsto [0,1]\) and \(\widehat{C}:[0,1]^n\mapsto [0,1]\) such that, for all \(x_i\), \(i\in {\mathcal {I}}_n\),

$$\begin{aligned} F(x_1,\ldots ,x_n)= & {} C\big (F_1(x_1),\ldots ,F_n(x_n)\big ),\\ \bar{F}(x_1,\ldots ,x_n)= & {} \widehat{C}\big (\bar{F}_1(x_1),\ldots ,\bar{F}_n(x_n)\big ), \end{aligned}$$

then C and \(\widehat{C}\) are called the copula and the survival copula of \({\varvec{X}}\), respectively.

A real function \(\psi \) is n-monotone on \((a,b)\subseteq (-\infty ,+\infty )\) if \((-1)^{n-2}\psi ^{(n-2)}\) is decreasing and convex on \((a,b)\) and \((-1)^k\psi ^{(k)}(x)\ge 0\) for all \(x\in (a,b)\), \(k=0,1,\ldots ,n-2\). For an n-monotone function \(\psi :[0,+\infty )\rightarrow [0,1]\) with \(\psi (0)=1\) and \(\lim \limits _{x\rightarrow +\infty }\psi (x) = 0\),

$$\begin{aligned} C_\psi (u_1,\ldots ,u_n)=\psi \big (\psi ^{-1}(u_1)+\cdots +\psi ^{-1}(u_n)\big ), \quad \text{ for } \text{ all } u_i\in [0,1], i\in {\mathcal {I}}_n, \end{aligned}$$

is called an Archimedean copula with generator \(\psi \). For convenience, we denote by \(\phi =\psi ^{-1}\) the right continuous inverse of \(\psi \), namely \(\phi (u)=\sup \{x\in {\mathbb {R}}:\psi (x)>u\}\). Archimedean copulas cover a wide range of dependence structures, including the independence copula, whose generator is \(\psi (t)=e^{-t}\). For more on Archimedean copulas, readers may refer to Nelsen (2006) and McNeil and Nešlehová (2009).
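The display above is a one-line construction once a generator and its inverse are given. The following sketch (our illustration; the Clayton family and the parameter \(\theta \) are our own choices) builds \(C_\psi \) for the independence generator and for the Clayton generator \(\psi (t)=(1+t)^{-1/\theta }\), which is completely monotone for \(\theta >0\) and hence a valid generator in any dimension.

```python
import math

# Build the Archimedean copula C(u_1,...,u_n) = psi(phi(u_1) + ... + phi(u_n)).
def archimedean(psi, phi):
    def C(*u):
        return psi(sum(phi(v) for v in u))
    return C

# Independence copula: generator exp(-t), inverse -ln(u); C is the product.
indep = archimedean(lambda t: math.exp(-t), lambda u: -math.log(u))
indep_ok = abs(indep(0.3, 0.5, 0.8) - 0.3 * 0.5 * 0.8) < 1e-12
print(indep_ok)  # True

# Clayton family: psi(t) = (1 + t)**(-1/theta), phi(u) = u**(-theta) - 1.
theta = 2.0
clayton = archimedean(lambda t: (1.0 + t) ** (-1.0 / theta),
                      lambda u: u ** (-theta) - 1.0)
val = clayton(0.3, 0.5)
print(0.0 < val < min(0.3, 0.5))  # True: a copula never exceeds its margins
```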

At the end of this section, we also recall two useful lemmas, which play a role in the proofs of the theorems in Sects. 4 and 5. One reviewer pointed out that the two-dimensional case of Lemma 2.4 below had already been proved in Theorem 4.4.2 of Nelsen (2006).

Lemma 2.4

(Li and Fang (2015), Lemma A.1) For two n-dimensional Archimedean copulas \(C_{\psi _1}(\varvec{u})\) and \(C_{\psi _2}(\varvec{u})\), if \(\phi _2\circ \psi _1\) is super-additive, then \(C_{\psi _1}(\varvec{u})\le C_{\psi _2}(\varvec{u})\) for all \(\varvec{u}\in [0,1]^n\).

Lemma 2.5

(Čebyšev’s inequality) For two real vectors \((a_1,\ldots ,a_n)\) and \((b_1,\ldots ,b_n)\),

$$\begin{aligned} \bigg (\frac{1}{n}\sum _{k=1}^n a_k\bigg )\bigg (\frac{1}{n}\sum _{k=1}^n b_k\bigg )\le \frac{1}{n}\sum \limits _{k=1}^n a_kb_k \end{aligned}$$

whenever \(a_1\le \cdots \le a_n\) and \(b_1\le \cdots \le b_n\), or \(a_1\ge \cdots \ge a_n\) and \(b_1\ge \cdots \ge b_n\).
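A quick numerical check of Lemma 2.5 on a similarly ordered pair (the vectors below are our own example):

```python
# Cebysev's sum inequality: the product of the means is at most the mean
# of the products when both vectors are sorted the same way.
a = [1.0, 2.0, 4.0, 7.0]
b = [0.5, 0.6, 1.0, 3.0]
n = len(a)

lhs = (sum(a) / n) * (sum(b) / n)       # (mean of a) * (mean of b)
rhs = sum(x * y for x, y in zip(a, b)) / n  # mean of the products
print(lhs <= rhs)  # True
```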

3 Several useful lemmas

For ease of reference, by \({\varvec{X}}\sim \text {S}(F,{\varvec{\lambda }},\psi )\) and \({\varvec{X}}\sim \text {S}(\bar{F},{\varvec{\lambda }},\psi )\) we denote random variables \(X_1,\ldots ,X_n\) coupled respectively by the Archimedean copula and Archimedean survival copula with generator \(\psi \) and following the scale model with baseline distribution function F and scale parameter vector \({\varvec{\lambda }}\). Let \(f(\cdot )\), \(s(\cdot )\) and \(r(\cdot )\) be the density function, the hazard rate and the reversed hazard rate of the baseline distribution F, respectively.

Denote, for \(x\ge 0\), \({\varvec{\lambda }}\in {\mathbb {R}}_+^n\),

$$\begin{aligned} J_1({\varvec{\lambda }};x,\psi )= & {} \psi \bigg (\sum _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg ),\\ J_2({\varvec{\lambda }};x,\psi )= & {} 1-\psi \bigg (\sum _{k=1}^n \phi \big (F(\lambda _k x)\big )\bigg ),\\ J_3({\varvec{\lambda }};x,\psi )= & {} \sum _{l=1}^n\psi \bigg (\sum _{k\ne l} \phi \big (\bar{F}(\lambda _k x)\big )\bigg )-(n-1)\psi \bigg (\sum _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg ). \end{aligned}$$

Evidently, \(J_i({\varvec{\lambda }};x,\psi )\) is symmetric with respect to \({\varvec{\lambda }}\) and \((\ln \lambda _1,\ldots ,\ln \lambda _n)\), \(i=1,2,3\).
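To see what \(J_1\) represents, note that under the independence generator \(\psi (t)=e^{-t}\), \(\phi (v)=-\ln v\), and with the exponential baseline \(\bar{F}(u)=e^{-u}\), the definition collapses to \(J_1({\varvec{\lambda }};x,\psi )=e^{-x\sum _k\lambda _k}\), the survival function of the sample minimum. The sketch below (our sanity check, not from the paper) verifies this reduction numerically.

```python
import math

# J_1 as defined above, for a generic generator pair (psi, phi) and baseline Fbar.
def J1(lams, x, psi, phi, Fbar):
    return psi(sum(phi(Fbar(lam * x)) for lam in lams))

psi = lambda t: math.exp(-t)   # independence generator
phi = lambda v: -math.log(v)   # its inverse
Fbar = lambda u: math.exp(-u)  # exponential baseline survival function

lams = (1.0, 2.5, 4.0)
x = 0.7
match = abs(J1(lams, x, psi, phi, Fbar) - math.exp(-x * sum(lams))) < 1e-12
print(match)  # True
```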

In this section, we present several lemmas that play an important role in developing the theorems in the coming Sects. 4 and 5.

Lemma 3.1

If s(x) is increasing and log-convex, then \(x[\ln s(x)]'\) is increasing.

Proof

Since s(x) and hence \(\ln s(x)\) is increasing, it holds that \([\ln s(x)]'\ge 0\). In view of the log-convexity of s(x), we conclude that \([\ln s(x)]'\) is increasing. So, \(x[\ln s(x)]'\) is increasing. \(\square \)

Lemma 3.2

\(J_1({\varvec{\lambda }};x,\psi )\) is decreasing in \(\ln \lambda _i\) for \(i\in {\mathcal {I}}_n\), and the log-convexity of \(\psi \) along with the IPHR property of F implies that \(J_1({\varvec{\lambda }};x,\psi )\) is Schur-concave with respect to \((\ln \lambda _1,\ldots ,\ln \lambda _n)\).

Proof

For \(z>0\), let

$$\begin{aligned} \eta _1(z,x)= zxs(zx)\frac{\psi \big (\phi \big (\bar{F}(zx)\big )\big )}{\psi '\big (\phi \big (\bar{F}(zx)\big )\big )}. \end{aligned}$$

Since \(\psi \) is n-monotone, we have \(\psi '(x)\le 0\) for \(x\ge 0\) and hence the partial derivative of \(J_1({\varvec{\lambda }};x,\psi )\) with respect to \(\ln \lambda _i\) is

$$\begin{aligned} \frac{\partial J_1({\varvec{\lambda }};x,\psi )}{\partial \ln \lambda _i}=-\psi '\bigg (\sum _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg )\eta _1(\lambda _i,x) \le 0,\quad \text{ for } i\in {\mathcal {I}}_n\text{. } \end{aligned}$$

That is, \(J_1({\varvec{\lambda }};x,\psi )\) is decreasing in \(\ln \lambda _i\) for \(i\in {\mathcal {I}}_n\). Furthermore, for \(i\ne j\),

$$\begin{aligned}&\frac{\partial J_1({\varvec{\lambda }};x,\psi )}{\partial \ln \lambda _i}- \frac{\partial J_1({\varvec{\lambda }};x,\psi )}{\partial \ln \lambda _j}\\&\quad =-\psi '\bigg (\sum _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg ) \big [\eta _1(\lambda _i,x)-\eta _1(\lambda _j,x)\big ]. \end{aligned}$$

The log-convexity of \(\psi \) implies that \(\psi /\psi '\) is decreasing. Since \(\phi (\bar{F}(zx))\) is increasing in \(z>0\), then \(\frac{\psi (\phi (\bar{F}(zx)))}{\psi '(\phi (\bar{F}(zx)))}\) is decreasing in \(z>0\). From the increasing property of xs(x) it follows that zxs(zx) is increasing in \(z>0\), which in turn implies that \(\eta _1(z,x)\) is decreasing in \(z>0\). Consequently, it holds that, for \(i\ne j\),

$$\begin{aligned} (\ln \lambda _i-\ln \lambda _j)\left( \frac{\partial J_1({\varvec{\lambda }};x,\psi )}{\partial \ln \lambda _i}- \frac{\partial J_1({\varvec{\lambda }};x,\psi )}{\partial \ln \lambda _j}\right) \le 0 \end{aligned}$$

whenever \(\psi \) is log-convex and xs(x) is increasing. Now, the desired result follows immediately from Lemma 2.1. \(\square \)
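Lemma 3.2 can be illustrated numerically (our check, not a substitute for the proof): take the log-convex Clayton generator \(\psi (t)=(1+t)^{-1/\theta }\) and the exponential baseline, which is IPHR since \(xs(x)=x\) is increasing; then \((\ln \lambda _1,\ldots ,\ln \lambda _n)\) majorized by \((\ln \mu _1,\ldots ,\ln \mu _n)\) should give \(J_1({\varvec{\lambda }};x,\psi )\ge J_1({\varvec{\mu }};x,\psi )\) for every x. The parameter \(\theta \) and the vectors below are our own choices.

```python
import math

theta = 2.0
psi = lambda t: (1.0 + t) ** (-1.0 / theta)  # log-convex Clayton generator
phi = lambda v: v ** (-theta) - 1.0          # its inverse
Fbar = lambda u: math.exp(-u)                # exponential baseline (IPHR)

def J1(lams, x):
    return psi(sum(phi(Fbar(lam * x)) for lam in lams))

lam_vec = (1.0, 1.0)                      # ln(lam) = (0, 0), majorized by ...
mu_vec = (math.exp(-1.0), math.exp(1.0))  # ... ln(mu) = (-1, 1)

grid = [0.05 * k for k in range(1, 40)]
schur_ok = all(J1(lam_vec, x) >= J1(mu_vec, x) for x in grid)
print(schur_ok)  # True
```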

Lemma 3.3

\(J_1({\varvec{\lambda }};x,\psi )\) is decreasing in \(\lambda _i\) for \(i\in {\mathcal {I}}_n\).

  1. (i)

    The log-convexity of \(\psi \) along with the IHR property of F implies that \(J_1({\varvec{\lambda }};x,\psi )\) is Schur-concave with respect to \({\varvec{\lambda }}\);

  2. (ii)

    The log-concavity of \(\psi \) along with the DHR property of F implies that \(J_1({\varvec{\lambda }};x,\psi )\) is Schur-convex with respect to \({\varvec{\lambda }}\).

Proof

Since the logarithm is strictly increasing, it follows from Lemma 3.2 that \(J_1({\varvec{\lambda }};x,\psi )\) is decreasing in \(\lambda _i\) for \(i\in {\mathcal {I}}_n\).

For \(z>0\), let

$$\begin{aligned} \eta _2(z,x)=s(zx)\frac{\psi \big (\phi \big (\bar{F}(zx)\big )\big )}{\psi '\big (\phi \big (\bar{F}(zx)\big )\big )}. \end{aligned}$$

Then, for \(i\ne j\),

$$\begin{aligned}&\frac{\partial J_1({\varvec{\lambda }};x,\psi )}{\partial \lambda _i}- \frac{\partial J_1({\varvec{\lambda }};x,\psi )}{\partial \lambda _j}\\&\quad =-x\psi '\bigg (\sum _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg ) \big [\eta _2(\lambda _i,x)-\eta _2(\lambda _j,x)\big ]. \end{aligned}$$

Since \(\psi \) is log-convex (log-concave) and \(\phi (\bar{F}(zx))\) is increasing in \(z>0\), then \(\psi /\psi '\) is decreasing (increasing), and thus \(\frac{\psi (\phi (\bar{F}(zx)))}{\psi '(\phi (\bar{F}(zx)))}\) is decreasing (increasing) in \(z>0\). Moreover, the increasing (decreasing) property of s(x) implies that s(zx) is increasing (decreasing) in \(z>0\), and thus \(\eta _2(z,x)\) is decreasing (increasing) in \(z>0\). Consequently, it holds that, for \(i\ne j\),

$$\begin{aligned} (\lambda _i-\lambda _j)\left( \frac{\partial J_1({\varvec{\lambda }};x,\psi )}{\partial \lambda _i}- \frac{\partial J_1({\varvec{\lambda }};x,\psi )}{\partial \lambda _j}\right) \le 0 \end{aligned}$$

whenever \(\psi \) is log-convex and s(x) is increasing, and

$$\begin{aligned} (\lambda _i-\lambda _j)\left( \frac{\partial J_1({\varvec{\lambda }};x,\psi )}{\partial \lambda _i}- \frac{\partial J_1({\varvec{\lambda }};x,\psi )}{\partial \lambda _j}\right) \ge 0 \end{aligned}$$

whenever \(\psi \) is log-concave and s(x) is decreasing. Hence, we obtain the desired results due to Lemma 2.1. \(\square \)

Lemma 3.4

\(J_2({\varvec{\lambda }};x,\psi )\) is decreasing in \(\ln \lambda _i\) for \(i\in {\mathcal {I}}_n\), and the log-convexity of \(\psi \) along with DPRHR property of F implies that \(J_2({\varvec{\lambda }};x,\psi )\) is Schur-convex with respect to \((\ln \lambda _1,\ldots ,\ln \lambda _n)\).

Proof

For \(z>0\), let

$$\begin{aligned} \eta _3(z,x)=zxr(zx)\frac{\psi \big (\phi \big (F(zx)\big )\big )}{\psi '\big (\phi \big (F(zx)\big )\big )}. \end{aligned}$$

Since \(\psi \) is n-monotone, we have \(\psi '(x)\le 0\) for \(x\ge 0\) and hence

$$\begin{aligned} \frac{\partial J_2({\varvec{\lambda }};x,\psi )}{\partial \ln \lambda _i}=-\psi '\bigg (\sum _{k=1}^n \phi \big (F(\lambda _k x)\big )\bigg )\eta _3(\lambda _i,x) \le 0,\quad \text{ for } i\in {\mathcal {I}}_n. \end{aligned}$$

That is, \(J_2({\varvec{\lambda }};x,\psi )\) is decreasing in \(\ln \lambda _i\) for \(i\in {\mathcal {I}}_n\). Furthermore, for \(i\ne j\),

$$\begin{aligned}&\frac{\partial J_2({\varvec{\lambda }};x,\psi )}{\partial \ln \lambda _i}- \frac{\partial J_2({\varvec{\lambda }};x,\psi )}{\partial \ln \lambda _j}\\&\quad =-\psi '\bigg (\sum _{k=1}^n \phi \big (F(\lambda _k x)\big )\bigg ) \big [\eta _3(\lambda _i,x)-\eta _3(\lambda _j,x)\big ]. \end{aligned}$$

Note that the log-convexity of \(\psi \) implies the decreasing property of \(\psi /\psi '\). Since \(\phi (F(zx))\) is decreasing in \(z>0\), then \(\frac{\psi (\phi (F(zx)))}{\psi '(\phi (F(zx)))}\) is increasing in \(z>0\). Also the decreasing property of xr(x) implies that zxr(zx) is decreasing in \(z>0\), and thus \(\eta _3(z,x)\) is increasing in \(z>0\). Consequently, it holds that, for \(i\ne j\),

$$\begin{aligned} (\ln \lambda _i-\ln \lambda _j)\left( \frac{\partial J_2({\varvec{\lambda }};x,\psi )}{\partial \ln \lambda _i}- \frac{\partial J_2({\varvec{\lambda }};x,\psi )}{\partial \ln \lambda _j}\right) \ge 0 \end{aligned}$$

whenever \(\psi \) is log-convex and xr(x) is decreasing. Then the desired result follows immediately from Lemma 2.1. \(\square \)

Lemma 3.5

\(J_2({\varvec{\lambda }};x,\psi )\) is decreasing in \(\lambda _i\) for \(i\in {\mathcal {I}}_n\), and the log-convexity of \(\psi \) along with the DRHR property of F implies that \(J_2({\varvec{\lambda }};x,\psi )\) is Schur-convex with respect to \({\varvec{\lambda }}\).

Proof

Since the logarithm is strictly increasing, it follows from Lemma 3.4 that \(J_2({\varvec{\lambda }};x,\psi )\) is decreasing in \(\lambda _i\) for \(i\in {\mathcal {I}}_n\).

For \(z>0\), let

$$\begin{aligned} \eta _4(z,x)=r(zx)\frac{\psi \big (\phi \big (F(zx)\big )\big )}{\psi '\big (\phi \big (F(zx)\big )\big )}. \end{aligned}$$

Then, for \(i\ne j\),

$$\begin{aligned}&\frac{\partial J_2({\varvec{\lambda }};x,\psi )}{\partial \lambda _i}- \frac{\partial J_2({\varvec{\lambda }};x,\psi )}{\partial \lambda _j}\\&\quad =-x\psi '\bigg (\sum _{k=1}^n \phi \big (F(\lambda _k x)\big )\bigg ) \big [\eta _4(\lambda _i,x)-\eta _4(\lambda _j,x)\big ]. \end{aligned}$$

Since \(\psi \) is log-convex and \(\phi (F(zx))\) is decreasing in \(z>0\), by the proof of Lemma 3.4, \(\frac{\psi (\phi (F(zx)))}{\psi '(\phi (F(zx)))}\) is increasing in \(z>0\). Note that the decreasing property of r(x) implies that r(zx) is decreasing in \(z>0\), and thus \(\eta _4(z,x)\) is increasing in \(z>0\). Consequently, it holds that, for \(i\ne j\),

$$\begin{aligned} (\lambda _i-\lambda _j)\left( \frac{\partial J_2({\varvec{\lambda }};x,\psi )}{\partial \lambda _i}- \frac{\partial J_2({\varvec{\lambda }};x,\psi )}{\partial \lambda _j}\right) \ge 0 \end{aligned}$$

whenever \(\psi \) is log-convex and r(x) is decreasing, which completes the proof by applying Lemma 2.1 directly. \(\square \)

Lemma 3.6

\(J_3({\varvec{\lambda }};x,\psi )\) is decreasing in \(\lambda _i\) for \(i\in {\mathcal {I}}_n\), and the log-concavity of \(\psi \) along with the DHR property of F implies that \(J_3({\varvec{\lambda }};x,\psi )\) is Schur-convex with respect to \({\varvec{\lambda }}\).

Proof

For \(1\le i\ne j\le n\), let

$$\begin{aligned} \eta _5(z,x)= & {} (n-1)\psi '\bigg (\sum _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg )-\sum _{l\not \in \{i,j\}}\psi '\bigg (\sum _{k\ne l} \phi \big (\bar{F}(\lambda _k x)\big )\bigg )\\&-\psi '\bigg (\phi \big (\bar{F}(z x)\big )+\sum _{k\not \in \{i,j\}} \phi \big (\bar{F}(\lambda _k x)\big )\bigg ). \end{aligned}$$

Since \(\psi \) is n-monotone, it holds that \(\psi '(x)\le 0\) for \(x\ge 0\) and \(\psi '\) is increasing. Since \(\phi (\bar{F}(zx))\) is increasing in \(z>0\), the function \(\eta _5(z,x)\) is decreasing in \(z>0\). In view of \(\phi (x)\ge 0\) for \(x\in [0,1]\), we have

$$\begin{aligned} \psi '\bigg (\sum _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg )\ge \psi '\bigg (\sum _{k\ne l} \phi \big (\bar{F}(\lambda _k x)\big )\bigg ),\quad \text{ for } l\in {\mathcal {I}}_n, \end{aligned}$$

and then

$$\begin{aligned} (n-1)\psi '\bigg (\sum _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg )\ge \sum _{l\ne i}\psi '\bigg (\sum _{k\ne l} \phi \big (\bar{F}(\lambda _k x)\big )\bigg ),\quad \text{ for } i\in {\mathcal {I}}_n. \end{aligned}$$

As a result, it holds that

$$\begin{aligned} \eta _5(\lambda _i,x)= & {} (n-1)\psi '\bigg (\sum _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg )-\sum _{l\not \in \{i,j\}}\psi '\bigg (\sum _{k\ne l} \phi \big (\bar{F}(\lambda _k x)\big )\bigg )\\&-\psi '\bigg (\sum _{k\ne j} \phi \big (\bar{F}(\lambda _k x)\big )\bigg )\\= & {} (n-1)\psi '\bigg (\sum _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg )-\sum _{l\ne i}\psi '\bigg (\sum _{k\ne l} \phi \big (\bar{F}(\lambda _k x)\big )\bigg )\\\ge & {} 0,\quad \text{ for } \text{ any } x\ge 0\hbox { and }i\in {\mathcal {I}}_n. \end{aligned}$$

Hence,

$$\begin{aligned} \frac{\partial J_3({\varvec{\lambda }};x,\psi )}{\partial \lambda _i}=x\eta _2(\lambda _i,x)\eta _5(\lambda _i,x)\le 0,\quad \text{ for } i\in {\mathcal {I}}_n. \end{aligned}$$

That is, \(J_3({\varvec{\lambda }};x,\psi )\) is decreasing in \(\lambda _i\) for \(i\in {\mathcal {I}}_n\). Moreover, for \(1\le i\ne j\le n\), it holds that

$$\begin{aligned}&\frac{\partial J_3({\varvec{\lambda }};x,\psi )}{\partial \lambda _i}- \frac{\partial J_3({\varvec{\lambda }};x,\psi )}{\partial \lambda _j}\\&\quad =x\big [\eta _2(\lambda _i,x)\eta _5(\lambda _i,x)- \eta _2(\lambda _j,x)\eta _5(\lambda _j,x)\big ]. \end{aligned}$$

Note that \(\psi \) is log-concave, s(x) is decreasing and \(\phi (\bar{F}(zx))\) is increasing in \(z>0\). By the proof of Lemma 3.3, \(\eta _2(z,x)\) is nonpositive and increasing in \(z>0\), while \(\eta _5(z,x)\) is nonnegative and decreasing in \(z>0\); hence the product \(\eta _2(z,x)\eta _5(z,x)\) is increasing in \(z>0\). Consequently, we have, for \(1\le i\ne j\le n\),

$$\begin{aligned} (\lambda _i-\lambda _j)\left( \frac{\partial J_3({\varvec{\lambda }};x,\psi )}{\partial \lambda _i}- \frac{\partial J_3({\varvec{\lambda }};x,\psi )}{\partial \lambda _j}\right) \ge 0 \end{aligned}$$

whenever \(\psi \) is log-concave and s(x) is decreasing, and by Lemma 2.1 this completes the proof. \(\square \)

4 Usual stochastic order

This section studies the usual stochastic order on sample extremes and the second smallest order statistic from scale samples coupled by Archimedean copulas or survival copulas. Recall that \({\varvec{X}}\sim \mathrm {S}(\bar{F},{\varvec{\lambda }},\psi )\) denotes the sample \((X_1,\ldots ,X_n)\) with scale parameter vector \(\varvec{\lambda }\) and the Archimedean survival copula generated by \(\psi \), and \({\varvec{X}}\sim \mathrm {S}(F,{\varvec{\lambda }},\psi )\) denotes \((X_1,\ldots ,X_n)\) with the scale parameter vector \({\varvec{\lambda }}\) and the Archimedean copula generated by \(\psi \).

4.1 On sample minimum and the second smallest order statistic

For scale samples with Archimedean survival copulas, we present here the usual stochastic order on the sample minimum and the second smallest order statistic.

Theorem 4.1

Suppose, for \({\varvec{X}}\sim \text {S}(\bar{F},{\varvec{\lambda }},\psi _1)\) and \({\varvec{Y}}\sim \text {S}(\bar{F},{\varvec{\mu }},\psi _2)\), \(\psi _1\) or \(\psi _2\) is log-convex, and \(\phi _1\circ \psi _2\) is super-additive. Then, \(X_{1:n}\ge _{\mathrm{st}}Y_{1:n}\) if (i) \((\ln \lambda _1,\ldots ,\ln \lambda _n) \preceq _{\mathrm{w}}(\ln \mu _1,\ldots ,\ln \mu _n)\) and F is IPHR, or (ii) \({\varvec{\lambda }}\preceq _{\mathrm{w}}{\varvec{\mu }}\) and F is IHR.

Proof

The sample minimums \(X_{1:n}\) and \(Y_{1:n}\) have their respective survival functions, for \(x\ge 0\),

$$\begin{aligned} \mathrm {P}(X_{1:n}>x)= & {} \psi _1\bigg (\sum _{k=1}^n \phi _1\big (\bar{F}(\lambda _k x)\big )\bigg ) =J_1({\varvec{\lambda }};x,\psi _1), \end{aligned}$$
(4.1)
$$\begin{aligned} \mathrm {P}(Y_{1:n}>x)= & {} \psi _2\bigg (\sum _{k=1}^n \phi _2\big (\bar{F}(\mu _k x)\big )\bigg ) =J_1({\varvec{\mu }};x,\psi _2). \end{aligned}$$
(4.2)

Let us assume here that \(\psi _1\) is log-convex; for the case with log-convex \(\psi _2\), the proof can be completed in a similar manner.

(i) Since xs(x) is increasing, from Lemma 3.2 it follows that \(-J_1({\varvec{\lambda }};x,\psi _1)\) is Schur-convex with respect to \((\ln \lambda _1,\ldots ,\ln \lambda _n)\) and increasing in \(\ln \lambda _i\) for \(i\in {\mathcal {I}}_n\). According to Lemma 2.2(i), \((\ln \lambda _1,\ldots ,\ln \lambda _n)\preceq _{\mathrm{w}}(\ln \mu _1,\ldots ,\ln \mu _n)\) implies

$$\begin{aligned} -J_1({\varvec{\lambda }};x,\psi _1)\le -J_1({\varvec{\mu }};x,\psi _1). \end{aligned}$$
(4.3)

Since \(\phi _1\circ \psi _2\) is super-additive, by Lemma 2.4 we have

$$\begin{aligned} J_1({\varvec{\mu }};x,\psi _1)\ge J_1({\varvec{\mu }};x,\psi _2), \end{aligned}$$
(4.4)

and thus

$$\begin{aligned} J_1({\varvec{\lambda }};x,\psi _1)\ge J_1({\varvec{\mu }};x,\psi _1)\ge J_1({\varvec{\mu }};x,\psi _2). \end{aligned}$$
(4.5)

(ii) Since s(x) is increasing, according to Lemma 3.3(i), \(-J_1({\varvec{\lambda }};x,\psi _1)\) is Schur-convex with respect to \({\varvec{\lambda }}\) and increasing in \(\lambda _i\), \(i\in {\mathcal {I}}_n\). Also, since \(\phi _1\circ \psi _2\) is super-additive and \({\varvec{\lambda }} \preceq _{\mathrm{w}}{\varvec{\mu }}\), by Lemma 2.2(i) and Lemma 2.4, we reach (4.3), (4.4) again and hence (4.5).

As a consequence, due to (4.1) and (4.2) we conclude that \(\mathrm {P}(X_{1:n}>x)\ge \mathrm {P}(Y_{1:n}>x)\) for \(x\ge 0\). That is, \(X_{1:n}\ge _{\mathrm{st}}Y_{1:n}\). \(\square \)
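For a numerical sanity check of Theorem 4.1(ii), the explicit survival functions (4.1) and (4.2) can be evaluated directly. The sketch below is illustrative only: it takes the common log-convex generator \(\psi (x)=(1+x)^{-1}\) (so that \(\phi _1\circ \psi _2\) is the identity and trivially super-additive), the IHR Weibull baseline \(\bar{F}(x)=e^{-x^2}\), and the hypothetical vectors \({\varvec{\lambda }}=(1,3)\preceq _{\mathrm{w}}{\varvec{\mu }}=(0.5,3.5)\).

```python
import math

def psi(t):                           # log-convex generator psi(t) = (1 + t)^(-1)
    return 1.0 / (1.0 + t)

def phi(u):                           # its inverse: phi(u) = 1/u - 1
    return 1.0 / u - 1.0

def sf_weibull(t, alpha=2.0):         # IHR Weibull survival, F-bar(t) = exp(-t^alpha)
    return math.exp(-(t ** alpha))

def sf_min(lams, x):
    """P(X_{1:n} > x) = psi(sum_k phi(F-bar(lam_k x))), cf. (4.1)."""
    return psi(sum(phi(sf_weibull(lam * x)) for lam in lams))

lam = (1.0, 3.0)                      # weakly submajorized by mu: 3 <= 3.5, 3 + 1 <= 3.5 + 0.5
mu = (0.5, 3.5)
grid = [0.05 * k for k in range(1, 101)]
min_order_ok = all(sf_min(lam, x) >= sf_min(mu, x) - 1e-12 for x in grid)
```

On this grid the inequality \(\mathrm {P}(X_{1:2}>x)\ge \mathrm {P}(Y_{1:2}>x)\) holds throughout, in agreement with the conclusion \(X_{1:n}\ge _{\mathrm{st}}Y_{1:n}\).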

Here we present an example illustrating the conditions on the generators in Theorem 4.1.

Example 4.1

Suppose that \({\varvec{X}}\) and \({\varvec{Y}}\) have either of the following two dependence structures.

  1. (i)

    Gumbel survival copulas with respective generators

    $$\begin{aligned} \psi _1(x)=e^{-x^{1/\theta _1}},\quad \psi _2(x)=e^{-x^{1/\theta _2}},\quad \text{ for } \quad \theta _1\ge \theta _2\ge 1; \end{aligned}$$
  2. (ii)

    Archimedean survival copulas with respective generators

    $$\begin{aligned} \psi _1(x)=(x^{1/\theta _1}+1)^{-1},\quad \psi _2(x)=(x^{1/\theta _2}+1)^{-1}, \quad \text{ for } \quad \theta _1\ge \theta _2\ge 1. \end{aligned}$$

It is not difficult to verify that \(\psi _i\) is log-convex for \(i=1,2\). In view of \(\phi _1(\psi _2(0))=0\) and the convexity of \(\phi _1(\psi _2(x))=x^{\theta _1/\theta _2}\), we conclude that \(\phi _1(\psi _2(x))\) is super-additive by Proposition 21.A.11 in Marshall and Olkin (2007). \(\square \)
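Both parts of Example 4.1 reduce to the same composition \(\phi _1(\psi _2(x))=x^{\theta _1/\theta _2}\), so the super-additivity claim can also be spot-checked on a grid; the parameter values \(\theta _1=2\), \(\theta _2=1.2\) below are illustrative only.

```python
theta1, theta2 = 2.0, 1.2          # theta1 >= theta2 >= 1, as in Example 4.1

def comp(x):
    # phi_1(psi_2(x)) = x^(theta1/theta2), for both parts of Example 4.1
    return x ** (theta1 / theta2)

# super-additivity: comp(x + y) >= comp(x) + comp(y) for all x, y >= 0
pts = [0.1 * k for k in range(0, 51)]
superadd = all(comp(x + y) >= comp(x) + comp(y) - 1e-12 for x in pts for y in pts)
```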

According to (1.2), for \(X_i\sim F(\lambda _ix)\), \(i\in {\mathcal {I}}_n\), and \(Y_i\sim F(\mu _ix)\), \(i\in {\mathcal {I}}_n\), both mutually independent, it holds that

$$\begin{aligned} {\varvec{\lambda }}\mathop {\preceq }\limits ^{\mathrm{m}}{\varvec{\mu }}\Longrightarrow X_{1:n}\ge _{\mathrm{st}}Y_{1:n}\quad \text{ whenever } F\hbox { is }IHR. \end{aligned}$$

Also, both the Gamma distribution \({\mathcal {G}}(\alpha ,\lambda )\) and the Weibull distribution \(\mathcal {W}(\alpha ,\lambda )\) follow the scale model and are IHR for \(\alpha \ge 1\). Note that

  1. (i)

    for two independent samples, the independence survival copula is the Archimedean survival copula with log-convex generator \(e^{-x}\);

  2. (ii)

    for two samples sharing a common Archimedean survival copula with a log-convex generator, the conditions on generators in Theorem 4.1 are satisfied;

  3. (iii)

    the majorization \({\varvec{\lambda }}\mathop {\preceq }\limits ^{\mathrm{m}}{\varvec{\mu }}\) implies the submajorization \({\varvec{\lambda }}\preceq _{\mathrm{w}}{\varvec{\mu }}\).

Theorem 4.1(ii) partially improves the implication in (1.2) by relaxing the independence assumption under the scale model. It significantly generalizes the implication in (1.6) by extending the independent Gamma model of (1.6) to the more general dependent scale model. It also improves the result in (1.8) by further generalizing the scenario in which two Weibull samples share a common Archimedean survival copula to the situation in which two scale samples possess possibly different Archimedean survival copulas.

Theorem 4.2

Suppose, for \({\varvec{X}}\sim \text {S}(\bar{F},{\varvec{\lambda }},\psi _1)\) and \({\varvec{Y}}\sim \text {S}(\bar{F},{\varvec{\mu }},\psi _2)\), \(\psi _1\) or \(\psi _2\) is log-concave and \(\phi _2\circ \psi _1\) is super-additive. Then, \(X_{1:n}\le _{\mathrm{st}}Y_{1:n}\) if \({\varvec{\lambda }}\preceq ^{\mathrm{w}}{\varvec{\mu }}\) and F is DHR.

Proof

Let us assume here that \(\psi _1\) is log-concave; for the case with log-concave \(\psi _2\), the proof can be completed in a similar manner.

Note that s(x) is decreasing. From Lemma 3.3(ii) it follows that \(J_1({\varvec{\lambda }};x,\psi _1)\) is Schur-convex with respect to \({\varvec{\lambda }}\) and decreasing in \(\lambda _i\) for \(i\in {\mathcal {I}}_n\). According to Lemma 2.2(ii), \({\varvec{\lambda }}\preceq ^{\mathrm{w}}{\varvec{\mu }}\) implies

$$\begin{aligned} J_1({\varvec{\lambda }};x,\psi _1)\le J_1({\varvec{\mu }};x,\psi _1). \end{aligned}$$

Since \(\phi _2\circ \psi _1\) is super-additive, by Lemma 2.4 we have

$$\begin{aligned} J_1({\varvec{\mu }};x,\psi _1)\le J_1({\varvec{\mu }};x,\psi _2), \end{aligned}$$

and thus

$$\begin{aligned} J_1({\varvec{\lambda }};x,\psi _1)\le J_1({\varvec{\mu }};x,\psi _1)\le J_1({\varvec{\mu }};x,\psi _2). \end{aligned}$$

As a consequence, by (4.1) and (4.2) we conclude that \(\mathrm {P}(X_{1:n}>x)\le \mathrm {P}(Y_{1:n}>x)\) for \(x\ge 0\). That is, \(X_{1:n}\le _{\mathrm{st}}Y_{1:n}\). \(\square \)
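As with Theorem 4.1, the conclusion of Theorem 4.2 can be checked numerically from the explicit survival functions (4.1) and (4.2). The sketch below takes the common log-concave generator \(\psi (x)=e^{1-e^x}\) (Example 4.2(i) below with \(\theta _1=\theta _2=1\), so that \(\phi _2\circ \psi _1\) is the identity), the DHR Weibull baseline \(\bar{F}(x)=e^{-\sqrt{x}}\), and the illustrative vectors \({\varvec{\lambda }}=(2,3)\preceq ^{\mathrm{w}}{\varvec{\mu }}=(1,3)\).

```python
import math

def psi(t):                           # log-concave generator psi(t) = exp(1 - e^t)
    return math.exp(1.0 - math.exp(t))

def phi(u):                           # its inverse: phi(u) = ln(1 - ln u)
    return math.log(1.0 - math.log(u))

def sf_weibull(t, alpha=0.5):         # DHR Weibull survival for 0 < alpha <= 1
    return math.exp(-(t ** alpha))

def sf_min(lams, x):
    """P(X_{1:n} > x) = psi(sum_k phi(F-bar(lam_k x))), cf. (4.1)."""
    return psi(sum(phi(sf_weibull(lam * x)) for lam in lams))

lam = (2.0, 3.0)                      # weakly supermajorized by mu: 2 >= 1, 2 + 3 >= 1 + 3
mu = (1.0, 3.0)
grid = [0.05 * k for k in range(1, 101)]
min_order_ok = all(sf_min(lam, x) <= sf_min(mu, x) + 1e-12 for x in grid)
```

On this grid the inequality \(\mathrm {P}(X_{1:2}>x)\le \mathrm {P}(Y_{1:2}>x)\) holds throughout, in agreement with \(X_{1:n}\le _{\mathrm{st}}Y_{1:n}\).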

According to (1.1), for \(X_i\sim F(\lambda _ix)\), \(i\in {\mathcal {I}}_n\), and \(Y_i\sim F(\mu _ix)\), \(i\in {\mathcal {I}}_n\), both mutually independent, it holds that

$$\begin{aligned} {\varvec{\lambda }}\mathop {\preceq }\limits ^{\mathrm{m}}{\varvec{\mu }}\Longrightarrow X_{1:n}\le _{\mathrm{st}}Y_{1:n}\quad \text{ and }\quad X_{2:n}\le _{\mathrm{st}}Y_{2:n}\quad \text{ whenever } F\hbox { is }DHR. \end{aligned}$$
(4.6)

Moreover, it is easy to verify that the Weibull distribution \(\mathcal {W}(\alpha ,\lambda )\) follows the scale model and is DHR for \(0<\alpha \le 1\). Note that (i) \(\phi _2\circ \psi _1\) is clearly super-additive and both \(\psi _1\) and \(\psi _2\) are log-concave if the two samples have a common Archimedean survival copula with a log-concave generator \(\psi \), namely \(\psi _1(x)=\psi _2(x)=\psi (x)\), and (ii) the majorization \({\varvec{\lambda }}\mathop {\preceq }\limits ^{\mathrm{m}}{\varvec{\mu }}\) implies the supermajorization \({\varvec{\lambda }}\preceq ^{\mathrm{w}}{\varvec{\mu }}\). Theorem 4.2 partly generalizes the implication in (1.1) to two dependent scale samples and further improves that in (1.9) by allowing the samples to have possibly different Archimedean survival copulas.

Regarding the conditions on the generators in Theorem 4.2, we present one example below.

Example 4.2

(i) Let \({\varvec{X}}\) and \({\varvec{Y}}\) have the Gumbel-Hougaard survival copulas with respective generators \(\psi _1(x)=e^{\frac{1}{\theta _1}(1-e^x)}\) and \(\psi _2(x)=e^{\frac{1}{\theta _2}(1-e^x)}\) for \(0<\theta _2\le \theta _1\le 1\). It is easy to verify that \(\psi _i\) is log-concave for \(i=1,2\). Since

$$\begin{aligned} \frac{\,\mathrm {d}^2\big [\phi _2\big (\psi _1(x)\big )\big ]}{\,\mathrm {d}x^2}=e^{-x}\Big (\frac{\theta _1}{\theta _2}-1\Big )\Big [1+\Big (\frac{\theta _1}{\theta _2}-1\Big )e^{-x}\Big ]^{-2}\ge 0, \end{aligned}$$

for \(0<\theta _2\le \theta _1\le 1\), it holds that \(\phi _2(\psi _1(x))\) is convex. In view of \(\phi _2(\psi _1(0))=0\), by Proposition 21.A.11 of Marshall and Olkin (2007) we have that \(\phi _2(\psi _1(x))\) is super-additive.

(ii) Let \({\varvec{X}}\) and \({\varvec{Y}}\) have the Archimedean survival copulas with respective generators \(\psi _1(x)=e^{1-(1+x)^{1/\theta _1}}\) and \(\psi _2(x)=e^{1-(1+x)^{1/\theta _2}}\) for \(0<\theta _1\le \theta _2\le 1\). It is easy to check that \(\psi _i\) is log-concave for \(i=1,2\). In view of

$$\begin{aligned} \frac{\,\mathrm {d}^2\big [\phi _2\big (\psi _1(x)\big )\big ]}{\,\mathrm {d}x^2}=\frac{\theta _2}{\theta _1}\Big (\frac{\theta _2}{\theta _1}-1\Big )(1+x)^{\frac{\theta _2}{\theta _1}-2}\ge 0, \end{aligned}$$

for \(0<\theta _1\le \theta _2\le 1\), we have the convexity of \(\phi _2(\psi _1(x))\). In view of \(\phi _2(\psi _1(0))=0\), we reach the super-additivity of \(\phi _2(\psi _1(x))\) by Proposition 21.A.11 of Marshall and Olkin (2007) again. \(\square \)
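In both parts of Example 4.2 the composition has a closed form, \(\phi _2(\psi _1(x))=\ln \big (1+\frac{\theta _2}{\theta _1}(e^x-1)\big )\) in part (i) and \(\phi _2(\psi _1(x))=(1+x)^{\theta _2/\theta _1}-1\) in part (ii), so super-additivity can again be spot-checked on a grid; the parameter values below are illustrative only.

```python
import math

th1_i, th2_i = 0.8, 0.4         # part (i): 0 < theta2 <= theta1 <= 1
th1_ii, th2_ii = 0.3, 0.9       # part (ii): 0 < theta1 <= theta2 <= 1

def comp_i(x):
    # phi_2(psi_1(x)) = ln(1 + (theta2/theta1)(e^x - 1))
    return math.log(1.0 + (th2_i / th1_i) * (math.exp(x) - 1.0))

def comp_ii(x):
    # phi_2(psi_1(x)) = (1 + x)^(theta2/theta1) - 1
    return (1.0 + x) ** (th2_ii / th1_ii) - 1.0

pts = [0.1 * k for k in range(0, 41)]
ok_i = all(comp_i(x + y) >= comp_i(x) + comp_i(y) - 1e-12 for x in pts for y in pts)
ok_ii = all(comp_ii(x + y) >= comp_ii(x) + comp_ii(y) - 1e-12 for x in pts for y in pts)
```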

For scale samples with a common Archimedean survival copula, we also obtain the usual stochastic order on the second smallest order statistic.

Theorem 4.3

For \({\varvec{X}}\sim \text {S}(\bar{F},{\varvec{\lambda }},\psi )\) and \({\varvec{Y}}\sim \text {S}(\bar{F},{\varvec{\mu }},\psi )\) with log-concave \(\psi \), \(X_{2:n}\le _{\mathrm{st}}Y_{2:n}\) if \({\varvec{\lambda }}\preceq ^{\mathrm{w}}{\varvec{\mu }}\) and F is DHR.

Proof

The second smallest order statistics \(X_{2:n}\) and \(Y_{2:n}\) have their respective survival functions, for \(x\ge 0\),

$$\begin{aligned}&\mathrm {P}(X_{2:n}>x)\\&\quad =\mathrm {P}(X_{2:n}>x,X_{1:n}\le x)+\mathrm {P}(X_{2:n}>x,X_{1:n}>x)\\&\quad =\sum _{l=1}^n\mathrm {P}(X_l\le x,X_k>x, k\ne l)+\mathrm {P}(X_k>x,k\in {\mathcal {I}}_n)\\&\quad =\sum _{l=1}^n\big [\mathrm {P}(X_k>x, k\ne l)-\mathrm {P}(X_k>x,k\in {\mathcal {I}}_n)\big ]+\mathrm {P}(X_k>x,k\in {\mathcal {I}}_n)\\&\quad =\sum _{l=1}^n\mathrm {P}(X_k>x, k\ne l)-(n-1)\mathrm {P}(X_k>x,k\in {\mathcal {I}}_n)\\&\quad =\sum _{l=1}^n\psi \bigg (\sum _{k\ne l} \phi \big (\bar{F}(\lambda _k x)\big )\bigg )-(n-1)\psi \bigg (\sum _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg )\\&\quad = J_3({\varvec{\lambda }};x,\psi ), \end{aligned}$$

and

$$\begin{aligned} \mathrm {P}(Y_{2:n}>x)= & {} \sum _{l=1}^n\psi \bigg (\sum _{k\ne l} \phi \big (\bar{F}(\mu _k x)\big )\bigg )-(n-1)\psi \bigg (\sum _{k=1}^n \phi \big (\bar{F}(\mu _k x)\big )\bigg )\\= & {} J_3({\varvec{\mu }};x,\psi ). \end{aligned}$$

Since \(\psi \) is log-concave and s(x) is decreasing, due to Lemma 3.6, \(J_3({\varvec{\lambda }};x,\psi )\) is Schur-convex with respect to \({\varvec{\lambda }}\) and decreasing in \(\lambda _i\) for \(i\in {\mathcal {I}}_n\). According to Lemma 2.2(ii), \({\varvec{\lambda }}\preceq ^{\mathrm{w}}{\varvec{\mu }}\) implies \(J_3({\varvec{\lambda }};x,\psi )\le J_3({\varvec{\mu }};x,\psi )\). As a consequence, we conclude that \(\mathrm {P}(X_{2:n}>x)\le \mathrm {P}(Y_{2:n}>x)\), \(x\ge 0\). That is, \(X_{2:n}\le _{\mathrm{st}}Y_{2:n}\). \(\square \)

For \(n=3\), the Ali-Mikhail-Haq survival copula has a log-concave generator \(\psi (x)=\frac{1-\theta }{e^x-\theta }\), \(\theta \in [-2+\sqrt{3},0]\), and for \(n=2\), the generator \(\psi (x)=[0.5(e^x+1)]^{-1/\theta }\) for \(\theta \in (0,1]\) is also log-concave. Again, note that \({\varvec{\lambda }}\mathop {\preceq }\limits ^{\mathrm{m}}{\varvec{\mu }}\) implies \({\varvec{\lambda }}\preceq ^{\mathrm{w}}{\varvec{\mu }}\). As for the second smallest order statistic, due to (4.6) and the log-concave generator \(e^{-x}\) of the independence survival copula, our Theorem 4.3 partially improves the result in (1.1). On the other hand, Theorem 4.3 partially generalizes the implication in (1.3) because the exponential distribution has a constant hazard rate.
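The conclusion of Theorem 4.3 can likewise be examined numerically via the explicit expression \(J_3\) derived in the proof. The sketch below takes \(n=3\) with the Ali-Mikhail-Haq generator \(\psi (x)=\frac{1-\theta }{e^x-\theta }\), the illustrative choice \(\theta =-0.2\in [-2+\sqrt{3},0]\), the DHR Weibull baseline \(\bar{F}(x)=e^{-\sqrt{x}}\), and the illustrative vectors \({\varvec{\lambda }}=(1,2,3)\preceq ^{\mathrm{w}}{\varvec{\mu }}=(0.5,2,3)\).

```python
import math

theta = -0.2                          # AMH generator, log-concave for n = 3

def psi(t):
    return (1.0 - theta) / (math.exp(t) - theta)

def phi(u):                           # inverse of psi
    return math.log(theta + (1.0 - theta) / u)

def sf_weibull(t, alpha=0.5):         # DHR Weibull survival
    return math.exp(-(t ** alpha))

def sf_second_smallest(lams, x):
    """J_3 = sum_l psi(sum_{k != l} phi(.)) - (n - 1) psi(sum_k phi(.))."""
    n = len(lams)
    ph = [phi(sf_weibull(lam * x)) for lam in lams]
    total = sum(ph)
    return sum(psi(total - p) for p in ph) - (n - 1) * psi(total)

lam = (1.0, 2.0, 3.0)                 # weakly supermajorized by mu
mu = (0.5, 2.0, 3.0)
grid = [0.05 * k for k in range(1, 101)]
second_order_ok = all(
    sf_second_smallest(lam, x) <= sf_second_smallest(mu, x) + 1e-12 for x in grid
)
```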

4.2 On sample maximum

In this subsection we consider a dual model to the one studied in the previous subsection. We assume that samples have some Archimedean copulas and compare the corresponding maximums. Note that, for \({\varvec{X}}\sim \text {S}(F,{\varvec{\lambda }},\psi _1)\) and \({\varvec{Y}}\sim \text {S}(F,{\varvec{\mu }},\psi _2)\), sample maximums \(X_{n:n}\) and \(Y_{n:n}\) have their respective survival functions, for all \(x\ge 0\),

$$\begin{aligned} \mathrm {P}(X_{n:n}>x)= & {} 1-\mathrm {P}(X_k\le x,k\in {\mathcal {I}}_n)\\= & {} 1-\psi _1\bigg (\sum _{k=1}^n \phi _1\big (F(\lambda _k x)\big )\bigg )\\= & {} J_2({\varvec{\lambda }};x,\psi _1), \end{aligned}$$

and

$$\begin{aligned} \mathrm {P}(Y_{n:n}>x)=1-\psi _2\bigg (\sum _{k=1}^n \phi _2\big (F(\mu _k x)\big )\bigg )=J_2({\varvec{\mu }};x,\psi _2). \end{aligned}$$

In parallel to Theorems 4.1 and 4.2, we also obtain a stochastic comparison of the sample maximums. Since the following theorem can be verified in a similar manner to Theorems 4.1 and 4.2 based on Lemmas 2.4, 3.4 and 3.5, we omit the proof for brevity.

Theorem 4.4

Suppose, for \({\varvec{X}}\sim \text {S}(F,{\varvec{\lambda }},\psi _1)\) and \({\varvec{Y}}\sim \text {S}(F,{\varvec{\mu }},\psi _2)\), \(\psi _1\) or \(\psi _2\) is log-convex, and \(\phi _1\circ \psi _2\) is super-additive. Then, \(X_{n:n}\le _{\mathrm{st}}Y_{n:n}\) if (i) \({\varvec{\lambda }}\mathop {\preceq }\limits ^{\mathrm{p}}{\varvec{\mu }}\) and F is DPRHR, or (ii) \({\varvec{\lambda }}\preceq ^{\mathrm{w}}{\varvec{\mu }}\) and F is DRHR.

According to (1.2), for \(X_i\sim F(\lambda _ix)\), \(i\in {\mathcal {I}}_n\), and \(Y_i\sim F(\mu _ix)\), \(i\in {\mathcal {I}}_n\), both mutually independent, it holds that

$$\begin{aligned} {\varvec{\lambda }}\mathop {\preceq }\limits ^{\mathrm{m}}{\varvec{\mu }}\Longrightarrow X_{n:n}\le _{\mathrm{st}}Y_{n:n}\quad \text{ whenever } F\hbox { is DRHR}. \end{aligned}$$
(4.7)

In particular, for two independent samples we have \(\psi _1(x)=\psi _2(x)=e^{-x}\) and thus Theorem 4.4(i) coincides with (1.7). Further, since \({\varvec{\lambda }}\mathop {\preceq }\limits ^{\mathrm{m}}{\varvec{\mu }}\) implies \({\varvec{\lambda }}\preceq ^{\mathrm{w}}{\varvec{\mu }}\), the implication in (4.7) follows from Theorem 4.4(ii). So, Theorem 4.4(i) serves as a generalization of the implication in (1.7) and Theorem 4.4(ii) partially improves that in (1.2).

5 Homogeneous and heterogeneous samples

In this section, we switch our focus to the dispersive order and the star order between extremes of heterogeneous and homogeneous samples.

5.1 Samples sharing a common Archimedean copula or survival copula

First, we investigate how the baseline distribution and the dependence structure affect the variability of the sample minimum.

Theorem 5.1

Suppose, for \({\varvec{X}}\sim \text {S}(\bar{F},{\varvec{\lambda }},\psi )\) and \({\varvec{Z}}\sim \text {S}(\bar{F},\lambda \varvec{1},\psi )\), \(\psi /\psi '\) is decreasing and concave. Then, \(\lambda \le \frac{1}{n}\sum _{k=1}^n \lambda _k\) implies \(X_{1:n}\le _{\mathrm{disp}}Z_{1:n}\) if (i) \(x[\ln s(x)]'\) is increasing and F is IHR, or (ii) \(\phi (\bar{F}(x))\) is convex, F is both DHR and IPHR, and F has a convex proportional hazard rate.

Proof

The distribution functions of \(X_{1:n}\) and \(Z_{1:n}\) are respectively, for \(x\ge 0\),

$$\begin{aligned} F_1(x)=1-\psi \bigg (\sum \limits _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg ),\quad H_1(x)=1-\psi \big (n \phi \big (\bar{F}(\lambda x)\big )\big ), \end{aligned}$$

and their respective density functions are

$$\begin{aligned} f_1(x)= & {} \psi '\bigg (\sum _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg )\sum _{k=1}^n\frac{\lambda _k f(\lambda _k x)}{\psi '\big (\phi \big (\bar{F}(\lambda _k x)\big )\big )},\\ h_1(x)= & {} \psi '\big (n\phi \big (\bar{F}(\lambda x)\big )\big ) \frac{n\lambda f(\lambda x)}{\psi '\big (\phi \big (\bar{F}(\lambda x)\big )\big )}. \end{aligned}$$

Denote \(L_1(x;{\varvec{\lambda }})=\bar{F}^{-1}\Big (\psi \Big (\frac{1}{n}\sum \limits _{k=1}^n\phi \big (\bar{F}(\lambda _k x)\big )\Big )\Big )\). Then, for \(x\ge 0\),

$$\begin{aligned} H_1^{-1}\big (F_1(x)\big )=\frac{1}{\lambda }L_1(x;{\varvec{\lambda }}), \end{aligned}$$
(5.1)

and

$$\begin{aligned} h_1\big (H_1^{-1}\big (F_1(x)\big )\big )=\psi '\bigg (\sum \limits _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg )\frac{n\lambda f\big (L_1(x;{\varvec{\lambda }})\big )}{\psi '\Big (\frac{1}{n}\sum \nolimits _{k=1}^n\phi \big (\bar{F}(\lambda _k x)\big )\Big )}. \end{aligned}$$

(i) Since s(x) is increasing, the increasing property of \(\bar{F}^{-1}(\psi (x))\) implies that of \(\bar{F}^{-1}(\psi (x)) s\big (\bar{F}^{-1}(\psi (x))\big )\). Since \(\psi /\psi '\) is decreasing, for \(x\ge 0\),

$$\begin{aligned} \frac{\,\mathrm {d}}{\,\mathrm {d}x}\frac{\psi (x)}{\psi '(x)}=1-\frac{\psi ^{(2)}(x)\psi (x)}{(\psi '(x))^2}\le 0. \end{aligned}$$

From the concavity of \(\psi /\psi '\) it follows that \(1-\frac{\psi ^{(2)}(x)\psi (x)}{(\psi '(x))^2}\) is decreasing. Hence, for \(x\ge 0\),

$$\begin{aligned} \bar{F}^{-1}\big (\psi (x)\big )s\big (\bar{F}^{-1}\big (\psi (x)\big )\big ) \bigg [1-\frac{\psi ^{(2)}(x)\psi (x)}{(\psi '(x))^2}\bigg ] \end{aligned}$$

is decreasing. Since both \(x[\ln s(x)]'\) and \(\bar{F}^{-1}(\psi (x))\) are increasing, for \(x\ge 0\),

$$\begin{aligned} \bar{F}^{-1}\big (\psi (x)\big ) \Bigg [\frac{f'\big (\bar{F}^{-1}(\psi (x))\big )}{f\big (\bar{F}^{-1}(\psi (x))\big )} +s\big (\bar{F}^{-1}(\psi (x))\big )\Bigg ] \end{aligned}$$

is increasing. In view of

$$\begin{aligned}&\frac{\,\mathrm {d}}{\,\mathrm {d}x}\frac{\bar{F}^{-1}(\psi (x)) f\big (\bar{F}^{-1}(\psi (x))\big )}{\psi '(x)}\\&\quad =-1-\bar{F}^{-1}(\psi (x)) \frac{f'\big (\bar{F}^{-1}(\psi (x))\big )}{f\big (\bar{F}^{-1}(\psi (x))\big )}- \bar{F}^{-1}(\psi (x))s\big (\bar{F}^{-1}(\psi (x))\big ) \frac{\psi ^{(2)}(x)\psi (x)}{(\psi '(x))^2}\\&\quad =-1-\bar{F}^{-1}(\psi (x)) \bigg [\frac{f'\big (\bar{F}^{-1}(\psi (x))\big )}{f\big (\bar{F}^{-1}(\psi (x))\big )}+ s\big (\bar{F}^{-1}(\psi (x))\big )\bigg ]\\&\qquad +\,\bar{F}^{-1}(\psi (x))s\big (\bar{F}^{-1}(\psi (x))\big ) \bigg [1-\frac{\psi ^{(2)}(x)\psi (x)}{(\psi '(x))^2}\bigg ], \quad \text{ for } x\ge 0, \end{aligned}$$

we conclude that \(\frac{\,\mathrm {d}}{\,\mathrm {d}x}\big [\bar{F}^{-1}(\psi (x)) f(\bar{F}^{-1}(\psi (x)))/\psi '(x)\big ]\) is decreasing. That is, \(\bar{F}^{-1}(\psi (x)) f(\bar{F}^{-1}(\psi (x)))/\psi '(x)\) is concave. Then, for \(x\ge 0\),

$$\begin{aligned} \frac{1}{n}\sum _{k=1}^n\frac{\lambda _k x f(\lambda _k x)}{\psi '\big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}= & {} \frac{1}{n}\sum _{k=1}^n\frac{\bar{F}^{-1}\big (\psi \big (\phi \big (\bar{F}(\lambda _k x)\big )\big )\big ) f\big (\bar{F}^{-1}\big (\psi \big (\phi \big (\bar{F}(\lambda _k x)\big )\big )\big )\big )}{\psi '\big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}\nonumber \\\le & {} \frac{L_1(x;{\varvec{\lambda }}) f\big (L_1(x;{\varvec{\lambda }})\big )}{\psi '\Big (\frac{1}{n}\sum \nolimits _{k=1}^n\phi \big (\bar{F}(\lambda _k x)\big )\Big )}. \end{aligned}$$
(5.2)

Since \(\psi /\psi '\) is decreasing and both \(\phi (\bar{F}(x))\) and s(x) are increasing,

$$\begin{aligned} \frac{\,\mathrm {d}}{\,\mathrm {d}x}\phi \big (\bar{F}(x)\big ) =-s(x)\frac{\psi \big (\phi \big (\bar{F}(x)\big )\big )}{\psi '\big (\phi \big (\bar{F}(x)\big )\big )} \end{aligned}$$

is increasing, i.e., \(\phi (\bar{F}(x))\) is convex. Since \(\phi (\bar{F}(x))\) is increasing and convex, by Jensen's inequality \(\lambda \le \frac{1}{n}\sum _{k=1}^n\lambda _k\) implies

$$\begin{aligned} \frac{1}{n}\sum \limits _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\ge \phi \bigg (\bar{F}\bigg (\frac{x}{n}\sum \limits _{k=1}^n\lambda _k\bigg )\bigg )\ge \phi \big (\bar{F}(\lambda x)\big ),\quad \text{ for } x\ge 0, \end{aligned}$$

and thus the increasing property of \(\bar{F}^{-1}\big (\psi (x)\big )\) leads to

$$\begin{aligned} L_1(x;{\varvec{\lambda }})=\bar{F}^{-1}\bigg (\psi \bigg (\frac{1}{n}\sum _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg )\bigg )\ge \lambda x. \end{aligned}$$
(5.3)

Consequently, it holds that

$$\begin{aligned}&h_1\big (H_1^{-1}\big (F_1(x)\big )\big )-f_1(x)\nonumber \\&\quad =n\psi '\bigg (\sum \limits _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg )\Bigg [\frac{\lambda f\big (L_1(x;{\varvec{\lambda }})\big )}{\psi '\Big (\frac{1}{n}\sum \nolimits _{k=1}^n\phi \big (\bar{F}(\lambda _k x)\big )\Big )}-\frac{1}{n}\sum \limits _{k=1}^n\frac{\lambda _k f(\lambda _k x)}{\psi '\big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}\Bigg ]\nonumber \\&\quad \mathop {=}\limits ^{\mathrm{sgn}}\frac{1}{n}\sum \limits _{k=1}^n\frac{\lambda _k x f(\lambda _k x)}{\psi '\big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}-\frac{\lambda x f\big (L_1(x;{\varvec{\lambda }})\big )}{\psi '\Big (\frac{1}{n}\sum \nolimits _{k=1}^n\phi \big (\bar{F}(\lambda _k x)\big )\Big )}\nonumber \\&\quad \le \frac{L_1(x;{\varvec{\lambda }}) f\big (L_1(x;{\varvec{\lambda }})\big )}{\psi '\Big (\frac{1}{n}\sum \nolimits _{k=1}^n\phi \big (\bar{F}(\lambda _k x)\big )\Big )}-\frac{\lambda x f\big (L_1(x;{\varvec{\lambda }})\big )}{\psi '\Big (\frac{1}{n}\sum \nolimits _{k=1}^n\phi \big (\bar{F}(\lambda _k x)\big )\Big )}\nonumber \\&\quad \mathop {=}\limits ^{\mathrm{sgn}}\lambda x-L_1(x;{\varvec{\lambda }})\nonumber \\&\quad \le 0,\quad \text{ for } x\ge 0, \end{aligned}$$
(5.4)

where ‘\(\mathop {=}\limits ^{\mathrm{sgn}}\)’ means that both sides have the same sign, and the first and the second inequalities follow from (5.2) and (5.3), respectively.

(ii) From the concavity of \(\psi /\psi '\) it follows that, for \(x\ge 0\),

$$\begin{aligned} \frac{\psi \Big (\frac{1}{n}\sum \nolimits _{k=1}^n\phi \big (\bar{F}(\lambda _k x)\big )\Big )}{\psi '\Big (\frac{1}{n}\sum \nolimits _{k=1}^n\phi \big (\bar{F}(\lambda _k x)\big )\Big )}\ge \frac{1}{n}\sum \limits _{k=1}^n\frac{\bar{F}(\lambda _k x)}{\psi '\big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}. \end{aligned}$$
(5.5)

Note that \(\phi (\bar{F}(x))\) is increasing and convex. Due to Jensen’s inequality, \(\lambda \le \frac{1}{n}\sum _{k=1}^n\lambda _k\) implies

$$\begin{aligned} \frac{1}{n}\sum \limits _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\ge \phi \bigg (\bar{F}\bigg (\frac{x}{n}\sum \limits _{k=1}^n\lambda _k\bigg )\bigg )\ge \phi \big (\bar{F}(\lambda x)\big ),\quad \text{ for } x\ge 0, \end{aligned}$$

and thus the increasing property of \(\bar{F}^{-1}(\psi (x))\) leads to

$$\begin{aligned} L_1(x;{\varvec{\lambda }})=\bar{F}^{-1}\bigg (\psi \bigg (\frac{1}{n}\sum _{k=1}^n \phi \big (\bar{F}(\lambda _k x)\big )\bigg )\bigg )\ge \lambda x. \end{aligned}$$
(5.6)

In view of (5.6), \(\lambda \le \frac{1}{n}\sum _{k=1}^n\lambda _k\), the decreasing property of s(x), and the increasing property and convexity of xs(x), we have, for \(x\ge 0\),

$$\begin{aligned} \lambda xs\big (L_1(x;{\varvec{\lambda }})\big )\le \lambda xs(\lambda x)\le \frac{1}{n}\sum _{k=1}^n\lambda _k x\cdot s\Big (\frac{1}{n}\sum _{k=1}^n\lambda _k x\Big )\le \frac{1}{n}\sum _{k=1}^n\lambda _k xs(\lambda _k x). \end{aligned}$$
(5.7)

Since \(\psi /\psi '\) is decreasing, and xs(x) and \(\phi (\bar{F}(x))\) are increasing, according to Lemma 2.5 we have, for \(x\ge 0\),

$$\begin{aligned} \frac{1}{n}\sum _{k=1}^n\frac{\lambda _k x s(\lambda _k x)\psi \big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}{\psi '\big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}-\frac{1}{n}\sum _{k=1}^n\lambda _k xs(\lambda _k x)\cdot \frac{1}{n}\sum \limits _{k=1}^n\frac{\psi \big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}{\psi '\big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}\le 0. \end{aligned}$$
(5.8)

Consequently, by (5.4) it holds that, for \(x\ge 0\),

$$\begin{aligned}&h_1\big (H_1^{-1}\big (F_1(x)\big )\big )-f_1(x)\\&\quad \mathop {=}\limits ^{\mathrm{sgn}}\frac{1}{n}\sum \limits _{k=1}^n\frac{\lambda _kx s(\lambda _k x)\psi \big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}{\psi '\big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}-\frac{\lambda xs\big (L_1(x;{\varvec{\lambda }})\big )\psi \Big (\frac{1}{n}\sum \nolimits _{k=1}^n\phi \big (\bar{F}(\lambda _k x)\big )\Big )}{\psi '\Big (\frac{1}{n}\sum \nolimits _{k=1}^n\phi \big (\bar{F}(\lambda _k x)\big )\Big )}\\&\quad \le \frac{1}{n}\sum _{k=1}^n\frac{\lambda _k x s(\lambda _k x)\psi \big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}{\psi '\big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}-\lambda xs\big (L_1(x;{\varvec{\lambda }})\big )\cdot \frac{1}{n}\sum _{k=1}^n\frac{\bar{F}(\lambda _k x)}{\psi '\big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}\\&\quad \le \frac{1}{n}\sum _{k=1}^n\frac{\lambda _k x s(\lambda _k x)\psi \big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}{\psi '\big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}-\frac{1}{n}\sum _{k=1}^n\lambda _k xs(\lambda _k x)\cdot \frac{1}{n}\sum _{k=1}^n\frac{\bar{F}(\lambda _k x)}{\psi '\big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}\\&\quad \le 0, \end{aligned}$$

where the three inequalities follow from (5.5), (5.7) and (5.8), respectively.

Now, we can conclude that \(f_1\big (F_1^{-1}(x)\big )\ge h_1\big (H_{1}^{-1}(x)\big )\) for all \(x\in (0,1)\). By (3.B.11) in Shaked and Shanthikumar (2007), this yields \(X_{1:n}\le _{\mathrm{disp}}Z_{1:n}\). \(\square \)
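By (5.1), the dispersive order asserted in Theorem 5.1 is equivalent to \(H_1^{-1}(F_1(x))-x=L_1(x;{\varvec{\lambda }})/\lambda -x\) being increasing, which can be checked numerically. The sketch below uses the Weibull baseline with \(\alpha =2\), \(n=2\), and the piecewise generator \(\psi (x)=e^{-2\arctan x}\) for \(x\le 1\), \(\psi (x)=x^{-1}e^{-\pi /2}\) for \(x\ge 1\) (the setting of Example 5.1(i) below), with the illustrative choices \({\varvec{\lambda }}=(1,3)\) and \(\lambda =2=\frac{\lambda _1+\lambda _2}{2}\).

```python
import math

HALF_PI = math.pi / 2.0

def psi(t):
    # exp(-2 arctan t) for t <= 1, t^(-1) exp(-pi/2) for t >= 1
    return math.exp(-2.0 * math.atan(t)) if t <= 1.0 else math.exp(-HALF_PI) / t

def phi(u):
    # inverse of psi on (0, 1]
    return math.tan(-0.5 * math.log(u)) if u >= math.exp(-HALF_PI) else math.exp(-HALF_PI) / u

def sf(t):
    # Weibull(alpha = 2) baseline survival, F-bar(t) = exp(-t^2)
    return math.exp(-t * t)

def L1(x, lams):
    # L_1(x; lam) = F-bar^{-1}( psi( (1/n) sum_k phi(F-bar(lam_k x)) ) )
    m = sum(phi(sf(l * x)) for l in lams) / len(lams)
    return math.sqrt(-math.log(psi(m)))   # F-bar^{-1}(v) = sqrt(-ln v)

lams, lam0 = (1.0, 3.0), 2.0              # lam0 <= (lam_1 + lam_2)/2
grid = [0.02 * k for k in range(1, 101)]
gaps = [L1(x, lams) / lam0 - x for x in grid]
dispersive_ok = all(b >= a - 1e-9 for a, b in zip(gaps, gaps[1:]))
```

On this grid the gap \(H_1^{-1}(F_1(x))-x\) is indeed increasing, consistent with \(X_{1:2}\le _{\mathrm{disp}}Z_{1:2}\).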

In the following, we provide an example illustrating the conditions of Theorem 5.1.

Example 5.1

(i) Set \(n=2\), \(\alpha \ge 1\), \(0<\lambda \le \frac{\lambda _1+\lambda _2}{2}\) and \(\lambda _i>0\) for \(i=1,2\). Let \(X_i\sim \mathcal {W}(\alpha ,\lambda _i)\), \(i=1,2\) and \(Z_i\sim \mathcal {W}(\alpha ,\lambda )\), \(i=1,2\), both have the Archimedean survival copula with generator

$$\begin{aligned} \psi (x)=\left\{ \begin{array}{ll} e^{-2\arctan x}, &{} \quad x\le 1; \\ x^{-1}e^{-\pi /2}, &{} \quad x\ge 1. \end{array} \right. \end{aligned}$$

It can be verified that

$$\begin{aligned} \frac{\psi (x)}{\psi '(x)}=\left\{ \begin{array}{ll} -\frac{1+x^2}{2}, &{}\quad x\le 1; \\ -x, &{} \quad x\ge 1, \end{array} \right. \end{aligned}$$

is decreasing and concave. On the other hand, both \(s(x)=\alpha x^{\alpha -1}\) and \(x[\ln s(x)]'=\alpha -1\) are increasing in \(x\ge 0\) for \(\alpha \ge 1\). So, the conditions in Theorem 5.1(i) are satisfied.

(ii) Set \(\alpha \theta \ge 1\), \(0<\lambda \le \frac{1}{n}\sum _{k=1}^n \lambda _k\) and \(\lambda _i>0\) for \(i\in {\mathcal {I}}_n\). Let \(X_i\sim {\mathcal {L}}(\alpha ,\lambda _i)\), \(i\in {\mathcal {I}}_n\) and \(Z_i\sim {\mathcal {L}}(\alpha ,\lambda )\), \(i\in {\mathcal {I}}_n\), both have the Clayton survival copula with generator \(\psi (x)=(\theta x+1)^{-1/\theta }\). Clearly, \(\psi (x)/\psi '(x)=-\theta x-1\) is decreasing and concave. On the other hand, it is easy to verify that \(s(x)=\alpha (1+x)^{-1}\) is decreasing and \(xs(x)=\alpha x(1+x)^{-1}\) is increasing and convex. Since, for \(\alpha \theta \ge 1\),

$$\begin{aligned} \frac{\,\mathrm {d}^2\phi \big (\bar{F}(x)\big )}{\,\mathrm {d}x^2}=\alpha (\alpha \theta -1)(1+x)^{\alpha \theta -2}\ge 0, \end{aligned}$$

we have the convexity of \(\phi (\bar{F}(x))\). Therefore, the conditions of Theorem 5.1(ii) are satisfied also. \(\square \)
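The analytic verifications in Example 5.1 can also be double-checked with finite differences. The sketch below tests, on a grid, that \(\psi /\psi '\) from part (i) is decreasing and concave, and that \(\phi (\bar{F}(x))\) from part (ii) is convex; \(\alpha =2\) and \(\theta =1\) are illustrative values with \(\alpha \theta \ge 1\).

```python
def g(x):
    # psi(x)/psi'(x) for the generator of part (i)
    return -(1.0 + x * x) / 2.0 if x <= 1.0 else -x

alpha, theta = 2.0, 1.0    # part (ii): Lomax baseline, Clayton generator, alpha * theta >= 1

def phi_sf(x):
    # phi(F-bar(x)) = ((1 + x)^(alpha*theta) - 1)/theta
    return ((1.0 + x) ** (alpha * theta) - 1.0) / theta

h = 0.01
grid = [h * k for k in range(1, 300)]
g_decreasing = all(g(x + h) <= g(x) + 1e-12 for x in grid)
g_concave = all(g(x + h) + g(x - h) - 2.0 * g(x) <= 1e-12 for x in grid)
phi_sf_convex = all(phi_sf(x + h) + phi_sf(x - h) - 2.0 * phi_sf(x) >= -1e-12 for x in grid)
```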

The next theorem presents a sufficient condition for the star order between the sample minimum of one heterogeneous scale sample and that of the other homogeneous scale sample.

Theorem 5.2

For \({\varvec{X}}\sim \text {S}(\bar{F},{\varvec{\lambda }},\psi )\) and \({\varvec{Z}}\sim \text {S}(\bar{F},\lambda \varvec{1},\psi )\) with decreasing and concave \(\psi /\psi '\), \(X_{1:n}\le _{*}Z_{1:n}\) if \(x[\ln s(x)]'\) is increasing and F is IPHR.

Proof

Since xs(x) and \(x[\ln s(x)]'\) are increasing, according to the proof of Theorem 5.1(i), the decreasing property and the concavity of \(\psi /\psi '\) imply the concavity of \(\frac{\bar{F}^{-1}(\psi (x)) f(\bar{F}^{-1}(\psi (x)))}{\psi '(x)}\), and thus we reach (5.2) again. Then, by (5.1) we have, for \(x\ge 0\),

$$\begin{aligned} \frac{\,\mathrm {d}}{\,\mathrm {d}x}\frac{H_1^{-1}(F_1(x))}{x}= & {} \frac{\psi '\big (\frac{1}{n}\sum _{k=1}^n\phi \big (\bar{F}(\lambda _k x)\big )\big )}{\lambda x^2f\big (L_1(x;{\varvec{\lambda }})\big )}\\&\quad \cdot \left[ \frac{1}{n}\sum \limits _{k=1}^n\frac{\lambda _k x f(\lambda _k x)}{\psi '\big (\phi \big (\bar{F}(\lambda _k x)\big )\big )}-\frac{L_1(x;{\varvec{\lambda }}) f\big (L_1(x;{\varvec{\lambda }})\big )}{\psi '\big (\frac{1}{n}\sum _{k=1}^n\phi \big (\bar{F}(\lambda _k x)\big )\big )}\right] \ge 0. \end{aligned}$$

That is, \(H_1^{-1}(F_1(x))/x\) is increasing in x and hence \(X_{1:n}\le _{*}Z_{1:n}\). \(\square \)
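The monotonicity of \(H_1^{-1}(F_1(x))/x=L_1(x;{\varvec{\lambda }})/(\lambda x)\) established above can be probed numerically; since \(\lambda >0\) only rescales the ratio, it suffices that \(L_1(x;{\varvec{\lambda }})/x\) is increasing. The sketch below reuses the setting of Example 5.1(i) (Weibull baseline with \(\alpha =2\) and the arctan-type generator) with the illustrative vector \({\varvec{\lambda }}=(1,3)\).

```python
import math

HALF_PI = math.pi / 2.0

def psi(t):
    # generator of Example 5.1(i)
    return math.exp(-2.0 * math.atan(t)) if t <= 1.0 else math.exp(-HALF_PI) / t

def phi(u):
    # inverse of psi on (0, 1]
    return math.tan(-0.5 * math.log(u)) if u >= math.exp(-HALF_PI) else math.exp(-HALF_PI) / u

def sf(t):
    # Weibull(alpha = 2) baseline survival
    return math.exp(-t * t)

def L1_over_x(x, lams):
    # L_1(x; lam)/x with L_1 as in the proof of Theorem 5.1
    m = sum(phi(sf(l * x)) for l in lams) / len(lams)
    return math.sqrt(-math.log(psi(m))) / x   # F-bar^{-1}(v) = sqrt(-ln v)

lams = (1.0, 3.0)
grid = [0.02 * k for k in range(1, 101)]
ratios = [L1_over_x(x, lams) for x in grid]
star_ok = all(b >= a - 1e-9 for a, b in zip(ratios, ratios[1:]))
```

On this grid the ratio is increasing, consistent with the star order \(X_{1:2}\le _{*}Z_{1:2}\).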

The following two corollaries follow immediately from Lemma 3.1 and Theorems 5.1(i) and 5.2, and thus are presented here with their proofs omitted for brevity.

Corollary 5.1

For \({\varvec{X}}\sim \text {S}(\bar{F},{\varvec{\lambda }},\psi )\) and \({\varvec{Z}}\sim \text {S}(\bar{F},\lambda \varvec{1},\psi )\) with decreasing and concave \(\psi /\psi '\), \(\lambda \le \frac{1}{n}\sum _{k=1}^n \lambda _k\) implies \(X_{1:n}\le _{\mathrm{disp}}Z_{1:n}\) if F is IHR and F has a log-convex hazard rate.

Corollary 5.2

For \({\varvec{X}}\sim \text {S}(\bar{F},{\varvec{\lambda }},\psi )\) and \({\varvec{Z}}\sim \text {S}(\bar{F},\lambda \varvec{1},\psi )\) with decreasing and concave \(\psi /\psi '\), \(X_{1:n}\le _{*}Z_{1:n}\) if F is IHR and F has a log-convex hazard rate.

In parallel to samples having Archimedean survival copulas, we can obtain the following dual theorem on the maximums of two scale samples with a common Archimedean copula; the proof is similar to that of Theorem 5.2 and hence also omitted.

Theorem 5.3

For \({\varvec{X}}\sim \text {S}(F,{\varvec{\lambda }},\psi )\) and \({\varvec{Z}}\sim \text {S}(F,\lambda \varvec{1},\psi )\) with decreasing and concave \(\psi /\psi '\), \(X_{n:n}\le _{*}Z_{n:n}\) if \(x[\ln r(x)]'\) is increasing and F is DPRHR.

Here we provide an example illustrating the conditions of Theorem 5.3.

Example 5.2

Let \(X_i\sim {\mathcal {F}}(\alpha ,\lambda _i)\), \(i=1,2\), and \(Z_i\sim {\mathcal {F}}(\alpha ,\lambda )\), \(i=1,2\), with \(\alpha ,\lambda ,\lambda _i\ge 0\) for \(i=1,2\), have the Archimedean copula with generator

$$\begin{aligned} \psi (x)=\left\{ \begin{array}{ll} e^{-2\arctan x}, &{} \quad x\le 1; \\ e^{-1-\pi /2+1/x}, &{} \quad x\ge 1. \end{array} \right. \end{aligned}$$

It is plain that

$$\begin{aligned} \frac{\psi (x)}{\psi '(x)}=\left\{ \begin{array}{ll} -\frac{1+x^2}{2}, &{} \quad x\le 1; \\ -x^2, &{}\quad x\ge 1, \end{array} \right. \end{aligned}$$

which is decreasing and concave. Also, clearly, \(xr(x)=\alpha x^{-\alpha }\) is decreasing in \(x\ge 0\) and \(x[\ln r(x)]'=-\alpha -1\) is constant, and hence increasing, in \(x\ge 0\). So, the conditions of Theorem 5.3 are verified. \(\square \)
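The monotonicity and concavity of \(\psi /\psi '\) claimed in Example 5.2 can also be confirmed numerically. The sketch below evaluates the piecewise expression above on a grid and checks decreasingness, concavity, and agreement of the two branches at \(x=1\):

```python
import numpy as np

# Numerical sanity check of Example 5.2: the piecewise expression for
# psi(x)/psi'(x), namely -(1 + x^2)/2 for x <= 1 and -x^2 for x >= 1,
# is decreasing and concave on (0, infinity).
def g(x):
    return np.where(x <= 1.0, -(1.0 + x**2) / 2.0, -(x**2))

x = np.linspace(0.01, 5.0, 1000)
vals = g(x)

assert np.all(np.diff(vals) <= 0)          # decreasing
assert np.all(np.diff(vals, 2) <= 1e-9)    # concave: 2nd differences <= 0
assert abs(float(g(1.0)) + 1.0) < 1e-12    # both branches give -1 at x = 1
print("psi/psi' is decreasing and concave")
```

The agreement of the two branches at \(x=1\) (both equal \(-1\)) also reflects the continuity of the generator \(\psi \) there, since \(e^{-2\arctan 1}=e^{-\pi /2}=e^{-1-\pi /2+1}\).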

5.2 Samples with different dependence structures

In this subsection, we discuss the dispersive ordering of the sample minimums from two homogeneous scale samples with different Archimedean survival copulas. For \(\varvec{\tilde{{\varvec{Z}}}}=(\tilde{Z}_1,\ldots ,\tilde{Z}_n)\), denote by \(\tilde{Z}_{1:n}\) the minimum of \(\varvec{\tilde{{\varvec{Z}}}}\).

Theorem 5.4

Suppose, for \({\varvec{Z}}\sim \text {S}(\bar{F},\lambda \varvec{1},\psi _1)\) and \(\tilde{{\varvec{Z}}}\sim \text {S}(\bar{F},\lambda \varvec{1},\psi _2)\), \(\psi _1\) or \(\psi _2\) is log-concave and F is DHR. Then, (i) \(Z_{1:n}\le _{\mathrm{disp}}\tilde{Z}_{1:n}\) if \(\phi _2(\psi _1(x))\) is convex, and (ii) \(Z_{1:n}\ge _{\mathrm{disp}}\tilde{Z}_{1:n}\) if \(\phi _1(\psi _2(x))\) is convex.

Proof

We only prove part (i); part (ii) can be obtained in a similar manner.

The sample minimums \(Z_{1:n}\) and \(\tilde{Z}_{1:n}\) have their respective distribution functions, for \(x\ge 0\),

$$\begin{aligned} H_1(x)=1-\psi _1\big (n \phi _1\big (\bar{F}(\lambda x)\big )\big ),\quad \tilde{H}_1(x)=1-\psi _2\big (n \phi _2\big (\bar{F}(\lambda x)\big )\big ), \end{aligned}$$

and the corresponding density functions are

$$\begin{aligned} h_1(x)= & {} \psi _1'\big (n\phi _1\big (\bar{F}(\lambda x)\big )\big )\frac{n\lambda f(\lambda x)}{\psi _1'\big (\phi _1\big (\bar{F}(\lambda x)\big )\big )},\\ \tilde{h}_1(x)= & {} \psi _2'\big (n\phi _2\big (\bar{F}(\lambda x)\big )\big )\frac{n\lambda f(\lambda x)}{\psi _2'\big (\phi _2\big (\bar{F}(\lambda x)\big )\big )}. \end{aligned}$$

Also denote, for \(x\ge 0\),

$$\begin{aligned} L_2(x;\lambda )= & {} \bar{F}^{-1}\big (\psi _1\big (n^{-1}\phi _1\big (\psi _2(n\phi _2(\bar{F}(\lambda x)))\big )\big )\big ),\\ L_3(x;\lambda )= & {} \bar{F}^{-1}\big (\psi _2\big (n^{-1}\phi _2\big (\psi _1(n\phi _1(\bar{F}(\lambda x)))\big )\big )\big ). \end{aligned}$$

Then, for \(x\ge 0\),

$$\begin{aligned} H_1^{-1}\big (\tilde{H}_1(x)\big )= & {} \frac{1}{\lambda }L_2(x;\lambda ),\quad \tilde{H}_1^{-1}\big (H_1(x)\big )=\frac{1}{\lambda }L_3(x;\lambda ),\\ h_1\big (H_1^{-1}\big (\tilde{H}_1(x)\big )\big )= & {} \psi _1'\big (n \phi _1\big (\bar{F}\big (L_2(x;\lambda )\big )\big )\big ) \frac{n\lambda f\big (L_2(x;\lambda )\big )}{\psi _1'\big (\phi _1\big (\bar{F}(L_2(x;\lambda ))\big )\big )},\nonumber \end{aligned}$$
(5.9)

and

$$\begin{aligned} \tilde{h}_1\big (\tilde{H}_1^{-1}\big (H_1(x)\big )\big )= \psi _2'\big (n \phi _2\big (\bar{F}\big (L_3(x;\lambda )\big )\big )\big ) \frac{n\lambda f\big (L_3(x;\lambda )\big )}{\psi _2'\big (\phi _2\big (\bar{F}(L_3(x;\lambda ))\big )\big )}. \end{aligned}$$

First, let us assume that \(\psi _1\) is log-concave. Since \(\phi _2(\psi _1(x))\) is increasing and convex, \(\phi _1(\psi _2(x))\) is increasing and concave, and hence \(\frac{\psi _2'(x)}{\psi _1'(\phi _1(\psi _2(x)))}\) is decreasing. In view of \(\phi _1(\psi _2(0))=0\), by Proposition 21.A.11 in Marshall and Olkin (2007), \(\phi _1(\psi _2(x))\) is sub-additive, and thus

$$\begin{aligned} \phi _1\big (\bar{F}(L_2(x;\lambda ))\big )=\frac{1}{n}\phi _1\big (\psi _2\big (n\phi _2\big (\bar{F}(\lambda x)\big )\big )\big )\le \phi _1\big (\bar{F}(\lambda x)\big ),\quad x\ge 0. \end{aligned}$$
(5.10)
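As a side check of the sub-additivity used in (5.10), take the illustrative generator choices \(\psi _1(x)=e^{-x}\) (independence) and the Clayton generator \(\psi _2(x)=(1+x)^{-\theta }\); these are our own examples, not taken from the text. Then \(\phi _1(\psi _2(x))=\theta \ln (1+x)\), which is concave and vanishes at 0, and its sub-additivity can be verified directly:

```python
import numpy as np

# Illustrative check of sub-additivity with hypothetical generators:
# psi1(x) = exp(-x), so phi1(u) = -log(u); psi2(x) = (1 + x)^(-theta)
# (Clayton). Then phi1(psi2(x)) = theta * log(1 + x), concave with
# value 0 at x = 0, hence sub-additive.
theta = 2.0

def phi1_psi2(x):
    return theta * np.log1p(x)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 1000)
y = rng.uniform(0.0, 10.0, 1000)

# Sub-additivity: f(x + y) <= f(x) + f(y).
assert np.all(phi1_psi2(x + y) <= phi1_psi2(x) + phi1_psi2(y) + 1e-12)
print("phi1(psi2(.)) is sub-additive")
```

For this particular pair the inequality is also immediate analytically, since \(1+x+y\le (1+x)(1+y)\) for \(x,y\ge 0\).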

Then the increasing property of \(\phi _1(\bar{F}(x))\) implies \(\lambda x\ge L_2(x;\lambda )\). Since the hazard rate \(s(x)=f(x)/\bar{F}(x)\) is decreasing (F is DHR), then

$$\begin{aligned} s(\lambda x)\le s\big (L_2(x;\lambda )\big ),\qquad x\ge 0. \end{aligned}$$
(5.11)

Since \(\psi _1\) is log-concave, \(\psi _1'/\psi _1\) is decreasing, and thus (5.10) implies

$$\begin{aligned} \frac{\psi _1'\big (\phi _1\big (\bar{F}(L_2(x;\lambda ))\big )\big )}{\bar{F}\big (L_2(x;\lambda )\big )} \ge \frac{\psi _1'\big (\phi _1\big (\bar{F}(\lambda x)\big )\big )}{\bar{F}(\lambda x)},\qquad x\ge 0. \end{aligned}$$
(5.12)

In view of \(n\phi _2(\bar{F}(\lambda x))\ge \phi _2(\bar{F}(\lambda x))\) and the decreasing property of \(\frac{\psi _2'(x)}{\psi _1'(\phi _1(\psi _2(x)))}\), we have

$$\begin{aligned} \frac{\psi _2'\big (n\phi _2\big (\bar{F}(\lambda x)\big )\big )}{\psi _1'\big (n \phi _1\big (\bar{F}\big (L_2(x;\lambda )\big )\big )\big )} \le \frac{\psi _2'\big (\phi _2\big (\bar{F}(\lambda x)\big )\big )}{\psi _1'\big (\phi _1\big (\bar{F}(\lambda x)\big )\big )},\qquad x\ge 0. \end{aligned}$$
(5.13)

As a consequence, for \(x\ge 0\),

$$\begin{aligned} h_1\big (H_1^{-1}\big (\tilde{H}_1(x)\big )\big )-\tilde{h}_1(x)= & {} n\lambda \left[ \psi _1'\big (n \phi _1\big (\bar{F}\big (L_2(x;\lambda )\big )\big )\big ) \frac{f\big (L_2(x;\lambda )\big )}{\psi _1'\big (\phi _1\big (\bar{F}\big (L_2(x;\lambda )\big )\big )\big )}\right. \\&\left. -\psi _2'\big (n\phi _2\big (\bar{F}(\lambda x)\big )\big )\frac{f(\lambda x)}{\psi _2'\big (\phi _2\big (\bar{F}(\lambda x)\big )\big )}\right] \\\ge & {} n\lambda \bar{F}(\lambda x)\left[ \psi _1'\big (n \phi _1\big (\bar{F}\big (L_2(x;\lambda )\big )\big )\big ) \frac{s\big (L_2(x;\lambda )\big )}{\psi _1'\big (\phi _1\big (\bar{F}(\lambda x)\big )\big )}\right. \\&\left. -\psi _2'\big (n\phi _2\big (\bar{F}(\lambda x)\big )\big )\frac{s(\lambda x)}{\psi _2'\big (\phi _2\big (\bar{F}(\lambda x)\big )\big )}\right] \\\ge & {} 0, \end{aligned}$$

where the first inequality is due to (5.12), and the second one follows from (5.11) and (5.13).

Secondly, we assume that \(\psi _2\) is log-concave. Since \(\phi _2(\psi _1(x))\) is convex, \(\frac{\psi _1'(x)}{\psi _2'(\phi _2(\psi _1(x)))}\) is increasing. In view of \(\phi _2(\psi _1(0))=0\), by Proposition 21.A.11 in Marshall and Olkin (2007) we conclude that \(\phi _2(\psi _1(x))\) is super-additive, and thus

$$\begin{aligned} \phi _2\big (\bar{F}(L_3(x;\lambda ))\big )=\frac{1}{n}\phi _2\big (\psi _1\big (n\phi _1\big (\bar{F}(\lambda x)\big )\big )\big )\ge \phi _2\big (\bar{F}(\lambda x)\big ),\qquad x\ge 0. \end{aligned}$$
(5.14)

So, the increasing property of \(\phi _2\big (\bar{F}(x)\big )\) implies \(\lambda x\le L_3(x;\lambda )\). Due to the decreasing property of s(x), we have

$$\begin{aligned} s(\lambda x)\ge s\big (L_3(x;\lambda )\big ),\qquad x\ge 0. \end{aligned}$$
(5.15)

Since \(\psi _2\) is log-concave, \(\psi _2'/\psi _2\) is decreasing, and thus (5.14) implies

$$\begin{aligned} \frac{\psi _2'\big (\phi _2\big (\bar{F}(L_3(x;\lambda ))\big )\big )}{\bar{F}\big (L_3(x;\lambda )\big )} \le \frac{\psi _2'\big (\phi _2\big (\bar{F}(\lambda x)\big )\big )}{\bar{F}(\lambda x)},\qquad x\ge 0. \end{aligned}$$
(5.16)

In view of \(n\phi _1(\bar{F}(\lambda x))\ge \phi _1(\bar{F}(\lambda x))\) and the increasing property of \(\frac{\psi _1'(x)}{\psi _2'(\phi _2(\psi _1(x)))}\), we have

$$\begin{aligned} \frac{\psi _1'\big (n\phi _1\big (\bar{F}(\lambda x)\big )\big )}{\psi _2'\big (n \phi _2\big (\bar{F}\big (L_3(x;\lambda )\big )\big )\big )} \ge \frac{\psi _1'\big (\phi _1\big (\bar{F}(\lambda x)\big )\big )}{\psi _2'\big (\phi _2\big (\bar{F}(\lambda x)\big )\big )},\qquad x\ge 0. \end{aligned}$$
(5.17)

As a consequence, it holds that, for \(x\ge 0\),

$$\begin{aligned} \tilde{h}_1\big (\tilde{H}_1^{-1}\big (H_1(x)\big )\big )-h_1(x)= & {} n\lambda \bigg [\psi _2'\big (n \phi _2\big (\bar{F}\big (L_3(x;\lambda )\big )\big )\big ) \frac{f\big (L_3(x;\lambda )\big )}{\psi _2'\big (\phi _2\big (\bar{F}\big (L_3(x;\lambda )\big )\big )\big )}\\&-\psi _1'\big (n\phi _1\big (\bar{F}(\lambda x)\big )\big )\frac{f(\lambda x)}{\psi _1'\big (\phi _1\big (\bar{F}(\lambda x)\big )\big )}\bigg ]\\\le & {} n\lambda \bar{F}(\lambda x)\bigg [\psi _2'\big (n \phi _2\big (\bar{F}\big (L_3(x;\lambda )\big )\big )\big ) \frac{s\big (L_3(x;\lambda )\big )}{\psi _2'\big (\phi _2\big (\bar{F}(\lambda x)\big )\big )}\\&-\psi _1'\big (n\phi _1\big (\bar{F}(\lambda x)\big )\big )\frac{s(\lambda x)}{\psi _1'\big (\phi _1\big (\bar{F}(\lambda x)\big )\big )}\bigg ]\\\le & {} 0, \end{aligned}$$

where the first inequality follows from (5.16), and the second one follows from (5.15) and (5.17).

In either case, we conclude that \(\tilde{h}_1\big (\tilde{H}_1^{-1}(x)\big )\le h_1\big (H_1^{-1}(x)\big )\) for all \(x\in (0,1)\), and thus \(Z_{1:n}\le _{\mathrm{disp}}\tilde{Z}_{1:n}\) follows directly from (3.B.11) of Shaked and Shanthikumar (2007). \(\square \)
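Theorem 5.4(i) can be illustrated numerically under hypothetical choices (ours, not from the text): \(\psi _1(x)=e^{-x}\) (log-concave, the independence generator), \(\psi _2(x)=(1+x)^{-\theta }\) (Clayton, so \(\phi _2(\psi _1(x))=e^{x/\theta }-1\) is convex), and F exponential with rate 1 (constant hazard, hence DHR). Both minimums then have closed-form quantile functions, and \(Z_{1:n}\le _{\mathrm{disp}}\tilde{Z}_{1:n}\) is equivalent to the quantile difference being increasing in p:

```python
import numpy as np

# Numerical illustration of Theorem 5.4(i), under hypothetical choices:
# psi1(x) = exp(-x), psi2(x) = (1 + x)^(-theta) (Clayton), F = Exp(1).
theta, n, lam = 2.0, 3, 1.0
p = np.linspace(0.01, 0.99, 500)

# Quantile of Z_{1:n}: H1(x) = 1 - psi1(n * phi1(exp(-lam*x)))
#                            = 1 - exp(-n*lam*x).
q1 = -np.log1p(-p) / (n * lam)

# Quantile of Z~_{1:n}: H~1(x) = 1 - (1 + n*(exp(lam*x/theta) - 1))^(-theta),
# inverted in closed form.
q2 = (theta / lam) * np.log1p(((1.0 - p) ** (-1.0 / theta) - 1.0) / n)

# Dispersive order: the quantile difference q2 - q1 is increasing in p.
assert np.all(np.diff(q2 - q1) >= -1e-12)
print("Z_{1:n} <=_disp Z~_{1:n} verified numerically")
```

Such a grid check is of course no proof; it merely confirms, for one parameter configuration, the ordering established above.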