1 Introduction

Huang and Kotz [24] considered a polynomial-type single parameter extension of the classical Farlie–Gumbel–Morgenstern (FGM) family of distributions. The distribution function (df) which they suggested is

$$\begin{aligned} F_{X,Y}(x,y)=F_X(x)F_Y(y)\left[ 1+\lambda (1-F_X^p(x))(1-F_Y^p(y))\right] ,\quad p\ge 1, \end{aligned}$$
(1.1)

denoted by HK–FGM\((\lambda ,p).\) The corresponding probability density function (pdf) is given by

$$\begin{aligned} f_{X,Y}(x,y)=f_X(x)f_Y(y)\left[ 1+\lambda ((1+p)F_X^p(x)-1)((1+p)F_Y^p(y)-1)\right] ,\qquad \end{aligned}$$
(1.2)

where \(F_X(x)\) and \(F_Y(y)\) are df’s, while \(f_X(x)\) and \(f_Y(y)\) are pdf’s of the random variables (rv’s) X and Y, respectively. The admissible range of the associated parameter \(\lambda \) is \(-\max (1,p)^{-2}\le \lambda \le p^{-1}\), and since \(p\ge 1,\) this admissible range becomes \(-p^{-2}\le \lambda \le p^{-1}.\) When the marginals are uniform, the correlation between the components does not exceed \(\frac{1}{3}\) for the classical FGM family, whereas the modified HK–FGM version allows correlation up to 0.39. The modification of the FGM distribution by Huang and Kotz [24] paved the way for many research papers on modifications of FGM distributions allowing higher correlation. Meanwhile, the simple analytical form of the HK–FGM family has aroused the interest of many researchers, e.g., Amblard and Girard [6], Bairamov and Kotz [8], Fischer and Klein [19] and Mokhlis and Khames [25, 26], among others.

The generalized exponential (GE) distribution is defined as a particular case of the Gompertz–Verhulst df \(G(x)=(1 -\rho \exp (-\theta x))^\alpha ,\) for \(x> \frac{1}{\theta }\log \rho ,~ \rho ,\theta ,\alpha > 0\) (see Gompertz [21] and Verhulst [32,33,34]), when \(\rho = 1.\) For more details on the Gompertz–Verhulst df, see Ahsanullah et al. [3], Ahuja [4] and Ahuja and Nash [5]. Therefore, X is a two-parameter generalized exponential rv if it has the df

$$\begin{aligned} F_X(x) = (1 -\exp (-\theta x))^\alpha ; x> 0; \theta> 0; \alpha > 0, \end{aligned}$$

denoted by GE\((\theta ;\alpha ).\) This distribution generalizes the exponential distribution and is more flexible: the hazard function of the exponential distribution is constant, whereas the hazard function of the GE distribution can be constant, increasing or decreasing. Gupta and Kundu [22] showed that the kth moment of GE\((\theta ;\alpha )\) is

$$\begin{aligned} \mu _k=\frac{\alpha k!}{\theta ^k}\sum \limits _{i=0}^{\aleph (\alpha -1)}\frac{(-1)^i}{(i+1)^{k+1}}\left( {\begin{array}{c} \alpha -1\\ \scriptstyle {i}\end{array}}\right) , \end{aligned}$$

where \(\aleph (x)=\infty \) if x is noninteger, and \(\aleph (x)=x\) if x is integer. Moreover, the mean, variance and moment generating function of GE\((\theta ;\alpha )\) are given, respectively, by

$$\begin{aligned} \mu _1=\text{ E }(X)=\frac{B(\alpha )}{\theta },~~\text{ Var }(X)=\frac{C(\alpha )}{\theta ^2},~~M_X(t)=\alpha \beta \left( \alpha ,1-\frac{t}{\theta }\right) , \end{aligned}$$
(1.3)

where \(B(\alpha )=\Psi (\alpha +1)-\Psi (1),\) \(C(\alpha )=\Psi '(1)-\Psi '(\alpha +1),\) \(\beta (a,b)=\frac{\Gamma (a)\Gamma (b)}{\Gamma (a+b)}\) and \(\Psi (.)\) is the digamma function, while \(\Psi '(.)\) is its derivative (the trigamma function). Recently, Tahmasebi and Jafari [29] studied some properties of the Morgenstern type bivariate generalized exponential distribution (denoted by MTBGED). Also, they studied some distributional properties of concomitants of order statistics, as well as record values, of this df. Moreover, they obtained some recurrence relations between the moments of concomitants of order statistics.
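
As a quick numerical sanity check of (1.3), the following minimal sketch (assuming NumPy and SciPy are available; the parameter values are purely illustrative) compares the closed-form mean and variance of GE\((\theta ;\alpha )\) with Monte Carlo estimates obtained by inverting the df.

```python
# Minimal check of E(X) = B(alpha)/theta and Var(X) = C(alpha)/theta^2 for
# X ~ GE(theta, alpha); the parameter values below are illustrative only.
import numpy as np
from scipy.special import digamma, polygamma

def B(a):
    return digamma(a + 1.0) - digamma(1.0)            # B(alpha) = Psi(alpha+1) - Psi(1)

def C(a):
    return polygamma(1, 1.0) - polygamma(1, a + 1.0)  # C(alpha) = Psi'(1) - Psi'(alpha+1)

theta, alpha = 2.0, 3.0
# Inverting F(x) = (1 - exp(-theta*x))^alpha gives X = -log(1 - U^(1/alpha))/theta.
rng = np.random.default_rng(0)
x = -np.log(1.0 - rng.uniform(size=10**6) ** (1.0 / alpha)) / theta

print(B(alpha) / theta, x.mean())     # mean: closed form vs. simulation
print(C(alpha) / theta**2, x.var())   # variance: closed form vs. simulation
```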

In this paper, all the results of Tahmasebi and Jafari [29] are extended to the HK–FGM family with two marginals \(F_X\) and \(F_Y,\) where \(X\sim \mathrm{GE}(\theta _1;\alpha _1)\) and \(Y\sim \mathrm{GE}(\theta _2;\alpha _2)\) (denoted by HK–FGM-GE\((\theta _1,\alpha _1;\theta _2,\alpha _2)).\) Moreover, some new results that were not obtained by Tahmasebi and Jafari [29] for the FGM family are given, such as recurrence relations for the single, as well as the product, moments of concomitants of order statistics, the rank of concomitants of order statistics, and the asymptotic behavior of concomitants of order statistics. Finally, some essential corrections of Tahmasebi and Jafari [29] are made. Namely, both the variance and the correlation of concomitants of order statistics, as well as all the results of Sect. 4, concerning the concomitants of record values, are corrected.

It is worth mentioning that some of the results presented in this paper are related to the paper of Beg and Ahsanullah [12]. Namely, Beg and Ahsanullah [12] considered concomitants of generalized order statistics (the generalized order statistics constitute a unified model for ordered random variables that includes order statistics and record values, among others) for the FGM family, derived the joint distribution of concomitants of two generalized order statistics and obtained their product moments. Tahmasebi and Behboodian [27], Tahmasebi et al. [30], Tahmasebi and Jafari [28] and Tahmasebi et al. [31] are further recent relevant works on this subject.

2 The HK–FGM-GE and Some of its Properties

The joint df and pdf of \((X,Y)\) are defined by (1.1) and (1.2), respectively, where \(X\sim \mathrm{GE}(\theta _1;\alpha _1)\) and \(Y\sim \mathrm{GE}(\theta _2;\alpha _2).\) Therefore, it is easy to show that the (n, m)th joint moments of HK–FGM-GE\((\theta _1,\alpha _1;\theta _2,\alpha _2)\) are given by

$$\begin{aligned} \text{ E }(X^{n}Y^{m})= & {} \text{ E }(X^{n})\text{ E }(Y^{m}) +\lambda (\text{ E }(U^{n})-\text{ E }(X^{n}))(\text{ E }(V^{m})-\text{ E }(Y^{m})),\nonumber \\&n,m=1,2,\ldots , \end{aligned}$$
(2.1)

where \(U\sim \mathrm{GE}(\theta _1;\alpha _1(p+1))\) and \(V\sim \mathrm{GE}(\theta _2;\alpha _2(p+1)).\) Thus, by combining (2.1) and (1.3), we get

$$\begin{aligned} E(XY)=\frac{B(\alpha _1)B(\alpha _2)+\lambda D(\alpha _1,p)D(\alpha _2,p)}{\theta _1\theta _2}, \end{aligned}$$

where \(D(\alpha _i,p)=B(\alpha _i(1+p))-B(\alpha _i), i=1,2.\) Therefore, the coefficient of correlation between X and Y is

$$\begin{aligned} \rho _{_{X,Y}}=\frac{\lambda D(\alpha _1,p)D(\alpha _2,p)}{\sqrt{C(\alpha _1)C(\alpha _2)}}=\lambda g(\alpha _1,\alpha _2,p). \end{aligned}$$
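
The expression for E(XY), and hence \(\rho _{_{X,Y}},\) can be cross-checked by integrating directly against the joint pdf (1.2); a minimal sketch, assuming SciPy and illustrative parameter values with \(\lambda \) inside the admissible range:

```python
# Check E(XY) = [B(a1)B(a2) + lam*D(a1,p)*D(a2,p)]/(t1*t2) against a direct
# double integration of x*y times the joint pdf (1.2); values are illustrative.
import numpy as np
from scipy.special import digamma
from scipy.integrate import dblquad

B = lambda a: digamma(a + 1.0) - digamma(1.0)
D = lambda a, p: B(a * (1.0 + p)) - B(a)

t1, a1, t2, a2, lam, p = 1.5, 2.0, 0.8, 3.0, 0.2, 2.0   # -1/p^2 <= lam <= 1/p holds

F1 = lambda x: (1.0 - np.exp(-t1 * x)) ** a1
f1 = lambda x: a1 * t1 * np.exp(-t1 * x) * (1.0 - np.exp(-t1 * x)) ** (a1 - 1.0)
F2 = lambda y: (1.0 - np.exp(-t2 * y)) ** a2
f2 = lambda y: a2 * t2 * np.exp(-t2 * y) * (1.0 - np.exp(-t2 * y)) ** (a2 - 1.0)

pdf = lambda x, y: f1(x) * f2(y) * (1.0 + lam * ((1.0 + p) * F1(x) ** p - 1.0)
                                              * ((1.0 + p) * F2(y) ** p - 1.0))

num, _ = dblquad(lambda y, x: x * y * pdf(x, y), 0.0, 40.0, 0.0, 40.0)
closed = (B(a1) * B(a2) + lam * D(a1, p) * D(a2, p)) / (t1 * t2)
print(num, closed)   # the two values should agree to integration accuracy
```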

Clearly, for any \(p\ge 1,\) the function \(g(\alpha _1,\alpha _2,p)\) is positive and increasing with respect to each of \(\alpha _i, i=1,2.\) Therefore, if \(\lambda >0,\) then \(\rho _{_{X,Y}}\) is a positive and increasing function, while if \(\lambda <0,\) then \(\rho _{_{X,Y}}\) is a negative and decreasing function with respect to each of \(\alpha _1\) and \(\alpha _2.\) Moreover, we can show that

$$\begin{aligned} \lim _{\mathop {\alpha _2\rightarrow \infty }\limits ^{\alpha _1\rightarrow \infty }}g(\alpha _1,\alpha _2,p)=\frac{6(\log (p+1))^2}{\pi ^2}, \lim _{\mathop {\alpha _2\rightarrow 0^+}\limits ^{\alpha _1\rightarrow 0^+}}g(\alpha _1,\alpha _2,p)=0. \end{aligned}$$

Therefore, \(\underline{\rho }(p)=-\frac{6(\log (p+1))^2}{\pi ^2p^2}\le \rho _{_{X,Y}}\le \frac{6(\log (p+1))^2}{\pi ^2p}=\overline{\rho }(p)\) (note that \(-p^{-2}\le \lambda \le p^{-1}\)), which yields \(\rho _{_{X,Y}}\rightarrow 0,\) as \(p\rightarrow \infty ,\) and \(|\rho _{_{X,Y}}|\le 0.2921,\) when \(p=1\) (see Tahmasebi and Jafari [29]). However, we can show that the upper bound \(\overline{\rho }(p),\) as a function of p, is increasing on the interval (1, 3.9241) and decreasing on the interval \((3.9241,\infty ).\) Therefore, we get \(\max \limits _{p\ge 1}\overline{\rho }(p)\approx \overline{\rho }(3.9241)= 0.3937.\) On the other hand, since \(\frac{\log (1+p)}{p}\) is a strictly decreasing function of p, we get \(\min \limits _{p\ge 1}\underline{\rho }(p)=\underline{\rho }(1).\) Consequently, \(\underline{\rho }(1)\le \rho _{_{X,Y}}\le 0.3937,\) which is a significant improvement compared with the upper bound “0.2921” obtained by Tahmasebi and Jafari [29]. This fact gives a satisfactory motivation to deal with HK–FGM-GE rather than MTBGED. It is worth mentioning that the interval of p over which \(\overline{\rho }(p)\) improves on the bound of Tahmasebi and Jafari [29] is (1, 18.1] (\(\overline{\rho }(18.1)=0.2922302\)), while \(\overline{\rho }(p)\) is worse than their bound on \([18.2,\infty )\) (\(\overline{\rho }(18.2)=0.291645\)); moreover, \(\underline{\rho }(p)\) is never better than the lower bound given by Tahmasebi and Jafari [29].
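
The behavior of \(\overline{\rho }(p)\) claimed above is easy to confirm numerically; a small sketch assuming SciPy:

```python
# Locate the maximizer of rho_bar(p) = 6*(log(p+1))^2/(pi^2*p) over p >= 1.
import numpy as np
from scipy.optimize import minimize_scalar

rho_bar = lambda p: 6.0 * np.log(p + 1.0) ** 2 / (np.pi**2 * p)
res = minimize_scalar(lambda p: -rho_bar(p), bounds=(1.0, 100.0), method="bounded")
print(res.x, rho_bar(res.x))          # maximizer near p = 3.92, maximum near 0.3937
print(rho_bar(1.0))                   # about 0.2921, the p = 1 (MTBGED) bound
print(rho_bar(18.1), rho_bar(18.2))   # bracketing the crossing of 0.2921
```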

The conditional df of Y given \(X =x\) is given by

$$\begin{aligned} F_{Y|X}(y|x)=F_Y(y)\left[ 1-\lambda (1-F_Y^p(y))((1+p)F_X^p(x)-1)\right] . \end{aligned}$$
(2.2)

Therefore, the regression curve of Y given \(X =x\) for HK–FGM-GE is

$$\begin{aligned} \text{ E }(Y|X=x)= & {} \text{ E }(Y)+\lambda ((p+1)F_X^p (x)-1)(\text{ E }(V)-\text{ E }(Y))\\= & {} \frac{1}{\theta _2}\left[ B(\alpha _2)+\lambda D(\alpha _2,p)((p+1)(1-\mathrm{e}^{-\theta _1 x})^{\alpha _1p}-1)\right] , \end{aligned}$$

where \(V\sim \mathrm{GE}(\theta _2;\alpha _2(p+1))\); note that the conditional expectation is nonlinear in x.
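
A short sketch of the regression curve (assuming SciPy; the parameter values are illustrative) makes the nonlinearity in x visible:

```python
# Evaluate E(Y | X = x) for HK-FGM-GE at a few x values; illustrative parameters.
import numpy as np
from scipy.special import digamma

B = lambda a: digamma(a + 1.0) - digamma(1.0)
D = lambda a, p: B(a * (1.0 + p)) - B(a)

t1, a1, t2, a2, lam, p = 1.5, 2.0, 0.8, 3.0, 0.2, 2.0

def cond_mean(x):
    Fx_p = (1.0 - np.exp(-t1 * x)) ** (a1 * p)       # F_X(x)^p
    return (B(a2) + lam * D(a2, p) * ((p + 1.0) * Fx_p - 1.0)) / t2

for x in (0.5, 1.0, 2.0, 4.0):
    print(x, cond_mean(x))   # increments are unequal: the curve is nonlinear in x
```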

3 Concomitants of Order Statistics Based on HK–FGM-GE

The concept of concomitants of order statistics was first introduced by David [15] and almost simultaneously, under the name of induced order statistics, by Bhattacharya [13]. Suppose \((X_i, Y_i), i = 1,2,\ldots ,n,\) is a random sample from a bivariate df \(F_{X,Y}(x,y).\) If we order the sample by the \(X-\)variate and obtain the order statistics \(X_{1:n}\le X_{2:n}\le \cdots \le X_{n:n}\) for the X sample, then the \(Y-\)variate associated with the rth order statistic \(X_{r:n}\) is called the concomitant of the rth order statistic and is denoted by \(Y_{[r:n]}.\) Concomitants of order statistics arise in several applications. In selection procedures, items or subjects may be chosen on the basis of their X characteristic, and an associated characteristic Y that is hard to measure, or can be observed only later, may be of interest. Another application of concomitants of order statistics is in ranked set sampling. This is a sampling scheme for situations where measurement of the variable of primary interest for sampled items is expensive or time-consuming, while ranking of a set of items related to the variable of interest can be done easily. A comprehensive review of ranked set sampling can be found in Chen et al. [14]. Concomitants of order statistics have also been used in estimation and hypotheses testing problems. Another natural application of concomitants of order statistics is the estimation of parameters for multivariate data sets that are subject to some form of type II censoring. For a recent comprehensive review of these applications, see David and Nagaraja [16] and Sects. 9.8 and 11.7 of David and Nagaraja [17].

3.1 Marginal Distribution of Concomitants of Order Statistics Based on HK–FGM-GE

Let \(X\sim \mathrm{GE}(\theta _1;\alpha _1)\) and \(Y\sim \mathrm{GE}(\theta _2;\alpha _2).\) Since the conditional pdf of \(Y_{[r:n]}\) given \(X_{r:n}=x\) is \(f_{Y_{[r:n]}|X_{r:n}}(y|x)=f_{Y|X}(y|x)\) (cf. Galambos [20]; see also Tahmasebi and Jafari [29]), the pdf of \(Y_{[r:n]}\) is given by

$$\begin{aligned} f_{[r:n]}(y)= & {} f_Y(y)\left[ 1+(1-(1+p)F_Y^p(y))\Delta _{r,n:p}\right] \nonumber \\= & {} (1+\Delta _{r,n:p})f_Y(y)-\Delta _{r,n:p}f_V(y),~y>0, \end{aligned}$$
(3.1)

where \(V\sim \mathrm{GE}(\theta _2;\alpha _2(p+1))\) and

$$\begin{aligned} \Delta _{r,n:p}=\lambda \left( 1-\frac{(1+p)\beta (r+p,n-r+1)}{\beta (r,n-r+1)}\right) . \end{aligned}$$

Therefore, the moment generating function of \(Y_{[r:n]}\) is given by

$$\begin{aligned} M_{[r:n]}(t)= & {} \alpha _2\left[ (1+\Delta _{r,n:p})\beta \left( \alpha _2,1-\frac{t}{\theta _2}\right) \right. \nonumber \\&\left. -(p+1)\Delta _{r,n:p}\beta \left( \alpha _2(p+1), 1-\frac{t}{\theta _2}\right) \right] . \end{aligned}$$
(3.2)

Thus, by using (3.1) (or by using (3.2)), the kth moment of \(Y_{[r:n]}\) is given by

$$\begin{aligned} \mu _{[r:n]}^{(k)}= & {} \text{ E }[Y_{[r:n]}^{k}]=(1+\Delta _{r,n:p})\text{ E }[Y^{k}]-\Delta _{r,n:p}\text{ E }[V^{k}]\\= & {} (1+\Delta _{r,n:p})\sum _{i=0}^{\aleph (\alpha _2-1)} \frac{\alpha _2 k!(-1)^i}{\theta _{2}^{k}(i+1)^{k+1}} \left( \begin{array}{c} \scriptstyle {\alpha _2-1}\\ \scriptstyle {i}\end{array}\right) \\&-\Delta _{r,n:p}\sum _{i=0}^{\aleph (\alpha _2(p+1)-1)} \frac{\alpha _2(p+1) k!(-1)^i}{\theta _{2}^{k}(i+1)^{k+1}} \left( \begin{array}{c} \scriptstyle {\alpha _2(p+1)-1}\\ \scriptstyle {i}\end{array}\right) . \end{aligned}$$

Clearly, all the moments exist, and the above sums are finite for integer values of \(\alpha _2\) and \(\alpha _2p.\) Moreover, by putting \(k=1,\) we get the mean of \(Y_{[r:n]}\)

$$\begin{aligned} \mu _{[r:n]}=\frac{1}{\theta _2}\left[ B(\alpha _2)-\Delta _{r,n:p}D(\alpha _2,p)\right] . \end{aligned}$$
(3.3)

Thus, the difference between the means of Y and \(Y_{[r:n]}\) is \(h(r,\lambda ,\alpha _2,p)=-\frac{\Delta _{r,n:p}D(\alpha _2,p)}{\theta _2},\) which implies that \(h(r,\lambda ,\alpha _2,p)=0\) if \(\lambda =0\) or \(\frac{(p+1)\beta (r+p,n-r+1)}{\beta (r,n-r+1)}=1.\) Since \(B(\alpha )\) is an increasing function of \(\alpha ,\) we have \(D(\alpha ,p)\ge 0.\) Therefore, \(h(r,\lambda ,\alpha _2,p)\) has the same sign as \(-\Delta _{r,n:p},\) which means that \(h(r,\lambda ,\alpha _2,p)>0\) if and only if either \(\lambda >0\) and \((p+1)\beta (r+p,n-r+1)>\beta (r,n-r+1),\) or \(\lambda <0\) and \((p+1)\beta (r+p,n-r+1)<\beta (r,n-r+1).\) Finally, by using (3.3), we get the following general recurrence relations:

Theorem 3.1

For any \(1\le r\le n-3,\) we get

$$\begin{aligned} (r+1)\mu _{[r+2:n]}=(2r+p+1)\mu _{[r+1:n]}-(p+r)\mu _{[r:n]}. \end{aligned}$$
(3.4)

Moreover, for all \(n>2,\) we get

$$\begin{aligned} (n+p)\mu _{[r:n]}=(2n+p-1)\mu _{[r:n-1]}-(n-1)\mu _{[r:n-2]}. \end{aligned}$$
(3.5)

Proof

It is easy to check that

$$\begin{aligned} \Delta _{r+1,n:p}=\Delta _{r,n:p}-\frac{\lambda p(p+1)}{r}\frac{\beta (r+p,n-r+1)}{\beta (r,n-r+1)} \end{aligned}$$

and

$$\begin{aligned} \Delta _{r+2,n:p}=\Delta _{r,n:p}-\frac{\lambda p(p+1)(2r+p+1)}{r(r+1)}\frac{\beta (r+p,n-r+1)}{\beta (r,n-r+1)}, \end{aligned}$$

which yield, after some algebra, the first recurrence relation (3.4). Also, we can check that

$$\begin{aligned} \Delta _{r,n-1:p}=\Delta _{r,n:p}-\frac{\lambda p(p+1)}{n}\frac{\beta (r+p,n-r+1)}{\beta (r,n-r+1)} \end{aligned}$$

and

$$\begin{aligned} \Delta _{r,n-2:p}=\Delta _{r,n:p}-\frac{\lambda p(p+1)(2n+p-1)}{n(n-1)}\frac{\beta (r+p,n-r+1)}{\beta (r,n-r+1)}. \end{aligned}$$

The second recurrence relation (3.5) follows by combining the last two relations, after some algebra. \(\square \)
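
Both recurrence relations can be verified numerically from (3.3); a minimal sketch assuming SciPy, with illustrative parameter values:

```python
# Numerical check of the recurrences (3.4) and (3.5) using mu_{[r:n]} from (3.3).
import numpy as np
from scipy.special import digamma, beta as Beta

B = lambda a: digamma(a + 1.0) - digamma(1.0)

def Delta(r, n, p, lam):
    return lam * (1.0 - (1.0 + p) * Beta(r + p, n - r + 1.0) / Beta(r, n - r + 1.0))

def mu(r, n, p, lam, th2, a2):
    Dv = B(a2 * (1.0 + p)) - B(a2)                  # D(alpha_2, p)
    return (B(a2) - Delta(r, n, p, lam) * Dv) / th2

n, p, lam, th2, a2, r = 10, 2.0, 0.2, 0.8, 3.0, 2
print((r + 1) * mu(r + 2, n, p, lam, th2, a2),
      (2 * r + p + 1) * mu(r + 1, n, p, lam, th2, a2)
      - (p + r) * mu(r, n, p, lam, th2, a2))        # both sides of (3.4)
print((n + p) * mu(r, n, p, lam, th2, a2),
      (2 * n + p - 1) * mu(r, n - 1, p, lam, th2, a2)
      - (n - 1) * mu(r, n - 2, p, lam, th2, a2))    # both sides of (3.5)
```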

Remark 3.1

When \(p=1,\) we get the recurrence relation \(\mu _{[r+2:n]}=2\mu _{[r+1:n]}-\mu _{[r:n]}\) of \(Y_{[r:n]}\) based on MTBGED, which was obtained by Tahmasebi and Jafari [29]. Moreover, it is worth mentioning that the second recurrence relation (3.5) is new even for MTBGED.

Remark 3.2

The recurrence relation (3.4) can provide us with an estimate of p. Namely, based on the relation \(p=\frac{(r+1)\mu _{[r+2:n]}-(2r+1)\mu _{[r+1:n]}+r\mu _{[r:n]}}{\mu _{[r+1:n]}-\mu _{[r:n]}},\) we can suggest the estimator

$$\begin{aligned} \hat{p}=\frac{1}{n-2}\sum _{i=1}^{n-3}\frac{(i+1)Y_{[i+2:n]}-(2i+1)Y_{[i+1:n]}+iY_{[i:n]}}{Y_{[i+1:n]}-Y_{[i:n]}}. \end{aligned}$$

Actually, the suggested estimator \(\hat{p}\) is neither consistent nor unbiased; however, bearing in mind that no estimator of the power parameter p is available in the literature, it can still be used, although it needs further theoretical and practical investigation.
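
A sketch of \(\hat{p}\) in code (the array of concomitants is assumed to be observed; the function merely transcribes the displayed estimator):

```python
import numpy as np

def p_hat(y):
    """Estimator of p from Remark 3.2; y[i-1] plays the role of Y_{[i:n]}."""
    n = len(y)
    total = 0.0
    for i in range(1, n - 2):  # i = 1, ..., n-3, as in the displayed sum
        total += ((i + 1) * y[i + 1] - (2 * i + 1) * y[i] + i * y[i - 1]) \
                 / (y[i] - y[i - 1])
    return total / (n - 2)
```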

By multiplying both sides of (3.1) by \((y-\mu _{[r:n]})^2\) and integrating, we obtain the variance of \(Y_{[r:n]}\) as

$$\begin{aligned} \sigma _{[r:n]}^2= & {} \frac{1}{\theta _2^2}\left[ C(\alpha _2)+\Delta _{r,n:p}(C(\alpha _2)-C(\alpha _2(p+1)))\right. \nonumber \\&\left. -\Delta _{r,n:p} (1+\Delta _{r,n:p})D^2(\alpha _2,p)\right] . \end{aligned}$$
(3.6)

Clearly, when \(p=1,\) we get

$$\begin{aligned} \sigma _{[r:n]}^2=\frac{1}{\theta _2^2}\left[ C(\alpha _2)+\delta _r(C(\alpha _2)-C(2\alpha _2))-\delta _r(1+\delta _r)D^2(\alpha _2)\right] , \end{aligned}$$
(3.7)

where \(\delta _r=\frac{\lambda (n-2r+1)}{n+1}\) and \(D(\alpha _2)=B(2\alpha _2)-B(\alpha _2).\) Formula (3.7) corrects the formula

$$\begin{aligned} \sigma _{[r:n]}^2=\frac{1}{\theta _2^2}\left[ C(\alpha _2)+\delta _r(C(2\alpha _2)-C(\alpha _2))\right] , \end{aligned}$$

which was obtained by Tahmasebi and Jafari [29].

It is well known that in many cases the concomitants of the extremes among the X’s are not (with high probability) extremes among the Y’s (cf. Galambos [20]). This fact has aroused the interest of some researchers in investigating the rank (of \(Y_{[r:n]}\)) \(\mathcal{R}_{[r:n]}=\sum \nolimits _{j=1}^{n}\text{ I }(Y_{[r:n]}-Y_j),\) where \(\text{ I }(x)=1\) if \(x\ge 0\) and \(\text{ I }(x)=0\) if \(x<0.\) The distribution of \(\mathcal{R}_{[r:n]}\) was obtained by David et al. [18]. Barakat and El-Shandidy [9] gave a new representation of the df and the expected value of \(\mathcal{R}_{[r:n]}.\) Namely, for all \(r,s=2,3,\ldots ,n-1,\) we have

$$\begin{aligned} A_{r:n}(s)= & {} P(\mathcal{R}_{[r:n]}=s)=n[\text{ E }(\mathcal{C}(W_{r:n-1},Z_{s:n-1}))-\text{ E }(\mathcal{C}(W_{r-1:n-1},Z_{s:n-1}))\nonumber \\&\quad - \text{ E }(\mathcal{C}(W_{r:n-1},Z_{s-1:n-1}))+\text{ E }(\mathcal{C}(W_{r-1:n-1},Z_{s-1:n-1}))], \end{aligned}$$
(3.8)

where \(\mathcal{C}(.,.)\) is the copula of the bivariate df \(F_{X,Y}(x,y),\) i.e., \(\mathcal{C}(w,z)=wz(1+\lambda (1-w^p)(1-z^p)).\) Moreover, \(W_{j:n}=F_X(X_{j:n})\) and \(Z_{j:n}=F_Y(Y_{j:n})\) are the jth uniform order statistics with expectation \(\text{ E }(W_{j:n})=\text{ E }(Z_{j:n})=\frac{j}{n+1}.\) The representation (3.8) enables us to use the \(\delta -\)method (with one-step Taylor approximation) to compute an approximate formula for the df \(A_{r:n}(s),\) by

$$\begin{aligned}&A_{r:n}(s)\sim n\left[ \mathcal{C}\left( \frac{r}{n},\frac{s}{n}\right) -\mathcal{C}\left( \frac{r-1}{n},\frac{s}{n}\right) - \mathcal{C}\left( \frac{r}{n},\frac{s-1}{n}\right) +\mathcal{C}\left( \frac{r-1}{n},\frac{s-1}{n}\right) \right] \\&\quad =\frac{1+\lambda }{n}\\&\qquad -\frac{\lambda }{n^{p+1}}[rs(r^p+s^p)-(r-1)s((r-1)^p+s^p)-r(s-1)(r^p+(s-1)^p)\\&\qquad +\, (r-1)(s-1)((r-1)^p+(s-1)^p)]\\&\qquad +\,\frac{\lambda }{n^{2p+1}}[r^{p+1}s^{p+1}-(r-1)^{p+1}s^{p+1}\\&\qquad -\,r^{p+1}(s-1)^{p+1}+(r-1)^{p+1}(s-1)^{p+1}]. \end{aligned}$$
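
The approximate df is straightforward to evaluate; a small sketch with illustrative values of n, \(\lambda \) and p:

```python
# One-step Taylor approximation of A_{r:n}(s) via the HK-FGM copula.
def C(w, z, lam, p):
    return w * z * (1.0 + lam * (1.0 - w**p) * (1.0 - z**p))

def A_approx(r, s, n, lam, p):
    return n * (C(r / n, s / n, lam, p) - C((r - 1) / n, s / n, lam, p)
                - C(r / n, (s - 1) / n, lam, p)
                + C((r - 1) / n, (s - 1) / n, lam, p))

n, lam, p = 20, 0.2, 2.0
row = [A_approx(10, s, n, lam, p) for s in range(2, n)]
print(sum(row))   # summing over s = 2, ..., n-1 gives approximately 1
```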

The limiting distribution of \(Y_{[n:n]},\) as \(n\rightarrow \infty ,\) depends on the conditional distribution of Y given X and the marginal distribution of X,  and it is given by the following theorem.

Theorem 3.2

Let \(A_n=\frac{1}{\theta _2}.\) Then

$$\begin{aligned} F_{[n:n]}(A_ny) \mathop {\longrightarrow }\limits _{n}^{w} F_Y(y)\left( 1-\lambda p\left( 1-F_Y^p(y)\right) \right) , \end{aligned}$$

where “\(\mathop {\longrightarrow }\limits _{n}^{w}\)” denotes the weak convergence, as \(n\rightarrow \infty \) (for the definition of the weak convergence, see Galambos [20]) and \(Y\sim \mathrm{GE}(\theta _2;\alpha _2).\)

Proof

First, by applying Theorem 2.1, Part II, of Barakat et al. [10], with \(b=1\) and \(a=n,\) we get

$$\begin{aligned} P(X_{n:n}\le a_nx+b_n)=F_X^n(a_nx+b_n) \mathop {\longrightarrow }\limits _{n}^{w} \mathrm{e}^{-\mathrm{e}^{-x}},\forall x, \end{aligned}$$
(3.9)

where \(X\sim \mathrm{GE}(\theta _1;\alpha _1),\) \(a_n=\frac{1}{\theta _1},\) \(b_n=\frac{1}{\theta _1}\log [\alpha _1 n]\) and [x] means the integer part of x. On the other hand, in view of (2.2), we get

$$\begin{aligned} F_{Y|X}(A_ny|X=a_nx+b_n) \mathop {\longrightarrow }\limits _{n}^{w} T(x,y)=F_Y(y)(1-\lambda p(1-F_Y^p(y))).\qquad \end{aligned}$$
(3.10)

Finally, we can easily check that the df \(F_X(x)\) satisfies the von Mises condition; namely,

$$\begin{aligned} \lim _{x\rightarrow \infty }\frac{\mathrm{d}}{\mathrm{d}x}\left[ \frac{1-F_X(x)}{f_X(x)}\right] =-1+\lim _{x\rightarrow \infty }\frac{1-(1-\mathrm{e}^{-\theta _1x})^{\alpha _1}}{\alpha _1 \mathrm{e}^{-\theta _1x}}=0. \end{aligned}$$
(3.11)

Therefore, in view of Theorem 5.5.1 in Galambos [20], conditions (3.9), (3.10) and (3.11) are sufficient for the relation

$$\begin{aligned} F_{[n:n]}(A_ny) \mathop {\longrightarrow }\limits _{n}^{w} \int _{-\infty }^{\infty }T(x,y)\,\mathrm{d}\left( \mathrm{e}^{-\mathrm{e}^{-x}}\right) =F_Y(y)(1-\lambda p(1-F_Y^p(y))). \end{aligned}$$

This completes the proof. \(\square \)
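
Theorem 3.2 can be illustrated by simulation: the sketch below (assuming SciPy; \(\theta _2=1\) so that \(A_n=1,\) and all other values illustrative) samples Y given X by numerically inverting (2.2), extracts the concomitant of the sample maximum, and compares the empirical df of \(Y_{[n:n]}\) at one point with the stated limit.

```python
# Monte Carlo illustration of Theorem 3.2 for HK-FGM-GE.
import numpy as np
from scipy.optimize import brentq

t1, a1, t2, a2, lam, p, n = 1.0, 2.0, 1.0, 2.0, 0.2, 2.0, 100
rng = np.random.default_rng(1)

def sample_pair():
    x = -np.log(1.0 - rng.uniform() ** (1.0 / a1)) / t1   # X ~ GE(t1, a1)
    a = lam * ((1.0 + p) * (1.0 - np.exp(-t1 * x)) ** (a1 * p) - 1.0)
    u = rng.uniform()
    # Invert (2.2) in z = F_Y(y): z*(1 - a*(1 - z^p)) = u has a unique root in (0, 1).
    z = brentq(lambda z: z * (1.0 - a * (1.0 - z**p)) - u, 0.0, 1.0)
    return x, -np.log(1.0 - z ** (1.0 / a2)) / t2

def conc_of_max():
    pairs = [sample_pair() for _ in range(n)]
    return max(pairs)[1]            # the Y attached to the largest X

y0 = 1.0
emp = np.mean([conc_of_max() <= y0 for _ in range(2000)])
Fy = (1.0 - np.exp(-t2 * y0)) ** a2
print(emp, Fy * (1.0 - lam * p * (1.0 - Fy**p)))   # close for large n
```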

3.2 Joint Distribution of Concomitants of Order Statistics Based on HK–FGM-GE

The joint pdf of concomitants \(Y_{[r:n]}\) and \(Y_{[s:n]}, r<s,\) is (cf. Tahmasebi and Jafari [29])

$$\begin{aligned} f_{[r,s:n]}(y_1,y_2)=\int _{0}^{\infty }\int _{0}^{x_2} f_{Y|X}(y_1|x_1)f_{Y|X}(y_2|x_2)f_{r,s:n}(x_1,x_2)\mathrm{d}x_1\mathrm{d}x_2, \end{aligned}$$

where \(\beta (a,b,c)=\frac{\Gamma (a)\Gamma (b)\Gamma (c)}{\Gamma (a+b+c)}\) and

$$\begin{aligned} f_{r,s:n}(x_1,x_2)= & {} \frac{1}{\beta (r,s-r,n-s+1)}F_X^{r-1}(x_1)\\&\times (F_X(x_2)-F_X(x_1))^{s-r-1}(1-F_X(x_2))^{n-s}f_X(x_{1})f_X(x_{2}), x_1<x_2. \end{aligned}$$

Therefore,

$$\begin{aligned}&f_{[r,s:n]}(y_1,y_2)=\int _{0}^{\infty }\int _{0}^{x_2} \left[ f_Y(y_{1})(1+\lambda ((1+p)F_X^{p}(x_{1})-1)((1+p)F_Y^{p}(y_{1})-1))\right] \nonumber \\&\quad \times \left[ f_Y(y_{2})(1+\lambda ((1+p)F_X^{p}(x_{2})-1)((1+p)F_Y^{p}(y_{2})-1))\right] \nonumber \\&\quad \times \left[ \frac{F_X^{r-1}(x_{1})(F_X(x_{2})-F_X(x_{1}))^{s-r-1}(1-F_X(x_{2}))^{n-s}}{\beta (r,s-r,n-s+1)}f_X(x_{1})f_X(x_{2})\right] \mathrm{d}x_{1}\mathrm{d}x_{2}.\nonumber \\ \end{aligned}$$
(3.12)

On the other hand, we have

$$\begin{aligned} I_{1}= & {} \lambda \int _{0}^{\infty }\int _{0}^{x_2} ((1+p)F_X^{p}(x_{1})-1)\nonumber \\&\times \left[ \frac{F_X^{r-1}(x_{1})(F_X(x_{2})-F_X(x_{1}))^{s-r-1}(1-F_X(x_{2}))^{n-s}}{\beta (r,s-r,n-s+1)}f_X(x_{1}) f_X(x_{2})\right] \mathrm{d}x_{1}\mathrm{d}x_{2} \nonumber \\= & {} \frac{\lambda (1+p)}{\beta (r,s-r,n-s+1)}\int _{0}^{1}\int _{0}^{v} \left[ u^{p+r-1}(v-u)^{s-r-1}(1-v)^{n-s}\right] \mathrm{d}u\mathrm{d}v-\lambda \nonumber \\= & {} -\lambda \frac{\beta (r,s-r,n-s+1)-(p+1)\beta (r+p,s-r,n-s+1)}{\beta (r,s-r,n-s+1)}=-\Delta _{r,s,n:p}^{(1)} \end{aligned}$$
(3.13)

(upon substituting \(u=F_X(x_{1})\) and \(v=F_X(x_{2})\)). Moreover, we have

$$\begin{aligned} I_{2}= & {} \lambda \int _{0}^{\infty }\int _{0}^{x_2}((1+p)F_X^{p}(x_{2})-1)\\&\times \left[ \frac{F_X^{r-1}(x_{1})(F_X(x_{2})-F_X(x_{1}))^{s-r-1}(1-F_X(x_{2}))^{n-s}}{\beta (r,s-r,n-s+1)}f_X(x_{1})f_X(x_{2})\right] \mathrm{d}x_{1}\mathrm{d}x_{2}. \end{aligned}$$

Upon substituting \(u=F_X(x_{1})\) and \(v=F_X(x_{2}),\) we get

$$\begin{aligned} I_{2}= & {} \lambda \int _{0}^{1}\int _{0}^{v}\left( (1+p)v^{p}-1\right) \left[ \frac{u^{r-1}(v-u)^{s-r-1}(1-v)^{n-s}}{\beta (r,s-r,n-s+1)}\right] \mathrm{d}u\mathrm{d}v\\= & {} \frac{\lambda (1+p)}{\beta (r,s-r,n-s+1)}\int _{0}^{1}\int _{0}^{v} v^{p}u^{r-1}(v-u)^{s-r-1}(1-v)^{n-s}\mathrm{d}u\mathrm{d}v-\lambda . \end{aligned}$$

Moreover, upon substituting \(\frac{u}{v}=w,\) we get

$$\begin{aligned} I_{2}= & {} \frac{\lambda (1+p)}{\beta (r,s-r,n-s+1)}\int _{0}^{1}\int _{0}^{1} v^{s+p-1}(1-v)^{n-s}w^{r-1}(1-w)^{s-r-1}\mathrm{d}w\mathrm{d}v-\lambda \nonumber \\= & {} -\lambda \frac{\beta (r,s-r,n-s+1)-(p+1)\beta (s+p,n-s+1)\beta (r,s-r)}{\beta (r,s-r,n-s+1)}\nonumber \\= & {} -\Delta _{r,s,n:p}^{(2)}. \end{aligned}$$
(3.14)

Finally, consider

$$\begin{aligned} I_{3}= & {} \lambda ^2\int _{0}^{\infty }\int _{0}^{x_2}((1+p)F_X^{p}(x_{1})-1)((1+p)F_X^{p}(x_{2})-1)\nonumber \\&\times \left[ \frac{F_X^{r-1}(x_{1})(F_X(x_{2})-F_X(x_{1}))^{s-r-1}(1-F_X(x_{2}))^{n-s}}{\beta (r,s-r,n-s+1)}f_X(x_{1})f_X(x_{2})\right] \mathrm{d}x_{1}\mathrm{d}x_{2}\nonumber \\= & {} \lambda (I'_{3}-I_{1}-I_{2})=\Delta _{r,s,n:p}, \end{aligned}$$
(3.15)

where

$$\begin{aligned} I'_{3}= & {} \lambda (1+p)^2\int _{0}^{\infty }\int _{0}^{x_2} F_X^{p}(x_{1})F_X^{p}(x_{2})\nonumber \\&\times \left[ \frac{F_X^{r-1}(x_{1})(F_X(x_{2})-F_X(x_{1}))^{s-r-1}(1-F_X(x_{2}))^{n-s}}{\beta (r,s-r,n-s+1)}f_X(x_{1})f_X(x_{2})\right] \mathrm{d}x_{1}\mathrm{d}x_{2}-\lambda .\nonumber \\ \end{aligned}$$
(3.16)

Putting \(u=F_X(x_{1})\) and \(v=F_X(x_{2})\) in the double integral (3.16), and then putting \(\frac{u}{v}=t,\) we get (by using (3.13) and (3.14))

$$\begin{aligned} I'_{3}= & {} \lambda (1+p)^2\int _{0}^{1}\int _{0}^{1} v^{s+2p-1}(1-v)^{n-s}t^{r+p-1}(1-t)^{s-r-1}\mathrm{d}t\mathrm{d}v-\lambda \\= & {} \lambda \frac{(p+1)^2\beta (s+2p,n-s+1)\beta (r+p,s-r)-\beta (r,s-r,n-s+1)}{\beta (r,s-r,n-s+1)}\\= & {} \Delta _{r,s,n:p}^{(3)}. \end{aligned}$$

Thus,

$$\begin{aligned} \Delta _{r,s,n:p}=\lambda \left( \Delta _{r,s,n:p}^{(1)}+\Delta _{r,s,n:p}^{(2)}+\Delta _{r,s,n:p}^{(3)}\right) . \end{aligned}$$
(3.17)

Now, combining (3.12)–(3.15) with (3.17), we get

$$\begin{aligned}&f_{[r,s:n]}(y_1,y_2)=\left( 1+\Delta _{r,s,n:p}^{(1)}+\Delta _{r,s,n:p}^{(2)}+\Delta _{r,s,n:p}\right) f_Y(y_1)f_Y(y_2)\nonumber \\&\quad -\,\left( \Delta _{r,s,n:p}^{(1)}+\Delta _{r,s,n:p}\right) f_V(y_1)f_Y(y_2)-\left( \Delta _{r,s,n:p}^{(2)}+\Delta _{r,s,n:p}\right) f_V(y_2)f_Y(y_1)\nonumber \\&\quad +\,\Delta _{r,s,n:p}f_V(y_1)f_V(y_2), \end{aligned}$$
(3.18)

where

$$\begin{aligned} \Delta _{r,s,n:p}= & {} \lambda \left( \Delta _{r,s,n:p}^{(1)}+\Delta _{r,s,n:p}^{(2)}+\Delta _{r,s,n:p}^{(3)}\right) ,\nonumber \\ \Delta _{r,s,n:p}^{(1)}= & {} \lambda \frac{\beta (r,s-r,n-s+1)-(p+1)\beta (r+p,s-r,n-s+1)}{\beta (r,s-r,n-s+1)},\nonumber \\ \Delta _{r,s,n:p}^{(2)}= & {} \lambda \frac{\beta (r,s-r,n-s+1)-(p+1)\beta (s+p,n-s+1)\beta (r,s-r)}{\beta (r,s-r,n-s+1)},\nonumber \\ \Delta _{r,s,n:p}^{(3)}= & {} \lambda \frac{(p+1)^2\beta (s+2p,n-s+1)\beta (r+p,s-r)-\beta (r,s-r,n-s+1)}{\beta (r,s-r,n-s+1)}.\nonumber \\ \end{aligned}$$
(3.19)

The product moment \(\text{ E }[Y_{[r :n]}Y_{[s:n]}]=\mu _{[r,s:n]}\) is obtained directly from (3.18) as

$$\begin{aligned} \mu _{[r,s:n]}= & {} \frac{1}{\theta _2^2}\Bigg [\left( 1+\Delta _{r,s,n:p}^{(1)}+\Delta _{r,s,n:p}^{(2)}+\Delta _{r,s,n:p}\right) B^2(\alpha _2)\nonumber \\&-\,\left( \Delta _{r,s,n:p}^{(1)}+\Delta _{r,s,n:p}^{(2)}+2\Delta _{r,s,n:p}\right) B(\alpha _2)B(\alpha _2(p+1))\nonumber \\&+\,\Delta _{r,s,n:p}B^2(\alpha _2(p+1))\Bigg ]. \end{aligned}$$
(3.20)

Therefore, by using (3.3) and (3.20), we can, after some algebra, calculate the covariance between \(Y_{[r:n]}\) and \(Y_{[s:n]}\) as

$$\begin{aligned} \qquad \qquad \sigma _{[r,s:n,p]}= & {} \frac{1}{\theta _2^2}\left[ \Delta _{r,s,n:p}D^2(\alpha _2,p)-(\Delta _{r,s,n:p}^{(1)}+\Delta _{r,s,n:p}^{(2)})B(\alpha _2)D(\alpha _2,p)\right. \nonumber \\&\left. +\,(\Delta _{r,n:p}+\Delta _{s,n:p})B(\alpha _2)D(\alpha _2,p)-\Delta _{r,n:p}\Delta _{s,n:p}D^2(\alpha _2,p)\right] .\nonumber \\ \end{aligned}$$
(3.21)

It is easy to verify that

$$\begin{aligned} \sigma _{[r,s:n,1]}=\frac{1}{\theta _2^2}D^2(\alpha _2)(\delta _{r,s}-\delta _r\delta _s), \end{aligned}$$
(3.22)

where \(\delta _{r,s}=\lambda ^2\left[ \frac{n-2s + 1}{n + 1}-\frac{2r(n-2s)}{(n + 1)(n + 2)}\right] \) (we can easily verify that \(\Delta _{r,s,n:1}^{(1)}=\delta _r,\) \(\Delta _{r,s,n:1}^{(2)}=\delta _s\) and \(\Delta _{r,s,n:1}=\delta _{r,s}\)). The relation (3.22) was obtained by Tahmasebi and Jafari [29] for MTBGED.
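
A numerical sketch (assuming SciPy; illustrative values) of the quantities in (3.19) and the covariance (3.21), confirming that \(p=1\) reproduces the MTBGED form (3.22):

```python
# Covariance (3.21) at p = 1 versus the MTBGED form (3.22).
import numpy as np
from scipy.special import digamma, gamma

b2 = lambda a, b: gamma(a) * gamma(b) / gamma(a + b)                  # beta(a, b)
b3 = lambda a, b, c: gamma(a) * gamma(b) * gamma(c) / gamma(a + b + c)

def deltas(r, s, n, p, lam):
    base = b3(r, s - r, n - s + 1)
    d1 = lam * (base - (p + 1) * b3(r + p, s - r, n - s + 1)) / base
    d2 = lam * (base - (p + 1) * b2(s + p, n - s + 1) * b2(r, s - r)) / base
    d3 = lam * ((p + 1) ** 2 * b2(s + 2 * p, n - s + 1) * b2(r + p, s - r) - base) / base
    return d1, d2, lam * (d1 + d2 + d3)             # Delta^(1), Delta^(2), Delta

r, s, n, p, lam, th2, a2 = 2, 5, 10, 1.0, 0.3, 0.8, 3.0
d1, d2, d = deltas(r, s, n, p, lam)
Bv = digamma(a2 + 1.0) - digamma(1.0)
Dv = digamma(a2 * (1 + p) + 1.0) - digamma(a2 + 1.0)
dr = lam * (n - 2 * r + 1) / (n + 1)                # delta_r
ds = lam * (n - 2 * s + 1) / (n + 1)                # delta_s
drs = lam**2 * ((n - 2 * s + 1) / (n + 1) - 2 * r * (n - 2 * s) / ((n + 1) * (n + 2)))
cov = (d * Dv**2 - (d1 + d2) * Bv * Dv + (dr + ds) * Bv * Dv - dr * ds * Dv**2) / th2**2
print(cov, Dv**2 * (drs - dr * ds) / th2**2)        # the two values coincide
```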

We can now use (3.21) and (3.6) to obtain the coefficient of correlation between \(Y_{[r:n]}\) and \(Y_{[s:n]}\) as

$$\begin{aligned}&\rho _{[r,s:n,p]}\nonumber \\&\quad =\frac{\Delta _{r,s,n:p}D^2-(\Delta _{r,s,n:p}^{(1)}+\Delta _{r,s,n:p}^{(2)})BD +(\Delta _{r,n:p}+\Delta _{s,n:p})BD-\Delta _{r,n:p}\Delta _{s,n:p}D^2}{\sqrt{\prod _{i=1}^{2} [C(\alpha _2)+\Delta _{i,n:p}(C(\alpha _2)-C(\alpha _2(p+1)))-\Delta _{i,n:p} (1+\Delta _{i,n:p})D^2]}},\nonumber \\ \end{aligned}$$
(3.23)

where in formula (3.23) we abbreviate \(B(\alpha _2)\) and \(D(\alpha _2,p)\) by B and D, respectively. Moreover, we use the abbreviations \(\Delta _{1,n:p}=\Delta _{r,n:p}\) and \(\Delta _{2,n:p}=\Delta _{s,n:p}.\) It is easy to verify that

$$\begin{aligned} \rho _{[r,s:n,1]}=\frac{D^2(\alpha _2)(\delta _{r,s}-\delta _r\delta _s)}{\sqrt{\prod _{i=1}^{2} [C(\alpha _2)+\delta _i(C(\alpha _2)-C(2\alpha _2))-\delta _i (1+\delta _i)D^2(\alpha _2)]}}, \end{aligned}$$

which is the corrected version of formula (3.20) obtained by Tahmasebi and Jafari [29] for MTBGED, where \(\delta _1=\delta _r\) and \(\delta _2=\delta _s.\)

Now, by using (3.19) and the representation (3.18), we get the following general recurrence relations for the product moment \(\mu _{[r,s:n]}.\)

Theorem 3.3

For any \(1\le r\le n-3,\) we get

$$\begin{aligned} (r+1)\mu _{[r+2,s:n]}=(2r+p+1)\mu _{[r+1,s:n]}-(p+r)\mu _{[r,s:n]}. \end{aligned}$$
(3.24)

Moreover, for any \(1\le s\le n-3,\) we get

$$\begin{aligned} (s+1)\mu _{[r,s+2:n]}=(2s+p+1)\mu _{[r,s+1:n]}-(p+s)\mu _{[r,s:n]}+\xi _n(r,s,\alpha _2,\lambda :p).\nonumber \\ \end{aligned}$$
(3.25)

where \(\xi _n(r,s,\alpha _2,\lambda :p)=\frac{\lambda p(1-p)}{\theta _2^2(s+p+1)}D^2(\alpha _2,p)\left( \Delta _{r,s+1,n:p}^{(3)}-\Delta _{r,s,n:p}^{(3)}\right) .\) Finally, for all \(n>2,\) we get

$$\begin{aligned} (n+p)\mu _{[r,s:n]}=(2n+p-1)\mu _{[r,s:n-1]}-(n-1)\mu _{[r,s:n-2]}+\zeta _n(r,s,\alpha _2,\lambda :p),\nonumber \\ \end{aligned}$$
(3.26)

where \(\zeta _n(r,s,\alpha _2,\lambda :p)=\frac{\lambda p}{\theta _2^2}D^2(\alpha _2,p)\left( \Delta _{r,s,n-1:p}^{(3)}-\Delta _{r,s,n:p}^{(3)}\right) .\)

Proof

It is easy to check that

$$\begin{aligned} \Delta _{r+2,s,n:p}^{(i)}-\Delta _{r,s,n:p}^{(i)}=\left( \Delta _{r+1,s,n:p}^{(i)}-\Delta _{r,s,n:p}^{(i)}\right) \frac{2r+p+1}{r+1}, i=1,3, \end{aligned}$$
(3.27)

and

$$\begin{aligned} \Delta _{r,s,n:p}^{(2)}=\Delta _{r+1,s,n:p}^{(2)}=\Delta _{r+2,s,n:p}^{(2)}. \end{aligned}$$
(3.28)

Therefore,

$$\begin{aligned} \Delta _{r+2,s,n:p}-\Delta _{r,s,n:p}=(\Delta _{r+1,s,n:p}-\Delta _{r,s,n:p})\frac{2r+p+1}{r+1}. \end{aligned}$$
(3.29)

The recurrence relation (3.24) now follows by combining (3.27), (3.28) and (3.29) with (3.20). Now, we turn to prove (3.25). First, we notice that

$$\begin{aligned} \Delta _{r,s,n:p}^{(1)}=\Delta _{r,s+1,n:p}^{(1)}=\Delta _{r,s+2,n:p}^{(1)}. \end{aligned}$$
(3.30)

Moreover, it is easy to check that

$$\begin{aligned} \Delta _{r,s+2,n:p}^{(i)}-\Delta _{r,s,n:p}^{(i)}=\left( \Delta _{r,s+1,n:p}^{(i)}-\Delta _{r,s,n:p}^{(i)}\right) \left( \frac{2s+p+1}{s+1}+\phi _i\right) , i=2,3,\nonumber \\ \end{aligned}$$
(3.31)

where \(\phi _2=0\) and \(\phi _3=\frac{p(1-p)}{(s+1)(s+p+1)}.\) Therefore,

$$\begin{aligned} \Delta _{r,s+2,n:p}-\Delta _{r,s,n:p}= & {} (\Delta _{r,s+1,n:p}-\Delta _{r,s,n:p})\frac{2s+p+1}{s+1}\nonumber \\&+\,\lambda \phi _3 (\Delta _{r,s+1,n:p}^{(3)}-\Delta _{r,s,n:p}^{(3)}). \end{aligned}$$
(3.32)

The recurrence relation (3.25) now follows by combining (3.30), (3.31) and (3.32) with (3.20). In order to prove the recurrence relation (3.26), we first notice that

$$\begin{aligned} \Delta _{r,s,n-2:p}^{(i)}-\Delta _{r,s,n:p}^{(i)}=\left( \Delta _{r,s,n-1:p}^{(i)}-\Delta _{r,s,n:p}^{(i)}\right) \frac{2n+p-1}{n-1}, i=1,2, \end{aligned}$$
(3.33)

and

$$\begin{aligned} \Delta _{r,s,n-2:p}^{(3)}-\Delta _{r,s,n:p}^{(3)}=\left( \Delta _{r,s,n-1:p}^{(3)}-\Delta _{r,s,n:p}^{(3)}\right) \frac{2n+2p-1}{n-1}. \end{aligned}$$
(3.34)

Therefore,

$$\begin{aligned} \Delta _{r,s,n-2:p}-\Delta _{r,s,n:p}= & {} (\Delta _{r,s,n-1:p}-\Delta _{r,s,n:p})\frac{2n+p-1}{n-1}\nonumber \\&+\, \frac{\lambda p}{n-1}\left( \Delta _{r,s,n-1:p}^{(3)}-\Delta _{r,s,n:p}^{(3)}\right) . \end{aligned}$$
(3.35)

The recurrence relation (3.26) now follows by combining (3.33), (3.34) and (3.35) with (3.20). The theorem is established. \(\square \)
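
As with Theorem 3.1, the first recurrence relation can be checked numerically; a self-contained sketch (assuming SciPy; illustrative values with \(r+2<s\)):

```python
# Numerical check of (3.24) using mu_{[r,s:n]} from (3.20) and (3.19).
from scipy.special import digamma, gamma

b2 = lambda a, b: gamma(a) * gamma(b) / gamma(a + b)
b3 = lambda a, b, c: gamma(a) * gamma(b) * gamma(c) / gamma(a + b + c)

def mu_rs(r, s, n, p, lam, th2, a2):
    base = b3(r, s - r, n - s + 1)
    d1 = lam * (base - (p + 1) * b3(r + p, s - r, n - s + 1)) / base
    d2 = lam * (base - (p + 1) * b2(s + p, n - s + 1) * b2(r, s - r)) / base
    d3 = lam * ((p + 1) ** 2 * b2(s + 2 * p, n - s + 1) * b2(r + p, s - r) - base) / base
    d = lam * (d1 + d2 + d3)
    Bv = digamma(a2 + 1.0) - digamma(1.0)
    Bv2 = digamma(a2 * (p + 1) + 1.0) - digamma(1.0)   # B(alpha_2*(p+1))
    return ((1 + d1 + d2 + d) * Bv**2 - (d1 + d2 + 2 * d) * Bv * Bv2
            + d * Bv2**2) / th2**2

r, s, n, p, lam, th2, a2 = 2, 6, 12, 2.0, 0.2, 0.8, 3.0
print((r + 1) * mu_rs(r + 2, s, n, p, lam, th2, a2),
      (2 * r + p + 1) * mu_rs(r + 1, s, n, p, lam, th2, a2)
      - (p + r) * mu_rs(r, s, n, p, lam, th2, a2))     # both sides of (3.24)
```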

4 Concomitants of Record Values Based on HK–FGM-GE

Let \(\{(X_i, Y_i)\}, i=1,2,\ldots ,\) be a random sample from HK–FGM-GE\((\theta _1,\alpha _1;\theta _2,\alpha _2).\) When the experimenter is interested only in the sequence of records of the first component \(X_i\)’s, the second component associated with the record value of the first one is termed the concomitant of that record value. The concomitants of record values arise in a wide variety of practical experiments, e.g., see Bdair and Raqab [11] and Arnold et al. [7]. Some properties of concomitants of record values were discussed in Ahsanullah [1] and Ahsanullah and Shakil [2]. Let \(\{R_n, n\ge 1\}\) be the sequence of record values in the sequence of X’s, and let \(R_{[n]}\) be the corresponding concomitant. Houchens [23] obtained the pdf of the concomitant of the nth record value, for \(n\ge 1,\) as \(h_{[n]}(y) =\int _{0}^{\infty }f_{Y|X}(y|x)g_n(x)\mathrm{d}x,\) where \(g_n(x)=\frac{1}{\Gamma (n)}(-\log (1-F_X(x)))^{n-1}f_X(x)\) is the pdf of \(R_n.\) Therefore, after some algebra, we get

$$\begin{aligned} h_{[n]}(y) =(1+\Upsilon _{n:p})f_Y(y)-\Upsilon _{n:p} f_V(y), \end{aligned}$$
(4.1)

where \(V\sim \mathrm{GE}(\theta _2;\alpha _2(p+1))\) and

$$\begin{aligned} \Upsilon _{n:p}=\lambda \left[ 1-(1+p)\sum _{i=0}^{\aleph (p)}\frac{(-1)^i\left( {\begin{array}{c} p\\ \scriptstyle {i}\end{array}}\right) }{(i+1)^n}\right] . \end{aligned}$$

Clearly, \(\Upsilon _{n:1}=\lambda (2^{-(n-1)}-1)=\lambda _{n-1},\) and the representation (4.1) becomes \(h_{[n]}(y) =(1+\lambda _{n-1})f_Y(y)-\lambda _{n-1} f_V(y),\) which is the essential correction of the corresponding representation of Tahmasebi and Jafari [29] for MTBGED, which was given as \(h_{[n]}(y) =(1+\lambda _n)f_Y(y)-2\lambda _n f_V(y).\)
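
The coefficient \(\Upsilon _{n:p}\) is distribution-free in X, which gives a convenient cross-check; a sketch assuming SciPy (the series is exact for integer p and truncated otherwise, and the GE marginal below is only one possible choice):

```python
# Upsilon_{n:p} from the series above versus direct integration against g_n.
import numpy as np
from scipy.special import binom, gamma as Gamma
from scipy.integrate import quad

def upsilon_series(n, p, lam, terms=200):
    kmax = int(p) if float(p).is_integer() else terms   # finite sum iff p is integer
    s = sum((-1) ** i * binom(p, i) / (i + 1.0) ** n for i in range(kmax + 1))
    return lam * (1.0 - (1.0 + p) * s)

def upsilon_quad(n, p, lam, t1=1.0, a1=2.0):            # marginal choice is irrelevant
    Fx = lambda x: (1.0 - np.exp(-t1 * x)) ** a1
    fx = lambda x: a1 * t1 * np.exp(-t1 * x) * (1.0 - np.exp(-t1 * x)) ** (a1 - 1.0)
    gn = lambda x: (-np.log(1.0 - Fx(x))) ** (n - 1) * fx(x) / Gamma(n)
    val, _ = quad(lambda x: ((1.0 + p) * Fx(x) ** p - 1.0) * gn(x), 0.0, 60.0)
    return -lam * val

print(upsilon_series(3, 2.0, 0.2), upsilon_quad(3, 2.0, 0.2))   # should agree
```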

The representation (4.1) enables us to derive the mean and the variance of \(R_{[n]}\) as

$$\begin{aligned} \mu _{[R_n]:p}=\frac{1}{\theta _2}\left[ B(\alpha _2)-\Upsilon _{n:p}D(\alpha _2,p)\right] \end{aligned}$$

and

$$\begin{aligned} \sigma _{[R_n]:p}^2=\frac{1}{\theta _2^2}\left[ C(\alpha _2)+\Upsilon _{n:p}(C(\alpha _2)-C(\alpha _2(p+1)))-\Upsilon _{n:p} (1+\Upsilon _{n:p})D^2(\alpha _2,p)\right] .\nonumber \\ \end{aligned}$$
(4.2)

Clearly,

$$\begin{aligned} \mu _{[R_n]:1}=\frac{1}{\theta _2}\left[ B(\alpha _2)-\lambda _{n-1}D(\alpha _2)\right] \end{aligned}$$

and

$$\begin{aligned} \sigma _{[R_n]:1}^2=\frac{1}{\theta _2^2}\left[ C(\alpha _2)+\lambda _{n-1}(C(\alpha _2)-C(2\alpha _2))-\lambda _{n-1} (1+\lambda _{n-1})D^2(\alpha _2)\right] , \end{aligned}$$

which are the corrected versions of the mean and the variance, respectively, of \(R_{[n]}\) given by Tahmasebi and Jafari [29] for MTBGED.

The joint pdf of the concomitants \(R_{[n]}\) and \(R_{[m]},\)\(n<m,\) is given by

$$\begin{aligned} h_{[n,m]}(y_1,y_2) =\int _{0}^{\infty }\int _{x_1}^{\infty }f_{Y|X}(y_1|x_1)f_{Y|X}(y_2|x_2)g_{m,n}(x_1,x_2)\mathrm{d}x_2\mathrm{d}x_1, \end{aligned}$$

where

$$\begin{aligned} g_{m,n}(x_1,x_2)= & {} \frac{1}{\Gamma (n)\Gamma (m-n)}(-\log (1-F_X(x_1)))^{n-1}\left( -\log \frac{ 1-F_X(x_2)}{1-F_X(x_1)}\right) ^{m-n-1}\\&\times \frac{f_X(x_1)f_X(x_2)}{1-F_X(x_1)} \end{aligned}$$

is the joint pdf of \(R_n\) and \(R_m.\) Therefore, after some algebra, we get

$$\begin{aligned} h_{[n,m]}(y_1,y_2)= & {} (1+\Upsilon _{n:p}+\Upsilon _{m:p}+\Upsilon _{n,m:p})f_Y(y_1)f_Y(y_2)\nonumber \\&-(\Upsilon _{n:p}+\Upsilon _{n,m:p})f_V(y_1)f_Y(y_2)-(\Upsilon _{m:p}+\Upsilon _{n,m:p})f_V(y_2)f_Y(y_1)\nonumber \\&+\Upsilon _{n,m:p}f_V(y_1)f_V(y_2), \end{aligned}$$
(4.3)

where \(\Upsilon _{n,m:p}=\lambda (\Upsilon _{n:p}+\Upsilon _{m:p}+\Upsilon _{n,m:p}^\star )\) and

$$\begin{aligned} \Upsilon _{n,m:p}^\star =\lambda \left[ (1+p)^2\sum _{i=0}^{\aleph (p)}\sum _{j=0}^{\aleph (p)}\frac{(-1)^{i+j} \left( {\begin{array}{c} p\\ \scriptstyle {i}\end{array}}\right) \left( {\begin{array}{c} p\\ \scriptstyle {j}\end{array}}\right) }{(i+j+1)^n(j+1)^{m-n}}-1\right] . \end{aligned}$$

Clearly, \(\Upsilon _{n,m:1}=\lambda ^2\left[ 3^{-n}(2^{n-m+2}-3^n)-\frac{\lambda _{n-1}+\lambda _{m-1}}{\lambda }\right] ,\) which is the correction of the term \(\lambda _{n,m}\) used by Tahmasebi and Jafari [29] to compute the joint pdf \(h_{[n,m]}(y_1,y_2).\)

The representation (4.3) enables us to derive the product moment and the covariance of \(R_{[n]}\) and \(R_{[m]}\) as

$$\begin{aligned} \mu _{[R_n,R_m]:p}= & {} \frac{1}{\theta _2^2}[(1+\Upsilon _{n:p}+\Upsilon _{m:p}+\Upsilon _{n,m:p})B^2(\alpha _2)\\&-\,(\Upsilon _{n:p}+\Upsilon _{m:p}+2\Upsilon _{n,m:p})B(\alpha _2)B(\alpha _2(p+1))\\&+\,\Upsilon _{n,m:p}B^2(\alpha _2(p+1))] \end{aligned}$$

and

$$\begin{aligned} \sigma _{[R_n,R_m]:p}=\frac{D^2(\alpha _2,p)}{\theta _2^2}\left[ \Upsilon _{n,m:p}-\Upsilon _{n:p}\Upsilon _{m:p}\right] . \end{aligned}$$
(4.4)

Clearly, \(\sigma _{[R_n,R_m]:1}=\frac{D^2(\alpha _2)}{\theta _2^2}\left[ \Upsilon _{n,m:1}-\lambda _{n-1}\lambda _{m-1}\right] ,\) which again corrects the wrong relation (4.6) given by Tahmasebi and Jafari [29] for the covariance of the concomitants \(R_{[n]}\) and \(R_{[m]},\) \(n<m.\) Finally, combining (4.2) with (4.4), we get the correlation coefficient of the concomitants \(R_{[n]}\) and \(R_{[m]}\) as

$$\begin{aligned}&\rho _{[R_n,R_m]:p}\\&\quad =\frac{D^2(\alpha _2,p)\left[ \Upsilon _{n,m:p}-\Upsilon _{n:p}\Upsilon _{m:p}\right] }{\sqrt{\prod _{i=1}^{2} \left[ C(\alpha _2)+\Upsilon _{i:p}(C(\alpha _2)-C(\alpha _2(p+1)))-\Upsilon _{i:p} (1+\Upsilon _{i:p})D^2(\alpha _2,p)\right] }}, \end{aligned}$$

where in the above formula we use the abbreviations \(\Upsilon _{1:p}=\Upsilon _{n:p}\) and \(\Upsilon _{2:p}=\Upsilon _{m:p}.\) Again, \(\rho _{[R_n,R_m]:1}\) gives the corrected formula of the correlation obtained by Tahmasebi and Jafari [29] for MTBGED.
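
A closing numerical sketch (assuming SciPy; integer p and illustrative parameter values) of the covariance (4.4) and the corresponding correlation:

```python
# Covariance (4.4) and correlation of R_[n] and R_[m] for HK-FGM-GE.
import numpy as np
from scipy.special import binom, digamma, polygamma

def ups(n, p, lam):                       # Upsilon_{n:p}, finite series for integer p
    s = sum((-1) ** i * binom(p, i) / (i + 1.0) ** n for i in range(p + 1))
    return lam * (1.0 - (1.0 + p) * s)

def ups_star(n, m, p, lam):               # Upsilon*_{n,m:p}
    s = sum((-1) ** (i + j) * binom(p, i) * binom(p, j)
            / ((i + j + 1.0) ** n * (j + 1.0) ** (m - n))
            for i in range(p + 1) for j in range(p + 1))
    return lam * ((1.0 + p) ** 2 * s - 1.0)

n, m, p, lam, th2, a2 = 2, 4, 2, 0.2, 0.8, 3.0
un, um = ups(n, p, lam), ups(m, p, lam)
unm = lam * (un + um + ups_star(n, m, p, lam))
Dv = digamma(a2 * (p + 1) + 1.0) - digamma(a2 + 1.0)       # D(alpha_2, p)
Cf = lambda a: polygamma(1, 1.0) - polygamma(1, a + 1.0)   # C(alpha)
cov = Dv**2 * (unm - un * um) / th2**2                     # relation (4.4)
var = lambda u: (Cf(a2) + u * (Cf(a2) - Cf(a2 * (p + 1)))
                 - u * (1.0 + u) * Dv**2) / th2**2         # relation (4.2)
print(cov, cov / np.sqrt(var(un) * var(um)))               # covariance, correlation
```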