1 Introduction

In modeling multivariate data, when the available information consists only of marginal distributions, it is natural to consider families of multivariate distribution functions (DFs) with specified marginals. The Farlie–Gumbel–Morgenstern (FGM) family of bivariate DFs provides a flexible family that can be used in such situations. The FGM family of bivariate DFs is defined by \(F_{X,Y}(x,y)=F_X(x)F_Y(y)[1+\omega {\overline{F}}_X(x){\overline{F}}_Y(y)],\) \(-1\le \omega \le 1,\) where \(F_X(x)=P(X\le x)\) and \(F_Y(y)=P(Y\le y)\) are the marginal DFs, while \({\overline{F}}_X\) and \({\overline{F}}_Y\) are the survival functions of \(F_X\) and \(F_Y,\) respectively. The FGM family was originally introduced by Morgenstern [40] for Cauchy marginals. A well-known drawback of this family is the low dependence level it permits between random variables (RVs), with Spearman's rho confined to \(\rho \in (-0.33,0.33).\) Therefore, the FGM family is useful in applications provided that the correlation between the variables is not too large. Several extensions of the FGM family have been introduced in the literature in an attempt to improve the attainable correlation level. We mention here a number of the most important extensions, developed primarily to increase the maximal value of the correlation coefficient. All these extensions are of polynomial type (i.e., they are expressed in terms of polynomials in \(F_X\) and \(F_Y\)).
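The stated dependence range can be checked numerically. The following Python sketch (ours, not part of the original text; the function names are hypothetical) approximates Spearman's rho of the FGM copula via the identity \(\rho _S=12\int \int C(u,v)\,{\text {d}}u\,{\text {d}}v-3\) and confirms that \(\rho =\omega /3,\) so \(\rho \in [-1/3,1/3]\):

```python
# Numerical sketch: Spearman's rho of a copula C is
# rho_S = 12 * (double integral of C over the unit square) - 3.
def fgm_copula(u, v, omega):
    return u * v * (1.0 + omega * (1.0 - u) * (1.0 - v))

def spearman_rho(copula, n=400):
    # Midpoint rule on an n x n grid.
    h = 1.0 / n
    total = sum(copula((i + 0.5) * h, (j + 0.5) * h)
                for i in range(n) for j in range(n))
    return 12.0 * total * h * h - 3.0

# For the FGM copula, rho = omega / 3.
for omega in (-1.0, 0.0, 0.5, 1.0):
    assert abs(spearman_rho(lambda u, v: fgm_copula(u, v, omega)) - omega / 3.0) < 1e-3
```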

  1.

    Huang and Kotz [34] used successive iterations in the FGM family. As a particular case, the bivariate FGM with a single iteration is defined by

    $$\begin{aligned} F_{X,Y}(x,y)=F_X(x)F_Y(y)\left[ 1+\lambda {\bar{F}}_X(x){\bar{F}}_Y(y)+ \omega F_X(x)F_Y(y){\bar{F}}_X(x){\bar{F}}_Y(y) \right] , \end{aligned}$$

    denoted by IFGM\((\lambda ,\omega ).\) When the two marginals \(F_X\) and \(F_Y\) are continuous, Huang and Kotz [34] showed that the natural parameter space \(\Omega \) (the admissible set of the parameters \(\lambda \) and \(\omega \) that make \(F_{X,Y}\) a DF) is convex, where \(\Omega =\{(\lambda ,\omega ):-1\le \lambda \le 1;\omega +\lambda \ge -1;\omega \le \frac{3-\lambda +\sqrt{9-6\lambda -3\lambda ^{2}}}{2}\}.\) Moreover, when the marginals are uniform, the correlation coefficient is \(\rho =\frac{\lambda }{3}+\frac{\omega }{12},\) with the maximal positive value 0.434. Recently, various important aspects of this family were studied by Alawady et al. [7], Barakat and Husseiny [15], and Barakat et al. [16, 19].
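The maximal value 0.434 can be reproduced by a grid search over the upper boundary of \(\Omega \) (a numerical sketch of ours, not part of the original derivation):

```python
import math

# Maximize rho = lambda/3 + omega/12 over the admissible set Omega; the
# maximum lies on the upper boundary omega = (3 - lam + sqrt(9 - 6*lam - 3*lam**2))/2.
def max_ifgm_rho(steps=20000):
    best = float("-inf")
    for k in range(steps + 1):
        lam = -1.0 + 2.0 * k / steps          # lambda in [-1, 1]
        disc = 9.0 - 6.0 * lam - 3.0 * lam * lam
        omega = (3.0 - lam + math.sqrt(max(disc, 0.0))) / 2.0
        best = max(best, lam / 3.0 + omega / 12.0)
    return best

assert abs(max_ifgm_rho() - 0.434) < 1e-3   # maximal positive correlation
```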

  2.

    Huang and Kotz [35] proposed two analogous extensions by

    $$\begin{aligned} F_{X,Y}^{(1)}(x,y)=F_X(x)F_Y(y)\left[ 1+\lambda _1(1-F_X^{p_1}(x))(1-F_Y^{p_1}(y))\right] , p_1\ge 1, \end{aligned}$$
    (1.1)

    and

    $$\begin{aligned} F_{X,Y}^{(2)}(x,y)=F_X(x)F_Y(y)\left[ 1+\lambda _2 (1-F_{X}(x))^{p_2}(1-F_{Y}(y))^{p_2}\right] , p_2\ge 1. \end{aligned}$$
    (1.2)

    The admissible ranges of the shape parameter vectors \((\lambda _1,p_1)\) and \((\lambda _2,p_2)\) are \(\Omega _{1}=\{(\lambda _1,p_1):\) \( -p_1^{-2}\le \lambda _1\le p_1^{-1}, p_1\ge 1\}\) and \(\Omega _{2}=\{(\lambda _2,p_2): -1\le \lambda _2\le \left( \frac{p_2+1}{p_2-1}\right) ^{p_2-1},~\) \(p_2>1~\text{ or }~-1\le \lambda _2\le +1,~ p_2=1\},\) respectively. The maximal positive correlations for the families (1.1) and (1.2) are 0.375 and 0.391, attained at \(p_1=2\) and \(p{_2}=1.1877,\) respectively. Most of the works on the extensions (1.1) and (1.2) concern the family (1.1). Among these works are Abd Elgawad et al. [2, 5], Bairamov and Kotz [12], Barakat et al. [18], and Fisher and Klein (2007).

  3.

    Bekrizadeh et al. [21] proposed a generalized FGM family, defined by

    $$\begin{aligned} F_{X,Y}(x,y)= & {} F_X(x)F_Y(y)\left[ 1+\lambda (1-F_X^{p}(x)) (1-F_Y^{p}(y))\right] ^ N,\nonumber \\&p>0 , ~N=0,1,2,\ldots \end{aligned}$$
    (1.3)

    The admissible range of the associated parameter \(\lambda \) is \(-\text{ min }\left\{ 1,\frac{1}{Np^2}\right\} \le \lambda \le \frac{1}{Np}.\) Bekrizadeh et al. [21] showed that, by means of the family (1.3), the strongest positive Spearman's correlation coefficient between the marginal distributions becomes \(\rho \cong 0.43,\) while the weakest negative Spearman's correlation coefficient remains \(\rho \cong -0.50.~\) Moreover, Barakat et al. [20] showed that when \(0< p < 1,\) the model (1.3) becomes poor and does not allow any improvement in the positive Spearman's correlation. Recently, Abd Elgawad et al. [3] discussed some aspects of the distribution of the concomitants of generalized order statistics from the family (1.3).

  4.

    Bairamov et al. [11] suggested a four-parameter family, which is the most general form of the FGM family, defined by

    $$\begin{aligned} F_{X,Y}(x,y)= & {} F_{X}(x)F_{Y}(y)[1+\lambda (1-F_{X}^{p_{1}}(x))^{q_{1}}(1-F_{Y}^{p_{2}}(y))^{q_{2}}],\nonumber \\&p_1,p_2,q_1,q_2\ge 1, \end{aligned}$$
    (1.4)

    with \(\rho \in (-0.48,0.502)\) for uniform marginals. For some recent works about this family and its properties, see Alawady et al. [8] and Barakat et al. [17].

  5.

    Sarmanov [43] suggested an extension of FGM defined by

    $$\begin{aligned} F_{X,Y}(x,y)= & {} F_X(x)F_Y(y)\Big [1+3\alpha {\bar{F}}_X(x){\bar{F}}_Y(y)+5\alpha ^2 (2F_X(x)-1) \nonumber \\&(2F_Y(y)-1){\bar{F}}_X(x){\bar{F}}_Y(y) \Big ], \end{aligned}$$
    (1.5)

    denoted by SAR\((\alpha ).\) The corresponding probability density function (PDF) is given by

    $$\begin{aligned} f_{X,Y}(x,y)= & {} f_X(x)f_Y(y)\left[ 1+3\alpha (2F_X(x)-1)(2F_Y(y)-1)\right. \nonumber \\&+\left. \frac{5}{4} \alpha ^2 (3(2F_X(x)-1)^2-1)(3(2F_Y(y)-1)^2-1)\right] ,~|\alpha |\le \frac{\sqrt{7}}{5}.\nonumber \\ \end{aligned}$$
    (1.6)

    Moreover, when the marginals are uniform (i.e., for the corresponding copula), the correlation coefficient is \(\alpha .\) Thus, in this case, the minimal and maximal correlation coefficients \(\rho \) of this copula are \(-0.529\) and 0.529, respectively (cf. [13], page 74).
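The following Python sketch (ours) checks three properties of the copula density (1.6) numerically: it integrates to one, remains nonnegative at the boundary value \(\alpha =\sqrt{7}/5,\) and yields correlation \(\alpha \):

```python
import math

def sarmanov_density(u, v, alpha):
    # Copula density (1.6), i.e., with uniform marginals.
    s, t = 2.0 * u - 1.0, 2.0 * v - 1.0
    return (1.0 + 3.0 * alpha * s * t
            + 1.25 * alpha ** 2 * (3.0 * s * s - 1.0) * (3.0 * t * t - 1.0))

alpha = math.sqrt(7.0) / 5.0        # boundary of the admissible range
n = 300
h = 1.0 / n
grid = [(i + 0.5) * h for i in range(n)]
vals = [[sarmanov_density(u, v, alpha) for v in grid] for u in grid]

mass = sum(sum(row) for row in vals) * h * h
e_uv = sum(grid[i] * grid[j] * vals[i][j]
           for i in range(n) for j in range(n)) * h * h
rho = (e_uv - 0.25) * 12.0          # Corr(U, V) = Cov(U, V) / (1/12)

assert abs(mass - 1.0) < 1e-5                 # bona fide density
assert min(min(row) for row in vals) >= 0.0   # nonnegative on the grid
assert abs(rho - alpha) < 1e-3                # correlation equals alpha
```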

It is worth noting that all the preceding extended families are special cases of a single extension of the FGM family, which is defined via its PDF

$$\begin{aligned} f_{X,Y}(x,y)=f_X(x)f_Y(y)(1+\Theta (\overline{\kappa };x,y)), \end{aligned}$$
(1.7)

where \(f_X\) and \(f_Y\) are the PDFs of \(F_X\) and \(F_Y,\) respectively, \(\overline{\kappa }\) is a shape-parameter vector, \(1+\Theta (\overline{\kappa };x,y)\ge 0,\) and \(\Theta \) is a measurable function satisfying \(\text{ E }(\Theta (\overline{\kappa };X,y))=\text{ E }(\Theta (\overline{\kappa };x,Y))=0\) (the last two conditions are necessary for \(f_{X,Y}\) to be a bona fide joint PDF). This justifies considering the family (1.6) as an extension of the FGM family, although no value of the shape parameter \(\alpha \) reduces the family to the FGM family. Moreover, the PDF (1.7) is a slight extension of the Sarmanov density, which was introduced by Sarmanov [42]. For the Sarmanov density, we have \(\Theta (\overline{\kappa };x,y)=\lambda \theta _1(x)\theta _2(y),\) where \(\lambda \) is a shape parameter, and \(\theta _1(.)\) and \(\theta _2(.)\) are measurable functions satisfying \(1+\lambda \theta _1(x)\theta _2(y)\ge 0\) and \(\text{ E }(\theta _1(X))=\text{ E }(\theta _2(Y))=0.\) For more details about the Sarmanov density, its advantages, and its wide applications, see Abdallah et al. [1], Bolancé and Vernic [22], Bolancé et al. [23], and Lin and Huang [38].

Clearly, the SAR\((\alpha )\) family is the most efficient one among all the mentioned extended families (actually, it is one of the most efficient extended families in the literature) in the sense that it provides the best improvement in the correlation level. Moreover, among all the well-known extensions, only this family has a single shape parameter, which makes it the most flexible family; in particular, this shape parameter represents the correlation coefficient in the case of uniform marginals. This last feature facilitates the estimation of the shape parameter by using, for example, the sample correlation estimate, and thus this family is easy to use in the modeling of bivariate data. Despite all these useful and unique features, this family has received little attention from researchers since its proposal. In the present paper, we reveal some additional motivating properties of the Sarmanov family (1.6). Moreover, we discuss some aspects of the concomitants of order statistics (OSs) and some information measures pertaining to this neglected family. In view of these information measures and via a computational study, some comparisons are carried out between the IFGM\((\lambda ,\omega )\) and SAR\((\alpha )\) families based on the admissible values of the correlation coefficient.

The study of concomitants of OSs is a growing field. The concept of concomitants of OSs is related to the ordering of bivariate RVs. Concomitants of OSs arise when one sorts the members of a random sample according to the corresponding values of another random sample. More specifically, in collecting data on any observation, several characteristics are often recorded; some of them are considered primary, and the others can be observed from the primary data automatically. The latter are called concomitant variables, explanatory variables, or covariables. David [24] was among the early authors who popularized the study of this subject. Further authoritative updates on concomitants of OSs are given in Barakat and El-Shandidy [14], David and Nagaraja [25, 26], David et al. [27], Eryilmaz [29], and Hanif [33]. The PDF of the rth concomitant, \(Y_{[r:n]},\) of the rth OS, \(X_{r:n},~ 1\le r\le n,\) is given by

$$\begin{aligned} f_{[r:n]}(y)=\int _{-\infty }^{\infty }f_{Y| X}(y|x)f_{r:n}(x){\text {d}}x, \end{aligned}$$
(1.8)

where \(f_{r:n}(x)\) is the PDF of the rth OS and \(f_{Y| X}(y|x)\) is the conditional PDF of Y given X (see, e.g., [2] and [18, 19]). Moreover, the joint PDF of the rth and sth concomitants, \(Y_{[r:n]}\) and \(Y_{[s:n]},\) of the rth and sth OSs, \(X_{r:n}\) and \(X_{s:n},\) \(1\le r<s\le n,\) respectively, is given by

$$\begin{aligned} f_{[r,s:n]}(y_{1},y_{2})=\int _{-\infty }^{\infty }\int _{-\infty }^{x_{2}}f_{Y|X}(y_{1}| x_{1})f_{Y| X}(y_{2}| x_{2})f_{r,s:n}(x_{1},x_{2}){\text {d}}x_{1}{\text {d}}x_{2}, \end{aligned}$$
(1.9)

where \(f_{r,s:n}(x_{1},x_{2})\) is the joint PDF of \(X_{r:n}\) and \(X_{s:n}\) (see, e.g., [2] and [18, 19]).

Although most of the results of this paper are derived for arbitrary marginal DFs, we consider the generalized exponential DF, which is defined by \(F_X(x)=\left( 1-e^{-\theta x}\right) ^{a},\) \(x>0;~a, \theta > 0,\) and is denoted by \(GE(\theta ;a),\) as a case study example. Clearly, \(GE(\theta ;1)\) is an exponential DF. Many authors have studied various properties of this distribution, e.g., Kundu and Pradhan [37]. Gupta and Kundu [32] showed that the \(\ell \)th moment of \(GE(\theta ;a)\) is given by

$$\begin{aligned} \mu _X^{(\ell )}=\frac{a \ell !}{\theta ^\ell }\sum \limits _{i=0}^{\varphi (a-1)}\frac{(-1)^i}{(i+1)^{\ell +1}}A(a-1,i), \end{aligned}$$
(1.10)

where \(A(a-1,i)=\left( {\begin{array}{c}a-1\\ i\end{array}}\right) \) and \( \varphi (x)=\infty ,\) if x is non-integer and \(\varphi (x)=x,\) if x is integer. Moreover, the mean, variance, and moment-generating function (MGF) of \(GE(\theta ;a)\) are given, respectively, by

$$\begin{aligned} \mu _{X}=\text{ E }(X)=\frac{B(a)}{\theta },\text{ Var }(X)=\sigma ^{2}_{X}=\frac{C(a)}{\theta ^2} ~\text{ and }~ M_X(t)= a\beta \left( a,1-\frac{t}{\theta }\right) ,\nonumber \\ \end{aligned}$$
(1.11)

where \(B(a)=\Psi (a+1)-\Psi (1),\) \(C(a)=\Psi '(1)-\Psi '(a+1),\) \(\beta (a,b)=\frac{\Gamma (a)\Gamma (b)}{\Gamma (a+b)},\) and \(\Psi (.)\) is the digamma function, while \(\Psi '(.)\) is its derivative (the trigamma function).
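These closed forms are easy to confirm numerically. In the sketch below (ours; the digamma and trigamma functions are approximated by central differences of `math.lgamma`, an assumption adequate at this accuracy), the mean and variance of \(GE(\theta ;a)\) are checked against midpoint integration of the quantile function:

```python
import math

def digamma(x, h=1e-5):
    # Central difference of lgamma approximates Psi(x).
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

def trigamma(x, h=1e-4):
    # Second central difference of lgamma approximates Psi'(x).
    return (math.lgamma(x + h) - 2.0 * math.lgamma(x) + math.lgamma(x - h)) / (h * h)

def ge_quantile(u, theta, a):
    # Inverse of F(x) = (1 - exp(-theta*x))**a.
    return -math.log(1.0 - u ** (1.0 / a)) / theta

theta, a, n = 2.0, 3.0, 200000
h = 1.0 / n
m1 = m2 = 0.0
for i in range(n):
    x = ge_quantile((i + 0.5) * h, theta, a)
    m1 += x
    m2 += x * x
m1 *= h
m2 *= h

B = digamma(a + 1.0) - digamma(1.0)                  # B(a)
C = trigamma(1.0) - trigamma(a + 1.0)                # C(a)
assert abs(m1 - B / theta) < 1e-3                    # mean = B(a)/theta
assert abs((m2 - m1 * m1) - C / theta ** 2) < 1e-3   # variance = C(a)/theta^2
```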

The Shannon entropy is a mathematical measure of information that quantifies the average reduction in uncertainty or variability associated with an RV. This measure is maximal for the uniform distribution, additive for independent events, increasing in the number of outcomes with nonzero probabilities, continuous, nonnegative, and permutation-invariant. For more details about this measure, see Abd Elgawad et al. [2], Alawady et al. [9], Barakat and Husseiny [15], and Abd Elgawad et al. [4].

In this study, we also consider an inaccuracy measure, known as the Kerridge measure of inaccuracy, associated with two RVs as an extension of uncertainty; it was defined by Kerridge [36].

The Fisher information number (FIN) is the second moment of the “score function,” where the derivative is taken with respect to x in a given PDF \(f_X(\theta ,x),\) rather than with respect to the parameter \(\theta .\) It is the Fisher information (FI) for a location parameter; for this reason, it is also called the shift-invariant FI. Recently, the FIN has been frequently used in different areas of science. For example, the FIN is intimately related to many of the fundamental equations of theoretical physics, cf. Frieden and Gatenby [31]. For some recent works about this measure, see Abd Elgawad et al. [2], Tahmasebi and Jafari [44], and the references therein.

The rest of the paper is organized as follows. In Sect. 2, we study some distributional characterizations of the Sarmanov family and obtain some new interesting results pertaining to the Sarmanov family and the concomitants of OSs based on it. In Sect. 3, we first study the concomitants of OSs based on SAR\((\alpha )\) with general marginals. As an example, the GE is taken as a possible marginal, with the resulting model denoted by SAR-GE\((\theta _1,a_1;\theta _2,a_2).\) Moreover, some recurrence relations between the PDFs, MGFs, and moments of concomitants are derived. At the end of Sect. 3, we study the joint concomitants of OSs based on SAR\((\alpha ).\) In Sect. 4, we obtain some new and useful relations for the Shannon entropy concerning the Sarmanov copula. Moreover, the Shannon entropy, inaccuracy measure, and FIN for the Sarmanov family are derived and then computed, with some comparisons with those measures for the IFGM family. In Sect. 5, which contains evaluations of two real-world data sets, we examine the Shannon entropy and inaccuracy measure. Furthermore, when comparing the Sarmanov family to the FGM family on the second real data set, we find that the Sarmanov family fits the data better. Finally, we conclude the paper in Sect. 6.

2 Some Distributional Characterizations of the Sarmanov Family

Let \(X\sim {GE}(\theta _1;a_1)\) and \(Y\sim {GE}(\theta _2;a_2).\) Thus, it is easy to show that the (n, m)th joint moments of the SAR-GE\((\theta _1,a_1;\) \(\theta _2,a_2)\) family are given by

$$\begin{aligned} E(X^{n}Y^{m})= & {} \int _{-\infty }^{\infty }\int _{-\infty }^{\infty }x^{n}y^{m}f_X(x)f_Y(y)[1+3\alpha (2F_X(x)-1)(2F_Y(y)-1) \nonumber \\&+\frac{5}{4}\alpha ^2(3(2F_X(x)-1)^2-1)(3(2F_Y(y)-1)^2-1)]{\text {d}}x{\text {d}}y\nonumber \\= & {} 3\alpha \left[ E(U_1^{n})-E(X^{n})\right] \left[ E(V_1^{m})-E(Y^{m})\right] \nonumber \\&+\frac{5}{4}\alpha ^2\left[ 4E(U_2^{n})-6E(U_1^{n})+2E(X^{n})\right] \left[ 4E(V_2^{m})-6E(V_1^{m})+2E(Y^{m})\right] \nonumber \\&+E(X^{n})E(Y^{m}),~~n,m=1,2,\ldots \end{aligned}$$
(2.1)

where \(U_{1}\sim {GE}(\theta _1;2a_1),\) \(U_{2}\sim {GE}(\theta _1;3a_1),\) \( V_{1}\sim {GE}(\theta _2;2a_2)\) and \(V_{2}\sim {GE}(\theta _2;3a_2).\) Thus, by combining (2.1) and (1.11), we get

$$\begin{aligned} E(XY)= & {} \frac{1}{\theta _1 \theta _2}\{\left[ B(a_1)B(a_2)+3\alpha D(2a_1)D(2a_2)\right] \nonumber \\&+\frac{5}{4}\alpha ^2\left[ 4B(3a_2)-6B(2a_2)+2B(a_2)\right] \left[ 4B(3a_1)-6B(2a_1)+2B(a_1)\right] \},\nonumber \\ \end{aligned}$$
(2.2)

where \(D((k+1)a)=B((k+1)a)-B(ka)\), \(k=1,2.\) Therefore, the coefficient of correlation between X and Y is

$$\begin{aligned} \rho _{_{X,Y}}=\frac{3\alpha D(2a_1)D(2a_2)+\frac{5}{4}\alpha ^2(4B(3a_2)-6B(2a_2)+2B(a_2))(4B(3a_1)-6B(2a_1)+2B(a_1))}{\sqrt{C(a_1)C(a_2)}}.\nonumber \\ \end{aligned}$$
(2.3)

Table 1 displays the coefficient of correlation for SAR-GE\((\theta _1,a_1;\theta _2,a_2),\) obtained by using (2.3). The results of this table show that the maximum value of \(\rho _{_{X,Y}}\) for SAR-GE\((\theta _1,a_1;\theta _2,a_2)\) is 0.463407.

Table 1 Coefficient of correlation, \(\rho ,\) in SAR-GE \((\theta _1,a_1;\theta _2,a_2)\)
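The closed form (2.3) can be checked against direct numerical integration. Since the Sarmanov density separates in u and v, only one-dimensional integrals of the GE quantile are needed; the following sketch (ours; \(\theta _1=\theta _2=1,\) since the scale parameters cancel in \(\rho \), and digamma/trigamma are approximated via `math.lgamma`) compares both computations:

```python
import math

def digamma(x, h=1e-5):
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

def trigamma(x, h=1e-4):
    return (math.lgamma(x + h) - 2.0 * math.lgamma(x) + math.lgamma(x - h)) / (h * h)

def B(a):
    return digamma(a + 1.0) - digamma(1.0)

def C(a):
    return trigamma(1.0) - trigamma(a + 1.0)

def rho_closed_form(alpha, a1, a2):
    # Equation (2.3); D(2a) = B(2a) - B(a).
    d1, d2 = B(2.0 * a1) - B(a1), B(2.0 * a2) - B(a2)
    g1 = 4.0 * B(3.0 * a1) - 6.0 * B(2.0 * a1) + 2.0 * B(a1)
    g2 = 4.0 * B(3.0 * a2) - 6.0 * B(2.0 * a2) + 2.0 * B(a2)
    return (3.0 * alpha * d1 * d2 + 1.25 * alpha ** 2 * g1 * g2) / math.sqrt(C(a1) * C(a2))

def rho_numeric(alpha, a1, a2, n=100000):
    # Direct integration over the Sarmanov copula (separable density).
    h = 1.0 / n
    stats = []
    for a in (a1, a2):
        m1 = m2 = lin = quad = 0.0
        for i in range(n):
            u = (i + 0.5) * h
            x = -math.log(1.0 - u ** (1.0 / a))   # GE quantile with theta = 1
            s = 2.0 * u - 1.0
            m1 += x
            m2 += x * x
            lin += x * s                     # E[X (2F(X) - 1)]
            quad += x * (3.0 * s * s - 1.0)  # E[X (3(2F(X)-1)^2 - 1)]
        stats.append((m1 * h, m2 * h, lin * h, quad * h))
    (m1x, m2x, lx, qx), (m1y, m2y, ly, qy) = stats
    cov = 3.0 * alpha * lx * ly + 1.25 * alpha ** 2 * qx * qy
    return cov / math.sqrt((m2x - m1x ** 2) * (m2y - m1y ** 2))

alpha = math.sqrt(7.0) / 5.0
for a1, a2 in ((1.0, 1.0), (3.0, 2.0)):
    assert abs(rho_closed_form(alpha, a1, a2) - rho_numeric(alpha, a1, a2)) < 1e-3
```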

After simple algebra, the conditional DF of Y given \(X =x\) is given by

$$\begin{aligned} F_{Y|X}(y|x)= & {} F_Y(y)\{1+3\alpha (F_Y(y)-1)(2F_X(x)-1)+\frac{5}{4}\alpha ^2\left[ \!3(2F_X(x)-1)^2-1\!\right] \nonumber \\&\times \left[ 4F^2_Y(y)-6F_Y(y)+2\right] \}. \end{aligned}$$
(2.4)

Therefore, the regression curve of Y given \(X =x\) for SAR\((\alpha )\) is

$$\begin{aligned} E(Y|X=x)= & {} \frac{1}{\theta _2}\{B(a_2)+3\alpha D(2a_2)(2F_X (x)-1)+\frac{5}{4}\alpha ^2\left[ 3(2F_X (x)-1)^2-1\right] \nonumber \\&\times \left[ 4B(3a_2)-6B(2a_2)+2B(a_2)\right] \}, \end{aligned}$$
(2.5)

which shows that the conditional expectation is nonlinear in x.

We end this section by revealing two interesting features of the Sarmanov copula. A bivariate copula is a bivariate DF whose marginals are uniform on the interval (0, 1) (see [41]). Therefore, to obtain the copula of any of the extended families 1–5, we use the transformation \(u=F_X(x),\) \(v=F_Y(y).\) For example, the FGM copula is \(C(u,v;\omega )=uv(1+\omega (1-u)(1-v)),~0\le u,v\le 1,\) and the corresponding copula density is \({{{\mathcal {C}}}}(u,v;\omega )=1+\omega (1-2u)(1-2v).\) Clearly, the FGM copula is radially symmetric about \((\frac{1}{2},\frac{1}{2}),\) i.e., \({{{\mathcal {C}}}}(\frac{1}{2}-u,\frac{1}{2}-v;\omega )={{{\mathcal {C}}}}(\frac{1}{2}+u,\frac{1}{2}+v;\omega )\) (cf. [41]). We have the following result concerning the Sarmanov copula.

Proposition 1

The Sarmanov copula is radially symmetric, while none of the copulas corresponding to the extended families 1–4 is radially symmetric. Moreover, the PDF of the rth concomitant of OSs, \({{{\mathcal {S}}}}_{[r:n]}(.;\alpha ),\) based on the Sarmanov copula satisfies the relation

$$\begin{aligned} {{{\mathcal {S}}}}_{[r:n]}~ \left( \frac{1}{2}-v;\alpha \right) = {{{\mathcal {S}}}}_{[n-r+1:n]}\left( \frac{1}{2}+v;\alpha \right) ,~0\le v\le \frac{1}{2}. \end{aligned}$$
(2.6)

Proof

The first part of the proof is elementary. To prove the second part, let \({{{\mathcal {S}}}}(.,.;\alpha )\) be the PDF of the Sarmanov copula. According to (1.8), we get \({{{\mathcal {S}}}}_{[r:n]}(v;\alpha )=\int _{0}^{1}{{{\mathcal {S}}}}(u,v;\alpha ) f_{r:n}(u) {\text {d}}u,\) where \(f_{r:n}(u)\) is the PDF of the rth OS from the uniform distribution over (0,1). Applying the transformation \(u=\frac{1}{2}-z\) and changing v to \(\frac{1}{2}-v,\) we get

$$\begin{aligned} {{{\mathcal {S}}}}_{[r:n]}~ \left( \frac{1}{2}-v;\alpha \right)= & {} \int _{-\frac{1}{2}}^{\frac{1}{2}}{{{\mathcal {S}}}}\left( \frac{1}{2}-z,\frac{1}{2}-v;\alpha \right) f_{r:n}\left( \frac{1}{2}-z\right) {\text {d}}z\\= & {} \int _{-\frac{1}{2}}^{\frac{1}{2}}{{{\mathcal {S}}}}\left( \frac{1}{2}+z,\frac{1}{2}+v;\alpha \right) f_{n-r+1:n}\left( \frac{1}{2}+z\right) {\text {d}}z, \end{aligned}$$

since \(f_{r:n}(\frac{1}{2}-z)=f_{n-r+1:n}(\frac{1}{2}+z).\) Putting \(\frac{1}{2}+z =\eta ,\) we get

$$\begin{aligned} {{{\mathcal {S}}}}_{[r:n]}~ \left( \frac{1}{2}-v;\alpha \right)= & {} \int _{0}^{1}{{{\mathcal {S}}}}\left( \eta ,\frac{1}{2}+v;\alpha \right) f_{n-r+1:n}(\eta ) {\text {d}}\eta \\= & {} {{{\mathcal {S}}}}_{[n-r+1:n]}\left( \frac{1}{2}+v;\alpha \right) . \end{aligned}$$

This proves the second part. \(\square \)

Remark 1

The proof of Proposition 1 shows that the rth concomitant of OSs based on any radially symmetric copula satisfies the relation (2.6).
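Relation (2.6) can be illustrated numerically. In the sketch below (ours; the parameter values are arbitrary), the concomitant density is computed directly from the integral \({{{\mathcal {S}}}}_{[r:n]}(v;\alpha )=\int _0^1 {{{\mathcal {S}}}}(u,v;\alpha )f_{r:n}(u)\,{\text {d}}u\):

```python
import math

def sarmanov_density(u, v, alpha):
    s, t = 2.0 * u - 1.0, 2.0 * v - 1.0
    return (1.0 + 3.0 * alpha * s * t
            + 1.25 * alpha ** 2 * (3.0 * s * s - 1.0) * (3.0 * t * t - 1.0))

def uniform_os_density(u, r, n):
    # PDF of the r-th OS of n i.i.d. uniform (0,1) variables.
    logc = math.lgamma(n + 1.0) - math.lgamma(float(r)) - math.lgamma(n - r + 1.0)
    return math.exp(logc) * u ** (r - 1) * (1.0 - u) ** (n - r)

def concomitant_density(v, r, n, alpha, m=4000):
    # S_[r:n](v; alpha) by midpoint integration of (1.8).
    h = 1.0 / m
    return sum(sarmanov_density((i + 0.5) * h, v, alpha)
               * uniform_os_density((i + 0.5) * h, r, n)
               for i in range(m)) * h

alpha, n, r = 0.4, 7, 2
for v in (0.05, 0.2, 0.45):
    lhs = concomitant_density(0.5 - v, r, n, alpha)
    rhs = concomitant_density(0.5 + v, n - r + 1, n, alpha)
    assert abs(lhs - rhs) < 1e-6
```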

The following interesting result connects the FGM and Sarmanov copulas via the concomitants of OSs based on them.

Theorem 2.1

Let \({{{\mathcal {C}}}}_{[r:n]}(.;\alpha )\) be the PDF of the rth concomitant of OSs based on the FGM copula \(C(.,.;\alpha ).\) Then, we get

$$\begin{aligned} {{{\mathcal {S}}}}_{[n-r+1:n]}(v;\alpha )-{{{\mathcal {S}}}}_{[r:n]}(v;\alpha )=3[{{{\mathcal {C}}}}_{[n-r+1:n]}(v;\alpha )-{{{\mathcal {C}}}}_{[r:n]}(v;\alpha )],~0\le v\le 1. \nonumber \\ \end{aligned}$$
(2.7)

Proof

A quick look at the Sarmanov copula enables us to write

$$\begin{aligned} {{{\mathcal {S}}}}(u,v;\alpha )=3 {{{\mathcal {C}}}}(u,v;\alpha )+L(u,v;\alpha ), \end{aligned}$$
(2.8)

where \(L(u,v;\alpha )=\frac{5}{4}\alpha ^2[3(2u-1)^2-1][3(2v-1)^2-1]-2.\) Clearly, \(L(\frac{1}{2}+u,\frac{1}{2}+v;\alpha )=L(\frac{1}{2}-u,\frac{1}{2}-v;\alpha )\) (i.e., the function \(L(u,v;\alpha )\) is radially symmetric). Moreover, the relation (2.8) yields

$$\begin{aligned} {{{\mathcal {S}}}}_{[r:n]}(v;\alpha )= & {} 3{{{\mathcal {C}}}}_{[r:n]}(v;\alpha )+\int _{0}^{1}L(u,v;\alpha )f_{r:n}(u) {\text {d}}u\nonumber \\= & {} 3{{{\mathcal {C}}}}_{[r:n]}(v;\alpha )+J_r(v;\alpha ), \end{aligned}$$
(2.9)

where \(f_{r:n}(u)\) is the PDF of the rth OS based on the uniform distribution over (0,1). In view of the fact that the function \(L(u,v;\alpha )\) is radially symmetric, we can proceed as we have done in the proof of Proposition 1 to prove, after some simple algebra, that \(J_{n-r+1}(v;\alpha )=J_r(v;\alpha ).\) Thus, by using the relation (2.9), we get the required relation (2.7). \(\square \)
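A numerical illustration of (2.7) follows (a sketch of ours; the chosen \(\alpha ,\) n, and r are arbitrary, and both concomitant densities are computed directly from (1.8)):

```python
import math

def fgm_density(u, v, alpha):
    return 1.0 + alpha * (1.0 - 2.0 * u) * (1.0 - 2.0 * v)

def sarmanov_density(u, v, alpha):
    s, t = 2.0 * u - 1.0, 2.0 * v - 1.0
    return (1.0 + 3.0 * alpha * s * t
            + 1.25 * alpha ** 2 * (3.0 * s * s - 1.0) * (3.0 * t * t - 1.0))

def uniform_os_density(u, r, n):
    logc = math.lgamma(n + 1.0) - math.lgamma(float(r)) - math.lgamma(n - r + 1.0)
    return math.exp(logc) * u ** (r - 1) * (1.0 - u) ** (n - r)

def concomitant_density(copula_pdf, v, r, n, m=4000):
    h = 1.0 / m
    return sum(copula_pdf((i + 0.5) * h, v) * uniform_os_density((i + 0.5) * h, r, n)
               for i in range(m)) * h

alpha, n, r = 0.5, 6, 2
S = lambda v, k: concomitant_density(lambda u, w: sarmanov_density(u, w, alpha), v, k, n)
Cc = lambda v, k: concomitant_density(lambda u, w: fgm_density(u, w, alpha), v, k, n)
for v in (0.1, 0.3, 0.7):
    lhs = S(v, n - r + 1) - S(v, r)
    rhs = 3.0 * (Cc(v, n - r + 1) - Cc(v, r))
    assert abs(lhs - rhs) < 1e-6
```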

3 Concomitants of OSs Based on the Sarmanov Family

In this section, we obtain the marginal DFs, MGFs, and moments of the concomitants of OSs for SAR\((\alpha ),\) together with recurrence relations between their PDFs, MGFs, and moments. As an illustration of the obtained results, the SAR-GE\((\theta _1,a_1;\theta _2,a_2)\) family is studied. Moreover, the joint PDF of the bivariate concomitants of OSs based on SAR\((\alpha )\) is derived.

3.1 Marginal Distributions of Concomitants of OSs

The following theorem gives a useful representation for the PDF of \(Y_{[r:n]}.\)

Theorem 3.1

Let \(V_1\sim F_Y^{2}\) and \(V_2\sim F_Y^{3}.\) Then,

$$\begin{aligned} f_{[r:n]}(y)= & {} \left( 1-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}\right) f_Y(y)+ \left( 3\Delta ^{(\alpha )}_{1,r:n}-\frac{15}{2}\Delta ^{(\alpha )}_{2,r:n}\right) f_{V_1}(y)\nonumber \\&+5\Delta ^{(\alpha )}_{2,r:n}f_{V_2}(y), \end{aligned}$$
(3.1)

where \(\Delta ^{(\alpha )}_{1,r:n}=\frac{\alpha (2r-n-1)}{n+1}\) and \(\Delta ^{(\alpha )}_{2,r:n}=2\alpha ^2 \left[ 1-6\frac{r(n-r+1)}{(n+1)(n+2)}\right] .\)

Proof

By using (1.8) and simple algebra, we get

$$\begin{aligned} f_{[r:n]}(y)= & {} \int _{-\infty }^{\infty }f_Y(y)\left\{ 1+3\alpha (2F_X(x)-1)(2F_Y(y)-1)\right. \\&+\left. \frac{5}{4} \alpha ^2 \left[ 3(2F_X(x)-1)^2-1\right] \left[ 3(2F_Y(y)-1)^2-1\right] \right\} \\&\frac{1}{\beta (r,n-r+1)}F_X^{r-1}(x)(1-F_X(x))^{n-r}f_X(x){\text {d}}x \\= & {} f_Y(y)+3(f_{V_1}(y)-f_Y(y))I_1+\frac{5}{4}\left[ 4f_{V_2}(y)-6f_{V_1}(y)+2f_Y(y)\right] I_2, \end{aligned}$$

where

$$\begin{aligned} I_1= & {} \frac{\alpha }{\beta (r,n-r+1)}\int _{-\infty }^{\infty }(2F_X(x)-1)F_X^{r-1}(x)(1-F_X(x))^{n-r}f_X(x){\text {d}}x \\= & {} \frac{\alpha (2r-n-1)}{n+1}=\Delta ^{(\alpha )}_{1,r:n} \end{aligned}$$

and

$$\begin{aligned} I_2= & {} \frac{\alpha ^2}{\beta (r,n-r+1)}\int _{-\infty }^{\infty }\left[ 3(2F_X(x)-1)^2-1\right] F_X^{r-1}(x)(1-F_X(x))^{n-r}f_X(x){\text {d}}x\\= & {} 2\alpha ^2 \left[ 1-6\frac{r(n-r+1)}{(n+1)(n+2)}\right] =\Delta ^{(\alpha )}_{2,r:n}. \end{aligned}$$

This completes the proof. \(\square \)
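In the copula case (uniform marginals, so \(f_Y(v)=1,\) \(f_{V_1}(v)=2v,\) \(f_{V_2}(v)=3v^2\) on (0,1)), the representation (3.1) can be checked against direct integration of (1.8). The following sketch (ours) does so:

```python
import math

def sarmanov_density(u, v, alpha):
    s, t = 2.0 * u - 1.0, 2.0 * v - 1.0
    return (1.0 + 3.0 * alpha * s * t
            + 1.25 * alpha ** 2 * (3.0 * s * s - 1.0) * (3.0 * t * t - 1.0))

def uniform_os_density(u, r, n):
    logc = math.lgamma(n + 1.0) - math.lgamma(float(r)) - math.lgamma(n - r + 1.0)
    return math.exp(logc) * u ** (r - 1) * (1.0 - u) ** (n - r)

def density_direct(v, r, n, alpha, m=4000):
    # Left-hand side: the integral (1.8) computed by the midpoint rule.
    h = 1.0 / m
    return sum(sarmanov_density((i + 0.5) * h, v, alpha)
               * uniform_os_density((i + 0.5) * h, r, n) for i in range(m)) * h

def density_representation(v, r, n, alpha):
    # Right-hand side of (3.1) with f_Y(v) = 1, f_V1(v) = 2v, f_V2(v) = 3v^2.
    d1 = alpha * (2.0 * r - n - 1.0) / (n + 1.0)
    d2 = 2.0 * alpha ** 2 * (1.0 - 6.0 * r * (n - r + 1.0) / ((n + 1.0) * (n + 2.0)))
    return ((1.0 - 3.0 * d1 + 2.5 * d2)
            + (3.0 * d1 - 7.5 * d2) * 2.0 * v
            + 5.0 * d2 * 3.0 * v * v)

alpha, n = 0.45, 8
for r in (1, 3, 8):
    for v in (0.1, 0.5, 0.9):
        assert abs(density_direct(v, r, n, alpha)
                   - density_representation(v, r, n, alpha)) < 1e-5
```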

Relying on (3.1), the MGF of \(Y_{[r:n]}\) based on SAR\((\alpha )\) is given by

$$\begin{aligned} M_{[r:n]}(t)= & {} \left( 1-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}\right) M_Y(t)+ \left( 3\Delta ^{(\alpha )}_{1,r:n}-\frac{15}{2}\Delta ^{(\alpha )}_{2,r:n}\right) M_{V_1}(t)\nonumber \\&+\, 5\Delta ^{(\alpha )}_{2,r:n}M_{V_2}(t), \end{aligned}$$
(3.2)

where \(M_{Y}(t), M_{V_1}(t),\) and \(M_{V_2}(t)\) are the MGFs of the RVs \(Y, V_1,\) and \(V_2,\) respectively. Thus, by using (3.1), the \(\ell \)th moment of \(Y_{[r:n]}\) based on SAR\((\alpha )\) is given by

$$\begin{aligned} \mu _{[r:n]}^{(\ell )}= & {} (1-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}) \mu _Y^{(\ell )}\nonumber \\&+(3\Delta ^{(\alpha )}_{1,r:n}-\frac{15}{2}\Delta ^{(\alpha )}_{2,r:n})\mu _{V_1}^{(\ell )}+5\Delta ^{(\alpha )}_{2,r:n}\mu _{V_2}^{(\ell )}, \end{aligned}$$
(3.3)

where \(\mu _Y^{(\ell )}=E[Y^{\ell }],~\mu _{V_1}^{(\ell )}=E[{V_1}^{\ell }],\) and \(\mu _{V_2}^{(\ell )}=E[{V_2}^{\ell }]\). Moreover, by putting \(\ell =1\) in (3.3) and using (1.11), we get the mean of \(Y_{[r:n]}\) based on SAR-GE\((\theta _1,a_1;\theta _2,a_2)\) as

$$\begin{aligned} \mu _{[r:n]}= & {} \frac{1}{\theta _2}\left[ \left( 1-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}\right) B(a_2)+ \left( 3\Delta ^{(\alpha )}_{1,r:n}-\frac{15}{2}\Delta ^{(\alpha )}_{2,r:n}\right) B(2a_2)\right. \nonumber \\&\left. +5\Delta ^{(\alpha )}_{2,r:n}B(3a_2)\right] . \end{aligned}$$
(3.4)

The following theorem shows that both the FGM and Sarmanov families share an interesting property concerning the concomitants of OSs based on them.

Theorem 3.2

Let \(f_{[r:n]}^{(c)}(y;\omega )\) be the PDF of \(Y_{[r:n]}\) based on the FGM\((\omega )\) family. Furthermore, throughout this theorem, \(f_{[r:n]}(y;\alpha )\) will denote the PDF of \(Y_{[r:n]}\) based on the Sarmanov family. Then,

  1.

    \(f_{[r:n]}^{(c)}(y;-\omega )= f_{[n-r+1:n]}^{(c)}(y;\omega ),\)

  2.

    \(f_{[r:n]}~ (y;-\alpha )= f_{[n-r+1:n]}(y;\alpha ).\)

Proof

The first part of the theorem follows from the obvious relation \(f_{[r:n]}^{(c)}~ (y;\omega )=f_Y(y)\left\{ 1-\Lambda _{r,n}(\omega )[2F_Y(y)-1]\right\} ,\) where \(\Lambda _{r,n}(\omega ) =(1-\frac{2r}{n+1})\omega \) and \(\Lambda _{r,n}(-\omega )=\Lambda _{n-r+1,n}(\omega ).\) We now prove the second part. By applying the easy-to-check relations

$$\begin{aligned}&\Delta ^{(\alpha )}_{1,r:n}=\frac{\alpha (2r-n-1)}{n+1}=\Delta ^{(-\alpha )}_{1,n-r+1:n},\\&\Delta ^{(\alpha )}_{2,r:n}=2\alpha ^2 \left[ 1-6\frac{r(n-r+1)}{(n+1)(n+2)}\right] =\Delta ^{(\alpha )}_{2,n-r+1:n},~\text{ and }~\Delta ^{(\alpha )}_{2,r:n}=\Delta ^{(-\alpha )}_{2,r:n}, \end{aligned}$$

we immediately get the relation \(f_{[r:n]}~ (y;-\alpha )= f_{[n-r+1:n]}(y;\alpha ).\) This completes the proof. \(\square \)

3.2 Some Recurrence Relations

In this subsection, we derive some useful recurrence relations between the PDFs, MGFs, and moments of concomitants. From (3.1), we get the following general recurrence relation:

$$\begin{aligned}&f_{[r:n]}(y)-f_{[r-i:n-j]}(y)=3\Delta ^{(\alpha )}_{1,i,j;r:n}(f_{V_1}(y)-f_Y(y))\nonumber \\&\quad + 5\Delta ^{(\alpha )}_{2,i,j;r:n}\left( \frac{1}{2}f_Y(y)-\frac{3}{2}f_{V_1}(y)+f_{V_2}(y)\right) , \end{aligned}$$
(3.5)

where

$$\begin{aligned}&\Delta ^{(\alpha )}_{1,i,j;r:n}=\frac{2\alpha (ni-rj+i)}{(n+1)(n+1-j)}~\text{ and } \\&~\Delta ^{(\alpha )}_{2,i,j;r:n}=12\alpha ^2\left[ \frac{(r-i)(n-r+1+i-j)}{(n+1-j)(n+2-j)}-\frac{r(n-r+1)}{(n+1)(n+2)}\right] . \end{aligned}$$

Using (3.5), we get the following recurrence relations between the MGFs and between the moments of the concomitants of OSs based on SAR\((\alpha ),\) respectively:

$$\begin{aligned}&M_{[r:n]}(t)-M_{[r-i:n-j]}(t)\\&\quad =3\Delta ^{(\alpha )}_{1,i,j;r:n}(M_{V_1}(t)-M_Y(t))+ 5\Delta ^{(\alpha )}_{2,i,j;r:n}\left( \frac{1}{2}M_Y(t)-\frac{3}{2}M_{V_1}(t)+M_{V_2}(t)\right) \end{aligned}$$

and

$$\begin{aligned}&\mu ^{(\ell )}_{[r:n]}-\mu ^{(\ell )}_{[r-i:n-j]}\nonumber \\&\quad =3\Delta ^{(\alpha )}_{1,i,j;r:n}\left( \mu ^{(\ell )}_{V_1}-\mu ^{(\ell )}_Y\right) + 5\Delta ^{(\alpha )}_{2,i,j;r:n}\left( \frac{1}{2}\mu ^{(\ell )}_Y-\frac{3}{2}\mu ^{(\ell )}_{V_1}+\mu ^{(\ell )}_{V_2}\right) . \end{aligned}$$
(3.6)
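The recurrence (3.6) can be checked numerically in the copula case with \(\ell =2,\) where \(\mu _Y^{(2)}=1/3,\) \(\mu _{V_1}^{(2)}=1/2,\) and \(\mu _{V_2}^{(2)}=3/5\) (a sketch of ours; the chosen r, n, i, j, and \(\alpha \) are arbitrary):

```python
import math

def sarmanov_density(u, v, alpha):
    s, t = 2.0 * u - 1.0, 2.0 * v - 1.0
    return (1.0 + 3.0 * alpha * s * t
            + 1.25 * alpha ** 2 * (3.0 * s * s - 1.0) * (3.0 * t * t - 1.0))

def uniform_os_density(u, r, n):
    logc = math.lgamma(n + 1.0) - math.lgamma(float(r)) - math.lgamma(n - r + 1.0)
    return math.exp(logc) * u ** (r - 1) * (1.0 - u) ** (n - r)

def concomitant_moment(ell, r, n, alpha, m=800):
    # mu^(ell)_[r:n] by direct double midpoint integration of (1.8).
    h = 1.0 / m
    total = 0.0
    for i in range(m):
        u = (i + 0.5) * h
        w = uniform_os_density(u, r, n)
        total += w * sum(((j + 0.5) * h) ** ell
                         * sarmanov_density(u, (j + 0.5) * h, alpha)
                         for j in range(m))
    return total * h * h

# Second moments (ell = 2) of Y ~ U(0,1), V1 ~ F_Y^2, V2 ~ F_Y^3:
muY, muV1, muV2 = 1.0 / 3.0, 1.0 / 2.0, 3.0 / 5.0

alpha, n, r, i, j = 0.5, 6, 5, 1, 1
d1 = 2.0 * alpha * (n * i - r * j + i) / ((n + 1.0) * (n + 1.0 - j))
d2 = 12.0 * alpha ** 2 * ((r - i) * (n - r + 1.0 + i - j) / ((n + 1.0 - j) * (n + 2.0 - j))
                          - r * (n - r + 1.0) / ((n + 1.0) * (n + 2.0)))
lhs = concomitant_moment(2, r, n, alpha) - concomitant_moment(2, r - i, n - j, alpha)
rhs = 3.0 * d1 * (muV1 - muY) + 5.0 * d2 * (0.5 * muY - 1.5 * muV1 + muV2)
assert abs(lhs - rhs) < 1e-4
```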

The following two theorems give some useful recurrence relations satisfied by the \(\ell \)th moments of concomitants of OSs based on SAR\((\alpha )\) for any arbitrary distribution.

Theorem 3.3

For any \(\ell \in \Re ^+\) and \(1\le r\le n-2,\) we have

$$\begin{aligned} \frac{\mu ^{(\ell )}_{[r+2:n]}-\mu ^{(\ell )}_{[r:n]}}{\mu ^{(\ell )}_{[r+1:n]}-\mu ^{(\ell )}_{[r:n]}}=\frac{2\alpha (n+2) \left( \mu ^{(\ell )}_{V_1}-\mu ^{(\ell )}_Y\right) - 20\alpha ^2(n-2r-1)\left( \frac{1}{2}\mu ^{(\ell )}_Y-\frac{3}{2}\mu ^{(\ell )}_{V_1}+\mu ^{(\ell )}_{V_2}\right) }{\alpha (n+2) \left( \mu ^{(\ell )}_{V_1}-\mu ^{(\ell )}_Y\right) - 10\alpha ^2(n-2r)\left( \frac{1}{2}\mu ^{(\ell )}_Y-\frac{3}{2}\mu ^{(\ell )}_{V_1}+\mu ^{(\ell )}_{V_2}\right) }\nonumber \!\!\!\\ \end{aligned}$$
(3.7)

and

$$\begin{aligned}&\mu ^{(\ell )}_{[r+2:n]}+\mu ^{(\ell )}_{[r+1:n]}-2\mu ^{(\ell )}_{[r:n]}=\frac{18\alpha }{(n+1)}\left( \mu ^{(\ell )}_{V_1}-\mu ^{(\ell )}_Y\right) \nonumber \\&\quad -\frac{60\alpha ^2(3n-6r-2)}{(n+1)(n+2)}\left( \frac{1}{2}\mu ^{(\ell )}_Y-\frac{3}{2}\mu ^{(\ell )}_{V_1}+\mu ^{(\ell )}_{V_2}\right) . \end{aligned}$$
(3.8)

Proof

Putting \( i=2, j=0, \) and replacing r by \(r+2\) in (3.6), we get

$$\begin{aligned}&\mu ^{(\ell )}_{[r+2:n]}-\mu ^{(\ell )}_{[r:n]}=\frac{12\alpha }{(n+1)}\left( \mu ^{(\ell )}_{V_1}-\mu ^{(\ell )}_Y\right) \nonumber \\&\quad - \frac{120\alpha ^2(n-2r-1)}{(n+1)(n+2)}\left( \frac{1}{2}\mu ^{(\ell )}_Y-\frac{3}{2}\mu ^{(\ell )}_{V_1}+\mu ^{(\ell )}_{V_2}\right) . \end{aligned}$$
(3.9)

On the other hand, putting \( i=1, j=0, \) and replacing r by \(r+1\) in (3.6), we get

$$\begin{aligned} \mu ^{(\ell )}_{[r+1:n]}-\mu ^{(\ell )}_{[r:n]}= & {} \frac{6\alpha }{(n+1)}\left( \mu ^{(\ell )}_{V_1}-\mu ^{(\ell )}_Y\right) \nonumber \\&- \frac{60\alpha ^2(n-2r)}{(n+1)(n+2)}\left( \frac{1}{2}\mu ^{(\ell )}_Y-\frac{3}{2}\mu ^{(\ell )}_{V_1}+\mu ^{(\ell )}_{V_2}\right) . \end{aligned}$$
(3.10)

Now, by dividing (3.9) by (3.10), we obtain (3.7). Relation (3.8) follows by adding (3.9) to (3.10). \(\square \)

Theorem 3.4

For any \(\ell \in \Re ^+\) and \(1\le r\le n,\) we have

$$\begin{aligned}&\frac{\mu ^{(\ell )}_{[r:n]}-\mu ^{(\ell )}_{[r:n-2]}}{\mu ^{(\ell )}_{[r:n]}-\mu ^{(\ell )}_{[r:n-1]}}\nonumber \\&\quad =\frac{20\alpha ^2(rn^2-2nr^2-r^2-r) \left( \frac{1}{2}\mu ^{(\ell )}_Y-\frac{3}{2}\mu ^{(\ell )}_{V_1}+\mu ^{(\ell )}_{V_2}\right) -2\alpha rn(n+2)\left( \mu ^{(\ell )}_{V_1}-\mu ^{(\ell )}_Y\right) }{10\alpha ^2(n-1)(nr-2r^2)\left( \frac{1}{2}\mu ^{(\ell )}_Y-\frac{3}{2} \mu ^{(\ell )}_{V_1}+\mu ^{(\ell )}_{V_2} \right) -\alpha r(n-1)(n+2)\left( \mu ^{(\ell )}_{V_1}-\mu ^{(\ell )}_Y\right) } \end{aligned}$$
(3.11)

and

$$\begin{aligned}&2\mu ^{(\ell )}_{[r:n]}-\mu ^{(\ell )}_{[r:n-2]}-\mu ^{(\ell )}_{[r:n-1]}\nonumber \\&\quad = \frac{60\alpha ^2(3rn^2-6nr^2-rn-2r)}{n(n+1)(n+2)(n-1)}\left( \frac{1}{2}\mu ^{(\ell )}_Y-\frac{3}{2}\mu ^{(\ell )}_{V_1}+\mu ^{(\ell )}_{V_2}\right) \nonumber \\&\qquad -\frac{6\alpha r(3n-1)}{n(n-1)(n+1)}\left( \mu ^{(\ell )}_{V_1}-\mu ^{(\ell )}_Y\right) . \end{aligned}$$
(3.12)

Proof

First, we use the representation (3.6) with \( i=0, j=2 \) to get

$$\begin{aligned}&\mu ^{(\ell )}_{[r:n]}-\mu ^{(\ell )}_{[r:n-2]}=\frac{-12\alpha r\left( \mu ^{(\ell )}_{V_1}-\mu ^{(\ell )}_Y\right) }{(n-1)(n+1)}\nonumber \\&\quad + \frac{120\alpha ^2(rn^2-2nr^2-r^2-r)}{n(n+1)(n+2)(n-1)}\left( \frac{1}{2}\mu ^{(\ell )}_Y-\frac{3}{2}\mu ^{(\ell )}_{V_1}+\mu ^{(\ell )}_{V_2}\right) . \end{aligned}$$
(3.13)

On the other hand, using the representation (3.6) with \( i=0, j=1, \) we get

$$\begin{aligned} \mu ^{(\ell )}_{[r:n]}-\mu ^{(\ell )}_{[r:n-1]}= & {} \frac{-6\alpha r}{n(n+1)}\left( \mu ^{(\ell )}_{V_1}-\mu ^{(\ell )}_Y\right) + \frac{60\alpha ^2(nr-2r^2)}{n(n+1)(n+2)}\nonumber \\&\times \left( \frac{1}{2}\mu ^{(\ell )}_Y-\frac{3}{2}\mu ^{(\ell )}_{V_1}+\mu ^{(\ell )}_{V_2}\right) . \end{aligned}$$
(3.14)

Now, by dividing (3.13) by (3.14) we obtain (3.11). Finally, adding (3.13) to (3.14), we get (3.12). \(\square \)

3.3 Joint Distribution of Bivariate Concomitants of OSs Based on SAR(\(\alpha \))

The following theorem gives the joint PDF \(f_{[r,s:n]}(y_1,y_2)\) (defined by (1.9)) of the concomitants \(Y_{[r:n]}\) and \(Y_{[s:n]}, r<s,\) based on SAR\((\alpha ).\)

Theorem 3.5

Let \(V_1\sim F_Y^{2}\) and \(V_2\sim F_Y^{3}.\) Then,

$$\begin{aligned}&f_{[r,s:n]}(y_{1},y_{2})\nonumber \\&\quad =f_Y(y_1)f_Y(y_2)+\left( 3\Delta ^{(\alpha )}_{1,r:n}-\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n} \right) f_Y(y_2)(f_{V_1}(y_1)-f_Y(y_1)) +\left( 3\Delta ^{(\alpha )}_{1,s:n}\right. \nonumber \\&\qquad -\left. \frac{5}{2}\Delta ^{(\alpha )}_{2,s:n}\right) f_Y(y_1)(f_{V_1}(y_2)-f_Y(y_2))+5\Delta ^{(\alpha )}_{2,r:n}f_Y(y_2)(f_{V_2}(y_1)-f_{V_1}(y_1)) \nonumber \\&\qquad +5\Delta ^{(\alpha )}_{2,s:n}f_Y(y_1)(f_{V_2}(y_2)-f_{V_1}(y_2))+\left( 9\Delta ^{(\alpha )}_{1,r,s:n}- \frac{15}{2}\Delta ^{(\alpha )}_{2,r,s:n}-\frac{15}{2}\Delta ^{(\alpha )}_{3,r,s:n}\right. \nonumber \\&\qquad +\left. \frac{25}{4}\Delta ^{(\alpha )}_{4,r,s:n}\right) (f_{V_1}(y_1)-f_Y(y_1))(f_{V_1}(y_2)-f_Y(y_2))+ \left( 15\Delta ^{(\alpha )}_{2,r,s:n}-\frac{25}{2}\Delta ^{(\alpha )}_{4,r,s:n}\right) \nonumber \\&\qquad \times (f_{V_2}(y_1)-f_{V_1}(y_1))(f_{V_1}(y_2)-f_Y(y_2))+\left( 15\Delta ^{(\alpha )}_{3,r,s:n}-\frac{25}{2} \Delta ^{(\alpha )}_{4,r,s:n}\right) (f_{V_1}(y_1) \nonumber \\&\qquad -f_{Y}(y_1))(f_{V_2}(y_2)-f_{V_1}(y_2))+25 \Delta ^{(\alpha )}_{4,r,s:n}(f_{V_2}(y_1)-f_{V_1}(y_1))(f_{V_2}(y_2)-f_{V_1}(y_2)),\nonumber \\ \end{aligned}$$
(3.15)

where \(\Delta ^{(\alpha )}_{1,s:n}\) and \(\Delta ^{(\alpha )}_{2,s:n}\) are defined by replacing r with s in \(\Delta ^{(\alpha )}_{1,r:n}\) and \(\Delta ^{(\alpha )}_{2,r:n}\), respectively,

$$\begin{aligned}&\Delta ^{(\alpha )}_{1,r,s:n}=\alpha ^2\left[ \frac{4r(s+1)}{(n+1)(n+2)}-\frac{2(r+s)}{n+1}+1\right] ,\\&\Delta ^{(\alpha )}_{2,r,s:n}=\alpha ^3\left[ \frac{24r(r+1)(s+2)}{(n+3)(n+2)(n+1)}-\frac{24r(s+1)+12r(r+1)}{(n+1)(n+2)}+\frac{4s+12r}{n+1}-2\right] ,\\&\Delta ^{(\alpha )}_{3,r,s:n}=\alpha ^3\left[ \frac{24r(s+1)(s+2)}{(n+1)(n+2)(n+3)}-\frac{24r(s+1)+12s(s+1)}{(n+1)(n+2)}+\frac{4r+12s}{n+1}-2\right] , \end{aligned}$$

and

$$\begin{aligned} \Delta ^{(\alpha )}_{4,r,s:n}= & {} \alpha ^4\left[ \frac{144r(r+1)(s+2)(s+3)}{(n+1)(n+2)(n+3)(n+4)}-\frac{144r(s+2)(s+r+2)}{(n+1)(n+2)(n+3)}\right. \\&\left. + \frac{24r(r+1)+144r(s+1)+24s(s+1)}{(n+1)(n+2)}-\frac{24(r+s)}{n+1}+4\right] . \end{aligned}$$

Proof

Consider the following integration:

$$\begin{aligned}&I_{p,q}^{}(r,s,n)\\&\quad =\frac{\Gamma (n+1)}{\Gamma (r)\Gamma (s-r)\Gamma (n-s+1)}\int _{-\infty }^{\infty }\int _{-\infty }^{x_{2}}F^p_X(x_1)F^q_X(x_2) F^{r-1}_X(x_1)\\&\qquad \times (F_X(x_2)-F_X(x_1))^{s-r-1} (1-F_X(x_2))^{n-s}f_X(x_1)f_X(x_2){\text {d}}x_{1}{\text {d}}x_{2}. \end{aligned}$$

Taking the transformation \( u_1=F_X(x_1)\) and \( u_2=F_X(x_2)\), we get

$$\begin{aligned} I_{p,q}^{}(r,s,n)= & {} \frac{\Gamma (n+1)}{\Gamma (r)\Gamma (s-r)\Gamma (n-s+1)}\\&\int _{0}^{1}\int _{0}^{u_2} u^{p+r-1}_1u_2^{q}(u_2-u_1)^{s-r-1}(1-u_2)^{n-s}{\text {d}}u_{1}{\text {d}}u_{2}. \end{aligned}$$

Furthermore, by using the transformation \( z=\frac{u_1}{u_2},\) we get

$$\begin{aligned} I_{p,q}^{}(r,s,n)= & {} \frac{\Gamma (n+1)}{\Gamma (r)\Gamma (s-r)\Gamma (n-s+1)}\nonumber \\&\int _{0}^{1}\int _{0}^{1} z^{p+r-1}(1-z)^{s-r-1}u_2^{p+q+s-1}(1-u_2)^{n-s}{\text {d}}z{\text {d}}u_{2} \nonumber \\&\quad =\frac{\Gamma (n+1)~\Gamma (r+p)~\Gamma (s+p+q)}{\Gamma (n+p+q+1)~\Gamma (r)~\Gamma (s+p)},~~p,q=0,1,2,\ldots \end{aligned}$$
(3.16)
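The closed form (3.16) can be checked against a direct midpoint-rule evaluation of the defining double integral. A minimal sketch (the grid size m is an arbitrary accuracy knob, not part of the derivation):

```python
import math

def I_closed(r, s, n, p, q):
    # Right-hand side of (3.16)
    return (math.gamma(n + 1) * math.gamma(r + p) * math.gamma(s + p + q)
            / (math.gamma(n + p + q + 1) * math.gamma(r) * math.gamma(s + p)))

def I_numeric(r, s, n, p, q, m=400):
    # Midpoint-rule evaluation of the defining double integral over
    # 0 < u1 < u2 < 1 (after the substitution u_t = F_X(x_t))
    c = (math.gamma(n + 1)
         / (math.gamma(r) * math.gamma(s - r) * math.gamma(n - s + 1)))
    h2 = 1.0 / m
    total = 0.0
    for i in range(m):
        u2 = (i + 0.5) * h2
        h1 = u2 / m                      # inner grid rescaled to (0, u2)
        inner = sum(((j + 0.5) * h1)**(p + r - 1)
                    * (u2 - (j + 0.5) * h1)**(s - r - 1) for j in range(m))
        total += inner * h1 * u2**q * (1 - u2)**(n - s)
    return c * total * h2
```

The quadrature agrees with the closed form to well within the grid error for the values of \((p,q)\) used in the proof below.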

Now, by using (1.9), we get

$$\begin{aligned} f_{[r,s:n]}(y_{1},y_{2})= & {} \int _{-\infty }^{\infty }\int _{-\infty }^{x_{2}}f_{Y|X}(y_{1}| x_{1})f_{Y| X}(y_{2}| x_{2})f_{r,s:n}(x_{1},x_{2}){\text {d}}x_{1}{\text {d}}x_{2}\\= & {} \int _{-\infty }^{\infty }\int _{-\infty }^{x_{2}} f_{r,s:n}(x_{1},x_{2})[f_Y(y_1)+3\alpha f_Y(y_1)(2F_X(x_1)-1)(2F_Y(y_1)-1)\\&+\, \frac{5}{4}\alpha ^2f_Y(y_1)(3(2F_X(x_1)-1)^2-1)(3(2F_Y(y_1)-1)^2-1)]\\&\times \, [f_Y(y_2)+3\alpha f_Y(y_2)(2F_X(x_2)-1)(2F_Y(y_2)-1)+\frac{5}{4}\alpha ^2f_Y(y_2)\\&\times \, (3(2F_X(x_2)-1)^2-1)(3(2F_Y(y_2)-1)^2-1)]{\text {d}}x_{1}{\text {d}}x_{2}. \end{aligned}$$

By using the relations \(f_{V_1}=2f_YF_Y\) and \(f_{V_2}=3f_YF^2_Y\) and carrying out some algebra, we get

$$\begin{aligned}&f_{[r,s:n]}(y_{1},y_{2})\\&\quad =\int _{-\infty }^{\infty }\int _{-\infty }^{x_{2}} f_{r,s:n}(x_{1},x_{2})[f_Y(y_1)+3\alpha (2F_X(x_1)-1)(f_{V_1}(y_1)-f_Y(y_1)) \\&\qquad +\,\frac{5}{4}\alpha ^2(12F^2_X(x_1)-12F_X(x_1)+2)(4f_{V_2}(y_1)-6f_{V_1}(y_1)+2f_{Y}(y_1))] \\&\qquad \times [f_Y(y_2)+3\alpha (2F_X(x_2)-1)(f_{V_1}(y_2)-f_Y(y_2))\\&\qquad +\,\frac{5}{4}\alpha ^2(12F^2_X(x_2)-12F_X(x_2) \\&\qquad +\,2)(4f_{V_2}(y_2)-6f_{V_1}(y_2)+2f_{Y}(y_2))]{\text {d}}x_{1}{\text {d}}x_{2}. \end{aligned}$$

On the other hand, upon using (3.16), with \(p=1\) and \(q=0,\) for \(t=1,\) and with \(p=0\) and \(q=1,\) for \(t=2,\) we get after some algebra

$$\begin{aligned}&\int _{-\infty }^{\infty }\int _{-\infty }^{x_2}\frac{\Gamma (n+1)}{\Gamma (r)\Gamma (s-r)\Gamma (n-s+1)}\alpha (2F_X(x_{t})-1) F^{r-1}_X(x_1)\\&\qquad \times (F_X(x_2)-F_X(x_1))^{s-r-1}(1-F_X(x_2))^{n-s}f_X(x_1)f_X(x_2){\text {d}}x_{1}{\text {d}}x_{2}\\&\quad = \left\{ \begin{array}{ll} \alpha \left( 2I_{1,0}(r,s,n)-1\right) =\frac{\alpha (2r-n-1)}{n+1}=\Delta ^{(\alpha )}_{1,r:n},&{}t=1,\\ \alpha \left( 2I_{0,1}(r,s,n)-1\right) =\frac{\alpha (2s-n-1)}{n+1}=\Delta ^{(\alpha )}_{1,s:n},&{}t=2.\\ \end{array}\right. \end{aligned}$$

Finally, in the same way, we can obtain \(\Delta ^{(\alpha )}_{2,r:n}, \Delta ^{(\alpha )}_{2,s:n}, \Delta ^{(\alpha )}_{1,r,s:n}, \Delta ^{(\alpha )}_{2,r,s:n}, \Delta ^{(\alpha )}_{3,r,s:n},\) and \(\Delta ^{(\alpha )}_{4,r,s:n}.\) This completes the proof. \(\square \)
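The computation above also gives the coefficients a probabilistic reading consistent with their closed forms; for instance, \(\Delta ^{(\alpha )}_{1,r,s:n}=\alpha ^2 E[(2U_{r:n}-1)(2U_{s:n}-1)]\) for uniform order statistics \(U_{r:n}\le U_{s:n}.\) The following Monte Carlo sketch checks the closed form from Theorem 3.5 against this expectation (the sample size and seed are arbitrary choices):

```python
import random

def delta1_rs_closed(alpha, r, s, n):
    # Closed form of Delta^{(alpha)}_{1,r,s:n} from Theorem 3.5
    return alpha**2 * (4 * r * (s + 1) / ((n + 1) * (n + 2))
                       - 2 * (r + s) / (n + 1) + 1)

def delta1_rs_mc(alpha, r, s, n, reps=200_000, seed=7):
    # alpha^2 * E[(2U_{r:n}-1)(2U_{s:n}-1)] for uniform order statistics,
    # estimated by plain Monte Carlo
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(reps):
        u = sorted(rng.random() for _ in range(n))
        acc += (2 * u[r - 1] - 1) * (2 * u[s - 1] - 1)
    return alpha**2 * acc / reps
```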

As a direct consequence of Theorem 3.5, the joint MGF of the concomitants \(Y_{[r:n]}\) and \(Y_{[s:n]}, r<s,\) based on SAR(\(\alpha \)) is given by

$$\begin{aligned}&M_{[r,s:n]}(t_1,t_2) \nonumber \\&\quad =M_Y(t_1)M_Y(t_2)+\left( 3\Delta ^{(\alpha )}_{1,r:n}-\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n} \right) M_Y(t_2)(M_{V_1}(t_1)-M_Y(t_1)) +\left( 3\Delta ^{(\alpha )}_{1,s:n}\right. \nonumber \\&\qquad -\,\left. \frac{5}{2}\Delta ^{(\alpha )}_{2,s:n}\right) M_Y(t_1)(M_{V_1}(t_2)-M_Y(t_2))+5\Delta ^{(\alpha )}_{2,r:n}M_Y(t_2)(M_{V_2}(t_1)-M_{V_1}(t_1)) \nonumber \\&\qquad +\,5\Delta ^{(\alpha )}_{2,s:n}M_Y(t_1)(M_{V_2}(t_2)-M_{V_1}(t_2))\nonumber \\&\qquad +\,\left( 9\Delta ^{(\alpha )}_{1,r,s:n}-\frac{15}{2}\Delta ^{(\alpha )}_{2,r,s:n}-\frac{15}{2}\Delta ^{(\alpha )}_{3,r,s:n}+\frac{25}{4}\Delta ^{(\alpha )}_{4,r,s:n}\right) \nonumber \\&\qquad \times \,(M_{V_1}(t_1)-M_Y(t_1))(M_{V_1}(t_2)-M_Y(t_2))+\left( 15\Delta ^{(\alpha )}_{2,r,s:n}- \frac{25}{2}\Delta ^{(\alpha )}_{4,r,s:n}\right) (M_{V_2}(t_1) \nonumber \\&\qquad -\,M_{V_1}(t_1))(M_{V_1}(t_2)-M_Y(t_2))+\left( 15\Delta ^{(\alpha )}_{3,r,s:n}-\frac{25}{2} \Delta ^{(\alpha )}_{4,r,s:n}\right) (M_{V_1}(t_1)-M_Y(t_1)) \nonumber \\&\qquad \times \,(M_{V_2}(t_2)-M_{V_1}(t_2))+25\Delta ^{(\alpha )}_{4,r,s:n}(M_{V_2}(t_1)-M_{V_1}(t_1))(M_{V_2}(t_2)-M_{V_1}(t_2)).\nonumber \\ \end{aligned}$$
(3.17)

The product moment \(\text{ E }[Y_{[r:n]}Y_{[s:n]}]=\mu _{[r,s:n]}\) is obtained directly from (3.17) by

$$\begin{aligned} \mu _{[r,s:n]}= & {} \left[ 3\left( \Delta ^{(\alpha )}_{1,r:n}+\Delta ^{(\alpha )}_{1,s:n}\right) -\frac{5}{2} \left( \Delta ^{(\alpha )}_{2,r:n}+\Delta ^{(\alpha )}_{2,s:n}\right) \right] \mu _Y(\mu _{V_1}-\mu _Y)\nonumber \\&+\,25\Delta ^{(\alpha )}_{4,r,s:n}(\mu _{V_2}-\mu _{V_1})^2 \nonumber \\&+\,5\left( \Delta ^{(\alpha )}_{2,r:n}+\Delta ^{(\alpha )}_{2,s:n}\right) \mu _Y(\mu _{V_2} -\mu _{V_1})\nonumber \\&+\,\left[ 15\left( \Delta ^{(\alpha )}_{2,r,s:n}+\Delta ^{(\alpha )}_{3,r,s:n}\right) -25\Delta ^{(\alpha )}_{4,r,s:n}\right] (\mu _{V_1}-\mu _Y) \nonumber \\&\times \,(\mu _{V_2}-\mu _{V_1})\nonumber \\&+\,\left( 9\Delta ^{(\alpha )}_{1,r,s:n}-\frac{15}{2} \Delta ^{(\alpha )}_{2,r,s:n}-\frac{15}{2}\Delta ^{(\alpha )}_{3,r,s:n}+\frac{25}{4}\Delta ^{(\alpha )}_{4,r,s:n}\right) (\mu _{V_1}-\mu _Y)^2+\mu ^2_Y.\nonumber \\ \end{aligned}$$
(3.18)

Now, the product moment \(\text{ E }[Y_{[r:n]}Y_{[s:n]}]=\mu _{[r,s:n]}\) based on SAR-GE(\(\theta _1,a_1;\theta _2,a_2\)) is obtained from (3.18) (and by using (1.11)) by

$$\begin{aligned} \mu _{[r,s:n]}= & {} \frac{1}{\theta ^2_2}\left\{ B^2(a_2)+\left[ 3\left( \Delta ^{(\alpha )}_{1,r:n}+\Delta ^{(\alpha )}_{1,s:n} \right) -\frac{5}{2}\left( \Delta ^{(\alpha )}_{2,r:n}+\Delta ^{(\alpha )}_{2,s:n}\right) \right] \right. \\&\times \, B(a_2) (B(2a_2)-B(a_2)) +25\Delta ^{(\alpha )}_{4,r,s:n}(B(3a_2)-B(2a_2))^2\\&+\,5\left( \Delta ^{(\alpha )}_{2,r:n}+ \Delta ^{(\alpha )}_{2,s:n}\right) B(a_2) (B(3a_2)-B(2a_2))\\&+\,\left[ 15\left( \Delta ^{(\alpha )}_{2,r,s:n}+\Delta ^{(\alpha )}_{3,r,s:n}\right) - 25\Delta ^{(\alpha )}_{4,r,s:n}\right] \\&\times \, (B(2a_2)-B(a_2))(B(3a_2)-B(2a_2)) \\&+\,\left. \left( 9\Delta ^{(\alpha )}_{1,r,s:n}-\frac{15}{2}\Delta ^{(\alpha )}_{2,r,s:n}- \frac{15}{2}\Delta ^{(\alpha )}_{3,r,s:n}+\frac{25}{4}\Delta ^{(\alpha )}_{4,r,s:n} \right) (B(2a_2)-B(a_2))^2\right\} . \end{aligned}$$

4 Shannon Entropy, Inaccuracy Measures, and FIN

In Sect. 4.1, we derive some useful theoretical relations for the Shannon entropy concerning the Sarmanov copula and, more generally, any radially symmetric copula. In Sect. 4.2, the Shannon entropy, inaccuracy measure, and FIN for the Sarmanov family are derived and then computed, with a comparison against the corresponding measures for the IFGM family.

4.1 Some Theoretical Relations

We have the following general result concerning any radially symmetric copula and especially concerning the FGM and Sarmanov copulas.

Proposition 2

For any radially symmetric copula about \((\frac{1}{2},\frac{1}{2})\) with density \( {{{\mathcal {L}}}}(u,v),\) the Shannon entropy

$$\begin{aligned} H_{[r:n]} = -\int _{0}^{1}{{{\mathcal {L}}}}_{[r:n]}(v)\log {{{\mathcal {L}}}}_{[r:n]}(v) {\text {d}}v, \end{aligned}$$
(4.1)

where \({{{\mathcal {L}}}}_{[r:n]}(.)\) is the PDF of the rth concomitant of OSs based on \( {{{\mathcal {L}}}}(u,v),\) satisfies the relation

$$\begin{aligned} H_{[r:n]}=H_{[n-r+1:n]}. \end{aligned}$$
(4.2)

Proof

Applying the transformation \(v = \frac{1}{2}-z\) in (4.1) and using Proposition 1 and Remark 1, we get

$$\begin{aligned} H_{[r:n]}= & {} - \int _{-\frac{1}{2}}^{\frac{1}{2}} {{{\mathcal {L}}}}_{[r:n]}\left( \frac{1}{2}-z\right) \log {{{\mathcal {L}}}}_{[r:n]}\left( \frac{1}{2}-z\right) {\text {d}}z \\= & {} - \int _{-\frac{1}{2}}^{\frac{1}{2}} {{{\mathcal {L}}}}_{[n-r+1:n]}\left( \frac{1}{2}+z\right) \log {{{\mathcal {L}}}}_{[n-r+1:n]}\left( \frac{1}{2}+z\right) {\text {d}}z. \end{aligned}$$

Now, setting \(\frac{1}{2}+z=\eta ,\) we obtain \(H_{[r:n]}= - \int _{0}^{1} {{{\mathcal {L}}}}_{[n-r+1:n]}(\eta )\log {{{\mathcal {L}}}}_{[n-r+1:n]}(\eta ) {\text {d}}\eta = H_{[n-r+1:n]}.\) This proves the proposition. \(\square \)

Theorem 4.1

Let the Shannon entropy associated with the FGM and Sarmanov copulas be denoted by \(H_{[r:n]}^{(c)}(\omega )\) and \(H_{[r:n]}^{(s)}(\alpha ),\) respectively. Then, we get

  1. 1.

    \(H_{[r:n]}^{(c)}(\omega ) = H_{[r:n]}^{(c)}(-\omega ),\)

  2. 2.

    \(H_{[r:n]}^{(s)}(\alpha )=H_{[r:n]}^{(s)}(-\alpha ).\)

Proof

From (4.1) and (4.2), we get

$$\begin{aligned} H_{[r:n]}^{(c)}(-\omega )= & {} -\int _{0}^{1}C_{[r:n]}(v,-\omega )\log C_{[r:n]}(v,-\omega ){\text {d}}v\\= & {} -\int _{0}^{1}C_{[n-r+1:n]}(v,\omega )\log C_{[n-r+1:n]}(v,\omega ){\text {d}}v\\= & {} H_{[n-r+1:n]}^{(c)}(\omega )=H_{[r:n]}^{(c)}(\omega ). \end{aligned}$$

This proves the first part of the theorem. The second part follows in the same way, by applying (4.1) and (4.2) to the Sarmanov copula. \(\square \)

4.2 Shannon Entropy, Inaccuracy Measure, and FIN Based on the Sarmanov Family

Theorems 4.2, 4.3, and 4.4 give explicit forms of the Shannon entropy, the inaccuracy measure, and the FIN, respectively, for concomitants of OSs based on the SAR\((\alpha )\) family.

Theorem 4.2

Let \(a(r)=1-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n},\) \(b(r)=3\Delta ^{(\alpha )}_{1,r:n}-\frac{15}{2}\Delta ^{(\alpha )}_{2,r:n},\) and \(c(r)=-(a(r)+b(r)-1)\). Furthermore, let \(3a(r)c(r)-b^2(r)>0\) and \( b(r)+2c(r)+1>0.\) Then, the explicit form of the Shannon entropy of \(Y_{[r:n]},\) \(1\le r\le n,\) based on SAR\((\alpha )\) is given by

$$\begin{aligned}&H_{[r:n]}(\alpha )=E[-\log f_{[r:n]}(Y_{[r:n]})]=\delta _{[r:n]}+E(-\log f_Y(Y_{[r:n]}))\nonumber \\&\quad =\delta _{[r:n]}+H(Y)(1-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n})-\phi _{f}(1)(6\Delta ^{(\alpha )}_{1,r:n}-15\Delta ^{(\alpha )}_{2,r:n})\nonumber \\&\qquad -\,\phi _{f}(2)(15\Delta ^{(\alpha )}_{2,r:n}) , \end{aligned}$$
(4.3)

where \(H(Y)=-E(\log f_Y(Y))=-\int _{-\infty }^{\infty }f_Y(y)\log f_Y(y){\text {d}}y\) is the Shannon entropy of Y, \(\phi _{f}(p)=\int _{-\infty }^{\infty }F^{p}_Y(y)f_Y(y)\log f_Y(y){\text {d}}y=\int _{0}^{1}u^{p}\log f_Y(F_Y^{-1}(u)){\text {d}}u,~p=1,2,~\) \(\delta _{[r:n]}=-\log \left( 1+3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}\right) +2b(r)J_{0}(r,n)+6c(r)J_{1}(r,n),~\) and

$$\begin{aligned} J_{\ell }(r,n)=\int _{0}^{1}\frac{z^{\ell }(a(r)z+b(r)z^2+c(r)z^3)}{a(r)+2b(r)z+3c(r)z^2}{\text {d}}z,~\ell =0,1. \end{aligned}$$

Moreover,

$$\begin{aligned} J_{0}(r,n)= & {} \frac{-1}{27}\left( \frac{b(2b^2-9ac)\tan ^{-1}\left( \frac{b}{\sqrt{3ac-b^2}}\right) }{\sqrt{3ac-b^2}}-(b^2-3ac)\log a\right) \nonumber \\&+\frac{1}{54}\left( 9+\frac{6b}{c}+\frac{2}{c^2}\left[ \frac{b(2b^2-9ac)\tan ^{-1}\left( \frac{b+3c}{\sqrt{3ac-b^2}}\right) }{\sqrt{3ac-b^2}}\right. \right. \nonumber \\&\left. \left. -(b^2-3ac)\log (a+2b+3c)\right] \right) \end{aligned}$$
(4.4)

and

$$\begin{aligned} J_{1}(r,n)= & {} \frac{-1}{162c^3}\left( \frac{(-8b^4+42ab^2c-36a^2c^2)\tan ^{-1}\left( \frac{b}{\sqrt{3ac-b^2}}\right) }{\sqrt{3ac-b^2}}-(4b^3-15abc)\log a\right) \nonumber \\&+\,\frac{1}{162c^3}\left( \frac{(-8b^4+42ab^2c-36a^2c^2)\tan ^{-1}\left( \frac{b+3c}{\sqrt{3ac-b^2}}\right) }{\sqrt{3ac-b^2}}\right. \nonumber \\&\left. +\,(4b^3-15abc)\log (a+2b+3c)\right) \nonumber \\&+\,\frac{3c(-4b^2+3bc+6c(2a+c))}{162c^3}, \end{aligned}$$
(4.5)

where in (4.4) and (4.5), a(r), b(r),  and c(r) are abbreviated for simplicity to ab,  and c,  respectively.

Proof

The Shannon entropy of \( Y_{[r:n]}\) is given by

$$\begin{aligned}&H_{[r:n]}(\alpha )=-\int _{-\infty }^{\infty }f_{[r:n]}(y)\log f_{[r:n]}(y){\text {d}}y \nonumber \\&\quad =-\int _{-\infty }^{\infty }f_Y(y)\left[ 1+3\Delta ^{(\alpha )}_{1,r:n}(2F_Y(y)-1)+\frac{5}{4}\Delta ^{(\alpha )}_{2,r:n}(3(2F_Y(y)-1)^2-1)\right] \nonumber \\&\qquad \times \log \left[ \!f_Y(y)\left( \!1+3\Delta ^{(\alpha )}_{1,r:n}(2F_Y(y)-1)+\frac{5}{4}\Delta ^{(\alpha )}_{2,r:n}(3(2F_Y(y)-1)^2-1)\!\right) \!\right] {\text {d}}y\nonumber \\&\quad = H(Y)(1-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n})-\phi _{f}(1)(6\Delta ^{(\alpha )}_{1,r:n}-15\Delta ^{(\alpha )}_{2,r:n})\nonumber \\&\qquad -\phi _{f}(2)(15\Delta ^{(\alpha )}_{2,r:n})+\delta _{[r:n]}, \end{aligned}$$
(4.6)

where \(\delta _{[r:n]}=-E(\log (1+3\Delta ^{(\alpha )}_{1,r:n}(2F_Y(Y_{[r:n]})-1)+\frac{5}{4}\Delta ^{(\alpha )}_{2,r:n}(3(2F_Y(Y_{[r:n]})-1)^2-1))).\) Upon integrating by parts, we get

$$\begin{aligned} \delta _{[r:n]}= & {} -\int _{-\infty }^{\infty }f_{[r:n]}(y)\log \left( 1+3\Delta ^{(\alpha )}_{1,r:n}(2F_Y(y)-1)\right. \\&\left. +\frac{5}{4}\Delta ^{(\alpha )}_{2,r:n}(3(2F_Y(y)-1)^2-1)\right) {\text {d}}y\\= & {} -\log \left( 1+3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}\right) +\int _{-\infty }^{\infty }V_r\,d\,U_r, \end{aligned}$$

where \(U_r=\log \left( 1+3\Delta ^{(\alpha )}_{1,r:n}(2F_Y(y)-1)+\frac{5}{4}\Delta ^{(\alpha )}_{2,r:n}(3(2F_Y(y)-1)^2-1)\right) \) and \(V_r=F_Y(y)(1+3\Delta ^{(\alpha )}_{1,r:n}\) \((F_Y(y)-1)+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}(2F^2_Y(y)-3F_Y(y)+1)).\) Thus, by using the probability integral transformation \(z=F_Y(y)\) and simplifying the result, we get

$$\begin{aligned} \delta _{[r:n]}=-\log \left( 1+3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}\right) +2b(r)J_{0}(r,n)+6c(r)J_{1}(r,n). \end{aligned}$$
(4.7)

Therefore, by combining (4.7) and (4.6) we get (4.3). The integrals \(J_{\ell }(r,n),\) \(\ell =0,1,\) can be evaluated explicitly (as well as numerically) by MATHEMATICA Ver. 12 in the forms (4.4) (for \(\ell =0\)) and (4.5) (for \(\ell =1\)), provided the conditions \(3a(r)c(r)-b^2(r)>0\) and \(b(r)+2c(r)+1>0\) are satisfied; the latter condition implies \( a(r)+2b(r)+3c(r)>0 \). This completes the proof. \(\square \)
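Identity (4.7) can be verified numerically for any admissible pair \((\Delta ^{(\alpha )}_{1,r:n},\Delta ^{(\alpha )}_{2,r:n})\): with \(z=F_Y(y),\) the bracket in (4.6) is the quadratic \(a(r)+2b(r)z+3c(r)z^2,\) so that \(\delta _{[r:n]}=-\int _0^1 L(z)\log L(z)\,{\text {d}}z.\) A sketch (the values of \(\Delta _1,\Delta _2\) below are arbitrary admissible stand-ins, and \(J_0,J_1\) are evaluated by quadrature rather than by the closed forms (4.4)–(4.5)):

```python
import math

def delta_check(d1, d2, m=20_000):
    """Check identity (4.7) numerically: d1, d2 stand for Delta_{1,r:n},
    Delta_{2,r:n}; J_0, J_1 are computed by midpoint quadrature."""
    a = 1 - 3 * d1 + 2.5 * d2
    b = 3 * d1 - 7.5 * d2
    c = 5 * d2                                   # equals -(a + b - 1)
    L = lambda z: a + 2 * b * z + 3 * c * z**2   # bracket of (4.6) at z = F_Y(y)
    h = 1.0 / m
    mids = [(i + 0.5) * h for i in range(m)]
    # direct evaluation: delta = -\int_0^1 L log L dz
    direct = -h * sum(L(z) * math.log(L(z)) for z in mids)
    # via (4.7), with J_0, J_1 by quadrature
    J = [h * sum(z**l * (a * z + b * z**2 + c * z**3) / L(z) for z in mids)
         for l in (0, 1)]
    formula = -math.log(a + 2 * b + 3 * c) + 2 * b * J[0] + 6 * c * J[1]
    return direct, formula
```

The two evaluations agree to within the quadrature error, since \(a(r)+b(r)+c(r)=1\) makes the boundary term in the integration by parts exactly \(-\log (a+2b+3c).\)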

Proposition 3

We have \(H_{[r:n]}(-\alpha )=H_{[n-r+1:n]}(\alpha ).\)

Proof

The proof follows directly from the definition of the Shannon entropy and the second part of Theorem 3.2. \(\square \)

Example 4.1

For the Sarmanov copula, we have \(H(Y)=\phi _{f}(1)=\phi _{f}(2)=0.\) Thus, by using Theorem 4.2, the Shannon entropy of \(Y_{[r:n]}\) is given by \(H_{[r:n]}(\alpha )=H_{[r:n]}^{(s)}(\alpha )=\delta _{[r:n]}.\)
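For the Sarmanov copula, the entropy and the symmetries of (4.2) and Theorem 4.1 can also be checked numerically. The sketch below evaluates \(H_{[r:n]}(\alpha )\) by quadrature; the closed form used for \(\Delta ^{(\alpha )}_{2,r:n}\) (the second Legendre moment of a Beta\((r,n-r+1)\) variable) is an assumption here, carried over from a formula stated earlier in the paper rather than in this section:

```python
import math

def concomitant_entropy(alpha, r, n, m=20_000):
    """Shannon entropy H_{[r:n]}(alpha) for the Sarmanov copula (uniform
    marginals), by midpoint quadrature.  The closed form for d2 below is
    an assumption carried over from earlier in the paper."""
    d1 = alpha * (2 * r - n - 1) / (n + 1)
    d2 = alpha**2 * (12 * r * (r + 1) / ((n + 1) * (n + 2))
                     - 12 * r / (n + 1) + 2)
    h = 1.0 / m
    H = 0.0
    for i in range(m):
        v = (i + 0.5) * h
        L = 1 + 3 * d1 * (2 * v - 1) + 1.25 * d2 * (3 * (2 * v - 1)**2 - 1)
        H -= L * math.log(L) * h
    return H
```

The computed values reproduce \(H_{[r:n]}(\alpha )=H_{[n-r+1:n]}(\alpha )\) and \(H_{[r:n]}^{(s)}(\alpha )=H_{[r:n]}^{(s)}(-\alpha )\) to machine precision, and are negative, as expected for a perturbation of the uniform density.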

Table 2 displays a comparison between the Shannon entropy of the rth concomitant \(Y_{[r:n]}\) based on the Sarmanov and IFGM copulas for some admissible common values of the correlation, \({\rho }_c.\) It is worth noting that choosing the values of the shape parameters in the two copulas to match the same value of the correlation coefficient enables a comparison between the two copulas despite the differences between their shape parameters. Table 3 displays the Shannon entropy for the Sarmanov copula for values of \({\rho }_c\) some of which are not admissible for the IFGM copula. The computations are carried out by using MATHEMATICA Ver. 12. The following properties can be extracted from Tables 2 and 3.

  1. 1.

    Generally, we have \(H_{[r:n]}(\lambda ,\omega )\le H_{[r:n]}^{(s)}(\alpha )\) at the same values of \({\rho }_c,\) where \(H_{[r:n]}(\lambda ,\omega )\) is the Shannon entropy concerning the IFGM\((\lambda ,\omega )\) copula.

  2. 2.

    The values of \(H_{[r:n]}(\lambda ,\omega )\) and \(H_{[r:n]}^{(s)}(\alpha ),~\forall r, n,\) decrease as \({\rho }_c\) increases.

  3. 3.

    With fixed r, \(H_{[r:n]}(\lambda ,\omega )\) and \(H_{[r:n]}^{(s)}(\alpha )\) decrease as the value of n increases.

  4. 4.

    Generally, \(H_{[r:n]}^{(s)}(\alpha )=H_{[n-r+1:n]}^{(s)}(\alpha )\) and \(H_{[r:n]}^{(s)}(-\alpha )=H_{[r:n]}^{(s)}(\alpha ),\) which endorse the theoretical results given in Sect. 4.1.

Table 2 Shannon entropy for \(Y_{[r:n]}\) based on the IFGM and Sarmanov copulas
Table 3 Shannon entropy for \(Y_{[r:n]}\) based on the Sarmanov copula

Theorem 4.3

Let \(f_{[r:n]}(y)\) be the PDF of the rth concomitant of OSs based on SAR\((\alpha ).\) Then, the inaccuracy measure between \(f_{[r:n]}(y)\) and \(f_Y(y)\) for \(1\le r\le n,\) \(\alpha \ne 0\) is given by

$$\begin{aligned}&I_{[r:n]}(\alpha )=-\int _{-\infty }^{\infty }f_{Y_{[r:n]}}(y)\log f_Y(y){\text {d}}y\nonumber \\&\quad =\left( 1-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}\right) H(Y)-(6\Delta ^{(\alpha )}_{1,r:n}-15\Delta ^{(\alpha )}_{2,r:n})\phi _{f}(1) -15\Delta ^{(\alpha )}_{2,r:n}\phi _{f}(2),\nonumber \\ \end{aligned}$$
(4.8)

where \(H(Y)=-\int _{-\infty }^{\infty }f_Y(y)\log f_Y(y){\text {d}}y~\) is the Shannon entropy of the RV Y and

$$\begin{aligned} \phi _{f}(p)=\int _{-\infty }^{\infty }F^{p}_Y(y)f_Y(y)\log f_Y(y){\text {d}}y=\int _{0}^{1}u^p\log f_Y(F^{-1}_Y(u)){\text {d}}u,~p=1,2. \end{aligned}$$

Proof

Clearly, we have

$$\begin{aligned}&I_{[r:n]}(\alpha )= -\int _{-\infty }^{\infty }f_{Y_{[r:n]}}(y)\log f_Y(y){\text {d}}y =\left( 1-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}\right) H(Y)\nonumber \\&\qquad - (6\Delta ^{(\alpha )}_{1,r:n}-15\Delta ^{(\alpha )}_{2,r:n})\int _{-\infty }^{\infty }F_Y(y)f_Y(y)\log f_Y(y){\text {d}}y \nonumber \\&\qquad -15\Delta ^{(\alpha )}_{2,r:n}\int _{-\infty }^{\infty }F^2_Y(y)f_Y(y)\log f_Y(y){\text {d}}y \nonumber \\&\quad =(1-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n})H(Y)-(6\Delta ^{(\alpha )}_{1,r:n}-15\Delta ^{(\alpha )}_{2,r:n})\int _{0}^{1}u\log f_Y(F^{-1}_Y(u)){\text {d}}u \nonumber \\&\qquad -15\Delta ^{(\alpha )}_{2,r:n}\int _{0}^{1}u^2\log f_Y(F^{-1}_Y(u)){\text {d}}u \nonumber \\&\quad =(1-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n})H(Y)-(6\Delta ^{(\alpha )}_{1,r:n}-15\Delta ^{(\alpha )}_{2,r:n})\phi _{f}(1) -15\Delta ^{(\alpha )}_{2,r:n}\phi _{f}(2).\nonumber \\ \end{aligned}$$
(4.9)

\(\square \)

Proposition 4

We have \(I_{[r:n]}(-\alpha )=I_{[n-r+1:n]}(\alpha ).\)

Proof

The proof follows directly from the definition of the inaccuracy measure and the second part of Theorem 3.2. \(\square \)

Example 4.2

Suppose that X and Y have exponential distributions with means \( \frac{1}{\theta ^*}\) and \(\frac{1}{\theta },\) respectively. After simple algebra, we get \( H(Y)= -\int _{0}^{\infty }f_Y(y)\log f_Y(y){\text {d}}y =1-\log \theta ,\) \( \phi _{f}(1)=\int _{0}^{\infty }f_Y(y)F_Y(y)\log f_Y(y){\text {d}}y\) \(=\frac{-3+2\log \theta }{4},\) and \(\phi _{f}(2)=\int _{0}^{\infty }f_Y(y)F^2_Y(y)\log f_Y(y){\text {d}}y=\frac{-11+6\log \theta }{18}.\) Then,

$$\begin{aligned} I_{[r:n]}(\alpha )= & {} \left( 1-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}\right) (1-\log \theta )-(6\Delta ^{(\alpha )}_{1,r:n}-15\Delta ^{(\alpha )}_{2,r:n})\nonumber \\&\times \left( \frac{-3+2\log \theta }{4}\right) -15\Delta ^{(\alpha )}_{2,r:n}\left( \frac{-11+6\log \theta }{18}\right) .\end{aligned}$$
(4.10)
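Formula (4.10) can be cross-checked against a direct quadrature of \(-\int _0^{\infty } f_{[r:n]}(y)\log f_Y(y)\,{\text {d}}y\) using the substitution \(u=F_Y(y),\) for which \(\log f_Y(y)=\log (\theta (1-u)).\) A sketch (the closed form used for \(\Delta ^{(\alpha )}_{2,r:n}\) is an assumption, carried over from a formula stated earlier in the paper):

```python
import math

def inaccuracy_formula(alpha, r, n, theta):
    # (4.10), with H(Y) = 1 - log(theta), phi_f(1) = (-3 + 2 log theta)/4,
    # phi_f(2) = (-11 + 6 log theta)/18.  The closed form for d2 is an
    # assumption carried over from earlier in the paper.
    d1 = alpha * (2 * r - n - 1) / (n + 1)
    d2 = alpha**2 * (12 * r * (r + 1) / ((n + 1) * (n + 2))
                     - 12 * r / (n + 1) + 2)
    return ((1 - 3 * d1 + 2.5 * d2) * (1 - math.log(theta))
            - (6 * d1 - 15 * d2) * (-3 + 2 * math.log(theta)) / 4
            - 15 * d2 * (-11 + 6 * math.log(theta)) / 18)

def inaccuracy_direct(alpha, r, n, theta, m=200_000):
    # -\int f_{[r:n]}(y) log f_Y(y) dy via u = F_Y(y), midpoint quadrature
    d1 = alpha * (2 * r - n - 1) / (n + 1)
    d2 = alpha**2 * (12 * r * (r + 1) / ((n + 1) * (n + 2))
                     - 12 * r / (n + 1) + 2)
    h = 1.0 / m
    acc = 0.0
    for i in range(m):
        u = (i + 0.5) * h
        bracket = 1 + 3 * d1 * (2 * u - 1) + 1.25 * d2 * (3 * (2 * u - 1)**2 - 1)
        acc -= bracket * math.log(theta * (1 - u)) * h
    return acc
```

At \(\alpha =0\) the formula collapses to \(H(Y)=1-\log \theta ,\) as it should.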

Table 4 is devoted to some computed values of the inaccuracy measure for the SAR\((\alpha )\) and IFGM\((\lambda ,\omega )\) families based on exponential marginals at the same value of the correlation coefficient of the two copulas, \({\rho }_c.\) Clearly, choosing the values of the shape parameters in the two families to match the same value of \({\rho }_c\) enables a comparison between the two families despite the differences between their shape parameters. Table 5 displays the inaccuracy measure for the Sarmanov family for values of \({\rho }_c\) some of which are not admissible for the IFGM copula. The following properties can be extracted from Tables 4 and 5.

  1. 1.

    For all r and n, \(I_{[r:n]}(\lambda ,\omega )\le I_{[r:n]}(\alpha ),\) where \(I_{[r:n]}(\lambda ,\omega )\) is the inaccuracy measure pertaining to the IFGM\((\lambda ,\omega )\) family.

  2. 2.

    The values of the inaccuracy measures \(I_{[r:n]}(\lambda ,\omega )\) and \(I_{[r:n]}(\alpha )\) increase as the difference \(n-r\) decreases.

  3. 3.

    With fixed r, the values of the inaccuracy measures \(I_{[r:n]}(\lambda ,\omega )\) and \(I_{[r:n]}(\alpha )\) decrease as n increases.

  4. 4.

    Generally, \(I_{[r:n]}(-\alpha )=I_{[n-r+1:n]}(\alpha ),\) which endorses the result given in Proposition 4.

  5. 5.

    With fixed r,  the value of the inaccuracy measure \(I_{[r:n]}(\alpha )\) decreases as n increases.

  6. 6.

    The value of \(I_{[r:n]}(\alpha )\) decreases as \(\alpha \) \((\alpha >0)\) increases when \(r<\frac{n}{2}\) (below the median rank) and increases as \(\alpha \) \((\alpha <0)\) increases when \(r>\frac{n}{2}\) (above the median rank).

Table 4 Inaccuracy measure between \(Y_{[r:n]}\) and Y in SAR\((\alpha )\) and IFGM\((\lambda ,\omega )\) with exponential marginals
Table 5 Inaccuracy measure between \(Y_{[r:n]}\) and Y in SAR\((\alpha )\) with exponential marginals

Theorem 4.4

Let \(f_{[r:n]}(y)\) be the PDF of the rth concomitant of OSs based on SAR\((\alpha ).\) Then, the FIN of \(Y_{[r:n]}\) for \(1\le r\le n\) is given by

$$\begin{aligned} I_{f_Y}(Y_{[r:n]},\alpha )= & {} E\left[ \left( \frac{\partial \log f_{[r:n]}(y)}{\partial y}\right) ^2_{y=Y_{[r:n]}}\right] = I_{f_Y}(Y)+\tau _{f_Y}+2\phi _{f_Y}+\delta _{f_Y},\nonumber \\ \end{aligned}$$
(4.11)

where

$$\begin{aligned}&\tau _{f_Y}=\int _{-\infty }^{\infty }\left( \frac{\partial \log f_Y(y)}{\partial y}\right) ^2 (3\Delta ^{(\alpha )}_{1,r:n}(2F_Y(y)-1)\\&\quad + \frac{5}{4}\Delta ^{(\alpha )}_{2,r:n}(3(2F_Y(y)-1)^2-1))f_Y(y){\text {d}}y,\\&\phi _{f_Y}=\int _{-\infty }^{\infty }(6\Delta ^{(\alpha )}_{1,r:n}- 15\Delta ^{(\alpha )}_{2,r:n}+30\Delta ^{(\alpha )}_{2,r:n}F_Y(y))f'_Y(y)f_Y(y){\text {d}}y \end{aligned}$$

and

$$\begin{aligned} \delta _{f_Y}= & {} \int _{-\infty }^{\infty }\frac{\left[ f_Y(y)\left( 6\Delta ^{(\alpha )}_{1,r:n}-15\Delta ^{(\alpha )}_{2,r:n}+30\Delta ^{(\alpha )}_{2,r:n} F_Y(y)\right) \right] ^2}{(1-3\Delta ^{(\alpha )}_{1,r:n} +\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n})+F_Y(y)(6\Delta ^{(\alpha )}_{1,r:n}-15\Delta ^{(\alpha )}_{2,r:n})+15\Delta ^{(\alpha )}_{2,r:n}F^2_Y(y)}\\&f_Y(y){\text {d}}y. \end{aligned}$$

Proof

By using (3.1), the FIN of \( Y_{[r:n]}\) is given by

$$\begin{aligned}&I_{f_Y}(Y_{[r:n]},\alpha )\nonumber \\&\quad = \int _{-\infty }^{\infty }\left( \frac{\partial \log f_{[r:n]}(y)}{\partial y}\right) ^2 f_{[r:n]}(y){\text {d}}y \nonumber \\&\quad =\int _{-\infty }^{\infty }\left( \frac{\partial \log f_Y(y)}{\partial y}+ \frac{\partial \log \left( 1+3\Delta ^{(\alpha )}_{1,r:n}(2F_Y(y)-1)+\frac{5}{4}\Delta ^{(\alpha )}_{2,r:n}(3(2F_Y(y)-1)^2-1)\right) }{\partial y}\right) ^2 \nonumber \\&\qquad \times \left( f_Y(y)\left[ 1+3\Delta ^{(\alpha )}_{1,r:n}(2F_Y(y)-1)+\frac{5}{4}\Delta ^{(\alpha )}_{2,r:n}(3(2F_Y(y)-1)^2-1)\right] \right) {\text {d}}y \nonumber \\&\quad =\int _{-\infty }^{\infty }\left( \frac{\partial \log f_Y(y)}{\partial y}\right) ^2 f_Y(y){\text {d}}y+\int _{-\infty }^{\infty }\left( \frac{\partial \log f_Y(y)}{\partial y}\right) ^2 \nonumber \\&\qquad \times (3\Delta ^{(\alpha )}_{1,r:n}(2F_Y(y)-1)+\frac{5}{4}\Delta ^{(\alpha )}_{2,r:n}(3(2F_Y(y)-1)^2-1))f_Y(y){\text {d}}y \nonumber \\&\qquad +\int _{-\infty }^{\infty } \left( \frac{\partial \log \left( 1+3\Delta ^{(\alpha )}_{1,r:n}(2F_Y(y)-1)+\frac{5}{4}\Delta ^{(\alpha )}_{2,r:n}(3(2F_Y(y)-1)^2-1)\right) }{\partial y}\right) ^2 \nonumber \\&\qquad \times \left( 1+3\Delta ^{(\alpha )}_{1,r:n}(2F_Y(y)-1)+\frac{5}{4}\Delta ^{(\alpha )}_{2,r:n}(3(2F_Y(y)-1)^2-1)\right) f_Y(y){\text {d}}y \nonumber \\&\qquad +2\int _{-\infty }^{\infty } \frac{\partial \log f_Y(y)}{\partial y}\times \frac{\partial \log \left( 1+3\Delta ^{(\alpha )}_{1,r:n}(2F_Y(y)-1)+\frac{5}{4}\Delta ^{(\alpha )}_{2,r:n}(3(2F_Y(y)-1)^2-1)\right) }{\partial y} \nonumber \\&\qquad \times \left( 1+3\Delta ^{(\alpha )}_{1,r:n}(2F_Y(y)-1)+\frac{5}{4}\Delta ^{(\alpha )}_{2,r:n}(3(2F_Y(y)-1)^2-1)\right) f_Y(y){\text {d}}y. \end{aligned}$$
(4.12)

Upon using the transformation  \( u=F_Y(y)\) in the three integrals on the right-hand side of (4.12) and simplifying the result, we get the required result. \(\square \)

Proposition 5

We have \(I_{f_Y}(Y_{[r:n]},-\alpha )=I_{f_Y}(Y_{[n-r+1:n]},\alpha ).\)

Proof

The proof follows directly from the definition of the FIN and the second part of Theorem 3.2. \(\square \)

Example 4.3

Let X and Y have exponential distributions with means \( \frac{1}{\theta ^*}\) and \(\frac{1}{\theta },\) respectively. Then,

$$\begin{aligned} I_{f_Y}(y)= & {} \int _{0}^{\infty }\left( \frac{\partial \log f_Y(y)}{\partial y}\right) ^2 f_Y(y){\text {d}}y=\theta ^2,\\ \tau _{f_Y}= & {} (-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n})\\&\int _{0}^{\infty }\theta ^3e^{-\theta y} {\text {d}}y+(6\Delta ^{(\alpha )}_{1,r:n}-15\Delta ^{(\alpha )}_{2,r:n})\int _{0}^{\infty }\theta ^3(1-e^{-\theta y})e^{-\theta y}\,{\text {d}}y\\&+15\Delta ^{(\alpha )}_{2,r:n}\int _{0}^{\infty }\theta ^3(1-e^{-\theta y})^2e^{-\theta y}\,{\text {d}}y =0 \end{aligned}$$

and

$$\begin{aligned} \phi _{f_Y}= & {} (15\Delta ^{(\alpha )}_{2,r:n}-6\Delta ^{(\alpha )}_{1,r:n})\int _{0}^{\infty }\theta ^3e^{-2\theta y}{\text {d}}y+30\theta ^3\Delta ^{(\alpha )}_{2,r:n}\int _{0}^{\infty }(e^{-3\theta y}-e^{-2\theta y}){\text {d}}y \\= & {} \theta ^2\left( -3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}\right) . \end{aligned}$$

Thus, the FIN of \( Y_{[r:n]}\) is given by

$$\begin{aligned} I_{f_Y}(Y_{[r:n]},\alpha )=\theta ^2+2\theta ^2(-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n})+\theta ^3 J{(\theta )}, \end{aligned}$$

where

$$\begin{aligned} J{(\theta )}=\int _{0}^{\infty }\frac{\left( 6\Delta ^{(\alpha )}_{1,r:n}-15\Delta ^{(\alpha )}_{2,r:n}+30\Delta ^{(\alpha )}_{2,r:n}(1-e^{-\theta y})\right) ^2e^{-3\theta y}}{(1-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n})+(6\Delta ^{(\alpha )}_{1,r:n}- 15\Delta ^{(\alpha )}_{2,r:n})(1-e^{-\theta y})+15\Delta ^{(\alpha )}_{2,r:n}(1-e^{-\theta y})^2}\,{\text {d}}y.\nonumber \!\!\!\! \\ \end{aligned}$$
(4.13)

The integral in (4.13) can be evaluated numerically by MATHEMATICA Ver. 12.
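An equivalent Python sketch of this evaluation, together with a numerical check of the symmetry in Proposition 5 (again, the closed form used for \(\Delta ^{(\alpha )}_{2,r:n}\) is an assumption carried over from earlier in the paper):

```python
import math

def fin_exponential(alpha, r, n, theta, m=100_000):
    """FIN of Y_{[r:n]} for SAR(alpha) with exponential Y (Example 4.3):
    theta^2 + 2 theta^2 (-3 d1 + (5/2) d2) + theta^3 J(theta), where J(theta)
    of (4.13) is evaluated by midpoint quadrature after u = 1 - e^{-theta y}.
    The closed form for d2 is an assumption carried over from earlier in
    the paper."""
    d1 = alpha * (2 * r - n - 1) / (n + 1)
    d2 = alpha**2 * (12 * r * (r + 1) / ((n + 1) * (n + 2))
                     - 12 * r / (n + 1) + 2)
    h = 1.0 / m
    J = 0.0
    for i in range(m):
        u = (i + 0.5) * h                       # u = 1 - e^{-theta y}
        # e^{-3 theta y} dy = (1-u)^3 du / (theta (1-u)) = (1-u)^2 du / theta
        num = (6 * d1 - 15 * d2 + 30 * d2 * u)**2 * (1 - u)**2
        den = (1 - 3 * d1 + 2.5 * d2) + (6 * d1 - 15 * d2) * u + 15 * d2 * u**2
        J += num / den * h / theta
    return theta**2 + 2 * theta**2 * (-3 * d1 + 2.5 * d2) + theta**3 * J
```

At \(\alpha =0\) the FIN reduces to \(\theta ^2,\) the Fisher information number of the exponential marginal itself.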

Table 6 displays a comparison between the FIN of the rth concomitant \(Y_{[r:n]}\) based on the Sarmanov and IFGM families with exponential marginals for some admissible common values of the correlation, \({\rho }_c.\) Table 7 displays the FIN for the Sarmanov family with exponential marginals for values of \({\rho }_c\) some of which are not admissible for the IFGM copula. The following properties can be extracted from Tables 6 and 7.

  1. 1.

    The FIN for the SAR\((\alpha )\) and IFGM\((\lambda ,\omega )\) families increases as the difference \(n-r\) increases.

  2. 2.

    Generally, we have \(I_{f_Y}(Y_{[r:n]},-\alpha )=I_{f_Y}(Y_{[n-r+1:n]},\alpha ),\) which endorses the result given in Proposition 5.

  3. 3.

    The value of \(I_{f_Y}(Y_{[r:n]},\alpha )\) increases as \(\alpha \) \((\alpha >0)\) increases when \(r < \frac{n}{2}\) and increases as \(\alpha \) \((\alpha <0)\) decreases when \(r > \frac{n}{2}+1\).

Table 6 FIN in \(Y_{[r:n]}\) for SAR\((\alpha )\) and IFGM\((\lambda ,\omega )\) at \(\theta =1\)
Table 7 FIN in \(Y_{[r:n]}\) for SAR\((\alpha )\)

5 Application of Real Data

This section analyzes two real-world data sets, for which the Shannon entropy and the inaccuracy measure are examined. Moreover, for the second data set, we show that the Sarmanov family provides a better fit than the FGM family.

Example 5.1

The following data set, quoted from McGilchrist and Aisbett [39] and previously analyzed by Al turk et al. (2007) and Ahmed et al. [6] in the context of different topics, represents the recurrence times to infection at the point of insertion of the catheter for kidney patients using portable dialysis equipment. The RV X refers to the first recurrence time and the RV Y to the second recurrence time. The data for 30 patients are reported in Table 8.

Ahmed et al. [6] fitted the GE distribution to X and Y separately. The ML estimates of the scale and shape parameters \((\theta _i,a_i),~i=1,2,\) are (0.0062, 0.6638) and (0.0096, 0.9244),  respectively. The correlation between X and Y is 0.0531,  which yields \(\alpha =0.07\) as an estimate of the shape parameter for the estimated model SAR-GE(0.0062, 0.6638; 0.0096, 0.9244). The value of this estimate is consistent with the values given in Table 1. Table 9 examines the Shannon entropy and inaccuracy measure for the model SAR-GE(0.0062, 0.6638;  0.0096, 0.9244) for the concomitants \(Y_{[r:30]},~r=1,2,15,16,29,30,\) i.e., the concomitants of lower extreme, central, and upper extreme values. This table shows that the Shannon entropy attains its maximum values at the extremes, while the value of the inaccuracy measure increases slowly as r increases. It is worth mentioning that for the GE marginal (the second marginal) the FIN exists only for \(a_2=1\) or \(a_2>2.\) Therefore, for this data set the FIN is not available.

Table 8 Recurrence times of infection for kidney patients
Table 9 The Shannon entropy and inaccuracy measure

Example 5.2

The economic data set, quoted from El-Sherpieny et al. [28] and reproduced in Table 10, consists of 31 yearly time series observations (1980–2010) on two variables: exports of goods and services, X, and GDP growth, Y. These data were originally collected from World Bank National Accounts data and OECD National Accounts data. The data are relevant to distributions based on the FGM copula and its generalizations, including the Sarmanov family, since the correlation between the variables is 0.2709. El-Sherpieny et al. [28] used the MLE method to compare three FGM families with Weibull (FGM-W), Gamma (FGM-G), and GE (FGM-GE) marginals. By applying the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), they concluded that FGM-W is the best model for these data. A summary of their results is reproduced in Table 11. By using the MLE method and based on SAR\((\alpha ),\) we estimate the four parameters \(a_i,\beta _i,~i=1,2,\) of the Weibull DF \(F_W(w)= 1-\exp \left( -\left( \frac{w}{\beta _i}\right) ^{a_i}\right) ,~w>0,\) besides the shape parameter \(\alpha .\) Moreover, the AIC and BIC are computed for comparison purposes. Table 12 summarizes these estimates. A quick look at Tables 11 and 12 (at the AIC and BIC) reveals that SAR(0.5)-W is the best model for these data. Table 13 examines the Shannon entropy and inaccuracy measure for the estimated model SAR(0.5)-W for the concomitants \(Y_{[r:31]},~r=1,2,15,16,30,31,\) i.e., the concomitants of lower extreme, central, and upper extreme values. This table shows that each of the Shannon entropy and the inaccuracy measure attains its maximum values at the lower extreme.

Table 10 Data of economics
Table 11 AIC and BIC for FGM-W, FGM-G, and FGM-GE
Table 12 Parameter estimation for SAR\((\alpha ),\) with Weibull marginals (SAR\((\alpha )\) -W)
Table 13 Entropy and inaccuracy of SAR(0.5)-W at \(a_2=8.154\) and \(\beta _2=3\)

6 Conclusion

In this paper, we revisited the Sarmanov bivariate DF, which was originally suggested by Sarmanov [43] as a mathematical model of hydrological processes that may be used in stream-flow control, in studying the persistence of sequences of years with high and low flow, in calculating reservoir volume, and in many other applications. We showed that this family belongs to the class of extensions of the FGM family, which is widely used in modeling bivariate data with low correlation, and that it also belongs to a wider family suggested by Sarmanov [42], which has many recorded applications in the literature. Moreover, several new prominent statistical properties of this family were revealed, namely:

  1. 1.

    The Sarmanov family is the most efficient among all the extended families of the FGM family because, on both the positive and negative sides, it delivers the best improvement in the correlation level. This fact makes the family able to model bivariate data with moderate correlation. Besides, Example 5.2 shows that this family is a strong competitor to the FGM family and its known extensions in modeling data sets with low correlation.

  2. 2.

    Among all the known extensions, this family contains only one shape parameter, which is shared by the two marginal variates. This property enables us to estimate the shape parameter easily by using the sample correlation estimate.

  3. 3.

    The Sarmanov family is the only one of the extended families of the FGM family whose copula is radially symmetric about \((0.5,0.5).\) This property was used in this paper to reveal several prominent statistical properties of the concomitants of order statistics from this family, together with some information measures, namely the Shannon entropy, the inaccuracy measure, and the Fisher information number, which were studied theoretically and numerically. Moreover, these information measures were computed and compared with the corresponding measures for the IFGM family.

Despite all of the above exclusive features, this capable and flexible family has never been used, since its inception, to model real bivariate data sets. This work was primarily undertaken to fill this gap and to encourage statisticians to view this family as a viable option for modeling bivariate data with low and moderate correlation.