1 Introduction

Adding parameters to a well-established distribution (a base distribution function (df)) is an effective way to enlarge the behavior range of this distribution and to obtain a more flexible family of distributions for modeling various types of data. This technique has been employed by many authors, among them Marshall and Olkin [17], Eugene et al. [14], Jones [16], Alzaatreh [3], and Cordeiro and de Castro [13]. Actually, there are many reasons that call for expanding a family of df's, including, for example, survival analysis (where the focus is on the resulting survival and hazard rate functions) and data modeling (where the focus is on obtaining a wide range of the indices of skewness and kurtosis). Whatever the purpose for which the base distribution was extended to a more flexible family, it is of great benefit to have mathematical relationships between the family and its base, which enable us to deduce the different statistical properties of this family from the corresponding properties of its base. Clearly, some of the most beneficial and important of those statistical properties are the asymptotic behaviors of the df's of the different order statistics (extreme, intermediate, and central order statistics) and record values. The record values are defined as follows: an observation \(X_j\) is called an upper record value if \(X_j>X_i\) for every \(i<j,\) and an analogous definition applies to lower record values. By convention, \(~X_1~\) is an upper as well as a lower record value. For the definition, properties, applications, and generalizations of the order statistic and record value models, see, e.g., Arnold et al. [5, 6], Ahsanullah and Shakil [1], Barakat et al. [10], and Bdair and Raqab [11]. Actually, the knowledge of the asymptotic behaviors of order statistics and record values facilitates the use of these flexible families to build statistical models for many important random phenomena, e.g., see Barakat et al. [9] and Barakat and Nigm [7].

More precisely, Marshall and Olkin [17] introduced a parameterization operation for adding a parameter \(\alpha >0\) to any base df \(F(x)=P(X\le x).\) The df of the extended family is defined by \(\mathcal {M}_F(x;\alpha )=\frac{F(x)}{\alpha +(1-\alpha )F(x)}.\) Clearly, the family \({{\mathcal {M}}}_F(x;\alpha )\) includes its base F(x) as a special case when \(\alpha =1.\) Moreover, it is stable in the sense that \({{\mathcal {M}}}_{{{\mathcal {M}}}_F(.;\alpha )}(x;\beta )=\mathcal {M}_F(x;\alpha \beta ),\) or in other words, if the operation is applied twice, nothing new is obtained the second time around. Barakat et al. [9] showed that both \({{\mathcal {M}}}_F(x;\alpha )\) and its base F(x) belong to the same domain of maximal (minimal) (upper record value) (lower record value) attraction. On the other hand, it is shown that the weak convergence of any order statistic with variable rank (intermediate and central order statistics) based on the base family F under given normalizing constants (sequences of constants \(A_n>0\) and \(B_n,\) used to normalize a statistic \(X_n,\) e.g., as \(\frac{X_n-B_n}{A_n}\)), to a non-degenerate limit type implies that the corresponding order statistic based on the family \({{\mathcal {M}}}_F(x;\alpha )\) under the same normalizing constants does not converge to any non-degenerate distribution and vice versa. 
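
Although the Marshall–Olkin family plays no further role in this paper, the parameterization operation and its stability property are easy to check numerically. The following minimal sketch (the exponential base df and the values of \(\alpha ,\beta \) are illustrative choices, not taken from [17]) verifies that \(\alpha =1\) recovers the base df and that applying the operation twice yields \({{\mathcal {M}}}_F(x;\alpha \beta ).\)

```python
import numpy as np

def marshall_olkin(F, alpha):
    """The Marshall-Olkin df M_F(.; alpha) built on a base df F."""
    return lambda x: F(x) / (alpha + (1.0 - alpha) * F(x))

F = lambda x: 1.0 - np.exp(-x)    # illustrative base df: standard exponential
alpha, beta = 2.5, 0.4            # illustrative parameters
x = np.linspace(0.0, 5.0, 11)

# alpha = 1 recovers the base df ...
assert np.allclose(marshall_olkin(F, 1.0)(x), F(x))
# ... and the operation is stable: applying it twice gives M_F(.; alpha * beta).
twice = marshall_olkin(marshall_olkin(F, alpha), beta)
assert np.allclose(twice(x), marshall_olkin(F, alpha * beta)(x))
```
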
The same problem was tackled by Barakat and Nigm [7] for the beta-generated-distributions family introduced by Jones [16]; this family has since been extensively studied by many authors, among them Alexander et al. [2] and Triantafyllou and Koutras [19]. Let \(I_u(a,b),~a,b\ge 0,~\) be the incomplete beta ratio function (beta df). Then the beta-generated-distributions family is given by \(\mathcal {B}_F(x;a,b)=I_{F(x)}(a,b)=\frac{1}{\beta (a,b)}\int _0^{F(x)}t^{a-1}(1-t)^{b-1}\mathrm{{d}}t,\) where \(\beta (.,.)\) is the beta function. Clearly, \(\mathcal {B}_F(x;1,1)=F(x).\) Moreover, when r is an integer such that \(1\le r\le n,\) \({{\mathcal {B}}}_F(x;r,n-r+1)\) is the df of the rth order statistic \(X_{r:n}\) of a sample of size n from the df F. Barakat and Nigm [7] showed that the family \({{\mathcal {B}}}_F\) and its base df F belong to the same domain of maximal (or minimal or upper record value, only if \(b=1,\) or lower record value, only if \(a=1\)) attraction. Moreover, it is shown that the weak convergence of any non-extreme order statistic (central or intermediate order statistic), based on a base distribution F, to a non-degenerate limit type implies the weak convergence of the corresponding statistic based on the family \({{\mathcal {B}}}_F\) to a non-degenerate limit type. The relations between the two limit types are deduced.
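
The identification of \({{\mathcal {B}}}_F(x;r,n-r+1)\) with the df of \(X_{r:n}\) can also be checked by simulation; in the following sketch the exponential base df, the rank, and the simulation sizes are illustrative assumptions.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)

n, r = 10, 3                        # sample size and rank (illustrative)
x_grid = np.linspace(0.1, 2.0, 5)
F = lambda x: 1.0 - np.exp(-x)      # illustrative base df: standard exponential

# Empirical df of the r-th order statistic X_{r:n} from simulated samples.
reps = 200_000
samples = np.sort(rng.exponential(size=(reps, n)), axis=1)
emp = (samples[:, r - 1][:, None] <= x_grid).mean(axis=0)

# B_F(x; r, n - r + 1) = I_{F(x)}(r, n - r + 1), the incomplete beta ratio at F(x).
theory = beta.cdf(F(x_grid), r, n - r + 1)
print(np.max(np.abs(emp - theory)))   # small (Monte Carlo error only)
```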

Alzaatreh [3] and Alzaatreh et al. [4] suggested and studied a family of distributions motivated by the upper record values; for any base df F, it is given by

$$\begin{aligned} G_F(x;a)=\Gamma _{-\log (1-F(x))}(a)=\frac{1}{\Gamma (a)}\int _0^{-\log (1-F(x))} t^{a-1}e^{-t}\mathrm{{d}}t,\end{aligned}$$
(1.1)

where \(\Gamma _x(a)=\frac{1}{\Gamma (a)}\int _0^x t^{a-1}e^{-t}\mathrm{{d}}t\) is the gamma df. Clearly, \(G_F(x;1)=F(x).\) When \(a=n,\) the family (1.1) is the df of the nth upper record value arising from a sequence \(\{X_i\}\) of independent and identically distributed (i.i.d.) random variables with the df F(x) (see [5]). Moreover, if \(Y\sim \Gamma _x(a), x\ge 0,\) and \(\eta \sim G_F(x;a),\) then \(\eta =F^{-1}(1-e^{-Y}).\) Thus, \(\text{ E }(\eta )\) can be obtained using the relation \(\text{ E }(\eta )=\text{ E }(F^{-1}(1-e^{-Y})).\) In 2011, Cordeiro and de Castro [13] created a family of generalized distributions derived from the distribution initially proposed by Kumaraswamy; in honor of this author, they called it the Kw family. For any base df F, the df of the Kw family is defined by

$$\begin{aligned} K_F(x;a,b)= 1-\left( 1-F^a(x)\right) ^b,~a,b>0.\end{aligned}$$
(1.2)

The family (1.2) has an advantage over the beta family \({{\mathcal {B}}}_F(x; a,b),\) since it does not involve any special function. Clearly, \(K_F(x;1,1)=F(x)\) and with \(a = 1,\) the Kw family coincides with the family \({{\mathcal {B}}}_F(x; 1,b),\) generated by the \(I_u(1,b)\) distribution. Furthermore, for \(b = 1\) and a being an integer, (1.2) is the distribution of the maximum of a random sample of size a from F. It is worth mentioning that the family (1.2) is relevant to the order statistics in an interesting way. Namely, if \(X_{11},\ldots ,X_{1m};X_{21},\ldots ,X_{2m};\ldots ;X_{n1},\ldots ,X_{nm}\) are i.i.d. random variables from the base df F,  then \(\eta =\min \limits _{1\le i\le n}\max \limits _{1\le j\le m} \{X_{ij}\}\sim K_F(x;m,n).\)
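
Both representations, the record-value representation behind (1.1) and the min–max representation behind (1.2), are convenient for simulation. The following sketch (standard normal base df and parameter values chosen only for illustration) compares the empirical df of \(\eta \) with \(G_F(x;a)\) and with \(K_F(x;m,n),\) respectively.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import gammainc   # regularized lower incomplete gamma, Gamma_x(a)

rng = np.random.default_rng(1)

a, m, n = 3, 4, 5                    # gamma parameter and Kw parameters (illustrative)
x_grid = np.linspace(-1.0, 2.0, 5)   # evaluation points (illustrative)
reps = 200_000

# (1.1) with a standard normal base: eta = F^{-1}(1 - exp(-Y)), Y ~ Gamma(a), has df G_F(.; a).
Y = rng.gamma(a, size=reps)
eta_gamma = norm.ppf(1.0 - np.exp(-Y))
emp_G = (eta_gamma[:, None] <= x_grid).mean(axis=0)
print(np.max(np.abs(emp_G - gammainc(a, -np.log(1.0 - norm.cdf(x_grid))))))   # small

# (1.2): the minimum over n groups of the maximum of m observations has df K_F(.; m, n).
X = rng.standard_normal((reps, n, m))
eta_kw = X.max(axis=2).min(axis=1)
emp_K = (eta_kw[:, None] <= x_grid).mean(axis=0)
print(np.max(np.abs(emp_K - (1.0 - (1.0 - norm.cdf(x_grid) ** m) ** n))))     # small
```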

In this paper, we show that the weak convergence of any upper (lower) record value, or central order statistic, based on a base distribution F, to a non-degenerate limit type implies the weak convergence of the corresponding statistic based on the family \(G_F(x;a)\) to a non-degenerate limit type. The relations between the two limit types are obtained. Moreover, the weak convergence of the extreme, or intermediate, order statistics based on F to non-degenerate limits implies the convergence of those statistics based on \(G_F\) to degenerate limits. Finally, we show that the asymptotic behaviors of the order statistics and record values based on the Kumaraswamy and beta-generated-distributions families are the same. We conclude this section with an elementary lemma, which is an essential tool of our study. The proof of this lemma follows directly by an application of L'Hospital's rule and Leibniz's rule for differentiation under the integral sign (cf. [15]).

Lemma 1.1

For any \(a>0\) and any fixed \(\lambda >0,\) we have

$$\begin{aligned} \Gamma _x(a)\sim & {} \frac{x^a}{a\Gamma (a)},~\quad \text{ as }~x\rightarrow 0,\end{aligned}$$
(1.3)
$$\begin{aligned} 1-\Gamma _x(a)\sim & {} \frac{x^{a-1} e^{-x}}{\Gamma (a)},~\quad \text{ as }~ x\rightarrow \infty ,\end{aligned}$$
(1.4)
$$\begin{aligned} \Gamma _{x+\lambda }(a)-\Gamma _\lambda (a)\sim & {} \frac{x}{\Gamma (a)}\lambda ^{a-1}e^{-\lambda },\quad ~\text{ as }~ x\rightarrow 0. \end{aligned}$$
(1.5)

Moreover, let \(0\le x\le 1\) and \(0<\lambda <1.\) Then

$$\begin{aligned} 1-(1-x^a)^b\sim & {} b~x^a,~\quad \text{ as }~x\rightarrow 0,\end{aligned}$$
(1.6)
$$\begin{aligned} (1-x^a)^b\sim & {} a^{b}(1-x)^b,~ \quad \text{ as }~x\rightarrow 1,\end{aligned}$$
(1.7)
$$\begin{aligned} (1-\lambda ^a)^b-(1-x^a)^b\sim & {} ~ab~\lambda ^{a-1}(1-\lambda ^a)^{b-1}(x-\lambda ),~ \quad \text{ as }~x\rightarrow \lambda ,\end{aligned}$$
(1.8)

where the symbol \(\sim \) stands for asymptotic equivalence, i.e., \(f(x)\sim g(x),\) as \(x\rightarrow x_0,\) means that \(\lim _{x\rightarrow x_0}\frac{f(x)}{g(x)}=1.\)
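
The relations (1.3)–(1.8) are also easy to check numerically; the following sketch evaluates each ratio at points approaching the relevant limit (the values of a, b, and \(\lambda \) are arbitrary), and every printed ratio tends to 1.

```python
import numpy as np
from math import gamma
from scipy.special import gammainc

a, b, lam = 2.5, 1.7, 0.4   # illustrative parameters

# (1.3): Gamma_x(a) ~ x^a / (a Gamma(a)), x -> 0.
print([gammainc(a, x) / (x**a / (a * gamma(a))) for x in (1e-2, 1e-4)])
# (1.4): 1 - Gamma_x(a) ~ x^{a-1} e^{-x} / Gamma(a), x -> infinity.
print([(1 - gammainc(a, x)) / (x**(a - 1) * np.exp(-x) / gamma(a)) for x in (20.0, 60.0)])
# (1.5): Gamma_{x+lam}(a) - Gamma_lam(a) ~ x lam^{a-1} e^{-lam} / Gamma(a), x -> 0.
print([(gammainc(a, x + lam) - gammainc(a, lam))
       / (x * lam**(a - 1) * np.exp(-lam) / gamma(a)) for x in (1e-3, 1e-5)])
# (1.6): 1 - (1 - x^a)^b ~ b x^a, x -> 0.
print([(1 - (1 - x**a)**b) / (b * x**a) for x in (1e-2, 1e-4)])
# (1.7): (1 - x^a)^b ~ a^b (1 - x)^b, x -> 1.
print([(1 - x**a)**b / (a**b * (1 - x)**b) for x in (0.99, 0.9999)])
# (1.8): (1 - lam^a)^b - (1 - x^a)^b ~ a b lam^{a-1} (1 - lam^a)^{b-1} (x - lam), x -> lam.
print([((1 - lam**a)**b - (1 - x**a)**b)
       / (a * b * lam**(a - 1) * (1 - lam**a)**(b - 1) * (x - lam)) for x in (lam + 1e-3, lam + 1e-6)])
```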

2 Asymptotic Distribution of Extreme Order Statistics

A df F(x) is said to belong to the domain of maximal (minimal) attraction of a non-degenerate df \(H(x)\) \((G(x)),\) denoted by \(F(x)\in {{\mathcal {D}}}_\mathrm{{max}}(H(x))\) \((F(x) \in {{\mathcal {D}}}_\mathrm{{min}}(G(x))),\) if there exist normalizing constants \(A_n>0\) and \(B_n\) \((C_n>0\) and \(D_n)\) such that \(P(X_{n:n} \le A_nx+ B_n)\longrightarrow H(x)\) \((P(X_{1:n} \le C_nx+ D_n)\longrightarrow G(x))\) for all continuity points of \(H(x)\) \((G(x))\). Sometimes, we use the notation \(F(A_nx+B_n)\in \mathcal { D}_\mathrm{{max}}(H(x))\) \((F(C_nx+D_n) \in {{\mathcal {D}}}_\mathrm{{min}}(G(x)))\) when our attention is focused on some specific normalizing constants \(A_n>0\) and \(B_n\) \((C_n>0\) and \(D_n)\). It is well known, see [6, pp. 210–213], that H(x) is one of the types:

  1. (i)

    \(H_1(x;\alpha )=e^{-x^{-\alpha }}, ~x,\alpha >0.\)

  2. (ii)

    \(H_2(x;\alpha )=e^{-(-x)^{\alpha }},~x\le 0,\alpha >0.\)

  3. (iii)

    \(H_3(x)=e^{-e^{-x}},~-\infty<x<\infty .\)

Moreover, G(x) is related to H(x) by \(G(x)=1 - H(-x).\)

Lemma 2.1

(see [6, p. 218]).

  1. (i)

    \(F(A_nx+B_n)\in {{\mathcal {D}}}_\mathrm{{max}}(H(x))\) if and only if \(n(1-F(A_nx+B_n))\longrightarrow -\log H(x),~\) as \(n\rightarrow \infty .\)

  2. (ii)

    \(F(C_nx+D_n)\in {{\mathcal {D}}}_\mathrm{{min}}(G(x))\) if and only if \(nF(C_nx+D_n)\longrightarrow -\log (1-G(x)),~\) as \(n\rightarrow \infty .\)

Theorem 2.1

For any base df F and suitable normalizing constants \(A_n,C_n>0,\) \(B_n,D_n,\) we have:

Part I. \(F(A_nx+B_n)\in {{\mathcal {D}}}_\mathrm{{max}}(H)\) (or \(F(C_nx+D_n) \in {{\mathcal {D}}}_\mathrm{{min}}(G)\)) implies \(G_F(A_nx+B_n;a)\notin {{\mathcal {D}}}_\mathrm{{max}}(H^{\prime })\) (or \(G_F(C_nx+D_n;a)\notin {{\mathcal {D}}}_\mathrm{{min}}(G^{\prime })\)), for any non-degenerate limit df \(H^{\prime }\) (or \(G^{\prime }\)).

Part II. \(~K_{F}\in {{\mathcal {D}}}_\mathrm{{max}}(H)(\mathcal { D}_\mathrm{{min}}(G))~ \text{ if } \text{ and } \text{ only } \text{ if }~F\in {{\mathcal {D}}}_\mathrm{{max}}(H)(\mathcal { D}_\mathrm{{min}}(G)).~\) More specifically

  1. 1.

    \(~F(A_nx+B_n)\in {{\mathcal {D}}}_\mathrm{{max}}(H_1(x;\alpha ))\) if and only if \(K_F(A_{\varphi (n;b)}x+B_{\varphi (n;b)};a,b)\in {{\mathcal {D}}}_\mathrm{{max}}(H_1(a^{-\frac{1}{\alpha }}x; \alpha b)),\) \(~F(C_nx+D_n)\in {{\mathcal {D}}}_\mathrm{{min}}(G_1(x;\alpha ))\) if and only if \(K_F(C_{\varphi (n;a)}x+D_{\varphi (n;a)};a,b)\in \mathcal { D}_\mathrm{{min}}(G_1(b^{-\frac{1}{\alpha a}}x; \alpha a));\)

  2. 2.

    \(~F(A_nx+B_n)\in {{\mathcal {D}}}_\mathrm{{max}}(H_2(x;\alpha ))\) if and only if \(~K_F(A_{\varphi (n;b)}x+B_{\varphi (n;b)};a,b)\in {{\mathcal {D}}}_\mathrm{{max}}(H_2(a^{\frac{1}{\alpha }}x; \alpha b)),\) \(~F(C_nx+D_n)\in {{\mathcal {D}}}_\mathrm{{min}}(G_2(x;\alpha ))\) if and only if \(~K_F(C_{\varphi (n;a)}x+D_{\varphi (n;a)};a,b)\in \mathcal { D}_\mathrm{{min}}(G_2(b^{\frac{1}{\alpha a}}x; \alpha a));\)

  3. 3.

    \(~F(A_nx+B_n)\in {{\mathcal {D}}}_\mathrm{{max}}(H_3(x))\) if and only if \(~K_F(A_{\varphi (n;b)}x+B_{\varphi (n;b)};a,b)\in {{\mathcal {D}}}_\mathrm{{max}}(H_3(bx-b\log a)),\) \(~F(C_nx+D_n)\in {{\mathcal {D}}}_\mathrm{{min}}(G_3(x))\) if and only if \(~K_F(C_{\varphi (n;a)}x+D_{\varphi (n;a)};a,b)\in \mathcal { D}_\mathrm{{min}}(G_3(ax+\log b));\)

where for any \(\beta >0\), \(\varphi (n;\beta )=[n^{\frac{1}{\beta }}]\) and [.] denotes the floor function.

Proof

If \(F(A_nx+B_n)\in {{\mathcal {D}}}_\mathrm{{max}}(H),\) then Lemma 2.1 yields

$$\begin{aligned} n(1-F(A_nx+B_n))\rightarrow -\log H(x),~\text{ as }~n\rightarrow \infty . \end{aligned}$$
(2.1)

Thus, upon using the relation (1.4), we get

$$\begin{aligned} n(1-G_F(A_nx+B_n;a))\sim & {} \frac{n(1-F(A_nx+B_n))(-\log (1-F(A_nx+B_n)))^{a-1}}{\Gamma (a)}\nonumber \\\rightarrow & {} \left\{ \begin{array}{ll} \infty ,&{} a>1,\\ 0,&{} a<1. \end{array}\right. \end{aligned}$$
(2.2)

An application of Lemma 2.1 and the relation (2.2) proves Part I of Theorem 2.1, in the case of the upper extremes. For the lower extreme case, we first apply Lemma 2.1 to get \(nF(C_nx+D_n)\rightarrow -\log (1-G(x))\) and then by applying (1.3), we get

$$\begin{aligned} nG_F(C_nx+D_n;a)\sim & {} \frac{n(-\log (1-F(C_nx+D_n)))^a}{a\Gamma (a)}\nonumber \\\sim & {} \frac{nF(C_nx+D_n)F^{a-1}(C_nx+D_n)}{a\Gamma (a)}\rightarrow \left\{ \begin{array}{l} 0,a>1,\\ \infty ,a<1. \end{array}\right. \quad \quad \quad \end{aligned}$$
(2.3)

In view of Lemma 2.1, the relation (2.3) completes the proof of Part I of Theorem 2.1.   \(\square \)

We turn now to the proof of the second part of the theorem. Under the condition (2.1), an application of Lemma 2.1 gives \(\varphi (n;b)(1-F(A_{\varphi (n;b)}x+B_{\varphi (n;b)}))\rightarrow -\log H(x),\) which implies

$$\begin{aligned} n(1-F(A_{\varphi (n;b)}x+B_{\varphi (n;b)}))^b\rightarrow (-\log H(x))^b. \end{aligned}$$
(2.4)

On the other hand, (2.4) yields that \(F(A_{\varphi (n;b)}x+B_{\varphi (n;b)})\rightarrow 1,\) for all values of x for which \(-\log H(x)\) is finite. Thus, (1.7) and (2.4) yield

$$\begin{aligned}&n(1-K_{F}(A_{\varphi (n;b)}x+B_{\varphi (n;b)};a,b))\nonumber \\&\quad =n(1-F^{a}(A_{\varphi (n;b)}x+B_{\varphi (n;b)}))^b\nonumber \\&\quad \sim a^bn (1-F(A_{\varphi (n;b)}x+B_{\varphi (n;b)}))^{b}\rightarrow (-a\log H(x))^b. \end{aligned}$$
(2.5)

Moreover, it is easy to verify that \((-a\log H_1(x;\alpha ))^b\) \(=-\log H_1(a^{-\frac{1}{\alpha }}x;\alpha b),\) \((-a\log H_2(x;\alpha ))^b\) \(=-\log H_2(a^{\frac{1}{\alpha }}x;\alpha b)\) and \((-a\log H_3(x))^b\) \(=-\log H_3(bx-b\log a).\) Thus, the direct assertion of the second part of Theorem 2.1 for upper extremes follows immediately. We turn now to the converse assertion. Assume that, for given \(a,b>0,\) we have \(K_{F}\in \mathcal { D}_\mathrm{{max}}(H),~\) with normalizing constants \({\hat{A}}_n>0\) and \({\hat{B}}_n.\) Then, in view of Lemma 2.1, we get

$$\begin{aligned} n(1-K_{F}({\hat{A}}_nx+{\hat{B}}_n;a,b))\rightarrow -\log H(x),~\text{ as }~n\rightarrow \infty , \end{aligned}$$
(2.6)

which implies \(1-K_{F}({\hat{A}}_nx+{\hat{B}}_n;a,b)\rightarrow 0,\) i.e., \(F({\hat{A}}_nx+{\hat{B}}_n)\rightarrow 1,\) as \(n\rightarrow \infty ,\) for all values of  x  for which \(-\log H(x)\) is finite. Thus, by using (1.7) and (2.6), we get \(a^bn(1-F({\hat{A}}_nx+{\hat{B}}_n))^b\rightarrow -\log H(x),\) as \(n\rightarrow \infty ,\) or equivalently, \(\varphi (n;b)(1-F({\hat{A}}_nx+{\hat{B}}_n))\rightarrow \frac{1}{a}(-\log H(x))^{\frac{1}{b}}.\) Since the last convergence holds for every subsequence of n and especially holds for the subsequence \(n^{\prime }=\varphi (n;\frac{1}{b})=[n^b],\) where \(\varphi (n^{\prime };b)=[[n^b]^{\frac{1}{b}}]\sim n,\) we get \(n(1-F({\tilde{A}}_nx+{{\tilde{B}}}_n))\rightarrow a^{-1}(-\log H(x))^{\frac{1}{b}},\) where \({{\tilde{A}}}_n={\hat{A}}_{[n^b]}\) and \({{\tilde{B}}}_n={\hat{B}}_{[n^b]}.\) Therefore, we get the desired result by applying Lemma 2.1 (note that the converse part of the theorem holds with the normalizing constants \({{\tilde{A}}}_n\) and \({{\tilde{B}}}_n,\) i.e., \(K_{F}({\hat{A}}_nx+{\hat{B}}_n)\in {{\mathcal {D}}}_\mathrm{{max}}(H)(\mathcal { D}_\mathrm{{min}}(G))~\text{ implies }~F({{\tilde{A}}}_nx+ {\tilde{B}}_n)=F({\hat{A}}_{[n^b]}x+{\hat{B}}_{[n^b]})\in {{\mathcal {D}}}_\mathrm{{max}}(H)(\mathcal { D}_\mathrm{{min}}(G)).~\) Thus, the second part of the theorem is proved for the upper extreme case. The proof of the second part for the lower extreme case follows by a similar argument, using the relation (1.6) and applying Lemma 2.1. This completes the proof of Theorem 2.1.\(\square \)
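
The three identities used in the last step of the direct assertion can also be verified numerically; the parameter values in the following sketch are arbitrary.

```python
import numpy as np

a, b, alpha = 2.0, 3.0, 1.5                      # illustrative parameters

H1 = lambda x, al: np.exp(-x ** (-al))           # x > 0
H2 = lambda x, al: np.exp(-(-x) ** al)           # x <= 0
H3 = lambda x: np.exp(-np.exp(-x))

xp = np.array([0.5, 1.0, 2.0])                   # points in the support of H1
xn = np.array([-2.0, -1.0, -0.5])                # points in the support of H2
xr = np.array([-1.0, 0.0, 1.5])                  # points for H3

# (-a log H_1(x; alpha))^b = -log H_1(a^{-1/alpha} x; alpha b)
assert np.allclose((-a * np.log(H1(xp, alpha))) ** b,
                   -np.log(H1(a ** (-1.0 / alpha) * xp, alpha * b)))
# (-a log H_2(x; alpha))^b = -log H_2(a^{1/alpha} x; alpha b)
assert np.allclose((-a * np.log(H2(xn, alpha))) ** b,
                   -np.log(H2(a ** (1.0 / alpha) * xn, alpha * b)))
# (-a log H_3(x))^b = -log H_3(bx - b log a)
assert np.allclose((-a * np.log(H3(xr))) ** b,
                   -np.log(H3(b * xr - b * np.log(a))))
```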

Remark 2.1

Theorem 2.1, Part II, shows that the asymptotic behaviors of the extreme order statistics based on Kumaraswamy and beta-generated-distributions families are the same.

Example 2.1

If F is an exponential(\(\sigma \)) df, it can be shown that \(F(A_nx+B_n)\in {{\mathcal {D}}}_\mathrm{{max}} (H_3(x))\) and \(F(C_nx)\in {{\mathcal {D}}}_\mathrm{{min}} (G_2(x;1)),\) where \((A_n,C_n)=(\frac{1}{\sigma },\frac{1}{n\sigma })\) and \(B_n=\frac{1}{\sigma }\log n.\) An application of Theorem 2.1 thus yields \(K_F(A_{\varphi (n;b)}x+B_{\varphi (n;b)};a,b)\in {{\mathcal {D}}}_\mathrm{{max}}(H_3((bx-b\log a)))\) and \(K_F(C_{\varphi (n;a)}x;a,b)\in {{\mathcal {D}}}_\mathrm{{min}}(G_2(b^{\frac{1}{a}}x;a)).\) Note that (cf., Example 2.1 of [7]) \({{\mathcal {B}}}_F(A_{\varphi (n;b)}x+B_{\varphi (n;b)};a,b)\) \(\in {{\mathcal {D}}}_\mathrm{{max}}(H_3((bx+\log b\beta (a,b))))\) and \({{\mathcal {B}}}_F(C_{\varphi (n;a)}x;a,b)\) \(\in {{\mathcal {D}}}_\mathrm{{min}}(G_2((a\beta (a,b))^{-\frac{1}{a}}x;a)).\)
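
The first statement of Example 2.1 can be illustrated by a small Monte Carlo experiment: samples from \(K_F\) with an exponential base are drawn by inversion of (1.2), and the empirical df of the normalized maximum is compared with \(H_3(bx-b\log a).\) The parameter values and sample sizes below are illustrative, and the agreement is only approximate at finite n.

```python
import numpy as np

rng = np.random.default_rng(2)

sigma, a, b = 1.0, 2.0, 1.5          # illustrative parameters
n, reps = 20_000, 4_000              # sample size and Monte Carlo replications

def sample_KF(size):
    """Draw from K_F(.; a, b) with exponential(sigma) base, by inversion of (1.2)."""
    u = rng.uniform(size=size)
    p = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)   # p = F(x)
    return -np.log1p(-p) / sigma                      # x = F^{-1}(p)

phi = int(np.floor(n ** (1.0 / b)))                   # phi(n; b) = [n^{1/b}]
A, B = 1.0 / sigma, np.log(phi) / sigma               # A_{phi(n;b)} and B_{phi(n;b)}

maxima = np.array([sample_KF(n).max() for _ in range(reps)])
x_grid = np.array([-1.0, 0.0, 1.0, 2.0])

emp = (maxima[:, None] <= A * x_grid + B).mean(axis=0)
limit = np.exp(-np.exp(-(b * x_grid - b * np.log(a))))   # H_3(bx - b log a)
print(np.round(emp, 3))
print(np.round(limit, 3))
```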

3 Asymptotic Distribution of Intermediate and Central Order Statistics

The limit theory of the order statistic \(X_{r:n},\) with variable rank (i.e., \(\min (r,n - r)\rightarrow \infty ,\) as \(n\rightarrow \infty \)), was studied by many authors, such as Smirnov [18], Chibisov [12], and Wu [20]. When \(\sqrt{n}\left( \frac{r}{n}-\lambda \right) \rightarrow 0,\) as \(n\rightarrow \infty ,\) \(0<\lambda <1,\) a df F is said to belong to the domain of normal \(\lambda -\)attraction of a non-degenerate df \(\Phi ,\) denoted by \(F\in {{\mathcal {D}}}_{\lambda }(\Phi ),\) if there exist normalizing constants \(A_n>0\) and \(B_n \) such that \(P(X_{r:n} \le A_n x+ B_n)\longrightarrow \Phi (x)\) for all continuity points of \(\Phi (x)\) (when we have specific normalizing constants \(A_n>0\) and \(B_n,\) the notation \(F(A_nx+B_n)\in {{\mathcal {D}}}_{\lambda }(\Phi )\) may be used). Smirnov [18] showed that \(F(A_nx+B_n)\in \mathcal { D}_{\lambda }(\Phi )\) if and only if

$$\begin{aligned} \sqrt{n}\,\frac{F(A_nx+B_n)-\lambda }{{{\mathcal {C}}}_{\lambda }}\rightarrow {{\mathcal {N}}}^{-1}(\Phi (x)),~\text{ as }~ n\rightarrow \infty , \end{aligned}$$
(3.1)

where \({{\mathcal {C}}}_{\lambda }=\sqrt{\lambda (1-\lambda )}\) and \({{\mathcal {N}}}\) is the standard normal df. Moreover, the df \(\Phi \) has only one of the types:

  1. (i)

    \(\Phi _1(x;\alpha )={{\mathcal {N}}}(cx^{\alpha }),~x,c,\alpha >0.\)

  2. (ii)

    \(\Phi _2(x;\alpha )={{\mathcal {N}}}(-c(-x)^{\alpha }),~x\le 0,c,\alpha >0.\)

  3. (iii)

    \(\Phi _3(x;\alpha )={{\mathcal {N}}}(-c_1(-x)^{\alpha }),~x\le 0,~\) \(\Phi _3(x;\alpha )={{\mathcal {N}}}(c_2x^{\alpha }),~x>0,\) where \(c_1,c_2,\alpha >0.\)

  4. (iv)

    \(\Phi _4(x)=\frac{1}{2},~-1\le x\le 1.\)

Theorem 3.1

For any df F,  and \(0<\lambda <1,\) we have:

Part I. \(~G_{F}\in {{\mathcal {D}}}_{\lambda ^*}(\Phi )\) if and only if \(~F\in {{\mathcal {D}}}_{\lambda }(\Phi ),~\) where \(\lambda ^*=\Gamma _{{\hat{\lambda }}}(a)\) and \({\hat{\lambda }}=-\log (1-\lambda ).\) More specifically, let \(\zeta =\frac{\mathcal { C}_\lambda {\hat{\lambda }}^{a-1}}{{{\mathcal {C}}}_{\lambda ^*}\Gamma (a)}.\) Then, we have

  1. 1.

    \(~F\in {{\mathcal {D}}}_{\lambda }(\Phi _i(x;\alpha ))\) if and only if \(~G_F\in \mathcal { D}_{\lambda ^*}(\Phi _i(\zeta ^{\frac{1}{\alpha }}x; \alpha )),~i=1,2,3;\)

  2. 2.

    \(~F\in {{\mathcal {D}}}_{\lambda }(\Phi _4(x))\) if and only if \(~G_{F}\in {{\mathcal {D}}}_{\lambda ^*}(\Phi _4(x)).\)

Part II. \(~K_{F}\in {{\mathcal {D}}}_{{\tilde{\lambda }}}(\Phi )\) if and only if \(~F\in {{\mathcal {D}}}_{\lambda }(\Phi ),~\) where \({\tilde{\lambda }}=1-(1-\lambda ^a)^b.\) More specifically, let \(\eta =\frac{ab\mathcal { C}_\lambda \lambda ^{a-1}(1-\lambda ^a)^{b-1}}{\mathcal { C}_{{\tilde{\lambda }}}}.\) Then, we have

  1. 1.

    \(~F\in {{\mathcal {D}}}_{\lambda }(\Phi _i(x;\alpha ))\) if and only if \(~K_F\in \mathcal { D}_{{\tilde{\lambda }}}(\Phi _i(\eta ^{\frac{1}{\alpha }}x; \alpha )),~i=1,2,3;\)

  2. 2.

    \(~F\in {{\mathcal {D}}}_{\lambda }(\Phi _4(x))\) if and only if \(~K_F\in \mathcal { D}_{{\tilde{\lambda }}}(\Phi _4(x)).\)

In all the cases of Parts I and II, the normalizing constants \(A_n>0\) and \(B_n\) used in the convergence of the df of the central order statistics for the base df F and the families \(G_F\) and \(K_F\) are the same.

Proof

Let \(~F\in {{\mathcal {D}}}_{\lambda }(\Phi ).\) Then in view of Smirnov’s result [18], there exist suitable normalizing constants \(A_n>0\) and \(B_n\) for which \(\sqrt{n}\,\frac{F(A_nx+B_n)-\lambda }{{{\mathcal {C}}}_\lambda }\rightarrow \mathcal { N}^{-1}(\Phi (x)),\) as \(n\rightarrow \infty .\) Therefore,

$$\begin{aligned} F(A_nx+B_n)=\lambda +\frac{{{\mathcal {C}}}_\lambda }{\sqrt{n}}\mathcal { N}^{-1}(\Phi (x))(1+\circ (1))=\lambda +x_n, \end{aligned}$$

where \(x_n=\frac{1}{\sqrt{n}}{{\mathcal {C}}}_\lambda \mathcal { N}^{-1}(\Phi (x))(1+\circ (1))\rightarrow 0,~\) as \(n\rightarrow \infty ,\) and \(\circ (.)\) is the well-known Landau little-\(\circ \) notation. On the other hand, we have \(-\log (1-F(A_nx+B_n))={\hat{\lambda }}-\log (1-\frac{x_n}{1-\lambda })={\hat{\lambda }}+\frac{x_n}{1-\lambda }(1+\circ (1)).\) Thus, by using (1.5), we get

$$\begin{aligned} \sqrt{n}\frac{G_F(A_nx+B_n;a)-\lambda ^*}{{{\mathcal {C}}}_{\lambda ^*}}= & {} \sqrt{n}\frac{\Gamma _{{\hat{\lambda }}+\frac{x_n}{1-\lambda }(1+\circ (1))}(a)- \Gamma _{{\hat{\lambda }}}(a)}{{{\mathcal {C}}}_{\lambda ^*}} \sim \sqrt{n}\frac{x_n{\hat{\lambda }}^{a-1}}{{{\mathcal {C}}}_{\lambda ^*}\Gamma (a)} \nonumber \\= & {} \frac{{{\mathcal {C}}}_{\lambda }{\hat{\lambda }}^{a-1}{{\mathcal {N}}}^{-1} (\Phi (x))(1+\circ (1))}{{{\mathcal {C}}}_{\lambda ^*}\Gamma (a)}\nonumber \\\rightarrow & {} \zeta {{\mathcal {N}}}^{-1}(\Phi (x)),~\text{ as }~n\rightarrow \infty , \end{aligned}$$

which in view of Smirnov’s result [18] leads to the proof of the direct assertion of Part I, if we note that \({{\mathcal {N}}}(\zeta \mathcal { N}^{-1}(\Phi ))\) is a non-degenerate df, which has the same type as \(\Phi .\) Namely, \({{\mathcal {N}}}(\zeta \mathcal { N}^{-1}(\Phi _i(x;\alpha )))=\Phi _i(\zeta ^{\frac{1}{\alpha }}x; \alpha ), ~i=1,2,3,\) and \({{\mathcal {N}}}(\zeta \mathcal { N}^{-1}(\Phi _4(x)))=\Phi _4(x).\) Conversely, assume that \(~G_{F}(A_{n}x+B_n;a)\in {{\mathcal {D}}}_{\lambda ^*}(\Phi );\) then by Smirnov’s result we get \(\sqrt{n}\,\frac{G_F(A_{n}x+B_n;a)-\lambda ^*}{\mathcal { C}_{\lambda ^*}}\rightarrow {{\mathcal {N}}}^{-1}(\Phi (x)),\) as \(n\rightarrow \infty .\) Therefore, \(G_{F}(A_nx+B_n;a)\sim \lambda ^*,\) for all values of x for which \({{\mathcal {N}}}^{-1}(\Phi (x))\) is finite, i.e., for these values of x we have

$$\begin{aligned} -\log (1-F(A_n{x}+B_n))={\hat{\lambda }}+\epsilon _n, \end{aligned}$$
(3.2)

where \(\epsilon _n\rightarrow 0,\) as \(n\rightarrow \infty .\) Thus, we get \(F(A_n{x}+B_n)=1-(1-\lambda )e^{-\epsilon _n}=\lambda +(1-\lambda )\epsilon _n(1+\circ (1)),\) which implies \(\sqrt{n}\,\frac{F(A_nx+B_n)-\lambda }{\mathcal { C}_\lambda }\sim \sqrt{n}\frac{(1-\lambda )\epsilon _n}{\mathcal { C}_\lambda }.\) Upon combining the last relation with (3.2) and (1.5), we get

$$\begin{aligned} \sqrt{n}\frac{F(A_n{x}+B_n)-\lambda }{{{\mathcal {C}}}_\lambda }\sim & {} \sqrt{n}\frac{(1-\lambda )\epsilon _n}{{{\mathcal {C}}}_\lambda }\sim \sqrt{n}\frac{\Gamma (a)(\Gamma _{\epsilon _n+{\hat{\lambda }}}(a)-\Gamma _{{\hat{\lambda }}}(a))}{{\hat{\lambda }}^{^{a-1}}\mathcal { C}_{\lambda }}\nonumber \\= & {} \zeta ^{-1} \sqrt{n}\frac{G_F(A_n{x}+B_n;a)-\lambda ^*}{{{\mathcal {C}}}_{\lambda ^*}}\rightarrow \zeta ^{-1}{{\mathcal {N}}}^{-1}(\Phi (x)). \end{aligned}$$

This completes the proof of the first part of Theorem 3.1.\(\square \)

We turn now to prove the second part. Let \(F(A_nx+B_n)\in \mathcal { D}_{\lambda }(\Phi ).\) Then in view of (3.1) and (1.8), we get

$$\begin{aligned} \sqrt{n}\frac{K_F(A_nx+B_n;a,b)-{\tilde{\lambda }}}{{{\mathcal {C}}}_{{\tilde{\lambda }}}}= & {} \sqrt{n}\frac{(1-\lambda ^a)^b-(1-F^a(A_{n}x+B_{n}))^{b}}{\mathcal { C}_{{\tilde{\lambda }}}}\nonumber \\\sim & {} \sqrt{n}\frac{ab\lambda ^{a-1}(1-\lambda ^a)^{b-1}(F(A_{n}x+B_{n})-\lambda )}{{{\mathcal {C}}}_{{\tilde{\lambda }}}}\nonumber \\= & {} \eta \sqrt{n}\frac{F(A_{n}x+B_{n})-\lambda }{\mathcal { C}_\lambda }\rightarrow \eta {{\mathcal {N}}}^{-1}(\Phi (x)), \end{aligned}$$

as \(n\rightarrow \infty ,\) which in view of Smirnov’s result [18] leads to the proof of the direct assertion if we note that \({{\mathcal {N}}}(\eta {{\mathcal {N}}}^{-1}(\Phi ))\) is a non-degenerate df, which has the same type as \(\Phi .\) Namely, \({{\mathcal {N}}}(\eta \mathcal { N}^{-1}(\Phi _i(x;\alpha )))=\Phi _i(\eta ^{\frac{1}{\alpha }}x; \alpha ), ~i=1,2,3,\) and \({{\mathcal {N}}}(\eta {{\mathcal {N}}}^{-1}(\Phi _4(x)))=\Phi _4(x).\) Conversely, assume that \(K_{F}\in {{\mathcal {D}}}_{{\tilde{\lambda }}}(\Phi );\) then by Smirnov’s result we get \(\sqrt{n}\frac{K_F(A_nx+B_n;a,b)-{\tilde{\lambda }}}{\mathcal { C}_{{\tilde{\lambda }}}}\rightarrow {{\mathcal {N}}}^{-1}(\Phi (x)),\) as \(n\rightarrow \infty .\) Therefore, \(K_{F}(A_nx+B_n;a,b)\sim {\tilde{\lambda }},\) for all values of x for which \({{\mathcal {N}}}^{-1}(\Phi (x))\) is finite, i.e., for these values of x we have \(F(A_nx+B_n)\sim \lambda .\) Therefore, by using (1.8) we get

$$\begin{aligned} \sqrt{n}\frac{F(A_nx+B_n)-\lambda }{{{\mathcal {C}}}_\lambda }\sim & {} \eta ^{-1}\sqrt{n}\frac{(1-\lambda ^a)^b-(1-F^a(A_nx+B_n))^b}{{{\mathcal {C}}}_{{\tilde{\lambda }}}}\nonumber \\= & {} \eta ^{-1}\sqrt{n}\frac{K_F(A_nx+B_n;a,b)-{\tilde{\lambda }}}{{{\mathcal {C}}}_{{\tilde{\lambda }}}}\nonumber \\\rightarrow & {} \eta ^{-1} {{\mathcal {N}}}^{-1}(\Phi (x)),~\text{ as }~n\rightarrow \infty . \end{aligned}$$

This in view of Smirnov’s result [18] yields the desired converse assertion.\(\square \)
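
For concrete values of \(\lambda ,\) a, and b, the quantities \(\lambda ^*,\) \(\zeta ,\) \({\tilde{\lambda }},\) and \(\eta \) appearing in Theorem 3.1 are straightforward to evaluate; the following small helper (an illustration with arbitrary input values) does so by means of the regularized incomplete gamma function.

```python
from math import gamma, log, sqrt
from scipy.special import gammainc

def C(lam):
    """C_lambda = sqrt(lambda (1 - lambda))."""
    return sqrt(lam * (1.0 - lam))

def gamma_family_params(lam, a):
    """lambda* and zeta of Theorem 3.1, Part I."""
    lam_hat = -log(1.0 - lam)
    lam_star = gammainc(a, lam_hat)                  # Gamma_{lambda_hat}(a)
    zeta = C(lam) * lam_hat ** (a - 1.0) / (C(lam_star) * gamma(a))
    return lam_star, zeta

def kw_family_params(lam, a, b):
    """lambda~ and eta of Theorem 3.1, Part II."""
    lam_tilde = 1.0 - (1.0 - lam ** a) ** b
    eta = a * b * C(lam) * lam ** (a - 1.0) * (1.0 - lam ** a) ** (b - 1.0) / C(lam_tilde)
    return lam_tilde, eta

# Illustrative values: a central rank with lambda = 0.3 for the base df.
print(gamma_family_params(0.3, a=2.0))
print(kw_family_params(0.3, a=2.0, b=1.5))
```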

Example 3.1

It is well known (see, [18]) that \({{\mathcal {N}}}({{\mathcal {C}}}_\lambda \sqrt{\frac{2\pi }{n}}x)\in {{\mathcal {D}}}_\lambda ({{\mathcal {N}}}(x)),\) for every \(0<\lambda <1.\) Therefore, for every \(0<{\bar{\lambda }}<1,\) Theorem 3.1 implies that

  1. 1.

    \(G_{{{\mathcal {N}}}}(A_n^\star x;a)\in {{\mathcal {D}}}_{{\bar{\lambda }}}({{\mathcal {N}}}(x)),\) where \(~A_n^\star =\frac{{{\mathcal {C}}}_{{\bar{\lambda }}}\Gamma (a)}{(-\log (1-\lambda ))^{a-1}}\sqrt{\frac{2\pi }{n}}\) and \(\lambda \) is determined from the relation \({\bar{\lambda }}=\Gamma _{-\log (1-\lambda )}(a);\)

  2. 2.

    \(K_{{{\mathcal {N}}}}(B_n^\star x;a,b)\in {{\mathcal {D}}}_{{\bar{\lambda }}}({{\mathcal {N}}}(x)),\) where \(~B_n^\star =\frac{\mathcal { C}_{{\bar{\lambda }}}}{ab[1-(1-{\bar{\lambda }})^\frac{1}{b}]^{\frac{a-1}{a}}(1- {\bar{\lambda }})^{\frac{b-1}{b}}}\sqrt{\frac{2\pi }{n}}.\)

When the variable rank r is such that \(\frac{r}{n}\rightarrow 0,\) as \(n\rightarrow \infty ,\) a df F is said to belong to the domain of attraction of a possible non-degenerate lower intermediate limit df \(\Psi ,\) denoted by \(F\in {{\mathcal {D}}}_{r}(\Psi ),\) if there exist normalizing constants \(A_n>0\) and \(B_n \) such that \(P(X_{r:n} \le A_n x+ B_n)\longrightarrow \Psi (x)\) for all continuity points of \(\Psi (x)\) (again, we use the notation \(F(A_nx+B_n)\in {{\mathcal {D}}}_{r}(\Psi ),\) when our attention is focused on some specific normalizing constants \(A_n>0\) and \(B_n\)). An intermediate rank \(r=r_n\) is said to satisfy Chibisov's condition if \(\lim _{n\rightarrow \infty }(\sqrt{r_{n+z_n(\nu )}}-\sqrt{r_n})=\frac{\theta \nu \ell }{2},~\) for any sequence of integer values \(\{z_n(\nu )\},\) for which \(\frac{z_n(\nu )}{n^{1-\frac{\theta }{2}}}\rightarrow \nu ,\) as \(n\rightarrow \infty ,\) where \(0<\theta <1,~\ell >0\) and \(~\nu ~\) is any real number. It is easy to show that Chibisov's condition implies the condition \(\frac{r_n}{n^{\theta }}\rightarrow \ell ^2.\) Moreover, the latter condition implies Chibisov's condition, see [8], which means that the class of intermediate rank sequences satisfying Chibisov's condition is very wide (a numerical check of this condition for such ranks is sketched after (3.3)). Chibisov [12] proved that \(F(A_nx+B_n)\in {{\mathcal {D}}}_{r}(\Psi )\) if and only if

$$\begin{aligned} \sqrt{n}\,\frac{F(A_nx+B_n)-R}{\sqrt{R}}=\frac{nF(A_nx+B_n) -r}{\sqrt{r}}\longrightarrow {{\mathcal {N}}}^{-1}(\Psi (x)),~\text{ as }~ n\rightarrow \infty , \end{aligned}$$

where \(R=\frac{r}{n}.\) Moreover, the df \(\Psi \) has only one of the types:

$$\begin{aligned} \left. \begin{array}{l} (i) ~~\Psi _1(x;\alpha )={{\mathcal {N}}}(\alpha \log x),~x,\alpha>0. \\ (ii)~\Psi _2(x;\alpha )={{\mathcal {N}}}(-\alpha \log (-x)),~x\le 0,~\alpha >0.\\ (iii)~\Psi _3(x)={{\mathcal {N}}}(x). \end{array}\right\} \end{aligned}$$
(3.3)
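
As announced above, Chibisov's condition is easy to check numerically for the power-law ranks \(r_n\sim \ell ^2n^{\theta };\) in the following sketch the values of \(\theta ,\) \(\ell ,\) and \(\nu \) are arbitrary.

```python
import numpy as np

theta, ell, nu = 0.6, 1.3, 2.0                       # illustrative values

r = lambda n: np.floor(ell ** 2 * n ** theta)        # Chibisov rank sequence r_n ~ ell^2 n^theta

for n in (10 ** 5, 10 ** 7, 10 ** 9):
    z = np.floor(nu * n ** (1.0 - theta / 2.0))      # z_n(nu) with z_n(nu) / n^{1 - theta/2} -> nu
    lhs = np.sqrt(r(n + z)) - np.sqrt(r(n))
    print(lhs, theta * nu * ell / 2.0)               # lhs -> theta nu ell / 2
```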

Theorem 3.2

Let \(r\sim \ell ^2n^\theta ,~0<\theta <1,\) be a Chibisov rank sequence and \(R=\frac{r}{n}.\) Furthermore, for suitable normalizing constants \(A_n>0\) and \(B_n,\) let \(F(A_nx+B_n)\in {{\mathcal {D}}}_{r}(\Psi ).\) Then

Part I. \(G_F(A_nx+B_n;a)\notin {{\mathcal {D}}}_{\acute{r}}(\Psi (x)),\) for any Chibisov rank sequence \(\acute{r}.\)

Part II. \(K_F(A_nx+B_n;a,b)\in {{\mathcal {D}}}_{r^*}(\Psi (x)),\) where \(r^*=nR^*,\) \(R^*=1-(1-R^a)^b\) and \(0<a<(1-\theta )^{-1},\) only if \(a=1\) (in this case we have \(R^*=1-(1-R^a)^b\sim bR\)). More specifically,

  1. 1.

    \(~K_F(A_nx+B_n;1,b)\in {{\mathcal {D}}}_{r^*}(\Psi _i(x;\alpha \sqrt{b}))\) if \(~F(A_nx+B_n)\in \mathcal { D}_{r}(\Psi _i(x;\alpha )), i=1,2;\)

  2. 2.

    \(~K_F(A_nx+B_n;1,b)\in {{\mathcal {D}}}_{r^*}(\Psi _3(\sqrt{b}x))\) if \(~F(A_nx+B_n)\in \mathcal { D}_{r}(\Psi _3(x)).\)

Proof

For proving the first part, we first note that the condition \(F(A_nx+B_n)\in {{\mathcal {D}}}_{r}(\Psi )\) implies that \(F(A_nx+B_n)\sim R\rightarrow 0,~\) as \(n\rightarrow \infty ,\) i.e., \(-\log (1-F(A_nx+B_n))\rightarrow 0.~\) Now, let \(\acute{R}=\frac{\acute{r}}{n}\) and \(\acute{r}\sim k^2n^\sigma \) be any Chibisov rank sequence, where \(k>0\) and \(0<\sigma <1.\) Then, (1.3) implies that \(G_F(A_nx+B_n;a)\sim \frac{(-\log (1-F(A_nx+B_n)))^a}{a\Gamma (a)}\sim \frac{F^a(A_nx+B_n)}{a\Gamma (a)}\sim \frac{R^a}{a\Gamma (a)},\) and a direct computation then shows that, for \(a\ne 1,\) the quantity \(\sqrt{n}\,\frac{G_F(A_nx+B_n;a)-\acute{R}}{\sqrt{\acute{R}}}\) cannot converge to \({{\mathcal {N}}}^{-1}(\Psi (x))\) for any non-degenerate df \(\Psi ,\) whatever the Chibisov rank sequence \(\acute{r}\) may be. In view of Chibisov's result [12], this completes the proof of the first part of the theorem. \(\square \)

For proving the second part, we first show that the rank \(r^*=nR^*=n(1-(1-R^a)^b)\) is a Chibisov rank, i.e., \(r^*\sim \kappa ^2n^\rho ,~0<\rho <1, \kappa >0.\) Indeed, in view of (1.6), we have \(r^*=nR^*\sim n~b~{R^a}\sim \ell ^{2a}~b~n^{1-a(1-\theta )}.\) Thus, \(\kappa =\sqrt{b} \ell ^a>0\) and \(0<\rho =1-a(1-\theta )<1,\) whenever the condition \(0<a<(1-\theta )^{-1}\) is satisfied. Now, let \(F(A_nx+B_n)\in {{\mathcal {D}}}_{r}(\Psi ).\) Then due to Chibisov’s result [12], we get \(\sqrt{n}\frac{F(A_nx+B_n)-R}{\sqrt{R}}\rightarrow \mathcal { N}^{-1}(\Psi (x)),\) as \(n\rightarrow \infty .\) Thus, we get \(F(A_nx+B_n)=R+x_n,\) where \(x_n=\sqrt{\frac{R}{n}}\mathcal { N}^{-1}(\Psi (x))(1+\circ (1))\rightarrow 0,\) as \(n\rightarrow \infty .\) Therefore, by applying (1.8) and (1.6), we get

$$\begin{aligned}&\sqrt{n}\,\frac{K_{F}(A_nx+B_n;a,b)-R^*}{\sqrt{R^*}}\nonumber \\&\quad \sim \sqrt{n}\,\frac{ab(F(A_{n}x+B_{n})-R)R^{a-1}(1-R^a)^{b-1}}{\sqrt{R^*}}\nonumber \\&\quad =\sqrt{n}\frac{ab\sqrt{\frac{R}{n}}{{\mathcal {N}}}^{-1}(\Psi (x))(1+\circ (1))~R^{a-1}~(1-R^a)^{b-1}}{\sqrt{b~R^a}}\nonumber \\&\quad =a\sqrt{b}{{\mathcal {N}}}^{-1}(\Psi (x))(1+\circ (1))R^{\frac{a}{2}-\frac{1}{2}}(1-R^a)^{b-1}\longrightarrow \sqrt{b}{{\mathcal {N}}}^{-1}(\Psi (x)),\nonumber \\&\qquad ~\text{ only } \text{ if }~a=1; \end{aligned}$$

otherwise (i.e., for \(a\ne 1\)), \(\sqrt{n}\,\frac{K_{F}(A_nx+B_n;a,b)-R^*}{\sqrt{R^*}}\) has only a degenerate limit (it equals \(0\) for \(a>1,\) while it diverges for \(a<1\)). This completes the proof of the second part of the theorem. \(\square \)

Example 3.2

It is well known (see, [12]) that \({{\mathcal {N}}}\in {{\mathcal {D}}}_r({{\mathcal {N}}}),\) for any Chibisov rank sequence r. On the other hand, in view of the relation (1.6), we have \(R^*=1-(1-R)^b\sim bR,\) where \(R=\frac{r}{n}\) and \(R^*=\frac{r^*}{n},\) then \(r^*\) is a Chibisov rank sequence if and only if r is a Chibisov rank sequence. Thus, an application of Theorem  3.2 yields that \(K_{{{\mathcal {N}}}}(x;1,b)=1-{{\mathcal {N}}}^b(-x)\in {{\mathcal {D}}}_{r^*}({{\mathcal {N}}}),\) for any Chibisov rank sequence \(r^*.\)

4 Asymptotic Distribution of Record Values

The upper and lower record value sequences \(\{R_n\}\) and \(\{L_n\}\) can be defined by \(R_n=X_{N_n}\) and \(L_n=X_{M_n},\) respectively, where \(N_n=\min \{j:j>N_{n-1},X_j>X_{N_{n-1}}\},~n>1,\) and \(M_n=\min \{j:j>M_{n-1},X_j<X_{M_{n-1}}\},~ n>1\) (note that \(N_1=M_1=1\)) are the upper and lower record time sequences, respectively. A df F is said to belong to the domain of upper (lower) record value attraction of a non-degenerate df \(\Psi \) \((\Psi ^{*})\) and we write \(F \in {{\mathcal {D}}}_\mathrm{{urec}}(\Psi )\) \((F\in \mathcal { D}_\mathrm{{lrec}}(\Psi ^{*}))\) if there exist normalizing constants \(A_n>0\) and \(B_n\) \((C_n>0\) and \(D_n)\) such that \(P(R_n\le A_n x+ B_n)\longrightarrow \Psi (x)~(P(L_n\le C_nx+D_n)\longrightarrow \Psi ^*(x)),\) for all continuity points of \(\Psi (x)\) \((\Psi ^*(x))\) (again, when our attention is focused on some specific normalizing constants \(A_n>0\) and \(B_n\) \((C_n>0\) and \(D_n),\) we use the notation \(F(A_nx+B_n)\in {{\mathcal {D}}}_\mathrm{{urec}}(\Psi )\) \((F(C_nx+D_n) \in \mathcal { D}_\mathrm{{lrec}}(\Psi ^{*}))\)). It is well known that \(\Psi (x)\) has only one of the types (3.3). Moreover, \(\Psi ^*(x)=1-\Psi (-x),\) see [6].
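
The recursive definition of the record times translates directly into code; the following minimal sketch (with a standard normal input sequence chosen only for illustration) extracts the upper record times and record values of a simulated sequence.

```python
import numpy as np

rng = np.random.default_rng(3)

def upper_records(x):
    """Upper record times N_n and record values R_n of a sequence x
    (by convention, X_1 is a record, so N_1 = 1)."""
    times, values = [1], [x[0]]
    for j in range(1, len(x)):
        if x[j] > values[-1]:
            times.append(j + 1)          # 1-based record time
            values.append(x[j])
    return np.array(times), np.array(values)

x = rng.standard_normal(10_000)
N, R = upper_records(x)
print(len(R))     # the expected number of records among 10_000 i.i.d. terms is about log(10_000) + 0.577
print(N[:5], R[:5])
```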

Lemma 4.1

(See [6]). Let \(U_{n:F}(x)=-\log (1-F(x))\) and \(V_{n:F}(x)=-\log F(x).\) Then

  1. 1.

    \(F(A_nx+B_n)\in {{\mathcal {D}}}_\mathrm{{urec}}(\Psi (x))\) if and only if \(\frac{U_{n:F}(A_nx+B_n)-n}{\sqrt{n}}\longrightarrow \mathcal { N}^{-1}(\Psi (x));\)

  2. 2.

    \(F(C_nx+D_n)\in {{\mathcal {D}}}_\mathrm{{lrec}}(\Psi ^*(x))\) if and only if \(\frac{V_{n:F}(C_nx+D_n)-n}{\sqrt{n}}\longrightarrow \mathcal { N}^{-1}(\Psi ^*(x)).\)
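
Lemma 4.1 can be illustrated analytically with the Weibull df \(W(x)=1-e^{-x^c},\) for which \(U_{n:W}(x)=x^c;\) with \(B_n=n^{1/c}\) and \(A_n=(n+\sqrt{n})^{1/c}-n^{1/c}\) (the constants that reappear in item 1 of Example 4.1 below), the quantity \(\frac{U_{n:W}(A_nx+B_n)-n}{\sqrt{n}}\) tends to \(x={{\mathcal {N}}}^{-1}(\Psi _3(x)).\) The shape parameter and the evaluation grid in the sketch are illustrative.

```python
import numpy as np

c = 2.0                                    # illustrative Weibull shape
U = lambda x: x ** c                       # U_{n:W}(x) = -log(1 - W(x)) = x^c

x_grid = np.array([-1.0, 0.0, 0.5, 2.0])
for n in (10 ** 3, 10 ** 6, 10 ** 9):
    B = n ** (1.0 / c)                                 # B_n = n^{1/c}
    A = (n + np.sqrt(n)) ** (1.0 / c) - B              # A_n = (n + sqrt(n))^{1/c} - n^{1/c}
    print(np.round((U(A * x_grid + B) - n) / np.sqrt(n), 4))   # -> x = N^{-1}(Psi_3(x))
```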

Theorem 4.1

For any df F,  we have the following:

  1. 1.

    \(F(A_nx+B_n)\in {{\mathcal {D}}}_\mathrm{{urec}}(\Psi (x))\) implies \(G_F(A_nx+B_n;a)\in {{\mathcal {D}}}_\mathrm{{urec}}(\Psi (x)).\)

  2. 2.

    \(F(C_nx+D_n)\in {{\mathcal {D}}}_\mathrm{{lrec}}(\Psi ^*(x))\) implies \(G_F\big (C_{\left[ \frac{n}{a}\right] }x+D_{\left[ \frac{n}{a}\right] };a\big )\in {{\mathcal {D}}}_\mathrm{{lrec}}(\Psi ^*(x)).\) More specifically, \(F(C_nx+D_n)\in {{\mathcal {D}}}_\mathrm{{lrec}}(\Psi _i^*(x;\alpha ))\) implies \(G_F\big (C_{\left[ \frac{n}{a}\right] }x+D_{\left[ \frac{n}{a}\right] };a\big )\in {{\mathcal {D}}}_\mathrm{{lrec}}(\Psi _i^*(x;\sqrt{a}\alpha )),~i=1,2,\) and \(F(C_nx+D_n)\in {{\mathcal {D}}}_\mathrm{{lrec}}(\Psi _3^*(x))\) implies \(G_F\big (C_{\left[ \frac{n}{a}\right] }x+D_{\left[ \frac{n}{a}\right] };a\big )\in {{\mathcal {D}}}_\mathrm{{lrec}}(\Psi _3^*(\sqrt{a}x)).\)

  3. 3.

    \(F(A_nx+B_n)\in {{\mathcal {D}}}_\mathrm{{urec}}(\Psi (x))\) implies \(K_F(A_nx+B_n;a,b)\in {{\mathcal {D}}}_\mathrm{{urec}}(\Psi (x)),\) only if \(b=1.\)

  4. 4.

    \(F(C_nx+D_n)\in {{\mathcal {D}}}_\mathrm{{lrec}}(\Psi ^*(x))\) implies \(K_F(C_nx+D_n;a,b)\in {{\mathcal {D}}}_\mathrm{{lrec}}(\Psi ^*(x)),\) only if \(a=1.\)

Proof

If \(F(A_nx+B_n)\in {{\mathcal {D}}}_\mathrm{{urec}}(\Psi (x)),\) then in view of Lemma 4.1, we get \(\frac{U_{n:F}(A_nx+B_n)-n}{\sqrt{n}}=\frac{-\log (1-F(A_nx+B_n))-n}{\sqrt{n}}\rightarrow {{\mathcal {N}}}^{-1}(\Psi (x)),\) which implies \(-\log (1-F(A_nx+B_n))=n+\sqrt{n}~\mathcal { N}^{-1}(\Psi (x))(1+\circ (1))\rightarrow \infty .\) Therefore,(1.4) yields \(1-G_{F}(A_nx+B_n;a)\sim \frac{(1-F(A_{n}x+B_n))~(-\log (1-F(A_{n}x+B_n)))^{a-1}}{\Gamma (a)},\) as \(n\rightarrow \infty .\) Thus, we get

$$\begin{aligned}&\lim _{n\rightarrow \infty }\frac{U_{n:G_{F}}(A_nx+B_n)-n}{\sqrt{n}} =\lim _{n\rightarrow \infty }\frac{-\log (1-G_{F}(A_nx+B_n;a))-n}{\sqrt{n}}\nonumber \\&\quad =\lim _{n\rightarrow \infty }\frac{-\log \left( \frac{(1-F(A_{n}x+B_n))(-\log (1-F(A_{n}x+B_n)))^{a-1}}{\Gamma (a)}(1+\circ (1))\right) -n}{\sqrt{n}}\nonumber \\&\quad =\lim _{n\rightarrow \infty }\left( \frac{-\log (1-F(A_{n}x+B_n))-n}{\sqrt{n}}+\frac{\log \Gamma (a)-\log (1+\circ (1))}{\sqrt{n}}\right. \nonumber \\&\qquad \left. -\frac{(a-1)\log n+(a-1)\log (1+\frac{1}{\sqrt{n}}\mathcal { N}^{-1}(\Psi (x))(1+\circ (1)))}{\sqrt{n}}\right) \nonumber \\&\quad = {{\mathcal {N}}}^{-1}(\Psi (x)),\quad ~\text{ as }~n\rightarrow \infty . \end{aligned}$$

This completes the proof of the first part of the theorem.\(\square \)

Now, if \(F(C_nx+D_n)\in {{\mathcal {D}}}_\mathrm{{lrec}}(\Psi ^{*}(x)),\) then in view of Lemma 4.1, we get \(\frac{V_{n:F}(C_nx+D_n)-n}{\sqrt{n}}=\frac{-\log F(C_nx+D_n)-n}{\sqrt{n}}\rightarrow {{\mathcal {N}}}^{-1}(\Psi ^*(x)),\) which implies \(\frac{V_{\left[ \frac{n}{a}\right] :F}\left( C_{\left[ \frac{n}{a}\right] }x+D_{\left[ \frac{n}{a}\right] }\right) - \left[ \frac{n}{a}\right] }{\sqrt{\left[ \frac{n}{a}\right] }}= \frac{-\log F\left( C_{\left[ \frac{n}{a}\right] }x+D_{\left[ \frac{n}{a}\right] }\right) - \left[ \frac{n}{a}\right] }{\sqrt{\left[ \frac{n}{a}\right] }}\rightarrow \mathcal { N}^{-1}(\Psi ^*(x)).\) This leads to

$$\begin{aligned} \log F\left( C_{\left[ \frac{n}{a}\right] }x+D_{\left[ \frac{n}{a}\right] }\right) =-\left[ \frac{n}{a}\right] -\sqrt{\left[ \frac{n}{a}\right] }\mathcal { N}^{-1}(\Psi ^*(x))(1+\circ (1)), \end{aligned}$$

i.e., \(F\left( C_{\left[ \frac{n}{a}\right] }x+D_{\left[ \frac{n}{a}\right] }\right) \rightarrow 0,\) or equivalently \(-\log \left( 1-F\left( C_{\left[ \frac{n}{a}\right] }x+D_{\left[ \frac{n}{a}\right] }\right) \right) \rightarrow 0.\) Therefore, by applying (1.3), we get \(G_{F}\left( C_{\left[ \frac{n}{a}\right] }x+D_{\left[ \frac{n}{a}\right] };a\right) \sim \frac{\left( -\log \left( 1-F\left( C_{\left[ \frac{n}{a}\right] }x+D_{\left[ \frac{n}{a}\right] }\right) \right) \right) ^a}{a\Gamma (a)},\) as \(n\rightarrow \infty .\) Thus, we get

$$\begin{aligned}&\lim _{n\rightarrow \infty }\frac{V_{n:G_{F}}\left( C_{\left[ \frac{n}{a}\right] }x+D_{\left[ \frac{n}{a}\right] }\right) -n}{ \sqrt{n}}\nonumber \\&\quad =\lim _{n\rightarrow \infty } \frac{-\log \left( \frac{\left( -\log \left( 1-F\left( C_{\left[ \frac{n}{a}\right] }x+ D_{\left[ \frac{n}{a}\right] }\right) \right) \right) ^a}{a\Gamma (a)}(1+\circ (1))\right) -n}{\sqrt{n}}\nonumber \\&\quad =\sqrt{a}\lim _{n\rightarrow \infty }\frac{-\log \left( -\log \left( 1-F\left( C_{\left[ \frac{n}{a}\right] }x+ D_{\left[ \frac{n}{a}\right] }\right) \right) \right) -\frac{n}{a}}{\sqrt{\frac{n}{a}}}\nonumber \\&\qquad +\lim _{n\rightarrow \infty }\frac{\log a\Gamma (a)-\log (1+\circ (1))}{\sqrt{n}}\nonumber \\&\quad =\sqrt{a}\lim _{n\rightarrow \infty }\frac{-\log F(C_{\left[ \frac{n}{a}\right] }x+D_{\left[ \frac{n}{a}\right] })- \left[ \frac{n}{a}\right] }{\sqrt{\left[ \frac{n}{a}\right] }}=\sqrt{a}\mathcal { N}^{-1}(\Psi ^*(x)). \end{aligned}$$

This completes the proof of the second part of the theorem.\(\square \)

Now, if \(F\in {{\mathcal {D}}}_\mathrm{{urec}}(\Psi (x)),\) then in view of Lemma 4.1, we get \(\frac{U_{n:F}(A_nx+B_n)-n}{\sqrt{n}}\rightarrow \mathcal { N}^{-1}(\Psi (x)),\) which implies \(\log (1-F(A_nx+B_n))=-n-\sqrt{n}{{\mathcal {N}}}^{-1}(\Psi (x))(1+\circ (1)),\) i.e., \(F(A_nx+B_n)\rightarrow 1,\) as \(n\rightarrow \infty .\) Therefore, by applying (1.7), we get \(1-K_{F}(A_nx+B_n;a,b)\sim a^{b}(1-F(A_{n}x+B_{n}))^b,\) as \(n\rightarrow \infty .\) Thus, we get

$$\begin{aligned}&\lim _{n\rightarrow \infty }\frac{U_{n:K_{F}}(A_nx+B_n)-n}{\sqrt{n}}\nonumber \\&\quad =\lim _{n\rightarrow \infty }\frac{-\log (a^{b}(1-F(A_{n}x+B_{n}))^b)-n}{\sqrt{n}}\nonumber \\&\quad =\lim _{n\rightarrow \infty }\frac{-b\log a-b(-n-\sqrt{n}~{{\mathcal {N}}}^{-1}(\Psi (x))(1+\circ (1)))-n}{\sqrt{n}}\nonumber \\&\quad =\lim _{n\rightarrow \infty }\frac{-b\log a+n(b-1)+b~\sqrt{n}~{{\mathcal {N}}}^{-1}(\Psi (x))(1+\circ (1))}{\sqrt{n}}\nonumber \\&\quad = \left\{ \begin{array}{lll} {{\mathcal {N}}}^{-1}(\Psi (x)),&{}\quad \text{ if }\quad b=1,\\ \infty ,&{}\quad \text{ if }\quad b>1,\\ -\infty ,&{}\quad \text{ if }\quad 0<b<1.\\ \end{array} \right. \end{aligned}$$

This completes the proof of the third part of the theorem.\(\square \)

Finally, assume that \(F\in {{\mathcal {D}}}_\mathrm{{lrec}}(\Psi ^{*}(x)),\) then in view of Lemma 4.1, we get \(\frac{V_{n:F}(C_nx+D_n)-n}{\sqrt{n}}=\frac{-\log F(C_nx+D_n)-n}{\sqrt{n}}\rightarrow {{\mathcal {N}}}^{-1}(\Psi ^*(x)),\) which implies \(\log F(C_nx+D_n)=-n-\sqrt{n}{{\mathcal {N}}}^{-1}(\Psi ^*(x))(1+\circ (1)),\) i.e., \(F(C_nx+D_n)\rightarrow 0,\) as \(n\rightarrow \infty .\) Therefore, by applying (1.6) we get \(K_{F}(C_nx+D_n;a,b)\sim bF^a(C_nx+D_n),\) as \(n\rightarrow \infty .\) Thus, we get

$$\begin{aligned}&\lim _{n\rightarrow \infty }\frac{V_{n:K_{F}}(C_nx+D_n)-n}{\sqrt{n}}\nonumber \\&\quad =\lim _{n\rightarrow \infty }\frac{-\log (b~F^a(C_nx+D_n))-n}{\sqrt{n}}\nonumber \\&\quad =\lim _{n\rightarrow \infty }\frac{-\log b-a(-n-\sqrt{n}{{\mathcal {N}}}^{-1}(\Psi ^*(x))(1+\circ (1)))-n}{\sqrt{n}}\nonumber \\&\quad =a{{\mathcal {N}}}^{-1}(\Psi ^*(x))+\lim _{n\rightarrow \infty }\frac{(a-1)n}{\sqrt{n}}= \left\{ \begin{array}{ll} {{\mathcal {N}}}^{-1}(\Psi ^*(x)),&{}\quad \text{ if }\quad a=1,\\ \infty ,&{}\quad \text{ if }\quad a>1,\\ -\infty ,&{}\quad \text{ if }\quad 0<a<1,\\ \end{array} \right. \end{aligned}$$

as \(n\rightarrow \infty .\) This completes the proof of the last part of the theorem.\(\square \)

Example 4.1

  1. 1.

    It is well known that (see [5]) the Weibull df \(W(x)=(1-e^{-x^c})\in {{\mathcal {D}}}_\mathrm{{urec}}(\Psi _3),~ c,x>0,\) with \(A_{n}=(n+\sqrt{n})^{\frac{1}{c}}-n^{\frac{1}{c}}\) and \(B_{n}=n^{\frac{1}{c}}\) (note that \(\Psi _3={{\mathcal {N}}}\)). Therefore, in view of Theorem 4.1, we get \(G_{W}(A_nx+B_n;a),W^a(A_nx+B_n)\in {{\mathcal {D}}}_\mathrm{{urec}}({{\mathcal {N}}})\) (a numerical check of this case is sketched after this list).

  2. 2.

    It is well known that (see [5]) the Logistic df \(L(x)=(\frac{e^x}{1+e^x})\in {{\mathcal {D}}}_\mathrm{{urec}}({{\mathcal {N}}}),\) with \(A_{n}=\log (e^{n+\sqrt{n}}-1)-\log (e^{n}-1)\) and \(B_{n}=\log (e^{n}-1).\) Therefore, in view of Theorem 4.1, we get \(G_{L}(A_nx+B_n;a),L^a(A_nx+B_n)\in {{\mathcal {D}}}_\mathrm{{urec}}({{\mathcal {N}}}).\)

  3. 3.

    It is well known that (see [5]) the standard normal df \({{\mathcal {N}}}\in {{\mathcal {D}}}_\mathrm{{urec}}({{\mathcal {N}}}),\) with \(A_{n}=\frac{(n+\sqrt{n})^2}{2}+\log (n+\sqrt{n})-\frac{n^2}{2}-\log {n}\) and \(B_{n}=\frac{n^2}{2}+\log {n}.\) Therefore, in view of Theorem 4.1, we get \(G_{{{\mathcal {N}}}}(A_{n}x+B_n;a),{{\mathcal {N}}}^a(A_{n}x+B_n)\in {{\mathcal {D}}}_\mathrm{{urec}}({{\mathcal {N}}}).\)

  4. 4.

    It is well known that (see [5]) the df \({{\mathcal {F}}}(x;\alpha )=(1-e^{-\frac{\alpha ^2}{4}(\log x)^2})\in {{\mathcal {D}}}_\mathrm{{urec}}(\Psi _1(x;\alpha )),~\alpha >0,1<x<\infty ,\) with \(A_n=e^{\frac{2}{\alpha }\sqrt{n}}\) and \(B_{n}=0.\) Therefore, in view of Theorem 4.1, we get \(G_{{{\mathcal {F}}}}(A_nx+B_n;a,\alpha ),{{\mathcal {F}}}^a(A_nx+B_n;\alpha )\in {{\mathcal {D}}}_\mathrm{{urec}}(\Psi _1(x;\alpha )).\)

  5. 5.

    It is well known that (see [5]) the df \({{\mathcal {L}}}(x;\alpha ,\delta )=(1-e^{-\frac{\alpha ^2}{4}(\log (\delta -x))^2})\in {{\mathcal {D}}}_\mathrm{{urec}}(\Psi _2(x;\alpha )),~\delta -1\le x<\delta <\infty ,\) with \(A_n=e^{-\frac{2}{\alpha }\sqrt{n}}\) and \(B_n=\delta .~\) Therefore, in view of Theorem 4.1, we get \(G_{{{\mathcal {L}}}}(A_nx+B_n;a), {{\mathcal {L}}}^a(A_nx+B_n;\alpha ,\delta ) \in {{\mathcal {D}}}_\mathrm{{urec}}(\Psi _2(x;\alpha )).\)
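
Item 1 of Example 4.1 can also be checked by simulation, using the representation of the nth upper record value of a df G as \(G^{-1}(1-e^{-S_n}),\) where \(S_n\) is gamma distributed with shape parameter n (this is the relation stated below (1.1), applied with \(a=n\)). In the following sketch the Weibull shape c, the parameter a, and the record index n are illustrative choices; as noted in the comments, the \(G_W\) case approaches its limit slowly.

```python
import numpy as np
from scipy.special import gammainccinv

rng = np.random.default_rng(4)

c, a, n, reps = 2.0, 3.0, 400, 50_000   # illustrative Weibull shape, gamma parameter, record index

# n-th upper record value of a df G: R_n = G^{-1}(1 - exp(-S_n)) with S_n ~ Gamma(n, 1).
S = rng.gamma(n, size=reps)

R_W = S ** (1.0 / c)                                  # base Weibull W(x) = 1 - exp(-x^c)
R_K = (-np.log(-np.expm1(np.log1p(-np.exp(-S)) / a))) ** (1.0 / c)   # K_W(.; a, 1) = W^a
R_G = gammainccinv(a, np.exp(-S)) ** (1.0 / c)        # G_W(x; a) = Gamma_{x^c}(a)

A = (n + np.sqrt(n)) ** (1.0 / c) - n ** (1.0 / c)    # A_n of item 1
B = n ** (1.0 / c)                                    # B_n of item 1

for name, R in (("W", R_W), ("W^a", R_K), ("G_W", R_G)):
    z = (R - B) / A
    print(name, np.round(z.mean(), 2), np.round(z.std(), 2))
# W and W^a are already close to mean 0 and sd 1; the G_W case carries a slowly vanishing
# shift of order (a - 1) * log(n) / sqrt(n) (visible in the proof of Theorem 4.1),
# so its sample mean is still clearly positive at n = 400.
```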

5 Concluding Remarks

In this paper, the asymptotic behavior of the order statistics and record values based on the gamma and Kw-generated-distributions families is studied. We show that the weak convergence of any upper (lower) record value, or central order statistic, based on a base distribution F, to a non-degenerate limit type implies the weak convergence of the corresponding statistics based on the gamma-generated-distributions family \(G_F\) to non-degenerate limit types. The relations between the two limit types are deduced. Moreover, it is shown that the weak convergence of the extreme, or intermediate, order statistics based on F to non-degenerate limits implies the convergence of those statistics based on \(G_F\) to degenerate limits. Finally, it is shown that the asymptotic behaviors of the order statistics and record values based on the Kw and beta-generated-distributions families are the same.