
2.1 Introduction

In life-testing and reliability experiments, it is natural to compare several treatments with a standard treatment (control). For example, a manufacturer of electronic components may wish to compare (k−1) new production processes with the standard process and then determine whether any of these new processes would produce more reliable components than the standard process. In many cases, the costs of production for the new processes are relatively high because they are under development, and so it would be desirable to have a statistical test procedure which allows the experimenter to make a decision early on in the life-test.

The precedence test, first proposed by Nelson (1963), is a distribution-free two-sample life-test (i.e., a special case when k = 2) based on the order of early failures. Assume that a random sample of n 1 units from distribution F X and another independent sample of n 2 units from distribution F Y are placed simultaneously on a life-testing experiment. Suppose the null hypothesis is that the two lifetime distributions are equal, and the alternative hypothesis of interest is that one distribution is stochastically larger than the other, say, F X is stochastically larger than F Y . This alternative corresponds to the situation wherein the Y -units are more reliable than the X-units. The experiment is terminated as soon as the r-th failure from the Y -sample is observed. Then, the precedence test statistic P (r) is defined simply as the number of failures from the X-sample that precede the r-th failure from the Y -sample. It is obvious that large values of P (r) lead to the rejection of the hypothesis that F X = F Y in favor of the above-mentioned alternative hypothesis. The precedence test is useful (i) when a life-test involves expensive units, since units that have not failed can be used for other testing purposes, and (ii) when quick and reliable decisions need to be made early in the life-testing experiment. Many authors have studied the power properties of the precedence test and have also proposed some alternative tests; see, for example, Eilbott and Nadler (1965), Shorack (1967), Nelson (1986, 1993), Lin and Sukhatme (1992), Balakrishnan and Frattina (2000), Balakrishnan and Ng (2001), Ng and Balakrishnan (2002, 2004), and van der Laan and Chakraborti (2001). A brief review of all these precedence-type tests is first presented in Section 2.2, while an elaborate discussion of precedence-type tests and their variants can be found in the review articles by Chakraborti and van der Laan (1996, 1997) and also in the recent book by Balakrishnan and Ng (2006).

In this work, different precedence-type test procedures are proposed for the k-sample problem. Specifically, suppose we have (k−1) treatments that we wish to compare with a control, or (k−1) new processes that we wish to compare with the standard process. With F 1(x) denoting the lifetime distribution associated with the control (or the standard process) and F i+1(x) denoting the lifetime distribution associated with the i-th treatment (or the i-th new process) for \(i = 1,2, \ldots ,k - 1\), our null hypothesis is simply

$$\begin{array}{rcl}{ H}_{0} : {F}_{1}(x) = {F}_{2}(x) = \mathrel{\cdots } = {F}_{k}(x)\mbox{ for all }x.& & \end{array}$$
(2.1)

We are specifically concerned with a stochastically ordered alternative of the form

$$\begin{array}{rcl} & {H}_{1} :\{ {F}_{2}(x) \leq{F}_{1}(x)\} \cup \{ {F}_{3}(x) \leq{F}_{1}(x)\} \cup \mathrel{\cdots } \cup \{ {F}_{k}(x) \leq{F}_{1}(x)\}\mbox{ for all }x,& \\ & \qquad \qquad \qquad \qquad \mbox{ with at least one holding strictly for some }x. & \end{array}$$
(2.2)

Suppose k independent random samples of sizes \({n}_{1},{n}_{2}, \ldots ,{n}_{k}\) from F 1(x), \({F}_{2}(x), \ldots ,{F}_{k}(x)\), respectively, are placed simultaneously on a life-testing experiment. The experiment is terminated as soon as the r-th failure from F 1(x) is observed. Then, the number of failures from F i (x), \(i = 2, \ldots ,k\), in between the failures from F 1(x) are counted and their functions are used as test statistics for testing the hypothesis in (2.1).

The chapter is organized as follows. In Section 2.2, we review some results on the precedence-type tests which are considered in the subsequent sections. In Section 2.3, we propose the precedence-type tests, which include tests based on the precedence, weighted maximal precedence and minimum Wilcoxon rank-sum precedence test statistics, for testing the hypothesis in (2.1). The exact null distributions of the proposed test statistics are derived in Section 2.3, and critical values for some selected choices of sample sizes are also tabulated. Exact power properties of these tests under Lehmann alternatives are derived in Section 2.4. We then compare the power properties of the proposed precedence-type tests under Lehmann alternatives. Finally, an example is presented to illustrate all the tests discussed here.

2.2 Review of Precedence-Type Tests

The precedence-type test allows a simple and robust comparison of two distribution functions. Suppose there are two failure time distributions F X and F Y and that we are interested in testing

$$\begin{array}{rcl}{ H}_{0}^{{_\ast}} : {F}_{ X} = {F}_{Y }\ \ \mbox{ against}\ \ {H}_{1}^{{_\ast}} : {F}_{ X} > {F}_{Y }.& & \end{array}$$
(2.3)

Note that some specific alternatives such as the location-shift alternative and the Lehmann alternative are subclasses of the stochastically ordered alternative considered in (2.3).

Assume that a random sample of n 1 units from distribution F X and another independent sample of n 2 units from distribution F Y are placed simultaneously on a life-testing experiment. Let \({X}_{1}, \ldots ,{X}_{{n}_{1}}\) denote the sample from F X , and \({Y }_{1}, \ldots ,{Y }_{{n}_{2}}\) denote the sample from F Y . Let us denote the order statistics from the X- and Y -samples by \({X}_{1:{n}_{1}} \leq \mathrel{\cdots } \leq{X}_{{n}_{1}:{n}_{1}}\) and \({Y }_{1:{n}_{2}} \leq \mathrel{\cdots } \leq{Y }_{{n}_{2}:{n}_{2}}\), respectively. Further, let M 1 denote the number of X-failures before \({Y }_{1:{n}_{2}}\) and M i the number of X-failures between \({Y }_{i-1:{n}_{2}}\) and \({Y }_{i:{n}_{2}}\), \(i = 2,3, \ldots ,r\). Figure 2.1 gives a schematic representation of this precedence setup.

Note here that the idea of the precedence-type test is closely related to that of a run, which is defined as an uninterrupted sequence of identical symbols. Wald and Wolfowitz (1940) used runs to establish a two-sample test for testing the hypothesis in (2.3). They suggested that one should combine the two samples, arrange the n 1+n 2 observations in increasing order of magnitude, and replace each ordered value by 0 or 1 according to whether it originated from the X-sample or the Y -sample, respectively. For example, in Figure 2.1, we have the binary sequence (1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1). Then, the total number of runs in that binary sequence is used as a test statistic for testing the hypothesis in (2.3). Instead of using the number of runs in the binary sequence, the precedence-type tests use the lengths of the runs of 0’s (i.e., M i , \(i = 1, \ldots ,{n}_{2}\)) and their functions as test statistics for testing the hypotheses in (2.3). For extensive reviews on runs and applications, one may refer to Balakrishnan and Koutras (2002) and Fu and Lou (2003).
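In code, the run lengths M 1, …, M r can be read off the Wald–Wolfowitz binary sequence directly; the following is a minimal Python sketch (the function name is ours, not from the literature), applied to the sequence from Figure 2.1.

```python
def x_runs_before_y(sequence):
    """Given the Wald-Wolfowitz binary sequence (0 = X-failure, 1 = Y-failure),
    return [M_1, M_2, ...]: the number of 0's before the first 1, between the
    first and second 1's, and so on."""
    counts, current = [], 0
    for b in sequence:
        if b == 1:       # a Y-failure closes the current run of X-failures
            counts.append(current)
            current = 0
        else:            # an X-failure extends the current run
            current += 1
    return counts

# The sequence read off Figure 2.1:
seq = [1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1]
print(x_runs_before_y(seq))   # [0, 3, 4, 1], i.e., M_1, ..., M_4
```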

Fig. 2.1

Schematic representation of a precedence life-test.

2.2.1 Precedence test

The precedence test statistic P (r) is defined simply as the number of failures from the X-sample that precede the r-th failure from the Y -sample, i.e.,

$${P}_{(r)} = \sum \limits_{j = 1}^{r}{M}_{ j}.$$

Large values of P (r) lead to the rejection of H 0 in favor of H 1 in (2.3). In other words, H 0 is rejected if \({P}_{(r)} \geq s\), where s is the critical value of the precedence test statistic for specific values of n 1,n 2,r and level of significance (α). For example, from Figure 2.1, with r = 4, the precedence test statistic takes on the value \({P}_{(4)} = \sum \limits_{i = 1}^{4}{M}_{i} = 0 + 3 + 4 + 1 = 8\). If we have \({n}_{1} = {n}_{2} = 10\) and we use the precedence test with r = 4, the near 5% critical value will be s = 8 with exact level of significance 0.035, in which case H 0 would be rejected if there were at least 8 failures from the X-sample before the fourth failure from the Y-sample. Therefore, the null hypothesis that the two distributions are equal is rejected based on the precedence test in this example.

From Balakrishnan and Ng (2006, Theorem 4.1), we have the joint probability mass function of \(({M}_{1}, \ldots ,{M}_{r})\), under \({H}_{0}^{{_\ast}} : {F}_{X} = {F}_{Y }\), to be

$$\begin{array}{rcl} & & \Pr \left ({M}_{1} = {m}_{1},{M}_{2} = {m}_{2}, \ldots ,{M}_{r} = {m}_{r}\mathrel{\mid }{H}_{0} : {F}_{X} = {F}_{Y }\right ) \\ & & \qquad \quad \ = \frac{\left (\begin{array}{c} {n}_{1} + {n}_{2} - \sum \limits_{j = 1}^{r}{m}_{j} - r \\ {n}_{2} - r \end{array} \right )} {\left (\begin{array}{c} {n}_{1} + {n}_{2}\\ {n}_{2 } \end{array} \right )}\end{array}$$
(2.4)

The null distribution and critical values of the precedence test statistic P (r) can be readily computed from (2.4). The critical values and their exact levels of significance (as close as possible to 5% and 10%) for different choices of r and the sample sizes n 1 and n 2 are presented, for example, in Balakrishnan and Ng (2006).
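Summing the joint probability mass function in (2.4) over the \(\binom{p+r-1}{r-1}\) vectors \(({m}_{1}, \ldots ,{m}_{r})\) with \({m}_{1} + \cdots + {m}_{r} = p\) yields a closed-form null pmf for P (r). The following Python sketch (function names are ours) uses this to reproduce the near 5% critical value quoted above for \({n}_{1} = {n}_{2} = 10\) and r = 4.

```python
from math import comb

def precedence_pmf(n1, n2, r):
    """Null pmf of P_(r): Pr(P_(r) = p | H0) for p = 0, ..., n1, obtained by
    summing the joint pmf in (2.4) over all vectors (m_1, ..., m_r) with sum p."""
    total = comb(n1 + n2, n2)
    return [comb(p + r - 1, r - 1) * comb(n1 + n2 - p - r, n2 - r) / total
            for p in range(n1 + 1)]

def near_5pct_critical(n1, n2, r, alpha=0.05):
    """Smallest s with Pr(P_(r) >= s | H0) <= alpha, and that exact level."""
    pmf = precedence_pmf(n1, n2, r)
    for s in range(n1 + 1):
        tail = sum(pmf[s:])          # Pr(P_(r) >= s) under H0
        if tail <= alpha:
            return s, tail
    return n1 + 1, 0.0

s, level = near_5pct_critical(10, 10, 4)
print(s, round(level, 3))            # 8 0.035, matching the example above
```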

2.2.2 Weighted maximal precedence test

Balakrishnan and Frattina (2000) observed that a masking effect is present in the precedence test which has an adverse effect on its power properties. The maximal precedence test, proposed by Balakrishnan and Frattina (2000) and Balakrishnan and Ng (2001) specifically to avoid this masking problem, is a test procedure based on the maximum number of failures occurring from the X-sample before the first, between the first and the second, \(\ldots \) , between the (r−1)-th and the r-th failures from the Y -sample. Ng and Balakrishnan (2005) then proposed the weighted maximal precedence test statistic, obtained by giving a decreasing weight to M j as j increases, viz.,

$$\begin{array}{rcl}{ M}_{(r)} { = \max }_{1\leq j\leq r}({n}_{2} - j + 1){M}_{j}.& & \end{array}$$
(2.5)

It is also a test procedure suitable for testing the hypotheses in (2.3), with large values of M (r) leading to the rejection of H 0 in favor of H 1 in (2.3). The null distribution of the weighted maximal precedence test statistic M (r) can also be obtained from (2.4). The critical values and their exact levels of significance (as close as possible to 5% and 10%) for different choices of r and the sample sizes n 1 and n 2 are presented, for example, in Balakrishnan and Ng (2006). For example, if we refer to Figure 2.1, with r = 4 and with \({n}_{1} = {n}_{2} = 10\), the critical value is 42 with exact level of significance 0.043 and the weighted maximal precedence test statistic is \({M}_{(4)} = \max (10 \times0,9 \times3,8 \times4,7 \times1) = \max (0,27,32,7) = 32\). Therefore, the null hypothesis that the two distributions are equal is not rejected based on the weighted maximal precedence test in this example.
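The computation of M (r) in this example amounts to a one-line maximization; a Python sketch (the function name is ours):

```python
def weighted_maximal_precedence(m, n2):
    """M_(r) = max over j = 1, ..., r of (n2 - j + 1) * M_j, as in (2.5)."""
    # enumerate() is 0-based, so the weight n2 - j + 1 becomes n2 - j0
    return max((n2 - j0) * mj for j0, mj in enumerate(m))

m = [0, 3, 4, 1]                          # M_1, ..., M_4 from Figure 2.1
stat = weighted_maximal_precedence(m, n2=10)
print(stat)                               # max(0, 27, 32, 7) = 32
print(stat >= 42)                         # False: 32 < 42, so H0 is not rejected
```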

2.2.3 Minimal Wilcoxon rank-sum precedence test

The Wilcoxon rank-sum test is a well-known nonparametric procedure for testing the hypotheses in (2.3) based on complete samples. If complete samples of sizes n 1 and n 2 are available from F X and F Y , respectively, one can use the classical Wilcoxon rank-sum statistic (Wilcoxon, 1945), which is simply the sum of the ranks of the X-observations in the combined sample.

Ng and Balakrishnan (2002, 2004) proposed the Wilcoxon-type rank-sum precedence tests for testing the hypotheses in (2.3) in the context of precedence test described earlier, i.e., when the Y -sample is Type-II right censored. This test is a variation of the precedence test and a generalization of the Wilcoxon rank-sum test. In order to test the hypotheses in (2.3), instead of using the maximum of the frequencies of failures from the X-sample between the first r failures of the Y -sample, one could use the sum of the ranks of those failures. More specifically, suppose that \({M}_{1},{M}_{2}, \ldots ,{M}_{r}\) denote the number of X-failures that occurred before the first, between the first and the second, \(\ldots \) , between the (r−1)-th and the r-th Y -failures, respectively; see Figure 2.1. Let W be the rank-sum of the X-failures that occurred before the r-th Y -failure. The Wilcoxon’s rank-sum test statistic will be smallest when all the remaining \(\left ({n}_{1} - \sum \limits_{j = 1}^{r}{M}_{j}\right )\) X-failures occur between the r-th and (r+1)-th Y -failures. The test statistic in this case would be

$$\begin{array}{rcl}{ W}_{(r)}& = & W + \left [\left ( \sum \limits_{j = 1}^{r}{M}_{ j} + r + 1\right ) + \left ( \sum \limits_{j = 1}^{r}{M}_{ j} + r + 2\right ) + \mathrel{\cdots } + ({n}_{1} + r)\right ] \\ & = & \frac{{n}_{1}({n}_{1} + 2r + 1)} {2} - \sum \limits_{j = 1}^{r}(r - j + 1){M}_{ j}\end{array}$$

This is called the minimal rank-sum statistic. Note that in the special case of r = n 2 (that is, when we observe a complete sample), \({W}_{({n}_{2})}\) is equivalent to the classical Wilcoxon rank-sum statistic. Small values of W (r) lead to the rejection of H 0 in favor of H 1 in (2.3). The null distribution of the minimal Wilcoxon-type rank-sum precedence test statistic can once again be obtained from (2.4). The critical values and their exact levels of significance (as close as possible to 5% and 10%) for different choices of r and the sample sizes n 1 and n 2 are presented, for example, in Balakrishnan and Ng (2006).

For example, from Figure 2.1, when \({n}_{1} = {n}_{2} = 10\) and r = 4, we have

$$\begin{array}{rcl}{ W}_{(4)}& = & 2 + 3 + 4 + 6 + 7 + 8 + 9 + 11 + 13 + 14 = 77\end{array}$$

and the critical value of the test is 74 with exact level of significance 0.040. Since \({W}_{(4)} = 77 > 74\), the null hypothesis that the two distributions are equal is not rejected based on the minimal Wilcoxon rank-sum precedence test in this example.
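The value W (4) = 77 can be verified both from the closed form above and from the explicit rank sum; a Python sketch (the function name is ours):

```python
def minimal_wilcoxon(m, n1, r):
    """W_(r) = n1(n1 + 2r + 1)/2 - sum over j of (r - j + 1) M_j."""
    weighted = sum((r - j0) * mj for j0, mj in enumerate(m))  # (r - j + 1), 0-based
    return n1 * (n1 + 2 * r + 1) // 2 - weighted

m = [0, 3, 4, 1]                          # M_1, ..., M_4 from Figure 2.1
w = minimal_wilcoxon(m, n1=10, r=4)
print(w)                                  # 95 - 18 = 77

# Cross-check against the explicit rank sum displayed above:
assert w == 2 + 3 + 4 + 6 + 7 + 8 + 9 + 11 + 13 + 14
```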

Ng and Balakrishnan (2002, 2004) observed that the large-sample normal approximation for the null distribution of these statistics is not satisfactory in the case of small or moderate sample sizes. For this reason, they developed an Edgeworth expansion to approximate the significance probabilities. They also derived the exact power function under the Lehmann alternative and examined the power properties of the minimal Wilcoxon-type rank-sum precedence test.

2.3 Test Statistics for Comparing k − 1 Treatments with Control

Suppose k independent random samples of sizes \({n}_{1},{n}_{2}, \ldots ,{n}_{k}\) from F 1(x), \({F}_{2}(x), \ldots ,{F}_{k}(x)\), respectively, are placed simultaneously on a life-testing experiment. When the sample sizes are all equal, we have a balanced case which usually provides a favorable setting for carrying out a precedence-type procedure for testing H 0 in (2.1) against the alternative in (2.2); however, the test can be carried out even in the unbalanced case, although the power of the test may be adversely affected in this case.

A precedence-type test procedure, for this specific testing problem, may be constructed as follows. After pre-fixing an r (≤n 1), the life-test continues until the r-th failure in the sample from the control group. We then observe \({\mbox{ $M$}}_{2} = ({M}_{12},{M}_{22}, \ldots ,{M}_{r2}), \ldots ,{\mbox{ $M$}}_{k} = ({M}_{1k},{M}_{2k}, \ldots ,{M}_{rk})\) from the (k−1) treatments, where \({M}_{1i},{M}_{2i}, \ldots ,{M}_{ri}\) are the numbers of failures in the sample from the (i−1)-th treatment (for \(i = 2,3, \ldots ,k\)) before the first failure, between the first and second failures, \(\ldots \) , and between the (r−1)-th and r-th failures from the control group, respectively. The observed value of \({\mbox{ $M$}}_{i}\) is denoted by \({\mbox{ $m$}}_{i}\), \(i = 2, \ldots ,k\).
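The counting of the M ji 's can be sketched in Python as follows; the lifetimes are simulated exponentials purely for illustration, and continuous distributions (hence no ties) are assumed.

```python
import numpy as np

def count_m(control, treatments, r):
    """Return the r x (k-1) matrix whose (j, i) entry is the number of failures
    from treatment sample i falling between the (j-1)-th and j-th order
    statistics of the control sample (j = 1, ..., r)."""
    edges = np.sort(np.asarray(control))[:r]        # first r control failures
    columns = []
    for sample in treatments:
        s = np.sort(np.asarray(sample))
        below = np.searchsorted(s, edges)           # failures before each edge
        columns.append(np.diff(np.concatenate(([0], below))))
    return np.column_stack(columns)

rng = np.random.default_rng(2024)
control = rng.exponential(1.0, size=10)
treatments = [rng.exponential(2.0, size=10), rng.exponential(3.0, size=10)]
M = count_m(control, treatments, r=4)
print(M.shape)                # (4, 2): rows j = 1, ..., r; one column per treatment
print(M.sum(axis=0))          # the precedence statistics P_(r)i of (2.6)
```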

2.3.1 Tests based on precedence statistic

Let us consider

$$\begin{array}{rcl}{ P}_{(r)i} = \sum \limits_{j = 1}^{r}{M}_{ ji}\quad \mbox{ for }\ i = 2,3, \ldots ,k& & \end{array}$$
(2.6)

for the precedence statistic corresponding to the sample from the (i−1)-th treatment. For convenience of notation, let \({M}_{j\mbox{ $\cdot $}} = \sum \limits_{i = 2}^{k}{M}_{ji}\) and denote its observed value by \({m}_{j\mbox{ $\cdot $}}\), \(j = 1, \ldots ,r\). We may then propose the following precedence-type test statistics:

$$\begin{array}{rcl}{ P}_{1} = \sum \limits_{i = 2}^{k}{P}_{(r)i} = \sum \limits_{i = 2}^{k} \sum \limits_{j = 1}^{r}{M}_{ji} = \sum \limits_{j = 1}^{r}{M}_{j\cdot }& & \end{array}$$
(2.7)

and

$$\begin{array}{rcl}{ P}_{2} { = \min \limits_{2\leq i\leq k}{P}_{(r)i}} { = \min \limits_{2\leq i\leq k}}\left \{ \sum \limits_{j = 1}^{r}{M}_{ ji}\right \}.& & \end{array}$$
(2.8)

The rationale for the use of the statistics in (2.7) and (2.8) is that, under the stochastically ordered alternative H 1 in (2.2), we would expect some of the precedence statistics P (r)i in (2.6) to be too small. Consequently, we will tend to reject H 0 in (2.1) in favor of H 1 in (2.2) for small values of P 1 and P 2, where the critical values can be determined for specific values of k, r, \({n}_{i},i = 1,2, \ldots ,k\), and pre-fixed level of significance α. Specifically, \(\{0 \leq{P}_{1} \leq{c}_{{P}_{1}}\}\) and \(\{0 \leq{P}_{2} \leq{c}_{{P}_{2}}\}\) will serve as critical regions, where \({c}_{{P}_{1}}\) and \({c}_{{P}_{2}}\) are determined such that

$$\begin{array}{rcl} \Pr ({P}_{1} \leq{c}_{{P}_{1}}\vert {H}_{0}) = \alpha \quad \mbox{ and }\quad \Pr ({P}_{2} \leq{c}_{{P}_{2}}\vert {H}_{0}) = \alpha.& & \end{array}$$
(2.9)

The null distributions of the test statistics P 1 and P 2 can be expressed as

$$\begin{array}{rcl} & & \Pr ({P}_{1} = {p}_{1}\vert {H}_{0}) \\ & & \quad = \sum \limits_{{p}_{(r)2} = 0}^{{n}_{2} } \ldots\sum \limits_{{p}_{(r)k} = 0}^{{n}_{k} }\Pr ({P}_{(r)i} = {p}_{(r)i},i = 2, \ldots ,k\vert {H}_{0})I\left ( \sum \limits_{i = 2}^{k}{p}_{ (r)i} = {p}_{1}\right ) \\ & & \end{array}$$
(2.10)

for \({p}_{1} = 0,1, \ldots , \sum \limits_{i = 2}^{k}{n}_{i}\), and

$$\begin{array}{rcl} & & \Pr ({P}_{2} = {p}_{2}\vert {H}_{0}) \\ & & \quad = \sum \limits_{{p}_{(r)2} = 0}^{{n}_{2} } \ldots\sum \limits_{{p}_{(r)k} = 0}^{{n}_{k} }\Pr ({P}_{(r)i} = {p}_{(r)i},i = 2, \ldots ,k\vert {H}_{0})I\left({\min \limits_{2\leq i\leq k}{p}_{(r)i}} = {p}_{2}\right ) \\ & & \end{array}$$
(2.11)

for \({p}_{2} = 0,1, \ldots {,\min \limits_{2\leq i\leq k}{n}_{i}}\), where I(A) is the indicator function defined by

$$\begin{array}{rcl} I(A) = \left \{\begin{array}{ll} 1&\mbox{ if $A$ is true,}\\ 0 &\mbox{ otherwise,} \\ \end{array} \right.& & \\ \end{array}$$

and

$$\begin{array}{rcl} & & \Pr ({P}_{(r)i} = {p}_{(r)i},i = 2, \ldots ,k\vert {H}_{0}) \\ & & \quad = \sum \limits_{{\mbox{ $m$}}_{2}} \ldots\sum \limits_{{\mbox{ $m$}}_{k}}\delta ({\mbox{ $m$}}_{2}, \ldots ,{\mbox{ $m$}}_{k})I\left ( \sum \limits_{j = 1}^{r}{m}_{ ji} = {p}_{(r)i},i = 2, \ldots ,k\right )\end{array}$$
(2.12)

with

$$\begin{array}{rcl} \sum \limits_{{\mbox{m}}_{i}} {\stackrel{def.}{ = }} \sum \limits_{{m}_{1i} = 0}^{{n}_{i} } \sum \limits_{{m}_{2i} = 0}^{{n}_{i}-{m}_{1i} } \ldots\sum \limits_{{m}_{ri} = 0}^{{n}_{i}- \sum_{j = 1}^{r-1}{m}_{ ji}}\ \mbox{ for }i = 2, \ldots ,k\\ \end{array}$$

and \(\delta ({\mbox{ $m$}}_{2}, \ldots ,{\mbox{ $m$}}_{k})\) is the probability mass function of \(({\mbox{ $M$}}_{2}, \ldots ,{\mbox{ $M$}}_{k})\) under H 0 (see Appendix A)

$$\begin{array}{rcl} \delta ({\mbox{ $m$}}_{2}, \ldots ,{\mbox{ $m$}}_{k})& = & \Pr ({\mbox{ $M$}}_{2} = { \mbox{ $m$}}_{2}, \ldots ,{\mbox{ $M$}}_{k} = { \mbox{ $m$}}_{k}\vert {H}_{0} : {F}_{1} = {F}_{2} = \mathrel{\cdots } = {F}_{k}) \\ & = & \frac{1} {\left (\begin{array}{*{10}c} \sum \limits_{i = 1}^{k}{n}_{i} \\ {n}_{1}, \ldots ,{n}_{k} \end{array} \right )}\left \{ \prod _{j = 1}^{r}\left (\begin{array}{*{10}c} {m}_{j\mbox{ $\cdot $}}\\ {m}_{ j2}, \ldots ,{m}_{jk} \end{array} \right )\right \} \\ & & \times \left (\begin{array}{*{10}c} \sum \limits_{i = 1}^{k}{n}_{i} - \sum \limits_{j = 1}^{r}{m}_{j\cdot }- r \\ {n}_{1} - r,{n}_{2} - \sum \limits_{j = 1}^{r}{m}_{j2}, \ldots ,{n}_{k} - \sum \limits_{j = 1}^{r}{m}_{jk} \end{array} \right ), \\ & & \\ \end{array}$$

where

$$\begin{array}{rcl} \left (\begin{array}{*{10}c} {a}_{1} + \mathrel{\cdots } + {a}_{l}\\ {a}_{1 } , \ldots , {a}_{l } \end{array} \right ) = \frac{({a}_{1} + \ldots + {a}_{l})!} {{a}_{1}! \ldots {a}_{l}!}.& & \\ \end{array}$$

From Equations (2.9)–(2.12), the critical values \({c}_{{P}_{1}}\), \({c}_{{P}_{2}}\) and their exact levels of significance as close as possible to α = 5% for k = 3,4 with equal sample sizes \({n}_{1} = \mathrel{\cdots } = {n}_{k} = n\) and r = 4(1)n were computed and are presented in Tables 2.1 and 2.2; similarly, for the unequal sample sizes \({n}_{1} = 10,{n}_{2} = \mathrel{\cdots } = {n}_{k} = 15\); \({n}_{1} = 15,{n}_{2} = \mathrel{\cdots } = {n}_{k} = 20\) and r = 4(1)n 1, the values are presented in Tables 2.3 and 2.4. Due to the heavy computational demand in going through all the possible outcomes, the critical values of the tests discussed in this section were obtained from the exact null distribution for r≤8 and through 20,000,000 Monte Carlo simulations for r>8.
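A small-scale Monte Carlo version of this computation can be sketched as follows; under H 0 all lifetimes may be taken uniform without loss of generality. The function names and the (much smaller) replication count are ours.

```python
import numpy as np

def simulate_p1_p2(n, k, r, reps=5000, seed=0):
    """Draw (P1, P2) under H0: all k samples of size n from one continuous
    distribution (uniform, without loss of generality)."""
    rng = np.random.default_rng(seed)
    p1 = np.empty(reps, dtype=int)
    p2 = np.empty(reps, dtype=int)
    for b in range(reps):
        edge = np.sort(rng.random(n))[r - 1]               # r-th control failure
        counts = [(rng.random(n) < edge).sum() for _ in range(k - 1)]
        p1[b], p2[b] = sum(counts), min(counts)
    return p1, p2

def lower_critical_value(draws, alpha=0.05):
    """Largest c with estimated Pr(stat <= c | H0) <= alpha (small values reject);
    returns -1 when no nonempty critical region attains level alpha."""
    best = -1
    for c in range(int(draws.max()) + 1):
        if (draws <= c).mean() <= alpha:
            best = c
    return best

p1, p2 = simulate_p1_p2(n=10, k=3, r=4)
print(lower_critical_value(p1), lower_critical_value(p2))
```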

Table 2.1 Near 5% critical values and exact levels of significance (l.o.s.) for P 1, P 2, T 1, T 2, W 1 and W 2 with k = 3, \({n}_{1} = {n}_{2} = {n}_{3} = n = 10,15\) and 20.
Table 2.2 Near 5% critical values and exact levels of significance (l.o.s.) for P 1, P 2, T 1, T 2, W 1 and W 2 with k = 4, \({n}_{1} = {n}_{2} = {n}_{3} = {n}_{4} = n = 10,15\) and 20.
Table 2.3 Near 5% critical values and exact levels of significance (l.o.s.) for P 1, P 2, T 1, T 2, W 1 and W 2 with k = 3, n 1 = 10, \({n}_{2} = {n}_{3} = 15\) and n 1 = 15, \({n}_{2} = {n}_{3} = 20\).
Table 2.4 Near 5% critical values and exact levels of significance (l.o.s.) for P 1, P 2, T 1, T 2, W 1 and W 2 with k = 4, n 1 = 10, \({n}_{2} = {n}_{3} = {n}_{4} = 15\) and n 1 = 15, \({n}_{2} = {n}_{3} = {n}_{4} = 20\).

2.3.2 Tests based on weighted maximal precedence statistic

We can proceed similarly and propose weighted maximal precedence-type statistics for the testing problem discussed here. Once again, we terminate the life-test when the r-th failure occurs in the sample from the control group. Then, with \({\mbox{ $M$}}_{i} = ({M}_{1i},{M}_{2i}, \ldots ,{M}_{ri})\), for \(i = 2, \ldots ,k\), being observed from the (k−1) treatments, where M ji denotes the number of failures in the sample from the (i−1)-th treatment between the (j−1)-th and j-th failures from the control group, we may set

$$\begin{array}{rcl}{ M}_{(r)i} { = \max }_{1\leq j\leq r}({n}_{1} - j + 1){M}_{ji}\quad \mbox{ for }\ i = 2,3, \ldots ,k& & \\ \end{array}$$

for the weighted maximal precedence statistic corresponding to the sample from the (i−1)-th treatment. We may then propose the weighted maximal precedence-type test statistics as

$$\begin{array}{rcl}{ T}_{1} = {\sum \limits_{i = 2}^{k}}{M}_{(r)i} = \sum \limits_{i = 2}^{k}{\max }_{ 1\leq j\leq r}({n}_{1} - j + 1){M}_{ji}& & \end{array}$$
(2.13)

and

$$\begin{array}{rcl}{ T}_{2} = \min \limits_{2\leq i\leq k}{M}_{(r)i} = \min \limits_{2\leq i\leq k}\left \{\max \limits_{1\leq j\leq r}({n}_{1} - j + 1){M}_{ji}\right \}.& & \end{array}$$
(2.14)

Here again, the rationale for the use of the statistics in (2.13) and (2.14) is that, under the stochastically ordered alternative H 1 in (2.2), we would expect some of the weighted maximal precedence statistics M (r)i defined above to be too small. Therefore, we would reject H 0 in (2.1) in favor of H 1 in (2.2) for small values of T 1 and T 2, where the critical values can be determined for specific values of k, r, \({n}_{i},i = 1,2, \ldots ,k\), and pre-fixed level of significance α. Specifically, \(\{0 \leq{T}_{1} \leq{c}_{{T}_{1}}\}\) and \(\{0 \leq{T}_{2} \leq{c}_{{T}_{2}}\}\) will serve as critical regions, where \({c}_{{T}_{1}}\) and \({c}_{{T}_{2}}\) are determined such that

$$\begin{array}{rcl} \Pr ({T}_{1} \leq{c}_{{T}_{1}}\vert {H}_{0}) = \alpha \quad \mbox{ and }\quad \Pr ({T}_{2} \leq{c}_{{T}_{2}}\vert {H}_{0}) = \alpha.& & \end{array}$$
(2.15)

The null distributions of the test statistics T 1 and T 2 can be expressed as

$$\begin{array}{rcl} & & \Pr ({T}_{1} = {t}_{1}\vert {H}_{0}) \\ & & = \sum \limits_{{m}_{(r)2} = 0}^{{n}_{2} } \ldots\sum \limits_{{m}_{(r)k} = 0}^{{n}_{k} }\Pr ({M}_{(r)i} = {m}_{(r)i},i = 2, \ldots ,k\vert {H}_{0})I\left ( \sum \limits_{i = 2}^{k}{m}_{ (r)i} = {t}_{1}\right ) \\ & & \end{array}$$
(2.16)

for \({t}_{1} = 0,1, \ldots , \sum \limits_{i = 2}^{k}{n}_{i}\), and

$$\begin{array}{rcl} & & \Pr ({T}_{2} = {t}_{2}\vert {H}_{0}) \\ & & = \sum \limits_{{m}_{(r)2} = 0}^{{n}_{2}} \ldots\sum \limits_{{m}_{(r)k} = 0}^{{n}_{k} }\Pr ({M}_{(r)i} = {m}_{(r)i},i = 2, \ldots ,k\vert {H}_{0})I\left({\min \limits_{2\leq i\leq k}{m}_{(r)i}} = {t}_{2}\right ) \\ & & \end{array}$$
(2.17)

for \({t}_{2} = 0,1, \ldots {,\min \limits_{2\leq i\leq k}{n}_{i}}\), where Pr(M (r)i = m (r)i |H 0) is

$$\begin{array}{rcl} & & \Pr ({M}_{(r)i} = {m}_{(r)i},i = 2, \ldots ,k\vert {H}_{0}) \\ & = & \sum \limits_{{\mbox{ $m$}}_{2}} \ldots\sum \limits_{{\mbox{ $m$}}_{k}}\delta ({\mbox{ $m$}}_{2}, \ldots ,{\mbox{ $m$}}_{k})I\left ({\max }_{1\leq j\leq r}({n}_{1} - j + 1){m}_{ji} = {m}_{(r)i},i = 2, \ldots ,k\right ). \\ & & \end{array}$$
(2.18)

From Equations (2.15)–(2.18), the critical values \({c}_{{T}_{1}}\), \({c}_{{T}_{2}}\) and their exact levels of significance as close as possible to α = 5% for k = 3,4 with equal sample sizes \({n}_{1} = \mathrel{\cdots } = {n}_{k} = n\) and r = 4(1)n were computed and are presented in Tables 2.1 and 2.2; similarly, for the unequal sample sizes \({n}_{1} = 10,{n}_{2} = \mathrel{\cdots } = {n}_{k}\, = \,15\); \({n}_{1}\, = \,15,{n}_{2} = \mathrel{\cdots } = {n}_{k}\, = \,20\) and r = 4(1)n 1, the values are presented in Tables 2.3 and 2.4.
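Given the r×(k−1) matrix of counts M ji , the statistics in (2.13) and (2.14) are straightforward to compute; a Python sketch with a hypothetical count matrix:

```python
import numpy as np

def weighted_maximal_stats(M, n1):
    """T1 (sum) and T2 (min) of the weighted maximal precedence statistics
    M_(r)i = max over j of (n1 - j + 1) M_ji, per (2.13) and (2.14)."""
    r = M.shape[0]
    weights = n1 - np.arange(r)                 # n1, n1 - 1, ..., n1 - r + 1
    per_treatment = (weights[:, None] * M).max(axis=0)
    return per_treatment.sum(), per_treatment.min()

M = np.array([[0, 1],
              [3, 0],
              [4, 2],
              [1, 0]])                          # hypothetical counts, r = 4, k = 3
t1, t2 = weighted_maximal_stats(M, n1=10)
print(t1, t2)                                   # 32 + 16 = 48 and min(32, 16) = 16
```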

2.3.3 Tests based on minimal Wilcoxon rank-sum precedence statistic

Similarly, we propose test procedures based on the minimal Wilcoxon rank-sum precedence statistic for the testing problem discussed here. We set

$$\begin{array}{rcl}{ W}_{(r)i} = \frac{{n}_{i}({n}_{i} + 2r + 1)} {2} - \sum \limits_{j = 1}^{r}(r - j + 1){M}_{ ji}\quad \mbox{ for }\ i = 2,3, \ldots ,k& & \end{array}$$
(2.19)

for the minimal Wilcoxon rank-sum precedence statistic corresponding to the sample from the (i−1)-th treatment. We may then propose the minimal Wilcoxon rank-sum precedence statistics as

$$\begin{array}{rcl}{ W}_{1} = \sum \limits_{i = 2}^{k}{W}_{ (r)i}& & \\ \end{array}$$

and

$$\begin{array}{rcl}{ W}_{2} { = \max }_{2\leq i\leq k}{W}_{(r)i}.& & \\ \end{array}$$

Under the stochastically ordered alternative H 1 in (2.2), we would expect some of the minimal Wilcoxon rank-sum precedence statistics W (r)i in (2.19) to be large. Therefore, we would reject H 0 in (2.1) in favor of H 1 in (2.2) for large values of W 1 and W 2, where the critical values can be determined for specific values of k, r, \({n}_{i},i = 1,2, \ldots ,k\), and pre-fixed level of significance α. Specifically, \(\{{W}_{1} \geq{c}_{{W}_{1}}\}\) and \(\{{W}_{2} \geq{c}_{{W}_{2}}\}\) will serve as critical regions, where \({c}_{{W}_{1}}\) and \({c}_{{W}_{2}}\) are determined such that

$$\begin{array}{rcl} \Pr ({W}_{1} \geq{c}_{{W}_{1}}\vert {H}_{0}) = \alpha \quad \mbox{ and }\quad \Pr ({W}_{2} \geq{c}_{{W}_{2}}\vert {H}_{0}) = \alpha.& & \end{array}$$
(2.20)

The null distributions of the test statistics W 1 and W 2 can be expressed as

$$\begin{array}{rcl} & & \Pr ({W}_{1} = {w}_{1}\vert {H}_{0}) \\ & & = \sum \limits_{{w}_{(r)2} = {l}_{2}}^{{u}_{2} } \ldots\sum \limits_{{w}_{(r)k} = {l}_{k}}^{{u}_{k} }\Pr ({W}_{(r)i} = {w}_{(r)i},i = 2, \ldots ,k\vert {H}_{0})I\left ( \sum \limits_{i = 2}^{k}{w}_{ (r)i} = {w}_{1}\right ) \\ & & \end{array}$$
(2.21)

for \({w}_{1} = \sum \limits_{i = 2}^{k}{l}_{i}, \ldots , \sum \limits_{i = 2}^{k}{u}_{i}\), with \({l}_{i} = {n}_{i}({n}_{i} + 1)/2\), \({u}_{i} = (r + {n}_{i})(r + {n}_{i} + 1)/2 - r(r + 1)/2\), and

$$\begin{array}{rcl} & & \Pr ({W}_{2} = {w}_{2}\vert {H}_{0}) \\ & & = \sum \limits_{{w}_{(r)2} = {l}_{2}}^{{u}_{2} } \ldots\sum \limits_{{w}_{(r)k} = {l}_{k}}^{{u}_{k} }\Pr ({W}_{(r)i} = {w}_{(r)i},i = 2, \ldots ,k\vert {H}_{0})I\left ({\max }_{2\leq i\leq k}{w}_{(r)i} = {w}_{2}\right ) \\ & & \end{array}$$
(2.22)

for \({w}_{2} { = \max \limits_{2\leq i\leq k}{l}_{i}}, \ldots {,\max \limits_{2\leq i\leq k}{u}_{i}}\), where \(\Pr ({W}_{(r)i} = {w}_{(r)i},\ i = 2, \ldots ,k\vert {H}_{0})\) is given by

$$\begin{array}{rcl} & & \Pr ({W}_{(r)i} = {w}_{(r)i},i = 2, \ldots ,k\vert {H}_{0}) \\ & & = \sum \limits_{{\mbox{ $m$}}_{2}} \ldots\sum \limits_{{\mbox{ $m$}}_{k}}\delta ({\mbox{ $m$}}_{2}, \ldots ,{\mbox{ $m$}}_{k}) \\ & & \qquad \qquad \times I\left (\frac{{n}_{i}({n}_{i} + 2r + 1)} {2} - \sum \limits_{j = 1}^{r}(r - j + 1){m}_{ ji} = {w}_{(r)i},i = 2, \ldots ,k\right ). \\ & & \end{array}$$
(2.23)

From Equations (2.20)–(2.23), the critical values \({c}_{{W}_{1}}\), \({c}_{{W}_{2}}\) and their exact levels of significance as close as possible to α = 5% for k = 3,4 with equal sample sizes \({n}_{1} = \mathrel{\cdots } = {n}_{k} = n\) and r = 4(1)n were computed and are presented in Tables 2.1 and 2.2; similarly, for the unequal sample sizes \({n}_{1} = 10,{n}_{2} = \mathrel{\cdots } = {n}_{k} = 15\); \({n}_{1} = 15,{n}_{2} = \mathrel{\cdots } = {n}_{k} = 20\) and r = 4(1)n 1, the values are presented in Tables 2.3 and 2.4.
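Similarly, W 1 and W 2 can be computed directly from the count matrix; a Python sketch with a hypothetical count matrix and equal treatment sample sizes:

```python
import numpy as np

def minimal_wilcoxon_stats(M, n):
    """W1 (sum) and W2 (max) of the statistics W_(r)i in (2.19), computed from
    the r x (k-1) count matrix M; n lists the treatment sample sizes n_2, ..., n_k."""
    r = M.shape[0]
    weights = np.arange(r, 0, -1)               # r - j + 1 for j = 1, ..., r
    n = np.asarray(n)
    w = n * (n + 2 * r + 1) // 2 - weights @ M  # one W_(r)i per treatment
    return w.sum(), w.max()

M = np.array([[0, 1],
              [3, 0],
              [4, 2],
              [1, 0]])                          # hypothetical counts, r = 4, k = 3
w1, w2 = minimal_wilcoxon_stats(M, n=[10, 10])
print(w1, w2)                                   # 77 + 87 = 164 and max(77, 87) = 87
```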

2.4 Exact Power Under Lehmann Alternative

The Lehmann alternative \({H}_{1} : {[{F}_{i}(x)]}^{{\gamma }_{i}} = {F}_{1}(x)\) for some γ i , \(i = 2, \ldots ,k\), which was first proposed by Lehmann (1953), is a subclass of the stochastically ordered alternative \({H}_{1} : {F}_{i}(x) \leq {F}_{1}(x)\) in (2.2) when at least one γ i ∈(0,1) (see Gibbons and Chakraborti, 2003). In this section, we derive an explicit expression for the power functions of the proposed test procedures under the Lehmann alternative.

When \({\gamma }_{2} = \mathrel{\cdots } = {\gamma }_{k} = \gamma \), for some γ∈(0,1), under the Lehmann alternative \({H}_{1} : {[{F}_{i}(x)]}^{\gamma } = {F}_{1}(x)\), the probability mass function of \(({\mbox{ $M$}}_{2}, \ldots ,{\mbox{ $M$}}_{k})\) is (see Appendix B)

$$\begin{array}{rcl} & & {\delta }^{{_\ast}}({\mbox{ $m$}}_{ 2}, \ldots ,{\mbox{ $m$}}_{k}) \\ & & = \Pr ({\mbox{ $M$}}_{2} = { \mbox{ $m$}}_{2}, \ldots ,{\mbox{ $M$}}_{k} = { \mbox{ $m$}}_{k}\vert {H}_{1} : {[{F}_{i}]}^{\gamma } = {F}_{ 1},i = 2, \ldots ,k) \\ & & = \frac{{\gamma }^{r}{n}_{1}!} {({n}_{1} - r)!}\left \{ \prod _{i = 2}^{k}\left (\begin{array}{*{10}c} {n}_{i} \\ {m}_{1i},{m}_{2i}, \ldots ,{m}_{ri},{n}_{i} - \sum \limits_{j = 1}^{r}{m}_{ji} \end{array} \right )\right \} \\ & & \quad \times \left \{ \prod _{j = 1}^{r-1}B\left ({m}_{ 1\mbox{ $\cdot $}} + \ldots + {m}_{j\mbox{ $\cdot $}} + j\gamma ,{m}_{j+1\mbox{ $\cdot $}} + 1\right )\right \} \\ & & \quad \times \left \{ \sum \limits_{l = 0}^{{n}_{1}-r}\left (\begin{array}{*{10}c} {n}_{1} - r \\ l \end{array} \right ){(-1)}^{l}B\left ( \sum \limits_{j = 1}^{r}{m}_{ j\mbox{ $\cdot $}} + (r + l)\gamma , \sum \limits_{i = 2}^{k}{n}_{ i}- \sum \limits_{j = 1}^{r}{m}_{ j\mbox{ $\cdot $}} + 1\right )\right \}, \\ & & \end{array}$$
(2.24)

where \(B(a,b) = \int _{0}^{1}{x}^{a-1}{(1 - x)}^{b-1}dx\) is the complete beta function. Note that the exact distribution of (\({\mbox{ $M$}}_{2}, \ldots ,{\mbox{ $M$}}_{k}\)) under the general Lehmann alternative \({H}_{1} : {[{F}_{k}(x)]}^{{\gamma }_{k}} = {[{F}_{ k-1}(x)]}^{{\gamma }_{k-1}} = \mathrel{\cdots } = {[{F}_{ 2}(x)]}^{{\gamma }_{2}} = [{F}_{ 1}(x)]\) can also be obtained. For the purpose of illustration, we present the result for k = 3 in Appendix B.

Under the Lehmann alternative, the probability mass functions of P 1, P 2, T 1, T 2, W 1 and W 2 can be computed from Equations (2.10), (2.11), (2.16), (2.17), (2.21) and (2.22), respectively, by replacing \(\delta ({\mbox{ $m$}}_{2}, \ldots ,{\mbox{ $m$}}_{k})\) with \({\delta }^{{_\ast}}({\mbox{ $m$}}_{2}, \ldots ,{\mbox{ $m$}}_{k})\) in Equations (2.12), (2.18) and (2.23). Here, we computed the power values of the proposed test procedures for k = 3,4 with \({n}_{1} = \mathrel{\cdots } = {n}_{k} = 10\) and \({\gamma }_{i} = \gamma = 0.2(0.2)1.0\), \(i = 2, \ldots ,k\). Note that when γ = 1.0, the power values are precisely the exact levels of significance. These results are presented in Tables 2.5 and 2.6.
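As a spot-check on the exact computation, the power can also be estimated by simulation: taking F 1 to be uniform on (0,1), the Lehmann alternative \({[{F}_{i}]}^{\gamma } = {F}_{1}\) corresponds to treatment lifetimes distributed as U γ. The Python sketch below uses an illustrative critical value c 1 that is not taken from Tables 2.1–2.4.

```python
import numpy as np

def power_p1_lehmann(n, k, r, gamma, c1, reps=5000, seed=0):
    """Monte Carlo power of the test 'reject H0 when P1 <= c1' under the
    Lehmann alternative [F_i]^gamma = F_1 with equal sample sizes n."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        edge = np.sort(rng.random(n))[r - 1]      # r-th control failure, control ~ U(0,1)
        # U**gamma has cdf x**(1/gamma) on (0,1), so [F_i]^gamma = F_1 holds
        p1 = sum((rng.random(n) ** gamma < edge).sum() for _ in range(k - 1))
        rejections += (p1 <= c1)
    return rejections / reps

# c1 = 2 is a placeholder critical value for illustration only;
# gamma = 1.0 reproduces H0, so that estimate approximates the exact size.
for g in (0.2, 0.6, 1.0):
    print(g, power_p1_lehmann(n=10, k=3, r=4, gamma=g, c1=2))
```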

Table 2.5 Power values under Lehmann alternative for k = 3, \({n}_{1} = {n}_{2} = {n}_{3} = 10\), r = 4(1)10 and \({\gamma }_{2} = {\gamma }_{3} = \gamma = 0.2(0.2)1.0\).
Table 2.6 Power values under Lehmann alternative for k = 4, \({n}_{1} = \mathrel{\cdots } = {n}_{4} = 10,r = 4(1)10\) and \({\gamma }_{2} = {\gamma }_{3} = {\gamma }_{4} = \gamma = 0.2(0.2)1.0\).

2.5 Discussion

The results in Tables 2.5 and 2.6 show that, in most cases, the test procedures can effectively detect differences between the treatment and control distributions early in the life-testing experiment. Note that the desired level of significance may be impossible to attain for some test statistics when r is small, especially for the tests based on extrema (viz., P 2, T 2 and W 2). For instance, for k = 4, \({n}_{1} = {n}_{2} = {n}_{3} = {n}_{4} = 20\) and r = 4, the minimum levels of significance attainable by the tests based on P 2, T 2 and W 2 are all equal to 0.132. It is, therefore, not possible to test the hypotheses in (2.1) at the 5% level in this setting based on P 2, T 2 and W 2. For this reason, the tests based on the extrema of the precedence statistics from the treatments may not be applicable in practice for small values of r.

From Tables 2.5 and 2.6, we observe that the power values of the tests increase with the number of treatments (i.e., k−1), as expected, but they do not increase with r under the Lehmann alternative. We can also see that the tests based on precedence statistics (P 1 and P 2) suffer from a masking effect: their power values decrease as r increases, so the additional information provided by a larger value of r is masked. The tests based on weighted maximal precedence statistics (T 1 and T 2) and minimal Wilcoxon rank-sum precedence statistics (W 1 and W 2) reduce the masking effect that hampers the performance of P 1 and P 2.

In comparing the power performance of the tests based on the sums of the precedence statistics from the treatments (viz., P 1, T 1 and W 1) with those based on the extrema of the precedence statistics from the treatments (viz., P 2, T 2 and W 2), we observe that the former have better power performance than the latter. Furthermore, among all the tests discussed here, the test based on the sum of minimal Wilcoxon rank-sum precedence statistics among treatments (viz., W 1) seems to give the best overall power performance under the Lehmann alternative, and hence is the one that we recommend for the problem discussed here.

Further, the decrease in power values with increasing r suggests that test procedures based on the order of early failures can be more powerful than those based on a complete sample. In fact, r(≤n 1) need not be large to provide a reliable comparison between the treatments and the control. This can save both time and experimental units in a life-testing experiment, which are clear advantages of precedence-type tests. One may be interested in maximizing the power with respect to r, i.e., in determining the best choice of r when designing the experiment. When prior information about the alternative is available, this can be achieved by comparing the power values for different values of r. For example, for k = 4, \({n}_{1} = {n}_{2} = {n}_{3} = {n}_{4} = 10\), if prior information suggests γ = 0.4 for the Lehmann alternative, we would recommend the use of W 1 with r = 6 based on the power values presented in Table 2.6.

2.6 Illustrative Example

Let us take the X 2-, X 3- and X 1-samples to be the data on appliance cord life in flex tests 1, 2 and 3, respectively, of Nelson (1982, p. 510). These three tests were done using two types of cord, viz., B6 and B7: flex tests 1 and 2 were done with cord type B6, and test 3 was done with cord type B7. Suppose cord B7 was the standard production cord and B6 was proposed as a cost improvement. We are then interested in testing the equality of the lifetime distributions of these cords. For these data, we have k = 3 and \({n}_{1} = {n}_{2} = {n}_{3} = 12\). Had we fixed r = 8, the experiment would have stopped as soon as the eighth failure from the X 1-sample (cord B7) was observed, i.e., at 128.7 hours. The data are presented in Table 2.7. The observed values of \(({m}_{1i}, \ldots ,{m}_{8i})\) and the values of the statistics P (8)i , M (8)i and W (8)i , i = 2, 3, are presented in Table 2.8.
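With raw failure times in hand, the counts \(({m}_{1i}, \ldots ,{m}_{8i})\) in Table 2.8 amount to simple bookkeeping: sort the control sample, stop at its r-th ordered failure, and count treatment failures falling in each spacing. A minimal sketch of this bookkeeping (Python; the function name is ours, for illustration only):

```python
def precedence_counts(control, treatment, r):
    """Return [m_1, ..., m_r], where m_j is the number of `treatment`
    failures in (x_{j-1}, x_j] and x_1 < ... < x_r are the first r
    ordered failures of the `control` sample."""
    x = sorted(control)[:r]
    counts, lo = [], float("-inf")
    for xj in x:
        counts.append(sum(lo < t <= xj for t in treatment))
        lo = xj
    return counts

# The precedence statistic P_(r) is then the total number of treatment
# failures preceding the r-th control failure:
#     P = sum(precedence_counts(control, treatment, r))
```

Applied to the cord-life data, with the B7 sample as `control` and r = 8, this reproduces the row of counts for each treatment sample in Table 2.8.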

Table 2.7 Appliance cord life data from Nelson (1982, p. 510) (∗denotes censored observations).
Table 2.8 Values of \(({m}_{1i}, \ldots ,{m}_{8i})\) and the statistics P (8)i , M (8)i and W (8)i for i = 2,3.

The near-5% critical values for k = 3, \({n}_{1} = {n}_{2} = {n}_{3} = 12\) and r = 8, along with their exact levels of significance (in parentheses), for the test procedures discussed in the preceding sections are as follows:

[Table: near-5% critical values with exact levels of significance (not reproduced)]

Then the test statistics and their p-values are

[Table: observed test statistics with p-values (not reproduced)]

and so we do not reject the null hypothesis that the lifetime distributions of these cords are equal; that is, there is no evidence that cord B6 is better than cord B7. Incidentally, this finding agrees with that of Nelson (1982), who analyzed these data by assuming a normal model.

2.7 Appendix A: Probability Mass Function of (M 2 , …, M k ) Under the Null Hypothesis

Let the ordered failures from the control be \({x}_{1} < {x}_{2} < \mathrel{\cdots } < {x}_{r}\). Consider the (i−1)-th treatment, conditional on the failures from the control. Then, the probability that there are m 1i failures from the treatment before x 1 and m ji failures between x j−1 and x j , \(j = 2, \ldots ,r\), is given by the multinomial probability

$$\begin{array}{rcl} & & \Pr ({\mbox{ $M$}}_{i} = { \mbox{ $m$}}_{i}\vert {x}_{1}, \ldots ,{x}_{r}) \\ & & \quad = \Pr ({M}_{1i} = {m}_{1i}, \ldots ,{M}_{ri} = {m}_{ri}\vert {x}_{1}, \ldots ,{x}_{r}) \\ & & \quad = \left (\begin{array}{*{10}c} {n}_{i} \\ {m}_{1i}, \ldots ,{m}_{ri},{n}_{i} - \sum \limits_{j = 1}^{r}{m}_{ji} \end{array} \right ) \\ & & \qquad \times {[{F}_{i}({x}_{1})]}^{{m}_{1i} }\left \{ \prod _{j = 2}^{r}{[{F}_{i}({x}_{j}) - {F}_{i}({x}_{j-1})]}^{{m}_{ji} }\right \}{[1 - {F}_{i}({x}_{r})]}^{\left ({n}_{i}- \sum \limits_{j = 1}^{r}{m}_{ji}\right )}.\end{array}$$

For fixed values of \({x}_{1} < {x}_{2} < \mathrel{\cdots } < {x}_{r}\), due to the independence of the samples from the (k−1) treatments, we readily have the conditional joint probability as

$$\begin{array}{rcl} & & \Pr \left.\left ({\mbox{ $M$}}_{2} = { \mbox{ $m$}}_{2}, \ldots ,{\mbox{ $M$}}_{k} = { \mbox{ $m$}}_{k}\right \vert {x}_{1}, \ldots ,{x}_{r}\right ) \\ & & \quad = \left \{ \prod _{i = 2}^{k}\left (\begin{array}{*{10}c} {n}_{i} \\ {m}_{1i}, \ldots ,{m}_{ri},{n}_{i} - \sum \limits_{j = 1}^{r}{m}_{ji} \end{array} \right )\right \} \\ & & \qquad \times \left \{ \prod _{i = 2}^{k}{[{F}_{ i}({x}_{1})]}^{{m}_{1i} }\right \}\left \{ \prod _{i = 2}^{k} \prod _{j = 2}^{r}{[{F}_{ i}({x}_{j}) - {F}_{i}({x}_{j-1})]}^{{m}_{ji} }\right \} \\ & & \qquad \times \left \{ \prod _{i = 2}^{k}{[1 - {F}_{ i}({x}_{r})]}^{\left ({n}_{i}- \sum \limits_{j = 1}^{r}{m}_{ ji}\right )}\right \}\end{array}$$

Now, we have the joint density of the first r order statistics from the control as

$$\begin{array}{rcl}{ f}_{1, \ldots ,r:{n}_{1}}({x}_{1}, \ldots ,{x}_{r}) = \frac{{n}_{1}!} {({n}_{1} - r)!}\left [ \prod _{j = 1}^{r}{f}_{ 1}({x}_{j})\right ]{[1 - {F}_{1}({x}_{r})]}^{{n}_{1}-r},\quad {x}_{ 1} < \ldots < {x}_{r}.& & \\ \end{array}$$

As a result, we obtain the unconditional probability of \(({\mbox{ $M$}}_{2} = { \mbox{ $m$}}_{2}, \ldots ,{\mbox{ $M$}}_{k} = { \mbox{ $m$}}_{k})\) as

$$\begin{array}{ll} \Pr &\left ({{M}}_{2} = { {m}}_{2}, \ldots ,{{M}}_{k} = {{m}}_{k}\right ) \\ & = C \int _{-\infty }^{\infty } \int _{-\infty }^{{x}_{r} } \ldots\int _{-\infty }^{{x}_{2} }\left \{ \prod _{i = 2}^{k}{[{F}_{ i}({x}_{1})]}^{{m}_{1i} }\right \}\left \{ \prod _{i = 2}^{k} \prod _{j = 2}^{r}{[{F}_{ i}({x}_{j}) - {F}_{i}({x}_{j-1})]}^{{m}_{ji} }\right \} \\ &\quad \times \left \{ \prod _{i = 2}^{k}{[1 - {F}_{ i}({x}_{r})]}^{\left ({n}_{i}- \sum \limits_{j = 1}^{r}{m}_{ ji}\right )}\right \} \\ &\quad \times \left [ \prod _{j = 1}^{r}{f}_{ 1}({x}_{j})\right ]{[1 - {F}_{1}({x}_{r})]}^{{n}_{1}-r}d{x}_{ 1}\mathrel{\cdots }d{x}_{r},\\ \end{array}$$
(2.25)

where

$$\begin{array}{rcl} C = \frac{{n}_{1}!} {({n}_{1} - r)!} \prod _{i = 2}^{k}\left (\begin{array}{*{10}c} {n}_{i} \\ {m}_{1i}, \ldots ,{m}_{ri},{n}_{i} - \sum \limits_{j = 1}^{r}{m}_{ji} \end{array} \right ).& & \\ \end{array}$$

Under the null hypothesis, \({H}_{0} : {F}_{1}(x) = {F}_{2}(x) = \mathrel{\cdots } = {F}_{k}(x)\), by denoting \({m}_{j\mbox{ $\cdot $}} = \sum \limits_{i = 2}^{k}{m}_{ji}\), the expression in (2.25) becomes

$$\begin{array}{rcl} & & \Pr \left.\left ({\mbox{ $M$}}_{2} = { \mbox{ $m$}}_{2}, \ldots ,{\mbox{ $M$}}_{k} = { \mbox{ $m$}}_{k}\right \vert {H}_{0}\right ) \\ & & \quad = \,C \int _{-\infty }^{\infty } \int _{-\infty }^{{x}_{r} } \ldots\int _{-\infty }^{{x}_{2} }\left \{ \prod _{i = 2}^{k}{[{F}_{ 1}({x}_{1})]}^{{m}_{1i} }\right \}\left \{ \prod _{i = 2}^{k} \prod _{j = 2}^{r}{[{F}_{ 1}({x}_{j})\,-\,{F}_{1}({x}_{j-1})]}^{{m}_{ji} }\right \} \\ & & \qquad \times \left \{ \prod _{i = 2}^{k}{[1 - {F}_{ 1}({x}_{r})]}^{\left ({n}_{i}- \sum \limits_{j = 1}^{r}{m}_{ ji}\right )}\right \} \\ & & \qquad \times \left [ \prod _{j = 1}^{r}{f}_{ 1}({x}_{j})\right ]{[1 - {F}_{1}({x}_{r})]}^{{n}_{1}-r}d{x}_{ 1}\mathrel{\cdots }d{x}_{r} \\ & & \quad = C \int _{-\infty }^{\infty } \int _{-\infty }^{{x}_{r} } \ldots\int _{-\infty }^{{x}_{2} }{[{F}_{1}({x}_{1})]}^{{m}_{1\mbox{ $\cdot $}}}\left \{ \prod _{j = 2}^{r}{[{F}_{ 1}({x}_{j}) - {F}_{1}({x}_{j-1})]}^{{m}_{j\mbox{ $\cdot $}}}\right \} \\ & &\qquad \times{[1 - {F}_{1}({x}_{r})]}^{\left ( \sum \limits_{i = 1}^{k}{n}_{ i}- \sum \limits_{j = 1}^{r}{m}_{ j\mbox{ $\cdot $}}-r\right )}\left [ \prod _{j = 1}^{r}{f}_{ 1}({x}_{j})\right ]d{x}_{1}\mathrel{\cdots }d{x}_{r}\end{array}$$

Upon setting u i = F 1(x i ) for \(i = 1, \ldots ,r\), the above expression becomes

$$\begin{array}{rcl} & & \Pr \left.\left ({\mbox{ $M$}}_{2} = { \mbox{ $m$}}_{2}, \ldots ,{\mbox{ $M$}}_{k} = { \mbox{ $m$}}_{k}\right \vert {H}_{0}\right ) \\ & & \quad = C \int _{0}^{1} \int _{0}^{{u}_{r} } \ldots\int _{0}^{{u}_{2} }{u}_{1}^{{m}_{1\mbox{ $\cdot $}}}\left [ \prod _{j = 2}^{r}{({u}_{j} - {u}_{j-1})}^{{m}_{j\mbox{ $\cdot $}}}\right ] \\ & & \qquad \times{(1 - {u}_{r})}^{\left ( \sum \limits_{i = 1}^{k}{n}_{i}- \sum \limits_{j = 1}^{r}{m}_{j\mbox{ $\cdot $}}-r\right )}d{u}_{1}\mathrel{\cdots }d{u}_{r}.\end{array}$$

Using the transformation \({w}_{1} = {u}_{1}/{u}_{2}\), we have

$$\begin{array}{rcl} \int _{0}^{{u}_{2} }{u}_{1}^{{m}_{1\mbox{ $\cdot $}}}{({u}_{ 2} - {u}_{1})}^{{m}_{2\mbox{ $\cdot $}}}d{u}_{ 1}& = & {u}_{2}^{{m}_{1\mbox{ $\cdot $}}+{m}_{2\mbox{ $\cdot $}}} \int _{0}^{1}{w}_{ 1}^{{m}_{1\mbox{ $\cdot $}}}{(1 - {w}_{ 1})}^{{m}_{2\mbox{ $\cdot $}}}d{w}_{ 1} \\ & = & {u}_{2}^{{m}_{1\mbox{ $\cdot $}}+{m}_{2\mbox{ $\cdot $}}+1}B({m}_{ 1\mbox{ $\cdot $}} + 1,{m}_{2\mbox{ $\cdot $}} + 1), \\ \end{array}$$

where, as before, \(B(a,b) = \int _{0}^{1}{x}^{a-1}{(1 - x)}^{b-1}dx\) is the complete beta function. Proceeding similarly and using the transformations \({w}_{l} = {u}_{l}/{u}_{l+1}\) for \(l = 2, \ldots ,r - 1\), we obtain

$$\begin{array}{rcl} & & \Pr \left.\left ({\mbox{ $M$}}_{2} = { \mbox{ $m$}}_{2}, \ldots ,{\mbox{ $M$}}_{k} = { \mbox{ $m$}}_{k}\right \vert {H}_{0}\right ) \\ & & \quad = C\left \{ \prod _{j = 1}^{r-1}B\left ({m}_{1\mbox{ $\cdot $}} + \mathrel{\cdots } + {m}_{j\mbox{ $\cdot $}} + j,{m}_{j+1\mbox{ $\cdot $}} + 1\right )\right \} \\ & & \qquad \times\int _{0}^{1}{u}_{r}^{\left ( \sum \limits_{j = 1}^{r}{m}_{j\mbox{ $\cdot $}}+r-1\right )}{(1 - {u}_{r})}^{\left ( \sum \limits_{i = 1}^{k}{n}_{i}- \sum \limits_{j = 1}^{r}{m}_{j\mbox{ $\cdot $}}-r\right )}d{u}_{r} \\ & & \quad = C\left \{ \prod _{j = 1}^{r-1}B\left ({m}_{1\mbox{ $\cdot $}} + \mathrel{\cdots } + {m}_{j\mbox{ $\cdot $}} + j,{m}_{j+1\mbox{ $\cdot $}} + 1\right )\right \} \\ & & \qquad \times B\left ( \sum \limits_{j = 1}^{r}{m}_{j\mbox{ $\cdot $}} + r, \sum \limits_{i = 1}^{k}{n}_{i} - \sum \limits_{j = 1}^{r}{m}_{j\mbox{ $\cdot $}}- r + 1\right ) \\ & & \quad = \frac{{n}_{1}!} {({n}_{1} - r)!}\left \{ \prod _{i = 2}^{k}\left (\begin{array}{*{10}c} {n}_{i} \\ {m}_{1i}, \ldots ,{m}_{ri},{n}_{i} - \sum \limits_{j = 1}^{r}{m}_{ji} \end{array} \right )\right \} \\ & & \qquad \times \frac{\left ( \sum \limits_{i = 1}^{k}{n}_{i} - \sum \limits_{j = 1}^{r}{m}_{j\mbox{ $\cdot $}}- r\right )!\,{m}_{1\mbox{ $\cdot $}}! \ldots {m}_{r\mbox{ $\cdot $}}!} {\left ( \sum \limits_{i = 1}^{k}{n}_{i}\right )!} \\ & & \quad = \frac{1} {\left (\begin{array}{*{10}c} \sum \limits_{i = 1}^{k}{n}_{i} \\ {n}_{1}, \ldots ,{n}_{k} \end{array} \right )}\left \{ \prod _{j = 1}^{r}\left (\begin{array}{*{10}c} {m}_{j\mbox{ $\cdot $}}\\ {m}_{j2}, \ldots ,{m}_{jk} \end{array} \right )\right \} \\ & & \qquad \times \left (\begin{array}{*{10}c} \sum \limits_{i = 1}^{k}{n}_{i} - \sum \limits_{j = 1}^{r}{m}_{j\mbox{ $\cdot $}}- r \\ {n}_{1} - r,{n}_{2} - \sum \limits_{j = 1}^{r}{m}_{j2}, \ldots ,{n}_{k} - \sum \limits_{j = 1}^{r}{m}_{jk} \end{array} \right ).\end{array}$$
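The final multinomial form above invites a direct numerical check: over all admissible configurations of (M 2 , …, M k ), the probabilities must sum to one, since the experiment terminates at the r-th control failure with probability one. A sketch of such a check for a toy setting (Python; function names are ours, not from the text):

```python
from math import lgamma, exp
from itertools import product

def log_multinomial(total, parts):
    # log of total!/(parts[0]! parts[1]! ...); parts are nonnegative and sum to total
    return lgamma(total + 1) - sum(lgamma(p + 1) for p in parts)

def null_pmf(m, n, r):
    """Closed-form Pr(M_2 = m_2, ..., M_k = m_k | H0).

    m -- list of k-1 tuples, m[i-2] = (m_{1i}, ..., m_{ri})
    n -- sample sizes (n_1, ..., n_k)
    """
    N = sum(n)
    mdot = [sum(col) for col in zip(*m)]     # column totals m_{j.}
    s = sum(mdot)
    logp = -log_multinomial(N, n)            # 1 / multinomial(N; n_1, ..., n_k)
    for j in range(r):                       # multinomial(m_{j.}; m_{j2}, ..., m_{jk})
        logp += log_multinomial(mdot[j], [mi[j] for mi in m])
    # units still surviving at the r-th control failure
    rest = [n[0] - r] + [n[i + 1] - sum(mi) for i, mi in enumerate(m)]
    logp += log_multinomial(N - s - r, rest)
    return exp(logp)

def all_vectors(ni, r):
    # admissible (m_{1i}, ..., m_{ri}) for a treatment sample of size ni
    return [v for v in product(range(ni + 1), repeat=r) if sum(v) <= ni]

# enumeration check for k = 3, n = (3, 2, 2), r = 2
total = sum(null_pmf([m2, m3], (3, 2, 2), 2)
            for m2 in all_vectors(2, 2) for m3 in all_vectors(2, 2))
```

Here `total` evaluates to 1 up to floating-point error; the same function gives, e.g., probability 0.2 for the two-sample case k = 2, n 1 = n 2 = 3, r = 2, m 2 = (0, 0).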

2.8 Appendix B: Probability Mass Function of (M 2 , …, M k ) Under the Lehmann Alternative

Under the Lehmann alternative H 1: \({[{F}_{k}(x)]}^{{\gamma }_{k}}\) = \({[{F}_{k-1}(x)]}^{{\gamma }_{k-1}}\) = ⋯ = \({[{F}_{2}(x)]}^{{\gamma }_{2}} = \) F 1(x), for some γ i ∈(0,1), the expression in (2.25) becomes

$$\begin{array}{rcl} & & \Pr \left ({\mbox{ $M$}}_{2} = { \mbox{ $m$}}_{2}, \ldots ,{\mbox{ $M$}}_{k} = { \mbox{ $m$}}_{k}\vert {H}_{1} : {F}_{k}^{{\gamma }_{k} } = \mathrel{\cdots } = {F}_{2}^{{\gamma }_{2} } = {F}_{1}\right ) \\ & & \quad = C{\gamma }_{k}^{r} \int _{-\infty }^{\infty } \int _{-\infty }^{{x}_{r} } \ldots\int _{-\infty }^{{x}_{2} }\left \{ \prod _{i = 2}^{k}{[{F}_{k}({x}_{1})]}^{{m}_{1i}{\gamma }_{k}/{\gamma }_{i} }\right \} \\ & & \qquad \times \left \{ \prod _{i = 2}^{k} \prod _{j = 2}^{r}{[{F}_{k}^{{\gamma }_{k}/{\gamma }_{i} }({x}_{j}) - {F}_{k}^{{\gamma }_{k}/{\gamma }_{i} }({x}_{j-1})]}^{{m}_{ji} }\right \} \\ & & \qquad \times \left \{ \prod _{i = 2}^{k}{[1 - {F}_{k}^{{\gamma }_{k}/{\gamma }_{i} }({x}_{r})]}^{\left ({n}_{i}- \sum \limits_{j = 1}^{r}{m}_{ji}\right )}\right \}\left [ \prod _{j = 1}^{r}{F}_{k}^{{\gamma }_{k}-1}({x}_{j})\right ] \\ & & \qquad \times \left [ \prod _{j = 1}^{r}{f}_{k}({x}_{j})\right ]{[1 - {F}_{k}^{{\gamma }_{k} }({x}_{r})]}^{{n}_{1}-r}d{x}_{1}\mathrel{\cdots }d{x}_{r}. \end{array}$$
(2.26)

In the special case when γ i = γ for \(i = 2, \ldots ,k\), the expression in (2.26) can be simplified as

$$\begin{array}{rcl} & & \Pr \left ({\mbox{ $M$}}_{2} = { \mbox{ $m$}}_{2}, \ldots ,{\mbox{ $M$}}_{k} = { \mbox{ $m$}}_{k}\vert {H}_{1} : {F}_{k}^{\gamma } = \mathrel{\cdots } = {F}_{2}^{\gamma } = {F}_{1}\right ) \\ & & \quad = C{\gamma }^{r} \int _{-\infty }^{\infty } \int _{-\infty }^{{x}_{r} } \ldots\int _{-\infty }^{{x}_{2} }{[{F}_{k}({x}_{1})]}^{{m}_{1\mbox{ $\cdot $}}+\gamma -1} \\ & & \qquad \times \left \{ \prod _{j = 2}^{r}{F}_{k}^{\gamma -1}({x}_{j}){[{F}_{k}({x}_{j}) - {F}_{k}({x}_{j-1})]}^{{m}_{j\mbox{ $\cdot $}}}\right \} \\ & &\qquad \times{[1 - {F}_{k}({x}_{r})]}^{\left ( \sum \limits_{i = 2}^{k}{n}_{i}- \sum \limits_{j = 1}^{r}{m}_{j\mbox{ $\cdot $}}\right )}\left [ \prod _{j = 1}^{r}{f}_{k}({x}_{j})\right ]{[1 - {F}_{k}^{\gamma }({x}_{r})]}^{{n}_{1}-r}d{x}_{1}\mathrel{\cdots }d{x}_{r}.\end{array}$$

Upon setting u i = F k (x i ) for \(i = 1, \ldots ,r\), the above expression becomes

$$\begin{array}{rcl} & & \Pr \left ({\mbox{ $M$}}_{2} = { \mbox{ $m$}}_{2}, \ldots ,{\mbox{ $M$}}_{k} = { \mbox{ $m$}}_{k}\vert {H}_{1} : {F}_{k}^{\gamma } = \mathrel{\cdots } = {F}_{2}^{\gamma } = {F}_{1}\right ) \\ & & \quad = C{\gamma }^{r} \int _{0}^{1} \int _{0}^{{u}_{r} } \ldots\int _{0}^{{u}_{2} }{u}_{1}^{{m}_{1\mbox{ $\cdot $}}+\gamma -1}\left \{ \prod _{j = 2}^{r}{u}_{j}^{\gamma -1}{({u}_{j} - {u}_{j-1})}^{{m}_{j\mbox{ $\cdot $}}}\right \} \\ & &\qquad \times{(1 - {u}_{r})}^{\left ( \sum \limits_{i = 2}^{k}{n}_{i}- \sum \limits_{j = 1}^{r}{m}_{j\mbox{ $\cdot $}}\right )}{(1 - {u}_{r}^{\gamma })}^{{n}_{1}-r}d{u}_{1}\mathrel{\cdots }d{u}_{r}.\end{array}$$

Adopting an approach similar to the one used in Appendix A, we obtain

$$\begin{array}{rcl} & & \Pr \left ({\mbox{ $M$}}_{2} = { \mbox{ $m$}}_{2}, \ldots ,{\mbox{ $M$}}_{k} = { \mbox{ $m$}}_{k}\vert {H}_{1} : {F}_{k}^{\gamma } = \mathrel{\cdots } = {F}_{2}^{\gamma } = {F}_{1}\right ) \\ & & \quad = C{\gamma }^{r}\left \{ \prod _{j = 1}^{r-1}B\left ({m}_{1\mbox{ $\cdot $}} + \mathrel{\cdots } + {m}_{j\mbox{ $\cdot $}} + j\gamma ,{m}_{j+1\mbox{ $\cdot $}} + 1\right )\right \} \\ & & \qquad \times\int _{0}^{1}{u}_{r}^{\left ( \sum \limits_{j = 1}^{r}{m}_{j\mbox{ $\cdot $}}+r\gamma -1\right )}{(1 - {u}_{r})}^{\left ( \sum \limits_{i = 2}^{k}{n}_{i}- \sum \limits_{j = 1}^{r}{m}_{j\mbox{ $\cdot $}}\right )}{(1 - {u}_{r}^{\gamma })}^{{n}_{1}-r}d{u}_{r} \\ & & \quad = C{\gamma }^{r}\left \{ \prod _{j = 1}^{r-1}B\left ({m}_{1\mbox{ $\cdot $}} + \mathrel{\cdots } + {m}_{j\mbox{ $\cdot $}} + j\gamma ,{m}_{j+1\mbox{ $\cdot $}} + 1\right )\right \} \\ & & \qquad \times \Biggl [ \sum \limits_{l = 0}^{{n}_{1}-r}\left (\begin{array}{*{10}c} {n}_{1} - r \\ l \end{array} \right ){(-1)}^{l} \\ & & \qquad \qquad \qquad \times\int _{0}^{1}{u}_{r}^{\left ( \sum \limits_{j = 1}^{r}{m}_{j\mbox{ $\cdot $}}+r\gamma -1+l\gamma \right )}{(1 - {u}_{r})}^{\left ( \sum \limits_{i = 2}^{k}{n}_{i}- \sum \limits_{j = 1}^{r}{m}_{j\mbox{ $\cdot $}}\right )}d{u}_{r}\Biggr ] \\ & & \quad = C{\gamma }^{r}\left \{ \prod _{j = 1}^{r-1}B\left ({m}_{1\mbox{ $\cdot $}} + \mathrel{\cdots } + {m}_{j\mbox{ $\cdot $}} + j\gamma ,{m}_{j+1\mbox{ $\cdot $}} + 1\right )\right \} \\ & & \qquad \times\sum \limits_{l = 0}^{{n}_{1}-r}\left (\begin{array}{*{10}c} {n}_{1} - r \\ l \end{array} \right ){(-1)}^{l}B\left ( \sum \limits_{j = 1}^{r}{m}_{j\mbox{ $\cdot $}} + (r + l)\gamma , \sum \limits_{i = 2}^{k}{n}_{i} - \sum \limits_{j = 1}^{r}{m}_{j\mbox{ $\cdot $}} + 1\right ).\end{array}$$
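The key step here is the binomial expansion of \({(1 - {u}_{r}^{\gamma })}^{{n}_{1}-r}\), which rests on the identity \(\int _{0}^{1}{u}^{a-1}{(1 - u)}^{b-1}{(1 - {u}^{\gamma })}^{N}du = \sum \limits_{l = 0}^{N}{(-1)}^{l}\binom{N}{l}B(a + l\gamma ,b)\). A small numerical verification of this identity (Python; function names are ours, for illustration):

```python
from math import lgamma, exp, comb

def log_beta(a, b):
    # log of the complete beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def alt_beta_sum(a, b, N, g):
    # sum_{l=0}^{N} (-1)^l C(N, l) B(a + l*g, b)
    return sum((-1) ** l * comb(N, l) * exp(log_beta(a + l * g, b))
               for l in range(N + 1))

def integral_midpoint(a, b, N, g, steps=100000):
    # midpoint rule for the integral of u^{a-1} (1-u)^{b-1} (1-u^g)^N over (0, 1)
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        total += u ** (a - 1) * (1 - u) ** (b - 1) * (1 - u ** g) ** N
    return total * h
```

For instance, `alt_beta_sum(2.5, 3.0, 2, 0.6)` agrees with the midpoint-rule approximation of the corresponding integral to within the quadrature error.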

The exact distribution of (\({\mbox{ $M$}}_{2}, \ldots ,{\mbox{ $M$}}_{k}\)), under the general Lehmann alternative \({H}_{1} : {[{F}_{k}(x)]}^{{\gamma }_{k}} = {[{F}_{ k-1}(x)]}^{{\gamma }_{k-1}} = \mathrel{\cdots } = {[{F}_{ 2}(x)]}^{{\gamma }_{2}} = {F}_{ 1}(x)\), can be derived in a similar manner by expanding each term by the binomial formula, and the final expression would then involve multiple summation. For purposes of illustration, we present the result for k = 3. In this case, we have from Equation (2.26)

$$\begin{array}{rcl} & & \Pr \left ({\mbox{ $M$}}_{2} = { \mbox{ $m$}}_{2},{\mbox{ $M$}}_{3} = { \mbox{ $m$}}_{3}\vert {H}_{1} : {F}_{3}^{{\gamma }_{3} } = {F}_{2}^{{\gamma }_{2} } = {F}_{1}\right ) \\ & & \quad = C{\gamma }_{3}^{r} \int _{-\infty }^{\infty } \int _{-\infty }^{{x}_{r} } \ldots\int _{-\infty }^{{x}_{2} }{[{F}_{3}({x}_{1})]}^{{m}_{12}{\gamma }_{3}/{\gamma }_{2} }{[{F}_{3}({x}_{1})]}^{{m}_{13} } \\ & & \qquad \times \left \{ \prod _{j = 2}^{r}{[{F}_{3}^{{\gamma }_{3}/{\gamma }_{2} }({x}_{j}) - {F}_{3}^{{\gamma }_{3}/{\gamma }_{2} }({x}_{j-1})]}^{{m}_{j2} }\right \}\left \{ \prod _{j = 2}^{r}{[{F}_{3}({x}_{j}) - {F}_{3}({x}_{j-1})]}^{{m}_{j3} }\right \} \\ & & \qquad \times{[1 - {F}_{3}^{{\gamma }_{3}/{\gamma }_{2} }({x}_{r})]}^{\left ({n}_{2}- \sum \limits_{j = 1}^{r}{m}_{j2}\right )}{[1 - {F}_{3}({x}_{r})]}^{\left ({n}_{3}- \sum \limits_{j = 1}^{r}{m}_{j3}\right )} \\ & & \qquad \times \left \{ \prod _{j = 1}^{r}{[{F}_{3}({x}_{j})]}^{{\gamma }_{3}-1}{f}_{3}({x}_{j})\right \}{[1 - {F}_{3}^{{\gamma }_{3} }({x}_{r})]}^{{n}_{1}-r}d{x}_{1}\mathrel{\cdots }d{x}_{r}.\end{array}$$

Upon setting u i = F 3(x i ) for \(i = 1, \ldots ,r\), the preceding expression becomes

$$\begin{array}{rcl} & & \Pr \left ({\mbox{ $M$}}_{2} = { \mbox{ $m$}}_{2},{\mbox{ $M$}}_{3} = { \mbox{ $m$}}_{3}\vert {H}_{1} : {F}_{3}^{{\gamma }_{3} } = {F}_{2}^{{\gamma }_{2} } = {F}_{1}\right ) \\ & & \quad = C{\gamma }_{3}^{r} \int _{0}^{1} \int _{0}^{{u}_{r} } \ldots\int _{0}^{{u}_{2} }{u}_{1}^{\left (\frac{{m}_{12}{\gamma }_{3}} {{\gamma }_{2}} +{m}_{13}+{\gamma }_{3}-1\right )} \\ & & \qquad \times \left \{ \prod _{j = 2}^{r}{u}_{j}^{{\gamma }_{3}-1}{\left ({u}_{j}^{{\gamma }_{3}/{\gamma }_{2} } - {u}_{j-1}^{{\gamma }_{3}/{\gamma }_{2} }\right )}^{{m}_{j2} }{\left ({u}_{j} - {u}_{j-1}\right )}^{{m}_{j3} }\right \} \\ & & \qquad \times {\left (1 - {u}_{r}^{{\gamma }_{3}/{\gamma }_{2} }\right )}^{\left ({n}_{2}- \sum \limits_{j = 1}^{r}{m}_{j2}\right )}{\left (1 - {u}_{r}\right )}^{\left ({n}_{3}- \sum \limits_{j = 1}^{r}{m}_{j3}\right )}{\left (1 - {u}_{r}^{{\gamma }_{3}}\right )}^{{n}_{1}-r}d{u}_{1}\mathrel{\cdots }d{u}_{r}.\end{array}$$

The first integral with respect to u 1 can be expressed as

$$\begin{array}{rcl} & & \int _{0}^{{u}_{2} }{u}_{1}^{\left ({m}_{12} \frac{{\gamma }_{3}} {{\gamma }_{2}} +{m}_{13}+{\gamma }_{3}-1\right )}{\left ({u}_{2}^{{\gamma }_{3}/{\gamma }_{2}} - {u}_{1}^{{\gamma }_{3}/{\gamma }_{2}}\right )}^{{m}_{22}}{\left ({u}_{2} - {u}_{1}\right )}^{{m}_{23}}d{u}_{1} \\ & & \quad = \int _{0}^{{u}_{2} }{u}_{1}^{\left (\frac{{m}_{12}{\gamma }_{3}} {{\gamma }_{2}} +{m}_{13}+{\gamma }_{3}-1\right )} \\ & & \qquad \qquad \times \left \{ \sum \limits_{{l}_{1} = 0}^{{m}_{22} }\left (\begin{array}{*{10}c} {m}_{22} \\ {l}_{1} \end{array} \right ){(-1)}^{{l}_{1} }{u}_{2}^{({m}_{22}-{l}_{1})\frac{{\gamma }_{3}} {{\gamma }_{2}} }{u}_{1}^{\left (\frac{{l}_{1}{\gamma }_{3}} {{\gamma }_{2}} \right )}\right \}{\left ({u}_{2} - {u}_{1}\right )}^{{m}_{23}}d{u}_{1} \\ & & \quad = {u}_{2}^{\left (({m}_{12}+{m}_{22})\frac{{\gamma }_{3}} {{\gamma }_{2}} +({m}_{13}+{m}_{23})+{\gamma }_{3}\right )} \\ & & \qquad \times\sum \limits_{{l}_{1} = 0}^{{m}_{22} }\left (\begin{array}{*{10}c} {m}_{22} \\ {l}_{1} \end{array} \right ){(-1)}^{{l}_{1} }B\left (({m}_{12} + {l}_{1})\frac{{\gamma }_{3}} {{\gamma }_{2}} + {m}_{13} + {\gamma }_{3},{m}_{23} + 1\right ).\end{array}$$

Similarly, the j-th integral with respect to u j (\(j = 2, \ldots ,r - 1\)) becomes

$$\begin{array}{rcl} & & {u}_{j+1}^{\left (({m}_{12}+ \ldots +{m}_{(j+1)2})\frac{{\gamma }_{3}} {{\gamma }_{2}} +({m}_{13}+ \ldots +{m}_{(j+1)3})+j{\gamma }_{3}\right )} \\ & & \quad \times\sum \limits_{{l}_{j} = 0}^{{m}_{(j+1)2} }\left (\begin{array}{*{10}c} {m}_{(j+1)2} \\ {l}_{j} \end{array} \right ){(-1)}^{{l}_{j} } \\ & & \quad \times B\left (({m}_{12} + \mathrel{\cdots } + {m}_{j2} + {l}_{j})\frac{{\gamma }_{3}} {{\gamma }_{2}} + ({m}_{13} + \mathrel{\cdots } + {m}_{j3}) + j{\gamma }_{3},{m}_{(j+1)3} + 1\right ), \\ \end{array}$$

while the last integral with respect to u r becomes

$$\begin{array}{rcl} & & \int _{0}^{1}{u}_{r}^{\left (\left ( \sum \limits_{j = 1}^{r}{m}_{j2}\right )\frac{{\gamma }_{3}} {{\gamma }_{2}} +\left ( \sum \limits_{j = 1}^{r}{m}_{j3}\right )+r{\gamma }_{3}-1\right )}{(1 - {u}_{r}^{{\gamma }_{3}/{\gamma }_{2} })}^{\left ({n}_{2}- \sum \limits_{j = 1}^{r}{m}_{j2}\right )} \\ & & \qquad \times{(1 - {u}_{r})}^{\left ({n}_{3}- \sum \limits_{j = 1}^{r}{m}_{j3}\right )}{(1 - {u}_{r}^{{\gamma }_{3}})}^{{n}_{1}-r}d{u}_{r} \\ & & \quad = \sum \limits_{{l}_{r} = 0}^{{n}_{2}- \sum \limits_{j = 1}^{r}{m}_{j2}} \sum \limits_{l = 0}^{{n}_{1}-r}\left (\begin{array}{*{10}c} {n}_{2} - \sum \limits_{j = 1}^{r}{m}_{j2} \\ {l}_{r} \end{array} \right )\left (\begin{array}{*{10}c} {n}_{1} - r \\ l \end{array} \right ){(-1)}^{{l}_{r}+l} \\ & & \qquad \times\int _{0}^{1}{u}_{r}^{\left (\left ( \sum \limits_{j = 1}^{r}{m}_{j2}\right )\frac{{\gamma }_{3}} {{\gamma }_{2}} +\left ( \sum \limits_{j = 1}^{r}{m}_{j3}\right )+r{\gamma }_{3}-1+{l}_{r}\frac{{\gamma }_{3}} {{\gamma }_{2}} +l{\gamma }_{3}\right )}{(1 - {u}_{r})}^{{n}_{3}- \sum \limits_{j = 1}^{r}{m}_{j3}}d{u}_{r} \\ & & \quad = \sum \limits_{{l}_{r} = 0}^{{n}_{2}- \sum \limits_{j = 1}^{r}{m}_{j2}} \sum \limits_{l = 0}^{{n}_{1}-r}\left (\begin{array}{*{10}c} {n}_{2} - \sum \limits_{j = 1}^{r}{m}_{j2} \\ {l}_{r} \end{array} \right )\left (\begin{array}{*{10}c} {n}_{1} - r \\ l \end{array} \right ){(-1)}^{{l}_{r}+l} \\ & & \qquad \times B\left (\!\!\left ( \sum \limits_{j = 1}^{r}{m}_{j2}\,+\,{l}_{r}\right )\frac{{\gamma }_{3}} {{\gamma }_{2}}\,+\,\left ( \sum \limits_{j = 1}^{r}{m}_{j3}\right )\,+\,(r + l){\gamma }_{3},{n}_{3}- \sum \limits_{j = 1}^{r}{m}_{j3}\,+\,1\right ).\end{array}$$

Combining all these expressions, we finally obtain

$$\begin{array}{rcl} & & \Pr \left ({\mbox{ $M$}}_{2} = { \mbox{ $m$}}_{2},{\mbox{ $M$}}_{3} = { \mbox{ $m$}}_{3}\vert {H}_{1} : {F}_{3}^{{\gamma }_{3} } = {F}_{2}^{{\gamma }_{2} } = {F}_{1}\right ) \\ & & \quad = C{\gamma }_{3}^{r} \sum \limits_{{l}_{1} = 0}^{{m}_{22} } \ldots\sum \limits_{{l}_{r-1} = 0}^{{m}_{r2} } \sum \limits_{{l}_{r} = 0}^{{n}_{2}- \sum \limits_{j = 1}^{r}{m}_{j2}} \sum \limits_{l = 0}^{{n}_{1}-r}\left \{ \prod _{j = 2}^{r}\left (\begin{array}{*{10}c} {m}_{j2} \\ {l}_{j-1} \end{array} \right )\right \} \\ & & \qquad \times \left (\begin{array}{*{10}c} {n}_{2} - \sum \limits_{j = 1}^{r}{m}_{j2} \\ {l}_{r} \end{array} \right )\left (\begin{array}{*{10}c} {n}_{1} - r \\ l \end{array} \right ){(-1)}^{\left ( \sum \limits_{j = 1}^{r}{l}_{j}+l\right )} \\ & & \qquad \times \left \{ \prod _{j = 1}^{r-1}B\left (\left ( \sum \limits_{{l}^{{_\ast}} = 1}^{j}{m}_{{l}^{{_\ast}}2} + {l}_{j}\right )\frac{{\gamma }_{3}} {{\gamma }_{2}} + \left ( \sum \limits_{{l}^{{_\ast}} = 1}^{j}{m}_{{l}^{{_\ast}}3}\right ) + j{\gamma }_{3},{m}_{(j+1)3} + 1\right )\right \} \\ & & \qquad \times B\left (\!\left ( \sum \limits_{j = 1}^{r}{m}_{j2} + {l}_{r}\right )\frac{{\gamma }_{3}} {{\gamma }_{2}}+\left ( \sum \limits_{j = 1}^{r}{m}_{j3}\right ) + (r + l){\gamma }_{3},{n}_{3}- \sum \limits_{j = 1}^{r}{m}_{j3} + 1\right ).\end{array}$$
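The multiple-sum expression for k = 3 can be coded directly, and a natural sanity check is that the probabilities sum to one over all admissible configurations of (M 2 , M 3 ), for any \({\gamma }_{2},{\gamma }_{3} \in (0,1]\). A sketch (Python; names and stage indexing are ours, spelled out in the comments):

```python
from math import lgamma, exp, comb
from itertools import product

def log_beta(a, b):
    # log of the complete beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def lehmann_pmf_k3(m2, m3, n, r, g2, g3):
    """Pr(M_2 = m_2, M_3 = m_3) for k = 3 under
    H1: F_3^{g3} = F_2^{g2} = F_1 (the multiple-sum expression)."""
    n1, n2, n3 = n
    s2, s3 = sum(m2), sum(m3)
    c = g3 / g2
    # constant C = n1!/(n1-r)! times the two multinomial coefficients
    logC = lgamma(n1 + 1) - lgamma(n1 - r + 1)
    logC += lgamma(n2 + 1) - sum(lgamma(x + 1) for x in m2) - lgamma(n2 - s2 + 1)
    logC += lgamma(n3 + 1) - sum(lgamma(x + 1) for x in m3) - lgamma(n3 - s3 + 1)
    total = 0.0
    # l_1, ..., l_{r-1} index the binomial expansions at stages j = 1, ..., r-1
    for ls in product(*(range(m2[j] + 1) for j in range(1, r))):
        prod_b, S2, S3 = 1.0, m2[0], m3[0]    # partial sums of m_{j2}, m_{j3}
        for j in range(1, r):                 # stage j uses l_j = ls[j-1]
            prod_b *= comb(m2[j], ls[j - 1]) * (-1) ** ls[j - 1]
            prod_b *= exp(log_beta((S2 + ls[j - 1]) * c + S3 + j * g3,
                                   m3[j] + 1))
            S2 += m2[j]
            S3 += m3[j]
        for lr in range(n2 - s2 + 1):         # last integral: two expansions
            for l in range(n1 - r + 1):
                term = comb(n2 - s2, lr) * comb(n1 - r, l) * (-1) ** (lr + l)
                term *= exp(log_beta((s2 + lr) * c + s3 + (r + l) * g3,
                                     n3 - s3 + 1))
                total += prod_b * term
    return exp(logC) * g3 ** r * total

def vecs(ni, r):
    # admissible count vectors (m_{1i}, ..., m_{ri}) for a sample of size ni
    return [v for v in product(range(ni + 1), repeat=r) if sum(v) <= ni]
```

A second check worth running: at \({\gamma }_{2} = {\gamma }_{3} = 1\) the value reduces to the null probability, e.g. 1/7 for n = (3, 2, 2), r = 2 and all counts zero (the two earliest of the seven failures both come from the control sample).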