Abstract
This chapter introduces permutation methods for multiple independent variables; that is, completely-randomized designs. Included in this chapter are six example analyses illustrating computation of exact permutation probability values for multi-sample tests, calculation of measures of effect size for multi-sample tests, the effect of extreme values on conventional and permutation multi-sample tests, exact and Monte Carlo permutation procedures for multi-sample tests, application of permutation methods to multi-sample rank-score data, and analysis of multi-sample multivariate data. Included in this chapter are permutation versions of Fisher’s F test for one-way, completely-randomized analysis of variance, the Kruskal–Wallis one-way analysis of variance for ranks, the Bartlett–Nanda–Pillai trace test for multivariate analysis of variance, and a permutation-based alternative for the four conventional measures of effect size for multi-sample tests: Cohen’s \(\hat {d}\), Pearson’s η 2, Kelley’s \(\hat {\eta }^{2}\), and Hays’ \(\hat {\omega }^{2}\).
This chapter presents exact and Monte Carlo permutation statistical methods for multi-sample tests. Multi-sample tests are of two types: tests for experimental differences among three or more independent samples (completely-randomized designs) and tests for experimental differences among three or more dependent samples (randomized-blocks designs). Permutation statistical methods for multiple dependent samples are presented in Chap. 9; permutation statistical methods for multiple independent samples are presented in this chapter. In addition, there are mixed models with one or more independent samples and one or more dependent samples, but these models are beyond the scope of this introductory book on permutation statistical methods. Interested readers can consult a 2016 book on Permutation Statistical Methods: An Integrated Approach by the authors [2].
Multi-sample tests for independent samples constitute a large family of tests in conventional statistical methods. Included in this family are one-way analysis of variance with univariate responses (ANOVA), one-way analysis of variance with multivariate responses (MANOVA), one-way analysis of variance with one or more covariates and univariate responses (ANCOVA), one-way analysis of variance with one or more covariates and multivariate responses (MANCOVA), and a variety of factorial designs that may be two-way, three-way, four-way, nested, balanced, unbalanced, fixed, random, or mixed.
In this chapter, permutation statistical methods for multiple independent samples are illustrated with six example analyses. The first example utilizes a small set of data to illustrate the computation of exact permutation methods for multiple independent samples, wherein the permutation test statistic, δ, is developed and compared with Fisher’s conventional F-ratio test statistic. The second example develops a permutation-based measure of effect size as a chance-corrected alternative to the five conventional measures of effect size for multi-sample tests: Cohen’s \(\hat {d}\), Pearson’s η 2, Kelley’s \(\hat {\eta }^{2}\), Hays’ \(\hat {\omega }_{\text{F}}^{2}\) for fixed models, and Hays’ \(\hat {\omega }_{\text{R}}^{2}\) for random models. The third example compares permutation statistical methods based on ordinary and squared Euclidean scaling functions, with an emphasis on the analysis of data sets containing extreme values. The fourth example utilizes a larger data set to provide a comparison of exact permutation methods and Monte Carlo permutation methods, demonstrating the efficiency and accuracy of Monte Carlo statistical methods for multi-sample tests. The fifth example illustrates the application of permutation statistical methods to univariate rank-score data, comparing permutation statistical methods to the conventional Kruskal–Wallis one-way analysis of variance for ranks test. The sixth example illustrates the application of permutation statistical methods to multivariate data, comparing permutation statistical methods with the conventional Bartlett–Nanda–Pillai trace test for multivariate data.
8.1 Introduction
The most popular univariate test for g ≥ 3 independent samples under the Neyman–Pearson population model of statistical inference is Fisher’s one-way analysis of variance, wherein the null hypothesis (H 0) posits no mean differences among the g populations from which the samples are presumed to have been randomly drawn; that is, H 0: μ 1 = μ 2 = ⋯ = μ g. It should be noted that Fisher, writing in the first edition of Statistical Methods for Research Workers in 1925, named the aforementioned statistic the variance-ratio test, symbolized it as z, and defined it as
\[ z = \frac{1}{2}\ln\!\left( \frac{\nu_{1}}{\nu_{0}} \right)\;, \]
where ν 1 = MS Between and ν 0 = MS Within in modern notation. In 1934, in an effort to eliminate the calculation of the natural logarithm required by Fisher’s z test, George Snedecor at Iowa State University published tabled values for Fisher’s variance-ratio z statistic in a small monograph and renamed the test statistic F, presumably in honor of Fisher [22]. It has often been reported that Fisher was displeased when the variance-ratio z test statistic was renamed F by Snedecor [4, 8].
Fisher’s F-ratio test for a completely-randomized design does not determine whether the null hypothesis is true; it only provides the probability, computed under the assumption that the null hypothesis is true, of obtaining sample mean differences as large as or larger than those observed, assuming normality and homogeneity of variance.
Consider a conventional multi-sample F test with samples of independent and identically distributed univariate random variables of sizes n 1, …, n g, viz.,
\[ x_{11},\,\ldots,\,x_{1n_{1}},\;\ldots,\;x_{g1},\,\ldots,\,x_{gn_{g}}\;, \]
drawn from g specified populations with cumulative distribution functions F 1(x), …, F g(x), respectively. For simplicity, suppose that population i is normal with mean μ i and variance σ 2 for i = 1, …, g. This is the standard one-way classification model with g treatment groups. Under the Neyman–Pearson population model of statistical inference, the null hypothesis of no differences among the population means tests
\[ H_{0}\colon\; \mu_{1} = \mu_{2} = \cdots = \mu_{g} \]
for g treatment groups. The permissible probability of a type I error is denoted by α; if the observed value of Fisher’s F-ratio test statistic is equal to or greater than the critical value of F that defines α, the null hypothesis is rejected with a probability of type I error equal to or less than α, under the assumptions of normality and homogeneity.
For multi-sample tests with g treatment groups and N observations, Fisher’s F-ratio test statistic is given by
\[ F = \frac{MS_{\text{Between}}}{MS_{\text{Within}}}\;, \]
where the mean-square between treatments is given by
\[ MS_{\text{Between}} = \frac{SS_{\text{Between}}}{g - 1}\;, \]
the sum-of-squares between treatments is given by
\[ SS_{\text{Between}} = \sum_{i=1}^{g} n_{i}\left( \bar{x}_{i} - \bar{\bar{x}} \right)^{2}\;, \]
the mean-square within treatments is given by
\[ MS_{\text{Within}} = \frac{SS_{\text{Within}}}{N - g}\;, \]
the sum-of-squares within treatments is given by
\[ SS_{\text{Within}} = \sum_{i=1}^{g} \sum_{j=1}^{n_{i}} \left( x_{ij} - \bar{x}_{i} \right)^{2}\;, \]
the sum-of-squares total is given by
\[ SS_{\text{Total}} = \sum_{i=1}^{g} \sum_{j=1}^{n_{i}} \left( x_{ij} - \bar{\bar{x}} \right)^{2}\;, \]
the mean value for the ith of g treatment groups is given by
\[ \bar{x}_{i} = \frac{1}{n_{i}} \sum_{j=1}^{n_{i}} x_{ij}\;, \]
the grand mean for all g treatment groups combined is given by
\[ \bar{\bar{x}} = \frac{1}{N} \sum_{i=1}^{g} \sum_{j=1}^{n_{i}} x_{ij}\;, \]
and the total number of observations is
\[ N = \sum_{i=1}^{g} n_{i}\;. \]
Under the Neyman–Pearson null hypothesis, H 0: μ 1 = μ 2 = ⋯ = μ g, test statistic F is asymptotically distributed as Snedecor’s F distribution with ν 1 = g − 1 degrees of freedom in the numerator and ν 2 = N − g degrees of freedom in the denominator. However, if any of the g populations is not normally distributed, then the distribution of test statistic F no longer follows Snedecor’s F distribution with ν 1 = g − 1 and ν 2 = N − g degrees of freedom.
The assumptions underlying Fisher’s F-ratio test for multiple independent samples are (1) the observations are independent, (2) the data are random samples from well-defined, normally-distributed populations, and (3) homogeneity of variance; that is, \(\sigma _{1}^{2} = \sigma _{2}^{2} = \cdots = \sigma _{g}^{2}\).
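The sum-of-squares decomposition above is easily verified numerically. The following is a minimal sketch in Python; the data are hypothetical integer values chosen to be consistent with the summary statistics reported later for Table 8.1 (group means 3, 4, 8; group variances 1.00, 1.00, 0.6667) and are not taken from the book itself.

```python
# A minimal sketch of the one-way ANOVA computations described above.
# Data: hypothetical values consistent with the Table 8.1 summary statistics.

def one_way_f(groups):
    """Return (SS_Between, SS_Within, F) for a list of samples."""
    N = sum(len(s) for s in groups)
    g = len(groups)
    grand = sum(sum(s) for s in groups) / N
    ss_between = sum(len(s) * (sum(s) / len(s) - grand) ** 2 for s in groups)
    ss_within = sum(sum((x - sum(s) / len(s)) ** 2 for x in s) for s in groups)
    return ss_between, ss_within, (ss_between / (g - 1)) / (ss_within / (N - g))

groups = [[2, 3, 4], [3, 4, 5], [7, 8, 8, 9]]
ss_b, ss_w, F = one_way_f(groups)
print(round(ss_b, 2), round(ss_w, 2), round(F, 4))
```

Note that the function works for unequal group sizes, since the between- and within-group sums of squares are accumulated group by group.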
8.2 A Permutation Approach
Now consider a test for multiple independent samples under the Fisher–Pitman permutation model of statistical inference. Under the Fisher–Pitman permutation model there is no null hypothesis specifying population parameters. Instead the null hypothesis simply states that all possible arrangements of the observations occur with equal chance [10]. Also, there is no alternative hypothesis under the permutation model and no specified α level. Moreover, there is no requirement of random sampling, no degrees of freedom, no assumption of normality, and no assumption of homogeneity of variance.
A permutation alternative to the conventional F test for multiple independent samples is easily defined. The permutation test statistic for g ≥ 3 independent samples is given by
\[ \delta = \sum_{i=1}^{g} C_{i}\,\xi_{i}\;, \]
where C i > 0 is a positive treatment-group weight for i = 1, …, g,
\[ \xi_{i} = \binom{n_{i}}{2}^{\!-1} \sum_{j<k} \Delta(j,k)\,\Psi_{i}(\omega_{j})\,\Psi_{i}(\omega_{k}) \]
is the average distance-function value for all distinct pairs of objects in sample S i for i = 1, …, g,
\[ \Delta(j,k) = \left| x_{j} - x_{k} \right|^{v} \]
denotes a symmetric distance-function value for a single pair of objects, and Ψ(⋅) is an indicator function given by
\[ \Psi_{i}(\omega_{j}) = \begin{cases} 1 & \text{if } \omega_{j} \in S_{i}\;, \\ 0 & \text{otherwise}\;. \end{cases} \]
Under the Fisher–Pitman permutation model, the null hypothesis simply states that equal probabilities are assigned to each of the
\[ M = \frac{N!}{\prod_{i=1}^{g} n_{i}!} \]
possible, equally-likely allocations of the N objects to the g samples [10]. The probability value associated with an observed value of δ, say δ o, is the probability under the null hypothesis of observing a value of δ as extreme or more extreme than δ o. Thus, an exact probability value for δ o may be expressed as
\[ P(\delta \le \delta_{\text{o}} \,|\, H_{0}) = \frac{\text{number of } \delta \text{ values} \le \delta_{\text{o}}}{M}\;. \]
When M is large, an approximate probability value for δ may be obtained from a Monte Carlo permutation procedure, where
\[ P(\delta \le \delta_{\text{o}} \,|\, H_{0}) = \frac{\text{number of } \delta \text{ values} \le \delta_{\text{o}}}{L} \]
and L denotes the number of randomly-sampled test statistic values. Typically, L is set to a large number to ensure accuracy; for example, L = 1,000,000 [11].
8.3 The Relationship Between Statistics F and δ
When the null hypothesis under the Neyman–Pearson population model states H 0: μ 1 = μ 2 = ⋯ = μ g, v = 2, and the treatment-group weights are given by
\[ C_{i} = \frac{n_{i} - 1}{N - g}\;, \quad i = 1,\,\ldots,\,g\;, \]
the functional relationships between test statistic δ and Fisher’s F-ratio test statistic are given by
\[ \delta = \frac{2\,SS_{\text{Total}}}{N - g + (g-1)F} \qquad \text{and} \qquad F = \frac{N-g}{g-1}\left( \frac{2\,SS_{\text{Total}}}{(N-g)\,\delta} - 1 \right)\;, \]
where
\[ SS_{\text{Total}} = \sum_{i=1}^{N} \left( x_{i} - \bar{\bar{x}} \right)^{2} \]
and x i is a univariate measurement score for the ith of N objects. The permutation analogue of the F test is generally known as the Fisher–Pitman permutation test [3].
Because of the relationship between test statistics δ and F, the exact probability values given by
\[ P(\delta \le \delta_{\text{o}} \,|\, H_{0}) = \frac{\text{number of } \delta \text{ values} \le \delta_{\text{o}}}{M} \]
and
\[ P(F \ge F_{\text{o}} \,|\, H_{0}) = \frac{\text{number of } F \text{ values} \ge F_{\text{o}}}{M} \]
are equivalent under the Fisher–Pitman null hypothesis, where δ o and F o denote the observed values of δ and F, respectively, and M is the number of possible, equally-likely arrangements of the observed data.
A chance-corrected measure of agreement among the N measurement scores is given by
\[ \Re = 1 - \frac{\delta}{\mu_{\delta}}\;, \]
where μ δ is the arithmetic average of the M δ test statistic values calculated on all possible arrangements of the observed measurements; that is,
\[ \mu_{\delta} = \frac{1}{M} \sum_{i=1}^{M} \delta_{i}\;. \]
Alternatively, in terms of a one-way analysis of variance model, the exact expected value of test statistic δ is a simple function of the total sum-of-squares; that is,
\[ \mu_{\delta} = \frac{2\,SS_{\text{Total}}}{N - 1}\;. \]
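The closed form for μ δ makes the chance-corrected measure trivial to compute without enumeration. A short sketch, using hypothetical data consistent with the Table 8.1 summary statistics and the observed δ from the example that follows:

```python
# Chance-corrected measure R = 1 - delta/mu_delta, with mu_delta obtained
# from the closed form 2 * SS_Total / (N - 1).
data = [2, 3, 4, 3, 4, 5, 7, 8, 8, 9]   # hypothetical Table 8.1-style data
N = len(data)
grand = sum(data) / N
ss_total = sum((x - grand) ** 2 for x in data)

mu_delta = 2 * ss_total / (N - 1)   # exact expected value of delta
delta_obs = 12 / 7                  # observed delta (= 1.7143) from the example
R = 1 - delta_obs / mu_delta        # chance-corrected agreement
print(round(mu_delta, 4), round(R, 4))
```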
8.4 Example 1: Test Statistics F and δ
A small example will serve to illustrate the relationship between test statistics F and δ. Consider the example data listed in Table 8.1 with g = 3 treatment groups, sample sizes of n 1 = n 2 = 3, n 3 = 4, and N = n 1 + n 2 + n 3 = 3 + 3 + 4 = 10 total observations. Under the Neyman–Pearson population model with sample sizes n 1 = n 2 = 3 and n 3 = 4, treatment-group means \(\bar {x}_{1} = 3\), \(\bar {x}_{2} = 4\), and \(\bar {x}_{3} = 8\), grand mean \(\bar {\bar {x}} = 5.30\), and estimated population variances \(s_{1}^{2} = s_{2}^{2} = 1.00\) and \(s_{3}^{2} = 0.6667\), the sum-of-squares between treatments is
\[ SS_{\text{Between}} = \sum_{i=1}^{g} n_{i}\left( \bar{x}_{i} - \bar{\bar{x}} \right)^{2} = 3(3 - 5.30)^{2} + 3(4 - 5.30)^{2} + 4(8 - 5.30)^{2} = 50.10\;, \]
the sum-of-squares within treatments is
\[ SS_{\text{Within}} = \sum_{i=1}^{g} \left( n_{i} - 1 \right) s_{i}^{2} = 2(1.00) + 2(1.00) + 3(0.6667) = 6.00\;, \]
the sum-of-squares total is
\[ SS_{\text{Total}} = SS_{\text{Between}} + SS_{\text{Within}} = 50.10 + 6.00 = 56.10\;, \]
the mean-square between treatments is
\[ MS_{\text{Between}} = \frac{SS_{\text{Between}}}{g - 1} = \frac{50.10}{3 - 1} = 25.05\;, \]
the mean-square within treatments is
\[ MS_{\text{Within}} = \frac{SS_{\text{Within}}}{N - g} = \frac{6.00}{10 - 3} = 0.8571\;, \]
and the observed value of Fisher’s F-ratio test statistic is
\[ F = \frac{MS_{\text{Between}}}{MS_{\text{Within}}} = \frac{25.05}{0.8571} = 29.2250\;. \]
The essential factors, sums of squares (SS), degrees of freedom (df), mean squares (MS), and variance-ratio test statistic (F) are summarized in Table 8.2.
Under the Neyman–Pearson null hypothesis, H 0: μ 1 = μ 2 = μ 3, Fisher’s F-ratio test statistic is asymptotically distributed as Snedecor’s F with ν 1 = g − 1 and ν 2 = N − g degrees of freedom. With ν 1 = g − 1 = 3 − 1 = 2 and ν 2 = N − g = 10 − 3 = 7 degrees of freedom, the asymptotic probability value of F = 29.2250 is P = 0.4001×10−3, under the assumptions of normality and homogeneity.
8.4.1 An Exact Analysis with v = 2
For the first permutation analysis of the example data listed in Table 8.1 let v = 2, employing squared Euclidean scaling, and let the treatment-group weights be given by
\[ C_{i} = \frac{n_{i} - 1}{N - g}\;, \quad i = 1,\,\ldots,\,g\;, \]
for correspondence with Fisher’s F-ratio test statistic.
Because there are only
\[ M = \frac{N!}{\prod_{i=1}^{g} n_{i}!} = \frac{10!}{3!\;3!\;4!} = 4200 \]
possible, equally-likely arrangements in the reference set of all permutations of the N = 10 observations listed in Table 8.1, an exact permutation analysis is feasible. While M = 4200 arrangements are too many to list, Table 8.3 illustrates the calculation of the ξ, δ, and F values for a small sample of the M possible arrangements of the N = 10 observations listed in Table 8.1.
Following Eq. (8.2) on p. 261, the N = 10 observations yield g = 3 average distance-function values of
\[ \xi_{1} = 2.00\;, \qquad \xi_{2} = 2.00\;, \qquad \text{and} \qquad \xi_{3} = 1.3333\;. \]
Alternatively, in terms of a one-way analysis of variance model the average distance-function values are \(\xi _{1} = 2s_{1}^{2} = 2(1.00) = 2.00\), \(\xi _{2} = 2s_{2}^{2} = 2(1.00) = 2.00\), and \(\xi _{3} = 2s_{3}^{2} = 2(0.6667) = 1.3333\).
Following Eq. (8.1) on p. 261, the observed value of the permutation test statistic based on v = 2 and treatment-group weights
\[ C_{i} = \frac{n_{i} - 1}{N - g}\;, \quad i = 1,\,\ldots,\,g\;, \]
is
\[ \delta = \sum_{i=1}^{g} C_{i}\,\xi_{i} = \frac{2}{7}(2.00) + \frac{2}{7}(2.00) + \frac{3}{7}(1.3333) = 1.7143\;. \]
Alternatively, in terms of a one-way analysis of variance model the permutation test statistic is
\[ \delta = \frac{2\,SS_{\text{Within}}}{N - g} = \frac{2(6.00)}{10 - 3} = 1.7143\;. \]
For the example data listed in Table 8.1, the sum of the N = 10 observations is
\[ \sum_{i=1}^{N} x_{i} = 53\;, \]
the sum of the N = 10 squared observations is
\[ \sum_{i=1}^{N} x_{i}^{2} = 337\;, \]
and the total sum-of-squares is
\[ SS_{\text{Total}} = \sum_{i=1}^{N} \left( x_{i} - \bar{\bar{x}} \right)^{2} = 337 - \frac{(53)^{2}}{10} = 56.10\;, \]
where \(\bar {\bar {x}}\) denotes the grand mean of all N = 10 observations. Then following the expressions given in Eq. (8.5) on p. 262 for test statistics δ and F, the observed value of test statistic δ with respect to test statistic F is
\[ \delta = \frac{2\,SS_{\text{Total}}}{N - g + (g-1)F} = \frac{2(56.10)}{10 - 3 + (3-1)(29.2250)} = 1.7143 \]
and the observed value of test statistic F with respect to test statistic δ is
\[ F = \frac{N-g}{g-1}\left( \frac{2\,SS_{\text{Total}}}{(N-g)\,\delta} - 1 \right) = \frac{10-3}{3-1}\left( \frac{2(56.10)}{(10-3)(1.7143)} - 1 \right) = 29.2250\;. \]
Under the Fisher–Pitman permutation model, the exact probability of an observed δ is the proportion of δ test statistic values computed on all possible, equally-likely arrangements of the N = 10 observations listed in Table 8.1 that are equal to or less than the observed value of δ = 1.7143. There are exactly 10 δ test statistic values that are equal to or less than the observed value of δ = 1.7143. If all M arrangements of the N = 10 observations listed in Table 8.1 occur with equal chance under the Fisher–Pitman null hypothesis, the exact probability value of δ = 1.7143 computed on all M = 4200 arrangements of the observed data with n 1 = n 2 = 3 and n 3 = 4 preserved for each arrangement is
\[ P(\delta \le \delta_{\text{o}} \,|\, H_{0}) = \frac{\text{number of } \delta \text{ values} \le \delta_{\text{o}}}{M} = \frac{10}{4200} = 0.2381{\times}10^{-2}\;, \]
where δ o denotes the observed value of test statistic δ and M is the number of possible, equally-likely arrangements of the N = 10 observations listed in Table 8.1.
Alternatively, there are only 10 F values that are equal to or larger than the observed value of F = 29.2250. Thus, if all arrangements of the observed data occur with equal chance, the exact probability value of F = 29.2250 under the Fisher–Pitman null hypothesis is
\[ P(F \ge F_{\text{o}} \,|\, H_{0}) = \frac{\text{number of } F \text{ values} \ge F_{\text{o}}}{M} = \frac{10}{4200} = 0.2381{\times}10^{-2}\;, \]
where F o denotes the observed value of test statistic F.
Following Eq. (8.7) on p. 263, the exact expected value of the M = 4200 δ test statistic values under the Fisher–Pitman null hypothesis is
\[ \mu_{\delta} = \frac{1}{M} \sum_{i=1}^{M} \delta_{i} = 12.4667\;. \]
Alternatively, in terms of a one-way analysis of variance model the exact expected value of test statistic δ is
\[ \mu_{\delta} = \frac{2\,SS_{\text{Total}}}{N - 1} = \frac{2(56.10)}{10 - 1} = 12.4667\;. \]
Following Eq. (8.6) on p. 263, the observed chance-corrected measure of effect size is
\[ \Re = 1 - \frac{\delta}{\mu_{\delta}} = 1 - \frac{1.7143}{12.4667} = +0.8625\;, \]
indicating approximately 86% within-group agreement above what is expected by chance. Alternatively, in terms of a one-way analysis of variance model the chance-corrected measure of effect size is
\[ \Re = 1 - \frac{(N-1)\,SS_{\text{Within}}}{(N-g)\,SS_{\text{Total}}} = 1 - \frac{(10-1)(6.00)}{(10-3)(56.10)} = +0.8625\;. \]
8.5 Example 2: Measures of Effect Size
Measures of effect size express the practical or clinical significance of differences among multiple independent sample means, as contrasted with the statistical significance of differences. Five measures of effect size are commonly used for determining the magnitude of treatment effects for multiple independent samples: Cohen’s \(\hat {d}\), Pearson’s η 2, Kelley’s \(\hat {\eta }^{2}\), Hays’ \(\hat {\omega }_{F}^{2}\) for fixed models, and Hays’ \(\hat {\omega }_{R}^{2}\) for random models. Cohen’s \(\hat {d}\) measure of effect size is given by
where n denotes the common size of each treatment group. Pearson’s η 2 measure of effect size is given by
\[ \eta^{2} = \frac{SS_{\text{Between}}}{SS_{\text{Total}}}\;, \]
which is equivalent to Pearson’s r 2 for a one-way analysis of variance design. Kelley’s “unbiased” correlation ratio is given by
\[ \hat{\eta}^{2} = 1 - \frac{N-1}{N-g}\left( 1 - \eta^{2} \right)\;, \]
which is equivalent to an adjusted or “shrunken” squared multiple correlation coefficient reported by most computer statistical packages and given by
\[ \hat{R}^{2} = 1 - \frac{N-1}{N-p-1}\left( 1 - R^{2} \right)\;, \]
where R 2 is the squared product-moment multiple correlation coefficient and p is the number of predictors. Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size for a fixed-effects analysis of variance model is given by
\[ \hat{\omega}_{\text{F}}^{2} = \frac{SS_{\text{Between}} - (g-1)\,MS_{\text{Within}}}{SS_{\text{Total}} + MS_{\text{Within}}}\;. \]
Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size for a random-effects analysis of variance model is given by
\[ \hat{\omega}_{\text{R}}^{2} = \frac{MS_{\text{Between}} - MS_{\text{Within}}}{MS_{\text{Between}} + (n-1)\,MS_{\text{Within}}}\;, \]
where n denotes the common size of each treatment group. Mielke and Berry’s \(\Re \) chance-corrected measure of effect size is given by
\[ \Re = 1 - \frac{\delta}{\mu_{\delta}}\;, \]
where δ is defined in Eq. (8.1) on p. 261 and μ δ is the exact expected value of δ under the Fisher–Pitman null hypothesis given by
\[ \mu_{\delta} = \frac{1}{M} \sum_{i=1}^{M} \delta_{i}\;, \]
where, for a test of g ≥ 3 independent samples, the number of possible, equally-likely arrangements of the observed data is given by
\[ M = \frac{N!}{\prod_{i=1}^{g} n_{i}!}\;. \]
For the example data listed in Table 8.1 on p. 263 for N = 10 observations, Cohen’s \(\hat {d}\) measure of effect size is
Pearson’s r 2 measure of effect size is usually labeled as η 2 when reported with an analysis of variance. For the example data listed in Table 8.1, η 2 is
\[ \eta^{2} = \frac{SS_{\text{Between}}}{SS_{\text{Total}}} = \frac{50.10}{56.10} = 0.8930\;. \]
Kelley’s \(\hat {\eta }^{2}\) measure of effect size is
\[ \hat{\eta}^{2} = 1 - \frac{N-1}{N-g}\left( 1 - \eta^{2} \right) = 1 - \frac{10-1}{10-3}\left( 1 - 0.8930 \right) = 0.8625\;. \]
Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size for a fixed-effects analysis of variance model is
\[ \hat{\omega}_{\text{F}}^{2} = \frac{SS_{\text{Between}} - (g-1)\,MS_{\text{Within}}}{SS_{\text{Total}} + MS_{\text{Within}}} = \frac{50.10 - (3-1)(0.8571)}{56.10 + 0.8571} = 0.8495\;. \]
Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size for a random-effects analysis of variance model is
\[ \hat{\omega}_{\text{R}}^{2} = \frac{MS_{\text{Between}} - MS_{\text{Within}}}{MS_{\text{Between}} + (\bar{n}-1)\,MS_{\text{Within}}} = \frac{25.05 - 0.8571}{25.05 + (3.3333-1)(0.8571)} = 0.8944\;, \]
and Mielke and Berry’s \(\Re \) chance-corrected measure of effect size is
\[ \Re = 1 - \frac{\delta}{\mu_{\delta}} = 1 - \frac{1.7143}{12.4667} = +0.8625\;, \]
where the exact expected value of test statistic δ under the Fisher–Pitman null hypothesis is
\[ \mu_{\delta} = \frac{2\,SS_{\text{Total}}}{N - 1} = \frac{2(56.10)}{10 - 1} = 12.4667\;. \]
It can easily be shown that Mielke and Berry’s \(\Re \) chance-corrected measure of effect size is identical to Kelley’s \(\hat {\eta }^{2}\) measure of effect size for a one-way, completely-randomized analysis of variance design, under the Neyman–Pearson population model.
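The identity between Kelley’s \(\hat {\eta }^{2}\) and \(\Re \) is easily checked numerically. The sketch below computes the effect sizes from the Example 1 ANOVA summary quantities using the standard textbook estimators assumed in this chapter:

```python
# Effect sizes computed from the Example 1 ANOVA summary quantities
# (SS_Between = 50.10, SS_Within = 6.00, g = 3, N = 10).
g, N = 3, 10
ss_between, ss_within = 50.10, 6.00
ss_total = ss_between + ss_within
ms_within = ss_within / (N - g)

eta2 = ss_between / ss_total                    # Pearson's eta-squared
kelley = 1 - (N - 1) / (N - g) * (1 - eta2)     # Kelley's adjusted eta-squared (= R)
omega2_f = (ss_between - (g - 1) * ms_within) / (ss_total + ms_within)
print(round(eta2, 4), round(kelley, 4), round(omega2_f, 4))
```

Kelley’s value, 0.8625, matches the chance-corrected \(\Re \) obtained from δ and μ δ above, illustrating the stated equivalence.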
8.5.1 Comparisons of Effect Size Measures
In this section the various measures of effect size are compared and contrasted. Because Pearson’s r 2 and η 2 are equivalent and Kelley’s \(\hat {\eta }^{2}\) and Mielke and Berry’s \(\Re \) are equivalent for multi-sample designs, only η 2 and \(\Re \) are utilized for the comparisons. The functional relationships between Cohen’s \(\hat {d}\) measure of effect size and Pearson’s η 2 (r 2) measure of effect size for g ≥ 3 independent samples are given by
where n denotes the common treatment-group size. The relationships between Cohen’s \(\hat {d}\) measure of effect size and Mielke and Berry’s \(\Re \) (\(\hat {\eta }^{2}\)) chance-corrected measure of effect size are given by
The relationships between Cohen’s \(\hat {d}\) measure of effect size and Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size for a fixed-effects model are given by
and
The relationships between Cohen’s \(\hat {d}\) measure of effect size and Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size for a random-effects model are given by
The relationships between Pearson’s η 2 (r 2) measure of effect size and Mielke and Berry’s \(\Re \) (\(\hat {\eta }^{2}\)) measure of effect size are given by
\[ \Re = 1 - \frac{N-1}{N-g}\left( 1 - \eta^{2} \right) \qquad \text{and} \qquad \eta^{2} = 1 - \frac{N-g}{N-1}\left( 1 - \Re \right)\;. \]
The relationships between Pearson’s η 2 (r 2) measure of effect size and Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size for a fixed-effects model are given by
and
The relationships between Pearson’s η 2 (r 2) measure of effect size and Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size for a random-effects model are given by
and
The relationships between Mielke and Berry’s \(\Re \) (\(\hat {\eta }^{2}\)) measure of effect size and Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size for a fixed-effects model are given by
The relationships between Mielke and Berry’s \(\Re \) (\(\hat {\eta }^{2}\)) measure of effect size and Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size for a random-effects model are given by
and
And the relationships between Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size for a fixed-effects model and Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size for a random-effects model are given by
8.5.2 Example Comparisons of Effect Size Measures
In this section comparisons of Cohen’s \(\hat {d}\), Pearson’s η 2, Mielke and Berry’s \(\Re \), Hays’ \(\hat {\omega }_{\text{F}}^{2}\), and Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measures of effect size are illustrated with the example data listed in Table 8.1 on p. 263 with n 1 = n 2 = 3, n 3 = 4, and N = n 1 + n 2 + n 3 = 3 + 3 + 4 = 10 observations. Because the treatment-group sizes are unequal, the ns in the equations for Cohen’s \(\hat {d}\) and Hays’ \(\hat {\omega }_{\text{R}}^{2}\) are replaced with a simple average; that is, \(\bar {n} = (3+3+4)/3 = 3.3333\).
Given the example data listed in Table 8.1 and following the expressions given in Eq. (8.8) for Cohen’s \(\hat {d}\) measure of effect size and Pearson’s η 2 (r 2) measure of effect size, the observed value for Cohen’s \(\hat {d}\) measure of effect size with respect to the observed value of Pearson’s η 2 (r 2) measure of effect size is
and the observed value for Pearson’s η 2 (r 2) measure of effect size with respect to the observed value of Cohen’s \(\hat {d}\) measure of effect size is
Following the expressions given in Eq. (8.9) for Cohen’s \(\hat {d}\) measure of effect size and Mielke and Berry’s \(\Re \) (\(\hat {\eta }^{2}\)) measure of effect size, the observed value for Cohen’s \(\hat {d}\) measure of effect size with respect to the observed value of Mielke and Berry’s \(\Re \) (\(\hat {\eta }^{2}\)) measure of effect size is
and the observed value for Mielke and Berry’s \(\Re \) (\(\hat {\eta }^{2}\)) measure of effect size with respect to the observed value of Cohen’s \(\hat {d}\) measure of effect size is
Following the expressions given in Eqs. (8.10) and (8.11) for Cohen’s \(\hat {d}\) measure of effect size and Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size for a fixed-effects model, the observed value for Cohen’s \(\hat {d}\) measure of effect size with respect to the observed value of Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size is
and the observed value for Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size with respect to the observed value of Cohen’s \(\hat {d}\) measure of effect size is
Following the expressions given in Eq. (8.12) for Cohen’s \(\hat {d}\) measure of effect size and Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size for a random-effects model, the observed value for Cohen’s \(\hat {d}\) measure of effect size with respect to the observed value of Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size is
and the observed value of Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size with respect to the observed value of Cohen’s \(\hat {d}\) measure of effect size is
Following the expressions given in Eq. (8.13) for Pearson’s η 2 (r 2) measure of effect size and Mielke and Berry’s \(\Re \) (\(\hat {\eta }^{2}\)) measure of effect size, the observed value for Pearson’s η 2 (r 2) measure of effect size with respect to the observed value of Mielke and Berry’s \(\Re \) (\(\hat {\eta }^{2}\)) measure of effect size is
\[ \eta^{2} = 1 - \frac{N-g}{N-1}\left( 1 - \Re \right) = 1 - \frac{10-3}{10-1}\left( 1 - 0.8625 \right) = 0.8930 \]
and the observed value for Mielke and Berry’s \(\Re \) (\(\hat {\eta }^{2}\)) measure of effect size with respect to the observed value of Pearson’s η 2 (r 2) measure of effect size is
\[ \Re = 1 - \frac{N-1}{N-g}\left( 1 - \eta^{2} \right) = 1 - \frac{10-1}{10-3}\left( 1 - 0.8930 \right) = 0.8625\;. \]
Following the expressions given in Eqs. (8.14) and (8.15) for Pearson’s η 2 (r 2) measure of effect size and Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size for a fixed-effects model, the observed value for Pearson’s η 2 (r 2) measure of effect size with respect to the observed value of Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size is
and the observed value for Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size with respect to the observed value of Pearson’s η 2 (r 2) measure of effect size is
Following the expressions given in Eqs. (8.16) and (8.17) for Pearson’s η 2 (r 2) measure of effect size and Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size for a random-effects model, the observed value for Pearson’s η 2 (r 2) measure of effect size with respect to the observed value of Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size is
and the observed value for Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size with respect to the observed value of Pearson’s η 2 (r 2) measure of effect size is
Following the expressions given in Eq. (8.18) for Mielke and Berry’s \(\Re \) (\(\hat {\eta }^{2}\)) measure of effect size and Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size for a fixed-effects model, the observed value for Mielke and Berry’s \(\Re \) (\(\hat {\eta }^{2}\)) measure of effect size with respect to the observed value of Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size is
and the observed value for Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size with respect to the observed value of Mielke and Berry’s \(\Re \) (\(\hat {\eta }^{2}\)) measure of effect size is
Following the expressions given in Eqs. (8.19) and (8.20) for Mielke and Berry’s \(\Re \) (\(\hat {\eta }^{2}\)) measure of effect size and Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size for a random-effects model, the observed value for Mielke and Berry’s \(\Re \) (\(\hat {\eta }^{2}\)) measure of effect size with respect to the observed value of Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size is
and the observed value for Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size with respect to the observed value for Mielke and Berry’s \(\Re \) (\(\hat {\eta }^{2}\)) measure of effect size is
Following the expressions given in Eq. (8.21) for Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size for a fixed-effects model and Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size for a random-effects model, the observed value for Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size with respect to the observed value of Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size is
and the observed value for Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size with respect to the observed value of Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size is
8.6 Example 3: Analyses with v = 2 and v = 1
For a third example of tests of differences among g ≥ 3 independent samples, consider the example data set given in Table 8.4 with g = 4 treatment groups, sample sizes of n 1 = n 2 = n 3 = n 4 = 7, and N = 28 total observations. Under the Neyman–Pearson population model with sample sizes n 1 = n 2 = n 3 = n 4 = 7, treatment-group means \(\bar {x}_{1} = 20.4286\), \(\bar {x}_{2} = 20.8571\), \(\bar {x}_{3} = 9.1429\), and \(\bar {x}_{4} = 14.1429\), grand mean \(\bar {\bar {x}} = 16.1429\), and estimated population variances \(s_{1}^{2} = 27.9524\), \(s_{2}^{2} = 35.4762\), and \(s_{3}^{2} = s_{4}^{2} = 8.8095\), the sum-of-squares between treatments is
\[ SS_{\text{Between}} = \sum_{i=1}^{g} n_{i}\left( \bar{x}_{i} - \bar{\bar{x}} \right)^{2} = 7\left[ (20.4286 - 16.1429)^{2} + (20.8571 - 16.1429)^{2} + (9.1429 - 16.1429)^{2} + (14.1429 - 16.1429)^{2} \right] = 655.1429\;, \]
the sum-of-squares within treatments is
\[ SS_{\text{Within}} = \sum_{i=1}^{g} \left( n_{i} - 1 \right) s_{i}^{2} = 6\left( 27.9524 + 35.4762 + 8.8095 + 8.8095 \right) = 486.2857\;, \]
the sum-of-squares total is
\[ SS_{\text{Total}} = 655.1429 + 486.2857 = 1141.4286\;, \]
the mean-square between treatments is
\[ MS_{\text{Between}} = \frac{655.1429}{4 - 1} = 218.3810\;, \]
the mean-square within treatments is
\[ MS_{\text{Within}} = \frac{486.2857}{28 - 4} = 20.2619\;, \]
and the observed value of Fisher’s F-ratio test statistic is
\[ F = \frac{MS_{\text{Between}}}{MS_{\text{Within}}} = \frac{218.3810}{20.2619} = 10.7779\;. \]
The essential factors, sums of squares (SS), degrees of freedom (df), mean squares (MS), and variance-ratio test statistic (F) are summarized in Table 8.5.
Under the Neyman–Pearson null hypothesis, H 0: μ 1 = μ 2 = μ 3 = μ 4, Fisher’s F-ratio test statistic is asymptotically distributed as Snedecor’s F with ν 1 = g − 1 and ν 2 = N − g degrees of freedom. With ν 1 = g − 1 = 4 − 1 = 3 and ν 2 = N − g = 28 − 4 = 24 degrees of freedom, the asymptotic probability value of F = 10.7779 is P = 0.1122×10−3, under the assumptions of normality and homogeneity.
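Since Table 8.4 reports group sizes, means, and variances, F can be reconstructed from those summary statistics alone. A minimal sketch; the function name is illustrative:

```python
# Fisher's F reconstructed from group summary statistics (sizes, means,
# unbiased variances), as reported for Table 8.4; no raw data needed.
def f_from_summaries(ns, means, variances):
    N, g = sum(ns), len(ns)
    grand = sum(n * m for n, m in zip(ns, means)) / N
    ss_between = sum(n * (m - grand) ** 2 for n, m in zip(ns, means))
    ss_within = sum((n - 1) * v for n, v in zip(ns, variances))
    return (ss_between / (g - 1)) / (ss_within / (N - g))

F = f_from_summaries(
    ns=[7, 7, 7, 7],
    means=[20.4286, 20.8571, 9.1429, 14.1429],
    variances=[27.9524, 35.4762, 8.8095, 8.8095],
)
print(round(F, 4))
```

Because SS Within = Σ (n i − 1) s i², the rounded variances reported in the text reproduce F ≈ 10.778 to three decimal places.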
8.6.1 A Monte Carlo Analysis with v = 2
For the first analysis of the example data listed in Table 8.4 on p. 278 under the Fisher–Pitman permutation model let v = 2, employing squared Euclidean scaling, and let the treatment-group weights be given by
\[ C_{i} = \frac{n_{i} - 1}{N - g}\;, \quad i = 1,\,\ldots,\,g\;, \]
for correspondence with Fisher’s F-ratio test statistic.
Because there are
\[ M = \frac{N!}{\prod_{i=1}^{g} n_{i}!} = \frac{28!}{7!\;7!\;7!\;7!} = 472{,}518{,}347{,}558{,}400 \]
possible, equally-likely arrangements in the reference set of all permutations of the N = 28 observations listed in Table 8.4, an exact permutation analysis is not possible and a Monte Carlo analysis is required.
Following Eq. (8.2) on p. 261, the N = 28 observations yield g = 4 average distance-function values of
\[ \xi_{1} = 55.9048\;, \qquad \xi_{2} = 70.9524\;, \qquad \xi_{3} = 17.6190\;, \qquad \text{and} \qquad \xi_{4} = 17.6190\;. \]
Alternatively, in terms of a one-way analysis of variance model the average distance-function values are \(\xi _{1} = 2s_{1}^{2} = 2(27.9524) = 55.9048\), \(\xi _{2} = 2s_{2}^{2} = 2(35.4762) = 70.9524\), \(\xi _{3} = 2s_{3}^{2} = 2(8.8095) = 17.6190\), and \(\xi _{4} = 2s_{4}^{2} = 2(8.8095) = 17.6190\).
Following Eq. (8.1) on p. 261, the observed value of the permutation test statistic based on v = 2 and treatment-group weights
\[ C_{i} = \frac{n_{i} - 1}{N - g}\;, \quad i = 1,\,\ldots,\,g\;, \]
is
\[ \delta = \sum_{i=1}^{g} C_{i}\,\xi_{i} = \frac{6}{24}\left( 55.9048 + 70.9524 + 17.6190 + 17.6190 \right) = 40.5238\;. \]
Alternatively, in terms of a one-way analysis of variance model the permutation test statistic is
\[ \delta = \frac{2\,SS_{\text{Within}}}{N - g} = \frac{2(486.2857)}{28 - 4} = 40.5238\;. \]
For the example data listed in Table 8.4, the sum of the N = 28 observations is
\[ \sum_{i=1}^{N} x_{i} = 452\;, \]
the sum of the N = 28 squared observations is
\[ \sum_{i=1}^{N} x_{i}^{2} = 8438\;, \]
and the total sum-of-squares is
\[ SS_{\text{Total}} = \sum_{i=1}^{N} \left( x_{i} - \bar{\bar{x}} \right)^{2} = 8438 - \frac{(452)^{2}}{28} = 1141.4286\;, \]
where \(\bar {\bar {x}}\) denotes the grand mean of all N = 28 observations.
Then following the expressions given in Eq. (8.5) on p. 262 for test statistics δ and F, the observed value for test statistic δ with respect to the observed value of test statistic F is
\[ \delta = \frac{2\,SS_{\text{Total}}}{N - g + (g-1)F} = \frac{2(1141.4286)}{28 - 4 + (4-1)(10.7779)} = 40.5238 \]
and the observed value of test statistic F with respect to the observed value of test statistic δ is
\[ F = \frac{N-g}{g-1}\left( \frac{2\,SS_{\text{Total}}}{(N-g)\,\delta} - 1 \right) = \frac{28-4}{4-1}\left( \frac{2(1141.4286)}{(28-4)(40.5238)} - 1 \right) = 10.7779\;. \]
Under the Fisher–Pitman permutation model, the Monte Carlo probability of an observed δ is the proportion of δ test statistic values computed on the randomly-selected, equally-likely arrangements of the N = 28 observations listed in Table 8.4 that are equal to or less than the observed value of δ = 40.5238. There are exactly 138 δ test statistic values that are equal to or less than the observed value of δ = 40.5238. If all M arrangements of the N = 28 observations listed in Table 8.4 occur with equal chance under the Fisher–Pitman null hypothesis, the Monte Carlo probability value of δ = 40.5238 computed on L = 1,000,000 random arrangements of the observed data with n 1 = n 2 = n 3 = n 4 = 7 preserved for each arrangement is
\[ P(\delta \le \delta_{\text{o}} \,|\, H_{0}) = \frac{\text{number of } \delta \text{ values} \le \delta_{\text{o}}}{L} = \frac{138}{1{,}000{,}000} = 0.1380{\times}10^{-3}\;, \]
where δ o denotes the observed value of test statistic δ and L is the number of randomly-selected, equally-likely arrangements of the N = 28 observations listed in Table 8.4.
In terms of a one-way analysis of variance model, there are only 138 F values that are equal to or larger than the observed value of F = 10.7779. Thus, if all arrangements of the observed data occur with equal chance, the Monte Carlo probability value of F = 10.7779 under the Fisher–Pitman null hypothesis is
\[ P(F \ge F_{\text{o}} \,|\, H_{0}) = \frac{\text{number of } F \text{ values} \ge F_{\text{o}}}{L} = \frac{138}{1{,}000{,}000} = 0.1380{\times}10^{-3}\;, \]
where F o denotes the observed value of test statistic F and L is the number of random, equally-likely arrangements of the example data listed in Table 8.4.
Following Eq. (8.7) on p. 263, the exact expected value of the M = 472,518,347,558,400 δ test statistic values under the Fisher–Pitman null hypothesis is
\[ \mu_{\delta} = \frac{1}{M} \sum_{i=1}^{M} \delta_{i} = 84.5503\;. \]
Alternatively, in terms of a one-way analysis of variance model the exact expected value of test statistic δ under the Fisher–Pitman null hypothesis is
\[ \mu_{\delta} = \frac{2\,SS_{\text{Total}}}{N - 1} = \frac{2(1141.4286)}{28 - 1} = 84.5503\;. \]
Following Eq. (8.6) on p. 263, the observed chance-corrected measure of effect size is
\[ \Re = 1 - \frac{\delta}{\mu_{\delta}} = 1 - \frac{40.5238}{84.5503} = +0.5207\;, \]
indicating approximately 52% within-group agreement above what is expected by chance. Alternatively, in terms of a one-way analysis of variance model, the observed chance-corrected measure of effect size is
\[ \Re = 1 - \frac{(N-1)\,SS_{\text{Within}}}{(N-g)\,SS_{\text{Total}}} = 1 - \frac{(28-1)(486.2857)}{(28-4)(1141.4286)} = +0.5207\;. \]
Alternatively, in terms of Fisher’s F-ratio test statistic the chance-corrected measure of effect size is
\[ \Re = 1 - \frac{N-1}{N - g + (g-1)F} = 1 - \frac{28-1}{28 - 4 + (4-1)(10.7779)} = +0.5207\;. \]
8.6.2 Measures of Effect Size
For the example data listed in Table 8.4, Cohen’s \(\hat {d}\) measure of effect size is
Pearson’s η 2 (r 2) measure of effect size is
Kelley’s \(\hat {\eta }^{2}\) measure of effect size is
Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size for a fixed-effects model is
Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size for a random-effects model is
and the observed chance-corrected measure of effect size is
indicating approximately 52% within-group agreement above what is expected by chance.
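The chance-corrected measure of effect size used throughout these analyses is Mielke and Berry's ℜ = 1 − δ∕μ_δ, where μ_δ denotes the exact expected value of δ under the Fisher–Pitman null hypothesis. A minimal helper, with purely hypothetical input values, can make the interpretation concrete:

```python
def effect_size_R(delta_obs, mu_delta):
    """Mielke-Berry chance-corrected effect size: R = 1 - delta / mu_delta.
    R = 0 indicates chance-level within-group agreement and R = 1 indicates
    perfect within-group agreement (delta = 0)."""
    return 1.0 - delta_obs / mu_delta

# Hypothetical values: an observed delta of 40.0 against an expected
# value of 80.0 yields R = 0.5, i.e., 50% agreement above chance.
r = effect_size_R(40.0, 80.0)
```

For v = 2 with the weights used here, μ_δ reduces to 2 SS_Total∕(N − 1), so ℜ can be read as a chance-corrected analogue of the conventional variance-accounted-for measures.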
8.6.3 A Monte Carlo Analysis with v = 1
Consider a second analysis of the example data listed in Table 8.4 on p. 278 under the Fisher–Pitman permutation model with v = 1 and treatment-group weights
For v = 1, the average distance-function values for the g = 4 treatment groups are
respectively, and the observed permutation test statistic is
Because there are
possible, equally-likely arrangements in the reference set of all permutations of the N = 28 observations listed in Table 8.4, an exact permutation analysis is impossible and a Monte Carlo permutation analysis is required. Under the Fisher–Pitman permutation model, the Monte Carlo probability of an observed δ is the proportion of δ test statistic values computed on the randomly-selected, equally-likely arrangements of the N = 28 observations listed in Table 8.4 that are equal to or less than the observed value of δ = 5.1905. There are exactly 204 δ test statistic values that are equal to or less than the observed value of δ = 5.1905. If all M arrangements of the N = 28 observations listed in Table 8.4 occur with equal chance under the Fisher–Pitman null hypothesis, the Monte Carlo probability value of δ = 5.1905 computed on L = 1, 000, 000 random arrangements of the observed data with n 1 = n 2 = n 3 = n 4 = 7 preserved for each arrangement is
where δ o denotes the observed value of test statistic δ and L is the number of randomly-selected, equally-likely arrangements of the N = 28 observations listed in Table 8.4. No comparison is made with Fisher’s F-ratio test statistic as F is undefined for ordinary Euclidean scaling.
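The size of the reference set that rules out an exact analysis here is easy to confirm: the number of distinguishable arrangements of N observations into groups of sizes n_1, …, n_g is the multinomial coefficient N!∕(n_1! ⋯ n_g!). A short check (the helper name is an illustrative choice):

```python
from math import factorial

def n_arrangements(sizes):
    """Multinomial coefficient: the number of equally-likely arrangements
    of N = sum(sizes) observations into groups of the given sizes."""
    N = sum(sizes)
    m = factorial(N)
    for n in sizes:
        m //= factorial(n)
    return m

# For N = 28 observations with n1 = n2 = n3 = n4 = 7 the reference set
# contains some 4.7 * 10**14 arrangements -- far too many to enumerate.
M = n_arrangements([7, 7, 7, 7])
```

By contrast, the design of Example 4 below, with group sizes 3, 3, 4, and 5, yields only 12,612,600 arrangements, which is why an exact analysis remains feasible there.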
For the example data listed in Table 8.4, the exact expected value of test statistic δ under the Fisher–Pitman null hypothesis is
and the observed chance-corrected measure of effect size is
indicating approximately 30% within-group agreement above what is expected by chance. No comparisons are made with Cohen’s \(\hat {d}\), Pearson’s η 2 (r 2), Kelley’s \(\hat {\eta }^{2}\), Hays’ \(\hat {\omega }^{2}_{\text{F}}\), or Hays’ \(\hat {\omega }^{2}_{\text{R}}\) conventional measures of effect size as \(\hat {d}\), η 2, \(\hat {\eta }^{2}\), \(\hat {\omega }_{\text{F}}^{2}\), and \(\hat {\omega }_{\text{R}}^{2}\) are undefined for ordinary Euclidean scaling.
8.6.4 The Effects of Extreme Values
To illustrate the robustness to the inclusion of extreme values of ordinary Euclidean scaling with v = 1, consider the example data listed in Table 8.4 on p. 278 with one alteration. The seventh (last) observation in Group 4 in Table 8.4 has been increased from x 7,4 = 15 to x 7,4 = 75, as shown in Table 8.6. Under the Neyman–Pearson population model with sample sizes n 1 = n 2 = n 3 = n 4 = 7, treatment-group means \(\bar {x}_{1} = 20.4286\), \(\bar {x}_{2} = 20.8571\), \(\bar {x}_{3} = 9.1429\), and \(\bar {x}_{4} = 22.7143\), grand mean \(\bar {\bar {x}} = 18.2857\), estimated population variances \(s_{1}^{2} = 27.9524\), \(s_{2}^{2} = 35.4762\), \(s_{3}^{2} = 8.8095\), and \(s_{4}^{2} = 540.2381\), the sum-of-squares between treatments is
the sum-of-squares within treatments is
the sum-of-squares total is
the mean-square between treatments is
the mean-square within treatments is
and the observed value of Fisher’s F-ratio test statistic is
The essential factors, sums of squares (SS), degrees of freedom (df), mean squares (MS), and variance-ratio test statistic (F) are summarized in Table 8.7.
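The ANOVA quantities summarized in Table 8.7 can be reproduced directly from the group summary statistics reported above (sizes, means, and estimated population variances); the function below is an illustrative sketch of that computation, not code from the original text.

```python
def anova_F(sizes, means, variances):
    """One-way ANOVA F-ratio computed from per-group sizes, means, and
    estimated population (n - 1 denominator) variances."""
    N = sum(sizes)
    g = len(sizes)
    grand = sum(n * m for n, m in zip(sizes, means)) / N
    ss_between = sum(n * (m - grand) ** 2 for n, m in zip(sizes, means))
    ss_within = sum((n - 1) * s2 for n, s2 in zip(sizes, variances))
    ms_between = ss_between / (g - 1)
    ms_within = ss_within / (N - g)
    return ms_between / ms_within

# Group statistics for the altered data of Table 8.6
F = anova_F(
    sizes=[7, 7, 7, 7],
    means=[20.4286, 20.8571, 9.1429, 22.7143],
    variances=[27.9524, 35.4762, 8.8095, 540.2381],
)  # F = 1.7434, matching Table 8.7
```

The same helper reproduces F = 2.2755 for the Example 4 data of Table 8.8 from the summary statistics given in Sect. 8.7.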
Under the Neyman–Pearson null hypothesis, H 0: μ 1 = μ 2 = μ 3 = μ 4, Fisher’s F-ratio test statistic is asymptotically distributed as Snedecor’s F with ν 1 = g − 1 and ν 2 = N − g degrees of freedom. With ν 1 = g − 1 = 4 − 1 = 3 and ν 2 = N − g = 28 − 4 = 24 degrees of freedom, the asymptotic probability value of F = 1.7434 is P = 0.1849, under the assumptions of normality and homogeneity. The original F-ratio test statistic value with observation x 7,4 = 15 was F = 10.7779 with an asymptotic probability value of P = 0.1122×10−3, yielding a difference between the two probability values of
8.6.5 A Monte Carlo Analysis with v = 2
For the first analysis of the example data listed in Table 8.6 on p. 285 under the Fisher–Pitman permutation model let v = 2, employing squared Euclidean scaling, and let the treatment-group weights be given by
for correspondence with Fisher’s F-ratio test statistic.
Because there are
possible, equally-likely arrangements in the reference set of all permutations of the N = 28 observations listed in Table 8.6, an exact permutation analysis is not possible and a Monte Carlo analysis is required.
Following Eq. (8.2) on p. 261, the N = 28 observations yield g = 4 average distance-function values of
Alternatively, under an analysis of variance model, \(\xi _{1} = 2s_{1}^{2} = 2(27.9524) = 55.9048\), \(\xi _{2} = 2s_{2}^{2} = 2(35.4762) = 70.9524\), \(\xi _{3} = 2s_{3}^{2} = 2(8.8095) = 17.6190\), and \(\xi _{4} = 2s_{4}^{2} = 2(540.2381) = 1080.4762\).
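The identity ξ_i = 2s_i² invoked above, and the weighting C_i = (n_i − 1)∕(N − g) that produces the correspondence δ = 2 MS_Within with Fisher's F when v = 2, can be sketched as follows. The data in the check are illustrative, not the observations of Table 8.6, and the weight expression is inferred from that numerical correspondence.

```python
from itertools import combinations

def xi(group, v=2):
    """Average distance-function value for one treatment group: the mean
    of |x_j - x_k|**v over all pairs of observations within the group."""
    pairs = list(combinations(group, 2))
    return sum(abs(x - y) ** v for x, y in pairs) / len(pairs)

def delta(groups, v=2):
    """Weighted within-group statistic delta = sum(C_i * xi_i) with
    C_i = (n_i - 1) / (N - g); for v = 2 this equals 2 * MS_Within."""
    N = sum(len(grp) for grp in groups)
    g = len(groups)
    return sum((len(grp) - 1) / (N - g) * xi(grp, v) for grp in groups)

# For v = 2, xi equals twice the estimated population variance:
# the sample variance of [7, 8, 9, 10] is 5/3, so xi should be 10/3.
check = xi([7, 8, 9, 10])
```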
Following Eq. (8.1) on p. 261, the observed value of the permutation test statistic based on v = 2 and treatment-group weights
is
Under the Fisher–Pitman permutation model, the Monte Carlo probability of an observed δ is the proportion of δ test statistic values computed on the randomly-selected, equally-likely arrangements of the N = 28 observations listed in Table 8.6 that are equal to or less than the observed value of δ = 306.2381. There are exactly 128,239 δ test statistic values that are equal to or less than the observed value of δ = 306.2381. If all M arrangements of the N = 28 observations listed in Table 8.6 occur with equal chance under the Fisher–Pitman null hypothesis, the Monte Carlo probability value of δ = 306.2381 computed on L = 1, 000, 000 random arrangements of the observed data with n 1 = n 2 = n 3 = n 4 = 7 preserved for each arrangement is
where δ o denotes the observed value of test statistic δ and L is the number of randomly-selected, equally-likely arrangements of the N = 28 observations listed in Table 8.6. For comparison, the original value of test statistic δ based on v = 2 with observation x 7,4 = 15 was δ = 40.5238 with a Monte Carlo probability value of P = 0.1380×10−3, yielding a difference between the two probability values of
8.6.6 A Monte Carlo Analysis with v = 1
For the second analysis of the example data listed in Table 8.6 on p. 285 under the Fisher–Pitman permutation model let v = 1, employing ordinary Euclidean scaling, and let the treatment-group weights be given by
Setting v = 1 can be expected to reduce the outsized effect of extreme value x 7,4 = 75.
Because there are
possible, equally-likely arrangements in the reference set of all permutations of the N = 28 observations listed in Table 8.6, an exact permutation analysis is not possible and a Monte Carlo analysis is required.
Following Eq. (8.2) on p. 261, the N = 28 observations yield g = 4 average distance-function values of
Following Eq. (8.1) on p. 261, the observed value of the permutation test statistic based on v = 1 and treatment-group weights
is
Under the Fisher–Pitman permutation model, the exact probability of an observed δ is the proportion of δ test statistic values computed on the randomly-selected, equally-likely arrangements of the N = 28 observations listed in Table 8.6 that are equal to or less than the observed value of δ = 9.3571. There are exactly 1960 δ test statistic values that are equal to or less than the observed value of δ = 9.3571. If all M arrangements of the N = 28 observations listed in Table 8.6 occur with equal chance, the Monte Carlo probability value of δ = 9.3571 computed on L = 1, 000, 000 random arrangements of the observed data with n 1 = n 2 = n 3 = n 4 = 7 preserved for each arrangement is
where δ o denotes the observed value of δ and L is the number of randomly-selected, equally-likely arrangements of the N = 28 observations listed in Table 8.6.
The original value of test statistic δ based on v = 1 with observation x 7,4 = 15 was δ = 5.1905 with a Monte Carlo probability value of P = 0.2040×10−3, yielding a difference between the two probability values of only
Multi-sample permutation tests based on ordinary Euclidean scaling with v = 1 tend to be relatively robust with respect to extreme values when compared with permutation tests based on squared Euclidean scaling with v = 2.
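The robustness contrast can be illustrated on a single small synthetic group: replacing one observation with an extreme value inflates the v = 2 average distance function far more than the v = 1 version. The data below are hypothetical and chosen only to mimic the structure of the altered Group 4.

```python
from itertools import combinations

def avg_distance(group, v):
    """Average of |x_j - x_k|**v over all pairs in the group, i.e., the
    distance-function value of Eq. (8.2) for a single treatment group."""
    pairs = list(combinations(group, 2))
    return sum(abs(x - y) ** v for x, y in pairs) / len(pairs)

base = [10, 12, 14, 16, 18, 20, 15]       # illustrative group
extreme = [10, 12, 14, 16, 18, 20, 75]    # last value made extreme

inflation_v1 = avg_distance(extreme, 1) / avg_distance(base, 1)
inflation_v2 = avg_distance(extreme, 2) / avg_distance(base, 2)
# inflation_v2 greatly exceeds inflation_v1
```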
8.7 Example 4: Exact and Monte Carlo Analyses
For a fourth, larger example of tests for differences among g ≥ 3 independent samples, consider the example data given in Table 8.8 with g = 4 treatment groups, sample sizes of n 1 = n 2 = 3, n 3 = 4, n 4 = 5, and N = n 1 + n 2 + n 3 + n 4 = 3 + 3 + 4 + 5 = 15 total observations. Under the Neyman–Pearson population model with sample sizes n 1 = n 2 = 3, n 3 = 4, and n 4 = 5, treatment-group means \(\bar {x}_{1} = 11.00\), \(\bar {x}_{2} = 12.00\), \(\bar {x}_{3} = 13.50\), and \(\bar {x}_{4} = 19.00\), grand mean \(\bar {\bar {x}} = 14.5333\), estimated population variances \(s_{1}^{2} = s_{2}^{2} = 1.00\), \(s_{3}^{2} = 1.6667\), and \(s_{4}^{2} = 62.50\), the sum-of-squares between treatments is
the sum-of-squares within treatments is
the sum-of-squares total is
the mean-square between treatments is
the mean-square within treatments is
and the observed value of Fisher’s F-ratio test statistic is
The essential factors, sums of squares (SS), degrees of freedom (df), mean squares (MS), and variance-ratio test statistic (F) are summarized in Table 8.9.
Under the Neyman–Pearson null hypothesis, H 0: μ 1 = μ 2 = μ 3 = μ 4, Fisher’s F-ratio test statistic is asymptotically distributed as Snedecor’s F with ν 1 = g − 1 and ν 2 = N − g degrees of freedom. With ν 1 = g − 1 = 4 − 1 = 3 and ν 2 = N − g = 15 − 4 = 11 degrees of freedom, the asymptotic probability value of F = 2.2755 is P = 0.1366, under the assumptions of normality and homogeneity.
8.7.1 A Permutation Analysis with v = 2
For the first analysis of the example data listed in Table 8.8 under the Fisher–Pitman permutation model let v = 2, employing squared Euclidean scaling, and let the treatment-group weighting functions be given by
for correspondence with Fisher’s F-ratio test statistic.
Because there are
possible, equally-likely arrangements in the reference set of all permutations of the N = 15 observations listed in Table 8.8, an exact permutation analysis is not practical and a Monte Carlo analysis is utilized.
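The size of this reference set can be confirmed as a product of binomial coefficients: choose which 3 of the 15 observations form the first group, then 3 of the remaining 12, then 4 of the remaining 9, with the last 5 forced.

```python
from math import comb

# Number of equally-likely arrangements for group sizes 3, 3, 4, and 5
M = comb(15, 3) * comb(12, 3) * comb(9, 4)  # 12,612,600
```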
Following Eq. (8.2) on p. 261, the N = 15 observations yield g = 4 average distance-function values of
Alternatively, in terms of a one-way analysis of variance model the average distance-function values are \(\xi _{1} = 2s_{1}^{2} = 2(1.00) = 2.00\), \(\xi _{2} = 2s_{2}^{2} = 2(1.00) = 2.00\), \(\xi _{3} = 2s_{3}^{2} = 2(1.6667) = 3.3333\), and \(\xi _{4} = 2s_{4}^{2} = 2(62.50) = 125.00\).
Following Eq. (8.1) on p. 261, the observed value of the permutation test statistic based on v = 2 and treatment-group weights
is
Alternatively, in terms of a one-way analysis of variance model the permutation test statistic is
For the example data listed in Table 8.8, the sum of the N = 15 observations is
the sum of the N = 15 squared observations is
and the total sum-of-squares is
where \(\bar {\bar {x}}\) denotes the grand mean of all N = 15 observations. Then following the expressions given in Eq. (8.5) on p. 262 for test statistics δ and F, the observed value for test statistic δ with respect to the observed value of test statistic F is
and the observed value for test statistic F with respect to the observed value of test statistic δ is
Under the Fisher–Pitman permutation model, the Monte Carlo probability of an observed δ is the proportion of δ test statistic values computed on the randomly-selected, equally-likely arrangements of the N = 15 observations listed in Table 8.8 that are equal to or less than the observed value of δ = 47.0909. There are exactly 53,200 δ test statistic values that are equal to or less than the observed value of δ = 47.0909. If all M arrangements of the N = 15 observations listed in Table 8.8 occur with equal chance under the Fisher–Pitman null hypothesis, the Monte Carlo probability value of δ = 47.0909 computed on L = 1, 000, 000 randomly-selected arrangements of the observed data with n 1 = n 2 = 3, n 3 = 4, and n 4 = 5 preserved for each arrangement is
where δ o denotes the observed value of test statistic δ and L is the number of randomly-selected, equally-likely arrangements of the N = 15 observations listed in Table 8.8.
Alternatively, in terms of a one-way analysis of variance model, there are 53,200 F values that are equal to or greater than the observed value of F = 2.2755. Thus, if all arrangements of the observed data occur with equal chance, the Monte Carlo probability value of F = 2.2755 under the Fisher–Pitman null hypothesis is
where F o denotes the observed value of test statistic F.
Following Eq. (8.7) on p. 263, the exact expected value of the M = 12, 612, 600 δ test statistic values under the Fisher–Pitman null hypothesis is
In terms of a one-way analysis of variance model the exact expected value of test statistic δ is
Following Eq. (8.6) on p. 263, the observed chance-corrected measure of effect size is
indicating approximately 21% within-group agreement above what is expected by chance. Alternatively, in terms of a one-way analysis of variance model, the observed measure of effect size is
8.7.2 Measures of Effect Size
For the example data listed in Table 8.8 on p. 289, the average treatment-group size is
Cohen’s \(\hat {d}\) measure of effect size is
Pearson’s η 2 (r 2) measure of effect size is
Kelley’s \(\hat {\eta }^{2}\) measure of effect size is
Hays’ \(\hat {\omega }_{\text{F}}^{2}\) measure of effect size for a fixed-effects model is
Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size for a random-effects model is
and Mielke and Berry’s \(\Re \) chance-corrected measure of effect size is
indicating approximately 21% within-group agreement above what is expected by chance.
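The conventional measures listed above follow from their standard textbook definitions applied to the sums of squares summarized in Table 8.9. As an illustrative check, Pearson's η² and Hays' fixed-effects \(\hat {\omega }_{\text{F}}^{2}\) are computed below; the SS values are derived here from the group sizes, means, and variances reported in Sect. 8.7, and the formulas are the standard ones rather than code from the original text.

```python
# ANOVA summary for the data of Table 8.8: g = 4 groups, N = 15 cases
g, N = 4, 15
ss_between = 160.7333
ss_within = 259.0000
ss_total = ss_between + ss_within
ms_within = ss_within / (N - g)

# Pearson's eta-squared: proportion of total variability between groups
eta_sq = ss_between / ss_total

# Hays' omega-squared (fixed-effects): a less positively biased estimate
omega_sq_F = (ss_between - (g - 1) * ms_within) / (ss_total + ms_within)
```

As expected, \(\hat {\omega }_{\text{F}}^{2}\) is noticeably smaller than η², since η² is known to be positively biased in small samples.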
8.7.3 An Exact Analysis with v = 2
While an exact permutation analysis with M = 12, 612, 600 possible arrangements of the observed data may be impractical, it is not impossible. An exact analysis of the N = 15 observations listed in Table 8.8 on p. 289 under the Fisher–Pitman permutation model yields g = 4 average distance-function values of
The observed value of the permutation test statistic based on v = 2 and treatment-group weights
is
Under the Fisher–Pitman permutation model, the exact probability of an observed δ is the proportion of δ test statistic values computed on all possible, equally-likely arrangements of the N = 15 observations listed in Table 8.8 that are equal to or less than the observed value of δ = 47.0909. There are exactly 673,490 δ test statistic values that are equal to or less than the observed value of δ = 47.0909. If all M arrangements of the N = 15 observations listed in Table 8.8 occur with equal chance under the Fisher–Pitman null hypothesis, the exact probability value of δ = 47.0909 computed on the M = 12, 612, 600 possible arrangements of the observed data with n 1 = n 2 = 3, n 3 = 4, and n 4 = 5 preserved for each arrangement is
where δ o denotes the observed value of test statistic δ and M is the number of possible, equally-likely arrangements of the N = 15 observations listed in Table 8.8.
Carrying the Monte Carlo probability value based on L = 1, 000, 000 random arrangements and the exact probability value based on M = 12, 612, 600 possible arrangements to a few extra decimal places allows for a more direct comparison of the Monte Carlo and exact permutation approaches. The Monte Carlo approximate probability value and the corresponding exact probability value to six decimal places are
respectively. The difference between the two probability values is only
demonstrating the efficiency and accuracy of a Monte Carlo approach for permutation methods when L is large and the exact probability value is not too small. In general, L = 1, 000, 000 random arrangements of the observed data is sufficient to ensure three decimal places of accuracy [11].
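The accuracy claim can be checked with the binomial standard error of a Monte Carlo proportion, \(\sqrt {p(1-p)/L}\); the sketch below applies it to a probability near the exact value obtained here.

```python
from math import sqrt

def mc_standard_error(p, L):
    """Standard error of a Monte Carlo probability estimate based on
    L independent random arrangements (binomial approximation)."""
    return sqrt(p * (1.0 - p) / L)

# For a probability near P = 0.0534 and L = 1,000,000 random arrangements,
# the standard error is about 0.0002, so three decimal places are secure.
se = mc_standard_error(0.0534, 1_000_000)
```

The worst case occurs at p = 0.5, where the standard error is still only 0.0005 for L = 1,000,000, consistent with the three-decimal-place guideline.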
8.7.4 A Monte Carlo Analysis with v = 1
Consider a second analysis of the example data listed in Table 8.8 on p. 289 under the Fisher–Pitman permutation model with v = 1 and treatment-group weights
For v = 1, employing ordinary Euclidean scaling between the observations, thereby reducing the effects of any extreme values, the average distance-function values for the g = 4 treatment groups are
respectively, and the observed permutation test statistic is
Because there are
possible, equally-likely arrangements in the reference set of all permutations of the N = 15 observations listed in Table 8.8, a Monte Carlo permutation analysis is recommended.
Under the Fisher–Pitman permutation model, the Monte Carlo probability of an observed δ is the proportion of δ test statistic values computed on the randomly-selected, equally-likely arrangements of the N = 15 observations listed in Table 8.8 that are equal to or less than the observed value of δ = 3.8485. There are exactly 18,000 δ test statistic values that are equal to or less than the observed value of δ = 3.8485. If all M arrangements of the N = 15 observations listed in Table 8.8 occur with equal chance under the Fisher–Pitman null hypothesis, the Monte Carlo probability value of δ = 3.8485 computed on L = 1, 000, 000 random arrangements of the observed data with n 1 = n 2 = 3, n 3 = 4, and n 4 = 5 preserved for each arrangement is
where δ o denotes the observed value of test statistic δ and L is the number of randomly-selected, equally-likely arrangements of the N = 15 observations listed in Table 8.8.
For comparison, the approximate Monte Carlo probability value based on v = 2, L = 1, 000, 000, and
is P = 0.0532. The difference between the two probability values, P = 0.0180 and P = 0.0532, is due to the single extreme value of x 5,4 = 33 in the fourth treatment group. No comparison is made with Fisher’s F-ratio test statistic as F is undefined for ordinary Euclidean scaling.
For the example data listed in Table 8.8 on p. 289, the exact expected value of the M = 12, 612, 600 δ test statistic values under the Fisher–Pitman null hypothesis is
and the observed chance-corrected measure of effect size is
indicating approximately 19% within-group agreement above what is expected by chance. No comparisons are made with Cohen’s \(\hat {d}\), Pearson’s η 2 (r 2), Kelley’s \(\hat {\eta }^{2}\), Hays’ \(\hat {\omega }_{\text{F}}^{2}\), or Hays’ \(\hat {\omega }_{\text{R}}^{2}\) conventional measures of effect size as \(\hat {d}\), η 2, \(\hat {\eta }^{2}\), \(\hat {\omega }_{\text{F}}^{2}\), and \(\hat {\omega }_{\text{R}}^{2}\) are undefined for ordinary Euclidean scaling.
8.7.5 An Exact Analysis with v = 1
An exact permutation analysis of the observations listed in Table 8.8 with v = 1 yields g = 4 average distance-function values of
The observed value of the permutation test statistic based on v = 1 and treatment-group weights
is
Under the Fisher–Pitman permutation model, the exact probability of an observed δ is the proportion of δ test statistic values computed on all possible, equally-likely arrangements of the N = 15 observations listed in Table 8.8 that are equal to or less than the observed value of δ = 3.8485. There are exactly 225,720 δ test statistic values that are equal to or less than the observed value of δ = 3.8485. If all M arrangements of the N = 15 observations listed in Table 8.8 occur with equal chance under the Fisher–Pitman null hypothesis, the exact probability value of δ = 3.8485 computed on the M = 12, 612, 600 possible arrangements of the observed data with n 1 = n 2 = 3, n 3 = 4, and n 4 = 5 preserved for each arrangement is
where δ o denotes the observed value of test statistic δ and M is the number of possible, equally-likely arrangements of the N = 15 observations listed in Table 8.8.
The exact expected value of the M = 12, 612, 600 δ test statistic values under the Fisher–Pitman null hypothesis is
and the observed chance-corrected measure of effect size is
indicating approximately 19% within-group agreement above what is expected by chance. No comparisons are made with Cohen’s \(\hat {d}\), Pearson’s η 2 (r 2), Kelley’s \(\hat {\eta }^{2}\), Hays’ \(\hat {\omega }_{\text{F}}^{2}\), or Hays’ \(\hat {\omega }_{\text{R}}^{2}\) conventional measures of effect size as \(\hat {d}\), η 2, \(\hat {\eta }^{2}\), \(\hat {\omega }_{\text{F}}^{2}\), and \(\hat {\omega }_{\text{R}}^{2}\) are undefined for ordinary Euclidean scaling.
Finally, note the effect of a single extreme value (x 5,4 = 33) in Treatment 4 in the analysis based on ordinary Euclidean scaling with v = 1, compared with the analysis based on squared Euclidean scaling with v = 2. In the analysis based on v = 2 the fourth average distance-function value was ξ 4 = 125.00, but in the analysis based on v = 1, ξ 4 was reduced to only ξ 4 = 8.00. Also, in the analysis based on v = 2 the exact probability value was P = 0.0534, but in the analysis based on v = 1 the exact probability value was only P = 0.0179, a reduction of approximately 66%. For comparison, the asymptotic probability value of F = 2.2755 with ν 1 = g − 1 = 4 − 1 = 3 and ν 2 = N − g = 15 − 4 = 11 degrees of freedom was P = 0.1366.
8.8 Example 5: Rank-Score Permutation Analyses
In many research applications it becomes necessary to analyze rank-score data, typically because the required parametric assumptions of normality and homogeneity cannot be met. Consequently, the raw scores are often converted to rank scores and analyzed under a less-restrictive model. While it is never necessary to convert raw scores to rank scores under the Fisher–Pitman permutation model, sometimes the observed data are simply collected as rank scores. Thus, this fifth example serves merely to demonstrate the relationship between a g-sample test of rank-score observations under the population model and the same test under the permutation model. The conventional approach to univariate rank-score data for multiple independent samples under the Neyman–Pearson population model is the Kruskal–Wallis g-sample rank-sum test. As Kruskal and Wallis explained, the rank-sum test stemmed from two statistical methods: rank transformations of the original raw scores and permutations of the rank-order statistics [12].
8.8.1 The Kruskal–Wallis Rank-Sum Test
Consider g random samples of possibly different sizes and denote the size of the ith sample by n i for i = 1, …, g. Let
denote the total number of observations, assign rank 1 to the smallest of the N observations, rank 2 to the next smallest observation, continuing to the largest observation that is assigned rank N, and let R i denote the sum of the rank scores in the ith sample, i = 1, …, g. If there are no tied rank scores, the Kruskal–Wallis g-sample rank-sum test statistic is given by
When g = 2, H is equivalent to the Wilcoxon [25], Festinger [5], Mann–Whitney [15], Haldane–Smith [7], and van der Reyden [24] two-sample rank-sum tests.
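For untied rank scores, the H statistic of Eq. (8.24) reduces to a simple function of the group rank sums, H = 12∕[N(N + 1)] Σ R_i²∕n_i − 3(N + 1). A minimal sketch (the ranks below are illustrative, not those of Table 8.10):

```python
def kruskal_wallis_H(rank_groups):
    """Kruskal-Wallis H for untied rank scores:
    H = 12 / (N * (N + 1)) * sum(R_i**2 / n_i) - 3 * (N + 1),
    where R_i is the sum of the rank scores in the i-th group."""
    N = sum(len(grp) for grp in rank_groups)
    term = sum(sum(grp) ** 2 / len(grp) for grp in rank_groups)
    return 12.0 / (N * (N + 1)) * term - 3.0 * (N + 1)

# Illustrative ranks: complete separation of g = 3 groups of n = 6,
# which yields the maximum possible H for this design.
ranks = [[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18]]
H = kruskal_wallis_H(ranks)
```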
For an example analysis of g-sample rank-score data, consider the rank scores listed in Table 8.10 with g = 3 samples, n 1 = n 2 = n 3 = 6, N = 18, and no tied rank scores.
The conventional Kruskal–Wallis g-sample rank-sum test on the N = 18 rank scores listed in Table 8.10 yields an observed test statistic of
where test statistic H is asymptotically distributed as Pearson’s chi-squared under the Neyman–Pearson null hypothesis with g − 1 degrees of freedom as N →∞. Under the Neyman–Pearson null hypothesis with g − 1 = 3 − 1 = 2 degrees of freedom, the observed value of H = 7.0526 yields an asymptotic probability value of P = 0.0294, under the assumption of normality.
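With g − 1 = 2 degrees of freedom, the chi-squared survival function has the closed form P(χ² ≥ x) = e^{−x∕2}, so the asymptotic probability value quoted above can be checked directly:

```python
from math import exp

def chi2_sf_df2(x):
    """Survival function of the chi-squared distribution with exactly
    2 degrees of freedom: P(X >= x) = exp(-x / 2)."""
    return exp(-x / 2.0)

p = chi2_sf_df2(7.0526)  # approximately 0.0294
```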
8.8.2 A Monte Carlo Analysis with v = 2
For the first analysis of the rank-score data listed in Table 8.10 under the Fisher–Pitman permutation model let v = 2, employing squared Euclidean scaling between the pairs of rank scores, and let the treatment-group weights be given by
for correspondence with the Kruskal–Wallis g-sample rank-sum test. The average distance-function values for the g = 3 samples are
and the observed value of the permutation test statistic based on v = 2 is
Because there are
possible, equally-likely arrangements in the reference set of all permutations of the N = 18 rank scores listed in Table 8.10, an exact permutation analysis is not practical and a Monte Carlo permutation analysis is utilized.
Under the Fisher–Pitman permutation model, the Monte Carlo probability of an observed δ is the proportion of δ test statistic values computed on the randomly-selected, equally-likely arrangements of the N = 18 rank scores listed in Table 8.10 that are equal to or less than the observed value of δ = 37.80. There are exactly 21,810 δ test statistic values that are equal to or less than the observed value of δ = 37.80. If all M arrangements of the N = 18 observations listed in Table 8.10 occur with equal chance under the Fisher–Pitman null hypothesis, the Monte Carlo probability value of δ = 37.80 computed on L = 1, 000, 000 random arrangements of the observed data with n 1 = n 2 = n 3 = 6 preserved for each arrangement is
where δ o denotes the observed value of test statistic δ and L is the number of randomly-selected, equally-likely arrangements of the N = 18 rank scores listed in Table 8.10. It should be noted that whereas the Kruskal–Wallis test statistic, H, as defined in Eq. (8.24) does not allow for tied rank scores, test statistic δ automatically accommodates tied rank scores.
The functional relationships between test statistics δ and H are given by
and
where, if no rank scores are tied, S and T may simply be expressed as
Note that in Eqs. (8.25) and (8.26), S, T, N, and g are invariant under permutation, along with the constants 2, 3, and 6.
The relationships between test statistics δ and H can be confirmed with the rank-score data listed in Table 8.10. For the rank scores listed in Table 8.10 with no tied values, the observed value of S is
and the observed value of T is
Then following Eq. (8.25), the observed value of the permutation test statistic for the N = 18 rank scores listed in Table 8.10 is
and, following Eq. (8.26), the observed value of the Kruskal–Wallis test statistic is
Because of the relationship between test statistics δ and H, the Monte Carlo probability value of the realized value of H = 7.0526 is identical to the Monte Carlo probability value of δ = 37.80 under the Fisher–Pitman null hypothesis. Thus,
where H o denotes the observed value of test statistic H.
The exact expected value of the M = 17, 153, 136 δ test statistic values under the Fisher–Pitman null hypothesis is
and the observed chance-corrected measure of effect size is
indicating approximately 34% within-group agreement above what is expected by chance. No comparisons are made with Cohen's \(\hat {d}\), Pearson's η 2 (r 2), Kelley's \(\hat {\eta }^{2}\), Hays' \(\hat {\omega }_{\text{F}}^{2}\), or Hays' \(\hat {\omega }_{\text{R}}^{2}\) measures of effect size as \(\hat {d}\), η 2, \(\hat {\eta }^{2}\), \(\hat {\omega }_{\text{F}}^{2}\), and \(\hat {\omega }_{\text{R}}^{2}\) are undefined for rank-score data.
8.8.3 An Exact Analysis with v = 2
Although an exact permutation analysis with M = 17, 153, 136 possible arrangements of the observed data may be impractical, it is not impossible. An exact permutation analysis of the N = 18 observations listed in Table 8.10 yields g = 3 average distance-function values of
and the observed value of the permutation test statistic based on v = 2 and treatment-group weights
is
There are
possible, equally-likely arrangements in the reference set of all permutations of the N = 18 rank scores listed in Table 8.10, making an exact permutation analysis feasible. Under the Fisher–Pitman permutation model, the exact probability of an observed δ is the proportion of δ test statistic values computed on all possible, equally-likely arrangements of the N = 18 rank scores listed in Table 8.10 that are equal to or less than the observed value of δ = 37.80. There are exactly 376,704 δ test statistic values that are equal to or less than the observed value of δ = 37.80. If all M arrangements of the N = 18 rank scores listed in Table 8.10 occur with equal chance under the Fisher–Pitman null hypothesis, the exact probability value of δ = 37.80 computed on the M = 17, 153, 136 possible arrangements of the observed data with n 1 = n 2 = n 3 = 6 preserved for each arrangement is
where δ o denotes the observed value of test statistic δ and M is the number of possible, equally-likely arrangements of the N = 18 rank scores listed in Table 8.10. For comparison, the Monte Carlo probability value based on v = 2, L = 1, 000, 000 random arrangements of the observed data, and treatment-group weights given by
is P = 0.0218 for a difference between the two probability values of only
8.8.4 An Exact Analysis with v = 1
For a second analysis of the rank-score data listed in Table 8.10, let the treatment-group weights be given by
as in the previous example but set v = 1, employing ordinary Euclidean scaling between the pairs of rank scores. The N = 18 rank scores listed in Table 8.10 yield g = 3 average distance-function values of
and the observed value of the permutation test statistic based on v = 1 is
Under the Fisher–Pitman permutation model, the exact probability of an observed δ is the proportion of δ test statistic values computed on all possible, equally-likely arrangements of the N = 18 rank scores listed in Table 8.10 that are equal to or less than the observed value of δ = 5.1778. There are exactly 547,662 δ test statistic values that are equal to or less than the observed value of δ = 5.1778. If all M arrangements of the N = 18 rank scores listed in Table 8.10 occur with equal chance under the Fisher–Pitman null hypothesis, the exact probability value of δ = 5.1778 computed on the M = 17, 153, 136 possible arrangements of the observed data with n 1 = n 2 = n 3 = 6 preserved for each arrangement is
where δ o denotes the observed value of test statistic δ and M is the number of possible, equally-likely arrangements of the N = 18 rank scores listed in Table 8.10. For comparison, the exact probability value based on v = 2, M = 17, 153, 136, and
is P = 0.0220. No comparison is made with the conventional Kruskal–Wallis g-sample rank-sum test as H is undefined for ordinary Euclidean scaling.
The exact expected value of the M = 17, 153, 136 δ test statistic values under the Fisher–Pitman null hypothesis is
and the observed chance-corrected measure of effect size is
indicating approximately 18% within-group agreement above what is expected by chance. No comparisons are made with Cohen's \(\hat {d}\), Pearson's η 2 (r 2), Kelley's \(\hat {\eta }^{2}\), Hays' \(\hat {\omega }_{\text{F}}^{2}\), or Hays' \(\hat {\omega }_{\text{R}}^{2}\) measures of effect size as \(\hat {d}\), η 2, \(\hat {\eta }^{2}\), \(\hat {\omega }_{\text{F}}^{2}\), and \(\hat {\omega }_{\text{R}}^{2}\) are undefined for rank-score data.
8.9 Example 6: Multivariate Permutation Analyses
It is sometimes desirable to test for differences among g ≥ 3 independent treatment groups where r ≥ 2 measurement scores have been obtained from each object. The conventional approach is a one-way multivariate analysis of variance (MANOVA) for which a number of statistical tests have been proposed, including the Bartlett–Nanda–Pillai (BNP) trace test [1, 16, 19], Wilks’ likelihood-ratio test [26], Roy’s maximum-root test [20, 21], and the Lawley–Hotelling trace test [9, 13, 14]. The Bartlett–Nanda–Pillai trace test is considered to be the most powerful and robust of the four tests [17, 18, 23, p. 269].
8.9.1 The Bartlett–Nanda–Pillai Trace Test
To illustrate a conventional multivariate analysis of variance, consider the BNP trace test given by
\(V^{(s)} = \operatorname{tr}\bigl[\mathbf{B}(\mathbf{W}+\mathbf{B})^{-1}\bigr]\;,\)
where W denotes the Within matrix summarizing within-object variability, B denotes the hypothesized Between matrix summarizing between-object variability, and \(s = \min (r,\;g-1)\). For a conventional test of significance, the BNP trace statistic, \(V^{(s)}\), can be transformed into a conventional F test statistic by
\(F = \dfrac{(2u+s+1)\,V^{(s)}}{(2t+s+1)\,\bigl(s-V^{(s)}\bigr)}\;,\)
where \(s = \min (r, \,g-1)\), u = 0.50(N − g − r − 1), t = 0.50(|r − q|− 1), and q = g − 1. Assuming independence, normality, and homogeneity of variance and covariance, test statistic F is asymptotically distributed as Snedecor’s F under the Neyman–Pearson null hypothesis with \(\nu_{1} = s(2t + s + 1)\) and \(\nu_{2} = s(2u + s + 1)\) degrees of freedom.
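As a numerical sketch of these two formulas (assuming the usual definition of the Pillai trace, \(V^{(s)} = \operatorname{tr}[\mathbf{B}(\mathbf{W}+\mathbf{B})^{-1}]\)), the following pure-Python function handles the r = 2 case; the W and B matrices in the check below are hypothetical, not those of Table 8.11.

```python
def pillai_trace_F(W, B, N, g, r=2):
    """Bartlett-Nanda-Pillai trace V = tr[B (W + B)^{-1}] for r = 2
    response variables, with its conventional F transformation.
    W and B are 2x2 within- and between-group SSCP matrices given
    as nested lists."""
    # T = W + B and its inverse (2x2 closed form)
    T = [[W[i][j] + B[i][j] for j in range(2)] for i in range(2)]
    det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
    Tinv = [[ T[1][1] / det, -T[0][1] / det],
            [-T[1][0] / det,  T[0][0] / det]]
    # V = trace of the matrix product B * Tinv
    V = sum(B[i][k] * Tinv[k][i] for i in range(2) for k in range(2))
    # constants of the F transformation
    q = g - 1
    s = min(r, q)
    u = 0.50 * (N - g - r - 1)
    t = 0.50 * (abs(r - q) - 1)
    F = (2 * u + s + 1) * V / ((2 * t + s + 1) * (s - V))
    df1 = s * (2 * t + s + 1)
    df2 = s * (2 * u + s + 1)
    return V, F, df1, df2
```

With N = 12, g = 3, and r = 2 the degrees of freedom reduce to \(\nu_{1} = 4\) and \(\nu_{2} = 18\), matching the worked example that follows.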
To illustrate the BNP trace test, consider the multivariate observations listed in Table 8.11, where there are r = 2 measurements on each object, g = 3 treatment groups with sample sizes \(n_{1} = 5\), \(n_{2} = 4\), and \(n_{3} = 3\), and N = 12 multivariate observations in all.
A conventional BNP analysis of the multivariate observations listed in Table 8.11 yields
and
Then, q = g − 1 = 3 − 1 = 2, \(s = \min (r,\;q) = \min (2,\;2) = 2\), u = 0.50(N − g − r − 1) = 0.50(12 − 3 − 2 − 1) = 3, t = 0.50(|r − q|− 1) = 0.50(|2 − 2|− 1) = −0.50, and following Eq. (8.27) on p. 306, the observed value of Fisher’s F-ratio test statistic is F = 2.8879.
Assuming independence, normality, homogeneity of variance, and homogeneity of covariance, test statistic F is asymptotically distributed as Snedecor’s F with \(\nu_{1} = s(2t + s + 1) = 2[(2)(-0.50) + 2 + 1] = 4\) and \(\nu_{2} = s(2u + s + 1) = 2[(2)(3) + 2 + 1] = 18\) degrees of freedom. Under the Neyman–Pearson null hypothesis, the observed value of F = 2.8879 with \(\nu_{1} = 4\) and \(\nu_{2} = 18\) degrees of freedom yields an asymptotic probability value of P = 0.0521.
8.9.2 An Exact Analysis with v = 2
For the first analysis of the observed data listed in Table 8.11 under the Fisher–Pitman permutation model let v = 2, employing squared Euclidean scaling between the pairs of multivariate observations, and let the treatment-group weights be given by
\(C_{i} = \dfrac{n_{i}-1}{N-g}\;, \qquad i = 1,\,\ldots,\,g\;,\)
for correspondence with the BNP trace test. An exact permutation analysis is feasible for the multivariate observations listed in Table 8.11 as there are only
\(M = \dfrac{N!}{n_{1}!\,n_{2}!\,n_{3}!} = \dfrac{12!}{5!\,4!\,3!} = 27{,}720\)
possible, equally-likely arrangements in the reference set of all permutations of the N = 12 multivariate scores listed in Table 8.11.
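The arrangement count is the multinomial coefficient \(M = N!/(n_{1}!\,n_{2}!\cdots n_{g}!)\); a quick stdlib check (a sketch, not the authors' code):

```python
from math import factorial

def n_arrangements(sizes):
    # number of equally-likely ways to partition N = sum(sizes) objects
    # into labeled groups of the given sizes: N! / (n_1! n_2! ... n_g!)
    M = factorial(sum(sizes))
    for n in sizes:
        M //= factorial(n)
    return M
```

Here `n_arrangements([5, 4, 3])` recovers the 27,720 arrangements above, and `n_arrangements([6, 6, 6])` the 17,153,136 arrangements of the rank-score example.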
Following Eq. (8.2) on p. 261, the multivariate observations listed in Table 8.11 yield g = 3 average distance-function values of
Following Eq. (8.1) on p. 261, the observed value of the permutation test statistic based on v = 2 and treatment-group weights \(C_{i} = (n_{i}-1)/(N-g)\) is \(\delta = 0.2707\).
Under the Fisher–Pitman permutation model, the exact probability of an observed δ is the proportion of δ test statistic values computed on all possible, equally-likely arrangements of the N = 12 multivariate observations listed in Table 8.11 that are equal to or less than the observed value of δ = 0.2707. There are exactly 967 δ test statistic values that are equal to or less than the observed value of δ = 0.2707. If all M arrangements of the N = 12 multivariate scores listed in Table 8.11 occur with equal chance under the Fisher–Pitman null hypothesis, the exact probability value of δ = 0.2707 computed on the M = 27,720 possible arrangements of the observed data with \(n_{1} = 5\), \(n_{2} = 4\), and \(n_{3} = 3\) multivariate observations preserved for each arrangement is
\(P(\delta \le \delta_{\text{o}} \,|\, H_{0}) = \dfrac{967}{27{,}720} = 0.0349\;,\)
where \(\delta_{\text{o}}\) denotes the observed value of test statistic δ and M is the number of possible, equally-likely arrangements of the N = 12 multivariate observations listed in Table 8.11.
Following Eq. (8.7) on p. 263, the exact expected value of the M = 27,720 δ test statistic values under the Fisher–Pitman null hypothesis is
and, following Eq. (8.6) on p. 263, the observed chance-corrected measure of effect size is
\(\Re = 1 - \dfrac{\delta}{\mu_{\delta}} = +0.2556\;,\)
indicating approximately 26% within-group agreement above what is expected by chance.
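The same enumeration logic extends to multivariate observations: the distance function becomes the Euclidean distance between r-dimensional points raised to the power v, and the exact probability value, expected value \(\mu_{\delta}\), and \(\Re = 1 - \delta/\mu_{\delta}\) all come from one pass over the reference set. The sketch below again assumes the weights \(C_{i} = (n_{i}-1)/(N-g)\) and uses a small synthetic data set, not the Table 8.11 observations.

```python
from itertools import combinations
from math import comb, dist

def delta_mv(groups, v=2):
    # delta = sum_i C_i * xi_i with C_i = (n_i - 1)/(N - g) (assumed) and
    # xi_i the average of dist(x, y)**v over pairs within group i
    N = sum(len(grp) for grp in groups)
    g = len(groups)
    total = 0.0
    for grp in groups:
        n = len(grp)
        xi = sum(dist(x, y) ** v for x, y in combinations(grp, 2)) / comb(n, 2)
        total += (n - 1) / (N - g) * xi
    return total

def exact_mv_test(groups, v=2):
    # full enumeration of the reference set of all M arrangements that
    # preserve the group sizes n_1, n_2, ..., n_g
    pooled = [x for grp in groups for x in grp]
    sizes = [len(grp) for grp in groups]
    d_obs = delta_mv(groups, v)

    def parts(idx, sizes):
        if len(sizes) == 1:
            yield (tuple(idx),)
            return
        for head in combinations(idx, sizes[0]):
            rest = [i for i in idx if i not in head]
            for tail in parts(rest, sizes[1:]):
                yield (head,) + tail

    deltas = [delta_mv([[pooled[i] for i in blk] for blk in p], v)
              for p in parts(tuple(range(len(pooled))), sizes)]
    M = len(deltas)
    p_value = sum(d <= d_obs + 1e-12 for d in deltas) / M
    mu = sum(deltas) / M            # exact expected value of delta
    R = 1.0 - d_obs / mu            # chance-corrected effect size
    return d_obs, p_value, mu, R, M
```

For three well-separated pairs of bivariate points, only the 3! = 6 relabelings of the observed partition attain δ ≤ δ_o among the M = 90 arrangements.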
A convenient, although positively biased, measure of effect size for the BNP trace test is given by
which can be compared with the unbiased chance-corrected measure of effect size, \(\Re = +0.2556\). No comparisons are made with Cohen’s \(\hat {d}\), Kelley’s \(\hat {\eta }^{2}\), Hays’ \(\hat {\omega }_{\text{F}}^{2}\), or Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measures of effect size as \(\hat {d}\), \(\hat {\eta }^{2}\), \(\hat {\omega }_{\text{F}}^{2}\), and \(\hat {\omega }_{\text{R}}^{2}\) are undefined for multivariate data.
The functional relationships between statistic δ and the \(V^{(2)}\) BNP trace statistic are given by
Following the expressions given in Eq. (8.28) for test statistics δ and \(V^{(2)}\), the observed value for test statistic δ with respect to the observed value of test statistic \(V^{(2)}\) is
and the observed value for test statistic \(V^{(2)}\) with respect to the observed value of test statistic δ is
8.9.3 An Exact Analysis with v = 1
For a second analysis of the multivariate measurement scores listed in Table 8.11 on p. 307 under the Fisher–Pitman permutation model, let the treatment-group weights again be given by
\(C_{i} = \dfrac{n_{i}-1}{N-g}\;, \qquad i = 1,\,\ldots,\,g\;,\)
but set v = 1 instead of v = 2, employing ordinary Euclidean scaling between the N = 12 multivariate scores. Following Eq. (8.2) on p. 261, the multivariate scores listed in Table 8.11 yield g = 3 average distance-function values of
Following Eq. (8.1) on p. 261, the observed value of the permutation test statistic based on v = 1 and treatment-group weights \(C_{i} = (n_{i}-1)/(N-g)\) is \(\delta = 2.0253\).
There are only
\(M = \dfrac{N!}{n_{1}!\,n_{2}!\,n_{3}!} = \dfrac{12!}{5!\,4!\,3!} = 27{,}720\)
possible, equally-likely arrangements in the reference set of all permutations of the N = 12 multivariate observations listed in Table 8.11, making an exact permutation analysis feasible.
Under the Fisher–Pitman permutation model, the exact probability of an observed δ is the proportion of δ test statistic values computed on all possible, equally-likely arrangements of the N = 12 multivariate observations listed in Table 8.11 that are equal to or less than the observed value of δ = 2.0253. There are exactly 618 δ test statistic values that are equal to or less than the observed value of δ = 2.0253. If all M arrangements of the N = 12 multivariate observations listed in Table 8.11 occur with equal chance under the Fisher–Pitman null hypothesis, the exact probability value of δ = 2.0253 computed on the M = 27,720 possible arrangements of the observed data with \(n_{1} = 5\), \(n_{2} = 4\), and \(n_{3} = 3\) multivariate observations preserved for each arrangement is
\(P(\delta \le \delta_{\text{o}} \,|\, H_{0}) = \dfrac{618}{27{,}720} = 0.0223\;,\)
where \(\delta_{\text{o}}\) denotes the observed value of test statistic δ and M is the number of possible, equally-likely arrangements of the N = 12 multivariate observations listed in Table 8.11. No comparison is made with the Bartlett–Nanda–Pillai trace test as the BNP test is undefined for ordinary Euclidean scaling.
Following Eq. (8.7) on p. 263, the exact expected value of the M = 27,720 δ test statistic values under the Fisher–Pitman null hypothesis is
and, following Eq. (8.6) on p. 263, the observed chance-corrected measure of effect size is
indicating approximately 20% within-group agreement above that expected by chance. No comparison is made with the conventional measure of effect size as \(\eta^{2}\) is undefined for ordinary Euclidean scaling.
8.10 Summary
This chapter examined statistical methods for multiple independent samples where the null hypothesis posits no differences among the g ≥ 3 populations that the g random samples are presumed to represent. Under the Neyman–Pearson population model of statistical inference, a conventional one-way analysis of variance and five measures of effect size were described and illustrated: Fisher’s F-ratio test statistic, together with Cohen’s \(\hat {d}\), Pearson’s \(\eta^{2}\), Kelley’s \(\hat {\eta }^{2}\), Hays’ \(\hat {\omega }_{\text{F}}^{2}\), and Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measures of effect size.
Under the Fisher–Pitman permutation model of statistical inference, test statistic δ and associated measure of effect size, \(\Re \), were described and illustrated for multi-sample tests. For tests of g ≥ 3 independent samples, test statistic δ was demonstrated to be flexible enough to incorporate both ordinary and squared Euclidean scaling functions with v = 1 and v = 2, respectively. Effect size measure \(\Re \) was shown to be applicable to either v = 1 or v = 2 without modification and to have a clear and meaningful chance-corrected interpretation.
Six examples illustrated permutation-based statistics δ and \(\Re \). In the first example, a small sample of N = 10 observations in g = 3 treatment groups was utilized to describe and illustrate the calculation of test statistics δ and \(\Re \) for multiple independent samples. The second example with N = 10 observations in g = 3 treatment groups demonstrated the chance-corrected measure of effect size, \(\Re \), and related \(\Re \) to the five conventional measures of effect size for g ≥ 3 independent samples: Cohen’s \(\hat {d}\), Pearson’s \(\eta^{2}\), Kelley’s \(\hat {\eta }^{2}\), Hays’ \(\hat {\omega }_{\text{F}}^{2}\), and Hays’ \(\hat {\omega }_{\text{R}}^{2}\). The third example with N = 28 observations in g = 4 treatment groups illustrated the effects of extreme values on analyses using v = 1 for ordinary Euclidean scaling and v = 2 for squared Euclidean scaling. The fourth example with N = 15 observations in g = 4 treatment groups compared exact and Monte Carlo permutation statistical methods, illustrating the accuracy and efficiency of Monte Carlo analyses. The fifth example with N = 18 rank scores in g = 3 treatment groups illustrated an application of permutation statistical methods to univariate rank-score data, comparing a permutation analysis of the rank-score data with the conventional Kruskal–Wallis g-sample one-way analysis of variance for ranks. The sixth example extended both test statistic δ and effect-size measure \(\Re \) to multivariate data with N = 12 multivariate observations in g = 3 treatment groups and compared the permutation analysis of the multivariate data with the conventional Bartlett–Nanda–Pillai trace test for multivariate independent samples.
Chapter 9 continues the presentation of permutation statistical methods for g ≥ 3 samples, but examines research designs in which the subjects in the g ≥ 3 samples are matched on specific characteristics; that is, the samples are not independent. Research designs with matched treatment groups, under a null hypothesis of no differences among the groups, have a long history, are ubiquitous in the contemporary statistical literature, and are generally known as randomized-blocks designs, of which a large variety exists.
Notes
- 1.
In some disciplines tests on multiple independent samples are known as between-subjects tests and tests for multiple dependent or related samples are known as within-subjects tests.
- 2.
The terms MS Between and MS Within are only one set of descriptive labels for the numerator and denominator of the F-ratio test statistic. MS Between is often replaced by either MS Treatment or MS Factor and MS Within is often replaced by MS Error.
- 3.
It is well known that Kelley’s correlation ratio is not unbiased, but since the title of Truman Kelley’s 1935 article was “An unbiased correlation ratio measure,” the label has persisted.
- 4.
Since the sizes of the treatment groups are not equal, the average value of \(\bar {n} = 3.3333\) is used for both Cohen’s \(\hat {d}\) measure of effect size and Hays’ \(\hat {\omega }_{\text{R}}^{2}\) measure of effect size for a random-effects model. In cases where the treatment-group sizes differ greatly, a weighted average recommended by Haggard is often adopted [6].
- 5.
For a one-way completely-randomized analysis of variance, a fixed-effects model and a random-effects model yield the same F-ratio, but measures of effect size can differ under the two models.
References
Bartlett, M.S.: A note on tests of significance in multivariate analysis. Proc. Camb. Philos. Soc. 34, 33–40 (1939)
Berry, K.J., Mielke, P.W., Johnston, J.E.: Permutation Statistical Methods: An Integrated Approach. Springer, Cham (2016)
Boik, R.J.: The Fisher–Pitman permutation test: a non-robust alternative to the normal theory F test when variances are heterogeneous. Brit. J. Math. Stat. Psychol. 40, 26–42 (1987)
Box, J.F.: R. A. Fisher: The Life of a Scientist. Wiley, New York (1978)
Festinger, L.: The significance of differences between means without reference to the frequency distribution function. Psychometrika 11, 97–105 (1946)
Haggard, E.A.: Intraclass Correlation and the Analysis of Variance. Dryden, New York (1958)
Haldane, J.B.S., Smith, C.A.B.: A simple exact test for birth-order effect. Ann. Eugenic. 14, 117–124 (1948)
Hall, N.S.: R. A. Fisher and his advocacy of randomization. J. Hist. Biol. 40, 295–325 (2007)
Hotelling, H.: A generalized T test and measure of multivariate dispersion. In: Neyman, J. (ed.) Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, vol. II, pp. 23–41. University of California Press, Berkeley (1951)
Hotelling, H., Pabst, M.R.: Rank correlation and tests of significance involving no assumption of normality. Ann. Math. Stat. 7, 29–43 (1936)
Johnston, J.E., Berry, K.J., Mielke, P.W.: Permutation tests: precision in estimating probability values. Percept. Motor Skill. 105, 915–920 (2007)
Kruskal, W.H., Wallis, W.A.: Use of ranks in one-criterion variance analysis. J. Am. Stat. Assoc. 47, 583–621 (1952). [Erratum: J. Am. Stat. Assoc. 48, 907–911 (1953)]
Lawley, D.N.: A generalization of Fisher’s z test. Biometrika 30, 180–187 (1938)
Lawley, D.N.: Corrections to “A generalization of Fisher’s z test”. Biometrika 30, 467–469 (1939)
Mann, H.B., Whitney, D.R.: On a test of whether one of two random variables is stochastically larger than the other. Ann. Math. Stat. 18, 50–60 (1947)
Nanda, D.N.: Distribution of the sum of roots of a determinantal equation. Ann. Math. Stat. 21, 432–439 (1950)
Olson, C.L.: On choosing a test statistic in multivariate analysis of variance. Psychol. Bull. 83, 579–586 (1976)
Olson, C.L.: Practical considerations in choosing a MANOVA test statistic: a rejoinder to Stevens. Psychol. Bull. 86, 1350–1352 (1979)
Pillai, K.C.S.: Some new test criteria in multivariate analysis. Ann. Math. Stat. 26, 117–121 (1955)
Roy, S.N.: On a heuristic method of test construction and its use in multivariate analysis. Ann. Math. Stat. 24, 220–238 (1953)
Roy, S.N.: Some Aspects of Multivariate Analysis. Wiley, New York (1957)
Snedecor, G.W.: Calculation and Interpretation of Analysis of Variance and Covariance. Collegiate Press, Ames (1934)
Tabachnick, B.G., Fidell, L.S.: Using Multivariate Statistics, 5th edn. Pearson, Boston (2007)
van der Reyden, D.: A simple statistical significance test. Rhod. Agric. J. 49, 96–104 (1952)
Wilcoxon, F.: Individual comparisons by ranking methods. Biometrics Bull. 1, 80–83 (1945)
Wilks, S.S.: Certain generalizations in the analysis of variance. Biometrika 24, 471–494 (1932)
© 2019 Springer Nature Switzerland AG
Berry, K.J., Johnston, J.E., Mielke, P.W. (2019). Completely-Randomized Designs. In: A Primer of Permutation Statistical Methods. Springer, Cham. https://doi.org/10.1007/978-3-030-20933-9_8
Print ISBN: 978-3-030-20932-2
Online ISBN: 978-3-030-20933-9