Abstract
Quantitative traits, by definition, are controlled by the segregation of multiple genes. However, the continuous distribution of a quantitative trait does not require the segregation of many genes. Segregation of just a few genes, or even a single gene, may be sufficient to generate a continuously distributed phenotype, provided that the environmental variance contributes a substantial amount of the trait variation. It is often postulated that a quantitative trait may be controlled by one or a few "major genes" plus multiple modifier genes (genes with very small effects). Such a model is called an oligogenic model, in contrast to the so-called polygenic model, where multiple genes with small and equal effects are assumed.
In this chapter, we will discuss a method to test the hypothesis that a quantitative trait is controlled by a single major gene, even without observing the genotypes of the major gene. The method is called segregation analysis of quantitative traits. Although segregation analysis belongs to major gene detection, we discuss this topic separately from the previous topic to emphasize a slight difference between segregation analysis and the major gene detection discussed earlier. Here, we define major gene detection as an association study between a single-locus genotype and a quantitative trait where genotypes of the major gene are observed for all individuals. Segregation analysis, however, refers to a single-locus association study where genotypes of the major gene are not observed at all. Another reason for separating major gene detection from segregation analysis is that the statistical method and hypothesis test for segregation analysis can be quite different from those of major gene detection.
1 Gaussian Mixture Distribution
We will use an F2 population as an example to discuss the segregation analysis. Consider the three genotypes in the following order: A1A1, A1A2, and A2A2. Let k = 1, 2, 3 indicate the three ordered genotypes. The means of individuals bearing the three ordered genotypes are denoted by μ1, μ2, and μ3, respectively. Let y_j be the phenotypic value of individual j for j = 1, …, n, where n is the sample size. Given that individual j has the kth genotype, the linear model for y_j is

$$y_j = \mu_k + \epsilon_j$$

where ε_j ∼ N(0, σ²) and σ² is the residual error variance. The probability density of y_j conditional on the kth genotype is

$$f_k(y_j) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[-\frac{(y_j-\mu_k)^2}{2\sigma^2}\right]$$

In reality, the genotype of an individual is not observable, and thus, a mixture distribution is needed to describe the probability density of y_j. Let π_k, ∀k = 1, 2, 3, be the proportion of genotype k (also called the mixing proportion). Without any prior knowledge, π_k may be described by the Mendelian segregation ratio, i.e., \({\pi }_{1} = {\pi }_{3} = \frac{1} {2}{\pi }_{2} = \frac{1} {4}\). Therefore, under the assumption of Mendelian segregation, the π_k's are constants, not parameters. The distribution of y_j is a mixture of three normal distributions, each weighted by its Mendelian mixing proportion. The mixture distribution is demonstrated by Fig. 7.1. The probability density of y_j is

$$f(y_j) = \sum_{k=1}^{3}\pi_k f_k(y_j)$$

The overall observed log likelihood function for the parameters θ = {μ1, μ2, μ3, σ²} is

$$L(\theta) = \sum_{j=1}^{n}\ln f(y_j) = \sum_{j=1}^{n}\ln\left[\sum_{k=1}^{3}\pi_k f_k(y_j)\right]$$
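As a concrete illustration, the observed log likelihood above can be computed in a few lines. The following is a minimal sketch in Python/NumPy; the function name and input data are ours, not from the text:

```python
import numpy as np

def mixture_loglik(y, mu, sigma2):
    """Observed log likelihood L(theta) = sum_j ln sum_k pi_k f_k(y_j),
    with mixing proportions fixed at the Mendelian ratio (1/4, 1/2, 1/4)."""
    y = np.asarray(y, dtype=float)
    mu = np.asarray(mu, dtype=float)
    pi = np.array([0.25, 0.50, 0.25])  # Mendelian segregation ratio
    # n x 3 matrix of normal densities f_k(y_j) = N(y_j | mu_k, sigma2)
    dens = np.exp(-(y[:, None] - mu[None, :]) ** 2 / (2.0 * sigma2)) \
           / np.sqrt(2.0 * np.pi * sigma2)
    return float(np.sum(np.log(dens @ pi)))
```

When the three component means coincide, the mixture collapses to a single normal density, which provides a quick sanity check of the computation.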
Any numerical algorithm may be used to estimate the parameters. However, the EM algorithm (Dempster et al. 1977) appears to be the most convenient method for such a mixture model problem and thus will be introduced in this chapter.
2 EM Algorithm
The expectation–maximization (EM) algorithm was developed by Dempster et al. (1977) as a special numerical algorithm for finding the maximum likelihood estimates (MLE) of parameters. In contrast to the Newton–Raphson algorithm, the EM algorithm is not a general algorithm for MLE; rather, it can only be applied to certain special problems. If the following two conditions hold, then we should consider using the EM algorithm. The first condition is that the maximum likelihood problem can be formulated as a missing value problem. The second condition is that, if the missing values were not missing, the MLE would have a closed form solution or, at least, a mathematically attractive form of solution. We now evaluate the mixture model problem to see whether the two conditions apply.
2.1 Closed Form Solution
We introduce a label η_j to indicate the genotype of individual j. The definition of η_j is

$$\eta_j = k \quad \text{if individual } j \text{ carries the } k\text{th genotype},\quad k = 1, 2, 3$$

Since the genotype of an individual is not observable, the label η_j is missing. Therefore, we can formulate the problem as a missing value problem. The missing values are the genotypes of the major gene, denoted by the variable η_j for \(j = 1,\ldots ,n\). Therefore, the first condition for using the EM algorithm is met. If η_j were not missing, would we have a closed form solution for the parameters? Let us define three more variables as functions of η_j. These variables, δ(η_j, 1), δ(η_j, 2), and δ(η_j, 3), take the values

$$\delta(\eta_j,k) = \left\{\begin{array}{ll} 1 & \text{if } \eta_j = k\\ 0 & \text{if } \eta_j \neq k\end{array}\right.$$

for k = 1, 2, 3. We now use δ(η_j, k) to represent the missing values. If δ(η_j, k) were not missing, the linear model would be described by

$$y_j = \sum_{k=1}^{3}\delta(\eta_j,k)\mu_k + \epsilon_j$$

Let us define δ_j = [δ(η_j, 1) δ(η_j, 2) δ(η_j, 3)] as a 1 × 3 vector and β = [μ1 μ2 μ3]^T as a 3 × 1 vector. The linear model can be rewritten as

$$y_j = \delta_j\beta + \epsilon_j$$

When ε_j ∼ N(0, σ²) is assumed, the maximum likelihood estimates of the parameters are

$$\hat{\beta} = \left[\sum_{j=1}^{n}\delta_j^T\delta_j\right]^{-1}\left[\sum_{j=1}^{n}\delta_j^T y_j\right]$$

for the means and

$$\hat{\sigma}^2 = \frac{1}{n}\sum_{j=1}^{n}\left(y_j - \delta_j\hat{\beta}\right)^2$$

for the residual variance. We see that if the missing variables were not missing, the MLEs of the parameters do have an attractive closed form solution. Since both requirements of the EM algorithm are met, we can adopt the EM algorithm to search for the MLE of the parameters.
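If the genotype labels were actually observed, the closed form solution reduces to a per-genotype mean and a pooled residual variance. A minimal sketch, with hypothetical function and variable names:

```python
import numpy as np

def complete_data_mle(y, eta):
    """MLE of beta and sigma2 when the genotype label eta_j in {1, 2, 3}
    is observed: beta = (sum d'd)^{-1}(sum d'y), sigma2 = mean squared residual."""
    y = np.asarray(y, dtype=float)
    eta = np.asarray(eta)
    # n x 3 indicator matrix: D[j, k-1] = delta(eta_j, k)
    D = np.stack([(eta == k).astype(float) for k in (1, 2, 3)], axis=1)
    beta = np.linalg.solve(D.T @ D, D.T @ y)      # per-genotype means
    sigma2 = float(np.mean((y - D @ beta) ** 2))  # residual variance (1/n form)
    return beta, sigma2
```

Because D'D is diagonal (each row of D has a single 1), the solve simply divides each genotype's phenotype sum by its count.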
2.2 EM Steps
Before we derive the EM algorithm, let us show the expectation and maximization steps. The E-step calculates the expectations of all terms containing the missing variables δ_j. The M-step simply estimates β and σ² using the closed form solutions given above, with the terms containing the missing variables replaced by the expectations obtained in the E-step, as shown below:

$$\beta^{(t+1)} = \left[\sum_{j=1}^{n}E(\delta_j^T\delta_j)\right]^{-1}\left[\sum_{j=1}^{n}E(\delta_j^T)y_j\right]$$

and

$$\sigma^{2(t+1)} = \frac{1}{n}\sum_{j=1}^{n}E\left[(y_j - \delta_j\beta)^2\right]$$

We can see that the EM algorithm is better described by introducing the M-step first and then the E-step (in reverse order). The details of the E-step are given below:

$$E(\delta_j^T\delta_j) = \text{diag}\big\{E[\delta(\eta_j,1)],\,E[\delta(\eta_j,2)],\,E[\delta(\eta_j,3)]\big\}$$

and

$$E\left[(y_j - \delta_j\beta)^2\right] = \sum_{k=1}^{3}E[\delta(\eta_j,k)](y_j - \mu_k)^2$$

Here, we only need to calculate E[δ(η_j, k)], which is the conditional expectation of δ(η_j, k) given the parameter values and the phenotypic value. The full expression of the conditional expectation should be E[δ(η_j, k) | y_j, β, σ²], but we use E[δ(η_j, k)] as a short notation. This expectation is the posterior probability of genotype k,

$$E[\delta(\eta_j,k)] = \frac{\pi_k f_k(y_j\vert\theta)}{\sum_{k'=1}^{3}\pi_{k'}f_{k'}(y_j\vert\theta)}$$

where \({\pi }_{1} = {\pi }_{3} = \frac{1} {2}{\pi }_{2} = \frac{1} {4}\) is the Mendelian segregation ratio and f_k(y_j | θ) = N(y_j | μ_k, σ²) is the normal density. In summary, the EM algorithm is described by
- Initialization: set t = 0 and choose an initial value θ(t).
- E-step: calculate E[δ(η_j, k) | y_j, θ(t)] for all j and k.
- M-step: update β(t + 1) and σ²(t + 1).
- Iteration: set \(t = t + 1\) and iterate between the E-step and the M-step until convergence.
The convergence criterion is

$$\frac{1}{\dim(\theta)}\left\Vert\theta^{(t+1)} - \theta^{(t)}\right\Vert^2 \leq \varepsilon$$

where dim(θ) = 4 is the dimension of the parameter vector and ε is an arbitrarily small positive number, say \(10^{-8}\).
Once the three genotypic values are estimated, the additive and dominance effects are estimated using linear contrasts of the genotypic values, e.g.,

$$\hat{a} = \frac{1}{2}(\hat{\mu}_1 - \hat{\mu}_3)\quad\text{and}\quad \hat{d} = \hat{\mu}_2 - \frac{1}{2}(\hat{\mu}_1 + \hat{\mu}_3)$$
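Putting the E-step, the M-step, and the genetic-effect contrasts together, an EM iteration can be sketched as follows. This is a minimal illustration; the function name, starting values, and the simple max-change stopping rule are our choices, not the author's code:

```python
import numpy as np

def em_segregation(y, mu0, sigma2_0, tol=1e-8, max_iter=2000):
    """EM for the three-component F2 mixture with fixed Mendelian proportions.
    Returns estimated genotypic means, residual variance, and the additive (a)
    and dominance (d) contrasts."""
    y = np.asarray(y, dtype=float)
    pi = np.array([0.25, 0.50, 0.25])
    mu = np.asarray(mu0, dtype=float).copy()
    sigma2 = float(sigma2_0)
    for _ in range(max_iter):
        # E-step: posterior probabilities E[delta(eta_j, k) | y_j, theta(t)]
        dens = np.exp(-(y[:, None] - mu[None, :]) ** 2 / (2.0 * sigma2)) \
               / np.sqrt(2.0 * np.pi * sigma2)
        w = dens * pi
        w /= w.sum(axis=1, keepdims=True)
        # M-step: posterior-weighted genotypic means and residual variance
        mu_new = (w * y[:, None]).sum(axis=0) / w.sum(axis=0)
        sigma2_new = float(np.sum(w * (y[:, None] - mu_new[None, :]) ** 2) / len(y))
        change = max(np.max(np.abs(mu_new - mu)), abs(sigma2_new - sigma2))
        mu, sigma2 = mu_new, sigma2_new
        if change < tol:
            break
    a = (mu[0] - mu[2]) / 2.0           # additive effect
    d = mu[1] - (mu[0] + mu[2]) / 2.0   # dominance effect
    return mu, sigma2, a, d
```

As with any mixture model, the likelihood surface can be multimodal, so in practice one may restart the EM from several initial values and keep the solution with the highest observed log likelihood.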
2.3 Derivation of the EM Algorithm
The observed log likelihood function is given in (7.4). The MLE of θ is the (vector) value that maximizes this log likelihood function. The EM algorithm, however, does not directly maximize this likelihood function; instead, it maximizes the expectation of the complete-data log likelihood function, with the expectation taken with respect to the missing variables δ(η_j, k). The complete-data log likelihood function is

$$L_c(\theta) = \sum_{j=1}^{n}\sum_{k=1}^{3}\delta(\eta_j,k)\ln\big[\pi_k f_k(y_j\vert\theta)\big]$$

The expectation of the complete-data log likelihood is \({\text{ E}}_{{ \theta }^{(t)}}[{L}_{c}(\theta )\vert y,{\theta }^{(t)}]\), which is denoted in short by L(θ | θ(t)) and is defined as

$$L(\theta\vert\theta^{(t)}) = \sum_{j=1}^{n}\sum_{k=1}^{3}E[\delta(\eta_j,k)]\ln\big[\pi_k f_k(y_j\vert\theta)\big]$$

With the EM algorithm, the target likelihood function for maximization is neither the complete-data log likelihood function (7.19) nor the observed log likelihood function (7.4); rather, it is the expected complete-data log likelihood function (7.20). An alternative expression of the above equation is

$$L(\theta\vert\theta^{(t)}) = -\frac{n}{2}\ln(\sigma^2) - \frac{1}{2\sigma^2}\sum_{j=1}^{n}\sum_{k=1}^{3}E[\delta(\eta_j,k)](y_j-\mu_k)^2 + \text{constant}$$

The partial derivatives of L(θ | θ(t)) with respect to β and σ² are

$$\frac{\partial}{\partial\beta}L(\theta\vert\theta^{(t)}) = \frac{1}{\sigma^2}\sum_{j=1}^{n}\Big[E(\delta_j^T)y_j - E(\delta_j^T\delta_j)\beta\Big]$$

and

$$\frac{\partial}{\partial\sigma^2}L(\theta\vert\theta^{(t)}) = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{j=1}^{n}\sum_{k=1}^{3}E[\delta(\eta_j,k)](y_j-\mu_k)^2$$

respectively. Setting \(\frac{\partial } {\partial \beta }L(\theta \vert {\theta }^{(t)}) = \frac{\partial } {\partial {\sigma }^{2}} L(\theta \vert {\theta }^{(t)}) = 0\), we get

$$\beta^{(t+1)} = \left[\sum_{j=1}^{n}E(\delta_j^T\delta_j)\right]^{-1}\left[\sum_{j=1}^{n}E(\delta_j^T)y_j\right]$$

and

$$\sigma^{2(t+1)} = \frac{1}{n}\sum_{j=1}^{n}\sum_{k=1}^{3}E[\delta(\eta_j,k)]\big(y_j-\mu_k^{(t+1)}\big)^2$$
This concludes the derivation of the EM algorithm.
2.4 Proof of the EM Algorithm
The target likelihood function for maximization in the EM algorithm is the expectation of the complete-data log likelihood function. However, the actual MLE of θ is obtained by maximization of the observed log likelihood function. To prove that the EM solution of the parameters is indeed the MLE, we only need to show that the partial derivative of the expected complete-data log likelihood is identical to the partial derivative of the observed log likelihood, i.e., \(\frac{\partial } {\partial \theta }L(\theta \vert {\theta }^{(t)}) = \frac{\partial } {\partial \theta }L(\theta )\). If the two partial derivatives are the same, then the solutions must be the same because they both solve the same equation system, i.e., \(\frac{\partial } {\partial \theta }L(\theta ) = 0\).
Recall that the partial derivative of the expected complete-data log likelihood function with respect to β is

$$\frac{\partial}{\partial\beta}L(\theta\vert\theta^{(t)}) = \frac{1}{\sigma^2}\sum_{j=1}^{n}\Big[E(\delta_j^T)y_j - E(\delta_j^T\delta_j)\beta\Big]$$

which is a 3 × 1 vector, as shown below:

$$\frac{\partial}{\partial\beta}L(\theta\vert\theta^{(t)}) = \frac{1}{\sigma^2}\sum_{j=1}^{n}\left[\begin{array}{c}E[\delta(\eta_j,1)](y_j-\mu_1)\\ E[\delta(\eta_j,2)](y_j-\mu_2)\\ E[\delta(\eta_j,3)](y_j-\mu_3)\end{array}\right]$$

The kth component of this vector is

$$\frac{\partial}{\partial\mu_k}L(\theta\vert\theta^{(t)}) = \frac{1}{\sigma^2}\sum_{j=1}^{n}E[\delta(\eta_j,k)](y_j-\mu_k)$$

The equation holds because E[δ(η_j, k)] = E[δ²(η_j, k)], a property of the Bernoulli distribution. We now evaluate the partial derivative of the expected complete-data log likelihood with respect to σ²,

$$\frac{\partial}{\partial\sigma^2}L(\theta\vert\theta^{(t)}) = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{j=1}^{n}\sum_{k=1}^{3}E[\delta(\eta_j,k)](y_j-\mu_k)^2$$
We now look at the partial derivatives of L(θ) with respect to the parameters. The observed log likelihood function is

$$L(\theta) = \sum_{j=1}^{n}\ln f(y_j)$$

where

$$f(y_j) = \sum_{k=1}^{3}\pi_k f_k(y_j\vert\theta)$$

The partial derivative of L(θ) with respect to the kth element of β = [μ1 μ2 μ3]^T is

$$\frac{\partial}{\partial\mu_k}L(\theta) = \sum_{j=1}^{n}\frac{1}{f(y_j)}\,\pi_k\frac{\partial}{\partial\mu_k}f_k(y_j\vert\theta)$$

where

$$\frac{\partial}{\partial\mu_k}f_k(y_j\vert\theta) = \frac{1}{\sigma^2}f_k(y_j\vert\theta)(y_j-\mu_k)$$

Hence,

$$\frac{\partial}{\partial\mu_k}L(\theta) = \frac{1}{\sigma^2}\sum_{j=1}^{n}\frac{\pi_k f_k(y_j\vert\theta)}{f(y_j)}(y_j-\mu_k)$$

Recall that

$$E[\delta(\eta_j,k)] = \frac{\pi_k f_k(y_j\vert\theta)}{\sum_{k'=1}^{3}\pi_{k'}f_{k'}(y_j\vert\theta)} = \frac{\pi_k f_k(y_j\vert\theta)}{f(y_j)}$$

Therefore,

$$\frac{\partial}{\partial\mu_k}L(\theta) = \frac{1}{\sigma^2}\sum_{j=1}^{n}E[\delta(\eta_j,k)](y_j-\mu_k)$$

which is exactly the same as \(\frac{\partial } {\partial {\mu }_{k}}L(\theta \vert {\theta }^{(t)})\) given in (7.27). Now, let us look at the partial derivative of L(θ) with respect to σ²,

$$\frac{\partial}{\partial\sigma^2}L(\theta) = \sum_{j=1}^{n}\frac{1}{f(y_j)}\sum_{k=1}^{3}\pi_k\frac{\partial}{\partial\sigma^2}f_k(y_j\vert\theta)$$

where

$$\frac{\partial}{\partial\sigma^2}f_k(y_j\vert\theta) = f_k(y_j\vert\theta)\left[-\frac{1}{2\sigma^2} + \frac{(y_j-\mu_k)^2}{2\sigma^4}\right]$$

Hence,

$$\frac{\partial}{\partial\sigma^2}L(\theta) = \sum_{j=1}^{n}\sum_{k=1}^{3}E[\delta(\eta_j,k)]\left[-\frac{1}{2\sigma^2} + \frac{(y_j-\mu_k)^2}{2\sigma^4}\right]$$

Note that

$$\sum_{k=1}^{3}E[\delta(\eta_j,k)] = 1$$

Therefore,

$$\frac{\partial}{\partial\sigma^2}L(\theta) = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{j=1}^{n}\sum_{k=1}^{3}E[\delta(\eta_j,k)](y_j-\mu_k)^2$$

which is exactly the same as \(\frac{\partial } {\partial {\sigma }^{2}} L(\theta \vert {\theta }^{(t)})\) given in (7.28). We now have confirmed that

$$\frac{\partial}{\partial\beta}L(\theta\vert\theta^{(t)}) = \frac{\partial}{\partial\beta}L(\theta)$$

and

$$\frac{\partial}{\partial\sigma^2}L(\theta\vert\theta^{(t)}) = \frac{\partial}{\partial\sigma^2}L(\theta)$$
This concludes the proof that the EM algorithm does lead to the MLE of the parameters.
3 Hypothesis Tests
The overall null hypothesis is "no major gene is segregating," denoted by

$$H_0: \mu_1 = \mu_2 = \mu_3$$

The alternative hypothesis is "at least one of the means is different from the others," denoted by H_A. The likelihood ratio test statistic is

$$\lambda = -2\big[L_0(\hat{\theta}_0) - L_1(\hat{\theta})\big]$$

where \({L}_{1}(\hat{\theta })\) is the observed log likelihood function evaluated at the MLE of θ for the full model, and

$$L_0(\hat{\theta}_0) = -\frac{n}{2}\ln(2\pi\hat{\sigma}^2) - \frac{1}{2\hat{\sigma}^2}\sum_{j=1}^{n}(y_j - \hat{\mu})^2$$

is the log likelihood value evaluated at the null model, where

$$\hat{\mu} = \frac{1}{n}\sum_{j=1}^{n}y_j$$

and

$$\hat{\sigma}^2 = \frac{1}{n}\sum_{j=1}^{n}(y_j - \hat{\mu})^2$$

Under the null hypothesis, λ follows approximately a chi-square distribution with two degrees of freedom. Therefore, H_0 is rejected if \(\lambda > \chi^2_{2,\,1-\alpha}\), where α = 0.05 may be chosen as the Type I error rate.
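The test above can be sketched numerically as follows; the mixture-density computation is inlined, the chi-square quantile comes from SciPy, and the function name is ours:

```python
import numpy as np
from scipy.stats import chi2

def lrt_major_gene(y, mu_hat, sigma2_hat, alpha=0.05):
    """lambda = -2[L0(theta0_hat) - L1(theta_hat)]; reject H0 when lambda
    exceeds the (1 - alpha) quantile of chi-square with 2 df."""
    y = np.asarray(y, dtype=float)
    mu_hat = np.asarray(mu_hat, dtype=float)
    pi = np.array([0.25, 0.50, 0.25])
    # Full-model log likelihood at the EM estimates
    dens = np.exp(-(y[:, None] - mu_hat[None, :]) ** 2 / (2.0 * sigma2_hat)) \
           / np.sqrt(2.0 * np.pi * sigma2_hat)
    L1 = np.sum(np.log(dens @ pi))
    # Null model: a single normal with mean ybar and variance (1/n) SS
    mu0 = y.mean()
    s0 = np.mean((y - mu0) ** 2)
    L0 = np.sum(np.log(np.exp(-(y - mu0) ** 2 / (2.0 * s0))
                       / np.sqrt(2.0 * np.pi * s0)))
    lam = float(-2.0 * (L0 - L1))
    return lam, lam > chi2.ppf(1.0 - alpha, df=2)
```

If the fitted component means all equal the sample mean and the fitted variance equals the null variance, the two likelihoods coincide and λ = 0, as expected.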
4 Variances of Estimated Parameters
Unlike other iterative methods of parameter estimation, e.g., the Newton–Raphson method, in which the variance–covariance matrix of the estimated parameters is provided automatically as a by-product of the iteration process, the EM algorithm does not offer an easy way to calculate the variance–covariance matrix of the estimated parameters. We now introduce a special method for calculating this matrix. The method was developed by Louis (1982) particularly for calculating the variance–covariance matrix of parameters that are estimated via the EM algorithm. The method requires the first and second partial derivatives of the complete-data log likelihood function (not the observed log likelihood function). The complete-data log likelihood function is

$$L_c(\theta,\delta) = \sum_{j=1}^{n}L_j(\theta,\delta_j)$$

where

$$L_j(\theta,\delta_j) = \sum_{k=1}^{3}\delta(\eta_j,k)\ln\big[\pi_k f_k(y_j\vert\theta)\big]$$

The first partial derivative of this log likelihood with respect to the parameters is called the score function, which is

$$S(\theta,\delta) = \frac{\partial}{\partial\theta}L_c(\theta,\delta) = \sum_{j=1}^{n}S_j(\theta,\delta)$$

where

$$S_j(\theta,\delta) = \frac{\partial}{\partial\theta}L_j(\theta,\delta_j)$$

The second partial derivative is called the Hessian matrix H(θ, δ). The negative value of the Hessian matrix is denoted by \(B(\theta ,\delta ) = -H(\theta ,\delta )\),

$$B(\theta,\delta) = -\frac{\partial^2}{\partial\theta\,\partial\theta^T}L_c(\theta,\delta) = \sum_{j=1}^{n}B_j(\theta,\delta)$$

where

$$B_j(\theta,\delta) = -\frac{\partial^2}{\partial\theta\,\partial\theta^T}L_j(\theta,\delta_j)$$

The detailed expression of B_j(θ, δ) is given below:

$$B_j(\theta,\delta) = \left[\begin{array}{cc}\dfrac{1}{\sigma^2}\delta_j^T\delta_j & \dfrac{1}{\sigma^4}\delta_j^T(y_j-\delta_j\beta)\\[2mm] \dfrac{1}{\sigma^4}(y_j-\delta_j\beta)\delta_j & -\dfrac{1}{2\sigma^4}+\dfrac{1}{\sigma^6}(y_j-\delta_j\beta)^2\end{array}\right]$$

Louis (1982) gave the following information matrix:

$$I(\theta) = \sum_{j=1}^{n}\Big\{E[B_j(\theta,\delta)] - \text{var}[S_j(\theta,\delta)]\Big\}$$

where the expectation and variance are taken with respect to the missing variable δ_j using the posterior probability of δ_j. Detailed expressions of E[B_j(θ, δ)] and var[S_j(θ, δ)] are given at the end of this section. Readers may also refer to Han and Xu (2008) and Xu and Hu (2010) for the derivation and the results. Replacing θ by \(\hat{\theta }\) and taking the inverse of the information matrix, we get the variance–covariance matrix of the estimated parameters,

$$\text{var}(\hat{\theta}) = I^{-1}(\hat{\theta})$$

This is a 4 × 4 variance–covariance matrix, as shown below:

$$\text{var}(\hat{\theta}) = \left[\begin{array}{cc}\text{var}(\hat{\beta}) & \text{cov}(\hat{\beta},\hat{\sigma}^2)\\ \text{cov}(\hat{\sigma}^2,\hat{\beta}) & \text{var}(\hat{\sigma}^2)\end{array}\right]$$

where \(\text{ var}(\hat{\beta })\) is a 3 × 3 variance matrix for the estimated genotypic values.
The additive and dominance effects can be expressed as linear functions of β, as demonstrated below:

$$\left[\begin{array}{c}\hat{a}\\ \hat{d}\end{array}\right] = L\hat{\beta}$$

where

$$L = \left[\begin{array}{ccc}\frac{1}{2} & 0 & -\frac{1}{2}\\ -\frac{1}{2} & 1 & -\frac{1}{2}\end{array}\right]$$

The variance–covariance matrix for the estimated major gene effects is

$$\text{var}\left[\begin{array}{c}\hat{a}\\ \hat{d}\end{array}\right] = L\,\text{var}(\hat{\beta})\,L^T$$

The variance–covariance matrix of the estimated major gene effects also facilitates an alternative method for testing the hypothesis \({H}_{0} : a = d = 0\). This test uses the Wald test statistic (Wald 1943),

$$W = \left[\begin{array}{c}\hat{a}\\ \hat{d}\end{array}\right]^T\left\{\text{var}\left[\begin{array}{c}\hat{a}\\ \hat{d}\end{array}\right]\right\}^{-1}\left[\begin{array}{c}\hat{a}\\ \hat{d}\end{array}\right]$$

The Wald test statistic behaves much like the likelihood ratio test statistic: under the null model, W follows approximately a χ² distribution with 2 degrees of freedom. However, the Wald test is usually considered inferior to the likelihood ratio test, especially when the sample size is small.
Before exiting this section, we now provide the derivation of E[B_j(θ, δ)] and var[S_j(θ, δ)]. Recall that δ_j is a 1 × 3 multinomial variable with sample size 1, defined as

$$\delta_j = \big[\begin{array}{ccc}\delta(\eta_j,1) & \delta(\eta_j,2) & \delta(\eta_j,3)\end{array}\big]$$

This variable has the following properties:

$$\delta^2(\eta_j,k) = \delta(\eta_j,k)$$

and

$$\delta(\eta_j,k)\,\delta(\eta_j,k') = 0,\quad \forall\, k \neq k'$$

Therefore, the expectation of δ_j is

$$E(\delta_j) = \big[\begin{array}{ccc}E[\delta(\eta_j,1)] & E[\delta(\eta_j,2)] & E[\delta(\eta_j,3)]\end{array}\big]$$

The expectation of its quadratic form is

$$E(\delta_j^T\delta_j) = \text{diag}\big\{E[\delta(\eta_j,1)],\,E[\delta(\eta_j,2)],\,E[\delta(\eta_j,3)]\big\}$$

The variance–covariance matrix of δ_j is

$$\text{var}(\delta_j) = E(\delta_j^T\delta_j) - E(\delta_j^T)E(\delta_j)$$
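These multinomial(1, p) moments are easy to verify numerically. A small sketch under the Mendelian proportions; the helper name, Monte Carlo check, and seed are ours:

```python
import numpy as np

def multinomial1_moments(p):
    """Mean and variance-covariance of a multinomial(1, p) indicator vector:
    E(delta) = p and var(delta) = diag(p) - p' p."""
    p = np.asarray(p, dtype=float)
    return p, np.diag(p) - np.outer(p, p)

# Monte Carlo confirmation under the Mendelian ratio (illustrative seed)
p = np.array([0.25, 0.50, 0.25])
mean, cov = multinomial1_moments(p)
rng = np.random.default_rng(1)
draws = rng.multinomial(1, p, size=100_000).astype(float)
```

The diagonal entries p_k(1 - p_k) are ordinary Bernoulli variances, while the off-diagonal entries -p_k p_k' are negative because exactly one indicator is 1 per draw.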
To derive the observed information matrix, we need the first and second partial derivatives of the complete-data log likelihood with respect to the parameter vector \(\theta = {[\begin{array}{*{20}{c}} {\beta }^{T}&{\sigma }^{2} \end{array} ]}^{T}\). The score vector is rewritten as

$$S_j(\theta,\delta) = \left[\begin{array}{c}\dfrac{1}{\sigma^2}\delta_j^T(y_j-\delta_j\beta)\\[2mm] -\dfrac{1}{2\sigma^2}+\dfrac{1}{2\sigma^4}(y_j-\delta_j\beta)^2\end{array}\right]$$

which is a 4 × 1 vector (three elements for β and one for σ²). The negative of the second partial derivative is

$$B_j(\theta,\delta) = \left[\begin{array}{cc}\dfrac{1}{\sigma^2}\delta_j^T\delta_j & \dfrac{1}{\sigma^4}\delta_j^T(y_j-\delta_j\beta)\\[2mm] \dfrac{1}{\sigma^4}(y_j-\delta_j\beta)\delta_j & -\dfrac{1}{2\sigma^4}+\dfrac{1}{\sigma^6}(y_j-\delta_j\beta)^2\end{array}\right]$$

which is a 4 × 4 matrix. The expectation of B_j(θ, δ) is easy to derive, but derivation of the variance–covariance matrix of the score vector is very difficult. Xu and Xu (2003) used a Monte Carlo approach to approximate the expectation and the variance–covariance matrix: they simulated multiple (e.g., 5,000) samples of δ_j from the posterior distribution and then took the sample mean of B_j(θ, δ) and the sample variance–covariance matrix of S_j(θ, δ) as approximations of the corresponding terms. Here, we take a theoretical approach and provide explicit expressions for the expectation and variance–covariance matrix. We can express the score vector as a linear function of δ_j and the B_j(θ, δ) matrix as a quadratic function of δ_j. By trial and error, we found that

$$S_j(\theta,\delta) = A_j^T\delta_j^T + C$$

where A_j^T is the 4 × 3 coefficient matrix and C is the 4 × 1 vector of constants, namely

$$A_j^T = \left[\begin{array}{c}\dfrac{1}{\sigma^2}\,\text{diag}\big(y_j-\mu_1,\;y_j-\mu_2,\;y_j-\mu_3\big)\\[2mm] \dfrac{1}{2\sigma^4}\big[(y_j-\mu_1)^2\;\;(y_j-\mu_2)^2\;\;(y_j-\mu_3)^2\big]\end{array}\right],\qquad C = \left[\begin{array}{c}0_{3\times1}\\ -\dfrac{1}{2\sigma^2}\end{array}\right]$$

where 0_{3×1} is a 3 × 1 vector of zeros. Let us define a 4 × 1 matrix H_j^T as
where T j T is the 4 ×3 coefficient matrix. We can now express matrix B j (θ, δ) as
where
is a 4 ×4 constant matrix. The expectation of B j (θ, δ) is
The expectation vector and the variance–covariance matrix of S_j(θ, δ) are

$$E[S_j(\theta,\delta)] = A_j^T E(\delta_j^T) + C$$

and

$$\text{var}[S_j(\theta,\delta)] = A_j^T\,\text{var}(\delta_j)\,A_j$$

respectively. Expressing S_j(θ, δ) and B_j(θ, δ) as linear and quadratic functions of the missing vector δ_j significantly simplifies the derivation of the information matrix.
5 Estimation of the Mixing Proportions
We used an F2 population as an example for segregation analysis. Extension of the segregation analysis to other populations is straightforward and will not be discussed here. For the F2 population, we assumed that the major gene follows the Mendelian segregation ratio, i.e., \({\pi }_{1} = {\pi }_{3} = \frac{1} {2}{\pi }_{2} = \frac{1} {4}\). Therefore, π_k is a constant, not a parameter for estimation. The method can be extended to the situation where the major gene does not follow the Mendelian segregation ratio, in which case the values of π_k are also parameters for estimation. This section introduces a method to estimate the π_k's, which are called the mixing proportions.
We simply add one more step in the EM algorithm to estimate π_k, ∀k = 1, 2, 3. Again, we maximize the expected complete-data log likelihood function. To enforce the restriction that \({\sum \nolimits }_{k=1}^{3}{\pi }_{k} = 1\), we introduce a Lagrange multiplier λ. Therefore, the actual function to be maximized is

$$L(\theta\vert\theta^{(t)}) = \sum_{j=1}^{n}\sum_{k=1}^{3}E[\delta(\eta_j,k)]\ln\big[\pi_k f_k(y_j\vert\theta)\big] + \lambda\Big(1 - \sum_{k=1}^{3}\pi_k\Big)$$

The partial derivatives of L(θ | θ(t)) with respect to π_k and λ are

$$\frac{\partial}{\partial\pi_k}L(\theta\vert\theta^{(t)}) = \frac{1}{\pi_k}\sum_{j=1}^{n}E[\delta(\eta_j,k)] - \lambda$$

and

$$\frac{\partial}{\partial\lambda}L(\theta\vert\theta^{(t)}) = 1 - \sum_{k=1}^{3}\pi_k$$

respectively. Let \(\frac{\partial } {\partial {\pi }_{k}}L(\theta \vert {\theta }^{(t)}) = \frac{\partial } {\partial \lambda }L(\theta \vert {\theta }^{(t)}) = 0\), and solve for the π_k's and λ. The solution for π_k is

$$\pi_k = \frac{1}{\lambda}\sum_{j=1}^{n}E[\delta(\eta_j,k)]$$

The solution for λ is obtained from

$$\sum_{k=1}^{3}\pi_k = \frac{1}{\lambda}\sum_{k=1}^{3}\sum_{j=1}^{n}E[\delta(\eta_j,k)] = \frac{n}{\lambda} = 1$$

This is because \({\sum \nolimits }_{k=1}^{3}E[\delta ({\eta }_{j},k)] = 1\) and \(\sum_{j=1}^{n}1 = n\). As a result, λ = n, and thus

$$\pi_k^{(t+1)} = \frac{1}{n}\sum_{j=1}^{n}E[\delta(\eta_j,k)]$$
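The extra M-step therefore amounts to averaging the posterior probabilities over individuals. A minimal sketch of this update (the function name is ours):

```python
import numpy as np

def update_mixing_proportions(y, mu, sigma2, pi):
    """One EM update of the mixing proportions:
    pi_k(t+1) = (1/n) sum_j E[delta(eta_j, k)]."""
    y = np.asarray(y, dtype=float)
    mu = np.asarray(mu, dtype=float)
    pi = np.asarray(pi, dtype=float)
    dens = np.exp(-(y[:, None] - mu[None, :]) ** 2 / (2.0 * sigma2)) \
           / np.sqrt(2.0 * np.pi * sigma2)
    w = dens * pi
    w /= w.sum(axis=1, keepdims=True)  # E[delta(eta_j, k)], the E-step
    return w.mean(axis=0)              # average posterior = new pi_k
```

Because each row of posterior probabilities sums to 1, the updated proportions automatically satisfy the constraint that they sum to 1; when the three component densities are identical, the update leaves the current proportions unchanged.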
References
Abdi H (2007) Bonferroni and Šidák corrections for multiple comparisons. Encyclopedia of Measurement and Statistics. Sage, Thousand Oaks, California
Almasy L, Blangero J (1998) Multipoint quantitative-trait linkage analysis in general pedigrees. Am J Human Genet 62(5):1198–1211
Amos CI (1994) Robust variance-components approach for assessing genetic linkage in pedigrees. Am J Human Genet 54(3):535–543
Baldi P, Long AD (2001) A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes. Bioinformatics 17(6):509–519
Banerjee S, Yandell BS, Yi N (2008) Bayesian quantitative trait loci mapping for multiple traits. Genetics 179(4):2275–2289
Benjamini Y, Hochberg Y (1995) Controlling the false discovery rate – a practical and powerful approach to multiple testing. J Roy Stat Soc Ser B (Stat Methodol) 57(1):289–300
Blalock EM, Geddes JW, Chen KC, Porter NM, Markesbery WR, Landfield PW (2004) Incipient Alzheimer’s disease: microarray correlation analyses reveal major transcriptional and tumor suppressor responses. Proc Nat Acad Sci USA 101(7):2173–2178
Bottolo L, Petretto E, Blankenberg S, Cambien F, Cook SA, Tiret L, Richardson S (2011) Bayesian detection of expression quantitative trait loci hot-spots. Genetics 189(4):1449–1459
Bottolo L, Richardson S (2010) Evolutionary stochastic search for Bayesian model exploration. Bayesian Anal 5(3):583–618
Box GEP, Cox DR (1964) An analysis of transformations. J Roy Stat Soc Ser B (Stat Methodol) 26(2):211–252
Box GEP, Tiao GC (1973) Bayesian inference in statistical analysis. Wiley, New York
Brem RB, Yvert G, Clinton R, Kruglyak L (2002) Genetic dissection of transcriptional regulation in budding yeast. Science 296(5568):752–755
Broman KW, Speed TP (2002) A model selection approach for the identification of quantitative trait loci in experimental crosses. J Roy Stat Soc Ser B (Stat Methodol) 64(4):641–656
Cai X, Huang A, Xu S (2011) Fast empirical Bayesian LASSO for multiple quantitative trait locus mapping. BMC Bioinformatics 12(1):211
Che X, Xu S (2010) Significance test and genome selection in Bayesian shrinkage analysis. Int J Plant Genomics 2010:doi:10.1155/2010/893206
Chen M, Presting G, Barbazuk WB, Goicoechea JL, Blackmon, B, Fang G, Ki H, Frisch D, Yu Y, Sun S, Higingbottom S, Phimphilai J, Phimphilai D, Thurmond S, Gaudette B, Li P, Liu J, Hatfield J, Main D, Farrar K, Henderson C, Barnett L, Costa R, Williams B, Walser S, Atkins M, Hall C, Budiman MA, Tomkins JP, Luo M, Bancroft I, Salse J, Regad F, Mohapatra T, Singh NK, Tyagi AK, Soderlund C, Dean RA, Wing RA (2002) An integrated physical and genetic map of the rice genome. Plant Cell 14(3):537–545
Cheung VG, Spielman RS (2002) The genetics of variation in gene expression. Nat Genet 32(Supp):522–525
Chun H, Keles S (2009) Expression quantitative trait loci mapping with multivariate sparse partial least squares regression. Genetics 182(1):79–90
Churchill GA, Doerge RW (1994) Empirical threshold values for quantitative trait mapping. Genetics 138(3):963–971
Civardi L, Xia Y, Edwards EJ, Schnable PS, Nikolau BJ (1994) The relationship between genetic and physical distances in the cloned al-h2 interval of the Zea mays L. genome. Proc Nat Acad Sci USA 91(17):8268–8272
Cohen AC (1991) Truncated and censored samples:theory and applications, vol 119 of Statistics: textbooks and monographs, 1st edn. Marcel Dekker Inc., New York
Cookson W, Liang L, Abecasis G, Moffatt M, Lathrop M (2009) Mapping complex disease traits with global gene expression. Nat Rev Genet 10(3):184–194
Cullinan WE, Herman JP, Battaglia DF, Akil H, Watson SJ (1995) Pattern and time course of immediate early gene expression in rat brain following acute stress. Neuroscience 64(2): 477–505
Dagliyan O, Uney-Yuksektepe F, Kavakli IH, Turkay M (2011) Optimization based tumor classification from microarray gene expression data. Publ Libr Sci One 6(2):e14579
de Boor C (1978) A practical guide to splines. Springer, New York
Dempster AP, Laird NM, Rubin DB (1977) Maximum likelihood from incomplete data via the EM algorithm. J Roy Stat Soc Ser B (Stat Methodol) 39(1):1–38
Dou B, Hou B, Xu H, Lou X, Chi X, Yang J, Wang F, Ni Z, Sun Q (2009) Efficient mapping of a female sterile gene in wheat (Triticum aestivum l.). Genet Res 91(05):337–343
Dunn OJ (1961) Multiple comparisons among means. J Am Stat Assoc 56(293):52–64
Efron B (1979) Bootstrap methods: another look at the jackknife. Ann Stat 7(1):1–26
Efron B, Hastie T, Johnstone I, Tibshirani R (2004) Least angle regression. Ann Stat 32(2):407–499
Efron B, Tibshirani R, Storey JD, Tusher V (2001) Empirical Bayes analysis of a microarray experiment. J Am Stat Assoc 96(456):1151–1160
Eisen MB, Spellman PT, Brown PO, Botstein D (1998) Cluster analysis and display of genome-wide expression patterns. Proc Nat Acad Sci USA 95(25):14863–14868
Elston RC, Steward J (1971) A general model for the genetic analysis of pedigree data. Hum Hered 21(6):523–542
Emilsson V, Thorleifsson G, Zhang B, Leonardson AS, Zink F, Zhu J, Carlson S, Helgason A, Walters GB, Gunnarsdottir S, Mouy M, Steinthorsdottir V, Eiriksdottir GH, Bjornsdottir G, Reynisdottir I, Gudbjartsson D, Helgadottir A, Jonasdottir A, Styrkarsdottir U, Gretarsdottir S, Magnusson KP, Stefansson H, Fossdal R, Kristjansson K, Gislason HG, Stefansson T, Leifsson BG, Thorsteinsdottir U, Lamb JR, Gulcher JR, Reitman ML, Kong A, Schadt EE, Stefansson K (2008) Genetics of gene expression and its effect on disease. Nature 452:423–428
Falconer DS, Mackay TFC (1996) Introduction to quantitative genetics, 4th edn. Longman Group Ltd., London
Feenstra B, Skovgaard IM, Broman KW (2006) Mapping quantitative trait loci by an extension of the Haley-Knott regression method using estimating equations. Genetics 173(4):2269–2282
Felsenstein J (1981a) Evolutionary trees from DNA sequences: a maximum likelihood approach. J Mol Evol 17(6):368–376
Felsenstein J (1981b) Evolutionary trees from gene frequencies and quantitative characters: finding maximum likelihood estimates. Evolution 35(6):1229–1242
Felsenstein J (1985) Confidence limits on phylogenies: an approach using the bootstrap. Evolution 39(4):783–791
Fisher RA (1946) A system of scoring linkage data, with special reference to the pied factors in mice. Am Nat 80(794):568–578
Fraley C, Raftery AE (2002) Model-based clustering, discriminant analysis, and density estimation. J Am Stat Assoc 97(458):611–631
Friedman J, Hastie T, Tibshirani R (2010) Regularization paths for generalized linear models via coordinate descent. J Stat Software 33(1):1–22
Fu YB, Ritland K (1994) On estimating the linkage of marker genes to viability genes controlling inbreeding depression. Theoret Appl Genet 88(8):925–932
Fulker DW, Cardon LR (1994) A sib-pair approach to interval mapping of quantitative trait loci. Am J Hum Genet 54(6):1092–1103
Gelfand AE, Hills SE, Racine-Poon A, Smith AFM (1990) Illustration of Bayesian inference in normal data models using Gibbs sampling. J Am Stat Assoc 85(412):972–985
Gelman A (2005) Analysis of variance – why it is more important than ever. Ann Stat 33(1):1–53
Gelman A (2006) Prior distributions for variance parameters in hierarchical models (Comment on article by Browne and Draper). Bayesian Anal 1(3):515–533
Gelman A, Jakulin A, Pittau MG, Su YS (2008) A weakly informative default prior distribution for logistic and other regression models. Ann Appl Stat 2(4):1360–1383
Geman S, Geman D (1984) Stochastic relaxation, Gibbs distribution, and the Bayesian restoration of images. IEEE Trans Pattern Anal Mach Intell PAMI-6(6):721–741
George EI, McCulloch RE (1993) Variable selection via Gibbs sampling. J Am Stat Assoc 88(423):881–889
George EI, McCulloch RE (1997) Approaches for Bayesian variable selection. Statistica Sinica 7:339–373
Ghosh D, Chinnaiyan AM (2002) Mixture modelling of gene expression data from microarray experiments. Bioinformatics 18(2):275–286
Gilks WR, Richardson S, Spiegelhalter DJ (1996) Markov chain Monte Carlo in practice. Chapman and Hall/CRC, London
Glonek G, Solomon P (2004) Factorial and time course designs for cDNA microarray experiments. Biostatistics 5(1):89–111
Goldgar DE (1990) Multipoint analysis of human quantitative genetic variation. Am J Hum Genet 47(6):957–967
Golub GH, Van Loan CF (1996) Matrix computations, 3rd edn. The Johns Hopkins University Press, Baltimore
Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP, Coller H, Loh ML, Downing JR, Caligiuri MA, Bloomfield CD, Lander ES (1999) Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science 286(5439):531–537
Green PJ (1995) Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika 82(4):711–732
Hackett CA, Meyer RC, Thomas WTB (2001) Multi-trait QTL mapping in barley using multivariate regression. Genet Res 77(1):95–106
Hackett CA, Weller JI (1995) Genetic mapping of quantitative trait loci for traits with ordinal distributions. Biometrics 51(4):1252–1263
Haldane JBS (1919) The combination of linkage values and the calculation of distances between the loci of linked factors. J Genet 8(29):299–309
Haldane JBS, Waddington CH (1931) Inbreeding and linkage. Genetics 16(4):357–374
Haley CS, Knott SA (1992) A simple regression method for mapping quantitative trait loci in line crosses using flanking markers. Heredity 69(4):315–324
Haley CS, Knott SA, Elsen JM (1994) Mapping quantitative trait loci in crosses between outbred lines using least squares. Genetics 136(3):1195–1207
Han L, Xu S (2008) A Fisher scoring algorithm for the weighted regression method of QTL mapping. Heredity 101(5):453–464
Han L, Xu S (2010) Genome-wide evaluation for quantitative trait loci under the variance component model. Genetica 138(9–10):1099–1109
Hardy GH (1908) Mendelian proportions in a mixed population. Science 28(706):49–50
Hartigan J, Wong MA (1979) Algorithm AS 136: a K-means clustering algorithm. J Roy Stat Soc Ser C (Appl Stat) 28(1):100–108
Hartigan JA (1975) Clustering algorithms. Wiley, New York
Hartl DL, Clark AG (1997) Principles of population genetics, 3rd edn. Sinauer Associates Inc., Sunderland, Massachusetts
Haseman JK, Elston RC (1972) The investigation of linkage between a quantitative trait and a marker locus. Behav Genet 2(1):3–19
Hastings WK (1970) Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57(1):97–109
Hayes JG (1974) Numerical methods for curve and surface fitting. Bull Inst Math Appl 10(5/6):144–152
Hayes PM, Liu BH, Knapp SJ, Chen F, Jones B, Blake T, Franckowiak J, Rasmusson D, Sorrells M, Ullrich SE, Wesenberg D, Kleinhofs A (1993) Quantitative trait locus effects and environmental interaction in a sample of North American barley germ plasm. Theor Appl Genet 87(3):392–401
Heath SC (1997) Markov chain Monte Carlo segregation and linkage analysis of oligogenic models. Am J Hum Genet 61(3):748–760
Henderson CR (1950) Estimation of genetic parameters (abstract). Ann Math Stat 21(2):309–310
Henderson CR (1975) Best linear unbiased estimation and prediction under a selection model. Biometrics 31(2):423–447
Henshall JM, Goddard ME (1999) Multiple-trait mapping of quantitative trait loci after selective genotyping using logistic regression. Genetics 151(2):885–894
Hoerl AE, Kennard RW (1970) Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 12(2):55–67
Horton NJ, Laird NM (1999) Maximum likelihood analysis of generalized linear models with missing covariates. Stat Methods Med Res 8(1):37–50
Hu Z, Xu S (2009) PROC QTL – a SAS procedure for mapping quantitative trait loci. Int J Plant Genom 2009:1–3, doi:10.1155/2009/141234
© 2013 Springer Science+Business Media, LLC
Xu, S. (2013). Segregation Analysis. In: Principles of Statistical Genomics. Springer, New York, NY. https://doi.org/10.1007/978-0-387-70807-2_7
Print ISBN: 978-0-387-70806-5
Online ISBN: 978-0-387-70807-2