
1 Multistage theory of cancer

Throughout these notes, we model cancer as an exponentially growing cell population in which type i cells are those that have accumulated i ≥ 0 mutations compared to the type 0 cells. To motivate the study of these models we begin with a brief and incomplete history lesson and a description of some situations in which our model may be applied.

The idea that carcinogenesis is a multistage process goes back to the 1950s. Fisher and Holloman [49] pointed out that when the logarithm of mortality from stomach cancer in women was plotted versus the log of age, the result was a line with slope 6. Nordling [61] and Armitage and Doll [40] suggested that the observed relationship would be explained if a cancer cell was the end result of seven mutations. There was no model. The conclusion was based on the fact that if the X_i are independent and exponential with rates u_i, then

$$\displaystyle{P(X_{1} + \cdots + X_{k} \leq t) \sim \left (\prod _{i=1}^{k}u_{ i}\right ) \frac{t^{k}} {k!},}$$

the restriction to small t being due to the fact that most cancers affect only a small fraction of the population. For more on early work, see the survey by Armitage [39].

In 1971 Knudson [52] analyzed 48 cases of retinoblastoma, a cancer that develops in the retinas of children as the eye grows to its full size in the first five years of life. 25 children had tumors in only one eye, while 23 had tumors in both eyes and generally more serious symptoms. From the age of onset of cancer for patients in the two groups, he inferred that there was a gene, later identified and named RB1, such that when both copies were knocked out cancer was initiated, and that individuals in the group with bilateral tumors had a germline mutation, so that one copy was already knocked out in all of their cells. This was the first example of a tumor suppressor gene, which leads to trouble when both copies have been inactivated. For more see Knudson’s 2001 survey [53].

In the late 1980s, it became possible to identify the molecular events that underlie the initiation and progression of human tumors. The abundant data for colorectal tumors made them an excellent system in which to study the genetic alterations involved. Fearon and Vogelstein [46] argued that mutations in four or five genes are required for the formation of a malignant tumor, and that colorectal tumors appear to arise from the mutational inactivation of tumor suppressor genes coupled with the mutational activation of oncogenes; one hit turns them on. In population genetics terms the latter would be called advantageous mutations because they increase the growth rate of the cells in which they occur.

Over a decade later, Luebeck and Moolgavkar [56] synthesized the existing research on colon cancer to produce a four-stage model. The first two steps are the knockout of the two copies of the adenomatous polyposis coli (APC) gene, followed by activation of the oncogene KRAS and the inactivation of TP53, which has been called “the guardian of the genome” because of its role in conserving stability by preventing genome mutation. For more on the genetic basis of colon cancer, see Fearon’s recent survey [45].

Compared to colon cancer, chronic myeloid leukemia (CML) is a very simple disease. 95% of patients have a BCR-ABL gene fusion caused by a reciprocal translocation between chromosome 9 and chromosome 22. Because of this simplicity, the disease can be treated with a tyrosine kinase inhibitor, e.g., imatinib, that blocks the action of BCR-ABL. Unfortunately, mutations in the ABL binding domain can lead to resistance to therapy. In Sections 6 and 12, we will use a two-type branching process model to investigate the questions: What is the probability that resistant cells are present at diagnosis? If so, how many are there?

Metastasis, the spread of cancer to distant organs, is the most common cause of death for cancer patients. It is a very complex process: cells must enter the blood stream (intravasation), survive the trip through the circulatory system, leave the blood stream at their destination (extravasation), and survive in an alien environment, e.g., cells from breast tissue living in bone. See [71] and [47] for more detailed descriptions. In Section 16, we will analyze a simple three-type model for metastasis that ignores most of the details of the process. In Section 17 we will consider the special case of ovarian cancer.

2 Mathematical overview

Our model is a multitype branching process with mutation in which Z_i(t) is the number of type i cells at time t. Type i cells give birth at rate a_i and die at rate b_i. Here, we always assume that the growth rate \(\lambda _{i} = a_{i} - b_{i}> 0\), even though this does not hold in all applications, see e.g., [16, 17]. To take care of mutations, we suppose that individuals of type i in addition give birth at rate u_{i+1} to individuals of type i + 1. Some researchers prefer to have births at rate α_i and mutations at birth with probability μ_i, but this is equivalent to \(a_{i} =\alpha _{i}(1 -\mu _{i})\) and \(u_{i} =\alpha _{i}\mu _{i}\).

For our model it is natural to investigate:

  • τ_k, the time of the first type k mutation, and σ_k, the time of the first type k mutation that founds a family line that does not die out. The limiting distributions of these quantities as the mutation rates u_i → 0 are studied in Section 5 for the case k = 1 and in general in Section 15.

  • The limiting behavior of \(e^{-\lambda _{1}t}Z_{1}(t)\) as t → ∞ is studied in Section 9, where one-sided stable laws with index α < 1 make their appearance. Once the result is proved for k = 1, it extends in a straightforward way to k > 1, see Section 13. The method of proof of these results allows us to obtain insights into tumor heterogeneity in Section 18.

  • Let T_M be the time for the type 0 population to reach size M, which we think of as the time at which cancer is detected. Motivated by the questions about resistance to imatinib, we will investigate \(P(Z_{1}(T_{M})> 0)\) in Section 6 and the size of \(Z_{1}(T_{M})\) in Section 12. Thinking about metastasis, we will study \(P(Z_{2}(T_{M})> 0)\) in Section 16.

There are more results in this survey, but these are the main themes. Along the way we will review, and in some cases improve, results that have been derived using nonrigorous arguments.

Some of our formulas are complicated, so it is useful to look at concrete examples. To do this, we need to have reasonable parameter values. We will typically take the death rates b_i = 1. This corresponds to measuring time in units of cell divisions. The exponential growth rate λ_i is analogous to the selective advantage of type i cells. In a study of glioblastoma and colorectal cancer, Bozic et al. [5] concluded that the average selective advantage of somatic mutations was surprisingly small, 0.004. Here, we will often use slightly larger values, λ_1 = 0.02 and λ_2 = 0.04.

To identify the order of magnitude of the mutation rates u i , we note that the point mutation rate has been estimated, see [54], to be 5 × 10−10 per nucleotide per cell division. To compute the u i this number needs to be multiplied by the number of nucleotides that when mutated lead to cancer. In some cases there are a small number of nonsynonymous mutations (that change the amino acid in the corresponding protein) that achieve the desired effect, while in other cases there may be hundreds of possible mutations that knock out the gene and there may be a number of genes within a genetic pathway that can be hit. Thus mutation rates can range from 10−9 to 10−5, or can be larger after the mechanisms that govern DNA replication are damaged.

3 Branching process results

We begin by studying the number of type 0 cells, Z 0(t), which is a branching process in which each cell gives birth at rate a 0 and dies at rate b 0. In terms of the theory of continuous time Markov chains, the matrix q(i, j) that gives the rate of jumps from i to j has

$$\displaystyle{q(i,i + 1) = a_{0}i,\qquad q(i,i - 1) = b_{0}i,}$$

and q(i, j) = 0 otherwise. Since each initial individual gives rise to an independent copy of the branching process, we will suppose throughout this section that Z_0(0) = 1. Since each individual gives birth at rate a_0 and dies at rate b_0,

$$\displaystyle{ \frac{d} {\mathit{dt}}\mathit{EZ}_{0}(t) =\lambda _{0}\mathit{EZ}_{0}(t),}$$

where \(\lambda _{0} = a_{0} - b_{0}\). Since EZ 0(0) = 1,

$$\displaystyle{ \mathit{EZ}_{0}(t) = e^{\lambda _{0}t}. }$$
(1)
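Formula (1) is easy to check by simulation. Below is a minimal sketch of a Gillespie-style simulator for this birth-death process; the function name and all choices are ours, with a_0 = 1.02, b_0 = 1 matching the examples used later.

```python
import random, math

def simulate_bd(a0, b0, t_max, rng):
    """One run of the binary branching process: each of the z current cells
    splits at rate a0 and dies at rate b0; return Z_0(t_max), starting from 1."""
    t, z = 0.0, 1
    while z > 0:
        t += rng.expovariate((a0 + b0) * z)   # waiting time to the next event
        if t > t_max:
            break
        z += 1 if rng.random() < a0 / (a0 + b0) else -1
    return z

rng = random.Random(1)
a0, b0, t_max, runs = 1.02, 1.0, 50.0, 20000
mean = sum(simulate_bd(a0, b0, t_max, rng) for _ in range(runs)) / runs
print(mean, math.exp((a0 - b0) * t_max))   # both close to e^{lambda_0 t} = e = 2.718...
```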

Our next step is to compute the extinction probability,

$$\displaystyle{\rho = P(Z_{0}(t) = 0\mbox{ for some $t \geq 0$}).}$$

Considering what happens at the first jump gives

$$\displaystyle{ \rho = \frac{b_{0}} {a_{0} + b_{0}} \cdot 1 + \frac{a_{0}} {a_{0} + b_{0}} \cdot \rho ^{2}. }$$
(2)

In words, since jumps 1 → 0 occur at rate b 0 and from 1 → 2 at rate a 0, the first event is a death with probability \(b_{0}/(a_{0} + b_{0})\) in which case the probability of extinction is 1. The first event is a birth with probability \(a_{0}/(a_{0} + b_{0})\) in which case the probability of extinction is ρ 2, since for this to happen both of the lineages have to die out and they are independent.

Rearranging (2) gives \(a_{0}\rho ^{2} - (a_{0} + b_{0})\rho + b_{0} = 0\). Since 1 is a root, the quadratic factors as \((\rho -1)(a_{0}\rho - b_{0}) = 0\), and

$$\displaystyle{ \rho = \left \{\begin{array}{@{}l@{\quad }l@{}} b_{0}/a_{0}\quad &\mbox{ if $a_{0}> b_{0}$}, \\ 1 \quad &\mbox{ if $a_{0} \leq b_{0}$}. \end{array} \right. }$$
(3)

To compute the generating function \(F(x,t) = \mathit{Ex}^{Z_{0}(t)}\), we begin by noting that

Lemma 1.

\(\partial F/\partial t = -(a_{0} + b_{0})F + a_{0}F^{2} + b_{0} = (1 - F)(b_{0} - a_{0}F)\) .

Proof.

If h is small, then the probability of more than one event in [0, h] is O(h 2), the probability of a birth is \(\approx a_{0}h\), of a death is \(\approx b_{0}h\). In the second case we have no particles, so the generating function of Z 0(t + h) will be \(\equiv 1\). In the first case we have two particles at time h who give rise to two independent copies of the branching process, so the generating function of Z 0(t + h) will be F(x, t)2. Combining these observations,

$$\displaystyle{F(x,t + h) = a_{0}hF(x,t)^{2} + b_{ 0}h \cdot 1 + (1 - (a_{0} + b_{0})h)F(x,t) + O(h^{2}).}$$

A little algebra converts this into

$$\displaystyle{\frac{F(x,t + h) - F(x,t)} {h} = a_{0}F(x,t)^{2} + b_{ 0} - (a_{0} + b_{0})F(x,t) + O(h).}$$

Letting h → 0 gives the desired result. □ 

On page 109 of Athreya and Ney [3], or in formula (5) of Iwasa, Nowak, and Michor [24], we find the solution:

$$\displaystyle{ F(x,t) = \frac{b_{0}(x - 1) - e^{-\lambda _{0}t}(a_{0}x - b_{0})} {a_{0}(x - 1) - e^{-\lambda _{0}t}(a_{0}x - b_{0})}. }$$
(4)

Remark.

Here and in what follows, if the reader finds the details of the calculations too unpleasant, feel free to skip ahead to the end of the proof.

Proof.

Later we will need a generalization so we will find the solution of

$$\displaystyle{\frac{\mathit{dg}} {\mathit{dt}} = (c - g)(b -\mathit{ag})\qquad g(0) = x.}$$

The equation in Lemma 1 corresponds to c = 1, a = a 0, and b = b 0. Rearranging we have

$$\displaystyle{\mathit{dt} = \frac{\mathit{dg}} {(c - g)(b -\mathit{ag})} = \frac{1} {b -\mathit{ac}}\left ( \frac{\mathit{dg}} {c - g} - a \frac{\mathit{dg}} {b -\mathit{ag}}\right ).}$$

Integrating we have for some constant D

$$\displaystyle{ t + D = - \frac{1} {b -\mathit{ac}}\log (c - g) + \frac{1} {b -\mathit{ac}}\log (b -\mathit{ag}) = \frac{1} {b -\mathit{ac}}\log \left (\frac{b -\mathit{ag}} {c - g} \right ), }$$
(5)

so \(b -\mathit{ag} = e^{(b-\mathit{ac})t}e^{D(b-\mathit{ac})}(c - g)\), and solving for g gives

$$\displaystyle{ g(t) = \frac{b - ce^{D(b-\mathit{ac})}e^{(b-\mathit{ac})t}} {a - e^{D(b-\mathit{ac})}e^{(b-\mathit{ac})t}}. }$$
(6)

Taking t = 0 in (6), we have

$$\displaystyle{e^{D(b-\mathit{ac})} = \frac{\mathit{ax} - b} {x - c}.}$$

Using this in (6), setting c = 1, adding the subscript 0’s, and rearranging gives (4). □ 

Remark.

Here and throughout the paper we use log for the natural (base e) logarithm.

It is remarkable that one can invert the generating function to find the underlying distribution. These formulas come from Section 8.6 in Bailey [4]. If we let

$$\displaystyle{ \alpha = \frac{b_{0}e^{\lambda _{0}t} - b_{0}} {a_{0}e^{\lambda _{0}t} - b_{0}}\quad \text{and}\quad \beta = \frac{a_{0}e^{\lambda _{0}t} - a_{0}} {a_{0}e^{\lambda _{0}t} - b_{0}}, }$$
(7)

then the underlying distribution is a generalized geometric

$$\displaystyle{ p_{0} =\alpha \qquad p_{n} = (1-\alpha )(1-\beta )\beta ^{n-1}\quad \mbox{ for $n \geq 1$.} }$$
(8)

To check this claim, note that taking x = 0 in (4) confirms the size of the atom at 0 and suggests that we write

$$\displaystyle{ F(x,t) = \frac{(b_{0}e^{\lambda _{0}t} - b_{0}) - x(b_{0}e^{\lambda _{0}t} - a_{0})} {(a_{0}e^{\lambda _{0}t} - b_{0}) - x(a_{0}e^{\lambda _{0}t} - a_{0})}. }$$
(9)

For comparison we note that the generating function of (8) is

$$\displaystyle\begin{array}{rcl} \alpha +(1-\alpha )(1-\beta )\sum _{n=1}^{\infty }\beta ^{n-1}x^{n}& =& \alpha +(1-\alpha )(1-\beta ) \frac{x} {1 -\beta x} {}\\ & =& \frac{\alpha +(1 -\alpha -\beta )x} {1-\beta x}. {}\\ \end{array}$$

Dividing the numerator and the denominator in (9) by \(a_{0}e^{\lambda _{0}t} - b_{0}\) gives

$$\displaystyle{F(x,t) = \frac{\alpha -x(b_{0}e^{\lambda _{0}t} - a_{0})/(a_{0}e^{\lambda _{0}t} - b_{0})} {1 -\beta x},}$$

and it remains to check, using (7), that

$$\displaystyle{1 -\alpha -\beta = \frac{(a_{0} - b_{0})e^{\lambda _{0}t} - a_{0}e^{\lambda _{0}t} + a_{0}} {a_{0}e^{\lambda _{0}t} - b_{0}} = -\frac{b_{0}e^{\lambda _{0}t} - a_{0}} {a_{0}e^{\lambda _{0}t} - b_{0}}.}$$
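The agreement between (4) and the generalized geometric (7)–(8) is easy to confirm numerically (a quick sketch; the function names and parameter values are ours):

```python
import math

a0, b0, t = 1.02, 1.0, 50.0
lam = a0 - b0

def F(x):
    """F(x,t) from (4)."""
    e = math.exp(-lam * t)
    return (b0 * (x - 1) - e * (a0 * x - b0)) / (a0 * (x - 1) - e * (a0 * x - b0))

E = math.exp(lam * t)
alpha = (b0 * E - b0) / (a0 * E - b0)   # from (7)
beta = (a0 * E - a0) / (a0 * E - b0)

def G(x):
    """Generating function of the generalized geometric (8)."""
    return alpha + (1 - alpha) * (1 - beta) * x / (1 - beta * x)

for x in (0.0, 0.3, 0.7, 0.99):
    assert abs(F(x) - G(x)) < 1e-12
print("generating functions of (4) and (8) agree")
```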

Yule process.

If b 0 = 0 (no death), and hence λ 0 = a 0, then the generating function in (4) simplifies to

$$\displaystyle{ F(x,t) = \frac{xe^{-\lambda _{0}t}} {1 - x + xe^{-\lambda _{0}t}}. }$$
(10)

Taking α = 0 in (8), we have a geometric distribution with

$$\displaystyle{\beta = 1 - e^{-\lambda _{0}t}.}$$

Since the mean of the geometric is 1∕(1 −β), this is consistent with the fact that \(\mathit{EZ}_{0}(t) = e^{\lambda _{0}t}\).

Returning to the general case and subtracting (4) from 1, we have

$$\displaystyle{ 1 - F(x,t) = \frac{\lambda _{0}(x - 1)} {a_{0}(x - 1) - e^{-\lambda _{0}t}(a_{0}x - b_{0})}. }$$
(11)

Setting x = 0 in (11), we have

$$\displaystyle\begin{array}{rcl} P(Z_{0}(t) = 0)& =& \frac{b_{0} - b_{0}e^{-\lambda _{0}t}} {a_{0} - b_{0}e^{-\lambda _{0}t}}\quad \text{and} \\ P(Z_{0}(t)> 0)& =& 1 - F(0,t) = \frac{\lambda _{0}} {a_{0} - b_{0}e^{-\lambda _{0}t}}.{}\end{array}$$
(12)

Note that this converges exponentially fast to the probability of nonextinction, λ_0∕a_0.

Theorem 1.

Suppose a 0 > b 0 . As t →∞, \(e^{-\lambda _{0}t}Z_{0}(t) \rightarrow W_{0}\) .

$$\displaystyle{ W_{0} = _{d} \frac{b_{0}} {a_{0}}\delta _{0} + \frac{\lambda _{0}} {a_{0}}\,\text{exponential}(\lambda _{0}/a_{0}) }$$
(13)

where δ_0 is a point mass at 0, and the exponential(r) distribution has density \(re^{-rt}\) and mean 1∕r. That is,

$$\displaystyle{P(W_{0} = 0) = \frac{b_{0}} {a_{0}}\quad \text{and}\quad P(W_{0}> x\vert W_{0}> 0) =\exp (-x\lambda _{0}/a_{0}).}$$

Proof.

From (8), the atom at 0 is

$$\displaystyle{\alpha (t) = \frac{b_{0} - b_{0}e^{-\lambda _{0}t}} {a_{0} - b_{0}e^{-\lambda _{0}t}} \rightarrow \frac{b_{0}} {a_{0}}.}$$

The geometric distribution (1 −β)β n−1 has mean

$$\displaystyle{ \frac{1} {1-\beta } = \frac{a_{0}e^{\lambda _{0}t} - b_{0}} {a_{0} - b_{0}} \sim \frac{a_{0}e^{\lambda _{0}t}} {\lambda _{0}},}$$

where \(x_{t} \sim y_{t}\) means x t y t  → 1. Since a geometric distribution rescaled by its mean converges to a mean 1 exponential, the desired result follows.

This gives the limiting behavior of the distribution of \(e^{-\lambda _{0}t}Z_{0}(t)\). To show that \(e^{-\lambda _{0}t}Z_{0}(t) \rightarrow W_{0}\) with probability one, we note that \(\mathit{EZ}_{0}(t) = e^{\lambda _{0}t}\) and individuals give birth independently, so \(e^{-\lambda _{0}t}Z_{0}(t)\) is a nonnegative martingale and hence converges with probability one to a limit W_0. □ 
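Since Z_0(t) has exactly the generalized geometric distribution (7)–(8), we can sample it directly and watch the limit in Theorem 1 and (13) emerge (a minimal sketch; the function name and parameter values are ours):

```python
import random, math

rng = random.Random(2)
a0, b0, t = 1.02, 1.0, 400.0
lam = a0 - b0
E = math.exp(lam * t)
alpha = (b0 * E - b0) / (a0 * E - b0)   # P(Z_0(t) = 0), from (7)
beta = (a0 * E - a0) / (a0 * E - b0)

def draw_z():
    """Exact draw of Z_0(t) from the generalized geometric (8)."""
    if rng.random() < alpha:
        return 0
    u = 1.0 - rng.random()               # uniform on (0, 1]
    return 1 + int(math.log(u) / math.log(beta))

samples = [math.exp(-lam * t) * draw_z() for _ in range(100000)]
positive = [w for w in samples if w > 0]
print(len(positive) / len(samples))      # about lam/a0 = 0.0196 = P(W_0 > 0), see (13)
print(sum(positive) / len(positive))     # about a0/lam = 51, the conditional exponential mean
```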

If we let \(\Omega _{0}^{0} =\{ Z_{0}(t) = 0\) for some t ≥ 0}, then (3) implies

$$\displaystyle{ P(\Omega _{0}^{0}) = b_{ 0}/a_{0}. }$$
(14)

Since W 0 = 0 on \(\Omega _{0}^{0}\), (13) implies that W 0 > 0 when the process does not die out. Letting \(\Omega _{\infty }^{0} =\{ Z_{0}(t)> 0\) for all t ≥ 0} we have

$$\displaystyle{ (e^{-\lambda _{0}t}Z_{ 0}(t)\vert \Omega _{\infty }^{0}) \rightarrow V _{ 0} =\ \text{exponential}(\lambda _{0}/a_{0}). }$$
(15)

Later we need the formula for the Laplace transform of V 0:

$$\displaystyle\begin{array}{rcl} Ee^{-\theta V _{0} }& =& \int _{0}^{\infty }e^{-\theta x} \frac{\lambda _{0}} {a_{0}}e^{-(\lambda _{0}/a_{0})x}\,\mathit{dx} \\ & =& \frac{\lambda _{0}/a_{0}} {(\lambda _{0}/a_{0})+\theta } = (1 + (a_{0}/\lambda _{0})\theta )^{-1}.{}\end{array}$$
(16)

3.1 Conditioned branching processes

For almost all of these notes, we will consider processes in continuous time. The one exception is that to develop the theory in this section, we begin with discrete time. In that setting the branching process is described by giving the offspring distribution p k , i.e., the probability an individual has k children. Given a family of independent random variables \(\xi _{i}^{n}\) with \(P(\xi _{i}^{n} = k) = p_{k}\), we can define the branching process by specifying Z 0 and then inductively defining

$$\displaystyle{Z_{n+1} = \left \{\begin{array}{@{}l@{\quad }l@{}} 0 \quad &\mbox{ if $Z_{n} = 0$, } \\ \sum _{i=1}^{Z_{n}}\xi _{i}^{n}\quad &\mbox{ if $Z_{n}> 0$. } \end{array} \right.}$$

Let \(\mu =\sum _{k}kp_{k}\) be the mean of the offspring distribution. It is known that if μ < 1, or if μ = 1 and p_1 < 1, then the process always dies out, while if μ > 1 the probability ρ that the process dies out starting from Z_0 = 1 is the unique solution in [0, 1) of ϕ(ρ) = ρ, where \(\phi (x) =\sum _{k}p_{k}x^{k}\) is the generating function of the offspring distribution.

In some situations, it is desirable to decompose a branching process with μ > 1 into the backbone (the individuals who have an infinite line of descent) and the side trees (the individuals who start a family that dies out). A fact, due to Harris [23], which makes this useful, is that if we condition on the branching process not dying out, and only look at the individuals that have an infinite line of descent, then we have a branching process with offspring distribution:

$$\displaystyle{ \bar{p}_{k} = (1-\rho )^{-1}\sum _{ m\geq k}p_{m}\binom{m}{k}(1-\rho )^{k}\rho ^{m-k}. }$$
(17)

In words, if we want k children whose family lines do not die out, then we need to have m ≥ k children and exactly mk of these must start lineages that die out. We divide by 1 −ρ because that is the probability that the individual under consideration starts a family line that does not die out.

To compute the generating function of \(\bar{p}_{k}\), we interchange the order of summation, which is legitimate because all the terms are ≥ 0, and then use the binomial formula to evaluate the inner sum.

$$\displaystyle\begin{array}{rcl} \sum _{k=1}^{\infty }\sum _{ m\geq k}p_{m}\binom{m}{k}(1-\rho )^{k}\rho ^{m-k}z^{k}& =& \sum _{ m=1}^{\infty }p_{ m}\sum _{k=1}^{m}\binom{m}{k}(1-\rho )^{k}z^{k}\rho ^{m-k} {}\\ & =& \sum _{m=1}^{\infty }p_{ m}[((1-\rho )z+\rho )^{m} -\rho ^{m}], {}\\ \end{array}$$

so the generating function is

$$\displaystyle{\bar{\phi }(z) =\sum _{ k=1}^{\infty }\bar{p}_{ k}z^{k} = \frac{\phi ((1-\rho )z+\rho ) -\phi (\rho )} {1-\rho }.}$$

Geometrically, we have taken the part of the generating function over [ρ, 1], which has range [ρ, 1], and linearly rescaled the x and y axes to make it map [0, 1] onto [0, 1]. See Figure 1 for a picture.

Fig. 1

Graph of the generating function ϕ(z) = (1 + z)^3∕8 for the Binomial(3,1/2) offspring distribution, showing \(\bar{\phi }\) in the upper right square and \(\hat{\phi }\) in the lower left. The fixed point is at \(\rho = -2 + \sqrt{5} \approx 0.2361\).

If we condition our branching process to die out, we get a branching process with offspring distribution:

$$\displaystyle{ \hat{p}_{k} = p_{k}\rho ^{k}/\rho. }$$
(18)

In words, if there are k children, then all of their family lines must die out, and these events are independent and each has probability ρ. We divide by ρ because that is the probability that the individual under consideration starts a family line that dies out. The generating function is

$$\displaystyle{\hat{\phi }(z) =\sum _{ k=1}^{\infty }p_{ k}\rho ^{k}z^{k}/\rho =\phi (\rho z)/\rho.}$$

In this case, we have taken the part of the generating function over [0, ρ], which has range [0, ρ], and linearly rescaled the x and y axes to make it map [0, 1] onto [0, 1]. See Figure 1 for a picture. The mean of \(\hat{p}_{k}\) is \(\hat{\mu }=\sum _{ k=1}^{\infty }kp_{k}\rho ^{k-1} =\phi '(\rho )\), so the mean total progeny of a process conditioned to die out is

$$\displaystyle{ \sum _{n=0}^{\infty }\phi '(\rho )^{n} = 1/(1 -\phi '(\rho )). }$$
(19)
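For the Binomial(3,1/2) example in Figure 1, the two conditioned offspring distributions can be computed directly from (17)–(19) (a small sketch; the variable names are ours):

```python
from math import comb, sqrt

p = [comb(3, k) / 8 for k in range(4)]   # Binomial(3, 1/2) offspring distribution
rho = sqrt(5) - 2                        # extinction probability, phi(rho) = rho
phi = lambda x: sum(pk * x**k for k, pk in enumerate(p))
assert abs(phi(rho) - rho) < 1e-12

# (17): offspring distribution on the backbone (conditioned to survive)
p_bar = [sum(p[m] * comb(m, k) * (1 - rho)**k * rho**(m - k) for m in range(k, 4))
         / (1 - rho) for k in range(1, 4)]
# (18): offspring distribution conditioned to die out
p_hat = [p[k] * rho**k / rho for k in range(4)]

print(p_bar, sum(p_bar))                 # sums to 1, supported on {1, 2, 3}
print(p_hat, sum(p_hat))                 # sums to 1, supported on {0, 1, 2, 3}

# (19): mean total progeny of the process conditioned to die out
phi_prime = 3 * (1 + rho)**2 / 8         # phi'(rho) for this example
print(1 / (1 - phi_prime))               # about 2.34
```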

To do these conditionings in continuous time, we follow O’Connell [32] and take a limit of discrete time approximations. Let the offspring distribution be

$$\displaystyle{p_{0} = b_{0}\delta \qquad p_{2} = a_{0}\delta \qquad p_{1} = 1 - (a_{0} + b_{0})\delta }$$

where δ is small enough so that p 1 > 0. The extinction probability solves

$$\displaystyle{\rho = b_{0}\delta + [1 - (a_{0} + b_{0})\delta ]\rho + a_{0}\delta \rho ^{2},}$$

or \(b_{0} - (a_{0} + b_{0})\rho + a_{0}\rho ^{2} = 0\), which again gives ρ = b_0∕a_0 if a_0 > b_0. If we condition the branching process to die out, then by (18)

$$\displaystyle{\hat{p}_{0} = p_{0}/\rho = a_{0}\delta \qquad \hat{p}_{2} = p_{2}\rho = b_{0}\delta \qquad \hat{p}_{1} = p_{1} = 1 - (a_{0} + b_{0})\delta }$$

so in the limit as δ → 0 we have births at rate b_0 and deaths at rate a_0. If we condition on survival and recall \(\rho = b_{0}/a_{0}\), then we get

$$\displaystyle{\bar{p}_{2} = p_{2}(1-\rho ) = a_{0}\delta (1-\rho ) =\lambda _{0}\delta \qquad \bar{p}_{1} = 1 -\bar{ p}_{2} = 1 -\lambda _{0}\delta }$$

so in the limit as δ → 0 we end up with a Yule process with births at rate λ 0.

4 Time for Z_0 to reach size M

While from the point of view of stochastic processes it is natural to start measuring time when there is one cancer cell, that time is not known in reality. Thus we will shift our attention to the time at which the cancer is detected, which we will idealize as the time the total number of cancer cells reaches M. For example, in studies of chronic myeloid leukemia it has been common to take M = 10^5 cancerous cells.

As a first step in investigating this quantity we consider \(T_{M} =\min \{ t: Z_{0}(t) = M\}\), and then return later to consider Z i (T M ) for i > 0. To find the distribution of T M , we note that by (15), conditional on nonextinction, \(e^{-\lambda _{0}t}Z_{0}(t) \rightarrow V _{0}\), which is exponential with rate \(\lambda _{0}/a_{0}\), or informally \(Z_{0}(t) \approx e^{\lambda _{0}t}V _{0}\). From this we see that

$$\displaystyle{P(T_{M} \leq t) \approx P(e^{\lambda _{0}t}V _{0} \geq M) =\exp (-(\lambda _{0}/a_{0})Me^{-\lambda _{0}t})}$$

which is the double exponential, or Gumbel distribution. Differentiating we find the density function

$$\displaystyle{ f_{T_{M}}(t) =\exp (-(\lambda _{0}/a_{0})Me^{-\lambda _{0}t}) \cdot \frac{\lambda _{0}^{2}M} {a_{0}} e^{-\lambda _{0}t} }$$
(20)

Clearly the actual waiting time satisfies T_M ≥ 0; however, in our formula \(P(T_{M} \leq 0) =\exp (-\lambda _{0}M/a_{0})\). As we will see in the concrete example below, in applications this is very small. Because of this, it is natural to view the density in (20) as defined on (−∞, ∞). To compute the mean we have to compute

$$\displaystyle{\mathit{ET}_{M} = \frac{\lambda _{0}^{2}M} {a_{0}} \int _{-\infty }^{\infty }te^{-\lambda _{0}t}\exp (-(\lambda _{ 0}/a_{0})Me^{-\lambda _{0}t})\,\mathit{dt}}$$

If we let \(z = (\lambda _{0}/a_{0})Me^{-\lambda _{0}t}\), then \(e^{-\lambda _{0}t} = a_{0}z/\lambda _{0}M\), \(t = -(1/\lambda _{0})\log (a_{0}z/\lambda _{0}M)\), and dt = −dz∕(zλ_0), so the integral becomes

$$\displaystyle\begin{array}{rcl} & =& -\frac{\lambda _{0}^{2}M} {a_{0}} \int _{0}^{\infty }\frac{1} {\lambda _{0}} \log (a_{0}z/\lambda _{0}M) \cdot \frac{a_{0}z} {\lambda _{0}M} \cdot e^{-z}\,\frac{\mathit{dz}} {z\lambda _{0}} {}\\ & =& -\frac{1} {\lambda _{0}} \int _{0}^{\infty }\log (a_{ 0}z/\lambda _{0}M)e^{-z}\,\mathit{dz} {}\\ \end{array}$$

Since \(\int _{0}^{\infty }e^{-z}\,\mathit{dz} = 1\) it follows that

$$\displaystyle{ \mathit{ET}_{M} = \frac{1} {\lambda _{0}} \log \left (\frac{M\lambda _{0}} {a_{0}} \right ) -\frac{1} {\lambda _{0}} \int _{0}^{\infty }e^{-z}\log z\,\mathit{dz} }$$
(21)

The first term is the value of T_M if we replace V_0 by its mean a_0∕λ_0 and solve

$$\displaystyle{e^{\lambda _{0}T_{M}}a_{0}/\lambda _{0} = M}$$

The integral in the second term (including the minus sign) is Euler’s constant

$$\displaystyle{\gamma = 0.5772156649.}$$

Example 1.

For a concrete example suppose a_0 = 1.02, b_0 = 1, λ_0 = 0.02 and set M = 10^5. In this case \(P(T_{M} \leq 0) =\exp (-2000/1.02) \approx 0\). The first term in (21) is

$$\displaystyle{\frac{1} {\lambda _{0}} \log \left (\frac{M\lambda _{0}} {a_{0}} \right ) = 50\log 1960.78 = 379.05}$$

and the second is γ∕λ_0 = 28.86, a small correction. Since b_0 = 1, the time units are cell divisions, i.e., for cells that divide on the average every four days, 379 divisions is 1516 days, or 4.15 years.
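The computation in Example 1 is easy to reproduce, and the Gumbel picture can be checked by drawing V_0 and solving \(e^{\lambda _{0}T}V_{0} = M\) (a sketch; parameter values as in Example 1, everything else ours):

```python
import random, math

a0, b0, M = 1.02, 1.0, 1e5
lam = a0 - b0
gamma_euler = 0.5772156649

# (21): E T_M = (1/lam) log(M lam / a0) + gamma/lam
et_formula = math.log(M * lam / a0) / lam + gamma_euler / lam
print(et_formula)                        # 379.05 + 28.86 = 407.91

# Monte Carlo: conditional on nonextinction Z_0(t) ~ e^{lam t} V_0 with
# V_0 exponential(lam/a0), so T_M is approximately (1/lam) log(M/V_0)
random.seed(4)
n = 200000
samples = [math.log(M / random.expovariate(lam / a0)) / lam for _ in range(n)]
print(sum(samples) / n)                  # close to the formula above
```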

5 Time until the first type 1

Let τ_1 be the time of occurrence of the first type 1. If we think of the 0’s as an exponentially growing population of tumor cells, then type 1 cells might have more aggressive growth or be resistant to therapy. Since 1’s are produced at rate u_1 Z_0(s) at time s,

$$\displaystyle{ P(\tau _{1}> t\vert Z_{0}(s),s \leq t,\Omega _{\infty }^{0}) =\exp \left (-u_{ 1}\int _{0}^{t}Z_{ 0}(s)\mathit{ds}\right ). }$$
(22)

τ_1 will typically occur when \(\int _{0}^{t}Z_{0}(s)\,\mathit{ds}\) is of order 1∕u_1. A typical value for the mutation rate is u_1 = 10^{-5} or smaller, so 1∕u_1 is a large number, and we can use the approximation \((Z_{0}(s)\vert \Omega _{\infty }^{0}) \approx e^{\lambda _{0}s}V _{0}\). Evaluating the integral,

$$\displaystyle{\int _{0}^{t}e^{\lambda _{0}s}V _{ 0}\,\mathit{ds} = V _{0} \cdot \frac{e^{\lambda _{0}t} - 1} {\lambda _{0}}.}$$

Dropping the second term and taking the expected value by using (16) with \(\theta = u_{1}e^{\lambda _{0}t}/\lambda _{0}\), we conclude that

$$\displaystyle{ P(\tau _{1}> t\vert \Omega _{\infty }^{0}) \approx E\exp (-u_{ 1}V _{0}e^{\lambda _{0}t}/\lambda _{ 0}) = \left (1 + (a_{0}/\lambda _{0}^{2})u_{ 1}e^{\lambda _{0}t}\right )^{-1}. }$$
(23)

The median t_{1/2}^1 of the limiting distribution satisfies \(\lambda _{0}^{2} = a_{0}u_{1}e^{\lambda _{0}t_{1/2}^{1} }\), so

$$\displaystyle{ t_{1/2}^{1} \approx \frac{1} {\lambda _{0}} \log \left ( \frac{\lambda _{0}^{2}} {a_{0}u_{1}}\right ). }$$
(24)

In some cases we regard V 0 as a fixed constant. Implicitly assuming that V 0 > 0 we write

$$\displaystyle{ P(\tau _{1}> t\vert V _{0}) \approx \exp (-u_{1}V _{0}e^{\lambda _{0}t}/\lambda _{0}). }$$
(25)

If we replace V 0 by its mean \(\mathit{EV }_{0} = a_{0}/\lambda _{0}\) the tail of the limit distribution of τ 1 is equal to 1∕e at

$$\displaystyle{ \bar{t}_{1/e}^{1} = \frac{1} {\lambda _{0}} \log \left ( \frac{\lambda _{0}^{2}} {a_{0}u_{1}}\right ). }$$
(26)

A second quantity of interest is σ 1, the time of occurrence of the first type 1 that gives rise to a family which does not die out. Since the rate of these successful type 1 mutations is \(u_{1}\lambda _{1}/a_{1}\), all we have to do is to replace u 1 by \(u_{1}\lambda _{1}/a_{1}\) in (23) and (25) to get

$$\displaystyle\begin{array}{rcl} P(\sigma _{1}> t\vert \Omega _{\infty }^{0}) \approx \left (1 + (a_{ 0}/\lambda _{0}^{2})(u_{ 1}\lambda _{1}/a_{1})e^{\lambda _{0}t}\right )^{-1},& &{}\end{array}$$
(27)
$$\displaystyle\begin{array}{rcl} P(\sigma _{1}> t\vert V _{0}) \approx \exp (-V _{0}(u_{1}\lambda _{1}/a_{1})e^{\lambda _{0}t}/\lambda _{0}).& &{}\end{array}$$
(28)

Replacing t by s to define the quantities for σ 1 corresponding to (24) and (26)

$$\displaystyle{ s_{1/2}^{1} =\bar{ s}_{ 1/e}^{1} = \frac{1} {\lambda _{0}} \log \left ( \frac{\lambda _{0}^{2}a_{1}} {a_{0}u_{1}\lambda _{1}}\right ). }$$
(29)

Example 2.

To help digest these formulas it is useful to have concrete examples. If the mutation rate u_1 = 10^{-5}, b_0 = b_1 = 1, a_0 = 1.02, and a_1 = 1.04, then λ_0 = 0.02, λ_1 = 0.04 and

$$\displaystyle\begin{array}{rcl} t_{1/2}^{1} =\bar{ t}_{ 1/e}^{1}& =& 50\log \left ( \frac{4 \times 10^{-4}} {1.02 \times 10^{-5}}\right ) = 183.45, {}\\ s_{1/2}^{1} =\bar{ s}_{ 1/e}^{1}& =& 50\log \left (\frac{4.16 \times 10^{-4}} {4.08 \times 10^{-7}}\right ) = 50\log (1019.6) = 346.36. {}\\ \end{array}$$

Again since b_i = 1, the time units are cell divisions. If cells divide every four days, this translates into 733.8 days (about two years) and 1385.4 days, or 3.8 years. If instead u_1 = 10^{-6}, this adds (1∕λ_0)log 10 = 115.13, or 460.5 days, to the two waiting times.
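The medians in Example 2 are one-liners, which makes it easy to see how they depend on the parameters (a sketch; the function names are ours):

```python
import math

def t_half(lam0, a0, u1):
    """Median of tau_1, from (24)."""
    return math.log(lam0**2 / (a0 * u1)) / lam0

def s_half(lam0, a0, u1, lam1, a1):
    """Median of sigma_1, from (29): replace u1 by u1*lam1/a1."""
    return t_half(lam0, a0, u1 * lam1 / a1)

print(t_half(0.02, 1.02, 1e-5))                # 183.45
print(s_half(0.02, 1.02, 1e-5, 0.04, 1.04))    # 346.36
```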

Limit Theorems.

Our next goal is to find the limiting behavior of τ_1. Since the median is where the distribution function crosses 1∕2, (23) implies

$$\displaystyle{P(\tau _{1}> t_{1/2}^{1} + t\vert \Omega _{ \infty }^{0}) \approx (1 + e^{\lambda _{0}t})^{-1},}$$

and it follows that

$$\displaystyle{ P(\tau _{1}> t_{1/2}^{1} + x/\lambda _{ 0}\vert \Omega _{\infty }^{0}) \rightarrow (1 + e^{x})^{-1}. }$$
(30)

For a comparison with simulation see Figure 2.

Fig. 2

Results of 200 runs of the system with a_0 = 1.02, a_1 = 1.04, a_2 = 1.06, b_i = 1.0, and u = 10^{-5}. Smooth curves are the limit results for τ_i, i = 1, 2, 3.

The results for fixed V_0 are similar, but the limit distribution is slightly different.

$$\displaystyle{P(\tau _{1}>\bar{ t}_{1/e}^{1} + t\vert V _{ 0}) \approx \exp (-e^{\lambda _{0}t}),}$$

and it follows that

$$\displaystyle{P(\tau _{1}>\bar{ t}_{1/e}^{1} + x/\lambda _{ 0}\vert V _{0}) \rightarrow \exp (-e^{x}).}$$

The results for σ_1 come from changing u_1 to \(u_{1}\lambda _{1}/a_{1}\):

$$\displaystyle\begin{array}{rcl} & & P(\sigma _{1}> s_{1/2}^{1} + x/\lambda _{ 0}\vert \Omega _{\infty }^{0}) = P(\tau _{ 1}> t_{1/2}^{1} + x/\lambda _{ 0}\vert \Omega _{\infty }^{0}) \rightarrow (1 + e^{x})^{-1}, \\ & & P(\sigma _{1}>\bar{ s}_{1/e}^{1} + x/\lambda _{ 0}\vert V _{0}) = P(\tau _{1}>\bar{ t}_{1/e}^{1} + x/\lambda _{ 0}\vert V _{0}) \rightarrow \exp (-e^{x}). {}\end{array}$$
(31)

6 Mutation before detection?

Cancer therapy often fails because acquired resistance enables cancer cells to grow despite the continuous administration of therapy. In some cases, a single genetic alteration is sufficient to cause resistance to cancer treatment. Chronic myeloid leukemia (CML) is caused by a reciprocal translocation between chromosomes 9 and 22 which creates the BCR-ABL oncoprotein. Treatment of CML with imatinib can fail due to a single point mutation in the tyrosine kinase domain of ABL. To date more than 90 point mutations that cause resistance to treatment have been observed. The possibility of drug-resistant cells at the beginning of therapy is of clinical importance since the likelihood and extent of resistance determines treatment choices and patient prognosis.

For this reason Iwasa, Nowak, and Michor [24] were interested in the probability that a mutation conferring resistance to a particular treatment would occur before the cancer was detected. To formulate this as a math problem, we assume that the disease can be detected when the number of cancer cells reaches size M. Assuming that the number of resistant (type 1) cells will be a small fraction of M, we define this time to be \(T_{M} =\min \{ t: Z_{0}(t) = M\}\). Using the calculation in (22), and noting that on the nonextinction event \(\Omega _{\infty }^{0}\) we have \(Z_{0}(t) \sim V _{0}e^{\lambda _{0}t}\), which implies \(Z_{0}(T_{M} - s) \approx Me^{-\lambda _{0}s}\), we find

$$\displaystyle\begin{array}{rcl} P(\tau _{1}> T_{M}\vert Z_{0}(s),s \leq T_{M},\Omega _{\infty }^{0}) =\exp \left (-u_{ 1}\int _{0}^{T_{M} }Z_{0}(t)\,\mathit{dt}\right )& & \\ \approx \exp \left (-Mu_{1}\int _{0}^{\infty }e^{-\lambda _{0}s}\,\mathit{ds}\right ) =\exp \left (-Mu_{ 1}/\lambda _{0}\right ).& &{}\end{array}$$
(32)

This answers our math question, but since the mutation to type 1 might die out, the biologically relevant question is to compute the probability that \(Z_{1}(T_{M})> 0\). To do this we note that mutations to type 1 occur at rate \(u_{1}Me^{-\lambda _{0}s}\) at time T M s, and by (12) will not die out by time T M with probability \(\lambda _{1}/(a_{1} - b_{1}e^{-\lambda _{1}s})\). Thinking about thinning a Poisson process we see that the number of mutations to type 1 that survive to time T M is Poisson with mean

$$\displaystyle{ \mu (M) = Mu_{1}\int _{0}^{\infty }e^{-\lambda _{0}s} \frac{\lambda _{1}} {a_{1} - b_{1}e^{-\lambda _{1}s}}\,\mathit{ds}, }$$
(33)

and it follows that

$$\displaystyle{ P(Z_{1}(T_{M})> 0\vert \Omega _{\infty }^{0}) = 1 -\exp (-\mu (M)). }$$
(34)

The integral in (33) cannot be evaluated exactly, but it is useful to change variables t = exp(−λ 0 s), \(\mathit{dt} = -\lambda _{0}\exp (-\lambda _{0}s)\,\mathit{ds}\) to rewrite it as

$$\displaystyle{ \mu (M) = \frac{Mu_{1}} {\lambda _{0}} \int _{0}^{1} \frac{\lambda _{1}} {a_{1} - b_{1}t^{\lambda _{1}/\lambda _{0}}} \,\mathit{dt}. }$$
(35)

To compare with (7) in [24], we change notation

$$\displaystyle{ \begin{array}{lccccc} \text{here}&a_{0} & b_{0} & a_{1} & b_{1} & u_{1} \\{} [24] & r & d & a & b &\mathit{ru}\end{array} }$$

To explain the last conversion, recall that in [24] their mutation rate u is per cell division, while ours is a rate per unit of time. The final conversion is that their α = λ_1∕λ_0, which in our notation is 1∕α. For this reason, we will let \(\bar{\alpha }=\lambda _{1}/\lambda _{0}\). Once these identifications are made, one can recognize their

$$\displaystyle{ F =\int _{ 0}^{1} \frac{1 - b/a} {1 - (b/a)y^{\bar{\alpha }}}\,\mathit{dy} }$$
(36)

as our integral and the two formulas are identical. In our notation the final result is

$$\displaystyle{ P(Z_{1}(T_{M})> 0\vert \Omega _{\infty }^{0}) = 1 -\exp \left (-\frac{\mathit{Mu}_{1}F} {a_{0} - b_{0}}\right ). }$$
(37)

When mutation does not change the birth or death rates, i.e., \(a_{0} = a_{1} = a\) and \(b_{0} = b_{1} = b\),

$$\displaystyle{\int _{0}^{1} \frac{\mathit{dt}} {a -\mathit{bt}} = \left.-\frac{1} {b}\log (a -\mathit{bt})\right \vert _{0}^{1} = \frac{1} {b}\log \left ( \frac{a} {a - b}\right ),}$$

and (37) becomes

$$\displaystyle{ P(Z_{1}(T_{M})> 0\vert \Omega _{\infty }^{0}) = 1 -\exp \left [-\frac{\mathit{Mu}_{1}} {b} \log \left ( \frac{a} {a - b}\right )\right ]. }$$
(38)

Recalling u 1 = au, this agrees with (8) in [24] and (1) in Tomasetti and Levy [36] (there a = L, b = D).

The derivation in [24] is clever but not 100% rigorous. They break things down according to the number of sensitive cells k. The number increases to k + 1 at rate kr and decreases to k − 1 at rate kd. This shows that the number of sensitive cells is a time change of a random walk. If we let f x (t) be the probability there are x sensitive cells at time t, then the expected number of resistant cancer cells produced when there are x sensitive cancer cells is

$$\displaystyle{R_{x} = \mathit{ru}\int _{0}^{\infty }\mathit{xf }_{ x}(t)\,\mathit{dt}.}$$

We are interested in what happens when the sensitive are conditioned to not die out and the process is stopped when M is reached. In this case, if we ignore the values near the endpoints 0 and M, then the number of man-hours at a site, \(\int _{0}^{\infty }\mathit{xf }_{x}(t)\,\mathit{dt}\), will be the same as the occupation time of x in a random walk with drift. At time t the random walk will have moved from 0 to ≈ (rd)t, so the time spent at each value in [0, (rd)t] is ≈ 1∕(rd). This gives

$$\displaystyle{ R_{x} \approx \frac{u} {1 - d/r} }$$
(39)

in agreement with the result computed in their appendix B. The (minor) flaw in their argument is that the R x are random with \(\mathit{ER}_{x} \approx u/(1 - d/r)\), but successive values are correlated. However, the correlations are short-range, so weighted averages of the R x , such as the ones that appear in sums approximating (35), will be close to their mean.

Approximations.

Logic tells us that \(P(\tau _{1}> T_{M}\vert \Omega _{\infty }^{0}) \leq P(Z_{1}(T_{M}) = 0\vert \Omega _{\infty }^{0})\), so it is comforting to note that the integrand in (35) is ≤ 1. To get an upper bound we note that \(Z_{1}(T_{M}) = 0\) implies σ 1 > T M and type 1 mutations live forever with probability λ 1a 1, so using the reasoning that led to (34)

$$\displaystyle{ P(Z_{1}(T_{M}) = 0\vert \Omega _{\infty }^{0}) \leq P(\sigma _{ 1}> T_{M}\vert \Omega _{\infty }^{0}) =\exp (-Mu_{ 1}\lambda _{1}/a_{1}\lambda _{0}), }$$
(40)

an inequality which can also be derived by noting that λ 1a 1 is a lower bound on the integrand.

Example 3.

Leder et al. [29] have taken a closer look at the probability of pre-existing resistance in chronic myeloid leukemia in order to obtain estimates of the diversity of resistant cells. They choose M = 10^5 cells as the threshold for detection, and on the basis of in vitro studies set a_0 = 0.008, b_0 = 0.003, and λ_0 = 0.005, with time measured in days. They are interested in particular nucleotide substitutions, so they set the mutation rate per birth at 10^{-7}, i.e., mutations occur at rate \(u_{1} = 0.008 \cdot 10^{-7} = 8 \times 10^{-10}\).

They examined eleven BCR-ABL mutations that produce resistance to treatment by imatinib, dasatinib, and nilotinib. The mutants are listed in the next table in decreasing order of their birth rates (per day). They assumed that the death rate in all cases is 0.003. To explain the names, T315I has an Isoleucine in place of a Threonine at position 315, while p210 is the wild type.

mutation   birth rate (per day)   resistant to
T315I      0.0088                 all
E255K      0.0085                 I
Y235F      0.0082                 I
p210       0.0080                 none
E255V      0.0078                 I
V299L      0.0074                 D
Y253H      0.0074                 I
M351I      0.0072                 I
F317L      0.0071                 I, D
T315A      0.0070                 D
F317V      0.0067                 D
L248R      0.0061                 I, D

(I = imatinib, D = dasatinib)

The growth parameters for T315I are a_1 = 0.0088 and b_1 = 0.003, so λ_1 = 0.0058. In this case \(\mathit{Mu}_{1}/\lambda _{0} = 10^{5} \cdot 0.008 \cdot 10^{-7}/0.005 = 0.016\) and λ_1∕a_1 = 0.659, so we have

$$\displaystyle\begin{array}{rcl} P(\tau _{1} \leq T_{M}\vert \Omega _{\infty }^{0}) = 1 - e^{-0.016}& =& 0.01587 {}\\ P(Z_{1}(T_{M})> 0\vert \Omega _{\infty })& =& 0.01263 {}\\ P(\sigma _{1} \leq T_{M}\vert \Omega _{\infty }) = 1 - e^{-0.010544}& =& 0.01049 {}\\ \end{array}$$

where the middle answer comes from evaluating the integral in (33) numerically. The mutation with the lowest growth rate in the table is L248R, which changes Leucine to Arginine at position 248. It has growth parameters a_1 = 0.0061 and b_1 = 0.003, so λ_1 = 0.0031. Again \(\mathit{Mu}_{1}/\lambda _{0} = 0.016\), but this time \(\lambda _{1}/a_{1} = 0.508\), so we have

$$\displaystyle\begin{array}{rcl} P(\tau _{1} \leq T_{M}\vert \Omega _{\infty }^{0}) = 1 - e^{-0.016}& =& 0.01587 {}\\ P(Z_{1}(T_{M})> 0\vert \Omega _{\infty })& =& 0.01198 {}\\ P(\sigma _{1} \leq T_{M}\vert \Omega _{\infty }) = 1 - e^{-0.008131}& =& 0.00810 {}\\ \end{array}$$

Comparing these two extreme cases, we see that for all 11 mutations the probability of resistance is between 0.01198 and 0.01263. From this they conclude that, for the parameters used in the computation, the probability of no resistant type is 0.87, while there will be one with probability 0.12, and two with probability 0.01.
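The numbers in Example 3 can be reproduced by evaluating the integral in (35) numerically; the sketch below uses Simpson’s rule (our choice, any quadrature will do):

```python
import math

def prob_resistance(M, u1, lam0, a1, b1, n=10000):
    """P(Z_1(T_M) > 0 | nonextinction), from (34) and (35)."""
    lam1 = a1 - b1
    f = lambda t: lam1 / (a1 - b1 * t ** (lam1 / lam0))
    h = 1.0 / n                        # composite Simpson's rule on [0, 1], n even
    s = f(0.0) + f(1.0) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n))
    mu = (M * u1 / lam0) * s * h / 3   # mu(M) in (35)
    return 1.0 - math.exp(-mu)

M, u1, lam0 = 1e5, 8e-10, 0.005
print(prob_resistance(M, u1, lam0, 0.0088, 0.003))   # T315I: about 0.0126
print(prob_resistance(M, u1, lam0, 0.0061, 0.003))   # L248R: about 0.0120
```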

7 Accumulation of neutral mutations

It is widely accepted that cancers result from an accumulation of mutations that increase the growth rates of tumor cells compared to the cells that surround them. A number of studies have sequenced the genomes of tumors in order to find the causative or “driver” mutations [54, 63, 66, 69, 72]. The Cancer Genome Atlas project and the International Cancer Genome Consortium represent the largest of such efforts. They aim to sequence hundreds of examples of dozens of tumor types. See [43] for a summary of some of the results that have been found. There is much more on the web pages of these two organizations.

The search for “driver” mutations that cause the disease has been complicated by the fact that a typical solid tumor has dozens of amino-acid-altering substitutions, but only a small fraction of these are involved in the disease. The others are “passenger” mutations that are genetically neutral. A recent study [70] argues that half or more of the somatic mutations found in tumors occur before the onset of disease. To begin to understand the behavior of neutral mutations in our cancer model, we first consider those that occur to type 0’s. As shown in Section 3.1, if we condition Z_0(t) on the event \(\Omega _{\infty }^{0}\) that it does not die out, and let Y_0(t) be the number of individuals at time t whose families do not die out, then Y_0(t) is a Yule process in which births occur at rate λ_0. Since each of the Z_0(t) individuals at time t has probability γ = λ_0∕a_0 of starting a family that does not die out, and these events are independent for different individuals,

$$\displaystyle{ Y _{0}(t)/Z_{0}(t) \rightarrow \gamma \quad \text{in probability.} }$$
(41)

As explained in Section 3.1, Y_0(t) is the backbone of the branching tree. The remainder of the population lives in finite trees attached to the backbone. The finite trees are copies of the branching process conditioned to die out, so most of them are small, i.e., each individual in the population has a close relative on the backbone. Because of this, it is enough to study the accumulation of mutations in Y_0(t).

Our first problem is to investigate the population site frequency spectrum,

$$\displaystyle{ F(x) =\lim _{t\rightarrow \infty }F_{t}(x), }$$
(42)

where F_t(x) is the expected number of neutral “passenger” mutations present in more than a fraction x of the individuals at time t. To begin to compute F(x), we note that by the remarks above it is enough to investigate the frequencies of neutral mutations within Y_0. If we take the viewpoint of the infinite alleles model, where each mutation is to a type not seen before, then results can be obtained from Durrett and Schweinsberg’s [15] study of a gene duplication model. In their system there is initially a single individual of type 1. No individual dies and each individual independently gives birth to a new individual at rate 1. When a new individual is born, it has the same type as its parent with probability 1 − r, and with probability r it is a new type different from all previously observed types.

Let T N be the first time there are N individuals and let F S, N be the number of families of size > S at time T N . Omitting the precise error bounds given in Theorem 1.3 of [15], that result says

$$\displaystyle{ F_{S,N} \approx r\Gamma \left (\frac{2 - r} {1 - r}\right )NS^{-1/(1-r)}\quad \mbox{ for $1 \ll S \ll N^{1-r}$}. }$$
(43)

The upper cutoff on S is needed for the result to hold. When S ≫ N^{1−r}, \(\mathit{EF}_{S,N}\) decays exponentially fast.

As mentioned above, the last conclusion gives a result for a branching process with mutations according to the infinite alleles model, a subject first investigated by Griffiths and Pakes [19]. To study DNA sequence data, we are more interested in the frequencies of individual mutations. Using ideas from Durrett and Schweinsberg [14] it is easy to show:

Theorem 2.

If passenger mutations occur at rate ν, then F(x) = ν∕(λ_0 x).

Numerical example.

To illustrate the use of Theorem 2, suppose the time scale has been chosen so that a_0 = 1, and take λ_0 = 0.01 and ν = 10^{-6}. In support of these numbers we note that Bozic et al. [5] estimate that the selective advantage provided by a typical cancer driver mutation is 0.004 ± 0.0004. As for the second, if the per nucleotide mutation rate is 10^{-9} and there are 1000 nucleotides in a gene, then a mutation rate of 10^{-6} per gene results. In this case Theorem 2 predicts that if we focus only on one gene, then the expected number of mutations with frequency > 0.1 is

$$\displaystyle{ F(0.1) = 10^{-6+2+1} = 0.001 }$$
(44)

so, to a good first approximation, no particular neutral mutation occurs with an appreciable frequency. Of course, if we are sequencing 20,000 genes, then there will be a few dozen passenger mutations seen in a given individual. On the other hand, there will be very few specific neutral mutations that will appear multiple times in the sample, i.e., there should be few false positives. However, the bad news for sequencing studies is that in many cancers there is a wide variety of driver mutations that accomplish the same end.

Proof of Theorem 2.

We will drop the subscript 0 for convenience. For j ≥ 1 let \(T_{j} =\min \{ t: Y _{t} = j\}\) and notice that T 1 = 0. Since the j individuals at time T j start independent copies Y 1, … Y j of Y, we have

$$\displaystyle{\lim _{s\rightarrow \infty }e^{-\lambda s}Y ^{i}(s) =\xi _{ i},}$$

where the ξ i are independent exponential mean 1 (here time s in Y i corresponds to time T j + s in the original process). From the limit theorem for the Y i we see that for j ≥ 2 the limiting fraction of the population descended from individual i at time T j is

$$\displaystyle{r_{i} =\xi _{i}/(\xi _{1} + \cdots +\xi _{j}),\quad 1 \leq i \leq j,}$$

which as some of you know has a beta(1, j − 1) distribution with density

$$\displaystyle{(j - 1)(1 - x)^{j-2}.}$$

For those who don’t we give the simple proof of this fact. Note that

$$\displaystyle{((\xi _{1},\ldots \xi _{j})\vert \xi _{1} + \cdots +\xi _{j} = t)}$$

is uniform over all nonnegative vectors that sum to t, so \((r_{1},\ldots r_{j})\) is uniformly distributed over the nonnegative vectors that sum to 1. Now the joint distribution of the r i can be generated by letting \(U_{1},\ldots U_{j-1}\) be uniform on [0, 1], \(U^{(1)} <U^{(2)} <\ldots U^{(j-1)}\) be the order statistics, and \(r_{i} = U^{(i)} - U^{(i-1)}\) where U (0) = 0 and U (j) = 1. From this and symmetry, we see that

$$\displaystyle{P(r_{i}> x) = P(r_{j}> x) = P(U_{i} <1 - x\mbox{ for $1 \leq i \leq j - 1$}) = (1 - x)^{j-1},}$$

and differentiating gives the density.

If the neutral mutation rate is ν, then on \([T_{j},T_{j+1})\) mutations occur to individuals in Y at rate ν j, while births occur at rate λ j, so the number of mutations N j in this time interval has a shifted geometric distribution with success probability λ∕(λ +ν), i.e.,

$$\displaystyle{ P(N_{j} = k) = \left ( \frac{\nu } {\nu +\lambda }\right )^{k} \frac{\lambda } {\nu +\lambda }\quad \mbox{ for $k = 0,1,2\ldots $} }$$
(45)

The N j are i.i.d. with mean

$$\displaystyle{\frac{\nu +\lambda } {\lambda } - 1 = \frac{\nu } {\lambda }.}$$

Thus the expected number of neutral mutations that are present at frequency larger than x is

$$\displaystyle{\frac{\nu } {\lambda }\sum _{j=1}^{\infty }(1 - x)^{j-1} = \frac{\nu } {\lambda x}.}$$

The j = 1 term corresponds to mutations in [T 1, T 2) which will be present in the entire population. □ 
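Theorem 2 can be checked with a direct simulation of the Yule process with neutral mutations (a sketch; the names are ours, and the match improves as N grows):

```python
import random
from collections import Counter

def yule_sfs(lam, nu, N, rng):
    """Grow a Yule process to size N, each individual acquiring neutral
    mutations at rate nu; return Counter: mutation id -> final frequency."""
    muts = [set()]                    # muts[i] = mutation ids carried by cell i
    next_id = 0
    while len(muts) < N:
        # at size j, mutations occur at rate nu*j and births at rate lam*j,
        # so the next event is a mutation with probability nu/(nu + lam)
        if rng.random() < nu / (nu + lam):
            muts[rng.randrange(len(muts))].add(next_id)
            next_id += 1
        else:
            muts.append(set(rng.choice(muts)))   # a copy of a uniform parent
    freq = Counter()
    for s in muts:
        freq.update(s)
    return freq

rng = random.Random(7)
lam, nu, N, runs = 1.0, 0.1, 1000, 300
totals = Counter()
for _ in range(runs):
    freq = yule_sfs(lam, nu, N, rng)
    for x in (0.05, 0.1, 0.2):
        totals[x] += sum(1 for c in freq.values() if c > x * N)
for x in (0.05, 0.1, 0.2):
    print(x, totals[x] / runs, nu / (lam * x))   # observed vs F(x) = nu/(lam x)
```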

Theorem 2 describes the population site frequency spectrum. The next result describes the frequencies in a sample. Suppose we sample n individuals from the Yule process when it has size N γ and hence the whole population has size ∼ N. Let η n, m be the number of mutations present in m individuals.

Theorem 3.

As N →∞

$$\displaystyle{ E\eta _{n,m}\left \{\begin{array}{@{}l@{\quad }l@{}} \rightarrow \frac{n\nu } {\lambda } \cdot \frac{1} {m(m - 1)}\quad &2 \leq m <n, \\ \sim \frac{n\nu } {\lambda } \cdot \log (N\gamma ) \quad &m = 1, \end{array} \right. }$$
(46)

where \(a_{N} \sim b_{N}\) means a N ∕b N → 1.

To explain the result for m = 1, we note that, as Slatkin and Hudson [35] observed, genealogies in exponentially growing populations tend to be star-shaped, i.e., the most recent common ancestor of two individuals occurs near the beginning of the branching process. The time required for Y_0(t) to reach size Nγ is \(\sim (1/\lambda )\log (N\gamma )\), so the number of mutations on our n lineages is roughly nν times this. Note also that for a fixed sample size, the E η_{n,m}, 2 ≤ m < n, remain bounded.

Proof of Theorem 3.

We begin with a calculus fact that is easy for readers who can remember the definition of the beta distribution. The rest of us can simply integrate by parts.

Lemma 2.

If a and b are nonnegative integers

$$\displaystyle{ \int _{0}^{1}x^{a}(1 - x)^{b}\,\mathit{dx} = \frac{a!b!} {(a + b + 1)!}. }$$
(47)

Differentiating the distribution function from Theorem 2 gives the density ν∕λx^2. We have removed the atom at 1, since those mutations will be present in every individual, and we are supposing that the sample size n is larger than m, the number of times the mutation occurs in the sample. Conditioning on the frequency in the entire population, it follows that for 2 ≤ m < n,

$$\displaystyle{E\eta _{n,m} =\int _{ 0}^{1} \frac{\nu } {\lambda x^{2}}\binom{n}{m}x^{m}(1 - x)^{n-m}\,\mathit{dx} = \frac{n\nu } {\lambda m(m - 1)},}$$

where we have used n ≪ N and the second step requires m ≥ 2.

When m = 1 the formula above gives E η_{n,1} = ∞. To get a finite answer we note that the expected number of mutations present at frequency larger than x is

$$\displaystyle{\frac{\nu } {\lambda }\sum _{j=1}^{N\gamma }(1 - x)^{j-1} = \frac{\nu } {\lambda x}\left (1 - (1 - x)^{N\gamma }\right ).}$$

Differentiating (and multiplying by −1) changes the density from ν∕λx^2 to

$$\displaystyle{ \frac{\nu } {\lambda }\left ( \frac{1} {x^{2}}\left (1 - (1 - x)^{N\gamma }\right ) -\frac{1} {x}N\gamma (1 - x)^{N\gamma -1}\right ). }$$
(48)

Ignoring the constant ν∕λ for the moment and noticing

$$\displaystyle{\binom{n}{m}x^{m}(1 - x)^{n-m} = \mathit{nx}(1 - x)^{n-1}}$$

when m = 1 the contribution from the second term is

$$\displaystyle{n\int _{0}^{1}N\gamma (1 - x)^{N\gamma +n-2}\,\mathit{dx} = n \cdot \frac{N\gamma } {N\gamma + n - 1} <n}$$

and this term can be ignored. Changing variables x = y∕Nγ, the first integral is

$$\displaystyle\begin{array}{rcl} & & \int _{0}^{1} \frac{1} {x}\left (1 - (1 - x)^{N\gamma }\right )(1 - x)^{n-1}\,\mathit{dx} {}\\ & & =\int _{ 0}^{N\gamma }\frac{1} {y}\left (1 - (1 - y/N\gamma )^{N\gamma }\right )(1 - y/N\gamma )^{n-1}\,\mathit{dy}. {}\\ \end{array}$$

To show that the above is \(\sim \log (N\gamma )\), we let K_N → ∞ slowly and divide the integral into three regions [0, K_N], [K_N, Nγ∕log N], and [Nγ∕log N, Nγ]. Outside the first interval, (1 − y∕Nγ)^{Nγ} → 0, and outside the third, (1 − y∕Nγ)^{n−1} → 1, so we conclude that the above is

$$\displaystyle{O(K_{N}) +\int _{ K_{N}}^{N\gamma /\log N}\frac{1} {y}\,\mathit{dy} + O(\log \log N)}$$

 □ 

8 Properties of the gamma function

The Gamma function is defined for α > 0 by

$$\displaystyle{ \Gamma (\alpha ) =\int _{ 0}^{\infty }t^{\alpha -1}e^{-t}\,\mathit{dt}. }$$
(49)

This quantity with 0 < α < 1 will show up in the constants of our limit theorems, so we record some of its properties now. Integrating by parts

$$\displaystyle{ \Gamma (\alpha +1) =\int _{ 0}^{\infty }t^{\alpha }e^{-t}\,\mathit{dt} =\int _{ 0}^{\infty }\alpha t^{\alpha -1}e^{-t}\,\mathit{dt} =\alpha \Gamma (\alpha ). }$$
(50)

Since \(\Gamma (1) = 1\) it follows that if n is an integer \(\Gamma (n) = (n - 1)!\). Among the many formulas for \(\Gamma\), the most useful for us is Euler’s reflection formula

$$\displaystyle{ \Gamma (\alpha )\Gamma (1-\alpha ) = \frac{\pi } {\sin (\pi \alpha )}. }$$
(51)

Taking α = 1∕2 we see that (51) implies \(\Gamma (1/2) = \sqrt{\pi }\). Letting α → 0 and using \(\Gamma (1-\alpha ) \rightarrow \Gamma (1) = 1\),

$$\displaystyle{ \Gamma (\alpha ) \sim \frac{\pi } {\sin (\pi \alpha )} \sim \frac{1} {\alpha }, }$$
(52)

where we have used \(\sin x \sim x\) as x → 0.
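These identities are easy to confirm numerically with the standard library (a quick check of (50)–(52), not part of the original text):

```python
import math

# (50): Gamma(alpha + 1) = alpha * Gamma(alpha)
for a in (0.3, 0.5, 0.8):
    assert abs(math.gamma(a + 1) - a * math.gamma(a)) < 1e-12

# (51): Euler's reflection formula Gamma(a) Gamma(1-a) = pi / sin(pi a)
for a in (0.25, 0.5, 0.9):
    assert abs(math.gamma(a) * math.gamma(1 - a) - math.pi / math.sin(math.pi * a)) < 1e-9

# (52): Gamma(a) ~ 1/a as a -> 0
for a in (1e-3, 1e-6):
    print(a * math.gamma(a))   # tends to 1
```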

9 Growth of Z_1(t)

In this section we will examine the growth of the type 1’s under the assumption that \(Z_{0}^{{\ast}}(t) = V _{0}e^{\lambda _{0}t}\) for t ∈ (−∞, ∞), where the star is to remind us that we have extended Z_0 to negative times. The constant V_0 could be set equal to 1 by changing the origin of time. We will not do that, because proving the result for a general V_0 will make it easy to prove results for Z_k(t) by induction. The expected number of type 1 families that begin at negative times is \(V _{0}u_{1}/\lambda _{0}\). When V_0 = 1 this is 10^{-3} or smaller, so the extension changes the behavior very little. However, as the reader will see, it makes the analysis much easier.

We begin by stating and discussing the main results before we get involved in the details of the proofs. Let α = λ_0∕λ_1,

$$\displaystyle{ c_{\mu,1} = \frac{1} {a_{1}}\left (\frac{a_{1}} {\lambda _{1}} \right )^{\alpha }\Gamma (\alpha ),\qquad c_{h,1} = c_{\mu,1}\Gamma (1-\alpha ), }$$
(53)

and \(c_{\theta,1} = c_{h,1}(a_{0}/\lambda _{0})\).

Theorem 4.

If we assume \(Z_{0}^{{\ast}}(t) = V _{0}e^{\lambda _{0}t}\) then as t →∞,

$$\displaystyle{e^{-\lambda _{1}t}Z_{ 1}^{{\ast}}(t) \rightarrow V _{ 1}.}$$
  1. (i)

    V 1 is the sum of the points in a Poisson process with mean measure

    $$\displaystyle{\mu (x,\infty ) = c_{\mu,1}u_{1}V _{0}x^{-\alpha }}$$
  2. (ii)

    \(E(e^{-\theta V _{1}}\vert V _{0}) =\exp (-c_{h,1}u_{1}V _{0}\theta ^{\alpha })\)

  3. (iii)

    \(P(V _{1}> x\vert V _{0}) \sim x^{-\alpha }c_{h,1}u_{1}V _{0}/\Gamma (1-\alpha ) = c_{\mu,1}u_{1}V _{0}x^{-\alpha }\) .

  4. (iv)

    If V 0 is exponential(λ 0 ∕a 0 ) then

    $$\displaystyle{ E\exp (-\theta V _{1}) = (1 + c_{\theta,1}u_{1}\theta ^{\alpha })^{-1}, }$$
    (54)

    and (v) \(P(V _{1}> x) \sim x^{-\alpha }c_{\theta,1}u_{1}/\Gamma (1-\alpha )\) .

Conclusion (ii) implies that the distribution of (V_1 | V_0) is a one-sided stable law with index α. It is known that such distributions have a power law tail; conclusion (iii) makes this precise. In words, when x is large the sum is bigger than x if and only if there is a point in (x, ∞) in the Poisson process. The result in (iii) was discovered by Iwasa, Nowak, and Michor [24] using simulation. The conclusion in (iv) follows from (ii) by taking expected values and using (16). Finally, (v) follows from (iv) in the same way that (iii) follows from (ii), by using a Tauberian theorem given in Lemma 4.

Proof of (i).

Mutations to type 1 occur at times of a Poisson process with rate \(u_{1}V _{0}e^{\lambda _{0}s}\). Theorem 1 implies that a mutation at time s will grow to size \(\approx e^{\lambda _{1}(t-s)}W_{1}\) by time t, where W 1 has distribution

$$\displaystyle{W_{1} = _{d} \frac{b_{1}} {a_{1}}\delta _{0} + \frac{\lambda _{1}} {a_{1}}\,\text{exponential}(\lambda _{1}/a_{1}).}$$

To add up the contributions, we associate with each point s_i in the Poisson process an independent random variable y_i with the same distribution as W_1. This gives us a Poisson process on (−∞, ∞) × (0, ∞) (we ignore the points with y_i = 0) that has intensity

$$\displaystyle{u_{1}V _{0}e^{\lambda _{0}s} \cdot (\lambda _{1}/a_{1})^{2}e^{-(\lambda _{1}/a_{1})y}.}$$

Here, one of the two factors of λ_1∕a_1 comes from P(W_1 > 0); the other comes from the exponential density function.

A point (s, y) makes a contribution \(e^{-\lambda _{1}s}y\) to \(\lim _{t\rightarrow \infty }e^{-\lambda _{1}t}Z_{1}^{{\ast}}(t)\). Points with \(e^{-\lambda _{1}s}y> x\) will contribute more than x to the limit. The number of such points is Poisson distributed with mean

$$\displaystyle{\int _{-\infty }^{\infty }u_{ 1}V _{0}e^{\lambda _{0}s} \frac{\lambda _{1}} {a_{1}}e^{-(\lambda _{1}/a_{1})xe^{\lambda _{1}s} }\,\mathit{ds},}$$

where one factor of \(\lambda _{1}/a_{1}\) has disappeared since we are looking at the tail of the distribution. Changing variables

$$\displaystyle{ \frac{\lambda _{1}} {a_{1}}xe^{\lambda _{1}s} = t,\qquad \frac{\lambda _{1}} {a_{1}}x\lambda _{1}e^{\lambda _{1}s}\mathit{ds} = \mathit{dt},}$$

and noticing \(s = (1/\lambda _{1})\log (ta_{1}/x\lambda _{1})\) implies \(e^{(\lambda _{0}-\lambda _{1})s} = (a_{1}t/\lambda _{1}x)^{(\lambda _{0}/\lambda _{1})-1}\) the integral above becomes

$$\displaystyle\begin{array}{rcl} & & = u_{1}V _{0}\int _{0}^{\infty }\left (\frac{a_{1}t} {\lambda _{1}x} \right )^{\alpha -1}e^{-t} \frac{dt} {\lambda _{1}x} {}\\ & & = \frac{u_{1}V _{0}} {a_{1}} \left (\frac{a_{1}} {\lambda _{1}} \right )^{\alpha }x^{-\alpha }\int _{ 0}^{\infty }t^{\alpha -1}e^{-t}\,\mathit{dt}, {}\\ \end{array}$$

which completes the proof of (i). □ 

Proof of (ii).

Let \(\tilde{Z}_{1}(t)\) be the number of 1’s at time t in the branching process with Z 0(0) = 0, Z 1(0) = 1, and let \(\tilde{\phi }_{1,t}(\theta ) = Ee^{-\theta \tilde{Z}_{1}(t)}\).

Lemma 3.

\(E\left (e^{-\theta Z_{1}^{{\ast}}(t) }\vert V _{0}\right ) =\exp \left (-u_{1}\int _{-\infty }^{t}V _{0}e^{\lambda _{0}s}(1 -\tilde{\phi }_{1,t-s}(\theta ))\,\mathit{ds}\right )\) .

Proof of Lemma 3.

We begin with the corresponding formula in discrete time:

$$\displaystyle\begin{array}{rcl} E\left (\!\left.e^{-\theta Z_{1}^{{\ast}}(n) }\right \vert Z_{0}^{{\ast}}(m),m \leq n\!\right )& =& \prod _{ m=-\infty }^{n-1}\sum _{ k_{m}=0}^{\infty }e^{-u_{1}Z_{0}^{{\ast}}(m) }\frac{(u_{1}Z_{0}^{{\ast}}(m))^{k_{m}}} {k_{m}!} \tilde{\phi }_{1,n-m-1}(\theta )^{k_{m} } {}\\ & =& \prod _{m=-\infty }^{n-1}\exp \left (-u_{ 1}Z_{0}^{{\ast}}(m)(1 -\tilde{\phi }_{ 1,n-m-1}(\theta ))\right ) {}\\ & =& \exp \left (-u_{1}\sum _{m=-\infty }^{n-1}Z_{ 0}^{{\ast}}(m)(1 -\tilde{\phi }_{ 1,n-m-1}(\theta ))\right ). {}\\ \end{array}$$

Breaking up the time-axis into intervals of length h, letting h → 0, and using \(Z_{0}^{{\ast}}(s) = V _{0}e^{\lambda _{0}s}\) gives the result in continuous time. □ 

Replacing θ by \(\theta e^{-\lambda _{1}t}\) in Lemma 3 and letting t →∞,

$$\displaystyle{ E\left (e^{-\theta V _{1} }\vert V _{0}\right ) =\lim _{t\rightarrow \infty }\exp \left (-u_{1}V _{0}\int _{-\infty }^{t}e^{\lambda _{0}s}(1 -\tilde{\phi }_{ 1,t-s}(\theta e^{-\lambda _{1}t}))\,\mathit{ds}\right ). }$$
(55)

To calculate the limit, we note that by (15)

$$\displaystyle{\tilde{Z}_{1}(t - s)e^{-\lambda _{1}(t-s)} \Rightarrow \frac{b_{1}} {a_{1}}\delta _{0} + \frac{\lambda _{1}} {a_{1}}\ \text{exponential}(\lambda _{1}/a_{1}),}$$

so multiplying by \(e^{-\lambda _{1}s}\) and taking the Laplace transform, we have

$$\displaystyle{ 1 -\tilde{\phi }_{1,t-s}(\theta e^{-\lambda _{1}t}) \rightarrow \frac{\lambda _{1}} {a_{1}}\int _{0}^{\infty }(1 - e^{-\theta x})(\lambda _{ 1}/a_{1})e^{\lambda _{1}s}e^{-xe^{\lambda _{1}s}\lambda _{ 1}/a_{1}}\mathit{dx}. }$$
(56)

Using this in (55) and interchanging the order of integration

$$\displaystyle{ E\left (e^{-\theta V _{1} }\vert V _{0}\right ) =\exp \left (-u_{1}V _{0}h(\theta )\right ), }$$
(57)

where

$$\displaystyle{ h(\theta ) = \frac{\lambda _{1}^{2}} {a_{1}^{2}}\int _{0}^{\infty }(1 - e^{-\theta x})\left [\int _{ -\infty }^{\infty }e^{\lambda _{0}s}e^{\lambda _{1}s}e^{-xe^{\lambda _{1}s}\lambda _{ 1}/a_{1}}\mathit{ds}\right ]\,\mathit{dx}. }$$
(58)

Changing variables \(u = \mathit{xe}^{\lambda _{1}s}\lambda _{1}/a_{1}\), \(e^{\lambda _{1}s}\mathit{ds} = a_{1}\,\mathit{du}/(\lambda _{1}^{2}x)\), the inside integral

$$\displaystyle{=\int _{ 0}^{\infty }\left (\frac{a_{1}u} {\lambda _{1}x} \right )^{\lambda _{0}/\lambda _{1}}e^{-u}\,\frac{a_{1}\mathit{du}} {\lambda _{1}^{2}x}.}$$

Inserting this in (58) and recalling \(\alpha =\lambda _{0}/\lambda _{1}\), we have

$$\displaystyle{h(\theta ) = \frac{1} {a_{1}}\left (\frac{a_{1}} {\lambda _{1}} \right )^{\alpha }\int _{0}^{\infty }(1 - e^{-\theta x})x^{-\alpha -1}\,\mathit{dx}\int _{ 0}^{\infty }u^{\alpha }e^{-u}\,\mathit{du}.}$$

Comparing with (53) and using \(\Gamma (\alpha +1) =\alpha \Gamma (\alpha )\), see (50), gives

$$\displaystyle{ h(\theta ) = c_{\mu,1}\int _{0}^{\infty }(1 - e^{-\theta x})\alpha x^{-\alpha -1}\,\mathit{dx}. }$$
(59)

Changing variables x = y∕θ, dx = dy∕θ we have

$$\displaystyle{h(\theta ) = c_{\mu,1}\theta ^{\alpha }\int _{ 0}^{\infty }(1 - e^{-y})\alpha y^{-\alpha -1}\,\mathit{dy}.}$$

Integrating by parts, it follows that

$$\displaystyle{ h(\theta ) = c_{\mu,1}\theta ^{\alpha }\int _{ 0}^{\infty }e^{-y}y^{-\alpha }\,\mathit{dy} = c_{\mu,1}\Gamma (1-\alpha )\theta ^{\alpha } = c_{ h,1}\theta ^{\alpha }, }$$
(60)

which completes the proof of (ii). □ 

Proof of (iii).

To show that V 1 has a power law tail, we note that (57) and (60) imply that as θ → 0,

$$\displaystyle{ 1 - E(e^{-\theta V _{1} }\vert V _{0}) \sim c_{h,1}u_{1}V _{0}\theta ^{\alpha }, }$$
(61)

and then use a Tauberian theorem from Feller Volume II (pages 442–446). Let

$$\displaystyle{\omega (\lambda ) =\int _{ 0}^{\infty }e^{-\lambda x}\mathit{dU}(x).}$$

Lemma 4.

If L is slowly varying and U has an ultimately monotone derivative u, then \(\omega (\lambda ) \sim \lambda ^{-\rho }L(1/\lambda )\) if and only if \(u(x) \sim x^{\rho -1}L(x)/\Gamma (\rho )\) .

To use this result we note that if ϕ(θ) is the Laplace transform of the probability distribution F, then integrating by parts gives

$$\displaystyle{\int _{0}^{\infty }e^{-\theta x}\mathit{dF}(x) = \left.(e^{-\theta x})(F(x) - 1)\right \vert _{ 0}^{\infty }-\theta \int _{ 0}^{\infty }e^{-\theta x}(1 - F(x))\,\mathit{dx},}$$

so we have

$$\displaystyle{1 -\phi (\theta ) =\theta \int _{ 0}^{\infty }e^{-\theta x}(1 - F(x))\,\mathit{dx}.}$$

Using (61), it follows that \((1 -\phi (\theta ))/\theta \sim c_{h,1}u_{1}V _{0}\theta ^{\alpha -1}\). Applying Lemma 4 with ω(θ) = (1 −ϕ(θ))∕θ, u(x) = 1 − F(x), which is decreasing, and ρ = 1 −α, we conclude

$$\displaystyle{1 - F(x) \sim \frac{c_{h,1}u_{1}V _{0}} {\Gamma (1-\alpha )} x^{-\lambda _{0}/\lambda _{1} },}$$

which proves (iii) and completes the proof of Theorem 4. □ 

10 Moments of Z 1(t)

The fact that EV 1 = ∞ is a signal of complications that we will now investigate. We return to considering the process Z k (t), which starts with one type 0 at time 0. The first step is to compute expected values. \(\mathit{EZ}_{0}(s) = e^{\lambda _{0}s}\) so, assuming λ 0 < λ 1,

$$\displaystyle\begin{array}{rcl} \mathit{EZ}_{1}(t)& =& \int _{0}^{t}u_{ 1}e^{\lambda _{0}s}e^{\lambda _{1}(t-s)}\,\mathit{ds} = u_{ 1}e^{\lambda _{1}t}\int _{ 0}^{t}e^{-(\lambda _{1}-\lambda _{0})s}\,\mathit{ds} \\ & =& e^{\lambda _{1}t} \frac{u_{1}} {\lambda _{1} -\lambda _{0}}(1 - e^{-(\lambda _{1}-\lambda _{0})t}). {}\end{array}$$
(62)

To investigate the limiting behavior of Z 1(t) we note that

$$\displaystyle{M_{t} = e^{-\lambda _{1}t}Z_{ 1}(t) -\int _{0}^{t}u_{ 1}Z_{0}(s)e^{-\lambda _{1}s}\,\mathit{ds}\quad \text{is a martingale.}}$$

The integral, call it I t , is increasing in t and has expected value \(e^{-\lambda _{1}t}\mathit{EZ}_{1}(t) \leq u_{1}/(\lambda _{1} -\lambda _{0})\), so it converges to a limit I ∞ . Since M t  ≥ −I ∞ , the martingale convergence theorem implies that

$$\displaystyle{e^{-\lambda _{1}t}Z_{ 1}(t) \rightarrow W_{1}\quad \text{a.s.}}$$

Our next step is to show \(\mathit{EW }_{1} = \mathit{EI}_{\infty } = u_{1}/(\lambda _{1} -\lambda _{0})\). This follows from:

Lemma 5.

For k ≥ 0, \(\sup _{t}E(e^{-\lambda _{k}t}Z_{ k}(t))^{2} <\infty\) .

Proof.

The base case is easy. We look at the derivative \(\frac{d} {\mathit{dt}}E(e^{-\lambda _{0}t}Z_{ 0}(t))^{2}\)

$$\displaystyle\begin{array}{rcl} & & = -2\lambda _{0}E(e^{-\lambda _{0}t}Z_{ 0}(t))^{2} + e^{-2\lambda _{0}t}\left (E[a_{ 0}Z_{0}(t)(2Z_{0}(t) + 1)] - E[b_{0}Z_{0}(t)(2Z_{0}(t) - 1)]\right ) {}\\ & & = e^{-2\lambda _{0}t}(a_{ 0} + b_{0})\mathit{EZ}_{0}(t) = e^{-\lambda _{0}t}(a_{ 0} + b_{0}), {}\\ \end{array}$$

and it follows that \(\sup _{t}E(e^{-\lambda _{0}t}Z_{0}(t))^{2} <\infty\). Next, we suppose \(\sup _{t}E(e^{-\lambda _{k-1}t}Z_{ k-1}(t))^{2} \leq c_{ k-1} <\infty\) and consider the derivative \(\frac{d} {\mathit{dt}}E(e^{-\lambda _{k}t}Z_{ k}(t))^{2}\)

$$\displaystyle\begin{array}{rcl} & & = -2\lambda _{k}E(e^{-\lambda _{k}t}Z_{ k}(t))^{2} + e^{-2\lambda _{k}t}E[a_{ k}Z_{k}(t)(2Z_{k}(t) + 1)] {}\\ & & \quad - e^{-2\lambda _{k}t}E[b_{ k}Z_{k}(t)(2Z_{k}(t) - 1)] + e^{-2\lambda _{k}t}E[u_{ k}Z_{k-1}(t)(2Z_{k}(t) + 1)] {}\\ & & = (a_{k} + b_{k})e^{-2\lambda _{k}t}EZ_{ k}(t) + u_{k}e^{-2\lambda _{k}t}E[Z_{ k-1}(t)(2Z_{k}(t) + 1)]. {}\\ \end{array}$$

To bound \(2u_{k}e^{-2\lambda _{k}t}E[Z_{ k-1}(t)Z_{k}(t)]\), we use the Cauchy-Schwarz inequality and y 1∕2 ≤ 1 + y for y ≥ 0 to get

$$\displaystyle\begin{array}{rcl} & & \leq 2u_{k}e^{-(\lambda _{k}-\lambda _{k-1})t}E[e^{-2\lambda _{k-1}t}Z_{ k-1}^{2}(t)]^{1/2}E[e^{-2\lambda _{k}t}Z_{ k}^{2}(t)]^{1/2} {}\\ & & \leq 2u_{k}e^{-(\lambda _{k}-\lambda _{k-1})t}c_{ k-1}^{1/2}\left (1 + E[e^{-2\lambda _{k}t}Z_{ k}^{2}(t)]\right ). {}\\ \end{array}$$

Comparison theorems for differential equations imply that \(E(e^{-\lambda _{k}t}Z_{ k}(t))^{2} \leq m(t)\) where m(t) is the solution of the differential equation

$$\displaystyle{ \frac{d} {\mathit{dt}}m(t) = a(t)m(t) + b(t),\quad m(0) = 0, }$$
(63)

with \(a(t) = 2u_{k}c_{k-1}^{1/2}e^{-(\lambda _{k}-\lambda _{k-1})t}\), and

$$\displaystyle{b(t) = (a_{k} + b_{k})e^{-2\lambda _{k}t}\mathit{EZ}_{ k}(t) + 2u_{k}e^{-2\lambda _{k}t}EZ_{ k-1}(t) + 2u_{k}c_{k-1}^{1/2}e^{-(\lambda _{k}-\lambda _{k-1})t}.}$$

Solving (63) gives

$$\displaystyle{m(t) =\int _{ 0}^{t}b(s)\exp \left (\int _{ s}^{t}a(r)\,\mathit{dr}\right )\,\mathit{ds}.}$$

Since a(t) and b(t) are both integrable, m(t) is bounded. □ 

At this point we have shown that

$$\displaystyle\begin{array}{rcl} & & e^{-\lambda _{1}t}Z_{ 1}(t) \rightarrow W_{1}\mbox{ with $\mathit{EW }_{1} = u_{1}/(\lambda _{1} -\lambda _{0})$}, {}\\ & & e^{-\lambda _{1}t}Z_{ 1}^{{\ast}}(t) \rightarrow V _{ 1}\mbox{ with $\mathit{EV }_{1} = \infty $}. {}\\ \end{array}$$

To see the reason for the difference between the two limits, we will repeat part of the proof of the point process result (i) from Theorem 4, now applied to Z 1(t). A point (s, y) makes a contribution \(e^{-\lambda _{1}s}y\) to \(\lim _{t\rightarrow \infty }e^{-\lambda _{1}t}Z_{1}(t)\). Points with \(e^{-\lambda _{1}s}y> x\) will contribute more than x to the limit. The number of such points is Poisson distributed with mean

$$\displaystyle{\int _{0}^{\infty }u_{ 1}V _{0}e^{\lambda _{0}s} \frac{\lambda _{1}} {a_{1}}e^{-(\lambda _{1}/a_{1})xe^{\lambda _{1}s} }\,\mathit{ds}.}$$

Changing variables \(t = (\lambda _{1}/a_{1})xe^{\lambda _{1}s}\) now leads to

$$\displaystyle{ \frac{u_{1}V _{0}} {a_{1}} \left (\frac{\lambda _{1}x} {a_{1}} \right )^{-\alpha }\int _{ \lambda _{1}x/a_{1}}^{\infty }t^{\alpha -1}e^{-t}\,\mathit{dt}, }$$
(64)

while in the last section the result was

$$\displaystyle{\frac{u_{1}V _{0}} {a_{1}} \left (\frac{\lambda _{1}x} {a_{1}} \right )^{-\alpha }\int _{ 0}^{\infty }t^{\alpha -1}e^{-t}\,\mathit{dt}.}$$

When \(\lambda _{1}x/a_{1}\) is large the integral in (64) is roughly the value of the integrand at the lower limit, so the quantity above is \(\sim Cx^{-1}\exp (-\lambda _{1}x/a_{1})\) as x →∞.

The last calculation shows that W 1 is roughly the infinite mean random variable V 1 truncated at a 1 ∕λ 1 . However \(\mathit{EV }_{0} = a_{0}/\lambda _{0}\) and in most of our applications \(u_{1}a_{0}/a_{1}\lambda _{0}\) is small, so this truncation is much larger than the typical value of V 1 and EW 1 is not a good measure of the size of W 1.
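Both limits can be seen in simulation. Below is a minimal Gillespie sketch with illustrative rates, in which a mutation is book-kept as adding a type 1 daughter while leaving Z 0 unchanged (consistent with \(\mathit{EZ}_{0}(t) = e^{\lambda _{0}t}\) above); it estimates EW 1 and compares it with u 1 ∕(λ 1 −λ 0 ), up to Monte Carlo error and the \(e^{-(\lambda _{1}-\lambda _{0})t}\) transient visible in (62).

```python
import numpy as np

rng = np.random.default_rng(2)
a0, b0, a1, b1, u1 = 1.0, 0.5, 1.3, 0.5, 0.01   # illustrative rates
lam0, lam1 = a0 - b0, a1 - b1
T = 12.0

def one_run():
    """One Gillespie run from a single type 0; returns e^{-lam1 T} Z_1(T)."""
    z0, z1, t = 1, 0, 0.0
    while z0 > 0 or z1 > 0:
        rate = (a0 + b0 + u1) * z0 + (a1 + b1) * z1
        t += rng.exponential(1.0 / rate)
        if t >= T:
            break
        r = rng.uniform(0.0, rate)
        if r < a0 * z0:                          z0 += 1   # type-0 birth
        elif r < (a0 + b0) * z0:                 z0 -= 1   # type-0 death
        elif r < (a0 + b0 + u1) * z0:            z1 += 1   # mutation 0 -> 1
        elif r < (a0 + b0 + u1) * z0 + a1 * z1:  z1 += 1   # type-1 birth
        else:                                    z1 -= 1   # type-1 death
    return z1 * np.exp(-lam1 * T)

W = np.array([one_run() for _ in range(2000)])
print(W.mean(), u1 / (lam1 - lam0))   # rough agreement with EW_1 = u_1/(lam_1 - lam_0)
```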

11 Luria-Delbruck distributions

We now go back in time 70 years to the work of Luria and Delbruck [30] on bacterial growth. Luria and Delbruck grew a number of bacterial populations until they reached a size of order 10^8, and then exposed them to attack by bacteriophage (anti-bacterial virus). Only bacteria resistant to the phage survived. At the time there were two theories about the emergence of resistance. (i) Mutations from sensitive to resistant were constantly occurring even in the absence of the phage. (ii) A certain proportion of organisms were able to adapt themselves to the new environment, and thereby acquired an immunity that is passed on to their offspring. Under the second scenario, one would expect a Poisson number of survivors, but that was not what was observed.

To make it easier to connect with published work, the normal cells will be called type 1 and mutants type 2. Type 1's will be assumed to grow deterministically. There are two cases to consider: (a) the type 2's grow deterministically, or (b) the type 2's are a Yule process (no deaths). If the type 2's were a branching process, then we would have the model we have been studying. The point of this section is to see the analogous results in a simpler setting and to explore other techniques that have been used.

11.1 Deterministic growth of type 1’s

To determine the distribution of the number of mutants under scenario (i), Luria and Delbruck defined a process that starts at time 0 with one normal cell and no mutants. They assumed that normal cells grow deterministically at rate β 1 so

$$\displaystyle{N(t) = e^{\beta _{1}t}.}$$

Resistant mutants appear at rate μ N(t), so the expected number of mutations at time t is

$$\displaystyle{ m(t) =\int _{ 0}^{t}\mu e^{\beta _{1}s}\,\mathit{ds} = \frac{\mu } {\beta _{1}}(e^{\beta _{1}t} - 1). }$$
(65)

Mutants grow at rate β 2, so the expected number of mutants at time t is

$$\displaystyle{ \mathit{EX}(t) =\int _{ 0}^{t}\mu e^{\beta _{1}s}e^{\beta _{2}(t-s)}\,\mathit{ds} = \left \{\begin{array}{@{}l@{\quad }l@{}} \mu te^{\beta _{1}t} \quad &\beta _{1} =\beta _{2}, \\ \frac{\mu }{\beta _{ 2}-\beta _{1}} (e^{\beta _{2}t} - e^{\beta _{1}t})\quad &\beta _{1}\neq \beta _{2}. \end{array} \right. }$$
(66)

In our cancer applications we have β 1 < β 2 and the formula reduces to the one in (62). However, for the bacterial experiment, it is natural to assume β 1 = β 2, i.e., in the absence of phage the resistance mutation is neutral.

Crump and Hoel [6] analyzed the model using the theory of “filtered Poisson processes”:

$$\displaystyle{X(t) =\sum _{ i=1}^{M(t)}W_{ i}(t -\tau _{i}),}$$

where M(t) is the number of mutations by time t, τ i is the time of the ith mutation, and in the current context W i (s) = exp(β 2 s). Using equation (5.42) of Parzen [33], which is our Lemma 3 given in Section 9, the cumulant generating function

$$\displaystyle{ K(\theta,t) =\log E[e^{\theta X(t)}] =\int _{ 0}^{t}\mu e^{\beta _{1}s}(\psi _{ t-s}(\theta ) - 1)\,\mathit{ds}, }$$
(67)

where \(\psi _{r}(\theta ) = E(e^{\theta W(r)})\), which in this case is just \(\exp (\theta e^{\beta _{2}r})\).

Given a random variable Y

$$\displaystyle{\frac{d} {d\theta }\log E(e^{\theta Y }) = \frac{E(\mathit{Ye}^{\theta Y })} {E(e^{\theta Y })} = \mathit{EY }\quad \mbox{ when $\theta = 0$}.}$$

Differentiating again

$$\displaystyle\begin{array}{rcl} \frac{d^{2}} {d\theta ^{2}}\log E(e^{\theta Y })& =& \frac{E(Y ^{2}e^{\theta Y })E(e^{\theta Y }) - (E\mathit{Ye}^{\theta Y })^{2}} {(\mathit{Ee}^{\theta Y })^{2}} {}\\ & =& E(Y ^{2}) - (\mathit{EY })^{2}\quad \mbox{ when $\theta = 0$}. {}\\ \end{array}$$

In general,

$$\displaystyle{\left.\frac{d^{n}} {d\theta ^{n}}\log E(e^{\theta Y })\right \vert _{\theta =0} =\kappa _{n}}$$

where κ n is the nth cumulant. Differentiating n times we have

$$\displaystyle{\kappa _{n}(t) =\int _{ 0}^{t}\mu e^{\beta _{1}s}\mathit{EW }^{n}(t - s)\,\mathit{ds}.}$$

When n = 1 this is the formula for the mean given in (66). When n = 2,

$$\displaystyle\begin{array}{rcl} \kappa _{2}(t)& =& \int _{0}^{t}\mu e^{\beta _{1}s}e^{2\beta _{2}(t-s)}\,\mathit{ds} {}\\ & =& \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{\mu }{\beta _{ 1}-2\beta _{2}} (e^{\beta _{1}t} - e^{2\beta _{2}t})\quad &\beta _{1}\neq 2\beta _{2}, \\ \mu te^{2\beta _{2}t} \quad &\beta _{1} = 2\beta _{2}. \end{array} \right.{}\\ \end{array}$$

According to Zheng’s (1999) survey [37], very little is known about the distribution of X(t). However, the proof of Theorem 4 applies easily to this case. Since the type 1 process is deterministic, a mutation at time s contributes \(e^{-\beta _{2}s}\) to \(e^{-\beta _{2}t}X(t)\). In order for this contribution to be > x we need s to be < −(1∕β 2)logx, so using the formula in (65) the number is Poisson with mean

$$\displaystyle{ \frac{\mu } {\beta _{1}}(e^{-(\beta _{1}/\beta _{2})\log x} - 1) = \frac{\mu } {\beta _{1}}(x^{-\beta _{1}/\beta _{2} } - 1)\qquad \mbox{ for $x \leq 1$.}}$$

This is the Poisson process for our stable law with the points > 1 removed.
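The filtered Poisson representation also makes the model easy to simulate: condition on the Poisson number of mutations and place the mutation times by inverting their distribution function. A sketch (μ, β 1 = β 2 = β, and t are arbitrary choices) that checks the mean in (66) and κ 2 (t):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, beta, t = 0.3, 1.0, 5.0     # beta_1 = beta_2 = beta (neutral mutation); illustrative
n = 50_000

m_t = (mu / beta) * (np.exp(beta * t) - 1)       # (65): mean number of mutations
N = rng.poisson(m_t, size=n)
X = np.zeros(n)
for i in range(n):
    u = rng.uniform(size=N[i])
    s = np.log1p(u * (np.exp(beta * t) - 1)) / beta   # mutation times, density prop. to e^{beta s}
    X[i] = np.exp(beta * (t - s)).sum()               # each clone grows deterministically

print(X.mean(), mu * t * np.exp(beta * t))                           # (66) with beta_1 = beta_2
print(X.var(), (mu / beta) * (np.exp(2 * beta * t) - np.exp(beta * t)))  # kappa_2(t)
```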

11.2 Lea-Coulson [28] formulation

As in the previous example, normal cells grow deterministically \(N(t) = e^{\beta _{1}t}\) and mutants appear at rate μ N(t), but now each mutant starts a Yule process having birth rate β 2. We begin by deriving an equation for the generating function G(z, t) = E[z X(t)]. Since jumps from k → k + 1 occur at rate \(\beta _{2}k +\mu e^{\beta _{1}t}\) at time t,

$$\displaystyle{\frac{\partial G} {\partial t} =\sum _{k}(\beta _{2}k +\mu e^{\beta _{1}t})(z^{k+1} - z^{k})P(X(t) = k).}$$

The second term \(=\mu e^{\beta _{1}t}(z - 1)G\). To deal with the first one, we rewrite it as

$$\displaystyle{\beta _{2}z(z - 1)\sum _{k\geq 1}kz^{k-1}P(X(t) = k) =\beta _{ 2}z(z - 1)\frac{\partial G} {\partial z},}$$

so we have

$$\displaystyle{ \frac{\partial G} {\partial t} =\beta _{2}z(z - 1)\frac{\partial G} {\partial z} +\mu e^{\beta _{1}t}(z - 1)G. }$$
(68)

For our purposes it is more convenient to use the cumulant generating function \(K(\psi,t) =\log (E[e^{\psi X(t)}]) =\log G(e^{\psi })\). The chain rule tells us that

$$\displaystyle\begin{array}{rcl} & & \frac{\partial K} {\partial t} = \frac{1} {G(e^{\psi })} \frac{\partial G} {\partial t} (e^{\psi }), {}\\ & & \frac{\partial K} {\partial \psi } = \frac{1} {G(e^{\psi })} \frac{\partial G} {\partial z} (e^{\psi })e^{\psi }, {}\\ \end{array}$$

so we have

$$\displaystyle{ \frac{\partial K} {\partial t} =\beta _{2}(e^{\psi } - 1)\frac{\partial K} {\partial \psi } +\mu e^{\beta _{1}t}(e^{\psi } - 1), }$$
(69)

which agrees with (50) in Zheng [37] and pages 125–129 in Bailey [4]. Inserting \(K(\psi,t) =\sum _{j\geq 1}\kappa _{j}(t)\psi ^{j}/j!\) into (69) and equating terms we have

$$\displaystyle\begin{array}{rcl} \kappa _{1}'(t)& =& \beta _{2}\kappa _{1}(t) +\mu e^{\beta _{1}t}, {}\\ \kappa _{2}'(t)& =& \beta _{2}\kappa _{1}(t) + 2\beta _{2}\kappa _{2}(t) +\mu e^{\beta _{1}t}. {}\\ \end{array}$$

The formula for the mean is the same as in (66). When \(\beta _{1}\neq \beta _{2}\) and \(\beta _{1}\neq 2\beta _{2}\)

$$\displaystyle{\mbox{ var}\,[X(t)] = \frac{\mu e^{\beta _{2}t}\left [\beta _{1}\left (1 + e^{(\beta _{1}-\beta _{2})t} - 2e^{\beta _{2}t}\right ) + 2\beta _{2}(e^{\beta _{2}t} - 1)\right ]} {(\beta _{1} -\beta _{2})(\beta _{1} - 2\beta _{2})}.}$$

In the exceptional cases

$$\displaystyle{\mbox{ var}\,[X(t)] = \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{\mu }{\beta _{ 2}} e^{\beta _{2}t}(1 - e^{\beta _{2}t} + 2\beta _{2}te^{\beta _{2}t})\quad &\beta _{1} = 2\beta _{2}, \\ \frac{2\mu } {\beta _{1}} e^{\beta _{1}t}(e^{\beta _{1}t} - 1) -\mu te^{\beta _{1}t} \quad &\beta _{1} =\beta _{2}. \end{array} \right.}$$

In the special case β 1 = β 2 these go back to Bailey. Zheng claims credit for the general formulas, see his (52) and (53). Using this approach we could derive exact formulas for the variance of our process rather than simply the bounds on the second moment given in Section 10.
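The variance formulas can be verified by integrating the cumulant equations numerically; a sketch with arbitrary rates satisfying \(\beta _{1}\neq \beta _{2}\) and \(\beta _{1}\neq 2\beta _{2}\):

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, b1, b2 = 0.2, 0.3, 0.7     # illustrative beta_1, beta_2
t_end = 6.0

def rhs(t, k):
    k1, k2 = k
    return [b2 * k1 + mu * np.exp(b1 * t),
            b2 * k1 + 2 * b2 * k2 + mu * np.exp(b1 * t)]

sol = solve_ivp(rhs, [0.0, t_end], [0.0, 0.0], rtol=1e-10, atol=1e-12)
kappa2 = sol.y[1, -1]

num = mu * np.exp(b2 * t_end) * (
    b1 * (1 + np.exp((b1 - b2) * t_end) - 2 * np.exp(b2 * t_end))
    + 2 * b2 * (np.exp(b2 * t_end) - 1))
print(kappa2, num / ((b1 - b2) * (b1 - 2 * b2)))   # both ~1584.6 for these rates
```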

Generating function.

If β 2 > β 1, then we can analyze the limiting behavior of \(e^{-\beta _{2}t}X(t)\) as we did in Section 9. For this reason, we restrict our attention now to the case \(\beta _{1} =\beta _{2} =\beta\), in which case (68) becomes

$$\displaystyle{\frac{\partial G} {\partial t} =\beta z(z - 1)\frac{\partial G} {\partial z} +\mu e^{\beta t}(z - 1)G,}$$

with initial condition G(z, 0) = 1. Alternatively, using (65) we can use the boundary condition

$$\displaystyle{G(0,t) =\exp (-m(t)).}$$

Changing variables \(\theta = (\mu /\beta )e^{\beta t}\), so \(F(z,\theta ) = G(z,\beta ^{-1}\log (\beta \theta /\mu ))\), one arrives at

$$\displaystyle{\frac{\partial F} {\partial \theta } = \frac{z(z - 1)} {\theta } \frac{\partial F} {\partial z} + (z - 1)F.}$$

One solution with F(0, θ) = e −θ , called the Lea-Coulson p.g.f., can be written as

$$\displaystyle\begin{array}{rcl} & & F(z,\theta ) = (1 - z)^{\theta (1-z)/z} =\exp (\theta (f(z) - 1)), \\ & & \text{where}\quad f(z) = 1 + \left (\frac{1 - z} {z} \right )\log (1 - z). {}\end{array}$$
(70)

If we let \(\theta (t) = (\mu /\beta )e^{\beta _{1}t}\), then in terms of G the boundary condition is

$$\displaystyle{G(0,t) = F(0,\theta (t)) = e^{-\theta (t)}.}$$

In the literature it is often remarked that this is not correct due to the value at t = 0. However, since \(\theta (t) =\int _{ -\infty }^{t}\mu e^{\beta _{1}s}\,\mathit{ds}\), this corresponds to our process on (−∞, ∞).

Define the LD(θ, ϕ) distribution by its generating function

$$\displaystyle{ G(z,\theta,\phi ) =\exp \left (\theta \left [\frac{1} {z} - 1\right ]\log (1 -\phi z)\right ) = (1 -\phi z)^{\theta (1-z)/z}. }$$
(71)

If \(\theta (t) = (\mu /\beta )e^{\beta t}\) and \(\phi (t) = 1 - e^{-\beta t}\), then this is the exact generating function for X(t), which first appeared as (30a) in Armitage [2] where it was attributed to Bartlett. If instead we take ϕ(t) = 1 it is the Lea-Coulson p.g.f.

We will now derive these results from Parzen’s formula (67). Recalling the formula for the generating function of the Yule process given in (10)

$$\displaystyle{\log E(z^{X(t)}) = -\int _{ 0}^{t}\mu e^{\beta s} \frac{1 - z} {1 - z + \mathit{ze}^{-\beta (t-s)}}\,\mathit{ds}.}$$

Changing variables \(x = \mathit{ze}^{-\beta (t-s)}\) so that \(\mathit{dx} =\beta \mathit{ze}^{-\beta (t-s)}\,\mathit{ds}\) and \(e^{\beta s}\mathit{ds} = (e^{\beta t}/\beta z)\,\mathit{dx}\), the above

$$\displaystyle\begin{array}{rcl} & & = -\frac{\mu e^{\beta t}} {\beta z} \int _{\mathit{ze}^{-\beta t}}^{z} \frac{1 - z} {1 - z + x}\,\mathit{dx} {}\\ & & = \frac{\mu e^{\beta t}} {\beta } \left [\frac{1 - z} {z} \right ]\log (1 - z + \mathit{ze}^{-\beta t}), {}\\ \end{array}$$

so changing notation we have (71):

$$\displaystyle{G(z,\theta,\phi ) =\exp \left (\theta \left [\frac{1 - z} {z} \right ]\log (1 -\phi z)\right )}$$

If we replace 0 by −∞ in the lower limit, then the last term becomes log(1 − z) instead.

The Lea-Coulson model is a special case of our branching process conditioned on V 0, so Theorem 4 implies that when β 1 < β 2 the tail of the distribution for V 1 has

$$\displaystyle{P(V _{1}> x\vert V _{0}) \sim c_{\mu,1}u_{1}V _{0}x^{-\beta _{1}/\beta _{2} }.}$$

From this it should not be surprising that if X = LD(θ, 1) then

$$\displaystyle{P(X> n) \sim \theta /n\quad \text{and}\quad P(X = n) \sim \theta /n^{2}.}$$

A nice proof with references to the earlier contributions can be found in Zheng [38].
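These asymptotics are easy to see numerically. Expanding (70) gives \(f(z) =\sum _{k\geq 1}z^{k}/(k(k+1))\), so LD(θ,1) is compound Poisson with rate θ and jump distribution \(q_{k} = 1/(k(k+1))\), and the standard compound Poisson recursion yields the point probabilities. A sketch, with θ = 2 an arbitrary choice:

```python
import numpy as np

theta, N = 2.0, 5000
p = np.zeros(N + 1)
p[0] = np.exp(-theta)
for n in range(1, N + 1):
    j = np.arange(n)
    # compound-Poisson recursion with jump pmf q_k = 1/(k(k+1)):
    # p_n = (theta/n) sum_{k=1}^n k q_k p_{n-k} = (theta/n) sum_j p_j/(n-j+1)
    p[n] = (theta / n) * np.sum(p[j] / (n - j + 1))

print(p.sum())                          # close to 1 (mass beyond N is ~ theta/N)
for n in [100, 1000, 4000]:
    print(n, p[n], theta / n ** 2)      # P(X = n) ~ theta/n^2
print(p[1001:].sum(), theta / 1000)     # rough check of P(X > n) ~ theta/n (series truncated)
```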

12 Number of type 1’s at time T M

In Section 6 we studied the probability that type 1 cells were present when the number of type 0’s reached size M. Since the 1’s may be tumor cells with more aggressive growth or resistant to therapy, it is also important to understand the number that are present.

As in Section 6, Iwasa, Nowak, and Michor [24] study Z 1(T M ) by decomposing according to the size of the process when mutations happen. Their formula (4) for the generating function of the number of resistant cancer cells (written here in our notation) is:

$$\displaystyle{ G(\xi ) =\exp \left (- \frac{u} {1 - b_{0}/a_{0}}\sum _{x=1}^{M-1}(1 - g_{ x}(\xi ))\right ) }$$
(72)

Here g x (ξ) is the generating function of the branching process \(\tilde{Z}_{1}(t)\) evaluated at t = (1∕λ 0 )log(M∕x), which is the time the type 0's need to grow from size x to size M.

Here we will instead note that it is immediate from the proof of Lemma 3 that

$$\displaystyle{ E\exp (-\theta Z_{1}^{{\ast}}(T_{ M})) \approx \exp \left (-u_{1}\int _{-\infty }^{0}Me^{\lambda _{0}s}(1 -\tilde{\phi }_{ -s}(\theta ))\,\mathit{ds}\right ) }$$
(73)

where \(\tilde{\phi }\) is the Laplace transform of the distribution of \(\tilde{Z}_{1}(t)\), the branching process starting from a single 1 (and no mutations from 0 to 1). To see this, note that in the previous formula the mutation rate at time s was \(u_{1}V _{0}e^{\lambda _{0}s}\) for s ≤ t, while now it is \(u_{1}\mathit{Me}^{\lambda _{0}s}\) for s ≤ 0.

From (73), it is immediate that

Theorem 5.

If M →∞ and Mu 1 →ν ∈ (0,∞), Z 1 (T M ) converges in distribution to U 1 with Laplace transform

$$\displaystyle{E(\exp (-\theta U_{1})) =\exp \left (-\nu \int _{0}^{\infty }e^{-\lambda _{0}t}(1 -\tilde{\phi }_{ t}(\theta ))\,\mathit{dt}\right )}$$

To explain the assumption that Mu 1 → ν ∈ (0, ∞), note that this implies that resistance is neither certain nor impossible. More concretely, it is estimated in chronic myeloid leukemia [31] that M = 2.5 × 10^5 while u = 4 × 10^{-7}, so \(\mathit{Mu} \approx 0.1\).

Since (11) implies

$$\displaystyle{1 -\tilde{\phi }_{t}(\theta ) = \frac{\lambda _{1}(1 - e^{-\theta })} {a_{1}(1 - e^{-\theta }) - e^{-\lambda _{1}t}(b_{1} - a_{1}e^{-\theta })}}$$

the Laplace transform of U 1 is not pretty. However, as we will now show, U 1 has a power law tail, a result that [24] demonstrated by simulation.

To do this we note that if there is a mutation before T M − (1∕λ 1)logy, then it is likely that {U 1 > y}. The expected number of such mutations is

$$\displaystyle{\mathit{Mu}_{1}\int _{-\infty }^{-(1/\lambda _{1})\log y}e^{\lambda _{0}s}\,\mathit{ds} = \mathit{Mu}_{ 1} \cdot \frac{1} {\lambda _{0}} e^{-(\lambda _{0}/\lambda _{1})\log y} = (1/\lambda _{ 0})\nu y^{-\alpha }.}$$

As in Section 9 one can prove this rigorously by looking at the asymptotics for the Laplace transform as θ → 0. Changing variables t = −t(θ) + x where \(t(\theta ) = (1/\lambda _{1})\log (1 - e^{-\theta })\)

$$\displaystyle\begin{array}{rcl} & & \int _{0}^{\infty }\mathit{dt}\,e^{-\lambda _{0}t} \frac{\lambda _{1}(1 - e^{-\theta })} {a_{1}(1 - e^{-\theta }) - e^{-\lambda _{1}t}(b_{1} - a_{1}e^{-\theta })} {}\\ & & = (1 - e^{-\theta })^{\alpha }\int _{ t(\theta )}^{\infty }e^{-\lambda _{0}x} \frac{\lambda _{1}} {a_{1} - e^{-\lambda _{1}x}(b_{1} - a_{1}e^{-\theta })}\,\mathit{dx} {}\\ & & \sim \theta ^{\alpha }\int _{-\infty }^{\infty }e^{-\lambda _{0}x} \frac{\lambda _{1}} {a_{1} - e^{-\lambda _{1}x}(b_{1} - a_{1})}\,\mathit{dx}\quad \mbox{ as $\theta \rightarrow 0$.} {}\\ \end{array}$$
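A numerical confirmation of the last asymptotic, with made-up rates (any a 1 > b 1 > 0 and λ 0 < λ 1 will do):

```python
import numpy as np
from scipy.integrate import quad

a1, b1, lam0 = 1.0, 0.4, 0.3      # illustrative; lam1 = 0.6, alpha = 0.5
lam1 = a1 - b1
alpha = lam0 / lam1

def I(theta):
    """integral of e^{-lam0 t}(1 - phi_t(theta)) using the formula from (11)"""
    f = lambda t: np.exp(-lam0 * t) * lam1 * (1 - np.exp(-theta)) / (
        a1 * (1 - np.exp(-theta)) - np.exp(-lam1 * t) * (b1 - a1 * np.exp(-theta)))
    return quad(f, 0, np.inf)[0]

K = quad(lambda x: np.exp(-lam0 * x) * lam1 / (a1 - np.exp(-lam1 * x) * (b1 - a1)),
         -np.inf, np.inf)[0]

for theta in [1e-2, 1e-3, 1e-4]:
    print(theta, I(theta) / theta ** alpha, K)   # the ratio approaches K
```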

The probability of having a type 1 at time T M is, by (37),

$$\displaystyle{P = 1 -\exp \left (- \frac{\mathit{Mu}_{1}F} {1 - b_{0}/a_{0}}\right )}$$

where F is given in (36). Using (72), we see that the mean of the number of resistant cells conditional on there being at least 1 is

$$\displaystyle{\bar{Y } = \frac{G'(1)} {P} = \frac{u} {P(1 - b_{0}/a_{0})}\sum _{x=1}^{M-1}\left (\frac{M} {x} \right )^{\bar{\alpha }},}$$

where using notation from Section 6, \(\bar{\alpha }=\lambda _{1}/\lambda _{0} = 1/\alpha\). As observed in [24] this formula is not accurate in the case of advantageous resistant mutations because the dominant contribution comes from mutations when the tumor is small.

When mutations do not change the birth or death rates, i.e., a 0 = a 1 = a and b 0 = b 1 = b, then \(\bar{\alpha }= 1\) and the answer becomes

$$\displaystyle{ \frac{\mathit{Mu}_{1}\log M} {P(1 - b/a)}.}$$

If we suppose in addition that Mu 1 is small, then (38) implies

$$\displaystyle{\frac{\mathit{Mu}_{1}} {P} \approx \frac{\mathit{Mu}_{1}} {1 -\exp \left [-\mathit{Mu}_{1}\frac{a} {b}\log \left ( \frac{a} {a - b}\right )\right ]} \approx \frac{b} {a}\left [\log \left ( \frac{a} {a - b}\right )\right ]^{-1}}$$

and we have

$$\displaystyle{\bar{Y } \approx \frac{\log M} {(a/b - 1)\log (a/(a - b))},}$$

which is (12) in [24] and (2) in [36]. Again the logM comes from the unlikely event of resistance mutations that occur when the tumor is small.

13 Growth of Z k (t)

The formulas for the constants in the next limit theorem are ugly, but the proof is very easy. The arguments in Section 9 and induction give the desired result. Let \(\alpha _{k} =\lambda _{k-1}/\lambda _{k}\). Generalizing (53) we define

$$\displaystyle{ c_{\mu,k} = \frac{1} {a_{k}}\left (\frac{a_{k}} {\lambda _{k}} \right )^{\alpha _{k}}\Gamma (\alpha _{k})\qquad c_{h,k} = \Gamma (1 -\alpha _{k})c_{\mu,k} }$$
(74)

Let μ 1 = u 1 and inductively define for k ≥ 2

$$\displaystyle\begin{array}{rcl} c_{\theta,k} = c_{\theta,k-1}c_{h,k}^{\lambda _{0}/\lambda _{k-1} }& &{}\end{array}$$
(75)
$$\displaystyle\begin{array}{rcl} \mu _{k} =\mu _{k-1}u_{k}^{\lambda _{0}/\lambda _{k-1} } =\prod _{ j=1}^{k}u_{ j}^{\lambda _{0}/\lambda _{j-1} }.& &{}\end{array}$$
(76)

Let \(\mathcal{F}_{\infty }^{k-1}\) be the σ-field generated by Z j (t), j ≤ k − 1, t ≥ 0.

Theorem 6.

If \(Z_{0}^{{\ast}}(t) = V _{0}e^{\lambda _{0}t}\) for t ∈ (−∞,∞), then as t →∞,

$$\displaystyle{e^{-\lambda _{k}t}Z_{ k}^{{\ast}}(t) \rightarrow V _{ k}\quad \text{a.s.}}$$
  1. (i)

    \((V _{k}\vert \mathcal{F}_{\infty }^{k-1})\) is the sum of the points in a Poisson process with mean measure

    $$\displaystyle{\mu (x,\infty ) = c_{\mu,k}u_{k}V _{k-1}x^{-\alpha _{k} }.}$$
  2. (ii)

    \(E(e^{-\theta V _{k}}\vert \mathcal{F}_{ \infty }^{k-1}) =\exp (-c_{ h,k}u_{k}V _{k-1}\theta ^{\alpha _{k}})\) .

  3. (iii)

    \(P(V _{k}> x\vert \mathcal{F}_{\infty }^{k-1}) \sim x^{-\alpha _{k}}c_{\mu,k}u_{k}V _{k-1}/\Gamma (1 -\alpha _{k})\)

  4. (iv)

    If V 0 is exponential(λ 0 ∕a 0 ), then

    $$\displaystyle{ \mathit{Ee}^{-\theta V _{k} } = \left (1 + c_{\theta,k}\mu _{k}\theta ^{\lambda _{0}/\lambda _{k} }\right )^{-1} }$$
    (77)

    and (v) \(P(V _{k}> x) \sim x^{-\lambda _{0}/\lambda _{k}}c_{\theta,k}\mu _{k}/\Gamma (1 -\lambda _{0}/\lambda _{k})\) .

Proof.

We will prove this by induction. When k = 1, this follows from Theorem 4. Suppose now that k ≥ 2. Let \(\mathcal{F}_{t}^{k-1}\) be the σ-field generated by Z j (s) for j ≤ k − 1 and s ≤ t. Let \(\tilde{Z}_{k}(t)\) be the number of type k’s at time t in the branching process with \(\tilde{Z}_{k}(0) = 1\) and \(\tilde{Z}_{j}(0) = 0\) for j ≤ k − 1, and let \(\tilde{\phi }_{k,t}(\theta ) = \mathit{Ee}^{-\theta \tilde{Z}_{k}(t)}\). The reasoning that led to Lemma 3 implies

$$\displaystyle{E(e^{-\theta Z_{k}^{{\ast}}(t) }\vert \mathcal{F}_{t}^{k-1}) =\exp \left (-u_{ k}\int _{-\infty }^{t}Z_{ k-1}^{{\ast}}(s)(1 -\tilde{\phi }_{ k,t-s}(\theta ))\,\mathit{ds}\right )}$$

Replacing \(Z_{k-1}^{{\ast}}(s)\) by \(e^{\lambda _{k-1}s}V _{ k-1}\), θ by \(\theta e^{-\lambda _{k}t}\), and letting t →∞,

$$\displaystyle{ E\left (e^{-\theta V _{k} }\vert \mathcal{F}_{\infty }^{k-1}\right ) =\lim _{ t\rightarrow \infty }\exp \left (-u_{k}V _{k-1}\int _{-\infty }^{t}e^{\lambda _{k-1}s}(1 -\tilde{\phi }_{ k,t-s}(\theta e^{-\lambda _{k}t}))\,\mathit{ds}\right ) }$$
(78)

At this point the calculation is the same as the one in the proof of Theorem 4 with 1 and 0 replaced by k and k − 1 respectively, so it follows from the proofs of (ii) and (iii) that

$$\displaystyle\begin{array}{rcl} E\left (e^{-\theta V _{k} }\vert \mathcal{F}_{\infty }^{k-1}\right ) =\exp \left (-c_{ h,k}u_{k}V _{k-1}\theta ^{\alpha _{k} }\right )& &{}\end{array}$$
(79)
$$\displaystyle\begin{array}{rcl} P(V _{k}> x\vert \mathcal{F}_{\infty }^{k-1}) \sim x^{-\alpha _{k} }c_{\mu,k}u_{k}V _{k-1}/\Gamma (1 -\alpha _{k}).& &{}\end{array}$$
(80)

Taking expected value of (79) and using the result for k − 1

$$\displaystyle\begin{array}{rcl} \mathit{Ee}^{-\theta V _{k} }& =& \left (1 + c_{\theta,k-1}\mu _{k-1}(c_{h,k}u_{k}\theta ^{\lambda _{k-1}/\lambda _{k} })^{\lambda _{0}/\lambda _{k-1}}\right )^{-1} {}\\ & =& \left (1 + c_{\theta,k}\mu _{k}\theta ^{\lambda _{0}/\lambda _{k} }\right )^{-1} {}\\ \end{array}$$

by (75) and (76), which proves (77). Part (v) now follows from Lemma 4. □ 

14 Transitions between waves

While the formulas in Theorem 6 are complicated, there is a simple underlying conceptual picture. For simplicity, consider the special case in which all the u i  = u and let L = log(1∕u).

Theorem 7.

Let \(\beta _{k} =\sum _{ j=0}^{k-1}1/\lambda _{j}\) . As u → 0,

$$\displaystyle{ \frac{1} {L}\log ^{+}Z_{ k}(\mathit{Lt}) \rightarrow z_{k}(t) =\lambda _{k}(t -\beta _{k})^{+}.}$$

Here x + = max{0, x} takes care of the fact that log(0) = −∞. A picture tells the story much better than formulas:

In words, Z k−1(Lt) hits 1∕u at time \(\approx \beta _{k}\). At this point the first type k is born, and the type k population grows like \(e^{\lambda _{k}t}\), i.e., its logarithm grows like λ k t, and hence

$$\displaystyle{ \beta _{k+1} -\beta _{k} = \frac{1} {\lambda _{k}}. }$$
(81)

Note that the process is accelerating, i.e., the increments between the birth times for successive waves are decreasing (Fig. 3).

Fig. 3 Growth of (1∕L)log + (Z k (Lt)) for L = log(1∕u), showing the dominant type in the population as a function of t.

Theorem 7 makes it easy to obtain results for the time \(T_{k} =\inf \{ t \geq 0: Z_{k}(t)> Z_{j}(t)\) for all \(j\neq k\}\) at which the type k's first become dominant in the population. The type k's overtake the type k − 1's at the time \(t_{k}>\beta _{k}\) when \(\lambda _{k}(t -\beta _{k}) =\lambda _{k-1}(t -\beta _{k-1})\) or

$$\displaystyle{(\lambda _{k} -\lambda _{k-1})t_{k} =\lambda _{k}\beta _{k} -\lambda _{k-1}\beta _{k-1}.}$$

In the special case \(\lambda _{k} =\lambda _{0} + \mathit{kb}\) this becomes

$$\displaystyle{\mathit{bt}_{k} = b\beta _{k} +\lambda _{k-1}(\beta _{k} -\beta _{k-1}),}$$

so using (81)

$$\displaystyle{t_{k} =\beta _{k} + b^{-1}.}$$

Note that this is a constant time after the time the first type k appears:

Theorem 8.

If \(u_{j} \equiv u\) and \(\lambda _{k} =\lambda _{0} + kb\) then \(T_{k}/L \rightarrow \beta _{k} + b^{-1}\)
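A small sketch (with arbitrary values of λ 0 , b, and u) tabulating the birth times β k and the takeover times of Theorems 7 and 8:

```python
import numpy as np

lam0, b, u = 0.2, 0.1, 1e-6        # illustrative; lambda_k = lam0 + k b
L = np.log(1.0 / u)

beta = [0.0]
for k in range(1, 6):
    beta.append(beta[-1] + 1.0 / (lam0 + (k - 1) * b))   # (81)

# on the (1/L) log scale type k is born at beta_k and dominates from beta_k + 1/b;
# in real time T_k ~ L (beta_k + 1/b) by Theorem 8
for k in range(1, 6):
    print(k, round(beta[k], 3), round(beta[k] + 1.0 / b, 3),
          round(L * (beta[k] + 1.0 / b), 1))
```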

15 Time to the first type k, k ≥ 2

Our next topic is the waiting time for the first type k + 1:

$$\displaystyle{P(\tau _{k+1}> t\vert \mathcal{F}_{t}^{k}) =\exp \left (-\int _{ 0}^{t}u_{ k+1}Z_{k}^{{\ast}}(s)\,\mathit{ds}\right ) \approx \exp (-u_{ k+1}V _{k}e^{\lambda _{k}t}/\lambda _{ k}).}$$

Taking expected value and using Theorem 6

$$\displaystyle{P(\tau _{k+1}> t\vert \Omega _{\infty }^{0}) = \left (1 + c_{\theta,k}\mu _{k}(u_{k+1}e^{\lambda _{k}t}/\lambda _{ k})^{\lambda _{0}/\lambda _{k} }\right )^{-1}.}$$

Using the definition of μ k+1 the median t 1∕2 k+1 is defined by

$$\displaystyle{c_{\theta,k}\mu _{k+1}\exp (\lambda _{0}t_{1/2}^{k+1})\lambda _{ k}^{-\lambda _{0}/\lambda _{k} } = 1,}$$

and solving gives

$$\displaystyle{ t_{1/2}^{k+1} = \frac{1} {\lambda _{0}} \log \left ( \frac{\lambda _{k}^{\lambda _{0}/\lambda _{k}}} {c_{\theta,k}\mu _{k+1}}\right ) = \frac{1} {\lambda _{k}}\log (\lambda _{k}) -\frac{1} {\lambda _{0}} \log \left (c_{\theta,k}\mu _{k+1}\right ). }$$
(82)

As in the case of τ 1,

$$\displaystyle{ P(\tau _{k+1}> t_{1/2}^{k+1} + x/\lambda _{ 0}\vert \Omega _{\infty }^{0}) \approx (1 + e^{x})^{-1}. }$$
(83)

Again the result for the median \(s_{1/2}^{k+1}\) of the time σ k+1 of the first mutation to type k + 1 with a family that does not die out can be found by replacing u k+1 by \(u_{k+1}\lambda _{k+1}/a_{k+1}\). Using \(\mu _{k+1} =\mu _{k}u_{k+1}^{\lambda _{0}/\lambda _{k}}\) from (76), doing this gives

$$\displaystyle{ s_{1/2}^{k+1} = \frac{1} {\lambda _{k}}\log \left ( \frac{\lambda _{k}a_{k+1}} {u_{k+1}\lambda _{k+1}}\right ) -\frac{1} {\lambda _{0}} \log (c_{\theta,k}\mu _{k}). }$$
(84)

As in the case of τ k+1,

$$\displaystyle{ P(\sigma _{k+1}> s_{1/2}^{k+1} + x/\lambda _{ 0}\vert \Omega _{\infty }^{0}) \approx (1 + e^{x})^{-1}. }$$
(85)

To simplify and to relate our result to (S5) of Bozic et al. [5] we will look at the difference

$$\displaystyle{s_{1/2}^{k+1} - s_{ 1/2}^{k} = \frac{1} {\lambda _{k}}\log \left ( \frac{\lambda _{k}a_{k+1}} {u_{k+1}\lambda _{k+1}}\right ) - \frac{1} {\lambda _{k-1}}\log \left (\frac{\lambda _{k-1}a_{k}} {u_{k}\lambda _{k}} \right ) -\frac{1} {\lambda _{0}} \log \left (c_{h,k}^{\lambda _{0}/\lambda _{k-1} }u_{k}^{\lambda _{0}/\lambda _{k-1} }\right ),}$$

where in the last term we have used (75) and (76) to evaluate \(c_{\theta,k}/c_{\theta,k-1}\) and \(\mu _{k}/\mu _{k-1}\). Recalling the formula

$$\displaystyle{c_{h,k} = \frac{1} {a_{k}}\left (\frac{a_{k}} {\lambda _{k}} \right )^{\alpha _{k}}\Gamma (\alpha _{k})\Gamma (1 -\alpha _{k})\quad \text{with}\quad \alpha _{k} =\lambda _{k-1}/\lambda _{k},}$$

given in (74) we have

$$\displaystyle{ s_{1/2}^{k+1} - s_{ 1/2}^{k} = \frac{1} {\lambda _{k}}\log \left ( \frac{\lambda _{k}^{2}a_{k+1}} {a_{k}u_{k+1}\lambda _{k+1}}\right ) - \frac{1} {\lambda _{k-1}}\log (\alpha _{k}\Gamma (\alpha _{k})\Gamma (1 -\alpha _{k})) }$$
(86)

15.1 Relationship to Bozic et al. [5]

Bozic et al. [5] investigated σ k in order to obtain insights into the accumulation of passenger mutations. Their model takes place in discrete time, which facilitates simulation, and their types are numbered starting from 1 rather than from 0. At each time step, a cell of type j ≥ 1 either divides into two cells, which occurs with probability b j , or dies with probability d j , where \(d_{j} = (1 - s)^{j}/2\) and \(b_{j} = 1 - d_{j}\). In addition, at every division, the new daughter cell can acquire an additional driver mutation with probability u. It is unfortunate that their birth probability b j is our death rate for type j cells. We will not resolve this conflict because we want to preserve their notation in order to make it easy to compare with the results in the paper.

They use τ j to denote \(\sigma _{j+1} -\sigma _{j}\). In (S5) of their supplementary materials,

$$\displaystyle{ E(\sigma _{j+1} -\sigma _{j}) = \frac{T\log \left [ \frac{1-q_{j}} {ub_{j}(1-q_{j+1})}\left (1 - \frac{1} {b_{j}(2-u)}\right )\right ]} {\log [b_{j}(2 - u)]}, }$$
(87)

where q j is the probability that a type j mutation dies out. In quoting their result, we have dropped the 1+ inside the log in their formula, since it disappears in their later calculations and this makes their result easier to relate to ours.

When the differences in notation are taken into account, (29) agrees with the j = 1 case of (87). The death and birth probabilities in the model of Bozic et al. [5] are d 1 = (1 − s)∕2 and \(b_{1} = 1 - d_{1} = (1 + s)/2\), so \(\log (2b_{1}) \approx \log (1 + s) \approx s\), and \(q_{j} \approx (1 -\mathit{js})/(1 + \mathit{js}) \approx 1 - 2\mathit{js}\). Taking into account the fact that mutations occur only in the new daughter cell at birth, we have u 1 = b 1 u, so when j = 1 (87) becomes

$$\displaystyle{E(\sigma _{2} -\sigma _{1}) \approx \frac{1} {s}\log \left ( \frac{s^{2}} {u_{1} \cdot 2s}\right ).}$$

Setting λ j  = (j + 1)s, and a i  = b i+1 in our continuous time branching process, we have \(a_{1}/a_{0} \approx 1\) and this agrees with (29).

Example 4.

To match a choice of parameters studied in Bozic et al. [5], we will take u = 10^{-5} and s = 0.01, so \(u_{i} = b_{i}u \approx 5 \times 10^{-6}\), and

$$\displaystyle{s_{1/2}^{1} \approx \frac{1} {0.01}\log \left ( \frac{10^{-4}} {5 \times 10^{-6} \cdot 0.02}\right ) = 100\log (1000) = 690.77.}$$

Note that by (31) the fluctuations in σ 1 are of order 1∕λ 0 = 100.

To connect with reality, we note that for colon cancer the average time between cell divisions is T = 4 days, so 690.77 translates into 7.57 years. In contrast, Bozic et al. [5] compute a waiting time of 8.3 years on page 18546. This difference is due to the fact that the formula they use ((1) on the cited page) employs the approximation 1∕2 ≈ 1.

Turning to the later waves, we note that:

  1. (i)

    the first “main” term in (86) corresponds to the answer in (87).

  2. (ii)

    by (51), \(\alpha _{k}\Gamma (\alpha _{k})\Gamma (1 -\alpha _{k}) =\pi \alpha _{k}/\sin (\pi \alpha _{k})> 1\), so the “correction” term not present in (87) is < 0, which is consistent with the fact that the heuristic leading to (87) considers only the first successful mutation.

To obtain some insight into the relative sizes of the “main” and the “correction” terms in (86), we will consider our concrete example in which λ i  = (i + 1)s and \(a_{i} = b_{i+1} \approx 1/2\), so for i ≥ 1

$$\displaystyle{s_{1/2}^{i+1} - s_{ 1/2}^{i} = \frac{1} {(i + 1)s}\log \left ( \frac{(i + 1)^{2}s} {u_{i+1}(i + 2)}\right ) - \frac{1} {\mathit{is}}\log \left ( \frac{\pi \alpha _{i}} {\sin (\pi \alpha _{i})}\right ).}$$

Taking s = 0.01, u = 10^{-5}, and u i  = 5 × 10^{-6} leads to the results given in Table 1.

Table 1 Comparison of expected waiting times from (86) and (87). The numbers in parentheses are the answers converted into years using T = 4 as the average number of days between cell divisions.
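The sketch below reproduces these comparisons; in our indexing the difference \(s_{1/2}^{i+1} - s_{1/2}^{i}\) corresponds to j = i + 1 in (87), and times in units of cell divisions are converted to years using T = 4.

```python
import numpy as np

s, u, ui, T = 0.01, 1e-5, 5e-6, 4.0       # u_i = b_i u ~ u/2; T days per division

s1 = np.log(s ** 2 / (ui * 2 * s)) / s    # Example 4: s_{1/2}^1 = 690.77
print("wave 1:", s1, "divisions,", s1 * T / 365, "years")

for i in range(1, 5):
    main = np.log((i + 1) ** 2 * s / (ui * (i + 2))) / ((i + 1) * s)   # first term of (86)
    a_i = i / (i + 1)                                                  # alpha_i
    corr = np.log(np.pi * a_i / np.sin(np.pi * a_i)) / (i * s)         # correction term
    j = i + 1                                                          # matching wave in [5]
    bozic = np.log(2 * j ** 2 * s / ((j + 1) * u)) / (j * s)           # (87), simplified
    print(i, main - corr, bozic, (main - corr) * T / 365, bozic * T / 365)
```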

The values in the last column differ from the sum of the values in the first column because Bozic et al. [5] indulge in some dubious arithmetic to go from their formula

$$\displaystyle{E(\sigma _{j+1} -\sigma _{j}) = \frac{1} {\mathit{js}}\log \left ( \frac{2j^{2}s} {(j + 1)u}\right ),}$$

to their final result

$$\displaystyle{E\sigma _{k} \approx \frac{1} {2s}\log \left (\frac{4\mathit{ks}^{2}} {u^{2}} \right )\log k.}$$

First they use the approximation j∕(j + 1) ≈ 1 and then \(\sum _{j=1}^{k-1} \approx \int _{0}^{k}\). In the first row of the table this means that their formula underestimates the right answer by 20%. Bozic et al. [5] tout the excellent agreement between their formula and simulations given in their Figure S2. However, a closer look at the graph reveals that while their formula underestimates simulation results, our answers agree with them almost exactly.

16 Application: Metastasis

Haeno, Iwasa, and Michor [21] and Haeno and Michor [22] have used multitype branching processes to study metastasis and have applied their work to study pancreatic cancer data [20]. Suppressing the complex details of the process of metastasis, the model has three types of cells

  • Ordinary tumor cells, type 0.

  • Cells that have the ability to metastasize, type 1.

  • Cells that have spread to another location, type 2.

In the notation of [21] the birth rates of the three types of cells are r, a 1, and a 2, while the death rates are d, b 1, and b 2, so a 0 = r and b 0 = d. Their mutation rates, which we will call μ i , are per birth so in our terminology, \(u_{i} = a_{i-1}\mu _{i}\).

The main questions are: what is the probability metastasis has occurred at diagnosis, and if so what is the size? To turn this into a precise mathematical question, we declare, as in Sections 6 and 12, the time of diagnosis to be \(T_{M}^{0} =\inf \{ t: Z_{t}^{0} = M\}\).

16.1 P(Z 2(T M ) > 0)

The first step is to calculate the probability of having a type 2 at time t given that we start with a single type 1 at time 0. This is related to the problem studied in Section 6, but here we do not condition on nonextinction of the 1's or use the approximation \(Z_{1} \approx V _{1}e^{\lambda _{1}t}\). Let \(g_{i}(z_{1},z_{2},t)\) be the generating function for \((Z_{1}(t),Z_{2}(t))\) when the system is started from one individual of type i. Note that

$$\displaystyle{P(Z_{2}(t)> 0\vert Z_{1}(0) = 1) = 1 - g_{1}(1,0,t).}$$

To make it easier to derive the next result, we will think about the version of the model in which type 1’s give birth at rate a 1 and the result is type 2 with probability μ 2, where \(\mu _{2} = u_{2}/a_{1}\). By considering what happens on the first jump and arguing as in the proof of Lemma 1:

$$\displaystyle\begin{array}{rcl} \frac{\partial g_{1}} {\partial t} & =& b_{1}(1 - g_{1}) + a_{1}(1 -\mu _{2})(g_{1}^{2} - g_{ 1}) + a_{1}\mu _{2}(g_{1}g_{2} - g_{1}), \\ \frac{\partial g_{2}} {\partial t} & =& b_{2}(1 - g_{2}) + a_{2}(g_{2}^{2} - g_{ 2}). {}\end{array}$$
(88)

These equations can be solved exactly, see Antal and Krapivsky [1], but their result, which involves hypergeometric functions, is not particularly useful.

If we let g(t) = g 1(1, 0, t) and h(t) = g 2(1, 0, t), then taking x = 0 in (11), adjusting the indices, and multiplying top and bottom of the fraction by − 1∕a 2

$$\displaystyle{h(t) = 1 - \frac{1 - b_{2}/a_{2}} {1 - (b_{2}/a_{2})e^{-(a_{2}-b_{2})t}}.}$$

Our next goal is to show:

$$\displaystyle{ 1 - g(t) \approx \frac{\lambda _{1}/a_{1}} {1 + (\lambda _{1}/a_{1})^{2}(1/\mu _{2})e^{-\lambda _{1}t}}. }$$
(89)

Proof of (89).

Doing algebra on the first differential equation in (88) we have

$$\displaystyle\begin{array}{rcl} \frac{\mathit{dg}} {\mathit{dt}} & =& b_{1}(1 - g) + a_{1}(1 -\mu _{2})(g^{2} - g) + a_{ 1}\mu _{2}(\mathit{gh} - g) \\ & =& (b_{1} - a_{1}g)(1 - g) - a_{1}\mu _{2}g(g - h). {}\end{array}$$
(90)

Paraphrasing [21], if we neglect the second term, which is of order u 2, this equation is similar to a logistic equation and has two equilibria, g = 1 and g = b 1a 1. For small t, g ≈ 1 and h ≈ 0, so the second term is approximately − a 1 μ 2 and pushes the system away from the unstable equilibrium g = 1 to the stable one at g = b 1a 1. The second term is therefore only important for small t and we can approximate (90) as

$$\displaystyle{\frac{\mathit{dg}} {\mathit{dt}} = (b_{1} - a_{1}g)(1 - g) - a_{1}\mu _{2}\frac{a_{1}g - b_{1}} {a_{1} - b_{1}}.}$$

The logic of this approximation is not explained in [21], but the final term is a linear function that is 0 when g = b 1a 1 and is 1 when g = 1. In addition, as the reader will soon see, it is convenient for computation. If we let \(\epsilon = a_{1}\mu _{2}/(a_{1} - b_{1})\), then the above can be written as

$$\displaystyle{\frac{\mathit{dg}} {\mathit{dt}} = (b_{1} - a_{1}g)(1 +\epsilon -g).}$$

Using the calculation in the proof of (4) now with c = 1 +ε we get from (6) that

$$\displaystyle{g(t) = \frac{b_{1} - (1+\epsilon )e^{D(b_{1}-(1+\epsilon )a_{1})}e^{(b_{1}-(1+\epsilon )a_{1})t}} {a_{1} - e^{D(b_{1}-(1+\epsilon )a_{1})}e^{(b_{1}-(1+\epsilon )a_{1})t}}.}$$

Taking t = 0 in (5) and recalling g(0) = 1 we have

$$\displaystyle{e^{D(b_{1}-(1+\epsilon )a_{1})} = \frac{b_{1} - a_{1}} {\epsilon }.}$$

Plugging into the previous equation we have

$$\displaystyle{g(t) = \frac{b_{1} - (1+\epsilon )(b_{1} - a_{1})\epsilon ^{-1}e^{(b_{1}-(1+\epsilon )a_{1})t}} {a_{1} - (b_{1} - a_{1})\epsilon ^{-1}e^{(b_{1}-(1+\epsilon )a_{1})t}}.}$$

Subtracting this from 1, and noting the second term in the numerator is (1 +ε) times the second term in denominator we have

$$\displaystyle\begin{array}{rcl} 1 - g(t)& =& \frac{a_{1} - b_{1} +\epsilon (b_{1} - a_{1})\epsilon ^{-1}e^{(b_{1}-(1+\epsilon )a_{1})t}} {a_{1} - (b_{1} - a_{1})\epsilon ^{-1}e^{(b_{1}-(1+\epsilon )a_{1})t}} {}\\ & =& \frac{a_{1}(1+\epsilon ) - b_{1}} {a_{1} - (b_{1} - a_{1})\epsilon ^{-1}e^{(b_{1}-(1+\epsilon )a_{1})t}} -\epsilon {}\\ & =& \frac{(\lambda _{1}/a_{1})+\epsilon } {1 + (1 - b_{1}/a_{1})\epsilon ^{-1}e^{(b_{1}-(1+\epsilon )a_{1})t}} -\epsilon. {}\\ \end{array}$$

This does not match (A7) in [21], but remarkably, when we simplify we end up with the same end result. Plugging the definition of \(\epsilon = a_{1}\mu _{2}/(a_{1} - b_{1})\) into ε −1 and replacing the other two ε by 0 gives the desired result. □ 
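To gauge the quality of (89), one can integrate the system (88) numerically; the sketch below uses the parameter values of the Concrete Example given later in this section.

```python
import numpy as np
from scipy.integrate import solve_ivp

a1, b1, a2, b2 = 0.2, 0.1, 0.2, 0.1
mu2 = 1e-5                      # mu_2 = u_2/a_1 for the Concrete Example below
lam1 = a1 - b1

def rhs(t, y):                  # the system (88) with g = g_1(1,0,t), h = g_2(1,0,t)
    g, h = y
    dg = b1 * (1 - g) + a1 * (1 - mu2) * (g * g - g) + a1 * mu2 * (g * h - g)
    dh = b2 * (1 - h) + a2 * (h * h - h)
    return [dg, dh]

ts = np.linspace(0.0, 200.0, 9)
sol = solve_ivp(rhs, [0.0, 200.0], [1.0, 0.0], t_eval=ts, rtol=1e-10, atol=1e-12)
approx = (lam1 / a1) / (1 + (lam1 / a1) ** 2 / mu2 * np.exp(-lam1 * ts))   # (89)
for t, g, ap in zip(ts, sol.y[0], approx):
    print(t, 1 - g, ap)
```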

As in the work of Iwasa, Nowak, and Michor [24] discussed in Section 6, [21] breaks things down according to the number of type 0 individuals present when the type 1 mutations occur to conclude that

$$\displaystyle{P(Z_{2}(T_{M}^{0})> 0) \approx 1 -\exp \left [- \frac{\mu _{1}} {1 - b_{0}/a_{0}}\sum _{x=1}^{M}(1 - g_{ x}^{1}(1,0))\right ],}$$

where \(g_{x}^{1}(s_{1},s_{2})\) is the bivariate generating function of \((Z_{1}(t),Z_{2}(t))\) for a process started with one type 1 when Z 0(t) = x. As in (39) the first factor gives the expected number of type 1 mutations that occur when Z 0(t) = x. To evaluate the sum they assume deterministic growth of the type 0’s to conclude that

$$\displaystyle{g_{x}^{1}(1,0) = g_{ 1}\left (1,0, \frac{1} {\lambda _{0}} \log (M/x)\right ) = g((1/\lambda _{0})\log (M/x)).}$$

Combining the last two formulas with (89), we have the formula derived in (A9) of [21]

$$\displaystyle\begin{array}{rcl} P(Z_{2}(T_{M}^{0}) = 0) \approx \exp \left [- \frac{\mu _{1}} {1 - b_{0}/a_{0}}\int _{0}^{M} \frac{\lambda _{1}/a_{1}\,\mathit{dx}} {1 + (\lambda _{1}/a_{1})^{2}(1/\mu _{2})(M/x)^{-\lambda _{1}/\lambda _{0}}} \right ].& &{}\end{array}$$
(91)

An alternative approach.

It follows from the results given in Section 5 that

$$\displaystyle{ P(\sigma _{2}> T_{M}^{0}) =\exp \left (-\int _{ -\infty }^{0}\mathit{ds}\,Me^{\lambda _{0}s}u_{ 1} \frac{\lambda _{1}} {a_{1}}P(\sigma _{2} \leq -s\vert \Omega _{\infty }^{1})\right ). }$$
(92)

To see this note that \(u_{1}\lambda _{1}/a_{1}\) is the rate for type 1 mutations that don’t die out, so the integral gives \(\Lambda _{1,2}(M) =\) the mean number of successful type 1 mutations that produce a successful type 2 family before T M 0. Since the number of such successes is Poisson with mean \(\Lambda _{1,2}(M)\), the probability of none is \(\exp (-\Lambda _{1,2}(M))\). Using (23)

$$\displaystyle{ P(\sigma _{2} \leq t\vert \Omega _{\infty }^{1}) = 1 - (1 + c_{ 1,2}u_{2}e^{\lambda _{1}t})^{-1}. }$$
(93)

where \(c_{1,2} = (a_{1}/\lambda _{1}^{2})\lambda _{2}/a_{2}\). To prepare for comparison with (91), we rewrite the right-hand side as

$$\displaystyle{ \frac{c_{1,2}u_{2}e^{\lambda _{1}t}} {1 + c_{1,2}u_{2}e^{\lambda _{1}t}} = \frac{1} {1 + (1/u_{2}c_{1,2})e^{-\lambda _{1}t}}. }$$
(94)

Combining (92), (93), and (94), then changing variables \(x = Me^{\lambda _{0}s}\), \(\mathit{dx} =\lambda _{0}Me^{\lambda _{0}s}\,\mathit{ds}\), we have

$$\displaystyle\begin{array}{rcl} P(\sigma _{2}> T_{M}^{0})& \approx & \exp \left [-\frac{u_{1}} {\lambda _{0}} \int _{0}^{M} \frac{\lambda _{1}/a_{1}\,\mathit{dx}} {1 + (1/u_{2}c_{1,2})(M/x)^{-\lambda _{1}/\lambda _{0}}} \right ] \\ & =& \exp \left [-\frac{u_{1}} {\lambda _{0}} \int _{0}^{M} \frac{\lambda _{1}/a_{1}\,\mathit{dx}} {1 + (\lambda _{1}^{2}/a_{1})(a_{2}/\lambda _{2}u_{2})(M/x)^{-\lambda _{1}/\lambda _{0}}} \right ].{}\end{array}$$
(95)

Since \(\mu _{1} = u_{1}/a_{0}\) the factors in front are the same. In the term after the 1+ in the denominator of the integral in (95), \(u_{2}/a_{1} =\mu _{2}\), so there is a factor of \(a_{2}/\lambda _{2}\) that separates this part of the formula from the corresponding part of (91).

16.2 Neutral case

Suppose that \(a_{i} \equiv a\), \(b_{i} \equiv b\), and \(\lambda _{i} \equiv \lambda\). In words, the mutation that confers the ability to migrate does not change the growth rate of the cancer cells. In this case, c 1, 2 = 1∕λ so (95) becomes

$$\displaystyle{P(\sigma _{2}> T_{M}^{0}) \approx \exp \left [-\frac{u_{1}} {a} \int _{0}^{M} \frac{\mathit{dx}} {1 + (\lambda /u_{2})(x/M)}\right ].}$$

The integral is

$$\displaystyle{\left.\frac{u_{2}M} {\lambda } \log (1 + (\lambda /u_{2})(x/M))\right \vert _{0}^{M},}$$

so we have

$$\displaystyle{ P(\sigma _{2}> T_{M}^{0}) \approx \exp \left (-\frac{\mathit{Mu}_{1}u_{2}} {\lambda a} \log (1 +\lambda /u_{2})\right ). }$$
(96)

Since λ∕u 2 is large we can drop the 1+ inside the logarithm. If the quantity inside the exponent is small, then we have

$$\displaystyle{ P(\sigma _{2} \leq T_{M}^{0}) \approx \frac{\mathit{Mu}_{1}u_{2}} {\lambda a} \log (\lambda /u_{2}). }$$
(97)

Using u i  = aμ i and then dropping the a from inside the logarithm, as is done in [21], the above becomes

$$\displaystyle{P(\sigma _{2} \leq T_{M}^{0}) \approx \frac{M\mu _{1}\mu _{2}} {(\lambda /a)} \log (\lambda /\mu _{2}),}$$

which is λ∕a times (5) in [21].

Concrete Example.

The following situation (described in our notation) is simulated in panel (a) of Figure 2 in [21]: a i  = 0.2, b i  = 0.1, λ i  = 0.1, u 1 = 2 × 10^{-4}, u 2 = 2 × 10^{-6}, and M = 10^6. log(10^4) = 9.210, so the approximation from (97) is 0.184 versus 0.368 from (5). Despite the fact that what we are estimating is P(σ 2 < T M 0 ), the first estimate is much closer to the data point for μ 1 = 10^{-3}. Even closer is the one from (96), which is 1 − exp(−0.184) = 0.168.

17 Application: Ovarian cancer

This section summarizes results from [7]. Ovarian cancer is the fifth leading cause of cancer death among women in the United States, with one in 71 American women developing the disease during her lifetime. In 2012, an estimated 22,280 new cases were expected to develop in the United States, with 15,500 deaths [65]. To motivate our model, we begin by describing the four general stages used to classify the disease clinically:

  1. I.

    Cancer confined to one ovary

  2. II.

    Cancer involves both ovaries or has spread to other tissue within the pelvis

  3. III.

    Cancer has spread to the abdomen

  4. IV.

    Cancer has spread to distant organs

According to the SEER database [68], the distribution of the stage at diagnosis is (roughly) I: 20%, II: 10%, III: 40%, and IV: 30%. Five-year survival statistics based on stage at diagnosis are as follows: I: 90%, II: 65%, III: 25%, IV: 10% [44]. Given these statistics, the ability to accurately detect early stage disease could improve ovarian cancer survival dramatically. However, no screening strategy has yet been proven to reduce mortality [42]. Our goal is to estimate the size of the window of opportunity for screening, i.e., the amount of time in which screening can help improve survival. More specifically, it is the amount of time during which the primary tumor is of a size detectable by transvaginal ultrasound while the amount of metastasis has not significantly increased the chance of mortality.

Ovarian carcinoma begins as a tumor on the surface of the ovary or fallopian tube, which we call the primary tumor. Metastasis occurs either by direct extension from the primary tumor to neighboring organs, such as the bladder or colon, or when cancer cells detach from the surface of the primary tumor via an epithelial-to-mesenchymal (EMT) transition. Once the cells have detached, they float in the peritoneal fluid as single cells or multicellular spheroids. Cells then reattach to the omentum and peritoneum and begin more aggressive metastatic growth [55, 59]. We can therefore think of ovarian cancer as consisting of three general tumor cell subtypes:

  • Primary (cells in the ovary or fallopian tube), type 0.

  • Peritoneal (viable cells in peritoneal fluid), type 1.

  • Metastatic (cells implanted on other intra-abdominal surfaces), type 2.

To parametrize the branching process model, we use data from [41], who examined the incidence of unsuspected ovarian cancers in apparently healthy women who underwent prophylactic bilateral salpingo-oophorectomies. They estimated that ovarian cancers had two-phase exponential growth with λ 0 = (log2)∕4 and λ 2 = (log2)∕2.5 per month, i.e., in the early stage the doubling time is 4 months, while in the later stage it is 2.5 months. The growth rate λ 1 cannot be estimated from this data, so we will take it to be 0 and thereby get an upper bound on the size of the window of opportunity for screening, which was described in words above and will be defined more precisely in a minute. There does not seem to be data to allow us to directly estimate the migration rates u 1 and u 2. Fortunately, only the product u 1 u 2 appears in our answers. We will choose \(u_{1}u_{2} = 10^{-4}\) to achieve agreement with observed quantities such as the size of the primary tumor when stage III is reached.

Type 0 cells are a branching process that grows at exponential rate λ 0 > 0. We are not interested in the situation in which the type 0’s die out, so we consider the Z 0’s conditioned on nonextinction \(\Omega _{\infty }^{0} =\{ Z_{0}(t)> 0\mbox{ for all $t$}\}\). In this case (15) tells us that

$$\displaystyle{(e^{-\lambda _{0}t}Z_{ 0}(t)\vert \Omega _{\infty }^{0}) \rightarrow V _{ 0} = \text{exponential}(\lambda _{0}/a_{0}).}$$

Time t represents the amount of time since the initial mutation that began the tumor. That event is not observable, so by shifting the origin of time we can assume \(Z_{0}(t) = e^{\lambda _{0}t}\) to get rid of V 0.

Type 1’s leave from the surface of the primary tumor at rate u 1 times the surface area. Ignoring the constant that comes from the relationship between the surface area and volume of a sphere, and letting \(\gamma _{1} = 2\lambda _{0}/3\), the mean is

$$\displaystyle\begin{array}{rcl} \mathit{EZ}_{1}(t)& =& \int _{0}^{t}u_{ 1}e^{\gamma _{1}s}e^{\lambda _{1}(t-s)}\,\mathit{ds} \\ & =& \frac{u_{1}} {\gamma _{1} -\lambda _{1}}\left (e^{\gamma _{1}t} - e^{\lambda _{1}t}\right ) \sim \left ( \frac{u_{1}} {\gamma _{1} -\lambda _{1}}\right )e^{\gamma _{1}t}.{}\end{array}$$
(98)

Since type 1’s are cells floating in the peritoneal fluid and have less access to nutrients, it is natural to assume that \(\lambda _{1} <\gamma _{1}\). To remove the unknown rate λ 1 from our calculations, we will later set λ 1 = 0. For the moment, we will proceed without that assumption.

Theorem 9.

If \(\gamma _{1}>\lambda _{1} \geq 0\) , then \(Z_{1}(t)/\mathit{EZ}_{1}(t) \rightarrow 1\) in probability as t →∞.

Intuitively, this holds since the integral in (98) has its dominant contribution from times s near t, when there are a lot of migrations. The result is easily shown by computing second moments; the proof is given in Section 17.3.

At time s, mutations occur to type 2 cells at rate \(u_{2}(u_{1}/\gamma _{1})e^{\gamma _{1}s},\) so we let

$$\displaystyle{ s_{2} = \frac{1} {\gamma _{1}} \log \left ( \frac{\gamma _{1}} {u_{1}u_{2}}\right ) }$$
(99)

be the time at which the mutation rate is 1. The next result follows easily from the proof of Theorem 6, but has the advantage of having simpler constants.

Theorem 10.

If \(\lambda _{2}>\gamma _{1}> 0\) , then \(e^{-\lambda _{2}(t-s_{2})}Z_{2}(t) \rightarrow V _{2}\) where V 2 is the sum of points in a Poisson process with mean measure \(\mu (x,\infty ) = C_{2}x^{-\alpha _{2}}\) where \(\alpha _{2} =\gamma _{1}/\lambda _{2}\) ,

$$\displaystyle{C_{2} = \frac{1} {a_{2}}\left (\frac{a_{2}} {\lambda _{2}} \right )^{\alpha _{2}}\Gamma (\alpha _{2}).}$$

17.1 Window of opportunity for screening

In order to compute the size of the window of time in which screening can be effective, we need precise definitions of its two endpoints. For the upper endpoint, we define the time at which the patient enters stage III as \(T_{2} =\min \{ t: Z_{2}(t) = 10^{9}\}\), where we have used the often-quoted rule of thumb that \(10^{9}\) cells = 1 cm\(^{3}\) = 1 gram. For the lower endpoint, we focus on detection by transvaginal ultrasound, so we define \(T_{0} =\min \{ t: Z_{0}(t) = 6.5 \times 10^{7}\}\), corresponding to a spherical tumor of diameter 0.5 cm. These definitions are based on somewhat crude estimates of detectability and “significant” metastasis. If the reader prefers different values, it is easy to recalculate the size of the window.

Using our growth rate parameters λ 0 = (log 2)∕4 = 0.1733 and λ 2 = (log 2)∕2.5 = 0.2772, we set

$$\displaystyle{e^{0.1733T_{0} } = 6.5 \times 10^{7},\quad \text{which gives}\quad T_{ 0} = \frac{1} {0.1733}\log (6.5 \times 10^{7}) = 103.8}$$

months, or 8.65 years. This may seem like a very long time, but the estimate is consistent with calculations done for other types of cancers.

To make a crude calculation of T 2, ignoring the randomness in the growth of the 2’s, we note that by (99), mutations to type 2 occur at rate 1 at time

$$\displaystyle{s_{2} = \frac{1} {\gamma _{1}} \log \left ( \frac{\gamma _{1}} {u_{1}u_{2}}\right ).}$$

From this point, it will take roughly

$$\displaystyle{\frac{1} {\lambda _{2}} \log (10^{9}) = 74.76\text{ months}}$$

for the 2’s to grow to size \(10^{9}\). If we let \(u_{1}u_{2} = 10^{-4}\) and note γ 1 = 0.1155, then

$$\displaystyle{s_{2} = \frac{1} {0.1155}\log (1155) = 61.05,}$$

so T 2 = 74.76 + 61.05 = 135.81 months, and the window of opportunity is T 2 − T 0 = 32.01 months, or 2.67 years.
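Since all the quantities involved are logarithms and ratios, the arithmetic is easy to check numerically. Here is a minimal Python sketch of the deterministic calculation (the variable names are ours; the parameter values are those quoted above):

```python
import math

# Growth rates (per month) and the migration-rate product from the text
lam0 = math.log(2) / 4    # primary: doubling time 4 months
lam2 = math.log(2) / 2.5  # metastases: doubling time 2.5 months
gam1 = 2 * lam0 / 3       # growth rate of the primary's surface area
u1u2 = 1e-4               # chosen value of the product u1 * u2

# T0: primary reaches 6.5e7 cells (a 0.5 cm sphere at 1e9 cells per cm^3)
T0 = math.log(6.5e7) / lam0            # ~103.8 months

# s2: time at which type 2 mutations occur at rate 1, from (99)
s2 = math.log(gam1 / u1u2) / gam1      # ~61.0 months

# T2: s2 plus the time for the type 2's to grow to 1e9 cells
T2 = s2 + math.log(1e9) / lam2         # ~135.8 months

print(f"T0 = {T0:.1f}, s2 = {s2:.1f}, T2 = {T2:.1f}, window = {T2 - T0:.1f} months")
```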

To take the randomness of Z 2 into account, we use Theorem 10 to conclude that

$$\displaystyle{Z_{2}(t) \approx e^{\lambda _{2}(t-s_{2})}V _{2}.}$$

Setting this equal to 109 and solving, we have

$$\displaystyle{ T_{2} \approx s_{2} + \frac{1} {\lambda _{2}} \log (10^{9}/V _{ 2}), }$$
(100)

and hence

$$\displaystyle{T_{2} = 135.81 + 3.61\log (1/V _{2}).}$$

The distribution of the window, T 2 − T 0, is shown in Figure 4. When one takes the correction into account, T 2 − T 0 is between 27 and 34 months with high probability.
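To see the effect of the randomness, the Poisson process in Theorem 10 can be simulated directly: if \(\Gamma_1 < \Gamma_2 < \cdots\) are the arrival times of a rate one Poisson process, then the points \(x_i = (C_2/\Gamma_i)^{1/\alpha_2}\) have exactly the mean measure \(\mu(x,\infty) = C_2 x^{-\alpha_2}\). A minimal sketch follows; since a 2, and hence C 2, is not pinned down above, C 2 = 1 is an illustrative assumption, so the spread, rather than the location, is the point of the exercise.

```python
import math
import random

lam2 = math.log(2) / 2.5
alpha2 = 0.1155 / 0.2772   # gamma_1 / lambda_2
s2, T0 = 61.05, 103.8      # deterministic values from the text
C2 = 1.0                   # illustrative: a_2, hence C_2, is not specified above

def sample_V2(n_points=2_000):
    """V_2 as the sum of the points of a Poisson process with mu(x, inf) = C2 * x**(-alpha2)."""
    total, gam = 0.0, 0.0
    for _ in range(n_points):
        gam += random.expovariate(1.0)       # arrival times Gamma_1 < Gamma_2 < ...
        total += (C2 / gam) ** (1 / alpha2)  # points listed in decreasing order
    return total

windows = sorted(s2 + math.log(1e9 / sample_V2()) / lam2 - T0 for _ in range(2_000))
print("median window:", windows[1_000])
print("5%-95% range :", windows[100], windows[1_900])
```

Truncating the Poisson process at 2,000 points is harmless because the points decay like \(\Gamma_i^{-1/\alpha_2}\) with \(1/\alpha_2 \approx 2.4\), so the omitted tail of the sum is negligible.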

Fig. 4. Distribution of the window T 2 − T 0. Due to a computational error in [7], the distribution needs to be shifted to the left by 2.7 months.

17.2 Primary tumor size at onset of metastasis

When tumors are found at an early stage and removed surgically, patients almost always undergo chemotherapy as the next step of treatment. Although chemotherapy is prescribed in part because of the possibility of residual tumor at the site of surgery, it is also given out of concern that some cells have already metastasized but have not yet grown to a detectable size. Here, we use our model to estimate the probability of metastasis given a primary tumor of known size.

We want to find the distribution for the size of the primary tumor at the onset of metastasis, i.e., at time T 2. Using (100), we have

$$\displaystyle{Z_{0}(T_{2}) \approx \exp \left (\lambda _{0}\left [s_{2} + (1/\lambda _{2})\log (10^{9}/V _{ 2})\right ]\right ).}$$

Using (99) and recalling that \(\gamma _{1} = 2\lambda _{0}/3\) and \(\lambda _{0}/\lambda _{2} = 5/8\), we now have

$$\displaystyle{Z_{0}(T_{2}) \approx \left ( \frac{\gamma _{1}} {u_{1}u_{2}}\right )^{3/2}\left (\frac{10^{9}} {V _{2}} \right )^{5/8}.}$$

From this, we see that the size of the primary tumor at time s 2 is

$$\displaystyle{\left ( \frac{\gamma _{1}} {u_{1}u_{2}}\right )^{3/2} \approx 4 \times 10^{4},}$$

which translates into a diameter of 0.42 mm. If we ignore randomness and take V 2 = 1, then the size of the primary at time T 2 is

$$\displaystyle{4 \times 10^{4} \times 10^{45/8} = 1.686 \times 10^{10},}$$

which translates into a diameter of about 3 cm. Figure 5 shows the computed distribution of the size and compares it with the results of [41].
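The conversions between cell numbers and diameters all use the \(10^{9}\) cells = 1 cm\(^{3}\) rule of thumb; the following short sketch (ours) reproduces them:

```python
import math

def diameter_cm(cells, cells_per_cm3=1e9):
    """Diameter of a sphere containing `cells` cells at 1e9 cells per cm^3."""
    volume = cells / cells_per_cm3            # in cm^3
    return (6 * volume / math.pi) ** (1 / 3)  # from V = (pi/6) d^3

gam1, u1u2 = 0.1155, 1e-4
size_at_s2 = (gam1 / u1u2) ** 1.5         # ~3.9e4 cells
size_at_T2 = size_at_s2 * 10 ** (45 / 8)  # with V_2 = 1, ~1.7e10 cells

print(f"at s2: {size_at_s2:.2e} cells, diameter {10 * diameter_cm(size_at_s2):.2f} mm")
print(f"at T2: {size_at_T2:.2e} cells, diameter {diameter_cm(size_at_T2):.1f} cm")
```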

Fig. 5. Computed distribution of the size of the primary at the time of detection, compared with data from [41].

There is an overall difference of a factor of two between the times at which the two curves decrease to 0. This difference could be removed by adjusting our parameters; however, such an adjustment would not remove the disagreement between the shapes of the early parts of the curves. We find the initial sharp drop of the data curve surprising and think it might be an artifact of the way in which the data in [41] were analyzed. In a typical Kaplan-Meier survival study, patients either leave the study or die, and deaths are observed when they occur. In the ovarian cancer study, when a woman is observed to have stage III cancer, the progression occurred at some time in the past, which must be estimated. The curve will be skewed if this is not done correctly.

17.3 Proof of Theorem 9

Theorem 9.

If γ 1 > λ 1 ≥ 0, then Z 1 (t)∕EZ 1 (t) → 1 in probability as t →∞.

Proof.

Let \(\bar{Z}_{1}(t)\) be the contribution from mutations before time t − log t:

$$\displaystyle{E\bar{Z}_{1}(t) =\int _{ 0}^{t-\log t}u_{ 1}e^{\gamma _{1}s}e^{\lambda _{1}(t-s)}\,\mathit{ds}.}$$

A little algebra and the asymptotic behavior of EZ 1(t) given in (98) shows that

$$\displaystyle{E\bar{Z}_{1}(t) = e^{\lambda _{1}(\log t)}EZ_{1}(t -\log t) \sim e^{(\lambda _{1}-\gamma _{1})(\log t)}\mathit{EZ}_{1}(t).}$$

Since γ 1 > λ 1, \(E\bar{Z}_{1}(t)/EZ_{1}(t) \rightarrow 0\).

Let Y 1(t) be the number of type 1’s at time t when we start our multitype branching process with one 1 and no 0’s. Using Theorem 1, there is a positive constant C so that

$$\displaystyle{\mbox{ var}\,(Y _{1}(t)) \sim Ce^{2\lambda _{1}t}.}$$

The process Z 0(t) is deterministic, so the contributions to Z 1(t) from different time intervals are independent. Thus, if we let \(\hat{Z}_{1}(t) = Z_{1}(t) -\bar{ Z}_{1}(t),\) we have

$$\displaystyle{\mbox{ var}\,(\hat{Z}_{1}(t)) \sim \int _{t-\log t}^{t}u_{ 1}e^{\gamma _{1}s}e^{2\lambda _{1}(t-s)}\,\mathit{ds}.}$$

Introducing the normalization and changing variables to r = t − s in the integral, we have

$$\displaystyle\begin{array}{rcl} \mbox{ var}\,(e^{-\gamma _{1}t}\hat{Z}_{ 1}(t))& \sim & e^{-\gamma _{1}t}\int _{ 0}^{\log t}e^{(2\lambda _{1}-\gamma _{1})r}\,\mathit{dr} {}\\ & \leq & e^{-\gamma _{1}t}(\log t)(1 + t^{2\lambda _{1}-\gamma _{1} }) \rightarrow 0, {}\\ \end{array}$$

where we have added 1 to take into account the possibility that 2λ 1 −γ 1 < 0. Since \(e^{-\gamma _{1}t}\mathit{EZ}_{1}(t)\) converges to a positive constant by (98), it follows that \(\mbox{ var}\,(\hat{Z}_{1}(t)/\mathit{EZ}_{1}(t)) \rightarrow 0.\) Combining this with the fact that \(E\bar{Z}_{1}(t)/\mathit{EZ}_{1}(t) \rightarrow 0\) and using Chebyshev’s inequality gives the desired result. □ 

18 Application: Intratumor heterogeneity

Diversity of the cell types in a tumor causes problems for cancer treatment in several ways.

  • Different subpopulations within a tumor and its metastases may have varying types of response to any given treatment. This has been documented by sequencing samples from different regions in renal carcinoma [51], glioblastoma [67], and breast cancer [60].

  • Heterogeneity levels are associated with the aggressiveness of the disease, for example in Barrett’s esophagus [57] and in breast cancer [62].

  • Heterogeneity has long been implicated in the development of resistance to cancer therapy after an initial response, and in the development of metastases [48]. This has important consequences for treatment [50, 64].

For simplicity, we will restrict our attention to quantifying the amount of diversity in the first wave. These results generalize easily to later waves. Since the population is (at most times) dominated by a single wave, this is enough to quantify the diversity in the entire population. More details about the generalization can be found in [12].

To formulate a simple mathematical problem, we will suppose that the mutations that produce type 1 cells are all different, and that the descendants of a mutant, which we will call a clone, all have the same genotype. The point process in the proof of Theorem 4 allows us to understand the sizes of the clones. Recall that we are supposing \(Z_{0}(t) = V _{0}e^{\lambda _{0}t}\) where V 0 is nonrandom. The results about diversity will not depend on V 0, so this can be done without loss of generality.

As in Section 9, define a two-dimensional point process \(\mathcal{X}_{t}\) with a point at (s, w) if there was a mutation to type 1 at time s and the resulting type 1 branching process \(\tilde{Z}_{1}(t)\) has \(e^{-\lambda _{1}(t-s)}\tilde{Z}_{1}(t) \rightarrow w\). A point at (s, w) contributes \(e^{-\lambda _{1}s}w\) to \(V _{1} =\lim _{t\rightarrow \infty }e^{-\lambda _{1}t}Z_{1}(t)\).

$$\displaystyle{V _{1} =\sum _{(s,w)\in \mathcal{X}_{t}}e^{-\lambda _{1}s}w}$$

is the sum of points in a Poisson point process with mean measure \(\mu (z,\infty ) = A_{1}u_{1}V _{0}z^{-\alpha }\) where α = λ 0λ 1.

In Section 9 we showed that the distribution of V 1 is a one-sided stable law. To make the connection with the point process representation, recall

Theorem 11.

Let \(Y _{1},Y _{2},\ldots\) be independent and identically distributed nonnegative random variables with \(P(Y _{i}> x) \sim cx^{-\alpha }\) where 0 < α < 1 and let \(S_{n} = Y _{1} + \cdots + Y _{n}\) . Then

$$\displaystyle{S_{n}/n^{1/\alpha } \rightarrow V,}$$

where V is the sum of points in a Poisson process with mean measure μ(z,∞) = cz −α .

Proof (Ideas behind the proof).

The key is to let

$$\displaystyle{N_{n}(z) = \vert \{1 \leq m \leq n: Y _{m}> \mathit{zn}^{1/\alpha }\}\vert.}$$

The assumption \(P(Y _{i}> x) \sim cx^{-\alpha }\) implies that \(\mathit{nP}(Y _{i}> \mathit{zn}^{1/\alpha }) \sim cz^{-\alpha }\). Since the variables are independent, N n (z) converges to a Poisson distribution with mean cz −α. With a little work, one can show that the contribution of all the jumps \(\leq \epsilon n^{1/\alpha }\) is small if ε is small, and the desired result follows. For more details see Section 3.7 in [10]. □ 
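The theorem is easy to illustrate by simulation. With c = 1 one can take the Y i to be Pareto, \(P(Y_i > x) = x^{-\alpha}\) for x ≥ 1, and generate the points of the limiting Poisson process as \(\Gamma_i^{-1/\alpha}\), where the Γ i are the arrival times of a rate one Poisson process. A sketch under these assumptions:

```python
import random

alpha = 0.5

def sum_normalized(n=10_000):
    # S_n / n**(1/alpha) for Pareto variables with P(Y > x) = x**(-alpha), x >= 1 (c = 1)
    s = sum(random.random() ** (-1 / alpha) for _ in range(n))
    return s / n ** (1 / alpha)

def poisson_sum(n_points=5_000):
    # V = sum of the points Gamma_i**(-1/alpha) of a Poisson process with mu(z, inf) = z**(-alpha)
    total, gam = 0.0, 0.0
    for _ in range(n_points):
        gam += random.expovariate(1.0)
        total += gam ** (-1 / alpha)
    return total

a = sorted(sum_normalized() for _ in range(500))
b = sorted(poisson_sum() for _ in range(500))
print("sample medians:", a[250], "vs", b[250])  # should be roughly equal
```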

Simpson’s index

is a useful measure of the amount of diversity. Called the homozygosity in genetics, it is defined to be the probability that two randomly chosen individuals in wave 1 are descended from the same mutation:

$$\displaystyle{R =\sum _{ i=1}^{\infty }\frac{X_{i}^{2}} {V _{1}^{2}},}$$

where \(X_{1}> X_{2}>\ldots\) are the points of the Poisson process and V 1 is their sum. While the point process is somewhat complicated, the formula for the mean of Simpson’s index is very simple.

Theorem 12.

ER = 1 −α where α = λ 0 ∕λ 1 .

This was proved in [12] by using a 2001 result of Fuchs, Joffe, and Teugels [18] about

$$\displaystyle{R_{n} =\sum _{ i=1}^{n}\frac{Y _{i}^{2}} {S_{n}^{2}}\quad \text{where}\quad S_{n} =\sum _{ i=1}^{n}Y _{ i},}$$

and the Y i are the i.i.d. random variables from Theorem 11. They proved ER n  → 1 −α, so the task in [12] was to show that \(R_{n} \Rightarrow R\) and ER n  → ER; see page 472 of [12] for more details. A more interesting derivation can be given using results about Poisson-Dirichlet distributions, but that requires more machinery, so we postpone the proof until after we have stated all of our results.
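The convergence ER n → 1 − α is easy to check by Monte Carlo; a sketch with Pareto-tailed Y i (our choice of distribution):

```python
import random

def simpson_index(n, alpha):
    """R_n = sum Y_i^2 / S_n^2 for i.i.d. Y_i with P(Y > x) = x**(-alpha)."""
    ys = [random.random() ** (-1 / alpha) for _ in range(n)]
    s = sum(ys)
    return sum(y * y for y in ys) / (s * s)

alpha, reps = 0.4, 1_000
est = sum(simpson_index(5_000, alpha) for _ in range(reps)) / reps
print(est, "vs", 1 - alpha)  # the estimate should be near 0.6
```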

We can use a 1973 result of Logan, Mallows, Rice, and Shepp [27] to get an idea about the distribution of R. Consider the “self-normalized sums”

$$\displaystyle{S_{n}(p) = \frac{\sum _{i=1}^{n}Y _{i}} {(\sum _{j=1}^{n}Y _{j}^{p})^{1/p}},}$$

where \(Y _{1},Y _{2},\ldots \geq 0\) are i.i.d. In the theory of sums of independent random variables, this definition is nice because one does not have to know what the index of the stable law is in order to do the normalization. Of course, our motivation here is that

$$\displaystyle{S_{n}(2) = R_{n}^{-1/2}.}$$

Logan et al. [27] proved convergence in distribution and identified the Fourier transform of the limit. The convergence of

$$\displaystyle{\sum _{i=1}^{n}Y _{ i}/n^{1/\alpha }\quad \text{and}\quad \sum _{ j=1}^{n}Y _{ j}^{p}/n^{p/\alpha }}$$

to stable laws is standard, but to get convergence of the ratio one needs to show convergence of the joint distribution. This is done with clever computations. Even in the relatively nice case p = 2 there is not a good formula for the limit. The picture of the density function in the case p = 2, α = 0.15 in Figure 6 might explain why. In the figure, the asymmetry parameter of the stable law is 0, which in the convention used there means we are in the positive one-sided case.

Fig. 6. The limit distribution of S n (2) when α = 0.15.

Size of the largest clone.

Using a 1952 result of Darling [8], we can find the limiting distribution of

$$\displaystyle{M_{n} =\max _{1\leq i\leq n}Y _{i}/S_{n},}$$

the fraction of individuals in the largest clone.

Theorem 13.

As n →∞, \(1/M_{n} \Rightarrow T\) where T has characteristic function \(e^{\mathit{it}}/f_{\alpha }(t)\), where

$$\displaystyle{f_{\alpha }(t) = 1 +\alpha \int _{ 0}^{1}(1 - e^{\mathit{itu}})u^{-(\alpha +1)}\,\mathit{du}.}$$

Here \(\mathit{ET} = 1/(1-\alpha )\) and \(\mbox{ var}\,(T) =\alpha /((1-\alpha )^{2}(2-\alpha ))\).
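The moment formulas can be checked by Monte Carlo; a sketch, again with Pareto variables (moments of heavy-tailed ratios converge slowly, so only rough agreement should be expected):

```python
import random

def inv_max_fraction(n, alpha):
    """1/M_n = S_n / max Y_i for i.i.d. Pareto Y_i with P(Y > x) = x**(-alpha)."""
    ys = [random.random() ** (-1 / alpha) for _ in range(n)]
    return sum(ys) / max(ys)

alpha, reps = 0.5, 2_000
samples = [inv_max_fraction(2_000, alpha) for _ in range(reps)]
mean = sum(samples) / reps
var = sum((x - mean) ** 2 for x in samples) / reps
print(mean, "vs ET    =", 1 / (1 - alpha))                           # 2.0
print(var, "vs var(T) =", alpha / ((1 - alpha) ** 2 * (2 - alpha)))  # ~1.33
```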

18.1 Poisson-Dirichlet distributions

We now give a second derivation of Theorem 12. The proof is not simple, but it does take us through some interesting territory. The facts we use here, unless otherwise indicated, can be found in the 1997 paper by Pitman and Yor [34]. We thank Jason Schweinsberg, Jim Pitman, and Ngoc Tran for helpful discussions of this material. The two-parameter family of Poisson-Dirichlet distributions PD(α, θ) can be defined from a residual allocation model. To do this we begin by recalling that a beta(β, γ) distribution has density on (0, 1) given by

$$\displaystyle{ \frac{\Gamma (\beta +\gamma )} {\Gamma (\beta )\Gamma (\gamma )}x^{\beta -1}(1 - x)^{\gamma -1}. }$$
(101)

Let \(B_{1},B_{2},\ldots\) be independent with \(B_{n} =_{d}\) beta(1 −α, θ + nα). Let Z 1 = B 1 and for k ≥ 2 let

$$\displaystyle{Z_{k} = (1 - B_{1})\cdots (1 - B_{k-1})B_{k}.}$$

This is sometimes called a stick-breaking model, since at step k we break off a fraction B k of what remains of the stick.
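The residual allocation model translates directly into code. A minimal sketch that truncates the stick at a fixed number of pieces (harmless here, since the leftover unbroken mass is negligible after a thousand pieces for these parameter values); as a check, it estimates \(E\sum_k Z_k^2\) for PD(α, 0), which by the computation in (105) below equals 1 − α:

```python
import random

def stick_breaking(alpha, theta, n_pieces=1_000):
    """First n_pieces values Z_1, Z_2, ... of the PD(alpha, theta) residual allocation model."""
    zs, remaining = [], 1.0
    for n in range(1, n_pieces + 1):
        b = random.betavariate(1 - alpha, theta + n * alpha)  # B_n
        zs.append(remaining * b)                              # Z_n = (1-B_1)...(1-B_{n-1}) B_n
        remaining *= 1 - b
    return zs

# Sanity check: for PD(alpha, 0), E sum Z_k^2 = 1 - alpha, cf. (105) below
alpha, reps = 0.4, 2_000
est = sum(sum(z * z for z in stick_breaking(alpha, 0.0)) for _ in range(reps)) / reps
print(est, "vs", 1 - alpha)
```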

Let \(U_{1},U_{2},\ldots\) be the Z i arranged in decreasing order. It is known that we can go from the U i to a sequence \(\hat{U}_{i}\) with the same joint distribution as the Z i by a size-biased permutation:

$$\displaystyle{P(\hat{U}_{n+1} = U_{j}\vert \hat{U}_{1},\ldots,\hat{U}_{n},U_{1},U_{2},\ldots ) = \frac{U_{j}} {1 -\hat{ U}_{1} -\cdots -\hat{U}_{n}},}$$

if U j has not already been chosen or 0 otherwise. An important corollary is

$$\displaystyle{ \hat{U}_{1}\mbox{ has a beta$(1-\alpha,\theta +\alpha )$ distribution.} }$$
(102)

An example familiar from elementary probability is given by the limiting behavior of the cycle sizes of a random permutation π of {1, 2, …, N}. Construct the first cycle by following \(1 \rightarrow \pi (1) \rightarrow \pi (\pi (1)) \rightarrow \cdots\) until we return to 1. When this happens the first cycle is complete. To continue, we take the smallest value not in the first cycle and repeat the procedure. For more details see Example 2.2.4 in [10]. It is easy to see that if we let \(L_{1},L_{2},\ldots\) be the cycle lengths, listed in the order in which the cycles are constructed, and we take \(B_{i}\) to be beta(1,1), i.e., uniform on (0,1), then

$$\displaystyle{(L_{1}/N,L_{2}/N,L_{3}/N,\ldots ) \Rightarrow (Z_{1},Z_{2},Z_{3},\ldots ).}$$

where \(\Rightarrow\) indicates convergence in distribution of the sequence, i.e., for each \(n\) the joint distribution of the first \(n\) terms converges.
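This convergence is easy to watch in a simulation; the sketch below (ours) extracts the cycle lengths in their order of construction, so the printed fractions are one sample of the first few coordinates of the limit:

```python
import random

def cycle_lengths(N):
    """Cycle lengths of a uniform random permutation of {0, ..., N-1},
    in the order the cycles are discovered (smallest unused element first)."""
    pi = list(range(N))
    random.shuffle(pi)
    seen, lengths = set(), []
    for start in range(N):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = pi[j]
            length += 1
        lengths.append(length)
    return lengths

N = 100_000
print([round(l / N, 4) for l in cycle_lengths(N)[:5]])  # compare with uniform stick breaking
```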

A more sophisticated example appears in the limiting behavior of the Ewens sampling formula for the infinite alleles model. As explained in Section 1.3.3 of [9], if we let \(s_{j}(N)\) be the number of individuals with the \(j\)th most frequent allele when the rescaled mutation rate is \(\theta\), and let the \(B_{i}\) be beta\((1,\theta )\), then

$$\displaystyle{(s_{1}(N)/N,s_{2}(N)/N,\ldots ) \Rightarrow (U_{1},U_{2},\ldots ).}$$

so the limit distribution is \(\mathit{PD}(0,\theta )\). This example provided Kingman’s motivation for introducing the one-parameter family of Poisson-Dirichlet distributions in [25].

The connection we are most interested in here comes from the following example. Let \(\tau _{s}\) be a one-sided stable process with index \(\alpha\). That is, \(s \rightarrow \tau _{s}\) is nondecreasing, has independent increments, and

$$\displaystyle{E\exp (-\lambda (\tau _{t} -\tau _{s})) =\exp \left (-(t - s)\int _{0}^{\infty }(1 -\exp (-\lambda x))\,c\alpha x^{-(\alpha +1)}\,\mathit{dx}\right ),}$$

so that the measure of jumps of size > x per unit time is \(\mathit{cx}^{-\alpha }\).

Let \(J_{1}(t)> J_{2}(t)>\ldots\) be the jumps of \(\tau _{s}\) for \(s \in [0,t]\) listed in decreasing order. Proposition 6 in [34] implies that

$$\displaystyle{\left (\frac{J_{1}(t)} {\tau _{t}}, \frac{J_{2}(t)} {\tau _{t}},\ldots \right ) = _{d}\mathit{PD}(\alpha,0).}$$

Changing to our notation, this implies

$$\displaystyle{ \left (\frac{X_{1}} {V _{1}}, \frac{X_{2}} {V _{1}},\ldots \right ) = _{d}\mathit{PD}(\alpha,0). }$$
(103)

Returning to the notation introduced earlier, let \((U_{1},U_{2},\ldots ) = _{d}\mathit{PD}(\alpha,\theta )\) and let \(\hat{U}_{i}\) be a size-biased permutation. Formula (6) in [34] gives us

$$\displaystyle\begin{array}{rcl} E_{\alpha,\theta }\sum _{n=1}^{\infty }f(U_{ n})& =& E_{\alpha,\theta }\left [\frac{f(\hat{U}_{1})} {\hat{U}_{1}} \right ] \\ & =& \frac{\Gamma (\theta +1)} {\Gamma (\theta +\alpha )\Gamma (1-\alpha )}\int _{0}^{1}\mathit{du}\,\frac{f(u)} {u} u^{-\alpha }(1 - u)^{\alpha +\theta -1}.{}\end{array}$$
(104)

The first equality follows easily from the fact that \(\hat{U}_{1} = U_{n}\) with probability \(U_{n}\), and the second from (102) and the formula for the beta density given in (101). If we let \(f(u) = u^{2}\) and, in view of (103), take \(\theta = 0\), then we have

$$\displaystyle\begin{array}{rcl} E\sum _{n=1}^{\infty }\frac{X_{n}^{2}} {V _{1}^{2}} & =& \frac{1} {\Gamma (\alpha )\Gamma (1-\alpha )}\int _{0}^{1}\mathit{du}\,u^{1-\alpha }(1 - u)^{\alpha -1} \\ & =& \frac{1} {\Gamma (\alpha )\Gamma (1-\alpha )} \cdot \Gamma (2-\alpha )\Gamma (\alpha ) = 1-\alpha,{}\end{array}$$
(105)

by the Gamma function recursion (50).

A second commonly used measure of diversity is the entropy, often called Shannon’s index in the literature. Taking \(f(u) = u\log u\) in (104), we see that the expected value of \(\sum _{n}U_{n}\log U_{n}\), the negative of the entropy, is

$$\displaystyle{ \frac{\Gamma (\theta +1)} {\Gamma (\theta +\alpha )\Gamma (1-\alpha )}\int _{0}^{1}\mathit{du}\,u^{-\alpha }(1 - u)^{\alpha +\theta -1}\log u.}$$

We do not know how to evaluate the integral in closed form, but it is easy to evaluate numerically; a sketch is given at the end of this section. For examples of the use of Simpson’s and Shannon’s indices in studying breast cancer, see [62]. Merlo et al. [58] have investigated how well these indices and the Hill index

$$\displaystyle{\left (\sum _{i}p_{i}^{q}\right )^{1/(1-q)}}$$

perform as predictors of the progression of Barrett’s esophagus to esophageal adenocarcinoma. They found that all of the diversity measures were strong and highly significant predictors of progression.
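For the entropy integral promised above, ordinary numerical quadrature is enough; a sketch using scipy.integrate.quad (the endpoint singularities are integrable, so quad handles them):

```python
import math
from scipy.integrate import quad

def expected_u_log_u(alpha, theta=0.0):
    """E sum_n U_n log U_n under PD(alpha, theta); the expected entropy is minus this."""
    const = math.gamma(theta + 1) / (math.gamma(theta + alpha) * math.gamma(1 - alpha))
    integrand = lambda u: u ** (-alpha) * (1 - u) ** (alpha + theta - 1) * math.log(u)
    integral, _ = quad(integrand, 0, 1)
    return const * integral

for alpha in (0.25, 0.5, 0.75):
    print(alpha, -expected_u_log_u(alpha))  # expected Shannon index when theta = 0
```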