
This chapter applies SPLC to a variety of stochastic models in order to illustrate the scope, applicability and flexibility of the methodology, and to motivate additional new applications. Section 11.1 analyzes a variant of the classical Cramér-Lundberg (C-L) risk process, based on Model 1, pp. 289–302 in [54]. Section 11.2 gives a general technique for transient distributions in a stochastic process. Section 11.10 discusses the application of LC to simple harmonic motion. The intervening sections analyze other potentially motivational models.

11.1 Risk Model: Barrier and Reinvestment

Let \(\{X(t)\}_{t\ge 0}\) denote the risk reserve (also called the surplus) at time \(t\ge 0\) in a standard C-L (Cramér-Lundberg) risk model with initial reserve \(x_{0}\) > 0 (see, e.g., pp. 22–28 in [71]). Insurance claims occur in a Poisson process with rate \(\lambda \). Claim sizes are i.i.d. r.v.s \(\{Z_{i}\}_{i=1, 2,...}\) with common cdf B(x), \( x\ge 0.\) Then

$$\begin{aligned} X(t)=x_{0}+c\, t-\sum _{i=1}^{\mathbf {N}(t)}Z_{i}, t\ge 0\text {,} \end{aligned}$$
(11.1)

where c > 0 is the premium rate, and \(\{\mathbf {N} (t)\}_{t\ge 0}\) is a Poisson process having rate \(\lambda \). Denote the time until ruin by

$$\begin{aligned} \tau ={\inf }\{t>0\text { }|\text { }X(t)\text { downcrosses level }0\}\text {.} \end{aligned}$$
(11.2)

11.1.1 Variant of the Cramér-Lundberg Model

Here, we consider a variant of the standard C-L model (characterized by formula (11.1)), with the following properties. There is a constant barrier at level M > \(x_{0}\), and a reinvestment strategy. Whenever \( \{X(t)\}_{t\ge 0}\) reaches the barrier M, a portion of net gain \(M-x_{0}\) is transferred out immediately and moved into alternative investments (e.g., a conservative investment portfolio). Specifically, if \(X(t^{-})\) = M then X(t) = a, where \(a\in (x_{0}, M)\). Every \(t_{M}\in \{t\!|\) \(X(t^{-})=M\}\) is a regenerative point of \(\{X(t)\}_{t\ge 0}\). At such \(t_{M}\)s the capital reserve jumps downward by the reinvestment amount \(M-a\) to level a (see Fig. 11.1). Let k be the number of instants \(t_{M}\) occurring in (0, t). The following formulas characterize this variant.

$$\begin{aligned} \begin{array}{l} X(t)=x_{0}+c\, t-\sum _{i=1}^{\mathbf {N}(t)}Z_{i}-k(M-a),\ X(t)<M, t\in \left( 0,\tau \right) \\ X(t)=a\text { if }X(t^{-})=M, t\in \left( 0,\tau \right) \text {,} \end{array} \end{aligned}$$
(11.3)

where \(\tau \) is the ruin time of \(\{X(t)\}_{t\ge 0}\).

Fig. 11.1

Sample path of regenerative risk process \(\left\{ X(t)\right\} _{t\ge 0}\); two “ruin” cycles; barrier at M; initial reserve \(x_{0}\); reinvestment indicator level a; deficits at ruin \(\delta _{1}\), \(\delta _{2}\)

11.1.2 Extending \(\{X(t)\}\) from \([0,\tau )\) to \([0,\infty )\)

We create a regenerative process whose cycles are i.i.d. replicas of the process \(\left\{ X(t)\right\} _{t\in \left[ 0,\tau \right) }\), as follows. At the ruin time \(\tau \), we restore the capital reserve to the initial value \(x_{0}\), and restart the process \(\left\{ X(t)\right\} _{t\in \left[ 0,\tau \right) }\). We repeat this restoration at each successive “ruin” instant. This procedure forms a regenerative process, denoted by \( \left\{ X(t)\right\} _{t\ge 0}\). Figure 11.1 shows the first two cycles of \(\left\{ X(t)\right\} _{t\ge 0}\). The regenerative process has the advantage of containing an infinite number of i.i.d. ruin cycles, and enables us to analyze the ruin model using its properties. The time points \(\tau _{1}\), \(\tau _{1}+\tau _{2}\), \(\tau _{1}+\tau _{2}+\tau _{3}\), ..., are regenerative points at which the SP makes a double jump—one downward below level 0 causing a deficit ending a ruin cycle, and one upward to level \(x_{0}\), starting the next cycle. Since \(\left\{ X(t)\right\} _{t\ge 0}\) is a regenerative process, its limiting pdf exists as \(t\rightarrow \infty \), which we denote by f(x), \( x\in \left( 0, M\right) \).

Expected Time of Ruin \(E(\tau )\)

Consider the renewal process \(\{\tau _{n}\}_{n=1, 2,...}\), where each \(\tau _{n}\underset{dis}{=}\tau \). Let \(N_{\tau }(t)\) := number of renewals (i.e., “ruins”) in (0, t). Let \(\mathcal {D}_{t}(x)\) := number of SP downcrossings of level\(\ x\ \)in\(\ (0, t)\) (Fig. 11.1). From the elementary renewal theorem and LC theory,

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{E(\mathcal {D}_{t}(x))}{t}= & {} \lambda \int _{x}^{M}\overline{B}(y-x)f(y)dy, 0\le x\le M, \nonumber \\ \lim _{t\rightarrow \infty }\frac{E(\mathcal {D}_{t}(0))}{t}= & {} \lim _{t\rightarrow \infty }\frac{N_{\tau }(t)}{t}\underset{a.s.}{=} \lim _{t\rightarrow \infty }\frac{E(N_{\tau }(t))}{t}=\frac{1}{E(\tau )}\text { ,} \end{aligned}$$
(11.4)

where \(\overline{B}(\cdot )\) = \(1-B(\cdot )\). In particular,

$$\begin{aligned} \frac{1}{E(\tau )}= & {} \lambda \int _{0}^{M}\overline{B}(y)f(y)dy, \nonumber \\ E(\tau )= & {} \frac{1}{\lambda \int _{0}^{M}\overline{B}(y)f(y)dy}. \end{aligned}$$
(11.5)

Let \(\mathcal {U}_{t}(x)\) := number of SP upcrossings of level\(\ x\ \)in\(\ (0, t)\). All upcrossings of a level \(x\in (0, M)\) occur at points where the sample path is continuous, except at the “ruin” instants, when the upward jump to level \(x_{0}\) upcrosses every level \(x\in (0, x_{0})\). From LC theory and (11.5),

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{{E}(\mathcal {U}_{t}(x))}{t}= & {} cf(x), x_{0}\le x<M, \\ \lim _{t\rightarrow \infty }\frac{{E}(\mathcal {U}_{t}(x))}{t}= & {} cf(x)+\lambda \int _{0}^{M}\overline{B}(y)dF(y) \\= & {} cf(x)+\frac{1}{E(\tau )}, 0<x<x_{0}. \end{aligned}$$

11.1.3 CDF and PDF of the Deficit at Ruin

A random variable of interest in risk theory is the deficit at ruin, denoted by \(\varvec{\delta }\), having cdf denoted by \(F_{\varvec{\delta } }(y),\) and pdf \(f_{\varvec{\delta }}(y), y>0\) (Fig. 11.1). R.v. \(\varvec{\delta }\) is equal to the excess below level 0 of a downward jump due to a claim.

Theorem 11.1

The cdf and pdf of the deficit at ruin \(\varvec{\delta }\) are given by

$$\begin{aligned} F_{\varvec{\delta }}(x)= & {} 1-\lambda E(\tau )\int _{0}^{M} \overline{B}(y+x)f(y)dy, x>0, \end{aligned}$$
(11.6)
$$\begin{aligned} f_{\varvec{\delta }}(x)= & {} \lambda E(\tau )\int _{[0, M]}b(y+x)dF(y), x>0 \text {,} \end{aligned}$$
(11.7)

where \(E(\tau )\) is given by (11.5) and b(y) = dB(y) / dy, \(y>0,\) is the pdf of the claim size.

Proof

Fix level x < 0. By (11.5), \(1/E(\tau )\) is the downcrossing rate of level 0. The term \(1-F_{\varvec{\delta }}(|x|)\) is the probability that a downward jump that crosses level 0 also crosses level x, where \(\left| \cdot \right| \) stands for absolute value. Thus \(\left( 1/E(\tau )\right) (1-F_{\varvec{\delta }}(|x|))\) is the downcrossing rate of level x. Since downcrossings of level 0 are caused by claims only, we have the following equation for \(F_{\varvec{\delta }}(|x|)\)

$$\begin{aligned} \frac{1}{E(\tau )}(1-F_{\varvec{\delta }}(|x|))=\lambda \int _{0}^{M} \overline{B}(y-x)f(y)dy, x<0\text {,} \end{aligned}$$
(11.8)

which is equivalent to (11.6) (since \(x<0\)). Taking d / dx on both sides of (11.6) gives (11.7).\(\square \)

11.1.4 Analysis of the Risk Model

Let

$$\begin{aligned} f(x):=f_{0}(x)\varvec{I}_{[0, x_{0})}(x)+f_{1}(x)\varvec{I} _{[x_{0}, a)}(x)+f_{2}(x)\varvec{I}_{[a, M)}(x) \text {,} \end{aligned}$$

where \(\varvec{I}_{\varvec{A}}(x)=1\) if \(x\in \varvec{A}\) and \( \varvec{I}_{\varvec{A}}(x)=0\) if \(x\notin \varvec{A}\).

Define \(f_{2}(M)\) := \(f_{2}(M^{-})\) for convenience; this does not affect any probability measures. By LC and rate balance across level x (see explanations after (11.13) below), we obtain

$$\begin{aligned} cf_{2}(x)=cf_{2}(M^{-})+\lambda \int _{x}^{M}\overline{B}(y-x)f_{2}(y)\,{d} y, a\le x<M, \end{aligned}$$
(11.9)
$$\begin{aligned} cf_{1}(x)=\lambda \int _{x}^{a}\overline{B}(y-x)f_{1}(y)\,{d}y+\lambda \int _{a}^{M}\overline{B}(y-x)f_{2}(y)\,{d}y, x_{0}\le x<a, \end{aligned}$$
(11.10)
$$\begin{aligned} cf_{0}(x)+\frac{1}{E(\tau )}=\lambda \int _{x}^{x_{0}}\overline{B} (y-x)f_{0}(y)\,{d}y+\lambda \int _{x_{0}}^{a}\overline{B}(y-x)f_{1}(y)\,{d}y \nonumber \\ +\lambda \int _{a}^{M}\overline{B}(y-x)f_{2}(y)\,{d}y, 0<x<x_{0}, \end{aligned}$$
(11.11)

where

$$\begin{aligned} \frac{1}{E(\tau )}=\lambda \int _{0}^{x_{0}}\overline{B}(y)f_{0}(y)\,{d} y+\lambda \int _{x_{0}}^{a}\overline{B}(y)f_{1}(y)\,{d}y+\lambda \int _{a}^{M} \overline{B}(y)f_{2}(y)\,{d}y. \end{aligned}$$
(11.12)

The normalizing condition is

$$\begin{aligned} \int _{0}^{x_{0}}f_{0}(y)\,{d}y+\int _{x_{0}}^{a}f_{1}(y)\,{d} y+\int _{a}^{M}f_{2}(y)\,{d}y=1. \end{aligned}$$
(11.13)

Explanation of (11.9)–(11.11).

In (11.9), the left side is the upcrossing rate of level x (\(a\le x<M\)). On the right side, the first term is the downcrossing rate of level x due to hits of level M from below, which cause jumps downward to level a. The second term is the downcrossing rate of level x due to claims occurring when the capital reserve is in the interval (x, M). Equation (11.10) is explained similarly.

In (11.11), the left side is the upcrossing rate of level x (\(0<x<x_{0}\)). The first term is the upcrossing rate at continuous points of the sample path. The second term is the total downcrossing rate of level 0, by (11.12), which results in immediate upcrossings of level x (a double jump). The right side of (11.11) is the downcrossing rate of x caused by claims when the SP is above x.

Rate balance across level x yields equations (11.9), (11.10) and (11.11).

Remark 11.1

Consider the hit and egress rates of level \(x_{0}\). We obtain the following rate balance equation

$$\begin{aligned} c\, f_{0}(x_{0}^{-})+ \frac{1}{E(\tau )}=c\, f_{1}(x_{0}). \end{aligned}$$
(11.14)

In (11.14), the left side is the total hit rate of \(x_{0}\); the right side is the egress rate out of \(x_{0}\) above (Fig. 11.1). The term \(cf_{0}(x_{0}^{-})\ \)is the hit rate of level \(x_{0}\) from below at continuous sample-path points. Since downward jumps ending below level 0 cause ruin and immediate jumps up to level \(x_{0}\), \(1/E(\tau )\) is also the hit rate of level \(x_{0}\) from below. Finally, on the right side, \(cf_{1}(x_{0})\) is the egress rate out of level \(x_{0}\) above.

From (11.14), f(x) has a jump discontinuity at \(x_{0},\) given by

$$\begin{aligned} f_{1}(x_{0})-\, f_{0}(x_{0}^{-})=\frac{1}{cE(\tau )}. \end{aligned}$$
(11.15)

Similar reasoning for level a gives

$$\begin{aligned} f_{2}(a)-f_{1}(a^{-})=f_{2}(M) \end{aligned}$$
(11.16)

(The derivations of (11.15) and (11.16) are examples of how LC can lead to analytical properties of the pdf in an intuitive manner. These formulas also serve as a check on integral equations (11.9)–(11.11).)

11.1.5 Solution of Model with Exponential Claim Sizes

We now solve (11.9)–(11.11) and (11.13) for \(f_{0}(x), f_{1}(x)\) and \(f_{2}(x)\) when the claim sizes are \(\underset{dis}{=}\) Exp\(_{\mu }\), to illustrate the solution technique. Thus, \(e^{-\mu (\cdot )}\) is substituted for \( \overline{B}(\cdot ) \) in (11.9)–(11.11).

General Case \(c\ne \frac{\lambda }{\mu }\)

Taking d / dx on both sides of (11.9)–(11.11) and solving the resulting differential equations for \(f_{2}(\cdot )\), \(f_{1}(\cdot )\) and \(f_{0}(\cdot )\) yields

$$\begin{aligned} \left\{ \begin{array}{l} f_{0}(x)=A_{10}^{\prime }+B_{10}^{\prime }e^{(\mu -\frac{\lambda }{c} )x}, 0\le x<x_{0}, \\ f_{1}(x)=A_{11}^{\prime }e^{(\mu -\frac{\lambda }{c})x}, x_{0}\le x<a, \\ f_{2}(x)=f_{2}(a)\frac{\mu -\left( \frac{\lambda }{c}\right) \, e^{-(\mu - \frac{\lambda }{c})(M-x)}}{\mu -\left( \frac{\lambda }{c}\right) e^{-(\mu - \frac{\lambda }{c})(M-a)}}, a\le x<M\text {.} \end{array} \right. \end{aligned}$$
(11.17)

where \(A_{10}^{\prime }\), \(B_{10}^{\prime }\), \(A_{11}^{\prime }\) are constants.

Substituting \(f_{2}(\cdot )\), \(f_{1}(\cdot )\) and \(f_{0}(\cdot )\) into (11.9)–(11.11) and (11.13) yields

$$\begin{aligned} \left\{ \begin{array}{l} A_{10}^{\prime }=\left( C_{11}^{\prime }\right) ^{-1}\lambda \mu e^{x_{0}\mu }\left( e^{\frac{\lambda }{c}M+a\mu }-e^{\frac{\lambda }{c}a+M\mu }\right) , \\ B_{10}^{\prime }=-A_{10}^{\prime }, \\ A_{11}^{\prime }=\left( C_{11}^{\prime }\right) ^{-1}\left( \frac{\lambda }{c }\right) \left( e^{\frac{\lambda }{c}M+a\mu }-e^{\frac{\lambda }{c}a+M\mu }\right) \left( \lambda e^{\frac{\lambda }{c}x_{0}}-c\mu e^{x_{0}\mu }\right) \\ f_{2}(a)=\left( C_{11}^{\prime }\right) ^{-1}ce^{a(\mu -\frac{\lambda }{c} )}\left( \frac{\lambda }{c}e^{\frac{\lambda }{c}x_{0}}-\mu e^{x_{0}\mu }\right) \left( \frac{\lambda }{c}e^{\frac{\lambda }{c}M+a\mu }-\mu e^{\frac{ \lambda }{c}a+M\mu }\right) , \end{array} \right. \end{aligned}$$
(11.18)

where 

$$\begin{aligned} C_{11}^{\prime }= & {} \left( a-M\right) c\mu e^{(M+a)\mu }\left( \frac{\lambda }{c}e^{\frac{\lambda }{c}x_{0}}-\mu e^{x_{0}\mu }\right) \\&+\lambda (1+x_{0}\mu )e^{x_{0}\mu }\left( e^{\frac{\lambda }{c}M+a\mu }-e^{ \frac{\lambda }{c}a+M\mu }\right) \text {.} \end{aligned}$$

The expected time of ruin is, from (11.12),

$$\begin{aligned} E(\tau )=C_{11}^{\prime }\lambda ^{-1}(\lambda -c\mu )^{-1}e^{-x_{0}\mu }\left( e^{\frac{\lambda }{c}M+a\mu }-e^{\frac{\lambda }{c}a+M\mu }\right) ^{-1}\text {.} \end{aligned}$$
(11.19)

Expected Number of Claims Before Ruin

Let N be the number of claims before ruin. From \(E(\tau )\) given in (11.19) we obtain

$$\begin{aligned} E(N)=\lambda E(\tau )=C_{11}^{\prime }(\lambda -c\mu )^{-1}e^{-x_{0}\mu }\left( e^{\frac{\lambda }{c}M+a\mu }-e^{\frac{\lambda }{c}a+M\mu }\right) ^{-1}. \end{aligned}$$
(11.20)
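
For concreteness, the closed forms (11.18)–(11.20) are straightforward to evaluate numerically. The following minimal Python sketch simply transcribes (11.19) and (11.20); the parameter values are illustrative assumptions, not taken from [54].

```python
from math import exp

def ruin_metrics(lam, mu, c, x0, a, M):
    """Evaluate E(tau) and E(N) from (11.19)-(11.20); assumes c != lam/mu."""
    r = lam / c
    gap = exp(r * M + a * mu) - exp(r * a + M * mu)
    C11 = ((a - M) * c * mu * exp((M + a) * mu) * (r * exp(r * x0) - mu * exp(x0 * mu))
           + lam * (1.0 + x0 * mu) * exp(x0 * mu) * gap)
    E_tau = C11 * exp(-x0 * mu) / (lam * (lam - c * mu) * gap)   # (11.19)
    return E_tau, lam * E_tau                                     # (11.20)

# Illustrative parameter values (assumptions).
print(ruin_metrics(lam=1.0, mu=1.0, c=2.0, x0=2.0, a=3.0, M=5.0))
```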

Special Case \(c=\frac{\lambda }{\mu }\)

By a similar analysis, we can show that when \(c=\lambda /\mu \)

$$\begin{aligned} \left\{ \begin{array}{ll} f_{0}(x)=B_{10}^{\prime \prime }x, &{} 0\le x<x_{0}, \\ f_{1}(x)=A_{11}^{\prime \prime }, &{} x_{0}\le x<a, \\ f_{2}(x)=A_{12}^{\prime \prime }+B_{12}^{\prime \prime }x, &{} a\le x<M, \end{array} \right. \end{aligned}$$
(11.21)

where

$$\begin{aligned} B_{10}^{\prime \prime }= & {} 2\mu ^{2}\left( C_{11}^{\prime \prime }\right) ^{-1},{\quad } A_{11}^{\prime \prime }=2\mu \left( 1+x_{0}\mu \right) C_{11}^{\prime \prime -1}, \nonumber \\ B_{12}^{\prime \prime }= & {} \left( \frac{1}{a-M}\right) A_{11}^{\prime \prime },{\quad } A_{12}^{\prime \prime }=-\left( M+\frac{1}{\mu } \right) B_{12}^{\prime \prime } \end{aligned}$$
(11.22)

and\(\,\, C_{11}^{\prime \prime }=2-x_{0}^{2}\mu ^{2}+\mu \left( a+M\right) \left( 1+x_{0}\mu \right) \).

The expected time of ruin is

$$\begin{aligned} E(\tau )=\left( \frac{1}{2\lambda }\right) C_{11}^{\prime \prime }. \end{aligned}$$
(11.23)

Expected Capital Transferred Out Before Ruin

Let \(N_{M}\) be the number of hits of level M before ruin, and let CAP be the capital transferred out before ruin. We give a method to derive E(CAP), since it illustrates useful intuitive notions.

\(\varvec{E(CAP)}\) By LC, the long-run rate of hits of level M from below is \(cf_{2}(M)\); hence, using the renewal reward theorem over a ruin cycle, \(E(N_{M})=cf_{2}(M)E(\tau )\), and we get

$$\begin{aligned} E(CAP)=(M-a)\, E(N_{M})=(M-a)\, c\, f_{2}(M)\, E(\tau )\text {.} \end{aligned}$$
(11.24)

When \(c\ne \lambda /\mu \), substituting \(f_{2}(M)\) and \(E(\tau )\) from (11.17) and (11.19) into (11.24) yields

$$\begin{aligned} E(CAP)=(M-a)\, E(N_{M})=(M-a)\,\left( \frac{c}{\lambda }\right) \left( \frac{ \left( \frac{\lambda }{c}\right) e^{\left( \frac{\lambda }{c}-\mu \right) x_{0}}-\mu }{e^{\left( \frac{\lambda }{c}-\mu \right) M}-e^{\left( \frac{ \lambda }{c}-\mu \right) a}}\right) . \end{aligned}$$
(11.25)

When \(c=\lambda /\mu \), \(E(\tau )\) is given by (11.23), and \( f_{2}(M)\) is obtained from (11.21). Then it can be shown that

$$\begin{aligned} E(CAP)=x_{0}+\frac{1}{\mu }. \end{aligned}$$
(11.26)

Note that when c = \(\lambda /\mu ,\) the expected capital transferred out for alternative investments before ruin is independent of the barrier M.
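The special-case formulas (11.23) and (11.26) can be cross-checked by simulating i.i.d. ruin cycles of the variant C-L model directly. The sketch below assumes exponential claims, \(c=\lambda /\mu \), and illustrative parameter values (assumptions); it compares the sample means of \(\tau \) and CAP with (11.23) and (11.26).

```python
import numpy as np

rng = np.random.default_rng(7)
lam, mu, x0, a, M = 1.0, 2.0, 1.0, 2.0, 3.0     # illustrative values (assumptions)
c = lam / mu                                     # special case c = lambda/mu

def ruin_cycle():
    x, t, cap = x0, 0.0, 0.0
    while True:
        dt = rng.exponential(1.0 / lam)          # time to the next claim
        while x + c * dt >= M:                   # premium income hits the barrier M
            hit = (M - x) / c                    # time until the reserve reaches M
            dt -= hit
            t += hit
            cap += M - a                         # reinvestment amount transferred out
            x = a
        t += dt
        x += c * dt - rng.exponential(1.0 / mu)  # premium income, then the claim
        if x < 0.0:
            return t, cap                        # ruin: (tau, CAP) for this cycle

sims = np.array([ruin_cycle() for _ in range(100000)])
C11pp = 2.0 - x0**2 * mu**2 + mu * (a + M) * (1.0 + x0 * mu)
print("E(tau):", sims[:, 0].mean(), "vs", C11pp / (2.0 * lam))    # (11.23)
print("E(CAP):", sims[:, 1].mean(), "vs", x0 + 1.0 / mu)          # (11.26)
```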

For additional analytical details, numerical examples, and applications of LC to other risk models, see [54].

11.2 A Technique for Transient Distributions

This section outlines a technique for deriving transient distributions of continuous-parameter processes with a continuous or discrete state space, denoted as \(\left\{ X(t)\right\} _{t\ge 0}\). The technique is based on the general version of Theorem B (Theorem 4.1 in Sect. 4.2.1). We repeat here formulas (4.1) and (4.2) of Theorem B for easy reference, i.e.,

$$\begin{aligned} E(\mathcal {I}_{t}(\varvec{A}))&=E(\mathcal {O}_{t}(\varvec{A} ))+P_{t}(\varvec{A})-P_{0}(\varvec{A}), \text { }t\ge 0\text {,} \end{aligned}$$
(11.27)
$$\begin{aligned} \frac{\partial }{\partial t}E(\mathcal {I}_{t}(\varvec{A}))&=\frac{ \partial }{\partial t}E(\mathcal {O}_{t}(\varvec{A}))+\frac{\partial }{ \partial t}P_{t}(\varvec{A}), t>0\text {,} \end{aligned}$$
(11.28)

where \(\mathcal {I}_{t}(\varvec{A})\) is the number of SP entrances, and \( \mathcal {O}_{t}(\varvec{A})\) is the number of SP exits, of state-space set \(\varvec{A}\) during \(\left[ 0, t\right] \). Let the parameter set be \( \varvec{T}\) = \(\left[ 0,\infty \right) \).

Remark 11.2

If the limiting distribution of \(\left\{ X(t)\right\} _{t\ge 0}\) exists, it is obtained by taking the limit of the derived transient distribution as \(t\rightarrow \infty \).

11.2.1 State-Space Set with Variable Boundary

State Space \(\varvec{S}\subseteq \mathbb {R}\)

In formulas (11.27) and (11.28) assume set \(\varvec{A}\) depends on a continuous variable x and define \(\varvec{A}\) := \( \varvec{A}_{x}\), \(x\in \varvec{S}\), where x is a state-space level, e.g., \(\varvec{T}\times \{x\}\) (a line in the \(\varvec{T}\)-\( \varvec{S}\) coordinate system parallel to the time axis). For fixed x, replace formulas (11.27) and (11.28) by

$$\begin{aligned} E(\mathcal {I}_{t}(\varvec{A}_{x}))&=E(\mathcal {O}_{t}(\varvec{A} _{x}))+P_{t}(\varvec{A}_{x})-P_{0}(\varvec{A}_{x}) \text {,} \end{aligned}$$
(11.29)
$$\begin{aligned} \frac{\partial }{\partial t}E(\mathcal {I}_{t}(\varvec{A}_{x}))&=\frac{ \partial }{\partial t}E(\mathcal {O}_{t}(\varvec{A}_{x}))+\frac{\partial }{\partial t}P_{t}(\varvec{A}_{x}). \end{aligned}$$
(11.30)

Assume the following mixed partial derivatives exist and are equal, i.e.,

$$\begin{aligned} \frac{\partial ^{2}}{\partial x\partial t}E(\mathcal {O}_{t}(\varvec{A} _{x}))&=\frac{\partial ^{2}}{\partial t\partial x}E(\mathcal {O}_{t}( \varvec{A}_{x})), \\ \frac{\partial ^{2}}{\partial x\partial t}P_{t}(\varvec{A}_{x})&=\frac{ \partial ^{2}}{\partial t\partial x}P_{t}(\varvec{A}_{x})\text {.} \end{aligned}$$

Taking \(\partial /\partial x\) in (11.30) we obtain

$$\begin{aligned} \frac{\partial ^{2}}{\partial x\partial t}E(\mathcal {I}_{t}(\varvec{A} _{x}))=\frac{\partial ^{2}}{\partial t\partial x}E(\mathcal {O}_{t}( \varvec{A}_{x}))+\frac{\partial ^{2}}{\partial t\partial x}P_{t}( \varvec{A}_{x}). \end{aligned}$$
(11.31)

State Space \(\varvec{S}\subseteq \mathbb {R}^{n}\)

Let \(\{\varvec{X}(t)\}_{t\ge 0}\) denote a continuous-time process with n-dimensional state space \(\varvec{S}\subseteq \mathbb {R}^{n}\). The state space may be continuous or discrete. Let vector \(\varvec{x}\) = \((x_{1},..., x_{n})\), and let state-space set \(\varvec{A}_{\varvec{x}}\) = \(\{\varvec{y}\in \varvec{S}\ |\ y_{i}\le x_{i}, i=1,..., n\}\). Then \(P_{t}(\varvec{A}_{\varvec{x}})\) = \(F_{t}(\varvec{x})\) = \(F_{t}(x_{1},..., x_{n})\) is the joint cdf of the n state variables at time \(t\ge 0\).

From the general formula (11.29) the joint cdf is given by

$$\begin{aligned} F_{t}(\varvec{x})=E(\mathcal {I}_{t}(\varvec{A}_{x}))-E(\mathcal {O} _{t}(\varvec{A}_{x}))+F_{0}(\varvec{x}) \end{aligned}$$

where \(F_{0}(\varvec{x})\) = \(\left\{ \begin{array}{l} 1 \ \ \text {if} \ \ \varvec{X}(0)\in \varvec{A}_{x}, \\ 0 \ \ \text {if} \ \ \varvec{X}(0)\notin \varvec{A}_{x} \end{array} \text {.}\right. \)

Provided the derivatives exist, we obtain

$$\begin{aligned} \frac{\partial F_{t}(\varvec{x})}{\partial x_{i}}&=\frac{\partial }{ \partial x_{i}}\left[ E(\mathcal {I}_{t}(\varvec{A}_{x}))-E(\mathcal {O} _{t}(\varvec{A}_{x}))\right] , i=1,..., n, \\ \frac{\partial ^{n}F_{t}(\varvec{x})}{\partial x_{1}\cdot \cdot \cdot \partial x_{n}}&=\frac{\partial ^{n}}{\partial x_{1}\cdot \cdot \cdot \partial x_{n}}\left[ E(\mathcal {I}_{t}(\varvec{A}_{x}))-E(\mathcal {O} _{t}(\varvec{A}_{x}))\right] , \\ \frac{\partial F_{t}(\varvec{x})}{\partial t}&=\frac{\partial }{ \partial t}\left[ E(\mathcal {I}_{t}(\varvec{A}_{x}))-E(\mathcal {O}_{t}( \varvec{A}_{x}))\right] . \end{aligned}$$

If \(\frac{\partial E(\mathcal {I}_{t}(\varvec{A}_{x}))}{\partial t}\) and \( \frac{\partial E(\mathcal {O}_{t}(\varvec{A}_{x}))}{\partial t}\) can be expressed as functions of \(F_{t}(\varvec{x})\) or \(f_{t}(\varvec{x})\), then we may be able to derive an integro-differential equation for \(F_{t}( \varvec{x})\) or \(f_{t}(\varvec{x})\).

If n = 1 the state space is one-dimensional, and \(\varvec{A}_{x}\) \( \varvec{=}\) \((-\infty , x]\). Thus

$$\begin{aligned} f_{t}(x)=\frac{\partial }{\partial x}\left[ E(\mathcal {I}_{t}((-\infty , x]))-E(\mathcal {O}_{t}((-\infty , x]))\right] \end{aligned}$$

where \(f_{t}(x)\) := transient pdf of \(\varvec{X}(t)\).

LC Computation

The expressions in this Section can aid in estimating or computing the transient cdf and pdf of a continuous-parameter n-dimensional process using LCE (level crossing estimation or computation) for transient distributions. We will not expound on this transient LCE technique further in this monograph. LCE for transient distributions is discussed briefly in Remark 3.7 and Example 3.1 in Sect. 3.2.8, and briefly mentioned in Remark 9.2 in Sect. 9.2.

11.3 Discrete-Parameter Processes

Let \(\{X_{n}\}_{n=0, 1, 2,...}\) denote a discrete-parameter process taking values in a state space \(\varvec{S}\), which may be discrete or continuous. Let \(\varvec{A},\) \(\varvec{B},\) \(\varvec{C}\) be (measurable) subsets of \(\varvec{S}\). Let \(P_{n}(\varvec{A})\) = \( P(X_{n}\in \varvec{A})\), and \(P_{m, n}(\varvec{B, C})\) = \(P(X_{m}\in \varvec{B},\varvec{X}_{n}\in \varvec{C})\).

Definition 11.1

(a) The SP exits set \(\varvec{A}\) at time n if \(X_{n}\in \varvec{A}\) and \(X_{n+1}\notin \varvec{A}\). (b) The SP enters set \(\varvec{A}\) at time n if \(X_{n-1}\notin \varvec{A}\) and \(X_{n}\in \varvec{A}\). (c) \(\mathcal {I}_{n}(\varvec{A})\) := number of SP entrances into \(\varvec{A}\) during [0, n]. (d) \(\mathcal {O}_{n}(\varvec{A})\) := number of SP exits out of \(\varvec{A}\) during [0, n].

We now state a theorem for discrete-time processes which is analogous to Theorem B (see formulas (11.29) and (11.30)).

Theorem 11.2

Let \(\{X_{n}\}_{n=0, 1, 2,...}\) be a discrete-time process with state space \( \varvec{S}\). Let \(\varvec{A}\subseteq \varvec{S}\).

$$\begin{aligned} E(\mathcal {I}_{n}(\varvec{A}))=E(\mathcal {O}_{n}(\varvec{A}))+P_{n}( \varvec{A})-P_{0}(\varvec{A}) \text {,} \end{aligned}$$
(11.32)

where \(P_{0}(\varvec{A})=\left\{ \begin{array}{c} 1\text { if }X_{0}\in A, \\ 0\text { if }X_{0}\notin A \end{array} \text {.}\right. \)

Proof

The proof is similar to that of Theorem 4.1 in Sect. 4.2.1 of Chap. 4, upon replacing t by n. \(\square \)

11.3.1 Application to Markov Chains

Let \(\{X_{n}\}_{n=0, 1,...}\) be a Markov chain with the discrete state space \(\varvec{S}\). For example, let \(\varvec{S}=\{0,\pm 1,\pm 2,...\}\). Let the set \(\varvec{A}:=\{j\}\), \(j\in \varvec{S}\). Then

$$\begin{aligned} E(\mathcal {I}_{n}(j))=\sum _{i\ne j}\sum _{m=0}^{n-1}P_{i}^{m}P_{ij}, \ \text {and}\ E(\mathcal {O}_{n}(j))=\sum _{i\ne j}\sum _{m=0}^{n-1}P_{j}^{m}P_{ji}, \end{aligned}$$

where \(P_{ij}\) is the one-step transition probability from i to j and \(P_{j}^{m}\equiv P_{m}(\varvec{A})=P_{m}(j)\); each double sum counts the SP transitions (into, or out of, state j) completed by time n. Substituting into (11.32) gives

$$\begin{aligned} P_{j}^{n}=\sum _{i\ne j}\sum _{m=0}^{n-1}P_{i}^{m}P_{ij}-\sum _{i\ne j}\sum _{m=0}^{n-1}P_{j}^{m}P_{ji}+P_{j}^{0}. \end{aligned}$$
(11.33)

Assume the following limiting probabilities exist:

$$\begin{aligned} \lim _{n\rightarrow \infty }P_{i, j}^{n}=\lim _{n\rightarrow \infty }P_{jj}^{n}=\lim _{n\rightarrow \infty }P_{j}^{n}\equiv \pi _{j}, \end{aligned}$$

where \(P_{i, j}^{n}\) is the n-step transition probability from i to j. That is, the Markov chain is positive recurrent and aperiodic, and \(\sum _{j\in \varvec{S}}\pi _{j}=1\). Dividing both sides of (11.33) by n and letting \(n\rightarrow \infty \) yields

$$\begin{aligned} 0=\sum _{i\ne j}\pi _{i}P_{i, j}-\pi _{j}\sum _{i\ne j}P_{j, i}, \quad \text {equivalently}\quad \pi _{j}=\sum _{i\in \varvec{S}}\pi _{i}P_{i, j}, j\in \varvec{S}\text {.} \end{aligned}$$

Thus we have derived the classical equations for the limiting probabilities \( \left\{ \pi _{j}\right\} _{j\in \varvec{S}}\) by using an LC method, namely

$$\begin{aligned} \begin{array}{l} \pi _{j}=\sum _{i\in \varvec{S}}\pi _{i}P_{i, j}, j\in \varvec{S}\text {, } \\[1ex] \sum _{j\in \varvec{S}}\pi _{j}=1\text {.} \end{array} \end{aligned}$$
(11.34)
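
As a numerical illustration, the following minimal Python sketch (the 3-state transition matrix and initial state are illustrative assumptions) verifies the balance relation (11.32)/(11.33) for a single state j, counting the exits whose transition is completed by time n, and recovers \(\left\{ \pi _{j}\right\} \) from (11.34).

```python
import numpy as np

# Illustrative ergodic 3-state transition matrix (an assumption).
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
p0 = np.array([1.0, 0.0, 0.0])            # start in state 0, so P_j^0 = p0[j]

n = 200
marginals = [p0]                           # marginals[m][j] = P_j^m
for _ in range(n):
    marginals.append(marginals[-1] @ P)
marginals = np.array(marginals)

j = 1
# Expected entrances into {j} during [0, n]: transitions i -> j, i != j, at steps 1..n.
enter = sum(marginals[m][i] * P[i, j] for m in range(n) for i in range(3) if i != j)
# Expected exits out of {j} whose transition is completed by time n.
exit_ = sum(marginals[m][j] * P[j, i] for m in range(n) for i in range(3) if i != j)
# Level-crossing balance (11.32)/(11.33): P_j^n = entrances - exits + P_j^0.
assert abs(marginals[n][j] - (enter - exit_ + p0[j])) < 1e-9

# Limiting probabilities from (11.34): solve pi = pi P, sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)
print(np.round(pi, 4), np.round(marginals[n], 4))   # pi should match the time-n marginals
```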

Remark 11.3

We have applied the discrete-time analog of Theorem B to a standard Markov chain in order to demonstrate its applicability to discrete-time discrete-state models. Theorem B emphasizes the system point aspect of the SPLC method. SPLC utilizes SP entrance/exit rates of state-space sets. (SP level crossings are special cases of SP entrances and exits.)

11.4 Semi-Markov Process

Consider a semi-Markov process (SMP) \(\{X(t)\}_{t\ge 0}\), with discrete state space \(\varvec{S}\) (also called a Markov renewal process). (See, e.g., pp. 207–208 in [99]; pp. 457–460 in [125].) Let the sojourn time in state \(j\in \varvec{S}\) have a general distribution with mean \(\mu _{j}>0\). The type of distribution of the sojourn time may differ from state to state; only the means are utilized in this analysis. At the end of a sojourn in state i, say at instant \(\tau \), assume \(P\left( X(\tau )=j\,|\,X(\tau ^{-})=i\right) \) = \(P_{i, j}\), \(j\ne i\), \(j\in \varvec{S}\). The matrix \(\left\| P_{i, j}\right\| \) is a Markov matrix. Assume the Markov chain with transition matrix \(\left\| P_{i, j}\right\| \) is positive recurrent and aperiodic so that the limiting probabilities \(\pi _{j}\), \(j\in \varvec{S}\) exist.

Let \(P_{j}(t)\) := \(P(X(t)=j)\), \(t\ge 0\) and \(P_{j}\) := \(\lim _{t\rightarrow \infty }P_{j}(t)\), \(j\in \varvec{S}\). We now derive the probabilities \( P_{j}\), \(j\in \varvec{S},\) using LC.

Consider a sample path of \(\{X(t)\}_{t\ge 0}\) . Let \( T_{t}(i)\) denote the total time spent by the SP in state i during (0, t), and

$$\begin{aligned} \varvec{I}_{i}(X(s))=\left\{ \begin{array}{c} 1 \quad \text {if }\ X(s)=i, s\in \left[ 0, t\right] , \\ 0 \quad \text {if }\ X(s)\ne i, s\in \left[ 0, t\right] \end{array} \right. . \end{aligned}$$

Then \(E\left( \varvec{I}_{i}(X(s))\right) \) = \(P_{i}(s)\), and \(T_{t}(i)\) = \(\int _{s=0}^{t}\varvec{I}_{i}(X(s))\, ds\), implying

$$\begin{aligned} E(T_{t}(i))=\int _{s=0}^{t}E\left( \varvec{I}_{i}(X(s))\right) ds=\int _{s=0}^{t}P_{i}(s)ds\text {.} \end{aligned}$$
(11.35)

The expected number of SP exits from state i during (0, t) is \(E(T_{t}(i))/\mu _{i}\), since the mean of each sojourn time in i is \(\mu _{i}\). The expected number of SP \(i\rightarrow j\) transitions during (0, t) is \(\left( E(T_{t}(i))/\mu _{i}\right) P_{i, j}\). The expected total number of SP transitions into (entrances into) state j during (0, t) is

$$\begin{aligned} E(\mathcal {I}_{t}(j))=\sum _{i\ne j}\frac{E(T_{t}(i))}{\mu _{i}}P_{i, j}. \end{aligned}$$
(11.36)

Similarly, the expected number of SP exits out of j during (0, t) is

$$\begin{aligned} E(\mathcal {O}_{t}(j))=\frac{E(T_{t}(j))}{\mu _{j}}. \end{aligned}$$
(11.37)

Substituting from (11.36) and (11.37) into Theorem B, formula (11.27), gives

$$\begin{aligned} \sum _{i\ne j} \frac{E(T_{t}(i))}{\mu _{i}}P_{i, j}=\frac{E(T_{t}(j))}{\mu _{j}} +P_{j}(t)-P_{j}(0)\text {.} \end{aligned}$$
(11.38)

(We assume the interchange of summation and the limit operation is valid. This applies if, e.g., \(\varvec{S}\) has a finite number of states.)

From (11.35), the long-run proportion of time the SP is in state \(i\in \varvec{S}\) is

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{E(T_{t}(i))}{t}=P_{i}, i\in \varvec{S}. \end{aligned}$$

Since \(0\le P_{j}(t)\le 1\), \(t\ge 0\),

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{P_{j}(t)}{t}=\lim _{t\rightarrow \infty } \frac{P_{j}(0)}{t}=0\text {.} \end{aligned}$$

Dividing both sides of (11.38) by \(t>0\) and letting \(t\rightarrow \infty \) gives

$$\begin{aligned} \sum _{i\ne j}\frac{P_{i}}{\mu _{i}}P_{ij}=\frac{P_{j}}{\mu _{j}}, j\in \varvec{S} \end{aligned}$$
(11.39)

Suppose \(\sum _{j\in S}\frac{1}{\mu _{j}}P_{j}=K>0\); then \(\sum _{j\in S}\left( \frac{1}{K\mu _{j}}P_{j}\right) =1\). Dividing both sides of (11.39) by K and transposing terms gives the system of equations for \(P_{i}, i\in \varvec{S}\),

$$\begin{aligned} \begin{array}{l} \frac{1}{K\mu _{j}}P_{j}=\sum _{i\ne j}\left( \frac{1}{K\mu _{i}} P_{i}\right) P_{ij}, j\in \varvec{S}, \\ \sum _{j\in S}\left( \frac{1}{K\mu _{j}}P_{j}\right) =1. \end{array} \end{aligned}$$
(11.40)

The system of equations (11.40) for \( \left\{ 1/\left( K\mu _{j}\right) \cdot P_{j}\right\} _{j\in \varvec{S}}\) is identical to the system of equations (11.34) for \(\left\{ \pi _{j}\right\} _{j\in \varvec{S}}\) in Markov chains. Thus

$$\begin{aligned} \frac{1}{K\mu _{j}}P_{j}=\pi _{j}, j\in \varvec{S,} \end{aligned}$$
$$\begin{aligned} P_{j}=(\pi _{j}\mu _{j})K, j\in \varvec{S}, \end{aligned}$$
(11.41)

and K is obtained from the normalizing condition

$$\begin{aligned} \sum _{j\in S}P_{j}=K\sum _{j\in S}\pi _{j}\mu _{j}=1, \end{aligned}$$

namely

$$\begin{aligned} K=\frac{1}{\sum _{j\in S}\pi _{j}\mu _{j}}\text {,} \end{aligned}$$
(11.42)

which substituted into (11.41) gives the well-known formula

$$\begin{aligned} P_{j}=\frac{\pi _{j}\mu _{j}}{\sum _{j\in \varvec{S}}\pi _{j}\mu _{j}}, \text { }j\in \varvec{S}\text {.} \end{aligned}$$
(11.43)

The key steps in this LC derivation of (11.43) are: (1) obtain expressions for the expected SP entrance and exit rates of each state; (2) apply formula (11.27) of Theorem B; (3) divide by t and take \(\lim _{t\rightarrow \infty }\); (4) evaluate the constant K by recognizing the role of the linear Markov-chain equations (11.34) for \(\left\{ \pi _{j}\right\} _{j\in \varvec{S}}\).
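The following minimal sketch carries out these steps numerically (the embedded chain, the sojourn-time distributions and their means are illustrative assumptions): it solves (11.34) for \(\pi \), applies (11.43), and cross-checks the result by simulating an SMP whose sojourn distributions differ from state to state, since only the means \(\mu _{j}\) enter (11.43).

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative embedded chain and sojourn-time specifications (assumptions).
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.6, 0.4, 0.0]])                       # P_jj = 0: each transition changes state
mu = np.array([1.0, 2.5, 0.5])                        # mean sojourn times mu_j
sojourn = [lambda: rng.exponential(1.0),              # Exp, mean 1.0
           lambda: rng.uniform(0.0, 5.0),             # Uniform, mean 2.5
           lambda: 0.5]                               # constant, mean 0.5

# pi from the Markov-chain equations (11.34), then P_j from (11.43).
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)
P_time = pi * mu / np.sum(pi * mu)

# Simulation of the SMP: only the means mu_j matter for the long-run time fractions.
state, time_in = 0, np.zeros(3)
for _ in range(200000):
    time_in[state] += sojourn[state]()
    state = rng.choice(3, p=P[state])
print(np.round(P_time, 4), np.round(time_in / time_in.sum(), 4))
```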

11.5 Non-homogeneous Pure Birth Processes

Consider the pure birth process \(\{X(t)\}_{t\ge 0}\), where X(t) denotes the population at time \( t>0\). Let the initial population be X(0) = i, a non-negative integer. Let the sequence of positive functions \(\lambda _{k}(t) \), \(k=i, i+1\),..., (\(i=0, 1\),...), denote the birth rate at time t given that the population at t is k, with the property

$$\begin{aligned} P(X(t+h)-X(t)&=1|X(t)=k)=\lambda _{k}(t)h+o(h),\\ P(X(t+h)-X(t)&=0|X(t)=k)=1-\lambda _{k}(t)h+o(h), \end{aligned}$$

where \(h>0\). Define \(P_{n}(t)\) := \(P(X(t)=n)\).

We now derive an expression for \(P_{n}(t), t>0\), \(n=i, i+1\), ..., using Theorem B, i.e., formulas (11.27) and (11.28).

The expected number of SP entrances into state i during \(\left( 0, t\right) \) is \(E(\mathcal {I}_{t}(i))\) = 0, since \(X(0)=i\), and \(X\left( t\right) \), \(t>0\), never visits state i once it increases from i to \(i+1\). On the other hand, the expected number of SP exits out of state i during \(\left( 0, t\right) \) is \(E(\mathcal {O}_{t}(i))\) = \(\int _{s=0}^{t}\lambda _{i}(s)P_{i}(s)ds\), since an SP \(i\rightarrow i+1\) transition can occur at any instant \(s\in (0, t)\). Note that \(P_{i}(0)=1\). Substituting \(E(\mathcal {I}_{t}(i))\), \(E(\mathcal {O}_{t}(i))\) and \(P_{i}(0)\) into (11.27), we obtain

$$\begin{aligned} 0=\int _{s=0}^{t}\lambda _{i}(s)P_{i}(s)ds+P_{i}(t)-1. \end{aligned}$$
(11.44)

Differentiating (11.44) with respect to t gives

$$\begin{aligned} \frac{d}{dt}P_{i}(t)+\lambda _{i}(t)P_{i}(t)=0 \end{aligned}$$

having solution, since \(P_{i}(0)\) = 1,

$$\begin{aligned} P_{i}(t)=e^{-m_{i}(t)}, t\ge 0\text {,} \end{aligned}$$
(11.45)

where

$$\begin{aligned} m_{i}(t)=\int _{s=0}^{t}\lambda _{i}(s)ds\text {.} \end{aligned}$$

For an arbitrary state \(j>i\),

$$\begin{aligned} E(\mathcal {I}_{t}(j))&=\int _{s=0}^{t}\lambda _{j-1}(s)P_{j-1}(s)ds\text {,} \end{aligned}$$
(11.46)
$$\begin{aligned} E(\mathcal {O}_{t}(j))&=\int _{s=0}^{t}\lambda _{j}(s)P_{j}(s)ds\text {.} \end{aligned}$$
(11.47)

Substituting from (11.46) and (11.47) into (11.27) gives

$$\begin{aligned} \int _{s=0}^{t}\lambda _{j-1}(s)P_{j-1}(s)ds=\int _{s=0}^{t}\lambda _{j}(s)P_{j}(s)ds+P_{j}(t)-0\text {.} \end{aligned}$$
(11.48)

Taking d / dt on both sides of (11.48) yields

$$\begin{aligned} \frac{d}{dt}P_{j}(t)+\lambda _{j}(t)P_{j}(t)=\lambda _{j-1}(t)P_{j-1}(t), \end{aligned}$$

with solution

$$\begin{aligned} P_{j}(t)=e^{-m_{j}(t)}\int _{s=0}^{t}e^{m_{j}(s)}\lambda _{j-1}(s)P_{j-1}(s)ds \text {, }t\ge 0\text {.} \end{aligned}$$
(11.49)

Formula (11.49) provides a recursive solution expressing \(P_{j}(t)\) in terms of \(P_{j-1}(\cdot )\), j = \(i+1, i+2\),..., starting from \(P_{i}(t)\) in (11.45) with \(P_{i}(0)\) = 1.

11.5.1 Non-homogeneous Poisson Process

The non-homogeneous Poisson process is a special case of the pure birth process (see, e.g., pp. 339–345 in [125]). Assume \( X(0)=0\), \(\lambda _{j}(t)\equiv \lambda (t)\) independent of the state j, so that \(m(t)=\int _{s=0}^{t}\lambda (s)ds\). Setting \(i=0\) in (11.45) gives \(P_{0}(t)=e^{-m(t)}.\) From (11.49) we obtain (by induction) the well-known formula

$$\begin{aligned} P_{n}(t)=e^{-m(t)} \frac{(m(t))^{n}}{n!}, n=0, 1, 2,...\text { .} \end{aligned}$$
(11.50)

Formula (11.50) is the pmf (probability mass function) of a Poisson distribution with mean m(t). Then \(P_{n}(t)\), \( n=0, 1 \),..., for the standard Poisson process are obtained from (11.50) by setting \(\lambda (t)\equiv \lambda ,\) so that \(m(t)\equiv \lambda t.\)
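The recursion (11.49) is also easy to evaluate numerically. The sketch below (the rate function \(\lambda (t)\) and the time grid are illustrative assumptions, with X(0) = 0) starts from (11.45), computes each \(P_{j}(t)\) by cumulative trapezoidal integration and, since the rate is state-independent, compares the result with the non-homogeneous Poisson pmf (11.50).

```python
import numpy as np
from math import factorial

lam = lambda s: 1.0 + 0.5 * np.sin(s)        # state-independent rate (Poisson special case)
t = np.linspace(0.0, 5.0, 5001)
h = t[1] - t[0]

def cumtrapz(y):                              # cumulative trapezoidal integral on the grid
    out = np.zeros_like(y)
    out[1:] = np.cumsum((y[1:] + y[:-1]) * h / 2.0)
    return out

m = cumtrapz(lam(t))                          # m_j(t), the same for every state j here
P_prev = np.exp(-m)                           # P_0(t) = e^{-m(t)}, formula (11.45)
probs = [P_prev]
for j in range(5):                            # recursion (11.49) for the next five states
    P_prev = np.exp(-m) * cumtrapz(np.exp(m) * lam(t) * P_prev)
    probs.append(P_prev)

# Compare with the non-homogeneous Poisson pmf (11.50) at t = 5.
for n, P_n in enumerate(probs):
    print(n, round(float(P_n[-1]), 6), round(float(np.exp(-m[-1]) * m[-1] ** n / factorial(n)), 6))
```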

11.5.2 Yule Process

The Yule process is a special case of the pure birth process, where the birth rate (growth rate) is directly proportional to the current population size, but independent of time t. Assume X(0) = 1 and \(\lambda _{i}(t)\) = \(i\lambda \), \(t\ge 0\) , \(i=1, 2\),.... Then \(P_{1}(t)\) = \(e^{-\lambda t}\) (= probability of no births during \(\left( 0, t\right) \)). Using (11.49) and mathematical induction, we obtain the well-known geometric distribution for the Yule process

$$\begin{aligned} P_{n}(t)=(1-e^{-\lambda t})^{n-1}e^{-\lambda t}, n=1, 2 \text {,....} \end{aligned}$$
(11.51)

Let \(P_{k,\left( i\right) }(t)\) := P(i independent Yule processes with the same parameter \(\lambda \) yield a total of \(k\ge i\) individuals at time \(t>0)\). Assume each process starts in state 1 at time 0. Since \(P_{n}(t)\) in (11.51) is a geometric distribution, we obtain the convolution of i i.i.d. geometric distributions as the negative binomial distribution

$$\begin{aligned} P_{k,\left( i\right) }(t)=\left( {\begin{array}{c}k-1\\ i-1\end{array}}\right) e^{-\lambda ti}(1-e^{-\lambda t})^{k-i}, k=i, i+1\text {,....} \end{aligned}$$
(11.52)

(see, e.g., p. 164ff in [73]). Formulas (11.51) and (11.52 ) can be derived in several different ways (e.g., pp. 122–123 in [99]; pp. 383–384 in [125]). We now outline a direct proof of (11.52) using LC.

Proceeding similarly as in the derivation of (11.49), we obtain

$$\begin{aligned} P_{k,\left( i\right) }(t)=(k-1)\lambda e^{-k\lambda t}\int _{s=0}^{t}e^{k\lambda s}P_{k-1,\left( i\right) }(s)ds+C_{k}e^{-k\lambda t}, k\ge i\text {,} \end{aligned}$$
(11.53)

where \(C_{k}\) = \(\left\{ \begin{array}{l} 1\text { if }k=i\text {,} \\ 0\text { if }k>i \end{array} \text {.}\right. \) Since P(no births in \(\left( 0, t\right) )\) = P(Exp\( _{i\lambda }>t)\) we have

$$\begin{aligned} P_{i,\left( i\right) }(t)=e^{-i\lambda t}. \end{aligned}$$
(11.54)

Thus (11.52) holds for k = i. From (11.54) and (11.53) with k = \(i+1\), we obtain

$$\begin{aligned} P_{i+1,\left( i\right) }(t)=ie^{-i\lambda t}\left( 1-e^{-\lambda t}\right) =\left( {\begin{array}{c}i+1-1\\ i-1\end{array}}\right) e^{-i\lambda t}\left( 1-e^{-\lambda t}\right) \text {.} \end{aligned}$$
(11.55)

Therefore (11.52) holds for k = \(i+1\).

Assume (11.52) holds for an arbitrary integer \(k>i\). We then show using (11.53) that (11.52) holds for \(k+1\). Hence it holds for all k = \(i, i+1\),..., by the principle of mathematical induction.
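The negative binomial formula (11.52) can also be checked by direct simulation of i independent Yule processes; a minimal sketch follows (the values of \(\lambda \), t and i are illustrative assumptions).

```python
import numpy as np
from math import comb, exp

rng = np.random.default_rng(5)
lam, t, i = 1.0, 1.0, 3                       # illustrative parameter values (assumptions)

def yule_population(t_end):
    """Population at time t_end of one Yule process started at 1 (birth rate n*lam)."""
    n, s = 1, 0.0
    while True:
        s += rng.exponential(1.0 / (n * lam))
        if s > t_end:
            return n
        n += 1

totals = np.array([sum(yule_population(t) for _ in range(i)) for _ in range(100000)])
for k in range(i, i + 5):
    exact = comb(k - 1, i - 1) * exp(-lam * t * i) * (1.0 - exp(-lam * t)) ** (k - i)   # (11.52)
    print(k, round(float(np.mean(totals == k)), 4), round(exact, 4))
```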

11.6 Pharmacokinetic Model

This Section outlines an LC approach to multiple dosing in pharmacokinetics, by briefly discussing a simplified one-compartment model. We assume bolus dosing, i.e., a full dose of a drug is absorbed into the blood stream immediately at each dosing instant. Also, inter-dose times are \( \underset{dis}{=}\) Exp\(_{\lambda }\). Thus doses occur in a Poisson process at rate \(\lambda \). This assumption may be valid outside of a controlled environment. Statistical tests have shown that many patients take certain medications over time in a Poisson process [47].

Fig. 11.2

Sample path of drug concentration \(\left\{ W(t)\right\} _{t\ge 0}\) in one-compartment model with bolus dosing and first-order kinetics

We assume first-order kinetics. That is, the concentration of the drug in the blood stream decays at a rate which is proportional to the concentration. This is equivalent to a plot of the concentration over time having a negative exponential shape between doses (similar to Fig. 11.2 below).

11.6.1 Model Description

Let \(\left\{ W(t)\right\} _{t\ge 0}\) denote the drug concentration at time t. Let the dosing times be \(\left\{ \tau _{n}\right\} \), \(\tau _{n}<\tau _{n+1}\), \(n=0, 1, 2\),.... The rate of concentration decay due to drug elimination is

$$\begin{aligned} \frac{dW(t)}{dt}=-kW(t),\tau _{n}\le t<\tau _{n+1}, n=0, 1, 2,{\ldots }, \end{aligned}$$
(11.56)

where \(k>0\). The dimension of the concentration W(t) is \(\left[ W(t)\right] \) = \(\left[ \frac{Mass}{Volume}\right] \); that of the decay rate is \(\left[ \frac{dW(t)}{dt}\right] \) = \(\left[ \frac{Mass}{Volume}\right] \cdot \left[ Time^{-1}\right] \); and that of the constant k is \(\left[ k\right] \) = \(\left[ Time\right] ^{-1}\).

Let \(\left\{ P_{0}, f(x)\right\} _{x>0}\) denote the steady-state distribution of concentration (an atom \(P_{0}\) at level 0 and a pdf f(x) for \(x>0\)). Then \(P_{0}=0\) due to the negative exponential shape of the decay graph between doses (see Sects. 6.2.4, 6.2.5 and 6.4 in Chap. 6). In theory, the concentration of the drug never vanishes. In practice, it goes to 0 or is negligible. (We are not discussing the treatment effects of multiple dosing; only the concentration dynamics.) Table 11.1 below indicates the close relationship between the M/G/r(\(\cdot \)) Dam and the Pharmacokinetic model.

Table 11.1 M/G/r(\(\cdot \)) Dam versus Pharmacokinetic model

11.6.2 Dose Amounts Exponentially Distributed

We first analyze a model which assumes exponentially distributed dose sizes. This assumption may be valid if the amount of each dose absorbed is affected randomly by the dosing environment (e.g., acidity, presence of enzymes, interaction with other medications, etc.). Another instance could occur when eye drops are instilled by a patient, say approximately every 6 h. Often, the sizes of the individual drops may vary considerably, due to using a hand-squeezed medicine dropper. The location on the cornea of the instillation may vary from dose to dose, possibly affecting absorption. This could create random increases in concentration with the successive doses during a dosing regime. Similar remarks apply to fast-acting sprays, such as nitrolingual pump sprays, or to nasal sprays. (Also, for certain medications it may be feasible to study the effect of prescribing random, exponentially distributed dose sizes, in order to test whether doing so decreases the long-run variance of the concentration.)

Let us assume the bolus dose amounts are randomly distributed as Exp\(_{\mu }\). Since \(P_{0}\) = 0 the LC balance equation for the pdf of concentration is

$$\begin{aligned} kxf(x)=\lambda \int _{y=0}^{x}e^{-\mu (x-y)}f(y)dy. \end{aligned}$$
(11.57)

Equation (11.57) has the solution

$$\begin{aligned} f(x)= \frac{1}{\Gamma \left( \frac{\lambda }{k}\right) }(\mu x)^{(\frac{\lambda }{k }-1)}e^{-\mu x}\mu , x>0. \end{aligned}$$
(11.58)

where \(\Gamma \left( \cdot \right) \) is the Gamma function (see Sect. 6.4, formulas (6.48) and (6.49)). Let W denote the steady-state concentration of W(t) as \(t\rightarrow \infty \), having the pdf in (11.58). The first and second moments of W are

$$\begin{aligned} E(W)=\frac{\lambda }{k\mu },\quad E(W^{2})=\frac{\lambda }{k\mu ^{2}}\left( \frac{\lambda }{k}+1\right) \text {.} \end{aligned}$$

The variance of W is

$$\begin{aligned} Var(W)=E(W^{2})-(E(W))^{2}=\frac{\lambda }{k\mu ^{2}}\text {.} \end{aligned}$$

We can find the probability that W lies between two threshold limits, say \(\alpha <\beta \), using

$$\begin{aligned} P(\alpha<W<\beta )=\int _{x=\alpha }^{\beta }\frac{1}{\Gamma \left( \frac{ \lambda }{k}\right) }\mu (\mu x)^{(\frac{\lambda }{k}-1)}e^{-\mu x}dx. \end{aligned}$$
(11.59)

The information in (11.59) can be useful when multiple dosing continues for a long time, e.g., when administering the blood thinner coumadin (warfarin). If the concentration is \({<}\alpha \), coumadin is not effective for the intended treatment. If the concentration is \({>}\beta \), the blood becomes too thin. The interval \((\alpha ,\beta )\) is thought of as the therapeutic range (see Fig. 11.2).
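
Since (11.58) is a gamma pdf with shape \(\lambda /k\) and scale \(1/\mu \), the moments and the therapeutic-range probability (11.59) can be evaluated directly; a minimal sketch follows (the parameter values and the limits \(\alpha ,\beta \) are illustrative assumptions).

```python
from scipy.stats import gamma

lam, k, mu = 2.0, 0.5, 1.5              # illustrative parameter values (assumptions)
alpha, beta = 1.0, 4.0                  # hypothetical therapeutic range

W = gamma(a=lam / k, scale=1.0 / mu)    # stationary concentration W, pdf (11.58)
print("E(W)    :", W.mean(), "vs", lam / (k * mu))
print("Var(W)  :", W.var(), "vs", lam / (k * mu ** 2))
print("P(a<W<b):", W.cdf(beta) - W.cdf(alpha))          # formula (11.59)
```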

The type of analysis outlined briefly here can be extended to various pharmacokinetic models of varying complexity.

11.6.3 Dose Amounts Deterministic

We next suppose the dose amounts are all of size \(D>0\) units (e.g., milligrams). This model is equivalent to an M/D/r(\(\cdot \)) dam (see, e.g., Sect. 6.2).

Equation for Stationary CDF and PDF of Concentration

Consider a sample path of \(\left\{ W(t)\right\} _{t\ge 0}\) (similar to Fig. 11.2 with all jump amounts equal to D). Fix level \(x>0\). The SP downcrossing rate of level \(x>0\) is kxf(x). The SP upcrossing rate of x is equal to \(\lambda F(x)-\lambda F(x-D)\), where F(x) = \(\int _{y=0}^{x}f(y)dy\) (see Sect. 3.10). The principle of rate balance across level x gives an equation for the pdf f(x) and cdf F(x) of W, namely

$$\begin{aligned} kxf(x)=\lambda F(x)-\lambda F(x-D), x>0. \end{aligned}$$
(11.60)

Due to jumps of size D we derive f(x) and F(x) on successive non-overlapping intervals of length D. Denote the pdf and cdf of W as \( f_{0}(x)\), \(F_{0}(x)\), \(x\in (0, D]\), and as \(f_{n}(x)\), \(F_{n}(x)\), \(x\in \left[ nD,(n+1)D\right) \), \(n=1, 2\),...

In differential equation (11.60) for \(F(\cdot )\) (since f(x) = \(F^{\prime }(x)\)), observe that \(F(x-D)\) = 0 for \(x\in (0, D)\). Thus

$$\begin{aligned} kxf_{0}(x)=\lambda F_{0}(x), x\in (0, D) \text {,} \end{aligned}$$

implying

$$\begin{aligned} \frac{F_{0}^{\prime }(x)}{F_{0}(x)}=\frac{d}{dx}\ln F_{0}(x)=\frac{\lambda }{ kx}, x\in (0, D) \end{aligned}$$

with solution

$$\begin{aligned} F_{0}(x)=Ax^{\frac{\lambda }{k}}, x\in (0, D)\text {,} \end{aligned}$$
(11.61)

where A is a constant to be determined. For \(x\in [D, 2D)\), substituting from (11.61) into equation (11.60) gives

$$\begin{aligned} kxf_{1}(x)= & {} \lambda F_{1}(x)-\lambda F_{0}\left( x-D\right) \text {,} \nonumber \\ f_{1}(x)= & {} \frac{\lambda }{kx}F_{1}(x)-\frac{\lambda }{kx}A\left( x-D\right) ^{\frac{\lambda }{k}}\text {,} \nonumber \\ F_{1}^{\prime }(x)-\frac{\lambda }{kx}F_{1}(x)= & {} -\frac{\lambda }{kx} A\left( x-D\right) ^{\frac{\lambda }{k}}, x\in [D, 2D)\text {.} \end{aligned}$$
(11.62)

To solve the differential equation (11.62), multiply both sides by the integrating factor \(e^{-\int _{D}^{x}\frac{\lambda }{ku}du}\) and apply continuity of the CDF at level D, i.e., \(F_{1}(D^{+})\) = \(F_{0}(D^{-})\), to obtain

$$\begin{aligned}\begin{gathered} F_{1}^{\prime }(x)e^{-\int _{D}^{x}\frac{\lambda }{ku}du}-\frac{\lambda }{kx} F_{1}(x)e^{-\int _{D}^{x}\frac{\lambda }{ku}du}=-\frac{\lambda }{kx}A\left( x-D\right) ^{\frac{\lambda }{k}}e^{-\int _{D}^{x}\frac{\lambda }{ku}du}, \\ \frac{d}{dx}\left[ F_{1}(x)e^{-\int _{D}^{x}\frac{\lambda }{ku}du}\right] =-\frac{\lambda }{kx}A\left( x-D\right) ^{\frac{\lambda }{k}}e^{-\int _{D}^{x}\frac{\lambda }{ku}du}\text {,} \\ F_{1}(x)=-e^{+\frac{\lambda }{k}\ln \left( x/D\right) }\int _{D}^{x}\frac{\lambda }{ky}A\left( y-D\right) ^{\frac{\lambda }{k}}e^{-\frac{\lambda }{k}\ln \left( y/D\right) }dy+C_{1}e^{+\frac{\lambda }{k}\ln \left( x/D\right) }\text {,} \end{gathered}\end{aligned}$$

where \(C_{1}\) is a constant. Setting x = D in the last equality gives \(F_{1}(D)\) = \(C_{1}\) = \(AD^{\frac{\lambda }{k}}\) = \(F_{0}(D^{-})\), from (11.61), since \(\ln \left( 1\right) \) = 0, which leads to

$$\begin{aligned} F_{1}(x)=Ax^{\frac{\lambda }{k}}\left[ -\frac{\lambda }{kD} \int _{D}^{x}\left( \frac{y}{D}-1\right) ^{\frac{\lambda }{k}}\left( \frac{y}{D}\right) ^{-\left( \frac{\lambda }{k}+1\right) }dy+1\right] , x\in [D, 2D)\text {,} \end{aligned}$$
(11.63)

where A is defined in (11.61). Setting x = D in (11.63) gives \(F_{1}(D^{+})\) = \(AD^{\frac{\lambda }{k}}\) = \(F_{0}(D^{-})\), which checks with (11.61).

The solution for \(F_{n}(x)\), \(x\in [nD,(n+1)D)\), \(n=2, 3,\)..., can be obtained by a similar recursive procedure and mathematical induction. We now give the induction step. Suppose we know \(F_{n}(x)\), \(x\in [nD,\left( n+1\right) D)\). For \(x\in [\left( n+1\right) D,\left( n+2\right) D)\), letting the integrating factor be \(\varvec{\Phi }(x)\) := \(e^{-\frac{ \lambda }{k}\int _{\left( n+1\right) D}^{x}\frac{1}{u}du}\), we get

$$\begin{aligned}\begin{gathered} F_{n+1}^{\prime }(x)\varvec{\Phi }(x)-\frac{\lambda }{kx}F_{n+1}(x)\varvec{\Phi }(x)=-\frac{\lambda }{kx}F_{n}(x-D)\varvec{\Phi }(x)\text {,} \\ \frac{d}{dx}\left[ F_{n+1}(x)\varvec{\Phi }(x)\right] =-\frac{\lambda }{kx}F_{n}(x-D)\varvec{\Phi }(x)\text {,} \\ F_{n+1}(x)\left( \frac{x}{\left( n+1\right) D}\right) ^{-\frac{\lambda }{k}}=-\frac{\lambda }{k}\int _{\left( n+1\right) D}^{x}\frac{1}{y}F_{n}(y-D)\left( \frac{y}{\left( n+1\right) D}\right) ^{-\frac{\lambda }{k}}dy+C_{n+1}\text {,} \\ F_{n+1}(x)=-\frac{\lambda }{k}\left( \frac{x}{\left( n+1\right) D}\right) ^{\frac{\lambda }{k}}\int _{\left( n+1\right) D}^{x}\frac{1}{y}\left( \frac{y}{\left( n+1\right) D}\right) ^{-\frac{\lambda }{k}}F_{n}(y-D)dy+C_{n+1}\left( \frac{x}{\left( n+1\right) D}\right) ^{\frac{\lambda }{k}}\text {.} \end{gathered}\end{aligned}$$

Letting x = \(\left( n+1\right) D\) gives \(C_{n+1}\) = \(F_{n+1}(\left( n+1\right) D)\) = \(F_{n}(\left( n+1\right) D^{-})\) which is known on the assumption we know \(F_{n}(x)\), \(x\in [nD,\left( n+1\right) D)\). Moreover, \(C_{n+1}\) will be in terms of the factor A. Thus, in principle, we can derive \(F_{n}(x)\), \(n=0, 1, 2\),..., in terms of A. The constant A can then be determined (or closely approximated) using the normalizing condition \(F(\infty )\) = 1. Once F(x) is obtained, we can determine f(x) by substituting into (11.60) (as in Sect. 3.10).
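
As a numerical alternative to the recursive integrations, equation (11.60) can also be integrated directly by the method of steps; the sketch below (parameter values are illustrative assumptions) uses a forward Euler step, starts from (11.61) with A = 1, and recovers A from the normalizing condition \(F(\infty )\) = 1.

```python
import numpy as np

lam, k, D = 1.0, 0.5, 1.0            # illustrative parameter values (assumptions)
h = 1e-3
x = np.arange(h, 25.0, h)            # state-space grid; 25 is far into the upper tail
G = np.zeros_like(x)                 # unnormalized cdf, i.e., F(x)/A with A from (11.61)

G[x < D] = x[x < D] ** (lam / k)     # (11.61) with A = 1 on (0, D)

# Method of steps for x >= D, using (11.60): F'(x) = (lam/(k x)) (F(x) - F(x - D)).
start = np.searchsorted(x, D)
for m in range(start, len(x)):
    lagged = np.interp(x[m - 1] - D, x, G)         # G(x - D), already computed
    G[m] = G[m - 1] + h * (lam / (k * x[m - 1])) * (G[m - 1] - lagged)

A = 1.0 / G[-1]                      # normalizing condition F(infinity) = 1 determines A
F = A * G
print("A =", round(A, 5), "  F(2D) =", round(float(np.interp(2 * D, x, F)), 5))
```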

11.6.4 Using LCE to Compute f(x)

As an alternative to the foregoing analytical solution, we can solve for f(x) using LCE (LC Estimation) via a typical simulated sample path of \( \left\{ W(s)\right\} _{s\ge 0}\) over a long time \(t>0\), to estimate the pdf f(x) for \(x\in (0, x_{M(t)})\) where \(x_{M(t)}\) is the maximum state-space partition level (see Sects. 9.3.1 and 9.6).
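A minimal sketch of such an LCE computation for the exponential-dose model of Sect. 11.6.2 is given below (parameter values are illustrative assumptions): it simulates \(\left\{ W(t)\right\} \), counts downcrossings of a few fixed levels, estimates f(x) from the LC relation that the downcrossing rate of level x equals kxf(x), and compares with the gamma pdf (11.58).

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(0)
lam, k, mu, T = 3.0, 1.0, 2.0, 50000.0       # illustrative parameter values (assumptions)
levels = np.array([0.5, 1.0, 1.5, 2.0, 3.0])

w, t, down = 1.0, 0.0, np.zeros_like(levels)
while t < T:
    dt = rng.exponential(1.0 / lam)           # time to next dose
    w_end = w * np.exp(-k * dt)               # exponential decay between doses, (11.56)
    down += (w > levels) & (w_end <= levels)  # downcrossings of each level during decay
    w = w_end + rng.exponential(1.0 / mu)     # bolus dose ~ Exp(mu)
    t += dt

f_hat = down / (T * k * levels)               # LCE: downcrossing rate of x is k x f(x)
f_exact = mu * (mu * levels) ** (lam / k - 1) * np.exp(-mu * levels) / gamma(lam / k)
print(np.round(f_hat, 4), np.round(f_exact, 4))
```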

Remark 11.4

We mention that it is possible to apply Theorem B to compute the time-dependent pdf and cdf of concentration (see formulas (11.27)–(11.30) in Sect. 11.2). Knowledge of transient distributions may be useful in multiple dosing regimes where it is important to estimate the concentration after a short dosing period, or to manage dosing quantities.

Remark 11.5

Some related stochastic models have characteristics in common with the pharmacokinetic model. One group of such models involves consumer response (CR) to non-uniform advertisements, which have been analyzed along similar lines using LC (see, e.g., [40]).

11.7 Counter Models

For a description of a classical example of a counter model see pp. 128–131 in [99]. In this section we analyze the transient total output of type-1 and type-2 counters, using LC.

11.7.1 Type-2 Counter

We first analyze a type-2 counter. Electrical pulses arrive in a Poisson process at rate \(\lambda \). Each arriving pulse is followed immediately by a fixed locked period of length D > 0, during which new arrivals cannot be detected by the counter. However, if a new arrival occurs at a time t while the counter is locked, then the locked period is extended to time \(t+D\). Thus the locked periods “telescope” to form a ‘total locked period’ denoted by L, implying \(L\ge D\). Arrivals can be detected only when the counter is unlocked or free. Let us assume that the counter is free at time t = 0 (see Fig. 11.3).

Let the amplitudes of the pulses be \(\underset{dis}{\equiv }X\), having cdf B(y), \(y>0\). Let \(\eta _{i}(t)\), \(t\ge \tau _{i}\), denote the output at time t due to the detected pulse \(X_{i}\) occurring at \(\tau _{i}\). Assume that \(\eta _{i}(t)\) dissipates at rate

$$\begin{aligned} \frac{d\eta _{i}(t)}{dt}=-k\cdot \eta _{i}(t), t>\tau _{i}\text {,} \end{aligned}$$
(11.64)

where the constant \(k>0\) is the same for all i = 1, 2,....

Let \(\eta _{t}\) denote the total output at time t, due to all registered (detected) pulses that arrive during \(\left( 0, t\right) \) (see Fig. 11.3). If the number of detected pulses in \(\left( 0, t\right) \) is n, then

$$\begin{aligned} \eta _{t}=\sum _{i=1}^{n}\eta _{i}(t),\ \tau _{n}\le t<\tau _{n+1}, n=1, 2,\ldots \text {,} \end{aligned}$$
(11.65)

From (11.65)

$$\begin{aligned} \frac{d}{dt}\eta _{t}=-k\sum _{i=1}^{n}\eta _{i}(t)=-k\eta _{t},\ \tau _{n}\le t<\tau _{n+1}, n=1, 2\text {,....} \end{aligned}$$
(11.66)

Denote the cdf and pdf of \(\eta _{t}\) respectively by \(F_{t}(x)\) and \(f_{t}(x)\) (=\(\frac{d}{dx}F_{t}(x)\), \(x>0\), wherever the derivative exists).

Fig. 11.3

Sample path of total output \( \eta _{t}\) for type-2 counter model. Total locked periods are each \( \underset{dis}{=}L\ge D\). Arrivals during locked periods are not detected, but extend it by D. Pulses arrive at Poisson rate \(\lambda \)

11.7.2 Sample Path of Total Output \(\eta _{t}\)

A sample path of the process \(\left\{ \eta _{t}\right\} _{t\ge 0}\) consists of segments that decay exponentially with decay constant k, between the \( \tau _{i}\)s, which are instants when arrivals are detected (Fig. 11.3). That is,

$$\begin{aligned} \eta _{t}=\sum _{i=1}^{n}X_{i}e^{-k(t-\tau _{i})},\ \tau _{n}\le t<\tau _{n+1}, n=1, 2,\ldots \end{aligned}$$
(11.67)

Note that a sample path cannot descend to level 0 due to exponential decay.

Probability that the Counter Is Free at Time t

Let p(t) := P(counter is free to detect a new arriving pulse at time \(t\ge 0\)). Then

$$\begin{aligned} p(t)=\left\{ \begin{array}{l} e^{-\lambda t}, 0<t<D, \\ e^{-\lambda D}, t\ge D\text {.} \end{array} \right. \end{aligned}$$
(11.68)

The reason for (11.68) is that for \( 0<t<D\), the counter is free at t iff there is no arrival in (0, t), which has probability \(e^{-\lambda t}\). For \(t\ge D\), the counter is free at time t iff there has not been an arrival during the interval \(\left( t-D, t\right) \). The probability of this event is \(e^{-\lambda D}\), by the memoryless property of Exp\(_{\lambda }\) (see, e.g., pp. 179–181 in [99]).

11.7.3 Integro-differential Equation for PDF of Output

Consider level \(x>0\) in the state space, and state-space set \(\varvec{A} _{x}\) := \(\left( 0, x\right] \). Similarly as in the theorems on down- and upcrossings of level x in Sect. 6.2.8, and also in Theorem 6.3 in Sect. 6.2.9, we infer that for SP entrances into set \(\varvec{A}_{x}\) (all entrances are downcrossings of level x)

$$\begin{aligned} \frac{\partial }{\partial t}E(\mathcal {I}_{t}(\varvec{A}_{x}))=\frac{ \partial }{\partial t}E(\mathcal {D}_{t}(x))=kxf_{t}(x), t>0\text {.} \end{aligned}$$
(11.69)

For SP exits out of \(\varvec{A}_{x}\) (all exits are upcrossings of level x), using (11.68),

$$\begin{aligned} \begin{array}{l} \frac{\partial }{\partial t}E(\mathcal {U}_{t}(x))=\frac{\partial }{\partial t}E(\mathcal {O}_{t}(A_{x})) \\ =\left\{ \begin{array}{l} \lambda e^{-\lambda t}\cdot \int _{y=0}^{x}\overline{B} (x-y)f_{t}(y)dy, x>0, 0<t<D, \\ \lambda e^{-\lambda D}\cdot \int _{y=0}^{x}\overline{B}(x-y)f_{t}(y)dy, x>0, t \ge D\text {.} \end{array} \right. \end{array} \end{aligned}$$
(11.70)

Substituting (11.69) and (11.70) into Theorem B (i.e., Theorem 4.1 in Sect. 4.2.1), and noting that \(\frac{\partial }{\partial t}F_{t}(x)\) = \(-\frac{\partial }{\partial t}(1-F_{t}(x))\), we get the integro-differential equations for the pdf \(f_{t}(x)\),

$$\begin{aligned} kxf_{t}(x)=\lambda e^{-\lambda t}\cdot \int _{y=0}^{x}\overline{B} (x-y)f_{t}(y)dy&-\frac{\partial }{\partial t}(1-F_{t}(x)), \nonumber \\&x>0, 0<t<D, \end{aligned}$$
(11.71)
$$\begin{aligned} kxf_{t}(x)=\lambda e^{-\lambda D}\cdot \int _{y=0}^{x}\overline{B} (x-y)f_{t}(y)dy-\frac{\partial }{\partial t}(&1-F_{t}(x)), \nonumber \\&x>0, t\ge D, \end{aligned}$$
(11.72)

where the arrival rate is \(\lambda \), and a time-t arrival is registered at time t iff the counter is unlocked (free).

11.7.4 Expected Value of Total Output

We obtain the expected value of \(\eta _{t}\) by integrating both sides of (11.71) and (11.72) with respect to \(x\in (0,\infty )\). In (11.71) and (11.72), we assume that \(\frac{ \partial }{\partial t}F_{t}(x)\) is continuous with respect to \(t>0\), which is required to apply theorems on interchanging the operations \( \int _{x=0}^{\infty }\) and \(\frac{\partial }{\partial t}\) (see, e.g., the dominated convergence theorem, Fubini’s theorem, and related comments on p. 274 in [116]; p. 111 in [74]; p. 269 in [127]; p.273 in [6]).

Upon integrating (11.71) with respect to x, for t on (0, D) we obtain

$$\begin{aligned} kE(\eta _{t})= & {} \lambda e^{-\lambda t}E(X)-\frac{\partial }{\partial t} E(\eta _{t}), \nonumber \\ \frac{\partial }{\partial t}e^{kt}E(\eta _{t})= & {} \lambda e^{\left( k-\lambda \right) t}E(X), \nonumber \\ E(\eta _{t})= & {} \frac{\lambda e^{-\lambda t}E(X)}{k-\lambda } +Ae^{-kt}, 0<t<D,(A\text { constant),} \nonumber \\ E(\eta _{t})= & {} \frac{\lambda E(X)}{k-\lambda }\left( e^{-\lambda t}-e^{-kt}\right) , 0<t<D\text {,} \end{aligned}$$
(11.73)

since \(E(\eta _{0})\) = 0 by assumption.

Integrating (11.72) with respect to x, for t on \( (D,\infty )\) we obtain

$$\begin{aligned} kE(\eta _{t})= & {} \lambda e^{-\lambda D}E(X)-\frac{\partial }{\partial t} E(\eta _{t}), \nonumber \\ \frac{\partial }{\partial t}e^{kt}E(\eta _{t})= & {} \lambda e^{-\lambda D}E(X)e^{kt}, \nonumber \\ E(\eta _{t})= & {} \frac{\lambda e^{-\lambda D}E(X)}{k}+Ae^{-kt}, t\ge D, \end{aligned}$$
(11.74)

where the constant A is given by

$$\begin{aligned} A=\lambda E(X)\left( \frac{e^{-(\lambda -k)D}-1}{k-\lambda }-\frac{ e^{-(\lambda -k)D}}{k}\right) . \end{aligned}$$

To evaluate A, we have used continuity (a.s.) of \(\eta _{t}\) at t = D, i.e., \(\eta _{D^{-}}=\eta _{D}\) (see Fig. 11.3), which implies continuity of \(E(\eta _{t})\) at t = D. Thus, from (11.73),

$$\begin{aligned} E(\eta _{D})=\frac{\lambda E(X)}{k-\lambda }\left( e^{-\lambda D}-e^{-kD}\right) \text {.} \end{aligned}$$

If \(t\rightarrow \infty \), then (11.74) reduces to

$$\begin{aligned} \lim _{t\rightarrow \infty }E(\eta _{t})=\frac{\lambda e^{-\lambda D}E(X)}{k} \text {.} \end{aligned}$$

If \(D=0\), then \(A=-\frac{\lambda E(X)}{k}\). We then obtain \(E(\eta _{t})= \frac{\lambda E(X)}{k}\left( 1-e^{-kt}\right) \) and \(\lim _{t\rightarrow \infty }E(\eta _{t})=\frac{\lambda E(X)}{k}\), as on p. 131 in [99].
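
The mean formulas (11.73) and (11.74) can be cross-checked by simulating the type-2 counter directly; a minimal sketch follows (the parameter values and the Exp(1) pulse amplitudes are illustrative assumptions; \(k\ne \lambda \) is required by the form of (11.74)).

```python
import numpy as np

rng = np.random.default_rng(1)
lam, k, D, T = 2.0, 1.0, 0.4, 3.0      # illustrative parameter values (assumptions)
EX = 1.0                                # pulse amplitudes ~ Exp(1), so E(X) = 1 (assumption)

def total_output(T):
    """One realization of the type-2 counter total output eta_T."""
    times = np.sort(rng.uniform(0.0, T, rng.poisson(lam * T)))   # Poisson arrivals on (0, T)
    eta, prev = 0.0, -np.inf
    for s in times:
        if s - prev > D:                # counter is free iff no arrival in (s - D, s)
            eta += rng.exponential(EX) * np.exp(-k * (T - s))    # registered pulse decays
        prev = s
    return eta

est = np.mean([total_output(T) for _ in range(100000)])
A = lam * EX * ((np.exp(-(lam - k) * D) - 1.0) / (k - lam) - np.exp(-(lam - k) * D) / k)
exact = lam * np.exp(-lam * D) * EX / k + A * np.exp(-k * T)     # (11.74), valid for T >= D
print(round(float(est), 4), round(float(exact), 4))
```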

11.7.5 Type-1 Counter

A type-1 counter differs from the type-2 counter analyzed in Sect. 11.7.1 only in the locking mechanism (see, e.g., pp. 177–179 in [99]). In a type-1 counter, only arrivals registered (detected) while the counter is free generate locked periods. Arrivals that occur while the counter is locked do not affect the locked period. Thus every locked period has length \(D>0\). Aside from the locking mechanism, we generally use the same notation and assumptions for type-1 and type-2 counters. Thus equations (11.64)–(11.67) hold for type-1 counters.

Fig. 11.4

Sample path of total output \(\eta _{t}\) for type-1 counter model. Locked periods are each \(=D\). Undetected arrivals do not affect a locked period. Pulses arrive at Poisson rate \(\lambda \)

11.7.6 Sample Path of Total Output

A sample path of the total-output process \(\left\{ \eta _{t}\right\} _{t\ge 0}\) consists of segments that decay exponentially with decay constant k, between successive detection times \(\tau _{n}\), n =1, 2,... (Fig. 11.4).

Probability that the Counter Is Free at Time t

The probability that the counter is free to register a newly arriving pulse at time t is given by the following recursion ([91]).

$$\begin{aligned} p_{1}(t)= & {} e^{-\lambda t}, 0<t<D, \nonumber \\ p_{2}(t)= & {} e^{-\lambda (t-D)}p_{1}(D)+ \frac{\left( \lambda (t-D)\right) e^{-\lambda (t-D)}}{1!}, D\le t<2D, \nonumber \\&\cdot \cdot \cdot \nonumber \\ p_{n}(t)= & {} \sum _{j=1}^{n-1}\frac{\left( \lambda (t-(n-1)D)\right) ^{j-1}\cdot e^{-\lambda (t-(n-1)D)}}{(j-1)!}p_{n-j}((n-j)D) \nonumber \\&\ \ \ +\, \frac{\left( \lambda (t-(n-1)D)\right) ^{n-1}e^{-\lambda (t-(n-1)D)}}{\left( n-1\right) !}, \nonumber \\&\begin{array}{l} \ \ \ \ \ \ \ \ \ \ \ \left( n-1\right) D\le t<nD, n=1, 2,...\text { ,} \end{array} \end{aligned}$$
(11.75)

where \(\sum _{j=1}^{0}\equiv 0\).

Remark 11.6

The successive time intervals (free, locked), with mean lengths \(1/\lambda \) and D, respectively, form an alternating renewal process (see time axis in Fig. 11.4). Let p(t) = P(the counter is free at time t), \(t\ge 0\). Then \( \lim _{t\rightarrow \infty }p(t)\) = \(\left( 1/\lambda \right) /\left( 1/\lambda +D\right) \) (a known result for alternating renewal processes; see, e.g., pp. 84–86 in [66]). Hence we have proved, using probabilistic arguments, that

$$\begin{aligned} \lim _{n\rightarrow \infty }p_{n}(nD)=\frac{\frac{1}{\lambda }}{\frac{1}{ \lambda }+D}\text {,} \end{aligned}$$

where \(\left\{ p_{n}(nD)\right\} _{n=1, 2,...}\) is the series obtained by substituting t = nD in (11.75). Another way of stating the limiting time-t result is: for every \(\alpha \in [0, 1]\), the same limit holds for any convex combination of the time points \((n-1)D\) and nD, i.e.,

$$\begin{aligned} \lim _{n\rightarrow \infty }p_{n}(\alpha (n-1)D+(1-\alpha )nD)=\frac{\frac{1}{ \lambda }}{\frac{1}{\lambda }+D}\text {.} \end{aligned}$$
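A minimal sketch of the recursion (11.75), restricted to the sequence \(p_{n}(nD)\) of Remark 11.6, is given below. The values \(\lambda =1\), \(D=0.5\) are illustrative assumptions chosen only to exercise the computation; the last printed value can be compared with the alternating-renewal limit \(\left( 1/\lambda \right) /\left( 1/\lambda +D\right) \).

```python
import math

lam, D = 1.0, 0.5        # illustrative (assumed) pulse rate and locked-period length

def p_at_multiples_of_D(n_max, lam, D):
    """q[n] = p_n(nD), n = 1..n_max, obtained by setting t = nD in recursion (11.75)."""
    q = {1: math.exp(-lam * D)}                       # p_1(D) = e^{-lam*D}
    u = lam * D                                       # lam*(t - (n-1)D) with t = nD
    for n in range(2, n_max + 1):
        s = sum(u ** (j - 1) * math.exp(-u) / math.factorial(j - 1) * q[n - j]
                for j in range(1, n))                 # sum over j = 1, ..., n-1
        s += u ** (n - 1) * math.exp(-u) / math.factorial(n - 1)
        q[n] = s
    return q

q = p_at_multiples_of_D(40, lam, D)
limit = (1.0 / lam) / (1.0 / lam + D)                 # limit from Remark 11.6
print(q[40], limit)
```

For these parameter values the computed sequence settles near the limit 2/3 within a few terms, consistent with Remark 11.6.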

11.7.7 Integro-differential Equation for PDF of Output

Consider level \(x>0\) in the state space; and state-space set \(\varvec{A} _{x}=\left( 0, x\right] \). We can show as in Sect. 6.2.9, that for SP entrances into set \(\varvec{A}_{x}\),

$$\begin{aligned} \frac{\partial }{\partial t}E(\mathcal {I}_{t}(\varvec{A}_{x}))=\frac{ \partial }{\partial t}E(\mathcal {D}_{t}(x))=kxf_{t}(x), t>0. \end{aligned}$$
(11.76)

For SP exits out of \(\varvec{A}_{x}\),

$$\begin{aligned} \frac{\partial }{\partial t}E(\mathcal {O}_{t}(\varvec{A}_{x}))=&\frac{ \partial }{\partial t}E(\mathcal {U}_{t}(x))=\lambda p_{n}(t)\cdot \int _{y=0}^{x}\overline{B}(x-y)f_{t}(y)dy, \nonumber \\&\left( n-1\right) D\le t<nD, n=1, 2,.... \end{aligned}$$
(11.77)

In (11.77), the factor \(p_{n}(t)\) occurs because an arrival is registered only if it arrives while the counter is free.

Substituting (11.76) and (11.77) into Theorem B (Theorem 4.1 in Sect. 4.2.1), we get an integro-differential equation for the pdf \(f_{t}(x)\),

$$\begin{aligned} kxf_{t}(x)= & {} \lambda p_{n}(t)\cdot \int _{y=0}^{x} \overline{B}(x-y)f_{t}(y)dy+\frac{\partial }{\partial t}F_{t}(x), x>0, \nonumber \\ kxf_{t}(x)= & {} \lambda p_{n}(t)\cdot \int _{y=0}^{x}\overline{B} (x-y)f_{t}(y)dy-\frac{\partial }{\partial t}(1-F_{t}(x)), x>0, \nonumber \\&\begin{array}{l} \ \ \ \ \ \left( n-1\right) D\le t<nD, n=1, 2,...\text { .} \end{array} \end{aligned}$$
(11.78)

11.7.8 Expected Value of Total Output

We obtain the expected value of \(\eta _{t}\) by integrating both sides of (11.78) with respect to \(x\in (0,\infty )\), yielding

$$\begin{aligned} E(\eta _{t})=\frac{\lambda E(X)}{k-\lambda }\left( e^{-\lambda t}-e^{-kt}\right) , 0<t<D \end{aligned}$$
(11.79)

in the same manner as for (11.73). Similarly, we can obtain \(E(\eta _{t})\), \(\, nD\le t<\left( n+1\right) D\), \( n=1, 2\),.... (We do not carry out the latter computation here.)

Remark 11.7

If the locked-period length is \(D=0\), then \(p_{n}(t)=1, n=1, 2,...\), and every arrival is registered. We then obtain the known result \(E(\eta _{t})= \frac{\lambda E(X)}{k}\left( 1-e^{-kt}\right) , t>0\) (e.g., p. 131 in [99]).

When \(D=0\), letting \(t\rightarrow \infty \) in this expression gives \(\lim _{t\rightarrow \infty }E(\eta _{t})=\frac{\lambda E(X)}{k}\).

Remark 11.8

When there is no locked time (\(D=0\)), the foregoing type-1 and type-2 counter models coincide with an M/G/r(\(\cdot \)) dam with efflux rate proportional to content. Thus, results for a dam with r(x) = \(kx, x>0\), can be derived as a special case of either counter model. (See Sect. 6.4 for a related analysis in the M/M/r(\(\cdot \)) dam where r(x) = kx.)

11.8 Dam with Alternating Influx and Efflux

Consider a dam in which the content, when nonempty, alternates between random periods of continuous influx and continuous efflux. We arbitrarily classify periods of emptiness as parts of periods of efflux, for notational convenience. Periods of influx (inflow) are \( \underset{dis}{=}\) Exp\(_{\lambda _{1}}\) and periods of efflux (outflow) are \(\underset{dis}{=}\) Exp\(_{\lambda _{2}}\). Let \(W(t)\ge 0\) denote the content of the dam at time \(t\ge 0.\) Assume that during an influx period, the rate of increase of content is dW(t) / dt = q(W(t)), where \(q(x)>0\), \(x>0.\) Assume that during an efflux period, the rate of decrease of content is dW(t) / dt \(=-r(W(t))\), where \(r(x)>0\), \(x>0\). In addition, we assume that \(r(0^{+})>0\), i.e., there exists m > 0 such that \(\lim _{x\downarrow 0}r(x)\) = m, which guarantees that the dam reaches emptiness. Whenever the dam is empty (i.e., W(t) = 0), dW(t) / dt = 0. (By contrast, in the dam model of Sect. 6.4, \(r(0^{+})\) = 0, so that emptiness is never achieved theoretically; the same is true in the pharmaceutical kinetics model of Sect. 11.6.1.) By the memoryless property of Exp\(_{\lambda _{2}}\), sojourns at level 0 are also distributed as Exp\(_{\lambda _{2}}\) (see Fig. 11.5). The empty period is analogous to an idle period in an M/G/1 queue, or to an empty period in an M/G/r(\(\cdot \)) dam (Sect. 6.2). The efflux rate r(x) is similar to that of the M/G/r(\(\cdot \)) dam. We also assume, in the present model, that the influx rate satisfies \(q(x)>0\), \(x\ge 0\), so that \(q(0^{+})\) = q(0) > 0.

Fig. 11.5
figure 5

Sample path of dam with alternating periods of continuous influx and efflux. Slope at level x: during influx is \(\frac{d}{dt}W(t)=q(x)\); during efflux is \(-r(x)\). Slope at level 0 is \(\frac{d}{dt}W(t)=0\). Influx times are \(\underset{dis}{= }\) Exp\(_{\lambda _{1}}\), efflux and empty times are \( \underset{dis}{=}\) Exp\(_{\lambda _{2}}\) (memoryless property)

11.8.1 Analysis of the Dam Using Method of Sheets

Consider the stochastic process \(\{W(t), M(t)\}_{t\ge 0}\) where W(t) denotes the content at instant t, and the system configuration is \(M(t)\in \varvec{M}\) = \(\left\{ 0, 1, 2\right\} \). The state space is \(\varvec{S }=\left[ 0,\infty \right) \times \varvec{M}\). (See Sects. 4.4–4.5 for discussions on system configuration.) The meaning of M(t) is given in the following table.

$$\begin{aligned} \begin{array}{cl} \hline { M}({ t}) &{} \text {Meaning} \\ \hline 0 &{} \text {Empty period}. \\ 1 &{} \text {Influx phase: content is increasing}. \\ 2 &{} \text {Efflux phase: content is decreasing or at level 0}. \\ \hline \end{array} \end{aligned}$$

A sample path of \(\{W(t), M(t)\}_{t\ge 0}\) evolves over two sheets (i.e., pages) corresponding to system configurations 1 and 2, and on one line corresponding to an empty period (\(W(t)=\) 0) (Fig.  11.6).

Fig. 11.6
figure 6

Sample path of dam with continuous influx and efflux, showing line 0 and 2 sheets (pages). Sheet 1 \(\leftrightarrow \) \(M(t)=1,\) influx phase. Sheet \(2\leftrightarrow M(t)=2,\) efflux phase. Line 0 \(\leftrightarrow W(t)=0\), empty phase, bottom of Sheet 2. Also indicates composite states \(\langle (x,\infty ), i\rangle , i=1, 2 \)

11.8.2 Steady-State PDF of Content

Denote the ‘partial cdfs’ of content by

$$\begin{aligned} F_{i}(x)=\lim _{t\rightarrow \infty }P(W(t)\le x, M(t)=i), x>0, i=1, 2. \end{aligned}$$

Denote the steady-state ‘partial’ pdf of content by

$$\begin{aligned} f_{i}(x)=\frac{d}{dx}F_{i}(x), i=1, 2, x>0\text {,} \end{aligned}$$

wherever the derivative exists.

The total pdf of content (marginal pdf) is

$$\begin{aligned} f(x)=f_{1}(x)+f_{2}(x), x>0\text {.} \end{aligned}$$
(11.80)

Let \(P_{0}\) = \(\lim _{t\rightarrow \infty }P(W(t)=0)\). We shall derive: \( f_{i}(x), i=1, 2\); f(x); \(P_{0}\); F(x) = \(P_{0}+\int _{y=0}^{x}f(y)dy\), in terms of the input parameters \(\lambda _{1}\), \(\lambda _{2}\), \(q(\cdot )\), \( r(\cdot )\). The steady-state probability that the dam is in the influx phase (\(i=1\)) or efflux phase (\(i=2\)) is \(F_{i}(\infty )=\int _{x=0^{-}}^{\infty }f_{i}(x)dx\), \(i=1, 2\) (for \(i=2\), the integration from \(0^{-}\) includes the atom \(P_{0}\) at level 0, since empty periods are classified as part of the efflux phase).

11.8.3 Equations for PDFs

Consider composite state \(\left( (x,\infty ), 1\right) , x>0\), on sheet 1. The SP rate out of \(\left( (x,\infty ), 1\right) \) is \(\lambda _{1}\int _{y=x}^{\infty }f_{1}(y)dy\), since the end of an influx period signals an instantaneous SP page \(1\rightarrow \) page 2 transition from \(\left( (x,\infty ), 1\right) \) to \(\left( (x,\infty ), 2\right) \) at the same level.

The SP rate into \(\left( (x,\infty ), 1\right) \) is

$$\begin{aligned} q(x)f_{1}(x)+\lambda _{2}\int _{y=x}^{\infty }f_{2}(y)dy\text {,} \end{aligned}$$

since: (1) the SP upcrosses level x on sheet 1 at rate \(q(x)f_{1}(x)\); (2) the SP enters \(\left( (x,\infty ), 1\right) \) from \(\left( (x,\infty ), 2\right) \) (page \(2\rightarrow \) page 1 transition) at the same level (the rate at which efflux periods end when the SP is in \(\left( (x,\infty ), 2\right) \) is \(\lambda _{2}\)). Set balance, namely

$$\begin{aligned} {{\varvec{SP}}~{\varvec{rate}}~{\varvec{out}}~{\varvec{of}}}\left( (x,\infty ), 1\right) = {{\varvec{SP}}~{\varvec{rate}}~{\varvec{into}}}\left( (x,\infty ), 1\right) \text {,} \end{aligned}$$

gives an integral equation relating \(f_{1}(x)\) and \(f_{2}(x)\),

$$\begin{aligned} \lambda _{1}\int _{y=x}^{\infty }f_{1}(y)dy=q(x)f_{1}(x)+\lambda _{2}\int _{y=x}^{\infty }f_{2}(y)dy\text {.} \end{aligned}$$
(11.81)

Similarly, balancing SP rates out of, and into \(\left( (x,\infty ), 2\right) , x>0\), on sheet 2 yields the integral equation

$$\begin{aligned} \lambda _{2}\int _{y=x}^{\infty }f_{2}(y)dy+r(x)f_{2}(x)=\lambda _{1}\int _{y=x}^{\infty }f_{1}(y)dy. \end{aligned}$$
(11.82)

In (11.82), the left and right sides are the SP exit and entrance rates respectively, of \(\left( (x,\infty ), 2\right) \).

Addition of (11.81) and (11.82) yields

$$\begin{aligned} q(x)\cdot f_{1}(x)=r(x)\cdot f_{2}(x). \end{aligned}$$
(11.83)

There is an easy alternative derivation of (11.83), which follows by viewing the sample path via the “cover”. That is, we project the segments of the sample path from sheets 1 and 2 (pages 1 and 2) onto a single t-W(t) coordinate system (Fig. 11.5). Then we apply SP rate balance across level x:

$$\begin{aligned} \varvec{total}\text { }\varvec{upcrossing\ rate}=\varvec{total} \text { }\varvec{downcrossing\ rate}\text {,} \end{aligned}$$

which translates to formula (11.83).

Using (11.83), we substitute \(f_{2}(x)\) = \(\left( q(x)/r(x)\right) f_{1}(x)\) into (11.81), and take d / dx in (11.81). Then we solve the resulting differential equation, applying the initial condition

$$\begin{aligned} r(0^{+})f_{2}(0)=\lambda _{2}P_{0}=q(0^{+})f_{1}(0) \text {.} \end{aligned}$$
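Explicitly, writing \(g(x)=q(x)f_{1}(x)\) for brevity (a symbol introduced here only for this intermediate step), the substitution and differentiation turn (11.81) into the first-order linear equation

$$\begin{aligned} \frac{d}{dx}g(x)=-\left( \frac{\lambda _{1}}{q(x)}-\frac{\lambda _{2}}{r(x)}\right) g(x), \quad g(0^{+})=q(0^{+})f_{1}(0)=\lambda _{2}P_{0}\text {,} \end{aligned}$$

whose solution is \(g(x)=\lambda _{2}P_{0}\, e^{-\left( \lambda _{1}\int _{y=0}^{x}\frac{1}{q(y)}dy-\lambda _{2}\int _{y=0}^{x}\frac{1}{r(y)}dy\right) }\).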

These operations result in the formula

$$\begin{aligned} f_{1}(x)=\frac{\lambda _{2}P_{0}}{q(x)}\cdot e^{-\left( \lambda _{1}\int _{y=0}^{x}\frac{1}{q(y)}dy-\lambda _{2}\int _{y=0}^{x}\frac{1}{r(y)} dy\right) }, x>0\text {.} \end{aligned}$$
(11.84)

Since \(f_{2}(x)\) = \(\left( q(x)/r(x)\right) f_{1}(x)\), we have

$$\begin{aligned} f_{2}(x)=\frac{\lambda _{2}P_{0}}{r(x)}\cdot e^{-\left( \lambda _{1}\int _{y=0}^{x}\frac{1}{q(y)}dy-\lambda _{2}\int _{y=0}^{x}\frac{1}{r(y)} dy\right) }, x>0\text {.} \end{aligned}$$
(11.85)

Since f(x) = \(f_{1}(x)+f_{2}(x)\), adding (11.84) and (11.85) gives

$$\begin{aligned} f(x)= & {} \lambda _{2}\left( \frac{1}{q(x)}+\frac{1}{r(x)}\right) P_{0}\cdot e^{-\left( \lambda _{1}\int _{y=0}^{x}\frac{1}{q(y)}dy-\lambda _{2}\int _{y=0}^{x}\frac{1}{r(y)}dy\right) }, x>0, \nonumber \\= & {} \lambda _{2}\frac{q(x)+r(x)}{q(x)r(x)}\cdot P_{0}\cdot e^{-\left( \lambda _{1}\int _{y=0}^{x}\frac{1}{q(y)}dy-\lambda _{2}\int _{y=0}^{x}\frac{1}{r(y)} dy\right) }, x>0. \end{aligned}$$
(11.86)

The normalizing condition is

$$\begin{aligned} P_{0}+\int _{x=0}^{\infty }f(x)dx=1. \end{aligned}$$
(11.87)

From (11.86) and (11.87)

$$\begin{aligned} P_{0}=\frac{1}{1+\lambda _{2}\int _{x=0}^{\infty }\left( \frac{q(x)+r(x)}{ q(x)r(x)}\cdot e^{-\left( \lambda _{1}\int _{y=0}^{x}\frac{1}{q(y)}dy-\lambda _{2}\int _{y=0}^{x}\frac{1}{r(y)}dy\right) }\right) dx}\text {.} \end{aligned}$$
(11.88)

Remark 11.9

Formulas (11.84)–(11.88) are asymmetric with respect to \(\lambda _{1}\) and \(\lambda _{2}\), because empty periods are distributed as Exp\(_{\lambda _{2}}\) (classified as part of efflux phase).

Remark 11.10

The model can be generalized in various ways. There may be several different important state-space levels at which there is no change in content (no influx or efflux), other than at level 0. Such levels may be due to a control policy or due to natural phenomena. There would then be more than one atom in the state space. Also, the influx and efflux periods may have more general distributions. The content may be bounded above, resulting in an atom. Some of these variants are easy to analyze; others are more complex. We do not treat such variants here.

Stability Condition

A necessary condition for the pdf to exist is \(f(\infty )=0\). Thus, the exponent \(\left( \lambda _{1}\int _{y=0}^{x} \frac{1}{q(y)}dy-\lambda _{2}\int _{y=0}^{x}\frac{1}{r(y)}dy\right) \) in (11.86) and (11.88) must be positive for all \(x>0\). That is

$$\begin{aligned} \lambda _{2}\int _{y=0}^{x} \frac{1}{r(y)}dy<\lambda _{1}\int _{y=0}^{x}\frac{1}{q(y)}dy,&\nonumber \\ \lambda _{1}\int _{y=0}^{x}\frac{1}{q(y)}dy-\lambda _{2}\int _{y=0}^{x}\frac{1 }{r(y)}dy>0,\text { for all }x&>0. \end{aligned}$$
(11.89)

Remark 11.11

In (11.89), if q(y) = r(y), \(y>0\), then the stability condition reduces to \(1/\lambda _{1}\) < \(1/\lambda _{2}\), i.e., E(influx period) < E(efflux period). If q(y) = q and r(y) = r, \(y>0\), then the condition is \(1/\lambda _{1}\) < \(\left( r/q\right) \left( 1/\lambda _{2}\right) \), i.e., E(influx period) < \(\left( r/q\right) E(\)efflux period). Additionally, if r / q < 1, the condition implies E(influx period) < E(efflux period).

11.8.4 Numerical Example

Let \(\lambda _{1}=1\), \(\lambda _{2}=2\), \(q(x)=\sqrt{x}\), \(r(x)=3\sqrt{x}\), \( x>0\). Substituting into (11.89) gives, for \(x>0\),

$$\begin{aligned} \lambda _{1}\int _{y=0}^{x}\frac{1}{q(y)}dy-\lambda _{2}\int _{y=0}^{x}\frac{1 }{r(y)}dy=2\sqrt{x}\left( \lambda _{1}-\frac{\lambda _{2}}{3}\right) =2\sqrt{ x}\left( 1-\frac{2}{3}\right) >0, \end{aligned}$$

implying stability, and the steady-state pdf f(x) exists. From (11.86), we obtain

$$\begin{aligned} f(x)=\frac{8}{3\sqrt{x}}P_{0}\cdot e^{-\frac{2}{3}\sqrt{x}}, x>0\text {.} \end{aligned}$$
(11.90)

From the normalizing condition (11.87),

$$\begin{aligned} P_{0}=\frac{1}{1+\int _{x=0}^{\infty }\frac{8}{3\sqrt{x}}e^{-\frac{2}{3}\sqrt{ x}}dx}=\frac{1}{9}=0.111111\text {.} \end{aligned}$$
(11.91)

Thus

$$\begin{aligned} f(x)=\frac{8}{27\sqrt{x}}e^{-\frac{2}{3}\sqrt{x}}, x>0\text {.} \end{aligned}$$
(11.92)

From (11.91) and (11.92), the cdf is (see Figs. 11.7, 11.8),

$$\begin{aligned} F(x)=P_{0}+\int _{y=0}^{x}f(y)dy=1-\frac{8}{9}e^{-\frac{2}{3}\sqrt{x}}\text {.} \end{aligned}$$
(11.93)
Fig. 11.7
figure 7

Steady-state pdf \(f(x)=\frac{8}{ 27\sqrt{x}}e^{-\frac{2}{3}\sqrt{x}}, x>0,\) in continuous dam with alternating influx/efflux periods: \(\lambda _{1}=1\), \(\lambda _{2}=2\), \(q(x)=\sqrt{x}\), \(r(x)=3\sqrt{x}\)

Fig. 11.8
figure 8

Steady-state cdf \(F(x)=1-\frac{8}{9}e^{-\frac{2}{3}\sqrt{ x}},\) \(x>0,\) \(P_{0}=0.1111,\) in continuous dam with alternating influx/efflux periods: \(\lambda _{1}=1\), \(\lambda _{2}=2\), \( q(x)=\sqrt{x}\), \(r(x)=3\sqrt{x}\)

Proportion of Time in Influx and Efflux Phases

From (11.83) and (11.80) we obtain

$$\begin{aligned} f_{1}(x)&=\frac{2}{9\sqrt{x}}e^{-\frac{2}{3}\sqrt{x}},\text { }x>0, \\ f_{2}(x)&=\frac{2}{27\sqrt{x}}e^{-\frac{2}{3}\sqrt{x}},\text { }x>0\text {.} \end{aligned}$$

Hence the proportions of time the dam is in the influx phase and in the (nonempty) efflux phase, respectively, are

$$\begin{aligned} F_{1}(\infty )&=\int _{x=0}^{\infty }\frac{2}{9\sqrt{x}}e^{-\frac{2}{3}\sqrt{ x}}dx=0.666667, \\ F_{2}(\infty )-P_{0}&=\int _{x=0}^{\infty }\frac{2}{27\sqrt{x}}e^{-\frac{2}{3 }\sqrt{x}}dx=0.222222\text {.} \end{aligned}$$

These values are also the steady-state probabilities of the dam being in these phases at an arbitrary time point. A check on the normalizing condition is

$$\begin{aligned} P_{0}+F_{1}(\infty )+\left( F_{2}(\infty )-P_{0}\right) =0.111111+0.666667+0.222222=1\text {.} \end{aligned}$$
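The sketch below re-checks this example numerically. It evaluates (11.88) and (11.87) by numerical integration (substituting \(u=\sqrt{x}\) to remove the integrable singularity at 0) and compares \(P_{0}\), F(1), and the phase proportions with the closed-form values above; the use of scipy and the particular checkpoints are assumptions made only for this illustration.

```python
import math
from scipy import integrate

lam1, lam2 = 1.0, 2.0
q = lambda x: math.sqrt(x)          # influx rate q(x)
r = lambda x: 3.0 * math.sqrt(x)    # efflux rate r(x)

def expo(x):
    """Exponent lam1*int_0^x dy/q(y) - lam2*int_0^x dy/r(y); here = (2/3)*sqrt(x)."""
    return lam1 * 2.0 * math.sqrt(x) - lam2 * (2.0 / 3.0) * math.sqrt(x)

def f_over_P0(x):
    """f(x)/P0 from (11.86)."""
    return lam2 * (1.0 / q(x) + 1.0 / r(x)) * math.exp(-expo(x))

# Substitute x = u^2 (dx = 2u du) so the integrands are smooth at 0.
I, _ = integrate.quad(lambda u: f_over_P0(u * u) * 2.0 * u, 0.0, math.inf)
P0 = 1.0 / (1.0 + I)                                  # (11.88); closed form gives 1/9

def F(x):
    """cdf F(x) = P0 + int_0^x f(y) dy, cf. (11.87)."""
    val, _ = integrate.quad(lambda u: P0 * f_over_P0(u * u) * 2.0 * u, 0.0, math.sqrt(x))
    return P0 + val

# Phase proportions: by (11.83) and (11.80), f1 = (r/(q+r)) f = (3/4) f and f2 = (1/4) f here.
F1_inf = 0.75 * (1.0 - P0)
F2_part = 0.25 * (1.0 - P0)                           # time in nonempty efflux
print(P0, F(1.0), 1.0 - (8.0 / 9.0) * math.exp(-2.0 / 3.0))   # compare F(1) with (11.93)
print(F1_inf, F2_part, P0 + F1_inf + F2_part)
```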

11.9 Estimation of Laplace Transforms

We very briefly discuss a procedure for estimating the LST (Laplace-Stieltjes transform) of the state variable of a stochastic model. We shall use the virtual wait in a GI/G/1 queue as an example.

Suppose we want to estimate the LST of the steady-state pdf of the virtual wait in a GI/G/1 queue. Let the steady-state cdf of the virtual wait be F(x), \(x\ge 0\), having pdf f(x), \(x>0\), and let \(P_{0}\) = F(0). The LST of the mixed pdf \(\left\{ P_{0}, f(x)\right\} _{x>0}\) is (see Sect. 3.4.4 in Chap. 3)

$$\begin{aligned} F^{*}(s)=\int _{x=0}^{\infty }e^{-sx}dF(x), s>0. \end{aligned}$$
(11.94)

11.9.1 Probabilistic Interpretation of LST

The probabilistic interpretation of the LST is as follows (see p. 264 and pp. 267–269 in [104]; and various papers, e.g., [41]). In formula (11.94), the right side is the probability that an independent “catastrophe” random variable \(\underset{dis}{=}\) Exp\(_{s}\) is greater than the virtual wait having cdf F(x), \(x\ge 0\).

11.9.2 Estimation of LST

In order to estimate \(F^{*}(s)\), we simulate a sample path of the virtual wait \(\left\{ W(u)\right\} _{u\ge 0}\), over a long period of simulated time (0, t). Next, we generate a sample path of a renewal process \(\left\{ \mathcal {C}(u)\right\} _{u\ge 0}\) with inter-renewal times equal to the catastrophe r.v., and overlay it on the same time-state coordinate system (see Fig. 11.9). Fix \( s>0\). The SP jump sizes and inter-renewal times in the sample path of \( \left\{ \mathcal {C}(u)\right\} _{u\ge 0}\), are i.i.d. r.v.s \(\underset{dis}{ =}\) Exp\(_{s}\). This is because the process \(\left\{ \mathcal {C}(u)\right\} _{u\ge 0}\) represents the excess life \(\gamma \) at time u (see Sect. 10.1.5 and Fig. 10.2). The steady-state pdf of the excess life is \(f_{\gamma }(x)\) = \(se^{-sx}\), \(x>0\).

Fig. 11.9
figure 9

Sample paths of virtual wait \( \left\{ W(u)\right\} _{u\ge 0}\) and renewal process with inter-arrival time \(\underset{dis}{=}\) Exp\(_{s}\), the catastrophe r.v., \(\left\{ \mathcal {C}(u)\right\} _{u\ge 0}.\) \(T_{s}\) = \(T_{s1}+T_{s2}+\cdot \cdot \cdot +T_{s6}.\)

Now we observe the sample paths of \(\left\{ W(u)\right\} _{u\ge 0}\) and \( \left\{ \mathcal {C}(u)\right\} _{u\ge 0}\) on the fixed time interval \( \left( 0, t\right) \). We compute the sum, \(T_{s}\) = \(\sum _{i}T_{si}\), of all time intervals such that \(\mathcal {C}(u)>W(u)\), \(u\in (0, t)\) (Fig. 11.9). An estimate of \(F^{*}(s)\) is then \(\widehat{F^{*}}(s)=T_{s}/t\), which is the proportion of time such that \( \mathcal {C}(u)\) > W(u), \(u\in \left( 0, t\right) \). The probabilistic interpretation of the LST strongly suggests that \(T_{s}/t\) is an appropriate estimate of \(F^{*}(s)\).

In order to estimate \(F^{*}(s)\) as a function of \(s>0\), we repeat the procedure using different values of s (the simulated sample path of \(\left\{ W(u)\right\} _{u\ge 0}\) may be reused; only the catastrophe process depends on s). For example, we may choose a grid of N uniformly spaced values of s, say s = \(\Delta \), \(2\Delta \), \(3\Delta \), ..., \(N\Delta \), where N is a large positive integer and \(\Delta \) is a small positive number. (Different spacing of the grid may improve the estimates, e.g., if \(F(\cdot )\) is known to have certain properties such as a long tail.) This procedure results in a set of estimates \( \widehat{F^{*}}(n\Delta )=T_{n\Delta }/t\), \(n=1,..., N\). (From (11.94), \(F^{*}(0)=1\), the normalizing condition, so we set \(\widehat{F^{*}}(0)=1\).)
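A minimal simulation sketch of this estimator follows. It assumes an M/M/1 queue (arrival rate 0.5, service rate 1) purely so that the estimates can be checked against the known Pollaczek–Khinchine transform of the M/M/1 workload, and it replaces the exact bookkeeping of the intervals \(T_{si}\) by evaluating W(u) and \(\mathcal {C}(u)\) on a fine time grid. The horizon, grid step, and s-grid are illustrative assumptions.

```python
import random

random.seed(1)
lam_a, mu = 0.5, 1.0        # assumed M/M/1 arrival and service rates (rho = 0.5)
T, dt = 20000.0, 0.05       # simulated horizon and evaluation grid (approximation)

def virtual_wait_on_grid(T, dt):
    """Virtual wait W(u) of an M/M/1 queue, evaluated at u = 0, dt, 2dt, ..."""
    vals, w, t_prev = [], 0.0, 0.0
    t_arr = random.expovariate(lam_a)
    for i in range(int(T / dt)):
        u = i * dt
        while t_arr <= u:                                   # process arrivals up to u
            w = max(w - (t_arr - t_prev), 0.0) + random.expovariate(mu)
            t_prev = t_arr
            t_arr += random.expovariate(lam_a)
        vals.append(max(w - (u - t_prev), 0.0))
    return vals

def lst_estimate(s, W, dt):
    """Estimate F*(s) = T_s / t as the fraction of grid time with C(u) > W(u)."""
    t_cat = random.expovariate(s)                           # next catastrophe epoch
    hits = 0
    for i, w in enumerate(W):
        u = i * dt
        while t_cat <= u:                                   # renewal: draw a new Exp_s lifetime
            t_cat += random.expovariate(s)
        if t_cat - u > w:                                   # C(u) = t_cat - u (excess life)
            hits += 1
    return hits / len(W)

W = virtual_wait_on_grid(T, dt)
rho, Delta, N = lam_a / mu, 0.25, 8
for n in range(1, N + 1):
    s = n * Delta
    exact = (1 - rho) * (s + mu) / (s + mu - lam_a)         # M/M/1 workload LST (check only)
    print(f"s = {s:4.2f}   estimate = {lst_estimate(s, W, dt):.4f}   exact = {exact:.4f}")
```

For moderate horizons the estimates are noisy; increasing T (and refining dt) sharpens them, at the cost of run time.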

Finally, we can plot the points

$$\begin{aligned} \left( 0,\widehat{F^{*}}(0)\right) =\left( 0, 1\right) \ \text {and} \ \left( n\Delta ,\widehat{F^{*}}(n\Delta )\right) , n=1,..., N\text {,} \end{aligned}$$

on a two-dimensional \(\left( s,\widehat{F^{*}}(s)\right) \) coordinate system. The \(\left\{ n\Delta \right\} _{n=1,..., N}\) grid is on the horizontal axis; the corresponding \(\widehat{F^{*}}(n\Delta )\) terms are ordinates along vertical lines parallel to the \(\widehat{F^{*}}(s)\)-axis.

The plot will be a discrete estimate of the LST of the pdf of the virtual wait. It may be improved by smoothing techniques. In order to obtain an estimate of the pdf of the virtual wait from it, use numerical inversion of \( \left\{ \widehat{F^{*}}(n\Delta )\right\} _{n=1,..., N}\) (see, e.g., pp. 349–355 in [104]).

11.10 Simple Harmonic Motion

We analyze an elementary model of deterministic simple harmonic motion, using LC.

Consider a particle moving according to simple harmonic motion (SHM) (see, e.g., p. 133 in [10]; pp. 216–217 in [119]). Let X(t) denote the position of the particle at instant \(t\ge 0\), and \(X(0)=0\). Let the state space be the interval \( \varvec{S}=[-1,+1]\). In this version of the standard SHM model there is only one sample path, namely,

$$\begin{aligned} X(t)=\sin (t), t\ge 0. \end{aligned}$$
Fig. 11.10
figure 10

Sample path of simple harmonic motion \(X(t)=\sin t\). State space is \(\varvec{S=[-1,+1].~}\)Shows level x in \(\varvec{S}\)

We wish to determine the stationary pdf f(x) and cdf F(x) of X(t) when the particle is observed at an arbitrary time point, as \(t\rightarrow \infty \).

Consider the sample path \(X(t), t\ge 0\) (Fig. 11.10). The speed of the SP (the absolute slope of the sample path) at level x is

$$\begin{aligned} r(x)=\frac{d}{dt}\sin t|_{t=\sin ^{-1}x}=\cos \left( \sin ^{-1}x\right) = \sqrt{1-x^{2}}, x\in \left[ -1,+1\right] . \end{aligned}$$
(11.95)

Consider levels x, \(x+h\in \varvec{S}\), where \(h>0\) is small. The time required for the SP to ascend from level x to level \(x+h\) is

$$\begin{aligned} \int _{y=x}^{x+h}\frac{1}{r(y)}dy=\int _{y=x}^{x+h}\frac{1}{\sqrt{1-y^{2}}}dy. \end{aligned}$$
(11.96)

The symmetries of the sample path imply that the time required for the SP to descend from level \(x+h\) to level x is also given by (11.96).

Applying (11.96), we see that the long-run proportion of time the SP spends in state-space interval \((x, x+h)\) in a cycle of length \(2\pi \) time units is

$$\begin{aligned} \frac{2}{2\pi }\int _{y=x}^{x+h}\frac{1}{\sqrt{1-y^{2}}}dy=F(x+h)-F(x). \end{aligned}$$
(11.97)

Formula (11.97) leads to

$$\begin{aligned} \frac{1}{\pi }h\frac{1}{\sqrt{1-\left( x^{*}\right) ^{2}}}=F(x+h)-F(x) \end{aligned}$$
(11.98)

where \(x^{*}\in (x, x+h)\) by the mean value theorem for integrals, and the right side follows from the definition of F(x) as the long-run proportion of time the process spends in the state-space interval \(\left[ -1, x\right] \). Dividing both sides of (11.98) by h and letting \(h\downarrow 0\) yields

$$\begin{aligned} f(x)=\frac{1}{\pi \sqrt{1-x^{2}}}, x\in \left[ -1,+1\right] . \end{aligned}$$
(11.99)

The stationary pdf f(x) in (11.99) is interesting and suggests intuitive insights (Fig. 11.11). Note that \(\lim _{x\downarrow -1}f(x)\) = \( \lim _{x\uparrow +1}f(x)=\) \(\infty \). Also, \(\min _{x\in \varvec{S}}f(x)= \frac{1}{\pi }\), at \(x=0\). The pdf f(x) is symmetric about \(x=0\), and is convex.

Fig. 11.11
figure 11

Stationary pdf \(f(x)= \frac{1}{\pi \sqrt{1-x^{2}}}, x\in \left[ -1,+1\right] \), for particle moving in simple harmonic motion, \(X(t)=\sin t, t\ge 0\)

From (11.99), the cdf is

$$\begin{aligned} F(x)= & {} \int _{y=-1}^{x}f(y)dy=\frac{1}{\pi }\left( \sin ^{-1}(x)-\sin ^{-1}(-1)\right) \nonumber \\= & {} \frac{1}{\pi }\sin ^{-1}(x)+\frac{1}{2}, x\in [-1,+1]\text {.} \end{aligned}$$
(11.100)

11.10.1 Inferences Based on PDF and CDF

From (11.95), the speed of the particle is r(x) = \(\sqrt{1-x^{2}}\), implying \(r(\pm 1)\) = 0 and r(0) = 1, the maximum speed. Hence, at an arbitrary time point in the long run, the particle is much more likely to be found close to one of the boundaries of its range of motion \(\varvec{S}\) (\(x=\pm 1\)) than close to the center of \(\varvec{S}\). That is, the particle spends a much greater proportion of time near the boundaries x = \(\pm 1\) than near the center \( x=0\) (Fig. 11.12).

Fig. 11.12
figure 12

Stationary cdf \(F(x)=\frac{1}{ \pi }\sin ^{-1}(x)+\frac{1}{2}, x\in \left[ -1,+1\right] \), for particle moving in simple harmonic motion, \(X(t)=\sin t, t\ge 0\)

From computations using formula (11.100), the proportion of time the SP (particle) spends in the central interval \( [-0.5,+0.5]\) is \(F(0.5)-F(-0.5)=0.333\). The proportion of time the particle spends in the outer regions \([-1.0,-0.5]\cup [0.5, 1.0]\) is \(2\cdot (F(1.0)-F(0.5))=0.667.\) The “median” symmetric outer set of \(\varvec{S}\) with respect to the time spent by the particle is \(\varvec{A}_{0.5}\equiv [-1.0,-0.707]\cup [0.707, 1.0]\), i.e., P(particle \(\in \varvec{A}_{0.5})\) = 0.5. Thus the particle is as likely to be found in the two bands of width 0.293 touching the edges \(\pm 1.0\) (total width 0.586) as in the central interval of width 1.414 about 0. Casual observations of operating pendulum clocks readily corroborate these theoretical computations.
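A quick Monte Carlo check of these proportions samples the particle at uniformly distributed observation times; the horizon and sample size below are illustrative assumptions.

```python
import math, random

random.seed(3)
T = 1000.0 * 2.0 * math.pi              # long horizon (a whole number of cycles)
n = 1_000_000                           # number of uniform observation times
obs = [math.sin(random.uniform(0.0, T)) for _ in range(n)]

F = lambda x: math.asin(x) / math.pi + 0.5                  # cdf (11.100)
central = sum(abs(x) <= 0.5 for x in obs) / n               # expect F(0.5) - F(-0.5) = 1/3
outer = sum(abs(x) >= math.sqrt(0.5) for x in obs) / n      # expect 0.5 (the "median" bands)
print(central, F(0.5) - F(-0.5))
print(outer, 2.0 * (1.0 - F(math.sqrt(0.5))))
```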

Remark 11.12

The type of LC analysis in this section may be extended to analyze random trigonometric functions (e.g., \(A\sin \left( \theta t\right) +B\cos (\theta t), t\ge 0\), where A and B are random variables and \(\theta \) is a constant). Extensions may also be applicable in some models of physics and in the analysis of roots of equations.