1 Introduction

Many existing non-homogeneous Poisson process (NHPP) software reliability models have been developed through the fault intensity rate function h(t) and the mean value function m(t) within a controlled testing environment to estimate reliability metrics such as the number of residual faults, the failure rate, and the reliability of the software. Specifically, the mean value function is modeled as

$$\begin{aligned} \frac{d}{dt} m(t)= & {} h(t) ( N(t) - m(t) ), \end{aligned}$$
(1.1)

where m(t) is the number of software failures detected by time t, N(t) is the expected number of faults that exist in the software before testing, and h(t) is the time-dependent fault-detection rate per unit of time. The goal of these models is to obtain an explicit formula for m(t), which is applied to software testing data and used to make predictions about software failures and reliability in the field.
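As a quick numerical illustration (our addition, not part of the original text): with a constant rate \(h(t) = b\) and \(N(t) = N\), equation (1.1) recovers the classical Goel–Okumoto mean value function \(m(t) = N(1 - e^{-bt})\). The sketch below solves (1.1) numerically and compares it with this closed form; the values of N and b are arbitrary illustrations.

```python
# Numerical sketch (not from the paper): solve (d/dt) m = h(t) (N - m), m(0)=0,
# for the constant rate h(t) = b and compare with the Goel-Okumoto closed form
# m(t) = N (1 - exp(-b t)).  N and b are arbitrary illustrative values.
import numpy as np
from scipy.integrate import solve_ivp

N, b = 100.0, 0.5          # total faults, constant detection rate

sol = solve_ivp(lambda t, m: b * (N - m), (0.0, 10.0), [0.0],
                t_eval=np.linspace(0.0, 10.0, 101), rtol=1e-9, atol=1e-9)
m_numeric = sol.y[0]
m_closed = N * (1.0 - np.exp(-b * sol.t))

print(np.max(np.abs(m_numeric - m_closed)))   # agreement to solver tolerance
```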

We observe, however, that model (1.1) is deterministic, while the operating environments in the field are random. Specifically, there are factors of the operating environments that will affect the software failure and software reliability in an unpredictable way. Many authors have worked in different directions to extend the model (1.1) to account for the random environment effects. Teng and Pham (2006) discussed a generalized model that captures the uncertainty of the environment and its effects upon the software failure rate. Other researchers (Chang et al. 2014; Goel and Okumoto 1979; Inoue et al. 2015, 2016; Kapur et al. 2011, 2012, 2014; Kumar et al. 2016; Lee et al. 2016; Liu et al. 2016; Minamino et al. 2016; Ohba 1984; Okamura and Dohi 2016; Ohba and Yamada 1984; Persona et al. 2010; Pham and Pham 2000; Pham 1993, 1996, 2006, 2007, 2013, 2014a, b, 2016; Pham and Deng 2003; Pham et al. 1999, 2014; Pham and Zhang 1997, 2003; Roy et al. 2014; Sgarbossa and Pham 2010; Sato and Yamada 2016; Teng and Pham 2006, 2004; Xiao and Dohi 2011; Yamada and Osaki 1985; Yamada et al. 1992; Zhang and Pham 2006; Zhu and Pham 2016; Zhu et al. 2015) have also developed reliability and cost models incorporating both the testing phase and the operating phase of the software development cycle for estimating the reliability of software systems in the field. Pham (2014a) recently also developed a software reliability model with a Vtub-shaped fault-detection rate subject to the uncertainty of operating environments.

One way we can capture the random environment effects abstractly is by letting \(h(t, \omega )\) be a stochastic process adapted to a given filtration \(\mathcal{F}(t), 0 \le t < \infty \):

$$\begin{aligned} \frac{d m(t,\omega )}{dt} = h(t, \omega ) (N(t) - m(t,\omega )). \end{aligned}$$
(1.2)

Under the assumptions that \(N(t) = N\) and \(m(0) = 0\), one can derive the solution for (1.2) as

$$\begin{aligned} m(t,\omega ) = N\left[ 1 - e^{-\int _0^t h(s,\omega ) ds} \right] . \end{aligned}$$
(1.3)

As \(m(t,\omega )\) is a random variable, the interest now is to obtain an explicit representation of

$$\begin{aligned} \bar{m}(t):= E(m(t,\omega )). \end{aligned}$$
(1.4)

The general formulation of (1.2) allows for flexibility in modelling the random environment effect via the term \(h(t,\omega )\). For example, Pham (2014a) models

$$\begin{aligned} h(t,\omega ) = \eta (\omega ) \tilde{h}(t) \end{aligned}$$
(1.5)

where \(\eta (\omega )\) is a Gamma(\(\alpha , \beta \)) random variable and \(\tilde{h}(t)\) is a Vtub-shaped rate function. We will refer to this formulation as a (static) multiplicative noise model. Under these assumptions,

$$\begin{aligned} \bar{m}(t) = N\left[ 1 - \left( \frac{\beta }{\beta + \int _0^t \tilde{h}(s) ds} \right) ^\alpha \right] . \end{aligned}$$
(1.6)
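The closed form (1.6) can be checked with a quick Monte Carlo sketch (ours; all parameter values are arbitrary illustrations): sample \(\eta \sim\) Gamma(\(\alpha, \beta\)) in the rate parametrization and compare the sample average of \(N(1 - e^{-\eta H})\) with (1.6), for a fixed value of \(H = \int_0^t \tilde{h}(s)\, ds\).

```python
# Monte Carlo sanity check (not in the paper) of formula (1.6): with
# eta ~ Gamma(alpha, beta) (rate parametrization) and H = int_0^t h~(s) ds,
# E[ N (1 - e^{-eta H}) ] = N [ 1 - (beta / (beta + H))^alpha ].
# alpha, beta, N, H below are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, N, H = 2.0, 3.0, 100.0, 1.5

eta = rng.gamma(shape=alpha, scale=1.0 / beta, size=1_000_000)
mc = N * np.mean(1.0 - np.exp(-eta * H))            # Monte Carlo estimate
closed = N * (1.0 - (beta / (beta + H)) ** alpha)   # formula (1.6)

print(mc, closed)   # should agree to a few decimal places
```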

One can also generalize the model in Pham (2014a) by considering

$$\begin{aligned} h(t,\omega ) = \eta (t,\omega ) \tilde{h}(t), \end{aligned}$$
(1.7)

where \(\eta (t,\omega )\) is a well-known stochastic process adapted to a given filtration and \(\tilde{h}(t)\) is a deterministic rate function. We refer to this formulation as a dynamic multiplicative noise model. A possible choice would be \(\eta (t,\omega ) = W(t,\omega )\) where \(W(t,\omega )\) is a Brownian motion. We do not pursue the dynamic multiplicative noise model and its implications in this paper. Instead, we propose a dynamic additive noise model:

$$\begin{aligned} h(t,\omega ) = \tilde{h}(t) + \dot{M}(t,\omega ). \end{aligned}$$
(1.8)

The goal in this formulation is to choose \(\dot{M}(t,\omega )\) such that \(M(t,\omega ),\) the (formal) anti-derivative of \(\dot{M}(t,\omega )\) with respect to t, is a martingale. That is, for all \(s < t\),

$$\begin{aligned} E(M(t,\omega ) | \mathcal{F}(s)) = M(s,\omega ), \end{aligned}$$
(1.9)

where \(\mathcal{F}(t), t \ge 0\) is a given filtration representing the flow of information in time. Equation (1.8) and the martingale property of \(M(t,\omega )\) imply that for all \(u < t\)

$$\begin{aligned} E \left( \int _u^t h(s, \omega ) ds \right) = \int _u^t \tilde{h}(s) ds. \end{aligned}$$
(1.10)

This captures the intuition that, on average, the random effect of \(h(s, \omega )\) cancels out to give the equivalent of the original deterministic rate function \(\tilde{h}(s)\). We note that relation (1.10) does not trivialize the model in the sense of reducing \(\bar{m}(t)\) to the deterministic case, since in general \(E( e^{\int _u^t h(s, \omega ) ds} ) \ne e^{E (\int _u^t h(s, \omega ) ds)} \).
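The last point can be illustrated numerically (our sketch, not the paper's): for \(X = \sqrt{2}\, W(t)\) with W a Brownian motion, \(E(e^{-X}) = e^{t}\), while \(e^{-E(X)} = 1\); the randomness does not average out inside the exponential.

```python
# Illustration (ours) that E[exp(-X)] differs from exp(-E[X]) for
# X = sqrt(2) W(t) ~ N(0, 2t): E[exp(-X)] = exp(t), while exp(-E[X]) = 1.
import numpy as np

rng = np.random.default_rng(1)
t = 1.0
X = np.sqrt(2.0) * rng.normal(0.0, np.sqrt(t), size=2_000_000)  # sqrt(2) W(t)
est = np.mean(np.exp(-X))

print(est)   # ~ e^t = 2.718..., not exp(-E[X]) = 1
```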

The main contribution of this paper is to provide a general theory for software reliability models with a stochastic fault-detection rate. The results when the rate follows a dynamic additive white noise model are presented in Sect. 2.1. To the best of our knowledge, the martingale framework, in particular Brownian motion and white noise processes, has not been utilized in the software reliability literature to model the random environment effect. We also extend the theory of the static multiplicative rate (1.5) in the model (1.2) by removing the assumption that N(t) is constant. The results are presented in Sect. 2.2. Section 3 concludes the paper.

Notation

  • \(m(t, \omega )\): number of software failures detected by time t

  • \(\bar{m}(t)\): expected number of software failures detected by time t

  • N(t): expected number of faults that exist in the software before testing

  • \(h(t,\omega )\): time-dependent fault-detection rate per unit of time

  • \(\phi _\eta (t)\): moment generating function of the random variable \(\eta \) evaluated at t

2 General fault-detection model with stochastic rate

This section presents the main contribution of this paper, which is a general software reliability model where the fault-detection rate \(h(t,\omega )\) is a stochastic process. We present specific modelling details of \(h(t, \omega )\) in subsequent sections.

Consider the initial value problem

$$\begin{aligned} \frac{d}{dt} m(t, \omega )= & {} h(t,\omega ) ( N(t) - m(t, \omega ) ) \nonumber \\ m(0)= & {} 0. \end{aligned}$$
(2.1)

This equation has the solution

$$\begin{aligned} m(t, \omega ) = e^{ -\int _0^t h(s,\omega ) ds} \int _0^t e^{\int _0^u h(s,\omega ) ds} h(u,\omega ) N(u) du. \end{aligned}$$
(2.2)

Assuming N(t) is differentiable and using integration by parts, we have

$$\begin{aligned} m(t, \omega ) = \left\{ N(t) - N(0) e^{ - \int _0^t h(s, \omega ) ds} \right\} - \int _0^t e^{-\int _u^t h(s,\omega ) ds} N'(u) du. \end{aligned}$$
(2.3)

In this expression, the randomness of \(m(t, \omega )\) comes from \(\int _0^t h(s, \omega ) ds\) and \(\int _u^t h(s,\omega ) ds\). Thus the explicit representation of \(\bar{m}(t)\) depends on the computation of

$$\begin{aligned} E \left( e^{-\int _u^t h(s,\omega ) ds} \right) , u \ge 0. \end{aligned}$$
(2.4)

In Pham (2014a), the author facilitates this computation by imposing the multiplicative noise structure

$$\begin{aligned} h(t,\omega ) = \eta (\omega ) \tilde{h}(t). \end{aligned}$$
(2.5)

It follows that

$$\begin{aligned} E \left( e^{-\int _u^t h(s,\omega ) ds} \right) = \phi _\eta (- \int _u^t \tilde{h}(s) ds), \end{aligned}$$
(2.6)

where \(\phi _\eta (t)\) is the moment generating function of \(\eta (\omega )\). On the other hand, the moment generating function \(\phi _\eta (t)\) also poses difficulty in deriving the explicit representation of \(\bar{m}(t)\) when we generalize the expected number of faults N from a constant to a non-constant function N(t). Indeed, as the examples in Sects. 2.2.1 and 2.2.2 show, we need to choose a suitable N(t) to achieve this goal.

2.1 Fault-detection rate model with additive white noise

In this section, we utilize the martingale framework to model the random environment effect. For the convenience of the readers, we present the basic definitions of a martingale and a Brownian motion. For a thorough introduction we refer the readers to Protter (2013).

A stochastic process M(t) is a martingale with respect to a filtration \(\mathcal{F}(t), 0 \le t \le T\) if for any \(s \le t\)

$$\begin{aligned} E(M(t) | \mathcal{F}(s) ) = M(s). \end{aligned}$$
(2.7)

We note that, by convention, we suppress the dependence of M(t) (and, later on, of the Brownian motion W(t)) on \(\omega \); that is, we write M(t) instead of \(M(t,\omega )\) (respectively, W(t) instead of \(W(t,\omega )\)). For a continuous martingale M(t), the quadratic variation process [M](t) is the process such that \(M^2(t) - [M](t)\) is also an \(\mathcal{F}(t)\)-martingale.

A particular and well-known example of a martingale is a Brownian motion. A stochastic process W(t) is a Brownian motion with respect to a filtration \(\mathcal{F}(t), 0 \le t \le T\) if

$$\begin{aligned}&W(0) = 0 \nonumber \\&W(t) - W(s) \text{ is } \text{ independent } \text{ of } \mathcal{F}(s), \text{ for } \text{ any } s < t \nonumber \\&W(t) - W(s) \text{ has } N(0, t-s) \text{ distribution }. \end{aligned}$$
(2.8)

For our purpose, if M(t) is a continuous martingale satisfying a suitable integrability condition, then

$$\begin{aligned} E\left( e^{M(t) - M(u)} \right) =E\left( e^{\frac{1}{2} ([M](t) - [M](u))}\right) . \end{aligned}$$
(2.9)

We can take advantage of this relation by using a martingale M(t) where [M](t) has a simple expression. For example, if W(t) is a Brownian motion then W(t) is also a martingale (with respect to its own filtration) and \([W](t) = t\).
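For the Brownian case, relation (2.9) reads \(E(e^{W(t) - W(u)}) = e^{(t-u)/2}\), which the following Monte Carlo sketch (our addition, with arbitrary u and t) confirms.

```python
# Numerical sketch (ours) of relation (2.9) for M = W a Brownian motion,
# where [W](t) = t:  E[exp(W(t) - W(u))] = exp((t - u)/2).
import numpy as np

rng = np.random.default_rng(2)
u, t = 0.5, 2.0
incr = rng.normal(0.0, np.sqrt(t - u), size=2_000_000)  # W(t)-W(u) ~ N(0, t-u)
est = np.mean(np.exp(incr))

print(est, np.exp((t - u) / 2.0))   # both ~ e^{0.75}
```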

To utilize the exponential martingale structure, we assume that

$$\begin{aligned} \int _0^t h(s, \omega ) ds = M(t) + \int _0^t \tilde{h}(s) ds, \end{aligned}$$
(2.10)

where \(\tilde{h}(s)\) is a deterministic rate function. Equivalently

$$\begin{aligned} h(t, \omega ) = \dot{M}(t) + \tilde{h}(t), \end{aligned}$$
(2.11)

where \(\dot{M}(t)\) formally denotes the derivative of M with respect to time. The technical difficulty here is that a non-trivial continuous martingale almost surely has paths that are nowhere differentiable, so one has to be careful in interpreting the process \(h(t,\omega )\) in this formulation. As our starting point, we choose \(M(t) = W(t)\) to be a Brownian motion because the process \(\dot{W}(t)\) is well studied in the literature: it is referred to as the white noise process. We will investigate other possible choices for M(t), including martingales with jumps, in future work.

Thus we consider the model

$$\begin{aligned} \frac{d}{dt} m(t,\omega )= & {} h(t,\omega ) ( N(t) - m(t,\omega ) ) \nonumber \\ m(0)= & {} 0. \end{aligned}$$
(2.12)

where

$$\begin{aligned} h(t,\omega ) = \tilde{h}(t) + \sqrt{2} \dot{W}(t). \end{aligned}$$
(2.13)

Here \(\tilde{h}(t)\) is a usual deterministic rate function and \(\dot{W}(t)\) is the white noise process. In other words, \(\dot{W}(t)\) is a Gaussian process with covariance structure

$$\begin{aligned} E(\dot{W}(s)\dot{W}(t) ) = \delta (t-s), 0< s < t; \end{aligned}$$
(2.14)

where \(\delta (t)\) is the Dirac Delta measure. For more details about white noise processes, we refer the readers to Martin (2009). In our model, the structure (2.13) has the effect that

$$\begin{aligned} \int _u^t h(s,\omega ) ds = \int _u^t \tilde{h}(s) ds + \sqrt{2} (W(t) - W(u)), \end{aligned}$$
(2.15)

where W(t) is a Brownian motion. Plugging this expression into (2.3), we compute \(\bar{m}(t)\) as follows:

$$\begin{aligned} \bar{m}(t)= & {} \left\{ N(t) - N(0) e^{-\int _0^t \tilde{h}(s) ds } E (e^{ - \sqrt{2} W(t) } ) \right\} \nonumber \\&- \int _0^t e^{-\int _u^t \tilde{h}(s) ds } E (e^{ - \sqrt{2}(W(t) - W(u))} ) N'(u) du \nonumber \\= & {} \left\{ N(t) - N(0) e^{-\int _0^t \tilde{h}(s) ds } e^{t} \right\} - \int _0^t e^{-\int _u^t \tilde{h}(s) ds } e^{t-u} N'(u) du \nonumber \\= & {} \left\{ N(t) - N(0) e^{-\int _0^t (\tilde{h}(s) - 1) ds } \right\} - \int _0^t e^{-\int _u^t ( \tilde{h}(s) - 1) ds } N'(u) du. \end{aligned}$$
(2.16)
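As a sanity check of (2.16) (our addition, not part of the original derivation), one can simulate Brownian paths on a grid, evaluate \(m(t, \omega )\) from (2.3), and average. The choices \(\tilde{h}(t) = b\) (constant) and \(N(t) = k e^t\) below are arbitrary illustrations.

```python
# Monte Carlo sketch (ours) of (2.16): simulate Brownian paths, evaluate
# m(T, omega) from (2.3) with int_u^T h ds = b(T-u) + sqrt(2)(W(T)-W(u)),
# and compare the sample mean with (2.16).  Illustrative choices:
# h~(t) = b (constant), N(t) = k e^t; all numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
b, k, T, n, paths = 2.0, 50.0, 1.0, 200, 20_000
tt = np.linspace(0.0, T, n + 1)
dt = T / n

# Brownian paths W on the grid, one path per row
W = np.concatenate([np.zeros((paths, 1)),
                    np.cumsum(rng.normal(0.0, np.sqrt(dt), (paths, n)), axis=1)],
                   axis=1)

# I[:, j] = int_{tt[j]}^T h(s) ds = b (T - tt[j]) + sqrt(2) (W(T) - W(tt[j]))
I = b * (T - tt) + np.sqrt(2.0) * (W[:, -1:] - W)
f = np.exp(-I) * (k * np.exp(tt))            # integrand e^{-I(u)} N'(u)

# m(T, omega) via (2.3); trapezoidal rule for the du-integral
integral = dt * (0.5 * (f[:, 0] + f[:, -1]) + f[:, 1:-1].sum(axis=1))
m = k * np.exp(T) - k * np.exp(-I[:, 0]) - integral
mc = m.mean()

# closed form (2.16) specialized to h~ = b, N(t) = k e^t
closed = k * np.exp(T) - k * np.exp(-(b - 1.0) * T) \
    - k * np.exp(T) * (1.0 - np.exp(-b * T)) / b

print(mc, closed)   # should agree within Monte Carlo error
```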

The expression (2.16) is suitable for choices of \(\tilde{h}(t)\) such that \(\int \tilde{h}(t) dt \) is of the \(\log \) type. We demonstrate this with the examples below.

Example 1

Let

$$\begin{aligned} \tilde{h}(t) = \frac{b}{1 + \gamma e^{-bt}}. \end{aligned}$$
(2.17)

This is the S-shaped detection rate used in Pham et al. (1999). Then

$$\begin{aligned} \bar{m}(t) = \left\{ N(t) - N(0) \frac{\gamma + 1}{\gamma + e^{bt}} e^t \right\} - \frac{ e^t }{\gamma + e^{bt}} \int _0^t \left( \gamma e^{-u} + e^{ (b-1) u }\right) N'(u) du. \end{aligned}$$
(2.18)

Let

$$\begin{aligned} N(t) = ke^t \end{aligned}$$

then

$$\begin{aligned} \bar{m}(t) = k \left\{ e^t - \frac{\gamma + 1}{\gamma + e^{bt}} e^t \right\} - \frac{k e^t }{\gamma + e^{bt}} \left( \gamma t + \frac{e^{bt} - 1}{b}\right) . \end{aligned}$$
(2.19)
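The algebra leading to (2.19) can be double-checked by numerical quadrature of the integral in (2.18) with \(N'(u) = k e^u\) (our sketch; the parameter values are arbitrary).

```python
# Consistency check (ours) of (2.19) against direct numerical quadrature
# of the integral in (2.18) with N(t) = k e^t; b, gamma, k, t are arbitrary.
import numpy as np
from scipy.integrate import quad

b, gam, k, t = 1.5, 2.0, 30.0, 2.0

# (2.18) with N'(u) = k e^u, the integral evaluated numerically
integral, _ = quad(lambda u: (gam * np.exp(-u) + np.exp((b - 1.0) * u))
                   * k * np.exp(u), 0.0, t)
via_2_18 = k * np.exp(t) - k * (gam + 1.0) / (gam + np.exp(b * t)) * np.exp(t) \
    - np.exp(t) / (gam + np.exp(b * t)) * integral

# closed form (2.19)
via_2_19 = k * (np.exp(t) - (gam + 1.0) / (gam + np.exp(b * t)) * np.exp(t)) \
    - k * np.exp(t) / (gam + np.exp(b * t)) * (gam * t + (np.exp(b * t) - 1.0) / b)

print(via_2_18, via_2_19)   # equal up to quadrature error
```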

Example 2

Let

$$\begin{aligned} \tilde{h}(t) = \frac{bt}{1 + bt}, b > 0. \end{aligned}$$
(2.20)

Letting \(\tilde{b} = \frac{1}{b}\), we have

$$\begin{aligned} \tilde{h}(t) = \frac{t}{t + \tilde{b}}. \end{aligned}$$
(2.21)

Then

$$\begin{aligned} \bar{m}(t) = \left\{ N(t) - N(0) \left( 1 + \frac{t}{\tilde{b}}\right) ^{\tilde{b}} \right\} - (t + \tilde{b})^{\tilde{b}} \int _0^t \frac{ N'(u)}{(u + \tilde{b})^{\tilde{b}}} du. \end{aligned}$$
(2.22)

Let

$$\begin{aligned} N(u)= & {} \log (u + \tilde{b}) \end{aligned}$$

so that

$$\begin{aligned} N'(u) = \frac{1}{u + \tilde{b}} \end{aligned}$$

then

$$\begin{aligned} \bar{m}(t) = \log (t + \tilde{b}) - \left( \log (\tilde{b}) + \frac{1}{\tilde{b}}\right) \left( 1 + \frac{t}{\tilde{b}}\right) ^{\tilde{b}} + \frac{1}{\tilde{b}}. \end{aligned}$$
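This closed form can likewise be checked against direct quadrature of (2.22) with \(N(u) = \log (u + \tilde{b})\) (our sketch; b and t are arbitrary).

```python
# Quadrature check (ours) that the closed form for m(t)-bar at the end of
# Example 2 agrees with (2.22) when N(u) = log(u + b~), b~ = 1/b.
import numpy as np
from scipy.integrate import quad

b, t = 0.8, 3.0
bt = 1.0 / b                                     # b~ = 1/b

integral, _ = quad(lambda u: (u + bt) ** (-bt - 1.0), 0.0, t)  # N'(u)/(u+b~)^b~
via_2_22 = np.log(t + bt) - np.log(bt) * (1.0 + t / bt) ** bt \
    - (t + bt) ** bt * integral

closed = np.log(t + bt) - (np.log(bt) + 1.0 / bt) * (1.0 + t / bt) ** bt \
    + 1.0 / bt

print(via_2_22, closed)   # equal up to quadrature error
```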

2.2 Fault-detection rate model with static multiplicative noise

We consider the model

$$\begin{aligned} \frac{d}{dt} m(t,\omega )= & {} h(t,\omega ) ( N(t) - m(t,\omega ) ) \nonumber \\ m(0)= & {} 0. \end{aligned}$$
(2.23)

where

$$\begin{aligned} h(t,\omega ) = \eta (\omega ) \tilde{h}(t). \end{aligned}$$
(2.24)

In this section, we present the explicit representations of \(\bar{m}(t)\) under the choice of \(\tilde{h}(t)\) as the S-shaped rate function

$$\begin{aligned} \tilde{h}(t) = \frac{b}{1 + \gamma e^{-bt}}, \end{aligned}$$
(2.25)

assuming certain forms of N(t). We have

$$\begin{aligned} m(t,\omega ) = N(t) - N(0) e^{ - \eta \int _0^t \tilde{h}(s) ds} - \int _0^t e^{-\eta \int _u^t \tilde{h}(s) ds} N'(u) du \end{aligned}$$
(2.26)

and thus

$$\begin{aligned} \bar{m}(t) = N(t) - N(0) \phi _\eta (- \int _0^t \tilde{h}(s) ds) - \int _0^t \phi _\eta ( - \int _u^t \tilde{h}(s) ds ) N'(u) du, \end{aligned}$$
(2.27)

where \(\phi _\eta (t)\) is the moment generating function of the random variable \(\eta (\omega )\). For convenience of notation, we introduce the function

$$\begin{aligned} H(t):= \int _0^t \tilde{h}(s) ds = \log \left( 1 + \frac{e^{bt}}{\gamma }\right) - \log \left( 1 + \frac{1}{\gamma }\right) . \end{aligned}$$
(2.28)

Thus \(\bar{m}(t)\) can be expressed as

$$\begin{aligned} \bar{m}(t) = N(t) - N(0) \phi _\eta (- H(t) ) - \int _0^t \phi _\eta ( H(u) - H(t)) N'(u) du. \end{aligned}$$
(2.29)

2.2.1 \(N(t) = H(t) + N(0)\)

Under the assumption that \(N(t) = H(t) + N(0),\) we have \(N'(t) = \tilde{h}(t).\) This assumption still ensures that N(t) is increasing, since \(N'(t) = \tilde{h}(t) > 0\). Equation (2.27) becomes

$$\begin{aligned} \bar{m}(t)= & {} \left\{ N(t) - N(0) \phi _\eta (- H(t) ) \right\} - \int _0^{H(t)} \phi _\eta (-u) du. \end{aligned}$$
(2.30)

Example 3

Let \(\eta (\omega )\) follow a Gamma(\(\alpha ,\beta \)) distribution. That is, the probability density function of \(\eta (\omega )\) is given by

$$\begin{aligned} g(x) = \frac{\beta ^\alpha x^{\alpha -1} e^{-\beta x}}{\Gamma (\alpha )}; \quad \alpha , \beta > 0, \; x > 0. \end{aligned}$$
(2.31)

We have

$$\begin{aligned} \phi _\eta (t) = \left( 1 - \frac{t}{\beta }\right) ^{-\alpha }, \end{aligned}$$
(2.32)

and

$$\begin{aligned} \int _0^t \phi _\eta (-u) du = \beta \frac{\left( 1 + \frac{t}{\beta }\right) ^{1 -\alpha } - 1}{1 - \alpha }. \end{aligned}$$
(2.33)

Thus

$$\begin{aligned} \bar{m}(t)= & {} \left\{ N(t) - N(0) (1 + \frac{H(t)}{\beta })^{-\alpha } \right\} - \beta \frac{\left( 1 + \frac{H(t)}{\beta }\right) ^{1 -\alpha } - 1}{1 - \alpha }, \end{aligned}$$
(2.34)

where \(N(t) = H(t) + N(0)\) and H(t) is given by (2.28).
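As a sanity check of (2.34) (our addition), one can sample \(\eta \sim\) Gamma(\(\alpha, \beta\)) directly and average \(m(t, \omega )\) from (2.26); all parameter values below are arbitrary illustrations.

```python
# Monte Carlo sketch (ours) of (2.34): sample eta ~ Gamma(alpha, beta) and
# average m(t, omega) from (2.26) with N(t) = H(t) + N(0).  For this N(t),
# the N'(u)-integral in (2.26) reduces, via the substitution v = H(u), to
# int_0^{H(t)} e^{-eta (H(t)-v)} dv = (1 - e^{-eta H(t)}) / eta.
import numpy as np

rng = np.random.default_rng(4)
b, gam, alpha, beta, N0, t = 1.0, 2.0, 3.0, 2.0, 10.0, 1.5

Ht = np.log(1.0 + np.exp(b * t) / gam) - np.log(1.0 + 1.0 / gam)  # H(t), (2.28)
eta = rng.gamma(shape=alpha, scale=1.0 / beta, size=500_000)

m = (Ht + N0) - N0 * np.exp(-eta * Ht) - (1.0 - np.exp(-eta * Ht)) / eta
mc = m.mean()

closed = (Ht + N0) - N0 * (1.0 + Ht / beta) ** (-alpha) \
    - beta * ((1.0 + Ht / beta) ** (1.0 - alpha) - 1.0) / (1.0 - alpha)

print(mc, closed)   # should agree within Monte Carlo error
```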

Remark 1

There are five parameters in Example 3: \(b, \gamma \) from \(\tilde{h}(t)\), \(\alpha , \beta \) from the distribution of \(\eta \), and N(0). One can choose these parameters to fit existing data by standard methods such as maximum likelihood or least squares estimation.
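A minimal sketch of such a fit (ours, on synthetic data generated from (2.34) itself plus small noise, so it only illustrates the mechanics; real studies would use actual failure-count data), using least squares via scipy.optimize.curve_fit:

```python
# Least-squares fit (our sketch, synthetic data) of the five parameters
# b, gamma, alpha, beta, N(0) of (2.34) with scipy.optimize.curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def m_bar(t, b, gam, alpha, beta, N0):
    """Closed form (2.34) with N(t) = H(t) + N(0) and H(t) from (2.28)."""
    H = np.log(1.0 + np.exp(b * t) / gam) - np.log(1.0 + 1.0 / gam)
    return (H + N0) - N0 * (1.0 + H / beta) ** (-alpha) \
        - beta * ((1.0 + H / beta) ** (1.0 - alpha) - 1.0) / (1.0 - alpha)

rng = np.random.default_rng(5)
true = (1.0, 2.0, 3.0, 2.0, 10.0)               # b, gamma, alpha, beta, N(0)
t_obs = np.linspace(0.1, 5.0, 40)
y_obs = m_bar(t_obs, *true) + rng.normal(0.0, 0.05, t_obs.size)  # noisy data

est, _ = curve_fit(m_bar, t_obs, y_obs, p0=(0.8, 1.5, 2.5, 1.8, 8.0),
                   bounds=(1e-3, np.inf), maxfev=20_000)
print(np.round(est, 2))   # estimates of (b, gamma, alpha, beta, N(0))
```

With several parameters entering (2.34) only through H(t), the fit may not recover each true value individually, but the fitted curve should track the data closely.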

2.2.2 \(N(t) = G(H(t))\)

Under the assumption that \(N(t) = G(H(t))\), where G(x) is differentiable, we have

$$\begin{aligned} N'(t) = G'(H(t))h(t). \end{aligned}$$

In particular, when \(G(x) = x + N(0)\) this reduces to the assumption of Sect. 2.2.1. To ensure that N(t) is increasing, we also require that

$$\begin{aligned} g(x) := G'(x) >0. \end{aligned}$$

Now equation (2.27) becomes

$$\begin{aligned} \bar{m}(t)= & {} \left\{ N(t) - N(0) \phi _\eta (- H(t) ) \right\} - \int _0^{H(t)} \phi _\eta [- (H(t) - u)] g(u) du \\= & {} \left\{ N(t) - N(0) \phi _\eta (- H(t) ) \right\} - (\hat{\phi }_\eta * g) (H(t)), \end{aligned}$$

where \(\hat{\phi }_\eta (u) = \phi _\eta (-u)\) and \(*\) denotes the convolution operator:

$$\begin{aligned} f * g(t) := \int _0^t f(t - s) g(s) ds. \end{aligned}$$
(2.35)

Remark 2

The challenge in finding the explicit representation of \(\bar{m}(t)\) is to compute the convolution \(\hat{\phi }_\eta * g(t).\) In the following examples, we choose certain combinations of \(\eta \) and g that allow for such computation.

Example 4

Let \(\eta \) follow a Gamma(\(\alpha ,\beta \)) distribution as in Example 3, with \(\alpha \) an integer, and let \(g(t) = t\). Then

$$\begin{aligned} \hat{\phi }_\eta * g(t) = \frac{\beta ^2 \left( 1 + \frac{t}{\beta }\right) ^{2 -\alpha }}{(1 - \alpha )(2 - \alpha )} - \frac{\beta (\beta + t)}{1 - \alpha } + \frac{\beta ^2}{2 - \alpha }. \end{aligned}$$
(2.36)

Thus

$$\begin{aligned} \bar{m}(t) = \left\{ N(t) - N(0) \left( 1 + \frac{H(t)}{\beta }\right) ^{-\alpha } \right\} - (\hat{\phi }_\eta * g)(H(t)), \end{aligned}$$
(2.37)

where \(\hat{\phi }_\eta * g\) is given by (2.36) and H(t) by (2.28).

Example 5

Let \(\eta \) follow a Normal(\(\mu ,\sigma ^2\)) distribution. Then

$$\begin{aligned} \phi _\eta (t) = e^{ \mu t + \frac{\sigma ^2 t^2}{2}}. \end{aligned}$$
(2.38)

Let \(g(t) = e^{ \alpha t - \frac{\sigma ^2 t^2}{2}}.\) Then

$$\begin{aligned} \hat{\phi }_\eta * g(t) = \frac{e^{\alpha t- \frac{\sigma ^2 t^2}{2}}- e^{- \mu t + \frac{\sigma ^2 t^2}{2}}}{\mu +\alpha - \sigma ^2 t}. \end{aligned}$$
(2.39)

Thus

$$\begin{aligned}&\displaystyle \bar{m}(t) = \left\{ N(t) - N(0) e^{ - \mu H(t) + \frac{\sigma ^2 H^2(t)}{2}} \right\} - \frac{e^{\alpha H(t) - \frac{\sigma ^2 H(t)^2}{2}}- e^{- \mu H(t) + \frac{\sigma ^2 H(t)^2}{2}}}{\mu +\alpha - \sigma ^2 H(t)} \qquad \quad \end{aligned}$$
(2.40)
$$\begin{aligned}&\displaystyle H(t) = \log \left( 1 + \frac{e^{bt}}{\gamma }\right) - \log \left( 1 + \frac{1}{\gamma }\right) . \end{aligned}$$
(2.41)

In particular, if \(\mu = \alpha = 0\) and \(\sigma = \sqrt{2}\)

$$\begin{aligned} \hat{\phi }_\eta * g(t) = \frac{\sinh (t^2)}{t}, \end{aligned}$$
(2.42)

and

$$\begin{aligned} \bar{m}(t)= & {} \left\{ N(t) - N(0) e^{ H^2(t)} \right\} - \frac{\sinh (H^2(t))}{H(t)}. \end{aligned}$$
(2.43)
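The convolution formula (2.39) can be verified by numerical quadrature (our sketch; the parameter values are arbitrary, chosen so that \(\mu + \alpha - \sigma ^2 t \ne 0\)).

```python
# Quadrature check (ours) of the convolution formula (2.39) in Example 5.
import numpy as np
from scipy.integrate import quad

mu, sig, alpha, t = 0.5, 1.0, 0.3, 0.7

phi_hat = lambda u: np.exp(-mu * u + 0.5 * sig**2 * u**2)   # phi_eta(-u)
g = lambda s: np.exp(alpha * s - 0.5 * sig**2 * s**2)

numeric, _ = quad(lambda s: phi_hat(t - s) * g(s), 0.0, t)  # the convolution
closed = (np.exp(alpha * t - 0.5 * sig**2 * t**2)
          - np.exp(-mu * t + 0.5 * sig**2 * t**2)) / (mu + alpha - sig**2 * t)

print(numeric, closed)   # equal up to quadrature error
```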

Example 6

Let \(\eta \) follow a Poisson(\(\lambda \)) distribution. Then

$$\begin{aligned} \phi _\eta (t) = e^{\lambda (e^t - 1) }. \end{aligned}$$
(2.44)

Let \(g(t) = e^t\). Then

$$\begin{aligned} \hat{\phi }_\eta * g(t) = \frac{e^{t-\lambda }}{\lambda }\left( e^{\lambda } - e^{\lambda e^{-t}}\right) . \end{aligned}$$
(2.45)

Thus

$$\begin{aligned}&\displaystyle \bar{m}(t) = N(t) - N(0) e^{\lambda (e^{-H(t)} - 1) } - \frac{e^{H(t) -\lambda }}{\lambda }\left( e^{\lambda } - e^{\lambda e^{-H(t)}}\right) \end{aligned}$$
(2.46)
$$\begin{aligned}&\displaystyle H(t) = \log \left( 1 + \frac{e^{bt}}{\gamma }\right) - \log \left( 1 + \frac{1}{\gamma }\right) . \end{aligned}$$
(2.47)
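Formula (2.45) admits the same kind of quadrature check (our addition; \(\lambda \) and t are arbitrary values).

```python
# Quadrature check (ours) of (2.45) with eta ~ Poisson(lambda) and g(t) = e^t.
import numpy as np
from scipy.integrate import quad

lam, t = 1.3, 0.9

phi_hat = lambda u: np.exp(lam * (np.exp(-u) - 1.0))        # phi_eta(-u)
numeric, _ = quad(lambda s: phi_hat(t - s) * np.exp(s), 0.0, t)
closed = np.exp(t - lam) / lam * (np.exp(lam) - np.exp(lam * np.exp(-t)))

print(numeric, closed)   # equal up to quadrature error
```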

Remark 3

The advantage of the assumption \(N(t) = G(H(t))\) compared with \(N(t) = H(t) + N(0)\) is that it allows us to exploit the choice of the function \(g(x) = G'(x)\) in the calculation. In assuming \(N(t) = H(t) + N(0)\), we need to compute \(\int _0^t \phi _\eta (-u) du\), and there are not many choices of the distribution of \(\eta \) for which this integral is known explicitly. In Sect. 2.2.1 we have seen one example where it is: the Gamma distribution. Examples 5 and 6 show instances where this computation is not known explicitly because it involves the anti-derivative of \(\phi _\eta (t)\). On the other hand, by assuming a suitable g(t), we can still carry out the calculation of \(\hat{\phi }_\eta * g(t)\) and achieve a representation of \(\bar{m}(t).\)

3 Conclusion and future works

We have presented explicit representations of \(\bar{m}(t)\) for two different models, one with dynamic additive noise in Sect. 2.1 and one with static multiplicative noise in Sect. 2.2. In both cases, N(t) is allowed to be a function of t. We observe that the model with additive noise is most suitable when the anti-derivative of the rate function \(\tilde{h}(t)\) is of a \(\log \) type. On the other hand, the multiplicative model is suitable when we have a tractable moment generating function \(\phi _\eta \) to work with.

In future work, we will apply the models derived in this paper to existing data and compare them under various fitting criteria. Because we allow N(t) to be a function of t, we hope to achieve a better fit in cases where N was traditionally assumed to be constant. We will also investigate and generalize the additive and multiplicative noise structures of Sects. 2.1 and 2.2. One direction is to investigate the dynamic multiplicative noise model. For the additive model, we plan to consider other possible choices of M(t), including non-continuous cases such as a compensated Poisson process.

So far in this paper, we have limited the source of randomness to the rate function \(h(t,\omega )\). One can also consider other places where randomness enters, such as the total number of faults N(t). While \(h(t,\omega )\) models the endogenous random aspects of the software testing process, \(N(t,\omega )\) models the exogenous aspects. Thus another direction for future work is to investigate the model

$$\begin{aligned} \frac{d m(t,\omega )}{dt} = h(t,\omega ) (N(t,\omega ) - m(t,\omega )), \end{aligned}$$
(3.1)

where \(h(t,\omega )\) and \(N(t,\omega )\) follow some correlation structure, given by a function \(\rho (t)\).