1 Introduction

Though statistical inference plays a major role in any use of queueing models, the study of asymptotic inference problems for queueing systems can hardly be traced back beyond the works of Basawa and Prabhu (1981, 1988), where the maximum likelihood (ML) estimators of the parameters in single server queues were discussed. Basawa et al. (1996) studied the consistency and asymptotic normality of estimators of the parameters in a GI / G / 1 queue based on information on waiting times. Acharya (1999) studied the rate of convergence of the distribution of the maximum likelihood estimators of the arrival and service rates in a single server queue. Acharya and Mishra (2007) proved the Bernstein–von Mises theorem for the arrival process in an M / M / 1 queue.

From a Bayesian outlook, inferences about a parameter are based on its posterior distribution. The study of asymptotic posterior normality goes back to Laplace and has attracted the attention of many authors. A conventional approach to such problems starts from a Taylor series expansion of the log-likelihood function around the maximum likelihood estimator (MLE) and proceeds from there to develop expansions that have the standard normal as a leading term and hold in probability or almost surely, given the data. This type of study has not yet been carried out for queueing systems. In the general set-up, the earlier work in this direction appears to be that of Walker (1969) and Johnson (1970) for i.i.d. observations, and Heyde and Johnstone (1979), Basawa and Prakasa Rao (1980), Chen (1985) and Sweeting and Adekola (1987) for stochastic processes. More recently, Kim (1998) provided a set of conditions establishing asymptotic posterior normality in quite general, possibly non-stationary time series models, and Weng and Tsai (2008) studied asymptotic posterior normality for multiparameter problems.

In this paper, our aim is to prove that the joint posterior distribution of \((\theta , \phi )\) is asymptotically normal for the GI / G / 1 queueing model in the context of exponential families. In Sect. 2 we introduce the model of interest and recall some elements of maximum likelihood estimation as well as the Bayesian procedure. In Sect. 3 we prove our main result. For illustration we provide an example in Sect. 4. Section 5 deals with the simulation study, while concluding remarks are given in Sect. 6.

2 GI / G / 1 Queueing Model

Consider a single server queueing system in which the interarrival times \(\{u_{k}, k\ge 1\}\) and the service times \(\{v_{k}, k\ge 1\}\) are two independent sequences of independent and identically distributed nonnegative random variables with densities \(f(u; \theta )\) and \(g(v; \phi )\), respectively, where \(\theta \) and \(\phi \) are unknown parameters. Let us assume that f and g belong to the continuous exponential families given by

$$\begin{aligned} f(u; \theta )= & {} a_{1}(u) \text {exp} \{\theta h_{1}(u)- k_{1}(\theta )\}, \end{aligned}$$
(2.1)
$$\begin{aligned} g(v; \phi )= & {} a_{2}(v) \text {exp}\{\phi h_{2}(v)- k_{2}(\phi )\}, \end{aligned}$$
(2.2)

and

$$\begin{aligned} f(u; \theta )= g(v; \phi )=0 \quad \text {on} \quad (-\infty , 0) \end{aligned}$$

where \(\Theta _1=\{\theta >0:~ k_1(\theta )< \infty \}\) and \(\Theta _2=\{\phi >0:~ k_2(\phi )< \infty \}\) are open subsets of \(\mathbb {R}\). It is easy to see that \(E_{\theta }(h_1(u)) = k_1^{\prime }(\theta )\), \(var_{\theta }(h_1(u))=k_1^{\prime \prime }(\theta )\), \(E_{\phi }(h_2(v)) = k_2^{\prime }(\phi )\) and \(var_{\phi }(h_2(v)) = k_2^{\prime \prime }(\phi )\), all of which are assumed to be finite.
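
For instance, the exponential density \(f(u; \theta )=\theta e^{-\theta u}\) used in Sect. 4 belongs to the family (2.1) with \(a_1(u)=1\), \(h_1(u)=-u\) and \(k_1(\theta )=-\text{ log }~\theta \), so that

$$\begin{aligned} E_{\theta }(h_1(u)) = k_1^{\prime }(\theta )= -\frac{1}{\theta } \quad \text {and} \quad var_{\theta }(h_1(u)) = k_1^{\prime \prime }(\theta )= \frac{1}{\theta ^2}, \end{aligned}$$

in agreement with \(E_{\theta }(u)=1/\theta \) and \(var_{\theta }(u)=1/\theta ^2\).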

For simplicity we assume that the initial customer arrives at time \(t=0\). Our sampling scheme is to observe the system over a continuous time interval (0, T], where T is a suitable stopping time. The sample data consist of

$$\begin{aligned} \{A(T), D(T), u_{1}, u_{2}, u_{3},\ldots , u_{A(T)}, v_{1}, v_{2},\ldots , v_{D(T)} \}, \end{aligned}$$
(2.3)

where A(T) is the number of arrivals and D(T) is the number of departures during (0, T]. Obviously no arrivals occur during \([\sum _{i=1}^{A(T)} u_{i}, T]\) and no departures during \([\gamma (T)+\sum _{i=1}^{D(T)}v_{i}, T]\), where \(\gamma (T)\) is the total idle period in (0, T].
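
A minimal simulation sketch of this sampling scheme (written in Python, assuming a FIFO discipline and simplified idle-time bookkeeping near T; the function names are illustrative, not from the references) may help clarify how the data (2.3) arise:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_data(T, draw_u, draw_v):
    """Sketch of the sampling scheme (2.3) for a FIFO GI/G/1 queue
    with the initial customer arriving at time t = 0."""
    # Generate arrival epochs 0, u_1, u_1+u_2, ... until T is passed.
    arrivals, u = [0.0], []
    while arrivals[-1] <= T:
        u.append(draw_u())
        arrivals.append(arrivals[-1] + u[-1])
    A = len(arrivals) - 2                # A(T): arrivals in (0, T]

    # FIFO departures: d_k = max(d_{k-1}, a_k) + v_k; accumulate idle time.
    v, departures, idle, prev_d = [], [], 0.0, 0.0
    for a in arrivals[: A + 1]:          # customers arriving in [0, T]
        idle += max(0.0, a - prev_d)     # server idle before this arrival
        v.append(draw_v())
        prev_d = max(prev_d, a) + v[-1]
        departures.append(prev_d)
    D = sum(d <= T for d in departures)  # D(T): departures in (0, T]

    return {"A": A, "D": D, "u": u[:A], "v": v[:D], "gamma": idle}

# Example: M/M/1 with theta_0 = 1 and phi_0 = 2 (rates of u and v).
data = sample_data(100.0, lambda: rng.exponential(1.0),
                   lambda: rng.exponential(0.5))
print(data["A"], data["D"], round(data["gamma"], 2))
```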

The likelihood function based on data (2.3) is given by

$$\begin{aligned} L_{T}(\theta , \phi )&= \prod _{i=1}^{A(T)} f(u_{i},\theta )\prod _{i=1}^{D(T)} g(v_{i},\phi ) \nonumber \\&\quad \times \,\left[ 1-F_{\theta }\left[ T-\sum _{i=1}^{A(T)} u_{i}\right] \right] \left[ 1-G_{\phi }\left[ T-\gamma (T)-\sum _{i=1}^{D(T)}v_{i}\right] \right] , \end{aligned}$$
(2.4)

where F and G are distribution functions corresponding to the densities f and g respectively.

The approximate likelihood \(L_{T}^{(a)}(\theta ,\phi )\) is defined as

$$\begin{aligned} L_{T}^{(a)}(\theta ,\phi ) = \prod _{i=1}^{A(T)} f(u_{i},\theta )\prod _{i=1}^{D(T)} g(v_{i},\phi ) = L_T^{(a)}(\theta ) L_T^{(a)}(\phi ), \end{aligned}$$
(2.5)

where

$$\begin{aligned} L_T^{(a)}(\theta )= \left[ \prod _{i=1}^{A(T)} a_1(u_i)\right] \text {exp} \left\{ \sum _{i=1}^{A(T)} \left[ \theta h_1(u_i) - k_1(\theta )\right] \right\} \end{aligned}$$
(2.6)

and

$$\begin{aligned} L_T^{(a)}(\phi )= \left[ \prod _{i=1}^{D(T)} a_2(v_i) \right] \text {exp} \left\{ \sum _{i=1}^{D(T)} \left[ \phi h_2(v_i) - k_2(\phi )\right] \right\} . \end{aligned}$$
(2.7)

The maximum likelihood estimates obtained from (2.5) are asymptotically equivalent to those obtained from (2.4) provided that the following two conditions are satisfied as \(T \rightarrow \infty \):

$$\begin{aligned} \left( A(T)\right) ^{-1/2} \frac{\partial }{\partial \theta } \text{ log } \left[ 1-F_{\theta }\left( T- \sum _{i=1}^{A(T)}u_i\right) \right] {\mathop {\longrightarrow }\limits ^{p}}0 \end{aligned}$$
(2.8)

and

$$\begin{aligned} \left( D(T)\right) ^{-1/2} \frac{\partial }{\partial \phi } \text{ log } \left[ 1-G_{\phi }\left( T - \gamma (T)- \sum _{i=1}^{D(T)}v_i\right) \right] {\mathop {\longrightarrow }\limits ^{p}}0. \end{aligned}$$
(2.9)

The implications of these conditions have been explained by Basawa and Prabhu (1988).

Basawa and Prabhu (1988) have shown that the maximum likelihood estimators of \(\theta \) and \(\phi \) are given by

$$\begin{aligned} \hat{\theta }_T&= \eta _{1}^{-1} \bigg [(A(T))^{-1}\sum _{i=1}^{A(T)}h_{1}(u_{i}) \bigg ], \end{aligned}$$
(2.10)
$$\begin{aligned} \hat{\phi }_T&= \eta _{2}^{-1}\bigg [(D(T))^{-1}\sum _{i=1}^{D(T)}h_{2}(v_{i}) \bigg ] \end{aligned}$$
(2.11)

where \(\eta _i^{-1}(.)\) denotes the inverse function of \(\eta _i(.)\) for \(i=1, 2\) and

$$\begin{aligned} \eta _1(\theta )=E_{\theta }(h_1(u)) = k_1^{'}(\theta ) \end{aligned}$$

and

$$\begin{aligned} \eta _2(\phi )=E_{\phi }(h_2(v)) = k_2^{'}(\phi ). \end{aligned}$$
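
As a concrete sketch (for the exponential special case of Sect. 4, where \(h_1(u)=-u\) and \(\eta _1(\theta )=k_1^{\prime }(\theta )=-1/\theta \), so that \(\eta _1^{-1}(m)=-1/m\); the helper name below is ours), Eqs. (2.10)–(2.11) can be evaluated as follows:

```python
import numpy as np

def mle_exponential(x):
    """Evaluate eta^{-1} of the sample mean of h(x) = -x, i.e. Eq. (2.10)
    for the exponential-family member with k(theta) = -log(theta)."""
    m = np.mean(-np.asarray(x))   # (A(T))^{-1} sum_i h_1(u_i)
    return -1.0 / m               # eta_1^{-1}(m) = -1/m = A(T) / sum_i u_i

rng = np.random.default_rng(0)
u = rng.exponential(scale=1.0, size=500)   # interarrival times, theta_0 = 1
v = rng.exponential(scale=0.5, size=480)   # service times,      phi_0  = 2
print(mle_exponential(u), mle_exponential(v))  # approx. 1 and 2
```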

The Fisher information matrix is given by

$$\begin{aligned} I(\theta , \phi ) = \left[ \begin{array}{cc} k_1^{''}(\theta )E(A(T)) &{} 0 \\ 0 &{} k_2^{''}(\phi )E(D(T)) \\ \end{array} \right] = \left[ \begin{array}{cc} I(\theta ) &{} 0 \\ 0 &{} I(\phi ) \\ \end{array} \right] . \end{aligned}$$
(2.12)

Under suitable stability conditions on the stopping times, Basawa and Prabhu (1988) have proved that the estimators \(\hat{\theta }_T\) and \(\hat{\phi }_T\) are consistent, i.e.,

$$\begin{aligned} \hat{\theta }_T {\mathop {\longrightarrow }\limits ^{a.s.}}\theta _0 \quad \text {and} \quad \hat{\phi }_T {\mathop {\longrightarrow }\limits ^{a.s.}}\phi _0 \quad \text {as} \quad T \rightarrow \infty \end{aligned}$$
(2.13)

and

$$\begin{aligned} I^{\frac{1}{2}}(\theta _0, \phi _0) \left[ \begin{array}{c} \hat{\theta }_T - \theta _0 \\ \hat{\phi }_T - \phi _0\\ \end{array} \right] \Rightarrow N \left[ \left( \begin{array}{c} 0 \\ 0 \\ \end{array} \right) , \left( \begin{array}{cc} 1 &{} 0 \\ 0 &{} 1 \\ \end{array} \right) \right] , \end{aligned}$$
(2.14)

where \(\theta _0\) and \(\phi _0\) denote the true values of \(\theta \) and \(\phi \) respectively, and the symbol \(\Rightarrow \) denotes convergence in distribution.
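
A quick Monte Carlo check of (2.14) in the exponential case is sketched below (sample sizes are illustrative; here \(I(\theta _0)\) is approximated by \(A(T) k_1^{\prime \prime }(\theta _0) = A(T)/\theta _0^2\)):

```python
import numpy as np

rng = np.random.default_rng(3)
theta0, A, R = 1.0, 2000, 5000   # true rate, sample size, replications

z = []
for _ in range(R):
    u = rng.exponential(1.0 / theta0, size=A)
    theta_hat = A / u.sum()                         # MLE from (2.10)
    z.append(np.sqrt(A / theta0**2) * (theta_hat - theta0))

print(np.mean(z), np.std(z))   # approx. 0 and 1, as (2.14) predicts
```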

From Eq. (2.5) we have the log-likelihood function

$$\begin{aligned} \ell _T(\theta , \phi )=\text{ log }L_T^{(a)}(\theta , \phi ) =\ell _T(\theta ) + \ell _T(\phi ), \end{aligned}$$
(2.15)

where

$$\begin{aligned} \ell _T(\theta )=\text{ log } L_T^{(a)}(\theta ) = \sum _{i=1}^{A(T)}\text{ log } a_1(u_i)+ \theta \sum _{i=1}^{A(T)}h_1(u_i)-A(T)k_1(\theta ) \end{aligned}$$
(2.16)

and

$$\begin{aligned} \ell _T(\phi )=\text{ log } L_T^{(a)}(\phi ) = \sum _{i=1}^{D(T)}\text{ log } a_2(v_i)+ \phi \sum _{i=1}^{D(T)}h_2(v_i)-D(T)k_2(\phi ). \end{aligned}$$
(2.17)

Let

$$\begin{aligned} \ell _T^{'}(\theta _0)= & {} \frac{\partial }{\partial \theta }\ell _T(\theta , \phi )\bigg |_{\theta =\theta _0}= \frac{\partial }{\partial \theta }\ell _T(\theta )\bigg |_{\theta =\theta _0}, \\ \ell _T^{''}(\theta _0)= & {} \frac{\partial ^2}{\partial \theta ^2}\ell _T(\theta , \phi )\bigg |_{\theta =\theta _0}= \frac{\partial ^2}{\partial \theta ^2}\ell _T(\theta )\bigg |_{\theta =\theta _0}. \end{aligned}$$

The quantities \(\ell _T^{'}(\hat{\theta }_T)\), \(\ell _T^{'}(\hat{\phi }_T)\), \(\ell _T^{'}(\phi _0)\), \(\ell _T^{''}(\phi _0)\), \(\ell _T^{''}(\hat{\theta }_T)\) and \(\ell _T^{''}(\hat{\phi }_T)\) are defined similarly.

Let \(\pi _1(\theta )\) and \(\pi _2(\phi )\) be the prior distributions of \(\theta \) and \(\phi \) respectively, and let the joint prior distribution of \(\theta \) and \(\phi \) be \(\pi (\theta , \phi )\). Since the interarrival time and service time distributions are independent, we have \(\pi (\theta , \phi )=\pi _1(\theta ) \pi _2(\phi )\). Then the joint posterior density of \((\theta , \phi )\) is

$$\begin{aligned} \pi (\theta , \phi | (u_i, v_i);~ i \ge 1)=\pi _1(\theta |u_i;~i=1,\ldots ,A(T)) \pi _2(\phi |v_i;~i=1,\ldots ,D(T)) \end{aligned}$$
(2.18)

with

$$\begin{aligned} \pi _1(\theta |u_i;~i=1,\ldots ,A(T))&= \frac{L_T^{(a)}(\theta ) \pi _1(\theta )}{\int _{\Theta _1}L_T^{(a)}(\theta ) \pi _1(\theta ) d\theta }\nonumber \\&= \frac{\text {exp} \bigr \{ \sum _{i=1}^{A(T)}[\theta h_1(u_i) - k_1(\theta )] \bigr \} \pi _1(\theta )}{\int _{\Theta _1}\text {exp} \bigr \{ \sum _{i=1}^{A(T)}[\theta h_1(u_i) - k_1(\theta )] \bigr \} \pi _1(\theta )d\theta } \end{aligned}$$
(2.19)

and

$$\begin{aligned} \pi _2(\phi |v_i;~i=1,\ldots ,D(T))=\frac{ \text {exp} \bigr \{ \sum _{i=1}^{D(T)}[\phi h_2(v_i) - k_2(\phi )] \bigr \} \pi _2(\phi )}{\int _{\Theta _2} \text {exp} \bigr \{ \sum _{i=1}^{D(T)}[\phi h_2(v_i) - k_2(\phi )] \bigr \} \pi _2(\phi ) d\phi } \end{aligned}$$
(2.20)

the marginal posterior densities of \(\theta \) and \(\phi \), respectively. Let \(\tilde{\theta }_T\) and \(\tilde{\phi }_T\) be the Bayes estimators of \(\theta \) and \(\phi \) respectively.
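
When the prior is not conjugate, the marginal posterior (2.19) can still be evaluated numerically. The following sketch normalises the posterior on a grid for the exponential case (\(h_1(u)=-u\), \(k_1(\theta )=-\text{ log }~\theta \)); the grid bounds and the exponential-shaped prior are illustrative assumptions:

```python
import numpy as np

def posterior_on_grid(u, prior, grid):
    """Normalise exp{ sum_i [theta*h_1(u_i) - k_1(theta)] } * prior(theta)
    on a grid, i.e. Eq. (2.19) for the exponential case."""
    S = np.sum(-np.asarray(u))                 # sum_i h_1(u_i) = -sum_i u_i
    n = len(u)                                 # A(T)
    loglik = grid * S + n * np.log(grid)       # -A(T) k_1(theta) = A(T) log(theta)
    loglik -= loglik.max()                     # stabilise before exponentiating
    dens = np.exp(loglik) * prior(grid)
    return dens / np.trapz(dens, grid)         # numerical normalisation

rng = np.random.default_rng(1)
u = rng.exponential(1.0, size=200)             # theta_0 = 1
grid = np.linspace(1e-3, 5.0, 2000)
post = posterior_on_grid(u, lambda t: np.exp(-t), grid)
print(grid[np.argmax(post)])                   # posterior mode near theta_0
```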

In the next section we will state and prove our main result.

3 Main Result

Theorem 3.1

Let \((\theta _0, \phi _0)\in \Theta _1 \times \Theta _2\). If the prior densities \(\pi _1(\theta )\) and \(\pi _2(\phi )\) are continuous and positive at \(\theta _0\) and \(\phi _0\) respectively, then, for any \(\alpha _i\), \(\beta _i\) such that \(-\infty \le \alpha _i \le \beta _i \le \infty \), \(i=1, 2\), the posterior probability of the event \(\{\hat{\theta }_T + \alpha _1 \sigma _T \le \theta \le \hat{\theta }_T + \beta _1 \sigma _T,~ \hat{\phi }_T + \alpha _2 \tau _T \le \phi \le \hat{\phi }_T + \beta _2 \tau _T\}\), namely

$$\begin{aligned} \int \limits _{\hat{\theta }_T + \alpha _1 \sigma _T }^{\hat{\theta }_T + \beta _1 \sigma _T} \int \limits _{\hat{\phi }_T + \alpha _2 \tau _T}^{\hat{\phi }_T + \beta _2 \tau _T} \pi (\theta , \phi | (u_i, v_i), i \ge 1) d\theta d\phi \end{aligned}$$

tends in \([P_{(\theta _0, \phi _0)}]\) probability to

$$\begin{aligned} (2\pi )^{-1} \int \limits _{\alpha _1}^{\beta _1} \int \limits _{\alpha _2}^{\beta _2} e^{-\frac{1}{2}(x^2+y^2)} dx dy \end{aligned}$$

as \(T\rightarrow \infty \), where \(\sigma _T\) and \(\tau _T\) are the positive square roots of \([-\ell _T^{''}(\hat{\theta }_T)]^{-1}\) and \([-\ell _T^{''}(\hat{\phi }_T)]^{-1}\) respectively.

Proof of Theorem 3.1

The first integral in the statement of the theorem can be written as the product of the integrals of the marginal posterior densities, i.e.,

$$\begin{aligned}&\int \limits _{\hat{\theta }_T + \alpha _1 \sigma _T }^{\hat{\theta }_T + \beta _1 \sigma _T} \int \limits _{\hat{\phi }_T + \alpha _2 \tau _T}^{\hat{\phi }_T + \beta _2 \tau _T} \pi (\theta , \phi | (u_i, v_i), i \ge 1) d\theta d\phi \nonumber \\&\quad = \int \limits _{\hat{\theta }_T + \alpha _1 \sigma _T }^{\hat{\theta }_T + \beta _1 \sigma _T} \frac{\text {exp}\left\{ \sum _{i=1}^{A(T)}[\theta h_1(u_i) - k_1(\theta )] \right\} \pi _1(\theta )}{\int _{\Theta _1} \text {exp} \left\{ \sum _{i=1}^{A(T)}[\theta h_1(u_i) - k_1(\theta )] \right\} \pi _1(\theta )d\theta } d\theta \nonumber \\&\qquad \times \int \limits _{\hat{\phi }_T + \alpha _2 \tau _T}^{\hat{\phi }_T + \beta _2 \tau _T} \frac{\text {exp}\left\{ \sum _{i=1}^{D(T)}[\phi h_2(v_i) - k_2(\phi )] \right\} \pi _2(\phi )}{\int _{\Theta _2}\text {exp}\left\{ \sum _{i=1}^{D(T)}[\phi h_2(v_i) - k_2(\phi )] \right\} \pi _2(\phi ) d\phi }d\phi \end{aligned}$$
(3.1)

and the convergence of both can be established separately.

For any \(\delta >0\), let us write \(\mathcal {N}(\varrho , \delta )=(\varrho -\delta , \varrho +\delta ) \) with \(\varrho \in \Theta _1\) and \(\mathcal {J}_B=\int _{B}L_T^{(a)}(\theta ) \pi _1(\theta ) d\theta \) where \( B \subseteq \Theta _1\). Hence,

$$\begin{aligned} \int _{\hat{\theta }_T+\alpha _1 \sigma _T}^{\hat{\theta }_T+\beta _1 \sigma _T} \frac{ \text {exp} \left\{ \sum _{i=1}^{A(T)}[\theta h_1(u_i) - k_1(\theta )] \right\} \pi _1(\theta )}{\int _{\Theta _1} \text {exp} \left\{ \sum _{i=1}^{A(T)}[\theta h_1(u_i) - k_1(\theta )] \right\} \pi _1(\theta )d\theta } d\theta =(\mathcal {J}_{\Theta _1})^{-1} \mathcal {J}_{\mathcal {N}(\theta _T, \delta _T)} \end{aligned}$$
(3.2)

with \(\delta _T=\frac{\sigma _T(\beta _1-\alpha _1)}{2}\) and \(\theta _T=\hat{\theta }_T +\frac{\sigma _T(\alpha _1+\beta _1)}{2}\). Then, we want to prove that

$$\begin{aligned} (\mathcal {J}_{\Theta _1})^{-1} \mathcal {J}_{\mathcal {N}(\theta _T, \delta _T)} \rightarrow \Phi (\beta _1)-\Phi (\alpha _1)= \frac{1}{\sqrt{2\pi }}\int _{\alpha _1}^{\beta _1} e^{-\frac{x^2}{2}}dx \end{aligned}$$
(3.3)

in probability \([P_{\theta _0}]\), where \(\Phi (z)=\frac{1}{\sqrt{2\pi }} \int _{-\infty }^{z}e^{-\frac{s^2}{2}}ds\).

Let us split \(\mathcal {J}_{\Theta _1}\) into \(\mathcal {J}_{\Theta _1 \setminus \mathcal {N}(\theta _0, \delta )}\) and \(\mathcal {J}_{\mathcal {N}(\theta _0, \delta )}\). Then, to obtain the above result it is sufficient to prove that the following statements hold in probability \([P_{\theta _0}]\): for some \(\delta >0\),

  (a) \( \lim \limits _{ T \rightarrow \infty } [L_T^{(a)}(\hat{\theta }_T) \sigma _T]^{-1} \mathcal {J}_{\Theta _1 \setminus \mathcal {N}(\theta _0, \delta )} = 0 \),

  (b) \( \lim \limits _{ T \rightarrow \infty } [L_T^{(a)}(\hat{\theta }_T) \sigma _T]^{-1} \mathcal {J}_{\mathcal {N}(\theta _0, \delta )} = (2\pi )^\frac{1}{2} \pi _1(\theta _0) \),

  (c) \( \lim \limits _{ T \rightarrow \infty } [L_T^{(a)}(\hat{\theta }_T) \sigma _T]^{-1} \mathcal {J}_{\mathcal {N}(\theta _T, \delta _T)} = (2\pi )^\frac{1}{2} \pi _1(\theta _0) (\Phi (\beta _1)-\Phi (\alpha _1)) \).

Define

$$\begin{aligned} r_T(\theta )= - \frac{\ell _T^{\prime \prime }(\theta ) - \ell _T^{\prime \prime }(\hat{\theta }_T)}{\ell _T^{\prime \prime }(\hat{\theta }_T)} = 1 - \frac{\ell _T^{\prime \prime }(\theta ) / \ell _T^{\prime \prime }(\theta _0)}{\ell _T^{\prime \prime }(\hat{\theta }_T) / \ell _T^{\prime \prime }(\theta _0)}. \end{aligned}$$
(3.4)

If \(\theta \) belongs to \(\mathcal {N}(\theta _0,\delta )\) for some \(\delta >0\), then \(\ell _T^{\prime \prime }(\theta )/ \ell _T^{\prime \prime }(\theta _0)\) is close to 1 and, since \(\hat{\theta }_T \rightarrow \theta _0\) almost surely, \( \ell _T^{\prime \prime }(\hat{\theta }_T)/ \ell _T^{\prime \prime }(\theta _0)\) is almost surely close to 1 for T sufficiently large. Therefore, for given \(\varepsilon >0\), we can choose \(\delta \) such that, for T large enough,

$$\begin{aligned} \sup _{\theta \in \mathcal {N}(\theta _0,\delta )} |r_T(\theta )| < \varepsilon ~~[P_{\theta _0}]. \end{aligned}$$
(3.5)

Consider also

$$\begin{aligned} q_T(\theta )= - \frac{\ell _T(\theta )-\ell _T(\hat{\theta }_T)}{\ell _T^{\prime \prime }(\theta _0)} =\frac{(\theta -\hat{\theta }_T) \sum _{i=1}^{A(T)}h_1(u_i) - A(T)(k_1(\theta ) - k_1(\hat{\theta }_T)) }{A(T)k_1^{\prime \prime }(\theta _0)}. \end{aligned}$$

Since \(\ell _T(.)\) has a strict maximum at \(\hat{\theta }_T\), \(q_T(.)\) is negative on \(\Theta _1 \setminus \mathcal {N}(\theta _0,\delta )\) for T large enough. Moreover, since \(\hat{\theta }_T \rightarrow \theta _0\) almost surely, it can be shown that there exists a positive constant \(\kappa (\delta )\) such that

$$\begin{aligned} \sup _{\theta \in \Theta _1 \setminus \mathcal {N}(\theta _0,\delta )} q_T(\theta ) < - \kappa (\delta ) ~~ [P_{\theta _0}]. \end{aligned}$$
(3.6)

Now,

$$\begin{aligned}&[L_T^{(a)}(\hat{\theta }_T) \sigma _T]^{-1} \mathcal {J}_{\Theta _1 \setminus \mathcal {N}(\theta _0,\delta )} \\&\quad = [L_T^{(a)}(\hat{\theta }_T) \sigma _T]^{-1} \int _{\Theta _1 \setminus \mathcal {N}(\theta _0,\delta )} L_T^{(a)}(\theta )\pi _1(\theta ) d\theta \\&\quad = [L_T^{(a)}(\hat{\theta }_T) \sigma _T]^{-1} L_T^{(a)}(\hat{\theta }_T) \int _{\Theta _1\setminus \mathcal {N}(\theta _0,\delta )} \pi _1(\theta ) \text {exp} \{ \ell _T(\theta )-\ell _T(\hat{\theta }_T)\} d\theta \\&\quad =(-\ell _T^{\prime \prime }(\hat{\theta }_T))^\frac{1}{2} \int _{\Theta _1 \setminus \mathcal {N}(\theta _0,\delta )} \pi _1(\theta ) \text {exp} \{ q_T(\theta )(-\ell _T^{\prime \prime }(\theta _0))\} d\theta \\&\quad \le (-\ell _T^{\prime \prime }(\hat{\theta }_T))^\frac{1}{2} \text {exp} \{ -\kappa (\delta )(-\ell _T^{\prime \prime }(\theta _0))\} \quad \quad \text {(using Eq.~}(3.6)) \\&\quad = \frac{(-\ell _T^{\prime \prime }(\hat{\theta }_T))^\frac{1}{2}}{(-\ell _T^{\prime \prime }(\theta _0))^\frac{1}{2}} (-\ell _T^{\prime \prime }(\theta _0))^\frac{1}{2} \text {exp} \{ -\kappa (\delta )(-\ell _T^{\prime \prime }(\theta _0))\} ~ ~ [P_{\theta _0}]. \end{aligned}$$

We have that \(-\ell _T^{\prime \prime }(\theta _0)= A(T) k_1^{\prime \prime }(\theta _0)\) diverges to \(\infty \) almost surely as \(T \rightarrow \infty \). So, in the above expression

$$\begin{aligned} (-\ell _T^{\prime \prime }(\theta _0))^\frac{1}{2} \text {exp} \{ -\kappa (\delta )(-\ell _T^{\prime \prime }(\theta _0))\} \rightarrow 0 \end{aligned}$$

in probability and, using Eq. (3.5), for some constant M and T large enough

$$\begin{aligned} \frac{(-\ell _T^{\prime \prime }(\hat{\theta }_T))^\frac{1}{2}}{(-\ell _T^{\prime \prime }(\theta _0))^\frac{1}{2}} = \bigg ( \frac{1}{1-r_T(\theta _0)} \bigg )^\frac{1}{2} < M \end{aligned}$$

in probability and, consequently (a) holds.

Let us prove (b). Write

$$\begin{aligned} L_T^{(a)}(\theta )= L_T^{(a)}(\hat{\theta }_T) \text {exp} \{ \ell _T(\theta ) - \ell _T(\hat{\theta }_T)\}. \end{aligned}$$
(3.7)

Using Taylor expansion around \(\hat{\theta }_T\),

$$\begin{aligned} \ell _T(\theta )=\ell _T(\hat{\theta }_T)+ \frac{1}{2} (\theta - \hat{\theta }_T)^2 \ell _T^{\prime \prime }(\bar{\theta }_T) \end{aligned}$$
(3.8)

for \(\bar{\theta }_T=\theta +\xi (\hat{\theta }_T-\theta )\) with \(0<\xi <1\). Thus letting

$$\begin{aligned} R_T=R_T(\theta )=\sigma _T^2 \{ \ell _T^{\prime \prime }(\bar{\theta }_T) - \ell _T^{\prime \prime }(\hat{\theta }_T)\}, \end{aligned}$$

we have

$$\begin{aligned} -\frac{1-R_T}{\sigma _T^2}=\ell _T^{\prime \prime }(\bar{\theta }_T). \end{aligned}$$
(3.9)

Using Eqs. (3.8) and (3.9) in Eq. (3.7), for some \(\delta >0\) and T large enough such that \(\hat{\theta }_T \in \mathcal {N}(\theta _0,\delta )\), we have, for every \(\theta \in \mathcal {N}(\theta _0,\delta )\),

$$\begin{aligned} L_T^{(a)}(\theta )=L_T^{(a)}(\hat{\theta }_T) \text {exp} \bigg \{ - \frac{(\theta -\hat{\theta }_T)^2}{2\sigma _T^2}(1-R_T) \bigg \} ~~[P_{\theta _0}] \end{aligned}$$
(3.10)

and consequently,

$$\begin{aligned}{}[L_T^{(a)}(\hat{\theta }_T) \pi _1(\theta _0)]^{-1} \mathcal {J}_{\mathcal {N}(\theta _0,\delta )}= \int _{\mathcal {N}(\theta _0,\delta )} \frac{\pi _1(\theta )}{\pi _1(\theta _0)} \text {exp} \bigg \{ - \frac{(\theta -\hat{\theta }_T)^2}{2\sigma _T^2}(1-R_T) \bigg \} d\theta ~~[P_{\theta _0}] \end{aligned}$$
(3.11)

Since \(\pi _1(\theta )\) is continuous and positive at \(\theta =\theta _0\), for given \(0<\varepsilon <1\) we can choose \(\delta \) small enough so that

$$\begin{aligned} 1-\varepsilon< \inf _{\theta \in \mathcal {N}(\theta _0,\delta )} \frac{\pi _1(\theta )}{\pi _1(\theta _0)}< \sup _{\theta \in \mathcal {N}(\theta _0,\delta )} \frac{\pi _1(\theta )}{\pi _1(\theta _0)} < 1+\varepsilon . \end{aligned}$$
(3.12)

Denote

$$\begin{aligned} \tilde{\mathcal {J}}_B=\int _{B} \text {exp} \bigg \{ - \frac{(\theta -\hat{\theta }_T)^2}{2\sigma _T^2}(1-R_T) \bigg \}d\theta , \quad B \subseteq \Theta _1. \end{aligned}$$

Then from Eq. (3.12) we get that

$$\begin{aligned} (1-\varepsilon ) \tilde{\mathcal {J}}_{\mathcal {N}(\theta _0,\delta )}< [L_T^{(a)}(\hat{\theta }_T) \pi _1(\theta _0)]^{-1} \mathcal {J}_{\mathcal {N}(\theta _0,\delta )} < (1+\varepsilon ) \tilde{\mathcal {J}}_{\mathcal {N}(\theta _0,\delta )}. \end{aligned}$$
(3.13)

If \(\sup _{\theta \in \mathcal {N}(\theta _0,\delta )} |R_T|< \varepsilon <1\), then

$$\begin{aligned}&\int _{\mathcal {N}(\theta _0,\delta )} \text {exp} \bigg \{ - \frac{(\theta -\hat{\theta }_T)^2}{2\sigma _T^2}(1+\varepsilon ) \bigg \} d\theta \\&\quad< \tilde{\mathcal {J}}_{\mathcal {N}(\theta _0,\delta )} < \int _{\mathcal {N}(\theta _0,\delta )} \text {exp} \bigg \{ - \frac{(\theta -\hat{\theta }_T)^2}{2\sigma _T^2}(1-\varepsilon ) \bigg \}d\theta \end{aligned}$$

and for \(\eta =+\varepsilon \) or \(-\varepsilon \), making a change of variable,

$$\begin{aligned}&\int _{\mathcal {N}(\theta _0,\delta )} \text {exp} \bigg \{ - \frac{(\theta -\hat{\theta }_T)^2}{2\sigma _T^2}(1+\eta ) \bigg \} d\theta \nonumber \\&\quad = \frac{\sigma _T}{(1+\eta )^\frac{1}{2}} \int _{(\theta _0-\delta -\hat{\theta }_T)(1+\eta )^\frac{1}{2}\sigma _T^{-1}} ^{(\theta _0+\delta -\hat{\theta }_T)(1+\eta )^\frac{1}{2}\sigma _T^{-1}} e^{-\frac{x^2}{2}} dx \nonumber \\&\quad = (2\pi )^{\frac{1}{2}} \sigma _T (1+\eta )^{-\frac{1}{2}} \bigg [ \Phi \bigr \{ \sigma _T^{-1}(\theta _0+\delta -\hat{\theta }_T)(1+\eta )^{\frac{1}{2}} \bigr \}\nonumber \\&\qquad - \Phi \bigr \{ \sigma _T^{-1}(\theta _0-\delta -\hat{\theta }_T)(1+\eta )^{\frac{1}{2}} \bigr \} \bigg ]. \end{aligned}$$
(3.14)

Since \(\sigma _T^{-1} \rightarrow \infty \) and \(\hat{\theta }_T \rightarrow \theta _0\) almost surely, the limits of integration \((\theta _0-\delta -\hat{\theta }_T)(1+\eta )^\frac{1}{2}\sigma _T^{-1}\) and \((\theta _0+\delta -\hat{\theta }_T)(1+\eta )^\frac{1}{2}\sigma _T^{-1}\) in the above equation converge to \(-\infty \) and \(\infty \) respectively. Therefore, the term in square brackets in Eq. (3.14) converges to 1. Thus, since \(\sup _{\theta \in \mathcal {N}(\theta _0,\delta )} |R_T|<\varepsilon \) for T large enough, it follows that

$$\begin{aligned} (2\pi )^\frac{1}{2} (1+\varepsilon )^{-\frac{1}{2}}< \sigma _T^{-1} \tilde{\mathcal {J}}_{\mathcal {N}(\theta _0,\delta )} < (2\pi )^\frac{1}{2} (1-\varepsilon )^{-\frac{1}{2}} \end{aligned}$$

in probability as \(T \rightarrow \infty \) and, using the above expression with the Eq. (3.13) we have the following bounds for \(\mathcal {J}_{\mathcal {N}(\theta _0,\delta )}\):

$$\begin{aligned} (1+\varepsilon )^{-\frac{1}{2}}(1-\varepsilon )< \bigr [ L_T^{(a)}(\hat{\theta }_T) \pi _1(\theta _0) (2\pi )^{\frac{1}{2}} \sigma _T \bigr ]^{-1} \mathcal {J}_{\mathcal {N}(\theta _0,\delta )}< (1-\varepsilon )^{-\frac{1}{2}}(1+\varepsilon )~~[P_{\theta _0}]. \end{aligned}$$

Hence (b) holds.

Finally, let us show (c). Using the same arguments and notation as above, given \(\varepsilon >0\) there exists \(\delta \) such that \(\mathcal {N}(\theta _T,\delta _T) \subseteq \mathcal {N}(\theta _0,\delta )\) for T large enough, and then

$$\begin{aligned} (1-\varepsilon ) \tilde{\mathcal {J}}_{\mathcal {N}(\theta _T,\delta _T)}< [L_T^{(a)}(\hat{\theta }_T) \pi _1(\theta _0)]^{-1} \mathcal {J}_{\mathcal {N}(\theta _T,\delta _T)} < (1+\varepsilon ) \tilde{\mathcal {J}}_{\mathcal {N}(\theta _T,\delta _T)}~~[P_{\theta _0}] \end{aligned}$$

while, with \(\mathcal {N}(\theta _0,\delta )\) replaced by \(\mathcal {N}(\theta _T,\delta _T)\), the last expression in Eq. (3.14) becomes

$$\begin{aligned} (2\pi )^{\frac{1}{2}} \sigma _T (1+\eta )^{-\frac{1}{2}} \left[ \Phi (\beta _1(1+\eta )^{\frac{1}{2}}) - \Phi (\alpha _1(1+\eta )^{\frac{1}{2}}) \right] . \end{aligned}$$

Therefore, we obtain that

$$\begin{aligned}{}[L_T^{(a)}(\hat{\theta }_T) \sigma _T]^{-1} \mathcal {J}_{\mathcal {N}(\theta _T,\delta _T)} \rightarrow (2\pi )^{\frac{1}{2}} \pi _1(\theta _0)[\Phi (\beta _1)-\Phi (\alpha _1)]~~[P_{\theta _0}] \end{aligned}$$

and, combining (a), (b) and (c), \((\mathcal {J}_{\Theta _1})^{-1} \mathcal {J}_{\mathcal {N}(\theta _T, \delta _T)} \rightarrow \Phi (\beta _1)-\Phi (\alpha _1)\) in probability \([P_{\theta _0}]\), so that (3.3) is established.

Similarly, using the same arguments as in the above, it can be shown that

$$\begin{aligned} \int \limits _{\hat{\phi }_T + \alpha _2 \tau _T}^{\hat{\phi }_T + \beta _2 \tau _T} \frac{\text {exp}\left\{ \sum _{i=1}^{D(T)}[\phi h_2(v_i) - k_2(\phi )] \right\} \pi _2(\phi )}{\int _{\Theta _2}\text {exp}\left\{ \sum _{i=1}^{D(T)}[\phi h_2(v_i) - k_2(\phi )] \right\} \pi _2(\phi ) d\phi }d\phi \rightarrow \frac{1}{\sqrt{2\pi }} \int _{\alpha _2}^{\beta _2} e^{-\frac{y^2}{2}} dy \end{aligned}$$

in probability \([P_{\phi _0}]\) and the proof is completed. \(\square \)

4 Example

Let us consider an M / M / 1 queueing system. Under the Markovian set-up we have

$$\begin{aligned} f(u; \theta )=\theta e^{-\theta u} \quad \text {and} \quad g(v; \phi )= \phi e^{-\phi v}. \end{aligned}$$

So, the loglikelihood function is written as

$$\begin{aligned} \ell _T(\theta , \phi )=A(T)\text {log}\theta -\theta \sum _{i=1}^{A(T)} u_i + D(T)\text {log}\phi -\phi \sum _{i=1}^{D(T)} v_i \end{aligned}$$

and the MLEs are given by

$$\begin{aligned} \hat{\theta }_T=\left[ \frac{\sum _{i=1}^{A(T)}u_i}{A(T)}\right] ^{-1} \quad \text {and} \quad \hat{\phi }_T=\left[ \frac{\sum _{i=1}^{D(T)}v_i}{D(T)}\right] ^{-1}. \end{aligned}$$

Here \(\sigma _T=\left[ -\ell _T^{''}(\hat{\theta }_T)\right] ^{-\frac{1}{2}}=\frac{\sqrt{A(T)}}{\sum _{i=1}^{A(T)}u_i}\) and \(\tau _T=\left[ -\ell _T^{''}(\hat{\phi }_T)\right] ^{-\frac{1}{2}} =\frac{\sqrt{D(T)}}{\sum _{i=1}^{D(T)}v_i}\).

Let us assume that the conjugate prior distributions of \(\theta \) and \(\phi \) are gamma distributions with hyper-parameters \((a_1, b_1)\) and \((a_2, b_2)\), that is

$$\begin{aligned} \pi _1(\theta )= \frac{b_1^{a_1}}{\Gamma (a_1)} \theta ^{a_1-1} e^{-b_1 \theta } \quad \text {and} \quad \pi _2(\phi )= \frac{b_2^{a_2}}{\Gamma (a_2)} \phi ^{a_2-1} e^{-b_2 \phi } \end{aligned}$$

where \(a_i, b_i > 0\) for \(i=1,2\).

Then, the posterior distribution of \(\theta \) can be computed as:

$$\begin{aligned}&\pi _1(\theta | u_i; ~i=1,2,\ldots ,A(T))\\&\quad = \frac{L_T^{(a)}(\theta ) \pi _1(\theta )}{\int _{\Theta _1} L_T^{(a)}(\theta ) \pi _1(\theta ) d\theta } \\&\quad = \frac{\theta ^{A(T)+a_1-1} e^{-\left( \sum _{i=1}^{A(T)}u_i + b_1\right) \theta }}{\int _0^{\infty } \theta ^{A(T)+a_1-1} e^{-\left( \sum _{i=1}^{A(T)}u_i + b_1\right) \theta } d\theta } \\&\quad = \frac{\left( \sum _{i=1}^{A(T)}u_i + b_1\right) ^{A(T)+ a_1}}{\Gamma \left( A(T)+ a_1\right) } \theta ^{A(T)+ a_1-1} e^{-\left( \sum _{i=1}^{A(T)}u_i + b_1\right) \theta }. \end{aligned}$$

Similarly,

$$\begin{aligned}&\pi _2(\phi | v_i; ~i=1,2,\ldots ,D(T))\\&\quad = \frac{\left( \sum _{i=1}^{D(T)}v_i + b_2\right) ^{D(T)+ a_2}}{\Gamma \left( D(T)+ a_2\right) } \phi ^{D(T)+ a_2-1} e^{-\left( \sum _{i=1}^{D(T)}v_i + b_2\right) \phi }. \end{aligned}$$

It is easy to see that the posterior means, and hence the Bayes estimators under squared error loss, are

$$\begin{aligned} \tilde{\theta }_T = \frac{A(T)+a_1}{\sum _{i=1}^{A(T)}u_i + b_1} \quad \text {and} \quad \tilde{\phi }_T = \frac{D(T)+a_2}{\sum _{i=1}^{D(T)}v_i + b_2}. \end{aligned}$$

Here, the posterior distributions of \(\theta \) and \(\phi \) are gamma distributions, \(\text {Gamma}(A(T)+a_1, \sum _{i=1}^{A(T)}u_i+b_1)\) and \(\text {Gamma}(D(T)+a_2, \sum _{i=1}^{D(T)}v_i+b_2)\) respectively. Hence, by the central limit theorem, the suitably standardised joint posterior distribution converges to a normal distribution as \(T \rightarrow \infty \), in agreement with Theorem 3.1.
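
Since the posterior here is an exact gamma distribution, Theorem 3.1 can be checked directly: the posterior probability of the interval \((\hat{\theta }_T + \alpha _1 \sigma _T, \hat{\theta }_T + \beta _1 \sigma _T)\) should approach \(\Phi (\beta _1) - \Phi (\alpha _1)\). A sketch follows, with an illustrative value of A(T) standing in for a long observation window:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a1, b1, theta0 = 1.5, 2.5, 1.0
A = 5000                                  # stands in for A(T), T large
u = rng.exponential(1.0 / theta0, size=A)
S = u.sum()

theta_hat = A / S                         # MLE
sigma_T = np.sqrt(A) / S                  # [-l''(theta_hat)]^{-1/2}
alpha, beta = -1.0, 1.0

post = stats.gamma(a=A + a1, scale=1.0 / (S + b1))   # exact posterior
exact = (post.cdf(theta_hat + beta * sigma_T)
         - post.cdf(theta_hat + alpha * sigma_T))
limit = stats.norm.cdf(beta) - stats.norm.cdf(alpha)
print(exact, limit)                       # nearly equal, as Theorem 3.1 predicts
```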

Table 1 For \((\theta _0, \phi _0)=(1,2)\), \((a_1, b_1)=(1.5, 2.5)\) and \((a_2, b_2)=(3, 3.5)\) calculation of MLEs, Bayes estimators and their standard errors
Table 2 For \((\theta _0, \phi _0)=(2,3)\), \((a_1, b_1)=(1.5, 2.5)\) and \((a_2, b_2)=(3, 3.5)\) calculation of MLEs, Bayes estimators and their standard errors
Table 3 For \((\theta _0, \phi _0)=(1,2)\), \((a_1, b_1)=(3, 5)\) and \((a_2, b_2)=(4, 5.5)\) calculation of MLEs, Bayes estimators and standard errors

5 Simulation

To examine the feasibility of the main result discussed in Sect. 3, a simulation study was conducted for the M / M / 1 queueing system. For given values of the true parameters \(\theta _0\) and \(\phi _0\), the MLEs \(\hat{\theta }_T\) and \(\hat{\phi }_T\) are computed over different time intervals (0, T]. Also, by choosing different values of the hyper-parameters of the gamma prior distributions, we compute the Bayes estimators \(\tilde{\theta }_T\) and \(\tilde{\phi }_T\) of \(\theta \) and \(\phi \). Here, we consider two pairs of true values of the parameters \((\theta _0, \phi _0)\), namely (1, 2) and (2, 3). For the hyper-parameters we take \((a_1, b_1)=(1.5, 2.5)\), \((a_2, b_2)=(3, 3.5)\) and \((a_1, b_1)=(3, 5)\), \((a_2, b_2)=(4, 5.5)\). The simulation procedure is repeated 10,000 times to estimate the parameters. The computed values of the estimators and their respective standard errors are presented in Tables 1, 2 and 3; the values in parentheses indicate the standard errors.
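
A hedged sketch of this simulation study follows (fixed sample sizes stand in for A(T) and D(T), so the full arrival–departure mechanism of Sect. 2 is not rerun here; the replication count matches the 10,000 used above):

```python
import numpy as np

rng = np.random.default_rng(123)
theta0, phi0 = 1.0, 2.0                    # true parameter pair of Table 1
a1, b1, a2, b2 = 1.5, 2.5, 3.0, 3.5        # hyper-parameters of Table 1
A, D, R = 500, 480, 10_000                 # sample sizes and replications

mles, bayes = [], []
for _ in range(R):
    u = rng.exponential(1.0 / theta0, size=A)   # interarrival times
    v = rng.exponential(1.0 / phi0, size=D)     # service times
    mles.append((A / u.sum(), D / v.sum()))
    bayes.append(((A + a1) / (u.sum() + b1), (D + a2) / (v.sum() + b2)))

for name, est in (("MLE", np.array(mles)), ("Bayes", np.array(bayes))):
    mean, se = est.mean(axis=0), est.std(axis=0)
    print(f"{name}: {mean.round(4)} (SE {se.round(4)})")
```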

6 Concluding Remarks

In the simulation study we presented the estimates obtained by the proposed methods. The estimators are quite close to the true parameter values, and their standard errors are negligible.