1 Introduction

On several occasions, customers seeking a particular type of service are met with another type that is not desirable. For example, consider a person admitted to a hospital: he or she may be wrongly diagnosed and, as a result, receive an unwanted treatment. Another example is a mechanic who wrongly diagnoses a car problem and offers an unwanted service. The reader can surely think of several other examples from day-to-day life. In many such cases, it is probable that the unwanted service offered after a wrong diagnosis finishes without lasting damage and the customer then receives the required service. These real-world phenomena have motivated us to develop a queuing model, which we describe as follows. We consider a k-out-of-n system whose components are subject to failure. When selected for repair, with probability p, the failed component may receive an unwanted service (which we call the inessential service). If the inessential service finishes before a random duration expires, we assume that the component has overcome the wrong diagnosis; it then proceeds to the essential service and becomes as good as new thereafter. If the random duration expires while the component is still receiving the unwanted service, we assume that its ongoing service is stopped and it is replaced with a new component. Our model differs from the vacation queuing models studied in Madan et al. [1] and Ayyappan et al. [2], where at each service start a customer can choose one of two kinds of services: in our model, a customer selected for the inessential service may leave the system either without completing any service or after receiving both types of service. Saravanarajan and Chandrasekaran [3] study a vacation queuing model with system breakdowns, where customers can choose one of the two types of service offered.
Their model allows a customer to remain in the system for another service, joining the tail of the queue after a service completion. However, in that model a customer can neither receive two back-to-back services of different kinds nor leave the system in the middle of an ongoing service. Hence our model differs from this model as well.

This paper is arranged as follows: Sect. 2 gives the technical description of the model; Sect. 3 derives the steady state distribution; Sect. 4 presents several important system performance measures; and Sect. 5 reports the results of a numerical study of these measures.

2 Technical Description of the Model

We consider a k-out-of-n system with a single-server repair facility. At the epoch the system starts, all components are operational. When i components are operational, the lifetime of a component is exponentially distributed with parameter \(\lambda /i\). Failed components are served in the order of their arrival. When a component is selected for repair, it is taken for an inessential service with probability p, and with probability \((1-p)\) it is taken for the desired service, called the essential service. Once the inessential service starts, the failed component either completes that service and moves on to the essential service, or is replaced by a new component. A random clock starts ticking the moment the inessential service begins, and it decides which event follows: if the clock realises first (while the inessential service is still in progress), the failed component's ongoing service is stopped and it is replaced with a new component; if the inessential service is completed before the clock realises, the component immediately moves to the essential service. After a successful repair (the essential service), the component is assumed to be as good as a new component.

The essential service time of a failed component is exponentially distributed with parameter \(\mu \), and the inessential service time follows a phase type distribution with representation \((\alpha , S)\) of order m, with exit rate vector \(S^0=-S\pmb {e}\).

The random clock time is assumed to be exponentially distributed with parameter \(\delta \).
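The race between the inessential service and the random clock can be illustrated numerically. The sketch below (Python, using the illustrative phase type data from the numerical study in Sect. 5, which we assume here purely for demonstration) estimates by simulation the probability that the inessential service completes before the clock realises, and compares it with the closed form \(\alpha (\delta I - S)^{-1}S^0\), the Laplace–Stieltjes transform of the phase type distribution evaluated at \(\delta\).

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative values taken from the numerical study in Sect. 5
S = np.array([[-18., 4., 6.],
              [5., -18., 5.],
              [7., 4., -19.]])
alpha = np.array([0.4, 0.3, 0.3])
delta = 5.0                     # rate of the random clock
S0 = -S @ np.ones(3)            # exit rate vector S^0 = -S e

def ph_sample():
    """Draw one inessential service time from the PH distribution (alpha, S)."""
    t, j = 0.0, rng.choice(3, p=alpha)
    while True:
        rate = -S[j, j]
        t += rng.exponential(1.0 / rate)
        # next phase, or absorption (index 3) meaning service completion
        nxt = rng.choice(4, p=np.append(np.clip(S[j], 0, None), S0[j]) / rate)
        if nxt == 3:
            return t
        j = nxt

# P(inessential service completes before the clock) = alpha (delta I - S)^{-1} S^0
exact = alpha @ np.linalg.solve(delta * np.eye(3) - S, S0)
sim = np.mean([ph_sample() < rng.exponential(1.0 / delta) for _ in range(20000)])
print(exact, sim)               # the two values should agree closely
```

For these particular parameters \((\delta I - S)\pmb{e} = 13\,\pmb{e}\) and \(S^0 = 8\,\pmb{e}\), so the exact probability is \(8/13 \approx 0.615\); the simulation estimate should agree to about two decimal places.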

2.1 The Markov Chain

Let \(N(t)\) denote the number of failed components in the system at time t and

$$ J(t) ={\left\{ \begin{array}{ll} 0, \text {if the failed component is getting essential service},\\ i, \text {if a failed component is getting the } i^{\text {th}} \text { phase of inessential service}, \end{array}\right. } $$

where \( i=1,2,\ldots ,m\).

Then \(\{X(t),t\ge 0\}\) where \(X(t)=\left( N(t),J(t)\right) \) is a continuous time Markov chain with state space \(\{(0,0)\}\cup \{1,2,\ldots ,n-k+1\}\times \{0,1,2,\ldots ,m\}\).

The generator matrix of the Markov chain \(\{X(t),t\ge 0\}\) is

$$\begin{aligned} Q = \begin{bmatrix} A_{00}&B_{0}&&&\\ B_{1}&A_{1}&A_0&&\\&A_{2}&A_{1}&A_0&&\\&\cdot&\cdot&\cdot&\\&&\cdot&\cdot&\cdot \\&&A_{2}&A_{1}&A_0 \\&&&A_{2}&\widetilde{A}_{1} \end{bmatrix} \end{aligned}$$
$$\begin{aligned} A_{00}&= [-\lambda ]; B_0 = \begin{bmatrix}(1-p)\lambda&p\lambda \alpha \end{bmatrix};\; B_1=\begin{bmatrix}\mu \\\delta e \end{bmatrix}\\ A_1&= \begin{bmatrix} -(\mu +\lambda )&0 \\ S^0&S-(\delta +\lambda )I_m \end{bmatrix}; A_0 = [\lambda I_{m+1}]; A_2 = \begin{bmatrix} (1-p)\mu&p\mu \alpha \\ (1-p)\delta \,\pmb {e}&p\delta \,\pmb {e}\alpha \end{bmatrix}\\ \widetilde{A}_1&= \begin{bmatrix} -\mu&0 \\ S^0&S-\delta I_m \end{bmatrix} \end{aligned}$$

where \(\alpha =(\alpha _1,\alpha _2,\ldots ,\alpha _m)\) with \(\alpha _1+\alpha _2+\ldots +\alpha _m=1\).

We also define \(\beta =((1-p)\quad p\alpha ).\)
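As a sanity check on these definitions, the identities used repeatedly in Sect. 3, namely \(B_1=A_2\,\pmb{e}\), \(A_2=A_2\,\pmb{e}\beta\), \((A_1+A_2)\,\pmb{e}=-\lambda\,\pmb{e}\) and the generator property of \(A_1+A_2+\lambda\,\pmb{e}\beta\), can be verified numerically. A minimal Python sketch, with assumed illustrative parameter values (the phase type data matches Sect. 5):

```python
import numpy as np

# Assumed illustrative values; any valid (alpha, S), lam, mu, p, delta work
lam, mu, p, delta = 4.0, 3.2, 0.3, 5.0
S = np.array([[-18., 4., 6.], [5., -18., 5.], [7., 4., -19.]])
alpha = np.array([[0.4, 0.3, 0.3]])     # initial-phase distribution, row vector
m = 3
e = np.ones((m + 1, 1))
S0 = -S @ np.ones((m, 1))               # S^0 = -S e

A1 = np.block([[-(mu + lam) * np.ones((1, 1)), np.zeros((1, m))],
               [S0, S - (delta + lam) * np.eye(m)]])
A2 = np.block([[(1 - p) * mu * np.ones((1, 1)), p * mu * alpha],
               [(1 - p) * delta * np.ones((m, 1)),
                p * delta * np.ones((m, 1)) @ alpha]])
B1 = np.vstack([np.array([[mu]]), delta * np.ones((m, 1))])
beta = np.hstack([np.array([[1 - p]]), p * alpha])

# Identities used repeatedly in Sect. 3
print(np.allclose(B1, A2 @ e))                         # B1 = A2 e        -> True
print(np.allclose(A2, A2 @ e @ beta))                  # A2 = A2 e beta   -> True
print(np.allclose((A1 + A2) @ e, -lam * e))            # (A1+A2) e = -lam e -> True
print(np.allclose((A1 + A2 + lam * e @ beta) @ e, 0))  # zero row sums    -> True
```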

3 Steady State Distribution

Since the state space of this system is finite, the system is stable. Let

$$ \pmb {\pi }= (\pi (0),\pi (1),\ldots ,\pi (n-k+1)) $$

with

$$ \pi (i)= (\pi (i,0),\pi (i,1),\pi (i,2),\ldots ,\pi (i,m)), \,1\le i\le n-k+1 $$

be the steady state probability vector of the system \(\{X(t),t\ge 0\}\). Then it satisfies the equations \(\pmb {\pi } Q=0\) and \(\pmb {\pi }\,\pmb {e}=1\).

The equation \(\pmb {\pi } Q=0\) gives rise to

$$\begin{aligned} \pi (0) A_{00}+\pi (1)B_1&= 0 \end{aligned}$$
(1)
$$\begin{aligned} \pi (0) B_{0}+\pi (1)A_1 + \pi (2) A_2&= 0 \end{aligned}$$
(2)
$$\begin{aligned} \pi (i-1)A_0 + \pi (i)A_1+ \pi (i+1)A_2&= 0,2\le i\le n-k \end{aligned}$$
(3)
$$\begin{aligned} \pi (n-k)A_0 + \pi (n-k+1)\widetilde{A}_1&= 0. \end{aligned}$$
(4)

Since \(A_{00}=[-\lambda ]\) and \(B_1=A_2\,\pmb {e}\), from (1) it follows that

$$\begin{aligned} \lambda \pi (0) = \pi (1) A_2\,\pmb {e}. \end{aligned}$$
(5)

Since \(B_0=\lambda \beta \), equation (2) becomes

$$\begin{aligned} \pi (0)\lambda \beta + \pi (1)A_1+\pi (2)A_2 = 0. \end{aligned}$$
(6)

Using (5) we can write this equation as

$$\begin{aligned} \pi (1) B_1 \beta + \pi (1) A_1+\pi (2) A_2 =0. \end{aligned}$$
(7)

We notice that \(B_1\beta =A_2\) and hence equation (7) becomes

$$\begin{aligned} \pi (1)(A_1+A_2)+\pi (2)A_2 = 0. \end{aligned}$$
(8)

Post multiplying equation (8) with \(\pmb {e}\), we get

$$\begin{aligned} \pi (1)(A_1+A_2)\,\pmb {e} + \pi (2) A_2 \,\pmb {e} = 0 \end{aligned}$$
(9)

but \((A_1+A_2)\,\pmb {e}=-A_0\,\pmb {e}=-\lambda \,\pmb {e}\). Hence (9) becomes

$$\begin{aligned} \pi (1)\lambda \,\pmb {e} = \pi (2)A_2\,\pmb {e}. \end{aligned}$$
(10)

We notice that \(A_2=A_2\,\pmb {e}\beta \), which transforms equation (8) into

$$\begin{aligned} \pi (1)(A_1+A_2)+\pi (2)A_2\,\pmb {e}\beta = 0. \end{aligned}$$
(11)

Substituting for \(\pi (2)A_2\,\pmb {e}\) from (10) in (11), we get

$$ \pi (1)(A_1+A_2)+\pi (1)\lambda \,\pmb {e}\beta = 0. $$

That is

$$\begin{aligned} \pi (1)(A_1+A_2+\lambda \,\pmb {e}\beta ) = 0. \end{aligned}$$
(12)

Equation (12) shows that \(\pi (1)\) is a constant multiple of the steady state vector \(\pmb {\varphi }\) of the generator matrix \(A_1+A_2+\lambda \,\pmb {e}\beta \). That is

$$\begin{aligned} \pi (1) = \eta \,\pmb {\varphi } \end{aligned}$$
(13)

where \(\eta \) is a constant.

Equation (3) for \(i=2\) gives

$$\begin{aligned} \pi (1)A_0 +\pi (2)A_1+ \pi (3)A_2 =0. \end{aligned}$$
(14)

Since \(A_2=A_2\,\pmb {e}\beta \), equation (14) becomes

$$\begin{aligned} \pi (1)A_0+\pi (2)A_1+\pi (3)A_2\,\pmb {e}\beta = 0. \end{aligned}$$
(15)

Post multiplying with \(\pmb {e}\), we get

$$\begin{aligned} \pi (1)\lambda \,\pmb {e}+\pi (2) A_1\,\pmb {e} + \pi (3)A_2\,\pmb {e} = 0. \end{aligned}$$
(16)

Using (10) the above equation can be written as

$$\begin{aligned} \pi (2)A_2\,\pmb {e} +\pi (2)A_1\,\pmb {e}+\pi (3)A_2\,\pmb {e}&= 0\nonumber \\ \text {i.e.,}\quad \pi (2)(A_1+A_2)\,\pmb {e}&= -\pi (3)A_2\,\pmb {e}\nonumber \\ \text {i.e.,}\quad \pi (2)\lambda \,\pmb {e}&= \pi (3)A_2\,\pmb {e}. \end{aligned}$$
(17)

In the light of equation (17), equation (15) becomes,

$$\begin{aligned} \pi (1)A_0+\pi (2)A_1+\pi (2)\lambda \,\pmb {e}\beta&= 0\\ \text {i.e.,}\quad \pi (1)A_0+\pi (2)(A_1+\lambda \,\pmb {e}\beta )&= 0 \end{aligned}$$

which implies that

$$ \pi (2) = -\pi (1)A_0(A_1+\lambda \,\pmb {e}\beta )^{-1}. $$

That is

$$\begin{aligned} \pi (2) = -\eta \,\pmb {\varphi } A_0(A_1+\lambda \,\pmb {e}\beta )^{-1}. \end{aligned}$$
(18)

Post-multiplying equation (3) with \(\pmb {e}\) and proceeding in the same lines as we derived equation (17), we can derive that

$$\begin{aligned} \pi (i+1)A_2\,\pmb {e} = \pi (i)\lambda \,\pmb {e},\text { for } 3\le i\le n-k. \end{aligned}$$
(19)

Equation (19) then transforms equation (3) into

$$ \pi (i-1)A_0 +\pi (i)A_1 + \pi (i)\lambda \,\pmb {e}\beta =0, 3\le i\le n-k, $$

which implies that

$$\begin{aligned} \pi (i) = -\pi (i-1)A_0(A_1+\lambda \,\pmb {e}\beta )^{-1}, 2\le i\le n-k \end{aligned}$$
(20)

which in turn gives

$$\begin{aligned} \pi (i) = (-1)^{i-1} \eta \,\pmb {\varphi } (A_0(A_1+\lambda \,\pmb {e}\beta )^{-1})^{i-1}, 2\le i\le n-k. \end{aligned}$$
(21)

We notice that \(\widetilde{A}_1\,\pmb {e}=-A_2\,\pmb {e}\); post-multiplying equation (4) with \(\pmb {e}\), we get

$$\begin{aligned} \pi (n-k)\lambda \,\pmb {e} = \pi (n-k+1)A_2\,\pmb {e}. \end{aligned}$$
(22)

From equation (4), we can also write

$$\begin{aligned} \pi (n-k+1) = -\pi (n-k)A_0(\widetilde{A}_1)^{-1}. \end{aligned}$$
(23)

Using (21) for \(i=n-k\), (23) becomes

$$\begin{aligned} \pi (n-k+1) = (-1)^{n-k}\eta \,\pmb {\varphi } (A_0(A_1+\lambda \,\pmb {e}\beta )^{-1} )^{n-k-1} A_0(\widetilde{A}_1)^{-1}. \end{aligned}$$
(24)

Hence, we have the following theorem for the steady state distribution:

Theorem 1

The steady state distribution \(\pmb {\pi }= (\pi (0),\pi (1),\ldots ,\pi (n-k+1))\) of the Markov chain \(\{X(t),t\ge 0\}\) is given by

$$\begin{aligned} \pi (0)&= \frac{1}{\lambda } \eta \,\pmb {\varphi } B_1,\\ \pi (1)&= \eta \,\pmb {\varphi },\\ \pi (i)&= (-1)^{i-1} \eta \,\pmb {\varphi } (A_0(A_1+\lambda \,\pmb {e}\beta )^{-1} )^{i-1}, 2\le i\le n-k\\ \pi (n-k+1)&= (-1)^{n-k} \eta \,\pmb {\varphi } (A_0(A_1+\lambda \,\pmb {e}\beta )^{-1} )^{n-k-1} A_0(\widetilde{A}_1)^{-1}, \end{aligned}$$

where \(\pmb {\varphi }\) is the steady state vector of the generator matrix \(A_1+A_2+\lambda \,\pmb {e}\beta \) and \(\eta \) is a constant, which can be found from the normalizing condition \(\pmb {\pi }\,\pmb {e}=1\).
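Theorem 1 lends itself to direct computation. The Python sketch below, for a small assumed instance (n = 4, k = 2; phase type data as in the numerical study of Sect. 5, with \(p=0.3\) chosen purely for illustration), computes \(\pmb{\varphi}\), runs the recursion of Theorem 1 and normalises; the balance equations (1)-(4) can then be checked numerically against the result.

```python
import numpy as np

# Assumed illustrative values; n = 4, k = 2 gives levels 0, 1, ..., n-k+1 = 3
lam, mu, p, delta = 4.0, 3.2, 0.3, 5.0
S = np.array([[-18., 4., 6.], [5., -18., 5.], [7., 4., -19.]])
alpha = np.array([0.4, 0.3, 0.3])
m, nk = 3, 2                             # nk = n - k

e = np.ones((m + 1, 1))
S0 = (-S @ np.ones(m)).reshape(-1, 1)    # S^0 = -S e
beta = np.concatenate(([1 - p], p * alpha)).reshape(1, m + 1)
A0 = lam * np.eye(m + 1)
A1 = np.block([[np.array([[-(mu + lam)]]), np.zeros((1, m))],
               [S0, S - (delta + lam) * np.eye(m)]])
A2 = np.block([[np.array([[(1 - p) * mu]]), p * mu * alpha[None, :]],
               [(1 - p) * delta * np.ones((m, 1)),
                p * delta * np.ones((m, 1)) @ alpha[None, :]]])
A1t = np.block([[np.array([[-mu]]), np.zeros((1, m))],
                [S0, S - delta * np.eye(m)]])
B1 = A2 @ e                              # B1 = A2 e

# phi: stationary vector of the generator A1 + A2 + lam e beta
G = A1 + A2 + lam * e @ beta
phi = np.linalg.lstsq(np.vstack([G.T, np.ones((1, m + 1))]),
                      np.concatenate([np.zeros(m + 1), [1.0]]), rcond=None)[0]

# Recursion of Theorem 1 (eta fixed afterwards by the normalising condition)
R = A0 @ np.linalg.inv(A1 + lam * e @ beta)
levels = [phi]                           # pi(1), up to the factor eta
for i in range(2, nk + 1):
    levels.append(-levels[-1] @ R)       # pi(i) = -pi(i-1) A0 (A1+lam e beta)^{-1}
levels.append(-levels[-1] @ A0 @ np.linalg.inv(A1t))   # boundary level n-k+1
pi0 = (phi @ B1).item() / lam            # pi(0) = eta phi B1 / lam, before scaling
eta = 1.0 / (pi0 + sum(v.sum() for v in levels))
pi = [eta * pi0] + [eta * v for v in levels]
print(pi)
```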

4 System Performance Measures

  1. Fraction of time the system is down:

    $$ P_{\text {down}} = \sum ^m_{j=0} \pi (n-k+1,j). $$
  2. System reliability:

    $$ P_{\text {rel}} = 1-P_{\text {down}} = 1-\sum ^m_{j=0} \pi (n-k+1,j). $$
  3. Average number of failed components in the system:

    $$ N_{\text {fail}} = \sum ^{n-k+1}_{i=1} i \left( \sum ^m_{j=0} \pi (i,j)\right) . $$
  4. Expected rate at which failed components are taken for essential service:

    $$ E_{\text {es}} = (1-p)\lambda \pi (0) + \sum ^{n-k+1}_{i=2} (1-p)\mu \pi (i,0)+ \sum ^{n-k+1}_{i=2} (1-p)\delta \left( \sum ^{m}_{j=1} \pi (i,j)\right) . $$
  5. Expected rate at which failed components are taken for inessential service:

    $$ E_{\text {in}\,\text {es}} = p\lambda \pi (0) + \sum ^{n-k+1}_{i=2} p\mu \pi (i,0)+ \sum ^{n-k+1}_{i=2} p\delta \left( \sum ^{m}_{j=1} \pi (i,j)\right) . $$
  6. Expected rate at which new components are bought (replacement rate):

    $$ E_{\text {C.R}} = \sum ^{n-k+1}_{i=1} \delta \left( \sum ^{m}_{j=1} \pi (i,j)\right) . $$
  7. Expected rate at which failed components that start with the inessential service move to the essential service before the clock realises:

    $$ E_{\text {IN}\,\text {E}} = \sum ^{n-k+1}_{i=1} \sum _{j=1}^{m} \pi (i,j)\, S^{0}_{j}, $$

    where \(S^0_j\) denotes the \(j^{\text {th}}\) entry of \(S^0\).
  8. Fraction of time the server is idle:

    $$ P_{\text {idle}} = \pi (0). $$
  9. Fraction of time the server is busy:

    $$ P_{\text {busy}} = 1-\pi (0). $$
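For a concrete instance, these measures can be evaluated from a stationary vector obtained by solving \(\pmb{\pi}Q=0\), \(\pmb{\pi}\,\pmb{e}=1\) directly. A Python sketch for a small assumed instance (n = 4, k = 2; \(p=0.3\) and the remaining values as in Sect. 5, chosen purely for illustration):

```python
import numpy as np

# Assumed illustrative instance: n = 4, k = 2, so levels 1..L with L = n-k+1 = 3
lam, mu, p, delta = 4.0, 3.2, 0.3, 5.0
S = np.array([[-18., 4., 6.], [5., -18., 5.], [7., 4., -19.]])
alpha = np.array([0.4, 0.3, 0.3])
m, L = 3, 3
S0 = -S @ np.ones(m)

# Block matrices of the generator Q (Sect. 2.1)
A0 = lam * np.eye(m + 1)
A1 = np.block([[np.array([[-(mu + lam)]]), np.zeros((1, m))],
               [S0[:, None], S - (delta + lam) * np.eye(m)]])
A2 = np.block([[np.array([[(1 - p) * mu]]), p * mu * alpha[None, :]],
               [(1 - p) * delta * np.ones((m, 1)),
                p * delta * np.ones((m, 1)) @ alpha[None, :]]])
A1t = np.block([[np.array([[-mu]]), np.zeros((1, m))],
                [S0[:, None], S - delta * np.eye(m)]])
B0 = np.concatenate(([(1 - p) * lam], p * lam * alpha))[None, :]
B1 = A2 @ np.ones((m + 1, 1))

dim = 1 + L * (m + 1)
Q = np.zeros((dim, dim))
Q[0, 0] = -lam
Q[0:1, 1:m + 2] = B0
off = lambda i: 1 + (i - 1) * (m + 1)        # start index of level i >= 1
Q[off(1):off(2), 0:1] = B1
for i in range(1, L + 1):
    Q[off(i):off(i) + m + 1, off(i):off(i) + m + 1] = A1t if i == L else A1
    if i < L:
        Q[off(i):off(i) + m + 1, off(i + 1):off(i + 1) + m + 1] = A0
    if i > 1:
        Q[off(i):off(i) + m + 1, off(i - 1):off(i - 1) + m + 1] = A2

# Stationary distribution: pi Q = 0, pi e = 1
pi = np.linalg.lstsq(np.vstack([Q.T, np.ones((1, dim))]),
                     np.concatenate([np.zeros(dim), [1.0]]), rcond=None)[0]
pi0, piL = pi[0], pi[1:].reshape(L, m + 1)   # piL[i-1, j] = pi(i, j)

P_down = piL[-1].sum()
N_fail = sum((i + 1) * piL[i].sum() for i in range(L))
E_es   = (1 - p) * (lam * pi0 + mu * piL[1:, 0].sum() + delta * piL[1:, 1:].sum())
E_ines = p * (lam * pi0 + mu * piL[1:, 0].sum() + delta * piL[1:, 1:].sum())
E_CR   = delta * piL[:, 1:].sum()
E_INE  = (piL[:, 1:] @ S0).sum()
print(P_down, N_fail, E_es, E_ines, E_CR, E_INE)
```

A useful consistency check: the flow into the inessential-service states must balance the flow out, so \(E_{\text{in}\,\text{es}} = E_{\text{C.R}} + E_{\text{IN}\,\text{E}}\) should hold up to rounding.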

5 Numerical Study of the System Performance Measures

Notice that a component selected for inessential service is either replaced by a new component (if the random clock realises before the inessential service completes) or gets repaired (if the inessential service completes before the random clock realises). Hence selection for inessential service with probability p affects system reliability only through an increase in the repair time by a random amount (the minimum of the inessential service time and the random clock time). Table 1 shows that the system maintains very high reliability, which decreases slightly as p, the probability that a failed component initially receives the undesired service, increases. The decrease in the average rate at which components directly receive essential service as p increases, seen in Table 2, was expected. By the modelling assumption, if the random clock expires during an inessential service, the component receiving it is replaced with a new component. Hence, as p increases, more components are selected for inessential service, which leads to the increase in the replacement rate seen in Table 3.

Table 1. Variation in system reliability
Table 2. Average rate at which components are taken for essential service
Table 3. Average rate at which components were bought

Since the inessential service does not help the system in any way, one would expect the optimal value of the probability p to be zero. However, in situations where the possibility of inessential service cannot be avoided, one would like to quantify its harm. For this purpose, we have constructed a cost function as follows:

Let \(C_1\) be the cost per unit time incurred when the system is down; \(C_2\) the repair cost per unit time per failed component for essential service; \(C_3\) the cost incurred for the time lost due to wrong diagnosis of failed components and the consequent realisation of the random clock before the inessential service completes; \(C_4\) the extra cost incurred on failed components that start with the inessential service and subsequently move to the essential service before the clock realises; and \(C_5\) the repair cost per unit time for inessential service.

The expected cost per unit time is then

$$ C_1\cdot P_{\text {down}} + C_2 \cdot E_{\text {es}} + C_3 \cdot E_{\text {C.R}} + C_4\cdot E_{\text {IN}\,\text {E}}+ C_5 \cdot E_{\text {in}\,\text {es}}. $$
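Given the five performance measures, the cost function is a simple linear combination. A small sketch, with the cost coefficients of Table 4 as defaults; the measure values in the usage example are purely hypothetical placeholders, not taken from the paper's tables:

```python
def expected_cost(P_down, E_es, E_CR, E_INE, E_ines,
                  C1=9500, C2=2600, C3=4000, C4=1600, C5=3000):
    """Expected cost per unit time from the five cost components
    (default coefficients are those used for Table 4)."""
    return C1 * P_down + C2 * E_es + C3 * E_CR + C4 * E_INE + C5 * E_ines

# Hypothetical measure values, purely for illustration
print(expected_cost(P_down=0.001, E_es=2.0, E_CR=0.5, E_INE=0.8, E_ines=1.3))
```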

Table 4 presents the variation in the cost function as the probability p increases, for different component failure rates. In all cases studied, the optimal value of p was found to be zero, as expected. The table also shows that the cost function increases with the component failure rate.

General parameters for Tables 1, 2 and 3 are as follows: \(\lambda = 4\), \(\mu =3.2\), \(\delta =5\), \(S=\begin{bmatrix} -18&4&6\\ 5&-18&5\\ 7&4&-19\end{bmatrix}\), \(\alpha =(0.4,0.3,0.3)\).

Table 4. Variation in cost; \(C_1=9500\), \(C_2=2600\), \(C_3=4000\), \(C_4=1600\), \(C_5=3000\)