1 Introduction

Queueing systems are powerful tools for modeling communication networks, transportation networks, production lines and operating systems. In recent years, computer networks and data communication systems have been among the fastest growing technologies, driving significant developments such as the rapid expansion of the internet and of audio and video data traffic.

A retrial queueing system is characterized by the behavior of an arriving job that finds the server busy: it leaves the service area and repeats its demand after some time. Between trials, the blocked job joins a pool of unsatisfied jobs called the orbit. Such systems arise, for example, in web access, telecommunication networks, packet switching networks, collision-avoidance protocols and star local area networks.

The server works continuously as long as there is at least one job in the orbit. When the server finishes serving a job and finds the orbit empty, it leaves the system for a period of time called a vacation. This is seen in maintenance activities, telecommunication networks, customized manufacturing, production systems, etc.

Artalejo (2010) and Templeton (1999) have presented comprehensive surveys of retrial queueing systems. Gomez-Corral (1999) studied a single server retrial queueing system with general retrial times in detail. Krishna Kumar et al. (2002) introduced an \(M/G/1\) retrial queueing system with a two phase service and preemptive resume. Krishna Kumar and Arivudainambi (2002) analyzed a single server retrial queue with Bernoulli vacation schedules and general retrial times.

Gharbi and Ioualalen (2010) presented an approach for modeling and analyzing finite-source multiserver systems with single and multiple vacations of servers for all stations, using the generalized stochastic Petri net model. Arivudainambi and Godhandaraman (2012) considered an \(M^{X}/G/1\) retrial queue with two phases of service, balking, feedback and K optional vacations, and obtained the stationary distributions of the number of jobs in the system and in the orbit.

Katehakis and Smit (2012) discussed a successive lumping procedure for a class of Markov chains; their results for discrete time Markov chains extend to semi-Markov processes and continuous time Markov processes. Arivudainambi et al. (2014) investigated a single server retrial queue with working vacation, in which the server works at a different service rate rather than completely terminating service during its vacation period. Katehakis and Smit (2014) derived explicit solutions and simple truncation bounds for the steady state probabilities of both down entrance state (DES) and restart entrance state (RES) processes.

Although a lot of work has been done on retrial queueing systems, there have been few significant studies of a single server retrial queue with general retrial times, balking, second optional service and a single vacation. In this paper, Sect. 2 gives a practical justification for the model and Sect. 3 gives its mathematical description and ergodicity condition. Section 4 deals with the derivation of the steady state distribution of the server. The mean number of jobs in the system and several performance measures are discussed in Sect. 5. The stochastic decomposition property is demonstrated in Sect. 6. In Sect. 7, some important special cases of this model are briefly discussed. Numerical results on the effect of various parameters on the system performance measures are analyzed in Sect. 8.

2 Practical justification of the suggested model

The suggested model has a potential application in the transfer model of an email system. The simple mail transfer protocol (SMTP) is used to deliver messages between mail servers. On a remote machine, a mail transfer program contacts a server to establish a TCP connection. Once the TCP connection is established, SMTP allows the sender to identify itself, specify a recipient, and then transfer an email message.

When the sender deposits an email in his/her own mail server, the mail server repeatedly attempts (retrial) to deliver the contact message until it succeeds. Contact messages arrive at the mail server according to a Poisson process. At the arrival epoch, an arriving message starts its service immediately if the server is free; otherwise it joins the buffer. In the buffer, each message waits for some amount of time and then retries the service; each time it tries and fails, it waits for another period of time before trying again. To prevent spam mails from clogging the system, the mail server employs a spam filter service, which screens incoming messages before the normal mail-receiving service. To keep the mail server functioning well, some maintenance activities are also needed.

For example, a virus scan is an important maintenance activity in the mail server. When the maintenance activity is finished, the mail server waits for the arrival of messages. The purpose of the proposed model is to design a program to collect information about the contacting messages. In this scenario, the buffer in the sender mail server, the receiver mail server, the spam filter and normal mail-receiving services, the retransmission policy, and the maintenance activities correspond, respectively, to the orbit, the server, the two phases of service, the retrial policy and the vacation policy in queueing terminology.

3 Model description and ergodicity condition

Jobs arrive according to a Poisson process with rate \(\lambda \), and an arriving job starts its service immediately if the server is available. If an arriving job finds the server busy, it either balks (leaves the system) with probability \(1-b\) or joins the orbit with probability \(b\). Access from the orbit to the server is governed by an arbitrary law with distribution function \(R(t)\) and Laplace-Stieltjes transform (LST) \(R^{*}(\theta )\).

The single server provides up to two phases of service to each job: the first phase of service (FPS) may be followed by the second phase of service (SPS). On completion of the first (regular) phase, a job takes the second optional service with probability \(p\) or leaves the system with probability \(q=1-p\). It is assumed that the service time \(S_{i}\) \((i=1,2)\) of the \(i\)th phase is a random variable with distribution function \(S_{i}(t)\) and LST \(S^{*}_{i}(\theta )\). When no jobs are found in the orbit, the server deactivates and takes a single vacation of random length \(V\) with distribution function \(V(t)\) and LST \(V^{*}(\theta )\).

The state of the system at time \(t\) can be described by the Markov process \(\{N(t); t\ge 0\}=\{(C(t), X(t), \xi _0(t), \xi _1(t), \xi _2(t), \xi _3(t)); t\ge 0\}\), where \(C(t)\) denotes the server state (\(0, 1, 2\) or \(3\) according to whether the server is free, busy with FPS, busy with SPS or on vacation, respectively) and \(X(t)\) is the number of jobs in the orbit at time \(t\). If \(C(t)=0\) and \(X(t)>0\), then \(\xi _0(t)\) represents the elapsed retrial time; if \(C(t)=i\) \((i=1,2)\), then \(\xi _i(t)\) represents the elapsed service time of the job in the \(i\)th phase; if \(C(t)=3\) and \(X(t)\ge 0\), then \(\xi _3(t)\) represents the elapsed vacation time at time \(t\). The functions \(\theta (x)\), \(\mu _{i}(x)\) and \(\nu (x)\) are the conditional completion (hazard) rates for repeated attempts, service and vacation, respectively, at elapsed time \(x\), i.e., \(\theta (x)dx=dR(x)/{(1-R(x))},\,\mu _{i}(x)dx= dS_{i}(x)/(1-S_{i}(x)), \, \nu (x)dx={dV(x)}/{(1-V(x))}\).
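For instance, if the retrial, service and vacation times are exponential (an assumption made only for illustration here and in the numerical examples of Sect. 8, not required by the model), the conditional completion rates are constant and the LSTs take a simple form:

$$\begin{aligned} R(t)=1-e^{-\theta t}&\Rightarrow \theta (x)=\theta ,\quad R^{*}(\lambda )=\frac{\theta }{\lambda +\theta },\\ S_i(t)=1-e^{-\mu _i t}&\Rightarrow \mu _i(x)=\mu _i,\quad S^{*}_{i}(\lambda b(1-z))=\frac{\mu _i}{\lambda b(1-z)+\mu _i},\quad i=1,2,\\ V(t)=1-e^{-\nu t}&\Rightarrow \nu (x)=\nu ,\quad V^{*}(\lambda b)=\frac{\nu }{\lambda b+\nu }. \end{aligned}$$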

3.1 Ergodicity condition

Let \(\{t_n;\, n \in N\}\) be the sequence of epochs of either service completion times or vacation termination times. The sequence of random vectors \(Z_n=\{C(t_n{+}),X(t_n{+})\}\) forms a Markov chain, which is the embedded Markov chain for our queueing system. Its state space is \(S=\{0, 1, 2, 3\}\times N\).

Theorem 1

The embedded Markov chain \(\{Z_n;\,n \in N\}\) is ergodic if and only if \(\lambda b[E(S_1)+pE(S_2)]<R^{*}(\lambda ).\)

Proof

It is clear that \(\{Z_n;\, n \in N\}\) is an irreducible and aperiodic Markov chain. To prove ergodicity, we use Foster's criterion, which states that an irreducible and aperiodic Markov chain is ergodic if there exist a non-negative function \(f(j), j\in N\), and \(\epsilon > 0\) such that the mean drift \(\chi _{j}=E[f(Z_{n+1})-f(Z_{n})|Z_{n}=j]\) is finite for all \(j\in N\) and \(\chi _{j}\le -\epsilon \) for all \(j\in N\), except perhaps for a finite number of \(j\). In this case, consider the function \(f(j)= j\); then we have

$$\begin{aligned} \chi _{j} =\left\{ \begin{array}{lr}\lambda b[E(S_1)+pE(S_2)]-R^{*}(\lambda ),~j= 1, 2, \cdots \\ \lambda b[E(S_1)+pE(S_2)]-1,~j=0\end{array}\right. \end{aligned}$$

The inequality \(\lambda b[E(S_1)+pE(S_2)]<R^{*}(\lambda )\) is therefore a sufficient condition for ergodicity. The same inequality is also necessary. The necessity follows from a result of Sennott et al. (1983), which states that the chain is non-ergodic if it satisfies Kaplan's condition, namely \(\chi _{j}<\infty \) for all \(j\ge 0\) and there exists \(j_0\in N\) such that \(\chi _{j}\ge 0\) for \(j\ge j_0\). Here Kaplan's condition is fulfilled because there exists \(h\) such that \(r_{ij}=0\) for \(j<i-h\) and \(i>0\), where \(R=(r_{ij})\) is the one step transition matrix of \(\{Z_n;\, n\ge 1\}\). Hence the inequality \(\lambda b[E(S_1)+pE(S_2)]\ge R^{*}(\lambda )\) implies the non-ergodicity of the Markov chain. \(\square \)
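As a quick numerical illustration of Theorem 1, the stability condition can be checked directly. The following is a minimal sketch assuming exponential retrial and service times with hypothetical parameter values; the theorem itself makes no such assumption.

```python
# Sketch: evaluate the ergodicity condition of Theorem 1.
# Exponential retrial and service times are assumed purely for illustration.
lam, b, p = 10.0, 0.5, 0.5            # arrival rate, joining prob., optional-service prob.
theta, mu1, mu2 = 15.0, 20.0, 25.0    # retrial, FPS and SPS rates (hypothetical values)

R_star = theta / (lam + theta)        # R*(lambda) for exponential retrial times
E_S1, E_S2 = 1.0 / mu1, 1.0 / mu2     # mean service times E(S1), E(S2)

lhs = lam * b * (E_S1 + p * E_S2)     # left-hand side of the condition
print(f"lambda*b*[E(S1)+p*E(S2)] = {lhs:.3f},  R*(lambda) = {R_star:.3f}")
print("ergodic" if lhs < R_star else "not ergodic")
```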

4 Steady state distribution of the server state

For the process \(\{N(t), t\ge 0\}\), the probabilities are defined as

$$\begin{aligned} P_{0}(t)&= P\{C(t)=0, ~X(t)=0\}\\ P_{n}(x,t)dx&= P\{C(t)=0, ~X(t)=n, ~x\le \xi _{0}(t)<x+dx\}, ~t\ge 0, ~x\ge 0,~ n\ge 1\\ Q_{i,n}(x,t)dx&= P\{C(t)=i, ~X(t)=n, ~x\le \xi _{i}(t)<x+dx\}, ~i=1,2, ~t\ge 0, ~x\ge 0,~ n\ge 0 \\ G_{n}(x,t)dx&= P\{C(t)=3, ~X(t)=n, ~x\le \xi _{3}(t)< x+dx\}, ~t\ge 0, ~x\ge 0, ~n\ge 0. \end{aligned}$$

Assume that the steady state condition \(\lambda b[E(S_1)+pE(S_2)]<R^{*}(\lambda )\) is fulfilled, so that the limiting probability \(P_0=\lim _{t\rightarrow \infty } P_0(t)\) and the limiting densities \(P_{n}(x)=\lim _{t\rightarrow \infty } P_n(x, t)\) for \(x\ge 0\), \(n\ge 1\), \(Q_{i,n}(x)=\lim _{t\rightarrow \infty } Q_{i,n}(x, t)\) for \(x\ge 0\), \(n\ge 0\), \(i=1,2\), and \(G_{n}(x)=\lim _{t\rightarrow \infty } G_n(x, t)\) for \(x\ge 0\), \(n\ge 0\) exist. By the method of supplementary variables, the system of equations that governs the dynamics of the system is obtained as

$$\begin{aligned} \lambda P_{0}&= \int ^{\infty }_{0}G_{0}(x)\,\nu (x)dx\end{aligned}$$
(1)
$$\begin{aligned} \frac{d}{dx}\, P_{n}(x)+[\lambda +\theta (x)]\,P_{n}(x)&= 0, ~x>0, ~n\ge 1 \end{aligned}$$
(2)
$$\begin{aligned} \frac{d}{dx}\, Q_{i,\,0}(x)+[\lambda +\mu _{i}(x)]\,Q_{i,\,0}(x)&= \lambda (1-b) Q_{i,\,0}(x), ~x>0, ~i=1,2 \end{aligned}$$
(3)
$$\begin{aligned} \frac{d}{dx}\, Q_{i,\,n}(x)+[\lambda +\mu _{i}(x)]\,Q_{i,\,n}(x)&= \lambda b Q_{i, n-1}(x)+ \lambda (1-b)Q_{i, n}(x),\nonumber \\&\quad n\ge 1, ~i=1,2 \end{aligned}$$
(4)
$$\begin{aligned} \frac{d}{dx}\, G_{0}(x)+[\lambda +\nu (x)]\,G_{0}(x)&= \lambda (1-b) G_{0}(x), ~x>0\end{aligned}$$
(5)
$$\begin{aligned} \frac{d}{dx} G_{n}(x)+[\lambda +\nu (x)]G_{n}(x)&= \lambda bG_{n-1}(x) + \lambda (1-b)G_{n}(x),~ n\ge 1 \end{aligned}$$
(6)

The above set of equations can be solved using the steady state boundary conditions at \(x=0\),

$$\begin{aligned} P_{n}(0)&= \int _{0}^{\infty }\,G_{n}(x)\,\nu (x)dx +q\int ^{\infty }_{0}Q_{1,\,n}(x)\,\mu _{1}(x)dx+\int ^{\infty }_{0}Q_{2,\,n}(x)\,\mu _{2}(x)dx\qquad \end{aligned}$$
(7)
$$\begin{aligned} Q_{1,\,0}(0)&= \lambda P_{0}+\int _{0}^{\infty }\,P_{1}(x)\,\theta (x)dx\end{aligned}$$
(8)
$$\begin{aligned} Q_{1,\,n}(0)&= \int _{0}^{\infty }\,P_{n+1}(x)\,\theta (x)dx+\lambda \int _{0}^{\infty }P_{n}(x)dx\end{aligned}$$
(9)
$$\begin{aligned} Q_{2,\,n}(0)&= p\int _{0}^{\infty }\,Q_{1,\,n}(x)\,\mu _{1}(x)dx,~n\ge 1\end{aligned}$$
(10)
$$\begin{aligned} G_{0}(0)&= q\int ^{\infty }_{0}Q_{1,\,0}(x)\,\mu _{1}(x)dx+\int _{0}^{\infty }\,Q_{2,0}(x)\mu _{2}(x)dx \end{aligned}$$
(11)

The normalization condition is given by

$$\begin{aligned} P_{0}+\sum ^{\infty }_{n=1}\int _{0}^{\infty }P_{n}(x)dx+\sum ^{\infty }_{n=0} \sum ^{2}_{i=1}\int _{0}^{\infty }Q_{i,\,n}(x)dx +\sum ^{\infty }_{n=0}\int _{0}^{\infty }G_{n}(x)dx=1 \end{aligned}$$
(12)

Let us define, for \(|z|\le 1\), the probability generating functions \(P(x,z) = \sum ^{\infty }_{ n=1}z^{n}P_{n}(x)\) and \(P(0,z) =\sum ^{\infty }_{ n=1}z^{n}P_{n}(0)\); \(Q_{i}(x,z) = \sum ^{\infty }_{ n=0}z^{n}Q_{i,n}(x)\) and \(Q_{i}(0,z) = \sum ^{\infty }_{ n=0}z^{n}Q_{i,n}(0)\) for \(i=1,2\); \(G(x,z) = \sum ^{\infty }_{ n=0}z^{n}G_{n}(x)\) and \(G(0,z) = \sum ^{\infty }_{ n=0}z^{n}G_{n}(0)\), where \(x>0\) throughout.

Theorem 2

Under the stability condition \(\lambda b[E(S_1)+pE(S_2)]<R^{*}(\lambda )\), the partial probability generating functions of the number of jobs in the orbit when the server is idle, busy with FPS, busy with SPS and on vacation are given by

$$\begin{aligned} P(z)&= \bigg \{\frac{[1-V^*(\lambda b(1-z))]+V^*(\lambda b)[1-(q+pS_{2}^*(\lambda b(1-z)))S_{1}^*(\lambda b(1-z))]}{V^*(\lambda b)\{[z+(1-z)R^*(\lambda )][q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))-z\}}\bigg \}\nonumber \\&\times \, z[1-R^*(\lambda )]P_0\end{aligned}$$
(13)
$$\begin{aligned} Q_1(z)&= P_0\bigg \{\frac{\{[1-V^*(\lambda b(1-z))][z+(1-z)R^*(\lambda )]+(1-z)R^*(\lambda )V^*(\lambda b)\}}{[z+(1-z)R^*(\lambda )][q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))-z}\bigg \}\nonumber \\&\times \, \frac{[1-S^*_1(\lambda b(1-z))]}{V^*(\lambda b)b(1-z)}\end{aligned}$$
(14)
$$\begin{aligned} Q_2(z)&= P_0\bigg \{\frac{\{[1-V^*(\lambda b(1-z))][z+(1-z)R^*(\lambda )]+(1-z)R^*(\lambda )V^*(\lambda b)\}}{[z+(1-z)R^*(\lambda )][q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))-z}\bigg \}\nonumber \\&\times \, p\bigg \{\frac{S^*_1(\lambda b(1-z))[1-S^*_2(\lambda b(1-z))]}{V^*(\lambda b)b(1-z)}\bigg \}\end{aligned}$$
(15)
$$\begin{aligned} G(z)&= P_0\bigg [\frac{[1-V^*(\lambda b(1-z))]}{V^*(\lambda b)b(1-z)}\bigg ] \end{aligned}$$
(16)
$$\begin{aligned} P_0&= {V^*(\lambda b)\{R^*(\lambda )-\lambda b[E(S_1)+p E(S_2)]\}}\{\lambda b E(V)+R^*(\lambda )V^*(\lambda b)\nonumber \\&+\,(1-b)\{\lambda E(V)R^*(\lambda )+\lambda [E(S_1)+p E(S_2)]R^*(\lambda )V^*(\lambda b)\}\}^{-1} \end{aligned}$$
(17)

Proof

Multiplying equations (2) - (6) by suitable powers of \(z\) and summing over \(n\), we obtain the following partial differential equations

$$\begin{aligned}&\frac{\partial P(x,z)}{\partial x}+[\lambda +\theta (x)]P(x,z)=0,~ x>0 \end{aligned}$$
(18)
$$\begin{aligned}&\frac{\partial Q_i{(x,z)}}{\partial x}+[\lambda b(1-z)+\mu _i(x)]Q_i{(x,z)}=0, ~x>0 ,~ i=1,2\end{aligned}$$
(19)
$$\begin{aligned}&\frac{\partial G{(x,z)}}{\partial x}+[\lambda b(1-z)+\nu (x)]G{(x,z)}=0 \end{aligned}$$
(20)

Solving the above partial differential equations (18) - (20), we get

$$\begin{aligned}&P(x,z)=P(0,z)[1-R(x)]e^{-\lambda x}, ~x>0\end{aligned}$$
(21)
$$\begin{aligned}&Q_i{(x,z)}=Q_i{(0,z)}[1-S_i{(x)}]e^{-\lambda b(1-z) x}, ~x>0 ,~ i=1,2\end{aligned}$$
(22)
$$\begin{aligned}&G{(x,z)}=G{(0,z)}[1-V{(x)}]e^{-\lambda b(1-z) x}, ~x>0 \end{aligned}$$
(23)
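For completeness, (21) follows from (18) by an integrating-factor argument: since \(\theta (x)dx=dR(x)/(1-R(x))\), we have \(\int _0^x\theta (u)du=-\ln [1-R(x)]\), and hence

$$\begin{aligned} P(x,z)=P(0,z)\exp \Big \{-\lambda x-\int _0^x\theta (u)du\Big \}=P(0,z)[1-R(x)]e^{-\lambda x}. \end{aligned}$$

Equations (22) and (23) follow in the same way, with \(\lambda \) replaced by \(\lambda b(1-z)\) and \(\theta (x)\) replaced by \(\mu _i(x)\) and \(\nu (x)\) respectively.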

From equation (5), we obtain

$$\begin{aligned} G_{0}(x)=G_{0}(0)[1-V{(x)}]e^{-\lambda b x},~x>0 \end{aligned}$$
(24)

Multiplying equation (24) by \(\nu (x)\) on both sides, integrating with respect to \(x\) from \(0\) to \({\infty }\) and using equation (1), we have

$$\begin{aligned} G_{0}(0)=\frac{\lambda P_0}{V^{*}(\lambda b)} \end{aligned}$$
(25)
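Indeed, since \(\nu (x)[1-V(x)]dx=dV(x)\),

$$\begin{aligned} \int ^{\infty }_{0}G_{0}(x)\,\nu (x)dx=G_{0}(0)\int ^{\infty }_{0}e^{-\lambda b x}\,dV(x)=G_{0}(0)V^{*}(\lambda b), \end{aligned}$$

and equating this with \(\lambda P_0\) in (1) yields (25).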

Multiplying equation (7) by suitable powers of \(z\), summing over \(n\) from \(1\) to \({\infty }\) and carrying out some algebraic simplification, we arrive at

$$\begin{aligned} P(0,z)&= \int _{0}^{\infty }\,G{(x,z)}\nu (x)dx+q\int _{0}^{\infty }Q_1{(x,z)}\mu _1(x)dx+ \int _{0}^{\infty }\,Q_2{(x,z)}\mu _2(x)dx\nonumber \\&-\,G_{0}(0)-\lambda P_0 \end{aligned}$$
(26)

Multiplying equations (8) – (11) by appropriate powers of \(z\), summing over \(n\) from \(0\) to \({\infty }\) and after some algebraic manipulation, we get

$$\begin{aligned} Q_1(0,z)&= \frac{1}{z}\int _{0}^{\infty }\,P(x,z)\,\theta (x)dx+\lambda \int _{0}^{\infty }P(x,z)dx+\lambda P_0\end{aligned}$$
(27)
$$\begin{aligned} Q_2(0,z)&= pQ_1(0,z)S^*_1(\lambda b (1-z))\end{aligned}$$
(28)
$$\begin{aligned} G(0,z)&= \frac{\lambda P_0}{V^{*}(\lambda b)} \end{aligned}$$
(29)

Further using equations (22) - (23), (25) and (29) in equation (26), we get

$$\begin{aligned} P(0,z)&= \frac{\lambda P_0}{V^{*}(\lambda b)}[V^{*}(\lambda b (1-z))-1]+Q_1{(0,z)}[q+pS^*_2(\lambda b (1-z))]S^*_1(\lambda b (1-z))\nonumber \\&-\,\lambda P_0 \end{aligned}$$
(30)

Substituting equation (21) in (27), we obtain

$$\begin{aligned} Q_1(0,z)=P(0,z)\bigg [\frac{z+(1-z)R^*(\lambda )}{z}\bigg ]+\lambda P_0 \end{aligned}$$
(31)

Using equation (31) in equation (28), we get

$$\begin{aligned} Q_2(0,z)=p\Bigg [P(0,z)\bigg (\frac{z+(1-z)R^*(\lambda )}{z}\bigg )+ \lambda P_0 \Bigg ] S^*_1(\lambda b(1-z)) \end{aligned}$$
(32)

Substituting equation (31) in (30), we obtain

$$\begin{aligned} P(0,z)&= \bigg \{\frac{[1-V^*(\lambda b (1-z))]+V^*(\lambda b)[1-(q+pS^*_2(\lambda b (1-z)))S^*_1(\lambda b (1-z))]}{V^*(\lambda b)\{[z+(1-z)R^*(\lambda )][q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))-z\}}\bigg \}\nonumber \\&\times \,\lambda zP_0 \end{aligned}$$
(33)

Substituting equation (33) in (31), we get

$$\begin{aligned} Q_1(0,z)&= \bigg [\frac{[1-V^*(\lambda b (1-z))]+V^*(\lambda b)[1-(q+pS^*_2(\lambda b (1-z)))S^*_1(\lambda b (1-z))]}{V^*(\lambda b)\{[z+(1-z)R^*(\lambda )][q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))-z\}}\bigg ]\nonumber \\&\times \,\lambda P_0[z+(1-z)R^*(\lambda )]+\lambda P_0 \end{aligned}$$
(34)

Utilizing equation (33) in (32) and simplifying, we get

$$\begin{aligned} Q_2(0,z)&= \bigg [\frac{\{[1-V^*(\lambda b (1-z))]+V^*(\lambda b)[1-(q+pS^*_2(\lambda b (1-z)))S^*_1(\lambda b (1-z))]\}}{V^*(\lambda b)\{[z+(1-z)R^*(\lambda )][q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))-z\}}\nonumber \\&\times \,\lambda pP_0[z+(1-z)R^*(\lambda )]+\lambda pP_0 \bigg ] S^*_1(\lambda b(1-z)) \end{aligned}$$
(35)

Substituting equations (33)-(35) in equations (21)-(23) and after some algebraic manipulation, we obtain,

$$\begin{aligned} P(x,z)&= \bigg [\frac{[1-V^*(\lambda b (1-z))]+V^*(\lambda b)[1-(q+pS^*_2(\lambda b (1-z)))S^*_1(\lambda b (1-z))]}{V^*(\lambda b)\{[z+(1-z)R^*(\lambda )][q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))-z\}}\bigg ]\\&\times \,\lambda zP_0[1-R(x)]\,e^{-\lambda x}\\ Q_1(x,z)&= \bigg [\frac{\lambda P_0\{[1-V^*(\lambda b (1-z))][z+(1-z)R^*(\lambda )]+(1-z)R^*(\lambda )V^*(\lambda b)\}}{V^*(\lambda b)\{[z+(1-z)R^*(\lambda )][q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))-z\}}\bigg ]\\&\times \,[1-S_1{(x)}]e^{-\lambda b(1-z) x}\\ Q_2(x,z)&= \bigg [\frac{\{[1-V^*(\lambda b (1-z))][z+(1-z)R^*(\lambda )]+(1-z)R^*(\lambda )V^*(\lambda b)\}}{V^*(\lambda b)\{[z+(1-z)R^*(\lambda )][q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))-z\}}\bigg ]\\&\times \,\lambda pP_0S^*_1(\lambda b (1-z))[1-S_2{(x)}]e^{-\lambda b(1-z) x}\\ G{(x,z)}&= \frac{\lambda P_0}{V^{*}(\lambda b)}[1-V{(x)}]e^{-\lambda b(1-z) x} \end{aligned}$$

Finally, integrating the above equations with respect to \(x\) from 0 to \({\infty }\), the required results (13)–(16) are obtained. At this point, the only unknown is \(P_0\), which is determined from the normalization condition \(P_0+P(1)+Q_1(1)+Q_2(1)+G(1)=1\), yielding (17). \(\square \)

Let \(K(z)=P_0+P(z)+z[Q_1(z)+Q_2(z)]+G(z)\) be the probability generating function of the number of jobs in the system and \(H(z)=P_0+P(z)+Q_1(z)+Q_2(z)+G(z)\) be the probability generating function of the number of jobs in the orbit at a stationary point of time.

Theorem 3

Under the stability condition \(\lambda b[E(S_1)+pE(S_2)]<R^{*}(\lambda )\), the probability generating functions of the system size and the orbit size distributions at a stationary point of time are given by

$$\begin{aligned} K(z)&= P_0\bigg \{\frac{\{[z+(1-z)R^*(\lambda )][1-V^*(\lambda b (1-z))]+(b-z)R^*(\lambda )V^*(\lambda b)\}}{bV^*(\lambda b)\{[z+(1-z)R^*(\lambda )][q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))-z\}}\nonumber \\&\times \,[q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))\nonumber \\&+\,\frac{(1-b)\{zR^*(\lambda )V^*(\lambda b)-z[1-R^*(\lambda )][1-V^*(\lambda b (1-z))]\}}{bV^*(\lambda b)\{[z+(1-z)R^*(\lambda )][q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))-z\}}\bigg \}\end{aligned}$$
(36)
$$\begin{aligned} H(z)&= P_0\bigg \{\frac{[bz+(1-bz)R^*(\lambda )][1-V^*(\lambda b (1-z))]+(1-bz)R^*(\lambda )V^*(\lambda b)}{bV^*(\lambda b)\{[z+(1-z)R^*(\lambda )][q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))-z\}}\nonumber \\&-\,\frac{(1-b)[q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))R^*(\lambda )V^*(\lambda b)}{bV^*(\lambda b)\{[z+(1-z)R^*(\lambda )][q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))-z\}}\bigg \}\nonumber \\ \end{aligned}$$
(37)

where \(P_0\) is given in equation (17).

5 Performance measures

We analyze some system performance measures of the retrial queueing system under study. Differentiating equation (36) with respect to \(z\) and evaluating at \(z=1\), the mean number of jobs in the system \(L_s\) is obtained as

$$\begin{aligned} L_s&= {\frac{Nr1}{Dr1}}+{\frac{Nr2}{Dr2}}\\ \text{ where }~Nr1&= \lambda ^2 b^2[E(V^2)+2E(V)(E(S_1)+p E(S_2))]+2\lambda b [E(V)(1-R^*(\lambda ))\\&\quad +\,(E(S_1)+p E(S_2))R^*(\lambda )V^*(\lambda b)]+(1-b)\{\lambda ^2 b^2[E(S_1^2)+p E(S_2^2)\\&\quad +\,2pE(S_1)E(S_2)]R^*(\lambda )V^*(\lambda b)+2\lambda b E(V)[R^*(\lambda )-1]+\lambda ^2 b^2E(V^2)\\&\times \,[R^*(\lambda )-1]\}\\ Nr2&= \{\lambda E(V)+R^*(\lambda )V^*(\lambda b)+(1-b)\{\lambda [E(S_1)+p E(S_2)]R^*(\lambda )V^*(\lambda b)\\&\quad +\,\lambda E(V)[1-R^*(\lambda )]\}\}\{\lambda ^2 b^2[E(S_1^2)+2pE(S_1)E(S_2)+p E(S_2^2)]\\&\quad +\,2\lambda b[E(S_1)+p E(S_2)][1-R^*(\lambda )]\}\\ Dr1&= 2b\{\lambda b E(V)+R^*(\lambda )V^*(\lambda b)+(1-b)\{\lambda E(V)R^*(\lambda )\\&\quad +\,\lambda [E(S_1)+p E(S_2)]R^*(\lambda )V^*(\lambda b)\}\}\\ Dr2&= 2\{R^*(\lambda )-\lambda b[E(S_1)+p E(S_2)]\}\{\lambda b E(V)+R^*(\lambda )V^*(\lambda b)\\&\quad +\,(1-b)\{\lambda E(V)R^*(\lambda )+\lambda [E(S_1)+p E(S_2)]R^*(\lambda )V^*(\lambda b)\}\} \end{aligned}$$
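The expression for \(L_s\) is lengthy but mechanical to evaluate. The following is a minimal sketch that transcribes \(Nr1\), \(Nr2\), \(Dr1\) and \(Dr2\) literally, assuming exponential retrial, service and vacation times (so that, e.g., \(E(S_i^2)=2/\mu _i^2\) and \(E(V^2)=2/\nu ^2\)); the parameter values are hypothetical.

```python
# Sketch: mean number of jobs in the system, L_s = Nr1/Dr1 + Nr2/Dr2,
# transcribed from the expressions above. Exponential distributions are
# assumed only to supply moments and LST values; parameters are hypothetical.
lam, b, p = 10.0, 0.5, 0.5
theta, mu1, mu2, nu = 15.0, 20.0, 25.0, 30.0

R = theta / (lam + theta)                          # R*(lambda)
V = nu / (lam * b + nu)                            # V*(lambda*b)
ES1, ES2, EV = 1/mu1, 1/mu2, 1/nu                  # first moments
ES1_2, ES2_2, EV2 = 2/mu1**2, 2/mu2**2, 2/nu**2    # second moments
ES = ES1 + p * ES2                                 # E(S1) + p*E(S2)

Nr1 = (lam**2 * b**2 * (EV2 + 2*EV*ES)
       + 2*lam*b * (EV*(1 - R) + ES*R*V)
       + (1 - b) * (lam**2 * b**2 * (ES1_2 + p*ES2_2 + 2*p*ES1*ES2) * R*V
                    + 2*lam*b*EV*(R - 1) + lam**2 * b**2 * EV2 * (R - 1)))
Nr2 = ((lam*EV + R*V + (1 - b)*(lam*ES*R*V + lam*EV*(1 - R)))
       * (lam**2 * b**2 * (ES1_2 + 2*p*ES1*ES2 + p*ES2_2)
          + 2*lam*b*ES*(1 - R)))
Dr1 = 2*b * (lam*b*EV + R*V + (1 - b)*(lam*EV*R + lam*ES*R*V))
Dr2 = 2*(R - lam*b*ES) * (lam*b*EV + R*V + (1 - b)*(lam*EV*R + lam*ES*R*V))

L_s = Nr1/Dr1 + Nr2/Dr2
print(f"L_s = {L_s:.4f}")
```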

Differentiating equation (37) with respect to \(z\) and evaluating at \(z=1\), the mean number of jobs in the orbit \(L_q\) is given by

$$\begin{aligned} L_q&= {\frac{Nr3}{Dr3}}+{\frac{Nr4}{Dr4}}\\ \text{ where }~Nr3&= \lambda ^2 b^2E(V^2)+2\lambda b [1-R^*(\lambda )]+(1-b)\{\lambda ^2 b E(V^2)R^*(\lambda )\\&\quad +\,\lambda ^2 b[E(S_1^2)+2pE(S_1)E(S_2)+p E(S_2^2)]R^*(\lambda )V^*(\lambda b)\}\\ Nr4&= \{\lambda ^2 b[E(S_1^2)+2pE(S_1)E(S_2)+p E(S_2^2)]\\&\quad +\,2\lambda [E(S_1)+p E(S_2)][1-R^*(\lambda )]\}\\ Dr3&= 2\{\lambda b E(V)+R^*(\lambda )V^*(\lambda b)+(1-b)\{\lambda E(V)R^*(\lambda )\\&\quad +\,\lambda [E(S_1)+p E(S_2)]R^*(\lambda )V^*(\lambda b)\}\}\\ Dr4&= 2\{R^*(\lambda )-\lambda b[E(S_1)+p E(S_2)]\} \end{aligned}$$
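Similarly, \(L_q\) can be evaluated by transcribing \(Nr3\), \(Nr4\), \(Dr3\) and \(Dr4\). The sketch below uses the same illustrative exponential assumptions and hypothetical parameter values as the \(L_s\) sketch above.

```python
# Sketch: mean number of jobs in the orbit, L_q = Nr3/Dr3 + Nr4/Dr4,
# under the same illustrative exponential assumptions as the L_s sketch.
lam, b, p = 10.0, 0.5, 0.5
theta, mu1, mu2, nu = 15.0, 20.0, 25.0, 30.0

R = theta / (lam + theta)
V = nu / (lam * b + nu)
ES1, ES2, EV = 1/mu1, 1/mu2, 1/nu
ES1_2, ES2_2, EV2 = 2/mu1**2, 2/mu2**2, 2/nu**2
ES = ES1 + p * ES2

Nr3 = (lam**2 * b**2 * EV2 + 2*lam*b*(1 - R)
       + (1 - b) * (lam**2 * b * EV2 * R
                    + lam**2 * b * (ES1_2 + 2*p*ES1*ES2 + p*ES2_2) * R * V))
Nr4 = (lam**2 * b * (ES1_2 + 2*p*ES1*ES2 + p*ES2_2)
       + 2*lam*ES*(1 - R))
Dr3 = 2 * (lam*b*EV + R*V + (1 - b)*(lam*EV*R + lam*ES*R*V))
Dr4 = 2 * (R - lam*b*ES)

L_q = Nr3/Dr3 + Nr4/Dr4
print(f"L_q = {L_q:.4f}")
```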

6 Stochastic decomposition

Stochastic decomposition has been widely observed among \(M/G/1\)-type queues with generalized vacations (Fuhrmann and Cooper 1985), in which vacations begin at the end of service periods. Let \(\varPi (z)\) be the probability generating function of the number of jobs in the \(M/G/1\) queueing system (see Gross and Harris 2011) in steady state at a random point in time, \(\chi (z)\) be the probability generating function of the number of jobs during the generalized vacation period at a random point in time, given that the server is on vacation or idle, and \(K(z)\) be the probability generating function of the random variable being decomposed. Then the mathematical version of the stochastic decomposition law is

$$\begin{aligned} K(z)=\varPi (z)\chi (z) \end{aligned}$$

For the \(M/G/1\) queueing system (see Gross and Harris 2011), we have

$$\begin{aligned} \varPi (z)=\frac{[1-\lambda E(S)](1-z)S^*(\lambda (1-z))}{S^*(\lambda (1-z))-z} \end{aligned}$$

To obtain an expression for \(\chi (z)\), the generalized vacation period is taken to be the time during which the server is idle or on vacation, so that

$$\begin{aligned} \chi (z)={\frac{P_0+P(z)+G(z)}{P_0+P(1)+G(1)}} \end{aligned}$$

Using the equations (13), (16) and (17), we obtain

$$\begin{aligned} \chi (z)&= {\frac{Nr}{Dr}P_0}\\ \text{ where }~Nr&= \{\{[z+(1-z)R^*(\lambda )][1-V^*(\lambda b (1-z))]+b(1-z)R^*(\lambda )V^*(\lambda b)\}\\&\times \,\{[q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))-bz\}\\&\quad -\,(1-b)z[1-V^*(\lambda b (1-z))]\}\{\lambda b E(V)+R^*(\lambda )V^*(\lambda b)\\&\quad +\,(1-b)\{\lambda E(V)R^*(\lambda )+\lambda [E(S_1)+p E(S_2)]R^*(\lambda )V^*(\lambda b)\}\}\\ Dr&= {b(1-z)\{[z+(1-z)R^*(\lambda )][q+pS_{2}^*(\lambda b(1-z))]S_{1}^*(\lambda b(1-z))-z\}}\\&\times \, V^*(\lambda b)\{\lambda b E(V)+R^*(\lambda )V^*(\lambda b)+(1-b)\lambda E(V)R^*(\lambda )\\&\quad -\,\lambda b [E(S_1)+p E(S_2)][\lambda E(V)+R^*(\lambda )V^*(\lambda b)]\} \end{aligned}$$

where \(P_0\) is given in (17).

7 Special cases

In this section, we briefly analyze some special cases of our model, which are consistent with the existing literature.

Case 1: If \(b=1\) and \(p=0\), the model reduces to the \(M/G/1\) retrial queue with general retrial times and a single vacation. The probability generating function of the number of jobs in the system \(K(z)\), the idle probability \(P_{0}\) and the mean system size \(L_{s}\) take the following form, which is in accordance with Krishna Kumar and Arivudainambi (2002).

$$\begin{aligned} P_0&= \frac{[R^*(\lambda )-\lambda E(S)]V^*(\lambda )}{\lambda E(V)+V^*(\lambda )R^{*}(\lambda )}\\ K(z)&= \frac{P_0\{[1-V^*(\lambda -\lambda z)][z+(1-z)R^*(\lambda )]+(1-z)R^*(\lambda )V^*(\lambda )\}S^*(\lambda -\lambda z)}{V^*(\lambda )\{[z+(1-z)R^*(\lambda )]S^*(\lambda -\lambda z)-z\}} \\ L_s&= \lambda E(S)+\{\lambda ^2E(V^2)+2\lambda E(V)[1-R^{*}(\lambda )]\}\{2[\lambda E(V) +R^{*}(\lambda )V^{*}(\lambda )]\}^{-1}\\&\quad +\,\{\lambda ^2E(S^2)+2\lambda [1-R^{*}(\lambda )]E(S)\}\{2[R^{*}(\lambda )-\lambda E(S)]\}^{-1} \end{aligned}$$

Case 2: If \(V^*(\lambda )=1\) and \(b=1\), our model reduces to an \(M/G/1\) retrial queue with general retrial times and two phases of service, and the result is equivalent to that obtained by Choudhury (2009).

$$\begin{aligned} P_0&= \frac{R^*(\lambda )-\lambda [E(S_1)+p E(S_2)]}{R^{*}(\lambda )}\\ K(z)&= \frac{P_0\{(1-z)[q+pS^*_2(\lambda -\lambda z)]S^*_1(\lambda -\lambda z)R^*(\lambda )\}}{[z+(1-z)R^*(\lambda )][q+pS^*_2(\lambda -\lambda z)]S^*_1(\lambda -\lambda z)-z}\\ L_s&= \lambda [E(S_1)+p E(S_2)]+\frac{\lambda ^2[E(S_1^2)+2pE(S_1)E(S_2)+p E(S_2^2)]}{2[R^{*}(\lambda )-\lambda (E(S_1)+p E(S_2))]}\\&+\,\frac{\lambda [1-R^{*}(\lambda )][E(S_1)+p E(S_2)]}{R^{*}(\lambda )-\lambda (E(S_1)+p E(S_2))} \end{aligned}$$

Case 3: If \(V^*(\lambda )=1\), \(b=1\) and \(p=0\), we get an \(M/G/1\) retrial queue with general retrial times. In this case, the probability generating function of the number of jobs in the system \(K(z)\), the probability of no job in the system \(P_{0}\) and the mean system size \(L_{s}\) can be rewritten in the following form, and the results agree with Gomez-Corral (1999).

$$\begin{aligned} P_0&= \frac{R^*(\lambda )-\lambda E(S)}{R^*(\lambda )}\\ K(z)&= \frac{P_0(1-z)R^*(\lambda )S^*(\lambda -\lambda z)}{[z+(1-z)R^*(\lambda )]S^*(\lambda -\lambda z)-z}\\ L_s&= \lambda E(S)+\frac{\lambda ^2E(S^2)+2\lambda E(S)[1-R^{*}(\lambda )]}{2[R^{*}(\lambda )-\lambda E(S)]} \end{aligned}$$

Case 4: If \(R^*(\lambda )\rightarrow 1\), \(V^*(\lambda )=1\), \(b=1\) and \(p=0\), our model reduces to the \(M/G/1\) queueing system. In this case, the probability generating function of the number of jobs in the system \(K(z)\), the idle probability \(P_{0}\) and the mean system size \(L_{s}\) simplify to the following expressions, which are consistent with the well-known Pollaczek-Khinchine (P-K) formula (Gross and Harris 2011).

$$\begin{aligned} P_0&= 1-\lambda E(S) \\ K(z)&= \frac{[1-\lambda E(S)](1-z)S^*(\lambda -\lambda z)}{S^*(\lambda -\lambda z)-z}\\ L_s&= \lambda E(S)+\frac{\lambda ^2E(S^2)}{2[1-\lambda E(S)]} \end{aligned}$$
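As a quick sanity check of Case 4, take exponential service with rate \(\mu \) and \(\rho =\lambda /\mu <1\); then \(E(S)=1/\mu \), \(E(S^2)=2/\mu ^2\) and the above expressions reduce to the classical \(M/M/1\) results:

$$\begin{aligned} P_0=1-\rho ,\qquad L_s=\rho +\frac{\lambda ^2(2/\mu ^2)}{2(1-\rho )}=\rho +\frac{\rho ^2}{1-\rho }=\frac{\rho }{1-\rho }. \end{aligned}$$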

8 Numerical illustrations

In this section, some numerical results obtained using Matlab are presented in order to illustrate the effect of various parameters on the main performance measures of the system. We choose arbitrary base values for the parameters \(\lambda =10\), \(\mu _1=20\) and \(\mu _2=25\), and vary the parameters \(b\), \(p\), \(\theta \) and \(\lambda \) such that the stability condition is satisfied.

Two dimensional graphs are drawn in Figs. 1–10. Figure 1 shows that the idle probability \(P_0\) decreases with increasing optional service probability \(p\) for varying joining probability \(b\). The idle probability \(P_0\) decreases with increasing arrival rate \(\lambda \) for varying \(p\), as shown in Fig. 2. The value of \(P_0\) decreases with increasing retrial rate \(\theta \) for varying \(p\) and for varying \(\lambda \), as shown in Figs. 3 and 4 respectively. Figure 5 shows that the mean system size \(L_s\) increases with increasing \(p\) for varying \(b\). A sketch of the kind of computation behind Fig. 1 is given after the figure captions below.
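For reference, the following sketch (written in Python rather than the Matlab used for the reported figures, and with hypothetical values for the retrial rate \(\theta \) and vacation rate \(\nu \), which are not stated explicitly above) illustrates how a curve such as Fig. 1 can be generated from equation (17) under exponential assumptions.

```python
# Sketch: P0 versus p for several values of b, in the spirit of Fig. 1.
# Equation (17) is evaluated with exponential retrial/service/vacation times;
# theta and nu are hypothetical, chosen so the stability condition holds.
import numpy as np
import matplotlib.pyplot as plt

lam, mu1, mu2 = 10.0, 20.0, 25.0
theta, nu = 50.0, 30.0

def p0(lam, b, p, theta, mu1, mu2, nu):
    R = theta / (lam + theta)                      # R*(lambda)
    V = nu / (lam * b + nu)                        # V*(lambda*b)
    ES, EV = 1/mu1 + p/mu2, 1/nu                   # E(S1)+p*E(S2), E(V)
    numer = V * (R - lam * b * ES)
    denom = lam*b*EV + R*V + (1 - b)*(lam*EV*R + lam*ES*R*V)
    return numer / denom

ps = np.linspace(0.0, 1.0, 51)
for b in (0.1, 0.5, 0.9):
    plt.plot(ps, [p0(lam, b, p, theta, mu1, mu2, nu) for p in ps], label=f"b={b}")
plt.xlabel("p"); plt.ylabel("P0"); plt.legend(); plt.show()
```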

Fig. 1 \(P_0\) versus \(p\) for \(b=0.1,0.5,0.9\)

Fig. 2 \(P_0\) versus \(\lambda \) for \(p=0.1,0.5,1\)

Fig. 3 \(P_0\) versus \(\theta \) for \(p=0.1,0.5,1\)

Fig. 4 \(P_0\) versus \(\theta \) for \(\lambda =1,5,10\)

Fig. 5 \(L_s\) versus \(p\) for \(b=0.1,0.5,0.9\)

Figure 6 shows that the mean system size \(L_s\) increases with increasing arrival rate \(\lambda \) for varying optional service probability \(p\). Figure 7 shows that \(L_s\) increases with increasing retrial rate \(\theta \) for varying \(p\). Figure 8 shows that \(L_s\) increases with increasing \(\theta \) for varying arrival rate \(\lambda \). Figure 9 shows that \(L_s\) increases with increasing \(\lambda \) for varying \(\theta \). In Fig. 10, \(L_s\) increases with increasing joining probability \(b\) for varying \(\theta \).

Fig. 6 \(L_s\) versus \(\lambda \) for \(p=0.1,0.5,1\)

Fig. 7 \(L_s\) versus \(\theta \) for \(p=0.1,0.5,1\)

Fig. 8 \(L_s\) versus \(\theta \) for \(\lambda =1,5,10\)

Fig. 9 \(L_s\) versus \(\lambda \) for \(\theta =1,5,10\)

Fig. 10 \(L_s\) versus \(b\) for \(\theta =1,5,10\)

Three dimensional graphs are drawn in Figs. 11–18. The surface of \(P_0\) displays a downward trend with increasing retrial rate \(\theta \) and arrival rate \(\lambda \), as shown in Fig. 11. In Fig. 12, the surface of \(P_0\) displays a downward trend with increasing \(p\) and \(\theta \). The surface of \(P_0\) displays a downward trend with increasing \(p\) and \(b\), as shown in Fig. 13. For increasing optional service probability \(p\) and arrival rate \(\lambda \), the surface of \(P_0\) displays a downward trend, as expected, in Fig. 14. The surface of \(L_s\) displays an upward trend with increasing retrial rate and arrival rate, as expected, in Fig. 15. In Fig. 16, the surface of \(L_s\) displays an upward trend with increasing \(p\) and \(\theta \). For increasing \(p\) and \(b\), the surface of \(L_s\) displays an upward trend, as expected, in Fig. 17. In Fig. 18, the surface of \(L_s\) displays an upward trend with increasing \(p\) and \(\lambda \).

Fig. 11 \(P_0\) versus \(\theta \) and \(\lambda \)

Fig. 12 \(P_0\) versus \(p\) and \(\theta \)

Fig. 13 \(P_0\) versus \(p\) and \(b\)

Fig. 14 \(P_0\) versus \(p\) and \(\lambda \)

Fig. 15 \(L_s\) versus \(\theta \) and \(\lambda \)

Fig. 16 \(L_s\) versus \(p\) and \(\theta \)

Fig. 17 \(L_s\) versus \(p\) and \(b\)

Fig. 18 \(L_s\) versus \(p\) and \(\lambda \)

9 Conclusion

In this paper, a single server retrial queueing system with general repeated attempts, balking, second optional service and a single vacation has been considered. For this model, explicit expressions for the probability generating functions of the server state and of the number of jobs in the system and in the orbit have been obtained using the supplementary variable technique. Various performance measures and special cases have been analyzed, and the general stochastic decomposition law has been shown to hold for this model as well. The effect of various parameters on the performance measures has been illustrated numerically and graphically.