
1 Introduction

Cooperative communication is an effective way to improve the quality of wireless links, since it has been shown to allow a flexible and robust exchange of data. In a wireless network, each source user increases its quality of service via cooperation with other users that “share” the antennas of their devices and assist the source users in transmitting their data to a destination node; e.g., [14, 19, 23–25]. This is the so-called cooperation with relaying, and the assisting users are called relay nodes.

Such a system operates as follows: Consider a network of a finite number of source users, a finite number of relay nodes, and a destination node. Source users transmit packets to the destination node with the cooperation of the relay(s). If a transmission of a user’s packet to the destination fails, the relays store the blocked packet in their buffers and try to re-transmit it to the destination later. A mechanism must be employed to decide which of the relays will cooperate with the sources (i.e., a cooperation strategy). This problem gives rise to the usage of a cooperative space diversity protocol [26], under which each user has a number of “partners” (i.e., relays) that are responsible for transmitting its blocked packets.

The core ideas behind cooperative communications were introduced in [9]. An overview of the rapidly expanding literature can be found in [22], where it was proved that relaying leads to a substantial reduction both in packet delay and in the energy consumption of the sources and relay nodes. Thus, further investigation of such systems is of great interest to the research community.

In this work we analyze a novel queueing system that models the impact of using two relay nodes in a network to assist with relaying packets from a number of users to a destination node, under a cooperative space diversity protocol. We consider three saturated source users, say a central and two background source users, that transmit packets to a destination node with the cooperation of two relays (i.e., network-level cooperation). Relay nodes have infinite capacity buffers and re-transmit blocked packets of the source users. The cooperation strategy is as follows: when the central source user fails to transmit a packet to the destination node, it forwards the blocked packet to both relays (i.e., both relays overhear the transmission due to the wireless multicast advantage of the medium; two “partners”). On the contrary, a background source user cooperates only with a single relay node, and forwards its blocked packet only to that relay node.

Note that the notion of “re-transmission” gives rise to the so-called retrial queues [1–4, 10] (not an exhaustive list), which have proved very useful for modeling communication networks, where a “customer” meeting a busy server tries its luck again after a random time. Our system is modeled as a three-dimensional Markov process, and we prove that its steady-state performance can be expressed in terms of the solution of a Riemann-Hilbert boundary value problem. The study of queueing systems using boundary value theory was initiated in [12, 13], and a concrete methodological approach was presented in [5]. Important generalizations were given in [2, 6–8, 17, 28] (not an exhaustive list).

Contribution of the paper. Besides its practical applicability, our work is also theoretically oriented. We provide, for the first time in the related literature, an exact analysis of a model that unifies two fundamental queueing systems: the retrial queue with two orbits and constant retrial policy, and the generalized two-demand model (i.e., the fork-join queue). The exact analysis of a typical fork-join queue with c parallel servers is possible only when \(c=2\) (see [15, 29]). Moreover, there are only limited results on retrial queues with more than one orbit. In the vast majority of these works, a mean value approach was applied due to the complexity of the model [11, 20, 21]. In this work we present an exact analysis of a very intricate queueing model, and show that the powerful, quite technical boundary value theory is an adequate tool to handle it.

The paper is organized as follows. In Sect. 2 we present the model in detail, and we derive the balance equations that are used to form the fundamental functional equation. In Sect. 3 we obtain some important results for the following analysis, and investigate stability conditions by considering an associated random walk in the first quadrant. Section 4 is devoted to the formulation and solution of a Riemann-Hilbert boundary value problem, which provides the generating function of the joint queue length distribution of the relays and destination node. Performance metrics are obtained in Sect. 5, and a simple numerical example along with some computational issues are discussed in Sect. 6.

2 The Model

We consider a network with three saturated source users, say \(S_{i}\), \(i=0,1,2\), two relay nodes with infinite capacity queues, say \(R_{1}\), \(R_{2}\), and a common destination node D (see Fig. 1). The source users transmit packets towards the destination node with the cooperation of the relay nodes.

Fig. 1.
figure 1

The model.

User \(S_{i}\) generates packets towards the node D according to a Poisson process with rate \(\lambda _{i}\), \(i=0,1,2\). Node D can handle at most one packet at a time, which it forwards outside the network. The service time of a packet at the node D (i.e., the required time to forward the packet outside the network) is exponentially distributed with rate \(\mu \) (we assume that the acknowledgments of successful or unsuccessful transmissions are instantaneous and error free).

The relays do not generate packets of their own but only re-transmit the packets they have received from the users. A relay node stores a packet in its queue when the direct transmission from a source user to the node D has failed. Specifically, the cooperation strategy applied between source users and relay nodes is as follows: If a direct transmission of a user’s \(S_{0}\) packet to the node D fails (i.e., node D is busy (transmitting)), both relay nodes store the blocked packet in their queues and try independently to forward it to the node D later. Moreover, if a direct transmission of a user’s \(S_{i}\), \(i=1,2\), packet to the node D fails, only node \(R_{i}\) stores it in its queue and is responsible for re-transmitting it to the node D later (i.e., user \(S_{i}\) cooperates only with node \(R_{i}\), \(i=1,2\)). The node \(R_i\) tries to re-dispatch a blocked packet to the node D after an exponentially distributed time period with rate \(\mu _{i}\), \(i=1,2\).

Under such a scheme, the user \(S_{0}\) exploits both the spatial diversity provided by the relays and the broadcast nature of wireless communication, where, with a single transmission, a number of cooperating relay nodes (i.e., “partners”) receive and relay its data [18, 23, 26]. In an alternative scenario, the user \(S_{0}\) splits its blocked packet (or job) into two sub-packets (or sub-jobs) and stores one sub-packet in each relay node. Moreover, it can be assumed that the user \(S_{0}\) transmits within the overlapping area created by the intersecting coverage regions of both relay nodes, and thus its blocked packet is forwarded to both relays. On the contrary, user \(S_{i}\) transmits only within the coverage region of the node \(R_{i}\), \(i=1,2\).
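To make these dynamics concrete, the following is a minimal Gillespie-style simulation sketch of the resulting process (relay queue lengths and the state of node D). The transition rules encode the cooperation strategy described above; the parameter values are illustrative assumptions, not values from the paper. The long-run busy fraction of node D that it estimates can be compared with the exact value \(\rho /(1-\rho _{0})\) obtained in Lemma 1 below.

```python
import random

def simulate(lam0, lam1, lam2, mu, mu1, mu2, n_events=200_000, seed=1):
    """Estimate the long-run fraction of time node D is busy."""
    rng = random.Random(seed)
    q1 = q2 = c = 0           # relay queue lengths and state of node D (0 idle, 1 busy)
    t = busy = 0.0
    for _ in range(n_events):
        rates = [lam0, lam1, lam2,
                 mu if c == 1 else 0.0,                # departure from node D
                 mu1 if (c == 0 and q1 > 0) else 0.0,  # successful retrial from R1
                 mu2 if (c == 0 and q2 > 0) else 0.0]  # successful retrial from R2
        total = sum(rates)
        dt = rng.expovariate(total)
        busy += dt if c == 1 else 0.0
        t += dt
        u, acc, ev = rng.random() * total, 0.0, 0
        for i, r in enumerate(rates):
            acc += r
            if u <= acc:
                ev = i
                break
        if ev == 0:                    # S0 arrival: if blocked, a copy goes to BOTH relays
            if c == 0: c = 1
            else: q1 += 1; q2 += 1
        elif ev == 1:                  # S1 arrival: if blocked, a copy goes to R1 only
            if c == 0: c = 1
            else: q1 += 1
        elif ev == 2:                  # S2 arrival: if blocked, a copy goes to R2 only
            if c == 0: c = 1
            else: q2 += 1
        elif ev == 3:                  # service completion at node D
            c = 0
        elif ev == 4:                  # R1 re-dispatches a stored packet to idle D
            q1 -= 1; c = 1
        else:                          # R2 re-dispatches a stored packet to idle D
            q2 -= 1; c = 1
    return busy / t

p_hat = simulate(0.1, 0.2, 0.15, 2.0, 1.0, 1.2)  # estimate of P(C = 1)
```

With these assumed rates, \(\rho /(1-\rho _{0})=0.45/1.9\approx 0.2368\), and the seeded estimate falls close to this value.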

Let \(Q_{i}(t)\) be the number of stored packets in the queue of the relay node \(R_i\), \(i=1,2\), and C(t) be the number of packets under transmission at the destination node D at time t. Clearly, \(X(t)=\left\{ Q_{1}(t),Q_{2}(t),C(t);t\ge 0\right\} \) constitutes a continuous time Markov chain with state space \(S=\left\{ 0,1,...\right\} \times \left\{ 0,1,...\right\} \times \{0,1\}\). Define the stationary probabilities for \(m,n=0,1,2,...\), \(k=0,1,\)

$$\begin{aligned} p_{m,n}(k)=\lim _{t\rightarrow \infty }P(Q_{1}(t)\!=\!m,Q_{2}(t)\!=\!n,C(t)=k)=P(Q_{1}=m,Q_{2}=n,C=k). \end{aligned}$$

Then, for \(Q_{2}=0\),

(1)

where \(\lambda =\lambda _{0}+\lambda _{1}+\lambda _{2}\). For \(Q_{2}\ge 1\),

(2)

Define for \(|x|\le 1\), \(|y|\le 1\), \(k=0,1\), \(H^{(k)}(x,y)=\sum _{m=0}^{\infty }\sum _{n=0}^{\infty }p_{m,n}(k)x^{m}y^{n}.\) Then, using Eqs. (1) and (2) we obtain,

$$\begin{aligned} \begin{array}{l} (\lambda +\mu _{1}+\mu _{2})H^{(0)}(x,y)-\mu H^{(1)}(x,y)=\mu _{2}H^{(0)}(x,0)+\mu _{1}H^{(0)}(0,y), \end{array} \end{aligned}$$
(3)
(4)

Solving (3) with respect to \(H^{(1)}(x,y)\) and substituting to (4), we obtain after some algebra the following functional equation,

$$\begin{aligned} \begin{array}{l} R(x,y)H^{(0)}(x,y)=A(x,y)H^{(0)}(x,0) +B(x,y)H^{(0)}(0,y), \end{array} \end{aligned}$$
(5)

where, \(\widehat{\lambda }_{i}=\lambda _{i}\alpha \), \(i=0,1,2\), \(\widehat{\mu }_{i}=\mu \mu _{i}\), \(i=1,2\), \(\alpha =\lambda +\mu _{1}+\mu _{2}\), \(\widehat{\lambda }=\lambda \alpha \),

$$\begin{aligned} R(x,y)=xy[\widehat{\lambda }_{0}(1-xy)+\widehat{\lambda }_{1}(1-x)+\widehat{\lambda }_{2}(1-y)]-\widehat{\mu }_{1}y(1-x)-\widehat{\mu }_{2}x(1-y), \end{aligned}$$
(6)

Remark 1:

Our model can be generalized to incorporate a coordination mechanism between relays that decides which of the two relays will keep the blocked packet they have both received from the user \(S_{0}\); [23]. However, since wireless communication is fragile, the coordination between relays may fail. In such a case, both relay nodes keep the blocked packet of the user \(S_{0}\) in their queues.

3 General Results

We proceed with the derivation of some general results. Denote for \(k=0,1,\)

$$\begin{aligned} \begin{array}{c} p_{m,.}(k)=\sum _{n=0}^{\infty }p_{m,n}(k),m=0,1,...,\,p_{.,n}(k)=\sum _{m=0}^{\infty }p_{m,n}(k),n=0,1,.... \end{array} \end{aligned}$$

Lemma 1

Let \(\rho _{i}=\frac{\lambda _{i}}{\mu }<1\), \(i=0,1,2\), \(\rho =\rho _{0}+\rho _{1}+\rho _{2}\). Then,

$$\begin{aligned} \begin{array}{rl} H^{(1)}(1,1)=\frac{\rho }{1-\rho _{0}},&{} H^{(0)}(1,1)=\frac{1-\rho _{1}-\rho _{2}-2\rho _{0}}{1-\rho _{0}},\\ H^{(0)}(0,1)=&{}1-\frac{\rho }{1-\rho _{0}}(\frac{\lambda _{0}+\lambda _{1}+\mu _{1}}{\mu _{1}})=1-\widehat{\rho }_{1},\\ H^{(0)}(1,0)=&{}1-\frac{\rho }{1-\rho _{0}}(\frac{\lambda _{0}+\lambda _{2}+\mu _{2}}{\mu _{2}})=1-\widehat{\rho }_{2}. \end{array} \end{aligned}$$
(7)

Proof:

For each \(m=0,1,...,\) we consider the vertical cut between the states \(\left\{ Q_{1}=m,C=1\right\} \) and \(\left\{ Q_{1}=m+1,C=0\right\} \). Then,

$$\begin{aligned} \begin{array}{l} (\lambda _{0}+\lambda _{1})p_{m,.}(1)=\mu _{1} p_{m+1,.}(0). \end{array} \end{aligned}$$
(8)

Summing for all \(m=0,1,...\), we derive

$$\begin{aligned} \begin{array}{c} (\lambda _{0}+\lambda _{1})H^{(1)}(1,1)=\mu _{1}(H^{(0)}(1,1)-H^{(0)}(0,1)). \end{array} \end{aligned}$$
(9)

Note that Eq. (9) is a “conservation of flow” relation, since it equates the flow of jobs into the relay node \(R_{1}\), with the flow of jobs out of the relay node \(R_{1}\). Similarly, by repeating the procedure we have

$$\begin{aligned} \begin{array}{c} (\lambda _{0}+\lambda _{2})H^{(1)}(1,1)=\mu _{2}(H^{(0)}(1,1)-H^{(0)}(1,0)). \end{array} \end{aligned}$$
(10)

Having in mind that \(H^{(1)}(1,1)+H^{(0)}(1,1)=1\) we conclude in

$$\begin{aligned} \begin{array}{l} 1-H^{(0)}(0,1)=\frac{\lambda _{0}+\lambda _{1}+\mu _{1}}{\mu _{1}}H^{(1)}(1,1),\, 1-H^{(0)}(1,0)=\frac{\lambda _{0}+\lambda _{2}+\mu _{2}}{\mu _{2}}H^{(1)}(1,1). \end{array} \end{aligned}$$

Substituting the above equation in (3), with \((x,y)=(1,1)\), we obtain after some algebra Eq. (7).   \(\square \)
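The identities of Lemma 1 and the cut equations (9)–(10) are easy to verify numerically; the following sketch uses an assumed (stable) parameter set:

```python
lam0, lam1, lam2, mu, mu1, mu2 = 0.1, 0.2, 0.15, 2.0, 1.0, 1.2  # illustrative rates
rho0, rho1, rho2 = lam0/mu, lam1/mu, lam2/mu
rho = rho0 + rho1 + rho2

H1_11 = rho/(1 - rho0)                             # H^(1)(1,1)
H0_11 = (1 - rho1 - rho2 - 2*rho0)/(1 - rho0)      # H^(0)(1,1)
H0_01 = 1 - H1_11*(lam0 + lam1 + mu1)/mu1          # H^(0)(0,1)
H0_10 = 1 - H1_11*(lam0 + lam2 + mu2)/mu2          # H^(0)(1,0)

assert abs(H1_11 + H0_11 - 1) < 1e-12                            # normalization
assert abs((lam0 + lam1)*H1_11 - mu1*(H0_11 - H0_01)) < 1e-12    # Eq. (9)
assert abs((lam0 + lam2)*H1_11 - mu2*(H0_11 - H0_10)) < 1e-12    # Eq. (10)
```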

3.1 The Associated Random Walk in Quadrant and Stability Condition

Our model can be seen as a random walk in the quarter plane (RWQP) modulated by a two-state Markov process (node D idle or busy). Using its special structure, we convert it to an ordinary RWQP and investigate its stability condition. Without loss of generality we assume that \(\lambda +\mu +\mu _{1}+\mu _{2} = 1\). Using the notation in [12, 27], the functional Eq. (5) is equivalent to:

$$\begin{aligned} \begin{array}{l} -h^{(0)}(x,y)\pi ^{(0)}(x,y)=h^{(1)}(x,y)\pi _{1}^{(0)}(x)+h^{(2)}(x,y)\pi _{2}^{(0)}(y)+h^{(3)}(x,y)p_{0,0}(0), \end{array} \end{aligned}$$
(11)

where for \(|x|\le 1\), \(|y|\le 1\), \(k=0,1,\)

Equation (11) is the fundamental form corresponding to a RWQP whose one-step transition probabilities from state \((m,n)\) to \((m+i,n+j)\) are for \(-1\le i,j\le 1\):

$$\begin{aligned} \begin{array}{rl} \widehat{p}_{\{(m,n);(m+i,n+j)\}} =&{} \widehat{p}_{i,j}\delta _{\{m,n>0\}}+\widehat{p}_{i,j}^{\prime }\delta _{\{m>0, n=0\}}\\ {} &{}+\widehat{p}_{i,j}^{\prime \prime }\delta _{\{m=0, n>0\}}+\widehat{p}_{i,j}^{(0)}\delta _{\{m=0, n=0\}}, \end{array} \end{aligned}$$

where \(\delta _{\{.\}}\) is Kronecker’s delta and:

Following [12], set

$$\begin{aligned} \left\{ \begin{array}{rl} M=(M_{x},M_{y})=&{}(\sum _{j}\widehat{p}_{1,j}-\sum _{j}\widehat{p}_{-1,j},\sum _{i}\widehat{p}_{i,1}-\sum _{i}\widehat{p}_{i,-1})\\ =&{}(\widehat{\lambda }_{0}+\widehat{\lambda }_{1}-\widehat{\mu }_{1},\widehat{\lambda }_{0}+\widehat{\lambda }_{2}-\widehat{\mu }_{2}),\\ M^{(1)}=(M^{(1)}_{x},M^{(1)}_{y})=&{}(\sum _{j} \widehat{p}_{1,j}^\prime -\sum _{j} \widehat{p}_{-1,j}^\prime ,\sum _{i} \widehat{p}_{i,1}^\prime )\\ =&{}((\lambda _{0}+\lambda _{1})(\lambda +\mu _{1})-\widehat{\mu }_{1},(\lambda _{0}+\lambda _{2})(\lambda +\mu _{1})),\\ M^{(2)}=(M^{(2)}_{x},M^{(2)}_{y})=&{}(\sum _{j}\widehat{p}_{1,j}^{\prime \prime },\sum _{i}\widehat{p}_{i,1}^{\prime \prime }-\sum _{i}\widehat{p}_{i,-1}^{\prime \prime })\\ =&{}((\lambda _{0}+\lambda _{1})(\lambda +\mu _{2}),(\lambda _{0}+\lambda _{2})(\lambda +\mu _{2})-\widehat{\mu }_{2}).\end{array}.\right. \end{aligned}$$

Theorem 1 gives necessary and sufficient conditions for the ergodicity of our model.

Theorem 1

[12]. When \(M\ne 0\), a random walk is ergodic if, and only if, one of the following conditions holds,

  1. 1.
    $$\begin{aligned} \left\{ \begin{array}{l} M_{x}=\widehat{\lambda }_{0}+\widehat{\lambda }_{1}-\widehat{\mu }_{1}<0,M_{y}=\widehat{\lambda }_{0}+\widehat{\lambda }_{2}-\widehat{\mu }_{2}<0,\\ M_{x}M^{(1)}_{y}-M_{y}M^{(1)}_{x}<0\Leftrightarrow \widehat{\mu }_{1}\widehat{\mu }_{2}(1-\rho _{0})(\widehat{\rho }_{1}-1)<0\Leftrightarrow \widehat{\rho }_{1}<1,\\ M_{y}M^{(2)}_{x}-M_{x}M^{(2)}_{y}<0\Leftrightarrow \widehat{\mu }_{1}\widehat{\mu }_{2}(1-\rho _{0})(\widehat{\rho }_{2}-1)<0\Leftrightarrow \widehat{\rho }_{2}<1;\end{array} \right. \end{aligned}$$
  2. 2.

    \(M_{x}<0,\) \(M_{y}\ge 0,\) \(M_{y}M^{(2)}_{x}-M_{x}M^{(2)}_{y}<0;\)

  3. 3.

    \(M_{x}\ge 0,\) \(M_{y}<0,\) \(M_{x}M^{(1)}_{y}-M_{y}M^{(1)}_{x}<0.\)
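The drift conditions of Theorem 1 are straightforward to evaluate numerically. A small helper sketch (the rates passed in the example call are illustrative assumptions):

```python
def ergodic(lam0, lam1, lam2, mu, mu1, mu2):
    """Check the ergodicity conditions of Theorem 1 for given rates."""
    # normalize so that lam + mu + mu1 + mu2 = 1 (Sect. 3.1 convention);
    # this does not affect the signs of the drift expressions
    s = lam0 + lam1 + lam2 + mu + mu1 + mu2
    lam0, lam1, lam2, mu, mu1, mu2 = (v / s for v in (lam0, lam1, lam2, mu, mu1, mu2))
    lam = lam0 + lam1 + lam2
    alpha = lam + mu1 + mu2
    hl0, hl1, hl2 = lam0*alpha, lam1*alpha, lam2*alpha
    hm1, hm2 = mu*mu1, mu*mu2
    Mx, My = hl0 + hl1 - hm1, hl0 + hl2 - hm2            # interior drift M
    M1 = ((lam0 + lam1)*(lam + mu1) - hm1, (lam0 + lam2)*(lam + mu1))  # M^(1)
    M2 = ((lam0 + lam1)*(lam + mu2), (lam0 + lam2)*(lam + mu2) - hm2)  # M^(2)
    if Mx < 0 and My < 0:    # case 1
        return Mx*M1[1] - My*M1[0] < 0 and My*M2[0] - Mx*M2[1] < 0
    if Mx < 0 and My >= 0:   # case 2
        return My*M2[0] - Mx*M2[1] < 0
    if Mx >= 0 and My < 0:   # case 3
        return Mx*M1[1] - My*M1[0] < 0
    return False             # Mx >= 0 and My >= 0: never ergodic

stable = ergodic(0.1, 0.2, 0.15, 2.0, 1.0, 1.2)   # illustrative stable instance
```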

Remark 2:

Note that under the stability condition, \(M_{x}\ge 0\) and \(M_{y}\ge 0\) cannot hold simultaneously. Without loss of generality we assume from here on that \(M_{x}<0\).

3.2 Analysis of the Kernel

We now provide detailed properties of the branch points and of the branches defined by \(R(x,y)=0\). The kernel \(R(x,y)\) can be written as a quadratic polynomial in x (resp. y) with coefficients that are polynomials in y (resp. x). Specifically,

$$\begin{aligned} \begin{array}{c} R(x,y)=a(x)y^{2}+b(x)y+c(x)=\widehat{a}(y)x^{2}+\widehat{b}(y)x+\widehat{c}(y), \end{array} \end{aligned}$$

where

$$\begin{aligned} \begin{array}{l} a(x)=-(\widehat{\lambda }_{0}x^{2}+\widehat{\lambda }_{2}x),\,b(x)=x(\widehat{\lambda }+\widehat{\mu }_{1}+\widehat{\mu }_{2})-\widehat{\mu }_{1}-\widehat{\lambda }_{1}x^{2},\,c(x)=-\widehat{\mu }_{2}x,\\ \widehat{a}(y)=-(\widehat{\lambda }_{0}y^{2}+\widehat{\lambda }_{1}y),\,\widehat{b}(y)=y(\widehat{\lambda }+\widehat{\mu }_{1}+\widehat{\mu }_{2})-\widehat{\mu }_{2}-\widehat{\lambda }_{2}y^{2},\,\widehat{c}(y)=-\widehat{\mu }_{1}y. \end{array} \end{aligned}$$

The solutions of \(R(x,y)=0\) for each y, x respectively are given by,

$$\begin{aligned} \begin{array}{c} Y_{\pm }(x)=\frac{-b(x)\pm \sqrt{D_{x}(x)}}{2a(x)},\qquad X_{\pm }(y)=\frac{-\widehat{b}(y)\pm \sqrt{D_{y}(y)}}{2\widehat{a}(y)}, \end{array} \end{aligned}$$
(12)

where \(D_{x}(x)=b^{2}(x)-4a(x)c(x)\), \(D_{y}(y)=\widehat{b}^{2}(y)-4\widehat{a}(y)\widehat{c}(y)\) are the corresponding discriminants.

We now focus on the branch points. Denote by \(x_{i}\), \(y_{i}\), \(i=1,2,3,4\), the zeros of \(D_{x}(x)\), \(D_{y}(y)\) respectively. Clearly, \(b(x)=0\) has two solutions given by: \(x_{\pm }^{b}=\frac{\widehat{\lambda }+\widehat{\mu }_{1}+\widehat{\mu }_{2}\pm \sqrt{(\widehat{\lambda }+\widehat{\mu }_{1}+\widehat{\mu }_{2})^{2}-4\widehat{\lambda }_{1}\widehat{\mu }_{1}}}{2\widehat{\lambda }_{1}}, \) with \(x_{-}^{b}<1<x_{+}^{b}\). Then, it is readily seen from,

$$\begin{aligned} \begin{array}{l} D_{x}(-\infty )=+\infty ,\,D_{x}(0)=\widehat{\mu }_{1}^{2}>0,\,D_{x}(1)=(\widehat{\lambda }_{0}+\widehat{\lambda }_{2}-\widehat{\mu }_{2})^{2}>0,\\ D_{x}(x_{-}^{b})\le 0,\,D_{x}(x_{+}^{b})\le 0,\,D_{x}(+\infty )=+\infty , \end{array} \end{aligned}$$

that \(x_{i}\)s are real, such that \(0<x_{1}\le x_{-}^{b}\le x_{2}<1<x_{3}\le x_{+}^{b}<x_{4}<\infty .\) Moreover, \(D_{x}(x)<0,\,x\in (x_{1},x_{2})\cup (x_{3},x_{4}), D_{x}(x)>0,\,x\in (-\infty ,x_{1})\cup (x_{2},x_{3})\cup (x_{4},\infty ).\) Similarly, we can prove that \(y_{i}\)s are also real, and such that \(0<y_{1}<y_{2}<1<y_{3}<y_{4}<\infty .\) Furthermore, \(D_{y}(y)<0,\,y\in (y_{1},y_{2})\cup (y_{3},y_{4}), D_{y}(y)>0,\,y\in (-\infty ,y_{1})\cup (y_{2},y_{3})\cup (y_{4},\infty ).\)
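The location of the four branch points can be checked numerically by expanding the discriminant \(D_{x}(x)=b^{2}(x)-4a(x)c(x)\) as a quartic and computing its roots (a sketch with assumed rates):

```python
import numpy as np

lam0, lam1, lam2, mu, mu1, mu2 = 0.1, 0.2, 0.15, 2.0, 1.0, 1.2  # illustrative rates
lam = lam0 + lam1 + lam2
alpha = lam + mu1 + mu2
hl0, hl1, hl2, hlam = lam0*alpha, lam1*alpha, lam2*alpha, lam*alpha
hm1, hm2 = mu*mu1, mu*mu2

# coefficients in descending powers of x
b = np.array([-hl1, hlam + hm1 + hm2, -hm1])        # b(x)
four_ac = 4*hm2*np.array([hl0, hl2, 0.0, 0.0])      # 4 a(x) c(x) = 4 hm2 (hl0 x^3 + hl2 x^2)
Dx = np.polysub(np.polymul(b, b), four_ac)          # D_x(x), a quartic
x1, x2, x3, x4 = np.sort(np.roots(Dx).real)         # roots are real (see text)
assert 0 < x1 <= x2 < 1 < x3 <= x4                  # ordering of the branch points
```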

To ensure the continuity of the two-valued function Y(x) (resp. X(y)), we consider the following cut planes: \(\tilde{C}_{x}=C_{x}-[x_{3},x_{4}],\tilde{C}_{y}=C_{y}-[y_{3},y_{4}], \widehat{C}_{x}=C_{x}-([x_{1},x_{2}]\cup [x_{3},x_{4}]),\widehat{C}_{y}=C_{y}-([y_{1},y_{2}]\cup [y_{3},y_{4}]),\) where \(C_{x}\), \(C_{y}\) are the complex planes of x, y, respectively. For \(x\in \widehat{C}_{x}\), the two branches of Y(x) are defined by

$$\begin{aligned} \begin{array}{l} Y_{0}(x)= \left\{ \begin{array}{ll} Y_{-}(x) &{} \text {if}\, |Y_{-}(x)|\le |Y_{+}(x)|,\\ Y_{+}(x) &{} \text {if}\, |Y_{-}(x)|>|Y_{+}(x)|; \end{array}, \quad \right. \,Y_{1}(x)= \left\{ \begin{array}{ll} Y_{+}(x) &{} \text {if}\, |Y_{-}(x)|\le |Y_{+}(x)|,\\ Y_{-}(x) &{} \text {if}\, |Y_{-}(x)|>|Y_{+}(x)|. \end{array} \right. \end{array} \end{aligned}$$

Similarly, we can define functions \(X_{0}(y)\), \(X_{1}(y)\), \(y\in \widehat{C}_{y}\) based on \(X_{\pm }(y)\). We proceed with some properties of \(Y_{0}(x)\), \(Y_{1}(x)\):

Lemma 2

The functions \(Y_{i}(x)\), \(x\in C_{x}\), \(i=0,1\) are meromorphic. Moreover,

  1. 1.

    \(Y_{0}(x)\) has one zero and no poles (i.e. it is analytic in \(\widehat{C}_{x}\)). \(Y_{1}(x)\) has two poles and no zeros.

  2. 2.

    \(|Y_{0}(x)|\le |Y_{1}(x)|\), \(x\in \widehat{C}_{x}\), and equality takes place only on the cuts.

  3. 3.

    When \(|x|=1\), \(|Y_{0}(x)|\le 1\). For \(x=1\), \(Y_{0}(1)=1\).

  4. 4.

    \( Y_{0}^\prime (1)=-\frac{\widehat{\mu }_{1}-\widehat{\lambda }_{0}-\widehat{\lambda }_{1}}{\widehat{\mu }_{2}-\widehat{\lambda }_{0}-\widehat{\lambda }_{2}}. \)

Similar results can be obtained for \(X_{0}(y)\), \(X_{1}(y)\).

Proof

The proof of \(1.-3.\) is based on Lemma 2.3.4, Theorem 5.3.3 in [12]. 4. is proved by noticing that

$$\begin{aligned} \begin{array}{lr} Y_{0}(x)Y_{1}(x)=\frac{\widehat{\mu }_{2}}{\widehat{\lambda }_{0}x+\widehat{\lambda }_{2}},\, Y_{0}(x)+Y_{1}(x)=\frac{(\widehat{\lambda }+\widehat{\mu }_{1}+\widehat{\mu }_{2})x-\widehat{\mu }_{1}-\widehat{\lambda }_{1}x^{2}}{\widehat{\lambda }_{0}x^{2}+\widehat{\lambda }_{2}x}. \end{array} \end{aligned}$$
(13)

Using (13) and taking into account that \(Y_{0}(1)=1\), we can obtain after some basic algebra the desired result.\(\square \)
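The branch definitions and the statements of Lemma 2 can be illustrated numerically: evaluate both roots of \(a(x)y^{2}+b(x)y+c(x)=0\), order them by modulus, and check \(Y_{0}(1)=1\), the Vieta relations (13), and the derivative formula of part 4 by central differences (assumed rates):

```python
import numpy as np

lam0, lam1, lam2, mu, mu1, mu2 = 0.1, 0.2, 0.15, 2.0, 1.0, 1.2  # illustrative rates
lam = lam0 + lam1 + lam2
alpha = lam + mu1 + mu2
hl0, hl1, hl2, hlam = lam0*alpha, lam1*alpha, lam2*alpha, lam*alpha
hm1, hm2 = mu*mu1, mu*mu2

def Y_branches(x):
    """Return (Y0(x), Y1(x)): the roots of a(x)y^2 + b(x)y + c(x) = 0, ordered by modulus."""
    a = -(hl0*x**2 + hl2*x)
    b = x*(hlam + hm1 + hm2) - hm1 - hl1*x**2
    c = -hm2*x
    d = np.sqrt(complex(b*b - 4*a*c))
    r1, r2 = (-b + d)/(2*a), (-b - d)/(2*a)
    return (r1, r2) if abs(r1) <= abs(r2) else (r2, r1)

y0, y1 = Y_branches(1.0)
assert abs(y0 - 1) < 1e-12                        # Y0(1) = 1 (Lemma 2, part 3)
x = 0.7                                           # a point in (x2, x3): real branches
y0, y1 = Y_branches(x)
assert abs(y0*y1 - hm2/(hl0*x + hl2)) < 1e-10     # product relation in (13)
h = 1e-6                                          # central difference for Y0'(1)
dY = (Y_branches(1 + h)[0] - Y_branches(1 - h)[0]) / (2*h)
assert abs(dY + (hm1 - hl0 - hl1)/(hm2 - hl0 - hl2)) < 1e-5   # Lemma 2, part 4
```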

Define the following image contours: \(\mathcal {L}=Y_{0}[\overrightarrow{\underleftarrow{x_{1},x_{2}}}],\,\mathcal {L}_{ext}=Y_{0}[\overrightarrow{\underleftarrow{x_{3},x_{4}}}],\, \mathcal {M}=X_{0}[\overrightarrow{\underleftarrow{y_{1},y_{2}}}],\,\mathcal {M}_{ext}=X_{0}[\overrightarrow{\underleftarrow{y_{3},y_{4}}}],\) where \([\overrightarrow{\underleftarrow{u,v}}]\) stands for the contour traversed from u to v along the upper edge of the slit [uv] and then back to u along the lower edge of the slit. Then, we have the following lemma:

Lemma 3

  1. 1.

    For \(x\in [x_{1},x_{2}]\), the algebraic function Y(x) lies on a closed contour \(\mathcal {L}\), which is symmetric with respect to the real line and defined by

    $$\begin{aligned} \begin{array}{lr} |y|^{2}=\frac{2\widehat{\mu }_{2}(2\widehat{\lambda }_{0}Re(y)+\widehat{\lambda }_{1})}{\widehat{\lambda }_{0}[\widehat{\lambda }+\widehat{\mu }_{1}+\widehat{\mu }_{2}-2\widehat{\lambda }_{2}Re(y)-\sqrt{\varDelta _{y}}]+2\widehat{\lambda }_{2}(2\widehat{\lambda }_{0}Re(y)+\widehat{\lambda }_{1})},&|y|^{2}\le \frac{\widehat{\mu }_{2}}{\widehat{\lambda }_{0}x_{1}+\widehat{\lambda }_{2}}, \end{array} \end{aligned}$$
    (14)

    where \(\varDelta _{y}=(\widehat{\lambda }+\widehat{\mu }_{1}+\widehat{\mu }_{2}-2\widehat{\lambda }_{2}Re(y))^{2}-4\widehat{\mu }_{1}(2\widehat{\lambda }_{0}Re(y)+\widehat{\lambda }_{1})\). Moreover, set \(\zeta :=|Y(x_{1})|=\sqrt{\frac{\widehat{\mu }_{2}}{\widehat{\lambda }_{0}x_{1}+\widehat{\lambda }_{2}}}\), the point on \(\mathcal {L}\) with the largest modulus. The point \(Y_{0}(x_{2})=-\sqrt{\frac{\widehat{\mu }_{2}}{\widehat{\lambda }_{0}x_{2}+\widehat{\lambda }_{2}}}\) is the extreme left point of \(\mathcal {L}\).

  2. 2.

    Similarly, for \(y\in [y_{1},y_{2}]\), the algebraic function X(y) lies on a closed contour \(\mathcal {M}\), which is symmetric with respect to the real line and defined by

    $$\begin{aligned} \begin{array}{lr} |x|^{2}=\frac{2\widehat{\mu }_{1}(2\widehat{\lambda }_{0}Re(x)+\widehat{\lambda }_{2})}{\widehat{\lambda }_{0}[\widehat{\lambda }+\widehat{\mu }_{1}+\widehat{\mu }_{2}-2\widehat{\lambda }_{1}Re(x)-\sqrt{\varDelta _{x}}]+2\widehat{\lambda }_{1}(2\widehat{\lambda }_{0}Re(x)+\widehat{\lambda }_{2})},&|x|^{2}\le \frac{\widehat{\mu }_{1}}{\widehat{\lambda }_{0}y_{1}+\widehat{\lambda }_{1}}, \end{array} \end{aligned}$$
    (15)

    where \(\varDelta _{x}=(\widehat{\lambda }+\widehat{\mu }_{1}+\widehat{\mu }_{2}-2\widehat{\lambda }_{1}Re(x)))^{2}-4\widehat{\mu }_{2}(2\widehat{\lambda }_{0}Re(x)+\widehat{\lambda }_{2})\). Moreover, set \(\beta :=|X(y_{1})|=\sqrt{\frac{\widehat{\mu }_{1}}{\widehat{\lambda }_{0}y_{1}+\widehat{\lambda }_{1}}}>1\) (see Remark 2), the point on \(\mathcal {M}\) with the largest modulus. \(X_{0}(y_{2})=-\sqrt{\frac{\widehat{\mu }_{1}}{\widehat{\lambda }_{0}y_{2}+\widehat{\lambda }_{1}}}\) is the extreme left point of \(\mathcal {M}\).

Proof: We prove the first part for Y(x) (the proof of 2. is similar). Clearly, \(D_{x}(x)<0\), \(x\in (x_{1},x_{2})\) and \(Y_{0}(x)\), \(Y_{1}(x)\) are complex conjugates. Moreover,

$$\begin{aligned} \begin{array}{l} Re(Y(x))=\frac{x(\widehat{\lambda }+\widehat{\mu }_{1}+\widehat{\mu }_{2})-\widehat{\mu }_{1}-\widehat{\lambda }_{1}x^{2}}{2(\widehat{\lambda }_{0}x^{2}+\widehat{\lambda }_{2}x)}. \end{array} \end{aligned}$$
(16)

Since \(R(x,Y(x))=0\) we have \(|Y(x)|^{2}=\frac{\widehat{\mu }_{2}}{\widehat{\lambda }_{0}x+\widehat{\lambda }_{2}}\Leftrightarrow |Y(x)|=\sqrt{\frac{\widehat{\mu }_{2}}{\widehat{\lambda }_{0}x+\widehat{\lambda }_{2}}}.\) Clearly, |Y(x)| is a decreasing function in x. Thus, \(|Y(x)|\le |Y(x_{1})|=\zeta :=\sqrt{\frac{\widehat{\mu }_{2}}{\widehat{\lambda }_{0}x_{1}+\widehat{\lambda }_{2}}}\), which is the extreme right point of \(\mathcal {L}\). Solving (16) with respect to x and taking the solution such that \(x\in [0,1]\) yields,

$$\begin{aligned} \begin{array}{l} \widetilde{x}=\frac{\widehat{\lambda }+\widehat{\mu }_{1}+\widehat{\mu }_{2}-2\widehat{\lambda }_{2}Re(y)-\sqrt{(\widehat{\lambda }+\widehat{\mu }_{1}+\widehat{\mu }_{2}-2\widehat{\lambda }_{2}Re(y))^{2}-4\widehat{\mu }_{1}(2\widehat{\lambda }_{0}Re(y)+\widehat{\lambda }_{1})}}{2((2\widehat{\lambda }_{0}Re(y)+\widehat{\lambda }_{1}))}. \end{array} \end{aligned}$$
(17)

Substituting (17) into \(|y|^{2}=\widehat{\mu }_{2}/(\widehat{\lambda }_{0}x+\widehat{\lambda }_{2})\) (i.e., for \(x=\widetilde{x}\)) yields (14).\(\square \)
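The modulus relation \(|Y(x)|^{2}=\widehat{\mu }_{2}/(\widehat{\lambda }_{0}x+\widehat{\lambda }_{2})\), which drives the proof above, can be checked on the slit \([x_{1},x_{2}]\), where the two branches are complex conjugates (a sketch with assumed rates):

```python
import numpy as np

lam0, lam1, lam2, mu, mu1, mu2 = 0.1, 0.2, 0.15, 2.0, 1.0, 1.2  # illustrative rates
lam = lam0 + lam1 + lam2
alpha = lam + mu1 + mu2
hl0, hl1, hl2, hlam = lam0*alpha, lam1*alpha, lam2*alpha, lam*alpha
hm1, hm2 = mu*mu1, mu*mu2

# branch points: roots of the quartic D_x(x) = b(x)^2 - 4 a(x) c(x)
b = np.array([-hl1, hlam + hm1 + hm2, -hm1])
Dx = np.polysub(np.polymul(b, b), 4*hm2*np.array([hl0, hl2, 0.0, 0.0]))
x1, x2, x3, x4 = np.sort(np.roots(Dx).real)

# on (x1, x2) the branches are complex conjugates and
# |Y(x)|^2 = hm2 / (hl0 x + hl2): the relation tracing the contour L
for x in np.linspace(x1, x2, 9)[1:-1]:
    a_x = -(hl0*x**2 + hl2*x)
    b_x = x*(hlam + hm1 + hm2) - hm1 - hl1*x**2
    y = (-b_x + np.sqrt(complex(b_x**2 - 4*a_x*(-hm2*x)))) / (2*a_x)
    assert abs(abs(y)**2 - hm2/(hl0*x + hl2)) < 1e-9
```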

Finally, for any simple closed contour \(\mathcal {U}\), denote by \(G_{\mathcal {U}}\) (resp. \(G_{\mathcal {U}}^{c}\)) the interior (resp. exterior) domain bounded by \(\mathcal {U}\). The next result, Lemma 4, gives topological and algebraic properties of the associated RWQP (see also Theorem 5.3.2, Corollary 5.3.5 in [12]). Define,

$$\begin{aligned} \begin{array}{c} \varDelta =\begin{vmatrix} \widehat{p}_{1,1}&{}\widehat{p}_{1,0}&{}\widehat{p}_{1,-1} \\ \widehat{p}_{0,1}&{}\widehat{p}_{0,0}-1&{}\widehat{p}_{0,-1} \\ \widehat{p}_{-1,1}&{}\widehat{p}_{-1,0}&{}\widehat{p}_{-1,-1} \end{vmatrix}=\begin{vmatrix} \widehat{\lambda }_{0}&{}\widehat{\lambda }_{1}&{}0 \\ \widehat{\lambda }_{2}&{}-(\widehat{\lambda }+\widehat{\mu }_{1}+\widehat{\mu }_{2})&{}\widehat{\mu }_{2} \\ 0&{}\widehat{\mu }_{1}&{}0 \end{vmatrix}=-\widehat{\mu }_{1}\widehat{\mu }_{2}\widehat{\lambda }_{0}<0. \end{array} \end{aligned}$$
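The sign of this determinant is easy to confirm numerically (assumed rates):

```python
import numpy as np

lam0, lam1, lam2, mu, mu1, mu2 = 0.1, 0.2, 0.15, 2.0, 1.0, 1.2  # illustrative rates
lam = lam0 + lam1 + lam2
alpha = lam + mu1 + mu2
hl0, hl1, hl2, hlam = lam0*alpha, lam1*alpha, lam2*alpha, lam*alpha
hm1, hm2 = mu*mu1, mu*mu2

D = np.linalg.det(np.array([[hl0, hl1,                  0.0],
                            [hl2, -(hlam + hm1 + hm2),  hm2],
                            [0.0, hm1,                  0.0]]))
# closed form: Delta = -hm1 * hm2 * hl0 < 0
assert abs(D + hm1*hm2*hl0) < 1e-9
assert D < 0
```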

Lemma 4

(i) The curves \(\mathcal {L}\) and \(\mathcal {L}_{ext}\) (resp. \(\mathcal {M}\) and \(\mathcal {M}_{ext}\)) are quartic, symmetrical with respect to the real axis, closed and simple. Since \(\varDelta <0\), \([y_{1},y_{2}]\subset G_{\mathcal {L}_{ext}}\subset G_{\mathcal {L}}\ and\ [y_{3},y_{4}]\subset G_{\mathcal {L}}^{c}.\) Similar results hold for \(\mathcal {M}\), \(\mathcal {M}_{ext}\), \([x_{1},x_{2}]\), \([x_{3},x_{4}]\).

(ii) \(Y_{0}(x):G_{\mathcal {M}}-[x_{1},x_{2}]\rightarrow G_{\mathcal {L}}-[y_{1},y_{2}]\), \(X_{0}(y):G_{\mathcal {L}}-[y_{1},y_{2}]\rightarrow G_{\mathcal {M}}-[x_{1},x_{2}]\) are conformal mappings, and since \(\varDelta <0\), the values of \(Y_{0}(x)\) (resp. \(X_{0}(y)\)) are contained in \(G_{\mathcal {L}}\) (resp. \(G_{\mathcal {M}}\)), whereas the values of \(Y_{1}(x)\) (resp. \(X_{1}(y)\)) are contained in \(G_{\mathcal {L}_{ext}}^{c}\) (resp. \(G_{\mathcal {M}_{ext}}^{c}\)).

3.3 Intersection Points of the Curves

The analytic continuation of \(H^{(0)}(x,0)\) (resp. \(H^{(0)}(0,y)\)) outside the unit disc is achieved by various methods (e.g., Lemma 2.2.1 and Theorem 3.2.3 in [12]). Note that the common solutions of \(R(x,y)=0\), \(A(x,y)=0\) (resp. \(B(x,y)=0\)) are potential singularities of the functions \(H^{(0)}(x,0)\), \(H^{(0)}(0,y)\). Thus, the study of the intersection points of the curves \(R(x,y)=0\), \(A(x,y)=0\) (resp. \(B(x,y)=0\)) is crucial for the analytic continuation of \(H^{(0)}(x,0)\), \(H^{(0)}(0,y)\).

Intersection points of the curves \(R(x,y)=0\), \(A(x,y)=0\). Let \(x\in \widehat{C}_{x}\) and \(R(x,y) = 0\), \(y = Y_{\pm }(x)\). Their intersection points (if any) are the roots of their resultant. We can easily show that the resultant in y of the two polynomials R(xy) and A(xy) is

$$\begin{aligned} \begin{array}{c} Res_{y}(R,A;x)=(\lambda _{0}x+\lambda _{2})\widehat{\mu }_{2}^{2}x^{2}(x-1)T_{y}(x), \end{array} \end{aligned}$$

where \(T_{y}(x)=(\lambda (\lambda _{0}+\lambda _{1})+\lambda _{0}\mu _{1})(\lambda +\mu _{1})x^{2}+\lambda \mu _{1}(\lambda -\mu +\mu _{1})x-\mu _{1}\widehat{\mu }_{1}.\) Note that \(T_{y}(0)=-\mu _{1}\widehat{\mu }_{1}<0\), \(T_{y}(1)=\widehat{\mu }_{1}(\lambda +\mu _{1})(1-\rho _{0})(\widehat{\rho }_{1}-1)<0\) (due to the stability condition), and \(\lim _{x\rightarrow \pm \infty }T_{y}(x)=+\infty .\) Thus, \(T_{y}(x)=0\) has two roots of opposite sign with \(x_{*}<0<1<x^{*}\).
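The sign pattern of \(T_{y}(x)\) and the location of its two roots can be illustrated numerically (assumed stable rates):

```python
import numpy as np

lam0, lam1, lam2, mu, mu1, mu2 = 0.1, 0.2, 0.15, 2.0, 1.0, 1.2  # illustrative, stable
lam = lam0 + lam1 + lam2
hm1 = mu*mu1

Ty = np.array([(lam*(lam0 + lam1) + lam0*mu1)*(lam + mu1),  # coefficient of x^2
               lam*mu1*(lam - mu + mu1),                    # coefficient of x
               -mu1*hm1])                                   # constant term
x_star_neg, x_star_pos = np.sort(np.roots(Ty).real)
assert x_star_neg < 0 < 1 < x_star_pos    # roots of opposite sign, with x* > 1
assert Ty.sum() < 0                       # T_y(1) < 0 under the stability condition
```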

Intersection points of the curves \(R(x,y)=0\), \(B(x,y)=0\). Let \(y\in \widehat{C}_{y}\) and \(R(x,y)=0\), \(x=X_{\pm }(y)\). It is easy to see that

$$\begin{aligned} \begin{array}{c} R(x,y)=\frac{\alpha }{\mu _{1}}B(x,y)+\lambda \mu y(1-x)+\widehat{\mu }_{2}(y-x). \end{array} \end{aligned}$$

Thus, \(R(x,y)=0\), \(B(x,y)=0\), implies that,

$$\begin{aligned} \begin{array}{rl} \lambda _{0}x(1-xy)+\lambda _{2}x(1-y)+(\lambda _{1}x-\mu )(1-x)=&{}0,\\ \lambda \mu y(1-x)+\widehat{\mu }_{2}(y-x)=&{}0. \end{array} \end{aligned}$$
(18)

The second equation in (18) gives \(x=(\lambda +\mu _{2})y/(\lambda y+\mu _{2})\), and substituting in the first one yields,

$$\begin{aligned} \begin{array}{c} L(y)=\frac{1-y}{(\lambda y+\mu _{2})^{2}}Z_{x}(y)=0, \end{array} \end{aligned}$$

where \(Z_{x}(y)=y^{2}(\lambda _{0}(\lambda +\mu _{2})+\lambda \lambda _{2})(\lambda +\mu _{2})+y\lambda \mu _{2}(\lambda +\mu _{2}-\mu )-\mu _{2}\widehat{\mu }_{2}\). Note that \(Z_{x}(0)=-\mu _{2}\widehat{\mu }_{2}<0\), \(Z_{x}(1)=(\lambda +\mu _{2})\widehat{\mu }_{2}(1-\rho _{0})(\widehat{\rho }_{2}-1)<0\) (due to the stability condition), and \(\lim _{y\rightarrow \pm \infty }Z_{x}(y)=+\infty \). Thus, \(Z_{x}(y)\) has two zeros of opposite sign \(y_{*}<0<1<y^{*}\), and \(Z_{x}(y)<0\), \(y\in [0,1]\). Therefore, \(L(y)<0\), \(y\in [0,1)\), which in turn implies that \(B(X_{0}(y),y)\ne 0\), \(y\in [y_{1},y_{2}]\subset [0,1)\), or equivalently \(B(x,Y_{0}(x))\ne 0\), \(x\in \mathcal {M}\).

4 Formulation and Solution of a Boundary Value Problem

For zero pairs \((x,y)\) of \(R(x,y)=0\), \(y\in D_{y}=\{y\in C_{y}:|y|\le 1,|X_{0}(y)|\le 1\}\),

$$\begin{aligned} \begin{array}{c} A(X_{0}(y),y)H^{(0)}(X_{0}(y),0)=-B(X_{0}(y),y)H^{(0)}(0,y). \end{array} \end{aligned}$$
(19)

For \(y\in D_{y}-[y_{1},y_{2}]\) the functions \(H^{(0)}(0,y)\), \(H^{(0)}(X_{0}(y),0)\) are both analytic. It then follows from (19) that \(A(X_{0}(y),y)\), \(B(X_{0}(y),y)\) must not vanish in \(D_{y}-[y_{1},y_{2}]\); otherwise \(H^{(0)}(0,y)\), \(H^{(0)}(x,0)\) would have poles in \(|x|\le 1\), \(|y|\le 1\). Then, the right-hand side of (19) can be analytically continued up to the slit \([y_{1},y_{2}]\), and thus,

$$\begin{aligned} \begin{array}{c} A(X_{0}(y),y)H^{(0)}(X_{0}(y),0)+B(X_{0}(y),y)H^{(0)}(0,y)=0,\,y\in [y_{1},y_{2}], \end{array} \end{aligned}$$
(20)

or equivalently

$$\begin{aligned} \begin{array}{c} A(x,Y_{0}(x))H^{(0)}(x,0)+B(x,Y_{0}(x))H^{(0)}(0,Y_{0}(x))=0,\,x\in \mathcal {M}. \end{array} \end{aligned}$$
(21)

The function \(H^{(0)}(x,0)\) is holomorphic in \(D_{x}=\{x\in C_{x}:|x|<1\}\) and continuous in \(\bar{D}_{x}=\{x\in C_{x}:|x|\le 1\}\), but might have poles in \(S_{x}:=G_{\mathcal {M}}\cap (\bar{D}_{x})^{c}\). We also know that (see also Corollary 5.3.5 in [12]) for \(x\in S_{x}\), \(|Y_{0}(x)|\le 1\), as a consequence of the maximum modulus principle. Hence, from (21) the possible poles of \(H^{(0)}(x,0)\) in \(S_{x}\) are the zeros of \(A(x,Y_{0}(x))\) in this region. Specifically, the only possible zero is obtained in Subsect. 3.3 and given by

$$\begin{aligned} \begin{array}{l} x^{*}=\frac{-\lambda \mu _{1}(\lambda +\mu _{1}-\mu )+\sqrt{(\lambda \mu _{1}(\lambda +\mu _{1}-\mu ))^{2}+4\mu _{1}\widehat{\mu }_{1}(\lambda (\lambda _{0}+\lambda _{1})+\lambda _{0}\mu _{1})(\lambda +\mu _{1})}}{2(\lambda (\lambda _{0}+\lambda _{1})+\lambda _{0}\mu _{1})(\lambda +\mu _{1})}. \end{array} \end{aligned}$$
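This closed form is simply the positive root of the quadratic \(T_{y}(x)=0\) from Subsect. 3.3, which can be confirmed numerically (assumed rates):

```python
import math
import numpy as np

lam0, lam1, lam2, mu, mu1, mu2 = 0.1, 0.2, 0.15, 2.0, 1.0, 1.2  # illustrative, stable
lam = lam0 + lam1 + lam2
hm1 = mu*mu1
A2 = (lam*(lam0 + lam1) + lam0*mu1)*(lam + mu1)   # leading coefficient of T_y
A1 = lam*mu1*(lam + mu1 - mu)                     # linear coefficient of T_y

x_star = (-A1 + math.sqrt(A1*A1 + 4*mu1*hm1*A2)) / (2*A2)   # closed form above
pos_root = max(np.roots([A2, A1, -mu1*hm1]).real)           # positive root of T_y(x) = 0
assert abs(x_star - pos_root) < 1e-10
assert x_star > 1
```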

Remark 3:

Note that the other zero, \(x_{*}(<0)\) (see Subsect. 3.3), cannot belong to the region \(S_{x}\). Indeed, it can be easily shown that

$$\begin{aligned} \begin{array}{c} A(x_{*},Y_{0}(x_{*}))=0\Leftrightarrow \lambda x_{*}(1-Y_{0}(x_{*}))+\mu _{1}(x_{*}-Y_{0}(x_{*}))=0, \end{array} \end{aligned}$$

and since \(-1\le Y_{0}(x_{*})\le 1\), then, \(x_{*}(1-Y_{0}(x_{*}))\le 0\), \(x_{*}-Y_{0}(x_{*})<0\), which implies that \(x_{*}\notin S_{x}\). Thus, we focus only on the positive zero \(x^{*}\).

If \(x^{*}>\beta \), then \(A(x,Y_{0}(x))\ne 0\) for \(x\in S_{x}\). If \(x^{*}\in S_{x}\), then \(x^{*}\) is a zero of \(A(x,Y_{0}(x))\) provided that \(|Y_{0}(x^{*})|\le 1\). Therefore, set \(r=1\) if \(x^{*}\le \beta \) and \(|Y_{0}(x^{*})|\le 1\), and \(r=0\) otherwise. If \(r=1\), then \(A(x,Y_{0}(x))\) has a unique zero in \(S_{x}\), given by \(x=x^{*}\). Otherwise, \(A(x,Y_{0}(x))\) does not vanish in \(S_{x}\). It is easy to prove that when \(A(x,Y_{0}(x))\) vanishes at \(x=x^{*}\), this zero has multiplicity one, since it can be shown that \(dA(x,Y_{0}(x))/dx\) does not vanish at \(x=x^{*}\).

For \(y\in [y_{1},y_{2}]\), letting \(X_{0}(y)=x\in \mathcal {M}\) and realizing that \(Y_{0}(X_{0}(y))=y\) so that \(y=Y_{0}(x)\), we rewrite (20) as (\(B(x,Y_{0}(x))\ne 0\), \(x\in \mathcal {M}\); see Subsect. 3.3)

$$\begin{aligned} \begin{array}{c} \frac{A(x,Y_{0}(x))}{B(x,Y_{0}(x))}H^{(0)}(x,0)=-H^{(0)}(0,Y_{0}(x)),\,x\in \mathcal {M}. \end{array} \end{aligned}$$
(22)

Taking into account the possible zero of \(A(x,Y_{0}(x))\) for \(x\in S_{x}\), multiplying both sides of (22) by the imaginary unit i, and noticing that \(H^{(0)}(0,Y_{0}(x))\) is real for \(x\in \mathcal {M}\), since \(Y_{0}(x)\in [y_{1},y_{2}]\), we have

$$\begin{aligned} \begin{array}{c} Re[iU(x)G(x)]=0,\,x\in \mathcal {M},\quad G(x):=(x-x^{*})^{r}H^{(0)}(x,0),\quad U(x):=\frac{A(x,Y_{0}(x))}{(x-x^{*})^{r}B(x,Y_{0}(x))}, \end{array} \end{aligned}$$
(23)

where G(x) is regular for \(x\in G_{\mathcal {M}}\) and continuous for \(x\in \mathcal {M}\cup G_{\mathcal {M}}\), and U(x) is a non-vanishing function on \(\mathcal {M}\). In order to solve the Riemann-Hilbert boundary value problem formulated in (23), we must conformally transform it onto the unit circle \(\mathcal {C}\). Define the conformal mapping and its inverse, respectively, by

$$\begin{aligned} \begin{array}{c} z=f(x):G_{\mathcal {M}}\rightarrow G_{\mathcal {C}},\,x=f_{0}(z):G_{\mathcal {C}}\rightarrow G_{\mathcal {M}}. \end{array} \end{aligned}$$

Then, the Riemann-Hilbert problem formulated in (23) is reduced to the following: Determine a function \(F(z):=G(f_{0}(z))\), regular in \(G_{\mathcal {C}}\) and continuous in \(G_{\mathcal {C}}\cup \mathcal {C}\) satisfying

$$\begin{aligned} \begin{array}{c} Re[iU(f_{0}(z))F(z)]=0,\,z\in \mathcal {C}. \end{array} \end{aligned}$$
(24)

Define \(\chi =\frac{-1}{\pi }[\arg \{U(x)\}]_{x\in \mathcal {M}}\), i.e., the index of the Riemann-Hilbert problem, where \([\arg \{U(x)\}]_{x\in \mathcal {M}}\) denotes the variation of the argument of the function U(x) as x moves along the closed contour \(\mathcal {M}\) in the positive direction, provided that \(U(x)\ne 0\), \(x\in \mathcal {M}\). As expected [2, 13], under the stability conditions given in Theorem 1, the index is \(\chi =0\). Following the lines of [13] (recall from Remark 2 that \(M_{x}<0\)):

Lemma 5

  1. If \(M_{y}<0\), then \(\chi =0\) is equivalent to

  2. If \(M_{y}\ge 0\), then \(\chi =0\) is equivalent to \(\frac{d B(X_{0}(y),y)}{dy}|_{y=1}<0\Leftrightarrow \widehat{\rho }_{2}<1\).

Thus, under stability conditions (see Theorem 1) the homogeneous Riemann-Hilbert problem (23) has a unique solution given by,

$$\begin{aligned} \begin{array}{c} H^{(0)}(x,0)=W(x-x^{*})^{-r}\exp [\frac{1}{2i\pi }\int _{|t|=1}\frac{\log \{J(t)\}}{t-f(x)}dt],\,x\in G_{\mathcal {M}}, \end{array} \end{aligned}$$
(25)

where W is a constant, \(J(t)=\frac{\overline{U(t)}}{U(t)}\), and \(U(t)=U(f_{0}(t))\). Since \(1\in G_{\mathcal {M}}\), W is obtained by setting \(x=1\) in (25) and combining with the value of \(H^{(0)}(1,0)\) found in (7). After some algebra we conclude that, for \(x\in G_{\mathcal {M}}\),

$$\begin{aligned} \begin{array}{l} H^{(0)}(x,0)=(\frac{1-x^{*}}{x-x^{*}})^{r}(1-\widehat{\rho }_{2})\exp [\frac{1}{2i\pi }\int _{|t|=1}\frac{\log \{J(t)\}(f(x)-f(1))}{(t-f(x))(t-f(1))}dt]. \end{array} \end{aligned}$$
(26)
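
For later numerical use, the contour integral in (26) can be approximated by the trapezium rule on \(|t|=1\). The following sketch assumes the values \(f(x)\), \(f(1)\) of the conformal mapping (both strictly inside the unit disk) and the boundary data \(\log J(t)\) are available; the argument names `fx`, `f1`, `logJ` are ours, not the paper's:

```python
import cmath
import math

def H0_x0(x, r, x_star, rho2_hat, fx, f1, logJ, K=4000):
    # Trapezium-rule sketch of (26); fx = f(x) and f1 = f(1) lie strictly
    # inside the unit disk, and logJ(t) returns log J(t) for |t| = 1.
    total = 0.0 + 0.0j
    for k in range(K):
        t = cmath.exp(2j * math.pi * k / K)
        dt = 2j * math.pi * t / K           # dt along the unit circle
        total += logJ(t) * (fx - f1) / ((t - fx) * (t - f1)) * dt
    integral = total / (2j * math.pi)
    return ((1.0 - x_star) / (x - x_star)) ** r * (1.0 - rho2_hat) * cmath.exp(integral)
```

When \(\log J\equiv 0\) the exponential factor reduces to one and the formula collapses to \(((1-x^{*})/(x-x^{*}))^{r}(1-\widehat{\rho }_{2})\), a useful sanity check.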

We now focus on the determination of the conformal mapping and its inverse. For this purpose, we need a representation of \(\mathcal {M}\) in polar coordinates, i.e., \(\mathcal {M}=\{x:x=\rho (\phi )\exp (i\phi ),\phi \in [0,2\pi ]\}.\) This representation can be obtained as follows: Since \(0\in G_{\mathcal {M}}\), for each \(x\in \mathcal {M}\), a relation between its absolute value and its real part is given by \(|x|^{2}=m(Re(x))\) (see (15)), where

$$\begin{aligned} \begin{array}{c} m(\delta ):=\frac{2\widehat{\mu }_{1}(2\widehat{\lambda }_{0}\delta +\widehat{\lambda }_{2})}{\widehat{\lambda }_{0}[\widehat{\lambda }+\widehat{\mu }_{1}+\widehat{\mu }_{2}-2\widehat{\lambda }_{1}\delta -\sqrt{\varDelta _{x}(\delta )}]+2\widehat{\lambda }_{1}(2\widehat{\lambda }_{0}\delta +\widehat{\lambda }_{2})}, \end{array} \end{aligned}$$

and \(\varDelta _{x}(\delta )=(\widehat{\lambda }+\widehat{\mu }_{1}+\widehat{\mu }_{2}-2\widehat{\lambda }_{1}\delta )^{2}-4\widehat{\mu }_{2}(2\widehat{\lambda }_{0}\delta +\widehat{\lambda }_{2})\). Given the angle \(\phi \) of some point on \(\mathcal {M}\), the real part of this point, say \(\delta (\phi )\), is the solution of \(\delta -\cos (\phi )\sqrt{m(\delta )}=0\), \(\phi \in [0,2\pi ]\). Since \(\mathcal {M}\) is a smooth, egg-shaped contour, the solution is unique. Clearly, \(\rho (\phi )=\frac{\delta (\phi )}{\cos (\phi )}\), and the parametrization of \(\mathcal {M}\) in polar coordinates is fully specified. Then, the mapping from \(z\in G_{\mathcal {C}}\) to \(x\in G_{\mathcal {M}}\), where \(z = e^{i\phi }\) and \(x= \rho (\psi (\phi ))e^{i\psi (\phi )}\), satisfying \(f_{0}(0)=0\) and \(f_{0}(\bar{z})=\overline{f_{0}(z)}\), is uniquely determined by (see [5], Sect. 1.4.4),

$$\begin{aligned} \begin{array}{c} f_{0}(z)=z\exp [\frac{1}{2\pi }\int _{0}^{2\pi }\log \{\rho (\psi (\omega ))\}\frac{e^{i\omega }+z}{e^{i\omega }-z}d\omega ],\,|z|<1. \end{array} \end{aligned}$$
(27)
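
The step of recovering \(\delta (\phi )\), and hence \(\rho (\phi )\), can be sketched as below. This is a minimal Newton-Raphson solver with a forward-difference derivative; the function `m` is the \(m(\delta )\) defined above, passed in as a callable, and the starting point `d0` is an assumption:

```python
import math

def delta_of_phi(phi, m, d0=0.5, tol=1e-12, max_iter=100):
    # Newton-Raphson for delta - cos(phi)*sqrt(m(delta)) = 0, using a
    # forward-difference approximation of the derivative.
    g = lambda d: d - math.cos(phi) * math.sqrt(m(d))
    d = d0
    for _ in range(max_iter):
        h = 1e-7
        step = g(d) * h / (g(d + h) - g(d))
        d -= step
        if abs(step) < tol:
            break
    return d

def rho(phi, m):
    # Polar radius of M; near phi = pi/2, 3pi/2 both delta(phi) and cos(phi)
    # vanish, so this 0/0 form needs care in a production implementation.
    return delta_of_phi(phi, m) / math.cos(phi)
```

For the degenerate case \(m\equiv 1\) (the unit circle), \(\delta (\phi )=\cos (\phi )\) and \(\rho (\phi )=1\), which the solver reproduces.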

The angular deformation \(\psi (\cdot )\) is uniquely determined as the solution of the Theodorsen integral equation

$$\begin{aligned} \begin{array}{c} \psi (\phi )=\phi -\frac{1}{2\pi }\int _{0}^{2\pi }\log \{\rho (\psi (\omega ))\}\cot (\frac{\omega -\phi }{2})d\omega ,\,0\le \phi \le 2\pi , \end{array} \end{aligned}$$
(28)

with \(\psi (\phi )=2\pi -\psi (2\pi -\phi )\). Due to the boundary correspondence theorem, \(f_{0}(z)\) is continuous in \(\mathcal {C}\cup G_{\mathcal {C}}\). Note that the nonlinear Eq. (28) cannot be solved in closed form, but a unique solution can be proven to exist and can be obtained numerically. The numerical procedure is discussed below.

Similarly, we can determine \(H^{(0)}(0,y)\) by solving another Riemann-Hilbert boundary value problem on the closed contour \(\mathcal {L}\). Then, using the fundamental functional Eq. (5) we obtain \(H^{(0)}(x,y)\), and substituting back in (3), the generating function \(H^{(1)}(x,y)\) is also uniquely determined.

5 Performance Metrics

In the following we derive formulas for the probability of an empty system and the expected number of packets at each relay node in steady state. Note that since \(0\in G_{\mathcal {M}}\), \(P(Q_{1}=0,Q_{2}=0,C=0)=H^{(0)}(0,0)\). Clearly,

Differentiating (5) and (3) with respect to x and setting \((x,y)=(1,1)\), we obtain respectively after some algebra,

From (26),

$$\begin{aligned} \begin{array}{l} \frac{d}{dx}H^{(0)}(x,0)|_{x=1}=(1-\widehat{\rho }_{2})[\frac{-r}{1-x^{*}}+\frac{1}{2\pi i}\int _{|t|=1}\frac{\log \{J(t)\}f^{\prime }(1)}{(t-f(1))^{2}}dt]. \end{array} \end{aligned}$$
(29)

Then, using the last two equations, we can easily derive

Similarly,

where by differentiating (19) with respect to y and setting \(y=1\), we obtain

6 A Numerical Example

We proceed with a simple numerical example to illustrate the validity of the expressions derived in the previous section. The calculation of \(P(Q_{1}=0,Q_{2}=0,C=0)\) and \(E(Q_{i})\), \(i=1,2,\) requires the evaluation of the integrals in (26) and (29), as well as the numerical determination of the mapping \(f_{0}(z)\) (see [5, 28]). We now outline how these integrals can be computed. First, we rewrite the integrals in (26) and (29) by substituting \(t=e^{i\phi }\):

(30)

Then, we split the interval \([0,2\pi ]\) into K parts of length \(2\pi /K\). For the K points given by their angles \(\left\{ \phi _{0},...,\phi _{K-1}\right\} \), we solve the Theodorsen integral Eq. (28) to obtain iteratively the corresponding points \(\left\{ \psi (\phi _{0}),...,\psi (\phi _{K-1})\right\} \) from:

$$\begin{aligned} \begin{array}{rl} \psi _{0}(\phi _{k})=\phi _{k},&\, \psi _{n+1}(\phi _{k})=\phi _{k}-\frac{1}{2\pi }\int _{0}^{2\pi }\log \left\{ \frac{\delta (\psi _{n}(\omega ))}{\cos (\psi _{n}(\omega ))}\right\} \cot [\frac{1}{2}(\omega -\phi _{k})]d\omega , \end{array} \end{aligned}$$

where \(\delta (\psi _{n}(\omega ))\) is determined from \(\delta (\psi _{n}(\omega ))-\cos (\psi _{n}(\omega ))\sqrt{m(\delta (\psi _{n}(\omega )))}=0\), using the Newton-Raphson method. For the iteration, we use the stopping criterion \(\max _{k\in \left\{ 0,1,...,K-1\right\} }\left| \psi _{n+1}(\phi _{k})-\psi _{n}(\phi _{k})\right| <10^{-6}\). Having obtained the \(\psi (\phi _{k})\) numerically, the values of the conformal mapping \(f_{0}(z)\), \(\left| z\right| \le 1\), are given by

$$\begin{aligned} \begin{array}{c} f_{0}(e^{i\phi _{k}})=e^{i\psi (\phi _{k})}\frac{\delta (\psi (\phi _{k}))}{\cos (\psi (\phi _{k}))}=\delta (\psi (\phi _{k}))[1+i \tan (\psi (\phi _{k}))],\,k=0,1,...,K-1. \end{array} \end{aligned}$$
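The fixed-point iteration with the stopping criterion above can be sketched as follows. This is only a structural sketch: the principal-value integral is discretized with midpoint quadrature nodes to sidestep the cotangent singularity at \(\omega =\phi _{k}\) (a crude stand-in for the Wittich-type schemes used in practice), and `rho` is the polar radius \(\rho (\cdot )\) of \(\mathcal {M}\), assumed given:

```python
import math

def theodorsen(rho, K=64, tol=1e-6, max_iter=200):
    # Fixed-point iteration for the Theodorsen equation (28) with the
    # stopping criterion of the text; K is assumed even so the midpoint
    # quadrature nodes never hit a singularity of cot((w - phi_k)/2).
    phi = [2.0 * math.pi * k / K for k in range(K)]
    psi = phi[:]                                   # psi_0(phi_k) = phi_k
    for _ in range(max_iter):
        logr = [math.log(rho(p)) for p in psi]
        new = []
        for k in range(K):
            s = 0.0
            for j in range(K):
                w = phi[j] + math.pi / K           # midpoint node
                lr = 0.5 * (logr[j] + logr[(j + 1) % K])  # interpolated log rho(psi(w))
                s += lr / math.tan((w - phi[k]) / 2.0)
            new.append(phi[k] - s / K)             # (1/2pi)*(2pi/K)*sum
        err = max(abs(a - b) for a, b in zip(new, psi))
        psi = new
        if err < tol:
            break
    return psi
```

For the unit circle (\(\rho \equiv 1\)) the iteration converges immediately to \(\psi (\phi )=\phi \), as it should.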

It remains to determine f(1) and \(f^{\prime }(1)\). Clearly, \(f(1)=\eta \) means that \(f_{0}(\eta )=1\). Thus, f(1) is the unique solution of \(f_{0}(\eta )=1\) in [0, 1], and can be obtained using (27) and the Newton-Raphson method. Furthermore,

$$\begin{aligned} \begin{array}{l} f^{\prime }(1)=(\frac{d}{dz}f_{0}(z)|_{z=\eta })^{-1}=\{\frac{1}{f(1)}+\frac{1}{2\pi }\int _{0}^{2\pi }\frac{\log \{\rho (\psi (\omega ))\}2e^{i\omega }}{(e^{i\omega }-f(1))^{2}}d\omega \}^{-1}, \end{array} \end{aligned}$$

is numerically determined using the trapezium rule.
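
A minimal trapezium-rule sketch of this last evaluation is given below; since the integrand is \(2\pi \)-periodic, the trapezium rule reduces to a plain average over the grid. The boundary data \(\log \rho (\psi (\omega ))\) is passed in as the callable `log_rho_psi` (our name, not the paper's), and `f1` is the value \(f(1)\in (0,1)\) found above:

```python
import cmath
import math

def f_prime_at_1(f1, log_rho_psi, K=4000):
    # Trapezium rule for f'(1) = {1/f(1)
    #   + (1/2pi) * integral of log rho(psi(w)) * 2 e^{iw}/(e^{iw}-f(1))^2 dw}^{-1}.
    total = 0.0 + 0.0j
    for k in range(K):
        w = 2.0 * math.pi * k / K
        e = cmath.exp(1j * w)
        total += log_rho_psi(w) * 2.0 * e / (e - f1) ** 2
    integral = total / K                   # (1/2pi)*(2pi/K)*sum
    return 1.0 / (1.0 / f1 + integral)
```

As a sanity check, for the unit circle (\(\log \rho \equiv 0\)) the integral vanishes and \(f^{\prime }(1)=f(1)\).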

We now apply the above-described procedure, setting \(\mu _{2}=2\), \(\mu =10\), \(\lambda _{2}=0.2\), \(K=4000\). The left-hand figure in Fig. 2 shows the impact of \(\lambda _{0}\) (i.e., the packet generation rate of the user \(S_{0}\)) on the probability of an empty system for increasing values of \(\lambda _{1}\). As expected, \(P(Q_{1}=0,Q_{2}=0,C=0)\) decreases as \(\lambda _{1}\) increases. However, the values of \(P(Q_{1}=0,Q_{2}=0,C=0)\) deviate significantly for larger \(\lambda _{0}\), especially for small values of \(\lambda _{1}\).

The right-hand figure of Fig. 2 shows how the expected queue lengths at the relay nodes, \(E(Q_{1})\) and \(E(Q_{2})\), vary for increasing values of the retrial rate \(\mu _{1}\). Clearly, \(E(Q_{1})\) decreases as \(\mu _{1}\) increases, while \(E(Q_{2})\) increases, since packets in \(R_{1}\) retry faster than packets in \(R_{2}\). Moreover, the impact of \(\lambda _{0}\) remains significant: as \(\lambda _{0}\) increases, both \(E(Q_{1})\) and \(E(Q_{2})\) increase.

Fig. 2.

\(P(Q_{1}=0,Q_{2}=0,C=0)\) for \(\mu _{1}=2\) (left). \(E(Q_{i})\), \(i=1,2\) for \(\lambda _{1}=0.4\) (right).