1 Introduction

1.1 Overview

1.1.1 Main Goal

In this article, we consider \({{Z=(Z(t), t \geqslant 0)}}\), an obliquely reflected Brownian motion with drift in \(\mathbb {R}_+^2\) starting from x, and denote its transition semigroup by \((P_t)_{t\geqslant 0}\). We focus on the quadrant case because a simple linear transformation makes it easy to extend all the results to an arbitrary wedge, see [31, Appendix A]. This process behaves as a Brownian motion with drift vector \(\mu \) and covariance matrix \(\varSigma \) in the interior of the quadrant and reflects instantaneously in a constant direction \(R_i\), \(i=1,2\), on each edge, see Fig. 1 and Proposition 1 for more details. We are interested in the case where this process is transient, that is, when the parameters make the process tend to infinity almost surely, see Sect. 2.2.

Fig. 1: Reflection vectors and drift

The main goal of this article is to study G, Green’s measure (potential kernel) of Z:

$$\begin{aligned} G(x, A) := \mathbb {E}_{x} \left[ \int _{0}^{\infty } \mathbf {1}_A ({{Z(t)}}) \, \mathrm {d} t \right] = \int _0^{\infty } P_t(x,A)\, \mathrm {d} t \end{aligned}$$

which represents the mean time spent by the process in a measurable set A of the quadrant. Let us remark that if A is bounded and Z is transient, then G(x, A) is finite. The density of the measure G with respect to the Lebesgue measure is called Green’s function and is equal to

$$\begin{aligned} g(x,\cdot ) :=\int _0^{\infty } p_t(x,\cdot )\, \mathrm {d} t, \end{aligned}$$

if we assume that \(p_t\) is a transition density for Z(t). The kernel G defines a potential operator

$$\begin{aligned} G f(x): =\mathbb {E}_{x} \left[ \int _0^{\infty } f({{Z(t)}}) \, \mathrm {d} t \right] = \int _{\mathbb {R}_+^2} f(y) \ g(x,y) \, \mathrm {d} y, \end{aligned}$$

for every positive measurable function f. We define \(H_1\) and \(H_2\), the boundary Green’s measures on the edges, by setting for \(i=1,2\),

$$\begin{aligned} H_i(x,A) := \mathbb {E}_{x} \left[ \int _0^\infty {\mathbf {1}}_{A} ({{Z(t)}})\, \mathrm {d}{{L_i(t)}} \right] \end{aligned}$$

where we integrate with respect to \({{L_i(t)}}\), the local time of the process on the edge \(z_i=0\). The support of \(H_1\) lies on the vertical axis and the support of \(H_2\) lies on the horizontal axis, so that \(H_i (x,A)\) represents the mean local time spent on the corresponding edge. When it exists, the density of the measure \(H_i\) with respect to the Lebesgue measure is denoted \(h_i\) and the boundary potential kernel is given by

$$\begin{aligned} H_i f (x):= \mathbb {E}_{x} \left[ \int _0^\infty f({{Z(t)}}) \, \mathrm {d}{{L_i(t)}} \right] = \int _{\mathbb {R}_+^2} f(y) h_i(x,y) \, \mathrm {d}y. \end{aligned}$$

In this article, we determine explicit formulas for \(\psi ^x\) and \(\psi ^x_i \), the Laplace transforms of g and \(h_i\), usually called moment generating functions, defined by

$$\begin{aligned} \psi ^x (\theta ) :=\mathbb {E}_{x} \left[ \int _0^{\infty } e^{\theta \cdot {{Z(t)}}} \, \mathrm {d} t \right] \text { and } \psi _i^x (\theta ):= \mathbb {E}_{x} \left[ \int _0^{\infty } e^{ \theta \cdot {{Z(t)}} } \, \mathrm {d} {{L_i(t)}} \right] \end{aligned}$$
(1)

where \(\theta =(\theta _1,\theta _2)\in \mathbb {C}^2\). From now on, we will often omit the superscript x. Furthermore, we notice that the functions \(\psi _i\) depend on only one variable; we will therefore write \(\psi (\theta )\), \(\psi _1(\theta _2)\) and \(\psi _2(\theta _1)\).

1.1.2 Context

Obliquely reflected Brownian motion in the quadrant and in orthants of any dimension was introduced and extensively studied in the eighties by Harrison, Reiman, Varadhan and Williams [33, 34, 55,56,57]. The initial motivation for studying this kind of process was that it serves as an approximation of large queueing networks, see [3, 26, 28, 36, 51]. Recurrence and transience in two dimensions, an important aspect for us, were studied in [15, 38, 57]. In higher dimensions the problem is more complex, see for example [8, 9, 12, 16]. The intertwining relations of obliquely reflected Brownian motion have been studied in [21, 40], its Lyapunov functions in [22], its cone points in [44], and its existence in non-smooth planar domains together with its links with complex and harmonic analysis in [11]. Some articles link SRBM in the orthant to financial models, as in [4, 39]; such processes and these financial models are also related to competing Brownian particle systems, as in [10, 53]. Finally, some related stochastic processes have also been studied, such as two-dimensional oblique Bessel processes in [45] and two-dimensional obliquely sticky Brownian motion in [14].

1.1.3 Green’s Functions and Invariant Measures

Green’s functions and invariant measures are two similar concepts, the first dealing with the transient case and the second with the recurrent case. Indeed, in the transient case the process spends a finite time in a bounded set, while in the recurrent case it spends an infinite time in it. Thus, Green’s measure may be interpreted as the average time spent in some set, while ergodic theorems say that the invariant measure gives the average proportion of time spent in some set.

In the discrete setting, Green’s functions of random walks in the quadrant have been studied in several articles, as in the reflected case in [43] or the absorbed case in [41]. To our knowledge, in the continuous setting, Green’s functions of reflected Brownian motion in cones have not been studied yet (except in dimension one, see [13]).

On the other hand, the invariant measure of this kind of process has been studied in depth in the literature: the asymptotics of the stationary distribution is the subject of many articles such as [18, 19, 29, 35, 54], numerical methods to compute the stationary distribution have been developed in [15, 17], and explicit expressions for the stationary distribution are found in some particular cases in [3, 6, 20, 26, 28, 30, 37] and in the general case in [31].

1.1.4 Oblique Neumann Boundary Problem

Green’s functions and invariant measures of Markov processes are central in potential theory and in ergodic theorems for additive functionals. In particular they give a probabilistic interpretation to the solutions of some partial differential equations. “Appendix A” illustrates this. Our case is especially complicated because we consider a non-smooth unbounded domain, and reflection at the boundary is oblique.

Consider Z, an obliquely reflected Brownian motion with drift vector \(\mu \), covariance matrix \(\varSigma \), and reflection matrix R. Its first and second columns \(R^1\) and \(R^2\) are the reflection vectors at the faces \(\{(0,z) | z\geqslant 0 \}\) and \(\{(z,0) | z\geqslant 0 \}\). Its generator \(\mathcal {L}\) inside the quarter plane and its dual generator \(\mathcal {L}^*\) are equal to

$$\begin{aligned} \mathcal {L}f=\frac{1}{2} \nabla \cdot \varSigma \nabla f + \mu \cdot \nabla f \quad \text {and} \quad \mathcal {L}^* f=\frac{1}{2} \nabla \cdot \varSigma \nabla f - \mu \cdot \nabla f. \end{aligned}$$
(2)

Harrison and Reiman [33, (8.2) and (8.3)] derive (informally) the backward and the forward equations (with boundary and initial conditions) for \(p_t(x,y)\), the transition density of the process. The forward equation (or Fokker–Planck equation) may be written as

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathcal {L}^*_y p_t(x,y) = \partial _t p_t(x,y),\\ \partial _{R_i^*} p_t(x,y ) - 2\mu _i p_t(x,y ) = 0 \text { if } y_i =0,\\ p_0(x,\cdot )=\delta _x, \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} R^* =2\varSigma -R \ \text {diag}(R)^{-1} \text {diag}( \varSigma ){,} \end{aligned}$$

\(R^*_i\) is its ith column and \(\partial _{R_i^*} = R_i^* \cdot \nabla _y\) is the derivative along \(R^*_i\) on the boundary. (In [33] the notation is different: row vectors instead of column vectors.) Letting t go to infinity in the forward equation, Harrison and Reiman conclude that, in the positive recurrent case, the density \(\pi \) of the stationary distribution satisfies the following steady-state equation [33, (8.5)]

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathcal {L}^* \pi =0,\\ \partial _{R_i^*} \pi - 2\mu _i \pi = 0 \text { if } y_i =0. \end{array}\right. } \end{aligned}$$

In the transient case, integrating the forward equation in time from 0 to infinity suggests that Green’s function g satisfies the following partial differential equation with Robin boundary condition (a condition specifying a linear combination of the function and of its derivative on the boundary)

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathcal {L}^*_y g(x,\cdot ) = - \,\delta _x,\\ \partial _{R_i^*} g(x,\cdot ) - 2\mu _i g(x,\cdot ) = 0 \text { if } y_i =0. \end{array}\right. } \end{aligned}$$
(3)

A similar equation holds in dimension one, see (32). The Green’s function g of the obliquely reflected Brownian motion in the quadrant is then a fundamental solution of the dual operator \(\mathcal {L}^*\). Together with the boundary Green’s functions \(h_i\), it should allow one to solve the following oblique Neumann boundary problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathcal {L} u = -\, f &{} \quad \text {in } \mathbb {R}_+^2,\\ \partial _{R_i} u = \varphi _i &{} \quad \text {if } y_i =0, \end{array}\right. } \end{aligned}$$

where \(\partial _{R_i} = R_i \cdot \nabla _y\) is the derivative along \(R_i\). If a solution u exists, it should satisfy

$$\begin{aligned} u =Gf +H_1\varphi _1 +H_2\varphi _2. \end{aligned}$$

See “Appendix A” for more details on this heuristic.
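For concreteness, the matrix \(R^*\) appearing in the boundary operator above is straightforward to compute from \((\varSigma ,R)\). The following minimal numpy sketch uses illustrative parameter values of our own choosing (they are not taken from the paper).

```python
import numpy as np

# Illustrative parameters (ours, not from the paper): covariance and reflection matrices.
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
R = np.array([[1.0, 0.2],
              [0.4, 1.0]])   # unit diagonal; columns are the reflection vectors R^1, R^2

# R* = 2*Sigma - R diag(R)^{-1} diag(Sigma); here diag(R) = I since R has unit diagonal.
R_star = 2 * Sigma - R @ np.linalg.inv(np.diag(np.diag(R))) @ np.diag(np.diag(Sigma))
print(R_star)  # its i-th column R*_i gives the direction of the boundary derivative on {y_i = 0}
```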

1.2 Main Results and Strategy

1.2.1 Functional Equation

To find \(\psi \) and \(\psi _i\), the moment generating functions of Green’s functions, we will establish in Proposition 5 a new kernel functional equation connecting what happens inside the quadrant and on its boundaries, namely

$$\begin{aligned} - \gamma (\theta ) \psi (\theta ) = \gamma _1 (\theta ) \psi _1 (\theta _2) + \gamma _2 (\theta ) \psi _2 (\theta _1) + e^{\theta \cdot x} \end{aligned}$$
(4)

where x is the starting point, and the kernel \(\gamma \) and the polynomials \(\gamma _i\) are given in Eq. (9). To our knowledge this formula has not yet appeared in the literature. Such an equation is reminiscent of the balance equation satisfied by the moment generating function of the invariant measure in the recurrent case, which derives from the basic adjoint relationship, see [18, (2.3) and (4.1)] and [31, (5)]. The additional term \(e^{\theta \cdot x}\), which depends on the starting point, makes this equation differ from the one of the recurrent case. It is also reminiscent of the kernel equations obtained in the discrete setting in order to study random walks and count walks in the quadrant [25, 42].

1.2.2 Analytic Approach

In the seventies, Malyshev [24, 47] introduced an analytic approach to solve such functional equations. This method is presented in the famous book of Fayolle et al. [25]. Since then, it has been used extensively in the discrete setting to solve many problems, such as counting walks, studying Martin boundaries, and determining invariant measures or Green’s functions, see [5, 7, 41,42,43]. This approach has also been used in the continuous setting to study stationary distributions in a few articles, as in [3, 26, 28, 31]. However, to our knowledge this is the first time that this method is used to find Green’s functions in the continuous case. To obtain an explicit expression of the Laplace transforms using this analytic approach, we will go through the following steps:

  1. (i)

    Find a functional equation, see Sect. 3.1;

  2. (ii)

    Study the kernel (and its related Riemann surface) and extend meromorphically the Laplace transforms, see Sects. 3.2 and 3.3;

  3. (iii)

    Deduce from the functional equation a boundary value problem (BVP), see Sect. 4.2;

  4. (iv)

    Find some conformal glueing function and solve the BVP, see Sects. 4.3 and 4.5.

For some analytic steps, our strategy of proof is similar to the one used in [31] to determine the stationary distribution. In some places, the technical details are identical to those of [3, 31], especially those related to the kernel. However, since we are in the transient case, the probabilistic study differs from [31] and leads to a different functional equation and to a more complicated boundary value problem whose analytic resolution is more difficult.

1.2.3 Boundary Value Problem

In Lemma 8 we establish a Carleman boundary value problem satisfied by the Laplace transform \(\psi _1\). For functions G and g defined in (19) and (20) and a hyperbola \(\mathcal {R}\) defined in (17), which depend on the parameters \((\mu ,\varSigma ,R)\), we obtain the boundary condition (21):

$$\begin{aligned} \psi _1(\overline{\theta _2})=G(\theta _2)\psi _1({\theta _2}) + g(\theta _2), \qquad \forall \theta _2\in \mathcal {R}. \end{aligned}$$

This equation is particularly complicated: the BVP is doubly non-homogeneous, due to the function G but also to the function g, which comes from the term \(e^{\theta \cdot x}\) in the functional Eq. (4). The function g makes the BVP differ from the one obtained in the recurrent case for the Laplace transform of the stationary distribution [31, (22)].

1.2.4 Explicit Expression

The resolution of such a BVP is technical and uses the general theory of BVPs. In order to make the paper self-contained, “Appendix B” briefly presents this theory. The solutions can be expressed in terms of Cauchy integrals and of a conformal mapping w defined in (23). Our main result is an integral formula for the Laplace transform \(\psi _1\), precisely stated in Theorem 11. Let us now give the shape of the solution. We have

$$\begin{aligned} \psi _1(\theta _2)= \frac{-Y (w(\theta _2))}{2i\pi } \int _{\mathcal {R}^-} \frac{g(t)}{Y^+ (w(t))} \left( \frac{w'(t) }{w(t)-w(\theta _2)} + \chi \frac{ w'(t) }{w(t)} \right) \, \mathrm {d}t \end{aligned}$$

where

$$\begin{aligned} Y (w(\theta _2))=w(\theta _2)^{\chi } \exp \left( \frac{1}{2i\pi } \int _{\mathcal {R}^-} \log (G(s))\left( \frac{ w'(s)}{w(s)-w(\theta _2)}- \frac{ w'(s)}{w(s)} \right) \, \mathrm {d}s \right) , \end{aligned}$$

\(\chi =0\) or 1 and \(Y^+\) is the limit of Y on \(\mathcal {R}\). This formula is analogous to, but more complicated than, the one obtained in [31, (14)]. In the same way there is a similar formula for \(\psi _2\), and then the functional Eq. (4) gives an explicit formula for the Laplace transform \(\psi \). Green’s functions are obtained by taking the inverse Laplace transforms.

1.3 Perspectives

By developing the analytic approach further, it would certainly be possible to study Green’s functions and obliquely reflected Brownian motion in wedges in more depth. Here are some research perspectives:

  • Study the algebraic nature of Green’s function: as in the discrete models, this would require introducing the group related to the process and analyzing further the structure of the BVP by studying the existence of multiplicative and additive decoupling functions, see Sect. 4.6 and [5,6,7, 47];

  • Determine the asymptotics of Green’s function, the Martin boundary and the corresponding harmonic functions: to do this we should study the singularities and invert the Laplace transforms in order to use transfer lemmas and the saddle point method on the Riemann surface, see [23, 29, 41, 43, 49, 50];

  • Give an explicit expression for the transition function: to do that, we could try to find a functional equation satisfied by the resolvent of the process, which would contain one more variable, and seek to solve it.

We leave these questions for future work. Furthermore, despite some attempts, extending the analytic approach to higher dimensions remains an open question.

1.4 Structure of the Paper

  • Section 2 presents the process we are studying and focuses on the transience conditions.

  • Section 3 establishes the new functional equation which is the starting point of our analytic study. The kernel is studied and the Laplace transform is continued on some domain.

  • Section 4 states and solves the boundary value problem satisfied by the Laplace transform \(\psi _1\). The main result, which is the explicit expression of \(\psi _1\), is stated in Theorem 11.

  • “Appendix A” briefly presents the potential theory which links Green’s functions and partial differential equations.

  • “Appendix B” presents the general theory of boundary value problems which is used in Sect. 4.

  • “Appendix C” studies Green’s functions of reflected Brownian motion in dimension one.

  • “Appendix D” explains how to generalize the results to the case of a non-positive drift.

2 Transient SRBM in the Quadrant

2.1 Definition

Let

$$\begin{aligned} \varSigma = \left( \begin{array}{c@{\quad }c} \sigma _{11} &{} \sigma _{12} \\ \sigma _{12} &{} \sigma _{22} \end{array} \right) \in \mathbb {R}^{2 \times 2}, \quad \mu = \left( \begin{array}{c} \mu _1 \\ \mu _2 \end{array} \right) \in \mathbb {R}^2, \quad R=(R_1,R_2)= \left( \begin{array}{c@{\quad }c} 1 &{} r_{12} \\ r_{21} &{} 1 \end{array} \right) \in \mathbb {R}^{2 \times 2} \end{aligned}$$

respectively be a positive-definite covariance matrix, a drift and a reflection matrix. The columns of R are the reflection vectors, \(R_1 \) giving the reflection direction on the y-axis and \(R_2 \) on the x-axis, see Fig. 1. We define the obliquely reflected Brownian motion in the quadrant in the case where the process is a semimartingale, see Williams [56]. Such a process is also called semimartingale reflected Brownian motion (SRBM).

Proposition 1

(Existence and uniqueness) Let us define \({{Z=(Z(t),t\geqslant 0)}}\), an SRBM with drift in the quarter plane \(\mathbb {R}_+^2\) associated with \((\varSigma , \mu , R)\), as the semimartingale such that for \(t\in \mathbb {R}_+\) we have

$$\begin{aligned} {{Z(t)}}=x + W{(t)} + \mu t + R L{(t)} \ \in \mathbb {R}_+^2, \end{aligned}$$

where x is the starting point, W is a planar Brownian motion starting from 0 with covariance \(\varSigma \), and for \(i=1,2\) the coordinate \({{L_i(t)}}\) of L(t) is a continuous non-decreasing process which increases only when \({Z_i}=0\), that is, when the process reaches the face i of the boundary (\(\int _{\{t : {{Z_i(t)}} > 0 \}} \mathrm {d} {{L_i(t)}}=0\)). The process Z exists in a weak sense if and only if at least one of the following three conditions holds:

$$\begin{aligned} {r_{12}>0,\quad \text {or} \quad r_{21}>0, \quad \text {or} \quad r_{12}r_{21}<1.} \end{aligned}$$
(5)

In this case the process is unique in law and defines a Feller continuous strong Markov process.

The process L(t) represents the local time on the boundary; more specifically, its first coordinate \({L_1(t)}\) is the local time on the vertical axis and its second coordinate \({L_2(t)}\) the local time on the horizontal axis. The proof of existence and uniqueness can be found in the survey of Williams [58, Theorem 2.3] for orthants, in general dimension \(d\geqslant 2\). These conditions mean that the reflection vectors must not point too strongly toward the corner for the process to exist; otherwise the process would be trapped in the corner, see Fig. 2. The limit condition \(r_{12}r_{21}=1\) is satisfied when the two reflection vectors are collinear and of opposite directions.
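Purely as an illustration of the semimartingale decomposition above (and not as part of the paper's arguments), one can discretize \(Z(t)=x + W(t) + \mu t + R L(t)\) with a crude Euler scheme that pushes the path back into the quadrant along the reflection vectors. The parameters, step size and push-back rule below are our own choices; the sketch assumes \(r_{12}, r_{21}\geqslant 0\) so that the corrective pushes terminate, and it also records a rough estimate of the time spent in the bounded set \([0,1]^2\), in the spirit of the Green's measure of Sect. 1.1.1 (truncated at a finite horizon).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (ours); they satisfy the existence condition (5) and assumption (8).
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
mu = np.array([1.0, 1.5])
r12, r21 = 0.2, 0.4
R = np.array([[1.0, r12], [r21, 1.0]])   # columns R_1, R_2
x = np.array([0.5, 0.5])                 # starting point
C = np.linalg.cholesky(Sigma)            # to generate Gaussian increments of covariance Sigma

def simulate(T=50.0, dt=1e-3):
    """Crude Euler scheme with oblique push-back (assumes r12, r21 >= 0 so the pushes stop).
    Returns the discretized path Z and the accumulated push amounts L = (L_1, L_2)."""
    n = int(T / dt)
    Z = np.empty((n + 1, 2)); Z[0] = x
    L = np.zeros(2)
    for k in range(n):
        z = Z[k] + mu * dt + C @ rng.normal(size=2) * np.sqrt(dt)
        while z[0] < 0 or z[1] < 0:      # push back into the quadrant along R_1, then R_2
            if z[0] < 0:
                L[0] += -z[0]; z = z + (-z[0]) * R[:, 0]
            if z[1] < 0:
                L[1] += -z[1]; z = z + (-z[1]) * R[:, 1]
        Z[k + 1] = z
    return Z, L

Z, L = simulate()
in_A = np.all((Z >= 0) & (Z <= 1), axis=1)      # indicator of the set A = [0,1]^2
print("time spent in [0,1]^2 up to T ~", in_A.sum() * 1e-3, "  push amounts ~", L)
```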

Fig. 2: Existence conditions

2.2 Recurrence and Transience

Markov processes have, roughly speaking, two possible behaviors, as explained in the book of Revuz and Yor [52, p. 424]. Either they converge to infinity, which is the transient case, or they come back at arbitrarily large times to some small sets, which is the recurrent case. We present very briefly some results of the corresponding theory; for more details (in particular on topological issues) one can read the articles of Azéma et al. [1, 2].

Let X(t) be a Feller continuous strong Markov process on a state space E which is locally compact with countable base. We say that the point x leads to y if for every neighborhood V of y we have \(\mathbb {P}_x ( \tau _V <\infty )>0\), where \(\tau _V=\inf \{t > 0 : {{X(t)}} \in V \}\). The points x and y communicate if x leads to y and y leads to x; this is an equivalence relation. For \(x\in E\) we say that

  • x is recurrent if \(\mathbb {P}_x \left( \overline{\mathop {\lim }\nolimits _{t\rightarrow \infty }} \mathbf {1}_U ({{X(t)}})=1 \right) =1\) for all neighborhoods U of x,

  • x is transient if \(\mathbb {P}_x \left( \overline{\mathop {\lim }\nolimits _{t\rightarrow \infty }} \mathbf {1}_U ({{X(t)}})=1 \right) =0\) for all relatively compact neighborhoods U of x.

Each point is either recurrent or transient, and if two states communicate, they are either both recurrent or both transient, see [1, Theorem III 1]. The process is called recurrent or transient if each point is recurrent or transient, respectively. The next proposition may be found in [1, Proposition III 1].

Proposition 2

(Transience properties) The following properties are equivalent

  1. 1.

    every point is transient;

  2. 2.

    X(t) tends to infinity when \(t\rightarrow \infty \) a.s.;

  3. 3.

    for every compact subset K of E and every starting point x, Green’s measure of K is finite:

    $$\begin{aligned} G(x,K) = \mathbb {E}_{x} \left[ \int _{0}^{\infty } \mathbf {1}_K ({{X(t)}}) \, \mathrm {d} t \right] < \infty . \end{aligned}$$

The main articles studying the recurrence and the transience of SRBM in wedges are [57] for zero drift, [38] for nonzero drift, and the survey [58]. The process has only one equivalence class, equal to the whole quadrant, see for example [57, (4.1)]. The process is recurrent if for each set V (of positive Lebesgue measure) and every starting point x we have \(\mathbb {P}_x ( \tau _V < \infty )=1\); otherwise it is transient. It is called positive recurrent, and admits a stationary distribution, if \(\mathbb {E}_x [\tau _V] < \infty \), and null recurrent if \(\mathbb {E}_x [\tau _V] = \infty \) for all x and V.

Proposition 3

(Transience and recurrence) Assume that the existence condition (5) is satisfied and denote by \(\mu _1^-\) and \(\mu _2^-\) the negative parts of the drift coordinates. The process Z is transient if and only if

$$\begin{aligned} \mu _1 + r_{12} \mu _2^-> 0 \text { or } \mu _2 + r_{21} \mu _1^- > 0 , \end{aligned}$$
(6)

and recurrent if and only if

$$\begin{aligned} \mu _1 + r_{12} \mu _2^- \leqslant 0 \text { and } \mu _2 + r_{21} \mu _1^- \leqslant 0. \end{aligned}$$
(7)

In the latter case the process is positive recurrent and admits a unique stationary distribution if and only if \( \mu _1 + r_{12} \mu _2^-< 0 \text { and } \mu _2 + r_{21} \mu _1^- < 0, \) and is null recurrent if and only if \( \mu _1 + r_{12} \mu _2^- = 0 \text { or } \mu _2 + r_{21} \mu _1^- = 0. \)

This result may be found in [38, 57, 58]. In order to restrict the number of cases to handle, we will now assume that the drift has positive coordinates, that is

$$\begin{aligned} \mu _1>0 \text { and } \mu _2 >0. \end{aligned}$$
(8)

In this case the process is obviously transient and converges to infinity. In the other transient cases the process tends to infinity along one of the axes; see for example [27], which computes the probability of escaping along each axis when \(\mu _1<0\) and \(\mu _2<0\). These cases could be treated in the same way, with additional technical issues. See “Appendix D”, which details the main differences of the study and generalizes the results to the case of a non-positive drift.

Assumption (8) is the counterpart of the rather standard hypothesis made in the recurrent case (as in [20, 26, 28, 29, 31]), which takes \(\mu _1 <0\) and \(\mu _2 <0\).
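For reference, the trichotomy of Proposition 3 is easy to test numerically. The small helper below (ours; the sample parameters are arbitrary) simply restates conditions (6) and (7).

```python
def classify(mu1, mu2, r12, r21):
    """Classify the SRBM following Proposition 3 (the existence condition (5) is assumed)."""
    m1, m2 = max(-mu1, 0.0), max(-mu2, 0.0)   # negative parts mu_1^-, mu_2^-
    a, b = mu1 + r12 * m2, mu2 + r21 * m1
    if a > 0 or b > 0:
        return "transient"                    # condition (6)
    if a < 0 and b < 0:
        return "positive recurrent"
    return "null recurrent"                   # condition (7) with equality in one coordinate

print(classify(1.0, 1.5, 0.2, 0.4))    # positive drift as in (8): transient
print(classify(-1.0, -1.5, 0.2, 0.4))  # both quantities negative: positive recurrent
```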

Fig. 3: Recurrence and transience conditions according to the parameters

3 A New Functional Equation

3.1 Functional Equation

We determine a kernel functional equation which is the starting point of our analytic study. This key formula connects the Laplace transforms of Green’s function inside the quarter plane and on its boundaries. Let us define the kernel \(\gamma \) and the polynomials \(\gamma _1\) and \(\gamma _2\) in the two variables \(\theta =(\theta _1,\theta _2)\) by

$$\begin{aligned} {\left\{ \begin{array}{ll} \gamma (\theta )=\frac{1}{2} \theta \cdot \varSigma \theta + \theta \cdot \mu = \frac{1}{2}(\sigma _{11}\theta _1^2 + 2\sigma _{12}\theta _1\theta _2 + \sigma _{22} \theta _2^2) + \mu _1\theta _1+\mu _2\theta _2, \\ \gamma _1 (\theta )= R^1 \cdot \theta =\theta _1 + r_{21} \theta _2,\\ \gamma _2 (\theta )= R^2 \cdot \theta =r_{12} \theta _1 + \theta _2, \end{array}\right. } \end{aligned}$$
(9)

where \(\cdot \) is the scalar product. The equations \(\gamma =0\), \(\gamma _1=0\) and \(\gamma _2=0\), respectively, define in \(\mathbb {R}^2\) an ellipse and two straight lines. Let \(\theta ^*\) (resp. \(\theta ^{**}\)) be the point of \(\mathbb {R}^2 {\setminus } \{(0,0)\}\) such that \(\gamma (\theta ^*)=0\) and \(\gamma _1(\theta ^*)=0\) (resp. \(\gamma (\theta ^{**})=0\) and \(\gamma _2(\theta ^{**})=0\)). The point \(\theta ^*\) (resp. \(\theta ^{**}\)) is the intersection point between the ellipse \(\gamma =0\) and the straight line \(\gamma _1=0\) (resp. \(\gamma _2=0\)), see Fig. 4.
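To fix ideas, the intersection points \(\theta ^*\) and \(\theta ^{**}\) can be computed by substituting the line equations \(\gamma _1=0\) and \(\gamma _2=0\) into \(\gamma \) and keeping the nonzero root of the resulting quadratic. The short Python sketch below does this for illustrative parameter values of our own choosing.

```python
# Illustrative parameters (ours): positive definite Sigma, positive drift, reflection entries r12, r21.
s11, s12, s22 = 1.0, 0.3, 1.0
mu1, mu2 = 1.0, 1.5
r12, r21 = 0.2, 0.4

gamma  = lambda t1, t2: 0.5*(s11*t1**2 + 2*s12*t1*t2 + s22*t2**2) + mu1*t1 + mu2*t2
gamma1 = lambda t1, t2: t1 + r21*t2
gamma2 = lambda t1, t2: r12*t1 + t2

# theta*: on the line gamma1 = 0 we have theta1 = -r21*theta2; substituting into gamma
# gives a*theta2^2 + b*theta2 = 0, whose nonzero root is -b/a.
a, b = 0.5*(s11*r21**2 - 2*s12*r21 + s22), mu2 - mu1*r21
theta_star = (-r21*(-b/a), -b/a)

# theta**: on the line gamma2 = 0 we have theta2 = -r12*theta1.
a, b = 0.5*(s11 - 2*s12*r12 + s22*r12**2), mu1 - mu2*r12
theta_sstar = (-b/a, -r12*(-b/a))

print(theta_star, gamma(*theta_star), gamma1(*theta_star))      # gamma and gamma1 vanish (up to rounding)
print(theta_sstar, gamma(*theta_sstar), gamma2(*theta_sstar))   # gamma and gamma2 vanish
```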

Remark 4

Notice that the drift \(\mu \) is an outer normal vector to the ellipse at (0, 0). Then the ellipse \(\{\theta \in \mathbb {R}^2 : \gamma (\theta )=0 \}\), except for the origin, is included in \(\{\theta \in \mathbb {C}^2 : \mathfrak {R}\theta \cdot \mu <0 \} \).

Fig. 4: Ellipse \(\gamma =0\), straight lines \(\gamma _1=0\) and \(\gamma _2=0\) and intersection points \(\theta ^*\) and \(\theta ^{**}\)

Proposition 5

(Functional equation) Assume that \(\mu _1>0\) and \(\mu _2>0\). Denoting by x the starting point of the transient process Z, the following formula holds

$$\begin{aligned} - \gamma (\theta ) \psi (\theta ) = \gamma _1 (\theta ) \psi _1 (\theta _2) + \gamma _2 (\theta ) \psi _2 (\theta _1) + e^{\theta \cdot x} \end{aligned}$$
(10)

for all \(\theta =(\theta _{1},\theta _2 )\in \mathbb {C}^2\) such that \(\mathfrak {R}\theta \cdot \mu <0 \) and such that the integrals \(\psi (\theta )\), \(\psi _1(\theta _2)\) and \(\psi _2(\theta _1)\) are finite. Furthermore:

  • \(\psi _1(\theta _2) \) is finite on \(\{\theta _2\in \mathbb {C} : \mathfrak {R}\theta _2\leqslant \theta _2^{**} \} \),

  • \(\psi _2(\theta _1)\) is finite on \(\{\theta _1\in \mathbb {C} : \mathfrak {R}\theta _1 \leqslant \theta _1^* \} \),

  • \(\psi (\theta )\) is finite on \(\{\theta \in \mathbb {C}^2 : \mathfrak {R}\theta _1< \theta _1^*\wedge 0 \text { and } \mathfrak {R}\theta _2<\theta _2^{**}\wedge 0\} \subset \{\theta \in \mathbb {C}^2 :\mathfrak {R}\theta \cdot \mu <0 \}\).

Proof

The proof of this functional equation is a consequence of Itô’s formula. For \(f\in \mathcal {C}^2 (\mathbb {R}_+^2)\) we have

$$\begin{aligned} f({{Z(t)}})-f({{Z(0)}})&=\int _0^t \nabla f({{Z(s)}}){\cdot }\mathrm {d} W(s) + \int _0^t \mathcal {L}f({{Z(s)}}) \, \mathrm {d} s \\&\quad + \sum _{i=1}^2 \int _0^t R_i \cdot \nabla f({{Z(s)}}) \, \mathrm {d} {{L_i(s)}}, \end{aligned}$$

where \(\mathcal {L}\) is the generator defined in (2). Choosing \(f(z)=e^{\theta \cdot z}\) for \(z\in \mathbb {R}_+^2\) and taking the expectation of the last equality, we obtain:

$$\begin{aligned} \mathbb {E}_{x} [ e^{\theta \cdot {{Z(t)}}} ] - e^{\theta \cdot x} = 0 + \gamma (\theta ) \mathbb {E}_{x} \left[ \int _0^t e^{\theta \cdot {{Z(s)}}} \, \mathrm {d} s \right] + \sum _{i=1}^2 \gamma _i (\theta ) \mathbb {E}_{x} \left[ \int _0^t e^{\theta \cdot {{Z(s)}}} \, \mathrm {d} {{L_i(s)}} \right] . \end{aligned}$$
(11)

Indeed, \(\int _0^t \nabla f({{Z(s)}}){\cdot }\mathrm {d} W(s) \) is a martingale and hence its expectation is zero. Now, let t tend to infinity. Due to (8) we have \(\theta \cdot {{Z(t)}} / t \underset{t\rightarrow \infty }{\longrightarrow }\theta \cdot \mu \). Choosing \(\theta \) such that \(\mathfrak {R}\theta \cdot \mu <0\) then implies that \(\mathfrak {R}\theta \cdot {{Z(t)}} \rightarrow -\infty \). We deduce that \(\mathbb {E}_{x} [e^{\theta \cdot {{Z(t)}}} ] \underset{t\rightarrow \infty }{\longrightarrow } 0 \). The expectations in the following formula being finite by hypothesis, we obtain

$$\begin{aligned} 0 - e^{\theta \cdot x} = \gamma (\theta ) \mathbb {E}_{x} \left[ \int _0^{\infty }e^{\theta \cdot {{Z(s)}}} \, \mathrm {d} s \right] + \sum _{i=1}^2 \gamma _i (\theta ) \mathbb {E}_{x} \left[ \int _0^{\infty }e^{\theta \cdot {{Z(s)}}} \, \mathrm {d} {{L_i(s)}} \right] \end{aligned}$$

which is the desired Eq. (10).

Let us now take \(\theta =\theta ^*\) in equality (11); we obtain

$$\begin{aligned} \mathbb {E}_{x} [ e^{\theta ^* \cdot {{Z(t)}}} ] - e^{\theta ^* \cdot x} = \gamma _2 (\theta ^*) \mathbb {E}_{x} \left[ \int _0^t e^{\theta ^*\cdot {{Z(s)}}} \, \mathrm {d} {{L_2(s)}} \right] . \end{aligned}$$

Let t tend to infinity. Thanks to Remark 4 we have \(\theta ^* \cdot {{Z(t)}}\rightarrow -\infty \) and we obtain

$$\begin{aligned} \psi _2(\theta _1^*)= \mathbb {E}_{x} \left[ \int _0^\infty e^{\theta ^*\cdot {{Z(s)}}} \, \mathrm {d} {{L_2(s)}} \right] =\frac{- e^{\theta ^* \cdot x}}{ \gamma _2 (\theta ^*)} <\infty . \end{aligned}$$

It implies that \(\psi _2(\theta _1)\) is finite for all \(\mathfrak {R}\theta _1 \leqslant \theta _1^*\). In the same way we obtain that \(\psi _1(\theta _2)\) is finite for all \(\mathfrak {R}\theta _2 \leqslant \theta _2^{**}\).

Now assume that \(\theta \) satisfies \(\mathfrak {R}\theta _1< \theta _1^* \wedge 0\) and \(\mathfrak {R}\theta _2< \theta _2^{**}\wedge 0\), and let us deduce that the Laplace transform \(\psi (\theta _1,\theta _2)\) is finite. Thanks to (8) we have \( \mathfrak {R}\theta \cdot \mu <0\) and hence \(\mathbb {E}_{x} [e^{\theta \cdot {{Z(t)}}} ] \underset{t\rightarrow \infty }{\longrightarrow } 0 \). Let us consider two cases:

  • if \(\gamma (\theta _1^*\wedge 0, \theta _2^{**}\wedge 0) \ne 0\), taking \(\theta =(\theta _1^*\wedge 0, \theta _2^{**}\wedge 0)\) and letting t tend to infinity in (11), we obtain that \(\psi (\theta _1^*\wedge 0, \theta _2^{**}\wedge 0)\) is finite. Then, \(\psi (\theta _1,\theta _2)\) is finite for all \((\theta _1,\theta _2)\) such that \(\mathfrak {R}\theta _1 \leqslant \theta _1^*\wedge 0\) and \(\mathfrak {R}\theta _2\leqslant \theta _2^{**}\wedge 0\).

  • if \(\gamma (\theta _1^*\wedge 0, \theta _2^{**}\wedge 0) = 0\), it is possible to find \(\varepsilon >0\) as small as we want such that \(\gamma (\theta _1^*\wedge 0-\varepsilon , \theta _2^{**}\wedge 0-\varepsilon ) \ne 0\). In the same way as in the previous case, we deduce that \(\psi (\theta _1,\theta _2)\) is finite for all \((\theta _1,\theta _2)\) such that \(\mathfrak {R}\theta _1< \theta _1^*\wedge 0\) and \(\mathfrak {R}\theta _2< \theta _2^{**}\wedge 0\). \(\square \)

3.2 Kernel

The kernel \(\gamma \) defined in (9) can be written as

$$\begin{aligned} \gamma (\theta _1, \theta _2) =a(\theta _1)\theta _2^2+b(\theta _1)\theta _2+c(\theta _1), \end{aligned}$$

where a, b and c are polynomials in \(\theta _1\) given by

$$\begin{aligned} a(\theta _1)= \frac{1}{2}\sigma _{22}, \qquad b(\theta _1)=\sigma _{12}\theta _1+\mu _2, \qquad c(\theta _1)=\frac{1}{2}\sigma _{11}\theta _1^2+\mu _1\theta _1. \end{aligned}$$

Let \(d (\theta _1) = b^2 (\theta _1) -4a(\theta _1)c(\theta _1)\) be the discriminant. It has two real zeros \(\theta _1^\pm \) of opposite signs, which are equal to

$$\begin{aligned} \theta _1^\pm = \frac{(\mu _2\sigma _{12}-\mu _1\sigma _{22}) \pm \sqrt{(\mu _2\sigma _{12}-\mu _1\sigma _{22})^2 +\mu _2^2 \det {\varSigma }}}{\det \varSigma }. \end{aligned}$$
(12)

We define \(\varTheta _2(\theta _1)\), a two-valued algebraic function with the two branch points \(\theta _1^\pm \), by \(\gamma (\theta _1,\varTheta _2(\theta _1))= 0\). We define the two branches \( \varTheta _2^\pm \) on the cut plane \(\mathbb {C}{\setminus } ((-\,\infty ,\theta _1^-)\cup (\theta _1^+,\infty ))\) by \(\varTheta _2^\pm (\theta _1) =\frac{-b(\theta _1)\pm \sqrt{d(\theta _1)}}{2a(\theta _1)} \), that is

$$\begin{aligned} \varTheta _2^\pm (\theta _1)&= \dfrac{-(\sigma _{12}\theta _1+\mu _2)\pm \sqrt{\theta _1^2(\sigma _{12}^2-\sigma _{11}\sigma _{22})+2\theta _1(\mu _2\sigma _{12}-\mu _1\sigma _{22})+\mu _2^2}}{\sigma _{22}} . \end{aligned}$$
(13)

On \((-\,\infty ,\theta _1^-)\cup (\theta _1^+,\infty )\) the discriminant d is negative and the branches \(\varTheta _2^\pm \) take complex conjugate values on this set. This will imply that the curve \(\mathcal {R}\) defined in Eq. (17) is symmetric with respect to the horizontal axis. In the same way we define \(\theta _2^\pm \) and \(\varTheta _1^\pm \); it yields

$$\begin{aligned} \theta _2^\pm = \frac{(\mu _1\sigma _{12}-\mu _2\sigma _{11}) \pm \sqrt{(\mu _1\sigma _{12}-\mu _2\sigma _{11})^2 +\mu _1^2 \det {\varSigma }}}{\det \varSigma } \end{aligned}$$

and

$$\begin{aligned} \varTheta _1^\pm (\theta _2) =\dfrac{-(\sigma _{12}\theta _2+\mu _1)\pm \sqrt{\theta _2^2(\sigma _{12}^2-\sigma _{11}\sigma _{22})+2\theta _2(\mu _1\sigma _{12}-\mu _2\sigma _{11})+\mu _1^2}}{\sigma _{11}}. \end{aligned}$$
(14)

The previous formulas can also be found in [31, (7) and (8)].
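For readers who wish to experiment numerically, the branch points (12) and the branches (13)–(14) translate directly into code. The parameters below are illustrative choices of ours; the final line checks that \(\gamma (\theta _1,\varTheta _2^\pm (\theta _1))=0\) up to rounding.

```python
import cmath

# Illustrative parameters (ours): Sigma positive definite, drift with positive coordinates.
s11, s12, s22 = 1.0, 0.3, 1.0
mu1, mu2 = 1.0, 1.5
detS = s11*s22 - s12**2

# Branch points (12) of Theta_2 and their analogues for Theta_1.
d1 = ((mu2*s12 - mu1*s22)**2 + mu2**2*detS)**0.5
theta1_m, theta1_p = ((mu2*s12 - mu1*s22) - d1)/detS, ((mu2*s12 - mu1*s22) + d1)/detS
d2 = ((mu1*s12 - mu2*s11)**2 + mu1**2*detS)**0.5
theta2_m, theta2_p = ((mu1*s12 - mu2*s11) - d2)/detS, ((mu1*s12 - mu2*s11) + d2)/detS

def Theta2(t1, sign=+1):
    """Branches (13): Theta_2^{+/-}(theta_1); complex conjugate values for t1 < theta1^-."""
    d = t1**2*(s12**2 - s11*s22) + 2*t1*(mu2*s12 - mu1*s22) + mu2**2
    return (-(s12*t1 + mu2) + sign*cmath.sqrt(d)) / s22

def Theta1(t2, sign=+1):
    """Branches (14): Theta_1^{+/-}(theta_2)."""
    d = t2**2*(s12**2 - s11*s22) + 2*t2*(mu1*s12 - mu2*s11) + mu1**2
    return (-(s12*t2 + mu1) + sign*cmath.sqrt(d)) / s11

gamma = lambda a, b: 0.5*(s11*a**2 + 2*s12*a*b + s22*b**2) + mu1*a + mu2*b
t1 = theta1_m - 2.0                                   # a point to the left of theta1^-
print(theta1_m, theta1_p, theta2_m, theta2_p)         # real zeros of the discriminants
print(Theta2(t1, +1), Theta2(t1, -1))                 # complex conjugates on (-inf, theta1^-)
print(abs(gamma(t1, Theta2(t1, +1))))                 # ~ 0: the branch solves gamma = 0
```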

3.3 Holomorphic Continuation

The boundary value problem satisfied by \(\psi _1(\theta _2)\) in Sect. 4 lies on a curve which is not contained in the convergence domain established in Proposition 5, that is \(\{ \theta _2\in \mathbb {C} : \mathfrak {R}\theta _2 \leqslant \theta _2^{**} \}\). That is why we extend the Laplace transform \(\psi _1\) holomorphically. We assume that the transience condition (6) is satisfied.

Lemma 6

(Holomorphic continuation) The Laplace transform \(\psi _1\) may be holomorphically extended to the open set

$$\begin{aligned} \{\theta _2\in \mathbb {C}{\setminus } (\theta _2^+,\infty ) : \mathfrak {R}\, \theta _2< \theta _2^{**} \text { or } \mathfrak {R}\, \varTheta _1^-(\theta _2) < \theta _1^*\}. \end{aligned}$$
(15)

Proof

This proof is similar to the one of Lemma 3 of [31]. The Laplace transform \(\psi _1\) is initially defined on \(\{\theta _2\in \mathbb {C} : \mathfrak {R}\, \theta _2 < \theta _2^{**} \}\), see Proposition 5. By evaluating the functional Eq. (10) at \((\varTheta _1^-(\theta _2),\theta _2)\) we have

$$\begin{aligned} \psi _1(\theta _2)=-\frac{\gamma _2(\varTheta _1^-(\theta _2),\theta _2)\psi _2(\varTheta _1^-(\theta _2)) + \exp (\varTheta _1^-(\theta _2) x_1 + \theta _2 x_2)}{\gamma _1(\varTheta _1^-(\theta _2),\theta _2)} \end{aligned}$$
(16)

for \(\theta _2\) in the open and non-empty set \(\{\theta _2\in \mathbb {C} : \mathfrak {R}\, \theta _2< \theta _2^{**} \text { and } \mathfrak {R}\, \varTheta _1^-(\theta _2) < \theta _1^*\}\). Formula (16) then allows us to continue \(\psi _1\) meromorphically on \(\{\theta _2\in \mathbb {C} : \mathfrak {R}\, \varTheta _1^-(\theta _2) < \theta _1^*\}\). The potential poles may come from the zeros of \(\gamma _1(\varTheta _1^-(\theta _2),\theta _2)\). The points \(\theta ^*\) and (0, 0) are the only points of the ellipse \(\gamma =0\) at which \(\gamma _1\) vanishes. We notice that \(\varTheta _1^-(0)\ne 0\) since \(\mu _1>0\). Then, the only possible value in that domain at which the denominator of (16) vanishes is \(\theta _2^*\), when \(\theta ^*=(\varTheta _1^-(\theta _2^*),\theta _2^*)\). In that case \(\theta _2^*<\theta _2^{**}\) and, thanks to Proposition 5, we deduce that \(\psi _1(\theta _2^{*})\) is finite (which means that the numerator of (16) is zero). We conclude that \(\psi _1\) is holomorphic in the domain (15). \(\square \)

This continuation is similar to what is done for the Laplace transform of the invariant measure in [29,30,31]. In fact it would be possible to introduce the Riemann surface \(\mathcal {S}=\{(\theta _1,\theta _2)\in \mathbb {C}^2:\gamma (\theta _1,\theta _2)=0\}\), which is a sphere, and to continue the Laplace transforms meromorphically to the whole surface, and even to its universal covering.

4 A Boundary Value Problem

The goal of this section is to establish and to solve the non-homogeneous Carleman boundary value problem with shift satisfied by \(\psi _1 (\theta _2)\), the Laplace transform of Green’s function on the vertical axis. Here, the shift is the complex conjugation. We will refer to the reference books on boundary value problems [32, 46, 48], and to “Appendix B” for a brief survey of this theory. In this section we assume that the transience condition (6) is satisfied.

4.1 Boundary and Domain

This section is mostly technical. Before stating the BVP in Sect. 4.2, we need to introduce the boundary \(\mathcal {R}\) and the domain \(\mathcal G_\mathcal R\) where the BVP will be satisfied.

4.1.1 A Hyperbola

The curve \(\mathcal {R}\) is a branch of hyperbola already introduced in [3, 30, 31]. We define \(\mathcal {R}\) as

$$\begin{aligned} \mathcal {R}=\{\theta _2\in \mathbb C: \gamma (\theta _1,\theta _2)=0 \text { and } \theta _1\in (-\,\infty ,\theta _1^-)\}=\varTheta _2^\pm ((-\,\infty ,\theta _1^-)) \end{aligned}$$
(17)

and \(\mathcal G_\mathcal R\) as the open domain of \(\mathbb {C}\) bounded by \(\mathcal {R}\) on the right, see Fig. 5. As we noticed in Sect. 3.2, the curve \(\mathcal {R}\) is symmetric with respect to the horizontal axis. See [30, 31] or [3, Lemma 9] for more details and a study of this hyperbola. In particular, the equation of the hyperbola is given by

$$\begin{aligned} \sigma _{22}(\sigma _{12}^2-\sigma _{11}\sigma _{22})x^2+\sigma _{12}^2\sigma _{22}y^2-2\sigma _{22}(\sigma _{11}\mu _2-\sigma _{12}\mu _1)x=\mu _2(\sigma _{11}\mu _2-2\sigma _{12}\mu _1). \end{aligned}$$
(18)

In Fig. 6 one can see the shape of \(\mathcal {R}\) according to the sign of the covariance \(\sigma _{12}\). The part of \(\mathcal {R}\) with negative imaginary part is denoted by \(\mathcal {R}^-\).
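As a quick numerical consistency check (with parameters of our own choosing), one can verify that the points \(\varTheta _2^\pm (\theta _1)\), \(\theta _1<\theta _1^-\), satisfy Eq. (18):

```python
import cmath

# Illustrative parameters (ours).
s11, s12, s22 = 1.0, 0.3, 1.0
mu1, mu2 = 1.0, 1.5
detS = s11*s22 - s12**2
theta1_m = ((mu2*s12 - mu1*s22) - ((mu2*s12 - mu1*s22)**2 + mu2**2*detS)**0.5) / detS

def Theta2(t1, sign=+1):
    d = t1**2*(s12**2 - s11*s22) + 2*t1*(mu2*s12 - mu1*s22) + mu2**2
    return (-(s12*t1 + mu2) + sign*cmath.sqrt(d)) / s22

def eq18(x, y):
    """Left-hand side minus right-hand side of (18); vanishes on the hyperbola R."""
    lhs = s22*(s12**2 - s11*s22)*x**2 + s12**2*s22*y**2 - 2*s22*(s11*mu2 - s12*mu1)*x
    return lhs - mu2*(s11*mu2 - 2*s12*mu1)

for t1 in (theta1_m - 0.5, theta1_m - 2.0, theta1_m - 10.0):
    z = Theta2(t1)                    # a point of the curve R
    print(eq18(z.real, z.imag))       # ~ 0 up to floating-point error
```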

Fig. 5: Curve \(\mathcal {R}\) defined in (17) in green and domain \(\mathcal G_\mathcal R\) in blue

4.1.2 Continuation on the Domain

Together with Lemma 6 the following lemma implies that \(\psi _1\) may be holomorphically extended to a domain containing \(\overline{\mathcal G_\mathcal R}\).

Lemma 7

The set \(\overline{\mathcal G_\mathcal R}\) is strictly included in the domain

$$\begin{aligned} \{\theta _2\in \mathbb {C}{\setminus } (\theta _2^+,\infty ) : \mathfrak {R}\, \theta _2< \theta _2^{**} \text { or } \mathfrak {R}\, \varTheta _1^-(\theta _2) < \theta _1^*\} \end{aligned}$$

defined in (15).

Proof

This proof is similar to the one of Lemma 5 of [31]. First we notice that the set \(\overline{\mathcal G_\mathcal R} \cap \{\theta _2\in \mathbb {C} : \mathfrak {R}\, \theta _2 < \theta _2^{**} \}\) is included in the domain defined in (15). Then, it remains to prove that the set

$$\begin{aligned} S:= \overline{\mathcal G_\mathcal R} \cap \{\theta _2\in \mathbb {C} : \mathfrak {R}\, \theta _2 \geqslant \theta _2^{**}\} \end{aligned}$$

is a subset of the domain (15). More precisely, we show that S is included in

$$\begin{aligned} T:= \{\theta _2\in \mathbb {C} {\setminus } (\theta _2^+,\infty ) : \mathfrak {R}\, \varTheta _1^-(\theta _2) < \theta _1^*\}. \end{aligned}$$

First of all, notice that the set S is bounded by (a part of) the hyperbola \(\mathcal {R}\) and (a part of) the straight line \(\theta _2^{**}+i\mathbb {R}\). We denote by \(\theta _2^{**}\pm i t_1\) the two intersection points of these two curves when they exist, see Fig. 6. The definition of \(\mathcal {R}\) implies that \(\mathcal {R}\subset T\). Indeed, the image of \(\mathcal {R}\) by \(\varTheta _1^-\) is included in \((-\,\infty ,\theta _1^-)\) and \(\theta _1^-\leqslant \theta _1^*\). Furthermore, (the part of) \(\theta _2^{**}+i\mathbb {R}\) that bounds S also belongs to T because, for \(t\in \mathbb {R}_+\) and using the fact that \(\det \varSigma >0\), Eq. (14) yields after some calculations

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \mathfrak {R}\varTheta _1^-(\theta _2^{**}\pm it)\leqslant \mathfrak {R}\varTheta _1^-(\theta _2^{**})=\theta _1^{**}<\theta _1^{*}, &{}\quad \hbox {when}~ \theta _2^{**}\leqslant \varTheta _2(\theta _1^-);\\ \mathfrak {R}\varTheta _1^-(\theta _2^{**}\pm i(t_1+t))\leqslant \mathfrak {R}\varTheta _1^-(\theta _2^{**}\pm i t_1) <\theta _1^{*}, &{}\quad \hbox {when}~ \theta _2^{**}>\varTheta _2(\theta _1^-). \end{array}\right. \end{aligned}$$

The inequality \(\theta _1^{**} <\theta _1^{*}\) follows from the assumption that \(\theta _2^{**}\leqslant \varTheta _2(\theta _1^-)\) and the inequality \( \mathfrak {R}\varTheta _1^-(\theta _2^{**}\pm i t_1) <\theta _1^{*}\) follows from the fact that \(\theta _2^{**}\pm i t_1\in \mathcal {R}\subset T\). Let us denote \(\beta =\arccos { \left( -\frac{\sigma _{12} }{\sqrt{\sigma _{11}\sigma _{22}}} \right) }\). To conclude we consider two cases:

  • \(\sigma _{12}<0\) or equivalently \(0<\beta <\frac{\pi }{2}\): the set S is either empty or bounded, see the left picture on Fig. 6. Applying the maximum principle to the function \(\mathfrak {R}\varTheta _1^-\) shows that the image of every point of S by \(\mathfrak {R}\varTheta _1^-\) is smaller than \(\theta _1^*\), and hence that S is included in T.

  • \(\sigma _{12}\geqslant 0\) or equivalently \(\frac{\pi }{2}\leqslant \beta <\pi \): in this case the set S is unbounded, as we can see on the right picture of Fig. 6. It is no longer possible to apply the maximum principle directly. However, to conclude we show that the image by \(\mathfrak {R}\varTheta _1^-\) of a point \(re^{it}\) of S near infinity is smaller than \(\theta _1^*\). The asymptotic directions of \(\theta _1^*+i\mathbb {R}\) are \(\pm \frac{\pi }{2}\) and (18) implies that those of \(\mathcal {R}\) are \(\pm (\pi -\beta )\). Then, as in the proof of Lemma 5 of [31], we prove with (14) that for \( t\in (\pi -\beta , \frac{\pi }{2})\) we have

    $$\begin{aligned} \varTheta _1^-(re^{\pm it})\underset{r\rightarrow \infty }{\sim } r\sqrt{\frac{\sigma _{22}}{\sigma _{11}}} e^{\pm i(t+\beta )}. \end{aligned}$$

    For \( t\in (\pi -\beta , \frac{\pi }{2})\) this implies that \(\mathfrak {R}\varTheta _1^-(re^{\pm it})\underset{r\rightarrow \infty }{\longrightarrow }-\infty \) and we obtain that \(\mathfrak {R}\varTheta _1^-(re^{\pm it})<\theta _1^*\) for r large enough. As in the case \(\sigma _{12}<0\) we finish the proof with the maximum principle. \(\square \)

Fig. 6: On the left \(\sigma _{12}<0\), and on the right \(\sigma _{12}\geqslant 0\). The blue domain is the set S

4.2 Carleman Boundary Value Problem

We establish a boundary value problem (BVP) with shift (here the shift is the complex conjugation) on the hyperbola \(\mathcal {R}\). Let us define the functions G and g such that

$$\begin{aligned} G(\theta _2)&:=\frac{\gamma _1}{\gamma _2} (\varTheta _1^-(\theta _2),\theta _2) \frac{\gamma _2}{\gamma _1}(\varTheta _1^-(\theta _2),\overline{\theta _2}), \end{aligned}$$
(19)
$$\begin{aligned} g(\theta _2)&:= \frac{\gamma _2}{\gamma _1}(\varTheta _1^-(\theta _2),\overline{\theta _2}) \left( \frac{e^{(\varTheta _1^-(\theta _2),\theta _2) \cdot x}}{\gamma _2(\varTheta _1^-(\theta _2),\theta _2)} -\frac{e^{(\varTheta _1^-(\theta _2),\overline{\theta _2})\cdot x}}{\gamma _2(\varTheta _1^-(\theta _2),\overline{\theta _2})} \right) . \end{aligned}$$
(20)

Lemma 8

(BVP for \(\psi _1\)) The Laplace transform \(\psi _1\) satisfies the following boundary value problem:

  1. (i)

    \(\psi _1\) is analytic on \(\mathcal G_\mathcal R\), continuous on its closure \(\overline{\mathcal G_\mathcal R}\) and tends to 0 at infinity;

  2. (ii)

    \(\psi _1\) satisfies the boundary condition

    $$\begin{aligned} \psi _1(\overline{\theta _2})=G(\theta _2)\psi _1({\theta _2}) + g(\theta _2), \qquad \forall \theta _2\in \mathcal {R}. \end{aligned}$$
    (21)

This BVP is said to be non-homogeneous because of the function g coming from the term \(e^{\theta \cdot x}\) in the functional equation.

Proof

The analyticity and continuity properties of item (i) follow from Lemmas 6 and 7. The behavior at infinity follows from the integral formula (1) which defines the Laplace transform \(\psi _1\) and from the continuation formula (16). We now show item (ii). For \(\theta _1\in (-\,\infty ,\theta _1^-)\), let us evaluate the functional equation (10) at the points \((\theta _1,\varTheta _2^\pm (\theta _1))\). It yields the two equations

$$\begin{aligned} 0 = \gamma _1 (\theta _1,\varTheta _2^\pm (\theta _1)) \psi _1 (\varTheta _2^\pm (\theta _1)) + \gamma _2 (\theta _1,\varTheta _2^\pm (\theta _1)) \psi _2 (\theta _1) + e^{(\theta _1,\varTheta _2^\pm (\theta _1)) \cdot x}. \end{aligned}$$

Eliminating \(\psi _2 (\theta _1)\) from the two equations gives

$$\begin{aligned} \psi _1 (\varTheta _2^+(\theta _1)) =&\frac{\gamma _1}{\gamma _2} (\theta _1,\varTheta _2^-(\theta _1)) \frac{\gamma _2}{\gamma _1} (\theta _1,\varTheta _2^+(\theta _1)) \psi _1 (\varTheta _2^-(\theta _1))\\&+ \frac{\gamma _2}{\gamma _1} (\theta _1,\varTheta _2^+(\theta _1)) \frac{e^{(\theta _1,\varTheta _2^-(\theta _1)) \cdot x}}{\gamma _2(\theta _1,\varTheta _2^-(\theta _1))} - \frac{e^{(\theta _1,\varTheta _2^+(\theta _1)) \cdot x}}{\gamma _1(\theta _1,\varTheta _2^+(\theta _1))}. \end{aligned}$$

As \(\theta _1\) ranges over \((-\,\infty ,\theta _1^-)\), the quantities \(\varTheta _2^+(\theta _1)\) and \(\varTheta _2^-(\theta _1)\) run through the whole curve \(\mathcal {R}\) (defined in (17)) and are complex conjugates of each other, see Sect. 3.2. Noticing that in that case \(\varTheta _1^-(\varTheta _2^-(\theta _1))=\theta _1\), we obtain Eq. (21). \(\square \)

4.3 Conformal Glueing Function

To solve the BVP of Lemma 8 we need a function w which satisfies the following conditions:

  1. (i)

    w is holomorphic on \(\mathcal G_\mathcal R\), continuous on \(\overline{\mathcal G_\mathcal R}\) and tends to infinity at infinity,

  2. (ii)

    w is one to one from \(\mathcal G_\mathcal R\) to \(\mathbb {C}{\setminus } (-\,\infty ,-\,1]\),

  3. (iii)

    \(w(\theta _2)=w(\overline{\theta _2})\) for all \(\theta _2\in \mathcal {R}\).

Such a function w is called a conformal glueing function because it glues together the upper and the lower parts of the hyperbola \(\mathcal {R}\). Let us define w in terms of a generalized Chebyshev polynomial:

$$\begin{aligned} T_a(x)&:=\cos (a\arccos (x))=\frac{1}{2} {\big [} \big (x+\sqrt{x^2-1}\big )^a+\big (x-\sqrt{x^2-1}\big )^a {\big ]},\nonumber \\ \beta&:=\arccos {{ \left( -\frac{\sigma _{12}}{\sqrt{\sigma _{11}\sigma _{22}}} \right) }}, \end{aligned}$$
(22)
$$\begin{aligned} {w} (\theta _2)&:=T_{\frac{\pi }{\beta }}\bigg (-\frac{2\theta _2-(\theta _2^++\theta _2^-)}{\theta _2^+-\theta _2^-}\bigg ) \quad \text {for all } \theta _2\in \mathbb {C}{\setminus } [\theta _2^+,\infty ). \end{aligned}$$
(23)

The function w is a conformal glueing function which satisfies (i), (ii), (iii) and \(w(\varTheta _2^\pm (\theta _1^-))=-\,1\). See [30, Lemma 3.4] for the proof of these properties. The following lemma is a direct consequence of these properties.

Lemma 9

(Conformal glueing function) The function W defined by

$$\begin{aligned} W(\theta _2)=\frac{w(\theta _2)+1}{w(\theta _2)} \end{aligned}$$

satisfies the following properties:

  1. 1.

    W is holomorphic on \(\mathcal G_\mathcal R{\setminus } \{w^{-1}(0)\}\), continuous on \(\overline{\mathcal G_\mathcal R}{\setminus } \{w^{-1}(0)\}\) and tends to 1 at infinity,

  2. 2.

    W is one to one from \(\mathcal G_\mathcal R{\setminus } \{w^{-1}(0)\}\) to \(\mathbb {C}{\setminus } [0,1]\),

  3. 3.

    \(W(\theta _2)=W(\overline{\theta _2})\) for all \(\theta _2\in \mathcal {R}\).

We introduce W to avoid any technical problem at infinity. We have a cut on the segment [0, 1] and we will be able to apply the propositions presented in “Appendix B”. Notice that we have arbitrarily chosen the pole of W at \(w^{-1}(0)\), but any other point \(w^{-1}(x)\) for \(x\in \mathbb {C}{\setminus } (-\,\infty ,-1]\) would have been suitable.
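Returning to the function w itself, its glueing property (iii) and the value \(w(\varTheta _2^\pm (\theta _1^-))=-1\) can be checked numerically from (22)–(23) using the principal branches of cmath. The parameters below are illustrative choices of ours; the printed values should be essentially real and at most \(-1\) (tiny imaginary parts are floating-point noise).

```python
import cmath, math

# Illustrative parameters (ours).
s11, s12, s22 = 1.0, 0.3, 1.0
mu1, mu2 = 1.0, 1.5
detS = s11*s22 - s12**2
beta = math.acos(-s12 / math.sqrt(s11*s22))

d1 = ((mu2*s12 - mu1*s22)**2 + mu2**2*detS)**0.5
theta1_m = ((mu2*s12 - mu1*s22) - d1) / detS
d2 = ((mu1*s12 - mu2*s11)**2 + mu1**2*detS)**0.5
theta2_m, theta2_p = ((mu1*s12 - mu2*s11) - d2)/detS, ((mu1*s12 - mu2*s11) + d2)/detS

def T(a, z):
    """Generalized Chebyshev function T_a(z) = cos(a*arccos(z)) of (22), principal branches."""
    return cmath.cos(a * cmath.acos(z))

def w(t2):
    """Conformal glueing function (23)."""
    return T(math.pi/beta, -(2*t2 - (theta2_p + theta2_m)) / (theta2_p - theta2_m))

def Theta2(t1, sign=+1):
    d = t1**2*(s12**2 - s11*s22) + 2*t1*(mu2*s12 - mu1*s22) + mu2**2
    return (-(s12*t1 + mu2) + sign*cmath.sqrt(d)) / s22

for t1 in (theta1_m - 0.3, theta1_m - 1.0, theta1_m - 5.0):
    print(w(Theta2(t1, +1)), w(Theta2(t1, -1)))   # property (iii): nearly equal, real, <= -1
print(w(Theta2(theta1_m)))                         # should be close to -1
```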

4.4 Index of the BVP

We denote

$$\begin{aligned} \varDelta =[\text {arg } G ]_{\mathcal {R}^-} \quad \text {and} \quad d=\text {arg } G (\varTheta _2^\pm (\theta _1^-)) \in (-\pi ,\pi ]. \end{aligned}$$

To solve the BVP of Lemma 8 we need to compute the index \(\chi \) which is defined by

$$\begin{aligned} \chi =\left\lfloor \frac{d+\varDelta }{2\pi } \right\rfloor . \end{aligned}$$

Lemma 10

(Index) The index \(\chi \) is equal to

$$\begin{aligned} \chi ={\left\{ \begin{array}{ll} 0 &{} \quad \text {if } \gamma _1(\theta _1^-, \varTheta _2^\pm (\theta _1^-)) \gamma _2(\theta _1^-, \varTheta _2^\pm (\theta _1^-)) \leqslant 0,\\ 1 &{} \quad \text {if } \gamma _1(\theta _1^-, \varTheta _2^\pm (\theta _1^-)) \gamma _2(\theta _1^-, \varTheta _2^\pm (\theta _1^-)) > 0. \end{array}\right. } \end{aligned}$$

The index is then equal to 0 or 1, depending on the position of the two straight lines \(\gamma _1=0\) and \(\gamma _2=0\) with respect to the point \((\theta _1^-, \varTheta _2^\pm (\theta _1^-)) \) (shown in red in Fig. 7, which illustrates this lemma).

Proof

The proof is similar in each step to the proof of Lemma 14 in [31] except that in our case \( \gamma _2(\theta _1^-, \varTheta _2^\pm (\theta _1^-))\) is not always positive. \(\square \)
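In practice, once the branch point is known the index is a one-line computation. A small sketch with illustrative parameters of our own choosing:

```python
import math

# Illustrative parameters (ours).
s11, s12, s22 = 1.0, 0.3, 1.0
mu1, mu2 = 1.0, 1.5
r12, r21 = 0.2, 0.4
detS = s11*s22 - s12**2

theta1_m = ((mu2*s12 - mu1*s22) - math.sqrt((mu2*s12 - mu1*s22)**2 + mu2**2*detS)) / detS
t2 = -(s12*theta1_m + mu2) / s22        # Theta_2^{+/-}(theta1^-), a real double root

g1 = theta1_m + r21*t2                  # gamma_1 at (theta1^-, Theta_2(theta1^-))
g2 = r12*theta1_m + t2                  # gamma_2 at the same point
chi = 1 if g1*g2 > 0 else 0             # Lemma 10
print("theta1^- =", theta1_m, " g1 =", g1, " g2 =", g2, " chi =", chi)
```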

Fig. 7: On the left \(\chi =0\) and on the right \(\chi =1\)

4.5 Resolution of the BVP

The following theorem, already presented in the introduction as the main result of this paper, holds.

Theorem 11

(Explicit expression of \(\psi _1\)) Assume conditions (6) and (8). For \(\theta _2\in \mathcal G_\mathcal R\), the Laplace transform \(\psi _1\) defined in (1) is equal to

$$\begin{aligned} \psi _1(\theta _2)= \frac{-Y (w(\theta _2))}{2i\pi } \int _{\mathcal {R}^-} \frac{g(t)}{Y^+ (w(t))} \left( \frac{w'(t) }{w(t)-w(\theta _2)} + \chi \frac{ w'(t) }{w(t)} \right) \, \mathrm {d}t \end{aligned}$$
(24)

with

$$\begin{aligned} Y (w(\theta _2))=w(\theta _2)^{\chi } \exp \left( \frac{1}{2i\pi } \int _{\mathcal {R}^-} \log (G(s))\left( \frac{ w'(s)}{w(s)-w(\theta _2)}- \frac{ w'(s)}{w(s)} \right) \, \mathrm {d}s \right) , \end{aligned}$$

and where

  • G is defined in (19) and g in (20),

  • w is the conformal glueing function defined in (23),

  • \(\mathcal {R}^-\) is the part of the hyperbola \(\mathcal {R}\) defined in (17) with negative imaginary part,

  • \(\chi =0\) or 1 is determined by Lemma 10,

  • \(Y^+\) is the limit value on \(\mathcal {R}^-\) of Y (and may be expressed thanks to Sokhotski–Plemelj formulas stated in Proposition 12 of “Appendix B”).

Proof

We define the function \(\varPsi \) by

$$\begin{aligned} \varPsi (z)=\psi _1(W^{-1}(z)), \quad \text {for } z\in \mathbb {C}{\setminus }[0,1]. \end{aligned}$$

Then, \(\varPsi \) satisfies the Riemann BVP of Proposition 23 in “Appendix B”. The resolution of this BVP leads to Proposition 24 which gives a formula for the Laplace transform \(\psi _1 = \varPsi \circ W\). We then have

$$\begin{aligned} \psi _1(\theta _2)= {\left\{ \begin{array}{ll} X(W(\theta _2))\varphi (W(\theta _2)) &{} \quad \text {for } \chi =-\,1,\\ X(W(\theta _2))(\varphi (W(\theta _2))+C) &{} \quad \text {for } \chi =0, \end{array}\right. } \end{aligned}$$

where C is a constant, \(\chi \) is determined in Lemma 10 and the functions X and \(\varphi \) are defined by

$$\begin{aligned} X(W(\theta _2)) := (W(\theta _2)-1)^{-\chi } \exp \left( \frac{1}{2i\pi } \int _{\mathcal {R}^-} \log (G(t))\frac{ W'(t)}{W(t)-W(\theta _2)} \, \mathrm {d}t \right) , \quad \theta _2\in \mathcal G_\mathcal R, \end{aligned}$$

and

$$\begin{aligned} \varphi (W(\theta _2)):= \frac{-1}{2i\pi } \int _{\mathcal {R}^-} \frac{g(t)}{X^+ (W(t))} \frac{W'(t) }{W(t)-W(\theta _2)} \, \mathrm {d}t, \quad \theta _2\in \mathcal G_\mathcal R. \end{aligned}$$

When \(\chi =0\), the constant is determined by evaluating \(\psi _1\) at \(-\infty \). We have \(\psi _1(-\infty )=0\), \(W(-\infty )=1\) and we obtain \(C=-\,\varphi (1)=\frac{1}{2i\pi } \int _{\mathcal {R}^-} \frac{g(t)}{X^+ (W(t))} \frac{W'(t) }{W(t)-1} \, \mathrm {d}t\). To end the proof, we just have to notice that

$$\begin{aligned} W(\theta _2)-1 = \frac{1}{w(\theta _2)}, \quad \frac{W'(t)}{W(t)-W(\theta _2)}= \frac{w'(t)}{w(t)-w(\theta _2)} - \frac{w'(t)}{w(t)} \quad \text {and} \quad \frac{W'(t)}{W(t)-1} = - \frac{w'(t)}{w(t)}. \end{aligned}$$

\(\square \)
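Formula (24) lends itself to numerical evaluation once G, g, w, \(w'\) and a parametrization of \(\mathcal {R}^-\) are available. The skeleton below (ours) only illustrates the quadrature structure of Theorem 11: the contour and the functions fed to it at the bottom are toy placeholders chosen solely so that the snippet runs, not the objects of the paper, and the boundary value \(Y^+\) is approximated by evaluating Y slightly off the contour rather than through the Sokhotski–Plemelj limit.

```python
import numpy as np

def trap_weights(ts):
    """Trapezoidal increments dt for a sampled contour ts (complex points, ordered along R^-)."""
    dt = np.empty_like(ts)
    dt[1:-1] = (ts[2:] - ts[:-2]) / 2
    dt[0], dt[-1] = (ts[1] - ts[0]) / 2, (ts[-1] - ts[-2]) / 2
    return dt

def Y(z, ts, dt, G, w, dw, chi):
    """Y(z) of Theorem 11 at z = w(theta_2) for theta_2 in G_R, off the contour."""
    s = np.log(G(ts)) * (dw(ts) / (w(ts) - z) - dw(ts) / w(ts))
    return z**chi * np.exp(np.sum(s * dt) / (2j * np.pi))

def psi1(theta2, ts, dt, G, g, w, dw, chi, eps=1e-3):
    """Formula (24) by trapezoidal quadrature; Y^+ is crudely regularized by a small shift."""
    Yplus = np.array([Y(w(t - eps), ts, dt, G, w, dw, chi) for t in ts])
    z = w(theta2)
    integrand = g(ts) / Yplus * (dw(ts) / (w(ts) - z) + chi * dw(ts) / w(ts))
    return -Y(z, ts, dt, G, w, dw, chi) / (2j * np.pi) * np.sum(integrand * dt)

# Toy placeholders, ONLY to make the skeleton run; they are NOT the paper's G, g, w or R^-.
ts = np.linspace(-3.0, -0.1, 400) - 1j * np.linspace(3.0, 0.01, 400)
G_toy = lambda t: np.ones_like(t, dtype=complex)          # G == 1, so Y == w^chi here
g_toy = lambda t: np.exp(-np.abs(t)**2)
w_toy = lambda t: t
dw_toy = lambda t: np.ones_like(t, dtype=complex)
print(psi1(-2.0 + 0.5j, ts, trap_weights(ts), G_toy, g_toy, w_toy, dw_toy, chi=0))
```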

4.6 Decoupling Functions

Due to the function \(G\ne 1\) in (21), the boundary value problem is complicated. When it is possible to reduce the BVP to the case where \(G=1\), it can be solved directly thanks to the Sokhotski–Plemelj formulas, see Remark 13 in “Appendix B”.

In some specific cases it is possible to find a rational function F satisfying the decoupling condition

$$\begin{aligned} G(\theta _2) = \frac{F(\theta _2)}{F(\overline{\theta _2})}, \quad \forall \theta _2\in \mathcal {R}, \end{aligned}$$
(25)

where G is defined in (19). Such a function F is called a decoupling function. In [6] the authors show that such a function exists if and only if the following condition holds

$$\begin{aligned} \varepsilon +\delta \in \beta \mathbb {Z}+\pi \mathbb {Z}, \end{aligned}$$
(26)

where \(\beta \) is defined in (22) and \(\varepsilon ,\delta \in (0,\pi )\) are defined by

$$\begin{aligned} \tan \varepsilon = \frac{\sin \beta }{r_{21}\sqrt{\frac{\sigma _{11}}{\sigma _{22}}}+\cos \beta } \quad \text {and} \quad \tan \delta = \frac{\sin \beta }{r_{12}\sqrt{\frac{\sigma _{22}}{\sigma _{11}}}+\cos \beta }. \end{aligned}$$

In this case it is possible to solve the boundary value problem in an easier way. The boundary condition (21) may be rewritten as

$$\begin{aligned} (F\psi _1)(\overline{\theta _2})=(F\psi _1)({\theta _2}) + F(\overline{\theta _2})g(\theta _2), \qquad \forall \theta _2\in \mathcal {R}. \end{aligned}$$

Using again the conformal glueing function w, we transform the BVP into a Riemann BVP, see “Appendix B”. Such an approach leads to an alternative, simpler formula for \(\psi _1\). Indeed, thanks to Remark 13, in the cases where the rational function F tends to 0 at infinity, we obtain

$$\begin{aligned} \psi _1({\theta _2}) = \frac{1}{2i\pi } \frac{1}{F(\theta _2)} \int _{\mathcal {R}^-} \frac{F(\overline{t})g(t)}{w(t)-w(\theta _2)} \, \mathrm {d}t, \quad \theta _2\in \mathcal G_\mathcal R. \end{aligned}$$
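Condition (26) is easy to test numerically for given parameters. The following sketch (parameter values and tolerance are our own choices) computes \(\beta \), \(\varepsilon \) and \(\delta \) and searches a small grid of integers:

```python
import math

# Illustrative parameters (ours).
s11, s12, s22 = 1.0, 0.3, 1.0
r12, r21 = 0.2, 0.4

beta = math.acos(-s12 / math.sqrt(s11 * s22))
# eps and delta lie in (0, pi); atan2 picks the right branch since sin(beta) > 0.
eps = math.atan2(math.sin(beta), r21 * math.sqrt(s11 / s22) + math.cos(beta))
delta = math.atan2(math.sin(beta), r12 * math.sqrt(s22 / s11) + math.cos(beta))

# Condition (26): eps + delta belongs to beta*Z + pi*Z, tested on a small grid of integers.
ok = any(abs(eps + delta - (m * beta + n * math.pi)) < 1e-9
         for m in range(-10, 11) for n in range(-10, 11))
print("beta =", beta, " eps =", eps, " delta =", delta, " condition (26) satisfied:", ok)
```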