1 Introduction

Stochastic fluid queues have been used to model communication networks, in particular, the flow of data through a network as a “fluid” that moves continuously over time. The input to such fluid queues is assumed to be an exogenous random process, while the output occurs at a constant rate. The fluid queue, which is often viewed as the fluid workload process, is then modeled via the one-dimensional reflection mapping. See, for example, an overview of stochastic fluid queues in [33, Chapter 5] (and an overview of scheduling of stochastic fluid networks in [7, Chapter 12]). Such models are also used to describe the dynamics of storage systems and dams [30].

Although the input can be any general continuous-time stochastic process, in the telecommunication literature, Gaussian processes with self-similarity and long-range dependence, such as fractional Brownian motion (FBM), are often used to model the traffic flow into the system [22, 23, 25, 26, 28, 34]. However, the existing studies using FBM only model stationary inputs with these self-similarity and long-range dependence properties. Many Internet and communication input flows exhibit non-stationarity (see, e.g., [5, 19, 31]). It is therefore desirable to use a process that captures all of these characteristics.

Recently, Pang and Taqqu [27] introduced a generalized fractional Brownian motion (GFBM) as the scaling limit of power-law shot noise processes, extending [29, Chapter 3.4] and [21]. The GFBM loses the stationary-increments property of the standard FBM, while retaining self-similarity and long-range dependence. In this paper, we use a special case of the GFBM, namely the generalized Riemann–Liouville (R–L) FBM (see Eq. (2.1)), as the input process for fluid queues.

We particularly focus on the large deviation principles (LDPs) of fluid queues with GFBM input. Large deviations of fluid queues have been well studied (see an overview in [13]). Our paper is of a similar flavor to Chang et al. [6], which studies the large deviations and moderate deviations properties of fluid queues with an input process that can be regarded as an extension of the R–L FBM. Specifically, the Brownian motion in the R–L FBM is replaced by a process with stationary increments that satisfies a large deviations or a moderate deviations principle. That construction clearly differs from the GFBM. One distinction is that the mapping in that construction, from the process with stationary increments to the input process, is continuous, and thus the contraction principle can be applied to establish the LDP for the input process. However, that is not the case for the GFBM. We explain in detail in Sect. 2.2.2 why the contraction principle cannot be directly used to establish the LDP for the GFBM from that of the driving BM.

Therefore, we establish an LDP for the GFBM (\(\{X^\varepsilon \}_{\varepsilon >0}\) as defined in (2.8)) using a different approach, namely, the weak convergence approach (see Sect. 2.2.3 for a brief description). This approach is commonly used in proving LDPs of processes that can be expressed as a measurable map of a Brownian motion (\(X^\varepsilon \) is clearly an example). We establish the LDP for \(\{X^\varepsilon \}_{\varepsilon >0}\) by proving Lemmas 3.1, 3.2 and 3.3 (following the procedure for LDPs according to Theorem 2.1). The advantage of this approach lies in the fact that the LDP for \(\{X^\varepsilon \}_{\varepsilon >0}\) essentially reduces to the tightness of the processes \(\{X^{\varepsilon ,v^\varepsilon }\}_{\varepsilon >0}\) (defined in (3.2)), for an appropriate precompact family of controls \(\{v^\varepsilon \}_{\varepsilon >0}\), together with the uniqueness of solutions to Eq. (3.2) for an appropriately specified process v. The aforementioned tightness (which is required to prove Lemma 3.1) is derived under the assumption that the set of parameters \((\alpha ,\gamma )\) for the GFBM in (2.1) satisfies (2.6) (noting that the Hurst parameter H can take any value in (0, 1) in this range, unlike the standard FBM \(B^H\) with \(H\in (1/2,1)\) when \(\gamma =0\)). On the other hand, the rate function obtained using the weak convergence approach is given in the form of an optimization problem (see (3.1)). In fact, even for standard FBM, the rate function obtained via the contraction principle in [6] is also given implicitly through the integral mapping. Here we present an expression for the rate function of the GFBM using the Laplace transform in Lemma 3.4.

We then move on to prove the LDP for the workload process \(V(\cdot )\) of a stochastic fluid queue with the GFBM as input and a constant service rate, as well as for the corresponding running maximum process \(M(\cdot )\). See (4.1) and (4.2). The sample path LDPs for \(V(\cdot )\) and \(M(\cdot )\) can be easily obtained by applying the contraction principle, thanks to the continuity of the reflection mapping in the Skorohod topology. However, by adapting the method in [6, Section 4 & 5] and using the LDP result for the GFBM, we obtain the LDPs for \(V(\cdot )\) and \(M(\cdot )\) at a fixed time, in which the rate function is provided explicitly (see Theorem 4.1 and Lemma 4.1).

Finally, we analyze the long-time behavior of these processes in Sect. 5. As is well known, if the input process has stationary increments, the study of V(t) and M(t) is equivalent (see (4.3)). Since the GFBM has non-stationary increments, the usual approach for stationary input to derive the steady state of the queueing process does not apply (see, e.g., tail asymptotics of fluid queues with the R–L FBM in [9, 10, 12, 14, 15] and the references therein).

To study the long-time behavior, we first establish that the laws of V(t) and M(t) have a weak limit point as \(t\rightarrow \infty \) (in fact, we show that M(t) converges almost surely as \(t\rightarrow \infty \)). We first derive an alternative representation of the GFBM in Lemma 5.1 by using the Itô product formula, for which we have to use an approximation argument to circumvent an ill-posedness issue at time zero. We then derive a new maximal inequality for the scaled GFBM (see Lemma 5.3), in particular, the tail asymptotics for \(\max _{\delta _0\le s\le t} \big \{s^{-H}X(s)\big \},\) for some \(\delta _0>0\), and a modulus-of-continuity-type estimate for X(t) when t is near 0. Moreover, by using this new maximal inequality, we can show that the tails of the laws of V(t) and M(t) at fixed t are sub-exponential (Theorems 5.1 and 5.3), from which we conclude that the laws of V(t) have a weak limit point as \(t\rightarrow \infty \). In addition, this sub-exponential tail behavior also implies that the expectation of M(t) is uniformly bounded in time, from which we conclude that M(t) converges almost surely.

Now that the existence of a steady-state distribution is proved, we next study the tail asymptotics of these steady-state distributions. Due to the non-stationarity of the increments, the steady-state distribution of M(t) is not necessarily equal to that of the queueing process V(t) mentioned above. We derive tail asymptotics for the steady states in Theorems 5.2 and 5.4. For this purpose, we derive a maximal inequality (see Lemma 5.3) and modulus of continuity estimates (see Lemmas 5.2 and 5.4) for the GFBM.

We also provide alternative proofs for certain results in Sects. 4 and 5 using well-known results on the extremes of Gaussian processes. Specifically, we give proofs for Theorem 4.1 and Lemma 4.1 in Sect. 4.1 using Landau–Marcus–Shepp asymptotics [24, Equation (1.1)], and discuss how it is used to prove Lemma 5.4 in Remark 5.5. We also give an alternative proof for Theorem 5.2 in Sect. 5.1 using results on the tail asymptotics for locally stationary self-similar Gaussian processes by Hüsler and Piterbarg [16]. For this, we show that the GFBM is locally stationary, despite its non-stationary increments (see Lemma A.2).

1.1 Notation

Let \((\Omega , \mathcal {F}, \{\mathcal {F}_t\}_{t\ge 0}, \mathbb {P})\) be a filtered probability space with \(\{\mathcal {F}_t\}\) satisfying the usual conditions. \(\mathbb {E}\) denotes the expectation with respect to \(\mathbb {P}\). For \(T>0\), let \(\mathcal {C}_T\) be the space of continuous real-valued functions f on [0, T] such that \(f(0)=0\), equipped with the uniform topology (\(\Vert \cdot \Vert _\infty \) denotes the corresponding norm). When there is no ambiguity, we write \(\mathcal {C}_T\) as \(\mathcal {C}\). \(L^2([0,T])\) denotes the space of square-integrable Lebesgue measurable functions on [0, T]. \(\mathcal {P}_Z\) denotes the law of the random variable Z.

1.2 Organization of the paper

In Sect. 2, we introduce the GFBM process and give its basic properties. In Sect. 2.2, we give the definitions and necessary results from the general theory of large deviations. As already mentioned, we use the weak convergence approach in this work; we introduce this approach and compare it with other well-known approaches for proving large deviation principles. We also state important results used in this approach. In Sect. 3, we prove that the GFBM process defined in (2.8) satisfies a large deviation principle. In Sect. 4, we establish a large deviation principle for the workload process and the running maximum process of a stochastic fluid queue with constant service rate and scaled GFBM as the arrival process. Finally, in Sect. 5, we study the long-time behavior of the running maximum process and the queueing process.

2 Preliminaries

2.1 Generalized Riemann–Liouville FBM

The generalized Riemann–Liouville (R–L) FBM \(\{X(t): t\ge 0\}\) is introduced in [27, Remark 5.1] and further studied in [17, Section 2.2]. The process X(t) is defined by

$$\begin{aligned} X(t) = c \int _0^t (t-u)^\alpha u^{-\gamma /2} d B(u)\,, \quad t \ge 0\,, \end{aligned}$$
(2.1)

where B(t) is a standard Brownian motion and \(c \in \mathbb {R}\),

$$\begin{aligned} \gamma \in [0,1), \quad \alpha \in \Big (-\frac{1}{2}+ \frac{\gamma }{2}, \ \frac{1}{2}+ \frac{\gamma }{2}\Big ) . \end{aligned}$$

The normalization constant c is such that \(\mathbb {E}[X(t)^2] = t^{2H}\) (it can be given explicitly as in Lemma 2.1 of [17]). The process X(t) is a continuous self-similar Gaussian process with Hurst parameter

$$\begin{aligned} H= \alpha - \frac{\gamma }{2} + \frac{1}{2} \in (0,1). \end{aligned}$$

It has non-stationary increments; in particular, the second moment for its increments is

$$\begin{aligned} \mathbb {E}\bigl [(X(t) - X(s))^2\bigr ] = c^2 \int _s^t (t-u)^{2\alpha } u^{-\gamma } du + c^2\int _0^s ( (t-u)^{\alpha } - (s-u)^{\alpha } )^2 u^{-\gamma } du, \end{aligned}$$
(2.2)

for any \(0\le s<t\). It has mean zero and covariance function

$$\begin{aligned} \text {Cov}(X(t), X(s)) = \mathbb {E}[X(s)X(t)] = c^2 \int _0^s (t-u)^{\alpha } (s-u)^{\alpha } u^{-\gamma } du, \end{aligned}$$
(2.3)

for \(0\le s \le t\). For simplicity, we refer to this process as GFBM.
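To make the definition concrete, the following is a minimal simulation sketch (our own illustration, not taken from the references): it approximates the Wiener integral in (2.1) by a Riemann sum on a uniform grid, evaluating the kernel at midpoints to sidestep the singularity of \(u^{-\gamma /2}\) at \(u=0\). The function name and the discretization are our choices, and we simply set \(c=1\), so sample variances match \(t^{2H}\) only up to a constant.

```python
import numpy as np

def gfbm_paths(alpha, gamma, T=1.0, n=1000, n_paths=200, c=1.0, rng=None):
    """Riemann-sum approximation of X(t) = c * int_0^t (t-u)^alpha u^(-gamma/2) dB(u).

    Assumes alpha > 0 and 0 <= gamma < 1, as in (2.6).
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n
    u = (np.arange(n) + 0.5) * dt      # kernel evaluated at midpoints, so u > 0
    t = np.arange(1, n + 1) * dt       # output time grid
    dB = rng.normal(0.0, np.sqrt(dt), size=(n, n_paths))        # Brownian increments
    K = np.clip(t[:, None] - u[None, :], 0.0, None) ** alpha    # (t - u)_+^alpha
    X = c * (K * u ** (-gamma / 2)) @ dB                        # X[i, j]: path j at time t[i]
    return t, X

# Example: alpha = 0.2, gamma = 0.6, so H = alpha - gamma/2 + 1/2 = 0.4
t, X = gfbm_paths(0.2, 0.6)
print(np.var(X[-1]))   # scales like T^{2H} up to a constant, since c = 1 here
```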

When \(\gamma =0\), the process X(t) becomes the standard R–L FBM

$$\begin{aligned} B^H(t) = c \int _0^t (t-u)^\alpha B( d u), \quad t\ge 0\,, \end{aligned}$$
(2.4)

which has

$$\begin{aligned} \mathbb {E}\bigl [(B^H(s) - B^H(t))^2\bigr ] = c^2 |t-s|^{2H}, \end{aligned}$$

and the covariance function

$$\begin{aligned} \text {Cov}(B^H(t), B^H(s)) = \mathbb {E}\bigl [B^H(s) B^H(t)\bigr ] = \frac{1}{2}c^2 \big (t^{2H} + s^{2H} - |t-s|^{2H}\big ). \end{aligned}$$
(2.5)

It is clear that the GFBM X loses the stationary-increments property that the standard FBM \(B^H\) possesses.

Some sample path properties of the GFBM X have been studied. It is shown in [27, Proposition 5.1] and [17, Theorems 3.1 and 4.1] that X has continuous sample paths almost surely, and moreover, is Hölder continuous with parameter \(H-\epsilon \) for any \(\epsilon >0\); the paths of X are almost surely non-differentiable if \(\gamma \in (0,1)\) and \((\gamma -1)/2 <\alpha \le 1/2\), and differentiable if \(\gamma \in (0,1)\) and \(1/2<\alpha \le (1+\gamma )/2\). In [32], further properties, such as the exact uniform modulus of continuity, are studied.

For standard FBM, the Hurst parameter H not only indicates the self-similarity property, but also dictates short- and long-range dependence, that is, \(H \in (0,1/2)\) and \(H\in (1/2,1)\) correspond to short- and long-range dependence, respectively. The usual definition of long-range dependence is through the autocovariance function, namely, letting \(\gamma _s=\text {Cov}(Z(t), Z(t+s))\) be the covariance function of a stationary process Z(t) (noting that \(\gamma _s\) is independent of t due to stationarity), one says the process has long-range dependence if \(\sum _{s=-\infty }^\infty \gamma _s =\infty \). However, for processes with non-stationary increments this definition does not apply. In [18], a concept of long-range dependence for self-similar processes (not necessarily with stationary increments) is introduced via the associated Lamperti transform (which turns the non-stationary process into a stationary one). Specifically, for a self-similar process Z(t) with Hurst parameter H and \(Z(0)=0\), the Lamperti transform \(\widetilde{Z}\) is defined by \(\widetilde{Z}(t) = e^{-Ht} Z(e^t)\) for \(t \in \mathbb {R}\), which is strictly stationary with covariance function \(\widetilde{\gamma }_s = \mathbb {E}[\widetilde{Z}(t) \widetilde{Z}(t+s) ] \) for any \(t, s \in \mathbb {R}\). We then say that the process Z has long-range dependence if \(\lim _{t\rightarrow \infty } \frac{1}{t} \log |\widetilde{\gamma }_t| + H >0\). For standard FBM, it can be checked that this condition is equivalent to \(2H-1>0\), that is, \(H>1/2\) (a short verification is given after (2.6) below). It is shown in [18, Proposition 6] that the GFBM has long-range dependence in this sense if and only if \(\alpha >0\). As a special case, when \(\gamma =0\), the FBM \(B^H\) is long-range dependent if \(H=\alpha + 1/2>1/2\). Observe that, for the GFBM, when

$$\begin{aligned} \gamma \in (0,1), \quad 0<\alpha < (1+\gamma )/2, \end{aligned}$$
(2.6)

the value of the Hurst parameter \(H = \alpha -\gamma /2+1/2\) can take any value in (0, 1). Specifically, for \(0<\alpha < \gamma /2\), \(H\in (0,1/2)\), while for \(\gamma /2<\alpha < (1+\gamma )/2\), \(H \in (1/2,1)\). Our results below on the large deviations of the GFBM and of the fluid queue with GFBM input assume the parameter range in (2.6).
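As promised above, here is a short verification of the Lamperti-transform criterion for the standard FBM (our own computation, included only for illustration, with \(c=1\)). Using (2.5),

$$\begin{aligned} \widetilde{\gamma }_s = e^{-H(2t+s)}\, \mathbb {E}\bigl [B^H(e^t) B^H(e^{t+s})\bigr ] = \frac{1}{2}\Bigl ( e^{-Hs} + e^{Hs}\bigl (1-(1-e^{-s})^{2H}\bigr )\Bigr ), \quad s\ge 0, \end{aligned}$$

and since \(1-(1-e^{-s})^{2H} \sim 2H e^{-s}\) as \(s\rightarrow \infty \), the two terms above decay like \(\frac{1}{2}e^{-Hs}\) and \(H e^{(H-1)s}\), respectively. Hence \(\lim _{s\rightarrow \infty }\frac{1}{s}\log \widetilde{\gamma }_s + H = \max \{-H, H-1\} + H = \max \{0, 2H-1\}\), which is strictly positive exactly when \(H>1/2\).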

2.2 Large deviation principle for functionals of BM

Suppose \((S,\mathcal {B}(S))\) is a Polish space with \(\mathcal {B}(S)\) being the Borel \(\sigma \)-algebra of S. Consider a family of S-valued random variables \(\{X^\varepsilon \}_{\varepsilon >0}\), whose corresponding family of probability measures is denoted by \(\mu ^\varepsilon \).

Definition 2.1

The family of S-valued random variables \(\{X^\varepsilon \}_{\varepsilon >0}\) (or the family of probability measures \(\{\mu ^\varepsilon \}_{\varepsilon >0}\)) is said to satisfy a large deviation principle (LDP) if there is a lower semicontinuous function \(I:S\rightarrow [0,\infty ]\) such that the following hold:

(1)

    For every \(A\in \mathcal {B}(S)\),

    $$\begin{aligned} -\inf _{x\in A^\circ }I(x)\le \liminf _{\varepsilon \rightarrow 0}\varepsilon \log \mu ^\varepsilon (A)\le \limsup _{\varepsilon \rightarrow 0}\varepsilon \log \mu ^\varepsilon (A)\le -\inf _{x\in {\bar{A}}}I(x), \end{aligned}$$

    where \(A^\circ \) and \({\bar{A}}\) denote the interior and closure of the measurable set A.

(2)

    For \(l\ge 0\), \(\{x:I(x)\le l\}\) is a compact set in S.

We refer to I as the rate function and \(\varepsilon \) as the rate.
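As a simple illustration of this rate convention (our own toy example, not from [4]): let Z be a standard normal random variable and let \(\mu ^\varepsilon \) be the law of \(\sqrt{\varepsilon }Z\). By the Gaussian tail estimate \(\log \mathbb {P}(Z\ge z)\sim -z^2/2\) as \(z\rightarrow \infty \), for every \(a>0\),

$$\begin{aligned} \varepsilon \log \mathbb {P}\bigl (\sqrt{\varepsilon }Z\ge a\bigr ) = \varepsilon \log \mathbb {P}\bigl (Z\ge a/\sqrt{\varepsilon }\bigr ) \rightarrow -\frac{a^2}{2}, \end{aligned}$$

so \(\{\sqrt{\varepsilon }Z\}_{\varepsilon >0}\) satisfies an LDP with rate \(\varepsilon \) and rate function \(I(x)=x^2/2\). Equivalently, \(\{\varepsilon Z\}_{\varepsilon >0}\) satisfies an LDP with the same rate function and rate \(\varepsilon ^2\), which is the convention that applies to \(X^\varepsilon \) in (2.8) below.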

It is well known that an equivalent way of defining the LDP is given by the result below (see [4, Theorem 1.5 and 1.8]).

Theorem 2.1

A family of probability measures \(\{\mu ^\varepsilon \}_{\varepsilon >0}\) satisfies an LDP with rate function I and rate \(\varepsilon \) if and only if for every bounded continuous function \(\Phi :S\rightarrow \mathbb {R}\),

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} -\varepsilon \log \int _S \exp \left( -\frac{1}{\varepsilon }\Phi (x)\right) \mu ^\varepsilon (dx)=\inf _{x\in S} \left[ I(x)+\Phi (x)\right] , \end{aligned}$$
(2.7)

and for every \(l\ge 0\), \(\{x\in S: I(x)\le l\}\) is compact in S.

The following result (see [4, Theorem 1.16]) is used often in the sections that follow.

Theorem 2.2

(Contraction principle) Suppose \((S',\mathcal {B}(S'))\) is another Polish space and let \(F:(S,\mathcal {B}(S))\rightarrow (S',\mathcal {B}(S'))\) be a continuous map. If the family \(\{\mu ^\varepsilon \}_{\varepsilon >0}\) satisfies an LDP with rate function I and rate \(\varepsilon \), then the family \(\{\nu ^\varepsilon \doteq \mu ^\varepsilon \circ F^{-1}\}_{\varepsilon >0}\) also satisfies an LDP on \(S'\) with the rate \(\varepsilon \) and the rate function \(I'\) given by

$$\begin{aligned} I'(y)= \inf _{x\in S: F(x)=y}I(x). \end{aligned}$$

One of the main goals of this work is to prove that

$$\begin{aligned} X^\varepsilon \doteq \varepsilon X \end{aligned}$$
(2.8)

satisfies the LDP with appropriate rate and rate function for the GFBM X in (2.1). From the existing literature, three common approaches can be used to arrive at the desired result. We briefly describe these approaches and point out the difficulties, or lack thereof, in adapting them to our case.

2.2.1 Using the Gärtner–Ellis theorem [11, Section 4.5.3]

In this approach, we study the logarithms of the moment generating functions of the finite-dimensional distributions of \(X^\varepsilon \) and their limiting behavior as \(\varepsilon \rightarrow 0\). It is also required to prove the exponential tightness (see [11, Page 8]) of the process. In contrast, using the weak convergence approach described briefly below, we are only required to show tightness of an appropriate family of processes.

2.2.2 Using LDP of \(\{\varepsilon B\}_{\varepsilon >0}\) and Theorem 2.2

It is well known that the family of \(\mathcal {C}\)-valued random variables \(\{\varepsilon B\}_{\varepsilon >0}\) satisfies an LDP [11, Theorem 5.2.3] with rate \(\varepsilon ^2\) and rate function \(I_B:\mathcal {C}\rightarrow [0,\infty ]\) given by

$$\begin{aligned} I_B(\xi )\doteq {\left\{ \begin{array}{ll} &{}\frac{1}{2}\int _0^T {\dot{\xi }}(s)^2ds, \text { whenever }\xi \text { is absolutely continuous and } \xi (0)=0,\\ &{}\infty , \text { otherwise.} \end{array}\right. } \end{aligned}$$

Remark 2.1

Fix \(b(\varepsilon )\) such that

$$\begin{aligned} \frac{\sqrt{\varepsilon }}{b(\varepsilon )}\rightarrow 0 \quad \text { and } \quad b(\varepsilon )\rightarrow 0, \quad \text { as }\varepsilon \rightarrow 0. \end{aligned}$$

Suppose A is an S-valued process on [0, T] such that \(\{\varepsilon A(\varepsilon ^{-1} \cdot )\}_{\varepsilon >0}\) satisfies an LDP with rate function I and rate \(\varepsilon \), and \(\{\sqrt{\varepsilon } A(\varepsilon ^{-1}\cdot )\}_{\varepsilon >0}\) converges weakly to a non-trivial distribution. Then it is of interest to study the asymptotic behavior of \(\{b(\varepsilon )\sqrt{\varepsilon } A(\varepsilon ^{-1}\cdot )\}_{\varepsilon >0}\), which is, in some sense, in between the two regimes above. The process \(\{b(\varepsilon )\sqrt{\varepsilon } A(\varepsilon ^{-1}\cdot )\}_{\varepsilon >0}\) is said to satisfy a moderate deviations principle (MDP) if it satisfies an LDP with some rate function \({\bar{I}}\) and rate \(b(\varepsilon )^2\).

Clearly, both families \(\{\sqrt{\varepsilon }B\}_{\varepsilon >0}\) and \(\{b(\varepsilon ) B\}_{\varepsilon >0}\) satisfy LDPs with the same rate function \(I_B\) and rates \(\varepsilon \) and \(b(\varepsilon )^2\), respectively. But the LDP of \(\{b(\varepsilon )B\}\) can be framed as the MDP by noting that the laws of \(\{b(\varepsilon )\sqrt{\varepsilon }B(\cdot \,\varepsilon ^{-1})\}\) and \(\{b(\varepsilon ) B\}\) are equal (by Brownian scaling, \(\sqrt{\varepsilon }B(\cdot \,\varepsilon ^{-1})\) has the same law as B). In other words, the rate functions corresponding to the LDP and the MDP are the same; it is just the rates that change accordingly. Since the GFBM X as defined in (2.1) is a linear function of the Brownian motion B, similar comments apply to X. Hence, without loss of generality, we consider only the large deviation behavior, as the driving noise in our case is a Brownian motion.

Now consider a \(\mathcal {C}\)-valued process defined by \(Y^\varepsilon \doteq F(\varepsilon B)\) for a continuous map \(F:\mathcal {C}\rightarrow \mathcal {C}\). Using Theorem 2.2, we can conclude that \(\{Y^\varepsilon \}_{\varepsilon >0}\) satisfies an LDP with rate \(\varepsilon ^2\) and rate function \(I_Y:\mathcal {C}\rightarrow [0,\infty ]\) given by

$$\begin{aligned} I_Y(\eta )= \frac{1}{2}\inf _{\xi \in \mathcal {C}: \eta =F(\xi )} \int _0^T{\dot{\xi }}(s)^2ds. \end{aligned}$$

This approach was used in [6, Theorem 3.1] to prove the LDP of the standard FBM:

$$\begin{aligned} Y^\varepsilon (t) =F(\varepsilon B)(t)\doteq \varepsilon \int _0^t(t-s)^{H-\frac{1}{2}}dB(s), \quad \text { for } H>\frac{1}{2}. \end{aligned}$$
(2.9)

It can be checked that F as defined above is a continuous map from \(\mathcal {C}\) to \(\mathcal {C}\). (In fact, a more general class of processes is considered in [6], where the Brownian motion B is replaced by any process with stationary increments satisfying an LDP.) Unfortunately, we cannot adapt this method to our case, as the map defined by

$$\begin{aligned} G(\xi )(t)\doteq \int _0^t(t-s)^\alpha s^{-\frac{\gamma }{2}}d\xi (s) \end{aligned}$$

fails to be continuous from \(\mathcal {C}\) to \(\mathcal {C}\). This is mainly due to the presence of the term \(s^{-\frac{\gamma }{2}}\) in the integral: without a strong decay of \(\xi (s)\) as \(s\rightarrow 0\), the above integral may not be well defined. Indeed, consider the following: Fix \(\gamma \in (0,1)\) and choose \(\xi \in \mathcal {C}\) such that \(\xi (s)= s^{\beta }\) on \([0,\delta _1]\), with \(0<\delta _1<t\) and \(0<\beta <\frac{\gamma }{2}\). This choice is sufficient to illustrate the effect of \(s^{-\frac{\gamma }{2}}\), although \(\xi \) of a more general form can also be considered. With the above choice of \(\xi \), we have, for any \(0<\delta<\delta _1<t\),

$$\begin{aligned} \int _0^t(t-s)^\alpha s^{-\frac{\gamma }{2}}\xi (ds)&\ge \beta \int _{\delta }^{\delta _1} (t-s)^\alpha s^{-\frac{\gamma }{2}}s^{\beta -1}ds \\&\ge \beta (t-\delta _1)^\alpha \int _{\delta }^{\delta _1} s^{-\frac{\gamma }{2}}s^{\beta -1}ds \\&= \frac{\beta (t-\delta _1)^\alpha }{-\frac{\gamma }{2}+\beta } \left( \delta _1^{-\frac{\gamma }{2}+\beta }- \delta ^{-\frac{\gamma }{2}+\beta }\right) \\&\uparrow \infty , \quad \text { as} \quad \delta \rightarrow 0. \end{aligned}$$

It is easy to see that the set of all functions \(\xi \in \mathcal {C}\) satisfying the above property forms an open set in \(\mathcal {C}\). Therefore, we can conclude that the map G is not well defined on at least an open subset of \(\mathcal {C}\). In other words, we cannot use Theorem 2.2 on the map G. However, we note that the rate function corresponding to \(Y^\varepsilon \) is obtained from the rate function corresponding to \(X^\varepsilon \) by directly evaluating it at \(\gamma =0\); compare [6, Theorem 3.1] and Theorem 3.1. A numerical illustration of this divergence is sketched below.
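The following is a small numerical sketch of this divergence (our own illustration; the parameter values are arbitrary choices satisfying \(0<\beta <\gamma /2\) and \(\alpha >0\)): the truncated integral over \([\delta ,t]\) blows up like \(\delta ^{\beta -\gamma /2}\) as \(\delta \rightarrow 0\).

```python
from scipy.integrate import quad

# Illustrative parameters (ours): gamma in (0,1), 0 < beta < gamma/2, alpha > 0
alpha, gamma, beta, t = 0.5, 0.8, 0.2, 1.0

def truncated_integral(delta):
    # int_delta^t (t-s)^alpha s^(-gamma/2) dxi(s) with xi(s) = s^beta,
    # i.e. dxi(s) = beta * s^(beta-1) ds
    val, _ = quad(lambda s: (t - s) ** alpha * s ** (-gamma / 2) * beta * s ** (beta - 1),
                  delta, t, limit=200)
    return val

for delta in [1e-2, 1e-4, 1e-6, 1e-8]:
    # grows like delta**(beta - gamma/2) = delta**(-0.2) as delta -> 0
    print(f"delta = {delta:.0e}: integral = {truncated_integral(delta):.3f}")
```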

2.2.3 Using weak convergence approach [4, Section 3.2]

This approach can be used to study the large deviation behavior of any \(\mathcal {C}\)-valued family of random variables of the form \(\{Z^\varepsilon \doteq R(\varepsilon B) \}\), where \(R:\mathcal {C}\rightarrow \mathcal {C}\) is a Borel measurable map. The key tool used in this approach is the following variational representation of exponential functionals of the Brownian motion B.

Theorem 2.3

[3, Theorem 3.1] For a bounded Borel measurable functional \(\Psi :\mathcal {C}\rightarrow \mathbb {R}\),

$$\begin{aligned} -\log \mathbb {E}\Bigg [\exp \Big (-\Psi (B)\Big )\Bigg ]=\inf _{v\in \mathcal {A}}\mathbb {E}\left[ \frac{1}{2}\int _0^Tv(s)^2 ds + \Psi \left( B+\int _0^\cdot v(s) ds\right) \right] . \end{aligned}$$
(2.10)

Here, \(\mathcal {A}\) is the set of \(\mathcal {F}_t\)-progressively measurable processes \(v(\cdot )\) such that

$$\begin{aligned} \mathbb {E}\left[ \int _0^T v(s)^2 ds\right] <\infty . \end{aligned}$$

In what follows, we sometimes refer to elements of \(\mathcal {A}\) as controls. Using the above result, we are set to prove the LDP of \(Z^\varepsilon =R(\varepsilon B)\) in the following way.

For \(\varepsilon >0\) and any bounded continuous function \(\Phi :\mathcal {C}\rightarrow \mathbb {R}\), we first rewrite (2.10) by choosing \(\Psi (B)= \varepsilon ^{-2}\Phi \circ R(\varepsilon B)= \varepsilon ^{-2}\Phi (Z^\varepsilon )\) and defining \(Z^{\varepsilon ,v}\doteq R(\varepsilon B +\int _0^\cdot v(s)ds)\):

$$\begin{aligned} -\varepsilon ^2\log \mathbb {E}\left[ \exp \left( -\frac{1}{\varepsilon ^{2}}\Phi (Z^\varepsilon )\right) \right]&=-\varepsilon ^2\log \mathbb {E}\left[ \exp \left( -\varepsilon ^{-2}\Phi \circ R(\varepsilon B)\right) \right] \nonumber \\&=\varepsilon ^2\inf _{v\in \mathcal {A}}\mathbb {E}\left[ \frac{1}{2}\int _0^Tv(s)^2 ds \right. \nonumber \\&\quad \left. + \varepsilon ^{-2}\Phi \circ R\left( \varepsilon B+\varepsilon \int _0^\cdot v(s) ds\right) \right] \nonumber \\&=\inf _{v\in \mathcal {A}}\mathbb {E}\left[ \frac{\varepsilon ^2}{2}\int _0^Tv(s)^2 ds + \Phi \circ R\left( \varepsilon B+\varepsilon \int _0^\cdot v(s) ds\right) \right] \end{aligned}$$
(2.11)
$$\begin{aligned}&=\inf _{v\in \mathcal {A}}\mathbb {E}\left[ \frac{1}{2}\int _0^Tv(s)^2 ds + \Phi (Z^{\varepsilon ,v})\right] . \end{aligned}$$
(2.12)

To get the final equality (2.12), we redefined \(\varepsilon v\) as v; this does not change the infimum, and with this substitution \(\Phi \circ R(\varepsilon B+\int _0^\cdot v(s)ds)=\Phi (Z^{\varepsilon ,v})\). To prove the LDP for \(\{Z^\varepsilon \}_{\varepsilon >0}\), we now work with the expression on the left-hand side above. Note that it resembles the left-hand side of (2.7) without the limit.

Using Theorem 2.1, to conclude that \(\{Z^\varepsilon \}_{\varepsilon >0}\) satisfies an LDP, it remains to show that

(1)

the expression in (2.12) has a limit as \(\varepsilon \rightarrow 0\);

(2)

    this limit is equal to

    $$\begin{aligned} \inf _{x\in \mathcal {C}}\left[ I(x)+ \Phi (x)\right] , \end{aligned}$$

    for some lower semi-continuous function \(I:\mathcal {C}\rightarrow [0,\infty ]\) with compact level sets.

To this end, we require the following lemma [4, Page 62], which states that there are nearly optimal controls for the right-hand side of (2.12) whose \(L^2([0,T])\) norms are bounded almost surely.

Lemma 2.1

For every \(\delta >0\), there is \(M<\infty \) such that

$$\begin{aligned} - \varepsilon ^2\log \mathbb {E}\left[ \exp \left( -\frac{1}{\varepsilon ^2}\Phi (Z^\varepsilon )\right) \right] \ge \inf _{v\in \mathcal {A}_{b,M}}\mathbb {E}\left[ \frac{1}{2}\int _0^Tv(s)^2 ds + \Phi (Z^{\varepsilon ,v})\right] -\delta , \end{aligned}$$

for every \(\varepsilon >0\). Here, \(\mathcal {A}_{b,M}\) is the subset of \(\mathcal {A}\) consisting of all \(v\in \mathcal {A}\) such that \(\int _0^Tv(s)^2ds\le M,\quad \mathbb {P}-\) a.s.

In the above, the maps F and R are chosen to be \(\mathcal {C}\)-valued for simplicity. They are allowed to take values in any Polish space.

3 LDP for the generalized R–L FBM

In this section, we prove the LDP for the process \(\{X^\varepsilon \}_{\varepsilon >0}\) defined in (2.8).

Theorem 3.1

Assume that \((\alpha ,\gamma )\) satisfies (2.6). Then \(\{X^\varepsilon \}_{\varepsilon >0}\) satisfies an LDP with rate \(\varepsilon ^2\) and rate function \(I_X:\mathcal {C}\rightarrow [0,\infty ]\) given by

$$\begin{aligned} I_X(\xi )\doteq {\left\{ \begin{array}{ll} &{}\inf _{v\in \mathcal {S}_\xi }\frac{1}{2}\int _0^T v(s)^2ds, \text { whenever }\mathcal {S}_\xi \ne \emptyset ,\\ &{}\infty , \text { otherwise.} \end{array}\right. } \end{aligned}$$
(3.1)

Here \(\mathcal {S}_\xi \), for \(\xi \in \mathcal {C}\), is the collection of all \(v\in L^2([0,T])\) such that

$$\begin{aligned} \xi (t)=c\int _0^t (t-s)^{\alpha }s^{-\frac{\gamma }{2}}v(s)ds. \end{aligned}$$

Remark 3.1

This result for the case \(\gamma =0\) can be obtained as a special case of [6, Theorem 3.1]. In the above theorem, we get the rate function in an implicit form. This is not a consequence of the \(s^{-\frac{\gamma }{2}}\) term in the definition of \(X(\cdot )\), but of the \((t-s)^\alpha \) term. To see this, one can take \(\alpha =0\) and proceed with the same proof. The rate function in this case turns out to be

$$\begin{aligned} I_X(\xi )= \frac{1}{2} \int _0^T s^{{\gamma }} {\dot{\xi }}(s)^2ds, \end{aligned}$$

whenever \(\xi \) is absolutely continuous on [0, T], and \(\infty \) otherwise. Note that the hypothesis of the above theorem assumes \(\alpha >0\), but this is not an issue in adapting the same proof.
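For completeness, here is the short computation behind this claim (taking \(c=1\) for simplicity): with \(\alpha =0\), the constraint defining \(\mathcal {S}_\xi \) reads \(\xi (t)=\int _0^t s^{-\gamma /2}v(s)ds\), so for absolutely continuous \(\xi \) the set \(\mathcal {S}_\xi \) consists of the single element \(v(s)=s^{\gamma /2}{\dot{\xi }}(s)\), and hence

$$\begin{aligned} I_X(\xi )= \frac{1}{2}\int _0^T v(s)^2ds = \frac{1}{2} \int _0^T s^{{\gamma }} {\dot{\xi }}(s)^2ds, \end{aligned}$$

provided \(s^{\gamma /2}{\dot{\xi }}\in L^2([0,T])\) (and \(I_X(\xi )=\infty \) otherwise).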

Remark 3.2

This result is used repeatedly in the sections that follow. The techniques of the proof break down as \(\gamma \rightarrow 1\). This is mainly because, when \(\gamma =1\), the process

$$\begin{aligned} \int _0^t(t-s)^\alpha s^{-\frac{1}{2}} dB(s) \end{aligned}$$

is not well defined, since \(\int _0^t (t-s)^{2\alpha } s^{-1}ds=\infty \) for every \(t>0\), i.e., the integrand fails to be square integrable near zero.

Define

$$\begin{aligned} X^{\varepsilon ,v}(t)\doteq \varepsilon c\int _0^t (t-s)^\alpha s^{-\frac{\gamma }{2}}dB(s) + c\int _0^t (t-s)^\alpha s^{-\frac{\gamma }{2}}v(s)ds. \end{aligned}$$
(3.2)

This process will be used in the following two lemmas.

Lemma 3.1

For any bounded continuous function \(\Phi :\mathcal {C}\rightarrow \mathbb {R}\),

$$\begin{aligned} \liminf _{\varepsilon \rightarrow 0} -\varepsilon ^2\log \mathbb {E}\left[ \exp \left( -\frac{1}{\varepsilon ^2}\Phi (X^\varepsilon )\right) \right] \ge \inf _{x\in \mathcal {C}} \left[ I_X(x)+\Phi (x)\right] , \end{aligned}$$

with \(I_X\) as defined in the statement of Theorem 3.1.

Proof

Fix \(\delta >0\). From Lemma 2.1, we have

$$\begin{aligned} - \varepsilon ^2\log \mathbb {E}\left[ \exp \left( -\frac{1}{\varepsilon ^2}\Phi (X^\varepsilon )\right) \right] \ge \inf _{v\in \mathcal {A}_{b,M}}\mathbb {E}\left[ \frac{1}{2}\int _0^Tv(s)^2 ds + \Phi (X^{\varepsilon ,v})\right] -\delta , \end{aligned}$$

Recall that \(\mathcal {A}_{b,M}\) is the subset of \(\mathcal {A}\) consisting of all \(v\in \mathcal {A}\) such that \(\int _0^Tv(s)^2ds\le M,\; \mathbb {P}-\text {a.s.}\)

Now consider a \(\delta \)-optimal control \(v^\varepsilon \in \mathcal {A}_{b,M}\) to the above infimum, that is, \(v^\varepsilon \) satisfies

$$\begin{aligned} -\varepsilon ^2\log \mathbb {E}\left[ \exp \left( -\frac{1}{\varepsilon ^2}\Phi (X^\varepsilon )\right) \right] \ge \mathbb {E}\left[ \frac{1}{2}\int _0^Tv^\varepsilon (s)^2 ds + \Phi (X^{\varepsilon ,v^\varepsilon })\right] -2\delta . \end{aligned}$$

Since \(\int _0^T v^\varepsilon (s)^2ds \le M\), \(\{v^\varepsilon \}_{\varepsilon >0}\) is weakly precompact in \(L^2([0,T])\), i.e., there exist a subsequence \(\varepsilon _n\) and a \(v\in L^2([0,T])\) such that \(\int _0^T v^{\varepsilon _n}(s)u(s)ds\rightarrow \int _0^T v(s) u(s)ds \), for every \(u\in L^2([0,T])\).

For now, let us assume that the family of \(\mathcal {C}\times L^2([0,T])\)-valued random variables \(\{(X^{\varepsilon ,v^\varepsilon },v^\varepsilon )\}_\varepsilon \) is tight. Let \(\varepsilon _n\) be the converging subsequence with \(({\bar{X}}^v, v)\) as the corresponding weak limit and write \((X^{\varepsilon _n,v^{\varepsilon _n}},v^{\varepsilon _n})\) as \((X^n,v^n)\) when there is no ambiguity. From the Skorohod representation theorem, we have a probability space \(\left( \Omega ^*,\mathcal {F}^*,\mathbb {P}^*\right) \) in which

$$\begin{aligned} \left( X^n,v^n\right) \rightarrow \left( {\bar{X}}^v,v\right) ,\quad \mathbb {P}^*- \text {a.s.} \end{aligned}$$

and the distributions of B, \(\{X^n\}\), \(\{v^n\}\), \({\bar{X}}^v\) and v remain the same under \(\mathbb {P}^*\) and \(\mathbb {P}\). We have

$$\begin{aligned} \liminf _{\varepsilon _n\rightarrow 0} - \varepsilon _n^2\log \mathbb {E}\left[ \exp \left( -\frac{1}{\varepsilon _n^2}\Phi (X^{\varepsilon _n})\right) \right]&\ge \liminf _{n\rightarrow \infty } \mathbb {E}\left[ \frac{1}{2}\int _0^Tv^{n}(s)^2 ds + \Phi (X^n)\right] -2\delta \\&\ge \mathbb {E}\left[ \frac{1}{2}\int _0^T v(s)^2 ds + \Phi ({\bar{X}}^v)\right] -2\delta \\&\ge \mathbb {E}\left[ \inf _{v\in \mathcal {S}_{{\bar{X}}^v}}\frac{1}{2}\int _0^Tv(s)^2ds +\Phi ({\bar{X}}^v)\right] -2\delta \\&\ge \inf _{x\in \mathcal {C}} \left[ I_X(x)+\Phi (x)\right] -2\delta . \end{aligned}$$

Here the second inequality follows from Fatou’s lemma, together with the weak lower semicontinuity of the \(L^2\) norm and the continuity of \(\Phi \), and the third follows since \(v\in \mathcal {S}_{{\bar{X}}^v}\). From the arbitrariness of \(\delta \), we have the result. The construction of \((\Omega ^*,\mathcal {F}^*,\mathbb {P}^*)\) is necessary to characterize the limit points \(({\bar{X}}^v, v)\).

It now remains to show that \(\{(X^{\varepsilon ,v^\varepsilon },v^\varepsilon )\}_{\varepsilon >0}\) is in fact tight in \(\mathcal {C}\times L^2([0,T])\). To that end, note first that \(\{v^{\varepsilon }\}_{\varepsilon >0}\) is precompact in \(L^2([0,T])\) under the weak\(^*\) topology; indeed, any closed ball in \(L^2([0,T])\) is compact under the weak\(^*\) topology and \(\int _0^Tv^{\varepsilon }(s)^2ds\le M\). Let \(\varepsilon _n\) (denoted simply by n) be the converging subsequence and v the corresponding limit. Note that we have only concluded that the laws of \(v^n\) converge weakly to the law of v. From the Skorohod representation theorem, we can infer that

$$\begin{aligned} v^n\rightarrow v,\; \quad \mathbb {P}^*- \text {a.s.} \end{aligned}$$

Finally, we show that \(X^{\varepsilon _n,v^{\varepsilon _n}}\) (written as \(X^n\)) converges almost surely in \(\mathcal {C}\) and also characterize the limit. Note that

$$\begin{aligned} \varepsilon _n B \rightarrow 0 \quad \text { in }\mathcal {C},\;\quad \mathbb {P}^*- \text {a.s.} \end{aligned}$$

Recall that

$$\begin{aligned} X^n(t)&= \varepsilon _n c\int _0^t (t-s)^{\alpha }s^{-\frac{\gamma }{2}}B(ds)+ c\int _0^t (t-s)^{\alpha }s^{-\frac{\gamma }{2}}v^{n}(s)ds\\&\doteq X^n_1 (t) +X^n_2(t) \end{aligned}$$

and from the \(\mathbb {P}^*-\) a.s. convergence of \(\{v^n\}\), we know that for any \(u\in L^2([0,T])\),

$$\begin{aligned} \int _0^T u(s)v^{\varepsilon _n}(s)ds \rightarrow \int _0^Tu(s)v(s)ds, \quad \mathbb {P}^*-\text {a.s.} \end{aligned}$$

And since \( (t-s)^{\alpha }s^{-\frac{\gamma }{2}}\in L^2([0,T])\), for every \(t\in [0,T]\), we have

$$\begin{aligned} \int _0^T \mathbbm {1}_{s\in [0,t]}(t-s)^{\alpha }s^{-\frac{\gamma }{2}}v^{\varepsilon _n}(s)ds\rightarrow \int _0^T \mathbbm {1}_{s\in [0,t]}(t-s)^{\alpha }s^{-\frac{\gamma }{2}}v(s)ds. \end{aligned}$$

Consider the following: for \(1>h>0\), \(\mathbb {P}^*-\) a.s., we have

$$\begin{aligned}&|X^n_2(t+h)-X^n_2(t)|\le c \int _t^{t+h} (t+h-s)^{\alpha }s^{-\frac{\gamma }{2}}|v^{n}(s)|ds \nonumber \\&\qquad + c\int _0^t \left[ (t+h-s)^{\alpha }-(t-s)^{\alpha }\right] s^{-\frac{\gamma }{2}}|v^{n} (s)|ds \nonumber \\&\quad \le ch^\alpha \int _t^{t+h} s^{-\frac{\gamma }{2}}|v^{n}(s)|ds + c\max _{0\le s\le t}\{ (t+h-s)^{\alpha }-(t-s)^{\alpha } \}\int _0^t s^{-\frac{\gamma }{2}}|v^{n} (s)|ds \nonumber \\&\quad \le ch^{\alpha }\sqrt{\int _t^{t+h}s^{-\gamma }ds}\sqrt{\int _0^T |v^{n}(s)|^2ds} + c\max _{0\le s\le t}\{ (s+h)^{\alpha }-s^{\alpha } \}\int _0^t s^{-\frac{\gamma }{2}}|v^{n} (s)|ds \nonumber \\&\quad \le ch^{\alpha } \sqrt{ \frac{1}{1-\gamma }\left( (t+h)^{1-\gamma }-t^{1-\gamma }\right) }\sqrt{\int _0^T |v^{n}(s)|^2ds} + ch^\alpha \int _0^t s^{-\frac{\gamma }{2}}|v^{n} (s)|ds \nonumber \\&\quad \le ch^{\alpha } \sqrt{ \frac{1}{1-\gamma }h^{1-\gamma }}\sqrt{\int _0^T |v^{n}(s)|^2ds} +c h^\alpha \int _0^t s^{-\frac{\gamma }{2}}|v^{n} (s)|ds \nonumber \\&\quad \le ch^{\alpha -\frac{\gamma -1}{2}}\sqrt{\frac{M}{{1-\gamma }}} +c h^\alpha \int _0^t s^{-\frac{\gamma }{2}}|v^{n} (s)|ds \nonumber \\&\quad \le cK \max \left\{ h^{\alpha }, h^{\alpha -\frac{\gamma -1}{2}}\right\} \nonumber \\&\quad \le c Kh^\alpha , \end{aligned}$$
(3.3)

where

$$\begin{aligned} K\doteq \sup _{n\in \mathbb {N}}\sup _{0\le t\le T}\left\{ \sqrt{\frac{M}{{1-\gamma }}} + \int _0^t s^{-\frac{\gamma }{2}}|v^{n} (s)|ds \right\} \end{aligned}$$

and the last inequality follows since \(\alpha >0\) and \(0\le \gamma <1\). In the above, we have used the fact that

$$\begin{aligned} \int _0^t s^{-\frac{\gamma }{2}}|v^{\varepsilon _n} (s)|ds \quad \text { and } \quad \sqrt{\int _0^T |v^{\varepsilon _n}(s)|^2ds} \end{aligned}$$

are uniformly bounded in n (by the Cauchy–Schwarz inequality and \(\int _0^T v^{n}(s)^2ds\le M\)). To summarize, we have proved that \(X^n_2\) is \(\alpha \)-Hölder continuous with a Hölder constant independent of n, \(\mathbb {P}^*-\) a.s. \(X^n_2\) is also uniformly bounded in n. Indeed, from (3.3) (note that the bound with the maximum is valid for every \(0\le h\le T\)) with \(t=0\),

$$\begin{aligned} \sup _{0\le h\le T}|X^n_2(h)|\le cK\max \left\{ T^{\alpha }, T^{\alpha -\frac{\gamma -1}{2}}\right\} . \end{aligned}$$

Since \(\{X^n_2\}\) is uniformly bounded and equicontinuous in \(\mathcal {C}\), \(\mathbb {P}^*-\) a.s., the Arzelà-Ascoli theorem gives us the precompactness of \(\{X^n_2\}\), \(\mathbb {P}^*-\) a.s.

We now show that any limit point of \(\{X^n_2\}\) is given by

$$\begin{aligned} {\bar{X}}^v_2(t)\doteq c\int _0^t (t-s)^{\alpha }s^{-\frac{\gamma }{2}}v(s)ds. \end{aligned}$$

In other words, \(\{X^n_2\}\) is convergent in \(\mathcal {C}\), \(\mathbb {P}^*-\) a.s. To show this, for \(t\in [0,T]\), we have

$$\begin{aligned} |X^n_2(t)-{\bar{X}}^v_2(t)|&=c \left| \int _0^t(t-s)^\alpha s^{-\frac{\gamma }{2}} (v^{\varepsilon _n}(s)-v(s))ds \right| \\&\quad \rightarrow 0, \quad \text { as }n\rightarrow \infty , \end{aligned}$$

since \(v^n\rightarrow v\) weakly in \(L^2([0,T])\), \(\mathbb {P}^*-\) a.s.; this pointwise (in t) convergence, combined with the equicontinuity of \(\{X^n_2\}\) established above, yields convergence in \(\mathcal {C}\).

We now shift our focus to \(X^n_1\). Note that from [2, Theorem 1.6], for every \(\delta >0\), there is a compact set \(\textsf{K}_\delta \subset \mathcal {C}\) such that \(\mathbb {P}(X\in \textsf{K}_\delta )>1-\delta \). For every n with \(\varepsilon _n\le 1\),

$$\begin{aligned} 1-\delta < \mathbb {P}(\varepsilon _n X\in \varepsilon _n \textsf{K}_\delta )\le \mathbb {P}(\varepsilon _n X\in \widetilde{\textsf{K}}_\delta ). \end{aligned}$$

To understand the second inequality, note that for every compact set \(\textsf{K}\subset \mathcal {C}\), from the Arzelà–Ascoli theorem, there are two parameters that correspond to \(\textsf{K}\): C, a uniform bound on the \(\Vert \cdot \Vert _\infty \) norm, and \(\rho (\cdot )\), a common modulus of continuity of the elements in \(\textsf{K}\). Scaling by \(\varepsilon _n\le 1\) can only decrease both parameters, so every element of \(\varepsilon _n\textsf{K}_\delta \) admits the same bound C and modulus \(\rho (\cdot )\) as \(\textsf{K}_\delta \); letting \(\widetilde{\textsf{K}}_\delta \) denote the (compact) set of all functions in \(\mathcal {C}\) with these parameters, we obtain \(\varepsilon _n \textsf{K}_\delta \subset \widetilde{\textsf{K}}_\delta \). Therefore, \(\{\varepsilon _n X\}\) is tight in \(\mathcal {C}\); since \(\varepsilon _n\rightarrow 0\), this yields \(X^n_1=\varepsilon _n X\rightarrow 0\) in \(\mathcal {C}\) in probability, and hence \(X^n\rightarrow {\bar{X}}^v\doteq {\bar{X}}^v_2\). This completes the proof of the lemma. \(\square \)

Lemma 3.2

For any bounded continuous function \(\Phi :\mathcal {C}\rightarrow \mathbb {R}\),

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0} -\varepsilon ^2\log \mathbb {E}\left[ \exp \left( -\frac{1}{\varepsilon ^2}\Phi (X^\varepsilon )\right) \right] \le \inf _{x\in \mathcal {C}} \left[ I_X(x)+\Phi (x)\right] , \end{aligned}$$
(3.4)

where \(I_X\) is as defined in the statement of Theorem 3.1.

Proof

Fix \(\delta >0\) and choose a \(\delta \)-optimal \(x^*\in \mathcal {C}\) for the right-hand side of (3.4), i.e.,

$$\begin{aligned} I_X(x^*)+\Phi (x^*)\le \inf _{x\in \mathcal {C}} \left[ I_X(x)+\Phi (x)\right] +\delta \end{aligned}$$

and also choose a \(\delta \)-optimal \(v^*\in \mathcal {S}_{x^*}\), i.e.,

$$\begin{aligned} \frac{1}{2}\int _0^T v^*(s)^2ds \le \inf _{v\in \mathcal {S}_{x^*}}\frac{1}{2}\int _0^T v(s)^2ds+\delta . \end{aligned}$$

We note here that \(v^*\) is non-random, from the definition of \(\mathcal {S}_{x^*}\), as \(x^*\) is non-random. Now by (2.12), we obtain

$$\begin{aligned}&\limsup _{\varepsilon \rightarrow 0} -\varepsilon ^2\log \mathbb {E}\left[ \exp \left( -\frac{1}{\varepsilon ^2}\Phi (X^\varepsilon )\right) \right] \nonumber \\&\quad = \limsup _{\varepsilon \rightarrow 0}\inf _{v\in \mathcal {A}}\mathbb {E}\left[ \frac{1}{2}\int _0^Tv(s)^2 ds + \Phi (X^{\varepsilon , v})\right] \nonumber \\&\quad \le \limsup _{\varepsilon \rightarrow 0}\mathbb {E}\left[ \frac{1}{2}\int _0^Tv^*(s)^2 ds + \Phi (X^{\varepsilon , v^*})\right] \nonumber \\&\quad \le \frac{1}{2}\int _0^Tv^*(s)^2 ds + \limsup _{\varepsilon \rightarrow 0}\mathbb {E}\big [ \Phi (X^{\varepsilon ,v^*})\big ]. \end{aligned}$$
(3.5)

To proceed further, recall the fact from the proof of Lemma 3.1 that \(X^{\varepsilon ,v^*}(\cdot )\) converges weakly to

$$\begin{aligned} X^{0,v^*}(t)\doteq c\int _0^t (t-s)^\alpha s^{-\frac{\gamma }{2}} v^*(s)ds, \end{aligned}$$

which is non-random. Since \(v^*\in \mathcal {S}_{x^*}\),

$$\begin{aligned} x^*(t)=c\int _0^t (t-s)^\alpha s^{-\frac{\gamma }{2}} v^*(s)ds=X^{0,v^*}(t). \end{aligned}$$

Thus we obtain

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0} -\varepsilon ^2\log \mathbb {E}\left[ \exp \left( -\frac{1}{\varepsilon ^2}\Phi (X^\varepsilon )\right) \right]&\le \frac{1}{2}\int _0^Tv^*(s)^2 ds + \limsup _{\varepsilon \rightarrow 0}\mathbb {E}\big [ \Phi (X^{0,v^*})\big ] \\&\le \frac{1}{2}\int _0^Tv^*(s)^2 ds+ \Phi (X^{0,v^*}) \\&\le I_X(x^*)+\delta + \Phi (x^*)+\delta \\&\le \inf _{x\in \mathcal {C}} \left[ I_X(x)+\Phi (x)\right] +2\delta . \end{aligned}$$

Here the first inequality follows from the last display in (3.5) by applying the continuous mapping theorem and the weak convergence of \(X^{\varepsilon ,v^*}(\cdot )\) to \(X^{0,v^*}(\cdot )\), and the second inequality follows since \(X^{0,v^*}\) is non-random. From the arbitrariness of \(\delta \), we have the result. \(\square \)

Lemma 3.3

For every \(l\ge 0\), \(\{x\in \mathcal {C}:I_X(x)\le l\}\) is compact in \(\mathcal {C}\).

Proof

Fix \(l\ge 0\) and consider a sequence \(\{\xi _n\}_{n\in \mathbb {N}}\subset \{\xi :I_X(\xi )\le l\}\). Now, for every \(n\in \mathbb {N}\), there exists \(v_n\in \mathcal {S}_{\xi _n}\) such that

$$\begin{aligned} \frac{1}{2}\int _0^Tv_n(s)^2ds\le I_X(\xi _n)+\frac{1}{n}\le l+\frac{1}{n}. \end{aligned}$$

Therefore, \(\{v_n\}_{n\in \mathbb {N}}\) is precompact in \(L^2([0,T])\) under weak\(^*\) topology. Denote the converging subsequence again by n and the limit by \({\bar{v}}\).

Consider

$$\begin{aligned} \xi _n(t) = c\int _0^t (t-s)^\alpha s^{-\frac{\gamma }{2}}v_n(s)ds. \end{aligned}$$

From the proof of Lemma 3.1, it is clear that \(\{\xi _n\}_{n\in \mathbb {N}}\) is precompact in \(\mathcal {C}\). Let \({\bar{\xi }}\) be a limit of \(\{\xi _n\}\) along a subsequence, which we again denote by n. Passing to the limit (using the weak\(^*\) convergence of \(v_n\) to \({\bar{v}}\), as in the proof of Lemma 3.1), we have

$$\begin{aligned} {\bar{\xi }}(t)= c\int _0^t (t-s)^\alpha s^{-\frac{\gamma }{2}}{\bar{v}}(s)ds. \end{aligned}$$

Clearly, \({\bar{v}}\in \mathcal {S}_{{\bar{\xi }}}\) and, by the weak lower semicontinuity of the \(L^2\) norm, \(I_X({\bar{\xi }})\le \frac{1}{2}\int _0^T {\bar{v}}(s)^2ds\le l\). Hence, \({\bar{\xi }}\in \{\xi :I_X(\xi )\le l\}\). This proves the desired result. \(\square \)

Proof of Theorem 3.1

Combining Lemmas 3.1, 3.2 and 3.3 with Theorem 2.1, we obtain the LDP of \(\{X^\varepsilon \}_{\varepsilon >0}\). \(\square \)

The following result gives an expression for the rate function \(I_X\) explicitly in terms of \(\xi \), rather than as the optimal value of an optimization problem.

Lemma 3.4

Let \(\mathcal {L}[f]\) denote the Laplace transform of f, whenever it is defined. Then,

$$\begin{aligned} I_X(\xi )= \frac{1}{2\Gamma (\alpha +1)^2} \int _0^T s^{\gamma } \left( \mathcal {L}^{-1}[p^{\alpha +1 } \mathcal {L}[\xi ](p)](s)\right) ^2 ds, \end{aligned}$$
(3.6)

whenever \(\xi \) is absolutely continuous on [0, T].

Proof

To begin with, we consider \({\bar{u}}\in L^2([0,\infty ))\) such that \(s^{-\frac{\gamma }{2}}{\bar{u}}\in L^2([0,\infty ))\). Now define a continuous function \({\bar{\xi }}\) on \([0,\infty )\) in the following way:

$$\begin{aligned} {\bar{\xi }}(t)=\int _0^t (t-s)^\alpha s^{-\frac{\gamma }{2}} {\bar{u}}(s)ds. \end{aligned}$$

Recall that the Laplace transform of a function f on \([0,\infty )\) is defined as

$$\begin{aligned} \mathcal {L}[f](p)\doteq \int _0^\infty e^{-pt}f(t)dt, \end{aligned}$$

whenever the integral is finite. Since \(|{\bar{\xi }}(t)|\le Ct^{\alpha +\frac{1}{2}}\) for some \(C>0\) (by the Cauchy–Schwarz inequality, as \(s^{-\frac{\gamma }{2}}{\bar{u}}\in L^2([0,\infty ))\)), \(\mathcal {L}[{\bar{\xi }}]\) is well defined. We are now in a position to consider the Laplace transform of \({\bar{\xi }}\). We have

$$\begin{aligned} \mathcal {L}[{\bar{\xi }}](p)&= \mathcal {L}\left[ \int _0^t (t-s)^\alpha s^{-\frac{\gamma }{2}} {\bar{u}}(s)ds\right] (p)\\&\quad = \mathcal {L}[t^\alpha ](p) \mathcal {L}\left[ t^{-\frac{\gamma }{2}} {\bar{u}}(t)\right] (p)\\&\quad = \frac{\Gamma (\alpha +1)}{p^{\alpha +1}}\mathcal {L}\left[ t^{-\frac{\gamma }{2}} {\bar{u}}(t)\right] (p). \end{aligned}$$

Therefore,

$$\begin{aligned} \mathcal {L}[t^{-\frac{\gamma }{2}} {\bar{u}}(t)](p)= \frac{1}{\Gamma (\alpha +1)}p^{\alpha +1 } \mathcal {L}[{\bar{\xi }}](p). \end{aligned}$$

Now, suppose the inverse Laplace transform \(\mathcal {L}^{-1}\) of the right hand side above exists. Then,

$$\begin{aligned} {\bar{u}}(t)= \frac{t^{\frac{\gamma }{2}}}{\Gamma (\alpha +1)} \mathcal {L}^{-1}[p^{\alpha +1 } \mathcal {L}[{\bar{\xi }}](p)](t), \end{aligned}$$

where \(\mathcal {L}^{-1}[F(p)](t)\) is defined (see [8, Page 42]) as

$$\begin{aligned} \mathcal {L}^{-1}[F(p)](t)=\frac{1}{2\pi i}\int _{c-i\infty }^{c+i\infty } F(p) e^{pt} dp, \quad \text { for }c>\eta , \end{aligned}$$
(3.7)

whenever F(p) is analytic for \(\Re (p)>\eta \). Since \(|{\bar{\xi }}(t)|\le Ct^{\alpha +\frac{1}{2}}\), we have the following:

$$\begin{aligned} |\mathcal {L}[{\bar{\xi }}](p)| \le C\int _0^\infty e^{-pt}t^{\alpha +\frac{1}{2}}dt<\infty , \quad \text { for every }p>0. \end{aligned}$$

From [8, Section 2.1], \(\mathcal {L}[{\bar{\xi }}](p)\) is analytic for \(\Re (p)>0\). Therefore, \(p^{\alpha +1}\mathcal {L}[{\bar{\xi }}](p)\) is analytic for \(\Re (p)>0\) and satisfies \(|p^{\alpha +1}\mathcal {L}[{\bar{\xi }}](p)|\le C_1 p^{-\frac{1}{2}}\) for real \(p>0\) and some \(C_1>0\). Hence, taking \(c>0\) in (3.7) gives a convergent integral. In other words, the definition in (3.7) is well defined for \(c>0\), and the inverse Laplace transform of \(p^{\alpha +1 } \mathcal {L}[{\bar{\xi }}](p)\) exists. To summarize, we have the desired result in (3.6). \(\square \)
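As a sanity check of this inversion (our own illustration, not part of the proof), the following sympy snippet recovers the control from \(\xi \) for the hypothetical choice \(v(s)=s^{\gamma /2}\), which gives \(\xi (t)=t^{\alpha +1}/(\alpha +1)\); the parameters \(\alpha =1\), \(\gamma =1/2\) (and \(c=1\)) are arbitrary illustrative values in the range (2.6).

```python
import sympy as sp

t, s, p = sp.symbols('t s p', positive=True)
alpha, gamma = 1, sp.Rational(1, 2)      # illustrative parameters in the range (2.6)

v = s ** (gamma / 2)                     # hypothetical control v(s) = s^{gamma/2}
xi = sp.integrate((t - s) ** alpha * s ** (-gamma / 2) * v, (s, 0, t))   # = t**2/2

# Invert as in the proof: u(t) = t^{gamma/2} / Gamma(alpha+1) * L^{-1}[p^{alpha+1} L[xi](p)](t)
L_xi = sp.laplace_transform(xi, t, p, noconds=True)                      # = 1/p**3
u = (t ** (gamma / 2) / sp.gamma(alpha + 1)
     * sp.inverse_laplace_transform(p ** (alpha + 1) * L_xi, p, t))
u = u.subs(sp.Heaviside(t), 1)           # we only consider t > 0

print(sp.simplify(u - v.subs(s, t)))     # prints 0: the control is recovered
```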

4 LDP for fluid queues with GFBM input

The main content of this section is the study of LDPs in the context of a stochastic fluid queue with GFBM input. In particular, we focus our attention on two processes: the workload process and the running maximum process, which are defined below.

We consider a stochastic fluid queue with the GFBM X in (2.1) as the arrival process, and a deterministic service rate \(k>0\). In particular, the workload process V(t) (assuming that \(V(0)=0\)) is given by

$$\begin{aligned} V(t)&\doteq \sup _{0\le s\le t} \left( X(t)-X(s) -k(t-s)\right) \nonumber \\&= X(t)-kt- \inf _{0\le s\le t}(X(s)-ks)\doteq F(X)(t). \end{aligned}$$
(4.1)

We also define another closely related process, namely, the running maximum process

$$\begin{aligned} M(t)\doteq \max _{0\le s\le t} \left( X(s)-ks\right) . \end{aligned}$$
(4.2)

Recall that for an input process X with stationary increments (defined on the whole real line), the workload process V(t) in (4.1) has the same distribution as the following:

$$\begin{aligned} V(t) {\mathop {=}\limits ^{\textrm{d}}}\max _{0 \le s \le t} \left( -X(-s)-ks\right) . \end{aligned}$$
(4.3)

This equality in distribution is often used to derive the stationary distribution of V(t) as \(t\rightarrow \infty \) (we defer the analysis of the steady state of V(t) to Sect. 5). It can be shown that, for an input process with stationary increments, V(t) is also equal in distribution to the running maximum process M(t). However, this approach does not apply to the queueing process with GFBM input, since the GFBM has non-stationary increments.

In [6], as a special case of [6, Theorem 4.1], the authors studied the LDP for the workload process V(t) with the FBM process Y in (2.9) as the input, and proved that \( F(\varepsilon Y)(T)\) satisfies an LDP with rate \(\varepsilon ^2\) and an appropriate rate function. (In fact, their result applies to a more general process for Y in (2.9), with the Brownian motion B replaced by a process with stationary increments satisfying an LDP.) It is well known that the reflection map \(F:\mathcal {C}\rightarrow \mathcal {C}\) is continuous (see, e.g., [7, Chapter 6], [33, Chapter 13.5]). Therefore, we can apply the contraction principle and obtain the sample path LDP for the process \(\{ F(\varepsilon X)(t): t\ge 0\}\). In the following, we study the LDP of \( F(\varepsilon X)(T)\) at a fixed time T, for which the rate function can be characterized explicitly.
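For intuition, the discrete analogues of (4.1) and (4.2) are easy to compute from any sampled input path. The following minimal numpy sketch (ours; the function name and the random-walk test path are illustrative choices) evaluates the workload and running maximum on a grid.

```python
import numpy as np

def workload_and_running_max(x, t, k):
    """Discretizations of (4.1) and (4.2) for a path x sampled on the grid t."""
    net = x - k * t                        # netput process X(s) - k s
    V = net - np.minimum.accumulate(net)   # V(t) = X(t) - kt - inf_{0<=s<=t} (X(s) - ks)
    M = np.maximum.accumulate(net)         # M(t) = max_{0<=s<=t} (X(s) - ks)
    return V, M

# Illustration with an arbitrary random-walk input path
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1001)
x = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(np.diff(t))))])
V, M = workload_and_running_max(x, t, k=1.0)
print(V[-1], M[-1])   # V(T) and M(T)
```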

Let

$$\begin{aligned} V^\varepsilon \doteq V^\varepsilon (T)= F(\varepsilon X)(T). \end{aligned}$$

Theorem 4.1

Assume that \((\alpha ,\gamma )\) satisfies (2.6). Then \(\{ V^\varepsilon \}\) satisfies an LDP with rate \(\varepsilon ^2\) and rate function \(I_V:\mathbb {R}_+\rightarrow [0,\infty ]\) given by

$$\begin{aligned} I_V(x)= \inf _{\xi \in \mathcal {C}: F(\xi )(T)=x} I_X(\xi ). \end{aligned}$$
(4.4)

Moreover, for \(\lambda \ge 0\), we have

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}-\varepsilon ^2\log \mathbb {P}\left( V^\varepsilon \ge \lambda \right) = \inf _{0\le s\le T} \frac{(k(T-s)+\lambda )^2}{2\left( v_1(T,s) + v_2(T,s)\right) }, \end{aligned}$$
(4.5)

where

$$\begin{aligned} v_1(T,s)\doteq c^2\int _0^s \left[ (T-\tau )^\alpha - (s-\tau )^\alpha \right] ^2\tau ^{-\gamma }d\tau , \end{aligned}$$
(4.6)

and

$$\begin{aligned} v_2(T,s)\doteq c^2\int _s^T (T-\tau )^{2\alpha } \tau ^{-\gamma }d\tau . \end{aligned}$$
(4.7)

Proof

From the continuity of the map \(F:\mathcal {C}\rightarrow \mathcal {C}\) and Theorem 2.2, we know that \( V^\varepsilon \) satisfies the LDP with rate \(\varepsilon ^2\) and rate function \(I_V:\mathbb {R}_+\rightarrow [0,\infty ]\) given in (4.4).

The proof of the result in (4.5) follows exactly along the lines of the proof of [6, Theorem 4.1]; we adapt that proof to our process. From the LDP of \(\{ V^\varepsilon \}\) and Theorem 2.1, we know that for any Borel set \(A\subset \mathbb {R}_+\),

$$\begin{aligned} -\inf _{x\in A^\circ }I_V(x)&\le \liminf _{\varepsilon \rightarrow 0}\varepsilon ^2 \log \mathbb {P}\left( V^\varepsilon \in A\right) \\&\le \limsup _{\varepsilon \rightarrow 0}\varepsilon ^2 \log \mathbb {P}\left( V^\varepsilon \in A\right) \le -\inf _{x\in {\bar{A}}}I_V(x). \end{aligned}$$

For \(\lambda \ge 0\), taking \(A=[\lambda ,\infty )\), we have

$$\begin{aligned} -\inf _{x\in (\lambda ,\infty )}I_V(x)&\le \liminf _{\varepsilon \rightarrow 0}\varepsilon ^2 \log \mathbb {P}\left( V^\varepsilon \ge \lambda \right) \\&\le \limsup _{\varepsilon \rightarrow 0}\varepsilon ^2 \log \mathbb {P}\left( V^\varepsilon \ge \lambda \right) \le -\inf _{x\in [\lambda ,\infty )}I_V(x). \end{aligned}$$

To prove (4.5), it suffices to show that

$$\begin{aligned} \inf _{x\in [\lambda ,\infty )} I_V(x)= \inf _{x\in (\lambda ,\infty )} I_V(x) = \inf _{0\le s\le T} \frac{(k(T-s)+\lambda )^2}{2\left( v_1(T,s) + v_2(T,s)\right) }. \end{aligned}$$

Since

$$\begin{aligned} \inf _{0\le s\le T} \frac{(k(T-s)+\lambda )^2}{2\left( v_1(T,s) + v_2(T,s)\right) } \end{aligned}$$

is continuous in \(\lambda \), proving that

$$\begin{aligned} \inf _{x\in [\lambda ,\infty )} I_V(x) = \inf _{0\le s\le T} \frac{(k(T-s)+\lambda )^2}{2\left( v_1(T,s) + v_2(T,s)\right) } \end{aligned}$$
(4.8)

automatically implies that

$$\begin{aligned} \inf _{x\in (\lambda ,\infty )} I_V(x) = \inf _{0\le s\le T} \frac{(k(T-s)+\lambda )^2}{2\left( v_1(T,s) + v_2(T,s)\right) }. \end{aligned}$$

Therefore, we only show (4.8).

The left-hand side of (4.8) can be rewritten as

$$\begin{aligned} \inf _{x\in [\lambda ,\infty )} I_V(x)= \inf _{u\in \mathcal {R}_\lambda }\frac{1}{2}\int _0^T u(\tau )^2d\tau , \end{aligned}$$

where,

$$\begin{aligned}&\mathcal {R}_\lambda \doteq \left\{ u\in L^2[0,T]: \sup _{0\le s\le T}\left( c\int _0^T (T-\tau )^\alpha \tau ^{-\frac{\gamma }{2}}u(\tau )d\tau \right. \right. \\&\quad \left. \left. -c\int _0^s (s-\tau )^\alpha \tau ^{-\frac{\gamma }{2}}u(\tau )d\tau -k(T-s)\right) \ge \lambda \right\} . \end{aligned}$$

Clearly,

$$\begin{aligned} \mathcal {R}_\lambda =\cup _{0\le s\le T}\mathcal {R}_\lambda (s) \end{aligned}$$

with

$$\begin{aligned} \mathcal {R}_\lambda (s)&\doteq \left\{ u\in L^2[0,T]:c\int _0^T (T-\tau )^\alpha \tau ^{-\frac{\gamma }{2}}u(\tau )d\tau -c\int _0^s (s-\tau )^\alpha \tau ^{-\frac{\gamma }{2}}u(\tau )d\tau \right. \\&\quad \left. -k(T-s) \ge \lambda \right\} \\&=\Bigg \{u\in L^2[0,T]:c\int _0^s \left[ (T-\tau )^\alpha - (s-\tau )^\alpha \right] \tau ^{-\frac{\gamma }{2}}u(\tau )d\tau \\&\quad +c\int _s^T (T-\tau )^\alpha \tau ^{-\frac{\gamma }{2}}u(\tau )d\tau \ge \lambda + k(T-s) \Bigg \}. \end{aligned}$$

Then,

$$\begin{aligned} \inf _{u\in \mathcal {R}_\lambda }\frac{1}{2}\int _0^T u(\tau )^2d\tau = \inf _{0\le s\le T}\inf _{u\in \mathcal {R}_\lambda (s)}\frac{1}{2}\int _0^T u(\tau )^2d\tau . \end{aligned}$$

The infimum inside can be solved explicitly using [6, Lemma 3.3 (ii)]. We then get

$$\begin{aligned} \inf _{u\in \mathcal {R}_\lambda (s)}\frac{1}{2}\int _0^T u(\tau )^2d\tau = \frac{(k(T-s)+\lambda )^2}{2\left( v_1(T,s)+v_2(T,s)\right) }, \end{aligned}$$

and the minimizer is given as follows:

$$\begin{aligned} u(\tau )={\left\{ \begin{array}{ll} c\frac{k(T-s)+\lambda }{v_1(T,s)+v_2(T,s)} \left[ (T-\tau )^\alpha - (s-\tau )^\alpha \right] \tau ^{-\frac{\gamma }{2}}, &{} \tau \in [0,s), \\ c\frac{k(T-s)+\lambda }{v_1(T,s)+v_2(T,s)}(T-\tau )^\alpha \tau ^{-\frac{\gamma }{2}}, &{} \tau \in [s,T]. \end{array}\right. } \end{aligned}$$
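One can check directly that this minimizer is consistent with the constants above: substituting u into the defining inequality of \(\mathcal {R}_\lambda (s)\) and using (4.6)–(4.7),

$$\begin{aligned} c\int _0^s \left[ (T-\tau )^\alpha - (s-\tau )^\alpha \right] \tau ^{-\frac{\gamma }{2}}u(\tau )d\tau +c\int _s^T (T-\tau )^\alpha \tau ^{-\frac{\gamma }{2}}u(\tau )d\tau = \frac{c^2(k(T-s)+\lambda )}{v_1(T,s)+v_2(T,s)}\cdot \frac{v_1(T,s)+v_2(T,s)}{c^2} = k(T-s)+\lambda , \end{aligned}$$

so the constraint is met with equality, and \(\frac{1}{2}\int _0^T u(\tau )^2d\tau = \frac{(k(T-s)+\lambda )^2}{2(v_1(T,s)+v_2(T,s))}\), as claimed.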

This proves the result. \(\square \)

We now prove an LDP for the running maximum process \(M(\cdot )\). Define \(M^\varepsilon \) by

$$\begin{aligned} M^\varepsilon =M^\varepsilon (T) =J(\varepsilon X)(T)\doteq \sup _{0\le s\le T}(\varepsilon X(s)-ks). \end{aligned}$$

Lemma 4.1

Assume that \((\alpha ,\gamma )\) satisfies (2.6). Then \(\{M^\varepsilon \}\) satisfies an LDP with rate \(\varepsilon ^2\) and rate function \(I_M:\mathbb {R}_+\rightarrow [0,\infty ]\) given by

$$\begin{aligned} I_M(x)= \inf _{\xi \in \mathcal {C}: J(\xi )(T)=x}I_X(\xi ). \end{aligned}$$

Moreover, we have

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} -\varepsilon ^{2}\log \mathbb {P}(M^\varepsilon \ge \lambda )= \chi (\lambda ,T ), \end{aligned}$$
(4.9)

where

$$\begin{aligned} \chi (\lambda ,T )={\left\{ \begin{array}{ll} \frac{(\lambda +kT)^2}{2T^{2H}}, &{} T< \frac{\lambda H}{k(1-H)},\\ \frac{k^{2H}}{2H^{2H} (1-H)^{2(1-H)}}\lambda ^{2(1-H)}, &{} \text { otherwise.} \end{array}\right. } \end{aligned}$$
(4.10)

Proof

As in the proof of Theorem 4.1, the LDP for \(\{M^\varepsilon \}\) with rate \(\varepsilon ^2\) and rate function \(I_M\) follows from the continuity of the map \(\xi \mapsto J(\xi )(T)\) and Theorem 2.2. The proof of the result in (4.9) follows exactly along the lines of the proof of [6, Corollary 3.4]; we adapt that proof to our process. From the LDP of \(\{ M^\varepsilon \}\) and Theorem 2.1, we know that for any Borel set \(A\subset \mathbb {R}_+\),

$$\begin{aligned} -\inf _{x\in A^\circ }I_M(x)&\le \liminf _{\varepsilon \rightarrow 0}\varepsilon ^2 \log \mathbb {P}\left( M^\varepsilon \in A\right) \\&\le \limsup _{\varepsilon \rightarrow 0}\varepsilon ^2 \log \mathbb {P}\left( M^\varepsilon \in A\right) \le -\inf _{x\in {\bar{A}}}I_M(x). \end{aligned}$$

For \(\lambda \ge 0\), taking \(A=[\lambda ,\infty )\), we have

$$\begin{aligned} -\inf _{x\in (\lambda ,\infty )}I_M(x)&\le \liminf _{\varepsilon \rightarrow 0}\varepsilon ^2 \log \mathbb {P}\left( M^\varepsilon \ge \lambda \right) \\&\le \limsup _{\varepsilon \rightarrow 0}\varepsilon ^2 \log \mathbb {P}\left( M^\varepsilon \ge \lambda \right) \le -\inf _{x\in [\lambda ,\infty )}I_M(x). \end{aligned}$$

To prove (4.9), it suffices to show that

$$\begin{aligned} \inf _{x\in [\lambda ,\infty )} I_M(x)= \inf _{x\in (\lambda ,\infty )} I_M(x) = \inf _{0\le s\le T}\frac{(\lambda +ks)^2}{2s^{2H}}=\chi (\lambda ,T). \end{aligned}$$

Since

$$\begin{aligned} \inf _{0\le s\le T}\frac{(\lambda +ks)^2}{2s^{2H}} \end{aligned}$$

is continuous in \(\lambda \), proving that

$$\begin{aligned} \inf _{x\in [\lambda ,\infty )} I_M(x) = \inf _{0\le s\le T}\frac{(\lambda +ks)^2}{2s^{2H}} \end{aligned}$$
(4.11)

automatically implies that

$$\begin{aligned} \inf _{x\in (\lambda ,\infty )} I_M(x) = \inf _{0\le s\le T}\frac{(\lambda +ks)^2}{2s^{2H}}. \end{aligned}$$

Therefore, we only show (4.11).

The left-hand side of (4.11) can be rewritten as

$$\begin{aligned} \inf _{x\in [\lambda ,\infty )} I_M(x)= \inf _{u\in \mathcal {Q}_\lambda }\frac{1}{2}\int _0^Tu(\tau )^2d\tau , \end{aligned}$$

where

$$\begin{aligned} \mathcal {Q}_\lambda \doteq \bigg \{u\in L^2[0,T]: \sup _{0\le s\le T}\left( c\int _0^s (s-\tau )^\alpha \tau ^{-\frac{\gamma }{2}}u(\tau )d\tau -ks\right) \ge \lambda \bigg \}. \end{aligned}$$

Clearly,

$$\begin{aligned} \mathcal {Q}_\lambda =\cup _{0\le s\le T}\mathcal {Q}_\lambda (s) \end{aligned}$$

with

$$\begin{aligned} \mathcal {Q}_\lambda (s)&\doteq \left\{ u\in L^2[0,T]:c\int _0^s (s-\tau )^\alpha \tau ^{-\frac{\gamma }{2}}u(\tau )d\tau -ks \ge \lambda \right\} \\&=\bigg \{u\in L^2[0,T]:c\int _0^s (s-\tau )^\alpha \tau ^{-\frac{\gamma }{2}}u(\tau )d\tau \ge \lambda + ks \bigg \}. \end{aligned}$$

Then,

$$\begin{aligned} \inf _{u\in \mathcal {Q}_\lambda }\frac{1}{2}\int _0^T u(\tau )^2d\tau = \inf _{0\le s\le T}\inf _{u\in \mathcal {Q}_\lambda (s)}\frac{1}{2}\int _0^T u(\tau )^2d\tau . \end{aligned}$$

The inner infimum on the right-hand side can be computed explicitly using [6, Lemma 3.3 (i)]. We then get

$$\begin{aligned} \inf _{u\in \mathcal {Q}_\lambda (s)}\frac{1}{2}\int _0^T u(\tau )^2d\tau = \frac{(\lambda +ks)^2}{2c^2\int _0^s(s-\tau )^{2\alpha } \tau ^{-{\gamma }} d\tau } = \frac{(\lambda +ks)^2}{2s^{2H}} \end{aligned}$$

and the minimizer is given as follows:

$$\begin{aligned} u(\tau )=\frac{c(\lambda +ks)}{c^2\int _0^s(s-r)^{2\alpha } r^{-{\gamma }}dr} (s-\tau )^\alpha \tau ^{-\frac{\gamma }{2}}, \text { for }\tau \in [0,s], \end{aligned}$$

with \(u(\tau )=0\) for \(\tau \in (s,T]\).

Therefore,

$$\begin{aligned} \inf _{x\in [\lambda ,\infty )} I_M(x)= \inf _{0\le s\le T}\frac{(\lambda +ks)^2}{2s^{2H}} ={\left\{ \begin{array}{ll} \frac{(\lambda +kT)^2}{2T^{2H}}, &{} T< \frac{\lambda H}{k(1-H)}, \\ \frac{k^{2H}}{2H^{2H} (1-H)^{2(1-H)}}\lambda ^{2(1-H)}, &{} \text { otherwise.} \end{array}\right. }\end{aligned}$$

This proves the result. \(\square \)
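As a numerical illustration (not part of the argument), the closed form (4.10) can be compared against a direct grid minimization of \((\lambda +ks)^2/(2s^{2H})\). The following minimal Python sketch does this; all names and parameter values are ours and purely illustrative.

\begin{verbatim}
import numpy as np

def chi(lam, T, k, H):
    # Closed form (4.10) for the decay rate of P(M^eps >= lam)
    if T < lam * H / (k * (1.0 - H)):
        return (lam + k * T) ** 2 / (2.0 * T ** (2 * H))
    return (k ** (2 * H) * lam ** (2 * (1 - H))
            / (2.0 * H ** (2 * H) * (1 - H) ** (2 * (1 - H))))

k, H, lam, T = 1.0, 0.7, 2.0, 10.0                 # illustrative values
s = np.linspace(1e-6, T, 1_000_000)
print(np.min((lam + k * s) ** 2 / (2.0 * s ** (2 * H))))  # grid infimum
print(chi(lam, T, k, H))                           # closed form; the two agree
\end{verbatim}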

4.1 Alternative proofs of Theorem 4.1 and Lemma 4.1 using Landau–Marcus–Shepp asymptotics

In the proofs of Theorem 4.1 and Lemma 4.1, we have used the large deviation asymptotics of \(X^\varepsilon =\varepsilon X\) in Theorem 3.1. Alternatively, these proofs can be given by a straightforward application of the well-known Landau–Marcus–Shepp asymptotics [24, Equation (1.1)], which reads as follows. For \(T>0\), suppose \(\{G_t:0\le t\le T\}\) is a centered Gaussian process. Then we have

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}-\varepsilon ^2 \log \mathbb {P}\bigg (\sup _{0\le s\le T} G_s> \varepsilon ^{-1}\bigg )= \frac{1}{2 \sigma ^2}, \end{aligned}$$
(4.12)

where \(\sigma ^2\doteq \sup _{0\le s\le T} \mathbb {E}[G_s^2]\).

To apply (4.12) to Theorem 4.1 and Lemma 4.1, we make the following observation (below, we only illustrate the proof of (4.1) using (4.12), as the other case follows exactly along the same lines):

$$\begin{aligned} \mathbb {P}\bigg ( \sup _{0\le s\le T} \big (\varepsilon X(T)-\varepsilon X(s)-k(T-s)\big )>\lambda \bigg )= \mathbb {P}\bigg ( \sup _{0\le s\le T} \frac{X(T)-X(s)}{\lambda + k(T-s)}> \varepsilon ^{-1}\bigg ). \end{aligned}$$

Since \(\frac{X(T)-X(s)}{\lambda + k(T-s)}\) is a centered Gaussian process in s, from (4.12), we have

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}-\varepsilon ^2 \log \mathbb {P}\bigg ( \sup _{0\le s\le T} \big (\varepsilon X(T)-\varepsilon X(s)-k(T-s) \big )>\lambda \bigg )&= \frac{1}{2 {\overline{\sigma }}^2 }, \end{aligned}$$

where

$$\begin{aligned} {\overline{\sigma }}^2 \doteq \sup _{0\le s\le T} \mathbb {E}\bigg [\frac{\big (X(T)-X(s)\big )^2}{\big (\lambda +k(T-s)\big )^2}\bigg ]=\bigg ({\inf _{0\le s\le T}\frac{(k(T-s)+\lambda )^2}{v_1(T,s) + v_2(T,s)} }\bigg )^{-1}. \end{aligned}$$

In the last equality, we have used (2.2); \(v_1\) and \(v_2\) are defined in (4.6) and (4.7). This gives (4.5).

Even though using (4.12) gives shorter proofs of Theorem 4.1 and Lemma 4.1, we believe that using the LDP (Theorem 3.1) of \(\{\varepsilon X\}_{\varepsilon >0}\) is a more general and robust approach. Indeed, (4.12) can itself be obtained as a consequence of the LDP of a general Gaussian process (see [1, Pages 53 and 57]).
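For concreteness, the constant \({\overline{\sigma }}^2\) (and hence the limit in (4.5)) can be evaluated by numerical quadrature, using that \(v_1(T,s)+v_2(T,s)=\mathbb {E}[(X(T)-X(s))^2]\) by (2.2), and computing this variance from the kernel \(c(t-u)^\alpha u^{-\gamma /2}\) of (2.1). The Python sketch below is a minimal illustration; the normalization of c (making \(\mathrm {Var}\, X(1)=1\)) is assumed, and all names and parameter values are ours.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

alpha, gamma_, k, lam, T = 0.3, 0.2, 1.0, 1.0, 1.0   # illustrative
c2 = 1.0 / beta(1.0 - gamma_, 2.0 * alpha + 1.0)     # c^2 with Var X(1) = 1

def incr_var(s, T):
    # Var(X(T)-X(s)) = v1(T,s) + v2(T,s), from the kernel of (2.1)
    i1, _ = quad(lambda u: ((T - u)**alpha - (s - u)**alpha)**2 * u**(-gamma_),
                 0.0, s)
    i2, _ = quad(lambda u: (T - u)**(2 * alpha) * u**(-gamma_), s, T)
    return c2 * (i1 + i2)

ss = np.linspace(0.0, T, 400)[1:-1]
rate = 0.5 * min((k * (T - s) + lam)**2 / incr_var(s, T) for s in ss)
print(rate)   # the decay rate 1/(2*sigma_bar^2) in (4.5)
\end{verbatim}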

5 Long-time behavior of \(M(\cdot )\) and \(V(\cdot )\)

In this section, we first study the existence of the steady state for the processes M and V. Upon showing the existence, we derive the tail asymptotics of the steady state \(M^*\) and \(V^*\) of M(t) and V(t), respectively.

We now briefly describe the method that is adopted.

  1. We first derive certain maximal inequalities and modulus of continuity estimates for the GFBM process X(t) in Lemmas 5.3 and 5.4. Using these and exploiting the self-similarity of X (Lemma 5.5), we establish (uniform in time) sub-exponential tail bounds of M(t) and V(t) for each fixed \(t>0\).

  2. We next prove the existence of a weak limit of the laws of V(t) as \(t\rightarrow \infty \), and the almost sure convergence of M(t) as \(t\rightarrow \infty \). Then, using the LDPs of \(\{M^\varepsilon \}_{\varepsilon >0}\) and \(\{V^\varepsilon \}_{\varepsilon >0}\), we derive the tail asymptotics of \(M^*\) and \(V^*\) (a weak limit point of \(\{V(t)\}_{t\in \mathbb {R}_+}\)).

Remark 5.1

In this section, for any analysis related to M(t), we assume that \((\alpha ,\gamma )\) satisfy (2.6), and for any analysis related to V(t) we impose the stronger assumption \(\alpha >\frac{\gamma }{2}\), in which case the Hurst parameter \(H \in (1/2,1)\).

Remark 5.2

Throughout the section, \(\delta _0\) is always the positive constant in Corollary A.1. We occasionally remind the reader of this.

We first give an alternative expression for X that is more amenable to analysis.

Lemma 5.1

Assume that \((\alpha ,\gamma )\) satisfy (2.6). Then, for \(t\ge 0\), \(\mathbb {P}-\) a.s., the GFBM X in (2.1) can be equivalently represented as

$$\begin{aligned} X(t)= \alpha c \int _0^t B(u)(t-u)^{\alpha -1}u^{-\frac{\gamma }{2}} du +\frac{\gamma c}{2} \int _0^t (t-u)^\alpha u^{-\frac{\gamma }{2}-1}B(u)du. \end{aligned}$$
(5.1)

Proof

We begin by recalling Itô’s product rule: for semi-martingales \(Z^1\) and \(Z^2\), for \(0 \le s \le t\),

$$\begin{aligned} Z^1(t)Z^2(t)= & {} Z^1(s)Z^2(s) + \int _s^tZ^1(u)dZ^2(u) +\int _s^t Z^2(u)dZ^1(u)\\{} & {} + \int _s^t d[Z^1,Z^2](u), \end{aligned}$$

where \([Z^1,Z^2](\cdot )\) is the cross-variation of \(Z^1\) and \(Z^2\).

Let

$$\begin{aligned} X_\rho (t)=c\int _\rho ^t (t-u)^{\alpha } u^{-\frac{\gamma }{2}} dB(u). \end{aligned}$$

Define

$$\begin{aligned} Z^1(u)= c(t-u)^\alpha u^{-\frac{\gamma }{2}}, \quad \text {and} \quad Z^2(u)= B(u). \end{aligned}$$

Even though \(Z^1(u)\) depends on t, we suppress this dependence because t is fixed throughout. Observe that \(Z^1(t)=0\) and that \([Z^1,Z^2](\cdot )\equiv 0\), since \(Z^1\) is of finite variation on \([\rho ,t]\). Note that we cannot apply Itô’s product rule with \(s=0\), since \(Z^1(0)\) is ill-defined for \(\gamma >0\). To overcome this issue, we set \(s=\rho \) and then take \(\rho \rightarrow 0\); this is why we defined the process \(X_\rho (t)\) above. Thus, applying Itô’s product rule, we obtain

$$\begin{aligned}&c(t-\rho )^\alpha \rho ^{-\frac{\gamma }{2}}B(\rho ) + X_\rho (t) - \alpha c \int _\rho ^t B(u)(t-u)^{\alpha -1}u^{-\frac{\gamma }{2}} du\nonumber \\&\quad -\frac{c\gamma }{2} \int _\rho ^t (t-u)^\alpha u^{-\frac{\gamma }{2}-1}B(u)du =0. \end{aligned}$$
(5.2)

Now, we take \(\rho \rightarrow 0\) (or along a subsequence). First of all, we have

$$\begin{aligned} \lim _{\rho \rightarrow 0}\rho ^{-\frac{\gamma }{2}}B(\rho )=0. \end{aligned}$$

This follows from the law of the iterated logarithm for Brownian motion at zero: \(\limsup _{\rho \rightarrow 0}\frac{B(\rho )}{ \sqrt{\rho \log \log (\rho ^{-1})}}=\sqrt{2}\), \(\mathbb {P}-\) a.s.; see, e.g., [20, Theorem 2.9.23]. We then have

$$\begin{aligned} I^1_\rho \doteq \int _\rho ^t (t-u)^\alpha u^{-\frac{\gamma }{2}-1}B(u)du\xrightarrow {\rho \rightarrow 0} I^1\doteq \int _0^t (t-u)^\alpha u^{-\frac{\gamma }{2}-1}B(u)du, \;\quad \text {in } L^1(\mathbb {P}). \end{aligned}$$

Indeed,

$$\begin{aligned} \mathbb {E}\left[ | I^1_\rho -I^1|\right]&\le \mathbb {E}\left[ \int _0^\rho (t-u)^\alpha u^{-\frac{\gamma }{2}-1} |B(u)|du \right] \\&\le \int _0^\rho (t-u)^{\alpha } u^{-\frac{\gamma }{2} -1} \mathbb {E}[|B(u)|] du \\&\le t^{\alpha } \int _0^\rho u^{-\frac{\gamma }{2}-1+\frac{1}{2}}du\\&= \frac{2t^{\alpha }}{1-\gamma } \rho ^{\frac{1-\gamma }{2}}\\&\xrightarrow {\rho \rightarrow 0} 0, \text { since }0\le \gamma <1, \end{aligned}$$

where the second inequality follows from Tonelli’s theorem. Similarly, one can show that

$$\begin{aligned} I^2_\rho \doteq \int _\rho ^t (t-u)^{\alpha -1 } u^{-\frac{\gamma }{2}}B(u)du\xrightarrow {\rho \rightarrow 0} I^2\doteq \int _0^t (t-u)^{\alpha -1} u^{-\frac{\gamma }{2}}B(u)du, \quad \text { in }L^1(\mathbb {P}). \end{aligned}$$

Using Itô’s isometry along with a similar analysis as above, we can also show that

$$\begin{aligned} \mathbb {E}\left[ |X_\rho (t)-X(t)|^2\right] \rightarrow 0, \quad \text {as }\rho \rightarrow 0. \end{aligned}$$

Therefore, we can find a subsequence \(\rho _n \rightarrow 0\) along which

$$\begin{aligned} I^1_{\rho _n} \rightarrow I^1,\quad I^2_{\rho _n} \rightarrow I^2, \quad X_{\rho _n}(t)\rightarrow X(t) \quad \text { and } \quad \rho _n^{-\frac{\gamma }{2} } B(\rho _n)\rightarrow 0,\quad \mathbb {P}-\text {a.s.} \end{aligned}$$

From these and (5.2), we obtain the expression of X(t) in (5.1). \(\square \)
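Representation (5.1) involves only ordinary (Lebesgue) integrals of a Brownian path, so it lends itself to straightforward simulation. The Python sketch below approximates X(t) by midpoint Riemann sums, which avoid the integrable singularities at \(u=0\) and \(u=t\); all names and parameter values (including the value of c) are ours and purely illustrative.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha, gamma_, c, t, n = 0.3, 0.2, 1.0, 1.0, 100_000   # illustrative
du = t / n
u = np.linspace(0.0, t, n + 1)
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(du), n))))
um, Bm = 0.5 * (u[:-1] + u[1:]), 0.5 * (B[:-1] + B[1:])

# The two integrals in (5.1), as midpoint Riemann sums:
X_t = (alpha * c * np.sum(Bm * (t - um)**(alpha - 1) * um**(-gamma_ / 2)) * du
       + 0.5 * gamma_ * c * np.sum((t - um)**alpha * um**(-gamma_ / 2 - 1) * Bm) * du)
print(X_t)   # one sample of X(t)
\end{verbatim}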

As mentioned earlier, we require a maximal inequality for X, which is the content of Lemma 5.3 below. For our purposes, we only need this maximal inequality over \(0\le s\le 1\). In the following, without loss of generality, we assume that \(\delta _0\) in Corollary A.1 is less than one, and \(\textbf{B}(x_1,x_2)\) denotes the Beta function with parameters \(x_1,x_2>0\). We will also frequently use the following inequality: for \(0<x<1\),

$$\begin{aligned} \sqrt{\log \left( 1/x\right) }\le K_\eta x^{-\eta }, \end{aligned}$$
(5.3)

for some constant \(K_\eta \) depending on \(\eta >0\).
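For instance, one admissible choice is \(K_\eta =(2e\eta )^{-1/2}\): the map \(x\mapsto x^{2\eta }\log (1/x)\) on (0, 1) is maximized at \(\log (1/x)=\frac{1}{2\eta }\), where its value is \(\frac{1}{2e\eta }\), so that \(x^{\eta }\sqrt{\log (1/x)}\le (2e\eta )^{-1/2}\).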

In the next two lemmas, we study the behavior of X(t) on the two subintervals \(0 \le t\le \delta _0\) and \(\delta _0<t \le 1\). These results are used in Theorem 5.1 to ensure that if the maximum of \(X(t)-kt\) over [0, T] is conditioned to be appropriately large, then the maximizer is almost surely attained outside \([0,\delta _0]\).

Lemma 5.2

Assume that \((\alpha ,\gamma )\) satisfy (2.6) and \(\frac{1-\gamma }{2}>\eta >0\). Then,

$$\begin{aligned} X(t)\le C t^{H-\eta },\quad \mathbb {P}-\text {a.s.,} \end{aligned}$$

for \(0\le t\le \delta _0\) and

$$\begin{aligned} C\doteq 2(1+\rho )K_\eta \left( \alpha c \mathbf{{B}}\Big ( \frac{3}{2}-\frac{\gamma }{2}-\eta , \alpha \Big )+\frac{\gamma c}{2}{} \mathbf{{B}}\Big (\frac{1}{2}-\frac{\gamma }{2}-\eta ,\alpha +1\Big ) \right) . \end{aligned}$$

Here, \(\delta _0\) is as in Corollary A.1.

Proof

Fix \(\rho >0\) and choose \(\delta _0\) from Corollary A.1 corresponding to \(\rho \). Then, from Corollary A.1,

$$\begin{aligned} B(s)\le (1+\rho )\sqrt{2s \log (s^{-1})}, \text { for }s\le \delta _0, \quad \mathbb {P}-\text {a.s. } \end{aligned}$$

Using this, for \(t\le \delta _0\), we have

$$\begin{aligned} X(t)&= \alpha c t^{\alpha -\frac{\gamma }{2}}\int _0^1 B(vt)(1-v)^{\alpha -1}v^{-\frac{\gamma }{2}} dv +\frac{\gamma c}{2}t^{\alpha -\frac{\gamma }{2}} \int _0^1 (1-v)^\alpha v^{-\frac{\gamma }{2}-1}B(vt)dv\\&\le \sqrt{2}(1+\rho )\alpha c t^{\alpha -\frac{\gamma }{2}}\int _0^1 \sqrt{(vt) \log ((vt)^{-1})}(1-v)^{\alpha -1}v^{-\frac{\gamma }{2}} dv\\&\qquad +\frac{\sqrt{2}(1+\rho )\gamma c}{2}t^{\alpha -\frac{\gamma }{2}} \int _0^1 (1-v)^\alpha v^{-\frac{\gamma }{2}-1}\sqrt{(vt) \log ( (vt)^{-1})}dv\\&\le \sqrt{2}(1+\rho )K_\eta \alpha c t^{\alpha -\frac{\gamma }{2}+\frac{1}{2}-\eta }\int _0^1 (1-v)^{\alpha -1}v^{-\frac{\gamma }{2}+\frac{1}{2}-\eta } dv\\&\qquad +\frac{\sqrt{2}(1+\rho )K_\eta \gamma c}{2}t^{\alpha -\frac{\gamma }{2}+\frac{1}{2}-\eta } \int _0^1 (1-v)^\alpha v^{-\frac{\gamma }{2}-\frac{1}{2}-\eta }dv\\&\le 2(1+\rho )K_\eta t^{H-\eta } \left( \alpha c \mathbf{{B}}( \frac{3}{2}-\frac{\gamma }{2}-\eta , \alpha )+\frac{\gamma c}{2}{} \mathbf{{B}}(\frac{1}{2}-\frac{\gamma }{2}-\eta ,\alpha +1) \right) . \end{aligned}$$

In the above, we chose \(\frac{1-\gamma }{2}>\eta >0\) and used (5.3). This completes the proof. \(\square \)

Lemma 5.3

Assume that \((\alpha ,\gamma )\) satisfy (2.6) and \(\frac{1-\gamma }{2}>\eta >0\). For \(\delta _0<t\le 1\) and \(K>0\),

$$\begin{aligned} \mathbb {P}\left( \max _{\delta _0\le s\le t} \frac{X(s)}{s^H}\ge K\right) \le \exp \left( -\frac{1}{2}\left( \frac{Kt^{-\eta }-\Delta }{\Lambda }\right) ^2t^{{2\eta }}\right) , \end{aligned}$$
(5.4)

where

$$\begin{aligned} \Lambda \doteq \Lambda (\delta _0,\alpha , \gamma ,\eta ,c)= \alpha c\mathbf{{B}}\Big (1-\frac{\gamma }{2}, \alpha \Big )+ \frac{\gamma c}{2\delta _0^{\frac{1}{2}-\eta }} \mathbf{{B}}\Big (\frac{3}{2}-\frac{\gamma }{2}-\eta , \alpha +1\Big ), \end{aligned}$$
(5.5)

and

$$\begin{aligned} \Delta \doteq \Delta (\delta _0, \gamma , \eta , c, \rho , K_\eta )= \frac{\gamma c\sqrt{2}(1+\rho )K_\eta }{2} {\delta _0}^{-\frac{\gamma }{2} +\frac{1}{2}-\eta }. \end{aligned}$$
(5.6)

Here, \(\delta _0\) is as in Corollary A.1.

Remark 5.3

Since

$$\begin{aligned} \left\{ \omega : \max _{\delta _0\le s\le t} \frac{X(s)(\omega )}{s^H}\le K\right\} \subset \left\{ \omega : \max _{\delta _0\le s\le t} {X(s)(\omega )}\le K\right\} , \text { for } t\le 1, \end{aligned}$$

we have

$$\begin{aligned} \mathbb {P}\left( \max _{\delta _0\le s\le t}{X(s)}\ge K\right) \le \mathbb {P}\left( \max _{\delta _0\le s\le t} \frac{X(s)}{s^H}\ge K\right) \le \exp \left( -\frac{1}{2}\left( \frac{Kt^{-\eta }-\Delta }{\Lambda }\right) ^2t^{{2\eta }}\right) . \end{aligned}$$

Proof

Fix \(t>\delta _0\). From Lemma 5.1, using the expression of X(t) in (5.1), we have \(\mathbb {P}-\) a.s.

$$\begin{aligned} X(t)&\le \alpha c \max _{0\le s\le t} B(s)\int _0^t (t-u)^{\alpha -1}u^{-\frac{\gamma }{2}} du +\frac{\gamma c}{2} \int _0^t (t-u)^\alpha u^{-\frac{\gamma }{2}-1}B(u)du\nonumber \\&\le \alpha c t^{\alpha -\frac{\gamma }{2}} \max _{0\le s\le t} B(s)\int _0^1 (1-v)^{\alpha -1}v^{-\frac{\gamma }{2}} dv\nonumber \\&\qquad +\frac{\gamma c}{2}t^{\alpha -\frac{\gamma }{2}} \int _0^1 (1-v)^\alpha v^{-\frac{\gamma }{2}-1}B(vt)dv \nonumber \\&\le t^{\alpha -\frac{\gamma }{2}}\alpha c\max _{0\le s\le t} B(s) \mathbf{{B}}\Big (1-\frac{\gamma }{2}, \alpha \Big ) + \frac{\gamma c}{2}t^{\alpha -\frac{\gamma }{2}}\int _0^1 (1-v)^\alpha v^{-\frac{\gamma }{2}-1}B(vt)dv, \end{aligned}$$
(5.7)

where we have used change of variables from u to vt in the integral terms to obtain the second inequality.

We now focus on the integral in (5.7). We observe that we cannot directly pull \(\max _{0\le s\le t} B(s)\) out of the integral, as \(\int _0^1 (1-v)^\alpha v^{-\frac{\gamma }{2}-1}dv \) diverges. So, fixing some \(\frac{1-\gamma }{2}>\eta >0\), we obtain

$$\begin{aligned}&\int _0^1 (1-v)^\alpha v^{-\frac{\gamma }{2}-1}B(vt)dv \nonumber \\&\quad = \int _{\frac{\delta _0}{t}}^1 (1-v)^\alpha v^{-\frac{\gamma }{2}-1}(vt)^{\frac{1}{2}-\eta }\frac{B(vt)}{(vt)^{\frac{1}{2}-\eta }}dv +\int _0^{\frac{\delta _0}{t}}(1-v)^\alpha v^{-\frac{\gamma }{2}-1}{B(vt)}dv \nonumber \\&\quad \le \max _{\delta _0\le s\le t} \frac{B(s)}{s^{\frac{1}{2}-\eta }} \int _{\frac{\delta _0}{t}}^1 (1-v)^\alpha v^{-\frac{\gamma }{2}+\frac{1}{2}-\eta }dv\nonumber \\&\qquad + \sqrt{2}(1+\rho )\int _0^{\frac{\delta _0}{t}}(1-v)^\alpha v^{-\frac{\gamma }{2}-1}\sqrt{v \log \Big (\frac{1}{v}\Big )}\, dv \nonumber \\&\quad \le \max _{\delta _0\le s\le t} \frac{B(s)}{s^{\frac{1}{2}-\eta }} \int _{\frac{\delta _0}{t}}^1 (1-v)^\alpha v^{-\frac{\gamma }{2}+\frac{1}{2}-\eta }dv\nonumber \\&\quad + \sqrt{2}(1+\rho )K_\eta \int _0^{\frac{\delta _0}{t}}(1-v)^\alpha v^{-\frac{\gamma }{2}-\frac{1}{2}-\eta }dv. \end{aligned}$$
(5.8)

Here the first inequality follows from Corollary A.1 and the second inequality uses (5.3).

Thus, by (5.7) and (5.8), we obtain for \(t>\delta _0\),

$$\begin{aligned} X(t)&\le \alpha c t^{\alpha -\frac{\gamma }{2}}\max _{0\le s\le t} B(s) \mathbf{{B}}\Big (1-\frac{\gamma }{2}, \alpha \Big )\\&\qquad \; + \frac{\gamma c}{2}t^{\alpha -\frac{\gamma }{2}}\left( \max _{\delta _0\le s\le t} \frac{B(s)}{s^{\frac{1}{2}-\eta }} \int _{\frac{\delta _0}{t}}^1 (1-v)^\alpha v^{-\frac{\gamma }{2}+\frac{1}{2}-\eta }dv\right. \nonumber \\&\qquad \left. + \sqrt{2}(1+\rho )K_\eta \int _0^{\frac{\delta _0}{t}}(1-v)^\alpha v^{-\frac{\gamma }{2}-\frac{1}{2}-\eta }dv\right) \\&\le \alpha c t^{\alpha -\frac{\gamma }{2}}\max _{0\le s\le t} B(s) \mathbf{{B}}\Big (1-\frac{\gamma }{2}, \alpha \Big )\\&\qquad \;+ \frac{\gamma c}{2}t^{\alpha -\frac{\gamma }{2}}\Bigg (\frac{1}{\delta _0^{\frac{1}{2}-\eta }}\max _{\delta _0\le s\le t}{B(s)} \int _{\frac{\delta _0}{t}}^1 (1-v)^\alpha v^{-\frac{\gamma }{2}+\frac{1}{2}-\eta }dv \\&\qquad + \sqrt{2}(1+\rho )K_\eta \int _0^{\frac{\delta _0}{t}}(1-v)^\alpha v^{-\frac{\gamma }{2}-\frac{1}{2}-\eta }dv\Bigg ). \end{aligned}$$

Consider the following event:

$$\begin{aligned} A(K,\eta )\doteq \left\{ \omega : \sup _{0<s\le t} {B(s)(\omega )} \le Kt^{\frac{1}{2}+\eta }\right\} . \end{aligned}$$
(5.9)

On this event,

$$\begin{aligned} X(t)&\le \alpha c Kt^{H+\eta } \, \mathbf{{B}}(1-\frac{\gamma }{2}, \alpha )\nonumber \\&\qquad \;+ \frac{\gamma c}{2}t^{H+\eta }\left( \frac{1}{\delta _0^{\frac{1}{2}-\eta }}K \int _{\frac{\delta _0}{t}}^1 (1-v)^\alpha v^{-\frac{\gamma }{2}+\frac{1}{2}-\eta }dv \right. \nonumber \\&\qquad \left. + \sqrt{2}(1+\rho )t^{-\frac{1}{2}-\eta }K_\eta \int _0^{\frac{\delta _0}{t}}(1-v)^\alpha v^{-\frac{\gamma }{2}-\frac{1}{2}-\eta }dv\right) \nonumber \\&\quad \le t^{H+\eta }\Bigg (\alpha c K\mathbf{{B}}(1-\frac{\gamma }{2}, \alpha )+ \frac{\gamma c}{2}\Big (\frac{K}{\delta _0^{\frac{1}{2}-\eta }} \mathbf{{B}}(\frac{3}{2}-\frac{\gamma }{2}-\eta , \alpha +1)\nonumber \\&\qquad +\sqrt{2}(1+\rho )K_\eta ({\delta _0}^{-\frac{\gamma }{2}+\frac{1}{2}-\eta }{t}^{-\frac{\gamma }{2}+1+\eta })\Big )\Bigg ) \le t^{H+\eta }\Big (K\Lambda + \Delta \Big ). \end{aligned}$$
(5.10)

In the above, we used the fact that

$$\begin{aligned} \int _0^{\frac{\delta _0}{t}}(1-v)^\alpha v^{-\frac{\gamma }{2}-\frac{1}{2}-\eta }dv\le \int _0^{\frac{\delta _0}{t}} v^{-\frac{\gamma }{2}-\frac{1}{2}-\eta }dv \le {\delta _0}^{-\frac{\gamma }{2}+\frac{1}{2}-\eta }{t}^{\frac{\gamma }{2}-\frac{1}{2}+\eta }. \end{aligned}$$

The quantities \(\Lambda \) and \(\Delta \) are as given in (5.5) and (5.6); in the last inequality, we bound the terms involving t inside the parentheses by 1 to make the quantity uniform in t.

From the above inequality, we have

$$\begin{aligned} \max _{\delta _0\le s \le t}\frac{X(s)}{s^H}\le t^{\eta }\left( \Lambda K+ \Delta \right) . \end{aligned}$$

Therefore,

$$\begin{aligned} \mathbb {P}\left( \max _{\delta _0\le s\le t} \frac{X(s)}{s^H}> t^{\eta }(K\Lambda +\Delta )\right)&\le \mathbb {P}\left( \Big \{\omega : \max _{0\le s\le t}B(s)(\omega )> Kt^{\frac{1}{2}+\eta }\Big \}\right) \\&\le \mathbb {P}\left( \sup _{0<s\le t} {B(s)} > Kt^{\frac{1}{2}+\eta }\right) \\&\le \exp \left( -\frac{1}{2}K^2t^{{2\eta }}\right) , \end{aligned}$$

where the first inequality uses the event in (5.9) together with the bound (5.10), and the last uses the maximal inequality of Brownian motion, i.e., \( \mathbb {P}\left( \sup _{0\le s\le t} B(s)>\lambda \right) \le \exp \left( -\frac{\lambda ^2}{2t}\right) . \) Therefore, the inequality in (5.4) holds. \(\square \)
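Since \(\Lambda \) and \(\Delta \) in (5.5)–(5.6) are explicit in terms of Beta functions, the right-hand side of (5.4) is directly computable. A minimal Python sketch follows; the values of \(\delta _0\) and \(\rho \) are placeholders (they come from Corollary A.1), \(K_\eta \) is the admissible choice noted after (5.3), and all names are ours.

\begin{verbatim}
import numpy as np
from scipy.special import beta

alpha, gamma_, c, eta = 0.3, 0.2, 1.0, 0.1       # illustrative
delta0, rho = 0.1, 0.05                          # placeholders (Corollary A.1)
K_eta = (2.0 * np.e * eta) ** -0.5               # admissible constant in (5.3)

Lam = (alpha * c * beta(1.0 - gamma_ / 2, alpha)
       + gamma_ * c / (2.0 * delta0 ** (0.5 - eta))
         * beta(1.5 - gamma_ / 2 - eta, alpha + 1.0))
Delta = (gamma_ * c * np.sqrt(2.0) * (1.0 + rho) / 2.0
         * K_eta * delta0 ** (0.5 - gamma_ / 2 - eta))

def rhs_54(K, t):
    # Right-hand side of the tail bound (5.4), for delta0 < t <= 1
    return np.exp(-0.5 * ((K * t ** (-eta) - Delta) / Lam) ** 2 * t ** (2 * eta))

print(Lam, Delta, rhs_54(10.0, 1.0))
\end{verbatim}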

Remark 5.4

In Lemma 5.2 and Eq. (5.10) in the proof of Lemma 5.3, the exponents \(H-\eta \) and \(H+\eta \) are consequences of the behavior of Brownian motion near \(t=0\) (see Theorem A.1) and away from zero (the maximal inequality of Brownian motion), respectively.

The following modulus of continuity type estimate is used in establishing the uniform in t sub-exponential tail bounds of V(t).

Lemma 5.4

Assume that \(\alpha >\frac{\gamma }{2}\). Then, we have the following:

$$\begin{aligned} \mathbb {P}\left( \sup _{1-\delta _0\le s\le 1}\frac{|X(1)-X(s)|}{|1-s|^{\alpha -\frac{\gamma }{2}}}\ge C(K) \right) \le e^{-\frac{K^2}{2}}, \end{aligned}$$
(5.11)

for some \(C=C(K)>0\) such that \(C(K)\uparrow \infty \) as \(K\rightarrow \infty \).

Proof

Consider the event:

$$\begin{aligned} A(K)=\left\{ \omega : \sup _{0\le v\le 1} B(v)(\omega )\le K\right\} . \end{aligned}$$

We consider the following on A(K). For \(1-\delta _0\le s\le 1\), by (5.1),

$$\begin{aligned} X(1)-X(s)&=\alpha c \int _0^1(1-v)^{\alpha -1}v^{-\frac{\gamma }{2}}\left( B(v)- s^{\alpha -\frac{\gamma }{2}} B(vs)\right) dv\\&\qquad +\frac{\gamma c}{2} \int _0^1 (1-v)^\alpha v^{-\frac{\gamma }{2}-1}\left( B(v)- s^{\alpha -\frac{\gamma }{2}} B(vs)\right) dv. \end{aligned}$$

We estimate the first integral using Theorem A.1 and Corollary A.1. Choose \(\eta <\frac{\gamma }{2}+\frac{1}{2}-\alpha = 1-H\). We have

$$\begin{aligned}&\int _0^1(1-v)^{\alpha -1}v^{-\frac{\gamma }{2}}\left( B(v)- s^{\alpha -\frac{\gamma }{2}} B(vs)\right) dv\nonumber \\&\quad =\left( 1- s^{\alpha -\frac{\gamma }{2}}\right) \int _0^1(1-v)^{\alpha -1}v^{-\frac{\gamma }{2}}B(v)dv\nonumber \\&\qquad +s^{\alpha -\frac{\gamma }{2}} \int _0^1(1-v)^{\alpha -1}v^{-\frac{\gamma }{2}}\left( B(v)- B(vs)\right) dv\nonumber \\&\quad \le \left( 1- s^{\alpha -\frac{\gamma }{2}}\right) K \int _0^1(1-v)^{\alpha -1}v^{-\frac{\gamma }{2}}dv\nonumber \\&\qquad + \sqrt{2} (1+\rho )s^{\alpha -\frac{\gamma }{2}} (1- s^{\frac{1}{2}-\eta })\int _0^1(1-v)^{\alpha -1}v^{-\frac{\gamma }{2}+\frac{1}{2}-\eta }dv\nonumber \\&\quad \le (KC_1+C_2) (1-s)^{\alpha -\frac{\gamma }{2}}, \end{aligned}$$
(5.12)

for some \(C_1, C_2>0\). To get (5.12), we applied Remark A.2 to uniformly bound \(|B(v)-B(vs)|\). Finally, we estimate the second term in a similar way:

$$\begin{aligned}&\int _0^1(1-v)^{\alpha }v^{-\frac{\gamma }{2}-1}\left( B(v)- s^{\alpha -\frac{\gamma }{2}} B(vs)\right) dv\\&\quad =\left( 1- s^{\alpha -\frac{\gamma }{2}}\right) \int _0^1(1-v)^{\alpha }v^{-\frac{\gamma }{2}-1}B(v)dv\\&\qquad +s^{\alpha -\frac{\gamma }{2}} \int _0^1(1-v)^{\alpha }v^{-\frac{\gamma }{2}-1}\left( B(v)- B(vs)\right) dv\\&\quad \le \left( 1- s^{\alpha -\frac{\gamma }{2}}\right) \int _0^1(1-v)^{\alpha }v^{-\frac{\gamma }{2}-1}B(v)dv\\&\qquad + \sqrt{2} (1+\rho )s^{\alpha -\frac{\gamma }{2}} (1- s^{\frac{1}{2}-\eta })\int _0^1(1-v)^{\alpha }v^{-\frac{\gamma }{2}-\frac{1}{2}-\eta }dv\\&\quad \le (KC_3+C_4) (1-s)^{\alpha -\frac{\gamma }{2}}. \end{aligned}$$

In the above, the inequality in the third line uses Remark A.2 to bound \(|B(v)-B(vs)|\); to arrive at the final inequality, we bounded the first integral using (5.8), noting that \(\sup _{0\le v\le 1} B(v)(\omega )\le K\) on the event A(K), and used the fact that \(\eta <\frac{\gamma }{2}+\frac{1}{2}-\alpha \). Here, \(C_3\) and \(C_4\) are appropriate constants independent of K and s. Defining \(C(K)\doteq \max \{\alpha c (KC_1+C_2), \frac{\gamma c}{2} (KC_3+C_4)\}\) gives the result. \(\square \)

Remark 5.5

In the following, we observe that the process

$$\begin{aligned} {\widetilde{X}}(s)\doteq \frac{X(1)-X(s)}{(1-s)^{\alpha -\frac{\gamma }{2}}}, \quad s\in [1-\delta _0,1], \end{aligned}$$

is a centered Gaussian process and hence, symmetric (i.e., \({\widetilde{X}}\) and \(-\widetilde{X} \) have the same distribution). Therefore,

$$\begin{aligned} \mathbb {P}\bigg (\sup _{1-\delta _0\le s\le 1}|{\widetilde{X}}(s)|\ge K \bigg )= R_K \mathbb {P}\bigg (\sup _{1-\delta _0\le s\le 1}{\widetilde{X}}(s)\ge K\bigg ). \end{aligned}$$

Here,

$$\begin{aligned} R_K\doteq 2 -\frac{\mathbb {P}\Big (\big \{\sup _{1-\delta _0\le s\le 1}{\widetilde{X}}(s)\ge K\big \}\cap \big \{\inf _{1-\delta _0\le s\le 1}{\widetilde{X}}(s)\le -K\big \}\Big )}{\mathbb {P}\Big (\sup _{1-\delta _0\le s\le 1}{\widetilde{X}}(s)\ge K\Big )} \end{aligned}$$

and the ratio of probabilities is simply the conditional probability of the event \(\big \{\inf _{1-\delta _0\le s\le 1}{\widetilde{X}}(s)\le -K\big \}\) conditioned on the occurrence of the event \(\big \{\sup _{1-\delta _0\le s\le 1}{\widetilde{X}}(s)\ge K\big \}\). Since the paths of \({\widetilde{X}}\) are almost surely continuous, this probability approaches 0 as \(K\rightarrow \infty \). Therefore, \(R_K\rightarrow 2\) as \(K\rightarrow \infty .\)

Now, by an argument similar to that in Sect. 4.1, we obtain the following:

$$\begin{aligned} \lim _{K\rightarrow \infty } - \frac{1}{K^2} \log \mathbb {P}\bigg (\sup _{1-\delta _0\le s\le 1}{\widetilde{X}}(s)\ge K \bigg )= \frac{1}{2 {\widetilde{\sigma }}^2}, \end{aligned}$$

where

$$\begin{aligned} {\widetilde{\sigma }}^2\doteq \sup _{1-\delta _0\le s\le 1} \mathbb {E}[{\widetilde{X}}^2(s)]=\sup _{1-\delta _0\le s\le 1} \frac{\mathbb {E}[(X(1)-X(s))^2]}{(1-s)^{2\alpha -\gamma }}= \sup _{1-\delta _0\le s\le 1} \frac{v_1(1,s)+ v_2(1,s)}{(1-s)^{2\alpha -\gamma }}. \end{aligned}$$

Again in the last equality, we have used (2.2), with the definitions of \(v_1\) and \(v_2\) in (4.6) and (4.7). From the above, we can conclude that for every \(\delta >0\), there is \(K_0>0\) such that

$$\begin{aligned} \mathbb {P}\Big (\sup _{1-\delta _0\le s\le 1}{\widetilde{X}}(s)\ge K \Big ) \le e^{-\frac{K^2}{2({\widetilde{\sigma }}^2+\delta )}}, \text { for }K>K_0. \end{aligned}$$
(5.13)

We remark that Lemma 5.4 is only used in the proof of Theorem 5.3. Even though the alternative estimate in (5.13) differs from that in (5.11), it is still sufficient for the proof of Theorem 5.3. Note that (5.11) is used in Eqs. (5.22) and (5.23) in the proof of Theorem 5.3; following the same arguments with (5.13) in place of (5.11) gives a result similar to (5.21), with appropriately different constants. We do not state the exact modified version of (5.21), as this estimate is only used to prove tightness of \(\{V(t)\}_{t\ge 0}\) in Corollary 5.1, for which either estimate suffices.
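As with \({\overline{\sigma }}^2\) in Sect. 4.1, the constant \({\widetilde{\sigma }}^2\) is computable by numerical quadrature from the kernel of (2.1), using \(v_1(1,s)+v_2(1,s)=\mathbb {E}[(X(1)-X(s))^2]\). The minimal Python sketch below (illustrative parameters, placeholder \(\delta _0\), names ours) also exhibits the finiteness of the supremum, which relies on \(\alpha >\frac{\gamma }{2}\).

\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

alpha, gamma_, delta0 = 0.3, 0.2, 0.1              # illustrative; alpha > gamma/2
c2 = 1.0 / beta(1.0 - gamma_, 2.0 * alpha + 1.0)   # c^2 with Var X(1) = 1 (assumed)

def incr_var(s):
    # Var(X(1)-X(s)) = v1(1,s) + v2(1,s), from the kernel of (2.1)
    i1, _ = quad(lambda u: ((1 - u)**alpha - (s - u)**alpha)**2 * u**(-gamma_),
                 0.0, s)
    i2, _ = quad(lambda u: (1 - u)**(2 * alpha) * u**(-gamma_), s, 1.0)
    return c2 * (i1 + i2)

ss = np.linspace(1.0 - delta0, 1.0, 200)[:-1]
print(max(incr_var(s) / (1.0 - s) ** (2 * alpha - gamma_) for s in ss))
\end{verbatim}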

In the following, we exploit the self-similarity of X and show that, for each fixed \(t>0\), the random variables M(t) and V(t) are equal in law to random variables involving \(C([0,1],\mathbb {R})\)-valued processes \({\bar{Z}}\) and Z, each having the same law as X on [0, 1]. That is, \({\bar{Z}}\) and Z are \(C([0,1],\mathbb {R})\)-valued random elements such that

$$\begin{aligned} {\bar{Z}}{\mathop {=}\limits ^{\textrm{d}}}Z{\mathop {=}\limits ^{\textrm{d}}}X. \end{aligned}$$

Lemma 5.5

For any fixed \(t>0\), we have

$$\begin{aligned} M(t)&{\mathop {=}\limits ^{\textrm{d}}}\max _{0\le v\le 1} (t^H {\bar{Z}}(v)- kvt) \end{aligned}$$
(5.14)
$$\begin{aligned} V(t)&{\mathop {=}\limits ^{\textrm{d}}}\max _{0\le v\le 1} \left( t^H Z(1)-t^HZ(v) - kt(1-v)\right) . \end{aligned}$$
(5.15)

Proof

Fix \(t>0\). Using self-similarity of X and Lemma A.1, the following holds:

$$\begin{aligned} \mathcal {P}_X(t ^{-H}A)= \mathcal {P}_X\circ J_t (A), \end{aligned}$$

where \(\mathcal {P}_X\) is the law of X and \(J_t\) is as defined in (A.1). The equality-in-law relations in (5.14) and (5.15) then follow directly. \(\square \)

Remark 5.6

We stress that the above lemma only describes, for each given \(t>0\), the laws of M(t) and V(t). To study the sample paths of M and V, a more detailed analysis is needed, which we do not pursue in this paper.
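The distributional identity (5.14) can be probed by Monte Carlo: simulate X on [0, t] directly from the kernel of (2.1) and, independently, a copy \({\bar{Z}}\) on [0, 1], then compare the two resulting samples of M(t). The Python sketch below uses a crude midpoint-in-u discretization and illustrative parameters (all names are ours), so agreement is only up to discretization and sampling error.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
alpha, gamma_, c, k, t = 0.3, 0.2, 1.0, 1.0, 4.0   # illustrative
n, reps = 400, 2000
H = alpha - gamma_ / 2 + 0.5

def kernel(T):
    # Discretization of the kernel c*(s-u)^alpha*u^(-gamma/2) of (2.1),
    # with s on right endpoints and u on midpoints of the grid
    du = T / n
    um = (np.arange(n) + 0.5) * du
    s = (np.arange(n) + 1.0) * du
    diff = s[:, None] - um[None, :]
    return s, c * np.where(diff > 0, np.abs(diff) ** alpha, 0.0) \
              * um ** (-gamma_ / 2), du

s_t, K_t, du_t = kernel(t)
s_1, K_1, du_1 = kernel(1.0)
m_direct, m_scaled = np.empty(reps), np.empty(reps)
for r in range(reps):
    X = K_t @ rng.normal(0.0, np.sqrt(du_t), n)    # path of X on [0, t]
    Z = K_1 @ rng.normal(0.0, np.sqrt(du_1), n)    # independent copy on [0, 1]
    m_direct[r] = max(0.0, np.max(X - k * s_t))    # M(t) directly
    m_scaled[r] = max(0.0, np.max(t**H * Z - k * s_1 * t))  # right side of (5.14)
print(m_direct.mean(), m_scaled.mean())            # close, cf. (5.14)
\end{verbatim}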

Theorem 5.1

Assume that \((\alpha ,\gamma )\) satisfy (2.6). There exist \(t_0\doteq t_0(\delta _0, k,H, \eta ,C )\) and \(Q\doteq Q(\delta _0,H,C,\eta ,k)\) such that the following holds:

$$\begin{aligned} \mathbb {P}\left( M(t)>\rho \right) \le \exp \bigg ( -\frac{1}{2}\bigg (\frac{{\widehat{\Delta }}\rho ^{1-H} -\Delta }{\Lambda }\bigg )^2 \bigg ), \end{aligned}$$

for \(t>t_0\) and \(\rho >Q\). Here,

$$\begin{aligned} {\widehat{\Delta }}= {\widehat{\Delta }}(\delta _0,k,H) \doteq \frac{ (k\delta _0)^H}{(1-H)^{1-H}H^H}. \end{aligned}$$

Proof

We fix t throughout the proof after choosing it large enough. In the rest of the proof, we suppress the dependence on \(\omega \) for all the random processes that follow. The method of the proof goes as follows: We prove that the maximum

$$\begin{aligned} \max _{0\le v\le \delta _0} (t^H {\bar{Z}}(v)-kvt) \end{aligned}$$

is almost surely less than a positive constant Q (uniformly in large t). This implies that the maximizers for

$$\begin{aligned} \max _{0\le v\le 1} (t^H {\bar{Z}}(v)-kvt) \end{aligned}$$

conditioned on the event

$$\begin{aligned} \Big \{\omega : \max _{0\le v\le 1} (t^H {\bar{Z}}(v)-kvt)>Q\Big \} \end{aligned}$$

are greater than \(\delta _0\), \(\mathbb {P}-\) a.s. Indeed, if the maximum satisfies

$$\begin{aligned} \max _{0\le v\le 1} (t^H {\bar{Z}}(v)-kvt)>Q, \end{aligned}$$

then from the hypothesis,

$$\begin{aligned} \max _{0\le v\le \delta _0} (t^H {\bar{Z}}(v)-kvt)\le Q<\max _{0\le v\le 1} (t^H {\bar{Z}}(v)-kvt). \end{aligned}$$

This implies that the maximizers on [0, 1] are strictly greater than \(\delta _0\).

To that end, we recall from Lemma 5.2 that \(\mathbb {P}-\) a.s.,

$$\begin{aligned} {\bar{Z}}(v)\le C v^{H-\eta }, \end{aligned}$$

for \(0 \le v\le \delta _0\) with \(0<\eta <\frac{1-\gamma }{2}\). Thus, \(\mathbb {P}-\) a.s.,

$$\begin{aligned} t^H {\bar{Z}}(v)-kvt\le C v^{H-\eta }t^H- kvt, \text { for }0\le v\le \delta _0. \end{aligned}$$

The maximum of the right-hand side is attained at \(v=\min \{\delta _0, \left( \frac{kt}{Ct^H}\right) ^{\frac{1}{1+\eta -H}} \} \). For

$$\begin{aligned} t>t_0\doteq \bigg (\frac{\delta _0^{{1+\eta -H}}C}{k}\bigg )^{\frac{1}{1-H}}, \end{aligned}$$

(this ensures that the maximum is attained at \(v=\delta _0\)), we have

$$\begin{aligned} \max _{0\le v\le \delta _0} (t^H {\bar{Z}}(v)-kvt)&\le C \delta _0^{H-\eta }t^H- k\delta _0t \nonumber \\&\le (1-H)H^{\frac{H}{1-H}}\left( C \delta _0^{H-\eta }\right) ^\frac{1}{1-H} (k\delta _0)^{\frac{H}{H-1}}\doteq Q. \end{aligned}$$
(5.16)

It is thus clear that for

$$\begin{aligned} \max _{0\le v\le 1} \left( t^H {\bar{Z}}(v)-kvt\right)> \rho >Q, \end{aligned}$$

the maximizer cannot be in \([0,\delta _0]\). Therefore, for \(\rho >Q\) and \(t>t_0\),

$$\begin{aligned}&\mathbb {P}\left( \max _{0\le v\le 1} \left( t^H {\bar{Z}}(v)-kvt\right)> \rho \right) \\&\quad \le \mathbb {P}\left( t^H \max _{\delta _0\le v\le 1}{\bar{Z}}(v)- kt \delta _0>\rho \right) \\&\quad \le \mathbb {P}\left( \max _{\delta _0\le v\le 1} {\bar{Z}}(v)> \frac{\rho +k\delta _0 t}{t^H}\right) \\&\quad \le \mathbb {P}\left( \max _{\delta _0\le v\le 1} {\bar{Z}}(v)> \min _{t>0}\left\{ \frac{\rho +k\delta _0 t}{t^H}\right\} \right) \\&\quad = \mathbb {P}\left( \max _{\delta _0\le v\le 1} {\bar{Z}}(v)> \frac{\rho ^{1-H} (k\delta _0)^H}{(1-H)^{1-H}H^H}\right) \\&\quad \le \exp \Bigg ( -\frac{1}{2}\Big (\frac{\rho ^{1-H}\widehat{\Delta } -\Delta }{\Lambda }\Big )^2 \Bigg ), \end{aligned}$$

where the last inequality follows from Remark 5.3 and \(\widehat{\Delta }\) is as in the hypothesis. In the first inequality, we used the following: for \(\rho >Q\) and \(t>t_0\), from the inequality in (5.16) and the definition of Q,

$$\begin{aligned} Q<\rho < \max _{0\le v\le 1} (t^H {\bar{Z}}(v)-kvt)&= \max _{\delta _0\le v\le 1}(t^H {\bar{Z}}(v)- kt v) \\&\le t^H \max _{\delta _0\le v\le 1}{\bar{Z}}(v)- kt \delta _0. \\ \end{aligned}$$

\(\square \)

Lemma 5.6

Assume that \((\alpha ,\gamma )\) satisfy (2.6). Then,

$$\begin{aligned} M(\infty )\doteq \lim _{t\rightarrow \infty } M(t) \text { exists}\quad \mathbb {P}- \text {a.s.} \end{aligned}$$

Proof

Since M(t) is nondecreasing and is a submartingale with respect to its own filtration, if

$$\begin{aligned} \sup _{t>0} \mathbb {E}[M(t)]<\infty , \end{aligned}$$
(5.17)

then from the submartingale convergence theorem ([20, Theorem 1.3.15]), we know that \( M(\infty )\doteq \lim _{t\rightarrow \infty } M(t)\) exists \(\mathbb {P}-\) a.s. From Theorem 5.1, the uniform-in-t sub-exponential tail bound on M(t) ensures that (5.17) holds. \(\square \)

We will next study the tail behavior of \( M^* \doteq M(\infty )\).

Theorem 5.2

Assume that \((\alpha ,\gamma )\) satisfy (2.6). Then,

$$\begin{aligned} \lim _{x\rightarrow \infty } \frac{1}{x^{2(1-H)}}\log \mathbb {P}(M^* >x)= - \theta ^*, \end{aligned}$$
(5.18)

where

$$\begin{aligned} \theta ^* \doteq \frac{ k^{2H}}{2H^{2H} (1-H)^{2(1-H)}}. \end{aligned}$$
(5.19)

Proof

We first prove the lower bound. For \(\lambda >0\),

$$\begin{aligned} \liminf _{x\rightarrow \infty } \frac{1}{x^{2(1-H)}}\log \mathbb {P}\left( M^*>x\right)&= \liminf _{\varepsilon \rightarrow 0} \frac{\varepsilon ^{2(1-H)}}{\lambda ^{2(1-H)}}\log \mathbb {P}\left( M^*> \lambda \varepsilon ^{-1}\right) \\&\ge \liminf _{\varepsilon \rightarrow 0} \frac{\varepsilon ^{2(1-H)}}{\lambda ^{2(1-H)}}\log \mathbb {P}\left( X(\varepsilon ^{-1})-k\varepsilon ^{-1}> \lambda \varepsilon ^{-1}\right) \\&\ge \liminf _{\varepsilon \rightarrow 0} \frac{\varepsilon ^{2(1-H)}}{\lambda ^{2(1-H)}}\log \mathbb {P}\left( \varepsilon X(\varepsilon ^{-1})-k> \lambda \right) \\&\ge \liminf _{\varepsilon \rightarrow 0}\frac{\varepsilon ^{2(1-H)}}{\lambda ^{2(1-H)}}\log \mathbb {P}\left( X(1)> \varepsilon ^{H-1}(\lambda +k)\right) . \end{aligned}$$

In the above, we used the fact that \(X(\varepsilon ^{-1}){\mathop {=}\limits ^{\textrm{d}}}\varepsilon ^{-H}X(1)\). Since X(1) is a Gaussian random variable with zero mean and unit variance (recall that the choice of c ensures this), we have

$$\begin{aligned} \frac{1}{\lambda ^{2(1-H)}}\liminf _{\varepsilon \rightarrow 0}\varepsilon ^{2(1-H)}\log \mathbb {P}\left( X(1)> \varepsilon ^{H-1}(\lambda +k)\right)&\ge - \frac{(\lambda + k)^2}{2\lambda ^{2(1-H)}}, \text { for every }\lambda>0,\\ \implies \liminf _{x\rightarrow \infty } \frac{1}{x^{2(1-H)}}\log \mathbb {P}\left( M^*>x\right)&\ge -\inf _{\lambda >0} \frac{(\lambda + k)^2}{2\lambda ^{2(1-H)}}=-\theta ^*. \end{aligned}$$

A simple computation gives us that the above infimum is \(\theta ^*\) and attained at \(\lambda = \frac{1-H}{H}k\).
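This infimum and its minimizer can also be checked numerically; the following minimal Python sketch uses illustrative values of k and H (the names are ours).

\begin{verbatim}
import numpy as np

k, H = 1.0, 0.7                                    # illustrative
lam = np.linspace(1e-3, 50.0, 1_000_000)
vals = (lam + k) ** 2 / (2.0 * lam ** (2 * (1 - H)))
theta_star = k ** (2 * H) / (2.0 * H ** (2 * H) * (1 - H) ** (2 * (1 - H)))
print(vals.min(), theta_star)                      # agree: the infimum is theta*
print(lam[np.argmin(vals)], (1 - H) / H * k)       # minimizer lambda = (1-H)k/H
\end{verbatim}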

To prove the upper bound, we again write for \(\lambda >0\),

$$\begin{aligned} \limsup _{x\rightarrow \infty } \frac{1}{x^{2(1-H)}}\log \mathbb {P}\left( M^*>x\right)&= \limsup _{\varepsilon \rightarrow 0} {\varepsilon ^{2(1-H)}}\log \mathbb {P}\left( M^*> \varepsilon ^{-1}\right) . \end{aligned}$$

Choose \(T_0> \frac{H}{k(1-H)}\). Clearly, for any \(\varepsilon >0\),

$$\begin{aligned}&\mathbb {P}\left[ M^*> \varepsilon ^{-1}\right] \\&\quad = \mathbb {P}\left( \sup _{0\le s\le T_0\varepsilon ^{-1}}(X(s)-ks)> \varepsilon ^{-1}\right) \\&\qquad + \mathbb {P}\left( \sup _{0\le s\le T_0\varepsilon ^{-1}}(X(s)-ks)\le \varepsilon ^{-1}, \sup _{s>T_0\varepsilon ^{-1}}( X(s)-ks)> \varepsilon ^{-1}\right) . \end{aligned}$$

We now compute the above terms individually,

$$\begin{aligned}&\mathbb {P}\left( \sup _{0\le s\le T_0 \varepsilon ^{-1}}(X(s)-ks)> \varepsilon ^{-1}\right) \\&\quad = \mathbb {P}\left( \sup _{0\le s\le T_0}(\varepsilon ^{1-H} X(s)-ks)> 1 \right) , \text { from self-similarity of }X. \end{aligned}$$

and, as in the proof of Lemma 4.1,

$$\begin{aligned}&\limsup _{\varepsilon \rightarrow 0}{\varepsilon ^{2(1-H)}}\log \mathbb {P}\left( \sup _{0\le s\le T_0}(\varepsilon ^{1-H} X(s)-ks)> 1\right) \\&\quad \le - \frac{k^{2H}}{2H^{2H} (1-H)^{2(1-H)}} =-\theta ^*. \end{aligned}$$

In the above, we applied Lemma 4.1, for \(T=T_0\) and \(\lambda =1\).

We now estimate

$$\begin{aligned}&\mathbb {P}\left( \sup _{0\le s\le T_0\varepsilon ^{-1}}(X(s)-ks)\le \varepsilon ^{-1}, \sup _{s>T_0\varepsilon ^{-1}}( X(s)-ks)> \varepsilon ^{-1}\right) \\&\quad \le \mathbb {P}\left( \sup _{s>T_0\varepsilon ^{-1}}( X(s)-ks)> \varepsilon ^{-1}\right) \\&\quad \le \mathbb {P}\left( \sup _{s>\lfloor T_0\varepsilon ^{-1}\rfloor }( X(s)-ks)> \varepsilon ^{-1}\right) \\&\quad = \mathbb {P}\left( \cup _{n> \lfloor T_0\varepsilon ^{-1}\rfloor }\bigg \{\sup _{n-1< s\le n }( X(s)-ks)> \varepsilon ^{-1}\bigg \}\right) \\&\quad \le \sum _{n= \lfloor T_0\varepsilon ^{-1}\rfloor +1}^\infty \mathbb {P}\left( \sup _{n-1<s\le n}( X(s)-ks)> \varepsilon ^{-1}\right) . \end{aligned}$$

In the fourth line above, we partitioned \((\lfloor T_0\varepsilon ^{-1}\rfloor ,\infty )\) into sets of the form \((n-1,n]\), for integer \(n>\lfloor T_0\varepsilon ^{-1}\rfloor \).

In the following, we bound the individual terms. To that end, define

$$\begin{aligned} U(t)\doteq \sup _{t-1<s\le t}( X(s)-ks). \end{aligned}$$

We have

$$\begin{aligned}&\limsup _{\varepsilon \rightarrow 0}\varepsilon ^{2(1-H)}\log \mathbb {P}\left( \varepsilon U(\varepsilon ^{-1})> \lambda \right) \nonumber \\&\quad = \limsup _{\varepsilon \rightarrow 0}\varepsilon ^{2(1-H)}\log \mathbb {P}\left( \sup _{1-\varepsilon <s\le 1}( \varepsilon ^{1-H}X(s)-ks)> \lambda \right) \nonumber \\&\quad \le \limsup _{\varepsilon \rightarrow 0}\varepsilon ^{2(1-H)}\log \mathbb {P}\left( \sup _{1-\delta \le s\le 1}( \varepsilon ^{1-H}X(s)-ks) > \lambda \right) \nonumber \\&\quad \le -\inf _{1-\delta \le s\le 1} \frac{(\lambda + ks)^2 }{2 s^{2H}}, \end{aligned}$$
(5.20)

where \(0<\delta <1\). In the first equality, we used the following:

$$\begin{aligned} U(\varepsilon ^{-1})&= \sup _{\varepsilon ^{-1}-1<s\le \varepsilon ^{-1}}( X(s)-ks)\\&{\mathop {=}\limits ^{\textrm{d}}}\varepsilon ^{-1}\sup _{1-\varepsilon<s\le 1} ( \varepsilon ^{1-H}X(s)-ks), \end{aligned}$$

where we changed s to \(\varepsilon ^{-1}s\) and used the self-similarity of X. The inequality in (5.20) is obtained in the same way as in the proof of Lemma 4.1. Taking \(\delta \downarrow 0\), we have

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0}\varepsilon ^{2(1-H)}\log \mathbb {P}\left( \varepsilon U(\varepsilon ^{-1})> \lambda \right) \le - \frac{(\lambda + k)^2}{2}. \end{aligned}$$

Therefore, for \(0<\delta <k^2/2\), there exists \(\varepsilon _0>0\) such that for every \(\varepsilon <\varepsilon _0\),

$$\begin{aligned}&\sum _{n= \lfloor T_0\varepsilon ^{-1}\rfloor }^\infty \mathbb {P}\left( \sup _{n<s\le n+1}( X(s)-ks)> \varepsilon ^{-1}\right) \\&\quad \le \sum _{n= \lfloor T_0\varepsilon ^{-1}\rfloor }^\infty \mathbb {P}\left( n^{-1}\sup _{n<s\le n+1}( X(s)-ks)> 0\right) \\&\quad \le \sum _{n= \lfloor T_0\varepsilon ^{-1}\rfloor }^\infty \exp \left( -n^{2(1-H)}\Big (\frac{k^2}{2}-\delta \Big ) \right) \\&\quad \le \exp \left( -\lfloor T_0\varepsilon ^{-1}\rfloor ^{2(1-H)}\Big (\frac{k^2}{2}-\delta \Big ) \right) \\&\quad \sum _{n= \lfloor T_0\varepsilon ^{-1}\rfloor }^\infty \exp \left( -(n^{2(1-H)}-\lfloor T_0\varepsilon ^{-1}\rfloor ^{2(1-H)})\Big (\frac{k^2}{2}-\delta \Big ) \right) \\&\quad \le \exp \left( -\lfloor T_0\varepsilon ^{-1}\rfloor ^{2(1-H)}\Big (\frac{k^2}{2}-\delta \Big ) \right) C, \end{aligned}$$

for some constant \(C>0\). This gives us

$$\begin{aligned}&\limsup _{\varepsilon \rightarrow 0} \varepsilon ^{2(1-H)}\log \left( \sum _{n= \lfloor T_0\varepsilon ^{-1}\rfloor }^\infty \mathbb {P}\left( \sup _{n<s\le n+1}( X(s)-ks)> \varepsilon ^{-1}\right) \right) \\&\quad \le -T_0^{2(1-H)}\Big (\frac{k^2}{2}-\delta \Big ). \end{aligned}$$

Putting all the terms together, we have

$$\begin{aligned}&\limsup _{x\rightarrow \infty } \frac{1}{x^{2(1-H)}}\log \mathbb {P}\left( M^*>x\right) \\&\quad = \limsup _{\varepsilon \rightarrow 0} {\varepsilon ^{2(1-H)}}\log \mathbb {P}\left( M^*> \varepsilon ^{-1}\right) \\&\quad \le \max \left\{ \limsup _{\varepsilon \rightarrow 0}{\varepsilon ^{2(1-H)}}\log \mathbb {P}\left( \sup _{0\le s\le T_0}(\varepsilon ^{1-H} X(s)-ks)> 1\right) ,\right. \\&\quad \left. \limsup _{\varepsilon \rightarrow 0} \varepsilon ^{2(1-H)}\log \left( \sum _{n= \lfloor T_0\varepsilon ^{-1}\rfloor }^\infty \mathbb {P}\left( \sup _{n<s\le n+1}( X(s)-ks)> \varepsilon ^{-1}\right) \right) \right\} \\&\quad \le \max \Big \{-\theta ^*, -T_0^{2(1-H)}\Big (\frac{k^2}{2}-\delta \Big )\Big \}. \end{aligned}$$

Now, we take \(T_0\uparrow \infty \) (the second term goes to \(-\infty \)), to get the result. \(\square \)

We now study the tail asymptotics and long-time behavior of \(\{V(t)\}_{t\in \mathbb {R}_+}\).

Theorem 5.3

Assume that \(\alpha >\frac{\gamma }{2}\). For every \(K>0\), there exist \(t_0=t_0(K,\delta _0,k,\alpha ,\gamma )\) and \(Q=Q(K,\delta _0,k,\alpha ,\gamma )\) such that the following holds.

$$\begin{aligned} \mathbb {P}\left( V(t)>\rho \right) \le \exp \bigg ( -\frac{1}{2}\bigg (\frac{ \frac{1}{2}\rho ^{1-H}{\widehat{\Delta }}-\Delta }{\Lambda }\bigg )^2 \bigg ) + e^{-\frac{K^2}{2}}, \end{aligned}$$
(5.21)

for \(t>t_0\) and \(\rho >Q\). Here, \({\widehat{\Delta }}\) is as given in Theorem 5.1.

Proof

Consider the following set:

$$\begin{aligned} S=S(K,\delta _0 )=\left\{ \omega : \sup _{1-\delta _0\le s\le 1}\frac{|X(1)(\omega )-X(s)(\omega )|}{|1-s|^{\alpha -\frac{\gamma }{2}}}\le C(K)\right\} . \end{aligned}$$
(5.22)

Here, C(K) is the constant appearing in Lemma 5.4. On the event S, we follow an argument similar to that in the proof of Theorem 5.1. We fix t throughout the proof after choosing it large enough, and we suppress the dependence on \(\omega \) for all the random processes that follow. We now show that on S,

$$\begin{aligned} \max _{1-\delta _0\le v\le 1} (t^H Z(1) - t^H Z(v)-kt(1-v)) \end{aligned}$$

is less than a positive constant Q (uniformly in large t). This implies that the maximizers for

$$\begin{aligned} \max _{0\le v\le 1} (t^H Z(1) - t^H Z(v)-kt(1-v)) \end{aligned}$$

conditioned on the event

$$\begin{aligned} \Big \{\omega : \max _{0\le v\le 1} (t^H Z(1) - t^H Z(v)-kt(1-v))>Q\Big \}\cap S \end{aligned}$$

are less than \(1-\delta _0\). Indeed, if

$$\begin{aligned} \max _{0\le v\le 1} (t^H Z(1) - t^H Z(v)-kt(1-v))>Q, \end{aligned}$$

then from the hypothesis,

$$\begin{aligned}{} & {} \max _{{1-\delta _0\le v\le 1}} (t^H Z(1) - t^H Z(v)-kt(1-v))\le Q<\max _{0\le v\le 1} (t^H Z(1)\\{} & {} \quad - t^H Z(v)-kt(1-v)). \end{aligned}$$

This implies that the maximizers are strictly less than \(1-\delta _0\). To that end, we recall that on the event S,

$$\begin{aligned} Z(1)-Z(v)\le C (1-v)^{\alpha -\frac{\gamma }{2}}, \end{aligned}$$
(5.23)

for \({1-\delta _0\le v\le 1}\). Hence, on S,

$$\begin{aligned}{} & {} (t^H Z(1) - t^H Z(v)-kt(1-v))\le C (1-v)^{\alpha -\frac{\gamma }{2}}t^H- k(1-v)t,\\{} & {} \quad \text { for } 1-\delta _0\le v\le 1. \end{aligned}$$

The maximum of the right-hand side is attained at

$$\begin{aligned} v=1- \min \bigg \{\delta _0, \left( \frac{kt}{Ct^H}\right) ^{\frac{1}{1+\frac{\gamma }{2}-\alpha }} \bigg \}. \end{aligned}$$

For

$$\begin{aligned} t>t_0\doteq \bigg (\frac{\delta _0^{{1+\frac{\gamma }{2}-\alpha }}C}{k}\bigg )^{\frac{1}{1-H}} \end{aligned}$$

(this ensures that the maximum is attained at \(v=1-\delta _0\)), we have

$$\begin{aligned} \max _{{1-\delta _0\le v\le 1}} (t^H Z(1) - t^H Z(v)-kt(1-v))&\le C \delta _0^{\alpha -\frac{\gamma }{2}}t^H- k\delta _0t\\&\le (1-H)H^{\frac{H}{1-H}}\left( C \delta _0^{\alpha -\frac{\gamma }{2}}\right) ^\frac{1}{1-H} (k\delta _0)^{\frac{H}{H-1}}\doteq Q. \end{aligned}$$

It is thus clear that for

$$\begin{aligned} \max _{{0\le v\le 1}} (t^H Z(1) - t^H Z(v)-kt(1-v))> \rho >Q, \end{aligned}$$

the maximizer cannot be in \([1-\delta _0,1]\).

Therefore, for \(\rho >Q\) and \(t>t_0\),

$$\begin{aligned} \mathbb {P}\left( V(t)>\rho \right)&=\mathbb {P}\left( \max _{0\le v\le 1} \left( t^H Z(1)-t^HZ(v) - kt(1-v)\right)> \rho \right) \nonumber \\&= \mathbb {P}\left( \Big \{\max _{0\le v\le 1} \left( t^H Z(1)-t^HZ(v) - kt(1-v)\right)> \rho \Big \}\cap S \right) \nonumber \\&\quad +\mathbb {P}\left( \Big \{\max _{0\le v\le 1} \left( t^H Z(1)-t^HZ(v) - kt(1-v)\right)> \rho \Big \}\cap S^c \right) \nonumber \\&\le \mathbb {P}\left( t^H \max _{0\le v\le 1-\delta _0}(Z(1)-Z(v))- kt \delta _0>\rho \right) + \mathbb {P}(S^c)\nonumber \\&\le \mathbb {P}\left( \max _{0\le v\le 1-\delta _0}(Z(1)-Z(v))> \frac{\rho +k\delta _0 t}{t^H}\right) + \mathbb {P}(S^c)\nonumber \\&\le \mathbb {P}\left( \max _{0\le v\le 1-\delta _0}(Z(1)-Z(v))> \min _{t>0}\left\{ \frac{\rho +k\delta _0 t}{t^H}\right\} \right) + \mathbb {P}(S^c)\nonumber \\&= \mathbb {P}\left( \max _{0\le v\le 1-\delta _0}(Z(1)-Z(v))> \frac{\rho ^{1-H} (k\delta _0)^H}{(1-H)^{1-H}H^H}\right) + \mathbb {P}(S^c) \end{aligned}$$
(5.24)
$$\begin{aligned}&\le \mathbb {P}\left( \max _{0\le v\le 1-\delta _0}Z(v)> \frac{\rho ^{1-H} (k\delta _0)^H}{2(1-H)^{1-H}H^H}\right) +\mathbb {P}(S^c) \end{aligned}$$
(5.25)
$$\begin{aligned}&\le \exp \bigg ( -\frac{1}{2}\bigg (\frac{\frac{1}{2}\rho ^{1-H} \widehat{\Delta }-\Delta }{\Lambda }\bigg )^2 \bigg )+ \exp \Big (-\frac{K^2}{2}\Big ). \end{aligned}$$
(5.26)

Above, \(\widehat{\Delta }\) is as in the hypothesis of Theorem 5.1. To get (5.24), we used the following: for \(t>t_0\) and \(\rho >Q\), from the above analysis,

$$\begin{aligned}&\mathbb {P}\left( \Big \{\max _{0\le v\le 1} \left( t^H Z(1)-t^HZ(v) - kt(1-v)\right)> \rho \Big \}\cap S \right) \\&\quad =\mathbb {P}\left( \Big \{\max _{0\le v\le 1-\delta _0} \left( t^H Z(1)-t^HZ(v) - kt(1-v)\right) > \rho \Big \}\cap S \right) . \end{aligned}$$

To get (5.25), we used

$$\begin{aligned} \max _{0\le v\le 1-\delta _0}(Z(1)-Z(v))\le 2\max _{0\le v\le 1-\delta _0}Z(v). \end{aligned}$$

Finally, to get (5.26), we applied Lemmas 5.3 and 5.4. \(\square \)

Corollary 5.1

The laws of the \(\mathbb {R}_+\)-valued random variables \(\{V(t)\}\) have a weak limit point as \(t\rightarrow \infty \).

Proof

From Theorem 5.3, it is clear that for any \(\epsilon >0\), there exist \(\rho _0>0\) and \(t_0>0\) such that for all \(t>t_0\), the upper bound in (5.21) with \(\rho =\rho _0\) is less than \(\epsilon \). From this and Prohorov's theorem, we obtain the existence of a weak limit point of the laws of V(t) as \(t\rightarrow \infty \). \(\square \)

In the following, without loss of generality, we assume that, along every convergent subsequence, V(t) converges almost surely to the respective limit point.

Theorem 5.4

Let \(V^*\) be a weak limit point of \(\{V(t)\}_{t\in \mathbb {R}_+}\) as \(t\rightarrow \infty \). Then,

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \varepsilon ^{2(1-H)}\log \mathbb {P}\left( V^*>\varepsilon ^{-1} \right) = -\inf _{0\le s\le 1} \frac{(k(1-s)+1)^2}{v_1(1,s) + v_2(1,s)}. \end{aligned}$$

Proof

We have already seen from Lemma 5.5 that for \(t>0\),

$$\begin{aligned} V(t){\mathop {=}\limits ^{\textrm{d}}}t\max _{0\le v\le 1} \left( t^{H-1} Z(1)-t^{H-1}Z(v) - k(1-v)\right) \doteq {\bar{V}}(t). \end{aligned}$$

Now we consider a sequence \(t_n\uparrow \infty \) such that \(V(t_n)\) converges weakly to \(V^*\). From the above equality of laws, \({\bar{V}}(t_n)\) also converges weakly to \(V^*\). From the Skorohod representation theorem, we can, without loss of generality, assume that

$$\begin{aligned} {\bar{V}}(t_n) \text { converges to } V^*,\quad \mathbb {P}-\text {a.s.} \end{aligned}$$

Therefore, we have

$$\begin{aligned} V^*= \lim _{n\rightarrow \infty }{\bar{V}}(t_n)= \lim _{n\rightarrow \infty } t_n\max _{0\le v\le 1} \left( t_n^{H-1} Z(1)-t_n^{H-1}Z(v) - k(1-v)\right) ,\quad \mathbb {P}-\text { a.s}. \end{aligned}$$

Now we write \(t_n=\varepsilon _n^{-1}\) and treat \(t_n\rightarrow \infty \) as \(\varepsilon _n\rightarrow 0\). In other words, we have

$$\begin{aligned} V^*= \lim _{\varepsilon _n \rightarrow 0}{\bar{V}}(\varepsilon _n^{-1})= \lim _{\varepsilon _n\rightarrow 0} \varepsilon _n^{-1}\max _{0\le v\le 1} \left( \varepsilon _n^{1-H} Z(1)-\varepsilon _n^{1-H}Z(v) - k(1-v)\right) ,\quad \mathbb {P}-\text {a.s}. \end{aligned}$$
(5.27)

From Theorem 4.1, we know that \(\varepsilon {\bar{V}}(\varepsilon ^{-1})\) satisfies an LDP. From (5.27), we also know that

$$\begin{aligned} |V^*- {\bar{V}}(\varepsilon _n^{-1})|=f(\varepsilon _n), \end{aligned}$$

where f is a deterministic positive function such that \(f(x)\rightarrow 0\), as \(x \rightarrow 0\), \( \mathbb {P}-\) a.s. Then, we have \( |\varepsilon _n V^*- \varepsilon _n{\bar{V}}(\varepsilon _n^{-1})|=\varepsilon _n f(\varepsilon _n).\)

Now we are in a position to derive the tail behavior of \(V^*\):

$$\begin{aligned} \limsup _{\varepsilon _n\rightarrow 0} \varepsilon _n^{2(1-H)}\log \mathbb {P}\left( \varepsilon _n V^*>1 \right) \le \limsup _{\varepsilon _n\rightarrow 0} \varepsilon _n^{2(1-H)}\log \mathbb {P}\left( \varepsilon _n {\bar{V}}(\varepsilon _n^{-1})> 1 -\varepsilon _n f(\varepsilon _n) \right) . \end{aligned}$$

Similarly,

$$\begin{aligned} \liminf _{\varepsilon _n\rightarrow 0} \varepsilon _n^{2(1-H)}\log \mathbb {P}\left( \varepsilon _n {\bar{V}}(\varepsilon _n^{-1})>1 \right) \le \liminf _{\varepsilon _n\rightarrow 0} \varepsilon _n^{2(1-H)}\log \mathbb {P}\left( \varepsilon _n V^*> 1 -\varepsilon _n f(\varepsilon _n) \right) . \end{aligned}$$

From Theorem 4.1, we have

$$\begin{aligned}&\lim _{\varepsilon _n\rightarrow 0} \varepsilon _n^{2(1-H)}\log \mathbb {P}\left( \varepsilon _n V^*>1 \right) = \lim _{\varepsilon _n\rightarrow 0} \varepsilon _n^{2(1-H)}\log \mathbb {P}\left( \varepsilon _n {\bar{V}}(\varepsilon _n^{-1})> 1 \right) \\&\quad = -\inf _{0\le s\le 1} \frac{(k(1-s)+1)^2}{v_1(1,s) + v_2(1,s)}. \end{aligned}$$

Since the right-hand side of the above equation is independent of the sequence \(\varepsilon _n\rightarrow 0\), we can replace \(\varepsilon _n\) in the above equation with \(\varepsilon \). This gives us

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \varepsilon ^{2(1-H)}\log \mathbb {P}\left( V^*>\varepsilon ^{-1} \right) =-\inf _{0\le s\le 1} \frac{(k(1-s)+1)^2}{v_1(1,s) + v_2(1,s)}. \end{aligned}$$

This completes the proof. \(\square \)

5.1 Alternative proof of Theorem 5.2 using the results of Hüsler and Piterbarg [16]

The proof of Theorem 5.2 uses the large deviation asymptotics of the processes \(\{M^\varepsilon \}_{\varepsilon >0}\) (Lemma 4.1) and \(\{V^\varepsilon \}_{\varepsilon >0}\) (Theorem 4.1). But to use these results, it was necessary to first establish the existence of the limit points of \(\{M(t)\}_{t>0} \) and \(\{V(t)\}_{t>0}\), which relied on the tail bounds in Theorems 5.1 and 5.3, respectively. Alternatively, the proof can be given as a direct application of a result of Hüsler and Piterbarg [16]. Before we state the result in [16], we recall the following definition: a centered self-similar Gaussian process (with Hurst parameter \(0<H<1\)) with continuous sample paths \(\{Z(t)\}_{t>0}\) is called locally stationary self-similar if, for some positive K and \(0<\eta \le 2\),

$$\begin{aligned} \lim _{ {\mathop {t_2\rightarrow t}\limits ^{t_1\rightarrow t}} } \frac{\mathbb {E}\Big [\big (Z(t_1)t_1^{-H}-Z(t_2)t_2^{-H}\big )^2\Big ]}{|t_1-t_2|^\eta }=Kt^{-2H}. \end{aligned}$$
(5.28)

Theorem 5.5

[16, Theorem 1] Suppose that \(\{Z(t)\}_{t>0}\) is a locally stationary self-similar Gaussian process. Then, as \( \lambda \rightarrow \infty ,\)

$$\begin{aligned} \mathbb {P}\bigg ( \sup _{t\ge 0} \big (Z(t)-kt\big )>\lambda \bigg )\sim C_\eta (A)^{\frac{1}{H}-\frac{1}{2}} \lambda ^H \Psi (A\lambda ^{1-H}). \end{aligned}$$

Here, \(C_\eta \) is a positive constant (its explicit form is given in [16]) and

$$\begin{aligned} A\doteq \frac{ k^{H}}{H^{H} (1-H)^{(1-H)}} \end{aligned}$$

and \(\Psi \) is the tail distribution function of standard normal random variable.

In Lemma A.2, we show that the GFBM \(X(\cdot )\) is locally stationary self-similar. Hence, from Theorem 5.5, with \(A= \sqrt{2\theta ^*}\), we have the following:

$$\begin{aligned} \lim _{\lambda \rightarrow \infty } \frac{1}{\lambda ^{2(1-H)}}\log \mathbb {P}\bigg (\sup _{t>0} \big (Z(t)-kt\big )>\lambda \bigg )= \lim _{\lambda \rightarrow \infty } \frac{1}{\lambda ^{2(1-H)} } \log \Psi (\sqrt{2\theta ^*} \lambda ^{1-H})= -\theta ^*. \end{aligned}$$
(5.29)

In the last equality, we used the tail behavior of a standard normal random variable (\(\Psi \) is its tail distribution function). This shows that [16, Theorem 1] can be applied to prove Theorem 5.2.
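The last step in (5.29) amounts to the standard normal tail estimate \(\log \Psi (x)\sim -x^2/2\). This can also be observed numerically; the minimal Python sketch below uses illustrative k and H (names ours), with \(A=\sqrt{2\theta ^*}\).

\begin{verbatim}
import numpy as np
from scipy.stats import norm

k, H = 1.0, 0.7                                    # illustrative
theta_star = k ** (2 * H) / (2.0 * H ** (2 * H) * (1 - H) ** (2 * (1 - H)))
A = np.sqrt(2.0 * theta_star)
for lam in (10.0, 1e2, 1e3, 1e4):
    print(lam, np.log(norm.sf(A * lam ** (1 - H))) / lam ** (2 * (1 - H)))
# the printed ratios approach -theta_star from below as lam grows, as in (5.29)
\end{verbatim}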