INTRODUCTION

In what follows, we assume that a standard Brownian motion \(W(t) \) and a fractional Brownian motion \(B^H(t) \) with Hurst exponent \(H\in (1/2,1) \), defined for \(t\geq 0 \) and independent of each other, are given on a probability space \((\Omega ,\mathcal {F},\mathsf {P})\).

By a one-dimensional stochastic differential equation of the mixed type we mean the equation

$$ dx(t)=f\big (t,x(t)\big )\thinspace dt+g\big (t, x(t)\big )\thinspace dW(t)+\sigma \big (t, x(t)\big )\thinspace dB^H(t),\quad t\ge 0, $$
(1)

where \(f\colon [0,\infty )\times \mathbb {R}\to \mathbb {R} \), \(g\colon [0,\infty )\times \mathbb {R}\to \mathbb {R} \), and \(\sigma \colon [0,\infty )\times \mathbb {R}\to \mathbb {R}\) are deterministic functions. In what follows, we assume that \(f(t, 0)=0\), \(g(t, 0)=0 \), and \(\sigma (t, 0)=0 \) for all \(t\ge 0 \).

To define a solution of Eq. (1), one interprets this equation as an integral equation. There are several ways to define integrals over \(dW \) and over \(dB^{H} \) [1, Chs. 2–5; 2, Chs. 1, 2]. In the present paper, the integral over \(dW \) is the Itô stochastic integral, and the integral over \(dB^H \) is the pathwise Riemann–Stieltjes integral introduced in the paper [3] and often referred to as the pathwise Young integral. The different nature of these integrals accounts for certain difficulties in studying Eqs. (1); for example, the pathwise Young integral, generally speaking, does not have zero mean and is not a semimartingale. Theorems that give sufficient conditions for the existence and uniqueness of solutions of Eqs. (1) were first obtained in [4] for equations without drift and in [5] for Eqs. (1) of the general form. Later on, the conditions for the existence of solutions of Eqs. (1) were considerably weakened, and the continuous dependence of solutions on the initial data was proved under the same conditions that ensure the existence of the solutions [6,7,8,9,10,11]. Moreover, as shown in the papers [12,13,14,15], invoking the theory of rough paths and Gubinelli’s integration theory permits one to study the properties of solutions of equations of the form (1) from a class wider than the one indicated above, namely, equations containing fractional Brownian motions with Hurst exponents \(H\in (1/3, 1) \).

The stability of the Itô equations (1) and systems of such equations (i.e., equations not containing a fractional Brownian motion, \(\sigma \equiv 0\)) has been studied rather thoroughly, and an extensive literature is devoted to it (see, e.g., [16,17,18]). In particular, the monograph [17] describes a stability analysis method that uses Lyapunov functions and is based on the Markov property of solutions \(x(t) \) of the Itô equations. In turn, sufficient conditions for the stability of the zero solutions of the linear Itô equations (1) and systems of such equations were obtained in the monograph [17].

The stability analysis of Eqs. (1) of the general form is an extremely hard problem. Significant difficulties are encountered when trying to extend the scope of the Lyapunov function method to the class of Eqs. (1) with coefficient \(\sigma \not \equiv 0 \): the Young integral does not have zero mean, and there exist no estimates for this integral similar to those for the Itô stochastic integral. In addition, the process \(B^{H}(t)\) has large variance \(t^{2H} \), \(2H>1\), which implies certain restrictions on the coefficient \(\sigma \) and complicates the study of stability properties. The stability of a fairly general class of equations (1) is dealt with in [19, 20]. The conditions obtained in [19] ensure the local (i.e., on a finite interval \([0,T] \)) almost sure exponential stability of the zero solution of the autonomous equation (1) not containing \(W(t) \) (\(g\equiv 0\)) as well as the global almost sure exponential stability for the case in which the coefficient multiplying \(dB^{H} \) is linear, \(\sigma (x)=\gamma x \), \(\gamma \in \mathbb {R} \). The conditions found in the paper [20] guarantee the \((\alpha ,p) \)-asymptotic stability in probability and the \((\alpha ,p) \)-attraction of solutions of Eqs. (1) with isolated linear part, \(f(t,x)=A(t)x+F(t,x) \).

In this paper, we restrict our considerations to the case of linear homogeneous equations of the mixed type

$$ dx(t)=a(t)x(t)\thinspace dt+b(t)x(t)\thinspace dW(t)+c(t)x(t)\thinspace dB^H(t),\quad t\ge 0,$$
(2)

where \(a\colon [0,\infty )\to \mathbb {R}\), \(b\colon [0,\infty )\to \mathbb {R}\), and \(c\colon [0,\infty )\to \mathbb {R}\) are deterministic functions. Special attention is paid to Eqs. (2) that are time-invariant,

$$ dx(t)=ax(t)\thinspace dt+bx(t)\thinspace dW(t)+cx(t)\thinspace dB^H(t),\quad t\ge 0,$$
(3)

where \(a, b, c\in \mathbb {R}\).

In the present paper, we establish necessary and sufficient conditions for the asymptotic stability in probability, \(p\)-stability, and exponential stability of the zero solution of Eq. (2) generalizing the results for the corresponding Itô equations [17, Ch. 6]. In addition, we obtain an explicit formula expressing the \(p \)th moment, \(p>0 \), of the solution of Eq. (2). The results of this paper can be used, say, in the stability analysis of equations reducible to linear ones (e.g., equations of Bernoulli type [21]) as well as when studying the stability of the zero solution of Eq. (1) by the linear approximation.

1. PRELIMINARIES AND NOTATION

By the symbol \(\mathbb {E}\) we denote the expectation of random variables defined on a probability space \((\Omega ,\mathcal {F}, \mathsf {P})\). The abbreviation “a.s.” is used for the phrase “almost surely,” which means that an assertion holds on a set \(\tilde \Omega \subset \Omega \) of probability measure \(1 \); i.e., \(\mathsf {P}\{\tilde \Omega \}=1 \).

A fractional Brownian motion with Hurst exponent \(H\in (0,1)\) is a centered continuous Gaussian process \(B^H(t)\), \(t\ge 0 \), with covariance function

$$ R_H(t, s):=\mathbb {E}B^H(t)B^H(s)=\frac {1}{2}\big (t^{2H}+s^{2H} -|t-s|^{2H}\big ), \quad s, t\ge 0.$$

For \(H=1/2\), the fractional Brownian motion \( B^{1/2}(t)\) is a Wiener process. In other words, the process \(W(t) \) is the member of the family \(B^H(t) \) corresponding to \(H=1/2 \).
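
As a minimal numerical illustration, a fractional Brownian motion on a finite grid can be sampled directly from the covariance \(R_H \), for example by a Cholesky factorization of the covariance matrix. In the following Python sketch the helper name sample_fbm, the grid, the horizon, and the seed are arbitrary illustrative choices.

```python
import numpy as np

def sample_fbm(H, T=1.0, n=500, seed=0):
    """Sample B^H on a uniform grid of [0, T] via Cholesky factorization
    of the covariance matrix R_H(t_i, t_j)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)                   # strictly positive grid points
    s, u = np.meshgrid(t, t, indexing="ij")
    R = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(R + 1e-12 * np.eye(n))  # tiny jitter for numerical safety
    B = L @ rng.standard_normal(n)
    return np.concatenate(([0.0], t)), np.concatenate(([0.0], B))

# H = 1/2 reproduces a standard Wiener process on the same grid
grid, path = sample_fbm(H=0.7, T=1.0, n=500, seed=0)
```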

Consider the function

$$ \phi (t,s):=H(2H-1)|t-s|^{2H-2},\quad t,s\ge 0. $$

One can readily see that the following representation holds [1, p. 24]:

$$ R_H(t, s)=\int _0^t\int _0^s\phi (u, v)\thinspace dv\thinspace du.$$
(4)

By \(L^2_\phi [0,T]\) we denote the linear space of measurable functions \(f\colon [0,T]\to \mathbb {R} \) such that the Lebesgue integral \(\int \nolimits _0^T\int \nolimits _0^T f(s)f(u)\phi (s, u)\thinspace ds\thinspace du \) is finite. It was proved in the paper [22] that on the linear space of equivalence classes of functions in \( L^2_\phi [0,T]\) one can define the inner product

$$ \langle f, g\rangle _{L^2_\phi ; T}:=\int _0^T\int _0^T f(s)g(u)\phi (s, u)\thinspace ds\thinspace du, \quad f, g\in L^2_\phi [0,T],$$

and accordingly the norm

$$ \|f\|_{L^2_\phi ; T}:=\sqrt {\langle f, f\rangle _{L^2_\phi ; T}} =\left (\thinspace \int _0^T\int _0^T f(s)f(u)\phi (s, u)\thinspace ds\thinspace du\right )^{\!1/2}, \quad f\in L^2_\phi [0,T];$$

this linear space with the inner product \(\langle \thinspace ,\thinspace \rangle _{L^2_\phi ; T} \) is a pre-Hilbert space (it is not complete).
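
For example, for a constant function \(c(s)\equiv c \), the representation (4) gives

$$ \|c\|^2_{L^2_\phi ; t}=c^2\int _0^t\int _0^t\phi (s, u)\thinspace ds\thinspace du=c^2 R_H(t, t)=c^2 t^{2H}; $$

this expression reappears below in the proof of Corollary 2.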

By \(C^\lambda (0, T)\) we denote the normed linear space of functions \(f\colon [0, T]\to \mathbb {R} \) Hölder continuous with exponent \(\lambda \in (0, 1]\); the norm on this space is given by the formula

$$ \|f\|_{C^\lambda ; T}:= \sup \limits _{t\in [0, T]}\big |f(t)\big |+ \sup \limits _{0\le s<t\le T}\frac {\big |f(t)-f(s)\big |}{(t-s)^\lambda }.$$

The space \( C^\lambda (0, T)\) is a Banach space.

The most important property of the fractional Brownian motion \(B^{H}(t) \) used when constructing pathwise integrals is the Hölder property of its sample paths: for each \(\varepsilon \!\in \!(0, H) \), the sample paths of the process \(B^{H}(t) \), \(t\in [0, T]\), a.s. belong to the class \(C^{H-\varepsilon }(0, T) \).

Let \(\alpha \in (0, 1/2)\). By \(W^{\alpha , 1}_0(0, T)\) we denote the space of measurable functions \(f\colon [0, T]\to \mathbb {R}\) such that

$$ \|f\|_{\alpha , 1; T}:= \int _0^T\frac {\big |f(s)\big |}{s^\alpha }\thinspace ds +\int _0^T\int _0^s \frac {\big |f(s)-f(u)\big |}{(s-u)^{\alpha +1}}\thinspace du\thinspace ds<\infty . $$

In what follows, we use the notation \(\|f\|_{\alpha , T}:=\|f\|_{\alpha , 1; T}\) for brevity.
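
For example, for a constant function \(c(s)\equiv c \) the second integral in the definition of \(\|f\|_{\alpha , 1; T} \) vanishes, and

$$ \|c\|_{\alpha , t}=\int _0^t\frac {|c|}{s^\alpha }\thinspace ds=\frac {|c|\thinspace t^{1-\alpha }}{1-\alpha }; $$

this value is used in the proof of Corollary 1 below.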

By \( f|_{Y}\) we denote the restriction of a function \(f\colon X\to \mathbb {R}\) to a set \(Y\subset X\subset \mathbb {R} \).

Definition 1.

A solution of Eq. (1) (in the strong sense) is a process \(x(t) \), \(t\ge 0\), defined on the probability space \( (\Omega ,\mathcal {F},\mathsf {P})\), adapted to the filtration of \( \sigma \)-algebras \(\mathcal {F}_t \) generated by the processes \(W(t) \) and \(B^H(t) \), and having the following properties.

  1.

    There exists an \(\alpha >1-H\) such that the sample paths of the process \(x(t)\) are Hölder continuous with exponent \(\alpha \) a.s.

  2.

    For each \( t\ge 0\), one a.s. has the relation

    $$ x(t)= x(0) +\int _0^t f\big (s,x(s)\big )\thinspace ds +\int _0^t g\big (s,x(s)\big )\thinspace dW(s) +\int _0^t\sigma \big (s,x(s)\big )\thinspace dB^H(s),$$

    where the integral over the process \(W(t)\) is an Itô stochastic integral and the integral over the process \(B^H(t) \) is a pathwise Young integral [5].

Remark 1.

Quite often the integral over the process \(B^H(t)\) is defined as a generalized Stieltjes integral with the use of fractional derivatives of the integrand processes [23]. However, according to Remark 4.1 in [23], the generalized Stieltjes integral \(\int \nolimits _0^T f(t)\thinspace dg(t)\) coincides with the ordinary Riemann–Stieltjes integral (the Young integral) if \(f\in C^\lambda (0, T) \), \(g\in C^\mu (0, T)\), and \(\lambda +\mu >1\).

Further, we introduce the definitions of stability used throughout the paper.

Definition 2.

The zero solution of Eq. (1) is said to be stable in probability if for any \(\varepsilon _1,\varepsilon _2>0 \) there exists a \(\delta =\delta (\varepsilon _1,\varepsilon _2)>0\) such that for each \(t>0 \) and each solution \(x(t) \) of Eq. (1) satisfying the condition \(|x(0)|<\delta \) a.s. one has the inequality

$$ \mathsf {P}\Big \{\big |x(t)\big |>\varepsilon _1\Big \}<\varepsilon _2.$$

Definition 3.

The zero solution of Eq. (1) is said to be asymptotically stable in probability if it is stable in probability and for each \(\varepsilon >0\) there exists a \(\delta =\delta (\varepsilon )>0\) such that for each solution \(x(t) \) of Eq. (1) satisfying the condition \(|x(0)|<\delta \) a.s. one has the relation

$$ \mathsf {P}\Big \{\big |x(t)\big |>\varepsilon \Big \} \xrightarrow [t\to \infty ]{} 0. $$

Definition 4.

The zero solution of Eq. (1) is said to be \(p \)-stable ( \(p>0 \)) if for each \(\varepsilon >0 \) there exists a \(\delta =\delta (\varepsilon )>0\) such that for each \(t>0 \) and each solution \(x(t) \) of Eq. (1) satisfying the condition \(|x(0)|<\delta \) a.s. one has the inequality

$$ \mathbb {E}\big |x(t)\big |^p<\varepsilon .$$

Definition 5.

The zero solution of Eq. (1) is said to be asymptotically \(p \)-stable ( \(p>0 \)) if it is \(p \)-stable and for each \(\varepsilon >0 \) there exists a \(\delta =\delta (\varepsilon )>0\) such that for each solution \(x(t) \) of Eq. (1) satisfying the condition \(|x(0)|<\delta \) a.s. one has the relation

$$ \mathbb {E}\big |x(t)\big |^p \xrightarrow [t\to \infty ]{} 0.$$

Definition 6.

The zero solution of Eq. (1) is said to be exponentially \(p \)-stable ( \(p>0 \)) if there exist constants \(A=A(p)>0 \) and \(\alpha =\alpha (p)>0 \) such that

$$ \mathbb {E}\big |x(t)\big |^p\le A\mathbb {E}\big |x(0)\big |^p e^{-\alpha t}$$

for all \(t>0 \).

As shown in the paper [21], the solution of the linear equation (2) is expressed by the formula

$$ x(t)=x(0)\exp \left (\thinspace \int _0^t\left (a(s)-\frac {1}{2}b^2(s)\right )\thinspace ds +\int _0^t b(s)\thinspace dW(s) +\int _0^t c(s)\thinspace dB^H(s) \right ),\quad t\ge 0, $$
(5)

which can also be obtained by applying the Itô formula for processes with standard and fractional Brownian motions [2, p. 184] to the process

$$ y(t)=y(0)+\int _0^t b(s)\thinspace dW(s)+\int _0^t c(s)\thinspace dB^H(s) $$

and to the function

$$ F(t, y)=\exp \left (\thinspace \int _0^t\left (a(s)-\frac {1}{2}b^2(s)\right )\thinspace ds +y\right ).$$

For convenience, we introduce special notation for the process occurring in the exponent in formula (5),

$$ \nu (t):= \int _0^t\left (a(s)-\frac {1}{2}b^2(s)\right )\thinspace ds +\int _0^t b(s)\thinspace dW(s) +\int _0^t c(s)\thinspace dB^H(s), \quad t\ge 0.$$

Then the solution formula (5) acquires the form \( x(t)=x(0)e^{\nu (t)}\).
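
As a further numerical illustration, the following minimal Python sketch simulates one path of the solution of the time-invariant equation (3) directly through the representation \( x(t)=x(0)e^{\nu (t)}\); it reuses the sample_fbm helper from the sketch above, and the coefficient values, grid, and seeds are arbitrary choices (distinct seeds keep \(W \) and \(B^H \) independent).

```python
import numpy as np

# One path of x(t) = x(0) * exp(nu(t)) for the time-invariant equation (3),
# where nu(t) = (a - b^2/2) t + b W(t) + c B^H(t); illustrative parameters.
a, b, c, H, x0 = -1.0, 0.5, 0.3, 0.7, 1.0
T, n = 5.0, 1000

t, BH = sample_fbm(H, T=T, n=n, seed=1)      # fractional Brownian path
rng = np.random.default_rng(2)               # different seed: W independent of B^H
dW = np.sqrt(np.diff(t, prepend=0.0)) * rng.standard_normal(t.size)
W = np.cumsum(dW)                            # W(0) = 0 since the first increment is 0

nu = (a - 0.5 * b**2) * t + b * W + c * BH   # exponent in formula (5)
x = x0 * np.exp(nu)
```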

2. ASSUMPTIONS

From now on, we always assume that the following conditions are satisfied.

C1.

The processes \(W(t) \) and \(B^{H}(t) \), \(t\ge 0 \), are independent.

C2.

The random variable \( x(0)\) is \(\mathcal {F}_0 \)-measurable and independent of \(W(t) \) and \(B^{H}(t) \).

C3.

The functions \(a(t) \) and \(b(t) \) are continuous for \(t\ge 0 \).

C4.

There exists a \(\lambda >1-H\) such that the function \(c(t)|_{[0, T]} \) belongs to the class \(C^\lambda (0, T) \) for each \(T>0 \).

Note that conditions C2–C4 guarantee the existence of a solution of Eq. (2) and its representation in the form (5).

3. AUXILIARY ASSERTIONS

In what follows, several auxiliary assertions will be useful.

In the following two lemmas, we need to consider an arbitrary sequence

$$ \mathcal {P}_n=\big \{t_0, t_1,\ldots , t_{N_n}\in [0, t]: 0=t_0<t_1<\ldots <t_{N_n}=t\big \}$$
(6)

of partitions of the interval \( [0, t]\) with radii

$$ |\mathcal {P}_n|=\max \big \{t_{i+1}-t_i: i={0,\ldots ,N_n-1}\big \}$$

tending to zero as \(n\to \infty \).

Lemma 1.

If the processes \(W(t) \) and \( B^H(t)\) are independent, then the Itô stochastic integral \(\int \nolimits _0^t b(s)\thinspace dW(s)\) and the pathwise Young integral \(\int \nolimits _0^t c(s)\thinspace dB^H(s)\) are independent as well.

Proof. Consider the integral sums

$$ I^{(W)}_n=\sum _{i=0}^{N_n-1} b(t_i)\big (W(t_{i+1})-W(t_i)\big )\quad \text {and}\quad I^{(B)}_n=\sum _{i=0}^{N_n-1} c(t_i)\big (B^H(t_{i+1})-B^H(t_i)\big ) $$

for the Itô integral \(I^{(W)}=\int \nolimits _0^t b(s)\thinspace dW(s)\) and the Young integral \(I^{(B)}=\int \nolimits _0^t c(s)\thinspace dB^H(s)\), respectively, along the partitions (6). It is well known that \( I^{(W)}_n\) tends to \(I^{(W)} \) in probability and \(I^{(B)}_n \) tends to \(I^{(B)} \) a.s. as \(n\to \infty \).

It is obvious that the sums \(I^{(W)}_n\) and \(I^{(B)}_n \) are linear combinations of components of the vectors \( W_{\mathcal {P}_n}=(W(t_0),\ldots , W(t_{N_n})) \) and \(B_{\mathcal {P}_n}=(B^H(t_0),\ldots , B^H(t_{N_n}))\). The independence of the processes \(W(t)\) and \(B^H(t) \) implies the independence of the vectors \(W_{\mathcal {P}_n}\) and \(B_{\mathcal {P}_n} \), which implies the independence of the integral sums \(I^{(W)}_n \) and \(I^{(B)}_n \). Hence for each \(n\in \mathbb {N} \) one has the relation

$$ F_{(I^{(W)}_n, I^{(B)}_n)}(x_1, x_2)=F_{I^{(W)}_n}(x_1) F_{I^{(B)}_n}(x_2), $$

where \(F_\xi (x)\) is the distribution function of the random variable \(\xi \). Since the vector \((I^{(W)}_n, I^{(B)}_n)\) tends to the vector \((I^{(W)}, I^{(B)}) \) in probability as \(n\to \infty \), and since the convergence in probability implies the convergence in distribution, we have, passing to the limit in the last relation,

$$ F_{(I^{(W)}, I^{(B)})}(x_1, x_2)=F_{I^{(W)}}(x_1) F_{I^{(B)}}(x_2).$$

This implies the independence of the integrals \(I^{(W)}\) and \(I^{(B)} \). The proof of the lemma is complete.

Proposition 1.

Let \(c(s)\not \equiv 0 \) be a function continuous for \(s\ge 0\) , and let \(c|_{[0, t]}\in L^2_\phi [0, t] \) for some \(t>0 \) . Then the Young integral \(\int \nolimits _0^t c(s)\thinspace dB^H(s) \) is a normally distributed random variable with zero mean \(\mu (t)=0 \) and variance

$$ \sigma ^2(t)=\|c\|^2_{L^2_\phi ; t}=\int _0^t\int _0^t c(s)c(u)\phi (s, u)\thinspace ds\thinspace du.$$

Proof. Set \(S:=\|c\|^2_{L^2_\phi ; t}>0 \). Consider the integral sums \(I_n \) along the partitions (6) with the intermediate points \(\tau _i\in [t_i,t_{i+1}] \), \(i={0,\ldots ,N_n-1} \), arising when the mean value theorem is applied to the integrals

$$ \int _{t_i}^{t_{i+1}}\int _{t_j}^{t_{j+1}}\phi (u, v)\thinspace du\thinspace dv =\phi (\tau _i,\tau _j) (t_{i+1}-t_i) (t_{j+1}-t_j);$$

i.e.,

$$ I_n:=\sum _{i=0}^{N_n-1} c(\tau _i)\big (B^H(t_{i+1})-B^H(t_i)\big ) \xrightarrow [n\to \infty ]{}\int _0^t c(s)\thinspace dB^H(s) \quad \text {(a.s.).} $$

Since \(I_n\) is a linear combination of the values \(B^H(t_i) \), \(i={0,\ldots ,N_n} \), of the Gaussian process \(B^H(t) \), we conclude that \(I_n=I_n(\omega ) \) is a normally distributed random variable for each \(n \). Its mean is zero,

$$ \mu _n=\mathbb {E} I_n(t) =\sum _{i=0}^{N_n-1} c(\tau _i)\big (\mathbb {E}B^H(t_{i+1})-\mathbb {E}B^H(t_i)\big ) =0,$$

and the variance is calculated by the formula

$$ \eqalign { \sigma _n^2 &=\mathbb {E} I_n^2(t) =\sum _{i,j=0}^{N_n-1} c(\tau _i)c(\tau _j) \mathbb {E}\big (B^H(t_{i+1})-B^H(t_i)\big )\big (B^H(t_{j+1})-B^H(t_j)\big )\cr &=\sum _{i,j=0}^{N_n-1} c(\tau _i)c(\tau _j)\bigl ( R_H(t_{i+1}, t_{j+1})-R_H(t_{i+1}, t_j)-R_H(t_i, t_{j+1})+R_H(t_i, t_j) \bigr ),} $$

where \(R_H(u, v)\) is the covariance function of the fractional Brownian motion \(B^H(t) \). Applying the representation (4) and the mean value theorem, we obtain

$$ \sigma _n^2 =\sum _{i,j=0}^{N_n-1} c(\tau _i)c(\tau _j) \int _{t_i}^{t_{i+1}}\int _{t_j}^{t_{j+1}}\phi (u, v)\thinspace dv\thinspace du =\sum _{i,j=0}^{N_n-1} c(\tau _i)c(\tau _j)\phi (\tau _i,\tau _j) (t_{i+1}-t_i) (t_{j+1}-t_j). $$

Thus, \(\lim \limits _{n\to \infty }\sigma _n^2=S \).

It remains to show that the a.s. limit \(\lim \limits _{n\to \infty } I_n=I\) is a normally distributed random variable. The a.s. convergence implies the convergence in distribution; therefore, for each \(x\in \mathbb {R} \) we have

$$ F_{I}(x)=\lim \limits _{n\to \infty } F_{I_n}(x) =\lim \limits _{n\to \infty }\frac {1}{\sqrt {2\pi \sigma ^2_n}}\int _{-\infty }^x e^{-y^2/(2\sigma _n^2)}\thinspace dy =\frac {1}{\sqrt {2\pi S}}\lim \limits _{n\to \infty }\int _{-\infty }^x e^{-y^2/(2\sigma _n^2)}\thinspace dy,$$

where \(F_\xi (x) \) is the distribution function of the random variable \(\xi \). Let us consider the function \(f(y,\tau )=e^{-y^2/(2\tau )} \), \(y\in (-\infty , x) \), \(\tau \in [S/2,3S/2] \). Obviously, \(f(y,\tau )\le f(y,3S/2) \) for each \(y \), and

$$ \int _{-\infty }^x f(y,3S/2)\thinspace dy\le \sqrt {2\pi }\sqrt {\frac {3S}{2}};$$

therefore, the integral \(\int \nolimits _{-\infty }^x f(y,\tau )\thinspace dy\) converges uniformly with respect to \(\tau \in [S/2,3S/2]\) for each \(x \). On the other hand, for each \(y \) one has the relation

$$ f^{\prime }_{\tau }(y,\tau )=\frac {y^2}{2\tau ^2}e^{-y^2/(2\tau )} \le \frac {2}{Se}, $$

because \(\max \limits _{z\in \mathbb {R}}z^2e^{-z^2}=1/e\). Hence the finite increment formula implies the inequality

$$ \big |f(y,\tau )-f(y, S)\big |\le 2|\tau -S|/(Se) $$

for each \(y \) and for \(\tau \in [S/2,3S/2] \); it follows that \(f(y,\sigma ^2_n)\to f(y, S) \) uniformly with respect to \(y \) as \(n\to \infty \). Thus, passing to the limit in the integrand, we obtain

$$ F_I(x)=\frac {1}{\sqrt {2\pi S}}\int _{-\infty }^x e^{-y^2/(2S)}\thinspace dy; $$

i.e., the random variable \(I \) obeys the normal distribution law with parameters \((\mu ,\sigma ^2)=(0, S)\), as desired. The proof of the proposition is complete.

The following lemma was proved in the monograph [17, Lemma 6.1].

Lemma 2.

The Itô stochastic integral \( \int \nolimits _0^t b(s)\thinspace dW(s) \) is a.s. representable in the form

$$ \int _0^t b(s)\thinspace dW(s)=\widetilde {W}\big (\tau (t)\big ) $$

for all \( t\ge 0\), where \( \widetilde {W}(\tau )\), \(\tau \ge 0, \) is some other Brownian motion (a Wiener process) and \(\tau (t)=\int \nolimits _0^t b^2(s)\thinspace ds\).

Lemma 3.

Assume that \(c|_{[0, t]}\in W^{\alpha , 1}_0(0, t)\) for some \(t>0 \) and \( \alpha \in (1-H, 1/2)\) . Then the following assertions hold for each \(\varepsilon \in \bigl (0,\alpha -(1-H)\bigr )\) .

  1.

    There exists a constant \(K=K_{H,\varepsilon ,\alpha } \) depending only on \( \varepsilon \) , \( \alpha \) , and \(H \) such that one a.s. has the estimate

    $$ \left |\thinspace \int _0^t c(s)\thinspace dB^H(s)\right |\le K \eta _{H,\varepsilon , t}(\omega ) \|c\|_{\alpha , t} t^{H-\varepsilon +\alpha -1} =: C^{H}_{\varepsilon ,\alpha }(t,\omega ), $$
    (7)

    where \( \eta _{H,\varepsilon , t}(\omega )\) for a given \(t\) is a random variable for which one a.s. has the inequality \( |B^H(s)-B^H(u)|\le \eta _{H,\varepsilon , t}|s-u|^{H-\varepsilon } \) for all \(s, u\in [0,t]\) and which is given by the relation

    $$ \eta _{H,\varepsilon , t}:= \gamma _{H,\varepsilon }\left (\thinspace \int _0^t \int _0^t\frac {\big |B^H(s)-B^H(u)\big |^{2/\varepsilon }} {|s-u|^{2H/\varepsilon }}\thinspace ds\thinspace du \right )^{\!\varepsilon /2} $$

    with some constant \( \gamma _{H,\varepsilon }\) depending only on \(H \) and \( \varepsilon \) .

  2.

    For the random process \(C^{H}_{\varepsilon ,\alpha }(t,\omega )\) defined by the right-hand side of inequality (7), there exists a constant \( L=L_{H,\varepsilon ,\alpha }\) depending only on \(\varepsilon \), \( \alpha \), and \(H \) such that for each number \(M>0\) one has the estimate

    $$ \mathsf {P}\big \{C^{H}_{\varepsilon ,\alpha }(t)\le M\big \} \ge 1-L\left ( \frac {\|c\|_{\alpha , t} t^{H+\alpha -1}}{M} \right )^{\!2/\varepsilon }.$$
    (8)

Proof. The estimate (7) readily follows from the results in the paper [23]. Indeed, first, according to [23, p. 74], one a.s. has the inequality

$$ \left |\thinspace \int _0^t c(s)\thinspace dB^H(s)\right |\le G_{\alpha , t}(\omega )\|c\|_{\alpha , t}, $$

where

$$ G_{\alpha , t}=\frac {1}{\Gamma (1-\alpha )}\sup \limits _{0<s<u<t}\big |D^{1-\alpha }_{u^{-}} B^H_{u^{-}}(s)\big |; $$

here \(\Gamma \) is the gamma function, \( D^{1-\alpha }_{u^{-}}\) is the operator of the left Weyl fractional derivative of order \(1 -\alpha \) [23, pp. 59–60], and \( B^H_{u^{-}}(s)=(B^H(s)-B^H(u-)) 1_{(0, u)}(s) \), where \(1_{(0, u)}(s) \) is the indicator function of the interval \((0, u) \). Second, the estimate derived in [23, Lemma 7.5] gives the inequality

$$ G_{\alpha , t}\le \frac {1}{\Gamma (1-\alpha )\Gamma (\alpha )} \biggl (1+\frac {1}{H-\varepsilon +\alpha -1}\biggr ) \eta _{H,\varepsilon , t} t^{H-\varepsilon +\alpha -1}, $$

which implies the estimate (7).

The estimate (8) is obtained by using the estimate in [23, Lemma 7.4] and the Markov inequality. It follows from the proof of Lemma 7.4 in [23, p. 77] that the inequality \( \mathbb {E} (\eta _{H,\varepsilon , t})^q\le \gamma _{H,\varepsilon }^q\tilde c_{\varepsilon , q} t^{q\varepsilon } \) holds for each \(q\ge 2/\varepsilon \) and for some constant \(\tilde c_{\varepsilon , q}\) depending on \(\varepsilon \) and \(q \). Setting \(q=2/\varepsilon \), we obtain the estimate \(\mathbb {E}(\eta _{H,\varepsilon , t})^{2/\varepsilon }\le L_{H,\varepsilon } t^2 \) for some constant \(L_{H,\varepsilon } \) depending on \(\varepsilon \) and \(H \). Applying the Markov inequality, for each \(M>0 \) we arrive at the inequality

$$ \mathsf {P}\big \{(\eta _{H,\varepsilon , t})^{2/\varepsilon }>M\big \} \le \frac {L_{H,\varepsilon } t^2}{M},$$

which is equivalent to the inequality \(\mathsf {P}\{\eta _{H,\varepsilon , t}>M\}\le {L_{H,\varepsilon } t^2}/{M^{2/\varepsilon }}\), which, in turn, is equivalent to the inequality

$$ \mathsf {P}\big \{C^{H}_{\varepsilon ,\alpha }(t)>M\big \} \le L_{H,\varepsilon } t^2\biggl ( \frac {K\|c\|_{\alpha , t} t^{H-\varepsilon +\alpha -1}}{M} \biggr )^{\!2/\varepsilon } =L\biggl ( \frac {\|c\|_{\alpha , t} t^{H+\alpha -1}}{M} \biggr )^{\!2/\varepsilon },$$

where \(L=L_{H,\varepsilon } K^{2/\varepsilon }\) is a constant depending on \(\varepsilon \), \(\alpha \), and \(H \). The proof of the lemma is complete.

4. STABILITY OF LINEAR EQUATIONS

Set

$$ A(t):=\int ^t_0a(s)\thinspace ds,\quad \tau (t):=\int ^t_0b^2(s)\thinspace ds,\quad A_\tau (t):=A(t)-\frac {1}{2}\tau (t),\quad \varkappa (t):=\sqrt {2\tau (t)\ln \ln \tau (t)}.$$
(9)
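
For example, for the time-invariant equation (3) with \(b\ne 0 \) these functions are

$$ A(t)=at,\quad \tau (t)=b^2 t,\quad A_\tau (t)=\left (a-\frac {b^2}{2}\right )t,\quad \varkappa (t)=\sqrt {2 b^2 t\ln \ln (b^2 t)} $$

(the last expression for \(t \) large enough that \(\ln \ln (b^2 t)>0 \)).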

Sufficient conditions for the asymptotic stability in probability of the zero solution of Eq. (2) are given by the following assertion.

Theorem 1.

Assume that there exists an \( \alpha \in (1-H, 1/2)\) such that \(c|_{[0, T]}\in W^{\alpha , 1}_0(0, T) \) for each \(T>0 \), and let \( A(t)\), \( \tau (t)\), \( A_\tau (t)\), and \( \varkappa (t)\) be the functions defined in (9). Then the following assertions hold:

  1.

    If \(\tau (\infty )<\infty \) and the conditions

    $$ A(\infty )=-\infty ,\quad \lim \limits _{t\to \infty }{t^{H+\alpha -1}\|c\|_{\alpha , t}}/{A(t)}=0$$

    are satisfied, then the zero solution of Eq. (2) is asymptotically stable in probability.

  2.

    If \(\tau (\infty )=\infty \) and the conditions

    $$ \mathop {\overline {\lim }}\limits _{t\to \infty } A_{\tau }(t)/\varkappa (t)<-1,\quad \lim \limits _{t\to \infty } t^{H+\alpha -1}\|c\|_{\alpha , t} /\varkappa (t)=0$$

    are satisfied, then the zero solution of Eq. (2) is asymptotically stable in probability.

Proof. Take an arbitrary \(\varepsilon _1>0 \) and a solution \(x(t)=x(0)e^{\nu (t)} \) of Eq. (2) whose initial value \(x(0) \) a.s. satisfies the inequality \(|x(0)|<\delta <\varepsilon _1 \). Consider the probability

$$ \mathsf {P}\Big \{\big |x(t)\big |>\varepsilon _1\Big \} =\mathsf {P}\Big \{\nu (t)>\ln \big (\varepsilon _1/|x(0)|\big )\Big \} =1-\mathsf {P}\Big \{\nu (t)\le \ln \big (\varepsilon _1/|x(0)|\big )\Big \}.$$

It suffices to show that \(\lim \limits _{t\to \infty } \mathsf {P}\{\nu (t)\le \ln (\varepsilon _1/\delta )\}=1\) under the assumptions of the theorem. Indeed, if this is the case, then, on the one hand, \(\lim \limits _{t\to \infty } \mathsf {P}\{|x(t)|>\varepsilon _1\}=0\), because

$$ \mathsf {P}\Big \{\nu (t)\le \ln \big (\varepsilon _1/\big |x(0)\big |\big )\Big \} \ge \mathsf {P}\big \{\nu (t)\le \ln (\varepsilon _1/\delta )\big \}. $$

On the other hand, the inequality \(\mathbb {E}|\nu (t)|\le C_T<\infty \) holds on each interval \(t\in [0, T] \) with some constant \(C_T \) depending on \(T \), because the inequalities

$$ \begin {gathered} \big |A(t)\big |\le \int _0^T\big |a(s)\big |\thinspace ds,\quad \mathbb {E}\left |\thinspace \int _0^t b(s)\thinspace dW(s)\right |\le \big (\tau (T)\big )^{1/2}, \\ \mathbb {E}\left |\thinspace \int _0^t c(s)\thinspace dB^H(s)\right |\le K\mathbb {E}|\eta _{H,\varepsilon , T}|\|c\|_{\alpha , T}T^{H-\varepsilon +\alpha -1}<\infty \end {gathered}$$

are satisfied for each \(t\in [0, T] \) (in view of Lemma 3 and [23, Lemma 7.4]). Then an application of the Markov inequality gives the estimate

$$ \mathsf {P}\{|x(t)|>\varepsilon _1\}\le \mathsf {P}\{|\nu (t)|>\ln (\varepsilon _1/\delta )\}\le C_T/\ln (\varepsilon _1/\delta ), $$

and by choosing a sufficiently small \(\delta \) one can ensure that the right-hand side of the last inequality is less than any prescribed \(\varepsilon _2>0\).

Let us separately consider the two cases indicated in the assumptions of the theorem.

Case 1. Let \(\tau (\infty )=\tau _0<\infty \), and let the conditions in assertion 1 of the theorem be satisfied. We introduce the notation

$$ \nu _W(t)= \frac {1}{2}A(t) -\frac {1}{2}\tau (t) +\int _0^t b(s)\thinspace dW(s), \quad \nu _B(t)= \frac {1}{2}A(t) +\int _0^t c(s)\thinspace dB^H(s).$$

With this notation, one has \(\nu (t)=\nu _W(t)+\nu _B(t)\). Consider the events

$$ \mathcal {A}^W_t=\bigg \{\nu _W(t)\le \frac {1}{2}\ln (\varepsilon _1/\delta )\bigg \}\quad \text {and}\quad \mathcal {A}^B_t=\bigg \{\nu _B(t)\le \frac {1}{2}\ln (\varepsilon _1/\delta )\bigg \}.$$

It can readily be seen that \( \mathsf {P}\{\nu (t)\le \ln (\varepsilon _1/\delta )\} \ge \mathsf {P}\{\mathcal {A}^W_t\cap \mathcal {A}^B_t\} =\mathsf {P}\{\mathcal {A}^W_t\}\mathsf {P}\{\mathcal {A}^B_t\}, \) because the processes \(\int \nolimits _0^t b(s)\thinspace dW(s)\) and \(\int \nolimits _0^t c(s)\thinspace dB^H(s)\) are independent.

Consider the probability \(\mathsf {P}\{\mathcal {A}^W_t\} \). Let

$$ M(t)= \frac {1}{2}\ln (\varepsilon _1/\delta ) -\frac {1}{2}A(t) +\frac {1}{2}\tau (t).$$

By Lemma 2 and the properties of the Wiener process,

$$ \mathsf {P}\{\mathcal {A}^W_t\} =\mathsf {P}\Big \{\widetilde {W}\big (\tau (t)\big )\le M(t)\Big \} =\frac {1}{\sqrt {2\pi \tau (t)}}\int _{-\infty }^{M(t)} e^{-s^2/(2\tau (t))}\thinspace ds =\frac {1}{\sqrt {2\pi }}\int _{-\infty }^{M(t)/\sqrt {\tau (t)}}e^{-s^2/2}\thinspace ds \xrightarrow [t\to \infty ]{}1, $$

because \(A(\infty )=-\infty \), \(\tau (\infty )=\tau _0 \), and accordingly

$$ \lim \limits _{t\to \infty }{M(t)}/{\sqrt {\tau (t)}} =\frac {1}{\sqrt {\tau _0}}\lim \limits _{t\to \infty } M(t) =\infty .$$

Now let us estimate the probability \(\mathsf {P}\{\mathcal {A}^B_t\}\) using Lemma 3,

$$ \mathsf {P}\{\mathcal {A}^B_t\} \ge \mathsf {P}\bigg \{ C^{H}_{\varepsilon ,\alpha }(t) \le \frac {1}{2}\ln (\varepsilon _1/\delta )-\frac {1}{2}A(t) \bigg \} \ge 1-L\biggl ( \frac {2\|c\|_{\alpha , t} t^{H+\alpha -1}}{ \ln (\varepsilon _1/\delta )-A(t) } \biggr )^{\!2/\varepsilon } \xrightarrow [t\to \infty ]{}1,$$

because

$$ \lim \limits _{t\to \infty }t^{H+\alpha -1}\|c\|_{\alpha , t}/A(t)=0.$$

Thus, \( \mathsf {P}\{\nu (t)\le \ln (\varepsilon _1/\delta )\} \ge \mathsf {P}\{\mathcal {A}^W_t\}\mathsf {P}\{\mathcal {A}^B_t\} \xrightarrow [t\to \infty ]{} 1 \), as desired.

Case 2. Let \(\tau (\infty )=\infty \), and let the conditions in assertion 2 of the theorem be satisfied. We introduce the following notation for the function in the condition in assertion 2:

$$ J(t):=A_{\tau }(t)/\varkappa (t), \quad \mathop {\overline {\lim }}\limits _{t\to \infty } J(t)<-1.$$

For brevity, we also write

$$ \xi (t):=A_{\tau }(t)+\int _0^t c(s)\thinspace dB^H(s).$$

By Lemma 2,

$$ \widetilde {W}(\tau (t))=\int _0^t b(s)\thinspace dW(s)$$

a.s., hence \(\nu (t)=\widetilde {W}(\tau (t))+\xi (t)\) a.s., and consequently,

$$ \mathsf {P}\big \{\nu (t)\le \ln (\varepsilon _1/\delta )\big \}= \mathsf {P}\bigg \{ \frac {\nu (t)}{\varkappa (t)} \le \frac {\ln (\varepsilon _1/\delta )}{\varkappa (t)} \bigg \} \ge \mathsf {P}\bigg \{ \frac {\widetilde {W}\big (\tau (t)\big )}{\varkappa (t)} +\frac {\xi (t)}{\varkappa (t)} \le 0 \bigg \}=: \mathsf {P}\{\mathcal {A}\}.$$

Take a sufficiently small positive \(\widetilde \varepsilon \in (0, -1-\mathop {\overline {\lim }}\limits _{t\to \infty } J(t)) \) and consider the events

$$ \mathcal {A}^{\widetilde W}_t=\bigg \{ \frac {\widetilde {W}(\tau (t))}{\varkappa (t)} \le 1+\widetilde \varepsilon \bigg \} \quad \text {and}\quad \mathcal {A}^\xi _t=\bigg \{ \frac {\xi (t)}{\varkappa (t)} \le -1-\widetilde \varepsilon \bigg \}. $$

By analogy with case 1, we obtain \( \mathsf {P}\{\mathcal {A}\} \ge \mathsf {P}\{\mathcal {A}^{\widetilde W}_t\cap \mathcal {A}^\xi _t\} =\mathsf {P}\{\mathcal {A}^{\widetilde W}_t\}\mathsf {P}\{\mathcal {A}^\xi _t\}. \)

Consider the probability \(\mathsf {P}\{\mathcal {A}^{\widetilde W}_t\} \). For brevity, we introduce the notation

$$ \zeta (\tau ):=\sup \limits _{s\ge \tau }{\widetilde {W}(s)}/{\sqrt {2 s\ln \ln s}}. $$

By the iterated logarithm law, \(\lim \limits _{t\to \infty }\zeta (\tau (t)) =\lim \limits _{\tau \to \infty }\zeta (\tau )=1 \) a.s. Further, the function \(\tau (t) \) is increasing and the function \(\zeta (\tau ) \) is decreasing with respect to \(\tau \) for each \(\omega \in \Omega \). Hence for any \(t_1 \), \(t_2>0 \), \(t_1<t_2 \), one has the inclusion \( \{\zeta (\tau (t_1))\le 1+\widetilde \varepsilon \} \subset \{\zeta (\tau (t_2))\le 1+\widetilde \varepsilon \}, \) whence, using the continuity axiom, we obtain

$$ \lim \limits _{t\to \infty } \mathsf {P}\{\mathcal {A}^{\widetilde W}_t\} \ge \lim \limits _{t\to \infty } \mathsf {P}\Big \{\zeta \big (\tau (t)\big )\le 1+\widetilde \varepsilon \Big \} \ge \mathsf {P}\Big \{\lim \limits _{t\to \infty }\zeta \big (\tau (t)\big )\le 1+\widetilde \varepsilon \Big \} =1. $$

Now let us estimate the probability \(\mathsf {P}\{\mathcal {A}^{\xi }_t\}\) for sufficiently large \(t \) using Lemma 3. We have

$$ \eqalign { \mathsf {P}\{\mathcal {A}^{\xi }_t\} &=\mathsf {P}\left \{\frac {1}{\varkappa (t)} \int _0^t c(s)\thinspace dB^H(s) \le -1-\widetilde \varepsilon -J(t) \right \} \cr &\ge \mathsf {P}\bigg \{ \frac {C^{H}_{\varepsilon ,\alpha }(t)}{\varkappa (t)} \le -1-\widetilde \varepsilon -\sup \limits _{s\ge t}J(s) \bigg \} \ge 1-L\left ( \frac {\|c\|_{\alpha , t} t^{H+\alpha -1}}{ \left (-1-\widetilde \varepsilon -\sup \nolimits _{s\ge t}J(s)\right ) \varkappa (t) } \right )^{\!2/\varepsilon }.} $$

Passing to the limit in the last inequality, we obtain

$$ \lim \limits _{t\to \infty } \mathsf {P}\{\mathcal {A}^{\xi }_t\} \ge 1-L\left ( \frac {1}{ \left (-1-\widetilde \varepsilon -\mathop {\overline {\lim }}\nolimits _{t\to \infty } J(t)\right ) } \lim \limits _{t\to \infty }\frac {\|c\|_{\alpha , t} t^{H+\alpha -1}}{ \varkappa (t)} \right )^{\!2/\varepsilon } =1. $$

Thus, \(\mathsf {P}\{\nu (t)\le \ln (\varepsilon _1/\delta )\}\ge \mathsf {P}\{\mathcal {A}\} \ge \mathsf {P}\{\mathcal {A}^{\widetilde W}_t\}\mathsf {P}\{\mathcal {A}^\xi _t\} \xrightarrow [t\to \infty ]{} 1 \). The proof of the theorem is complete.

Corollary 1.

For Eq. (3), any of the following conditions is sufficient for the asymptotic stability in probability of the zero solution:

  1.

    \(b=0 \), \(a<0 \), and \(c \) is arbitrary.

  2.

    \(b\ne 0 \), \( a<{b^2}/{2}\), and \(c=0 \).

The proof can be obtained by a straightforward application of Theorem 1 with allowance for the fact that in the case of a constant function \(c(t)\equiv c\) the norm \( \|c\|_{\alpha , t}\) is \(|c|t^{1-\alpha }/(1-\alpha ) \).
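
For example, in case 1 (\(b=0 \), \(a<0 \)) one has \(\tau (\infty )=0<\infty \), \(A(t)=at\xrightarrow [t\to \infty ]{}-\infty \), and

$$ \lim \limits _{t\to \infty }{t^{H+\alpha -1}\|c\|_{\alpha , t}}/{A(t)} =\lim \limits _{t\to \infty }\frac {|c|\thinspace t^{H-1}}{(1-\alpha ) a}=0, $$

so that assertion 1 of Theorem 1 applies.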

The next theorem is a criterion for the asymptotic stability in probability of the zero solution of Eq. (2) under the assumption that the coefficient \( b(t)\) is not identically zero. Set

$$ C_\phi (t)=\|c\|^2_{L^2_\phi ; t}=\int _0^t\int _0^t c(s)c(u)\phi (s, u)\thinspace ds\thinspace du.$$
(10)

Theorem 2.

Let \(b\not \equiv 0 \), and let \( c|_{[0, T]}\in L^2_\phi [0, T]\) for each \(T>0\). The zero solution of Eq. (2) is asymptotically stable in probability if and only if

$$ \lim \limits _{t\to \infty }\frac {A_\tau (t)}{\sqrt {\tau (t)+C_\phi (t)}}=-\infty ,$$

where \(A_{\tau }(t) \), \( \tau (t)\), and \( C_\phi (t)\) are the functions defined in (9) and (10).

Proof. Let \(c\not \equiv 0 \). It follows from Lemmas 1 and 2 and Proposition 1 that the Itô integral \(I_W(t)=\int \nolimits _0^t b(s)\thinspace dW(s)\) and the Young integral \( I_B(t)=\int \nolimits _0^t c(s)\thinspace dB^H(s) \) for a given \(t \) are independent normally distributed random variables with zero means and with variances \(\tau (t)\) and \(C_\phi (t) \), respectively. Consequently, their sum is again a normally distributed random variable with zero mean and variance \(\tau (t)+C_\phi (t) \); hence for each \(M>0 \) we obtain the relation

$$ \eqalign { \mathsf {P}\big \{\nu (t)>M\big \} &=\mathsf {P}\big \{I_W(t)+I_B(t)>M-A_\tau (t)\big \}\cr &=\frac {1}{\sqrt {2\pi \big (\tau (t)+C_\phi (t)\big )}} \int _{M-A_\tau (t)}^{\infty } e^{-s^2/(2 (\tau (t)+C_\phi (t)))}\thinspace ds =\frac {1}{\sqrt {2\pi }} \int _{M(t)}^{\infty } e^{-s^2/2}\thinspace ds,\cr M(t)&=\frac {M-A_\tau (t)}{\sqrt {\tau (t)+C_\phi (t)}}.}$$
(11)

It can readily be seen that formula (11) remains valid for \(c\equiv 0 \). The condition \(b\not \equiv 0 \) guarantees that the denominator of the fraction \(M(t) \) is nonzero for sufficiently large \(t \).

Further, the asymptotic stability is equivalent to the relation \(\lim \limits _{t\to \infty }\mathsf {P}\{\nu (t)>M\}=0\), which is in turn equivalent to \( \lim \limits _{t\to \infty } M(t)=\infty \). Note that the expression \(M/{\sqrt {\tau (t)+C_\phi (t)}} \) is bounded for sufficiently large \(t \); namely, for a sufficiently small \(\epsilon >0 \) it belongs to the interval

$$ \left [ {M}\bigg /{\sqrt {\lim \limits _{t\to \infty }\tau (t) +\mathop {\overline {\lim }}\limits _{t\to \infty }C_\phi (t)+\epsilon }}, {M}\bigg /{\sqrt {\lim \limits _{t\to \infty }\tau (t) +\mathop {\underline {\lim }}\limits _{t\to \infty }C_\phi (t)-\epsilon }} \thinspace \right ],$$

which, however, can degenerate into the point \(0 \) if at least one of the limits \(\lim \limits _{t\to \infty }\tau (t)\) or \(\mathop {\underline {\lim }}\limits _{t\to \infty }C_\phi (t)\) is infinity. The boundedness of the function \({M}/{\sqrt {\tau (t)+C_\phi (t)}} \) implies the equivalence of the relations \(\lim \limits _{t\to \infty } M(t)=\infty \) and \(\lim \limits _{t\to \infty }{A_\tau (t)}/{\sqrt {\tau (t)+C_\phi (t)}}=-\infty \), as desired. The proof of Theorem 2 is complete.

Corollary 2.

The inequality \(a<{b^2}/{2} \) is a necessary and sufficient condition for the asymptotic stability in probability of the zero solution of Eq. (3).

Proof. If the coefficients of the equation are constant, then the relations \(\nu (t)=(a-{b^2}/{2}) t+b W(t)+c B^H(t) \) and \(C_\phi (t)=c^2 R_H(t, t)=c^2 t^{2H}\) hold. For \(b=c=0 \), the assertion of the corollary is obvious.

If \( b\ne 0\) and \(c=0 \), then formula (11) implies the equivalence

$$ M(t)\sim -(a-{b^2}/{2}) t^{1/2}/|b|$$

as \(t\to \infty \). If, however, \(c\ne 0 \), then formula (11) implies the equivalence

$$ M(t)\sim -(a-{b^2}/{2})t^{1-H}/|c|$$

as \(t\to \infty \). Thus, it is necessary and sufficient for the asymptotic stability in probability that the inequality \(a<{b^2}/{2} \) be satisfied, as desired. The proof of the corollary is complete.

Remark 2.

It follows from the last assertion that the term \(cx(t)\thinspace dB^H(t) \) in Eq. (3) does not affect the asymptotic stability in probability of the zero solution.

In the following proposition, we derive an explicit formula for the \(p \)th moment, \(p>0 \), of the solution of Eq. (2).

Proposition 2.

Let \(c|_{[0,t]}\in L^2_\phi [0, t] \) for some \(t>0 \). Then for each \(p>0 \) one has the relation

$$ \mathbb {E}\big |x(t)\big |^p=\mathbb {E}\big |x(0)\big |^p\exp \left (p\int _0^t\left (a(s)+\frac {p-1}{2}b^2(s)+p H (2H-1)\int _0^s (s-u)^{2H-2} c(s)c(u)\thinspace du\right )ds\right )\!,$$

where \(x(t)\) is the solution of Eq. (2) with the initial value \( x(0)\).

Proof. The representation (5) of the solution of Eq. (2), the independence of \(x(0), \) \(W(t)\), and \(B^H(t) \), and Lemma 1 imply the relation

$$ \eqalign { \mathbb {E}\big |x(t)\big |^p &=\mathbb {E}\big |x(0)\big |^p \exp \left (p\int _0^t\left (a(s)-\frac {1}{2}b^2(s)\right )\thinspace ds\right )\cr &\qquad {}\times \mathbb {E}\exp \left (\int _0^tpb(s)\thinspace dW(s)\right ) \mathbb {E}\exp \left (\int _0^t\!pc(s)\thinspace dB^H(s)\right ).}$$
(12)

Let us calculate \(u(t)=\mathbb {E}\exp (\int \nolimits _0^t pb(s)\thinspace dW(s))\). It follows from the representation (5) that the process \(\eta (t)=\exp (\int \nolimits _0^t pb(s)\thinspace dW(s))\) is a solution of the linear equation

$$ \eta (t)=1+\frac {p^2}{2}\int _0^t b^2(s)\eta (s)\thinspace ds+p\int _0^t b(s)\eta (s)\thinspace dW(s). $$

Let us take the expectation of both sides of the last equality. Using the Fubini theorem and the fact that the mean of the Itô integral is zero, we obtain

$$ u(t)=1+\frac {p^2}{2}\int _0^t b^2(s) u(s)\thinspace ds. $$

By differentiating the last relation, we arrive at the equation

$$ u^{\prime }(t)-\frac {p^2}{2} b^2(t) u(t)=0$$

with the initial condition \( u(0)=\mathbb {E}\eta (0)=1\). Its solution is

$$ \mathbb {E}\exp \left (\thinspace \int _0^t pb(s)\thinspace dW(s)\right )=u(t) =\exp \biggl (\frac {p^2}{2}\tau (t)\biggr ). $$
(13)
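
Relation (13) also agrees with Lemma 2: for a given \(t \), the integral \(\int \nolimits _0^t b(s)\thinspace dW(s)=\widetilde {W}(\tau (t)) \) is a Gaussian random variable with zero mean and variance \(\tau (t) \), and for such a variable \(\xi \) one has \(\mathbb {E}e^{p\xi }=e^{p^2\tau (t)/2} \).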

It remains to calculate \(v(t)=\mathbb {E}\exp (\int \nolimits _0^t pc(s)\thinspace dB^H(s))\). For brevity, we introduce the notation \(y(t)=\int \nolimits _0^t pc(s)\thinspace dB^H(s)\), \(z(t)=e^{y(t)} \); then \(v(t)=\mathbb {E}z(t) \). It follows from the representation of the solution (5) that the process \(z(t) \) is a solution of the linear equation

$$ z(t)=1+p\int _0^t c(s) z(s)\thinspace dB^H(s). $$
(14)

According to [1, Sec. 5.1], the pathwise Young integral \(\int \nolimits _0^t c(s) z(s)\thinspace dB^H(s)\) coincides with the symmetric integral \(\int \nolimits _0^t c(s) z(s) d^\circ B^H(s) \). By [1, Theorem 5.5.1], the symmetric integral can be expressed via the Wick–Itô–Skorokhod integral \(\int \nolimits _0^t c(s) z(s)\diamond \thinspace dB^H(s) \) by the formula

$$ \int _0^t c(s) z(s) d^\circ B^H(s) =\int _0^t c(s) z(s)\diamond \thinspace dB^H(s) +\int _0^t D^\phi _s(c(s)z(s))\thinspace ds$$

(a.s.), where \(D^\phi _t \) is the operator of the \(\phi \)-derivative [1, Sec. 3.5] (the generalized derivative with respect to \(\omega ) \). Since the Wick–Itô–Skorokhod integral has zero mean, we obtain, by taking the expectation of both sides of relation (14),

$$ v(t)=1+p\mathbb {E}\int _0^t D^\phi _s(c(s)z(s))\thinspace ds.$$

Let us calculate the \(\phi \)-derivative \(D^\phi _s(c(s)z(s))\). Using the relationship between the Young integral, the symmetric integral, and the Wick–Itô–Skorokhod integral, one can readily see that

$$ y(t)=p\int _0^t c(s)\thinspace dB^H(s) =p\int _0^t c(s) d^\circ B^H(s) =p\int _0^t c(s)\diamond \thinspace dB^H(s),$$

because the function \(c(s) \) is independent of \(\omega \) and hence \(D^\phi _s(c(s))=0 \). Consequently, we have \(z(t)=F(\int \nolimits _0^t c(s)\diamond \thinspace dB^H(s))\), where \(F(y)=e^{py} \). By the properties of the \(\phi \)-derivative [1, Sec. 3.5],

$$ \eqalign { D^\phi _t\big (c(t)z(t)\big ) &=c(t)D^\phi _t F\left (\thinspace \int _0^t c(s)\diamond \thinspace dB^H(s)\right ) =c(t)\thinspace p e^{y(t)}\thinspace D^\phi _t\left (\thinspace \thinspace \int _{-\infty }^{\infty } c(s)1_{[0, t]}(s)\diamond \thinspace dB^H(s) \right )\cr &=p c(t)z(t)\int _{-\infty }^{\infty }\phi (s, t) c(s)1_{[0, t]}(s)\thinspace ds =pH(2H-1) c(t)z(t)\int _0^t (t-s)^{2H-2} c(s)\thinspace ds.}$$

Thus, using the last relation and applying the Fubini theorem, we obtain the integral equation

$$ v(t)=1+p^2 H(2H-1)\int _0^t\left (c(s)\int _0^s (s-u)^{2H-2} c(u)\thinspace du\right ) v(s)\thinspace ds $$

for the function \(v(t) \). By differentiating the last relation, we arrive at the equation

$$ v^{\prime }(t)-p^2 H(2H-1)c(t)\left (\thinspace \int _0^t (t-s)^{2H-2} c(s)\thinspace ds\right ) v(t)=0$$

with the initial condition \( v(0)=\mathbb {E}e^{y(0)}=1\). Its solution is

$$ \mathbb {E}\exp \left (\int _0^t pc(s)\thinspace dB^H(s)\!\right )= v(t)=\exp \left (p^2 H(2H-1)\!\!\int _0^t c(s)\!\left (\int _0^s (s-u)^{2H-2} c(u)\thinspace du\right )ds \right ). $$
(15)

Now relations (12), (13), and (15) imply the desired assertion. The proof of the proposition is complete.
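
The closed-form moment in Proposition 2 can also be checked numerically in the time-invariant case: for Eq. (3) one has \(\nu (t)=(a-b^2/2)t+bW(t)+cB^H(t) \), and at a fixed time \(t \) the variables \(W(t)\sim N(0, t) \) and \(B^H(t)\sim N(0, t^{2H}) \) are independent Gaussian random variables, so the \(p \)th moment can be estimated by direct sampling. The following minimal Python sketch does this; all parameter values and the seed are illustrative choices.

```python
import numpy as np

# Monte Carlo check of Proposition 2 for constant coefficients (Eq. (3)).
a, b, c, H, p, x0, t = -1.0, 0.5, 0.3, 0.7, 2.0, 1.0, 3.0
rng = np.random.default_rng(3)
N = 10**6

nu = ((a - 0.5 * b**2) * t
      + b * np.sqrt(t) * rng.standard_normal(N)      # W(t) ~ N(0, t)
      + c * t**H * rng.standard_normal(N))           # B^H(t) ~ N(0, t^{2H})
mc_moment = np.mean(np.abs(x0 * np.exp(nu))**p)

# Proposition 2 specialized to constant coefficients:
# E|x(t)|^p = |x0|^p * exp(p (a + (p - 1) b^2 / 2) t + p^2 c^2 t^{2H} / 2)
exact = abs(x0)**p * np.exp(p * (a + 0.5 * (p - 1) * b**2) * t
                            + 0.5 * p**2 * c**2 * t**(2 * H))
print(mc_moment, exact)   # the two values should agree up to Monte Carlo error
```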

Set

$$ F_p(t):=a(t)+\frac {p-1}{2}b^2(t)+p H (2H-1)\int _0^t (t-u)^{2H-2} c(t)c(u)\thinspace du,\quad I_p(t):=\int _0^t F_p(s)\thinspace ds. $$
(16)

Proposition 2 immediately implies the following theorem on the \(p\)-stability of the zero solution of Eq. (2).

Theorem 3.

Let \(c|_{[0,T]}\in L^2_\phi [0, T] \) for each \(T>0 \), and let \( F_p(t)\) and \( I_p(t)\) be the functions defined by relations (16). Then the following assertions hold:

  1.

    The zero solution of Eq. (2) is \(p \)-stable if and only if \( \mathop {\overline {\lim }}\limits _{t\to \infty }I_p(t)<\infty \).

  2.

    The zero solution of Eq. (2) is asymptotically \(p \)-stable if and only if \( \lim \limits _{t\to \infty }I_p(t)=-\infty \).

  3.

    The zero solution of Eq. (2) is exponentially \(p \)-stable if \( \sup \nolimits _{t>0} F_p(t)<0\).

Corollary 3.

The following assertions hold for Eq. (3):

  1.

    A necessary and sufficient condition for the \(p \)-stability of its zero solution is that \(c=0\) and \(a\le (1-p)b^2/2 \).

  2.

    A necessary and sufficient condition for the exponential \(p \)-stability of its zero solution is that \(c=0\) and \(a<(1-p)b^2/2 \).

In particular, its zero solution is not \(p \) -stable for any \( c\ne 0\) .

Proof. If the coefficients of Eq. (2) are constant, then the expression for \(I_p(t) \) becomes

$$ I_p(t)=\int _0^t\biggl (a+\frac {p-1}{2}b^2+p H c^2 s^{2H-1}\biggr )\thinspace ds =\biggl (a+\frac {p-1}{2}b^2+\frac {p}{2}c^2 t^{2H-1}\biggr ) t.$$

If \( c\ne 0\), then one has the equivalence \(I_p(t)\sim {p}c^2 t^{2H}/2\), and hence \(\lim \limits _{t\to \infty } I_p(t)=\infty \) and the zero solution is not \(p \)-stable. Therefore, we necessarily have \(c=0 \). In that case, \(I_p(t)=(a+(p-1)b^2/2) t \), and the assertion becomes obvious. The proof of the corollary is complete.

Remark 3.

The conditions for the asymptotic stability in probability of the zero solution of the time-invariant equation (3) differ from the conditions for the asymptotic \(p \)-stability. In the criterion for the asymptotic stability in probability, \(c\) is arbitrary, while \(c=0 \) in the criterion for the asymptotic \(p \)-stability (under the conditions \(a<{b^2}/{2} \) and \(a<(1-p)b^2/2 \), respectively).

This happens because the asymptotic stability in probability depends on the standard deviation \(\sigma (t)=\sqrt {b^2 t+c^2 t^{2H}}\) of the process \(\nu (t)=(a-{b^2}/{2}) t+b W(t)+c B^H(t)\). The function \(\sigma (t)\) has growth order \(t^{H} \), which is lower than that of the expectation \(\mu (t)=(a-b^2/2)t\) of the process \(\nu (t) \). In turn, the asymptotic \(p \)-stability depends on the variance \(\sigma ^2(t) \) of the process \(\nu (t) \), which has growth order \(t^{2H} \), higher than that of the function \(\mu (t) \).

Remark 4.

The linear time-invariant Itô equation (3) (\(c=0 \)) has the important property that the asymptotic stability in probability implies the \(p\)-stability of the zero solution of this equation for sufficiently small \(p\) [17, Sec. 6.1]. This property does not hold in the general case for \(c\ne 0 \).

5. EXAMPLES

Example 1.

Consider the equation

$$ dx(t)=-2t x(t)\thinspace dt+\frac {x(t)}{\sqrt {1+t^2}}\thinspace dW(t)+tx(t)\thinspace dB^H(t),\quad t\ge 0.$$

For this equation,

$$ \tau (t)=\arctan t \xrightarrow [t\to \infty ]{}{\pi }/{2},\quad A(t)=-t^2 \xrightarrow [t\to \infty ]{}-\infty ,\quad \|c\|_{\alpha , t}={t^{2-\alpha }}/(1-\alpha ), $$

and one can readily compute

$$ \lim \limits _{t\to \infty }{t^{H+\alpha -1}\|c\|_{\alpha , t}}/A(t) =-\lim \limits _{t\to \infty }{t^{H-1}}/(1-\alpha )=0.$$

Therefore, based on Theorem 1, we conclude that the zero solution of this equation is asymptotically stable in probability.
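
The expression for the norm \(\|c\|_{\alpha , t} \) used here follows by a direct computation: for \(c(s)=s \),

$$ \|c\|_{\alpha , t}=\int _0^t s^{1-\alpha }\thinspace ds+\int _0^t\int _0^s (s-u)^{-\alpha }\thinspace du\thinspace ds =\frac {t^{2-\alpha }}{2-\alpha }+\frac {t^{2-\alpha }}{(1-\alpha )(2-\alpha )} =\frac {t^{2-\alpha }}{1-\alpha }. $$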

Example 2.

Consider the equation

$$ dx(t)=t(\cos ^2 t) x(t)\thinspace dt+2\sqrt {t}(\cos t) x(t)\thinspace dW(t)+x(t)\thinspace dB^H(t),\quad t\ge 0. $$

In this case,

$$ \tau (t)=t^2+t\sin 2t+\frac {1}{2}\cos 2t-\frac {1}{2} \xrightarrow [t\to \infty ]{}\infty ,\quad a(t)=\frac {1}{4}b^2(t), $$

and accordingly,

$$ \lim \limits _{t\to \infty } A_{\tau }(t)/\varkappa (t)= -\frac {1}{4}\lim \limits _{\tau \to \infty }\sqrt {\frac {\tau }{2\ln \ln \tau }}=-\infty .$$

Since \(\|c\|_{\alpha , t}=t^{1-\alpha }/(1-\alpha )\) and \(\tau (t)\sim t^2\), we have

$$ \lim \limits _{t\to \infty }t^{H+\alpha -1}\|c\|_{\alpha , t}/\varkappa (t)= \frac {1}{\sqrt {2}(1-\alpha )}\lim \limits _{t\to \infty } \frac {t^{H-1}}{\sqrt {\ln \ln \tau (t)}}=0, $$

and, based on Theorem 1, we conclude that the zero solution is asymptotically stable in probability.

Example 3.

Consider the equation

$$ dx(t)=a(t) x(t)\thinspace dt+b(t) x(t)\thinspace dW(t)+e^{-t} x(t)\thinspace dB^H(t),\quad t\ge 0.$$
(17)

Since \( c(u)=e^{-u}\in (0, 1]\) for \(u\ge 0 \), we have

$$ \int _0^t (t-u)^{2H-2} c(t)c(u)\thinspace du\le c(t)\int _0^t (t-u)^{2H-2}\thinspace du=\frac {1}{2H-1} e^{-t} t^{2H-1}.$$

Note that the function \(\psi (t)=e^{-t} t^{2H-1}\) attains its maximum at the point \(t_0=2H-1 \), because \(\psi (0)=\psi (\infty )=0 \) and \(\psi ^{\prime }(t)=e^{-t} t^{2H-2} ((2H-1)-t)\). Hence, in the notation of Theorem 3,

$$ F_p(t)\le a(t)+\frac {p-1}{2}b^2(t)+p H (2H-1)^{2H-1} e^{1-2H}<a(t)+\frac {p-1}{2}b^2(t)+pH; $$

based on Theorem 3, a sufficient condition for the exponential \(p \)-stability is given by the inequality

$$ \sup \limits _{t\ge 0}\biggl (a(t)+\frac {p-1}{2}b^2(t)\biggr )\le -pH. $$

In a sense, the last inequality is an analog of condition 2 in Corollary 3 in the class of equations (17) with nonconstant coefficients.

In particular, the latter assertion implies that, for example, the zero solution of the equation

$$ dx(t)=\biggl (-\beta +\frac {(p-1)H}{2}\sin ^2 t\biggr )x(t)\thinspace dt+\sqrt {H}(\cos t) x(t)\thinspace dW(t)+e^{-t} x(t)\thinspace dB^H(t), \quad t\ge 0, $$

is exponentially \(p\)-stable for any \(p>0 \) and \(\beta \ge {(3p-1)H/2} \).
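
Indeed, for the last equation

$$ a(t)+\frac {p-1}{2}b^2(t) =-\beta +\frac {(p-1)H}{2}\sin ^2 t+\frac {(p-1)H}{2}\cos ^2 t =-\beta +\frac {(p-1)H}{2}, $$

and the sufficient condition \(\sup \nolimits _{t\ge 0}\bigl (a(t)+\frac {p-1}{2}b^2(t)\bigr )\le -pH \) takes the form \(\beta \ge (3p-1)H/2 \).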