1 Introduction

Fractional differential equations (FDEs) have found more and more applications in many areas such as electromagnetics, mechanics, physics and chemistry (see [1, 2]). In the past few years, FDEs in infinite-dimensional spaces have been studied extensively since they are abstract formulations of many problems arising in economics, mechanics and physics. For more results on FDEs in infinite-dimensional spaces, see [3,4,5,6,7,8,9] and the references therein.

On the other hand, deterministic models often fluctuate due to noise or stochastic perturbation. The noise or perturbation of a system is typically modeled by a Brownian motion (Wiener process), and much effort has been devoted to fractional stochastic differential equations (FSDEs) driven by Brownian motion in recent years [10,11,12,13,14,15,16]. However, as many researchers have found [17, 18], standard Brownian motion is insufficient to model phenomena with long memory, such as telecommunication traffic and asset prices. As an extension of Brownian motion, fractional Brownian motion (fBm) is a family of Gaussian processes introduced by Kolmogorov [19]. It is therefore desirable to replace Brownian motion by fBm in order to model such practical problems better.

Controllability is one of the most fundamental and significant concepts in mathematical control theory. It can be divided into two kinds: exact (complete) controllability and approximate controllability. Exact controllability means that, under some admissible control input, a system can be steered from an arbitrary initial state to an arbitrary desired final state, while approximate controllability means that the system can be steered into an arbitrarily small neighborhood of the final state. The study of the latter is more appropriate in infinite-dimensional spaces, since the conditions required for the former are usually too strong [20, 21]. Recently, many works have focused on the approximate controllability of FSDEs driven by Brownian motion; see [22,23,24,25,26,27,28,29]. However, little is known about the approximate controllability of FSDEs driven by fBm, which remains a bottleneck.

Motivated by the above considerations, in this paper, we study the approximate controllability of FSDEs with fBm of the form

$$\begin{aligned} \left\{ \begin{array}{ll} ^{C}D^{\alpha }x(t)=Ax(t)+Bu(t)+f(t,x(t))+\sigma (t)\frac{\mathrm{d}B^{H}(t)}{\mathrm{d}t},\;t\in J:=[0,b],\\ x(0)=x_{0}, \end{array}\right. \end{aligned}$$
(1.1)

where \(\alpha \in (\frac{1}{2},1]\), \(^{C}D^{\alpha }\) denotes the Caputo fractional derivative and A is the infinitesimal generator of a strongly continuous semigroup \(\{S(t)\}_{t\ge 0}\) in a real separable Hilbert space Y. \(B^{H}\) is a fBm on a real separable Hilbert space V with Hurst index \(H\in \left( \frac{1}{2},1\right) \). \(x_{0}\) is an \({\mathscr {F}}_{0}\)-measurable random variable independent of \(B^{H}\) with finite second moment. \(B:L^{2}_{{\mathscr {F}}}(J,U)\rightarrow L^{2}(J,Y)\) is a bounded linear operator. \(f:J\times Y\rightarrow Y\) and \(\sigma : J\rightarrow L^{0}_{Q}(V,Y)\) are appropriate functions satisfying some assumptions.

Our aim is to study the approximate controllability of system (1.1) and to investigate its generalizations to other systems. Note that in [22,23,24,25,26,27,28,29], the authors obtained approximate controllability results under the assumptions that the nonlinear term is uniformly bounded and that the associated fractional linear system is approximately controllable; these assumptions are rather restrictive. In this paper, we remove these two assumptions and use a method similar to that of [30], with suitable modifications so that it is compatible with the equations studied here. Further, we extend the results to the approximate controllability of FSDEs with bounded delay.

An outline of this paper is given as follows: Section 2 introduces some preliminary facts. The existence and uniqueness of mild solutions for system (1.1) are established in Sect. 3. In Sect. 4, we prove the approximate controllability of system (1.1) and extend the results to FSDEs with bounded delay. Section 5 presents an example.

2 Preliminaries

Some preliminary facts which are necessary for this paper are presented in this section. For more details, see [6, 31,32,33].

Assume that Y, U and V are three real separable Hilbert spaces. Let \(\left( \Omega ,{\mathscr {F}},P\right) \) be a complete probability space with a normal filtration \(\{{\mathscr {F}}_{t}\}_{t\in [0,b]}\) satisfying the usual conditions and \({\mathscr {F}}_{b}={\mathscr {F}}\). The control function \(u\in L^{2}_{{\mathscr {F}}}(J,U)\), where \({L}^{2}_{{\mathscr {F}}}(J,U)\) is the closed subspace of \({L}^{2}(J\times \Omega ,U)\) consisting of all \({{\mathscr {F}}}_{t}\)-adapted, U-valued processes. Throughout this paper, let \(M:=\sup \limits _{t\in [0,+\infty )}\Vert S(t)\Vert <\infty \). We introduce the following Banach spaces:

$$\begin{aligned}&L(V,Y):=\{x:V\rightarrow Y|x\;\text {is a bounded linear operator}\}, \\&L^{2}(\Omega ,Y):= \left\{ x:\Omega \rightarrow Y\mid x \text { is an}\; {\mathscr {F}}{-}\text {measurable square integrable random variable} \right\} , \\&C(J,L^{2}(\Omega ,Y)):= \left\{ x:J\rightarrow L^{2}(\Omega ,Y) \mid x \;\text {is an}\;{\mathscr {F}}_{t}\text {-adapted stochastic process, which}\;\right. \\&\left. \text { is a continuous mapping}\;\text {such that} \sup \limits _{t\in J}E\Vert x(t)\Vert ^{2}<\infty \right\} . \end{aligned}$$

Let \({\mathscr {C}}:=C(J,L^{2}(\Omega ,Y))\). The space \({\mathscr {C}}\) equipped with the norm \(\Vert x\Vert _{{\mathscr {C}}}=\left( \sup \limits _{t\in J}E\Vert x(t)\Vert ^{2}\right) ^{\frac{1}{2}}\) is a Banach space.

Definition 2.1

[31, 32] A real-valued one-dimensional fBm \(\beta ^{H}=\{\beta ^{H}(t),t\in J\}\) with Hurst index \(H\in (0,1)\) is a continuous and centered Gaussian process with covariance function

$$\begin{aligned} R^{H}(t,s)=E[\beta ^{H}(t)\beta ^{H}(s)]=\frac{1}{2}(t^{2H}+s^{2H}-|t-s|^{2H}),\;t,s\in J. \end{aligned}$$
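As an illustration of Definition 2.1 (our addition, not part of the original development), a sample path of a scalar fBm on a finite grid can be generated directly from the covariance \(R^{H}\) by a Cholesky factorization of the covariance matrix. The following minimal Python sketch assumes NumPy; the Hurst index, horizon and grid size are arbitrary illustrative choices.

```python
import numpy as np

# Minimal sketch: sample a scalar fBm path on a grid via Cholesky
# factorization of the covariance R^H(t,s) from Definition 2.1.
H, b, N = 0.7, 1.0, 200                          # illustrative choices
t = np.linspace(0.0, b, N + 1)[1:]               # exclude t = 0 (beta^H(0) = 0)
R = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
           - np.abs(t[:, None] - t[None, :]) ** (2 * H))
L = np.linalg.cholesky(R)                        # R is positive definite on the grid
beta_H = np.concatenate(([0.0], L @ np.random.standard_normal(N)))
# beta_H holds one sample path of beta^H at the grid points 0, b/N, ..., b.
```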

In the rest of this paper, we assume \(H\in (\frac{1}{2},1)\). \(\beta ^{H}\) can be represented by

$$\begin{aligned} \beta ^{H}(t)=\int _{0}^{t}K^{H}(t,s)d\beta (s), \end{aligned}$$

where \(\beta =\{\beta (t),t\in J\}\) is a one-dimensional Wiener process and

$$\begin{aligned} K^{H}(t,s)=c_{H}\left( H-\frac{1}{2}\right) s^{\frac{1}{2}-H}\int _{s}^{t} (u-s)^{H-\frac{3}{2}}u^{H-\frac{1}{2}}\mathrm{d}u. \end{aligned}$$

Let \(h\in L^{2}(0,b)\) be a deterministic function. The Wiener integral of h with respect to \(\beta ^{H}\) is given by

$$\begin{aligned} \int _{0}^{b}h(s)d\beta ^{H}(s)=\int _{0}^{b}(K^{*}_{b}h)(s)d\beta (s), \end{aligned}$$

where

$$\begin{aligned} (K^{*}_{\tau }h)(s)=\int _{s}^{\tau }h(z)\frac{\partial K^{H}(z,s)}{\partial z}\mathrm{d}z,\;\tau \in [0,b]. \end{aligned}$$

Next, we give the definition of the infinite-dimensional fBm and of its stochastic integral.

Let \(Q\in L(V,V)\) be a nonnegative self-adjoint trace class operator such that \(Qe_{n}=\lambda _{n}e_{n}\) with \(trQ=\sum \nolimits _{n=1}^{\infty }\lambda _{n}<\infty ,\) where \(\lambda _{n}\ge 0(n=1,2,\ldots )\) and \(\{e_{n}\}(n=1,2,\ldots )\) is a complete orthonormal basis in V. The V-valued Q-cylindrical fBm on \((\Omega ,{\mathscr {F}},P)\) with covariance operator Q is defined as

$$\begin{aligned} B^{H}(t)=\sum _{n=1}^{\infty }Q^{\frac{1}{2}}e_{n}\beta ^{H}_{n}(t) =\sum _{n=1}^{\infty }\sqrt{\lambda _{n}}e_{n}\beta ^{H}_{n}(t), \end{aligned}$$

where the \(\beta ^{H}_{n}\) are real, mutually independent one-dimensional fBms.

Let \(L^{0}_{Q}(V,Y)\) be the space of all Q-Hilbert–Schmidt operators \(\xi :V\rightarrow Y\). Namely, if \(\xi \in L(V,Y)\) and

$$\begin{aligned} \Vert \xi \Vert ^{2}_{L^{0}_{Q}(V,Y)}:=\sum _{n=1}^{\infty }\Vert \sqrt{\lambda _{n}}\xi e_{n}\Vert ^{2}<\infty , \end{aligned}$$

then \(\xi \) is called a Q-Hilbert–Schmidt operator.
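For instance (an illustrative special case added here, not used elsewhere), if \(\xi \) acts diagonally in the sense that \(\xi e_{n}=\mu _{n}{\tilde{e}}_{n}\) for some orthonormal family \(\{{\tilde{e}}_{n}\}\) in Y and real numbers \(\mu _{n}\), then

$$\begin{aligned} \Vert \xi \Vert ^{2}_{L^{0}_{Q}(V,Y)}=\sum _{n=1}^{\infty }\lambda _{n}\mu ^{2}_{n}, \end{aligned}$$

so \(\xi \in L^{0}_{Q}(V,Y)\) precisely when \(\sum _{n=1}^{\infty }\lambda _{n}\mu ^{2}_{n}<\infty \).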

Definition 2.2

[32, 34, 35] If \(\Psi :J\rightarrow L^{0}_{Q}(V,Y)\) satisfies

$$\begin{aligned} \sum _{n=1}^{\infty }\Vert K^{*}_{b}(\Psi Q^{\frac{1}{2}})e_{n}\Vert _{L^{2}(J,Y)}<\infty , \end{aligned}$$
(2.1)

then the stochastic integral \(\int _{0}^{t}\Psi (s)\mathrm{d}B^{H}(s)\) can be defined as

$$\begin{aligned} \int _{0}^{t}\Psi (s)\mathrm{d}B^{H}(s):= & {} \sum _{n=1}^{\infty }\int _{0}^{t}\Psi (s)Q^{\frac{1}{2}}e_{n}\mathrm{d}\beta ^{H}_{n}(s)\\= & {} \sum _{n=1}^{\infty }\int _{0}^{t}(K^{*}_{b}(\Psi Q^{\frac{1}{2}}e_{n}))(s)\mathrm{d}\beta (s),\quad t\in J. \end{aligned}$$

Lemma 2.1

[34, 35] If \(\Psi :J\rightarrow L^{0}_{Q}(V,Y)\) satisfies

$$\begin{aligned} \sum _{n=1}^{\infty }\Vert \Psi Q^{\frac{1}{2}}e_{n}\Vert _{L^{\frac{1}{H}}(J,Y)}<\infty , \end{aligned}$$
(2.2)

then for \(\forall \;0\le s<t\le b\),

$$\begin{aligned} E\left\| \int _{s}^{t}\Psi (\tau )\mathrm{d}B^{H}(\tau )\right\| ^{2}_{Y} \le C_{H}(t-s)^{2H-1}\sum _{n=1}^{\infty }\int _{s}^{t}\Vert \Psi (\tau )Q^{\frac{1}{2}}e_{n}\Vert ^{2}_{Y}\mathrm{d}\tau , \end{aligned}$$

where the constant \(C_{H}>0\) depends on H. In addition, if \(\sum \limits _{n=1}^{\infty }\Vert \Psi (t)Q^{\frac{1}{2}}e_{n}\Vert _{Y}\) is uniformly convergent for \(t\in J\), then

$$\begin{aligned} E\left\| \int _{s}^{t}\Psi (\tau )\mathrm{d}B^{H}(\tau )\right\| ^{2}_{Y} \le C_{H}(t-s)^{2H-1}\int _{s}^{t}\Vert \Psi (\tau )\Vert ^{2}_{L^{0}_{Q}(V,Y)}\mathrm{d}\tau . \end{aligned}$$
(2.3)
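For example, applying (2.3) to an integrand that is constant in time, \(\Psi (\tau )\equiv \Psi \in L^{0}_{Q}(V,Y)\), gives

$$\begin{aligned} E\left\| \int _{s}^{t}\Psi \,\mathrm{d}B^{H}(\tau )\right\| ^{2}_{Y}\le C_{H}(t-s)^{2H}\Vert \Psi \Vert ^{2}_{L^{0}_{Q}(V,Y)}, \end{aligned}$$

the fBm counterpart of the classical Itô-isometry bound, with the factor \((t-s)\) replaced by \((t-s)^{2H}\).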

Inspired by [6, 7], one can define the mild solution for system (1.1).

Definition 2.3

[6, 7] A stochastic process \(\{x(t)\}_{t\in [0,b]}\) is said to be a mild solution of system (1.1), if for \(\forall \;u\in L^{2}_{{\mathscr {F}}}(J,U)\),

$$\begin{aligned} x(t)= & {} S_{\alpha }(t)x_{0}+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)\left[ f(s,x(s))+Bu(s)\right] \mathrm{d}s\\&+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)\sigma (s)\mathrm{d}B^{H}(s),\;t\in [0,b],\;\;\;P-a.s. \end{aligned}$$

where

$$\begin{aligned} S_\alpha (t)= & {} \int _{0}^{\infty }\xi _\alpha (\theta )S(t^{\alpha }\theta )d\theta ,\quad T_\alpha (t)=\alpha \int _{0}^{\infty }\theta \xi _\alpha (\theta )S(t^{\alpha }\theta )d\theta ,\\ \xi _\alpha (\theta )= & {} \frac{1}{\alpha }\theta ^{-(1+\frac{1}{\alpha })}{\overline{\omega }}_\alpha (\theta ^{-\frac{1}{\alpha }}),\; {\overline{\omega }}_\alpha (\theta )=\sum _{n=1}^{\infty }(-1)^{n-1}\theta ^{-n\alpha -1} \frac{\Gamma (n\alpha +1)}{\pi n!}\sin (n\pi \alpha ),\;\theta \in (0,\infty ), \end{aligned}$$

\(\xi _\alpha \) is a probability density function which satisfies

$$\begin{aligned} \xi _\alpha (\theta )\ge 0,\;\theta \in (0,\infty )\;\text {and}~\int _{0}^{\infty }\xi _\alpha (\theta )d\theta =1. \end{aligned}$$
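For later use, we also record the standard moment identity of \(\xi _\alpha \) (added here for the reader's convenience):

$$\begin{aligned} \int _{0}^{\infty }\theta ^{r}\xi _\alpha (\theta )\mathrm{d}\theta =\frac{\Gamma (1+r)}{\Gamma (1+\alpha r)},\quad r\in [0,1]. \end{aligned}$$

In particular, \(\int _{0}^{\infty }\theta \xi _\alpha (\theta )\mathrm{d}\theta =\frac{1}{\Gamma (1+\alpha )}\), which is the source of the bound \(\Vert T_\alpha (t)x\Vert \le \frac{\alpha M}{\Gamma (\alpha +1)}\Vert x\Vert \) in Lemma 2.2(i) below.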

Lemma 2.2

[6, 7] Operators \(S_\alpha (t)\) and \(T_\alpha (t)\) have the following properties.

\(\mathrm{(i)}\):

\(S_\alpha (t)\) and \(T_\alpha (t)\) are linear and bounded operators, i.e., for \(\forall \;t\ge 0\),

$$\begin{aligned} \Vert S_\alpha (t)x\Vert \le M\Vert x\Vert ,\;x\in Y\;\text {and}\;\Vert T_\alpha (t)x\Vert \le \frac{\alpha M}{\Gamma (\alpha +1)}\Vert x\Vert ,\;x\in Y. \end{aligned}$$
\(\mathrm{(ii)}\):

Operators \(\{S_\alpha (t),t\ge 0\}\) and \(\{T_\alpha (t),t\ge 0\}\) are strongly continuous.

\(\mathrm{(iii)}\):

If for \(\forall \;t>0\), S(t) is a compact operator, then \(S_\alpha (t)\) and \(T_\alpha (t)\) are also compact operators.

Definition 2.4

The set

$$\begin{aligned} K_{b}(f)=\left\{ x(b):x(b)\text { is the mild solution of}~(1.1) \,\text {at time} \;b\;\text {corresponding to the control }u\right\} \end{aligned}$$

is said to be the reachable set of system (1.1). If \(f\equiv 0\), then system (1.1) is denoted by (1.1)\(^{*}\). Moreover, we denote by \(K_{b}(0)\) the reachable set of system (1.1)\(^{*}\).

Definition 2.5

System (1.1) is approximately controllable on J if \(\overline{K_{b}(f)}=L^{2}(\Omega ,Y)\), where \(\overline{K_{b}(f)}\) is the closure of \(K_{b}(f)\). That is, for \(\forall \;\xi \in L^{2}(\Omega ,Y)\) and \(\forall \;\varepsilon >0\), there exists a control \(u\in L^{2}_{{\mathscr {F}}}([0,b],U)\) such that \(E\Vert x(b)-\xi \Vert ^{2}<\varepsilon \). Similarly, system (1.1)\(^{*}\) is approximately controllable if \(\overline{K_{b}(0)}=L^{2}(\Omega ,Y)\).

3 Existence and Uniqueness of Mild Solutions

The existence and uniqueness of mild solutions for system (1.1) are investigated in this section. We first introduce the following hypotheses.

\((H_1)\): The function \(f:J\times Y\rightarrow Y\) is measurable and there exists a constant \(c_{1}>0\) such that for \(\forall \;x\in Y,\forall \;t\in J\),

$$\begin{aligned} \Vert f(t,x)\Vert ^{2}\le c_{1}(1+\Vert x\Vert ^{2}). \end{aligned}$$

\((H_2)\): There exists a constant \(l_{1}>0\) such that for \(\forall \;x,y\in Y,\forall \;t\in J\),

$$\begin{aligned} \Vert f(t,x)-f(t,y)\Vert ^{2}\le l_{1}\Vert x-y\Vert ^{2}. \end{aligned}$$

\((H_3)\): The function \(\sigma : J\rightarrow L^{0}_{Q}(V,Y)\) is measurable and there exists a constant \(c_{2}>0\) such that

$$\begin{aligned}&\hbox {(i)}\;\sup _{0\le s\le b}\Vert \sigma (s)\Vert ^{2}_{L^{0}_{Q}(V,Y)}\le c_{2},\\&\hbox {(ii)}\;\sum _{n=1}^{\infty }\Vert \sigma Q^{\frac{1}{2}}e_{n}\Vert _{L^{\frac{1}{H}}(J,Y)}<\infty ,\\&\hbox {(iii)}\;\sum _{n=1}^{\infty }\Vert \sigma (t) Q^{\frac{1}{2}}e_{n}\Vert _{Y}\text { is uniformly convergent for} \;t\in J. \end{aligned}$$

\((H_{4})\): For \(\forall \;t>0\), S(t) is a compact operator.

Define the operator \(T:{\mathscr {C}}\rightarrow {\mathscr {C}}\) by

$$\begin{aligned} (Tx)(t)= & {} S_{\alpha }(t)x_{0}+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)[f(s,x(s))+Bu(s)]\mathrm{d}s\\&+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)\sigma (s)\mathrm{d}B^{H}(s). \end{aligned}$$

Lemma 3.1

Suppose \((H_1),(H_{3})\) and \((H_4)\) hold. Then for \(\forall \;x\in {\mathscr {C}}\), \(t\rightarrow (Tx)(t)\) is continuous on [0, b] in the \(L^{2}(\Omega ,Y)\)-sense.

Proof

For \(\forall \; x\in {\mathscr {C}}\) and \(0\le t_{1}<t_{2}\le b\), we have

$$\begin{aligned}&E\left\| (Tx)(t_{2})-(Tx)(t_{1})\right\| ^{2}\\&\quad \le 4E\left\| S_{\alpha }(t_{2})x_{0}-S_{\alpha }(t_{1})x_{0}\right\| ^{2}\\&\quad \quad +\,4E\left\| \int _{0}^{t_{2}}(t_{2}{-}s)^{\alpha -1}T_{\alpha }(t_{2}{-}s)f(s,x(s))\mathrm{d}s -\int _{0}^{t_{1}}(t_{1}-s)^{\alpha -1}T_{\alpha }(t_{1}-s)f(s,x(s))\mathrm{d}s\right\| ^{2} \\&\quad \quad +\, 4E\left\| \int _{0}^{t_{2}}(t_{2}-s)^{\alpha -1}T_{\alpha }(t_{2}-s)Bu(s)\mathrm{d}s -\int _{0}^{t_{1}}(t_{1}-s)^{\alpha -1}T_{\alpha }(t_{1}-s)Bu(s)\mathrm{d}s\right\| ^{2}\\&\quad \quad +\, 4E\left\| \int _{0}^{t_{2}}(t_{2}-s)^{\alpha -1}T_{\alpha }(t_{2}-s)\sigma (s)\mathrm{d}B^{H}(s)- \int _{0}^{t_{1}}(t_{1}-s)^{\alpha -1}T_{\alpha }(t_{1}-s)\sigma (s)\mathrm{d}B^{H}(s)\right\| ^{2} \\&\quad :=I_1+I_2+I_3+I_{4}. \end{aligned}$$

By the strong continuity of \(S_{\alpha }(t)\), we obtain

$$\begin{aligned} \lim _{t_{2}\rightarrow t_{1}}\Vert S_{\alpha }(t_{2})x_{0}-S_{\alpha }(t_{1})x_{0}\Vert =0. \end{aligned}$$

Using Lemma 2.2, it follows that

$$\begin{aligned} \left\| S_{\alpha }(t_{2})x_{0}-S_{\alpha }(t_{1})x_{0}\right\|\le & {} 2M\Vert x_{0}\Vert \in L^{2}(\Omega ,R^{+}). \end{aligned}$$

According to the Lebesgue dominated convergence theorem, we obtain

$$\begin{aligned} \lim _{t_{2}\rightarrow t_{1}}I_1=0. \end{aligned}$$

Moreover,

$$\begin{aligned} I_{2}\le & {} 12E\left\| \int _{0}^{t_{1}}[(t_{2}-s)^{\alpha -1}-(t_{1}-s)^{\alpha -1}]T_\alpha (t_{2}-s)f(s,x(s))\mathrm{d}s\right\| ^{2} \\&+12E\left\| \int _{0}^{t_{1}}(t_{1}-s)^{\alpha -1}[T_\alpha (t_{2}-s)-T_\alpha (t_{1}-s)]f(s,x(s))\mathrm{d}s\right\| ^{2} \\&+12E\left\| \int _{t_{1}}^{t_{2}}(t_{2}-s)^{\alpha -1}T_\alpha (t_{2}-s)f(s,x(s))\mathrm{d}s\right\| ^{2}\\:= & {} I_{21}+I_{22}+I_{23}. \end{aligned}$$

By \((H_{1})\) and Hölder’s inequality, it is easy to validate that

$$\begin{aligned} I_{21}\le & {} \frac{12M^{2}}{\Gamma ^{2}(\alpha )} \left( \int _{0}^{t_{1}}[(t_{1}-s)^{\alpha -1}-(t_{2}-s)^{\alpha -1}]^{2}\mathrm{d}s\right) \int _{0}^{t_{1}}E\Vert f(s,x(s))\Vert ^{2}\mathrm{d}s\\\le & {} \frac{12M^{2}}{\Gamma ^{2}(\alpha )}\int _{0}^{t_{1}}[(t_{1}-s)^{2\alpha -2}-(t_{2}-s)^{2\alpha -2}]\mathrm{d}s \times \int _{0}^{t_{1}}c_{1}\left( 1+E\Vert x(s)\Vert ^{2}\right) \mathrm{d}s\\\le & {} \frac{12M^{2}c_{1}t_{1}(1+\sup _{s\in J}E\Vert x(s)\Vert ^{2})}{\Gamma ^{2}(\alpha )(2\alpha -1)}\times \left[ t^{2\alpha -1}_{1}+(t_{2}-t_{1})^{2\alpha -1}-t^{2\alpha -1}_{2}\right] , \end{aligned}$$

and

$$\begin{aligned} I_{22}\le & {} 12E\left( \int _{0}^{t_{1}}(t_{1}-s)^{\alpha -1} \Vert T_{\alpha }(t_{2}-s)-T_{\alpha }(t_{1}-s)\Vert \Vert f(s,x(s))\Vert \mathrm{d}s\right) ^{2}\\\le & {} \frac{12t^{2\alpha }_{1}c_{1}(1+\sup _{s\in J}E\Vert x(s)\Vert ^{2})}{2\alpha -1} \left( \sup _{s\in [0,t_{1}]}\Vert T_{\alpha }(t_{2}-s)-T_{\alpha }(t_{1}-s)\Vert \right) ^{2}. \end{aligned}$$
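In the estimate of \(I_{21}\) above (and again for \(I_{31}\) and \(I_{41}\) below), the passage from the squared difference of the kernels to the difference of their squares uses the elementary inequality

$$\begin{aligned} (a-b)^{2}\le a^{2}-b^{2},\quad a\ge b\ge 0, \end{aligned}$$

applied with \(a=(t_{1}-s)^{\alpha -1}\) and \(b=(t_{2}-s)^{\alpha -1}\); indeed \(a\ge b\) since \(\alpha \le 1\) and \(t_{1}-s\le t_{2}-s\).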

From Lemma 2.2 and \((H_{4})\), we know that \(T_{\alpha }(t)\;(t>0)\) is continuous in the uniform operator topology with respect to t. Hence, \(\lim \limits _{t_{2}\rightarrow t_{1}}I_{21}=\lim \limits _{t_{2}\rightarrow t_{1}}I_{22}=0\). Further,

$$\begin{aligned} I_{23}\le & {} \frac{12M^{2}}{\Gamma ^{2}(\alpha )} \left( \int _{t_{1}}^{t_{2}}(t_{2}-s)^{2\alpha -2}\mathrm{d}s\right) \left( \int _{t_{1}}^{t_{2}}E\Vert f(s,x(s))\Vert ^{2}\mathrm{d}s\right) \\\le & {} \frac{12M^{2}c_{1}(t_{2}-t_{1})^{2\alpha }\left( 1+\sup _{s\in J}E\Vert x(s)\Vert ^{2}\right) }{\Gamma ^{2}(\alpha )(2\alpha -1)}\\\rightarrow & {} 0\;\;\text {as}\;\;t_{2}\rightarrow t_{1}. \end{aligned}$$

A similar computation yields that

$$\begin{aligned} I_{3}\le & {} 12E\left\| \int _{0}^{t_{1}}[(t_{2}-s)^{\alpha -1}-(t_{1}-s)^{\alpha -1}]T_\alpha (t_{2}-s)Bu(s)\mathrm{d}s\right\| ^{2} \\&+12E\left\| \int _{0}^{t_{1}}(t_{1}-s)^{\alpha -1}[T_\alpha (t_{2}-s)-T_\alpha (t_{1}-s)]Bu(s)\mathrm{d}s\right\| ^{2} \\&+12E\left\| \int _{t_{1}}^{t_{2}}(t_{2}-s)^{\alpha -1}T_\alpha (t_{2}-s)Bu(s)\mathrm{d}s\right\| ^{2}\\:= & {} I_{31}+I_{32}+I_{33}. \end{aligned}$$

Similarly,

$$\begin{aligned} I_{31}\le & {} \frac{12M^{2}\Vert Bu\Vert ^{2}_{L^{2}(J,Y)}[t^{2\alpha -1}_{1}+(t_{2}-t_{1})^{2\alpha -1}-t^{2\alpha -1}_{2}]}{\Gamma ^{2}(\alpha )(2\alpha -1)} \rightarrow 0\;\;\text {as}\;\;t_{2}\rightarrow t_{1}, \\ I_{32}\le & {} \frac{12t^{2\alpha -1}_{1}\Vert Bu\Vert ^{2}_{L^{2}(J,Y)}}{2\alpha -1} \left( \sup _{s\in [0,t_{1}]}\Vert T_{\alpha }(t_{2}-s)-T_{\alpha }(t_{1}-s)\Vert \right) ^{2}\rightarrow 0\;\;\text {as}\;\;t_{2}\rightarrow t_{1}, \\ I_{33}\le & {} \frac{12M^{2}(t_{2}-t_{1})^{2\alpha -1}\Vert Bu\Vert ^{2}_{L^{2}(J,Y)}}{\Gamma ^{2}(\alpha )(2\alpha -1)}\rightarrow 0\;\;\text {as}\;\;t_{2}\rightarrow t_{1}. \end{aligned}$$

In a similar way, one can obtain

$$\begin{aligned} I_{4}\le & {} 12E\left\| \int _{0}^{t_{1}}[(t_{2}-s)^{\alpha -1}-(t_{1}-s)^{\alpha -1}]T_\alpha (t_{2}-s)\sigma (s)\mathrm{d}B^{H}(s)\right\| ^{2} \\&+12E\left\| \int _{0}^{t_{1}}(t_{1}-s)^{\alpha -1}[T_\alpha (t_{2}-s)-T_\alpha (t_{1}-s)]\sigma (s)\mathrm{d}B^{H}(s)\right\| ^{2} \\&+12E\left\| \int _{t_{1}}^{t_{2}}(t_{2}-s)^{\alpha -1}T_\alpha (t_{2}-s)\sigma (s)\mathrm{d}B^{H}(s)\right\| ^{2}\\:= & {} I_{41}+I_{42}+I_{43}. \end{aligned}$$

Combining Lemma 2.1 and \((H_{3})\), we have

$$\begin{aligned} I_{41}\le & {} 12C_{H}t^{2H-1}_{1}\int _{0}^{t_{1}} \left\| [(t_{2}-s)^{\alpha -1}-(t_{1}-s)^{\alpha -1}]T_\alpha (t_{2}-s)\sigma (s)\right\| ^{2}_{L^{0}_{Q}(V,Y)}\mathrm{d}s\\\le & {} \frac{12C_{H}t^{2H-1}_{1}c_{2}M^{2}}{(\Gamma (\alpha ))^{2}} \int _{0}^{t_{1}}[(t_{1}-s)^{2\alpha -2}-(t_{2}-s)^{2\alpha -2}]\mathrm{d}s\\\le & {} \frac{12C_{H}t^{2H-1}_{1}c_{2}M^{2}}{(2\alpha -1)(\Gamma (\alpha ))^{2}} [t^{2\alpha -1}_{1}+(t_{2}-t_{1})^{2\alpha -1}-t^{2\alpha -1}_{2}]\\\rightarrow & {} 0\;\;\text {as}\;\;t_{2}\rightarrow t_{1}, \\ I_{42}\le & {} 12C_{H}t^{2H-1}_{1} \int _{0}^{t_{1}}\left\| (t_{1}-s)^{\alpha -1}[T_\alpha (t_{2}-s)-T_\alpha (t_{1}-s)]\sigma (s)\right\| ^{2}_{L^{0}_{Q}(V,Y)}\mathrm{d}s \\ {}\le & {} 12C_{H}t^{2H-1}_{1}c_{2}\sup _{s\in [0,t_{1}]}\Vert T_\alpha (t_{2}-s)-T_\alpha (t_{1}-s)\Vert ^{2} \int _{0}^{t_{1}}(t_{1}-s)^{2\alpha -2}\mathrm{d}s\\ {}\le & {} \frac{12C_{H}t^{2H+2\alpha -2}_{1}c_{2}}{(2\alpha -1)}\sup _{s\in [0,t_{1}]}\Vert T_\alpha (t_{2}-s)-T_\alpha (t_{1}-s)\Vert ^{2}\\\rightarrow & {} 0\;\;\text {as}\;\;t_{2}\rightarrow t_{1}, \\ I_{43}\le & {} 12C_{H}(t_{2}-t_{1})^{2H-1} \int _{t_{1}}^{t_{2}}\Vert (t_{2}-s)^{\alpha -1}T_{\alpha }(t_{2}-s)\sigma (s)\Vert ^{2}_{L^{0}_{Q}(V,Y)}\mathrm{d}s\\\le & {} \frac{12C_{H}M^{2}c_{2}(t_{2}-t_{1})^{2H+2\alpha -2}}{(2\alpha -1)(\Gamma (\alpha ))^{2}}\\\rightarrow & {} 0\;\;\text {as}\;\;t_{2}\rightarrow t_{1}. \end{aligned}$$

Hence, \(\lim \nolimits _{t_{2}\rightarrow t_{1}}E\left\| (Tx)(t_{2})-(Tx)(t_{1})\right\| ^{2}=0\), which implies that \(t\rightarrow (Tx)(t)\) is continuous on J in the \(L^{2}(\Omega ,Y)\)-sense. \(\square \)

Lemma 3.2

Under \((H_1),(H_{3})\) and \((H_4)\), the operator T sends \({\mathscr {C}}\) into \({\mathscr {C}}\).

Proof

For \(\forall \;x\in {\mathscr {C}}\), we have

$$\begin{aligned}&E\Vert (Tx)(t)\Vert ^{2}\\&\quad \le 4E\left\| S_{\alpha }(t)x_{0}\right\| ^{2} +4E\left\| \int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)f(s,x(s))\mathrm{d}s\right\| ^{2}\\&\quad \quad +\,4E\left\| \int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)Bu(s)\mathrm{d}s\right\| ^{2}\\&\quad \quad +\,4E\left\| \int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)\sigma (s)\mathrm{d}B^{H}(s)\right\| ^{2}\\&\quad :=J_{1}+J_{2}+J_{3}+J_{4}. \end{aligned}$$

According to Lemma 2.2, one can obtain

$$\begin{aligned} J_{1}\le & {} 4M^{2}E\Vert x_{0}\Vert ^{2}. \end{aligned}$$

By \((H_{1})\) and Hölder’s inequality, it follows that

$$\begin{aligned} J_{2}\le & {} \frac{4M^{2}}{\Gamma ^{2}(\alpha )} E\left( \int _{0}^{t}(t-s)^{\alpha -1}\Vert f(s,x(s))\Vert \mathrm{d}s\right) ^{2}\\\le & {} \frac{4M^{2}}{\Gamma ^{2}(\alpha )}\left( \int _{0}^{t}(t-s)^{2\alpha -2}\mathrm{d}s\right) \left( \int _{0}^{t}E\Vert f(s,x(s))\Vert ^{2}\mathrm{d}s\right) \\ {}\le & {} \frac{4M^{2}b^{2\alpha }c_{1}\left( 1+\sup _{s\in J}E\Vert x(s)\Vert ^{2}\right) }{(2\alpha -1)\Gamma ^{2}(\alpha )}, \\ J_{3}\le & {} \frac{4M^{2}}{\Gamma ^{2}(\alpha )}\left( \int _{0}^{t}(t-s)^{2\alpha -2}\mathrm{d}s\right) \left( \int _{0}^{t}E\Vert Bu(s)\Vert ^{2}\mathrm{d}s\right) \\\le & {} \frac{4M^{2}b^{2\alpha -1}\Vert Bu\Vert ^{2}_{L^{2}(J,Y)}}{\Gamma ^{2}(\alpha )(2\alpha -1)}. \end{aligned}$$

Combining Lemma 2.1, \((H_{3})\) and Hölder’s inequality, we have

$$\begin{aligned} J_{4}\le & {} 4C_{H}t^{2H-1}\int _{0}^{t}\Vert (t-s)^{\alpha -1}T_{\alpha }(t-s)\sigma (s)\Vert ^{2}_{L^{0}_{Q}(V,Y)}\mathrm{d}s\\\le & {} \frac{4C_{H}M^{2}b^{2H-1}}{\Gamma ^{2}(\alpha )} \int _{0}^{t}(t-s)^{2\alpha -2}\Vert \sigma (s)\Vert ^{2}_{L^{0}_{Q}(V,Y)}\mathrm{d}s\\\le & {} \frac{4C_{H}M^{2}b^{2H+2\alpha -2}c_{2}}{\Gamma ^{2}(\alpha )(2\alpha -1)}. \end{aligned}$$

Thus, \(\Vert Tx\Vert ^{2}_{{\mathscr {C}}}=\sup _{t\in J}E\Vert (Tx)(t)\Vert ^{2}<\infty \). From Lemma 3.1, (Tx)(t) is continuous on J in the \(L^{2}(\Omega ,Y)\)-sense and therefore, T maps \({\mathscr {C}}\) into \({\mathscr {C}}\). \(\square \)

Theorem 3.1

Suppose that hypotheses \((H_{1}){-}(H_{4})\) are satisfied. Then system (1.1) has a unique mild solution in \({\mathscr {C}}\).

Proof

We utilize Picard's iteration argument to prove the existence of mild solutions.

For \(\forall \; n\ge 0\), let

$$\begin{aligned} \left\{ \begin{array}{lll} x_{n+1}(t)=(Tx_{n})(t),\quad n=0,1,2,\ldots \\ x_{0}(t)=x_{0}. \end{array}\right. \end{aligned}$$
(3.1)

By Lemma 3.2, we have \(x_{n}\in {\mathscr {C}},n=0,1,2,\ldots \). Moreover, from Lemma 2.2 and \((H_{2})\), we have

$$\begin{aligned}&E\Vert x_{n+1}(t)-x_{n}(t)\Vert ^{2}\\&\quad = E\left\| \int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)[f(s,x_{n}(s))-f(s,x_{n-1}(s))]\mathrm{d}s\right\| ^{2}\\&\quad \le \frac{M^{2}}{\Gamma ^{2}(\alpha )}\left( \int _{0}^{t}(t-s)^{2\alpha -2}\mathrm{d}s\right) \left( \int _{0}^{t}E\left\| f(s,x_{n}(s))-f(s,x_{n-1}(s))\right\| ^{2}\mathrm{d}s\right) \\&\quad \le \frac{M^{2}l_{1}b^{2\alpha -1}}{\Gamma ^{2}(\alpha )(2\alpha -1)}\int _{0}^{t}E\Vert x_{n}(s)-x_{n-1}(s)\Vert ^{2}\mathrm{d}s\\&\quad \le \left( \frac{M^{2}l_{1}b^{2\alpha -1}}{\Gamma ^{2}(\alpha )(2\alpha -1)}\right) ^{2} \int _{0}^{t}\int _{0}^{s}E\Vert x_{n-1}(s_{1})-x_{n-2}(s_{1})\Vert ^{2}\mathrm{d}s_{1}\mathrm{d}s\\&\quad \le \cdots \\&\quad \le \left( \frac{M^{2}l_{1}b^{2\alpha -1}}{\Gamma ^{2}(\alpha )(2\alpha -1)}\right) ^{n} \int _{0}^{t}\int _{0}^{s}\cdots \int _{0}^{s_{n-2}}E\left\| x_{1}(s_{n-1})-x_{0}(s_{n-1})\right\| ^{2} \mathrm{d}s_{n-1}\ldots \mathrm{d}s_{1}\mathrm{d}s\\&\quad \le \left( \frac{M^{2}l_{1}b^{2\alpha }}{\Gamma ^{2}(\alpha )(2\alpha -1)}\right) ^{n} \frac{\sup _{s\in J}E\Vert x_{1}(s)-x_{0}(s)\Vert ^{2}}{n!}, \end{aligned}$$

which implies that

$$\begin{aligned} \sup _{t\in J}E\Vert x_{n+1}(t)-x_{n}(t)\Vert ^{2}\le \left( \frac{M^{2}l_{1}b^{2\alpha }}{\Gamma ^{2}(\alpha )(2\alpha -1)}\right) ^{n} \frac{\sup _{s\in J}E\Vert x_{1}(s)-x_{0}(s)\Vert ^{2}}{n!}. \end{aligned}$$

Thus, \(\{x_{n}\}_{n\ge 0}\) is a Cauchy sequence in \({\mathscr {C}}\). Therefore, there exists \(x\in {\mathscr {C}}\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty }\sup _{t\in J}E\Vert x_{n}(t)-x(t)\Vert ^{2}=0. \end{aligned}$$

Taking the limit in (3.1) as \(n\rightarrow \infty \), we obtain the existence of a mild solution.

Next, we will prove the uniqueness.

Suppose that x and y are two mild solutions of system (1.1). It is easy to check that

$$\begin{aligned}&E\Vert x(t)-y(t)\Vert ^{2}\\&\quad = E\left\| \int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)[f(s,x(s))-f(s,y(s))]\mathrm{d}s\right\| ^{2}\\&\quad \le \frac{M^{2}}{\Gamma ^{2}(\alpha )}\left( \int _{0}^{t}(t-s)^{2\alpha -2}\mathrm{d}s\right) \left( \int _{0}^{t}E\Vert f(s,x(s))-f(s,y(s))\Vert ^{2}\mathrm{d}s\right) \\&\quad \le \frac{M^{2}b^{2\alpha -1}l_{1}}{\Gamma ^{2}(\alpha )(2\alpha -1)}\int _{0}^{t}E\Vert x(s)-y(s)\Vert ^{2}\mathrm{d}s. \end{aligned}$$

Using Gronwall’s lemma, we have

$$\begin{aligned} \sup _{t\in J}E\Vert x(t)-y(t)\Vert ^{2}=0. \end{aligned}$$

Thus, the mild solution is unique. \(\square \)
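To illustrate the successive approximation scheme (3.1) in a simplified setting, the following Python sketch (our illustration, with no noise and no control) runs the same Picard iteration for a scalar Caputo equation written in its equivalent integral form; the grid, the order \(\alpha \), the initial value and the nonlinearity are arbitrary illustrative choices.

```python
import numpy as np
from math import gamma

# Deterministic toy analogue of the Picard scheme (3.1):
#   x_{n+1}(t) = x0 + (1/Gamma(alpha)) * int_0^t (t-s)^(alpha-1) f(s, x_n(s)) ds.
# The weakly singular kernel is integrated exactly on each subinterval and
# f is frozen at the left endpoint (product rectangle rule).
alpha, b, N = 0.75, 1.0, 200            # illustrative order, horizon, grid size
t = np.linspace(0.0, b, N + 1)
x0 = 1.0
f = lambda s, x: -x                     # an illustrative Lipschitz nonlinearity

x = np.full(N + 1, x0)                  # zeroth approximation x_0(t) = x0
for n in range(50):                     # Picard iterations
    x_new = np.empty_like(x)
    x_new[0] = x0
    for i in range(1, N + 1):
        # w_j = int_{t_j}^{t_{j+1}} (t_i - s)^(alpha-1) ds, j = 0, ..., i-1
        w = ((t[i] - t[:i]) ** alpha - (t[i] - t[1:i + 1]) ** alpha) / alpha
        x_new[i] = x0 + np.dot(w, f(t[:i], x[:i])) / gamma(alpha)
    if np.max(np.abs(x_new - x)) < 1e-10:
        x = x_new
        break
    x = x_new
# For f(t, x) = -x the iterates converge to the Mittag-Leffler function
# E_alpha(-t^alpha), the solution of the scalar Caputo equation ^C D^alpha x = -x.
```

The uniform (in t) convergence of the iterates in this toy example mirrors the Cauchy-sequence estimate in the proof of Theorem 3.1.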

4 Approximate Controllability

In this section, we investigate the approximate controllability of system (1.1) and extend our results to FSDEs with bounded delay.

Define an operator \(F:{\mathscr {C}}\rightarrow L^{2}(J,Y)\) by

$$\begin{aligned} (Fx)(t)=f(t,x(t)),\;t\in J. \end{aligned}$$

The linear operator \({\mathscr {L}}:L^{2}(J,Y)\rightarrow Y\) is given by

$$\begin{aligned} {\mathscr {L}}p=\int _{0}^{b}(b-s)^{\alpha -1}T_{\alpha }(b-s)p(s)\mathrm{d}s. \end{aligned}$$

The null space of \({\mathscr {L}}\) is denoted by \({\mathscr {N}}_{0}({\mathscr {L}})\). It is easy to see that \({\mathscr {N}}_{0}({\mathscr {L}})\subseteq L^{2}(J,Y)\) is a closed subspace, whose orthogonal space is denoted by \({\mathscr {N}}^{\bot }_{0}({\mathscr {L}})\). Furthermore, \(L^{2}(J,Y)\) can be decomposed uniquely as \(L^{2}(J,Y)={\mathscr {N}}_{0}({\mathscr {L}})\oplus {\mathscr {N}}^{\bot }_{0}({\mathscr {L}})\).

Let R(B) be the range of operator B. We also need the following assumption.

\((H_{5})\): For \(\forall \; p\in L^{2}(J,Y)\), there exists a function \(q\in \overline{R(B)}\) such that \({\mathscr {L}}p={\mathscr {L}}q\). Moreover, \(R({\mathscr {L}})=Y\).

Obviously, \((H_{5})\) implies that \(L^{2}(J,Y)={\mathscr {N}}_{0}({\mathscr {L}})\oplus \overline{R(B)}\). Define a linear and continuous mapping \({\mathscr {P}}:{\mathscr {N}}^{\bot }_{0}({\mathscr {L}})\rightarrow \overline{R(B)}\) by \({\mathscr {P}}z^{*}=q^{*}\), where \(q^{*}\in \{z^{*}+{\mathscr {N}}_{0}({\mathscr {L}})\}\cap \overline{R(B)}\) is the unique minimum norm element, that is

$$\begin{aligned} \Vert {\mathscr {P}}z^{*}\Vert =\Vert q^{*}\Vert =\min \left\{ \Vert v\Vert :v\in \{z^{*}+{\mathscr {N}}_{0}({\mathscr {L}})\}\cap \overline{R(B)}\right\} . \end{aligned}$$

By \((H_{5})\), it follows that for \(\forall \; z^{*}\in {\mathscr {N}}^{\bot }_{0}({\mathscr {L}})\), \(\{z^{*}+{\mathscr {N}}_{0}({\mathscr {L}})\}\cap \overline{R(B)}\ne \emptyset \) and \( \forall \; z\in L^{2}(J,Y)\) has a unique decomposition \(z=n+q^{*}\). Therefore, the operator \({\mathscr {P}}\) is well defined and \(\Vert {\mathscr {P}}\Vert \le c\) for some constant c [36].

Lemma 4.1

[30, 37] For \(\forall \; z\in L^{2}(J,Y)\) and its corresponding \(n\in {\mathscr {N}}_{0}({\mathscr {L}})\), there exists a constant \(C>0\) such that

$$\begin{aligned} \Vert n\Vert _{L^{2}(J,Y)}\le (1+C)\Vert z\Vert _{L^{2}(J,Y)}. \end{aligned}$$

We define the operator \({\mathscr {K}}:L^{2}(J,Y)\rightarrow L^{2}(J,Y)\) by

$$\begin{aligned} ({\mathscr {K}}\nu )(t)=\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)\nu (s)\mathrm{d}s. \end{aligned}$$

Let

$$\begin{aligned} {\mathscr {D}}_{0}=\{m\in L^{2}(J,Y):m(t)=({\mathscr {K}}n)(t),n\in {\mathscr {N}}_{0}({\mathscr {L}}), \;t\in J\}. \end{aligned}$$

It is clear that for \(\forall \;m\in {\mathscr {D}}_{0}\), \(m(b)=({\mathscr {K}}n)(b)={\mathscr {L}}n=0\), since \(n\in {\mathscr {N}}_{0}({\mathscr {L}})\).

Assume that \(x(\cdot )\) is a mild solution of system (1.1)\(^{*}\) and define an operator \(g_{x}:{\mathscr {D}}_{0}\rightarrow {\mathscr {D}}_{0}\) by

$$\begin{aligned} (g_{x}m)(t)=({\mathscr {K}}n)(t),\;t\in J, \end{aligned}$$

where n is given by the unique decomposition

$$\begin{aligned} F(x+m)=n+q,\;n\in {\mathscr {N}}_{0}({\mathscr {L}}),\;q\in \overline{R(B)}. \end{aligned}$$

Theorem 4.1

Suppose that \((H_1)-(H_{5})\) are satisfied. Then system (1.1)\(^{*}\) is approximately controllable, i.e., \(\overline{K_{b}(0)}=L^{2}(\Omega ,Y)\).

Proof

For \(\forall \;\xi \in L^{2}(\Omega ,Y)\), we have \(\xi -S_{\alpha }(b)x_{0}-\int _{0}^{b}(b-s)^{\alpha -1}T_{\alpha }(b-s)\sigma (s)\mathrm{d}B^{H}(s)\in L^{2}(\Omega ,Y)\). In particular \(\xi -S_{\alpha }(b)x_{0}-\int _{0}^{b}(b-s)^{\alpha -1}T_{\alpha }(b-s)\sigma (s)\mathrm{d}B^{H}(s)\in Y\) for almost all \(\omega \in \Omega \). By \((H_{5})\), there exists \(p\in L^{2}(J,Y)\) such that

$$\begin{aligned} \xi -S_{\alpha }(b)x_{0}{-}\int _{0}^{b}(b{-}s)^{\alpha -1}T_{\alpha }(b{-}s)\sigma (s)\mathrm{d}B^{H}(s) {=}\int _{0}^{b}(b-s)^{\alpha -1}T_{\alpha }(b{-}s)p(s)\mathrm{d}s. \end{aligned}$$

Using \((H_{5})\) again, there exists \(q\in \overline{R(B)}\) such that

$$\begin{aligned} \int _{0}^{b}(b-s)^{\alpha -1}T_{\alpha }(b-s)p(s)\mathrm{d}s=\int _{0}^{b}(b-s)^{\alpha -1}T_{\alpha }(b-s)q(s)\mathrm{d}s. \end{aligned}$$

Therefore,

$$\begin{aligned} \xi =S_{\alpha }(b)x_{0}{+}\int _{0}^{b}(b{-}s)^{\alpha {-}1}T_{\alpha }(b{-}s)\sigma (s)\mathrm{d}B^{H}(s) {+}\int _{0}^{b}(b{-}s)^{\alpha {-}1}T_{\alpha }(b-s)q(s)\mathrm{d}s. \end{aligned}$$

Since \(q\in \overline{R(B)}\), for \(\forall \;\varepsilon >0\) there exists a control function \(u_{\varepsilon }\) such that

$$\begin{aligned} \sup _{t\in J}E\Vert Bu_{\varepsilon }(t)-q(t)\Vert ^{2}<\frac{\Gamma ^{2}(\alpha )(2\alpha -1)\varepsilon }{M^{2}b^{2\alpha }}. \end{aligned}$$

Define

$$\begin{aligned} \xi _{\varepsilon }= & {} S_{\alpha }(b)x_{0}+\int _{0}^{b}(b-s)^{\alpha -1}T_{\alpha }(b-s)\sigma (s)\mathrm{d}B^{H}(s)\\&+\int _{0}^{b}(b-s)^{\alpha -1}T_{\alpha }(b-s)Bu_{\varepsilon }(s)\mathrm{d}s. \end{aligned}$$

Thus, \(\xi _{\varepsilon }\in K_{b}(0)\). Moreover,

$$\begin{aligned}&E\Vert \xi -\xi _{\varepsilon }\Vert ^{2}\\&\quad = E\left\| \int _{0}^{b}(b-s)^{\alpha -1}T_{\alpha }(b-s)[Bu_{\varepsilon }(s)-q(s)]\mathrm{d}s\right\| ^{2}\\&\quad \le \frac{M^{2}}{\Gamma ^{2}(\alpha )} E\left( \int _{0}^{b}(b-s)^{\alpha -1}\Vert Bu_{\varepsilon }-q\Vert \mathrm{d}s\right) ^{2}\\&\quad \le \frac{M^{2}b^{2\alpha }}{(2\alpha -1)\Gamma ^{2}(\alpha )} \sup _{s\in J}E\Vert Bu_{\varepsilon }(s)-q(s)\Vert ^{2}\\&\quad <\varepsilon , \end{aligned}$$

which means that system (1.1)\(^{*}\) is approximately controllable. \(\square \)

Lemma 4.2

Suppose that \((H_{1})-(H_{4})\) are fulfilled. Then the operator \(g_{x}\) has a fixed point \(m_{0}\in {\mathscr {D}}_{0}\) provided that

$$\begin{aligned} \frac{4M^{2}b^{2\alpha }(1+C)^{2}l_{1}}{\Gamma ^{2}(\alpha )(2\alpha -1)}<1. \end{aligned}$$
(4.1)

Proof

For \(r>0\), let \(B_{r}=\{z\in {\mathscr {D}}_{0}:\Vert z\Vert _{L^{2}(J,Y)}\le r\}\). Next, we prove that \(g_{x}(B_{r})\subseteq B_{r}\) for some \(r>0\). If this is not true, then for \(\forall \;r>0\), there exists an element \(m\in B_{r}\) such that \(\Vert g_{x}(m)\Vert _{L^{2}(J,Y)}>r\). Consequently,

$$\begin{aligned} r^{2}<\Vert g_{x}(m)\Vert ^{2}_{L^{2}(J,Y)}=\Vert {\mathscr {K}}n\Vert ^{2}_{L^{2}(J,Y)}. \end{aligned}$$

In fact, by Lemma 4.1, we have

$$\begin{aligned} \Vert ({\mathscr {K}}n)(t)\Vert ^{2}= & {} \left\| \int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)n(s)\mathrm{d}s\right\| ^{2}\\\le & {} \frac{M^{2}}{\Gamma ^{2}(\alpha )} \left( \int _{0}^{t}(t-s)^{\alpha -1}\Vert n(s)\Vert \mathrm{d}s\right) ^{2}\\\le & {} \frac{M^{2}}{\Gamma ^{2}(\alpha )}\left( \int _{0}^{t}(t-s)^{2\alpha -2}\mathrm{d}s\right) \left( \int _{0}^{t}\Vert n(s)\Vert ^{2}\mathrm{d}s\right) \\\le & {} \frac{M^{2}b^{2\alpha -1}}{\Gamma ^{2}(\alpha )(2\alpha -1)}\Vert n\Vert ^{2}_{L^{2}(J,Y)}\\\le & {} \frac{M^{2}b^{2\alpha -1}(1+C)^{2}}{\Gamma ^{2}(\alpha )(2\alpha -1)}\Vert F(x+m)\Vert ^{2}_{L^{2}(J,Y)}\\\le & {} \frac{M^{2}b^{2\alpha -1}(1+C)^{2}}{\Gamma ^{2}(\alpha )(2\alpha -1)} \left( \int _{0}^{b}\Vert f\left( t,(x+m)(t)\right) -f(t,0)+f(t,0)\Vert ^{2}\mathrm{d}t\right) \\\le & {} \frac{2M^{2}b^{2\alpha -1}(1+C)^{2}}{\Gamma ^{2}(\alpha )(2\alpha -1)} \left( \int _{0}^{b}\Vert f\left( t,(x+m)(t)\right) -f(t,0)\Vert ^{2}+\Vert f(t,0)\Vert ^{2}\mathrm{d}t\right) \\\le & {} \frac{2M^{2}b^{2\alpha -1}(1+C)^{2}}{\Gamma ^{2}(\alpha )(2\alpha -1)} \left[ \int _{0}^{b}\left( l_{1}\Vert (x+m)(t)\Vert ^{2}+l^{2}_{f}\right) \mathrm{d}t\right] \\\le & {} \frac{2M^{2}b^{2\alpha -1}(1+C)^{2}}{\Gamma ^{2}(\alpha )(2\alpha -1)} \left[ 2l_{1}\left( \int _{0}^{b}\Vert x(t)\Vert ^{2}\mathrm{d}t+r^{2}\right) +l^{2}_{f}b\right] , \end{aligned}$$

where \(l_{f}=\max _{t\in J}\Vert f(t,0)\Vert \). Hence,

$$\begin{aligned} r^{2}< & {} \Vert g_{x}(m)\Vert ^{2}_{L^{2}(J,Y)}\\\nonumber= & {} \Vert {\mathscr {K}}n\Vert ^{2}_{L^{2}(J,Y)}\\\nonumber\le & {} \frac{2M^{2}b^{2\alpha }(1+C)^{2}}{\Gamma ^{2}(\alpha )(2\alpha -1)} \left[ 2l_{1}\int _{0}^{b}\Vert x(t)\Vert ^{2}\mathrm{d}t+2l_{1}r^{2}+l^{2}_{f}b\right] . \end{aligned}$$
(4.2)

Dividing both sides of (4.2) by \(r^{2}\) and taking the limit as \(r\rightarrow \infty \), it follows that

$$\begin{aligned} \frac{4M^{2}b^{2\alpha }(1+C)^{2}l_{1}}{\Gamma ^{2}(\alpha )(2\alpha -1)}\ge 1, \end{aligned}$$

which is a contradiction to (4.1). Thus, \(g_{x}\) maps \(B_{r}\) into \(B_{r}\).

By \((H_{4})\) and Lemma 2.2, \(T_{\alpha }(t)\) is a compact operator, which implies that \(g_{x}\) is a compact operator.

By the Schauder fixed-point theorem, \(g_{x}\) has a fixed point \(m_{0}\in {\mathscr {D}}_{0}\), which implies that \(g_{x}(m_{0})={\mathscr {K}}n_{0}=m_{0}\). \(\square \)

Theorem 4.2

Suppose that \((H_{1})-(H_{5})\) are fulfilled and (4.1) is satisfied. Then system (1.1) is approximately controllable.

Proof

Suppose that \(x(\cdot )\) is the mild solution of system (1.1)\(^{*}\), that is

$$\begin{aligned} x(t)= & {} S_{\alpha }(t)x_{0}+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)Bu(s)\mathrm{d}s\\&+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)\sigma (s)\mathrm{d}B^{H}(s). \end{aligned}$$

Recalling that \(g_{x}(m_{0})={\mathscr {K}}n_{0}=m_{0}\), we get

$$\begin{aligned} F(x+m_{0})(t)=n_{0}(t)+q_{0}(t). \end{aligned}$$

Applying \({\mathscr {K}}\) to both sides, we obtain

$$\begin{aligned} {\mathscr {K}}F(x+m_{0})(t)={\mathscr {K}}n_{0}(t)+{\mathscr {K}}q_{0}(t)=m_{0}(t)+{\mathscr {K}}q_{0}(t). \end{aligned}$$

Hence,

$$\begin{aligned} x(t)+{\mathscr {K}}F(x+m_{0})(t)=x(t)+m_{0}(t)+{\mathscr {K}}q_{0}(t). \end{aligned}$$

Denote \(y(t)=x(t)+m_{0}(t)\), then

$$\begin{aligned} x(t)+{\mathscr {K}}Fy(t)=y(t)+{\mathscr {K}}q_{0}(t). \end{aligned}$$

Hence,

$$\begin{aligned} y(t)= & {} x(t)+{\mathscr {K}}Fy(t)-{\mathscr {K}}q_{0}(t)\\= & {} S_{\alpha }(t)x_{0}+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)[Bu(s)-q_{0}(s)]\mathrm{d}s\\&+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)f(s,y(s))\mathrm{d}s+ \int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)\sigma (s)\mathrm{d}B^{H}(s). \end{aligned}$$

Therefore, \(y=x+m_{0}\) is the mild solution of the following equation

$$\begin{aligned} \left\{ \begin{array}{ll} ^{C}D^{\alpha }y(t)=Ay(t)+(Bu-q_{0})(t)+f(t,y(t))+\sigma (t)\frac{\mathrm{d}B^{H}(t)}{\mathrm{d}t},\;t\in [0,b],\\ y(0)=x_{0}. \end{array}\right. \end{aligned}$$
(4.3)

By the definition of \({\mathscr {K}}\) and \(n_{0}\in {\mathscr {N}}_{0}({\mathscr {L}})\), we have \(m_{0}(0)=m_{0}(b)=0\). Further,

$$\begin{aligned} y(0)= & {} x(0)+m_{0}(0)=x_{0}, \\ y(b)= & {} x(b)+m_{0}(b)=x(b)\in K_{b}(0). \end{aligned}$$

Next, we will prove that \(K_{b}(0)\subseteq \overline{K_{b}(f)}\). Since \(q_{0}\in \overline{R(B)}\), there exists \(v\in L^{2}_{{\mathscr {F}}}(J,U)\) such that

$$\begin{aligned} \sup _{t\in J}E\Vert Bv-q_{0}\Vert ^{2}<\varepsilon . \end{aligned}$$

Denote \({\widetilde{u}}=u-v\) and suppose that \(x_{{\widetilde{u}}}\) is the mild solution of the following equation

$$\begin{aligned} \left\{ \begin{array}{ll} ^{C}D^{\alpha }w(t)=Aw(t)+B{\widetilde{u}}(t)+f(t,w(t))+\sigma (t)\frac{\mathrm{d}B^{H}(t)}{\mathrm{d}t},\;t\in [0,b],\\ w(0)=x_{0}. \end{array}\right. \end{aligned}$$

Thus,

$$\begin{aligned} x_{{\widetilde{u}}}(t)= & {} S_{\alpha }(t)x_{0}+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s) \left[ f(s,x_{{\widetilde{u}}}(s))+B{\widetilde{u}}(s)\right] \mathrm{d}s\\&+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)\sigma (s)\mathrm{d}B^{H}(s),\;t\in [0,b], \end{aligned}$$

and \(x_{{\widetilde{u}}}(b)\in K_{b}(f)\).

On the other hand,

$$\begin{aligned}&E\Vert y(t)-x_{{\widetilde{u}}}(t)\Vert ^{2}\\&\quad = E\bigg \Vert \int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)[(Bv)(s)-q_{0}(s)]\mathrm{d}s\\&\quad \quad +\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)[f(s,y(s))-f(s,x_{{\widetilde{u}}}(s))]\mathrm{d}s\bigg \Vert ^{2}\\&\quad \le 2E\left\| \int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)[(Bv)(s)-q_{0}(s)]\mathrm{d}s\right\| ^{2}\\&\quad \quad +2E\left\| \int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)[f(s,y(s))-f(s,x_{{\widetilde{u}}}(s))]\mathrm{d}s\right\| ^{2}\\&\quad \le \frac{2M^{2}}{\Gamma ^{2}(\alpha )}\left( \int _{0}^{t}(t-s)^{2\alpha -2}\mathrm{d}s\right) \left( \int _{0}^{t}E\Vert Bv-q_{0}\Vert ^{2}\mathrm{d}s\right) \\&\quad \quad +\frac{2M^{2}l_{1}}{\Gamma ^{2}(\alpha )}\left( \int _{0}^{t}(t-s)^{2\alpha -2}\mathrm{d}s\right) \left( \int _{0}^{t}E\Vert y(s)-x_{{\widetilde{u}}}(s)\Vert ^{2}\mathrm{d}s\right) \\&\quad \le \frac{2b^{2\alpha }M^{2}\varepsilon }{(2\alpha -1)\Gamma ^{2}(\alpha )} +\frac{2M^{2}l_{1}b^{2\alpha -1}}{(2\alpha -1)\Gamma ^{2}(\alpha )} \int _{0}^{t}E\Vert y(s)-x_{{\widetilde{u}}}(s)\Vert ^{2}\mathrm{d}s. \end{aligned}$$

Let \(\phi (t)=E\Vert y(t)-x_{{\widetilde{u}}}(t)\Vert ^{2}\). According to Gronwall's lemma, it follows that

$$\begin{aligned} E\Vert y(t)-x_{{\widetilde{u}}}(t)\Vert ^{2}\le & {} \frac{2M^{2}b^{2\alpha }\varepsilon }{(2\alpha -1)\Gamma ^{2}(\alpha )} \exp \left\{ \frac{2M^{2}l_{1}b^{2\alpha }}{(2\alpha -1)\Gamma ^{2}(\alpha )}\right\} . \end{aligned}$$
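Here Gronwall's lemma is used in the following integral form: if \(\phi (t)\le a+c\int _{0}^{t}\phi (s)\mathrm{d}s\) for \(t\in J\), then \(\phi (t)\le a\mathrm{e}^{ct}\le a\mathrm{e}^{cb}\); in our case

$$\begin{aligned} a=\frac{2M^{2}b^{2\alpha }\varepsilon }{(2\alpha -1)\Gamma ^{2}(\alpha )},\quad c=\frac{2M^{2}l_{1}b^{2\alpha -1}}{(2\alpha -1)\Gamma ^{2}(\alpha )}, \end{aligned}$$

so that \(cb=\frac{2M^{2}l_{1}b^{2\alpha }}{(2\alpha -1)\Gamma ^{2}(\alpha )}\), which is exactly the constant appearing in the exponential above.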

Moreover,

$$\begin{aligned} E\Vert y(b){-}x_{{\widetilde{u}}}(b)\Vert ^{2}{\le } \sup _{t\in J}E\Vert y(t){-}x_{{\widetilde{u}}}(t)\Vert ^{2}\le & {} \frac{2M^{2}b^{2\alpha }\varepsilon }{(2\alpha -1)\Gamma ^{2}(\alpha )} \exp \left\{ \frac{2M^{2}l_{1}b^{2\alpha }}{(2\alpha -1)\Gamma ^{2}(\alpha )}\right\} . \end{aligned}$$

Therefore,

$$\begin{aligned} E\Vert x(b)-x_{{\widetilde{u}}}(b)\Vert ^{2}= & {} E\Vert y(b)-x_{{\widetilde{u}}}(b)\Vert ^{2}\\\le & {} \frac{2M^{2}b^{2\alpha }\varepsilon }{(2\alpha -1)\Gamma ^{2}(\alpha )} \exp \left\{ \frac{2M^{2}l_{1}b^{2\alpha }}{(2\alpha -1)\Gamma ^{2}(\alpha )}\right\} , \end{aligned}$$

Since \(\varepsilon >0\) is arbitrary, this implies that \(K_{b}(0)\subseteq \overline{K_{b}(f)}\). By Theorem 4.1, \(\overline{K_{b}(0)}=L^{2}(\Omega ,Y)\). Therefore, \(\overline{K_{b}(f)}=L^{2}(\Omega ,Y)\). Hence, system (1.1) is approximately controllable. \(\square \)

Remark 4.1

Theorem 4.2 is a generalization of the results in [22,23,24,25,26,27,28,29]. Our results are obtained without assuming that the nonlinear term is uniformly bounded or that the corresponding fractional linear system is approximately controllable.

Based on the arguments above, we can extend our results to the approximate controllability of FSDEs with bounded delay

$$\begin{aligned} \left\{ \begin{array}{ll} ^{C}D^{\alpha }x(t)=Ax(t)+Bu(t)+g(t,x(t-h))+\sigma (t)\frac{\mathrm{d}B^{H}(t)}{\mathrm{d}t},\;t\in J:=[0,b],\\ x(t)=\psi (t),\;t\in [-h,0], \end{array}\right. \end{aligned}$$
(4.4)

where \(\alpha \in (\frac{1}{2},1]\), \(^{C}D^{\alpha }\) denotes the Caputo fractional derivative, \(A,B,u,\sigma ,B^{H}\) are defined as in system (1.1), \(g:J\times Y\rightarrow Y\), and \(\psi \in C([-h,0],Y)\). Let \(\widetilde{{\mathscr {C}}}=C\left( [-h,b],L^{2}(\Omega ,Y)\right) \). It is a Banach space with the norm \(\Vert x\Vert _{\widetilde{{\mathscr {C}}}}:=\left( \sup \limits _{t\in [-h,b]}E\Vert x(t)\Vert ^{2}\right) ^{\frac{1}{2}}\).

The stochastic process \(x\in \widetilde{{\mathscr {C}}}\) is a mild solution of system (4.4) if \(x(t)=\psi (t),\;t\in [-h,0]\) and for all \(t\in [0,b]\) it satisfies the following integral equation

$$\begin{aligned} x(t)= & {} S_{\alpha }(t)\psi (0)+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)\left[ g(s,x(s-h))+Bu(s)\right] \mathrm{d}s\\&+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)\sigma (s)\mathrm{d}B^{H}(s),\;t\in [0,b],\;\;\;P-a.s. \end{aligned}$$

We introduce the following hypotheses.

\((H'_1)\): The function \(g:J\times Y\rightarrow Y\) is measurable and there exists a constant \(c_{1}>0\) such that for \(\forall \;x\in Y,\forall \;t\in J\),

$$\begin{aligned} \Vert g(t,x)\Vert ^{2}\le c_{1}(1+\Vert x\Vert ^{2}). \end{aligned}$$

\((H'_2)\): There exists a constant \(l_{1}>0\) such that for \(\forall \;x,y\in Y,\forall \;t\in J\),

$$\begin{aligned} \Vert g(t,x)-g(t,y)\Vert ^{2}\le l_{1}\Vert x-y\Vert ^{2}. \end{aligned}$$

Theorem 4.3

Suppose that \((H'_{1}),~(H'_{2}),~(H_{3})\) and \((H_{4})\) are satisfied. Then system (4.4) has a unique mild solution in \(\widetilde{{\mathscr {C}}}\) provided that

$$\begin{aligned} \frac{M^{2}b^{2\alpha }l_{1}}{\Gamma ^{2}(\alpha )(2\alpha -1)}<1. \end{aligned}$$
(4.5)

Proof

Define the operator \(T:\widetilde{{\mathscr {C}}}\rightarrow \widetilde{{\mathscr {C}}}\) by

$$\begin{aligned} (Tx)(t)=\left\{ \begin{array}{lll} S_{\alpha }(t)\psi (0)+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)[g(s,x(s-h))+Bu(s)]\mathrm{d}s\\ \;\;+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)\sigma (s)\mathrm{d}B^{H}(s), \;t\in [0,b],\\ \psi (t),\;t\in [-h,0].\\ \end{array}\right. \end{aligned}$$

Using the same method as in Lemmas 3.1 and 3.2, one can deduce that \(T:\widetilde{{\mathscr {C}}}\rightarrow \widetilde{{\mathscr {C}}}\) is well defined. By simple calculations and (4.5), one can deduce that T is a contraction (a sketch of this estimate is given after the proof). By the Banach contraction principle, T has a unique fixed point in \(\widetilde{{\mathscr {C}}}\). Therefore, system (4.4) has a unique mild solution. \(\square \)
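For completeness, the calculation behind the contraction claim in the proof above can be sketched as follows. For \(x,y\in \widetilde{{\mathscr {C}}}\) with \(x(t)=y(t)=\psi (t)\) on \([-h,0]\), Lemma 2.2, \((H'_{2})\) and Hölder's inequality give, for \(t\in [0,b]\),

$$\begin{aligned} E\Vert (Tx)(t)-(Ty)(t)\Vert ^{2}\le & {} \frac{M^{2}}{\Gamma ^{2}(\alpha )}\left( \int _{0}^{t}(t-s)^{2\alpha -2}\mathrm{d}s\right) \left( \int _{0}^{t}E\Vert g(s,x(s-h))-g(s,y(s-h))\Vert ^{2}\mathrm{d}s\right) \\\le & {} \frac{M^{2}l_{1}b^{2\alpha }}{\Gamma ^{2}(\alpha )(2\alpha -1)}\Vert x-y\Vert ^{2}_{\widetilde{{\mathscr {C}}}}, \end{aligned}$$

so (4.5) indeed makes T a contraction on \(\widetilde{{\mathscr {C}}}\).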

Theorem 4.4

Suppose that \((H'_{1}),~(H'_{2}),~(H_{3}){-}(H_{5})\) are fulfilled and (4.1), (4.5) are satisfied. Then system (4.4) is approximately controllable.

Proof

Using the same method as in Theorems 4.1 and 4.2, one can prove this theorem; hence we omit the detailed proof here. \(\square \)

Remark 4.2

If \(\sigma (t)\equiv 0\), then system (4.4) reduces to system (1) of [30]. Hence, the results of [30] are generalized.

5 An Example

Consider the following fractional stochastic control system

$$\begin{aligned} \left\{ \begin{array}{ll} ^{C}D^{\frac{3}{4}}z(t,\xi )=\frac{\partial ^{2}}{\partial \xi ^{2}}z(t,\xi ) +f(t,z(t,\xi )){+}Bu(t,\xi ){+}\sigma (t)\frac{\mathrm{d}B^{H}(t)}{\mathrm{d}t},\; t\in [0,1],\;\xi \in (0,\pi ),\\ z(t,0)=z(t,\pi )=0,\;t\in [0,1],\\ z(0,\xi )=z_{0}(\xi ),\; \xi \in (0,\pi ), \end{array}\right. \end{aligned}$$
(5.1)

where \(^{C}D^{\frac{3}{4}}\) is the Caputo fractional derivative of order \(\frac{3}{4}\) with respect to t, \(B^{H}\) denotes a fBm defined on \(\left( \Omega ,{\mathscr {F}},P\right) \). Let \(Y=V=L^{2}(0,\pi ),\;J=[0,1]\). Define the operator \(A:D(A)\subset Y\rightarrow Y\) by \(Az=\frac{\partial ^{2}z}{\partial \xi ^{2}}\), where

$$\begin{aligned} D(A)=\left\{ z\in Y:z,\frac{\partial z}{\partial \xi }\;\text {are absolutely continuous,}\;\frac{\partial ^{2}z}{\partial \xi ^{2}}\in Y,\;z(0)=z(\pi )=0\right\} . \end{aligned}$$

Let \(e_{n}(\xi )=\sqrt{\frac{2}{\pi }}\sin (n\xi ),n=1,2,\ldots \). Note that \(\{e_{n}\}_{n\ge 1}\) is a complete orthonormal set of eigenvectors of A. It is easy to check that A generates a strongly continuous semigroup \(\{S(t)\}_{t\ge 0}\) which is compact, analytic and self-adjoint [9]. Hence, \((H_{4})\) is fulfilled.
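For concreteness (a standard fact recorded here for the reader's convenience), A and the semigroup it generates admit the spectral representations

$$\begin{aligned} Az=-\sum _{n=1}^{\infty }n^{2}\langle z,e_{n}\rangle e_{n},\;z\in D(A),\quad S(t)z=\sum _{n=1}^{\infty }e^{-n^{2}t}\langle z,e_{n}\rangle e_{n},\;z\in Y,\;t\ge 0, \end{aligned}$$

so that \(\Vert S(t)\Vert \le 1\) for all \(t\ge 0\) and one may take \(M=1\).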

Define an infinite-dimensional space U by

$$\begin{aligned} U=\left\{ u:u=\sum _{n=2}^{\infty }u_{n}e_{n}\;\text {with}\sum _{n=2}^{\infty }u^{2}_{n}<\infty \right\} . \end{aligned}$$

The norm in U is defined by \(\Vert u\Vert _{U}=\left( \sum _{n=2}^{\infty }u^{2}_{n}\right) ^{\frac{1}{2}}\). Define the bounded linear operator \(B:U\rightarrow Y\) as follows:

$$\begin{aligned} Bu=2u_{2}e_{1}+\sum _{n=2}^{\infty }u_{n}e_{n},\;\text {for}\;u=\sum _{n=2}^{\infty }u_{n}e_{n}\in U. \end{aligned}$$
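Indeed, B is linear and bounded (a quick verification added here): for \(u=\sum _{n=2}^{\infty }u_{n}e_{n}\in U\),

$$\begin{aligned} \Vert Bu\Vert ^{2}_{Y}=4u^{2}_{2}+\sum _{n=2}^{\infty }u^{2}_{n}\le 5\Vert u\Vert ^{2}_{U}, \end{aligned}$$

so \(\Vert B\Vert \le \sqrt{5}\).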

Choose a sequence \(\{\alpha _{n}\}_{n\in N}\) with \(\alpha _{n}\ge 0\). Consider the operator \(Q:V\rightarrow V\) defined by \(Qe_{n}=\alpha _{n}e_{n}\). Assume that

$$\begin{aligned} \sum _{n=1}^{\infty }\sqrt{\alpha _{n}}<\infty , \end{aligned}$$

which in particular implies \(tr(Q)=\sum _{n=1}^{\infty }\alpha _{n}<\infty \).

The process \(B^{H}(t)\) is defined by

$$\begin{aligned} B^{H}(t)=\sum _{n=1}^{\infty }\sqrt{\alpha _{n}}\beta ^{H}_{n}(t)e_{n},\;t\ge 0,\;\frac{1}{2}<H<1, \end{aligned}$$

where \(\{\beta ^{H}_{n}\}_{n\in N}\) is a sequence of mutually independent one-dimensional fBms.

Let

$$\begin{aligned} x(t)(\xi )=z(t,\xi ),f(t,x(t))(\xi )=f(t,z(t,\xi )),u(t)(\xi )=u(t,\xi ). \end{aligned}$$

Then, (5.1) can be reformulated as

$$\begin{aligned} \left\{ \begin{array}{ll} ^{C}D^{\frac{3}{4}}x(t)=Ax(t)+Bu(t)+f(t,x(t))+\sigma (t)\frac{\mathrm{d}B^{H}(t)}{\mathrm{d}t},\;t\in [0,1],\\ x(0)=x_{0} \end{array}\right. \end{aligned}$$

Define \(f(t,z(t,\xi ))=\frac{e^{-t}|z(t,\xi )|}{(1+e^{t})(1+|z(t,\xi )|)}\). Clearly, we have

$$\begin{aligned} \Vert f(t,z(t,\xi ))\Vert \le |z(t,\xi )|, \end{aligned}$$

and

$$\begin{aligned}&\Vert f(t,z_{1}(t))(\xi )-f(t,z_{2}(t))(\xi )\Vert \\&\quad =\frac{e^{-t}\left| |z_{1}(t,\xi )|-|z_{2}(t,\xi )|\right| }{(1+e^{t})(1+|z_{1}(t,\xi )|)(1+|z_{2}(t,\xi )|)}\\&\quad \le \frac{e^{-t}}{1+e^{t}}|z_{1}(t,\xi )-z_{2}(t,\xi )|\\&\quad \le \frac{1}{2}|z_{1}(t,\xi )-z_{2}(t,\xi )|. \end{aligned}$$

Hence, \((H_1)\) and \((H_{2})\) are satisfied (with \(c_{1}=1\) and \(l_{1}=\frac{1}{4}\)). If conditions \((H_3)\), \((H_5)\) and (4.1) are satisfied, then by Theorem 4.2, system (5.1) is approximately controllable on [0, 1].