1 Introduction

We consider the following nonlinear time-fractional Sobolev-type stochastic evolution equations:

$$\begin{aligned} \!^HD_{0+}^{\alpha ,\beta } (Ex)(t)+A x(t)=f(t,x)+B(t)u(t)+g(t,x)\frac{d}{dt}W(t),\quad \ t\in (0,T], \end{aligned}$$
(1)

and the initial value condition

$$\begin{aligned} (I_{0+}^{1-\gamma }x)(0) =x_0, \end{aligned}$$
(2)

where \(^HD_{0+}^{\alpha ,\beta }\) is the Hilfer fractional derivative of order \(\alpha \in (1/2,1)\) and type \(\beta \in [0,1]\), with \(\gamma =\alpha +\beta (1-\alpha )\), and \(I^{1-\gamma }_{0+}\) is the Riemann–Liouville fractional integral of order \(1-\gamma \). The operators A and E are two linear operators defined on a Banach space X with domains \(D(A)\subset X \) and \(D(E)\subset X \), which satisfy the following hypotheses: (i) A is closed; (ii) \(D(E)\subset D(A)\) and E is bijective; (iii) \(E^{-1}:X\rightarrow D(E)\) is compact. Hypotheses (i)–(iii) together with the closed graph theorem imply that the linear operator \(AE^{-1}:X\rightarrow X\) is bounded, and hence it generates a uniformly continuous semigroup \(\{T_E(t),t \ge 0\}\) on X. The state \(x(\cdot )\) takes its values in a separable Hilbert space V with inner product \((\cdot ,\cdot )_V\) and norm \(\Vert \cdot \Vert _V\); the control u takes its values in a separable Hilbert space U of admissible control functions, and \(x_0\in D(E)\). The term W(t) is an \({\mathcal {F}}_t\)-adapted Wiener process defined on a filtered probability space \(({\mathbf {D}}, {\mathcal {F}}, {\mathbf {P}},\{{\mathcal {F}}_t\}_{t\ge 0})\). The operator \(B:[0,T]\rightarrow {\mathcal {L}}(U, V)\) is linear, where \({\mathcal {L}}(U, V)\) denotes the space of all bounded linear operators from U into V. The functions f and g are given and satisfy suitable assumptions specified below.

During the past decades, fractional calculus has attracted much attention. Owing to the nonlocal property of fractional derivatives, a number of phenomena described by fractional differential equations have been investigated by researchers in mathematics, science and engineering; see the relevant works [1, 13, 17, 26]. A classical example is the anomalous diffusion process, in which fractional derivatives have played a decisive role; see, e.g., [10, 12, 19, 20, 28, 33] and the references therein. When the performance index and the system dynamics are described by fractional differential equations, a fractional optimal control problem arises naturally; a typical case is the fractional optimal control of a distributed system. For fractional optimal controls, see Wang et al. [29], Debbouche and Nieto [4], Yan and Jia [30, 31], Liu and Wang [16], Liu [15], El-Borai et al. [6] and the references therein. Besides, Sobolev-type problems have many applications in mathematical models, for instance the flow of fluid through fissured rocks, thermodynamics, and the propagation of long waves of small amplitude; see, e.g., [14, 27]. In particular, in [3] the authors considered fractional stochastic evolution equations of Sobolev type of order \(\alpha \in (1,2)\) and showed the existence of solutions and existence conditions of optimal pairs in optimal control systems. Let us point out that without the Sobolev type, i.e., with \(E=bI\) for an identity operator I and some \(b>0\), problem (1)–(2) reduces to a special fractional stochastic evolution equation, whose existence and optimal control of solutions have been considered by many scholars; see the papers [7, 22, 23] and the references therein.

We remark that the Hilfer fractional derivative generalizes both the classical Riemann–Liouville and Caputo fractional derivatives. Although the existence of solutions for the Caputo case of problem (1)–(2) has already been investigated in [18], to the best of our knowledge, the optimal control of fractional stochastic evolution equations of Sobolev type has not been studied extensively; in particular, the Riemann–Liouville case of problem (1)–(2) has not been treated yet. In this paper, we investigate the existence of solutions and optimal controls for this type of problem with the Hilfer fractional derivative. The results in the current paper are thus a natural generalization of the cases of the Riemann–Liouville and Caputo fractional derivatives.

This paper is organized as follows. Section 2 introduces some useful notations and preparations. In Sect. 3, the existence of mild solutions of problem (1)–(2) is presented without the control term. In Sect. 4, we study the existence of optimal controls for the Bolza problem. Finally, an application is given to illustrate the main results.

2 Preliminaries

Throughout this paper, we denote by X a Hilbert space with norm \(\Vert \cdot \Vert \). Let \(J=[0,T]\), \(J'=(0,T]\), and let \({\mathcal {L}}(X,Y)\) be the space of all bounded linear operators from X into Y equipped with the norm \(\Vert \cdot \Vert _{{\mathcal {L}}(X,Y)}\). For the sake of simplicity, we denote \({\mathcal {L}}(X,X)\) by \({\mathcal {L}}(X)\), equipped with the norm \(\Vert \cdot \Vert _{{\mathcal {L}}(X)}\).

Let \({\mathcal {X}}\) be a real random variable defined on a complete probability space \(({\mathbf {D}}, {\mathcal {F}}, {\mathbf {P}})\) equipped with a normal filtration \({\mathcal {F}}_t\), \(t\in J\). \({\mathcal {X}}\) is called p-integrable if

$$\begin{aligned} \int _{{\mathbf {D}}}|{\mathcal {X}}(w)|^p{\mathbf {P}}(dw)<\infty , \quad p\ge 1. \end{aligned}$$

The totality of p-integrable random variables with \(p=2\) is denoted by \( L_2({\mathbf {D}},{\mathcal {F}},{\mathbf {P}};X)\), or simply by \( L_2({\mathbf {D}},X)\), endowed with the norm

$$\begin{aligned} {\mathbf {E}}\Vert {\mathcal {X}}\Vert ^2 = \int _{{\mathbf {D}}} \Vert {\mathcal {X}}(w)\Vert ^2{\mathbf {P}} (d w), \end{aligned}$$

where \(\displaystyle {\mathbf {E}}({\mathcal {X}}) = \int _{{\mathbf {D}}}{\mathcal {X}}(w){\mathbf {P}}(dw)\) denotes the expectation of an integrable random variable \({\mathcal {X}}\).

Let K be a separable Hilbert space and \(W(t):J\times {\mathbf {D}}\rightarrow K\) be a Q-Wiener process on \(({\mathbf {D}},{\mathcal {F}}, {\mathbf {P}})\) with a bounded linear covariance operator Q such that \(TrQ<\infty \), adapted to the normal filtration \(\{{\mathcal {F}}_t\}_{t\in J}\). Assume that there exist a complete orthonormal system \(\{e_k\}_{k=1}^\infty \) in K and a bounded sequence of nonnegative real numbers \(\{\lambda _k\}_{k=1}^\infty \) such that

$$\begin{aligned} Qe_k=\lambda _ke_k,\quad \lambda _k\ge 0,\quad k= 1, 2,3,\ldots , \end{aligned}$$

and a sequence of independent real-valued Brownian motions \(\{\sigma _k\}_{k=1}^\infty \) such that

$$\begin{aligned} \langle W(t),e\rangle =\sum _{k=1}^\infty \sqrt{\lambda _k}\langle e_k,e\rangle \sigma _k(t),\quad e\in K,\ \ t\in J. \end{aligned}$$

We introduce a subspace \(X_0=Q^{1/2}K\) of K endowed with the inner product

$$\begin{aligned} \langle u,v\rangle _{X_0}=\langle Q^{-1/2}u,Q^{-1/2}v\rangle _K,\quad {\text {for\, any}}\ u,v\in X_0, \end{aligned}$$

where \(Q^{-1/2}\) denotes the pseudo-inverse of \(Q^{1/2}\). It is clear that \(X_0\) is a Hilbert space. Let \(L^0_2= L^2(X_0,X)\) be the space of all Hilbert–Schmidt operators from \(X_0\) to X endowed with the inner product \(\langle \varphi ,\phi \rangle _{L_2^0}=Tr[\varphi Q\phi ^*]\) for any \(\varphi ,\phi \in L_2^0\). Obviously, it is also a Hilbert space. Since the sub-\(\sigma \)-algebras \({\mathcal {F}}_t\) are complete for each \(t\ge 0\), the spaces \(L_2 ({\mathcal {F}}_t,X)\) are closed subspaces of \(L_2({\mathbf {D}},X)\) and hence are also Banach spaces. Similarly, \(L_{{\mathcal {F}}}^2(J,X)\) denotes the closed subspace of \(L^2_{{\mathcal {F}}}(J\times {\mathbf {D}},X)\) consisting of all \({\mathcal {F}}_t\)-progressively measurable random processes defined on J, taking values in X and satisfying \({\mathbf {E}}\int _0^T\Vert x(t)\Vert ^2 dt <\infty \).

Let \(C(J, L_2({\mathbf {D}},X))\) be the Banach space of continuous maps from J into \(L_2({\mathbf {D}},X)\) such that \(\sup _{t\in J}e^{-Lt}{\mathbf {E}}\Vert x(t)\Vert ^2< \infty \) for a fixed constant \(L>0\). Let \( {\mathcal {C}} (J, X)\) be the closed subspace of \(C(J, L_2({\mathbf {D}},X))\) consisting of measurable, \( {\mathcal {F}}_t\)-adapted X-valued continuous processes, endowed with the norm

$$\begin{aligned} \Vert x\Vert _{{\mathcal {C}}} =\left( \sup _{t\in J}e^{-Lt}{\mathbf {E}}\Vert x(t)\Vert ^2\right) ^{1/2}. \end{aligned}$$

Then, \(({\mathcal {C}}(J, X), \Vert \cdot \Vert _{{\mathcal {C}}})\) is a Banach space.

Let \(\gamma =\alpha +\beta (1-\alpha )\); then \(1-\gamma =(1-\alpha )(1-\beta )\ge 0\). Define

$$\begin{aligned} {\mathcal {C}}^\gamma (J,X)=\{x\in {\mathcal {C}}(J', X):\ t^{1-\gamma }x\in {\mathcal {C}}(J, X)\}. \end{aligned}$$

It is clear that \({\mathcal {C}}^\gamma (J,X) \) is a Banach space with norm

$$\begin{aligned} \Vert x\Vert _{{\mathcal {C}}^\gamma } =\left( \sup _{t\in J}e^{-Lt}t^{1-\gamma }{\mathbf {E}}\Vert x(t)\Vert ^2\right) ^{1/2}. \end{aligned}$$

Let function \(g_\gamma (\cdot )\) be given by

$$\begin{aligned} g_\gamma (t) = t^{\gamma -1}/\varGamma (\gamma ),\quad {\text {for}}\; t>0,\ \gamma >0, \end{aligned}$$

where \(\varGamma (\cdot )\) is the usual gamma function. In the case \(\gamma = 0\), we set \(g_0(t) = \delta (t)\), the Dirac measure concentrated at the origin.

Let us recall the definitions of fractional derivatives. For more details, we refer to [11, 13, 24].

Definition 2.1

The Riemann–Liouville fractional integral of order \(\alpha >0\) for a function \(x:[0,\infty )\rightarrow X\) is defined by

$$\begin{aligned} I^\alpha _{0+} x(t)=\frac{1}{\varGamma (\alpha )}\int _0^t(t-s)^{\alpha -1}x(s)ds,\quad t>0, \end{aligned}$$

provided the right side is pointwise defined on \([0,\infty )\).
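
As a quick numerical sanity check (not part of the original analysis), the standard identity \(I^\alpha _{0+}\, t^b=\frac{\varGamma (b+1)}{\varGamma (\alpha +b+1)}t^{\alpha +b}\) can be verified by direct quadrature; the function and parameter names below are illustrative only.

```python
from scipy.integrate import quad
from scipy.special import gamma as Gamma

def rl_integral(x, alpha, t):
    """(I^alpha_{0+} x)(t) by direct quadrature of the weakly singular kernel."""
    val, _ = quad(lambda s: (t - s) ** (alpha - 1) * x(s), 0, t)
    return val / Gamma(alpha)

# Known identity: I^alpha_{0+} applied to s^b, evaluated at t, equals
# Gamma(b+1)/Gamma(alpha+b+1) * t^(alpha+b).
alpha, b, t = 0.6, 2.0, 1.5
numeric = rl_integral(lambda s: s ** b, alpha, t)
exact = Gamma(b + 1) / Gamma(alpha + b + 1) * t ** (alpha + b)
print(abs(numeric - exact))  # tiny quadrature error
```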

Definition 2.2

The Hilfer fractional derivative of order \(0<\alpha <1\) and \(0\le \beta \le 1\) for a function \(x :[0, \infty )\rightarrow X\) is defined as

$$\begin{aligned} ^H\!D^{\alpha ,\beta }_{0+} x(t)=I^{\beta (1-\alpha )}_{0+} \left( \frac{d}{dt} I^{(1-\beta )(1-\alpha )}_{0+} x \right) (t),\quad t>0. \end{aligned}$$

Remark 2.1

Noting that if \(\beta =0\) and \(0<\alpha < 1\), the Hilfer fractional derivative is the Riemann–Liouville fractional derivative. If \(\beta =1\) and \(0<\alpha <1\), the Hilfer fractional derivative is the Caputo fractional derivative.
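
Indeed, both special cases follow directly from Definition 2.2, since \(I^{0}_{0+}\) is the identity operator:

$$\begin{aligned} ^H\!D^{\alpha ,0}_{0+} x(t)=\frac{d}{dt}\left( I^{1-\alpha }_{0+} x\right) (t),\qquad ^H\!D^{\alpha ,1}_{0+} x(t)=I^{1-\alpha }_{0+}\left( \frac{d}{dt} x\right) (t), \end{aligned}$$

which are precisely the Riemann–Liouville and Caputo fractional derivatives of order \(\alpha \), respectively.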

We first introduce the Wright-type function \(M_\varsigma (\cdot )\) (for more details, we refer to [13, 19]), which is defined by

$$\begin{aligned} M_\varsigma (z)=\sum _{n=0}^\infty \frac{(-z)^{n}}{n!\varGamma (1-\varsigma (n+1))},\quad \varsigma \in (0,1),\ z\in {\mathbf {C}}. \end{aligned}$$

Lemma 2.1

[19] For any \(t > 0\), the Wright-type function has the properties

$$\begin{aligned} M_{\varsigma }(t)\ge 0,\quad \int ^{\infty }_{0}\theta ^{\delta } M_{\varsigma }(\theta )d\theta =\frac{\varGamma (1+\delta )}{\varGamma (1+\varsigma \delta )},\quad {\text {for}}\ -1<\delta <\infty . \end{aligned}$$
(3)
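
For the special value \(\varsigma =1/2\), the Wright-type function has the known closed form \(M_{1/2}(z)=e^{-z^2/4}/\sqrt{\pi }\), so the moment identity (3) can be checked numerically; the following is only an illustrative sketch.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

# Closed form of the Wright-type function at varsigma = 1/2 (a known identity).
M_half = lambda z: np.exp(-z * z / 4.0) / np.sqrt(np.pi)

for delta in (0.0, 0.5, 1.0, 2.0):
    moment, _ = quad(lambda th: th ** delta * M_half(th), 0, np.inf)
    exact = Gamma(1 + delta) / Gamma(1 + 0.5 * delta)
    assert abs(moment - exact) < 1e-8
print("moment identity (3) verified for varsigma = 1/2")
```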

Lemma 2.2

[25, Lemma 7.7] For any \(r\ge 1\) and for arbitrary \(L_2^0\)-valued predictable process \(\varPhi (\cdot )\), there is

$$\begin{aligned}\sup _{s\in [0,t]}{\mathbf {E}}\left( \left\| \int _0^s\varPhi (\tau )dW(\tau )\right\| ^{2r}\right) \le L_r\left( \int _0^t \left( {\mathbf {E}}\Vert \varPhi (s)\Vert ^{2r}_{L^0_2}\right) ^{1/r}ds\right) ^{r},\quad t\in [0,T],\end{aligned}$$

where \(L_r=(r(2r-1))^{r}\).
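
As an illustration (not part of the proof), for \(r=1\) (where \(L_1=1\)) Lemma 2.2 reduces to the Itô isometry; a Monte Carlo sketch with the deterministic scalar integrand \(\varPhi (s)=s\), where all numerical parameters are arbitrary choices:

```python
import numpy as np

# Monte Carlo sketch of the r = 1 case (the Ito isometry, L_1 = 1) with the
# deterministic scalar integrand Phi(s) = s; all parameters are arbitrary.
rng = np.random.default_rng(0)
T, n, paths = 1.0, 400, 20000
dt = T / n
s = np.arange(n) * dt                     # left endpoints of the partition
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
ito = (s * dW).sum(axis=1)                # Ito sums approximating int_0^T s dW(s)
lhs = (ito ** 2).mean()                   # Monte Carlo estimate of E|int Phi dW|^2
rhs = T ** 3 / 3                          # exact value of int_0^T E|Phi(s)|^2 ds
print(abs(lhs - rhs) / rhs)               # small relative error
```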

For \(x\in X\) and \(0<\alpha <1\), we define a family \(\{{\mathcal {S}}_{\alpha ,E}(t) : t \ge 0\}\) of operators by

$$\begin{aligned} {\mathcal {S}}_{\alpha ,E }(t)=\int _0^\infty \alpha \theta M_\alpha (\theta )T_E(t^\alpha \theta )d\theta . \end{aligned}$$
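
In the scalar case \(T_E(t)=e^{-\lambda t}\), this subordination formula is known to evaluate to the two-parameter Mittag-Leffler function, \({\mathcal {S}}_{\alpha ,E}(t)=E_{\alpha ,\alpha }(-\lambda t^\alpha )\). Taking \(\alpha =1/2\), where \(M_{1/2}(z)=e^{-z^2/4}/\sqrt{\pi }\) has a closed form, this can be checked numerically (an illustrative sketch; \(\lambda \) and t below are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

# Scalar sketch: T_E(t) = exp(-lam*t) and alpha = 1/2, where
# M_{1/2}(theta) = exp(-theta^2/4)/sqrt(pi); lam and t are arbitrary.
alpha, lam, t = 0.5, 1.0, 0.7
M = lambda th: np.exp(-th * th / 4.0) / np.sqrt(np.pi)
S, _ = quad(lambda th: alpha * th * M(th) * np.exp(-lam * t ** alpha * th), 0, np.inf)

# Known identity: S_{alpha,E}(t) = E_{alpha,alpha}(-lam * t^alpha); the
# Mittag-Leffler value is computed here by truncating its power series.
x = -lam * t ** alpha
ml = sum(x ** n / Gamma(alpha * n + alpha) for n in range(80))
print(abs(S - ml))  # tiny numerical error
```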

By arguments similar to those in Zhou [8, 32] or Gu and Trujillo [9], the following results hold.

Lemma 2.3

Assume that \(\sup _{t\ge 0}\Vert T_E(t)\Vert _{{\mathcal {L}}(X)}\le M\) for some \(M\ge 1\). One has the following properties:

  1. (i)

    for any fixed \(t\ge 0\), \({\mathcal {S}}_{\alpha ,E }(t)\) is a bounded linear operator on X with

    $$\begin{aligned} \Vert {\mathcal {S}}_{\alpha ,E }(t)\Vert _{{\mathcal {L}}(X)}\le \frac{M}{\varGamma (\alpha )}, \end{aligned}$$
  2. (ii)

    if \(T_E(t)\) is compact, then \({\mathcal {S}}_{\alpha ,E }(t)\) is compact in X for \(t > 0\),

  3. (iii)

    \({\mathcal {S}}_{\alpha ,E }(t): [0,\infty )\rightarrow {\mathcal {L}}(X)\) is continuous.

Lemma 2.4

[21] If \(\mu ,\nu ,\tau > 0\) and \(t> 0\), then

$$\begin{aligned} t^{1-\nu }\int _0^t(t-s)^{\nu -1}s^{\mu -1}e^{-\tau s}ds\le C_{\nu ,\mu } \tau ^{-\nu }, \end{aligned}$$

where

$$\begin{aligned} C_{\nu ,\mu }=\max \{1,2^{1-\mu } \}\varGamma (\nu ) (1+\nu (\nu +1)/\mu )>0. \end{aligned}$$
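
As a numerical spot check of Lemma 2.4 (the parameters \(\nu ,\mu ,\tau \) and the grid below are arbitrary choices for this sketch):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

def lhs(nu, mu, tau, t):
    """Left-hand side of the estimate in Lemma 2.4, by direct quadrature."""
    val, _ = quad(lambda s: (t - s) ** (nu - 1) * s ** (mu - 1) * np.exp(-tau * s), 0, t)
    return t ** (1 - nu) * val

def C(nu, mu):
    """The constant C_{nu,mu} of Lemma 2.4."""
    return max(1.0, 2.0 ** (1 - mu)) * Gamma(nu) * (1 + nu * (nu + 1) / mu)

nu, mu = 0.6, 0.8                          # arbitrary test parameters
for tau in (1.0, 3.0, 10.0):
    worst = max(lhs(nu, mu, tau, t) for t in np.linspace(0.1, 5.0, 25))
    assert worst <= C(nu, mu) * tau ** (-nu)
print("estimate of Lemma 2.4 holds on the sampled grid")
```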

3 Existence of Mild Solutions

In this section, we study the existence of mild solutions to problem (1)–(2) without the control term. To this end, we introduce the following definition of mild solutions, similar to that in [9]. We first consider the following problem:

$$\begin{aligned} \!^HD_{0+}^{\alpha ,\beta } (Ex)(t)+A x(t)= & {} f(t,x)+ g(t,x)\frac{d}{dt}W(t),\quad \ t\in (0,T], \end{aligned}$$
(4)
$$\begin{aligned} (I_{0+}^{1-\gamma }x)(0)= & {} x_0. \end{aligned}$$
(5)

Definition 3.1

A function \(x\in {\mathcal {C}}^\gamma (J, X)\) is called a mild solution of problem (4)–(5) if it satisfies the integral equation

$$\begin{aligned} x(t)= & {} E^{-1}{\mathcal {P}}_{\alpha ,E}^\beta (t)Ex_0+\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)f(s,x(s))ds\\&~&+\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)g(s,x(s))dW(s), \end{aligned}$$

for \(t\in J\), where

$$\begin{aligned} {\mathcal {P}}_{\alpha ,E}^\beta (t)= I_{0+}^{\beta (1-\alpha )}{\mathcal {T}}_{\alpha ,E}(t),\quad {\mathcal {T}}_{\alpha ,E}(t) = t^{\alpha -1}{\mathcal {S}}_{\alpha ,E}(t). \end{aligned}$$
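
In the trivial scalar case \(T_E(t)\equiv 1\) (so that \({\mathcal {S}}_{\alpha ,E}(t)=1/\varGamma (\alpha )\) and \({\mathcal {T}}_{\alpha ,E}(t)=t^{\alpha -1}/\varGamma (\alpha )\)), the operator \({\mathcal {P}}_{\alpha ,E}^\beta (t)\) should reduce to \(t^{\gamma -1}/\varGamma (\gamma )\); a numerical sketch with arbitrary \(\alpha ,\beta \):

```python
from scipy.integrate import quad
from scipy.special import gamma as Gamma

# Scalar sanity check with T_E(t) = 1 identically, so that
# T_{alpha,E}(t) = t^(alpha-1)/Gamma(alpha); alpha and beta are arbitrary.
alpha, beta = 0.6, 0.5
gam = alpha + beta * (1 - alpha)          # gamma = alpha + beta(1 - alpha)
b = beta * (1 - alpha)                    # order of the outer fractional integral

def P(t):
    """(I^{beta(1-alpha)} T_{alpha,E})(t) computed by quadrature."""
    val, _ = quad(lambda s: (t - s) ** (b - 1) * s ** (alpha - 1), 0, t)
    return val / (Gamma(b) * Gamma(alpha))

t = 1.3
print(abs(P(t) - t ** (gam - 1) / Gamma(gam)))  # tiny quadrature error
```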

To state our theorem, we need the following assumptions:

  1. (Hf)

    \(f : J\times X\rightarrow X\) satisfies the following:

    1. (a)

      \(f (t,\cdot ) :X\rightarrow X\) is continuous for each \(t\in J\) and for each \(x\in X\), \(f (\cdot , x) : J\rightarrow X\) is strongly measurable,

    2. (b)

      there is a positive function \(h(t)\in L^\infty (J, {\mathbf {R}}^+)\) such that for every \((t, x)\in J\times X \),

      $$\begin{aligned}{\mathbf {E}}\Vert f(t,x)\Vert ^2 \le h(t) (1+{\mathbf {E}}\Vert x\Vert ^2).\end{aligned}$$
  2. (Hg)

    \(g : J\times X\rightarrow L_0^2\) satisfies the following:

    1. (a)

      \(g(t,\cdot ) :X\rightarrow L_0^2\) is continuous for each \(t\in J\) and for each \(x\in X\), \(g (\cdot , x) : J\rightarrow L_0^2\) is strongly measurable,

    2. (b)

      there is a positive function \(l\in L^\infty (J, {\mathbf {R}}^+)\) such that for every \((t, x)\in J\times X \),

      $$\begin{aligned}{\mathbf {E}}\Vert g(t,x)\Vert ^2_{L_0^2}\le l(t) (1+{\mathbf {E}}\Vert x\Vert ^2).\end{aligned}$$

We note that if x is a solution to problem (4)–(5), there appears a singularity at \(t=0\). To overcome this difficulty when applying the Ascoli–Arzelà theorem and the Krasnoselskii fixed point theorem, for any \(x\in {\mathcal {C}}^\gamma (J,X)\), we define a mapping \({\mathcal {Q}}\) by

$$\begin{aligned} (\mathcal Qx)(t)=({\mathcal {Q}}_1x)(t)+({\mathcal {Q}}_2x)(t), \end{aligned}$$

where

$$\begin{aligned} ({\mathcal {Q}}_1x)(t)= & {} E^{-1}{\mathcal {P}}_{\alpha ,E}^\beta (t)Ex_0,\nonumber \\ ({\mathcal {Q}}_2x)(t)= & {} \int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)f(s,x(s))ds\nonumber \\&+\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)g(s,x(s))dW(s). \end{aligned}$$
(6)

For any \(y\in {\mathcal {C}}(J,X)\), let \(x(t)=t^{\gamma -1}y(t)\); then \(x\in {\mathcal {C}}^\gamma (J,X)\). Define a mapping \({\mathcal {S}}\) by

$$\begin{aligned} (\mathcal Sy)(t)=({\mathcal {S}}_1y)(t)+({\mathcal {S}}_2y)(t), \end{aligned}$$

where

$$\begin{aligned} ({\mathcal {S}}_1y)(t)= \frac{x_0}{\varGamma (\gamma )}, \quad ({\mathcal {S}}_2y)(t)=0, \quad {\text {for}}\; t=0, \end{aligned}$$

and

$$\begin{aligned} ({\mathcal {S}}_1y)(t)= t^{1-\gamma }({\mathcal {Q}}_1x)(t),\quad ({\mathcal {S}}_2y)(t)= t^{1-\gamma }({\mathcal {Q}}_2x)(t),\quad {\text {for}}\; t\in (0,T]. \end{aligned}$$

For each \(r>0\), we denote \({\mathcal {B}}_r=\{v\in {\mathcal {C}}(J, X): \Vert v\Vert _{{\mathcal {C}}}\le r\}\) and \({\mathcal {B}}_r^\gamma =\{v\in {\mathcal {C}}^\gamma (J, X): \Vert v\Vert _{{\mathcal {C}}^\gamma }\le r\}\). The sets \({\mathcal {B}}_r\) and \({\mathcal {B}}_r^\gamma \) are bounded, closed and convex subsets of \({\mathcal {C}}(J, X)\) and \({\mathcal {C}}^\gamma (J, X)\), respectively.

Lemma 3.1

Let \(\alpha \in (1/2,1)\). Assume that (Hf) and (Hg) hold. Then \( {\mathcal {S}}({\mathcal {B}}_r)\subset {\mathcal {B}}_r\) for some \(r>0\).

Proof

From Lemma 2.3, we know that

$$\begin{aligned} \Vert {\mathcal {P}}_{\alpha ,E}^\beta (t)x\Vert\le & {} \int _0^tg_{\beta (1-\alpha )}(t-s)s^{\alpha -1}\Vert {\mathcal {S}}_{\alpha ,E}(s)x\Vert ds\nonumber \\\le & {} \frac{M}{\varGamma (\alpha )}\int _0^tg_{\beta (1-\alpha )}(t-s)s^{\alpha -1}ds\Vert x\Vert =\frac{M\Vert x\Vert }{\varGamma (\gamma )} t^{\gamma -1}, \end{aligned}$$
(7)

for any \(x\in X\), \(t>0\), where \(\gamma =\alpha +\beta (1-\alpha )\) and we have used the identity

$$\begin{aligned} \int _0^t (t-s)^{a-1}s^{b-1}ds=\frac{\varGamma (a)\varGamma (b)}{\varGamma (a+b)}t^{a+b-1},\ \ a,b>0, t>0. \end{aligned}$$

For any \(x\in {\mathcal {B}}_r^\gamma \), let \(y(t)=t^{1-\gamma }x(t)\); then \(y\in {\mathcal {B}}_r\). Since \(x_0\in D(E)\), for the case \(t=0\) we clearly have

$$\begin{aligned} {\mathbf {E}} \Vert ({\mathcal {S}}_1y)(0)\Vert ^2\le & {} \frac{ \varrho ^2}{(\varGamma (\gamma ))^2} {\mathbf {E}}\Vert Ex_0\Vert ^2 \le \frac{ 2 M\varrho ^2}{(\varGamma (\gamma )) ^2} \Vert x_0\Vert _{D(E)}^2, \end{aligned}$$

where \(\varrho =\Vert E^{-1}\Vert _{{\mathcal {L}}(X)}\). For \(t>0\), it follows that

$$\begin{aligned} {\mathbf {E}}\Vert ({\mathcal {S}}_1y)(t)\Vert ^2= & {} {\mathbf {E}}\Vert t^{1-\gamma }({\mathcal {Q}}_1x)(t)\Vert ^2 = {\mathbf {E}}\Vert t^{1-\gamma }E^{-1}{\mathcal {P}}_{\alpha ,E}^\beta (t)Ex_0\Vert ^2\\\le & {} \frac{ 2 M ^2\varrho ^2}{(\varGamma (\gamma )) ^2} {\mathbf {E}}\Vert Ex_0\Vert ^2. \end{aligned}$$

Therefore, we have

$$\begin{aligned} \Vert {\mathcal {S}}_1y \Vert _{{\mathcal {C}}}^2\le&\frac{ 2 M ^2\varrho ^2}{(\varGamma (\gamma )) ^2} \Vert x_0\Vert _{D(E)}^2. \end{aligned}$$

By applying the Hölder inequality, the stochastic Fubini theorem and the definition of the solution operators, for \(t>0\) we have

$$\begin{aligned}&{\mathbf {E}}\left\| t^{1-\gamma } \int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)f(s,x(s))ds\right\| ^2 \\&\quad \le t^{2(1-\gamma )} \int _0^t(t-s)^{2(\alpha -1)} {\mathbf {E}}\left\| E^{-1}{\mathcal {S}}_{\alpha ,E}(t-s)f(s,x(s))\right\| ^2 ds \\&\quad \le \frac{M ^2\varrho ^2}{ (\varGamma (\alpha )) ^2}t^{ 2(1-\gamma ) } \int _0^t(t-s)^{2(\alpha -1)}h(s)(1+ {\mathbf {E}}\Vert x(s) \Vert ^2 ) ds\\&\quad \le \frac{M ^2\varrho ^2}{ (2\alpha -1)(\varGamma (\alpha )) ^2}t^{2(1-\gamma ) +2\alpha -1}\Vert h\Vert _\infty \\&\qquad + \frac{M ^2\varrho ^2 r^2}{ (\varGamma (\alpha )) ^2}t^{2(1-\gamma ) }\Vert h\Vert _\infty \int _0^t(t-s)^{2(\alpha -1)} e^{Ls}s^{\gamma -1} ds, \end{aligned}$$

which shows from Lemma 2.4 that

$$\begin{aligned}&{\mathbf {E}}\left\| t^{1-\gamma } \int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)f(s,x(s))ds\right\| ^2 \\&\quad \le \frac{M ^2\varrho ^2}{(2\alpha -1)(\varGamma (\alpha )) ^2}t^{2(1-\gamma )+2\alpha -1}\Vert h\Vert _\infty +\frac{M ^2\varrho ^2 r^2}{ (\varGamma (\alpha )) ^2} \Vert h\Vert _\infty C_{\gamma ,2\alpha -1} t^{1 -\gamma } e^{Lt} L^{-\gamma }. \end{aligned}$$

Similarly, by Lemma 2.2 we have

$$\begin{aligned}&{\mathbf {E}}\left\| t^{1-\gamma }\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)g(s,x(s))dW(s)\right\| ^2 \\&\quad \le t^{2(1-\gamma )}\int _0^t(t-s)^{2(\alpha -1)}{\mathbf {E}} \left\| E^{-1}{\mathcal {S}}_{\alpha ,E}(t-s)g(s,x(s))\right\| _{L_0^2}^2 ds\\&\quad \le \frac{M ^2\varrho ^2}{(\varGamma (\alpha )) ^2} t^{2(1-\gamma )} \int _0^t(t-s)^{2(\alpha -1)}l(s)(1+ {\mathbf {E}}\Vert x(s) \Vert ^2 ) ds\\&\quad \le \frac{M ^2\varrho ^2}{(2 \alpha -1) (\varGamma (\alpha )) ^2}t^{2(1-\gamma )+ 2\alpha -1}\Vert l\Vert _\infty \\&\qquad + \frac{M ^2\varrho ^2 r^2}{(\varGamma (\alpha )) ^2} \Vert l\Vert _\infty t^{2(1-\gamma )} \int _0^t(t-s)^{2(\alpha -1)} e^{Ls}s^{\gamma -1} ds. \end{aligned}$$

Hence, Lemma 2.4 shows that

$$\begin{aligned}&{\mathbf {E}}\left\| t^{1-\gamma }\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)g(s,x(s))dW(s)\right\| ^2\\&\quad \le \frac{M ^2\varrho ^2}{(2 \alpha -1) (\varGamma (\alpha )) ^2}t^{2(1-\gamma )+ 2\alpha -1} \Vert l\Vert _\infty +\frac{M ^2\varrho ^2 r^2}{(\varGamma (\alpha ) ) ^2} \Vert l\Vert _\infty C_{\gamma ,2\alpha -1} t^{ 1-\gamma } e^{Lt}L^{-\gamma }. \end{aligned}$$

Combining the above arguments, we have

$$\begin{aligned} \Vert {\mathcal {S}}_2y \Vert _{{\mathcal {C}}} ^2\le & {} \sup _{t\in J}\frac{M ^2\varrho ^2}{(2 \alpha -1) (\varGamma (\alpha ) ) ^2}(\Vert h\Vert _\infty + \Vert l\Vert _\infty )t^{2(1-\gamma )+ 2\alpha -1}e^{-Lt} \\&+\sup _{t\in J}\frac{M ^2\varrho ^2r^2}{(\varGamma (\alpha ) ) ^2}(\Vert h\Vert _\infty + \Vert l\Vert _\infty ) C_{\gamma ,2\alpha -1} t^{ 1-\gamma } L^{-\gamma }. \end{aligned}$$

Since the continuous function \(n(t)=t^ae^{-Lt}\) (\(a\ge 0\), \(t\ge 0\)) attains its maximum \(n^*=\max _{t\ge 0}n(t)=a^aL^{-a}e^{-a}\) at \(t=a/L\), and \(n^*\le 5L^{-a}\) for \(0\le a\le 4\), it follows that

$$\begin{aligned} \Vert {\mathcal {S}}_2y \Vert _{{\mathcal {C}}}^2\le & {} \frac{5M ^2\varrho ^2}{(2 \alpha -1) (\varGamma (\alpha ) ) ^2}(\Vert h\Vert _\infty + \Vert l\Vert _\infty ) L^{-(2(1-\gamma )+ 2\alpha -1)} \\&+\frac{M ^2\varrho ^2r^2}{(\varGamma (\alpha ) ) ^2}(\Vert h\Vert _\infty + \Vert l\Vert _\infty ) C_{\gamma ,2\alpha -1} T^{ 1-\gamma } L^{-\gamma }. \end{aligned}$$
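
The elementary facts about \(n(t)=t^ae^{-Lt}\) invoked above can be confirmed numerically (the grid and the value of L below are arbitrary):

```python
import numpy as np

# Check of the elementary facts used above: n(t) = t^a e^{-Lt} is maximized at
# t = a/L with n* = a^a L^{-a} e^{-a}, and a^a e^{-a} <= 5 for 0 <= a <= 4.
L = 2.0                                   # arbitrary choice of L > 0
t = np.linspace(1e-9, 50.0, 20001)
for a in np.linspace(0.0, 4.0, 81):
    n_grid = (t ** a * np.exp(-L * t)).max()
    n_star = a ** a * L ** (-a) * np.exp(-a)
    assert n_grid <= n_star + 1e-12       # the grid maximum never exceeds n*
    assert a ** a * np.exp(-a) <= 5.0     # hence n* <= 5 L^{-a} on [0, 4]
print("elementary bounds verified")
```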

Hence, we obtain

$$\begin{aligned} \Vert {\mathcal {S}} y \Vert _{{\mathcal {C}}}^2 \le \rho ^2+\eta ^2 r^2, \end{aligned}$$

where we have chosen \(L> 0\) large enough such that

$$\begin{aligned} \eta ^2:= & {} \frac{M ^2\varrho ^2 }{(\varGamma (\alpha ) ) ^2}(\Vert h\Vert _\infty + \Vert l\Vert _\infty ) C_{\gamma ,2\alpha -1} T^{ 1-\gamma } L^{-\gamma }<1, \end{aligned}$$

and we set

$$\begin{aligned} \rho ^2:= & {} \frac{ 2 M ^2\varrho ^2}{(\varGamma (\gamma )) ^2} \Vert x_0\Vert _{D(E)}^2+\frac{5M ^2\varrho ^2}{(2 \alpha -1) (\varGamma (\alpha ) ) ^2}(\Vert h\Vert _\infty + \Vert l\Vert _\infty ) L^{-(2(1-\gamma )+ 2\alpha -1)}. \end{aligned}$$

Consequently, for any \(r\ge \rho /(1-\eta )\), the elementary inequality \(\sqrt{a^2+b^2}\le a+b\) for \(a,b\ge 0\) gives \(\Vert {\mathcal {S}} y\Vert _{{\mathcal {C}}} \le \rho +\eta r\le r\). Hence, \({\mathcal {S}} ({\mathcal {B}}_r)\subset {\mathcal {B}}_r\) for some \(r>0\). The proof is completed. \(\square \)

Lemma 3.2

Let \(\alpha \in (1/2,1)\). Assume that (Hf) and (Hg) hold. Then the set \(\{\mathcal Sy:y\in {\mathcal {B}}_r\}\) is equicontinuous on J.

Proof

For any \(x\in {\mathcal {B}}_r^\gamma \), let \(x(t)=t^{\gamma -1}y(t)\); then \(y\in {\mathcal {B}}_r\), where \(r>0\) is given in Lemma 3.1. We prove this lemma in two steps.

Step 1. The set \(\{{\mathcal {S}}_1 y: y\in {\mathcal {B}}_r\}\) is equicontinuous. Since \(\lim _{t\rightarrow 0}{\mathcal {S}}_{\alpha ,E}(t)x=x/\varGamma (\alpha )\) for any \(x\in X\), we know that

$$\begin{aligned} \lim _{t\rightarrow 0}t^{1-\gamma }E^{-1}{\mathcal {P}}_{\alpha ,E}^\beta (t)Ex_0= & {} \lim _{t\rightarrow 0}t^{1-\gamma } E^{-1}\int _0^{t} g_{\beta (1-\alpha )}(t-s) s^{\alpha -1}{\mathcal {S}}_{\alpha ,E}(s)Ex_0 ds\\= & {} \lim _{t\rightarrow 0}\int _0^{1} g_{\beta (1-\alpha )}(1-s) s^{\alpha -1}E^{-1}{\mathcal {S}}_{\alpha ,E}(ts)Ex_0 ds\\= & {} \frac{x_0}{\varGamma (\gamma )}. \end{aligned}$$

For \(t_1=0\), \(0<t_2\le T\), it follows that

$$\begin{aligned}&{\mathbf {E}}\Vert e^{-Lt_2/2}({\mathcal {S}}_1y)(t_2) -({\mathcal {S}} _1y)(0)\Vert ^2 = {\mathbf {E}}\left\| e^{-Lt_2/2}t_2^{1-\gamma }({\mathcal {Q}}_1x)(t_2) -\frac{x_0}{\varGamma (\gamma )} \right\| ^2 \\&\quad = {\mathbf {E}}\left\| e^{-Lt_2/2}t_2^{1-\gamma }E^{-1}{\mathcal {P}}_{\alpha ,E}^\beta (t_2)Ex_0-\frac{x_0}{\varGamma (\gamma )}\right\| ^2\\&\quad \rightarrow 0,\quad {\text {as}}\; t_2\rightarrow 0. \end{aligned}$$

For \(0<t_1<t_2\le T\), we have

$$\begin{aligned}&{\mathbf {E}}\Vert e^{-Lt_2/2}({\mathcal {S}}_1y)(t_2) -e^{-Lt_1/2}({\mathcal {S}}_1y)(t_1)\Vert ^2\nonumber \\&\quad \le 2{\mathbf {E}}\Vert (e^{-Lt_2/2}-e^{-Lt_1/2})({\mathcal {S}}_1y)(t_2)\Vert ^2 +2e^{-Lt_1 }{\mathbf {E}}\Vert ({\mathcal {S}}_1y)(t_2) - ({\mathcal {S}}_1y)(t_1)\Vert ^2, \end{aligned}$$
(8)

which, by the continuity of \(e^{-at}\) for \(a>0\) and the proof of Lemma 3.1, shows that

$$\begin{aligned}&{\mathbf {E}}\Vert (e^{-Lt_2/2}-e^{-Lt_1/2})({\mathcal {S}}_1y)(t_2)\Vert ^2 =(e^{-Lt_2/2}-e^{-Lt_1/2})^2 {\mathbf {E}}\Vert ({\mathcal {S}}_1y)(t_2)\Vert ^2 \\&\quad \le (e^{-Lt_2/2}-e^{-Lt_1/2})^2 \frac{ 2 M ^2\varrho ^2}{(\varGamma (\gamma )) ^2} \Vert x_0\Vert _{D(E)}^2 \\&\quad \rightarrow 0,\quad {\text {as}}\; t_2\rightarrow t_1. \end{aligned}$$

Similarly, from the definition of the operator \({\mathcal {S}}_1\), we have

$$\begin{aligned} (t_2^{1-\gamma }-t_1^{1-\gamma })^2{\mathbf {E}}\Vert ({\mathcal {Q}}_1x)(t_2)\Vert ^2 \rightarrow 0,\quad {\text {as}}\; t_2\rightarrow t_1. \end{aligned}$$

This means that we just need to prove

$$\begin{aligned} t_1^{2(1-\gamma )}{\mathbf {E}}\Vert ({\mathcal {Q}}_1x)(t_2) - ({\mathcal {Q}}_1x)(t_1) \Vert ^2 \rightarrow 0,\quad {\text {as}}\; t_2\rightarrow t_1. \end{aligned}$$

In fact, it is clear from Lemma 2.3 (iii) that

$$\begin{aligned} {\mathbf {E}}\Vert ({\mathcal {Q}}_1x)(t_2) - ({\mathcal {Q}}_1x)(t_1) \Vert ^2= & {} {\mathbf {E}}\Vert E^{-1}{\mathcal {P}}_{\alpha ,E}^\beta (t_2)Ex_0- E^{-1}{\mathcal {P}}_{\alpha ,E}^\beta (t_1)Ex_0\Vert ^2\\\rightarrow & {} 0,\quad {\text {as}} \ t_2\rightarrow t_1. \end{aligned}$$

Hence, this means that \(\{{\mathcal {S}}_1 y: y\in {\mathcal {B}}_r\}\) is equicontinuous.

Step 2. The set \(\{{\mathcal {S}}_2 y: y\in {\mathcal {B}}_r\}\) is equicontinuous.

For \(t_1=0,~0 <t_2\le T\), from the proof in Lemma 3.1, we obtain

$$\begin{aligned} {\mathbf {E}}\Vert e^{-Lt_2/2}({\mathcal {S}}_2y)(t_2) \Vert ^2= & {} {\mathbf {E}}\Vert e^{-Lt_2/2}t_2^{1-\gamma }({\mathcal {Q}}_2x)(t_2) \Vert ^2 \\\le & {} \frac{M ^2\varrho ^2}{(2 \alpha -1) (\varGamma (\alpha ) ) ^2}(\Vert h\Vert _\infty + \Vert l\Vert _\infty )t_2^{2(1-\gamma )+ 2\alpha -1}e^{-Lt_2} \\&+\,\frac{M ^2\varrho ^2r^2}{(\varGamma (\alpha ) ) ^2}(\Vert h\Vert _\infty + \Vert l\Vert _\infty ) C_{\gamma ,2\alpha -1} t_2^{ 1-\gamma } L^{-\gamma } \rightarrow 0,\\&\quad {\text {as}}\; t_2\rightarrow 0. \end{aligned}$$

Arguing as in (8), for \(0< t_1<t_2\le T\) it remains to check

$$\begin{aligned} {\mathbf {E}}\Vert ({\mathcal {Q}}_2x)(t_2) - ({\mathcal {Q}}_2x)(t_1) \Vert ^2 \rightarrow 0,\quad {\text {as}}\; t_2\rightarrow t_1. \end{aligned}$$

In fact, by the Hölder inequality and Lemma 2.2, we have

$$\begin{aligned}&{\mathbf {E}}\Vert ({\mathcal {Q}}_2x)(t_2) - ({\mathcal {Q}}_2x)(t_1)\Vert ^2 \\&\qquad \le 4\sqrt{t_2}\int _{t_1}^{t_2}{\mathbf {E}} \Vert E^{-1}{\mathcal {T}}_{\alpha ,E}(t_2-s)f(s,x(s))\Vert ^2 ds\\&\qquad \quad +\, 4\sqrt{t_1}\int _{0}^{t_1}{\mathbf {E}} \Vert E^{-1}({\mathcal {T}}_{\alpha ,E}(t_2-s)-{\mathcal {T}}_{\alpha ,E}(t_1-s))f(s,x(s))\Vert ^2 ds\\&\qquad \quad +\, 4\sqrt{t_2}\int _{t_1}^{t_2}{\mathbf {E}} \Vert E^{-1}{\mathcal {T}}_{\alpha ,E}(t_2-s)g(s,x(s))\Vert ^2_{L_0^2}ds\\&\qquad \quad +\, 4\sqrt{t_1}\int _{0}^{t_1}{\mathbf {E}} \Vert E^{-1}({\mathcal {T}}_{\alpha ,E}(t_2-s)-{\mathcal {T}}_{\alpha ,E}(t_1-s))g(s,x(s))\Vert ^2_{L_0^2}ds, \end{aligned}$$

which shows that for any \(\varepsilon \in (0,t_1)\)

$$\begin{aligned}&{\mathbf {E}}\Vert ({\mathcal {Q}}_2x)(t_2) - ({\mathcal {Q}}_2x)(t_1)\Vert ^2 \\&\quad \le \frac{4 M^2\varrho ^2\sqrt{t_2} }{(\varGamma (\alpha ))^2}\int _{t_1}^{t_2} (t_2-s)^{2(\alpha -1)}h(s)(1+{\mathbf {E}} \Vert x(s)\Vert ^2 )ds\\&\qquad +4\varrho ^2\sqrt{t_1} \int _{0}^{t_1-\varepsilon }h(s)(1+{\mathbf {E}} \Vert x(s)\Vert ^2 )ds \\&\qquad \quad \times \sup _{s\in [0,t_1-\varepsilon ]}\Vert {\mathcal {T}}_{\alpha ,E}(t_2-s)-{\mathcal {T}}_{\alpha ,E}(t_1-s)\Vert _{{\mathcal {L}}(X)}\\&\qquad +\frac{8 M^2\varrho ^2\sqrt{t_1} }{(\varGamma (\alpha ))^2}\int _ {t_1-\varepsilon }^{t_1}(t_1-s)^{2(\alpha -1)}h(s)(1+{\mathbf {E}} \Vert x(s)\Vert ^2 ) ds\\&\qquad + \frac{4 M^2\varrho ^2\sqrt{t_2} }{(\varGamma (\alpha ))^2}\int _{t_1}^{t_2} (t_2-s)^{2(\alpha -1)}l(s)(1+{\mathbf {E}} \Vert x(s)\Vert ^2 )ds\\&\qquad +4\varrho ^2\sqrt{t_1} \int _{0}^{t_1-\varepsilon }l(s)(1+{\mathbf {E}} \Vert x(s)\Vert ^2 )ds \\&\qquad \quad \times \sup _{s\in [0,t_1-\varepsilon ]}\Vert {\mathcal {T}}_{\alpha ,E}(t_2-s)-{\mathcal {T}}_{\alpha ,E}(t_1-s)\Vert _{{\mathcal {L}}(X)}\\&\qquad +\frac{8 M^2\varrho ^2\sqrt{t_1} }{(\varGamma (\alpha ))^2}\int _ {t_1-\varepsilon }^{t_1}(t_1-s)^{2(\alpha -1)}l(s)(1+{\mathbf {E}} \Vert x(s)\Vert ^2 ) ds. \end{aligned}$$

Proceeding as in Lemma 3.1 and repeating the proof of Step 1, we obtain

$$\begin{aligned}&{\mathbf {E}}\Vert ({\mathcal {Q}}_2x)(t_2) - ({\mathcal {Q}}_2x)(t_1)\Vert ^2 \\&\quad \le \frac{4 M^2\varrho ^2\sqrt{t_2} }{(2\alpha -1)(\varGamma (\alpha ))^2}(t_2-t_1)^{2\alpha -1}(\Vert h\Vert _\infty +\Vert l\Vert _\infty ) (1+r^2e^{Lt_2}t_1^{\gamma -1})\\&\qquad +4\varrho ^2\sqrt{t_1}(\Vert h\Vert _\infty +\Vert l\Vert _\infty )(t_1+ r^2 e^{Lt_1}t_1^\gamma /\gamma )\\&\qquad \quad \times \sup _{s\in [0,t_1-\varepsilon ]}\Vert {\mathcal {T}}_{\alpha ,E}(t_2-s)-{\mathcal {T}}_{\alpha ,E}(t_1-s)\Vert _{{\mathcal {L}}(X)}\\&\qquad +\frac{8 M^2\varrho ^2\sqrt{t_1} }{(2\alpha -1)(\varGamma (\alpha ))^2}(\Vert h\Vert _\infty +\Vert l\Vert _\infty ) \varepsilon ^{2\alpha -1}(1+r^2e^{Lt_2}(t_1-\varepsilon )^{\gamma -1})\\&\quad \rightarrow 0,\quad {\text {as}}\; t_2\rightarrow t_1,\ \varepsilon \rightarrow 0. \end{aligned}$$

Therefore, the set \(\{{\mathcal {S}}_2y:y\in {\mathcal {B}}_r\}\) is equicontinuous. Combining the two steps above, we conclude that the set \(\{\mathcal Sy:y\in {\mathcal {B}}_r\}\) is equicontinuous. This completes the proof. \(\square \)

Lemma 3.3

Let \(\alpha \in (1/2,1)\). Assume that (Hf) and (Hg) hold. Then \({\mathcal {S}}_2\) is a completely continuous operator.

Proof

We divide this proof into two steps.

Step 1. \({\mathcal {S}}_2\) is a continuous operator.

The case \(t=0\) is trivial. For \(t\in (0,T]\), let \(x_n,x\in {\mathcal {B}}_r^\gamma \) with \(\lim _{n\rightarrow \infty }x_n=x\), where \(r>0\) is given in Lemma 3.1. Let \(x(t)=t^{\gamma -1}y(t)\) and \(x_n(t)=t^{\gamma -1}y_n(t)\), \(n=1,2,\ldots \); then \(y,y_n\in {\mathcal {B}}_r\).

Hence, it follows from the hypotheses on f and g that

$$\begin{aligned}f(t,x_n(t)) \rightarrow f(t,x(t)),\quad g(t,x_n(t)) \rightarrow g(t,x(t)),\quad {\text {as}}\; n\rightarrow \infty ,\, {\text {a.e.}}\, t\in J.\end{aligned}$$

In addition, by the proof of Lemma 3.1, we know that

$$\begin{aligned} (t-s)^{2(\alpha -1)}{\mathbf {E}}\Vert f(s,x_n(s))-f(s,x(s))\Vert ^2 \le 4\Vert h\Vert _\infty (t-s)^{2(\alpha -1)}(1+ r^2e^{Ls}s^{\gamma -1}), \end{aligned}$$

and

$$\begin{aligned} (t-s)^{2(\alpha -1)}{\mathbf {E}}\Vert g(s,x_n(s))-g(s,x(s))\Vert _{L_0^2}^2 \le 4\Vert l\Vert _\infty (t-s)^{2(\alpha -1)}(1+ r^2e^{Ls}s^{\gamma -1}), \end{aligned}$$

are integrable with respect to s on (0, t) for each \(t\in J'\). Therefore, the dominated convergence theorem and Lemma 2.2 yield that

$$\begin{aligned}&{\mathbf {E}}\big \Vert ({\mathcal {S}}_2 y_n )(t)-({\mathcal {S}}_2 y)(t) \big \Vert ^2 \\&\quad \le 2t^{2(1-\gamma )}{\mathbf {E}}\left\| \int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s) (f(s,x_n(s))-f(s,x(s)))ds\right\| ^2 \\&\qquad +\, 2t^{2(1-\gamma )}{\mathbf {E}}\left\| \int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s) (g(s,x_n(s))-g(s,x(s)))dW(s)\right\| ^2 \\&\quad \le \frac{2\varrho ^2 M^2}{ (\varGamma (\alpha ))^2}t^{2(1-\gamma ) } \int _0^t(t-s)^{2(\alpha -1)}{\mathbf {E}}\Vert f(s,x_n(s))-f(s,x(s))\Vert ^2ds\\&\qquad +\frac{2\varrho ^2 M^2}{ (\varGamma (\alpha ))^2}t^{2(1-\gamma ) } \int _0^t(t-s)^{2(\alpha -1)}{\mathbf {E}}\Vert g(s,x_n(s))-g(s,x(s))\Vert _{L_0^2}^2ds \rightarrow 0,\\&\quad \quad {\text {as}}\; n\rightarrow \infty , \end{aligned}$$

which shows that \({\mathbf {E}}\Vert ({\mathcal {S}}_2 y_n)(t) -({\mathcal {S}}_2 y)(t)\Vert ^2\rightarrow 0 \) pointwise on \(J'\), and hence on J, as \(n\rightarrow \infty \). From Step 2 in Lemma 3.2, the convergence is uniform on J, i.e., \(\Vert {\mathcal {S}}_2 y_n -{\mathcal {S}}_2 y\Vert _{{\mathcal {C}}}\rightarrow 0 \) as \(n\rightarrow \infty \). Thus, \({\mathcal {S}}_2\) is a continuous operator.
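For completeness, the integrability used in the dominated convergence step can be made explicit via the Beta integral (a standard computation, recorded here for the reader; \(B(\cdot ,\cdot )\) denotes the Beta function): since \(2\alpha -1>0\) and \(\gamma >0\),

$$\begin{aligned} \int _0^t(t-s)^{2(\alpha -1)}s^{\gamma -1}ds=B(2\alpha -1,\gamma )\,t^{2(\alpha -1)+\gamma }<\infty ,\quad t\in J'. \end{aligned}$$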

Step 2. \({\mathcal {S}}_2\) is a compact operator.

The case \(t=0\) is trivial. Now, let \(t\in J'\) be fixed; for any \(\varepsilon \in (0,t)\) and \(\delta >0\), define the operator \({\mathcal {S}}_{\varepsilon ,\delta }\) by

$$\begin{aligned}&({\mathcal {S}}_{\varepsilon ,\delta } y )(t) = t^{1-\gamma }({\mathcal {Q}}_{\varepsilon ,\delta } x )(t)\\&\quad = t^{1-\gamma }\int _0^{t-\varepsilon }\int _\delta ^\infty E^{-1}(t-s)^{\alpha -1}\alpha \theta M_\alpha (\theta )T_E((t-s)^\alpha \theta ) f(s,x(s)) d\theta ds\\&\qquad +t^{1-\gamma }\int _0^{t-\varepsilon }\int _\delta ^\infty E^{-1}(t-s)^{\alpha -1}\alpha \theta M_\alpha (\theta )T_E((t-s)^\alpha \theta ) g(s,x(s)) d\theta dW(s). \end{aligned}$$

Therefore, it follows that

$$\begin{aligned} ({\mathcal {S}}_{\varepsilon ,\delta } y )(t)= & {} T_E( \varepsilon ^\alpha \delta ) {\mathcal {S}}_{fg}(t) \\ \quad= & {} t^{1-\gamma }T_E( \varepsilon ^\alpha \delta )\\&\qquad \quad \times \int _0^{t-\varepsilon }\int _\delta ^\infty E^{-1}(t-s)^{\alpha -1}\alpha \theta M_\alpha (\theta )T_E((t-s)^\alpha \theta -\varepsilon ^\alpha \delta ) f(s,x(s)) d\theta ds\\&\quad +t^{1-\gamma }T_E( \varepsilon ^\alpha \delta )\\&\qquad \quad \times \int _0^{t-\varepsilon }\int _\delta ^\infty E^{-1}(t-s)^{\alpha -1}\alpha \theta M_\alpha (\theta )T_E((t-s)^\alpha \theta -\varepsilon ^\alpha \delta ) g(s,x(s)) d\theta dW(s). \end{aligned}$$

From Lemma 3.1, we know that \({\mathcal {S}}_{fg}(t)\) is bounded in \(L_2({\mathbf {D}},X)\) for all \(t\in J'\); hence, from the compactness of \(T_E( \varepsilon ^\alpha \delta )\) for each \(\varepsilon >0\), \(\delta >0\), the set \(\{({\mathcal {S}}_{\varepsilon ,\delta } y)(t): y\in {\mathcal {B}}_r \}\) is relatively compact in \(L_2({\mathbf {D}},X)\) for every \(t\in J\). On the other hand, we know

$$\begin{aligned}&({\mathcal {S}}_{\varepsilon ,\delta } y )(t)-({\mathcal {S}}_2 y)(t)\\&\quad =t^{1-\gamma }\int _ {t-\varepsilon }^t\int _\delta ^\infty E^{-1}(t-s)^{\alpha -1}\alpha \theta M_\alpha (\theta )T_E((t-s)^\alpha \theta ) f(s,x(s)) d\theta ds\\&\qquad +t^{1-\gamma }\int _0^{t-\varepsilon }\int _0^\delta E^{-1}(t-s)^{\alpha -1}\alpha \theta M_\alpha (\theta )T_E((t-s)^\alpha \theta ) f(s,x(s)) d\theta ds\\&\qquad +t^{1-\gamma }\int _ {t-\varepsilon }^t\int _\delta ^\infty E^{-1}(t-s)^{\alpha -1}\alpha \theta M_\alpha (\theta )T_E((t-s)^\alpha \theta ) g(s,x(s)) d\theta dW(s)\\&\qquad +t^{1-\gamma }\int _0^{t-\varepsilon }\int _0^\delta E^{-1}(t-s)^{\alpha -1}\alpha \theta M_\alpha (\theta )T_E((t-s)^\alpha \theta ) g(s,x(s)) d\theta dW(s). \end{aligned}$$

Let \(M(\delta )=\int _0^\delta \alpha \theta M_\alpha (\theta )d\theta \); clearly, \(M(\delta )\rightarrow 0 \) as \(\delta \rightarrow 0\). Hence, from the assumption (Hf), we have

$$\begin{aligned}&t^{2(1-\gamma )}{\mathbf {E}}\left\| \int _ {t-\varepsilon }^t\int _\delta ^\infty E^{-1}(t-s)^{\alpha -1}\alpha \theta M_\alpha (\theta )T_E((t-s)^\alpha \theta ) f(s,x(s)) d\theta ds\right\| ^2\\&\qquad +t^{2(1-\gamma )}{\mathbf {E}}\left\| \int _0^{t-\varepsilon }\int _0^\delta E^{-1}(t-s)^{\alpha -1}\alpha \theta M_\alpha (\theta )T_E((t-s)^\alpha \theta ) f(s,x(s)) d\theta ds \right\| ^2\\&\quad \le \left( \frac{M\varrho }{\varGamma (\alpha )}\Vert h\Vert _\infty \right) ^2t^{2(1-\gamma )}\int _ {t-\varepsilon }^t (t-s)^{2(\alpha -1)} (1+e^{Ls}s^{\gamma -1}r^2) ds\\&\qquad +\left( M\varrho \Vert h\Vert _\infty M(\delta )\right) ^2t^{2(1-\gamma )}\int _0^{t-\varepsilon } (t-s)^{2(\alpha -1)} (1+e^{Ls}s^{\gamma -1}r^2) ds \\&\quad \le \left( \frac{M\varrho }{\varGamma (\alpha )}\Vert h\Vert _\infty \right) ^2t^{2(1-\gamma )} \varepsilon ^{2 \alpha -1 } (1+e^{Lt}(t-\varepsilon )^{\gamma -1}r^2)/(2\alpha -1)\\&\qquad +\left( M\varrho \Vert h\Vert _\infty M(\delta )\right) ^2(t^{2(1-\gamma )+2 \alpha - 1}/(2\alpha -1)+e^{Lt}t^{ 1-\gamma }C_{\gamma ,2\alpha -1}L^{-\gamma } r^2) \\&\quad \rightarrow 0,\quad {\text {as}}\; \varepsilon \rightarrow 0,\ \ \delta \rightarrow 0. \end{aligned}$$
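The convergence \(M(\delta )\rightarrow 0\) used above rests on a standard moment identity for the Wright-type function \(M_\alpha \) (we record it here for the reader; it is not stated explicitly above): \(M_\alpha (\theta )\ge 0\) and

$$\begin{aligned} \int _0^\infty \theta M_\alpha (\theta )d\theta =\frac{1}{\varGamma (1+\alpha )}, \end{aligned}$$

so that \(M(\delta )=\int _0^\delta \alpha \theta M_\alpha (\theta )d\theta \le \alpha /\varGamma (1+\alpha )=1/\varGamma (\alpha )\), and \(M(\delta )\rightarrow 0\) as \(\delta \rightarrow 0\) by the dominated convergence theorem.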

Similarly, from the assumption (Hg) and Lemma 2.2, we get

$$\begin{aligned}&t^{2(1-\gamma )}{\mathbf {E}}\left\| \int _ {t-\varepsilon }^t\int _\delta ^\infty E^{-1}(t-s)^{\alpha -1}\alpha \theta M_\alpha (\theta )T_E((t-s)^\alpha \theta ) g(s,x(s)) d\theta dW(s)\right\| ^2\\&\qquad +t^{2(1-\gamma )}{\mathbf {E}}\left\| \int _0^{t-\varepsilon }\int _0^\delta E^{-1}(t-s)^{\alpha -1}\alpha \theta M_\alpha (\theta )T_E((t-s)^\alpha \theta ) g(s,x(s)) d\theta dW(s)\right\| ^2\\&\quad \rightarrow 0,\quad {\text {as}}\; \varepsilon \rightarrow 0,\ \ \delta \rightarrow 0. \end{aligned}$$

Consequently, we deduce that

$$\begin{aligned} \Vert {\mathcal {S}}_{\varepsilon ,\delta } y -{\mathcal {S}}_2 y \Vert _{{\mathcal {C}}} \rightarrow 0,\quad {\text {as}}\; \varepsilon \rightarrow 0,\quad \delta \rightarrow 0. \end{aligned}$$

This means that \(\{{\mathcal {S}}_2 y: y\in {\mathcal {B}}_r \}\) is a relatively compact set in \({\mathcal {C}}(J,X)\). Hence, \({\mathcal {S}}_2\) is a compact operator. This completes the proof. \(\square \)

We next use the following Krasnoselskii fixed point theorem (see, e.g., [32]) to obtain the existence of solutions to problem (4)–(5).

Lemma 3.4

Let X be a Banach space, let \(\varOmega \) be a bounded closed convex subset of X, and let T, F be mappings of \(\varOmega \) into X such that \(Tx+Fy\in \varOmega \) for every pair \(x,y\in \varOmega \). If T is a contraction and F is completely continuous, then the equation \(Tz+Fz = z\) has a solution on \(\varOmega \).
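Krasnoselskii's theorem can be illustrated on a hypothetical scalar instance; all concrete choices below (\(\varOmega =[-1,1]\), \(Tx=x/2+0.1\), \(Fy=\sin (y)/4\)) are ours, chosen only so that the hypotheses are easy to verify:

```python
import math

# Illustrative scalar instance of Krasnoselskii's theorem (hypothetical
# choices): T is a contraction with constant 1/2, F is continuous (hence
# completely continuous in one dimension), and for x, y in Omega = [-1, 1],
# |T x + F y| <= 0.6 + 0.25 < 1, so T x + F y stays in Omega.
T = lambda x: x / 2 + 0.1
F = lambda y: math.sin(y) / 4

z = 0.0
for _ in range(200):          # successive approximation z <- T z + F z
    z = T(z) + F(z)

assert -1.0 <= z <= 1.0                   # the fixed point lies in Omega
assert abs(z - (T(z) + F(z))) < 1e-12     # z solves z = T z + F z
```

Here the combined map happens to be a contraction as well, so plain successive approximation already locates the fixed point guaranteed by the theorem.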

Lemma 3.5

Let \(\alpha \in (1/2,1)\) and assume that (Hf) and (Hg) hold. Then there exists at least one mild solution \(x\in {\mathcal {B}}_r^\gamma \) of the problem (4)–(5) for some \(r>0\).

Proof

Obviously, from Lemma 3.1, it follows that for every pair \(u,v\in {\mathcal {B}}_r\), \({\mathcal {S}}_1u+{\mathcal {S}}_2v\in {\mathcal {B}}_r\). Lemma 3.3 shows that \({\mathcal {S}}_2\) is completely continuous. Hence, it remains to prove that \({\mathcal {S}}_1\) is a contraction, which clearly holds for \(t\in J\). Hence, Lemma 3.4 yields at least one fixed point \(y^*\in {\mathcal {B}}_r\) with \({\mathcal {S}}y^*=y^*\). Let \(x^*(t)=t^{\gamma -1}y^*(t)\); then \(x^*\in {\mathcal {B}}_r^\gamma \), and \(x^*\) is a mild solution of problem (4)–(5). The proof is completed. \(\square \)

Remark 3.1

Note that in [18] the author established an existence result for fractional stochastic evolution equations in the case where B(t) reduces to a constant operator B (independent of t) and \(\beta =0\) in the Hilfer fractional derivative, under more general nonlinear functions f and g. For \(\beta =1\), however, the Hilfer fractional derivative reduces to the Caputo fractional derivative, and the result in Lemma 3.5 appears to be new even without the control term.

4 Existence of Optimal Controls

In this section, we suppose that U is another Hilbert space in which the control u takes its values. We denote the class of nonempty closed and convex subsets of U by \(W_f(U)\). The multifunction \(\omega : J\rightarrow W_f (U)\) is measurable and \(\omega (\cdot )\subset V\), where V is a bounded subset of U; the admissible control set is \(U_{ad}=\{u\in L^2_{{\mathcal {F}}}(J,V) :\ u(t)\in \omega (t)\; {\text {a.e.}}\; t\in J\}\). Then, \(U_{ad}\ne \emptyset \).

We consider the stochastic optimal control problem associated with (1)–(2) and use the following definition of mild solutions.

Definition 4.1

For any \(u\in U_{ad}\), a function \(x\in {\mathcal {C}}(J, X)\) is called a mild solution of the problem (1)–(2) if it satisfies the integral equation

$$\begin{aligned} x(t)= & {} E^{-1}{\mathcal {P}}_{\alpha ,E}^\beta (t)Ex_0+\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)f(s,x(s))ds\\&+\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)B(s)u(s)ds+\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)g(s,x(s))dW(s). \end{aligned}$$

In order to obtain the existence of mild solutions to problem (1)–(2), we need the following assumption on the operator \(B(\cdot )\).

  1. (Hb)

    \(B : J\rightarrow {\mathcal {L}}(U,X)\) is essentially bounded, i.e., \(B\in L^\infty (J, {\mathcal {L}}(U, X))\).

Theorem 4.1

Let \(\alpha \in (1/2,1)\) and assume that (Hf), (Hg) and (Hb) hold. Then, for every \(u\in U_{ad}\), there exists at least one mild solution \(x\in {\mathcal {B}}_r^\gamma \) of the problem (1)–(2) for some \(r>0\).

Proof

We set

$$\begin{aligned} (\widehat{{\mathcal {Q}}}_1x)(t)=E^{-1}{\mathcal {P}}_{\alpha ,E}^\beta (t)Ex_0+\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)B(s)u(s)ds, \end{aligned}$$

which replaces \({\mathcal {Q}}_1\) in (6). Therefore, it suffices to prove that \({\mathcal {S}}_1 y \in {\mathcal {B}}_r\) for any \(y\in {\mathcal {B}}_r\) with some \(r>0\). In fact, since \(x_0\in D(E)\), for \(t>0\), applying the Hölder inequality yields

$$\begin{aligned}&{\mathbf {E}}\Vert ({\mathcal {S}}_1y)(t)\Vert ^2={\mathbf {E}}\Vert t^{1-\gamma }(\widehat{{\mathcal {Q}}}_1x)(t)\Vert ^2\\&\quad \le 2{\mathbf {E}}\Vert t^{1-\gamma }E^{-1}{\mathcal {P}}_{\alpha ,E}^\beta (t)Ex_0\Vert ^2\\&\qquad +2t^{2(1-\gamma )}{\mathbf {E}}\left( \int _0^t\Vert E^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)B(s)u(s)\Vert ds\right) ^2\\&\quad \le 2{\mathbf {E}}\Vert t^{1-\gamma }E^{-1}{\mathcal {P}}_{\alpha ,E}^\beta (t)Ex_0\Vert ^2\\&\qquad +2t^{2(1-\gamma )} {\mathbf {E}}\left( \int _0^t(t-s)^{\alpha -1}\Vert E^{-1}{\mathcal {S}}_{\alpha ,E}(t-s)B(s)u(s)\Vert ds\right) ^2 \\&\quad \le \frac{ 2 M ^2\varrho ^2}{(\varGamma (\gamma )) ^2}{\mathbf {E}}\Vert Ex_0\Vert ^2+ \frac{ 2 M ^2\varrho ^2}{ (\varGamma (\alpha )) ^2} t^{2(1-\gamma ) }{\mathbf {E}}\left( \int _0^t (t-s)^{\alpha -1} \Vert u(s)\Vert ds\right) ^2\Vert B\Vert _{\infty } ^2, \end{aligned}$$

where \(\Vert B\Vert _\infty \) denotes the norm of \(B(\cdot )\) in \(L^\infty (J, {\mathcal {L}}(U, X))\). Since the Hölder inequality shows

$$\begin{aligned} {\mathbf {E}}\left( \int _0^t (t-s)^{\alpha -1} \Vert u(s)\Vert ds\right) ^2\le & {} \int _0^t (t-s)^{2(\alpha -1)}ds \Vert u\Vert _{L^2_{{\mathcal {F}}}(J,U)}^2 \\\le & {} \frac{ 1}{ 2 \alpha -1 } t^{ 2 \alpha -1 }\Vert u\Vert _{L^2_{{\mathcal {F}}}(J,U)}^2, \end{aligned}$$

it follows that

$$\begin{aligned} {\mathbf {E}}\Vert ({\mathcal {S}}_1y)(t)\Vert ^2\le & {} \frac{ 2 M ^2\varrho ^2}{(\varGamma (\gamma )) ^2}{\mathbf {E}}\Vert Ex_0\Vert ^2\\&+ \frac{2 M ^2\varrho ^2 }{( 2 \alpha -1 )(\varGamma (\alpha )) ^2 }\Vert u\Vert _{L^2_{{\mathcal {F}}}(J,U)}^2\Vert B\Vert _{\infty } ^2 t^{2(1-\gamma )+2 \alpha -1}. \end{aligned}$$
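The closed-form value of the singular integral in the Hölder estimate can be sanity-checked numerically; the following midpoint-rule sketch is illustrative only, and the parameter values \(\alpha =0.8\), \(t=1.5\) are arbitrary picks of ours:

```python
import math

# Midpoint-rule check of the closed-form singular integral
#   int_0^t (t - s)^(2(alpha - 1)) ds = t^(2*alpha - 1) / (2*alpha - 1),
# finite precisely because 2(alpha - 1) > -1 when alpha > 1/2.
alpha, t = 0.8, 1.5   # illustrative parameters
N = 200_000
h = t / N
# substitute u = t - s and integrate u^(2*alpha - 2) over (0, t]
approx = h * sum(((i + 0.5) * h) ** (2 * alpha - 2) for i in range(N))
exact = t ** (2 * alpha - 1) / (2 * alpha - 1)
assert abs(approx - exact) < 1e-2 * exact
```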

Therefore, we have

$$\begin{aligned} \Vert {\mathcal {S}}_1y \Vert _{{\mathcal {C}}}^2\le & {} \frac{ 2 M ^2\varrho ^2}{(\varGamma (\gamma )) ^2} \Vert x_0\Vert _{D(E)}^2+ \frac{2 M ^2\varrho ^2 }{( 2 \alpha -1 )(\varGamma (\alpha )) ^2 }\Vert u\Vert _{L^2_{{\mathcal {F}}}(J,U)}^2\Vert B\Vert _{\infty } ^2 T^{2(1-\gamma )+2 \alpha -1 }. \end{aligned}$$

Clearly, for the case of \(t=0\), we have

$$\begin{aligned} \Vert {\mathcal {S}}_1y \Vert _{{\mathcal {C}}}^2\le & {} \frac{ \varrho ^2}{(\varGamma (\gamma ))^2} {\mathbf {E}}\Vert Ex_0\Vert ^2 \le \frac{ 2 M^2\varrho ^2}{(\varGamma (\gamma )) ^2} \Vert x_0\Vert _{D(E)}^2. \end{aligned}$$

As in the proof of Lemma 3.1, we choose the same \(L> 0\) so that \(\eta ^2<1\), where \(\eta \) is defined in Lemma 3.1. Let

$$\begin{aligned} {\widehat{\rho }}^2:= & {} \frac{ 2 M ^2\varrho ^2}{(\varGamma (\gamma )) ^2} \Vert x_0\Vert _{D(E)}^2+ \frac{2 M ^2\varrho ^2 }{( 2 \alpha -1 )(\varGamma (\alpha )) ^2 }\Vert u\Vert _{L^2_{{\mathcal {F}}}(J,U)}^2\Vert B\Vert _{\infty } ^2 T^{2(1-\gamma )+2 \alpha -1 }\\&+\frac{5M ^2\varrho ^2}{(2 \alpha -1) (\varGamma (\alpha ) ) ^2}(\Vert h\Vert _\infty + \Vert l\Vert _\infty ) L^{-(2(1-\gamma )+ 2\alpha -1)}. \end{aligned}$$

Therefore, for some fixed \(r\ge {\widehat{\rho }}/(1- \eta )\), we obtain

$$\begin{aligned} \Vert \mathcal Sy \Vert _{{\mathcal {C}}}^2\le & {} \frac{ 2 M ^2\varrho ^2}{(\varGamma (\gamma )) ^2} \Vert x_0\Vert _{D(E)}^2+ \frac{2 M ^2\varrho ^2 }{( 2 \alpha -1 )(\varGamma (\alpha )) ^2 }\Vert u\Vert _{L^2_{{\mathcal {F}}}(J,U)}^2\Vert B\Vert _{\infty } ^2 T^{2(1-\gamma )+2 \alpha -1 }\\&+\frac{5M ^2\varrho ^2}{(2 \alpha -1) (\varGamma (\alpha ) ) ^2}(\Vert h\Vert _\infty + \Vert l\Vert _\infty ) L^{-(2(1-\gamma )+ 2\alpha -1)}\\\le & {} {\widehat{\rho }}^2+\eta ^2 r^2, \end{aligned}$$

which means that \(\Vert {\mathcal {S}} y\Vert _{{\mathcal {C}}} \le r\). Combining this with Lemma 3.1, it follows that for every pair \(u,v\in {\mathcal {B}}_r\), \({\mathcal {S}}_1u+{\mathcal {S}}_2v\in {\mathcal {B}}_r\). Lemma 3.3 shows that \({\mathcal {S}}_2\) is completely continuous. Hence, it remains to prove that \({\mathcal {S}}_1\) is a contraction, which clearly holds for \(t\in J\) and any \(u\in U_{ad}\). Hence, Lemma 3.4 yields at least one fixed point \(y^*\in {\mathcal {B}}_r\) with \( {\mathcal {S}}y^*=y^*\). Let \(x^*(t)=t^{\gamma -1}y^*(t)\); then \(x^*\in {\mathcal {B}}_r^\gamma \), and \(x^*\) is a mild solution of problem (1)–(2). The proof is completed. \(\quad \square \)
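The final algebraic step, namely that \(\Vert {\mathcal {S}}y\Vert _{{\mathcal {C}}}^2\le {\widehat{\rho }}^2+\eta ^2r^2\) forces \(\Vert {\mathcal {S}}y\Vert _{{\mathcal {C}}}\le r\) whenever \(r\ge {\widehat{\rho }}/(1-\eta )\), reduces to the elementary inequality \({\widehat{\rho }}^2+\eta ^2r^2\le r^2\), which holds because \((1-\eta )^2+\eta ^2\le 1\) for \(\eta \in [0,1)\). A quick randomized sketch (illustrative only) confirms it:

```python
import random

# Randomized check: for eta in [0, 1) and r = rho_hat / (1 - eta),
#   rho_hat^2 + eta^2 r^2 <= r^2,
# since (1 - eta)^2 + eta^2 <= 1 on [0, 1).
random.seed(0)
for _ in range(10_000):
    eta = random.uniform(0.0, 0.999)
    rho_hat = random.uniform(0.0, 100.0)
    r = rho_hat / (1.0 - eta)            # smallest admissible radius
    assert rho_hat ** 2 + (eta * r) ** 2 <= r ** 2 * (1 + 1e-12)
```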

Following Theorem 4.1, for any \(u\in U_{ad}\), let \({\mathcal {B}}_r^\gamma \) be defined as before and let \({\mathcal {S}}(u)\) denote the mild solution set. A pair \((x,u)\) is feasible if it satisfies problem (1)–(2) for \(x\in {\mathcal {B}}_r^\gamma \); if \((x,u)\) is feasible, then \({\mathcal {S}}(u)\subset {\mathcal {B}}_r^\gamma \).

The Bolza problem (P) is stated as follows:

Find an \(x^0\in {\mathcal {B}}_r^\gamma \subset {\mathcal {C}}(J,X)\) and \(u^0\in U_{ad}\) such that

$$\begin{aligned}{\mathcal {J}}(x^0,u^0)\le {\mathcal {J}}(x,u),\quad {\text {for\ all\ feasible\ pairs}}\; (x,u),\end{aligned}$$

where

$$\begin{aligned}{\mathcal {J}}(x^u,u)={\mathbf {E}}\int _0^T {\mathcal {L}}(t,x^u(t),u(t))dt.\end{aligned}$$

We make some assumptions on \({\mathcal {L}}\):

(\({\mathcal {L}}_1\)):

The functional \({\mathcal {L}} : J\times X\times U\rightarrow {\mathbf {R}}\cup \{\infty \}\) is Borel measurable,

(\({\mathcal {L}}_2\)):

\({\mathcal {L}}(t,\cdot ,\cdot )\) is sequentially lower semicontinuous on \( X\times U\) for almost all \(t\in J\),

(\({\mathcal {L}}_3\)):

\({\mathcal {L}}(t, x, \cdot )\) is convex on U for each \(x\in X\) and almost all \(t\in J\),

(\({\mathcal {L}}_4\)):

there exist constants \(c\ge 0\), \(d > 0\), and a nonnegative function \(\psi \in L^1(J,{\mathbf {R}})\) such that

$$\begin{aligned}{\mathcal {L}}(t, x, u)\ge \psi (t)+c \Vert x\Vert ^2+d\Vert u\Vert _U^2.\end{aligned}$$

Theorem 4.2

Assume that the hypotheses in Theorem 4.1 and assumptions on \({\mathcal {L}}\) hold. Then, Bolza problem (P) possesses at least one optimal control pair.

Proof

Let \(\varTheta _u\) denote the set of all mild solutions to the problem (1)–(2) in \({\mathcal {B}}_r^\gamma \) for each control \(u\in U_{ad}\). If \(\inf _{x^u\in \varTheta _u}{\mathcal {J}}(x^u,u) =+\infty \), there is nothing to prove. Now, we assume that \({\mathcal {J}}(u) = \inf _{x^u\in \varTheta _u}{\mathcal {J}}(x^u, u)=m < +\infty \). By hypothesis (\({\mathcal {L}}_4\)), we have

$$\begin{aligned} {\mathcal {J}}(x,u)\ge & {} \int _0^T\psi (t)dt+c{\mathbf {E}} \int _0^T \Vert x(t )\Vert ^2dt +d{\mathbf {E}} \int _0^T\Vert u(t )\Vert _U^2dt >-\kappa , \end{aligned}$$

where \(\kappa > 0\) is a constant. Hence, \(m\ge -\kappa >-\infty \). We now divide the proof into three steps.

Step 1. By the definition of the infimum, there exists a sequence \(\{x_k^u\}\subset \varTheta _u\) satisfying \({\mathcal {J}}(x_k^u, u )\rightarrow m\) as \(k\rightarrow \infty \). Considering \(\{(x_k^u, u)\}\) as a sequence of feasible pairs, we have

$$\begin{aligned} x_k^u(t)= & {} E^{-1}{\mathcal {P}}_{\alpha ,E}^\beta (t)Ex_0+\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)[f(s,x_k^u(s))+B(s)u(s)]ds \nonumber \\&+\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)g(s,x_k^u(s))dW(s)\nonumber \\=: & {} I_1x_k^u+I_2x_k^u+I_3x_k^u. \end{aligned}$$
(9)

Step 2. We prove that there exists some \({\tilde{x}}^{u}\in \varTheta _u\) such that \({\mathcal {J}}({\tilde{x}}^{u},u)=m\). First, we show that for each \(u\in U_{ad}\), the set \(\{x_k^u\}\) is relatively compact in \({\mathcal {C}}(J, X)\).

By arguments analogous to those in Lemma 3.5, the sets \(\{I_ix_k^u\}\), \(i=1,2,3\), are precompact in \({\mathcal {C}}(J, X)\). This means that the sequence \(\{x_k^u\}\) is relatively compact in \({\mathcal {C}}(J, X)\) for each \(u\in U_{ad}\). Hence, passing to a subsequence if necessary, we may assume that \(x_k^u\rightarrow {\tilde{x}}^u\) in \({\mathcal {B}}_r^\gamma \) as \(k\rightarrow \infty \). Thus, by the Lebesgue dominated convergence theorem, letting \(k\rightarrow \infty \) in (9), we obtain

$$\begin{aligned} {\tilde{x}} ^u(t)= & {} E^{-1}{\mathcal {P}}_{\alpha ,E}^\beta (t)Ex_0+\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)[f(s,{\tilde{x}}^u(s) )+B(s)u(s)]ds\\&+\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)g(s,{\tilde{x}}^u(s))dW(s), \end{aligned}$$

which shows that \({\tilde{x}} ^u\in \varTheta _u\). We now verify that \({\mathcal {J}}({\tilde{x}}^u, u)=m\) for each \(u\in U_{ad}\). Indeed, since \({\mathcal {C}}(J, X)\hookrightarrow {\mathcal {L}}^1(J, X)\), where \({\mathcal {L}}^1(J, X)\) is the closed subspace of \({\mathcal {L}}_1({\mathbf {D}},{\mathcal {F}},{\mathbf {P}};X)\), the definition of a feasible pair, the assumptions on \({\mathcal {L}}\) and the Balder theorem (see, e.g., [2]) imply

$$\begin{aligned} m= & {} {\mathcal {J}}(u) = \lim _{k\rightarrow \infty } {\mathbf {E}}\int _0^T {\mathcal {L}}(t, x_k^u(t), u(t))dt\ge {\mathbf {E}}\int _0^T{\mathcal {L}}(t, {\tilde{x}}^u(t), u(t))dt\\= & {} {\mathcal {J}}({\tilde{x}}^u, u)\ge {\mathcal {J}}(u)=m, \end{aligned}$$

which means that \({\mathcal {J}}({\tilde{x}}^u, u)=m\). This shows that \({\mathcal {J}}(u)\) attains its minimum at \({\tilde{x}}^u\in {\mathcal {C}}(J, X)\) for each control \(u\in U_{ad}\).

Step 3. In the sequel, we shall show that there exists \(u^0\in U_{ad}\) such that \({\mathcal {J}}(u^0)\le m\). In fact, if \(\inf _{u\in U_{ad}}{\mathcal {J}}(u) = +\infty \), then there is nothing to prove. Now, we assume that \(\inf _{u\in U_{ad}}{\mathcal {J}}(u) < +\infty \). Similarly to Step 1, one can check that \(\inf _{u\in U_{ad}}{\mathcal {J}}(u) > -\infty \), and there exists a sequence \(\{u_k\}\subset U_{ad}\) such that \({\mathcal {J}}(u_k)\rightarrow \inf _{u\in U_{ad}}{\mathcal {J}}(u)\) as \(k\rightarrow \infty \). Since \(\{u_k\}\subset U_{ad}\), the sequence \(\{u_k\}\) is bounded in \(L^2_{{\mathcal {F}}}(J,U)\), and since \(L^2_{{\mathcal {F}}}(J,U)\) is a reflexive Hilbert space, there exists a subsequence, relabeled as \(\{u_k\}\), converging weakly to some \(u^0\in L^2_{{\mathcal {F}}}(J,U)\) as \(k\rightarrow \infty \). Note that \(U_{ad}\) is closed and convex; since the closure and the weak closure of a convex subset of a normed space coincide [5], it follows that \(u^0\in U_{ad}\).

Let \({\tilde{x}}^{u_k}\) be the mild solution to the problem (1)–(2) corresponding to \(u_k\), obtained as in the proof of Theorem 4.1, at which \({\mathcal {J}}(u_k)\) attains its minimum. Then, \(({\tilde{x}}^{u_k}, u_k)\) is a feasible pair and satisfies the following integral equation for \(t\in J\):

$$\begin{aligned} {\tilde{x}} ^{u_k}(t)= & {} E^{-1}{\mathcal {P}}_{\alpha ,E}^\beta (t)Ex_0+\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)f(s,{\tilde{x}} ^{u_k}(s))ds\nonumber \\&+\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s) B(s){u_k}(s)ds +\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)g(s,{\tilde{x}} ^{u_k}(s))dW(s)\nonumber \\=: & {} L_1{\tilde{x}}^{u_k}+L_2{\tilde{x}}^{u_k}+L_3{\tilde{x}}^{u_k}+L_4 u_k.\nonumber \\ \end{aligned}$$
(10)

Arguing as in Lemma 3.5 again, the sets \(\{L_i{\tilde{x}}^{u_k}\}\), \(i=1,2,3\), are relatively compact subsets of \({\mathcal {C}}(J, X)\). Furthermore, since \(L_4\) is compact and \(u_k\rightharpoonup u^0\), we have \(L_4u_k\rightarrow L_4u^0\) in \({\mathcal {C}}(J, X)\) as \(k\rightarrow \infty \). Thus, the set \(\{{\tilde{x}}^{u_k}\}\subset {\mathcal {C}}(J, X)\) is relatively compact, and there exist a subsequence, still denoted by \(\{{\tilde{x}}^{u_k}\}\), and \({\tilde{x}}^{u^0}\in {\mathcal {C}}(J, X)\) such that \({\tilde{x}}^{u_k}\rightarrow {\tilde{x}}^{u^0}\) in \({\mathcal {C}}(J, X)\) as \(k\rightarrow \infty \). Thus, by the Lebesgue dominated convergence theorem, letting \(k\rightarrow \infty \) in (10), we obtain

$$\begin{aligned} {\tilde{x}} ^{u^0}(t)= & {} E^{-1}{\mathcal {P}}_{\alpha ,E}^\beta (t)Ex_0+\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)f(s,{\tilde{x}} ^{u^0}(s))ds\\&+\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)B(s){u^0}(s)ds\\&+\int _0^tE^{-1}{\mathcal {T}}_{\alpha ,E}(t-s)g(s,{\tilde{x}} ^{u^0}(s))dW(s), \end{aligned}$$

which shows that \(({\tilde{x}}^{u^0}, u^0)\) is a feasible pair. Since \({\mathcal {C}}(J,X)\hookrightarrow {\mathcal {L}}^1(J,X)\), by the assumptions on \({\mathcal {L}}\) and the Balder theorem again, we have

$$\begin{aligned} \inf _{u\in U_{ad}}{\mathcal {J}}(u)= & {} \lim _{k\rightarrow \infty } {\mathbf {E}}\int _0^T {\mathcal {L}}(t, {\tilde{x}} ^{u_k}(t), u_k(t))dt\\\ge & {} {\mathbf {E}}\int _0^T {\mathcal {L}}(t, {\tilde{x}} ^{u^0}(t), u^0(t))dt \\= & {} {\mathcal {J}}( {\tilde{x}} ^{u^0}, u^0)\ge \inf _{u\in U_{ad}}{\mathcal {J}}(u). \end{aligned}$$

Therefore, \({\mathcal {J}}( {\tilde{x}} ^{u^0}, u^0) = {\mathcal {J}}(u^0) = \inf _{x^{u^0}\in \varTheta _u}{\mathcal {J}}(x^{u^0}, u^0)\). Moreover, \({\mathcal {J}}(u^0) = \inf _{u\in U_{ad}}{\mathcal {J}}(u)\), i.e., \({\mathcal {J}}\) attains its minimum at \(u^0\in U_{ad}\). The proof is completed. \(\square \)

5 An Application

Let \(X= U :=L^2[0,\pi ]\). We consider the following semilinear time-fractional Sobolev-type stochastic diffusion equations:

$$\begin{aligned} \partial _t^{\alpha ,\beta } (x-x_{zz})-x_{zz}= & {} f(t,z,x)+B(t)u(t,z)+g(t,z,x)\partial _tW(t,z),\\&\qquad t\in (0,T],\ z\in [0,\pi ],\\ x(t,0)= & {} x(t,\pi )=0,\quad t\in (0,T],\\ (I_{0+}^{1-\gamma }x)(0,z)= & {} x_0(z),\quad z\in [0,\pi ], \end{aligned}$$

where \(\gamma =\alpha +\beta (1-\alpha )\) for \(\alpha \in (1/2,1)\) and \(\beta \in [0,1]\), and where \(\partial _t^{\alpha ,\beta }\) is the Hilfer fractional partial derivative. \(B(\cdot ) :[0,T]\rightarrow {\mathcal {L}}(U,X)\) is defined by \(B(t):=b e^{-t}I\) for some \(b > 0\), \(t\in [0,T]\).

Define \(A : D(A)\rightarrow X\) by \(Ax := -x_{zz}\) and \(E : D(E)\rightarrow X\) by \(Ex :=x- x_{zz}\), where each of the domains D(A) and D(E) is given by

$$\begin{aligned} \{x\in X :\ x,x_z\; {\text {are\, absolutely\, continuous}},\; x_{zz}\in X,\ x(0) = x(\pi ) = 0\}. \end{aligned}$$

From [8], A and E can be written as

$$\begin{aligned} Ax=\sum _{n=1}^\infty n^2(x,\varphi _n)\varphi _n,\ x\in D(A), \end{aligned}$$

and

$$\begin{aligned} Ex=\sum _{n=1}^\infty (1+n^2)(x,\varphi _n)\varphi _n,\ x\in D(E), \end{aligned}$$

respectively, where \((\cdot ,\cdot )\) is the inner product in X, \(\varphi _n(z) :=\sqrt{2/\pi }\sin (nz)\), \(n = 1,2,\ldots \), are the eigenfunctions, and \(\{\varphi _n\}_{n=1}^\infty \) forms an orthonormal basis of X. Moreover, for any \(x\in X\), it follows that

$$\begin{aligned} E^{-1}x=\sum _{n=1}^\infty \frac{1}{1+n^2}(x,\varphi _n)\varphi _n,\ -AE^{-1}x=\sum _{n=1}^\infty \frac{-n^2}{1+n^2}(x,\varphi _n)\varphi _n, \end{aligned}$$

and thus the semigroup \(T_E(t)\), \(t\ge 0\), generated by \(-AE^{-1}\) is given by

$$\begin{aligned} T_E(t)x=\sum _{n=1}^\infty e^{-\frac{ n^2}{1+n^2}t}(x,\varphi _n)\varphi _n. \end{aligned}$$

It is obvious that \(E^{-1}\) is compact, bounded with \(\Vert E^{-1}\Vert _{L(X)}\le 1\), and the semigroup \(T_E(t)\) is strongly continuous on X satisfying \(\Vert T_E(t)\Vert _{{\mathcal {L}}(X)}\le 1\).
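The spectral formulas above can be sketched numerically on a truncated eigenbasis. In the coordinates \(c_n=(x,\varphi _n)\), the operators act as diagonal multipliers, \(E^{-1}\sim 1/(1+n^2)\) and \(T_E(t)\sim e^{-n^2t/(1+n^2)}\); the truncation level \(N=60\) below is an arbitrary illustrative choice:

```python
import math
import random

# Truncated spectral sketch of T_E(t) and E^{-1}: diagonal multipliers
# on the Fourier coefficients c_n = (x, phi_n).
N = 60

def T_E(t, c):
    return [math.exp(-(n * n) / (1.0 + n * n) * t) * cn
            for n, cn in zip(range(1, N + 1), c)]

def norm(v):
    return math.sqrt(sum(a * a for a in v))

random.seed(1)
c = [random.gauss(0, 1) for _ in range(N)]   # coefficients of a generic x

# contraction property ||T_E(t) x|| <= ||x||, matching ||T_E(t)|| <= 1
for t in (0.0, 0.5, 2.0):
    assert norm(T_E(t, c)) <= norm(c) + 1e-12

# semigroup property T_E(s) T_E(t) = T_E(s + t) on the coefficients
lhs, rhs = T_E(0.5, T_E(1.0, c)), T_E(1.5, c)
assert max(abs(a - b) for a, b in zip(lhs, rhs)) < 1e-12

# the multipliers 1/(1 + n^2) of E^{-1} vanish as n grows, consistent
# with the compactness of E^{-1}
assert 1.0 / (1.0 + N * N) < 1e-3
```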

Let \(\psi _n(t) = \langle W(t, \cdot ), e_n\rangle \) for each n. Assume that the covariance operator \({\mathcal {Q}}\) satisfies \(\langle {\mathcal {Q}}e_n, e_k\rangle =\delta _{nk} \varpi _n^2\) for some constants \(\varpi _n>0\), \(n, k \in {\mathbf {N}}\). Then, we get that \(\{\psi _n(t)\}_{n\ge 1}\) is a sequence of independent Wiener processes in one dimension with mean zero and covariance \({\mathbf {E}}\{\psi _n(t)\psi _k(s)\} =\delta _{nk}\varpi _n^2(t\wedge s)\). So we can write \(\psi _n(t) =\varpi _n w_n(t)\) and

$$\begin{aligned} W(t,z) =\sum _{n=1}^\infty \psi _n(t)e_n(z) =\sum _{n=1}^\infty \varpi _n w_n(t)e_n(z), \end{aligned}$$

where \(\{w_n(t)\}\) is a sequence of standard Wiener processes in one dimension.
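The truncated expansion of W(t, z) can be sketched by Monte Carlo simulation. The trace-class choice \(\varpi _n=1/n\) and the identification \(e_n(z)=\sqrt{2/\pi }\sin (nz)\) below are our own illustrative assumptions, not fixed by the text:

```python
import math
import random

# Monte Carlo sketch of W(t, z) = sum_n varpi_n w_n(t) e_n(z), truncated
# to N_MODES modes; varpi_n = 1/n and e_n = sqrt(2/pi) sin(n z) are
# hypothetical illustrative choices.
random.seed(42)
N_MODES, N_PATHS, t = 4, 20_000, 1.0
varpi = [1.0 / n for n in range(1, N_MODES + 1)]

def e(n, z):
    return math.sqrt(2.0 / math.pi) * math.sin(n * z)

# psi_n(t) = varpi_n w_n(t) with w_n(t) ~ N(0, t), independent across n
samples = [[varpi[n] * random.gauss(0.0, math.sqrt(t))
            for n in range(N_MODES)] for _ in range(N_PATHS)]

# empirical covariance should approximate delta_{nk} varpi_n^2 (t ^ s)
for n in range(N_MODES):
    var_n = sum(s[n] ** 2 for s in samples) / N_PATHS
    assert abs(var_n - varpi[n] ** 2 * t) < 0.1 * varpi[n] ** 2 * t
cross = sum(s[0] * s[1] for s in samples) / N_PATHS
assert abs(cross) < 0.05 * varpi[0] * varpi[1] * t

# one sample of the truncated field at (t, z) = (1, 0.3)
W_tz = sum(samples[0][n] * e(n + 1, 0.3) for n in range(N_MODES))
```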

Finally, let the functions f and g satisfy the assumptions (Hf) and (Hg). Since the above fractional partial differential equations can be formulated as problem (1)–(2), Theorem 4.1 yields at least one mild solution \(x\in {\mathcal {B}}_r^\gamma \) for some \(r>0\), and by Theorem 4.2 the corresponding Bolza problem admits at least one optimal feasible pair.

6 Conclusions

In this paper, we establish sufficient conditions for the existence of mild solutions to Sobolev-type stochastic evolution equations with Hilfer fractional derivative of order \(\alpha \in (1/2,1)\) and type \(\beta \in [0,1]\), as well as for the corresponding control system. Moreover, it is shown that the Lagrange problem admits at least one optimal state-control pair under some natural assumptions. The results are established by means of the compactness of the semigroup, fractional calculus, Krasnoselskii's fixed point theorem and the Balder theorem for the Bolza problem. Also, our results are obtained without Lipschitz conditions on the nonlinear functions.