
1 Introduction

Fractional calculus is a classical mathematical notion that generalizes ordinary differentiation and integration to arbitrary order. Nowadays, fractional calculus has become an active area of research, as it has gained considerable importance due to its numerous applications in various fields such as physics, chemistry, viscoelasticity, and the engineering sciences. For more details, see the cited papers [18, 14] and the references therein.

Deterministic models often fluctuate due to environmental noise. A natural extension of a deterministic model is a stochastic model, in which the relevant parameters are modeled as suitable stochastic processes. Indeed, most problems arising in practical situations are modeled by stochastic equations rather than deterministic ones, so it is of great significance to introduce stochastic effects in the investigation of differential equations [13]. For more details on stochastic differential equations, see [10–12] and the references therein.

It is also known that impulsive effects arise widely in different areas of the real world, such as mechanics, electronics, telecommunications, finance, and economics; for more details, see [9]. The states of many evolutionary processes are subject to instantaneous perturbations and experience abrupt changes at certain moments of time. The duration of these changes is very short and negligible in comparison with the duration of the whole process, so they can be thought of as impulses. Therefore, it is important to take the effect of impulses into account in the investigation of stochastic differential equations.

Wang et al. [16] considered the following impulsive fractional differential equation of order \(q \in (1,2)\):

$$\begin{aligned}&^cD_t^{q}u(t)= f(t,u(t)),\;\;t \in J'=[0,T],\;q \in (1,2),\\&\varDelta u(t_k) = I_k(u(t_k^- )),\varDelta u'(t_k) = J_k(u(t_k^- )), k = 1, 2,\ldots ,m,\\&u(0)=u_0,\;u'(0) =\overline{u}_0, \end{aligned}$$

and discussed the existence and uniqueness of solutions with the help of the Banach fixed point theorem and the Krasnoselskii fixed point theorem.

Sakthivel et al. [15] considered the following impulsive fractional stochastic differential equation with infinite delay:

$$\begin{aligned}\left\{ \begin{array}{lllllll} D_t^\alpha x(t)=Ax(t)+ f (t, x_t, B_1x(t)) + \sigma (t, x_t, B_2x(t)){d{\textit{w}}(t)\over dt}, t\in [0,T], t \ne t_k, \\ \varDelta x(t_k)= I_k(x(t_k)),\; k = 1, 2,\ldots ,m,\\ x(t)= \phi (t),\;\; \phi (t) \in \mathscr {B}_h, \end{array}\right. \end{aligned}$$

and discussed the existence of mild solutions using the Banach contraction principle and Krasnoselskii's fixed point theorem.

Motivated by the above works [15, 16], in this article we are concerned with the existence and uniqueness of solutions for the impulsive fractional functional integro-differential equation of the form:

$$\begin{aligned}&{}^cD^\alpha _t x(t)= f\left( t,x(t), x_t,\int _0^t K(t,s)x(s)ds\right) \nonumber \\&+\,g\left( t,x(t), x_t,\int _0^t K(t,s)x(s)ds\right) {d{\textit{w}}(t)\over dt},t \in J = [0,T],t \ne t_k, \end{aligned}$$
(1)
$$\begin{aligned}&\varDelta x(t_k) = I_k(x(t_k^- )),\varDelta x'(t_k) = Q_k(x(t_k^- )), k = 1, 2, \ldots ,m, \end{aligned}$$
(2)
$$\begin{aligned}&x(t) =\phi (t),x'(0) =x_1,\; t\in [-d,0], \end{aligned}$$
(3)

where J is the operational interval, \(^cD_t^{\alpha }\) denotes the Caputo fractional derivative of order \(\alpha \in (1, 2)\), and \(x(\cdot )\) takes values in a real separable Hilbert space \(\mathscr {H}\); \(f : J\times \mathscr {H} \times PC^0_{\mathscr {L}}\times \mathscr {H} \rightarrow \mathscr {H}\), \(g : J \times \mathscr {H} \times PC^0_{\mathscr {L}}\times \mathscr {H} \rightarrow \mathscr {L}(\mathscr {K},\mathscr {H})\), and \(I_k,Q_k:\mathscr {H} \rightarrow \mathscr {H}\) are appropriate functions; \(\phi (t)\) is an \(\mathscr {F}_0\)-measurable \(\mathscr {H}\)-valued random variable independent of w. Here \(0 = t_0 < t_1 <\cdots < t_m < t_{m+1} = T\), \(\varDelta x(t_k) = x(t_k^+ ) - x(t_k^- )\), and \(\varDelta x'(t_k) =x'(t_k^+ )-x'(t_k^- )\), where \(x(t_k^+)\) and \(x(t_k^- )\) denote the right and left limits of x at \(t_k\), and, similarly, \(x'(t_k^+ )\) and \(x'(t_k^- )\) denote the right and left limits of \(x'\) at \(t_k\), respectively.

The remainder of this work is organized as follows. Section 2 provides some basic definitions, preliminaries, theorems, and lemmas. Section 3 is devoted to the main results for the considered problem (1)–(3).

2 Preliminaries

Let \(\mathscr {H},\mathscr {K}\) be two real separable Hilbert spaces and \(\mathscr {L}(\mathscr {K},\mathscr {H})\) be the space of bounded linear operators from \(\mathscr {K}\) into \(\mathscr {H}\). For convenience, we use the same notation \(\Vert \cdot \Vert \) to denote the norms in \(\mathscr {H},\mathscr {K}\), and \(\mathscr {L}(\mathscr {K},\mathscr {H})\), and \((\cdot , \cdot )\) to denote the inner products of \(\mathscr {H}\) and \(\mathscr {K}\), without any confusion. Let \((\varOmega , \mathscr {F}, \{\mathscr {F}_t \}_{t\ge 0}, \mathscr {P})\) be a complete filtered probability space such that \(\mathscr {F}_0\) contains all \(\mathscr {P}\)-null sets of \(\mathscr {F}.\) An \(\mathscr {H}\)-valued random variable is an \(\mathscr {F}\)-measurable function \(x(t):\varOmega \rightarrow \mathscr {H}\), and a collection of random variables \(S=\{x(t,\omega ):\varOmega \rightarrow \mathscr {H} \mid t\in J\}\) is called a stochastic process. Usually we write x(t) instead of \(x(t,\omega )\) and regard \(x(t):J \rightarrow \mathscr {H}\) as an element of S. Let \(w = (w_t )_{t\ge 0}\) be a \(\mathscr {Q}\)-Wiener process defined on \((\varOmega , \mathscr {F}, \{\mathscr {F}_t \}_{t\ge 0}, \mathscr {P})\) with covariance operator \(\mathscr {Q}\) such that \(Tr\mathscr {Q} < \infty .\) We assume that there exist a complete orthonormal system \(\{e_k\}_{k\ge 1}\) in \(\mathscr {K}\), a bounded sequence of nonnegative real numbers \(\lambda _k\) such that \(\mathscr {Q}e_k = \lambda _ke_k,\; k = 1, 2,\dots ,\) and a sequence of independent Brownian motions \(\{\beta _k\}_{k\ge 1}\) such that

$$(w(t), e)_{\mathscr {K}} =\sum _{k=1}^\infty \sqrt{\lambda _k}(e_k, e)_{\mathscr {K}}\beta _k(t), e \in \mathscr {K}, t \ge 0.$$

Let \(\mathscr {L}^2_0= \mathscr {L}^2(\mathscr {Q}^{ 1\over 2} \mathscr {K},\mathscr {H})\) be the space of all Hilbert–Schmidt operators from \(\mathscr {Q}^{ 1\over 2}\mathscr {K}\) to \(\mathscr {H}\) with the inner product \(\langle \varphi ,\psi \rangle _{\mathscr {L}^2_0}=Tr[\varphi \mathscr {Q}\psi ^*]\).
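
We also recall the Itô isometry for stochastic integrals with respect to the \(\mathscr {Q}\)-Wiener process w: for any \(\mathscr {L}^2_0\)-valued predictable process \(\varphi \) satisfying \(\int _0^T E\Vert \varphi (s)\Vert ^2_{\mathscr {L}^2_0}ds < \infty \),

$$E\left\Vert \int _0^t \varphi (s)d{\textit{w}}(s)\right\Vert ^2_{\mathscr {H}} =\int _0^t E\Vert \varphi (s)\Vert ^2_{\mathscr {L}^2_0}ds,\quad t\in [0,T].$$

This identity, together with the Hölder inequality, is what underlies the bounds on the stochastic integral terms in the proof of Theorem 1 below.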

The collection of all strongly measurable, square integrable, \(\mathscr {H}\)-valued random variables, denoted by \(\mathscr {L}^2(\varOmega , \mathscr {F}, \{\mathscr {F}_t \}_{t\ge 0}, \mathscr {P};\mathscr {H})=\mathscr {L}^2(\varOmega ;\mathscr {H}),\) is a Banach space equipped with the norm \(\Vert x(\cdot )\Vert ^2_{\mathscr {L}^2}=E\Vert x(\cdot ,\omega )\Vert ^2_{\mathscr {H}},\) where E denotes the expectation defined by \(E(h)=\int _{\varOmega }h(\omega )d\mathscr {P}\). An important subspace is \(\mathscr {L}^2_0(\varOmega ;\mathscr {H})=\{f\in \mathscr {L}^2(\varOmega ;\mathscr {H}):f \;\text{ is }\; \mathscr {F}_0\text{-measurable}\}\).

Let \(PC^0_{\mathscr {L}}=C([-d,0],\mathscr {L}^2(\varOmega ;\mathscr {H}))\) be the Banach space of all continuous maps from \([-d,0] \;\text{ into }\; \mathscr {L}^2(\varOmega ;\mathscr {H})\) satisfying the condition \(\sup _{t\in [-d,0]} E\Vert \phi (t)\Vert ^2 < \infty \), equipped with the norm

$$\Vert \phi \Vert ^2_{PC_{\mathscr {L}}^0}=\sup _{t\in [-d,0]} E\Vert \phi (t)\Vert ^2_{\mathscr {H}},\quad \phi \in PC^0_{\mathscr {L}}.$$

Let \(C^2(J,\mathscr {L}^2(\varOmega ;\mathscr {H}))\) be the Banach space of all continuously differentiable maps from \(J \;\text{ into }\; \mathscr {L}^2(\varOmega ;\mathscr {H})\) satisfying the condition \(\sup _{t\in J} E\Vert x(t)\Vert ^2 < \infty \), with the norm defined by

$$\Vert x\Vert ^2_{C^2}=\sup _{t\in J}\sum _{j=0}^{1} E\Vert x^{(j)}(t)\Vert ^2_{\mathscr {H}},\quad x\in C^2(J,\mathscr {L}^2(\varOmega ;\mathscr {H})).$$

To study the impulsive conditions, we consider

$$PC^2_{\mathscr {L}}=PC^2([-d,T],\mathscr {L}^2(\varOmega ;\mathscr {H}))$$

the Banach space of all continuous functions \(x:[-d,T]\rightarrow \mathscr {L}^2(\varOmega ;\mathscr {H})\) that are continuously differentiable on [0, T] except at a finite number of points \(t_i\in (0,T),\;i=1,2,\dots ,\mathscr {N},\) at which \(x'(t_i^+)\) and \(x'(t_i^-)=x'(t_i)\) exist; this space is endowed with the norm

$$\Vert x\Vert ^2_{PC^2_{\mathscr {L}}}=\sup _{t\in J}\sum _{j=0}^{1} E\Vert x^{(j)}(t)\Vert ^2_{\mathscr {H}},\quad x\in PC^2_{\mathscr {L}}.$$

Definition 1

The Riemann–Liouville fractional integral operator of order \(\alpha > 0\) of a function \(f:\mathscr {R}^+\rightarrow \mathscr {R}\) with \(f\in L^1(\mathscr {R}^+)\) is defined by

$$\begin{aligned} J_t^0f(t)=f(t),\;J_t^{\alpha }f(t)={1\over \varGamma (\alpha )}\int _0^t(t-s)^{\alpha -1}f(s)ds,\quad \alpha >0,\;t>0, \end{aligned}$$

where \(\varGamma (\cdot )\) is the Gamma function.
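
For example, for \(f(t)=t\) a direct computation using the Beta function gives

$$J_t^{\alpha }t={1\over \varGamma (\alpha )}\int _0^t(t-s)^{\alpha -1}s\,ds={t^{\alpha +1}\over \varGamma (\alpha +2)},$$

which for \(\alpha =1\) recovers the ordinary integral \(t^2/2\).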

Definition 2

Caputo’s derivative of order \(\alpha >0\) for a function \(f:[0,\infty )\rightarrow \mathscr {R}\) is defined as

$$\begin{aligned} D^\alpha _tf(t)={1\over \varGamma (n-\alpha )}\int _0^t(t-s)^{n-\alpha -1}f^{(n)}(s)ds =J^{n-\alpha }f^{(n)}(t), \end{aligned}$$

for \(n-1<\alpha <n,\;n\in N\). If \(0<\alpha < 1\), then

$$\begin{aligned} D^{\alpha }_tf(t)={1\over \varGamma (1-\alpha )}\int _0^t(t-s)^{-\alpha }f^{(1)}(s)ds. \end{aligned}$$

Obviously, Caputo’s derivative of a constant is equal to zero.
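
For instance, for \(f(t)=t^2\) and \(1<\alpha <2\) (so that \(n=2\) and \(f''(s)=2\)), we obtain

$$D^{\alpha }_t t^2={1\over \varGamma (2-\alpha )}\int _0^t(t-s)^{1-\alpha }\cdot 2\,ds={2t^{2-\alpha }\over \varGamma (3-\alpha )},$$

which reduces to the classical second derivative \(f''(t)=2\) as \(\alpha \rightarrow 2\).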

Lemma 1

A measurable \(\mathscr {F}_t\)-adapted stochastic process \(x :[-d,T]\rightarrow \mathscr {H}\) with \(x\in PC^2_{\mathscr {L}}\) is called a mild solution of the system (1)–(3) if \(x(t)=\phi (t)\) on \([-d,0]\) and \(x'(0)=x_1\); \(\varDelta x|_{t=t_k}=I_k(x(t_k^-))\) and \(\varDelta x'|_{t=t_k}=Q_k(x(t_k^-))\) for \(k = 1,2,\ldots ,m\); the restriction of \(x(\cdot )\) to the interval \([0, T ) \setminus \{t_1, \ldots , t_m\}\) is continuous; and x(t) satisfies the following fractional integral equation

$$\begin{aligned} x(t)={\left\{ \begin{array}{ll} \phi (0) + x_1t + {1\over \varGamma {(\alpha )}}\int _0^t(t-s)^{\alpha -1}f\left( s,x(s),x_s,\int _0^s K(s,\tau )x(\tau )d\tau \right) ds \\ +\,{1\over \varGamma {(\alpha )}}\int _0^t (t-s)^{\alpha -1} g\left( s,x(s),x_s,\int _0^s K(s,\tau )x(\tau )d\tau \right) d{\textit{w}}(s),&{}t\in (0,t_1],\\ \phi (0) + x_1t+ I_1(x(t_1^-)) + Q_1(x(t_1^-))(t-t_1)\\ +\, {1\over \varGamma {(\alpha )}}\int _0^t(t-s)^{\alpha -1}f\left( s,x(s),x_s,\int _0^s K(s,\tau )x(\tau )d\tau \right) ds \\ +\,{1\over \varGamma {(\alpha )}}\int _0^t (t-s)^{\alpha -1} g\left( s,x(s),x_s,\int _0^s K(s,\tau )x(\tau )d\tau \right) d{\textit{w}}(s) ,&{} t \in (t_1,t_2],\\ \cdots \\ \phi (0) + x_1t+\sum _{i=1}^k\left[ I_i(x(t_i^-)) + Q_i(x(t_i^-))(t-t_i)\right] \\ + \,{1\over \varGamma {(\alpha )}}\int _0^t(t-s)^{\alpha -1}f\left( s,x(s),x_s,\int _0^s K(s,\tau )x(\tau )d\tau \right) ds \\ +\,{1\over \varGamma {(\alpha )}}\int _0^t (t-s)^{\alpha -1} g\left( s,x(s),x_s,\int _0^s K(s,\tau )x(\tau )d\tau \right) d{\textit{w}}(s),&{} t\in (t_k,t_{k+1}]. \end{array}\right. } \end{aligned}$$

Further, we introduce the following assumptions to establish our results:

  1. (H1)

    The nonlinear maps f and g are continuous and there exist constants \(\mu _1,\mu _2,\mu _3,{\textit{v}}_1,{\textit{v}}_2,{\textit{v}}_3>0\) such that

    $$\begin{aligned}&E\Vert f(t,x,\varphi ,u)-f(t,y,\psi ,{\textit{v}})\Vert _\mathscr {H}^2 \le \mu _1\Vert x-y\Vert _{\mathscr {H}}^2+ \mu _2 \Vert \varphi -\psi \Vert _{{PC}^0_{\mathscr {L}}}^2+\mu _3\Vert u-{\textit{v}}\Vert _{\mathscr {H}}^2, \\&E\Vert g(t,x,\varphi ,u)-g(t,y,\psi ,{\textit{v}})\Vert _\mathscr {H}^2 \le {\textit{v}}_1\Vert x-y\Vert _{\mathscr {H}}^2+ {\textit{v}}_2 \Vert \varphi -\psi \Vert _{{PC}^0_{\mathscr {L}}}^2+{\textit{v}}_3\Vert u-{\textit{v}}\Vert _{\mathscr {H}}^2 \end{aligned}$$

    for all \(x,y,u,v \in \mathscr {H}\) , \(t\in J\) and \(\varphi ,\psi \in PC^0_{\mathscr {L}}\).

  2. (H2)

    The functions \(I_k,Q_k\) are continuous and there exist constants \(L_I,L_Q >0\) such that

    $$\begin{aligned} E\Vert I_k(x) - I_k(y) \Vert _{\mathscr {H}}^2 \le L_I E\Vert x-y \Vert _{\mathscr {H}}^2,\\ E\Vert Q_k(x) - Q_k(y) \Vert _{\mathscr {H}}^2 \le L_Q E\Vert x-y \Vert _{\mathscr {H}}^2 \end{aligned}$$

    for all \(x,y \in \mathscr {H}\;\; \text{ and } \;\; k=1,2,\cdots ,m.\)
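
As a simple illustration of (H2), if the impulses are linear, say \(I_k(x)=\gamma _k x\) and \(Q_k(x)=\delta _k x\) with \(\sup _k|\gamma _k|\le \gamma \) and \(\sup _k|\delta _k|\le \delta \) (a hypothetical choice, not part of the problem data), then

$$E\Vert I_k(x)-I_k(y)\Vert _{\mathscr {H}}^2=\gamma _k^2\,E\Vert x-y\Vert _{\mathscr {H}}^2\le \gamma ^2 E\Vert x-y\Vert _{\mathscr {H}}^2,$$

so (H2) holds with \(L_I=\gamma ^2\) and, similarly, \(L_Q=\delta ^2\).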

3 Existence and Uniqueness Results

The following result is based on the Banach contraction principle.

Theorem 1

Suppose that the assumptions (H1) and (H2) hold and

$$\begin{aligned} \varTheta =\left\{ 4(mL_I+mT^2L_Q)+{{4T^{2\alpha }}\over {\varGamma (\alpha )}}\left[ {1\over \alpha ^2}(\mu _1+\mu _2+\mu _3K^*)+{1\over T{(2\alpha -1)}}({\textit{v}}_1+{\textit{v}}_2+{\textit{v}}_3K^*)\right] \right\} < 1, \end{aligned}$$

where \(K^*=\sup _{t\in [0,T]}\int _0^t K(t,s)ds < \infty .\) Then the system (1)–(3) has a unique solution.

Proof

We convert the problem (1)–(3) into a fixed point problem. Consider the operator \(N:PC^2_{\mathscr {L}} \rightarrow PC^2_{\mathscr {L}}\) defined by

$$\begin{aligned} (Nx)(t)={\left\{ \begin{array}{ll} \phi (0) + x_1t + {1\over \varGamma {(\alpha )}}\int _0^t(t-s)^{\alpha -1}f\left( s,x(s),x_s,\int _0^s K(s,\tau )x(\tau )d\tau \right) ds \\ +\,{1\over \varGamma {(\alpha )}}\int _0^t (t-s)^{\alpha -1} g\left( s,x(s),x_s,\int _0^s K(s,\tau )x(\tau )d\tau \right) d{\textit{w}}(s),&{} t\in (0,t_1],\\ \phi (0) + x_1t+ I_1(x(t_1^-)) + Q_1(x(t_1^-))(t-t_1)\\ + \,{1\over \varGamma {(\alpha )}}\int _0^t(t-s)^{\alpha -1}f\left( s,x(s),x_s,\int _0^s K(s,\tau )x(\tau )d\tau \right) ds \\ +\,{1\over \varGamma {(\alpha )}}\int _0^t (t-s)^{\alpha -1} g\left( s,x(s),x_s,\int _0^s K(s,\tau )x(\tau )d\tau \right) d{\textit{w}}(s) ,&{} t \in (t_1,t_2],\\ \cdots \\ \phi (0) + x_1t+\sum _{i=1}^k \left[ I_i(x(t_i^-)) + Q_i(x(t_i^-))(t-t_i)\right] \\ + \,{1\over \varGamma {(\alpha )}}\int _0^t(t-s)^{\alpha -1}f\left( s,x(s),x_s,\int _0^s K(s,\tau )x(\tau )d\tau \right) ds \\ +\,{1\over \varGamma {(\alpha )}}\int _0^t (t-s)^{\alpha -1} g\left( s,x(s),x_s,\int _0^s K(s,\tau )x(\tau )d\tau \right) d{\textit{w}}(s),&{} t\in (t_k,t_{k+1}]. \end{array}\right. } \end{aligned}$$

Now we show that N is a contraction map. To this end, take two points \(x,x^* \in PC^2_{\mathscr {L}}\). For \(t \in (0,t_1]\), we have

$$\begin{aligned} E\Vert (Nx)(t)-(Nx^*)(t) \Vert _\mathscr {H}^2\le & {} 2E\Vert {1\over \varGamma (\alpha )}\int _0^t(t-s)^{\alpha -1}[f\left( s,x(s),x_s,\int _0^s K(s,\tau )x(\tau )d\tau \right) \\&-f\left( s,x^*(s),x^*_s,\int _0^s K(s,\tau )x^*(\tau )d\tau \right) ]ds \Vert _\mathscr {H}^2 \\&+ \,2E\Vert {1\over \varGamma (\alpha )}\int _0^t(t-s)^{\alpha -1}[g\left( s,x(s),x_s,\int _0^s K(s,\tau )x(\tau )d\tau \right) \\&-\,g\left( s,x^*(s),x^*_s,\int _0^s K(s,\tau )x^*(\tau )d\tau \right) ]d{\textit{w}}(s) \Vert _\mathscr {H}^2 \\&\le {{2T^{2\alpha }}\over {\varGamma (\alpha )}}\left[ {1\over \alpha ^2}(\mu _1+\mu _2+\mu _3K^*)\right. \\&\left. +\,{1\over T{(2\alpha -1)}}({\textit{v}}_1+{\textit{v}}_2+{\textit{v}}_3K^*)\right] \Vert x-x^*\Vert _{{PC}^2_{\mathscr {L}}}^2. \end{aligned}$$

When \(t\in (t_1,t_2],\)

$$\begin{aligned} E\Vert (Nx)(t)-(Nx^*)(t) \Vert _\mathscr {H}^2\le & {} 4E\Vert I_1(x(t_1^-))-I_1(x^*(t_1^-)) \Vert _\mathscr {H}^2 \\&+\, 4E\Vert Q_1(x(t_1^-))(t-t_1) - Q_1(x^*(t_1^-))(t-t_1) \Vert _\mathscr {H}^2 \\&+ \,4E\Vert {1\over \varGamma (\alpha )}\int _0^t(t-s)^{\alpha -1}[f\left( s,x(s),x_s,\int _0^s K(s,\tau )x(\tau )d\tau \right) \\&-\,f\left( s,x^*(s),x^*_s,\int _0^s K(s,\tau )x^*(\tau )d\tau \right) ]ds \Vert _\mathscr {H}^2 \\&+ \,4E\Vert {1\over \varGamma (\alpha )}\int _0^t(t-s)^{\alpha -1}[g\left( s,x(s),x_s,\int _0^s K(s,\tau )x(\tau )d\tau \right) \\&-\,g\left( s,x^*(s),x^*_s,\int _0^s K(s,\tau )x^*(\tau )d\tau \right) ]d{\textit{w}}(s) \Vert _\mathscr {H}^2 \\&\le \left\{ 4(L_I+T^2L_Q)+ {{4T^{2\alpha }}\over {\varGamma (\alpha )}}\left[ {1\over \alpha ^2}(\mu _1+\mu _2+\mu _3K^*)\right. \right. \\&\left. \left. +\,{1\over T{(2\alpha -1)}}({\textit{v}}_1+{\textit{v}}_2+{\textit{v}}_3K^*)\right] \right\} \Vert x-x^*\Vert _{{PC}^2_{\mathscr {L}}}^2. \end{aligned}$$

Similarly for \(t \in (t_k,t_{k+1}],\;\;k=2,3,\ldots ,m,\)

$$\begin{aligned} E\Vert (Nx)(t)-(Nx^*)(t) \Vert _\mathscr {H}^2\le & {} \left\{ 4(mL_I+mT^2L_Q)+ {{4T^{2\alpha }}\over {\varGamma (\alpha )}}\left[ {1\over \alpha ^2}(\mu _1+\mu _2+\mu _3K^*)\right. \right. \\&\left. \left. +\,{1\over T{(2\alpha -1)}}({\textit{v}}_1+{\textit{v}}_2+{\textit{v}}_3K^*)\right] \right\} \Vert x-x^*\Vert _{{PC}^2_{\mathscr {L}}}^2\\&=\,\varTheta \Vert x-x^*\Vert _{{PC}^2_{\mathscr {L}}}^2. \end{aligned}$$

Since \(\varTheta < 1\) by the hypothesis of Theorem 1, N is a contraction map and therefore has a unique fixed point \(x \in PC^2_{\mathscr {L}}\), which is the unique solution of the system (1)–(3) on J. This completes the proof of the theorem.
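
As a purely illustrative check of the contraction condition (the numerical values below are hypothetical and not tied to any particular model), take \(T=1\), \(\alpha =3/2\), \(m=1\), \(K^*=1\), \(L_I=L_Q=0.01\), and \(\mu _i={\textit{v}}_i=0.01\) for \(i=1,2,3\). Then

$$\varTheta =4(0.01+0.01)+{4\over \varGamma (3/2)}\left[ {1\over (3/2)^{2}}(0.03)+{1\over 2}(0.03)\right] \approx 0.08+0.13\approx 0.21<1,$$

so the hypothesis of Theorem 1 is satisfied and the corresponding system (1)–(3) has a unique solution.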