1 Introduction

Controllability is one of the fundamental concepts in modern mathematical control theory. It is a qualitative property of control systems and is of particular importance in control theory. In many dynamical systems the control does not affect the complete state of the system but only a part of it. On the other hand, in real industrial processes it is often possible to observe only a certain part of the complete state of the dynamical system. It is therefore important to determine whether or not control of the complete state of the dynamical system is possible. This is where the concepts of complete controllability and approximate controllability arise. Roughly speaking, controllability means that it is possible to steer a dynamical system from an arbitrary initial state to an arbitrary final state using the set of admissible controls. Controllability is also strongly connected with the theory of minimal realization of linear time-invariant control systems.

It is well known that controllability of deterministic equations is widely used in many fields of science and technology. Kalman [1] introduced the concept of controllability for finite-dimensional deterministic linear control systems. Barnett [2] and Curtain [3] then developed the concepts of deterministic control theory in finite- and infinite-dimensional spaces. Balachandran [4] and Dauer et al. [5] studied the controllability of nonlinear systems in infinite-dimensional spaces. However, in many cases some kind of randomness can appear in the problem, so that the system should be modelled in a stochastic form. Only a few authors have studied extensions of deterministic controllability concepts to stochastic control systems. Klamka [6] studied the controllability of linear stochastic systems in finite-dimensional spaces with and without delay in the control as well as in the state. In [7–11], Mahmudov et al. established results for the controllability of linear and semilinear stochastic systems in Hilbert spaces. Furthermore, Sakthivel et al. [12] studied the approximate controllability of nonlinear stochastic systems. Shen and Sun [13] studied the controllability of stochastic nonlinear systems with delay in the control in finite-dimensional as well as infinite-dimensional spaces.

In the last few decades there has been growing interest in problems involving retarded systems, that is, systems with retarded (delayed) arguments. Many real-life problems that have in the past been modelled by initial value problems for ordinary differential equations actually involve a significant memory effect, which can be represented in a more refined model by a differential equation incorporating retarded or delayed arguments (arguments that lag behind the current value). It is therefore natural and important to consider retarded systems, which have found many applications in mathematical physics, biology and finance.

On the other hand, Byszewski et al. [14] introduced nonlocal conditions into initial value problems and argued that the corresponding models describe the phenomena more accurately, since more information is taken into account at the onset of the experiment, thereby reducing the ill effects of a single initial measurement. Motivated by these facts, our main purpose in this paper is to study the approximate controllability of retarded semilinear stochastic systems with nonlocal conditions. To the best of our knowledge, there are no results on the approximate controllability of retarded semilinear stochastic systems with nonlocal conditions as treated in the present paper.

Let \((\Omega ,\mathfrak {I},P)\) be a complete probability space equipped with a normal filtration \(\mathfrak {I}_t\), \(t\in J=[0,T]\), generated by \(\omega \). Let \(X, U\) and \(E\) be separable Hilbert spaces, let \(A:D(A)\subset X\rightarrow X\) generate a strongly continuous compact semigroup \(S(t)\) (see [15]), and let \(B:U\rightarrow X\) be a continuous linear operator. Suppose that \(\omega \) is a \(Q\)-Wiener process on \((\Omega ,\mathfrak {I}_T,P)\) with covariance operator \(Q\) such that \(trQ<\infty \). We assume that there exist a complete orthonormal system \(\{e_n\}\) in \(E\), a bounded sequence of nonnegative real numbers \(\{\lambda _n\}\) such that \(Q e_n=\lambda _n e_n\), \(n=1,2,\ldots \), and a sequence \(\{\beta _n\}\) of independent Brownian motions such that

$$\begin{aligned} w(t) = \sum _{n=1}^{\infty }\sqrt{\lambda _n}\beta _n(t)e_n,\quad e_n\in E, t\in J \end{aligned}$$

and \(\mathfrak {I}_t = {\mathfrak {I}_t}^{\omega }\), where \({\mathfrak {I}_t}^{\omega }\) is the \(\sigma \)-algebra generated by \(\omega \). Let \({L_2}^0 = L_2(Q^{1/2}E;X)\) be the space of all Hilbert–Schmidt operators from \(Q^{1/2}E\) to \(X\). Then \({L_2}^{0}\) is a separable Hilbert space equipped with the norm \(||\psi ||^2_{Q}=tr[\psi Q \psi ^*]\). Let \(L_2(\Omega ,\mathfrak {I}_t,X)\) be the space of \(\mathfrak {I}_t\)-measurable square integrable random variables with values in the Hilbert space \(X\), and let \(L_2^{\mathfrak {I}}(J,X)\) be the space of all \(\mathfrak {I}_t\)-adapted, \(X\)-valued measurable square integrable processes on \(J\times \Omega \). Let \(C([0,T];L^2(\mathfrak {I},X))\) be the Banach space of continuous maps from \([0,T]\) into \(L^2(\mathfrak {I},X)\) satisfying \(\displaystyle \sup _{t\in J}\mathbb {E}||x(t)||^2<\infty \).
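For orientation, a brief standard computation shows that the process \(w\) defined by the series above does have covariance operator \(Q\): for \(a,b\in E\) and \(s,t\in J\), using \(\mathbb {E}[\beta _n(t)\beta _m(s)]=\delta _{nm}(t\wedge s)\) and \(Qa=\sum _{n}\lambda _n<a,e_n>e_n\),

$$\begin{aligned} \mathbb {E}\big [<w(t),a><w(s),b>\big ]=\sum _{n=1}^{\infty }\lambda _n(t\wedge s)<e_n,a><e_n,b>=(t\wedge s)<Qa,b>. \end{aligned}$$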

Let \(X_2\) be the closed subspace of \(C([0,T];L^2(\mathfrak {I},X))\) consisting of measurable, \(\mathfrak {I}_t\)-adapted, \(X\)-valued processes \(\phi \in C([0,T];L^2(\mathfrak {I},X))\), endowed with the norm

$$\begin{aligned} ||\phi ||_{X_2} = {\left( \sup _{t\in [0,T]}\mathbb {\mathbb {E}}||\phi (t)||_{X}^2\right) }^{1/2} \end{aligned}$$

In this paper we examine the approximate controllability of the following retarded semilinear stochastic system with nonlocal conditions:

$$\begin{aligned} \left. \begin{array}{rcl}\mathrm{d}x(t)&=&[Ax(t)+Bu(t)+f(t,x_t)]\,\mathrm{d}t+\sigma (t,x_t)\, \mathrm{d}\omega (t),\quad t \in (0,T]\\ x(t)&=&\psi (t),\quad t \in [-h,0),\qquad x(0)=x_0+g(x) \end{array} \right\} \end{aligned}$$
(1.1)

where the state \(x(t)\in L_2(\Omega ,\mathfrak {I}_t,X)\) and the control \(u(t)\in L_2^\mathfrak {I}(J,U)\). Here \(x_t\in L^2([-h,0],X)\) is defined by \(x_t(s)=x(t+s)\), \(-h\le s\le 0\), and \(\psi =\{\psi (s):-h\le s\le 0\}\in L^2([-h,0],X)\). Moreover, \(f:J\times X\rightarrow X\) is a nonlinear function, \(\sigma :J\times X\rightarrow L_2^{0}\) is a nonlinear function and \(g:C(J,X)\rightarrow X\) is a continuous function.

For simplicity, we assume that the set of admissible controls is \(U_{ad} = L_2^{\mathfrak {I}}(J,U)\).

2 Preliminaries

It is well known that, for given initial conditions, any admissible control \(u\in U_{ad}\) and suitable nonlinear functions \(f(t,x_t)\) and \(\sigma (t,x_t)\), there exists a unique mild solution \(x(t;x_0,u)\in L_2(\Omega ,\mathfrak {I}_t,X)\), \(t\in [-h,T]\), of the semilinear stochastic state equation (1.1), which can be represented in the following integral form:

$$\begin{aligned} x(t;x_0,u)=\left\{ \begin{array}{l}S(t)(x_0+g(x))+\displaystyle \int _{0}^{t}S(t-s)(Bu(s)+f(s,x_s))\mathrm{d}s\\ \quad +\displaystyle \int _{0}^{t}S(t-s)\sigma (s,x_s)\mathrm{d}\omega (s)\quad \text{ for } t\ge 0\\ \psi (t) \quad \text{ for } t\in [-h,0) \end{array}\right. \end{aligned}$$
(2.1)

Let us introduce the following operators and sets. Define \(L_T\in \mathbb {L}(U_{ad}, L_2(\Omega ,\mathfrak {I}_T,X))\) by

$$\begin{aligned} L_Tu=\int _{0}^{T}S(T-s)Bu(s)ds \end{aligned}$$

where \(\mathbb {L}(X,Y)\) denotes the set of bounded linear operators from \(X\) to \(Y\).

Then it can be seen that the adjoint operator \(L_T^*: L_2(\Omega ,\mathfrak {I}_T,X)\rightarrow U_{ad}\) is given by

$$\begin{aligned} L_T^*z=B^*S^*(T-t)\mathbb {E}\{z|\mathfrak {I}_t\} \end{aligned}$$
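Indeed, for \(u\in U_{ad}\) and \(z\in L_2(\Omega ,\mathfrak {I}_T,X)\), a short duality computation, using the fact that \(u(t)\) is \(\mathfrak {I}_t\)-measurable, gives

$$\begin{aligned} \mathbb {E}<L_Tu,z>=\mathbb {E}\int _{0}^{T}<u(t),B^*S^*(T-t)z>\mathrm{d}t=\mathbb {E}\int _{0}^{T}<u(t),B^*S^*(T-t)\mathbb {E}\{z|\mathfrak {I}_t\}>\mathrm{d}t, \end{aligned}$$

which identifies \(L_T^*z\) as stated.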

The set of all states reachable in time \(T\) from the initial state \(x(0)=x_0\in L_2(\Omega ,\mathfrak {I}_0,X)\), using admissible controls, is defined as

$$\begin{aligned} R_T(U_{ad})&= \{x(T;x_0,u)\in L_2(\Omega ,\mathfrak {I}_T,X):u\in U_{ad}\},\quad \text{ where }\\ x(T;x_0,u)&= S(T)(x_0+g(x))+\int _{0}^{T}S(T-s)B u(s)\mathrm{d}s\\&+\int _{0}^{T}S(T-s)f(s,x_s)\mathrm{d}s+\displaystyle \int _{0}^{T}S(T-s)\sigma (s,x_s)\mathrm{d}\omega (s) \end{aligned}$$

Let us now introduce the linear controllability operator \(\Pi _0^T \in \mathbb {L}(L_2(\Omega ,\mathfrak {I}_T,X),\) \(L_2(\Omega ,\mathfrak {I}_T,X))\), as follows:

$$\begin{aligned} \Pi _0^T \{.\}&= L_T(L_T)^*\{.\}\\&= \int _{0}^{T}S(T-t)BB^*S^*(T-t)\mathbb {E}\{.|\mathfrak {I}_t\}\mathrm{d}t \end{aligned}$$

The corresponding controllability operator for the deterministic model is

$$\begin{aligned} \Gamma _s^T&= L_T(s) L_T^*(s)\\&= \int _{s}^{T}S(T-t)BB^*S^*(T-t)\mathrm{d}t \end{aligned}$$
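Note that if \(z\in L_2(\Omega ,\mathfrak {I}_T,X)\) is deterministic (\(\mathfrak {I}_0\)-measurable), then \(\mathbb {E}\{z|\mathfrak {I}_t\}=z\) for all \(t\), and hence

$$\begin{aligned} \Pi _0^T z=\int _{0}^{T}S(T-t)BB^*S^*(T-t)z\,\mathrm{d}t=\Gamma _0^T z, \end{aligned}$$

so on deterministic elements the stochastic controllability operator reduces to the deterministic controllability Grammian \(\Gamma _0^T\).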

Definition 2.1

The stochastic dynamic system (1.1) is said to be approximately controllable on \([0,T]\) if

$$\begin{aligned} \overline{R_T(U_{ad})}=L_2(\Omega ,\mathfrak {I}_T,X) \end{aligned}$$

Lemma 1

[16] Let \(G:J\times \Omega \rightarrow {L_2}^0\) be a strongly measurable mapping such that \(\displaystyle \int _{0}^{T}\mathbb {E}||G(t)||^p_{L_2^0} \mathrm{d}t<\infty \). Then

$$\begin{aligned} \mathbb {E}\bigg |\bigg |\int _{0}^{t}G(s)\mathrm{d}\omega (s)\bigg |\bigg |^p\le L_G\int _{0}^{t}\mathbb {E}||G(s)||^p \mathrm{d}s, \end{aligned}$$
(2.2)

for all \(t\in J\) and \(p\ge 2\), where \(L_G\) is a positive constant depending only on \(p\) and \(T\).

Lemma 2

Schwarz inequality: Let \(\psi _1(x)\) and \(\psi _2(x)\) be any two real-valued square integrable functions on \([a,b]\). Then

$$\begin{aligned} \bigg [\int _{a}^{b}\psi _1(x)\psi _2(x)\mathrm{d}x\bigg ]^2\le \int _{a}^{b}[\psi _1(x)]^2\mathrm{d}x\int _{a}^{b}[\psi _2(x)]^2\mathrm{d}x \end{aligned}$$

3 Main result

In this section, it will be shown that the system (1.1) is approximately controllable under appropriate conditions. Sufficient conditions will be given under which the solutions of (1.1) can be steered arbitrarily close to \(x_T\) at time \(T\).

In order to prove our main results, we assume the following hypotheses:

  1. (i)

    The functions \(f:J\times X\rightarrow X\) and \(\sigma :J\times X\rightarrow L_{2}^{0}\) satisfy Lipschitz and linear growth conditions; that is, there exist positive constants \(L_1, L_2, L_3\) and \(L_4\) such that

    $$\begin{aligned}&||f(t,x_t)-f(t,y_t)||^2\le L_1||x_t-y_t||^2,\\&||\sigma (t,x_t)-\sigma (t,y_t)||^2_{L_2^0}\le L_2||x_t-y_t||^2\\&||f(t,x_t)||^2\le L_3(1+||x_t||^2), \quad ||\sigma (t,x_t)||^2_{L_2^0}\le L_4(1+||x_t||^2) \end{aligned}$$
  2. (ii)

    The function \(g\) is continuous and there exists a positive constant \(L_g\) such that

    $$\begin{aligned} ||g(x)-g(y)||^2\le L_g ||x-y||^2, \quad ||g(x)||^2\le L_g(1+||x||^2)\\ \end{aligned}$$

    for all \(x,y\in C(J,X)\)

  3. (iii)

    For each \(0\le s<T\), the operator \(\alpha (\alpha I+\Gamma _s^T)^{-1}\rightarrow 0\) in the strong operator topology as \(\alpha \rightarrow 0^+\), where

    $$\begin{aligned} \Gamma _s^T&= \int _{s}^{T}S(T-t)B B^*S^*(T-t)\mathrm{d}t \end{aligned}$$

    is the controllability Grammian. Observe that the linear deterministic system corresponding to (1.1)

    $$\begin{aligned} \left. \begin{array}{rcl}\mathrm{d}x(t)&=&[Ax(t)+B u(t)]\,\mathrm{d}t,\quad t\in J\\ x(0)&=&x_0 \end{array}\right\} \end{aligned}$$
    (3.1)

    is approximately controllable on \([s,T]\) iff the operator \(\alpha (\alpha I+\Gamma _s^T)^{-1}\rightarrow 0\) strongly as \(\alpha \rightarrow 0^+\) [7].
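To see why this resolvent condition is natural, consider the simple finite-dimensional situation (used here only as an illustration) in which \(\Gamma _s^T\) is a symmetric positive definite matrix with eigenvalues \(\lambda _i(s)\ge \lambda _{min}(s)>0\). Then

$$\begin{aligned} ||\alpha (\alpha I+\Gamma _s^T)^{-1}||=\max _{i}\frac{\alpha }{\alpha +\lambda _i(s)}=\frac{\alpha }{\alpha +\lambda _{min}(s)}\rightarrow 0 \quad \text{ as } \alpha \rightarrow 0^+, \end{aligned}$$

so nonsingularity of the controllability Grammian implies condition (iii); this is the mechanism used in Example 2 below.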

Now, for convenience, let us introduce the notation

$$\begin{aligned}&l_1=max\{||S(t)||^2:t\in [0,T]\},\quad \quad l_2=||B||^2\\&M=max\{||\Gamma _{s}^{T}||^2: s\in [0,T]\} \end{aligned}$$

Let us recall two lemmas concerning approximate controllability, which will be used in the proof.

The following lemma is required to define the control function

Lemma 3

For any \(x_T\in L_2(\Omega ,\mathfrak {I}_T,X)\), there exists \(\tilde{\phi }\in L_2^\mathfrak {I}(J,{L_2}^0) \) such that \(x_T=\mathbb {E}x_T+\displaystyle \int _{0}^{T}\tilde{\phi }(s)\mathrm{d}\omega (s)\) (see [7]).

Now for any \(\alpha >0\) and \(x_T\in L_2(\Omega ,\mathfrak {I}_T,X)\), we define the control function

$$\begin{aligned} u^{\alpha }(t,x)&= B^*S^*(T-t)\bigg ((\alpha I+\Gamma _{0}^{T})^{-1}(\mathbb {E}x_T-S(T)(x_0+g(x))\nonumber \\&+\int _{0}^{t}(\alpha I+\Gamma _{s}^{T})^{-1}\tilde{\phi }(s)\mathrm{d}\omega (s)\bigg )\nonumber \\&-B^*S^*(T-t)\int _{0}^{t}(\alpha I+\Gamma _{s}^{T})^{-1}S(T-s)f(s,x_s)\mathrm{d}s\nonumber \\&-B^*S^*(T-t)\int _{0}^{t}(\alpha I+\Gamma _{s}^{T})^{-1}S(T-s)\sigma (s,x_s)\mathrm{d}\omega (s) \end{aligned}$$
(3.2)

Lemma 4

There exists a positive constant \(M_u\) such that for all \(x,y\in X_2\), we have

$$\begin{aligned}&\mathbb {E}||u^\alpha (t,x)-u^\alpha (t,y)||^2\le \frac{M_u}{\alpha ^2}||x-y||^2_{X_2}\\&\mathbb {E}||u^\alpha (t,x)||^2\le \frac{M_u}{\alpha ^2}\left( 1+||x||_{X_2}^2\right) \end{aligned}$$

Proof

Let \(x,y\in X_2\). From Lemmas 1 and 2 and the assumptions on the data, we obtain

\(\mathbb {E}||u^\alpha (t,x)-u^\alpha (t,y)||^2\)

$$\begin{aligned}&\le 3\mathbb {E}\big |\big |B^*S^*(T-t)(\alpha I+\Gamma _0^T)^{-1}S(T)[g(y)-g(x)]\big |\big |^2\\&+\,3\mathbb {E}\bigg |\bigg |B^*S^*(T-t)\int _{0}^{t}(\alpha I+\Gamma _s^T)^{-1}S(T-s)[f(s,x_s)-f(s,y_s)]\mathrm{d}s\bigg |\bigg |^2\\&+\,3\mathbb {E}\bigg |\bigg |B^*S^*(T-t)\int _{0}^{t}(\alpha I+\Gamma _s^T)^{-1}S(T-s)[\sigma (s,x_s) -\sigma (s,y_s)]\mathrm{d}\omega (s)\bigg |\bigg |^2\\&\le \frac{3}{\alpha ^2}||B||^2l_1^2\bigg [L_g ||x{-}y||^2_{X_2}{+}t\int _{0}^{t}L_1\mathbb {E}||x_s{-}y_s||^2\mathrm{d}s +L_\sigma \int _{0}^{t}L_2\mathbb {E}||x_s-y_s||^2\mathrm{d}s\bigg ]\\&\le \frac{3}{\alpha ^2}||B||^2l_1^2\left[ L_g ||x-y||^2_{X_2}{+}T L_1T(T{+}h)||x{-}y||^2_{X_2}{+}L_\sigma L_2T(T{+}h)||x{-}y||^2_{X_2}\right] \\&\le \frac{3}{\alpha ^2}l_2l_1^2[L_g+T^2L_1(T+h)+L_\sigma L_2T(T+h)]||x-y||^2_{X_2}\\&= \frac{M_u}{\alpha ^2}||x-y||^2_{X_2} \end{aligned}$$

where \(M_u=3l_2l_1^2[L_g+T^2L_1(T+h)+L_\sigma L_2T(T+h)]\). Here we have used that

$$\begin{aligned} \int _{0}^{t}\mathbb {E}||x_s-y_s||^2\mathrm{d}s&\le \int _{0}^{T}\mathbb {E}\int _{-h}^{0}||x(s+\theta )-y(s+\theta )||^2\mathrm{d}\theta \mathrm{d}s\\&= \int _{0}^{T}\mathbb {E}\int _{s-h}^{s}||x(v)-y(v)||^2\mathrm{d}v\mathrm{d}s\\&\le \int _{0}^{T}\mathbb {E}\int _{-h}^{T}||x(v)-y(v)||^2\mathrm{d}v\mathrm{d}s\\&\le T\left( \int _{-h}^{T}\mathbb {E}||x(v)-y(v)||^2\mathrm{d}v\right) \\&\le T(T+h)\sup _{v\in [-h,T]} \mathbb {E}||x(v)-y(v)||^2\\&= T(T+h)||x-y||^2_{X_2} \end{aligned}$$

The second inequality follows from a similar computation, using the linear growth conditions of assumptions (i) and (ii) instead of the Lipschitz conditions. This completes the proof of the lemma. \(\square \)

For any \(\alpha >0\), define the operator \(\mathbf {P_\alpha }:X_2\rightarrow X_2\) by

$$\begin{aligned} (\mathbf {P_\alpha }x)(t)=\left\{ \begin{array}{l}S(t)(x_0+g(x))+\displaystyle \int _{0}^{t}S(t-s)[Bu^\alpha (s,x)+f(s,x_s)]\mathrm{d}s\\ \quad +\displaystyle \int _{0}^{t}S(t-s)\sigma (s,x_s)\mathrm{d}\omega (s)\quad \text{ for } t\ge 0\\ \psi (t) \quad \text{ for } t\in [-h,0) \end{array}\right. \end{aligned}$$
(3.3)

To prove the approximate controllability, we first prove in Theorem 3.1 the existence of a fixed point of the operator \(\mathbf {P}_\alpha \) defined above, using the contraction mapping principle. Then, in Theorem 3.2, we show that under certain assumptions the approximate controllability of the system (1.1) is implied by the approximate controllability of the corresponding linear system.

Theorem 3.1

Under hypotheses \((i)\)–\((iii)\), the system (1.1) has a mild solution on \([0,T]\).

Proof

The proof of this theorem is divided into several steps.

Step 1: For any \(x\in X_2\), \(\mathbf {P}_\alpha (x)(t)\) is continuous on \([-h,T]\). Let \(-h\le t_1<t_2\le T\). Using Lemmas 1 and 2 and the assumptions of the theorem, we have

$$\begin{aligned}&\mathbb {E}||(\mathbf {P}_\alpha x)(t_2)-(\mathbf {P}_\alpha x)(t_1)||^2\\&\quad \le 8\bigg \{\mathbb {E}||\psi (t_2)-\psi (t_1)||^2+\mathbb {E}|| (S(t_2)-S(t_1))(x_0+g(x))||^2\\&\quad \quad +\,\mathbb {E}\bigg |\bigg |\int _{0}^{t_1} [S(t_2-s)-S(t_1-s)] f(s,x_s)\mathrm{d}s\bigg |\bigg |^2+\mathbb {E}\bigg |\bigg |\int _{t_1}^{t_2}S(t_2-s)f(s,x_s)\mathrm{d}s\bigg |\bigg |^2\\&\quad \quad +\,\mathbb {E}\bigg |\bigg |\int _{0}^{t_1}[S(t_2-s)-S(t_1-s)]\sigma (s,x_s)\mathrm{d}\omega (s)\bigg |\bigg |^2\\&\quad \quad +\,\mathbb {E}\bigg |\bigg |\int _{t_1}^{t_2}S(t_2-s)\sigma (s,x_s)\mathrm{d}\omega (s)\bigg |\bigg |^2\\&\quad \quad +\,\mathbb {E}\bigg |\bigg |\int _{0}^{t_1}[S(t_2-s)-S(t_1-s)]Bu^\alpha (s,x)\mathrm{d}s\bigg |\bigg |^2\\&\quad \quad +\,\mathbb {E}\bigg |\bigg |\int _{t_1}^{t_2}S(t_2-s)Bu^\alpha (s,x)\mathrm{d}s\bigg |\bigg |^2\bigg \}\\&\quad \le 8\bigg [\mathbb {E}||\psi (t_2)-\psi (t_1)||^2+ \mathbb {E}||(S(t_2)-S(t_1))(x_0+g(x))||^2\\&\quad \quad +\,t_1\int _{0}^{t_1}\mathbb {E}||[S(t_2-s)-S(t_1-s)] f(s,x_s)||^2\mathrm{d}s+l_1(t_2-t_1)\int _{t_1}^{t_2}\mathbb {E}||f(s,x_s)||^2\mathrm{d}s\\&\quad \quad +\,L_\sigma \int _{0}^{t_1}\mathbb {E}||[S(t_2-s)-S(t_1-s)]\sigma (s,x_s)||^2\mathrm{d}s\\&\quad \quad +\,l_1L_\sigma \int _{t_1}^{t_2}\mathbb {E}||\sigma (s,x_s)||^2\mathrm{d}s\\&\quad \quad +\,t_1\int _{0}^{t_1}\mathbb {E}||[S(t_2-s)-S(t_1-s)]Bu^\alpha (s,x)||^2\mathrm{d}s\\&\quad \quad +\,||B||^2l_1(t_2-t_1)\int _{t_1}^{t_2}\mathbb {E}||u^\alpha (s,x)||^2\mathrm{d}s\bigg ] \end{aligned}$$

Hence, using the strong continuity of \(S(t)\) and Lebesgue’s dominated convergence theorem, we conclude that the right-hand side of the above inequality tends to zero as \(t_2-t_1\rightarrow 0\). Thus \(\mathbf {P}_\alpha (x)(t)\) is continuous from the right on \([-h,T)\); a similar argument shows that it is also continuous from the left on \((-h,T]\). Hence \(\mathbf {P}_\alpha (x)(t)\) is continuous on \([-h,T]\).

Step 2: We show that \(\mathbf {P}_\alpha (X_2)\subset X_2\). Let \(x\in X_2\). Using (3.3), assumptions \((i)\)–\((ii)\) and Lemma 4, we have

$$\begin{aligned} \mathbb {E}||(\mathbf {P}_\alpha x)(t)||^2&\le 6\bigg [\mathbb {E}||\psi (t)||^2 +\mathbb {E}||S(t)x_0||^2\\&+\,\mathbb {E}||S(t)g(x)||^2+\mathbb {E}\bigg |\bigg |\int _{0}^{t}S(t-s)Bu^\alpha (s,x)\mathrm{d}s\bigg |\bigg |^2\\&+\,\mathbb {E}\bigg |\bigg |\int _{0}^{t}S(t-s)f(s,x_s)\mathrm{d}s\bigg |\bigg |^2+\mathbb {E}\bigg |\bigg |\int _{0}^{t}S(t-s)\sigma (s,x_s)\mathrm{d}\omega (s)\bigg |\bigg |^2\bigg ]\\&\le 6\bigg [\mathbb {E}||\psi (t)||^2+l_1||x_0||^2\\&+\,l_1 L_g(1+||x||^2_{X_2})+\frac{M_ul_1l_2T}{\alpha ^2}(1+||x||^2_{X_2})+l_1TL_3\bigg (1+\int _{0}^{t}\mathbb {E}||x_s||^2\mathrm{d}s\bigg )\\&+\,l_1L_4L_\sigma \bigg (1+\int _{0}^{t}\mathbb {E}||x_s||^2\mathrm{d}s\bigg )\bigg ]\\&\le 6\bigg [\mathbb {E}||\psi (t)||^2+l_1||x_0||^2+l_1L_g(1+||x||^2_{X_2})\\&+\,\frac{M_u l_1l_2T}{\alpha ^2}(1+||x||^2_{X_2})+l_1TL_3(1+T(T+h)||x||^2_{X_2})\\&+\,l_1L_4L_\sigma (1+T(T+h)||x||^2_{X_2})\bigg ]\\&= B_1+B_2||x||^2_{X_2} \end{aligned}$$

where \(B_1>0\) and \(B_2>0\) are suitable constants. Here we have used that

$$\begin{aligned} \int _{0}^{t}\mathbb {E}||x_r||^2dr&\le \int _{0}^{T}\mathbb {E}\int _{-h}^{0}||x(r+s)||^2dsdr\\&= \int _{0}^{T}\mathbb {E}\int _{r-h}^{r}||x(v)||^2\mathrm{d}vdr\\&\le \int _{0}^{T}\mathbb {E}\int _{-h}^{T}||x(v)||^2\mathrm{d}vdr\\&\le T\left( \int _{-h}^{T}\mathbb {E}||x(v)||^2\mathrm{d}v\right) \\&\le T(T+h)\sup _{ t\in [-h,T]}\mathbb {E}||x(t)||^2\\&\le T(T+h)||x||^2_{X_2} \end{aligned}$$

for all \(t\in [0,T]\). Hence \(\displaystyle \sup _{t\in [0,T]}\mathbb {E}||(\mathbf {P}_\alpha x)(t)||^2<\infty \), and therefore \(\mathbf {P_\alpha }\) maps \(X_2\) into itself.

Step 3: Now we prove that for each fixed \(\alpha >0\) the operator \(\mathbf {P}_\alpha \) has a unique fixed point in \(X_2\). We claim that there exists a natural number \(n\) such that \(\mathbf {P}^n_\alpha \) is a contraction on \(X_2\). To see this, let \(x,y\in X_2\); then for \(t\in [-h,T]\) we obtain

\(\mathbb {E}||(\mathbf {P}_\alpha x)(t)-(\mathbf {P}_\alpha y)(t)||^2\)

$$\begin{aligned}&= \mathbb {E}\bigg |\bigg |S(t)[g(x)-g(y)]+\int _{0}^{t}S(t-s)B[u^\alpha (s,x)-u^\alpha (s,y)]\mathrm{d}s\\&+\int _{0}^{t}S(t-s)[f(s,x_s)-f(s,y_s)]\mathrm{d}s\\&\displaystyle +\int _{0}^{t}S(t-s)[\sigma (s,x_s)-\sigma (s,y_s)]\mathrm{d}\omega (s)\bigg |\bigg |^2\\&\le 4\bigg (l_1L_g||x-y||^2_{X_2}+\frac{M_u T l_2l_1}{\alpha ^2}||x-y||^2_{X_2}+T l_1L_1\\&\times \int _{0}^{t}\mathbb {E}||x_s-y_s||^2\mathrm{d}s+l_1L_2L_\sigma \int _{0}^{t}\mathbb {E}||x_s-y_s||^2\mathrm{d}s\bigg )\\&\le 4\bigg (l_1L_g||x-y||^2_{X_2}+\frac{M_u T l_2l_1}{\alpha ^2}||x-y||^2_{X_2}+T l_1L_1T(T+h)||x-y||^2_{X_2}\\&+\,l_1L_2L_\sigma T(T+h)||x-y||^2_{X_2}\bigg )\\&\le 4\bigg (l_1L_g+\frac{M_u T l_2l_1}{\alpha ^2}+T l_1L_1T(T+h)+l_1L_2L_\sigma T(T+h)\bigg )||x-y||^2_{X_2} \end{aligned}$$

Hence we obtain a positive real constant \(\gamma (\alpha )\) such that

$$\begin{aligned} \mathbb {E}||(\mathbf {P}_\alpha x)(t)-(\mathbf {P}_\alpha y)(t)||^2\le \gamma (\alpha )||x-y||^2_{X_2} \end{aligned}$$

for all \(t\in [-h,T]\) and for any \(x,y\in X_2\). Moreover,

$$\begin{aligned} \mathbb {\mathbb {E}}||\mathbf {P_\alpha }^2(x)(t)-\mathbf {P_\alpha }^2(y)(t)||^2&\le \gamma (\alpha )\int _{0}^{t}\mathbb {\mathbb {E}}||\mathbf {P_\alpha }(x)(s)-\mathbf {P_\alpha }(y)(s)||^2\mathrm{d}s\\&\le \gamma (\alpha )\int _{0}^{t} \gamma (\alpha )||x-y||^2_{X_2} \mathrm{d}s\\&= \gamma ^2(\alpha )t||x-y||_{X_{2}}^{2} \end{aligned}$$

Using mathematical induction, one obtains

$$\begin{aligned} \mathbb {\mathbb {E}}||\mathbf {P_\alpha }^n(x)(t)-\mathbf {P_\alpha }^n(y)(t)||^2&\le \gamma (\alpha )\int _{0}^{t}\mathbb {\mathbb {E}}||\mathbf {P_\alpha }^{n-1}(x)(s)-\mathbf {P_\alpha }^{n-1}(y)(s)||^2\mathrm{d}s\\&\le \dfrac{(t^{n-1}) (\gamma (\alpha ))^n}{(n-1)!}||x-y||_{X_2}^{2} \end{aligned}$$

In general,

$$\begin{aligned} ||\mathbf {P_\alpha }^n(x)-\mathbf {P_\alpha }^n(y)||_{X_2}^{2} \le \dfrac{T^{n-1}(\gamma (\alpha ))^n}{(n-1)!}||x-y||_{X_2}^{2} \end{aligned}$$

For any fixed \(\alpha >0\), the quantities \(T^{n-1}(\gamma (\alpha ))^n/(n-1)!\) tend to zero as \(n\rightarrow \infty \) (they are, up to the factor \(\gamma (\alpha )\), the terms of the convergent exponential series for \(e^{\gamma (\alpha )T}\)), so there exists \(n\) such that \( \dfrac{T^{n-1}(\gamma (\alpha ))^n}{(n-1)!}<1\). It follows that \(\mathbf {P}^n_\alpha \) is a contraction mapping for sufficiently large \(n\). Then, by the contraction mapping principle, the operator \(\mathbf {P}_\alpha \) has a unique fixed point \(x^\alpha \) in \(X_2\), which is a mild solution of (1.1). \(\square \)

Theorem 3.2

If assumptions \((i)\)–\((iii)\) are satisfied, \(\{S(t):t\ge 0\}\) is compact and \(f\) and \(\sigma \) are uniformly bounded, then the system (1.1) is approximately controllable on \([0,T]\).

Proof

Let \(x^\alpha \) be the fixed point of \(\mathbf {P}_\alpha \) in \(X_2\). Using the stochastic Fubini theorem, it is easy to see that

$$\begin{aligned} x^\alpha (T)&= x_T-\alpha (\alpha I+\Gamma _{0}^{T})^{-1}\bigg (\mathbb {E}x_T-S(T)(x_0+g(x^\alpha ))\bigg )\\&+\,\alpha \int _{0}^{T}(\alpha I+\Gamma _{s}^{T})^{-1}S(T-s)f(s,x^\alpha _s)\mathrm{d}s\\&+\,\alpha \int _{0}^{T}(\alpha I+\Gamma _{s}^{T})^{-1}S(T-s)\sigma (s,x^\alpha _s)\mathrm{d}\omega (s)\\&-\,\alpha \int _{0}^{T}(\alpha I+\Gamma _{s}^{T})^{-1}\tilde{\phi }(s)\mathrm{d}\omega (s) \end{aligned}$$
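For the reader’s convenience, we sketch the algebra behind this identity. Substituting the control (3.2) into the mild solution (3.3) at \(t=T\), interchanging the order of integration by the stochastic Fubini theorem and using Lemma 3, the key step is the elementary resolvent identity

$$\begin{aligned} \Gamma _s^T(\alpha I+\Gamma _s^T)^{-1}=\big ((\alpha I+\Gamma _s^T)-\alpha I\big )(\alpha I+\Gamma _s^T)^{-1}=I-\alpha (\alpha I+\Gamma _s^T)^{-1}; \end{aligned}$$

the identity terms recombine with \(S(T)(x_0+g(x^\alpha ))\), \(\mathbb {E}x_T\) and \(\int _0^T\tilde{\phi }(s)\mathrm{d}\omega (s)\) to produce \(x_T\), leaving exactly the four \(\alpha \)-terms displayed above.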

By the assumption that \(f\) and \(\sigma \) are uniformly bounded, there exists \(D>0\) such that

$$\begin{aligned} ||f(s,x^\alpha _s)||^2+||\sigma (s,x^\alpha _s)||^2\le D \end{aligned}$$

Then there is a subsequence, still denoted by \(\{f(s,x^\alpha _s),\sigma (s,x^\alpha _s)\}\), converging weakly to some \(\{f(s),\sigma (s)\}\) in \(X\times L_2^0\).

Now the compactness of \(S(t)\) implies that \(S(T-s)f(s,x^\alpha _s)\rightarrow S(T-s)f(s)\) and \(S(T-s)\sigma (s,x^\alpha _s)\rightarrow S(T-s)\sigma (s)\) almost everywhere in \(J\times \Omega \).

From the above equation, we obtain

$$\begin{aligned} \mathbb {E}||x^\alpha (T)-x_T||^2&\le 6||\alpha (\alpha I+\Gamma _{0}^{T})^{-1}||^2\;||\mathbb {E}x_T-S(T)(x_0+g(x^\alpha ))||^2\\&+\,6\mathbb {E}\bigg (\int _{0}^{T}||\alpha (\alpha I+\Gamma _{s}^{T})^{-1}\tilde{\phi }(s)||^2\mathrm{d}s\bigg )\\&+\,6\mathbb {E}\bigg (\int _{0}^{T}||\alpha (\alpha I+\Gamma _{s}^{T})^{-1}||\;||S(T-s)[f(s,x^\alpha _s)-f(s)]||\mathrm{d}s\bigg )^2\\&+\,6\mathbb {E}\bigg (\int _{0}^{T}||\alpha (\alpha I+\Gamma _{s}^{T})^{-1}S(T-s)f(s)||\mathrm{d}s\bigg )^2\\&+\,6\mathbb {E}\bigg (\int _{0}^{T}||\alpha (\alpha I+\Gamma _{s}^{T})^{-1}||^2\;||S(T-s)[\sigma (s,x^\alpha _s)-\sigma (s)] ||^2\mathrm{d}s\bigg )\\&+\,6\mathbb {E}\bigg (\int _{0}^{T}\bigg |\bigg |\alpha (\alpha I+\Gamma _{s}^{T})^{-1}S(T-s)\sigma (s)\bigg |\bigg |^2\mathrm{d}s\bigg ) \end{aligned}$$

By assumption \((iii)\), for all \(0\le s<T\) the operator \(\alpha (\alpha I+\Gamma _s^T)^{-1}\rightarrow 0\) strongly as \(\alpha \rightarrow 0^+\), and moreover \(||\alpha (\alpha I+\Gamma _s^T)^{-1}||\le 1\). Thus, by the Lebesgue dominated convergence theorem, we obtain \(\mathbb {E}||x^\alpha (T)-x_T||^2\rightarrow 0\) as \(\alpha \rightarrow 0^+\). This gives the approximate controllability of (1.1). \(\square \)

Remark 3.1

Consider the time-varying retarded semilinear stochastic differential equation in finite-dimensional spaces with nonlocal conditions of the form

$$\begin{aligned} \left. \begin{array}{rcl} \mathrm{d}x(t)&=&[A(t)x(t)+B(t)u(t)+f(t,x_t)]\,\mathrm{d}t+\sigma (t,x_t)\, \mathrm{d}\omega (t),\quad t \in (0,T]\\ x(t)&=&\psi (t),\quad t \in [-h,0),\qquad x(0)=x_0+g(x) \end{array}\right\} \end{aligned}$$
(3.4)

where \(A(t)\) and \(B(t)\) are \(n\times n\) and \(n\times m\) matrices, respectively, and \(f, \sigma \) and \(g\) are defined as before. For \(t\ge 0\), the mild solution of (3.4) is

$$\begin{aligned} x(t;x_0,u)&= \phi (t,0)(x_0+g(x))+\int _{0}^{t} \phi (t,s)B(s)u(s)\mathrm{d}s\\&+\int _{0}^{t} \phi (t,s)f(s,x_s)\mathrm{d}s+\int _{0}^{t}\phi (t,s)\sigma (s,x_s)\mathrm{d}\omega (s) \end{aligned}$$
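Here \(\phi (t,s)\) denotes the state transition matrix generated by \(A(t)\); for completeness, it is characterized by

$$\begin{aligned} \frac{\partial }{\partial t}\phi (t,s)=A(t)\phi (t,s),\qquad \phi (s,s)=I, \end{aligned}$$

and the controllability Grammian of the corresponding linear time-varying system is \(\Gamma _s^T=\displaystyle \int _{s}^{T}\phi (T,t)B(t)B^*(t)\phi ^*(T,t)\mathrm{d}t\), so that assumption (iii) reads exactly as before with \(S(T-t)\) replaced by \(\phi (T,t)\).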

If the functions \(f\), \(\sigma \) and \(g\) satisfy assumptions \((i)\) and \((ii)\) and the corresponding linear system is approximately controllable, then, by suitably adapting the above theorems, one can show that the system (3.4) is approximately controllable.

4 Examples

Example 1

Consider the retarded stochastic heat equation with nonlocal conditions

$$\begin{aligned} \left. \begin{array}{l} \mathrm{d}_{t}z(t,\theta )=[z_{\theta \theta }(t,\theta )+B u(t,\theta )+p(t,z_t)]\,\mathrm{d}t+k(t,z_t)\,\mathrm{d}\omega (t),\quad 0< t\le T,\ 0<\theta <\pi \\ z(t,0)=z(t,\pi )=0,\quad 0\le t\le T\\ z(t,\theta )=\psi (t,\theta ),\quad -h\le t<0,\ 0\le \theta \le \pi \\ z(0,\theta )+\displaystyle \sum _{i=1}^{n}\alpha _i z(t_i,\theta )=z_0(\theta ),\quad 0\le \theta \le \pi \end{array}\right\} \end{aligned}$$
(4.1)

where \(B\) is a bounded linear operator from a Hilbert space \(U\) into \(X\) (constructed below), \(z_t\in L^2([-h,0],X)\) is defined by \(z_t(s)=z(t+s)\), \(-h\le s\le 0\), and \(\psi \in L^2([-h,0],X)\). The functions \(p:J\times X\rightarrow X\) and \(k:J\times X\rightarrow L^0_2\) are continuous and uniformly bounded, \(u(t)\) is the control and \(\omega \) is a \(Q\)-Wiener process.

Let \(X=L_2[0,\pi ]\), and let \(A:D(A)\subset X\rightarrow X\) be an operator defined by

$$\begin{aligned} Az=z_{\theta \theta } \end{aligned}$$

with domain

$$\begin{aligned} D(A)=\{z(.)\in X|z,z_\theta \text{ are } \text{ absolutely } \text{ continuous } ,z_{\theta \theta }\in X,z(0)=z(\pi )=0\} \end{aligned}$$

Furthermore, \(A\) has a discrete spectrum; the eigenvalues are \(-n^2\), \(n=1,2,\ldots \), with corresponding normalized eigenvectors \(e_n(\theta )=(2/\pi )^{1/2}\sin (n\theta )\). Then

$$\begin{aligned} Az=\displaystyle \sum _{n=1}^{\infty }-n^2<z,e_n>e_n,\quad z\in X \end{aligned}$$

It is known that \(A\) generates a compact semigroup \(S(t)\), \(t>0\), on \(X\), which is given by

$$\begin{aligned} S(t)z = \sum _{n=1}^{\infty }e^{-n^2 t}<z,e_n>e_n(\theta ),\quad z\in X \end{aligned}$$
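In particular, a direct computation with this expansion gives, for every \(z\in X\) and \(t\ge 0\),

$$\begin{aligned} ||S(t)z||^2=\sum _{n=1}^{\infty }e^{-2n^2 t}|<z,e_n>|^2\le e^{-2t}||z||^2\le ||z||^2, \end{aligned}$$

so \(||S(t)||\le 1\) and the constant \(l_1=\max \{||S(t)||^2:t\in [0,T]\}\) equals \(1\) in this example.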

Let \(f:J\times X\rightarrow X\) be defined by

$$\begin{aligned} f(t,x_t)(\theta )=p(t,x_t(\theta )),\quad (t,x_t)\in J\times X,\theta \in [0,\pi ]. \end{aligned}$$

Let \(\sigma :J\times X\rightarrow L^0_2\) be defined by

$$\begin{aligned} \sigma (t,x_t)(\theta )=k(t,x_t(\theta )),\quad (t,x_t)\in J\times X,\theta \in [0,\pi ]. \end{aligned}$$

The function \(g:C(J,X)\rightarrow X\) is defined as

$$\begin{aligned} g(z)(\theta )=\sum _{i=1}^{n}\alpha _i z(t_i,\theta ) \end{aligned}$$

for \(0<t_i<T\) and \(\theta \in [0,\pi ]\).

With these choices of \(A, B, f,\sigma \) and \(g\), the system (4.1) can be written in the abstract form (1.1), and the conditions in \((i)\) and \((ii)\) are satisfied.

Now define an infinite-dimensional space

$$\begin{aligned} U =\bigg \{u:u=\sum _{n=2}^{\infty }u_n e_n(\theta )\;\;|\sum _{n=2}^{\infty }u_n^2<\infty \bigg \} \end{aligned}$$

with the norm defined by

$$\begin{aligned} ||u||_U=\bigg (\sum _{n=2}^{\infty }u_n^2\bigg )^{1/2} \end{aligned}$$

and a continuous linear mapping \(B\) from \(U\) into \(X\) as follows:

$$\begin{aligned} Bu=2u_2e_1(\theta )+\sum _{n=2}^{\infty }u_n e_n(\theta ) \end{aligned}$$

It is obvious that for \(u(t,\theta ,\omega )=\displaystyle \sum _{n=2}^{\infty }u_n(t,\omega ) e_n(\theta )\in L^\mathfrak {I}_2(J,U)\)

$$\begin{aligned} B u(t)=2u_2(t)e_1(\theta )+\displaystyle \sum _{n=2}^{\infty }u_n(t) e_n(\theta ) \in L^\mathfrak {I}_2(J,X). \end{aligned}$$

Moreover

$$\begin{aligned}&B^*v=(2v_1+v_2)e_2(\theta )+\sum _{n=3}^{\infty }v_n e_n(\theta ),\\&B^*S^*(t)z=(2z_1e^{-t}+z_2e^{-4t})e_2(\theta )+\sum _{n=3}^{\infty }z_n e^{-n^2t} e_n(\theta ), \end{aligned}$$

for \(v =\sum _{n=1}^{\infty }v_n e_n(\theta )\) and \(z=\sum _{n=1}^{\infty }z_n e_n(\theta )\).
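The expression for \(B^*\) follows from a short adjoint computation: for \(u=\sum _{n=2}^{\infty }u_ne_n\in U\) and \(v=\sum _{n=1}^{\infty }v_ne_n\in X\),

$$\begin{aligned} <Bu,v>=2u_2v_1+\sum _{n=2}^{\infty }u_nv_n=u_2(2v_1+v_2)+\sum _{n=3}^{\infty }u_nv_n=<u,B^*v>_U, \end{aligned}$$

which gives the stated formula for \(B^*v\); the formula for \(B^*S^*(t)z\) then follows since \(S(t)\) is self-adjoint, so that \(S^*(t)z=\sum _{n=1}^{\infty }e^{-n^2t}z_ne_n\).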

If \(||B^*S^*(t)z||=0\) for all \(t\in [0,T]\), it follows that

$$\begin{aligned} ||2z_1 e^{-t}+z_2e^{-4t}||^2+\displaystyle \sum _{n=3}^{\infty }||z_n e^{-n^2 t}||^2=0,\quad t\in [0,T] \end{aligned}$$

which gives \(z_n=0\) for \(n\ge 3\) and, since \(e^{-t}\) and \(e^{-4t}\) are linearly independent on \([0,T]\), also \(z_1=z_2=0\); hence \(z=0\).

Thus, by Theorem 4.1.7 of [3], the deterministic linear system corresponding to (4.1) is approximately controllable on \([0,T]\). Therefore, by Theorem 3.2, the system (4.1) is approximately controllable, provided that \(f,\sigma \) and \(g\) satisfy assumptions \((i)\) and \((ii)\).

Example 2

Consider a two-dimensional retarded semi-linear stochastic system

$$\begin{aligned} \left. \begin{array}{rcl}\mathrm{d}x(t)&=&[Ax(t)+Bu(t)+f(t,x_t)]\,\mathrm{d}t+\sigma (t,x_t)\, \mathrm{d}\omega (t),\quad t \in (0,T]\\ x(t)&=&\psi (t),\quad t \in [-h,0),\qquad x(0)=x_0+g(x) \end{array}\right\} \end{aligned}$$
(4.2)

where \(\omega (t)\) is a one dimensional Wiener process, \(x=(x_1,x_2)\in R^2\) and

$$\begin{aligned}&A = \left[ \begin{array}{rr} -1 & 0 \\ 0 & 1 \end{array}\right] , \quad B =\left[ \begin{array}{rr} 1.2 & -0.2 \\ 0.6 & 2.4 \end{array}\right] ,\\&f(t,x_t)=\frac{1}{a}\left[ \begin{array}{c} \sin x_t \\ x_t \end{array}\right] , \quad \sigma (t,x_t)=\frac{1}{b}\left[ \begin{array}{cc} x_t & 0 \\ 0 &\cos x_t \end{array}\right] . \end{aligned}$$

The controllability Grammian can be computed as

$$\begin{aligned} \Gamma _0^T&=\int _{0}^{T}\exp (A(T-s))BB^*\exp (A^*(T-s))\mathrm{d}s\\&=\left[ \begin{array}{cc} 0.74-0.74e^{-2T} & 0.24T \\ 0.24T & -3.06+3.06e^{2T} \end{array}\right] \end{aligned}$$

which is nonsingular for \(T>0\).
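This can also be checked without evaluating the Grammian explicitly; the following brief verification uses only the data given above. Since

$$\begin{aligned} \det B=1.2\times 2.4-(-0.2)\times 0.6=2.88+0.12=3\ne 0, \end{aligned}$$

the matrix \(B\) has full rank, so the pair \((A,B)\) satisfies the Kalman rank condition and the Grammian \(\Gamma _s^T\) is symmetric positive definite for every \(0\le s<T\). Consequently \(||\alpha (\alpha I+\Gamma _s^T)^{-1}||\le \alpha /(\alpha +\lambda _{min}(\Gamma _s^T))\rightarrow 0\) as \(\alpha \rightarrow 0^+\), so assumption (iii) holds.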

If we take the Euclidean norm, then

$$\begin{aligned}&||f(t,x_t)-f(t,y_t)||^2\le \frac{2}{a^2}||x_t-y_t||^2,\quad ||\sigma (t,x_t)-\sigma (t,y_t)||^2\le \frac{2}{b^2}||x_t-y_t||^2 \end{aligned}$$

Let \(L_1=\frac{2}{a^2}\) and \(L_2=\frac{2}{b^2}\).

Now one can easily see that assumption \((i)\) is satisfied by \(f\) and \(\sigma \), and assumption \((iii)\) is satisfied as described above. Hence, by Theorems 3.1 and 3.2, the system (4.2) is approximately controllable, provided that assumption \((ii)\) is also satisfied.