1 Introduction

The memory and hereditary properties of various materials and processes in electrical circuits, biology, biomechanics, etc., such as viscoelasticity, electrochemistry, control, porous media, and electromagnetic processes, are widely recognized to be well predicted by fractional differential operators [16]. During the past decades, the subject of fractional calculus and its potential applications have gained increasing importance, mainly because it has become a powerful tool, with more accurate and successful results, for modeling several complex phenomena in numerous seemingly diverse and widespread fields of science and engineering [7–10].

There has been a significant development in nonlocal problems for (fractional) differential equations or inclusions (see, for instance, [11–17]). Indeed, nonlinear fractional differential equations have, in recent years, been the object of increasing interest because of their wide applicability in nonlinear oscillations of earthquakes and in many physical phenomena, such as seepage flow in porous media and fluid dynamic traffic models [18–20]. On the other hand, there could be no manufacturing, no vehicles, no computers, and no regulated environment without control systems. Control systems are most often based on the principle of feedback, whereby the signal to be controlled is compared to a desired reference signal and the discrepancy is used to compute corrective control actions [21]. Over the last years, one of the fields of science that has been well established is the fractional calculus of variations (see [22–24] and references therein). Moreover, a generalization of this area, namely fractional optimal control, is a topic of research by many authors [25, 26].

The fractional optimal control of a distributed system is an optimal control problem for which the system dynamics is defined with partial fractional differential equations [27]. The calculus of variations with constraints given by sets of solutions of control systems allows us to justify, while performing numerical calculations, the passage from a nonconvex optimal control problem to the convexified optimal control problem. We then approximate the latter problem by a sequence of smooth and convex optimal control problems, for which the optimality conditions are known and methods for their numerical resolution are well developed.

Sobolev-type semilinear equations serve as an abstract formulation of partial differential equations, which arise in various applications such as in the flow of fluid through fissured rocks, thermodynamics, and shear in second-order fluids. Further, the fractional differential equations of Sobolev type appear in the theory of control of dynamical systems, when the controlled system and/or the controller is described by a fractional differential equation of Sobolev type. Furthermore, the mathematical modeling and simulations of systems and processes are based on the description of their properties in terms of fractional differential equations of Sobolev type. These new models are more adequate than previously used integer-order models, so fractional-order differential equations of Sobolev type have been investigated by many researchers: see, for example, Fečkan et al. [28] and Li et al. [29]. In our previous works [30, 31], we have introduced the notion of a nonlocal control condition and presented a new kind of Sobolev-type condition that appears in terms of two linear operators. Kamocki [32] studied the existence of optimal solutions to fractional optimal control problems. Liu et al. [33] established the relaxation for nonconvex optimal control problems described by fractional differential equations. Motivated by the above facts and results, we introduce here a new kind of Sobolev-type condition and another form of a nonlocal control condition for nonlinear fractional multiple control systems. The new Sobolev condition is given in terms of two linear operators and requires formulating two other characteristic solution operators and their properties, such as boundedness and compactness. Further, we consider an optimal control problem \((P)\) of multi-integral functionals, with integrands that are not convex in the controls. We establish an interrelation between the solutions of problem \((P)\) and the relaxation problem (RP).
Under certain assumptions, it is proved that (RP) has a solution and that for any solution of (RP) there is a minimizing sequence for \((P)\) converging, in the appropriate topologies, to the solution of (RP). The convergence takes place simultaneously with respect to the trajectory, the control, and the functional. This property is usually called relaxation [34, 35].

The paper is organized as follows. In Sect. 2, we formulate and define the problems under study and we review some essential facts from fractional calculus [4, 18], semigroup theory [36, 37], and multivalued analysis [38, 39], which are used throughout the work. In Sect. 3, we prove some auxiliary results that are required for the proof of our main results. Section 4 deals with existence results for multiple control systems. The main results are given in Sect. 5. We end with Sect. 6 of conclusions.

2 Preliminaries

Consider the following nonlocal nonlinear fractional control system of Sobolev type:

$$\begin{aligned}&\displaystyle L~^{{C}}D_t^{\alpha }[Mx(t)]+Ex(t)=f(t, x(t), B_{1}(t)u_{1}(t),\ldots , B_{r-1}(t)u_{r-1}(t)),~ t\in I \end{aligned}$$
(1)
$$\begin{aligned}&\displaystyle x(0)+h(x(t), B_{r}(t)u_{r}(t))=x_{0}, \end{aligned}$$
(2)

with mixed nonconvex constraints on the controls

$$\begin{aligned} u_{1}(t),\ldots , u_{r}(t)\in U(t, x(t))~ \text {a.e. on}~ I, \end{aligned}$$
(3)

where \(^{{C}}D^{\alpha }_{t}\) is the Caputo fractional derivative of order \(\alpha ,\,0<\alpha \le 1\), and \(t\in I:=[0, a]\). Let \(X, Y\), and \(Z\) be three Banach spaces such that \(Z\) is densely and continuously embedded in \(X\), the unknown function \(x(\cdot )\) takes its values in \(X\), and \(x_{0}\in X\). We assume that the operators \(E: D(E)\subset X\rightarrow Y,\,M: D(M)\subset X\rightarrow Z\), and \(L: D(L)\subset Z\rightarrow Y\) are linear, and that \(B_{1},\ldots ,B_{r}:I\rightarrow {\mathcal {L}}(T, X)\), where \({\mathcal {L}}(T, X)\) denotes the space of linear bounded operators from \(T\) into \(X\). The space \(T\) is a separable reflexive Banach space modeling the control space. It is also assumed that \(f: I\times X^{r}\rightarrow Y\) and \(h: C(X^{2}, X)\rightarrow X\) are given abstract functions, to be specified later, and \(U:I\times X \rightrightarrows 2^T\backslash \{\emptyset \}\) is a multivalued map with closed values, not necessarily convex. Let \(\widehat{{\mathbb {R}}} := ]-\infty , +\infty ]\). For functions \(g_{1},\ldots ,g_{r}:I\times X\times T \rightarrow {\mathbb {R}}\), we consider the problem

on solutions of the control system (1)–(2) with constraint (3). Let \(g_{1, U},\ldots ,g_{r, U}: I\times X\times T\rightarrow \widehat{{\mathbb {R}}}\) be the functions defined by

$$\begin{aligned} \left\{ \begin{array}{ll} g_{1,U}(t, x, u_1) :=\left\{ \begin{array}{ll} g_{1}(t, x, u_{1}),&{}\quad u_{1}\in U(t, x),\\ +\infty , &{}\quad u_{1}\notin U(t, x), \end{array}\right. \\ \qquad \vdots \\ g_{r,U}(t, x, u_{r}) :=\left\{ \begin{array}{ll} g_{r}(t, x, u_{r}),&{}\quad u_{r}\in U(t, x),\\ +\infty , &{}\quad u_{r}\notin U(t, x), \end{array}\right. \end{array}\right. \end{aligned}$$

and \(g^{**}_{1}(t, x, u_{1}),\ldots ,g^{**}_{r}(t, x, u_{r})\) be the bipolars of the maps \(u_{1}\mapsto g_{1,U}(t, x, u_{1}),\ldots ,u_{r}\mapsto g_{r,U}(t, x, u_{r})\), respectively. Along with problem \((P)\), we also consider the relaxation problem

on the solutions of control system (1)–(2) with the convexified constraints

$$\begin{aligned} u_{1}(t),\ldots , u_{r}(t)\in {\text {cl}}\,{\text {conv}}\,U(t,x(t))\quad \text {a.e. on }I \end{aligned}$$
(4)

on the controls, where conv denotes the convex hull and cl the closure. In our results, we denote by \({\mathcal {R}}_U\) and \({\mathcal {T}}r_{U}\) (respectively, \({\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\) and \({\mathcal {T}}r_{{\text {cl}}\,{\text {conv}}\,U}\)) the sets of all solutions and all trajectories of control system (1)–(3) (respectively, of control system (1)–(2), (4)).
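To make the role of the convexified constraints (4) and of the bipolars concrete, the following minimal scalar sketch (with hypothetical data: \(U=\{-1,1\}\) and a nonconvex integrand \(g\), not taken from the paper) checks numerically that passing to \({\text {cl}}\,{\text {conv}}\,U\) together with the convex envelope of \(g_{U}\) does not change the infimum:

```python
import numpy as np

# Hypothetical scalar illustration: U = {-1, 1} (nonconvex) and a
# nonconvex integrand g; g_U(u) = g(u) on U and +infinity elsewhere.
def g(u):
    return (1.0 - u**2) ** 2 + 0.1 * u

U = np.array([-1.0, 1.0])

# On cl conv U = [-1, 1], the bipolar g** of g_U is the lower convex
# envelope of the points (u, g(u)), u in U: here, the chord joining
# (-1, g(-1)) and (1, g(1)).
def g_star_star(u):
    return np.interp(u, U, g(U))

# The relaxed infimum over the convexified constraint set equals the
# original infimum over U -- the property relating (P) and (RP).
grid = np.linspace(-1.0, 1.0, 2001)
inf_original = g(U).min()
inf_relaxed = g_star_star(grid).min()
print(inf_original, inf_relaxed)  # both -0.1
```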

Definition 2.1

The fractional integral of order \(\alpha >0\) of a function \(f\in L^{1}([a,b],{\mathbb {R}})\) is given by

$$\begin{aligned} I^{\alpha }_{a}f(t):=\frac{1}{\varGamma (\alpha )}\int _{a}^{t}(t-s)^{\alpha -1}f(s)\hbox {d}s, \end{aligned}$$

where \(\varGamma \) is the classical gamma function.

If \(a=0\), then we can write \(I^{\alpha }f(t) := (g_{\alpha }*f)(t)\), where

$$\begin{aligned} g_{\alpha }(t):= \left\{ \begin{array}{ll} \frac{1}{\varGamma (\alpha )}t^{\alpha -1},&{}\quad t>0,\\ 0, &{}\quad t\le 0 \end{array}\right. \end{aligned}$$

and, as usual, \(*\) denotes convolution. Moreover, \(\lim \nolimits _{\alpha \downarrow 0} g_{\alpha }(t)=\delta (t)\), with \(\delta \) the Dirac delta function.
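As a quick numerical illustration of Definition 2.1 (with hypothetical data \(f(s)=s\), \(\alpha =1/2\), \(a=0\)), the substitution \(u=(t-s)^{\alpha }\) removes the kernel singularity and allows a simple midpoint rule; the result is checked against the closed form \(I^{\alpha }t=t^{1+\alpha }/\varGamma (2+\alpha )\):

```python
import numpy as np
from math import gamma

# Numerical sketch of the Riemann-Liouville integral I^alpha (a = 0).
# The substitution u = (t - s)^alpha removes the kernel singularity:
# I^alpha f(t) = (1/(alpha*Gamma(alpha))) * int_0^{t^alpha} f(t - u^(1/alpha)) du.
def frac_integral(f, t, alpha, n=20000):
    u = (np.arange(n) + 0.5) / n * t**alpha          # midpoint nodes in u
    du = t**alpha / n
    return np.sum(f(t - u**(1.0 / alpha))) * du / (alpha * gamma(alpha))

# Known closed form: I^alpha s = t^(1+alpha) / Gamma(2+alpha).
t, alpha = 1.0, 0.5
num = frac_integral(lambda s: s, t, alpha)
exact = t**(1 + alpha) / gamma(2 + alpha)
print(num, exact)  # ~0.75225 for both
```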

Definition 2.2

The Riemann–Liouville fractional derivative of order \(\alpha >0,\,n-1<\alpha <n,\,n\in {\mathbb {N}}\), is given by

$$\begin{aligned} ^{L}D^{\alpha }f(t) := \frac{1}{\varGamma (n-\alpha )}\frac{\hbox {d}^{n}}{\hbox {d}t^{n}} \int _{0}^{t}\frac{f(s)}{(t-s)^{\alpha +1-n}}\hbox {d}s,\quad t>0, \end{aligned}$$

where the function \(f\) has absolutely continuous derivatives up to order \((n-1)\).

Definition 2.3

The Caputo fractional derivative of order \(\alpha >0,\,n-1<\alpha <n,\,n\in {\mathbb {N}}\), is given by

$$\begin{aligned} ^{C}D^{\alpha }f(t) := ~^{L}D^{\alpha }\left( f(t) -\sum \limits _{k=0}^{n-1}\frac{t^{k}}{k!}f^{(k)}(0)\right) , \quad t>0, \end{aligned}$$

where the function \(f\) has absolutely continuous derivatives up to order \((n-1)\).

If \(f\) is an abstract function with values in \(X\), then the integrals that appear in Definitions 2.1–2.3 are taken in Bochner’s sense.

Remark 2.1

Let \(n-1<\alpha <n,\,n\in {\mathbb {N}}\). The following properties hold:

  1. (i)

    If \(f\in C^{n}([0, \infty [)\), then

    $$\begin{aligned} ^{C}D^{\alpha }f(t)=\frac{1}{\varGamma (n-\alpha )} \int _{0}^{t}\frac{f^{(n)}(s)}{(t-s)^{\alpha +1-n}}\hbox {d}s =I^{n-\alpha }f^{(n)}(t), \quad t>0; \end{aligned}$$
  2. (ii)

    The Caputo derivative of a constant function is equal to zero;

  3. (iii)

    The Riemann–Liouville derivative of a constant function is given by

    $$\begin{aligned} ^{L}D^{\alpha }_{a^{+}}C=\frac{C}{\varGamma (1-\alpha )}(t-a)^{-\alpha },\quad 0<\alpha <1. \end{aligned}$$
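Items (i) and (ii) of Remark 2.1 can be checked numerically for \(0<\alpha <1\) (an illustrative sketch with hypothetical data \(f(t)=t^{2}\)), using \(^{C}D^{\alpha }f=I^{1-\alpha }f'\):

```python
import numpy as np
from math import gamma

# Numerical check of Remark 2.1(i) for 0 < alpha < 1 (so n = 1):
# ^C D^alpha f(t) = I^{1-alpha} f'(t), evaluated with the same
# singularity-removing substitution as for the fractional integral.
def caputo(df, t, alpha, n=20000):
    beta = 1.0 - alpha                               # order of the integral
    u = (np.arange(n) + 0.5) / n * t**beta           # midpoint nodes
    du = t**beta / n
    return np.sum(df(t - u**(1.0 / beta))) * du / (beta * gamma(beta))

# f(t) = t^2, f'(t) = 2t:  ^C D^alpha t^2 = 2 t^(2-alpha) / Gamma(3-alpha).
t, alpha = 1.0, 0.6
num = caputo(lambda s: 2.0 * s, t, alpha)
exact = 2.0 * t**(2 - alpha) / gamma(3 - alpha)
print(num, exact)

# Remark 2.1(ii): the Caputo derivative of a constant vanishes (f' = 0).
print(caputo(lambda s: 0.0 * s, t, alpha))  # 0.0
```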

We make the following assumptions:

  • (\(\hbox {H}_1\)) \(L: D(L)\subset Z\rightarrow Y\) and \(M: D(M)\subset X\rightarrow Z\) are linear operators, and \(E: D(E)\subset X\rightarrow Y\) is closed.

  • (\(\hbox {H}_2\)) \(D(M)\subset D(E),\,\text {Im}(M)\subset D(L)\), and \(L\) and \(M\) are bijective.

  • (\(\hbox {H}_3\)) \(L^{-1}: Y\rightarrow D(L)\subset Z\) and \(M^{-1}: Z\rightarrow D(M)\subset X\) are linear, bounded, and compact operators. Note that (\(\hbox {H}_3\)) implies that \(L\) and \(M\) are closed. Indeed, if \(L^{-1}\) and \(M^{-1}\) are closed and injective, then their inverses are also closed. From (\(\hbox {H}_1\))–(\(\hbox {H}_3\)) and the closed graph theorem, we obtain the boundedness of the linear operator \(L^{-1}{ EM }^{-1}: Z\rightarrow Z\). Consequently, \(-L^{-1}{ EM }^{-1}\) generates a semigroup \(\lbrace Q(t), t\ge 0\rbrace ,\,Q(t):=e^{-L^{-1}{ EM }^{-1}t}\). We assume that \(M_{0}:=\sup _{t\ge 0}\Vert Q(t)\Vert <\infty \) and, for short, we denote \(C_{1}:=\Vert L^{-1}\Vert \) and \(C_{2}:=\Vert M^{-1}\Vert \). According to the previous definitions, it is suitable to rewrite problem (1)–(2) as the equivalent integral equation

    $$\begin{aligned} Mx(t)= & {} Mx(0)\nonumber \\&+\,\frac{1}{\varGamma (\alpha )}\int _{0}^{t}(t-s)^{\alpha -1} \left[ -L^{-1}Ex(s)\right. \nonumber \\&\left. +\,L^{-1}f(s, x(s), B_{1}(s)u_{1}(s),\ldots , B_{r-1}(s)u_{r-1}(s))\right] \hbox {d}s, \end{aligned}$$
    (5)

    provided the integral in (5) exists a.e. in \(t\in I\). Before formulating the definition of mild solution of system (1)–(3), we first introduce some necessary notions. Let \(I:=[0,a]\) be a closed interval of the real line with the Lebesgue measure \(\mu \) and the \(\sigma \)-algebra \(\varSigma \) of \(\mu \)-measurable sets. The norm of the space \(X\) (or \(T\)) will be denoted by \(\Vert \cdot \Vert _X\) (or \(\Vert \cdot \Vert _T\)). We denote by \(C(I,X)\) the space of all continuous functions from \(I\) into \(X\) with the sup norm given by \(\Vert x\Vert _{C}:=\sup _{t\in I}\Vert x(t)\Vert _X\) for \(x\in C(I,X)\). For any Banach space \(V\), the symbol \(\omega \)-\(V\) stands for \(V\) equipped with the weak topology \(\sigma (V,V^*)\). The same notation will be used for subsets of \(V\). In all other cases, we assume that \(V\) and its subsets are equipped with the strong (normed) topology.
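For a concrete feel of the integral equation (5), consider the hypothetical scalar case \(L=M=1\), \(E=\lambda >0\), \(f\equiv 0\), \(h\equiv 0\) (illustrative data only): then (5) becomes \(x(t)=x_{0}-\frac{\lambda }{\varGamma (\alpha )}\int _{0}^{t}(t-s)^{\alpha -1}x(s)\,\hbox {d}s\), whose solution is \(x_{0}E_{\alpha }(-\lambda t^{\alpha })\). A Picard iteration with exact kernel weights reproduces this:

```python
import numpy as np
from math import gamma

# Hypothetical scalar instance of the integral equation (5):
# x(t) = x0 - (lam/Gamma(alpha)) * int_0^t (t-s)^(alpha-1) x(s) ds,
# with solution x(t) = x0 * E_alpha(-lam * t^alpha).
def mittag_leffler(alpha, z, terms=80):
    return sum(z**k / gamma(alpha * k + 1) for k in range(terms))

alpha, lam, x0, a, n = 0.7, 1.0, 1.0, 1.0, 400
s = np.linspace(0.0, a, n + 1)
x = np.full(n + 1, x0)
for _ in range(60):                                  # Picard iteration
    xn = np.empty(n + 1)
    xn[0] = x0
    for i in range(1, n + 1):
        # exact weights of the singular kernel over each grid cell
        w = ((s[i] - s[:i])**alpha - (s[i] - s[1:i + 1])**alpha) / alpha
        xn[i] = x0 - lam / gamma(alpha) * np.dot(w, 0.5 * (x[:i] + x[1:i + 1]))
    x = xn

exact = x0 * mittag_leffler(alpha, -lam * a**alpha)
print(x[-1], exact)  # both ~0.40
```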

Throughout the paper, \(A:=-L^{-1}{ EM }^{-1}: D(A)\subset Z\rightarrow Z\) is the infinitesimal generator of a compact analytic semigroup of uniformly bounded linear operators \(Q(\cdot )\) in \(X\). Then, there exists a constant \(M_{0}\ge 1\) such that \(\Vert Q(t)\Vert \le M_{0}\) for \(t\ge 0\). The operators \(B_{i}\in L^{\infty }(I, {\mathcal {L}}(T, X))\), and we let \(\Vert B_{i}\Vert \) stand for \(\Vert B_{i}\Vert _{L^{\infty }(I, {\mathcal {L}}(T, X))}\).

We now proceed with some basic definitions and results from multivalued analysis. For more details on multivalued analysis, we refer to the books [38, 39]. We use the following symbols: \(P_{f}(T)\) is the set of all nonempty closed subsets of \(T;\,P_{bf}(T)\) is the set of all nonempty, closed, and bounded subsets of \(T\). On \(P_{bf}(T)\), we have a metric, known as the Hausdorff metric, defined by

$$\begin{aligned} d_{H}(A,B):=\max \left\{ \sup _{a\in A}d(a,B),\,\sup _{b\in B}d(b,A)\right\} , \end{aligned}$$

where \(d(x,C)\) is the distance from a point \(x\) to a set \(C\). We say that a multivalued map is \(H\)-continuous if it is continuous in the Hausdorff metric \(d_{H}(\cdot ,\cdot )\). Let \(F: I \rightrightarrows 2^{T}\backslash \{\emptyset \}\) be a multifunction. For \(1\le p\le +\infty \), we define \(S^{p}_{F}:=\lbrace f\in L^{p}(I, T): f(t)\in F(t)\) a.e. on \(I\rbrace \). We say that a multivalued map \(F:I \rightrightarrows P_f(T)\) is measurable if \(F^{-1}(E)=\{t\in I:F(t)\cap E\ne \emptyset \}\in \varSigma \) for every closed set \(E\subseteq T\). If \(F:I\times T\rightarrow P_f(T)\), then the measurability of \(F\) means that \(F^{-1}(E)\in \varSigma \otimes {\mathcal {B}}_{T}\), where \(\varSigma \otimes {\mathcal {B}}_{T}\) is the \(\sigma \)-algebra of subsets in \(I\times T\) generated by the sets \(A\times B,\,A\in \varSigma ,\,B\in {\mathcal {B}}_{T}\), and \({\mathcal {B}}_{T}\) is the \(\sigma \)-algebra of the Borel sets in \(T\).
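The Hausdorff metric is easy to evaluate for finite sets; the following sketch (illustrative only, in \({\mathbb {R}}^d\) with the Euclidean distance) implements \(d_{H}\) directly from the definition:

```python
import numpy as np

# Hausdorff metric d_H between two finite subsets of R^d (illustration).
def d_H(A, B):
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # all d(a,b)
    return max(D.min(axis=1).max(),   # sup_{a in A} d(a, B)
               D.min(axis=0).max())   # sup_{b in B} d(b, A)

A = np.array([[0.0], [1.0]])
B = np.array([[0.0], [3.0]])
print(d_H(A, B))  # 2.0: the point 3 lies at distance 2 from A
```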

Suppose that \(V_{1}\) and \(V_{2}\) are two Hausdorff topological spaces and \(F: V_{1}\rightarrow 2^{V_{2}} \backslash \{\emptyset \}\). We say that \(F\) is lower semicontinuous in the sense of Vietoris (l.s.c., for short) at a point \(x_0\in V_{1}\) if, for any open set \(W\subseteq V_{2}\) with \(F(x_0)\cap W\ne \emptyset \), there is a neighborhood \(O(x_0)\) of \(x_0\) such that \(F(x)\cap W\ne \emptyset \) for all \(x\in O(x_0)\). Similarly, \(F\) is said to be upper semicontinuous in the sense of Vietoris (u.s.c., for short) at a point \(x_0\in V_{1}\) if, for any open set \(W\subseteq V_{2}\) with \(F(x_0)\subseteq W\), there is a neighborhood \(O(x_0)\) of \(x_0\) such that \(F(x)\subseteq W\) for all \(x\in O(x_0)\). For more properties of l.s.c. and u.s.c. multivalued maps, we refer to the book [39]. Besides the standard norm on \(L^q(I, T)\) (here, \(T\) is a separable reflexive Banach space), \(1<q<\infty \), we also consider the so-called weak norm:

$$\begin{aligned} \Vert u_{i}(\cdot )\Vert _{\omega }:=\sup _{0\le t_1\le t_2\le a} \left\| \int _{t_1}^{t_2}u_{i}(s)\hbox {d}s\right\| _T,~ u_{i}\in L^q(I, T), \quad i=1,\ldots ,r. \end{aligned}$$
(6)

The space \(L^q(I, T)\) furnished with this norm will be denoted by \(L_{\omega }^q(I, T)\). The following result gives a relation between convergence in \(\omega \)-\(L^q(I, T)\) and convergence in \(L_{\omega }^q(I, T)\).
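The weak norm (6) can be visualized in the scalar case: writing \(F(t)=\int _{0}^{t}u(s)\,\hbox {d}s\), one has \(\Vert u\Vert _{\omega }=\sup _{t_1\le t_2}|F(t_2)-F(t_1)|=\max F-\min F\). A numerical sketch (hypothetical data) shows that fast oscillations, which converge weakly to zero, have small weak norm:

```python
import numpy as np

# Discrete sketch of the weak norm (6) for scalar-valued u on [0, a]:
# with F(t) = int_0^t u, the norm is sup_{t1<=t2} |F(t2) - F(t1)|
# = max F - min F (F(0) = 0 included).
def weak_norm(u, a):
    n = len(u)
    F = np.concatenate(([0.0], np.cumsum(u) * a / n))  # discrete antiderivative
    return F.max() - F.min()

a, n = 2 * np.pi, 100000
t = (np.arange(n) + 0.5) / n * a
print(weak_norm(np.sin(t), a))       # ~2: max of |int sin| over windows
print(weak_norm(np.sin(50 * t), a))  # ~0.04: fast oscillation, small weak norm
```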

Lemma 2.1

(see [40]) If sequences \(\{u_{1,n}\}_{n\ge 1},\ldots ,\{u_{r,n}\}_{n\ge 1}\subseteq L^q(I, T)\) are bounded and converge to \(u_{1},\ldots ,u_{r}\) in \(L_{\omega }^q(I, T)\), respectively, then they converge to \(u_{1},\ldots ,u_{r}\) in \(\omega \)-\(L^q(I, T)\), respectively.

We make use of the following assumptions on the data of our problems.

  1. (H1)

    The nonlinear function \(f: I\times X^{r}\rightarrow Y\) satisfies the following:

    1. (1)

      \(t\rightarrow f(t, x_{1},\ldots ,x_{r})\) is measurable for all \((x_{1},\ldots ,x_{r})\in X^{r}\);

    2. (2)

      \(\Vert f(t, x_{1},\ldots ,x_{r})-f(t, y_{1},\ldots ,y_{r})\Vert _{Y}\le k_{1}(t) \sum _{i=1}^{r}\Vert x_{i}-y_{i}\Vert _{X}\) a.e. on \(I,\,k_{1}\in L^{\infty }(I,{\mathbb {R}}^+)\);

    3. (3)

      there exists a constant \(0<\beta <\alpha \) such that \(\Vert f(t, x_{1},\ldots ,x_{r})\Vert _Y\le a_1(t)+c_1\sum _{i=1}^{r}\Vert x_{i}\Vert _{X}\) a.e. in \(t\in I\), where \(a_1\in L^{1/\beta }(I, {\mathbb {R}}^+)\) and \(c_1>0\).

  2. (H2)

    The nonlocal function \(h: C(I:X, X)\rightarrow X\) satisfies the following:

    1. (1)

      \(t\rightarrow h(x, y)\) is measurable for all \(x,y\in X\);

    2. (2)

      \(\Vert h(x_{1}, y_{1})-h(x_{2}, y_{2})\Vert _X\le k_{2}(t)\lbrace \Vert x_{1}-x_{2}\Vert _X+\Vert y_{1}-y_{2}\Vert _X\rbrace \) a.e. on \(I,\,k_{2}\in L^{\infty }(I,{\mathbb {R}}^+)\);

    3. (3)

      there exists a constant \(0<\beta <\alpha \) such that \(\Vert h(x, y)\Vert _X\le a_2(t)+c_2\lbrace \Vert x\Vert _X+\Vert y\Vert _X\rbrace \) a.e. in \(t\in I\) and all \(x,y\in X\), where \(a_2\in L^{1/\beta }(I, {\mathbb {R}}^+)\) and \(c_2>0\).

  3. (H3)

    The multivalued map \(U: I\times X \rightrightarrows P_{f}(T)\) is such that:

    1. (1)

      \(t\rightarrow U(t,x)\) is measurable for all \(x\in X\);

    2. (2)

      \(d_H(U(t, x),U(t, y))\le k_3(t)\Vert x-y\Vert _X\) a.e. on \(I,\,k_3\in L^{\infty }(I,{\mathbb {R}}^+)\);

    3. (3)

      there exists a constant \(0<\beta <\alpha \) such that

      $$\begin{aligned} \Vert U(t, x)\Vert _T=\sup \{\Vert v\Vert _T: v\in U(t, x)\} \le a_3(t)+c_3\Vert x\Vert _X \quad \text { a.e. in } t\in I, \end{aligned}$$

      where \(a_3\in L^{1/\beta }(I, {\mathbb {R}}^+)\) and \(c_3>0\).

  4. (H4)

    Functions \(g_{i}: I\times X\times T\rightarrow {\mathbb {R}},\,i=1,\ldots ,r\), are such that:

    1. (1)

      the map \(t\rightarrow g_{i}(t, x, u_{i})\) is measurable for all \((x, u_{i})\in X\times T\);

    2. (2)

      \(\vert g_{i}(t, x,u_{i})-g_{i}(t, y,v_{i})\vert \le k^{\prime }_{4}(t)\Vert x-y\Vert _X +k^{\prime \prime }_{4}\Vert u_{i}-v_{i}\Vert _{T}\) a.e., \(k^{\prime }_{4}\in L^{1}(I,{\mathbb {R}}^+),\,k^{\prime \prime }_{4}>0\);

    3. (3)

      \(\vert g_{i}(t, x,u_{i})\vert \le a_{4}(t)+b_{4}(t)\Vert x\Vert _{X} +c_{4}\Vert u_{i}\Vert _{T}\) a.e. \(t\in I, a_4,b_{4}\in L^{1/\beta }(I, {\mathbb {R}}^+),\,c_{4}>0\).

Definition 2.4

A solution of the control system (1)–(3) is defined to be a vector of functions \((x(\cdot ), u_{1}(\cdot ),\ldots ,u_{r}(\cdot ))\) consisting of a trajectory \(x\in C(I, X)\) and \(r\) multiple controls \(u_{1},\ldots ,u_{r}\in L^{1}(I, T)\) satisfying system (1)–(2) and the inclusion (3) almost everywhere.

A solution of control system (1)–(2), (4) can be defined similarly.

Definition 2.5

(see [17, 30, 41]) A vector of functions \((x,u_{1},\ldots ,u_{r})\) is a mild solution of the control system (1)–(3) iff \(x\in C(I,X)\), \(u_{1},\,\ldots ,\,u_{r} \in L^1(I,T)\), \(u_{1}(t),\,\ldots ,\,u_{r}(t) \in U(t,x(t))\) a.e. in \(t\in I\), \(x(0)=x_0-h(x(t), B_{r}(t)u_{r}(t))\), and the following integral equation is satisfied:

$$\begin{aligned} x(t)= & {} S_{\alpha }(t)M\left[ x_{0}-h(x(t), B_{r}(t)u_{r}(t))\right] \\&+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)L^{-1} f\left( s, x(s), B_{1}(s)u_{1}(s),\ldots , B_{r-1}(s)u_{r-1}(s)\right) \hbox {d}s, \end{aligned}$$

where

$$\begin{aligned} S_{\alpha }(t):= & {} \int _{0}^{\infty }M^{-1}\zeta _{\alpha }(\theta )Q(t^{\alpha }\theta )\mathrm{d}\theta , \quad T_{\alpha }(t):=\alpha \int _{0}^{\infty }M^{-1}\theta \zeta _{\alpha }(\theta ) Q(t^{\alpha }\theta )\mathrm{d}\theta ,\\ \zeta _{\alpha }(\theta ):= & {} \frac{1}{\alpha }\theta ^{-1-\frac{1}{\alpha }} \varpi _{\alpha }(\theta ^{-\frac{1}{\alpha }})\ge 0,\\ \varpi _{\alpha }(\theta ):= & {} \frac{1}{\pi }\sum _{n=1}^{\infty }(-1)^{n-1} \theta ^{-\alpha n-1}\frac{\varGamma (n\alpha +1)}{n!}\sin (n\pi \alpha ), \ \theta \in ]0, \infty [, \end{aligned}$$

with \(\zeta _{\alpha }\) the probability density function defined on \(]0, \infty [\), that is, \(\zeta _{\alpha }(\theta )\ge 0,\,\theta \in ]0, \infty [\), and \(\int _{0}^{\infty }\zeta _{\alpha }(\theta )\mathrm{d}\theta =1\).

A similar definition can be introduced for the control system (1)–(2),(4).

Remark 2.2

(see [41]) One has \(\int _0^{\infty }\theta \zeta _{\alpha }(\theta )\hbox {d}\theta =\frac{1}{\varGamma (1+\alpha )}\).
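For \(\alpha =1/2\) the density in Definition 2.5 admits the known closed form \(\zeta _{1/2}(\theta )=e^{-\theta ^{2}/4}/\sqrt{\pi }\) (a standard fact about this Wright-type function, used here only for illustration), which allows a numerical check of the normalization and of Remark 2.2:

```python
import numpy as np
from math import gamma, pi

# Closed form of the density for alpha = 1/2 (illustration only):
# zeta_{1/2}(theta) = exp(-theta^2/4)/sqrt(pi) on ]0, infinity[.
n, T = 400000, 40.0
theta = (np.arange(n) + 0.5) / n * T                 # midpoint grid on [0, T]
dtheta = T / n
zeta = np.exp(-theta**2 / 4.0) / np.sqrt(pi)

mass = zeta.sum() * dtheta                           # int_0^inf zeta = 1
first_moment = (theta * zeta).sum() * dtheta         # Remark 2.2 with alpha = 1/2
print(mass, first_moment, 1.0 / gamma(1.0 + 0.5))    # 1.0, ~1.12838, ~1.12838
```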

Lemma 2.2

(see [41]) The characteristic operators \(S_{\alpha }\) and \(T_{\alpha }\) have the following properties:

  1. (1)

    for any fixed \(t\ge 0,\,S_{\alpha }(t)\) and \(T_{\alpha }(t)\) are linear and bounded operators, i.e., for any \(x\in X\),

    $$\begin{aligned} \Vert S_{\alpha }(t)x\Vert _X\le C_{2}M_0\Vert x\Vert _X,\quad \Vert T_{\alpha }(t)x\Vert _X \le \frac{C_{2}M_0}{\varGamma (\alpha )}\Vert x\Vert _X; \end{aligned}$$
  2. (2)

    \(\{S_{\alpha }(t),t\ge 0\}\) and \(\{T_{\alpha }(t),t\ge 0\}\) are strongly continuous;

  3. (3)

    for every \(t>0,\,S_{\alpha }(t)\) and \(T_{\alpha }(t)\) are compact operators.

Lemma 2.3

(see [42]) Let \(x(t)\) be continuous and nonnegative on \([0, a]\). If

$$\begin{aligned} x(t)\le \psi (t)+\lambda \int _{0}^{t}\frac{x(s)}{(t-s)^{\gamma }}\mathrm{d}s, \quad 0\le t\le a, \end{aligned}$$

where \(0\le \gamma <1,\,\psi (t)\) is a nonnegative monotonic increasing continuous function on \([0, a]\), and \(\lambda \) is a positive constant, then

$$\begin{aligned} x(t)\le \psi (t)E_{1-\gamma }\left( \lambda \varGamma (1-\gamma )t^{1-\gamma }\right) , \quad 0\le t\le a, \end{aligned}$$

where \(E_{1-\gamma }(z)\) is the Mittag-Leffler function defined for all \(\gamma <1\) by

$$\begin{aligned} E_{1-\gamma }(z):=\sum \limits _{n=0}^{\infty }\frac{z^{n}}{\varGamma (n(1-\gamma )+1)}. \end{aligned}$$
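A direct truncation of this series suffices for numerical purposes; for \(1-\gamma =1\) it must reduce to \(E_{1}(z)=e^{z}\), which gives a quick sanity check (illustrative sketch):

```python
from math import gamma, exp

# Truncated Mittag-Leffler series from Lemma 2.3.
def mittag_leffler(beta, z, terms=80):
    return sum(z**k / gamma(beta * k + 1) for k in range(terms))

print(mittag_leffler(1.0, 1.0), exp(1.0))  # both e = 2.718281828...
print(mittag_leffler(0.5, 1.0))            # E_{1/2}(1) = e*erfc(-1) ~ 5.009
```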

3 Auxiliary Results

In this section, we give some auxiliary results, which are required for the proof of our main results. We begin with an a priori estimate of the trajectories of the control system.

Lemma 3.1

For any admissible trajectory \(x\) of the control system (1)–(2), (4), that is, for any \(x\in {\mathcal {T}}r_{{\text {cl}}\,{\text {conv}}\,U}\), there is a constant \(L_{0}\) such that

$$\begin{aligned} \Vert x\Vert _C\le L_0. \end{aligned}$$
(7)

Proof

From Definition 2.5, for any \(x\in {\mathcal {T}}r_{{\text {cl}}\,{\text {conv}}\,U}\) there exist controls \(u_{1}(t),\ldots ,u_{r}(t)\in {\text {cl}}\,{\text {conv}}\,U(t,x(t))\) a.e. in \(t\in I\) such that

$$\begin{aligned} x(t)&=S_{\alpha }(t)M[x_{0}-h(x(t), B_{r}(t)u_{r}(t))]\\&\quad +\,\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)L^{-1}f(s, x(s), B_{1}(s)u_{1}(s),\ldots , B_{r-1}(s)u_{r-1}(s))\hbox {d}s. \end{aligned}$$

Then, by Lemma 2.2, (H1.3), (H2.3), (H3.3), and Hölder’s inequality, one gets

$$\begin{aligned} \Vert x(t)\Vert _X&\le C_{2}M_0\Vert M\Vert \left\{ \Vert x_0\Vert _X + a_{2}(t)+c_2\left[ \Vert x\Vert _X +\Vert B_r\Vert \left( a_3(t)+c_3\Vert x\Vert _X\right) \right] \right\} \\&\quad +\frac{C_{1}C_{2}M_0}{\varGamma (\alpha )} \int _0^t(t-s)^{\alpha -1}\left\{ a_1(s)\right. \\&\quad \left. +\,c_1\left[ \Vert x\Vert _X +\sum _{i=1}^{r-1}\Vert B_i\Vert \left( a_3(s)+c_3\Vert x\Vert _X\right) \right] \right\} \hbox {d}s\\&\le C_{2}M_0\Vert M\Vert \left\{ \Vert x_0\Vert _X+ a_{2}(t) +c_2\left[ \Vert x\Vert _X +\Vert B_r\Vert \left( a_3(t)+c_3\Vert x\Vert _X\right) \right] \right\} \\&\quad +\frac{C_{1}C_{2}M_0}{\varGamma (\alpha )}\left[ \frac{(1-\beta )}{\alpha -\beta }a^{\frac{\alpha -\beta }{1-\beta }}\right] ^{1-\beta } \left[ \Vert a_1\Vert _{L^{\frac{1}{\beta }}}+c_1 \sum _{i=1}^{r-1}\Vert B_i\Vert \Vert a_3\Vert _{L^{\frac{1}{\beta }}}\right] \\&\quad +\frac{C_{1}C_{2}M_0}{\varGamma (\alpha )}\left( c_1+c_1c_3 \sum _{i=1}^{r-1}\Vert B_i\Vert \right) \int _0^t(t-s)^{\alpha -1}\Vert x\Vert _X \hbox {d}s. \end{aligned}$$

From the above inequality, using the well-known singular version of Gronwall’s inequality (see Lemma 2.3), we can deduce that the inequality (7) is satisfied, that is, there exists a constant \(L_{0}>0\) such that \(\Vert x\Vert _C\le L_{0}\). \(\square \)
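The Hölder step used above, \(\int _{0}^{t}(t-s)^{\alpha -1}a_{1}(s)\,\hbox {d}s\le \left[ \frac{1-\beta }{\alpha -\beta }\,a^{\frac{\alpha -\beta }{1-\beta }}\right] ^{1-\beta }\Vert a_{1}\Vert _{L^{1/\beta }}\), can be checked numerically (illustrative sketch with hypothetical data \(\alpha =0.7\), \(\beta =0.3\), \(a_{1}(s)=1+s\)):

```python
import numpy as np

# Numerical check of the Holder estimate for 0 < beta < alpha < 1:
#   int_0^t (t-s)^(alpha-1) a1(s) ds
#     <= [ (1-beta)/(alpha-beta) * a^((alpha-beta)/(1-beta)) ]^(1-beta)
#        * ||a1||_{L^{1/beta}(I)}.
alpha, beta, a = 0.7, 0.3, 1.0
t, n = a, 200000
a1 = lambda s: 1.0 + s                               # sample a1 in L^{1/beta}

# left-hand side via the substitution u = (t - s)^alpha (removes singularity)
u = (np.arange(n) + 0.5) / n * t**alpha
du = t**alpha / n
lhs = np.sum(a1(t - u**(1.0 / alpha))) * du / alpha

# right-hand side: Holder constant times the L^{1/beta} norm of a1
s = (np.arange(n) + 0.5) / n * a
norm_a1 = (np.sum(a1(s)**(1.0 / beta)) * a / n) ** beta
rhs = ((1 - beta) / (alpha - beta) * a**((alpha - beta) / (1 - beta))) ** (1 - beta) * norm_a1
print(lhs, rhs)  # lhs <= rhs
```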

Let \({\text {pr}}_{L_0}:X\rightarrow X\) be the \(L_0\)-radial retraction, that is,

$$\begin{aligned} {\text {pr}}_{L_0}(x) := {\left\{ \begin{array}{ll} x, &{}\quad \Vert x\Vert _X\le L_0,\\ \frac{L_0x}{\Vert x\Vert _X}, &{}\quad \Vert x\Vert _X>L_0. \end{array}\right. } \end{aligned}$$

This map is Lipschitz continuous. We define \(U_1(t,x):= U(t,{\text {pr}}_{L_0}x)\). Evidently, \(U_1\) satisfies (H3.1) and (H3.2). Moreover, by the properties of \({\text {pr}}_{L_0}\), we have, a.e. in \(t\in I\), all \(x\in X\) and all \(u_{1},\ldots ,u_{r}\in U_1(t, x)\), that

$$\begin{aligned} \sup \lbrace \Vert u_{1}\Vert _T,\ldots ,\Vert u_{r}\Vert _T\rbrace\le & {} a_3(t)+c_3L_0\,\, \text {and}\,\,\sup \lbrace \Vert u_{1}\Vert _T, \ldots ,\Vert u_{r}\Vert _T\rbrace \\\le & {} a_3(t)+c_3\Vert x\Vert _X. \end{aligned}$$

Hence, Lemma 3.1 is still valid with \(U(t, x)\) substituted by \(U_1(t,x)\). Consequently, without loss of generality, we assume that, a.e. in \(t\in I\) and all \(x\in X\),

$$\begin{aligned} \sup \left\{ \Vert v\Vert _T:v\in U(t, x)\right\} \le \varphi (t)=a_3(t)+c_3L_0, ~ \text {with}~ \varphi \in L^{1/\beta }(I, {\mathbb {R}}^+). \end{aligned}$$
(8)
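The radial retraction and its Lipschitz property can be probed numerically (an illustrative sketch in the Euclidean norm, where \({\text {pr}}_{L_0}\) is in fact nonexpansive; in a general Banach space its Lipschitz constant is at most 2):

```python
import numpy as np

# The L0-radial retraction pr_{L0} and an empirical look at its
# Lipschitz constant in the Euclidean norm on R^2.
def pr(x, L0):
    nx = np.linalg.norm(x)
    return x if nx <= L0 else (L0 / nx) * x

L0 = 1.0
rng = np.random.default_rng(0)
worst = 0.0
for _ in range(10000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    lip = np.linalg.norm(pr(x, L0) - pr(y, L0)) / np.linalg.norm(x - y)
    worst = max(worst, lip)
print(worst)  # <= 1 here (Euclidean case); <= 2 in a general Banach space
```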

Now we consider the following fractional nonlocal semilinear auxiliary problem of Sobolev type:

$$\begin{aligned}&L~^{{C}}D_t^{\alpha }[Mx(t)]+Ex(t)=f(t, x(t)),~ t\in I, \end{aligned}$$
(9)
$$\begin{aligned}&x(0)=h(x(t)). \end{aligned}$$
(10)

It is clear that, for every \(f\in L^{1/\beta }(I\times X, Y),\,h\in L^{1/\beta }(I:X, X),\,0<\beta <\alpha \), the problem (9)–(10) has a unique mild solution \(H(f, h)\in C(I, X)\), which is given by

$$\begin{aligned} H(f,h)(t)=S_{\alpha }(t)Mh(x(t))+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)L^{-1}f(s, x(s))\hbox {d}s. \end{aligned}$$

Let \(\varphi \) be defined by (8). We put

$$\begin{aligned} T_{\varphi }:= & {} \left\{ u_{i}\in L^{1/\beta }(I, T): \Vert u_{i}(t)\Vert \le \varphi (t) ~ \text {a.e.}~ t\in I,\ i=1,\ldots ,r \right\} ,\\ X_{\varphi }:= & {} \left\{ (f, h): f\in L^{1/\beta }(I\times X, Y),\ h\in L^{1/\beta }(I:X, X)\right\} . \end{aligned}$$

The following lemma gives a property of the solution map \(S: T_{\varphi }\rightarrow C(I, X)\) of (1)–(2), which is crucial in our investigation.

Lemma 3.2

The solution map \(S: T_{\varphi }\rightarrow C(I, X)\) is continuous from \(\omega \)-\(T_{\varphi }\) into \(C(I, X)\).

Proof

The operator \(H: L^{1/\beta }(I\times X, Y)\times L^{1/\beta }(I:X, X)\rightarrow C(I,X)\) is linear. The estimate

$$\begin{aligned} \Vert H(f,h)\Vert _C\le C_{2}M_0\Vert M\Vert \Vert h\Vert _{L^{\frac{1}{\beta }}} +\frac{C_{1}C_{2}M_0}{\varGamma (\alpha )}\left[ \frac{(1-\beta )}{\alpha -\beta } a^{\frac{\alpha -\beta }{1-\beta }}\right] ^{1-\beta } \Vert f\Vert _{L^{\frac{1}{\beta }}} \end{aligned}$$

shows that \(H\) is continuous. Hence, \(H\) is also continuous from \(\omega \)-\(L^{1/\beta }(I\times X, Y)\times L^{1/\beta }(I:X, X)\) to \(\omega \)-\(C(I,X)\). Let \(C\in P_{b}(L^{1/\beta }(I,X))\) and suppose that, for any \(f,h\in C\), \(\Vert f\Vert _{L^{1/\beta }(I\times X, Y)}\le K_{1}\) and \(\Vert h\Vert _{L^{1/\beta }(I:X, X)}\le K_{2}\) (\(K_{1}, K_{2}>0\)). Next, we show that \(H\) is completely continuous.

  • Step 1. From Lemma 3.1, the set \(\{H(f,h): f,h\in C\}\) is uniformly bounded in \(C(I, X)\).

  • Step 2. \(H\) is equicontinuous on \(C\). Let \(0\le t_1<t_2\le a\). For any \(f,h\in C\), we obtain

    $$\begin{aligned}&\Vert H(f,h)(t_2)-H(f,h)(t_1)\Vert _X\\&\quad \le \left\| [S_{\alpha }(t_{2})-S_{\alpha }(t_{1})]M h(x)\right\| _{X}+\left\| \int _{t_1}^{t_2}(t_2-s)^{\alpha -1}T_{\alpha }(t_2-s)L^{-1}f(s, x(s))\hbox {d}s \right\| _X \\&\quad \quad +\left\| \int _0^{t_1}\Big [(t_2-s)^{\alpha -1}-(t_1-s)^{\alpha -1}\Big ] T_{\alpha }(t_2-s)L^{-1}f(s, x(s))\hbox {d}s\right\| _X \\&\quad \quad +\left\| \int _0^{t_1}(t_1-s)^{\alpha -1}\Big [T_{\alpha }(t_2-s) -T_{\alpha }(t_1-s)\Big ]L^{-1}f(s, x(s))\hbox {d}s\right\| _X\\&\quad \quad =: I_1+I_2+I_3+I_4. \end{aligned}$$

    By using analogous arguments as in Lemma 3.1, we have

    $$\begin{aligned} I_1&\le K_{2}\Vert M\Vert \sup \Vert S_{\alpha }(t_2)-S_{\alpha }(t_1)\Vert ,\\ I_2&\le \frac{C_1C_2M_0K_{1}}{\varGamma (\alpha )} \Big [\frac{1-\beta }{\alpha -\beta }\Big ]^{1-\beta }(t_2-t_1)^{\alpha -\beta }, \\ I_3&\le \frac{C_1C_2M_0K_{1}}{\varGamma (\alpha )}\left( \int _0^{t_1} \left( (t_1-s)^{\alpha -1}-(t_2-s)^{\alpha -1}\right) ^{1/(1-\beta )}\hbox {d}s\right) ^{1-\beta }\\&\le \frac{C_1C_2M_0K_{1}}{\varGamma (\alpha )}\left( \int _0^{t_1} \left( (t_1-s)^{\frac{\alpha -1}{1-\beta }}-(t_2-s)^{\frac{\alpha -1}{1-\beta }}\right) \hbox {d}s\right) ^{1-\beta }\\&= \frac{C_1C_2M_0K_{1}}{\varGamma (\alpha )} \left[ \frac{1-\beta }{\alpha -\beta }\right] ^{1-\beta } \left( t_1^{\frac{\alpha -\beta }{1-\beta }} -t_2^{\frac{\alpha -\beta }{1-\beta }}+(t_2-t_1)^{\frac{\alpha -\beta }{1-\beta }} \right) ^{1-\beta }\\&\le \frac{C_1C_2M_0K_{1}}{\varGamma (\alpha )} \Big [\frac{1-\beta }{\alpha -\beta }\Big ]^{1-\beta } \big (t_2-t_1\big )^{\alpha -\beta }. \end{aligned}$$

    For \(t_1=0\) and \(0<t_2\le a\), it is easy to see that \(I_4=0\). For \(t_1>0\) and \(\epsilon >0\) small enough,

    $$\begin{aligned} I_4&\le \left\| \int _0^{t_1-\epsilon }(t_1-s)^{\alpha -1}\Big (T_{\alpha }(t_2-s) -T_{\alpha }(t_1-s)\Big )L^{-1}f(s, x(s))\mathrm{d}s\right\| _X \\&\quad +\left\| \int _{t_1-\epsilon }^{t_1}(t_1-s)^{\alpha -1}\Big (T_{\alpha }(t_2-s) -T_{\alpha }(t_1-s)\Big )L^{-1}f(s, x(s))\mathrm{d}s\right\| _X \\&\le \sup _{s\in [0,t_1-\epsilon ]}\Vert T_{\alpha }(t_2-s) -T_{\alpha }(t_1-s)\Vert C_1 K_{1}\Big [\frac{1-\beta }{\alpha -\beta }\Big ]^{1-\beta } \left( t_1^{\frac{\alpha -\beta }{1-\beta }}-\epsilon ^{\frac{\alpha -\beta }{1-\beta }}\right) ^{1-\beta }\\&\quad +\frac{2C_1C_2M_0K_{1}}{\varGamma (\alpha )} \Big [\frac{1-\beta }{\alpha -\beta }\Big ]^{1-\beta }\epsilon ^{\alpha -\beta }. \end{aligned}$$

    Combining the estimates for \(I_1,\,I_2,\,I_3\), and \(I_4\), and letting \(t_2\rightarrow t_1\) and \(\epsilon \rightarrow 0\) in \(I_4\), we conclude that \(H\) is equicontinuous. For more details, see [17].

  • Step 3. The set \(\varPi (t):=\{H(f,h)(t): f,h\in C\}\) is relatively compact in \(X\). Clearly, \(\varPi (0)\) is compact. Hence, it is only necessary to consider \(t>0\). For arbitrary \(g \in ]0,t[,\,t\in ]0,a],\,f,h\in C\), and \(\delta >0\), we define \(\varPi _{g,\delta }(t):=\{H_{g,\delta }(f,h)(t): f,h\in C\}\), where

    $$\begin{aligned} H_{g,\delta }(f,h)(t)&=\int _{\delta }^{\infty }M^{-1}\xi _{\alpha }(\theta )Q(t^{\alpha }\theta ) Mh(x)\mathrm{d}\theta \\&\quad + \alpha \int _0^{t-g}\int _{\delta }^{\infty }\theta (t-s)^{\alpha -1}\xi _{\alpha } (\theta ) Q((t-s)^{\alpha }\theta )L^{-1}f(s, x(s))\mathrm{d}\theta \mathrm{d}s \\&=Q(g^{\alpha }\delta )\int _{\delta }^{\infty }M^{-1}\xi _{\alpha }(\theta ) Q(t^{\alpha }\theta -g^{\alpha }\delta )Mh(x)\mathrm{d}\theta \\&\quad +\alpha Q(g^{\alpha }\delta )\int _0^{t-g}\int _{\delta }^{\infty } \theta (t-s)^{\alpha -1}\xi _{\alpha }(\theta ) Q\big ((t-s)^{\alpha }\theta -g^{\alpha }\delta \big )L^{-1}\\&\quad \times f(s, x(s))\mathrm{d}\theta \mathrm{d}s\\&:=Q(g^{\alpha }\delta )y(t, g). \end{aligned}$$

Because \(Q(g^{\alpha }\delta )\) is compact and \(y(t, g)\) is bounded, we obtain that the set \(\varPi _{g,\delta }(t)\) is relatively compact in \(X\) for any \(g \in ]0,t[\) and \(\delta >0\). Moreover, we have

$$\begin{aligned}&\Vert H(f,h)(t)-H_{g,\delta }(f,h)(t)\Vert _X\\&\quad =\biggl \Vert \int _{0}^{\delta }M^{-1}\xi _{\alpha }(\theta )Q(t^{\alpha }\theta )Mh(x)\mathrm{d}\theta \\&\qquad +\,\alpha \int _0^t\int _0^{\delta }M^{-1}\theta (t-s)^{\alpha -1} \xi _{\alpha }(\theta ) Q((t-s)^{\alpha }\theta )L^{-1}f(s, x(s))\mathrm{d}\theta \mathrm{d}s \\&\qquad +\,\alpha \int _0^t\int _{\delta }^{\infty }M^{-1}\theta (t-s)^{\alpha -1}\xi _{\alpha }(\theta ) Q((t-s)^{\alpha }\theta )L^{-1}f(s, x(s))\mathrm{d}\theta \mathrm{d}s \\&\qquad -\,\alpha \int _0^{t-g}\int _{\delta }^{\infty }M^{-1}\theta (t-s)^{\alpha -1} \xi _{\alpha }(\theta ) Q((t-s)^{\alpha }\theta )L^{-1}f(s, x(s))\mathrm{d}\theta \mathrm{d}s\biggr \Vert _X\\&\quad \le \left\| \int _{0}^{\delta }M^{-1}\xi _{\alpha }(\theta )Q(t^{\alpha }\theta )Mh(x)\mathrm{d}\theta \right\| _{X} \\&\qquad +\,\alpha \left\| \int _0^t\int _0^{\delta }M^{-1}\theta (t-s)^{\alpha -1}\xi _{\alpha } (\theta ) Q((t-s)^{\alpha }\theta )L^{-1}f(s, x(s))\mathrm{d}\theta \mathrm{d}s\right\| _X \\&\qquad +\,\alpha \left\| \int _{t-g}^t\int _{\delta }^{\infty }M^{-1}\theta (t-s)^{\alpha -1} \xi _{\alpha }(\theta ) Q((t-s)^{\alpha }\theta )L^{-1}f(s, x(s))\mathrm{d}\theta \mathrm{d}s\right\| _X\\&\quad \le C_2M_{0}\int _{0}^{\delta }\xi _{\alpha }(\theta )\mathrm{d}\theta \Vert M\Vert \Vert h(x)\Vert _{L^{1/\beta }}\\&\qquad +\,C_1C_2M_0\alpha \left( \int _0^t(t-s)^{\frac{\alpha -1}{1-\beta }}\mathrm{d}s\right) ^{1-\beta } \Vert f\Vert _{L^{1/\beta }} \int _0^{\delta }\theta \xi _{\alpha }(\theta )\mathrm{d}\theta \\&\qquad +\,C_1C_2M_0\alpha \left( \int _{t-g}^t(t-s)^{\frac{\alpha -1}{1-\beta }}\mathrm{d}s \right) ^{1-\beta }\Vert f\Vert _{L^{1/\beta }} \int _{\delta }^{\infty }\theta \xi _{\alpha }(\theta )\mathrm{d}\theta \\&\quad \le C_2M_{0}\Vert M\Vert K_2\int _{0}^{\delta }\xi _{\alpha }(\theta )\mathrm{d}\theta \\&\qquad +\,C_1C_2M_0K_{1}\alpha \left[ \frac{1-\beta }{\alpha -\beta }\right] ^{1-\beta } \left( b^{\alpha -\beta }\int 
_0^{\delta }\theta \xi _{\alpha }(\theta )\mathrm{d}\theta +\frac{1}{\varGamma (1+\alpha )}g^{\alpha -\beta }\right) . \end{aligned}$$

From Definition 2.5 and Remark 2.2, we deduce that the right-hand side of the last inequality tends to zero as \(g\rightarrow 0\) and \(\delta \rightarrow 0\). Therefore, there are relatively compact sets arbitrarily close to the set \(\varPi (t),\,t>0\). Hence, the set \(\varPi (t),\,t>0\) is also relatively compact in \(X\).
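For the reader's convenience, we record the criterion that the steps above verify: the Banach-space-valued form of the Arzelà–Ascoli theorem, for a family \(\mathcal {F}\) of continuous functions on the compact interval \(I\),

$$\begin{aligned} \mathcal {F}\subseteq C(I, X)\ \text {relatively compact}\ \Longleftrightarrow \ \mathcal {F}\ \text {equicontinuous and each}\ \mathcal {F}(t):=\lbrace \phi (t): \phi \in \mathcal {F}\rbrace \ \text {relatively compact in}\ X. \end{aligned}$$

Applied to \(\mathcal {F}=\lbrace H(f,h): f,h\in C\rbrace \), equicontinuity (Step 2) and the relative compactness of \(\varPi (t)\) (Step 3) give that \(H\) maps \(C\) into a relatively compact subset of \(C(I,X)\).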

Since \(T_{\varphi }\) is a convex compact metrizable subset of \(\omega \)-\(L^{1/\beta }(I, T)\), it suffices to prove the sequential continuity of the map \(S\). Let \(\{u_{1,n}\}_{n\ge 1},\ldots ,\{u_{r,n}\}_{n\ge 1}\subseteq T_{\varphi }\) be such that

$$\begin{aligned} (u_{1,n},\ldots ,u_{r,n})\rightarrow (u_1,\ldots ,u_r)~ \text {in}~\omega -L^{1/\beta }(I, T), \quad u_1,\ldots ,u_r\in T_{\varphi }. \end{aligned}$$
(11)

Set \(f_n:=f\left( \cdot ,\cdot , B_1(\cdot )u_{1,n}(\cdot ),\ldots ,B_{r-1}(\cdot )u_{r-1,n}(\cdot )\right) \) and \(h_n:=h\left( \cdot ,B_{r}(\cdot )u_{r,n}(\cdot )\right) \). By the properties of the operator \(H\) together with (11), we have \(H(f_n, h_n)\rightarrow H(f,h)~\text {in}~ \omega -C(I, X)\), where \((f_n, h_n)\rightarrow (f,h)\) in \(\omega -L^{1/\beta }(I\times X^{r}, Y)\times L^{1/\beta }(X^{2}, X)\) and the limit functions are \(f=f\left( \cdot ,\cdot , B_1(\cdot )u_{1}(\cdot ),\ldots ,B_{r-1}(\cdot )u_{r-1}(\cdot )\right) \) and \(h=h(\cdot ,B_{r}(\cdot )u_{r}(\cdot ))\). Since \(\{f_n\}_{n\ge 1}\) and \(\{h_n\}_{n\ge 1}\) are bounded, there are subsequences \(\{f_{n_k}\}_{k\ge 1}\) and \(\{h_{n_k}\}_{k\ge 1}\) of \(\{f_n\}_{n\ge 1}\) and \(\{h_n\}_{n\ge 1}\), respectively, such that \(H(f_{n_k}, h_{n_k})\rightarrow z\) in \(C(I,X)\) for some \(z\in C(I,X)\). From the fact that \(H(f_n, h_n)\rightarrow H(f, h)~\text {in }\omega \text {-}C(I,X)\) and \(H(f_{n_k}, h_{n_k})\rightarrow z~\text {in}~ C(I,X)\), we obtain that \(z=H(f, h)\) and \(H(f_n, h_n)\rightarrow H(f, h)\) in \(C(I,X)\). By the definitions of the operators \(S\) and \(H\), we know that \(S(u_1,\ldots ,u_r)(t)=S_{\alpha }(t)Mx_0+H(f, h)(t)\). According to the arguments above, we conclude that \(S\left( u_{1,n},\ldots ,u_{r,n}\right) (t)\rightarrow S(u_1,\ldots ,u_r)(t)\) in \(C(I, X)\). \(\square \)

Now, we consider the space \(\overline{T}:=T\times {\mathbb {R}}\). The elements of the space \(\overline{T}\) will be denoted by \(\overline{u}_i:=(u_i, \tau _i)\), such that \(u_i\in T,\,\tau _i\in {\mathbb {R}}\), and \(i=1,\ldots ,r\). The space \(\overline{T}\) is endowed with the norm \(\Vert \overline{u}\Vert _{\overline{T}}=\max \lbrace \max (\Vert u_1\Vert _T, \vert \tau _1\vert ), \ldots ,\max (\Vert u_r\Vert _T, \vert \tau _r\vert )\rbrace \). Then \(\overline{T}\) is a separable reflexive Banach space. In view of (6), the norm on the space \(L^{q}_{\omega }(I, {\overline{T}})\) becomes

$$\begin{aligned} \Vert {\overline{u}}\Vert _\omega= & {} \sup \limits _{0\le t_1\le t_2\le a}\Biggl \lbrace \max \biggl \lbrace \max \left( \left\| \int _{t_1}^{t_2}u_1(s)\mathrm{d}s\right\| _T, \left| \int _{t_1}^{t_2}\tau _{1}(s)\mathrm{d}s\right| \right) ,\\&\ldots , \max \left( \left\| \int _{t_1}^{t_2}u_r(s)\mathrm{d}s\right\| _T, \left| \int _{t_1}^{t_2}\tau _{r}(s)\mathrm{d}s\right| \right) \biggr \rbrace \Biggr \rbrace . \end{aligned}$$

Let the multivalued map \(F: I\times X \rightrightarrows \overline{T}\) be defined by

$$\begin{aligned} F(t, x) :=\left\{ (u_i, \tau _i)\in \overline{T}\Biggl \vert \begin{array}{lll} u_{1}\in U(t, x), \tau _1=g_{1}(t, x, u_{1})\\ \qquad \vdots \\ u_{r}\in U(t, x), \tau _r=g_{r}(t, x, u_{r}) \end{array} \right\} . \end{aligned}$$
(12)

Lemma 3.3

The multivalued map \(F\) given by (12) has bounded closed values and satisfies:

  (1) the map \(t\rightarrow F(t, x)\) is measurable;

  (2) \(d_H(F(t, x), F(t, y))\le l(t)\Vert x-y\Vert _X\) a.e., with \(l\in L^{1}(I, {\mathbb {R}}^+)\);

  (3) for any \(\overline{u}_i=(u_i, \tau _i)\in F(t, x)\), we have \(\vert \tau _i\vert \le a_4(t)+b_4(t)\Vert x\Vert _X+c_4(a_3(t)+c_3\Vert x\Vert _X)\) and \(\Vert u_i\Vert _T\le a_3(t)+c_3\Vert x\Vert _X,\,i=1,\ldots ,r\).

Proof

From (H3.3) and (H4.3), we have the boundedness of \(F(t, x)\); moreover, item (3) follows. Since the graphs of the functions \(u_1\rightarrow g_1(t, x, u_1),\ldots ,u_r\rightarrow g_r(t, x, u_r)\) are closed on the set \(U(t, x)\), we obtain the closedness of \(F(t, x)\). The measurability of the multivalued map follows from [43, 44]. To prove item (2), consider \(x, y\in X\) with \(x\ne y\), and arbitrary \(\epsilon _i >0,\,i=1,\ldots ,r\). Then, for each \(u_i\in U(t, x)\), there exists \(v_i\in U(t, y)\) satisfying \(\Vert u_i-v_i\Vert _T \le (k_3(t)+\epsilon _i)\Vert x-y\Vert _X\) and \(\vert g_i(t, x, u_i)-g_i(t, y, v_i)\vert \le k_4(t) \Vert x-y\Vert _X+k_{4}^{\prime }((k_3(t)+\epsilon _i)\Vert x-y\Vert _X)\). From the above, we get

$$\begin{aligned}&\max \Biggl \lbrace \sup d\biggl ((u_1,g_1(t,x,u_1)), F(t,y)\biggr ), \ldots , \sup d\biggl ((u_r,g_r(t,x,u_r)), F(t,y)\biggr )\Biggr \rbrace \\&\quad \le l(t)\Vert x-y\Vert _X, \end{aligned}$$

where \(l(t):=\max \lbrace k_3(t), k_4(t)+k_{4}^{\prime }k_3\rbrace \). Similarly, we can get

$$\begin{aligned}&\max \Biggl \lbrace \sup d\biggl ((v_1,g_1(t,y,v_1)), F(t,x)\biggr ), \ldots , \sup d\biggl ((v_r,g_r(t,y,v_r)), F(t,x)\biggr )\Biggr \rbrace \\&\quad \le l(t)\Vert x-y\Vert _X. \end{aligned}$$

Taking the maximum of the two last estimates gives the result. \(\square \)
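For clarity, the final step of the above proof uses the standard description of the Hausdorff distance between bounded sets \(A, B\),

$$\begin{aligned} d_H(A, B)=\max \Big \lbrace \sup _{a\in A}d(a, B),\ \sup _{b\in B}d(b, A)\Big \rbrace , \end{aligned}$$

so the two displayed \(\max \)-estimates bound exactly the two one-sided excesses of \(F(t,x)\) and \(F(t,y)\), and their maximum bounds \(d_H(F(t,x), F(t,y))\) by \(l(t)\Vert x-y\Vert _X\), as required in item (2).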

Let \(\text {Eff} g^{**}(t, x)\) be the effective set, and \(\text {Epi} g^{**}(t, x)\) the epigraph of functions \(u_1\rightarrow g_{1}^{**}(t, x, u_{1}),\,\ldots ,\,u_r\rightarrow g_{r}^{**}(t, x, u_{r})\), that is,

  (1) \(\text {Eff} g^{**}(t, x):=\lbrace u_i\in T: \max \lbrace g_{1}^{**}(t, x, u_1),\ldots , g_{r}^{**}(t, x, u_r)\rbrace <+\infty \rbrace \),

  (2) \(\text {Epi} g^{**}(t, x):=\lbrace (u_i, \tau _i)\in \overline{T}: g_{1}^{**}(t, x, u_{1}) \le \tau _1,\ldots ,g_{r}^{**}(t, x, u_{r})\le \tau _r\rbrace \).
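Here \(g_{i}^{**}\) denotes, as is standard, the bipolar (Fenchel biconjugate) in the control variable; writing \(\langle \cdot ,\cdot \rangle \) for the pairing between \(T\) and its dual \(T^{*}\) (notation assumed here, the precise definition being part of the paper's preliminaries), the construction reads

$$\begin{aligned} g_{i,U}^{*}(t, x, p)&:=\sup _{u_i\in T}\big \lbrace \langle p, u_i\rangle -g_{i,U}(t, x, u_i)\big \rbrace , \quad p\in T^{*},\\ g_{i}^{**}(t, x, u_i)&:=\sup _{p\in T^{*}}\big \lbrace \langle p, u_i\rangle -g_{i,U}^{*}(t, x, p)\big \rbrace , \end{aligned}$$

so that \(g_{i}^{**}(t, x, \cdot )\) is the largest convex l.s.c. function majorized by \(g_{i,U}(t, x, \cdot )\), that is, its \(\varGamma \)-regularization.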

Now, we present some properties of the functions \(g_{i}^{**}(t, x, u_{i})\) in the following lemma.

Lemma 3.4

For a.e. \(t\in I\), one has:

  (1) \(\text {Eff} g^{**}(t, x)={\text {cl}}\,{\text {conv}}\,U(t,x)\);

  (2) \(g_{1}^{**}(t, x, u_{1})+\cdots +g_{r}^{**}(t, x, u_{r}) =\min \lbrace \tau _{1}+\cdots +\tau _{r}\in {\mathbb {R}}: (u_i,\tau _i)\in {\text {cl}}\,{\text {conv}}\,F(t, x)\rbrace \) for every \(u_1,\ldots ,u_r\in \text {Eff} g^{**}(t, x)\), and hence \((u_i,g_{i}^{**}(t, x, u_{i}))\in {\text {cl}}\,{\text {conv}}\,F(t, x)\) when \(u_i\in {\text {cl}}\,{\text {conv}}\,U(t, x)\) and \(x\in X\);

  (3) for any \(\epsilon _i>0\), there exist closed sets \(I_{\epsilon _{i}}\subseteq I\) with \(\mu (I\setminus I_{\epsilon _{i}})\le \epsilon _i\), such that \((t, x, u_i)\rightarrow g_{i}^{**}(t, x, u_{i})\) are l.s.c. on \(I_{\epsilon _{i}}\times X\times T,\,i=1,\ldots ,r\).

Proof

It is well known that the bipolars \(g_{1}^{**}(t, x, u_{1}),\ldots ,g_{r}^{**}(t, x, u_{r})\) are the \(\varGamma \)-regularizations of \(u_1\rightarrow g_{1,U}(t, x, u_1),\ldots ,u_r\rightarrow g_{r,U}(t, x, u_r)\), respectively. Fix an arbitrary \(t\in I\) (a.e.). By (H4.3), each of the functions \(u_1\rightarrow g_{1,U}(t, x, u_1),\ldots ,u_r\rightarrow g_{r,U}(t, x, u_r)\) has an affine continuous minorant. Then,

$$\begin{aligned} \text {Epi} g^{**}(t, x)={\text {cl}}\,{\text {conv}}\,\bigcup \limits _{i=1}^{r}\text {Epi} g_{i,U}(t, x). \end{aligned}$$
(13)

Therefore, items (1) and (2) follow from (13) and (11) (for more details, see [45]). Using Corollary 2.1 of [44] and items (1) and (2) of Lemma 3.3, for every \(\epsilon _1,\ldots ,\epsilon _r>0\), there are closed sets \(I_{\epsilon _{1}},\ldots ,I_{\epsilon _{r}}\subseteq I\) with \(\mu (I\setminus I_{\epsilon _{1}})\le \epsilon _{1},\ldots ,\mu (I\setminus I_{\epsilon _{r}})\le \epsilon _r\) such that the map \({\text {cl}}\,{\text {conv}}\,F(t, x)\), restricted to \(I_{\epsilon _{1}}\times X,\ldots ,I_{\epsilon _{r}}\times X\), has a closed graph in \(I\times X\times \overline{T}\). To show item (3), consider \((t_n, x_n, u_{i,n})_{n\ge 1}\in I_{\epsilon _{i}}\times X\times T\) such that \((t_n, x_n, u_{i,n})\rightarrow (t, x, u_{i})\). If \(\lim _{n\rightarrow \infty } g_{i}^{**}(t_n, x_n, u_{i,n})=+\infty \), then \(g_{i}^{**}\) is l.s.c. at the point \((t, x, u_i),\,i=1,\ldots ,r\). If \(\lim _{n\rightarrow \infty } g_{i}^{**}(t_n, x_n, u_{i,n})=\lambda _{i}\) with \(\lambda _{i} \ne +\infty \), then (H4.3) gives \(\lambda _i\ne -\infty \). Hence, we can assume, without loss of generality, that \(g_{i}^{**}(t_n, x_n, u_{i,n})<+\infty \), and therefore \((u_{i,n}, g_{i}^{**}(t_n, x_n, u_{i,n})) \in {\text {cl}}\,{\text {conv}}\,F(t_n, x_n)\). Since, as shown above, the map \({\text {cl}}\,{\text {conv}}\,F(t, x)\) restricted to \(I_{\epsilon _{1}}\times X,\ldots ,I_{\epsilon _{r}}\times X\) has a closed graph in \(I\times X\times \overline{T}\), we obtain that \((u_i, \lambda _i)\in {\text {cl}}\,{\text {conv}}\,F(t, x)\). By item (2) of this lemma, we have \(g_{i}^{**}(t, x, u_{i})\le \lambda _i=\lim _{n\rightarrow \infty }g_{i}^{**}(t_n, x_n, u_{i,n})\). Consequently, the maps \((t, x, u_i)\rightarrow g_{i}^{**}(t, x, u_{i})\) are l.s.c. on \(I_{\epsilon _{i}}\times X\times T\). \(\square \)

4 Existence for Multiple Control Systems

In this section, we shall prove the existence of solutions for the multiple control systems (1)–(3) and (1)–(2), (4). Let \(\varLambda := S(T_\varphi )\). From Lemma 3.2, we have that \(\varLambda \) is a compact subset of \(C(I,X)\). It follows from (8) and the definitions of \(T_\varphi \) and \(X_\varphi \) that \({\mathcal {T}}r_U\subseteq {\mathcal {T}} r_{{\text {cl}}\,{\text {conv}}\,U}\subseteq \varLambda \). Let the set-valued map \(\overline{U}: C(I, X) \rightarrow 2^{L^{1/\beta }(I, T)}\) be defined by

$$\begin{aligned} \overline{U}(x):= & {} \left\{ \theta _i: I\rightarrow T\text { measurable}: \theta _i(t)\in U(t, x(t)) \text { a.e.},\quad i=1,\ldots ,r\right\} ,\\&\quad x\in C(I, X). \end{aligned}$$

Theorem 4.1

The set \({\mathcal {R}}_U\) is nonempty, and the set \({\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\) is a compact subset of the space \(C(I,X)\times \omega \)-\(L^{1/\beta }(I,T)\).

Proof

By hypotheses (H3.1) and (H3.2), we have that, for any measurable function \(x: I\rightarrow X\), the map \(t\rightarrow U(t, x(t))\) is measurable and has closed values [39, Proposition 2.7.9]. Therefore, it has measurable selectors [43]. So the operator \(\overline{U}\) is well defined, and its values are closed decomposable subsets of \(L^{1/\beta }(I, T)\). We claim that \(x\rightarrow \overline{U}(x)\) is l.s.c. Let \(x_* \in C(I, X),\,\theta _{i,*}\in \overline{U}(x_*),\,i=1,\ldots ,r\), and let \(\{x_n\}_{n\ge 1}\subseteq C(I, X)\) be a sequence converging to \(x_*\). It follows from [46, Lemma 3.2] that there are sequences \(\theta _{i,n}\in \overline{U}(x_n)\) such that

$$\begin{aligned} \sum \limits _{i=1}^{r}\Vert \theta _{i,*}(t)-\theta _{i,n}(t)\Vert _T \le \sum \limits _{i=1}^{r}\biggl \lbrace d_T(\theta _{i,*}(t), U(t,x_n(t)))+\frac{1}{in}\biggr \rbrace , \quad \text {a.e. }t\in I. \end{aligned}$$
(14)

Since the map \(y\rightarrow U(t,y)\) is \(H\)-continuous a.e. in \(t\in I\) [by (H3.2)], it is l.s.c. a.e. in \(t\in I\) [39, Proposition 1.2.66]. Hence, each function \(y\rightarrow d_T(\theta _{i,*}(t),U(t, y))\) is u.s.c. for a.e. \(t\in I\). It follows from (14) that, a.e. in \(t\in I\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\sum \limits _{i=1}^{r}\Vert \theta _{i,*}(t)-\theta _{i,n}(t)\Vert _T&\le \lim _{n\rightarrow \infty }\sum \limits _{i=1}^{r}\sup d_T\left( \theta _{i,*}(t),U(t,x_n(t))\right) \\&\le \sum \limits _{i=1}^{r}d_T\left( \theta _{i,*}(t),U(t,x_*(t))\right) =0. \end{aligned}$$

The last inequality together with (8) implies that \(\theta _{i,n}\rightarrow \theta _{i,*}\) in \(L^{1/\beta }(I, T), ~i=1,\ldots ,r\). Therefore, the map \(x\rightarrow \overline{U}(x)\) is l.s.c. By [47] (see also [39, Theorem 2.8.7]), there exists a continuous function \(m:\varLambda \rightarrow L^{1/\beta }(I, T)\) such that \(m(x)\in \overline{U}(x)\) for all \(x\in \varLambda \). Consider the map \({\mathcal {P}}:L^{1/\beta }(I, T)\rightarrow L^{1/\beta }(I, T)\) defined by \({\mathcal {P}}(\theta _1,\ldots ,\theta _r):=m(S(\theta _1,\ldots ,\theta _r))\). According to (8) and the definition of \(T_\varphi \), we have \({\mathcal {P}}(\theta _1,\ldots ,\theta _r)\in T_\varphi \) for every \(\theta _1,\ldots ,\theta _r\in T_\varphi \). Due to Lemma 3.2 and the continuity of \(m\), the map \({\mathcal {P}}: \omega \)-\(T_{\varphi }\rightarrow \omega \)-\(T_{\varphi }\) is continuous. Since \(\omega \)-\(T_{\varphi }\) is a convex metrizable compact set in \(\omega \)-\(L^{1/\beta }(I, T)\), by applying Schauder’s fixed point theorem, we deduce that this map has a fixed point \((\theta _{1,*},\ldots ,\theta _{r,*})\in T^{r}_{\varphi }\), that is, \((\theta _{1,*},\ldots ,\theta _{r,*}) ={\mathcal {P}}(\theta _{1,*},\ldots ,\theta _{r,*})=m(S(\theta _{1,*},\ldots ,\theta _{r,*}))\). Let \((u_{1,*},\ldots ,u_{r,*}):=(\theta _{1,*},\ldots ,\theta _{r,*})\) and \(x_* := S(\theta _{1,*},\ldots ,\theta _{r,*})\). Then, \((u_{1,*},\ldots ,u_{r,*})=m(x_*)\) and \(x_*=S(u_{1,*},\ldots ,u_{r,*})\). Thus, we have

$$\begin{aligned} x_*(t)&=S(u_{1,*},\ldots ,u_{r,*})(t)=S_{\alpha }(t)M[x_{0}-h(x_*(t), B_{r}(t)u_{r,*}(t))]\\&\quad +\,\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)L^{-1}f(s, x_*(s), B_{1}(s)u_{1,*}(s),\ldots ,\\&\quad B_{r-1}(s)u_{r-1,*}(s))\mathrm{d}s,\\&\quad u_{1,*},\ldots ,u_{r,*}\in U(t,x_*(t))\,\,\text {a.e.} \,\, t\in I, \end{aligned}$$

which implies that \((x_*(\cdot ),u_{1,*}(\cdot ),\ldots ,u_{r,*}(\cdot ))\) is a solution of the control system (1)–(3). Hence, \({\mathcal {R}}_U\) is nonempty. It is easy to see that \({\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\subseteq \varLambda \times T_{\varphi }\). Since \(\varLambda \) is compact in \(C(I, X)\) and \(T_{\varphi }\) is metrizable convex compact in \(\omega \)-\(L^{1/\beta }(I, T)\), we have that \({\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\) is relatively compact in \(C(I, X)\times \omega \)-\(L^{1/\beta }(I, T)\). Hence, to complete the proof of this theorem, it is sufficient to prove that \({\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\) is sequentially closed in \(C(I, X)\times \omega \)-\(L^{1/\beta }(I, T)\). Let \(\{(x_n(\cdot ),u_{1,n}(\cdot ),\ldots ,u_{r,n}(\cdot ))\}_{n\ge 1} \subseteq {\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\) be a sequence converging to \((x(\cdot ),u_{1}(\cdot ),\ldots ,u_{r}(\cdot ))\) in the space \(C(I, X)\times \omega \)-\(L^{1/\beta }(I, T)\). Then we have \(x_n=S(u_{1,n},\ldots ,u_{r,n})\) and \((u_{1,n},\ldots ,u_{r,n}) \rightarrow (u_{1},\ldots ,u_{r})\) in \(\omega \)-\(L^{1/\beta }(I, T)\). Denote \(z := S(u_1,\ldots ,u_r)\). From Lemma 3.2, we obtain that \(z=x\), that is, \(x\) is a solution of (9)–(10) corresponding to \(u_1,\ldots ,u_r\). Hence, to prove that \((x(\cdot ),u_{1}(\cdot ), \ldots ,u_{r}(\cdot ))\in {\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\), we only need to verify that \(u_1,\ldots ,u_r\in {\text {cl}}\,{\text {conv}}\,U(t, x(t))\) a.e. in \(t\in I\). Since \(u_{1,n}\rightarrow u_1,\ldots ,u_{r,n}\rightarrow u_r\) in \(\omega \)-\(L^{1/\beta }(I, T)\), by Mazur’s theorem, we have

$$\begin{aligned}&u_{1}(t)\in \bigcap \limits _{n=1}^{\infty } {\text {cl}}\,{\text {conv}}\,\left( \bigcup \limits _{k=n}^{\infty } u_{1,k}(t)\right) ,\ldots ,u_{r}(t)\in \bigcap \limits _{n=1}^{\infty } {\text {cl}}\,{\text {conv}}\,\left( \bigcup \limits _{k=n}^{\infty } u_{r,k}(t)\right) \nonumber \\&\quad \text {for a.e. }t\in I. \end{aligned}$$
(15)

From hypothesis (H3.2) and the fact that \(d_H({\text {cl}}\,{\text {conv}}\,A,{\text {cl}}\,{\text {conv}}\,B) \le d_H(A,B)\) for sets \(A,B\), the map \(x \rightarrow {\text {cl}}\,{\text {conv}}\,U(t,x)\) is \(H\)-continuous. Then, from Proposition 1.2.86 in [39], we conclude that the map \(x\rightarrow {\text {cl}}\,{\text {conv}}\,U(t,x)\) has property \(Q\). Therefore, we have

$$\begin{aligned}&\bigcap \limits _{n=1}^{\infty }{\text {cl}}\,{\text {conv}}\,\left( \bigcup \limits _{k=n}^{\infty }{\text {cl}}\,{\text {conv}}\,U(t,x_k(t))\right) \subseteq {\text {cl}}\,{\text {conv}}\,U(t,x(t)) \nonumber \\&\quad \text {for a.e. }t\in I. \end{aligned}$$
(16)

By (15) and (16), we obtain that \(u_{1}(t),\ldots ,u_{r}(t)\in {\text {cl}}\,{\text {conv}}\,U(t,x(t))\) a.e. in \(t\in I\). This means that \({\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\) is compact in \(C(I, X)\times \omega \)-\(L^{1/\beta }(I, T)\). \(\square \)
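The proof above obtains the solution as a fixed point of the operator \({\mathcal {P}}\). To make this mechanism concrete, here is a scalar toy analogue (our sketch, not the paper's Banach-space setting): with \(f(s,x)=-x\), a fixed control, and all operators scalar and trivial, the mild-solution equation reduces to \(x(t)=x_0+\frac{1}{\varGamma (\alpha )}\int _0^t(t-s)^{\alpha -1}(-x(s))\,\mathrm{d}s\), whose exact solution is the Mittag-Leffler function \(E_{\alpha }(-t^{\alpha })\), and simple Picard iteration of the discretized operator converges to it:

```python
import math
import numpy as np

# Scalar toy analogue of the mild-solution fixed point:
#   x(t) = x0 + (1/Gamma(alpha)) * \int_0^t (t-s)^{alpha-1} f(s, x(s)) ds,
# with f(s, x) = -x.  The exact solution is E_alpha(-t^alpha)
# (Mittag-Leffler).  Illustrative sketch only; helper names are ours.

def picard_fractional(alpha=0.5, x0=1.0, a=1.0, m=400, iters=60):
    t = np.linspace(0.0, a, m)
    x = np.full(m, x0)
    ga = math.gamma(alpha)
    for _ in range(iters):  # Picard iteration of the Volterra operator
        new = np.empty(m)
        new[0] = x0
        for k in range(1, m):
            # product rectangle rule: freeze x on each cell [t_j, t_{j+1}]
            # and integrate the singular kernel (t_k - s)^{alpha-1} exactly
            w = ((t[k] - t[:k]) ** alpha - (t[k] - t[1:k + 1]) ** alpha) / alpha
            new[k] = x0 + np.dot(w, -x[:k]) / ga
        x = new
    return t, x

def mittag_leffler(alpha, z, terms=80):
    """Truncated series E_alpha(z) = sum_{n>=0} z^n / Gamma(alpha*n + 1)."""
    return sum(z ** n / math.gamma(alpha * n + 1) for n in range(terms))

t, x = picard_fractional()
exact = np.array([mittag_leffler(0.5, -tt ** 0.5) for tt in t])
err = float(np.max(np.abs(x - exact)))  # small discretization error
```

The Volterra structure makes the iteration converge even though the fractional integral operator is not a contraction on all of \([0,1]\), mirroring why fixed-point arguments work on the whole interval in the text.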

5 Main Results

In order to state and prove our main results, we first establish the following auxiliary lemma.

Lemma 5.1

For any function \(x_*\in C(I, X)\) and any measurable selectors \(u_{1,*},\ldots ,u_{r,*}\) of the map \(t\rightarrow {\text {cl}}\,{\text {conv}}\,U(t, x_*(t))\), there are sequences \(u_{1,n}(t),\ldots , u_{r,n}(t),\,n\ge 1\), of measurable selectors of the map \(t\rightarrow U(t, x_*(t))\), such that

$$\begin{aligned}&\sup _{0\le t_1\le t_2\le a}\sum \limits _{i=1}^{r}\biggl \Vert \int _{t_1}^{t_2}(u_{i,*}(s)-u_{i,n}(s))\mathrm{d}s \biggr \Vert _T\le \sum \limits _{i=1}^{r}\frac{1}{in}, \end{aligned}$$
(17)
$$\begin{aligned}&\sup _{0\le t_1\le t_2\le a}\sum \limits _{i=1}^{r}\biggl \vert \int _{t_1}^{t_2}(g_{i}^{**}(s, x_*(s), u_{i,*}(s)) -g_{i}(s, x_*(s), u_{i,n}(s)))\mathrm{d}s \biggr \vert \le \sum \limits _{i=1}^{r}\frac{1}{in}.\nonumber \\ \end{aligned}$$
(18)

The sequences \(u_{i,n}\) converge to \(u_{i,*}\) in \(\omega \)-\(L^{1/\beta }(I, T)\).

Proof

Let \(\overline{u}_{i,*}(t):=\left( u_{i,*}, g_{i}^{**}(t, x_*(t), u_{i,*}(t))\right) ,\,i=1,\ldots ,r\). According to Lemma 3.4, we know that \(\overline{u}_{i,*}(t)\) are measurable selectors of the map \(t \rightarrow {\text {cl}}\,{\text {conv}}\,F(t, x_*(t))\). From Lemma 3.3, the map \(t\rightarrow F(t, x_*(t))\) is measurable and integrally bounded. Hence, by [47, Theorem 2.2], for any \(n\ge 1\), there exist measurable selectors \(\overline{u}_{1,n}(t),\ldots ,\overline{u}_{r,n}(t)\) of the map \(t\rightarrow F(t, x_*(t))\) such that

$$\begin{aligned} \sup _{0\le t_1\le t_2\le a}\biggl \Vert \int _{t_1}^{t_2}(\overline{u}_{i,*}(s)-\overline{u}_{i,n}(s))\mathrm{d}s \biggr \Vert _{\overline{T}}\le \frac{1}{in}, \quad i=1,\ldots ,r. \end{aligned}$$

The definitions of \(F\) and of the weak norm on \(L^{1/\beta }(I, \overline{T})\) give \(\overline{u}_{i,n}(t)=(u_{i,n}, g_{i}(t, x_*(t), u_{i,n}(t)))\) and \(u_{i,n} \in U(t, x_*(t)),\,i=1,\ldots ,r\), a.e. Then formulas (17) and (18) follow. Hence, from Lemma 2.1, \(u_{i,n}\rightarrow u_{i,*}\) in \(\omega \)-\(L^{1/\beta }(I, T)\). \(\square \)

Theorem 5.1

Let \((x_*(\cdot ),u_{1,*}(\cdot ),\ldots ,u_{r,*}(\cdot )) \in {\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\) be arbitrary. Then there exists a sequence

$$\begin{aligned} (x_n(\cdot ), u_{1,n}(\cdot ),\ldots ,u_{r,n}(\cdot ))\in {\mathcal {R}}_{U}, \quad n\ge 1, \end{aligned}$$

such that

$$\begin{aligned}&x_n\rightarrow x_* ~\text {in}~ C(I, X), \end{aligned}$$
(19)
$$\begin{aligned}&u_{i,n}\rightarrow u_{i,*}~ \text {in}~ L^{\frac{1}{\beta }}_{\omega }(I, T)~ \text {and}~ \omega \text {-}L^{\frac{1}{\beta }}(I, T), \end{aligned}$$
(20)
$$\begin{aligned}&\lim \limits _{n\rightarrow \infty }\sup _{0\le t_1\le t_2\le a} \sum \limits _{i=1}^{r}\biggl \vert \int _{t_1}^{t_2}(g_{i}^{**}(s, x_*(s), u_{i,*}(s)) -g_{i}(s, x_n(s), u_{i,n}(s)))\mathrm{d}s \biggr \vert =0.\nonumber \\ \end{aligned}$$
(21)

Proof

Let \((x_*(\cdot ),u_{1,*}(\cdot ),\ldots ,u_{r,*}(\cdot ))\in {\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\). From Lemma 5.1, for any \(n\ge 1\), there are measurable selectors \(v_{1,n}(t),\ldots ,v_{r,n}(t)\) of the multivalued map \(t \rightrightarrows U(t,x_*(t))\) such that

$$\begin{aligned}&\sup _{0\le t_1\le t_2\le a}\sum \limits _{i=1}^{r}\biggl \Vert \int _{t_1}^{t_2}(u_{i,*}(s)-v_{i,n}(s))\mathrm{d}s \biggr \Vert _T\le \sum \limits _{i=1}^{r}\frac{1}{in},\nonumber \\&\quad \sup _{0\le t_1\le t_2\le a}\sum \limits _{i=1}^{r}\biggl \vert \int _{t_1}^{t_2}(g_{i}^{**}(s, x_*(s), u_{i,*}(s))-g_{i}(s, x_*(s), v_{i,n}(s)))\mathrm{d}s \biggr \vert \le \sum \limits _{i=1}^{r}\frac{1}{in}.\nonumber \\ \end{aligned}$$
(22)

Moreover, \(v_{i,n}\rightarrow u_{i,*}\) in \(\omega \text {-}L^{\frac{1}{\beta }}(I, T),\,i=1,\ldots ,r\). For each fixed \(n\ge 1\), by (H3.2), we have that, for any \(x\in X\) and a.e. \(t\in I\), there exist \(v_{i}\in U(t,x),\,i=1,\ldots ,r\), such that

$$\begin{aligned} \Vert v_{i,n}(t)-v_i\Vert _T<k_i(t)\Vert x_*(t)-x\Vert _X+\frac{1}{in}. \end{aligned}$$
(23)

Let the map \(\varUpsilon _n: I\times X\rightarrow 2^T\) be defined by

$$\begin{aligned} \varUpsilon _n(t,x):=\left\{ v_{i}\in T: v_{i}, i=1,\ldots ,r, ~\text {satisfy inequality (23)}\right\} . \end{aligned}$$
(24)

It follows from (23) that \(\varUpsilon _n(t,x)\) is well defined for a.e. \(t\in I\) and all \(x\in X\), and its values are open sets. Using [44, Corollary 2.1] (since we can assume, without loss of generality, that \(U(t,x)\) is \(\varSigma \otimes {\mathcal {B}}_X\) measurable, see [39, Proposition 2.7.9]), we obtain that, for any \(\epsilon _1,\ldots ,\epsilon _r>0\), there are compact sets \(I_{\epsilon _1},\ldots ,I_{\epsilon _r}\subseteq I\) with \(\mu (I\backslash I_{\epsilon _{1}})\le \epsilon _{1},\ldots ,\mu (I\backslash I_{\epsilon _{r}})\le \epsilon _{r}\), such that the restrictions of \(U(t,x)\) to \(I_{\epsilon _{1}}\times X,\ldots ,I_{\epsilon _{r}}\times X\) are l.s.c. and the restrictions of \(v_{1,n}(t),\ldots , v_{r,n}(t)\) and \(k_1(t),\ldots ,k_r(t)\) to \(I_{\epsilon _{1}},\ldots ,I_{\epsilon _{r}}\), respectively, are continuous. Hence, (23) and (24) imply that the graphs of the restrictions of \(\varUpsilon _n(t,x)\) to \(I_{\epsilon _{i}}\times X\) are open sets in \(I_{\epsilon _{i}}\times X\times T,\,i=1,\ldots ,r\), respectively. Let the map \(\varUpsilon : I\times X\rightarrow 2^T\) be defined by \(\varUpsilon (t,x):=\varUpsilon _n(t,x)\cap U(t,x)\). Clearly, a.e. in \(t\in I\) and for all \(x\in X,\,\varUpsilon (t,x)\ne \emptyset \). Due to the arguments above and Proposition 1.2.47 in [39], we know that the restrictions of \(\varUpsilon (t,x)\) to \(I_{\epsilon _{i}}\times X\) are l.s.c., and so is \(\overline{\varUpsilon }(t,x)=\overline{\varUpsilon (t,x)}\). Here the bar stands for the closure of a set in \(T\). Now consider the system (1)–(2) with the constraint on the controls

$$\begin{aligned} u_1(t),\ldots ,u_r(t)\in \overline{\varUpsilon }(t,x(t)) \quad \text {a.e. on } I. \end{aligned}$$
(25)

Since \(\overline{\varUpsilon }(t,x)\subseteq U(t,x)\), the estimate of Lemma 3.1 also holds in this case. Repeating the proof of Theorem 4.1, we obtain that there is a solution \((x_n(\cdot ),u_{1,n}(\cdot ),\ldots ,u_{r,n}(\cdot ))\) of the control system (1)–(2), (25). The definition of \(\overline{\varUpsilon }\) implies that \((x_n(\cdot ),u_{1,n}(\cdot ),\ldots ,u_{r,n}(\cdot ))\in {\mathcal {R}}_U\) and

$$\begin{aligned} \Vert v_{i,n}(t)-u_{i,n}(t)\Vert _T\le k_i(t)\Vert x_*(t)-x_n(t)\Vert _X +\frac{1}{in}, \quad i=1,\ldots ,r. \end{aligned}$$
(26)

Since \((x_n(\cdot ),u_{1,n}(\cdot ),\ldots ,u_{r,n}(\cdot ))\in {\mathcal {R}}_U,\,n\ge 1\), and \((x_*(\cdot ),u_{1,*}(\cdot ),\ldots ,u_{r,*}(\cdot ))\in {\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\), we have

$$\begin{aligned} x_*(t)= & {} S_{\alpha }(t)M[x_{0}-h(x_*(t), B_{r}(t)u_{r,*}(t))]\nonumber \\&+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)L^{-1}f(s, x_*(s), B_{1}(s)u_{1,*}(s),\nonumber \\&\ldots , B_{r-1}(s)u_{r-1,*}(s))\mathrm{d}s \end{aligned}$$
(27)

and

$$\begin{aligned} x_n(t)= & {} S_{\alpha }(t)M[x_{0}-h(x_n(t), B_{r}(t)u_{r,n}(t))]\nonumber \\&+\int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)L^{-1}f(s, x_n(s), B_{1}(s)u_{1,n}(s),\nonumber \\&\ldots , B_{r-1}(s)u_{r-1,n}(s))\mathrm{d}s. \end{aligned}$$
(28)

Theorem 4.1 and the inclusions \(\{(x_n(\cdot ),u_{1,n}(\cdot ),\ldots ,u_{r,n}(\cdot ))\}_{n\ge 1}\subseteq {\mathcal {R}}_U\subseteq {\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\) imply that we can assume, possibly up to a subsequence, that \((x_n(\cdot ),u_{1,n}(\cdot ),\ldots ,u_{r,n}(\cdot ))\rightarrow (\overline{x}(\cdot ),\overline{u}_{1} (\cdot ),\ldots ,\overline{u}_{r} (\cdot ))\in {\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\) in \(C(I,X)\times \omega \)-\(L^{1/\beta }(I, T)\). Subtracting (28) from (27), we obtain that

$$\begin{aligned} \begin{aligned}&\Vert x_*(t)-x_n(t)\Vert _X\\&\le \Vert S_{\alpha }(t)M\left[ h(x_*(t), B_{r}(t)u_{r,*}(t))-h(x_*(t), B_{r}(t)v_{r,n}(t))\right] \Vert _X\\&\quad +\,\Vert S_{\alpha }(t)M\left[ h(x_*(t), B_{r}(t)v_{r,n}(t))-h(x_n(t), B_{r}(t)u_{r,n}(t))\right] \Vert _X\\&\quad +\,\biggl \Vert \int _{0}^{t}(t-s)^{\alpha -1} T_{\alpha }(t-s)L^{-1}\left[ f(s, x_*(s), B_{1}(s)u_{1,*}(s),\ldots , B_{r-1}(s)u_{r-1,*}(s))\right. \\&\left. \qquad -\,f(s, x_*(s), B_{1}(s)v_{1,n}(s),\ldots , B_{r-1}(s)v_{r-1,n}(s))\right] \mathrm{d}s\biggr \Vert _X\\&\quad +\,\biggl \Vert \int _{0}^{t}(t-s)^{\alpha -1}T_{\alpha }(t-s)L^{-1}[ f(s, x_*(s), B_{1}(s)v_{1,n}(s),\ldots , B_{r-1}(s)v_{r-1,n}(s))\\&\qquad -\,f(s, x_n(s), B_{1}(s)u_{1,n}(s),\ldots , B_{r-1}(s)u_{r-1,n}(s))]\mathrm{d}s\biggr \Vert _X. \end{aligned} \end{aligned}$$
(29)

We use the previous estimates from our set of sufficient conditions, together with the property of the operator \(\varUpsilon \) defined in the proof of Lemma 3.2. Since \(v_{i,n}\rightarrow u_{i,*},\,i=1,\ldots ,r\), in \(\omega \text {-}L^{\frac{1}{\beta }}(I, T)\) and \(x_n\rightarrow \overline{x}\) in \(C(I, X)\), letting \(n\rightarrow \infty \) in (29) and applying Lemma 2.3 gives \(x_*=\overline{x}\), that is, \(x_n\rightarrow x_*\) in \(C(I, X)\). Hence, from (26), we have \((v_{i,n}-u_{i,n})\rightarrow 0\) in \(L^{\frac{1}{\beta }}(I, T)\). Thus, \(u_{i,n}=(u_{i,n}-v_{i,n})+v_{i,n}\rightarrow u_{i,*}\) in \(\omega \text {-}L^{\frac{1}{\beta }}(I, T)\) and in \(L_{\omega }^{\frac{1}{\beta }}(I, T)\). Hence, (19) and (20) hold. Moreover, we have

$$\begin{aligned} \begin{aligned} \sup _{0\le t_1\le t_2\le a}&\sum \limits _{i=1}^{r}\biggl \vert \int _{t_1}^{t_2}(g_{i}^{**}(s, x_*(s), u_{i,*}(s))-g_{i}(s, x_n(s), u_{i,n}(s)))\mathrm{d}s \biggr \vert \\&\le \sup _{0\le t_1\le t_2\le a}\sum \limits _{i=1}^{r}\biggl \vert \int _{t_1}^{t_2}(g_{i}^{**}(s, x_*(s), u_{i,*}(s))-g_{i}(s, x_*(s), v_{i,n}(s)))\mathrm{d}s \biggr \vert \\&\quad +\sup _{0\le t_1\le t_2\le a}\sum \limits _{i=1}^{r}\biggl \vert \int _{t_1}^{t_2}(g_{i}(s, x_*(s), v_{i,n}(s))-g_{i}(s, x_n(s), u_{i,n}(s)))\mathrm{d}s \biggr \vert \end{aligned} \end{aligned}$$
(30)

while assumption (H4.2) together with (26) gives

$$\begin{aligned}&\vert g_{i}(t, x_*(t), v_{i,n}(t))-g_{i}(t, x_n(t), u_{i,n}(t)) \vert \\&\quad \le (k^{\prime }_{4}(t)+k^{\prime \prime }_{4}k_i(t))\Vert x_*(t) -x_n(t)\Vert _X+\frac{k^{\prime \prime }_{4}}{in}. \end{aligned}$$

Therefore, the last inequality together with (22) and (30) implies that (21) holds. \(\square \)
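Convergences (19) and (20) express the classical relaxation (chattering) phenomenon: a relaxed control with values in \({\text {cl}}\,{\text {conv}}\,U\) is approximated, in the weak norm, by fast-switching controls with values in \(U\). A minimal finite-dimensional sketch (ours, with \(U=\lbrace -1,+1\rbrace \) and relaxed control \(u_*\equiv 0\)) illustrates this:

```python
import numpy as np

# Toy illustration (not the paper's infinite-dimensional system):
# U = {-1, +1}, relaxed control u_* = 0 in cl conv U.
# u_n chatters between +1 and -1 with n full cycles on [0, a];
# it takes values in U only, yet ||u_n - u_*||_w -> 0 in the weak norm
#   ||u||_w = sup_{0 <= t1 <= t2 <= a} | \int_{t1}^{t2} u(s) ds |.

def chattering_control(n, a=1.0, m=20000):
    """Sampled control: +1 on the first half of each of n cycles, -1 on the second."""
    t = np.linspace(0.0, a, m, endpoint=False)
    phase = np.floor(2 * n * t / a).astype(int) % 2
    return np.where(phase == 0, 1.0, -1.0)

def weak_norm(u, a=1.0):
    """Discrete weak norm: with c[k] = \int_0^{t_k} u, one has
    sup_{t1 <= t2} |c[t2] - c[t1]| = max(c) - min(c)."""
    c = np.concatenate([[0.0], np.cumsum(u) * (a / len(u))])
    return float(np.max(c) - np.min(c))

norms = [weak_norm(chattering_control(n)) for n in (1, 4, 16, 64)]
# the weak norms decay roughly like 1/(2n)
```

Here \(\Vert u_n-u_*\Vert _\omega \approx 1/(2n)\rightarrow 0\) although \(\Vert u_n-u_*\Vert _{L^{\infty }}=1\) for every \(n\), which is why the weak norm (6) is the natural topology for the convergence (20).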

Theorem 5.2

Problem (RP) has a solution and

$$\begin{aligned} \min \limits _{(x, u_i)\in {\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}}J^{**}_{i}(x, u_i) =\inf \limits _{(x, u_i)\in {\mathcal {R}}_{U}}J_{i}(x, u_i), \quad i=1,\ldots ,r. \end{aligned}$$
(31)

For any solution \((x_*, u_{1,*},\ldots ,u_{r,*})\) of problem (RP), there exists a minimizing sequence

$$\begin{aligned} \left( x_n, u_{1,n},\ldots ,u_{r,n}\right) \in {\mathcal {R}}_U, \quad n \ge 1, \end{aligned}$$

for problem \((P)\), which converges to \((x_*, u_{1,*},\ldots ,u_{r,*})\) in the spaces \(C(I, X)\times \omega \text {-}L^{\frac{1}{\beta }}(I, T)\) and in \(C(I, X)\times L_{\omega }^{\frac{1}{\beta }}(I, T)\), and the following formula holds:

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\sup _{0\le t_1\le t_2\le a}\sum \limits _{i=1}^{r}\biggl \vert \int _{t_1}^{t_2}(g_{i}^{**}(s, x_*(s), u_{i,*}(s))-g_{i}(s, x_n(s), u_{i,n}(s)))\mathrm{d}s \biggr \vert =0. \end{aligned}$$
(32)

Conversely, if \((x_n, u_{1,n},\ldots ,u_{r,n}),\,n\ge 1\), is a minimizing sequence for problem \((P)\), then there is a subsequence \((x_{n_{k}}, u_{1,n_{k}},\ldots ,u_{r,n_{k}}),\,k\ge 1\), of the sequence \((x_n, u_{1,n},\ldots ,u_{r,n}),\,n\ge 1\), and a solution \((x_*, u_{1,*},\ldots ,u_{r,*})\) of problem (RP) such that the subsequence \((x_{n_{k}}, u_{1,n_{k}},\ldots ,u_{r,n_{k}}),\,k\ge 1\), converges to \((x_*, u_{1,*},\ldots ,u_{r,*})\) in \(C(I, X)\times \omega \text {-}L^{\frac{1}{\beta }}(I, T)\) and relation (32) holds for this subsequence \((x_{n_{k}}, u_{1,n_{k}},\ldots ,u_{r,n_{k}}),\,k\ge 1\).

Proof

By the definition of the functions \(g_{i,U}(t, x, u_i),\,i=1,\ldots ,r\), hypotheses (H3.3) and (H4.3), and the boundedness of the trajectories \({\mathcal {T}}r_{{\text {cl}}\,{\text {conv}}\,U}\) of the control system (1)–(2), (4) (Lemma 3.1), we can find functions \(m_i\in L^{1}(I, {\mathbb {R}}^{+})\) such that

$$\begin{aligned} \begin{aligned}&-m_i(t)=-[a_4(t)+b_4(t)L_0+c_4(a_3(t)+c_3L_0)] \le g_{i,U}(t, x, u_i), ~\text {a.e.}~t\in I,\\&\quad \text {for all}~x\in Q=\lbrace g\in X: \Vert g\Vert _X \le L_0\rbrace , \quad u_i\in U(t, x), \quad i=1,\ldots ,r. \end{aligned} \end{aligned}$$
(33)

Inequality (33) and the properties of the bipolar (see [34]) directly imply that

$$\begin{aligned} -m_i(t)\le g_{i}^{**}(t, x, u_i)\le g_{i,U}(t, x, u_i), \quad \text {a.e.}~t\in I, \quad x\in Q, \quad u_i\in T. \end{aligned}$$
(34)
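As a one-dimensional illustration of the sandwich (34) (not part of the proof), the biconjugate of the double-well integrand \(g(u)=(u^2-1)^2\) is its convex envelope:

```latex
g^{**}(u) \;=\;
\begin{cases}
0, & |u|\le 1,\\[2pt]
(u^2-1)^2, & |u|>1,
\end{cases}
\qquad\text{so that}\qquad 0 \le g^{**}(u) \le g(u)\ \text{for all }u\in\mathbb{R}.
```

Here \(g\) is convex on \(\{|u|\ge 1\}\) and vanishes together with \(g'\) at \(u=\pm 1\), so the envelope glues smoothly; the relaxed integrand agrees with the original one exactly where the latter is already convex.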

Hence, from item (3) of Lemma 3.4, (34), and [48, Theorem 2.1], the functionals \(J^{**}_{i}\), \(i=1,\ldots ,r\), are lower semicontinuous on \({\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\subseteq C(I, X) \times \omega \text {-}L^{\frac{1}{\beta }}(I, T)\). Theorem 4.1 implies that \({\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\) is compact in \(C(I, X)\times \omega \text {-}L^{\frac{1}{\beta }}(I, T)\). Therefore, problem (RP) has a solution \((x_*, u_{1,*},\ldots ,u_{r,*})\). By item (1) of Lemma 3.4, we have

$$\begin{aligned} J^{**}_{i}(x_*, u_{i,*})\le \inf \limits _{(x, u_i) \in {\mathcal {R}}_{U}}J_{i}(x, u_i), \quad i=1,\ldots ,r. \end{aligned}$$
(35)

Now, let \((x_*, u_{1,*},\ldots ,u_{r,*})\) be any solution of problem (RP). By Theorem 5.1, there exists a sequence \((x_n, u_{1,n},\ldots ,u_{r,n})\in {\mathcal {R}}_U\), \(n\ge 1\), such that (19)–(21) hold. Since

$$\begin{aligned} \begin{aligned} \sum \limits _{i=1}^{r}&\biggl \vert \int _{I}(g_{i}^{**}(s, x_*(s), u_{i,*}(s))-g_{i}(s, x_n(s), u_{i,n}(s)))\mathrm{d}s \biggr \vert \\&\le \sup _{0\le t_1\le t_2\le a}\sum \limits _{i=1}^{r}\biggl \vert \int _{t_1}^{t_2}(g_{i}^{**}(s, x_*(s), u_{i,*}(s))-g_{i}(s, x_n(s), u_{i,n}(s)))\mathrm{d}s \biggr \vert , \end{aligned} \end{aligned}$$
(36)

by formulas (21), (35), and (36), we conclude that (31) and (32) hold and that \((x_n(\cdot ), u_{1,n}(\cdot ),\ldots ,u_{r,n}(\cdot )) \in {\mathcal {R}}_U\), \(n\ge 1\), is a minimizing sequence for problem \((P)\). Conversely, let \((x_n(\cdot ), u_{1,n}(\cdot ),\ldots ,u_{r,n}(\cdot ))\in {\mathcal {R}}_U\), \(n\ge 1\), be a minimizing sequence for problem \((P)\). By Theorem 4.1, without loss of generality, we may assume that \((x_n, u_{1,n},\ldots ,u_{r,n}) \rightarrow (x_*, u_{1,*},\ldots ,u_{r,*})\in {\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\) in \(C(I, X)\times \omega \text {-}L^{\frac{1}{\beta }}(I, T)\) and

$$\begin{aligned} \min (RP)=\lim _{n\rightarrow \infty }\sum \limits _{i=1}^{r} \int _{I}g_{i}(s, x_n(s), u_{i,n}(s))\mathrm{d}s. \end{aligned}$$
(37)

It follows from (34) and the properties of the function \(g_{i}^{**}(t, x, u_i)\) that

$$\begin{aligned} \begin{aligned} \int _{I}g^{**}_{i}(s, x_*(s), u_{i,*}(s))\mathrm{d}s&\le \liminf _{n\rightarrow \infty }\int _{I}g_{i}^{**}(s, x_n(s), u_{i,n}(s))\mathrm{d}s\\&\le \lim _{n\rightarrow \infty }\int _{I}g_{i}(s, x_n(s), u_{i,n}(s))\mathrm{d}s. \end{aligned} \end{aligned}$$
(38)
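The first inequality in (38) is an instance of the classical weak lower semicontinuity of integral functionals whose integrands are convex in the control variable; schematically, under the growth and convexity assumptions in force here,

```latex
x_n \to x_* \ \text{in } C(I,X),\quad
u_{i,n} \rightharpoonup u_{i,*} \ \text{in } L^{1}(I,T)
\;\Longrightarrow\;
\int_I g_i^{**}(s, x_*(s), u_{i,*}(s))\,\mathrm{d}s
\;\le\; \liminf_{n\to\infty} \int_I g_i^{**}(s, x_n(s), u_{i,n}(s))\,\mathrm{d}s,
```

while the second inequality follows from the pointwise bound \(g_i^{**}\le g_{i,U}\) in (34).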

From (37) and (38), we obtain that

$$\begin{aligned} \min (RP)=\sum \limits _{i=1}^{r}\int _{I}g_{i}^{**}(s, x_*(s), u_{i,*}(s))\mathrm{d}s =\lim _{n\rightarrow \infty }\sum \limits _{i=1}^{r}\int _{I}g_{i}(s, x_n(s), u_{i,n}(s))\mathrm{d}s. \end{aligned}$$
(39)

Hence, \((x_*(\cdot ), u_{1,*}(\cdot ),\ldots ,u_{r,*}(\cdot )) \in {\mathcal {R}}_{{\text {cl}}\,{\text {conv}}\,U}\) is a solution of problem (RP). Hypotheses (H3.3) and (H4.3) and Lemma 3.1 imply that \(\lbrace g_i(s, x_{n}(s), u_{i,n}(s))\rbrace _{n\ge 1}\) is uniformly integrable. Therefore, by the Dunford–Pettis theorem, there exists a subsequence \(\lbrace g_i(s, x_{n_{k}}(s), u_{i,n_{k}}(s))\rbrace _{k\ge 1}\) of the sequence \(\lbrace g_i(s, x_{n}(s), u_{i,n}(s))\rbrace _{n\ge 1}\) converging to certain functions \(\lambda _i\) in the topology of the space \(\omega \text {-}L^{1}(I, {\mathbb {R}})\). Since \((u_{i,n_{k}}(s), g_i(s, x_{n_{k}}(s), u_{i,n_{k}}(s)))\in F(s, x_{n_{k}}(s))\) a.e. in \(s\in I\), Lemma 3.3 implies that \((u_{i,*}(s), \lambda _i(s))\in {\text {cl}}\,{\text {conv}}\,F(s, x_*(s))\), a.e. \(s\in I\). From Lemma 3.4, we obtain that

$$\begin{aligned} g_{i}^{**}(s, x_*(s), u_{i,*}(s)) \le \lambda _i(s),~ \text {a.e.}~s\in I. \end{aligned}$$
(40)

Hence,

$$\begin{aligned} \begin{aligned} \sum \limits _{i=1}^{r}\int _{0}^{t}g_{i}^{**}(s, x_*(s), u_{i,*}(s))\mathrm{d}s&\le \sum \limits _{i=1}^{r}\int _{0}^{t}\lambda _i(s)\mathrm{d}s\\&=\lim _{k\rightarrow \infty }\sum \limits _{i=1}^{r} \int _{0}^{t}g_{i}(s, x_{n_{k}}(s), u_{i,n_{k}}(s))\mathrm{d}s \end{aligned} \end{aligned}$$
(41)

for any \(t\in I\). From (39)–(41), we obtain that \(g_{i}^{**}(t, x_*(t), u_{i,*}(t))=\lambda _i(t)\), a.e. \(t\in I\). Hence \(g_i(s, x_{n_{k}}(s), u_{i,n_{k}}(s))\rightarrow g^{**}_{i}(s, x_{*}(s), u_{i,*}(s))\) as \(k\rightarrow \infty \) in \(\omega \text {-}L^{1}(I, {\mathbb {R}})\). This implies that

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }\sup _{0\le t_1\le t_2\le a}\sum \limits _{i=1}^{r}\biggl \vert \int _{t_1}^{t_2}(g_{i}^{**}(s, x_*(s), u_{i,*}(s))-g_{i}(s, x_{n_{k}}(s), u_{i,n_{k}}(s)))\mathrm{d}s \biggr \vert =0. \end{aligned}$$

Hence, (32) holds for the subsequence \((x_{n_{k}}, u_{1,n_{k}},\ldots ,u_{r,n_{k}})\), \(k\ge 1\). \(\square \)
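For reference, the weak \(L^{1}\) compactness invoked in the proof is the Dunford–Pettis criterion: on the finite measure space \(I\), a bounded set \(K\subset L^{1}(I, {\mathbb {R}})\) is relatively weakly compact if and only if it is uniformly integrable,

```latex
\lim_{c\to\infty}\; \sup_{f\in K}\; \int_{\{s\in I \,:\, |f(s)|>c\}} |f(s)|\,\mathrm{d}s \;=\; 0.
```

Hypotheses (H3.3) and (H4.3), together with the uniform bound on the trajectories from Lemma 3.1, give exactly this property for \(\lbrace g_i(s, x_{n}(s), u_{i,n}(s))\rbrace _{n\ge 1}\), since the integrands are dominated by a fixed \(L^{1}\) majorant.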

6 Conclusions

We studied optimality and relaxation of multiple control problems described by Sobolev-type nonlinear fractional differential equations with nonlocal control conditions in Banach spaces. The optimization problems were defined by multi-integral functionals with integrands that are nonconvex in the controls, subject to control systems with mixed nonconvex constraints on the controls. We established sufficient conditions ensuring the existence of optimal solutions to the relaxed problems. Moreover, we showed, in suitable topologies, that these optimal solutions are limits of minimizing sequences of the original systems with respect to the trajectory, the multicontrols, and the functional.