1 Introduction

The sweeping process is a first-order differential inclusion involving the normal cone to a moving set depending on time. Roughly speaking, a point is swept by a moving closed set. This differential inclusion was introduced and thoroughly studied by Moreau in a series of papers (see [1,2,3,4,5]) to model elasto-plastic mechanical systems. Since then, many other applications of the sweeping process have been given, for instance to electrical circuits [6], crowd motion [7], and hysteresis in elasto-plastic models [8].

The seminal work of Moreau was the starting point of many other developments, such as the state-dependent sweeping process, the second-order sweeping process [9], and the generalized sweeping process [10].

In this paper we are interested in the study of the state-dependent sweeping process, which corresponds to the case where the moving set depends also on the state. This differential inclusion has been motivated by quasivariational inequalities arising, e.g., in the evolution of sandpiles, quasistatic evolution problems with friction, micromechanical damage models for iron materials, among others (see [11] and the references therein).

The investigation of the state-dependent sweeping process was initiated by Chraibi Kaadoud [12], for convex sets in three dimensions, to deal with a mechanical problem with unilateral contact and friction; it was later generalized to a (possibly multivalued) perturbed form in both the convex and nonconvex settings.

In the convex setting, and by using a semi-implicit discretization scheme, Kunze and Monteiro-Marques [13] obtained the existence of solutions when the sets have a Lipschitz variation. Using an explicit discretization scheme, Haddad and Haddad [14] proved the existence of solutions of a perturbed state-dependent sweeping process with time-independent sets. Later, Bounkhel and Castaing [15] considered the state-dependent sweeping process in uniformly smooth and uniformly convex Banach spaces.

In the nonconvex case, and by using Schauder’s fixed point theorem, Chemetov and Monteiro-Marques [16] established the existence of solutions of perturbed state-dependent sweeping processes with uniformly prox-regular sets. Using a fixed point argument in ordered spaces, the same authors [17] proved the existence of solutions of the perturbed state-dependent sweeping process. Next, Castaing et al. [18] showed the existence of solutions of the state-dependent sweeping process in the uniformly prox-regular case by using an extended version of Schauder’s theorem and a discretization scheme. Later, Azzam-Laouir et al. [19] and Haddad et al. [20] showed the existence of solutions of the multivalued perturbed state-dependent sweeping process in the finite-dimensional setting with uniformly prox-regular sets. Finally, Noel [21] and Noel and Thibault [22] showed the existence of solutions of multivalued perturbed versions of the state-dependent sweeping process with equi-uniformly subsmooth and uniformly prox-regular sets.

The purpose of this paper is twofold: first, to show the existence of solutions of the state-dependent sweeping process by using the Moreau-Yosida regularization technique, and second, to show the existence of solutions of the state-dependent sweeping process in the bounded variation continuous case by using a reparametrization technique.

The Moreau-Yosida regularization is a rather old approach to deal with differential inclusions. It consists in approximating the given differential inclusion by a penalized one, depending on a parameter, for which the existence of solutions is easier to establish (for example, by using the classical Cauchy–Lipschitz theorem), and then in studying the limit as the parameter goes to zero.

In the case of sweeping processes, the Moreau-Yosida regularization has so far been used only for convex or uniformly prox-regular sets (see [1, 23,24,25,26,27,28] for more details); it has never been used, even in the convex case, to study the state-dependent sweeping process.
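To fix ideas, the following toy computation is a minimal numerical sketch of the penalization idea for a one-dimensional sweeping process driven by the moving interval \(C(t)=[c(t)-r,c(t)+r]\): the penalized dynamics \({\dot{x}}(t)=-\frac{1}{\lambda }\left( x(t)-\hbox {Proj}\,_{C(t)}(x(t))\right) \) is integrated by an explicit Euler scheme, and the distance of the trajectory to \(C(t)\) shrinks as \(\lambda \) decreases. The driver c, the radius r and all numerical parameters are illustrative choices, not data from the paper.

```python
import numpy as np

def proj_interval(x, lo, hi):
    """Euclidean projection of a real number x onto the closed interval [lo, hi]."""
    return min(max(x, lo), hi)

def moreau_yosida_sweeping(lam, T=1.0, n_steps=2000, r=0.5):
    """Explicit Euler scheme for the penalized dynamics
        x'(t) = -(1/lam) * (x(t) - proj_{C(t)}(x(t))),
    a Moreau-Yosida-type regularization of -x'(t) in N(C(t); x(t)),
    with the moving interval C(t) = [c(t) - r, c(t) + r]."""
    h = T / n_steps
    c = lambda t: np.sin(3.0 * t)     # illustrative Lipschitz driver of the set
    x = c(0.0) + r                    # start on the boundary of C(0)
    dist_max = 0.0
    for i in range(n_steps):
        t = i * h
        p = proj_interval(x, c(t) - r, c(t) + r)
        dist_max = max(dist_max, abs(x - p))      # d(x(t), C(t))
        x = x - (h / lam) * (x - p)               # one explicit Euler step
    return dist_max

for lam in [0.1, 0.01, 0.001]:
    print(f"lambda = {lam:6.3f}   sup_t d(x(t), C(t)) ~ {moreau_yosida_sweeping(lam):.4f}")
```

This is precisely the mechanism exploited in Sect. 6, where the penalized problem \(({\mathcal {P}}_{\lambda })\) is shown to converge, up to a subsequence, to a solution of the state-dependent sweeping process.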

To deal with the state-dependent sweeping process in the bounded variation continuous case, we use the reparametrization technique from [1, 29,30,31] to reduce the bounded variation continuous case to the Lipschitz one. The application of the reparametrization technique is possible due to the rate-independence property of the sweeping process.

The paper is organized as follows. After some preliminaries in Sect. 2, we collect in Sect. 3 the hypotheses used throughout the paper. In Sect. 4, we establish some technical lemmas used in the sequel of the paper. In Sect. 5, we introduce the notion of a solution for the state-dependent sweeping process with bounded variation. Next, in Sects. 6 and 7, we present the main results of this paper (Theorems 6.1 and 6.2), namely the convergence (up to a subsequence) of the Moreau-Yosida regularization for the state-dependent sweeping process and the existence of solutions for the bounded variation continuous state-dependent sweeping process. Finally, in Sect. 8, we apply our results to the well-posedness of the Play operator with positively \(\alpha \)-far sets for bounded variation continuous inputs.

2 Notation and Preliminaries

From now on H stands for a separable Hilbert space, whose norm is denoted by \(\Vert \cdot \Vert \). The closed ball centered at x with radius \(\rho \) is denoted by \({\mathbb {B}}(x,\rho )\), and the closed unit ball is denoted by \({\mathbb {B}}\). The notation \(H_w\) denotes H equipped with the weak topology, and \(x_n \rightharpoonup x\) stands for the weak convergence in H of a sequence \((x_n)_n\) to x.

A vector \(h\in H\) belongs to the Clarke tangent cone \(T_C(S;x)\) when, for every sequence \((x_n)_n\) in S converging to x and every sequence of positive numbers \((t_n)_n\) converging to 0, there exists a sequence \((h_n)_n\) in H converging to h such that \(x_n+t_nh_n\in S\) for all \(n\in {\mathbb {N}}\). This cone is closed and convex, and its negative polar is the Clarke normal cone to S at \(x\in S\), that is, \(N\left( S;x\right) =\left\{ v\in H:\left\langle v,h\right\rangle \le 0 \text { for all } h\in T_C(S;x)\right\} \). As usual, \(N(S;x)=\emptyset \) if \(x\notin S\). Through the Clarke normal cone, the Clarke subdifferential of a function \(f:H\rightarrow {\mathbb {R}}\cup \{+\infty \}\) is defined as

$$\begin{aligned} \partial f(x):=\left\{ v\in H:(v,-1)\in N\left( \hbox {epi}\,f;(x,f(x))\right) \right\} , \end{aligned}$$

where \(\hbox {epi}\,f:=\left\{ (y,r)\in H\times {\mathbb {R}}:f(y)\le r\right\} \) is the epigraph of f. When the function f is finite and locally Lipschitzian around x, the Clarke subdifferential is characterized (see [32, Proposition 2.1.5]) in the following way

$$\begin{aligned} \partial f(x)=\left\{ v\in H:\left\langle v,h\right\rangle \le f^{\circ }(x;h) \text { for all } h\in H\right\} , \end{aligned}$$

where

$$\begin{aligned} f^{\circ }(x;h):=\limsup _{(t,y)\rightarrow (0^+,x)}t^{-1}\left[ f(y+th)-f(y)\right] \end{aligned}$$

is the generalized directional derivative of the locally Lipschitzian function f at x in the direction \(h\in H\). The function \(f^{\circ }(x;\cdot )\) is in fact the support function of \(\partial f(x)\), that is, \(f^{\circ }(x;h)=\sup \{\langle v,h\rangle :v\in \partial f(x)\}\). This characterization easily yields that the Clarke subdifferential of any locally Lipschitzian function has the important property of upper semicontinuity from H into \(H_w\).

For \(S\subset H\), the distance function is defined by \(d_{S}(x):=\inf _{y\in S}\Vert x-y\Vert \) for \(x\in H\). We denote by \(\hbox {Proj}\,_{S}(x)\) the (possibly empty) set of points at which this infimum is attained. The equality (see [32, Proposition 2.5.4])

$$\begin{aligned} \begin{aligned} N\left( S;x\right)&=\hbox {cl}\,\left( {{\mathbb {R}}_+\partial d_S(x)}\right)&\text { for } x\in S, \end{aligned} \end{aligned}$$
(1)

gives an expression of the Clarke normal cone in terms of the distance function. As usual, it will be convenient to write \(\partial d(x,S)\) in place of \(\partial d\left( \cdot ,S\right) (x)\).
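To illustrate these objects in a simple situation (an illustration only, not part of the paper's development), consider a closed convex set such as a disc in \({\mathbb {R}}^2\): there the projection is single-valued and, at a point x outside the set, the Clarke subdifferential of the distance function reduces to the unit vector \(\frac{x-\hbox {Proj}\,_S(x)}{d_S(x)}\). The following Python sketch, with an arbitrarily chosen disc and sample point, checks this against finite differences.

```python
import numpy as np

# Closed disc S = {y in R^2 : ||y - center|| <= radius}  (illustrative choice)
center, radius = np.array([1.0, 0.0]), 1.0

def proj_S(x):
    """Projection onto the disc: x itself inside, radial projection otherwise."""
    v = x - center
    n = np.linalg.norm(v)
    return x.copy() if n <= radius else center + radius * v / n

def d_S(x):
    return np.linalg.norm(x - proj_S(x))

x = np.array([3.0, 2.0])                      # a point outside S
g = (x - proj_S(x)) / d_S(x)                  # the unique Clarke subgradient of d_S at x

# finite-difference check of the directional derivative: d_S'(x; h) = <g, h>
for h in [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, 2.0])]:
    eps = 1e-6
    fd = (d_S(x + eps * h) - d_S(x)) / eps
    print(f"direction {h}:  finite difference = {fd:.6f},  <g, h> = {g @ h:.6f}")
print("||g|| =", np.linalg.norm(g))           # equal to 1 outside S
```

In the nonconvex case the projection may be multivalued and \(\partial d_S(x)\) is a larger set; see Lemma 4.2 below.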

Remark 2.1

In the present paper, we will calculate the Clarke subdifferential of the distance function to a moving set. In doing so, the subdifferential is always calculated with respect to the variable involved in the distance function, the set being held fixed. More explicitly, \(\partial d_{C(t,y)}(x)\) means that the subdifferential of the function \(d_{C(t,y)}(\cdot )\) (here \(C(t,y)\) is fixed) is calculated at the point x, i.e., \(\partial \left( d_{C(t,y)}(\cdot )\right) (x)\). In the same way, \(\partial d_{C(t,x)}(x)\) means that the subdifferential of the function \(d_{C(t,x)}(\cdot )\) (here \(C(t,x)\) is a fixed set) is calculated at the point x, i.e., \(\partial \left( d_{C(t,x)}(\cdot )\right) (x)\).

Let \(f:H\rightarrow {\mathbb {R}}\cup \{+\infty \}\) be a lower semicontinuous (lsc) function and \(x\in \hbox {dom}\,f\). An element \(\zeta \) belongs to the proximal subdifferential (see [32, Chapter 1]) \(\partial _P f(x)\) of f at x if there exist two positive numbers \(\sigma \) and \(\eta \) such that

$$\begin{aligned} \begin{aligned} f(y)&\ge f(x)+\left\langle \zeta ,y-x\right\rangle -\sigma \Vert y-x\Vert ^2&\forall y\in B(x;\eta ). \end{aligned} \end{aligned}$$

The limiting proximal subdifferential (see [32, Chapter 1]) is defined by

$$\begin{aligned} \partial _Lf(x):=\left\{ w\text {-}\lim \zeta _i:\zeta _i\in \partial _P f(x_i), x_i\rightarrow x, f(x_i)\rightarrow f(x)\right\} . \end{aligned}$$

When f is locally Lipschitz, the following formula holds:

$$\begin{aligned} \partial f(x)=\hbox {cl}\,{\hbox {conv}\,}\,\partial _Lf(x). \end{aligned}$$

The following lemma will be used in the proof of Lemma 4.3.

Lemma 2.1

Let \(S\subset H\) be a closed set. Then, for \(x\notin S\) and \(s\in \hbox {Proj}\,_S(x)\) we have \( \frac{x-s}{\Vert x-s\Vert }\in \partial _L d_S(s). \)

Proof

According to the definition of the proximal normal cone and [32, Proposition 1.1.3], we have \( x-s\in N\left( S;s\right) . \) Furthermore, by exact penalization  (see [32, Proposition 1.6.3]), for all \(\varepsilon >0\), \( \frac{x-s}{\Vert x-s\Vert +\varepsilon }\in \partial _P d_S(s). \) Then, by taking \(\varepsilon \downarrow 0\), we get the result.\(\square \)

We recall the definition of the class of positively \(\alpha \)-far sets, introduced in [33] and widely studied in [10].

Definition 2.1

Let \(\alpha \in ]0,1]\) and \(\rho \in ]0,+\infty ]\). Let S be a nonempty and closed subset of H with \(S\ne H\). We say that the Clarke subdifferential of the  distance function \(d(\cdot ,S)\) keeps the origin \(\alpha \)-far-off on the open \(\rho \)-tube around S, \(U_{\rho }(S):=\{x\in H:0<d(x,S)<\rho \}\), provided

$$\begin{aligned} 0<\alpha \le \inf _{x\in U_{\rho }(S)}d(0,\partial d(\cdot ,S)(x)). \end{aligned}$$
(2)

Moreover, if E is a given nonempty set, then we say that the family \((S(t))_{t\in E}\) is positively \(\alpha \)-far if every S(t) satisfies (2) with the same \(\alpha \in ]0,1]\) and \(\rho >0\).

Several characterizations of this notion and examples were given in [10]. For example, in the case where the set S is ball compact (see Sect. 3), this notion can be interpreted geometrically in the following manner (see Proposition 3.1 in [10] and Lemma 4.2 in Sect. 4): For all \(u_1,u_2\in \hbox {Proj}\,_S(x)\),

$$\begin{aligned} \langle x-u_1, x-u_2\rangle \ge \alpha ^2 d_S^2(x) \quad \forall x \in U_{\rho }(S). \end{aligned}$$
(3)

In fact, it is shown (see [10]) that condition (3) implies (2). Conversely (see Proposition 3.1 in [10]) relation (2) implies: For all \(u_1,u_2\in \hbox {Proj}\,_S(x)\),

$$\begin{aligned} \langle x-u_1, x-u_2\rangle \ge (2\alpha ^2-1) d_S^2(x) \quad \forall x \in U_{\rho }(S). \end{aligned}$$

The notion of positively \(\alpha \)-far sets strictly includes the notion of uniformly subsmooth sets (see Proposition 2.1 and Remark 2.2) and the notion of uniformly prox-regular sets (see [10]).

Definition 2.2

([34]) For a fixed \(r>0\), the set S is said to be r-uniformly prox-regular if and only if for any \(x\in S\) and \(\zeta \in N_S^P(x)\cap {\mathbb {B}}\) one has \(x=\hbox {proj}\,_S(x+r\zeta )\).

It is known that S is r-uniformly prox-regular if and only if every nonzero proximal vector \(\zeta \in N_S^P(x)\) to S at any point \(x\in S\) can be realized by an r-ball, that is, \(S\cap B\left( x+r\frac{\zeta }{\Vert \zeta \Vert },r\right) =\emptyset \), which is equivalent to

$$\begin{aligned} \begin{aligned} \left\langle \zeta ,y-x\right\rangle&\le \frac{\Vert \zeta \Vert }{2r}\Vert y-x\Vert ^2&\text { for all } y\in S. \end{aligned} \end{aligned}$$

Definition 2.3

Let S be a closed subset of H. We say that S is uniformly subsmooth if, for every \(\varepsilon >0\), there exists \(\delta >0\) such that

$$\begin{aligned} \left\langle x_1^*-x_2^*,x_1-x_2\right\rangle \ge -\varepsilon \Vert x_1-x_2\Vert \end{aligned}$$
(4)

holds for all \(x_1,x_2\in S\) satisfying \(\Vert x_1-x_2\Vert <\delta \) and all \(x_i^*\in N\left( S;x_i\right) \cap {\mathbb {B}}\) for \(i=1,2\). Furthermore, if E is a given nonempty set, then we say that the family \(\left( S(t)\right) _{t\in E}\) is equi-uniformly subsmooth if, for every \(\varepsilon >0\), there exists \(\delta >0\) such that (4) holds for each \(t\in E\), all \(x_1,x_2\in S(t)\) satisfying \(\Vert x_1-x_2\Vert <\delta \) and all \(x_i^*\in N\left( S(t);x_i\right) \cap {\mathbb {B}}\) for \(i=1,2\).

Proposition 2.1

([10]) Assume that S is uniformly subsmooth. Then, for all \(\varepsilon \in ]0,1[\) there exists \(\rho \in ]0,+\infty [\) such that

$$\begin{aligned} \sqrt{1-\varepsilon }\le \inf _{y\in U_{\rho }(S)}d(0,\partial d(y,S)). \end{aligned}$$

Remark 2.2

The class of positively \(\alpha \)-far sets contains strictly that of uniformly subsmooth sets. To see this, consider \(S=\{(x,y)\in {\mathbb {R}}^2:y\ge -|x|\}\). Then (see [33]), S satisfies relation (2) with \(\alpha =\frac{\sqrt{2}}{2}\) on \({\mathbb {R}}^2\setminus S\), while it is easy to see that S is not uniformly subsmooth.
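The value \(\alpha =\frac{\sqrt{2}}{2}\) in this example can be checked numerically. Outside S the projection may contain two points (this happens on the negative part of the vertical axis), and \(d(0,\partial d(\cdot ,S)(x))\) is the distance from the origin to the set \(\frac{x-\hbox {conv}\,\hbox {Proj}\,_S(x)}{d_S(x)}\) (cf. Lemma 4.2 below). The following sketch evaluates this quantity on an arbitrary grid of points outside S; the grid and tolerances are illustrative choices, and the grid minimum only approximates the infimum in (2).

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def projections(p, tol=1e-9):
    """Nearest points of S = {(x, y): y >= -|x|} to a point p outside S.
    The boundary of S is the union of the rays y = -x (x >= 0) and y = x (x <= 0)."""
    cands = []
    s = 0.5 * (p[0] - p[1])          # projection parameter onto the ray {(s, -s): s >= 0}
    cands.append(np.array([max(s, 0.0), -max(s, 0.0)]))
    s = 0.5 * (p[0] + p[1])          # projection parameter onto the ray {(s, s): s <= 0}
    cands.append(np.array([min(s, 0.0), min(s, 0.0)]))
    dists = [np.linalg.norm(p - c) for c in cands]
    dmin = min(dists)
    return [c for c, dist in zip(cands, dists) if dist <= dmin + tol], dmin

def dist_origin_to_segment(a, b):
    """Distance from the origin to the segment [a, b]."""
    ab = b - a
    denom = ab @ ab
    t = 0.0 if denom == 0.0 else np.clip(-(a @ ab) / denom, 0.0, 1.0)
    return np.linalg.norm(a + t * ab)

vals = []
for px in np.linspace(-2.0, 2.0, 81):
    for py in np.linspace(-3.0, -0.05, 60):
        p = np.array([px, py])
        if p[1] >= -abs(p[0]):        # keep only points outside S
            continue
        projs, d = projections(p)
        g = [(p - u) / d for u in projs]   # generators of (p - conv Proj_S(p)) / d_S(p)
        vals.append(dist_origin_to_segment(g[0], g[-1]))

print("min over grid of d(0, dd_S(p)) =", min(vals), " expected >= sqrt(2)/2 =", SQRT2 / 2)
```

The minimum is attained (approximately) at points below the kink, where the two unit vectors \(\frac{x-u_1}{d_S(x)}\) and \(\frac{x-u_2}{d_S(x)}\) are orthogonal, consistently with the inequality \(\langle x-u_1,x-u_2\rangle \ge (2\alpha ^2-1)d_S^2(x)=0\) stated above.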

Given a measure \(\nu \) over \([T_0,T]\), we denote by \(L^1_{\nu }\left( [T_0,T];H\right) \) the space of H-valued \(\nu \)-integrable functions defined over \([T_0,T]\). When \(\nu \) is the Lebesgue measure we simply write \(L^1\left( [T_0,T];H\right) \), and in this case, we write \(L^1_w\left( [T_0,T];H\right) \) to mean the space \(L^1\left( [T_0,T];H\right) \) endowed with the weak topology. Moreover, we say that \(u\in \hbox {AC}\,\left( [T_0,T];H\right) \) if there exist \(f\in L^1\left( [T_0,T];H\right) \) and \(u_0\in H\) such that \(u(t)=u_0+\int _{T_0}^t f(s)\mathrm{d}s\) for all \(t\in [T_0,T]\).

Given a function \(u:[T_0,T]\rightarrow H\) and a subinterval \(J\subset [T_0,T]\), the variation of u on J is defined by

$$\begin{aligned} \hbox {Var}\,(u,J):=\sup \left\{ \sum _{j=1}^m\Vert u(t_{j})-u(t_{j-1})\Vert :m\in {\mathbb {N}}, t_j\in J,\,t_0<\cdots <t_m \right\} . \end{aligned}$$

If \(\hbox {Var}\,(u,[T_0,T])<+\infty \) then we say that u has bounded variation on \([T_0,T]\). The space of functions with bounded variation is denoted by \(\hbox {BV}\,\left( [T_0,T];H\right) \). The set of H-valued continuous functions defined on \([T_0,T]\) is denoted by \(C\left( [T_0,T];H\right) \). For convenience we set

$$\begin{aligned} \hbox {CBV}\,\left( [T_0,T];H\right) :=\hbox {BV}\,\left( [T_0,T];H\right) \cap C\left( [T_0,T];H\right) . \end{aligned}$$

Furthermore, for \(u:[T_0,T]\rightarrow H\) we define \(\hbox {Lip}\,(u):=\sup _{t\ne s}\Vert u(t)-u(s)\Vert /|t-s|\) and \(\hbox {Lip}\,\left( [T_0,T];H\right) :=\left\{ u:[T_0,T]\rightarrow H:\hbox {Lip}\,(u)<+\infty \right\} \). We recall the concept of arc-length \(\ell _u\) (see [35, Section 2.5.16]). For \(u\in \hbox {CBV}\,\left( [T_0,T];H\right) \), let \(\ell _u:[T_0,T]\rightarrow [T_0,T]\) be defined, for \(t\in [T_0,T]\), by

$$\begin{aligned} \ell _u(t):={\left\{ \begin{array}{ll} T_0+\frac{(T-T_0)}{\hbox {Var}\,\left( u,[T_0,T]\right) }\hbox {Var}\,\left( u,[T_0,t]\right) , &{} \text { if } \hbox {Var}\,\left( u,[T_0,T]\right) \ne 0,\\ T_0, &{} \text { if } \hbox {Var}\,\left( u,[T_0,T]\right) = 0. \end{array}\right. } \end{aligned}$$

The following result is the key element of the reparametrization technique used in Sect. 6 (see, for instance, [30, Proposition 2.1]).

Proposition 2.2

For every \(u\in \hbox {CBV}\,\left( [T_0,T];H\right) \) there exists a unique function \(U\in \hbox {Lip}\,\left( [T_0,T];H\right) \) such that \(u=U\circ \ell _u\). Moreover, \(\hbox {Lip}\,(U)\le \frac{\hbox {Var}\,\left( u,[T_0,T]\right) }{(T-T_0)}.\)
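As an illustration (not taken from the paper), the sketch below computes the normalized arc length \(\ell _u\) of the scalar function \(u(t)=\sqrt{t}\) on [0, 1], which belongs to \(\hbox {CBV}\,\left( [0,1];{\mathbb {R}}\right) \) but is not Lipschitz at 0, recovers the reparametrization U on a grid, and checks both the identity \(u=U\circ \ell _u\) and the Lipschitz bound of Proposition 2.2. The grid sizes are arbitrary.

```python
import numpy as np

T0, T = 0.0, 1.0
u = lambda t: np.sqrt(t)                     # CBV on [0,1], not Lipschitz at 0

t = np.linspace(T0, T, 5001)
ut = u(t)
var = np.concatenate(([0.0], np.cumsum(np.abs(np.diff(ut)))))   # Var(u, [T0, t]) on the grid
total_var = var[-1]

# normalized arc length  l_u(t) = T0 + (T - T0) * Var(u, [T0, t]) / Var(u, [T0, T])
ell = T0 + (T - T0) * var / total_var

# recover U on a uniform grid of [T0, T] by interpolating the pairs (l_u(t), u(t)):
s = np.linspace(T0, T, 1001)
U = np.interp(s, ell, ut)

lip_U = np.max(np.abs(np.diff(U)) / np.diff(s))       # grid estimate of Lip(U)
err = np.max(np.abs(ut - np.interp(ell, s, U)))       # grid check of u = U o l_u

print(f"Var(u,[0,1])      ~ {total_var:.4f}   (exact value: 1)")
print(f"Lip(U) (grid)     ~ {lip_U:.4f}   (bound: Var/(T-T0) = {total_var/(T-T0):.4f})")
print(f"max |u - U o l_u| ~ {err:.2e}")
```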

Given a vector measure \(\mu :{\mathcal {B}}([T_0,T])\rightarrow H\), where \({\mathcal {B}}([T_0,T])\) are the Borel sets of \([T_0,T]\), its variation measure \(|\mu | :{\mathcal {B}}([T_0,T])\rightarrow {\mathbb {R}}\) is defined for any Borel set \(A\subset [T_0,T]\) as \( |\mu |(A):=\sup \sum _{n\in {\mathbb {N}}}\Vert \mu (B_n)\Vert , \) where the supremum is taken over all sequences \((B_n)_{n\in {\mathbb {N}}}\) of mutually disjoint Borel subsets of \([T_0,T]\) such that \(A=\bigcup _{n\in {\mathbb {N}}}B_n\). We say that \(\mu \) has bounded variation if \(\left| \mu \right| ([T_0,T])\) is finite (see [36, 37]). Moreover, given \(u\in \hbox {BV}\,\left( [T_0,T];H\right) \) it is known that its distributional derivative \(Du:{\mathcal {B}}([T_0,T])\rightarrow H\) is a measure with bounded variation, i.e., \(|Du|([T_0,T])<\infty \) and \(\displaystyle -\int _{{\mathbb {R}}}\varphi ^{\prime }(t){\bar{u}}(t)dt=\int _{{\mathbb {R}}}\varphi dD{\bar{u}}\) for all \(\varphi \in C_c^1({\mathbb {R}};{\mathbb {R}})\), where \({\bar{u}}:{\mathbb {R}}\rightarrow H\) is defined by

$$\begin{aligned} {\bar{u}}(t):={\left\{ \begin{array}{ll} u(T_0), &{} t<T_0.\\ u(t), &{} t\in [T_0,T],\\ u(T), &{} t>T. \end{array}\right. } \end{aligned}$$

We recall that Du is the differential measure associated with u.

The next proposition is a chain rule for BV functions (see [30, Proposition 2.2] and [36, Lemma 6.4 and Theorem 6.1] for more details).

Proposition 2.3

Let \(I,J\subset {\mathbb {R}}\) be intervals and let \(h:I\rightarrow J\) be nondecreasing and continuous. Then,

  1. (i)

    \(Dh\left( h^{-1}(B)\right) ={\mathcal {L}}^1(B)\) for every \(B\in {\mathcal {B}}\left( h(I)\right) \), where \({\mathcal {L}}^1\) is the Lebesgue measure and \({\mathcal {B}}(h(I))\) are the Borel sets of h(I).

  2. (ii)

    If \(g\in \hbox {Lip}\,\left( J;H\right) \), then \(g\circ h\in \hbox {BV}\,\left( I;H\right) \) and \(D\left( g\circ h\right) =\left( g^{\prime }\circ h\right) Dh\), where \(g^{\prime }\) is any representative of the distributional derivative of g.

Let A be a bounded subset of H. We define the Kuratowski measure of noncompactness of A, \(\alpha (A)\), as

$$\begin{aligned} \alpha (A):=\inf \{d>0:A \text { admits a finite cover by sets of diameter }\le d\}, \end{aligned}$$

and the Hausdorff measure of noncompactness of A, \(\beta (A)\), as

$$\begin{aligned} \beta (A):=\inf \{r>0:A \text { can be covered by finitely many balls of radius } r\}. \end{aligned}$$

The following result gives the main properties of the Kuratowski and Hausdorff measures of noncompactness (see [38, Proposition 9.1 from Section 9.2]).

Proposition 2.4

Let H be a Hilbert space and \(B,B_1,B_2\) be bounded subsets of H. Let \(\gamma \) be the Kuratowski or the Hausdorff measure of noncompactness. Then,

  1. (i)

    \(\gamma (B)=0\) if and only if \(\hbox {cl}\,(B)\) is compact.

  2. (ii)

    \(\gamma (\lambda B)=|\lambda |\gamma (B)\) for every \(\lambda \in {\mathbb {R}}\).

  3. (iii)

    \(\gamma (B_1+B_2)\le \gamma (B_1)+\gamma (B_2)\).

  4. (iv)

    \(B_1\subset B_2\) implies \(\gamma (B_1)\le \gamma (B_2)\).

  5. (v)

    \(\gamma (\hbox {conv}\,B)=\gamma (B)\).

  6. (vi)

    \(\gamma (\hbox {cl}\,(B))=\gamma (B)\).

The following result (see [39, Theorem 2]) is used to prove the existence of solutions for the Moreau-Yosida regularization scheme.

Theorem 2.1

Let \(F:[T_0,T]\times H \rightrightarrows H\) be a set-valued map with nonempty, closed and convex values satisfying:

  1. (i)

    for every \(x\in H\), \(F(\cdot ,x)\) is measurable.

  2. (ii)

    for every \(t\in [T_0,T]\), \(F(t,\cdot )\) is upper semicontinuous from H into \(H_w\).

  3. (iii)

    for a.e. \(t\in [T_0,T]\) and \(A\subset H\) bounded, \(\gamma (F(t,A))\le k(t)\gamma (A)\), for some \(k\in L^1(T_0,T)\) with \(k(t)<+\infty \) for all \(t\in [T_0,T]\), where \(\gamma =\alpha \) or \(\gamma =\beta \) is either the Kuratowski or the Hausdorff measure of noncompactness.

Then, the differential inclusion

$$\begin{aligned} \begin{aligned} {\dot{x}}(t)&\in F(t,x(t))&\text { a.e. } t\in [T_0,T],\\ x(T_0)&=x_0, \end{aligned} \end{aligned}$$

has at least one solution \(x\in \hbox {AC}\,\left( [T_0,T];H\right) \).

The following lemma is a compactness criterion for absolutely continuous functions.

Lemma 2.2

Let \((x_n)_n\) be a sequence of absolutely continuous functions from \([T_0,T]\) into H with \(x_n(T_0)=x_0^n\). Assume that for all \(n\in {\mathbb {N}}\)

$$\begin{aligned} \begin{aligned} \Vert {\dot{x}}_n(t)\Vert&\le \psi (t)&\text { a.e } t\in [T_0,T], \end{aligned} \end{aligned}$$
(5)

where \(\psi \in L^1(T_0,T)\) and that \(x_0^n \rightarrow x_0\) as \(n\rightarrow \infty \). Then, there exists a subsequence \((x_{n_k})_k\) of \((x_n)_n\) and \(x\in \hbox {AC}\,\left( [T_0,T];H\right) \) such that

  1. (i)

    \(x_{n_k}(t)\rightharpoonup x(t)\) in H as \(k\rightarrow +\infty \) for all \(t\in [T_0,T]\).

  2. (ii)

    \(x_{n_k}\rightharpoonup x\) in \(L^1\left( [T_0,T];H\right) \) as \(k\rightarrow +\infty \).

  3. (iii)

    \({\dot{x}}_{n_k}\rightharpoonup {\dot{x}}\) in \(L^1\left( [T_0,T];H\right) \) as \(k\rightarrow +\infty \).

  4. (iv)

    \(\Vert {\dot{x}}(t)\Vert \le \psi (t)\) a.e. \(t\in [T_0,T]\).

Proof

On the one hand, let us consider \(K:=\left\{ {\dot{x}}_n:n\in {\mathbb {N}}\right\} \subset L^1\left( [T_0,T];H\right) \). According to (5), the set K is bounded and uniformly integrable (see [40, Theorem A.2.5]). Thus, as a result of the Dunford-Pettis theorem (see [40, Theorem 2.3.24]), K is relatively compact in \(L^1_w\left( [T_0,T];H\right) \). Therefore, there exists a subsequence \(({\dot{x}}_{n_k})_k\) of \(({\dot{x}}_{n})_n\) converging to some v in \(L^1_w\left( [T_0,T];H\right) \). Now, let \(S:=\left\{ x_{n_k}:k\in {\mathbb {N}}\right\} \subset L^1\left( [T_0,T];H\right) \). Thus, due to (5), for every \(x_{n_k}\in S\) we have

$$\begin{aligned} \begin{aligned} \Vert x_{n_k}(t)\Vert&\le \Vert x_0^{n_k}\Vert +\int _{T_0}^t \psi (s)\mathrm{d}s&t\in [T_0,T], \end{aligned} \end{aligned}$$
(6)

which implies, by virtue of the Dunford-Pettis theorem, that S is compact in \(L^1_w\left( [T_0,T];H\right) \). Consequently, there exists a subsequence \((x_{n_k})_k\) (without relabeling) of \((x_{n_k})_k\) converging to some x in \(L^1_w\left( [T_0,T];H\right) \).

On the other hand, due to (5) and (6), the sequence \((x_{n_k})_k\) is uniformly bounded in \({W}^{1,1}\left( [T_0,T];H\right) \) and in \(L^{\infty }\left( [T_0,T];H\right) \). Therefore, as seen in [24, Theorem 0.2.2.1], there exists a subsequence \((x_{n_k})_k\) (without relabeling) of \((x_{n_k})_k\) and an absolutely continuous function \({\tilde{x}}\) such that \( \Vert \dot{{\tilde{x}}}(t)\Vert \le \psi (t)\) a.e. \(t\in [T_0,T]\) and

$$\begin{aligned} x_{n_k}(t)\rightarrow {\tilde{x}}(t) \text { weakly as } k\rightarrow +\infty \text { for all } t\in [T_0,T]. \end{aligned}$$
(7)

Moreover, by virtue of [40, Proposition 2.3.31], \(x\equiv {\tilde{x}}\), which proves (iv). Now, we prove that \(v={\dot{x}}\). Indeed, let \(w\in H\) and \(t\in [T_0,T]\) be fixed. Then,

$$\begin{aligned} \left\langle x_{n_k}(t)-x_0^{n_k},w\right\rangle =\int _{T_0}^{t}\left\langle {\dot{x}}_{n_k}(s),w\right\rangle \mathrm{d}s=\int _{T_0}^{T}\left\langle {\dot{x}}_{n_k}(s),w\cdot \mathbbm {1}_{[T_0,t]}(s)\right\rangle \mathrm{d}s, \end{aligned}$$
(8)

where

$$\begin{aligned} \mathbbm {1}_{[T_0,t]}(s):={\left\{ \begin{array}{ll} 1, &{} \text { if } s\in [T_0,t],\\ 0, &{} \text { if } s\in ]t,T], \end{array}\right. } \end{aligned}$$

is the characteristic function of \([T_0,t]\), so that \(w\cdot \mathbbm {1}_{[T_0,t]}\) belongs to \(L^{\infty }\left( [T_0,T];H\right) \). Hence, using (7), the weak convergence of \(({\dot{x}}_{n_k})_k\) to v in \(L^1\left( [T_0,T];H\right) \) and passing to the limit in (8), we obtain

$$\begin{aligned} \begin{aligned} \left\langle x(t)-x_0,w\right\rangle&=\int _{T_0}^t \left\langle v(s),w \right\rangle \mathrm{d}s&\text { for all } w\in H, \end{aligned} \end{aligned}$$

which implies that \(x(t)-x_0=\int _{T_0}^t v(s)\mathrm{d}s\) for all \(t\in [T_0,T]\). Hence \(v={\dot{x}}\). Therefore, (i), (ii), (iii) and (iv) hold. \(\square \)

3 Technical Assumptions

In this section, we list the hypotheses used throughout the paper.

Hypotheses on the set-valued map \(C:[T_0,T]\rightrightarrows H\): C is a set-valued map with nonempty and closed values. Moreover, the following hypotheses will be considered in Sect. 7.

  • \(({\mathcal {H}}_1)\) There exists \(v\in \hbox {CBV}\,\left( [T_0,T];{\mathbb {R}}\right) \) such that for \(s,t\in [T_0,T]\) and \(x\in H\)

    $$\begin{aligned} |d(x,C(t))-d(x,C(s))|\le |v(t)-v(s)|. \end{aligned}$$
  • \(({\mathcal {H}}_2)\) There exists \(\kappa \ge 0\) such that for all \(s,t\in [T_0,T]\) and all \(x\in H\)

    $$\begin{aligned} |d(x,C(t))-d(x,C(s))|\le \kappa |t-s|. \end{aligned}$$
  • \(({\mathcal {H}}_3)\) There exist two constants \(\alpha \in ]0,1]\) and \(\rho \in ]0,+\infty ]\) such that

    $$\begin{aligned} \begin{aligned} 0<\alpha&\le \inf _{x\in U_{\rho }\left( C(t)\right) }d\left( 0,\partial d(x,C(t))\right)&\text { a.e. } t\in [T_0,T], \end{aligned} \end{aligned}$$

    where \(U_{\rho }\left( C(t)\right) =\left\{ x\in H :0<d(x,C(t))<\rho \right\} \) for all \(t\in [T_0,T]\).

  • \(({\mathcal {H}}_4)\) For a.e. \(t\in [T_0,T]\) the set C(t) is ball compact, that is, for every \(r>0\) the set \(C(t)\cap r{\mathbb {B}}\) is compact in H.

  • \(({\mathcal {H}}_5)\) For a.e. \(t\in [T_0,T]\) the set C(t) is r-uniformly prox-regular for some \(r>0\).

Hypotheses on the set-valued map \(C:[T_0,T]\times H\rightrightarrows H\): C is a set-valued map with nonempty and closed values. Moreover, we will consider the following conditions:

  • \(({\mathcal {H}}_6)\) There exist \(v\in \hbox {CBV}\,\left( [T_0,T];{\mathbb {R}}\right) \) and \(L\in [0,1[\) such that for \(s,t\in [T_0,T]\) and \(x,y,z\in H\)

    $$\begin{aligned} |d(z,C(t,x))-d(z,C(s,y))|\le |v(t)-v(s)|+L\Vert x-y\Vert . \end{aligned}$$
  • \(({\mathcal {H}}_7)\) There exist \(\kappa \ge 0\) and \(L\in [0,1[\) such that for \(s,t\in [T_0,T]\) and \(x,y,z\in H\)

    $$\begin{aligned} |d(z,C(t,x))-d(z,C(s,y))|\le \kappa |t-s|+L\Vert x-y\Vert . \end{aligned}$$
  • \(({\mathcal {H}}_8)\) There exist constants \(\alpha \in ]0,1]\) and \(\rho \in ]0,+\infty ]\) such that for every \(y\in H\)

    $$\begin{aligned} \begin{aligned} 0<\alpha&\le \inf _{x\in U_{\rho }\left( C(t,y)\right) }d\left( 0,\partial d(\cdot ,C(t,y))(x)\right)&\text { a.e. } t\in [T_0,T], \end{aligned} \end{aligned}$$

    where \(U_{\rho }\left( C(t,y)\right) =\left\{ x\in H :0<d(x,C(t,y))<\rho \right\} \).

  • \(({\mathcal {H}}_9)\) The family \(\{C(t,v):(t,v)\in [T_0,T]\times H\}\) is equi-uniformly subsmooth.

  • \(({\mathcal {H}}_{10})\) There exists \(k\in L^1(T_0,T)\) such that for every \(t\in [T_0,T]\), every \(r>0\) and every bounded set \(A\subset H\),

    $$\begin{aligned} \gamma \left( C(t,A)\cap r{\mathbb {B}}\right) \le k(t)\gamma (A), \end{aligned}$$

    where \(\gamma =\alpha \) or \(\gamma =\beta \) is either the Kuratowski or the Hausdorff measure of noncompactness (see Proposition 2.4) and \(k(t)<1\) for all \(t\in [T_0,T]\).

Remark 3.1

  1. (i)

    Let \(L\in [0,1[\). Under \(({\mathcal {H}}_9)\) for every \(\alpha \in ]\sqrt{L},1]\) there exists \(\rho >0\) such that \(({\mathcal {H}}_8)\) holds. This follows from Proposition 2.1.

  2. (ii)

    It is not difficult to prove that \(({\mathcal {H}}_5)\) implies \(({\mathcal {H}}_3)\) with \(\alpha =1\) and \(\rho =r\).

  3. (iii)

    If \(C(t,x):=C(t)\) for every \((t,x)\in [T_0,T]\times H\), then \(({{\mathcal {H}}_{10}})\) implies \(({\mathcal {H}}_4)\). Indeed, fix \(t\in [T_0,T]\) and \(r>0\). Then, for a fixed \(x\in H\), we have

    $$\begin{aligned} \gamma \left( C(t,\{x\})\cap r{\mathbb {B}}\right) =\gamma \left( C(t)\cap r{\mathbb {B}}\right) \le k(t)\gamma \left( \{x\}\right) =0, \end{aligned}$$

    which implies, since C(t) is closed, that \(C(t)\cap r{\mathbb {B}}\) is compact.

  4. (iv)

    As shown in [13], the condition \(L\in [0,1[\) in \(({\mathcal {H}}_6)\) and \(({\mathcal {H}}_7)\) cannot be removed.

4 Preparatory Lemmas

In this section, we give some preliminary lemmas that will be used in the following sections. They are related to set-valued maps and properties of the distance function.

Since the directional derivative of \(-d(\cdot ,S)\) exists and coincides with the Clarke directional derivative of \(-d(\cdot ,S)\) at every \(x\notin S\) (see [41]), we obtain the following lemma.

Lemma 4.1

Let \(S\subset H\) be a closed set, \(x\notin S\) and \(v\in H\). Then

$$\begin{aligned} \lim _{h\downarrow 0} \frac{d(x+hv,S)-d(x,S)}{h}= \min _{y^*\in \partial d(x,S)} \left\langle y^*, v \right\rangle . \end{aligned}$$

Lemma 4.2

Assume that \(({{\mathcal {H}}_{10}})\) holds. Let \(t\in [T_0,T]\), \(y\in H\) and \(x\notin C(t,y)\). Then,

$$\begin{aligned} \partial d_{C(t,y)}(x)=\frac{x-\hbox {cl}\,{\hbox {conv}\,}\hbox {Proj}\,_{C(t,y)}(x)}{d_{C(t,y)}(x)}. \end{aligned}$$

Proof

Let \(t\in [T_0,T]\) and \(y\in H\) be given. It is not difficult to see that \(({{\mathcal {H}}_{10}})\) implies that \(C(t,y)\) is ball compact. To simplify the rest of the proof, let us denote \(C:=C(t,y)\). According to [42],

$$\begin{aligned} \partial d_C(x)=\frac{x-\partial \varphi _C(x)}{d_C(x)}, \end{aligned}$$

where \(\varphi _C(x):=\sup \limits _{c\in C}\left\{ \langle x,c\rangle -\frac{1}{2}\Vert c\Vert ^2\right\} \) is the Asplund function associated with C. Moreover, due to [43, Proposition 4.5.1] and the ball compactness of C, \(\partial \varphi _C(x)=\hbox {cl}\,{\hbox {conv}\,}\hbox {Proj}\,_C(x)\), which shows the result. \(\square \)
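As a simple illustration of this formula (not part of the proof), take the ball compact, nonconvex set \(S=\{-1,1\}\subset {\mathbb {R}}\) and \(x=0\): then \(d_S(0)=1\), \(\hbox {Proj}\,_S(0)=\{-1,1\}\) and the formula gives \(\partial d_S(0)=\frac{0-[-1,1]}{1}=[-1,1]\), in accordance with the generalized directional derivative \(d_S^{\circ }(0;h)=|h|\). The short sketch below checks the latter by finite differences; the grids and step sizes are arbitrary.

```python
import numpy as np

S = np.array([-1.0, 1.0])                    # ball compact, nonconvex subset of R

def d_S(x):
    return np.min(np.abs(x - S))

x = 0.0
projs = S[np.abs(np.abs(x - S) - d_S(x)) < 1e-12]     # Proj_S(0) = {-1, 1}
print("d_S(0) =", d_S(x), "  Proj_S(0) =", projs)

# Lemma 4.2 gives dd_S(0) = (0 - conv{-1, 1}) / 1 = [-1, 1].  Check it against the
# generalized directional derivative d_S^o(0; h) = sup_{v in [-1,1]} v*h = |h|:
for h in [1.0, -1.0, 0.5]:
    sup_fd = max((d_S(y + t * h) - d_S(y)) / t
                 for y in np.linspace(-0.05, 0.05, 201) for t in [1e-4, 1e-3])
    print(f"h = {h:5}:  d_S^o(0; h) ~ {sup_fd:.4f}   |h| = {abs(h):.4f}")
```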

Lemma 4.3

If \(({\mathcal {H}}_6)\), \(({\mathcal {H}}_9)\) and \(({{\mathcal {H}}_{10}})\) hold then, for all \(t\in [T_0,T]\), the set-valued map \(x\rightrightarrows \partial d(\cdot ,C(t,x))(x)\) is upper semicontinuous from H into \(H_w\).

Proof

Fix \(t\in [T_0,T]\) and \(x\in H\).

  1. I)

    Assume that \(x\in C(t,x)\): Due to [44, Theorem 17.35], it is enough to prove that \(x\rightrightarrows \partial _L d(\cdot ,C(t,x))(x)\) is sequentially upper semicontinuous from H into \(H_w\) at x.

    Let \(x_n\rightarrow x\) and \(x_n^* \rightharpoonup x^*\) with \(x_n^*\in \partial _L d_{C(t,x_n)}(x_n)\). We have to prove that \(x^*\in \partial _L d_{C(t,x)}(x)\). Indeed, for every \(n\in {\mathbb {N}}\) such that \(x_n\notin C(t,x_n)\), Lemma 2.1 yields

    $$\begin{aligned} x_n^*=\frac{x_n-y_n}{d_{C(t,x_n)}(x_n)}\in \partial _L d_{C(t,x_n)}(y_n), \end{aligned}$$
    (9)

    for some \(y_n\in \hbox {Proj}\,_{C(t,x_n)}(x_n)\). Then, for each \(n\in {\mathbb {N}}\), we define

    $$\begin{aligned} {\hat{x}}_n:= {\left\{ \begin{array}{ll} x_n, &{} \text { if } x_n\in C(t,x_n),\\ y_n, &{} \text { if } x_n\notin C(t,x_n), \end{array}\right. } \end{aligned}$$

    where \(y_n\in H\) is given by (9). Thus, \({\hat{x}}_n\rightarrow x\), \(x_n^* \rightharpoonup x^*\), \({\hat{x}}_n\in C(t,x_n)\) and \(x_n^*\in \partial _L d_{C(t,x_n)}\left( {\hat{x}}_n\right) \). Therefore, using \(({\mathcal {H}}_9)\) and [21, Lemma 2.2.2], we obtain that \(x^*\in \partial d_{C(t,x)}(x)\).

  2. II)

    Assume that \(x\notin C(t,x)\): Due to Lemma 4.2 and [44, Theorem 17.35], it is enough to prove that \(x \rightrightarrows \hbox {Proj}\,_{C(t,x)}(x)\) is sequentially upper semicontinuous from H into \(H_w\) at x. Let \(x_n\rightarrow x\) and \(x_n^* \rightharpoonup x^*\) with \(x_n^*\in \hbox {Proj}\,_{C(t,x_n)}(x_n)\); we have to prove that \(x^*\in \hbox {Proj}\,_{C(t,x)}(x)\). Due to \(({{\mathcal {H}}_{10}})\), \((x_n^*)_n\) is relatively compact and, thus, \(x_n^*\rightarrow x^*\) strongly up to a subsequence. Moreover,

    $$\begin{aligned} \begin{aligned} \Vert x-x^*\Vert&\le \Vert x-x_n\Vert +d_{C(t,x_n)}(x_n)+\Vert x_n^*-x^*\Vert \\&\le (1+L)\Vert x-x_n\Vert +d_{C(t,x)}(x_n)+\Vert x_n^*-x^*\Vert , \end{aligned} \end{aligned}$$

    which, letting \(n\rightarrow +\infty \), shows that \(\Vert x-x^*\Vert \le d_{C(t,x)}(x)\). Furthermore,

    $$\begin{aligned} d_{C(t,x)}(x^*)=d_{C(t,x)}(x^*)-d_{C(t,x_n)}(x^*_n)\le L\Vert x-x_n\Vert +\Vert x^*-x_n^*\Vert , \end{aligned}$$

    which, letting \(n\rightarrow +\infty \), shows that \(x^*\in C(t,x)\) and, therefore, \(x^*\in \hbox {Proj}\,_{C(t,x)}(x)\). \(\square \)

The next lemma gives some properties and estimates of the distance function to a moving set depending on the state.

Lemma 4.4

Let \(x, y\in \hbox {AC}\,\left( [T_0,T]; H\right) \) and let \(C:[T_0,T]\times H \rightrightarrows H\) be a set-valued map with nonempty and closed values satisfying \(({\mathcal {H}}_7)\). Then,

  1. (i)

    The function \(t \rightarrow d(x(t),C(t,y(t)))\) is absolutely continuous over \([T_0,T]\).

  2. (ii)

    For all \(t\in ]T_0,T[\), where \({\dot{y}}(t)\) exists,

    $$\begin{aligned} \begin{aligned}&\limsup _{s\downarrow 0} \frac{d_{C(t+s,y(t+s))}(x(t+s))-d_{C(t,y(t))}(x(t))}{s}\\&\le \kappa +L\Vert {\dot{y}}(t)\Vert + \limsup _{s\downarrow 0} \frac{d_{C(t,y(t))}(x(t+s))-d_{C(t,y(t))}(x(t))}{s}. \end{aligned} \end{aligned}$$
  3. (iii)

    For all \(t\in ]T_0,T[\), where \({\dot{x}}(t)\) exists,

    $$\begin{aligned} \limsup _{s\downarrow 0} \frac{d_{C(t,y(t))}(x(t+s))-d_{C(t,y(t))}(x(t))}{s}\le \max _{y^*\in \partial d(x(t),C(t,y(t)))} \left\langle y^*, {\dot{x}}(t)\right\rangle . \end{aligned}$$
  4. (iv)

    For all \(t\in \{ t\in ]T_0,T[:x(t)\notin C(t,y(t))\}\), where \({\dot{x}}(t)\) exists,

    $$\begin{aligned} \lim _{s\downarrow 0} \frac{d_{C(t,y(t))}(x(t+s))-d_{C(t,y(t))}(x(t))}{s}= \min _{y^*\in \partial d(x(t),C(t,y(t)))} \left\langle y^*, {\dot{x}}(t)\right\rangle . \end{aligned}$$
  5. (v)

    For every \(x\in H\) the set-valued map \(t\rightrightarrows \partial d(\cdot ,C(t,y(t)))(x)\) is measurable.

Proof

Let \(\psi :[T_0,T]\rightarrow {\mathbb {R}}\) be the function defined by \(\psi (t):=d(x(t),C(t,y(t)))\).

  1. (i)

    It follows directly from \(({\mathcal {H}}_7)\).

  2. (ii)

    Let \(t\in ]T_0,T[\) where \({\dot{y}}(t)\) exists. Then, for \(s>0\) small enough,

    $$\begin{aligned} \frac{\psi (t+s)-\psi (t)}{s}&= \frac{d(x(t+s),C(t+s,y(t+s)))-d(x(t+s),C(t,y(t)))}{s}\\&\quad +\frac{d(x(t+s),C(t,y(t)))-d(x(t),C(t,y(t)))}{s}\\&\le \kappa + L\frac{\Vert y(t+s)-y(t)\Vert }{s}+\frac{d(x(t+s),C(t,y(t)))-d(x(t),C(t,y(t)))}{s}, \end{aligned}$$

    and taking the superior limit, we get the desired inequality.

  3. (iii)

    Let \(t\in ]T_0,T[\) be such that \({\dot{x}}(t)\) exists. Let \(s_n \downarrow 0\) be such that

    $$\begin{aligned} \begin{aligned} \limsup _{s\downarrow 0}&\frac{d(x(t+s),C(t,y(t)))-d(x(t),C(t,y(t)))}{s}\\&= \lim _{n\rightarrow +\infty } \frac{d(x(t+s_n),C(t,y(t)))-d(x(t),C(t,y(t)))}{s_n}. \end{aligned} \end{aligned}$$

    By virtue of Lebourg’s mean value theorem [32, Theorem 2.2.4], there exist \(z_n\in ]x(t),x(t+s_n)[\) and \(\xi _n \in \partial d(z_n,C(t,y(t)))\) such that

    $$\begin{aligned} d(x(t+s_n),C(t,y(t)))-d(x(t),C(t,y(t)))=\left\langle \xi _n,x(t+s_n)-x(t)\right\rangle . \end{aligned}$$

    Since \(\Vert \xi _n\Vert \le 1\), there is a subsequence (without relabeling) of \((\xi _n)_n\) such that \(\xi _n \rightharpoonup \xi \), and \(\xi \in \partial d(x(t),C(t,y(t)))\) thanks to the upper semicontinuity of the Clarke subdifferential of the distance function from H into \(H_w\) and the convergence \(z_n\rightarrow x(t)\). Thus, taking the limit in the last equality, we obtain the result.

  4. (iv)

    Let \(t\in \{t\in ]T_0,T[ :x(t)\notin C(t,y(t))\}\) where \({\dot{x}}(t)\) exists. Then, for \(s>0\) small enough,

    $$\begin{aligned} \begin{aligned}&\frac{1}{s}\left( d(x(t+s),C(t,y(t)))-d(x(t),C(t,y(t)))\right) \\&=\frac{1}{s}\left( d(x(t)+s{\dot{x}}(t)+s\varepsilon (s,t),C(t,y(t)))-d(x(t),C(t,y(t)))\right) \\&=\frac{1}{s}\left( d(x(t)+s{\dot{x}}(t),C(t,y(t)))-d(x(t),C(t,y(t)))\right) +\eta (s,t), \end{aligned} \end{aligned}$$

    for some mappings \(\varepsilon (\cdot ,t)\) and \(\eta (\cdot ,t)\) with \(\lim _{s\downarrow 0}\varepsilon (s,t)=\lim _{s\downarrow 0}\eta (s,t)=0\). Then, by using Lemma 4.1, we get

    $$\begin{aligned} \begin{aligned}&\lim _{s\downarrow 0}\frac{d(x(t+s),C(t,y(t)))-d(x(t),C(t,y(t)))}{s}\\&=\lim _{s\downarrow 0}\frac{d(x(t)+s{\dot{x}}(t),C(t,y(t)))-d(x(t),C(t,y(t)))}{s}\\&=\min _{y^*\in \partial d(x(t),C(t,y(t)))} \left\langle y^*,{\dot{x}}(t)\right\rangle . \end{aligned} \end{aligned}$$
  5. (v)

    See [10].

\(\square \)

The following result shows that the set-valued map \((t,x)\rightrightarrows \frac{1}{2}\partial d^2_{C(t,x)}(x)\) satisfies the conditions of Theorem 2.1.

Proposition 4.1

Assume that \(({\mathcal {H}}_6)\), \(({\mathcal {H}}_9)\) and \(({{\mathcal {H}}_{10}})\) hold. Then, the set-valued map \(G:[T_0,T]\times H\rightrightarrows H\) defined by \(G(t,x):=\frac{1}{2}\partial d^2_{C(t,x)}(x)\) satisfies:

  1. (i)

    for all \(x\in H\) and all \(t\in [T_0,T]\), \(G(t,x)=x-\hbox {cl}\,{\hbox {conv}\,}\hbox {Proj}\,_{C(t,x)}(x)\).

  2. (ii)

    for every \(x\in H\) the set-valued map \(G(\cdot ,x)\) is measurable.

  3. (iii)

    for every \(t\in [T_0,T]\), \(G(t,\cdot )\) is upper semicontinuous from H into \(H_w\).

  4. (iv)

    for every \(t\in [T_0,T]\) and \(A\subset H\) bounded, \(\gamma \left( G(t,A)\right) \le (1+k(t))\gamma \left( A\right) \), where \(\gamma =\alpha \) or \(\gamma =\beta \) is the Kuratowski or the Hausdorff measure of noncompactness of A and \(k\in L^1(T_0,T)\) is given by \(({{\mathcal {H}}_{10}})\).

  5. (v)

    Let \(x_0\in C(T_0,x_0)\). Then, for all \(t\in [T_0,T]\) and \(x\in H\),

    $$\begin{aligned} \Vert G(t,x)\Vert :=\sup \left\{ \Vert w\Vert :w\in G(t,x)\right\} \le (1+L)\Vert x-x_0\Vert +\left| v(t)-v(T_0)\right| . \end{aligned}$$

Proof

(i), (ii) and (iii) follow, respectively, from Lemma 4.2, (v) of Lemma 4.4 and Lemma 4.3. To prove (iv), let \(A\subset H\) be a bounded set included in the ball \(r{\mathbb {B}}\), for some \(r>0\). Define the set-valued map \(F(t,x):=\hbox {Proj}\,_{C(t,x)}(x)\). Then, for every \(t\in [T_0,T]\), \( \Vert F(t,A)\Vert :=\sup \{\Vert w\Vert :w\in F(t,A)\}\le {\tilde{r}}(t), \) where \({\tilde{r}}(t):=(2+L)r+(1+L)\Vert x_0\Vert +|v(t)-v(T_0)|\). Indeed, if \(z\in F(t,A)\), then there exists \(x\in A\) such that \(z\in \hbox {Proj}\,_{C(t,x)}(x)\). Thus,

$$\begin{aligned} \begin{aligned} \Vert z\Vert&\le d_{C(t,x)}(x)-d_{C(T_0,x_0)}(x_0)+\Vert x\Vert \\&\le (1+L)\Vert x-x_0\Vert +|v(t)-v(T_0)|+\Vert x\Vert \\&\le (2+L)r+(1+L)\Vert x_0\Vert +|v(t)-v(T_0)|={\tilde{r}}(t). \end{aligned} \end{aligned}$$

Therefore, by using \(({{\mathcal {H}}_{10}})\),

$$\begin{aligned} \begin{aligned} \gamma \left( G(t,A)\right)&\le \gamma (A)+ \gamma \left( \hbox {cl}\,{\hbox {conv}\,}F(t,A)\right) \\&= \gamma (A)+\gamma \left( F(t,A)\cap {\tilde{r}}(t){\mathbb {B}}\right) \\&\le \gamma (A)+\gamma \left( C(t,A)\cap {\tilde{r}}(t){\mathbb {B}}\right) \\&\le (1+k(t))\gamma (A). \end{aligned} \end{aligned}$$

To prove (v), define \({\tilde{G}}(t,x):=x-\hbox {Proj}\,_{C(t,x)}(x)\). Then, due to \(({\mathcal {H}}_6)\),

$$\begin{aligned} \Vert {\tilde{G}}(t,x)\Vert =d(x,C(t,x))-d(x_0,C(T_0,x_0))\le (1+L)\Vert x-x_0\Vert +\left| v(t)-v(T_0)\right| . \end{aligned}$$

By passing to the closed and convex hull in the last inequality, we get the result. \(\square \)

When the sets \(C(t,x)\) are independent of x, the subsmoothness assumption in Proposition 4.1 is no longer required. The following result is proved in the same way as Proposition 4.1.

Proposition 4.2

Assume that \(({\mathcal {H}}_1)\) and \(({\mathcal {H}}_4)\) hold. Then, the set-valued map \(G:[T_0,T]\times H \rightrightarrows H\) defined by \(G(t,x):=\frac{1}{2}\partial d^2_{C(t)}(x)\) satisfies:

  1. (i)

    for all \(x\in H\) and all \(t\in [T_0,T]\), \( G(t,x)=x-\hbox {cl}\,{\hbox {conv}\,}\hbox {Proj}\,_{C(t)}(x). \)

  2. (ii)

    for every \(x\in H\) the set-valued map \(G(\cdot ,x)\) is measurable.

  3. (iii)

    for every \(t\in [T_0,T]\), \(G(t,\cdot )\) is upper semicontinuous from H into \(H_w\).

  4. (iv)

    for all \(t\in [T_0,T]\) and all \(A\subset H\) bounded, \( \gamma \left( G(t,A)\right) \le \gamma \left( A\right) , \) where \(\gamma =\alpha \) or \(\gamma =\beta \) is either the Kuratowski or the Hausdorff measure of noncompactness of A.

  5. (v)

    Let \(x_0\in C(T_0)\). Then, for all \(t\in [T_0,T]\) and \(x\in H\),

    $$\begin{aligned} \Vert G(t,x)\Vert :=\sup \left\{ \Vert w\Vert :w\in G(t,x)\right\} \le \Vert x-x_0\Vert +\left| v(t)-v(T_0)\right| . \end{aligned}$$

5 The Concept of Solution

In this section, we define the notion of solution for the state-dependent sweeping process in the sense of differential measures. Throughout this section, we set \(I:=[T_0,T]\). Let \(x:I\rightarrow H\) be a function of bounded variation, and denote by \(\mathrm{d}x\) the differential vector measure associated with x (see [45]). If x is right continuous, then this measure satisfies \(\displaystyle x(t)=x(s)+\int _{]s,t]}\mathrm{d}x\) for all \(s,t\in I\) with \(s\le t\). Conversely, if there exists some mapping \({\hat{x}}\in L^1_{\nu }(I;H)\) such that \(\displaystyle x(t)=x(T_0)+\int _{]T_0,t]}{\hat{x}}\mathrm{d}\nu \) for all \(t\in I\), then x is of bounded variation and right continuous. For the associated differential vector measure \(\mathrm{d}x\), it is known that its variation measure \(|\mathrm{d}x|\) satisfies \(\displaystyle |\mathrm{d}x|(]s,t])=\int _{]s,t]}\Vert {\hat{x}}(\tau )\Vert \mathrm{d}\nu (\tau )\) for all \(s,t\in I\) with \(s\le t\); \(\mathrm{d}x\) is absolutely continuous with respect to \(\nu \) and admits \({\hat{x}}\) as a density relative to \(\nu \), that is, \( \mathrm{d}x={\hat{x}}(\cdot )\mathrm{d}\nu \).

The following definition of a solution in the sense of differential measures is based on [37, Definition 2.1].

Definition 5.1

Let \(C:I\times H\rightrightarrows H\) be a set-valued map with nonempty and closed values. We say that \(x:I\rightarrow H\) is a solution of the state-dependent sweeping process

$$\begin{aligned} \begin{aligned} -\mathrm{d}x&\in N\left( C(t,x(t));x(t)\right) ,\\ x(T_0)&=x_0\in C(T_0,x_0), \end{aligned} \end{aligned}$$
(\(\mathcal {BVSP}\))

in the sense of differential measures, provided there exist \(L\in [0,1[\) and a positive Radon measure \(\mu \) on I satisfying, for all \(s\le t\) in I and \(x,y\in H\),

$$\begin{aligned} \sup _{z\in H} \left| d_{C(s,x)}(z)-d_{C(t,y)}(z)\right| \le \mu (]s,t])+L\Vert x-y\Vert , \end{aligned}$$

and such that the following conditions hold:

  1. (i)

    The mapping \(x(\cdot )\) is of bounded variation on I, right continuous, and satisfies \(x(T_0)=x_0\) and \(x(t)\in C(t,x(t))\) for all \(t\in I\).

  2. (ii)

    There exists a positive Radon measure \(\nu \), absolutely continuously equivalent to \(\mu \) and with respect to which the differential measure \(\mathrm{d}x\) of \(x(\cdot )\) is absolutely continuous with \(\frac{\mathrm{d}x}{\mathrm{d}\nu }(\cdot )\) as an \(L_{\nu }^1(I;H)\)-density and

$$\begin{aligned} \begin{aligned} -\frac{\mathrm{d}x}{\mathrm{d}\nu }(t)&\in N(C(t,x(t));x(t))&\nu \text {-a.e. } t\in [T_0,T]. \end{aligned} \end{aligned}$$

6 Existence Results for the State-Dependent Sweeping Process via Moreau-Yosida Regularization

In this section, we prove the existence of solutions in the sense of differential measures for (\(\mathcal {BVSP}\)). To do that, we prove the existence of Lipschitz solutions of the classical state-dependent sweeping process

$$\begin{aligned} \begin{aligned} -{\dot{x}}(t)&\in N\left( C(t,x(t));x(t)\right)&\text { a.e. } t\in [T_0,T],\\ x(T_0)&=x_0\in C(T_0,x_0). \end{aligned} \end{aligned}$$
(\(\mathcal {SP}\))

Then, by means of a reparametrization technique, we obtain the existence of solutions of (\(\mathcal {BVSP}\)).

Let \(\lambda >0\) and consider the following differential inclusion

$$\begin{aligned} \begin{aligned} -{\dot{x}}(t)&\in \frac{1}{2\lambda }\partial d^2_{C(t,x(t))}\left( x(t)\right)&\text { a.e. } t\in [T_0,T],\\ x(T_0)&=x_0\in C(T_0,x_0). \end{aligned} \end{aligned}$$
(\({\mathcal {P}}_{\lambda }\))

The following proposition follows from Theorem 2.1 and Proposition 4.1.

Proposition 6.1

Assume that \(({\mathcal {H}}_7)\), \(({\mathcal {H}}_9)\) and \(({{\mathcal {H}}_{10}})\) hold. Then, for every \(\lambda >0\) there exists at least one absolutely continuous solution \(x_{\lambda }\) of (\({\mathcal {P}}_{\lambda }\)).

Let us define \(\varphi _{\lambda }(t):=d_{C(t,x_{\lambda }(t))}(x_{\lambda }(t))\) for \(t\in [T_0,T]\).

Remark 6.1

Recall that under \(({\mathcal {H}}_9)\), according to Proposition 2.1, for every \(\alpha \in ]\sqrt{L},1]\) there exists \(\rho >0\) such that \(({\mathcal {H}}_8)\) holds.

Proposition 6.2

Under the hypotheses of Proposition 6.1, if \(\lambda <\frac{(\alpha ^2-L) \rho }{\kappa }\), then

$$\begin{aligned} \begin{aligned} {\dot{\varphi }}_{\lambda }(t)&\le \kappa +\frac{L-\alpha ^2}{\lambda }\varphi _{\lambda }(t)&\text { a.e. } t\in [T_0,T], \end{aligned} \end{aligned}$$
(10)

where \(\alpha \in ]\sqrt{L},1]\) and \(\rho >0\) are given by Remark 6.1. Moreover,

$$\begin{aligned} \begin{aligned} \varphi _{\lambda }(t)&\le \frac{\kappa \lambda }{\alpha ^2-L} \text { for all } t\in [T_0,T]. \end{aligned} \end{aligned}$$
(11)

Proof

According to Proposition 6.1, the function \(x_{\lambda }\) is absolutely continuous. Thus, due to \(({\mathcal {H}}_7)\), for every \(t,s\in [T_0,T]\)

$$\begin{aligned} \left| \varphi _{\lambda }(t)-\varphi _{\lambda }(s)\right| \le (1+L)\Vert x_{\lambda }(t)-x_{\lambda }(s)\Vert +\kappa |t-s|, \end{aligned}$$

which implies the absolute continuity of \(\varphi _{\lambda }\). On the one hand, let \(t\in [T_0,T]\) where \(\varphi _{\lambda }(t)\in ]0,\rho [\) and \({\dot{x}}_{\lambda }(t)\) exists. Then, by using Lemma 4.4, we have

$$\begin{aligned} \begin{aligned} {\dot{\varphi }}_{\lambda }(t)&\le \kappa +L\Vert {\dot{x}}_{\lambda }(t)\Vert +\min _{w\in \partial d_{C(t,x_{\lambda }(t))}(x_{\lambda }(t))}\left\langle w,{\dot{x}}_{\lambda }(t)\right\rangle \\&\le \kappa +\frac{L}{\lambda }\varphi _{\lambda }(t)-\frac{\alpha ^2}{\lambda }\varphi _{\lambda }(t)\\&= \kappa - \frac{\alpha ^2-L}{\lambda }\varphi _{\lambda }(t), \end{aligned} \end{aligned}$$

where we have used \(({\mathcal {H}}_9)\) and Proposition 2.1.

On the other hand, let \(t\in \varphi _{\lambda }^{-1}\left( \{0\}\right) \), where \({\dot{x}}_{\lambda }(t)\) exists. Then, according to (\({\mathcal {P}}_{\lambda }\)), \(\Vert {\dot{x}}_{\lambda }(t)\Vert =0\). Indeed,

$$\begin{aligned} \begin{aligned} \Vert {\dot{x}}_{\lambda }(t)\Vert \le \frac{1}{2\lambda }\sup \{\Vert z\Vert :z\in \partial d_{C(t,x_{\lambda }(t))}^2\left( x_{\lambda }(t)\right) \} \le \frac{\varphi _{\lambda }(t)}{\lambda }=0, \end{aligned} \end{aligned}$$

where we have used the identity \(\partial d_S^2(x)=2d_S(x)\partial d_S(x)\). Then, due to \(({\mathcal {H}}_7)\),

$$\begin{aligned} \begin{aligned} {\dot{\varphi }}_{\lambda }(t)&=\lim _{h\downarrow 0}\frac{1}{h}\left( d_{C(t+h,x_{\lambda }(t+h))}(x_{\lambda }(t+h))-d_{C(t,x_{\lambda }(t))}(x_{\lambda }(t+h))+d_{C(t,x_{\lambda }(t))}(x_{\lambda }(t+h))\right) \\&\le \kappa +L\Vert {\dot{x}}_{\lambda }(t)\Vert +\lim _{h\downarrow 0}\frac{1}{h}d_{C(t,x_{\lambda }(t))}(x_{\lambda }(t+h))\\&\le \kappa +(1+L)\Vert {\dot{x}}_{\lambda }(t)\Vert \\&\le \kappa +\frac{1+L}{\lambda }\varphi _{\lambda }(t)\\&= \kappa -\frac{\alpha ^2-L}{\lambda }\varphi _{\lambda }(t). \end{aligned} \end{aligned}$$

Moreover,

$$\begin{aligned} \varphi _{\lambda }(t)<\rho \text { for all } t\in [T_0,T]. \end{aligned}$$

Otherwise, since \(\varphi _{\lambda }^{-1}\left( ]-\infty ,\rho [\right) \) is open and \(T_0\in \varphi _{\lambda }^{-1}\left( ]-\infty ,\rho [\right) \), there would exist \(t^*\in ]T_0,T]\) such that \([T_0,t^*[\subset \varphi _{\lambda }^{-1}\left( ]-\infty ,\rho [\right) \) and \(\varphi _{\lambda }(t^*)=\rho \). Then,

$$\begin{aligned} \begin{aligned} {\dot{\varphi }}_{\lambda }(t)&\le \kappa -\frac{\alpha ^2-L}{\lambda }\varphi _{\lambda }(t)&\text { a.e. } t\in [T_0,t^*[, \end{aligned} \end{aligned}$$

which, by virtue of Grönwall’s inequality, entails that, for every \(t\in [T_0,t^*[\),

$$\begin{aligned} \begin{aligned} \varphi _{\lambda }(t)\le \frac{\kappa \lambda }{\alpha ^2-L}\left( 1-\exp \left( -\frac{\alpha ^2-L}{\lambda }(t-T_0)\right) \right) \le \frac{\kappa \lambda }{\alpha ^2-L}< \rho , \end{aligned} \end{aligned}$$

which, by continuity, implies that \(\varphi _{\lambda }(t^*)<\rho \), which is not possible.

Thus, we have proved that \(\varphi _{\lambda }\) satisfies (10) and (11). \(\square \)

As a corollary to the last proposition, we obtain that \(x_{\lambda }\) is \(\frac{\kappa }{\alpha ^2-L}\)-Lipschitz.

Corollary 6.1

Under the hypotheses of Proposition 6.1, for every \(\lambda <\frac{(\alpha ^2-L)\rho }{\kappa }\) the function \(x_{\lambda }\) is \(\frac{\kappa }{\alpha ^2-L}\)-Lipschitz.

Proof

Since \(x_{\lambda }\) satisfies \(({\mathcal {P}}_{\lambda })\), we have

$$\begin{aligned} \Vert {\dot{x}}_{\lambda }(t)\Vert \le \frac{1}{2\lambda }\sup \{\Vert z\Vert :z\in \partial d_{C(t,x_{\lambda }(t))}^2\left( x_{\lambda }(t)\right) \}\le \frac{\varphi _{\lambda }(t)}{\lambda }, \end{aligned}$$

where we have used the identity \(\partial d^2_S(x)=2d_S(x)\partial d_S(x)\). Consequently, by using (11), for a.e. \(t\in [T_0,T]\), \( \Vert {\dot{x}}_{\lambda }(t)\Vert \le \frac{\varphi _{\lambda }(t)}{\lambda }\le \frac{\kappa }{\alpha ^2-L}, \) which proves that \(x_{\lambda }\) is \(\frac{\kappa }{\alpha ^2-L}\)-Lipschitz. \(\square \)
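As a sanity check of the estimates above (a toy illustration only, with arbitrarily chosen data), one can integrate \(({\mathcal {P}}_{\lambda })\) numerically for the family of intervals \(C(t,x)=[\kappa t+L\sin x-r,\ \kappa t+L\sin x+r]\subset {\mathbb {R}}\), which satisfies \(({\mathcal {H}}_7)\) and, being convex-valued, is positively \(\alpha \)-far with \(\alpha =1\). The explicit Euler sketch below reports the observed values of \(\sup _t\varphi _{\lambda }(t)\) and \(\sup _t\Vert {\dot{x}}_{\lambda }(t)\Vert \) against the bounds (11) and \(\frac{\kappa }{\alpha ^2-L}\).

```python
import numpy as np

kappa, L, r, alpha = 1.0, 0.5, 0.5, 1.0      # illustrative data; alpha = 1 (convex sets)
center = lambda t, x: kappa * t + L * np.sin(x)     # C(t, x) = [center - r, center + r]

def proj_C(t, x, z):
    c = center(t, x)
    return np.clip(z, c - r, c + r)

def run(lam, T=2.0, n_steps=40000):
    """Explicit Euler for (P_lambda): x' = -(1/lam) (x - proj_{C(t,x)}(x)), x(0) = 0."""
    h, x = T / n_steps, 0.0
    phi_max, speed_max = 0.0, 0.0
    for i in range(n_steps):
        t = i * h
        p = proj_C(t, x, x)
        phi_max = max(phi_max, abs(x - p))           # phi_lambda(t) = d(x(t), C(t, x(t)))
        x_new = x - (h / lam) * (x - p)
        speed_max = max(speed_max, abs(x_new - x) / h)
        x = x_new
    return phi_max, speed_max

for lam in [0.05, 0.01, 0.002]:
    phi_max, speed_max = run(lam)
    print(f"lambda = {lam:5.3f}:  sup phi_lambda ~ {phi_max:.4f}"
          f"  (bound (11): {kappa*lam/(alpha**2 - L):.4f}),"
          f"  sup |x'| ~ {speed_max:.3f}  (bound: {kappa/(alpha**2 - L):.1f})")
```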

Let \((\lambda _n)_n\) be a sequence converging to 0. The next result shows the existence of a subsequence \(\left( \lambda _{n_k}\right) _{k}\) of \((\lambda _n)_n\) such that \(\left( x_{\lambda _{n_k}}\right) _k\) converges (in the sense of Lemma 2.2) to a solution of (\(\mathcal {SP}\)). A similar result was proved by Noel in [21, Theorem 5.2.1] (with a stronger compactness condition on the sets \(C(t,x)\)) by using a very different approach.

Theorem 6.1

Assume that \(({\mathcal {H}}_7)\), \(({\mathcal {H}}_9)\) and \(({{\mathcal {H}}_{10}})\) hold. Then, there exists at least one solution \(x\in \hbox {Lip}\,\left( [T_0,T];H\right) \) of (\(\mathcal {SP}\)). Moreover, \(\Vert {\dot{x}}(t)\Vert \le \frac{\kappa }{\alpha ^2-L}\) for a.e. \(t\in [T_0,T]\).

Proof

According to Proposition 6.2, the sequence \((x_{\lambda _n})_n\) satisfies the hypotheses of Lemma 2.2 with \(\psi (t):=\frac{\kappa }{\alpha ^2-L}\). Therefore, there exists a subsequence \((x_{\lambda _{n_k}})_k\) of \((x_{\lambda _n})_n\) and a function \(x:[T_0,T]\rightarrow H\) satisfying assertions (i)–(iv) of Lemma 2.2. For simplicity, we write \(x_k\) instead of \(x_{\lambda _{n_k}}\) for all \(k\in {\mathbb {N}}\).

Claim 1

\(\left( x_{k}(t)\right) _k\) is relatively compact in H for all \(t\in [T_0,T]\).

Proof of Claim 1

Let \(t\in [T_0,T]\). Let us consider \(y_{k}(t)\in \hbox {Proj}\,_{C(t,x_{k}(t))}\left( x_{k}(t)\right) \). Then, \(\Vert x_{k}(t)-y_{k}(t)\Vert =d_{C(t,x_{k}(t))}\left( x_{k}(t)\right) \). Thus,

$$\begin{aligned} \begin{aligned} \Vert y_{k}(t)\Vert&\le d_{C(t,x_{k}(t))}\left( x_{k}(t)\right) +\Vert x_k(t)\Vert \\&\le \frac{\kappa \lambda _{n_k}}{\alpha ^2-L}+\Vert x_k(t)-x_0\Vert +\Vert x_0\Vert \\&\le {\tilde{r}}(t):=\frac{\kappa }{\alpha ^2-L}\left( \sup _{j\in {\mathbb {N}}}\lambda _{n_j}+(t-T_0)\right) +\Vert x_0\Vert . \end{aligned} \end{aligned}$$

Furthermore, since \((x_k(t)-y_k(t))\) converges to 0,

$$\begin{aligned} \gamma \left( \left\{ x_{k}(t):k\in {\mathbb {N}}\right\} \right) =\gamma \left( \left\{ y_{k}(t):k\in {\mathbb {N}}\right\} \right) . \end{aligned}$$

Therefore, if \(A:=\left\{ x_{k}(t):k\in {\mathbb {N}}\right\} \), then

$$\begin{aligned} \gamma \left( A\right) =\gamma (\left\{ y_{k}(t):k\in {\mathbb {N}}\right\} ) \le \gamma \left( C\left( t,A\right) \cap {\tilde{r}}(t){\mathbb {B}}\right) \le k(t) \gamma \left( A\right) , \end{aligned}$$

where we have used \(({{\mathcal {H}}_{10}})\). Finally, since \(k(t)<1\), we obtain that \(\gamma \left( A\right) =0\), which shows the result. \(\square \)

Claim 2

\(x(t)\in C(t,x(t))\) for all \(t\in [T_0,T]\).

Proof of Claim 2

As a result of Claim 1 and the weak convergence \(x_k(t)\rightharpoonup x(t)\) for all \(t\in [T_0,T]\) (due to (i) of Lemma 2.2), we obtain the strong convergence of \((x_k(t))_k\) to x(t) for all \(t\in [T_0,T]\). Therefore, due to \(({\mathcal {H}}_7)\),

$$\begin{aligned} \begin{aligned} d_{C(t,x(t))}(x(t))&\le \liminf _{k\rightarrow \infty }\left( d_{C(t,x_{k}(t))}\left( x_{k}(t)\right) +(1+L)\Vert x_{k}(t)-x(t)\Vert \right) \\&\le \liminf _{k\rightarrow \infty } \left( \frac{\kappa \lambda _{n_k}}{\alpha ^2-L}+(1+L)\Vert x_{k}(t)-x(t)\Vert \right) =0. \end{aligned} \end{aligned}$$

\(\square \)

Now, we prove that x is a solution of (\(\mathcal {SP}\)). Define

$$\begin{aligned} {\tilde{F}}(t,x):=\hbox {cl}\,{\hbox {conv}\,}\left( \frac{\kappa }{\alpha ^2-L}\partial d_{C(t,x)}(x)\cup \{0\}\right) , \end{aligned}$$

for \((t,x)\in [T_0,T]\times H\). Then, for a.e. \(t\in [T_0,T]\)

$$\begin{aligned} -{\dot{x}}_k(t)\in \frac{1}{2\lambda _{n_k}}\partial d_{C(t,x_{k}(t))}^2\left( x_{k}(t)\right) \subset {\tilde{F}}(t,x_{k}(t)), \end{aligned}$$

where we have used Proposition 6.2.

Claim 3

\({\tilde{F}}\) has closed and convex values and satisfies:

  1. (i)

    for each \(x\in H\), \({\tilde{F}}(\cdot ,x)\) is measurable;

  2. (ii)

    for all \(t\in [T_0,T]\), \({\tilde{F}}(t,\cdot )\) is upper semicontinuous from H into \(H_w\);

  3. (iii)

    if \(x\in C(t,x)\), then \({\tilde{F}}(t,x)=\frac{\kappa }{\alpha ^2-L}\partial d_{C(t,x)}(x)\).

Proof of Claim 3

Define \(G(t,x):=\frac{\kappa }{\alpha ^2-L}\partial d_{C(t,x)}(x)\cup \{0\}\). We note that \(G(\cdot ,x)\) is measurable as the union of two measurable set-valued maps (see [44]). Let us define \(\varGamma (t):={\tilde{F}}(t,x)\). Then, \(\varGamma \) takes weakly compact and convex values. Fixing any \(d\in H\), by virtue of [46, Proposition 2.2.39], it is enough to verify that the support function \(t\mapsto \sigma (d,\varGamma (t)):=\sup \{\left\langle v,d\right\rangle :v\in \varGamma (t)\}\) is measurable. Thus,

$$\begin{aligned} \sigma (d,\varGamma (t)):=\sup \{\left\langle v,d\right\rangle :v\in \varGamma (t)\}=\sup \{\left\langle v,d\right\rangle :v\in G(t,x)\} \end{aligned}$$

is measurable because \(G(\cdot ,x)\) is measurable. Thus (i) holds. Assertion (ii) follows directly from [44, Theorem 17.27 and 17.3]. Finally, if \(x\in C(t,x)\) then \(0\in \partial d_{C(t,x)}(x)\). Hence, using the fact that the subdifferential of a locally Lipschitz function is closed and convex,

$$\begin{aligned} {\tilde{F}}(t,x)=\hbox {cl}\,{\hbox {conv}\,}\left( \frac{\kappa }{\alpha ^2-L}\partial d_{C(t,x)}(x)\right) =\frac{\kappa }{\alpha ^2-L}\partial d_{C(t,x)}(x), \end{aligned}$$

which shows (iii). \(\square \)

In summary, we have

  1. (i)

    for each \(x\in H\), \({\tilde{F}}(\cdot ,x)\) is measurable.

  2. (ii)

    for all \(t\in [T_0,T]\), \({\tilde{F}}(t,\cdot )\) is upper semicontinuous from H into \(H_w\).

  3. (iii)

    \({\dot{x}}_k \rightharpoonup {\dot{x}}\) in \(L^1\left( [T_0,T];H\right) \) as \(k\rightarrow +\infty \).

  4. (iv)

    \({x}_k(t) \rightarrow {x}(t)\) as \(k\rightarrow +\infty \) for all \(t\in [T_0,T]\).

  5. (v)

    \(-{\dot{x}}_k(t)\in {\tilde{F}}\left( t,x_{k}(t)\right) \) for a.e. \(t\in [T_0,T]\).

These conditions and the convergence theorem (see [47, p.60] for more details) imply that x satisfies

$$\begin{aligned} \begin{aligned} -{\dot{x}}(t)&\in {\tilde{F}}(t,x(t))&\text { a.e. } t\in [T_0,T],\\ x(T_0)&=x_0\in C(T_0,x_0), \end{aligned} \end{aligned}$$

which, according to Claim 3, implies that x is a solution of

$$\begin{aligned} \begin{aligned} -{\dot{x}}(t)&\in \frac{\kappa }{\alpha ^2-L}\partial d_{C(t,x(t))}(x(t))&\text { a.e. } t\in [T_0,T],\\ x(T_0)&=x_0\in C(T_0,x_0). \end{aligned} \end{aligned}$$

Therefore, by virtue of (1) and Claim 2, x is a solution of (\(\mathcal {SP}\)). Finally, since \(\Vert {\dot{x}}(t)\Vert \le \frac{\kappa }{\alpha ^2-L}\) for a.e. \(t\in [T_0,T]\), x is \(\frac{\kappa }{\alpha ^2-L}\)-Lipschitz continuous. \(\square \)

Now, from Theorem 6.1 and by means of a reparametrization technique, we will deduce the existence of solutions for (\(\mathcal {BVSP}\)). The following theorem extends all the known existence results for (\(\mathcal {BVSP}\)).

Theorem 6.2

Assume that \(({\mathcal {H}}_6)\), \(({\mathcal {H}}_9)\) and \(({{\mathcal {H}}_{10}})\) hold. Then, there exists at least one solution \(x\in \hbox {CBV}\,\left( [T_0,T];H\right) \) of (\(\mathcal {BVSP}\)). Moreover, this solution satisfies \( \hbox {Var}\,\left( x,[T_0,T]\right) \le \frac{\hbox {Var}\,\left( v,[T_0,T]\right) }{\alpha ^2-L}. \)

Proof

Without any loss of generality, we can assume that the function v from \(({\mathcal {H}}_6)\) is strictly increasing. Indeed, if \(T_0\le t_1\le t_2\le T\), then

$$\begin{aligned} \begin{aligned} |v(t_1)-v(t_2)|&\le \hbox {Var}\,\left( v,[t_1,t_2]\right) \\&=\hbox {Var}\,\left( v,[T_0,t_2]\right) -\hbox {Var}\,\left( v,[T_0,t_1]\right) \\&\le v_{\varepsilon }(t_2)-v_{\varepsilon }(t_1), \end{aligned} \end{aligned}$$

where \(v_{\varepsilon }(t):=\hbox {Var}\,\left( v,[T_0,t]\right) +\varepsilon (t-T_0)\), for \(\varepsilon >0\), is a strictly increasing function, so that \(({\mathcal {H}}_6)\) also holds with \(v\) replaced by \(v_{\varepsilon }\). Accordingly, by Proposition 2.2, there exists \(V\in \hbox {Lip}\,\left( [T_0,T];H\right) \) such that \(v=V\circ \ell _v\) and \( \hbox {Lip}\,(V)\le \frac{\hbox {Var}\,\left( v,[T_0,T]\right) }{T-T_0}\). Moreover, as v is continuous and strictly increasing, the arc-length function \(\ell _v\) is continuous, strictly increasing and satisfies \(\ell _v\left( [T_0,T]\right) =[T_0,T]\). Therefore, \(\ell ^{-1}_v:[T_0,T]\rightarrow [T_0,T]\) is continuous, strictly increasing and of bounded variation.
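For instance, if \(\ell _v\) is the normalized arc length \(\ell _v(t):=T_0+(T-T_0)\frac{\hbox {Var}\,\left( v,[T_0,t]\right) }{\hbox {Var}\,\left( v,[T_0,T]\right) }\) (we assume this normalization here; it is consistent with the identity \(\ell _v\left( [T_0,T]\right) =[T_0,T]\) used above), then the Lipschitz bound for V follows from the estimate, valid for \(T_0\le s\le t\le T\),

$$\begin{aligned} |V(\ell _v(t))-V(\ell _v(s))|=|v(t)-v(s)|\le \hbox {Var}\,\left( v,[s,t]\right) =\frac{\hbox {Var}\,\left( v,[T_0,T]\right) }{T-T_0}\left( \ell _v(t)-\ell _v(s)\right) , \end{aligned}$$

and from the fact that \(\ell _v\) maps \([T_0,T]\) onto itself, which gives \(\hbox {Lip}\,(V)\le \frac{\hbox {Var}\,\left( v,[T_0,T]\right) }{T-T_0}\).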

Let us consider \({\tilde{C}}:[T_0,T]\times H \rightrightarrows H\), defined by \({\tilde{C}}(t,x):=C\left( \ell _{v}^{-1}(t),x\right) \). Then, \({\tilde{C}}\) satisfies \(({\mathcal {H}}_7)\) with \(\kappa =\hbox {Lip}\,(V)\). Indeed, for \(t,s\in [T_0,T]\) and \(x,y,z\in H\), applying \(({\mathcal {H}}_6)\) at the times \(\ell _{v}^{-1}(t)\) and \(\ell _{v}^{-1}(s)\),

$$\begin{aligned} \begin{aligned} \left| d(z,{\tilde{C}}(t,x))-d(z,{\tilde{C}}(s,y))\right|&\le |v\circ \ell _{v}^{-1}(t)-v\circ \ell _{v}^{-1}(s)|+L\Vert x-y\Vert \\&=|V(t)-V(s)|+L\Vert x-y\Vert \\&\le \hbox {Lip}\,(V)|t-s|+L\Vert x-y\Vert . \end{aligned} \end{aligned}$$

Thus, due to Theorem 6.1, there exists at least one solution \(u\in \hbox {Lip}\,\left( [T_0,T];H\right) \) of the differential inclusion:

$$\begin{aligned} \begin{aligned} -{\dot{u}}(t)&\in N({\tilde{C}}(t,u(t));u(t))&\text { a.e. } t\in [T_0,T],\\ u(T_0)&=x_0\in {\tilde{C}}(T_0,x_0), \end{aligned} \end{aligned}$$
(12)

with \(\hbox {Lip}\,(u)\le \frac{\hbox {Lip}\,(V)}{\alpha ^2-L}\). Let us consider the mapping \(x:[T_0,T]\rightarrow H\) defined by \(x(t):=u\circ \ell _v(t)\). Then, x is continuous with bounded variation. Indeed,

$$\begin{aligned} \begin{aligned} \hbox {Var}\,\left( x,[T_0,T]\right)&\le \hbox {Lip}\,(u)\hbox {Var}\,\left( \ell _v,[T_0,T]\right) \\&\le \frac{\hbox {Lip}\,(V)}{\alpha ^2-L} \hbox {Var}\,\left( \ell _v,[T_0,T]\right) \\&\le \frac{\hbox {Var}\,\left( \ell _v,[T_0,T]\right) }{T-T_0}\frac{\hbox {Var}\,\left( v,[T_0,T]\right) }{\alpha ^2-L}\\&\le \frac{\hbox {Var}\,\left( v,[T_0,T]\right) }{\alpha ^2-L}. \end{aligned} \end{aligned}$$

Here we have used that \(\hbox {Var}\,\left( \ell _v,[T_0,T]\right) =\ell _v(T)-\ell _v(T_0)=T-T_0\), since \(\ell _v\) is increasing and maps \([T_0,T]\) onto itself. Moreover, due to Proposition 2.3, \( Dx=D\left( u\circ \ell _v\right) =\left( {\dot{u}}\circ \ell _v\right) D\ell _v \). Let us define \(w:={\dot{u}}\circ \ell _v\) and \(Z:=\{t\in [T_0,T]:-{\dot{u}}(t)\notin N({\tilde{C}}(t,u(t));u(t))\}\).

Then, \({\mathcal {L}}^1\left( Z\right) =0\) because of (12). Moreover,

$$\begin{aligned} \begin{aligned}&D\ell _v\left( \left\{ t\in [T_0,T]:-w(t)\notin N\left( C(t,x(t));x(t)\right) \right\} \right) \\&=D\ell _v\left( \{ t\in [T_0,T]:-{\dot{u}}\left( \ell _v(t)\right) \notin N({\tilde{C}}(\ell _v(t),u(\ell _v(t)));u(\ell _v(t))) \}\right) \\&=D\ell _v\left( \left\{ t\in [T_0,T]:\ell _v(t)\in Z \right\} \right) \\&=D\ell _v\left( \ell _v^{-1}\left( Z\right) \right) \\&={\mathcal {L}}^1\left( Z\right) =0, \end{aligned} \end{aligned}$$

where we have used (i) from Proposition 2.3. Therefore, \(x\in \hbox {CBV}\,\left( [T_0,T];H\right) \) is a solution of (\(\mathcal {BVSP}\)) in the sense of differential measures. \(\square \)
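In other words, writing \(\nu :=D\ell _v\) (a nonnegative Radon measure on \([T_0,T]\)) and using the customary formulation of solutions in the sense of differential measures (a nonnegative measure carrying Dx together with a density satisfying the inclusion almost everywhere with respect to it), the conclusion of the proof reads

$$\begin{aligned} Dx=w\,\nu ,\qquad -w(t)\in N\left( C(t,x(t));x(t)\right) \quad \nu \text {-a.e. } t\in [T_0,T], \end{aligned}$$

together with \(x(t)=u\left( \ell _v(t)\right) \in {\tilde{C}}\left( \ell _v(t),u(\ell _v(t))\right) =C\left( t,x(t)\right) \) for all \(t\in [T_0,T]\), where we use that the solution u of (12) satisfies \(u(s)\in {\tilde{C}}(s,u(s))\) for every \(s\in [T_0,T]\) and that \(\ell _v^{-1}\circ \ell _v=\mathrm{id}_{[T_0,T]}\).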

7 The Case of the Sweeping Process

This section is devoted to the measure differential inclusion:

$$\begin{aligned} \begin{aligned} -\mathrm{d}x&\in N\left( C(t);x(t)\right) ,\\ x(T_0)&=x_0\in C(T_0), \end{aligned} \end{aligned}$$
(13)

and the classical sweeping process:

$$\begin{aligned} \begin{aligned} -{\dot{x}}(t)&\in N\left( C(t);x(t)\right)&\text { a.e. } t\in [T_0,T],\\ x(T_0)&=x_0\in C(T_0). \end{aligned} \end{aligned}$$
(14)

These two differential inclusions can be seen, respectively, as particular cases of (\(\mathcal {BVSP}\)) and (\(\mathcal {SP}\)) when the sets C(t, x) do not depend on the state. We show that Theorems 6.1 and 6.2 remain valid under the weaker hypothesis \(({\mathcal {H}}_3)\) instead of \(({\mathcal {H}}_9)\). A result similar to the following ones was proved by Jourani and Vilches in [10] by using a very different approach.

Theorem 7.1

Assume that \(({\mathcal {H}}_2)\), \(({\mathcal {H}}_3)\) and \(({\mathcal {H}}_4)\) hold. Then, there exists at least one solution \(x\in \hbox {Lip}\,\left( [T_0,T];H\right) \) of (14). Moreover, \(\hbox {Lip}\,(x)\le \frac{\kappa }{\alpha ^2}\).

Proof

Inspecting the proof of Theorem 6.1, we observe that \(({\mathcal {H}}_9)\) was used to obtain \(({\mathcal {H}}_8)\) and the upper semicontinuity of \(\partial d_{C(t,\cdot )}(\cdot )\) from H into \(H_w\) for all \(t\in [T_0,T]\). Since, in the present setting, these two properties hold under \(({\mathcal {H}}_3)\) (see Proposition 4.2), it suffices to adapt the proof of Theorem 6.1 to obtain the result. \(\square \)

The following result is obtained in the same way as in the proof of Theorem 6.2.

Theorem 7.2

Assume that \(({\mathcal {H}}_1)\), \(({\mathcal {H}}_3)\) and \(({\mathcal {H}}_4)\) hold. Then, there exists at least one solution \(x\in \hbox {CBV}\,\left( [T_0,T];H\right) \) of (13). Moreover, this solution satisfies \( \hbox {Var}\,\left( x,[T_0,T]\right) \le \frac{\hbox {Var}\,\left( v,[T_0,T]\right) }{\alpha ^2}. \)

Remark 7.1

When the sets C(t) are convex or r-uniformly prox-regular, it has been proved that the Moreau-Yosida regularization generates a family \((x_{\lambda })_{\lambda }\) which converges uniformly in \(C\left( [T_0,T];H\right) \), as \(\lambda \downarrow 0\), to the unique solution of (14) (see [25,26,27,28] for more details). In particular, the following theorem holds; a small numerical illustration of this convergence is given after Theorem 7.3.

Theorem 7.3

Assume that \(({\mathcal {H}}_1)\) and \(({\mathcal {H}}_5)\) hold. Then, there exists a unique solution \(x\in \hbox {CBV}\,\left( [T_0,T];H\right) \) of (13). Moreover, this solution satisfies

$$\begin{aligned}\hbox {Var}\,\left( x,[T_0,T]\right) \le \hbox {Var}\,\left( v,[T_0,T]\right) . \end{aligned}$$
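To illustrate this convergence numerically, consider the moving interval \(C(t)=[t,t+1]\subset {\mathbb {R}}\) with \(x_0=0.5\), for which the solution of (14) is \(x(t)=\max \{0.5,t\}\). The following minimal Python sketch (purely illustrative: the set, the time step and the penalization parameters are our own choices and are not taken from the references) integrates one standard form of the penalized dynamics, \({\dot{x}}_{\lambda }(t)=-\frac{1}{\lambda }\left( x_{\lambda }(t)-\hbox {proj}_{C(t)}(x_{\lambda }(t))\right) \), by the explicit Euler method and reports the distance between \(x_{\lambda }\) and x as \(\lambda \downarrow 0\).

```python
import numpy as np


def proj_interval(x, a, b):
    """Projection of the real number x onto the interval [a, b]."""
    return min(max(x, a), b)


def moreau_yosida_sweeping(lam, T=2.0, h=1e-4, x0=0.5):
    """Explicit Euler integration of the penalized ODE
    x'(t) = -(1/lam) * (x(t) - proj_{C(t)}(x(t))), with C(t) = [t, t+1],
    one standard Moreau-Yosida penalization of the sweeping process (14)."""
    n = int(T / h)
    ts = np.linspace(0.0, T, n + 1)
    xs = np.empty(n + 1)
    x = x0
    xs[0] = x
    for k in range(n):
        x += -(h / lam) * (x - proj_interval(x, ts[k], ts[k] + 1.0))
        xs[k + 1] = x
    return ts, xs


if __name__ == "__main__":
    ts, _ = moreau_yosida_sweeping(1e-1)
    exact = np.maximum(0.5, ts)  # sweeping solution for C(t) = [t, t+1], x0 = 0.5
    for lam in (1e-1, 1e-2, 1e-3):
        _, xs = moreau_yosida_sweeping(lam)
        err = np.max(np.abs(xs - exact))
        print(f"lambda = {lam:.0e}:  sup |x_lambda - x| = {err:.4f}")
```

In accordance with Remark 7.1, the reported sup-norm error decreases as \(\lambda \downarrow 0\).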

7.1 The Finite-Dimensional Case

When H is a finite-dimensional Hilbert space, Benabdellah [48] and Colombo and Goncharov [49] proved, almost simultaneously, the existence of solutions of the sweeping process (14) under \(({\mathcal {H}}_2)\) alone (see [50] for similar results).

Theorem 7.4

([48, 49]) Assume that \(({\mathcal {H}}_2)\) holds. Then, there exists at least one solution \(x\in \hbox {Lip}\,\left( [T_0,T];H\right) \) of (14). Moreover, \(\hbox {Lip}\,(x)\le \kappa \).

From Theorem 7.4 and the reparametrization technique used in the proof of Theorem 6.2, we can prove the following result, which extends Theorem 7.2 to completely nonregular sets moving with continuous bounded variation.

Theorem 7.5

Assume that \(({\mathcal {H}}_1)\) holds. Then, there exists at least one solution \(x\in \hbox {CBV}\,\left( [T_0,T];H\right) \) of (13). Moreover, \(\hbox {Var}\,\left( x,[T_0,T]\right) \le \hbox {Var}\,\left( v,[T_0,T]\right) \).

Remark 7.2

When the sets C(t) are r-uniformly prox-regular, Theorem 7.5 is well known (see [30, 31, 51] for more details).

8 An Application to Hysteresis

In this section, we study the so-called Play operator, which arises in hysteresis, and we extend the results given in [52, 53] to the class of positively \(\alpha \)-far sets in finite-dimensional Hilbert spaces. Hysteresis occurs in phenomena such as plasticity, ferromagnetism, ferroelectricity, porous media filtration and the behavior of thermostats (see [54] for more details). Several properties of hysteresis can be described in terms of hysteresis operators. One of these hysteresis operators is the so-called Play operator [8, 52]. This operator can be defined as the solution of a differential inclusion associated with a fixed set \(Z\subset H\). The case where the set Z is convex has been thoroughly studied (see, for instance, [36, 52, 55, 56]), whereas the nonconvex case has been considered only in [53] for uniformly prox-regular sets. The use of nonconvex sets is important in applications because, as Gudovich and Quincampoix stated in [53, Remark 3.7], when “the elastic properties change with plastic deformation, then a nonconvex yield surface cannot be excluded from consideration” and “its nonconvexity can be explained physically allowing irregularities, elastic-plastic interaction, and the granular character of the material” (see [53] and the references given there for a deeper discussion on the nonconvexity of the set under consideration and an example of a multidimensional Play operator). In the aforementioned paper [53], the authors construct the Play operator, with Z a uniformly prox-regular set, only for Lipschitz inputs, whereas, by using Theorem 7.2, we can easily define the Play operator, with Z positively \(\alpha \)-far, for continuous BV inputs.

Let \(Z\subset H\) be a positively \(\alpha \)-far set. Let \(y\in \hbox {CBV}\,\left( [T_0,T];H\right) \) and consider the following differential inclusion:

$$\begin{aligned} \begin{aligned} \mathrm{d}u&\in \mathrm{d}y-N\left( Z;u(t)\right) ,\\ u(T_0)&=y(T_0)-x_0, \end{aligned} \end{aligned}$$
(15)

where \(x_0\in y(T_0)-Z\). Then, the function \(x:=y-u\) is a solution of (\(\mathcal {BVSP}\)) with \(C(t):=y(t)-Z\) for all \(t\in [T_0,T]\) if and only if \(u=y-x\) is a solution of (15); a direct verification of this equivalence is given below. Moreover, the sets \(C(t)=y(t)-Z\) are positively \(\alpha \)-far for all \(t\in [T_0,T]\) and, for every \(x\in H\) and \(t,s\in [T_0,T]\),

$$\begin{aligned} \left| d(x,C(t))-d(x,C(s))\right| =|d(y(t)-x,Z)-d(y(s)-x,Z)|\le \Vert y(t)-y(s)\Vert . \end{aligned}$$

Thus, \(({\mathcal {H}}_1)\), \(({\mathcal {H}}_3)\) and \(({\mathcal {H}}_4)\) hold. Therefore, Theorem 7.2 shows that there is at least one solution \(x\in \hbox {CBV}\,\left( [T_0,T];H\right) \) of (13). This allows us to define the hysteresis operator

$$\begin{aligned} P:\hbox {CBV}\,\left( [T_0,T];H\right) \rightrightarrows \hbox {CBV}\,\left( [T_0,T];H\right) , \end{aligned}$$

which to every function y associates the set of solutions of (15). Therefore, the Play operator is well defined for inputs in \(\hbox {CBV}\,\left( [T_0,T];H\right) \), generalizing the results given in [52, 53] to the class of positively \(\alpha \)-far sets.
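The equivalence claimed above between (15) and the sweeping process driven by \(C(t)=y(t)-Z\) can be checked directly from the elementary translation and reflection rules for the normal cone, \(N(a+S;p)=N(S;p-a)\) and \(N(-S;-p)=-N(S;p)\), valid for any closed set \(S\subset H\). Indeed, with \(u=y-x\),

$$\begin{aligned} N\left( C(t);x(t)\right) =N\left( y(t)-Z;x(t)\right) =N\left( -Z;x(t)-y(t)\right) =-N\left( Z;y(t)-x(t)\right) =-N\left( Z;u(t)\right) , \end{aligned}$$

so that \(-\mathrm{d}x\in N\left( C(t);x(t)\right) \) if and only if \(\mathrm{d}u=\mathrm{d}y-\mathrm{d}x\in \mathrm{d}y-N\left( Z;u(t)\right) \), which is (15); the initial conditions correspond to each other through \(u(T_0)=y(T_0)-x_0\).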

Remark 8.1

  1. (i)

    If Z is uniformly prox-regular, then, due to the uniqueness of the solution of (15), the Play operator is single-valued.

  2. (ii)

    Let us consider \(y\in \hbox {CBV}\,\left( [T_0,T];H\right) \) and \(C(t):=y(t)-Z\) for all \(t\in [T_0,T]\), where \(Z\subset H\) is a convex set. Let \(x\in \hbox {CBV}\,\left( [T_0,T];H\right) \) be a solution of (13). Then, \(u:=y-x\) satisfies \(u(t)\in Z \) for all \( t\in [T_0,T]\) and

    $$\begin{aligned} \begin{aligned} \int _{T_0}^{t}\left\langle u(s)-y(s)-z(s),\mathrm{d}y\right\rangle&\ge 0&\forall z\in C\left( [T_0,T];Z\right) \,\,\forall t\in [T_0,T], \end{aligned} \end{aligned}$$

    which corresponds to the classical formulation of the evolution variational inequality associated with the Play operator (see [52]).
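As a purely illustrative complement, the following Python sketch approximates the Play operator in the simplest scalar situation \(Z=[-r,r]\) (a convex characteristic, hence covered by the classical theory) by means of Moreau's catching-up scheme: at each node, the previous state is projected onto \(C(t_k)=y(t_k)-Z=[y(t_k)-r,y(t_k)+r]\). The discretization, the input y and the parameter r below are our own choices and are not taken from the references.

```python
import numpy as np


def play_catching_up(y, r=1.0, x0=None):
    """Catching-up approximation of the sweeping process (13) with
    C(t_k) = y(t_k) - Z and Z = [-r, r]: at each node, project the
    previous state onto the current interval [y_k - r, y_k + r].
    Returns x (approximate sweeping solution) and u = y - x (values in Z)."""
    y = np.asarray(y, dtype=float)
    x = np.empty_like(y)
    x[0] = y[0] if x0 is None else x0  # the initial state must belong to C(t_0)
    for k in range(1, len(y)):
        x[k] = np.clip(x[k - 1], y[k] - r, y[k] + r)
    return x, y - x


if __name__ == "__main__":
    t = np.linspace(0.0, 4.0 * np.pi, 2000)
    y = 2.0 * np.sin(t)  # a continuous input of bounded variation
    x, u = play_catching_up(y, r=1.0)
    assert np.all(np.abs(u) <= 1.0 + 1e-12)  # the output u stays in Z = [-r, r]
    print("Var(x) =", np.sum(np.abs(np.diff(x))))
    print("Var(y) =", np.sum(np.abs(np.diff(y))))
```

At every node the output u takes values in Z and, in accordance with the bound of Theorem 7.5, the (discrete) variation of x does not exceed that of y.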

9 Conclusions

Using Moreau-Yosida regularization, we have established existence results for continuous bounded variation state-dependent sweeping processes with equi-uniformly subsmooth sets. This work improves and extends in different ways recent results in [21, 22]. Moreover, our work shows that Moreau-Yosida regularization can be used even for nonregular sets, which opens the door to possible new developments in nonsmooth analysis. Concerning the state-dependent sweeping process, many issues remain that call for further investigation. For example, it would be interesting to study the bounded variation (not necessarily continuous) state-dependent sweeping process; this would make it possible to define the Play operator for bounded variation inputs. Another research topic is the study of the state-dependent sweeping process with definable moving sets (see [57]). We will pursue these questions in a future paper.