1 Introduction

Given a Hilbert space \(\mathcal {H}\), Moreau’s sweeping process is a first-order differential inclusion involving the normal cone to a family of closed moving sets \((C(t))_{t\in [0,T]}\). In its simplest form, it can be written as

$$\begin{aligned} \begin{array}{ll} \dot{x}(t)\in -N\left( C(t);x(t)\right) &{} \quad \text { a.e. } t\in [0,T],\\ x(0)=x_0\in C(0), \end{array} \end{aligned}$$
(SP)

where \(N(C(t);\cdot )\) denotes an appropriate normal cone to the sets \((C(t))_{t\in [0,T]}\). Since its introduction by J.J. Moreau in [25, 26], the sweeping process has enabled various applications in contact mechanics, electrical circuits, and crowd motion, among others (see, e.g., [1, 9, 24]). Moreover, there is by now a well-consolidated existence theory for moving sets in the broad class of prox-regular sets.

The most prominent (and constructive) method for solving the sweeping process is the so-called catching-up algorithm. Developed by J.J. Moreau in [26] for convex moving sets, it consists in taking a time discretization \(\{t_k^n\}_{k=0}^n\) of the interval [0, T] and defining a piecewise linear and continuous function \(x_n:[0,T]\rightarrow \mathcal {H}\) with nodes

$$\begin{aligned} x_{k+1}^n:= {\text {proj}}_{C(t_{k+1}^n)}(x_k^n) \text { for all } k\in \{0,\ldots ,n-1\}. \end{aligned}$$

Moreover, under general assumptions, it can be proved that the sequence \((x_n)\) converges to the unique solution of (SP) (see, e.g., [8]).
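For illustration only (this sketch is ours, not part of the paper), the catching-up iteration can be run directly whenever the projection has a closed form. The following Python snippet applies the scheme to a translating closed ball in \(\mathbb {R}^2\); the helper names `proj_ball` and `catching_up` and all parameter choices are our own.

```python
import numpy as np

def proj_ball(x, center, r):
    # Exact projection onto the closed ball B[center, r].
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= r else center + (r / dist) * d

def catching_up(x0, T, n, center, radius):
    # Moreau's catching-up scheme: x_{k+1}^n = proj_{C(t_{k+1}^n)}(x_k^n),
    # here for the translating ball C(t) = B[center(t), radius].
    t = np.linspace(0.0, T, n + 1)
    xs = [np.asarray(x0, dtype=float)]
    for k in range(n):
        xs.append(proj_ball(xs[-1], center(t[k + 1]), radius))
    return t, np.array(xs)

# A ball of radius 1 moving right at speed 2: the point rests until the
# left boundary reaches it (at t = 1/2), then it is swept along.
center = lambda t: np.array([2.0 * t, 0.0])
t, xs = catching_up([0.0, 0.0], 1.0, 200, center, 1.0)
```

Each discrete state \(x_{k+1}^n\) lies in \(C(t_{k+1}^n)\) by construction, and the trajectory remains constant until the moving boundary "catches" the point.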

From the numerical point of view, the applicability of the catching-up algorithm rests on the availability of an exact formula for the projection onto the moving sets. For most closed sets, however, the projection cannot be computed exactly, and only numerical approximations are available. Since there are as yet no guarantees on the convergence of the catching-up algorithm with approximate projections, in this paper we develop a theoretical framework for the numerical approximation of solutions of the sweeping process, using a concept of approximate projection that is consistent with the numerical methods used to compute the projection onto a closed set.

Regarding numerical approximations of sweeping processes, we are aware of the paper [33], where the author proposes an implementable numerical method for the particular case of an intersection of complements of convex sets, which is used to study crowd motion. Our approach follows a different path and is based on numerical optimization methods to find an approximate projection in the following sense: given a closed set \(C\subset \mathcal {H}\), \(\varepsilon >0\), and \(x\in \mathcal {H}\), we say that \(\bar{x}\in C\) is an approximate projection of x onto C if

$$\begin{aligned} \Vert x-\bar{x}\Vert ^2<\inf _{y\in C}\Vert x-y\Vert ^2+\varepsilon . \end{aligned}$$

We observe that the set of approximate projections is always nonempty and can be obtained through numerical optimization methods. Hence, in this paper, we study the properties of approximate projections and propose a general numerical method for the sweeping process based on approximate projections. We prove that this algorithm converges in three general cases: (i) prox-regular moving sets (without compactness assumptions), (ii) ball-compact subsmooth moving sets, and (iii) general ball-compact fixed closed sets. Hence, our results cover a wide range of existence results for the sweeping process and provide important insights into the numerical simulation of sweeping processes.
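To make this concrete, the following sketch (ours; the set, the point, and the helper name `approx_proj` are chosen purely for illustration) computes an approximate projection onto the unit circle in \(\mathbb {R}^2\) by discrete minimization. For this set the distance \(d_C(x)=\Vert x\Vert -1\) is known in closed form, so the defining inequality can be verified directly.

```python
import numpy as np

def approx_proj(x, sample_points):
    # Nearest candidate among finitely many points of C: an approximate
    # projection whose eps equals the gap between the discrete minimum
    # and the true infimum of ||x - .||^2 over C.
    d2 = np.sum((sample_points - x) ** 2, axis=1)
    return sample_points[np.argmin(d2)]

# C = unit circle in R^2, sampled finely; here d_C(x) = ||x|| - 1 is
# known exactly, which lets us check the defining inequality.
theta = np.linspace(0.0, 2.0 * np.pi, 10_000, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
x = np.array([3.0, 4.0])              # ||x|| = 5, hence d_C(x) = 4
z = approx_proj(x, circle)

eps = 1e-3
assert np.sum((x - z) ** 2) < 4.0 ** 2 + eps   # z in proj_C^eps(x)
```

Refining the sample drives the achievable \(\varepsilon \) to zero, which is exactly the regime exploited by the algorithm studied below.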

The paper is organized as follows. Section 2 provides the mathematical tools needed for the presentation of the paper and also develops the theoretical properties of approximate projections. Section 3 is devoted to presenting the proposed algorithm and its main properties. Then, in Sect. 4, we prove the convergence of the algorithm when the moving set has uniformly prox-regular values (without compactness assumptions). Next, in Sect. 5, we provide the convergence of the proposed algorithm for ball-compact subsmooth moving sets. Section 6 shows the convergence for a fixed ball-compact set. Finally, Sect. 7 discusses numerical aspects for obtaining approximate projections. The paper ends with concluding remarks.

2 Preliminaries

From now on, \(\mathcal {H}\) stands for a real Hilbert space, whose norm, denoted by \(\Vert \cdot \Vert \), is induced by an inner product \(\langle \cdot ,\cdot \rangle \). The closed (resp. open) ball centered at x with radius \(r>0\) is denoted by \(\mathbb {B}[x,r]\) (resp. \(\mathbb {B}(x,r)\)), and the closed unit ball is denoted by \(\mathbb {B}\). The notation \(\mathcal {H}_w\) stands for \(\mathcal {H}\) equipped with the weak topology, and \(x_n \rightharpoonup x\) denotes the weak convergence of a sequence \((x_n)\) to x. For a given set \(S\subset \mathcal {H}\), the support function and the distance function of S at \(x\in \mathcal {H}\) are defined, respectively, as

$$\begin{aligned} \sigma (x,S):= \sup _{z\in S}\langle x,z\rangle \text { and } d_S(x):=\inf _{z\in S}\Vert x-z\Vert . \end{aligned}$$

Given \(\rho \in ]0,+\infty ]\) and a positive number \(\gamma <1\), the \(\rho \)-enlargement and the \(\gamma \rho \)-enlargement of S are defined, respectively, as

$$\begin{aligned} U_\rho (S):= \{x\in \mathcal {H}:d_S(x)<\rho \} \text { and } U_\rho ^\gamma (S):=\{x\in \mathcal {H}:d_S(x)<\gamma \rho \}. \end{aligned}$$

Given two sets \(A,B\subset \mathcal {H}\), we define the excess of A over B as the quantity \(e(A,B):= \sup _{x\in A} d_B(x)\). From this, we define the Hausdorff distance between A and B as

$$\begin{aligned} d_H(A,B):= \max \{e(A,B),e(B,A)\}. \end{aligned}$$

Further properties of the Hausdorff distance can be found in [3, Sec. 3.16].
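Since the excess is not symmetric, both one-sided quantities are needed. The following small sketch (ours, for finite point sets only) makes the asymmetry explicit:

```python
import numpy as np

def excess(A, B):
    # e(A, B) = sup_{x in A} d_B(x), here for finite point sets in R^d.
    return max(min(np.linalg.norm(a - b) for b in B) for a in A)

def hausdorff(A, B):
    # d_H(A, B) = max{e(A, B), e(B, A)}; note that e is not symmetric.
    return max(excess(A, B), excess(B, A))

A = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
B = [np.array([0.0, 0.0])]
# e(B, A) = 0 since B is contained in A, while e(A, B) = 1.
```

In particular, \(e(B,A)=0\) whenever \(B\subset A\), so the Hausdorff distance is driven by the larger excess.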

A vector \(h\in \mathcal {H}\) belongs to the Clarke tangent cone \(T(S;x)\) (see [10]) when for every sequence \((x_n)\) in S converging to x and every sequence of positive numbers \((t_n)\) converging to 0, there exists a sequence \((h_n)\) in \(\mathcal {H}\) converging to h such that \(x_n+t_nh_n\in S\) for all \(n\in \mathbb {N}\). This cone is closed and convex, and its negative polar N(S;x) is the Clarke normal cone to S at \(x\in S\), that is,

$$\begin{aligned} N\left( S;x\right) :=\left\{ v\in \mathcal {H}: \left\langle v,h\right\rangle \le 0 \quad \text {for all } h\in T(S;x)\right\} . \end{aligned}$$

As usual, \(N(S;x)=\emptyset \) if \(x\notin S\). Through that normal cone, the Clarke subdifferential of a function \(f:\mathcal {H}\rightarrow \mathbb {R}\cup \{+\infty \}\) is defined by

$$\begin{aligned} \partial f(x):=\left\{ v\in \mathcal {H}: (v,-1)\in N\left( {\text {epi}}f;(x,f(x))\right) \right\} , \end{aligned}$$

where \({\text {epi}}f:=\left\{ (y,r)\in \mathcal {H}\times \mathbb {R}: f(y)\le r\right\} \) is the epigraph of f. When the function f is finite and locally Lipschitzian around x, the Clarke subdifferential is characterized (see [11]) in the following simple and amenable way

$$\begin{aligned} \partial f(x)=\left\{ v\in \mathcal {H}: \left\langle v,h\right\rangle \le f^{\circ }(x;h) \text { for all } h\in \mathcal {H}\right\} , \end{aligned}$$

where

$$\begin{aligned} f^{\circ }(x;h):=\limsup _{(t,y)\rightarrow (0^+,x)}t^{-1}\left[ f(y+th)-f(y)\right] , \end{aligned}$$

is the generalized directional derivative of the locally Lipschitzian function f at x in the direction \(h\in \mathcal {H}\). The function \(f^{\circ }(x;\cdot )\) is in fact the support of \(\partial f(x)\), i.e., \(f^{\circ }(x;h) = \sup _{z\in \partial f(x)} \langle h,z\rangle \). That characterization easily yields that the Clarke subdifferential of any locally Lipschitzian function is a set-valued map with nonempty and convex values satisfying the important property of upper semicontinuity from \(\mathcal {H}\) into \(\mathcal {H}_w\).

Let \(f:\mathcal {H}\rightarrow \mathbb {R}\cup \{+\infty \}\) be an lsc (lower semicontinuous) function and \(x\in {\text {dom}}f\). We say that

  1. (i)

An element \(\zeta \in \mathcal {H}\) belongs to the proximal subdifferential of f at x, denoted by \(\partial _P f(x)\), if there exist \(\sigma \ge 0\) and \(\eta >0\) such that

    $$\begin{aligned} f(y)\ge f(x)+\left\langle \zeta ,y-x\right\rangle -\sigma \Vert y-x\Vert ^2 \text { for all } y\in \mathbb {B}(x;\eta ). \end{aligned}$$
  2. (ii)

    An element \(\zeta \in \mathcal {H}\) belongs to the Fréchet subdifferential of f at x, denoted by \(\partial _F f(x)\), if

    $$\begin{aligned} \liminf _{h\rightarrow 0}\frac{f(x+h)-f(x)-\langle \zeta ,h\rangle }{\Vert h\Vert }\ge 0. \end{aligned}$$
  3. (iii)

    An element \(\zeta \) belongs to the limiting subdifferential of f at x, denoted by \(\partial _L f(x)\), if there exist sequences \((\zeta _n)\) and \((x_n)\) such that \(\zeta _n\in \partial _P f(x_n)\) for all \(n\in \mathbb {N}\) and \(x_n\rightarrow x\), \(\zeta _n \rightharpoonup \zeta \), and \(f(x_n)\rightarrow f(x)\).

Through these concepts, we can define the proximal, Fréchet, and limiting normal cone of a given set \(S\subset \mathcal {H}\) at \(x\in S\), respectively, as

$$\begin{aligned} N^P\left( S;x\right) :=\partial _P I_S(x),\, N^F(S;x):= \partial _F I_S(x) \text { and } N^L(S;x):= \partial _LI_S(x), \end{aligned}$$

where \(I_S\) is the indicator function of \(S\subset \mathcal {H}\) (recall that \(I_S(x)=0\) if \(x\in S\) and \(I_S(x)=+\infty \) if \(x\notin S\)). It is well-known that (see [7, Theorem 4.1])

$$\begin{aligned} N^{P}(S;x)\cap \mathbb {B}=\partial _P d_S(x) \quad \text { for all } x\in S. \end{aligned}$$
(1)

The equality (see [11])

$$\begin{aligned} N\left( S;x\right) = \overline{\text {co}}^*N^L(S;x)={\text {cl}}^*\left( \mathbb {R}_+\partial d_S(x)\right) \quad \text { for } x\in S, \end{aligned}$$

gives an expression of the Clarke normal cone in terms of the distance function.

Now, we recall the concept of uniformly prox-regular sets. Introduced by Federer in the finite-dimensional case (see [17]) and later developed by Rockafellar, Poliquin, and Thibault in [30], prox-regularity unifies convex sets and nonconvex bodies with \(C^2\) boundary. We refer to [12, 31] for a survey.

Definition 1

Let S be a closed subset of \(\mathcal {H}\) and \(\rho \in ]0,+\infty ]\). The set S is called \(\rho \)-uniformly prox-regular if for all \(x\in S\) and \(\zeta \in N^P(S;x)\) one has

$$\begin{aligned} \langle \zeta ,x'-x\rangle \le \frac{\Vert \zeta \Vert }{2\rho }\Vert x'-x\Vert ^2 \text { for all } x'\in S. \end{aligned}$$

It is important to emphasize that convex sets are \(\rho \)-uniformly prox-regular for any \(\rho >0\). The following proposition provides a characterization of uniformly prox-regular sets (see, e.g., [12, 27]).

Proposition 1

Let \(S\subset \mathcal {H}\) be a closed set and \(\rho \in ]0,+\infty ]\). The following assertions are equivalent:

  1. (a)

    S is \(\rho \)-uniformly prox-regular.

  2. (b)

    For any positive \(\gamma <1\) the mapping \({\text {proj}}_S\) is well-defined on \(U_\rho ^\gamma (S)\) and Lipschitz continuous on \(U_\rho ^\gamma (S)\) with \((1-\gamma )^{-1}\) as a Lipschitz constant, i.e.,

    $$\begin{aligned} \left\| {\text {proj}}_S\left( u_1\right) -{\text {proj}}_S\left( u_2\right) \right\| \le (1-\gamma )^{-1}\left\| u_1-u_2\right\| \quad \end{aligned}$$

    for all \(u_1, u_2 \in U_\rho ^\gamma (S)\).

  3. (c)

    For any \(x_i \in S, v_i \in N^P\left( S; x_i\right) \), with \(i=1,2\), one has

    $$\begin{aligned} \left\langle v_1-v_2, x_1-x_2\right\rangle \ge -\frac{1}{2\rho }(\Vert v_1\Vert +\Vert v_2\Vert )\left\| x_1-x_2\right\| ^2, \end{aligned}$$

    that is, the set-valued mapping \(N^P(S; \cdot ) \cap \mathbb {B}\) is \(1 / \rho \)-hypomonotone.

  4. (d)

    For all \(\gamma \in ]0,1[\), for all \(x,x'\in U_\rho ^\gamma (S)\), for all \(\xi \in \partial _P d_S(x)\), one has

    $$\begin{aligned} \langle \xi ,x'-x\rangle \le \frac{1}{2\rho (1-\gamma )^2}\Vert x'-x\Vert ^2 + d_S(x')-d_S(x). \end{aligned}$$

Next, we recall the class of subsmooth sets that includes the concepts of convex and uniformly prox-regular sets (see [4] and also [31, Chapter 8] for a survey).

Definition 2

Let S be a closed subset of \(\mathcal {H}\). We say that S is subsmooth at \(x_0 \in S\) if for every \(\varepsilon >0\) there exists \(\delta >0\) such that

$$\begin{aligned} \left\langle \xi _2-\xi _1, x_2-x_1\right\rangle \ge -\varepsilon \left\| x_2-x_1\right\| , \end{aligned}$$
(2)

whenever \(x_1, x_2 \in \mathbb {B}\left[ x_0, \delta \right] \cap S\) and \(\xi _i \in N\left( S; x_i\right) \cap \mathbb {B}\) for \(i\in \{1,2\}\). The set S is said to be subsmooth if it is subsmooth at each point of S. We further say that S is uniformly subsmooth if for every \(\varepsilon >0\) there exists \(\delta >0\) such that (2) holds for all \(x_1, x_2 \in S\) satisfying \(\left\| x_1-x_2\right\| \le \delta \) and all \(\xi _i \in N\left( S; x_i\right) \cap \mathbb {B}\) for \(i\in \{1,2\}\).

Let \((S(t))_{t\in I}\) be a family of closed sets of \(\mathcal {H}\) indexed by a nonempty set I. The family is called equi-uniformly subsmooth, if for all \(\varepsilon >0\), there exists \(\delta >0\) such that for all \(t\in I\), inequality (2) holds for all \(x_1,x_2\in S(t)\) satisfying \(\Vert x_1-x_2\Vert \le \delta \) and all \(\xi _i\in N(S(t);x_i)\cap \mathbb {B}\) with \(i\in \{1,2\}\).

Given an interval \(\mathcal {I}\), a set-valued map \(F:\mathcal {I}\rightrightarrows \mathcal {H}\) is said to be measurable if for every open set U of \(\mathcal {H}\), the inverse image \(F^{-1}(U) = \{t\in \mathcal {I}:F(t)\cap U\ne \emptyset \}\) is a Lebesgue measurable set. When F takes nonempty and closed values and \(\mathcal {H}\) is separable, this notion is equivalent to the \(\mathcal {L}\otimes \mathcal {B}(\mathcal {H})\)-measurability of the graph \({\text {gph}}F:= \{(t,x)\in \mathcal {I}\times \mathcal {H}: x\in F(t)\}\) (see, e.g., [28, Theorem 6.2.20]).

Given a set-valued map \(F:\mathcal {H}\rightrightarrows \mathcal {H}\), we say F is upper semicontinuous from \(\mathcal {H}\) into \(\mathcal {H}_w\) if for every weakly closed set C of \(\mathcal {H}\), the inverse image \(F^{-1}(C)\) is a closed set of \(\mathcal {H}\). It is known (see, e.g., [28, Proposition 6.1.15 (c)]) that if F is upper semicontinuous, then the map \(x\mapsto \sigma (\xi ,F(x))\) is upper semicontinuous for all \(\xi \in \mathcal {H}\). When F takes convex and weakly compact values, these two properties are equivalent (see [28, Proposition 6.1.17]).

A set \(S\subset \mathcal {H}\) is said to be ball-compact if the set \(S\cap r\mathbb {B}\) is compact for all \(r>0\). The projection onto \(S\subset \mathcal {H}\) is the (possibly empty) set

$$\begin{aligned} {\text {Proj}}_{S}(x):=\left\{ z\in S: d_{S}(x)=\Vert x-z\Vert \right\} . \end{aligned}$$

When the projection set is a singleton, we denote it as \({\text {proj}}_{S}(x)\). For \(\varepsilon >0\), we define the set of approximate projections:

$$\begin{aligned} {\text {proj}}_{S}^{\varepsilon }(x):=\left\{ z\in S: \Vert x-z\Vert ^2 < d_S^2(x)+\varepsilon \right\} . \end{aligned}$$

By definition, the above set is nonempty and relatively open in S. Moreover, it satisfies properties similar to those of the projection map (see Proposition 2 below). Approximate projections have appeared several times in variational analysis. In particular, they were used to characterize the subdifferential of the Asplund function of a given set. Indeed, let \(S\subset \mathcal {H}\) and consider the Asplund function of the set S:

$$\begin{aligned} \varphi _S(x):= \frac{1}{2}\Vert x\Vert ^2-\frac{1}{2}d_S^2(x), \quad x\in \mathcal {H}. \end{aligned}$$

Then, the following formula holds (see, e.g., [21, p. 467]):

$$\begin{aligned} \partial \varphi _S(x) = \bigcap _{\varepsilon >0}\overline{\text {co}}({\text {proj}}_S^\varepsilon (x)). \end{aligned}$$
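Note that \(\varphi _S\) can be rewritten as \(\varphi _S(x)=\sup _{s\in S}\left( \langle x,s\rangle -\tfrac{1}{2}\Vert s\Vert ^2\right) \), a supremum of affine functions of x, and hence convex. The following numerical sketch (ours; the sample set and seed are arbitrary) checks midpoint convexity for a finite set:

```python
import numpy as np

def asplund(x, S):
    # phi_S(x) = ||x||^2/2 - d_S(x)^2/2
    #          = sup_{s in S} (<x, s> - ||s||^2/2),
    # a supremum of affine functions of x, hence convex.
    d2 = min(float(np.sum((np.asarray(x) - s) ** 2)) for s in S)
    return 0.5 * (float(np.dot(x, x)) - d2)

# Midpoint convexity on a finite (hence closed) sample set.
rng = np.random.default_rng(0)
S = [rng.standard_normal(2) for _ in range(20)]
x, y = rng.standard_normal(2), rng.standard_normal(2)
mid = asplund(0.5 * (x + y), S)
```

For \(x\in S\) one has \(d_S(x)=0\), so \(\varphi _S\) coincides with \(\tfrac{1}{2}\Vert x\Vert ^2\) on S.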

We recall that for any set \(S\subset \mathcal {H}\) and any \(x\in \mathcal {H}\) with \({\text {Proj}}_S(x)\ne \emptyset \), the following formula is a consequence of formula (1):

$$\begin{aligned} x-z\in d_S(x)\partial _P d_S(z) \text { for all } z\in {\text {Proj}}_S(x). \end{aligned}$$

The next result provides an approximate version of the above formula for any closed set \(S\subset \mathcal {H}\).

Lemma 1

Let \(S\subset \mathcal {H}\) be a closed set, \(x\in \mathcal {H}\), and \(\varepsilon >0\). For each \(z\in {\text {proj}}_{S}^{\varepsilon }(x)\) there is \(v\in {\text {proj}}_{S}^{\varepsilon }(x)\) such that \(\Vert z-v\Vert < 2\sqrt{\varepsilon }\) and

$$\begin{aligned} x-z\in (4\sqrt{\varepsilon } + d_S(x))\partial _P d_S(v) + 3\sqrt{\varepsilon }\mathbb {B}. \end{aligned}$$

Proof

Fix \(\varepsilon >0\), \(x\in \mathcal {H}\) and \(z\in {\text {proj}}_{S}^{\varepsilon }(x)\). According to the Borwein-Preiss Variational Principle [6, Theorem 2.6] applied to \(y\mapsto g(y):=\Vert x-y\Vert ^2+ I_S(y)\), there exists \(v\in {\text {proj}}_{S}^{\varepsilon }(x)\) such that \(\Vert z-v\Vert <2\sqrt{\varepsilon }\) and \(0\in \partial _P g(v) + 2\sqrt{\varepsilon }\mathbb {B}\). Then, by the sum rule for the proximal subdifferential (see, e.g., [11, Proposition 2.11]), we obtain that

$$\begin{aligned} x-v\in N^P(S;v)+\sqrt{\varepsilon }\mathbb {B}, \end{aligned}$$

which implies that \(x-z\in N^P(S;v)+3\sqrt{\varepsilon }\mathbb {B}. \) Next, since \(\Vert x-z\Vert \le d_S(x) + \sqrt{\varepsilon }\), we obtain that

$$\begin{aligned} x-z\in N^P(S;v)\cap (4\sqrt{\varepsilon } + d_S(x))\mathbb {B} + 3\sqrt{\varepsilon }\mathbb {B}. \end{aligned}$$

Finally, the result follows from formula (1) and the above inclusion. \(\square \)

The following proposition displays some properties of approximate projections for uniformly prox-regular sets.

Proposition 2

Let \(S\subset \mathcal {H}\) be a \(\rho \)-uniformly prox-regular set. Then, one has:

  1. (a)

    Let \((x_n)\) be a sequence converging to \(x\in U_\rho (S)\). Then for any \((z_n)\) and any sequence of positive numbers \((\varepsilon _n)\) converging to 0 with \(z_n\in {\text {proj}}_{S}^{\varepsilon _n}(x_n)\) for all \(n\in \mathbb {N}\), we have that \(z_n\rightarrow {\text {proj}}_{S}(x)\).

  2. (b)

    Let \(\gamma \in ]0,1[\) and \(\varepsilon \in ]0,\varepsilon _0]\) where \(\varepsilon _0\) is such that

    $$\begin{aligned} \gamma +4\sqrt{\varepsilon _0}\left( 1+\gamma +\frac{1}{\rho }(1+4\sqrt{\varepsilon _0})\right) =1. \end{aligned}$$

    Then, for all \(z_i\in {\text {proj}}_S^\varepsilon (x_i)\) with \(x_i\in U_\rho ^\gamma (S)\) for \(i\in \{1,2\}\), we have

    $$\begin{aligned} (1-\digamma )\Vert z_1-z_2\Vert ^2\le \sqrt{\varepsilon }\Vert x_1-x_2\Vert ^2 + M\sqrt{\varepsilon }+\langle x_1-x_2,z_1-z_2\rangle , \end{aligned}$$

where \(\digamma := \frac{\alpha }{\rho } + 4\sqrt{\varepsilon }\left( 1+\frac{\alpha }{\rho }+\frac{1}{\rho }(1+\sqrt{\varepsilon })\right) \) with \(\alpha := \max \{d_S(x_1),d_S(x_2)\}\), and M is a non-negative constant depending only on \(\varepsilon , \rho ,\gamma \).

Proof

(a) We observe that for all \(n\in \mathbb {N}\)

$$\begin{aligned} \begin{aligned} \Vert z_n\Vert&\le \Vert z_n-x_n\Vert + \Vert x_n\Vert \le d_S(x_n) + \sqrt{\varepsilon _n} + \Vert x_n\Vert . \end{aligned} \end{aligned}$$

Hence, since \(\varepsilon _n\rightarrow 0\) and \(x_n\rightarrow x\), we obtain that \((z_n)\) is bounded. On the other hand, since \(x\in U_\rho (S)\), the projection \({\text {proj}}_S(x)\) is well-defined and

$$\begin{aligned} \begin{aligned} \Vert z_n - {\text {proj}}_S(x)\Vert ^2&= \Vert z_n - x_n\Vert ^2 - \Vert x_n-{\text {proj}}_S(x)\Vert ^2 \\&\quad + 2\langle x-{\text {proj}}_S(x),z_n-{\text {proj}}_S(x)\rangle +2\langle z_n-{\text {proj}}_S(x),x_n-x\rangle \\&\le d_S^2(x_n)+\varepsilon _n - \Vert x_n-{\text {proj}}_S(x)\Vert ^2 \\&\quad + 2\langle x-{\text {proj}}_S(x),z_n-{\text {proj}}_S(x)\rangle +2\langle z_n-{\text {proj}}_S(x),x_n-x\rangle \\&\le \varepsilon _n+ 2\langle x-{\text {proj}}_S(x),z_n-{\text {proj}}_S(x)\rangle \\&\quad +2\langle z_n-{\text {proj}}_S(x),x_n-x\rangle \\ \end{aligned} \end{aligned}$$

where we have used \(z_n\in {\text {proj}}_S^{\varepsilon _n}(x_n)\) and that \(d_S^2(x_n)\le \Vert x_n-{\text {proj}}_S(x)\Vert ^2\). Moreover, since \(x-{\text {proj}}_S(x)\in N^P(S;{\text {proj}}_S(x))\) and S is \(\rho \)-uniformly prox-regular, we obtain that

$$\begin{aligned} 2\langle x-{\text {proj}}_S(x),z_n-{\text {proj}}_S(x)\rangle \le \frac{d_S(x)}{\rho }\Vert z_n-{\text {proj}}_S(x)\Vert ^2. \end{aligned}$$

Therefore, by using the above inequality and rearranging terms, we obtain that

$$\begin{aligned} \Vert z_n - {\text {proj}}_S(x)\Vert ^2\le \frac{\rho }{\rho -d_S(x)}\left( \varepsilon _n +2\langle z_n-{\text {proj}}_S(x),x_n-x\rangle \right) . \end{aligned}$$

Finally, since \(x_n\rightarrow x\) and \((z_n)\) is bounded, we conclude that \(z_n\rightarrow {\text {proj}}_S(x)\).

(b) By virtue of Lemma 1, for \(i\in \{1,2\}\) there exist \(v_i,b_i\in \mathcal {H}\) such that

$$\begin{aligned} b_i\in \mathbb {B}, v_i\in {\text {proj}}_S^\varepsilon (x_i), \Vert z_i-v_i\Vert <2\sqrt{\varepsilon } \text { and } \frac{x_i-z_i - 3\sqrt{\varepsilon }b_i}{4\sqrt{\varepsilon } + d_S(x_i)}\in \partial _P d_S(v_i). \end{aligned}$$

The hypomonotonicity of the proximal normal cone (see Proposition 1 (c)) implies that

$$\begin{aligned} \left\langle \zeta _1-\zeta _2, v_1-v_2\right\rangle \ge \frac{-1}{\rho }\Vert v_1-v_2\Vert ^2, \end{aligned}$$

where \(\zeta _i:= \frac{x_i-z_i - 3\sqrt{\varepsilon }b_i}{4\sqrt{\varepsilon }+\alpha }\) for \(i\in \{1,2\}\) and \(\alpha := \max \{ d_S(x_1),d_S(x_2) \}\). On the one hand, we have

$$\begin{aligned} \Vert v_1-v_2\Vert \le \Vert v_1-z_1\Vert + \Vert z_1-z_2\Vert + \Vert z_2-v_2\Vert \le 4\sqrt{\varepsilon }+\Vert z_1-z_2\Vert , \end{aligned}$$

and for all \(z\in \mathcal {H}\) and \(i\in \{1,2\}\)

$$\begin{aligned} |\langle z,v_i-z_i\rangle | \le \frac{\sqrt{\varepsilon }\Vert z\Vert ^2}{2} + \frac{\Vert v_i-z_i\Vert ^2}{2\sqrt{\varepsilon }}\le \frac{\sqrt{\varepsilon }\Vert z\Vert ^2}{2} +2\sqrt{\varepsilon }. \end{aligned}$$

On the other hand,

$$\begin{aligned} \begin{aligned}&\langle (x_1-z_1-3\sqrt{\varepsilon }b_1) - (x_2-z_2-3\sqrt{\varepsilon }b_2),v_1-v_2\rangle \\&\quad = \ 3\sqrt{\varepsilon }\langle b_2-b_1,v_1-v_2\rangle + \langle (x_1-x_2)-(z_1-z_2),v_1-v_2 \rangle \\&\quad = \ 3\sqrt{\varepsilon }\langle b_2-b_1,v_1-v_2\rangle + \langle x_1-x_2,v_1-z_1\rangle +\langle x_1-x_2,z_1-z_2\rangle \\&\qquad +\langle x_1-x_2,z_2-v_2\rangle -\langle z_1-z_2,v_1-z_1\rangle -\Vert z_1 - z_2\Vert ^2 - \langle z_1-z_2,z_2-v_2 \rangle \\&\quad \le \ 6\sqrt{\varepsilon }(4\sqrt{\varepsilon }+\Vert z_1-z_2\Vert ) + \sqrt{\varepsilon }\Vert x_1-x_2\Vert ^2 + 8\sqrt{\varepsilon } + \langle x_1-x_2,z_1-z_2\rangle \\&\qquad -(1-\sqrt{\varepsilon })\Vert z_1-z_2\Vert ^2\\&\quad \le \ 24\varepsilon + 11\sqrt{\varepsilon } + \sqrt{\varepsilon }\Vert x_1-x_2\Vert ^2 + \langle x_1-x_2,z_1-z_2\rangle -(1-4\sqrt{\varepsilon })\Vert z_1-z_2\Vert ^2. \end{aligned} \end{aligned}$$

It follows that

$$\begin{aligned} \begin{aligned}&\left[ 1-\frac{\alpha }{\rho } -4\sqrt{\varepsilon }(1+\frac{1}{\rho }(1+4\sqrt{\varepsilon }+\alpha ))\right] \Vert z_1-z_2\Vert ^2\\&\quad \le \sqrt{\varepsilon }\Vert x_1-x_2\Vert ^2 + \langle x_1-x_2,z_1-z_2 \rangle +4(4\varepsilon +\sqrt{\varepsilon })(4\frac{\sqrt{\varepsilon }}{\rho } + \gamma ) +24\varepsilon + 11\sqrt{\varepsilon } \end{aligned} \end{aligned}$$

which proves the desired inequality. \(\square \)

The following result provides a stability property for a family of equi-uniformly subsmooth sets. We refer to [20, Lemma 2.7] for a similar result.

Lemma 2

Let \(\mathcal {C}=\{C_n\}_{n\in \mathbb {N}}\cup \{C\}\) be a family of nonempty, closed, and equi-uniformly subsmooth sets. Assume that

$$\begin{aligned} \lim _{n\rightarrow \infty }d_{C_n}(x)= 0, \text { for all } x\in C. \end{aligned}$$

Then, for any sequence \(\alpha _n \rightarrow \alpha \in \mathbb {R}\) and any sequence \((y_n)\) converging to y with \(y_n\in C_n\) and \(y\in C\), one has

$$\begin{aligned} \limsup _{n\rightarrow \infty } \sigma (\xi ,\alpha _n\partial d_{C_n}(y_n))\le \sigma (\xi ,\alpha \partial d_{C}(y)) \text { for all } \xi \in \mathcal {H}. \end{aligned}$$

Proof

Fix \(\xi \in \mathcal {H}\). Since \(\partial d_S(x)\subset \mathbb {B}\) for all \(x\in \mathcal {H}\), we observe that

$$\begin{aligned} \beta := \limsup _{n\rightarrow \infty } \sigma (\xi ,\alpha _n\partial d_{C_n}(y_n))<+\infty . \end{aligned}$$

Let us consider a subsequence \((n_k)\) such that

$$\begin{aligned} \beta = \lim _{k\rightarrow \infty }\sigma (\xi ,\alpha _{n_k}\partial d_{C_{n_k}}(y_{n_k})). \end{aligned}$$

Given that \(\partial d_{C_{n_k}}(y_{n_k})\) is weakly compact for all \(k\in \mathbb {N}\), there is \(v_{n_k}\in \partial d_{C_{n_k}}(y_{n_k})\) such that

$$\begin{aligned} \sigma (\xi ,\alpha _{n_k}\partial d_{C_{n_k}}(y_{n_k})) = \langle \xi ,\alpha _{n_k} v_{n_k} \rangle \text { for all } k\in \mathbb {N}. \end{aligned}$$

Moreover, the sequence \((v_{n_k})\) is bounded. Hence, without loss of generality, we can assume that \(v_{n_k}\rightharpoonup v\in \mathbb {B}\). It follows that \(\beta = \langle \xi ,\alpha v\rangle \). By the equi-uniform subsmoothness of \(\mathcal {C}\), for any \(\varepsilon >0\), there is \(\delta >0\) such that for all \(D\in \mathcal {C}\) and \(x_1,x_2\in D\) with \(\Vert x_1-x_2\Vert <\delta \), one has

$$\begin{aligned} \langle \zeta _1-\zeta _2,x_1-x_2\rangle \ge -\varepsilon \Vert x_1-x_2\Vert , \end{aligned}$$
(3)

whenever \(\zeta _i\in N(D;x_i)\cap \mathbb {B}\) for \(i\in \{1,2\}\). Next, let \(y'\in C\) be such that \(\Vert y-y'\Vert <\delta /2\). Then, since \(d_{C_{n_k}}(y')\) converges to 0, there is a sequence \((y_{n_k}')\) converging to \(y'\) with \(y_{n_k}'\in C_{n_k}\) for all \(k\in \mathbb {N}\). Hence, there is \(k_0\in \mathbb {N}\) such that \(\Vert y_{n_k}'-y'\Vert <\delta /2\) for all \(k\ge k_0\). On the other hand, since \(y_n\rightarrow y\), there is \(k_0^{\prime }\in \mathbb {N}\) such that \(\Vert y_{n_k}-y\Vert <\delta /2\) for all \(k\ge k_0^{\prime }\). Hence, if \(k\ge \max \{k_0,k_0^{\prime }\}=:\hat{k}\), we have \(\Vert y_{n_k}-y_{n_k}'\Vert <\delta \). Therefore, it follows from the fact that \(0\in \partial d_{C_{n_k}}(y_{n_k}')\) and inequality (3) that

$$\begin{aligned} \langle v_{n_k},y_{n_k}-y_{n_k}' \rangle \ge -\varepsilon \Vert y_{n_k}-y_{n_k}'\Vert \text { for all }k\ge \hat{k}. \end{aligned}$$

By taking \(k\rightarrow \infty \), we obtain that

$$\begin{aligned} \langle v,y-y'\rangle \ge -\varepsilon \Vert y-y'\Vert \text { for all } y'\in C\cap \mathbb {B}(y,\delta /2), \end{aligned}$$

which implies that \(v\in N^F(C;y)\). Then, by [29, Lemma 4.21],

$$\begin{aligned} v\in N^F(C;y)\cap \mathbb {B}=\partial _F d_{C}(y)\subset \partial d_{C}(y). \end{aligned}$$

Finally, we have proved that

$$\begin{aligned} \beta = \langle \xi ,\alpha v\rangle \le \sigma (\xi ,\alpha \partial d_{C}(y)), \end{aligned}$$

which ends the proof. \(\square \)

The following lemma is a convergence theorem for a set-valued map from a topological space into a Hilbert space.

Lemma 3

Let \((E,\tau )\) be a topological space and \(\mathcal {G}:E\rightrightarrows \mathcal {H}\) be a set-valued map with nonempty, closed, and convex values. Consider sequences \((x_n)\subset E\), \((y_n)\subset \mathcal {H}\) and \((\varepsilon _n)\subset \mathbb {R}_+\) such that

  1. (i)

    \(x_n\rightarrow x\) (in E), \(y_n\rightharpoonup y\) (weakly in \(\mathcal {H}\)) and \(\varepsilon _n\rightarrow 0\);

  2. (ii)

    For all \(n\in \mathbb {N}\), \(y_n\in \text {co}(\mathcal {G}(x_k)+\varepsilon _k\mathbb {B}:k\ge n)\);

  3. (iii)

    \(\displaystyle \limsup _{n\rightarrow \infty }\sigma (\xi ,\mathcal {G}(x_n))\le \sigma (\xi ,\mathcal {G}(x))\) for all \(\xi \in \mathcal {H}\).

Then, \(y\in \mathcal {G}(x)\).

Proof

Assume by contradiction that \(y\notin \mathcal {G}(x)\). By virtue of the Hahn–Banach separation theorem, there exist \(\xi \in \mathcal {H}\setminus \{0\}\), \(\delta >0\) and \(\alpha \in \mathbb {R}\) such that

$$\begin{aligned} \langle \xi ,y'\rangle + \delta \le \alpha \le \langle \xi ,y\rangle , \ \forall y'\in \mathcal {G}(x). \end{aligned}$$

Then, it follows that \(\sigma (\xi ,\mathcal {G}(x))\le \alpha -\delta \). Besides, according to (ii), for all \(n\in \mathbb {N}\) there is a finite set \(J_n\subset \mathbb {N}\) such that \(m\ge n\) for all \(m\in J_n\) and

$$\begin{aligned} y_n =\sum _{j\in J_n}\alpha _j(y_j' + \varepsilon _jv_j) \end{aligned}$$

where for all \(j\in J_n\), \(\alpha _j\ge 0\), \(v_j\in \mathbb {B}\), \(y_j'\in \mathcal {G}(x_j)\) and \(\sum _{j\in J_n}\alpha _j = 1\). Also, there exists \(N\in \mathbb {N}\) such that for all \(n\ge N\), \(\varepsilon _n<\frac{\delta }{2\Vert \xi \Vert }\). Thus, for \(n\ge N\)

$$\begin{aligned} \begin{aligned} \langle \xi ,y_n\rangle&= \sum _{j\in J_n}\alpha _j\langle \xi ,y_j'+\varepsilon _jv_j\rangle \\&\le \sum _{j\in J_n}\alpha _j\sup _{k\ge n}\sigma (\xi ,\mathcal {G}(x_k)) + \sum _{j\in J_n}\alpha _j\varepsilon _j\langle \xi ,v_j\rangle \\&\le \sup _{k\ge n}\sigma (\xi ,\mathcal {G}(x_k)) + \Vert \xi \Vert \sum _{j\in J_n}\alpha _j\frac{\delta }{2\Vert \xi \Vert }\le \sup _{k\ge n}\sigma (\xi ,\mathcal {G}(x_k)) + \frac{\delta }{2}. \end{aligned} \end{aligned}$$

Therefore, as \(y_n\rightharpoonup y\), letting \(n\rightarrow \infty \) in the last inequality we obtain that

$$\begin{aligned} \begin{aligned} \langle \xi ,y\rangle&\le \limsup _{n\rightarrow \infty }\sigma (\xi ,\mathcal {G}(x_n)) + \frac{\delta }{2}\le \sigma (\xi ,\mathcal {G}(x))+\frac{\delta }{2}. \end{aligned} \end{aligned}$$

Therefore, \(\langle \xi ,y\rangle \le \alpha -\delta /2\le \langle \xi ,y\rangle -\delta /2\), which is a contradiction. The proof is then complete. \(\square \)

The next lemma is a technical result whose proof can be found in [23, Lemma 2.2].

Lemma 4

Let \(\left( x_n\right) \) be a sequence of absolutely continuous functions from \(\left[ 0, T\right] \) into \(\mathcal {H}\) with \(x_n\left( 0\right) =x_0^n\). Assume that for all \(n \in \mathbb {N}\)

$$\begin{aligned} \left\| \dot{x}_n(t)\right\| \le \psi (t) \quad \text { a.e. } t \in \left[ 0, T\right] \end{aligned}$$

where \(\psi \in L^1([0, T];\mathbb {R}_+)\) and that \(x_0^n \rightarrow x_0\) as \(n \rightarrow \infty \). Then, there exists a subsequence \(\left( x_{n_k}\right) \) of \(\left( x_n\right) \) and an absolutely continuous function x such that

  1. (i)

    \(x_{n_k}(t) \rightharpoonup x(t)\) in \(\mathcal {H}\) as \(k \rightarrow +\infty \) for all \(t \in \left[ 0, T\right] \).

  2. (ii)

    \(x_{n_k} \rightharpoonup x\) in \(L^1([0, T]; \mathcal {H})\) as \(k \rightarrow +\infty \).

  3. (iii)

    \(\dot{x}_{n_k} \rightharpoonup \dot{x}\) in \(L^1([0, T]; \mathcal {H})\) as \(k \rightarrow +\infty \).

  4. (iv)

    \(\Vert \dot{x}(t)\Vert \le \psi (t)\) a.e. \(t \in \left[ 0, T\right] \).

3 Catching-Up Algorithm with Errors for Sweeping Processes

In this section, we propose a numerical method to establish the existence of solutions of the perturbed sweeping process:

$$\begin{aligned} \begin{aligned} \dot{x}(t)&\in -N\left( C(t);x(t)\right) +F(t,x(t))&\text { a.e. } t\in [0,T],\\ x(0)&=x_0\in C(0), \end{aligned} \end{aligned}$$
(4)

where \(C:[0,T]\rightrightarrows \mathcal {H}\) is a set-valued map with closed values in a Hilbert space \(\mathcal {H}\), \(N\left( C(t);x\right) \) stands for the Clarke normal cone to C(t) at x, and \(F:[0,T]\times \mathcal {H}\rightrightarrows \mathcal {H}\) is a given set-valued map with nonempty, closed, and convex values. Our algorithm is based on the catching-up algorithm, except that we do not require an exact computation of the projections.

The proposed algorithm is given as follows. For \(n\in \mathbb {N}^{*}\), let \((t_k^n:k=0,1, \ldots ,n)\) be a uniform partition of [0, T] with time step \(\mu _n:=T/n\). Let \((\varepsilon _n)\) be a sequence of positive numbers such that \(\varepsilon _n/\mu _n^2\rightarrow 0\). We consider a sequence of piecewise linear and continuous approximations \((x_n)\) defined by \(x_n(0)=x_0\) and, for any \(k\in \{0,\ldots ,n-1\}\) and \(t\in ]t_{k}^n,t_{k+1}^n]\),

$$\begin{aligned} x_n(t)=x_k^n +\frac{t-t_k^n}{\mu _n}\left( x_{k+1}^n-x_k^n-\int _{t_{k}^n}^{t_{k+1}^n}f(s,x_k^n)\textrm{d}s\right) + \int _{t_k^n}^{t}f(s,x_k^n)\textrm{d}s, \end{aligned}$$
(5)

where \(x_0^n=x_0\) and

$$\begin{aligned} x_{k+1}^n\in {\text {proj}}_{C(t_{k+1}^n)}^{\varepsilon _n}\left( x_k^n +\int _{t_{k}^n}^{t_{k+1}^n}f(s,x_k^n)\textrm{d}s\right) \ \text {for }k\in \{0,1, \ldots ,n-1\}. \end{aligned}$$
(6)

Here \(f(t,x)\) denotes any selection of \(F(t,x)\) such that \(f(\cdot ,x)\) is measurable for all \(x\in \mathcal {H}\). For simplicity, we consider \(f(t,x)\in {\text {proj}}_{F(t,x)}^{\gamma }(0)\) for some \(\gamma >0\). In Proposition 3, we prove that such a measurable selection exists under mild assumptions.

The above algorithm is called the catching-up algorithm with approximate projections because the projections are not necessarily computed exactly. We will prove that the scheme converges regardless of the particular method used to compute the approximate projections, as long as inclusion (6) is verified.
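As an illustration, scheme (5)–(6) can be sketched as follows. The moving set (a translating unit disk in \(\mathbb {R}^2\)), the single-valued selection \(f(t,x)=-x\), and the left-endpoint quadrature of the integral are our own illustrative choices, not the paper's data; any routine satisfying the \(\varepsilon _n\)-projection inclusion (6) may replace `proj_ball`.

```python
import numpy as np

def catching_up_approx(x0, T, n, proj_eps, f, eps_n):
    """Nodes x_k^n of scheme (6): x_{k+1}^n is an eps_n-projection onto
    C(t_{k+1}^n) of x_k^n plus the integral of f(., x_k^n) over [t_k^n, t_{k+1}^n]."""
    mu = T / n                                   # uniform step mu_n = T/n
    xs = [np.asarray(x0, dtype=float)]
    for k in range(n):
        t0, t1 = k * mu, (k + 1) * mu
        # left-endpoint quadrature of the integral of f(s, x_k^n) over [t0, t1]
        y = xs[-1] + mu * f(t0, xs[-1])
        xs.append(np.asarray(proj_eps(t1, y, eps_n), dtype=float))
    return np.array(xs)

def proj_ball(t, y, eps):
    """An eps-projection onto C(t) = closed unit ball centred at c(t) = (t, 0);
    here the exact projection is available, which is admissible for any eps >= 0."""
    c = np.array([t, 0.0])
    d = y - c
    r = np.linalg.norm(d)
    return c + d / r if r > 1.0 else y

n = 200
xs = catching_up_approx([1.0, 0.0], 1.0, n, proj_ball, lambda t, x: -x, 1e-8)
# every node x_{k+1}^n belongs to C(t_{k+1}^n)
assert all(np.linalg.norm(xs[k + 1] - np.array([(k + 1) / n, 0.0])) <= 1.0 + 1e-9
           for k in range(n))
```

In practice `proj_ball` would be replaced by an iterative routine returning any feasible point whose distance to the input exceeds the true distance by at most \(\sqrt{\varepsilon _n}\).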

Let us consider functions \(\delta _n(\cdot )\) and \(\theta _n(\cdot )\) defined as

$$\begin{aligned} \delta _n(t)={\left\{ \begin{array}{ll} t_k^n &{} \text { if } t\in [t_k^n,t_{k+1}^n[\\ t_{n-1}^n &{} \text { if } t=T, \end{array}\right. } \text { and } \theta _n(t)={\left\{ \begin{array}{ll} t_{k+1}^n &{} \text { if } t\in [t_k^n,t_{k+1}^n[\\ T &{} \text { if } t=T. \end{array}\right. } \end{aligned}$$

In what follows, we show useful properties satisfied by the above algorithm, which will help us to prove the existence of solutions of sweeping process (4) in three cases:

  1. (i)

    The set-valued map \(t\rightrightarrows C(t)\) takes uniformly prox-regular values.

  2. (ii)

    The set-valued map \(t\rightrightarrows C(t)\) takes subsmooth and ball-compact values.

  3. (iii)

    \(C(t) \equiv C\) in [0, T] and C is ball-compact.

Throughout this section, \(F:[0,T]\times \mathcal {H}\rightrightarrows \mathcal {H}\) will be a set-valued map with nonempty, closed, and convex values. Moreover, we will consider the following conditions:

\((\mathcal {H}_1^F)\):

For all \(t\in [0,T]\), \(F(t,\cdot )\) is upper semicontinuous from \(\mathcal {H}\) into \(\mathcal {H}_w\).

\((\mathcal {H}_2^F)\):

There exists \(h:\mathcal {H}\rightarrow \mathbb {R}^+\) Lipschitz continuous (with constant \(L_h>0\)) such that

$$\begin{aligned} \begin{aligned} d\left( 0,F(t,x)\right) :=\inf \{\Vert w\Vert : w\in F(t,x)\}\le h(x), \end{aligned} \end{aligned}$$

for all \(x\in \mathcal {H}\) and a.e. \(t\in [0,T]\).

\((\mathcal {H}_3^F)\):

There is \(\gamma >0\) such that the set-valued map \((t,x)\rightrightarrows {\text {proj}}_{F(t,x)}^\gamma (0)\) has a selection \(f:[0,T]\times \mathcal {H}\rightarrow \mathcal {H}\) such that \(f(\cdot ,x)\) is measurable for all \(x\in \mathcal {H}\).

The following proposition provides conditions for the feasibility of hypothesis \((\mathcal {H}_3^F)\).

Proposition 3

Assume that \(\mathcal {H}\) is a separable Hilbert space and that \(F(\cdot ,x)\) is measurable for all \(x\in \mathcal {H}\). Then, \((\mathcal {H}_3^F)\) holds for all \(\gamma >0\).

Proof

Let \(\gamma >0\) and fix \(x\in \mathcal {H}\). Since the set-valued map \(F(\cdot ,x)\) is measurable, the map \(t\mapsto d(0,F(t,x))\) is a measurable function. Let us define the set-valued map \(\mathcal {F}_x:t\rightrightarrows {\text {proj}}_{F(t,x)}^{\gamma }(0)\). Then,

$$\begin{aligned} \begin{aligned} {\text {gph}}\mathcal {F}_x&= \{(t,y)\in [0,T]\times \mathcal {H}: y\in {\text {proj}}_{F(t,x)}^{\gamma }(0)\}\\&= \{(t,y)\in [0,T]\times \mathcal {H}:\Vert y\Vert ^2< d(0,F(t,x))^2 + \gamma \text { and } y\in F(t,x)\}\\&={\text {gph}}F(\cdot ,x)\cap \{(t,y)\in [0,T]\times \mathcal {H}:\Vert y\Vert ^2< d(0,F(t,x))^2 + \gamma \}. \end{aligned} \end{aligned}$$

Hence, \({\text {gph}}\mathcal {F}_x\) is a measurable set. Consequently, \(\mathcal {F}_x\) has a measurable selection (see [28, Theorem 6.3.20]). Denoting by \(t\mapsto f(t,x)\) such a selection, we obtain the result. \(\square \)
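The defining condition \(y\in {\text {proj}}_{F(t,x)}^{\gamma }(0)\), i.e. \(y\in F(t,x)\) with \(\Vert y\Vert ^2< d(0,F(t,x))^2 + \gamma \), can be checked concretely. A hedged one-dimensional sketch where the role of \(F(t,x)\) is played by a closed interval \([a,b]\) (our own illustrative setting, not the paper's):

```python
def near_min_norm_selection(a, b, gamma):
    """Return a point f of proj^gamma_{[a,b]}(0): f in [a, b] with
    f**2 < d(0, [a, b])**2 + gamma, as in the definition used in Proposition 3.
    Here [a, b] stands in for F(t, x) (1-D illustration only)."""
    d0 = 0.0 if a <= 0.0 <= b else min(abs(a), abs(b))   # d(0, F(t, x))
    f = min(max(0.0, a), b)   # exact projection of 0 onto [a, b]
    # any gamma-suboptimal feasible point is admissible; the exact projection
    # trivially satisfies the strict inequality whenever gamma > 0
    assert f * f < d0 * d0 + gamma
    return f

assert near_min_norm_selection(-1.0, 2.0, 0.5) == 0.0   # 0 already lies in F(t, x)
assert near_min_norm_selection(3.0, 5.0, 0.5) == 3.0    # nearest endpoint
```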

Now, we establish the main properties of the proposed algorithm.

Theorem 1

Assume, in addition to \((\mathcal {H}_1^F)\), \((\mathcal {H}_2^F)\) and \((\mathcal {H}_3^F)\), that \(C:[0,T]\rightrightarrows \mathcal {H}\) is a set-valued map with nonempty and closed values such that

$$\begin{aligned} d_H(C(t),C(s))\le L_C|t-s| \text { for all } t,s\in [0,T]. \end{aligned}$$
(7)

Then, the sequence of functions \((x_n:[0,T]\rightarrow \mathcal {H})\) generated by numerical scheme (5) and (6) satisfies the following properties:

  1. (a)

    There are non-negative constants \(K_1,K_2,K_3,K_4\) such that for all \(n\in \mathbb {N}\) and \(t\in [0,T]\):

    1. (i)

      \(d_{C(\theta _n(t))}(x_n(\delta _n(t)) + \int _{\delta _n(t)}^{\theta _n(t)}f(s,x_n(\delta _n(t)))\textrm{d}s)\le (L_C + h(x_n(\delta _n(t)))+\sqrt{\gamma })\mu _n.\)

    2. (ii)

      \(\Vert x_n(\theta _n(t)) - x_0\Vert \le K_1.\)

    3. (iii)

      \(\Vert x_n(t) \Vert \le K_2.\)

    4. (iv)

      \(\Vert x_n(\theta _n(t)) - x_n(\delta _n(t))\Vert \le K_3\mu _n + \sqrt{\varepsilon _n}.\)

    5. (v)

      \(\Vert x_n(t)-x_n(\theta _n(t))\Vert \le K_4\mu _n + 2\sqrt{\varepsilon _n}\).

  2. (b)

    There exists \(K_5>0\) such that for all \(t\in [0,T]\) and \(m,n\in \mathbb {N}\) we have

    $$\begin{aligned} d_{C(\theta _n(t))}(x_m(t))\le K_5\mu _m + L_C\mu _n + 2\sqrt{\varepsilon _m}. \end{aligned}$$
  3. (c)

    There exists \(K_6>0\) such that for all \(n\in \mathbb {N}\) and almost all \(t\in [0,T]\), \(\Vert \dot{x}_n(t)\Vert \le K_6\).

  4. (d)

    For all \(n\in \mathbb {N}\) and \(k\in \{0,1, \ldots ,n-1\}\), there is \(v_{k+1}^n\in C(t_{k+1}^n)\) such that for all \(t\in ]t_k^n,t_{k+1}^n[\):

    $$\begin{aligned} \dot{x}_n(t)\in -\frac{\lambda _n(t)}{\mu _n}\partial _P d_{C(\theta _n(t))}(v_{k+1}^n) + f(t,x_n(\delta _n(t))) +\frac{3\sqrt{\varepsilon _n}}{\mu _n}\mathbb {B}, \end{aligned}$$
    (8)

    where \(\lambda _n(t) = 4\sqrt{\varepsilon _n} + (L_C + h(x_n(\delta _n(t)))+\sqrt{\gamma })\mu _n\). Moreover, \(\Vert v_{k+1}^n-x_n(\theta _n(t))\Vert <2\sqrt{\varepsilon _n}\).

Proof

(a): Set \(\mu _n:= T/n\) and let \((\varepsilon _n)\) be a sequence of non-negative numbers such that \(\varepsilon _n/\mu _n^2\rightarrow 0\). We define \(\mathfrak {c}:= \sup _{n\in \mathbb {N}} \frac{\sqrt{\varepsilon _n}}{\mu _n}\). We denote by \(L_h\) the Lipschitz constant of h. For all \(t\in [0,T]\) and \(n\in \mathbb {N}\), we define \(\tau _n(t):= x_n(\delta _n(t)) + \int _{\delta _n(t)}^{\theta _n(t)}f(s,x_n(\delta _n(t)))\textrm{d}s\). Since \(f(t,x_n(\delta _n(t)))\in {\text {proj}}_{F(t,x_n(\delta _n(t)))}^{\gamma }(0)\) we obtain that

$$\begin{aligned} \begin{aligned} d_{C(\theta _n(t))}(\tau _n(t))&\le d_{C(\theta _n(t))}(x_n(\delta _n(t))) + \left\| \int _{\delta _n(t)}^{\theta _n(t)}f(s,x_n(\delta _n(t)))\textrm{d}s \right\| \\&\le L_C\mu _n + \int _{\delta _n(t)}^{\theta _n(t)}\Vert f(s,x_n(\delta _n(t)))\Vert \textrm{d}s\\&\le L_C\mu _n + \int _{\delta _n(t)}^{\theta _n(t)}(h(x_n(\delta _n(t)))+\sqrt{\gamma })\textrm{d}s\\&\le (L_C + h(x_n(\delta _n(t)))+\sqrt{\gamma })\mu _n, \end{aligned} \end{aligned}$$

which proves (i). Moreover, since \(x_n(\theta _n(t))\in {\text {proj}}_{C(\theta _n(t))}^{\varepsilon _n}(\tau _n(t))\), we get that

$$\begin{aligned} \begin{aligned} \Vert x_n(\theta _n(t)) - \tau _n(t) \Vert&\le d_{C(\theta _n(t))}(\tau _n(t)) + \sqrt{\varepsilon _n}\\&\le (L_C + h(x_n(\delta _n(t)))+\sqrt{\gamma })\mu _n + \sqrt{\varepsilon _n}, \end{aligned} \end{aligned}$$
(9)

which yields

$$\begin{aligned} \begin{aligned} \Vert x_n(\theta _n(t)) - x_n(\delta _n(t)) \Vert \le&\ (L_C + 2h(x_n(\delta _n(t))) + 2\sqrt{\gamma })\mu _n + \sqrt{\varepsilon _n}\\ \le&\ (L_C + 2h(x_0) + 2\sqrt{\gamma } + 2L_h\Vert x_n(\delta _n(t))-x_0\Vert )\mu _n \\&+ \sqrt{\varepsilon _n}. \end{aligned} \end{aligned}$$
(10)

Hence, for all \(t\in [0,T]\)

$$\begin{aligned} \begin{aligned} \Vert x_n(\theta _n(t)) - x_0 \Vert&\le (1+2L_h\mu _n)\Vert x_n(\delta _n(t))-x_0\Vert \\&\quad +(L_C + 2h(x_0) + 2\sqrt{\gamma })\mu _n + \sqrt{\varepsilon _n}. \end{aligned} \end{aligned}$$

The above inequality means that for all \(k\in \{0,1, \ldots ,n-1\}\):

$$\begin{aligned} \begin{aligned} \Vert x_{k+1}^n - x_0 \Vert&\le (1+2L_h\mu _n)\Vert x_k^n-x_0\Vert +(L_C + 2h(x_0)+2\sqrt{\gamma })\mu _n + \sqrt{\varepsilon _n}. \end{aligned} \end{aligned}$$

Then, by [11, p. 183], we obtain that for all \(k\in \{0, \ldots ,n-1\}\)

$$\begin{aligned} \begin{aligned} \Vert x_{k+1}^n - x_0 \Vert&\le (k+1)((L_C + 2h(x_0)+2\sqrt{\gamma })\mu _n + \sqrt{\varepsilon _n})\exp (2L_h(k+1)\mu _n)\\&\le T(L_C + 2h(x_0) +2\sqrt{\gamma } + \mathfrak {c})\exp (2L_h T)=:K_1, \end{aligned} \end{aligned}$$
(11)

which proves (ii).

(iii): By definition of \(x_n\), for \(t\in ]t_k^n,t_{k+1}^n]\) and \(k\in \{0,1, \ldots ,n-1\}\), using (5)

$$\begin{aligned} \begin{aligned} \Vert x_n(t) \Vert&\le \Vert x_k^n\Vert + \Vert x_{k+1}^n - \tau _n(t) \Vert + \int _{t_k^n}^t\Vert f(s,x_k^n)\Vert \textrm{d}s\\&\le K_1 + \Vert x_0\Vert + (L_C +\sqrt{\gamma } +h(x_k^n))\mu _n + \sqrt{\varepsilon _n} + (h(x_k^n)+\sqrt{\gamma })\mu _n, \end{aligned} \end{aligned}$$

where we have used (9). Moreover, it is clear that for \(k\in \{0, \ldots ,n\}\)

$$\begin{aligned} \begin{aligned} h(x_k^n)\le h(x_0) + L_h\Vert x_k^n - x_0\Vert \le h(x_0) + L_hK_1. \end{aligned} \end{aligned}$$

Therefore, for all \(t\in [0,T]\)

$$\begin{aligned} \begin{aligned} \Vert x_n(t)\Vert&\le K_1 + \Vert x_0\Vert + (L_C + 2(h(x_0) + L_hK_1+\sqrt{\gamma }))\mu _n + \sqrt{\varepsilon _n}\\&\le K_1 + \Vert x_0\Vert + T(L_C+2(h(x_0) + L_hK_1+\sqrt{\gamma }) + \mathfrak {c}) =: K_2, \end{aligned} \end{aligned}$$

which proves (iii).

(iv): From (10) and (11) it is easy to see that there exists \(K_3>0\) such that for all \(n\in \mathbb {N}\) and \(t\in [0,T]\): \(\Vert x_n(\theta _n(t)) - x_n(\delta _n(t)) \Vert \le K_3\mu _n + \sqrt{\varepsilon _n}\).

(v): To conclude this part, we consider \(t\in ]t_k^n,t_{k+1}^n]\) for some \(k\in \{0,1, \ldots ,n-1\}\). Then \(x_n(\theta _n(t)) = x_{k+1}^n\) and also

$$\begin{aligned} \begin{aligned} \Vert x_n(\theta _n(t))-x_n(t)\Vert \le \,&\Vert x_{k+1}^n - x_k^n\Vert + \Vert x_{k+1}^n - \tau _n(t)\Vert + \int _{t_k^n}^t \Vert f(s,x_k^n)\Vert \textrm{d}s\\ \le \,&K_3\mu _n + \sqrt{\varepsilon _n} + (L_C + \sqrt{\gamma }+ h(x_0) + L_h K_1)\mu _n + \sqrt{\varepsilon _n}\\&+ \mu _n (h(x_k^n)+\sqrt{\gamma })\\ \le \,&(\underbrace{K_3 + L_C + 2(h(x_0) + L_hK_1)+2\sqrt{\gamma }}_{=:K_4})\mu _n + 2\sqrt{\varepsilon _n}, \end{aligned} \end{aligned}$$

and we conclude this first part.

(b): Let \(m,n\in \mathbb {N}\) and \(t\in [0,T]\), then

$$\begin{aligned} \begin{aligned} d_{C(\theta _n(t))}(x_m(t))&\le d_{C(\theta _n(t))}(x_m(\theta _m(t))) + \Vert x_m(\theta _m(t))-x_m(t)\Vert \\&\le d_H(C(\theta _n(t)),C(\theta _m(t))) + K_4\mu _m + 2\sqrt{\varepsilon _m}\\&\le L_C|\theta _n(t) - \theta _m(t)| + K_4\mu _m + 2\sqrt{\varepsilon _m}\\&\le L_C(\mu _n + \mu _m) + K_4\mu _m + 2\sqrt{\varepsilon _m} \end{aligned} \end{aligned}$$

where we have used (v). Hence, by setting \(K_5:= K_4 + L_C\) we prove (b).

(c): Let \(n\in \mathbb {N}\), \(k\in \{0,1, \ldots ,n-1\}\) and \(t\in ]t_k^n,t_{k+1}^n]\). Then,

$$\begin{aligned} \begin{aligned} \Vert \dot{x}_n(t)\Vert&= \left\| \frac{1}{\mu _n}\left( x_{k+1}^n-x_k^n-\int _{t_{k}^n}^{t_{k+1}^n}f(s,x_k^n)\textrm{d}s\right) + f(t,x_k^n)\right\| \\&\le \frac{1}{\mu _n}\Vert x_n(\theta _n(t))-\tau _n(t)\Vert + \Vert f(t,x_k^n)\Vert \\&\le \frac{1}{\mu _n}((L_C + h(x_k^n)+\sqrt{\gamma })\mu _n+\sqrt{\varepsilon _n}) + h(x_k^n) + \sqrt{\gamma }\\&\le \frac{\sqrt{\varepsilon _n}}{\mu _n} + L_C + 2(h(x_0) + L_hK_1 + \sqrt{\gamma })\\&\le \mathfrak {c} + L_C + 2(h(x_0) + L_hK_1+ \sqrt{\gamma }) =: K_6, \end{aligned} \end{aligned}$$

which proves (c).

(d): Fix \(k\in \{0,1, \ldots ,n-1\}\) and \(t\in ]t_k^n,t_{k+1}^n[\). Then, \(x_{k+1}^n\in {\text {proj}}_{C(t_{k+1}^n)}^{\varepsilon _n}(\tau _n(t))\). Hence, by Lemma 1, there exists \(v_{k+1}^n\in C(t_{k+1}^n)\) such that \(\Vert x_{k+1}^n-v_{k+1}^n\Vert <2\sqrt{\varepsilon _n}\) and

$$\begin{aligned} \begin{aligned} \tau _n(t)-x_{k+1}^n\in \alpha _n(t)\partial _P d_{C(t_{k+1}^n)}(v_{k+1}^n) + 3\sqrt{\varepsilon _n}\mathbb {B}, \ \forall t\in ]t_k^n,t_{k+1}^n[, \end{aligned} \end{aligned}$$

where \(\alpha _n(t) = 4\sqrt{\varepsilon _n} + d_{C(\theta _n(t))}(\tau _n(t))\). By virtue of (i),

$$\begin{aligned} \alpha _n(t)\le 4\sqrt{\varepsilon _n} + (L_C + h(x_n(\delta _n(t)))+\sqrt{\gamma })\mu _n=:\lambda _n(t). \end{aligned}$$

Then, for all \(t\in ]t_k^n,t_{k+1}^n[\)

$$\begin{aligned} -\mu _n(\dot{x}_n(t)-f(t,x_k^n))\in \lambda _n(t)\partial _P d_{C(t_{k+1}^n)}(v_{k+1}^n) + 3\sqrt{\varepsilon _n}\mathbb {B}, \end{aligned}$$

which implies that for all \(t\in ]t_k^n,t_{k+1}^n[\)

$$\begin{aligned} \dot{x}_n(t)\in -\frac{\lambda _n(t)}{\mu _n}\partial _P d_{C(t_{k+1}^n)}(v_{k+1}^n) + f(t,x_k^n) + \frac{3\sqrt{\varepsilon _n}}{\mu _n}\mathbb {B}. \end{aligned}$$

\(\square \)

4 Prox-Regular Case

In this section, we will study the algorithm under the assumption of uniform prox-regularity of the moving sets. The classical catching-up algorithm in this framework was studied in [8], where the existence of solutions for (4) was established for a set-valued map F taking values in a fixed compact set.

Theorem 2

Suppose, in addition to the assumptions of Theorem 1, that C(t) is \(\rho \)-uniformly prox-regular for all \(t\in [0,T]\), and for all \(r>0\), there exists a non-negative integrable function \(k_r\) such that for all \(t\in [0,T]\) and \(x,x'\in r\mathbb {B}\) one has

$$\begin{aligned} \langle y-y',x-x'\rangle \le k_r(t)\Vert x-x'\Vert ^2, \ \forall y\in F(t,x), \forall y'\in F(t,x'). \end{aligned}$$
(12)

Then, the sequence of functions \((x_n)\) generated by algorithm (5) and (6) converges uniformly to an absolutely continuous function x, which is a solution of (4). Moreover, if F satisfies the following growth condition,

$$\begin{aligned} \sup _{y\in F(t,x)}\Vert y\Vert \le c(t)(\Vert x\Vert +1), \forall x\in \mathcal {H}, t\in [0,T], \end{aligned}$$
(13)

where \(c\in L^1([0,T];\mathbb {R}_+)\), then the solution x is unique.

Proof

Consider \(m,n\in \mathbb {N}\) with \(m\ge n\) large enough so that \(d_{C(\theta _n(t))}(x_m(t))<\rho \) for all \(t\in [0,T]\); this is guaranteed by Theorem 1. Then, for a.e. \(t\in [0,T]\)

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}t}\left( \frac{1}{2}\Vert x_n(t)-x_m(t)\Vert ^2\right) = \langle \dot{x}_n(t)-\dot{x}_m(t), x_n(t)-x_m(t)\rangle . \end{aligned}$$

Let \(t\in [0,T]\) be a point where the above equality holds, and let \(k\in \{0,1, \ldots ,n-1\}\) and \(j\in \{0,1, \ldots ,m-1\}\) be such that \(t\in ]t_k^n,t_{k+1}^n]\) and \(t\in ]t_j^m,t_{j+1}^m]\). On the one hand, we have that

$$\begin{aligned} \begin{aligned} \langle \dot{x}_n(t)-\dot{x}_m(t), x_n(t)-x_m(t)\rangle =&\ \langle \dot{x}_n(t)-\dot{x}_m(t),x_n(t)-x_{k+1}^n\rangle \\&+ \langle \dot{x}_n(t)-\dot{x}_m(t),x_{k+1}^n-v_{k+1}^n\rangle \\&+ \langle \dot{x}_n(t)-\dot{x}_m(t),v_{k+1}^n -v_{j+1}^m\rangle \\&+\langle \dot{x}_n(t)-\dot{x}_m(t), v_{j+1}^m- x_{j+1}^m \rangle \\&+ \langle \dot{x}_n(t)-\dot{x}_m(t),x_{j+1}^m- x_m(t)\rangle \\ \le&\ 2K_6(K_4(\mu _n +\mu _m) +4(\sqrt{\varepsilon _n} + \sqrt{\varepsilon _m}))\\&+\langle \dot{x}_n(t)-\dot{x}_m(t),v_{k+1}^n -v_{j+1}^m\rangle , \end{aligned} \end{aligned}$$
(14)

where \(v_{k+1}^n\in C(t_{k+1}^n)\) and \(v_{j+1}^m\in C(t_{j+1}^m)\) are those given in Theorem 1. We can see that

$$\begin{aligned} \begin{aligned} \max \big \{d_{C(t_{k+1}^n)}(v_{j+1}^m),d_{C(t_{j+1}^m)}(v_{k+1}^n)\big \}&\le d_H(C(t_{j+1}^m),C(t_{k+1}^n))\\&\le L_C|t_{j+1}^m-t_{k+1}^n|\le L_C(\mu _n+\mu _m). \end{aligned} \end{aligned}$$

From now on, \(m,n\in \mathbb {N}\) are taken big enough so that \(L_C(\mu _n+\mu _m)<\frac{\rho }{2}\). Moreover, as h is \(L_h\)-Lipschitz, we have that for all \(p\in \mathbb {N}\), \(i\in \{0,1, \ldots ,p\}\) and \(t\in [0,T]\)

$$\begin{aligned} \Vert f(t,x_i^p)\Vert \le h(x_i^p)+\sqrt{\gamma }\le h(x_0) + L_hK_1+\sqrt{\gamma }=:\alpha . \end{aligned}$$

On the other hand, using (8) and Proposition 1 we have that

$$\begin{aligned} \begin{aligned}&\frac{1}{\digamma }\max \{\left\langle \zeta _n-\dot{x}_n(t),v_{j+1}^m-v_{k+1}^n\right\rangle , \left\langle \zeta _m-\dot{x}_m(t),v_{k+1}^n-v_{j+1}^m\right\rangle \}\\&\quad \le \frac{2}{\rho }\Vert v_{k+1}^n-v_{j+1}^m\Vert ^2+L_C(\mu _n+\mu _m), \end{aligned} \end{aligned}$$

where \(\xi _n,\xi _m\in \mathbb {B}\), \(\digamma :=\sup \{\frac{\lambda _\ell (t)}{\mu _\ell }:t\in [0,T],\ell \in \mathbb {N}\}\) and \(\zeta _i:= f(t,x_i(\delta _i(t)))+\frac{3\sqrt{\varepsilon _i}}{\mu _i}\xi _i\) for \(i\in \{n,m\}\). Therefore, we have that

$$\begin{aligned} \begin{aligned}&\langle \dot{x}_n(t)-\dot{x}_m(t),v_{k+1}^n -v_{j+1}^m\rangle \\&\quad = \langle \dot{x}_n(t)-\zeta _n,v_{k+1}^n -v_{j+1}^m\rangle +\langle \zeta _n-\zeta _m,v_{k+1}^n -v_{j+1}^m\rangle \\&\qquad + \langle \zeta _m-\dot{x}_m(t),v_{k+1}^n -v_{j+1}^m\rangle \\&\quad \le 2\digamma \left( \frac{2}{\rho }\Vert v_{k+1}^n-v_{j+1}^m\Vert ^2 + L_C(\mu _n+\mu _m)\right) +\langle \zeta _n-\zeta _m,v_{k+1}^n -v_{j+1}^m\rangle \\&\quad \le \frac{4\digamma }{\rho }(\Vert x_n(t)-x_m(t)\Vert + 3(\sqrt{\varepsilon _n}+\sqrt{\varepsilon _m}) + K_4(\mu _n+\mu _m))^2 \\&\qquad + 2\digamma L_C(\mu _n+\mu _m) +\langle \zeta _n-\zeta _m,v_{k+1}^n -v_{j+1}^m\rangle . \end{aligned} \end{aligned}$$

Moreover, by virtue of Theorem 1, we have \(\max \{\Vert x_n\Vert _{\infty },\Vert x_m\Vert _{\infty }\}\le K_2\). Hence, there is \(k\in L^1([0,T];\mathbb {R}_+)\) satisfying (12) on \(K_2\mathbb {B}\). Therefore, it follows that

$$\begin{aligned} \begin{aligned}&\langle \zeta _n-\zeta _m,v_{k+1}^n -v_{j+1}^m\rangle \\&\quad = \langle f(t,x_n(\delta _n(t)))-f(t,x_m(\delta _m(t))),x_n(\delta _n(t))-x_m(\delta _m(t))\rangle \\&\qquad + \langle f(t,x_n(\delta _n(t)))-f(t,x_m(\delta _m(t))),v_{k+1}^n-x_{k+1}^n\rangle \\&\qquad + \langle f(t,x_n(\delta _n(t)))-f(t,x_m(\delta _m(t))),x_{k+1}^n-x_k^n\rangle \\&\qquad + \langle f(t,x_n(\delta _n(t)))-f(t,x_m(\delta _m(t))),x_j^m-x_{j+1}^m\rangle \\&\qquad + \langle f(t,x_n(\delta _n(t)))-f(t,x_m(\delta _m(t))),x_{j+1}^m-v_{j+1}^m\rangle \\&\qquad + \frac{3\sqrt{\varepsilon _n}}{\mu _n}\langle \xi _n,v_{k+1}^n-v_{j+1}^m\rangle +\frac{3\sqrt{\varepsilon _m}}{\mu _m}\langle \xi _m,v_{j+1}^m-v_{k+1}^n\rangle \\&\quad \le \ k(t)\Vert x_n(\delta _n(t))-x_m(\delta _m(t))\Vert ^2\\&\qquad + 2\alpha (3(\sqrt{\varepsilon _n}+\sqrt{\varepsilon _m})+K_3(\mu _n+ \mu _m)) \\&\qquad + \frac{3\sqrt{\varepsilon _n}}{\mu _n}\Vert v_{k+1}^n-v_{j+1}^m\Vert + \frac{3\sqrt{\varepsilon _m}}{\mu _m}\Vert v_{j+1}^m-v_{k+1}^n\Vert \\&\quad \le k(t)(\Vert x_n(t)-x_m(t)\Vert + 3(\sqrt{\varepsilon _n}+\sqrt{\varepsilon _m})+(K_3+K_4)(\mu _n+\mu _m))^2\\&\qquad + 2\alpha (3(\sqrt{\varepsilon _n}+\sqrt{\varepsilon _m})+K_3(\mu _n+ \mu _m)) \\&\qquad + 6\left( \frac{\sqrt{\varepsilon _n}}{\mu _n}+\frac{\sqrt{\varepsilon _m}}{\mu _m}\right) (\sqrt{\varepsilon _n}+\sqrt{\varepsilon _m}+K_2). \end{aligned} \end{aligned}$$

These two inequalities and (14) yield

$$\begin{aligned} \begin{aligned}&\frac{\textrm{d}}{\textrm{d}t}\Vert x_n(t)-x_m(t)\Vert ^2\\&\quad \le \ 4\left( \frac{4\digamma }{\rho }+k(t)\right) \Vert x_n(t)-x_m(t)\Vert ^2+4\alpha (3(\sqrt{\varepsilon _n}+\sqrt{\varepsilon _m})+K_3(\mu _n+ \mu _m))\\&\qquad +4\digamma L_C(\mu _n+\mu _m)+12\left( \frac{\sqrt{\varepsilon _n}}{\mu _n}+\frac{\sqrt{\varepsilon _m}}{\mu _m}\right) (\sqrt{\varepsilon _n}+\sqrt{\varepsilon _m}+K_2)\\&\qquad +\frac{16\digamma }{\rho }(3(\sqrt{\varepsilon _n}+\sqrt{\varepsilon _m}) + K_4(\mu _n+\mu _m))^2\\&\qquad +4k(t)(3(\sqrt{\varepsilon _n}+\sqrt{\varepsilon _m})+(K_3+K_4)(\mu _n+\mu _m))^2. \end{aligned} \end{aligned}$$

Hence, using Gronwall’s inequality, we have for all \(t\in [0,T]\) and \(m,n\) big enough:

$$\begin{aligned} \Vert x_n(t)-x_m(t)\Vert ^2\le A_{m,n}\exp \left( \frac{16\digamma }{\rho }T+4\int _0^T k(s)\textrm{d}s\right) , \end{aligned}$$
(15)

where

$$\begin{aligned} \begin{aligned} A_{m,n}&=\ 4\alpha T(3(\sqrt{\varepsilon _n}+\sqrt{\varepsilon _m})+K_3(\mu _n+ \mu _m))\\&\quad +4T\digamma L_C(\mu _n+\mu _m)+12T\left( \frac{\sqrt{\varepsilon _n}}{\mu _n}+\frac{\sqrt{\varepsilon _m}}{\mu _m}\right) (\sqrt{\varepsilon _n}+\sqrt{\varepsilon _m}+K_2)\\&\quad +\frac{16T\digamma }{\rho }(3(\sqrt{\varepsilon _n}+\sqrt{\varepsilon _m}) + K_4(\mu _n+\mu _m))^2\\&\quad +4\Vert k\Vert _1(3(\sqrt{\varepsilon _n}+\sqrt{\varepsilon _m})+(K_3+K_4)(\mu _n+\mu _m))^2. \end{aligned} \end{aligned}$$

Since \(A_{m,n}\) goes to 0 as \(m,n\rightarrow \infty \), this shows that \((x_n)\) is a Cauchy sequence in the space of continuous functions endowed with the uniform norm. Therefore, it converges uniformly to some continuous function \(x:[0,T]\rightarrow \mathcal {H}\). It remains to check that x is absolutely continuous and that it is the unique solution of (4). First of all, by Theorem 1 and Lemma 4, x is absolutely continuous and there is a subsequence of \((\dot{x}_{n})\) which converges weakly in \(L^1([0,T];\mathcal {H})\) to \(\dot{x}\). So, without relabeling, we have \(\dot{x}_n\rightharpoonup \dot{x}\) in \(L^1([0,T];\mathcal {H})\). On the other hand, using Theorem 1 and defining \(v_n(t):= v_{k+1}^n\) for \(t\in ]t_k^n,t_{k+1}^n]\) we have

$$\begin{aligned} \begin{aligned} \dot{x}_n(t)&\in -\frac{\lambda _n(t)}{\mu _n}\partial _P d_{C(\theta _n(t))}(v_n(t)) + f(t,x_n(\delta _n(t))) +\frac{3\sqrt{\varepsilon _n}}{\mu _n}\mathbb {B}\\&\in -\kappa _1\partial d_{C(\theta _n(t))}(v_n(t)) + \kappa _2\mathbb {B}\cap F(t,x_n(\delta _n(t))) + \frac{3\sqrt{\varepsilon _n}}{\mu _n}\mathbb {B}, \end{aligned} \end{aligned}$$

where, by Theorem 1, \(\kappa _1\) and \(\kappa _2\) are non-negative numbers which do not depend on \(n\in \mathbb {N}\) and \(t\in [0,T]\). We also have \(v_n\rightarrow x\), \(\theta _n\rightarrow \text {Id}_{[0,T]}\) and \(\delta _n\rightarrow \text {Id}_{[0,T]}\) uniformly. Theorem 1 ensures that \(x(t)\in C(t)\) for all \(t\in [0,T]\). By Mazur’s lemma, there is a sequence \((y_n)\) such that for all n, \(y_n\in \text {co}(\dot{x}_k:k\ge n)\) and \((y_n)\) converges strongly to \(\dot{x}\) in \(L^1([0,T];\mathcal {H})\). That is to say,

$$\begin{aligned} y_n(t)\in \text {co}\left( -\kappa _1\partial d_{C(\theta _k(t))}(v_k(t)) + \kappa _2\mathbb {B}\cap F(t,x_k(\delta _k(t))) +\frac{3\sqrt{\varepsilon _k}}{\mu _k}\mathbb {B}:k\ge n\right) . \end{aligned}$$

Hence, there exists \((y_{n_j})\) which converges to \(\dot{x}\) almost everywhere in [0, T]. Then, by virtue of Lemma 2, \((\mathcal {H}_1^F)\) and Lemma 3, we obtain that

$$\begin{aligned} \dot{x}(t)\in -\kappa _1\partial d_{C(t)}(x(t)) +\kappa _2\mathbb {B}\cap F(t,x(t)) \text { for a.e. } t\in [0,T]. \end{aligned}$$

Since \(\partial d_{C(t)}(x(t))\subset N(C(t);x(t))\) for all \(t\in [0,T]\), we conclude that x is a solution of (4).

To end the proof, we are going to prove that (4) has a unique solution under growth condition (13). First, take any solution x of (4). Then, for a.e. \(t\in [0,T]\) there is \(f(t,x(t))\in F(t,x(t))\) such that

$$\begin{aligned} \mathcal {R}_x(t):= f(t,x(t))-\dot{x}(t)\in N(C(t);x(t)). \end{aligned}$$
(16)

Take any \(t\in ]0,T]\) satisfying (16). Suppose that \(\dot{x}(t)\ne f(t,x(t))\), then using (1) and the uniform prox-regularity of C(t) we have that

$$\begin{aligned} \frac{\mathcal {R}_x(t)}{\Vert \mathcal {R}_x(t)\Vert }\in \partial _P d_{C(t)}(x(t)). \end{aligned}$$

Take any \(\gamma \in ]0,1[\). By continuity, there is \(\delta >0\) such that \(x(s)\in U_\rho ^{\gamma }(C(t))\) for all \(s\in ]t-\delta ,t+\delta [\). Using Proposition 1, we have

$$\begin{aligned} \begin{aligned} \left\langle \frac{\mathcal {R}_x(t)}{\Vert \mathcal {R}_x(t)\Vert }, x(s)-x(t) \right\rangle&\le \frac{1}{2\rho (1-\gamma )^2}\Vert x(s)-x(t)\Vert ^2 + d_{C(t)}(x(s))\\&\le \frac{1}{2\rho (1-\gamma )^2}\Vert x(s)-x(t)\Vert ^2 + L_C|t-s|. \end{aligned} \end{aligned}$$

Dividing by \(t-s\) for \(s\in ]t-\delta ,t[\) and taking the limit \(s\nearrow t\), we obtain that

$$\begin{aligned} \left\langle \frac{\mathcal {R}_x(t)}{\Vert \mathcal {R}_x(t)\Vert }, -\dot{x}(t) \right\rangle \le L_C \implies \Vert \mathcal {R}_x(t)\Vert \le \Vert f(t,x(t))\Vert +L_C. \end{aligned}$$

When \(\dot{x}(t) = f(t,x(t))\), the above inequality trivially holds. Hence, \(\Vert \mathcal {R}_x(t)\Vert \le \Vert f(t,x(t))\Vert +L_C\) for a.e. \(t\in [0,T]\).

Now, take two solutions \(x_1,x_2\) of (4) with \(x_1(0) = x_2(0) = x_0\), then using the hypomonotonicity given in Proposition 1, we have

$$\begin{aligned} \langle \mathcal {R}_{x_1}(t) - \mathcal {R}_{x_2}(t),x_1(t)-x_2(t)\rangle \ge \frac{-1}{2\rho }(\Vert \mathcal {R}_{x_1}(t)\Vert +\Vert \mathcal {R}_{x_2}(t)\Vert )\Vert x_1(t)-x_2(t)\Vert ^2. \end{aligned}$$

Defining \(r = \max \{\Vert x_i\Vert _{\infty }:i=1,2\}\), there is \(k_{r}\in L^1([0,T];\mathbb {R}_+)\) satisfying (12) on \(r\mathbb {B}\). Hence, by using growth condition (13), we have a.e.

$$\begin{aligned} \begin{aligned} \frac{\textrm{d}}{\textrm{d}t}(\Vert x_1(t)-x_2(t)\Vert ^2)&\le \Vert x_1(t)-x_2(t)\Vert ^2\left[ 2k_r(t) + \frac{1}{\rho }(\Vert \mathcal {R}_{x_1}(t)\Vert +\Vert \mathcal {R}_{x_2}(t)\Vert )\right] \\&\le \Vert x_1(t)-x_2(t)\Vert ^2\left[ 2k_r(t) + \frac{2L_C}{\rho } + \frac{2c(t)(r+1)}{\rho }\right] , \end{aligned} \end{aligned}$$

which, by virtue of Gronwall’s inequality, implies that \(x_1 \equiv x_2\). The result is proven. \(\square \)

Remark 1

The property required for F in (12) is a classical monotonicity assumption in the theory of existence of solutions for differential inclusions (see, e.g., [15, Theorem 10.5]).

Remark 2

[Rate of convergence] In the preceding proof, we have established the following estimate:

$$\begin{aligned} \Vert x_n(t)-x_m(t)\Vert ^2\le A_{m,n}\exp \left( \frac{16\digamma }{\rho }T+4\int _0^T k(s)\textrm{d}s\right) \end{aligned}$$

for \(m,n\) such that \(\mu _n+\mu _m<\frac{\rho }{2L_C}\). Hence, by letting \(m\rightarrow \infty \), we obtain that

$$\begin{aligned} \Vert x_n(t)-x(t)\Vert ^2\le A_n\exp \left( \frac{16\digamma }{\rho }T+4\int _0^T k(s)\textrm{d}s\right) \text { for all } n>\frac{2L_C T}{\rho }, \end{aligned}$$

where

$$\begin{aligned} A_n:= \lim _{m\rightarrow \infty } A_{m,n}\le D\left( \sqrt{\varepsilon _n} + \mu _n + \frac{\sqrt{\varepsilon _n}}{\mu _n}\right) , \end{aligned}$$

where D is a non-negative constant. Hence, the above estimate provides a rate of convergence for our scheme.
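A hedged numerical illustration of this rate (our own one-dimensional toy problem, not taken from the paper): for \(C(t)=[t,\infty [\), \(F\equiv \{0\}\) and \(x_0=0\), the exact solution is \(x(t)=t\), and the contribution of the inexact projections to the error is of order \(\sqrt{\varepsilon _n}\), consistent with the \(\sqrt{\varepsilon _n}\) term in \(A_n\):

```python
def sweep_halfline(n, T=1.0, eps=0.0):
    """Catching-up nodes for the toy data C(t) = [t, inf), F = {0}, x0 = 0,
    whose exact solution is x(t) = t. An eps-projection may land up to
    sqrt(eps) past the true projection while staying inside C(t)."""
    mu = T / n
    x, err = 0.0, 0.0
    for k in range(n):
        t1 = (k + 1) * mu
        p = max(x, t1)            # exact projection onto [t1, inf)
        x = p + eps ** 0.5        # worst admissible inexactness (still feasible)
        err = max(err, abs(x - t1))   # deviation from the exact solution x(t1) = t1
    return err

# with exact projections the nodes track the exact solution,
# and the inexactness contributes an error of order sqrt(eps_n)
assert sweep_halfline(100, eps=0.0) == 0.0
assert abs(sweep_halfline(100, eps=1e-6) - 1e-3) < 1e-12
```

Choosing, e.g., \(\varepsilon _n=\mu _n^3\) keeps \(\varepsilon _n/\mu _n^2\rightarrow 0\) and makes all three terms \(\sqrt{\varepsilon _n}\), \(\mu _n\), \(\sqrt{\varepsilon _n}/\mu _n\) vanish, as the scheme requires.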

5 Subsmooth Case

In this section, we study sweeping process (4) for the class of subsmooth sets, which strictly includes the class of uniformly prox-regular sets. We now assume that \((C(t))_{t\in [0,T]}\) is an equi-uniformly subsmooth family. The classical catching-up algorithm was studied in [20] in this framework. Here, we additionally assume the ball compactness of the moving sets, which is required in the infinite-dimensional setting. We will see that our algorithm allows us to prove the existence of a solution, but we can only ensure that a subsequence converges to this solution, which is expected due to the lack of uniqueness of solutions in this case.

Theorem 3

Suppose, in addition to the assumptions of Theorem 1, that the family \((C(t))_{t\in [0,T]}\) is equi-uniformly subsmooth and the sets C(t) are ball-compact for all \(t\in [0,T]\). Then, the sequence of continuous functions \((x_n)\) generated by algorithm (5) and (6) converges uniformly (up to a subsequence) to an absolutely continuous function x, which is a solution of (4).

Proof

From Theorem 1, we have that for all \(n\in \mathbb {N}\) and \(k\in \{0,\ldots ,n-1\}\), there is \(v_{k+1}^n\in C(t_{k+1}^n)\) such that \(\Vert v_{k+1}^n-x_{k+1}^n\Vert <2\sqrt{\varepsilon _n}\) and for all \(t\in ]t_k^n,t_{k+1}^n]\):

$$\begin{aligned} \dot{x}_n(t)\in -\frac{\lambda _n(t)}{\mu _n}\partial _P d_{C(\theta _n(t))}(v_{k+1}^n) + f(t,x_n(\delta _n(t))) +\frac{3\sqrt{\varepsilon _n}}{\mu _n}\mathbb {B}, \end{aligned}$$

where \(\lambda _n(t) = 4\sqrt{\varepsilon _n} + (L_C + h(x_n(\delta _n(t)))+\sqrt{\gamma })\mu _n\). As h is \(L_h\)-Lipschitz, it follows that

$$\begin{aligned} \lambda _n(t)\le (4\mathfrak {c}+L_C+h(x_0)+\sqrt{\gamma } + L_hK_1)\mu _n. \end{aligned}$$

Defining \(v_n(t):=v_{k+1}^n\) on \(]t_k^n,t_{k+1}^n]\), then for all \(n\in \mathbb {N}\) and almost all \(t\in [0,T]\)

$$\begin{aligned} \begin{aligned} \dot{x}_n(t)&\in -M\partial _P d_{C(\theta _n(t))}(v_n(t)) + f(t,x_n(\delta _n(t))) +\frac{3\sqrt{\varepsilon _n}}{\mu _n}\mathbb {B}\\&\in -M\partial d_{C(\theta _n(t))}(v_n(t)) + M\mathbb {B}\cap F(t,x_n(\delta _n(t))) +\frac{3\sqrt{\varepsilon _n}}{\mu _n}\mathbb {B}, \end{aligned} \end{aligned}$$
(17)

where \(M:= 4\mathfrak {c}+L_C+h(x_0) + L_hK_1+\sqrt{\gamma }\). Moreover, by Theorem 1, we have

$$\begin{aligned} \begin{aligned} d_{C(t)}(x_n(t))\le d_{C(\theta _n(t))}(x_n(t)) + L_C\mu _n\le (K_5+2L_C)\mu _n+2\sqrt{\varepsilon _n}, \end{aligned} \end{aligned}$$
(18)

for all \(t\in [0,T]\).

Next, fix \(t\in [0,T]\) and define \(K(t):=\{x_n(t): n\in \mathbb {N}\}\). We claim that K(t) is relatively compact. Indeed, let \(x_m(t) \in K(t)\) and take \(y_m(t)\in {\text {Proj}}_{C(t)}(x_m(t))\) (the projection exists due to the ball compactness of C(t) and the boundedness of K(t)). Moreover, according to (18) and Theorem 1,

$$\begin{aligned} \begin{aligned} \Vert y_m(t)\Vert&\le d_{C(t)}(x_m(t))+\Vert x_m(t)\Vert \le (K_5+2L_C)\mu _m+2\sqrt{\varepsilon _m}+K_2. \end{aligned} \end{aligned}$$

This entails that \(y_m(t)\in C(t)\cap R\, \mathbb {B}\) for all \(m\in \mathbb {N}\) for some \(R>0\). Thus, by the ball compactness of C(t), there exists a subsequence \((y_{m_k}(t))\) of \((y_m(t))\) converging to some y(t) as \(k\rightarrow +\infty \). Then,

$$\begin{aligned} \begin{aligned} \Vert x_{m_k}(t)-y(t)\Vert&\le d_{C(t)}(x_{m_k}(t))+\Vert y_{m_k}(t)-y(t)\Vert \\&\le (K_5+2L_C)\mu _{m_k}+2\sqrt{\varepsilon _{m_k}}+\Vert y_{m_k}(t)-y(t)\Vert , \end{aligned} \end{aligned}$$

which implies that K(t) is relatively compact. Moreover, it is not difficult to see, by Theorem 1, that the family \((x_n)\) is equicontinuous. Therefore, by virtue of Theorem 1, the Arzelà–Ascoli theorem and Lemma 4, we obtain the existence of a Lipschitz function x and a subsequence \((x_j)\) of \((x_n)\) such that

  1. (i)

    \((x_j)\) converges uniformly to x on [0, T].

  2. (ii)

    \(\dot{x}_j\rightharpoonup \dot{x}\) in \(L^1\left( [0,T];\mathcal {H}\right) \).

  3. (iii)

    \(x_j(\theta _j(t))\rightarrow x(t)\) for all \(t\in [0,T]\).

  4. (iv)

    \(x_j(\delta _j(t))\rightarrow x(t)\) for all \(t\in [0,T]\).

  5. (v)

    \(v_j(t)\rightarrow x(t)\) for all \(t\in [0,T]\).

From (18) it is clear that \(x(t)\in C(t)\) for all \(t\in [0,T]\). By Mazur’s lemma, there is a sequence \((y_j)\) such that for all j, \(y_j\in \text {co}(\dot{x}_k:k\ge j)\) and \((y_j)\) converges strongly to \(\dot{x}\) in \(L^1([0,T];\mathcal {H})\). That is to say

$$\begin{aligned} y_j(t)\in \text {co}\left( -M\partial d_{C(\theta _n(t))}(v_n(t)) + M\mathbb {B}\cap F(t,x_n(\delta _n(t))) +\frac{3\sqrt{\varepsilon _n}}{\mu _n}\mathbb {B}:n\ge j\right) . \end{aligned}$$

On the other hand, there exists \((y_{n_j})\) which converges to \(\dot{x}\) almost everywhere in [0, T]. Then, using Lemma 2, Lemma 3, and \((\mathcal {H}_1^F)\), we have

$$\begin{aligned} \dot{x}(t)\in -M\partial d_{C(t)}(x(t)) +M\mathbb {B}\cap F(t,x(t)) \ \text {a.e.} \end{aligned}$$

Finally, since \(\partial d_{C(t)}(x(t))\subset N(C(t);x(t))\) for all \(t\in [0,T]\), it follows that x is a solution of (4). \(\square \)

6 Fixed Set

In this section, we consider a closed and nonempty set \(C\subset \mathcal {H}\), and we look for a solution of the particular case of (4) given by

$$\begin{aligned} \begin{aligned} \dot{x}(t)&\in -N\left( C;x(t)\right) +F(t,x(t))&\text { a.e. } t\in [0,T],\\ x(0)&=x_0\in C, \end{aligned} \end{aligned}$$
(19)

where \(F:[0,T]\times \mathcal {H}\rightrightarrows \mathcal {H}\) is a set-valued map defined as above. The existence of a solution via the classical catching-up algorithm was established in [34]. Here, we use similar ideas to obtain the existence of a solution through our proposed algorithm. We emphasize that, in this case, no regularity of the set C is required.
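To fix ideas, the discretization behind this setting can be illustrated with a small numerical sketch. This is not a restatement of algorithm (6); it is a simplified explicit scheme in which, at each node, the state is advanced along a selection of F and then returned to C by an approximate projection that outputs a point of C. The names `approx_proj` and `f` are placeholders for a user-supplied approximate projection oracle and a selection of F.

```python
import numpy as np

def catching_up_fixed(x0, f, approx_proj, T=1.0, n=100):
    """Simplified catching-up iteration for a fixed set C:
    advance along a selection f of F, then return to C via an
    approximate projection that outputs a point of C."""
    mu = T / n                                          # uniform step size mu_n = T/n
    x = np.asarray(x0, dtype=float)
    traj = [x]
    for k in range(n):
        t = k * mu
        y = x + mu * np.asarray(f(t, x), dtype=float)   # explicit step on the perturbation
        x = np.asarray(approx_proj(y), dtype=float)     # back onto C (approximately)
        traj.append(x)
    return np.array(traj)

# Toy example: C is the closed unit ball and F(t, x) = {(1, 0)}.
proj_ball = lambda y: y if np.linalg.norm(y) <= 1.0 else y / np.linalg.norm(y)
traj = catching_up_fixed([0.0, 0.0], lambda t, x: [1.0, 0.0], proj_ball, T=2.0, n=100)
```

In this toy run the trajectory drifts toward the boundary point (1, 0) and is then held there, mimicking the action of the normal cone in (19).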

Theorem 4

Let \(C\subset \mathcal {H}\) be a ball-compact set and \(F:[0,T]\times \mathcal {H}\rightrightarrows \mathcal {H}\) be a set-valued map satisfying \((\mathcal {H}_1^F)\), \((\mathcal {H}_2^F)\) and \((\mathcal {H}_3^F)\). Then, for any \(x_0\in C\), the sequence of functions \((x_n)\) generated by algorithm (6) converges uniformly (up to a subsequence) to a Lipschitz solution x of the sweeping process (19) such that

$$\begin{aligned} \begin{aligned} \Vert \dot{x}(t)\Vert&\le 2(h(x(t))+\sqrt{\gamma })&\text { a.e. } t\in [0,T]. \end{aligned} \end{aligned}$$
(20)

Proof

We are going to use the properties of Theorem 1, where now \(L_C = 0\). First of all, from Theorem 1, for all \(n\in \mathbb {N}\) and \(k\in \{0,1,\ldots ,n-1\}\), there is \(v_{k+1}^n\in C\) such that \(\Vert v_{k+1}^n-x_{k+1}^n\Vert <2\sqrt{\varepsilon _n}\) and, for all \(t\in ]t_k^n,t_{k+1}^n]\):

$$\begin{aligned} \dot{x}_n(t)\in -\frac{\lambda _n(t)}{\mu _n}\partial _P d_{C}(v_{k+1}^n) + f(t,x_n(\delta _n(t))) +\frac{3\sqrt{\varepsilon _n}}{\mu _n}\mathbb {B}, \end{aligned}$$

where \(\lambda _n(t) = 4\sqrt{\varepsilon _n} + (h(x_n(\delta _n(t)))+\sqrt{\gamma })\mu _n\). Defining \(v_n(t):=v_{k+1}^n\) on \(]t_k^n,t_{k+1}^n]\), we get that for all \(n\in \mathbb {N}\) and a.e. \(t\in [0,T]\)

$$\begin{aligned} \begin{aligned} \dot{x}_n(t)&\in -\frac{\lambda _n(t)}{\mu _n}\partial _P d_{C}(v_n(t)) + f(t,x_n(\delta _n(t))) +\frac{3\sqrt{\varepsilon _n}}{\mu _n}\mathbb {B}\\&\subset -\frac{\lambda _n(t)}{\mu _n}\partial d_{C}(v_n(t))+ (h(x_n(\delta _n(t)))+\sqrt{\gamma })\mathbb {B}\cap F(t,x_n(\delta _n(t))) +\frac{3\sqrt{\varepsilon _n}}{\mu _n}\mathbb {B}. \end{aligned} \end{aligned}$$

Moreover, by Theorem 1, we have

$$\begin{aligned} d_C(x_n(t))\le K_5\mu _n+2\sqrt{\varepsilon _n} \text { for all } t\in [0,T]. \end{aligned}$$

Next, fix \(t\in [0,T]\) and define \(K(t):=\{x_n(t): n\in \mathbb {N}\}\). We claim that K(t) is relatively compact. Indeed, let \(x_m(t) \in K(t)\) and take \(y_m(t)\in {\text {Proj}}_C(x_m(t))\) (the projection exists due to the ball compactness of C and the boundedness of K(t)). Moreover, according to the above inequality and Theorem 1,

$$\begin{aligned} \Vert y_n(t)\Vert \le d_C(x_n(t))+\Vert x_n(t)\Vert \le K_5\mu _n+2\sqrt{\varepsilon _n}+K_2, \end{aligned}$$

which entails that \(y_n(t)\in C\cap R\, \mathbb {B}\) for all \(n\in \mathbb {N}\) for some \(R>0\). Thus, by the ball-compactness of C, there exists a subsequence \((y_{m_k}(t))\) of \((y_m(t))\) converging to some y(t) as \(k\rightarrow +\infty \). Then,

$$\begin{aligned} \begin{aligned} \Vert x_{m_k}(t)-y(t)\Vert&\le d_{C}(x_{m_k}(t))+\Vert y_{m_k}(t)-y(t)\Vert \\&\le K_5\mu _{m_k}+2\sqrt{\varepsilon _{m_k}}+\Vert y_{m_k}(t)-y(t)\Vert , \end{aligned} \end{aligned}$$

which implies that K(t) is relatively compact. Moreover, it follows readily from Theorem 1 that the set \(K:=(x_n)\) is equicontinuous. Therefore, by virtue of Theorem 1, the Arzelà–Ascoli theorem, and Lemma 4, we obtain the existence of a Lipschitz function x and a subsequence \((x_j)\) of \((x_n)\) such that

(i) \((x_j)\) converges uniformly to x on [0, T];

(ii) \(\dot{x}_j\rightharpoonup \dot{x}\) in \(L^1\left( [0,T];\mathcal {H}\right) \);

(iii) \(x_j(\theta _j(t))\rightarrow x(t)\) for all \(t\in [0,T]\);

(iv) \(x_j(\delta _j(t))\rightarrow x(t)\) for all \(t\in [0,T]\);

(v) \(v_j(t)\rightarrow x(t)\) for all \(t\in [0,T]\);

(vi) \(x(t)\in C\) for all \(t\in [0,T]\).

By Mazur’s lemma, there is a sequence \((y_j)\) such that, for all j, \(y_j\in \text {co}(\dot{x}_k:k\ge j)\) and \((y_j)\) converges strongly to \(\dot{x}\) in \(L^1([0,T];\mathcal {H})\). That is to say,

$$\begin{aligned} y_j(t)\in \text {co}\left( -\alpha _n\partial d_{C}(v_n(t)) + \beta _n\mathbb {B}\cap F(t,x_n(\delta _n(t))) +\frac{3\sqrt{\varepsilon _n}}{\mu _n}\mathbb {B}:n\ge j\right) , \end{aligned}$$

where \(\alpha _n:= \frac{4\sqrt{\varepsilon _n}}{\mu _n}+h(x_n(\delta _n(t)))+\sqrt{\gamma }\) and \(\beta _n:= \frac{4\sqrt{\varepsilon _n}}{\mu _n}+h(x_n(\delta _n(t)))\). On the other hand, there exists a subsequence \((y_{n_j})\) of \((y_j)\) converging to \(\dot{x}\) almost everywhere in [0, T]. Then, using Lemma 2, Lemma 3, and \((\mathcal {H}_1^F)\), we have

$$\begin{aligned} \dot{x}(t)\in -(h(x(t))+\sqrt{\gamma })\partial d_{C}(x(t)) +(h(x(t))+\sqrt{\gamma })\mathbb {B}\cap F(t,x(t)) \text { for a.e. } t\in [0,T]. \end{aligned}$$

It is clear that x satisfies the bound (20). Finally, since \(\partial d_{C}(x(t))\subset N(C;x(t))\) for all \(t\in [0,T]\), we obtain that x is a solution of (19). \(\square \)

7 Numerical Methods for Approximate Projections

As stated before, in most cases an explicit formula for the projection onto a closed set is not available. Therefore, one must resort to numerical algorithms to obtain approximate projections. Several papers discuss this issue for different notions of approximate projection (see, e.g., [32]). These algorithms, called projection oracles, provide an approximate solution \(\bar{z}\in \mathcal {H}\) to the following optimization problem:

$$\begin{aligned} \min _{z\in C} \Vert x-z\Vert ^2, \end{aligned}$$
(\(P_x\))

where C is a given closed set and \(x\in \mathcal {H}\). Whether the approximate solution \(\bar{z}\) belongs to the set C or not depends on the notion of approximate projection. In our case, to implement our algorithm, we need \(\bar{z}\in C\). A well-known projection oracle fulfilling this property can be obtained via the celebrated Frank–Wolfe algorithm (see, e.g., [18, 22]), where a linear sub-problem of (\(P_x\)) is solved at each iteration. For several classes of convex sets, this method has been successfully applied (see [5, 13, 22]). Moreover, in [16], it was shown that an approximate solution of the linear sub-problem suffices to obtain a projection oracle.
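A minimal Frank–Wolfe sketch for (\(P_x\)) reads as follows. It assumes a user-supplied linear minimization oracle `lmo` for C (the names `fw_project` and `lmo` are illustrative); since every iterate is a convex combination of points of C, the output always belongs to C, as our algorithm requires.

```python
import numpy as np

def fw_project(x, lmo, z0, iters=200):
    """Approximately solve min_{z in C} ||x - z||^2 by Frank-Wolfe.
    lmo(d) returns a minimizer of <d, s> over C (the linear sub-problem).
    Every iterate is a convex combination of points of C, hence lies in C."""
    z = np.asarray(z0, dtype=float)
    for k in range(iters):
        grad = 2.0 * (z - x)             # gradient of z -> ||x - z||^2
        s = lmo(grad)                    # solve the linear sub-problem over C
        gamma = 2.0 / (k + 2.0)          # standard open-loop step size
        z = (1.0 - gamma) * z + gamma * s
    return z

# Toy example: C is the closed unit ball, so lmo(d) = -d / ||d||.
lmo_ball = lambda d: -d / np.linalg.norm(d)
z = fw_project(np.array([3.0, 0.0]), lmo_ball, z0=np.array([0.0, 1.0]))
```

Here z is a feasible approximation of the exact projection of (3, 0) onto the unit ball, namely (1, 0).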

Another important approach to obtaining approximate projections is the Frank–Wolfe algorithm with separation oracles (see [14]). Roughly speaking, a separation oracle determines whether a given point belongs to a set and, if it does not, provides a hyperplane separating the point from the set (see [19] for more details). For particular sets, it is easy to obtain an explicit separation oracle (see [19, p. 49]). An important example is the case of a sublevel set: let \(g:\mathcal {H}\rightarrow \mathbb {R}\) be a continuous convex function and \(\lambda \in \mathbb {R}\). Then \([g\le \lambda ]:= \{x\in \mathcal {H}: g(x)\le \lambda \}\) admits a separation oracle described as follows. Verifying that a given point belongs to \([g\le \lambda ]\) amounts to evaluating g. When a point \(x\in \mathcal {H}\) does not belong to \([g\le \lambda ]\), we can consider any \(x^*\in \partial g(x)\). Then, for all \(y\in [g\le \lambda ]\),

$$\begin{aligned} \langle x^*,x\rangle \ge g(x)-g(y)+\langle x^*,y\rangle > \langle x^*,y\rangle , \end{aligned}$$

where we have used that \(g(x)>\lambda \ge g(y)\). Hence, the above inequality shows the existence of the desired hyperplane, which provides a separation oracle for \([g\le \lambda ]\). Therefore, if C is a sublevel set of some convex function, we can use the algorithm proposed in [14] to obtain an approximate solution \(\bar{z}\in {\text {proj}}_C^\epsilon (x)\). Moreover, the sublevel-set description enables us to consider the case
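The separation oracle just described can be sketched in a few lines (the names are illustrative, and `subgrad` is assumed to return some element of \(\partial g(x)\)):

```python
import numpy as np

def sublevel_separation_oracle(g, subgrad, lam):
    """Separation oracle for [g <= lam] with g convex.
    Membership is checked by evaluating g; for an outside point x,
    any x* in the subdifferential of g at x satisfies
    <x*, x> > <x*, y> for all y with g(y) <= lam."""
    def oracle(x):
        if g(x) <= lam:
            return True, None         # x belongs to [g <= lam]
        return False, subgrad(x)      # normal of a separating hyperplane
    return oracle

# Toy example: [g <= 1] with g(x) = ||x||^2 is the closed unit ball,
# and 2x is the gradient (hence the unique subgradient) of g at x.
oracle = sublevel_separation_oracle(lambda x: float(np.dot(x, x)),
                                    lambda x: 2.0 * x, lam=1.0)
inside, _ = oracle(np.array([0.5, 0.0]))
outside, xstar = oracle(np.array([2.0, 0.0]))
```

For the outside point (2, 0), the returned vector \(x^*=(4,0)\) indeed separates it from the ball.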

$$\begin{aligned} C(t):= \bigcap _{i=1}^m\{x\in \mathcal {H}:g_i(t,x)\le 0\}=\left\{ x\in \mathcal {H}:g(t,x):=\max _{i=1, \ldots ,m}g_i(t,x)\le 0\right\} , \end{aligned}$$

where, for all \(t\in [0,T]\), the functions \(g_i(t,\cdot ):\mathcal {H}\rightarrow \mathbb {R}\), \(i=1,\ldots ,m\), are convex. We refer to [2, Proposition 5.1] for suitable assumptions on these functions ensuring that the Lipschitz property (7) of the map \(t\rightrightarrows C(t)\) holds.
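For this intersection, a single oracle for the aggregated sublevel set can be built from the component functions, since a subgradient of the max at a point is obtained from any active (maximizing) component. Again a sketch with illustrative names, for a fixed t:

```python
import numpy as np

def max_separation_oracle(gs, subgrads):
    """Separation oracle for the intersection of the sets {g_i <= 0},
    written as the single sublevel set [max_i g_i <= 0].  A subgradient
    of the max at x is any subgradient of a maximizing component g_i."""
    def oracle(x):
        vals = [g(x) for g in gs]
        i = max(range(len(vals)), key=vals.__getitem__)   # an active index
        if vals[i] <= 0.0:
            return True, None
        return False, subgrads[i](x)
    return oracle

# Toy example: the slab {-1 <= x_0 <= 1} as {x_0 - 1 <= 0} ∩ {-x_0 - 1 <= 0}.
oracle = max_separation_oracle(
    [lambda x: x[0] - 1.0, lambda x: -x[0] - 1.0],
    [lambda x: np.array([1.0, 0.0]), lambda x: np.array([-1.0, 0.0])])
inside, _ = oracle(np.array([0.0, 0.0]))
outside, xstar = oracle(np.array([2.0, 0.0]))
```

The point (2, 0) violates the first constraint, so the oracle returns its subgradient (1, 0) as the separating normal.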

8 Concluding Remarks

In this paper, we have developed an enhanced version of the catching-up algorithm for sweeping processes through an appropriate concept of approximate projection. We established the convergence of the proposed algorithm in three frameworks: prox-regular, subsmooth, and merely closed sets. We also provided some insights into numerical procedures for obtaining approximate projections, mainly in the convex case. Finally, the convergence of our algorithm for other notions of approximate solutions will be explored in forthcoming works.