
Gradient flows, and hence minimizing movements, in general do not commute with the convergence of functionals, even when the convergence is uniform. As a simple example, take \(X = \mathbb{R}\) and

$$\displaystyle{F_{\varepsilon }(x) = {x}^{2} -\rho \,\sin {\Bigl (\frac{x} {\varepsilon } \Bigr )},}$$

with \(\rho =\rho _{\varepsilon } \rightarrow 0\) as \(\varepsilon \rightarrow 0\), uniformly converging to \(F(x) = {x}^{2}\). If also

$$\displaystyle{\varepsilon \ll \rho,}$$

then for fixed \(x_{0}\) the solutions \(u_{\varepsilon }\) to the equation

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} u^{\prime}_{\varepsilon } = -2u_{\varepsilon } + \frac{\rho } {\varepsilon }\cos {\Bigl (\frac{u_{\varepsilon }} {\varepsilon } \Bigr )}\quad \\ u_{\varepsilon }(0) = x_{0} \quad \end{array} \right.}$$

converge to the constant function \(u_{0}(t) = x_{0}\) as \(\varepsilon \rightarrow 0\). This is easily seen by studying the stationary solutions of

$$\displaystyle{-2x + \frac{\rho } {\varepsilon }\cos {\Bigl (\frac{x} {\varepsilon } \Bigr )} = 0.}$$

Conversely, the gradient flow of the limit is

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} u^{\prime} = -2u \quad \\ u(0) = x_{ 0},\quad \end{array} \right.}$$

for which constant functions are not solutions if \(x_{0}\neq 0\).
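
This lack of commutativity is easy to observe numerically. The following sketch is only an illustration (the values \(\varepsilon = 10^{-2}\), \(\rho_{\varepsilon } = \sqrt{\varepsilon }\), the time horizon and the explicit Euler discretization are choices made here, not taken from the text): it integrates the perturbed gradient flow and compares it with the flow of the limit energy.

```python
import math

def flow_eps(x0, eps, rho, T=1.0, dt=1e-5):
    """Explicit Euler for u' = -2u + (rho/eps) cos(u/eps), u(0) = x0 (a rough sketch)."""
    u = x0
    for _ in range(int(T / dt)):
        u += dt * (-2.0 * u + (rho / eps) * math.cos(u / eps))
    return u

x0, eps = 1.0, 1e-2
rho = math.sqrt(eps)              # an illustrative choice with eps << rho
print(flow_eps(x0, eps, rho))     # stays close to x0 = 1: trapped near a critical point of F_eps
print(x0 * math.exp(-2.0))        # gradient flow of the limit F(x) = x^2 at time T = 1: ~0.135
```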

With the remark above in mind, in order to give a meaningful limit for the energy-driven motion along a sequence of functionals it may be useful to vary the definition of minimizing movement. This will be done in the following section. As in the previous chapter, we will limit our analysis to a Hilbert setting for simplicity.

8.1 Minimizing Movements Along a Sequence

In this section we will give a notion of minimizing movement along a sequence \(F_{\varepsilon }\), which will depend in general on the interaction between the time scale τ and the parameter \(\varepsilon\) in the energies.

Definition 8.1 (minimizing movements along a sequence).

Let X be a separable Hilbert space, let \(F_{\varepsilon }: X \rightarrow [0,+\infty ]\) be equicoercive and lower semicontinuous, \(x_{0}^{\varepsilon } \rightarrow x_{0}\) with

$$\displaystyle{ F_{\varepsilon }(x_{0}^{\varepsilon }) \leq C < +\infty, }$$
(8.1)

and let \(\tau _{\varepsilon } > 0\) converge to 0 as \(\varepsilon \rightarrow 0\). With fixed \(\varepsilon > 0\) we define \(x_{k}^{\varepsilon }\) recursively as a minimizer for the problem

$$\displaystyle{ \min \Bigl \{F_{\varepsilon }(x) + \frac{1} {2\tau }\|x - x_{k-1}^{\varepsilon }\|{}^{2}\Bigr \}, }$$
(8.2)

and the piecewise-constant trajectory \({u}^{\varepsilon }: [0,+\infty ) \rightarrow X\) given by

$$\displaystyle{ {u}^{\varepsilon }(t) = x_{\lfloor t/\tau _{\varepsilon }\rfloor }^{\varepsilon }. }$$
(8.3)

A minimizing movement for \(F_{\varepsilon }\) from \(x_{0}^{\varepsilon }\) is any limit of a subsequence \({u}^{\varepsilon _{j}}\), uniform on compact sets of \([0,+\infty )\).
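
In one dimension the scheme of Definition 8.1 can be implemented directly. The following is only an illustrative sketch of the iteration: the exact minimization in (8.2) is replaced by an argmin over a fixed fine grid of candidate points, an approximation introduced here and not part of the definition.

```python
import numpy as np

def minimizing_movement(F, x0, tau, n_steps, grid):
    """Iterate x_k = argmin over the grid of F(x) + |x - x_{k-1}|^2 / (2 tau).

    F    : vectorized function on the grid (plays the role of F_eps)
    grid : 1-D array of candidate points; it must be fine enough to resolve
           the oscillations of F and large enough to contain the trajectory.
    """
    traj = [x0]
    for _ in range(n_steps):
        vals = F(grid) + (grid - traj[-1]) ** 2 / (2.0 * tau)
        traj.append(grid[np.argmin(vals)])
    return np.array(traj)
```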

After remarking that the Hölder continuity estimates in Proposition 7.1 only depend on the bound on \(F_{\varepsilon }(x_{0}^{\varepsilon })\), with the same proof we can show the following result.

Proposition 8.1.

For every \(F_{\varepsilon }\) and \(x_{0}^{\varepsilon }\) as above there exist minimizing movements for \(F_{\varepsilon }\) from \(x_{0}^{\varepsilon }\) in \({C}^{1/2}([0,+\infty );X)\) .

Remark 8.1 (Growth conditions).

As for the case of a single functional, the positivity of \(F_{\varepsilon }\) can be replaced by the requirement that for all \(\overline{x}\) the functionals

$$\displaystyle{x\mapsto F_{\varepsilon }(x) + \frac{1} {2\tau }\|x -{\overline{x}\|}^{2}}$$

be bounded from below; i.e., that there exists \(C > 0\) such that for all \(\overline{x}\) the functional

$$\displaystyle{x\mapsto F_{\varepsilon }(x) + C\|x -{\overline{x}\|}^{2}}$$

is bounded from below.

Example 8.1.

We give a simple example that shows how the limit minimizing movement may depend on the choice of the mutual behavior of \(\varepsilon\) and τ. We consider the functions

$$\displaystyle{F_{\varepsilon }(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} -x\quad &\text{if}\,x \leq 0 \\ 0 \quad &\text{if}\,0 \leq x \leq \varepsilon \\ \varepsilon -x\quad &\text{if}\,x \geq \varepsilon, \end{array} \right.}$$

which converge uniformly to \(F(x) = -x\). Note that the energies are not bounded from below, but their analysis falls within the framework of the previous remark. For this example a direct computation can be carried out immediately. We consider a fixed initial datum \(x_{0}\).

If \(x_{0} > 0\), then for \(\varepsilon < x_{0}\) we have \(x_{k}^{\varepsilon } = x_{k-1}^{\varepsilon }+\tau\) for all \(k \geq 1\).

If \(x_{0} \leq 0\) then we have \(x_{k}^{\varepsilon } = x_{k-1}^{\varepsilon }+\tau\) if \(x_{k-1}^{\varepsilon } \leq -\tau\). If \(0 \geq x_{k-1}^{\varepsilon } > -\tau\) then \(x_{k}^{\varepsilon } - x_{k-1}^{\varepsilon }\) is obtained by minimizing the function

$$\displaystyle{f(y) = \left \{\begin{array}{@{}l@{\quad }l@{}} -y + \frac{1} {2\tau }{y}^{2} \quad &\text{if}\,0 \leq y \leq -x_{ k-1}^{\varepsilon } \\ \quad \\ x_{k-1}^{\varepsilon } + \frac{1} {2\tau }{y}^{2}\quad &\text{if}\, - x_{ k-1}^{\varepsilon } \leq y \leq -x_{ k-1}^{\varepsilon }+\varepsilon \\ \quad \\ \varepsilon -y + \frac{1} {2\tau }{y}^{2} \quad &\text{if}\,y \geq -x_{ k-1}^{\varepsilon }+\varepsilon, \end{array} \right.}$$

whose minimizer is \(y = -x_{k-1}^{\varepsilon }\) if \(\varepsilon -x_{k-1}^{\varepsilon } >\tau\). In this case \(x_{k}^{\varepsilon } = 0\). If otherwise \(\varepsilon -x_{k-1}^{\varepsilon } \leq \tau\), the other possible minimizer is \(y =\tau\). We then have to compare the values

$$\displaystyle{f(-x_{k-1}^{\varepsilon }) = x_{ k-1}^{\varepsilon } + \frac{1} {2\tau }{(x_{k-1}^{\varepsilon })}^{2},\qquad f(\tau ) =\varepsilon -\frac{1} {2}\tau.}$$

We have three cases:

  1. (a)

    \(\varepsilon -\frac{1} {2}\tau > 0\). In this case we have \(x_{k}^{\varepsilon } = 0\) (and this holds for all subsequent steps).

  2. (b)

    \(\varepsilon -\frac{1} {2}\tau < 0\). In this case we either have \(f(\tau ) < f(-x_{k-1}^{\varepsilon })\), in which case \(x_{k}^{\varepsilon } = x_{k-1}^{\varepsilon }+\tau\) (and this then holds for all subsequent steps); otherwise \(x_{k}^{\varepsilon } = 0\) and \(x_{k+1}^{\varepsilon } = x_{k}^{\varepsilon }+\tau\) (and this holds for all subsequent steps).

  3. (c)

    \(\varepsilon -\frac{1} {2}\tau = 0\). If \(x_{k-1}^{\varepsilon } < 0\) then \(x_{k}^{\varepsilon } = 0\) (otherwise we already have \(x_{k-1}^{\varepsilon } = 0\)). Then, since we have the two solutions \(y = 0\) and \(y =\tau\), we have \(x_{j}^{\varepsilon } = 0\) for \(k \leq j \leq k_{0}\) for some \(k_{0} \in \mathbb{N} \cup \{+\infty \}\) and \(x_{j}^{\varepsilon } = x_{j-1}^{\varepsilon }+\tau\) for \(j > k_{0}\).

We can summarize the possible minimizing movements with initial datum \(x_{0} \leq 0\) as follows:

  1. (i)

    If \(\tau < 2\varepsilon\) then the unique minimizing movement is \(x(t) =\min \{ x_{0} + t,0\}\).

  2. (ii)

    If \(\tau > 2\varepsilon\) then the unique minimizing movement is \(x(t) = x_{0} + t\).

  3. (iii)

    If \(\tau = 2\varepsilon\) then we have the family of minimizing movements (parameterized by \(x_{1} \leq x_{0}\)) \(x(t) =\max {\bigl \{\min \{ x_{0} + t,0\},x_{1} + t\bigr \}}\).

    For \(x_{0} > 0\) the unique minimizing movement is always \(x(t) = x_{0} + t\).
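
A quick numerical experiment (based on the same grid-argmin approximation as in the sketch after Definition 8.1, with illustrative values of \(\varepsilon\), \(\tau\) and \(x_{0}\)) shows the transition at \(\tau = 2\varepsilon\).

```python
import numpy as np

def F_ex81(eps):
    """The energy of Example 8.1: -x for x <= 0, 0 on [0, eps], eps - x for x >= eps."""
    return lambda x: np.where(x <= 0, -x, np.where(x <= eps, 0.0, eps - x))

def final_position(eps, tau, x0=-1.0, T=2.0):
    F = F_ex81(eps)
    grid = np.arange(-1.5, 1.5, 1e-4)      # grid spacing well below eps
    x = x0
    for _ in range(int(T / tau)):
        x = grid[np.argmin(F(grid) + (grid - x) ** 2 / (2.0 * tau))]
    return x

eps = 1e-2
print(final_position(eps, tau=0.015))  # tau < 2*eps: close to 0, as in x(t) = min(x0 + t, 0)
print(final_position(eps, tau=0.030))  # tau > 2*eps: close to x0 + T = 1, as in x(t) = x0 + t
```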

8.2 Commutability Along ‘Fast-Converging’ Sequences

We now show that, by suitably choosing the \(\varepsilon\)-τ regimes, the minimizing movement along the sequence \(F_{\varepsilon }\) from \(x_{\varepsilon }\) converges to a minimizing movement for the limit F from x 0 (‘fast-converging \(\varepsilon\)’), while for other choices (‘fast-converging τ’) the minimizing movement converges to a limit of minimizing movements for \(F_{\varepsilon }\) as \(\varepsilon \rightarrow 0\). Heuristically, minimizing movements for all other regimes are ‘trapped’ between these two extrema.

Theorem 8.1.

Let \(F_{\varepsilon }\) be an equi-coercive sequence of (non-negative) lower-semicontinuous functionals on a Hilbert space X, Γ-converging to F, and let \(x_{\varepsilon } \rightarrow x_{0}\) . Then:

  1. (i)

    There exists a choice of \(\varepsilon =\varepsilon (\tau )\) such that every minimizing movement along \(F_{\varepsilon }\) (and with time-step τ) with initial data \(x_{\varepsilon }\) is a minimizing movement for F from x 0 on [0,T] for all T.

  2. (ii)

    There exists a choice of \(\tau =\tau (\varepsilon )\) such that every minimizing movement along \(F_{\varepsilon }\) (and with time-step τ) with initial data \(x_{\varepsilon }\) is a limit of a sequence of minimizing movements for \(F_{\varepsilon }\) (for \(\varepsilon\) fixed) from \(x_{\varepsilon }\) on [0,T] for all T.

Proof.

  1. (i)

    Note that if \(y_{\varepsilon } \rightarrow y_{0}\) then the solutions of

    $$\displaystyle{ \min {\Bigl \{F_{\varepsilon }(x) + \frac{1} {2\tau }\|x - y{_{\varepsilon }\|}^{2}\Bigr \}} }$$
    (8.4)

    converge to solutions of

    $$\displaystyle{ \min {\Bigl \{F(x) + \frac{1} {2\tau }\|x - y{_{0}\|}^{2}\Bigr \}} }$$
    (8.5)

    since we have a continuously converging perturbation of a Γ-converging sequence.

    Now let \(x_{\varepsilon } \rightarrow x_{0}\). Let τ be fixed. We consider the sequence \(\{x_{k}^{\tau,\varepsilon }\}\) defined by iterated minimization of \(F_{\varepsilon }\) with initial point \(x_{\varepsilon }\). Since \(x_{\varepsilon } \rightarrow x_{0}\), up to subsequences we have \(x_{1}^{\tau,\varepsilon } \rightarrow x_{1}^{\tau,0}\), which minimizes

    $$\displaystyle{ \min {\Bigl \{F(x) + \frac{1} {2\tau }\|x - x{_{0}\|}^{2}\Bigr \}}. }$$
    (8.6)

    The points \(x_{2}^{\tau,\varepsilon }\) converge, up to subsequences, to some \(x_{2}^{\tau,0}\). Since they minimize

    $$\displaystyle{ \min \Bigl \{F_{\varepsilon }(x) + \frac{1} {2\tau }\|x - x_{1}^{\tau,\varepsilon }\|{}^{2}\Bigr \} }$$
    (8.7)

    and \(x_{1}^{\tau,\varepsilon } \rightarrow x_{1}^{\tau,0}\), their limit is a minimizer of

    $$\displaystyle{ \min \Bigl \{F(x) + \frac{1} {2\tau }\|x - x_{1}^{\tau,0}\|{}^{2}\Bigr \}. }$$
    (8.8)

    This operation can be repeated iteratively, obtaining (up to subsequences) \(x_{k}^{\tau,\varepsilon } \rightarrow x_{k}^{\tau,0}\), where \(\{x_{k}^{\tau,0}\}\) iteratively minimizes F with initial point \(x_{0}\). Since, up to subsequences, the trajectories \(\{x_{k}^{\tau,0}\}\) converge to a minimizing movement for F with initial datum \(x_{0}\), the claim follows by a diagonal argument.

  2. (ii)

    For fixed \(\varepsilon\), the piecewise-constant functions \({u}^{\varepsilon,\tau }(t) = x_{\lfloor t/\tau \rfloor }^{\varepsilon,\tau }\) converge uniformly (up to subsequences) to a minimizing movement \({u}^{\varepsilon }\) for \(F_{\varepsilon }\) with initial datum \(x_{\varepsilon }\). By compactness, these \({u}^{\varepsilon }\) converge uniformly to some function u as \(\varepsilon \rightarrow 0\). Again, a diagonal argument gives the claim. □

Remark 8.2.

Note that, given \(x_{\varepsilon }\) and \(F_{\varepsilon }\), if F has more than one minimizing movement from x 0 then the approximation gives a choice criterion. As an example, take \(F(x) = -\vert x\vert \), \(F_{\varepsilon }(x) = -\vert x +\varepsilon \vert \) and \(x_{0} = x_{\varepsilon } = 0\).

Remark 8.3 (The convex case).

If all \(F_{\varepsilon }\) are convex then it can be shown that, actually, the minimizing movement along the sequence \(F_{\varepsilon }\) always coincides with the minimizing movement for their Γ-limit. This (exceptional) case will be dealt with in detail separately in Chap. 11.

Example 8.2.

In dimension one, we can take

$$\displaystyle{F_{\varepsilon }(x) = \frac{1} {2}{x}^{2} +\varepsilon \, W{\Bigl (\frac{x} {\varepsilon } \Bigr )},}$$

where W is a one-periodic odd Lipschitz function with \(\|W^{\prime}\|_{\infty } = 1\). Up to addition of a constant, it is not restrictive to suppose that the average of W is 0. We check that the critical regime for the minimizing movements along \(F_{\varepsilon }\) is \(\varepsilon \sim \tau\). Indeed, if \(\varepsilon \ll \tau\) then from the estimate

$$\displaystyle{{\Bigl |F_{\varepsilon }(x) -\frac{1} {2}{x}^{2}\Bigr |} \leq \frac{\varepsilon } {2}}$$

we deduce that

$$\displaystyle{\frac{x_{k} - x_{k-1}} {\tau } = -x_{k} + O{\Bigl (\frac{\varepsilon } {\tau }\Bigr )},}$$

and hence that the limit minimizing movement satisfies \(u^{\prime} = -u\), so that it corresponds to the minimizing movement of the limit \(F_{0}(x) = \frac{1} {2}{x}^{2}\).

Conversely, if \(\tau \ll \varepsilon\) then it may be seen that for \(\vert x_{0}\vert \leq 1\) the motion is pinned; i.e., the resulting minimizing movement is the trivial solution \(u(t) = x_{0}\) for all t. If \(W \in {C}^{2}\) this is easily checked, since in this case the stationary solutions, corresponding to x satisfying

$$\displaystyle{x + W^{\prime}{\Bigl (\frac{x} {\varepsilon } \Bigr )} = 0}$$

tend to be dense in the interval [−1, 1] as \(\varepsilon \rightarrow 0\). Moreover, in this regime the minimizing movement corresponds to the limit as \(\varepsilon \rightarrow 0\) of the minimizing movements of \(F_{\varepsilon }\) for \(\varepsilon\) fixed; i.e., solutions \(u_{\varepsilon }\) of the gradient flow

$$\displaystyle{u_{\varepsilon }^{\prime} = -u_{\varepsilon } - W^{\prime}{\Bigl (\frac{u_{\varepsilon }} {\varepsilon } \Bigr )}.}$$

Integrating between \(t_{1}\) and \(t_{2}\) we have

$$\displaystyle{\int _{u_{\varepsilon }(t_{1})}^{u_{\varepsilon }(t_{2})} \frac{1} {s + W^{\prime}(s/\varepsilon )}\,\mathit{ds} = t_{1} - t_{2}.}$$

By the uniform convergence \(u_{\varepsilon } \rightarrow u\) we can pass to the limit, recalling that the integrand weakly converges to the function 1∕g defined by

$$\displaystyle{ \frac{1} {g(s)} =\int _{ 0}^{1} \frac{1} {s + W^{\prime}(\sigma )}d\sigma,}$$

and obtain the equation

$$\displaystyle{u^{\prime} = -g(u).}$$

This equation corresponds to the minimizing movement for the even energy \(\tilde{F}_{0}\) given for x ≥ 0 by

$$\displaystyle{\tilde{F}_{0}(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} 0 \quad &\text{if}\,x \leq 1\\ \quad \\ \int _{ 1}^{x}g(w)\,\mathit{dw}\quad &\text{if}\,x \geq 1. \end{array} \right.}$$

The plot of the derivatives of \(F_{\varepsilon }\), \(F_{0}\) and \(\tilde{F}_{0}\) is reproduced in Fig. 8.1.

Fig. 8.1 The derivatives of \(F_{\varepsilon }\), \(F_{0}\) and \(\tilde{F}_{0}\)

We can explicitly compute the minimizing movement for \(\tau \ll \varepsilon\); e.g., in the case

$$\displaystyle{W(x) = \frac{1} {2\pi }\sin (2\pi x),}$$

which gives the equation

$$\displaystyle{u^{\prime} = \sqrt{{u}^{2 } - 1},}$$

for \(\vert x_{0}\vert \geq 1\), and

$$\displaystyle{\tilde{F}_{0}(x) = \frac{1} {2}{\Bigl (\vert x\vert \sqrt{{x}^{2 } - 1} -\log {\Bigl (\vert x\vert + \sqrt{{x}^{2 } - 1}\Bigr )}\Bigr )}}$$

for |x| > 1, and in the case

$$\displaystyle{ W(x) ={\Bigl | x -\frac{1} {2}\Bigr |} -\frac{1} {4}\quad \text{ for }0 \leq x \leq 1. }$$
(8.9)

In the latter case, the solutions with initial datum \(x_{0} > 1\) satisfy the equation

$$\displaystyle{u^{\prime} = \frac{1} {u} - u.}$$

Integrating this limit equation we conclude that the minimizing movement along \(F_{\varepsilon }\) corresponds to that of the effective energy

$$\displaystyle{\tilde{F}_{0}(x) ={\Bigl ( \frac{1} {2}{x}^{2} -\log \vert x\vert -{\frac{1} {2}\Bigr )}}^{+}.}$$
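The two explicit formulas can be double-checked from the definition of g: the claim is that \(1/g(s) =\int _{0}^{1}d\sigma /(s + W^{\prime}(\sigma ))\) gives \(g(s) = \sqrt{{s}^{2} - 1}\) for the sine potential and \(g(s) = s - 1/s\) for the potential in (8.9). A minimal midpoint-rule check (the value s = 2 and the number of quadrature points are arbitrary illustrative choices):

```python
import numpy as np

def g(s, Wprime, n=200000):
    """Effective velocity defined by 1/g(s) = integral over [0,1] of 1/(s + W'(sigma))."""
    sigma = (np.arange(n) + 0.5) / n              # midpoint rule on [0, 1]
    return 1.0 / np.mean(1.0 / (s + Wprime(sigma)))

s = 2.0
W1 = lambda sig: np.cos(2 * np.pi * sig)           # W(x) = sin(2 pi x) / (2 pi)
W2 = lambda sig: np.where(sig < 0.5, -1.0, 1.0)    # W as in (8.9)
print(g(s, W1), np.sqrt(s**2 - 1))                 # both ~ 1.732
print(g(s, W2), s - 1.0 / s)                       # both ~ 1.5
```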

Example 8.3 (Pinning threshold).

In the previous example we have computed the critical regime \(\varepsilon \sim \tau\), but we have not computed the minimizing movement for a fixed ratio \(\varepsilon /\tau\). In this case, a simpler interesting problem is the computation of the pinning threshold; i.e., the maximal value T such that \(\vert x_{0}\vert \leq T\) gives in the limit a stationary minimizing movement. We have seen that for \(\varepsilon \ll \tau\) we have T = 0, while for \(\tau \ll \varepsilon\) we have T = 1. After considering the linearization of the problem above, the pinning threshold can be characterized as the greatest value T such that we have only stationary minimizing movements for the energies

$$\displaystyle{F_{\varepsilon }^{T}(x) = \mathit{Tx} +\varepsilon \, W{\Bigl (\frac{x} {\varepsilon } \Bigr )}.}$$

In order to have an explicit description of T = T(γ) in terms of \(\gamma:=\varepsilon /\tau\), we only treat the case of

$$\displaystyle{ W(x) = \vert x\vert \qquad \text{ for }\vert x\vert \leq \frac{1} {2}, }$$
(8.10)

which gives the same limit as the one in (8.9). By comparison with the case \(\tau \ll \varepsilon\), we have T(γ) ≤ 1 for all γ.

By a comparison argument, it is not restrictive to suppose that \(x_{0} \in \varepsilon \mathbb{Z}\), and then, by translation, that \(x_{0} = 0\). The problem then translates into the existence of negative minimizers for the problem

$$\displaystyle{\min \Bigr \{\mathit{Tx} +\varepsilon \, W\Bigl (\frac{x} {\varepsilon } \Bigr ) + \frac{1} {2\tau }{x}^{2}\Bigr \}.}$$

Since T ≤ 1 and \(W^{\prime} = -1\) in \([-\varepsilon /2,0]\), this holds only if we have a negative value in \([-\varepsilon,-\varepsilon /2]\), or equivalently if

$$\displaystyle\begin{array}{rcl} 0& >& \min \Bigr \{\mathit{Tx} +\varepsilon \, W\Bigl (\frac{x} {\varepsilon } \Bigr ) + \frac{1} {2\tau }{x}^{2}: -\varepsilon \leq x \leq -\varepsilon /2\Bigr \} {}\\ & =& \min \Bigr \{(T + 1)x +\varepsilon +\frac{1} {2\tau }{x}^{2}: -\varepsilon \leq x \leq -\varepsilon /2\Bigr \}. {}\\ \end{array}$$

Taking again into account that T ≤ 1, it is easily seen that this minimum must be attained at \(x = -\varepsilon\), so that the condition is equivalent to

$$\displaystyle{0 > -T\varepsilon +{ \frac{1} {2\tau }\varepsilon }^{2}\text{; i.e., }T > \frac{\varepsilon } {2\tau }.}$$

This proves that we have pinning for \(T \leq \gamma /2\). In conclusion, the pinning threshold is

$$\displaystyle{T(\gamma ) =\min {\Bigl \{ \frac{\gamma } {2},1\Bigr \}}}$$

(see Fig. 8.2). As γ → 0 and \(\gamma \rightarrow +\infty \) we recover the thresholds in the limit cases.

Fig. 8.2 Pinning threshold in dependence of the ratio \(\varepsilon /\tau\)

8.2.1 Relaxed Evolution

In Theorem 8.1 we have considered, as usual for simplicity, the Γ-convergence with respect to the topology in X. In this way we characterize the convergence of solutions of problems (8.4) to solutions of problems (8.5) in terms of the Γ-limit. This is the only argument where we have used the definition of F in the proof of Theorem 8.1(i). We may consider Γ-limits with respect to weaker topologies, for which we have coerciveness but the distance term is not a continuous perturbation. In analogy with what was already observed for quasistatic motions in Chap. 3 (see, e.g., Sect. 3.1.5), the proof of Theorem 8.1(i) can be repeated, upon defining a relaxed limit motion, where the minimizing movement for F is replaced by the limit of \({u}^{\tau }\) defined by iteratively solving

$$\displaystyle{\min _{X}\mathcal{F}_{\tau }^{x_{k-1} }(x),}$$

where

$$\displaystyle{ \mathcal{F}_{\tau }^{y}(x) =\varGamma -\lim _{\varepsilon \rightarrow 0}{\Bigl (F_{\varepsilon }(x) + \frac{1} {2\tau }\|x - {y\|}^{2}\Bigr )}. }$$
(8.11)

The study of these more general minimizing movements is beyond the scope of these notes. We only give a simple example.

Example 8.4.

Consider \(X = {L}^{2}(0,1)\) and

$$\displaystyle{F_{\varepsilon }(u) =\int _{ 0}^{1}a{\Bigl (\frac{x} {\varepsilon } \Bigr )}{u}^{2}\,\mathit{dx},}$$

where a is 1-periodic and \(0 <\alpha \leq a(y) \leq \beta < +\infty \) for some constants α and β. Then \(F_{\varepsilon }\) is equicoercive with respect to the weak-\({L}^{2}\) topology, and its limit is \(\underline{a}\int _{0}^{1}{u}^{2}\,\mathit{dx}\) (\(\underline{a}\) being the harmonic mean of a). However, the perturbations with the \({L}^{2}\)-distance are not continuous, and the limits in (8.11) with respect to the weak topology are easily computed as

$$\displaystyle\begin{array}{rcl} \mathcal{F}_{\tau }^{v}(u)& =& \varGamma -\lim _{\varepsilon \rightarrow 0}{\Bigl (F_{\varepsilon }(u) + \frac{1} {2\tau }\|u - {v\|}^{2}\Bigr )} {}\\ & =& \varGamma -\lim _{\varepsilon \rightarrow 0}\int _{0}^{1}{\Biggl ({\Bigl (a{\Bigl (\frac{x} {\varepsilon } \Bigr )} + \frac{1} {2\tau }\Bigr )}{u}^{2} + \frac{({v}^{2} - 2\mathit{uv})} {2\tau } \Biggr )}\,\mathit{dx} {}\\ & =& \int _{0}^{1}{\Biggl (\underline{a}_{\tau }{u}^{2} + \frac{({v}^{2} - 2\mathit{uv})} {2\tau } \Biggr )}\,\mathit{dx} {}\\ & =& \int _{0}^{1}{\Bigl (\underline{a}_{\tau } -\frac{1} {2\tau }\Bigr )}{u}^{2}\,\mathit{dx} + \frac{1} {2\tau }\|u - {v\|}^{2}, {}\\ \end{array}$$

where

$$\displaystyle{\underline{a}_{\tau } ={\Biggl (\int _{ 0}^{1} \frac{1} {{\Bigl (a(y) + \frac{1} {2\tau }\Bigr )}}\,{\mathit{dy}\Biggr )}}^{-1}.}$$

A series expansion argument easily yields that

$$\displaystyle\begin{array}{rcl} \underline{a}_{\tau }& =& \frac{1} {2\tau }{\Biggl (\int _{0}^{1} \frac{1} {2\tau a(y) + 1}\,{\mathit{dy}\Biggr )}}^{-1} {}\\ & =& \frac{1} {2\tau }{\Biggl (\int _{0}^{1}{\Bigl (1 - 2\tau a(y) + O{(\tau }^{2})\Bigr )}\,{\mathit{dy}\Biggr )}}^{-1} {}\\ & =& \frac{1} {2\tau }{\Bigl (1 + 2\tau \int _{0}^{1}a(y)\,\mathit{dy} + O{(\tau }^{2})\Bigr )} = \frac{1} {2\tau } + \overline{a} + O(\tau ), {}\\ \end{array}$$

where \(\overline{a}\) is the arithmetic mean of a. We then obtain that the limit of \({u}^{\tau }\) coincides with the minimizing movement for \(\tilde{F}\) given by

$$\displaystyle{\tilde{F}(u) = \overline{a}\int _{0}^{1}{u}^{2}\,\mathit{dx}.}$$
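The expansion \(\underline{a}_{\tau } = \frac{1} {2\tau } + \overline{a} + O(\tau )\) is easy to test numerically; in the sketch below \(a(y) = 1 + \frac{1}{2}\cos (2\pi y)\) is an arbitrary sample coefficient (with \(\overline{a} = 1\)), and the integral is approximated by a midpoint rule.

```python
import numpy as np

a = lambda y: 1.0 + 0.5 * np.cos(2 * np.pi * y)    # sample 1-periodic coefficient with mean 1
y = (np.arange(100000) + 0.5) / 100000             # midpoint rule on [0, 1]

for tau in (1e-1, 1e-2, 1e-3):
    a_tau = 1.0 / np.mean(1.0 / (a(y) + 1.0 / (2 * tau)))
    print(tau, a_tau - 1.0 / (2 * tau))            # tends to the arithmetic mean 1 as tau -> 0
```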

The same argument leading to an effective motion can be applied to varying distances as in the following example.

Example 8.5.

We consider \(X_{\varepsilon } = X = {L}^{2}(0,1)\) equipped with the distance \(d_{\varepsilon }\) given by

$$\displaystyle{d_{\varepsilon }^{2}(u,v) =\int _{ 0}^{1}a{\Bigl (\frac{x} {\varepsilon } \Bigr )}\vert u - v{\vert }^{2}\,\mathit{dx},}$$

and \(F_{\varepsilon }(u) = F(u) =\int _{ 0}^{1}\vert u^{\prime}{\vert }^{2}\,\mathit{dx}\). For fixed v the squared distances can be seen as functionals of u, weakly equicoercive in \({L}^{2}\) and Γ-converging to \(\underline{a}\|u - {v\|}^{2}\) (\(\|u\|\) denoting the \({L}^{2}\)-norm). Nevertheless, in this case the functionals \(F_{\varepsilon }(u) + \frac{1} {2\tau }d_{\varepsilon }^{2}(u,v)\) are coercive with respect to the strong \({L}^{2}\)-norm and Γ-converge to \(F(u) + \frac{1} {2\tau }\overline{a}\|u - {v\|}^{2}\). As a conclusion, the minimizing movement coincides with the minimizing movement for F with respect to the norm \(\sqrt{\overline{a}}\|u\|\) or, equivalently, with the minimizing movement for \(\frac{1} {\overline{a}}F\) with respect to the \({L}^{2}\)-norm.

8.3 An Example: ‘Overdamped Dynamics’ of Lennard-Jones Interactions

We now give an example of a sequence of non-convex energies which commute with the minimizing movement procedure.

Let J be as in Sect. 4.4 and \(\frac{1} {\varepsilon } = N \in \mathbb{N}\). We consider the energies

$$\displaystyle{F_{\varepsilon }(u) =\sum _{ i=1}^{N}J{\Bigl (\frac{u_{i} - u_{i-1}} {\sqrt{\varepsilon }} \Bigr )}}$$

with the periodic boundary condition \(u_{N} = u_{0}\). As proved in Sect. 4.4, after identification of u with a piecewise-constant function on [0, 1], these energies Γ-converge to the energy

$$\displaystyle{F(u) =\int _{ 0}^{1}\vert u^{\prime}{\vert }^{2}\,\mathit{dt} + \#(S(u) \cap [0,1)),\qquad {u}^{+} > {u}^{-},}$$

defined on piecewise-\({H}^{1}\) functions, in this case extended 1-periodically to the whole real line.

In this section we apply the minimizing-movement scheme to \(F_{\varepsilon }\) as a sequence of functionals in \({L}^{2}(0,1)\). In order to have initial data \(u_{0}^{\varepsilon }\) with equibounded energy, we may suppose that these are the discretizations of a single piecewise-\({H}^{1}\) function \(u_{0}\) (with a slight abuse of notation we will continue to denote all these discrete functions by \(u_{0}\)).

With fixed \(\varepsilon\) and τ, the time-discretization scheme consists in recursively defining \({u}^{k}\) as a minimizer of

$$\displaystyle{ u\mapsto \sum _{i=1}^{N}J{\Bigl (\frac{u_{i} - u_{i-1}} {\sqrt{\varepsilon }} \Bigr )} + \frac{1} {2\tau }\sum _{i=1}^{N}\varepsilon \vert u_{ i} - u_{i}^{k-1}{\vert }^{2}. }$$
(8.12)

By Proposition 8.1, upon extraction of a subsequence, the functions \({u}^{\tau }(t) = {u}^{\lfloor t/\tau \rfloor }\) converge uniformly in \({L}^{2}\) to a function \(u \in {C}^{1/2}([0,+\infty );{L}^{2}(0,1))\). Moreover, since we have \(F(u(t)) \leq F(u_{0}) < +\infty\), u(t) is a piecewise-\({H}^{1}\) function for all t.

We now describe the motion of the limit u. For the sake of simplicity we suppose that \(u_{0}\) is a piecewise-Lipschitz function and that \(S(u_{0}) \cap \{\varepsilon i: i \in \{ 1,\ldots,N\}\} = \varnothing \) (so that we do not have any ambiguity in the definition of the interpolations of \(u_{0}\)).

We first write down the Euler–Lagrange equations for \({u}^{k}\), which simply amount to an N-dimensional system of equations obtained by differentiating (8.12) with respect to \(u_{i}\):

$$\displaystyle{ \frac{1} {\sqrt{\varepsilon }}{\Bigl (J^{\prime}{\Bigl (\frac{u_{i}^{k} - u_{i-1}^{k}} {\sqrt{\varepsilon }} \Bigr )}- J^{\prime}{\Bigl (\frac{u_{i+1}^{k} - u_{i}^{k}} {\sqrt{\varepsilon }} \Bigr )}\Bigr )} + \frac{\varepsilon } {\tau }(u_{i}^{k} - u_{ i}^{k-1}) = 0. }$$
(8.13)
  • With fixed \(i \in \{ 1,\ldots,N\}\) let \(v_{k}\) be defined by

    $$\displaystyle{v_{k} = \frac{u_{i}^{k} - u_{i-1}^{k}} {\varepsilon }.}$$

    For simplicity of notation, we set

    $$\displaystyle{J_{\varepsilon }(w) = \frac{1} {\varepsilon } J(\sqrt{\varepsilon }\,w).}$$

    By (8.13) and the corresponding equation for i − 1, which can be rewritten as

    $$\displaystyle{J_{\varepsilon }^{\prime}{\Bigl (\frac{u_{i-1}^{k} - u_{i-2}^{k}} {\varepsilon } \Bigr )} - J_{\varepsilon }^{\prime}{\Bigl (\frac{u_{i}^{k} - u_{i-1}^{k}} {\varepsilon } \Bigr )} + \frac{\varepsilon } {\tau }(u_{i-1}^{k} - u_{ i-1}^{k-1}) = 0,}$$

    we have

    $$\displaystyle\begin{array}{rcl} \frac{v_{k} - v_{k-1}} {\tau } & =& \frac{1} {\tau } \Bigl (\frac{u_{i}^{k} - u_{i-1}^{k}} {\varepsilon } -\frac{u_{i}^{k-1} - u_{i-1}^{k-1}} {\varepsilon } \Bigr ) {}\\ & =& \frac{1} {\varepsilon } \Bigl (\frac{u_{i}^{k} - u_{i}^{k-1}} {\tau } -\frac{u_{i-1}^{k} - u_{i-1}^{k-1}} {\tau } \Bigr ) {}\\ & =& \frac{1} {{\varepsilon }^{2}} \Biggl (\Bigl (J_{\varepsilon }^{\prime}\Bigl (\frac{u_{i-1}^{k} - u_{i-2}^{k}} {\varepsilon } \Bigr ) - J_{\varepsilon }^{\prime}\Bigl (\frac{u_{i}^{k} - u_{i-1}^{k}} {\varepsilon } \Bigr )\Bigr ) {}\\ & & -\Bigl (J_{\varepsilon }^{\prime}\Bigl (\frac{u_{i}^{k} - u_{i-1}^{k}} {\varepsilon } \Bigr ) - J_{\varepsilon }^{\prime}\Bigl (\frac{u_{i+1}^{k} - u_{i}^{k}} {\varepsilon } \Bigr )\Bigr )\Biggr ), {}\\ \end{array}$$

    so that

    $$\displaystyle\begin{array}{rcl} \frac{v_{k} - v_{k-1}} {\tau } -\frac{2} {{\varepsilon }^{2}} J_{\varepsilon }^{\prime}(v_{k})& =& -\frac{1} {{\varepsilon }^{2}} {\Bigl (J_{\varepsilon }^{\prime}{\Bigl (\frac{u_{i-1}^{k} - u_{i-2}^{k}} {\varepsilon } \Bigr )} + J_{\varepsilon }^{\prime}{\Bigl (\frac{u_{i+1}^{k} - u_{i}^{k}} {\varepsilon } \Bigr )}\Bigr )} \\ & \geq &-\frac{2} {{\varepsilon }^{2}} J_{\varepsilon }^{\prime}{\Bigl (\frac{w_{0}} {\sqrt{\varepsilon }}\Bigr )}. {}\end{array}$$
    (8.14)

    We recall that we denote by \(w_{0}\) the maximum point of J′.

    We can read (8.14) as an inequality for the difference system

    $$\displaystyle{\frac{v_{k} - v_{k-1}} {\eta } - 2J_{\varepsilon }^{\prime}(v_{k}) \geq -2J_{\varepsilon }^{\prime}{\Bigl (\frac{w_{0}} {\sqrt{\varepsilon }}\Bigr )},}$$

    where \(\eta =\tau /{\varepsilon }^{2}\) is interpreted as a discretization step. Note that \(v_{k} = w_{0}/\sqrt{\varepsilon }\) for all k is a stationary solution of the equation

    $$\displaystyle{\frac{v_{k} - v_{k-1}} {\eta } - 2J_{\varepsilon }^{\prime}(v_{k}) = -2J_{\varepsilon }^{\prime}{\Bigl (\frac{w_{0}} {\sqrt{\varepsilon }}\Bigr )}}$$

    and that \(J_{\varepsilon }^{\prime}\) are equi-Lipschitz functions on \([0,+\infty )\). If η ≪ 1 this implies that if \(v_{k_{0}} \leq w_{0}/\sqrt{\varepsilon }\) for some \(k_{0}\) then

    $$\displaystyle{v_{k} \leq \frac{w_{0}} {\sqrt{\varepsilon }}\qquad \text{ for }k \geq k_{0},}$$

    or, equivalently, that if \(\tau \ll {\varepsilon }^{2}\) the set

    $$\displaystyle{S_{\varepsilon }^{k} ={\Bigl \{ i \in \{ 1,\ldots,N\}: \frac{u_{i}^{k} - u_{ i-1}^{k}} {\varepsilon } \geq \frac{w_{0}} {\sqrt{\varepsilon }}\Bigr \}}}$$

    is decreasing with k. By our assumption on \(u_{0}\), for \(\varepsilon\) small enough we then have

    $$\displaystyle{S_{\varepsilon }^{0} ={\Bigl \{ i \in \{ 1,\ldots,N\}: [\varepsilon (i - 1),\varepsilon i] \cap S(u_{ 0})\neq \varnothing \Bigr \}},}$$

    so that, passing to the limit

    $$\displaystyle{ S(u(t)) \subseteq S(u_{0})\qquad \text{ for all }t \geq 0. }$$
    (8.15)
  • Taking into account that we may define

    $$\displaystyle{{u}^{\tau }(t,x) = u_{\lfloor x/\varepsilon \rfloor }^{\lfloor t/\tau \rfloor },}$$

    we may choose functions \(\phi \in C_{0}^{\infty }(0,T)\) and \(\psi \in C_{0}^{\infty }(x_{1},x_{2})\), with \((x_{1},x_{2}) \cap S(u_{0}) = \varnothing \), and obtain from (8.13)

    $$\displaystyle\begin{array}{rcl} & & \int _{0}^{T}\int _{ x_{1}}^{x_{2} }{u}^{\tau }(t,x){\Bigl (\frac{\phi (t) -\phi (t+\tau )} {\tau } \Bigr )}\psi (x)\,\mathit{dx}\,\mathit{dt} {}\\ & & \quad = -\int _{0}^{T}\int _{ x_{1}}^{x_{2} }{\Bigl ( \frac{1} {\sqrt{\varepsilon }}J^{\prime}{\Bigl (\sqrt{\varepsilon }\frac{{u}^{\tau }(t,x) - {u}^{\tau }(t,x-\varepsilon )} {\varepsilon } \Bigr )}\Bigr )} {}\\ & & \qquad \times \,\phi (t){\Bigl (\frac{\psi (x) -\psi (x-\varepsilon )} {\varepsilon } \Bigr )}\,\mathit{dx}\,\mathit{dt}. {}\\ \end{array}$$

    Taking into account that

    $$\displaystyle{\lim _{\varepsilon \rightarrow 0} \frac{1} {\sqrt{\varepsilon }}J^{\prime}(\sqrt{\varepsilon }w) = 2w,}$$

    we can pass to the limit and obtain that

    $$\displaystyle\begin{array}{rcl} -\int _{0}^{T}\int _{ x_{1}}^{x_{2} }u(t,x)\phi ^{\prime}(t)\psi (x)\,\mathit{dx}\,\mathit{dt} = -\int _{ 0}^{T}\int _{ x_{1}}^{x_{2} }2\frac{\partial u} {\partial x}\phi (t)\psi ^{\prime}(x)\,\mathit{dx}\,\mathit{dt}\,;& & {}\\ \end{array}$$

    i.e., that

    $$\displaystyle{ \frac{\partial u} {\partial t} = 2\frac{{\partial }^{2}u} {\partial {x}^{2}} }$$
    (8.16)

    in the sense of distributions (and hence also classically) in (0, T) × (x 1, x 2). By the arbitrariness of the interval (x 1, x 2) we have that equation (8.16) is satisfied for x in (0, 1)∖ S(u 0).

  • We now derive boundary conditions on S(u(t)). Let i 0 + 1 belong to \(S_{\varepsilon }^{0}\), and suppose that \({u}^{+}(t,x) - {u}^{-}(t,x) \geq c > 0\). Then we have

    $$\displaystyle{\lim _{\tau \rightarrow 0} \frac{1} {\sqrt{\varepsilon }}J^{\prime}{\Biggl (\frac{u_{i_{0}+1}^{\lfloor t/\tau \rfloor }- u_{i_{0}}^{\lfloor t/\tau \rfloor }} {\sqrt{\varepsilon }} \Biggr )} = 0.}$$

    If \(i < i_{0}\), from (8.13) it follows, after summing over the indices from i to \(i_{0}\), that

    $$\displaystyle{ \sum _{j=i}^{i_{0} } \frac{\varepsilon } {\tau } (u_{j}^{k} - u_{ j}^{k-1}) = -\frac{1} {\sqrt{\varepsilon }}J^{\prime}{\Bigl (\frac{u_{i}^{k} - u_{i-1}^{k}} {\sqrt{\varepsilon }} \Bigr )}. }$$
    (8.17)

    We may choose \(i = i_{\varepsilon }\) such that \(\varepsilon i_{\varepsilon } \rightarrow \overline{x}\) and we may deduce from (8.17) that

    $$\displaystyle{\int _{\overline{x}}^{x_{0} } \frac{\partial u} {\partial t} \,\mathit{dx} = -2\frac{\partial u} {\partial x}(\overline{x}),}$$

    where x 0S(u(t)) is the limit of \(\varepsilon i_{0}\). Letting \(\overline{x} \rightarrow x_{0}^{-}\) we obtain

    $$\displaystyle{\frac{\partial u} {\partial x}(x_{0}^{-}) = 0.}$$

    Similarly we obtain the homogeneous Neumann condition at x 0 +.

Summarizing, the minimizing movement along the scaled Lennard-Jones energies \(F_{\varepsilon }\) from a piecewise-\({H}^{1}\) function consists in a piecewise-\({H}^{1}\) motion, following the heat equation on \((0,1)\setminus S(u_{0})\), with homogeneous Neumann boundary conditions on \(S(u_{0})\) (as long as u(t) has a discontinuity at the corresponding point of \(S(u_{0})\)).

Note that for \(\varepsilon \rightarrow 0\) sufficiently fast Theorem 8.1 directly ensures that the minimizing movement along \(F_{\varepsilon }\) coincides with the minimizing movement for the functional F. The computation above shows that this holds also for \(\tau \ll {\varepsilon }^{2}\) (i.e., for \(\varepsilon \rightarrow 0\) 'sufficiently slowly'), which must then be regarded as a technical condition.
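
The limit description can be simulated directly: away from \(S(u_{0})\) the function solves \(u_{t} = 2u_{xx}\), with homogeneous Neumann conditions at the discontinuity. The sketch below evolves the two sides of a single jump independently with an explicit finite-difference scheme; the initial datum, the grid, the time horizon and the Neumann conditions imposed also at the outer endpoints (the text works with periodic boundary conditions) are simplifying choices made only for illustration.

```python
import numpy as np

def heat_neumann(u, dx, dt, n_steps):
    """Explicit scheme for u_t = 2 u_xx with homogeneous Neumann conditions
    at both ends of the interval (ghost-node reflection)."""
    u = u.copy()
    for _ in range(n_steps):
        up = np.concatenate(([u[1]], u, [u[-2]]))            # reflection: u_x = 0 at the ends
        u = u + 2.0 * dt / dx**2 * (up[2:] - 2 * up[1:-1] + up[:-2])
    return u

dx, dt, T = 1e-2, 1e-5, 0.1                  # dt <= dx^2 / 4 for stability
x_left  = np.arange(0.0, 0.5, dx)            # (0, 1) split by a jump of u0 at x = 0.5
x_right = np.arange(0.5, 1.0, dx)
u_left  = heat_neumann(np.sin(np.pi * x_left), dx, dt, int(T / dt))
u_right = heat_neumann(2.0 + np.sin(np.pi * x_right), dx, dt, int(T / dt))
# Each piece is nearly flat after relaxation, and the jump at x = 0.5 persists (u+ > u-)
print(np.ptp(u_left), np.ptp(u_right), u_right[0] - u_left[-1])
```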

8.4 Homogenization of Minimizing Movements

We now examine minimizing movements along oscillating sequences (with many local minima), treating two model cases in the real line.

8.4.1 Minimizing Movements for Piecewise-Constant Energies

We apply the minimizing-movement scheme to the functions

$$\displaystyle{F_{\varepsilon }(x) = -{\Bigl \lfloor\frac{x} {\varepsilon } \Bigr \rfloor}\varepsilon }$$

converging to \(F(x) = -x\) (see Fig. 8.3).

Fig. 8.3 The function \(F_{\varepsilon }\)

This is a prototype of a function with many local minimizers (actually, in this case all points are local minimizers) converging to a function with few local minimizers (actually, none).

Note that, with fixed \(\varepsilon\), for any initial datum \(x_{0}\) the minimizing movement for \(F_{\varepsilon }\) is trivial: \(u(t) = x_{0}\), since all points are local minimizers. Conversely, the corresponding minimizing movement for the limit is \(u(t) = x_{0} + t\).

We now fix an initial datum \(x_{0}\), the space scale \(\varepsilon\) and the time scale τ, and examine the successive-minimization scheme from \(x_{0}\). Note that it is not restrictive to suppose that \(0 \leq x_{0} <\varepsilon\), up to a translation in \(\varepsilon \mathbb{Z}\).

The first minimization, giving x 1, is

$$\displaystyle{ \min {\Bigl \{F_{\varepsilon }(x) + \frac{1} {2\tau }{(x - x_{0})}^{2}\Bigr \}}. }$$
(8.18)

The function to minimize is pictured in Fig. 8.4 in normalized coordinates (\(\varepsilon = 1\)); note that it equals \(-x + \frac{1} {2\tau }{(x - x_{0})}^{2}\) if \(x \in \varepsilon \mathbb{Z}\).

Fig. 8.4 The function in the minimization problem (8.18)

Apart from some exceptional cases, which we deal with separately below, we have two possibilities:

  1. (i)

    If \(\frac{\tau }{\varepsilon } < \frac{1} {2}\) then the motion is trivial. The value 1∕2 is the pinning threshold.

    Indeed, after setting \(x_{0} = s\varepsilon\) with 0 ≤ s < 1, we have two sub-cases:

    1. (a)

      The minimizer x 1 belongs to \([0,\varepsilon )\). This occurs exactly if \(F_{\varepsilon }(\varepsilon ) + \frac{1} {2\tau }{(\varepsilon -x_{0})}^{2} > 0\); i.e.,

      $$\displaystyle{ \tau < \frac{{(s - 1)}^{2}\varepsilon } {2}. }$$
      (8.19)

      In this case the only minimizer is the initial datum x 0. This implies that we have x k = x 0 for all k.

    2. (b)

      We have that \(x_{1} =\varepsilon\). This implies that, up to a translation we are in the case x 0 = 0 with s = 0, and (8.19) holds since \(\tau < \frac{\varepsilon } {2}\). Hence, x k = x 1 for all k ≥ 1.

  2. (ii)

    If \(\frac{\tau }{\varepsilon } > \frac{1} {2}\) then for \(\varepsilon\) small the minimum is attained on \(\varepsilon \mathbb{Z}\), so that again we may suppose that \(x_{0} = 0\).

Note that we are leaving out for the time being the case when x 0 = 0 and \(\frac{\tau } {\varepsilon } = \frac{1} {2}\). In that case we have a double choice for the minimizer; such situations will be examined separately.

If x 0 = 0 then x 1 is computed by solving

$$\displaystyle{ \min {\Bigl \{F_{\varepsilon }(x) + \frac{1} {2\tau }{x}^{2}: x \in \varepsilon \mathbb{Z}\Bigr \}}, }$$
(8.20)

and is characterized by

$$\displaystyle{x_{1} -\frac{1} {2}\varepsilon \leq \tau \leq x_{1} + \frac{1} {2}\varepsilon.}$$

We then have

$$\displaystyle{x_{1} ={\Bigl \lfloor \frac{\tau } {\varepsilon } + \frac{1} {2}\Bigr \rfloor}\varepsilon \qquad \text{ if }\frac{\tau } {\varepsilon } + \frac{1} {2}\not\in \mathbb{Z}}$$

(note again that we have two solutions for \(\frac{\tau } {\varepsilon } + \frac{1} {2} \in \mathbb{Z}\), which also includes the case \(\frac{\tau }{\varepsilon } = \frac{1} {2}\) already set aside, and we examine those cases separately). The same computation is repeated at each k giving

$$\displaystyle{\frac{x_{k} - x_{k-1}} {\tau } ={\Bigl \lfloor \frac{\tau } {\varepsilon } + \frac{1} {2}\Bigr \rfloor}\frac{\varepsilon } {\tau }.}$$

We can now choose τ and \(\varepsilon\) tending to 0 simultaneously and pass to the limit. The behaviour of the limit minimizing movements is governed by the quantity

$$\displaystyle{ w =\lim _{\varepsilon \rightarrow 0}\frac{\tau } {\varepsilon }, }$$
(8.21)

which we may suppose exists up to subsequences. If \(w + \frac{1} {2}\not\in \mathbb{Z}\) then the minimizing movement along \(F_{\varepsilon }\) from x 0 is uniquely defined by

$$\displaystyle{ u(t) = x_{0} + vt,\text{ with }v ={\Bigl \lfloor w + \frac{1} {2}\Bigr \rfloor} \frac{1} {w}, }$$
(8.22)

so that the whole sequence converges if the limit in (8.21) exists. Note that

  • (pinning) we have v = 0 exactly when \(\frac{\tau }{\varepsilon } < \frac{1} {2}\) for \(\varepsilon\) small. In particular this holds for \(\tau \ll \varepsilon\) (i.e., for w = 0).

  • (limit motion for slow times) if \(\varepsilon \ll \tau\) then the motion coincides with the gradient flow of the limit, with velocity 1.

  • (discontinuous dependence of the velocity) the velocity is a discontinuous function of w at points of \(\frac{1} {2} + \mathbb{Z}\). Note moreover that it may be actually greater than the limit velocity 1. The graph of v is pictured in Fig. 8.5.

  • (non-uniqueness at  \(w \in \frac{1} {2} + \mathbb{Z}\)) in these exceptional cases we may have either of the two velocities \(1 + \frac{1} {2w}\) or \(1 - \frac{1} {2w}\), according to whether \(\frac{\tau }{\varepsilon } > w\) or \(\frac{\tau }{\varepsilon } < w\) for all \(\varepsilon\) small, respectively, but we may also have any u(t) with

    $$\displaystyle{1 - \frac{1} {2w} \leq u^{\prime}(t) \leq 1 + \frac{1} {2w}}$$

    if we have precisely \(\frac{\tau }{\varepsilon } = w\) for all \(\varepsilon\) small, since in this case at every time step we may choose either of the two minimizers giving the extremal velocities, and then obtain any such u′ as a weak limit of piecewise-constant functions taking only those two values. Note therefore that in this case the limit is not determined only by w, and in particular it may depend on the subsequence even if the limit (8.21) exists.
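
Formula (8.22), and the exact per-step displacement \(\lfloor \tau /\varepsilon + \frac{1}{2}\rfloor \varepsilon\), can be checked by brute force. In the sketch below the minimization is again replaced by a grid argmin (an approximation introduced here); the small shift inside the floor only guards against floating-point artefacts at the lattice points, and the values of w are arbitrary non-exceptional choices.

```python
import numpy as np

def velocity(w, eps=1e-2, n_steps=200):
    """Average velocity of the scheme for F_eps(x) = -floor(x/eps)*eps with tau = w*eps."""
    tau = w * eps
    F = lambda x: -np.floor(x / eps + 1e-9) * eps    # 1e-9 guards against FP roundoff
    grid = np.arange(0.0, 8.0, eps / 100.0)
    x = 0.0
    for _ in range(n_steps):
        x = grid[np.argmin(F(grid) + (grid - x) ** 2 / (2.0 * tau))]
    return x / (n_steps * tau)

for w in (0.3, 0.8, 1.3, 2.6):
    print(w, velocity(w), np.floor(w + 0.5) / w)     # measured vs. predicted v = floor(w+1/2)/w
```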

Fig. 8.5 The velocity v in terms of w

We remark that the functions \(F_{\varepsilon }\) above can be substituted by functions with isolated local minimizers; e.g. by taking (α > 0)

$$\displaystyle{F_{\varepsilon }(x) = -{\Bigl \lfloor\frac{x} {\varepsilon } \Bigr \rfloor}\varepsilon +\alpha {\Bigl ( x -{\Bigl \lfloor\frac{x} {\varepsilon } \Bigr \rfloor}\varepsilon \Bigr )},}$$

with isolated local minimizers at \(\varepsilon \mathbb{Z}\) (for which the computations run exactly as above), or

$$\displaystyle{F_{\varepsilon }(x) = -x + (1+\alpha )\varepsilon \sin {\Bigl (\frac{x} {\varepsilon } \Bigr )}.}$$

Note that the presence of an energy barrier between local minimizers does not influence the velocity of the final minimizing movement, which can still be larger than 1 (the velocity obtained for \(\varepsilon \ll \tau\)).

We also remark that the same result can be obtained by a ‘discretization’ of F; i.e., taking

$$\displaystyle{ F_{\varepsilon }(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} -x \quad &\text{if}\,x \in \varepsilon \mathbb{Z}\\ +\infty \quad &\text{otherwise}. \end{array} \right. }$$
(8.23)

8.4.2 A Heterogeneous Case

We briefly examine a variation of the previous example obtained by introducing a heterogeneity parameter \(1 \leq \lambda \leq 2\) and defining

$$\displaystyle{{ F}^{\lambda }(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} -2{\Bigl \lfloor\frac{x} {2}\Bigr \rfloor} \quad &\text{if}\,2{\Bigl \lfloor\frac{x} {2}\Bigr \rfloor} \leq x < 2{\Bigl \lfloor\frac{x} {2}\Bigr \rfloor}+\lambda \\ \quad \\ -2{\Bigl \lfloor\frac{x} {2}\Bigr \rfloor}-\lambda \quad &\text{if}\,2{\Bigl \lfloor\frac{x} {2}\Bigr \rfloor}+\lambda \leq x < 2{\Bigl \lfloor\frac{x} {2}\Bigr \rfloor} + 2. \end{array} \right. }$$
(8.24)

If λ = 1 we are in the previous situation; for general λ the function F λ is pictured in Fig. 8.6.

Fig. 8.6 The function \({F}^{\lambda }\)

We apply the minimizing-movement scheme to the functions

$$\displaystyle{F_{\varepsilon }(x) = F_{\varepsilon }^{\lambda }(x) =\varepsilon \, {F}^{\lambda }{\Bigl (\frac{x} {\varepsilon } \Bigr )}.}$$

Arguing as above, we can reduce to the two cases

$$\displaystyle{\mbox{ (a) $x_{k} \in 2\varepsilon \mathbb{Z}$, or (b) $x_{k} \in 2\varepsilon \mathbb{Z}+\varepsilon \lambda $.}}$$

Taking into account that \(x_{k+1}\) is determined as the point of \(2\varepsilon \mathbb{Z} \cup (2\varepsilon \mathbb{Z}+\varepsilon \lambda )\) closest to \(x_{k}+\tau\) (as above, we only consider the cases when we have a unique solution to the minimum problems in the iterated procedure), we can characterize it as follows.

In case (a) we have the two sub-cases:

  • (a1) If we have

    $$\displaystyle{2n < \frac{\tau } {\varepsilon } - \frac{\lambda } {2} < 2n + 1}$$

    for some \(n \in \mathbb{N}\) then

    $$\displaystyle{x_{k+1} = x_{k} + (2n+\lambda )\varepsilon.}$$

    In particular \(x_{k+1} \in 2\varepsilon \mathbb{Z}+\varepsilon \lambda\).

  • (a2) If we have

    $$\displaystyle{2n - 1 < \frac{\tau } {\varepsilon } - \frac{\lambda } {2} < 2n}$$

    for some \(n \in \mathbb{N}\) then

    $$\displaystyle{x_{k+1} = x_{k} + 2n\varepsilon.}$$

    In particular \(x_{k+1} \in 2\varepsilon \mathbb{Z}\). Note that \(x_{k+1} = x_{k}\) (pinning) if \(\frac{\tau } {\varepsilon } < \frac{\lambda } {2}\).

In case (b) we have the two sub-cases:

  • (b1) If we have

    $$\displaystyle{2n < \frac{\tau } {\varepsilon } + \frac{\lambda } {2} < 2n + 1}$$

    for some \(n \in \mathbb{N}\) then

    $$\displaystyle{x_{k+1} = x_{k} + 2n\varepsilon.}$$

    In particular \(x_{k+1} \in 2\varepsilon \mathbb{Z}+\varepsilon \lambda\). Note that \(x_{k+1} = x_{k}\) (pinning) if \(\frac{\tau } {\varepsilon } < 1 - \frac{\lambda } {2}\), a condition which implies the pinning condition in (a2), since λ ≥ 1.

  • (b2) If we have

    $$\displaystyle{2n - 1 < \frac{\tau } {\varepsilon } + \frac{\lambda } {2} < 2n}$$

    for some \(n \in \mathbb{N}\) then

    $$\displaystyle{x_{k+1} = x_{k} + 2n\varepsilon -\varepsilon \lambda.}$$

    In particular \(x_{k+1} \in 2\varepsilon \mathbb{Z}\).

Eventually, we have the two cases:

  1. (1)

    When

    $$\displaystyle{{\Bigl |\frac{\tau } {\varepsilon } - 2n\Bigr |} < \frac{\lambda } {2}}$$

    for some \(n \in \mathbb{N}\) then, after possibly one iteration, we are either in the case (a2) or (b1). Hence, either \(x_{k} \in 2\varepsilon \mathbb{Z}\) or \(x_{k} \in 2\varepsilon \mathbb{Z}+\varepsilon \lambda\) for all k. The velocity in this case is then

    $$\displaystyle{\frac{x_{k+1} - x_{k}} {\tau } = 2n\frac{\varepsilon } {\tau }.}$$
  2. (2)

    When

    $$\displaystyle{{\Bigl |\frac{\tau } {\varepsilon } - (2n + 1)\Bigr |} < 1 - \frac{\lambda } {2}}$$

    for some \(n \in \mathbb{N}\) then we are alternately in case (a1) and case (b2). In this case we have an

    • averaged velocity: the speed of the orbit \(\{x_{k}\}\) oscillates between two values with an average speed given by

      $$\displaystyle{\frac{x_{k+2} - x_{k}} {2\tau } = \frac{2n\varepsilon +\lambda \varepsilon } {2\tau } + \frac{2(n + 1)\varepsilon -\lambda \varepsilon } {2\tau } = (2n + 1)\frac{\varepsilon } {\tau }.}$$

      This is an additional feature with respect to the previous example.

Fig. 8.7 The function f describing the effective velocity

Summarizing, if we define w as in (8.21) then (taking into account only the cases with a unique limit) the minimizing movement along the sequence \(F_{\varepsilon }\) with initial datum \(x_{0}\) is given by \(x(t) = x_{0} + vt\) with \(v = \frac{1} {w}f(w)\), where f is given by

$$\displaystyle{f(w) = \left \{\begin{array}{@{}l@{\quad }l@{}} 2n \quad &\text{if}\,\vert w - 2n\vert \leq \frac{\lambda } {2},n \in \mathbb{N}\\ \quad \\ 2n + 1\quad &\text{if}\,\vert w - (2n + 1)\vert < 1 - \frac{\lambda } {2},n \in \mathbb{N} \end{array} \right.}$$

(see Fig. 8.7). Note that the pinning threshold is now λ∕2. We can compare this minimizing movement with the one given in (8.22) by examining the graph of \(w\mapsto \lfloor w + 1/2\rfloor - f(w)\) in Fig. 8.8. For \(2n + 1/2 < w < 2n +\lambda /2\) the new minimizing movement is slower, while for \(2n + 2 -\lambda /2 < w < 2n + 2 - 1/2\) it is faster.

Fig. 8.8 Comparison with the homogeneous case

8.4.3 A Proposal for Some Random Models

From the heterogeneous example above we may derive two possible random models, for which we may then study the corresponding minimizing movements. We only give a heuristic proposal, which can then be formalized correctly by introducing suitable random variables.

  1. 1.

    Random environment. Let λ ∈ (1, 2) and p ∈ [0, 1]. We consider a random array of points \(\{x_{i}^{\omega }\}\) in \(\mathbb{R}\) such that, e.g.,

    $$\displaystyle{ x_{i}^{\omega }-x_{ i-1}^{\omega } = \left \{\begin{array}{@{}l@{\quad }l@{}} \lambda \quad &\text{ with probability }p\\ \quad \\ 2-\lambda \quad &\text{ with probability }1 - p.\end{array} \right. }$$
    (8.25)

    With fixed ω we may consider the minimizing movement related to

    $$\displaystyle{F_{\varepsilon }^{\omega }(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} -x \quad &\text{if}\,x \in \{\varepsilon x_{i}^{\omega }: i \in \mathbb{Z}\} \\ +\infty \quad &\text{otherwise},\end{array} \right.}$$

    or equivalently (as in the definition (8.23))

    $$\displaystyle{F_{\varepsilon }^{\omega }(x) = -\varepsilon x_{i}^{\omega }\mbox{ if $x \in [\varepsilon x_{ i}^{\omega },\varepsilon x_{ i+1}^{\omega }),i \in \mathbb{Z}$.}}$$

    In the case p = 0 or p = 1 we almost surely have a homogeneous environment as in Sect. 8.4.1. For \(p = 1/2\) we have a random version of the heterogeneous model of Sect. 8.4.2. Note that in this case for all p ∈ (0, 1) the pinning threshold for the ratio \(\tau /\varepsilon\) is almost surely λ∕2, since below that value the motion is pinned at the first index i with \(x_{i}^{\omega } - x_{i-1}^{\omega } =\lambda\); i.e., almost surely after a finite number of steps. For \(\tau /\varepsilon =\lambda /2\) and λ < 3∕2 (with this condition we always move by one index), the (maximal) velocity beyond the pinning threshold is \(v =\lambda p + (2-\lambda )(1 - p)\) (for λ > 3∕2 the computation of the velocity involves the probability of m consecutive points \(x_{i}^{\omega }\) at distance 2 −λ).

  2. 2.

    Random movements. Let λ ∈ (1, 2) and p ∈ [0, 1]. Contrary to the model above, we suppose that at every time step k we may make a random choice of points \(\{x_{i}^{\omega _{k}}\}\) satisfying (8.25) such that \(x_{k}^{\varepsilon } \in \{ x_{i}^{\omega _{k}}\}\); i.e., this choice now represents the random possibility of motion of the point itself (and not a characteristic of the medium). Note that in this case for p ∈ (0, 1) the pinning threshold for the ratio \(\tau /\varepsilon\) is almost surely the lower value \(1 - \frac{\lambda } {2}\), and the (maximal) velocity beyond the pinning threshold is \(v = (2-\lambda )(1 - p)\).
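
The heuristics for the random-environment model can be explored by simulation. The sketch below draws a single realization of the gaps (8.25), runs the closest-point dynamics, and compares the mean displacement per step with the values discussed above; all numerical values (λ = 1.2, p = 1/2, the seed, the ratios τ∕ε) are arbitrary illustrative choices, and a single run is of course not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, p, eps = 1.2, 0.5, 1e-2

gaps = np.where(rng.random(200000) < p, lam, 2.0 - lam)   # one realization of the model (8.25)
pts = eps * np.concatenate(([0.0], np.cumsum(gaps)))      # the random array {eps * x_i^omega}

def run(ratio, n_steps=2000):
    """Closest-point dynamics: x_{k+1} = nearest point of the array to x_k + tau, tau = ratio*eps."""
    tau, x = ratio * eps, 0.0
    for _ in range(n_steps):
        j = np.searchsorted(pts, x + tau)
        x = pts[j] if pts[j] - (x + tau) < (x + tau) - pts[j - 1] else pts[j - 1]
    return x / (n_steps * eps)                            # mean displacement per step, in units of eps

print(run(0.55))   # below the threshold lam/2 = 0.6: pinned after a few steps, ~0
print(run(0.65))   # just above: mean step ~ lam*p + (2-lam)*(1-p) = 1.0
```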

8.5 Time-Dependent Minimizing Movements

Following the arguments of Sect. 7.2 we can define a minimizing movement along a time-dependent sequence of energies \(F_{\varepsilon }(x,t)\), under some technical assumptions as in (7.10). In this case we fix a sequence of initial data \(x_{0}^{\varepsilon }\) and \(\tau =\tau _{\varepsilon } \rightarrow 0\), and define \(x_{k}^{\varepsilon }\) recursively as a minimizer of

$$\displaystyle{ \min \Bigl \{F_{\varepsilon }(x,k\tau ) + \frac{1} {2\tau }\|x - x_{k-1}^{\varepsilon }\|{}^{2}\Bigr \}. }$$
(8.26)

A minimizing movement is then any limit u of \({u}^{\varepsilon }\) defined by \({u}^{\varepsilon }(t) = x_{\lfloor t/\tau \rfloor }^{\varepsilon }\).

We only give a simple one-dimensional example with a time-dependent forcing term.

Example 8.6.

We consider

$$\displaystyle{F_{\varepsilon }(x,t) =\varepsilon \, W{\Bigl (\frac{x} {\varepsilon } \Bigr )} -\mathit{tx}}$$

with W as in Example 8.2. Similarly to that example we can check that \(\varepsilon \sim \tau\) is the critical case, and we can explicitly describe the minimizing movement in the extreme cases:

  • (\(\varepsilon \ll \tau\)) the minimizing movement is that corresponding to \(F_{0}(x,t) = -\mathit{tx}\); i.e., to the equation u′ = t.

  • (\(\tau \ll \varepsilon\)) the minimizing movement is that corresponding to the function

    $$\displaystyle{\tilde{F}_{0}(x,t) = \left \{\begin{array}{@{}l@{\quad }l@{}} 0 \quad &\text{if}\,t \leq 1\\ g(t)x\quad &\text{if} \,t \geq 1,\end{array} \right.}$$

    where g is now defined by

    $$\displaystyle{ \frac{1} {g(t)} =\int _{ 0}^{1} \frac{1} {W^{\prime}(\sigma ) - t}d\sigma.}$$

8.6 Varying Dissipations: BV-Solutions of Evolution Equations

In the previous sections of this chapter we have limited ourselves to a Hilbert setting. This often rules out interesting applications, in particular a viscosity approach to quasistatic motion as a limit of gradient flows, which is obtained by perturbing a positively one-homogeneous dissipation \(\mathcal{D}\) by a sequence \(\mathcal{D} + \frac{1} {\tau } \mathcal{D}_{\varepsilon }\), for which a gradient-flow-type motion can be defined using the minimizing-movement approach. In general, the limit of these gradient flows gives a motion, called a BV-solution, which is different from the energetic solution defined in Sect. 3.2, and which can be characterized in a variational way different from the energy balance. A treatment of this subject is beyond the scope of these notes, since it would require too refined an introduction to the theory of gradient flows in metric spaces, even though it would fit the spirit of the book, since it may be stated in terms of Γ-limits. Many of the arguments followed above for varying energies also hold for varying dissipations.

We only deal with a simple example, in order to highlight the differences with energetic solutions.

Example 8.7 (Nonconvex mechanical play).

We can consider the double-well potential in Example 3.3 and the perturbed dissipations

$$\displaystyle{\mathcal{F}(t,x) = \frac{1} {2}\min \{{(x - 1)}^{2},{(x + 1)}^{2}\} -\mathit{tx},\qquad \mathcal{D}_{\varepsilon,\tau }(x) = \vert x\vert + \frac{\varepsilon } {2\tau }{x}^{2},}$$

with \(x_{0} \in [-2,-1]\). Then the sequence x k τ is increasing and minimizes

$$\displaystyle\begin{array}{rcl} & & \min \Bigl \{\frac{1} {2}\min \{{(x - 1)}^{2},{(x + 1)}^{2}\} - (k\tau - 1)x - x_{ k-1}^{\tau } {}\\ & & \quad + \frac{\varepsilon } {2\tau }{(x - x_{k-1}^{\tau })}^{2}: x \geq x_{ k-1}^{\tau }\Bigl \}. {}\\ \end{array}$$

We fix the ratio

$$\displaystyle{ \gamma = \frac{\varepsilon } {\tau }. }$$
(8.27)

With a computation similar to the one in Example 3.3, we obtain as limit the solution

$$\displaystyle{x(t) = \left \{\begin{array}{@{}l@{\quad }l@{}} x_{0} \quad &\text{if}\,t \leq x_{0} + 2 \\ t - 2\quad &\text{if}\,x_{0} + 2 \leq t \leq 2 - \frac{1} {\gamma +1} \\ t \quad &\text{if}\,t > 2 - \frac{1} {\gamma +1} \end{array} \right.}$$

or the solution equal to this one except at \(t = 2 - \frac{1} {\gamma +1}\), where x(t) = t.
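
The interpolation can also be observed directly from the incremental scheme. The sketch below runs the constrained minimization by a grid search (an approximation introduced here) and records the time at which the discrete solution jumps from the left to the right well; this time should approach \(2 - \frac{1}{\gamma +1}\) as τ → 0, with γ = ε∕τ kept fixed as in (8.27). The values of τ, the grid and the initial datum are illustrative choices.

```python
import numpy as np

def jump_time(gamma, tau=1e-3, x0=-2.0):
    """Time at which the discrete solution of Example 8.7 jumps from the left to the right well."""
    grid = np.arange(-2.0, 4.0, 1e-3)
    F = lambda x, t: 0.5 * np.minimum((x - 1) ** 2, (x + 1) ** 2) - t * x
    x = x0
    for k in range(1, int(2.5 / tau)):
        adm = grid[grid >= x]                           # monotonicity constraint x >= x_{k-1}
        vals = F(adm, k * tau) + (adm - x) + gamma / 2.0 * (adm - x) ** 2
        x_new = adm[np.argmin(vals)]
        if x_new - x > 0.2:                             # detect the jump between the wells
            return k * tau
        x = x_new
    return None

for gamma in (0.0, 1.0, 4.0):
    print(gamma, jump_time(gamma), 2.0 - 1.0 / (gamma + 1.0))   # measured vs. predicted jump time
```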

Remark 8.4 (Interpolations of energetic and BV solutions).

In the previous example, the case \(\varepsilon \ll \tau\) (formally, γ = 0) gives the energetic solution obtained in Example 3.3. The case \(\tau \ll \varepsilon\) (formally, \(\gamma = +\infty \)) corresponds to the BV-solution hinted at above. The case in which (8.27) holds can be interpreted as an interpolation between these two extreme cases, and is pictured in Fig. 8.9.

Fig. 8.9 Interpolation between energetic and BV solutions