
The issues related to the behavior of global minimization problems along a sequence of functionals \(F_{\varepsilon }\) are by now well understood, and mainly rely on the concept of Γ-limit. In this chapter we review this notion, which will be the starting point of our analysis. We will mainly be interested in the properties of Γ-limits as far as the convergence of minimization problems is concerned; further properties of Γ-limits will be recalled when necessary.

2.1 Upper and Lower Bounds

Here and afterwards \(F_{\varepsilon }\) will be functionals defined on a separable metric (or metrizable) space X, unless otherwise specified.

Definition 2.1 (Lower bound).

We say that F is a lower bound for the family \((F_{\varepsilon })\) if for all \(u \in X\) we have

$$\displaystyle{ F(u) \leq \liminf \limits _{\varepsilon \rightarrow 0}F_{\varepsilon }(u_{\varepsilon })\quad \text{for all}\ u_{\varepsilon } \rightarrow u, }$$
(LB)

or, equivalently, \(F(u) \leq F_{\varepsilon }(u_{\varepsilon }) + o(1)\) as \(\varepsilon \rightarrow 0\) for all \(u_{\varepsilon } \rightarrow u\).

The inequality (LB) is usually referred to as the liminf inequality.

If F is a lower bound we obtain a lower bound also for minimum problems on compact sets.

Proposition 2.1.

Let F be a lower bound for \(F_{\varepsilon }\) and K be a compact subset of X. Then

$$\displaystyle{ \inf _{K}F \leq \liminf _{\varepsilon \rightarrow 0}\inf _{K}F_{\varepsilon }. }$$
(2.1)

Proof.

Let \(\varepsilon _{k} \rightarrow 0\) and \(u_{\varepsilon _{k}} \in K\) be such that \(u_{\varepsilon _{k}} \rightarrow \overline{u}\) (such a choice is possible, up to extracting a subsequence, by the compactness of K) and

$$\displaystyle{\lim _{k}F_{\varepsilon _{k}}(u_{\varepsilon _{k}}) =\liminf _{\varepsilon \rightarrow 0}\inf _{K}F_{\varepsilon }.}$$

We set

$$\displaystyle{\tilde{u}_{\varepsilon } = \left \{\begin{array}{@{}l@{\quad }l@{}} u_{\varepsilon _{k}}\quad &\text{if}\,\varepsilon =\varepsilon _{k} \\ \overline{u} \quad &\text{otherwise}. \end{array} \right.}$$

Then by (LB) we have

$$\displaystyle{ \inf _{K}F \leq F(\overline{u}) \leq \liminf _{\varepsilon \rightarrow 0}F_{\varepsilon }(\tilde{u}_{\varepsilon }) \leq \lim _{k}F_{\varepsilon _{k}}(u_{\varepsilon _{k}}) =\liminf _{\varepsilon \rightarrow 0}\inf _{K}F_{\varepsilon }, }$$
(2.2)

as desired. □

Remark 2.1.

Note that the hypothesis that K be compact cannot altogether be removed. A trivial example on the real line is:

$$\displaystyle{F_{\varepsilon }(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} -1\quad &\text{if}\,x = 1/\varepsilon \\ 0 \quad &\text{otherwise}. \end{array} \right.}$$

Then F = 0 is a lower bound according to Definition 2.1, but (2.1) fails if we take \(\mathbb{R}\) in place of K.

Remark 2.2.

The hypothesis that K be compact can be substituted by the hypothesis that K be closed and the sequence \((F_{\varepsilon })\) be equi-coercive; i.e., that

$$\displaystyle{ \mbox{ if $\sup \nolimits _{\varepsilon }F_{\varepsilon }(u_{\varepsilon }) < +\infty $ then $(u_{\varepsilon })$ is precompact,} }$$
(2.3)

the proof being the same.

Definition 2.2 (Upper bound).

We say that F is an upper bound for the family \((F_{\varepsilon })\) if for all \(u \in X\) we have

$$\displaystyle{ \text{there exists}\ u_{\varepsilon } \rightarrow u\ \text{such that}\quad F(u) \geq \limsup \limits _{\varepsilon \rightarrow 0}F_{\varepsilon }(u_{\varepsilon }), }$$
(UB)

or, equivalently, \(F(u) \geq F_{\varepsilon }(u_{\varepsilon }) + o(1)\) as \(\varepsilon \rightarrow 0\) for some \(u_{\varepsilon } \rightarrow u\).

The inequality (UB) is usually referred to as the limsup inequality.

If F is an upper bound for \(F_{\varepsilon }\) we obtain an upper bound also for the corresponding minimum problems on open sets.

Proposition 2.2.

Let F be an upper bound for \(F_{\varepsilon }\) and A be an open subset of X. Then

$$\displaystyle{ \inf _{A}F \geq \limsup _{\varepsilon \rightarrow 0}\inf _{A}F_{\varepsilon }. }$$
(2.4)

Proof.

The proof follows immediately from the definition after remarking that if \(u \in A\) then we may also suppose that \(u_{\varepsilon } \in A\), so that

$$\displaystyle{F(u) \geq \limsup _{\varepsilon \rightarrow 0}F_{\varepsilon }(u_{\varepsilon }) \geq \limsup _{\varepsilon \rightarrow 0}\inf _{A}F_{\varepsilon },}$$

and (2.4) follows by the arbitrariness of \(u \in A\). □

Remark 2.3.

Again, note that the hypothesis that A be open cannot be removed. A trivial example on the real line is:

$$\displaystyle{F_{\varepsilon }(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} 1\quad &\text{if}\,x = 0\\ 0\quad &\text{otherwise} \end{array} \right.}$$

(independent of \(\varepsilon\)). Then F = 0 is an upper bound according to Definition 2.2 (and also a lower bound!), but (2.4) fails taking \(A = \{0\}\).

Note that in the remark above 0 is an upper bound for \(F_{\varepsilon }\) at 0 even though \(F_{\varepsilon }(0) = 1\) for all \(\varepsilon\), which trivially shows that an upper bound at a point can actually be (much) lower than any element of the family \(F_{\varepsilon }\) at that point.

2.2 Γ-Convergence

In this section we introduce the concept of Γ-limit.

Definition 2.3 (Γ-limit).

We say that F is the Γ -limit of the sequence \((F_{\varepsilon })\) if it is both a lower and an upper bound according to Definitions 2.1 and 2.2.

If (LB) and (UB) hold at a point u then we say that F is the Γ -limit at u, and we write

$$\displaystyle{F(u) =\varGamma \text{-}\lim _{\varepsilon \rightarrow 0}F_{\varepsilon }(u).}$$

Note that this notation does not imply that u is in any of the domains of \(F_{\varepsilon }\), even if F(u) is finite.

Remark 2.4 (Alternate upper-bound inequalities).

If F is a lower bound then requiring that (UB) holds is equivalent to any of the following

$$\displaystyle{ \text{there exists}\ u_{\varepsilon } \rightarrow u\ \text{such that}\quad F(u) =\lim \limits _{\varepsilon \rightarrow 0}F_{\varepsilon }(u_{\varepsilon }); }$$
(RS)
$$\displaystyle{ \text{for all}\ \eta > 0\ \text{there exists}\ u_{\varepsilon } \rightarrow u\ \text{such that}\quad F(u)+\eta \geq \limsup \limits _{\varepsilon \rightarrow 0}F_{\varepsilon }(u_{\varepsilon }). }$$
(AUB)

The latter is called the approximate limsup inequality, and is often handier in computations. A sequence satisfying (RS) is called a recovery sequence. The construction of a recovery sequence is linked to an ansatz on its form. The description of this ansatz gives an insight into the relevant features of the energies (oscillations, concentration, etc.); it is usually first made precise on a subclass of u for which its validity is easier to prove, while for general u one proceeds by a density argument.
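To see why (AUB) implies (UB) when F is a lower bound, here is a minimal sketch of the standard diagonal argument (assuming, say, that F(u) is finite): for each \(j \in \mathbb{N}\) apply (AUB) with \(\eta = 1/j\) to obtain \(u_{\varepsilon }^{j} \rightarrow u\) with

$$\displaystyle{\limsup \limits _{\varepsilon \rightarrow 0}F_{\varepsilon }(u_{\varepsilon }^{j}) \leq F(u) + \frac{1} {j},}$$

then choose \(\varepsilon _{j}\) decreasing to 0 such that \(d(u_{\varepsilon }^{j},u) \leq 1/j\) and \(F_{\varepsilon }(u_{\varepsilon }^{j}) \leq F(u) + 2/j\) for all \(\varepsilon \leq \varepsilon _{j}\). The diagonal sequence \(u_{\varepsilon } = u_{\varepsilon }^{j}\) for \(\varepsilon \in (\varepsilon _{j+1},\varepsilon _{j}]\) then converges to u and satisfies \(\limsup _{\varepsilon \rightarrow 0}F_{\varepsilon }(u_{\varepsilon }) \leq F(u)\); i.e., (UB).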

Example 2.1.

We analyze some simple examples on the real line.

  1.

    From Remark 2.3 we see that the constant sequence

    $$\displaystyle{F_{\varepsilon }(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} 1\quad &\text{if}\,x = 0\\ 0\quad &\text{otherwise}\end{array} \right.}$$

    Γ-converges to the constant 0; in particular this is a constant sequence not converging to itself.

  2.

    The sequence

    $$\displaystyle{F_{\varepsilon }(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} 1\quad &\text{if}\,x =\varepsilon \\ 0\quad &\text{otherwise}\end{array} \right.}$$

    again Γ-converges to the constant 0. This is clearly a lower and an upper bound at all x ≠ 0. At x = 0 any sequence \(x_{\varepsilon }\neq \varepsilon\) is a recovery sequence.

  3.

    The sequence

    $$\displaystyle{F_{\varepsilon }(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} -1\quad &\text{if}\,x =\varepsilon \\ 0 \quad &\text{otherwise}\end{array} \right.}$$

    Γ-converges to

    $$\displaystyle{F(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} -1\quad &\text{if}\,x = 0\\ 0 \quad &\text{otherwise}.\end{array} \right.}$$

    Again, F is clearly a lower and an upper bound at all x ≠ 0. At x = 0 the sequence \(x_{\varepsilon } =\varepsilon\) is a recovery sequence.

  4.

    Take the sum of the energies in Example 2.1(2) and (3) above. This sum is identically 0, and so is its limit, while the sum of the Γ-limits is the function F in Example 2.1(3). The same function F is obtained as the \(\varGamma\)-limit by taking \(G_{\varepsilon }(x) = F_{\varepsilon }(x) + F_{\varepsilon }(-x)\) (with \(F_{\varepsilon }\) as in Example 2.1(3)).

  5.

    Let \(F_{\varepsilon }(x) =\sin (x/\varepsilon )\). Then the Γ-limit is the constant −1. This is clearly a lower bound. A recovery sequence for fixed x is \(x_{\varepsilon } = 2\pi \varepsilon \lfloor x/(2\pi \varepsilon )\rfloor -\varepsilon \pi /2\) (\(\lfloor t\rfloor \) is the integer part of t), as checked below.
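The last recovery sequence can be verified by a direct computation: by construction \(x_{\varepsilon }/\varepsilon = 2\pi \lfloor x/(2\pi \varepsilon )\rfloor -\pi /2\), so that

$$\displaystyle{\sin {\Bigl (\frac{x_{\varepsilon }} {\varepsilon } \Bigr )} =\sin {\Bigl (-\frac{\pi } {2}\Bigr )} = -1\ \text{for every}\ \varepsilon,\qquad \vert x_{\varepsilon } - x\vert \leq 2\pi \varepsilon +\frac{\pi \varepsilon } {2} \rightarrow 0,}$$

so that \(x_{\varepsilon } \rightarrow x\) and \(F_{\varepsilon }(x_{\varepsilon }) \rightarrow -1\).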

The following fundamental property of Γ-convergence derives directly from its definition.

Proposition 2.3 (Stability under continuous perturbations).

Let \(F_{\varepsilon }\) Γ-converge to F and let \(G_{\varepsilon }\) converge continuously to G (i.e., \(G_{\varepsilon }(u_{\varepsilon }) \rightarrow G(u)\) if \(u_{\varepsilon } \rightarrow u\) ); then \(F_{\varepsilon } + G_{\varepsilon }\) Γ-converges to F + G.

Note that this proposition applies to \(G_{\varepsilon } = G\) if G is continuous, but is in general false for \(G_{\varepsilon } = G\) even if G is lower semicontinuous (take \(G_{\varepsilon } = F\) as in Example 2.1(3), with \(F_{\varepsilon } = -F\)).

Example 2.2.

The functions \(\sin (x/\varepsilon ) + x^{2} + 1\) Γ-converge to \(x^{2}\). In this case we may apply the proposition above with \(F_{\varepsilon }(x) =\sin (x/\varepsilon )\) (see Example 2.1(5)) and \(G_{\varepsilon }(x) = x^{2} + 1\). Note for future reference that \(F_{\varepsilon } + G_{\varepsilon }\) has countably many local minimizers, which tend to be dense in the real line, while the Γ-limit \(x^{2}\) has only one global minimizer.
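The density of local minimizers can be justified by a heuristic computation: stationarity of \(\sin (x/\varepsilon ) + x^{2} + 1\) requires

$$\displaystyle{\frac{1} {\varepsilon }\cos {\Bigl (\frac{x} {\varepsilon } \Bigr )} + 2x = 0;}$$

for small \(\varepsilon\), each interval of length \(2\pi \varepsilon\) contained in \(\{\vert x\vert < 1/(2\varepsilon )\}\) contains a local minimizer, close to a point of the form \(\varepsilon (2\pi k -\pi /2)\), \(k \in \mathbb{Z}\), where \(\sin (x/\varepsilon ) = -1\); these points become dense as \(\varepsilon \rightarrow 0\).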

It may be useful to define the lower and upper Γ-limits, so that the existence of a Γ-limit can be viewed as their equality.

Definition 2.4 (Lower and upper Γ-limits).

We define

$$\displaystyle{ \varGamma \text{-}\liminf _{\varepsilon \rightarrow 0}F_{\varepsilon }(u) =\inf \{\liminf _{\varepsilon \rightarrow 0}F_{\varepsilon }(u_{\varepsilon }): u_{\varepsilon } \rightarrow u\} }$$
(2.5)
$$\displaystyle{ \varGamma \text{-}\limsup _{\varepsilon \rightarrow 0}F_{\varepsilon }(u) =\inf \{\limsup _{\varepsilon \rightarrow 0}F_{\varepsilon }(u_{\varepsilon }): u_{\varepsilon } \rightarrow u\}. }$$
(2.6)

Remark 2.5.

  1.

    We immediately obtain that the Γ-limit exists at a point u if and only if

    $$\displaystyle{\varGamma \text{-}\liminf _{\varepsilon \rightarrow 0}F_{\varepsilon }(u) =\varGamma \text{-}\limsup _{\varepsilon \rightarrow 0}F_{\varepsilon }(u).}$$
  2.

    Comparing with the trivial sequence \(u_{\varepsilon } = u\) we obtain

    $$\displaystyle{\varGamma \text{-}\liminf _{\varepsilon \rightarrow 0}F_{\varepsilon }(u) \leq \liminf _{\varepsilon \rightarrow 0}F_{\varepsilon }(u)}$$

    (and analogously for the \(\varGamma \text{-}\limsup\)). More generally, note that the Γ-limit depends on the topology on X. If we change the topology, converging sequences are different and the value of the Γ-limit changes. A weaker topology has more converging sequences, and the value decreases; a stronger topology has fewer converging sequences, and the value increases. The pointwise limit corresponds to the Γ-limit with respect to the discrete topology.

  3.

    From the formulas above it is immediate to check that a constant sequence \(F_{\varepsilon } = F\) Γ-converges to itself if and only if F is lower semicontinuous; i.e., if and only if (LB) holds with \(F_{\varepsilon } = F\) (F is always an upper bound, as seen by taking the constant sequence \(u_{\varepsilon } = u\)). More generally, a constant sequence \(F_{\varepsilon } = F\) Γ-converges to the lower-semicontinuous envelope \(\overline{F}\) of F defined by

    $$\displaystyle{\overline{F} =\max \{ G: G \leq F,G\text{ is lower semicontinuous}\};}$$

    in particular the Γ-limit is a lower-semicontinuous function.

  4.

    It may be convenient to notice that the upper and lower Γ-limits are lower-semicontinuous functions and, with the notation just introduced, that

    $$\displaystyle{ \varGamma \text{-}\liminf _{\varepsilon \rightarrow 0}F_{\varepsilon }(u) =\varGamma \text{-}\liminf _{\varepsilon \rightarrow 0}\overline{F_{\varepsilon }}(u) }$$
    (2.7)
    $$\displaystyle{ \varGamma \text{-}\limsup _{\varepsilon \rightarrow 0}F_{\varepsilon }(u) =\varGamma \text{-}\limsup _{\varepsilon \rightarrow 0}\overline{F_{\varepsilon }}(u)\,; }$$
    (2.8)

    that is, Γ-limits are unchanged upon substituting \(F_{\varepsilon }\) with its lower-semicontinuous envelope. These properties are important for the actual computation of Γ-limits, since in many cases lower-semicontinuous envelopes satisfy structural properties that make them easier to handle. As an example, we may consider (homogeneous) integral functionals of the form

    $$\displaystyle{F(u) =\int _{\varOmega }f(u)\,\mathit{dx},}$$

    defined on \(L^{1}(\varOmega )\) equipped with the weak topology. Under some growth conditions, the Γ-limits can be computed with respect to the weak topology on bounded sets of \(L^{1}(\varOmega )\), which is metrizable. In this case, the lower-semicontinuous envelope of F is

    $$\displaystyle{\overline{F}(u) =\int _{\varOmega }{f}^{{\ast}{\ast}}(u)\,\mathit{dx},}$$

    where \(f^{{\ast}{\ast}}\) is the convex and lower-semicontinuous envelope of f; i.e.,

    $$\displaystyle{{f}^{{\ast}{\ast}} =\max \{ g: g \leq f,\ g\text{ is lower-semicontinuous and convex}\}.}$$

    In particular, convexity is a necessary condition for a functional to be a Γ-limit of the integral form above.
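As a simple worked instance of this last statement (with an integrand chosen here only for illustration), take \(f(z) = {(\vert z\vert - 1)}^{2}\), which is not convex. One computes

$$\displaystyle{{f}^{{\ast}{\ast}}(z) = {((\vert z\vert - 1) \vee 0)}^{2} = \left \{\begin{array}{@{}l@{\quad }l@{}} 0 \quad &\text{if}\,\vert z\vert \leq 1\\ {(\vert z\vert - 1)}^{2}\quad &\text{if}\,\vert z\vert > 1, \end{array} \right.}$$

obtained by replacing f between its two zeros \(\pm 1\) with the constant 0; the corresponding relaxed functional \(u\mapsto \int _{\varOmega }{f}^{{\ast}{\ast}}(u)\,\mathit{dx}\) vanishes exactly on the functions with \(\vert u\vert \leq 1\) a.e.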

2.3 Convergence of Minimum Problems

As we have already remarked, the Γ-convergence of \(F_{\varepsilon }\) by itself does not imply the convergence of minimizers (or ‘almost minimizers’). It is then necessary to assume a compactness (or ‘mild coerciveness’) property as follows:

$$\displaystyle{ \text{there exists a precompact sequence }(u_{\varepsilon })\text{ with }F_{\varepsilon }(u_{\varepsilon }) =\inf F_{\varepsilon } + o(1)\text{ as }\varepsilon \rightarrow 0, }$$
(2.9)

which is implied by the following stronger condition

$$\displaystyle{ \text{there exists a compact set }K\text{ such that }\inf F_{\varepsilon } =\mathop{ \inf }\nolimits _{K}F_{\varepsilon }\text{ for all }\varepsilon > 0. }$$
(2.10)

This condition is implied by the equi-coerciveness hypothesis (2.3); i.e., it holds if for all c there exists a compact set K such that the sublevel sets \(\{F_{\varepsilon } \leq c\}\) are all contained in K. To check that (2.10) is indeed stronger than (2.9), consider \(F_{\varepsilon }(x) =\varepsilon e^{x}\) on the real line: any converging sequence satisfies (2.9) but (2.10) does not hold.
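Spelling out this example: \(\inf _{\mathbb{R}}F_{\varepsilon } = 0\) (attained only in the limit as \(x \rightarrow -\infty \)), and the constant sequence \(u_{\varepsilon } = 0\) satisfies \(F_{\varepsilon }(u_{\varepsilon }) =\varepsilon =\inf F_{\varepsilon } + o(1)\), so that (2.9) holds; on the other hand, for every compact \(K \subset \mathbb{R}\) we have

$$\displaystyle{\inf _{K}F_{\varepsilon } =\varepsilon {e}^{\min K} > 0 =\inf F_{\varepsilon },}$$

so that no compact set as in (2.10) exists.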

By arguing as for Propositions 2.1 and 2.2 we will deduce the convergence of minima. This result is made precise in the following theorem.

Theorem 2.1 (Fundamental Theorem of Γ-convergence).

Let \((F_{\varepsilon })\) satisfy the compactness property (2.9) and Γ-converge to F. Then

  (i)

    F admits a minimum, and \(\min F =\lim \limits _{\varepsilon \rightarrow 0}\inf F_{\varepsilon }\) .

  (ii)

    If \((u_{\varepsilon _{k}})\) is a minimizing sequence for some subsequence \((F_{\varepsilon _{k}})\) (i.e., is such that \(F_{\varepsilon _{k}}(u_{\varepsilon _{k}}) =\inf F_{\varepsilon _{k}} + o(1)\) as \(k \rightarrow \infty\) ) which converges to some \(\overline{u}\) , then its limit point is a minimizer of F.

Proof.

By condition (2.9) we can argue as in the proof of Proposition 2.1 with K = X and also apply Proposition 2.2 with A = X to deduce that

$$\displaystyle{ \inf F \geq \limsup _{\varepsilon \rightarrow 0}\inf F_{\varepsilon } \geq \liminf _{\varepsilon \rightarrow 0}\inf F_{\varepsilon } \geq \inf F. }$$
(2.11)

We then deduce that the limit exists:

$$\displaystyle{\lim \limits _{\varepsilon \rightarrow 0}\inf F_{\varepsilon } =\inf F.}$$

Since from (2.9) there exists a minimizing sequence \((u_{\varepsilon })\) from which we can extract a converging subsequence, it suffices to prove (ii). We can then follow the proof of Proposition 2.1 to deduce as in (2.2) that

$$\displaystyle{\inf F \leq F(\overline{u}) \leq \lim _{k}F_{\varepsilon _{k}}(u_{\varepsilon _{k}}) =\lim _{\varepsilon \rightarrow 0}\inf F_{\varepsilon } =\inf F;}$$

i.e., \(F(\overline{u}) =\inf F\) as desired. □

Corollary 2.1.

In the hypotheses of Theorem  2.1 the minimizers of F are exactly the limits of converging minimizing sequences.

Proof.

If \(\overline{u}\) is the limit of a converging minimizing sequence then it is a minimizer of F by (ii) in Theorem 2.1. Conversely, if \(\overline{u}\) is a minimizer of F, then every recovery sequence \((u_{\varepsilon })\) for \(\overline{u}\) is a minimizing sequence. □

Remark 2.6.

Trivially, it is not true that all minimizers of F are limits of minimizers of \(F_{\varepsilon }\), since this is not true even for (locally) uniformly converging sequences on the line. Take for example:

  (1)

    \(F_{\varepsilon }(x) =\varepsilon x^{2}\) or \(F_{\varepsilon }(x) =\varepsilon e^{x}\) and F(x) = 0. All points minimize the limit, but only x = 0 minimizes \(F_{\varepsilon }\) in the first case, and there is no minimizer in the second case.

  (2)

    \(F(x) = (x^{2} - 1)^{2}\) and \(F_{\varepsilon }(x) = F(x) +\varepsilon (x - 1)^{2}\). F is minimized by 1 and −1, but the only minimizer of \(F_{\varepsilon }\) is 1. Note, however, that −1 is the limit of strong local minimizers of \(F_{\varepsilon }\), as the computation below shows.
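This last claim can be verified directly: the critical points of \(F_{\varepsilon }\) solve

$$\displaystyle{F^{\prime}_{\varepsilon }(x) = 4x({x}^{2} - 1) + 2\varepsilon (x - 1) = 2(x - 1)(2{x}^{2} + 2x+\varepsilon ) = 0,}$$

giving x = 1 and \(x = (-1 \pm \sqrt{1 - 2\varepsilon })/2\); the root \(x_{\varepsilon } = -(1 + \sqrt{1 - 2\varepsilon })/2\) is a strict local minimizer (there \(F^{\prime\prime}_{\varepsilon } > 0\)), and \(x_{\varepsilon } \rightarrow -1\) as \(\varepsilon \rightarrow 0\).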

2.4 An Example: Homogenization

The theory of homogenization of integral functionals is a very wide subject in itself. We will refer to monographs on the subject for details if needed. In this context, we want only to highlight some facts that will be used in the sequel and give a hint of the behaviour in the case of elliptic energies.

We consider \(a: {\mathbb{R}}^{n} \rightarrow [\alpha,\beta ]\), with \(0 <\alpha <\beta < +\infty \), 1-periodic in the coordinate directions, and the integrals

$$\displaystyle{F_{\varepsilon }(u) =\int _{\varOmega }a{\Bigl (\frac{x} {\varepsilon } \Bigr )}\vert \nabla u{\vert }^{2}\,\mathit{dx}}$$

defined in \(H^{1}(\varOmega )\), where Ω is a bounded open subset of \({\mathbb{R}}^{n}\). The computation of the Γ-limit of \(F_{\varepsilon }\) is referred to as their homogenization, implying that a simpler ‘homogeneous’ (i.e., x-independent) functional can be used to capture the relevant features of \(F_{\varepsilon }\). The limit can be computed with respect to the \(L^{1}\)-topology, but it can also be improved; e.g., in 1D it coincides with the limit in the \(L^{\infty }\)-topology. This means that the liminf inequality holds for \(u_{\varepsilon }\) converging to u in the \(L^{1}\)-topology, while there exists a recovery sequence with \(u_{\varepsilon }\) tending to u in the \(L^{\infty }\) sense.

An upper bound is given by the pointwise limit of \(F_{\varepsilon }\), whose computation in this case can be obtained by the following non-trivial but well-known result.

Proposition 2.4 (Riemann–Lebesgue lemma).

The functions \(a_{\varepsilon }(x) = a{\Bigl (\frac{x} {\varepsilon } \Bigr )}\) converge weakly\({}^{\ast }\) in \(L^{\infty }\) to their average

$$\displaystyle{ \overline{a} =\int _{{(0,1)}^{n}}a(y)\,\mathit{dy}. }$$
(2.12)

For fixed u the pointwise limit of \(F_{\varepsilon }(u)\) is then simply \(\overline{a}\int _{\varOmega }\vert \nabla u{\vert }^{2}\,\mathit{dx}\), which therefore gives an upper bound for the Γ-limit.

In a one-dimensional setting, the Γ-limit is completely described by the harmonic mean of a, and is given by

$$\displaystyle{F_{\mathrm{hom}}(u) =\underline{ a}\int _{\varOmega }\vert u^{\prime}{\vert }^{2}\,\mathit{dx},\qquad \text{ where }\qquad \underline{a} ={\Bigl (\int _{ 0}^{1} \frac{1} {a(y)}\,{\mathit{dy}\Bigr )}}^{-1}}$$

is the harmonic mean of a. We briefly sketch a proof, which also provides the ansatz for recovery sequences. We check the liminf inequality for \(u_{\varepsilon } \rightarrow u\). Suppose, for the sake of simplicity, that \(N = 1/\varepsilon \in \mathbb{N}\), and write

$$\displaystyle\begin{array}{rcl} F_{\varepsilon }(u_{\varepsilon })& =& \sum _{i=1}^{N}\int _{ \varepsilon (i-1)}^{\varepsilon i}a{\Bigl (\frac{x} {\varepsilon } \Bigr )}\vert u^{\prime}_{\varepsilon }{\vert }^{2}\,\mathit{dx} {}\\ & \geq & \sum _{i=1}^{N}\varepsilon \min {\Bigl \{\int _{ 0}^{1}a(y)\vert v^{\prime}{\vert }^{2}\,\mathit{dy}: v(1) - v(0) = \frac{u_{\varepsilon }(\varepsilon i) - u_{\varepsilon }(\varepsilon (i - 1))} {\varepsilon } \Bigr \}} {}\\ & =& \underline{a}\sum _{i=1}^{N}\varepsilon {\Bigl |{\frac{u_{\varepsilon }(\varepsilon i) - u_{\varepsilon }(\varepsilon (i - 1))} {\varepsilon } \Bigr |}}^{2}. {}\\ \end{array}$$

The inequality in the second line is obtained by minimizing, on each interval \((\varepsilon (i - 1),\varepsilon i)\), over all functions w with \(w(\varepsilon (i - 1)) = u_{\varepsilon }(\varepsilon (i - 1))\) and \(w(\varepsilon i) = u_{\varepsilon }(\varepsilon i)\); the minimum problem in the second line is obtained by scaling such w and using the periodicity of a; the third line is a direct computation of the previous minimum (see below).
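The direct computation mentioned in the last step is an instance of the Cauchy–Schwarz inequality: if v(1) − v(0) = z then

$$\displaystyle{{z}^{2} ={\Bigl (\int _{0}^{1}\sqrt{a}\,v^{\prime}\, \frac{1} {\sqrt{a}}\,\mathit{dy}\Bigr )}^{2} \leq \int _{0}^{1}a\vert v^{\prime}{\vert }^{2}\,\mathit{dy}\,\int _{0}^{1} \frac{1} {a}\,\mathit{dy},}$$

with equality when \(v^{\prime}\) is proportional to 1∕a; hence the minimum in the second line equals \(\underline{a}\,{z}^{2}\).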

If we define \(\tilde{u}_{\varepsilon }\) as the piecewise-affine interpolation of \(u_{\varepsilon }\) on \(\varepsilon \mathbb{Z}\) then the estimate above shows that

$$\displaystyle{F_{\varepsilon }(u_{\varepsilon }) \geq F_{\mathrm{hom}}(\tilde{u}_{\varepsilon }).}$$

The functional on the right-hand side is independent of \(\varepsilon\) and has a convex integrand; hence, it is lower semicontinuous with respect to weak \(H^{1}\)-convergence. Since \(\tilde{u}_{\varepsilon }\) converges weakly to u in \(H^{1}\), we then deduce

$$\displaystyle{\liminf _{\varepsilon \rightarrow 0}F_{\varepsilon }(u_{\varepsilon }) \geq \liminf _{\varepsilon \rightarrow 0}F_{\mathrm{hom}}(\tilde{u}_{\varepsilon }) \geq F_{\mathrm{hom}}(u);}$$

i.e., the liminf inequality. The ansatz for the upper bound is obtained by making the lower bound sharp: recovery sequences oscillate around the target function in an optimal way. If the target function u(x) = zx is linear then a recovery sequence is obtained by taking the 1-periodic function v minimizing

$$\displaystyle{\min {\Bigl \{\int _{0}^{1}a(y)\vert v^{\prime} + 1{\vert }^{2}\,dy: v(0) = v(1) = 0\Bigr \}} =\underline{ a},}$$

and setting

$$\displaystyle{u_{\varepsilon }(x) = z{\Bigl (x +\varepsilon v{\Bigl (\frac{x} {\varepsilon } \Bigr )}\Bigr )}.}$$

If u is affine then the construction is the same upon adding a constant. This construction can be repeated up to an error of order \(\varepsilon\) if u is piecewise affine, and then carries over to arbitrary u by density.
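As a consistency check in the linear case u(x) = zx: since \(u^{\prime}_{\varepsilon }(x) = z(1 + v^{\prime}(x/\varepsilon ))\), the Riemann–Lebesgue lemma applied to the 1-periodic function \(y\mapsto a(y)\vert 1 + v^{\prime}(y){\vert }^{2}\), whose mean is \(\underline{a}\) by the choice of v, gives

$$\displaystyle{F_{\varepsilon }(u_{\varepsilon }) = {z}^{2}\int _{\varOmega }a{\Bigl (\frac{x} {\varepsilon } \Bigr )}{\Bigl \vert 1 + v^{\prime}{\Bigl (\frac{x} {\varepsilon } \Bigr )}\Bigr \vert }^{2}\,\mathit{dx} \rightarrow \underline{a}\,{z}^{2}\vert \varOmega \vert = F_{\mathrm{hom}}(u),}$$

while \(\Vert u_{\varepsilon } - u\Vert _{\infty }\leq \vert z\vert \,\varepsilon \Vert v\Vert _{\infty }\rightarrow 0\), so that \((u_{\varepsilon })\) is indeed a recovery sequence.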

As a particular case, we can fix θ ∈ [0, 1] and consider the 1-periodic a given on [0, 1) by

$$\displaystyle{ a(y) = \left \{\begin{array}{@{}l@{\quad }l@{}} \alpha \quad &\text{if}\,0 \leq y <\theta \\ \beta \quad &\text{if} \,\theta \leq y < 1. \end{array} \right. }$$
(2.13)

In this case we have

$$\displaystyle{ \underline{a} = \frac{\alpha \beta } {\theta \beta +(1-\theta )\alpha }. }$$
(2.14)

Note that if a is a 1-periodic function with \(\vert \{y \in (0,1): a(y) =\alpha \} \vert =\theta\) and \(\vert \{y \in (0,1): a(y) =\beta \} \vert = 1-\theta\) then the Γ-limit is the same. Thus, in dimension one, the limit depends only on the volume fraction of α.
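As a numerical illustration, take α = 1, β = 2 and θ = 1∕2 in (2.14):

$$\displaystyle{\underline{a} = \frac{1 \cdot 2} {\frac{1} {2} \cdot 2 + \frac{1} {2} \cdot 1} = \frac{4} {3} < \frac{3} {2} = \overline{a},}$$

so the homogenized coefficient lies strictly below the average \(\overline{a}\) appearing in the pointwise (upper-bound) limit.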

In the higher-dimensional case the limit can still be described by an elliptic integral, of the form

$$\displaystyle{F_{\mathrm{hom}}(u) =\int _{\varOmega }\langle A\nabla u,\nabla u\rangle \,\mathit{dx},}$$

where A is a constant symmetric matrix with \(\underline{a}I \leq A \leq \overline{a}I\) (I the identity matrix), with strict inequalities unless a is constant. If in two dimensions we take \(a(y_{1},y_{2}) = a(y_{1})\) (a laminate in the first direction), then A is the diagonal matrix \(\mathrm{diag}(\underline{a},\overline{a})\). Of course, if \(a(y_{1},y_{2}) = a(y_{2})\) then the two values are interchanged. In particular, if a takes only the values α and β, this shows that in the higher-dimensional case the result depends on the geometry of \(\{y: a(y) =\alpha \}\) (often referred to as the microgeometry of the problem) and not only on the volume fraction.

A class of meaningful minimum problems is obtained by considering as \(F_{\varepsilon }\) the restriction of the previous functionals to the affine space \(X =\varphi +H_{0}^{1}(\varOmega )\) (i.e., we consider only functions with \(u =\varphi\) on ∂ Ω). It can be proved that this boundary condition is ‘compatible’ with the Γ-limit; i.e., the Γ-limit is the restriction to X of the previous one or, equivalently, recovery sequences for the first Γ-limit can be taken satisfying the same boundary data as their limit. As a consequence of Theorem 2.1 we then conclude that oscillating minimum problems for \(F_{\varepsilon }\) with fixed boundary data are approximated by a simpler minimum problem with the same boundary data. Note, however, that all energies, both \(F_{\varepsilon }\) and \(F_{\mathrm{hom}}\), are strictly convex, which implies that they have no local minimizers other than the global ones.

Example 2.3.

We can perturb the previous energies with some continuously converging perturbation to obtain some additional convergence result. For example, we can add perturbations of the form

$$\displaystyle{G_{\varepsilon }(u) =\int _{\varOmega }g{\Bigl (\frac{x} {\varepsilon },u\Bigr )}\,\mathit{dx}.}$$

On g we make the following hypothesis:

g is a Borel function 1-periodic in the first variable and uniformly Lipschitz in the second one; i.e.,

$$\displaystyle{\vert g(y,z) - g(y,z^{\prime})\vert \leq L\vert z - z^{\prime}\vert.}$$

We then have a perturbed homogenization result as follows.

Proposition 2.5.

The functionals \(F_{\varepsilon } + G_{\varepsilon }\) Γ-converge in the \(L^{1}\)-topology to the functional \(F_{\mathrm{hom}} + G\), where

$$\displaystyle{G(u) =\int _{\varOmega }\overline{g}(u)\,\mathit{dx},\qquad \text{ and }\qquad \overline{g}(z) =\int _{{(0,1)}^{n}}g(y,z)\,\mathit{dy}}$$

is simply the average of g(⋅,z).

Proof.

By Proposition 2.3 it suffices to show that \(G_{\varepsilon }\) converges continuously with respect to the \(L^{1}\)-convergence. If \(u_{\varepsilon } \rightarrow u\) in \(L^{1}\) then

$$\displaystyle\begin{array}{rcl} \vert G_{\varepsilon }(u_{\varepsilon }) - G(u)\vert & \leq & \int _{\varOmega }{\Bigl |g{\Bigl (\frac{x} {\varepsilon },u_{\varepsilon }\Bigr )} - g{\Bigl (\frac{x} {\varepsilon },u\Bigr )}\Bigr |}\,\mathit{dx} + \vert G_{\varepsilon }(u) - G(u)\vert {}\\ &\leq & L\int _{\varOmega }\vert u_{\varepsilon } - u\vert \,\mathit{dx} + \vert G_{\varepsilon }(u) - G(u)\vert. {}\\ \end{array}$$

It suffices then to show that \(G_{\varepsilon }\) converges pointwise to G. If u is piecewise constant then this follows immediately from the Riemann–Lebesgue lemma. Noting that also \(\vert \overline{g}(z) -\overline{g}(z^{\prime})\vert \leq L\vert z - z^{\prime}\vert \), we easily obtain the convergence for all \(u \in L^{1}(\varOmega )\) by the density of piecewise-constant functions. □
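For the reader’s convenience, the pointwise limit used in the proof can be spelled out on a piecewise-constant \(u =\sum _{j}z_{j}\chi _{\varOmega _{j}}\) (a finite measurable partition of Ω, written here only as an illustration, and assuming \(g(\cdot,z_{j})\) integrable on the periodicity cell):

$$\displaystyle{G_{\varepsilon }(u) =\sum _{j}\int _{\varOmega _{j}}g{\Bigl (\frac{x} {\varepsilon },z_{j}\Bigr )}\,\mathit{dx} \rightarrow \sum _{j}\vert \varOmega _{j}\vert \,\overline{g}(z_{j}) = G(u),}$$

by the analogue of the Riemann–Lebesgue lemma applied to each 1-periodic function \(y\mapsto g(y,z_{j})\).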

Note that, with a slightly more technical proof, we can improve the Lipschitz continuity condition to a local Lipschitz continuity of the form

$$\displaystyle{\vert g(y,z) - g(y,z^{\prime})\vert \leq L(1 + \vert z\vert + \vert z^{\prime}\vert )\vert z - z^{\prime}\vert.}$$

In particular, in dimension one we can apply the result to \(g(y,z) = a(y)\vert z{\vert }^{2}\), and we obtain that

$$\displaystyle{\int _{\varOmega }a{\Bigl (\frac{x} {\varepsilon } \Bigr )}(\vert u^{\prime}{\vert }^{2} + \vert u{\vert }^{2})\,\mathit{dx}}$$

Γ-converges to

$$\displaystyle{\int _{\varOmega }(\underline{a}\vert u^{\prime}{\vert }^{2} + \overline{a}\vert u{\vert }^{2})\,\mathit{dx}.}$$

As a consequence of Theorem 2.1, under the coerciveness condition

$$\displaystyle{\lim _{z\rightarrow \pm \infty }\inf _{y}g(y,z) = +\infty,}$$

we obtain a convergence result as follows.

Proposition 2.6.

The solutions to the minimum problems

$$\displaystyle{\min {\Bigl \{F_{\varepsilon }(u) + G_{\varepsilon }(u): u \in {H}^{1}(\varOmega )\Bigr \}}}$$

converge (up to subsequences) to a constant function \(\overline{u}\) , whose constant value minimizes \(\overline{g}\) .

Proof.

The proof of the proposition follows immediately from Theorem 2.1, once we observe that, by the coerciveness and continuity of \(\overline{g}\), a minimizer for that function exists, and the constant function \(\overline{u}\) defined above minimizes both F hom and G.□

If g is differentiable then, by computing the Euler–Lagrange equations of \(F_{\varepsilon } + G_{\varepsilon }\), we conclude that the minimizers \(u_{\varepsilon }\) solve

$$\displaystyle{ -\sum _{i} \frac{\partial } {\partial x_{i}}{\Bigl (a{\Bigl (\frac{x} {\varepsilon } \Bigr )} \frac{\partial u_{\varepsilon }} {\partial x_{i}}\Bigr )} + \frac{\partial } {\partial u}g{\Bigl (\frac{x} {\varepsilon },u_{\varepsilon }\Bigr )} = 0 }$$
(2.15)

with Neumann boundary conditions, converging to the constant \(\overline{u}\).

2.5 Higher-Order Γ-Limits and a Choice Criterion

We have noticed that if the hypotheses of Theorem 2.1 are satisfied then every minimum point of the Γ-limit F corresponds to a minimizing sequence for \(F_{\varepsilon }\) (see Corollary 2.1). However, not all points may be limits of minimizers for \(F_{\varepsilon }\); conversely, it may be interesting to discriminate between limits of minimizing sequences with different speeds of convergence. To this end, we may look at scaled Γ-limits. If we suppose that, e.g., u is a limit of a sequence \((u_{\varepsilon })\) with

$$\displaystyle{ F_{\varepsilon }(u_{\varepsilon }) =\min F + O(\varepsilon ^{\alpha }) }$$
(2.16)

for some α > 0 (but, of course, the rate of convergence may also not be polynomial), then we may look at the Γ-limit of the scaled functionals

$$\displaystyle{ F_{\varepsilon }^{\alpha }(u) = \frac{F_{\varepsilon }(u) -\min F} {\varepsilon ^{\alpha }}. }$$
(2.17)

Suppose that \(F_{\varepsilon }^{\alpha }\) Γ-converges to some \(F^{\alpha }\) not taking the value \(-\infty \). Then:

  (i)

    The domain of \(F^{\alpha }\) is contained in the set of minimizers of F (but it may as well be empty).

  (ii)

    \(F^{\alpha }(u)\neq +\infty \) if and only if there exists a recovery sequence for u satisfying (2.16).

Moreover, we can apply Theorem 2.1 to \(F_{\varepsilon }^{\alpha }\) and obtain the following result, which gives a choice criterion among minimizers of F.

Theorem 2.2.

Let the hypotheses of Theorem  2.1 be satisfied, and let the functionals in (2.17) Γ-converge to some \(F^{\alpha }\) not taking the value −∞ and not identically + ∞. Then

  (i)

    \(\inf F_{\varepsilon } =\min F +\varepsilon ^{\alpha }\min F^{\alpha } + o(\varepsilon ^{\alpha })\) .

  (ii)

    If \(F_{\varepsilon }(u_{\varepsilon }) =\min F_{\varepsilon } + o(\varepsilon ^{\alpha })\) and \(u_{\varepsilon } \rightarrow u\) , then u minimizes both F and \(F^{\alpha }\) .

Proof.

We can apply Theorem 2.1 to a (subsequence of a) converging minimizing sequence for \(F_{\varepsilon }^{\alpha }\); i.e., a sequence satisfying hypothesis (ii). Its limit point u satisfies

$$\displaystyle{F^{\alpha }(u) =\min F^{\alpha } =\lim _{\varepsilon \rightarrow 0}\min F_{\varepsilon }^{\alpha } =\lim _{\varepsilon \rightarrow 0}\frac{\min F_{\varepsilon } -\min F} {\varepsilon ^{\alpha }},}$$

which proves (i). Since, as already remarked, u is also a minimizer of F, we also obtain (ii). □

Example 2.4.

Simple examples on the real line:

  (1)

    If \(F_{\varepsilon }(x) =\varepsilon x^{2}\) then F(x) = 0. We have \(F^{\alpha }(x) = 0\) if 0 <α < 1, \(F^{1}(x) = x^{2}\) (if α = 1), and

    $$\displaystyle{{F}^{\alpha }(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} 0 \quad &x = 0\\ +\infty \quad &x\neq 0\end{array} \right.}$$

    if α > 1.

  (2)

    If \(F_{\varepsilon }(x) = (x^{2} - 1)^{2} +\varepsilon (x - 1)^{2}\) then \(F(x) = (x^{2} - 1)^{2}\). We have

    $$\displaystyle{{F}^{\alpha }(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} 0 \quad &\vert x\vert = 1\\ +\infty \quad &\vert x\vert \neq 1 \end{array} \right.}$$

    if 0 <α < 1,

    $$\displaystyle{{F}^{1}(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} 0 \quad &x = 1\\ 4 \quad &x = -1 \\ +\infty \quad &\vert x\vert \neq 1\end{array} \right.}$$

    if α = 1,

    $$\displaystyle{{F}^{\alpha }(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} 0 \quad &x = 1\\ +\infty \quad &x\neq 1\end{array} \right.}$$

    if α > 1.
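The computations in case (1) can be checked directly: there min F = 0 and

$$\displaystyle{F_{\varepsilon }^{\alpha }(x) = \frac{\varepsilon {x}^{2} - 0} {\varepsilon ^{\alpha }} =\varepsilon ^{1-\alpha }{x}^{2},}$$

which converges locally uniformly to 0 if α < 1 and to \({x}^{2}\) if α = 1; if α > 1 then, for x ≠ 0, every sequence \(x_{\varepsilon } \rightarrow x\) gives \(\varepsilon ^{1-\alpha }x_{\varepsilon }^{2} \rightarrow +\infty \), while \(x_{\varepsilon } = 0\) is a recovery sequence at x = 0.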

Remark 2.7.

It must be observed that the functionals \(F_{\varepsilon }^{\alpha }\) in Theorem 2.2 are often equi-coercive with respect to a stronger topology than the original \(F_{\varepsilon }\), so that the convergence in (ii) can be improved, as in the following example.

Example 2.5 (Gradient theory of phase transitions).

Let

$$\displaystyle{ F_{\varepsilon }(u) =\int _{\varOmega }(W(u) {+\varepsilon }^{2}\vert \nabla u{\vert }^{2})\,\mathit{dx} }$$
(2.18)

be defined in \(L^{1}(\varOmega )\) with domain \(H^{1}(\varOmega )\). Here \(W(u) = (u^{2} - 1)^{2}\) (or a more general double-well potential; i.e., a non-negative function vanishing exactly at \(\pm 1\)). Then \((F_{\varepsilon })\) is equi-coercive with respect to the weak \(L^{1}\)-convergence. Since this convergence is metrizable on bounded sets, we can consider \(L^{1}(\varOmega )\) equipped with this convergence. The Γ-limit is then simply

$$\displaystyle{{F}^{0}(u) =\int _{\varOmega }{W}^{{\ast}{\ast}}(u)\,\mathit{dx},}$$

where \(W^{{\ast}{\ast}}\) is the convex envelope of W; i.e., \(W^{{\ast}{\ast}}(u) = {(({u}^{2} - 1) \vee 0)}^{2}\). All functions u with \(\|u\|_{\infty }\leq 1\) are minimizers of \(F^{0}\).

We take α = 1 and consider

$$\displaystyle{ F_{\varepsilon }^{1}(u) =\int _{\varOmega }{\Bigl (\frac{W(u)} {\varepsilon } +\varepsilon \vert \nabla u{\vert }^{2}\Bigr )}\,dx. }$$
(2.19)

Then \((F_{\varepsilon }^{1})\) is equi-coercive with respect to the strong \(L^{1}\)-convergence, and its Γ-limit is

$$\displaystyle{ {F}^{1}(u) = c_{ W}{\mathcal{H}}^{n-1}(\partial \{u = 1\}\cap \varOmega )\text{ for }u \in BV (\varOmega;\{\pm 1\}), }$$
(2.20)

and \(+\infty \) otherwise, where \(c_{W} = 8/3\) (in general \(c_{W} = 2\int _{-1}^{1}\sqrt{W(s)}\,\mathit{ds}\)). Here we denote by \(BV(\varOmega )\) the space of functions of bounded variation in Ω, and by \(BV (\varOmega;\{\pm 1\})\) the space of functions of bounded variation in Ω taking only the values ± 1.

This result states that recovery sequences \((u_{\varepsilon })\) tend to sit in the bottom of the wells (i.e., the limit u takes only the values \(\pm 1\)) in order to keep \(\frac{W(u_{\varepsilon })} {\varepsilon }\) finite; however, every ‘phase transition’ costs a positive amount, which is optimized by balancing the effects of the two terms in the integral. Indeed, by optimizing the transition between the phases \(\{u = 1\}\) and \(\{u = -1\}\) one obtains the optimal ‘surface tension’ \(c_{W}\).

In one dimension the ansatz for the recovery sequences around a jump point \(x_{0}\) is that they are of the form

$$\displaystyle{u_{\varepsilon }(x) = v{\Bigl (\frac{x - x_{0}} {\varepsilon } \Bigr )},}$$

where v minimizes

$$\displaystyle{\min {\Bigl \{\int _{-\infty }^{+\infty }(W(v) + \vert v^{\prime}{\vert }^{2})\,dx:\ v(\pm \infty ) = \pm 1\Bigr \}} = 2\int _{ -1}^{1}\sqrt{W(s)}\,\mathit{ds}.}$$
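The value of this minimum follows from the elementary inequality \(W(v) + \vert v^{\prime}{\vert }^{2} \geq 2\sqrt{W(v)}\,\vert v^{\prime}\vert \), which gives

$$\displaystyle{\int _{-\infty }^{+\infty }(W(v) + \vert v^{\prime}{\vert }^{2})\,\mathit{dx} \geq 2\int _{-\infty }^{+\infty }\sqrt{W(v)}\,\vert v^{\prime}\vert \,\mathit{dx} \geq 2\int _{-1}^{1}\sqrt{W(s)}\,\mathit{ds},}$$

with equality when \(v^{\prime} = \sqrt{W(v)}\); for \(W(v) = {({v}^{2} - 1)}^{2}\) this equation reads \(v^{\prime} = 1 - {v}^{2}\) and is solved by the optimal profile \(v(t) =\tanh t\), giving \(c_{W} = 2\int _{-1}^{1}(1 - {s}^{2})\,\mathit{ds} = 8/3\).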

In more than one dimension the ansatz becomes

$$\displaystyle{u_{\varepsilon }(x) = v{\Bigl (\frac{d(x,\partial \{u = 1\})} {\varepsilon } \Bigr )},}$$

where \(d(\cdot,A)\) is the signed distance from the set A. This means that around the interface \(\partial \{u = 1\}\) the recovery sequence passes from − 1 to + 1 following the one-dimensional profile of v essentially on a \(O(\varepsilon )\)-neighbourhood of \(\partial \{u = 1\}\).

Note that:

  (i)

    We have an improved convergence of recovery sequences, from weak to strong \(L^{1}\)-convergence.

  (ii)

    The domain of \(F^{1}\) is almost disjoint from that of the \(F_{\varepsilon }^{1}\), the only two functions in common being the constants \(\pm 1\).

  (iii)

    In order for the Γ-limit to be properly defined we have to use the space of functions of bounded variation or, equivalently (taking the constraint u(x) = ±1 a.e. into account), the family of sets of finite perimeter, with the set \(A =\{ u = 1\}\) as parameter. In this context the set \(\partial \{u = 1\}\) is properly defined in a measure-theoretical way, as is its (n − 1)-dimensional Hausdorff measure. Even though ∂ A may greatly differ from the topological boundary of A, we will use the same symbol in order not to overburden the notation. Note, however, that in our examples sets can be supposed to be smooth enough, so that the two notions coincide.

Example 2.6 (Linearized fracture mechanics from interatomic potentials).

We now give an example in which the scaling of the variable, and not only of the energy, is part of the problem. We consider a system of one-dimensional nearest-neighbour atomistic interactions with a Lennard-Jones type potential. Note that, by the one-dimensional nature of the problem, we can parameterize the positions of the atoms as an increasing function of their index.

Let ψ be a \(C^{2}\) potential as in Fig. 2.1, with domain \((0,+\infty )\) (we set \(\psi (z) = +\infty \) for z ≤ 0), with a minimum at 1 such that ψ″(1) > 0, convex in \((0,z_{0})\), concave in \((z_{0},+\infty )\) and tending to \(\psi (\infty ) < +\infty \) at \(+\infty \). A possible choice is the Lennard-Jones potential

$$\displaystyle{\psi (z) = \frac{1} {{z}^{12}} - \frac{2} {{z}^{6}}.}$$
Fig. 2.1 A Lennard-Jones potential

We consider the energy

$$\displaystyle{\varPsi _{N}(v) =\sum _{ i=1}^{N}\psi (v_{ i} - v_{i-1})}$$

with \(N \in \mathbb{N}\), defined for \(v_{i}\) with \(v_{i} > v_{i-1}\). We introduce the small parameter \(\varepsilon = 1/N\) and identify the vector \((v_{0},\ldots,v_{N})\) with a discrete function defined on \(\varepsilon \mathbb{Z} \cap [0,1]\) (i.e., \(v_{i} = v(\varepsilon i)\)). A non-trivial Γ-limit will be obtained by scaling and rewriting the energy in terms of the scaled variable

$$\displaystyle{u = \sqrt{\varepsilon }{\Bigl (v -\frac{\mathit{id}} {\varepsilon } \Bigr )};\qquad \text{ i.e., }u_{i} = \sqrt{\varepsilon }(v_{i} - i).}$$

This scaling can be justified by noting that (up to additive constants) \(v_{i} = i\), i.e. \(v = \mathit{id}/\varepsilon \), is the absolute minimizer of the energy. The scaled energies that we consider are

$$\displaystyle{F_{\varepsilon }(u) =\varPsi _{N}{\Bigl (\frac{id} {\varepsilon } + \frac{u} {\sqrt{\varepsilon }}\Bigr )}-\min \varPsi _{N} =\sum _{ i=1}^{N}J{\Bigl (\frac{u_{i} - u_{i-1}} {\sqrt{\varepsilon }} \Bigr )},}$$

where

$$\displaystyle{J(w) =\psi (1 + w)-\min \psi =\psi (1 + w) -\psi (1).}$$

For convenience we extend the function to all of \(\mathbb{R}\) by setting \(J(w) = +\infty \) if w ≤−1. Again, the vector \((u_{0},\ldots,u_{N})\) is identified with a discrete function defined on \(\varepsilon \mathbb{Z} \cap [0,1]\), or with its piecewise-affine interpolation. With this last identification, the \(F_{\varepsilon }\) can be viewed as functionals in \(L^{1}(0,1)\), and their Γ-limit computed with respect to the strong \(L^{1}(0,1)\)-topology.

We denote by \(w_{0} = z_{0} - 1\) the inflection point of J. It must be noted that for all \(\overline{w} > 0\) we have

$$\displaystyle{\#{\Bigl \{i: \frac{u_{i} - u_{i-1}} {\sqrt{\varepsilon }} > \overline{w}\Bigr \}} \leq \frac{1} {J(\overline{w})}F_{\varepsilon }(u),}$$

so that this number of indices is equi-bounded along sequences with equi-bounded energy. We may therefore suppose (up to subsequences) that the corresponding points \(\varepsilon i\) converge to a finite set S ⊂ [0, 1]. For fixed \(\overline{w}\), we have \(J(w) \geq \overline{c}\vert w{\vert }^{2}\) on \((-\infty,\overline{w}]\) for some \(\overline{c} > 0\); this gives, if A is compactly contained in \((0,1)\setminus S\), that

$$\displaystyle{F_{\varepsilon }(u) \geq \overline{c}\sum _{i}{\Bigl ({\frac{u_{i} - u_{i-1}} {\sqrt{\varepsilon }} \Bigr )}}^{2} = \overline{c}\sum _{ i}\varepsilon {\Bigl ({\frac{u_{i} - u_{i-1}} {\varepsilon } \Bigr )}}^{2} \geq \overline{c}\int _{ A}\vert u^{\prime}{\vert }^{2}\,\mathit{dt}}$$

(the sum being extended to those i such that \(\frac{u_{i}-u_{i-1}} {\sqrt{\varepsilon }} \leq \overline{w}\)). By the arbitrariness of A in this estimate we then deduce that if \(u_{\varepsilon } \rightarrow u\) and \(F_{\varepsilon }(u_{\varepsilon }) \leq C < +\infty \) then u is piecewise-\(H^{1}\); i.e., there exists a finite set S ⊂ (0, 1) such that \(u \in {H}^{1}((0,1)\setminus S)\); we denote by S(u) the minimal set such that \(u \in H^{1}((0,1)\setminus S(u))\).

$$\displaystyle{\overline{c}\int _{0}^{1}\vert u^{\prime}{\vert }^{2}\,\mathit{dt} + J(\overline{w}){\#}(S(u))}$$

is a lower bound for the Γ-limit of \(F_{\varepsilon }\). The Γ-limit on piecewise-\(H^{1}(0,1)\) functions can then be computed by optimizing the choice of \(\overline{w}\) and \(\overline{c}\), obtaining

$$\displaystyle{ F(u) = \frac{1} {2}J^{\prime\prime}(0)\int _{0}^{1}\vert u^{\prime}{\vert }^{2}\,\mathit{dt} + J(\infty )\#(S(u)) }$$
(2.21)

with the constraint that \({u}^{+} > {u}^{-}\) on S(u). This functional is the one-dimensional version of Griffith’s fracture energy for brittle materials, and coincides with a functional introduced by Mumford and Shah in the framework of image reconstruction (without the constraint \({u}^{+} > {u}^{-}\)).
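For the specific Lennard-Jones choice above, the constants in (2.21) can be computed explicitly: from \(\psi ^{\prime\prime}(z) = 156{z}^{-14} - 84{z}^{-8}\), ψ(1) = −1 and ψ(∞) = 0, we get

$$\displaystyle{J^{\prime\prime}(0) =\psi ^{\prime\prime}(1) = 72,\qquad J(\infty ) =\psi (\infty ) -\psi (1) = 1,}$$

so that (2.21) reads \(F(u) = 36\int _{0}^{1}\vert u^{\prime}{\vert }^{2}\,\mathit{dt} + \#(S(u))\).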

Note that the parameterization of \(v_{i}\) on \(\varepsilon \mathbb{Z}\) would suggest interpreting \(v_{i} - v_{i-1}\) as a difference quotient, and hence the change of variables \(u_{i} =\varepsilon v_{i} -\varepsilon i\); i.e., \(u =\varepsilon v -\mathit{id}\). This would give an energy of the form

$$\displaystyle{\tilde{F}_{\varepsilon }(u) =\sum _{ i=1}^{N}J{\Bigl (\frac{u_{i} - u_{i-1}} {\varepsilon } \Bigr )};}$$

it can be shown that \(\tilde{F}_{\varepsilon }\) Γ-converges to the energy with domain the set of piecewise-affine increasing u with u′ = 1 a.e., and for such u

$$\displaystyle{\tilde{F}(u) = J(\infty )\,{\#}(S(u)).}$$

This different choice of the parameterization hence only captures the fracture part of the energy.