Abstract
This manuscript extends a study initiated in Dacorogna et al. (C R Math Acad Sci Paris Ser I 353:1099–1104, 2015) to incorporate non-homogeneous cost functions. The problems studied here are convex optimization problems, but the subdifferentials of the actions we consider are not easily characterized, except when we deal with smooth cost functions with polynomial growth at infinity. We study minimization problems on paths of k-forms, which involve dual maximization problems with constraints on the co-differential of the k-forms. When \(k<n,\) only some directional derivatives of a vector field are controlled. This is in contrast with prior studies of optimal transportation of volume forms (\(k=n\)), where the full gradient of a scalar function is controlled. An additional complication emerges from the fact that our dual maximization problem cannot avoid the use of k-currents.
1 Introduction
This work continues our program on the theory of transportation of closed differential forms. The current manuscript studies actions defined on paths of closed differential forms, introduces various distances and improves on the study in [9] (for related work more centered on the symplectic case where \(k=2,\) see [10]). We denote by \(\Lambda ^{k}\) or \(\Lambda ^{k}(\mathbb {R}^{n})\), the set of exterior k-forms over \(\mathbb {R}^{n}\) (k-covectors of \(\mathbb {R}^{n}\)).
Consider a convex (in fact, contractible would suffice) open bounded set \(\Omega \subset \mathbb {R}^{n}\) and denote by \(\nu \) the outward unit normal to the boundary \(\partial \Omega \). Let d denote the exterior derivative operator on the set of differential forms on \(\Omega \) and let \(\delta \) denote the adjoint (or co-differential) of d. Let \(\bar{f}_{0},\bar{f}_{1}\) be two closed k-forms on \(\Omega \) (i.e. their distributional differentials \(d\bar{f}_{0}\) and \(d\bar{f}_{1}\) are null) such that the compatibility condition
is satisfied when \(1\le k\le n-1\) while we impose that
when \(k=n.\) Accordingly, we denote by \(\mathcal {H}\) the set of k-forms \(h\in L^{1}(\Omega ;\Lambda ^{k})\) which are closed in the weak sense and such that, when \(1\le k\le n-1,\)
while when \(k=n\) it is rather required that
This is a subspace of the separable Banach space \(L^{1}(\Omega ;\Lambda ^{k})\). If \(s\rightarrow f_{s}\) is a path in \(\mathcal {H}\), then since on a contractible domain every closed form is exact and \(s\rightarrow -\partial _{s}f_{s}\) remains a path of closed forms, there exists a path \(s\rightarrow A_{s}\) of \((k-1)\)-forms such that \(-\partial _{s}f=dA.\) Let \(p\in (1,\infty )\). In fact, we are interested in pairs (f, A) such that
and
in the weak sense (cf. Definition 2.2). The variable s has, a priori, no physical meaning and only serves as an interpolation variable between two prescribed closed forms. Let us denote by \(P^{p}(\bar{f}_{0},\bar{f}_{1})\) the set of pairs (f, A) such that (1.3) and (1.4) hold.
Let \(c: \Lambda ^{k} \times \Lambda ^{k-1} \rightarrow [0,\infty ]\) be a lower semicontinuous function such that when \(\omega \in \Lambda ^{k}\), \(\xi \in \Lambda ^{k-1}\) and \(c(\omega , \xi )<\infty \) then
In order for c to induce a Riemannian or Finsler type metric, we further assume that
For \(f \in L^{1}(\Omega ; \Lambda ^{k})\) and \(A \in L^{1}(\Omega ; \Lambda ^{k-1})\) we set
and define Finsler type metrics
By Jensen’s inequality
But using the standard “reparametrization of constant length” (cf. Lemma 5.2), one shows that in fact
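The mechanism behind these two relations is the classical length–energy comparison, which we record schematically. For any measurable speed \(\ell :(0,1)\rightarrow [0,\infty )\), Jensen's inequality applied to the convex function \(t\rightarrow t^{p}\) gives

$$\begin{aligned} \left( \int _{0}^{1}\ell (s)\,ds\right) ^{p}\le \int _{0}^{1}\ell ^{p}(s)\,ds, \end{aligned}$$

with equality if and only if \(\ell \) is constant. Reparametrizing a path so that its speed is constant leaves the left-hand side (a length) unchanged, while lowering the right-hand side (an action) to its minimal value, the p-th power of the length.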
When \(c(f,A)=|A|^{p}\), \(p\in [1,\infty )\) and \(rp=r+p\) then a sufficient condition for (f, A) to minimize (1.8) is (cf. [9])
and so in this case, A is time independent. Further restricting p to \((1,\infty )\) turns (1.10) into a necessary condition, which uniquely characterizes the minimizers.
By Sect. B.3, any convex function \(c:\Lambda ^{k} \times \Lambda ^{k-1}\rightarrow [0,\infty )\) (hence assuming only finite values) satisfying (1.5) and (1.6) must be independent of \(\omega .\) This is precisely the case already studied in [9], which motivates our desire to study cost functions which take on infinite values. What matters most in the choice of our cost function is the scaling condition (1.6), which is necessary for c to induce a metric.
An example of \(c(\omega ,\xi )=G(|\omega |,\xi )\) taking infinite values, studied in Sect. B.1, is
We can also consider cost functions of the form \(G(|\omega |,\xi )+H(\xi )\), obtained by adding to the c in (1.11) a smooth function H. One could replace the denominator in the cost in (1.11) by \(p\;(M-|\omega |^{2})^{\frac{p-1}{2}}\), where M is a positive parameter. In this case, any minimizing path (f, A) in (1.9) must satisfy the requirement \(|f| \le M.\)
Let us for a moment keep our focus on the case \(k=2.\) Given a non-degenerate closed smooth 2-form f, there exists a 1-form w such that
where \(\mathcal {L}_{w}\) is the Lie derivative acting on the set of 1-forms (w has been identified with a vector field). A variant of (1.9) is
where the infimum is performed over the set of pairs (f, w) such that \(w:(0,1)\times \Omega \rightarrow \Lambda ^{1}\) is smooth and \(s\rightarrow f_{s}\) is a path in \(\mathcal {H}\) that starts at \(\bar{f}_{0}\) and ends at \(\bar{f} _{1}.\) Unlike (1.9), (1.13) is not a convex minimization problem and so it is not known to have minimizers. However, if a minimizer (f, A) of problem (1.9) is such that \(f_{s}\) is non-degenerate for almost every \(s\in (0,1)\), then \((f,w):=(f,A\,\lrcorner \,f^{-1})\) is a minimizer in (1.13).
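The link between the Lie derivative formulation and the pair (f, A) rests on Cartan's formula: identifying w with a vector field as above, for any 2-form f we have

$$\begin{aligned} \mathcal {L}_{w}f=d\left( w\,\lrcorner \,f\right) +w\,\lrcorner \,df, \end{aligned}$$

so that along a path of closed forms, \(\partial _{s}f+\mathcal {L}_{w}f=0\) reduces to \(-\partial _{s}f=dA\) with \(A:=w\,\lrcorner \,f\).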
There is a sharp contrast between the search for optimal paths in the set of closed k-forms, when \(1\le k\le n-1,\) and that of the case \(k=n\). This can be well illustrated by comparing the case \(k=2,\) expressed in terms of electromagnetism, to the case \(k=n,\) expressed as a mass transport problem. Consider a bounded open convex (or contractible) set \(O\subset \mathbb {R}^{3}\) and set
Define \(\mathcal {S}\) to be the set of pairs of electro/magnetic time dependent vector fields
which are integrable, satisfy certain boundary conditions [omitted here but formulated in Subsection E to match (1.1)] and satisfy Gauss's law for magnetism and the Maxwell–Faraday induction equations
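In classical vector notation, and with the interpolation variable s playing the role usually played by time, these two constraints read

$$\begin{aligned} \mathrm {div}\,B=0,\qquad \partial _{s}B+\mathrm {curl}\,E=0\qquad \text {in } (0,1)\times O. \end{aligned}$$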
When \(k=2\), (1.8) is equivalent to the search for paths of minimal action on \(\mathcal {S}\) (cf. Subsection E). Any starting (resp. ending) point \((\bar{B}_{0},\bar{E}_{0})\) (resp. \((\bar{B} _{1},\bar{E}_{1})\)) in \(\mathcal {S}\) is identified with a starting (resp. ending) point \(\bar{f}_{0}\) (resp. \(\bar{f}_{1}\)) in the set of closed 2-forms. Similarly, a path \(s\in [0,1]\rightarrow (B(s),E(s))\) which interpolates between \((\bar{B}_{0},\bar{E}_{0})\) and \((\bar{B}_{1},\bar{E}_{1})\) corresponds to a path \(s\in [0,1]\rightarrow f(s),\) lying in the set of closed 2-forms \(\mathcal {H}\), which interpolates between \(\bar{f}_{0}\) and \(\bar{f}_{1}\). If f(s) is non-degenerate then there exists \(w:(0,1)\times \Omega \rightarrow \Lambda ^{1}\) such that \(\partial _{s}f+\mathcal {L}_{w}f=0\). Here, it is worth stressing that in contrast with the study of n-forms (i.e. volume forms), intensively studied in the past few years in the theory of optimal transportation, s does not represent a time variable. In the theory of optimal transportation, given two volume forms \(\bar{\mu }_{0}\) and \(\bar{\mu }_{1}\) of the same mass, we want to minimize an action over the set of paths \(t\rightarrow \mu (t)\) which interpolate between \(\bar{\mu }_{0}\) and \(\bar{\mu }_{1}.\) For each path \(t\rightarrow \mu (t)\), there exists a velocity vector field v such that the continuity equation \(\partial _{t}\mu +\mathcal {L}_{v}\mu =0\) is satisfied. The action to minimize is an integral over time of an expression written either in terms of \((\mu (t),v(t))\) or, equivalently, in terms of \((\mu (t),A(t))=(\mu (t),\mu (t)v(t))\). In the case of 2-forms, the variable s appears in (1.14) to ensure that f(s) is a closed form for each s, but w is not a physical velocity. Now, the action to be minimized is an integral over the set of parameters s of an expression which depends either on (f(s), w(s)) (cf. (1.13)) or, equivalently, on \((f(s),A(s))=(f(s),w(s)\,\lrcorner \,f(s))\) (cf. (1.9)).
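For orientation, we recall the Benamou–Brenier formula, valid for volume forms of the same unit mass:

$$\begin{aligned} W_{2}^{2}(\bar{\mu }_{0},\bar{\mu }_{1})=\inf \left\{ \int _{0}^{1}\!\!\int |v(t,x)|^{2}\,d\mu (t)\,dt\;:\;\partial _{t}\mu +\mathcal {L}_{v}\mu =0,\;\mu (0)=\bar{\mu }_{0},\;\mu (1)=\bar{\mu }_{1}\right\} , \end{aligned}$$

where \(W_{2}\) is the Wasserstein distance of exponent 2. There, t is a genuine time variable and v a physical velocity, in contrast with the role of s and w above.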
This manuscript contributes to the identification of a non-trivial class of metrics on the set of closed differential k-forms, with potential impact on the study of evolution equations on this set. The non-homogeneous costs allow for a much richer class of metrics, but come at the expense of yielding transportation problems for which the subdifferentials of the actions are not easily characterized. We then face the study of dual problems which involve differential k-forms whose differentials are not a priori locally summable. This means that, unlike the case \(k=n\), a difficulty we have to deal with when \(k<n\) is a dual problem involving functions for which not all partial derivatives are summable. We therefore cannot rely on any classical Sobolev type inequality and need to prove a result such as Lemma 4.7. In this Lemma, we show that, up to a translation in the one-dimensional interpolation variable, any path in the set of \(\Lambda ^{k}\)-valued measures is controlled by its derivative with respect to the interpolation variable and the \(L^r\)-norm of its co-differential. The point is that we obtain an inequality which does not need to involve the \(L^r\)-norms of both the differential and the co-differential of our k-forms. The proof of the Lemma relies on a subtle Gaffney type inequality, and the result is central to obtaining the coercivity properties needed for a functional we study in a dual problem. An extremely challenging problem we leave open, and which we hope will be the purpose of future investigations, is the regularity of geodesics of minimal length. Problem A.2 comments on a system of PDEs induced by these geodesics.
This manuscript is divided into two parts, the first one containing our central results. The second part is an appendix consisting of examples and technical regularization Lemmas, needed to circumvent the lack of smoothness of the functions we are dealing with. The appendix ends with a section alluding to the interpretation of our work in the context of electromagnetism.
In Sect. 3, we consider cost functions c on \(\Lambda ^{k}\times \Lambda ^{k-1}\) which assume only finite values and are smooth, strictly convex, with polynomial growth at infinity. We do not impose that \(c(\omega ,\cdot )\) is p-homogeneous, and use standard methods to characterize the subdifferential of the actions along paths of minimal length. This Section will later be useful when studying cost functions which take infinite values. Section 4 is a preliminary section which deals with paths of bounded variation on metric spaces, the metric space in our case being the set of k-currents. We later use these results to study Finsler type metrics on the set of k-forms. In Sect. 5, not only is the set where c assumes the value \(+\infty \) non-empty, but \(c^{*},\) the dual of c, is only assumed to have a lower bound which may be linear: \(c^{*}(b,B)\ge \gamma _{6}\left( |b|+|B|^{r} \right) \). This creates a difficulty, usually not faced in optimal transportation theory, which led to the incorporation of the two lengthy Sections C and D. We identify and exploit a dual maximization problem to characterize the paths minimizing our action. When \(k=n\), all the partial derivatives of a scalar function are controlled in the dual problem. When \(k<n\), we face serious technical difficulties, since controlling the co-differential of a \((k+1)\)-form amounts to controlling only some directional derivatives. We anticipate that the level of complication will substantially increase if we extend the class of cost functions c to include those which are polyconvex, or even quasiconvex in a sense to be specified. These considerations, which constitute a new type of challenge, will be addressed in a forthcoming paper [8]. We close by drawing the reader's attention to a recent paper by Brenier and Duan [1], one of the very few related to our context, which considers gradient flows of entropy functionals on the set of differential forms.
Throughout the manuscript, it would have been sufficient to assume that \(\Omega \) is a contractible domain with smooth boundary, not necessarily a convex set. In order to reduce the level of technicality, we chose not to state some of our results under the sharpest assumptions.
2 Preliminaries for the smooth case
For simplicity, throughout the manuscript, \(\Omega \subset \mathbb {R}^{n}\) is assumed to be an open bounded convex set and \(\nu \) denotes the outward unit normal to \(\partial \Omega .\) Let \(1 \le k \le n\) be an integer. We assume that \(r, p \in (1, \infty )\) are conjugate to each other in the sense that \(r+p=rp\) (equivalently, \(1/r+1/p=1\)).
Definition 2.1
Let \(f\in L^{1}\left( \Omega ;\Lambda ^{k}\right) \), let \(A\in L^{1}\left( \Omega ;\Lambda ^{k-1}\right) \) and \(B\in L^{1}\left( \Omega ;\Lambda ^{k+1}\right) \).
-
(i)
We write \(-dA=f\) (resp. \(-\delta B=f\)) in \(\Omega \) in the weak sense if for any \(h\in C_{c}^{\infty }\left( \Omega ;\Lambda ^{k}\right) \)
$$\begin{aligned} \int _{\Omega }\left\langle f;h\right\rangle =\int _{\Omega }\left\langle A;\delta h\right\rangle \qquad \left( \hbox {resp.}\;\int _{\Omega }\left\langle f;h\right\rangle =\int _{\Omega }\left\langle B;dh\right\rangle \right) . \end{aligned}$$ -
(ii)
Similarly, if we want to express in the weak sense
$$\begin{aligned} (i) \left\{ \begin{array}{ll} -dA=f &{} \text {in }\Omega \\ \nu \wedge A=0 &{} \text {on }\partial \Omega \end{array} \right. \quad \left( \text {resp.} \quad (ii) \left\{ \begin{array}{ll} -\delta B=f &{} \text {in }\Omega \\ \nu \,\lrcorner \,B=0 &{} \text {on }\partial \Omega \end{array} \right. \right) , \end{aligned}$$(2.1)

we impose that for any \(h\in C^{\infty }\left( \bar{\Omega };\Lambda ^{k}\right) \)
$$\begin{aligned} \int _{\Omega }\left\langle f; h\right\rangle =\int _{\Omega }\left\langle A ;\delta h\right\rangle \qquad \left( \hbox {resp.}\; \int _{\Omega }\left\langle f; h\right\rangle =\int _{\Omega }\left\langle B; dh\right\rangle \right) . \end{aligned}$$ -
(iii)
We say that f is in the weak sense a closed (resp. co-closed) differential form if \(df=0\) (resp. \(\delta f=0\)) in \(\Omega \).
We consider k-forms \(\bar{f}_{0},\bar{f}_{1}\in L^{p}\left( \Omega ;\Lambda ^{k}\right) \) such that, if \(1\le k\le n-1,\)
and, if \(k=n,\)
Definition 2.2
We say that \((f, A) \in P^{p}(\bar{f}_{0}, \bar{f}_{1})\) if
and
for all \(h \in C^{1} \left( [0,1] \times \bar{\Omega }; \Lambda ^{k} \right) .\)
Remark 2.3
Assume (2.2) holds when \(1 \le k\le n-1\) and (2.3) holds when \(k=n.\)
-
(i)
By Theorem 7.2 [7], there exists in the weak sense, \(\bar{A}\in W^{1,p}\left( \Omega ;\Lambda ^{k-1}\right) \) satisfying
$$\begin{aligned} \left\{ \begin{array}{ll} d\bar{A}+\bar{f}_{1}-\bar{f}_{0}=0 &{} \quad \delta \bar{A}=0\quad \text {in } \Omega \\ \nu \wedge \bar{A}=0 &{} \text {on }\partial \Omega \end{array} \right. \end{aligned}$$

and there exists a constant \(C=C\left( \Omega ,p,k\right) \) such that
$$\begin{aligned} ||\bar{A}||_{W^{1,p}\left( \Omega ;\Lambda ^{k-1}\right) }\le C||\bar{f}_{1}-\bar{f}_{0}||_{L^{p}}. \end{aligned}$$ -
(ii)
We have \((\bar{f}_{s},\bar{A}_{s}):=\left( (1-s)\bar{f}_{0} +s\bar{f}_{1},\bar{A}\right) \in P^{p}(\bar{f}_{0},\bar{f}_{1})\).
Definition 2.4
We define \(\mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \) to be the set of h such that
and there exists
such that
Here, \(\partial _{s} h\) is the distributional derivative of h with respect to s.
2.1 A weak time continuity property for \(P^{p}(\bar{f}_{0}, \bar{f}_{1})\)
Let \((f, A) \in P^{p}(\bar{f}_{0}, \bar{f}_{1}).\) By Fubini’s theorem, the function \(s \rightarrow \int _{\Omega }|f(s,x)|^{p} dx\) is in \(L^{1}(0,1)\) and so its set of Lebesgue points has full measure in (0, 1). If \(\phi \in C^{1}(\bar{\Omega })\) we set
Using \(h(s,x)=\alpha (s) \phi (x)\) in (2.4) for arbitrary \(\alpha \in C^{1}([0,1])\), we obtain that there is a set \(\mathcal {N}_{\phi }\) of null Lebesgue measure such that \(L(\cdot , f, \phi )\) coincides on \((0,1) {\setminus }\mathcal {N}_{\phi }\) with a function \(\overline{ L(\cdot , f, \phi )} \in W^{1,p}(0,1).\) More precisely,
The distributional derivative of \(L(\cdot , f, \phi )\) is
We have the following Lemma.
Lemma 2.5
There exists a function \(\tilde{f} \in L^{p}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \) such that the following hold.
-
(i)
\(\tilde{f} = f\) almost everywhere in \((0,1) \times \Omega \)
-
(ii)
For any \(\phi \in C_{c}^{1}(\Omega ; \Lambda ^{k})\), \(\overline{L(\cdot , f, \phi )}= L(\cdot , \tilde{f}, \phi )\) everywhere on (0, 1). In particular, \(L(\cdot , \tilde{f}, \phi ) \in W^{1,p}(0,1)\) is continuous.
Remark 2.6
Thanks to Lemma 2.5, we will always tacitly assume that given \((f, A) \in P^{p}(\bar{f}_{0}, \bar{f}_{1})\) then for any \(\phi \in C_{c}^{1}(\Omega ; \Lambda ^{k})\), \(L(\cdot , f, \phi ) \in W^{1,p}(0,1)\) is continuous.
2.2 Properties of \(\mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \)
Lemma 2.7
If \(h\in \mathbf {B}^{r}\left( (0,1)\times \Omega ;\Lambda ^{k}\right) ,\) then for \(\mathcal {L}^{1}\)-almost every \(s\in (0,1)\) we have \(B(s,\cdot )\in L^{r}(\Omega )\) and \(B(s,\cdot )=\delta h(s,\cdot )\) in the weak sense.
Proof
Observe first that by Fubini’s theorem,
Let \(\{ g_{i}\}_{i=1}^{\infty }\subset C_{c}^{1}(\Omega )\) be a dense subset of \(L^{p}(\Omega ).\) If for \(w \in C_{c}^{1}(0,1)\) we set \(\psi (s,x)=w(s) g_{i}(x)\) then (2.5) reads
Thus, there exists a set \(N_{i} \subset (0,1)\) of \(\mathcal {L}^{1}\)-null measure such that
for any \(s \in (0,1) {\setminus } N_{i}\). Thus, (2.7) holds for all \(s \in (0,1) {\setminus } N\) if N is the union of the \(N_{i}\)’s. We conclude that
for any \(s \in (0,1) {\setminus } N\) and any \(g \in C_{c}^{1}(\Omega )\). This concludes the proof of the Lemma. \(\square \)
Remark 2.8
By standard approximation results, it is enough to assume that \(\Omega \) is an open bounded contractible set of locally Lipschitz boundary \(\partial \Omega \) to obtain that if \((f, A) \in P^{p}(\bar{f}_{0}, \bar{f}_{1})\) then (2.4) holds for \(h \in W^{1,r}\left( (0,1) \times \Omega ; \Lambda ^{k}\right) \). The proof of the following Lemma, which extends (2.4) to \(h\in \mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \), can be obtained by standard methods.
Lemma 2.9
If \((f,A)\in P^{p}(\bar{f}_{0},\bar{f}_{1})\) and \(h\in \mathbf {B}^{r}\left( (0,1)\times \Omega ;\Lambda ^{k}\right) ,\) then (2.4) holds.
Corollary 2.10
(An invariant) If \((f,A)\in P^{p}(\bar{f}_{0},\bar{f} _{1})\) and \(h\in \mathbf {B}^{r}\left( (0,1)\times \Omega ;\Lambda ^{k}\right) ,\) then
Indeed, by Lemma 2.9 these expressions depend only on the initial and final values of h and f.
3 Duality results for smooth superlinear integrands of finite values
Let \(p,r\in (1,\infty )\) be such that \(rp=r+p\) and let \(\bar{f}_{0},\bar{f}_{1}\in L^{p}\left( \Omega ;\Lambda ^{k}\right) \) be two k-forms such that, in the weak sense (2.2) holds when \(1\le k\le n-1\) and (2.3) holds when \(k=n.\) Let
where c is convex and \(c^{*}\) is the Legendre transform of c,
and
for any \(b\in \Lambda ^{k}\) and \(B\in \Lambda ^{k-1}.\) Here, \(\gamma _{1} ,\gamma _{2}>0\) are prescribed constants.
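For the reader’s convenience, we recall that the Legendre transform of c is

$$\begin{aligned} c^{*}(b,B)=\sup \left\{ \left\langle b;\omega \right\rangle +\left\langle B;\xi \right\rangle -c(\omega ,\xi )\;:\;(\omega ,\xi )\in \Lambda ^{k}\times \Lambda ^{k-1}\right\} . \end{aligned}$$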
Remark 3.1
Since the Legendre transform reverses order, the following hold.
-
(i)
If \(c^{*}\) satisfies (3.2) then for any \(\omega \in \Lambda ^{k}\) and \(\xi \in \Lambda ^{k-1}\)
$$\begin{aligned} c(\omega ,\xi ) \le E^{*}(\omega ,\xi )= \gamma _{2}+ \gamma _{1}(r-1) {\frac{|\omega |^{p}+ |\xi |^{p} }{(r\gamma _{1})^{p}}}. \end{aligned}$$ -
(ii)
Similarly, assume there are constants \(\gamma _{6},\gamma _{7}>0\) such that for any \((\omega ,\xi )\in \Lambda ^{k}\times \Lambda ^{k-1}\) we have
$$\begin{aligned} c(\omega ,\xi )\ge \gamma _{6}(|\omega |^{p}+|\xi |^{p})-\gamma _{7} . \end{aligned}$$(3.3)

Then for any \(b\in \Lambda ^{k}\) and \(B\in \Lambda ^{k-1}\)
$$\begin{aligned} c^{*}(b,B)\le \gamma _{7}+\gamma _{6}(p-1){\frac{|b|^{r}+|B|^{r}}{(p\gamma _{6})^{r}}}. \end{aligned}$$ -
(iii)
If (3.1) holds then \(c^{*}(0,0)=-\inf c\) is a finite real number.
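The identity in (iii) follows by evaluating the Legendre transform at the origin:

$$\begin{aligned} c^{*}(0,0)=\sup \left\{ -c(\omega ,\xi )\;:\;(\omega ,\xi )\in \Lambda ^{k}\times \Lambda ^{k-1}\right\} =-\inf c. \end{aligned}$$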
We define \(\mathcal {C}: \Lambda ^{k} \times \Lambda ^{k-1} \rightarrow (-\infty , \infty ]\) by
for
The following proposition is obtained using standard techniques of the direct methods of the calculus of variations.
Proposition 3.2
Suppose \(\bar{f}_{0}, \bar{f}_{1}\in L^{p}\left( \Omega ;\Lambda ^{k}\right) \) are k-forms such that (2.2) holds when \(1\le k\le n-1\) and (2.3) holds when \(k=n.\) Suppose \(c: \Lambda ^{k} \times \Lambda ^{k-1} \rightarrow (-\infty , \infty ]\) is convex, lower semicontinuous and satisfies (3.3). Then there exists \((f^{*},A^{*})\) that minimizes \(\mathcal {C}\) over \(P^{p}(\bar{f}_{0}, \bar{f}_{1}).\)
For \(h \in \mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \) we set
and for \(s \in [0,1]\) set
where \(\bar{A}\) is given by Remark 2.3 (i). By Remark 2.3 (ii), \((\bar{f}, \bar{A}) \in P^{p}(\bar{f}_{0}, \bar{f}_{1})\) and so,
Thus, \(\mathcal {D}(h)\) depends only on \(\partial _{s} h\) and \(\delta h\).
Remark 3.3
Assume \(c^{*}\) satisfies (3.2). Then
-
(i)
There exist constants \(\gamma _{4}, \gamma _{5}>0\) which depend only on \(\Omega \), \(||\bar{f}_{0}||_{p}\), \(||\bar{f}_{1}||_{p}\), \(\gamma _{1}\), \(\gamma _{2}\), s and r such that
$$\begin{aligned} \mathcal {D}(h) \le \gamma _{5}- \gamma _{4} \left( ||\delta h||_{r}^{r}+||\partial _{s} h||_{r}^{r} \right) . \end{aligned}$$(3.5) -
(ii)
There exists a constant C depending only on \(\Omega \), k and r such that for any \(h \in \mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \) there is \(\bar{h} \in \mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \) such that \(\mathcal {D}(h)=\mathcal {D}(\bar{h})\) and
$$\begin{aligned} ||\bar{h}(s, \cdot )||_{L^{r}(\Omega )} \le C ||\delta \bar{h} ||_{r} + ||\partial _{s} \bar{h}||_{r} \quad \mathcal {L}^{1} - \hbox {a.e. on} \;\; (0,1). \end{aligned}$$ -
(iii)
If c satisfies (3.1) then \(\mathcal {D} (0)>-\infty .\)
Proof
(i) Using the expression of \(\mathcal {D}\) in (3.4), we have
This yields (i).
(ii) By Lemma 2.7 there exists \(t_{0} \in (0,1)\) such that
By Theorems 7.2 and 7.4 [7] (written for \(r \in [2,\infty )\) but extendable to \(r \in (1,2)\)) there is \(\bar{h}_{t_{0}} \in W^{1,r}(\Omega ; \Lambda ^{k})\) such that
Furthermore, there is a constant C which depends only on \(\Omega \), k and r such that
This, together with (3.6) implies
Define
We have
Thus,
This, together with (3.8) yields
Note that \(\partial _{s} h=\partial _{s} \bar{h}\), \(\delta h=\delta \bar{h}\) to conclude the proof of (ii).
(iii) Since \(\mathcal {D}(0)=-\mathcal {L}^{n}(\Omega ) c^{*}(0, 0)\) and, by Remark 3.1, \(c^{*}(0, 0)\) is finite, we obtain (iii). \(\square \)
We will often refer to the following proposition, which can be obtained using standard techniques of the direct methods of the calculus of variations.
Proposition 3.4
Assume c satisfies (3.1), \(c^{*}\) satisfies (3.2), \((f, A) \in P^{p}(\bar{f}_{0}, \bar{f}_{1})\) and \(h \in \mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \). Then
-
(i)
\(\mathcal {C}(f, A) \ge \mathcal {D}(h).\)
-
(ii)
\(\mathcal {C}(f, A) = \mathcal {D}(h)\) if and only if \((f , A) \in \partial _{\cdot }c^{*}(\partial _{s} h , \delta h)\) for almost every \((s, x) \in (0,1) \times \Omega .\)
Set
Set
and
We now record a remark on convex analysis, which can be found in the classical literature on the topic.
Remark 3.5
Suppose \(c^{*}\) satisfies (3.2) and \(\epsilon \in (0,1)\).
-
(i)
There exist \(\gamma _{1}^{*}, \gamma _{2}^{*}>0\) independent of \(\epsilon \) such that
$$\begin{aligned} c_{\epsilon }^{*}(b, B) \ge \gamma _{1}^{*} \left( |b|^{r}+|B|^{r} \right) -\gamma _{2}^{*} \end{aligned}$$ -
(ii)
We have that \(c^{*}_{\epsilon }\) has domain \(\Lambda ^{k} \times \Lambda ^{k-1}\) and
$$\begin{aligned} c^{*}_{\epsilon }\in C^{1}\left( \Lambda ^{k} \times \Lambda ^{k-1} \right) . \end{aligned}$$ -
(iii)
There exists a constant \(C_{\epsilon }\) such that
$$\begin{aligned} |\nabla c_{\epsilon }^{*}(b, B)| \le C_{\epsilon }\left( |b|^{r-1} +|B|^{r-1}+1 \right) . \end{aligned}$$
Lemma 3.6
(Relying on the smoothness of \(c_{\epsilon }\) to compute the differential of the action) Assume \(c^{*}\) satisfies (3.2) and \(\epsilon \in (0,1).\) Let \(h^{*}, h \in \mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \) and set \(N (u)= \mathcal {D}_{\epsilon }(h^{*}+ u h).\) Then,
where
Proof
The continuity of \(\nabla c^{*}_{\epsilon }\) and Remark 3.5 (iii) allow us to compute \(N^{\prime }(0)\) directly. \(\square \)
Proposition 3.7
(Smoothness of \(c_{\epsilon }\) yields a standard duality result) Suppose c is convex, lower semicontinuous, satisfies (3.1) and \(c^{*}\) satisfies (3.2). Then
-
(i)
there exists \(h^{*}\) that maximizes \(\mathcal {D}\) over \(\mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \).
-
(ii)
there exists \(h_{\epsilon }\) that maximizes \(\mathcal {D}_{\epsilon }\) over \(\mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \).
-
(iii)
For any \(h \in \mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \)
$$\begin{aligned} \int _{(0,1) \times \Omega } \left\langle \bar{A} - A_{\epsilon }; \delta h\right\rangle ds dx +\int _{(0,1) \times \Omega } \left\langle \bar{f} -f_{\epsilon }; \partial _{s} h\right\rangle ds dx=0, \end{aligned}$$

where
$$\begin{aligned} f_{\epsilon }:=\nabla _{a} c^{*}_{\epsilon }\left( \partial _{s} h_{\epsilon }, \delta h_{\epsilon }\right) , \quad A_{\epsilon }:= \nabla _{B} c^{*}_{\epsilon }\left( \partial _{s} h_{\epsilon }, \delta h_{\epsilon }\right) . \end{aligned}$$(3.10) -
(iv)
There is a constant C, independent of \(\epsilon \), such that we may choose \(h_{\epsilon }\) satisfying
$$\begin{aligned} \left\| h_{\epsilon }(s, \cdot )\right\| _{L^{r}(\Omega )} \le C ||\delta h_{\epsilon }||_{r}+ || \partial _{s} h_{\epsilon } ||_{r} \end{aligned}$$
Proof
(i) Let \(\bar{A}\) be given by Remark 2.3 and set
We have \((\bar{f}, \bar{A}) \in P^{p}(\bar{f}_{0}, \bar{f}_{1})\). The bounds in Remarks 2.3 (i) and 3.1 (i) imply
This, together with Proposition 3.4 implies
By Remark 3.3 (iii) \(D>-\infty \) and by (i) of the same remark, if \(\gamma \) is a real number then the upper level sets of \(\mathcal {D}\) satisfy
Combining this with Remark 3.3 (ii) we obtain a maximizing sequence \(\{h_{i}\}_{i}\) of \(\mathcal {D}\) over \(\mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \) satisfying
Hence, we may extract from \(\{h_{i}\}_{i}\) a subsequence which converges weakly to some \(h^{*}\) in \(L^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \) and such that \(\{\delta h_{i}\}_{i}\) (resp. \(\{\partial _{s} h_{i}\}_{i}\)) converges weakly to \(\delta h^{*}\) (resp. \(\partial _{s} h^{*}\)) in \(L^{r}\). We have \(h^{*} \in \mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \).
Recall that by (3.4), \(-\mathcal {D}(h_{i})\) can be expressed as a convex function of \(\partial _{s} h_{i}\) and \(\delta h_{i}.\) Therefore, by standard results of convex analysis
This proves that \(h^{*}\) maximizes \(\mathcal {D}\) over \(\mathbf {B} ^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \).
(ii) By Remark 3.5 we have all the properties needed to replace \(c^{*}\) by \(c^{*}_{\epsilon }\) in the above proof. The proof of (ii) repeats the arguments used in that of (i) but it is even easier.
(iii) Let \(h \in \mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \). The real valued function \(u \in \mathbb {R}\rightarrow N_{\epsilon }(u)= \mathcal {D}_{\epsilon }(h_{\epsilon }+ u h)\) achieves its maximum at 0. Since by Lemma 3.6 \(N_{\epsilon }\) is differentiable at 0, we have \(N^{\prime }_{\epsilon }(0)=0\). This is exactly the identity in (iii).
(iv) This is a direct consequence of Remark 3.3 (ii). \(\square \)
Theorem 3.8
(A duality result not requiring smoothness of c) Suppose c is convex, lower semicontinuous, satisfies (3.1), and \(c^*\) satisfies (3.2). Further assume there are constants \(\gamma _6, \gamma _7>0\) such that c satisfies (3.3). Then
-
(i)
there exists \((f^*, A^*)\) which minimizes \(\mathcal {C}\) over \(P^p(\bar{f}_0, \bar{f}_1)\).
-
(ii)
For any \(h^*\) that maximizes \(\mathcal {D}\) over \(\mathbf{B}^r\left( (0,1) \times \Omega ; \Lambda ^k \right) \) we have \(\mathcal {C} (f^*,A^*) =\mathcal {D}(h^*).\)
-
(iii)
Let \((f, A) \in P^p(\bar{f}_0, \bar{f}_1).\) Then (f, A) minimizes \(\mathcal {A}\) over \(P^p(\bar{f}_0, \bar{f}_1)\) if and only if there exists \(h \in \mathbf{B}^r\left( (0,1) \times \Omega ; \Lambda ^k \right) \) such that \((f, A) \in \partial _\cdot c^*(\partial _s h, \delta h)\) for almost every \((s, x) \in (0,1) \times \Omega .\)
Proof
(i) and (ii) Let \(h_{\epsilon }\) be a maximizer of \(\mathcal {D} _{\epsilon }\) as provided in Proposition 3.7 and let
We combine (iii) of the same proposition with the fact that \((\bar{f}, \bar{A}) \in P^{p}(\bar{f}_{0}, \bar{f}_{1})\) (cf. Remark 2.3 (ii)) to obtain that \((f_{\epsilon }, A_{\epsilon }) \in P^{p}(\bar{f}_{0}, \bar{f}_{1}).\) Proposition 3.4 (ii) implies
We then use Proposition 3.4 (i) to conclude that \((f_{\epsilon }, A_{\epsilon })\) minimizes \(\mathcal {C}_{\epsilon }\) over \(P^{p}(\bar{f}_{0}, \bar{f}_{1})\).
Since for \(\epsilon \in (0,1)\)
we have
Also, by Remark 3.5 (i) and the maximality property of \(h_{\epsilon }\)
Thus, using (3.4) we have
We again use Remark 3.5 (i) to obtain
and so,
Thus by Remark 3.3 (ii), we may assume without loss of generality that
By (3.11) there exists a subsequence of \((f_{\epsilon _{l}}, A_{\epsilon _{l}})_{l}\) which converges weakly to some \((f^{*}, A^{*})\) in \(L^{p}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \times L^{p}\left( (0,1) \times \Omega ; \Lambda ^{k-1} \right) \) as l tends to \(\infty .\) Passing to another subsequence if necessary, thanks to (3.12), we may assume without loss of generality that \((h_{\epsilon _{l}})_{l}\) converges weakly in \(L^{r}\) to some \(h^{*} \in \mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \). Thus, \((\delta h_{\epsilon _{l}})_{l}\) converges weakly in \(L^{r}\) to \(\delta h^{*}\) and \((\partial _{s} h_{\epsilon _{l}})_{l}\) converges weakly in \(L^{r}\) to \(\partial _{s} h^{*}.\) Letting \(\epsilon _{l}\) tend to 0 in Proposition 3.7 (iii) we obtain for any \(h \in W^{1,r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \)
We use the fact that, by Remark 2.3 (ii), \((\bar{f}, \bar{A}) \in P^{p}(\bar{f}_{0}, \bar{f}_{1})\), to conclude that
and so, \((f^{*}, A^{*}) \in P^{p}(\bar{f}_{0}, \bar{f}_{1}).\)
We first use the fact that \((f_{\epsilon }, A_{\epsilon }) \in \partial c^{*}_{\epsilon }( \partial _{s} h_{\epsilon }, \delta h_{\epsilon })\) and then use the fact that \(c_{\epsilon }\ge c\) to obtain
Also
We combine this with (3.13) to conclude that
Since \((f_{\epsilon }, A_{\epsilon }) \in P^{p}(\bar{f}_{0}, \bar{f}_{1}),\) we may use Corollary 2.10 in the previous inequality to obtain
One lets \(\epsilon _{l}\) tend to 0 to derive the inequality
This proves that
Rearranging, and using the expression of \(\mathcal {D}\) in (3.4), we have \(\mathcal {D}(h^{*})=\mathcal {C}(f^{*}, A^{*}).\) By Proposition 3.4 (i), \((f^{*}, A^{*})\) minimizes \(\mathcal {C}\) over \(P^{p}(\bar{f}_{0}, \bar{f}_{1})\) and \(h^{*}\) maximizes \(\mathcal {D}\) over \(\mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \).
(iii) Let \((f, A) \in P^{p}(\bar{f}_{0}, \bar{f}_{1})\) and \(h \in \mathbf {B} ^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \). Since \(c(\omega ,\xi ) \ge \gamma _{6} (|\omega |^{p}+|\xi |^{p})-\gamma _{7}\) for all \(\omega \in \Lambda ^{k}\) and \(\xi \in \Lambda ^{k-1}\), there is a constant \(\gamma _{6}^{*}>0\) such that \(c^{*}(b,B) \le \gamma _{6}^{*} (|b|^{r}+|B|^{r})+\gamma _{7}\) for all \(b \in \Lambda ^{k}\) and \(B \in \Lambda ^{k-1}\). This together with the fact that \(c^{*}\) satisfies (3.2) implies \(\mathcal {D} (h)<\infty .\)
By Proposition 3.4, \((f, A) \in \partial c^{*} (\partial _{s} h, \delta h)\) for almost every \((s, x) \in (0,1) \times \Omega \) if and only if h maximizes \(\mathcal {D}\) over \(\mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \) and (f, A) minimizes \(\mathcal {A}\) over \(P^{p}(\bar{f}_{0}, \bar{f}_{1})\). \(\square \)
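The subdifferential characterization invoked at the start and end of this proof is the classical Fenchel–Young identity, which we record for the reader (standard convex analysis, cf. [12]):

```latex
$$\begin{aligned}
(f, A) \in \partial c^{*}(b, B)
\;\Longleftrightarrow \;
c(f, A) + c^{*}(b, B) = \langle f; b \rangle + \langle A; B \rangle
\;\Longleftrightarrow \;
(b, B) \in \partial c(f, A),
\end{aligned}$$
```

while the Fenchel–Young inequality \(c(f, A) + c^{*}(b, B) \ge \langle f; b \rangle + \langle A; B \rangle \) holds for all arguments; the duality \(\mathcal {C}(f^{*}, A^{*})=\mathcal {D}(h^{*})\) is obtained exactly when this inequality is saturated almost everywhere.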
4 The set of k-forms: approximations of k-currents
4.1 Notation
Throughout this subsection \(\mathbb {H}\) is a finite dimensional Hilbert space and \(C:\mathbb {H}\rightarrow (-\infty , \infty ]\) is a proper lower semicontinuous convex function. We fix a nonempty open bounded convex set \(\Omega \subset {\mathbb {R}}^n\) and \(p \in (1, \infty ).\)
We denote by \(\mathcal {M}(\Omega )\) the set of signed measures of finite total variation. For \(g \in \mathcal {M}(\Omega )\), the upper and lower variations \(g^{+}\) and \(g^{-}\) are finite measures and the Jordan decomposition \(g=g^{+}-g^{-}\) holds (cf. e.g. [11]). The total mass of \(|g|:=g^{+}+g^{-}\) is
\(\left( \mathcal {M}(\Omega ),||\cdot ||\right) \) is a normed space and, by the Banach–Alaoglu Theorem, every bounded subset is weak \(*\) pre-compact. Thus, \(\left( \mathcal {M}(\Omega ),||\cdot ||\right) \) is a complete space.
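For a signed measure with finitely many atoms, the Jordan decomposition and the norm \(||g||=|g|(\Omega )\) reduce to elementary operations on the weights. A minimal numerical sketch (the purely atomic setting is our illustrative assumption, not the generality of the text):

```python
import numpy as np

# A signed measure g on Omega with finitely many atoms, encoded by its weights.
weights = np.array([2.0, -3.0, 0.5, -1.5])

# Jordan decomposition: g = g+ - g-, with g+ and g- mutually singular.
g_plus = np.maximum(weights, 0.0)    # upper variation g+
g_minus = np.maximum(-weights, 0.0)  # lower variation g-

# |g| = g+ + g-, and ||g|| is the total mass of |g|.
total_variation = g_plus + g_minus
norm_g = total_variation.sum()

assert np.allclose(g_plus - g_minus, weights)  # g = g+ - g-
print(norm_g)  # 7.0
```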
Let \(\mathcal {C}\) be a countable dense subset of \(C_{c}(\Omega )\), contained in \(C_{c}^{1}(\Omega )\) and which does not contain the null function. If we denote by \(\hat{\mathcal {C}}\) the set of \(f/||f||_{\infty }\) such that \(f \in \mathcal {C}\) then
The set of Borel measures with values in \(\Lambda ^{k}\), of finite total mass, will be denoted by \(\mathcal {M}(\Omega ; \Lambda ^{k}).\) This is the set of k-currents of finite mass. For any \(F \in \mathcal {M}(\Omega ; \Lambda ^{k}),\) we define the norm
Definition 4.1
Given a metric space \((\mathcal {S}, \mathrm {dist})\) the total variation of \(h:[0,1] \rightarrow \mathcal {S}\) is
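We assume here the usual formula \(\mathrm {TV}(h)=\sup \sum _{i} \mathrm {dist}\left( h(t_{i}), h(t_{i+1})\right) \), the supremum running over all partitions \(0 \le t_{0}< \dots < t_{m} \le 1\). For a path sampled on a fine grid, the partition sum over consecutive grid points approximates \(\mathrm {TV}(h)\) from below; a sketch:

```python
import numpy as np

def tv_on_grid(path, dist):
    """Partition sum sum_i dist(h(t_i), h(t_{i+1})) over consecutive grid
    points; by the sup definition this is a lower bound for TV(h)."""
    return sum(dist(path[i], path[i + 1]) for i in range(len(path) - 1))

euclid = lambda a, b: float(np.linalg.norm(a - b))

# Example: h(t) = (cos 2*pi*t, sin 2*pi*t), the unit circle traversed once,
# so the total variation equals the arc length 2*pi.
t = np.linspace(0.0, 1.0, 20001)
path = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)

print(tv_on_grid(path, euclid))  # ~ 6.2832 (= 2*pi)
```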
Definition 4.2
The following definitions can be found respectively in [12, 14]. The recession function of C is \(\bar{C}:\mathbb {H}\rightarrow (-\infty ,\infty ]\) given by
One checks that the definition is independent of \(v_{0}.\)
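We assume here the usual formula \(\bar{C}(v)=\lim _{t \rightarrow \infty } \left( C(v_{0}+tv)-C(v_{0})\right) /t\) for the recession function. Numerically, both the limit and its independence of \(v_{0}\) can be probed by taking t large; a sketch with \(C(x)=\sqrt{1+x^{2}}\), whose recession function is \(|v|\):

```python
import math

def recession(C, v, v0=0.0, t=1e8):
    """Difference quotient (C(v0 + t*v) - C(v0)) / t for large t; for a
    proper convex C this converges to the recession function at v."""
    return (C(v0 + t * v) - C(v0)) / t

C = lambda x: math.sqrt(1.0 + x * x)  # recession function: v -> |v|

for v in (3.0, -2.0):
    r0 = recession(C, v, v0=0.0)
    r1 = recession(C, v, v0=17.0)   # same limit: independent of v0
    assert abs(r0 - abs(v)) < 1e-6 and abs(r1 - abs(v)) < 1e-6
print("recession function of sqrt(1+x^2) is |v|")
```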
Set
Here, we skip the proof of the following elementary Lemma.
Lemma 4.3
Assume \(g \in L^{p}(O)\) and let \(\eta \) be a singular measure. Set \(\eta _{*}:=\eta +\mathcal {L}^{n+1}|_{O}\) and let \(E \subset O\) be a Borel set such that
Then for any \(\alpha \in \mathbb {R}\), \(g_{\alpha }:=g(1-\chi _{E}) + \alpha \; \chi _{E}\in L^{p}(O, \eta _{*})\) and \(\mathcal {L}^{n+1}\{g_{\alpha }\not = g \}=0.\)
Remark 4.4
Assume \(c: \Lambda ^{k} \times \Lambda ^{k-1} \rightarrow (-\infty , \infty ]\) is convex, lower semicontinuous and satisfies (5.3). We assume the Legendre transform \(c^{*}: \Lambda ^{k} \times \Lambda ^{k-1} \rightarrow \mathbb {R}\) satisfies (5.4). Let
Let \(b_{s}\) be the singular part of b, set \(\eta := |b_{s}|\) and let \(E \subset O\) be a Borel set satisfying (4.4). Consider the Radon–Nikodym derivatives \(F:= d b /d\mathcal {L}^{n+1}\) and \(G:= d b_{s} / d \eta .\) Let
be such that
Note that \(c(\bar{f}, A)\) is finite except possibly on a Borel set \(F \subset O\) such that \(\mathcal {L}^{n+1}(F)=0.\) Let \(d_{*}\) be in \(\mathrm {dom}(c).\) According to Lemma 4.3,
where \(\eta _{*}:=\eta + \mathcal {L}^{n+1}|_{O}.\) Furthermore, \(f=\bar{f}\) \(\mathcal {L}^{n+1}\)-almost everywhere
Assume that \(f: O \rightarrow \Lambda ^{k}\) is a Borel map which we are free to modify on a set of null \((\mathcal {L}^{n+1}+|b|)\)-measure. We have
and so, if \(c(f, A) + c^{*}(F, B) \in L^{1}(O)\) then the positive part of \(\langle f; F \rangle + \langle A; B \rangle \) is of finite Lebesgue integral. In that case, in terms of \(\bar{C}\), the recession function of \(C:=c^{*}\), we have
Since \(c(f, A)<\infty \) \(\eta _{*}\)—a.e., we use Lemma C.1 (i) to infer
Equality holds if and only if
4.2 Paths of bounded variations on \(\mathcal {M}(\Omega ;\Lambda ^{k})\)
Below, we list results on the trace operator for \(BV\left( (0,1); \mathcal {M}(\Omega ; \Lambda ^{k}) \right) \) functions that are needed in this manuscript.
Remark 4.5
There exists a linear bounded trace operator (written explicitly below via left/right limits) \(T:BV\left( (0,1); \mathcal {M}(\Omega ; \Lambda ^{k}) \right) \rightarrow L^{\infty }\left( \{0,1\}; \mathcal {M}(\Omega ; \Lambda ^{k}) \right) \) such that the following hold for any \(h \in BV\left( (0,1); \mathcal {M}(\Omega ; \Lambda ^{k})\right) \).
(i)
If h and \(\partial _{s} h\) are continuous on \([0,1] \times \bar{\Omega }\) then
$$\begin{aligned} T h=h|_{\{0,1\} \times \Omega } \end{aligned}$$
(ii)
We have the integration by parts formula
$$\begin{aligned} \int _{0}^{1} ds \int _{\Omega }\langle h(s,dx) ; \partial _{s} g(s,x) \rangle + \int _{(0,1) \times \Omega } \langle \partial _{s} h(ds,dx) ; g(s,x) \rangle = u \end{aligned}$$for any \(g \in C^{1}\left( [0,1] \times \bar{\Omega }; \Lambda ^{k} \right) \). Here, we have set
$$\begin{aligned} u:=\int _{\Omega }\langle T h(1,dx) ; g(1,x) \rangle -\int _{\Omega }\langle T h(0,dx) ; g(0,x) \rangle \end{aligned}$$
(iii)
If \(h \in BV\left( (0,1); \mathcal {M}(\Omega ; \Lambda ^{k}) \right) \) is such that \(s\rightarrow h(s, \cdot )\) is left continuous at 1 and right continuous at 0 then
$$\begin{aligned} T h(0, \cdot )= \lim _{s \rightarrow 0^{+}} h(s, \cdot ), \quad T h(1, \cdot )= \lim _{s \rightarrow 1^{-}} h(s, \cdot ) \end{aligned}$$
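For smooth real-valued h and g, the formula in (ii) reduces to classical integration by parts in the s variable; a quick numerical sanity check of that one-dimensional reduction (the space variable is suppressed, so this is an illustration only):

```python
import numpy as np

def trapz(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

s = np.linspace(0.0, 1.0, 10001)
h, dh = s**2, 2.0 * s               # h(s) = s^2,   d/ds h = 2s
g, dg = np.cos(s), -np.sin(s)       # g(s) = cos s, d/ds g = -sin s

# int_0^1 h g' ds + int_0^1 h' g ds  =  h(1) g(1) - h(0) g(0)
lhs = trapz(h * dg, s) + trapz(dh * g, s)
rhs = h[-1] * g[-1] - h[0] * g[0]
assert abs(lhs - rhs) < 1e-6
print(lhs, rhs)  # both ~ cos(1) ~ 0.5403
```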
4.3 Special paths of bounded variations on \(\mathcal {M}(\Omega ;\Lambda ^{k})\)
Let \(h \in L^{1}\left( (0,1); \mathcal {M} (\Omega ;\Lambda ^{k})\right) \) be such that there exists \(b\in \mathcal {M} \left( (0,1) \times \Omega ;\Lambda ^{k}\right) \) satisfying
for all \(\psi \in C_{c}^{1}\left( (0,1) \times \Omega ;\Lambda ^{k}\right) \). Modifying \(h(s, \cdot )\) if necessary on a subset of (0, 1) of null Lebesgue measure (cf. [13]), we assume without loss of generality that h satisfies the following Lemma.
Lemma 4.6
(A non smooth variant of Remark 3.3(ii)) If (4.6) holds, then for any \(0 \le t_{1}<t_{2}<1\) and \(F \in C_{c}(\Omega ; \Lambda ^{k}),\) we have the following.
(i)
$$\begin{aligned} \int _{\Omega }\langle F(x); h(t_{2}, dx) \rangle -\int _{\Omega }\langle F(x); h(t_{1}, dx) \rangle = \int _{(t_{1}, t_{2}] \times \Omega } \langle F(x); b(ds, dx) \rangle \end{aligned}$$
(ii)
$$\begin{aligned} \int _{\Omega }\langle F(x); h(1, dx) \rangle -\int _{\Omega }\langle F(x);h(t_{1}, dx)\rangle = \int _{(t_{1}, 1) \times \Omega } \langle F(x); b(ds, dx)\rangle \end{aligned}$$
(iii)
Using the definition of \(\mathrm {TV}(h)\) in Definition 4.1 we have
$$\begin{aligned} \mathrm {TV}(h) \le |b|\left( (0,1) \times \Omega \right) . \end{aligned}$$
Lemma 4.7
Further assume there exists \(B\in L^{r}\left( (0,1);L^{r}(\Omega ;\Lambda ^{k-1})\right) \) such that
for all \(g\in C_{c}^{1}\left( (0,1)\times \Omega ;\Lambda ^{k-1}\right) \); we say \(\delta h=B\) in the weak sense and say that \(\delta h\) belongs to \(L^{r}\left( (0,1);L^{r}(\Omega ;\Lambda ^{k-1})\right) .\) There exists \(\bar{h}_{t_{0}}\in W^{1,r}(\Omega ;\Lambda ^{k})\) such that, if we set \(\bar{h}(s,\cdot ):=h(s,\cdot )-h(t_{0},\cdot )+\bar{h}_{t_{0}},\) then the following hold.
(i)
Replacing h by \(\bar{h}\), (4.11) holds for any \(s \in \mathcal {T}\) and any \(H \in C_{c}^{1}(\Omega ;\Lambda ^{k-1})\). In other words, for any \(s \in \mathcal {T}\), we have \(\delta \bar{h}(s, \cdot ) =B(s, \cdot )\).
(ii)
There exists a constant \(C_{\Omega }\) depending only on \(\Omega \), r and k such that for all \(s\in (0,1)\)
$$\begin{aligned} ||\bar{h}(s, \cdot )|| \le |b|\left( (0, 1) \times \Omega \right) + C_{\Omega }\mathcal {L}^{n}(\Omega )^{\frac{1 }{r}} \left( \int _{(0,1) \times \Omega } |B(\tau ,x)|^{r} d\tau dx\right) ^{\frac{1 }{r}}. \end{aligned}$$
(iii)
We have \(\partial _{s} \bar{h} =b\) and \(\delta \bar{h}=B\) in the sense that we may substitute \(\bar{h}\) with h in (4.6) and (4.7).
Proof
By Lemma 4.6, for each \(F \in C^{1}(\Omega ;\Lambda ^{k})\), the real-valued function
is defined everywhere on [0, 1], belongs to \(\mathrm {BV}(0,1)\), and is right continuous on [0, 1) and left continuous at 1. We use (i) of the same Lemma to obtain
Let \(\mathcal {T}^{1}\) be a set of full Lebesgue measure in (0, 1) such that for all \(s\in \mathcal {T}^{1}\)
The set \(\mathcal {T}^{0}\), which consists of the \(s \in (0,1)\) such that
is of positive Lebesgue measure.
We use (4.7) to obtain, for any \(H \in C_{c}^{1} (\Omega ;\Lambda ^{k-1}),\) the existence of a set \(\mathcal {T}^{H} \subset \mathcal {T}^{1}\) of full Lebesgue measure in (0, 1) such that
for any \(s \in \mathcal {T}^{H}.\)
Let \(\{F_{n}\}_{n=1}^{\infty }\subset C_{c}^{1}(\Omega ;\Lambda ^{k-1})\) be a dense subset of \(C_{c}^{1}(\Omega ;\Lambda ^{k-1})\) for the \(||\cdot ||_{C^{1}(\Omega )} \)-norm. Set
The set \(\mathcal {T }\cap \mathcal {T}^{0}\) has the same measure as \(\mathcal {T}^{0}\). Let \(t_{0} \in \mathcal {T }\cap \mathcal {T}^{0}\). By Theorems 7.2 and 7.4 of [7] (written for \(r \in [2,\infty )\) but extendable to \(r \in (1,2)\)), there is \(\bar{h}_{t_{0}} \in W^{1,r}(\Omega ; \Lambda ^{k})\) such that
Furthermore, there is a constant C which depends only on \(\Omega \), k and r such that
Set
(i) Observe that (4.11) holds for any \(s \in \mathcal {T}\) and any H which is a point of accumulation of \(\{F_{n}\}.\) Using the fact that \(\{F_{n}\}_{n=1}^{\infty }\) is dense in \(C_{c}^{1}(\Omega ;\Lambda ^{k-1})\) we conclude the proof of (i).
(ii) We exploit Lemma 4.6 to obtain
This, together with (4.12) yields the desired inequality.
(iii) Observe that if \(g \in C_{c}^{1}\left( (0,1) \times \Omega ;\Lambda ^{k}\right) \) then
That is all that is needed to conclude that we may substitute \(\bar{h}\) with h in (4.6). By (i), \(\delta h(t_{0}, \cdot )= B(t_{0}, \cdot )\). Using the definition of \(\bar{h}_{t_{0}}\) we conclude that we may substitute \(\bar{h}\) with h in (4.7). \(\square \)
Definition 4.8
We define \(BV_{*}^{r}(0,1;\Omega )\) to be the set of \(h \in L^{1}\left( (0,1); \mathcal {M}(\Omega ;\Lambda ^{k})\right) \) such that \(\delta h \in L^{r}\left( (0,1); L^{r}(\Omega ; \Lambda ^{k-1})\right) ,\) and there exists \(b\in \mathcal {M}\left( (0,1) \times \Omega ;\Lambda ^{k}\right) \) such that (4.6) holds. We write \(b=\partial _{s} h.\)
Lemma 4.9
Let \((h^{\epsilon })_{\epsilon \in (0,1)} \subset BV_{*}^{r}(0,1;\Omega )\) be such that
and
Then there exists \(h^{0} \in BV_{*}^{r}(0,1;\Omega )\) such that up to a subsequence the following hold.
(i)
\((\delta h^{\epsilon })_{\epsilon }\) converges to \(\delta h^{0}\) weakly in \(L^{r}\left( (0,1) \times \Omega ; \Lambda ^{k-1} \right) \).
(ii)
\((\partial _{s} h^{\epsilon })_{\epsilon }\) converges weak \(*\) to \(\partial _{s} h^{0}\) on \((0,1) \times \Omega \).
(iii)
Except for countably many \(s \in (0,1),\) \((h^{\epsilon }(s, \cdot ))_{\epsilon }\) converges weak \(*\) to \(h^{0}(s, \cdot )\) on \(\Omega \).
Proof
There are
and a sequence \(\{ \epsilon _{m}\}_{m}\) decreasing to 0 such that the following hold:
(a)
\((\delta h^{\epsilon })_{\epsilon }\) converges to B weakly in \(L^{r}\left( (0,1) \times \Omega ; \Lambda ^{k-1} \right) \)
(b)
\((\partial _{s} h^{\epsilon })_{\epsilon }\) converges weak \(*\) to b on \((0,1) \times \Omega \)
(c)
\((|\partial _{s} h^{\epsilon }|)_{\epsilon }\) converges weak \(*\) to \(\beta \) on \(\mathbb {R }\times \mathbb {R}^{n}.\)
Write \((0,1) \cap \mathbb {Q}=\{t_{i} \}_{i=1}^{\infty }.\) Since
we use a diagonal sequence argument to obtain a subsequence of \((\epsilon _{m})_{m}\), which we continue to label \((\epsilon _{m})_{m}\), such that for each \(i \in \mathbb {N}\) there exists \(\bar{h}_{i} \in \mathcal {M}(\Omega ;\Lambda ^{k})\) such that \((h^{\epsilon _{m}}(t_{i}, \cdot ))_{m}\) converges weak \(*\) to \(\bar{h}_{i} \) on \(\Omega \).
Let D be the set of \(s \in (0,1)\) such that \(\beta (\{s\} \times \mathbb {R} ^{n})>0.\) Since \(\beta \) is a finite measure, D is at most countable. Let \(s \in (0,1) {\setminus } D\) and let \((t_{i_{j}})_{j}\) be a subsequence of \((t_{i})_{i}\) that converges to s. By Lemma 4.6
Because \(||h^{\epsilon _{m}}(s, \cdot )|| \le m_{0}\), the set \(\{ h^{\epsilon _{m}}(s, \cdot ) \}_{m}\) admits points of accumulation for the weak \(*\) topology. Let \(h^{0}(s, \cdot )\) be one of these points of accumulation. Letting m tend to \(\infty \) in (4.16) we have
and so,
Thus, \(\{ h^{\epsilon _{m}}(s, \cdot ) \}_{m}\) admits only one point of accumulation and \((\bar{h}_{i_{j}})_{j}\) converges weak \(*\) to \(h^{0}(s, \cdot )\). We extend \(s \rightarrow h^{0}(s, \cdot )\) to (0, 1) by setting \(h^{0}(s, \cdot ) \equiv 0\) for \(s \in D\).
Let \(g \in C_{c}^{1}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) .\) Since
and
for every \(s \in (0,1) {\setminus } D\), we use the dominated convergence theorem to conclude that
Thus, \(b=\partial _{s} h^{0}.\) Similarly, we show that \(\delta h^{0}=B\) and so, modifying \(h^{0}\) on a subset of (0, 1) of null Lebesgue measure, \(h^{0} \in BV_{*}^{r}(0,1;\Omega )\). \(\square \)
5 Finsler type metrics
Assume \(\Omega \subset {\mathbb {R}}^n\) is an open bounded convex set, \(p \in (1,\infty )\) and \(rp=r+p.\) Motivated by examples of cost functions c such as the one in Sect. B.1, we relax the condition imposed on the lower bound of \(c^{*}\) in Sect. 3 (cf. 3.2). This allows us to extend Theorem 3.8 to cost functions which take on infinite values. Throughout this section,
is a lower semicontinuous convex function. We assume that when \(c(\omega , \xi )<\infty \) then
and for any \(\lambda >0\) we have
We assume that there are constants \(\gamma _{1}, \gamma _{2}, \gamma _{6}, \gamma _{7}>0\) such that
and
for any \(\omega , b \in \Lambda ^{k}\) and \(\xi , B \in \Lambda ^{k-1}.\) Note that we may have
Let \(||\cdot ||_{f}\) and \(M_{p}(\cdot , \cdot )\) be defined as in (1.7) and (1.8).
Remark 5.1
Observe the following.
(i)
In case (5.5) does not hold, then by Lemma B.4 there exists a norm \(\Vert \cdot \Vert _{norm}\) such that \(c(\omega , A) \equiv \Vert A \Vert _{norm}^{p}\) is independent of \(\omega .\) According to [9] the solutions of (1.10) are minimizers of (1.8) and the only minimizers if we further impose that \(\Vert \cdot \Vert _{norm}^{p}\) is strictly convex.
(ii)
When \(k=n\), which is the case of volume forms, most works in the current literature studying geodesics deal with either the case where c assumes only finite values (as in Sect. 3) or the case where \(c^{*}(b, B) \in \{0,\infty \}\) for all \((b, B) \in \Lambda ^{k} \times \Lambda ^{k-1}\). When \(c^{*}(b, B) \in \{0,\infty \}\) (see Remark B.1 for such an example when \(k=2\)), the study of geodesics of optimal length in the set of k-forms would merely mimic the well-known theory of n-forms. Therefore, in the current manuscript, we keep our focus on the case where (5.4) is satisfied (cf. Sect. B.1 for an example).
For any Borel map \(f: \Omega \rightarrow \Lambda ^{k}\), we define
Let
be Borel maps. When \(1\le k\le n-1,\) we assume that
However when \(k=n\), we assume that
and so, (5.7) implies that \(|\bar{f}_{0}|,|\bar{f}_{1}|\) are bounded functions. Let \((\bar{f},\bar{A})\) be as in Remark 2.3. The same Remark provides us with a constant \(C_{p,\Omega }\) independent of \(\bar{f}_{0},\bar{f}_{1}\) such that
By the convexity property of c,
and so, by the homogeneity with respect to the second variables
Recall that \(P^{p}(\bar{f}_{0},\bar{f}_{1})\) is a set of paths connecting \(\bar{f}_{0}\) to \(\bar{f}_{1}\) as given in Definition 2.2. In other words, if \((f, A) \in P^{p}(\bar{f}_{0}, \bar{f}_{1})\) then in the weak sense
5.1 A metric on a subset of the set of differential forms
Lemma 5.2
(Reparametrization by arc lengths) Suppose \(c: \Lambda ^{k} \times \Lambda ^{k-1} \rightarrow [0,\infty ]\) is a lower semicontinuous convex function that satisfies (5.1) and (5.2). If \(\bar{f}_{0}\) and \(\bar{f}_{1}\) are such that (5.6) and (5.7–5.8) hold then
Proof
For any \((f,A) \in P^{p}(\bar{f}_{0}, \bar{f}_{1})\) we use Jensen’s inequality to conclude that
Thus,
It remains to prove the reverse inequality. Assume without loss of generality that \(M_{p}^{p}(\bar{f}_{0}, \bar{f}_{1})<\infty \), since otherwise there is nothing to prove. Let \(\epsilon >0\) and let \((f^{\epsilon },A^{\epsilon }) \in P^{p}(\bar{f}_{0}, \bar{f}_{1})\) be such that
Define
Observe that \(S_{\epsilon }: [0,1] \rightarrow [0,1]\) is a bijection and so has an inverse \(T_{\epsilon }: [0,1] \rightarrow [0,1]\) such that
Define
We have \((\tilde{f}, \tilde{A}) \in P^{p}(\bar{f}_{0}, \bar{f}_{1})\) and
Thus, using (5.13) we obtain that
After an integration over (0, 1) we use (5.12) to conclude that
Letting \(\epsilon \) tend to 0 we have
\(\square \)
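The mechanism of the proof can be summarized in two lines (a sketch; \(\Vert A_{s}\Vert _{f_{s}}\) is the quantity from (1.7), and we use the natural time-change convention \(\tilde{A}(s)=T_{\epsilon }'(s)\, A(T_{\epsilon }(s))\), which we assume matches (5.13)). Jensen's inequality gives, for every \((f,A) \in P^{p}(\bar{f}_{0}, \bar{f}_{1})\),

```latex
$$\begin{aligned}
\left( \int _{0}^{1} \Vert A_{s}\Vert _{f_{s}}\, ds \right) ^{p}
\le \int _{0}^{1} \Vert A_{s}\Vert ^{p}_{f_{s}}\, ds,
\end{aligned}$$
```

with equality precisely when \(s \rightarrow \Vert A_{s}\Vert _{f_{s}}\) is constant. The reparametrized path \((\tilde{f}, \tilde{A})\) is built so that its speed is (nearly) constant, which yields the reverse inequality up to \(\epsilon \).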
Lemma 5.3
Suppose \(c: \Lambda ^{k} \times \Lambda ^{k-1} \rightarrow [0,\infty ]\) is a lower semicontinuous convex function that satisfies (5.1) and (5.2). There exists a constant \(\bar{C}_{\Omega }\) which depends only on \(\Omega \) and s such that if \(\bar{f}_{0}\) and \(\bar{f}_{1}\) are such that (5.6) and (5.7–5.8) hold then
Proof
Let \((\bar{f}, \bar{A})\) be as in Remark 2.3 (i) and recall that by (ii) of the same Remark, \((\bar{f}, \bar{A}) \in P^{p}(\bar{f}_{0}, \bar{f}_{1})\). We integrate the expressions in (5.10) to obtain
We first use Lemma 5.2 and then (5.9) to conclude that
which completes the proof. \(\square \)
Denote by \(\mathcal {H}_{p}\) the set of k-forms \(f \in L^{p}\left( \Omega ;\Lambda ^{k}\right) \) such that
and
Theorem 5.4
Suppose \(c: \Lambda ^k \times \Lambda ^{k-1} \rightarrow [0,\infty ]\) is a lower semicontinuous convex function that satisfies (5.1), (5.2) and (5.3). Then the following hold.
(i)
If \(\bar{f}_0\) and \(\bar{f}_1\) satisfy (5.6) and (5.7) then there exists \((f^*, A^*)\) that minimizes \(\mathcal {C}\) and \(\int _0^1 ||A_s||_{f_s}ds\) over \(P^p(\bar{f}_0, \bar{f}_1).\)
(ii)
The function \(M_p\) in (1.8) is a metric on the set \(\{f \in \mathcal {H}_p \; | \; c^\infty (f)<\infty \}.\)
Proof
(i) follows from Proposition 3.2 and Lemma 5.2.
(ii) Let \(\tilde{f}_{0}, \tilde{f}_{1}, \tilde{f}_{2}\in \mathcal {H}_{p}.\) By (i) and Lemma 5.2 there are \((f^{0}, A^{0}) \in P^{p} (\tilde{f}_{0}, \tilde{f}_{1})\) and \((f^{1}, A^{1}) \in P^{p}(\tilde{f}_{1}, \tilde{f}_{2})\) such that
and
By Lemma 5.3, if \(\tilde{f}_{0}=\tilde{f}_{1}\) then \(M_{p}(\tilde{f}_{0}, \tilde{f}_{1})=0.\) Conversely, \(M_{p}(\tilde{f}_{0}, \tilde{f}_{1})=0\) means
and so, \(c(f^{0}, A^{0})=0\) almost everywhere on \((0,1) \times \Omega .\) By (5.1), \(A^{0}=0\) almost everywhere on \((0,1) \times \Omega .\) This means \((f^{0},0) \in P^{p}(\tilde{f}_{0}, \tilde{f}_{1})\) and so, \(\tilde{f}_{1}=\tilde{f}_{0}.\)
Setting
we have \((\tilde{f}, \tilde{A}) \in P^{p}(\tilde{f}_{1}, \tilde{f}_{0})\) and so,
By symmetry, the reverse inequality holds and so, \(M_{p}^{p}(\tilde{f}_{1}, \tilde{f}_{0})=M_{p}^{p}(\tilde{f}_{0}, \tilde{f}_{1}).\)
Set
We have \((f, A) \in P^{p}(\tilde{f}_{0}, \tilde{f}_{2})\) and
Hence
This concludes the proof of (ii). \(\square \)
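The concatenation used for the triangle inequality can be written out explicitly (a sketch, assuming that \(\Vert \cdot \Vert _{f}\) is positively 1-homogeneous in A, so that the factor 2 compensates the time rescaling):

```latex
$$\begin{aligned}
(f, A)(s) :=
\begin{cases}
\bigl( f^{0}(2s),\; 2A^{0}(2s) \bigr) & s \in [0, 1/2],\\
\bigl( f^{1}(2s-1),\; 2A^{1}(2s-1) \bigr) & s \in (1/2, 1],
\end{cases}
\end{aligned}$$

$$\begin{aligned}
\int _{0}^{1} \Vert A_{s}\Vert _{f_{s}}\, ds
= \int _{0}^{1} \Vert A^{0}_{t}\Vert _{f^{0}_{t}}\, dt
+ \int _{0}^{1} \Vert A^{1}_{t}\Vert _{f^{1}_{t}}\, dt
= M_{p}(\tilde{f}_{0}, \tilde{f}_{1}) + M_{p}(\tilde{f}_{1}, \tilde{f}_{2}),
\end{aligned}$$
```

where the last equality holds for the optimal paths \((f^{0}, A^{0})\) and \((f^{1}, A^{1})\) provided by (i); Lemma 5.2 then yields \(M_{p}(\tilde{f}_{0}, \tilde{f}_{2}) \le M_{p}(\tilde{f}_{0}, \tilde{f}_{1}) + M_{p}(\tilde{f}_{1}, \tilde{f}_{2}).\)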
5.2 A duality result for non-finite cost function
Remark 5.5
The following hold.
(i)
By the convexity and lower semicontinuity properties of
$$\begin{aligned} (b, B) \rightarrow \underline{c}_{\epsilon }(b, B):=c^{*}(b, B) +{\frac{\epsilon }{p}} (|b|^{p}+|B|^{p}), \end{aligned}$$setting \(c_{\epsilon }:=(\underline{c}_{\epsilon })^{*}\), we have \(c_{\epsilon }^{*}=\underline{c}_{\epsilon }.\)
(ii)
Observe that since \(c^{*}\) is convex, \(c^{*}_{\epsilon }\) is strictly convex. Furthermore,
$$\begin{aligned} c^{*}_{\epsilon }\ge c^{*}\quad \hbox {and} \quad c_{\epsilon }\le c. \end{aligned}$$
(iii)
By (5.4) there is a constant \(\gamma _{3}^{\epsilon }>0\) depending on \(\epsilon >0\) such that
$$\begin{aligned} -\gamma _{2} + {\frac{\epsilon }{p}} (|b|^{p}+|B|^{p}) \le c^{*}_{\epsilon }(b, B) \le \gamma _{7} + \gamma _{3}^{\epsilon }\left( |b|^{r}+|B|^{r} \right) \end{aligned}$$
(iv)
By (5.3) there are constants \(\gamma _{6}^{*}>0\) and \(\gamma _{7}^{*} \ge 0\) independent of \(\epsilon \in (0,1)\) such that
$$\begin{aligned} c_{\epsilon }(\omega , \xi ) \ge \gamma _{6}^{*} \left( |\omega |^{p}+|\xi |^{p} \right) -\gamma _{7}^{*}. \end{aligned}$$
(v)
Using the notation of Sect. 3, since \(c^{*}\) satisfies (iii), Proposition 3.7 asserts the existence of \(h_{\epsilon }\) that maximizes \(\mathcal {D}_{\epsilon }\) over \(\mathbf {B}^{r}\left( (0,1) \times \Omega ; \Lambda ^{k} \right) \). By Theorem 3.8 there exists \((f^{\epsilon }, A^{\epsilon })\) which minimizes \(\mathcal {C}_{\epsilon }\) over \(P^{p}(\bar{f}_{0}, \bar{f}_{1})\). Furthermore, \(\mathcal {D}_{\epsilon }(h_{\epsilon })=\mathcal {C}_{\epsilon }(f^{\epsilon }, A^{\epsilon })\). Since \(c^{*}_{\epsilon }\) is strictly convex, \(c_{\epsilon }\) is continuously differentiable and so, Theorem 3.8 gives
$$\begin{aligned} (f^{\epsilon }, A^{\epsilon }) \in \partial c^{*}_{\epsilon }(\partial _{s} h_{\epsilon }, \delta h_{\epsilon }) \quad \hbox {i.e.} \quad (\partial _{s} h_{\epsilon }, \delta h_{\epsilon })= \nabla c_{\epsilon }(f^{\epsilon }, A^{\epsilon }). \end{aligned}$$
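A one-dimensional illustration of this regularization, with \(p=r=2\) and the toy choice \(c^{*}(b)=|b|\) (so that c is the indicator function of \([-1,1]\); this is not one of the costs of the paper): \(c_{\epsilon }=\left( c^{*}+\frac{\epsilon }{2}|\cdot |^{2}\right) ^{*}\) is finite everywhere, continuously differentiable, and satisfies \(c_{\epsilon }\le c\).

```python
import numpy as np

eps = 0.1
y = np.linspace(-60.0, 60.0, 120001)  # dual grid for the sup

def c_eps(x):
    """c_eps(x) = sup_y ( x*y - |y| - (eps/2) y^2 ), computed on a grid."""
    return np.max(x * y - np.abs(y) - 0.5 * eps * y**2)

# Closed form here: c_eps(x) = (max(|x| - 1, 0))^2 / (2 eps).
# In particular c_eps = 0 on [-1, 1] = dom(c), so c_eps <= c everywhere,
# while c_eps stays finite where c = +infinity.
assert abs(c_eps(0.5)) < 1e-9
assert abs(c_eps(2.0) - 1.0 / (2 * eps)) < 1e-6
print(c_eps(2.0))  # ~ 5.0
```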
Theorem 5.6
Assume c satisfies (5.1), (5.2) and (5.3) and \(c^*\) satisfies (5.4). We assume that \(\bar{f}_0, \bar{f}_1 \in C_0(\Omega , \Lambda ^k)\) are such that (5.6), (5.7) hold and there exists \(\epsilon _0>0\) such that \(|\bar{f}_0|, |\bar{f}_1| \le \gamma _1-\epsilon _0\). Then
Proof
1. Let \((f^{\epsilon }, A^{\epsilon })\) and \(h^{\epsilon }\) be the optima in Remark 5.5. We first use the minimality property of \((f^{\epsilon }, A^{\epsilon })\) and then use Remark 5.5 (ii) to conclude that
This, together with Remark 5.5 (iv) implies
Thus, up to a subsequence \((f^{\epsilon })_{\epsilon }\) converges weakly in \(L^{p}((0,1) \times \Omega ; \Lambda ^{k})\) to some \(f^{0}\) and \((A^{\epsilon })_{\epsilon }\) converges weakly in \(L^{p}((0,1) \times \Omega ; \Lambda ^{k-1})\) to some \(A^{0}.\) For any \(b \in C_{0}((0,1) \times \Omega ; \Lambda ^{k})\) and \(B \in C_{0}((0,1) \times \Omega ; \Lambda ^{k-1})\) we have
Thus, since \(c^{*}\) takes on only finite values, maximizing over (b, B), we can use Proposition C.5 (iii) to conclude that
Recall the expression of \({\mathcal {D}}_{\epsilon }\) in (3.9), use the maximality property of \(h^{\epsilon }\) and (5.4) to obtain that
Thus, thanks to Lemma 4.7, we may assume without loss of generality that (4.14) holds. We use Lemma 4.9 to conclude that there exists \(h^{0} \in BV_{*}^{r}(0,1;\Omega )\) such that up to a subsequence
(i)
\((\delta h^{\epsilon _{m}})_{m}\) converges to \(\delta h^{0}\) weakly in \(L^{r}\left( (0,1) \times \Omega ; \Lambda ^{k-1} \right) \).
(ii)
\((\partial _{s} h^{\epsilon _{m}})_{m}\) converges weak \(*\) to \(\partial _{s} h^{0}\) on \((0,1) \times \Omega \).
(iii)
For \(\mathcal {L}^{1}\)-almost every \(s \in (0,1),\) \((h^{\epsilon _{m}}(s, \cdot ))_{m}\) converges weak \(*\) to \(h^{0}(s, \cdot )\) on \(\Omega \).
Since \(c^{*}_{\epsilon }\ge c^{*}\),
By Theorem 3.3.1 of [5] and the convergence in (i) and (ii), we have
The integral of \(c^{*}(\partial _{s} h^{0}, \delta h^{0})\) needs to be interpreted as in Definition 4.2 which involves the recession function \(\bar{c}^{*}.\) Combining (5.18) and (5.19) we obtain
Recall that we can assume without loss of generality that \(s \rightarrow h^{0}(s, \cdot )\) is left continuous at 1 and right continuous at 0. We use the trace operator in Sect. 4.2, and combine (4.14) with (4.15) to obtain that
Rearranging the expressions in the identity \(\mathcal {C}_{\epsilon }(f^{\epsilon }, A^{\epsilon })=\mathcal {D}_{\epsilon }(h^{\epsilon })\) we have
Thus, using (5.17), (5.20) and (5.21), together with the fact that
we obtain
This means
2. We claim that \(\mathcal {C}(f, A) \ge \mathcal {D}(h)\) for any \((f, A) \in P^{p}(\bar{f}_{0}, \bar{f}_{1})\) and any \(h \in BV_{*}^{r}(0,1;\Omega )\). Observe that (5.3) and (5.4) imply that \(C:=c^{*}\) satisfies (C.1) and (C.2). By the assumption on c, we have \(C^{*}\ge C^{*}(0)=0.\) Let \(h^{\epsilon }_{l} \in C^{\infty }(\Omega ; \Lambda ^{k})\) and \(h_{l}\in BV_{*}^{r}(0,1;\Omega _{l})\) be the approximations of h as defined by (D.2) in Section D. Here, \(\Omega _{l}\) is the l-neighborhood of \(\Omega .\) We have
Letting \(\epsilon \) tend to 0 in Lemmas D.3 and D.4 we obtain
Letting l tend to 0 in Lemmas D.3 and D.4 we obtain \(\mathcal {C}(f, A) \ge \mathcal {D}(h).\) This, together with (5.24) concludes the proof of the Theorem. \(\square \)
References
Brenier, Y., Duan, X.: An integrable example of gradient flows based on optimal transport of differential forms (2017). arXiv:1704.00743
Bouchitté, G., Buttazzo, G.: New lower semicontinuity results for nonconvex functionals defined on measures. Nonlinear Anal. 15, 679–692 (1990)
Bouchitté, G., Buttazzo, G.: Integral representation of nonconvex functionals defined on measures. Ann. Inst. H. Poincaré Anal. Non Linéaire 9, 101–117 (1992)
Bouchitté, G., Buttazzo, G.: Relaxation for a class of nonconvex functionals defined on measures. Ann. Inst. H. Poincaré Anal. Non Linéaire 10, 345–361 (1993)
Buttazzo, G.: Semicontinuity, Relaxation and Integral Representation in the Calculus of Variations. Pitman Research Notes in Mathematics Series, vol. 207. Longman Scientific and Technical, New York (1989)
Castaing, C., Valadier, M.: Convex Analysis and Measurable Multifunctions. Springer, Berlin (1977)
Csato, G., Dacorogna, B., Kneuss, O.: The Pullback Equation for Differential Forms. Birkhäuser, Basel (2012)
Dacorogna, B., Gangbo, W.: Quasiconvexity and relaxation in optimal transportation of closed differential forms (Submitted)
Dacorogna, B., Gangbo, W., Kneuss, O.: Optimal transport of closed differential forms for convex costs. C. R. Math. Acad. Sci. Paris Ser. I 353, 1099–1104 (2015)
Dacorogna, B., Gangbo, W., Kneuss, O.: Symplectic factorization, Darboux theorem and ellipticity. Ann. Inst. H. Poincaré Anal. Non Linéaire 35(2), 327–356 (2018)
Dunford, N., Schwartz, J.T.: Linear Operators. Part I. General Theory. Wiley, New York (1988)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
Somersille, S.: work in progress
Témam, R.: Problèmes mathématiques en Plasticité. Gauthier-Villars, Paris (1983)
Acknowledgements
The authors wish to thank Y. Brenier and F. Rezakhanlou for fruitful conversations and for sharing their notes with them. They acknowledge intensive discussions with O. Kneuss, which took place during the course of this work. The authors wish to thank G. Buttazzo for providing them with references [2,3,4,5]. The research of WG was supported by NSF Grants DMS–11 60 939 and DMS–17 00 202. In addition, WG would like to acknowledge the generous support provided by the Fields Institute during Fall 2014: Thematic Program on Variational Problems in Physics, Economics and Geometry, where part of this work was done. We thank L. Ambrosio for drawing our attention to the recent work by Brenier and Duan [1]. Finally, the authors would like to thank the anonymous referee for suggestions on how to improve the presentation of the manuscript.
Additional information
Communicated by L. Ambrosio.
Appendices
Appendix A. Open problems
Throughout this section, we use the same notation as in Sect. 5.2. To alleviate the notation, we denote by (f, A) the pair \((f^0, A^0)\) in Remark 4.4 and write h instead of \(h^0.\) Let \((\partial _{s} h)_{a}\) denote the absolutely continuous part of \(\partial _{s} h\). By abuse of notation, we do not distinguish between \((\partial _{s} h)_{a}\) and its Radon–Nikodym derivative with respect to \(\mathcal {L}^{n+1}.\)
Remark A.1
According to Remark 4.4,
and if equality holds
We next list a few open problems as sources of future investigation.
Problem A.2
These problems are stated under the hypotheses of Sect. 5.
(i)
What are the regularity properties of the minimizing geodesics in (5.16), or equivalently, thanks to (A.1), what are the regularity properties of the maximizer h in (5.16)?
(ii)
For the sake of illustration, let c be given by (1.11) so that
$$\begin{aligned} c^*(b, B)= \sqrt{|b|^2+ {|B|^{2r} \over r^2}}. \end{aligned}$$Hence, formally at least, using (A.1) and expressing the fact that \((f, A) \in P^p(\bar{f}_0, \bar{f}_1),\) we have
$$\begin{aligned} \partial _s \left( { (\partial _s h)_a \over \sqrt{ \left| (\partial _s h)_a\right| ^2 + r^{-2}|\delta h|^{2r} } } \right) + d\left( { \delta h |\delta h|^{2(r-1)} \over r\sqrt{ \left| (\partial _s h)_a\right| ^2 + r^{-2}|\delta h|^{2r} } }\right) =0 \end{aligned}$$(A.2)in the sense of distributions in the interior of \(\mathbb {U}\) where
$$\begin{aligned} \mathbb {U} :=\left\{ \left| (\partial _s h)_a \right| ^2 +r^{-2} |\delta h|^{2r}>0 \right\} \end{aligned}$$Observe that (A.2) is a system of PDEs of elliptic type. What can we show about the set \(\mathbb {U}\)?
(iii)
Continuing with c given by (1.11), what are the regularity properties of \(\left( (\partial _s h)_a , \delta h\right) \) or equivalently, since the regularity properties of h transfer to those of (f, A) through (A.1), what are the regularity properties of (f, A)?
Appendix B. Convex functions
Throughout this section, we assume that \(\Omega \subset {\mathbb {R}}^n\) is an open bounded convex set, \(p, r \in (1,\infty )\) and \(rp=r+p.\)
1.1 Examples
A prototype cost is
where
and
In this case, the Legendre transform of \(\theta \) is the strictly convex function \(\theta ^{*}:\Lambda ^{k}\rightarrow [1,\infty )\) of class \(C^{1}\) given by
The Legendre transform of U is \(U^{*}\) and
If \(b\in \Lambda ^{k}\) and \(B\in \Lambda ^{k-1}\) then
For \(B\not =0\), the minimum is achieved at \(\alpha _{0}:=|B|^{r}/r>0.\)
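The exponent relation \(rp=r+p\), i.e. \(\frac{1}{p}+\frac{1}{r}=1\), makes \(|\cdot |^{p}/p\) and \(|\cdot |^{r}/r\) a Legendre pair; this is what drives minimizations like the one above. A numerical sketch in the scalar case (illustrative only):

```python
import numpy as np

p = 3.0
r = p / (p - 1.0)              # rp = r + p  <=>  1/p + 1/r = 1
assert abs(r * p - (r + p)) < 1e-12

x = np.linspace(-40.0, 40.0, 80001)

def conjugate(b):
    """Legendre transform of x -> |x|^p / p, evaluated at b, via a grid sup."""
    return np.max(b * x - np.abs(x)**p / p)

# The transform should equal |b|^r / r.
for b in (0.7, -1.3, 2.0):
    assert abs(conjugate(b) - abs(b)**r / r) < 1e-4
print("the Legendre transform of |x|^p/p is |b|^r/r")
```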
Remark B.1
As mentioned in Remark 5.1, we have chosen not to include cases such as
which satisfy \(c(\lambda \omega , \lambda \xi )=\lambda c(\omega , \xi )\) for any \(\lambda \in \mathbb {R}\). Indeed, in this case, \(c^{*}(b, B) \in \{0,\infty \}\) for all \((b, B) \in \Lambda ^{2} \times \Lambda ^{1}.\)
1.2 Bounds on gradients of convex functions
Let \(\mathbb {H}\) be a finite dimensional Hilbert space, assume that \(c, c^{*}: \mathbb {H }\rightarrow (-\infty , \infty ]\) are Legendre transforms of each other and let \(\gamma _{6}, \gamma _{7}, \gamma _{8}>0\).
Remark B.2
The following hold.
(i)
Suppose \(c(w) \ge -\gamma _{8}\) and \(c^{*}(z) \ge \gamma _{6} |z|^{r}-\gamma _{7}\) for any \(w, z \in \mathbb {H}.\) Then there exists a constant \(\bar{C}_{\gamma }\) depending only on p, \(\gamma _{6},\) \(\gamma _{7}\) and \(\gamma _{8}\) such that
$$\begin{aligned} \sup _{z \in \partial _{\cdot }c(w)}|z| \le \bar{C}_{\gamma }(|w|^{p-1}+1), \qquad \forall \; w \in \mathbb {H}. \end{aligned}$$(B.5)
(ii)
Similarly, suppose \(c(w) \ge \gamma _{6} |w|^{p}-\gamma _{7}\) and \(c^{*}(z) \ge -\gamma _{8}\) for any \(w, z \in \mathbb {H}\). Then there exists a constant \(\tilde{C}_{\gamma }\) depending only on r, \(\gamma _{6},\) \(\gamma _{7}\) and \(\gamma _{8}\) such that
$$\begin{aligned} \sup _{w \in \partial _{\cdot }c^{*}(z)}|w| \le \tilde{C}_{\gamma }(|z|^{r-1}+1), \qquad \forall \; z \in \mathbb {H}. \end{aligned}$$(B.6)
1.3 A class of convex functions
Assume that \(c: \Lambda ^{k} \times \Lambda ^{k-1} \rightarrow (-\infty , \infty ]\) is lower semicontinuous and that, if \(\omega \in \Lambda ^{k}\) and \(A \in \Lambda ^{k-1}\) are such that \(c(\omega , A)<\infty \), then
and
For \(\xi \in \Lambda ^{k-1}\) we define
We set
Denote by \(\Pi : \Lambda ^{k} \times \Lambda ^{k-1} \rightarrow \Lambda ^{k}\) the projection operator. We assume that
Obviously, if c takes on only finite values, then (B.9) holds.
Lemma B.3
Suppose c satisfies (B.9).
(i)
We have
$$\begin{aligned} c^{*}(b, B)= \sup _{\omega } \left\{ \langle b; \omega \rangle +(c_{\omega })^{*}(B) \right\} = \left\{ \begin{array}{ll} \infty &{} \quad \text {if } b \not =0\\ \sup \nolimits _{\omega \in \Lambda ^{k}}(c_{\omega })^{*}(B) &{} \quad \text {if } b=0. \end{array} \right. \end{aligned}$$
(ii)
For any \((\omega , \xi ) \in \Lambda ^{k} \times \Lambda ^{k-1}\) we have
$$\begin{aligned} c^{**}(\omega , \xi ) \le \inf _{w \in \Lambda ^{k}} (c_{w})^{**}(\xi ) \le G_{c}(\xi ). \end{aligned}$$
(iii)
Because c is lower semicontinuous and (B.8) holds, we have \(\lambda _{c}>0.\) Defining \(s>0\) by \(s^{p}=p\lambda _{c}\) we have
$$\begin{aligned} c^{*}(0, B) \le {\frac{|B|^{r} }{r s^{r}}}. \end{aligned}$$
Proof
We only comment on the proof of (iii).
Let \((\omega , \xi ), (b, B) \in \Lambda ^{k} \times \Lambda ^{k-1}\). If \(\xi \not =0\) then by Young’s inequality
Rearranging and maximizing the subsequent inequality over \(\xi \) we obtain
which, together with (i) implies (iii). \(\square \)
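For the reader's convenience, the omitted computation can be written out as follows, assuming (as the definition of s suggests) that (B.8) provides the coercivity \(c(\omega ,\xi ) \ge \lambda _{c}|\xi |^{p}\):

```latex
% Young's inequality with the splitting parameter s, where s^p = p\lambda_c:
\[
\langle B;\xi\rangle \le |B|\,|\xi|
\le \frac{s^{p}|\xi|^{p}}{p}+\frac{|B|^{r}}{r\,s^{r}}
= \lambda_{c}|\xi|^{p}+\frac{|B|^{r}}{r\,s^{r}}
\le c(\omega,\xi)+\frac{|B|^{r}}{r\,s^{r}}.
\]
% Rearranging, taking the supremum over (\omega,\xi) and using (i) with b=0:
\[
c^{*}(0,B)=\sup_{\omega,\xi}\bigl\{\langle B;\xi\rangle-c(\omega,\xi)\bigr\}
\le \frac{|B|^{r}}{r\,s^{r}}.
\]
```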
Lemma B.4
Suppose c takes on only finite values.
(i)
If c is upper semicontinuous, so is \(G_{c}.\)
(ii)
If \(G_{c}\) is lower semicontinuous and convex, then we have
$$\begin{aligned} c^{**}(\omega , \xi )=G_{c}(\xi ) \quad \forall \; (\omega , \xi ) \in \Lambda ^{k} \times \Lambda ^{k-1}. \end{aligned}$$
(iii)
If c is convex so is \(G_{c}\). If in addition c is bounded below, then \(G_{c}\) is locally Lipschitz and \(c(\omega , \xi ) \equiv G_{c} (\xi ).\)
Proof
We shall only comment on the last statement of (iii) and leave it to the reader to show (i), (ii) and that if c is convex so is \(G_{c}\). Assume we know \(G_{c}\) is convex. Since it takes on only finite values, it is locally Lipschitz. By the convexity of c and (ii), \(c(\omega , \xi )=c^{**}(\omega ,\xi ) \equiv G_{c}(\xi ).\) \(\square \)
Appendix C. Representation formulas for \(\int _O C(F)\) when F is a measure
Let \(\mathbb {H}\) be either the Hilbert space \(\Lambda ^{k}\times \Lambda ^{k-1}\) or \(\mathbb {R}^{N}.\) We assume that
and denote by \(C^{*}\) the Legendre transform of C. Set \(D:=\mathrm {dom} \left( C^{*}\right) \) and let \(\mathrm {int}(D)\) be the interior of D. Note that if \(0 \in D\) then \(C(u) \ge \langle 0; u\rangle -C^{*}(0)=-C^{*}(0)\) and so, C is bounded below. We sometimes make the stronger assumption that
1.1 Basic properties of recession function
Consider the Minkowski function of \(\bar{D}\) and its polar
Recall that \(\bar{C}\) is the recession function of C as given by Definition 4.2. We recall the following two lemmas from convex analysis.
Lemma C.1
Assume C is convex and (C.1) holds. Then,
(ii)
\(\bar{C}=\varrho _{\bar{D}}^{0}\).
(iii)
\((\bar{C})^{*}(u)=\left\{ \begin{array}{ll} 0 &{} \text {if }\;u \in \bar{D} \\ \infty &{} \text {if }\;u \not \in \bar{D} \end{array} \right. \)
Lemma C.2
Assume C is convex and (C.1) and (C.2) hold. Then,
(i)
\(\varrho _{\bar{D}}\) is Lipschitz and there exists \(\epsilon >0\) such that \(\varrho _{\bar{D}}^{0} \ge \epsilon \Vert \cdot \Vert .\)
(ii)
$$\begin{aligned} \bar{D}=\{ \varrho _{\bar{D}} \le 1\}, \quad \mathrm {int}(D)=\{ \varrho _{\bar{D}} < 1\}. \end{aligned}$$(C.4)
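A concrete one-dimensional illustration of the two lemmas (our example, with \(\mathbb {H}=\mathbb {R}\) and the usual recession formula \(\bar{C}(u)=\lim _{t\rightarrow \infty }C(tu)/t\)):

```latex
\[
C(u)=\sqrt{1+u^{2}},\qquad
C^{*}(z)=\begin{cases}-\sqrt{1-z^{2}} & \text{if } |z|\le 1,\\
\infty & \text{if } |z|>1,\end{cases}
\qquad D=\bar D=[-1,1].
\]
% Here 0 \in int(D), \varrho_{\bar D}(u)=|u| and \varrho^{0}_{\bar D}(u)=|u|, and indeed
\[
\bar C(u)=\lim_{t\to\infty}\frac{\sqrt{1+t^{2}u^{2}}}{t}=|u|
=\varrho^{0}_{\bar D}(u),
\qquad
(\bar C)^{*}(u)=\begin{cases}0 & \text{if } u\in\bar D,\\
\infty & \text{if } u\notin\bar D,\end{cases}
\]
% as asserted in Lemma C.1, while \bar D=\{\varrho_{\bar D}\le 1\}
% and int(D)=\{\varrho_{\bar D}<1\}, as in (C.4).
```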
1.2 Integral of functions of measures in terms of Legendre transform
Let \(O\subset \mathbb {R}^{n+1}\) be a bounded open set and assume that C is convex and (C.1) holds and \(0 \in D.\) For each \(l>0\) we define \(C^{*}_{l}\)
Because \(C_{l}^{*}\) is lower semicontinuous and does not achieve the value \(-\infty ,\) \(C_{l},\) the Legendre transform of \(C_{l}^{*},\) is l-Lipschitz and convex. Furthermore, by the fact that \(C=(C^{*})^{*}\) and \(C_{l}^{*}\ge C^{*}\),
for all \(v\in \mathbb {H}.\) Note that
Identify \(\mathbb {H}\) with \(\mathbb {R}^{N}\). For N signed Borel measures \(F_{1}, \cdots , F_{N}\) on O of finite total mass, we write the Radon–Nikodym decomposition
Here, \(\eta \) is a finite Borel measure on O such that \(\mathcal {L}^{n+1}\) and \(\eta \) are mutually singular, \(F_{a} \in L^{1}(O)\) is a Borel map and \(F_{s} \in L^{1}(O, \eta )\) is a Borel map. We set
where \(\bar{C}\) is the recession function of C. By Lemma C.1, since \(0 \in D,\) \(0=\bar{C}(0)\le \bar{C}\) and \(-C^{*}(0) \le C.\) Hence, \(\int _{O} C(F_{a}) dx\) and \(\int _{O} \bar{C}(F_{s}) d\eta \) exist although they may be \(\infty .\) Observe that because \(\bar{C}\) is 1-homogeneous, if \(g_{1}\) and \(g_{2}\) are two finite Borel measures on O which are absolutely continuous with respect to each other then
Hence, even if the pair \((F_{s}, \eta )\) is not uniquely determined, the product \(F_{s} \eta \) is, and so \(\int _{O} \bar{C}(F_{s}) d\eta \) is well defined. Similarly, (C.6) implies that \(\int _{O} C_{l}(F) dz\) makes sense. Thanks to (C.5) and (C.6) we can apply Fatou's Lemma to obtain that
Remark C.3
Assume \(G_{1}: O \rightarrow \mathbb {H}\) is a bounded Borel map. If \(|G_{1}|\le l\), then
makes sense although it may be \(-\infty .\) It is not \(-\infty \) if and only if \(C^{*}(G_{1}), C^{*}_{l}(G_{1}) \in L^{1}(O).\)
Proof
Since \(F_{a} \in L^{1}(O)\), if \(|G_{1}|\le l\), using the definition of \(C^{*}_{l}\), Young’s inequality and eventually (C.5 ), we have
This allows us to conclude the proof of the remark. \(\square \)
Thanks to Remark C.3 it makes sense to define
Here, \(\mathcal {B}_{2}\) is the set of pairs \((G_{1}, G_{2})\) such that \(G_{1}, G_{2}:O \rightarrow \mathbb {H}\) are Borel and bounded, \(G_{1} \in \mathrm {dom}(C^{*})\) \(\mathcal {L}^{n+1}\)—a.e. and \(G_{2} \in \mathrm {dom}(C^{*}) \) \(\eta \)—a.e.
Lemma C.4
Assume C is convex, (C.1) holds and \(0 \in D\). Then \(\mathcal {K}_{1}(F)=\mathcal {K}_{2}(F).\)
Proof
If \((G_{1}, G_{2}) \in \mathcal {B}_{2}\) then except on a set of \(\mathcal {L}^{n+1}\)-null measure
By Lemma C.1 (i), except on a set of \(\eta \)-null measure
We integrate the two terms in (C.8) with respect to \(\mathcal {L}^{n+1}\) and those in (C.9) with respect to \(\eta \) and add up the subsequent inequalities to conclude that \(\mathcal {K} _{1}(F) \ge \mathcal {K}_{2}(F).\)
Suppose first that \(\mathcal {K}_{1}(F)<\infty \) and fix an arbitrary \(\epsilon >0\). Since \(C_{l}\) assumes only finite values, for any \(v\in \mathbb {H}\), the subdifferential \(\partial _{\cdot }C_{l}(v)\) is not empty. Because \(C_{l}\) is l-Lipschitz, \(\partial _{\cdot }C_{l}(v)\) is a compact set contained in the ball of radius l. The theory of multifunctions [6] ensures the existence of a Borel map \(M_{l}:\mathbb {H}\rightarrow \mathbb {H}\) such that for any \(v\in \mathbb {H}\), \(M_{l}(v)\in \partial _{\cdot }C_{l}(v)\) and \(|M_{l} (v)|\le l.\) The map \(G_{l}:=M_{l}\circ F_{a}\) is a Borel map such that \(|G_{l}|\le l\) and
for any \(z\in O.\) Thus,
In light of (C.7) there exists l such that
We combine this, together with (C.11) and use the fact that \(C_{l}^{*}\ge C^{*}\) to conclude that
Since \(\int _{O} C(F_{a})dx\) is finite and \(C^{*}\) is bounded below, (C.12) implies
Observe that \(\bar{C}: \mathbb {H }\rightarrow (-\infty , \infty )\) is a convex function that is bounded below. We apply (C.12) after replacing C by \(\bar{C}\), \(\mathcal {L}^{n+1}\) by \(\eta \) and \(F_{a}\) by \(F_{s}\) to conclude that we can choose l large enough so that there exists a Borel map \(\bar{G}_{l}: O \rightarrow \mathbb {H}\) such that \(|\bar{G}_{l}| \le l\) and
By Lemma C.1 (ii), \((\bar{C})^{*}\) takes the value 0 on \(\bar{D}\) and the value \(\infty \) elsewhere. Since we have assumed that \(\int _{O} \bar{C}(F_{s}) d\eta \) is finite, (C.14) is equivalent to
We combine (C.13) and (C.15) to obtain that \(\mathcal {K}_{1}(F) \le \mathcal {K}_{2}(F)+2 \epsilon \) and then use the fact that \(\epsilon >0\) is arbitrary to conclude that \(\mathcal {K}_{1}(F) \le \mathcal {K} _{2}(F).\)
Assume that \(\mathcal {K}_{1}(F)=\infty \) and so, for instance, \(\int _{O} C(F_{a}) dx=\infty .\) We are to show that for every \(\epsilon >0,\) \(\mathcal {K}_{2}(F) \ge \epsilon ^{-1}.\) In light of (C.7) there exists l such that
We use (C.11) to obtain a bounded Borel map \(G_{l}: O \rightarrow \mathbb {H}\) such that
Observe that \(C_{l}(F_{a}) \in L^{1}(O)\) since \(C_{l}\) is l-Lipschitz and \(F_{a} \in L^{1}(O)\), and so \(C_{l}^{*}(G_{l})=\langle G_{l}; F_{a}\rangle -C_{l}(F_{a}) \in L^{1}(O)\). Since \(C^{*} \le C_{l}^{*}\) and \(C^{*}\) is bounded below, \(C^{*}(G_{l}) \in L^{1}(O)\) and so, \(G_{l}(z) \in D\) \(\mathcal {L}^{n+1} \)—a.e. This proves that \(\mathcal {K}_{2}(F) \ge \epsilon ^{-1} \) and so, \(\mathcal {K}_{2}(F) =\infty \). \(\square \)
Define
Here, \(\mathcal {B}_{3}\) is the set of pairs \((G_{1},G_{2})\) in \(\mathcal {B} _{2}\) such that there exists a compact set \(K\subset O\) such that \(G_{1}\in K \) \(\mathcal {L}^{n+1}\)—a.e. and \(G_{2}\in K\) \(\eta \)—a.e. Define
Here, \(\mathcal {B}_{4}\) is the set of pairs \((G_{1},G_{2})\) in \(\mathcal {B} _{3}\) that are continuous, have compact support, and satisfy \(G_{1}=G_{2}\), with the range of \(G_{1}\) contained in a compact subset of the interior of D.
Proposition C.5
As in Lemma C.4, we suppose that C is convex, (C.1) holds and \(0 \in D\). Then
If we further assume that (C.2) holds then
Proof
Replacing \(C^{*}\) by \(C^{*}-C^{*}(0)\) if necessary, let us assume without loss of generality that \(C^{*}(0)=0.\) It is obvious that
and by Lemma C.4, \(\mathcal {K}_{1}(F)=\mathcal {K}_{2}(F).\) It remains to show the reverse inequalities.
Part 1. To show that \(\mathcal {K}_{2}(F)\le \mathcal {K}_{3}(F)\) it suffices to show that for any \(\epsilon >0\) and \((G_{1},G_{2})\in \mathcal {B}_{2},\)
We can assume that \(C^{*}(G_{1})\in L^{1}(O)\) and \(C^{*}(G_{2})\in L^{1}(\eta )\); otherwise, there is nothing to prove. For each positive integer m, we define
and let \(\chi _{S_{m}}\) be the indicator function of \(S_{m}.\) The dominated convergence theorem allows us to choose m large enough so that
We have
Since \(C^{*}(0)=0\), by convexity
Thus
and so,
Integrating over O and using (C.16), we have
and so, \(C^{*}(\chi _{S_{m}}G_{1})\in L^{1}(O).\) Replace \((F_{a} ,G_{1},\mathcal {L}^{n+1}, C)\) by \((F_{s},G_{2},\eta ,\bar{C})\) in (C.18) to obtain for m large enough,
and so \((\bar{C})^{*}(\chi _{S_{m}}G_{2})\in L^{1}(\eta ).\) As in the proof of Lemma C.4, use the fact that \((\bar{C})^{*}\) takes the value 0 on \(\bar{D}\) and the value \(\infty \) elsewhere to conclude that
Note that \((\bar{C})^{*}(\chi _{S_{m}}G_{2})\in L^{1}(\eta )\) is equivalent to \((\bar{C})^{*}(\chi _{S_{m}}G_{2})=0\) \(\eta \)—a.e., which means that \(\chi _{S_{m}}G_{2}\in \bar{D}\) \(\eta \)—a.e. By (C.18) and (C.19)
Since \((\chi _{S_{m}}G_{1},\chi _{S_{m}}G_{2})\in \mathcal {B}_{3}\), we obtain
We use the fact that \((G_{1},G_{2})\in \mathcal {B}_{2}\) and \(\epsilon >0\) are arbitrary to conclude that \(\mathcal {K}_{2}(F)\le \mathcal {K}_{3}(F),\) and so, \(\mathcal {K}_{2}(F)=\mathcal {K}_{3}(F).\)
Part 2. Further assume that 0 is in the interior of D. To show that \(\mathcal {K}_{3}(F)\le \mathcal {K}_{4}(F),\) it suffices to show that for any \(\epsilon >0\), if \((G_{1},G_{2})\in \mathcal {B}_{3}\) then
Fix such a \((G_{1},G_{2})\) and assume without loss of generality that \(C^{*}(G_{1})\in L^{1}(O)\) and \((\bar{C})^{*}(G_{2})\in L^{1}(\eta ).\) Extend \(G_{1}\) by setting it to be null outside O. Let \(m_{0}\) be such that \(\mathcal {L}^{n+1}\)—a.e, \(G_{1}\) is supported by \(S_{m_{0}}\) and \(\eta \)—a.e, \(G_{2}\) is supported by \(S_{m_{0}}\) and
Consider a standard mollifier \(\varrho \in C_{c}^{\infty }(\mathbb {R}^{n+1})\) which is a probability density supported by the unit ball centered at the origin. Define
By Jensen’s inequality
Similarly, since \(G_{1}\) is supported by \(\bar{D}\) \(\mathcal {L}^{n+1}\)—a.e. and the latter is a convex set, by Jensen’s inequality, the range of \(G_{l}\) is contained in \(\bar{D}.\) By the fact that both \(G_{1}\) and \(C^{*}(G_{1})\) are in \(L^{1}(O)\), standard arguments show that
Thus, for l small enough
This, together with (C.23) implies
By Lusin's theorem, for each positive l there exists a continuous function \(\bar{G}^{l}\) such that
Since \(G_{2}\) is bounded and has its support in O, we may assume without loss of generality that
Furthermore, we may assume without loss of generality that \(\bar{G}^{l}\) is supported by \(S_{2m_{0}}.\) Consider the function \(A \in C(\mathbb {R})\) defined by
The function \(G^{l}_{0}: =\bar{G}^{l} A (\varrho _{\bar{D}} \circ \bar{G}^{l})\) is continuous, supported by \(S_{2m_{0}}\) and its range is contained in \(\bar{D}.\) We have
and so, by (C.26)
By (C.27)
Since \(F_{s} \in L^{1}(\eta )\) there exists \(e>0\) such that if \(S \subset O\) and \(\eta (S) \le e\) then
Thus, if \(l \in (0,e)\), using (C.27) we have
If \(\bar{e} \in (0,1)\) is close enough to 1, setting \(G^{l}:= \bar{e} G^{l}_{0}\) we conclude that
Observe that \(\varrho _{\bar{D}}( G^{l})= \bar{e} \varrho _{\bar{D}}(G^{l}_{0}) \le \bar{e}<1\) and so, by Lemma C.2, \(G^{l}\) belongs to \(\mathrm {int}(D).\) We combine (C.25) and (C.30) to conclude that
Let \(E \subset O\) be a Borel set such that
By Lusin’s theorem, we may find a sequence \(\{\chi _{j}\}_{j} \subset C(\bar{O}, [0,1])\) that converges \((\mathcal {L}^{n+1} +\eta )\)—a.e. to \(\chi _{E}.\) Set
We have that \(g_{j} \in C_{c}(O, \mathbb {H})\) is bounded and so, since the ranges of both \(G_{l}\) and \(G^{l}\) are contained in \(\bar{D}\) we have
By Lemma C.2, \(\{\varrho _{\bar{D}} \le \bar{e} \}\) is a compact set contained in \(\mathrm {int}(D)\), while (C.32) ensures that the range of \(g_{j}\) is contained in \(\{\varrho _{\bar{D}} \le \bar{e} \}\). We use the convexity of \(C^{*}\) to conclude that
We have
and so, by (C.33)
Since \(C^{*}\) is continuous in \(\mathrm {int}(D)\) and \(G^{l} \in C_{c}(O, \mathbb {H})\) has its range contained in \(\mathrm {int}(D)\), we conclude that \(C^{*}( G^{l}) \in C(\bar{O})\). In fact, since we have assumed that \(C^{*}(0)=0\), \(C^{*}( G^{l}) \in C_{c}(O)\). What matters is the conclusion that \(C^{*}( G^{l}) \in L^{1}(O)\) and so,
We apply the dominated convergence theorem to conclude that since \(\eta (O {\setminus } E)=0\) then
Similarly, since \(\mathcal {L}^{n+1}(E)=0\) then
Finally, since \(\mathcal {L}^{n+1}(E)=0\) then
We combine (C.34–C.37) to conclude that for j large enough
This, together with (C.31) and the fact that \((g_{j}, g_{j}) \in \mathcal {B}_{4}\) yields that
Since \((G_{1}, G_{2})\) is an arbitrary element of \(\mathcal {B}_{3}\) and \(\epsilon >0 \) is arbitrary, we conclude that \(\mathcal {K}_{3}(F) \le \mathcal {K}_{4}(F)\) and so, \(\mathcal {K}_{3}(F) = \mathcal {K}_{4}(F)\). \(\square \)
Appendix D. Approximation of k-currents by smooth currents
Throughout this section, we assume that \(\Omega \subset \mathbb {R} ^{n}\) is an open bounded convex set. We assume without loss of generality that \(\Omega \) contains the origin and denote by \(\varrho _{\bar{\Omega }}\) the Minkowski function of \(\Omega \) (cf. (C.3)). Recall that (cf. Lemma C.2)
For \(0<l<1\), we use the notation
and
We define \(T_{l}, S_{l} : \mathbb {R}^{n+1} \rightarrow \mathbb {R}^{n+1}\) by
Fix \(h \in BV_{*}^{r}(0,1;\Omega )\) (cf. Definition 4.8) and define \(h_{l} \in BV_{*}^{r}(I_{l};\Omega _{l})\) by
For instance if \(H \in C(\bar{O})\) and \(h(z)= H(z) e^{1} \wedge \cdots \wedge e^{k}\) then \(h_{l}(w)= H_{l}(w) e^{1} \wedge \cdots \wedge e^{k}\) where
Remark D.1
By Lemma 4.6 (iii), \(t \rightarrow h(t, \cdot ) \in \mathcal {M}(\Omega , \Lambda ^k)\) is of bounded variation and so it is continuous except possibly at countably many t. Furthermore, by (ii) of the same lemma, we may tacitly choose an appropriate representative such that \(t \rightarrow h(t, \cdot ) \in \mathcal {M}(\Omega , \Lambda ^k)\) is right continuous at any \(t\in [0,1).\) Part (iii) of the lemma ensures left continuity at 1 and so, \(h(t, \cdot )\) is well-defined for every \(t \in [0,1]\). Since \(|\partial _s h|\) is a finite measure, \(||h(t, \cdot )||\) is bounded by a constant independent of t.
Remark D.2
For any \(\phi \in C(\bar{O}_{l}; \Lambda ^{k})\), the following hold.
(i)
$$\begin{aligned} \int _{O_{l}} \left\langle \partial _{\tau }h_{l}(dw); \phi (w) \right\rangle ={\frac{1 }{1+l}} \int _{O} \left\langle \partial _{s} h(dz); \phi (S_{l} z)\right\rangle . \end{aligned}$$
(ii)
$$\begin{aligned} \int _{O_{l}} \left\langle \delta h_{l}(w); \phi (w)\right\rangle dw ={\frac{1 }{1+l}} \int _{\Omega } \left\langle \delta h(z); \phi (S_{l} z) \right\rangle dz. \end{aligned}$$
(iii)
For any \(\psi \in C(\bar{\Omega }_{l}; \Lambda ^{k})\)
$$\begin{aligned} \int _{\Omega _{l}} \left\langle h_{l}(\tau , dy); \psi (y)\right\rangle ={\frac{1}{1+l}} \int _{\Omega }\left\langle h\left( {\frac{\tau +{\frac{l}{2}} }{1+l}},dx \right) ; \psi \left( (1+l) x \right) \right\rangle , \quad \forall \; \tau \in I_{l}. \end{aligned}$$
(iv)
Since \(|\partial _{s} h|\left( (0,1) \times \Omega \right) <\infty \), the set of \(t \in (0,1)\) such that \(|\partial _{s} h|\left( \{t\} \times \Omega \right) >0\) is at most countable. This implies that the set \(\mathcal {T}\) of \(l \in (0,1)\) such that
$$\begin{aligned} |\partial _{s} h|\left( \left\{ {\frac{1 +{\frac{l}{2}} }{1+l}} \right\} \times \Omega \right) >0 \end{aligned}$$is at most countable.
Proof
Thanks to (D.2), the proof of (i) and (ii) is direct. To prove (iii), we need to show that for any \(\beta \in C_{c}(I_{l})\) and \(\psi \in C_{c}(\bar{\Omega }_{l}; \Lambda ^{k})\) we have
We first use the change of variables \(s(1+l)-l/2=\tau \) in the second integral and then use (D.2) with \(\phi (\tau ,y)=\beta (\tau ) \psi (y)\) to conclude. \(\square \)
Lemma D.3
Suppose that C is a convex function on \(\mathbb {H}:=\Lambda ^{k} \times \Lambda ^{k-1}\) such that (C.1) and (C.2) hold. Let \(F_{l} :=(\partial _{\tau }h_{l}, \delta h_{l})\) and \(F=(\partial _{s} h, \delta h),\) and let \(\mathcal {K}_{1}\) and \(\mathcal {K}_{4}\) be as in Appendix C. We have \((1+l) \mathcal {K}_{1}(F_{l}) \le \mathcal {K}_{1}(F).\)
Proof
Replacing C by \(C-C(0)\) if necessary, we assume without loss of generality that \(C(0)=0,\) which yields \(C^{*}\ge 0\). By Proposition C.5, \(\mathcal {K}_{1}(F_{l})=\mathcal {K}_{4}(F_{l})\). Hence, for any \(\epsilon >0\), there exist a compact set S contained in the interior of D and
such that the range \((g, g_{*})(O_{l})\) is contained in S and
We use the change of variables provided by Remark D.2 to infer
Using that \(C^{*}\ge 0\), we obtain
Thus,
which, together with Proposition C.5, proves the Lemma. \(\square \)
Lemma D.4
Suppose that C is a convex function on \(\mathbb {H}:=\Lambda ^{k} \times \Lambda ^{k-1}\) such that (C.1) and (C.2) hold. Suppose C achieves its minimum at 0. Assume \(f_{0}, f_{1} \in C_{0}(\bar{\Omega })\) in the sense that their restriction to the boundary is the null function. Then,
(i)
$$\begin{aligned} \lim _{l \rightarrow 0^{+} }\int _{\Omega } \langle h_{l}(1, x); f_{1}(x) \rangle dx=\int _{\Omega } \langle h(1, x); f_{1}(x) \rangle dx \end{aligned}$$
(ii)
$$\begin{aligned} \lim _{l \rightarrow 0^{+} }\int _{\Omega } \langle h_{l}(0, x); f_{0}(x) \rangle dx=\int _{\Omega } \langle h(0, x); f_{0}(x) \rangle dx \end{aligned}$$
Proof
By Lemma 4.6 (ii) and Remark D.2 (iii)
where
and
Since \(|h|(1, \cdot )\) is a finite measure, the Lebesgue dominated convergence theorem ensures that
By Lemma 4.6 (ii)
Hence,
This with (D.3) and (D.4) proves (i). The proof of (ii) is obtained in a similar way. \(\square \)
Let \(\varrho _{1} \in C^{\infty }_{c}(\mathbb {R})\) and \(\varrho _{n} \in C_{c}^{\infty }(\mathbb {R}^{n})\) be nonnegative symmetric probability density functions. Suppose \(\varrho _{n}\) is positive on the open ball of radius 1 and null outside the closed ball of radius 1 and \(\varrho _{1}\) satisfies the analogous condition. We set
For
we define
Similarly, for
we define
In the remainder of this section, we fix \(f_{0}, f_{1} \in C(\bar{\Omega }; \Lambda ^{k}).\)
Since \(h_{l} \in BV_{*}^{r}(I_{l};\Omega _{l})\), for \(\epsilon \in (0,l/2)\) we can define \(h^{\epsilon }_{l} \in BV_{*}^{r}(0,1;\Omega )\) by
For instance, if
then
Thus, \(h^{\epsilon }_{l} \in C^{\infty }(\bar{O}; \Lambda ^{k}).\)
Remark D.5
Let \(\phi \in C(\bar{O}; \Lambda ^{k})\) and \(\psi \in C(\bar{\Omega }; \Lambda ^{k}).\) Then the following hold.
(i)
$$\begin{aligned} \int _{O} \langle \partial _{s} h^{\epsilon }_{l}(dz); \phi (z)\rangle = \int _{O_{\epsilon }} \langle \partial _{s} h_{l}(dw); \varrho ^{\epsilon }*\phi (w) \rangle . \end{aligned}$$
(ii)
$$\begin{aligned} \int _{O} \langle \delta h^{\epsilon }_{l}(z); \phi (z)\rangle dz = \int _{O_{\epsilon }} \langle \delta h_{l}(w); \varrho ^{\epsilon }*\phi (w) \rangle dw. \end{aligned}$$
Proof
The proof of the remark is straightforward. \(\square \)
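Indeed, the identities in Remark D.5 rest on the self-adjointness of convolution with the symmetric kernel \(\varrho ^{\epsilon }\): pairing the mollified object against \(\phi \) equals pairing the original object against \(\varrho ^{\epsilon }*\phi \). A one-dimensional periodic discretization (our illustration only, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.linspace(-1.0, 1.0, n, endpoint=False)
rho = np.maximum(0.0, 1.0 - (x / 0.1) ** 2)  # symmetric bump on [-0.1, 0.1]
rho /= rho.sum()                             # normalize to a probability kernel

def mollify(f):
    # periodic convolution with the symmetric kernel rho (via FFT)
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(np.fft.ifftshift(rho))))

h = rng.standard_normal(n)    # stands in for the measure being mollified
phi = rng.standard_normal(n)  # stands in for the test form phi

# <rho * h; phi> = <h; rho * phi> because rho is symmetric
lhs = np.dot(mollify(h), phi)
rhs = np.dot(h, mollify(phi))
assert abs(lhs - rhs) < 1e-10
```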
Lemma D.6
Suppose that C is a convex function on \(\mathbb {H}:=\Lambda ^{k} \times \Lambda ^{k-1}\) such that (C.1) and (C.2) hold. Suppose C achieves its minimum at 0. Let \(F^{\epsilon }_{l}:=(\partial _{s} h^{\epsilon }_{l}, \delta h_{l}^{\epsilon })\) and \(F_{l}=(\partial _{s} h_{l}, \delta h_{l}),\) and let \(\mathcal {K}_{1}\) and \(\mathcal {K}_{4}\) be as in Appendix C. We have \(\mathcal {K}_{1}(F^{\epsilon }_{l}) \le \mathcal {K}_{1}(F_{l}).\)
Proof
Replacing C by \(C-C(0)\) if necessary, we assume without loss of generality that \(0=C(0) \le C,\) which yields \(C^{*}\ge 0=C^{*}(0)\). By Proposition C.5, \(\mathcal {K}_{1}(F^{\epsilon }_{l} )=\mathcal {K}_{4}(F^{\epsilon }_{l})\). Hence, for any \(\bar{\epsilon }>0\), there exist a compact set S contained in the interior of D and
such that the range \((g, g_{*})(O)\) is contained in S and
We use the change of variables provided by Remark D.5 to infer
By Jensen’s inequality
Since \(C^{*}\ge 0=C^{*}(0)\) and \((g, g_{*})\) is supported by O, we obtain
which together with (D.6) yields
Thus, \(\mathcal {K}_{1}(F^{\epsilon }_{l})\le \bar{\epsilon }+ \mathcal {K} _{4}(F_{l}).\) Since \(\bar{\epsilon }>0\) is arbitrary, we use Proposition C.5, to conclude the proof. \(\square \)
Lemma D.7
Suppose that C is a convex function on \(\mathbb {H}:=\Lambda ^{k} \times \Lambda ^{k-1}\) such that (C.1) and (C.2) hold. Suppose C achieves its minimum at 0. Assume \(f_{0}, f_{1} \in C_{0}(\bar{\Omega })\) in the sense that their restriction to the boundary is the null function. Then, for almost every \(l \in (0,1)\)
(i)
$$\begin{aligned} \lim _{\epsilon \rightarrow 0^{+} }\int _{\Omega } \langle h^{\epsilon }_{l}(1, x); f_{1}(x) \rangle dx=\int _{\Omega } \langle h_{l}(1, x); f_{1}(x) \rangle dx \end{aligned}$$
(ii)
$$\begin{aligned} \lim _{\epsilon \rightarrow 0^{+} }\int _{\Omega } \langle h^{\epsilon }_{l}(0, x); f_{0}(x) \rangle dx=\int _{\Omega } \langle h_{l}(0, x); f_{0}(x) \rangle dx \end{aligned}$$
Proof
We shall only show (i), as the proof of (ii) follows the same line of argument. We have
where,
Part 1. Since \(f_{1}\) vanishes on \(\partial \Omega \), we can extend it by setting its value to be 0 outside \(\bar{\Omega }\), and obtain a function in \(C_{c}(\mathbb {R}^{n}).\) Consequently, \((f^{\epsilon }_{1})_{\epsilon }\) converges uniformly to \(f_{1}\) on \(\mathbb {R}^{n}.\) Since \((\chi _{\Omega _{\epsilon }})_{\epsilon }\) converges pointwise to \(\chi _{\bar{\Omega }}\), \((f^{\epsilon }_{1} \chi _{\Omega _{\epsilon }})_{\epsilon }\) converges pointwise to \(f_{1}\chi _{\bar{\Omega }}= f_{1}\chi _{\Omega }.\) Thus,
Part 2. We use Remark D.2 (iii) to obtain
Since
vanishes outside \([1-\epsilon , 1+\epsilon ],\) we conclude that
with
We have
We combine (D.9) and (D.10) to conclude that for any \(l \in (0,1) {\setminus }\mathcal {T}\) (cf. Remark D.2 (iv) )
We combine (D.8) and (D.11) to conclude the proof of (i). \(\square \)
Appendix E. Closed differential 2-forms and electromagnetism
The search for optimal k-forms can be put in the context of electromagnetism. Indeed, consider a contractible open bounded convex set \(O \subset \mathbb {R}^{3}\), denote by \(\mathbf {n}\) the unit outward vector to \(\partial O\) and suppose
Define \(\mathcal {S}\) to be the set of pairs of magnetic/electric vector fields
which are integrable and satisfy (in the weak sense) Gauss’s law for magnetism
and the Maxwell–Faraday induction equations
The well-known correspondence between \(\mathcal {S}\) and the set of closed differential 2-forms on \(\Omega \) is given by the isometry M which associates to (B, E) the differential 2-form M(B, E) defined by
One identifies the differential form M(B, E) with the skew-symmetric matrix
If we set \(f=M(B,E),\) direct computations reveal that
and so, f is closed if and only if both (E.1) and (E.2) hold. Furthermore,
and so, f is symplectic if and only if \((E\cdot B)^{2}>0.\)
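The equivalence between \(df=0\) and (E.1)–(E.2) can be checked componentwise. The symbolic computation below uses one standard sign convention for M(B, E) (an assumption on our part, since conventions vary) and reads (E.1) as \(\mathrm{div}\, B=0\) and (E.2) as \(\partial _{t}B+\mathrm{curl}\, E=0\):

```python
import sympy as sp

t, x1, x2, x3 = sp.symbols('t x1 x2 x3')
X = (t, x1, x2, x3)
B = [sp.Function(f'B{i}')(*X) for i in (1, 2, 3)]
E = [sp.Function(f'E{i}')(*X) for i in (1, 2, 3)]

# components f_{mu nu} of f = M(B, E) under the (assumed) convention
# f = B1 dx2^dx3 + B2 dx3^dx1 + B3 dx1^dx2 + E_i dx^i ^ dx^0
F = sp.zeros(4, 4)
F[2, 3], F[3, 1], F[1, 2] = B[0], B[1], B[2]
F[1, 0], F[2, 0], F[3, 0] = E[0], E[1], E[2]
F = F - F.T  # skew-symmetric

def dF(a, b, c):
    # component (df)_{abc} = d_a f_{bc} + d_b f_{ca} + d_c f_{ab}
    return (sp.diff(F[b, c], X[a]) + sp.diff(F[c, a], X[b])
            + sp.diff(F[a, b], X[c]))

divB = sum(sp.diff(B[i], X[i + 1]) for i in range(3))
curlE = [sp.diff(E[2], x2) - sp.diff(E[1], x3),
         sp.diff(E[0], x3) - sp.diff(E[2], x1),
         sp.diff(E[1], x1) - sp.diff(E[0], x2)]

# the purely spatial component of df is Gauss's law (E.1) ...
assert sp.simplify(dF(1, 2, 3) - divB) == 0
# ... and the mixed components are the Maxwell-Faraday equations (E.2)
assert sp.simplify(dF(0, 2, 3) - (sp.diff(B[0], t) + curlE[0])) == 0
assert sp.simplify(dF(0, 3, 1) - (sp.diff(B[1], t) + curlE[1])) == 0
assert sp.simplify(dF(0, 1, 2) - (sp.diff(B[2], t) + curlE[2])) == 0
```

So df vanishes exactly when div B = 0 and \(\partial _{t}B+\mathrm{curl}\, E=0\), which is the claimed equivalence, up to the sign convention chosen here.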
Let \(w=(w^{0},w^{1},w^{2},w^{3})=(w^{0},{\mathbf {w}}):(0,T)\times O\rightarrow \mathbb {R}^{4}\) be a vector field. Let \(A\in \Lambda ^{1}\left( \mathbb {R}^{4}\right) \) be written as
When we write \(A=w\lrcorner \,f\) we mean that
where
Therefore in terms of (B, E, w) the system of equations \(\partial _{s}f+dA=0\) is equivalent to
This means
Using the identity
we equivalently write
Therefore, considering actions of the form
under the conditions that
amounts to considering actions of the form
under the conditions that (E.1), (E.2) and (E.9) hold. The boundary condition \(A\wedge \nu =0\) is equivalent to
Dacorogna, B., Gangbo, W. Transportation of closed differential forms with non-homogeneous convex costs. Calc. Var. 57, 108 (2018). https://doi.org/10.1007/s00526-018-1376-0