1 Introduction

We are interested in the numerical resolution of the nonlinear elliptic Monge-Ampère equation

$$\begin{aligned} \det D^2 u&= f \quad \text {in } \varOmega \nonumber \\ u&= 0\quad \text {on } \partial \varOmega , \end{aligned}$$
(1)

where \(D^2 v\) denotes the Hessian of a smooth function \(v\), i.e., \(D^2 v\) is the matrix with \((i,j)\)th entry \(\partial ^2 v / (\partial x_i \partial x_j)\). Here \(\varOmega \) is a smooth uniformly convex bounded domain of \(\mathbb {R}^2\) which is at least \(C^{1,1}\), and \(f \in C(\overline{\varOmega })\) with \(f \ge c_0 >0\) for a constant \(c_0\). If \(f \in C^{0,\alpha }, 0<\alpha <1\), then (1) has a classical convex solution in \(C^2(\varOmega ) \cap C(\overline{\varOmega })\), and its numerical resolution assuming more regularity on \(u\) is well understood, e.g., [6, 7, 11]. In the nonsmooth case, various approaches have been proposed, e.g., [16, 17]. For various reasons, it is desirable to use standard discretization techniques which are valid for both the smooth and the nonsmooth cases. We propose to solve (1) numerically by the discrete version of the sequence of iterates

$$\begin{aligned} (\mathrm{cof }(D^2 u_{\varepsilon }^k + \varepsilon I) ): D^2 u_{\varepsilon }^{k+1}&= \det D^2 u_{\varepsilon }^{k} + f, \quad \quad \quad \text {in }\varOmega \nonumber \\ u_{\varepsilon }^{k+1}&= 0, \quad \qquad \qquad \qquad \text {on }\partial \varOmega , \end{aligned}$$
(2)

where \(\varepsilon >0\), \(I\) is the \(2\times 2\) identity matrix and we use the notation \(\mathrm{cof }A\) to denote the matrix of cofactors of \(A\), i.e., for all \(i,j\), \((-1)^{i+j} (\mathrm{cof }A)_{ij}\) is the determinant of the matrix obtained from \(A\) by deleting its \(i\)th row and its \(j\)th column. For two \(n \times n\) matrices \(A, B\), we recall the Frobenius inner product \(A:B = \sum _{i,j=1}^n A_{ij} B_{ij}\), where \(A_{ij}\) and \(B_{ij} \) refer to the entries of the corresponding matrices.
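These definitions can be checked on a small example. The following sketch (assuming NumPy; the helpers `cof` and `frob` are our own names) computes the cofactor matrix and the Frobenius inner product for a \(2\times 2\) matrix, and confirms the identity \(2\det A = (\mathrm{cof}\,A):A\) used in Lemma 1 below.

```python
import numpy as np

def cof(A):
    """Cofactor matrix of a 2x2 matrix: cof [[a,b],[c,d]] = [[d,-c],[-b,a]]."""
    return np.array([[A[1, 1], -A[1, 0]],
                     [-A[0, 1], A[0, 0]]])

def frob(A, B):
    """Frobenius inner product A:B = sum_{i,j} A_ij B_ij."""
    return float(np.sum(A * B))

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(frob(cof(A), A))       # 10.0, which equals 2 det A
print(2 * np.linalg.det(A))  # 10.0 (up to rounding)
```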

Our recent results [1] indicate that an appropriate space to study a natural variational formulation of (1) is a finite dimensional space of piecewise smooth \(C^1\) functions. For the numerical experiments, we will let \(V_h\) be a finite dimensional space of piecewise smooth \(C^1\) functions constructed with the isogeometric analysis paradigm. Numerical results indicate that the proposed iterative regularization (2) is effective for nonsmooth solutions. Formally, the sequence defined by (2) converges to a limit \(u_{\varepsilon }\), and \(u_{\varepsilon }\) converges uniformly on compact subsets of \(\varOmega \) to the solution \(u\) of (1) as \(\varepsilon \rightarrow 0\).

For \(\varepsilon =0\), (2) gives the sequence of Newton’s method iterates applied to (1). Surprisingly, for the two-dimensional problem, the formal limit \(u_{\varepsilon }\) of the sequence \(u_{\varepsilon }^{k+1}\) solves the vanishing viscosity approximation of (1)

$$\begin{aligned} \varepsilon \varDelta u_{\varepsilon } + \det D^2 u_{\varepsilon } -f&=0 \quad \text {in }\varOmega \nonumber \\ u_{\varepsilon }^{}&= 0 \quad \text {on } \partial \varOmega . \end{aligned}$$
(3)

However, discrete versions of Newton’s method applied to (3) do not in general perform well for nonsmooth solutions. This led to the development of alternative methods, e.g., the vanishing moment methodology [11]. The key feature in (2) is that the perturbation \(\varepsilon I\) is included to prevent the matrix \(D^2 u_{\varepsilon }^k + \varepsilon I\) from being singular.

The difficulty of constructing piecewise polynomial \(C^1\) functions is often cited as a motivation to seek alternative approaches to \(C^1\) conforming approximations of the Monge-Ampère equation. In [1] Lagrange multipliers are used to enforce the \(C^1\) continuity, but the extent to which this constraint is enforced in the computations is only comparable to the accuracy of the discretization. With the isogeometric method, the basis functions are \(C^1\) at the computational level. Another advantage of the isogeometric method is the exact representation of a wide range of geometries, which we believe would prove useful in applications of the Monge-Ampère equation to geometric optics. Finally, the isogeometric method is widely reported to have better convergence properties than the standard finite element method.

The main difficulty of the numerical resolution of (1) is that Newton’s method fails to capture the correct numerical solution when the solution of (1) is not smooth. We proposed in [1] to use a time marching method for solving the discrete equations resulting from a discretization of (1). Moreover in [3] we argued that the correct solution is approximated if one first regularizes the data. However, numerical experiments reported in [1] and in this paper indicate that regularization of the data may not be necessary.

It is known that the convex solution \(u\) of (1) is the unique minimizer of a certain functional \(J\) in a set of convex functions \(S\). It is reasonable to expect, although not easy to make rigorous, that the set \(S\) can be approximated by a set of smooth convex functions \(S_m\) and that minimizers of \(J\) in \(S_m\) would approximate the minimizer of \(J\) in \(S\). We prove that the functional \(J\) has a unique minimizer in a ball of \(C^1\) functions centered at a natural interpolant of a smooth solution \(u\). With a sufficiently close initial guess, a minimization algorithm can be used for the computation of the numerical solution. The difficulty of choosing a suitable initial guess may be circumvented by using a global minimization strategy as in [14]. Nevertheless, our result can be considered a first step toward clarifying whether regularization of the data is necessary for a proven convergence theory of \(C^1\) approximations of (1) in the nonsmooth case.

In this paper the numerical solution \(u_h\) is computed as the limit of the sequence \(u_{\varepsilon ,h}^k\) which solves the discrete variational problem associated with (2). For the case of smooth solutions we use \(\varepsilon =0\) in the resulting discrete problem; see Remark 2. Since (1) is not approximated directly, there is a loss of accuracy. Nevertheless, our algorithm can be considered a step toward the development of fast iterative methods capable of retrieving the correct numerical approximation to (1) in the context of \(C^1\) conforming approximations. Let \(u_{\varepsilon ,h}\) denote the solution of the discrete problem associated with (3). The existence of \(u_{\varepsilon ,h}\) and \(u_{\varepsilon ,h}^k\), the convergence of the sequence \((u_{\varepsilon ,h}^k)_k\) as \(k \rightarrow \infty \), as well as the behavior of \(u_{\varepsilon ,h}\) as \(\varepsilon \rightarrow 0\), will be addressed in a subsequent paper. These results parallel our recent proof of the convergence of the discrete vanishing moment methodology [2].

This paper falls in the category of papers which do not prove convergence of the discretization of (1) to weak solutions, but give numerical evidence of convergence as well as results in the smooth case and/or in particular cases, e.g., [10, 12, 13]. We organize the paper as follows: in the next section we describe the notation used and some preliminaries. In Sect. 3 we prove minimization results at the discrete level. We also derive in Sect. 3 the vanishing viscosity approximation (3) from (2), as well as the discrete variational formulation used in the numerical experiments. In Sect. 4 we recall the isogeometric concept, and we give numerical results in Sect. 5.

2 Notation and Preliminaries

We denote by \(C^k(\varOmega )\) the set of all functions having all derivatives of order \(\le k\) continuous on \(\varOmega \), where \(k\) is a nonnegative integer or infinity, and by \(C^0(\overline{\varOmega })\) the set of all functions continuous on \(\overline{\varOmega }\). A function \(u\) is said to be uniformly Hölder continuous with exponent \(\alpha , 0 <\alpha \le 1\), in \(\varOmega \) if the quantity

$$ \text {sup}_{x \ne y} \frac{|u(x)-u(y)|}{|x-y|^{\alpha }} $$

is finite. The space \(C^{k,\alpha }(\varOmega )\) consists of functions whose \(k\)th order derivatives are uniformly Hölder continuous with exponent \(\alpha \) in \(\varOmega \).

We use the standard notation of Sobolev spaces \(W^{k,p}(\varOmega )\) with norm \(||\cdot||_{k,p}\) and semi-norm \(|\cdot|_{k,p}\). In particular, \(H^k(\varOmega )=W^{k,2}(\varOmega )\), and in this case the norm and semi-norm will be denoted by \(||\cdot||_{k}\) and \(|\cdot|_{k}\), respectively. For a function \(u\), we denote by \(D u\) its gradient vector and recall that \(D^2 u\) denotes its Hessian. For a matrix field \(A\), we denote by \(\mathrm{div }A\) the vector obtained by taking the divergence of each row.

Using the product rule one obtains for sufficiently smooth vector fields \(v\) and matrix fields \(A\)

$$\begin{aligned} \mathrm{div }(A v) = (\mathrm{div }A^T) \cdot v + A: (D v)^T. \end{aligned}$$
(4)

Moreover, by [8, p. 440]

$$\begin{aligned} \mathrm{div }\mathrm{cof }D^2 v =0. \end{aligned}$$
(5)
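Both identities can be verified symbolically on sample fields. A minimal SymPy sketch (the helper names are ours) checks (4) for a sample matrix field \(A\) and vector field \(v\), and (5) for a sample smooth function \(u\):

```python
import sympy as sp

x, y = sp.symbols('x y')

def hess(u):
    return sp.Matrix([[sp.diff(u, x, 2), sp.diff(u, x, y)],
                      [sp.diff(u, x, y), sp.diff(u, y, 2)]])

def cof(A):  # cofactor matrix of a 2x2 matrix
    return sp.Matrix([[A[1, 1], -A[1, 0]], [-A[0, 1], A[0, 0]]])

def div_vec(V):  # divergence of a vector field
    return sp.diff(V[0], x) + sp.diff(V[1], y)

def div_mat(A):  # row-wise divergence of a matrix field
    return sp.Matrix([div_vec(A.row(0).T), div_vec(A.row(1).T)])

# (4): div(A v) = (div A^T) . v + A : (D v)^T, for sample fields
A = sp.Matrix([[x**2, x*y], [sp.sin(y), x + y]])
v = sp.Matrix([x*y, y**2])
Dv = sp.Matrix([[sp.diff(v[0], x), sp.diff(v[0], y)],
                [sp.diff(v[1], x), sp.diff(v[1], y)]])
lhs = div_vec(A * v)
rhs = (div_mat(A.T).T * v)[0] + sum(A[i, j] * Dv.T[i, j]
                                    for i in range(2) for j in range(2))
print(sp.simplify(lhs - rhs))   # 0

# (5): div cof D^2 u = 0 for a sample smooth u
u = x**3 * y + sp.exp(x) * sp.cos(y)
print(div_mat(cof(hess(u))))    # Matrix([[0], [0]])
```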

For computation with determinants, the following results are needed.

Lemma 1

We have

$$\begin{aligned} \det D^2 v = \frac{1}{2} (\mathrm{cof }D^2 v): D^2 v = \frac{1}{2} \mathrm{div }\big ((\mathrm{cof }D^2 v) D v \big ), \end{aligned}$$
(6)

and for \(F(v) = \det D^2 v \) we have

$$ F'(v) (w) = (\mathrm{cof }D^2 v): D^2 w = \mathrm{div }\big ((\mathrm{cof }D^2 v) D w \big ), $$

for \(v,w\) sufficiently smooth.

Proof

For a \(2 \times 2\) matrix \(A\), one easily verifies that \(2 \det A = (\mathrm{cof }A): A\). It follows that \(\det D^2 v = \frac{1}{2} (\mathrm{cof }D^2 v): D^2 v\). Using (4) and (5) we obtain \((\mathrm{cof }D^2 v): D^2 v = \mathrm{div }\big ((\mathrm{cof }D^2 v) D v \big )\) and \((\mathrm{cof }D^2 v): D^2 w = \mathrm{div }\big ((\mathrm{cof }D^2 v) D w \big )\). Finally, the expression of the Fréchet derivative is obtained from the definition of the Fréchet derivative and the expression \(\det D^2 v = \frac{1}{2} (\mathrm{cof }D^2 v): D^2 v\). \(\square \)

Lemma 2

Let \(v,w \in W^{2,\infty }(\varOmega )\) and \(\psi \in H^2(\varOmega ) \cap H_0^1(\varOmega )\), then

$$\begin{aligned} \bigg | \int _{\varOmega } (\det D^2 v - \det D^2 w) \psi \,dx \bigg | \le C (|v|_{2,\infty }+|w|_{2,\infty })^{} |v-w|_1 |\psi |_1. \end{aligned}$$
(7)

The above lemma is a simple consequence of the mean value theorem and the Cauchy-Schwarz inequality. For additional details, we refer to [1].

We require our approximation spaces \(V_h\) to satisfy the following properties: There exists an interpolation operator \(Q_h\) mapping \(W^{l+1,p}(\varOmega )\) into the space \(V_h\) for \(1 \le p \le \infty , 0 \le l \le d\) with \(d\) a constant that depends on \(V_h\) and such that

$$\begin{aligned} || v -Q_h v ||_{k,p} \le C_{ap} h^{l+1-k} ||v||_{l+1,p}, \end{aligned}$$
(8)

for \(0 \le k \le l\) and

$$\begin{aligned} ||v||_{s,p} \le C_{inv} h^{l-s+\text {min}(0,\frac{n}{p}-\frac{n}{q})} ||v||_{l,q},~~~ \forall v \in V_h, \end{aligned}$$
(9)

for \(0 \le l \le s, 1 \le p,q\le \infty \).
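The approximation property (8) can be observed numerically. As a minimal illustration (piecewise linear interpolation, i.e., \(l=1\), \(k=0\), \(p=\infty\); not the \(C^1\) spaces used later), halving \(h\) should reduce the maximum-norm error by a factor of about \(4\):

```python
import numpy as np

def interp_error(n):
    """Max-norm error of piecewise linear interpolation of sin on [0, pi]
    with n subintervals (h = pi / n)."""
    xc = np.linspace(0.0, np.pi, n + 1)   # interpolation nodes
    xf = np.linspace(0.0, np.pi, 10001)   # fine evaluation grid
    return np.max(np.abs(np.sin(xf) - np.interp(xf, xc, np.sin(xc))))

e1, e2 = interp_error(10), interp_error(20)
print(e1 / e2)   # close to 4, consistent with an O(h^2) error
```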

The discussion in [1] is for a space \(V_h\) of piecewise polynomials. However, the results quoted here are valid for spaces of piecewise smooth \(C^1\) functions.

We consider the following discretization of (1): find \(u_h \in V_h \cap H_0^1(\varOmega )\) such that

$$\begin{aligned} \int _{\varOmega } (\det D^2 u_h) v \,dx = \int _{\varOmega } f v \,dx, \,~~~ \forall v \in V_h \cap H_0^1(\varOmega ). \end{aligned}$$
(10)

It can be shown that for \(u_h \in H^2(\varOmega )\), the left hand side of the above equation is well defined [1]. We recall from [1] that under the assumption that \(u \in C^4(\overline{\varOmega })\) is a strictly convex function, there exists \(\delta >0\) such that if we define

$$X_h= \left\{ v_h \in V_h, v_h=0\quad \text {on } \partial \varOmega , ||v_h-Q_h u||_1 < \frac{\delta h^2}{4} \right\} ,$$

then for \(h\) sufficiently small and for \(v_h \in V_h\) with \(||v_h- Q_h u||_1 < \delta h^{2}/2\), \(v_h\) is convex with the eigenvalues of \(D^2 v_h\) bounded a.e. below by \(m'/2\) and above by \(3M'/2\). Here \(m'\) and \(M'\) are, respectively, lower and upper bounds of the smallest and largest eigenvalues of \(D^2 u\) in \(\varOmega \). The idea of the proof is to use the continuity of the eigenvalues of a matrix as a function of its entries. Thus, using (8) with \(k=2, p=\infty \) and \(l=d\), one obtains that \(D^2 Q_h u(x)\) is positive definite on each element for \(h\) sufficiently small. The same argument shows that a \(C^1\) function whose Hessian is close to \(D^2 Q_h u\) is also piecewise convex, and hence convex due to the \(C^1\) continuity. The power of \(h\) which appears in the definition of \(X_h\) arises from the use of the inverse estimate (9).

We note that by an inverse estimate, for \(v_h \in X_h\),

$$ ||v_h - Q_h u||_{2,\infty } \le C_{inv} h^{-2} ||v_h - Q_h u||_1 \le C_{inv} \delta . $$

3 Minimization Results

We first note

Lemma 3

Let \(v_n, v, w_n\) and \(w \in W^{2,\infty }(\varOmega ) \cap H_0^1(\varOmega )\) be such that \(||v_n-v||_{2,\infty } \rightarrow 0\) and \(||w_n-w||_{2,\infty } \rightarrow 0\). Then

$$\begin{aligned} \int _{\varOmega } (\det D^2 v_n) w_n \,dx&\rightarrow \int _{\varOmega } (\det D^2 v) w \,dx \end{aligned}$$
(11)
$$\begin{aligned} \int _{\varOmega } f v_n \,dx&\rightarrow \int _{\varOmega } f v \,dx. \end{aligned}$$
(12)

Proof

Put \(\alpha = \int _{\varOmega } (\det D^2 v_n) w_n \,dx - \int _{\varOmega } (\det D^2 v) w \,dx \). We have

$$\begin{aligned} \alpha = \int _{\varOmega } (\det D^2 v_n - \det D^2 v) w_n \,dx + \int _{\varOmega } (\det D^2 v) (w_n-w) \,dx. \end{aligned}$$

Using (7) we obtain

$$\begin{aligned} |\alpha | \le C (|v_n|_{2,\infty } + |v|_{2,\infty }) |v_n-v|_1 |w_n|_1 + C |v|_{2,\infty } |v|_1 |w_n-w|_1. \end{aligned}$$

Since \(|v_n-v|_1 \le C ||v_n-v||_{2,\infty }\) and convergent sequences are bounded, (11) follows. We have

$$ \bigg | \int _{\varOmega } f (v_n-v) \,dx\bigg | \le ||f||_0 ||v_n-v||_0, $$

and so (12) holds. \(\square \)

We consider the functional \(J\) defined by

$$ J(v) = - \int _{\varOmega } v \det D^2 v \,dx + 3 \int _{\varOmega } f v \,dx. $$

We have

Lemma 4

For \(v,w \in W^{2,\infty }(\varOmega ) \cap H_0^1(\varOmega )\)

$$ J'(v)(w) = 3 \int _{\varOmega } (f - \det D^2 v) w \,dx. $$

Proof

Note that for \(v,w\) smooth, vanishing on \(\partial \varOmega \) and by Lemma 1

$$\begin{aligned} J'(v) (w)&= 3 \int _{\varOmega } f w \,dx - \int _{\varOmega } w \det D^2 v \,dx - \int _{\varOmega } v \mathrm{div }[ (\mathrm{cof }D^2 v) D w] \,dx. \end{aligned}$$

But by integration by parts, the symmetry of \(D^2 v\) and Lemma 1

$$\begin{aligned} \begin{array}{rl} \int _{\varOmega } v \mathrm{div }[ (\mathrm{cof }D^2 v) D w] \,dx &{} = -\int _{\varOmega } [(\mathrm{cof }D^2 v) D w]\cdot D v \,dx= - \int _{\varOmega } [(\mathrm{cof }D^2 v) D v] \cdot D w \,dx \\ &{} =\int _{\varOmega } w \mathrm{div }[ (\mathrm{cof }D^2 v) D v] \,dx = 2 \int _{\varOmega } w \det D^2 v \,dx. \end{array} \end{aligned}$$

Thus

$$ J'(v)(w) = 3 \int _{\varOmega } (f - \det D^2 v) w \,dx. $$

We have proved that for \(v,w\) smooth, vanishing on \(\partial \varOmega \)

$$ J(v+w) - J(v) = 3 \int _{\varOmega } (f - \det D^2 v) w \,dx + O(|w|_1^2). $$

Since the space of infinitely differentiable functions with compact support is dense in \(W^{2,\infty }(\varOmega ) \cap H_0^1(\varOmega )\), the result holds for \(v,w \in W^{2,\infty }(\varOmega ) \cap H_0^1(\varOmega )\) by a density argument and using Lemma 3. \(\square \)
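The integration by parts performed above can be checked symbolically on a model domain (as an illustration only; the helper names are ours). For polynomials \(v, w\) vanishing on the boundary of the unit square, the chain of identities gives \(\int v\, \mathrm{div}\big[(\mathrm{cof}\,D^2 v) D w\big]\,dx = 2\int w \det D^2 v\,dx\):

```python
import sympy as sp

x, y = sp.symbols('x y')

def hess(u):
    return sp.Matrix([[sp.diff(u, x, 2), sp.diff(u, x, y)],
                      [sp.diff(u, x, y), sp.diff(u, y, 2)]])

def cof(A):  # cofactor matrix of a 2x2 matrix
    return sp.Matrix([[A[1, 1], -A[1, 0]], [-A[0, 1], A[0, 0]]])

def grad(u):
    return sp.Matrix([sp.diff(u, x), sp.diff(u, y)])

def div_vec(V):
    return sp.diff(V[0], x) + sp.diff(V[1], y)

# v and w vanish on the boundary of the unit square [0,1]^2
v = x * (1 - x) * y * (1 - y)
w = x**2 * (1 - x) * y * (1 - y)

lhs = sp.integrate(v * div_vec(cof(hess(v)) * grad(w)), (x, 0, 1), (y, 0, 1))
rhs = 2 * sp.integrate(w * hess(v).det(), (x, 0, 1), (y, 0, 1))
print(sp.simplify(lhs - rhs))  # 0
```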

The Euler-Lagrange equation for \(J\) is therefore (10).

Remark 1

It has been shown in [4, 19] that a generalized solution of (1) is the unique minimizer of the functional \(J\) on the set of convex functions vanishing on the boundary.

Theorem 1

Let \(u \in C^4(\overline{\varOmega })\) be the unique strictly convex solution of (1). Then for \(h\) sufficiently small, the functional \(J\) has a unique minimizer \(\hat{u}_h\) in \(X_h\). Moreover, \(||u-\hat{u}_h||_1 \rightarrow 0\) as \(h \rightarrow 0\).

Proof

We first note that by (7), the functional \(J\) is sequentially continuous in \(W^{2,\infty }(\varOmega ) \cap H_0^1(\varOmega )\). For \(v_n, v \in W^{2,\infty }(\varOmega ) \cap H_0^1(\varOmega )\) we have

$$\begin{aligned} J(v_n) -J(v)&= 3 \int _{\varOmega } f (v_n-v) \,dx + \int _{\varOmega } (v \det D^2 v - v_n \det D^2 v_n ) \,dx. \end{aligned}$$

We conclude from Lemma 3 that \(J(v_n) \rightarrow J(v)\) as \(||v_n-v||_{2,\infty } \rightarrow 0\). Moreover, using the expression of \(J'(v)(w)\) given in Lemma 4, we obtain

$$\begin{aligned} J''(v)(w)(z)&= -3 \int _{\varOmega } w \mathrm{div }[(\mathrm{cof }D^2 v) D z] \,dx = 3 \int _{\varOmega } [(\mathrm{cof }D^2 v) D z] \cdot D w \,dx. \end{aligned}$$

We conclude that

$$\begin{aligned} J''(v)(w)(w)&= 3 \int _{\varOmega } [( \mathrm{cof }D^2 v) D w] \cdot D w \,dx. \end{aligned}$$

Since \(\mathrm{cof }D^2 v\) is positive definite for \(v \in X_h\), \(J\) is strictly convex in \(X_h\). A minimizer, if it exists, is therefore unique.

The argument to prove that \(J\) has a minimizer follows the lines of Theorem 5.1 in [9]. We have for some \(\theta \in [0,1]\)

$$\begin{aligned} J(v)&= J(0) + J'(0)(v) + \frac{1}{2}J''(\theta v)(v)(v) \nonumber \\&= 0 + 3 \int _{\varOmega } f v \,dx + \frac{3}{2} \theta \int _{\varOmega } [( \mathrm{cof }D^2 v) D v] \cdot D v \,dx. \end{aligned}$$
(13)

We claim that for \(v \in X_h, v \ne 0\), we have \(\theta \ne 0\). Assume otherwise. Then

$$\begin{aligned} 0&= - \int _{\varOmega } v \det D^2 v \,dx = -\frac{1}{2} \int _{\varOmega } v \,\mathrm{div }\big [(\mathrm{cof }D^2 v) D v \big ] \,dx \nonumber \\&= \frac{1}{2} \int _{\varOmega } [(\mathrm{cof }D^2 v) D v] \cdot D v \,dx \ge \frac{m }{2} |v|_1^2, \end{aligned}$$
(14)

where \(m\) is a lower bound on the smallest eigenvalue of \(\mathrm{cof }D^2 v\). By the assumption on \(v \in X_h\) we have \(m >0\). We obtain the contradiction \(v=0\) and conclude that \(\theta \in (0,1]\).

Next, note that

$$ \bigg | \int _{\varOmega } f v \,dx \bigg | \le ||f||_0 ||v||_0 \le ||f||_0 ||v||_1, $$

and thus \(\int _{\varOmega } f v \,dx \ge - ||f||_0 ||v||_1\).

By (13), and using Poincaré's inequality, we obtain

$$\begin{aligned} J(v)&\ge -3 ||f||_0 ||v||_1 + \frac{3}{2} \theta m |v|_1^2 \ge -3 ||f||_0 ||v||_1 + C ||v||_1^2 \nonumber \\&\ge ||v||_1 ( -3 ||f||_0 + C ||v||_1 ), \end{aligned}$$
(15)

for a constant \(C>0\). Let now \(R >0\) such that

$$ X_h \cap \{ \, v \in V_h \cap H_0^1(\varOmega ), ||v||_1 \le R\, \} \ne \emptyset . $$

Since \(J\) is continuous, \(J\) is bounded below on the above set. Moreover for \(||v||_1 \ge R\), we have

$$ J(v) \ge R (-3 ||f||_0 + C R). $$

We conclude that the functional \(J\) is bounded below. We show that its infimum is attained at some \(\hat{u}_h\) in \(X_h\). Let \(v_n \in X_h\) be such that \(\lim _{n \rightarrow \infty } J(v_n) =\text {inf}_{v \in X_h} J(v)\), which has just been proved to be finite. Then the sequence \(J(v_n)\) is bounded, and by (15) the sequence \(v_n\) is also necessarily bounded. Let \(v_{k_n}\) be a weakly convergent subsequence with limit \(\hat{u}_h\). We have

$$ \lim _{n \rightarrow \infty } J'(\hat{u}_h)(v_{k_n}) = J'(\hat{u}_h)(\hat{u}_h). $$

Since \(J\) is strictly convex in \(X_h\),

$$ J(v_{k_n}) \ge J(\hat{u}_h) + J'(\hat{u}_h)(v_{k_n}-\hat{u}_h), $$

and so in the limit \(\text {inf}_{v \in X_h} J(v) \ge J(\hat{u}_h)\). This proves that \(\hat{u}_h\) minimizes \(J\) in \(X_h\).

We now prove that \(||u-\hat{u}_h||_1 \rightarrow 0\) as \(h \rightarrow 0\). Note that since \(\hat{u}_h \in X_h\), \(||\hat{u}_h-Q_hu||_1 \le \delta h^2 /4\). By (8) and the triangle inequality, we obtain the result. \(\square \)

Remark 2

From the approach taken in [1], we may conclude that (10) has a unique convex solution \(u_h\) in \(X_h\) which therefore solves the Euler-Lagrange equation for the functional \(J\). Since \(X_h\) is open and convex and \(J\) convex on \(X_h\), by Theorem 3.9.1 of [15] we have

$$ J(v) \ge J(u_h) + J'(u_h) (v-u_h), \quad \forall v \in X_h. $$

Since \(u_h\) is a critical point of \(J\) in \(X_h\), we get

$$ J(v) \ge J(u_h), \quad \forall v \in X_h. $$

We conclude that both \(u_h\) and \(\hat{u}_h\) are minimizers of \(J\) in \(X_h\). By the strict convexity of \(J\) in \(X_h\), they are equal. Therefore the unique minimizer of \(J\) in \(X_h\) solves (10).

We now turn to the regularized problems (2) and (3). The formal limit of \(u_{\varepsilon }^k\) as \(k \rightarrow \infty \) solves

$$\begin{aligned} (\mathrm{cof }(D^2 u_{\varepsilon } + \varepsilon I) ): D^2 u_{\varepsilon }^{}&= \det D^2 u_{\varepsilon }^{} + f \quad \text {in }\varOmega \\ u_{\varepsilon }^{}&= 0 \quad \text {on }\partial \varOmega . \end{aligned}$$

But since \(I\) and \(D^2 u_{\varepsilon }\) are \(2 \times 2\) matrices, we have \(\mathrm{cof }(D^2 u_{\varepsilon } + \varepsilon I) = \mathrm{cof }D^2 u_{\varepsilon } + \mathrm{cof }\varepsilon I = \mathrm{cof }D^2 u_{\varepsilon } + \varepsilon I\) and we obtain

$$ (\mathrm{cof }D^2 u_{\varepsilon } ): D^2 u_{\varepsilon } + \varepsilon I : D^2 u_{\varepsilon } = \det D^2 u_{\varepsilon } + f. $$

Since \( \varepsilon I : D^2 u_{\varepsilon } = \varepsilon \varDelta u_{\varepsilon } \) and by (6) we have \( (\mathrm{cof }D^2 u_{\varepsilon } ): D^2 u_{\varepsilon }= 2 \det D^2 u_{\varepsilon }\), we obtain (3).
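The two algebraic facts used in this derivation, \(\mathrm{cof}(A+\varepsilon I) = \mathrm{cof}\,A + \varepsilon I\) and \(\varepsilon I : A = \varepsilon\, \mathrm{tr}\,A\) for \(2\times 2\) matrices, are easy to confirm numerically (a NumPy sketch; `cof` is our helper):

```python
import numpy as np

def cof(A):
    """Cofactor matrix of a 2x2 matrix."""
    return np.array([[A[1, 1], -A[1, 0]],
                     [-A[0, 1], A[0, 0]]])

eps = 0.01
A = np.array([[2.0, -1.0],
              [-1.0, 4.0]])   # stands in for a Hessian D^2 u_eps

print(np.allclose(cof(A + eps * np.eye(2)), cof(A) + eps * np.eye(2)))  # True
print(np.isclose(np.sum((eps * np.eye(2)) * A), eps * np.trace(A)))     # True
```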

Next we present the discrete variational formulation used in the numerical experiments. To avoid large errors, we used a damped version of (2). Let \(\nu >0\). We consider the problem

$$\begin{aligned} (\mathrm{cof }(D^2 u_{\varepsilon }^k + \varepsilon I) ): D^2 u_{\varepsilon }^{k+1}&=2 \det D^2 u_{\varepsilon }^{k} + \frac{1}{\nu } (-\det D^2 u_{\varepsilon }^{k} + f ) \quad \text {in }\varOmega \nonumber \\ u_{\varepsilon }^{k+1}&= 0 \quad \text {on } \partial \varOmega . \end{aligned}$$
(16)

We note that for \(\nu =1\), (16) reduces to (2). Also, the formal limit, as \(\varepsilon \rightarrow 0\) and \(k \rightarrow \infty \), of \(u_{\varepsilon }^{k}\) solving (16) is a solution of \(\frac{1}{\nu } (f-\det D^2 u)=0\), i.e., of (1).
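The role of the damping parameter \(\nu\) can be illustrated on a scalar toy problem (our own analogue, not the authors' discretization): replace \(\det D^2 u\) by \(F(u)=u^2\), so that \(F'(u)=2u\) plays the role of \(\mathrm{cof}(D^2 u_{\varepsilon}^k):D^2\). The update mirroring (16) reads \(2u_k u_{k+1} = 2u_k^2 + \frac{1}{\nu}(f-u_k^2)\); its fixed point is \(u=\sqrt{f}\), and \(\nu=1\) recovers Newton's method.

```python
def damped_iteration(f, nu, u0, n_iter=60):
    """Scalar analogue of the damped scheme (16):
    2 u_k u_{k+1} = 2 u_k^2 + (f - u_k^2) / nu."""
    u = u0
    for _ in range(n_iter):
        u = (2 * u * u + (f - u * u) / nu) / (2 * u)
    return u

f = 2.0
print(damped_iteration(f, nu=1.0, u0=1.0))   # sqrt(2): nu = 1 is Newton's method
print(damped_iteration(f, nu=2.5, u0=1.0))   # same limit, but linear convergence
```

In this toy model the damped iteration converges only linearly (with asymptotic rate \(1-1/\nu\)), which is the price paid for the added robustness.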

Let \(|x|\) denote the Euclidean norm of \(x \in \mathbb {R}^2\). Note that \(D^2 ( |x|^2/2) = I\), and thus for \(u_{\varepsilon }^k\) smooth, \(\mathrm{cof }(D^2 u_{\varepsilon }^k + \varepsilon I )= \mathrm{cof }D^2 \big (u_{\varepsilon }^k + \frac{\varepsilon }{2} |x|^2\big )\). Hence, using (4) and (5), we obtain

$$\begin{aligned} \begin{array}{rl} \mathrm{div }\bigg ( (\mathrm{cof }(D^2 u_{\varepsilon }^k + \varepsilon I) ) D u_{\varepsilon }^{k+1} \bigg )&{} =2 \det D^2 u_{\varepsilon }^{k} + \dfrac{1}{\nu } (-\det D^2 u_{\varepsilon }^{k} + f ) \quad \text {in }\varOmega \\ u_{\varepsilon }^{k+1} &{} = 0 \quad \text {on } \partial \varOmega . \end{array} \end{aligned}$$

This leads to the following discretization: find \(u_{\varepsilon ,h}^{k+1} \in V_h \cap H_0^1(\varOmega )\) such that \(\forall v \in V_h \cap H_0^1(\varOmega )\)

$$\begin{aligned} \begin{array}{rl} -\int _{\varOmega }\bigg ( (\mathrm{cof }(D^2 u_{\varepsilon ,h}^k + \varepsilon I) ) D u_{\varepsilon ,h}^{k+1} \bigg ) \cdot D v \,dx &{} = \int _{\varOmega } \bigg ( 2 \det D^2 u_{\varepsilon ,h}^{k} \\ &{} \quad +\, \dfrac{1}{\nu } (-\det D^2 u_{\varepsilon ,h}^{k} + f ) \bigg ) v \,dx. \end{array} \end{aligned}$$
(17)

For the initial guess \(u_{\varepsilon ,h}^{0}\) when \(\varepsilon \ge 0\), we take the discrete approximation of the solution of the problem

$$\begin{aligned} \varDelta u_{\varepsilon }^{0}&= 2 \sqrt{f} \quad \text {in } \varOmega \\ u_{\varepsilon }^{0}&= 0 \quad \text {on } \partial \varOmega . \end{aligned}$$

This choice is motivated by the observation that if \(D^2 u = \lambda I\) with \(\lambda >0\), then \(\varDelta u = 2\lambda = 2\sqrt{\det D^2 u} = 2\sqrt{f}\). While it does not assure that \(u_{\varepsilon ,h}^{0} \in X_h\), this choice worked in all our numerical experiments.

Remark 3

For a possible extension of the minimization result in Theorem 1 to the case of nonsmooth solutions, the homogeneous boundary condition is necessary.

4 Isogeometric Analysis

We refer to [20] for an introduction to isogeometric analysis. Here we give a brief overview suitable for our needs. Precisely, we are interested in the ability of this approach to generate finite dimensional spaces of piecewise smooth \(C^1\) functions which can be used in the Galerkin method for approximating partial differential equations.

A univariate NURBS of degree \(p\) is given by

$$\frac{w_i N_{i,p}(u)}{\sum _{j \in \fancyscript{J}} w_j N_{j,p}(u)}, \quad u \in [0,1],$$

with B-splines \(N_{i,p}\), weights \(w_i\) and an index set \(\fancyscript{J}\) which encodes its smoothness. The parameter \(h\) refers to the maximum distance between the knots \(u_i, i \in \fancyscript{J}\).
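A minimal sketch of this construction (our own implementation via the Cox-de Boor recursion; the knot vector and weights are example choices, not data from the paper): build the B-splines \(N_{i,p}\) on an open knot vector, form the rational basis, and observe that the NURBS basis functions sum to one.

```python
def bspline(i, p, knots, u):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    val = 0.0
    if knots[i + p] > knots[i]:
        val += (u - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline(i, p - 1, knots, u)
    if knots[i + p + 1] > knots[i + 1]:
        val += (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
               * bspline(i + 1, p - 1, knots, u)
    return val

p = 2
knots = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0]   # open knot vector: 4 basis functions, C^1 at u = 0.5
weights = [1.0, 0.8, 0.8, 1.0]                 # example weights

def nurbs(i, u):
    """Rational basis R_i(u) = w_i N_i(u) / sum_j w_j N_j(u)."""
    N = [bspline(j, p, knots, u) for j in range(len(weights))]
    return weights[i] * N[i] / sum(w * n for w, n in zip(weights, N))

for u in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(round(sum(nurbs(i, u) for i in range(len(weights))), 12))  # 1.0: partition of unity
```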

A bivariate NURBS is given by

$$R_{kl}(u,v) = \frac{w_{kl} N_{k}(u) N_{l}(v) }{\sum _{i \in \fancyscript{I}} \sum _{j \in \fancyscript{J}} w_{ij} N_{i}(u) N_{j}(v) }, \quad u,v \in [0,1],$$

with index sets \(\fancyscript{I}\) and \(\fancyscript{J}\). In the above expression, we omit the degrees \(\text {pU}\) and \(\text {pV}\) of the NURBS \(R_{kl}\) in the \(u\) and \(v\) directions.

The domain \(\varOmega \) is described parametrically by a mapping \(F: [0,1]^2 \rightarrow \varOmega , F(u,v)= \sum _{i \in \fancyscript{I}} \sum _{j \in \fancyscript{J}} R_{ij}(u,v) c_{ij}\) with NURBS \(R_{ij}\) and control points \(c_{ij}\). We take equally spaced knots \(u_i, v_j\) and hence \(h\) refers to the size of an element in the parametric domain.

We say that a NURBS \(R_{kl}\) has degree \(p\) if the univariate B-splines \(N_k\) and \(N_l\) all have degree \(p\). The NURBS considered in this paper are all of a fixed degree \(p\) and are \(C^1\).

The basis functions \(R_{ij}\) used in the description of the domain are also used in the definition of the finite dimensional space \(V_h \subset \text {span} \{R_{ij} \circ F^{-1} \}\). Thus the numerical solution takes the form

$$ T_h(x,y) = \sum _{i \in \fancyscript{I}} \sum _{j \in \fancyscript{J}} R_{ij} (F^{-1}(x,y)) q_{ij}, $$

with unknowns \(q_{ij}\).

It can be shown [18] that there exists an interpolation operator \(Q_h\) mapping \(H^r(\varOmega )\) into \(V_h\) such that for \(0 \le l \le r \le p+1\) we have

$$\begin{aligned} |u-Q_h u|_l \le C h^{r-l} ||u||_r, \end{aligned}$$

with \(C\) independent of \(h\). Thus the approximation property (8) holds for spaces constructed with the isogeometric analysis concept. For the inverse estimates (9), we refer to [5].

Fig. 1 Circle represented exactly. \(\text {pU}=2, \text {pV}=2\)

5 Numerical Results

The implementation was done by modifying the companion code to [20]. The computational domain is the unit disk bounded by the circle \(x^2+y^2-1=0\), with an initial triangulation depicted in Fig. 1. The numerical solutions are obtained by computing \(u_{\varepsilon ,h}^k\) defined by (17). We consider the following test cases.

Test 1 (smooth solution): \(u(x,y)=(x^2+y^2-1) e^{x^2+y^2}\) with \(f(x,y) = 4 e^{2(x^2+y^2)} (x^2+y^2)^2(2 x^2 +3 + 2 y^2)\). Numerical results are given in Table 1. Since \(\text {pU}=2, \text {pV}=2\), the approximation space in the parametric domain contains piecewise polynomials of degree \(p=2\). The analysis in [1] suggests that the rate of convergence for smooth solutions is \(O(h^p)\) in the \(H^1\) norm, and \(O(h^{p+1})\) and \(O(h^{p-1})\) in the \(L^2\) and \(H^2\) norms, respectively. No regularization or damping was necessary for this case.

Table 1 Smooth solution \(u(x,y)=(x^2+y^2-1) e^{x^2+y^2}\)
Fig. 2 Convex solution with data \(f=e^{x^2+y^2}, g=0\), with \(\nu =2.5, \varepsilon =0.01, h=1/32\). No known analytical formula

Table 2 Solution not in \(H^1(\varOmega )\): \(u(x,y)=-\sqrt{1-x^2-y^2}\) with \(\nu =2.5, \varepsilon =0.01\)

Test 2 (No known exact solution): \(f=e^{x^2+y^2}, g =0\). As expected the numerical solution displayed in Fig. 2 appears to be a convex function.

Test 3 (solution not in \(H^1(\varOmega )\)): \(u(x,y)=-\sqrt{1-x^2-y^2}\) with \(f(x,y) = 1/(x^2+y^2-1)^2\). With regularization and damping, we were able to avoid divergence of the iterations. These results should be compared with the ones in [1], where iterative methods with only a linear convergence rate were proposed for nonsmooth solutions of (1). Note that \(u\) in this case is highly singular, as \(f\) blows up on \(\partial \varOmega \).

In the tables \(n_{\textit{it}}\) refers to the number of iterations for Newton’s method (Table 2).