1 Introduction

For a bounded, regular domain \(\Omega \subset \mathbb {R}^2\), and Dirichlet boundary data provided by \((u_{1, 0}, u_{2, 0})\in H^1(\Omega ; \mathbb {R}^2)\), we are interested in the following system of PDEs

$$\begin{aligned}&{\text {div}}\left( \frac{|\nabla u_2(\mathbf {x})|}{|\nabla u_1(\mathbf {x})|}\nabla u_1(\mathbf {x})\right) =0\hbox { in }\Omega ,\quad u_1=u_{1, 0}\hbox { on }\partial \Omega ,\end{aligned}$$
(1.1)
$$\begin{aligned}&{\text {div}}\left( \frac{|\nabla u_1(\mathbf {x})|}{|\nabla u_2(\mathbf {x})|}\nabla u_2(\mathbf {x})\right) =0\hbox { in }\Omega ,\quad u_2=u_{2, 0}\hbox { on }\partial \Omega , \end{aligned}$$
(1.2)

for a pair of functions

$$\begin{aligned} u_i\in H^1(\Omega ), i=1, 2,\quad u_i-u_{i, 0}\in H^1_0(\Omega ), \end{aligned}$$

as the Euler–Lagrange system associated with the functional

$$\begin{aligned} I(\mathbf {u})=\int _\Omega |\nabla u_1(\mathbf {x})|\,|\nabla u_2(\mathbf {x})|\,d\mathbf {x},\quad \mathbf {u}=(u_1, u_2). \end{aligned}$$
(1.3)

Indeed, if we put

$$\begin{aligned} \phi (\mathbf {F})=|\mathbf {F}^{(1)}|\,|\mathbf {F}^{(2)}|,\quad \mathbf {F}=\begin{pmatrix}\mathbf {F}^{(1)}\\ \mathbf {F}^{(2)}\end{pmatrix}\in \mathbf {M}^{2\times 2}, \end{aligned}$$
(1.4)

we are talking about the vector variational problem

$$\begin{aligned} \hbox {Minimize in }\mathbf {u}\in \mathcal {A}:\quad I(\mathbf {u})=\int _\Omega \phi (\nabla \mathbf {u}(\mathbf {x}))\,d\mathbf {x}\end{aligned}$$

where the class \(\mathcal {A}\) needs to be determined, incorporating the Dirichlet boundary condition

$$\begin{aligned} \mathbf {u}=\mathbf {u}_0\hbox { on }\partial \Omega ,\quad \mathbf {u}_0=(u_{1, 0}, u_{2, 0}), \end{aligned}$$

furnishing boundary data. For vector problems like the one we are considering here, there are not many ways to show existence of solutions. As far as we can tell, there are two classic methods, both coming directly from the treatment of equilibrium solutions in finite elasticity [1]. The first is to make use of the implicit function theorem [2, 3] in a suitable framework. By its very perturbative nature, this method can only deliver solutions which are sufficiently close to solutions of approximating linear problems, and it demands quite restrictive hypotheses. The second alternative is to show that the system under study is, at least formally, the Euler–Lagrange system of a functional which admits minimizers in appropriate function spaces. This too asks for important structural assumptions on the underlying functional which cannot always be taken for granted [1, 2].

Our only hope to show existence of solutions for our non-linear system (1.1) and (1.2) is to focus on proving the existence of global minimizers for the functional with integrand \(\phi (\mathbf {F})\) in (1.4). The whole point of our concern is, however, that this integrand does not comply with any of the necessary requirements to apply the direct method of the Calculus of Variations: the functional I in (1.3) is neither coercive nor quasiconvex [1, 2, 4]. Under such unfavorable circumstances, in which the variational method is very seriously compromised, there are two relevant objectives that can be tried out:

(1) look for the relaxation of the functional in (1.3);

(2) one possible alternative to find a global minimizer for such a singular vector variational problem, and hence a possible weak solution of the initial system of PDEs, asks for two important ingredients:

    (a) show that \(I(\mathbf {v})\ge 0\) (or some other lower bound) for every feasible \(\mathbf {v}\);

    (b) find some feasible \(\mathbf {u}\) with \(I(\mathbf {u})=0\).

Surprisingly enough, both possibilities can be carried out in our case: we can compute quite explicitly the relaxed problem; and, under some circumstances, we can find global minimizers that turn out to be solutions of the original differential system. The goal of our contribution is to report on these two appealing aspects, and relate the second one to inverse problems in conductivity without pretending to add any new result in this area.

As we have just pointed out, it is a quite surprising feature of our system how intimately connected it is to inverse problems in conductivity in the plane. To see this connection more clearly, suppose we are seeking an unknown conductivity coefficient

$$\begin{aligned} \gamma (\mathbf {x})\ge \gamma _0>0, \quad \mathbf {x}\in \Omega . \end{aligned}$$

For a chosen Dirichlet boundary condition \(u_0\) around \(\partial \Omega \), the solution of the linear elliptic equation

$$\begin{aligned} {\text {div}}[\gamma (\mathbf {x})\nabla u(\mathbf {x})]=0\hbox { in }\Omega ,\quad u=u_0\hbox { on }\partial \Omega \end{aligned}$$
(1.5)

is such that there is a unique (up to an additive constant) \(v\in H^1(\Omega )\) with

$$\begin{aligned} \gamma (\mathbf {x})\nabla u(\mathbf {x})+\mathbf {R}\nabla v(\mathbf {x})=\mathbf {0},\quad \mathbf {R}=\begin{pmatrix}0&{}-1\\ 1&{}0\end{pmatrix}. \end{aligned}$$
(1.6)

Vector equation (1.6) furnishes three important pieces of information on the auxiliary function v:

(1) a conductivity equation for v

    $$\begin{aligned} {\text {div}}\left[ \frac{1}{\gamma (\mathbf {x})}\nabla v(\mathbf {x})\right] =0\hbox { in }\Omega ; \end{aligned}$$
    (1.7)

(2) a formula for \(\gamma \) in terms of u and v, namely

    $$\begin{aligned} \gamma =\frac{|\nabla v|}{|\nabla u|}; \end{aligned}$$
    (1.8)

(3) Dirichlet boundary values around \(\partial \Omega \) based on the Neumann condition for u

    $$\begin{aligned} \nabla v\cdot \mathbf {t}=\gamma \nabla u\cdot \mathbf {n}\hbox { on }\partial \Omega , \end{aligned}$$
    (1.9)

    where \(\mathbf {n}\) is the outer normal to \(\Omega \), and \(\mathbf {t}=\mathbf {R}\mathbf {n}\) is the counterclockwise tangential vector to \(\partial \Omega \).
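These three facts can be verified symbolically on a closed-form example. The following sketch is merely illustrative (it assumes Python with sympy; the conductivity \(\gamma =e^x\) and the pair \(u=e^{-x}\), \(v=-y\) are our own ad hoc choices, not data from the experiments below):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
R = sp.Matrix([[0, -1], [1, 0]])    # the rotation matrix of (1.6)

gamma = sp.exp(x)                   # a smooth, positive conductivity
u = sp.exp(-x)                      # solves div(gamma grad u) = 0
v = -y                              # its companion potential from (1.6)

grad = lambda f: sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
div = lambda W: sp.diff(W[0], x) + sp.diff(W[1], y)

print(sp.simplify(div(gamma * grad(u))))              # 0 : equation (1.5)
print(sp.simplify(gamma * grad(u) + R * grad(v)))     # (0, 0) : equation (1.6)
print(sp.simplify(div(grad(v) / gamma)))              # 0 : equation (1.7)
print(sp.simplify(grad(v).norm() / grad(u).norm() - gamma))  # 0 : formula (1.8)
```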

Hence, if \(\gamma \) is unknown, one can try to determine it through (1.8), and then (1.5) and (1.7) carry us to our initial system for

$$\begin{aligned} u=u_1,\quad v=u_2, \end{aligned}$$

together with Dirichlet boundary conditions for both components through (1.9). Hence, our interpretation of the inverse conductivity problem for a single pair \((u, v)\) focuses on vector equation (1.6). This condition is usually dealt with through the Beltrami operator (see [5]), but for our approach it will be beneficial to keep it in the form (1.6).

This connection between our analysis (system (1.1) and (1.2) and functional (1.3)) and the classical inverse problem in conductivity is worth exploring, we believe, given the practical importance of reconstruction procedures. In particular, we are interested in providing at least some partial answers to the following two issues:

(1) Under what circumstances can a solution \((u_1, u_2)\) of our initial system of PDEs provide a valid, coherent conductivity coefficient \(\gamma \) through (1.8),

    $$\begin{aligned} \gamma =\frac{|\nabla u_2|}{|\nabla u_1|}, \end{aligned}$$
    (1.10)

    that is,

    $$\begin{aligned} \frac{|\nabla u_2|}{|\nabla u_1|}\nabla u_1+\mathbf {R}\nabla u_2=\mathbf {0}\hbox { in }\Omega \hbox {?} \end{aligned}$$
    (1.11)

(2) Find a procedure to build boundary measurements \((u_{1, 0}, u_{2, 0})\) for which a conductivity coefficient \(\gamma \) can be reconstructed through our variational method.

In such situations, we will be concerned with the numerical approximation, which can be set up in various ways. The interesting thing is that, despite the huge non-uniqueness of solutions for these problems (when there are solutions), we possess a certificate of convergence to a true solution through (1.11). This certificate of convergence will be formulated in a much more practical and direct way below.

Our ideas extend to the multi-measurement case as well. The system for an unknown field

$$\begin{aligned} \mathbf {u}(\mathbf {x}):\Omega \subset \mathbb {R}^2\rightarrow \mathbb {R}^{2N} \end{aligned}$$

becomes

$$\begin{aligned} {\text {div}}\left( \frac{|\nabla \mathbf {u}_2|}{|\nabla \mathbf {u}_1|}\nabla u_1^{(j)}\right) =0\hbox { in }\Omega ,\quad u_1^{(j)}=u_{1, 0}^{(j)}\hbox { on }\partial \Omega ,\end{aligned}$$
(1.12)
$$\begin{aligned} {\text {div}}\left( \frac{|\nabla \mathbf {u}_1|}{|\nabla \mathbf {u}_2|}\nabla u_2^{(j)}\right) =0\hbox { in }\Omega ,\quad u_2^{(j)}=u_{2, 0}^{(j)}\hbox { on }\partial \Omega , \end{aligned}$$
(1.13)

for \(j=1, 2, \dots , N\), where we are using the notation

$$\begin{aligned}&\mathbf {u}=(\mathbf {u}^{(j)})_{j=1, 2, \dots , N}=(u^{(j)}_1, u^{(j)}_2)_{j=1, 2, \dots , N},\\&\mathbf {u}^{(j)}=(u^{(j)}_1, u^{(j)}_2),\quad \mathbf {u}_i=(u^{(j)}_i)_{j=1, 2, \dots , N},\quad i=1, 2. \end{aligned}$$

Note how this system is fully coupled because this time the quotient for the conductivity coefficient

$$\begin{aligned} \gamma =\frac{|\nabla \mathbf {u}_2|}{|\nabla \mathbf {u}_1|} \end{aligned}$$

involves all of the components of \(\mathbf {u}\).

The paper is organized as follows. Section 2 focuses on the analytical part of our work, in which we examine a non-convex vector variational problem, calculate its relaxation, and, with its help, distinguish cases where the infimum vanishes from situations where it does not. The underlying system of PDEs, completed with a curious non-linear boundary condition, ensures the existence of a minimizer with a vanishing minimum. In Sect. 3, we examine the intimate relationship between our variational problem and its Euler–Lagrange system, when the minimum vanishes, on the one hand, and inverse problems in conductivity, on the other. This, in particular, means that one can make an attempt to approximate a solution of an inverse problem in conductivity by simulating numerically one minimizer or one solution of the Euler–Lagrange system. This, however, requires guaranteeing that the infimum of the variational problem vanishes. Hence, we study the generation of synthetic data around \(\partial \Omega \) in such a way that the infimum of the problem indeed vanishes. This approach has the interesting practical feature that driving the functional to zero means that we are approximating a true solution of the inverse conductivity problem, even though it may not be the one we would expect. We test the procedure in practice with various cases and examples, including multi-measurement situations. Our analytical results, however, do not lead to any new fact in the area of inverse problems in conductivity, except possibly for the use of the quasiconvexification of the underlying functional to clarify the role of boundary data.

The classic Calderón problem [6] in conductivity is a tremendously rich problem that has stirred, and still stirs, a lot of interest and research, ranging from deep results in analysis to numerical simulation and practical implementation. Without pretending to enumerate all relevant contributions, we cite some of our favorite ones. From the mathematical point of view, this kind of problem has led to deep and fundamental results (see [5, 7,8,9,10,11,12, 35], and references therein). The book [13] is a basic reference in this field. The main issues concerning inverse problems are uniqueness in various scenarios [5, 14], stability [15,16,17] and reconstruction [18, 19].

2 A vector variational problem

We have already indicated in the Introduction that, at least formally, our initial system (1.1) and (1.2) is the Euler–Lagrange system for the functional

$$\begin{aligned} I(\mathbf {u})=\int _\Omega \phi (\nabla \mathbf {u}(\mathbf {x}))\,d\mathbf {x},\quad \phi (\mathbf {F})=|\mathbf {F}^{(1)}|\,|\mathbf {F}^{(2)}|,\quad \mathbf {F}=\begin{pmatrix}\mathbf {F}^{(1)}\\ \mathbf {F}^{(2)}\end{pmatrix}\in \mathbf {M}^{2\times 2}. \end{aligned}$$
(2.1)

Since vector equation (1.11), written in matrix notation

$$\begin{aligned} \frac{|\mathbf {F}_2|}{|\mathbf {F}_1|}\mathbf {F}_1+\mathbf {R}\mathbf {F}_2=\mathbf {0},\quad \mathbf {F}=\begin{pmatrix}\mathbf {F}_1\\ \mathbf {F}_2\end{pmatrix}, \end{aligned}$$
(2.2)

will play a central role in our analysis too, it is elementary to realize that if we put

$$\begin{aligned} \psi (\mathbf {F})=\phi (\mathbf {F})-\det \mathbf {F}, \end{aligned}$$
(2.3)

then \(\psi (\mathbf {F})\ge 0\) always, but \(\psi (\mathbf {F})=0\) precisely when (2.2) holds. It thus seems advantageous to replace the functional in (2.1) by the modified one

$$\begin{aligned} I(\mathbf {u})=\int _\Omega \psi (\nabla \mathbf {u}(\mathbf {x}))\,d\mathbf {x}. \end{aligned}$$
(2.4)

There are two main advantages of this functional over the old one.

(1) Since we have added a null-lagrangian, \(-\det \mathbf {F}\), to the old \(\phi \), the new underlying Euler–Lagrange system remains the same, i.e. our original system of PDEs (1.1) and (1.2).

(2) If m is the infimum of I in (2.4) over a class of mappings respecting boundary data, then \(m\ge 0\), and \(m=0\) is attained, i.e. \(m=0\) is a minimum, precisely when (2.2) holds for a minimizer \((u_1, u_2)\).
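The pointwise properties of \(\psi \) just used (non-negativity, and vanishing exactly on (2.2)) are easy to test numerically. A minimal sketch (assuming Python with numpy, and the convention that \(\mathbf {F}_1, \mathbf {F}_2\) are the rows of \(\mathbf {F}\)):

```python
import numpy as np

R = np.array([[0.0, -1.0], [1.0, 0.0]])         # the rotation matrix of (1.6)

def psi(F):
    """psi(F) = |F_1||F_2| - det F, as in (2.3), with F_1, F_2 the rows of F."""
    return np.linalg.norm(F[0]) * np.linalg.norm(F[1]) - np.linalg.det(F)

rng = np.random.default_rng(0)

# psi >= 0 for arbitrary matrices (Hadamard's inequality)
assert all(psi(rng.normal(size=(2, 2))) >= -1e-12 for _ in range(10_000))

# psi = 0 on the coincidence set (2.2): gamma F_1 + R F_2 = 0, gamma > 0
for _ in range(1_000):
    F1 = rng.normal(size=2)
    gamma = rng.uniform(0.1, 10.0)
    F2 = -gamma * np.linalg.solve(R, F1)        # so that R F_2 = -gamma F_1
    assert abs(psi(np.vstack([F1, F2]))) < 1e-9
print("psi >= 0, and psi = 0 on (2.2): checks passed")
```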

Unfortunately, the integrand \(\psi \) is neither convex nor quasiconvex, just as \(\phi \) was not, so existence of minimizers is still a difficult issue. In addition, neither of the two integrands, \(\phi \) or \(\psi \), is coercive. Even so, we can use the non-negative number m to organize our discussion of four possibilities of interest:

(1) \(m>0\),

    (a) attained for I;

    (b) not attained for I.

(2) \(m=0\),

    (a) attained for I;

    (b) not attained for I.

The following result indicates how boundary data \((u_{1, 0}, u_{2, 0})\) can distinguish the case \(m=0\) from \(m>0\). It involves the explicit calculation of the quasiconvexification of our integrand \(\psi \). It is one of those rare cases where such a computation is possible.

Theorem 2.1

Let \(u_{i, 0}\in H^{1/2}(\partial \Omega )\), \(i=1, 2\). If there is an extension of \((u_{1, 0}, u_{2, 0})\) to some \(\mathbf {u}_0\in H^1(\Omega ; \mathbb {R}^2)\) with \(\det \nabla \mathbf {u}_0>0\) a.e. in \(\Omega \), then \(m=0\).

We suspect that the converse of this statement is also correct, but that would require dealing with some delicate points about sequences of jacobians. It is also reminiscent of some other results relating boundary values for systems of PDEs and jacobians as in, for example, [20,21,22]. Note that, quite often, if the map

$$\begin{aligned} (u_{1, 0}, u_{2, 0}):\partial \Omega \rightarrow \mathbb {R}^2 \end{aligned}$$

is not one-to-one, then such boundary data cannot be extended to \(\Omega \) under the circumstances of Theorem 2.1. Such is the situation, for instance, for the simple example

$$\begin{aligned} u_{1, 0}(x_1, x_2)=|x_1|,\quad u_{2, 0}(x_1, x_2)=x_2. \end{aligned}$$

As far as we can tell, our proof is the first one that is performed through the explicit calculation of the quasiconvexification of an integrand.

Theorem 2.2

The quasiconvexification \(Q\phi \) of \(\phi \) in (2.1) is given by the jacobian

$$\begin{aligned} Q\phi (\mathbf {F})=|\det \mathbf {F}|. \end{aligned}$$

Proof

It is easy to realize that

$$\begin{aligned} |\det \mathbf {F}|\le \phi (\mathbf {F}) \end{aligned}$$

with the left-hand side being a quasiconvex (indeed polyconvex) function. Hence

$$\begin{aligned} |\det \mathbf {F}|\le Q\phi (\mathbf {F})\le \phi (\mathbf {F}). \end{aligned}$$

Equality between these three terms holds when, taking \(\gamma =|\mathbf {F}_2|/|\mathbf {F}_1|\) as in (2.2),

$$\begin{aligned} \gamma \mathbf {F}_1+\mathbf {R}\mathbf {F}_2=\mathbf {0}. \end{aligned}$$
(2.5)

As usual, we will check that the rank-one convexification of \(\phi (\mathbf {F})\) turns out to be precisely \(|\det \mathbf {F}|\). If this is so, then we will immediately have the result

$$\begin{aligned} Q\phi (\mathbf {F})=|\det \mathbf {F}|, \end{aligned}$$
(2.6)

as desired.

The computation of the rank-one convex hull of \(\phi \), for matrices \(\mathbf {F}\) off that coincidence set (2.5), proceeds by checking that such \(\mathbf {F}\)’s can be decomposed through suitable rank-one matrices with support in that coincidence set. Let us check that this is so.

Set

$$\begin{aligned} \mathbf {Z}_\pm =\left\{ \begin{pmatrix}\mathbf {x}\\ \alpha \mathbf {R}\mathbf {x}\end{pmatrix}: \alpha >(<)0, \mathbf {x}\in \mathbb {R}^2\right\} . \end{aligned}$$
(2.7)

To show our aim, all we need to check is that

$$\begin{aligned} R_1\mathbf {Z}_{\pm }=\{\mathbf {F}\in \mathbf {M}^{2\times 2}: \det \mathbf {F}\ge (\le )0\} \end{aligned}$$

where the first-level rank-one convex hull \(R_1\mathbf {Z}\) of a set of matrices \(\mathbf {Z}\) is the collection of matrices that can be written as a convex combination of two matrices from \(\mathbf {Z}\) whose difference is a rank-one matrix. Note that the two matrices

$$\begin{aligned} \begin{pmatrix}\mathbf {x}\\ \alpha \mathbf {R}\mathbf {x}\end{pmatrix},\quad \begin{pmatrix}\mathbf {y}\\ \beta \mathbf {R}\mathbf {y}\end{pmatrix} \end{aligned}$$

are rank-one connected if

$$\begin{aligned} (\mathbf {x}-\mathbf {y})\cdot (\alpha \mathbf {x}-\beta \mathbf {y})=0. \end{aligned}$$

Suppose first that \(\mathbf {F}\) has positive determinant. We will concentrate on showing that two matrices \(\mathbf {F}_0, \mathbf {F}_1\in \mathbf {Z}_+\) and a parameter \(t\in [0, 1]\) can be found such that

$$\begin{aligned} \mathbf {F}=t\mathbf {F}_1+(1-t)\mathbf {F}_0,\quad \mathbf {F}_1-\mathbf {F}_0, \hbox { rank-one}. \end{aligned}$$

We already know that

$$\begin{aligned} \mathbf {F}_i=\begin{pmatrix}\mathbf {x}_i\\ \alpha _i\mathbf {R}\mathbf {x}_i\end{pmatrix},\quad i=1, 0, \end{aligned}$$

for some positive \(\alpha _i\) and vectors \(\mathbf {x}_i\). The condition on the difference \(\mathbf {F}_1-\mathbf {F}_0\) being a rank-one matrix translates, as already remarked, into

$$\begin{aligned} (\mathbf {x}_1-\mathbf {x}_0)\cdot (\alpha _1\mathbf {x}_1-\alpha _0\mathbf {x}_0)=0; \end{aligned}$$
(2.8)

finally, we should have

$$\begin{aligned} \mathbf {F}^{(1)}=t\mathbf {x}_1+(1-t)\mathbf {x}_0,\quad \mathbf {F}^{(2)}=t\alpha _1\mathbf {R}\mathbf {x}_1+(1-t)\alpha _0\mathbf {R}\mathbf {x}_0. \end{aligned}$$

From these two vector equations, one can easily find that

$$\begin{aligned}&\mathbf {x}_1=\frac{1}{t}\frac{1}{\alpha _0-\alpha _1}(\alpha _0\mathbf {F}^{(1)}+\mathbf {R}\mathbf {F}^{(2)}),\\&\mathbf {x}_0=-\frac{1}{1-t}\frac{1}{\alpha _0-\alpha _1}(\alpha _1\mathbf {F}^{(1)}+\mathbf {R}\mathbf {F}^{(2)}). \end{aligned}$$

If we replace these expressions in (2.8), and rearrange terms, we arrive at the quadratic equation in t

$$\begin{aligned} \det \,\mathbf {F}\,t^2-&\frac{1}{\alpha _0-\alpha _1}(\alpha _1\alpha _0|\mathbf {F}^{(1)}|^2-|\mathbf {F}^{(2)}|^2+(\alpha _0-\alpha _1)\det \,\mathbf {F})\, t\nonumber \\ +&\frac{1}{(\alpha _0-\alpha _1)^2}(\alpha _0^2\alpha _1|\mathbf {F}^{(1)}|^2+\alpha _1|\mathbf {F}^{(2)}|^2-2\alpha _0\alpha _1\det \,\mathbf {F})=0. \end{aligned}$$
(2.9)

The value of this quadratic function for \(t=0\) and \(t=1\) turns out to be, respectively,

$$\begin{aligned} \frac{\alpha _1}{(\alpha _0-\alpha _1)^2}|\alpha _0\mathbf {F}^{(1)}+\mathbf {R}\mathbf {F}^{(2)}|^2,\quad \frac{\alpha _0}{(\alpha _0-\alpha _1)^2}|\alpha _1\mathbf {F}^{(1)}+\mathbf {R}\mathbf {F}^{(2)}|^2. \end{aligned}$$

Under the condition \(\det \,\mathbf {F}>0\), there are roots for t in (0, 1), provided that the discriminant is non-negative and the vertex of the parabola belongs to (0, 1). It is elementary to check, again after some algebraic manipulations, that these conditions amount to having

$$\begin{aligned} 2\sqrt{\alpha _1\alpha _0}\sqrt{|\mathbf {F}^{(1)}|^2|\mathbf {F}^{(2)}|^2-\det \,\mathbf {F}\,^2}\le (\alpha _1+\alpha _0)\det \,\mathbf {F}-\alpha _1\alpha _0|\mathbf {F}^{(1)}|^2-|\mathbf {F}^{(2)}|^2, \end{aligned}$$
(2.10)

for some positive values \(\alpha _i\), \(i=1, 0\). If we examine the function of two variables

$$\begin{aligned} f(\alpha _1, \alpha _0)=\frac{1}{\sqrt{\alpha _1\alpha _0}}[(\alpha _1+\alpha _0)\det \,\mathbf {F}-\alpha _1\alpha _0|\mathbf {F}^{(1)}|^2-|\mathbf {F}^{(2)}|^2], \end{aligned}$$

we realize that along the hyperbola \(\alpha _1\alpha _0=1\), f grows indefinitely (recall that \(\det \,\mathbf {F}>0\)), and eventually it becomes larger than any positive value, in particular, bigger than

$$\begin{aligned} 2\sqrt{|\mathbf {F}^{(1)}|^2|\mathbf {F}^{(2)}|^2-\det \,\mathbf {F}\,^2}. \end{aligned}$$

In this way (2.10) is fulfilled for some positive values for \(\alpha _1\) and \(\alpha _0\), and the proof of this step is finished.

If \(\det \,\mathbf {F}<0\), it is readily checked that the same calculations, now with negative values for \(\alpha _1\) and \(\alpha _0\), lead to the result \(Q\phi (\mathbf {F})=-\det \,\mathbf {F}\), because a minus sign appears in front of every occurrence of the determinant.

These paragraphs above show that

$$\begin{aligned} R_1\phi (\mathbf {F})=|\det \mathbf {F}|, \end{aligned}$$

if the first-level, rank-one convex hull \(R_1\xi \) of a function \(\xi \) is defined through

$$\begin{aligned} R_1\xi (\mathbf {F})=\inf \{t\xi (\mathbf {F}_1)+(1-t)\xi (\mathbf {F}_0): \mathbf {F}=t\mathbf {F}_1+(1-t)\mathbf {F}_0, t\in [0, 1], \mathbf {F}_1-\mathbf {F}_0,\hbox { rank-one}\}. \end{aligned}$$

Note that \(\phi (\mathbf {F})=|\det \mathbf {F}|\) on both sets in (2.7), and that \(Q\xi \le R_1\xi \) always. We conclude that

$$\begin{aligned} |\det \mathbf {F}|\le Q\phi (\mathbf {F})\le R_1\phi (\mathbf {F})=|\det \mathbf {F}|, \end{aligned}$$

and (2.6) holds. \(\square \)
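The lamination in the proof can be reproduced numerically: given \(\mathbf {F}\) with \(\det \mathbf {F}>0\), choose \(\alpha _1\alpha _0=1\) with \(\alpha _1+\alpha _0\) large enough for (2.10), find a root \(t\in (0, 1)\) of the rank-one condition (2.8), and check that the resulting endpoints lie in \(\mathbf {Z}_+\), average back to \(\mathbf {F}\), and carry energy \(\det \mathbf {F}\). A sketch under these assumptions (Python with numpy/scipy; tolerances are ad hoc):

```python
import numpy as np
from scipy.optimize import brentq

R = np.array([[0.0, -1.0], [1.0, 0.0]])

def phi(F):                                   # phi(F) = |F_1||F_2|, as in (2.1)
    return np.linalg.norm(F[0]) * np.linalg.norm(F[1])

rng = np.random.default_rng(1)
F = rng.normal(size=(2, 2))
if np.linalg.det(F) < 0:
    F[0] *= -1.0                              # enforce det F > 0
a, b = F[0], F[1]
d = np.linalg.det(F)

a1 = 2.0                                      # grow alpha_1 (alpha_0 = 1/alpha_1)
while (a1 + 1 / a1) * d - a.dot(a) - b.dot(b) \
        <= 2 * np.sqrt(max(phi(F)**2 - d**2, 0.0)) + 1e-6:
    a1 *= 2.0                                 # until condition (2.10) holds
a0 = 1.0 / a1

x1 = lambda t: (a0 * a + R @ b) / (t * (a0 - a1))           # formulas below (2.8)
x0 = lambda t: -(a1 * a + R @ b) / ((1 - t) * (a0 - a1))
g = lambda t: (x1(t) - x0(t)).dot(a1 * x1(t) - a0 * x0(t))  # condition (2.8)

ts = np.linspace(1e-4, 1 - 1e-4, 4000)        # g > 0 near both ends; find a root
k = next(i for i in range(len(ts) - 1) if g(ts[i]) * g(ts[i + 1]) < 0)
t = brentq(g, ts[k], ts[k + 1])

F1 = np.vstack([x1(t), a1 * (R @ x1(t))])     # both endpoints lie in Z_+
F0 = np.vstack([x0(t), a0 * (R @ x0(t))])
assert np.allclose(F, t * F1 + (1 - t) * F0)                # convex combination
assert abs(np.linalg.det(F1 - F0)) < 1e-6 * (1 + np.linalg.norm(F1 - F0)**2)
print(t * phi(F1) + (1 - t) * phi(F0), np.linalg.det(F))    # equal: laminate energy
```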

Proof of Theorem 2.1

Since \(-\det \mathbf {F}\) is a null-lagrangian, it is immediate to argue that

$$\begin{aligned} Q\psi (\mathbf {F})=|\det \mathbf {F}|-\det \mathbf {F}=2\det {}^-\mathbf {F}. \end{aligned}$$
(2.11)

As a consequence (\(\det ^-\) standing for the negative part of the determinant), we have a relaxation fact in the form

$$\begin{aligned} \inf _{\mathbf {u}\in \mathcal {A}}\left\{ \int _\Omega \psi (\nabla \mathbf {u})\,d\mathbf {x}\right\} =\inf _{\mathbf {u}\in \mathcal {A}}\left\{ \int _\Omega Q\psi (\nabla \mathbf {u})\,d\mathbf {x}\right\} , \end{aligned}$$
(2.12)

where

$$\begin{aligned} \mathcal {A}=\{\mathbf {u}\in H^1(\Omega ; \mathbb {R}^2): \mathbf {u}-\mathbf {u}_0\in H^1_0(\Omega ; \mathbb {R}^2)\} \end{aligned}$$

and \(\mathbf {u}_0\) is any extension of boundary data \((u_{1, 0}, u_{2, 0})\). Under an appropriate coercivity condition for \(\psi \) in \(H^1(\Omega ; \mathbb {R}^2)\), the right-hand side infimum in (2.12) would be a minimum [23]. However, since, as indicated earlier, we do not have such a condition, the infimum m might not be a minimum for either of the two problems. Equality (2.11) leads to

$$\begin{aligned} m=\inf _{\mathbf {u}\in \mathcal {A}}\left\{ \int _\Omega \psi (\nabla \mathbf {u})\,d\mathbf {x}\right\} =2\inf _{\mathbf {u}\in \mathcal {A}}\left\{ \int _\Omega \det {}^-\nabla \mathbf {u}\,d\mathbf {x}\right\} , \end{aligned}$$

which shows our statement, because the right-hand side vanishes for \(\mathbf {u}=\mathbf {u}_0\), the assumed extension of the boundary data. \(\square \)

Boundary data \((u_{1, 0}, u_{2, 0})\) which do not fall under the action of this result are irrelevant for the inverse conductivity problem. Yet, even so, our system (1.1) and (1.2) might admit solutions. Could there be a further requirement on boundary data forcing solutions \(\mathbf {u}=(u_1, u_2)\) of our system of PDEs to provide an absolute minimizer for the functional in (2.4) with \(I(\mathbf {u})=0\), so that, as a result, (1.11) or (2.2) would hold? Note how this vector equation ((1.11) or (2.2)) implies that the gradients of \(u_1\) and \(u_2\) must be orthogonal all over \(\Omega \), and so this property must also be retained by feasible boundary data. Namely

$$\begin{aligned} \nabla u_1\cdot \mathbf {t}\,\nabla u_2\cdot \mathbf {t}+\nabla u_1\cdot \mathbf {n}\,\nabla u_2\cdot \mathbf {n}=0 \end{aligned}$$

around \(\partial \Omega \). The left-hand side in this equation is precisely \(\nabla u_1\cdot \nabla u_2\). It is a non-linear boundary condition.

Theorem 2.3

Let \(\mathbf {u}=(u_1, u_2)\in H^1(\Omega ; \mathbb {R}^2)\) for a simply-connected domain \(\Omega \subset \mathbb {R}^2\). The following two statements are equivalent:

  1. (1)
    $$\begin{aligned} {\text {div}}\left( \frac{|\nabla u_2(\mathbf {x})|}{|\nabla u_1(\mathbf {x})|}\nabla u_1(\mathbf {x})\right) =0,\quad {\text {div}}\left( \frac{|\nabla u_1(\mathbf {x})|}{|\nabla u_2(\mathbf {x})|}\nabla u_2(\mathbf {x})\right) =0, \end{aligned}$$
    (2.13)

    and

    $$\begin{aligned} \frac{\partial u_1}{\partial \mathbf {t}}\frac{\partial u_2}{\partial \mathbf {t}}+\frac{\partial u_1}{\partial \mathbf {n}}\frac{\partial u_2}{\partial \mathbf {n}}=0\hbox { on }\partial \Omega , \end{aligned}$$
    (2.14)

    with both terms vanishing nowhere around \(\partial \Omega \);

  2. (2)
    $$\begin{aligned} \gamma (\mathbf {x})\nabla u_1(\mathbf {x})+\mathbf {R}\nabla u_2(\mathbf {x})=\mathbf {0}\hbox { in }\Omega , \end{aligned}$$
    (2.15)

    where

    $$\begin{aligned} \gamma (\mathbf {x})=\frac{|\nabla u_2(\mathbf {x})|}{|\nabla u_1(\mathbf {x})|}\ge 0\hbox { in }\Omega . \end{aligned}$$
    (2.16)

Proof

We already argued above that if (2.15) holds with \(\gamma (\mathbf {x})\) given by (2.16), then (2.13) and (2.14) are true. Let us therefore suppose that these last two conditions hold. The two PDEs imply the existence of two additional functions \(U_i\in H^1(\Omega )\), \(i=1, 2\), such that

$$\begin{aligned} \gamma (\mathbf {x})\nabla u_i(\mathbf {x})+\mathbf {R}\nabla U_i(\mathbf {x})=\mathbf {0}\hbox { in }\Omega ,\quad i=1, 2. \end{aligned}$$
(2.17)

Given how \(\gamma \) is defined through (2.16), we can be certain that

$$\begin{aligned} |\nabla u_2(\mathbf {x})|=|\nabla U_1(\mathbf {x})|\hbox { in }\Omega , \end{aligned}$$

and so there is a rotation \(\mathbf {Q}(\mathbf {x})\) of angle \(\theta (\mathbf {x})\) such that

$$\begin{aligned} \nabla U_1(\mathbf {x})=\mathbf {Q}(\mathbf {x})\nabla u_2(\mathbf {x}),\quad \mathbf {Q}(\mathbf {x})=\begin{pmatrix}\cos \theta (\mathbf {x})&{}-\sin \theta (\mathbf {x})\\ \sin \theta (\mathbf {x})&{}\cos \theta (\mathbf {x})\end{pmatrix}. \end{aligned}$$

If we take back this information to (2.17), we find

$$\begin{aligned} \gamma (\mathbf {x})\nabla u_1(\mathbf {x})+\mathbf {R}\mathbf {Q}(\mathbf {x})\nabla u_2(\mathbf {x})=\mathbf {0}\hbox { in }\Omega . \end{aligned}$$

By shifting the angle \(\theta (\mathbf {x})\) by a constant, we may assume that

$$\begin{aligned} \mathbf {R}\mathbf {Q}(\mathbf {x})=\begin{pmatrix}\cos \theta (\mathbf {x})&{}-\sin \theta (\mathbf {x})\\ \sin \theta (\mathbf {x})&{}\cos \theta (\mathbf {x})\end{pmatrix}, \end{aligned}$$

and

$$\begin{aligned} \gamma (\mathbf {x})\nabla u_1(\mathbf {x})+\begin{pmatrix}\cos \theta (\mathbf {x})&{}-\sin \theta (\mathbf {x})\\ \sin \theta (\mathbf {x})&{}\cos \theta (\mathbf {x})\end{pmatrix}\nabla u_2(\mathbf {x})=\mathbf {0}\hbox { in }\Omega . \end{aligned}$$
(2.18)

We would like to show that, necessarily, \(\theta (\mathbf {x})\) is identically \(\pi /2\), since if this is so then we recover (2.15) as desired.

Again by our observations made earlier about the implications of (2.15), it is clear that \(\theta \) restricted to \(\partial \Omega \) is \(\pi /2\). This is the boundary condition (2.14). Our intention is to study the PDE that \(\theta \) ought to verify coming from the first equation in (2.13) and conclude that the only possible solution is \(\theta \equiv \pi /2\).

If we combine (2.18) with the first equation in (2.13), we conclude that

$$\begin{aligned} {\text {div}}\left( \begin{pmatrix}\cos \theta (\mathbf {x})&{}-\sin \theta (\mathbf {x})\\ \sin \theta (\mathbf {x})&{}\cos \theta (\mathbf {x})\end{pmatrix}\nabla u_2(\mathbf {x})\right) =0\hbox { in }\Omega , \end{aligned}$$
(2.19)

in addition to having \(\theta =\pi /2\) on \(\partial \Omega \). However, a fundamental point is that \(\theta \equiv \pi /2\) is indeed a global solution in \(\Omega \) of the first-order, quasilinear PDE (2.19), together with the boundary condition, because the divergence of a rotated gradient vanishes identically, and this fact is independent of what the function \(u_2\) is.

If \(u_2\) is analytic, we can expand (2.19) to obtain

$$\begin{aligned} -(\sin \theta \,u_{2, x}+\cos \theta \,u_{2, y})\theta _x+(\cos \theta \,u_{2, x}-\sin \theta \,u_{2, y})\theta _y+\cos \theta \,\Delta u_2=0. \end{aligned}$$
(2.20)

Given that the four terms

$$\begin{aligned} \frac{\partial u_1}{\partial \mathbf {t}},\quad \frac{\partial u_2}{\partial \mathbf {t}},\quad \frac{\partial u_1}{\partial \mathbf {n}},\quad \frac{\partial u_2}{\partial \mathbf {n}} \end{aligned}$$

do not vanish on \(\partial \Omega \) (a non-characteristic boundary condition), Holmgren's uniqueness theorem applies, and knowing a priori that \(\theta \equiv \pi /2\) is a global solution in \(\Omega \), we can conclude that it is the only possible solution. If \(u_2\) is not analytic, we can proceed by approximation in (2.20), taking advantage again of the fact that we know beforehand that \(\theta \equiv \pi /2\) is a global solution independently of the approximating sequence. \(\square \)
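The expansion (2.20) is a routine computation that can be confirmed symbolically; the following small sketch (assuming Python with sympy, with \(\theta \) and \(u_2\) generic smooth functions) checks that \({\text {div}}(\mathbf {Q}(\theta )\nabla u_2)\) and the left-hand side of (2.20) agree identically:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
th = sp.Function('theta')(x, y)               # the unknown rotation angle
u = sp.Function('u2')(x, y)                   # a generic function u_2

Q = sp.Matrix([[sp.cos(th), -sp.sin(th)], [sp.sin(th), sp.cos(th)]])
W = Q * sp.Matrix([sp.diff(u, x), sp.diff(u, y)])
div_W = sp.diff(W[0], x) + sp.diff(W[1], y)   # left-hand side of (2.19)

lhs_220 = (-(sp.sin(th) * sp.diff(u, x) + sp.cos(th) * sp.diff(u, y)) * sp.diff(th, x)
           + (sp.cos(th) * sp.diff(u, x) - sp.sin(th) * sp.diff(u, y)) * sp.diff(th, y)
           + sp.cos(th) * (sp.diff(u, x, 2) + sp.diff(u, y, 2)))

print(sp.simplify(sp.expand(div_W - lhs_220)))   # 0
```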

We therefore see that boundary condition (2.14) is the key to knowing whether optimality system (2.13) corresponds to a global minimizer of our problem with \(m=0\). It is interesting, from a practical point of view, to count on a different criterion.

3 Inverse problems in conductivity

Motivated by our observations in the Introduction, we introduce the following concept.

Definition 3.1

We will say that a pair of functions \((u_1, u_2)\) in \(H^1(\Omega )\) is a feasible solution for an inverse problem in conductivity if

$$\begin{aligned} |\nabla u_2|\nabla u_1+|\nabla u_1|\mathbf {R}\nabla u_2=\mathbf {0}\hbox { a.e. in }\Omega . \end{aligned}$$
(3.1)

The corresponding conductivity coefficient is the non-negative measurable function

$$\begin{aligned} \gamma (\mathbf {x})=\frac{|\nabla u_2|}{|\nabla u_1|}. \end{aligned}$$

Based on our remarks in Sect. 2, we see how feasible pairs for inverse problems in conductivity are intimately related to solutions of our original system of PDEs corresponding to global minimizers of the functional I in (2.4).

Proposition 3.1

Global minimizers for the functional I in (2.4) with a vanishing minimum \(m=0\) are exactly feasible pairs for inverse problems in conductivity.

Proof

The observation that

$$\begin{aligned} \left| |\nabla u_2|\nabla u_1+|\nabla u_1|\mathbf {R}\nabla u_2\right| ^2=2\,|\nabla u_1|\,|\nabla u_2|\,\psi (\nabla \mathbf {u}) \end{aligned}$$

for \(\psi \) given in (2.3) and \(\mathbf {u}=(u_1, u_2)\), immediately lets us see the validity of our statement. \(\square \)
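This identity (note the factor 2, consistent with the convention \(\det \nabla \mathbf {u}=-\nabla u_1\cdot \mathbf {R}\nabla u_2\) used throughout) can be checked on random gradients; a brief sketch assuming Python with numpy:

```python
import numpy as np

R = np.array([[0.0, -1.0], [1.0, 0.0]])
rng = np.random.default_rng(2)
for _ in range(1_000):
    p, q = rng.normal(size=2), rng.normal(size=2)       # grad u_1, grad u_2
    np_, nq = np.linalg.norm(p), np.linalg.norm(q)
    psi = np_ * nq - np.linalg.det(np.vstack([p, q]))   # psi from (2.3)
    lhs = np.linalg.norm(nq * p + np_ * (R @ q))**2
    assert np.isclose(lhs, 2 * np_ * nq * psi)
print("identity verified on 1000 samples")
```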

This simple proposition suggests the possibility of finding or approximating feasible pairs for inverse problems in conductivity by approximating solutions of our initial system (1.1) and (1.2), certifying convergence by keeping track of the value of I in (2.4). If for a sequence \(\{\mathbf {u}_j\}\) we have \(I(\mathbf {u}_j)\searrow 0\), then we know that it furnishes a good approximation of a feasible pair, though possibly not the one we would like to see. We would like to test the procedure in practice.

We have already insisted on the tremendous difficulties associated with the vector minimization problem (2.4), and have provided, in the preceding section, some partial answers about the vanishing of the infimum, its attainment, and how these issues may relate to our initial system of PDEs as the Euler–Lagrange system associated with (2.4). The various possibilities depend dramatically on the boundary values adopted around \(\partial \Omega \). As indicated above, from the perspective of inverse problems in conductivity, we would like to explore, so as to test typical minimization algorithms like conjugate gradient or Newton–Raphson methods, how to build feasible boundary data for which we can be sure that there are minimizers of (2.4) and solutions of (1.1) and (1.2) complying with Definition 3.1. After all, in real problems we do know that there should be pairs of functions \((u_1, u_2)\) complying with the vector equation in this definition. It is also interesting to point out that we have a sure certificate of convergence, in that the functional in (2.4) should be made very small if we are to find good approximations of such pairs. It is, however, very well-established that there might be (infinitely-)many pairs complying with Definition 3.1 and preserving the same boundary data around \(\partial \Omega \) (even in a multi-measurement scenario). This, in particular, implies that one can anticipate serious difficulties in the practical numerical implementation of examples.

3.1 A procedure to generate admissible boundary data

We propose one way to generate (synthetic) pairs of boundary data around \(\partial \Omega \) for which the non-linear system of PDEs (1.1) and (1.2) admits solutions, and these are such that equation (1.11) holds, i.e. the minimum vanishes. As we know, these minimizers correspond to feasible solutions of the inverse problem in conductivity. The procedure is as follows.

(1) Take \(\gamma (\mathbf {x})\ge \gamma _0>0\) in \(\Omega \), and \(u_0\in H^{1/2}(\partial \Omega )\) freely.

(2) Solve the problem

    $$\begin{aligned} {\text {div}}(\gamma \nabla u)=0\hbox { in }\Omega ,\quad u=u_0\hbox { on }\partial \Omega , \end{aligned}$$

    and compute the sign-reversed tangential derivative \(w_0=-\nabla u\cdot \mathbf {t}\) around \(\partial \Omega \).

(3) Find the solution of the Neumann problem

    $$\begin{aligned} {\text {div}}\left( \frac{1}{\gamma }\nabla v\right) =0\hbox { in }\Omega ,\quad \frac{1}{\gamma }\nabla v\cdot \mathbf {n}=w_0\hbox { on }\partial \Omega , \end{aligned}$$
    (3.2)

    under some normalization condition.

(4) Take \(v_0=\left. v\right| _{\partial \Omega }\).
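On the unit disk with \(\gamma \equiv 1\) and \(u_0=x\), every step has a closed form (\(u=x\), and the companion potential in (1.6) is \(v=y\)), which makes the sign conventions easy to double-check symbolically. A sketch assuming Python with sympy:

```python
import sympy as sp

s = sp.symbols('s', real=True)                # angle along the unit circle
n = sp.Matrix([sp.cos(s), sp.sin(s)])         # outer normal on the boundary
R = sp.Matrix([[0, -1], [1, 0]])
t = R * n                                     # counterclockwise tangent t = R n

grad_u = sp.Matrix([1, 0])                    # u = x, gamma = 1
grad_v = sp.Matrix([0, 1])                    # v = y, the companion in (1.6)
print(sp.simplify(grad_u + R * grad_v))       # (0, 0): equation (1.6) holds

w0 = -grad_u.dot(t)                           # Neumann datum from step (2)
print(sp.simplify(grad_v.dot(n) - w0))        # 0: v solves (3.2), as claimed
```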

In practice, if \(\gamma (\mathbf {x})\) is a true, unknown conductivity coefficient, and \(u_0\in H^{1/2}(\partial \Omega )\) is given, the response

$$\begin{aligned} \overline{v}_0=\gamma \nabla u\cdot \mathbf {n}\end{aligned}$$

around \(\partial \Omega \) is the measured data, where

$$\begin{aligned} {\text {div}}(\gamma \nabla u)=0\hbox { in }\Omega ,\quad u=u_0\hbox { on }\partial \Omega . \end{aligned}$$

Note that the relationship between \(\overline{v}_0\) and our boundary datum \(v_0\), coming from the solution of (3.2) or from (1.6), is given through

$$\begin{aligned} v_0(\mathbf {x})=\int _{\mathbf {x}_0}^\mathbf {x}\overline{v}_0(\mathbf {z})\,dS(\mathbf {z}), \end{aligned}$$

where integration is performed around \(\partial \Omega \), and \(\mathbf {x}_0\) is an arbitrarily chosen point on \(\partial \Omega \). Measurement uncertainty in \(\overline{v}_0\) is transmitted to \(v_0\).

Proposition 3.2

Suppose boundary data \((u_0, v_0)\) are chosen as just indicated. Then the pair of functions (uv), computed through the process, is a solution of (1.1) and (1.2), for which (1.11) is valid. In particular, \(m=0\) is attained.

Proof

The proof amounts to writing

$$\begin{aligned} \gamma \nabla u+\mathbf {R}\nabla w=\mathbf {0}\hbox { in }\Omega , \end{aligned}$$

for some \(w\in H^1(\Omega )\), unique up to an additive constant. As a consequence of the previous vector equation, it is immediate to check that w must also be a solution of problem (3.2), and so the difference between w and v is a constant, i.e.

$$\begin{aligned} \gamma \nabla u+\mathbf {R}\nabla v=\mathbf {0}\hbox { in }\Omega . \end{aligned}$$

\(\square \)

This will be our method to generate synthetic boundary data for numerical experiments for which the inverse conductivity problem is meaningful. Note that if, once the pair (uv) has been generated furnishing a couple of feasible boundary data for the inverse problem, we change to some other pair \((\tilde{u}, \tilde{v})\) preserving boundary information

$$\begin{aligned} u-\tilde{u}, v-\tilde{v}\in H^1_0(\Omega ), \end{aligned}$$
(3.3)

the use of an iterative procedure to lead our functional (2.4) to the corresponding local minimum starting from \((\tilde{u}, \tilde{v})\) will provide, through (1.10), a coherent conductivity coefficient if I in (2.4) is led to zero. It might happen, however, that the computed optimal \(\tilde{\gamma }\) is different from the \(\gamma \) we started with. This could be an indication and a consequence of the non-uniqueness of the inverse problem of conductivity with one single measurement, or with a finite number of them. We will return to this point in the section on numerical experiments.

As a matter of fact, our scheme above is the only way to produce boundary-data pairs for which \(m=0\) is attained for our functional (2.4). This is pretty clear after all of our previous discussions. Indeed, if we had

$$\begin{aligned} I(\mathbf {u})=0,\quad \mathbf {u}=(u_1, u_2)\in H^1(\Omega ; \mathbb {R}^2), \end{aligned}$$

then

$$\begin{aligned} |\nabla u_1(\mathbf {x})|\,|\nabla u_2(\mathbf {x})|-\det \nabla \mathbf {u}(\mathbf {x})=0 \end{aligned}$$

for a.e. \(\mathbf {x}\in \Omega \). This is like saying

$$\begin{aligned} |\nabla u_1(\mathbf {x})|\,|\nabla u_2(\mathbf {x})|=-\nabla u_1(\mathbf {x})\cdot \mathbf {R}\nabla u_2(\mathbf {x}), \end{aligned}$$

for a.e. \(\mathbf {x}\in \Omega \). The standard Cauchy–Schwarz inequality for the inner product then forces vector equation (1.6) to hold for some non-negative conductivity coefficient \(\gamma \) (which would be the quotient \(|\nabla u_2|/|\nabla u_1|\)), and then \(u_1\) and \(u_2\) are precisely the functions generated by our procedure for such a coefficient \(\gamma \).

On the other hand, Theorem 2.1 indicates that it suffices to provide a mapping

$$\begin{aligned} \mathbf {u}_0:\Omega \rightarrow \mathbb {R}^2 \end{aligned}$$

with positive determinant \(\det \nabla \mathbf {u}_0>0\) a.e., furnishing boundary data, to be sure that the infimum m vanishes. Most of the time, when such a mapping \(\mathbf {u}_0\) is given in a casual way, \(m=0\) will not be a minimum, since for a minimizing sequence of pairs \(\mathbf {u}_j\) the sequence of scalars

$$\begin{aligned} \frac{|\nabla u_{j, 2}|}{|\nabla u_{j, 1}|} \end{aligned}$$

may only G-converge [24] to some homogenized elliptic, symmetric matrix

$$\begin{aligned} \Gamma (\mathbf {x})\in \mathbf {M}^{2\times 2}. \end{aligned}$$

Indeed, we can generalize our mechanism to generate interesting boundary data by replacing \(\gamma (\mathbf {x})\) with an elliptic, symmetric matrix \(\Gamma (\mathbf {x})\):

(1) Take \(\Gamma (\mathbf {x})\ge \gamma _0\mathbf {1}\) (in the sense of symmetric matrices), \(\gamma _0>0\) in \(\Omega \), and \(u_0\in H^{1/2}(\partial \Omega )\) freely.

(2) Solve the problem

    $$\begin{aligned} {\text {div}}(\Gamma \nabla u)=0\hbox { in }\Omega ,\quad u=u_0\hbox { on }\partial \Omega , \end{aligned}$$

    and compute the tangential derivative \(w_0=\nabla u\cdot \mathbf {t}\) around \(\partial \Omega \).

(3) Find a solution of the Neumann problem

    $$\begin{aligned} {\text {div}}(\Gamma _l\nabla v)=0\hbox { in }\Omega ,\quad \Gamma _l\nabla v\cdot \mathbf {n}=w_0\hbox { on }\partial \Omega , \end{aligned}$$
    (3.4)

    with \(\Gamma _l=\mathbf {R}\Gamma ^{-1}\mathbf {R}\) (the sign reversal of the scalar procedure is now absorbed into \(\Gamma _l\), which carries two factors of \(\mathbf {R}\)).

(4) Take \(v_0=\left. v\right| _{\partial \Omega }\).

For such a pair \((u_0, v_0)\), one would expect that \(m=0\) is not attained, as in a typical homogenization process [25].

4 The multi-measurement situation

We recall very briefly that in this case the system for an unknown field

$$\begin{aligned} \mathbf {u}(\mathbf {x}):\Omega \subset \mathbb {R}^2\rightarrow \mathbb {R}^{2N} \end{aligned}$$

is

$$\begin{aligned}&{\text {div}}\left( \frac{|\nabla \mathbf {u}_2|}{|\nabla \mathbf {u}_1|}\nabla u_1^{(j)}\right) =0\hbox { in }\Omega ,\quad u_1^{(j)}=u_{1, 0}^{(j)}\hbox { on }\partial \Omega ,\end{aligned}$$
(4.1)
$$\begin{aligned}&{\text {div}}\left( \frac{|\nabla \mathbf {u}_1|}{|\nabla \mathbf {u}_2|}\nabla u_2^{(j)}\right) =0\hbox { in }\Omega ,\quad u_2^{(j)}=u_{2, 0}^{(j)}\hbox { on }\partial \Omega , \end{aligned}$$
(4.2)

for \(j=1, 2, \dots , N\), where

$$\begin{aligned} \mathbf {u}=(\mathbf {u}^{(j)})_{j=1, 2, \dots , N}=(u^{(j)}_1, u^{(j)}_2)_{j=1, 2, \dots , N},\\ \mathbf {u}^{(j)}=(u^{(j)}_1, u^{(j)}_2),\quad \mathbf {u}_i=(u^{(j)}_i)_{j=1, 2, \dots , N},\quad i=1, 2. \end{aligned}$$

We also have an underlying functional whose Euler–Lagrange system is precisely (4.1) and (4.2), namely

$$\begin{aligned} I_N(\mathbf {u})=\int _\Omega \left( |\nabla \mathbf {u}_1|\,|\nabla \mathbf {u}_2|-\sum _{j=1}^N\det \nabla \mathbf {u}^{(j)}\right) \,d\mathbf {x}. \end{aligned}$$
(4.3)

We would have a parallel method, applied componentwise, to generate synthetic sets of data for which \(I_N\) in (4.3) vanishes. It is interesting to note that the formula for the recovered \(\gamma (\mathbf {x})\) in the multi-measurement situation is

$$\begin{aligned} \gamma (\mathbf {x})=\frac{|\nabla \mathbf {u}_2(\mathbf {x})|}{|\nabla \mathbf {u}_1(\mathbf {x})|}\quad \text {a.e. }\mathbf {x}\in \Omega , \end{aligned}$$

so the information coming from every measurement for \(j=1, 2, \dots , N\) is taken into account.
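The pointwise integrand of (4.3) is straightforward to evaluate; the following small numpy sketch (the array conventions are ours) also checks that it vanishes on gradients satisfying \(\gamma \nabla u_1^{(j)}+\mathbf {R}\nabla u_2^{(j)}=\mathbf {0}\) for every j:

```python
import numpy as np

R = np.array([[0.0, -1.0], [1.0, 0.0]])

def integrand_IN(G1, G2):
    """Pointwise integrand of (4.3); G1, G2 have shape (N, 2) and hold the
    gradients of u_1^{(j)}, u_2^{(j)} at a fixed point, j = 1, ..., N."""
    dets = G1[:, 0] * G2[:, 1] - G1[:, 1] * G2[:, 0]   # det of each pair
    return np.linalg.norm(G1) * np.linalg.norm(G2) - dets.sum()

rng = np.random.default_rng(3)
G1 = rng.normal(size=(3, 2))                  # N = 3 measurements
G2 = 2.5 * G1 @ R.T                           # gamma grad u_1 + R grad u_2 = 0
print(integrand_IN(G1, G2))                   # 0 (up to rounding)
```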

5 Approximation

In this section, we focus on various numerical examples to solve the non-linear system of PDEs

$$\begin{aligned} \left\{ \begin{array}{l} \mathbf {u}-\mathbf {u}_{0}\in H^1_0(\Omega ;\mathbb {R}^2)^N,\mathbf {u}=(\mathbf {u}_1, \mathbf {u}_2), \mathbf {u}_i\in H^1(\Omega )^N\\ \\ \displaystyle -\mathrm{div}\left( \frac{|\nabla \mathbf {u}_2(\mathbf {x})|}{|\nabla \mathbf {u}_1(\mathbf {x})|}\nabla u_1^{(j)}\right) =0 \text { in }\Omega ,\quad u_1^{(j)}=u_{0,1}^{(j)}\text { on } \partial \Omega , \\ \displaystyle -\mathrm{div}\left( \frac{|\nabla \mathbf {u}_1(\mathbf {x})|}{|\nabla \mathbf {u}_2(\mathbf {x})|}\nabla u_2^{(j)}\right) =0 \text { in }\Omega ,\quad u_2^{(j)}=u_{0,2}^{(j)}\text { on } \partial \Omega , \\ \mathbf {u}_i=(u^{(j)}_i)_j, j=1,\dots ,N, \end{array}\right. \end{aligned}$$
(5.1)

that corresponds to the solution of the associated inverse problem

$$\begin{aligned} \left\{ \begin{array}{ll} -\mathrm{div}\left( \gamma \nabla u_1^{(j)}\right) =0 &{}\text { in }\Omega ,\\ u_1^{(j)}=u_{0,1}^{(j)}&{}\text { on } \partial \Omega ,\\ \gamma {\partial u_1^{(j)}\over \partial \nu }=\nabla u_{0,2}^{(j)}\cdot \mathbf {t}&{}\text { on } \partial \Omega , \end{array}\right. \quad j=1,\dots ,N. \end{aligned}$$
(5.2)

If the boundary data \(\mathbf {u}_{0}\) are “well-chosen” in the sense that inverse problem (5.2) is well-posed (for instance, following the procedure in Sect. 3.1), then, as indicated above, one solution of the inverse problem is given precisely by

$$\begin{aligned} \displaystyle \gamma (\mathbf {x})=\frac{|\nabla \mathbf {u}_2(\mathbf {x})|}{|\nabla \mathbf {u}_1(\mathbf {x})|}\quad \text {a.e. }\mathbf {x}\in \Omega . \end{aligned}$$

We then propose to solve numerically the non-linear system of PDEs (5.1) in order to obtain a solution of the inverse problem (5.2). We consider the weak formulation of system (5.1), which can be expressed in the form

$$\begin{aligned} \begin{array}{ll} \displaystyle \mathcal {L}(\mathbf {u},\mathbf {v})=\displaystyle \int _\Omega \left( \frac{|\nabla \mathbf {u}_2(\mathbf {x})|}{|\nabla \mathbf {u}_1(\mathbf {x})|}\nabla \mathbf {u}_1:\nabla \mathbf {v}_1+\frac{|\nabla \mathbf {u}_1(\mathbf {x})|}{|\nabla \mathbf {u}_2(\mathbf {x})|}\nabla \mathbf {u}_2:\nabla \mathbf {v}_2\right) \,d\mathbf {x}=0, \end{array} \end{aligned}$$
(5.3)

which should be valid for every \(\mathbf {v}\in H^1_0(\Omega ;\mathbb {R}^2)^N\).

We will use a standard Newton–Raphson algorithm in order to solve (5.1), or equivalently (5.3).

We want to find functions \(\mathbf {u}\in H^1(\Omega ;\mathbb {R}^2)^N\) such that

$$\begin{aligned} \mathbf {u}-\mathbf {u}_{0}\in H^1_0(\Omega ;\mathbb {R}^2)^N,\quad \mathcal {L}(\mathbf {u},\mathbf {v})=0 \end{aligned}$$

for all \(\mathbf {v}\in H^1_0(\Omega ;\mathbb {R}^2)^N\). The Newton–Raphson algorithm to solve numerically the non-linear problem (5.3) is the following:

(1) We choose an admissible initialization \(\mathbf {u}^0\in H^1(\Omega ;\mathbb {R}^2)^N\).

(2) Iterate until convergence \(\left( I(\mathbf {u}^k)< tol \hbox { or } \frac{||\mathbf {w}^k||_{\infty }}{||\mathbf {u}^k||_{\infty }}<tol \right) \):

    • take \(\mathbf {w}^k\in H^1_0(\Omega ;\mathbb {R}^2)^N\) such that

      $$\begin{aligned} D\mathcal {L}(\mathbf {u}^k,\mathbf {v})\,\mathbf {w}^k=\mathcal {L}(\mathbf {u}^k,\mathbf {v}), \end{aligned}$$

      for every \(\mathbf {v}\in H^1_0(\Omega ;\mathbb {R}^2)^N\), where \(D\mathcal {L}(\mathbf {u},\mathbf {v})\,\mathbf {w}\) is defined by

      $$\begin{aligned} \begin{array}{ll} D\mathcal {L}(\mathbf {u},v^{(j)})\,w^{(j)}=\\ \displaystyle \int _\Omega \frac{|\nabla \mathbf {u}_2(\mathbf {x})|}{|\nabla \mathbf {u}_1(\mathbf {x})|} \left( \nabla v_{1}^{(j)}\cdot \nabla w_{1}^{(j)}+\nabla v_1^{(j)}\cdot \nabla u_1^{(j)}\left( {\displaystyle \nabla u_2^{(j)}\cdot \nabla w_2^{(j)}\over \displaystyle |\nabla \mathbf {u}_2(\mathbf {x})|^2}-{\displaystyle \nabla u_1^{(j)}\cdot \nabla w_1^{(j)}\over \displaystyle |\nabla \mathbf {u}_1(\mathbf {x})|^2}\right) \right) \,d\mathbf {x}\\ +\displaystyle \int _\Omega \frac{|\nabla \mathbf {u}_1(\mathbf {x})|}{|\nabla \mathbf {u}_2(\mathbf {x})|} \left( \nabla v_{2}^{(j)}\cdot \nabla w_{2}^{(j)}+\nabla v_2^{(j)}\cdot \nabla u_2^{(j)}\left( {\displaystyle \nabla u_1^{(j)}\cdot \nabla w_1^{(j)}\over \displaystyle |\nabla \mathbf {u}_1(\mathbf {x})|^2}-{\displaystyle \nabla u_2^{(j)}\cdot \nabla w_2^{(j)}\over \displaystyle |\nabla \mathbf {u}_2(\mathbf {x})|^2}\right) \right) \,d\mathbf {x};\\ \end{array} \end{aligned}$$
      (5.4)

    • update

      $$\begin{aligned} \mathbf {u}^{k+1}=\mathbf {u}^k-\mathbf {w}^k. \end{aligned}$$
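As an illustration of this Newton-type resolution at the discrete level, the following self-contained sketch (a toy finite-difference stand-in for the FEM implementation of Sect. 5.1, assuming Python with numpy/scipy) solves the \(N=1\) system on the unit square with a matrix-free Newton–Krylov method; the synthetic data come from the exact pair \(u_1^*=e^{-x}\), \(u_2^*=-y\), \(\gamma ^*=e^x\), and tolerances or mesh size may need tuning:

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 16                                         # grid cells per side
h = 1.0 / n
xs = np.linspace(0.0, 1.0, n + 1)
X, Y = np.meshgrid(xs, xs, indexing="ij")      # U[i, j] ~ (x_i, y_j)
U1b, U2b = np.exp(-X), -Y                      # boundary data (exact pair)

def embed(z):
    """Place the interior unknowns into full grids carrying the boundary data."""
    U1, U2 = U1b.copy(), U2b.copy()
    z = z.reshape(2, n - 1, n - 1)
    U1[1:n, 1:n], U2[1:n, 1:n] = z[0], z[1]
    return U1, U2

def divergence(gam, U):
    """5-point discretization of div(gam grad U) at the interior nodes."""
    gE = 0.5 * (gam[1:-1, 1:-1] + gam[2:, 1:-1])
    gW = 0.5 * (gam[1:-1, 1:-1] + gam[:-2, 1:-1])
    gN = 0.5 * (gam[1:-1, 1:-1] + gam[1:-1, 2:])
    gS = 0.5 * (gam[1:-1, 1:-1] + gam[1:-1, :-2])
    return (gE * (U[2:, 1:-1] - U[1:-1, 1:-1]) - gW * (U[1:-1, 1:-1] - U[:-2, 1:-1])
            + gN * (U[1:-1, 2:] - U[1:-1, 1:-1]) - gS * (U[1:-1, 1:-1] - U[1:-1, :-2])) / h**2

def residual(z):
    U1, U2 = embed(z)
    g1x, g1y = np.gradient(U1, h)
    g2x, g2y = np.gradient(U2, h)
    gam = np.hypot(g2x, g2y) / np.maximum(np.hypot(g1x, g1y), 1e-12)
    return np.concatenate([divergence(gam, U1).ravel(),
                           divergence(1.0 / gam, U2).ravel()])

z0 = np.concatenate([U1b[1:n, 1:n].ravel(), U2b[1:n, 1:n].ravel()])
z = newton_krylov(residual, z0, f_tol=1e-6)    # Newton with Krylov linear solves
U1, U2 = embed(z)
g1x, g1y = np.gradient(U1, h)
g2x, g2y = np.gradient(U2, h)
gam = np.hypot(g2x, g2y) / np.hypot(g1x, g1y)
print("max |gamma - exp(x)| =", np.abs(gam - np.exp(X)).max())
```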

We would like to bring the reader's attention to the fully non-linear and non-convex character of the system of PDEs (5.1). This character is associated with the existence, or lack thereof, of solutions when the boundary data \(\mathbf {u}_{0}^{(j)}=(u_{0,1}^{(j)},u_{0,2}^{(j)}), \, j=1,\cdots ,N\), are not “well-chosen”. Moreover, even when existence of solutions holds, a crucial issue is uniqueness: the existence of different solutions can be expected. The lack of uniqueness of solutions of Calderón's problem for discontinuous coefficients and a finite number of measurements is well-known in general. This fact must be reflected in a lack of uniqueness of solutions of the corresponding non-linear system of PDEs. From our formulation, we will consider that a solution \(\mathbf {u}\in H^1(\Omega ;\mathbb {R}^2)^N\) of the system of PDEs (5.1) such that \(I(\mathbf {u})=0\) provides a density \(\gamma \) solving the inverse problem (5.2). And we can establish a direct connection between the existence (or not) and uniqueness (or not) of solutions of the non-linear system of PDEs and of the inverse problem.

As a matter of fact, we have considered different strategies for the numerical resolution of the inverse problem (5.2), in addition to the Newton–Raphson scheme. We have also explored gradient descent algorithms (conjugate gradient or optimal step) for the functional I in (2.4) in order to approximate optimal solutions of the variational problems in (2.12). For the non-linear system of PDEs (5.1), we have also examined a fixed point algorithm, given its simplicity, as follows:

(1) choose an admissible initialization \(\mathbf {u}^0\in H^1(\Omega ;\mathbb {R}^2)^N\);

(2) iterate until convergence \(\left( I(\mathbf {u}^k)< tol \right) \):

    • take

      $$\begin{aligned} \gamma ^k= \displaystyle \frac{|\nabla \mathbf {u}_2^k(\mathbf {x})|}{|\nabla \mathbf {u}_1^k(\mathbf {x})|}; \end{aligned}$$

    • solve

      $$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle -\mathrm{div}\left( \gamma ^k\nabla u_1^{(j),k+1}\right) =0 \text { in }\Omega ,\quad u_1^{(j),k+1}=u_{0,1}^{(j)}\text { on } \partial \Omega , \\ \displaystyle -\mathrm{div}\left( \frac{1}{\gamma ^k}\nabla u_2^{(j),k+1}\right) =0 \text { in }\Omega ,\quad u_2^{(j),k+1}=u_{0,2}^{(j)}\text { on } \partial \Omega , \end{array} \right. \quad j=1,\dots ,N. \end{aligned}$$
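For the same toy finite-difference setting (again our own stand-in, not the FreeFem++ implementation used below; Python with numpy/scipy assumed), the fixed point scheme reads as follows, with the value of I in (2.4) monitored as the certificate of convergence; one expects it to decrease towards discretization-level zero:

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

n = 32                                         # grid cells per side, unit square
h = 1.0 / n
xs = np.linspace(0.0, 1.0, n + 1)
X, Y = np.meshgrid(xs, xs, indexing="ij")

def solve_dirichlet(gam, g):
    """5-point FD solve of div(gam grad u) = 0 with u = g on the boundary."""
    m = n - 1
    idx = lambda i, j: (i - 1) * m + (j - 1)
    A, b = lil_matrix((m * m, m * m)), np.zeros(m * m)
    for i in range(1, n):
        for j in range(1, n):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                gf = 0.5 * (gam[i, j] + gam[i + di, j + dj])   # face conductivity
                A[idx(i, j), idx(i, j)] += gf
                ii, jj = i + di, j + dj
                if 1 <= ii < n and 1 <= jj < n:
                    A[idx(i, j), idx(ii, jj)] -= gf
                else:
                    b[idx(i, j)] += gf * g[ii, jj]              # boundary datum
    U = g.copy()
    U[1:n, 1:n] = spsolve(csr_matrix(A), b).reshape(m, m)
    return U

def cost(U1, U2):
    """I(u) from (2.4): integral of |grad u1||grad u2| - det grad u."""
    g1x, g1y = np.gradient(U1, h)
    g2x, g2y = np.gradient(U2, h)
    return ((np.hypot(g1x, g1y) * np.hypot(g2x, g2y)
             - (g1x * g2y - g1y * g2x)).sum()) * h * h

one = np.ones_like(X)
U1 = solve_dirichlet(one, np.exp(-X))          # harmonic extensions of the data,
U2 = solve_dirichlet(one, -Y)                  # in the spirit of Initialization 2

for k in range(20):                            # the fixed point iteration
    g1x, g1y = np.gradient(U1, h)
    g2x, g2y = np.gradient(U2, h)
    gam = np.hypot(g2x, g2y) / np.maximum(np.hypot(g1x, g1y), 1e-12)
    U1 = solve_dirichlet(gam, U1)              # div(gam grad u1) = 0
    U2 = solve_dirichlet(1.0 / gam, U2)        # div((1/gam) grad u2) = 0
    print(k, cost(U1, U2))

print("max |gamma - exp(x)| =", np.abs(gam - np.exp(X)).max())
```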

Concerning the practical behavior of these three possibilities, it is our experience that all three algorithms behave well and furnish convergence, in the sense that I in (2.4) converges to zero. As has been discussed earlier, this is our main certificate of convergence. However, the Newton–Raphson method and the fixed point algorithm are faster than their gradient descent counterpart. We therefore stick to the Newton–Raphson method for our numerical experiments.

It is also appropriate to insist, in addition to the non-linear character of the problem, on the lack of convexity (quasiconvexity) of the underlying functional. This also has important consequences for the numerical approximation, making simulations harder than might be expected. From the procedure for choosing boundary data described in Sect. 3.1, one can guarantee the existence of at least one solution for the chosen boundary data; or equivalently, one can be sure that there exists

$$\begin{aligned} \mathbf {u}=(\mathbf {u}_{1},\mathbf {u}_{2})\in H^{1}(\Omega ;\mathbb {R}^2)^N \end{aligned}$$

such that \(I(\mathbf {u}_{1},\mathbf {u}_{2})=0\) for the vector functional I defined in (2.4). As we have shown in Sect. 2, the vector variational problem for I is non-convex, indeed non-quasiconvex, and therefore there is no way to ensure the uniqueness of minimizers. In this sense, it is important to realize that our numerical algorithm might recover a different minimizer from the one we would expect, namely the one used to generate the synthetic boundary data. In this regard, the initialization will steer the computations to one solution or another. The important point, nonetheless, is that our certificate of convergence, the functional being driven to zero, holds.

5.1 Numerical experiments

We would like to apply the Newton–Raphson algorithm described above in order to find a solution of the inverse problem (5.2). Our numerical experiments have been implemented using the free software FreeFem++ v 3.56 (see [26] and http://www.freefem.org/). We will use the Newton–Raphson method to solve the non-linear system of partial differential equations (5.1).

We consider a density \(\gamma \) and boundary data \(\mathbf {u}_{0}\in H^{1/2}(\partial \Omega ;\mathbb {R}^2)^N\) associated with it through (5.2). For a fixed number of experiments \(N \ge 1\), we select boundary data as described in the strategy in Sect. 3.1:

  • Take any density \(\gamma \in L^\infty (\Omega )\), and a function \(u_{0,1}\in H^{1/2}(\partial \Omega ) \).

  • Let \(\mathbf {R}_\delta :\mathbb {R}^2\rightarrow \mathbb {R}^2\) be the counterclockwise rotation in the plane of angle \(\delta \). Take:

    $$\begin{aligned} u_{0,1}^{(j)}(x,y)=u_{0,1}(\mathbf {R}_{\delta _j}(x,y))\quad \delta _j=2\pi \frac{j-1}{N}\in [0,2\pi ) \quad j=1,\dots ,N, \end{aligned}$$

    where \(\delta _j\) represents different angles of rotation.

  • Solve the problem

    $$\begin{aligned} {\text {div}}(\gamma \nabla u_1^{(j)})=0\hbox { in }\Omega ,\quad u^{(j)}_1=u_{0,1}^{(j)}\hbox { on }\partial \Omega ,\quad j=1,\cdots , N. \end{aligned}$$

We consider

$$\begin{aligned} u_{0,1}^{(j)}=u_1^{(j)}, j=1,\cdots , N, \end{aligned}$$

with \(\mathbf {u}_{0,1}\in H^{1/2}(\partial \Omega )^N.\) To determine \(\mathbf {u}_{0,2}\in H^{1/2}(\partial \Omega )^N,\) we solve problems

$$\begin{aligned} \left\{ \begin{array}{ll} -\mathrm{div}\left( \frac{1}{\gamma }\nabla u_2^{(j)}\right) =0&{} \text { in }\Omega ,\\ \frac{1}{\gamma }{\partial u_2^{(j)}\over \partial \nu } =-\nabla u^{(j)}_1\cdot \mathbf {t}&{}\text { on } \partial \Omega , \end{array}\right. \quad j=1,\dots ,N, \end{aligned}$$
(5.5)

under the normalization condition \(\displaystyle \int _\Omega u_2^{(j)}(\mathbf {x})\,d\mathbf {x}=0\), and take

$$\begin{aligned} \mathbf {u}_{0,2}=\mathbf {u}_2|_{\partial \Omega }\in H^{1/2}(\partial \Omega )^N \end{aligned}$$

where \(\mathbf {t}\) is the counterclockwise tangential vector to \(\partial \Omega \).

In this way, having in mind Theorem 2.1, we ensure that \(\gamma \) solves inverse problem (5.2) associated with the boundary data

$$\begin{aligned} \mathbf {u}_{0}=(\mathbf {u}_{0,1},\mathbf {u}_{0,2})\in H^{1/2}(\partial \Omega ;\mathbb {R}^2)^N, \end{aligned}$$
(5.6)

and therefore these are appropriate boundary conditions to perform numerical experiments. In order to get more realistic results in our experiments, we use two different meshes: a finer one to build the data (as described in the above procedure to get \(\mathbf {u}_{0}\)), and a coarser one to perform the reconstruction and find the optimal density \(\gamma \); see Fig. 1. The mesh for searching optimal solutions has 3633 nodes and 7064 triangles, while the one used to build boundary data has 14038 nodes and 27674 triangles.

Fig. 1

The domain \(\Omega \) and its triangulation. Left: for searching optimal solutions. Right: to build boundary data.

Our domain of reference will always be

$$\begin{aligned} \Omega =\{(x,y)\in \mathbb {R}^2: x^2+y^2<1\}, \end{aligned}$$

and, most of the time,

$$\begin{aligned} \gamma =\beta \chi _D+\alpha (1-\chi _D) \end{aligned}$$
(5.7)

where \(D\subset \Omega \), \(\chi _D\) is the corresponding characteristic function, and \(0<\alpha <\beta \) are two constants representing the electrical properties of the body and of a certain unknown inclusion. In the jargon of Electrical Impedance Tomography, \(\Omega \) may be viewed as a region/body that contains healthy and ill tissues, while D represents the set of tumor cells. In our academic examples, we consider \(\alpha =5\) and \(\beta =10\). We use \(P_2\)-Lagrange finite element approximations for the solutions \(\mathbf {u}\). We have checked that the numerical errors in the quadrature formulas matter if one is to obtain a sharp approximation of the solution of the inverse problem; in this sense, for all 2D integrals we have considered high-precision quadrature formulae of order six (see [27]).

In the following, and taking into account the procedure to generate admissible boundary data in Sect. 3.1, we take an arbitrary function like

$$\begin{aligned} u_{0,1}(x,y)=10 x+5\sin y, \end{aligned}$$
(5.8)

and consider various choices for \(\gamma \) and for the number of measurements N. We stick to the convergence criterion of an optimal cost lower than \(tol=1.0 \times 10^{-5}\).

Given a target function \(\gamma \) and the associated boundary conditions (5.6), one important aspect is to determine the initialization for the Newton–Raphson algorithm. The way we choose the initialization determines the search for optimal solutions and the possibility of finding different ones. Note that the local convergence character of the Newton–Raphson method forces us to select an initialization sufficiently close to a solution. The non-linear and non-(quasi)convex nature of the system of PDEs we are dealing with (and therefore of the related inverse problem) is responsible for the behavior of the Newton–Raphson method, as one expects different solutions for different initializations. In this regard, we have considered two distinct ways to select the initialization:

Initialization 1 We take

$$\begin{aligned} \mathbf {u}^{0}_{i}=\tilde{\mathbf {u}}_{0, i}+\mathbf {V}_{i},\quad i=1, 2, \end{aligned}$$
(5.9)

with \(\mathbf {V}_i\) an arbitrary function in \(H^1_0(\Omega ;\mathbb {R})^N\), and \(\tilde{\mathbf {u}}_{0}\in H^1(\Omega ,\mathbb {R}^2)^N\) an extension of \( \mathbf {u}_{0}\in H^{1/2}(\partial \Omega ;\mathbb {R}^2)^N\), so that \(\mathbf {u}^0\) and \(\mathbf {u}_{0}\) share the same boundary information. For \(\mathbf {V}_i\), we take an arbitrary function \(f_i^{(j)}\) (for instance, \(f_1^{(j)}(x,y)=e^{x^2} y^2\sin (x)\) and \(f_2^{(j)}(x,y)=1\)), and consider \(V_i\) the solution of:

$$\begin{aligned} \left\{ \begin{array}{ll} -\Delta V_i^{(j)}=f^{(j)}_i &{}\hbox { in }\Omega ,\\ V_i^{(j)}=0&{}\hbox { on }\partial \Omega , \end{array}\right. \quad i=1,2, \quad j=1,\dots ,N. \end{aligned}$$

Initialization 2 We solve the problems:

$$\begin{aligned} \left\{ \begin{array}{ll} -\Delta V_1^{(j)}=0&{} \text { in }\Omega ,\\ V_1^{(j)} =\mathbf {u}_{0, 1}^{(j)}&{}\text { on } \partial \Omega , \end{array}\right. \quad j=1,\dots ,N, \end{aligned}$$
(5.10)
$$\begin{aligned} \left\{ \begin{array}{ll} -\Delta V_2^{(j)}=0&{} \text { in }\Omega ,\\ V_2^{(j)} =\mathbf {u}_{0, 2}^{(j)}&{}\text { on } \partial \Omega , \end{array}\right. \quad j=1,\dots ,N, \end{aligned}$$
(5.11)

and take:

$$\begin{aligned} \mathbf {u}^0_i=\mathbf {V}_i,\quad i=1, 2, \end{aligned}$$
(5.12)
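Both initializations thus amount to solving Poisson or Laplace problems on \(\Omega \). The following is a minimal sketch of such solves, assuming a five-point finite-difference stencil on a Cartesian grid masked to the unit disc; it is a crude stand-in for the \(P_2\) finite element computations actually used, and every name in it is our own:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_poisson_disc(f, g, n=65):
    """Solve -Laplace(u) = f in the unit disc with u = g on the boundary,
    via a five-point stencil; neighbours falling outside the disc
    contribute the Dirichlet value g evaluated at that grid point."""
    h = 2.0 / (n - 1)
    xs = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    inside = X**2 + Y**2 < 1.0
    m = int(inside.sum())
    idx = -np.ones((n, n), dtype=int)
    idx[inside] = np.arange(m)
    rows, cols, vals = [], [], []
    b = np.zeros(m)
    for i in range(n):
        for j in range(n):
            if not inside[i, j]:
                continue
            k = idx[i, j]
            rows.append(k); cols.append(k); vals.append(4.0 / h**2)
            b[k] += f(X[i, j], Y[i, j])
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if inside[ii, jj]:
                    rows.append(k); cols.append(idx[ii, jj])
                    vals.append(-1.0 / h**2)
                else:  # neighbour outside the disc: Dirichlet datum
                    b[k] += g(X[ii, jj], Y[ii, jj]) / h**2
    A = sp.csr_matrix((vals, (rows, cols)), shape=(m, m))
    u = np.zeros((n, n))
    u[inside] = spla.spsolve(A, b)
    return u

# Initialization 1: V solves -Laplace(V) = f with V = 0 on the boundary.
V = solve_poisson_disc(lambda x, y: np.exp(x**2) * y**2 * np.sin(x),
                       lambda x, y: 0.0)
# Initialization 2: harmonic extension of the boundary datum (5.8).
U = solve_poisson_disc(lambda x, y: 0.0,
                       lambda x, y: 10 * x + 5 * np.sin(y))
```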

We have treated the following numerical examples.

Example 1

The case of one isolated circular tumor. We start the first numerical simulation for \(\gamma \) as in (5.7) with

$$\begin{aligned} D=\{(x,y)\in \mathbb {R}^2: (x-0.15)^2+(y-0.1)^2\le 0.1 \}. \end{aligned}$$
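For reference, the conductivity (5.7) with this inclusion can be evaluated pointwise as in the following minimal sketch (the function name is ours, not part of any actual implementation):

```python
import numpy as np

def gamma(x, y, alpha=5.0, beta=10.0):
    """Piecewise conductivity (5.7) for Example 1: beta on the disc D,
    alpha on the rest of the domain."""
    in_D = (x - 0.15)**2 + (y - 0.1)**2 <= 0.1
    return np.where(in_D, beta, alpha)
```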
Fig. 2

Example 1—The target \(\gamma \) (top left) and the computed \(\gamma \) for different numbers of measurements: N=1 (top right), N=2 (bottom left), N=3 (bottom right), using Initialization 1.

In Fig. 2, we show the target layout at the top left, and the computed optimal solution densities \(\gamma \) for different values of \(N=1, 2, 3\), at top right, bottom left and bottom right, respectively. We have used the process described as Initialization 1 to choose the initialization functions for the Newton–Raphson algorithm. From the numerical point of view, we have noticed that, in our simulations, the Newton method is very quick in the reconstruction process. For the cases of one or two measurements (\(N=1,2\)), we have used a version of the damped Newton method in order to get satisfactory results. For stationary problems, the number of measurements plays an important role in the efficiency of our strategy. Having this observation in mind, and to maintain a low computational cost, we perform the rest of the experiments with \(N=3\). In Fig. 4, we show the cost evolution for the experiment corresponding to the case of \(N=3\) measurements, for the two initialization procedures described above. We would like to remark that, within a few iterations, the algorithm yields a very good qualitative reconstruction of the inclusion \(\gamma \); after that, approaching a very sharp approximation is much slower (see Fig. 4).
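The damped variant we refer to can be summarized, in generic form, by the following sketch, with step-halving on the residual norm; `residual` and `jacobian` are placeholders for the discretized optimality system, which we do not reproduce here:

```python
import numpy as np

def damped_newton(residual, jacobian, x0, tol=1e-5, max_iter=200):
    """Generic damped Newton iteration: the Newton step is halved until
    the residual norm decreases (a sketch, not the actual solver)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        F = residual(x)
        if np.linalg.norm(F) < tol:
            break
        dx = np.linalg.solve(jacobian(x), -F)   # full Newton direction
        lam = 1.0
        while (np.linalg.norm(residual(x + lam * dx)) >= np.linalg.norm(F)
               and lam > 1e-8):
            lam *= 0.5                          # damping
        x = x + lam * dx
    return x
```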

Fig. 3

The computed \(\gamma \) using Initialization 2.

Fig. 4

Cost evolution. Left: Initialization 1 (Fig. 2). Right: Initialization 2 (Fig. 3).

On the other hand, if we consider the process described in Initialization 2 for the initialization functions of the Newton–Raphson algorithm, we find another optimal solution with a very different geometry with respect to the target \(\gamma \) used to generate the admissible boundary data; see Fig. 3. This phenomenon is associated with the lack of uniqueness of solutions of the full non-linear system of PDEs (5.1), in particular for the underlying inverse problem, and with the local character of convergence of the numerical scheme implemented, the Newton–Raphson method.

Example 2

Two isolated circular tumors. In this situation, we consider \(\gamma \) as in (5.7) with \(D=D_1\cup D_2\), and

$$\begin{aligned} D_1=\{(x,y)\in \mathbb {R}^2: (x-0.2)^2+(y-0.35)^2\le 0.1 \}, \end{aligned}$$
$$\begin{aligned} D_2=\{(x,y)\in \mathbb {R}^2: (x+0.35)^2+(y+0.3)^2\le 0.0625 \}. \end{aligned}$$

With such a choice, we intend to simulate the case of two isolated circular tumors of different sizes. Figure 5 shows the target layout on the left, and the computed optimal value for \(\gamma \) on the right.

Fig. 5

Example 2—The target \(\gamma \) (left) and the computed \(\gamma \) (right) for the case of two isolated circular tumors.

Example 3

One isolated tumor with a rectangular inclusion. This time, we consider a more complicated \(\gamma \) given by

$$\begin{aligned} \gamma =\beta \chi _D(1+{1\over 2}\chi _{D_1})+\alpha (1-\chi _D) \end{aligned}$$
(5.13)

with

$$\begin{aligned} D=\{(x,y)\in \mathbb {R}^2: (x-0.05)^2+(y-0.15)^2\le 0.25 \}, \end{aligned}$$

and the rectangular inclusion \(D_1\subset D\),

$$\begin{aligned} D_1=\{(x,y)\in \mathbb {R}^2: -0.25\le x\le 0.35, 0\le y\le 0.3\}. \end{aligned}$$

The conductivity parameter is taken to be

$$\begin{aligned} \gamma =\alpha \hbox { in }\Omega \setminus D,\quad \gamma =\beta \hbox { in }D\setminus D_1,\quad \gamma ={3\over 2}\beta \hbox { in }D_1. \end{aligned}$$
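As with Example 1, this three-level conductivity can be evaluated pointwise; a minimal sketch with our own function name:

```python
import numpy as np

def gamma_example3(x, y, alpha=5.0, beta=10.0):
    """Conductivity (5.13): alpha outside D, beta on D \ D1,
    (3/2)*beta on the rectangular inclusion D1."""
    in_D = (x - 0.05)**2 + (y - 0.15)**2 <= 0.25
    in_D1 = (x >= -0.25) & (x <= 0.35) & (y >= 0.0) & (y <= 0.3)
    return np.where(in_D & in_D1, 1.5 * beta, np.where(in_D, beta, alpha))
```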

We thus attempt to simulate the case where there is a single tumor, but with two different densities. We assume that in the internal part of the tumor there is necrosis, so that the parameter \(\beta \) is different (greater) there, over a singular rectangular shape. Figure 6 shows the target layout on the left, and the computed optimal value for \(\gamma \) on the right.

Fig. 6

Example 3—The target \(\gamma \) (left) and the computed \(\gamma \) (right) for the case of one isolated tumor with a rectangular inclusion.

Example 4

Continuous densities. In this situation, we want to check our strategy for other types of densities. We made some experiments for the case where \(\gamma \) is not a piecewise-constant function but a smoothly varying density which takes a continuous range of values. In particular, we consider the case

$$\begin{aligned} \gamma =1+10(1-x^2-y^2)^4 x^3\hbox { in }\Omega , \end{aligned}$$

the unit disc. We present in Fig. 7 the numerical results: on the left, the continuous target density \(\gamma \); and on the right, the computed value of \(\gamma \).

Fig. 7

Example 4—The target \(\gamma \) (left) and the computed \(\gamma \) (right) for the continuous density case.

Example 5

Microstructures: laminates. We would like to show the efficiency of our reconstruction method in more complicated situations. In all previous numerical experiments, we selected the boundary data and the density \(\gamma \) so that the infimum of the variational problem (2.12) is equal to zero and is attained, i.e., the minimization problem is well-posed in the sense that the minimum equals zero. We would now like to test the behavior of our numerical algorithm in the case where the infimum is still equal to zero, but it is not a minimum. We can force such situations by using microstructures, i.e. highly oscillatory sequences.

We consider the laminate composite generated by two phases \(\alpha \) and \(\beta \), the volume fraction \(\theta \), and the unit vector \(\mathbf {n}\) corresponding to the direction of lamination [28,29,30,31]. From the microscopic point of view, for this kind of material the component phases \(\alpha \) and \(\beta \) are stacked in slices orthogonal to the direction \(\mathbf {n}\), in proportions \(\theta \) and \(1-\theta \), respectively. At the macroscopic level, this kind of material is characterized, from a mathematical point of view, in the context of Homogenization Theory [32,33,34] through the homogenized matrix

$$\begin{aligned} A_*=\theta \alpha \,\mathbf {Id}+(1-\theta )\beta \,\mathbf {Id}-\frac{\theta (1-\theta )}{\alpha (1-\theta )+\beta \theta }(\alpha -\beta )^2\,\mathbf {n}\otimes \mathbf {n}. \end{aligned}$$
(5.14)
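As a quick numerical check (a sketch under our own naming, not the code used for the simulations), the matrix (5.14) can be assembled as follows. For \(\alpha =5\), \(\beta =10\), \(\theta =0.5\), its eigenvalue along \(\mathbf {n}\) is the harmonic mean \(2/(1/\alpha +1/\beta )\approx 6.67\), and the one across is the arithmetic mean \(7.5\), as laminate theory predicts:

```python
import numpy as np

def laminate_matrix(alpha, beta, theta, n):
    """Homogenized matrix A_* of a simple laminate, Eq. (5.14)."""
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)          # lamination direction, |n| = 1
    A = (theta * alpha + (1.0 - theta) * beta) * np.eye(2)
    c = (theta * (1.0 - theta) * (alpha - beta) ** 2
         / (alpha * (1.0 - theta) + beta * theta))
    return A - c * np.outer(n, n)

# Values of Example 5, lamination at angle pi/4:
A_star = laminate_matrix(5.0, 10.0, 0.5, [np.cos(np.pi/4), np.sin(np.pi/4)])
```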

We will generate boundary data associated with homogenized matrices \(A_*\) following the strategy in Sect. 3.1:

  • Take any homogenized matrix \(A_*\) as above, and a function \(u_{0,1}\in H^{1/2}(\partial \Omega ) \),

  • If \(\mathbf {R}_\delta :\mathbb {R}^2\rightarrow \mathbb {R}^2\) is the \(\delta \)-rotation in the plane, we take

    $$\begin{aligned} u_{0,1}^{(j)}(x,y)=u_{0,1}(\mathbf {R}_{\delta _j}(x,y)),\quad \delta _j=2\pi \frac{j-1}{N}\in [0,2\pi ), \quad j=1,\dots ,N, \end{aligned}$$

    where \(\delta _j\) represents the different angles of rotation (see the sketch after this list). In this way, we build the data set \(\mathbf {u}_{0,1}\in H^{1/2}(\partial \Omega )^N.\)

  • We solve the problems

    $$\begin{aligned} {\text {div}}(A_*\nabla u_1^{(j)})=0\hbox { in }\Omega ,\quad u^{(j)}_1=u_{0,1}^{(j)}\hbox { on }\partial \Omega ,\quad j=1,\cdots , N. \end{aligned}$$
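In code, the construction of the rotated boundary data set may be sketched as follows (pure NumPy; the sampling of \(\partial \Omega \) by equispaced points and all names are our own choices):

```python
import numpy as np

def rotated_boundary_data(u01, N, n_pts=256):
    """Rotated copies u_{0,1}^(j)(x,y) = u_{0,1}(R_{delta_j}(x,y)) sampled
    at n_pts equispaced points on the boundary of the unit disc."""
    t = np.linspace(0.0, 2 * np.pi, n_pts, endpoint=False)
    x, y = np.cos(t), np.sin(t)            # points on the unit circle
    data = []
    for j in range(1, N + 1):
        d = 2 * np.pi * (j - 1) / N        # rotation angle delta_j
        xr = np.cos(d) * x - np.sin(d) * y  # R_delta applied to (x, y)
        yr = np.sin(d) * x + np.cos(d) * y
        data.append(u01(xr, yr))
    return np.array(data)                  # shape (N, n_pts)

u01 = lambda x, y: 10 * x + 5 * np.sin(y)  # the function in (5.8)
u01_rotated = rotated_boundary_data(u01, N=3)
```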

To generate \(\mathbf {u}_{0,2}\in H^{1/2}(\partial \Omega )^N,\) we consider the matrix \(M_l=R\cdot ({A_*}^{-1}\cdot R)\), and we solve the family of problems

$$\begin{aligned} \left\{ \begin{array}{ll} -\mathrm{div}\left( M_l\nabla u_2^{(j)}\right) =0&{} \text { in }\Omega ,\\ M_l{\partial u_2^{(j)}\over \partial \nu } =\nabla u^{(j)}_1\cdot \mathbf {t}&{}\text { on } \partial \Omega , \end{array}\right. \quad j=1,\dots ,N, \end{aligned}$$
(5.15)

where \(\mathbf {t}\) is the counterclockwise unit tangent vector to \(\partial \Omega \). We take

$$\begin{aligned} \mathbf {u}_{0,2}=\mathbf {u}_2|_{\partial \Omega }\in H^{1/2}(\partial \Omega )^N. \end{aligned}$$

In this way we ensure that \(\gamma =A_*\), which is not a density (it is a matrix), solves the inverse problem (5.2) associated with boundary data

$$\begin{aligned} \mathbf {u}_{0}=(\mathbf {u}_{0,1},\mathbf {u}_{0,2})\in H^{1/2}(\partial \Omega ;\mathbb {R}^2)^N. \end{aligned}$$
(5.16)

This time, the infimum for the variational problem (2.12) is equal to zero, but it is not a minimum, i.e., the infimum is not attained using scalar densities \(\gamma \). Only through homogenized matrices can the infimum be achieved.

Fig. 8

The \(\gamma \) used as initialization when searching for laminates.

Fig. 9

Example 5—Layout of phases in different steps searching for laminates oriented with angle \(\frac{\pi }{4}\).

Fig. 10

Example 5—Layout of phases in different steps searching for laminates oriented with angle \(\frac{3\pi }{4}\).

For our specific simulations, we consider a homogenized matrix defined by (5.14), with

$$\begin{aligned}\alpha =5,\quad \beta =10,\quad \theta =0.5. \end{aligned}$$

In Fig. 8, we present the value of \(\gamma \) used for the initialization, built with the procedure described above. In Figs. 9 and 10 we show the intermediate configurations of \(\gamma \) at different steps of the Newton–Raphson algorithm, for lamination directions corresponding to the angles \(\frac{\pi }{4}\) and \(\frac{3\pi }{4}\), respectively.

In the case of laminates, we know that the inverse problem (5.2) is ill-posed, in the sense that in order to recover the optimal coefficient \(\gamma \), it is necessary to enlarge the class of admissible solutions to the set of homogenized matrices. When we try to solve the optimality system (5.1) using the Newton–Raphson algorithm, we expect that, after some iterations, the algorithm will veer off. To check on this behavior, we show intermediate pictures corresponding to the layout at different steps of the iterative method. We note how our numerical algorithm keeps the right direction of lamination given by the vector \(\mathbf {n}\). It is important to recall that, in order to get a sharp reconstruction of the direction of lamination, the size and orientation of the mesh play an important role, so as to avoid mesh-induced predetermined directions, which would not be satisfactory in other cases.

5.2 Some final remarks

We would like to comment on some numerical issues concerning our strategy to solve inverse problems.

We would like to assess the influence of numerical errors in the data on the algorithm, i.e. to test how robust our algorithm is. To do so, we have introduced perturbations of order \(0.1\%\) and \(0.15\%\) on the prescribed boundary condition \(u_0\), and have run the simulations of Example 1, where \(\gamma \) represents one isolated tumor (Fig. 2). The results for the perturbed simulations are shown in Fig. 11. From the pictures, we can see how the algorithm converges to a new, perturbed solution, where the size of this perturbation is similar to that of the boundary data. We noticed that for larger perturbations the algorithm does not converge, and this happens with both the Newton–Raphson method and the fixed-point alternative. Our interpretation of this phenomenon stresses the ill-posed character of the inverse problem, in the sense that the perturbed boundary data may no longer be "well-chosen" boundary data for the existence of solutions of the inverse problem.
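A sketch of the perturbation we apply to the boundary data (the multiplicative uniform-noise model and all names are our assumptions for illustration):

```python
import numpy as np

def perturb_boundary_data(u0, level, rng=np.random.default_rng(0)):
    """Relative multiplicative noise of size `level` (e.g. 0.001 for 0.1%)
    on sampled boundary data."""
    noise = rng.uniform(-1.0, 1.0, size=u0.shape)
    return u0 * (1.0 + level * noise)
```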

Fig. 11

Computed \(\gamma \). Left, perturbation of \(0.1\%\). Right, perturbation of \(0.15\%\)

All of our numerical experiments are run with the Newton–Raphson algorithm described above. A very important aspect of this kind of iterative method is the role of the initialization defined in (5.9). We recall that, as initialization, we can take any function which verifies the prescribed boundary condition \(\mathbf {u}_0\). In this work we have presented two different alternatives to choose this initialization from, for any value of N, the number of boundary measurements. We have checked that, depending on the initialization procedure, different solutions are found. This phenomenon is associated with the fully non-linear and non-convex character of the system of PDEs, but the fact that the cost function is equal to zero in all cases guarantees that our computations do capture a true solution, even though it may not be the expected one.