Abstract
We study some non-linear systems of PDEs that turn out to be related to the classical inverse problem in conductivity. They have a variational structure in the sense that, at least formally, they are the Euler–Lagrange systems of some explicit vector variational problems. The underlying, non-negative integrands are, however, non-quasiconvex in such a way that the existence of weak solutions for the corresponding Euler–Lagrange systems is, at first sight, compromised. It is however remarkable that the quasiconvexification can be computed quite explicitly. Though our analysis is broader, the connection with inverse conductivity problems is established when there are global minimizers with a vanishing minimum value. Since this fact depends on boundary conditions, we try to clarify the situation with the help of the quasiconvexification, and describe a way to build synthetic boundary data for which the minimum of the underlying functional is attained and vanishes. Because this is exactly the situation for inverse problems in conductivity, we explore numerically the possibility of approximating such global minimizers, furnishing approximated solutions to inverse problems through typical descent or Newton–Raphson methods. We test the procedure on several examples with such synthetic data.
1 Introduction
For a bounded, regular domain \(\Omega \subset \mathbb {R}^2\), and Dirichlet boundary data provided by \((u_{1, 0}, u_{2, 0})\in H^1(\Omega ; \mathbb {R}^2)\), we are interested in the following system of PDEs
for a couple of functions
as the Euler–Lagrange system associated with the functional
Indeed, if we put
we are talking about the vector variational problem
where the class \(\mathcal {A}\) needs to be determined, incorporating the Dirichlet boundary condition
furnishing boundary data. For vector problems, like the one we are considering here, there are not many ways to show existence of solutions. In fact, as far as we can tell, there are two classic methods to prove such existence of solutions. Both come directly from the treatment of equilibrium solutions in finite elasticity [1]. The first is to make use of the implicit function theorem [2, 3] in a suitable framework. By its very perturbation nature, this method can only deliver solutions which are sufficiently close to solutions of approximating linear problems. It demands quite restrictive hypotheses. The second alternative is to show that the system to be studied is, at least formally, the Euler–Lagrange system of a functional which admits minimizers in appropriate function spaces. This too asks for important structural assumptions on the underlying functional which cannot always be taken for granted [1, 2].
Our only hope to show existence of solutions for our non-linear system (1.1) and (1.2) is to focus on proving the existence of global minimizers for the functional with integrand \(\phi (\mathbf {F})\) in (1.4). The whole point of our concern is, however, that such integrand does not comply with any of the necessary requirements to apply the direct method of the Calculus of Variations: the functional I in (1.3) is neither coercive, nor quasiconvex [1, 2, 4]. Under such unfavorable circumstances, in which the variational method is very seriously compromised, there are two relevant objectives that can be tried out:
-
(1)
look for the relaxation of the functional in (1.3);
-
(2)
one possible alternative to find a global minimizer for such a singular vector variational problem, and hence a possible weak solution of the initial system of PDEs, asks for two important ingredients:
-
(a)
show that \(I(\mathbf {v})\ge 0\) (or some other lower bound) for every feasible \(\mathbf {v}\);
-
(b)
find some feasible \(\mathbf {u}\) with \(I(\mathbf {u})=0\).
Surprisingly enough, both possibilities can be carried out in our case: we can compute quite explicitly the relaxed problem; and, under some circumstances, we can find global minimizers that turn out to be solutions of the original differential system. The goal of our contribution is to report on these two appealing aspects, and relate the second one to inverse problems in conductivity without pretending to add any new result in this area.
As we have just pointed out, it is a quite surprising feature of our system how intimately connected it is to inverse problems in conductivity in the plane. To see this connection more clearly, suppose we are seeking an unknown conductivity coefficient
For a chosen Dirichlet boundary condition \(u_0\) around \(\partial \Omega \), the solution of the linear elliptic equation
is such that there is a unique (up to an additive constant) \(v\in H^1(\Omega )\) with
Vector equation (1.6) furnishes three important pieces of information on the auxiliary function v:
-
(1)
a conductivity equation for v
$$\begin{aligned} {\text {div}}\left[ \frac{1}{\gamma (\mathbf {x})}\nabla v(\mathbf {x})\right] =0\hbox { in }\Omega ; \end{aligned}$$(1.7)
-
(2)
a formula for \(\gamma \) in terms of u and v, namely
$$\begin{aligned} \gamma =\frac{|\nabla v|}{|\nabla u|}; \end{aligned}$$(1.8)
-
(3)
Dirichlet boundary values around \(\partial \Omega \) based on the Neumann condition for u
$$\begin{aligned} \nabla v\cdot \mathbf {t}=\gamma \nabla u\cdot \mathbf {n}\hbox { on }\partial \Omega , \end{aligned}$$(1.9)
where \(\mathbf {n}\) is the outer normal to \(\Omega \), and \(\mathbf {t}=\mathbf {R}\mathbf {n}\) is the counterclockwise tangential vector to \(\partial \Omega \).
Hence, if \(\gamma \) is unknown, one can try to determine it through (1.8), and then (1.5) and (1.7) carry us to our initial system for
together with Dirichlet boundary conditions for both components through (1.9). Hence, our interpretation of the inverse conductivity problem for a single pair (u, v) focuses on vector equation (1.6). This condition is usually dealt with through the Beltrami operator (see [5]), but for our approach it will be beneficial to keep it in the form (1.6).
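As a quick illustration of relations (1.5)–(1.8), the following sympy sketch verifies them symbolically on a hand-picked example; the pair \(u=e^x\cos y\), \(v=e^x\sin y\) with \(\gamma \equiv 1\) is our own choice for illustration, not taken from the text.

```python
# Symbolic sanity check (sympy) of relations (1.5)-(1.8) on an assumed example:
# gamma = 1 with the harmonic conjugate pair u = e^x cos y, v = e^x sin y.
import sympy as sp

x, y = sp.symbols('x y', real=True)
R = sp.Matrix([[0, -1], [1, 0]])   # counterclockwise rotation by pi/2

gamma = sp.Integer(1)
u = sp.exp(x) * sp.cos(y)
v = sp.exp(x) * sp.sin(y)

grad = lambda f: sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
div = lambda V: sp.diff(V[0], x) + sp.diff(V[1], y)

assert sp.simplify(div(gamma * grad(u))) == 0                        # (1.5)
assert sp.simplify(gamma * grad(u) + R * grad(v)) == sp.zeros(2, 1)  # (1.6)
assert sp.simplify(div(grad(v) / gamma)) == 0                        # (1.7)
# (1.8) squared, gamma^2 |grad u|^2 = |grad v|^2, to avoid square roots
assert sp.simplify(grad(v).dot(grad(v)) - gamma**2 * grad(u).dot(grad(u))) == 0
```

Any conjugate pair for a genuine non-constant \(\gamma \) would pass the same checks; this constant-coefficient instance simply keeps the algebra transparent.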
This connection between our analysis of system (1.1) and (1.2) and functional (1.3), on the one hand, and the classical inverse problem in conductivity, on the other, is worth exploring, we believe, given the practical importance of reconstruction procedures. In particular, we are interested in providing at least some partial answers to the following two issues:
-
(1)
Under what circumstances, can a solution \((u_1, u_2)\) of our initial system of PDEs provide a valid, coherent conductivity coefficient \(\gamma \) through (1.8)
$$\begin{aligned} \gamma =\frac{|\nabla u_2|}{|\nabla u_1|}, \end{aligned}$$(1.10)
that is
$$\begin{aligned} \frac{|\nabla u_2|}{|\nabla u_1|}\nabla u_1+\mathbf {R}\nabla u_2=\mathbf {0}\hbox { in }\Omega \hbox {?} \end{aligned}$$(1.11)
-
(2)
Find a procedure to build boundary measurements \((u_{1, 0}, u_{2, 0})\), for which a conductivity coefficient \(\gamma \) can be reconstructed through our variational method.
In such situations, we will be concerned with the numerical approximation, which can be set up in various ways. The interesting thing is that, in spite of a huge non-uniqueness of solutions for these problems (when solutions exist), we possess a certificate of convergence to a true solution through (1.11). This certificate of convergence will be formulated in a much more practical and direct way below.
Our ideas extend to the multi-measurement case as well. The system for an unknown field
becomes
for \(j=1, 2, \dots , N\), where we are using the notation
Note how this system is fully coupled because this time the quotient for the conductivity coefficient
involves all of the components of \(\mathbf {u}\).
The paper is organized as follows. Section 2 focuses on the analytical part of our work, in which we examine a non-convex vector variational problem, calculate its relaxation, and, with its help, distinguish cases where the infimum vanishes from situations where it does not. The underlying system of PDEs, completed with a peculiar non-linear boundary condition, ensures the existence of a minimizer with a vanishing minimum. In Sect. 3, we examine the intimate relationship between our variational problem and its Euler–Lagrange system, when the minimum vanishes, on the one hand, and inverse problems in conductivity, on the other. This, in particular, means that one can attempt to approximate a solution of an inverse problem in conductivity by simulating numerically one minimizer or one solution of the Euler–Lagrange system. This, however, requires guaranteeing that the infimum of the variational problem vanishes. Hence, we study the generation of synthetic data around \(\partial \Omega \) in such a way that the infimum of the problem indeed vanishes. This approach has the interesting practical feature that driving the functional to zero means that we are approximating a true solution of the inverse conductivity problem, even though it may not be the one we would expect. We test the procedure in practice with various cases and examples, including multi-measurement situations. Our analytical results, however, do not lead to any new fact in the area of inverse problems in conductivity, except possibly for the use of the quasiconvexification of the underlying functional to clarify the role of boundary data.
The classic Calderón problem [6] in conductivity is a tremendously rich problem that has stirred, and still stirs, a lot of interest and research, ranging from deep results in analysis to numerical simulation and practical implementation. Without pretending to enumerate all relevant contributions, we cite some of our favorite ones. From the mathematical point of view, this kind of problem has led to deep and fundamental results (see [5, 7,8,9,10,11,12, 35], and references therein). The book [13] is a basic reference in this field. The main issues concerning inverse problems are uniqueness in various scenarios [5, 14], stability [15,16,17] and reconstruction [18, 19].
2 A vector variational problem
We have already indicated in the Introduction that, at least formally, our initial system (1.1) and (1.2) is the Euler–Lagrange system for the functional
Since vector equation (1.11), written in matrix notation
will play a central role in our analysis too, it is elementary to realize that if we put
then \(\psi (\mathbf {F})\ge 0\) always, but \(\psi (\mathbf {F})=0\) precisely when (2.2) holds. It seems thus advantageous to replace the functional in (2.1) by the modified one
There are two main advantages of this functional over the old one.
-
(1)
Since we have added a null-lagrangian, \(-\det \mathbf {F}\), to the old \(\phi \), the new underlying Euler–Lagrange system remains the same, i.e. our original system of PDEs (1.1) and (1.2).
-
(2)
If m is the infimum of I in (2.4), over a class of mappings respecting boundary data, then \(m\ge 0\), and \(m=0\) is attained, i.e. \(m=0\) is a minimum, precisely when (2.2) holds for a minimizer \((u_1, u_2)\).
Unfortunately, the integrand \(\psi \) is neither convex nor quasiconvex, just as \(\phi \) was not, so existence of minimizers is still a difficult issue. In addition, neither of the two integrands, \(\phi \) or \(\psi \), is coercive. Even so, we can use the non-negative number m to organize our discussion of four possibilities of interest:
-
(1)
\(m>0\),
-
(a)
attained for I;
-
(b)
not attained for I.
-
(2)
\(m=0\),
-
(a)
attained for I;
-
(b)
not attained for I.
The following result indicates how boundary data \((u_{1, 0}, u_{2, 0})\) can distinguish the case \(m=0\) from \(m>0\). It involves the explicit calculation of the quasiconvexification of our integrand \(\psi \). It is one of those rare cases where such a computation is possible.
Theorem 2.1
Let \(u_{i, 0}\in H^{1/2}(\partial \Omega )\), \(i=1, 2\). If there is an extension of \((u_{1, 0}, u_{2, 0})\) to some \(\mathbf {u}_0\in H^1(\Omega ; \mathbb {R}^2)\) with \(\det \nabla \mathbf {u}_0>0\) a.e. in \(\Omega \) then \(m=0\).
We suspect that the converse of this statement is also correct, but that would require dealing with some delicate points about sequences of jacobians. It is also reminiscent of some other results relating boundary values for systems of PDEs and jacobians as in, for example, [20,21,22]. Note that, quite often, if the map
is not one-to-one, then such boundary data cannot be extended to \(\Omega \) under the circumstances of Theorem 2.1. Such is the situation, for instance, for the simple example
As far as we can tell, our proof is the first one that is performed through the explicit calculation of the quasiconvexification of an integrand.
Theorem 2.2
The quasiconvexification \(Q\phi \) of \(\phi \) in (2.1) is given by the jacobian
Proof
It is easy to realize that
with the left-hand side, a quasiconvex (polyconvex) function. Hence
Equality between these three terms holds when
As usual, we will check that the rank-one convexification of \(\phi (\mathbf {F})\) turns out to be precisely \(|\det \mathbf {F}|\). If this is so, then we will immediately have the result
as desired.
The computation of the rank-one convex hull of \(\phi \), for matrices \(\mathbf {F}\) off that coincidence set (2.5), proceeds by checking that such \(\mathbf {F}\)’s can be decomposed through suitable rank-one matrices with support in that coincidence set. Let us check that this is so.
Set
To show our aim, all we need to check is that
where the first-level rank-one convex hull \(R_1\mathbf {Z}\) of a set of matrices \(\mathbf {Z}\) is the collection of matrices that can be written as a convex combination of two matrices from \(\mathbf {Z}\) whose difference is a rank-one matrix. Note that the two matrices
are rank-one connected if
Suppose first that \(\mathbf {F}\) has positive determinant. We will concentrate on showing that two matrices \(\mathbf {F}_0, \mathbf {F}_1\in \mathbf {Z}_+\), and a parameter \(t\in [0, 1]\) can be found, such that
We already know that
for some positive \(\alpha _i\) and vectors \(\mathbf {x}_i\). The condition on the difference \(\mathbf {F}_1-\mathbf {F}_0\) being a rank-one matrix translates, as already remarked, into
finally, we should have
From these two vector equations, one can easily find that
If we replace these expressions in (2.8), and rearrange terms, we arrive at the quadratic equation in t
The value of this quadratic function for \(t=0\) and \(t=1\) turns out to be, respectively,
Under the condition \(\det \,\mathbf {F}>0\), there are roots for t in (0, 1), provided that the discriminant is non-negative, and the vertex of the parabola belongs to (0, 1). It is elementary to check, again after some algebraic manipulations, that these conditions amount to having
for some positive values \(\alpha _i\), \(i=1, 0\). If we examine the function of two variables
we realize that along the hyperbola \(\alpha _1\alpha _0=1\), f grows indefinitely (recall that \(\det \,\mathbf {F}>0\)), and eventually it becomes larger than any positive value, in particular, bigger than
In this way (2.10) is fulfilled for some positive values for \(\alpha _1\) and \(\alpha _0\), and the proof of this step is finished.
If \(\det \,\mathbf {F}<0\), it is readily checked that the same calculations as above, with negative values for \(\alpha _1\) and \(\alpha _0\), lead to the result \(Q\phi (\mathbf {F})=-\det \,\mathbf {F}\), because there is a minus sign in front of every occurrence of the determinant.
These paragraphs above show that
if the first-level, rank-one convex hull \(R_1\xi \) of a function \(\xi \) is defined through
Note that \(\phi (\mathbf {F})=\det \mathbf {F}\) on both sets in (2.7), and that \(Q\xi \le R_1\xi \) always. We would conclude that
and (2.6) holds. \(\square \)
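As a numerical sanity check of Theorem 2.2, the sketch below tests the pointwise inequalities \(\psi \ge 0\) and \(|\det \mathbf {F}|\le \phi (\mathbf {F})\) on random matrices. Since the explicit formulas for \(\phi \) and \(\psi \) are not reproduced in this excerpt, the form \(\phi (\mathbf {F})=|\mathbf {F}^{(1)}||\mathbf {F}^{(2)}|\) (product of the row norms) is an assumption, chosen to be consistent with \(\psi =\phi -\det \) vanishing exactly on the coincidence set (2.2).

```python
# Numerical check of psi(F) = phi(F) - det F >= 0 and |det F| <= phi(F),
# ASSUMING phi(F) = |F^(1)| |F^(2)| (product of row norms); the explicit
# formula is elided in this excerpt, so this form is an illustration only.
import numpy as np

rng = np.random.default_rng(0)

def phi(F):
    return np.linalg.norm(F[0]) * np.linalg.norm(F[1])

def psi(F):
    return phi(F) - np.linalg.det(F)

for _ in range(10_000):
    F = rng.standard_normal((2, 2))
    assert psi(F) >= -1e-12                         # psi is non-negative
    assert abs(np.linalg.det(F)) <= phi(F) + 1e-12  # Qphi(F) = |det F| <= phi(F)

# psi vanishes on the coincidence set (2.2): second row = gamma * R (first row)
R = np.array([[0.0, -1.0], [1.0, 0.0]])
a = np.array([1.0, 3.0])
F0 = np.vstack([a, 2.0 * (R @ a)])   # here gamma = 2, so det F0 = 2 |a|^2 > 0
assert abs(psi(F0)) < 1e-12
```

The random sampling is, of course, no proof; it merely illustrates numerically the two inequalities the theorem rests on.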
Proof of Theorem 2.1
Since \(-\det \mathbf {F}\) is a null-lagrangian, it is immediate to argue that
As a consequence, we have a relaxation fact in the form
where
and \(\mathbf {u}_0\) is any extension of boundary data \((u_{1, 0}, u_{2, 0})\). Under an appropriate coercivity condition for \(\psi \) in \(H^1(\Omega ; \mathbb {R}^2)\), the right-hand side infimum in (2.12) would be a minimum [23]. However, since, as indicated earlier, we do not have such a condition, the infimum m might fail to be a minimum for either of those two problems. Equality (2.11) leads to
which shows our statement, because the right-hand side vanishes for \(\mathbf {u}=\mathbf {u}_0\), the extension of boundary data assumed. \(\square \)
Boundary data \((u_{1, 0}, u_{2, 0})\) which do not fall under the action of this result are irrelevant for the inverse conductivity problem. Yet, even so, our system (1.1) and (1.2) might admit solutions. Could there be a further requirement on boundary data to force solutions \(\mathbf {u}=(u_1, u_2)\) of our system of PDEs to provide an absolute minimizer for the functional in (2.4) with \(I(\mathbf {u})=0\), and, as a result, (1.11) or (2.2) would hold? Note how this vector equation ((1.11) or (2.2)) implies that the gradients of \(u_1\) and \(u_2\) must be orthogonal all over \(\Omega \), and so this property must also be retained for feasible boundary data. Namely
around \(\partial \Omega \). The left-hand side in this equation is precisely \(\nabla u_1\cdot \nabla u_2\). It is a non-linear boundary condition.
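The claim that the left-hand side of this boundary condition is precisely \(\nabla u_1\cdot \nabla u_2\) can be checked symbolically; in the following sympy sketch, the angle parameterization of the orthonormal frame \((\mathbf {t}, \mathbf {n})\) and the symbolic gradients are our own devices for the check.

```python
# Symbolic check that, for any orthonormal frame (t, n), the sum of products
# of directional derivatives reproduces the inner product of the gradients,
# i.e. the left-hand side of (2.14) equals grad u1 . grad u2.
import sympy as sp

th = sp.symbols('theta', real=True)              # frame angle (arbitrary)
g1 = sp.Matrix(sp.symbols('a1 a2', real=True))   # stands for grad u1 at a boundary point
g2 = sp.Matrix(sp.symbols('b1 b2', real=True))   # stands for grad u2

t = sp.Matrix([sp.cos(th), sp.sin(th)])          # unit tangent
n = sp.Matrix([sp.sin(th), -sp.cos(th)])         # unit normal, orthogonal to t

lhs = (g1.dot(t)) * (g2.dot(t)) + (g1.dot(n)) * (g2.dot(n))
assert sp.simplify(lhs - g1.dot(g2)) == 0
```

The identity is just \(\mathbf {t}\mathbf {t}^T+\mathbf {n}\mathbf {n}^T=\mathbf {1}\) for an orthonormal pair, which is why the frame angle drops out.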
Theorem 2.3
Let \(\mathbf {u}=(u_1, u_2)\in H^1(\Omega ; \mathbb {R}^2)\) for a simply-connected domain \(\Omega \subset \mathbb {R}^2\). The two following statements are equivalent:
-
(1)
$$\begin{aligned} {\text {div}}\left( \frac{|\nabla u_2(\mathbf {x})|}{|\nabla u_1(\mathbf {x})|}\nabla u_1(\mathbf {x})\right) =0,\quad {\text {div}}\left( \frac{|\nabla u_1(\mathbf {x})|}{|\nabla u_2(\mathbf {x})|}\nabla u_2(\mathbf {x})\right) =0, \end{aligned}$$(2.13)
and
$$\begin{aligned} \frac{\partial u_1}{\partial \mathbf {t}}\frac{\partial u_2}{\partial \mathbf {t}}+\frac{\partial u_1}{\partial \mathbf {n}}\frac{\partial u_2}{\partial \mathbf {n}}=0\hbox { on }\partial \Omega , \end{aligned}$$(2.14)
with neither term vanishing anywhere around \(\partial \Omega \);
-
(2)
$$\begin{aligned} \gamma (\mathbf {x})\nabla u_1(\mathbf {x})+\mathbf {R}\nabla u_2(\mathbf {x})=\mathbf {0}\hbox { in }\Omega , \end{aligned}$$(2.15)
where
$$\begin{aligned} \gamma (\mathbf {x})=\frac{|\nabla u_2(\mathbf {x})|}{|\nabla u_1(\mathbf {x})|}\ge 0\hbox { in }\Omega . \end{aligned}$$(2.16)
Proof
We already argued above that if (2.15) holds with \(\gamma (\mathbf {x})\) given by (2.16), then (2.13) and (2.14) are true. Let us therefore suppose that these last two conditions hold. The two PDEs imply the existence of two additional functions \(U_i\in H^1(\Omega )\), \(i=1, 2\), such that
Given how \(\gamma \) is defined through (2.16), we can be certain that
and so there is a rotation \(\mathbf {Q}(\mathbf {x})\) of angle \(\theta (\mathbf {x})\) such that
If we take back this information to (2.17), we find
By shifting the angle \(\theta (\mathbf {x})\) by a constant, we can assume that
and
We would like to show that, necessarily, \(\theta (\mathbf {x})\) is identically \(\pi /2\), since if this is so then we recover (2.15) as desired.
Again by our observations made earlier about the implications of (2.15), it is clear that \(\theta \) restricted to \(\partial \Omega \) is \(\pi /2\). This is the boundary condition (2.14). Our intention is to study the PDE that \(\theta \) ought to verify coming from the first equation in (2.13) and conclude that the only possible solution is \(\theta \equiv \pi /2\).
If we combine (2.18) with the first equation in (2.13), we conclude that
in addition to having \(\theta =\pi /2\) on \(\partial \Omega \). However, a fundamental point is that \(\theta \equiv \pi /2\) is indeed a global solution in \(\Omega \) of the first-order, quasilinear PDE (2.19), together with the boundary condition, because the divergence of a rotated gradient vanishes identically, and this fact is independent of what the function \(u_2\) is.
If \(u_2\) were analytic, we can expand (2.19) to have
Given that the four terms
do not vanish on \(\partial \Omega \) (a non-characteristic boundary condition), Holmgren's uniqueness theorem applies, and, knowing a priori that \(\theta \equiv \pi /2\) is a global solution in \(\Omega \), we can conclude that it is the only possible solution. If \(u_2\) is not analytic, we can proceed by approximation in (2.20), taking advantage again of the fact that we know beforehand that \(\theta \equiv \pi /2\) is a global solution independently of the approximating sequence. \(\square \)
We therefore see that boundary condition (2.14) is the clue to knowing whether optimality system (2.13) corresponds to a global minimizer of our problem with \(m=0\). It is interesting, from a practical point of view, to have a different criterion available.
3 Inverse problems in conductivity
Motivated by our observations in the Introduction, we introduce the following concept.
Definition 3.1
We will say that a pair of functions \((u_1, u_2)\) in \(H^1(\Omega )\) is a feasible solution for an inverse problem in conductivity if
The corresponding conductivity coefficient is the non-negative measurable function
Based on our remarks in Sect. 2, we see how feasible pairs for inverse problems in conductivity are intimately related to solutions of our original system of PDEs corresponding to global minimizers of the functional I in (2.4).
Proposition 3.1
Global minimizers for the functional I in (2.4) with a vanishing minimum \(m=0\) are exactly feasible pairs for inverse problems in conductivity.
Proof
The observation that
for \(\psi \) given in (2.3) and \(\mathbf {u}=(u_1, u_2)\), immediately lets us see the validity of our statement. \(\square \)
This simple proposition suggests the possibility of finding or approximating feasible pairs for inverse problems in conductivity, by approximating solutions of our initial system (1.1) and (1.2), and certify convergence by keeping track of the value of I in (2.4). If for a sequence \(\{\mathbf {u}_j\}\), we have \(I(\mathbf {u}_j)\searrow 0\), then we know that it is a good approximation to an inverse problem in conductivity, though it may not be the one we would like to see. We would like to test the procedure in practice.
We have already insisted on the tremendous difficulties associated with the vector minimization problem (2.4), and have provided, in the preceding section, some partial answers about the vanishing of the infimum, its attainment, and how these issues relate to our initial system of PDEs as the Euler–Lagrange system associated with (2.4). The various possibilities depend dramatically on the boundary values adopted around \(\partial \Omega \). As indicated above, from the perspective of inverse problems in conductivity, we would like to explore, so as to test typical minimization algorithms like conjugate gradient or Newton–Raphson methods, how to build feasible boundary data for which we can be sure that there are minimizers of (2.4) and solutions of (1.1) and (1.2) complying with Definition 3.1. After all, in real problems we do know that there should be pairs of functions \((u_1, u_2)\) complying with the vector equation in this definition. It is also worth pointing out that we have a sure certificate of convergence: the functional in (2.4) should be made very small if we are to find good approximations of such pairs. It is, however, well-established that there might be (infinitely) many pairs complying with Definition 3.1 and sharing the same boundary data around \(\partial \Omega \) (even in a multi-measurement scenario). This, in particular, implies that one can anticipate serious difficulties in the practical numerical implementation of examples.
3.1 A procedure to generate admissible boundary data
We propose one way to generate (synthetic) pairs of boundary data around \(\partial \Omega \) for which the non-linear system of PDEs (1.1) and (1.2) admits solutions, and they are such that equation (1.11) holds, i.e. the minimum vanishes. As we know, these minimizers correspond to feasible solutions of the inverse problem in conductivity. It is as follows.
-
(1)
Take \(\gamma (\mathbf {x})\ge \gamma _0>0\) in \(\Omega \), and \(u_0\in H^{1/2}(\partial \Omega )\) freely.
-
(2)
Solve the problem
$$\begin{aligned} {\text {div}}(\gamma \nabla u)=0\hbox { in }\Omega ,\quad u=u_0\hbox { on }\partial \Omega , \end{aligned}$$
and compute the tangential derivative \(w_0=\nabla u\cdot \mathbf {t}\) around \(\partial \Omega \).
-
(3)
Find the solution of the Neumann problem
$$\begin{aligned} {\text {div}}\left( \frac{1}{\gamma }\nabla v\right) =0\hbox { in }\Omega ,\quad \frac{1}{\gamma }\nabla v\cdot \mathbf {n}=w_0\hbox { on }\partial \Omega , \end{aligned}$$(3.2)
under some normalization condition.
-
(4)
Take \(v_0=\left. v\right| _{\partial \Omega }\).
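As a sketch of what the procedure delivers, the following sympy fragment walks through a hand-computed instance; the choices \(\gamma =1+y^2\) and \(u=x\) (so that \(u_0\) is the trace of x) are assumptions made here purely for illustration.

```python
# Symbolic walk-through of the synthetic-data procedure on an assumed example:
# gamma = 1 + y**2, u = x. The resulting pair (u, v) satisfies (1.11),
# illustrating Proposition 3.2.
import sympy as sp

x, y = sp.symbols('x y', real=True)
R = sp.Matrix([[0, -1], [1, 0]])

gamma = 1 + y**2
u = x                 # step (2): u solves div(gamma grad u) = 0 with u0 = x on the boundary
v = y + y**3 / 3      # step (3): a solution of the conjugate equation div((1/gamma) grad v) = 0

grad = lambda f: sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
div = lambda V: sp.diff(V[0], x) + sp.diff(V[1], y)

assert sp.simplify(div(gamma * grad(u))) == 0    # conductivity equation for u
assert sp.simplify(div(grad(v) / gamma)) == 0    # conjugate conductivity equation for v
# (1.11): (|grad v|/|grad u|) grad u + R grad v = 0, so m = 0 is attained
ratio = sp.sqrt(grad(v).dot(grad(v)) / grad(u).dot(grad(u)))
assert sp.simplify(ratio * grad(u) + R * grad(v)) == sp.zeros(2, 1)
assert sp.simplify(ratio - gamma) == 0           # the coefficient is recovered via (1.10)
```

A finite-element or finite-difference realization of steps (2)–(3) would replace these closed-form solves, but the verified identities are exactly the ones the procedure is designed to enforce.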
In practice, if \(\gamma (\mathbf {x})\) is a true, unknown conductivity coefficient, and \(u_0\in H^{1/2}(\partial \Omega )\) is given, the response
around \(\partial \Omega \) is the data measured, where
Note that the relationship between \(\overline{v}_0\) and our boundary datum \(v_0\), coming from the solution of (3.2) or from (1.6), is given through
where integration is performed around \(\partial \Omega \), and \(\mathbf {x}_0\) is an arbitrarily chosen point on \(\partial \Omega \). Uncertainty on measurement for \(\overline{v}_0\) is transmitted to \(v_0\).
Proposition 3.2
Suppose boundary data \((u_0, v_0)\) are chosen as just indicated. Then the pair of functions (u, v), computed through the process, is a solution of (1.1) and (1.2), for which (1.11) is valid. In particular, \(m=0\) is attained.
Proof
The proof amounts to writing
for some \(w\in H^1(\Omega )\), unique up to an additive constant. As a consequence of the previous vector equation, it is immediate to check that w must also be a solution of problem (3.2), and so the difference between w and v is a constant, i.e.
\(\square \)
This will be our method to generate synthetic boundary data for numerical experiments for which the inverse conductivity problem is meaningful. Note that if, once the pair (u, v) has been generated furnishing a couple of feasible boundary data for the inverse problem, we change to some other pair \((\tilde{u}, \tilde{v})\) preserving boundary information
the use of an iterative procedure to drive our functional (2.4) to the corresponding local minimum starting from \((\tilde{u}, \tilde{v})\) will provide, through (1.10), a coherent conductivity coefficient if I in (2.4) is driven to zero. It might happen, however, that the computed optimal \(\tilde{\gamma }\) is different from the \(\gamma \) we started with. This could be an indication, and a consequence, of the non-uniqueness of the inverse problem in conductivity with one single measurement, or with a finite number of them. We will return to this point in the section on numerical experiments.
As a matter of fact, our scheme above is the only way to produce boundary-data pairs for which \(m=0\) is attained for our functional (2.4). This is pretty clear after all of our previous discussions. Indeed, if we had
then
for a.e. \(\mathbf {x}\in \Omega \). This is like saying
for a.e. \(\mathbf {x}\in \Omega \). The standard Cauchy-Schwarz inequality for the inner product forces vector equation (1.6) for some non-negative conductivity coefficient \(\gamma \) (which would be the quotient \(|\nabla u_2|/|\nabla u_1|\)), and then \(u_1\) and \(u_2\) are precisely the functions generated by our procedure for such coefficient \(\gamma \).
On the other hand, Theorem 2.1 indicates that it suffices to provide a mapping
with positive determinant \(\det \nabla \mathbf {u}_0>0\), furnishing boundary data, to be sure that the infimum m vanishes. Most of the time, when such a mapping \(\mathbf {u}_0\) is given in a casual way, \(m=0\) will not be a minimum, since for a minimizing sequence of pairs \(\mathbf {u}_j\) the sequence of scalars
may only G-converge [24] to some homogenized elliptic, symmetric matrix
Indeed, we can generalize our mechanism to generate interesting boundary data replacing \(\gamma (\mathbf {x})\) by an elliptic, symmetric matrix \(\Gamma (\mathbf {x})\):
-
(1)
Take \(\Gamma (\mathbf {x})\ge \gamma _0\mathbf {1}\) (in the sense of symmetric matrices), \(\gamma _0>0\) in \(\Omega \), and \(u_0\in H^{1/2}(\partial \Omega )\) freely.
-
(2)
Solve the problem
$$\begin{aligned} {\text {div}}(\Gamma \nabla u)=0\hbox { in }\Omega ,\quad u=u_0\hbox { on }\partial \Omega , \end{aligned}$$
and compute the tangential derivative \(w_0=\nabla u\cdot \mathbf {t}\) around \(\partial \Omega \).
-
(3)
Find a solution of the Neumann problem
$$\begin{aligned} {\text {div}}(\Gamma _l\nabla v)=0\hbox { in }\Omega ,\quad \Gamma _l\nabla v\cdot \mathbf {n}=w_0\hbox { on }\partial \Omega , \end{aligned}$$(3.4)
with \(\Gamma _l=\mathbf {R}\Gamma ^{-1}\mathbf {R}\).
-
(4)
Take \(v_0=\left. v\right| _{\partial \Omega }\).
For such a pair \((u_0, v_0)\), one would expect that \(m=0\) is not attained, as in a typical homogenization process [25].
4 The multi-measurement situation
We recall very briefly that in this case the system for an unknown field
is
for \(j=1, 2, \dots , N\), where
We also have an underlying functional whose Euler–Lagrange system is precisely (4.1) and (4.2), namely
We would have a parallel method, applied componentwise, to generate synthetic sets of data for which \(I_N\) in (4.3) vanishes. It is interesting to note that the formula for the recovered \(\gamma (\mathbf {x})\) in the multimeasurement situation is
so the information coming from every measurement for \(j=1, 2, \dots , N\) is taken into account.
5 Approximation
In this section, we focus on various numerical examples to solve the non-linear system of PDEs
that corresponds to the solution of the associated inverse problem
If the boundary data \(\mathbf {u}_{0}\) are “well-chosen” in the sense that inverse problem (5.2) is well-posed (for instance, following the procedure in Sect. 3.1), then, as indicated above, one solution of the inverse problem is given precisely by
Then, we propose to solve numerically the non-linear system of PDEs (5.1) in order to get a solution for the inverse problem (5.2). We consider the weak formulation for system (5.1) which can be expressed in the form
which should be valid for every \(\mathbf {v}\in H^1_0(\Omega ;\mathbb {R}^2)^N\).
We will use a standard Newton–Raphson algorithm, in order to solve (5.1), or equivalently (5.3).
We want to find functions \(\mathbf {u}\in H^1(\Omega ;\mathbb {R}^2)^N\) such that
for all \(\mathbf {v}\in H^1_0(\Omega ;\mathbb {R}^2)^N\). The Newton–Raphson algorithm, to solve numerically the nonlinear problem (5.3), is the following:
-
(1)
We choose an admissible initialization \(\mathbf {u}^0\in H^1(\Omega ;\mathbb {R}^2)^N\).
-
(2)
Iterate until convergence \(\left( I(\mathbf {u}^k)< tol \hbox { or } \frac{||\mathbf {w}^k||_{\infty }}{||\mathbf {u}^k||_{\infty }}<tol \right) \):
-
take \(\mathbf {w}^k\in H^1_0(\Omega ;\mathbb {R}^2)^N\) such that
$$\begin{aligned} D\mathcal {L}(\mathbf {u}^k,\mathbf {v})\,\mathbf {w}^k=\mathcal {L}(\mathbf {u}^k,\mathbf {v}), \end{aligned}$$
for every \(\mathbf {v}\in H^1_0(\Omega ;\mathbb {R}^2)^N\), where \(D\mathcal {L}(\mathbf {u},\mathbf {v})\,\mathbf {w}\) is defined by
$$\begin{aligned} \begin{array}{ll} D\mathcal {L}(\mathbf {u},v^{(j)})\,w^{(j)}=\\ \displaystyle \int _\Omega \frac{|\nabla \mathbf {u}_2(\mathbf {x})|}{|\nabla \mathbf {u}_1(\mathbf {x})|} \left( \nabla v_{1}^{(j)}\cdot \nabla w_{1}^{(j)}+\nabla v_1^{(j)}\cdot \nabla u_1^{(j)}\left( {\displaystyle \nabla u_2^{(j)}\cdot \nabla w_2^{(j)}\over \displaystyle |\nabla \mathbf {u}_2(\mathbf {x})|^2}-{\displaystyle \nabla u_1^{(j)}\cdot \nabla w_1^{(j)}\over \displaystyle |\nabla \mathbf {u}_1(\mathbf {x})|^2}\right) \right) \,d\mathbf {x}\\ +\displaystyle \int _\Omega \frac{|\nabla \mathbf {u}_1(\mathbf {x})|}{|\nabla \mathbf {u}_2(\mathbf {x})|} \left( \nabla v_{2}^{(j)}\cdot \nabla w_{2}^{(j)}+\nabla v_2^{(j)}\cdot \nabla u_2^{(j)}\left( {\displaystyle \nabla u_1^{(j)}\cdot \nabla w_1^{(j)}\over \displaystyle |\nabla \mathbf {u}_1(\mathbf {x})|^2}-{\displaystyle \nabla u_2^{(j)}\cdot \nabla w_2^{(j)}\over \displaystyle |\nabla \mathbf {u}_2(\mathbf {x})|^2}\right) \right) \,d\mathbf {x};\\ \end{array} \end{aligned}$$(5.4)
-
- update
$$\begin{aligned} \mathbf {u}^{k+1}=\mathbf {u}^k-\mathbf {w}^k. \end{aligned}$$
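The loop above can be sketched in finite dimensions. The following Python illustration (not the paper's FreeFem++ implementation) uses a toy nonlinear system \(F(\mathbf {u})=0\) standing in for the variational problem (5.3), with the same two stopping tests.

```python
import numpy as np

# Finite-dimensional sketch of the Newton-Raphson loop above: solve F(u) = 0
# by repeatedly solving the linearized system DF(u^k) w^k = F(u^k) and
# updating u^{k+1} = u^k - w^k.

def newton_raphson(F, DF, u0, tol=1e-10, max_iter=50):
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        r = F(u)
        # First stopping test: the residual ("cost") is already small.
        if np.linalg.norm(r) < tol:
            break
        w = np.linalg.solve(DF(u), r)   # linearized problem for the correction
        u = u - w
        # Second stopping test: relative size of the correction.
        if np.linalg.norm(w, np.inf) / max(np.linalg.norm(u, np.inf), 1e-30) < tol:
            break
    return u

# Toy example: F(u) = (u1^2 + u2^2 - 2, u1 - u2) has the root (1, 1).
F = lambda u: np.array([u[0]**2 + u[1]**2 - 2.0, u[0] - u[1]])
DF = lambda u: np.array([[2.0 * u[0], 2.0 * u[1]], [1.0, -1.0]])
root = newton_raphson(F, DF, [2.0, 0.5])
```

In the actual computation, `F` and `DF` correspond to the weak form \(\mathcal {L}\) and its linearization (5.4), assembled by the finite element package.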
We would like to draw the reader’s attention to the fully non-linear and non-convex character of the system of PDEs (5.1). This character is tied to the existence, or lack thereof, of solutions when the boundary data \(\mathbf {u}_{0}^{(j)}=(u_{0,1}^{(j)},u_{0,2}^{(j)}), \, j=1,\dots ,N\) are not “well-chosen”. Moreover, even when existence of solutions holds, a crucial issue is uniqueness: different solutions may be expected. The lack of uniqueness of solutions for Calderón’s problem with discontinuous coefficients and a finite number of measurements is well-known. This fact must be reflected in a lack of uniqueness of solutions for the corresponding non-linear system of PDEs. From our formulation, a solution \(\mathbf {u}\in H^1(\Omega ;\mathbb {R}^2)^N\) of the system of PDEs (5.1) such that \(I(\mathbf {u})=0\) will provide a density \(\gamma \) solving the inverse problem (5.2). We can thus establish a direct connection between the existence (or not) and uniqueness (or not) of solutions of the non-linear system of PDEs and of the inverse problem.
As a matter of fact, we have considered different strategies for the numerical resolution of the inverse problem (5.2), in addition to the Newton–Raphson scheme. We have also explored gradient descent algorithms (conjugate gradient or optimal step) for the functional I in (2.4) in order to approximate optimal solutions for the variational problems in (2.12). For the non-linear system of PDEs (5.1), we have also examined a fixed-point algorithm, given its simplicity, as follows:
(1) choose an admissible initialization \(\mathbf {u}^0\in H^1(\Omega ;\mathbb {R}^2)^N\);
(2) iterate until convergence \(\left( I(\mathbf {u}^k)< tol \right) \):
- take
$$\begin{aligned} \gamma ^k= \displaystyle \frac{|\nabla \mathbf {u}_2^k(\mathbf {x})|}{|\nabla \mathbf {u}_1^k(\mathbf {x})|}; \end{aligned}$$
- solve
$$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle -\mathrm{div}\left( \gamma ^k\nabla u_1^{(j),k+1}\right) =0 \text { in }\Omega ,\quad u_1^{(j),k+1}=u_{0,1}^{(j)}\text { on } \partial \Omega , \\ \displaystyle -\mathrm{div}\left( \frac{1}{\gamma ^k}\nabla u_2^{(j),k+1}\right) =0 \text { in }\Omega ,\quad u_2^{(j),k+1}=u_{0,2}^{(j)}\text { on } \partial \Omega , \end{array} \right. \quad j=1,\dots ,N. \end{aligned}$$
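The core of the fixed-point scheme is the pointwise update \(\gamma ^k=|\nabla \mathbf {u}_2^k|/|\nabla \mathbf {u}_1^k|\). A minimal Python sketch of this update (assuming gradients are already available at a set of discretization points; this is not the paper's FreeFem++ code) also illustrates why driving I to zero certifies a solution: when the pair satisfies the 2D conjugate relation \(\nabla u_2=\gamma \,R\nabla u_1\), with R the 90° rotation, the update recovers \(\gamma \) exactly.

```python
import numpy as np

# Pointwise sketch of the update gamma^k = |grad u2| / |grad u1|.

def gamma_update(grad_u1, grad_u2, eps=1e-14):
    """grad_u1, grad_u2: arrays of shape (npts, 2); returns gamma per point."""
    n1 = np.linalg.norm(grad_u1, axis=1)
    n2 = np.linalg.norm(grad_u2, axis=1)
    return n2 / np.maximum(n1, eps)

# Synthetic check: pick gamma and grad u1, build grad u2 by the conjugate
# relation grad u2 = gamma * R grad u1; the update then returns gamma.
rng = np.random.default_rng(0)
gamma_true = 5.0 + 5.0 * rng.random(100)      # values in (5, 10), as in the examples
g1 = rng.standard_normal((100, 2))
R = np.array([[0.0, -1.0], [1.0, 0.0]])       # counterclockwise 90-degree rotation
g2 = gamma_true[:, None] * (g1 @ R.T)
gamma_rec = gamma_update(g1, g2)
```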
Concerning the practical behavior of these three possibilities, it is our experience that all three algorithms behave well and furnish convergence, in the sense that I in (2.4) converges to zero. As discussed earlier, this is our main certificate of convergence. However, the Newton–Raphson method and the fixed-point algorithm are faster than their gradient descent counterpart. We therefore stick to the Newton–Raphson method for our numerical experiments.
It is also appropriate to insist, in addition to the non-linear character of the problem, on the lack of convexity (quasiconvexity) of the underlying functional. This also has important consequences for the numerical approximation, making simulations harder than might be expected. The procedure for choosing boundary data described in Sect. 3.1 guarantees the existence of at least one solution for the chosen boundary data; equivalently, one can be sure that there exists
such that \(I(\mathbf {u}_{1},\mathbf {u}_{2})=0\) for the vector functional I defined in (2.4). As we have shown in Sect. 2, the vector variational problem for I is non-convex, even non-quasiconvex, and therefore there is no way to ensure the uniqueness of minimizers. In this sense, it is important to realize that our numerical algorithm might recover a minimizer different from the expected one, namely the one used to generate the synthetic boundary data. In this regard, the initialization used in the computations will steer them to one solution or another. The important point, nonetheless, is that our certificate of convergence, the functional being driven to zero, remains valid.
5.1 Numerical experiments
We would like to apply the Newton–Raphson algorithm described above to find a solution of the inverse problem (5.2), that is, to solve the non-linear system of partial differential equations (5.1). Our numerical experiments have been implemented using the free software FreeFem++ v3.56 (see [26] and http://www.freefem.org/).
We consider a density \(\gamma \) and boundary data \(\mathbf {u}_{0}\in H^{1/2}(\partial \Omega ;\mathbb {R}^2)^N\) associated with it through (5.2). For a fixed number of experiments \(N \ge 1\), we select boundary data following the strategy of Sect. 3.1:
- Take any density \(\gamma \in L^\infty (\Omega )\), and a function \(u_{0,1}\in H^{1/2}(\partial \Omega ) \).
- Let \(\mathbf {R}_\delta :\mathbb {R}^2\rightarrow \mathbb {R}^2\) be the counterclockwise rotation in the plane of angle \(\delta \). Take:
$$\begin{aligned} u_{0,1}^{(j)}(x,y)=u_{0,1}(\mathbf {R}_{\delta _j}(x,y))\quad \delta _j=2\pi \frac{j-1}{N}\in [0,2\pi ) \quad j=1,\dots ,N, \end{aligned}$$where the \(\delta _j\) represent the different angles of rotation.
- Solve the problem
$$\begin{aligned} {\text {div}}(\gamma \nabla u_1^{(j)})=0\hbox { in }\Omega ,\quad u^{(j)}_1=u_{0,1}^{(j)}\hbox { on }\partial \Omega ,\quad j=1,\cdots , N. \end{aligned}$$
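The rotation step above can be sketched as follows in Python; the base profile `u01` and the boundary sampling below are hypothetical illustrative choices, not the data used in the paper.

```python
import numpy as np

# Sketch of the rotated boundary data u_{0,1}^{(j)}(x, y) = u_{0,1}(R_{delta_j}(x, y))
# with delta_j = 2*pi*(j-1)/N, evaluated at a sample of boundary points.

def rotated_boundary_data(u01, pts, N):
    """pts: array (npts, 2) of boundary points; returns a list of N arrays."""
    data = []
    for j in range(1, N + 1):
        d = 2.0 * np.pi * (j - 1) / N
        # counterclockwise rotation of angle delta_j
        R = np.array([[np.cos(d), -np.sin(d)],
                      [np.sin(d),  np.cos(d)]])
        data.append(u01(pts @ R.T))
    return data

u01 = lambda p: p[:, 0] * p[:, 1]       # hypothetical base datum u_{0,1}(x, y) = x*y
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
pts = np.column_stack([np.cos(theta), np.sin(theta)])   # unit-circle boundary sample
data = rotated_boundary_data(u01, pts, N=3)
```

For \(j=1\) the angle is zero, so the first datum coincides with the unrotated profile.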
We consider
with \(\mathbf {u}_{0,1}\in H^{1/2}(\partial \Omega )^N.\) To determine \(\mathbf {u}_{0,2}\in H^{1/2}(\partial \Omega )^N,\) we solve problems
under the normalization condition \(\displaystyle \int _\Omega u(x)\,dx=0\), and take
where \(\mathbf {t}\) is the counterclockwise tangential vector to \(\partial \Omega \).
In this way, having in mind Theorem 2.1, we ensure that \(\gamma \) solves inverse problem (5.2) associated with the boundary data
and therefore these are appropriate boundary conditions to perform numerical experiments. In order to get more realistic results in our experiments, we use two different meshes: a finer mesh to build the data (via the above procedure yielding \(\mathbf {u}_{0}\)), and a coarser mesh to perform the reconstruction and find the optimal density \(\gamma \), see Fig. 1. The mesh for searching optimal solutions has 3633 nodes and 7064 triangles, while the one used to build boundary data has 14038 nodes and 27674 triangles.
Our domain of reference will always be
and, most of the time,
where \(D\subset \Omega \), \(\chi _D\) is the corresponding characteristic function, and \(0<\alpha <\beta \) are two constants representing the electrical properties of the body and of a certain unknown inclusion. In the jargon of Electrical Impedance Tomography, \(\Omega \) may be viewed as a region/body that contains healthy and ill tissues, while D represents the set of tumor cells. In our academic examples, we consider \(\alpha =5\) and \(\beta =10\). We use \(P_2\)-Lagrange finite element approximations for the solutions \(\mathbf {u}\). We have checked that numerical quadrature errors matter when seeking a sharp approximation of the solution of the inverse problem; for this reason, all 2D integrals are computed with high-precision quadrature formulae of order six (see [27]).
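A density of this type, assuming the usual form \(\gamma =\alpha +(\beta -\alpha )\chi _D\) with \(\alpha =5\), \(\beta =10\), can be evaluated pointwise as in the following sketch; the circular inclusion D (its center and radius) is a hypothetical choice for illustration.

```python
import numpy as np

# Sketch of the piecewise-constant density gamma = alpha + (beta - alpha) * chi_D.

ALPHA, BETA = 5.0, 10.0

def gamma_piecewise(pts, center=(0.0, 0.0), radius=0.4):
    """pts: array (npts, 2); returns gamma evaluated at each point."""
    chi_D = np.linalg.norm(pts - np.asarray(center), axis=1) < radius
    return ALPHA + (BETA - ALPHA) * chi_D

pts = np.array([[0.0, 0.0], [0.9, 0.0]])   # one point inside D, one outside
vals = gamma_piecewise(pts)
```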
In the following, and taking into account the procedure to generate admissible boundary data in Sect. 3.1, we take an arbitrary function like
and consider various choices for \(\gamma \) and for the number of measurements N. We stick to the convergence criterion of optimal costs lower than \(tol=1.0 \times 10^{-5}\).
Given a target function \(\gamma \) and the associated boundary conditions (5.6), one important aspect is the choice of initialization for the Newton–Raphson algorithm. The way we choose the initialization steers the search for optimal solutions and opens the possibility of finding different ones. Note that the local convergence character of the Newton–Raphson method forces us to select an initialization sufficiently close to a solution. The non-linear and non-(quasi)convex nature of the system of PDEs we are dealing with (and therefore of the related inverse problem) is responsible for the behavior of the Newton–Raphson method, as one expects different solutions for different initializations. In this regard, we have considered two distinct ways to select the initialization:
Initialization 1 We take
with \(\mathbf {V}_i\) an arbitrary function in \(H^1_0(\Omega ;\mathbb {R})^N\), and \(\tilde{\mathbf {u}}_{0}\in H^1(\Omega ,\mathbb {R}^2)^N\) an extension of \( \mathbf {u}_{0}\in H^{1/2}(\partial \Omega ;\mathbb {R}^2)^N\), so that \(\mathbf {u}^0\) and \(\mathbf {u}_{0}\) share the same boundary information. For \(\mathbf {V}_i\), we take an arbitrary function \(f_i^{(j)}\) (for instance, \(f_1^{(j)}(x,y)=e^{x^2} y^2\sin (x)\) and \(f_2^{(j)}(x,y)=1\)), and consider \(V_i\) the solution of:
Initialization 2 We solve the problems:
and take:
We have treated the following numerical examples.
Example 1
The case of one isolated circular tumor. We start the first numerical simulation for \(\gamma \) as in (5.7) with
In Fig. 2, we show the target layout at the top left, and the computed optimal densities \(\gamma \) for \(N=1, 2, 3\), at top right, bottom left and bottom right, respectively. We have used the process described as Initialization 1 to choose the initialization functions for the Newton–Raphson algorithm. From the numerical point of view, we have noticed that, in our simulations, the Newton method is very quick in the reconstruction process. For the cases of one or two measurements (\(N=1,2\)), we have used a version of the damped-Newton method in order to get satisfactory results. For stationary problems, the number of measurements plays an important role in the efficiency of our strategy. With this observation in mind, and to maintain a low computational cost, we perform the rest of the experiments with \(N=3\). In Fig. 4, we show the cost evolution for the experiment corresponding to \(N=3\) measurements, considering the two initialization procedures described above. We would like to remark that, within few iterations, the algorithm yields a very good qualitative reconstruction of the inclusion \(\gamma \); afterwards, in order to reach a very sharp approximation, the process is much slower (see Fig. 4).
On the other hand, if we use the process described as Initialization 2 for the Newton–Raphson algorithm, we find another optimal solution whose geometry is very different from the target \(\gamma \) used to generate the admissible boundary data, see Fig. 3. This phenomenon is associated with the lack of uniqueness of solutions of the fully non-linear system of PDEs (5.1), and in particular of the underlying inverse problem, and with the local character of convergence of the numerical scheme implemented, the Newton–Raphson method.
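The damped-Newton variant mentioned above for \(N=1,2\) can be sketched as a standard backtracking line search on the cost (this is one plausible damping strategy, not necessarily the exact one used in our implementation): instead of the full step \(\mathbf {u}-\mathbf {w}\), take \(\mathbf {u}-s\,\mathbf {w}\) with the largest \(s\in \{1,1/2,1/4,\dots \}\) that decreases the cost I.

```python
# Sketch of one damped-Newton step: halve the step length s until the
# candidate update decreases the cost I, starting from the full step s = 1.

def damped_newton_step(I, u, w, s_min=2.0**-10):
    s = 1.0
    while s >= s_min:
        if I(u - s * w) < I(u):
            return u - s * w, s
        s *= 0.5
    return u, 0.0          # no admissible step length found

# Toy scalar cost I(u) = u^4 at u = 1 with an exaggerated correction w = 2:
# the full step overshoots (I(-1) = I(1)), but the half step lands at the minimum.
I = lambda u: float(u**4)
u_new, s = damped_newton_step(I, 1.0, 2.0)
```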
Example 2
Two isolated circular tumors. In this situation, we consider \(\gamma \) as in (5.7) with \(D=D_1\cup D_2\), and
With such a choice, we intend to simulate the case of two isolated circular tumors of different sizes. Figure 5 shows the target layout on the left, and the computed optimal value of \(\gamma \) on the right.
Example 3
One isolated tumor with a rectangular inclusion. This time, we consider a more complicated \(\gamma \) given by
with
and the rectangular inclusion \(D_1\subset D\),
The conductivity parameter is taken to be
We thus try to simulate the case where there is a single tumor, but with two different densities. We assume that in the internal part of the tumor there is necrosis, so that the parameter \(\beta \) is different (greater) there, and the inclusion has a singular rectangular shape. Figure 6 shows the target layout on the left, and the computed optimal value of \(\gamma \) on the right.
Example 4
Continuous densities. In this situation, we want to check our strategy for other types of densities. We made some experiments for the case where \(\gamma \) is not a piecewise-constant function but a smoothly varying density taking a continuous range of values. In particular, we consider the case
the unit ball. We present in Fig. 7 the numerical result: on the left, the continuous target density \(\gamma \); and on the right, the computed value of \(\gamma \).
Example 5
Microstructures: laminates. We would like to show the efficiency of our reconstruction method for more complicated situations. In all previous numerical experiments, we have selected the boundary data and the density \(\gamma \) so that the infimum of the variational problem (2.12) is equal to zero and it is attained, i.e., the minimization problem is well-posed in the sense that the minimum is equal to zero. We would like to test the behavior of our numerical algorithm for the case in which the value of the infimum is equal to zero, but it is not a minimum. We can force such situations by using microstructures, i.e. highly oscillatory sequences.
We consider the laminate composite generated by two phases \(\alpha \) and \(\beta \), a volume fraction \(\theta \), and a unit vector \(\mathbf {n}\) giving the direction of lamination [28,29,30,31]. From the microscopic point of view, for this kind of material the component phases \(\alpha \) and \(\beta \) are stacked in slices orthogonal to the direction \(\mathbf {n}\), in proportions \(\theta \) and \(1-\theta \), respectively. At the macroscopic level, this kind of material is characterized, from a mathematical point of view, in the context of Homogenization Theory [32,33,34] through the homogenized matrix
We will generate boundary data associated with homogenized matrices \(A_*\) following the strategy in Sect. 3.1:
- Take any homogenized matrix \(A_*\) as above, and a function \(u_{0,1}\in H^{1/2}(\partial \Omega ) \).
- If \(\mathbf {R}_\delta :\mathbb {R}^2\rightarrow \mathbb {R}^2\) is the \(\delta \)-rotation in the plane, we take
$$\begin{aligned} u_{0,1}^{(j)}(x,y)=u_{0,1}(\mathbf {R}_{\delta _j}(x,y)),\quad \delta _j=2\pi \frac{j-1}{N}\in [0,2\pi ), \quad j=1,\dots ,N, \end{aligned}$$where \(\delta _j\) represents the different angles of rotation. In this way, we build the data set \(\mathbf {u}_{0,1}\in H^{1/2}(\partial \Omega )^N.\)
- We solve the problems
$$\begin{aligned} {\text {div}}(A_*\nabla u_1^{(j)})=0\hbox { in }\Omega ,\quad u^{(j)}_1=u_{0,1}^{(j)}\hbox { on }\partial \Omega ,\quad j=1,\cdots , N. \end{aligned}$$
To generate \(\mathbf {u}_{0,2}\in H^{1/2}(\partial \Omega )^N,\) we consider the matrix \(M_l=R\cdot ({A_*}^{-1}\cdot R)\), and we solve the family of problems
where \(\mathbf {t}\) is the tangential, counterclockwise unit vector to \(\partial \Omega \). We take
In this way we ensure that \(\gamma =A_*\), which is not a density (it is a matrix), solves the inverse problem (5.2) associated with boundary data
This time, the infimum for the variational problem (2.12) is equal to zero, but it is not a minimum, i.e., the infimum is not attained using scalar densities \(\gamma \). Only through homogenized matrices can the infimum be achieved.
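For a simple rank-one laminate of two isotropic phases, the homogenized matrix takes the classical form with the harmonic mean of the conductivities along the lamination direction \(\mathbf {n}\) and the arithmetic mean across it. The sketch below (assuming (5.14) is of this standard form) computes such a matrix for the phase values used in our examples.

```python
import numpy as np

# Sketch of the classical rank-one laminate formula: harmonic mean along the
# lamination direction n, arithmetic mean in the orthogonal direction t.

def laminate_matrix(alpha, beta, theta, n):
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    t = np.array([-n[1], n[0]])                            # orthogonal direction
    lam_h = 1.0 / (theta / alpha + (1.0 - theta) / beta)   # harmonic mean
    lam_a = theta * alpha + (1.0 - theta) * beta           # arithmetic mean
    return lam_h * np.outer(n, n) + lam_a * np.outer(t, t)

# Phases alpha = 5, beta = 10 as in the examples; theta and n chosen for illustration.
A = laminate_matrix(5.0, 10.0, 0.5, n=[1.0, 0.0])
```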
For our specific simulations, we consider a homogenized matrix defined by (5.14), with
In Fig. 8, we present the value of \(\gamma \) for the initialization built with the procedure described above. In Figs. 9 and 10, we show the intermediate configurations of \(\gamma \) at different steps of the Newton–Raphson algorithm, for lamination orientations corresponding to the angles \(\frac{\pi }{4}\) and \(\frac{3\pi }{4}\), respectively.
In the case of laminates, we know that the inverse problem (5.2) is ill-posed, in the sense that, in order to recover the optimal coefficient \(\gamma \), it is necessary to enlarge the class of admissible solutions to the set of homogenized matrices. When we try to solve the optimality system (5.1) using the Newton–Raphson algorithm, we expect that, after some iterations, the algorithm will veer off. To check this behavior, we show intermediate pictures corresponding to the layout at different steps of the iterative method. We note how our numerical algorithm keeps the right direction of lamination given by the vector \(\mathbf {n}\). It is important to stress that, in order to get a sharp reconstruction of the direction of lamination, the size and orientation of the mesh play an important role, so as to avoid predetermined directions imposed by the mesh itself.
5.2 Some final remarks
We would like to comment on some numerical issues concerning our strategy to solve inverse problems.
We would like to assess the influence of numerical errors in the data on the algorithm, i.e. to test how robust our algorithm is. To this end, we have introduced perturbations of order \(0.1\%\) and \(0.15\%\) on the prescribed boundary condition \(u_0\), and have considered simulations for Example 1, where \(\gamma \) represents one isolated tumor (Fig. 1). The results for the perturbed simulations are shown in Fig. 11. From the pictures, we can see how the algorithm converges to a new, perturbed solution, the size of this perturbation being similar to that of the boundary data. We noticed that, for larger perturbations, the algorithm does not converge, and this happens with both Newton–Raphson and the fixed-point alternative. Our interpretation of this phenomenon stresses the ill-posed character of the inverse problem, in the sense that the perturbed boundary data may no longer be “well-chosen” data for which the inverse problem admits a solution.
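The perturbation of the boundary datum can be sketched as follows; the multiplicative uniform-noise model below is one plausible choice, used here for illustration only.

```python
import numpy as np

# Sketch of the robustness test: perturb the boundary datum u_0 by a relative
# amount of a given order (0.1% or 0.15% in our experiments).

def perturb_boundary_data(u0, level, rng):
    """u0: array of boundary values; level: e.g. 0.001 for a 0.1% perturbation."""
    return u0 * (1.0 + level * rng.uniform(-1.0, 1.0, size=u0.shape))

rng = np.random.default_rng(1)
u0 = np.sin(np.linspace(0.0, 2.0 * np.pi, 128))   # illustrative boundary datum
u0_pert = perturb_boundary_data(u0, 0.001, rng)   # 0.1% relative perturbation
```

By construction, the pointwise relative error never exceeds the prescribed level.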
All of our numerical experiments are run with the Newton–Raphson algorithm described above. A very important aspect of this kind of iterative method is the role of the initialization defined in (5.9). We recall that, as initialization, we can take any function which verifies the prescribed boundary condition \(\mathbf {u}_0\). In this work we have presented two different alternatives to choose this initialization, valid for any value of N, the number of boundary measurements. We have checked that, depending on the initialization procedure, different solutions are found. This phenomenon is associated with the fully non-linear and non-convex character of the system of PDEs, but the fact that the cost functional is in all cases driven to zero guarantees that our computations capture a true solution, even though it may not be the expected one.
References
Ball, J.M.: Some Open Problems in Elasticity, Geometry, Mechanics, and Dynamics. Springer, New York (2002)
Ciarlet, P.G.: Mathematical elasticity, Vol. I: Three-dimensional elasticity, North-Holland, (1988)
Valent, T.: Boundary Value Problems of finite elasticity. In: Local Theorems on Existence, Uniqueness, and Analytic Dependence on Data. Springer Tracts in Natural Philosophy, vol.31. Springer, New York (1988)
Rindler, F.: Calculus of Variations. Universitext. Springer, Cham (2018)
Astala, K., Päivärinta, L.: Calderón’s inverse conductivity problem in the plane. Ann. Math. 163(1), 265–299 (2006)
Calderón, A.P.: On an inverse boundary value problem. In: Seminar on numerical analysis and its applications to continuum physics (Rio de Janeiro, 1980), Brazilian mathematical society , Rio de Janeiro, pp. 65–73 (1980)
Alessandrini, G.: Stable determination of conductivity by boundary measurements. Appl. Anal. 27(1–3), 153–172 (1988)
Borcea, L.: Electrical impedance tomography. Inverse Probl. 18(6), R99–R136 (2002)
Cheney, M., Isaacson, D., Newell, J.C.: Electrical impedance tomography. SIAM Rev. 41, 85–101 (1999)
Faraco, D., Kurylev, Y., Ruiz, A.: G-convergence, Dirichlet to Neumann maps and invisibility. J. Funct. Anal. 267(7), 2478–2506 (2014)
Kohn, R.V., Vogelius, M.: Determining conductivity by boundary measurements. II. Interior results. Commun. Pure Appl. Math. 38, 643–667 (1985)
Uhlmann, G.: Electrical impedance tomography and Calderón’s problem. Inverse Probl. 25(12), 123011 (2009)
Isakov, V.: Inverse Problems for Partial Differential Equations. Applied Mathematical Sciences, 2nd edn. Springer, New York (2006)
Nachman, A.: Global uniqueness for a two-dimensional inverse boundary value problem. Ann. Math. 143, 71–96 (1996)
Alessandrini, G., de Hoop, M.V., Gaburro, R., Sincich, E.: Lipschitz stability for the electrostatic inverse boundary value problem with piecewise linear conductivities. J. Math. Pures Appl. 107(5), 638–664 (2017)
Barceló, T., Faraco, D., Ruiz, A.: Stability of Calderón inverse conductivity problem in the plane. J. Math. Pures Appl. 88(6), 522–556 (2007)
Clop, A., Faraco, D., Ruiz, A.: Stability of Calderón’s inverse conductivity problem in the plane for discontinuous conductivities. Inverse Probl. Imaging 4(1), 49–91 (2010)
Ammari, H., Kang, H.: Reconstruction of Small Inhomogeneities from Boundary Measurements. Lecture Notes in Mathematics. Springer, Berlin (2004)
Nachman, A.: Reconstructions from boundary measurements. Ann. Math. 128, 531–576 (1988)
Alberti, G.S., Capdeboscq, Y.: Lectures on elliptic methods for hybrid inverse problems. Cours Spécialisés [Specialized Courses], 25. Société Mathématique de France, Paris, (2018)
Alessandrini, G., Nesi, V.: Univalent \(\sigma \)-harmonic mappings. Arch. Ration. Mech. Anal. 158(2), 155–171 (2001)
Bauman, P., Marini, A., Nesi, V.: Univalent solutions of an elliptic system of partial differential equations arising in homogenization. Indiana Univ. Math. J. 50(2), 747–757 (2001)
Dacorogna, B.: Quasiconvexity and relaxation of nonconvex problems in the calculus of variations. J. Funct. Anal. 46(1), 102–118 (1982)
Dal Maso, G.: An Introduction to G-convergence. Progress in Nonlinear Differential Equations and their Applications. Birkhäuser Boston Inc., Boston (1993)
Cioranescu, D., Donato, P.: An Introduction to Homogenization. Oxford Lecture Series in Mathematics and its Applications. The Clarendon Press, Oxford University Press, Oxford (1999)
Hecht, F.: New development in FreeFem++. J. Numer. Math. 20(3–4), 251–266 (2012)
Taylor, M.A., Wingate, B.A., Bos, L.P.: Several new quadrature formulas for polynomial integration in the triangle. Report no. SAND2005-0034J, http://xyz.lanl.gov/format/math.NA/0501496
Christensen, T.: Mechanics of Composite Materials. John Wiley, New York (1979)
Francfort, G.A., Murat, F.: Homogenization and optimal bounds in linear elasticity. Arch. Ration. Mech. Anal. 94, 307–334 (1986)
Milton, G.: Modeling the properties of composites by laminates. In: Ericksen, J.L., et al. (eds.) Homogenization and Effective Moduli of Materials and Media, pp. 150–174. Springer, New York (1986)
Tartar, L.: Remark on homogenization. In: Ericksen, J.L., et al. (eds.) Homogenization and Effective Moduli of Materials and Media, pp. 228–246. Springer, New York (1986)
Allaire, G.: Shape Optimization by the Homogenization Method. Applied Mathematical Sciences. Springer, New York (2002)
Murat, F.: H-convergence, In: Séminaire d’Analyse Fonctionnelle et Numérique, Université d’Alger, 1977/1978, multicopied, pp. 34 ; English translation: F. Murat, L. Tartar, H-convergence. In: Cherkaev, L., Kohn, R.V. (eds.) Topics in the Mathematical Modelling of Composite Materials, In: Progress in nonlinear differential equations and their applications. Birkhäuser, Boston, pp. 21–43 (1998)
Spagnolo, S.: Sulla convergenza di soluzioni di equazioni paraboliche ed ellittiche. Ann. Scuola Norm. Sup. Pisa Cl. Sci. 22, 571–597 (1968)
Kohn, R.V., Vogelius, M.: Determining conductivity by boundary measurements. Commun. Pure Appl. Math. 37, 289–298 (1984)
Communicated by A. Malchiodi.
Maestre, F., Pedregal, P. Some non-linear systems of PDEs related to inverse problems in conductivity. Calc. Var. 60, 110 (2021). https://doi.org/10.1007/s00526-021-01945-3