1 Introduction

In this paper we deal with the solution of direct and inverse problems related to the detection of objects by nondestructive testing. A wide range of applications leads to this kind of problem: ultrasound in medicine, reflection of seismic waves in oil prospecting, crack detection in structural mechanics, locating archaeological sites, etc.

The basis of our techniques is the creation and detection of acoustic or thermal waves. We acoustically or thermally excite a medium with some objects inside. If we know the shape, size and location of these objects, the physical parameters characterizing the different materials, as well as the interaction at the interfaces separating the objects from the exterior matrix (that is, the boundary or transmission conditions at the interfaces), we can study the scattering and propagation of the waves. For instance, we can compute the total wave at some receivers located outside the objects. This is called the direct problem. It is well posed in the sense that it has a unique solution that depends continuously on the data of the problem. In practice, the goal usually is to solve the opposite problem, which will be called the inverse problem: from measurements of the total wave at some receivers, one wants to reconstruct the objects or properties related to them (for instance, the constitutive parameters of the materials). Inverse problems are ill-posed: given arbitrary data, solutions may not exist, and when they do, small changes in the measured data can lead to large changes in the reconstructions, so regularization techniques are needed.

This paper surveys the main contributions of the author to the theoretical and numerical study of direct [14–16, 22, 23, 37, 44–51] and inverse problems [5–10, 30, 31] in Acoustics and Photothermal Science.

For simplicity this paper is restricted to two-dimensional problems, but almost all the results can be extended to the three-dimensional case without difficulty. In fact, most of the author’s papers treat both situations simultaneously from a theoretical point of view.

The paper is organized in seven sections. Sections 1–5 are devoted to the study of direct problems. Sections 6 and 7 deal with the numerical solution of inverse problems. More precisely, the paper is organized as follows. In Sect. 2 we formulate several scattering problems in Acoustics and Photothermal Science. Sections 3 and 4 are devoted to the numerical solution of such problems by boundary element methods. In Sect. 5 we investigate a more general problem, where the constitutive materials inside the objects are heterogeneous (non-constant). We propose in this case a BEM–FEM coupled discretization. The last two sections deal with the numerical solution of inverse problems. We first consider the problem of detecting objects when the constitutive parameters are known, and briefly study afterwards the full problem of recovering both the objects and their parameters. Finally we consider a conductive transmission problem where we detect the level of corrosion at the interface of an inclusion using thermal measurements.

2 Direct problems

Prior to solving an inverse problem it is advisable to understand the properties of the associated forward problem. Furthermore, many of the methods used to solve inverse problems involve the iterative solution of direct problems. These often have to be solved a large number of times, which justifies an additional interest in having reliable and, if possible, fast direct solvers. On the other hand, it is also important to have error estimators that prevent both numerical solutions far from the exact solution of the equations and overly precise solutions not justified by the quality of the measurements used as data.

This paper mainly focuses on transmission problems for the Helmholtz equation in the plane. Let us begin by describing the geometrical configuration of our model problem. We have a finite number \(d\) of bounded and simply connected objects \(\Omega _1,\dots ,\Omega _d\) whose closures are pairwise disjoint. Their boundaries \(\Gamma _1,\dots ,\Gamma _d\) are assumed to be smooth, for instance \(\mathcal{C}^2\) (the presentation will be done for smooth scatterers, but most theoretical and many numerical results are easy to extend to non-smooth scatterers). These objects are immersed in the exterior medium \(\Omega _e:=\mathbb{R }^2{\setminus } \cup _{j=1}^d\overline{\Omega }_j\) (see Fig. 1).

Fig. 1

Geometry of the problem

The direct problem assumes that the objects \(\Omega _j\), the values of the parameters \(\lambda _j\), \(\lambda _e\), \(\kappa _j\), \(\kappa _e\) and the functions \(g_j^0,\,g_j^1\), for \(j=1,\dots ,d\) are known, and looks for the solution of the following problem:

$$\begin{aligned} \Delta u+\lambda _e^2 u&= 0, \text{ in } \Omega _e,\end{aligned}$$
(2.1)
$$\begin{aligned} \Delta u+\lambda _j^2 u&= 0, \text{ in } \Omega _j,\quad j=1,\dots ,d,\end{aligned}$$
(2.2)
$$\begin{aligned} u|_{\Gamma _j}^\mathrm{int}-u|_{\Gamma _j}^\mathrm{ext}&= g^0_j, \text{ on } \Gamma _j,\quad j=1,\dots ,d,\end{aligned}$$
(2.3)
$$\begin{aligned} \kappa _j\,\partial _\mathbf{n}u|_{\Gamma _j}^\mathrm{int}-\kappa _e\,\partial _\mathbf{n}u|_{\Gamma _j}^\mathrm{ext}&= g_j^1, \text{ on } \Gamma _j,\quad j=1,\dots ,d,\end{aligned}$$
(2.4)
$$\begin{aligned} r^{1/2}(\partial _ru-\imath \lambda _e u)&\rightarrow 0, \text{ as } r:=|\mathbf{x}|\rightarrow \infty . \end{aligned}$$
(2.5)

The Sommerfeld radiation condition (2.5) has to be satisfied uniformly in all directions. The solutions of exterior Helmholtz problems satisfying this condition correspond to waves that are not reflected from infinity.

The functions \(g_j^0\) and \(g_j^1\) are defined on the boundaries \(\Gamma _j\). Although in our applications they are smooth functions, in the variational/integral setting it is common to assume that \(g_j^0\in H^{1/2}(\Gamma _j)\) and \(g_j^1\in H^{-1/2}(\Gamma _j)\), where \(H^r(\Gamma _j)\) is the Sobolev space of index \(r\) on the boundary \(\Gamma _j\) (see [41]).

We assume that the parameters \(\lambda _e,\,\lambda _j,\,\kappa _e,\,\kappa _j\) appearing in (2.1)–(2.5) are such that the problem has a unique solution. General conditions for uniqueness can be found in [19] for one obstacle. The generalization to multiple obstacles is relatively straightforward.

Problem (2.1)–(2.5) appears in the study of stationary (time-harmonic) acoustic waves and of thermal waves. A time-harmonic acoustic wave is a solution of the form \(v(\mathbf{x},t)=\text{ Re}(\mathrm{v}(\mathbf{x})\exp (-\imath \omega \,t))\) of the wave equation \(\kappa \Delta v=\rho v_{tt}\), where \(\sqrt{\kappa /\rho }\) is the propagation speed of the wave in the medium and \(\rho >0\) is the density of the material. In this case \(\mathrm{v}(\mathbf{x})\) is a solution of the Helmholtz equation \(\Delta \mathrm{v}+\lambda ^2 \mathrm{v}=0\) with wave number \(\lambda :=\omega \sqrt{\rho /\kappa }\), which is real. Similarly, if \(v(\mathbf{x},t)\) is a time-harmonic solution of the heat equation \(\kappa \Delta v=\rho v_t\) (where now \(\kappa >0\) is the conductivity and \(\rho \) is the density multiplied by the specific heat), then the thermal wave \(\mathrm{v}(\mathbf{x})\) solves \(\Delta \mathrm{v}+\lambda ^2 \mathrm{v}=0\) with \(\lambda :=(1+\imath )\sqrt{\omega \rho /(2\kappa )}\). Therefore, in thermal problems the wave number lies on the complex diagonal \(\text{ Re}(\lambda )=\text{ Im}(\lambda )\).
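As a sanity check on these formulas (a small illustrative computation with made-up parameter values, not taken from the paper), one can evaluate both wave numbers and verify that the acoustic one is real while the thermal one lies on the diagonal \(\text{ Re}(\lambda )=\text{ Im}(\lambda )\):

```python
import cmath
import math

# Hypothetical material parameters, for illustration only.
omega = 2.0 * math.pi * 50.0   # angular frequency of the excitation
kappa = 4.0                    # stiffness (acoustic) or conductivity (thermal)
rho = 1.2                      # density (acoustic) or density * specific heat (thermal)

# Acoustic wave number: lambda = omega * sqrt(rho / kappa), a real number.
lam_acoustic = omega * math.sqrt(rho / kappa)

# Thermal wave number: lambda = (1 + i) * sqrt(omega * rho / (2 kappa)),
# which lies on the diagonal Re(lambda) = Im(lambda) of the complex plane.
lam_thermal = (1 + 1j) * math.sqrt(omega * rho / (2.0 * kappa))

print(lam_acoustic)
print(lam_thermal.real, lam_thermal.imag)
```

Note that \(\lambda ^2=\imath \omega \rho /\kappa \) in the thermal case, consistent with inserting \(v=\mathrm{v}\exp (-\imath \omega t)\) into the heat equation.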

In both cases, we can excite the system by generating a time-harmonic incident wave \(v_{inc}(\mathbf{x},t)=\text{ Re}(u_{inc}(\mathbf{x})\exp (-\imath \omega \,t))\), where \(\omega >0\) is the frequency and \(u_{inc}\) is a solution of the Helmholtz equation with wave number \(\lambda _e\). For instance, we can consider plane waves of the form \(u_{inc}(\mathbf{x})=\exp (\imath \lambda _e\mathbf{x}\cdot \mathbf{d})\), where \(\mathbf{d}\) is a unit vector pointing in the direction of propagation of the wave.
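That such a plane wave solves the Helmholtz equation can be checked numerically; the following sketch (illustrative values for \(\lambda _e\) and \(\mathbf{d}\), not from the paper) evaluates \(\Delta u_{inc}+\lambda _e^2u_{inc}\) with second-order finite differences:

```python
import cmath
import math

lam_e = 1.7                          # exterior wave number (illustrative value)
d = (math.cos(0.3), math.sin(0.3))   # unit direction of propagation

def u_inc(x, y):
    """Plane wave exp(i * lam_e * x . d), an exact Helmholtz solution."""
    return cmath.exp(1j * lam_e * (x * d[0] + y * d[1]))

# Five-point central-difference Laplacian at a sample point.
h = 1e-3
x0, y0 = 0.4, -0.2
lap = (u_inc(x0 + h, y0) + u_inc(x0 - h, y0)
       + u_inc(x0, y0 + h) + u_inc(x0, y0 - h)
       - 4.0 * u_inc(x0, y0)) / h**2

# Residual of Delta u + lam_e^2 u = 0; small, of size O(h^2).
residual = abs(lap + lam_e**2 * u_inc(x0, y0))
print(residual)
```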

The presence of the objects makes the medium respond by generating a new wave, the scattered wave \(u_s\). The fact that \(u_s\) arises from the scattering of the incident wave is modeled by imposing that \(u_s\) satisfies (2.5).

We also impose on the total wave \(\mathrm{v}=u_{inc}+u_s\) continuity conditions for its trace and for its flux (the normal derivative weighted by \(\kappa \)) on the interfaces \(\Gamma _j\):

$$\begin{aligned} \mathrm{v}|_{\Gamma _j}^\mathrm{int}=\mathrm{v}|_{\Gamma _j}^\mathrm{ext},\qquad \kappa _j\,\partial _\mathbf{n}\mathrm{v}|_{\Gamma _j}^\mathrm{int}=\kappa _e\,\partial _\mathbf{n}\mathrm{v}|_{\Gamma _j}^\mathrm{ext},\qquad j=1,\dots ,d. \end{aligned}$$

Taking now as unknown

$$\begin{aligned} u:=\left\{ \begin{array}{ll} u_s,&\quad \text{ in } \Omega _e,\\ \mathrm{v},&\quad \text{ in } \Omega _j,\quad j=1,\dots ,d, \end{array}\right. \end{aligned}$$

we find that \(u\) is a solution of the transmission problem (2.1)–(2.5), where \(g_j^0=u_{inc}|_{\Gamma _j}\) and \(g_j^1=\kappa _e\,\partial _\mathbf{n}u_{inc}|_{\Gamma _j}\), \(j=1,\dots ,d\).
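To see where these data come from, note that \(u=\mathrm{v}\) inside the objects while \(u=\mathrm{v}-u_{inc}\) outside, so the jumps of \(u\) across \(\Gamma _j\) reproduce the Cauchy data of the incident wave:

```latex
\begin{aligned}
u|_{\Gamma _j}^\mathrm{int}-u|_{\Gamma _j}^\mathrm{ext}
 &=\mathrm{v}|_{\Gamma _j}^\mathrm{int}-\mathrm{v}|_{\Gamma _j}^\mathrm{ext}
   +u_{inc}|_{\Gamma _j}=u_{inc}|_{\Gamma _j}=g_j^0,\\
\kappa _j\,\partial _\mathbf{n}u|_{\Gamma _j}^\mathrm{int}
 -\kappa _e\,\partial _\mathbf{n}u|_{\Gamma _j}^\mathrm{ext}
 &=\kappa _j\,\partial _\mathbf{n}\mathrm{v}|_{\Gamma _j}^\mathrm{int}
   -\kappa _e\,\partial _\mathbf{n}\mathrm{v}|_{\Gamma _j}^\mathrm{ext}
   +\kappa _e\,\partial _\mathbf{n}u_{inc}|_{\Gamma _j}
  =\kappa _e\,\partial _\mathbf{n}u_{inc}|_{\Gamma _j}=g_j^1,
\end{aligned}
```

where the middle differences vanish by the continuity of the trace and of the flux of \(\mathrm{v}\).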

Conditions (2.3) and (2.4) can be replaced by other boundary conditions, modeling different behaviors of the objects in response to the excitation. Dirichlet conditions for the total wave at the boundary of the objects, \(u|_{\Gamma _j}=0\), model sound-soft objects, while Neumann conditions, \(\partial _\mathbf{n}u|_{\Gamma _j}=0\), model sound-hard ones. Both boundary value problems are limit cases of transmission problems [51].

Another interesting physical model appears when we impose the continuity condition (2.4) together with

$$\begin{aligned} u|_{\Gamma _j}^\mathrm{int}+f_j\,\partial _\mathbf{n}u|_{\Gamma _j}^\mathrm{int}-u|_{\Gamma _j}^\mathrm{ext}=g_j^0,\qquad \text{ on } \Gamma _j,\quad j=1,\dots ,d, \end{aligned}$$
(2.6)

where \(f_j\) (the corrosion function) is a positive function that describes the situation in which some coating material surrounds the obstacles. This function is essentially proportional to the width of the coating at each point of the boundary \(\Gamma _j\). When conditions (2.4) and (2.6) are satisfied, we are dealing with a conductive transmission problem. The combination of the continuity condition (2.3) with

$$\begin{aligned} \kappa _j\,\partial _\mathbf{n}u|_{\Gamma _j}^\mathrm{int}+f_j\,u|_{\Gamma _j}^\mathrm{int}-\kappa _e\,\partial _\mathbf{n}u|_{\Gamma _j}^\mathrm{ext}=g_j^1,\qquad \text{ on } \Gamma _j,\quad j=1,\dots ,d, \end{aligned}$$

is called a resistive transmission problem. In this case the function \(f_j\) is related to the resistivity.

The interested reader may consult [17–19, 36, 56] for a detailed study of exterior Helmholtz transmission problems. For the thermal context we refer to [44, 49]. The conductive and resistive transmission problems are studied in [1].

3 Integral methods for homogeneous materials

In this section we show how to reformulate the original problem (2.1)–(2.5) as different equivalent systems of integral equations posed on the boundaries of the objects. In this way we reduce the problem to one-dimensional problems that are easier to deal with from the numerical point of view. These kinds of techniques were already known in the context of electromagnetism and acoustics [17, 19, 36, 56]. Our main contributions in this field center on the study of transmission problems for thermal waves. We proposed some new integral formulations and developed a systematic way of analyzing them using operator theory. This section summarizes the formulations that are analyzed in [37, 45–47, 50]. For a global overview in the thermal context the reader may consult our survey [49].

We start by describing an indirect integral formulation of the problem (2.1)–(2.5), where the unknowns are the densities \(\varphi _j,\ \psi _j\in H^{-1/2}(\Gamma _j)\). The solution of the transmission problem is found as [15, 46]:

$$\begin{aligned} u:=\left\{ \begin{array}{ll}\mathcal{S}_{\Gamma _j}^{\lambda _j}\varphi _j,&\quad \text{ in } \Omega _j,\quad j=1,\dots ,d,\\ \sum \nolimits _{j=1}^d\mathcal{S}_{\Gamma _j}^{\lambda _e}\psi _j,&\quad \text{ in } \Omega _e, \end{array}\right. \end{aligned}$$
(3.1)

where \(\mathcal{S}_{\Gamma _j}^{\lambda }\) is the single layer potential for the Helmholtz equation with wave number \(\lambda \),

$$\begin{aligned} \mathcal{S}_{\Gamma _j}^{\lambda }\eta :=\int _{\Gamma _j}\frac{\imath }{4}\,H_0^{(1)}(\lambda |\,\cdot \,-\mathbf{y}|)\,\eta (\mathbf{y})\,d\gamma _\mathbf{y}:\mathbb{R }^2\longrightarrow \mathbb{C }. \end{aligned}$$

Here \(H_0^{(1)}\) is the Hankel function of the first kind and order zero. To impose the transmission conditions we have to know the limiting values of these potentials on each interface, from both sides of the boundaries. These values are given by the so-called jump relations, which are written in terms of the following integral operators:

$$\begin{aligned} V_{ij}^{\lambda }\eta&:= \int _{\Gamma _j}\frac{\imath }{4}\,H_0^{(1)}(\lambda |\,\cdot \,-\mathbf{y}|)\,\eta (\mathbf{y})\,d\gamma _\mathbf{y}:\Gamma _i\longrightarrow \mathbb{C },\\ J_{ij}^{\lambda }\eta&:= \int _{\Gamma _j}\partial _{\mathbf{n}(\,\cdot \,)}\frac{\imath }{4}\,H_0^{(1)}(\lambda |\,\cdot \,-\mathbf{y}|)\,\eta (\mathbf{y})\,d\gamma _\mathbf{y}:\Gamma _i\longrightarrow \mathbb{C }. \end{aligned}$$

The single layer potential is a continuous function, and \(V_{ij}^{\lambda }\eta \) is the observation on \(\Gamma _i\) of the potential \(\mathcal{S}_{\Gamma _j}^\lambda \) generated from \(\Gamma _j\). However, the normal derivative of the potential \(\mathcal{S}_{\Gamma _j}^\lambda \) jumps when it is observed from \(\Gamma _j\) itself. We have \(\partial _\mathbf{n}\mathcal{S}_{\Gamma _j}^\lambda \eta =J_{ij}^\lambda \eta \) on \(\Gamma _i\) when \(i\ne j\), but \(\partial _\mathbf{n}\mathcal{S}_{\Gamma _j}^\lambda \eta =\pm \frac{1}{2}\eta +J_{jj}^\lambda \eta \) on \(\Gamma _j\) (with “+” for the interior normal derivative and “\(-\)” for the exterior one).

If \(-\lambda _j^2\) and \(-\lambda _e^2\) are not Dirichlet eigenvalues for the Laplacian in \(\Omega _j\) for \(j=1,\dots ,d\), problem (2.1)–(2.5) is equivalent to the following system of integral equations [46]:

$$\begin{aligned} V_{ii}^{\lambda _i}\varphi _i-\sum _{j=1}^d V_{ij}^{\lambda _e} \psi _j&= g_i^0,\qquad \text{ on } \Gamma _i,\quad i=1,\dots ,d,\end{aligned}$$
(3.2)
$$\begin{aligned} \kappa _i\left(\frac{1}{2}\varphi _i+J_{ii}^{\lambda _i}\varphi _i\right)+\kappa _e\left(\frac{1}{2}\psi _i-\sum _{j=1}^d J_{ij}^{\lambda _e} \psi _j\right)&= g_i^1,\qquad \text{ on } \Gamma _i,\quad i=1,\dots ,d.\end{aligned}$$
(3.3)

Since \(-\lambda ^2\) can only be a Dirichlet eigenvalue when \(\lambda ^2\) is real and positive, and in the thermal context \(\lambda ^2\) is purely imaginary, both problems are always equivalent for thermal waves. However, in the acoustic case the wave numbers are real and this formulation may fail, since resonances can appear. This drawback is overcome by the alternative formulation proposed in (3.14)–(3.15).

Some of the properties of the system (3.2)–(3.3) and ideas on how to discretize it are more clearly seen by using matrices of operators. To do that we introduce the density vectors \(\varphi :=(\varphi _1,\dots ,\varphi _d)^\top \) and \(\psi :=(\psi _1,\dots ,\psi _d)^\top \), and the right-hand side vectors \(g^0:=(g_1^0,\dots ,g_d^0)^\top \) and \(g^1:=(g_1^1,\dots ,g_d^1)^\top \). Finally we consider three diagonal matrices of operators

$$\begin{aligned} \mathcal{V}^\Lambda :=\text{ diag}(V_{11}^{\lambda _1},\dots ,V_{dd}^{\lambda _d}),\quad \mathcal{J}^\Lambda :=\text{ diag}(J_{11}^{\lambda _1},\dots ,J_{dd}^{\lambda _d}),\quad \Xi :=\text{ diag}(\kappa _1I,\dots ,\kappa _dI),\nonumber \\ \end{aligned}$$
(3.4)

and two full matrices:

$$\begin{aligned} \widetilde{\mathcal{V}}^{\lambda _e}:=(V_{ij}^{\lambda _e}),\quad \widetilde{\mathcal{J}}^{\lambda _e}:=(J_{ij}^{\lambda _e}). \end{aligned}$$
(3.5)

System (3.2)–(3.3) can then be written as

$$\begin{aligned} \left[ \begin{array}{ll} \mathcal{V}^\Lambda&\quad -\widetilde{\mathcal{V}}^{\lambda _e}\\ \Xi ({\textstyle \frac{1}{2}}\mathcal{I}+\mathcal{J}^\Lambda )&\quad \kappa _e({\textstyle \frac{1}{2}}\mathcal{I}-{\widetilde{\mathcal{J}}}^{\lambda _e}) \end{array}\right]\left[\begin{array}{l}\varphi \\ \psi \end{array}\right]=\left[\begin{array}{l}g^0\\ g^1\end{array}\right]. \end{aligned}$$
(3.6)

This block structure will be mimicked by the discrete system. We will take advantage of it to find preconditioners as well as possible iterative schemes for its solution.

Alternatively, in [45] we proposed solving (2.1)–(2.5) from a direct point of view, taking the Cauchy data of the solution (trace and normal derivative) as unknowns. This formulation is more suitable than the indirect one if we want to recover information about the original unknown on the boundaries of the objects, or close to them. Furthermore, it has the additional advantage of providing a system of integral equations that is always solvable (although maybe not uniquely). When uniqueness fails, it is not always possible to find the correct Cauchy data, but all the solutions obtained are correct when they are used to compute the solution of the original transmission problem. The main disadvantages in comparison with the indirect formulation are that the system of equations is larger and that the expressions for the solution of the original problem are more complicated.

The direct formulation is based on the use of the Third Green Formula:

$$\begin{aligned} u=\left\{ \begin{array}{ll}\mathcal{S}_{\Gamma _j}^{\lambda _j}\partial _{\mathbf{n}}u|_{\Gamma _j}^\mathrm{int}-\mathcal{D}_{\Gamma _j}^{\lambda _j}u|_{\Gamma _j}^\mathrm{int},&\quad \text{ in } \Omega _j,\quad j=1,\dots ,d,\\ \sum \nolimits _{j=1}^d\mathcal{D}_{\Gamma _j}^{\lambda _e}u|_{\Gamma _j}^\mathrm{ext}-\sum \nolimits _{j=1}^d\mathcal{S}_{\Gamma _j}^{\lambda _e}\partial _{\mathbf{n}}u|_{\Gamma _j}^\mathrm{ext},&\quad \text{ in } \Omega _e, \end{array}\right. \end{aligned}$$
(3.7)

where \(\mathcal{D}_{\Gamma _j}^{\lambda }\) is the double layer potential

$$\begin{aligned} \mathcal{D}_{\Gamma _j}^{\lambda }\eta :=\int _{\Gamma _j}\frac{\imath }{4}\,\partial _{\mathbf{n}(\mathbf{y})}H_0^{(1)}(\lambda |\,\cdot \,-\mathbf{y}|)\,\eta (\mathbf{y})\,d\gamma _\mathbf{y}. \end{aligned}$$

The potential \(\mathcal{D}_{\Gamma _j}^\lambda \) is not continuous across the boundary \(\Gamma _j\). Its trace on the boundaries \(\Gamma _i\) can be written in terms of the integral operators

$$\begin{aligned} K_{ij}^{\lambda }\eta :=\int _{\Gamma _j}\frac{\imath }{4}\,\partial _{\mathbf{n}(\mathbf{y})}H_0^{(1)}(\lambda |\,\cdot \,-\mathbf{y}|)\,\eta (\mathbf{y})\,d\gamma _\mathbf{y}:\Gamma _i\longrightarrow \mathbb{C }. \end{aligned}$$

It follows that \(\mathcal{D}_{\Gamma _j}^\lambda \eta =K_{ij}^{\lambda }\eta \) on \(\Gamma _i\) if \(i\ne j\) and \(\mathcal{D}_{\Gamma _j}^\lambda \eta =\mp {\textstyle \frac{1}{2}}\eta +K_{jj}^\lambda \eta \) on \(\Gamma _j\) (with “\(-\)” when considering the interior trace and with “+” when taking the exterior trace).

Notice that (3.7) represents the solution of the transmission problem as a sum of potentials generated by its Cauchy data on the boundary. From this point of view, we look for the solution of the transmission problem in the form (3.7), where the unknowns are four vectors: the interior and exterior traces,

$$\begin{aligned} u^-=(u|_{\Gamma _1}^\mathrm{int},\dots ,u|_{\Gamma _d}^\mathrm{int})^\top , \qquad u^+=(u|_{\Gamma _1}^\mathrm{ext},\dots ,u|_{\Gamma _d}^\mathrm{ext})^\top , \end{aligned}$$
(3.8)

and the interior and exterior normal derivatives

$$\begin{aligned} \partial _\mathbf{n}u^-:=(\partial _\mathbf{n}u|_{\Gamma _1}^\mathrm{int},\dots ,\partial _\mathbf{n}u|_{\Gamma _d}^\mathrm{int})^\top ,\qquad \partial _\mathbf{n}u^+:=(\partial _\mathbf{n}u|_{\Gamma _1}^\mathrm{ext},\dots ,\partial _\mathbf{n}u|_{\Gamma _d}^\mathrm{ext})^\top . \end{aligned}$$
(3.9)

We obtain now the following system of integral equations

$$\begin{aligned} \left[\begin{array}{llll}\mathcal{I}&\quad 0&\quad -\mathcal{I}&\quad 0\\ {\textstyle \frac{1}{2}}\mathcal{I}+\mathcal{K}^\Lambda&\quad -\mathcal{V}^\Lambda&\quad 0&\quad 0\\ 0&\quad \Xi&\quad 0&\quad -\kappa _e\mathcal{I}\\ 0&\quad 0&\quad {\textstyle \frac{1}{2}}\mathcal{I}-\widetilde{\mathcal{K}}^{\lambda _e}&\quad \widetilde{\mathcal{V}}^{\lambda _e} \end{array}\right]\left[\begin{array}{l}u^-\\ \partial _\mathbf{n}u^-\\ u^+\\ \partial _\mathbf{n}u^+\end{array}\right]=\left[\begin{array}{l}g^0\\ 0\\ g^1\\ 0\end{array}\right], \end{aligned}$$
(3.10)

\(\mathcal{V}^\Lambda , \,\widetilde{\mathcal{V}}^{\lambda _e}\) and \(\Xi \) being the operators defined in (3.4)–(3.5), and

$$\begin{aligned} \mathcal{K}^\Lambda :=\text{ diag}(K_{11}^{\lambda _1},\dots ,K_{dd}^{\lambda _d}),\qquad \widetilde{\mathcal{K}}^{\lambda _e}:=(K_{ij}^{\lambda _e}). \end{aligned}$$
(3.11)

After some manipulations, (3.10) can be transformed into a block triangular system. In [45] we took advantage of the analysis developed for the study of system (3.6) to study the new system (and its discretization). We proved the equivalence of problem (2.1)–(2.5) and system (3.10) under the hypothesis that \(-\lambda _j^2\) and \(-\lambda _e^2\) are not Dirichlet eigenvalues for the Laplacian in \(\Omega _j\) for \(j=1,\dots ,d\). Although the matrix in (3.10) looks rather unstructured, combining full and diagonal blocks with many null entries, this sparsity pattern can be exploited to simplify the resulting system after discretization [49].

We proposed an alternative formulation in [16, 50] that tries to combine the best features of the direct and indirect formulations: a simple representation in the unbounded domain by a single layer potential, together with good access to the solution near the boundaries of the objects through the Cauchy data and the representation formula inside the objects. That is, the solution of the transmission problem is recovered as

$$\begin{aligned} u:=\left\{ \begin{array}{ll}\mathcal{S}_{\Gamma _j}^{\lambda _j}\partial _{\mathbf{n}}u|_{\Gamma _j}^\mathrm{int}-\mathcal{D}_{\Gamma _j}^{\lambda _j}u|_{\Gamma _j}^\mathrm{int},&\quad \text{ in } \Omega _j,\quad j=1,\dots ,d,\\ \sum \nolimits _{j=1}^d\mathcal{S}_{\Gamma _j}^{\lambda _e}\psi _j,&\quad \text{ in } \Omega _e. \end{array}\right. \end{aligned}$$
(3.12)

The resulting system of equations in this case is [16, 50]:

$$\begin{aligned} \left[\begin{array}{lll}-\widetilde{\mathcal{V}}^{\lambda _e}&\quad 0&\quad \mathcal{I}\\ 0&\quad -\mathcal{V}^\Lambda&\quad {\textstyle \frac{1}{2}}\mathcal{I}+\mathcal{K}^\Lambda \\ \kappa _e({\textstyle \frac{1}{2}}\mathcal{I}-\widetilde{\mathcal{J}}^{\lambda _e})&\quad \Xi&\quad 0\end{array}\right]\left[\begin{array}{l}\psi \\ \partial _\mathbf{n}u^-\\ u^-\end{array}\right]=\left[\begin{array}{l}g^0\\ 0\\ g^1\end{array}\right]. \end{aligned}$$
(3.13)

The key idea for the analysis of this system is to group the first two unknown vectors to detect that the system has the structure of a generalized mixed problem [2].

As we have already mentioned, the previous formulations are not always equivalent to the original transmission problem. To avoid the resonances produced by these eigenvalues we can write the solution in terms of mixed potentials (also called Brakhage–Werner, Panich, or combined field potentials):

$$\begin{aligned} u:=\left\{ \begin{array}{ll}(\mathcal{S}_{\Gamma _j}^{\lambda _j}-\imath \alpha \mathcal{D}_{\Gamma _j}^{\lambda _j})\varphi _j,&\quad \text{ in } \Omega _j,\quad j=1,\dots ,d,\\ \sum \nolimits _{j=1}^d(\mathcal{S}_{\Gamma _j}^{\lambda _e}-\imath \alpha \mathcal{D}_{\Gamma _j}^{\lambda _e})\psi _j,&\quad \text{ in } \Omega _e, \end{array}\right. \end{aligned}$$
(3.14)

where \(\alpha >0\) is a real number. In order to write the resulting system of integral equations in terms of the density vectors \(\varphi \) and \(\psi \) we need to know the normal derivatives of the double layer potentials. They are given by the hypersingular operators

$$\begin{aligned} W_{ij}^\lambda \eta :=\partial _{\mathbf{n}(\,\cdot \,)}\int _{\Gamma _j}\frac{\imath }{4}\,\partial _{\mathbf{n}(\mathbf{y})}H_0^{(1)}(\lambda |\,\cdot \,-\mathbf{y}|)\,\eta (\mathbf{y})\,d\gamma _\mathbf{y}:\Gamma _i\longrightarrow \mathbb C . \end{aligned}$$

Introducing now the operator matrices

$$\begin{aligned} \mathcal{W}^\Lambda :=\text{ diag}(W_{11}^{\lambda _1},\dots ,W_{dd}^{\lambda _d}),\qquad \widetilde{\mathcal{W}}^{\lambda _e}:=(W_{ij}^{\lambda _e}), \end{aligned}$$

and using the jump relations and the transmission conditions we arrive at the following system of equations [47]:

$$\begin{aligned} \left[\begin{array}{ll} \mathcal{V}^\Lambda +\imath \alpha ({\textstyle \frac{1}{2}}\mathcal{I}-\mathcal{K}^\Lambda )&\quad -\widetilde{\mathcal{V}}^{\lambda _e}+\imath \alpha ({\textstyle \frac{1}{2}}\mathcal{I}+\widetilde{\mathcal{K}}^{\lambda _e})\\ \Xi ({\textstyle \frac{1}{2}}\mathcal{I}+\mathcal{J}^\Lambda -\imath \alpha \mathcal{W}^\Lambda )&\quad \kappa _e({\textstyle \frac{1}{2}}\mathcal{I}-\widetilde{\mathcal{J}}^{\lambda _e}+\imath \alpha \widetilde{\mathcal{W}}^{\lambda _e}) \end{array}\right]\left[\begin{array}{l}\varphi \\ \psi \end{array}\right]=\left[\begin{array}{l}g^0\\ g^1\end{array}\right]. \end{aligned}$$
(3.15)

Although the structure of this system is apparently similar to (3.6), from the numerical point of view it is more complicated: now we have to discretize the four types of integral operators, \(V,\,J,\,K\) and \(W\).

Following [19], we proposed our latest formulation in [37]. We now consider five unknown vectors: the four formed by the Cauchy data defined in (3.8)–(3.9), and a new vector that copies the exterior trace, \(\varphi :=u^+\). In this case we obtain the system

$$\begin{aligned} \left[\begin{array}{l@{\quad }l@{\quad }l@{\quad }l@{\quad }l} -\widetilde{\mathcal{W}}^{\lambda _e}&{\textstyle \frac{1}{2}}\mathcal{I}+\widetilde{\mathcal{J}}^{\lambda _e}&0&0&0\\ -{\textstyle \frac{1}{2}}\mathcal{I}-\widetilde{\mathcal{K}}^{\lambda _e}&\widetilde{\mathcal{V}}^{\lambda _e}&0&0&\mathcal{I}\\ 0&0&-\frac{1}{\kappa _e}\,\Xi \,\mathcal{W}^\Lambda&-{\textstyle \frac{1}{2}}\mathcal{I}+\mathcal{J}^\Lambda&0\\ 0&0&{\textstyle \frac{1}{2}}\mathcal{I}-\mathcal{K}^\Lambda&\kappa _e\,\Xi ^{-1}\mathcal{V}^\Lambda&-\mathcal{I}\\ 0&\mathcal{I}&0&-\mathcal{I}&0 \end{array}\right]\left[\begin{array}{l}u^+\\ \partial _\mathbf{n}u^+\\ u^-\\ \partial _\mathbf{n}u^-\\ \varphi \end{array}\right]= \left[\begin{array}{l}0\\ 0\\ 0\\ g^0\\ -g^1\end{array}\right]. \end{aligned}$$
(3.16)

This method can be understood as a coupling of independent solvers for the exterior and interior problems. The system (3.16) is always solvable. From any solution of it we can recover the solution of the original transmission problem by introducing these data in the representation formula (3.7). The system is equivalent to the original problem if \(-\lambda _e^2\) is not a Neumann eigenvalue for the Laplacian in \(\Omega _j\) for \(j=1,\dots ,d\).

Comparing the direct formulations (3.10) and (3.16), we observe that the introduction of the additional unknown \(\varphi \) leads to a larger system that is as unstructured as (3.10). Furthermore, all four types of integral operators now appear. However, multiplying the first and third blocks of equations in (3.16) by \(-1\), we obtain a symmetric system (the operators \(J\) and \(K\) are transposes of each other). This structure is preserved after discretization and can be used to apply Krylov methods. It is also useful for designing preconditioners for the original system of equations [37].

4 Numerical methods for systems of integral equations

One of our main contributions to the numerical solution of boundary value problems related to the scattering of waves is the identification, from an abstract point of view, of the conditions that the discrete spaces involved in Petrov–Galerkin methods must satisfy in order to produce stable schemes.

For the indirect formulation with single layer potentials, which led to the system (3.2)–(3.3), we consider two families of discrete spaces:

$$\begin{aligned} X_i^h\subset H^{-1/2}(\Gamma _i),\quad \ Y_i^h\subset H^{1/2}(\Gamma _i), \quad \ \text{ dim}X_i^h=\text{ dim}Y_i^h,\quad \ i=1,\dots ,d. \end{aligned}$$

The associated Petrov–Galerkin method consists of finding densities \(\varphi _i^h,\,\psi _i^h\in X_i^h\) (\(i=1,\dots , d\)) such that for \(i=1,\dots ,d,\)

$$\begin{aligned}&\int _{\Gamma _i}(V_{ii}^{\lambda _i}\varphi _i^h)\,\xi _i^h\,d\Gamma _i-\int _{\Gamma _i}\left(\,\sum _{j=1}^dV_{ij}^{\lambda _e}\psi _j^h\right)\xi _i^h\,d\Gamma _i=\int _{\Gamma _i}g_{i}^0\,\xi _i^h\, d\Gamma _i, \quad \forall \xi _i^h\in X_i^h,\end{aligned}$$
(4.1)
$$\begin{aligned}&\int _{\Gamma _i}\kappa _i\left(\frac{1}{2}\varphi _i^h+J_{ii}^{\lambda _i}\varphi _i^h\right)r_i^h\,d\Gamma _i+\int _{\Gamma _i}\kappa _e\left(\frac{1}{2}\psi _i^h-\sum _{j=1}^d J_{ij}^{\lambda _e} \psi _j^h\right)r_i^h\, d\Gamma _i=\int _{\Gamma _i}g_i^1\, r_i^h\, d\Gamma _i,\quad \forall r_i^h\in Y_i^h. \end{aligned}$$
(4.2)

Equations (4.1) are associated with strongly elliptic operators, and therefore any Galerkin method (where the trial and test spaces coincide) provides a stable discretization. However, the second group of equations (4.2) takes place in \(H^{-1/2}(\Gamma _i)\) and has to be tested with elements of its dual space, that is, of \(H^{1/2}(\Gamma _i)\). The key point in the design of stable and convergent methods lies in the stabilization of the discretization of the identity operator in \(H^{-1/2}(\Gamma _i)\) [46], which requires non-standard choices of spaces. In addition, we can take advantage of the choice of Petrov–Galerkin methods to exploit the structure of the system (3.6) and obtain optimal convergence rates in weak norms by Aubin–Nitsche techniques. This matrix structure can also be exploited to design preconditioners based on physical ideas of scattering, such as multiple-scattering and close-neighbor techniques [14, 49]. Roughly speaking, the idea is to consider the response of each object to the incident wave without taking into account the influence of the other objects, or taking into account only the influence of the objects closest to it.
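The single-scattering idea can be sketched in a few lines. The following toy example (hypothetical data, not an actual BEM matrix) preconditions a coupled system with the block diagonal that retains each obstacle's self-interaction and discards the coupling between obstacles, then applies a simple preconditioned Richardson iteration:

```python
# Two "obstacles", two unknowns each: a 4x4 complex system A x = b in which
# the diagonal (self-interaction) blocks dominate the off-diagonal coupling.

def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def solve2(B, r):
    """Solve a 2x2 complex system B y = r by Cramer's rule."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [(r[0] * B[1][1] - r[1] * B[0][1]) / det,
            (B[0][0] * r[1] - B[1][0] * r[0]) / det]

A = [[4 + 1j, 1, 0.2, 0.1],
     [1, 4 + 1j, 0.1, 0.2],
     [0.2, 0.1, 5 + 1j, 0.5],
     [0.1, 0.2, 0.5, 5 + 1j]]
b = [1, 2, 3, 4]

# "No multiple scattering" preconditioner: keep only the diagonal blocks.
blocks = [[row[0:2] for row in A[0:2]], [row[2:4] for row in A[2:4]]]

# Preconditioned Richardson iteration: x <- x + D^{-1} (b - A x).
x = [0j, 0j, 0j, 0j]
for _ in range(60):
    res = [bi - yi for bi, yi in zip(b, mat_vec(A, x))]
    corr = solve2(blocks[0], res[0:2]) + solve2(blocks[1], res[2:4])
    x = [xi + ci for xi, ci in zip(x, corr)]

final_res = max(abs(bi - yi) for bi, yi in zip(b, mat_vec(A, x)))
print(final_res)
```

The iteration converges quickly here because the coupling between the two blocks is weak, mimicking well-separated obstacles; in practice such block solves would rather be used as preconditioners for a Krylov method.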

We next present some particular choices of spaces that lead to stable discretizations. We assume that the boundary of each obstacle can be parameterized by a regular 1-periodic function \(\mathbf{x}_i:[0,1]\rightarrow \Gamma _i\). Then we can take a pair of families of discrete spaces \(X^h\subset L^2(0,1)\) and \(Y^h\subset \{v\in H^1(0,1)\,:\, v(0)=v(1)\}\) with \(\text{ dim}X^h=\text{ dim}Y^h=n\) and define

$$\begin{aligned} X_i^h:=\{\xi _i^h:\Gamma _i\rightarrow \mathbb{C }: \ \ |\mathbf{x}_i^{\prime }|\xi _i^h\circ \mathbf{x}_i\in X^h\},\quad Y_i^h:=\{ r_i^h:\Gamma _i\rightarrow \mathbb{C }:\ \ |\mathbf{x}_i^{\prime }|r_i^h\circ \mathbf{x}_i\in Y^h\}.\nonumber \\ \end{aligned}$$
(4.3)

A stable Galerkin method is obtained by using trigonometric polynomials (\(h=1/n\)):

$$\begin{aligned} X^h=Y^h=\text{ span}\{\exp (2\pi \imath \ell t),\ \ -n/2\le \ell <n/2\}.\end{aligned}$$

In [46] we proved the stability and convergence of this method in a wide range of Sobolev norms. If the boundaries and data are \(\mathcal{C}^\infty \), the method has superalgebraic order of convergence, that is, for arbitrarily large \(t>0\):

$$\begin{aligned} \sum _{i=1}^d(\Vert \varphi _i-\varphi _i^h\Vert _{L^2(\Gamma _i)}+\Vert \psi _i-\psi _i^h\Vert _{L^2(\Gamma _i)})\le C_t h^{t}. \end{aligned}$$
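The mechanism behind this superalgebraic rate is the spectral accuracy of trigonometric approximation for smooth periodic functions. A minimal stdlib sketch (illustrative only, not the actual BEM code) shows the analogous behavior for the periodic trapezoidal rule, whose error also decays faster than any fixed power of \(h\):

```python
import math

def trapezoid_periodic(f, n):
    """Trapezoidal (= rectangle) rule for a 1-periodic function on [0,1]."""
    h = 1.0 / n
    return h * sum(f(k * h) for k in range(n))

# Smooth 1-periodic integrand; the exact integral of exp(sin(2 pi t))
# over [0,1] is the modified Bessel value I_0(1) = 1.2660658777...
f = lambda t: math.exp(math.sin(2.0 * math.pi * t))
exact = 1.2660658777520084

err8 = abs(trapezoid_periodic(f, 8) - exact)
err16 = abs(trapezoid_periodic(f, 16) - exact)
print(err8, err16)   # doubling n collapses the error to machine precision
```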

A second choice uses spaces of periodic splines on uniform staggered grids. To define the method, we take a uniform mesh of the interval [0,1] with nodes \(t_i:=ih\) and the staggered grid formed by the midpoints \(t_{i+1/2}:=(i+1/2)h\). We then define the spaces

$$\begin{aligned} X^h:=\{\xi ^h\in \mathcal{C}^{m-1}:\ \ \xi ^h|_{[t_{i-1/2},t_{i+1/2}]}\in \mathbb{P }_m\},\quad Y^h:=\{r^h\in \mathcal{C}^{m}:\ \ r^h|_{[t_{i},t_{i+1}]}\in \mathbb{P }_{m+1}\}, \end{aligned}$$

where \(\mathbb{P }_m\) is the space of polynomials of degree less than or equal to \(m\). If \(m=0\), the space \(X^h\) is just the space of piecewise constant functions. The spaces \(X_i^h\) and \(Y^h_i\) are now defined by mapping these spaces onto the boundaries as in (4.3). The method can be analyzed by Fourier analysis techniques [46]. We obtained the bounds

$$\begin{aligned} \sum _{i=1}^d(\Vert \varphi _i-\varphi _i^h\Vert _{H^{-1/2}(\Gamma _i)}+\Vert \psi _i-\psi _i^h\Vert _{H^{-1/2}(\Gamma _i)})\le C_m h^{m+3/2}. \end{aligned}$$

In the \(L^2\) norm we obtain convergence of order \(m+1\). Using Aubin–Nitsche techniques we also proved bounds in weak norms. In particular,

$$\begin{aligned} \sum _{i=1}^d(\Vert \varphi _i-\varphi _i^h\Vert _{H^{-m-2}(\Gamma _i)}+\Vert \psi _i-\psi _i^h\Vert _{H^{-m-2}(\Gamma _i)})\le C_m h^{2m+3}. \end{aligned}$$

Notice that we obtain cubic convergence even when the densities are approximated by piecewise constant functions. For \(m=0\) the analysis can be extended to non-uniform meshes satisfying a local quasi-uniformity condition; in this case we again proved cubic order of convergence [46].
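As a one-dimensional illustration of the lowest order case \(m=0\) (a sketch under our own conventions, not the code of [46]), the following Python snippet interpolates a smooth 1-periodic density by piecewise constants on the staggered cells \([t_{i-1/2},t_{i+1/2}]\) and verifies the first order \(L^2\) behavior, matching the order \(m+1=1\) stated above:

```python
import numpy as np

def pw_const_interp(phi, n):
    """Interpolate a 1-periodic function in X^h for m = 0: constant on each
    cell [t_{i-1/2}, t_{i+1/2}] centred at the node t_i = i*h."""
    h = 1.0 / n
    vals = phi(np.arange(n) * h)
    def ph(s):
        # index of the cell containing s, with periodic wrap-around
        i = np.rint(np.asarray(s) / h).astype(int) % n
        return vals[i]
    return ph

phi = lambda t: np.sin(2 * np.pi * t) + 2.0
s = (np.arange(4000) + 0.5) / 4000      # midpoint rule to estimate the L^2 error
err = lambda n: np.sqrt(np.mean((pw_const_interp(phi, n)(s) - phi(s)) ** 2))
e1, e2 = err(32), err(64)               # halving h should halve the error
```

The observed ratio \(e_1/e_2\approx 2\) is the first-order interpolation rate; the cubic rate quoted above is a superconvergence phenomenon of the Galerkin solution in weak norms, not of plain interpolation.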

In all these methods, the numerical solution \(u^h\), obtained by substituting the discrete densities \(\varphi _i^h\) and \(\psi _i^h\) in (3.1), inherits the optimal convergence order.

A fully discrete version of the previous methods can be obtained by approximating the integrals involving the operators \(J\), as well as the right-hand side, by midpoint rules. For the integrals associated with the operator \(V\) we use a generalization of the Galerkin-collocation method [34]: the term with the logarithmic singularity is subtracted and integrated exactly, and elementary quadrature rules are applied to the remaining term [46]. The resulting methods inherit from the original ones the same order of convergence in a certain range of Sobolev norms. For instance, the fully discrete method with trigonometric polynomials preserves the superalgebraic convergence order. For the method with periodic splines of degrees zero and one we proved that for \(-1/2< s \le 1\):

$$\begin{aligned} \sum _{i=1}^d(\Vert \varphi _i-\varphi _i^h\Vert _{H^{-s}(\Gamma _i)}+\Vert \psi _i-\psi _i^h\Vert _{H^{-s}(\Gamma _i)})\le C_s h^{s+1}. \end{aligned}$$

In particular, we have quadratic order with very little computational work.
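The singularity subtraction that underlies the fully discrete scheme can be illustrated on a one-dimensional model integral (a sketch of the idea only; the method in [46] subtracts the periodic logarithmic kernel of \(V\) and integrates it exactly against the spline basis):

```python
import numpy as np

def log_singular_integral(f, f0, n=1000):
    """Approximate I = int_0^1 f(s) log(s) ds by subtracting the singularity:
    I = int_0^1 (f(s) - f(0)) log(s) ds + f(0) * int_0^1 log(s) ds.
    The last integral equals -1 exactly, and the first integrand is
    continuous at s = 0, so a plain midpoint rule handles it well."""
    s = (np.arange(n) + 0.5) / n
    smooth = np.sum((f(s) - f0) * np.log(s)) / n   # midpoint rule
    return smooth + f0 * (-1.0)                    # exact singular part

# Example with known value: int_0^1 (1 + s) log(s) ds = -1 - 1/4 = -1.25
approx = log_singular_integral(lambda s: 1.0 + s, f0=1.0)
```

Applying the midpoint rule directly to \(f(s)\log s\) would lose accuracy near \(s=0\); subtracting the singular part restores the quadrature order, which is the mechanism that lets the fully discrete methods keep the convergence orders of their Galerkin counterparts.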

To illustrate the performance of the methods we now present a numerical example. An analogous comparison for a scattering problem with two objects can be found in [49]. Our aim is also to show the advantages of the spline methods over the methods with trigonometric polynomials (spectral methods) for obstacles with smooth but not \(\mathcal{C}^\infty \) boundaries. This kind of interface appears naturally in inverse problems where the domain is represented by spline functions, which are much less rigid than curves defined by trigonometric polynomials.

We have considered three different geometrical configurations with rather similar objects in order to study the sensitivity of the methods to small perturbations of the domain involving losses of regularity (see Fig. 2). In the first one, the boundary of the object, \(\Gamma _{\mathcal{C}^\infty }\), is a \(\mathcal{C}^\infty \) curve, while in the second and third configurations the boundaries \(\Gamma _{\mathcal{C}^3}\) and \(\Gamma _{\mathcal{C}^2}\) are \(\mathcal{C}^3\) and \(\mathcal{C}^2\) respectively.

Fig. 2 Geometrical configurations

In Fig. 3 we compare the methods with trigonometric polynomials and with spline functions of degrees zero and one in the three cases. We represent the average relative error at twenty equally spaced points on the axis \(y=0\) (more precisely, on the interval \(-1.5\le x\le 1.5\), which is represented by a solid line in Fig. 2) versus the number of nodes \(n\), with logarithmic scale on both axes. The spectral method reaches machine precision with very few degrees of freedom for the \(\mathcal{C}^\infty \) curve. When the boundaries are not so smooth, this method provides approximations of the same quality as the spline methods, but at a much higher computational cost.

Fig. 3 Average relative errors with the spectral and spline methods. On the vertical axis \(-\)log(error) and on the horizontal axis log(\(n\)), where \(n\) is the number of nodes on the boundary of the object

Notice also that the discretization with periodic splines is not too sensitive to small perturbations of the domain involving losses of regularity, which is a desirable property when dealing with inverse problems.

Convergent Petrov–Galerkin methods for all the integral formulations proposed in the previous section were derived in [37, 45, 47, 50]. We now provide a graphical illustration of the performance of the different formulations. In Fig. 4 we compare a fully discrete method with splines of degrees zero and one for the three geometrical configurations represented in Fig. 2 and the following formulations: indirect formulation with single layer potentials (SL), direct formulation (direct), mixed formulation (mixed) and indirect formulation with Brakhage–Werner potentials (BW). We observe that the performance of the methods is essentially the same, even when the boundaries have different regularity properties. The choice of one or another formulation depends on what we want to compute (Cauchy data, the solution at points close to the boundary, the solution in the far field, ...) and on whether we need to avoid resonances.

Fig. 4 Average relative errors with spline methods for the discretization of the equations obtained from four different formulations. On the vertical axis \(-\)log(error) and on the horizontal axis log(\(n\)), where \(n\) is the number of nodes on the boundary of the object

An alternative to the variational methods for system (3.6) combines classical quadrature methods (Nyström methods) for the equations of the second kind with quadrature methods adapted to the equations associated with weakly singular operators such as \(V\). In [22, 23] we showed that these discretizations can be seen as generalized Petrov–Galerkin methods. The trial space is a space of Dirac deltas supported at the nodes of a uniform mesh, \( S^h:=\text{ span}\{\delta _{ih},\ i=1,\dots ,n\}\), and the test space is also a Dirac delta space, but defined at a displaced mesh, \(S^h_\varepsilon :=\text{ span}\{\delta _{(i+\varepsilon )h},\ i=1,\dots ,n\}\). In this way we obtain linearly convergent methods for \(0\ne \varepsilon \in (-1/2,1/2)\), and methods of quadratic order for the particular choices \(\varepsilon =\pm 1/6\). The value \(\varepsilon =0\) cannot be considered because the operators \(V_{ii}^\lambda \) have logarithmic singularities on the diagonals of their kernels. The choices \(\varepsilon =\pm 1/2\) provide unstable methods, as can be proven following [13].
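To fix ideas, the following Python sketch solves a periodic second-kind equation with a smooth model kernel by the plain Nyström recipe (quadrature on the uniform grid plus collocation at the nodes). It is a toy illustration only: for the logarithmically singular operators \(V_{ii}^\lambda \) the collocation grid must be displaced by \(\varepsilon h\) as described above, which this smooth kernel does not require.

```python
import numpy as np

# Model problem: phi(t) - int_0^1 cos(2*pi*(t-s)) phi(s) ds = f(t).
# The exact solution phi(t) = 1 + cos(2*pi*t) gives f(t) = 1 + cos(2*pi*t)/2,
# since the kernel maps cos(2*pi*s) to cos(2*pi*t)/2 and constants to 0.
n = 64
h = 1.0 / n
t = np.arange(n) * h
K = np.cos(2 * np.pi * (t[:, None] - t[None, :]))
f = 1.0 + 0.5 * np.cos(2 * np.pi * t)

# Nystrom discretization: replace the integral by the periodic rectangle
# rule on the uniform grid and collocate at the same nodes.
phi_h = np.linalg.solve(np.eye(n) - h * K, f)
phi_exact = 1.0 + np.cos(2 * np.pi * t)
err = np.max(np.abs(phi_h - phi_exact))
```

For this trigonometric kernel the periodic rectangle rule is exact, so the discrete solution reproduces the exact one up to rounding; for a general smooth periodic kernel one gets the usual spectral accuracy of the periodic trapezoidal rule.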

Following the ideas of the so-called qualocation methods [54], we defined a new scheme of cubic order whose implementation is also straightforward. For the analysis we rewrite the system as a nonconforming Petrov–Galerkin method, where the test space \(S_\varepsilon ^h\) in the equations associated with the logarithmic operators is replaced by a space of linear combinations of Dirac delta distributions:

$$\begin{aligned} S^h_*:=\text{ span}\{5(\delta _{(i-1/6)h}+\delta _{(i+1/6)h})+(\delta _{(i-5/6)h}+\delta _{(i+5/6)h}),\ \, i=1,\dots ,n\}. \end{aligned}$$

We test the behavior of these families of methods in Fig. 5. We show, as in the examples described in Fig. 3, the average relative errors at twenty points on the line \(y=0\) for the methods with \(\varepsilon =1/3\) (linear convergence), \(\varepsilon =1/6\) (quadratic convergence), and the method using \(S^h_*\) as test space for the logarithmic equations (cubic convergence).

Fig. 5 Average relative errors with delta methods. On the vertical axis \(-\)log(error) and on the horizontal axis log(\(n\)), where \(n\) is the number of nodes on the boundary of the object

The same ideas can be followed to find alternatives to variational methods for systems (3.10), (3.13), (3.15) and (3.16). The interested reader may consult [20, 21].

These methods are very attractive for the solution of inverse problems related to the detection of objects by iterative processes that start from an initial approximation of the geometry and improve it in subsequent iterations; this happens, for instance, in the methods described in Sect. 6. Notice that small changes in the boundaries of the objects require recomputing all the matrices of the system, and assembling these matrices has very little computational cost in comparison with the variational methods described above.

5 Non-homogeneous objects

In the previous sections we assumed that the constitutive parameters of each material were constant. We now consider the problem with heterogeneous objects. In this case, a combination of finite and boundary element methods seems to be the correct numerical approach for simulation. In the last decades, the coupling of finite elements and boundary elements has been applied to the numerical solution of a wide variety of transmission problems in an unbounded medium containing an obstacle with heterogeneous properties [28, 33, 42]. The main novelty of our formulation is the introduction of an auxiliary unknown that efficiently transmits the information between independently working finite element codes (defined on a polygonal approximation of the object) and boundary element codes (defined on the boundary of the object).

To simplify we restrict the exposition to the thermal setting for homogeneous exterior media [48, 49]. The method can be extended to more general situations, for example to the situation when the material outside the objects is also heterogeneous in a bounded region [9]. The physical model is analogous to the one described in Sect. 2, but now the diffusion inside the obstacles is described by the equations:

$$\begin{aligned} \nabla \cdot (\kappa _j\nabla v)=\rho _jv_t,\quad \text{ in} \Omega _j,\quad j=1,\dots ,d, \end{aligned}$$

where the coefficients \(\kappa _j\) and \(\rho _j\) are positive and smooth (or piecewise smooth). If we look for time-harmonic solutions of this problem we find a problem similar to (2.1)–(2.5), but now equation (2.2) is replaced by

$$\begin{aligned} \nabla \cdot (\kappa _j\nabla u)+\imath \omega \rho _j u=0,\quad \text{ in} \Omega _j,\quad j=1,\dots ,d. \end{aligned}$$
(5.1)

We study this problem in [48]. We combine a mixed variational formulation of the interior problems with boundary element techniques. The unknowns are the solution \(u\) inside the objects \(\Omega _j\), the fluxes \(\mathbf{p}:=\kappa _j\nabla u\) in \(\Omega _j\), the exterior trace of the solution on the boundaries, \(u\) on \(\Gamma _j\), and an exterior density vector \(\psi =(\psi _1,\dots ,\psi _d)^\top \). The solution of the problem in the unbounded domain is represented as a sum of single layer potentials \(u=\sum _{j=1}^d\mathcal{S}^{\lambda _e}_{\Gamma _j}\psi _j\). We then find an equivalent system of equations posed in \(\Omega _j\) or on \(\Gamma _j\), which simplifies the solution of the problem at the discrete level. On the other hand, the coefficients \(\kappa _j\) and \(\rho _j\) appear in different equations, making this strategy very well suited for the solution of inverse problems where these coefficients are unknown [7, 10, 31] (see also Sect. 7). Finally, this formulation provides approximations of the fluxes inside each obstacle. These fluxes are of interest when solving certain inverse problems. For instance, in our papers [7, 9, 10] (see also Sect. 6) we need to know both the solution and the flux to implement a technique based on the computation of topological derivatives for the detection of defects in transmission problems.

For the numerical solution of the resulting equations we proposed in [48] a coupling of finite elements and boundary elements, where the discrete spaces are unrelated. The unknowns defined on the boundaries are approximated by the periodic spline spaces of degrees zero and one introduced in Sect. 4. For the unknowns defined inside the objects we consider the lowest order Raviart–Thomas space defined on a polygonal approximation \(\Omega _j^h\) of \(\Omega _j\) and the space of piecewise constant functions on the triangulation of \(\Omega _j^h\). Under some regularity hypotheses the method is linearly convergent:

$$\begin{aligned}&\sum _{j=1}^d(\Vert \mathbf{p}-\mathbf{p}_h\Vert _{H_{div}({\Omega _j\cap \Omega _j^h})}+\Vert u-u_h\Vert _{L^2(\Omega _j\cap \Omega _j^h)}+\Vert u-u_h\Vert _{H^{\frac{1}{2}}(\Gamma _j)}\\&\qquad +\Vert \psi -\psi _h\Vert _{H^{-\frac{1}{2}}(\Gamma _j)})\le Ch. \end{aligned}$$

For the analysis of the method we introduced (only for theoretical purposes) an auxiliary discretization that considers the same finite element spaces but defined on a curved triangulation of the bounded domain. This technique was previously used in [53] for the study of the exterior Dirichlet problem for the Laplace equation. The technical problems for the transmission problem for the Helmholtz equation are different and more complex, requiring a back-and-forth transmission of stability properties between the curved and straight triangulations.

The interested reader can find some numerical experiments in [44, 48, 49] that corroborate our error estimates.

6 Defect detection

In this section we consider the inverse problem of reconstructing the number, size, location and geometry of scatterers buried in a medium by means of nondestructive testing. We deal here with reconstruction schemes that assume that the properties of the objects (the constitutive materials) are known. The general situation of recovering both the objects and the parameters is studied in the next section.

The solution of inverse scattering problems associated with shape reconstruction is an important field in Applied Mathematics that has grown exponentially in recent times. The range of applications is huge: ultrasound in medicine, X-ray diffraction to retrieve information about DNA, reflection of seismic waves in oil prospection, crack detection in structural mechanics, etc.

The statement of the problem is as follows. We generate an incident wave, and the response of the system is modeled by the transmission problem (2.1)–(2.5). To simplify, we assume that the parameters \(\kappa _j\) in condition (2.4) satisfy \(\kappa _j=\kappa _e\), that \(\lambda _e,\,\lambda _j\in \mathbb R \), and that all the interior wave numbers are identical: \(\lambda _j=\lambda _i\) for all \(j\) (we use the subscript “\(e\)” for the exterior medium and “\(i\)” for the interior domains).

The direct problem was studied in the previous sections: compute the solution of problem (2.1)–(2.5) at a set of receptors \(\mathbf{x}_1,\dots ,\mathbf{x}_M\), that is, \(u(\mathbf{x}_1),\dots ,u(\mathbf{x}_M)\), when the geometry as well as the physical parameters are known. The inverse problem consists in finding the obstacles such that the solution of the forward problem equals the measured values \(u(\mathbf{x}_1),\dots ,u(\mathbf{x}_M)\) at the receptors. A less demanding and more regular formulation is the following: determine the objects \(\Omega \) minimizing the functional

$$\begin{aligned} J(\mathbb{R }^2{\setminus }\Omega ):=\frac{1}{2}\,\sum _{j=1}^M |u(\mathbf{x}_j)-u_{meas}(\mathbf{x}_j)|^2, \end{aligned}$$

where \(u\) is the total wave solution of the forward problem (2.1)–(2.5) when \(\Omega \) is the union of all the objects, and \(u_{meas}\) is the measured total wave at the receptors (that is, \(u_{meas}\) is our experimental data). From this point of view, the domain \(\Omega \) is the variable of the functional and the transmission problem acts as a constraint of the problem (state equation). In case multiple measurements corresponding to different directions \(\mathbf{d}^n\) of illumination of the obstacle are available (\(n=1,\dots ,N\)), the functional we minimize is

$$\begin{aligned} J(\mathbb{R }^2{\setminus }\Omega ):=\frac{1}{2}\,\sum _{n=1}^N\sum _{j=1}^M |u^n(\mathbf{x}_j)-u_{meas}^n(\mathbf{x}_j)|^2, \end{aligned}$$
(6.1)

where \(u^n\) is the solution of the direct problem with incident wave \(u_{inc}^n(\mathbf{x})=\exp (\imath \lambda _e\mathbf{x}\cdot \mathbf{d}^n)\).
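In discrete form the functional (6.1) is a plain least-squares misfit over receptors and illumination directions. A minimal Python version follows; the array shapes and the sample parameters (which echo the experiment of Fig. 6: \(\lambda _e=2.5\), 24 receptors on the circle of radius 3, 10 directions) are our own choices:

```python
import numpy as np

def misfit(u, u_meas):
    """Least-squares misfit (6.1): u and u_meas are complex arrays of shape
    (N, M) holding the total field for N incident directions at M receptors."""
    return 0.5 * np.sum(np.abs(u - u_meas) ** 2)

# Plane-wave illuminations u_inc^n(x) = exp(i*lam_e*x.d^n) at the receptors.
lam_e = 2.5
M, N = 24, 10
ang = 2 * np.pi * (np.arange(M) + 0.5) / M
receptors = 3.0 * np.column_stack((np.cos(ang), np.sin(ang)))
dirs = np.column_stack((np.cos(2 * np.pi * np.arange(1, N + 1) / N),
                        np.sin(2 * np.pi * np.arange(1, N + 1) / N)))
u_inc = np.exp(1j * lam_e * dirs @ receptors.T)    # shape (N, M)

j_exact = misfit(u_inc, u_inc)   # zero when the data are matched exactly
```

In the reconstruction schemes below, `u` would be the field computed by the forward solver for the current guess of \(\Omega \), while `u_meas` holds the experimental data.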

Different strategies have been proposed to minimize this kind of functional, especially modified gradient methods. They differ in how an initial guess is deformed from one iteration to the next in such a way that the cost functional decreases. Early work relied on classical shape deformation using small perturbations of the identity [29, 35]. This means that we start with a given number of objects and deform them continuously. The problem is that the process does not allow for topological changes and the number of scatterers has to be known from the beginning. However, in practice, the number of objects is also unknown. Later, this limitation was overcome by introducing a new type of deformation inspired by level-set methods [52]. Nevertheless, iterative methods tend to be slow unless a good initial guess of the obstacle is available. Topological derivative methods [24, 25] are a promising alternative that provide good initial guesses; moreover, iterative schemes based on the computation of topological derivatives are fast and allow for topological changes. Our main contributions to the numerical solution of inverse problems are centered on topological derivative based methods [5–12].

The topological derivative [55] of a functional \(\mathcal{J}\) defined in a region \(\mathcal{R}\) measures the sensitivity of the functional to the creation of infinitesimal cavities in \(\mathcal{R}\). It is a scalar function that can be understood as a map pointing at the regions where an object is more likely to be found. The standard definition is the following. Let us consider a small ball of radius \(\varepsilon \), \(B_\varepsilon (\mathbf{x})\), \(\mathbf x\in \mathcal{R}\), and the region \(\mathcal{R}_\varepsilon :=\mathcal{R}{\setminus } B_\varepsilon (\mathbf{x})\). Then, the topological derivative of the functional \(\mathcal{J}(\mathcal{R})\) at \(\mathbf x\) is

$$\begin{aligned} D_T(\mathbf{x},\mathcal{R}):=\lim _{\varepsilon \rightarrow 0}\frac{\mathcal{J}(\mathcal{R}_\varepsilon )-\mathcal{J}(\mathcal R)}{\mathcal{V}(\varepsilon )},\quad \mathbf{x}\in \mathcal{R}, \end{aligned}$$
(6.2)

\(\mathcal{V}(\varepsilon )\) being a scalar positive function such that the limit (6.2) is finite and nonzero. For instance, for the transmission or the Neumann problems in the plane we can take \(\mathcal{V}(\varepsilon )=\pi \varepsilon ^2\) (see [6]) and for the Dirichlet problem we take \(\mathcal{V}(\varepsilon )=-2\pi /\log (\lambda _e\varepsilon )\) (see [5]). Note that from the definition (6.2) it follows that

$$\begin{aligned} \mathcal{J}(\mathcal{R}_\varepsilon )=\mathcal{J}(\mathcal{R})+\mathcal{V}(\varepsilon )D_T(\mathbf{x},\mathcal{R})+o(\mathcal{V}(\varepsilon )),\qquad \text{ when} \varepsilon \rightarrow 0. \end{aligned}$$

This relation motivates the key idea of the reconstruction technique: if we locate small objects \(B_\varepsilon (\mathbf{x})\) at points \(\mathbf{x}\) where \(D_T(\mathbf{x},\mathcal{R})\) is negative, then \(\mathcal{J}(\mathcal{R}_\varepsilon )<\mathcal{J}(\mathcal{R})\), that is, the value of the functional decreases. Hence we identify the points where the topological derivative attains the largest negative values with the regions where it is most likely to find an object.

When we do not have any a priori information about the location of the objects (that is, when no initial guess for the geometry is available), we observe the functional in the whole plane, assuming that there are no objects. That is, we take \(\mathcal{R}=\mathbb{R }^2\) and \(\Omega =\emptyset \). As we have already said, objects are likely to lie in the regions where \(D_T(\mathbf{x}, \mathbb R ^{2})\) attains the largest negative values. We then find a first approximation \(\Omega _1\) of \(\Omega \) (recall that \(\Omega \) denotes the union of all the objects and therefore it can contain several disjoint components), defined as

$$\begin{aligned} \Omega _1:=\{\mathbf{x}\in \mathbb{R }^2\,: \ D_T(\mathbf{x},\mathbb{R }^2)<-C_1\}. \end{aligned}$$
(6.3)

Here \(C_1\) is a positive constant that depends on the values of the topological derivative. The guidelines for the selection of this threshold are given in [7, 10]. This process may now be iterated. Starting from a guess \(\Omega _k\), we compute the topological derivative of the functional \(\mathcal{J}(\mathbb{R }^2{\setminus }\Omega _k)\) and add to the current approximation \(\Omega _k\) the regions where \(D_T(\mathbf{x}, \mathbb{R }^2{\setminus }\Omega _k)\) attains large negative values:

$$\begin{aligned} \Omega _{k+1}:=\Omega _k\cup \{\mathbf{x}\in \mathbb{R }^2\!\setminus \!\Omega _k\,: \ D_T(\mathbf{x},\mathbb{R }^2\!\setminus \!\Omega _k)<-C_{k+1}\},\quad C_{k+1}>0. \end{aligned}$$

Before accepting the approximation at the next step, we check that the functional decreases; otherwise, we increase the value of the constant \(C_{k+1}\). Furthermore, we also impose a volume constraint: if the difference between consecutive approximations is negligible, we stop the process. The iterative method also stops if the functional attains a small value, proportional to the error in the data (a discrepancy principle). A similar method was proposed in [25] for the solution of an elasticity problem. The main difference with our strategy is the way the thresholds \(C_k\) are chosen. In [25] they are selected so that the volumes of the new objects equal a previously selected sequence. Our strategy is more empirical and adjusts the constants to guarantee a decrease of the functional from one step to the next, reducing the risk of evolving toward a configuration that does not provide a minimum of the functional.
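On a sampling grid the update rule reduces to boolean thresholding. The sketch below is our own toy discretization (the functional-decrease check and the adaptive choice of \(C_{k+1}\) are omitted, and the synthetic field stands in for a computed topological derivative):

```python
import numpy as np

def threshold_update(omega_k, d_t, c_next):
    """Omega_{k+1} = Omega_k union {x outside Omega_k : D_T(x) < -c_next}.
    Boolean masks over a sampling grid stand in for the domains; d_t holds
    D_T(x, R^2 \\ Omega_k) evaluated at the same grid points."""
    return omega_k | (~omega_k & (d_t < -c_next))

# Toy run: a synthetic "topological derivative" that is strongly negative
# near (0.5, 0) -- the update marks exactly that neighbourhood.
x, y = np.meshgrid(np.linspace(-2, 2, 81), np.linspace(-2, 2, 81))
d_t = -np.exp(-3 * ((x - 0.5) ** 2 + y ** 2))
omega_0 = np.zeros_like(d_t, dtype=bool)        # start from Omega = empty set
omega_1 = threshold_update(omega_0, d_t, c_next=0.5)
```

Since this update only adds points, it reproduces the monotonicity drawback discussed next; the generalized derivative below allows points to be removed as well.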

The previous process generates an increasing sequence of objects, \(\Omega _k\subset \Omega _{k+1}\). This implies that if a spurious region is added at one step, it remains in all subsequent approximations. To avoid this problem, we generalized the definition of the topological derivative in [7]: for points \(\mathbf{x}\in \mathcal{R}\), the definition is the standard one (see (6.2)). When \(\mathbf{x}\not \in \mathcal{R}\), we define

$$\begin{aligned} D_T(\mathbf{x},\mathcal{R}):=\lim _{\varepsilon \rightarrow 0}\frac{\mathcal{{J}}(\mathcal {R})-\mathcal{J}(\mathcal{R}\cup B_\varepsilon (\mathbf{x}))}{\mathcal{V}(\varepsilon )},\quad \mathbf{x}\not \in \mathcal{R}. \end{aligned}$$

The points \(\mathbf{x}\not \in \mathcal{R}\) where \(D_T(\mathbf{x},\mathcal{R})\) attains very large positive values indicate regions where we should not locate any obstacle. With this extended definition we designed a new iterative scheme where points can be added or removed at each step. The initial guess \(\Omega _1\) is computed as in (6.3). However, in the next iterations the domain \(\Omega _{k+1}\) is defined from \(\Omega _k\) as follows:

$$\begin{aligned} \Omega _{k+1}\!:=\!(\Omega _k\cup \{\mathbf{x}\in \mathbb{R }^2{\setminus }\Omega _k: D_T(\mathbf{x},\mathbb{R }^2{\setminus }\Omega _k)\!<\!-C_{k+1}\}){\setminus } \{\mathbf{x}\in \Omega _k: D_T(\mathbf{x},\mathbb{R }^2{\setminus }\Omega _k)\!>\!C_{k+1}^{\prime }\} \end{aligned}$$

for decreasing sequences of positive constants \(\{C_k\}\) and \(\{C_{k}^{\prime }\}\). This new strategy is also well suited for scatterers with inner boundaries, like annular scatterers.

The computation of the topological derivative directly from the definition is not practical. However, in a wide variety of problems it is possible to obtain simple expressions in terms of the solutions of a direct problem and of a related adjoint problem. There are different ways to obtain those expressions; for instance, one can use Green's functions [26] or the adjoint method [3]. We follow the idea proposed in [24], which is very systematic and can be easily adapted to more complicated problems (non-constant coefficients, state equations depending on time, ...). It is based on the computation of the topological derivative as a limit of shape derivatives. Shape derivatives can be obtained by introducing a Lagrangian formulation of the problem. To compute the limit we then use asymptotic expansions of the solutions of the forward and adjoint problems appearing in the computation of shape derivatives [6, 7].

In our model problem (2.1)–(2.5) with \(\kappa _e=\kappa _j\), the topological derivative of the functional (6.1) when \(\mathcal{R}=\mathbb{R }^2\) at \(\mathbf{x}\in \mathbb{R }^2\) is

$$\begin{aligned} D_T(\mathbf{x},\mathbb{R }^2)=\sum _{n=1}^N\text{ Re}\left((\lambda _i^2-\lambda _e^2)u^n(\mathbf{x})\,\overline{p}^n(\mathbf{x})\right), \end{aligned}$$
(6.4)

where \(u^n\) and \(p^n\) are solutions of direct and adjoint problems with \(\Omega =\emptyset \). More precisely, \(u^n\) is the total wave solution of the forward problem (2.1)–(2.5) when \(\Omega =\emptyset \) (that is, when \(\Omega _e=\mathbb{R }^2\)) and the incident wave is \(u_{inc}^n\). Therefore, \(u^n=u_{inc}^n\). The \(n\)-th adjoint problem is

$$\begin{aligned} \Delta p+\lambda _e^2p&= \sum _{j=1}^M(u_{meas}^n(\mathbf{x}_j)-u^n(\mathbf{x}_j))\,\delta _{\mathbf{x}_j},\text{ in} \mathbb{R }^2,\nonumber \\ r^{1/2}(\partial _rp+\imath \lambda _e p)&\rightarrow 0,\text{ when} r:=|\mathbf{x}|\rightarrow \infty , \end{aligned}$$
(6.5)

where \(\delta _{\mathbf{x}_j}\) is the Dirac delta distribution at the point \(\mathbf{x}_j\); \(u^n(\mathbf{x}_j)\) and \(u^n_{meas}(\mathbf{x}_j)\) are the values of the total wave solving the direct problem and the wave experimentally obtained at the receptor \(\mathbf{x}_j\) when the incident wave is \(u^n_{inc}\). The explicit solution of the adjoint problem is also known,

$$\begin{aligned} {\overline{p}}^n(\mathbf{x})=-\sum _{j=1}^M\frac{\imath }{4}\,H_0^{(1)}(\lambda _e|\mathbf{x}-\mathbf{x}_j|)(\overline{u_{meas}^n(\mathbf{x}_j)-u^n(\mathbf{x}_j)}). \end{aligned}$$

Therefore, the evaluation of the topological derivative (6.4) is straightforward, as well as the computation of the first approximation of the geometry of the problem using (6.3).
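The evaluation in the first step is indeed direct. Below is a Python transcription of (6.4) and the explicit adjoint formula above (the vectorization, array shapes and parameter names are our own; `hankel1` is SciPy's Hankel function \(H_0^{(1)}\)):

```python
import numpy as np
from scipy.special import hankel1

def topological_derivative(x, receptors, dirs, u_meas, lam_e, lam_i):
    """Evaluate D_T(x, R^2) from (6.4) with Omega = empty: u^n is the incident
    plane wave and conj(p^n) is the explicit Hankel-function sum.
    x: (P,2) evaluation points; receptors: (M,2); dirs: (N,2); u_meas: (N,M)."""
    r = np.linalg.norm(x[:, None, :] - receptors[None, :, :], axis=2)  # (P,M)
    dt = np.zeros(len(x))
    for n, d in enumerate(dirs):
        u_n = np.exp(1j * lam_e * x @ d)                      # u^n = u_inc^n
        res = np.conj(u_meas[n] - np.exp(1j * lam_e * receptors @ d))
        p_bar = -0.25j * np.sum(hankel1(0, lam_e * r) * res, axis=1)
        dt += np.real((lam_i ** 2 - lam_e ** 2) * u_n * p_bar)
    return dt

# Sanity check: with no scatterer the measured and incident fields coincide,
# so the adjoint source vanishes and D_T is identically zero.
lam_e, lam_i = 2.5, 0.5
ang = 2 * np.pi * (np.arange(24) + 0.5) / 24
receptors = 3.0 * np.column_stack((np.cos(ang), np.sin(ang)))
dirs = np.column_stack((np.cos(2 * np.pi * np.arange(1, 11) / 10),
                        np.sin(2 * np.pi * np.arange(1, 11) / 10)))
x = np.column_stack((np.linspace(-1, 1, 5), np.zeros(5)))
u_meas = np.exp(1j * lam_e * dirs @ receptors.T)
dt0 = topological_derivative(x, receptors, dirs, u_meas, lam_e, lam_i)
```

In a real reconstruction `u_meas` holds the experimental data, and the map \(\mathbf{x}\mapsto D_T(\mathbf{x},\mathbb{R }^2)\) evaluated on a grid gives the first guess (6.3).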

In the next iterations one has to compute the topological derivative when \(\mathcal{R}=\mathbb{R }^2{\setminus }\Omega \). Formally we obtain again formula (6.4), but now \(u^n\) and \(p^n\) solve direct and adjoint problems with \(\Omega _e=\mathbb{R }^2{\setminus }\Omega \), \(\Omega \) being the union of all the current objects. Therefore, to compute the topological derivative we have to solve numerically these problems. To do that we use the direct solvers described in Sects. 3 and 4. As we are dealing with an acoustic problem we choose the representation with Brakhage–Werner potentials to avoid resonances.

To illustrate the efficiency of the method we show a numerical example in Fig. 6. A wide gallery of reconstructions using our method can be found in [6, 7]. The objective is to reconstruct the three objects whose boundaries are represented by white lines in all the plots. In Fig. 6a we represent the values of the topological derivative on a grid of points computed with \(\mathcal{R}=\mathbb{R }^2\), when data were available at 24 receptors uniformly distributed on the circle of radius 3 for 10 incident waves generated in directions uniformly distributed in the interval \((0,2\pi ]\). Looking at these values, we detect two regions where the topological derivative attains the largest negative values (the regions in darker color). This means that the topological derivative ignores the smallest object, and the first approximation consists of just two objects, the two balls represented in Fig. 6b. In the same plot we also represent the values of the topological derivative when \(\mathcal{R}=\mathbb{R }^2{\setminus } \Omega _1\). It is now clear that the leftmost object is bigger (it is reconstructed in Fig. 6c), but the smallest object is ignored again. Although in Fig. 6d we visually suspect the presence of the third object, our constants were too restrictive and we required five iterations to find it (Fig. 6f). In Fig. 6i we observe that at the eighth iteration we have a rather satisfactory reconstruction of the three objects.

Fig. 6 Topological derivative and first eight iterations. The parameters of the problem are \(\lambda _e=2.5\), \(\lambda _i=0.5\), \(\kappa _e=\kappa _i=1\)

To end this section we briefly describe some of the results we obtained for other problems. In [7] we considered inverse problems related to the following direct problem:

$$\begin{aligned} \nabla \cdot (\kappa _e(\mathbf{x})\nabla u)+\mu _e^2(\mathbf{x}) u&= 0,\text{ in} \Omega _e,\end{aligned}$$
(6.6)
$$\begin{aligned} \nabla \cdot (\kappa _i(\mathbf{x})\nabla u)+\mu _i^2(\mathbf{x}) u&= 0,\text{ in} \Omega _i:=\cup _{j=1}^d\Omega _j,\end{aligned}$$
(6.7)
$$\begin{aligned} u|_{\Gamma _j}^\mathrm{int}-u|_{\Gamma _j}^\mathrm{ext}&= g^0_j,\text{ on} \Gamma _j,\quad j=1,\dots ,d,\end{aligned}$$
(6.8)
$$\begin{aligned} \kappa _i(\mathbf{x})\,\partial _\mathbf{n}u|_{\Gamma _j}^\mathrm{int}-\kappa _e(\mathbf{x})\,\partial _\mathbf{n}u|_{\Gamma _j}^\mathrm{ext}&= g_j^1,\text{ on} \Gamma _j,\quad j=1,\dots ,d,\end{aligned}$$
(6.9)
$$\begin{aligned} r^{1/2}(\partial _ru-\imath \lambda _e u)&\rightarrow 0,\text{ when} r:=|\mathbf{x}|\rightarrow \infty . \end{aligned}$$
(6.10)

Here the functions \(\mu _i\) and \(\kappa _i\) are non-constant, and the functions \(\mu _e\) and \(\kappa _e\) associated with the exterior medium are constant only outside a ball:

$$\begin{aligned} \mu _e(\mathbf{x})=\mu _e^0,\qquad \kappa _e(\mathbf{x})=\kappa _e^0,\qquad |\mathbf{x}|>R. \end{aligned}$$

In this situation, \(\lambda _e:=\mu _e^0/\sqrt{\kappa _e^0}\) is the wave number of the incident wave. The topological derivative of the shape functional (6.1) for this problem is given by [7]:

$$\begin{aligned} D_T(\mathbf{x},\mathbb{R }^2{\setminus }\Omega )&= \sum _{n=1}^N\text{ Re} \bigg (\frac{2\kappa _e(\mathbf{x})(\kappa _e(\mathbf{x})-\kappa _i(\mathbf{x}))}{\kappa _e(\mathbf{x})+{\kappa _i}(\mathbf x)}\,\nabla u^n(\mathbf{x})\cdot \nabla \overline{p}^n(\mathbf{x})\\&\qquad \qquad \quad +(\mu _i^2(\mathbf{x})-\mu _e^2(\mathbf{x}))\,u^n(\mathbf{x})\,\overline{p}^n(\mathbf{x})\bigg ), \end{aligned}$$

where \(u^n\) is the total wave solving the direct problem (6.6)–(6.10) when the incident wave is \(u_{inc}^n=\exp (\imath \lambda _e\mathbf{x}\cdot \mathbf{d}^n)\) and \(p^n\) is the solution of a related adjoint problem. In this case, even in the simplest situation when \(\Omega =\emptyset \), both \(u^n\) and \(p^n\) have to be computed numerically, since there are no explicit expressions for them as in the case of constant parameters. For solving these problems we use the method introduced in Sect. 5 [9]. As already mentioned in Sect. 5, our method is especially well suited since we need to compute both the solution and its gradient.

In problems with sound-hard scatterers (Neumann conditions), the topological derivative of the functional (6.1) is [6]:

$$\begin{aligned} D_T(\mathbf{x},\mathbb{R }^2{\setminus }\Omega )=\sum _{n=1}^N\text{ Re}\left(2\nabla u^n(\mathbf{x})\cdot \nabla \overline{p}^n(\mathbf{x})-\lambda _e^2 u^n(\mathbf{x})\,\overline{p}^n(\mathbf{x})\right), \end{aligned}$$

where now \(u^n\) and \(p^n\) are solutions of direct and adjoint Neumann problems. Analogously, if we consider sound-soft objects (Dirichlet conditions) we obtain the formula [5]:

$$\begin{aligned} D_T(\mathbf{x},\mathbb{R}^2{\setminus }\Omega )=\sum _{n=1}^N\text{ Re}\left(u^n(\mathbf{x})\,\overline{p}^n(\mathbf{x})\right). \end{aligned}$$
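The sound-soft formula is particularly simple to explore numerically. The sketch below (Python with NumPy/SciPy) evaluates it for \(\Omega =\emptyset \) and \(N=1\): the total wave is then the incident plane wave, and the adjoint field is taken as a superposition of outgoing fundamental solutions placed at the receptors, weighted by the conjugated data misfit. This adjoint-source convention, the receptor positions and the synthetic measurements are stand-in assumptions for illustration, not the precise setup of [5].

```python
import numpy as np
from scipy.special import hankel1

lam = 2.0                                   # exterior wave number lambda_e
d = np.array([1.0, 0.0])                    # single incident direction (N = 1)
receptors = np.array([[3.0, -2.0], [3.0, 0.0], [3.0, 2.0]])

u_inc = lambda x: np.exp(1j * lam * (x @ d))
# outgoing fundamental solution of the 2D Helmholtz equation
green = lambda x, y: 0.25j * hankel1(0, lam * np.linalg.norm(np.asarray(x) - y))

# Synthetic "measurements": incident wave plus a small field radiated from
# the (unknown) object location -- a stand-in for real data.
x_true = np.array([0.0, 0.5])
u_meas = [u_inc(r) + 0.3 * green(r, x_true) for r in receptors]

def topological_derivative(x):
    u = u_inc(x)                            # total wave when Omega is empty
    # adjoint field: misfit-weighted superposition of point sources (assumed convention)
    p = sum(np.conj(u_meas[j] - u_inc(r)) * green(x, r)
            for j, r in enumerate(receptors))
    return np.real(u * np.conj(p))

grid = [np.array([x1, x2]) for x1 in (-1.0, 0.0, 1.0) for x2 in (-1.0, 0.0, 1.0)]
dt_values = np.array([topological_derivative(x) for x in grid])
```

On a fine grid, thresholding the most negative values of `dt_values` gives the first guess of the scatterers described above.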

It can be shown [5] that experimental data for a single incident wave are theoretically enough to uniquely determine the objects. We have also found explicit expressions for the topological derivative for transmission problems in elasticity [6].

In transient thermal problems the total field satisfies the heat equation and the temporal variable cannot be removed. When we reformulate the inverse problem as a constrained optimization problem, the constraints are initial-boundary value problems for evolution equations. We designed a way to deal with this new situation using Laplace transforms in time [8]. The idea is to replace the original optimization problem, whose constraints are time-dependent, with an approximate problem involving a finite number of stationary constraints. The starting inverse problem consists of finding the objects \(\Omega \) that minimize the functional

$$\begin{aligned} J(\mathbb{R}^2{\setminus }\Omega ):=\frac{1}{2}\,\sum _{n=1}^N\sum _{j=1}^M f(t_n)\,|U(\mathbf{x}_j,t_n)-U_{meas}(\mathbf{x}_j,t_n)|^2, \end{aligned}$$
(6.11)

where \(U(\mathbf{x}_j,t_n)\) is the total wave solution to the direct problem and \(U_{meas}(\mathbf{x}_j,t_n)\) is the total wave measured at the receptor \(\mathbf{x}_j\) at time \(t_n\). Here \(f\) is a positive function that weighs the contribution of each time instant, taking into account the time decay of the solutions to the heat equation. The solution \(U\) of the forward problem can be numerically approximated using the following method [32, 38]: if we consider the Laplace transform of \(U\),

$$\begin{aligned} u(\mathbf{x},s)=\int _0^\infty e^{-st}\,U(\mathbf{x},t)\,dt, \end{aligned}$$

then, for each value of \(s\), the function \(u_s(\mathbf{x}):=u(\mathbf{x},s)\) is a solution of a stationary Helmholtz transmission problem (2.1)–(2.5) with complex wave numbers depending on the parameter \(s\). To invert the Laplace transform we choose hyperbolic paths of the form [40]:

$$\begin{aligned} \gamma (\theta ):=\mu (1-\sin (\pi /4-\imath \theta )),\quad \theta \in \mathbb{R}, \end{aligned}$$

where \(\mu >0\) is a parameter that is tuned to obtain optimal performance of the method in the desired time interval. Then, the solution of the original problem is

$$\begin{aligned} U(\mathbf{x},t)=\frac{1}{2\pi \imath }\int _{-\infty }^\infty e^{t\gamma (\theta )}u(\mathbf{x},\gamma (\theta ))\,\gamma ^{\prime }(\theta )\,d\theta . \end{aligned}$$

A numerical approximation of \(U\) can now be calculated using a truncated trapezoidal rule:

$$\begin{aligned} U(\mathbf{x},t)\approx \sum _{\ell =-L}^L c_\ell \, e^{ts_\ell }\, u(\mathbf{x},s_\ell ), \end{aligned}$$
(6.12)

with nodes and weights

$$\begin{aligned} s_\ell :=\gamma \left(\frac{\log (L)}{L}\,\ell \right),\qquad c_\ell :=\frac{\log (L)}{2\pi \imath L}\,\gamma ^{\prime }\left(\frac{\log (L)}{L}\,\ell \right). \end{aligned}$$
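The whole inversion recipe fits in a few lines. The following sketch (Python/NumPy) implements the quadrature (6.12), writing the hyperbolic contour as \(\gamma (\theta )=\mu (1-\sin (\pi /4-\imath \theta ))\), the sign of the argument being chosen so that the path is traversed with increasing imaginary part, as the Bromwich integral requires. It is tested on a transform known in closed form; the tuning \(\mu =10\) for \(t=1\) is an illustrative choice, not a prescription from the papers cited above.

```python
import numpy as np

def laplace_inverse(u_hat, t, L=40, mu=10.0):
    """Approximate U(t) from its Laplace transform u_hat by the truncated
    trapezoidal rule on the hyperbolic contour gamma."""
    h = np.log(L) / L                                  # step log(L)/L
    theta = h * np.arange(-L, L + 1)                   # quadrature nodes theta_ell
    s = mu * (1.0 - np.sin(np.pi / 4 - 1j * theta))    # s_ell = gamma(theta_ell)
    ds = 1j * mu * np.cos(np.pi / 4 - 1j * theta)      # gamma'(theta_ell)
    c = h / (2j * np.pi) * ds                          # weights c_ell
    return np.real(np.sum(c * np.exp(t * s) * u_hat(s)))

# Closed-form check: u_hat(s) = 1/(s+1) is the transform of U(t) = exp(-t).
approx = laplace_inverse(lambda s: 1.0 / (s + 1.0), t=1.0)
```

In the thermal problems of this section the transform is of course not known in closed form: each evaluation of `u_hat` at a node \(s_\ell \) is replaced by the numerical solution \(u_{s_\ell }\) of the corresponding stationary transmission problem.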

Based on this strategy, we proposed in [8] to replace the cost functional (6.11) with the approximate functional

$$\begin{aligned} J(\mathbb{R}^2{\setminus }\Omega ):=\frac{1}{2}\,\sum _{n=1}^N\sum _{j=1}^M f(t_n)\left|\sum _{\ell =-L}^L c_\ell \, e^{ts_\ell }\, u_{s_\ell }(\mathbf{x}_j)-U_{meas}(\mathbf{x}_j,t_n)\right|^2, \end{aligned}$$

whose topological derivative is

$$\begin{aligned} D_T(\mathbf{x},\mathbb{R}^2{\setminus } \Omega )=\text{ Re}\left(\sum _{\ell =-L}^L \frac{2\kappa _e(\kappa _e-\kappa _i)}{\kappa _e+\kappa _i}\,\nabla u_{s_\ell }(\mathbf{x})\cdot \nabla \overline{p}_{s_\ell }(\mathbf{x})+(\lambda _{s_\ell ,i}^2-\lambda _{s_\ell ,e}^2)\,u_{s_\ell }(\mathbf{x})\,\overline{p}_{s_\ell }(\mathbf{x})\right). \end{aligned}$$

Therefore, the computation of the topological derivative of the new functional requires solving \(2L+1\) direct and adjoint problems. The direct problems are of the form (2.1)–(2.5) with complex wave numbers. The source term appearing in the Helmholtz equation for the exterior domain in the adjoint problems is analogous to that obtained in (6.5) and involves our discrete version of the inverse Laplace transform. Specifically, the source term of the \(\ell \)-th adjoint problem is

$$\begin{aligned} \sum _{n=1}^N\sum _{j=1}^M f(t_n)\,c_\ell \,e^{t_ns_\ell }\bigg (U_{meas}(\mathbf{x}_j,t_n)-\sum _{k=-L}^L c_k\,e^{t_ns_k}u_{s_k}(\mathbf{x}_j)\bigg )\delta _{\mathbf{x}_j}. \end{aligned}$$

To solve the stationary problems we use the highly parallelizable methods described in Sects. 3 and 4. Our numerical results (see [8]) seem to indicate that in the non-stationary case we obtain more satisfactory reconstructions than in the time-harmonic one. We also observed that, for the same amount of data, we obtain more reliable reconstructions by observing the behavior of the system at a few receptors at several time instants than by measuring the temperature at a larger number of receptors at a single time instant.

At present we are working on the reconstruction of objects in electrical impedance tomography problems [11, 12], a field of great interest in medical applications.

7 Parameter identification

In this section we deal with the identification of constitutive parameters in acoustic and thermal problems. In a first step we assume that the geometry of the problem is known and recover the parameters, which may be nonhomogeneous. Afterwards we address the full problem of recovering the objects as well as their parameters. Finally, an inverse conductive transmission problem is studied, where we recover a function that is proportional to the level of corrosion at the interfaces. The main contributions of the author in the field of parameter identification in the context of thermal and acoustic waves are collected in [7, 10, 30, 31]. More recently we have also worked on parameter identification problems in electrical impedance tomography [11].

Parameter identification in transmission problems Let us first consider parameter identification problems for the Helmholtz equation when the geometry of the interior obstacles is known. Our aim is to recover the constitutive parameters inside the objects from measurements of the total wave at some receptors. The direct problem is described by the equations (2.1)–(2.5) when the materials are homogeneous, or by the equations (6.6)–(6.10) when dealing with space-dependent coefficients. We showed some regularity properties of the measurements with respect to the parameters, as well as local uniqueness, in [31]. Subsequently we developed in [7] a strategy based on descent techniques to recover the parameters. Some alternatives were previously proposed for the reconstruction of heterogeneous objects buried in homogeneous materials [39, 43] or homogeneous inclusions inside gradually heterogeneous media [27]. Our method is also valid when both the exterior medium and the objects are heterogeneous. For the reconstruction of the parameters inside the defects we introduce an iterative method of gradient type. We start from some approximations \(\kappa _i^k\) and \(\lambda _i^k\) of \(\kappa _i\) and \(\lambda _i\), and perturb these parameters along two fields \(\phi \) and \(\psi \), selected in such a way that the following cost functional decreases:

$$\begin{aligned} J(\delta ):=J(\kappa _i^k+\delta \phi ,\lambda _i^k+\delta \psi ;\mathbb{R}^2{\setminus }\Omega )=\frac{1}{2}\sum _{j=1}^M|u_\delta (\mathbf{x}_j)-u_{meas}(\mathbf{x}_j)|^2. \end{aligned}$$

Here \(u_\delta \) is the solution of the forward problem when the union of all the objects is \(\Omega \) and the interior parameters are \(\kappa _i^k+\delta \phi \) and \(\lambda _i^k+\delta \psi \). To do that, we seek \(\delta , \phi \) and \(\psi \) such that \(dJ(\delta )/d\delta <0\). An explicit expression of this derivative provides analytical expressions for the corrections \(\phi \) and \(\psi \). Specifically, after introducing a Lagrangian formulation of the problem we prove in [7] that, for a given small value of \(\delta >0\), a suitable choice for homogeneous problems is

$$\begin{aligned} \phi :=\text{ Re}\int _{\Omega }\nabla u(\mathbf{y})\cdot \nabla \overline{p}(\mathbf{y})d\mathbf{y},\qquad \psi :=-\text{ Re}\int _{\Omega } u(\mathbf{y})\,\overline{p}(\mathbf{y})d\mathbf{y}, \end{aligned}$$

where \(u\) and \(p\) solve forward and adjoint problems when the union of all the objects is \(\Omega \) and the interior parameters are \(\kappa _i=\kappa _i^k\) and \(\lambda _i=\lambda _i^k\).

If we look for parameters that are constant but different inside each object \(\Omega _j\) (recall that \(\Omega =\cup _{j=1}^d\Omega _j\)), then the integral over \(\Omega \) may be replaced by the integral over \(\Omega _j\) to approximate the parameters in \(\Omega _j\). In the heterogeneous case (space-dependent parameters) we select

$$\begin{aligned} \phi (\mathbf{x}):=\text{ Re}(\nabla u(\mathbf{x})\cdot \nabla \overline{p}(\mathbf{x})),\qquad \psi (\mathbf{x}):=-\text{ Re}(u(\mathbf{x})\,\overline{p}(\mathbf{x})). \end{aligned}$$

In both cases we define \(\kappa _i^{k+1}:=\kappa _i^k+\delta \phi \) and \(\lambda _i^{k+1}:=\lambda _i^k+\delta \psi \). For the numerical implementation of this strategy we need to solve transmission problems. When dealing with constant parameters we use the direct solvers described in Sects. 3 and 4, and in the heterogeneous case we apply the finite and boundary element coupling of Sect. 5. In this second situation it is worth emphasizing that our formulation is especially well suited, since it allows us to calculate simultaneously, and with ease, both the solutions of the forward and adjoint problems and their gradients.
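One step of the heterogeneous update can be sketched as follows (Python/NumPy), assuming that nodal values of \(u\), \(p\) and their gradients are available from the forward and adjoint solves; the function name, array shapes and toy data are hypothetical.

```python
import numpy as np

def update_parameters(kappa, lam, u, p, grad_u, grad_p, delta):
    """Perturb the interior parameters along the descent fields phi and psi."""
    phi = np.real(np.sum(grad_u * np.conj(grad_p), axis=-1))  # Re(grad u . grad p-bar)
    psi = -np.real(u * np.conj(p))                            # -Re(u p-bar)
    return kappa + delta * phi, lam + delta * psi

# Toy nodal data at four interior nodes.
u = np.array([1 + 1j, 0.5j, 1.0, -1j]); p = np.array([1j, 1.0, -0.5, 0.5 + 0.5j])
grad_u = np.full((4, 2), 1.0 + 0j); grad_p = np.full((4, 2), 0.5 + 0j)
kappa1, lam1 = update_parameters(np.ones(4), 2.5 * np.ones(4),
                                 u, p, grad_u, grad_p, delta=0.1)
```

For constant parameters inside each \(\Omega _j\), the pointwise products would simply be integrated over \(\Omega _j\) before the update, as described above.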

In the more general inverse problem the aim is to reconstruct the objects as well as the constitutive parameters inside them, with no a priori information about the geometry or the parameters. We propose in [7, 10] to combine the iterative method based on the successive computation of topological derivatives (introduced in Sect. 6) with the method to identify the parameters that we have just described. We start with initial values for the interior parameters close to the exterior ones. With them we obtain a first approximation of the objects by computing the topological derivative in \(\mathbb{R}^2\). In the next step we fix the objects and apply the gradient method to improve the approximation of the parameters. Once the parameters have been updated, we compute the topological derivative again to update the objects, and so forth. We also investigated in [10] how to accelerate this method and concluded that it is computationally cheaper to alternate one iteration updating the geometry with several iterations of the gradient method adjusting the parameters.
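The alternation just described can be summarized in a short driver loop. In the sketch below the two callables are stand-ins (in the real scheme the geometry update thresholds a topological-derivative field and the parameter update is the gradient step of this section), so everything is schematic.

```python
def hybrid_reconstruction(update_geometry, update_parameters, params0,
                          n_outer=10, n_inner=5):
    """Alternate one geometry update with several parameter updates."""
    omega, params = None, params0          # start with no objects
    for _ in range(n_outer):
        omega = update_geometry(omega, params)       # one topological-derivative step
        for _ in range(n_inner):
            params = update_parameters(omega, params)  # several gradient steps
    return omega, params

# Stand-ins: the geometry "grows" once per outer step; the parameter
# relaxes towards its true value 0.5 at every inner step.
omega, lam = hybrid_reconstruction(
    lambda om, p: (om or 0) + 1,
    lambda om, p: p + 0.5 * (0.5 - p),
    params0=2.0)
```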

We have tested this hybrid topological derivative-gradient based method on the configuration with three objects of the previous example (Fig. 6). The interior parameters in all the objects are \(\kappa _i=1\) (assumed to be known) and \(\lambda _i=0.5\) (assumed to be unknown). The background parameters are \(\kappa _e=1\) and \(\lambda _e=2.5\). Choosing \(\lambda _i^0=2\) as initial guess for \(\lambda _i\), we compute the topological derivative when \(\Omega =\emptyset \) (see Fig. 7a). Notice that, except for the scales, this plot is identical to that in Fig. 6a, and our initial guess \(\Omega _1\) consists again of the two balls represented in Fig. 6b. We now fix \(\Omega =\Omega _1\) and iterate five times to update the value of the parameter without modifying the obstacle. Afterwards, we fix the value of \(\lambda _i\) and compute the topological derivative to update the domain, and so on. The reconstruction at the tenth iteration with respect to the domain is represented in Fig. 7b. The values of \(\lambda _i\) versus the number of iterations are shown in Fig. 7c. Each domain-updating iteration appears in the plot as two identical consecutive values of the parameter (the parameter is not updated there).

Fig. 7

a Topological derivative when \(\Omega =\emptyset \), \(\kappa _e=\kappa _i=1, \lambda _e=2.5\) are known and \(\lambda _i\) is approximated by \(\lambda _i^0=2\). b Reconstruction of the geometry at the tenth iteration with respect to the domain. c Values of \(\lambda _i\) through the iterative method

Some reconstructions with two or three objects with different constant parameters can be found in [7, 10]. In [10] a gallery of reconstructions is presented including some examples where the interior parameters and/or the exterior ones are space dependent.

Reconstruction of the corrosion function in a conductive transmission problem We end this section with the study of an inverse problem for the conductive transmission problem, described by the equations (2.1), (2.2), (2.4), (2.5) and (2.6). To simplify, we assume that there is only one object, buried in the half plane \(\mathbb{R}_-^2:=\{(x,y)\in \mathbb{R}^2,\ \ y<0\}\). If we now generate an incident wave by periodic heating from a source point on the boundary \(\Pi :=\{(x,y)\in \mathbb{R}^2,\ \ y=0\}\), the behavior of the system can be modeled by the previous equations, replacing \(\mathbb{R}^2\) by \(\mathbb{R}^2_-\) in (2.1) and adding the adiabatic condition \(\partial _\mathbf{n}u=0\) on \(\Pi \). Our aim in [30] is to recover the corrosion function \(f\) that appears in the boundary condition (2.6). The closest related work is [4], where the objective is to determine, in the time-harmonic context, the shape of the object as well as the \(L^\infty \) norm of the corrosion function. The function itself is never recovered and no uniqueness results are provided; even in the simplified situation of known objects, the numerical experiments there only consider constant corrosion functions. We prove in [30] a uniqueness result for the reconstruction of the corrosion function in both the time-harmonic and transient situations. Furthermore, we proposed two numerical methods for the reconstruction: an iterative regularized Newton-type method and a non-iterative scheme. The first one was the more efficient of the two, and it is the only one we describe here.

We introduce the operator \(F\) that maps each corrosion function \(f\) to the value of \(u\) on the boundary \(\Pi \). The inverse problem is then to find the function \(f\) when the values of \(F(f)\) at a finite number of points on \(\Pi \) are known. Our iterative regularized Gauss–Newton method works as follows. If \(F_0\) is the data vector (which in principle contains errors that are unavoidable but can be estimated), and \(f_0\) is an initial guess for the corrosion function, the successive approximate corrosion functions are calculated by

$$\begin{aligned} f_{n+1}=f_n+(\alpha _nI+G_n^*G_n)^{-1}(G_n^*(F_0-F(f_n))+\alpha _n(f_0-f_n)), \end{aligned}$$
(7.1)

where \(G_n h:=F^{\prime }(f_n)h\) is the Fréchet derivative of the operator \(F\) at the point \(f_n\) acting in the direction \(h\). The regularizing sequence of parameters \(\alpha _n\) tends to zero.

The stopping criterion is Morozov's discrepancy principle, which dictates that the algorithm stops as soon as the residual is smaller than a function of the error in the data. The evaluation of \(F^{\prime }(f)h\) and \(F^{\prime }(f)^*\) requires solving systems of integral equations similar to the one that appears when solving the direct problem, see [30].
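Iteration (7.1) is easy to exercise on a synthetic problem. In the sketch below (Python/NumPy) the forward map is a hypothetical linear operator, so its Fréchet derivative is the operator itself and, with noiseless data, the iterates converge to the true coefficient vector as \(\alpha _n\rightarrow 0\); the Morozov stopping test is omitted because the synthetic data contain no noise.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))       # hypothetical linear forward map F(f) = A f
f_true = rng.standard_normal(5)
F0 = A @ f_true                        # noiseless synthetic data vector
f0 = np.zeros(5)                       # initial guess

f = f0.copy()
for n in range(30):
    alpha = 0.5 ** n                   # regularizing sequence tending to zero
    G = A                              # Frechet derivative of a linear map
    # f_{n+1} = f_n + (alpha I + G* G)^{-1} (G*(F0 - F(f_n)) + alpha (f0 - f_n))
    f = f + np.linalg.solve(alpha * np.eye(5) + G.T @ G,
                            G.T @ (F0 - A @ f) + alpha * (f0 - f))
```

In the corrosion problem each application of \(G_n\) and \(G_n^*\) requires, instead of a matrix product, the solution of the systems of integral equations mentioned above.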

Some numerical examples illustrating the behavior of the method for different noise levels and different numbers of incident waves are given in [30, 49]. In [30] we also study the transient problem, combining Laplace transforms in time with boundary elements for the numerical solution of the evolution problems appearing in the iterative regularized Gauss–Newton method (7.1). The numerical scheme to solve the forward problem is essentially the Laplace-transform method described at the end of Sect. 6. The main difference is that the solution is still approximated by (6.12), but now, for each value \(s_\ell \), the function \(u(\,\cdot \,,s_\ell )\) solves the conductive transmission problem (2.1), (2.2), (2.4), (2.5) and (2.6), with a complex wave number depending on \(s_\ell \).

This numerical strategy defines a highly efficient scheme that provides more accurate reconstructions when observing the system over a time interval than in the time-harmonic case. We also observed that, for a given amount of data, the reconstructions of the corrosion function are more precise when measuring at several time instants (at a few receptors) than when considering just one time instant (at a larger number of receptors). Some numerical examples in the time-dependent case can be found in [30].