Abstract
The transmission eigenvalue problem plays an important role in the inverse scattering theory of inhomogeneous media. In particular, transmission eigenvalues can be reconstructed from scattering data and used to obtain qualitative information about the material properties of the scattering medium. In this paper, we consider the inverse spectral problem of determining the material properties from a few transmission eigenvalues. The lack of theoretical results motivates us to propose a Bayesian approach. The inverse problem is first formulated as a statistical inference problem using Bayes’ theorem. Then, a Markov chain Monte Carlo (MCMC) algorithm is used to explore the posterior density. Due to the non-uniqueness of the problem, we adopt local conditional means (LCM) to characterize the posterior density function. Numerical examples show that the proposed method can provide useful information about the unknown material properties.
1 Introduction
The transmission eigenvalue problem plays an important role in the inverse scattering theory for inhomogeneous media and has received considerable attention over the last decade. The problem is critical to the analysis of the inverse medium problem. Furthermore, it has been shown that transmission eigenvalues can be reconstructed from scattering data and used to obtain qualitative information about the material properties. For the discreteness and existence of transmission eigenvalues and their determination from scattering data, we refer the readers to [3, 4, 6, 9, 12, 24].
In this paper, given a few real transmission eigenvalues, we consider the inverse spectral problem of determining the material properties of a non-absorbing isotropic or anisotropic medium. The problem has been treated using certain inequalities or optimization methods [11, 24]. However, due to the lack of theoretical results and to non-uniqueness, these methods face difficulties. For example, inequalities can only provide rough estimates of the material properties, and an optimization method might not recover all possible solutions when the inverse problem has non-unique solutions (since only a few eigenvalues are used).
Bayesian inversion is an effective approach for inverse problems [14, 22] and can be used to characterize non-unique solutions through the (posterior) probability distribution of the unknowns [25]. Although sometimes computationally expensive, it is an attractive method for challenging inverse problems. In this paper, we propose a Bayesian approach for the inverse transmission eigenvalue problem. First, using Bayes’ theorem, the inverse problem is formulated as a statistical inference for the posterior density of the unknown physical properties. Then, a Markov chain Monte Carlo (MCMC) algorithm is used to explore the posterior density. Due to the non-uniqueness of the problem, the posterior density function concentrates at more than one location. The classical maximum a posteriori (MAP) and conditional mean (CM) estimators cannot fully characterize such a density. We therefore resort to the local conditional means (LCM) [25], which can reveal multiple solutions of the inverse problem.
Numerical examples validate the effectiveness of the proposed method. For applications of the Bayesian approach to inverse problems, we refer the readers to, e.g., [14, 15, 18, 22]. In particular, Bayesian inversion was employed in [20] for an inverse Stekloff eigenvalue problem.
The rest of the paper is organized as follows. In Sect. 2, we present the transmission eigenvalue problem for acoustic waves and state the inverse spectral problem of interest. Section 3 is devoted to the development of a Bayesian approach and the stability analysis. In Sect. 4, we first present a continuous finite element method to compute transmission eigenvalues used in the MCMC algorithm. Then, we show four examples demonstrating the performance of the proposed Bayesian approach. Conclusions and future work are discussed in Sect. 5.
2 Inverse transmission eigenvalue problem
Let \(D \subset {\mathbb {R}}^2\) be a bounded Lipschitz domain. Let A(x) be a \(2\times 2\) real matrix-valued function with \(C^1(D)\) entries and \(n(x) \in C(D)\). Assume that \(n(x) > 0\) is bounded and that A(x) is bounded and symmetric such that \(\xi \cdot A \xi \ge \gamma |\xi |^2\) for all \(\xi \in {\mathbb {R}}^2\) with \(\gamma > 0\). Given an incident field \(u^i\) (e.g., plane waves), the direct scattering problem by an inhomogeneous medium D is to find u and the scattered field \(u^s\) such that
where k is the wavenumber, \(r=|x|\), \(\nu \) is the unit outward normal to \(\partial D\) and \(\partial _A u\) is the conormal derivative
The Sommerfeld radiation condition (1e) holds uniformly over all directions. It is well known that there exists a unique solution \(u^s\) to the above scattering problem [9].
The transmission eigenvalue problem associated to (1) is as follows. Find \(k^2 \in {\mathbb {C}}\) and non-trivial functions u, v such that
The well-posedness of the above problem has been an active research area for a decade. We summarize some discreteness results from [3], which are relevant to the topic discussed in the current paper. The existence results can be found in the same book (Theorems 4.12, 4.32, 4.37 therein).
- (a) Let \(n_*=\inf _D n\) and \(n^*=\sup _D n\). Assume \(A=I\). If \(n \ge n_0 > 0\) and either \(n_* > 1\) or \(n^* <1\), the set of transmission eigenvalues is discrete with \(+\infty \) as the only possible accumulation point. The multiplicity of the eigenvalues is finite.
- (b) Let \(a_* = \inf _D \inf _{|\xi |=1} \xi \cdot A \xi > 0\) and \(a^* = \sup _D \sup _{|\xi |=1} \xi \cdot A \xi < \infty \). Assume that \(n=1\) and either \(a_* > 1\) or \(0<a^*<1\). Then, the set of transmission eigenvalues is discrete with \(+\infty \) as the only possible accumulation point.
- (c) Assume that either \(0<a^*<1\) or \(a_* >1\), and that \(\int _D (n-1)\,dx \ne 0\). Then, the set of transmission eigenvalues is discrete with \(+\infty \) as the only possible accumulation point.
- (d) Assume that either \(0<a^*<1\) and \(0<n^*<1\), or \(a_* > 1\) and \(n_*>1\). Then, the set of transmission eigenvalues is discrete with \(+\infty \) as the only possible accumulation point.
In the rest of the paper, we assume that there exists an operator \({\mathcal {G}}\) such that
where \({{\varvec{k}}} \in {\mathbb {R}}^m\) is a vector consisting of m transmission eigenvalues.
Let S be the space of \(2 \times 2\) symmetric matrices with real \(C^1\) elements and \(X = S \times C(D)\). Let \(Y={\mathbb {R}}^m\). Define two Sobolev spaces
For \({{\varvec{w}}}=(u, v) \in V\) and \({\varvec{\psi }}=(\phi , \varphi ) \in V\), we define two sesquilinear forms
Then, the transmission eigenvalue problem can be written as follows [27]. Find \( k^2 \in {\mathbb {C}}\) and non-trivial \({{\varvec{w}}}\) with \(\Vert {{\varvec{w}}}\Vert _W=1\) such that
In this paper, we only consider positive real transmission eigenvalues since they represent the wavenumbers that can be reconstructed from the scattering data [5, 24]. Note that \(a(\cdot , \cdot )\) and \(b(\cdot , \cdot )\) depend continuously on A and n and, as a consequence, so does \({{\varvec{k}}}\). We assume that the following Lipschitz continuity holds
for some constant C. There exist only a few results on the dependence of transmission eigenvalues on the material properties. When \(A=I\), the continuous dependence of \({{\varvec{k}}}\) on n(x) is implied in Sect. 3 of [4] (see also Sect. 3 of [23]). The authors are not aware of similar results beyond these. Results similar to (5) are certainly desirable for general material properties.
The interior transmission eigenvalue problem plays an essential role in the inverse scattering theory for inhomogeneous media and has received significant attention in the last decade. The study has focused on the discreteness, the existence, the determination of transmission eigenvalues from scattering data, and the relation between transmission eigenvalues and material properties (see, e.g., [4, 5, 9, 11, 12, 24]).
In this paper, we are interested in the following inverse spectral problem.
ISP: Given a few real transmission eigenvalues \({{\varvec{k}}} \in Y\), reconstruct A and n.
In particular, we assume that D is known a priori. Note that both D and the eigenvalues \({{\varvec{k}}} \) can be obtained from the scattering data; we refer interested readers to [5, 9, 24] and the references therein.
Except for a few special cases, e.g., \(A=I\) and \(n=n_0\) for some constant \(n_0\) or spherically stratified media, little is known about the above inverse spectral problem. In addition, since only a few transmission eigenvalues are given, non-uniqueness can occur, and it is highly challenging to develop effective deterministic methods for the ISP. This motivates us to develop a Bayesian approach to seek useful information about A and n.
3 Bayesian inversion
In the Bayesian framework, the ISP is treated as a problem of statistical inference. All parameters are modeled as random variables, and the uncertainties in their values are expressed by probability distributions. The solution is the posterior probability distribution, which provides a probabilistic description of the unknowns [14, 22].
Denote by \({\mathcal {N}}\) the normal distribution and \({\mathcal {U}}\) the uniform distribution. Consider the inverse problem of (3) to reconstruct \((A,n) \in X\) from the measurement \({{\varvec{k}}}\in Y\). Since the measurement \({{\varvec{k}}}\) contains noise, the statistical problem can be written as
where \({{\varvec{k}}}=(k_1, k_2, \ldots , k_m)^T \) is a vector of transmission eigenvalues, A(x) and n(x) are random, \({\mathcal {G}}\) is the operator mapping (A(x), n(x)) to \({{\varvec{k}}}\) based on (2), and E is the additive noise, mutually independent of A and n. We assume that the noise E follows a Gaussian distribution, i.e., \(E\sim {\mathcal {N}}(0,\,\Gamma _{\text {noise}})\), where \(\Gamma _{\text {noise}} \in {\mathbb {R}}^{m\times m}\) is positive definite.
In Bayesian statistical theory, prior information is coded into the prior density \(\pi _{0} (A, n)\). For example, if n is known to be a real constant \(n_0\) such that \(a< n_0 < b\), one may take the prior to be the continuous uniform distribution, i.e., \(n \sim {\mathcal {U}}(a,b)\). The conditional distribution of the measurement \({{\varvec{k}}}\) given (A, n) is referred to as the likelihood, denoted by \(\pi ({{\varvec{k}}}|(A, n))\). From (6), given (A, n), we have \({{\varvec{k}}}-{\mathcal {G}}(A, n) \sim {\mathcal {N}}(0,\Gamma _{\text {noise}})\).
Given \({{\varvec{k}}}\), the goal of the Bayesian inverse problem is to seek statistical information of (A, n) by exploring the conditional probability distribution \(\pi _{{\varvec{k}}}((A, n)|{{\varvec{k}}})\), called the posterior distribution of (A, n). According to Bayes’ rule, one has that
Let \(P(A,n;{\varvec{k}})= \frac{1}{2} ({{\varvec{k}}}-{\mathcal {G}}(A, n))^{\top } \Gamma ^{-1}_{\text {noise}} ({{\varvec{k}}}-{\mathcal {G}}(A, n))\). Assume that the posterior measure \(\mu _{{\varvec{k}}}\) with density \(\pi _{{\varvec{k}}}\) is absolutely continuous with respect to the prior measure \(\mu _{0}\) with density \(\pi _{0}\). Using (7), we can relate \(\mu _{{\varvec{k}}}\) with \(\mu _{0}\) by
where \(Z({\varvec{k}})=\int _{X} \exp (-P(A,n;{\varvec{k}})) \mathrm {d} \mu _{0}(A,n)\) is the normalization constant. The formula (8) is understood as the Radon–Nikodym derivative of the posterior probability measure \(\mu _{{\varvec{k}}}\) with respect to the prior measure \(\mu _{0}\).
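In computations, the density in (8) is only needed up to the constant \(Z({\varvec{k}})\): one evaluates the misfit P and multiplies by the prior density. A minimal sketch, where the names `misfit` and `unnormalized_posterior` are ours and \({\mathcal {G}}(A,n)\) is assumed to have been evaluated already (by the eigenvalue solver of Sect. 4):

```python
import numpy as np

def misfit(k_meas, k_model, gamma_noise):
    """P(A, n; k) = 0.5 (k - G(A,n))^T Gamma_noise^{-1} (k - G(A,n))."""
    r = np.asarray(k_meas, float) - np.asarray(k_model, float)
    return 0.5 * r @ np.linalg.solve(gamma_noise, r)

def unnormalized_posterior(k_meas, k_model, gamma_noise, prior_density):
    """Posterior density up to the normalization constant Z(k) in (8)."""
    return np.exp(-misfit(k_meas, k_model, gamma_noise)) * prior_density

# With Gamma_noise = I/100 (as in Sect. 4), a residual of 0.1 in a single
# eigenvalue contributes 0.5 * 0.1 * 100 * 0.1 = 0.5 to the misfit.
```

Only ratios of `unnormalized_posterior` values are needed by the MCMC algorithm below, so \(Z({\varvec{k}})\) never has to be computed.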
Now we study the stability of the Bayesian inversion scheme for the ISP following [22]. From (4) and the boundedness of A and n, the following lemma holds.
Lemma 3.1
For every \(\varepsilon >0\), there exists \(C:=C(\varepsilon )\in {\mathbb {R}}\) such that, for all \((A,n)\in X\),
The assumption (5) implies that \({\mathcal {G}}: X\rightarrow {\mathbb {R}}^{m}\) satisfies the following property. For every \(r>0\) there exists a \(C:=C(r)>0\) such that, for all \((A,n)_{1},(A,n)_{2}\in X\) with \(\max \{\Vert (A,n)_{1}\Vert _{X},\Vert (A,n)_{2}\Vert _{X}\}<r\),
Then, the function \(P: X\times Y \rightarrow {\mathbb {R}}\) has the following properties [22].
- (i) For every \(\varepsilon >0\) and \(r>0\) there is an \(M=M(\varepsilon ,r) \in {\mathbb {R}}\) such that, for all \((A,n) \in X\) and \({\varvec{k}} \in Y\) with \(\Vert {\varvec{k}}\Vert _{Y} < r\),
$$\begin{aligned} P(A,n;{\varvec{k}} ) \ge M- \varepsilon \Vert (A,n)\Vert _{X }. \end{aligned}$$
- (ii) For every \(r>0\) there is a \(K=K(r)>0\) such that, for all \((A,n) \in X\) and \({\varvec{k}} \in Y\) with \(\max \{\Vert (A,n)\Vert _{X }, \Vert {\varvec{k}}\Vert _{Y} \}< r\),
$$\begin{aligned} P(A,n;{\varvec{k}} ) \le K. \end{aligned}$$
- (iii) For every \(r>0\), there is an \(L=L(r) >0\) such that, for all \((A,n)_{1} ,(A,n)_{2} \in X\) and \({\varvec{k}} \in Y\) with \(\max \{\Vert (A,n)_{1}\Vert _{X }, \Vert (A,n)_{2}\Vert _{X }, \Vert {\varvec{k}}\Vert _{Y} \}< r\),
$$\begin{aligned} | P((A,n)_{1};{\varvec{k}} ) - P((A,n)_{2};{\varvec{k}} ) | \le L\Vert (A,n)_{1}-(A,n)_{2}\Vert _{X }. \end{aligned}$$
- (iv) For every \(\varepsilon >0\) and \(r>0\), there is a \(C=C(\varepsilon ,r) \in {\mathbb {R}}\) such that, for all \({\varvec{k}}_{1},{\varvec{k}}_{2} \in Y\) with \(\max \{\Vert {\varvec{k}}_{1}\Vert _{Y} ,\Vert {\varvec{k}}_{2}\Vert _{Y} \}< r\), and \((A,n) \in X\),
$$\begin{aligned} |P((A,n);{\varvec{k}}_{1} ) - P((A,n);{\varvec{k}}_{2} ) | \le \exp (\varepsilon \Vert (A,n)\Vert ^2_{X}+C)\Vert {\varvec{k}}_{1}-{\varvec{k}}_{2}\Vert _{Y}. \end{aligned}$$
Definition 3.1
The Hellinger distance between two probability measures \(\mu _1\) and \(\mu _2\) with common reference measure \(\nu \) is defined as
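For measures with densities p and q with respect to \(\nu \), \(d_H(\mu _1,\mu _2)^2=\frac{1}{2}\int (\sqrt{p}-\sqrt{q})^2\, d\nu \). On a finite grid this reduces to a sum; a small sketch, where the function name is ours and discrete probability vectors stand in for the densities:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability vectors p and q
    on a common grid (counting measure plays the role of nu)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# identical measures give 0; measures with disjoint supports give 1
```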
The following theorem guarantees the well-posedness of the Bayesian inverse problem (8).
Theorem 3.1
Suppose X is a Banach space and \(\mu _{0}\) is a Borel probability measure on X with \(\mu _{0}(X)=1\). Then, the Bayesian inverse problem (8) is well-posed in the Hellinger metric, i.e., for \({\varvec{k}}_1\) and \({\varvec{k}}_2\) with \(\max \{\Vert {\varvec{k}}_1\Vert _{Y}, \Vert {\varvec{k}}_2\Vert _{Y}\}\le r\), there exists \(M=M(r)>0\) such that
Proof
By the definition of \(Z({\varvec{k}})=\int _{X} \exp (-P(A,n;{\varvec{k}})) \mathrm {d} \mu _{0}(A,n)\) and \(\mu _{0}(X)=1\), we have \(0< Z({\varvec{k}}) \le 1\).
Applying the mean value theorem and using properties (i) and (iv), it holds that
By the definition of the Hellinger distance, we obtain that
Applying again the mean value theorem and using property (iv), we have that
Due to the bounds on \(Z({\varvec{k}}_1)\) and \(Z({\varvec{k}}_2)\), it holds that
Combining (9)-(12), we obtain that
\(\square \)
Now we present the MCMC (Markov chain Monte Carlo) method to explore the posterior density functions of A and n given \({\varvec{k}}\).
MCMC-ISP:
1. Given D and \({\varvec{k}}\).
2. Draw \((A,n)_{1}\) from \(\pi _{0}(A,n)\) such that A is positive definite.
3. For \(j=2,\ldots ,J\), do
a. Generate \({(A,n)^{*}}\) from \(\pi _{0}(A,n)\) such that \(A^*\) is positive definite;
b. Compute \(\alpha ({(A,n)^{*}},(A,n)_{j-1}) = \min \left( 1, \dfrac{\pi _{{\varvec{k}}}({(A,n)^{*}}|{\varvec{k}})}{\pi _{{\varvec{k}}}\left( (A,n)_{j-1}|{\varvec{k}}\right) } \right) \);
c. Draw \(\tilde{\alpha } \sim {\mathcal {U}}(0,1)\);
d. If \(\alpha >\tilde{\alpha }\), set \((A,n)_{j}={(A,n)^{*}}\); otherwise, set \((A,n)_{j}=(A,n)_{j-1}\).
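The steps above amount to an independence Metropolis sampler with the prior as the proposal. A minimal sketch for a scalar unknown, where the function names are ours; in the actual computation `log_posterior` would be \(-P(A,n;{\varvec{k}})\) plus the log prior, evaluated through the eigenvalue solver of Sect. 4, and the positive-definiteness check on A is omitted in this scalar toy:

```python
import numpy as np

rng = np.random.default_rng(0)

def mcmc_isp(log_posterior, draw_prior, J):
    """Sketch of MCMC-ISP: proposals are drawn from the prior and accepted
    with the ratio of posterior densities."""
    theta = draw_prior(rng)                      # step 2: initial prior draw
    lp = log_posterior(theta)
    samples = [theta]
    for _ in range(J - 1):                       # step 3
        theta_star = draw_prior(rng)             # a. propose from the prior
        lp_star = log_posterior(theta_star)
        alpha = np.exp(min(0.0, lp_star - lp))   # b. acceptance probability
        if rng.uniform() < alpha:                # c./d. accept or reject
            theta, lp = theta_star, lp_star
        samples.append(theta)
    return np.array(samples)
```

For instance, with a sharp Gaussian log-posterior centered at 16 and a uniform prior on [8, 30], the chain concentrates near 16, mimicking case 1.3 of Example 1 below.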
As discussed above, the solution of the Bayesian inverse problem is the posterior probability density of the unknown parameters. To characterize the posterior density function, point estimators such as the maximum a posteriori (MAP) and the conditional mean (CM) are usually used. The CM of (A, n) is defined as
The MAP of (A, n) is defined as
However, for the problem considered in this paper, such estimators might not carry sufficient or correct information about the unknowns due to the presence of non-unique solutions (see, e.g., Figs. 1d, e and 2b). Hence, we shall use the local estimators introduced in [25] when necessary.
Definition 3.2
(LMAP) We call \((A^*, n^*)\) a local MAP, denoted by \((A, n)_{LMAP}\), if
for some constant \(\epsilon \in (0, 1)\), where \(N(A^*, n^*)\) is a neighborhood of \((A^*, n^*)\).
Definition 3.3
(LCM) The local conditional mean \((A, n)_{LCM}\) is defined as
where \({X_1}\) is a subset of X.
If it is found that a significant number of samples aggregate in more than one region, we shall use the local estimators instead of the global estimators. Numerical examples show that the local estimators can identify multiple solutions to the inverse spectral problems considered here. For more details of CM, MAP, and LCM, we refer the readers to Chp. 3 of [14] and [25].
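When the samples visibly aggregate in several disjoint regions, the LCM of Definition 3.3 can be approximated by averaging the samples inside each region. A sketch for a scalar unknown, where the function name is ours and the regions are chosen by inspecting the sample histogram:

```python
import numpy as np

def local_conditional_means(samples, regions):
    """Approximate local conditional means (Definition 3.3): average the
    MCMC samples falling in each region X_1 where they aggregate."""
    samples = np.asarray(samples, float)
    return [samples[(samples >= lo) & (samples <= hi)].mean()
            for (lo, hi) in regions]
```

For instance, in case 1.1 of Example 1 the samples cluster around 16 and 27, so splitting the prior range into two intervals such as (8, 22) and (22, 30) yields two averages playing the roles of \(n_{LCM}^1\) and \(n_{LCM}^2\).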
4 Numerical examples
In this section, we present some numerical examples. For the MCMC-ISP algorithm, one needs an effective method to compute several transmission eigenvalues of (2) given A and n. Numerical methods for the transmission eigenvalue problem have been developed by many researchers. We refer the readers to [1, 8, 10, 16, 17, 21, 23, 26, 28] and the references therein. In particular, finite element methods for (2) of anisotropic media are discussed in [2, 13, 27]. In the following, we present a continuous finite element method from [13], which is used in the simulation. Note that, from the finite element convergence theory for the transmission eigenvalue problem [13, 26, 27], the numerical method defines a discrete operator \({\mathcal {G}}_h\) such that
The operator \({\mathcal {G}}_h\) approximates \({\mathcal {G}}\) in the sense that
where h is the size of the finite element mesh for D.
We first multiply (2a) by a test function \(\phi \) and integrate by parts to obtain
where \(\langle \cdot , \cdot \rangle \) is the boundary integral on \(\partial D\). Similarly, for (2b), we have that
Subtracting (15) from (14) and using (2d), it holds that
Let \({\mathcal {T}}_h\) be a regular triangular mesh for D. Let \(V_h\) be the space of the linear Lagrange finite element, \(V_{h}^{0}\) be the subspace of functions in \(V_h\) with vanishing degrees of freedom on \(\partial D\), and \(V_{h}^{B}\) be the subspace of functions in \(V_h\) with vanishing degrees of freedom in D. The boundary condition (2c) is enforced by setting
For \(\xi _h\in V_h^0\) in (14), the weak formulation for \(w_{h}\) is
Similarly, the weak formulation for \(v_{h}\) is
Setting \( \phi _h\in V_{h}^B\) in (16), we have that
Let \(N_h\), \(N_h^0\), and \(N_h^B\) be the dimensions of \(V_h\), \(V_h^0\) and \(V_h^B\), respectively. Let \(\{\xi _1, \ldots , \xi _{N_h} \}\) be the finite element basis for \(V_h\) such that \(\{\xi _1, \ldots , \xi _{N_h^0} \}\) is a basis for \(V_h^0\). Let \(S_A\) be the stiffness matrix with \((S_A)_{j,\ell }=(A\nabla \xi _j, \nabla \xi _{\ell })\), S be the stiffness matrix with \((S)_{j,\ell }=(\nabla \xi _j, \nabla \xi _{\ell })\), \(M_n\) be the mass matrix with \((M_n)_{j,\ell }=(n\xi _j,\xi _{\ell })\) and M be the mass matrix with \((M)_{j,\ell }=( \xi _j,\xi _{\ell })\). The matrix form for (17), (18) and (19) is the generalized eigenvalue problem
where
and
In all the numerical examples, we use the MATLAB function 'eigs' to compute several eigenvalues of (20). Note that if the transmission eigenvalues \({{\varvec{k}}}\) are reconstructed from scattering data, they are usually not large in magnitude and are only approximations of the exact ones [24]. Furthermore, the multiplicities of the eigenvalues are not known in general. Hence, given an eigenvalue k, the Bayesian inversion stage only needs to check whether there exists an eigenvalue \(k_1\) of (20) close enough to k. In the rest of the paper, the covariance of the noise is set to \(\Gamma _{\text {noise}}=\frac{1}{100}I\).
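This closeness check can be sketched as a nearest-eigenvalue matching; the helper name below is ours, and in Python, for instance, scipy.sparse.linalg.eigs could play the role of MATLAB's eigs for the solve of (20):

```python
import numpy as np

def nearest_eigenvalues(k_given, k_computed):
    """For each given eigenvalue, return the closest real eigenvalue of the
    discrete problem (20); the residual then enters the likelihood."""
    k_given = np.asarray(k_given, float)
    k_computed = np.asarray(k_computed, float)
    idx = np.abs(k_computed[None, :] - k_given[:, None]).argmin(axis=1)
    return k_computed[idx]
```

The residual between \({{\varvec{k}}}\) and the matched eigenvalues is what the Gaussian likelihood penalizes in the MCMC-ISP algorithm.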
Example 1
Let D be a circle with radius 1/2. We consider an isotropic medium, i.e., \(A=I\), with a constant index of refraction \(n(x)=n_c\). The unknown to be reconstructed is \(n_c\). Four transmission eigenvalues are given: \({{\varvec{k}}} = (2.01, 2.61, 3.23, 3.80)\). These eigenvalues are computed from scattering data (see Fig. 2 in [24]). Note that the exact index of refraction is \(n_c=16\).
We first obtain some qualitative information of \(n_c\) using a deterministic method. In [11], the following Faber–Krahn type inequality is proved:
where \(\lambda _0(D)\) is the smallest Dirichlet eigenvalue. Using the Lagrange finite element method, we find that \(\lambda _0(D) \approx 23.21\) (see Chp. 3 of [26]). Consequently, the lower bound for \(\sup _D n(x)\) given by (21) is roughly 5.8.
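As a quick arithmetic check, evaluating the bound (21) with the values above gives:

```python
# Lower bound from the Faber-Krahn type inequality (21):
# lambda_0(D) / k_1^2 < sup_D n.
lambda_0 = 23.21   # computed smallest Dirichlet eigenvalue of the disk
k_1 = 2.01         # first measured transmission eigenvalue

lower_bound = lambda_0 / k_1**2
print(round(lower_bound, 2))   # -> 5.74
```

in agreement with the rough bound quoted above, which is then coded into the prior below.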
We consider the following cases:
- 1.1 use the first transmission eigenvalue to reconstruct n;
- 1.2 use the first two transmission eigenvalues to reconstruct n;
- 1.3 use all four transmission eigenvalues to reconstruct n.
For all the cases, we choose a Gamma prior \(n\sim \text {Gamma}(3,4)+5.8\) incorporating the information obtained by the Faber–Krahn type inequality (21). A total of 2000 samples are generated in the MCMC-ISP.
The results are shown in Fig. 1. For cases 1.1 and 1.2 (Fig. 1a, b), the samples aggregate in two regions (around 16 and 27), which indicates non-uniqueness. We use the local conditional means (LCM) to characterize the posterior density function. In Table 1, for case 1.1, two local conditional means are shown: \(n_{LCM}^1 = 15.90\) and \(n_{LCM}^2 = 27.04\). For each of them, we also compute the associated transmission eigenvalue, which is close to the given one. In other words, both indices of refraction have an eigenvalue close to 2.01, which reveals the non-unique nature of the inverse problem. The situation is the same for case 1.2.
When four transmission eigenvalues are used, the non-uniqueness disappears on the interval [8, 30]. As shown in Fig. 1c, all the samples accumulate around 16. The conditional mean \(n_{CM}=16.23\) and the corresponding transmission eigenvalues are shown in Table 1. It can be seen that there exist four eigenvalues (1.98, 2.61, 3.24, 3.78) for \(n_{CM}=16.23\), which are close to the given eigenvalues \({{\varvec{k}}}=(2.01, 2.61, 3.23, 3.80)\).
Example 2
Let D be the unit square. Again, the medium is isotropic, i.e., \(A=I\). The exact index of refraction is \(n(x) = 8+x_1-x_2\) for this example. Given the transmission eigenvalues \({{\varvec{k}}} = (2.82, 3.54, 4.13)\), we consider three cases:
- 2.1 assuming \(n=n_0\), reconstruct \(n_0\) (\(D=(-0.5, 0.5)\times (-0.5, 0.5)\));
- 2.2 assuming \(n=n_0 + n_1 x_1 + n_2 x_2\), reconstruct \(n_0, n_1, n_2\) (\(D=(-0.5, 0.5)\times (-0.5, 0.5)\));
- 2.3 assuming \(n=n_0 + n_1 x_1 + n_2 x_2\), reconstruct \(n_0, n_1, n_2\) (\(D=(4.5, 5.5)\times (4.5, 5.5)\)).
Applying (21) again, we obtain \(\sup _D n(x)>2.48\). A total of 6000 samples are drawn in the MCMC stage. We first consider case 2.1 using three transmission eigenvalues. A uniform prior \(n_{0}\sim {\mathcal {U}}[2.5, 14]\) is used. A large part of the samples concentrates around 8.1 (see Fig. 2a, b). We also notice that some samples are around 12.3. Similar to Example 1, we compute the local conditional means shown in Table 2. The exact transmission eigenvalues close to \({\varvec{k}}\) for the two LCMs are also shown in Table 2. The differences between these transmission eigenvalues and the given \({\varvec{k}}\) are small.
We move on to case 2.2 by coding the reconstruction of \(n_{0}\) from case 2.1 into the prior. n(x) is approximated by the linear function \(n=n_0 + n_1 x_1 + n_2 x_2\) with \(\pi (n_{0})={\mathcal {U}}[7.5, 8.5]\) and \(\pi (n_{1})=\pi (n_{2})={\mathcal {U}}[-1.5, 1.5]\). The reconstruction is \(n(x)=8.10-0.01x_{1}-0x_{2}\). The recovery of the constant term \(n_0\) is satisfactory. Figure 2c shows samples of \(n_{1},n_{2}\). The errors in the coefficients \(n_{1}\) and \(n_{2}\) are large. The same experiment is carried out for case 2.3. Again, the constant \(n_0\) is reconstructed well, but \(n_1\) and \(n_2\) are away from the exact values (see Table 2 and Fig. 2d). The inverse problem in cases 2.2 and 2.3 seems to be severely ill-posed. Nonetheless, as seen in Table 2, the exact transmission eigenvalues for the reconstructed n(x) are close to the given \({\varvec{k}}\). We also ran experiments using a few more eigenvalues for cases 2.2 and 2.3; they do not seem to improve the reconstruction and are thus not shown here.
Example 3
Let D be an L-shaped domain given by
The exact A and n are \(\text {diag}(\frac{1}{6},\frac{1}{8})\) and 1, respectively. Given \({{\varvec{k}}} = (4.31, 4.44, 4.95, 5.47)\), we consider the following cases:
- 3.1 assuming \(A=\text {diag}(a_0, a_1)\) and \(n=n_0\), reconstruct \(a_0, a_1\) and \(n_0\);
- 3.2 assuming \(A=\begin{pmatrix} a_0 &{}a_2\\ a_2 &{}a_1\end{pmatrix}\) and \(n=n_0\), reconstruct \(a_0, a_1, a_2\) and \(n_0\).
For case 3.1, we set \(a_0, a_1 \sim {\mathcal {U}}[0.05, 0.25]\) and \(n_0 \sim {\mathcal {U}}[0.1, 1.6]\). The results of the MCMC-ISP with 6000 samples are displayed in Table 3. We can see that A is approximated by \(\text {diag}(0.14, 0.14)\) and \(n_{CM}\) is 1.34.
Next, we consider case 3.2 using the estimators from case 3.1. As shown in Fig. 3a, the diagonal elements of A are small and oscillate between 0.1 and 0.2. Hence, we set \(a_0, a_1 \sim {\mathcal {U}}[0.1, 0.2]\), \(a_2 \sim {\mathcal {U}}[-0.05, 0.1]\), and \(n \sim {\mathcal {U}}[0.8, 1.6]\). Using 6000 samples to reconstruct A and n, we show the results in Table 3 and Fig. 3b. For both cases, the reconstructions are acceptable and the eigenvalues are close to the given ones.
Example 4
In this example, we use the same L-shaped domain D as in Example 3, but with \(A=\begin{pmatrix} \frac{1}{2} &{} \frac{1}{7}\\ \frac{1}{7}&{} \frac{1}{3} \end{pmatrix}\) and \(n=2\). With \({{\varvec{k}}} = (5.32 ,5.73 ,6.04 ,6.53)\), we consider the following cases:
- 4.1 assuming \(A=\text {diag}(a_0, a_1)\) and \(n=n_0\), reconstruct \(a_0, a_1\) and \(n_0\);
- 4.2 assuming \(A=\begin{pmatrix} a_0 &{}a_2\\ a_2 &{}a_1\end{pmatrix}\) and \(n=n_0\), reconstruct \(a_0, a_1, a_2\) and \(n_0\).
For case 4.1, we choose the priors \(a_0, a_1 \sim {\mathcal {U}}[0.05, 1.05]\) and \(n_0 \sim {\mathcal {U}}[1.5, 3.5]\). The conditional means of \(a_0\), \( a_1\) and n are 0.23, 0.22, and 2.79, respectively. Based on the results of case 4.1, we choose \(a_0\sim {\mathcal {U}}[0.2, 0.6]\), \(a_1 \sim {\mathcal {U}}[0.2, 0.4]\), \(a_2 \sim {\mathcal {U}}[-0.05, 0.2]\) and \(n\sim {\mathcal {U}}[1.8, 3.5]\) for case 4.2. The results are shown in Table 4 and Fig. 4. The approximation of n is greater than the exact value, and the approximations of the entries of A are smaller than the exact ones, indicating the severely ill-posed nature of the inverse problem. Nevertheless, the computed transmission eigenvalues are close to the given \({\varvec{k}}\) for both cases.
5 Conclusions and future work
The inverse spectral problem of transmission eigenvalues is studied. Given a few transmission eigenvalues, the Bayesian approach is employed to reconstruct the material properties of the inhomogeneous medium. An MCMC algorithm is used to explore the posterior density functions of the unknowns. Due to the fact that only partial data are available (a few eigenvalues), the inverse problem is severely ill-posed and can have non-unique solutions. To characterize the density functions, we resort to the recently proposed local estimators. Numerical examples indicate that the proposed Bayesian method is useful for the inverse spectral problem considered in this paper.
For Examples 1 and 2, the Faber–Krahn type inequality is first used to obtain some qualitative information of the index of refraction. Such information can be coded in the priors to improve the performance of the Bayesian inversion. The results in [18, 19] also indicate that the combination of deterministic and statistical methods can successfully treat challenging inverse problems with partial data.
The Bayesian inversion framework has the potential to provide useful information about the unknowns of inverse problems, especially when the measurement data are partial. The current paper provides only a preliminary study in this direction. We plan to apply the method proposed here to new inverse spectral problems arising recently in inverse scattering theory, e.g., the inverse modified transmission eigenvalue problem [7].
References
An, J., Shen, J.: A Fourier-spectral-element method for transmission eigenvalue problems. J. Sci. Comput. 57, 670–688 (2013)
Gong, B., Sun, J., Turner, T., Zheng, C.: Finite element approximation of transmission eigenvalues for anisotropic media. arXiv:2001.05340 (2020)
Cakoni, F., Colton, D., Haddar, H.: Inverse scattering theory and transmission eigenvalues. SIAM, Philadelphia (2016)
Cakoni, F., Colton, D., Monk, P., Sun, J.: The inverse electromagnetic scattering problem for anisotropic media. Inverse Problems 26(7), 074004 (2010)
Cakoni, F., Colton, D., Haddar, H.: On the determination of Dirichlet or transmission eigenvalues from far field data. C. R. Math. Acad. Sci. Paris 348(7–8), 379–383 (2010)
Cakoni, F., Haddar, H.: Transmission eigenvalues in inverse scattering theory. In: Inverse Problems and Applications: Inside Out II, pp. 529–580. Math. Sci. Res. Inst. Publ., 60, Cambridge Univ. Press, Cambridge (2013)
Cogar, S., Colton, D., Meng, S., Monk, P.: Modified transmission eigenvalues in inverse scattering theory. Inverse Problems 33(12), 125002 (2017)
Cossonnière, A., Haddar, H.: Surface integral formulation of the interior transmission problem. J. Integ. Equ. Appl. 25(3), 341–376 (2013)
Colton, D., Kress, R.: Inverse Acoustic and Electromagnetic Scattering Theory, 3rd edn. Appl. Math. Sci., vol. 93. Springer, New York (2013)
Colton, D., Monk, P., Sun, J.: Analytical and computational methods for transmission eigenvalues. Inverse Problems 26(4), 045011 (2010)
Colton, D., Päivärinta, L., Sylvester, J.: The interior transmission problem. Inverse Probl. Imag. 1(1), 13–28 (2007)
Harris, I., Cakoni, F., Sun, J.: Transmission eigenvalues and non-destructive testing of anisotropic magnetic materials with voids. Inverse Problems 30(3), 035016 (2014)
Ji, X., Sun, J.: A multi-level method for transmission eigenvalues of anisotropic media. J. Comput. Phys. 255, 422–435 (2013)
Kaipio, J., Somersalo, E.: Statistical and Computational Inverse Problems, Applied Mathematical Sciences, vol. 160. Springer, New York (2005)
Kaipio, J., Huttunen, T., Luostari, T., Lähivaara, T., Monk, P.: A Bayesian approach to improving the Born approximation for inverse scattering with high-contrast materials. Inverse Problems 35(8), 084001 (2019)
Kleefeld, A.: A numerical method to compute interior transmission eigenvalues. Inverse Problems 29(10), 104012 (2013)
Li, T., Huang, W., Lin, W., Liu, J.: On spectral analysis and a novel algorithm for transmission eigenvalue problems. J. Sci. Comput. 64, 83–108 (2015)
Li, Z., Deng, Z., Sun, J.: Extended-Sampling-Bayesian method for limited aperture inverse scattering problems. SIAM J. Imag. Sci. 13(1), 422–444 (2020)
Li, Z., Liu, Y., Sun, J., Xu, L.: Quality-Bayesian approach to inverse acoustic source problems with partial data. SIAM J. Sci. Comput. 43(2), A1062–A1080 (2021)
Liu, J., Liu, Y., Sun, J.: An inverse medium problem using Stekloff eigenvalues and a Bayesian approach. Inverse Problems 35(9), 094004 (2019)
Mora, D., Velásquez, I.: A virtual element method for the transmission eigenvalue problem. Math. Models Methods Appl. Sci. 28(14), 2803–2831 (2018)
Stuart, A.M.: Inverse problems: a Bayesian perspective. Acta Numer. 19, 451–559 (2010)
Sun, J.: Iterative methods for transmission eigenvalues. SIAM J. Numer. Anal. 49(5), 1860–1874 (2011)
Sun, J.: Estimation of transmission eigenvalues and the index of refraction from Cauchy data. Inverse Problems 27(1), 015009 (2011)
Sun, J.: Local estimators and Bayesian inverse problems with non-unique solutions. arXiv:2105.09141 (2021)
Sun, J., Zhou, A.: Finite element methods for eigenvalue problems. CRC Press, Taylor & Francis Group, Boca Raton, London, New York (2016)
Xie, H., Wu, X.: A multilevel correction method for interior transmission eigenvalue problem. J. Sci. Comput. 72(2), 586–604 (2017)
Yang, Y., Bi, H., Li, H., Han, J.: Mixed methods for the Helmholtz transmission eigenvalues. SIAM J. Sci. Comput. 38(3), A1383–A1403 (2016)
Ethics declarations
Conflict of Interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Liu, Y., Sun, J. Bayesian inversion for an inverse spectral problem of transmission eigenvalues. Res Math Sci 8, 53 (2021). https://doi.org/10.1007/s40687-021-00288-x