1 Introduction

In this paper, we consider the numerical approximation of the zero-index transmission eigenvalues associated with the scalar scattering problem with a conductive boundary. In general, transmission eigenvalues can be seen as the wave numbers at which the associated far-field operator fails to be injective. The zero-index transmission eigenvalue problem is derived by mathematically embedding the scattering object in a background whose refractive index equals zero in the interior of the scatterer. It can be shown that the resulting far-field operator fails to be injective with dense range at the wave numbers corresponding to these eigenvalues (see [7] for the case when the conductivity is zero). The zero-index transmission eigenvalue problem has two main advantages over the classical transmission eigenvalue problem. First, it avoids the assumption that the contrast must be either positive or negative definite in the scatterer. Second, it is a linear eigenvalue problem. The zero-index eigenvalue problem with a conductive boundary condition was introduced in [22] and was motivated by the work in [9, 23] on the classical transmission eigenvalue problem with a conductive boundary and in [7] on the scattering problem without a conductive boundary. We are also interested in the inverse spectral problem of estimating the refractive index with little a priori knowledge of the boundary conductivity parameter. There have been several manuscripts on the computation of transmission eigenvalues and their application to parameter identification, such as [11, 13, 21, 30] to name a few. For the classical transmission eigenvalue problem we refer to [2, 3] for the application of spectral-Galerkin methods to compute the eigenvalues, and to [17, 19] for some of the previous work on computing the classical transmission eigenvalues via the finite element method. Recently, the method of fundamental solutions for computing the classical transmission eigenvalues was studied and implemented in [25]. Due to the monotonicity property of the transmission eigenvalues, one can estimate the refractive index from knowledge of the eigenvalues (see for example [15, 16]). The main contributions of this paper are the convergence analysis, with error estimates, of the spectral-Galerkin method with the Dirichlet eigenfunctions taken as the basis, and the estimation of the refractive index from the zero-index transmission eigenvalues.

The zero-index transmission eigenvalue problem can be written as a fourth-order eigenvalue problem that depends on the refractive index and conductivity. We now derive the fourth-order formulation of the eigenvalue problem. To this end, we define the zero-index transmission eigenvalue problem from the scalar isotropic scattering problem as the values \(k \in {\mathbb {C}}\setminus \{ 0 \}\) such that there exists a nontrivial pair \((u,{u_0} ) \in H^1(D) \times H^1(D)\) satisfying the system

$$\begin{aligned} \Delta u +k^2 n u&=0 \quad \text {and} \quad \Delta {u_0} =0 \quad&\text { in } \, D \end{aligned}$$
(1)
$$\begin{aligned} u-{u_0}&=0 \quad \text {and} \quad {\partial _\nu {u}} = {\partial _\nu {u_0}}+ \eta u_0 \quad&\text { on } \partial D. \end{aligned}$$
(2)

Here, we assume that \(D \subset {\mathbb {R}}^d\) (for \(d=2,3\)) is a simply connected open set whose boundary \(\partial D\) is either polygonal with no reentrant corners or of class \({\mathscr {C}}^2\), and \(\nu \) denotes the outward unit normal vector. The eigenvalue k corresponds to the wave number for the associated scattering problem. The refractive index \(n \in L^{\infty } (D)\) and the conductivity \(\eta \in L^{\infty }(\partial D)\) are assumed to be uniformly positive functions, i.e., there exist positive constants such that

$$\begin{aligned} n_{\text {min}} \le n(x) \le n_{\text {max}} \quad \text { a.e.} \, \, \,\,x \in \overline{D} \quad \text { and } \quad \eta _{\text {min}} \le \eta (x)\le \eta _{\text {max}}\quad \text { a.e.} \, \, \,\, x \in \partial D. \end{aligned}$$

Now, we define the difference of the eigenfunctions \(w=u-{u_0}\). It is clear that w satisfies the equation

$$\begin{aligned} \Delta w +k^2 n w= -k^2 n {u_0} \quad \text { in } \, D. \end{aligned}$$

Due to standard elliptic regularity results ([18] page 334 for a \({\mathscr {C}}^2\) boundary) we have that \(w \in H^2(D) \cap H^1_0(D)\). Now, by appealing to the fact that \({u_0}\) is harmonic in D and to the boundary condition (2), we can conclude that w satisfies the homogeneous boundary value problem

$$\begin{aligned} \Delta \frac{1}{n} \Delta w = -k^2 \Delta w \quad \text { in } \, D \quad \text {and} \quad \frac{k^2}{\eta } {\partial _\nu w}= - \frac{1}{n}\Delta w \quad \text { on } \, \partial D. \end{aligned}$$
(3)

In [22] it is shown that \(k \in {\mathbb {C}}\setminus \{ 0 \}\) is a zero-index transmission eigenvalue if and only if there is a nontrivial \(w \in H^2(D) \cap H^1_0(D)\) satisfying (3). By studying the variational formulation of (3) it is shown that there exist infinitely many real zero-index transmission eigenvalues. This eigenvalue problem is derived by mathematically embedding the scatterer D in a background where the refractive index is equal to zero in the interior of the object. This is done by studying the difference of the far-field operator for the standard scattering problem and the augmented far-field operator for the scattering problem where the refractive index is equal to zero in D. In general, it is known that the transmission eigenvalues can be determined from the scattering data. In [24] it is shown that the classical transmission eigenvalues can be determined from the far-field data, while in [23] it is shown that the classical transmission eigenvalues with a conductive boundary can also be recovered from far-field data. This implies that these eigenvalues can be used as a target signature to determine the material properties.

The rest of the paper is organized as follows. In the next section we study the solution operator corresponding to the zero-index transmission eigenvalue problem with a conductive boundary (3). We then consider the approximation of the eigenvalues via a Dirichlet spectral-Galerkin method, where the approximation space is taken to be the span of finitely many Dirichlet eigenfunctions for the Laplacian. This method of representing the solution of a PDE by the eigenfunctions of an auxiliary eigenvalue problem is studied for physical applications in the manuscript [1]. We study the approximation properties of this space, prove convergence of the Dirichlet spectral-Galerkin method for computing the zero-index transmission eigenvalues, and provide error estimates. We then provide some numerical examples in two dimensions to show that the proposed spectral method is effective for computing the eigenvalues. Once we have a method to approximate the eigenvalues, we turn our attention to estimating the refractive index for either large or small conductivity parameters.

2 The Zero-Index Transmission Eigenvalues

This section focuses on the variational formulation of the zero-index transmission eigenvalue problem (3). In particular, we study the associated solution operator. The analysis of the solution operator will be used in the convergence analysis of the spectral method. We define the variational space for (3) as \(H^2(D) \cap H^1_0(D)\) where

$$\begin{aligned} H^2(D) =\big \{ \varphi \in L^2(D) \, : \, \partial _{x_i} \varphi \quad \text {and} \quad \partial _{x_i x_j} \varphi \in L^2(D) \, \text { for } \, i,j =1, \ldots , d \big \} \end{aligned}$$

and

$$\begin{aligned} H^1_0(D) =\big \{ \varphi \in L^2(D) \, : \, \partial _{x_i} \varphi \in L^2(D) \,\, \text { for } \,\, i =1, \ldots , d \,\, \text { with } \,\, \varphi |_{\partial D}=0 \big \}. \end{aligned}$$

From [22] we have that the equivalent variational form for the zero-index transmission eigenvalue problem (3) is given by the values \(k \in {\mathbb {C}}\) such that there is a nontrivial \(w \in H^2(D) \cap H^1_0(D)\) satisfying

$$\begin{aligned} a(w,\varphi )=k^2b(w,\varphi ) \quad \text { for all } \,\,\, \varphi \in H^2(D) \cap H^1_0(D). \end{aligned}$$
(4)

We will assume that the eigenfunctions are normalized with \(\Vert w \Vert _{L^2(D)}=1\). The bounded sesquilinear forms on \(H^2(D) \cap H^1_0(D)\) are defined by

$$\begin{aligned} a(w, \varphi )= \int \limits _{D} \frac{1}{n} \Delta w \, \Delta {\overline{\varphi }}\, \text {d}x \quad \text {and } \quad b( w, \varphi )=\int \limits _{D} \nabla w \cdot \nabla {\overline{\varphi }} \, \text {d}x - \int \limits _{\partial D} \frac{1}{\eta } {\partial _\nu w} \, {\partial _\nu {\overline{\varphi }}} \, \text {d}s. \end{aligned}$$
(5)

Recall that we assume there exist positive constants such that

$$\begin{aligned} n_{\text {min}} \le n(x) \le n_{\text {max}} \quad \text { a.e.} \, \, \,\,x \in \overline{D} \quad \text { and } \quad \eta _{\text {min}} \le \eta (x)\le \eta _{\text {max}}\quad \text { a.e.} \, \, \,\, x \in \partial D. \end{aligned}$$

In this section we study the variational formulation of the zero-index transmission eigenvalue problem. Even though this is a linear eigenvalue problem for \(k^2\), notice that the sesquilinear form \(b(\cdot \, , \cdot )\) is not sign definite due to the opposing signs in its definition. Hence, unlike for standard elliptic eigenvalue problems, it does not define a semi-norm on the variational space.

The well-posedness estimate for the Poisson problem with zero trace, along with the \(H^2\) elliptic regularity estimate, gives that for any \(H^2(D)\) function with zero trace the norm \(\Vert \Delta \cdot \Vert _{L^2(D)}\) is equivalent to \(\Vert \cdot \Vert _{H^2(D)}\). Therefore, we let

$$\begin{aligned} X(D)=H^2(D) \cap H^1_0(D) \quad \text { such that } \quad \Vert \cdot \Vert _{X(D)} = \Vert \Delta \cdot \Vert _{L^2(D)}. \end{aligned}$$

Clearly, X(D) is a Hilbert space with the associated inner-product. It follows that \(a(\cdot \, , \cdot )\) is a coercive and Hermitian sesquilinear form on X(D), which implies that \(k=0\) is not a zero-index transmission eigenvalue. Now, by the Lax-Milgram Lemma we can define the solution operator \(T: X(D) \rightarrow X(D)\) as

$$\begin{aligned} a\big (Tf,\varphi \big )=b(f,\varphi ) \quad \text { for all } \,\,\, f, \varphi \in X(D). \end{aligned}$$
(6)
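Indeed, the coercivity of \(a(\cdot \, , \cdot )\) claimed above follows in one line from the upper bound on the refractive index:

$$\begin{aligned} a(w,w)= \int \limits _{D} \frac{1}{n} |\Delta w |^2 \, \text {d}x \ge \frac{1}{n_{\text {max}}} \Vert \Delta w \Vert ^2_{L^2(D)} = \frac{1}{n_{\text {max}}} \Vert w \Vert ^2_{X(D)} \quad \text { for all } \,\,\, w \in X(D). \end{aligned}$$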

From the definition of T in (6) we have the following result.

Theorem 2.1

Let the operator \(T: X(D) \rightarrow X(D)\) be as defined by (6). Then T is an \(a(\cdot \, , \cdot )\) self-adjoint compact operator and satisfies the estimate

$$\begin{aligned} \Vert Tf \Vert _{X(D)} \le C \Big ( \Vert f \Vert _{H^1(D)} + \Vert {\partial _\nu f} \Vert _{L^2(\partial D)} \Big ). \end{aligned}$$

Proof

Since \(a(\cdot \, , \cdot )\) is a coercive and Hermitian sesquilinear form on X(D) it is an equivalent inner-product on X(D). Therefore, we have that for all \(f, \varphi \in X(D)\)

$$\begin{aligned} a\big (Tf,\varphi \big )=b(f,\varphi )=\overline{b(\varphi ,f)}= \overline{a\big (T\varphi ,f\big )}=a\big (f,T\varphi \big ) \end{aligned}$$

since n and \(\eta \) are real-valued. This proves that T is \(a(\cdot \, , \cdot )\) self-adjoint on X(D). By the compact embedding of \(H^{1/2}(\partial D)\) into \(L^2(\partial D)\) and of \(H^{2}(D)\) into \(H^1( D)\), the compactness of T will follow immediately from the estimate. To prove the estimate notice that by (5) and (6) we can conclude that

$$\begin{aligned} \frac{1}{n_{\text {max}}} \Vert Tf \Vert ^2_{X(D)}&\le a\big (Tf,Tf\big )=b\big (f,Tf\big ) \\&\le \Big ( \Vert f \Vert _{H^1(D)} \Vert Tf \Vert _{H^1(D)} + \frac{1}{\eta _{\text {min}}} \Vert {\partial _\nu f} \Vert _{L^2(\partial D)} \Vert {\partial _\nu Tf} \Vert _{L^2(\partial D)} \Big ) \end{aligned}$$

where we have used the bounds on the coefficients. By appealing to the Trace Theorem and the continuous embedding of \(H^{2}(D)\) into \(H^1( D)\) we further have that

$$\begin{aligned} \frac{1}{n_{\text {max}}} \Vert Tf \Vert ^2_{X(D)} \le C \Big ( \Vert f \Vert _{H^1(D)} + \Vert {\partial _\nu f} \Vert _{L^2(\partial D)} \Big )\Vert Tf \Vert _{X(D)} \end{aligned}$$

proving the claim. \(\square \)

Notice that since T is a compact self-adjoint operator on the Hilbert space X(D) equipped with the \(a(\cdot \, , \cdot )\) inner-product, the Hilbert–Schmidt Theorem implies that there exist infinitely many eigenvalues (counting multiplicity) \(\mu \in {\mathbb {R}}\) for the operator T such that

$$\begin{aligned} Tw = \mu w \quad \text { which implies that } \quad \mu =k^{-2}. \end{aligned}$$

Note that since T is not sign definite there can be complex transmission eigenvalues k that are purely imaginary. In [22] it has been shown that there are infinitely many zero-index transmission eigenvalues \(k \in {\mathbb {R}}\). Also, note that again by the Hilbert–Schmidt Theorem there are infinitely many eigenfunctions w that form an \(a(\cdot \, , \cdot )\) orthonormal basis of X(D).

3 The Dirichlet Spectral-Galerkin Approximation

With the results given in the previous section we can prove the convergence of, and error estimates for, the Dirichlet spectral-Galerkin approximation of the zero-index transmission eigenvalue problem. We will use the approximation space that is the span of finitely many Dirichlet eigenfunctions for the Laplacian in the domain D. To prove the convergence and error estimates we must show the approximation properties of this space and use the variational formulation (4) to show the convergence of the eigenvalues and eigenfunctions. Even though we focus on the approximation space of Dirichlet eigenfunctions, a similar analysis to that in Sect. 3.2 will work for any conforming approximation space, such as the Legendre-Galerkin approximation used for the fourth-order formulation of the classical transmission eigenvalue problem in [3].

3.1 Approximation Space

Here we will define the approximation space of Dirichlet eigenfunctions and study the approximation properties of the space. To begin, we let \(\phi _j \in H^1_0(D)\) be the jth Dirichlet eigenfunction for the Laplacian in the domain D, with corresponding eigenvalue \(\lambda _j \in {\mathbb {R}}_{+}\). The Dirichlet eigenpairs satisfy

$$\begin{aligned} - \Delta \phi _j = \lambda _j \phi _j \,\, \text { in } \,\, D \quad \text { where } \quad \Vert \phi _j\Vert _{L^2(D)}=1. \end{aligned}$$
(7)

By again appealing to elliptic regularity we have that \(\phi _j \in X(D)\). From [22] we have the following result.

Lemma 3.1

Let \(\phi _j\) satisfy (7). Then span\(\{ \phi _j \}_{j=1}^\infty \) is dense in X(D).

The eigenvalues are assumed to be arranged in non-decreasing order such that \(0 < \lambda _j \le \lambda _{j+1}\) for all \(j \in {\mathbb {N}}\). It is well known that the eigenfunctions \(\{ \phi _j \}_{j=1}^\infty \) form an orthonormal basis of \(L^2(D)\) which implies that for all \(f \in L^2(D)\)

$$\begin{aligned} f = \sum \limits _{j=1}^{\infty } (f,\phi _j)_{L^2(D)} \phi _j \end{aligned}$$
(8)

as a convergent series in the \(L^2(D)\) norm. This series representation will be used to show the approximation rates for this set of basis functions. To do so, we will show the convergence of this series in the X(D) norm.

Theorem 3.1

For all \(f \in X(D)\) we have that (8) is convergent in the X(D) norm. Moreover,

$$\begin{aligned} \Delta f = \sum \limits _{j=1}^{\infty } -\lambda _j (f,\phi _j)_{L^2(D)} \phi _j \end{aligned}$$

and is an \(L^2(D)\) norm convergent series which gives \(\Vert f\Vert ^2_{X(D)} =\sum \limits _{j=1}^{\infty } \lambda _j^{2} \big |(f,\phi _j)_{L^2(D)}\big |^2\).

Proof

The result follows by Lemma 3.1 which gives that \(\psi _j=\phi _j / \lambda _j\) is an orthonormal basis for X(D) by Eq. (7) such that

$$\begin{aligned} f = \sum \limits _{j=1}^{\infty } (f,\psi _j)_{X(D)} \psi _j \end{aligned}$$

as a X(D) convergent series. Then appealing to Green’s 2nd Theorem we have that

$$\begin{aligned} (f,\psi _j)_{X(D)} = -(\Delta f,\phi _j)_{L^2(D)}= - ( f,\Delta \phi _j)_{L^2(D)} = \lambda _j ( f, \phi _j)_{L^2(D)}. \end{aligned}$$

Therefore, we obtain that

$$\begin{aligned} f = \sum \limits _{j=1}^{\infty } (f,\phi _j)_{L^2(D)} \phi _j \end{aligned}$$

is a convergent series in the X(D) norm. Then applying the Laplacian term by term to the series representation proves the claim. \(\square \)

By the series representation (8) for any \(f\in L^2(D)\) along with (7) we can define the powers of the Laplacian \(\Delta ^m\) for \(m \in {\mathbb {N}}\) by

$$\begin{aligned} \Delta ^m f = \sum \limits _{j=1}^{\infty } (-\lambda _j)^m (f,\phi _j)_{L^2(D)} \phi _j. \end{aligned}$$
(9)

Note that this is often done to define (fractional) powers of an elliptic operator (see for example [26]). We will denote the domain of the mth power of the Laplacian in \(L^2(D)\) as

$$\begin{aligned} {\mathscr {D}} \big ( \Delta ^m \big ) :=\big \{ f \in L^2(D) \, : \, \Delta ^m f \,\, \text {as defined in } (9) \text { is an } L^2(D) \text { convergent series } \big \}. \end{aligned}$$

Therefore, we have that \({\mathscr {D}} \big ( \Delta ^m \big )\) is a Hilbert space with the associated norm

$$\begin{aligned} \Vert f \Vert ^2_{{\mathscr {D}} ( \Delta ^m )} = \sum \limits _{j=1}^{\infty } \lambda _j^{2m} \big |(f,\phi _j)_{L^2(D)}\big |^2 <\infty . \end{aligned}$$
(10)

By Theorem 3.1 we have that \(X(D) \subseteq {\mathscr {D}} ( \Delta )\).
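As a simple illustration of the norm (10), for a single Dirichlet eigenfunction the only nonzero coefficient is the jth one, so that

$$\begin{aligned} \Vert \phi _j \Vert _{{\mathscr {D}} ( \Delta ^m )} = \lambda _j^{m} \quad \text { for every } \quad m \in {\mathbb {N}}. \end{aligned}$$

In particular, every finite linear combination of the \(\phi _j\) belongs to \({\mathscr {D}} \big ( \Delta ^m \big )\) for all \(m \in {\mathbb {N}}\).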

For our spectral approximation method we will take the conforming computational subspace of X(D) to be given by

$$\begin{aligned} X_N (D) = \text {span}\big \{ \phi _j\big \}_{j=1}^N \quad \text { for some fixed } \quad N \in {\mathbb {N}}. \end{aligned}$$

A key ingredient in determining the approximation rate for this set of basis functions is Weyl’s law for the Dirichlet eigenvalues (see [5]). Weyl’s law states that there exist two constants \(c_1 , c_2 >0\) independent of j such that

$$\begin{aligned} c_1 j^{2/d} \le \lambda _j \le c_2 j^{2/d}\quad \text { for all } \quad j \in {\mathbb {N}}\end{aligned}$$

where again the dimension \(d=2,3\). We now consider the \(L^2(D)\) projection onto the approximation space \(X_N (D)\), denoted \(\Pi _N :X(D) \rightarrow X_N (D)\), which is given by

$$\begin{aligned} \Pi _N f = \sum \limits _{j=1}^{N} (f,\phi _j)_{L^2(D)} \phi _j \quad \text {for some fixed} \quad N \in {\mathbb {N}}. \end{aligned}$$

It is clear that we have the point-wise convergence

$$\begin{aligned} \big \Vert (I-\Pi _N)f \big \Vert ^2_{X(D)} = \sum \limits _{j=N+1}^{\infty } \lambda _j^{2} \big |(f,\phi _j)_{L^2(D)}\big |^2 \rightarrow 0 \quad \text {as} \quad N \rightarrow \infty \end{aligned}$$

for any \(f \in X(D)\) by Theorem 3.1. We now give a convergence estimate for any \(f \in X(D) \cap {\mathscr {D}} \big ( \Delta ^m \big )\) where again \({\mathscr {D}} \big ( \Delta ^m \big )\) is the subspace of \(L^2(D)\) functions such that Eq. (10) is satisfied.

Theorem 3.2

Assume that \(f \in X(D) \cap {\mathscr {D}} \big ( \Delta ^m \big )\) for some \(m\ge 2\). Then

$$\begin{aligned} \big \Vert (I-\Pi _N) f \big \Vert _{X(D)} \le \frac{C}{(N+1)^{2(m-1)/d}} \Vert f \Vert _{{\mathscr {D}} ( \Delta ^m )} . \end{aligned}$$

Proof

To prove the claim we estimate

$$\begin{aligned} \big \Vert (I-\Pi _N) f \big \Vert ^2_{X(D)} = \sum \limits _{j=N+1}^{\infty } \lambda _j^{2} \big |(f,\phi _j)_{L^2(D)}\big |^2 \le \frac{1}{{\lambda _{N+1}^{2(m-1)}}}\sum \limits _{j=N+1}^{\infty } \lambda _j^{2m} \big |(f,\phi _j)_{L^2(D)}\big |^2 \end{aligned}$$

where we have used the series representation in Theorem 3.1. Now appealing to Weyl’s law and by (10) we have that

$$\begin{aligned}&\big \Vert (I-\Pi _N)f \big \Vert ^2_{X(D)} \\&\le \frac{C}{(N+1)^{4(m-1)/d}}\sum \limits _{j=1}^{\infty } \lambda _j^{2m} \big |(f,\phi _j)_{L^2(D)}\big |^2 = \frac{C}{(N+1)^{4(m-1)/d}} \Vert f \Vert ^2_{{\mathscr {D}} ( \Delta ^m )} \end{aligned}$$

proving the estimate. \(\square \)
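The two-sided growth \(c_1 j^{2/d} \le \lambda _j \le c_2 j^{2/d}\) from Weyl’s law, which drives the rate above, is easy to check numerically. The following MATLAB sketch (ours, not part of the paper's computations; variable names are illustrative) does so on the unit square, where the Dirichlet eigenvalues are known to be \(\pi ^2(p^2+q^2)\).

```matlab
% Numerical check of Weyl's law on the unit square (d = 2), where the
% Dirichlet eigenvalues are pi^2*(p^2 + q^2). The ratio lambda_j / j^{2/d}
% should remain between two positive constants (it tends to 4*pi here).
[p, q] = meshgrid(1:60, 1:60);
lambda = sort(pi^2*(p(:).^2 + q(:).^2));
lambda = lambda(1:1000);              % the first 1000 eigenvalues are fully resolved
j      = (1:numel(lambda))';
ratio  = lambda ./ j;                 % j^{2/d} = j for d = 2
fprintf('min(ratio) = %.3f, max(ratio) = %.3f\n', min(ratio), max(ratio));
```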

3.2 Convergence and Error Estimates

In this section, we will establish the Dirichlet spectral-Galerkin method for the zero-index transmission eigenvalue problem. In our analysis we will use the approximation space \(X_N(D)\) defined as the span of the first N Dirichlet eigenfunctions for the Laplacian in D. Similar results can be established by using other conforming approximation subspaces, such as a finite element approximation space of piecewise polynomials. Using the convergence analysis for the approximation space in the previous section, we are now ready to prove the convergence of the spectral approximation.

To begin, we will first consider the approximation for the operator T defined in (6) by the \(L^2(D)\) projection of T onto the space \(X_N (D)\). We will show that this approximation converges in the operator norm. This fact will be used to prove convergence and error estimates for the approximation of the eigenvalues.

Theorem 3.3

Let the operator \(T: X(D) \rightarrow X(D)\) be as defined by (6) and \(\Pi _N :X(D) \rightarrow X_N (D)\) be the \(L^2(D)\) projection onto \(X_N (D)\). Then \(\Pi _N T \rightarrow T\) as \(N \rightarrow \infty \) in the operator norm.

Proof

Since the operator T defined in Eq. (6) is compact by Theorem 2.1 we have that the point-wise convergence of \(\Pi _N\) to the Identity operator on X(D) implies that

$$\begin{aligned} \big \Vert (I-\Pi _N)T\big \Vert _{X(D) \mapsto X(D)}\rightarrow 0 \quad \text {as} \quad N \rightarrow \infty . \end{aligned}$$

Proving the claim. \(\square \)

We now define the spectral approximation of the zero-index transmission eigenvalue problem as follows: find the values \(k_N \in {\mathbb {C}}\) such that there is a nontrivial \(w_N \in X_N(D)\) satisfying

$$\begin{aligned} a(w_N,\varphi _N)=k_N^2b(w_N,\varphi _N) \quad \text { for all } \,\,\, \varphi _N \in X_N(D) \end{aligned}$$
(11)

where the sesquilinear forms \(a( \cdot \, , \, \cdot )\) and \(b( \cdot \, , \, \cdot )\) are defined by (5). Here we again assume that the eigenfunctions are normalized such that \(\Vert w_N \Vert _{L^2(D)}=1\). Therefore, just as in the continuous case we can define the spectral approximation of the solution operator as operator \(T_N: X(D) \rightarrow X_N(D)\) such that for any \(f \in X(D)\)

$$\begin{aligned} a\big (T_Nf,\varphi _N\big )=b(f,\varphi _N) \quad \text { for all } \,\,\, \varphi _N \in X_N(D). \end{aligned}$$
(12)

Since the dimension of the range of \(T_N\) is finite we have that it is compact. It is also clear that \(T_N\) restricted to \(X_N(D)\) is an \(a( \cdot \, , \, \cdot )\) self-adjoint operator on \(X_N(D)\). This gives that \(T_N\) has N eigenvalues counting multiplicity. Arguing similarly as in Sect. 2 we have that the eigenpair \((k_N , w_N) \in {\mathbb {C}}\times X_N(D)\) satisfying (11) is the eigenpair for the spectral approximation of the solution operator such that

$$\begin{aligned} T_N w_N = k^{-2}_N w_N. \end{aligned}$$

In order to obtain the convergence as well as an error estimate we will study the convergence of the spectral approximation of the solution operator as \(N \rightarrow \infty \). Therefore, by appealing to the Galerkin orthogonality

$$\begin{aligned} a\big (T f - T_Nf,\varphi _N\big ) = 0 \quad \text { for all } \,\,\, \varphi _N \in X_N(D) \end{aligned}$$

and Cea’s Lemma ([6] page 372) we have that

$$\begin{aligned} \big \Vert Tf - T_N f \big \Vert _{X(D)} \le C \big \Vert Tf - v_N \big \Vert _{X(D)} \quad \text { for any } \,\,\, v_N \in X_N(D) \end{aligned}$$

and for all \(f \in X(D)\). From the above estimate we conclude that

$$\begin{aligned} \big \Vert Tf - T_N f \big \Vert _{X(D)} \le C \big \Vert (I-\Pi _N)T f \big \Vert _{X(D)} \end{aligned}$$

where again \(\Pi _N\) is the \(L^2(D)\) projection onto the approximation space \(X_N(D)\). This analysis gives the following result.

Theorem 3.4

Let \((k , w)\) and \((k_N , w_N)\) be the jth eigenpairs for (4) and (11), respectively. Then as \(N \rightarrow \infty \) we have that \(k_N \rightarrow k\) and \(w_N \rightarrow w\) in X(D).

Proof

This result follows from the fact that

$$\begin{aligned} \big \Vert T -T_N \big \Vert _{X(D) \mapsto X(D)} \le C \big \Vert (I-\Pi _N)T \big \Vert _{X(D) \mapsto X(D)} \rightarrow 0 \quad \text { as } \quad N \rightarrow \infty \end{aligned}$$

and then appealing to the results in [27]. \(\square \)

Now that we have established the convergence of the spectral approximation, our next step is to determine the convergence rate. To this end, we will argue similarly to Theorem 3.2 along with using the variational formulations (4) and (11). Simple calculations give that for \((k , w)\) and \((k_N , w_N)\) being the jth eigenpairs for (4) and (11), we have

$$\begin{aligned} a\big (w_N-w , w_N -w\big ) -k^2 b\big (w_N-w , w_N -w\big ) = \big ( k^2_N -k^2 \big )b\big (w_N , w_N\big ) \end{aligned}$$
(13)

for any N. The equality (13) will be used to establish the convergence rate for the eigenvalues. So we need to establish that the sequence \(\big | b\big (w_N , w_N\big ) \big | \) is bounded below. Therefore, we present the following result.

Theorem 3.5

Let \((k_N , w_N)\) be the jth eigenpair for (11). Then there is a constant \(\beta >0\) independent of N such that

$$\begin{aligned} \inf \limits _{N \in {\mathbb {N}}} \Big \{ \big | b\big (w_N , w_N\big ) \big | \Big \} \ge \beta . \end{aligned}$$

Proof

To prove the claim we proceed by way of contradiction. To this end, assume that no such \(\beta \) exists; then we can extract a subsequence, still denoted by N, such that

$$\begin{aligned} \big | b\big (w_N , w_N\big ) \big | \rightarrow 0 \quad \text { as } \quad N \rightarrow \infty . \end{aligned}$$

By the continuity of the sesquilinear form \(b( \cdot \, , \, \cdot )\) and Theorem 3.4 we obtain

$$\begin{aligned} \big | b\big (w_N , w_N\big ) \big | \rightarrow \big | b\big (w , w\big ) \big | \quad \text { as } \quad N \rightarrow \infty \end{aligned}$$

where w is an eigenfunction corresponding to (4). The variational formulation implies that \(a(w,w)=0\) which contradicts the fact that \(\Vert w \Vert _{L^2(D)}=1\) since \(a( \cdot \, , \, \cdot )\) defines an inner-product on X(D). Proving the claim. \(\square \)

From Theorem 3.5 we can conclude that for \((k , w)\) and \((k_N , w_N)\) being the jth eigenpairs for (4) and (11), respectively, there is a \(C>0\) independent of N such that

$$\begin{aligned} \big | k_N^2-k^2 \big | \le C \big \Vert w_N -w \big \Vert ^2_{X(D)}. \end{aligned}$$

Note that we have used (13) and the boundedness of sesquilinear forms \(a( \cdot \, , \, \cdot )\) and \(b( \cdot \, , \, \cdot )\). In order to obtain the error estimate for the spectral approximation of the zero-index transmission eigenvalues we must estimate the error in the Galerkin approximation in the approximation space \(X_N(D)\) on the eigenspace corresponding to k. To this end, we will denote the eigenspace corresponding to the zero-index transmission eigenvalue k by E(k). It is clear that \(E(k) \subset X(D)\) is finite dimensional and any \(u \in E(k)\) satisfies \(Tu = k^{-2} u\). With this we can now prove the error estimate.

Theorem 3.6

Let k and \(k_N\) be the jth eigenvalues for (4) and (11) respectively. If the corresponding eigenspace \(E(k) \subset {\mathscr {D}} \big ( \Delta ^m \big )\) for \(m\in {\mathbb {N}}\) then there is a \(C>0\) independent of N such that

$$\begin{aligned} \big | k_N^2-k^2 \big | \le \frac{C}{ (N+1)^{4(m-1)/d}}\sup \limits _{u \in E(k) \, ,\, \Vert u \Vert _{X(D)} =1} \big \Vert (I -\Pi _N)u \big \Vert ^2_{{\mathscr {D}} ( \Delta ^m )} \end{aligned}$$

where \(\Pi _N :X(D) \rightarrow X_N (D)\) is the \(L^2(D)\) projection onto \(X_N (D)\).

Proof

Notice that according to the analysis in [8], we have that

$$\begin{aligned} \Vert w_N -w \Vert ^2_{X(D)} \le C \big \Vert (T -T_N) \big \Vert ^2_{E(k) \mapsto X(D)}. \end{aligned}$$

Therefore, by the definition of the norm \(\big \Vert \cdot \big \Vert _{E(k) \mapsto X(D)}\) we can estimate

$$\begin{aligned} \big \Vert (T -T_N) \big \Vert ^2_{E(k) \mapsto X(D)}&=\sup \limits _{u \in E(k) \, ,\, \Vert u \Vert _{X(D)} =1} \big \Vert (T -T_N)u \big \Vert ^2_{X(D)}\\&\le C \sup \limits _{u \in E(k) \, ,\, \Vert u \Vert _{X(D)} =1} \big \Vert (I -\Pi _N)Tu \big \Vert ^2_{X(D)}. \end{aligned}$$

We further have that

$$\begin{aligned} \big \Vert (T -T_N) \big \Vert ^2_{E(k) \mapsto X(D)}&\le C \sup \limits _{u \in E(k) \, ,\, \Vert u \Vert _{X(D)} =1} \big \Vert (I -\Pi _N)Tu \big \Vert ^2_{X(D)}\\&=C \sup \limits _{u \in E(k) \, ,\, \Vert u \Vert _{X(D)} =1} \sum \limits _{j=N+1}^{\infty } \lambda _j^{2} \big |(Tu,\phi _j)_{L^2(D)}\big |^2\\&=C|k|^{-4}\sup \limits _{u \in E(k) \, ,\, \Vert u \Vert _{X(D)} =1} \sum \limits _{j=N+1}^{\infty } \lambda _j^{2} \big |(u,\phi _j)_{L^2(D)}\big |^2. \end{aligned}$$

Here we have used the fact that \(u \in E(k)\). Now, since we have assumed that \(E(k) \subset {\mathscr {D}} \big ( \Delta ^m \big )\), we can further estimate just as in Theorem 3.2

$$\begin{aligned} \big \Vert (T -T_N) \big \Vert ^2_{E(k) \mapsto X(D)}&\le C\lambda _{N+1}^{-2(m-1)} \sup \limits _{u \in E(k) \, ,\, \Vert u \Vert _{X(D)} =1} \sum \limits _{j=N+1}^{\infty } \lambda _j^{2m} \big |(u,\phi _j)_{L^2(D)}\big |^2\\&\le C (N+1)^{-4(m-1)/d} \sup \limits _{u \in E(k) \, ,\, \Vert u \Vert _{X(D)} =1} \big \Vert (I -\Pi _N)u \big \Vert ^2_{{\mathscr {D}} ( \Delta ^m )} \end{aligned}$$

which we obtain by appealing to Weyl’s law. This estimate gives the convergence rate, proving the claim. \(\square \)

We end this section by noting that the convergence rate for the eigenfunctions is the square root of the convergence rate for the eigenvalues. This is clear from the proof of Theorem 3.6 since estimates for \(\Vert w_N -w \Vert ^2_{X(D)}\) are used to derive the estimates for the eigenvalue convergence rate.

Theorem 3.7

Let w and \(w_N\) be the jth eigenfunctions for (4) and (11) respectively. If the corresponding eigenspace \(E(k) \subset {\mathscr {D}} \big ( \Delta ^m \big )\) for \(m \in {\mathbb {N}}\) then there is a \(C>0\) independent of N such that

$$\begin{aligned} \Vert w_N-w \Vert _{X(D)} \le \frac{C }{ (N+1)^{2(m-1)/d}} \sup \limits _{u \in E(k) \, ,\, \Vert u \Vert _{X(D)} =1}\big \Vert (I -\Pi _N)u \big \Vert _{{\mathscr {D}} ( \Delta ^m )} \end{aligned}$$

where \(\Pi _N :X(D) \rightarrow X_N (D)\) is the \(L^2(D)\) projection onto \(X_N (D).\)

Proof

The result follows from the analysis given in the proof of Theorem 3.6. \(\square \)

4 Numerical Examples

In this section, we provide some numerical examples of computing the zero-index transmission eigenvalues via the Dirichlet spectral-Galerkin method studied in the previous sections. For simplicity, we will assume that the domain D is the unit disk in \({\mathbb {R}}^2\). We refer to [4] for the approximation of the classical transmission eigenvalues via a spectral element method for a spherically stratified medium. We will check the accuracy of our Dirichlet spectral-Galerkin method by comparing to the eigenvalues computed by separation of variables for constant refractive index n and conductivity \(\eta \). The computations in this section are motivated by the work in [20], where the authors studied the convergence of the spectral-Galerkin method for computing the classical transmission eigenvalues with basis functions given by the eigenfunctions of the bilaplacian with clamped plate boundary conditions. We will also consider estimating the refractive index n for \(\eta \) either small or large but unknown. To do so, we will use the convergence results given in [22], where it is shown that as the conductivity tends to zero or infinity one obtains an eigenvalue problem that only depends on the refractive index. The estimation of the refractive index from the scattering data has been studied in [12, 29], to name a few examples. In these papers the authors show that far-field and near-field data can be used to recover the classical transmission eigenvalues and use monotonicity results to recover a constant refractive index or estimate a variable refractive index. Here we numerically study this problem for the zero-index transmission eigenvalues.

4.1 Computing the Eigenvalues

We are now ready to compute the zero-index transmission eigenvalues using the Dirichlet spectral-Galerkin approximation presented in Sect. 3. All of our experiments are done with the software MATLAB 2018a on an iMac with a 4.2 GHz Intel Core i7 processor with 8GB of memory. To begin, we will describe an effective method for computing the approximated eigenvalues that satisfy (11). Since the domain D is given by the unit circle we can determine the basis functions from separation of variables. Therefore, the basis functions are taken to be

$$\begin{aligned} \phi _{j}(r,\vartheta ) = J_{p} \left( \sqrt{\lambda _{p,q}} \, r\right) \cos (p \vartheta )\quad \text {with index} \quad j=j(p,q) \in {\mathbb {N}}. \end{aligned}$$

Here the square root of the eigenvalue, \(\sqrt{\lambda _{p,q}}\), corresponds to the qth positive root of the first kind Bessel function of order p, denoted \(J_p\), for all \(p\in {\mathbb {N}}\cup \{0\}\) and \(q \in {\mathbb {N}}\).

In the following numerical examples we take 24 basis functions, where \(0\le p\le 5\) and \(1\le q \le 4\), which gives that the approximation space is defined as

$$\begin{aligned} \text {Span} \Big \{ \phi _{j(p,q)}(r,\vartheta ) \Big \}_{p=0\, ,\, q=1}^{p=5 \, ,\, q=4} \subset X(D). \end{aligned}$$

For our Dirichlet spectral-Galerkin approximation we will solve (11) for \(w_N\) in the aforementioned approximation space. This gives that the approximation of the eigenfunctions will have the form

$$\begin{aligned} w_N (x) = \sum \limits _{j=1}^{24} \omega _j \phi _j (x). \end{aligned}$$

Therefore, the spectral approximations of the eigenvalues \(k_N\) satisfying (11) correspond to the eigenvalues of the matrix equation

$$\begin{aligned} \left( \mathbf{A}-k_N^2\mathbf{B}\right) {\vec {\omega }} =0 \quad \text { where } \quad {\vec {\omega }}=\big (\omega _1, \ldots ,\omega _{24} \big )^{\top } \ne 0. \end{aligned}$$
(14)

We have that the \(24 \times 24\) Galerkin mass and stiffness matrices in the Spectral approximation (11) are given by

$$\begin{aligned} \mathbf{A}_{i,j} = a(\phi _i , \phi _j) \quad \text { and } \quad \mathbf{B}_{i,j} = b(\phi _i , \phi _j) \quad \text { for } \,\, i,j =1, \ldots , 24. \end{aligned}$$

The integrals can be simplified by using (7) to evaluate the sesquilinear forms \(a( \cdot \, , \, \cdot )\) and \(b( \cdot \, , \, \cdot )\) giving that

$$\begin{aligned} a(\phi _i , \phi _j) = \lambda _i \lambda _j \int \limits _{D} \frac{1}{n(x)} \phi _i (x) \, {\phi }_j(x) \, \text {d}x \end{aligned}$$

and

$$\begin{aligned} b(\phi _i , \phi _j) =\lambda _i \int \limits _{D} \phi _i (x) \, {\phi }_j(x) \, \text {d}x - \int \limits _{\partial D} \frac{1}{\eta (x) }\, {\partial _\nu \phi _i (x)} \, {\partial _\nu {\phi }_j (x) }\, \text {d}s. \end{aligned}$$

Notice that we have used that for the Dirichlet eigenfunctions \(\phi _i\) and \(\phi _j\) we have the following integral identities

$$\begin{aligned} \int \limits _{D} \frac{1}{n(x)} \Delta \phi _i (x) \, \Delta {\phi }_j(x) \, \text {d}x = \lambda _i \lambda _j \int \limits _{D} \frac{1}{n(x)} \phi _i (x) \, {\phi }_j(x) \, \text {d}x \end{aligned}$$

and

$$\begin{aligned} \int \limits _{D} \nabla \phi _i (x) \cdot \nabla {\phi }_j(x) \, \text {d}x = \lambda _i \int \limits _{D} \phi _i (x) \, {\phi }_j(x) \, \text {d}x \quad \text {for any} \quad i,j \in {\mathbb {N}}. \end{aligned}$$

In our calculations, we use the fact that the Dirichlet eigenfunctions are orthogonal in \(L^2(D)\) which gives that the volume integral in \(b(\phi _i , \phi _j)\) corresponds to a diagonal matrix.
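For constant n and \(\eta \) these simplifications make the assembly essentially explicit. The following MATLAB sketch (our own illustration, not the authors' code; all variable names are ours) assembles \(\mathbf{A}\) and \(\mathbf{B}\) for the unit disk with the \(L^2(D)\)-normalized Bessel basis and solves (14); for variable coefficients the closed-form integrals would be replaced by the quadrature described below.

```matlab
% Sketch: assemble the Galerkin matrices for the unit disk with constant
% n and eta using the L^2-normalized Bessel basis, then solve
% (A - k^2 B) omega = 0 as in (14). All names are illustrative.
n = 4; eta = 25;
P = 5; Q = 4;                                   % 0 <= p <= P and 1 <= q <= Q
xs  = linspace(1e-3, 30, 3000);                 % grid used to bracket Bessel roots
lam = zeros((P+1)*Q, 1); pp = lam; idx = 0;
for p = 0:P
    v  = besselj(p, xs);
    sc = find(v(1:end-1).*v(2:end) < 0, Q);     % first Q sign changes of J_p
    for q = 1:Q
        idx = idx + 1;
        root = fzero(@(x) besselj(p, x), xs(sc(q) + [0 1]));
        lam(idx) = root^2; pp(idx) = p;         % lambda_{p,q} and its order p
    end
end
N = numel(lam); s = sqrt(lam);
% Normalization constants of J_p(s r)cos(p theta) in L^2(D) and the radial
% derivative on r = 1, using J_p'(s) = -J_{p+1}(s) whenever J_p(s) = 0.
c  = sqrt(2./(pi*(1 + (pp == 0)))) ./ abs(besselj(pp + 1, s));
dn = -c .* s .* besselj(pp + 1, s);             % normal derivative (radial factor)
A  = diag(lam.^2) / n;                          % a(phi_i,phi_j) = (lam_i^2/n) delta_ij
B  = diag(lam);                                 % volume part of b(phi_i,phi_j)
for i = 1:N
    for j = 1:N
        if pp(i) == pp(j)                       % angular integral of cos^2(p theta)
            ang    = pi*(1 + (pp(i) == 0));
            B(i,j) = B(i,j) - (1/eta)*dn(i)*dn(j)*ang;
        end
    end
end
kN = polyeig(A, zeros(N), -B);                  % treats (14) as quadratic in k_N
kN = sort(real(kN(real(kN) > 0 & abs(imag(kN)) < 1e-6)));
disp(kN(1:min(3, numel(kN))))                   % first few real eigenvalues
```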

To compute the Galerkin matrices we implement a 2d Gaussian quadrature method. In the numerical examples the integrals are written in polar coordinates where 12 quadrature points are used to evaluate the radial and angular parts of the integrals. The discretized eigenvalue problem (14) is then solved using the ‘polyeig’ command in MATLAB since (14) is a quadratic eigenvalue problem for the parameter \(k_N\). From [22] we have that for n and \(\eta \) constant the eigenvalues k satisfy the transcendental equation

$$\begin{aligned} d_m(k):=k\sqrt{n}J_{|m|}' \big (k\sqrt{n}\big ) -\big (\eta +|m|\big )J_{|m|}\big (k\sqrt{n}\big )=0 \quad \text { for all } \quad m \in {\mathbb {Z}}. \end{aligned}$$

Using the ‘fzero’ command in MATLAB we can determine the exact zero-index transmission eigenvalues. In Figs. 1 and 2 we plot the function \(d_m(k)\) for the values of \(m=0,1,2,3,4,5\) with \(k \in [0,5]\). To validate our approximation method we compare the approximated and exact eigenvalues in Tables 1 and 2.
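A sketch of this root-finding step (ours; the bracketing grid and names are illustrative) that locates sign changes of \(d_m(k)\) before calling ‘fzero’ is:

```matlab
% Sketch: locate the real roots of d_m(k) on (0,5] for constant n and eta by
% bracketing sign changes on a fine grid and refining each with fzero.
% Here J_m'(x) = (J_{m-1}(x) - J_{m+1}(x))/2 is used for the derivative.
n = 4; eta = 25;
dm = @(k, m) k.*sqrt(n)*0.5.*(besselj(m-1, k*sqrt(n)) - besselj(m+1, k*sqrt(n))) ...
             - (eta + abs(m)).*besselj(m, k*sqrt(n));
ks = linspace(1e-3, 5, 2000);
for m = 0:5
    v  = dm(ks, m);
    sc = find(v(1:end-1).*v(2:end) < 0);              % brackets of the roots
    km = arrayfun(@(i) fzero(@(k) dm(k, m), ks([i i+1])), sc);
    fprintf('m = %d : %s\n', m, mat2str(km, 6));
end
```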

Fig. 1

Plot of the function \(d_m(k)\) for the values of \(m=0,1,2,3,4,5\) with coefficient parameter \(n=4\) and \(\eta =25\)

Fig. 2

Plot of the function \(d_m(k)\) for the values of \(m=0,1,2,3,4,5\) with coefficient parameter \(n=4\) and \(\eta =1/10\)

Table 1 Comparison of the Dirichlet spectral-Galerkin approximation vs. exact zero-index transmission eigenvalues for \(n=4\) and \(\eta = 25\)
Table 2 Comparison of the Dirichlet spectral-Galerkin approximation vs. exact zero-index transmission eigenvalues for \(n=4\) and \(\eta = 1/10\)

The plot of the relative error for the first eigenvalue \(k_1\) is given where we let n vary in the interval [3, 5] for \(\eta = 25\) or \(\eta = 1/10\). We use \(d_0(k)\) to compute the exact first zero-index transmission eigenvalues. In Fig. 3 we see that the relative error for \(\eta = 25\) is on the order of \(10^{-4}\) whereas the relative error for \(\eta = 1/10\) is on the order of \(10^{-2}\). This seems to imply that the Dirichlet spectral-Galerkin approximation method is better suited for problems with larger conductivity.

Fig. 3

Here is a plot of the relative error for the 1st eigenvalue for \(n \in [3,5]\) for \(\eta =25\) and \(\eta =1/10\) on the left and right respectively

We now consider computing the eigenvalues for variable coefficients. Here we take a smooth and a piece-wise constant refractive index defined as

$$\begin{aligned} n_1=4+\text {exp}(-r^2) \quad \text { and } \quad n_2= 4 \cdot {\mathbb {1}}_{(0.25 \le r< 1)} + 2 \cdot {\mathbb {1}}_{(r<0.25)} \end{aligned}$$

for a spherically stratified medium. Here \({\mathbb {1}}_{I}\) denotes the indicator function on the interval I. In Tables 3 and 4 we report the first three real zero-index transmission eigenvalues computed via our approximation method for various conductivities. Here we take three different parameters \(\eta \). Two of the conductivities are constants, taken to be 25 and 1/10, while the third is the variable conductivity parameter

$$\begin{aligned} \eta =1/\big (10+\sin ^2(2\theta )\big ). \end{aligned}$$

Recall that the analysis in Sect. 3.2 also gives the convergence of the eigenfunctions in Theorems 3.4 and 3.7. Therefore, together with Tables 3 and 4 we present the plots of the first two corresponding eigenfunctions for the spherically stratified refractive indices \(n_1\) and \(n_2\) defined above.

Table 3 The first three real zero-index transmission eigenvalues for \(n_1\). We also plot the eigenfunctions corresponding to the eigenvalues \(k_1\) and \(k_2\) for \(\eta =1/10\)
Table 4 The first three real zero-index transmission eigenvalues for \(n_2\). We also plot the eigenfunctions corresponding to the eigenvalues \(k_1\) and \(k_2\) for \(\eta =25\)

Notice that the computed zero-index transmission eigenvalues are monotonically decreasing with respect to the coefficients, as predicted by the theory in [22]. We can see that for the first three real zero-index transmission eigenvalues with \(\eta = 25\) and 1/10 we have

$$\begin{aligned} k_j(n_1 , \eta ) \le k_j(4,\eta ) \le k_j(n_2 , \eta ). \end{aligned}$$

Similarly comparing the reported eigenvalues we see the monotonicity with respect to the conductivity \(\eta \) for various refractive indices.

4.2 Estimating the Refractive Index

In this section, we present numerical examples for estimating the refractive index from the knowledge of the zero-index transmission eigenvalues. It has been shown in [9, 23] that the classical transmission eigenvalues with a conductive boundary condition can be recovered from the scattering data via the Linear Sampling Method and the Inside-Out Duality, see [14, 24] for details of these methods to recover the transmission eigenvalues. Therefore, we will assume that the zero-index transmission eigenvalues can be recovered from the scattering data and we wish to estimate n.

In order to estimate the refractive index from the zero-index transmission eigenvalues \(k(n,\eta )\) we will restrict ourselves to the case where \(\eta \) is either sufficiently large or sufficiently small. The case of known \(\eta \) was considered numerically in [22]. The limiting behavior of the zero-index transmission eigenvalues as \(\eta \) tends to zero or infinity was also studied in [22]. It has been shown that \(k(n,\eta ) \rightarrow \tau (n)\) as \(\eta \rightarrow \infty \), where \(\tau \) is a ‘modified’ Dirichlet eigenvalue, i.e., there exists a nontrivial v such that

$$\begin{aligned} \Delta v +\tau ^2 n v= 0 \,\, \text { in } \,\, D \quad \text { where } \quad v \in H^1_0(D) \end{aligned}$$

or as \(\eta \rightarrow 0\), where \(\tau \) is a ‘modified’ plate buckling eigenvalue, i.e., there exists a nontrivial v such that

$$\begin{aligned} \Delta \frac{1}{n} \Delta v +\tau ^2 \Delta v=0 \,\, \text { in } \,\, D \quad \text { where } \quad v \in H^2_0(D). \end{aligned}$$

This limiting behavior will allow us to estimate the refractive index without knowledge of \(\eta \) on the boundary.

This gives that if it is known a priori that \(\eta \ll 1\) or \(\eta \gg 1\), then we can estimate the refractive index by finding the constant \(n_{\text {approx}}\) such that \(k_1(n,\eta ) = \tau _1(n_{\text {approx}})\), where \(\tau _1\) is the first ‘modified’ Dirichlet eigenvalue for \(\eta \gg 1\) or the first ‘modified’ plate buckling eigenvalue for \(\eta \ll 1\). It is known that \(\tau _1\) depends monotonically on n by the max-min principle [30]. Since D is assumed to be known we can compute \(\tau _1\) for any constant refractive index n via the methods from [10, 28]. To numerically approximate \(\tau _1\) we use separation of variables since D is the unit circle. Therefore, we have that the ‘modified’ Dirichlet eigenvalues for a constant n satisfy

$$\begin{aligned} J_{|m|} \big ( \tau \sqrt{n }\big )=0 \quad \text { for all } \quad m \in {\mathbb {Z}}\end{aligned}$$

and the ‘modified’ plate buckling eigenvalues for a constant n satisfy

$$\begin{aligned} \tau \sqrt{n}J_{|m|}' \big (\tau \sqrt{n}\big ) -|m| J_{|m|}\big (\tau \sqrt{n}\big )=0 \quad \text { for all } \quad m \in {\mathbb {Z}}. \end{aligned}$$

To determine the approximate refractive index we first find the polynomial interpolation for \(\tau _1(n)\) for constant \(n\in [2,7]\) via the ‘polyfit’ command in MATLAB. Then we solve for the constant \(n_{\text {approx}}\) such that

$$\begin{aligned} k_1(n,\eta ) = \tau _1(n_{\text {approx}}) \end{aligned}$$

via the ‘fzero’ command in MATLAB. Since \(\tau _1\) is a decreasing function of n, the above equation has a unique solution \(n_{\text {approx}}\). The results are reported in Tables 5 and 6 for the spherically stratified refractive indices used in our previous calculations with variable coefficient conductivity parameters.
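A minimal MATLAB sketch of this fitting-and-inversion step for the case \(\eta \gg 1\) on the unit disk is given below (ours; for constant n the first ‘modified’ Dirichlet eigenvalue is the first positive root of \(J_0\) divided by \(\sqrt{n}\), and the value of k1 is only a placeholder for a measured eigenvalue).

```matlab
% Sketch: tabulate tau_1(n) for constant n in [2,7], fit it with polyfit and
% invert k_1 = tau_1(n_approx) with fzero. The value of k1 is a placeholder
% standing in for a measured first zero-index transmission eigenvalue.
j01     = fzero(@(x) besselj(0, x), 2.4);     % first positive root of J_0
ngrid   = linspace(2, 7, 50);
tau1    = j01 ./ sqrt(ngrid);                 % first 'modified' Dirichlet eigenvalue
coeff   = polyfit(ngrid, tau1, 4);            % polynomial interpolation of tau_1(n)
k1      = 1.2;                                % placeholder measured eigenvalue
napprox = fzero(@(t) polyval(coeff, t) - k1, 4);
fprintf('estimated constant refractive index: %.4f\n', napprox)
```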

Table 5 Estimation of the refractive index n for \(\eta = 1/\big (10+\sin (2\theta )^2\big )\)
Table 6 Estimation of the refractive index n for \(\eta = 25(2+\sin ^4(\theta ))\)

Simple calculus gives that the average value of \(n=4+\exp (-r^2)\) in the unit disk is \(4+(1-\exp (-1)) \approx 4.6321205\), which is fairly close to the approximation in Table 6. We can also compute the average value of the piece-wise constant refractive index \(n=4 \cdot {\mathbb {1}}_{(0.25 \le r< 1)} + 2 \cdot {\mathbb {1}}_{(r<0.25)} \) in the unit disk, which is 3.875. Also notice that due to the monotonicity of \(\tau _1\) we have that \(n_{\text {min}} \le n_{\text {approx}}\le n_{\text {max}}\). In the case of the classical transmission eigenvalues it has been numerically documented that estimating the refractive index by a constant leads to determining its average value [29]. Table 6 seems to suggest that for \(\eta \gg 1\) estimating the refractive index by a constant may also lead to determining the average value.

4.3 A Numerical Example for the Unit Square

For completeness we provide numerical examples for the unit square \(D=(0,1)^2\). This is given to show that this method also works for polygonal domains with no reentrant corners. Here we wish to show the accuracy of the approximation for this domain. To this end, we will compute the zero-index transmission eigenvalues for a constant \(\eta \). To establish that the approximation is accurately computing the eigenvalues we will test the convergence as \(\eta \rightarrow \infty \) as well as the monotonicity. We will also estimate the refractive index \(n(x_1 ,x_2)\) assuming \(\eta \gg 1\), just as we did for the unit disk. Therefore, the zero-index transmission eigenvalues k should converge to the ‘modified’ Dirichlet eigenvalues for the unit square.

For the approximation, the basis functions are taken to be

$$\begin{aligned} \phi _{j}(x_1,x_2) = \sin (p\pi x_1)\sin (q\pi x_2) \quad \text {with index} \quad j=j(p,q) \in {\mathbb {N}}. \end{aligned}$$

In the numerical examples we take 25 basis functions where \(1\le p,q\le 5\) which gives the spectral approximation space as

$$\begin{aligned} \text {Span} \Big \{ \phi _{j(p,q)}(x_1,x_2) \Big \}_{p, q=1}^{5} \subset X(D). \end{aligned}$$

To compute the zero-index transmission eigenvalues we proceed just as in the previous section. The spectral approximations of the eigenvalues \(k_N\) satisfy the corresponding matrix eigenvalue problem with the appropriate mass and stiffness matrices. Again, the matrix eigenvalue problem is solved by using the ‘polyeig’ command in MATLAB.
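For constant n and \(\eta \) on the unit square the integrals can again be evaluated in closed form, so the assembly can be sketched as follows (our own illustration with illustrative names; for large \(\eta \) the first computed eigenvalue should approach the first ‘modified’ Dirichlet eigenvalue \(\pi \sqrt{2/n}\)).

```matlab
% Sketch: unit square with constant n and eta. The basis
% phi_{p,q} = 2 sin(p pi x1) sin(q pi x2) has lambda = pi^2 (p^2 + q^2),
% diagonal volume integrals and closed-form boundary integrals.
n = 4; eta = 10;
[p, q] = meshgrid(1:5, 1:5); p = p(:); q = q(:);   % 25 modes
lam = pi^2*(p.^2 + q.^2);
N   = numel(lam);
A   = diag(lam.^2) / n;                            % a(phi_i,phi_j) = (lam_i^2/n) delta_ij
B   = diag(lam);                                   % volume part of b(phi_i,phi_j)
for i = 1:N
    for j = 1:N
        % boundary integral of d_nu(phi_i) d_nu(phi_j) over the four sides
        bd = 2*pi^2*( (p(i) == p(j))*q(i)*q(j)*(1 + (-1)^(q(i)+q(j))) ...
                    + (q(i) == q(j))*p(i)*p(j)*(1 + (-1)^(p(i)+p(j))) );
        B(i,j) = B(i,j) - bd/eta;
    end
end
kN = polyeig(A, zeros(N), -B);                     % (A - k^2 B) omega = 0
kN = sort(real(kN(real(kN) > 0 & abs(imag(kN)) < 1e-6)));
fprintf('first eigenvalue %.4f (eta -> infinity limit pi*sqrt(2/n) = %.4f)\n', ...
        kN(1), pi*sqrt(2/n));
```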

Table 7 The first zero-index transmission eigenvalue for \(n=4\) and \(\eta =10^m\) with \(m=0,1,2,3\) of the unit square
Table 8 The first zero-index transmission eigenvalue for \(n= \left( \frac{x_1^2}{2}+2 \right) \left( \frac{x_2^2}{2}+2 \right) \) and \(\eta =10^m\) with \(m=0,1,2,3\) of the unit square

In Table 7 we see the convergence of the first zero-index transmission eigenvalue \(k_1\) to the first ‘modified’ Dirichlet eigenvalue. Also, notice that in Tables 7 and 8 the monotonicity of the transmission eigenvalue \(k_1(n,\eta )\) with respect to n and \(\eta \) is verified by the calculations. Now, from the approximated transmission eigenvalue we can again estimate the refractive index. Using the convergence as \(\eta \rightarrow \infty \) we proceed just as in the previous section. That is, we find \(n_{\text {approx}}\) such that

$$\begin{aligned} k_1(n,\eta ) = \tau _1(n_{\text {approx}}) \end{aligned}$$

where the conductivity parameter \(\eta \gg 1\). To estimate the refractive index we can solve the above equation exactly since the ‘modified’ Dirichlet eigenvalues of the unit square are known analytically (see the formula below). In Table 9 we present the approximation of two refractive indices from the first zero-index transmission eigenvalue for the unit square.
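Indeed, for a constant estimate \(n_{\text {approx}}\) the smallest ‘modified’ Dirichlet eigenvalue of the unit square satisfies \(\tau _1^2 \, n_{\text {approx}} = \pi ^2(1^2+1^2)\), so that

$$\begin{aligned} \tau _1(n_{\text {approx}}) = \pi \sqrt{\frac{2}{n_{\text {approx}}}} \quad \text { which gives } \quad n_{\text {approx}} = \frac{2\pi ^2}{k_1^2(n,\eta )}. \end{aligned}$$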

Table 9 Estimation of the refractive index n for \(\eta = 10\)

5 Summary and Conclusions

In conclusion, we have provided a numerical method to compute the zero-index transmission eigenvalues via the Dirichlet spectral-Galerkin approximation method. Our approximation space is taken to be the span of the first N Dirichlet eigenfunctions. Even though our numerical examples are only presented for the unit disk and the unit square in \({\mathbb {R}}^2\), the analysis is also valid for any domain where the boundary \(\partial D\) is either polygonal with no reentrant corners or of class \({\mathscr {C}}^2\) in \({\mathbb {R}}^d\) for \(d=2\), 3. In order to apply this method one needs the Dirichlet eigenpairs a priori for the domain of interest. In theory this can be done by pre-calculating a fixed number of Dirichlet eigenpairs for the domain D via BEM [28] or FEM [30]. We have also given numerical examples to validate the theoretical results as well as investigated estimating the refractive index from the first zero-index transmission eigenvalue. It seems that for \(\eta \) sufficiently large one can estimate the average value of n, which can be used for nondestructive testing. Possible future work would consist of applying this method to compute ‘classical’ transmission eigenvalues with a conductive boundary and considering the inverse problem of recovering a variable coefficient refractive index from the eigenvalues.