1 Introduction

Let \(\Omega \subset \mathbb {R}^3\) be a bounded domain of class \(C^3\). Let \(\varepsilon , \, \mu , \, {\hat{\mu }}, \, {\hat{\varepsilon }}\in [L^\infty (\Omega )]^{3 \times 3}\) be symmetric and uniformly elliptic. A complex number \(\omega \in \mathbb {C}\) is called a transmission eigenvalue if there exists a nonzero solution \((E,H,\hat{E},\hat{H}) \in [L^2(\Omega )]^{12}\) of the following Cauchy problem:

$$\begin{aligned} \left\{ \begin{array}{cl} \nabla \times E = i \omega \mu H &{} \text{ in } \Omega , \\ \nabla \times H = - i \omega \varepsilon E &{} \text{ in } \Omega ,\\ \end{array} \right. \quad \left\{ \begin{array}{cl} \nabla \times \hat{E} = i \omega \hat{\mu } \hat{H} &{} \text{ in } \Omega , \\ \nabla \times \hat{H} = - i \omega {\hat{\varepsilon }}\hat{E} &{} \text{ in } \Omega , \end{array} \right. \end{aligned}$$
(1.1)
$$\begin{aligned} (\hat{E}-E)\times \nu = 0 \text{ on } \partial \Omega , \quad \text { and } \quad (\hat{H}-H)\times \nu = 0 \, \text { on }\partial \Omega . \end{aligned}$$
(1.2)

Here and in what follows, \(\nu \) denotes the unit, outward, normal vector to \(\partial \Omega \).

The transmission eigenvalue problem, proposed by Kirsch [16] and Colton and Monk [10], has been an active research topic in the inverse scattering theory for inhomogeneous media. It has a connection with the injectivity of the relative scattering operator. Transmission eigenvalues are related to interrogating frequencies for which there is an incident field that is not scattered by the medium. We refer the reader to [7] for a recent and self-contained introduction to the topic.

Cakoni and Nguyen [8] have recently studied the transmission eigenvalue problem for Maxwell equations in a very general setting. Under the assumption that \(\varepsilon , \, \mu , \, {\hat{\varepsilon }}, \, \hat{\mu }\) are of class \(C^1\) in a neighborhood of the boundary, they proposed the following condition:

$$\begin{aligned} \varepsilon , \, \mu , \, {\hat{\varepsilon }}, \, \hat{\mu } \text{ are } \text{ isotropic } \text{ on } \partial \Omega \text{, } \text{ and } \varepsilon \ne \hat{\varepsilon }, \, \, \mu \ne \hat{\mu }, \, \, \varepsilon /\mu \ne \hat{\varepsilon }/ \hat{\mu } \, \, \text { on }\partial \Omega \end{aligned}$$
(1.3)

(see Remark 4 for the convention used in (1.3)). Under this assumption, Cakoni and Nguyen showed that the set of eigenvalues \(\lambda _j\) of system (1.1)-(1.2) is discrete. In studying the location of the eigenvalues under this condition, they showed that, for every \(\gamma >0\), there exists \(\omega _0 > 0\) such that if \(\omega \in \mathbb {C}\) with \(|\Im (\omega ^2)| \ge \gamma |\omega |^2\) and \(|\omega | \ge \omega _0\), then \(\omega \) is not a transmission eigenvalue. Their analysis is inspired and guided by the famous work of Agmon, Douglis, and Nirenberg [2, 3] on complementing boundary conditions.

In this paper, we further study spectral properties of the transmission eigenvalue problem under assumption (1.3) given above. More precisely, we establish the completeness of the generalized eigenfunctions and derive an optimal upper bound for the counting function of the transmission eigenvalues.

Before stating our results, as in [8], we denote

$$\begin{aligned}&\mathbf{H} (\Omega ) : =\Big \{ (u,v,\hat{u},\hat{v}) \in [L^2(\Omega )]^{12} : {\text {div}}(\varepsilon u)= {\text {div}}(\mu v) = {\text {div}}({\hat{\varepsilon }}\hat{u}) = {\text {div}}(\hat{\mu } \hat{v}) = 0 \text { in }\Omega \nonumber \\&\qquad \qquad \text{ and } \quad {\hat{\varepsilon }}{\hat{u}}\cdot \nu - \varepsilon u \cdot \nu = {\hat{\mu }}{\hat{v}}\cdot \nu - \mu v \cdot \nu = 0 \text { on }\partial \Omega \Big \}. \end{aligned}$$
(1.4)

The functional space \(\mathbf{H} (\Omega )\), which plays a role both in the analysis in [8] and in this paper, is a Hilbert space with the standard \([L^2(\Omega )]^{12}\)-scalar product. One motivation for the definition of \(\mathbf{H} (\Omega )\) is the fact that if \((E, H, {\hat{E}}, {\hat{H}}) \in [L^2(\Omega )]^{12}\) is an eigenfunction of the transmission eigenvalue problem, i.e., a solution of (1.1) and (1.2) for some \(\omega \in \mathbb {C}\), then \((E, H, {\hat{E}}, {\hat{H}}) \in \mathbf{H} (\Omega )\) provided \(\omega \ne 0\). The other motivation concerns the compactness of \({\mathcal {T}}_k\) defined below.
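To make the first motivation explicit, note that, for \(\omega \ne 0\), taking the divergence of the equations in (1.1) yields

$$\begin{aligned} {\text {div}}(\varepsilon E) = -\frac{1}{i \omega } {\text {div}}(\nabla \times H) = 0 \quad \text { and } \quad {\text {div}}(\mu H) = \frac{1}{i \omega } {\text {div}}(\nabla \times E) = 0 \text { in }\Omega , \end{aligned}$$

and similarly for \(({\hat{E}}, {\hat{H}})\). The boundary conditions in (1.4) then follow from (1.2) via the surface-divergence identity \(\nu \cdot (\nabla \times u) = {\text {div}}_{\partial \Omega } (u \times \nu )\) (up to the sign convention used for \({\text {div}}_{\partial \Omega }\)); for instance,

$$\begin{aligned} {\hat{\mu }}{\hat{H}}\cdot \nu - \mu H \cdot \nu = \frac{1}{i\omega } \, {\text {div}}_{\partial \Omega } \big ( ({\hat{E}}- E) \times \nu \big ) = 0 \text { on }\partial \Omega . \end{aligned}$$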

The first main result of this paper concerns the completeness of the generalized eigenfunctions. We have

Theorem 1.1

Assume that \(\varepsilon , \, \mu , \, {\hat{\varepsilon }}, \, \hat{\mu } \in [C^2({\bar{\Omega }})]^{3 \times 3}\) and (1.3) holds. The space spanned by the generalized eigenfunctions is complete in \(\mathbf{H} (\Omega )\), i.e., the space spanned by them is dense in \(\mathbf{H} (\Omega )\).

Remark 1

See also Remark 7 for a discussion of another version of Theorem 1.1.

Remark 2

The space spanned by the generalized eigenfunctions corresponding to a given transmission eigenvalue is of finite dimension. This follows from the compactness of the operator \({\mathcal {T}}_k\) (see (1.11) below). As a consequence of Theorem 1.1, the number of transmission eigenvalues is infinite and the space spanned by the transmission eigenfunctions is of infinite dimension.

The second main result of this paper concerns an upper bound for the counting function \({{\mathcal {N}}}\), defined, for \(t > 0\), by

$$\begin{aligned} {{\mathcal {N}}}(t) := \# \Big \{ j : |\lambda _j| \le t \Big \}. \end{aligned}$$
(1.5)

Concerning the behavior of \({{\mathcal {N}}}(t)\) for large t, we have

Theorem 1.2

Assume that \(\varepsilon , \, \mu , \, {\hat{\varepsilon }}, \, \hat{\mu } \in [C^2({\bar{\Omega }})]^{3 \times 3}\) and (1.3) holds. There exists a constant \(c>0\) such that, for \(t > 1\),

$$\begin{aligned} {{\mathcal {N}}}(t) \le ct^3. \end{aligned}$$
(1.6)

Theorem 1.2, complementary to Theorem 1.1, gives an upper bound for the density of the distribution of the transmission eigenvalues. This upper bound is optimal in the sense that it has the same order as the standard Weyl laws for the Maxwell equations [30, 35].

Some comments on Theorem 1.1 and Theorem 1.2 are in order. The generalized eigenfunctions associated with \(\lambda _j\), considered in Theorem 1.1, are understood as the generalized eigenfunctions of the operator \({\mathcal {T}}_k\), defined in (1.11) below, corresponding to the eigenvalue \( (i \lambda _j - k)^{-1} \) of \({\mathcal {T}}_k\). One can show that this notion is independent of k as long as \({\mathcal {T}}_k\) is well defined (and compact). In the conclusion of Theorem 1.2, the multiplicity of the eigenvalues is taken into account; the multiplicity of \(\lambda _j\) is understood as the multiplicity of the eigenvalue \( (i \lambda _j - k)^{-1}\) of the operator \({\mathcal {T}}_k\), which is again independent of k. These facts follow from [1, Theorem 12.4] after applying Lemma 3.1 on the modified resolvent of \({\mathcal {T}}_k\). From now on, the multiplicity of \(\lambda _j\) and the generalized eigenfunctions corresponding to \(\lambda _j\) are understood in this sense.

We recall here the definition of a generalized eigenfunction and the multiplicity of its corresponding eigenvalue, see, e.g., [1, Definition 12.5], for the convenience of the reader.

Definition 1.1

Let \(A: H \rightarrow H\) be a bounded linear operator on a Hilbert space H, and let \(\lambda \) be an eigenvalue of A. An element \(v \in H \setminus \{0 \}\) is called a generalized eigenfunction of A if there exists a positive integer m such that

$$\begin{aligned} (\lambda -A)^m v = 0. \end{aligned}$$
(1.7)

The multiplicity of the eigenvalue \(\lambda \) is defined as the dimension of \(\bigcup _{m \in \mathbb {N}_*} \mathrm{Ker}(\lambda - A)^m\).
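To illustrate the definition, consider the Jordan block on \(H = \mathbb {C}^2\):

$$\begin{aligned} A = \begin{pmatrix} \lambda &{} 1 \\ 0 &{} \lambda \end{pmatrix}, \quad \mathrm{Ker}(\lambda - A) = \mathrm{span}\, \{e_1\}, \quad (\lambda - A)^2 = 0. \end{aligned}$$

Every nonzero \(v \in \mathbb {C}^2\) is thus a generalized eigenfunction of A (with \(m = 2\) in (1.7)), and the multiplicity of \(\lambda \) is 2, although the eigenspace \(\mathrm{Ker}(\lambda - A)\) is only one-dimensional.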

The study of the transmission eigenvalue problem for Maxwell’s equations is not as complete as for the scalar case, which is discussed briefly below. Before [8], discreteness results could be found in [6, 14] (see also [9]), where the case \(\mu = {\hat{\varepsilon }}= {\hat{\mu }}= I \), with \(\varepsilon - I \) invertible in a neighborhood of \(\partial \Omega \), was considered. Concerning other aspects, Cakoni, Gintides, and Haddar [5] studied the existence of real transmission eigenvalues, and Haddar and Meng [15] studied the completeness of eigenfunctions for a setting related to the one in [6] mentioned above. In the isotropic case, under the assumption \(\mu = {\hat{\mu }}\) and \(\varepsilon \mu \ne {\hat{\varepsilon }}{\hat{\mu }}\), Vodev recently derived a parabolic eigenvalue-free region [34].

The structure of the spectrum of the transmission eigenvalue problem is better understood in the case of scalar inhomogeneous Helmholtz equations in a bounded domain \(\Omega \subset \mathbb {R}^d\) with \(d \ge 2\). Let \(A_1\) and \(A_2\) be two (\(d \times d\)) symmetric, uniformly elliptic, matrix-valued functions, and let \(\Sigma _1\) and \(\Sigma _2\) be two bounded positive functions, all defined in \(\Omega \). The state-of-the-art results on the discreteness of transmission eigenvalues are given in [24]. In particular, the authors showed that the transmission eigenvalue problem corresponding to the pairs \((A_1, \Sigma _1)\) and \((A_2, \Sigma _2)\) has a discrete spectrum if the coefficients are smooth only near the boundary, and

  1. (i)

    \(A_1(x), \, A_2(x)\) satisfy the complementing boundary condition with respect to \(\nu (x)\) for all \(x \in \partial \Omega \), i.e., for all \(x \in \partial \Omega \) and for all \(\xi \in \mathbb {R}^d \setminus \{0\}\) with \(\xi \cdot \nu = 0\), we have

    $$\begin{aligned} (A_2 \nu \cdot \nu ) ( A_2 \xi \cdot \xi ) - ( A_2 \nu \cdot \xi )^2 \ne ( A_1 \nu \cdot \nu )( A_1 \xi \cdot \xi ) - ( A_1 \nu \cdot \xi )^2, \end{aligned}$$
  2. (ii)

    \(( A_1 \nu \cdot \nu ) \Sigma _1 \ne ( A_2 \nu \cdot \nu ) \Sigma _2\) for all \(x \in \partial \Omega \).

Under assumptions (i) and (ii), and assuming that \(A_1, A_2, \Sigma _1, \Sigma _2\) are continuous in \({\bar{\Omega }}\), the Weyl law for the eigenvalues and the completeness of the generalized eigenfunctions in \([L^2(\Omega )]^2\) were recently established by Nguyen and (Q. H.) Nguyen [25]. Previous results on discreteness can be found in [11, 17, 31] and the references therein. The completeness of transmission eigenfunctions and estimates on the counting function were studied by Robbiano [28, 29] for \(C^\infty \) boundary and coefficients in the case \(A_1 = A_2 = I\). Again in the \(C^\infty \) isotropic setting, Vodev [32, 33] proved the sharpest known results on eigenvalue-free zones and Weyl’s law with an estimate for the remainder.

The Cauchy problem also naturally appears in the context of negative-index materials after using reflections, as initiated in [18] (see also [20]). The well-posedness and the limiting absorption principle for the Helmholtz equation with sign-changing coefficients were developed by Nguyen [19] using the Fourier and multiplier approach. Similar problems for the Maxwell equations were studied by Nguyen and Sil [26]. Both papers [19, 26] deal with the stability question for negative-index materials and are the starting point for the analysis of the transmission eigenvalue problems in [8, 24, 25]. Other aspects and applications of negative-index materials, as well as the stability and instability of the Cauchy problem (1.1) and (1.2), are discussed in [20,21,22,23] and the references therein.

The starting point and key ingredient of the analysis in [8] is the following result [8, Propositions 4.1 and 4.2]:

Theorem 1.3

(Cakoni & Nguyen) Assume that \(\varepsilon , \, \mu , \, {\hat{\varepsilon }}, \, \hat{\mu } \in [C^1(\bar{\Omega })]^{3 \times 3}\) and (1.3) holds, and let \(\gamma > 0\). There exist two constants \(k_0 \ge 1\) and \(C > 0\) such that for \(k \in \mathbb {C}\) with \(|\Im {(k^2)}| \ge \gamma |k|^2\) and \(|k| \ge k_0\), for every \((J_e, J_m, {\hat{J}}_e, {\hat{J}}_m) \in [L^2(\Omega )]^{12}\), there exists a unique solution \((E, H, {\hat{E}}, {\hat{H}}) \in [L^2(\Omega )]^{12}\) of

$$\begin{aligned}&\left\{ \begin{array}{c} \nabla \times E = k \mu H + J_e \text{ in } \Omega , \\ \nabla \times H = - k \varepsilon E + J_m \text{ in } \Omega , \end{array}\right. \quad \left\{ \begin{array}{c} \nabla \times {\hat{E}}= k {\hat{\mu }}{\hat{H}}+ {\hat{J}}_e \text{ in } \Omega , \\ \nabla \times {\hat{H}}= - k {\hat{\varepsilon }}{\hat{E}}+ {\hat{J}}_m \text{ in } \Omega , \end{array}\right. \end{aligned}$$
(1.8)
$$\begin{aligned}&({\hat{E}}- E) \times \nu = 0 \text{ on } \partial \Omega , \quad \text{ and } \quad ({\hat{H}}- H) \times \nu = 0 \text{ on } \partial \Omega . \end{aligned}$$
(1.9)

Moreover, if \((J_e, J_m, {\hat{J}}_e, {\hat{J}}_m) \in [H({\text {div}}, \Omega )]^4\) with \((J_{e}\cdot \nu - {\hat{J}}_{e} \cdot \nu , J_{m} \cdot \nu - {\hat{J}}_{m} \cdot \nu ) \in [H^{1/2}(\partial \Omega )]^2\), then

$$\begin{aligned}&|k| \, \Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{L^2(\Omega )} + \Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{H^1(\Omega )} \le C \Vert (J_e, J_m, {\hat{J}}_e, {\hat{J}}_m)\Vert _{L^2(\Omega )} \nonumber \\&\qquad + \frac{C}{|k|} \Vert ({\text {div}}J_e,{\text {div}}J_m, {\text {div}}{\hat{J}}_e,{\text {div}}{\hat{J}}_m)\Vert _{L^2(\Omega )} \nonumber \\&\qquad + \frac{C}{|k|} \Vert (J_{e} \cdot \nu - {\hat{J}}_e \cdot \nu , J_m \cdot \nu - {\hat{J}}_m \cdot \nu )\Vert _{H^{1/2}(\partial \Omega )} . \end{aligned}$$
(1.10)

We recall that the space \(H({\text {div}},\Omega )\) is defined by

$$\begin{aligned} H({\text {div}},\Omega ) = \{u \in [L^2(\Omega )]^3 : {\text {div}}(u) \in L^2(\Omega )\}. \end{aligned}$$

Remark 3

In [8], the coefficients are assumed to be of class \(C^1\) only near the boundary, and a variant of (1.10), in which the \(\Vert \cdot \Vert _{H^1(\Omega )}\)-norm is replaced by the \(\Vert \cdot \Vert _{H^1(D \cap \Omega )}\)-norm for some neighborhood D of \(\partial \Omega \) (see [8, (4.4) of Proposition 4.1]), was established there. Nevertheless, under the smoothness assumption considered here, (1.10) follows immediately by the same analysis.

Fix \(k \in \mathbb {C}\) such that the conclusions in Theorem 1.3 hold. One can then define the operator \({\mathcal {T}}_k\) as follows:

$$\begin{aligned} \begin{array}{rclc} {\mathcal {T}}_k: &{} \mathbf{H} (\Omega ) &{} \rightarrow &{} \mathbf{H} (\Omega ) \\ &{} ({{\mathcal {J}}}_e, {{\mathcal {J}}}_m, \hat{{\mathcal {J}}}_e, \hat{{\mathcal {J}}}_m) &{} \mapsto &{} (E,H,\hat{E},\hat{H}), \end{array} \end{aligned}$$
(1.11)

where \((E, H, {\hat{E}}, {\hat{H}})\) is the unique solution of, with \((J_e, J_m, {\hat{J}}_e, {\hat{J}}_m) = (\mu {{\mathcal {J}}}_m, - \varepsilon {{\mathcal {J}}}_e, {\hat{\mu }}\hat{{\mathcal {J}}}_m, - {\hat{\varepsilon }}\hat{{\mathcal {J}}}_e)\),

$$\begin{aligned} \left\{ \begin{array}{cl} \nabla \times E = k \mu H + J_e &{} \text{ in } \Omega , \\ \nabla \times H = - k \varepsilon E + J_m &{} \text{ in } \Omega , \end{array} \right. \quad \left\{ \begin{array}{cl} \nabla \times {\hat{E}}= k {\hat{\mu }}{\hat{H}}+ {\hat{J}}_e &{} \text{ in } \Omega , \\ \nabla \times {\hat{H}}= - k {\hat{\varepsilon }}{\hat{E}}+ {\hat{J}}_m &{} \text{ in } \Omega , \end{array} \right. \end{aligned}$$
(1.12)
$$\begin{aligned} (\hat{E}-E)\times \nu = 0 \text{ on } \partial \Omega , \quad \text { and } \quad (\hat{H}-H)\times \nu = 0 \, \text { on }\partial \Omega . \end{aligned}$$
(1.13)

From (1.10) and the compactness criterion related to the Maxwell equations, one can derive that \({\mathcal {T}}_k\) is compact. It is easy to check that \(\omega \) is an eigenvalue of the transmission eigenvalue problem if and only if \((i \omega - k)^{-1}\) is an eigenvalue of \({\mathcal {T}}_k\). The discreteness of the eigenvalues of the transmission eigenvalue problem then follows from the discreteness of the eigenvalues of \({\mathcal {T}}_k\).
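The verification of this equivalence is elementary: if \((E, H, {\hat{E}}, {\hat{H}})\) solves (1.1)-(1.2) for some \(\omega \), then it solves (1.12)-(1.13) with \(({{\mathcal {J}}}_e, {{\mathcal {J}}}_m, \hat{{\mathcal {J}}}_e, \hat{{\mathcal {J}}}_m) = (i \omega - k) (E, H, {\hat{E}}, {\hat{H}})\), since, e.g.,

$$\begin{aligned} \nabla \times E = i \omega \mu H = k \mu H + (i \omega - k) \mu H = k \mu H + \mu {{\mathcal {J}}}_m \quad \text { and } \quad \nabla \times H = - k \varepsilon E - \varepsilon {{\mathcal {J}}}_e \text { in }\Omega . \end{aligned}$$

Hence \({\mathcal {T}}_k (E, H, {\hat{E}}, {\hat{H}}) = (i \omega - k)^{-1} (E, H, {\hat{E}}, {\hat{H}})\), and the argument can be reversed.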

In this paper, to derive further spectral properties of the transmission eigenvalue problem, we develop the analysis in [8] so as to be able to apply the spectral theory of Hilbert–Schmidt operators. This strategy was previously used in the acoustic setting [25]. To this end, we establish a regularity result (see Theorem 2.1) for the solutions given in Theorem 1.3. In addition, one of the main ingredients in the proof of Theorem 1.1 is the density of the range of the map \({\mathcal {T}}_k\) in \(\mathbf{H} (\Omega )\) with respect to the \([L^2(\Omega )]^{12}\)-norm (see Proposition 3.2). The proof of Theorem 1.1 is also given in a way which does not involve any topological property of \(\Omega \) other than its connectivity (see Step 2 of the proof of Proposition 3.2).

The paper is organized as follows. In Sect. 2, we establish the regularity result on the transmission eigenvalue problem. The last two sections are devoted to the proof of Theorem 1.1 and Theorem 1.2, respectively.

2 A regularity result for the transmission eigenvalue problem

The following regularity result for the Maxwell transmission eigenvalue problem is the main result of this section (compare with Theorem 1.3).

Theorem 2.1

Let \(\varepsilon , \, \mu , \, {\hat{\varepsilon }}, \, \hat{\mu } \in [C^2(\bar{\Omega })]^{3 \times 3}\) be symmetric, and let \(\gamma > 0\). Assume that there exist \(\Lambda \ge 1\) and \(\Lambda _1>0\) such that

$$\begin{aligned}&\Lambda ^{-1} \le \varepsilon ,\mu ,\hat{\varepsilon },\hat{\mu } \le \Lambda \text { in }\Omega , \quad \Vert (\varepsilon ,\mu ,\hat{\varepsilon },\hat{\mu })\Vert _{C^2(\bar{\Omega })} \le \Lambda , \end{aligned}$$
(2.1)
$$\begin{aligned}&\varepsilon , \, \mu , \, {\hat{\varepsilon }}, \, \hat{\mu } \text { are isotropic on }\partial \Omega , \end{aligned}$$
(2.2)

and, for \(x \in \partial \Omega \),

$$\begin{aligned} |\varepsilon (x)-\hat{\varepsilon }(x)| \ge \Lambda _1, \quad |\mu (x)-\hat{\mu }(x)|\ge \Lambda _1, \quad |\varepsilon (x)/\mu (x)-\hat{\varepsilon }(x)/\hat{\mu }(x)| \ge \Lambda _1. \end{aligned}$$
(2.3)

There exist two constants \(k_0 \ge 1\) and \(C > 0\) such that, for \(k \in \mathbb {C}\) with \(|\Im {(k^2)}| \ge \gamma |k|^2\) and \(|k| \ge k_0\), the conclusion of Theorem 1.3 holds for \((J_e, J_m, {\hat{J}}_e, {\hat{J}}_m) \in [L^2(\Omega )]^{12}\). Moreover, for \(J_e,J_m,\hat{J}_e,\hat{J}_m \in [H^1(\Omega )]^3\) with \({\text {div}}J_e,{\text {div}}J_m,{\text {div}}\hat{J}_e,{\text {div}}\hat{J}_m \in H^1(\Omega )\) and \(J_e \cdot \nu -\hat{J}_e \cdot \nu , J_m \cdot \nu -\hat{J}_m \cdot \nu \in H^{3/2}(\partial \Omega )\), we have

$$\begin{aligned}&\Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{H^2(\Omega )} + |k| \Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{H^1(\Omega )} + |k|^2 \, \Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{L^2(\Omega )} \nonumber \\&\quad \le C |k| \Vert (J_e, J_m, {\hat{J}}_e, {\hat{J}}_m)\Vert _{L^2(\Omega )} + C \Vert (J_e, J_m, {\hat{J}}_e, {\hat{J}}_m)\Vert _{H^1(\Omega )} \nonumber \\&\quad \quad +\, C \Vert ({\text {div}}J_e,{\text {div}}J_m, {\text {div}}{\hat{J}}_e,{\text {div}}{\hat{J}}_m)\Vert _{L^2(\Omega )} + \frac{C}{|k|} \Vert ({\text {div}}J_e,{\text {div}}J_m, {\text {div}}{\hat{J}}_e,{\text {div}}{\hat{J}}_m)\Vert _{H^1(\Omega )} \nonumber \\&\quad \quad +\, C \Vert (J_{e} \cdot \nu - {\hat{J}}_e \cdot \nu , J_m \cdot \nu - {\hat{J}}_m \cdot \nu )\Vert _{H^{1/2}(\partial \Omega )} \nonumber \\&\quad \quad +\, \frac{C}{|k|} \Vert (J_{e} \cdot \nu - {\hat{J}}_e \cdot \nu , J_m \cdot \nu - {\hat{J}}_m \cdot \nu )\Vert _{H^{3/2}(\partial \Omega )}, \end{aligned}$$
(2.4)

for some positive constant C depending only on \(\Omega \), \(\Lambda \), \(\Lambda _1\), and \(\gamma \).

Remark 4

The convention used in (1.3) and in (2.3) is as follows. A \(3 \times 3\) matrix-valued function M defined in a subset \(O \subset \mathbb {R}^3\) is called isotropic at \(x \in O\) if it is proportional to the identity matrix at x, i.e., \(M(x) = m I\) for some scalar \(m = m(x)\), where I denotes the \(3 \times 3\) identity matrix. In this case, for notational ease, we also denote m(x) by M(x). If M is isotropic at every \(x \in O\), then M is said to be isotropic in O. Conditions (1.3) and (2.3) are understood under the convention \(m(x) = M(x)\).

Denote

$$\begin{aligned} \mathbb {R}^3_+ = \Big \{x = (x_1, x_2, x_3) \in \mathbb {R}^3; \; x_3 > 0 \Big \} \end{aligned}$$

and

$$\begin{aligned} \mathbb {R}^3_0 = \Big \{x = (x_1, x_2, x_3) \in \mathbb {R}^3; \; x_3 = 0 \Big \}. \end{aligned}$$

One of the main ingredients of the proof of Theorem 2.1 is the following lemma, which is a variant of [8, Corollary 3.1] (see also Remark 5).

Lemma 2.1

Let \(\gamma > 0\), and let \(k \in \mathbb {C}\) with \(|\Im (k^2)| \ge \gamma |k|^2\) and \(|k| \ge 1\). Let \(\varepsilon , \, \mu , \, {\hat{\varepsilon }}, \, {\hat{\mu }}\in [C^1({\bar{\mathbb {R}}}^3_+)]^{3 \times 3}\) be symmetric and uniformly elliptic, and let \(\Lambda \ge 1\) be such that

$$\begin{aligned} \Lambda ^{-1} \le \varepsilon , \, \mu , \, {\hat{\varepsilon }}, \, {\hat{\mu }}\le \Lambda \text{ in } B_1 \cap \mathbb {R}^3_+ \quad \text{ and } \quad \Vert (\varepsilon , \mu , {\hat{\varepsilon }}, {\hat{\mu }}) \Vert _{C^1(\mathbb {R}^3_+ \cap B_1)} \le \Lambda . \end{aligned}$$

Assume that \(\varepsilon (0), \, {\hat{\varepsilon }}(0), \, \mu (0), \, {\hat{\mu }}(0)\) are isotropic and that, for some \(\Lambda _1 \ge 0\),

$$\begin{aligned} |\varepsilon (0) - {\hat{\varepsilon }}(0)| \ge \Lambda _1, \quad |\mu (0) - {\hat{\mu }}(0)| \ge \Lambda _1, \quad \text{ and } \quad |\varepsilon (0)/ \mu (0) - {\hat{\varepsilon }}(0)/ {\hat{\mu }}(0)| \ge \Lambda _1. \end{aligned}$$

Let \(J_e, J_m, {\hat{J}}_e, {\hat{J}}_m \in [L^2(\mathbb {R}^3_+)]^3\), and assume that \((E, H, {\hat{E}}, {\hat{H}}) \in [L^2(\mathbb {R}^3)]^{12}\) is a solution of the system

$$\begin{aligned}&\left\{ \begin{array}{c} \nabla \times E = k \mu H + J_e \text{ in } \mathbb {R}^3_+, \\ \nabla \times H = - k \varepsilon E + J_m \text{ in } \mathbb {R}^3_+, \end{array}\right. \quad \left\{ \begin{array}{c} \nabla \times {\hat{E}}= k {\hat{\mu }}{\hat{H}}+ {\hat{J}}_e \text{ in } \mathbb {R}^3_+, \\ \nabla \times {\hat{H}}= - k {\hat{\varepsilon }}{\hat{E}}+ {\hat{J}}_m \text{ in } \mathbb {R}^3_+, \end{array}\right. \end{aligned}$$
(2.5)
$$\begin{aligned}&({\hat{E}}- E) \times e_3 = 0 \text{ on } \mathbb {R}^3_0, \quad \text{ and } \quad ({\hat{H}}- H) \times e_3 = 0 \text{ on } \mathbb {R}^3_0. \end{aligned}$$
(2.6)

There exist \(0< r_0 < 1\) and \(k_0 > 1\) depending only on \(\gamma \), \(\Lambda \), and \(\Lambda _1\) such that if the supports of \(E, \, H, \, {\hat{E}}, \, {\hat{H}}\) are in \(B_{r_0} \cap \overline{\mathbb {R}^3_+}\), then, for \(|k| > k_0\),

(i):
$$\begin{aligned} |k| \, \Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{L^2(\mathbb {R}^3_+)} \le C \Vert (J_e, J_m, {\hat{J}}_e, {\hat{J}}_m)\Vert _{L^2(\mathbb {R}^3_+)}. \end{aligned}$$
(2.7)
(ii):

if \(J_e, J_m, {\hat{J}}_e, {\hat{J}}_m \in H({\text {div}}, \mathbb {R}^3_+)\) and \(J_{e, 3} - {\hat{J}}_{e, 3}, J_{m, 3} - {\hat{J}}_{m, 3} \in H^{1/2}(\mathbb {R}^3_0)\), then

$$\begin{aligned}&\Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{H^1(\mathbb {R}^3_+)} + |k| \, \Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{L^2(\mathbb {R}^3_+)} \le C \Big ( \Vert (J_e, J_m, {\hat{J}}_e, {\hat{J}}_m)\Vert _{L^2(\mathbb {R}^3_+)} \nonumber \\&\quad + \frac{1}{|k|} \Vert ({\text {div}}J_e, {\text {div}}J_m, {\text {div}}{\hat{J}}_e, {\text {div}}{\hat{J}}_m)\Vert _{L^2(\mathbb {R}^3_+)} \nonumber \\&\quad + \frac{1}{|k|} \Vert (J_{e, 3} - {\hat{J}}_{e, 3}, J_{m, 3} - {\hat{J}}_{m, 3})\Vert _{H^{1/2}(\mathbb {R}^3_0)} \Big ). \end{aligned}$$
(2.8)
(iii):

assume in addition that \(\varepsilon , \, \mu , \, {\hat{\varepsilon }}, \, {\hat{\mu }}\in [C^2({\bar{\mathbb {R}}}^3_+)]^{3 \times 3}\) and

$$\begin{aligned} \Vert (\varepsilon , \mu , {\hat{\varepsilon }}, {\hat{\mu }}) \Vert _{C^2( \mathbb {R}^3_+ \cap B_1)} \le \Lambda . \end{aligned}$$

Then, if \(J_e, J_m, {\hat{J}}_e, {\hat{J}}_m \in [H^1(\mathbb {R}^3_+)]^3\), \({\text {div}}J_e, {\text {div}}J_m, {\text {div}}{\hat{J}}_e, {\text {div}}{\hat{J}}_m \in H^1(\mathbb {R}^3_+)\), and \(J_{e, 3} - {\hat{J}}_{e, 3}, \, J_{m, 3} - {\hat{J}}_{m, 3} \in H^{3/2} (\mathbb {R}^3_0)\), we have

$$\begin{aligned}&\Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{H^2(\mathbb {R}^3_+)} + |k| \Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{H^1(\mathbb {R}^3_+)} + |k|^2 \, \Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{L^2(\mathbb {R}^3_+)} \nonumber \\&\quad \le C |k| \Vert (J_e, J_m, {\hat{J}}_e, {\hat{J}}_m)\Vert _{L^2(\mathbb {R}^3_+)} + C \Vert (J_e, J_m, {\hat{J}}_e, {\hat{J}}_m)\Vert _{H^1(\mathbb {R}^3_+)} \nonumber \\&\quad \quad + C \Vert ({\text {div}}J_e, {\text {div}}J_m, {\text {div}}{\hat{J}}_e, {\text {div}}{\hat{J}}_m)\Vert _{L^2(\mathbb {R}^3_+)} \nonumber \\&\quad \quad + \frac{C}{|k|} \Vert ({\text {div}}J_e, {\text {div}}J_m, {\text {div}}{\hat{J}}_e, {\text {div}}{\hat{J}}_m)\Vert _{H^1(\mathbb {R}^3_+)} \nonumber \\&\quad \quad + C \Vert (J_{e, 3} - {\hat{J}}_{e, 3}, J_{m, 3} - {\hat{J}}_{m, 3}) \Vert _{H^{1/2}(\mathbb {R}^3_0)} + \frac{C}{|k|} \Vert (J_{e, 3} - {\hat{J}}_{e, 3}, J_{m, 3} - {\hat{J}}_{m, 3}) \Vert _{H^{3/2}(\mathbb {R}^3_0)}. \end{aligned}$$
(2.9)

Here, C denotes a positive constant depending only on \(\gamma \), \(\Lambda \), and \(\Lambda _1\).

Remark 5

Parts (i) and (ii) are from [8, Corollary 3.1] and are restated here for the convenience of the reader. The new material is part (iii).

Proof

We only prove (iii) (see Remark 5). The idea of the proof is as follows. To derive (2.9), we first differentiate the system with respect to \(x_j\) for \(j=1, 2\) and then derive the corresponding estimates for \((\partial _{x_j} E, \partial _{x_j} H, \partial _{x_j} {\hat{E}}, \partial _{x_j} {\hat{H}})\) using (i) and (ii). After that, we use the system satisfied by (E, H) and \(({\hat{E}}, {\hat{H}})\) to obtain similar estimates for \((\partial _{x_3} E, \partial _{x_3} H, \partial _{x_3} {\hat{E}}, \partial _{x_3} {\hat{H}})\). This strategy is quite standard, at least in the regularity theory of second-order elliptic equations; see, e.g., [4]. The main goal of the process is to keep track of the dependence on |k|. The details are now given.

Fix \(k_0\) and \(r_0\) such that (i) and (ii) hold. By (ii), we have

$$\begin{aligned}&\Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{H^1(\mathbb {R}^3_+)} + |k| \Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{L^2(\mathbb {R}^3_+)} \nonumber \\&\quad \le C \left( \Vert (J_e, J_m, {\hat{J}}_e, {\hat{J}}_m) \Vert _{L^2(\mathbb {R}^3_+)} + \frac{1}{|k|} \Vert ({\text {div}}J_e, {\text {div}}J_m, {\text {div}}{\hat{J}}_e, {\text {div}}{\hat{J}}_m) \Vert _{L^2(\mathbb {R}^3_+)} \right. \nonumber \\&\quad \quad + \left. \frac{1}{|k|} \Vert (J_{e, 3} - {\hat{J}}_{e, 3}, J_{m, 3} - {\hat{J}}_{m, 3}) \Vert _{H^{1/2}(\mathbb {R}^3_0)} \right) . \end{aligned}$$
(2.10)

Let \(j =1, 2\). Differentiating (2.5) and (2.6) with respect to \(x_j\), we obtain

$$\begin{aligned}&\left\{ \begin{array}{c} \nabla \times \partial _{x_j} E = k \mu \partial _{x_j} H + \mathbf{J} _e \text{ in } \mathbb {R}^3_+, \\ \nabla \times \partial _{x_j} H = - k \varepsilon \partial _{x_j} E + \mathbf{J} _m \text{ in } \mathbb {R}^3_+, \end{array}\right. \quad \left\{ \begin{array}{c} \nabla \times \partial _{x_j} {\hat{E}}= k {\hat{\mu }}\partial _{x_j} {\hat{H}}+ {\hat{\mathbf{J }}}_e \text{ in } \mathbb {R}^3_+, \\ \nabla \times \partial _{x_j}{\hat{H}}= - k {\hat{\varepsilon }}\partial _{x_j} {\hat{E}}+ {\hat{\mathbf{J }}}_m \text{ in } \mathbb {R}^3_+, \end{array}\right. \\&(\partial _{x_j} {\hat{E}}- \partial _{x_j} E) \times e_3 = 0 \text{ on } \mathbb {R}^3_0, \quad \text{ and } \quad (\partial _{x_j} {\hat{H}}- \partial _{x_j} H) \times e_3 = 0 \text{ on } \mathbb {R}^3_0, \end{aligned}$$

where

$$\begin{aligned} \mathbf{J} _e = \partial _{x_j} J_e + k (\partial _{x_j} \mu ) H, \quad \mathbf{J} _m = \partial _{x_j} J_m - k (\partial _{x_j} \varepsilon ) E, \\ {\hat{\mathbf{J }}}_e = \partial _{x_j} {\hat{J}}_e + k (\partial _{x_j} {\hat{\mu }}) {\hat{H}}, \quad {\hat{\mathbf{J }}}_m = \partial _{x_j} {\hat{J}}_m - k (\partial _{x_j} {\hat{\varepsilon }}) {\hat{E}}. \end{aligned}$$

Applying (ii) to \((\partial _{x_j} E, \partial _{x_j} H, \partial _{x_j} {\hat{E}}, \partial _{x_j} {\hat{H}})\), we deduce that

$$\begin{aligned} \Vert (\partial _{x_j} E, \partial _{x_j} H, \partial _{x_j} {\hat{E}}, \partial _{x_j} {\hat{H}})&\Vert _{H^1(\mathbb {R}^3_+)} + |k| \Vert (\partial _{x_j} E, \partial _{x_j} H, \partial _{x_j} {\hat{E}}, \partial _{x_j} {\hat{H}}) \Vert _{L^2(\mathbb {R}^3_+)} \le C (R_1 + R_2), \end{aligned}$$
(2.11)

where

$$\begin{aligned} R_1 =&\Vert (\partial _{x_j} J_e, \partial _{x_j} J_m, \partial _{x_j} {\hat{J}}_e, \partial _{x_j} {\hat{J}}_m) \Vert _{L^2(\mathbb {R}^3_+)} \nonumber \\&+ \frac{1}{|k|} \Vert ({\text {div}}\partial _{x_j} J_e, {\text {div}}\partial _{x_j} J_m, {\text {div}}\partial _{x_j} {\hat{J}}_e, {\text {div}}\partial _{x_j} {\hat{J}}_m) \Vert _{L^2(\mathbb {R}^3_+)} \nonumber \\&+ \frac{1}{|k|} \Vert (\partial _{x_j} J_{e, 3} - \partial _{x_j} {\hat{J}}_{e, 3}, \partial _{x_j} J_{m, 3} - \partial _{x_j} {\hat{J}}_{m, 3}) \Vert _{H^{1/2}(\mathbb {R}^3_0)}, \end{aligned}$$
(2.12)

and

$$\begin{aligned} R_2 = |k| \Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{L^2(\mathbb {R}^3_+)} + \Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{H^1(\mathbb {R}^3_+)} + \Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{H^{1/2}(\mathbb {R}^3_0)}. \end{aligned}$$
(2.13)

Combining (2.10), (2.12), and (2.13), we derive from (2.11) that

$$\begin{aligned}&\Vert (\partial _{x_j} E, \partial _{x_j} H, \partial _{x_j} {\hat{E}}, \partial _{x_j} {\hat{H}}) \Vert _{H^1(\mathbb {R}^3_+)} + |k| \Vert (\partial _{x_j} E, \partial _{x_j} H, \partial _{x_j} {\hat{E}}, \partial _{x_j} {\hat{H}}) \Vert _{L^2(\mathbb {R}^3_+)} \nonumber \\&\quad \le \text{ the } \text{ RHS } \text{ of } (2.9). \end{aligned}$$
(2.14)

On the other hand, from the system satisfied by (E, H), we have, in \(\mathbb {R}^3_+\),

$$\begin{aligned}&\partial _{x_3}E_2=\partial _{x_2}E_3 - k(\mu H)_1-J_{e,1},\quad \partial _{x_3} E_1 = \partial _{x_1} E_3 + k (\mu H)_2 + J_{e, 2}, \quad \text{ and } \nonumber \\&\partial _{x_3} \left( \sum _{j=1}^3 \varepsilon _{3j}E_j \right) = -\sum _{\ell =1}^2 \partial _{x_\ell } \left( \sum _{j=1}^3 \varepsilon _{\ell j}E_j \right) +\frac{1}{k}{\text {div}}(J_m) . \end{aligned}$$
(2.15)
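For the reader's convenience, we note that the third identity in (2.15) is obtained by taking the divergence of the second equation in (2.5): since \({\text {div}}(\nabla \times H) = 0\), one has

$$\begin{aligned} k \, {\text {div}}(\varepsilon E) = {\text {div}}(J_m) \text { in }\mathbb {R}^3_+, \quad \text{ i.e., } \quad \sum _{i=1}^3 \partial _{x_i} \Big ( \sum _{j=1}^3 \varepsilon _{ij} E_j \Big ) = \frac{1}{k} {\text {div}}(J_m), \end{aligned}$$

and isolating the term with \(i = 3\) gives the third identity in (2.15).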

Combining (2.10), (2.14), and (2.15), and using the fact that \(\varepsilon _{33} \ge \Lambda ^{-1}\), one has

$$\begin{aligned} \Vert E \Vert _{H^2(\mathbb {R}^3_+)} + |k| \Vert E \Vert _{H^1(\mathbb {R}^3_+)} + |k|^2 \, \Vert E \Vert _{L^2(\mathbb {R}^3_+)} \le \text{ the } \text{ RHS } \text{ of } (2.9). \end{aligned}$$
(2.16)

Similarly, one obtains

$$\begin{aligned}&\Vert (H, {\hat{E}}, {\hat{H}}) \Vert _{H^2(\mathbb {R}^3_+)} + |k| \Vert (H, {\hat{E}}, {\hat{H}}) \Vert _{H^1(\mathbb {R}^3_+)} \nonumber \\&\qquad + |k|^2 \, \Vert (H, {\hat{E}}, {\hat{H}}) \Vert _{L^2(\mathbb {R}^3_+)} \le \text{ the } \text{ RHS } \text{ of } (2.9). \end{aligned}$$
(2.17)

The conclusion of Lemma 2.1 follows from (2.14), (2.16), and (2.17). \(\square \)

We are ready to give

Proof of Theorem 2.1

Let K be a compact subset of \(\Omega \). Fix \(\varphi \in C^2_c(\Omega )\) such that \(\varphi = 1\) in K. Set

$$\begin{aligned} (E_\varphi , H_\varphi , {\hat{E}}_\varphi , {\hat{H}}_\varphi ) = \varphi (E, H, {\hat{E}}, {\hat{H}}) \text{ in } \Omega . \end{aligned}$$

From the system satisfied by \((E, H, {\hat{E}}, {\hat{H}})\), we have

$$\begin{aligned}&\left\{ \begin{array}{c} \nabla \times E_\varphi = k \mu H_\varphi + J_{\varphi , e} \text{ in } \Omega , \\ \nabla \times H_\varphi = - k \varepsilon E_\varphi + J_{\varphi ,m} \text{ in } \Omega , \end{array}\right. \quad \left\{ \begin{array}{c} \nabla \times {\hat{E}}_\varphi = k {\hat{\mu }}{\hat{H}}_\varphi + {\hat{J}}_{\varphi , e} \text{ in } \Omega , \\ \nabla \times {\hat{H}}_\varphi = - k {\hat{\varepsilon }}{\hat{E}}_\varphi + {\hat{J}}_{\varphi , m} \text{ in } \Omega , \end{array}\right. \end{aligned}$$
(2.18)
$$\begin{aligned}&({\hat{E}}_\varphi - E_\varphi ) \times \nu = 0 \text{ on } \partial \Omega , \quad \text{ and } \quad ({\hat{H}}_\varphi - H_\varphi ) \times \nu = 0 \text{ on } \partial \Omega . \end{aligned}$$
(2.19)

Here, in \(\Omega \),

$$\begin{aligned} J_{\varphi , e}= & {} \nabla \varphi \times E + \varphi J_{e}, \; \; J_{\varphi , m} = \nabla \varphi \times H + \varphi J_{m}, \\ {\hat{J}}_{\varphi , e}= & {} \nabla \varphi \times {\hat{E}}+ \varphi {\hat{J}}_{e}, \; \; {\hat{J}}_{\varphi , m} = \nabla \varphi \times {\hat{H}}+ \varphi {\hat{J}}_{m}. \end{aligned}$$

Differentiating the system of \((E_\varphi , H_\varphi , {\hat{E}}_\varphi , {\hat{H}}_\varphi )\) with respect to \(x_j\) (\(1 \le j \le 3\)) and applying Theorem 1.3, we obtain, as in the proof of Lemma 2.1,

$$\begin{aligned}&\Vert (E_\varphi , H_\varphi , {\hat{E}}_\varphi , {\hat{H}}_\varphi ) \Vert _{H^2(\Omega )} \le C |k| \Vert (J_{\varphi , e}, J_{\varphi , m}, {\hat{J}}_{\varphi , e}, {\hat{J}}_{\varphi , m})\Vert _{L^2(\Omega )}\\&\quad + C \Vert (J_{\varphi , e}, J_{\varphi , m}, {\hat{J}}_{\varphi , e}, {\hat{J}}_{\varphi , m})\Vert _{H^1(\Omega )} \\&\quad + C \Vert ({\text {div}}J_{\varphi , e}, {\text {div}}J_{\varphi , m}, {\text {div}}{\hat{J}}_{\varphi , e}, {\text {div}}{\hat{J}}_{\varphi , m}) \Vert _{L^2(\Omega )} \\&\quad + \frac{C}{|k|} \Vert ({\text {div}}J_{\varphi , e}, {\text {div}}J_{\varphi , m}, {\text {div}}{\hat{J}}_{\varphi , e}, {\text {div}}{\hat{J}}_{\varphi , m})\Vert _{H^1(\Omega )}. \end{aligned}$$

This implies

$$\begin{aligned}&\Vert (E_\varphi , H_\varphi , {\hat{E}}_\varphi , {\hat{H}}_\varphi ) \Vert _{H^2(\Omega )} \le C |k| \Vert (J_e, J_m, {\hat{J}}_e, {\hat{J}}_m)\Vert _{L^2(\Omega )} + C \Vert (J_e, J_m, {\hat{J}}_e, {\hat{J}}_m)\Vert _{H^1(\Omega )} \nonumber \\&\quad + C \Vert ({\text {div}}J_e, {\text {div}}J_m, {\text {div}}{\hat{J}}_e, {\text {div}}{\hat{J}}_m)\Vert _{L^2(\Omega )} + \frac{C}{|k|} \Vert ({\text {div}}J_e, {\text {div}}J_m, {\text {div}}{\hat{J}}_e, {\text {div}}{\hat{J}}_m)\Vert _{H^1(\Omega )} \nonumber \\&\quad + C |k| \Vert (E, H, {\hat{E}}, {\hat{H}})\Vert _{L^2(\Omega )} + C \Vert (E, H, {\hat{E}}, {\hat{H}}) \Vert _{H^1(\Omega )}. \end{aligned}$$
(2.20)

Applying Theorem 1.3 again, we derive from (2.20) that

$$\begin{aligned}&\Vert (E_\varphi , H_\varphi , {\hat{E}}_\varphi , {\hat{H}}_\varphi ) \Vert _{H^2(\Omega )} + |k| \Vert (E_\varphi , H_\varphi , {\hat{E}}_\varphi , {\hat{H}}_\varphi ) \Vert _{H^1(\Omega )} \nonumber \\&\quad + |k|^2 \Vert (E_\varphi , H_\varphi , {\hat{E}}_\varphi , {\hat{H}}_\varphi ) \Vert _{L^2(\Omega )} \le \text{ the } \text{ RHS } \text{ of } (2.4). \end{aligned}$$
(2.21)

The conclusion of Theorem 2.1 now follows from (2.21) and Lemma 2.1 via local charts. The proof is complete. \(\square \)

3 Completeness of the generalized eigenfunctions: Proof of Theorem 1.1

To establish the completeness of the generalized eigenfunctions, we use Theorem 2.1 and apply the theory of Hilbert–Schmidt operators. To this end, we first recall

Definition 3.1

Let \(H\) be a separable Hilbert space, and let \((\phi _k)_{k=1}^\infty \) be an orthonormal basis of \(H\). A bounded, linear operator \(\mathbf {T}: H \rightarrow H\) is Hilbert–Schmidt if its double norm

$$\begin{aligned} |\Vert \mathbf {T} \Vert | := \Big ( \sum _{k=1}^\infty \Vert \mathbf {T} \phi _k \Vert ^2 \Big )^{1/2} \end{aligned}$$

is finite.

Remark 6

The definition of the double norm does not depend on the choice of \((\phi _k)_{k=1}^\infty \); see, e.g., [1, Chapter 12].
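The basis independence can be seen from Parseval's identity: if \((\psi _j)_{j=1}^\infty \) is another orthonormal basis of \(H\), then

$$\begin{aligned} \sum _{k=1}^\infty \Vert \mathbf {T} \phi _k \Vert ^2 = \sum _{k, j =1}^\infty |\langle \mathbf {T} \phi _k, \psi _j \rangle |^2 = \sum _{j=1}^\infty \Vert \mathbf {T}^* \psi _j \Vert ^2 , \end{aligned}$$

and the last quantity no longer involves \((\phi _k)_{k=1}^\infty \).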

Using Theorem 2.1, we can establish the following result.

Proposition 3.1

Assume that \(\varepsilon , \, \mu , \, {\hat{\varepsilon }}, \, \hat{\mu } \in [C^2({\bar{\Omega }})]^{3 \times 3}\) and (1.3) holds, and let \(\gamma > 0\). Let \(k_0 \ge 1\) and \(C > 0\) be constants such that for \(k \in \mathbb {C}\) with \(|\Im {(k^2)}| \ge \gamma |k|^2\) and \(|k| \ge k_0\), the conclusions of Theorem 2.1 hold. Then, for such a complex number k,

$$\begin{aligned}&\Vert {\mathcal {T}}_k^2 ({{{\mathcal {J}}}}) \Vert _{H^2(\Omega )} + |k| \Vert {\mathcal {T}}_k^2 ({{{\mathcal {J}}}}) \Vert _{H^1(\Omega )} + |k|^2 \Vert {\mathcal {T}}_k^2 ({{{\mathcal {J}}}}) \Vert _{L^2(\Omega )} \nonumber \\&\quad \le C \Vert {{\mathcal {J}}}\Vert _{L^2(\Omega )} \quad \forall {{\mathcal {J}}}= ({{\mathcal {J}}}_e, {{\mathcal {J}}}_m, \hat{{\mathcal {J}}}_e, \hat{{\mathcal {J}}}_m) \in \mathbf{H} (\Omega ). \end{aligned}$$
(3.1)

Consequently,

  1. (i)

    \({\mathcal {T}}_k^2\) is a Hilbert–Schmidt operator defined in \(\mathbf{H} (\Omega )\); moreover,

    $$\begin{aligned} |\Vert {\mathcal {T}}_k^2 \Vert | \le C |k|^{-1/2} \end{aligned}$$
    (3.2)

    for some positive constant C, independent of k.

  2. (ii)

    For \(\theta \in \mathbb {R}\) with \(|\Im (e^{2 i \theta })| > 0\), \(e^{i \theta }\) is a direction of minimal growth of the modified resolvent of \({\mathcal {T}}_k^2\).

For the convenience of the reader, we recall briefly here some notions associated with the concept of the minimal growth. Let A be a continuous, linear transformation from a Hilbert space H into itself. The modified resolvent set \(\rho _m(A)\) of A is the set of all \(\lambda \in \mathbb {C}\setminus \{0 \}\) such that \(I - \lambda A\) is bijective (and continuous). If \(\lambda \in \rho _m(A)\), then the map \(A_\lambda : = A (I - \lambda A)^{-1}\) is the modified resolvent of A (see [1, Definition 12.3]). For \(\theta \in \mathbb {R}\), \(e^{i \theta }\) is a direction of minimal growth of the modified resolvent of A if for some \(a>0\), the following two facts hold for all \(r>a\): i) \(r e^{i \theta }\) is in the modified resolvent set \(\rho _m(A)\) of A and ii) \(\Vert A_{r e^{i \theta }} \Vert \le C/ r\) (see [1, Definition 12.6]).
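The modified resolvent set is a simple reparametrization of the usual resolvent set: for \(\lambda \ne 0\),

$$\begin{aligned} I - \lambda A = - \lambda \left( A - \lambda ^{-1} I \right) , \end{aligned}$$

so \(\lambda \in \rho _m(A)\) if and only if \(\lambda ^{-1}\) belongs to the resolvent set of \(A\).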

Another key ingredient of the proof of Theorem 1.1 is:

Proposition 3.2

Assume that \(\varepsilon , \, \mu , \, {\hat{\varepsilon }}, \, \hat{\mu } \in [C^2({\bar{\Omega }})]^{3 \times 3}\) and (1.3) holds. Let \(k \in \mathbb {C}\) be such that the conclusion of Theorem 2.1 holds. We have

$$\begin{aligned} \overline{{\mathcal {T}}_k(\mathbf{H} (\Omega ))}^{L^2(\Omega )} = \mathbf{H} (\Omega ). \end{aligned}$$

The rest of this section contains three subsections. In the first subsection, we give the proof of Proposition 3.1. The proofs of Proposition 3.2 and Theorem 1.1 are given in the last two subsections, respectively.

3.1 Proof of Proposition 3.1

We first state and prove a lemma used in the proof of Proposition 3.1.

Lemma 3.1

Let \(k, \, s \in \mathbb {C}\) be such that \({\mathcal {T}}_k, {\mathcal {T}}_{k+s} : \mathbf{H} (\Omega ) \rightarrow \mathbf{H} (\Omega )\) are bounded. We have

  1. (i)

    If \({\mathcal {T}}_k\) is compact, then \(s \in \rho _m({\mathcal {T}}_k). \)

  2. (ii)

    Assume \(s \in \rho _m({\mathcal {T}}_k)\). Then,

    $$\begin{aligned} {\mathcal {T}}_k (I - s {\mathcal {T}}_k)^{-1} = (I - s {\mathcal {T}}_k)^{-1} {\mathcal {T}}_k = {\mathcal {T}}_{k + s}. \end{aligned}$$
    (3.3)

Proof of Lemma 3.1

We begin with assertion i). Since \({\mathcal {T}}_k\) is compact, it suffices to prove that \(I - s {\mathcal {T}}_k\) is injective. Indeed, let \((E, H, {\hat{E}}, {\hat{H}}) \in \mathbf{H} (\Omega )\) be a solution of the equation \((I - s {\mathcal {T}}_k) (E, H, {\hat{E}}, {\hat{H}}) = 0\). One can check that \((E, H, {\hat{E}}, {\hat{H}}) = {\mathcal {T}}_{k+s} (0) = 0\). Assertion i) follows.

We next establish ii). Let \({{\mathcal {J}}}= ({{\mathcal {J}}}_e, {{\mathcal {J}}}_m, \hat{{\mathcal {J}}}_e, \hat{{\mathcal {J}}}_m) \in \mathbf{H} (\Omega )\) be arbitrary. Set

$$\begin{aligned}&(E, H, {\hat{E}}, {\hat{H}}) = {\mathcal {T}}_{k+s} ({{\mathcal {J}}}), \end{aligned}$$
(3.4)
$$\begin{aligned}&{{\mathcal {J}}}^1 = ({{\mathcal {J}}}_e^1, {{\mathcal {J}}}_m^1, \hat{{\mathcal {J}}}_e^1, \hat{{\mathcal {J}}}_m^1)= (I - s {\mathcal {T}}_k)^{-1} ({{\mathcal {J}}}), \end{aligned}$$
(3.5)
$$\begin{aligned}&(E^1, H^1, {\hat{E}}^1, {\hat{H}}^1) = {\mathcal {T}}_{k} ({{\mathcal {J}}}^1). \end{aligned}$$
(3.6)

We claim that

$$\begin{aligned} (E^1, H^1, {\hat{E}}^1, {\hat{H}}^1) = (E, H, {\hat{E}}, {\hat{H}}), \end{aligned}$$

which implies \({\mathcal {T}}_k (I - s {\mathcal {T}}_k)^{-1} = {\mathcal {T}}_{k+s}\) since \({{\mathcal {J}}}\) is arbitrary.

To prove the claim, we will show that \((E^1, H^1, {\hat{E}}^1, {\hat{H}}^1)\) and \((E, H, {\hat{E}}, {\hat{H}})\) satisfy the same Cauchy problem. We have

$$\begin{aligned}&\nabla \times E \mathop {=}^{(3.4)} (k + s) \mu H + \mu {{\mathcal {J}}}_m, \nonumber \\&\nabla \times E^1 \mathop {=}^{(3.6)} k \mu H^1 + \mu {{\mathcal {J}}}_m^1, \nonumber \\&{{\mathcal {J}}}^1 - {{\mathcal {J}}}\mathop {=}^{(3.5)} s {\mathcal {T}}_k ({{\mathcal {J}}}^1) \mathop {=}^{(3.6)} s (E^1, H^1, {\hat{E}}^1, {\hat{H}}^1). \end{aligned}$$
(3.7)

This implies

$$\begin{aligned} \nabla \times E^1 = (k + s) \mu H^1 + \mu {{\mathcal {J}}}_m, \end{aligned}$$

(compare with (3.7)). Similarly, one can derive the remaining equations, so that \((E^1, H^1, {\hat{E}}^1, {\hat{H}}^1)\) and \((E, H, {\hat{E}}, {\hat{H}})\) satisfy the same system in \(\Omega \). Moreover, on \(\partial \Omega \),

$$\begin{aligned} ({\hat{E}}^1 - E^1)\times \nu = ({\hat{H}}^1 - H^1)\times \nu = 0 = ({\hat{E}}- E)\times \nu = ({\hat{H}}- H)\times \nu . \end{aligned}$$

The claim is proved.

Since

$$\begin{aligned} (I - s {\mathcal {T}}_k) (I - s {\mathcal {T}}_k)^{-1} = I = (I - s {\mathcal {T}}_k)^{-1} (I - s {\mathcal {T}}_k), \end{aligned}$$

and \(s \ne 0\) by the definition of \(\rho _m({\mathcal {T}}_k)\), we obtain

$$\begin{aligned} {\mathcal {T}}_k (I - s {\mathcal {T}}_k)^{-1} = (I - s {\mathcal {T}}_k)^{-1} {\mathcal {T}}_k. \end{aligned}$$

The proof is complete. \(\square \)

We are ready to give

Proof of Proposition 3.1

Assertion (3.1) is a consequence of Theorem 1.3 and Theorem 2.1. As a consequence of (3.1) and the Gagliardo–Nirenberg inequality, see [12, 27], we derive, for \({{\mathcal {J}}}\in \mathbf{H} (\Omega )\), that \({\mathcal {T}}_k^2({{\mathcal {J}}}) \in [C({\bar{\Omega }})]^{12}\), and

$$\begin{aligned} \Vert {\mathcal {T}}_k^2({{\mathcal {J}}}) \Vert _{L^\infty (\Omega )} \le C \Vert {\mathcal {T}}_k^2({{\mathcal {J}}}) \Vert _{H^2(\Omega )}^\frac{3}{4} \Vert {\mathcal {T}}_k^2({{\mathcal {J}}}) \Vert _{L^2(\Omega )}^\frac{1}{4} \le \frac{C}{|k|^{1/2}} \Vert {{\mathcal {J}}}\Vert _{L^2(\Omega )}. \end{aligned}$$
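The last inequality combines the \(H^2\) and \(L^2\) estimates of (3.1), namely \(\Vert {\mathcal {T}}_k^2({{\mathcal {J}}}) \Vert _{H^2(\Omega )} \le C \Vert {{\mathcal {J}}}\Vert _{L^2(\Omega )}\) and \(\Vert {\mathcal {T}}_k^2({{\mathcal {J}}}) \Vert _{L^2(\Omega )} \le C |k|^{-2} \Vert {{\mathcal {J}}}\Vert _{L^2(\Omega )}\); indeed,

$$\begin{aligned} \big ( C \Vert {{\mathcal {J}}}\Vert _{L^2(\Omega )} \big )^{\frac{3}{4}} \big ( C |k|^{-2} \Vert {{\mathcal {J}}}\Vert _{L^2(\Omega )} \big )^{\frac{1}{4}} = \frac{C}{|k|^{1/2}} \Vert {{\mathcal {J}}}\Vert _{L^2(\Omega )}, \end{aligned}$$

which explains the factor \(|k|^{-1/2}\).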

It follows from the theory of Hilbert–Schmidt operators, see, e.g., [25, Lemma 3]Footnote 2, that \({\mathcal {T}}_k^2\) is a Hilbert–Schmidt operator defined on \(\mathbf{H} (\Omega )\) and

$$\begin{aligned} |\Vert {\mathcal {T}}_k^2 \Vert | \le \frac{C}{|k|^{1/2}}, \end{aligned}$$

which is (3.2).

We next check the assertion on the minimal growth of the modified resolvent of \({\mathcal {T}}_k^2\). We have

$$\begin{aligned} \lim _{r \rightarrow + \infty } |\Im \big ( (k + r e^{i \theta })^2 \big )| / |k + r e^{i \theta }|^2 = |\Im (e^{2 i \theta }) | \ge 2 \gamma , \end{aligned}$$

for some \(\gamma > 0\). It follows that, for \(a\) large enough, \(k + r e^{i \theta }\) satisfies the conclusion of Theorem 2.1 for all \(r > a\). On the other hand, let \((E, H, {\hat{E}}, {\hat{H}}) \in \mathbf{H} (\Omega )\). We first note that, for \(s \in \mathbb {C}\),

$$\begin{aligned} (I - s {\mathcal {T}}_k) (E, H, {\hat{E}}, {\hat{H}}) = 0 \text{ if } \text{ and } \text{ only } \text{ if } (E, H, {\hat{E}}, {\hat{H}}) = {\mathcal {T}}_{k + s} ( 0) = 0, \end{aligned}$$

provided that \({\mathcal {T}}_{k+s}\) is well defined. Since \({\mathcal {T}}_k\) is compact, it follows that \( r e^{i \theta } \in \rho _m({\mathcal {T}}_k)\) for \(r > a\). By Lemma 3.1, we also have, with \(s = r e^{i \theta }\),

$$\begin{aligned} {\mathcal {T}}_k (I - s {\mathcal {T}}_k)^{-1} = (I - s {\mathcal {T}}_k)^{-1} {\mathcal {T}}_k = {\mathcal {T}}_{k + s}. \end{aligned}$$

Let \(s_1= r^{1/2} e^{i \theta /2}\) and \(s_2 = - r^{1/2} e^{i \theta /2}\), so that \(s_1 + s_2 = 0\), \(s_1 s_2 = -s\), and \((t- s_1)(t-s_2) = t^2 - s\) for \(t \in \mathbb {C}\). One then can check that

$$\begin{aligned}&{\mathcal {T}}_k^2 (I - s {\mathcal {T}}_k^2)^{-1} = {\mathcal {T}}_k^2 (I - s_1 {\mathcal {T}}_k)^{-1}(I - s_2 {\mathcal {T}}_k)^{-1} \\&\quad \mathop {=}^{Lemma 3.1} {\mathcal {T}}_k(I - s_1 {\mathcal {T}}_k)^{-1} {\mathcal {T}}_k (I - s_2 {\mathcal {T}}_k)^{-1} = {\mathcal {T}}_{k+ s_1} {\mathcal {T}}_{k+s_2}. \end{aligned}$$
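The first equality above uses the factorization of \(I - s {\mathcal {T}}_k^2\): whenever \(s_1 + s_2 = 0\) and \(s_1 s_2 = -s\), one has

$$\begin{aligned} (I - s_1 {\mathcal {T}}_k)(I - s_2 {\mathcal {T}}_k) = I - (s_1 + s_2) {\mathcal {T}}_k + s_1 s_2 {\mathcal {T}}_k^2 = I - s {\mathcal {T}}_k^2 , \end{aligned}$$

and the two factors commute, both being polynomials in \({\mathcal {T}}_k\).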

It follows from Theorem 1.3 that

$$\begin{aligned} \Vert {\mathcal {T}}_k^2 (I - s {\mathcal {T}}_k^2)^{-1} \Vert _\mathbf{H (\Omega ) \rightarrow \mathbf{H} (\Omega )}&= \Vert {\mathcal {T}}_{k+ s_1} {\mathcal {T}}_{k+s_2} \Vert _\mathbf{H (\Omega ) \rightarrow \mathbf{H} (\Omega )}\nonumber \\&\le \Vert {\mathcal {T}}_{k+ s_1} \Vert _\mathbf{H (\Omega ) \rightarrow \mathbf{H} (\Omega )} \Vert {\mathcal {T}}_{k+s_2} \Vert _\mathbf{H (\Omega ) \rightarrow \mathbf{H} (\Omega )} \nonumber \\&\le C \frac{1}{|s_1|}\frac{1}{|s_2|} = \frac{C}{|s|}. \end{aligned}$$

The assertion on the minimal growth of the modified resolvent of \({\mathcal {T}}_k^2\) follows. \(\square \)

3.2 Proof of Proposition 3.2

We first state and prove the following technical result, which is used in the proof of Proposition 3.2.

Lemma 3.2

Let \({{\mathcal {M}}}\in [C^1(\bar{\Omega })]^{3 \times 3}\) be symmetric and uniformly elliptic. Let \(U \in [H^1(\Omega )]^3\) be such that \({\text {div}}({{\mathcal {M}}}U) = 0\) in \(\Omega \). There exists a sequence \((U_n)_n \subset [H^1(\Omega )]^3\) such that

$$\begin{aligned}&{\text {div}}({{\mathcal {M}}}U_n) = 0 \text{ in } \Omega , \end{aligned}$$
(3.8)
$$\begin{aligned}&{{\mathcal {M}}}U_n \cdot \nu = {{\mathcal {M}}}U \cdot \nu \text{ on } \partial \Omega , \quad U_n \times \nu = 0 \text{ on } \partial \Omega , \end{aligned}$$
(3.9)

and

$$\begin{aligned} U_n \rightarrow U \text { in } [L^2(\Omega )]^3 \text{ as } n \rightarrow + \infty . \end{aligned}$$
(3.10)

Proof of Lemma 3.2

Since \(\Omega \) is connected, \(U \in [H^1(\Omega )]^3\), and \({\text {div}}({{\mathcal {M}}}U) = 0\) in \(\Omega \), by [13, Lemma 2.2], there exists \(\widetilde{V} \in [H^1(\Omega )]^3\) such that

$$\begin{aligned} {\text {div}}\widetilde{V} = 0 \text { in }\Omega \, \, \, \text { and }\, \, \, \widetilde{V} = \frac{ \mathcal {M} U \cdot \nu }{ {{\mathcal {M}}}\nu \cdot \nu } {{\mathcal {M}}}\nu \text { on }\partial \Omega . \end{aligned}$$
(3.11)

Set \(V = \mathcal {M}^{-1}\widetilde{V}\) in \(\Omega \). One can easily check from the definition of V and (3.11) that

$$\begin{aligned} {\text {div}}(\mathcal {M}V) = 0 \text { in }\Omega , \, \, \, {{\mathcal {M}}}V \cdot \nu = {{\mathcal {M}}}U \cdot \nu \text { on }\partial \Omega \, \, \, \text { and }\, \, \, V \times \nu = 0 \text { on }\partial \Omega . \end{aligned}$$
(3.12)

Set \(\widetilde{U}= U-V\) in \(\Omega \). Since \({\text {div}}({{\mathcal {M}}}U) = 0\) in \(\Omega \), we derive from (3.12) that \({\text {div}}({{\mathcal {M}}}\widetilde{U})=0\) in \(\Omega \) and \( {{\mathcal {M}}}\widetilde{U} \cdot \nu =0\) on \(\partial \Omega \). It follows from [13, Theorem 2.8] that there exists a sequence \((\widetilde{U}_n)_n \subset [C^1_c(\Omega )]^3\) such that

$$\begin{aligned} {\text {div}}(\widetilde{U}_n) = 0 \text { in }\Omega \end{aligned}$$
(3.13)

and

$$\begin{aligned} \widetilde{U}_n \rightarrow {{\mathcal {M}}}\widetilde{U} \text{ in } [L^2(\Omega )]^3 \text{ as } n \rightarrow + \infty . \end{aligned}$$
(3.14)

Set

$$\begin{aligned} U_n = {{\mathcal {M}}}^{-1} \widetilde{U}_n + V. \end{aligned}$$

We claim that the sequence \((U_n)_n\) has the required properties. Indeed,

$$\begin{aligned} {\text {div}}({{\mathcal {M}}}U_n) ={\text {div}}(\widetilde{U}_n) + {\text {div}}({{\mathcal {M}}}V) \mathop {=}^{(3.12),(3.13)} 0 \text { in }\Omega , \end{aligned}$$

and since \(\widetilde{U}_n \in [C_c^1(\Omega )]^3\), we also have

$$\begin{aligned} {{\mathcal {M}}}U_n \cdot \nu = \widetilde{U}_n \cdot \nu + {{\mathcal {M}}}V \cdot \nu \mathop {=}^{(3.12)} {{\mathcal {M}}}U \cdot \nu \text { on }\partial \Omega , \end{aligned}$$

and

$$\begin{aligned} U_n \times \nu = {{\mathcal {M}}}^{-1} \widetilde{U}_n \times \nu + V \times \nu \mathop {=}^{(3.12)} 0\text { on }\partial \Omega . \end{aligned}$$

Moreover, since \(V \in [H^1(\Omega )]^3\), it follows that \(U_n \in [H^1(\Omega )]^3\). By (3.14), we obtain

$$\begin{aligned} U_n \rightarrow \widetilde{U} + V = U \text{ in } [L^2(\Omega )]^3 \text{ as } n \rightarrow + \infty . \end{aligned}$$

The proof is complete. \(\square \)

We are ready to give

Proof of Proposition 3.2

Since \({\mathcal {T}}_k\) is a map from \(\mathbf{H} (\Omega )\) into \(\mathbf{H} (\Omega )\), it suffices to prove the following two facts

$$\begin{aligned}{}[H^1(\Omega )]^{12} \cap \mathbf{H} (\Omega ) \subset \overline{{\mathcal {T}}_k \big ( \mathbf{H} (\Omega ) \big )}^{L^2(\Omega )}, \end{aligned}$$
(3.15)

and

$$\begin{aligned}{}[H^1(\Omega )]^{12} \cap \mathbf{H} (\Omega ) \text{ is } \text{ dense } \text{ in } \mathbf{H} (\Omega ) \text{ with } \text{ respect } \text{ to } [L^2(\Omega )]^{12}-\text{ norm }. \end{aligned}$$
(3.16)

These will be proved in Steps 1 and 2 below.

Step 1 Proof of (3.15). Let \((E,H,\hat{E},\hat{H}) \in [H^1(\Omega )]^{12} \cap \mathbf{H} (\Omega )\). Applying Lemma 3.2 with \(({{\mathcal {M}}}, U)\) equal to \((\varepsilon , E)\), \((\mu , H)\), \(({\hat{\varepsilon }}, {\hat{E}})\), and \(({\hat{\mu }}, {\hat{H}})\), we obtain a sequence \(\big ((E^n,H^n,\hat{E}^n,\hat{H}^n) \big )_n \subset [H^1(\Omega )]^{12} \cap \mathbf{H} (\Omega )\) such that

$$\begin{aligned} E^n \times \nu = H^n \times \nu = \hat{E}^n \times \nu = \hat{H}^n \times \nu = 0 \text{ on } \partial \Omega , \end{aligned}$$
(3.17)

and

$$\begin{aligned} (E^n,H^n,\hat{E}^n,\hat{H}^n) \rightarrow (E,H,\hat{E},\hat{H}) \text{ in } [L^2(\Omega )]^{12} \text{ as } n \rightarrow + \infty . \end{aligned}$$
(3.18)

Set, in \(\Omega \),

$$\begin{aligned}&J_e^n = \nabla \times E^n - k \mu H^n, \quad J_m^n = \nabla \times H^n + k \varepsilon E^n, \end{aligned}$$
(3.19)
$$\begin{aligned}&{\hat{J}}_e^n = \nabla \times {\hat{E}}^n - k {\hat{\mu }}{\hat{H}}^n, \quad {\hat{J}}_m^n = \nabla \times {\hat{H}}^n + k {\hat{\varepsilon }}{\hat{E}}^n, \end{aligned}$$
(3.20)

and define \(({{\mathcal {J}}}_e^n, {{\mathcal {J}}}_m^n, \hat{{\mathcal {J}}}_e^n, \hat{{\mathcal {J}}}_m^n)\) in \(\Omega \) via \((J_e^n, J_m^n, {\hat{J}}_e^n, {\hat{J}}_m^n) = (\mu {{\mathcal {J}}}_m^n, - \varepsilon {{\mathcal {J}}}_e^n, {\hat{\mu }}\hat{{\mathcal {J}}}_m^n, - {\hat{\varepsilon }}\hat{{\mathcal {J}}}_e^n)\).

It follows that (1.12) holds with \((E, H, {\hat{E}}, {\hat{H}})\) and \((J_e, J_m, {\hat{J}}_e, {\hat{J}}_m)\) replaced by \((E^n, H^n, {\hat{E}}^n, {\hat{H}}^n)\) and \((J_e^n, J_m^n, {\hat{J}}_e^n, {\hat{J}}_m^n)\). Since \((E^n,H^n,\hat{E}^n,\hat{H}^n) \in \mathbf{H} (\Omega )\), it follows that

$$\begin{aligned} {\text {div}}J_e^n = {\text {div}}J_m^n = {\text {div}}{\hat{J}}_e^n = {\text {div}}{\hat{J}}_m^n =0 \text{ in } \Omega . \end{aligned}$$
(3.21)
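Indeed, since \({\text {div}}(\nabla \times \cdot ) = 0\) and \((E^n,H^n,\hat{E}^n,\hat{H}^n) \in \mathbf{H} (\Omega )\), one computes from (3.19), for instance,

$$\begin{aligned} {\text {div}}J_e^n = {\text {div}}(\nabla \times E^n) - k \, {\text {div}}(\mu H^n) = 0 \text{ in } \Omega , \end{aligned}$$

and the other three identities follow in the same way.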

On the other hand, from (3.19) and (3.20), we have, on \(\partial \Omega \),

$$\begin{aligned} ({\hat{J}}_e^n - J_e^n ) \cdot \nu = \big ( \nabla \times ({\hat{E}}^n - E^n) \big ) \cdot \nu - k ({\hat{\mu }}{\hat{H}}^n - \mu H^n) \cdot \nu . \end{aligned}$$

This implies

$$\begin{aligned} ({\hat{J}}_e^n - J_e^n ) \cdot \nu = 0 \text{ on } \partial \Omega , \end{aligned}$$
(3.22)

since \(({\hat{\mu }}{\hat{H}}^n - \mu H^n) \cdot \nu = 0\) on \(\partial \Omega \) and \({\text {div}}_{\partial \Omega } \Big ( ({\hat{E}}^n - E^n) \times \nu \Big ) = 0\) on \(\partial \Omega \) by (3.17). Similarly, we have

$$\begin{aligned} ({\hat{J}}_m^n - J_m^n ) \cdot \nu = 0 \text{ on } \partial \Omega . \end{aligned}$$
(3.23)

Combining (3.21), (3.22), and (3.23) yields that \(({{\mathcal {J}}}_e^n, {{\mathcal {J}}}_m^n, \hat{{\mathcal {J}}}_e^n, \hat{{\mathcal {J}}}_m^n) \in \mathbf{H} (\Omega )\). Consequently,

$$\begin{aligned} (E^n, H^n, {\hat{E}}^n, {\hat{H}}^n) \in {\mathcal {T}}_k \big ( \mathbf{H} (\Omega ) \big ). \end{aligned}$$

The conclusion of Step 1 now follows from (3.18).

Step 2 Proof of (3.16). Fix an arbitrary \((E, \, H, \, \hat{E}, \, \hat{H}) \in \mathbf{H} (\Omega )\). There exist sequences \(({{\mathcal {E}}}^n)_n, \, ({{\mathcal {H}}}^n)_n \subset [H^2(\Omega )]^{3}\) such that

$$\begin{aligned} (\varepsilon {{\mathcal {E}}}^n, \, \mu {{\mathcal {H}}}^n) \rightarrow (\varepsilon E, \, \mu H) \text{ in } [H({\text {div}}, \Omega )]^{2}. \end{aligned}$$
(3.24)

Since

$$\begin{aligned}&{\text {div}}({\hat{\varepsilon }}{\hat{E}}- \varepsilon E) = {\text {div}}({\hat{\mu }}{\hat{H}}- \mu H) = 0 \text{ in } \Omega \quad \text{ and } \\&({\hat{\varepsilon }}{\hat{E}}- \varepsilon E) \cdot \nu = ({\hat{\mu }}{\hat{H}}- \mu H) \cdot \nu = 0 \text{ on } \partial \Omega , \end{aligned}$$

by [13, Theorem 2.8], there exist sequences \((U^n_e)_n, \, (U_m^n)_n \subset [H^2(\Omega )]^3\) such that

$$\begin{aligned} {\text {div}}U^n_e = {\text {div}}U_m^n = 0 \text{ in } \Omega , \end{aligned}$$
(3.25)

and

$$\begin{aligned} (U^n_e, \, U_m^n) \rightarrow ({\hat{\varepsilon }}{\hat{E}}- \varepsilon E, {\hat{\mu }}{\hat{H}}- \mu H) \text{ in } [L^2(\Omega )]^6 \text{ as } n \rightarrow + \infty . \end{aligned}$$
(3.26)

Define \(\hat{{\mathcal {E}}}^n, \, \hat{{\mathcal {H}}}^n \in [L^2(\Omega )]^3\) via

$$\begin{aligned} {\hat{\varepsilon }}\hat{{\mathcal {E}}}^n = U_e^n + \varepsilon {{\mathcal {E}}}^n \text{ in } \Omega \quad \text{ and } \quad {\hat{\mu }}\hat{{\mathcal {H}}}^n = U_m^n + \mu {{\mathcal {H}}}^n \text{ in } \Omega . \end{aligned}$$
(3.27)

From (3.24), (3.25), and (3.26), we have

$$\begin{aligned} ({\hat{\varepsilon }}\hat{{\mathcal {E}}}^n, \, {\hat{\mu }}\hat{{\mathcal {H}}}^n) \rightarrow ({\hat{\varepsilon }}{\hat{E}}, \, {\hat{\mu }}{\hat{H}}) \text{ in } [H({\text {div}}, \Omega )]^{2}. \end{aligned}$$
(3.28)

Using (3.24) and (3.28), we derive from the trace theory that, as \(n \rightarrow + \infty \),

$$\begin{aligned} (\varepsilon {{\mathcal {E}}}^n - \varepsilon E) \cdot \nu , \; (\mu {{\mathcal {H}}}^n - \mu H) \cdot \nu , \; ({\hat{\varepsilon }}\hat{{\mathcal {E}}}^n - {\hat{\varepsilon }}{\hat{E}}) \cdot \nu , \; ({\hat{\mu }}\hat{{\mathcal {H}}}^n - {\hat{\mu }}{\hat{H}}) \cdot \nu \rightarrow 0 \text{ in } H^{-1/2} (\partial \Omega ). \end{aligned}$$
(3.29)

Since \(({\hat{\varepsilon }}{\hat{E}}- \varepsilon E) \cdot \nu = ({\hat{\mu }}{\hat{H}}- \mu H) \cdot \nu = 0 \) on \(\partial \Omega \), we obtain

$$\begin{aligned} ({\hat{\varepsilon }}\hat{{\mathcal {E}}}^n - \varepsilon {{\mathcal {E}}}^n) \cdot \nu , \; ({\hat{\mu }}\hat{{\mathcal {H}}}^n - \mu {{\mathcal {H}}}^n) \cdot \nu \rightarrow 0 \text{ in } H^{-1/2} (\partial \Omega ) \text{ as } n \rightarrow + \infty . \end{aligned}$$
(3.30)

Set

$$\begin{aligned} \alpha _e^n = \frac{1}{|\partial \Omega |} \int _{\partial \Omega } \varepsilon {{\mathcal {E}}}^n \cdot \nu \quad \text{ and } \quad \alpha _m^n = \frac{1}{|\partial \Omega |} \int _{\partial \Omega } \mu {{\mathcal {H}}}^n \cdot \nu , \end{aligned}$$
(3.31)

where \(|\partial \Omega |\) denotes the 2-Hausdorff measure of \(\partial \Omega \). We derive that

$$\begin{aligned} \lim _{n \rightarrow + \infty } \alpha _e^n \mathop {=}^{(3.29)}\frac{1}{|\partial \Omega |} \int _{\partial \Omega } \varepsilon E \cdot \nu = \frac{1}{|\partial \Omega |} \int _{ \Omega } {\text {div}}(\varepsilon E ) = 0. \end{aligned}$$
(3.32)

Similarly, we obtain

$$\begin{aligned} \lim _{n \rightarrow + \infty } \alpha _m^n = 0. \end{aligned}$$
(3.33)

Denote

$$\begin{aligned} H^1_{\sharp } (\Omega ) = \Big \{u \in H^1(\Omega ) : \int _\Omega u = 0 \Big \}. \end{aligned}$$

Let \(\xi _e^n, \, \xi _m^n, \, {\hat{\xi }}_e^n, \, {\hat{\xi }}_m^n \in H^1_{\sharp } (\Omega )\) be the solutions of

$$\begin{aligned}&\left\{ \begin{array}{cl} -{\text {div}}(\varepsilon \nabla \xi _e^n) = - {\text {div}}(\varepsilon {{\mathcal {E}}}^n) &{} \text { in }\Omega , \\ \varepsilon \nabla \xi _e^n \cdot \nu = \alpha _e^n &{} \text { on }\partial \Omega , \end{array} \right. \nonumber \\&\left\{ \begin{array}{cl} -{\text {div}}(\mu \nabla \xi _m^n) = - {\text {div}}(\mu {{\mathcal {H}}}^n) &{} \text { in }\Omega , \\ \mu \nabla \xi _m^n \cdot \nu = \alpha _m^n &{} \text { on }\partial \Omega , \end{array} \right. \end{aligned}$$
(3.34)
$$\begin{aligned}&\left\{ \begin{array}{cl} -{\text {div}}({\hat{\varepsilon }}\nabla {\hat{\xi }}_e^n) = - {\text {div}}({\hat{\varepsilon }}\hat{{\mathcal {E}}}^n) &{} \text { in }\Omega , \\ {\hat{\varepsilon }}\nabla {\hat{\xi }}_e^n \cdot \nu = ({\hat{\varepsilon }}\hat{{\mathcal {E}}}^n - \varepsilon {{\mathcal {E}}}^n) \cdot \nu + \alpha _e^n &{} \text { on }\partial \Omega , \end{array} \right. \nonumber \\&\left\{ \begin{array}{cl} -{\text {div}}({\hat{\mu }}\nabla {\hat{\xi }}_m^n) = - {\text {div}}({\hat{\mu }}\hat{{\mathcal {H}}}^n) &{} \text { in }\Omega , \\ {\hat{\mu }}\nabla {\hat{\xi }}_m^n \cdot \nu = ({\hat{\mu }}\hat{{\mathcal {H}}}^n - \mu {{\mathcal {H}}}^n) \cdot \nu + \alpha _m^n &{} \text { on }\partial \Omega . \end{array} \right. \end{aligned}$$
(3.35)
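Recall that, for a symmetric, uniformly elliptic \({{\mathcal {M}}}\), the Neumann problem \(-{\text {div}}({{\mathcal {M}}}\nabla \xi ) = f\) in \(\Omega \), \({{\mathcal {M}}}\nabla \xi \cdot \nu = g\) on \(\partial \Omega \) admits a unique solution \(\xi \in H^1_{\sharp }(\Omega )\) if and only if the compatibility condition

$$\begin{aligned} \int _\Omega f + \int _{\partial \Omega } g = 0 \end{aligned}$$

holds; this is exactly what is checked for (3.34) and (3.35) in (3.36) and the two identities that follow.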

By the definition of \(\alpha _e^n\) and \(\alpha _m^n\) (3.31), we have

$$\begin{aligned} \int _{\Omega } {\text {div}}(\varepsilon {{\mathcal {E}}}^n) = \int _{\partial \Omega } \alpha _e^n \quad \text{ and } \quad \int _{\Omega } {\text {div}}(\mu {{\mathcal {H}}}^n) = \int _{\partial \Omega } \alpha _m^n. \end{aligned}$$
(3.36)

It follows that \(\xi _e^n\) and \(\xi _m^n\) are well defined and uniquely determined. We also have

$$\begin{aligned} \int _{\Omega } {\text {div}}({\hat{\varepsilon }}\hat{{\mathcal {E}}}^n) - \int _{\partial \Omega } \Big ( ({\hat{\varepsilon }}\hat{{\mathcal {E}}}^n - \varepsilon {{\mathcal {E}}}^n) \cdot \nu + \alpha _e^n \Big ) \mathop {=}^{(3.31)}0 \end{aligned}$$

and

$$\begin{aligned} \int _{\Omega } {\text {div}}({\hat{\mu }}\hat{{\mathcal {H}}}^n) - \int _{\partial \Omega } \Big ( ({\hat{\mu }}\hat{{\mathcal {H}}}^n - \mu {{\mathcal {H}}}^n) \cdot \nu + \alpha _m^n \Big ) \mathop {=}^{(3.31)} 0. \end{aligned}$$

Hence, \({\hat{\xi }}_e^n\) and \({\hat{\xi }}_m^n\) are well defined and uniquely determined as well. From the regularity theory of elliptic equations, it follows that

$$\begin{aligned} (\xi _e^n,\xi _m^n,\hat{\xi }^n_e,\hat{\xi }^n_m) \in [H^2(\Omega )]^4. \end{aligned}$$
(3.37)

Using (3.30), (3.32), and (3.33), we derive that

$$\begin{aligned} \xi _e^n, \, \xi _m^n, \, {\hat{\xi }}_e^n, \, {\hat{\xi }}_m^n \rightarrow 0 \text{ in } H^{1} (\Omega ) \text{ as } n \rightarrow + \infty . \end{aligned}$$
(3.38)

Set

$$\begin{aligned} (E_n, H_n, \hat{E}_n, \hat{H}_n) = ({{\mathcal {E}}}^n - \nabla \xi _e^n, \, {{\mathcal {H}}}^n - \nabla \xi _m^n, \, \hat{{\mathcal {E}}}^n - \nabla {\hat{\xi }}_e^n, \, \hat{{\mathcal {H}}}^n - \nabla {\hat{\xi }}_m^n) \text{ in } \Omega . \end{aligned}$$
(3.39)

We have, by (3.24), (3.28), and (3.38),

$$\begin{aligned} (E_n, H_n, \hat{E}_n, \hat{H}_n) \rightarrow (E, H, {\hat{E}}, {\hat{H}}) \text{ in } [L^2(\Omega )]^{12}. \end{aligned}$$
(3.40)

From the definition of \(\xi _e^n, \, \xi _m^n, \, {\hat{\xi }}_e^n, \, {\hat{\xi }}_m^n\), we have

$$\begin{aligned} {\text {div}}(\varepsilon E_n) = {\text {div}}( {\hat{\varepsilon }}{\hat{E}}_n) = {\text {div}}(\mu H_n) = {\text {div}}({\hat{\mu }}{\hat{H}}_n) = 0 \text{ in } \Omega , \end{aligned}$$
(3.41)

and

$$\begin{aligned} ({\hat{\varepsilon }}{\hat{E}}_n - \varepsilon E_n) \cdot \nu = ({\hat{\mu }}{\hat{H}}_n - \mu H_n) \cdot \nu = 0 \text{ on } \partial \Omega . \end{aligned}$$
(3.42)

Combining (3.37), (3.41), and (3.42) yields

$$\begin{aligned} (E_n, H_n, {\hat{E}}_n, {\hat{H}}_n) \in \mathbf{H} (\Omega ) \cap [H^1(\Omega )]^{12}. \end{aligned}$$
(3.43)

The conclusion of Step 2 now follows from (3.40).

The proof is complete. \(\square \)

Remark 7

One can rewrite (1.1) and (1.2) under the following form:

$$\begin{aligned} \left\{ \begin{array}{c} \nabla \times \big (\mu ^{-1} (\nabla \times E) \big )- \omega ^2 \varepsilon E =0 \text{ in } \Omega , \\ \nabla \times \big ({\hat{\mu }}^{-1} (\nabla \times {\hat{E}}) \big )- \omega ^2 {\hat{\varepsilon }}{\hat{E}}=0 \text{ in } \Omega , \\ {\hat{E}}\times \nu = E \times \nu \text{ on } \partial \Omega , \\ \big ( {\hat{\mu }}^{-1 }(\nabla \times {\hat{E}}) \big ) \times \nu = \big ( \mu ^{-1 }(\nabla \times E) \big ) \times \nu \text{ on } \partial \Omega . \end{array} \right. \end{aligned}$$
(3.44)

Then, a complex number \(\omega \in \mathbb {C}\) is called a transmission eigenvalue if there exists a nonzero solution \((E, {\hat{E}}) \in [L^2(\Omega )]^{6}\) of (3.44). Theorem 1.1 can be restated as follows:

Completeness: Assume that \(\varepsilon , \, \mu , \, {\hat{\varepsilon }}, \, \hat{\mu } \in [C^2({\bar{\Omega }})]^{3 \times 3}\) and (1.3) holds. Then the space spanned by the generalized eigenfunctions is complete in \(\mathbf{G} (\Omega )\), i.e., dense in \(\mathbf{G} (\Omega )\), where

$$\begin{aligned} \mathbf{G} (\Omega )= & {} \Big \{ (u, {\hat{u}}) \in [H({\text {curl}}, \Omega )]^2; {\text {div}}(\varepsilon u) ={\text {div}}({\hat{\varepsilon }}{\hat{u}}) = 0 \text{ in } \Omega , \nonumber \\&({\hat{u}}- u ) \times \nu = 0 \text{ on } \partial \Omega , \, \big ( {\hat{\mu }}^{-1}(\nabla \times {\hat{u}}) \big ) \times \nu - \big ( \mu ^{-1}(\nabla \times u) \big ) \times \nu = 0 \text{ on } \partial \Omega \Big \}\nonumber \\ \end{aligned}$$
(3.45)

Remark 8

In [15], the authors studied the completeness of generalized eigenfunctions in the isotropic case under the assumption that

$$\begin{aligned}&\varepsilon = \mu = {\hat{\mu }}= I \text{ in } \Omega , \\&{\hat{\varepsilon }}\in C^\infty ({\bar{\Omega }}) \text{ and } {\hat{\varepsilon }} \text{ is } \text{ constant } \text{ different } \text{ from } \text{1 } \text{ in } \text{ a } \text{ neighborhood } \text{ of } \partial \Omega . \end{aligned}$$

They considered the system in the form (3.44). Since they require \(\varepsilon = \mu = I\), their setting and ours are different.

3.3 Proof of Theorem 1.1

Applying Proposition 3.1, one has

  • \({\mathcal {T}}_k^2: \mathbf{H} (\Omega ) \rightarrow \mathbf{H} (\Omega )\) is a Hilbert–Schmidt operator.

  • For \(\theta \in \mathbb {R}\) with \(|\Im (e^{2 i \theta })| > 0\), \(e^{i \theta }\) is a direction of minimal growth of the modified resolvent of \({\mathcal {T}}_k^2\).

Applying the theory of Hilbert–Schmidt operators, see, e.g., [1, Theorem 16.4], one derives that

(1) the closure of the space spanned by all generalized eigenfunctions of \({\mathcal {T}}_{k}^{2}\) is equal to \(\overline{{\mathcal {T}}_k^2(\mathbf{H} (\Omega ))}\). (The closures are taken with respect to the \([L^2(\Omega )]^{12}\)-norm.)

On the other hand, we have

(2) \(\overline{{\mathcal {T}}_k^2(\mathbf{H} (\Omega ))}=\mathbf{H} (\Omega )\) since

$$\begin{aligned} \begin{aligned} \mathbf{H} (\Omega )&= \overline{{\mathcal {T}}_k(\mathbf{H} (\Omega ))}&(\text{ by } \text{ Proposition } 3.2) \\&= \overline{{\mathcal {T}}_k \overline{{\mathcal {T}}_k(\mathbf{H} (\Omega ))}}&(\text{ by } \text{ Proposition } 3.2) \\&= \overline{{\mathcal {T}}_k^2(\mathbf{H} (\Omega ))}&\text{(by } \text{ the } \text{ continuity } \text{ of } {\mathcal {T}}_k). \end{aligned} \end{aligned}$$

(3) The space spanned by the generalized eigenfunctions of \({\mathcal {T}}_{k}^{2}\) associated with the nonzero eigenvalues of \({\mathcal {T}}_{k}^{2}\) is equal to the space spanned by the generalized eigenfunctions of \({\mathcal {T}}_k\) associated with the nonzero eigenvalues of \({\mathcal {T}}_k\). This can be done as in the last part of the proof of [1, Theorem 16.5]. Consequently, the space spanned by all generalized eigenfunctions of \({\mathcal {T}}_{k}^{2}\) is equal to the space spanned by all generalized eigenfunctions of \({\mathcal {T}}_k\).

The conclusion now follows from (1), (2), and (3). \(\square \)

4 An upper bound for the counting function: Proof of Theorem 1.2

Let \({\widetilde{\lambda }}_j\) be the nonzero eigenvalues of \({\mathcal {T}}_k\). Note that the nonzero eigenvalues of \({\mathcal {T}}_k^2\), counted according to multiplicity, are \({\widetilde{\lambda }}_j^2\). (This can be proved as in the last part of the proof of [1, Theorem 16.5].) Applying the spectral theory of Hilbert–Schmidt operators, see, e.g., [1, Theorem 12.14], to \({\mathcal {T}}_k^2\), we have

$$\begin{aligned} \sum _{j} |{\widetilde{\lambda }}_j^2|^{2} \le |\Vert {\mathcal {T}}_k^2 \Vert |^2. \end{aligned}$$
(4.1)

Applying i) of Proposition 3.1, we obtain

$$\begin{aligned} \sum _{j} |{\widetilde{\lambda }}_j|^{4} \le C |k|^{-1}. \end{aligned}$$

Note that \(\lambda _j\) is a transmission eigenvalue if and only if \((i \lambda _j - k)^{-1}\) is an eigenvalue of \({\mathcal {T}}_k\), and they have the same multiplicity. It follows that

$$\begin{aligned} \sum _{j} \frac{1}{ |i \lambda _j - k|^{4}} \le C |k|^{-1}. \end{aligned}$$
(4.2)

Note that if \(|\lambda _j| \le |k|\), then \(|i \lambda _j - k| \le 2|k|\). We then derive from (4.2) that

$$\begin{aligned} \frac{1}{|k|^4}\sum _{j: |\lambda _j|\le |k|} 1\le C |k|^{-1}. \end{aligned}$$

This implies

$$\begin{aligned} {{\mathcal {N}}}(|k|) \le C |k|^3. \end{aligned}$$
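In the last step, the factor \(2^4 = 16\) is absorbed into the constant: for \(|\lambda _j| \le |k|\), one has \(|i \lambda _j - k| \le |\lambda _j| + |k| \le 2 |k|\), so each corresponding term of (4.2) is at least \((2|k|)^{-4}\), and summing over such \(j\) gives

$$\begin{aligned} \frac{{{\mathcal {N}}}(|k|)}{(2|k|)^{4}} \le \sum _{j: |\lambda _j|\le |k|} \frac{1}{ |i \lambda _j - k|^{4}} \mathop {\le }^{(4.2)} C |k|^{-1}, \end{aligned}$$

which yields \({{\mathcal {N}}}(|k|) \le 16 C |k|^3\).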

The proof is complete. \(\square \)