The theory of extensions of symmetric operators on a Hilbert space has nowadays found numerous applications in analysis (the moment problem) and in boundary problems for differential equations. The theory of extensions of operators with finite deficiency indices has been worked out in particular detail. Such operators always arise in the study of one-dimensional boundary problems. Boundary problems for partial differential equations (of elliptic type), on the other hand, lead, generally speaking, to operators with infinite deficiency indices. It is also important that these operators ordinarily turn out to be semi-bounded. In this connection, the theory of extensions of semi-bounded symmetric operators with infinite deficiency indices is of great interest for applications. The main results in this field are due to K. Friedrichs and M. G. Kreı̆n.

In his article [1] K. Friedrichs proposed a special method for extending a semi-bounded symmetric operator to a self-adjoint one, which is based on the closure of the associated quadratic form. The resulting extension has the same lower bound as the original operator.

The question of self-adjoint extensions of semi-bounded symmetric operators has been most thoroughly investigated in the article of M. G. Kreı̆n [2]. With the help of a fractional linear transformation, M. G. Kreı̆n reduced the problem to the construction of extensions of a bounded symmetric operator defined on a non-dense set. In this way M. G. Kreı̆n found that among all semi-bounded self-adjoint extensions of a semi-bounded symmetric operator there is one (the “rigid” extension) which possesses a number of remarkable extremal properties. M. G. Kreı̆n also showed that extension by Friedrichs’ method always leads to the rigid extension.

Among other works on the extension theory, the article of M. I. Višik [3] is of great interest. Having relaxed the assumption that the initial operator is symmetric, M. I. Višik considers extensions which possess certain solvability properties, such as extensions with a bounded inverse and the like. In the case when the initial operator is symmetric, special consideration is given to its self-adjoint extensions. M. I. Višik applies his results to the study of general boundary problems for second order elliptic differential equations.

Let us briefly describe the method of M. I. Višik, restricting ourselves to the case of symmetric operators. With every extension of the operator M. I. Višik associates a certain operator B acting in the subspace of zeros of the adjoint of the original operator. The solvability properties of the extension can be characterised in terms of the associated operator B. On the other hand, in the applications to boundary problems M. I. Višik finds that the operator B is directly determined by the boundary conditions of the problem, provided these conditions are expressed in a certain “canonical” form. In this way the connection between the properties of the extensions of the original differential operators and the form of the boundary conditions is established.

There arises the question of characterising further properties of the extensions, and not only their solvability properties, in terms of the operator B (i.e., in terms of boundary conditions). Particularly interesting for applications, in connection with the theory of M. G. Kreı̆n, is the characterisation of the self-adjoint extensions of a positive definite symmetric operator.

In the present article various questions on this subject are considered. A characterisation of the semi-bounded self-adjoint extensions of the initial operator is given in terms of the operator B, and the quadratic forms associated with these extensions are described. Besides that, a theorem on the negative part of the spectrum of semi-bounded extensions is proved, and the symmetric positive definite extensions of the initial operator are described.

A brief communication on the results of this article has been published in Doklady Acad. Nauk SSSR [4]. Applications of the extension theory to multi-dimensional boundary problems were considered in the author’s notes [5,6,7].

1 Some Results from the Theory of Operator Extensions

For the purposes of the subsequent presentation we list in this Section several results obtained by M. G. Kreı̆n and M. I. Višik.

Let us first of all consider some auxiliary notions introduced by K. Friedrichs and M. G. Kreı̆n. Let T be a semi-bounded symmetric operator with dense domain of definition \(\mathscr {D}(T)\) in the Hilbert space \(\mathscr {H}\). With each such operator we associate a linear set \(\mathscr {D}[T]\), the closure of \(\mathscr {D}(T)\) in the sense of T-convergence. The latter is defined as follows: \(g_n\xrightarrow []{T}g\) if \(g_n\in \mathscr {D}(T)\), \(g_n\to g\), and \((Tg_n-Tg_m,\,g_n-g_m)\to 0\) as \(n,m\to \infty \). Under this closure the functional (Tf, g) extends naturally to \(\mathscr {D}[T]\). Following M. G. Kreı̆n, we shall denote this extension by T[f, g]. The set \(\mathscr {D}[T]\) may be regarded as a complete Hilbert space if the scalar product is introduced by means of the formula

$$\displaystyle \begin{aligned} (f,g)_T\;=\;T[f,g]-\beta(f,g)\end{aligned} $$

with arbitrary β < m(T). If the operator T is positive and self-adjoint then, as M. G. Kreı̆n has shown,

$$\displaystyle \begin{aligned} \mathscr{D}[T]\;=\;\mathscr{D}(T^{1/2})\quad \text{ and }\quad T[f,g]\;=\;(T^{1/2}f,T^{1/2}g)\,.\end{aligned} $$
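For instance (a standard illustration), if T is the self-adjoint realisation of \(-d^2/dx^2\) in \(L^2(0,1)\) with the boundary conditions f(0) = f(1) = 0, then

$$\displaystyle \begin{aligned} \mathscr{D}[T]\;=\;\mathscr{D}(T^{1/2})\;=\;\{f\in L^2(0,1):\,f\ \text{absolutely continuous},\ f'\in L^2(0,1),\ f(0)=f(1)=0\}\end{aligned} $$

and

$$\displaystyle \begin{aligned} T[f,g]\;=\;\int_0^1 f'(x)\,\overline{g'(x)}\,\mathrm{d}x\,. \end{aligned}$$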

We remark that for the operator \(T_\alpha =T+\alpha I\) one has

$$\displaystyle \begin{aligned} \mathscr{D}[T_\alpha]\;=\;\mathscr{D}[T]\,,\qquad T_\alpha[f,g]\;=\;T[f,g]+\alpha(f,g)\,.\end{aligned} $$

Let S be a closed symmetric operator with positive lower bound (positive definite)

$$\displaystyle \begin{aligned} (Sf,f)\;\geqslant\;\gamma(f,f)\qquad (\gamma=m(S)>0)\end{aligned} $$

for all \(f\in \mathscr {D}(S)\). The operator S admits an infinite set of self-adjoint extensions, at least one of which has the same lower bound γ as the initial operator. In particular, the extension obtained by the method of K. Friedrichs [1] has this property. Following M. G. Kreı̆n we shall denote this extension by S μ and call it the rigid extension of the operator S. The method of K. Friedrichs consists in constructing the set \(\mathscr {D}[S]\) and the functional S[f, g] on it. It turns out that \(\mathscr {D}[S]\) is the domain of definition of the square root of a certain self-adjoint extension S μ of the operator S:

$$\displaystyle \begin{aligned} \mathscr{D}(S)\;\subset\;\mathscr{D}(S_\mu)\;\subset\;\mathscr{D}[S]\;=\;\mathscr{D}[S_\mu]\;=\;\mathscr{D}(S_\mu^{1/2})\,.\end{aligned} $$

Besides that, for all \(f,g\in \mathscr {D}[S]\)

$$\displaystyle \begin{aligned} S[f,g]\;=\;S_\mu[f,g]\;=\;(S_\mu^{1/2}f,S_\mu^{1/2}g)\,.\end{aligned} $$

M. G. Kreı̆n showed that S μ is the unique semi-bounded self-adjoint extension of the operator S whose domain of definition lies in \(\mathscr {D}[S]\).
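The simplest example may clarify the situation: take for S the minimal operator generated by \(-d^2/dx^2\) in \(L^2(0,1)\), i.e. the closure of \(-d^2/dx^2\) given on the smooth functions vanishing near the endpoints; here \(\gamma =\pi ^2\). Then \(\mathscr {D}[S]\) consists of the absolutely continuous functions f with \(f'\in L^2(0,1)\) and f(0) = f(1) = 0, with \(S[f,g]=\int _0^1 f'\,\overline {g'}\,\mathrm {d}x\), and the rigid extension is the Dirichlet realisation:

$$\displaystyle \begin{aligned} S_\mu f\;=\;-f''\,,\qquad \mathscr{D}(S_\mu)\;=\;\{f:\,f'\ \text{absolutely continuous},\ f''\in L^2(0,1),\ f(0)=f(1)=0\}\,. \end{aligned}$$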

Denote by \(S^*\) the adjoint of the operator S and by U the set of solutions of the equation \(S^*u=0\). It is easy to see that \(U=\mathscr {H}\ominus R(S)\). M. G. Kreı̆n has shown that for every semi-bounded self-adjoint extension \(\widetilde {S}\) the set \(\mathscr {D}[\widetilde {S}]\) decomposes into the direct sum

$$\displaystyle \begin{aligned} \mathscr{D}[\widetilde{S}]\;=\;\mathscr{D}[S]\dotplus\mathscr{D}[\widetilde{S}]\cap U\,,\end{aligned} $$

whence \(\widetilde {S}[f,g]=S[f,g]\), if \(f,g\in \mathscr {D}[S]\), and \(\widetilde {S}[f,u]=\widetilde {S}[u,f]=0\), if \(f\in \mathscr {D}[S]\) and \(u\in \mathscr {D}[\widetilde {S}]\cap U\).
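This decomposition is easily traced in an example: for the minimal operator S generated by \(-d^2/dx^2\) in \(L^2(0,1)\) one has U = {a + bx} (the solutions of \(-u''=0\)), and for the Neumann realisation \(\widetilde {S}\) (boundary conditions f′(0) = f′(1) = 0) the set \(\mathscr {D}[\widetilde {S}]\) consists of all absolutely continuous f with \(f'\in L^2(0,1)\); the decomposition then reads

$$\displaystyle \begin{aligned} \mathscr{D}[\widetilde{S}]\;=\;\{f:\,f(0)=f(1)=0\}\dotplus\{a+bx\}\,, \end{aligned}$$

every such f being the sum of its linear interpolant (a = f(0), b = f(1) − f(0)) and a function from \(\mathscr {D}[S]\). One checks directly that \(\widetilde {S}[f,u]=\int _0^1 f'\,\overline {u'}\,\mathrm {d}x=\overline {b}\,(f(1)-f(0))=0\) for \(f\in \mathscr {D}[S]\) and u = a + bx, in accordance with the above.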

Let us highlight another proposition of M. G. Kreı̆n.

If \(\widetilde {S}_1\) and \(\widetilde {S}_2\) are semi-bounded self-adjoint extensions of the operator S, then for the inequality

$$\displaystyle \begin{aligned} (\widetilde{S}_1+\alpha I)^{-1}\;\leqslant\;(\widetilde{S}_2+\alpha I)^{-1}\end{aligned} $$

to hold for at least one \(\alpha >-m(\widetilde {S}_k)\) (k = 1, 2) (and hence for all such α) the following conditions are necessary and sufficient:

$$\displaystyle \begin{aligned} \mathscr{D}[\widetilde{S}_1]\cap U\;\subset\;\mathscr{D}[\widetilde{S}_2]\cap U\quad \text{ and }\quad \widetilde{S}_2[u,u]\;\leqslant\;\widetilde{S}_1[u,u]\qquad (u\in\mathscr{D}[\widetilde{S}_1]\cap U)\,.\end{aligned} $$

It follows in particular that

$$\displaystyle \begin{aligned} S_{\mu}^{-1}\;\leqslant\;\widetilde{S}^{-1},\text{ if }\;m(\widetilde{S})>0.\end{aligned} $$

Below we shall also use other results of M. G. Kreı̆n; their formulations will be given in the course of the discussion.

To conclude we will consider the following important theorem of M. I. Višik.

Theorem 1

The domain of definition of the operator \(S^*\), the adjoint of S, decomposes into the direct sum

$$\displaystyle \begin{aligned} \mathscr{D}(S^*)\;=\;\mathscr{D}(S)\dotplus S_\mu^{-1}U\dotplus U. {}\end{aligned} $$
(1)

In order for an operator \(\widetilde {S}\) to be a self-adjoint extension of the operator S it is necessary and sufficient that \(\widetilde {S}\) be defined as the restriction of \(S^*\) to the direct sum

$$\displaystyle \begin{aligned} \mathscr{D}(\widetilde{S})\;=\;\mathscr{D}(S)\dotplus(S_\mu^{-1}+B)\widetilde{U}_1\dotplus U_0\,; {} \end{aligned} $$
(2)

where U 1 is some subspace of U, B is a self-adjoint operator in U 1, \(\widetilde {U}_1=\mathscr {D}(B)\) is a set dense in U 1, and \(U_0=U\ominus U_1\).

We remark that the statements of the theorem remain true if one replaces the rigid extension S μ in the decompositions (1) and (2) with an arbitrary fixed self-adjoint extension which has an everywhere defined bounded inverse in \(\mathscr {H}\).
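As an illustration of (1), let S be the minimal operator generated by \(-d^2/dx^2\) in \(L^2(0,1)\). Then \(\mathscr {D}(S)\) consists of those g (with g′ absolutely continuous, \(g''\in L^2\)) for which g(0) = g(1) = g′(0) = g′(1) = 0, U = {a + bx}, and S μ is the Dirichlet realisation. Solving \(-w''=1\) and \(-w''=x\) with zero boundary values gives \(S_\mu ^{-1}1=\tfrac 12 x(1-x)\) and \(S_\mu ^{-1}x=\tfrac 16 x(1-x^2)\), so that (1) becomes

$$\displaystyle \begin{aligned} \mathscr{D}(S^*)\;=\;\mathscr{D}(S)\,\dotplus\,\big\{c_1\tfrac12 x(1-x)+c_2\tfrac16 x(1-x^2)\big\}\,\dotplus\,\{a+bx\}\,; \end{aligned}$$

the four-dimensional complement of \(\mathscr {D}(S)\) corresponds to the four boundary values g(0), g(1), g′(0), g′(1).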

For the convenience of the future discussion we will provide here a relatively easy proof of Theorem 1, somewhat different from the proof of M. I. Višik.

First we check that the decomposition (1) is true. It is clear that

$$\displaystyle \begin{aligned} \mathscr{D}(S)+ S_\mu^{-1}U + U\;\subseteq\;\mathscr{D}(S^*)\,, \end{aligned}$$

since \(\mathscr {D}(S)\subset \mathscr {D}(S^*)\), \(U\subset \mathscr {D}(S^*)\), and \(S_\mu ^{-1}U\subset \mathscr {D}(S_\mu )\subset \mathscr {D}(S^*)\). Let us prove the opposite inclusion. Let \(g\in \mathscr {D}(S^*)\), \(S^*g=h\), and \(f=S_\mu ^{-1}h\). From \(S^*(g-f)=S^*g-S_\mu f=h-h=0\) it follows that \(u=g-f\in U\). Since \(\mathscr {H}=R(S)\oplus U\), we have \(h=h_0+\overline {u}\), where h 0 ∈ R(S) and \(\overline {u}\in U\). It follows that \(f=S_\mu ^{-1}(h_0+\overline {u})=S^{-1}h_0+S_\mu ^{-1}\overline {u}\) and \(g=f_0+ S_\mu ^{-1}\overline {u}+u\), where \(f_0=S^{-1}h_0\in \mathscr {D}(S)\).

So

$$\displaystyle \begin{aligned} \mathscr{D}(S^*)\;\subseteq\;\mathscr{D}(S)\dotplus S_{\mu}^{-1}U \dotplus U\,. \end{aligned}$$

It remains to check that the sum (1) is direct. Let \(g=f_0+S_\mu ^{-1}\overline {u}+u=0\). Then \(S^*g=Sf_0+\overline {u}=0\) and, since \(Sf_0\perp \overline {u}\), \(Sf_0=\overline {u}=0\). It follows that \(f_0=S^{-1}(Sf_0)=0\) and \(S_\mu ^{-1}\overline {u}=0\). Since \(u=g-f_0-S_\mu ^{-1}\overline {u}=0\), the sum (1) is proved to be direct.

Let us proceed to the proof of the decomposition (2).

Necessity

Let \(\widetilde {S}\) be a self-adjoint extension of S and U 0 be the subspace of the solutions to \(\widetilde {S}u_0=0\). Clearly, U 0 ⊆ U. We introduce this notation: U 1 = U ⊖ U 0, \(\mathscr {H}_+=\mathscr {H}\ominus U_0=R(S)\oplus U_1\). Since the set \(R(\widetilde {S})\) is dense in \(\mathscr {H}_+\), \(R(\widetilde {S})\) can be represented as

$$\displaystyle \begin{aligned} R(\widetilde{S})\;=\;R(S)\dotplus \widetilde{U}_1\,, {} \end{aligned} $$
(3)

where \(\widetilde {U}_1\) is some dense set in U 1. The operator \(\widetilde {S}\), if considered only in \(\mathscr {H}_+\), has on \(R(\widetilde {S})\) the inverse operator \(\widetilde {S}^{-1}\). It is known that the operator \(\widetilde {S}^{-1}\) is self-adjoint in \(\mathscr {H}_+\) and \(\widetilde {S}^{-1}R(\widetilde {S})=P_+\mathscr {D}(\widetilde {S})\); here P + is the projection operator onto \(\mathscr {H}_+\). We will extend \(\widetilde {S}^{-1}\) to the whole \(\mathscr {H}\) maintaining the self-adjointness by considering it to be 0 on U 0. Applying \(\widetilde {S}^{-1}\) to the decomposition (3) we find that

$$\displaystyle \begin{aligned} P_+\mathscr{D}(\widetilde{S})\;=\;P_+\mathscr{D}(S)+\widetilde{S}^{-1}\widetilde{U}_1\,. {} \end{aligned} $$
(4)

Since \(\mathscr {D}(\widetilde {S})=P_+\mathscr {D}(\widetilde {S})+U_0\) and \(P_+\mathscr {D}(S)+U_0=\mathscr {D}(S)+U_0\), (4) can be written as

$$\displaystyle \begin{aligned} \mathscr{D}(\widetilde{S})\;=\;\mathscr{D}(S)+\widetilde{S}^{-1}\widetilde{U}_1+U_0\,. {} \end{aligned} $$
(5)

Let us consider the operator \(B=\widetilde {S}^{-1}-P_+ S_\mu ^{-1}P_+\). This is a self-adjoint operator with domain of definition \(\mathscr {D}(B)=\mathscr {D}(\widetilde {S}^{-1})=R(\widetilde {S})\dotplus U_0\) dense in \(\mathscr {H}\). It is clear that BU 0 = 0. We shall show that BR(S) = 0. Indeed, let h 0 ∈ R(S) and f 0 = S −1 h 0. Then,

$$\displaystyle \begin{aligned} B h_0\;=\;\widetilde{S}^{-1}h_0-P_+ S_\mu^{-1} P_+ h_0\;=\;P_+ f_0-P_+ S_\mu^{-1} h_0 \;=\; P_+ f_0 - P_+ f_0 \;=\; 0\,. \end{aligned}$$

We see that the subspaces U 0 and R(S) are invariant under the operator B; hence B, considered only on the domain \(\widetilde {U}_1\), is a self-adjoint operator in U 1. By means of the operator B let us rewrite the decomposition (5) as

$$\displaystyle \begin{aligned} \mathscr{D}(\widetilde{S}) \;&=\;\mathscr{D}(S)+[P_+ S_\mu^{-1} P_+ + (\widetilde{S}^{-1} - P_+ S_\mu^{-1} P_+)]\widetilde{U}_1 + U_0 \\ & =\;\mathscr{D}(S)+(P_+ S_\mu^{-1} +B) \widetilde{U}_1 +U_0 \\ & =\;\mathscr{D}(S)+P_+(S_\mu^{-1} +B) \widetilde{U}_1 +U_0\,. \end{aligned}$$

Since

$$\displaystyle \begin{aligned} P_+(S_\mu^{-1} +B) \widetilde{U}_1 +U_0\;=\;(S_\mu^{-1} +B) \widetilde{U}_1 +U_0\,, \end{aligned}$$

we can finally decompose \(\mathscr {D}(\widetilde {S})\) as follows:

$$\displaystyle \begin{aligned} \mathscr{D}(\widetilde{S})\;=\;\mathscr{D}(S)+(S_\mu^{-1} +B) \widetilde{U}_1 +U_0\,. \end{aligned}$$

The latter sum is direct, since it is a part of the direct sum (1). Necessity is proved.

Sufficiency

Let U 0 be some subspace of U and B be some self-adjoint operator in U 1 = U ⊖ U 0, whose domain of definition we denote by \(\widetilde {U}_1\). Let us form the direct sum

$$\displaystyle \begin{aligned} \mathscr{D}(\widetilde{S})\;=\;\mathscr{D}(S)\dotplus(S_\mu^{-1} +B) \widetilde{U}_1 \dotplus U_0 \end{aligned}$$

and define on it the operator \(\widetilde {S}\) as the restriction of the operator \(S^*\). Clearly, \(\widetilde {S}\) is an extension of the operator S. Let us show that this extension is self-adjoint. Suppose that for all \(g\in \mathscr {D}(\widetilde {S})\)

$$\displaystyle \begin{aligned} (\widetilde{S}g,t)\;=\;(g,t^*)\,. {} \end{aligned} $$
(6)

One has to show that \(t\in \mathscr {D}(\widetilde {S})\) and \(\widetilde {S}t=t^*\). Clearly \(t\in \mathscr {D}(S^*)\) and \(t^*=S^*t\), and thus, according to the decomposition (1), \(t=\varphi _0+S_\mu ^{-1}\overline {v}+v\), \(t^*=S\varphi _0+\overline {v}\), where \(\varphi _0\in \mathscr {D}(S)\), \(v,\overline {v}\in U\). Let us consider (6) with g equal to an arbitrary element u 0 ∈ U 0. We get

$$\displaystyle \begin{aligned} (t^*,u_0)\;=\;(t,\widetilde{S} u_0)\;=\;0\,, \end{aligned}$$

i.e., \(t^*\perp U_0\). Since \(t^*=S\varphi _0+\overline {v}\) and \(S\varphi _0\perp U\), it follows that \(\overline {v}\in U_1\). Let us represent v in the form v = v 0 + v 1, where v 0 ∈ U 0, v 1 ∈ U 1. Let us consider (6) with g equal to an arbitrary element of the form

$$\displaystyle \begin{aligned} g\;=\;f_0+S_\mu^{-1}u_1+B u_1\,,\quad \text{ where }\quad f_0\in\mathscr{D}(S),\,u_1\in\widetilde{U}_1\,. \end{aligned}$$

For brevity denote \(\varphi =\varphi _0+S_\mu ^{-1}\overline {v}\), \(f=f_0+S_\mu ^{-1}u_1\), so that \(\varphi ,f\in \mathscr {D}(S_\mu )\). Since \(\widetilde {S}g=S^*g=S^*f+S^* Bu_1=S_\mu f\) and \(t^*=S^*t=S_\mu \varphi \), we get from (6) the equality

$$\displaystyle \begin{aligned} (S_\mu f,\varphi +v_1+v_0)\;=\;(f+B u_1,S_\mu\varphi)\,. {} \end{aligned} $$
(7)

Since (S μ f, φ) = (f, S μ φ), S μ f = Sf 0 + u 1, and \(S_\mu \varphi =S\varphi _0+\overline {v}\), it follows from (7) that

$$\displaystyle \begin{aligned} (Sf_0+u_1,v_1+v_0)\;=\;(B u_1,S\varphi_0+\overline{v})\,. \end{aligned}$$

Finally, noting that Sf 0 ⊥ v 1 + v 0, u 1 ⊥ v 0, and \(Bu_1\perp S\varphi _0\), we find the relation

$$\displaystyle \begin{aligned} (Bu_1,\overline{v})\;=\;(u_1,v_1)\,. {} \end{aligned} $$
(8)

Since u 1 is an arbitrary element from \(\mathscr {D}(B)=\widetilde {U}_1\), \(\overline {v},v_1\in U_1\), and the operator B is self-adjoint in U 1, it follows from (8) that \(\overline {v}\in \mathscr {D}(B)\) and \(v_1=B\overline {v}\), and thus

$$\displaystyle \begin{aligned} t\;=\;\varphi_0+(S_\mu^{-1}+B)\overline{v}+v_0\,. \end{aligned}$$

The last equality shows that \(t\in \mathscr {D}(\widetilde {S})\). This way, the operator \(\widetilde {S}\) is self-adjoint and the theorem is proved.

Remark

The formula

$$\displaystyle \begin{aligned} B\;=\;\widetilde{S}^{-1}-P_+ S_\mu^{-1}P_+ {} \end{aligned} $$
(9)

obtained in the course of the proof allows one to reconstruct the operator B uniquely from the extension \(\widetilde {S}\). It also follows from this formula that the boundedness of the operator B is necessary and sufficient for the boundedness of the inverse operator \(\widetilde {S}^{-1}\) considered on \(R(\widetilde {S})\). If in addition U 0 consists only of the zero element, then \(\widetilde {S}\) has a bounded inverse defined everywhere on \(\mathscr {H}\).
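In this connection it is worth noting the two extreme cases of the correspondence \(\widetilde {S}\leftrightarrow B\), which follow directly from the decomposition (2) and formula (9): the choice U 1 = U, B = 0 gives

$$\displaystyle \begin{aligned} \mathscr{D}(\widetilde{S})\;=\;\mathscr{D}(S)\dotplus S_\mu^{-1}U\;=\;\mathscr{D}(S_\mu)\,, \end{aligned}$$

i.e. the rigid extension itself, while the choice U 0 = U gives \(\mathscr {D}(\widetilde {S})=\mathscr {D}(S)\dotplus U\), the extension which annihilates the whole of U (the “soft” extension in the terminology of M. G. Kreı̆n).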

2 On the Semi-Bounded Extensions of a Positive Definite Operator

The aim of the present Section is to characterise the semi-bounded extensions of the operator S in terms of the corresponding operators B. Below, along with B, we shall often need to consider the inverse operator B −1. Obviously this is a self-adjoint operator in \(\overline {R(B)}\) with domain of definition R(B). However, we shall always consider B −1 to be defined on the broader set

$$\displaystyle \begin{aligned} \mathscr{D}(B^{-1})\;=\;R(B)\dotplus U_0\,, \end{aligned}$$

by setting B −1 U 0 = 0.

Lemma 1

If \(\widetilde {S}\) is a semi-bounded self-adjoint extension of the operator S and B is the corresponding operator in U 1 (in the sense of Theorem 1), then

$$\displaystyle \begin{aligned} \mathscr{D}(B^{-1})\;\subset\;\mathscr{D}[\widetilde{S}] \end{aligned}$$

and, for all \(v_1,v_2\in \mathscr {D}(B^{-1})\),

$$\displaystyle \begin{aligned} \widetilde{S}[v_1,v_2]\;=\;(B^{-1}v_1,v_2)\,. {} \end{aligned} $$
(10)

Proof

Let \(v\in \mathscr {D}(B^{-1})\). By definition of the set \(\mathscr {D}(B^{-1})\), v = Bu + v 0, where \(u\in \widetilde {U}_1\) and v 0 ∈ U 0. According to Theorem 1, the element \(g=S_\mu ^{-1}u+Bu+v_0\) is contained in \(\mathscr {D}(\widetilde {S})\). Since \(f=S_\mu ^{-1}u\in \mathscr {D}(S_\mu )\subset \mathscr {D}[S]\subset \mathscr {D}[\widetilde {S}]\) and \(g\in \mathscr {D}(\widetilde {S})\subset \mathscr {D}[\widetilde {S}]\), the element \(v=g-f\in \mathscr {D}[\widetilde {S}]\). Furthermore, since \(\widetilde {S}g=u\), then

$$\displaystyle \begin{aligned} \widetilde{S}[g,g]\;=\;(\widetilde{S}g,g)\;=\;(u,f+v)\;=\;(S_\mu f,f)+(u,v)\;=\;S[f,f]+(B^{-1}v,v)\,. \end{aligned}$$

On the other hand, according to the results of M. G. Kreı̆n quoted above, \(\widetilde {S}[f,v]=0\) and thus

$$\displaystyle \begin{aligned} \widetilde{S}[g,g]\;=\;S[f,f]+\widetilde{S}[v,v]\,. \end{aligned}$$

Comparing this equality with the former one, we find that

$$\displaystyle \begin{aligned} \widetilde{S}[v,v]\;=\;(B^{-1}v,v)\,. \end{aligned}$$

The passage from this to the bilinear relation (10) is performed in the standard way. The Lemma is proved.

Let us make one more remark needed in the following. Let A be a positive definite self-adjoint operator. Then for all \(h\in \mathscr {H}\)

$$\displaystyle \begin{aligned} \sup_{f\in\mathscr{D}(A)}\frac{\,|(f,h)|{}^2}{(Af,f)}\;=\;(A^{-1}h,h)\,. \end{aligned}$$

Indeed, setting g = A 1∕2 f, we find that

$$\displaystyle \begin{aligned} \sup_{f\in\mathscr{D}(A)}\frac{\,|(f,h)|{}^2}{(Af,f)}\;=\;\sup_{g\in\mathscr{H}}\frac{\;|(A^{-1/2}g,h)|{}^2}{\|g\|{}^2}\;=\;\sup_{\|g\|=1}|(g,A^{-1/2}h)|{}^2\,. \end{aligned}$$

According to the Bunyakovsky inequality, the supremum of the last expression is attained at \(g=A^{-1/2}h/\|A^{-1/2}h\|\) and thus

$$\displaystyle \begin{aligned} \sup_{f\in\mathscr{D}(A)}\frac{\,|(f,h)|{}^2}{(Af,f)}\;=\;(A^{-1}h,h)\,. \end{aligned}$$

In particular, if α < m(S) = γ and \(R_\alpha =(S_\mu -\alpha I)^{-1}\), then for every \(h\in \mathscr {H}\)

$$\displaystyle \begin{aligned} \sup_{f\in\mathscr{D}(S_\mu)}\frac{|(f,h)|{}^2}{(S_\mu f,f)-\alpha(f,f)}\;=\;(R_\alpha h,h)\,. {} \end{aligned} $$
(11)
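In a finite-dimensional model the variational identity underlying (11) is easy to verify numerically. The following sketch (an added illustration, with an arbitrarily chosen positive definite matrix standing in for the operator A) checks that the supremum is attained at \(f=A^{-1}h\) and is never exceeded:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary positive definite matrix A and a vector h (illustrative choices).
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)   # symmetric, positive definite
h = rng.standard_normal(4)

def ratio(f):
    """The quotient |(f,h)|^2 / (Af,f)."""
    return np.dot(f, h) ** 2 / np.dot(A @ f, f)

# The claimed value of the supremum: (A^{-1}h, h).
target = np.dot(np.linalg.solve(A, h), h)

# The supremum is attained at f = A^{-1} h.
f_star = np.linalg.solve(A, h)
assert abs(ratio(f_star) - target) < 1e-9

# Random trial vectors never exceed the bound (A^{-1}h, h).
for _ in range(1000):
    f = rng.standard_normal(4)
    assert ratio(f) <= target + 1e-9

print("sup |(f,h)|^2/(Af,f) =", target)
```

The inequality ratio(f) ≤ (A⁻¹h, h) is exactly the Bunyakovsky inequality in the scalar product (Af, g), with equality at f = A⁻¹h.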

Theorem 2

In order for the self-adjoint extension \(\widetilde {S}\) of the operator S to satisfy the condition of semi-boundedness

$$\displaystyle \begin{aligned} (\widetilde{S}g ,g)\;\geqslant\;\alpha(g,g)\qquad (\alpha<\gamma) {} \end{aligned} $$
(12)

for all \(g\in \mathscr {D}(\widetilde {S})\) , it is necessary and sufficient that for all \(v\in \mathscr {D}(B^{-1})\) the following inequality holds

$$\displaystyle \begin{aligned} (B^{-1}v,v)\;\geqslant\;\alpha(v,v)+\alpha^2(R_\alpha v,v)\,. {} \end{aligned} $$
(13)

Proof

Condition (12) is equivalent to

$$\displaystyle \begin{aligned} \widetilde{S}[g,g]\;\geqslant\;\alpha(g,g)\,,\quad g\in\mathscr{D}[\widetilde{S}]\,, {} \end{aligned} $$
(14)

which arises from (12) by taking the closure in the sense of \(\widetilde {S}\)-convergence. In particular, let g = f + v, where \(f\in \mathscr {D}(S_\mu )\) and \(v\in \mathscr {D}(B^{-1})\). According to Lemma 1, \(g\in \mathscr {D}[\widetilde {S}]\) and

$$\displaystyle \begin{aligned} \widetilde{S}[g,g]\;=\;(S_\mu f,f)+(B^{-1}v,v)\,. \end{aligned}$$

Let us write condition (14) in the form

$$\displaystyle \begin{aligned} (S_\mu f,f)+(B^{-1}v,v)\;\geqslant\;\alpha(f,f)+\alpha(f,v)+\alpha(v,f)+\alpha(v,v)\,. \end{aligned}$$

Replacing here f with ξf and v with ηv, we get the inequality

$$\displaystyle \begin{aligned}{}[(S_\mu f,f)-\alpha(f,f)]\xi\overline{\xi}-\alpha(f,v)\xi\overline{\eta}-\alpha(v,f)\overline{\xi}\eta+[(B^{-1}v,v)-\alpha(v,v)]\eta\overline{\eta}\;\geqslant\;0\,. \end{aligned} $$
(15)

Since α < γ = m(S) and, thus, \((S_\mu f,f)-\alpha (f,f)\geqslant 0\), then for all \(f\in \mathscr {D}(S_\mu )\) and \(v\in \mathscr {D}(B^{-1})\) the validity of the inequality

$$\displaystyle \begin{aligned} \alpha^2|(f,v)|{}^2\;\leqslant\;[(B^{-1}v,v)-\alpha(v,v)]\,[(S_\mu f,f)-\alpha(f,f)] {} \end{aligned} $$
(16)

is a necessary and sufficient condition for the non-negativity of the quadratic form on the left-hand side of (15). Let us show that this condition is not only necessary but also sufficient for the validity of inequality (12). Indeed, if \(g\in \mathscr {D}(\widetilde {S})\) then, according to Theorem 1,

$$\displaystyle \begin{aligned} g\;=\;f_0+S_\mu^{-1}u_1+B u_1+u_0\,, \end{aligned}$$

where \(f_0\in \mathscr {D}(S)\), \(u_1\in \widetilde {U}_1\), and u 0 ∈ U 0. Since \(f=f_0+S_\mu ^{-1}u_1\in \mathscr {D}(S_\mu )\) and \(v=B u_1+u_0\in \mathscr {D}(B^{-1})\), inequality (15) follows from (16). Setting ξ = η = 1 in (15), noting that g = f + v, and retracing the computations above in the opposite direction, we find that condition (14), which for \(g\in \mathscr {D}(\widetilde {S})\) coincides with condition (12), holds true. To complete the proof it remains to write (16) in the form

$$\displaystyle \begin{aligned} (B^{-1}v,v)-\alpha(v,v)\;\geqslant\;\frac{\alpha^2|(f,v)|{}^2}{(S_\mu f,f)-\alpha(f,f)} \end{aligned}$$

and to compare it with (11).

Corollaries

1.

    If the operator \(\widetilde {S}\) is semi-bounded and \(m(\widetilde {S})\geqslant \alpha \) , the operator B −1 is also semi-bounded and \(m(B^{-1})\geqslant \alpha \).

    This statement follows directly from formula (13).

2.

    In order for the operator \(\widetilde {S}\) to be positive, it is necessary and sufficient that the corresponding operator B −1 is positive.

    For the proof of this statement it suffices to set α = 0 in (12) and (13).

3.

    In order for the operator \(\widetilde {S}\) to be positive definite, it is necessary and sufficient that the corresponding operator B −1 is positive definite.

    Indeed, if m(B −1) > 0, then \(m(\widetilde {S})\geqslant 0\) . The operator B −1 has a bounded inverse, thus U 0 consists only of the zero element and the operator B is bounded. According to the Remark on Theorem 1, \(\widetilde {S}\) has then a bounded inverse everywhere in \(\mathscr {H}\) . This is why the equality \(m(\widetilde {S})=0\) is impossible and the operator \(\widetilde {S}\) is positive definite. Conversely, if \(m(\widetilde {S})>0\) then, according to (13), \(m(B^{-1})\geqslant m(\widetilde {S})>0\).

4.

    If \(m(B^{-1})\geqslant c\) and c > −γ, then the corresponding operator \(\widetilde {S}\) is semi-bounded and

    $$\displaystyle \begin{aligned} m(\widetilde{S})\;\geqslant\;\alpha=\frac{\gamma c}{\gamma + c}\,. \end{aligned}$$

    It is easy to see that α < γ. Since \((R_\alpha v,v)\leqslant (\gamma -\alpha )^{-1}(v,v)\), condition (13) is fulfilled because the stronger condition

    $$\displaystyle \begin{aligned} (B^{-1}v,v)-\alpha(v,v)\;\geqslant\;\frac{\alpha^2}{\gamma-\alpha}(v,v)\,,\quad v\in \mathscr{D}(B^{-1})\,, \end{aligned}$$

    which in view of the identity \(\alpha +\frac {\alpha ^2}{\gamma -\alpha }=\frac {\alpha \gamma }{\gamma -\alpha }=c\) is another way of writing the inequality \((B^{-1}v,v)\geqslant c(v,v)\), holds true.

5.

    In order for the self-adjoint extension \(\widetilde {S}\) of the operator S to have the lower bound \(m(\widetilde {S})=\gamma \) , it is necessary and sufficient that condition (13) holds true for all α < γ.

    The proof of this statement is obvious.
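The theorem and its corollaries are conveniently illustrated in the simplest situation when U is one-dimensional (for definiteness, U = U 1, spanned by a unit vector u) and B reduces to multiplication by a real number b ≠ 0, so that \(B^{-1}=b^{-1}\). Condition (13) then turns into the numerical inequality

$$\displaystyle \begin{aligned} b^{-1}\;\geqslant\;\alpha+\alpha^2(R_\alpha u,u)\,, \end{aligned}$$

and, since \(0\leqslant (R_\alpha u,u)\leqslant (\gamma -\alpha )^{-1}\) for α < γ, Corollary 4 yields \(m(\widetilde {S})\geqslant \gamma b^{-1}/(\gamma +b^{-1})\) whenever \(b^{-1}>-\gamma \).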

Remark

M. G. Kreı̆n gave conditions under which the rigid extension S μ is the unique self-adjoint extension of the operator S with the lower bound γ = m(S) (see Theorems 8 and 9 of the work [2]). Corollary 5 provides an alternative way of obtaining these conditions.

As we already noted, for semi-bounded extensions \(\widetilde {S}\) of the operator S there exists the decomposition of M. G. Kreı̆n:

$$\displaystyle \begin{aligned} \mathscr{D}[\widetilde{S}]=\mathscr{D}[S]\dotplus\mathscr{D}[\widetilde{S}]\cap U\,. {} \end{aligned} $$
(17)

It is interesting to describe the set \(\mathscr {D}[\widetilde {S}]\cap U\) in terms of the operator B. Before proving the corresponding theorem we first establish a lemma.

Lemma 2

If \(\widetilde {S}\) is a semi-bounded extension of S and \(\beta <m(\widetilde {S})\) , then there exists a positive number η < 1 such that

$$\displaystyle \begin{aligned} \beta^2|(f,v)|{}^2\;\leqslant\;\eta^2[(S_\mu f,f)-\beta(f,f)]\,[(B^{-1}v,v)-\beta(v,v)] {} \end{aligned} $$
(18)

for all \(f\in \mathscr {D}(S_\mu )\) and \(v\in \mathscr {D}(B^{-1})\).

Proof

According to (11), to prove inequality (18) it suffices to establish the validity of the relation

$$\displaystyle \begin{aligned} \beta^2(R_\beta v,v)\;\leqslant\;\eta^2[(B^{-1}v,v)-\beta(v,v)]\,, \end{aligned}$$

which we re-write in the form

$$\displaystyle \begin{aligned} (B^{-1}v,v)\;\geqslant\;\frac{\beta^2(R_\beta v,v)}{\eta^2}+\beta(v,v)\,. {} \end{aligned} $$
(19)

Let the number α be such that \(\beta <\alpha <m(\widetilde {S})\). According to Theorem 2,

$$\displaystyle \begin{aligned} (B^{-1}v,v)\;\geqslant\;\alpha^2(R_\alpha v,v)+\alpha(v,v)\,. {} \end{aligned} $$
(20)

Let us show that the number η can be chosen in such a way that for all \(v\in \mathscr {D}(B^{-1})\) the inequality

$$\displaystyle \begin{aligned} \alpha^2(R_\alpha v,v)+\alpha(v,v)\;\geqslant\;\frac{\beta^2(R_\beta v,v)}{\eta^2}+\beta(v,v) {} \end{aligned} $$
(21)

holds true. We denote by \(\mathscr {E}_\lambda \) the spectral measure of the operator S μ, and re-write (21) in the form

$$\displaystyle \begin{aligned} \int_\gamma^{\infty}\Big(\frac{\alpha^2}{\lambda-\alpha}+\alpha-\frac{\beta^2}{(\lambda-\beta)\eta^2}-\beta\Big)\mathrm{d}(\mathscr{E}_\lambda v,v)\;\geqslant\;0\,. {} \end{aligned} $$
(22)

Assume first that β < 0. Then we can also assume α < 0. Since

$$\displaystyle \begin{aligned} \frac{\alpha^2}{\lambda-\alpha}+\alpha-\frac{\beta^2}{(\lambda-\beta)\eta^2}-\beta\;&=\;\frac{(\alpha-\beta)\lambda^2}{(\lambda-\alpha)(\lambda-\beta)}-\frac{(\eta^{-2}-1)\beta^2}{\lambda-\beta} \\ &\geqslant\;\frac{(\alpha-\beta)\gamma^2}{(\gamma-\beta)(\gamma-\alpha)}-\frac{(\eta^{-2}-1)\beta^2}{\gamma-\beta} \,, \end{aligned}$$

then, for a sufficiently small value of θ = η −2 − 1, inequality (21) indeed holds true. If \(\beta \geqslant 0\), then α > 0; in this case the validity of (21) follows from the relation

$$\displaystyle \begin{aligned} \frac{(\alpha-\beta)\lambda^2}{(\lambda-\alpha)(\lambda-\beta)}-\frac{\theta\beta^2}{\lambda-\beta}\;\geqslant\;(\alpha-\beta)-\frac{\theta\beta^2}{\gamma-\beta}\,, \end{aligned}$$

if θ is chosen to be sufficiently small. Comparing inequalities (21), (20), and (19) we convince ourselves that the Lemma is true.

Moreover, let us remark that since

$$\displaystyle \begin{aligned} (S_\mu f,f)-\beta(f,f)\;&=\;\widetilde{S}[f,f]-\beta(f,f)\;=\;(f,f)_{\widetilde{S}}\,, \\ (B^{-1}v,v)-\beta(v,v)\;&=\;\widetilde{S}[v,v]-\beta(v,v)\;=\;(v,v)_{\widetilde{S}}\,, \end{aligned}$$

and

$$\displaystyle \begin{aligned} -\beta(f,v)\;=\;\widetilde{S}[f,v]-\beta(f,v)\;=\;(f,v)_{\widetilde{S}}\,, \end{aligned}$$

we can write inequality (18) in the form

$$\displaystyle \begin{aligned} |(f,v)_{\widetilde{S}}|{}^2\;\leqslant\;\eta^2\,(f,f)_{\widetilde{S}}\,(v,v)_{\widetilde{S}}\,. {} \end{aligned} $$
(23)

Inequality (23) shows that the “angle” between the manifolds \(\mathscr {D}(S_\mu )\) and \(\mathscr {D}(B^{-1})\) in the Hilbert space \(\mathscr {D}[\widetilde {S}]\) is different from zero.

Theorem 3

For every semi-bounded self-adjoint extension \(\widetilde {S}\) of the operator S

$$\displaystyle \begin{aligned} \mathscr{D}[\widetilde{S}]\cap U\;=\;\mathscr{D}[B^{-1}] {} \end{aligned} $$
(24)

and thus, according to (17),

$$\displaystyle \begin{aligned} \mathscr{D}[\widetilde{S}]\;=\;\mathscr{D}[S]\dotplus\mathscr{D}[B^{-1}]\,. {} \end{aligned} $$
(25)

Proof

Let \(\beta <m(\widetilde {S})\). According to Corollary 1 of Theorem 2, the operator B −1 is semi-bounded and m(B −1) > β. The set \(\mathscr {D}[B^{-1}]\) represents the closure of \(\mathscr {D}(B^{-1})\) in the metric defined by the scalar product

$$\displaystyle \begin{aligned} (v_1,v_2)_{B^{-1}}\;=\;(B^{-1}v_1,v_2)-\beta(v_1,v_2)\,,\qquad v_1,v_2\in\mathscr{D}(B^{-1})\,. \end{aligned}$$

According to Lemma 1, \(\mathscr {D}(B^{-1})\subset \mathscr {D}[\widetilde {S}]\) and

$$\displaystyle \begin{aligned} (v_1,v_2)_{B^{-1}}\;=\;(B^{-1}v_1,v_2)-\beta(v_1,v_2)\;=\;\widetilde{S}[v_1,v_2]-\beta(v_1,v_2)=(v_1,v_2)_{\widetilde{S}}\,, \end{aligned}$$

and therefore the closure in the norm \((v,v)_{B^{-1}}\) does not lead out of \(\mathscr {D}[\widetilde {S}]\). On the other hand, since

$$\displaystyle \begin{aligned} (v,v)_{B^{-1}}\;=\;(B^{-1}v,v)-\beta(v,v)\;\geqslant\;(m(B^{-1})-\beta)(v,v)\,, \end{aligned}$$

the closure of \(\mathscr {D}(B^{-1})\) in the norm \((v,v)_{B^{-1}}\) does not lead out of U. This way,

$$\displaystyle \begin{aligned} \mathscr{D}[B^{-1}]\;\subset\,\mathscr{D}[\widetilde{S}]\cap U\,. \end{aligned}$$

Now let us establish the opposite inclusion. Let \(u\in \mathscr {D}[\widetilde {S}]\cap U\). By the definition of the set \(\mathscr {D}[\widetilde {S}]\), there exists a sequence \(\{g_k\}\subset \mathscr {D}(\widetilde {S})\) such that

$$\displaystyle \begin{aligned} \|g_k-u\|{}_{\widetilde{S}}\to 0\quad \text{ as }\quad k\to\infty\,. {} \end{aligned} $$
(26)

Representing g k in the form g k = f k + v k, \(f_k\in \mathscr {D}(S_\mu )\), \(v_k\in \mathscr {D}(B^{-1})\), according to (23) we get

$$\displaystyle \begin{aligned} \|g_k -g_m\|{}_{\widetilde{S}}^2 \;&=\;\|f_k -f_m\|{}_{\widetilde{S}}^2+2\mathfrak{Re}(f_k -f_m,v_k-v_m)_{\widetilde{S}}+\|v_k-v_m\|{}_{\widetilde{S}}^2 \\ &\geqslant\;\|f_k-f_m\|{}_{\widetilde{S}}^2-2\eta\|f_k-f_m\|{}_{\widetilde{S}}\|v_k-v_m\|{}_{\widetilde{S}}+\|v_k-v_m\|{}_{\widetilde{S}}^2 \\ &\geqslant\;(1-\eta)\,\big[\,\|f_k-f_m\|{}_{\widetilde{S}}^2+\|v_k-v_m\|{}_{\widetilde{S}}^2\,\big]\,. \end{aligned} $$

It now follows from (26) that

$$\displaystyle \begin{aligned} \|f_k-f_m\|{}_{\widetilde{S}}\to 0\quad \text{ as }\quad k,m\to\infty\,,\qquad \|v_k-v_m\|{}_{\widetilde{S}}\to 0\quad \text{ as }\quad k,m\to\infty\,. \end{aligned}$$

Since \(\|v_k-v_m\|{ }_{\widetilde {S}}=\|v_k-v_m\|{ }_{B^{-1}}\), the sequence \(\{v_k\}\) converges to some element \(v\in \mathscr {D}[B^{-1}]\). In exactly the same way the sequence \(\{f_k\}\) converges to some element \(f\in \mathscr {D}[S]\). Taking the limit in the equality \(g_k=f_k+v_k\) we find that u = f + v. Since f = u − v ∈ U, necessarily f = 0 and \(u=v\in \mathscr {D}[B^{-1}]\). The theorem is proved.

Corollaries

  1.

    If the operator \(\widetilde {S}\) is positive, then

    $$\displaystyle \begin{aligned} \mathscr{D}[\widetilde{S}]\;=\;\mathscr{D}[S]\dotplus R(B^{1/2})\dotplus U_0\,. \end{aligned}$$

    Indeed, if \(\widetilde {S}\geqslant 0\), then \(B^{-1}\geqslant 0\) and hence

    $$\displaystyle \begin{aligned} \mathscr{D}[B^{-1}]\;=\;\mathscr{D}(B^{-1/2})\;=\;R(B^{1/2})\dotplus U_0\,. \end{aligned}$$
  2.

    For \(v\in \mathscr {D}[\widetilde {S}]\cap U\)

    $$\displaystyle \begin{aligned} \widetilde{S}[v,v]\;=\;B^{-1}[v,v]\,. \end{aligned}$$

    Indeed, \(v\in \mathscr {D}[B^{-1}]\) and therefore there exists a sequence \(\{v_n\}\subset \mathscr {D}(B^{-1})\) such that \(v_n\xrightarrow []{B^{-1}}v\) or, equivalently, \(v_n\xrightarrow []{\widetilde {S}}v\). It remains to note that

    $$\displaystyle \begin{aligned} B^{-1}[v,v]\;=\;\lim_{n\to\infty}(B^{-1}v_n,v_n)\;=\;\lim_{n\to\infty}\widetilde{S}[v_n,v_n]\;=\;\widetilde{S}[v,v]\,. \end{aligned}$$

3 On the Spectrum of Self-Adjoint Extensions of a Positive Definite Operator

Based on the knowledge of the type of the spectrum of the operator B it is sometimes possible to get information on the spectrum of the corresponding extension \(\widetilde {S}\). Now, in the proof of Theorem 1 we established (formula (9)) that the operator \(\widetilde {S}^{-1}-P_+S_\mu ^{-1}P_+\) coincides with the operator B if the latter is extended by zero to \(R(S)\oplus U_0\). From this it follows that if \(S_\mu ^{-1}\) is absolutely continuous, then the absolute continuity of \(\widetilde {S}^{-1}\) is equivalent to that of B.

M. G. Kreı̆n proved theorems (Footnote 4) that allow one to describe the number of negative eigenvalues of the self-adjoint extensions of a positive definite operator with finite deficiency index. Using Theorems 1 and 3, the result of M. G. Kreı̆n can be formulated as follows:

The number of negative eigenvalues (counting their multiplicity) of the operator \(\widetilde {S}\) is exactly equal to the number of negative eigenvalues of the operator \(B^{-1}\).

The extension of M. G. Kreı̆n’s result to the case of an operator S with an infinite deficiency index is the following

Theorem 4

In order for the negative part of the spectrum of the self-adjoint extension \(\widetilde {S}\) of the operator S to consist of a set of eigenvalues of finite multiplicity which is bounded from below and has no accumulation points distinct from zero, it is necessary and sufficient that the negative part of the spectrum of the operator \(B^{-1}\) has the same property. Moreover, if one of the operators \(\widetilde {S}\), \(B^{-1}\) has a finite number of negative eigenvalues, then the other one has exactly the same number of negative eigenvalues.

Proof

The proof of the theorem is based on the following obvious remark:

If G is a finite-dimensional subspace of \(\mathscr {H}\) and W is a linear set whose dimension is greater than that of G, then W contains a non-zero element orthogonal to G.

Let us start with the proof of the necessity of the first statement. It follows from the condition of the theorem that \(\widetilde {S}\) is a semi-bounded operator. Let \(E_\lambda \) be the resolution of the identity for \(\widetilde {S}\), \(m(\widetilde {S})=\alpha <0\), \(m(B^{-1})=\delta \) (according to Corollary 1 of Theorem 2, \(\delta \geqslant \alpha \)), \(F_t\) the resolution of the identity for \(B^{-1}\), and V the closure in \(\mathscr {H}\) of the set \(\mathscr {D}(B^{-1})\). We note that for \(g\in \mathscr {D}[\widetilde {S}]\)

$$\displaystyle \begin{aligned} \widetilde{S}[g,g]\;=\;\int_\alpha^\infty\lambda\,\mathrm{d}(E_\lambda g,g)\,. {} \end{aligned} $$
(27)

Indeed, setting \(T=\widetilde {S}-\beta \) (β < α), we see that T > 0 and thus, for \(g\in \mathscr {D}[T]=\mathscr {D}[\widetilde {S}]\),

$$\displaystyle \begin{aligned} T[g,g]\;=\;\|T^{1/2}g\|{}^2\;=\;\int_\alpha^\infty(\lambda-\beta)\,\mathrm{d}(E_\lambda g,g)\;=\;\int_\alpha^\infty\lambda\,\mathrm{d} (E_\lambda g,g)-\beta(g,g)\,. \end{aligned}$$

From this, since \(\widetilde {S}[g,g]=T[g,g]+\beta (g,g)\), we get formula (27). Applying it to \(v\in \mathscr {D}(B^{-1})\) and noting that

$$\displaystyle \begin{aligned} \widetilde{S}[v,v]\;=\;(B^{-1} v,v)\;=\;\int_\delta^\infty t\,\mathrm{d}(F_t v,v)\,, {} \end{aligned} $$
(28)

we get, comparing (27) and (28), the inequality

$$\displaystyle \begin{aligned} \int_\delta^\infty t\,\mathrm{d}(F_t v,v)\;\geqslant\;\int_\alpha^0\lambda\,\mathrm{d} (E_\lambda v,v)\,. {} \end{aligned} $$
(29)
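In detail: by (28) and (27), and since the part of the integral over \([0,\infty )\) is non-negative,

$$\displaystyle \begin{aligned} \int_\delta^\infty t\,\mathrm{d}(F_t v,v)\;=\;\widetilde{S}[v,v]\;=\;\int_\alpha^\infty\lambda\,\mathrm{d}(E_\lambda v,v)\;\geqslant\;\int_\alpha^0\lambda\,\mathrm{d}(E_\lambda v,v)\,. \end{aligned}$$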

Assume that the statement of the theorem does not hold for \(B^{-1}\). Then one can find an interval \(\Delta =[\delta ,-\varepsilon ]\) (ε > 0) such that \(F_\Delta V\) is infinite-dimensional (Footnote 5). Let \(\Delta _1=[\alpha ,-\frac {\varepsilon }{2}]\). According to the condition of the theorem, the subspace \(E_{\Delta _1}\mathscr {H}\) is finite-dimensional and thus one can find a non-zero element \(\overline {v}\in F_\Delta V\) orthogonal to \(E_{\Delta _1}\mathscr {H}\). Since \(\overline {v}\in \mathscr {D}(B^{-1})\), applying inequality (29) to \(\overline {v}\) we get that

$$\displaystyle \begin{aligned} \int_\delta^{-\varepsilon}t\,\mathrm{d}(F_t \overline{v},\overline{v})\;\geqslant\;\int_{-\varepsilon/2}^0\lambda\,\mathrm{d}(E_\lambda \overline{v},\overline{v})\,. \end{aligned}$$

But then

$$\displaystyle \begin{aligned} -\varepsilon(\overline{v},\overline{v})\;&\geqslant\;\int_\delta^{-\varepsilon}t\,\mathrm{d}(F_t\overline{v},\overline{v})\;\geqslant\;\int_{-\varepsilon/2}^0\lambda\,\mathrm{d}(E_\lambda\overline{v},\overline{v}) \\ & \geqslant\;-\frac{\varepsilon}{2}\int_{-\varepsilon/2}^0\mathrm{d}(E_\lambda\overline{v},\overline{v})\;\geqslant\;-\frac{\varepsilon}{2}(\overline{v},\overline{v})\,, \end{aligned}$$

which is impossible. The necessity of the first statement is proved.

Let us turn to the proof of sufficiency. We note preliminarily that if \(\widetilde {S}\) is a self-adjoint extension of S, if

$$\displaystyle \begin{aligned} g_k\in\mathscr{D}(\widetilde{S})&\cap E_{(-\infty,-\varepsilon]}\mathscr{H}\quad (\varepsilon>0)\,,\qquad g_k=f_k+v_k\,, \\ & f_k\in\mathscr{D}(S_\mu)\,,\qquad v_k\in\mathscr{D}(B^{-1})\,, \end{aligned}$$

and if the \(g_k\)’s are linearly independent, then the corresponding elements \(v_k\) are also linearly independent. Indeed, if for some constants \(c_k\), not all equal to zero, one has \(\sum _{k=1}^n c_k v_k=0\), then

$$\displaystyle \begin{aligned} & g\;=\;\sum_{k=1}^n c_k g_k\;=\;\sum_{k=1}^n c_k f_k\;\in\;\mathscr{D}(S_\mu)\qquad \text{and} \\ &(\widetilde{S} g,g)\;=\;(S^* g,g)\;=\;(S_\mu g,g)\;\geqslant\;\gamma(g,g)\,, \end{aligned}$$

which is impossible, since \(g\in E_{(-\infty ,-\varepsilon ]}\mathscr {H}\) and g ≠ 0.

Assume that the statement of the theorem does not hold for \(\widetilde {S}\). Then one can find ε > 0 such that the subspace \(E_{(-\infty ,-\varepsilon ]}\mathscr {H}\) is infinite-dimensional. Owing to that, its dense linear subset \(\mathscr {D}(\widetilde {S})\cap E_{(-\infty ,-\varepsilon ]}\mathscr {H}\) is also infinite-dimensional. Applying the decomposition g = f + v, \(f\in \mathscr {D}(S_\mu )\), \(v\in \mathscr {D}(B^{-1})\), to all elements \(g\in \mathscr {D}(\widetilde {S})\cap E_{(-\infty ,-\varepsilon ]}\mathscr {H}\), let us consider the linear set of corresponding elements v, which we denote by \(V_\varepsilon \). Owing to the noted linear independence of the elements v, one can claim that the set \(V_\varepsilon \) is also infinite-dimensional. Let \(\delta =m(B^{-1})\), \(\Delta _h=[\delta ,-h]\), and let the number h be chosen so as to fulfil the conditions \(0<h<\gamma \), \(-h>\delta \), \(\frac {\gamma h}{\gamma -h}\leqslant \frac {\varepsilon }{2}\). It follows from the condition of the theorem that the subspace \(F_{\Delta _h}V\) is finite-dimensional. Thus, the set \(V_\varepsilon \) contains a non-zero element v′ which is orthogonal to the subspace \(F_{\Delta _h}V\). Let v′ correspond to the element g′ from \(\mathscr {D}(\widetilde {S})\cap E_{(-\infty ,-\varepsilon ]}\mathscr {H}\). Owing to the noted linear independence, such an element can be chosen uniquely. Let us set f′ = g′− v′ and show that

$$\displaystyle \begin{aligned} (\widetilde{S}g',g')\;\geqslant\;-\frac{\varepsilon}{2}\,(g',g')\,. {} \end{aligned} $$
(30)

One can do this in the same way as Theorem 2 and its Corollary 4 are proved. Let us write (30) in the form

$$\displaystyle \begin{aligned} \big[ (S_\mu f',f')+\frac{\varepsilon}{2}(f',f')\big]+\frac{\varepsilon}{2}(f',v')+\frac{\varepsilon}{2}(v',f')+\big[ (B^{-1} v',v')+\frac{\varepsilon}{2}(v',v')\big]\;\geqslant\; 0\,. \end{aligned}$$
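This form of (30) is obtained from the decomposition g′ = f′ + v′ together with the identity \((\widetilde{S}g',g')=(S_\mu f',f')+(B^{-1}v',v')\) (used again at the end of the proof) and the expansion

$$\displaystyle \begin{aligned} (g',g')\;=\;(f',f')+(f',v')+(v',f')+(v',v')\,. \end{aligned}$$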

A sufficient condition for the non-negativity of this expression is the validity of the inequality

$$\displaystyle \begin{aligned} \frac{\;\varepsilon^2}{4}\,|(f',v')|{}^2\;\leqslant\;\big[(S_\mu f',f')+\frac{\varepsilon}{2}(f',f')\big]\,\big[ (B^{-1} v',v')+\frac{\varepsilon}{2}(v',v')\big]\,, \end{aligned}$$

which, according to (11), is true when the condition

$$\displaystyle \begin{aligned} \frac{\;\varepsilon^2}{4}\,(R_{-\varepsilon/2}v',v')\;\leqslant\;(B^{-1} v',v')+\frac{\varepsilon}{2}(v',v') {} \end{aligned} $$
(31)

is true. In turn, the validity of inequality (31) is guaranteed by the validity of the stronger condition

$$\displaystyle \begin{aligned} \frac{\;\varepsilon^2}{4}\Big(\gamma+\frac{\varepsilon}{2}\Big)^{-1}(v',v')\;\leqslant\;(B^{-1} v',v')+\frac{\varepsilon}{2}(v',v')\,, \end{aligned}$$

which one can write in the following form:

$$\displaystyle \begin{aligned} (B^{-1} v',v')\;\geqslant\;-\frac{\gamma\varepsilon}{2\gamma+\varepsilon}\,(v',v')\,. {} \end{aligned} $$
(32)
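The passage from the preceding condition to (32) is elementary algebra:

$$\displaystyle \begin{aligned} \frac{\;\varepsilon^2}{4}\Big(\gamma+\frac{\varepsilon}{2}\Big)^{-1}-\frac{\varepsilon}{2}\;=\;\frac{\varepsilon^2}{2(2\gamma+\varepsilon)}-\frac{\varepsilon}{2}\;=\;\frac{\varepsilon^2-\varepsilon(2\gamma+\varepsilon)}{2(2\gamma+\varepsilon)}\;=\;-\frac{\gamma\varepsilon}{2\gamma+\varepsilon}\,. \end{aligned}$$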

Finally, the validity of relation (32) follows from the fact that, according to the choice of h,

$$\displaystyle \begin{aligned} h\;\leqslant\;\frac{\gamma\varepsilon}{2\gamma+\varepsilon} \end{aligned}$$

and thus

$$\displaystyle \begin{aligned} (B^{-1} v',v')\;=\;\int_{-h}^\infty t\,\mathrm{d}(F_t v',v')\;\geqslant\;-h(v',v')\;\geqslant\;-\frac{\gamma\varepsilon}{2\gamma+\varepsilon}(v',v')\,. \end{aligned}$$
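The bound \(h\leqslant \frac{\gamma\varepsilon}{2\gamma+\varepsilon}\) invoked here indeed follows from the choice \(\frac{\gamma h}{\gamma-h}\leqslant\frac{\varepsilon}{2}\), since γ − h > 0:

$$\displaystyle \begin{aligned} 2\gamma h\;\leqslant\;\varepsilon(\gamma-h)\quad\Longrightarrow\quad h\,(2\gamma+\varepsilon)\;\leqslant\;\gamma\varepsilon\quad\Longrightarrow\quad h\;\leqslant\;\frac{\gamma\varepsilon}{2\gamma+\varepsilon}\,. \end{aligned}$$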

This way, relation (30) should be valid, which is however impossible, since \(g'\in E_{(-\infty ,-\varepsilon ]}\mathscr {H}\) and

$$\displaystyle \begin{aligned} (\widetilde{S}g',g')\;\leqslant\;-\varepsilon\,(g',g')\,. \end{aligned}$$

The obtained contradiction proves the validity of the first statement of the theorem.

We turn to the proof of the second part of the theorem. Assume \(\widetilde {S}\) has p negative eigenvalues. The discrete character of the negative part of the spectrum of the operator \(B^{-1}\) is established by means of the first statement. If \(B^{-1}\) has more than p negative eigenvalues, then for some ε > 0 there exists a non-zero element \(\overline {v}\in F_{[\delta ,-\varepsilon ]}V\) orthogonal to \(E_{[\alpha ,0]}\mathscr {H}\). Since \(\overline {v}\in \mathscr {D}(B^{-1})\), applying (29) we find that

$$\displaystyle \begin{aligned} \int_\delta^{-\varepsilon}t\,\mathrm{d} (F_t\overline{v},\overline{v})\;\geqslant\;0\,. \end{aligned}$$

The latter is however impossible. This way, the number q of negative eigenvalues of the operator \(B^{-1}\) does not exceed p. On the other hand, if q is finite, then \(p\leqslant q\). Indeed, if the contrary holds, then for some ε > 0 the dimension of the subspace \(E_{[\alpha ,-\varepsilon ]}\mathscr {H}\) is larger than q, and thus the dimension of \(V_\varepsilon \) is also larger than q. So \(V_\varepsilon \) contains a non-zero element v′ orthogonal to the subspace \(F_{[\delta ,0]}V\). But then, for the corresponding element g′ we obtain the inequality

$$\displaystyle \begin{aligned} (\widetilde{S} g',g')\;=\;(S_\mu f',f')+(B^{-1}v',v')\;\geqslant\;(B^{-1}v',v')\;\geqslant\;\int_\delta^0t\,\mathrm{d}(F_t v',v')\;=\;0\,, \end{aligned}$$

which is impossible, since \(g'\in E_{[\alpha ,-\varepsilon ]}\mathscr {H}\). Comparing these results we see that p = q. The theorem is proved.

Remark

We note that under the conditions of the theorem the successive negative eigenvalues of the operators \(\widetilde {S}\) and \(B^{-1}\) satisfy the relations

$$\displaystyle \begin{aligned} \lambda_j(\widetilde{S})\;\leqslant\;\lambda_j(B^{-1})\qquad (j=1,2,\dots)\,. \end{aligned}$$

Indeed, since the discrete character of the negative part of the spectrum has been established, one can find the numbers \(\lambda _j(\widetilde {S})\) as the successive minima of the quadratic form

$$\displaystyle \begin{aligned} \widetilde{S}[g,g]\qquad (\|g\|=1) \end{aligned}$$

on the set \(\mathscr {D}[\widetilde {S}]\). According to Corollary 2 of Theorem 3, on the set \(\mathscr {D}[B^{-1}]\) this form coincides with the quadratic form

$$\displaystyle \begin{aligned} B^{-1}[v,v]\qquad (\|v\|=1)\,, \end{aligned}$$

whose successive minima are given by the numbers \(\lambda _j(B^{-1})\). It now remains to refer to the well-known minimax property of eigenvalues.
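In symbols, for each j not exceeding the number of negative eigenvalues, the minimax characterisation gives

$$\displaystyle \begin{aligned} \lambda_j(\widetilde{S})\;=\;\min_{\substack{L\subset\mathscr{D}[\widetilde{S}]\\ \dim L=j}}\;\max_{\substack{g\in L\\ \|g\|=1}}\widetilde{S}[g,g]\;\leqslant\;\min_{\substack{L\subset\mathscr{D}[B^{-1}]\\ \dim L=j}}\;\max_{\substack{v\in L\\ \|v\|=1}}B^{-1}[v,v]\;=\;\lambda_j(B^{-1})\,, \end{aligned}$$

since \(\mathscr{D}[B^{-1}]\subset\mathscr{D}[\widetilde{S}]\), the two quadratic forms coincide on \(\mathscr{D}[B^{-1}]\), and the minimum on the right-hand side is taken over a smaller family of subspaces.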

4 On the Positive Definite Symmetric Extensions of the Operator S

Below we give theorems which supplement Theorem 1. Theorem 5 gives a characterisation of the symmetric positive definite extensions S′ of the initial operator S. Theorem 6 gives a general characterisation of the self-adjoint extensions of the operator S′. Theorem 7 is devoted to the characterisation of the rigid extension \(S^{\prime }_\mu \) of the operator S′.

Theorem 5

In order for the operator S′ to be a closed symmetric positive definite extension of the operator S, it is necessary and sufficient that S′ be defined as a restriction of \(S^*\) to the direct sum

$$\displaystyle \begin{aligned} \mathscr{D}(S')\;=\;\mathscr{D}(S)\dotplus(S_\mu^{-1}+B')U'\,; {} \end{aligned} $$
(33)

here U′ is some subspace of U, B′ is a symmetric operator which maps U′ into U and satisfies the condition

$$\displaystyle \begin{aligned} (B'u',B'u')\;\leqslant\;M\,(B'u',u')\,,\qquad M>0\,,\qquad u'\in U'\,. {} \end{aligned} $$
(34)

Proof

Necessity

We introduce the notation:

$$\displaystyle \begin{aligned} \overline{U}\;=\;\mathscr{H}\ominus R(S')\qquad \text{and}\qquad U'=U\ominus\overline{U}\,. \end{aligned}$$

It is obvious that R(S′) = R(S) ⊕ U′. Let us consider the rigid extension \(S^{\prime }_\mu \) of the operator S′. According to Theorem 1, there exists an operator B, bounded and self-adjoint in U, such that

$$\displaystyle \begin{aligned} \mathscr{D}(S^{\prime}_\mu)\;=\;\mathscr{D}(S)\dotplus(S_\mu^{-1}+B)U\,. {} \end{aligned} $$
(35)

Obviously, \(R(S^{\prime }_\mu )=\mathscr {H}\). We denote by B′ the restriction of the operator B to U′ and check that

$$\displaystyle \begin{aligned} \mathscr{D}(S')\;=\;\mathscr{D}(S)\dotplus(S_\mu^{-1}+B')U'\,. \end{aligned}$$

Indeed, if \(g=f_0+(S_\mu ^{-1}+B')u'\), \(f_0\in \mathscr {D}(S)\), u′∈ U′, then \(S^{\prime }_\mu g=S f_0 + u'\in R(S')\), and thus \(g\in \mathscr {D}(S')\). Conversely, if \(g\in \mathscr {D}(S')\), then, according to (35), \(g=f_0+(S_\mu ^{-1}+B)u\), \(f_0\in \mathscr {D}(S)\), u ∈ U, \(S'g =S^{\prime }_\mu g=S f_0 +u\), and necessarily u ∈ U′, for otherwise \(S'g\notin R(S')\). It remains to ensure that condition (34) holds. Let \(F_t\) be the spectral measure of the operator B, M its upper bound, and u ∈ U. Since the operator B is positive,

$$\displaystyle \begin{aligned} (Bu,Bu)\;=\;\int_0^M t^2\,\mathrm{d}(F_t u,u)\;\leqslant\;M\int_0^Mt\,\mathrm{d} (F_t u,u)\;=\;M(Bu,u)\,. \end{aligned}$$

These inequalities hold in particular on U′. Necessity is proved.

Sufficiency

We note first of all that the operator B′ defined on U′ can be extended to a self-adjoint operator in U which also satisfies condition (34) (Footnote 6). Indeed, let us consider on U′ the symmetric operator \(C'=B'-\frac {M}{2}\); by virtue of (34),

$$\displaystyle \begin{aligned} (C'u',C'u')\;=\;(B'u',B'u')-M(B'u',u')+\frac{\;M^2}{4}(u',u')\;\leqslant\;\frac{\;M^2}{4}(u',u')\,. \end{aligned}$$

According to Theorem 2 of M. G. Kreı̆n’s work [2], there exists at least one self-adjoint extension \(\widetilde {C}\) of the operator C′ to the whole of U which satisfies the condition \(\|\widetilde {C}u\|\leqslant \frac {M}{2}\|u\|\), u ∈ U. The operator \(\widetilde {B}=\widetilde {C}+\frac {M}{2}\) is obviously a self-adjoint extension of the operator B′ and in addition

$$\displaystyle \begin{aligned} (\widetilde{B}u,\widetilde{B}u)\;& =\;(\widetilde{C}u,\widetilde{C}u)+M(\widetilde{C}u,u)+\frac{\;M^2}{4}(u,u) \\ & \leqslant\;\frac{\;M^2}{2}(u,u)+M(\widetilde{C}u,u)\;=\;M(\widetilde{B}u,u)\,. \end{aligned}$$

With the help of the operator \(\widetilde {B}\) we construct a positive definite operator \(\widetilde {S}\) with domain of definition

$$\displaystyle \begin{aligned} \mathscr{D}(\widetilde{S})\;=\;\mathscr{D}(S)\dotplus(S_\mu^{-1}+\widetilde{B})U\,. \end{aligned}$$

The operator S′, being defined on the direct sum (33), is obviously a restriction of the operator \(\widetilde {S}\) and thus has to be positive definite as well. The closedness of S′ follows from the existence of the bounded inverse operator \((S')^{-1}\) on the closed set R(S′) = R(S) ⊕ U′. The theorem is proved.

Remarks

  1.

    If B′ is a positive bounded self-adjoint operator in U′, then obviously condition (34) holds.

  2.

    The operator B′ is defined by the formula

    $$\displaystyle \begin{aligned} B'u\;=\;(S')^{-1}u-S_\mu^{-1}u\,,\qquad u\in U'\,. {} \end{aligned} $$
    (36)

    Indeed, according to (9), \(B=(S^{\prime }_\mu )^{-1}-S_\mu ^{-1}\), and this formula coincides with (36) on U′.

We now consider the issue of self-adjoint extensions of the operator S′. Let \(U_0\) be a subspace in \(\overline {U}\) and \(U_1=U'\oplus (\overline {U}\ominus U_0)\). We denote \(\mathscr {H}\ominus U_0\) by \(\mathscr {H}_+\): \(\mathscr {H}_+=\mathscr {H}\ominus U_0\), and the projection operator onto \(\mathscr {H}_+\) by \(P_+\).

Theorem 6

In order for the operator \(\widetilde {S}\) to be a self-adjoint extension of the operator S′, it is necessary and sufficient that \(\widetilde {S}\) be defined as a restriction of \(S^*\) to the direct sum

$$\displaystyle \begin{aligned} \mathscr{D}(\widetilde{S})\;=\;\mathscr{D}(S)\dotplus(S_\mu^{-1}+\widetilde{B})\widetilde{U}_1\dotplus U_0\,; {} \end{aligned} $$
(37)

here \(\widetilde {U}_1=\mathscr {D}(\widetilde {B})\) is a set dense in \(U_1\), and \(\widetilde {B}\) is a self-adjoint extension to \(U_1\) of the symmetric operator \(P_+B'\) defined on U′.

Proof

Necessity

If the operator \(\widetilde {S}\) is a self-adjoint extension of S′ then obviously \(\widetilde {S}\supset S\), \(\widetilde {S}\subset S^*\) and, according to Theorem 1,

$$\displaystyle \begin{aligned} \mathscr{D}(\widetilde{S})\;=\;\mathscr{D}(S)\dotplus(S_\mu^{-1}+B)\widetilde{U}_1\dotplus U_0\,. \end{aligned}$$

The meaning of the notation is the same as in Theorem 1. The corresponding subspaces \(U_0\) and \(U_1\) satisfy the conditions of the theorem being proved. Indeed,

$$\displaystyle \begin{aligned} U_0\;=\;\mathscr{H}\ominus\overline{R(S)}\;\subset\;\mathscr{H}\ominus R(S')\;=\;\overline{U} \end{aligned}$$

and

$$\displaystyle \begin{aligned} U_1\;=\;U\ominus U_0\;=\;(U'\oplus\overline{U})\ominus U_0\;=\; U'\oplus (\overline{U}\ominus U_0)\,. \end{aligned}$$

It remains to show that the operator B is an extension of the operator \(P_+B'\). To this aim, we note that the operators \(\widetilde {S}^{-1}\) and \(P_+(S')^{-1}\) coincide on U′ and, according to (9) and (36),

$$\displaystyle \begin{aligned} B\;=\;\widetilde{S}^{-1}-P_+ S_\mu^{-1} P_+\;=\;P_+(S')^{-1}-P_+S_\mu^{-1}\;=\; P_+\big((S')^{-1}-S_\mu^{-1}\big)\;=\; P_+B'\,. \end{aligned}$$

Necessity is proved.

Sufficiency

Let \(U_0\) be some subspace of \(\overline {U}\) and \(\widetilde {B}\) be a self-adjoint extension of the operator \(P_+B'\) on \(U_1\). Then formula (37) defines some operator \(\widetilde {S}\) which is a self-adjoint extension of the operator S. Let us show that \(\widetilde {S}\supset S'\). Let \(g'\in \mathscr {D}(S')\). According to Theorem 5, \(g'=f_0+S_\mu ^{-1}u'+B'u'\), \(f_0\in \mathscr {D}(S)\), u′∈ U′. We denote \(B'u'-P_+ B'u'\) by \(u_0\). Obviously, \(u_0\in U_0\). Now let us represent g′ in the form

$$\displaystyle \begin{aligned} g'\;=\;f_0+S_\mu^{-1}u'+P_+ B'u' +u_0\;=\;f_0+(S_\mu^{-1}+\widetilde{B})u'+u_0\,. \end{aligned}$$

Since \(u'\in U'\subset U_1\), \(u'\in \mathscr {D}(\widetilde {B})\), and \(u_0\in U_0\), then according to (37) \(g'\in \mathscr {D}(\widetilde {S})\). Thus, \(\widetilde {S}\supset S'\) and the theorem is proved.

Let us turn to the characterisation of the rigid extension \(S^{\prime }_\mu \) of the operator S′. According to Theorem 6, the domain of definition \(\mathscr {D}(S^{\prime }_\mu )\) can be decomposed into the direct sum

$$\displaystyle \begin{aligned} \mathscr{D}(S^{\prime}_\mu)\;=\;\mathscr{D}(S)\dotplus(S_\mu^{-1}+\widetilde{B}) U\,, \end{aligned}$$

where \(\widetilde {B}\) is some positive bounded self-adjoint extension of the operator B′. It is easy to see that the set of positive bounded self-adjoint extensions \(\widetilde {B}\) of the operator B′ on U is defined by the formula

$$\displaystyle \begin{aligned} \widetilde{B}\;=\;\overline{B}-G\,; \end{aligned}$$

here \(\overline {B}\) is one such extension (fixed) and G is an arbitrary self-adjoint operator in U which satisfies the conditions \((Gu,u)\leqslant (\overline {B}u,u)\) and GU′ = 0. The second condition is obviously equivalent to \(R(G)\subset \overline {U}\). Theorem 1 of M. G. Kreı̆n’s work [2] allows one to state that among the operators G with the mentioned properties there is a maximal operator \(G_\mu \). We obtain the minimal (lower) positive bounded extension of the operator B′ if we choose \(G_\mu \) as G. We denote this extension by \(B_\mu \). It follows from Theorem 5 of work [2] that \(B_\mu \) is the unique extension \(\widetilde {B}\) of the operator B′ for which U′ is dense in U in the norm

$$\displaystyle \begin{aligned} \|u\|{}_{\widetilde{B}}^2\;=\;(\widetilde{B}u,u)\,. \end{aligned}$$

After these remarks it is not difficult to prove the following theorem:

Theorem 7

The domain of definition of the rigid extension \(S^{\prime }_\mu \) of the operator S′ can be represented as the direct sum

$$\displaystyle \begin{aligned} \mathscr{D}(S^{\prime}_\mu)\;=\;\mathscr{D}(S)\dotplus(S_\mu^{-1}+ B_\mu)U\,, \end{aligned}$$

where B μ is the lower positive extension of the operator B′ on U.

Proof

Let us temporarily denote by \(\overline {S}\) the extension of the operator S′ constructed by means of \(B_\mu \) and show that \(\mathscr {D}(\overline {S})\subset \mathscr {D}[S']\). As was noted in Sect. 1, this will prove that \(\overline {S}=S^{\prime }_\mu \). Since

$$\displaystyle \begin{aligned} \mathscr{D}(\overline{S})\;=\;\mathscr{D}(S)\dotplus(S_\mu^{-1}+B_\mu)U \end{aligned}$$

and

$$\displaystyle \begin{aligned} \mathscr{D}(S)+S_\mu^{-1}U\;\subset\;\mathscr{D}(S_\mu)\;\subset\;\mathscr{D}[S]\;\subset\;\mathscr{D}[S']\,, \end{aligned}$$

it is enough to show that

$$\displaystyle \begin{aligned} B_\mu U\;=\; \mathscr{D}(B_\mu^{-1})\;\subset\;\mathscr{D}[S']\,. {} \end{aligned} $$
(38)

Since the extension \(\overline {S}\) is positive definite, we can set

$$\displaystyle \begin{aligned} \|g\|{}_{\overline{S}}^2\;=\;\overline{S}[g,g]\,, {} \end{aligned} $$
(39)

which introduces a norm in \(\mathscr {D}[\overline {S}]\). The set \(\mathscr {D}[S']\) is a subspace of \(\mathscr {D}[\overline {S}]\). Then for the proof of (38) it suffices to show that each element of \(B_\mu U\) can be approximated in the norm (39) by elements of \(\mathscr {D}[S']\). Let \(v=B_\mu u\), u ∈ U. According to the property of the lower extension, one can find a sequence \(\{u_n^{\prime }\}\subset U'\) such that \(\|u_n^{\prime }-u\|{ }_{B_\mu }^2\to 0\) as n → ∞. But the latter means that for v one has constructed a sequence of elements \(v^{\prime }_n=B_\mu u_n^{\prime }\) convergent to v in the \(\overline {S}\)-norm.

Indeed, according to (10),

$$\displaystyle \begin{aligned} \|u^{\prime}_n -u\|{}_{B_\mu}^2\;&=\;(B_\mu u-B_\mu u^{\prime}_n,u-u^{\prime}_n)\;=\;(B_\mu^{-1}v- B_\mu^{-1}v^{\prime}_n,v-v^{\prime}_n) \\ & =\;\overline{S}[v-v^{\prime}_n,v-v^{\prime}_n]\;\rightarrow\;0\qquad \text{as}\qquad n\to\infty\,, \end{aligned}$$

and it remains to prove that \(v^{\prime }_n\in \mathscr {D}[S']\). We temporarily denote by \(\overline {B}\) the operator corresponding to the rigid extension \(S^{\prime }_\mu \). Since \(v^{\prime }_n=B_\mu u^{\prime }_n=B' u^{\prime }_n=\overline {B} u^{\prime }_n\), then \(v^{\prime }_n\in \mathscr {D}(\overline {B}^{-1})\), and according to Lemma 1, \(v^{\prime }_n\in \mathscr {D}[S^{\prime }_\mu ]=\mathscr {D}[S']\). The theorem is proved.

Remark

If the operator B′ is self-adjoint in U′, then we obviously obtain its lower extension by extending it by zero to \(\overline {U}\). According to the theorem just proved, the obtained extension of the operator B′ allows one to construct the rigid extension \(S^{\prime }_\mu \) of the operator S′.

(Submitted to the editorial office on 13 November 1954.)

Notes to the Translation

  (i)

    Throughout Birman’s article, at the moment of choosing a subspace of the Hilbert space \(\mathscr {H}\), it is tacitly assumed that such a subspace is closed. Thus, for instance, in the statement of Theorem 1, \(U_1\) is a closed subspace of \(\operatorname {Ker} S^*\).

  (ii)

    Except for the preliminary remark at the beginning of Sect. 2, in Birman’s article there is no explicit notation to distinguish between a subspace of \(\mathscr {H}\) and its closure. Thus, in several orthogonal direct sums appearing in the text, such as \(\mathscr {H}=R(S)\oplus U\) and \(\mathscr {H}_+=R(S)\oplus U_1\) in the proof of Theorem 1, a summand appears which is not closed, as it should be according to the usual convention for “⊕”. In the above-mentioned example, \(U_1\), U, and \(\mathscr {H}_+\) are closed subspaces of \(\mathscr {H}\) but R(S) is not; thus one should have written \(\mathscr {H}=\overline {R(S)}\oplus U\) and \(\mathscr {H}_+=\overline {R(S)}\oplus U_1\). We warn the reader that unfortunately the “bar” notation is used in the original article both for the closure of a subspace in \(\mathscr {H}\) (beginning of Sect. 2) and for denoting a distinguished subspace (Sect. 4).

  (iii)

    Birman’s convention, kept in the translation, for an expression like “the operator A in the (Hilbert) subspace \(\mathscr {K}\)”, is to indicate that the possibly unbounded operator A has a domain dense in \(\mathscr {K}\) and maps \(\mathscr {K}\) into itself. This is the case, for instance, in Sect. 2 for the operator B (as a self-adjoint operator on the Hilbert space \(U_1\)) and for the restriction of \(\widetilde {S}^{-1}\) to the Hilbert space \(\mathscr {H}_+\).

  (iv)

    Despite the possible confusion, we kept Birman’s standard of using the same symbol for operators acting on different spaces. This is the case of B in the “small” space \(U_1\) and in the “large” space \(\mathscr {H}\), of \(B^{-1}\) in the “small” \(\overline {R(B)}\) and in the “large” \(\overline {R(B)}\oplus U_0\), and of \(\widetilde {S}^{-1}\) in the “small” \(\mathscr {H}_+\) and in the “large” \(\mathscr {H}\). Note also that \(B^{-1}\) and \(\widetilde {S}^{-1}\) are the inverses of the restrictions of B and \(\widetilde {S}\) to the orthogonal complements of their kernels.