1 Introduction

Almost 70 years ago, in his famous paper [47], M. G. Kreĭn proved that for a densely defined nonnegative operator A in a Hilbert space there are two extremal extensions of A, the Friedrichs (hard) extension \(A_F\) and the Kreĭn–von Neumann (soft) extension \(A_K\), such that every nonnegative selfadjoint extension \({{\widetilde{A}} }\) of A can be characterized by the following two inequalities:

$$\begin{aligned} (A_F+a)^{-1}\le (\widetilde{A}+a)^{-1}\le (A_K+a)^{-1}, \quad a>0. \end{aligned}$$

To obtain such a description he used Cayley transforms of the form

$$\begin{aligned} T_1=(I-A)(I+A)^{-1}, \quad T=(I-{{\widetilde{A}} })(I+{{\widetilde{A}} })^{-1}, \end{aligned}$$

to reduce the study of unbounded operators to the study of contractive selfadjoint extensions T of a hermitian nondensely defined contraction \(T_1\). In the study of contractive selfadjoint extensions of \(T_1\) he introduced a notion which is nowadays called “the shortening of a bounded nonnegative operator H to a closed subspace \({\mathfrak N}\)” of \({\mathfrak H}\) as the (unique) maximal element in the set

$$\begin{aligned} \{\,D \in [{\mathfrak H}] : 0 \le D \le H, \, \mathrm{ran\,}D \subset {\mathfrak N}\,\}, \end{aligned}$$
(1)

which is denoted by \(H_{\mathfrak N}\); cf. [3, 4, 57]. Here and in what follows the notation \([{\mathfrak H}_1,{\mathfrak H}_2]\) stands for the space of all bounded everywhere defined operators acting from \({\mathfrak H}_1\) to \({\mathfrak H}_2\); if \({\mathfrak H}={\mathfrak H}_1={\mathfrak H}_2\) then the shorter notation \([{\mathfrak H}]=[{\mathfrak H}_1,{\mathfrak H}_2]\) is used. By means of shortening of operators he proved the existence of minimal and maximal selfadjoint contractive extensions \(T_m\) and \(T_M\) of \(T_1\), and that T is a selfadjoint contractive extension of \(T_1\) if and only if \(T_m\le T\le T_M\).
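In finite dimensions the shortening can be computed explicitly: writing H as a block matrix with respect to \({\mathfrak N}^\perp \oplus {\mathfrak N}\), the operator \(H_{\mathfrak N}\) is the Schur complement placed in the \({\mathfrak N}\)-corner (a classical fact due to Anderson and Trapp, not proved here). The following numpy sketch, with toy matrices of our own choosing and ad hoc tolerances, checks the defining properties of the set (1).

```python
# Finite-dimensional sketch (toy data): the shortening H_N of a PSD
# matrix H to the last k coordinates, computed as a Schur complement.
import numpy as np

def shorted(H, k):
    """Shortening of a PSD matrix H to N = span of the last k coordinates."""
    n = H.shape[0]
    A, B, C = H[:n-k, :n-k], H[:n-k, n-k:], H[n-k:, n-k:]
    Hn = np.zeros_like(H)
    Hn[n-k:, n-k:] = C - B.T @ np.linalg.pinv(A) @ B   # Schur complement
    return Hn

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
H = M @ M.T                      # a nonnegative operator on R^4
Hn = shorted(H, 2)               # shortening to N = span(e3, e4)

assert np.all(np.linalg.eigvalsh(H - Hn) > -1e-8)               # Hn <= H
assert np.allclose(Hn[:2, :], 0) and np.allclose(Hn[:, :2], 0)  # ran Hn in N
```

The assertions confirm membership in the set (1); that \(H_{\mathfrak N}\) is the maximal such element is the content of the Anderson–Trapp theorem.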

Later the study of nonnegative selfadjoint extensions of \(A\ge 0\) was generalized to the case of nondensely defined operators \(A\ge 0\) by Ando and Nishio [5], as well as to the case of linear relations (multivalued linear operators) \(A\ge 0\) by Coddington and de Snoo [22]. Further studies followed this work of M. G. Kreĭn; the approach in terms of “boundary conditions” to the extensions of a positive operator A was proposed by Vishik [63] and Birman [16]; an exposition of this theory based on the investigation of quadratic forms can be found in [2]. An approach to the extension theory of symmetric operators based on abstract boundary conditions was initiated even earlier by Calkin [21] under the name of reduction operators; later, and independently, the technique of boundary triplets was introduced to formalize the study of boundary value problems in the framework of general operator theory; see [20, 29, 31, 37, 43, 54]. The extension theory of unbounded symmetric Hilbert space operators and the related resolvent formulas, also originating from the work of Kreĭn [45, 46], see also e.g. [52], were later generalized to spaces with indefinite inner products in the well-known series of papers by Kreĭn and Langer, see e.g. [49, 50]; all of this has been further investigated, developed, and extensively applied in various other areas of mathematics and physics by numerous other researchers.

In spite of the long time span, natural extensions of the original results of Kreĭn in [47] seem not to be available in the literature. Obviously the most closely related result appears in Constantinescu and Gheondea [24], where for a given pair of a row operator \(T_r=(T_{11},T_{12})\in [{\mathfrak H}_1\oplus {\mathfrak H}_1',{\mathfrak H}_2]\) and a column operator \(T_c=\mathrm{col\,}(T_{11},T_{21})\in [{\mathfrak H}_1,{\mathfrak H}_2\oplus {\mathfrak H}_2']\) the problem for determining all possible operators \({{\widetilde{T}} }\in [{\mathfrak H}_1\oplus {\mathfrak H}_1',{\mathfrak H}_2\oplus {\mathfrak H}_2']\) acting from the Hilbert space \({\mathfrak H}_1\oplus {\mathfrak H}_1'\) to the Hilbert space \({\mathfrak H}_2\oplus {\mathfrak H}_2'\) such that

$$\begin{aligned} P_{{\mathfrak H}_2}{{\widetilde{T}} }=T_r, \quad {{\widetilde{T}} }{\upharpoonright \,}{\mathfrak H}_1=T_c, \end{aligned}$$

and such that the following negative index (number of negative eigenvalues) conditions are satisfied

$$\begin{aligned} \kappa _1:=\nu _-(I-{{\widetilde{T}} }^*{{\widetilde{T}} })=\nu _-(I-T_c^*T_c),\quad \kappa _2:=\nu _-(I-{{\widetilde{T}} }{{\widetilde{T}} }^*)=\nu _-(I-T_rT_r^*), \end{aligned}$$

is considered. The problem was solved in [24, Theorem 5.1] under the condition \(\kappa _1,\kappa _2<\infty \). The literature cited therein also contains a reference to an unpublished manuscript [53] by H. Langer and B. Textorius, where a similar problem for a given bounded hermitian column operator T has been investigated; see [53, Theorems 1.1, 2.8]Footnote 1 and [24, Section 6]. However, in these papers the existence of possible extremal extensions in the solution set in the spirit of [47], when it is nonempty, has not been investigated. Analogous results for unbounded symmetric operators with a fixed negative index also seem to be unavailable in the literature.

In this paper we study classes of “quasi-contractive” symmetric operators \(T_1\) with \(\nu _-(I-T_1^*T_1)<\infty \) as well as “quasi-nonnegative” operators A with \(\nu _-(A)<\infty \), together with the existence and description of all selfadjoint extensions T and \({{\widetilde{A}} }\) of them which preserve the given negative indices, \(\nu _-(I-T^2)=\nu _-(I-T_1^*T_1)\) and \(\nu _-({{\widetilde{A}} })=\nu _-(A)\), and we prove precise analogs of the above mentioned results of M. G. Kreĭn under a minimality condition on the negative indices \(\nu _-(I-T_1^*T_1)\) and \(\nu _-(A)\), respectively. It is an unexpected fact that, whenever the solution set is nonempty, it still contains a minimal solution and a maximal solution, which then describe the whole solution set via two operator inequalities, just as in the original paper of M. G. Kreĭn. The approach developed in this paper differs from the approach in [47]. In fact, a technique based on nonnegative completions of operators, appearing in papers by Kolmanovich and Malamud [44] and Hassi et al. [39], will be successfully generalized. In particular, we introduce a new class of completion problems for Hilbert space operators, whose solutions evidently admit a wider scope of applications than those appearing in the present paper.

The starting point in our approach is to establish a generalization of an old result due to Shmul’yan [59] on completions of nonnegative block operators, where the result was applied to introduce so-called Hellinger operator integrals. Our extension of this fundamental result is given in Sect. 2; see Theorem 1 (for the case \(\kappa <\infty \)) and Theorem 2 (for the case \(\kappa =\infty \)). In view of the various consequences appearing in later sections, these two results may be considered the most useful contributions of the present paper, with further possible applications to problems appearing elsewhere (see e.g. [4, 6, 27, 28, 58]).

In this paper we will extensively apply Theorem 1. In Sect. 3 this result is specialized to a class of block operators to characterize the occurrence of a minimal negative index for the so-called Schur complement, see Theorem 3. This result can also be viewed as a factorization result and, in fact, it yields a generalization of the well-known Douglas factorization of Hilbert space operators in [32], see Proposition 1, which is complemented by a generalization of Sylvester’s criterion on additivity of inertia on Schur complements in Proposition 2. In Sect. 4, Theorem 1, or its special case Theorem 3, is applied to solve lifting problems for J-contractive operators in Hilbert, Pontryagin and Kreĭn spaces in a new simple way, the most general version of which is formulated in Theorem 4: this result was originally proved in Constantinescu and Gheondea [23, Theorem 2.3] with the aid of [13, Theorem 5.3]; for special cases, see also Dritschel and Rovnyak [33, 34]. In the Hilbert space case the problem has been solved in [12, 25, 62]; further proofs and facts can be found e.g. in [8, 10, 19, 44, 55].

Section 5 contains the extension of the fundamental result of Kreĭn in [47], see Theorem 5, which characterizes the existence, and gives a description, of all selfadjoint extensions T of a bounded symmetric operator \(T_1\) satisfying the minimal index condition \(\nu _-(I-T^2)=\nu _-(I-T_{11}^2)\) by means of two extreme extensions via \(T_m\le T\le T_M\). In Sect. 6 selfadjoint extensions of unbounded symmetric operators, and symmetric relations, are studied under a similar minimality condition on the negative index \(\nu _-(A)\); the main result there is Theorem 8. It is a natural extension of the corresponding result of Kreĭn in [47]. The treatment here uses Cayley type transforms and hence is analogous to that in [47]. However, the existence of two extremal extensions in this setting and the validity of all the operator inequalities appearing therein depend essentially on so-called “antitonicity results”, proved only very recently for semibounded selfadjoint relations in [15], concerning the correctness of the implication \(H_1\le H_2\) \(\Rightarrow \) \(H_1^{-1} \ge H_2^{-1}\) in the case that \(H_1\) and \(H_2\) have some finite negative spectra. In this section analogs of the so-called Kreĭn uniqueness criterion for the equality \(T_{m}=T_{M}\) are also established.

2 A completion problem for block operators

By definition the modulus |C| of a closed operator C is the nonnegative selfadjoint operator \(|C|=(C^*C)^{1/2}\). Every closed operator admits a polar decomposition \(C=U|C|\), where U is a (unique) partial isometry with the initial space \(\mathrm{\overline{ran}\,}|C|\) and the final space \(\mathrm{\overline{ran}\,}C\), cf. [42]. For a selfadjoint operator \(H=\int _{{\mathbb R}} t\, dE_t\) in a Hilbert space \({\mathfrak H}\) the partial isometry U can be identified with the signature operator, which can be taken to be unitary: \(J=\mathrm{sign\,}(H)=\int _{{\mathbb R}}\,\mathrm{sign\,}(t)\,dE_t\), in which case one should define \(\mathrm{sign\,}(t)=1\) if \(t\ge 0\) and otherwise \(\mathrm{sign\,}(t)=-1\).
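As a small numerical sketch (toy matrix of our own choosing): the signature operator and the modulus can be read off from the spectral decomposition, with the convention \(\mathrm{sign\,}(t)=1\) for \(t\ge 0\) adopted above.

```python
# Numerical sketch (toy matrix): signature operator J = sign(H) and
# modulus |H| from the spectral decomposition, with sign(0) := 1.
import numpy as np

H = np.array([[1.0, 2.0],
              [2.0, -3.0]])
w, V = np.linalg.eigh(H)
J = V @ np.diag(np.where(w >= 0, 1.0, -1.0)) @ V.T   # sign(H): unitary here
absH = V @ np.diag(np.abs(w)) @ V.T                  # |H| = (H^*H)^{1/2}

assert np.allclose(J @ absH, H)          # polar decomposition H = J|H|
assert np.allclose(J @ J, np.eye(2))     # J^2 = I: J is a unitary symmetry
```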

2.1 Completion to operator blocks with finite negative index

The following theorem solves a completion problem for a bounded incomplete block operator \(A^0\) of the form

$$\begin{aligned} A^0= \begin{pmatrix} A_{11}&{}\quad A_{12}\\ A_{21}&{}\quad *\end{pmatrix}: \begin{pmatrix} {\mathfrak H}_1\\ {\mathfrak H}_2 \end{pmatrix} \rightarrow \begin{pmatrix} {\mathfrak H}_1\\ {\mathfrak H}_2 \end{pmatrix} \end{aligned}$$
(2)

in the Hilbert space \({\mathfrak H}={\mathfrak H}_1\oplus {\mathfrak H}_2\).

Theorem 1

Let \({\mathfrak H}={\mathfrak H}_1\oplus {\mathfrak H}_2\) be an orthogonal decomposition of the Hilbert space \({\mathfrak H}\) and let \(A^0\) be an incomplete block operator of the form (2). Assume that \(A_{11}=A_{11}^*\) and \(A_{21}=A_{12}^*\) are bounded, \(\nu _-(A_{11})=\kappa <\infty \), where \(\kappa \in \mathbb {Z}_+\), and let \(J=\mathrm{sign\,}(A_{11})\) be the (unitary) signature operator of \(A_{11}\). Then:

  1. (1)

    There exists a completion \(A\in [{\mathfrak H}]\) of \(A^0\) with some operator \(A_{22}=A_{22}^*\in [{\mathfrak H}_2]\) such that \(\nu _-(A)=\nu _-(A_{11})=\kappa \) if and only if

    $$\begin{aligned} \mathrm{ran\,}A_{12}\subset \mathrm{ran\,}|A_{11}|^{1/2}. \end{aligned}$$
    (3)
  2. (2)

    If (3) is satisfied, then the operator \(S=|A_{11}|^{[-1/2]}A_{12}\), where \(|A_{11}|^{[-1/2]}\) denotes the (generalized) Moore–Penrose inverse of \(|A_{11}|^{1/2}\), is well defined and \(S\in [{\mathfrak H}_2,{\mathfrak H}_1]\). Moreover, \(S^*JS\) is the smallest operator in the solution set

    $$\begin{aligned} \mathcal {A}:=\{{A_{22}=A_{22}^*\in [{\mathfrak H}_2]: A=(A_{ij})_{i,j=1}^{2}\,\mathrm { satisfies }\,\nu _-(A)=\kappa }\} \end{aligned}$$
    (4)

    and this solution set admits a description as the (semibounded) operator interval given by

    $$\begin{aligned} \mathcal {A}=\{{A_{22}\in [{\mathfrak H}_2]:A_{22}=S^*JS+Y,\, Y=Y^*\ge 0}\}. \end{aligned}$$

Proof

(i) Assume that there exists a completion \(A_{22}\in \mathcal {A}\). Let \(\lambda _\kappa \le \lambda _{\kappa -1}\le \cdots \le \lambda _1<0\) be all the negative eigenvalues of \(A_{11}\) and let \(\varepsilon \) be such that \(|\lambda _1|>\varepsilon >0\). Then \(0\in \rho (A_{11}+\varepsilon )\) and hence one can write

$$\begin{aligned}&\begin{pmatrix} I&{}\quad 0\\ -A_{21}(A_{11}+\varepsilon )^{-1}&{}\quad I \end{pmatrix} \begin{pmatrix} A_{11}+\varepsilon &{}\quad A_{12}\\ A_{21}&{}\quad A_{22}+\varepsilon \end{pmatrix} \begin{pmatrix} I&{}-(A_{11}+\varepsilon )^{-1}A_{12}\\ 0&{}I \end{pmatrix} \nonumber \\&=\qquad \begin{pmatrix} A_{11}+\varepsilon &{}0\\ 0&{}A_{22}+\varepsilon -A_{21}(A_{11}+\varepsilon )^{-1}A_{12} \end{pmatrix} \end{aligned}$$
(5)

The operator on the right-hand side of (5) has \(\kappa \) negative eigenvalues if and only if

$$\begin{aligned} A_{21}(A_{11}+\varepsilon )^{-1}A_{12}\le A_{22}+\varepsilon \end{aligned}$$
(6)

or equivalently

$$\begin{aligned} \int \limits _{-\Vert A_{11}\Vert }^{\Vert A_{11}\Vert }(t+\varepsilon )^{-1}d\Vert E_tA_{12}f\Vert ^2\le \varepsilon \Vert f\Vert ^2+(A_{22}f,f), \end{aligned}$$
(7)

where \(E_t\) is the spectral family of \(A_{11}\) and \(f\in {\mathfrak H}_2\). We rewrite (7) in the form

$$\begin{aligned} \begin{array}{l} \int _{[-\Vert A_{11}\Vert ,0)} (t+\varepsilon )^{-1}d\Vert E_tA_{12}f\Vert ^2+ \int _{[0,\Vert A_{11}\Vert ]}(t+\varepsilon )^{-1}d\Vert E_tA_{12}f\Vert ^2 \\ \le \varepsilon \Vert f\Vert ^2+(A_{22}f,f). \end{array} \end{aligned}$$

This yields the estimate

$$\begin{aligned} \int _{[0,\Vert A_{11}\Vert ]} (t+\varepsilon )^{-1}d\Vert E_tA_{12}f\Vert ^2\le \varepsilon \Vert f\Vert ^2+(A_{22}f,f)-\frac{1}{\lambda _1+\varepsilon }\Vert A_{12}f\Vert ^2. \end{aligned}$$
(8)

By letting \(\varepsilon \searrow 0\) in (8) the monotone convergence theorem implies that

$$\begin{aligned} P_+A_{12}f\in \mathrm{ran\,}A_{11+}^{1/2}\subset \mathrm{ran\,}|A_{11}|^{1/2} \end{aligned}$$

for all \(f\in {\mathfrak H}_2\); here \(A_{11+}=\int _{[0,\Vert A_{11}\Vert ]} t\, dE_t\) stands for the nonnegative part of \(A_{11}\) and \(P_+=\int _{[0,\Vert A_{11}\Vert ]} \, dE_t\) is the orthogonal projection onto the corresponding closed subspace \(\mathrm{\overline{ran}\,}A_{11+}\). Since \(\mathrm{ran\,}(I-P_+)\) is the \(\kappa \)-dimensional spectral subspace of \(A_{11}\) corresponding to its negative spectrum, one concludes that

$$\begin{aligned} (I-P_+)A_{12}f\in \mathrm{ran\,}A_{11}\subset \mathrm{ran\,}|A_{11}|^{1/2} \end{aligned}$$

for all \(f\in {\mathfrak H}_2\). Therefore, \(\mathrm{ran\,}A_{12}\subset \mathrm{ran\,}|A_{11}|^{1/2}\).

Conversely, if \(\mathrm{ran\,}A_{12}\subset \mathrm{ran\,}|A_{11}|^{1/2}\), then the operator \(S:=|A_{11}|^{[-1/2]}A_{12}\) is well defined, closed and bounded, i.e., \(S\in [{\mathfrak H}_2,{\mathfrak H}_1]\). Since \(A_{12}=|A_{11}|^{1/2}S\) and \(A_{21}=S^*|A_{11}|^{1/2}\), the choice \(A_{22}=S^*JS\) yields a completion A of \(A^0\) admitting the factorization

$$\begin{aligned} A= \begin{pmatrix} |A_{11}|^{1/2}\\ S^*J \end{pmatrix}J \begin{pmatrix} |A_{11}|^{1/2}&{}\quad JS \end{pmatrix}, \end{aligned}$$
(9)

so that \(\nu _-(A)\le \nu _-(J)=\kappa \); since also \(\nu _-(A)\ge \nu _-(A_{11})=\kappa \), because \(A_{11}\) is a compression of A, one concludes that \(\nu _-(A)=\kappa \).

(ii) The proof of (i) shows that \(S=|A_{11}|^{[-1/2]}A_{12}\) is well defined and that \(A_{22}=S^*JS\in [{\mathfrak H}_2]\) gives a solution to the completion problem (2). Now

$$\begin{aligned} s-\lim _{\varepsilon \searrow 0}A_{21}(A_{11}+\varepsilon )^{-1}A_{12}=s-\lim _{\varepsilon \searrow 0}S^*|A_{11}|^{1/2}(A_{11}+\varepsilon )^{-1}|A_{11}|^{1/2}S=S^*JS \end{aligned}$$

and if \(A_{22}\) is an arbitrary operator in the set (4), then by letting \(\varepsilon \searrow 0\) in (6) one concludes that \(S^*JS\le A_{22}\). Therefore, \(S^*JS\) satisfies the desired minimality property.

To prove the last statement assume that \(Y\in [{\mathfrak H}_2]\) and that \(Y\ge 0\). Then \(A_{22}=S^*JS+Y\) inserted in \(A^0\) defines a block operator \(A_Y\) with \(A_Y\ge A_\mathrm{{min}}\), where \(A_\mathrm{{min}}\) denotes the completion corresponding to the minimal solution \(A_{22}=S^*JS\). In particular, \(\nu _-(A_Y)\le \nu _-(A_\mathrm{{min}})=\kappa <\infty \). On the other hand, it is clear from the formula

$$\begin{aligned} A_Y= \begin{pmatrix} |A_{11}|^{1/2}\\ S^*J \end{pmatrix}J \begin{pmatrix} |A_{11}|^{1/2}&{}JS\\ \end{pmatrix} + \begin{pmatrix} 0 &{}\quad 0 \\ 0 &{}\quad Y \\ \end{pmatrix} \end{aligned}$$
(10)

that the \(\kappa \)-dimensional eigenspace corresponding to the negative eigenvalues of \(A_{11}\) is \(A_Y\)-negative and, hence, \(\nu _-(A_Y)\ge \kappa \). Therefore, \(\nu _-(A_Y)=\kappa \) and \(A_{22}=S^*JS+Y\in \mathcal {A}\).

Notice that in the factorization \(A_{12}=|A_{11}|^{1/2}S\), S is uniquely determined under the condition \(\mathrm{ran\,}S\subset \mathrm{\overline{ran}\,}A_{11}\) (which implies that \(\mathrm{ker\,}A_{12}=\mathrm{ker\,}S\)); cf. [32].

In the case \(\kappa =0\) the result in Theorem 1 reduces to the well-known criterion concerning the completion of an incomplete block operator to a nonnegative operator; cf. [59]. In the case of matrices acting on a finite-dimensional Hilbert space, the result with \(\kappa >0\) has been proved very recently in the appendix of [28], where it was applied in solving indefinite truncated moment problems. In the present paper Theorem 1 will be one of the main tools for further investigations.
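The content of Theorem 1 can be tested numerically in finite dimensions, where condition (3) holds automatically whenever \(A_{11}\) is invertible. The following sketch (toy data of our own choosing, with ad hoc tolerances) builds the minimal solution \(S^*JS\) and checks the description of the solution set \(\mathcal {A}\) in terms of negative index counts.

```python
# Finite-dimensional sketch of Theorem 1 (toy data, not from the paper).
import numpy as np

def nu_minus(X):
    """Number of negative eigenvalues of a selfadjoint matrix."""
    return int(np.sum(np.linalg.eigvalsh(X) < -1e-10))

A11 = np.diag([-2.0, 1.0, 3.0])          # nu_-(A11) = kappa = 1
A12 = np.array([[1.0], [2.0], [0.0]])    # condition (3) holds: |A11| invertible
w, V = np.linalg.eigh(A11)
J = V @ np.diag(np.sign(w)) @ V.T                # signature operator sign(A11)
inv_sqrt = V @ np.diag(np.abs(w) ** -0.5) @ V.T  # |A11|^{[-1/2]}
S = inv_sqrt @ A12
A22_min = S.T @ J @ S                    # the minimal solution S*JS

def completion(A22):
    return np.block([[A11, A12], [A12.T, A22]])

assert nu_minus(completion(A22_min)) == nu_minus(A11) == 1
# every A22 = S*JS + Y with Y >= 0 again has nu_-(A) = kappa:
assert nu_minus(completion(A22_min + np.array([[5.0]]))) == 1
# while dropping below the minimal solution increases the negative index:
assert nu_minus(completion(A22_min - np.array([[1.0]]))) == 2
```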

2.2 Completion to operator blocks with an infinite negative index

The completion result in Theorem 1 is of general interest already in view of the substantial number of applications known in the case of nonnegative operators. In this section the completion problem is treated in the case \(\kappa =\infty \). For this purpose some further notions will be introduced.

Recall that a subspace \({\mathfrak M}\subset {\mathfrak H}\) is said to be uniformly A-negative if there exists a constant \(\nu >0\) such that \((Af,f)\le -\nu \Vert f\Vert ^2\) for all \(f\in {\mathfrak M}\). It is maximal uniformly A-negative if \({\mathfrak M}\) has no proper uniformly A-negative extension. The completion problem is now extended by requiring of the completions the following maximality property:

$$\begin{aligned} \text {There exists a subspace } {\mathfrak M}\subset {\mathfrak H}_1 \text { which is maximal uniformly } A\text {-negative}. \end{aligned}$$
(11)

Theorem 2

Let \(A^0\) be an incomplete block operator of the form (2) in the Hilbert space \({\mathfrak H}={\mathfrak H}_1\oplus {\mathfrak H}_2\). Let \(A_{11}=A_{11}^*\) and \(A_{21}=A_{12}^*\) be bounded, let \(J=\mathrm{sign\,}(A_{11})\) be the (unitary) signature operator of \(A_{11}\), and, in addition, assume that there is a spectral gap \((-\delta ,0)\subset \rho (A_{11})\), \(\delta >0\). Then:

  1. (i)

    There exists a completion \(A\in [{\mathfrak H}]\) of \(A^0\) with some operator \(A_{22}=A_{22}^*\) satisfying the condition (11) if and only if

    $$\begin{aligned} \mathrm{ran\,}A_{12}\subset \mathrm{ran\,}|A_{11}|^{1/2}. \end{aligned}$$
  2. (ii)

    If the condition in (i) is satisfied, then \(S=|A_{11}|^{[-1/2]}A_{12}\), where \(|A_{11}|^{[-1/2]}\) denotes the (generalized) Moore–Penrose inverse of \(|A_{11}|^{1/2}\), is well defined and \(S\in [{\mathfrak H}_2,{\mathfrak H}_1]\). Moreover, \(S^*JS\) is the smallest operator in the solution set

    $$\begin{aligned} \mathcal {A}:=\{{A_{22}=A_{22}^*\in [{\mathfrak H}_2]: A=(A_{ij})_{i,j=1}^{2}\,\mathrm { satisfies }\,(11) }\} \end{aligned}$$

    and this solution set admits a description as the (semibounded) operator interval given by

    $$\begin{aligned} \mathcal {A}=\{{A_{22}\in [{\mathfrak H}_2]: A_{22}=S^*JS+Y,\, Y=Y^*\ge 0}\}. \end{aligned}$$

Proof

To prove this result suitable modifications in the proof of Theorem 1 are needed. (i) First assume that \(A_{22}\in \mathcal {A}\) gives a desired completion for \(A^0\). If \(\varepsilon \in (0,\delta )\) then \(0\in \rho (A_{11}+\varepsilon )\) and therefore the block operator \((A_{ij})\) satisfies the formula (5). We claim that the condition (11) implies the inequality (6) for all sufficiently small values \(\varepsilon >0\). To see this let \({\mathfrak M}\subset {\mathfrak H}_1\) be a subspace for which the condition (11) is satisfied. Then \((A_{11}f,f)\le -\nu \Vert f\Vert ^2\) for some fixed \(\nu >0\) and for all \(f\in {\mathfrak M}\). Assume that (6) is not satisfied for some \(\varepsilon _0\) with \(0<\varepsilon _0<\min \{\nu ,\delta \}\). Then \(((A_{22}+\varepsilon _0-A_{21}(A_{11}+\varepsilon _0)^{-1}A_{12})v_0,v_0)<0\) holds for some vector \(v_0\in {\mathfrak H}_2\). Define \({\mathfrak L}=W_{\varepsilon _0}^{-1}({\mathfrak M}+\mathrm{span\,}\{v_0\})\), where

$$\begin{aligned} W_{\varepsilon _0}= \begin{pmatrix} I&{}\quad (A_{11}+\varepsilon _0)^{-1}A_{12}\\ 0&{}\quad I \end{pmatrix}. \end{aligned}$$

Clearly, \(W_{\varepsilon _0}\) is bounded with bounded inverse and it maps \({\mathfrak M}\) bijectively onto \({\mathfrak M}\), so that \({\mathfrak L}\) is a 1-dimensional extension of \({\mathfrak M}\). It follows from (5) that for all \(f\in {\mathfrak L}\),

$$\begin{aligned} (Af,f)+\varepsilon _0 \Vert f\Vert ^2 = \left( \begin{pmatrix} A_{11}+\varepsilon _0 &{}0\\ 0&{}A_{22}+\varepsilon _0 -A_{21}(A_{11}+\varepsilon _0)^{-1}A_{12} \end{pmatrix} u,u \right) <0, \end{aligned}$$

where \(u=W_{\varepsilon _0}f \in {\mathfrak M}+\mathrm{span\,}\{v_0\}\). Therefore, \({\mathfrak L}\) is a proper uniformly A-negative extension of \({\mathfrak M}\); a contradiction, which shows that (6) holds for all \(0<\varepsilon <\min \{\nu ,\delta \}\). Then, as in the proof of Theorem 1 it is seen that \(\mathrm{ran\,}A_{12}\subset \mathrm{ran\,}|A_{11}|^{1/2}\); note that in the estimate (8) \(\lambda _1\) is to be replaced by \(-\delta \).

Conversely, if \(\mathrm{ran\,}A_{12}\subset \mathrm{ran\,}|A_{11}|^{1/2}\), then \(S=|A_{11}|^{[-1/2]}A_{12}\in [{\mathfrak H}_2,{\mathfrak H}_1]\) and the block operator A in (9) gives a completion. To prove that A satisfies (11) observe that if \({\mathfrak M}\) is a uniformly A-negative subspace in \({\mathfrak H}\), then \(\begin{pmatrix} |A_{11}|^{1/2}&{}JS\\ \end{pmatrix}\) maps it bijectively onto a uniformly J-negative subspace in \({\mathfrak H}_1\). The spectral subspace corresponding to the negative spectrum of \(A_{11}\) is maximal uniformly J-negative in \({\mathfrak H}_1\) and also uniformly A-negative in \({\mathfrak H}\). By the above mapping property this subspace must be maximal uniformly A-negative in \({\mathfrak H}\).

(ii) If \(A_{22}=A_{22}^*\) defines a completion \(A\in [{\mathfrak H}]\) of \(A^0\) such that (11) is satisfied then by the proof of (i) the inequality (6) holds for all sufficiently small values \(\varepsilon >0\). Now the minimality property of \(S^*JS\) can be obtained in the same manner as in Theorem 1.

As to the last statement, again for every \(Y\in [{\mathfrak H}_2]\) with \(Y\ge 0\) the block operator \(A_Y\) defined in the proof of Theorem 1 satisfies \(A_Y\ge A_\mathrm{{min}}\). Hence, every uniformly \(A_Y\)-negative subspace is also uniformly \(A_\mathrm{{min}}\)-negative. Now it follows from the formula (10) that the spectral subspace corresponding to the negative spectrum of \(A_{11}\), which is maximal uniformly \(A_\mathrm{{min}}\)-negative, is also maximal uniformly \(A_Y\)-negative. Hence, \(A_Y\) satisfies (11) and \(A_{22}=S^*JS+Y\in \mathcal {A}\).

3 Some factorizations of operators with finite negative index

Theorems 1 and 2 provide a valuable tool for solving several other problems which do not initially present themselves as completion problems for a symmetric incomplete block operator. In this section it is shown that Theorem 1 (a) can be used to characterize the existence of certain J-contractive factorizations of operators via a minimal index condition; (b) implies an extension of the well-known Douglas factorization result, together with a certain specification of the Bognár–Krámli factorization; (c) yields an extension of a factorization result of Shmul’yan for J-bicontractions; (d) allows an extension of the classical Sylvester law of inertia for block operators, which was originally used to characterize nonnegativity of a bounded block operator via its Schur complement.

Some simple inertia formulas are now recalled. The factorization \(H=B^*EB\) clearly implies that \(\nu _\pm (H)\le \nu _\pm (E)\). If \(H_1\) and \(H_2\) are selfadjoint operators, then

$$\begin{aligned} H_1+H_2=\begin{pmatrix} I \\ I \end{pmatrix}^* \begin{pmatrix} H_1 &{}\quad 0 \\ 0 &{}\quad H_2 \end{pmatrix} \begin{pmatrix} I \\ I \end{pmatrix} \end{aligned}$$

shows that \(\nu _\pm (H_1+H_2)\le \nu _\pm (H_1)+\nu _\pm (H_2)\). Consider the selfadjoint block operator \(H\in [{\mathfrak H}_1\oplus {\mathfrak H}_2]\) of the form

$$\begin{aligned} H=\begin{pmatrix} A &{}\quad B^* \\ B &{}\quad J_2 \end{pmatrix}, \end{aligned}$$
(12)

where \(J_2=J_2^*=J_2^{-1}\). Applying the above mentioned inequalities shows that

$$\begin{aligned} \nu _\pm (A)\le \nu _\pm (A-B^*J_2B)+\nu _\pm (J_2). \end{aligned}$$
(13)
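Both inertia inequalities just recalled are easy to confirm numerically; in the following sketch (matrices chosen ad hoc, not from the paper) the negative index is counted from eigenvalues with a small tolerance.

```python
# Numerical check (toy data) of the inertia inequalities above.
import numpy as np

def nu_minus(X):
    """Count negative eigenvalues of a selfadjoint matrix."""
    return int(np.sum(np.linalg.eigvalsh(X) < -1e-10))

# subadditivity: nu_-(H1 + H2) <= nu_-(H1) + nu_-(H2)
H1 = np.diag([-1.0, 2.0])
H2 = np.diag([3.0, -4.0])
assert nu_minus(H1 + H2) <= nu_minus(H1) + nu_minus(H2)

# inequality (13) for the block operator (12):
J2 = np.diag([1.0, -1.0])
A = np.diag([-3.0, -1.0, 2.0])
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.5, 0.0]])
schur = A - B.T @ J2 @ B          # the Schur complement of J2 in (12)
assert nu_minus(A) <= nu_minus(schur) + nu_minus(J2)
```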

Assuming that \(\nu _-(A-B^*J_2B)\) and \(\nu _-(J_2)\) are finite, it is of particular interest to ask when \(\nu _-(A)\) attains its maximum in (13), or equivalently, when \(\nu _-(A-B^*J_2B)\), which satisfies \(\nu _-(A-B^*J_2B)\ge \nu _-(A)-\nu _-(J_2)\), attains its minimum. The next result characterizes this situation as an application of Theorem 1. Recall that if \(A=J_A |A|\) is the polar decomposition of A, then one can interpret \({\mathfrak H}_A=(\mathrm{\overline{ran}\,}A,J_A)\) as a Kreĭn space generated on \(\mathrm{\overline{ran}\,}A\) by the fundamental symmetry \(J_A=\mathrm{sign\,}(A)\).

Theorem 3

Let \(A\in [{\mathfrak H}_1]\) be selfadjoint, \(B\in [{\mathfrak H}_1,{\mathfrak H}_2]\), \(J_2=J_2^*=J_2^{-1}\in [{\mathfrak H}_2]\), and assume that \(\nu _-(A),\nu _-(J_2)<\infty \). If the equality

$$\begin{aligned} \nu _-(A) = \nu _-(A-B^*J_2B)+\nu _-(J_2) \end{aligned}$$
(14)

holds, then \(\mathrm{ran\,}B^*\subset \mathrm{ran\,}|A|^{1/2}\) and \(B^*=|A|^{1/2}K\) for a unique operator \(K\in [{\mathfrak H}_2,{\mathfrak H}_A]\) which is J-contractive: \(J_2-K^*J_A K\ge 0\).

Conversely, if the equality \(B^*=|A|^{1/2}K\) holds for some J-contractive operator \(K\in [{\mathfrak H}_2,\mathrm{\overline{ran}\,}A]\), then the equality (14) is satisfied.

Proof

Assume that (14) is satisfied. The factorization

$$\begin{aligned} H=\begin{pmatrix} A &{}\quad B^* \\ B &{}\quad J_2 \end{pmatrix} = \begin{pmatrix} I &{}\quad B^*J_2\\ 0 &{}\quad I \end{pmatrix} \begin{pmatrix} A-B^* J_2B&{}\quad 0 \\ 0 &{}\quad J_2 \end{pmatrix} \begin{pmatrix} I &{}\quad 0 \\ J_2 B &{}\quad I \end{pmatrix} \end{aligned}$$

shows that \(\nu _-(H)=\nu _-(A-B^* J_2B)+\nu _-(J_2)\), which combined with the equality (14) gives \(\nu _-(H)=\nu _-(A)\). Therefore, by Theorem 1 one has \(\mathrm{ran\,}B^*\subset \mathrm{ran\,}|A|^{1/2}\) and this is equivalent to the existence of a unique operator \(K\in [{\mathfrak H}_2,\mathrm{\overline{ran}\,}A]\) such that \(B^*=|A|^{1/2}K\); i.e. \(K=|A|^{[-1/2]}B^*\). Furthermore, \(K^*J_{A}K\le J_2\) by the minimality property of \(K^*J_{A}K\) in Theorem 1; in other words, K is a J-contraction.

Conversely, if \(B^*=|A|^{1/2}K\) for some J-contraction \(K\in [{\mathfrak H}_2,\mathrm{\overline{ran}\,}A]\), then clearly \(\mathrm{ran\,}B^*\subset \mathrm{ran\,}|A|^{1/2}\). By Theorem 1 the completion problem for \(H^{0}\) has solutions with the minimal solution \(S^*J_{A}S\), where

$$\begin{aligned} S=|A|^{[-1/2]}B^*=|A|^{[-1/2]}|A|^{1/2}K=K. \end{aligned}$$

Furthermore, by J-contractivity of K one has \(K^*J_{A}K\le J_2\), i.e. \(J_2\) is also a solution and thus \(\nu _-(H)=\nu _-(A)\) or, equivalently, the equality (14) is satisfied.

While Theorem 3 is obtained as a direct consequence of Theorem 1, it will be shown in the next section that this result yields simple solutions to a wide class of lifting problems for contractions in Hilbert, Pontryagin and Kreĭn space settings.
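A finite-dimensional sanity check of the converse direction of Theorem 3, with data entirely of our own choosing: starting from a J-contractive K, the operator B with \(B^*=|A|^{1/2}K\) produces equality in (14).

```python
# Toy verification of the converse direction of Theorem 3.
import numpy as np

def nu_minus(X):
    return int(np.sum(np.linalg.eigvalsh(X) < -1e-10))

A = np.diag([-4.0, 1.0])            # selfadjoint, nu_-(A) = 1
JA = np.diag([-1.0, 1.0])           # sign(A)
sqrtA = np.diag([2.0, 1.0])         # |A|^{1/2}
J2 = np.array([[-1.0]])             # one-dimensional H2, nu_-(J2) = 1

K = np.array([[1.5], [0.5]])        # J-contractive: J2 - K^* J_A K >= 0
assert np.all(np.linalg.eigvalsh(J2 - K.T @ JA @ K) > -1e-12)

B = K.T @ sqrtA                     # i.e. B^* = |A|^{1/2} K
schur = A - B.T @ J2 @ B
# equality (14): nu_-(A) = nu_-(A - B^*J2B) + nu_-(J2)
assert nu_minus(A) == nu_minus(schur) + nu_minus(J2) == 1
```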

Before deriving the next result some inertia formulas for a class of selfadjoint block operators are recalled. Consider the following two representations

$$\begin{aligned} \begin{pmatrix} J_1&{}\quad T^*\\ T &{}\quad J_2 \end{pmatrix}&=\begin{pmatrix} I&{}0\\ TJ_1&{}I \end{pmatrix} \begin{pmatrix} J_1&{}\quad 0\\ 0&{}\quad J_2-TJ_1T^* \end{pmatrix} \begin{pmatrix} I&{}\quad J_1T^*\\ 0&{}\quad I \end{pmatrix}\\&=\begin{pmatrix} I&{}\quad T^*J_2\\ 0&{}\quad I \end{pmatrix} \begin{pmatrix} J_1-T^*J_2 T&{}\quad 0\\ 0&{}\quad J_2 \end{pmatrix} \begin{pmatrix} I&{}\quad 0\\ J_2T&{}\quad I \end{pmatrix}, \end{aligned}$$

where \(J_i=J_i^*=J_i^{-1}\), \(i=1,2\). Since here the triangular operators are bounded with bounded inverse, one concludes that \(\mathrm{ran\,}(J_2-TJ_1T^*)\) is closed if and only if \(\mathrm{ran\,}(J_1-T^*J_2 T)\) is closed. Furthermore, one gets the following inertia formulas; cf. e.g. [13, Proposition 3.1].

Lemma 1

With the above notations one has

$$\begin{aligned} \nu _\pm (J_1-T^*J_2T)+\nu _\pm (J_2)=\nu _\pm (J_2-TJ_1T^*)+\nu _\pm (J_1), \end{aligned}$$
$$\begin{aligned} \nu _0(J_1-T^*J_2T)=\nu _0(J_2-TJ_1T^*). \end{aligned}$$
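Since the formulas of Lemma 1 hold for an arbitrary bounded T, they are easy to spot-check numerically; the example below uses a choice of \(J_1\), \(J_2\) and T that is ours, not the paper's.

```python
# Numerical spot-check of Lemma 1 (toy symmetries and T).
import numpy as np

def inertia(X):
    """(nu_-, nu_0, nu_+) of a selfadjoint matrix, with a small tolerance."""
    w = np.linalg.eigvalsh(X)
    tol = 1e-10
    return (int((w < -tol).sum()), int((abs(w) <= tol).sum()), int((w > tol).sum()))

J1 = np.diag([1.0, -1.0])
J2 = np.diag([1.0, 1.0, -1.0])
T = np.array([[2.0, 0.0],
              [0.0, 0.5],
              [0.3, 0.0]])

nL, zL, pL = inertia(J1 - T.T @ J2 @ T)
nR, zR, pR = inertia(J2 - T @ J1 @ T.T)
n1, _, p1 = inertia(J1)
n2, _, p2 = inertia(J2)
assert nL + n2 == nR + n1     # first formula, minus signs
assert pL + p2 == pR + p1     # first formula, plus signs
assert zL == zR               # second formula
```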

The next result contains two general factorization results: assertion (i) contains an extension of the well-known Douglas factorization, see [32, 35], and assertion (ii) is a specification of the so-called Bognár–Krámli factorization, see [18]: \(A=B^*J_2B\) holds for some bounded operator B if and only if \(\nu _\pm (J_2)\ge \nu _\pm (A)\).

Proposition 1

Let A, B, and \(J_2\) be as in Theorem 3, and let \(\nu _-(A)=\nu _-(J_2)<\infty \). Then:

  1. (i)

    The inequality

    $$\begin{aligned} A\ge B^*J_2 B \end{aligned}$$
    (15)

    holds if and only if \(B=C|A|^{1/2}\) for some J-contractive operator \(C\in [{\mathfrak H}_A,{\mathfrak H}_2]\); in this case C is unique and, in addition, J-bicontractive, i.e., \(J_A-C^*J_2 C\ge 0\) and \(J_2-CJ_A C^*\ge 0\).

  2. (ii)

    The equality

    $$\begin{aligned} A = B^*J_2 B \end{aligned}$$
    (16)

    holds if and only if \(B=C|A|^{1/2}\) for some J-isometric operator \(C\in [{\mathfrak H}_A,{\mathfrak H}_2]\); again C is unique. In addition, C is unitary if and only if \(\mathrm{ran\,}B\) is dense in \({\mathfrak H}_2\).

Proof

(i) The inequality (15) means that \(\nu _-(A - B^*J_2 B)=0\). Hence the assumption \(\nu _-(A)=\nu _-(J_2)<\infty \) implies the equality (14). Therefore, the desired factorization for B is obtained from Theorem 3. Conversely, if \(B=C|A|^{1/2}\) for some J-contractive operator C then (14) holds by Theorem 3 and the assumption \(\nu _-(A)=\nu _-(J_2)<\infty \) implies that \(\nu _-(A - B^*J_2 B)=0\).

The fact that C is actually J-bicontractive follows directly from Lemma 1.

(ii) Assume that (16) holds. Then by part (i) it remains to prove that in the factorization \(B=C|A|^{1/2}\) the operator C is isometric. Substituting \(B=C|A|^{1/2}\) into (16) gives

$$\begin{aligned} A=|A|^{1/2}C^*J_2C|A|^{1/2}. \end{aligned}$$

Since \(\mathrm{dom\,}C,\, \mathrm{ran\,}C^* \subset \mathrm{\overline{ran}\,}A\) and \(A=|A|^{1/2}J_A|A|^{1/2}\), the previous identity implies the equality \(J_A=C^*J_2C\), i.e., C is J-isometric. Conversely, if C is J-isometric then clearly (16) holds.

Since \(B=C|A|^{1/2}\) and \(C\in [{\mathfrak H}_A,{\mathfrak H}_2]\), it is clear that B has dense range in \({\mathfrak H}_2\) precisely when the range of C is dense in \({\mathfrak H}_2\). In this case the (Kreĭn space) adjoint \(C^{[*]}\) is a bounded operator with \(\mathrm{dom\,}C^{[*]}={\mathfrak H}_2\). Since C is J-isometric, one has \(C^{-1}\subset C^{[*]}\), and thus \(C^{-1}\) is also bounded, densely defined and closed. Thus the equality \(C^{-1}=C^{[*]}\) holds, i.e., C is J-unitary. Conversely, if C is unitary then \(C^{-1}=C^{[*]}\) holds and \(\mathrm{ran\,}C=\mathrm{dom\,}C^{[*]}={\mathfrak H}_2\). Consequently, \(\mathrm{ran\,}B=\mathrm{ran\,}C|A|^{1/2}\) is dense in \({\mathfrak H}_2\).

If, in particular, \(\nu _-(A)=\nu _-(J_2)=0\) then \(0\le A\le B^*B\) and Proposition 1 combined with Theorem 1 yields the factorization and range inclusion results proved in [32, Theorem 1] with A replaced by \(A^*A\). In particular, notice that if \(\mathrm{ran\,}B^*\subset \mathrm{ran\,}|A|^{1/2}\), then already Theorem 1 alone implies that \(S=|A|^{[-1/2]}B^*\) is bounded and hence \(B^*B=|A|^{1/2}SS^*|A|^{1/2}\le \Vert S\Vert ^2 A\).
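In finite dimensions the definite case is easy to experiment with. The following sketch (an added illustration, not part of the original text; `neg_index`, which counts negative eigenvalues, is a hypothetical helper) builds \(B=C|A|^{1/2}\) from a contraction C with \(J_2=I\) and recovers the unique contraction of the factorization:

```python
import numpy as np

def neg_index(M, tol=1e-10):
    """nu_-(M): the number of negative eigenvalues of a Hermitian matrix."""
    return int(np.sum(np.linalg.eigvalsh((M + M.T) / 2) < -tol))

rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((n, n))
A = X.T @ X + np.eye(n)                      # A > 0, so nu_-(A) = nu_-(J_2) = 0
w, V = np.linalg.eigh(A)
Asqrt = V @ np.diag(np.sqrt(w)) @ V.T        # |A|^{1/2} (= A^{1/2} here)

C0 = rng.standard_normal((n, n))
C = C0 / max(1.0, np.linalg.norm(C0, 2))     # a contraction, ||C|| <= 1
B = C @ Asqrt                                # then A - B*B >= 0, i.e. (15) with J_2 = I

assert neg_index(A - B.T @ B) == 0
C_rec = B @ np.linalg.inv(Asqrt)             # recover the unique contraction C
assert np.linalg.norm(C_rec, 2) <= 1 + 1e-10
assert np.allclose(C_rec, C)
```

Since A is invertible here, the contraction is recovered by a plain inverse; in the general case \(|A|^{[-1/2]}\) denotes the Moore–Penrose inverse.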

Assertions in part (ii) of Proposition 1 can be found in the literature with a different proof. In fact, the first statement in (ii) appears in [13, Proposition 2.1, Corollary 2.6], while the second statement in (ii) is proved in [23, Corollary 1.3]. Another extension of Douglas' factorization result can be found in [58].

For a general treatment of isometric (not necessarily densely defined) operators and isometric relations appearing in the proof of Proposition 1 the reader is referred to [14], [26, Section 2], and [27].

A slightly different viewpoint on Proposition 1 is provided by the following statement, which can be viewed as an extension of a theorem of Shmul'yan, see [60, Theorem 3], on the factorization of bicontractions on Kreĭn spaces; for a related abstract Leech theorem, see [34, Section 3.4].

Corollary 1

Let \(A\in [{\mathfrak H}_1]\) be selfadjoint, let \(B\in [{\mathfrak H}_1,{\mathfrak H}_2]\), and let \(J_2=J_2^*=J_2^{-1}\in [{\mathfrak H}_2]\) with \(\nu _-(J_2)<\infty \). Then:

  1. (i)
    $$\begin{aligned} A\ge B^*J_2 B \quad \text {and}\quad \nu _-(A) = \nu _-(J_2) \end{aligned}$$

    if and only if \(B=C|A|^{1/2}\) for some J-bicontractive operator \(C\in [{\mathfrak H}_A,{\mathfrak H}_2]\); in this case C is unique.

  2. (ii)
    $$\begin{aligned} A=B^*J_2 B \quad \text {and}\quad \nu _-(A) = \nu _-(J_2) \end{aligned}$$

    if and only if \(B=C|A|^{1/2}\) for some J-bicontractive operator C which is also J-isometric, i.e., \(J_A-C^*J_2 C= 0\) and \(J_2-CJ_A C^*\ge 0\); again C is unique.

Proof

Observe that if C is J-bicontractive, then an application of Lemma 1 shows that \(\nu _-(J_2)=\nu _-(J_A)=\nu _-(A)\). Now the stated equivalences can be obtained from Proposition 1.

This section is finished with an extension of Sylvester’s law of inertia, which is actually obtained as a consequence of Theorem 1.

Proposition 2

Let \(A=(A_{ij})_{i,j=1}^{2}\) be an arbitrary selfadjoint block operator in \({\mathfrak H}={\mathfrak H}_1\oplus {\mathfrak H}_2\), which satisfies the range inclusion (3), and let \(S=|A_{11}|^{[-1/2]}A_{12}\). Then \(\nu _-(A)<\infty \) if and only if \(\nu _-(A_{11})<\infty \) and \(\nu _-(A_{22}-S^*JS)<\infty \); in this case

$$\begin{aligned} \nu _-(A)=\nu _-(A_{11})+\nu _-(A_{22}-S^*JS). \end{aligned}$$

In particular, \(A\ge 0\) if and only if \(\mathrm{ran\,}A_{12}\subset \mathrm{ran\,}|A_{11}|^{1/2}\), \(A_{11}\ge 0\), and \(A_{22}-S^*JS\ge 0\).

Proof

By the assumption (3) \(S=|A_{11}|^{[-1/2]}A_{12}\) is an everywhere defined bounded operator and, since \(A_{11}=|A_{11}|^{1/2}J|A_{11}|^{1/2}\) (cf. Theorem 1), the following equality holds:

$$\begin{aligned} A=\begin{pmatrix} |A_{11}|^{1/2} &{}\quad 0\\ S^*J&{}\quad I \end{pmatrix} \begin{pmatrix} J &{}\quad 0\\ 0&{}\quad A_{22}-S^*JS \end{pmatrix} \begin{pmatrix} |A_{11}|^{1/2} &{}\quad JS \\ 0&{}\quad I \end{pmatrix}, \end{aligned}$$

i.e. \(A=B^*EB\) where E stands for the diagonal operator with \(\nu _-(E)=\nu _-(A_{11})+\nu _-(A_{22}-S^*JS)\) and the triangular operator B on the right side is bounded and has dense range in \(\mathrm{\overline{ran}\,}A_{11}\oplus {\mathfrak H}_2\). Clearly, \(\nu _-(A)\le \nu _-(E)\) and it remains to prove that if \(\nu _-(A)<\infty \) then \(\nu _-(A)=\nu _-(E)\).

To see this assume that \(\nu _-(A)<\nu _-(E)\). We claim that \(\mathrm{ran\,}B\) contains an E-negative subspace \({\mathfrak L}\) with dimension \(\mathrm{dim\,}{\mathfrak L}>\nu _-(A)\). Assume the converse and let \({\mathfrak L}\subset \mathrm{ran\,}B\) be a maximal E-negative subspace with \(\mathrm{dim\,}{\mathfrak L}\le \nu _-(A)\). Then \((E{\mathfrak L})^\perp \) must be E-nonnegative, since if \(v\perp E{\mathfrak L}\) and \((Ev,v)<0\), then \({\mathfrak L}+\mathrm{span\,}\{v\}\) would be a proper E-negative extension of \({\mathfrak L}\). Since \(E{\mathfrak L}\) is finite dimensional and \(\mathrm{ran\,}B\) is dense in \(\mathrm{\overline{ran}\,}A_{11}\oplus {\mathfrak H}_2\), \(\mathrm{ran\,}B\) has dense intersection with \((\mathrm{\overline{ran}\,}A_{11}\oplus {\mathfrak H}_2)\ominus E{\mathfrak L}\), and hence the closure of this subspace is also E-nonnegative. Consequently, \(\nu _-(E)\le \mathrm{dim\,}{\mathfrak L}\le \nu _-(A)\), a contradiction with the assumption \(\nu _-(E)>\nu _-(A)\). This proves the claim that \(\mathrm{ran\,}B\) contains an E-negative subspace \({\mathfrak L}\) with \(\mathrm{dim\,}{\mathfrak L}>\nu _-(A)\). However, then the subspace \({\mathfrak L}'=\{u\in \mathrm{\overline{ran}\,}A_{11}\oplus {\mathfrak H}_2:Bu\in {\mathfrak L}\}\) satisfies \(\mathrm{dim\,}{\mathfrak L}'\ge \mathrm{dim\,}{\mathfrak L}\) and, moreover, \({\mathfrak L}'\) is A-negative: \((Au,u)=(EBu,Bu)<0\) for \(u\in {\mathfrak L}'\), \(u\ne 0\). Thus, \(\nu _-(A)\ge \mathrm{dim\,}{\mathfrak L}\), a contradiction with \(\mathrm{dim\,}{\mathfrak L}>\nu _-(A)\). This completes the proof.

Proposition 2 completes Theorem 1: if \(\mathrm{ran\,}A_{12}\subset \mathrm{ran\,}|A_{11}|^{1/2}\) then \(A_{11}=J|A_{11}|\) and \(A_{12}=|A_{11}|^{1/2}S\) imply that \(A_{21}|A_{11}|^{[-1/2]}J|A_{11}|^{[-1/2]}A_{12}=S^*JS\). Hence the negative index of A can be calculated by using the following version of a generalized Schur complement or a shorted operator (defined initially for a nonnegative operator H in (1))

$$\begin{aligned} A_{{\mathfrak H}_2}:=\begin{pmatrix} 0 &{}\quad 0\\ 0&{} \quad A_{22}-S^*JS \end{pmatrix} \end{aligned}$$
(17)

via the explicit formula

$$\begin{aligned} \nu _-(A)=\nu _-(A_{11})+\nu _-(A_{22}-A_{21}|A_{11}|^{[-1/2]}J|A_{11}|^{[-1/2]}A_{12}). \end{aligned}$$
(18)

The addition made in Proposition 2 concerns selfadjoint operators \(A_{22}\) that are not solutions to the original completion problem for \(A^0\).
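When \(A_{11}\) is boundedly invertible the range inclusion (3) holds trivially and (18) reduces to the classical Haynsworth inertia additivity formula. A minimal numerical sketch (an added illustration, not from the original; `neg_index`, counting negative eigenvalues, is a hypothetical helper):

```python
import numpy as np

def neg_index(M, tol=1e-10):
    """nu_-(M): the number of negative eigenvalues of a Hermitian matrix."""
    return int(np.sum(np.linalg.eigvalsh((M + M.T) / 2) < -tol))

rng = np.random.default_rng(1)
n1, n2 = 3, 3
A11 = rng.standard_normal((n1, n1)); A11 = (A11 + A11.T) / 2
A12 = rng.standard_normal((n1, n2))
A22 = rng.standard_normal((n2, n2)); A22 = (A22 + A22.T) / 2
A = np.block([[A11, A12], [A12.T, A22]])

# J = sign(A11) and S = |A11|^{-1/2} A12; A11 is (generically) invertible,
# so the range inclusion (3) holds automatically.
w, V = np.linalg.eigh(A11)
J = V @ np.diag(np.sign(w)) @ V.T
S = (V @ np.diag(1 / np.sqrt(np.abs(w))) @ V.T) @ A12

# Formula (18): nu_-(A) = nu_-(A11) + nu_-(A22 - S* J S)
schur = A22 - S.T @ J @ S
assert neg_index(A) == neg_index(A11) + neg_index(schur)
```

Note that for invertible \(A_{11}\) one has \(|A_{11}|^{-1/2}J|A_{11}|^{-1/2}=A_{11}^{-1}\), so `schur` is the ordinary Schur complement \(A_{22}-A_{21}A_{11}^{-1}A_{12}\).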

The notion of a shorted operator in infinite dimensional Hilbert spaces has been extended to the case of not necessarily selfadjoint block operators in a paper by Antezana et al. [6]. These so-called bilateral shorted operators introduced and studied therein use two range inclusions, see [6, Definitions 3.5, 4.1], which in the selfadjoint case reduce to the single condition (3) appearing in Theorems 1 and 2.

4 Lifting of operators with finite negative index

As a first application of the completion problem solved in Sect. 2 it is shown how nicely some lifting results established in a series of papers by Arsene, Constantinescu, and Gheondea, see [12, 13, 23, 24], as well as in Dritschel and Rovnyak [33, 34] (see also further references appearing in these papers) on contractive operators with finite number of negative squares can be derived from Theorem 1.

For this purpose some standard notations are introduced. Let \(({\mathfrak H}_1,(\cdot ,\cdot )_{1})\) and \(({\mathfrak H}_2,(\cdot ,\cdot )_{2})\) be Hilbert spaces and let \(J_1\) and \(J_2\) be symmetries in \({\mathfrak H}_1\) and \({\mathfrak H}_2\), i.e. \(J_i=J_i^*=J_i^{-1}\), so that \(({\mathfrak H}_i,(J_i\cdot ,\cdot )_{i})\), \(i=1,2\), becomes a Kreĭn space. Then associate with \(T\in [{\mathfrak H}_1,{\mathfrak H}_2]\) the corresponding defect and signature operators

$$\begin{aligned} D_T=|J_1-T^*J_2T|^{1/2},\quad J_T=\mathrm{sign\,}(J_1-T^*J_2T), \quad {\mathfrak D}_T=\mathrm{\overline{ran}\,}D_T, \end{aligned}$$

where the so-called defect subspace \({\mathfrak D}_T\) can be considered as a Kreĭn space with the fundamental symmetry \(J_T\). Similar notations are used with \(T^*\):

$$\begin{aligned} D_{T^*}=|J_2-TJ_1T^*|^{1/2},\quad J_{T^*}=\mathrm{sign\,}(J_2-TJ_1T^*), \quad {\mathfrak D}_{T^*}=\mathrm{\overline{ran}\,}D_{T^*}. \end{aligned}$$

By definition \(J_TD_T^2=J_1-T^*J_2T\) and \(J_TD_T=D_TJ_T\) with analogous identities for \(D_{T^*}\) and \(J_{T^*}\). In addition,

$$\begin{aligned} \begin{array}{l} (J_1-T^*J_2T)J_1T^*=T^*J_2(J_2-TJ_1T^*), \\ (J_2-TJ_1T^*)J_2T=TJ_1(J_1-T^*J_2T). \end{array} \end{aligned}$$
(19)

Recall that \(T\in [{\mathfrak H}_1,{\mathfrak H}_2]\) is said to be a J-contraction if \(J_1-T^*J_2T\ge 0\), i.e. \(\nu _-(J_1-T^*J_2T)=0\). If, in addition, \(T^*\) is a J-contraction, then T is called a J-bicontraction, in which case \(\nu _-(J_1)=\nu _-(J_2)\) by Lemma 1. In what follows it is assumed that

$$\begin{aligned} \kappa _1:=\nu _-(J_1-T^*J_2T)<\infty ,\quad \kappa _2:=\nu _-(J_2-TJ_1T^*)<\infty . \end{aligned}$$

In this case Lemma 1 shows that

$$\begin{aligned} \nu _-(J_2) = \nu _-(J_1) + \kappa _2-\kappa _1. \end{aligned}$$
(20)
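For matrices, the defect and signature operators and the index relation (20) can be checked directly; the sketch below (an added illustration, with a hypothetical helper `neg_index`) also verifies the first identity in (19):

```python
import numpy as np

def neg_index(M, tol=1e-10):
    """nu_-(M): the number of negative eigenvalues of a Hermitian matrix."""
    return int(np.sum(np.linalg.eigvalsh((M + M.T) / 2) < -tol))

rng = np.random.default_rng(2)
n = 4
J1 = np.diag([1., 1., 1., -1.])              # nu_-(J1) = 1
J2 = np.diag([1., 1., -1., -1.])             # nu_-(J2) = 2
T = rng.standard_normal((n, n))

M1 = J1 - T.T @ J2 @ T                       # = J_T D_T^2
M2 = J2 - T @ J1 @ T.T                       # = J_{T*} D_{T*}^2
k1, k2 = neg_index(M1), neg_index(M2)

# Formula (20): nu_-(J2) = nu_-(J1) + kappa_2 - kappa_1
assert neg_index(J2) == neg_index(J1) + k2 - k1
# First identity in (19): (J1 - T*J2T) J1 T* = T* J2 (J2 - T J1 T*)
assert np.allclose(M1 @ J1 @ T.T, T.T @ J2 @ M2)
```

In finite dimensions (20) is exactly the Haynsworth inertia identity applied to the block matrix with diagonal entries \(J_1,J_2\) and off-diagonal entry T.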

The aim in this section is to show applicability of Theorem 1 in establishing formulas for so-called liftings \({{\widetilde{T}} }\) of T with prescribed negative indices \({{\widetilde{\kappa }} }_1\) and \({{\widetilde{\kappa }} }_2\) for the defect subspaces, equivalently, for the associated signature operators. Given a bounded operator \(T\in [{\mathfrak H}_1,{\mathfrak H}_2]\) the problem is to describe all operators \({{\widetilde{T}} }\) from the extended Kreĭn space \(({\mathfrak H}_1\oplus {\mathfrak H}_1^\prime ,J_1\oplus J_1^\prime )\) to the extended Kreĭn space \(({\mathfrak H}_2\oplus {\mathfrak H}_2^\prime ,J_2\oplus J_2^\prime )\) such that

$$\begin{aligned} \text {(*)} \quad P_2 {{\widetilde{T}} }{\upharpoonright \,}{\mathfrak H}_1 = T \quad \text {and}\quad \nu _-({{\widetilde{J}} }_1-{{\widetilde{T}} }^*{{\widetilde{J}} }_2 {{\widetilde{T}} })={{\widetilde{\kappa }} }_1, \quad \nu _-({{\widetilde{J}} }_2-{{\widetilde{T}} }{{\widetilde{J}} }_1 {{\widetilde{T}} }^*)={{\widetilde{\kappa }} }_2, \end{aligned}$$

with some fixed values of \({{\widetilde{\kappa }} }_1,{{\widetilde{\kappa }} }_2<\infty \). Here \(P_i\) stands for the orthogonal projection from \({{\widetilde{\mathfrak H}} }_i={\mathfrak H}_i\oplus {\mathfrak H}_i^\prime \) onto \({\mathfrak H}_i\) and \({{\widetilde{J}} }_i=J_i\oplus J_i^\prime \), \(i=1,2\). In addition, it is assumed that the exit spaces are Pontryagin spaces, i.e., that

$$\begin{aligned} \nu _-(J_1^\prime ), \nu _-(J_2^\prime ) <\infty . \end{aligned}$$

Following [13, 23] consider first the following column extension problem:

\((*)_c\) Give a description of all operators \(T_c=\mathrm{col\,}\begin{pmatrix}T&C \end{pmatrix}\in [{\mathfrak H}_1,{\mathfrak H}_2\oplus {\mathfrak H}_2^\prime ]\), such that \(\nu _-(J_1-T_c^*{{\widetilde{J}} }_2T_c)={{\widetilde{\kappa }} }_1\,(<\infty )\).

Since \(J_1-T_c^*{{\widetilde{J}} }_2T_c=J_1-T^*J_2T-C^*J_2^\prime C\), necessarily (see Sect. 3)

$$\begin{aligned} {{\widetilde{\kappa }} }_1 \ge \kappa _1-\nu _-(C^*J_2^\prime C)\ge \kappa _1 -\nu _-(J_2^\prime ). \end{aligned}$$

Moreover, it is clear that \({{\widetilde{\kappa }} }_2\ge \kappa _2\), since \(J_2-TJ_1T^*\) appears as the first diagonal entry of the \(2\times 2\) block operator \({{\widetilde{J}} }_2-T_c J_1T_c^*\) when decomposed w.r.t. \({{\widetilde{\mathfrak H}} }_2={\mathfrak H}_2\oplus {\mathfrak H}_2^\prime \).

With the minimal value of \({{\widetilde{\kappa }} }_1\) all solutions to this problem will now be described by applying Theorem 1 to an associated \(2\times 2\) block operator \(T_C\) appearing in the proof below; in fact the result is just a special case of Theorem 3.

Lemma 2

Let \({{\widetilde{\kappa }} }_1=\nu _-(J_1-T_c^*{{\widetilde{J}} }_2T_c)\) and assume that \({{\widetilde{\kappa }} }_1=\kappa _1-\nu _-(J_2^\prime )(\ge 0)\). Then \(\mathrm{ran\,}C^*\subset \mathrm{ran\,}D_T\) and the formula

$$\begin{aligned} T_c=\begin{pmatrix}T \\ K^*D_{T} \end{pmatrix} \end{aligned}$$

establishes a one-to-one correspondence between the set of all solutions to Problem \((*)_c\) and the set of all J-contractions \( K\in [{\mathfrak H}'_2,{\mathfrak D}_{T}] \).

Proof

To make the argument more explicit consider the following block operator

$$\begin{aligned} T_C:=\begin{pmatrix} J_1-T^*J_2T &{}\quad C^* \\ C &{}\quad J_2^\prime \end{pmatrix} = \begin{pmatrix} I &{}\quad C^*J_2^\prime \\ 0 &{}\quad I \end{pmatrix} \begin{pmatrix} J_1-T_c^*{{\widetilde{J}} }_2T_c &{}\quad 0 \\ 0 &{}\quad J_2^\prime \end{pmatrix} \begin{pmatrix} I &{}\quad 0 \\ J_2^\prime C &{}\quad I \end{pmatrix}. \end{aligned}$$

Clearly \(\nu _-(T_C)=\nu _-(J_1-T_c^*{{\widetilde{J}} }_2T_c)+\nu _-(J_2^\prime )<\infty \), which combined with \({{\widetilde{\kappa }} }_1=\kappa _1-\nu _-(J_2^\prime )\) shows that \(\nu _-(T_C)=\kappa _1=\nu _-(J_1-T^*J_2T)\). Now, the statement is obtained from Theorem 1 or, more directly, just by applying Theorem 3.
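In the Hilbert space case (all symmetries equal to the identity, \(\kappa _1=0\)) Lemma 2 reduces to the classical fact that the contractive column extensions of a contraction T are exactly \(\mathrm{col\,}(T, K^*D_T)\) with a contraction K. A numerical sketch of this special case (an added illustration; the matrix `K` below plays the role of \(K^*\)):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 2
T0 = rng.standard_normal((n, n))
T = T0 / (np.linalg.norm(T0, 2) + 0.5)       # a strict contraction
w, V = np.linalg.eigh(np.eye(n) - T.T @ T)   # I - T*T > 0
D_T = V @ np.diag(np.sqrt(w)) @ V.T          # Hilbert space defect operator

K = rng.standard_normal((m, n))
K = K / max(1.0, np.linalg.norm(K, 2))       # ||K|| <= 1; K plays the role of K*
Tc = np.vstack([T, K @ D_T])                 # the column extension of Lemma 2

# I - Tc*Tc = D_T (I - K*K) D_T >= 0, so Tc is again a contraction
M = np.eye(n) - Tc.T @ Tc
assert np.allclose(M, D_T @ (np.eye(n) - K.T @ K) @ D_T)
assert np.min(np.linalg.eigvalsh((M + M.T) / 2)) > -1e-10
```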

Remark 1

(i) The above proof, which essentially makes use of an associated \(2\times 2\) block operator \(T_C\) (being a special case of the block operator H in (12) behind Theorem 3), is new even in the case of Hilbert space contractions. In particular, it shows that the operator K in Lemma 2 coincides with the operator S that gives the minimal solution \(S^*J_{T}S\) to the completion problem associated with \(T_C\); the J-contractivity of K itself is equivalent to \(T_C\) being a solution, which occurs precisely when \({{\widetilde{\kappa }} }_1=\kappa _1-\nu _-(J_2^\prime )\).

(ii) The existence of a solution to Problem \((*)_c\) is proved here using only the condition \({{\widetilde{\kappa }} }_1=\kappa _1-\nu _-(J_2^\prime )\,(\ge 0)\). The corresponding result in [23, Lemma 2.2] is formulated (and formally also proved) under the additional condition \({{\widetilde{\kappa }} }_2=\kappa _2\). In the case that \(\nu _-(J_1)<\infty \) the equality \({{\widetilde{\kappa }} }_2=\kappa _2\) follows automatically from the equality \({{\widetilde{\kappa }} }_1=\kappa _1-\nu _-(J_2^\prime )\): to see this apply (20) to T and \(T_c\), which leads to \(\nu _-(J_1)+\kappa _2=\nu _-(J_1)+{{\widetilde{\kappa }} }_2\), so that \(\nu _-(J_1)<\infty \) implies \(\kappa _2={{\widetilde{\kappa }} }_2\). Naturally, in Lemma 2 the condition \({{\widetilde{\kappa }} }_2=\kappa _2\) follows from the condition \({{\widetilde{\kappa }} }_1=\kappa _1-\nu _-(J_2^\prime )\) also in the case where \(\nu _-(J_1)=\infty \); see Corollary 3 below.

Finally, it is mentioned that for a Pontryagin space operator T the result in Lemma 2 was proved in [13, Lemma 5.2].

In a dual manner we can treat the following row extension problem; again initially considered in [13, 23]:

\(\mathbf {(*)_r}\) Give a description of all operators \(T_r=\begin{pmatrix}T&R\end{pmatrix}\in [{\mathfrak H}_1\oplus {\mathfrak H}'_1,{\mathfrak H}_2]\), such that \(\nu _-(J_2-T_r{{\widetilde{J}} }_1T_r^*)={{\widetilde{\kappa }} }_2\,(<\infty )\).

Analogously to the case of column operators, \(J_2-T_r{{\widetilde{J}} }_1T_r^*=J_2-TJ_1T^*-RJ_1^\prime R^*\) gives the estimate

$$\begin{aligned} {{\widetilde{\kappa }} }_2 \ge \kappa _2-\nu _-(RJ_1^\prime R^*) \ge \kappa _2-\nu _-(J_1^\prime ). \end{aligned}$$

Moreover, it is clear that \({{\widetilde{\kappa }} }_1\ge \kappa _1\). With the minimal value of \({{\widetilde{\kappa }} }_2\) all solutions to Problem \(\mathbf {(*)_r}\) are established by applying Theorem 1 to an associated \(2\times 2\) block operator \(T_R\).

Lemma 3

Let \({{\widetilde{\kappa }} }_2=\nu _-(J_2-T_r{{\widetilde{J}} }_1T_r^*)\) and assume that \({{\widetilde{\kappa }} }_2=\kappa _2 - \nu _-(J_1^\prime )(\ge 0)\). Then \(\mathrm{ran\,}R\subset \mathrm{ran\,}D_{T^*}\) and the formula

$$\begin{aligned} T_r=\begin{pmatrix}T&D_{T^*} B\end{pmatrix} \end{aligned}$$

establishes a one-to-one correspondence between the set of all solutions to Problem \(\mathbf {(*)_r}\) and the set of all J-contractions \(B\in [{\mathfrak H}'_1,{\mathfrak D}_{T^*}]\).

Proof

To prove the statement via Theorem 1 (cf. Theorem 3) consider

$$\begin{aligned} T_R:=\begin{pmatrix} J_2-TJ_1T^* &{}\quad R \\ R^* &{}\quad J_1^\prime \end{pmatrix} = \begin{pmatrix} I &{}\quad RJ_1^\prime \\ 0 &{}\quad I \end{pmatrix} \begin{pmatrix} J_2-T_r{{\widetilde{J}} }_1T_r^* &{}\quad 0 \\ 0 &{}\quad J_1^\prime \end{pmatrix} \begin{pmatrix} I &{}\quad 0 \\ J_1^\prime R^* &{}\quad I \end{pmatrix}. \end{aligned}$$

Then clearly \(\nu _-(T_R)=\nu _-(J_2-T_r{{\widetilde{J}} }_1T_r^*)+\nu _-(J_1^\prime )\) and hence the assumption \({{\widetilde{\kappa }} }_2=\kappa _2 - \nu _-(J_1^\prime )\) is equivalent to \(\nu _-(T_R)=\kappa _2=\nu _-(J_2-TJ_1T^*)\). Therefore, again the statement follows from Theorem 1 or directly from Theorem 3.

Remarks similar to those made after Lemma 2 can be made here, too. In particular, the corresponding result in [23, Lemma 2.1] is formulated under the additional condition \({{\widetilde{\kappa }} }_1=\kappa _1\): here this equality will be a consequence of the equality \({{\widetilde{\kappa }} }_2=\kappa _2 - \nu _-(J_1^\prime )\); cf. Corollary 3 below.

To prove the main result concerning the parametrization of all \(2\times 2\) liftings in a larger Kreĭn space with minimal signature for the defect operators, an indefinite version of the commutation relation \(TD_T=D_{T^*}T\) is needed; it involves the so-called link operators introduced in [13, Section 4].

We will give a simple proof for the construction of link operators (see [13, Proposition 4.1]) by applying the Heinz inequality combined with the basic factorization result from [32]. The first step is formulated in the next lemma, which is connected to a result of Kreĭn [48] concerning the continuity of a bounded Banach space operator which is symmetric w.r.t. a continuous definite inner product; the existence of link operators was proved in [13] via this result of Kreĭn. Here a statement, analogous to that of Kreĭn, is formulated in pure Hilbert space operator language by using the modulus of the product operator; see [34, Lemma B2], where Kreĭn's result is presented with a proof due to W. T. Reid.

Lemma 4

Let \(S\in [{\mathfrak H}_1,{\mathfrak H}_2]\) and let \(H\in [{\mathfrak H}_2]\) be nonnegative. Then

$$\begin{aligned} HS=(HS)^* \quad \Rightarrow \quad |HS| \le \mu H \text { for some } \mu <\infty . \end{aligned}$$

Proof

Since HS is selfadjoint, one obtains

$$\begin{aligned} (HS)^2=HSS^*H \le \mu ^2 H^2, \quad \mu = \Vert S\Vert <\infty . \end{aligned}$$

Now by the Heinz inequality (see e.g. [17, Theorem 10.4.2]) we get

$$\begin{aligned} |HS|=(HSS^*H)^{1/2} \le \mu H. \end{aligned}$$
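Lemma 4 is easy to test on matrices: for positive definite H one can manufacture S with HS selfadjoint by taking \(S=H^{-1}Y\) with symmetric Y. The following sketch (an added illustration, not from the paper) verifies \(|HS|\le \Vert S\Vert H\) up to a relative numerical tolerance:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
X = rng.standard_normal((n, n))
H = X.T @ X                                  # H >= 0 (generically H > 0)
Y = rng.standard_normal((n, n)); Y = (Y + Y.T) / 2
S = np.linalg.solve(H, Y)                    # then H S = Y = (H S)^*
assert np.allclose(H @ S, (H @ S).T)

mu = np.linalg.norm(S, 2)                    # mu = ||S||
w, V = np.linalg.eigh(H @ S @ S.T @ H)       # |HS| = (H S S^* H)^{1/2}
absHS = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T

# Lemma 4: |HS| <= mu * H
gap = np.min(np.linalg.eigvalsh(mu * H - absHS))
assert gap > -1e-7 * mu * np.linalg.norm(H, 2)
```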

Corollary 2

Let \(T\in [{\mathfrak H}_1,{\mathfrak H}_2]\) and let \(J_1\) and \(J_2\) be symmetries in \({\mathfrak H}_1\) and \({\mathfrak H}_2\) as above. Then there exist unique operators \(L_T\in [{\mathfrak D}_T,{\mathfrak D}_{T^*}]\) and \(L_{T^*}\in [{\mathfrak D}_{T^*},{\mathfrak D}_T]\) such that

$$\begin{aligned} D_{T^*}L_T=TJ_1 D_T{\upharpoonright \,}{\mathfrak D}_T, \quad D_T L_{T^*}=T^*J_2 D_{T^*}{\upharpoonright \,}{\mathfrak D}_{T^*}; \end{aligned}$$

in fact, \(L_T=D_{T^*}^{[-1]}TJ_1 D_T{\upharpoonright \,}{\mathfrak D}_T\) and \(L_{T^*}=D_T^{[-1]}T^*J_2 D_{T^*}{\upharpoonright \,}{\mathfrak D}_{T^*}\).

Proof

Denote \(S=J_{T^*}J_2TJ_TJ_1T^*\). Then (19) implies that

$$\begin{aligned} D_{T^*}^2 S&=(J_2-TJ_1T^*)J_2TJ_TJ_1T^*\\ &=TJ_1(J_1-T^*J_2T)J_TJ_1T^*\\ &=TJ_1D_T^2J_1T^*\ge 0, \end{aligned}$$

so that \(D_{T^*}^2 S\) is nonnegative and, in particular, selfadjoint. By Lemma 4 with \(\mu =\Vert S\Vert \) one has

$$\begin{aligned} 0\le TJ_1D_T^2J_1T^*=D_{T^*}^2 S \le \mu D_{T^*}^2. \end{aligned}$$

This last inequality is equivalent to the factorization \(TJ_1D_T{\upharpoonright \,}{\mathfrak D}_T=D_{T^*} L_T\) with a unique operator \(L_T\in [{\mathfrak D}_T,{\mathfrak D}_{T^*}]\), see [32, Theorem 1], which by means of Moore–Penrose generalized inverse can be rewritten as indicated.

The second formula is obtained by applying the first one to \(T^*\).

The following identities can be obtained with direct calculations; see [13, Section 4]:

$$\begin{aligned} \begin{array}{ll} L_T^* J_{T^*}{\upharpoonright \,}{\mathfrak D}_{T^*}&{}=J_{T}L_{T^*};\\ (J_T-D_TJ_1D_T){\upharpoonright \,}{\mathfrak D}_T&{}=L_T^*J_{T^*}L_T;\\ (J_{T^*}-D_{T^*}J_2D_{T^*}){\upharpoonright \,}{\mathfrak D}_{T^*}&{}=L_{T^*}^* J_T L_{T^*}. \end{array} \end{aligned}$$
(21)
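Generically (for matrices with both defect operators invertible) the link operators are given by plain inverses and the identities (21) become matrix identities; a numerical sketch (an added illustration; `sqrt_sign` is a hypothetical helper returning \(|M|^{1/2}\) and \(\mathrm{sign\,}(M)\)):

```python
import numpy as np

def sqrt_sign(M):
    """Return (|M|^{1/2}, sign(M)) for a Hermitian matrix M."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return V @ np.diag(np.sqrt(np.abs(w))) @ V.T, V @ np.diag(np.sign(w)) @ V.T

rng = np.random.default_rng(5)
n = 4
J1 = np.diag([1., 1., -1., 1.])
J2 = np.diag([1., -1., 1., 1.])
T = rng.standard_normal((n, n))

D_T,  J_T  = sqrt_sign(J1 - T.T @ J2 @ T)
D_Ts, J_Ts = sqrt_sign(J2 - T @ J1 @ T.T)

# Generically both defect operators are invertible and
# L_T = D_{T*}^{-1} T J1 D_T,  L_{T*} = D_T^{-1} T* J2 D_{T*}:
L_T  = np.linalg.solve(D_Ts, T @ J1 @ D_T)
L_Ts = np.linalg.solve(D_T, T.T @ J2 @ D_Ts)

assert np.allclose(D_Ts @ L_T, T @ J1 @ D_T)                      # defining relation
assert np.allclose(L_T.T @ J_Ts, J_T @ L_Ts)                      # first identity in (21)
assert np.allclose(J_T - D_T @ J1 @ D_T, L_T.T @ J_Ts @ L_T)      # second identity
assert np.allclose(J_Ts - D_Ts @ J2 @ D_Ts, L_Ts.T @ J_T @ L_Ts)  # third identity
```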

The next corollary contains the promised identity \({{\widetilde{\kappa }} }_1=\kappa _1\) under the assumption \({{\widetilde{\kappa }} }_2=\kappa _2-\nu _-(J_1^\prime )\ge 0\) of Lemma 3. Similarly, \({{\widetilde{\kappa }} }_1=\kappa _1-\nu _-(J_2^\prime )\) implies \({{\widetilde{\kappa }} }_2=\kappa _2\); the general result for the first case can be formulated as follows (and there is a similar result for the latter case).

Corollary 3

Let R be a bounded operator such that \(\mathrm{ran\,}R\subset \mathrm{ran\,}D_{T^*}\) and let \(T_r\) be the corresponding row operator and denote \({{\widetilde{\kappa }} }_1=\nu _-({{\widetilde{J}} }_1-T_r^*J_2T_r)\). Then \(R=D_{T^*}B\) for a (unique) bounded operator \(B\in [{\mathfrak H}'_1,{\mathfrak D}_{T^*}]\) and

$$\begin{aligned} {{\widetilde{\kappa }} }_1=\kappa _1+\nu _-(J_1^\prime - B^*J_{T^*} B). \end{aligned}$$

In particular, J-contractivity of B is equivalent to \({{\widetilde{\kappa }} }_1=\kappa _1\).

Proof

Recall that \(\mathrm{ran\,}R\subset \mathrm{ran\,}D_{T^*}\) is equivalent to the factorization \(R=D_{T^*}B\). By applying the commutation relations in Corollary 2 together with the identities (21) one gets the following expression for \( J_{T_r}D_{T_r}^2\):

$$\begin{aligned} \begin{array}{rl} J_{T_r}D_{T_r}^2 &{} = \begin{pmatrix} J_1-T^*J_2T &{} -T^*J_2D_{T^*}B \\ -B^*D_{T^*} J_2 T &{} J_1^\prime - B^*D_{T^*} J_2 D_{T^*}B\end{pmatrix} \\ &{}= \begin{pmatrix} J_TD_T^2 &{} -D_T L_{T^*}B \\ -B^*L^*_{T^*} D_T &{} J_BD_{B}^2 + B^*L^*_{T^*} J_T L_{T^*}B\end{pmatrix}. \end{array} \end{aligned}$$
(22)

Now apply Proposition 2 and calculate the Schur complement, cf. (18),

$$\begin{aligned} J_BD_{B}^2 + B^*L^*_{T^*} J_T L_{T^*}B -B^*L^*_{T^*} D_T (D_T^{[-1]}J_TD_T^{[-1]}) D_T L_{T^*}B =J_BD_{B}^2, \end{aligned}$$

to see that \({{\widetilde{\kappa }} }_1=\nu _-(J_1-T^*J_2T)+\nu _-(J_1^\prime - B^*J_{T^*} B)\).
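The index formula of Corollary 3 can likewise be tested numerically; the sketch below (an added illustration, with hypothetical helpers `neg_index` and `sqrt_sign`) builds a row extension with \(R=D_{T^*}B\) and checks \({{\widetilde{\kappa }}}_1=\kappa _1+\nu _-(J_1^\prime -B^*J_{T^*}B)\):

```python
import numpy as np

def neg_index(M, tol=1e-10):
    """nu_-(M): the number of negative eigenvalues of a Hermitian matrix."""
    return int(np.sum(np.linalg.eigvalsh((M + M.T) / 2) < -tol))

def sqrt_sign(M):
    """Return (|M|^{1/2}, sign(M)) for a Hermitian matrix M."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return V @ np.diag(np.sqrt(np.abs(w))) @ V.T, V @ np.diag(np.sign(w)) @ V.T

rng = np.random.default_rng(7)
n, m = 4, 2
J1 = np.diag([1., 1., -1., 1.])
J1p = np.eye(m)                              # exit symmetry J1' = I
J2 = np.diag([1., -1., 1., 1.])
T = rng.standard_normal((n, n))

D_Ts, J_Ts = sqrt_sign(J2 - T @ J1 @ T.T)
B = rng.standard_normal((n, m))
Tr = np.hstack([T, D_Ts @ B])                # row extension with R = D_{T*} B

k1 = neg_index(J1 - T.T @ J2 @ T)
J1t = np.block([[J1, np.zeros((n, m))], [np.zeros((m, n)), J1p]])
k1_tilde = neg_index(J1t - Tr.T @ J2 @ Tr)

# Corollary 3: tilde kappa_1 = kappa_1 + nu_-(J1' - B* J_{T*} B)
assert k1_tilde == k1 + neg_index(J1p - B.T @ J_Ts @ B)
```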

By means of Lemmas 2 and 3 and the link operators in Corollary 2 one can now establish the main result concerning the lifting problem \((*)\).

First notice that if Problem \((*)\) has a solution, then by treating \({{\widetilde{T}} }\) as a row extension of its first column \(T_c\) and as a column extension of its first row \(T_r\) one gets from the inequalities preceding Lemmas 2 and 3 the estimates

$$\begin{aligned} \begin{array}{l} {{\widetilde{\kappa }} }_1\ge \kappa _1(T_r)-\nu _-(J_2^\prime )\ge \kappa _1-\nu _-(J_2^\prime ); \\ {{\widetilde{\kappa }} }_2\ge \kappa _2(T_c)-\nu _-(J_1^\prime )\ge \kappa _2-\nu _-(J_1^\prime ). \end{array} \end{aligned}$$
(23)

Under the minimal choice of the indices \({{\widetilde{\kappa }} }_1\) and \({{\widetilde{\kappa }} }_2\) Problem \((*)\) is already solvable; all solutions are described by the following result, which was initially proved in [23, Theorem 2.3] with the aid of [13, Theorem 5.3]. Here a different proof is presented, again based on an application of Theorem 1.

Theorem 4

Let \({{\widetilde{T}} }\) be a bounded operator from \(({\mathfrak H}_1\oplus {\mathfrak H}_1^\prime ,J_1\oplus J_1^\prime )\) to \(({\mathfrak H}_2\oplus {\mathfrak H}_2^\prime ,J_2\oplus J_2^\prime )\) such that \(P_2 {{\widetilde{T}} }{\upharpoonright \,}{\mathfrak H}_1 = T\). Assume that \(0\le \kappa _1 - \nu _-(J_2^\prime )={{\widetilde{\kappa }} }_1<\infty \) and \(0\le \kappa _2 - \nu _-(J_1^\prime )={{\widetilde{\kappa }} }_2<\infty \). Then the Problem \((*)\) is solvable and the formula

$$\begin{aligned} {{\widetilde{T}} }=\begin{pmatrix}T &{}\quad D_{T^*} \Gamma _1 \\ \Gamma _2 D_T &{}\quad -\Gamma _2 L_T^* J_{T^*} \Gamma _1 + D_{\Gamma _2^*}\Gamma D_{\Gamma _1}\end{pmatrix} \end{aligned}$$

establishes a one-to-one correspondence between the set of all solutions to Problem \((*)\) and the set of triplets \(\{\Gamma _1,\Gamma _2,\Gamma \}\) where \(\Gamma _1\in [{\mathfrak H}'_1,{\mathfrak D}_{T^*}]\) and \(\Gamma _2^*\in [{\mathfrak H}'_2,{\mathfrak D}_T]\) are J-contractions and \(\Gamma \in [{\mathfrak D}_{\Gamma _1},{\mathfrak D}_{\Gamma _2^*}]\) is a Hilbert space contraction.

Proof

Assume that there is a solution \({{\widetilde{T}} }\) to Problem \((*)\) and write it in the form

$$\begin{aligned} {{\widetilde{T}} }=\begin{pmatrix} T &{}\quad R \\ C &{}\quad X \end{pmatrix} \end{aligned}$$

with the first column denoted by \(T_c\) and first row denoted by \(T_r\), and assume that \({{\widetilde{\kappa }} }_1=\kappa _1 - \nu _-(J_2^\prime )\) and \({{\widetilde{\kappa }} }_2= \kappa _2 - \nu _-(J_1^\prime )\). Then (23) shows that \(\kappa _1=\kappa _1(T_r)\) and \(\kappa _2=\kappa _2(T_c)\). Hence Lemma 3 can be applied by viewing \({{\widetilde{T}} }\) as a row extension of \(T_c\) to get a range inclusion and then from Corollary 3 one gets the equality \({{\widetilde{\kappa }} }_1=\kappa _1(T_c)\). Similarly applying Lemma 2 and the analog of Corollary 3 to column operator \({{\widetilde{T}} }\) one gets the equality \({{\widetilde{\kappa }} }_2=\kappa _2(T_r)\). Thus \(\kappa _1(T_c)=\kappa _1 - \nu _-(J_2^\prime )\) and \(\kappa _2(T_r)=\kappa _2 -\nu _-(J_1^\prime )\). Consequently, one can apply Lemma 2 to the first column \(T_c\) and Lemma 3 to the first row \(T_r\) to get the stated factorizations \(C=\Gamma _2 D_T\) and \(R=D_{T^*}\Gamma _1\) with unique J-contractions \(\Gamma _1\) and \(\Gamma _2^*\).

To establish a formula for X we proceed by considering the block operator

$$\begin{aligned} H:=\begin{pmatrix} J_{T_r}D_{T_r}^2 &{}\quad T_{r,2}^* \\ T_{r,2} &{}\quad J_2^\prime \end{pmatrix}, \end{aligned}$$

where \(T_{r,2}\) denotes the second row of \({{\widetilde{T}} }\). It is straightforward to derive the following formula for the Schur complement

$$\begin{aligned} J_{T_r}D_{T_r}^2-T_{r,2}^* J_2^\prime T_{r,2}={{\widetilde{J}} }_1-{{\widetilde{T}} }^*{{\widetilde{J}} }_2 {{\widetilde{T}} }. \end{aligned}$$

Thus \(\nu _-(H)={{\widetilde{\kappa }} }_1+\nu _-(J_2^\prime )=\kappa _1=\nu _-(J_{T_r})\) and one can apply Theorem 1 to get the factorization \(T_{r,2}^*=D_{T_r} {{\widetilde{K}} }\) with a unique \({{\widetilde{K}} }\in [{\mathfrak H}_2^\prime ,{\mathfrak D}_{T_r}]\) satisfying \({{\widetilde{K}} }^* J_{T_r} {{\widetilde{K}} } \le J_2^\prime \), i.e., \({{\widetilde{K}} }\) is a J-contraction; see Theorem 3.

It follows from (22) that

$$\begin{aligned} J_{T_r}D_{T_r}^2 = \begin{pmatrix} D_T &{}\quad 0 \\ -\Gamma _1^* L^*_{T^*}J_T &{}\quad D_{\Gamma _1}\end{pmatrix} \begin{pmatrix} J_T &{}\quad 0 \\ 0 &{}\quad I_{D_{\Gamma _1}} \end{pmatrix} \begin{pmatrix} D_T &{}\quad -J_T L_{T^*}\Gamma _1 \\ 0 &{}\quad D_{\Gamma _1}\end{pmatrix} =:B^*{{\widehat{J}} } B.\end{aligned}$$

Since here \(\nu _-(J_{T_r})=\kappa _1=\nu _-(J_{T})\) and B is a triangular operator whose range is dense in \({\mathfrak D}_T\oplus {\mathfrak D}_{\Gamma _1}\) (the diagonal entries \(D_T\) and \(D_{\Gamma _1}\) of B have dense ranges by definition), there is a unique Pontryagin space J-unitary operator U from \({\mathfrak D}_{T_r}\) onto \({\mathfrak D}_T\oplus {\mathfrak D}_{\Gamma _1}\) such that \(B=UD_{T_r}\); see Proposition 1 (ii). It follows that \(K^*:=(U^{-1})^*{{\widetilde{K}} }\) is a J-contraction from \({\mathfrak H}_2^\prime \) to \({\mathfrak D}_T\oplus {\mathfrak D}_{\Gamma _1}\) and \(KB={{\widetilde{K}} }^* D_{T_r}=T_{r,2}\). Now \(J_2^\prime -K{{\widehat{J}} } K^*\ge 0\) gives

$$\begin{aligned} 0\le K_1K_1^*\le J_2^\prime -K_0 J_T K_0^*, \end{aligned}$$
(24)

where \(K=(K_0\,K_1)\) is considered as a row operator, and \(T_{r,2}=KB\) reads as

$$\begin{aligned} \Gamma _2D_{T}=K_0 D_T, \quad X=-K_0 J_T L_{T^*}\Gamma _1 + K_1 D_{\Gamma _1}. \end{aligned}$$

Since all contractions that are involved are unique, \(K_0=\Gamma _2\), \(J_2^\prime -K_0 J_T K_0^* = D_{\Gamma _2^*}^2\), and (24) implies that there is a unique Hilbert space contraction \(\Gamma \in [{\mathfrak D}_{\Gamma _1},{\mathfrak D}_{\Gamma _2^*}]\) such that \(K_1=D_{\Gamma _2^*}\Gamma \). The desired formula for \({{\widetilde{T}} }\) is proven (cf. (21)). It is clear from the proof that every operator \({{\widetilde{T}} }\) of the stated form is a solution and that there is a one-to-one correspondence via the triplets \(\{\Gamma _1,\Gamma _2,\Gamma \}\) of J-contractions.

Remark 2

(i) By replacing \({{\widetilde{T}} }\) with its adjoint \({{\widetilde{T}} }^*\) it is clear that all formulas remain the same and are obtained by changing T with \(T^*\) and interchanging the roles of the indices 1 and 2; see also (21). This connects the considerations with row and column operators to each other.

(ii) If \(\kappa _1=0\) so that \(J_1-T^*J_2T\ge 0\), then the above proof becomes slightly simpler since then \(J_{T_r}\), \(J_T\), and \(J_2^\prime \) are identity operators and \({{\widetilde{K}} }\) is a Hilbert space contraction. Then Theorem 4 gives all contractive liftings of a contraction in a Kreĭn space. If in addition \(\kappa _2=0\), then one gets all bicontractive liftings of a bicontraction in a Kreĭn space with Pontryagin spaces as exit spaces. In the special case that the exit spaces are Hilbert spaces (\(\nu _-(J_1)=\nu _-(J_2)=0\) and \(\kappa _1=\kappa _2=0\)) Theorem 4 coincides with [33, Theorem 3.6]. In fact, the present proof can be seen as a further development of the proof appearing in that paper; see also further references and historical remarks given in [33, 34].
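In the purely Hilbert space situation of Remark 2 (ii) (all symmetries equal to the identity), the link operator \(L_T\) is just T restricted to the defect space, so the lower right corner of the lifting formula becomes \(-\Gamma _2T^*\Gamma _1+D_{\Gamma _2^*}\Gamma D_{\Gamma _1}\), and every such block is a contraction. A numerical sketch of this special case (an added illustration; `contraction` and `defect` are hypothetical helpers):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 3

def contraction(shape, rng):
    """A random matrix scaled to be a strict contraction."""
    M = rng.standard_normal(shape)
    return M / (np.linalg.norm(M, 2) + 0.1)

def defect(C):
    """D_C = (I - C*C)^{1/2} for a Hilbert space contraction C."""
    w, V = np.linalg.eigh(np.eye(C.shape[1]) - C.T @ C)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T

T, G1, G2, G = (contraction((n, n), rng) for _ in range(4))

# Hilbert space case of Theorem 4: the lifting is again a contraction.
Ttilde = np.block([
    [T,               defect(T.T) @ G1],
    [G2 @ defect(T),  -G2 @ T.T @ G1 + defect(G2.T) @ G @ defect(G1)],
])
assert np.linalg.norm(Ttilde, 2) <= 1 + 1e-10
```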

5 Contractive extensions of contractions with minimal negative indices

Let \({\mathfrak H}_1\) be a closed linear subspace of the Hilbert space \({\mathfrak H}\) and let \(T_{11}=T_{11}^*\in [{\mathfrak H}_1]\) be an operator such that \(\nu _-(I-T_{11}^2)=\kappa <\infty \). Denote

$$\begin{aligned} J=\mathrm{sign\,}(I-T_{11}^2), \quad J_+=\mathrm{sign\,}(I-T_{11}),\, \text { and }\, J_-=\mathrm{sign\,}(I+T_{11}), \end{aligned}$$
(25)

and let \(\kappa _+=\nu _-(I-T_{11})\) and \(\kappa _-=\nu _-(I+T_{11})\). It is obvious that \(J=J_-J_+=J_+J_-\). Moreover, there is an equality \(\kappa =\kappa _- +\kappa _+\) as stated in the next lemma.

Lemma 5

Let \(T=T^*\in [{\mathfrak H}_1]\) be an operator such that \(\nu _-(I-T^2)=\kappa <\infty \). Then \(\nu _-(I-T^2)=\nu _-(I+T)+\nu _-(I-T).\)

Proof

Let \(E_t(\cdot )\) be the resolution of identity of T. Then by the spectral mapping theorem the spectral subspace corresponding to the negative spectrum of \(I-T^2\) is given by \(E_t((-\infty ;-1)\cup (1;\infty ))=E_t((-\infty ;-1))\oplus E_t((1;\infty ))\). Consequently, \(\nu _-(I-T^2)=\mathrm{dim\,}E_t((-\infty ;-1))+\mathrm{dim\,}E_t((1;\infty ))=\nu _-(I+T)+\nu _-(I-T)\).
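For symmetric matrices Lemma 5 is immediate to verify: \(\nu _-(I-T^2)\) counts the eigenvalues t of T with \(|t|>1\). A short sketch (an added illustration; `neg_index`, counting negative eigenvalues, is a hypothetical helper):

```python
import numpy as np

def neg_index(M, tol=1e-10):
    """nu_-(M): the number of negative eigenvalues of a Hermitian matrix."""
    return int(np.sum(np.linalg.eigvalsh((M + M.T) / 2) < -tol))

rng = np.random.default_rng(6)
n = 5
T = rng.standard_normal((n, n)); T = (T + T.T) / 2   # selfadjoint
I = np.eye(n)

# Lemma 5: nu_-(I - T^2) = nu_-(I + T) + nu_-(I - T)
assert neg_index(I - T @ T) == neg_index(I + T) + neg_index(I - T)

# Equivalently, nu_-(I - T^2) counts the eigenvalues t of T with |t| > 1:
t = np.linalg.eigvalsh(T)
assert neg_index(I - T @ T) == int(np.sum(np.abs(t) > 1))
```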

The next problem concerns the existence and a description of selfadjoint operators T such that \({{\widetilde{A}} }_+=I+T\) and \({{\widetilde{A}} }_-=I-T\) solve the corresponding completion problems

$$\begin{aligned} A_{\pm }^0= \begin{pmatrix} I\pm T_{11}&{}\quad \pm T_{21}^*\\ \pm T_{21}&{}\quad *\end{pmatrix}, \end{aligned}$$
(26)

under the minimal index conditions \(\nu _-(I+T)=\nu _-(I+T_{11})\), \(\nu _-(I-T)=\nu _-(I-T_{11})\), respectively. Observe that if \(I\pm T\) provides an arbitrary completion to \(A_{\pm }^0\) then clearly \(\nu _-(I \pm T)\ge \nu _-(I \pm T_{11})\). Thus by Lemma 5 the two minimal index conditions above are equivalent to the single condition \(\nu _-(I-T^2)=\nu _-(I-T_{11}^2)\).

Unlike in the case of a selfadjoint contraction \(T_{11}\), this problem need not have solutions when \(\nu _-(I-T_{11}^2)>0\). It is clear from Theorem 1 that the conditions \(\mathrm{ran\,}T_{21}^*\subset \mathrm{ran\,}|I-T_{11}|^{1/2}\) and \(\mathrm{ran\,}T_{21}^*\subset \mathrm{ran\,}|I+T_{11}|^{1/2}\) are necessary for the existence of solutions; however, alone they are not sufficient.

The next theorem gives a general solvability criterion for the completion problem (26) and describes all solutions to this problem, when the criterion is satisfied. As in the definite case, there are minimal solutions \(A_+\) and \(A_-\) which are connected to two extreme selfadjoint extensions T of

$$\begin{aligned} T_1=\begin{pmatrix} T_{11}\\ T_{21} \end{pmatrix}:{\mathfrak H}_1\rightarrow \begin{pmatrix}{\mathfrak H}_1\\ {\mathfrak H}_2\end{pmatrix}, \end{aligned}$$
(27)

now with finite negative index \(\nu _-(I-T^2)=\nu _-(I-T_{11}^2)>0\). The set of all solutions T to the problem (26) will be denoted by \(\mathrm{Ext\,}_{T_1,\kappa }(-1,1)\).

Theorem 5

Let \(T_1\) be a symmetric operator as in (27) with \(T_{11}=T_{11}^*\in [{\mathfrak H}_1]\) and \(\nu _-(I-T_{11}^2)=\kappa <\infty \), and let \(J=\mathrm{sign\,}(I-T_{11}^2)\). Then the completion problem for \(A_{\pm }^0\) in (26) has a solution \(I\pm T\) for some \(T=T^*\) with \(\nu _-(I-T^2)=\kappa \) if and only if the following condition is satisfied:

$$\begin{aligned} \nu _-(I-T_{11}^2)=\nu _-(I-T_1^*T_1). \end{aligned}$$
(28)

If this condition is satisfied then the following facts hold:

  1. (i)

    The completion problems for \(A_{\pm }^0\) in (26) have minimal solutions \(A_\pm \).

  2. (ii)

    The operators \(T_m:=A_+-I\) and \(T_M:=I-A_-\) belong to \(\mathrm{Ext\,}_{T_1,\kappa }(-1,1)\).

  3. (iii)

    The operators \(T_m\) and \(T_M\) have the block forms

    $$\begin{aligned} \begin{array}{ll} T_m&{}= \begin{pmatrix} T_{11}&{}D_{T_{11}}V^*\\ VD_{T_{11}}&{}-I+V(I-T_{11})JV^* \end{pmatrix}, \\ T_M&{}= \begin{pmatrix} T_{11}&{}D_{T_{11}}V^*\\ VD_{T_{11}}&{}I-V(I+T_{11})JV^* \end{pmatrix}, \end{array} \end{aligned}$$
    (29)

    where \(D_{T_{11}}:=|I-T_{11}^2|^{1/2}\) and V is given by \(V:=\mathrm{clos\,}(T_{21}D_{T_{11}}^{[-1]})\).

  4. (iv)

    The operators \(T_m\) and \(T_M\) are extremal extensions of \(T_1\):

    $$\begin{aligned} T\in \mathrm{Ext\,}_{T_1,\kappa }(-1,1)\ \text { iff }\ T=T^*\in [{\mathfrak H}],\quad T_m\le T\le T_M. \end{aligned}$$
    (30)
  5. (v)

    The operators \(T_m\) and \(T_M\) are connected via

    $$\begin{aligned} (-T)_m=-T_M, \quad (-T)_M=-T_m. \end{aligned}$$
    (31)

Proof

It is easy to see that \(\kappa =\nu _-(I-T_{11}^2)\le \nu _-(I-T_1^*T_1)\le \nu _-(I-T^2)\). Hence the condition \(\nu _-(I-T^2)=\kappa \) implies (28). The sufficiency of this condition is established while proving the assertions (i)–(iii) below. (i) If the condition (28) is satisfied then \(\mathrm{ran\,}T_{21}^*\subset \mathrm{ran\,}|I- T_{11}^2|^{1/2}\) by Lemma 2. In fact, this inclusion is equivalent to the inclusions \(\mathrm{ran\,}T_{21}^*\subset \mathrm{ran\,}|I\pm T_{11}|^{1/2}\), which by Theorem 1 means that both of the completion problems, \(A_{\pm }^0\) in (26), are solvable. Consequently, the following operators

$$\begin{aligned} S_-=|I+T_{11}|^{[-1/2]}T_{21}^*,\quad S_+=|I-T_{11}|^{[-1/2]}T_{21}^* \end{aligned}$$
(32)

are well defined and they provide the minimal solutions \(A_\pm \) to the completion problems for \(A_\pm ^0\) in (26). Notice that the assumption that there is a simultaneous solution \(I\pm T\) with a single selfadjoint operator T is not yet used here.

(ii) & (iii) The proof of (i) shows that the inclusion \(\mathrm{ran\,}T_{21}^*\subset \mathrm{ran\,}|I- T_{11}^2|^{1/2}\) holds. This last inclusion alone is equivalent to the existence of a (unique) bounded operator \(V^*=D_{T_{11}}^{[-1]}T_{21}^*\) with \(\mathrm{ker\,}V\supset \mathrm{ker\,}D_{T_{11}}\), such that \(T_{21}^*=D_{T_{11}}V^*\). The operators \(T_m:=A_+-I\) and \(T_M:=I-A_-\) (see proof of (i)) can now be rewritten as in (29). Observe that

$$\begin{aligned} S_\mp =|I\pm T_{11}|^{[-1/2]}D_{T_{11}}V^*=P_\mp |I\mp T_{11}|^{1/2}V^*=|I\mp T_{11}|^{1/2}P_\mp V^*, \end{aligned}$$

where \(P_\mp \) are the orthogonal projections onto

$$\begin{aligned} (\mathrm{ker\,}|I\pm T_{11}|^{1/2})^\perp =(\mathrm{ker\,}|I\pm T_{11}|)^\perp =\overline{\mathrm{ran\,}}|I\pm T_{11}|=\overline{\mathrm{ran\,}}|I\pm T_{11}|^{1/2}. \end{aligned}$$

Since \(\mathrm{ker\,}V\supset \mathrm{ker\,}D_{T_{11}}\) implies \(\overline{\mathrm{ran\,}}V^*\subset \overline{\mathrm{ran\,}}D_{T_{11}}\subset \overline{\mathrm{ran\,}}|I\pm T_{11}|^{1/2}\), it follows that

$$\begin{aligned} S_-=|I-T_{11}|^{1/2}V^*,\quad S_+=|I+T_{11}|^{1/2}V^*. \end{aligned}$$

Consequently, see (25),

$$\begin{aligned} S_-^*J_-S_-= & {} V|I-T_{11}|^{1/2}J_-|I-T_{11}|^{1/2}V^*=V(I-T_{11})JV^*, \\ S_+^*J_+S_+= & {} V|I+T_{11}|^{1/2}J_+|I+T_{11}|^{1/2}V^*=V(I+T_{11})JV^*, \end{aligned}$$

which implies the representations for \(T_m\) and \(T_M\) in (29). Clearly, \(T_m\) and \(T_M\) are selfadjoint extensions of \(T_1\), which satisfy the equalities

$$\begin{aligned} \nu _-(I+T_{m})=\kappa _-,\quad \nu _-(I-T_{M})=\kappa _+. \end{aligned}$$

Moreover, it follows from (29) that

$$\begin{aligned} T_M-T_m= \begin{pmatrix} 0&{}\quad 0\\ 0&{}\quad 2(I-VJV^*) \end{pmatrix}. \end{aligned}$$
(33)

Now the assumption (28) will be used again. Since \(\nu _-(I-T_{1}^*T_{1})=\nu _-(I-T_{11}^2)\) and \(T_{21}=VD_{T_{11}}\) it follows from Lemma 2 that \(V^*\in [{\mathfrak H}_2,{\mathfrak D}_{T_{11}}]\) is J-contractive: \(I-VJV^*\ge 0\). Therefore, (33) shows that \(T_M\ge T_m\) and \(I+T_M\ge I+T_m\) and hence, in addition to \(I+T_m\), also \(I+T_M\) is a solution to the problem \(A_{+}^0\) and, in particular, \(\nu _-(I+T_M)=\kappa _-=\nu _-(I+T_m)\). Similarly, \(I-T_M\le I-T_m\) which implies that \(I-T_m\) is also a solution to the problem \(A_{-}^0\), in particular, \(\nu _-(I-T_m)=\kappa _+=\nu _-(I-T_M)\). Now by applying Lemma 5 we get

$$\begin{aligned} \nu _-(I-T_m^2)=\nu _-(I-T_m)+\nu _-(I+T_m)=\kappa _++\kappa _-=\kappa ,\\ \nu _-(I-T_M^2)=\nu _-(I-T_M)+\nu _-(I+T_M)=\kappa _++\kappa _-=\kappa . \end{aligned}$$

Therefore, \(T_m,T_M\in \mathrm{Ext\,}_{T_1,\kappa }(-1,1)\) which in particular proves that the condition (28) is sufficient for solvability of the completion problem (26).

(iv) Observe that \(T\in \mathrm{Ext\,}_{T_1,\kappa }(-1,1)\) if and only if \(T=T^*\supset T_1\) and \(\nu _-(I\pm T)=\kappa _\mp \). By Theorem 1 this is equivalent to

$$\begin{aligned} S_-^*J_-S_--I\le T_{22}\le I-S_+^*J_+S_+. \end{aligned}$$
(34)

The inequalities (34) are equivalent to (30).

(v) The relations (31) follow from (32) and (29).

For a Hilbert space contraction \(T_1\) one has \(\nu _-(I-T_{11}^2)\le \nu _-(I-T_1^*T_1)=0\), i.e., the criterion (28) is automatically satisfied. In this case Theorem 5 has been proved in [39]. As Theorem 5 shows, under the minimal index condition \(\nu _-(I-T^2)=\nu _-(I-T_{11}^2)\), the solution set \(\mathrm{Ext\,}_{T_1,\kappa }(-1,1)\) admits the same attractive description as an operator interval determined by the two extreme extensions \(T_m\) and \(T_M\) as was originally proved by Kreĭn in his paper [47] when describing all contractive selfadjoint extensions of a Hilbert space contraction. In particular, Theorem 5 shows that if there is a solution to the completion problem (26), i.e. if \(T_1\) satisfies the index condition (28), then all selfadjoint extensions T of \(T_1\) satisfying the equality \(\nu _-(I-T^2)= \nu _-(I-T_1^*T_1)\) are determined by the operator inequalities \(T_m\le T\le T_M\). The original paper [47] of M. G. Kreĭn has never been translated: for some literature in English where many of the original ideas of Kreĭn have been presented we refer to the monographs [1, 9, 51] and the papers [11, 39].
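
The formulas (29) can be traced through a minimal finite-dimensional example. The following NumPy sketch (the data \(T_{11}\), \(T_{21}\) are illustrative assumptions with \(\kappa =1\)) verifies the criterion (28), builds \(T_m\) and \(T_M\) from (29), and checks that both extend \(T_1\), have minimal negative index, and satisfy \(T_m\le T_M\).

```python
import numpy as np

def nu_minus(M, tol=1e-9):
    """Number of negative eigenvalues of a selfadjoint matrix M."""
    return int(np.sum(np.linalg.eigvalsh(M) < -tol))

# Toy data: H1 = R^2, H2 = R^1, with one eigenvalue of T11 outside [-1, 1].
T11 = np.diag([2.0, 0.0])              # nu_-(I - T11^2) = kappa = 1
T21 = np.array([[1.0, 0.0]])
I2, I1 = np.eye(2), np.eye(1)

# Solvability criterion (28): nu_-(I - T11^2) = nu_-(I - T1* T1).
M = I2 - T11 @ T11
assert nu_minus(M) == nu_minus(M - T21.T @ T21) == 1

# J = sign(I - T11^2), D_{T11} = |I - T11^2|^{1/2}, V = T21 D^{[-1]}.
w, U = np.linalg.eigh(M)
J = U @ np.diag(np.sign(w)) @ U.T
D = U @ np.diag(np.sqrt(np.abs(w))) @ U.T
V = T21 @ np.linalg.inv(D)             # D is invertible in this example

# The extreme solutions (29).
Tm = np.block([[T11, D @ V.T], [V @ D, -I1 + V @ (I2 - T11) @ J @ V.T]])
TM = np.block([[T11, D @ V.T], [V @ D,  I1 - V @ (I2 + T11) @ J @ V.T]])

I3 = np.eye(3)
for T in (Tm, TM):
    assert np.allclose(T[:, :2], np.vstack([T11, T21]))   # T extends T1
    assert nu_minus(I3 - T @ T) == 1                      # minimal negative index
assert np.all(np.linalg.eigvalsh(TM - Tm) >= -1e-9)       # Tm <= TM
assert nu_minus(I3 + Tm) == 0 and nu_minus(I3 - TM) == 1  # kappa_-, kappa_+
```

Here \(T_m\) and \(T_M\) come out as the \(3\times 3\) matrices with last diagonal entries \(-2/3\) and \(2\), respectively, so \(T_M-T_m=\mathrm{diag\,}(0,0,8/3)\ge 0\) in accordance with (33).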

The original proof of Kreĭn in [47] for the description of all contractive selfadjoint extensions of a Hilbert space contraction \(T_1\) as the operator interval in (30) was based on the notion of shortening or shorted operator; cf. (1). To get this result Kreĭn first constructed one contractive selfadjoint extension T for \(T_1\) and then used it together with the following two formulas involving shortening of \(I+T\) and \(I-T\) to the subspace \({\mathfrak N}={\mathfrak H}\ominus \mathrm{dom\,}T_1={\mathfrak H}_2\):

$$\begin{aligned} T_m=T-(I+T)_{\mathfrak N}, \quad T_M=T+(I-T)_{\mathfrak N}, \end{aligned}$$

see [47, Theorem 3]. It follows from Theorem 1, see also (10), and the formulas for \(T_m\) and \(T_M\) in Theorem 5 that these descriptions of \(T_m\) and \(T_M\) remain true in the present setting: indeed, using the given block formulas one can directly check that

$$\begin{aligned} I+T=I+T_m+(I+T)_{\mathfrak N}, \quad I-T=I-T_M +(I-T)_{\mathfrak N}, \end{aligned}$$

where the shortening is calculated as defined in (17).

Notice that T belongs to the solution set \(\mathrm{Ext\,}_{T_1,\kappa }(-1,1)\) precisely when \(T=T^*\supset T_1\) and \(\nu _-(I\pm T)=\kappa _\mp \). This means that every selfadjoint extension of \(T_1\) for which \(\nu _-(I-T^2)=\nu _-(I-T_1^*T_1)\) admits precisely \(\kappa _-\) eigenvalues in the interval \((-\infty ,-1)\) and \(\kappa _+\) eigenvalues in the interval \((1,\infty )\); in total there are \(\kappa =\kappa _- +\kappa _+\) eigenvalues outside the closed interval \([-1,1]\). The fact that the numbers \(\kappa _\mp =\nu _-(I\pm T)\) are constant in the solution set \(\mathrm{Ext\,}_{T_1,\kappa }(-1,1)\) is crucial for dealing properly with the Cayley transforms in the next section.

6 A generalization of M. G. Kreĭn’s approach to the extension theory of nonnegative operators

6.1 Some antitonicity theorems for selfadjoint relations

The notion of inertia of a selfadjoint relation in a Hilbert space is defined by means of its associated spectral measure. In what follows the Hilbert space is assumed to be separable.

Definition 1

Let H be a selfadjoint relation in a separable Hilbert space \({\mathfrak H}\) and let \(E_t(\cdot )\) be the spectral measure of H. The inertia of H is defined as the ordered quadruplet \(\mathsf{i }(H)=\{\mathsf{i }^+(H),\mathsf{i }^-(H),\mathsf{i }^0(H),\mathsf{i }^\infty (H)\}\), where

$$\begin{aligned} \mathsf{i }^+(H)= & {} \mathrm{dim\,}\mathrm{ran\,}E_t((0,\infty )),\quad \mathsf{i }^-(H)=\mathrm{dim\,}\mathrm{ran\,}E_t((-\infty ,0)),\\ \mathsf{i }^0(H)= & {} \mathrm{dim\,}\mathrm{ker\,}H,\quad \, \mathsf{i }^\infty (H)=\mathrm{dim\,}\mathrm{mul\,}H. \end{aligned}$$

In particular, for a selfadjoint relation H in \({\mathbb C}^n\), the quadruplet \(\mathsf{i }(H)\) consists of the numbers of positive, negative, zero, and infinite eigenvalues of H; cf. [15]. Hence, if H is a selfadjoint matrix in \({\mathbb C}^n\), then \(\mathsf{i}^\infty (H)=0\) and the remaining numbers make up the usual inertia of H.
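
For matrices the quadruplet of Definition 1 reduces to ordinary eigenvalue counting; a minimal sketch (the matrix H is an illustrative assumption):

```python
import numpy as np

def inertia(H, tol=1e-9):
    """Inertia quadruplet (i+, i-, i0, i_inf) of a selfadjoint matrix H.
    For a matrix (an everywhere defined operator) the multivalued part is {0},
    so i_inf = 0."""
    w = np.linalg.eigvalsh(H)
    ipos = int(np.sum(w > tol))
    ineg = int(np.sum(w < -tol))
    return (ipos, ineg, len(w) - ipos - ineg, 0)

H = np.diag([2.0, 0.0, -1.0, -3.0])
assert inertia(H) == (1, 2, 1, 0)
```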

The following theorem characterizes the validity of the implication

$$\begin{aligned} H_1\le H_2 \quad \Rightarrow \quad H_2^{-1}\le H_1^{-1} \end{aligned}$$

for a pair of bounded selfadjoint operators \(H_1\) and \(H_2\) having bounded inverses; in the infinite dimensional case it has been proved independently in [30, 40, 61]; cf. also [41]. Some extensions of this result, where the condition \(\min \{\mathsf{i }^+_2,\mathsf{i }^-_1\}<\infty \) is relaxed, are also contained in [40, 41, 61].

Theorem 6

Let \(H_1\) and \(H_2\) be bounded and boundedly invertible selfadjoint operators in a separable Hilbert space \({\mathfrak H}\). Let \(\mathsf{i }(H_j)=\{\mathsf{i }^+_j,\mathsf{i }^-_j,\mathsf{i }^0_j,\mathsf{i }^\infty _j\}\) be the inertia of \(H_j\), \(j=1,2\), and assume that \(\min \{\mathsf{i }^+_2,\mathsf{i }^-_1\}<\infty \) and that \(H_1\le H_2\). Then

$$\begin{aligned} H_2^{-1} \le H_1^{-1} \quad \text {if and only if} \quad \mathsf{i}(H_1) = \mathsf{i}(H_2). \end{aligned}$$
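
Both directions of Theorem 6 are visible already for \(2\times 2\) diagonal matrices; the following sketch (the matrices are illustrative assumptions) shows antitonicity holding when the inertias agree and failing when they do not.

```python
import numpy as np

def psd(M, tol=1e-9):
    """True if the selfadjoint matrix M is positive semidefinite."""
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

def neg(M, tol=1e-9):
    """Number of negative eigenvalues of M."""
    return int(np.sum(np.linalg.eigvalsh(M) < -tol))

# Equal inertia: H1 <= H2 and indeed H2^{-1} <= H1^{-1}.
H1, H2 = np.diag([-2.0, 1.0]), np.diag([-1.0, 2.0])
assert psd(H2 - H1) and neg(H1) == neg(H2)
assert psd(np.linalg.inv(H1) - np.linalg.inv(H2))

# Different inertia: H1 <= H2 but antitonicity fails.
H1, H2 = np.diag([-1.0, 1.0]), np.diag([1.0, 2.0])
assert psd(H2 - H1) and neg(H1) != neg(H2)
assert not psd(np.linalg.inv(H1) - np.linalg.inv(H2))
```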

Very recently two extensions of Theorem 6 have been established in [15] for a general pair of selfadjoint operators and relations without any invertibility assumptions. For the present purposes we need the second main antitonicity theorem from [15], which reads as follows.

Theorem 7

Let \(H_1\) and \(H_2\) be selfadjoint relations in a separable Hilbert space \({\mathfrak H}\) which are semibounded from below. Let \(\mathsf{i }(H_j)=\{\mathsf{i }^+_j,\mathsf{i }^-_j,\mathsf{i }^0_j,\mathsf{i }^\infty _j\}\) be the inertia of \(H_j\), \(j=1,2\), and assume that \(\mathsf{i }^-_1<\infty \) and that \(H_1 \le H_2\). Then

$$\begin{aligned} H_2^{-1} \le H_1^{-1} \quad \text {if and only if} \quad \mathsf{i}_1^- = \mathsf{i}_2^-. \end{aligned}$$

The ordering appearing in Theorem 7 is defined via

$$\begin{aligned} H_1 \le H_2 \quad \Leftrightarrow \quad 0\le (H_2-aI)^{-1}\le (H_1-aI)^{-1}, \end{aligned}$$

where \(a<\min \{\mu (H_1),\mu (H_2)\}\) is fixed and \(\mu (H_i)\in {\mathbb R}\) stands for the lower bound of \(H_i\), \(i=1,2\). Notice that the conditions \(H_1 \le H_2\) and \(\mathsf{i }^-_1<\infty \) imply \(\mathsf{i }^-_2<\infty \); in particular these conditions already imply that the inverses \(H_1^{-1}\) and \(H_2^{-1}\) are also semibounded from below. For further facts on ordering of semibounded selfadjoint operators and relations the reader is referred to [15, 42].

6.2 Cayley transforms

Define the linear fractional transformation \({\mathcal {C}}\), taking a linear relation A into a linear relation \({\mathcal {C}}(A)\), by

$$\begin{aligned} {\mathcal {C}}(A)=\{\, \{f+f',f-f'\} : {{\widehat{f}} }=\{f,f'\} \in A\,\}= -I+2(I+A)^{-1}. \end{aligned}$$
(35)

Clearly, \({\mathcal {C}}\) maps the (closed) linear relations one-to-one onto themselves, \({\mathcal {C}}^{2}=I\), and

$$\begin{aligned} {\mathcal {C}}(A)^{-1}={\mathcal {C}}(-A), \end{aligned}$$
(36)

for every linear relation A. Moreover,

$$\begin{aligned}&\mathrm{dom\,}{\mathcal {C}}(A)=\mathrm{ran\,}(I+A), \quad \mathrm{ran\,}{\mathcal {C}}(A)= \mathrm{ran\,}(I-A), \\&\mathrm{ker\,}({\mathcal {C}}(A)-I)=\mathrm{ker\,}A, \quad \mathrm{ker\,}({\mathcal {C}}(A)+I)=\mathrm{mul\,}A. \end{aligned}$$

In addition, \({\mathcal {C}}\) preserves closures, adjoints, componentwise sums, orthogonal sums, intersections, and inclusions. The relation \({\mathcal {C}}(A)\) is symmetric if and only if A is symmetric. It follows from (35) and

$$\begin{aligned} \Vert f+f'\Vert ^2-\Vert f-f'\Vert ^2=4\mathrm{Re\,}(f',f) \end{aligned}$$
(37)

that \({\mathcal {C}}\) gives a one-to-one correspondence between nonnegative (selfadjoint) linear relations and symmetric (respectively, selfadjoint) contractions. Observe the following mapping properties of \({\mathcal {C}}\) on the extended real line \({\mathbb R}\cup \{\pm \infty \}\):

$$\begin{aligned} {\mathcal {C}}([0,1])&=[0,1];\quad {\mathcal {C}}([-1,0])=[1,+\infty ];\nonumber \\ {\mathcal {C}}([1,+\infty ])&=[-1,0];\quad {\mathcal {C}}([-\infty ,-1])=[-\infty ,-1]. \end{aligned}$$
(38)

If H is a selfadjoint relation then

$$\begin{aligned} \mathsf{i }^-(I+H)=\mathsf{i }^-({\mathcal {C}}(H)+I),\quad \mathsf{i }^-(I-H)=\mathsf{i }^-({\mathcal {C}}(H)^{-1}+I), \end{aligned}$$

and hence

$$\begin{aligned} \sigma (H)\cap (-\infty ,-1)&=\sigma ({\mathcal {C}}(H))\cap (-\infty ,-1),\nonumber \\ \sigma (H)\cap (1,+\infty )&=\sigma ({\mathcal {C}}(H)^{-1})\cap (-\infty ,-1)=\sigma ({\mathcal {C}}(H))\cap (-1,0); \end{aligned}$$
(39)

which can also be seen from (38).
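
For matrices with \(I+A\) invertible the properties \({\mathcal C}^2=I\), \({\mathcal C}(A)^{-1}={\mathcal C}(-A)\), and the correspondence between nonnegative operators and contractions can all be checked directly; a minimal NumPy sketch (the spectrum of A is an illustrative assumption):

```python
import numpy as np

def cayley(A):
    """C(A) = -I + 2(I + A)^{-1}, for matrices with I + A invertible."""
    I = np.eye(len(A))
    return -I + 2 * np.linalg.inv(I + A)

# A positive selfadjoint matrix with spectrum {1/3, 1/2, 3}.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q @ np.diag([1/3, 1/2, 3.0]) @ Q.T

T = cayley(A)
assert np.linalg.norm(T, 2) <= 1 + 1e-9           # C maps A >= 0 to a contraction
assert np.allclose(cayley(T), A)                  # C is an involution: C(C(A)) = A
assert np.allclose(cayley(-A), np.linalg.inv(T))  # C(A)^{-1} = C(-A), cf. (36)
```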

6.3 M. G. Kreĭn’s approach to the extension theory with a minimal negative index

In M. G. Kreĭn’s approach to the extension theory of nonnegative operators the idea is to make a connection to the selfadjoint contractive extensions of a hermitian contraction T via the Cayley transform in (35). The extension of this approach to the present indefinite situation is based on the fact that the Cayley transform still reverses the ordering of selfadjoint extensions due to the antitonicity result formulated in Theorem 7 and the fact that in Theorem 5 \(T\in \mathrm{Ext\,}_{T_1,\kappa }(-1,1)\) if and only if \(T=T^*\supset T_1\) and \(\nu _-(I\pm T)=\kappa _\mp \).

A semibounded symmetric relation A is said to be quasi-nonnegative if the associated form \(a(f,f):=(f',f)\), \(\{f,f'\}\in A\), has a finite number of negative squares, i.e. every A-negative subspace \({\mathfrak L}\subset \mathrm{dom\,}A\) is finite dimensional. If the maximal dimension of A-negative subspaces is finite and equal to \(\kappa \in {\mathbb Z}_+\), then A is said to be \(\kappa \)-nonnegative; the more precise notations \(\nu _-(a)\), \(\nu _-(A)\) are used to indicate the maximal number of negative squares of the form a and the relation A, respectively; here \(\nu _-(a)=\nu _-(A)\). A selfadjoint extension \({{\widetilde{A}} }\) of A is said to be a \(\kappa \)-nonnegative extension of A if \(\nu _-({{\widetilde{A}} })=\kappa \). The set of all such extensions will be denoted by \(\mathrm{Ext\,}_{A,\kappa }(0,\infty )\).

If A is a closed symmetric relation in the Hilbert space \({\mathfrak H}\) with \(\nu _-(A)<\infty \), then the subspace \({\mathfrak H}_1:=\mathrm{ran\,}(I+A)\) is closed, since the Cayley transform \(T_1={\mathcal {C}}(A)\) is a closed bounded symmetric operator in \({\mathfrak H}\) with \(\mathrm{dom\,}T_1={\mathfrak H}_1\). Let \(P_1\) be the orthogonal projection onto \({\mathfrak H}_1\) and let \(P_2=I-P_1\). Then the form

$$\begin{aligned} a_1(f,f):=(P_1f',f), \quad \{f,f'\}\in A, \end{aligned}$$
(40)

is symmetric and it has a finite number of negative squares.

Lemma 6

Let A be a closed symmetric relation in \({\mathfrak H}\) with \(\nu _-(A)<\infty \) and let \(T_1={\mathcal {C}}(A)\). Then the form \(a_1\) is given by

$$\begin{aligned} a_1(f,f)=a(f,f)+\Vert P_2 f \Vert ^2 \end{aligned}$$
(41)

with \(\nu _-(a_1)\le \nu _-(A)\). Moreover,

$$\begin{aligned} 4a_1(f,f)=\Vert g\Vert ^2-\Vert T_{11}g\Vert ^2, \quad 4a(f,f)=\Vert g\Vert ^2-\Vert T_1g\Vert ^2, \end{aligned}$$

where \(\{f,f'\}\in A\), \(g=f+f'\), and \(T_{11}=P_1T_1\). In addition, \(T_{21}=P_2T_1\) satisfies \(\Vert T_{21}g \Vert ^2=4\Vert P_2f\Vert ^2=-4(P_2 f',f)\).

Proof

The formula (37) shows that if \(T_1={\mathcal {C}}(A)\) and \(\{f,f'\}\in A\), then

$$\begin{aligned} \Vert g\Vert ^2-\Vert T_1g\Vert ^2=4 (f',f)=4 a(f,f), \quad g=f+f'\in \mathrm{dom\,}T_1= {\mathfrak H}_1. \end{aligned}$$

Moreover, \(T_{21}g=P_2(f-f')=2P_2f=-2P_2f'\) gives \((P_2f',f)=-\Vert P_2 f\Vert ^2\) and

$$\begin{aligned} \Vert T_{21}g \Vert ^2=-4(P_2 f',P_2 f)=-4(P_2f',f). \end{aligned}$$

In particular, (41) follows from

$$\begin{aligned} a(f,f)=(P_1f',f)+(P_2f',f)=a_1(f,f)-\Vert P_2 f\Vert ^2. \end{aligned}$$

Finally, (41) combined with \(\Vert T_{21}g\Vert ^2=4\Vert P_2f\Vert ^2\) leads to

$$\begin{aligned} 4a_1(f,f)=\Vert g\Vert ^2-\Vert T_1g\Vert ^2+\Vert T_{21}g\Vert ^2=\Vert g\Vert ^2-\Vert T_{11}g\Vert ^2. \end{aligned}$$

The main result in this section concerns the existence and a description of all selfadjoint extensions \({{\widetilde{A}} }\) of a symmetric relation A for which \(\nu _-({{\widetilde{A}} })<\infty \) attains the minimal value \(\nu _-(a_1)\). A criterion for the existence of such a selfadjoint extension is established, in which case all such extensions are described in a manner that is familiar from the case of nonnegative operators. To formulate the result assume that the selfadjoint quasi-contractive extensions \(T_m\) and \(T_M\) of \(T_1\) as in Theorem 5 exist, and denote the corresponding selfadjoint relations \(A_F\) and \(A_K\) by

$$\begin{aligned} A_F = {\mathcal {C}}(T_{m})=-I+2(I+T_m)^{-1}, \quad A_K = {\mathcal {C}}(T_{M})=-I+2(I+T_M)^{-1}. \end{aligned}$$
(42)

Theorem 8

Let A be a closed symmetric relation in \({\mathfrak H}\) with \(\nu _-(A)<\infty \) and denote \(\kappa =\nu _-(a_1)\,(\le \nu _-(A))\), where \(a_1\) is given by (40). Then \(\mathrm{Ext\,}_{A,\kappa }(0,\infty )\) is nonempty if and only if \(\nu _-(A)=\kappa \). In this case \(A_F\) and \(A_K\) are well defined and they belong to \(\mathrm{Ext\,}_{A,\kappa }(0,\infty )\). Moreover, the formula

$$\begin{aligned} \widetilde{A}=-I+2(I+T)^{-1} \end{aligned}$$
(43)

gives a bijective correspondence between the quasi-contractive selfadjoint extensions \(T\in \mathrm{Ext\,}_{T_1,\kappa }(-1,1)\) of \(T_1\) and the selfadjoint extensions \(\widetilde{A}=\widetilde{A}^*\in \mathrm{Ext\,}_{A,\kappa }(0,\infty )\) of A. Furthermore, \(\widetilde{A}=\widetilde{A}^*\in \mathrm{Ext\,}_{A,\kappa }(0,\infty )\) precisely when

$$\begin{aligned} A_K\le {{\widetilde{A}} }\le A_F, \end{aligned}$$
(44)

or equivalently, when \(A_F^{-1}\le {{\widetilde{A}} }^{-1}\le A_K^{-1}\), or

$$\begin{aligned} (A_F+I)^{-1}\le (\widetilde{A}+I)^{-1}\le (A_K+I)^{-1}. \end{aligned}$$
(45)

The set \(\mathrm{Ext\,}_{A^{-1},\kappa }(0,\infty )\) is also nonempty and \({{\widetilde{A}} }\in \mathrm{Ext\,}_{A,\kappa }(0,\infty )\) if and only if \({{\widetilde{A}} }^{-1}\in \mathrm{Ext\,}_{A^{-1},\kappa }(0,\infty )\). The extreme selfadjoint extensions \(A_F\) and \(A_K\) of A are connected to those of \(A^{-1}\) via

$$\begin{aligned} (A^{-1})_F=(A_K)^{-1}, \quad (A^{-1})_K=(A_F)^{-1}. \end{aligned}$$
(46)

Proof

Since \(\nu _-(A)<\infty \), the Cayley transform \(T_1={\mathcal {C}}(A)\) defines a bounded symmetric operator in \({\mathfrak H}\) with \({\mathfrak H}_1=\mathrm{dom\,}T_1=\mathrm{ran\,}(I+A)\). It follows from Lemma 6 that

$$\begin{aligned} \nu _-(A)=\nu _-(a)=\nu _-(I-T_1^*T_1), \quad \nu _-(a_1)=\nu _-(I-T_{11}^2), \end{aligned}$$

and therefore the condition \(\nu _-(A)=\kappa \) is equivalent to the solvability criterion (28) in Theorem 5. Moreover, \({{\widetilde{A}} }\) is a selfadjoint extension of A if and only if \(T={\mathcal {C}}({{\widetilde{A}} })\) is a selfadjoint extension of \(T_1\) and by Lemma 6 the equality \(\nu _-({{\widetilde{A}} })=\nu _-(I-T^2)\) holds. Therefore, it follows from Theorem 5 that the set \(\mathrm{Ext\,}_{A,\kappa }(0,\infty )\) is nonempty if and only if \(\nu _-(A)=\kappa \) and in this case the formula (43) establishes a one-to-one correspondence between the sets \(\mathrm{Ext\,}_{A,\kappa }(0,\infty )\) and \(\mathrm{Ext\,}_{T_1,\kappa }(-1,1)\).

Next the characterizations (44) and (45) for the set \(\mathrm{Ext\,}_{A,\kappa }(0,\infty )\) are established. Let \({{\widetilde{A}} }\in \mathrm{Ext\,}_{A,\kappa }(0,\infty )\) and let \(T={\mathcal {C}}({{\widetilde{A}} })\). According to Theorem 5, \(T={\mathcal {C}}({{\widetilde{A}} })\in \mathrm{Ext\,}_{T_1,\kappa }(-1,1)\) if and only if T satisfies the inequalities \(T_m\le T\le T_M\). It is clear from the formulas (42) and (43) that the inequalities \(I+T_m\le I+T\le I+T_M\) are equivalent to the inequalities (45).

On the other hand, \(\nu _-(I-T_{11}^2)=\nu _-(I-T^2)\) and hence the indices \(\kappa _+=\nu _-(I-T_{11})=\nu _-(I-T)\) and \(\kappa _-=\nu _-(I+T_{11})=\nu _-(I+T)\) do not depend on \(T={\mathcal {C}}({{\widetilde{A}} })\); cf. (25). The mapping properties (39) of the Cayley transform imply that the numbers of eigenvalues of \({{\widetilde{A}} }\) in the open intervals \((-\infty ,-1)\) and \((-1,0)\) are also constant and equal to \(\kappa _-\) and \(\kappa _+\), respectively. In particular, since \(\kappa _-=\nu _-(I+T)\) is constant we can apply Theorem 6 to conclude that the inequalities \(I+T_m\le I+T\le I+T_M\) are equivalent to

$$\begin{aligned} (I+T_M)^{-1}\le (I+T)^{-1}\le (I+T_m)^{-1}, \end{aligned}$$

which due to the formulas (42) and (43) can be rewritten as \(A_K+I\le {{\widetilde{A}} }+I \le A_F+I\), or as \(A_K\le {{\widetilde{A}} }\le A_F\). This proves (44). Since \(\nu _-({{\widetilde{A}} })=\kappa =\kappa _-+\kappa _+\) is also constant, an application of Theorem 7 shows that the inequalities (44) are also equivalent to \(A_F^{-1}\le {{\widetilde{A}} }^{-1}\le A_K^{-1}\).

As to the inverse \(A^{-1}\), notice that \(\nu _-(A^{-1})=\nu _-(A)\). Moreover, since \(A^{-1}={\mathcal {C}}(-T_1)\) it is clear that \(\mathrm{ran\,}(I+A^{-1})=\mathrm{dom\,}T_1\) and thus the form associated to \(A^{-1}\) via (40) satisfies \(a_1^{(-1)}(f',f')=(P_1f,f')=(P_1f',f)=a_1(f,f)\). In particular, \(\nu _-(a_1^{(-1)})=\nu _-(a_1)\). Consequently, the equality \(\nu _-(A)=\nu _-(a_1)\) is equivalent to the equality \(\nu _-(A^{-1})=\nu _-(a_1^{(-1)})\). Furthermore, it is clear that \({{\widetilde{A}} }\in \mathrm{Ext\,}_{A,\kappa }(0,\infty )\) if and only if \({{\widetilde{A}} }^{-1}\in \mathrm{Ext\,}_{A^{-1},\kappa }(0,\infty )\).

Finally, the relations (46) are obtained from (31), (36), and (42).
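
The correspondence (43) can be illustrated in the nonnegative case \(\kappa =0\) with a one-dimensional defect; the following sketch (the data \(T_{11}=0\), \(T_{21}=1/2\) on \({\mathfrak H}_1={\mathfrak H}_2={\mathbb R}\) are illustrative assumptions) takes a contractive selfadjoint extension T strictly between \(T_m\) and \(T_M\) and checks that \({\widetilde{A}}={\mathcal C}(T)\) is a nonnegative selfadjoint extension of \(A={\mathcal C}(T_1)\).

```python
import numpy as np

def cayley(T):
    """C(T) = -I + 2(I + T)^{-1} for matrices with I + T invertible."""
    I = np.eye(len(T))
    return -I + 2 * np.linalg.inv(I + T)

# From (29) with T11 = 0, T21 = 1/2, so D_{T11} = J = 1 and V = 1/2.
Tm = np.array([[0.0, 0.5], [0.5, -0.75]])
TM = np.array([[0.0, 0.5], [0.5,  0.75]])
T  = (Tm + TM) / 2            # a selfadjoint contractive extension of T1

Atilde = cayley(T)            # the correspondence (43)
assert np.all(np.linalg.eigvalsh(Atilde) >= -1e-12)   # Atilde >= 0

# A = C(T1) contains the pair {f + f', f - f'} with f = (1, 0), f' = (0, 1/2),
# i.e. {(1, 1/2), (1, -1/2)}, and Atilde extends A:
g, g1 = np.array([1.0, 0.5]), np.array([1.0, -0.5])
assert np.allclose(Atilde @ g, g1)

# (Atilde + I)^{-1} = (I + T)/2, the identity behind (45).
assert np.allclose(np.linalg.inv(Atilde + np.eye(2)), (np.eye(2) + T) / 2)
```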

It follows from Theorem 8 that the extensions \(\widetilde{A}\in \mathrm{Ext\,}_{A,\kappa }(0,\infty )\) admit a uniform lower bound \(\mu \le \mu ({{\widetilde{A}} })\,(\mu \le 0)\). This leads to the following inequalities for the resolvents.

Corollary 4

With the assumptions as in Theorem 8 let \(\nu _-(a_1)=\nu _-(A)<\infty \) and \(\mu \le 0\) be a uniform lower bound for the extensions \(\widetilde{A}\in \mathrm{Ext\,}_{A,\kappa }(0,\infty )\). Then the resolvents of these extensions satisfy the inequalities

$$\begin{aligned} (A_F+a)^{-1}\le (\widetilde{A}+ a)^{-1}\le (A_K+a)^{-1}, \quad a>-\mu . \end{aligned}$$
(47)

Proof

Let \(T={\mathcal {C}}({{\widetilde{A}} })\in \mathrm{Ext\,}_{T_1,\kappa }(-1,1)\) be the Cayley transform of the extension \(\widetilde{A}\in \mathrm{Ext\,}_{A,\kappa }(0,\infty )\) and rewrite the resolvent of \({{\widetilde{A}} }\) in the form

$$\begin{aligned} (\widetilde{A}+a)^{-1} = \frac{1}{a-1}\, I-\frac{2}{(a-1)^2}\, \left( T+\frac{a+1}{a-1}\right) ^{-1}. \end{aligned}$$

Since \(-a<\mu \le \mu ({{\widetilde{A}} })\), T admits precisely \(\kappa _-\) eigenvalues below the number \(-{(a+1)}/{(a-1)}<-1\). Therefore the inequalities \(T_m\le T\le T_M\) in Theorem 5 or, equivalently, the inequalities

$$\begin{aligned} T_m+\frac{a+1}{a-1}\le T+\frac{a+1}{a-1}\le T_M+\frac{a+1}{a-1} \end{aligned}$$

imply the inequalities (47) by Theorem 6.
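
The resolvent identity used in this proof is a purely algebraic consequence of (43) and can be verified numerically; a sketch (the spectrum of T and the value of a are illustrative assumptions chosen so that all inverses exist):

```python
import numpy as np

# A selfadjoint T with eigenvalues avoiding -1 and -(a+1)/(a-1).
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
T = Q @ np.diag([-3.0, -0.3, 0.4, 1.5]) @ Q.T
I = np.eye(4)
Atilde = -I + 2 * np.linalg.inv(I + T)   # the Cayley transform (43)

a = 3.0
c = (a + 1) / (a - 1)
lhs = np.linalg.inv(Atilde + a * I)
rhs = I / (a - 1) - (2 / (a - 1) ** 2) * np.linalg.inv(T + c * I)
assert np.allclose(lhs, rhs)
```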

The antitonicity Theorems 6 and 7 can also be used as follows. If the inequalities (44) and \(A_F^{-1}\le {{\widetilde{A}} }^{-1}\le A_K^{-1}\) hold, then \(\kappa =\nu _-({{\widetilde{A}} })=\nu _-(A_K)=\nu _-(A_F)\) is constant. If, in addition, (45) is satisfied, then it follows from (44) that \(\kappa _-=\nu _-(I+{{\widetilde{A}} })=\nu _-(I+A_K)=\nu _-(I+A_F)\) is constant, so that also \(\kappa _+=\nu _-(I-{{\widetilde{A}} })=\nu _-(I-A_K)=\nu _-(I-A_F)\) is constant. However, in this case the equality \(\nu _-(a_1)=\nu _-(A)\) need not hold and there can also be selfadjoint extensions \({{\widetilde{A}} }\) of A with

$$\begin{aligned} \nu _-({{\widetilde{A}} })=\nu _-(A_K)=\nu _-(A_F)>\nu _-(A)\ge \nu _-(a_1), \end{aligned}$$

which neither satisfy the inequalities (44) and (45), nor the equalities \(\nu _-(I+{{\widetilde{A}} })=\kappa _-\) and \(\nu _-(I-{{\widetilde{A}} })=\kappa _+\). It is emphasized that the result in Theorem 8 characterizes all selfadjoint extensions in \(\mathrm{Ext\,}_{A,\kappa }(0,\infty )\) under the minimal index condition \(\kappa =\nu _-(a_1)=\nu _-(A)\).

In the case that A is nonnegative one has automatically \(\kappa =\nu _-(a_1)=\nu _-(A)=0\). Therefore, Theorem 8 is a precise generalization of the well-known characterization of the class \(\mathrm{Ext\,}_{A}(0,\infty )\) (with \(\kappa =0\)) due to M. G. Kreĭn [47] to the case of a finite negative (minimal) index \(\kappa >0\). The selfadjoint extensions \(A_F\) and \(A_K\) of A are called the Friedrichs (hard) and the Kreĭn–von Neumann (soft) extension, respectively; these notions go back to [36, 56]. The extremal properties (47) of the Friedrichs and Kreĭn–von Neumann extensions were discovered by Kreĭn [47] in the case when A is a densely defined nonnegative operator. The case when \(A\ge 0\) is not densely defined was considered by Ando and Nishio [5], and Coddington and de Snoo [22]. In the nonnegative case the formulas (46) can be found in [5, 22]. Notice that in view of (43) and (44) the minimal solution of the completion problem for a nonnegative block operator \(A^0\) can also be interpreted by means of the Kreĭn–von Neumann extension of the first column \(\mathrm{col\,}(A_{11},A_{21})\) in (2); cf. [7, Section 4], [39, Section 4.9].

6.4 Kreĭn’s uniqueness criterion

To establish a generalization of Kreĭn’s uniqueness criterion for the equality \(A_F=A_K\) in Theorem 8, i.e., for \(\mathrm{Ext\,}_{A,\kappa }(0,\infty )\) to consist of only one extension, we first derive some general facts on J-contractions by means of their commutation properties.

Let \({\mathfrak H}_1\) and \({\mathfrak H}_2\) be Hilbert spaces with symmetries \(J_1\) and \(J_2\), respectively, and let \(T\in [{\mathfrak H}_1,{\mathfrak H}_2]\) be a J-contraction, i.e., \(J_1-T^*J_2T\ge 0\). Let \(D_T\) and \(D_{T^*}\) be the corresponding defect operators and let \(J_T\) and \(J_{T^*}\) be their signature operators as defined in Sect. 4. The first lemma connects the kernels of the defect operators \(D_T\) and \(D_{T^*}\).

Lemma 7

Let \(T\in [{\mathfrak H}_1,{\mathfrak H}_2]\), let \(J_i\) be a symmetry in \({\mathfrak H}_i\), \(i=1,2\), and let \(D_T\) and \(D_{T^*}\) be the defect operators of T and \(T^*\), respectively. Then

$$\begin{aligned} J_2T(\mathrm{ker\,}D_{T})=\mathrm{ker\,}D_{T^*}, \quad T^*J_2(\mathrm{ker\,}D_{T^*})=\mathrm{ker\,}D_{T}. \end{aligned}$$
(48)

In particular,

$$\begin{aligned} \mathrm{ker\,}D_{T}=\{0\} \text{ if } \text{ and } \text{ only } \text{ if } \mathrm{ker\,}D_{T^*}=\{0\}. \end{aligned}$$

Proof

It suffices to show the first identity in (48). If \(\varphi \in \mathrm{ker\,}D_T=\mathrm{ker\,}J_TD_T^2\), then the second identity in (19) implies that \(J_2T \varphi \in \mathrm{ker\,}J_{T^*}D_{T^*}^2=\mathrm{ker\,}D_{T^*}\). Hence, \(J_2T(\mathrm{ker\,}D_{T}) \subset \mathrm{ker\,}D_{T^*}\). Conversely, let \(\varphi \in \mathrm{ker\,}D_{T^*}\). Then \(0=J_{T^*}D_{T^*}^2\varphi \) or, equivalently, \(\varphi = J_2TJ_1T^*\varphi \), and here \(J_1T^*\varphi \in \mathrm{ker\,}D_T\) by the first identity in (19). This proves the reverse inclusion.

Lemma 8

Let the notations be as in Lemma 7. Then

$$\begin{aligned} \mathrm{ran\,}T \cap \mathrm{ran\,}D_{T^*}=\mathrm{ran\,}TJ_1D_T=\mathrm{ran\,}D_{T^*}L_T, \end{aligned}$$

where \(L_T\) is the link operator defined in Corollary 2.

Proof

By the commutation formulas in Corollary 2 we have \(\mathrm{ran\,}TJ_1D_T=\mathrm{ran\,}D_{T^*}L_T \subset \mathrm{ran\,}T \cap \mathrm{ran\,}D_{T^*}\). Hence, it suffices to prove the inclusion

$$\begin{aligned} \mathrm{ran\,}T \cap \mathrm{ran\,}D_{T^*} \subset \mathrm{ran\,}TJ_1D_T. \end{aligned}$$

Suppose that \(\varphi \in \mathrm{ran\,}T \cap \mathrm{ran\,}D_{T^*}\). Then Corollary 2 shows that \(T^*J_2\varphi = D_T f\) for some \(f\in {\mathfrak D}_T\), while the second identity in (19) implies that

$$\begin{aligned} (J_2-TJ_1T^*)J_2\varphi =TJ_1D_Tg, \end{aligned}$$

for some \(g\in {\mathfrak D}_T\). Therefore,

$$\begin{aligned} \varphi =(J_2-TJ_1T^*)J_2\varphi +TJ_1T^*J_2\varphi =TJ_1D_Tg+TJ_1D_Tf=TJ_1D_T(g+f) \end{aligned}$$

and this completes the proof.

We can now characterize J-isometric operators \(T\in [{\mathfrak H}_1,{\mathfrak H}_2]\) as follows.

Proposition 3

With the notations as in Lemma 7 the following statements are equivalent:

  1. (i)

    T is J-isometric, i.e., \(T^*J_2T=J_1\);

  2. (ii)

    \(\mathrm{ker\,}T=\{0\}\) and \(\mathrm{ran\,}T \cap \mathrm{ran\,}D_{T^*} =\{0\}\);

  3. (iii)

    for some, and equivalently for every, subspace \({\mathfrak L}\) with \(\mathrm{ran\,}J_2T\subset \overline{{\mathfrak L}}\) one has

    $$\begin{aligned} \sup _{f\in {\mathfrak L}}\frac{|(f,T\varphi )|}{\Vert D_{T^*}f\Vert }=\infty \quad \text{ for } \text{ every } \varphi \in {\mathfrak H}_1\backslash \{0\}, \end{aligned}$$
    (49)

    i.e., there is no constant \(0\le C<\infty \) satisfying \(|(f,T\varphi )|\le C \Vert D_{T^*}f\Vert \) for every \(f\in {\mathfrak L}\), if \(\varphi \ne 0\).

Proof

(i) \(\Rightarrow \) (iii) Let \({\mathfrak L}\) be an arbitrary subspace with \(\mathrm{ran\,}J_2T\subset \overline{{\mathfrak L}}\). Assume that the supremum in (49) is finite for some \(\varphi =J_1\psi \in {\mathfrak H}_1\). Then there exists \(0\le C<\infty \), such that

$$\begin{aligned} |(f,TJ_1\psi )|\le C\Vert D_{T^*}f\Vert \quad \text{ for } \text{ every } f\in {\mathfrak L}. \end{aligned}$$

Since \(\mathrm{ran\,}J_2T\subset \overline{{\mathfrak L}}\) and T is J-isometric, also the following inequality holds:

$$\begin{aligned} \Vert \psi \Vert ^2=(J_1T^*J_2T\psi ,\psi ) \le C\Vert D_{T^*}J_2T\psi \Vert . \end{aligned}$$
(50)

By taking adjoints (and zero extension for \(L_{T^*}\)) in the second identity in Corollary 2 it is seen that \(D_{T^*}J_2T\psi =L_{T^*}^*D_T \psi =0\), since T is J-isometric. Hence (50) implies \(\varphi =J_1\psi =0\). Therefore (49) holds for every \(\varphi \ne 0\).

(iii) \(\Rightarrow \) (ii) Assume that (49) is satisfied with some subspace \({\mathfrak L}\). If (ii) does not hold, then either \(\mathrm{ker\,}T\ne \{0\}\), in which case (49) does not hold for \(0\ne \varphi \in \mathrm{ker\,}T\), or \(\mathrm{ran\,}T\cap \mathrm{ran\,}D_{T^*}\ne \{0\}\). However, then with \(0\ne T\varphi =D_{T^*}h\) the supremum in (49) is finite even if f varies over the whole space \({\mathfrak H}_2\). Thus, if (ii) does not hold then (49) fails to be true.

(ii) \(\Rightarrow \) (i) Let \(\mathrm{ran\,}T \cap \mathrm{ran\,}D_{T^*} =\{0\}\). Then by Lemma 8 one has \(TJ_1D_T=0\) and it follows from \(\mathrm{ker\,}{T}=\{0\}\) that \(D_T=0\), i.e., T is J-isometric. This completes the proof.
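In the Hilbert space case \(J_1=I_{{\mathfrak H}_1}\), \(J_2=I_{{\mathfrak H}_2}\), so that \(D_{T^*}=(I-TT^*)^{1/2}\), statement (ii) can be verified numerically for a simple isometric column. The following sketch is our illustration, not part of the original argument:

```python
import numpy as np

# Isometric column T : C^1 -> C^2 (Hilbert space case J_1 = I, J_2 = I).
theta = 0.7
T = np.array([[np.cos(theta)], [np.sin(theta)]])

# T is isometric: T^* T = I on C^1.
assert np.allclose(T.T @ T, np.eye(1))

# Defect operator of T^*: D_{T^*} = (I - T T^*)^{1/2}. Here I - T T^* is the
# orthogonal projection onto (ran T)^perp, hence equals its own square root.
DTs = np.eye(2) - T @ T.T
assert np.allclose(DTs @ DTs, DTs)

# Condition (ii) of Proposition 3 (with J_1 = J_2 = I):
# ker T = {0} ...
assert np.linalg.matrix_rank(T) == 1
# ... and ran T and ran D_{T^*} intersect trivially: the rank of the
# concatenated matrix [T | D_{T^*}] equals the sum of the individual ranks.
rank_sum = np.linalg.matrix_rank(T) + np.linalg.matrix_rank(DTs)
assert np.linalg.matrix_rank(np.hstack([T, DTs])) == rank_sum
```

Here \(\mathrm{ran\,}D_{T^*}\) is the orthogonal complement of \(\mathrm{ran\,}T\), which makes the trivial intersection in (ii) visible directly.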

After these preparations we are ready to prove the analog of Kreĭn’s uniqueness criterion for the equality \(T_{m}=T_{M}\) in the case of quasi-contractions appearing in Theorem 5.

Theorem 9

Let the Hilbert space \({\mathfrak H}\) be decomposed as \({\mathfrak H}={\mathfrak H}_1\oplus {\mathfrak H}_2\) and let \(T_1 \in [{\mathfrak H}_1,{\mathfrak H}]\) be a symmetric quasi-contraction satisfying the condition (28) in Theorem 5. Then \(T_{m}=T_{M}\) if and only if

$$\begin{aligned} \sup _{f\in {\mathfrak H}_1}\frac{|(T_1f,\varphi )|^2}{(|I-T_1^*T_1|f,f)}=\infty \quad \text{ for } \text{ every } \varphi \in {\mathfrak H}_2{\setminus } \{0\}. \end{aligned}$$
(51)

Proof

Let \(J=\mathrm{sign\,}(I-T_{11}^2)\). According to Theorem 5 there is \(V\in [{\mathfrak D}_{T_{11}},{\mathfrak H}_2]\), such that \(T_{21}=VD_{T_{11}}\); moreover, \(V^*\) is a J-contraction, i.e., \(I-VJV^*\ge 0\). This implies that

$$\begin{aligned} (T_1f,\varphi )=(T_{21}f,\varphi )=(D_{T_{11}}f,V^*\varphi ), \end{aligned}$$
(52)

and a direct calculation shows that

$$\begin{aligned} I-T^*_1T_1=I-T_{11}^2-T_{21}^*T_{21}=JD_{T_{11}}^2-D_{T_{11}}V^*VD_{T_{11}}= D_{T_{11}}D_VJ_V D_VD_{T_{11}}. \end{aligned}$$
(53)

By construction \(D_V\in [{\mathfrak D}_{T_{11}}]\) and therefore \(\mathrm{ran\,}D_VD_{T_{11}}\) is dense in \({\mathfrak D}_V=\mathrm{\overline{ran}\,}D_V\). Furthermore, since \(V^*\) is J-contractive it follows from Lemma 1 that \(\nu _-(J_V)=\nu _-(J)=\nu _-(I-T_{11}^2)\) and, therefore, the assumption (28) shows that \(\nu _-(J_V)=\nu _-(I-T_1^*T_1)\). Now according to Proposition 1 (ii) it follows from (53) that there is a unique J-unitary operator \(C\in [{\mathfrak D}_{T_1},{\mathfrak D}_V]\) such that \(D_VD_{T_{11}}=CD_{T_{1}}\).

In view of (33) \(T_m=T_M\) if and only if \(V^*\) is J-isometric. Since \(\mathrm{ran\,}JV^*\subset \overline{\mathrm{ran\,}}D_{T_{11}}\), the equivalence of (i) and (iii) in Proposition 3 shows that \(V^*\) is J-isometric precisely when \(T:=V^*\) satisfies the condition (49) with \({\mathfrak L}=\mathrm{ran\,}D_{T_{11}}\).

On the other hand, it follows from (53) and J-unitarity of \(C\in [{\mathfrak D}_{T_1},{\mathfrak D}_V]\) that

$$\begin{aligned} \Vert D_VD_{T_{11}}f\Vert \le \Vert C\Vert \, \Vert D_{T_1}f\Vert ,\quad \Vert D_{T_1}f\Vert \le \Vert C^{-1}\Vert \, \Vert D_VD_{T_{11}}f\Vert ,\quad f\in {\mathfrak H}_1. \end{aligned}$$

By combining this equivalence between the norms \(\Vert D_{T_1}f\Vert \) and \(\Vert D_VD_{T_{11}}f\Vert \) with the equality (52) one concludes that \(V^*\) satisfies the condition (49) precisely when \(T_1\) satisfies the condition (51).
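As a finite-dimensional sanity check of the criterion (our illustration, not from the original; \({\mathfrak H}_1={\mathfrak H}_2={\mathbb C}\), real entries, and \(|t_{11}|<1\) so that \(J=1\)): for a symmetric column \(T_1=(t_{11},t_{21})^\top\) the selfadjoint contractive completions \(\bigl({\begin{smallmatrix} t_{11} &{} t_{21}\\ t_{21} &{} x\end{smallmatrix}}\bigr)\) form an interval in x of length \(2(1-t_{21}^2/(1-t_{11}^2))=\Vert T_M-T_m\Vert \), which degenerates to a point exactly when \(1-t_{11}^2-t_{21}^2=0\), i.e., exactly when the supremum in (51) is infinite:

```python
import numpy as np

def completion_interval_length(t11, t21, grid=40001):
    """Length of the set of x for which [[t11, t21], [t21, x]] is a
    selfadjoint contraction (all eigenvalues in [-1, 1])."""
    xs = np.linspace(-1.0, 1.0, grid)
    ok = [x for x in xs
          if np.all(np.abs(np.linalg.eigvalsh(
              np.array([[t11, t21], [t21, x]]))) <= 1.0 + 1e-12)]
    return ok[-1] - ok[0] if ok else 0.0

t11, t21 = 0.3, 0.5
# Strict case 1 - t11^2 - t21^2 > 0: an interval of positive length,
# matching ||T_M - T_m|| = 2(1 - t21^2/(1 - t11^2)).
predicted = 2.0 * (1.0 - t21**2 / (1.0 - t11**2))
assert abs(completion_interval_length(t11, t21) - predicted) < 1e-3

# Isometric column 1 - t11^2 - t21^2 = 0: the denominator in (51) vanishes,
# the supremum is infinite, and the completion is unique (T_m = T_M).
t21 = np.sqrt(1.0 - t11**2)
assert completion_interval_length(t11, t21) < 1e-3
```

Solving the eigenvalue constraints directly gives the explicit interval \(-1+t_{21}^2/(1+t_{11})\le x\le 1-t_{21}^2/(1-t_{11})\), whose length is the value predicted above.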

Remark 3

In the case of a hermitian contraction acting in a Hilbert space the criterion in Theorem 9 was proved by Kreĭn [47].

As to the geometric interpretation of the condition in Theorem 9, observe that if the supremum (51) is finite for some \(\varphi \), then \(T_{21}^*\varphi \in \mathrm{ran\,}D_{T_1}\) (see e.g. [38, Corollary 2]) and as the proof shows \(D_{T_1}=D_{T_{11}}D_VC^{-*}\), which gives the equation \(D_{T_{11}}V^*\varphi =D_{T_{11}}D_VC^{-*}v\) for some v. Consequently, \(V^*\varphi =D_VC^{-*}v\) and hence again an application of Proposition 3 to \(V^*\), now using items (i) and (ii), shows that (51) is equivalent to \(V^*\) being J-isometric. Here (see (33))

$$\begin{aligned} T_M-T_m= \begin{pmatrix} 0&{}\quad 0\\ 0&{}\quad 2(I-VJV^*) \end{pmatrix}. \end{aligned}$$

Recall that the minimal and maximal extensions \(T_m\) and \(T_M\) of \(T_1\) are determined via the minimal solutions \(A_+=I+T_m=S_-^*J_-S_-\) and \(A_-=I-T_M=S_+^*J_+S_+\) to the completion problems (26), where

$$\begin{aligned} S_-=|I+T_{11}|^{[-1/2]}T_{21}^*,\quad S_+=|I-T_{11}|^{[-1/2]}T_{21}^*. \end{aligned}$$

Here \(Q_m:=S_-^*J_-S_-=V(I-T_{11})JV^*\) and \(Q_M:=S_+^*J_+S_+=V(I+T_{11})JV^*\) appear when calculating the generalized Schur complements of the block operators \(A_+\) and \(A_-\) using proper range inclusions; see Proposition 2 and (17). These two operators can be expressed either by limit values or by integrals as follows:

$$\begin{aligned} Q_m=T_{21}(I+T_{11})^{(-1)} T_{21}^*:= \lim _{\varepsilon \uparrow 1} T_{21}(I+\varepsilon T_{11})^{-1} T_{21}^* =\int _{-\Vert T_{11}\Vert }^{\Vert T_{11}\Vert }\frac{T_{21}dE_t T_{21}^*}{1+t}, \end{aligned}$$
$$\begin{aligned} Q_M=T_{21}(I-T_{11})^{(-1)} T_{21}^*:= \lim _{\varepsilon \uparrow 1} T_{21}(I-\varepsilon T_{11})^{-1} T_{21}^* =\int _{-\Vert T_{11}\Vert }^{\Vert T_{11}\Vert }\frac{T_{21}dE_t T_{21}^*}{1-t}, \end{aligned}$$

where \(\varepsilon \) is sufficiently close to 1 (to guarantee proper invertibility of the indicated inverses) and \(E_t\) stands for the spectral family of \(T_{11}\). With these notations the equality \(T_m=T_M\) can also be rewritten in the form \(Q_m-I=I-Q_M\), i.e. \(2I=Q_m+Q_M=2VJV^*\) or, equivalently,

$$\begin{aligned} \int _{-\Vert T_{11}\Vert }^{\Vert T_{11}\Vert }\frac{T_{21}dE_t T_{21}^*}{1-t^2}=I. \end{aligned}$$
(54)

In the special case of finite defect numbers (\(\mathrm{dim\,}(\mathrm{dom\,}T_1)^\perp <\infty \)) the condition (54) appears in Langer and Textorius [53, Theorem 2.8]. Notice that using the factorization \(T_{21}=VD_{T_{11}}\) and the formula \(I-T_{11}^2=JD_{T_{11}}^2\) the condition (54) can immediately be rewritten in the form \(VJV^*=I\).
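The rewriting of (54) just mentioned can be spelled out as a formal computation (our expansion, interpreting \((I-T_{11}^2)^{(-1)}\) as the regularized inverse defined before (54)):

```latex
$$\begin{aligned}
\int_{-\Vert T_{11}\Vert}^{\Vert T_{11}\Vert}\frac{T_{21}\,dE_t\, T_{21}^*}{1-t^2}
  &= T_{21}\,(I-T_{11}^2)^{(-1)}\,T_{21}^* \\
  &= V D_{T_{11}}\bigl(J D_{T_{11}}^2\bigr)^{(-1)} D_{T_{11}} V^*
   = V J V^*,
\end{aligned}$$
```

since \(J\) commutes with \(D_{T_{11}}\) and \(J^2=I\) on \({\mathfrak D}_{T_{11}}\); hence (54) states precisely that \(VJV^*=I\).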

The criterion in Theorem 9 can be translated to the situation of Theorem 8 via Cayley transform to get the analog of Kreĭn’s uniqueness criterion for the equality \(A_F=A_K\).

Corollary 5

Let A be a closed symmetric relation in \({\mathfrak H}\) satisfying the condition \(\nu _-(A)=\nu _-(a_1)<\infty \) in Theorem 8. Then the equality \(A_{F}=A_{K}\) holds if and only if the following condition is fulfilled:

$$\begin{aligned} \sup _{g\in {\mathfrak H}_1}\frac{|((A+I)^{-1}g,\varphi )|^2}{(|\widehat{A}|g,g)}=\infty \quad \text{ for } \text{ every } \varphi \in \mathrm{ker\,}(A^*+I){\setminus } \{0\}, \end{aligned}$$
(55)

where \(\widehat{A}=(I+A)^{-*}A(I+A)^{-1}\) is a bounded selfadjoint operator in \({\mathfrak H}_1=\mathrm{ran\,} (A+I)\).

Proof

Let \(T_1={\mathcal {C}}(A)\) so that \(\{f,f'\}\in A\) if and only if \(\{f+f',2f\}\in T_1+I\); see (35). Then with \(g=f+f'\in \mathrm{dom\,}T_1={\mathfrak H}_1\) and \(\varphi \in {\mathfrak H}_2=(\mathrm{dom\,}T_1)^\perp \) one has

$$\begin{aligned} (T_1g,\varphi )=((T_1+I)g,\varphi )=2((A+I)^{-1}g,\varphi ). \end{aligned}$$

Let \(A_s=P_sA\) be the operator part of A; here \(P_s\) stands for the orthogonal projection onto \((\mathrm{mul\,}A)^\perp \), where \(\mathrm{mul\,}A=(\mathrm{dom\,}A^*)^\perp =\mathrm{ker\,}(T_1+I)\). Then the form \(a(f,f)=(f',f)\) associated with A can be rewritten as \(a(f,f)=(A_sf,f)\), \(f\in \mathrm{dom\,}A\), and thus

$$\begin{aligned} ((I-T_1^*T_1)g,g)=4(f',f)=4(A_s(I+A)^{-1}g,(I+A)^{-1}g), \end{aligned}$$

where \(2(I+A)^{-1}=T_1+I\) is a bounded operator from \({\mathfrak H}_1\) into \({\mathfrak H}\). Then clearly \(\widehat{A}=(I+A)^{-*}A_s(I+A)^{-1}\) is a bounded selfadjoint operator in \({\mathfrak H}_1\) and, moreover, \(\nu _-(\widehat{A})=\nu _-(a)=\nu _-(I-T_1^*T_1)\); see Lemma 6. Thus, it follows from Proposition 1 that there is a J-unitary operator C from \(\mathrm{\overline{ran}\,}\widehat{A}\) into \({\mathfrak D}_{T_1}\) such that \(D_{T_1}=C|\widehat{A}|^{1/2}\). As in the proof of Theorem 9 this implies the equivalence of the conditions (51) and (55).

Observe that if A is nonnegative then with \(\{f,f'\}\in A\) and \(g=f+f'\in {\mathfrak H}_1\),

$$\begin{aligned} ((A+I)^{-1}g,\varphi )=(f,\varphi ), \quad (A_s(I+A)^{-1}g,(I+A)^{-1}g)=(A_sf,f), \end{aligned}$$

and, therefore, in this case the condition (55) can be rewritten as

$$\begin{aligned} \sup _{\{f,f'\}\in A} \frac{|(f,\varphi )|^2}{(f',f)}=\infty \quad \text{ for } \text{ every } \varphi \in \mathrm{ker\,}(A^*+I){\setminus } \{0\}, \end{aligned}$$

which is the criterion that was obtained for a densely defined operator A in [47] and, for a nonnegative relation A, can be found in [38, 39].