1 Introduction

The theory of reproducing kernel Hilbert spaces (RKHS) provides research tools in such domains as complex analysis, probability theory and statistics [2], stochastic (Gaussian) processes [15], quantum physics [16, 17] and computer science (especially artificial intelligence [9, 12]). Basic properties and definitions, together with a detailed analysis of RKHS, can be found in [19, 25]. In [1] RKHS associated with the continuous wavelet transform generated by the irreducible representations of the Euclidean motion group SE(2) are considered.

Groupoids are widely used in differential geometry, algebraic topology and physics. There is a well-known association of groupoids with \(C^*\)-algebras [7, 23]. The definition and main properties of (locally compact) groupoids can be found for example in [6, 18]. In [21, 22] unitary representations of groupoids are considered. Applications of the theory of finite groupoids and their representations are presented in [13, 14]. Concerning representations of locally compact groups, the Haar measure and the tensor product of Hilbert spaces, the Reader is referred to [26].

The aim of the present paper is to connect the above two domains. To this end, we study the relationship between unitary representations of groupoids and reproducing kernel Hilbert spaces, proposing how to construct one from the other provided certain conditions are met.

The content of the paper is as follows. In Section 2 the fundamental definitions and properties of the theory of reproducing kernel Hilbert spaces are introduced. Section 3 covers the concept of groupoids (propositions, examples and the notion of their unitary representation). The main results of the paper are contained in Section 4, which combines the ideas presented in the two previous sections. The constructions of reproducing kernels associated to a unitary representation of a group and to a unitary representation of a groupoid are described in Subsections 4.1 and 4.2, respectively. The latter subsection is concluded with several illustrative examples. Section 5 contains a brief discussion of possible applications of the notions studied in the article. In Section 6 the conclusions are presented.

2 Reproducing Kernel Hilbert Space

2.1 Basic Concepts and Properties

Definition 1

Let A be a nonempty set. The map \( K: A \times A \rightarrow \mathbb {C}\) shall be called a kernel on A. We say that the kernel K is positive definite if

$$\begin{aligned} \forall _{ n \in \mathbb {N}} \quad \forall _{ a_1,...,a_n \in A } \quad \forall _{ \lambda _1,...,\lambda _n \in \mathbb C} \qquad \sum ^{n}_{k=1} \sum ^{n}_{l=1}\lambda _k K(a_k, a_l) \bar{\lambda }_l \ge 0 . \end{aligned}$$

Let H be an inner product space of complex-valued functions on A equipped with the inner product \(\left\langle \cdot ,\cdot \right\rangle \).

Definition 2

The family \(\left\{ K_a \right\} _{a \in A} \subset H\) is called a reproducing family of H if

$$\begin{aligned} \forall _{ a \in A} \quad \forall _{ f \in H} \qquad f(a)= \left\langle f,K_a \right\rangle . \end{aligned}$$
(1)

Equality (1) is called the reproducing kernel property, and the function \(K: A \times A \rightarrow \mathbb {C}\) defined as

$$\begin{aligned} K(b,a):= \langle K_a, K_b \rangle = K_{a}(b), \quad a,b \in A \end{aligned}$$

is called a reproducing kernel on the space H. If the latter is a Hilbert space, we call H a reproducing kernel Hilbert space (RKHS).

Clearly, by the Riesz representation theorem, any RKHS has a unique reproducing family and thus a unique reproducing kernel, which can be easily shown to be positive definite. The following seminal result shows that actually the converse is also true: Every positive definite kernel is a reproducing kernel on a certain Hilbert space.

Theorem 1

[Moore–Aronszajn] Let K be a positive definite kernel on a non-empty set A. Then there is a unique Hilbert space H(K) of complex-valued functions on A with the reproducing kernel K.

Proof

(A sketch; for details, see [19, Theorem 2.14]) One considers the vector space \(H_{0}(K) := span \{K_a\}_{a \in A}\). The map \(\left\langle \cdot , \cdot \right\rangle _{H_{0}}: H_{0}(K) \times H_{0}(K) \rightarrow \mathbb {C}\) given by

$$\begin{aligned} \left\langle \sum ^{m}_{l=1} \lambda _l K_{a_l}, \sum ^{n}_{k=1} \beta _k K_{b_k} \right\rangle _{H_{0}} := \sum ^{n}_{k=1} \sum ^{m}_{l=1}\lambda _l \bar{\beta }_k K(b_k,a_l) \end{aligned}$$
(2)

can be demonstrated to be a well-defined inner product (more information on the correctness of such definitions can be found e.g. in [10]). Then one shows that the completion of \(H_0(K)\) in the norm induced by that inner product, denoted \(H(K) := \widetilde{H_0(K)} = \widetilde{span }\{K_a\}_{a \in A}\), can still be interpreted as a space of complex-valued functions on A, with K as its reproducing kernel. \(\square \)
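For readers who prefer a computational check, the following minimal Python/NumPy sketch realizes the inner product (2) and the reproducing property (1) in a finite toy setting; the matrix G and the coefficient vector below are arbitrary illustrative choices, not objects from the paper.

```python
import numpy as np

# Finite toy setting: A = {0, ..., n-1} and a positive definite kernel stored as a
# Hermitian positive semidefinite matrix G with G[b, a] = K(b, a) (arbitrary choice).
rng = np.random.default_rng(1)
n = 5
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
G = M @ M.conj().T

# An element f = sum_l lam[l] * K_{a_l} of H_0(K), stored through its coefficients.
lam = np.array([1.0, -2.0 + 1.0j, 0.5, 0.0, 3.0j])

def inner(lam, beta):
    # Formula (2): <sum_l lam_l K_{a_l}, sum_k beta_k K_{b_k}> = sum_{k,l} lam_l conj(beta_k) K(b_k, a_l)
    return np.sum(beta.conj()[:, None] * G * lam[None, :])

# Reproducing property (1): f(a) = <f, K_a>, since K_a has the coefficient vector e_a.
f_values = G @ lam                        # f(b) = sum_l lam_l K(b, a_l)
for a in range(n):
    assert np.isclose(inner(lam, np.eye(n)[a]), f_values[a])
print("reproducing property (1) verified for the toy kernel")
```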

In what follows, we shall be using the following general way of constructing positive definite kernels.

Theorem 2

Let \(F: A \rightarrow H\) be any function from a nonempty set A to a Hilbert space H equipped with an inner product \(\langle \cdot , \cdot \rangle \). Then:

  • The map \(K: A \times A \rightarrow \mathbb {C}\) defined as \(K(b,a) := \langle F(a), F(b) \rangle \) is a positive definite kernel.

  • By Theorem 1, there exists an RKHS \(H(K) := \widetilde{span }\left\{ K(\cdot ,a) \right\} _{a \in A}\), for which K is the reproducing kernel.

  • The linear map \(T: H \rightarrow H(K)\) defined via \((Tv)(a) := \langle v , F(a) \rangle \) is a surjective contraction. Moreover, it becomes an isometry when restricted to the closed subspace \(V := \overline{span }\, F(A)\) of the space H.

  • Defining \(S: H(K) \rightarrow H\) as \(S := T|_V^{-1}\), we obtain an isometric embedding of the RKHS H(K) into H that satisfies \(S(K_a) = F(a)\) for every \(a \in A\).

Proof

Cf. [25, p. 13]. \(\square \)

Notice that the third bullet of the above theorem carries a lot of information about the functions belonging to H(K). For example, if F is weakly continuous, then \(H(K) \subset C(A)\).
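The construction of Theorem 2 is equally easy to test numerically. In the sketch below (Python with NumPy) the set A, the feature map F and the vector v are arbitrary illustrative choices; the Gram matrix of \(K(b,a) := \langle F(a), F(b) \rangle \) is checked to be Hermitian positive semidefinite and an element \(Tv \in H(K)\) is tabulated.

```python
import numpy as np

# An arbitrary finite set A and a "feature map" F: A -> C^d playing the role of the
# Hilbert-space-valued map of Theorem 2 (both choices are purely illustrative).
A = np.linspace(0.0, 1.0, 7)
d = 4

def F(a):
    return np.array([np.exp(1j * (k + 1) * a) for k in range(d)])

def K(b, a):
    # K(b, a) := <F(a), F(b)>, the inner product being linear in its first argument.
    return np.vdot(F(b), F(a))            # np.vdot conjugates its first argument

# Positive definiteness of K amounts to the Gram matrix [K(a_k, a_l)]_{k,l}
# being Hermitian and positive semidefinite.
G = np.array([[K(b, a) for a in A] for b in A])
assert np.allclose(G, G.conj().T)
print("smallest eigenvalue of the Gram matrix:", np.linalg.eigvalsh(G).min())

# The contraction T of the third bullet: (Tv)(a) = <v, F(a)> defines a function on A.
v = np.array([1.0, 2.0j, -1.0, 0.5])
Tv = np.array([np.vdot(F(a), v) for a in A])
print("values of Tv on A:", np.round(Tv, 3))
```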

The (unique) reproducing kernel of a given RKHS turns out to be tightly related to the so-called Parseval frames, which can be thought of as generalizations of complete orthonormal systems of vectors.

Definition 3

(cf. Definition 2.6 & Proposition 2.8 in [19]) Let H be a Hilbert space (not necessarily of functions) with an inner product \(\langle \cdot , \cdot \rangle \). The set \(\{w_j\}_{j \in I} \subset H\) is called a Parseval frame for H if \(\Vert v\Vert ^2 = \sum _{j \in I} | \langle v, w_j \rangle |^2\) for every \(v \in H\) or, equivalently, if \(v = \sum _{j \in I} \langle v, w_j \rangle w_j\) for every \(v \in H\).

Theorem 3

(Papadakis) Let H be an RKHS of functions on A with reproducing kernel K. Then the family \(\left\{ \varphi _j \right\} _{j \in I} \subset H\) is a Parseval frame for H iff

$$\begin{aligned} K(b,a)= \sum _{j \in I} \varphi _j(b) \overline{\varphi _j(a)}, \quad a,b \in A, \end{aligned}$$
(3)

where the series converges pointwise.

Proof

See [19, Theorem 2.10]. \(\square \)

2.2 Reproducing Kernel Defined by the Convolution of Functions on a Locally Compact Group

Let G be a locally compact group (not necessarily abelian) and let \(\mu \) be a fixed left Haar measure on G. For two continuous compactly supported complex-valued functions \(f_{1}, f_{2} \in C_c(G)\), their convolution \(f_{1} *f_{2}\) is another such function on G defined via

$$\begin{aligned} (f_{1} *f_{2})(g) = \int _{G} f_{1}(\tau ) f_{2}(\tau ^{-1} g) d \mu (\tau ). \end{aligned}$$

The above definition extends to the space \(L^2(G, \mu )\) in the following sense [11, 444O & 444R]: for any \(f_1,f_2 \in L^2(G, \mu )\) the convolution \(f_1 *\tilde{f}_2\), where \(\tilde{f}_2(g) := \overline{f_2(g^{-1})}\), is a well-defined element of \(C_b(G)\) and, moreover, \(|(f_1 *\tilde{f}_2)(g)| \le \Vert f_1\Vert _{L^2} \Vert f_2\Vert _{L^2}\) for every \(g \in G\) (more information on the use of convolution can be found in e.g. [24]).

Example 1

Let G be a locally compact group and \(\mu \) be a left Haar measure on G. Fix \(f \in L^2(G, \mu )\) and consider the map \(K: G \times G \rightarrow \mathbb {C}\) defined via

$$\begin{aligned} K(h, g):= (f *\tilde{f})(g^{-1} h), \quad g, h \in G. \end{aligned}$$

By the preceding discussion, K is bounded and jointly continuous. Moreover, it is a positive definite kernel, because for any \(n \in \mathbb {N}\), \(\lambda _1,\ldots ,\lambda _n \in \mathbb {C}\) and \(g_1,\ldots ,g_n \in G\) one has that

$$\begin{aligned} \sum _{i,j=1}^{n} \lambda _i K(g_i,g_j) \overline{\lambda }_j&= \sum _{i,j=1}^{n} \lambda _i \overline{\lambda }_j \int _{G} f(\tau ) \tilde{f}( \tau ^{-1} g_j^{-1} g_i) d\mu (\tau ) \\&=\sum _{i,j=1}^{n} \lambda _i \overline{\lambda }_j \int _{G} f(\tau ) \overline{f ( g_i^{-1} g_j \tau )} d\mu (\tau ) \\&=\sum _{i,j=1}^{n} \lambda _i \overline{\lambda }_j \int _{G} f(g_j^{-1} \tau ) \overline{f ( g_i^{-1} \tau )} d\mu (\tau )\\&=\int _{G} \left| \sum _{j=1}^{n} \overline{\lambda }_j f (g_j^{-1} \tau ) \right| ^2 d\mu (\tau ) \ge 0, \end{aligned}$$

where in the penultimate step we employed the left-invariance of \(\mu \).

By Theorem 1, K is a reproducing kernel on a certain Hilbert space H(K) equipped with the inner product \(\left\langle \cdot ,\cdot \right\rangle _{H(K)}\) with the reproducing family \(\{K_g := K(\cdot , g)\}_{g \in G}\). In fact, H(K) is a certain space of bounded continuous functions on G, which can be isometrically embedded into \(L^2(G, \mu )\). To see why this is the case, let \(L_g: G \rightarrow G\), \(\xi \mapsto g\xi \) denote the left shift by the element \(g \in G\). Employing the pullback \(L_g^*f := f \circ L_g\), one can write that

$$\begin{aligned} K(h,g)&= \int _{G} f(\tau ) \tilde{f}( \tau ^{-1} g^{-1} h ) d\mu (\tau ) \nonumber \\&= \int _{G} f(g^{-1}\tau ) \overline{f (h^{-1}\tau ) } d\mu (\tau ) \nonumber \\&= \left\langle L_{g^{-1}}^*f, L_{h^{-1}}^*f \right\rangle _{L^2} \end{aligned}$$
(4)

for any \(g,h \in G\). Thus, the above construction is actually a special case of the one described in Theorem 2, with the map \(F: G \rightarrow L^2(G,\mu )\) defined as \(F(g) := L_{g^{-1}}^*f\) for every \(g \in G\). Hence the elements of H(K) are mappings of the form \(g \mapsto \langle v, L_{g^{-1}}^*f \rangle \) with \(v \in L^2(G,\mu )\), which are clearly bounded (by \(\Vert v\Vert _{L^2}\Vert f\Vert _{L^2}\)) and continuous, whereas the isometric embedding \(S: H(K) \rightarrow L^2(G,\mu )\) satisfies \(S(K_g) = L_{g^{-1}}^*f\).

As a side remark, observe that the above positive definite kernel K can also be expressed as

$$\begin{aligned} K(h,g)&= \int _{G} f(g^{-1}\tau ) \overline{f (h^{-1}\tau ) } d\mu (\tau )= \int _{G} \tilde{f}(\tau ^{-1}h) \overline{\tilde{f} (\tau ^{-1}g) } d\mu (\tau ) \\&= \int _{G} L_{\tau ^{-1}}^*\tilde{f}(h) \overline{L_{\tau ^{-1}}^*\tilde{f}(g)} d\mu (\tau ), \quad g, h \in G, \end{aligned}$$

which is analogous to formula (3), except that instead of a Parseval frame \(\left\{ \varphi _j \right\} _{j \in I}\) and a counting measure we have the family \(\{L_{\tau ^{-1}}^*\tilde{f}\}_{\tau \in G}\) of left translates of the map \(\tilde{f}\) and the left Haar measure \(\mu \).
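Example 1 can be tested on a finite group, for which the counting measure is a (left and right) Haar measure. The sketch below (Python with NumPy; the cyclic group \(\mathbb {Z}_n\) written additively and a randomly chosen f are illustrative choices) tabulates \(K(h,g) = (f *\tilde{f})(g^{-1}h)\), compares it with formula (4) and confirms positive definiteness.

```python
import numpy as np

n = 6                                     # the cyclic group Z_n, counting measure as Haar measure
rng = np.random.default_rng(0)
f = rng.normal(size=n) + 1j * rng.normal(size=n)

f_tilde = np.conj(f[(-np.arange(n)) % n])                 # f~(g) := conj(f(g^{-1}))

def conv(f1, f2):                                         # (f1 * f2)(g) = sum_t f1(t) f2(g - t)
    return np.array([sum(f1[t] * f2[(g - t) % n] for t in range(n)) for g in range(n)])

c = conv(f, f_tilde)

# K(h, g) := (f * f~)(g^{-1} h) = c[(h - g) mod n]
K = np.array([[c[(h - g) % n] for g in range(n)] for h in range(n)])

# Formula (4): K(h, g) = <L_{g^{-1}}^* f, L_{h^{-1}}^* f>, with (L_{g^{-1}}^* f)(t) = f(t - g).
K4 = np.array([[np.vdot(np.roll(f, h), np.roll(f, g)) for g in range(n)] for h in range(n)])
assert np.allclose(K, K4)

# Positive definiteness: the matrix [K(h, g)]_{h,g} is Hermitian positive semidefinite.
assert np.allclose(K, K.conj().T)
print("smallest eigenvalue:", np.linalg.eigvalsh(K).min())
```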

3 Groupoids

Groupoids provide a common generalization of groups [3] and of equivalence relations [18].

Definition 4

Let \(\Gamma ,X\) be nonempty sets. A groupoid \(\Gamma \) over a set X is a septuple \((\Gamma ,X, s,r, \varepsilon , \cdot , {}^{-1})\) consisting of the following mappings:

  1. the source mapping \(s : \Gamma \longrightarrow X\), which is surjective.

  2. the range mapping \(r : \Gamma \longrightarrow X\), which is surjective.

  3. the multiplication mapping \(\cdot : \Gamma ^{(2)} \longrightarrow \Gamma \), where \(\Gamma ^{(2)}:= \left\{ (\gamma _{1}, \gamma _{2}) \in \Gamma \times \Gamma \ | \ r(\gamma _{2})=s(\gamma _{1})\right\} \). The multiplication is associative. For simplicity, instead of \(\gamma _{1} \cdot \gamma _{2}\) we write \(\gamma _{1} \gamma _{2}.\)

  4. the embedding map \(\varepsilon : X \longrightarrow \Gamma \) fulfilling

    $$\begin{aligned} \varepsilon (r(\gamma ))\gamma = \gamma =\gamma \varepsilon (s(\gamma )). \end{aligned}$$

  5. the inversion map \(^{-1} : \Gamma \longrightarrow \Gamma \) such that

    $$\begin{aligned} \forall _{\gamma \in \Gamma } \quad \gamma ^{-1} \gamma =\varepsilon (s(\gamma )) \quad \text {and} \quad \gamma \gamma ^{-1} =\varepsilon (r(\gamma )).\end{aligned}$$

The element \(\gamma \in \Gamma \) can be regarded as an arrow with the starting point at \(x=s(\gamma )\) and the ending point at \(y=r(\gamma )\), where \(x,y\in X\).

Remark 1

From the above definition it follows that the groupoid \((\Gamma ,X, s,r, \varepsilon , \cdot , {}^{-1})\) can be identified with a small category in which all morphisms are invertible. In this interpretation X is the set of objects, \(\Gamma \) is the set of morphisms, \(s(\gamma )\) is the domain of \(\gamma \), \(r(\gamma )\) is the codomain of \(\gamma \), \(\varepsilon (x)\) is the identity morphism at x, \(\cdot \) is the composition of morphisms and \(\gamma ^{-1}\) is the inverse of the morphism \(\gamma \in \Gamma .\)
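Definition 4 is straightforward to encode and test for finite groupoids. The following Python sketch uses the pair groupoid over a three-element set (an illustrative choice; an arrow from x to y is written as the pair (y, x)) and checks the axioms directly.

```python
from itertools import product

# Pair groupoid over X = {0, 1, 2}: an arrow from x to y is the pair (y, x).
X = [0, 1, 2]
Gamma = [(y, x) for y, x in product(X, X)]

s = lambda g: g[1]                        # source
r = lambda g: g[0]                        # range
eps = lambda x: (x, x)                    # embedding of units
inv = lambda g: (g[1], g[0])              # inversion

def mul(g1, g2):
    # defined only on Gamma^(2), i.e. when r(g2) == s(g1)
    assert r(g2) == s(g1)
    return (r(g1), s(g2))

# Check the axioms of Definition 4.
for g in Gamma:
    assert mul(eps(r(g)), g) == g == mul(g, eps(s(g)))                 # units
    assert mul(inv(g), g) == eps(s(g)) and mul(g, inv(g)) == eps(r(g)) # inverses
for g1, g2, g3 in product(Gamma, Gamma, Gamma):
    if r(g2) == s(g1) and r(g3) == s(g2):
        assert mul(mul(g1, g2), g3) == mul(g1, mul(g2, g3))            # associativity
print("pair groupoid over", X, "satisfies the groupoid axioms")
```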

Example 2

A group G is a groupoid over the singleton \(X:=\left\{ x\right\} \), with the mappings defined as \(s(g):=x\) and \(r(g):=x\) for every \(g \in G\), and \(\varepsilon (x) := e\), the unit of the group G. The inversion and multiplication mappings are given by the respective group operations.

Let \(\Gamma \) be a groupoid over the set X. It is easy to show that the relation \(\sim \) on the set X defined as follows:

$$\begin{aligned} \forall _{x,y \in X} \quad x \sim y \quad \Leftrightarrow \quad \exists _{\gamma \in \Gamma } \quad x=s(\gamma ) \ \ \wedge \ \ y=r(\gamma ) \end{aligned}$$

is an equivalence relation on X. The equivalence class of any element x with respect to \(\sim \) is the set

$$\begin{aligned} \text {Or}_{x} := \left[ x\right] _{\sim } = \left\{ y \in X \ | \ \exists _{\gamma \in \Gamma } \ \ s(\gamma )=x \ \wedge \ r(\gamma )=y \right\} = r(\Gamma _{x}), \end{aligned}$$

where \(\Gamma _{x} := \left\{ \gamma \in \Gamma \ | \ s(\gamma )=x\right\} =s^{-1}(x).\)

We call \(\text {Or}_{x}\) the orbit of the element \(x \in X\) with respect to the groupoid \(\Gamma .\) We say that \(\Gamma \) is transitive if for any \(x,y \in X\) we have \(x \sim y\) (or, equivalently, if \(\text {Or}_{x}=X\) for some and hence for every \(x \in X\)).

Defining the fibers of the groupoid \(\Gamma \) as \(\Gamma _{x} := s^{-1}(x)\) and \(\Gamma ^{x}:=r^{-1}(x)\) and using the above definition of the equivalence relation \(\sim \), we have \(r(s^{-1}(x))= s(r^{-1}(x))\).

The set of arrows starting at x and ending at y is denoted by \(\Gamma _{x}^{y}:=\Gamma _{x} \cap \Gamma ^{y}.\) The set \(\Gamma _{x}^{x}:=\Gamma _{x} \cap \Gamma ^{x}=\left\{ \gamma \in \Gamma \ | \ r(\gamma )= s(\gamma )=x \right\} \) of elements starting and ending at x together with the groupoid multiplication and inversion map has a group structure, and we call it the isotropy group of the element x.

Remark 2

If \(\Gamma \) is a transitive groupoid over X then it is easy to show that for any \(x,y \in X\) the groups \(\Gamma _{x}^{x}\) and \(\Gamma _{y}^{y}\) are isomorphic (albeit not canonically!). Without the transitivity, however, it may happen that for some \(x \ne y\) the isotropy groups \(\Gamma _{x}^{x}\) and \(\Gamma _{y}^{y}\) are not isomorphic.

Remark 3

If \((\Gamma ,X, s,r, \varepsilon , \cdot , {}^{-1})\) is a groupoid and \(\Gamma , X\) are topological spaces then we call \(\Gamma \) a topological groupoid if all maps \(s,r, \varepsilon , \cdot , ^{-1}\) are continuous. In this case the maps \(\varepsilon , ^{-1}\) are homeomorphisms onto their images and all isotropy groups are topological groups.

3.1 Unitary Representation of a Groupoid

Recall that a unitary representation of a group G is a pair \((\mathcal {U},H)\), where H is a Hilbert space and \(\mathcal {U}\) is a map assigning to every \(g \in G\) a unitary operator \(\mathcal {U}(g): H \rightarrow H\) satisfying \(\mathcal {U}(g_1g_2) = \mathcal {U}(g_1)\mathcal {U}(g_2)\) for all \(g_1,g_2 \in G\). A standard example of a unitary representation of a (locally compact) group is the left regular representation \(\mathcal {U}(g) := L^{*}_{g^{-1}}: L^2(G,\mu ) \rightarrow L^2(G,\mu )\), which appeared in Example 1 above.

The notion of a unitary representation can be extended to groupoids.

Definition 5

Let \(\Gamma \) be a groupoid over the set X. A pair \((\mathcal {U},\mathbb {H})\) is called a unitary representation of the groupoid \(\Gamma \) if \(\mathbb {H} := \{H_x\}_{x \in X}\) is a family of Hilbert spaces and \(\mathcal {U}\) is a mapping assigning to each \(\gamma \in \Gamma \) a unitary transformation \(\mathcal {U}(\gamma ): H_{s(\gamma )} \rightarrow H_{r(\gamma )}\) in such a way that \(\mathcal {U}(\gamma _1 \gamma _2) = \mathcal {U}(\gamma _1)\mathcal {U}(\gamma _2)\) for every \((\gamma _1, \gamma _2) \in \Gamma ^{(2)}\).

Notice that from the above definition it already follows that

  • \(\mathcal {U}(\varepsilon (x)) = id _{H_x}\) for every \(x \in X\).

  • \(\mathcal {U}(\gamma ^{-1}) = \mathcal {U}(\gamma )^{-1} = \mathcal {U}(\gamma )^\dag \) for every \(\gamma \in \Gamma \).

From the family \(\mathbb {H} := \{H_x\}_{x \in X}\) one can construct a new Hilbert space \(\mathcal {H} := \widetilde{\bigoplus }_{x \in X} H_x\) defined as the completion of the direct sum \(\bigoplus _{x \in X} H_x\) with respect to the norm generated by the inner product \(\sum _{x \in X} \langle \cdot , \cdot \rangle _{x}\), where \(\langle \cdot , \cdot \rangle _{x}\) denotes the inner product on \(H_x\).

Many authors [8, 18, 22] use a more general and somewhat more involved definition of a unitary representation of a groupoid, in which \(\mathbb {H}\) is a Hilbert bundle or a measurable field of complex Hilbert spaces. This allows for constructing many more interesting Hilbert spaces from the fibers \(H_x\) than just the completed direct sum \(\mathcal {H}\) above (see also [10]). In the present paper, however, we keep the definition simple so that the constructions of the positive definite kernels and RKHS become more tractable.

Example 3

Let \(\Gamma \) be a groupoid over X and H be a fixed Hilbert space equipped with an inner product \(\langle \cdot ,\cdot \rangle \). Consider the family \(\{H_x := \{x\} \times H\}_{x \in X}\) with \(H_x\) endowed with the inner product \(\langle (x,h_1), (x,h_2) \rangle _x := \langle h_1,h_2 \rangle \). For any \(\gamma \in \Gamma _{x}^{y}\), \(x,y \in X\) the transformation \(\mathcal {U}(\gamma ): H_{x} \rightarrow H_{y}\) defined by \(\mathcal {U} (\gamma )(x,h):=(y,h)\) (change of the fiber base point) is of course unitary. Any such representation of the groupoid \(\Gamma \) is called a trivial representation.

Less trivial standard examples of groupoid representations arise for locally compact topological groupoids. Observe that for any such groupoid \(\Gamma \), the fibers \(\Gamma ^{x}:=r^{-1}(x)\) and \(\Gamma _{x}:=s^{-1}(x)\) are locally compact spaces for every \(x \in X\), and so is the base space X itself. On such groupoids one introduces the following generalization of Haar measures [18, 22].

Definition 6

Let \(\Gamma \) be a locally compact topological groupoid. The family \(\left\{ \lambda ^{x} \right\} _{x \in X}\) of regular Borel measures on \(\Gamma ^x\) is called a left Haar system for the groupoid \(\Gamma \) if

  1. 1.

    For every \(f \in C_{c}(\Gamma )\) the function \(f^0: X \rightarrow \mathbb {C}\) given by \(f^0(x):=\int _{\Gamma ^{x}} f d \lambda ^{x}\) is continuous,

  2. 2.

    For every \(\gamma \in \Gamma \) and every \(f \in C_{c}(\Gamma )\)

    $$\begin{aligned} \int _{\Gamma ^{s(\gamma )}} f (\gamma \chi ) d \lambda ^{s(\gamma )}(\chi )= \int _{\Gamma ^{r(\gamma )}} f (\chi ) d \lambda ^{r(\gamma )}(\chi ). \end{aligned}$$

Similarly, the family \(\left\{ \lambda _{x} \right\} _{x \in X}\) of regular Borel measures on \(\Gamma _x\) is called a right Haar system for the groupoid \(\Gamma \) if

1.\(^\prime \):

For every \(f \in C_{c}(\Gamma )\) the function \(f_0: X \rightarrow \mathbb {C}\) given by \(f_0(x):=\int _{\Gamma _{x}} f d \lambda _{x}\) is continuous,

2.\(^\prime \):

For every \(\gamma \in \Gamma \) and every \(f \in C_{c}(\Gamma )\)

$$\begin{aligned} \int _{\Gamma _{r(\gamma )}} f ( \chi \gamma ) d \lambda _{r(\gamma )}(\chi )= \int _{\Gamma _{s(\gamma )}} f (\chi ) d \lambda _{s(\gamma )}(\chi ). \end{aligned}$$

Observe that both \(f^0\) and \(f_0\) are automatically compactly supported. Conditions 1. and 1.\(^\prime \) express the demand for the measures \(\lambda ^x\) and \(\lambda _x\) to vary continuously over X, whereas conditions 2. and 2.\(^\prime \) generalize the notions of, respectively, left and right invariance of Haar measures on locally compact groups.
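For a finite (discrete) groupoid the counting measures on the fibers \(\Gamma ^x\) always form a left Haar system: condition 1 is trivial, and condition 2 can be verified mechanically, as in the following Python sketch (again the pair groupoid over a three-element set, with an arbitrary test function; both choices are purely illustrative).

```python
from itertools import product

# Pair groupoid over X = {0, 1, 2} (arrows (y, x): x -> y), as in the earlier sketch.
X = [0, 1, 2]
Gamma = [(y, x) for y, x in product(X, X)]
s, r = (lambda g: g[1]), (lambda g: g[0])
mul = lambda g1, g2: (g1[0], g2[1])                 # defined when r(g2) == s(g1)

# Left Haar system: lambda^x = counting measure on Gamma^x = r^{-1}(x).
fiber_r = lambda x: [g for g in Gamma if r(g) == x]

def f(g):                                           # an arbitrary test function on Gamma
    return complex(3 * g[0] - g[1]) ** 2 + 1j * g[1]

# Condition 2 of Definition 6:
#   sum_{chi in Gamma^{s(gamma)}} f(gamma chi) = sum_{chi in Gamma^{r(gamma)}} f(chi).
for gamma in Gamma:
    lhs = sum(f(mul(gamma, chi)) for chi in fiber_r(s(gamma)))
    rhs = sum(f(chi) for chi in fiber_r(r(gamma)))
    assert abs(lhs - rhs) < 1e-12
print("counting measures on the fibers form a left Haar system")
```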

Example 4

For any \(x \in X\) let \(H_{x} = L^{2}(\Gamma ^{x}, \lambda ^{x})\), where \(\left\{ \lambda ^{x} \right\} _{x \in X} \) is a left Haar system on \(\Gamma \). For any \(\gamma \in \Gamma _{x}^{y}\) and for \(f \in H_{x}\) define a unitary transformation \(\mathcal {U} (\gamma ): H_{x} \rightarrow H_{y}\) by

$$\begin{aligned} ( \mathcal {U} (\gamma ) f ) (\chi ) = f( \gamma ^{-1} \chi ), \quad \chi \in \Gamma ^{y}. \end{aligned}$$

Such a representation \((\mathcal {U}, \{H_x\}_{x \in X})\) is called the left regular representation of the groupoid \(\Gamma \).

Example 5

For any \(x \in X\) let \(H_{x} = L^{2}(\Gamma _{x}, \lambda _{x})\), where \(\left\{ \lambda _{x} \right\} _{x \in X} \) is a right Haar system on \( \Gamma \). For any \(\gamma \in \Gamma _{x}^{y}\) and for \(f \in H_{x}\) define a unitary transformation \(\mathcal {U} (\gamma ): H_{x} \rightarrow H_{y}\) by

$$( \mathcal {U} (\gamma ) f ) (\chi ) = f( \chi \gamma ), \quad \chi \in \Gamma _{y}.$$

Such a representation \((\mathcal {U}, \{H_x\}_{x \in X})\) is called the right regular representation of the groupoid \(\Gamma \).

For more examples of unitary representations of groupoids, the Reader is referred to [13, 22].

4 Reproducing Kernels and Unitary Representations

4.1 Reproducing Kernel Associated to a Unitary Representation of a Group

Let G be a group and \((\mathcal {U}, H)\) be its unitary representation on a Hilbert space H equipped with an inner product \(\langle \cdot , \cdot \rangle \). Additionally, let \(v \in H\) be any fixed vector. Formula (4) in Example 1 suggests considering the kernel \(K: G \times G \rightarrow \mathbb {C}\) defined as

$$\begin{aligned} K(h,g) := \langle \mathcal {U}(g)v, \mathcal {U}(h)v \rangle . \end{aligned}$$
(5)

Notice that thus defined K is a special case of the construction presented in Theorem 2 with the map \(F: G \rightarrow H\) given by \(F(g) := \mathcal {U}(g)v\), \(g \in G\). This means, in particular, that the RKHS H(K) provided by the Moore–Aronszajn theorem is a space of functions of the form \(g \mapsto \langle w, \mathcal {U}(g)v \rangle \), where \(w \in H\). Notice that, if \(\mathcal {U}\) is a weakly continuous unitary representation of a topological group G, then \(H(K) \subset C(G)\). Moreover, if \(\{w_j\}_{j \in I} \subset H\) is a Parseval frame for H, then

$$\begin{aligned} K(h,g) = \langle \mathcal {U}(g)v, \mathcal {U}(h)v \rangle = \sum _{j \in I} \langle \mathcal {U}(g)v, w_j \rangle \langle w_j, \mathcal {U}(h)v \rangle = \sum _{j \in I} \varphi _j(h) \overline{\varphi _j(g)}, \end{aligned}$$

where \(\varphi _j := Tw_j \in H(K)\) (cf. the third bullet in Theorem 2). On the strength of Theorem 3, \(\{\varphi _j\}_{j \in I}\) is a Parseval frame for H(K).

By the unitarity of the representation, the kernel K satisfies

$$\begin{aligned} K(h,g) = K(g^{-1}h,e)\quad \text {for any } g,h \in G, \end{aligned}$$
(6)

where e is the unit element of G.

Conversely, suppose we are given a positive definite kernel \(K: G \times G \rightarrow \mathbb {C}\) on a group G satisfying (6). We can employ the Moore–Aronszajn theorem to define its representation \((\mathcal {U},H(K))\). Concretely, for any \(g \in G\) define \(\mathcal {U}(g)\sum _{i=1}^n \lambda _i K_{h_i} := \sum _{i=1}^n \lambda _i K_{gh_i}\) for any element of \(H_0(K) := span \{K_h\}_{h \in G}\). Since there is no guarantee that the system \(\{K_h\}_{h \in G}\) is linearly independent, we must check that such a \(\mathcal {U}(g)\) is well defined. To this end, it suffices to prove that if \(\sum _{i=1}^n \lambda _i K_{h_i} \equiv 0\) on G, then also \(\sum _{i=1}^n \lambda _i K_{gh_i} \equiv 0\). But thanks to (6) we have that, for any \(h \in G\),

$$\begin{aligned} \sum _{i=1}^n \lambda _i K_{gh_i}(h) = \sum _{i=1}^n \lambda _i K_{h_i}(g^{-1}h) = 0 \end{aligned}$$

by assumption (see also [10]). Since \(H_0(K)\) is dense in H(K), thus defined \(\mathcal {U}(g)\) can be uniquely extended to H(K). Moreover, also by (6) and by the density argument, one obtains the unitarity of \(\mathcal {U}(g)\) by verifying that for any \(h,h' \in G\)

$$\begin{aligned} \langle \mathcal {U}(g)K_h, \mathcal {U}(g)K_{h'} \rangle _{H(K)}&= \langle K_{gh}, K_{gh'} \rangle _{H(K)} \\&= K(gh',gh) = K((gh)^{-1}gh',e) \\&= K(h^{-1}g^{-1}gh',e) = K(h^{-1}h',e) = K(h',h) \\&= \langle K_{h}, K_{h'} \rangle _{H(K)}. \end{aligned}$$

Observe, finally, that one can retrieve the kernel K from the above ‘Moore–Aronszajn representation’ through formula (5). Indeed, one simply has to take \(v := K_e\).
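The two directions discussed above can be illustrated on a finite group. In the sketch below (Python with NumPy; the group \(\mathbb {Z}_n\), its regular representation by cyclic shifts and the vector v are our illustrative choices) the kernel (5) is tabulated and the invariance property (6) is verified.

```python
import numpy as np

n = 5                                         # the group G = Z_n
rng = np.random.default_rng(3)
v = rng.normal(size=n) + 1j * rng.normal(size=n)

# Left regular representation of Z_n on C^n: U(g) is the cyclic shift by g.
P = np.roll(np.eye(n), 1, axis=0)             # shift by one position
U = [np.linalg.matrix_power(P, g) for g in range(n)]
assert all(np.allclose(U[(g + h) % n], U[g] @ U[h]) for g in range(n) for h in range(n))

# Formula (5): K(h, g) := <U(g)v, U(h)v>, the inner product linear in its first argument.
K = np.array([[np.vdot(U[h] @ v, U[g] @ v) for g in range(n)] for h in range(n)])

# Property (6): K(h, g) = K(g^{-1} h, e).
for g in range(n):
    for h in range(n):
        assert np.isclose(K[h, g], K[(h - g) % n, 0])

# K is a positive definite kernel: its Gram matrix is Hermitian positive semidefinite.
assert np.allclose(K, K.conj().T)
assert np.linalg.eigvalsh(K).min() > -1e-10
print("kernel (5) for the regular representation of Z_%d satisfies (6)" % n)
```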

4.2 Reproducing Kernel Associated to a Unitary Representation of a Groupoid

Let us now generalize the above relationship between unitary representations of groups and reproducing kernels to the groupoid setting. Thus, let \(\Gamma \) be a groupoid over X and \((\mathcal {U}, \mathbb {H} = \{H_x\}_{x \in X})\) be its unitary representation as specified by Definition 5. Additionally, let v be a fixed vector field, by which we shall understand a mapping \(X \ni x \mapsto v(x) \in H_x\) (note: vector fields need not belong to \(\mathcal {H} := \widetilde{\bigoplus }_{x \in X} H_x\)). Define the kernel \(K: \Gamma \times \Gamma \rightarrow \mathbb {C}\) via

$$\begin{aligned} K(\chi , \gamma ) := \left\{ \begin{array}{ll} \left\langle \mathcal {U}(\gamma )v(s(\gamma )), \mathcal {U}(\chi )v(s(\chi )) \right\rangle _{r(\gamma )} &{}\quad \text {for } r(\gamma ) = r(\chi ),\\ 0 &{}\quad \text {for } r(\gamma ) \ne r(\chi ). \end{array} \right. \end{aligned}$$
(7)

Observe that for each \(x \in X\) the restriction \(K^x := K|_{\Gamma ^x \times \Gamma ^x}\) is a positive definite kernel — it constitutes a special case of the construction presented in Theorem 2 with \(F^x: \Gamma ^x \rightarrow H_x\) given by \(F^x(\gamma ) := \mathcal {U}(\gamma )v(s(\gamma ))\) for every \(\gamma \in \Gamma ^x\). Invoking the Moore–Aronszajn theorem (Theorem 1), we obtain an RKHS given by \(H(K^x) := \widetilde{span }\left\{ K^x_\gamma := K^x(\cdot ,\gamma ) \right\} _{\gamma \in \Gamma ^x}\), whose reproducing kernel is \(K^x\). By the third bullet of Theorem 2, every element of \(H(K^x)\) is a function on \(\Gamma ^x\) of the form \(\gamma \mapsto \langle w_x, \mathcal {U}(\gamma )v(s(\gamma )) \rangle \), where \(w_x \in H_x\). Finally, by the fourth bullet of Theorem 2, for any \(x \in X\) the Hilbert space \(H(K^x)\) can be isometrically embedded into \(H_x\), its image being the closed subspace \(V^x := \overline{span } \{\mathcal {U}(\gamma )v(s(\gamma ))\}_{\gamma \in \Gamma ^x}\).

The kernel K itself is also a realization of the general construction described in Theorem 2. To see this, consider \(F: \Gamma \rightarrow \mathcal {H}\) given by \(F(\gamma ) := \mathcal {U}(\gamma )v(s(\gamma ))\) and observe that \(\langle F(\gamma ), F(\chi ) \rangle _{\mathcal {H}} := \sum _{x \in X} \langle F(\gamma ), F(\chi ) \rangle _x\) indeed equals \(K(\chi ,\gamma )\) for all \(\gamma , \chi \in \Gamma \), because \(H_x \bot H_y\) as subspaces of \(\mathcal {H}\) for \(x \ne y\). Therefore, K is positive definite and the Moore–Aronszajn theorem yields an RKHS defined as \(H(K) := \widetilde{span }\left\{ K_\gamma := K(\cdot ,\gamma ) \right\} _{\gamma \in \Gamma }\).

Notice that the above two constructions of RKHS are compatible in the sense that

$$\begin{aligned} H(K) = \widetilde{\bigoplus }_{x \in X} H(K^x). \end{aligned}$$
(8)

Indeed, observe that both spaces have the same dense subspaces, namely (cf. the sketch of the proof of Theorem 1 above)

$$\begin{aligned} H_0(K)&:= span \left\{ K_\gamma \right\} _{\gamma \in \Gamma }\\&= \bigoplus _{x \in X} span \left\{ K_\gamma \right\} _{\gamma \in \Gamma ^x} \\&= \bigoplus _{x \in X} span \left\{ K^x_\gamma \right\} _{\gamma \in \Gamma ^x} =: \bigoplus _{x \in X} H_0(K^x), \end{aligned}$$

where the maps \(K_\gamma : \Gamma \rightarrow \mathbb {C}\) and \(K^x_\gamma : \Gamma ^x \rightarrow \mathbb {C}\) have been identified (the former being the extension by zero of the latter). Since both above spaces are equipped with the same inner product (given by (2)), they yield the same Hilbert spaces after completion (see also [10]).

Similarly as before, by the third bullet of Theorem 2, every element of H(K) is a function on \(\Gamma \) of the form \(\gamma \mapsto \langle w, \mathcal {U}(\gamma )v(s(\gamma )) \rangle \), where \(w \in \mathcal {H}\). Notice that such functions must vanish on all but countably many fibers \(\Gamma ^x\). Moreover, if \(\{w_j\}_{j \in I} \subset \mathcal {H}\) is a Parseval frame for \(\mathcal {H}\), then

$$\begin{aligned} K(\chi ,\gamma )&= \langle \mathcal {U}(\gamma )v(s(\gamma )), \mathcal {U}(\chi )v(s(\chi )) \rangle _{\mathcal {H}} \\&\quad = \sum _{j \in I} \langle \mathcal {U}(\gamma )v(s(\gamma )), w_j \rangle _{\mathcal {H}} \langle w_j, \mathcal {U}(\chi )v(s(\chi )) \rangle _{\mathcal {H}} \\&\quad = \sum _{j \in I} \varphi _j(\chi ) \overline{\varphi _j(\gamma )}, \end{aligned}$$

where \(\varphi _j := \langle w_j, \mathcal {U}(\cdot )v(s(\cdot )) \rangle _{\mathcal {H}}\). Similarly as in the group case, by Theorem 3 we obtain that \(\{\varphi _j\}_{j \in I}\) is a Parseval frame for H(K).

Finally, by the fourth bullet of Theorem 2, the Hilbert space H(K) can be isometrically embedded into \(\mathcal {H}\), its image being the closed subspace \(V := \overline{span } \{\mathcal {U}(\gamma )v(s(\gamma ))\}_{\gamma \in \Gamma } = \overline{\bigoplus }_{x \in X} V^x\), where the last equality can be proven completely analogously to (8).

Additionally, the unitarity of the representation implies that for \(\gamma , \chi \in \Gamma \) such that \(r(\gamma ) = r(\chi )\)

$$\begin{aligned} K(\chi ,\gamma ) = K(\gamma ^{-1}\chi , \varepsilon (r(\gamma ^{-1}\chi ))), \end{aligned}$$
(9)

which is nothing but a straightforward generalization of (6). Indeed, one has

$$\begin{aligned} K(\chi ,\gamma )&= \left\langle \mathcal {U}(\gamma )v(s(\gamma )), \mathcal {U}(\chi )v(s(\chi )) \right\rangle _{r(\gamma )} \\&= \left\langle v(s(\gamma )), \mathcal {U}(\gamma ^{-1}\chi ) v(s(\chi )) \right\rangle _{s(\gamma )} \\&= K(\gamma ^{-1}\chi , \varepsilon (s(\gamma ))) = K(\gamma ^{-1}\chi , \varepsilon (r(\gamma ^{-1}\chi ))). \end{aligned}$$

Let us now consider the converse problem. That is, given a groupoid \(\Gamma \) over X and a positive definite kernel \(K: \Gamma \times \Gamma \rightarrow \mathbb {C}\) such that \(K(\gamma , \chi ) = 0\) if \(r(\gamma ) \ne r(\chi )\) and (9) holds, we construct the ‘Moore–Aronszajn representation’ \((\mathcal {U}, \{H(K^x)\}_{x \in X})\) of \(\Gamma \). To this end, define each \(\mathcal {U}(\gamma ): H(K^{s(\gamma )}) \rightarrow H(K^{r(\gamma )})\) first on the dense subspace \(H_0(K^{s(\gamma )}) := span \{K_\chi \}_{\chi \in \Gamma ^{s(\gamma )}}\) by

$$\begin{aligned} \mathcal {U}(\gamma )\sum _{i = 1}^n \lambda _i K_{\chi _i} := \sum _{i = 1}^n \lambda _i K_{\gamma \chi _i}, \end{aligned}$$

where, similarly as for groups, one can easily check that this definition is sound (analogously to the group case, it is here that property (9) steps in). Observe that \(r(\chi _i) = s(\gamma )\) for all \(i=1,\ldots ,n\), so the products \(\gamma \chi _i\) are all well defined. It is also straightforward to prove (again, thanks to (9)) that thus defined \(\mathcal {U}(\gamma )\) preserves inner products. Extending it to the entire \(H(K^{s(\gamma )})\), by the arbitrariness of \(\gamma \) we obtain the desired unitary representation of \(\Gamma \).

Finally, notice that the kernel K can be retrieved from the ‘Moore–Aronszajn representation’ through formula (7), where one has to take the vector field \(v(x) := K_{\varepsilon (x)}\), \(x \in X\).

Before moving to examples, let us remark that formula (7) ‘works well’ with the basic algebraic operations on groupoid representations.

Remark 4

Let \(\Gamma \) be a groupoid over X and let \((\mathcal {U}^{(j)}, \mathbb {H}^{(j)} := \{H^{(j)}_{x}\}_{x \in X})\), \(j=1,2,\ldots ,l\), be its unitary representations. Fixing l vector fields \(v_1,v_2,\ldots ,v_l\) on X such that \(v_j(x)\in H^{(j)}_{x}\) for \(x\in X\) and \(j=1,2,\dots ,l\) we can construct positive definite kernels \(K_1,K_2,\dots ,K_l\) on \(\Gamma \), respectively, using formula (7). Consider now the direct sum \((\bigoplus _{j=1}^l\mathcal {U}^{(j)}, \bigoplus _{j=1}^l\mathbb {H}^{(j)} := \{\bigoplus _{j=1}^l H^{(j)}_{x}\}_{x \in X})\) of the above representations, and, taking the vector field \(v^\oplus := \bigoplus _{j=1}^l v_j\), define a kernel \(K^\oplus \) on \(\Gamma \) via (7). Similarly, consider the tensor product \((\bigotimes _{j=1}^l\mathcal {U}^{(j)}, \bigotimes _{j=1}^l\mathbb {H}^{(j)} = \{\bigotimes _{j=1}^l H^{(j)}_{x}\}_{x \in X})\) of the representations and, taking this time the vector field \(v^\otimes := \bigotimes _{j=1}^l v_j\), define another kernel \(K^\otimes \) on \(\Gamma \), again employing formula (7). It follows from [25, p. 16,17] that

$$\begin{aligned} K^{\oplus }=K_1 + K_2 + \cdots + K_l \qquad \text {and} \qquad K^\otimes =K_1 \cdot K_2 \cdot \cdots \cdot K_l. \end{aligned}$$
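On the level of Gram matrices the operations of Remark 4 correspond to the entrywise sum and the entrywise (Hadamard) product, both of which preserve positive semidefiniteness (the latter by the Schur product theorem). A quick numerical reminder in Python/NumPy, with arbitrary randomly generated PSD matrices:

```python
import numpy as np

rng = np.random.default_rng(6)

def random_psd(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return M @ M.conj().T                     # Hermitian positive semidefinite

K1, K2 = random_psd(6), random_psd(6)         # Gram matrices of two kernels on 6 points
K_sum = K1 + K2                               # corresponds to K^oplus in Remark 4
K_prod = K1 * K2                              # entrywise product, corresponds to K^otimes

for K in (K_sum, K_prod):
    assert np.allclose(K, K.conj().T)
    assert np.linalg.eigvalsh(K).min() > -1e-8
print("sum and entrywise product of positive definite kernels remain positive definite")
```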

Remark 5

Suppose now we have a family of groupoids \(\{(\Gamma _i,X_i, s_i,r_i, \varepsilon _i, \cdot _i, {}^{-1_i})\}\) indexed by \(i \in I\). Let also \((\mathcal {U}^{(i)}, \mathbb {H}^{(i)} := \{H^{(i)}_{x}\}_{x \in X_i})\) be a unitary representation of the groupoid \(\Gamma _i\) for every \(i \in I\). The disjoint union \(\Gamma :=\bigsqcup _{i\in I}\Gamma _i\) has a natural groupoid structure with \(X:=\bigsqcup _{i\in I}X_i\) as a base space, \(\Gamma ^{(2)}:=\bigsqcup _{i\in I}\Gamma _i^{(2)}\), \(\gamma _1\cdot \gamma _2:=\gamma _1\cdot _i\gamma _2\) for \((\gamma _1,\gamma _2)\in \Gamma _i^{(2)}\), \(\varepsilon (x):=\varepsilon _i(x)\) for \(x\in X_i\) and, moreover, \(s(\gamma ):=s_i(\gamma )\), \(r(\gamma ):=r_i(\gamma )\), \(\gamma ^{-1}:=\gamma ^{{-1}_i}\) for \(\gamma \in \Gamma _i\).

Consider now the family of Hilbert spaces \(\mathbb {H} := \{H_x\}_{x \in X}\), where we put \(H_x:=H^{(i)}_x\) for \(x \in X_i\) and define a mapping \(\mathcal {U}\) assigning to each \(\gamma \in \Gamma \) a unitary transformation \(\mathcal {U}(\gamma ): H_{s(\gamma )} \rightarrow H_{r(\gamma )}\) by \(\mathcal {U}(\gamma ):=\mathcal {U}^{(i)}(\gamma )\) for \(\gamma \in \Gamma _i\). Then \((\mathcal {U},\mathbb {H})\) is a unitary representation of \(\Gamma \).

For any \(i\in I\) let us fix a vector field \(X_i \ni x \mapsto v_i(x) \in H^{(i)}_x\) and use formula (7) to obtain a positive definite kernel \(K_i\) from the unitary representation \((\mathcal {U}^{(i)}, \mathbb {H}^{(i)})\) of \(\Gamma _i\). Additionally, introduce a vector field v on X by setting \(v(x):=v_i(x)\) if \(x\in X_i\) and use it together with the representation \((\mathcal {U},\mathbb {H})\) to obtain, again via (7), another positive definite kernel K on \(\Gamma \). It is not difficult to observe that

$$\begin{aligned} K(\chi ,\gamma ) = \left\{ \begin{array}{l@{\quad }l} K_i(\chi ,\gamma ) &{} \text {for } \chi ,\gamma \in \Gamma _i, \\ 0 &{} \text {for } \chi \in \Gamma _i, \ \gamma \in \Gamma _{i'} \ \text {with } i \ne i'. \end{array} \right. \end{aligned}$$
(10)

What is more, reasoning analogously as when proving formula (8), one can show that the RKHS H(K) obtained from K by means of the Moore–Aronszajn theorem satisfies \(H(K)=\widetilde{\bigoplus }_{i \in I} H(K_i)\).

Example 6

Fix a Hilbert space H endowed with the inner product \(\langle \cdot , \cdot \rangle \) and let \((\mathcal {U}, \{H_x := \{x\} \times H\}_{x \in X})\) be the trivial representation of the groupoid \(\Gamma \). For a fixed vector field v, which here can be regarded as an element of \(H^X\), assumed to be constant along the orbits of \(\Gamma \) (so that the H-components of \(v(s(\gamma ))\) and \(v(r(\gamma ))\) coincide for every \(\gamma \in \Gamma \)), we have \(\mathcal {U}(\gamma )v(s(\gamma ))=v(r(\gamma ))\) and formula (7) yields the kernel

$$\begin{aligned} K(\chi , \gamma ) = \left\{ \begin{array}{l@{\quad }l} \left\langle v(r(\gamma )), v(r(\chi )) \right\rangle = \left\| v(r(\gamma )) \right\| ^{2} &{} \text {for } r(\gamma ) = r(\chi ), \\ 0 &{} \text {for } r(\gamma ) \ne r(\chi ). \end{array} \right. \end{aligned}$$

In other words, for every \(x \in X\) the kernel’s restriction \(K^x\) is a constant map. Every \(H(K^x)\) is thus either one-dimensional (if \(v(x) \ne 0\)) or zero-dimensional (if \(v(x) = 0\)). Moreover, if v is a nowhere-vanishing vector field, the set \(\{\Vert v(x)\Vert \mathbf {1}_{\Gamma ^x} \}_{x \in X}\) (where \(\mathbf {1}_{\Gamma ^x}: \Gamma \rightarrow \{0,1\}\) denotes the indicator function of the fiber \(\Gamma ^x\)) constitutes an orthonormal basis of H(K), and hence the latter Hilbert space is isometrically isomorphic to \(l^2(X)\).

Example 7

As a simple nontrivial example, consider the four-element groupoid \(\Gamma := \{\varepsilon (+), \varepsilon (-),\) \(\alpha , \alpha ^{-1} \}\) over a two-element set \(X := \{+,-\}\), visualized in Figure 1. Such a groupoid (the pair groupoid over a two-element set) is studied e.g. in the context of quantum information [14].

Fig. 1 Structure of the groupoid \(\Gamma \) in Example 7

To further simplify the example, let us consider a one-dimensional representation of \(\Gamma \). Concretely, let us take \(H_+ := \{+\} \times \mathbb {C}\), \(H_- := \{-\} \times \mathbb {C}\) and let the unitary representation \((\mathcal {U},\{H_+,H_-\})\) be given by

$$\begin{aligned} \mathcal {U}(\alpha ): H_+ \rightarrow H_-, \quad \mathcal {U}(\alpha )(+,z) := (-,\lambda z) \end{aligned}$$

and hence \(\mathcal {U}(\alpha ^{-1}): H_- \rightarrow H_+, \ \mathcal {U}(\alpha ^{-1})(-,z) := (+,\bar{\lambda } z)\), where \(\lambda \) is a fixed complex number of modulus 1. Choosing a generic vector field \(v(+) := (+,v_+)\), \(v(-) := (-,v_-)\), where \(v_\pm \in \mathbb {C}\), formula (7) yields a kernel whose respective values are presented in Table 1.

Table 1 The kernel obtained from the representation studied in Example 7: the entry in row \(\chi \) and column \(\gamma \) is \(K(\chi ,\gamma )\)

$$\begin{aligned} \begin{array}{c|cccc} K(\chi ,\gamma ) & \gamma =\varepsilon (+) & \gamma =\alpha ^{-1} & \gamma =\alpha & \gamma =\varepsilon (-) \\ \hline \chi =\varepsilon (+) & |v_+|^2 & \bar{\lambda } v_- \bar{v}_+ & 0 & 0 \\ \chi =\alpha ^{-1} & \lambda v_+ \bar{v}_- & |v_-|^2 & 0 & 0 \\ \chi =\alpha & 0 & 0 & |v_+|^2 & \bar{\lambda } v_- \bar{v}_+ \\ \chi =\varepsilon (-) & 0 & 0 & \lambda v_+ \bar{v}_- & |v_-|^2 \end{array} \end{aligned}$$

Notice that the columns of the above table contain the values of the functions \(K_{\varepsilon (+)}\), \(K_{\alpha ^{-1}}\), \( K_\alpha \), \(K_{\varepsilon (-)}\), respectively. These four functions span the space H(K), but they are not linearly independent. In fact, one can write that \(H(K) = span \{\varphi _+, \varphi _-\}\), where the functions \(\varphi _\pm : \Gamma \rightarrow \mathbb {C}\) are defined as

$$\begin{aligned}&\varphi _+(\varepsilon (+)) := \bar{v}_+,&\varphi _+(\alpha ^{-1}) := \lambda \bar{v}_-,&\varphi _+(\alpha ) := 0,&\varphi _+(\varepsilon (-)) := 0, \\&\varphi _-(\varepsilon (+)) := 0,&\varphi _-(\alpha ^{-1}) := 0,&\varphi _-(\alpha ) := \bar{\lambda } \bar{v}_+,&\varphi _-(\varepsilon (-)) := \bar{v}_-. \end{aligned}$$

Unless \(v_+ = v_- = 0\), the functions \(\varphi _\pm \) can be shown to be orthonormal:

$$\begin{aligned} \langle \varphi _+, \varphi _+ \rangle _{H(K)} = \langle \varphi _-, \varphi _- \rangle _{H(K)} = 1 \quad \text {and} \quad \langle \varphi _+, \varphi _- \rangle _{H(K)} = \langle \varphi _-, \varphi _+ \rangle _{H(K)} = 0. \end{aligned}$$

Therefore, we have \(H(K) \cong \mathbb {C}^2\) as Hilbert spaces, and so in this case H(K) is isomorphic to \(\mathcal {H} := H_+ \oplus H_-\) and not just isometrically embedded in the latter (cf. Theorem 2). In addition, observe that the (restricted) functions \(\varphi _+|_{\Gamma ^+},\varphi _-|_{\Gamma ^-}\) span the RKHS’s \(H(K^+)\), \(H(K^-)\), respectively, built from the restricted kernels. All in all, we have that

$$\begin{aligned} H(K) = span \{\varphi _+, \varphi _-\} = span \{\varphi _+\} \oplus span \{\varphi _-\} = H(K^+) \oplus H(K^-), \end{aligned}$$

where again we have identified the restricted functions with their extensions by zero, in full agreement with formula (8).

Finally, the set \(\{\varphi _+,\varphi _-\}\), being an orthonormal basis of H(K), is a Parseval frame for H(K), and hence by Theorem 3 we must have, for every \(\chi ,\gamma \in \Gamma \),

$$\begin{aligned} K(\chi ,\gamma ) = \varphi _+(\chi )\overline{\varphi _+(\gamma )} + \varphi _-(\chi )\overline{\varphi _-(\gamma )}. \end{aligned}$$

As one can check directly, the above equality indeed holds in the considered case.
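The whole of Example 7 fits into a few lines of NumPy. In the sketch below the numerical values of \(\lambda \), \(v_+\), \(v_-\) are arbitrary; the kernel matrix is built directly from formula (7) and the expansion of Theorem 3 with respect to \(\{\varphi _+,\varphi _-\}\) is verified.

```python
import numpy as np

lam = np.exp(1j * 0.7)                    # a fixed complex number of modulus 1
v_p, v_m = 1.5 - 0.5j, 0.25 + 2.0j        # components v_+ and v_- of the vector field

# Gamma ordered as [eps(+), alpha^{-1}, alpha, eps(-)] with ranges [+, +, -, -].
ranges = ['+', '+', '-', '-']
# F(gamma) := U(gamma) v(s(gamma)), written as the C-component in the fiber H_{r(gamma)}.
F = [v_p, np.conj(lam) * v_m, lam * v_p, v_m]

# Formula (7): K(chi, gamma) = <F(gamma), F(chi)> if r(gamma) == r(chi), else 0.
K = np.array([[F[g] * np.conj(F[c]) if ranges[g] == ranges[c] else 0.0
               for g in range(4)] for c in range(4)])

assert np.allclose(K, K.conj().T)
assert np.linalg.eigvalsh(K).min() > -1e-12        # positive definiteness

# The Parseval frame {phi_+, phi_-} of H(K) and the expansion of Theorem 3.
phi_p = np.array([np.conj(v_p), lam * np.conj(v_m), 0.0, 0.0])
phi_m = np.array([0.0, 0.0, np.conj(lam) * np.conj(v_p), np.conj(v_m)])
K_frame = np.outer(phi_p, phi_p.conj()) + np.outer(phi_m, phi_m.conj())
assert np.allclose(K, K_frame)
print("Example 7: kernel matrix reproduced by the Parseval frame {phi_+, phi_-}")
```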

Example 8

Let \(\Gamma \) be a locally compact groupoid endowed with a left Haar system \(\{\lambda ^x\}_{x \in X}\), and let \((\mathcal {U}, \{H_x := L^2(\Gamma ^x, \lambda ^x)\}_{x \in X})\) be its left regular representation, i.e. \((\mathcal {U}(\gamma )f)(\xi ) := f(\gamma ^{-1}\xi )\) for any \(f \in H_{s(\gamma )}\).

For any fixed vector field \(X \ni x \mapsto v(x) \in H_x\) the reproducing kernel reads, in the case when \(r(\chi ) = r(\gamma )\),

$$\begin{aligned} K(\chi , \gamma )&= \int _{\Gamma ^{r(\gamma )}} v(s(\gamma ))(\gamma ^{-1}\xi ) \overline{ v(s(\chi ))(\chi ^{-1}\xi )} d \lambda ^{r(\gamma )} (\xi ) \\&= \int _{\Gamma ^{s(\gamma )}} v(s(\gamma ))(\xi ) \overline{ v(s(\chi ))(\chi ^{-1} \gamma \xi )} d \lambda ^{s(\gamma )} (\xi ) \\&= \int _{\Gamma ^{s(\gamma )}} v(s(\gamma ))(\xi ) \widetilde{ v(s(\chi ))}(\xi ^{-1}\gamma ^{-1} \chi ) d \lambda ^{s(\gamma )} (\xi ). \end{aligned}$$

We note that this is an analogue of the kernel defined by the convolution on a group with respect to a left Haar measure (cf. Example 1). Although there exists a standard definition of a convolution on a groupoid (see, e.g., [18, p. 38]), the above expression does not entirely fit into it, because the convolved functions \(v(s(\gamma ))\) and \(\widetilde{v(s(\chi ))}\) are not defined on the whole of \(\Gamma \). In fact, \(v(s(\gamma )) \in L^{2}(\Gamma ^{s(\gamma )}, \lambda ^{s(\gamma )}) \), whereas \(\widetilde{ v(s(\chi ))} \in L^{2}(\Gamma _{s(\chi )}, inv _{*} \lambda ^{s(\chi )})\) (where \(inv \) denotes here the inversion map, \(inv (\gamma ) := \gamma ^{-1}\)) and as such their convolution is well defined only on \(\Gamma ^{s(\gamma )}_{s(\chi )}\).

Of course, when \(r(\gamma ) \ne r(\chi )\) the kernel is by definition \(K(\chi , \gamma ) =0\).
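Example 8 can also be tested on a finite groupoid. The sketch below (Python with NumPy; the pair groupoid over a three-element set with the counting Haar system and a random vector field, all chosen for illustration only) tabulates the kernel of the left regular representation, confirms its positive definiteness and verifies property (9).

```python
import numpy as np
from itertools import product

# Pair groupoid over X = {0, 1, 2}; arrows (y, x): x -> y; counting left Haar system.
X = [0, 1, 2]
Gamma = [(y, x) for y, x in product(X, X)]
s, r = (lambda g: g[1]), (lambda g: g[0])
inv = lambda g: (g[1], g[0])
eps = lambda x: (x, x)
mul = lambda g1, g2: (g1[0], g2[1])                  # defined when r(g2) == s(g1)
fiber_r = lambda x: [g for g in Gamma if r(g) == x]  # Gamma^x

rng = np.random.default_rng(4)
# Vector field: v(x) in H_x = L^2(Gamma^x, counting measure), stored as a dict chi -> value.
v = {x: {chi: rng.normal() + 1j * rng.normal() for chi in fiber_r(x)} for x in X}

def K(chi, gamma):
    # The kernel of Example 8: zero off the 'same range' set, otherwise an L^2 inner product.
    if r(chi) != r(gamma):
        return 0.0
    return sum(v[s(gamma)][mul(inv(gamma), xi)] * np.conj(v[s(chi)][mul(inv(chi), xi)])
               for xi in fiber_r(r(gamma)))

G = np.array([[K(chi, gamma) for gamma in Gamma] for chi in Gamma])
assert np.allclose(G, G.conj().T)
assert np.linalg.eigvalsh(G).min() > -1e-10          # positive definiteness

# Property (9): K(chi, gamma) = K(gamma^{-1} chi, eps(r(gamma^{-1} chi))).
for chi, gamma in product(Gamma, Gamma):
    if r(chi) == r(gamma):
        g = mul(inv(gamma), chi)
        assert np.isclose(K(chi, gamma), K(g, eps(r(g))))
print("Example 8 on the pair groupoid: the kernel is positive definite and satisfies (9)")
```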

Example 9

Let \(\Gamma \) be a locally compact groupoid endowed this time with a right Haar system \(\{\lambda _x\}_{x \in X}\), and let \((\mathcal {U}, \{H_x := L^2(\Gamma _x, \lambda _x)\}_{x \in X})\) be its right regular representation, i.e. \((\mathcal {U}(\gamma )f)(\xi ) := f(\xi \gamma )\) for any \(f \in H_{s(\gamma )}\).

For any fixed vector field \(X \ni x \mapsto v(x) \in H_x\) the reproducing kernel reads, in the case when \(r(\chi ) = r(\gamma )\),

$$\begin{aligned} K(\chi , \gamma )&=\int _{\Gamma _{r(\gamma )}} v(s(\gamma ))(\xi \gamma ) \overline{ v(s(\chi ))(\xi \chi )} d \lambda _{r(\gamma )} (\xi ) \\&=\int _{\Gamma _{s(\gamma )}} v(s(\gamma ))(\xi ) \overline{ v(s(\chi ))(\xi \gamma ^{-1} \chi )} d \lambda _{s(\gamma )} (\xi ) \\&= \int _{\Gamma _{s(\gamma )}} v(s(\gamma ))(\xi ) \widetilde{ v(s(\chi ))}(\chi ^{-1}\gamma \xi ^{-1}) d \lambda _{s(\gamma )} (\xi ). \end{aligned}$$

This can also be seen as an analogue of the convolution, only this time related to the right Haar measure in the group setting.

Of course, when \(r(\gamma ) \ne r(\chi )\) the kernel \(K(\chi , \gamma )\) is defined to vanish, just as in the previous examples.

Example 10

This example is inspired by [4]. Let X be the set of all pairs \((\Omega ,z)\), where \(\Omega \) is a domain (nonempty, open and connected set) in \({\mathbb C}^n\) and \(z\in \Omega \). Let also \(\Gamma \) be the set of all pairs \((\Phi ,z)\), where \(\Phi : \Omega _1\rightarrow \Omega _2\) is a biholomorphism between open domains in \({\mathbb C}^n\) and \(z\in \Omega _1\). We define the groupoid \((\Gamma ,X,s,r,\varepsilon , \cdot , {}^{-1})\) as follows.

If \((\Phi ,z)\in \Gamma \), where \(\Phi : \Omega _1\rightarrow \Omega _2\), then \(s(\Phi ,z):=(\Omega _1,z)\) and \(r(\Phi ,z):=(\Omega _2,\Phi (z))\). The embedding map is given by \(\varepsilon (\Omega ,z):=(id _{\Omega },z)\), whereas the inversion map reads \((\Phi ,z)^{-1}:=(\Phi ^{-1},\Phi (z))\). Finally, the multiplication \(\cdot \) of pairs \((\Phi : \Omega _1\rightarrow \Omega _2,z)\) and \((\Psi : \Omega _3\rightarrow \Omega _4,\zeta )\) is defined if \(\Omega _2=\Omega _3\) and \(\zeta =\Phi (z)\), in which case \((\Psi ,\zeta )(\Phi ,z):=(\Psi \circ \Phi ,z)\), where \(\circ \) denotes the ordinary composition of mappings.

In order to introduce a kernel on \(\Gamma \), define a map \(k:\Gamma \rightarrow \mathbb {C}\) via

$$\begin{aligned} k(\Phi ,z):=\frac{J\Phi (z)}{|J\Phi (z)|},\quad (\Phi ,z)\in \Gamma , \end{aligned}$$

where \(J\Phi (z)\) denotes the complex Jacobian determinant of \(\Phi \) at the point z. Notice that the map k is multiplicative, i.e., \(k(\gamma \chi ) = k(\gamma ) k(\chi )\) for any \((\gamma , \chi ) \in \Gamma ^{(2)}\). Indeed, using elementary properties of the Jacobian, we can write

$$\begin{aligned} k((\Psi ,\zeta )(\Phi ,z)) = k(\Psi \circ \Phi ,z) = \frac{J(\Psi \circ \Phi )(z)}{|J(\Psi \circ \Phi )(z)|} = \frac{J(\Psi )(\zeta )}{|J(\Psi )(\zeta )|} \cdot \frac{J(\Phi )(z)}{|J(\Phi )(z)|} = k(\Psi ,\zeta )k(\Phi ,z), \end{aligned}$$
(11)

where \(\zeta = \Phi (z)\) by the composability of \((\Psi ,\zeta )\) and \((\Phi ,z)\). What is more, for any \((\Omega ,z) \in X\) one trivially has \(k(\varepsilon (\Omega ,z)) = k(id _\Omega ,z) = 1\) and hence

$$\begin{aligned} k(\gamma ^{-1}) = k(\gamma )^{-1} k(\gamma ) k(\gamma ^{-1}) = k(\gamma )^{-1} k(\gamma \gamma ^{-1}) = k(\gamma )^{-1} k(\varepsilon (r(\gamma ))) = k(\gamma )^{-1} = \overline{k(\gamma )}\nonumber \\ \end{aligned}$$
(12)

for every \(\gamma \in \Gamma \).
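For \(n = 1\) the multiplicativity (11) and the property (12) of the map k can be verified numerically, e.g. for Möbius transformations, for which composition corresponds to the matrix product and \(J\Phi = \Phi '\). The following Python sketch uses two concrete, arbitrarily chosen Möbius maps.

```python
import numpy as np

# n = 1: biholomorphisms given by Moebius maps phi_M(z) = (a z + b) / (c z + d),
# encoded by invertible 2x2 complex matrices M = [[a, b], [c, d]] (illustrative choice).
def phi(M, z):
    (a, b), (c, d) = M
    return (a * z + b) / (c * z + d)

def dphi(M, z):                                  # complex derivative of phi_M at z
    (a, b), (c, d) = M
    return (a * d - b * c) / (c * z + d) ** 2

def k(M, z):                                     # k(Phi, z) := J Phi(z) / |J Phi(z)|
    j = dphi(M, z)
    return j / abs(j)

M1 = np.array([[1.0, 2.0], [0.5j, 1.0]])
M2 = np.array([[2.0, -1.0j], [1.0, 3.0]])
z = 0.3 + 0.2j

# Composition of Moebius maps corresponds to the matrix product: phi_{M2 M1} = phi_{M2} o phi_{M1}.
assert np.isclose(phi(M2 @ M1, z), phi(M2, phi(M1, z)))

# Multiplicativity (11): k(Psi o Phi, z) = k(Psi, Phi(z)) k(Phi, z); moreover |k| = 1 and k(id, z) = 1.
assert np.isclose(k(M2 @ M1, z), k(M2, phi(M1, z)) * k(M1, z))
assert np.isclose(abs(k(M1, z)), 1.0) and np.isclose(k(np.eye(2), z), 1.0)
print("multiplicativity of k verified for a pair of Moebius maps")
```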

Let now \(K:\Gamma \times \Gamma \rightarrow {\mathbb C}\) be a kernel given by the formula

$$\begin{aligned} K(\chi ,\gamma ) := \left\{ \begin{array}{l@{\quad }l} k(\gamma )\overline{k(\chi )} &{} \text {for } r(\gamma ) = r(\chi ), \\ 0 &{} \text {for } r(\gamma ) \ne r(\chi ), \end{array} \right. \end{aligned}$$
(13)

that is

$$\begin{aligned} K((\Psi ,\zeta ),(\Phi ,z)) := \left\{ \begin{array}{l@{\quad }l} \frac{J\Phi (z)}{|J\Phi (z)|}\frac{\overline{J\Psi (\zeta )}}{|J\Psi (\zeta )|} &{} \text {for } r(\Phi ,z) = r(\Psi ,\zeta ), \\ 0 &{} \text {for } r(\Phi ,z)\ne r(\Psi ,\zeta ). \end{array} \right. \end{aligned}$$

One can easily show that the above kernel is positive definite. In fact, it is a realization of the general construction described in Theorem 2 with the function \(F: \Gamma \rightarrow l^2(X)\) defined as \(F(\Phi ,z) := k(\Phi ,z)\delta _{r(\Phi ,z)}\), where \(\delta _x: X \rightarrow \mathbb {C}\) denotes the Kronecker delta concentrated at \(x \in X\).

Moreover, the above kernel satisfies (9). Indeed, by (11,12) one has that, whenever \(r(\chi ) = r(\gamma )\),

$$\begin{aligned} K(\chi ,\gamma ) = k(\gamma )\overline{k(\chi )}= & {} \overline{k(\gamma ^{-1})k(\chi )} \\= & {} \overline{k(\gamma ^{-1}\chi )} = k(\varepsilon (r(\gamma ^{-1}\chi )))\overline{k(\gamma ^{-1}\chi )} \\= & {} K(\gamma ^{-1}\chi ,\varepsilon (r(\gamma ^{-1}\chi ))). \end{aligned}$$

On the strength of the discussion following formula (9), the properties of the kernel K just proven mean that the latter can be used to construct a unitary ‘Moore–Aronszajn representation’ \((\mathcal {U},\{H(K^x)\}_{x\in X})\) of \(\Gamma \), with each \(\mathcal {U}(\gamma ): H(K^{s(\gamma )}) \rightarrow H(K^{r(\gamma )})\) satisfying

$$\begin{aligned} \mathcal {U}(\gamma ) K_{\chi } = K_{\gamma \chi }, \quad \chi \in \Gamma ^{s(\gamma )}. \end{aligned}$$
(14)

In order to better understand this representation, notice first that for any \(\chi \in \Gamma \)

$$\begin{aligned} K_\chi = k(\chi ) \bar{k} \cdot \mathbf {1}_{\Gamma ^{r(\chi )}}, \end{aligned}$$

where \(\mathbf {1}_{\Gamma ^{r(\chi )}}\) denotes the indicator function of the fiber \(\Gamma ^{r(\chi )}\) (cf. Example 6). This in particular means that for every \(x \in X\) the space \(H(K^x)\) is spanned by the function \(\bar{k} \cdot \mathbf {1}_{\Gamma ^{x}}\), which can be easily shown to be of norm 1. Observe now that, for any chosen \(\chi \in \Gamma ^{s(\gamma )}\)

$$\begin{aligned} \mathcal {U}(\gamma )(\bar{k} \cdot \mathbf {1}_{\Gamma ^{s(\gamma )}})&= k(\chi )^{-1} \mathcal {U}(\gamma )K_\chi = k(\chi )^{-1} K_{\gamma \chi } = k(\chi )^{-1} k(\gamma \chi ) \bar{k} \cdot \mathbf {1}_{\Gamma ^{r(\gamma \chi )}} \\&= k(\gamma ) \bar{k} \cdot \mathbf {1}_{\Gamma ^{r(\gamma )}}, \end{aligned}$$

where we have used (14) and (11). In other words, under the above choice of the orthonormal bases of the one-dimensional spaces \(H(K^x)\), the transformation \(\mathcal {U}(\gamma )\) can be regarded simply as the multiplication by the complex number \(k(\gamma )\).

The above groupoid \(\Gamma \) together with the simple kernel given by (13) is by no means the only one worth investigating in the context of biholomorphisms. The construction can be generalized, e.g., by considering, for some fixed natural \(N>1\), the sets \(X_N:=\{(\Omega ,\underline{z}) \ | \ \underline{z} \in \Omega ^N\}\) and \(\Gamma _N:=\{(\Phi ,\underline{z}) \ | \ \underline{z} \in \Omega _1^N\}\), where \(\underline{z}:=(z_1,\ldots ,z_N)\), the symbols \(\Omega \) and \(\Phi \) have the same meaning as before and \(\Omega _1\) denotes the domain of \(\Phi \). For any biholomorphism \(\Phi :\Omega _1\rightarrow \Omega _2\) we put

$$\begin{aligned}&s(\Phi ,\underline{z}):=(\Omega _1,\underline{z}), \\&r(\Phi ,\underline{z}):=(\Omega _2,\underline{\Phi (z)}), \\&(\Phi ,\underline{z})^{-1}:=(\Phi ^{-1},\underline{\Phi (z)}), \end{aligned}$$

where \(\underline{\Phi (z)}:=(\Phi (z_1),\ldots ,\Phi (z_N))\), and we define the multiplication

$$\begin{aligned} (\Psi ,\underline{\zeta })(\Phi ,\underline{z}):=(\Psi \circ \Phi ,\underline{z}) \end{aligned}$$

provided \(\underline{\zeta } = \underline{\Phi (z)}\) and \(\Psi : \Omega _2 \rightarrow \Omega _3\) is another biholomorphism. Finally, the embedding map is given by \(\varepsilon (\Omega ,\underline{z}):=(id _{\Omega },\underline{z})\) for any \((\Omega ,\underline{z}) \in X_N\). The septuple \((\Gamma _N, X_N, s, r, \varepsilon , \cdot , {}^{-1})\) is a groupoid and we can define N positive definite kernels \(K_1,\ldots ,K_N:\Gamma _N \times \Gamma _N \rightarrow \mathbb {C}\) via

$$\begin{aligned} K_j((\Psi ,\underline{\zeta }),(\Phi ,\underline{z})) := \left\{ \begin{array}{cc} k(\Phi ,z_j)\overline{k(\Psi ,\zeta _j)} &{} \text {for } r(\Phi ,\underline{z}) = r(\Psi ,\underline{\zeta }), \\ 0 &{} \text {for } r(\Phi ,\underline{z}) \ne r(\Psi ,\underline{\zeta }), \end{array} \right. \end{aligned}$$
(15)

\(j=1,\ldots ,N\). Since all the \(K_j\) satisfy condition (9), for any \(\alpha _1,\ldots ,\alpha _N>0\) and any natural exponents \(m_1,\ldots ,m_N\) the function \(K:=\alpha _1K_1^{m_1}+\alpha _2K_2^{m_2}+\cdots +\alpha _N K_N^{m_N}\) is a positive definite kernel on \(\Gamma _N\), also satisfying condition (9). Therefore, K defines a unitary ‘Moore–Aronszajn representation’ of \(\Gamma _N\), which no longer offers such a straightforward interpretation as the one presented above. This, however, goes beyond the scope of the current article and will be addressed in future work.

5 Applications

Let us briefly discuss possible applications of the presented relationship between unitary representations of groupoids and reproducing kernels.

Kernel methods are practically utilized, e.g., in machine learning. Their typical application is a classification or regression task, in which a kernel-based method processes the feature vector representing the analyzed object and produces the desired output, keeping the error small even when the data are difficult to distinguish. The most popular such method is the support vector machine classifier (SVC), used to classify objects that are not linearly separable. The original features, on the basis of which the decision is made, are transformed by means of the kernel function to a new space, where the separation of examples belonging to various categories is easier [9]. Usually the kernel function \(K(x,y)=\left\langle \tau (y),\tau (x)\right\rangle \) takes (real) feature vectors as its input arguments. This function can be substituted by \(K(h,g) = \langle \mathcal {U}(g)v, \mathcal {U}(h)v \rangle \); as a result, the unitary representation \(\mathcal {U}\) of the group is used instead of the feature map \(\tau \) transforming features from the original space to the new one, and the arguments of the kernel may be group elements themselves.
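As a toy illustration of this substitution (our own sketch, not taken from the cited works), the following Python code, assuming scikit-learn is available, feeds an SVC with a precomputed Gram matrix built from the regular representation of \(\mathbb {Z}_n\) as in formula (5); the labels are arbitrary.

```python
import numpy as np
from sklearn.svm import SVC

# Objects to classify are elements of the group Z_n; the kernel is induced by the
# regular representation as in (5), with the real part taken to keep the Gram matrix real.
n = 12
rng = np.random.default_rng(7)
v = rng.normal(size=n) + 1j * rng.normal(size=n)
U = [np.roll(np.eye(n), g, axis=0) for g in range(n)]       # cyclic-shift representation

def kernel(h, g):
    return np.vdot(U[h] @ v, U[g] @ v).real                 # Re K is still positive definite

elements = np.arange(n)
labels = (elements < n // 2).astype(int)                    # an arbitrary labelling

G = np.array([[kernel(h, g) for g in elements] for h in elements])
clf = SVC(kernel="precomputed").fit(G, labels)              # Gram matrix instead of raw features
print("training accuracy:", clf.score(G, labels))
```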

Another domain in which kernel-based methods prove their usefulness is optimization. A problem often encountered in data processing modules (implemented in such fields as electronics or control engineering) is the selection of the kernel that is optimal with respect to the distances between feature vectors in the multidimensional space. According to [20], such a measure (on groupoids) can be used to solve a generalized version of the traveling salesman problem. The overall distance to be minimized is \(L=\sum _{i=1}^{N} d(c_{i},c_{i+1}),\) where \(c_{i}\) and \(c_{i+1}\) are two subsequent nodes of the graph in the optimized cycle. Using the kernel to describe distances between nodes in the new space allows one to estimate the distance components as:

$$\begin{aligned} d(x,y)=\sqrt{K(x,x)-2K(x,y)+K(y,y)}, \end{aligned}$$

where the kernel K is a real-valued function. This type of distance is described in [12, p. 78]. In the SVC implementation, the selection of the optimal kernel among various candidates (ensuring the minimal or maximal distance between the points) can thus be carried out without the actual transformation of the original space into the new one, which is the core of the applicability of kernels in machine learning.
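A minimal sketch of the kernel-induced distance and of the resulting tour length L (Python with NumPy; the Gram matrix and the compared tours are arbitrary illustrative choices):

```python
import numpy as np

def kernel_distance(K, x, y):
    # Distance induced by a real-valued positive definite kernel stored as a matrix K.
    return np.sqrt(K[x, x] - 2.0 * K[x, y] + K[y, y])

def cycle_length(K, cycle):
    # Total length L = sum_i d(c_i, c_{i+1}) of a closed tour through the given nodes.
    return sum(kernel_distance(K, cycle[i], cycle[(i + 1) % len(cycle)])
               for i in range(len(cycle)))

# Toy Gram matrix on 4 nodes (an arbitrary positive semidefinite example) and two tours.
M = np.array([[1.0, 0.2, 0.1, 0.0],
              [0.2, 1.0, 0.3, 0.1],
              [0.1, 0.3, 1.0, 0.4],
              [0.0, 0.1, 0.4, 1.0]])
print(cycle_length(M, [0, 1, 2, 3]), cycle_length(M, [0, 2, 1, 3]))
```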

Yet another possible application concerns quantum physics, where both reproducing kernels [16] and groupoids and their representations [5] have been used in the description of quantum systems. For instance, the simple representation of the pair groupoid of a two-element set, considered in Example 7 above, can be used in the description of a qubit — the central notion of the theory of quantum information [14].

6 Conclusions

By combining RKHS and unitary representations of groupoids, each notion can be described and studied in terms of the other. For instance, one could examine how the irreducibility of the representation, its regularity or the existence of an imprimitivity system is reflected in terms of the reproducing kernels associated with this representation.

Another interesting problem might be to describe the so-called (unitary) equivalence after extension between two bounded linear operators in Hilbert spaces in terms of groupoids and their representations (see [24]). This may be applicable in the theory of partial differential equations and will be investigated in future work.

In future work we shall also investigate the physical meaning and significance of the kernels and RKHS associated with quantum-mechanically relevant unitary representations of groupoids.

On the mathematical side, let us add that the simple definition of a unitary representation of a groupoid employed in the paper can be generalized to the setting of measurable fields of Hilbert spaces [8, 22, 23]. The natural question whether the above-studied relationship between RKHS and unitary groupoid representations still holds in this more general setting will also be addressed in future work. This line of research might in the end offer new tools and insights in the fields of complex analysis, quantum physics and computer science.