1 Introduction

The principal kinematic formula is a cornerstone of classical integral geometry. In its basic form in Euclidean space, it deals with integral mean values for distinguished geometric functionals with respect to the invariant measure on the group of proper rigid motions; see [67, Chap. 5.1] and [65, Chap. 4.4] for background information, recent developments and applications. To be more specific, let \(\mathcal {K}^{n}\) denote the space of convex bodies (nonempty, compact, convex sets) in \(\mathbb {R}^{n}\). For two convex bodies \(K, K' \in \mathcal {K}^{n}\) and \(j \in \{ 0, \ldots , n \}\), the principal kinematic formula states that

$$\begin{aligned} \int _{G_n} V_{j} (K \cap g K') \, \mu ( \mathrm {d}g) = \sum _{k = j}^{n} \alpha _{n, j, k} V_{k}(K) V_{n - k + j}(K'), \end{aligned}$$
(1)

where \(G_{n}\) denotes the group of proper rigid motions of \(\mathbb {R}^{n}\), \(\mu \) is the motion invariant Haar measure on \(G_{n}\), normalized in the usual way (see [67, p. 586]), and the constant

$$\begin{aligned} \alpha _{n, j, k}{:=\,}\frac{\Gamma \left( \frac{k+1}{2}\right) \Gamma \left( \frac{n-k+j+1}{2}\right) }{\Gamma \left( \frac{j+1}{2}\right) \Gamma \left( \frac{n+1}{2}\right) } \end{aligned}$$

is expressed in terms of specific values of the Gamma function.
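
As a quick numerical illustration (not part of the argument), the following Python sketch evaluates \(\alpha _{n, j, k}\) and confirms that the extreme coefficients \(\alpha _{n, j, j}\) and \(\alpha _{n, j, n}\) equal one, so that the terms \(k = j\) and \(k = n\) in (1) carry the bare products \(V_{j}(K) V_{n}(K')\) and \(V_{n}(K) V_{j}(K')\); the helper name alpha is ours.

```python
from math import gamma

def alpha(n, j, k):
    # the constant alpha_{n,j,k} from the principal kinematic formula (1)
    return (gamma((k + 1) / 2) * gamma((n - k + j + 1) / 2)
            / (gamma((j + 1) / 2) * gamma((n + 1) / 2)))

for n in range(2, 7):
    for j in range(n + 1):
        # the extreme terms k = j and k = n carry coefficient 1
        assert abs(alpha(n, j, j) - 1.0) < 1e-12
        assert abs(alpha(n, j, n) - 1.0) < 1e-12

print(alpha(3, 0, 1), alpha(3, 0, 2))   # the two mixed coefficients for j = 0 in R^3
```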

The functionals \(V_{j}: \mathcal {K}^{n} \rightarrow \mathbb {R}\), \(j \in \{ 0, \ldots , n \}\), appearing in (1) are the intrinsic volumes, which arise as the uniquely determined coefficients of the monomials in the Steiner formula

$$\begin{aligned} \mathcal {H}^n (K + \epsilon B^{n}) = \sum _{j = 0}^{n} \kappa _{n - j} V_{j} (K) \epsilon ^{n - j},\quad \epsilon \ge 0, \end{aligned}$$
(2)

which holds for all convex bodies \(K \in \mathcal {K}^{n}\). As usual in this context, \(+\) denotes the Minkowski addition in \(\mathbb {R}^{n}\), \(B^{n}\) is the Euclidean unit ball in \(\mathbb {R}^{n}\) of n-dimensional volume \(\kappa _{n}\) and \(\mathcal {H}^n\) is the n-dimensional Hausdorff measure. Properties of the intrinsic volumes such as continuity, isometry invariance, additivity (valuation property) and homogeneity are derived from corresponding properties of the volume functional. A key result for the intrinsic volumes is Hadwiger’s characterization theorem, which states that \(V_{0}, \ldots , V_{n}\) form a basis of the vector space of continuous and isometry invariant real-valued valuations on \(\mathcal {K}^{n}\) (see [65, Theorem 6.4.14]). This theorem can be used to derive not only (1), but also Hadwiger’s general integral geometric theorem (see [67, Theorem 5.1.2]).
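
The Steiner formula (2) is easily checked numerically for the unit ball, using the classical values \(V_{j}(B^{n}) = \binom{n}{j} \kappa _{n} / \kappa _{n - j}\); the following short sketch (helper names ours) verifies that both sides of (2) agree for \(K = B^{n}\), where \(\mathcal {H}^n(B^{n} + \epsilon B^{n}) = \kappa _{n}(1 + \epsilon )^{n}\).

```python
from math import gamma, pi, comb

def kappa(d):
    # volume of the d-dimensional unit ball
    return pi ** (d / 2) / gamma(d / 2 + 1)

def v_ball(n, j):
    # classical values of the intrinsic volumes of B^n
    return comb(n, j) * kappa(n) / kappa(n - j)

n, eps = 4, 0.7
lhs = kappa(n) * (1 + eps) ** n                      # H^n(B^n + eps B^n)
rhs = sum(kappa(n - j) * v_ball(n, j) * eps ** (n - j) for j in range(n + 1))
assert abs(lhs - rhs) < 1e-10
```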

It is an important feature of the principal kinematic formula that the integral mean values of the intrinsic volumes can be expressed as sums of products of intrinsic volumes of the two convex bodies involved, and no other functionals are required. In other words, the principal kinematic formulae for the intrinsic volumes constitute a complete (closed) system of integral geometric formulae. As a consequence, these formulae can be iterated as described in [67, Chap. 5.1] and applied to the study of Boolean models in stochastic geometry (see [67, Chap. 9.1]). The bilinear structure of the right side of the kinematic formula (1) motivated the introduction and study of kinematic operators, in the recently developed field of algebraic integral geometry, which led to new insights, generalizations and to a profound understanding of the structure of integral geometric formulae (see [10, 27]) in connection with the algebraic structure of translation invariant valuations (see [3], also for further references).

It is natural to extend the principal kinematic formula by applying the integration over the rigid motion group \(G_{n}\) to functionals which generalize the intrinsic volumes. A far-reaching generalization is obtained by localizing the intrinsic volumes as measures, associated with convex bodies, such that the intrinsic volumes are just the total measures. Specifically, this leads to the support measures (generalized curvature measures) which are weakly continuous, locally defined and motion-equivariant valuations on convex bodies with values in the space of finite measures on Borel subsets of \(\mathbb {R}^{n} \times {\mathbb {S}}^{n-1}\), where \({\mathbb {S}}^{n-1}\) denotes the Euclidean unit sphere in \(\mathbb {R}^{n}\). They are determined by a local version of the Steiner formula (2), and thus, they provide a natural example of a localization of the intrinsic volumes. Their marginal measures on Borel subsets of \(\mathbb {R}^{n}\) are called curvature measures. In 1959, Federer (see [24, Theorem 6.11]) proved kinematic formulae for the curvature measures, even in the more general setting of sets with positive reach, which contain the classical kinematic formula as a very special case. More recently, kinematic formulae for support measures on convex bodies have been established by Glasauer in 1997 (see [28, Theorem 3.1]). These formulae are based on a special set operation on support elements of the involved bodies, which limits their usefulness for the present purpose, as explained in [29].

Already in the early seventies, integral geometric formulae for quermassvectors (curvature centroids) had been found by Hadwiger and Schneider [32, 61, 62]. Recently, McMullen [56] initiated a study of tensor-valued generalizations of the (scalar) intrinsic volumes and the vector-valued quermassvectors. This naturally raised the question of an analogue of Hadwiger’s characterization theorem and of integral geometric formulae for basic additive, tensor-valued functionals (tensor valuations) on the space of convex bodies. As shown by Alesker [1, 2], and further studied in [42], there exist natural tensor-valued functionals, the basic Minkowski tensors, which generalize the intrinsic volumes and span the vector space of tensor-valued, continuous, additive functionals on the space of convex bodies which are also isometry covariant. Although the basic Minkowski tensors span the corresponding vector space of tensor-valued valuations, the Minkowski tensors of rank at least two satisfy nontrivial linear relationships and hence are not a basis. This fact and the inherent difficulty of computing Minkowski tensors explicitly for sufficiently many examples pose an obstacle to computing the constants involved in integral geometric formulae. Nevertheless, major progress has been made in various works by different methods. Integral geometric Crofton formulae for general Minkowski tensors have been obtained in [41]. A specific case has been further studied and applied to problems in stereology in [53]; for various extensions, see [43] and [76]. A quite general study of various kinds of integral geometric formulae for translation invariant tensor valuations is carried out in [15], where the corresponding algebraic structures are also determined explicitly. An approach to Crofton and thus kinematic formulae for translation invariant tensor valuations via integral geometric formulae for area measures (which are of independent interest) follows from [29] and [74]. Despite all these efforts and substantial progress, a complete set of kinematic and Crofton formulae for general Minkowski tensors had not been found prior to the present work. The current state of the art is described in the lecture notes [48] and in the recent contributions [45, 46], which are crucially based on the present work.

Surprising new insight into integral geometric formulae can be gained by combining local and tensorial extensions of the classical intrinsic volumes. This setting has recently been studied by Schneider (see [64]) and further analyzed by Hug and Schneider in their works on local tensor valuations (see [37,38,39]). These valuations can be viewed as tensor-valued generalizations of the support measures. On the other hand, they can be considered as localizations of the global tensor valuations introduced and first studied by McMullen (see [56]) and characterized by Alesker in a Hadwiger style (see [1, 2]), as pointed out before. Inspired by the characterization results obtained in [37,38,39, 59, 64], we consider tensor-valued curvature measures, the tensorial curvature measures, and their generalizations, which are essentially defined only for polytopes, and establish a complete set of kinematic formulae for these (generalized) tensorial curvature measures. As an application of the present work, we establish in [46] (see also [83, Chapter 6] for a preliminary version) a complete set of kinematic and Crofton formulae for Minkowski tensors.

Kinematic formulae for the generalized tensorial curvature measures on polytopes (which do not have a continuous extension to all convex bodies) have not been considered before. In fact, our results are new even for the tensorial curvature measures which are obtained by integration against support measures. The constants involved in these formulae are surprisingly simple and can be expressed as a concise product of Gamma functions. Although some information about tensorial kinematic formulae can be gained from abstract characterization results (as developed in [37, 64]), we believe that explicit results cannot be obtained by such an approach, at least not in a simple way. In contrast, our argument starts as a tensor-valued version of the proof of the kinematic formula for curvature measures (see [65, Theorem 4.4.2]). But instead of first deriving Crofton formulae to obtain the coefficients of the functionals involved, we proceed in a direct way. In fact, the explicit derivation of the constants in related Crofton formulae via the template method does not seem to be feasible. In [45], we provide explicit Crofton formulae for (generalized) tensorial curvature measures as a straightforward consequence of our kinematic formulae and relate them to previously obtained special results. The main technical part of the present argument is based on the calculation of rotational averages over Grassmannians and the rotation group.

The current work explores generalizations of the principal kinematic formula to tensorial measure-valued valuations. Various other directions have been taken in extending the classical framework. Kinematic formulae for support functions have been studied by Weil [80], Goodey and Weil [30] and Schneider [63]; recent related work on mean section bodies and Minkowski valuations is due to Schuster [73], Goodey and Weil [31] and Schuster and Wannerer [74], a Crofton formula for Hessian measures of convex functions has been established and applied in [22]. Instead of changing the functionals involved in the integral geometric formulae, it is also natural and in fact required by applications to stochastic geometry to explore formulae where the integration is extended over subgroups G of the motion group. The extremal cases are translative and rotational integral geometry, where \(G = \mathbb {R}^n\) and \(G = O(n)\), respectively. The former is described in detail in [67, Chap. 6.4]; recent progress for scalar- and measure-valued valuations and further references are provided in [36, 81, 82]; applications to stochastic geometry are given in [33, 34, 72], where translative integral formulae for tensor-valued measures are established and applied. Rotational Crofton formulae for tensor valuations have recently been developed further by Auneau et al. [6, 7] and Svane and Vedel Jensen [76] (see also the literature cited there); applications to stereological estimation and bio-imaging are treated and discussed in [57, 77, 86]. Various other groups of isometries, also in Riemannian isotropic spaces, have been studied in recent years. Major progress has been made, for instance, in Hermitian integral geometry (in curved spaces), where the interplay between global and local results turned out to be crucial (see [13, 14, 25, 26, 75, 78, 79] and the survey [11]), but various other group actions have been studied successfully as well (see [4, 8, 9, 12, 17,18,19, 23]).

Minkowski tensors, tensorial curvature measures and general local tensor valuations are useful morphological characteristics that make it possible to describe the geometry of complex spatial structures and are particularly well suited for developing structure–property relationships for tensor-valued or orientation-dependent physical properties; see [68, 69] for surveys and Klatt’s PhD thesis [49] for an in-depth analysis of various aspects (including random fields and percolation) of the interplay between physics and Minkowski tensors. These applications cover a wide spectrum ranging from nuclear physics [70], granular matter [47, 55, 60, 85], density functional theory [84], and the physics of complex plasmas [20] to materials science [58]. Characterization and classification theorems for tensor valuations, uniqueness and reconstruction results [50,51,52, 71], which are accompanied by numerical algorithms [21, 35, 68, 69], stereological estimation procedures [53, 54] and integral geometric formulae, as considered in the present work, form the foundation for these and many other applications.

The paper is structured as follows. Section 2 contains a brief introduction to the basic concepts and definitions required to state our results. The main theorem (Theorem 1) and its consequences are described in Sect. 3, where further comments on the structure of the obtained formulae are also provided. The proof of Theorem 1, which is given in Sect. 5, is preceded by several auxiliary results. These concern integral averages over Grassmannians and the rotation group and are the subject of Sect. 4. The proof of Theorem 1 is divided into four main steps which are outlined at the beginning of Sect. 5. In the course of that proof, iterated sums involving Gamma functions build up. In a final step, these expressions have to be simplified again. Some basic tools which are required for this purpose are collected in an appendix.

2 Preliminaries

We work in the n-dimensional Euclidean space \(\mathbb {R}^{n}\), \(n\ge 2\), equipped with its usual topology generated by the standard scalar product \(\langle \cdot \,, \cdot \rangle \) and the corresponding Euclidean norm \(\Vert \cdot \Vert \). For a topological space X, we denote the Borel \(\sigma \)-algebra on X by \(\mathcal {B}(X)\).

The algebra of symmetric tensors over \(\mathbb {R}^n\) is denoted by \(\mathbb {T}\) (the underlying \(\mathbb {R}^n\) will be clear from the context); the vector space of symmetric tensors of rank \(p\in \mathbb {N}_0\) is denoted by \(\mathbb {T}^p\) with \(\mathbb {T}^0=\mathbb {R}\). The symmetric tensor product of tensors \(T_i\in \mathbb {T}\), \(i=1,2\), over \(\mathbb {R}^{n}\) is denoted by \(T_1T_2\in \mathbb {T}\), and for \(q\in \mathbb {N}_0\) and a tensor \(T\in \mathbb {T}\) we write \(T^{q}\) for the q-fold tensor product of T, where \(T^0{:=\,}1\); see also the contributions [16, 40] in the lecture notes [48] for further details and references. Identifying \(\mathbb {R}^{n}\) with its dual space via the given scalar product, we interpret a symmetric tensor of rank p as a symmetric p-linear map from \((\mathbb {R}^{n})^{p}\) to \(\mathbb {R}\). A special tensor is the metric tensor \(Q \in \mathbb {T}^{2}\), defined by \(Q(x, y) {:=\,} \langle x, y \rangle \) for \(x, y \in \mathbb {R}^{n}\). For an affine k-flat \(E\subset \mathbb {R}^n \), \(k \in \{0, \ldots , n\}\), the metric tensor Q(E) associated with E is defined by \(Q(E)(x, y) {:=\,} \langle p_{E^{0}} (x), p_{E^{0}} (y) \rangle \) for \(x, y \in \mathbb {R}^{n}\), where \(E^0\) denotes the linear direction space of E (see Sect. 4) and \(p_{E^{0}} (x)\) is the orthogonal projection of x to \(E^0\). If \(F\subset \mathbb {R}^n\) is a k-dimensional convex set, then we again write Q(F) for the metric tensor \(Q(\text {aff}(F))=Q(\text {aff}(F)^0)\) associated with the affine subspace \(\text {aff}(F)\) spanned by F.
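
In coordinates, Q is represented by the identity matrix, and Q(E) by the matrix of the orthogonal projection onto \(E^0\) (which is symmetric and idempotent). The following small sketch (function name ours) illustrates the definition of Q(E) for a plane in \(\mathbb {R}^3\).

```python
import numpy as np

def q_of_subspace(basis_vectors):
    """Matrix representing Q(E): (x, y) -> <p_{E^0}(x), p_{E^0}(y)>."""
    A = np.asarray(basis_vectors, dtype=float).T
    B = np.linalg.qr(A)[0]          # orthonormal basis of E^0 as columns
    return B @ B.T                  # projection matrix onto E^0; symmetric and idempotent

# a 2-dimensional direction space E^0 in R^3
Q_E = q_of_subspace([[1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
x, y = np.array([1.0, 2.0, 3.0]), np.array([0.0, 1.0, -1.0])
assert np.isclose(x @ Q_E @ y, (Q_E @ x) @ (Q_E @ y))   # Q(E)(x, y) = <p(x), p(y)>
```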

In order to define the tensorial curvature measures and to explain how they are related to the support measures, we start with the latter. For a convex body \(K \in \mathcal {K}^{n}\) and \(x \in \mathbb {R}^{n}\), we denote the metric projection of x onto K by \(p(K, x)\), and we define \(u(K, x) {:=\,} (x - p(K, x)) / \Vert x - p(K, x) \Vert \) for \(x \in \mathbb {R}^{n}{\setminus } K\). For \(\epsilon > 0\) and a Borel set \(\eta \subset \Sigma ^{n}{:=\,}\mathbb {R}^{n} \times {\mathbb {S}}^{n-1}\),

$$\begin{aligned} M_{\epsilon }(K, \eta ) {:=\,} \left\{ x \in \left( K + \epsilon B^{n} \right) {\setminus } K :\left( p(K, x), u(K, x) \right) \in \eta \right\} \end{aligned}$$

is a local parallel set of K which satisfies the local Steiner formula

$$\begin{aligned} \mathcal {H}^n (M_{\epsilon }(K, \eta )) = \sum _{j = 0}^{n - 1} \kappa _{n - j} \Lambda _{j} (K, \eta ) \epsilon ^{n - j}, \quad \epsilon \ge 0. \end{aligned}$$
(3)

This relation determines the support measures \(\Lambda _{0} (K, \cdot ), \ldots , \Lambda _{n - 1} (K, \cdot )\) of K, which are finite Borel measures on \(\mathcal {B}(\Sigma ^{n})\). Obviously, a comparison of (3) and the Steiner formula (2) yields \(V_{j}(K) = \Lambda _{j} (K, \Sigma ^{n})\) for \(j \in \{ 0, \ldots , n - 1 \}\). For further information see [65, Chap. 4.2].

Let \(\mathcal {P}^n\subset \mathcal {K}^n\) denote the space of convex polytopes in \(\mathbb {R}^n\). For a polytope \(P \in \mathcal {P}^{n}\) and \(j \in \{ 0, \ldots , n \}\), we denote the set of j-dimensional faces of P by \(\mathcal {F}_{j}(P)\) and the normal cone of P at a face \(F \in \mathcal {F}_{j}(P)\) by \(N(P, F)\). Then, the jth support measure \(\Lambda _{j} (P, \cdot )\) of P is explicitly given by

$$\begin{aligned} \Lambda _{j} (P, \eta ) = \frac{1}{\omega _{n - j}} \sum _{F \in \mathcal {F}_{j}(P)} \int _{F} \int _{N(P,F) \cap {\mathbb {S}}^{n-1}} \mathbb {1}_\eta (x,u) \, \mathcal {H}^{n - j - 1} (\mathrm {d}u) \, \mathcal {H}^{j} (\mathrm {d}x) \end{aligned}$$

for \(\eta \in \mathcal {B}(\Sigma ^{n})\) and \(j \in \{ 0, \ldots , n - 1 \}\), where \(\mathcal {H}^{j}\) denotes the j-dimensional Hausdorff measure and \(\omega _{n}\) is the \((n - 1)\)-dimensional volume of \({\mathbb {S}}^{n-1}\).
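
For instance, specializing this face decomposition to the unit cube \(P = [0,1]^n\) and \(\eta = \Sigma ^n\) recovers \(V_j([0,1]^n) = \binom{n}{j}\): there are \(\binom{n}{j} 2^{n-j}\) faces of dimension j, each of \(\mathcal {H}^{j}\)-measure one, and each normal cone cuts out a spherical orthant of \(\mathcal {H}^{n-j-1}\)-measure \(\omega _{n-j}/2^{n-j}\). A short numerical sketch (helper names ours):

```python
from math import gamma, pi, comb

def omega(d):
    # (d-1)-dimensional volume of the unit sphere S^{d-1}
    return 2 * pi ** (d / 2) / gamma(d / 2)

def lambda_j_total_cube(n, j):
    """Lambda_j([0,1]^n, Sigma^n) via the face decomposition above."""
    num_faces = comb(n, j) * 2 ** (n - j)            # number of j-faces of the unit cube
    face_content = 1.0                               # H^j-measure of each j-face
    cone_content = omega(n - j) / 2 ** (n - j)       # H^{n-j-1}(N(P,F) cap S^{n-1}): an orthant
    return num_faces * face_content * cone_content / omega(n - j)

for n in range(2, 6):
    for j in range(n):
        # Lambda_j(P, Sigma^n) = V_j(P), and V_j([0,1]^n) = binom(n, j)
        assert abs(lambda_j_total_cube(n, j) - comb(n, j)) < 1e-12
```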

For a polytope \(P\in \mathcal {P}^n\), we define the generalized tensorial curvature measure

$$\begin{aligned} \phi _{j}^{r,s,l} (P, \cdot ),\quad j \in \{0, \ldots , n - 1\}, \, r, s, l \in \mathbb {N}_{0}, \end{aligned}$$

as the Borel measure on \(\mathcal {B}(\mathbb {R}^n)\) which is given by

$$\begin{aligned} \phi _{j}^{r,s,l} (P, \beta ) {:=\,} c_{n, j}^{r, s, l}\,\frac{1}{\omega _{n - j}} \sum _{F \in \mathcal {F}_{j}(P)} Q(F)^{l} \int _{F \cap \beta } x^r \, \mathcal {H}^{j}(\mathrm {d}x) \int _{N(P,F) \cap {\mathbb {S}}^{n-1}} u^{s} \, \mathcal {H}^{n - j - 1} (\mathrm {d}u), \end{aligned}$$

for \(\beta \in \mathcal {B}(\mathbb {R}^{n})\), where

$$\begin{aligned} c_{n, j}^{r, s, l}{:=\,} \frac{1}{r! s!} \frac{\omega _{n - j}}{\omega _{n - j + s}} \frac{\omega _{j + 2l}}{\omega _{j}} \hbox { if }j\ne 0, \quad c_{n, 0}^{r, s, 0}{:=\,} \frac{1}{r! s!} \frac{\omega _{n}}{\omega _{n + s}},\quad \text { and }\ c_{n, 0}^{r, s, l}{:=\,}1 \hbox { for } l\ge 1. \end{aligned}$$

Note that if \(j = 0\) and \(l \ge 1\), then we have \( \phi _{0}^{r,s,l} \equiv 0\). In all other cases, the factor \(1/\omega _{n-j}\) in the definition of \( \phi _{j}^{r,s,l} (P, \beta )\) and the factor \(\omega _{n-j}\) involved in the constant \(c_{n, j}^{r, s, l}\) cancel.

For a general convex body \(K\in \mathcal {K}^n\), we define the tensorial curvature measure

$$\begin{aligned} \phi _{n}^{r,0,l} (K, \cdot ),\quad r,l\in \mathbb {N}_0, \end{aligned}$$

as the Borel measure on \(\mathcal {B}(\mathbb {R}^n)\) which is given by

$$\begin{aligned} \phi _{n}^{r,0,l} (K, \beta ): = c_{n, n}^{r, 0, l} \, Q^{l} \int _{K \cap \beta } x^r \, \mathcal {H}^{n}(\mathrm {d}x), \end{aligned}$$

for \(\beta \in \mathcal {B}(\mathbb {R}^{n})\), where \(c_{n, n}^{r, 0, l}{:=\,} \frac{1}{r!} \frac{\omega _{n + 2l}}{\omega _{n}}\). For the sake of convenience, we extend these definitions by \( \phi _{j}^{r,s,l} {:=\,} 0\) for \(j \notin \{ 0, \ldots , n \}\) or \(r \notin \mathbb {N}_{0}\) or \(s \notin \mathbb {N}_{0}\) or (\(j = n\) and \(s \ne 0\)). Finally, we observe that for \(P\in \mathcal {P}^n\), \(r=s=l=0\), and \(j=0,\ldots ,n-1\), the scalar-valued measures \( \phi _{j}^{0,0,0} (P,\cdot )\) are just the curvature measures \( \phi _{j} (P,\cdot )\), that is, the marginal measures on \(\mathbb {R}^n\) of the support measures \(\Lambda _j(P,\cdot )\), which therefore can be extended from polytopes to general convex bodies, and \( \phi _{n}^{0,0,0} (K,\cdot )\) is the restriction of the n-dimensional Hausdorff measure to \(K\in \mathcal {K}^n\).
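
As an elementary illustration (not part of the formal development), the defining sums can be evaluated directly for a convex polygon in \(\mathbb {R}^2\); for \(r = s = l = 0\) they recover the scalar curvature measures, so that \( \phi _{1}^{0,0,0} (P, \mathbb {R}^2) = V_1(P)\) (half the perimeter) and \( \phi _{0}^{0,0,0} (P, \mathbb {R}^2) = V_0(P) = 1\). The function names in the following sketch are ours.

```python
import numpy as np

def phi_1_scalar(vertices):
    """phi_1^{0,0,0}(P, R^2) for a convex polygon P (vertices counterclockwise): half the perimeter."""
    V = np.asarray(vertices, dtype=float)
    edges = np.roll(V, -1, axis=0) - V
    return np.linalg.norm(edges, axis=1).sum() / 2.0         # omega_1 = 2

def phi_0_scalar(vertices):
    """phi_0^{0,0,0}(P, R^2): exterior angles (normal cone arcs) summed, divided by omega_2 = 2*pi."""
    V = np.asarray(vertices, dtype=float)
    edges = np.roll(V, -1, axis=0) - V
    d = edges / np.linalg.norm(edges, axis=1, keepdims=True)
    ext = np.arccos(np.clip((d * np.roll(d, 1, axis=0)).sum(axis=1), -1.0, 1.0))
    return ext.sum() / (2.0 * np.pi)

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]    # counterclockwise
assert np.isclose(phi_1_scalar(square), 4.0)                 # V_1 of [0,2]^2
assert np.isclose(phi_0_scalar(square), 1.0)                 # V_0 (Euler characteristic)
```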

To put the generalized tensorial curvature measures into their natural context and to emphasize some of their properties, we recall the relevant definitions and results from [37, 40, 64]. For \(p\in \mathbb {N}_0\), let \(T_p(\mathcal {P}^n)\) denote the vector space of all mappings \({\tilde{\Gamma }}:\mathcal {P}^n\times \mathcal {B}(\Sigma ^n)\rightarrow \mathbb {T}^p\) such that (a) \({\tilde{\Gamma }}(P,\cdot )\) is a \(\mathbb {T}^p\)-valued measure on \(\mathcal {B}(\Sigma ^n)\), for each \(P\in \mathcal {P}^n\); (b) \({\tilde{\Gamma }}\) is isometry covariant; (c) \({\tilde{\Gamma }}\) is locally defined. We refer to [37, 40, 64] for explicit definitions of these properties.

For a polytope \(P\in \mathcal {P}^n\), the generalized local Minkowski tensor

$$\begin{aligned}{\tilde{\phi }}^{r,s,l}_j (P, \cdot ),\quad j\in \{0,\ldots ,n-1\},\, r,s,l\in \mathbb {N}_0, \end{aligned}$$

is the Borel measure on \(\mathcal {B}(\Sigma ^n)\) which is defined by

$$\begin{aligned}&{\tilde{\phi }}^{r,s,l}_j (P, \eta )\\&\quad {:=\,} c_{n, j}^{r, s, l}\,\frac{1}{\omega _{n - j}} \sum _{F \in \mathcal {F}_{j}(P)} Q(F)^{l} \int _{F}\int _{N(P,F) \cap {\mathbb {S}}^{n-1}} \mathbb {1}_\eta (x,u) x^r u^{s}\, \mathcal {H}^{j}(\mathrm {d}x) \, \mathcal {H}^{n - j - 1} (\mathrm {d}u), \end{aligned}$$

for \(\eta \in \mathcal {B}(\Sigma ^n)\).

It was shown in [37, 64] (where a different notation and normalization were used) that the mappings \(Q^m {\tilde{\phi }}^{r, s, l}_j\), where \(m, r, s, l \in \mathbb {N}_0\) satisfy \(2m + r + s + 2l = p\), where \(j \in \{ 0, \ldots , n - 1 \}\), and where \(l = 0\) if \(j \in \{ 0, n - 1 \}\), form a basis of \(T_p(\mathcal {P}^n)\).

This fundamental characterization theorem highlights the importance of the generalized local Minkowski tensors. In particular, since the mappings \(P\mapsto Q^m {\tilde{\phi }}^{r, s, l}_j(P, \cdot )\), \(P \in \mathcal {P}^n\), are additive, as shown in [37], all mappings in \(T_p(\mathcal {P}^n)\) are valuations.

Noting that

$$\begin{aligned} \phi _{j}^{r,s,l} (P, \beta ) = {\tilde{\phi }}^{r, s, l}_j (P, \beta \times {\mathbb {S}}^{n-1}), \quad j \in \{ 0, \ldots , n - 1 \}, \, r, s, l \in \mathbb {N}_0, \end{aligned}$$

for \(P \in \mathcal {P}^n\) and all \(\beta \in \mathcal {B}(\mathbb {R}^n)\), it is clear that the mappings

$$\begin{aligned} \phi _{j}^{r,s,l} : \mathcal {P}^n\times \mathcal {B}(\mathbb {R}^n)\rightarrow \mathbb {T}^p,\quad (P,\beta )\mapsto \phi _{j}^{r,s,l} (P,\beta ), \end{aligned}$$

where \(p = r + s + 2l\), have similar properties as the generalized local Minkowski tensors. In particular, it is easy to see that (including the case \(j = n\) where \(\mathcal {P}^n\) can be replaced by \(\mathcal {K}^n\))

  (a) \( \phi _{j}^{r,s,l} (P,\cdot )\) is a \(\mathbb {T}^p\)-valued measure on \(\mathcal {B}(\mathbb {R}^n)\), for each \(P \in \mathcal {P}^n\);

  (b) \( \phi _{j}^{r,s,l} \) is isometry covariant (translation covariant of degree r and rotation covariant);

  (c) \( \phi _{j}^{r,s,l} \) is locally defined (in the sense of [65, Note 11 for Section 4.2]);

  (d) \(P\mapsto \phi _{j}^{r,s,l} (P,\cdot )\), \(P\in \mathcal {P}^n\), is additive (a valuation).

It is an open problem whether the vector space of all mappings \( \Gamma : \mathcal {P}^n \times \mathcal {B}(\mathbb {R}^n) \rightarrow \mathbb {T}^p\) satisfying these properties is spanned by the mappings \(Q^m \phi _{j}^{r,s,l} \), where \(m, r, s, l \in \mathbb {N}_0\) satisfy \(2m + r + s + 2l = p\), where \(j \in \{ 0, \ldots , n - 1 \}\), and where \(l = 0\) if \(j \in \{ 0, n - 1 \}\), or where \(j = n\) and \(s = l = 0\). However, the linear independence of these mappings is shown in [45, Theorem 10]. A characterization theorem for smooth tensor-valued curvature measures, meaning those representable by suitable differential forms defined on the sphere bundle of \(\mathbb {R}^{n}\) and integrated (that is, evaluated) over the normal cycle, has recently been found by Saienko [59]. Note, however, that the generalized tensorial curvature measures \( \phi _{j}^{r,s,l} \), for \(1\le j\le n-2\) and \(l>1\), are not smooth. We also point out that, for every \(\beta \in \mathcal {B}(\mathbb {R}^n)\), the mapping \( \phi _{j}^{r,s,l} (\cdot , \beta )\) on \(\mathcal {P}^n\) is measurable. This is implied by the more general Lemma A.3 in the more detailed prior work [44].

It has been shown in [37] that the generalized local Minkowski tensor \({\tilde{\phi }}^{r,s,l}_j\) has a continuous extension to \(\mathcal {K}^{n}\) which preserves all other properties if and only if \(l \in \{0,1\}\); see [37, Theorem 2.3] for a stronger characterization result. Globalizing any such continuous extension in the \({\mathbb {S}}^{n-1}\)-coordinate, we obtain a continuous extension for the generalized tensorial curvature measures. For \(l = 0\), there exists a natural representation of the extension via the support measures. We call these the tensorial curvature measures. For a convex body \(K \in \mathcal {K}^{n}\), a Borel set \(\beta \in \mathcal {B}(\mathbb {R}^{n})\), and \(r, s \in \mathbb {N}_{0}\), they are given by

$$\begin{aligned} \phi _{j}^{r,s,0} (K, \beta ){:=\,} c_{n, j}^{r, s, 0} \int _{\beta \times {\mathbb {S}}^{n-1}} x^r u^s \, \Lambda _j(K, \mathrm {d}(x, u)), \end{aligned}$$
(4)

for \(j \in \{ 0, \ldots , n - 1 \}\), whereas \( \phi _{n}^{r,0,l} (K, \beta )\) has already been defined for all \(K \in \mathcal {K}^{n}\).

For an explicit description of the generalized local Minkowski tensors \({{\tilde{\phi }}}_j^{r, s, 1}(K, \cdot )\), for \(K \in \mathcal {K}^{n}\) and \(j \in \{ 0, \ldots , n - 1 \}\), and hence of \( \phi _{j}^{r,s,1} (K, \cdot )\), we refer to [37]. There it is shown that the map \(K \mapsto {{\tilde{\phi }}}_j^{r, s, 1}(K, \eta )\), \(K \in \mathcal {K}^{n}\), is measurable for all \(\eta \in \mathcal {B}(\Sigma ^n)\), which yields that the map \(K \mapsto \phi _{j}^{r,s,1} (K, \beta )\), \(K \in \mathcal {K}^{n}\), is measurable for all \(\beta \in \mathcal {B}(\mathbb {R}^n)\). Moreover, the measurability of the map \(K \mapsto \phi _j^{r, s, 0}(K, \beta )\), \(K \in \mathcal {K}^{n}\), is clear from (4).

The classical Gamma function appears in the coefficients of the kinematic formulae and in the proof of our main theorem. It can be defined via the Gaussian product formula

$$\begin{aligned} \Gamma (z) {:=\,} \lim _{a \rightarrow \infty } \frac{a^{z} a!}{z (z + 1) \cdots (z + a)} \end{aligned}$$

for all \(z \in \mathbb {C}{\setminus } \{ 0, -1, \ldots \}\) (see [5, (2.7)]). For \(c \in \mathbb {R}{\setminus } \mathbb {Z}\) and \(m \in \mathbb {N}_{0}\), this definition implies that

$$\begin{aligned} \frac{\Gamma (-c + m)}{\Gamma (-c)} = (-1)^{m} \frac{\Gamma (c + 1)}{\Gamma (c - m + 1)}. \end{aligned}$$
(5)

The Gamma function has simple poles at the nonpositive integers. The right side of relation (5) provides a continuation of the left side at \(c \in \mathbb {N}_{0}\), where \(\Gamma (c - m + 1)^{-1} = 0\) for \(c < m\). In the proofs, we repeatedly use Legendre’s duplication formula, \(\Gamma (c) \Gamma (c + \tfrac{1}{2}) = 2^{1 - 2c} \sqrt{\pi } \Gamma (2c)\), for \(c > 0\).
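
Both identities are easy to confirm numerically; the following sketch checks relation (5) for non-integer c and Legendre's duplication formula at a few sample points.

```python
from math import gamma, sqrt, pi, isclose

# relation (5): Gamma(-c + m) / Gamma(-c) = (-1)^m Gamma(c + 1) / Gamma(c - m + 1)
for c in (0.3, 1.7, 4.25):                       # c real, not an integer
    for m in range(6):
        lhs = gamma(-c + m) / gamma(-c)
        rhs = (-1) ** m * gamma(c + 1) / gamma(c - m + 1)
        assert isclose(lhs, rhs, rel_tol=1e-9)

# Legendre's duplication formula: Gamma(c) Gamma(c + 1/2) = 2^(1 - 2c) sqrt(pi) Gamma(2c)
for c in (0.5, 1.0, 2.75, 7.0):
    assert isclose(gamma(c) * gamma(c + 0.5),
                   2 ** (1 - 2 * c) * sqrt(pi) * gamma(2 * c), rel_tol=1e-9)
```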

3 The main results

In the present work, we establish explicit kinematic formulae for the (generalized) tensorial curvature measures \( \phi _{j}^{r,s,l} \) of polytopes. In other words, for \(P, P' \in \mathcal {P}^n\) and \(\beta , \beta ' \in \mathcal {B}(\mathbb {R}^n)\) we express the integral mean value

$$\begin{aligned} \int _{G_n} \phi _{j}^{r,s,l} (P \cap g P', \beta \cap g \beta ') \, \mu ( \mathrm {d}g) \end{aligned}$$

in terms of the (generalized) tensorial curvature measures of P and \(P'\), evaluated at \(\beta \) and \(\beta '\), respectively. In fact, the precise result shows that only a selection of these measures is needed. Furthermore, for \(l =0, 1\), the tensorial measures \( \phi _{j}^{r,s,l} \) are defined on \(\mathcal {K}^{n} \times \mathcal {B}(\mathbb {R}^{n})\), and therefore, in these two cases we also consider integral means of the form

$$\begin{aligned} \int _{G_n} \phi _{j}^{r,s,l} (K \cap g K', \beta \cap g \beta ') \, \mu ( \mathrm {d}g), \end{aligned}$$

for general \(K, K' \in \mathcal {K}^n\) and \(\beta , \beta ' \in \mathcal {B}(\mathbb {R}^n)\). Although the latter result can be deduced as a consequence of the former, it came as a surprise that the general formulae simplify for \(l\in \{0,1\}\) in such a way that only those generalized tensorial curvature measures which admit a continuous extension are involved.

Theorem 1

For \(P, P' \in \mathcal {P}^n\), \(\beta , \beta ' \in \mathcal {B}(\mathbb {R}^n)\), \(j, l, r, s \in \mathbb {N}_{0}\) with \(j \le n\), and \(l=0\) if \(j=0\),

$$\begin{aligned}&\displaystyle \int _{G_n} \phi _{j}^{r,s,l} (P \cap g P', \beta \cap g \beta ') \, \mu ( \mathrm {d}g)\\&\displaystyle \quad = \sum _{k = j}^{n} \sum _{m = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{i = 0}^{m} c_{n, j, k}^{s, l, i, m} \, Q^{m - i} \phi _{k}^{r,s - 2m,l + i} (P, \beta ) \phi _{n - k + j} (P', \beta '), \end{aligned}$$

where

$$\begin{aligned} c_{n, j, k}^{s, l, i, m} :=\, \frac{(-1)^{i}}{(4\pi )^{m} m!} \frac{\left( {\begin{array}{c}m\\ i\end{array}}\right) }{\pi ^i} \frac{(i + l - 2)!}{(l - 2)!}\, \gamma _{j,k}^{s,m}\, \alpha _{n,j,k} \end{aligned}$$

with \(\alpha _{n,j,k}\) as in (1) and

$$\begin{aligned} \gamma _{j,k}^{s,m}{:=\,}\frac{\Gamma \left( \frac{k}{2} + 1\right) }{\Gamma \left( \frac{j}{2} + 1\right) } \frac{\Gamma \left( \frac{j + s}{2} - m + 1\right) }{\Gamma \left( \frac{k + s}{2} + 1\right) } \frac{\Gamma \left( \frac{k - j}{2} + m\right) }{\Gamma \left( \frac{k - j}{2}\right) }. \end{aligned}$$

Several remarkable facts concerning the coefficients \( c_{n, j, k}^{s, l, i, m}\) should be observed. First, the ratio \({(i + l - 2)!} /{(l - 2)!}\) has to be interpreted in terms of Gamma functions and relation (5) if \(l\in \{0,1\}\), as described below. The corresponding special cases will be considered separately in the following two theorems. Second, the coefficients are indeed independent of the tensorial parameter r and depend on l only through the ratio \({(i + l - 2)!} /{(l - 2)!}\). Moreover, only tensors \( \phi _{k}^{r,s - 2m,p} (P, \beta ) \) with \(p \ge l\) appear on the right side of the kinematic formula. Using Legendre’s duplication formula, we could shorten the given expressions for the coefficients \( c_{n, j, k}^{s, l, i, m}\) even further. However, the present form has the advantage of exhibiting that \(\gamma _{j,k}^{s,m}=1\) if \(s = 0\) (and hence also \(m = i = 0\)). Furthermore, in contrast to the classical kinematic formula, the coefficients are signed. We shall see below that for \(l\in \{0,1\}\) all coefficients are nonnegative.
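
For concreteness, the following sketch (helper names ours) evaluates \( c_{n, j, k}^{s, l, i, m}\), with the ratio \((i + l - 2)!/(l - 2)!\) read via (5) for \(l \in \{0, 1\}\) as just described; it confirms that for \(s = 0\) (hence \(m = i = 0\)) the coefficient reduces to \(\alpha _{n,j,k}\), that for \(l = 1\) only terms with \(i = 0\) survive, and that for \(l = 0\) it agrees with the nonnegative form stated in Theorem 4 below.

```python
from math import gamma, factorial, comb, pi, isclose

def alpha(n, j, k):
    return (gamma((k + 1) / 2) * gamma((n - k + j + 1) / 2)
            / (gamma((j + 1) / 2) * gamma((n + 1) / 2)))

def gamma_ratio(i, l):
    """(i + l - 2)! / (l - 2)!, read via relation (5) when l in {0, 1}."""
    if l >= 2:
        return factorial(i + l - 2) / factorial(l - 2)
    # continuation: (-1)^i Gamma(2 - l) / Gamma(2 - l - i), zero once 2 - l - i <= 0
    return 0.0 if 2 - l - i <= 0 else (-1) ** i * gamma(2 - l) / gamma(2 - l - i)

def gamma_jk(j, k, s, m):
    """gamma_{j,k}^{s,m}; for k = j the last ratio Gamma(m)/Gamma(0) is 1 if m = 0, else 0."""
    last = (1.0 if m == 0 else 0.0) if k == j else gamma((k - j) / 2 + m) / gamma((k - j) / 2)
    return (gamma(k / 2 + 1) / gamma(j / 2 + 1)
            * gamma((j + s) / 2 - m + 1) / gamma((k + s) / 2 + 1) * last)

def c(n, j, k, s, l, i, m):
    return ((-1) ** i / ((4 * pi) ** m * factorial(m)) * comb(m, i) / pi ** i
            * gamma_ratio(i, l) * gamma_jk(j, k, s, m) * alpha(n, j, k))

# s = 0 (hence m = i = 0): the coefficient is the classical alpha_{n,j,k}
assert isclose(c(4, 1, 3, 0, 2, 0, 0), alpha(4, 1, 3))
# l = 1: only i = 0 survives
assert c(4, 1, 3, 4, 1, 1, 2) == 0.0
# l = 0: agreement with the closed form of Theorem 4
n, j, k, s, m, i = 4, 1, 3, 4, 2, 1
thm4 = gamma_jk(j, k, s, m) * alpha(n, j, k) / ((4 * pi) ** m * factorial(m - i) * pi ** i)
assert isclose(c(n, j, k, s, 0, i, m), thm4)
```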

In Theorem 1, we can simplify the coefficient \(c_{n, j, k}^{s, l, i, m}\) for \(k \in \{j, n \}\) and \(j\le n-1\) such that only one functional remains. From (5) we conclude that

$$\begin{aligned} c_{n, j, j}^{s, l, i, m} = \mathbb {1}\{ i = m = 0 \}. \end{aligned}$$

Furthermore, since \( \phi _{n}^{r,s,l} \) vanishes for \(s \ne 0\) and the functionals \(Q^{\frac{s}{2} - i} \phi _{n}^{r,0,l + i} \), \(i \in \{ 0, \ldots , \frac{s}{2} \}\), can be combined, we can redefine

$$\begin{aligned} c_{n, j, n}^{s, l, i, m}&:=\, \mathbb {1} \{ s \text { even}, m = i = \tfrac{s}{2} \} \frac{1}{(2\pi )^{s} \left( \frac{s}{2}\right) !} \frac{\Gamma \left( \frac{n - j + s}{2}\right) }{\Gamma \left( \frac{n - j}{2}\right) }; \end{aligned}$$

see (7) and (10) in the proof of Theorem 1.

It should also be observed that the functionals \( \phi _{n - 1}^{r,s - 2m,l + i} \) can be expressed in terms of the functionals \(Q^{m'} \phi _{n - 1}^{r,s',0} \), where \(m', s' \in \mathbb {N}_0\) and \(2m' + s' = s + 2l\). We do not pursue this here, since the resulting coefficients do not simplify nicely (see, however, [45]).

Theorem 1 states an equality of measures; hence the case \(r = 0\) of the theorem immediately implies the general case (nevertheless, we prove it in the present general form). In fact, algebraic induction and the inversion invariance of \(\mu \) immediately yield the following extension of Theorem 1.

Corollary 2

Let \(P, P' \in \mathcal {P}^n\), \(j, l, r, s \in \mathbb {N}_{0}\) with \(j \le n\), and \(l = 0\) if \(j = 0\). Let \(f, h\) be tensor-valued continuous functions on \(\mathbb {R}^n\). Then

$$\begin{aligned}&\int _{G_n}\int _{\mathbb {R}^n}f(x)h(gx)\, \phi _{j}^{r,s,l} (P \cap g^{-1} P', \mathrm {d}x) \, \mu ( \mathrm {d}g) \\&\quad = \sum _{k = j}^{n} \sum _{m = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{i = 0}^{m} c_{n, j, k}^{s, l, i, m} \, Q^{m - i} \int _{\mathbb {R}^n}f(x)\, \phi _{k}^{r,s - 2m,l + i} (P, \mathrm {d}x)\int _{\mathbb {R}^n} h(y) \, \phi _{n - k + j} (P', \mathrm {d}y). \end{aligned}$$

In particular, we could choose \(h(y) = y^{{\bar{r}}}\), \(y \in \mathbb {R}^n\), for \({\bar{r}} \in \mathbb {N}_0\). Moreover, since the generalized tensorial curvature measures depend additively on the underlying polytope, all integral formulae remain true if P and \(P'\) are replaced by finite unions of polytopes. Similar extensions hold for the following results.

The cases \(l = 0, 1\) are of special interest, since we can formulate the kinematic formulae for general convex bodies in these cases.

Theorem 3

For \(K, K' \in \mathcal {K}^n\), \(\beta , \beta ' \in \mathcal {B}(\mathbb {R}^n)\) and \(j, r, s \in \mathbb {N}_0\) with \(1\le j \le n\),

$$\begin{aligned}&\displaystyle \int _{G_n} \phi _{j}^{r,s,1} (K \cap g K', \beta \cap g \beta ') \, \mu ( \mathrm {d}g)\\&\displaystyle = \sum _{k = j}^{n} \sum _{m = 0}^{\lfloor \frac{s}{2} \rfloor } c_{n, j, k}^{s, 1, 0, m} \, Q^{m} \phi _{k}^{r,s - 2m,1} (K, \beta ) \phi _{n - k + j} (K', \beta '), \end{aligned}$$

where

$$\begin{aligned} c_{n, j, k}^{s, 1, 0, m} = \frac{1}{(4\pi )^{m} m!}\, \gamma _{j,k}^{s,m}\, \alpha _{n,j,k}. \end{aligned}$$

Proof

We apply (5) to obtain

$$\begin{aligned} \frac{(i - 1)!}{(- 1)!} = \frac{\Gamma (i)}{\Gamma (0)} = \mathbb {1}\{ i = 0 \}. \end{aligned}$$

Then, Theorem 1 yields the assertion in the polytopal case. For a convex body, we conclude the formula by approximation with polytopes, since the valuations \( \phi _{k}^{r,s - 2m,1} \) have weakly continuous extensions to \(\mathcal {K}^n\) (and the same is true for the curvature measures). \(\square \)

Theorem 4

For \(K, K' \in \mathcal {K}^n\), \(\beta , \beta ' \in \mathcal {B}(\mathbb {R}^n)\) and \(j, r, s \in \mathbb {N}_0\) with \(j \le n\),

$$\begin{aligned}&\displaystyle \int _{G_n} \phi _{j}^{r,s,0} (K \cap g K', \beta \cap g \beta ') \, \mu ( \mathrm {d}g)\\&\displaystyle = \sum _{k = j}^{n} \sum _{m = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{i = 0}^{1} c_{n, j, k}^{s, 0, i, m} \, Q^{m - i} \phi _{k}^{r,s - 2m,i} (K, \beta ) \phi _{n - k + j} (K', \beta '), \end{aligned}$$

where

$$\begin{aligned} c_{n, j, k}^{s, 0, i, m} = \frac{1}{(4\pi )^{m} (m-i)!\pi ^{i}} \, \gamma _{j,k}^{s,m}\, \alpha _{n,j,k}. \end{aligned}$$

Proof

We apply (5) to obtain

$$\begin{aligned} \frac{(i - 2)!}{(- 2)!} = \frac{\Gamma (i - 1)}{\Gamma (- 1)} = (-1)^{i} \frac{1}{\Gamma (2 - i)} = \mathbb {1}\{ i = 0 \}-\mathbb {1}\{ i = 1 \}. \end{aligned}$$

Then, Theorem 1 yields the assertion in the polytopal case. For a convex body, we conclude the formula by approximation with polytopes, since for \(i \in \{ 0, 1 \}\) the valuations \( \phi _{k}^{r,s - 2m,i} \) have weakly continuous extensions to \(\mathcal {K}^n\). Finally, we note that \(c_{n, j, k}^{s, 0, 1, 0} =0\). \(\square \)
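
The two Gamma-ratio evaluations used in the preceding proofs can also be checked numerically by approaching the poles; the following sketch (helper name ours) approximates \(\Gamma (i)/\Gamma (0)\) and \(\Gamma (i - 1)/\Gamma (-1)\) by evaluating the Gamma function slightly off the poles.

```python
from math import gamma

def ratio_near_pole(i, a, delta=1e-7):
    """Gamma(i + a + delta) / Gamma(a + delta), approximating the continuation at the pole a."""
    return gamma(i + a + delta) / gamma(a + delta)

# proof of Theorem 3 (l = 1): Gamma(i)/Gamma(0) = 1{i = 0}
for i in range(4):
    assert abs(ratio_near_pole(i, 0.0) - (1.0 if i == 0 else 0.0)) < 1e-5

# proof of Theorem 4 (l = 0): Gamma(i - 1)/Gamma(-1) = 1{i = 0} - 1{i = 1}
for i in range(4):
    assert abs(ratio_near_pole(i, -1.0) - {0: 1.0, 1: -1.0}.get(i, 0.0)) < 1e-5
```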

It is crucial that the right sides of the formulae in Theorems 3 and 4 only involve the (generalized) tensorial curvature measures \( \phi _{k}^{r,s,0} \) and \( \phi _{k}^{r,s,1} \), which are the ones with weakly continuous extensions to \(\mathcal {K}^{n}\), and not \( \phi _{k}^{r,s,i} \) with \(i > 1\).

4 Some auxiliary results

Before we start with the proof of the main theorem, we establish several auxiliary integral geometric results in this section. As a rule, these results hold for \(n\ge 1\). If not stated otherwise, the case \(n = 1\) (or even \(n = 0\)) can be checked directly.

We recall the following notions. The rotation group on \(\mathbb {R}^{n}\) is denoted by \(SO(n)\), the orthogonal group on \(\mathbb {R}^n\) by \(O(n)\), and we write \(\nu \) for the Haar probability measure on both spaces. By \(G(n, k)\) (resp. \(A(n, k)\)), for \(k \in \{0, \ldots , n\}\), we denote the Grassmannian (resp. affine Grassmannian) of k-dimensional linear (resp. affine) subspaces of \(\mathbb {R}^{n}\). We write \(\nu _k\) for the rotation invariant Haar probability measure on \(G(n, k)\). The directional space of an affine k-flat \(E \in A(n, k)\) is denoted by \(E^{0} \in G(n, k)\) and its orthogonal complement by \(E^{\perp } \in G(n, n - k)\). For \(k \in \{0, \ldots , n\}\) and \(F \in G(n, k)\), we denote the group of rotations of \(\mathbb {R}^n\) mapping F (and hence also \(F^\perp \)) into itself by \(SO(F)\) (which is the same as \(SO(F^\perp )\)) and write \(\nu ^{F}\) for the Haar probability measure on \(SO(F)\). For \(l \in \{0, \ldots , n\}\), let \(G(F, l) {:=\,} \{ L \in G(n, l): L \subset F \}\) if \(l \le k\), and let \(G(F, l) {:=\,} \{ L \in G(n, l): L \supset F \}\) if \(l > k\). Then \(G(F, l)\) is a homogeneous \(SO(F)\)-space. Hence, there exists a unique Haar probability measure \(\nu ^{F}_l\) on \(G(F, l)\), which is \(SO(F)\) invariant. An introduction to invariant measures and group operations as needed here is provided in [67, Chap. 13], where, however, \(SO(F)\) is defined in a slightly different way.

Recall that the orthogonal projection of a vector \(x \in \mathbb {R}^{n}\) to a linear subspace L of \(\mathbb {R}^{n}\) is denoted by \(p_{L}(x)\), and set \(\pi _{L}(x){:=\,} p_{L}(x)/\Vert p_{L}(x)\Vert \in {\mathbb {S}}^{n-1}\) for \(x \notin L^{\perp }\). For two linear subspaces \(L, L'\) of \(\mathbb {R}^{n}\), the subspace determinant \([L, L']\) is defined as follows (see [67, Sect. 14.1]). One extends an orthonormal basis of \(L \cap L'\) (the empty set if \(L \cap L'=\{0\}\)) to an orthonormal basis of L and to one of \(L'\). Then \([L, L']\) is the volume of the parallelepiped spanned by all these vectors. Consequently, if \(L=\{0\}\) or \(L=\mathbb {R}^n\), then \([L,L']{:=\,}1\). For \(F,F'\in \mathcal {K}^n\), we define \([F,F']{:=\,}[F^0,(F')^0]\), where \(F^0\) is the direction space of the affine hull of F.
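
In the special case \(L \cap L' = \{0\}\) (so that no basis of the intersection has to be extended), \([L, L']\) is simply the volume of the parallelepiped spanned by orthonormal bases of L and \(L'\), which can be computed via a Gram determinant; for two lines it is the sine of the enclosed angle. A small numerical sketch (function name ours):

```python
import numpy as np

def subspace_determinant(basis_L, basis_Lp):
    """[L, L'] in the case L ∩ L' = {0}: volume of the parallelepiped spanned by
    orthonormal bases of L and L', computed via a Gram determinant."""
    BL = np.linalg.qr(np.asarray(basis_L, dtype=float).T)[0]
    BLp = np.linalg.qr(np.asarray(basis_Lp, dtype=float).T)[0]
    M = np.hstack([BL, BLp])                      # columns: the combined orthonormal vectors
    return float(np.sqrt(np.linalg.det(M.T @ M)))

# two lines in R^3 enclosing the angle theta: [L, L'] = sin(theta)
theta = 0.7
assert np.isclose(subspace_determinant([[1.0, 0.0, 0.0]],
                                        [[np.cos(theta), np.sin(theta), 0.0]]), np.sin(theta))
# a line orthogonal to a plane: [L, L'] = 1
assert np.isclose(subspace_determinant([[0.0, 0.0, 1.0]],
                                        [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]), 1.0)
```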

A basic tool in this work is the following integral geometric transformation formula, which is a special case of [67, Theorem 7.2.6].

Lemma 5

Let \(0\le j \le k \le n\) be integers, \(F \in G(n,k)\), and let \(f:G(n,n-k+j) \rightarrow \mathbb {R}\) be integrable. Then

$$\begin{aligned}&\displaystyle \int _{G(n, n - k + j)} f(L) \, \nu _{n - k + j}(\mathrm {d}L)\\&\displaystyle = d_{n, j, k} \int _{G(F, j)} \int _{G(U, n - k + j)} [F, L]^j f(L) \, \nu _{n - k + j}^U(\mathrm {d}L) \, \nu _{j}^F(\mathrm {d}U) \end{aligned}$$

with

$$\begin{aligned} d_{n, j, k}{:=\,} \prod _{i = 1}^{k - j} \frac{\Gamma \left( \frac{i}{2}\right) \Gamma \left( \frac{n - k + j + i}{2}\right) }{\Gamma \left( \frac{j + i}{2}\right) \Gamma \left( \frac{n - k + i}{2}\right) }. \end{aligned}$$

The preceding lemma yields the next result, which is again an integral geometric transformation formula (which will be needed in Sect. 5.3). Here we (implicitly) require that \(n \ge 2\).

Lemma 6

[41, Corollary 4.2] Let \(u \in {\mathbb {S}}^{n-1}\) and let \(h: G(n, k) \rightarrow \mathbb {T}\) be an integrable function, where \(0< k < n\). Then

$$\begin{aligned} \int _{G(n, k)} h(L) \, \nu _k (\mathrm {d}L)&= \frac{\omega _{k}}{2 \omega _{n}} \int _{G(u^\perp , k - 1)} \int _{-1}^{1} \int _{U^{\perp } \cap u^{\perp } \cap {\mathbb {S}}^{n-1}} |t|^{k - 1} ( 1 - t^2 )^{\frac{n - k - 2}{2}} \\&\quad \times h \bigl ( {{\mathrm{\mathrm{span}}}}\bigl \{ U, t u + \sqrt{1 - t^2} w \bigr \} \bigr ) \, \mathcal {H}^{n - k - 1} (\mathrm {d}w) \, \mathrm {d}t \, \nu ^{u^\perp }_{k - 1} (\mathrm {d}U). \end{aligned}$$

The following lemmas can be derived from Lemma 6 (see [66, (24)], [41, Lemma 4.3 and Proposition 4.5]).

Lemma 7

[66, (24)] Let \(s \in \mathbb {N}_{0}\) and \(n\ge 1\). Then

$$\begin{aligned} \int _{{\mathbb {S}}^{n-1}} u^{s} \, \mathcal {H}^{n - 1} (\mathrm {d}u) = \mathbb {1}\{ s \text { even } \} \, 2 \frac{\omega _{n + s}}{\omega _{s + 1}}\, Q^{\frac{s}{2}}. \end{aligned}$$
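
For \(n = 3\) and \(s = 2\), the rank-two tensor \(u^{2}\) corresponds to the matrix \(u u^{T}\) and Q to the identity, so Lemma 7 asserts that \(\int _{{\mathbb {S}}^{2}} u u^{T} \, \mathcal {H}^{2}(\mathrm {d}u) = 2 (\omega _{5}/\omega _{3})\,\mathrm {I} = (4\pi /3)\,\mathrm {I}\). A Monte Carlo sketch (sampling uniform points on the sphere; helper name ours):

```python
import numpy as np
from math import gamma, pi

def omega(d):
    # (d-1)-dimensional volume of S^{d-1}
    return 2 * pi ** (d / 2) / gamma(d / 2)

rng = np.random.default_rng(0)
n, s, N = 3, 2, 200_000
u = rng.standard_normal((N, n))
u /= np.linalg.norm(u, axis=1, keepdims=True)        # uniform sample on S^{n-1}

lhs = omega(n) * np.einsum('ki,kj->ij', u, u) / N    # Monte Carlo estimate of the integral of u^2
rhs = 2 * omega(n + s) / omega(s + 1) * np.eye(n)    # 2 (omega_{n+s}/omega_{s+1}) Q^{s/2}
assert np.allclose(lhs, rhs, atol=0.05)
```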

The next lemma is used in the proofs of Lemmas 9 and 10 below.

Lemma 8

[41, Lemma 4.3] Let \(i, k \in \mathbb {N}_0\) with \(k \le n\) and \(n\ge 1\). Then

$$\begin{aligned} \int _{G(n, k)} Q(L)^i \, \nu _{k} (\mathrm {d}L) = \frac{\Gamma \left( \frac{n}{2}\right) \Gamma \left( \frac{k}{2} + i\right) }{\Gamma \left( \frac{n}{2} + i\right) \Gamma \left( \frac{k}{2}\right) } \,Q^{i}. \end{aligned}$$
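
For \(i = 1\), the right side of Lemma 8 is \((k/n)\,Q\): the average of the orthogonal projection onto a \(\nu _k\)-distributed subspace is \(k/n\) times the identity. The following Monte Carlo sketch samples subspaces as column spans of Gaussian matrices (a standard, rotation-invariant construction).

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, N = 4, 2, 20_000

avg = np.zeros((n, n))
for _ in range(N):
    # a nu_k-distributed subspace L: column span of an n x k Gaussian matrix
    B = np.linalg.qr(rng.standard_normal((n, k)))[0]
    avg += B @ B.T                                   # the matrix of Q(L) is the projection onto L
avg /= N

assert np.allclose(avg, (k / n) * np.eye(n), atol=0.05)   # Lemma 8 with i = 1
```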

The following lemma extends Lemma 8 (but the latter is used in the proof of Lemma 9). It will be needed in the proof of Proposition 14 (of which Lemma 9 is a special case).

Lemma 9

[41, Proposition 4.5] Let \(a, i \in \mathbb {N}_0\), \(k,r \in \{ 0, \ldots , n \}\) with \(k + r \ge n\ge 1\), and let \(F \in G(n, r)\). Then

$$\begin{aligned} \int _{G(n, k)} [F, L]^{a} Q(L)^{i} \, \nu _{k} (\mathrm {d}L)&= e_{n, k, r, a}\, \frac{\Gamma ( \frac{n + a}{2})}{\Gamma ( \frac{n + a}{2} + i) \Gamma ( \frac{k + a}{2})} \sum _{\beta = 0}^{i} (-1)^{\beta } \left( {\begin{array}{c}i\\ \beta \end{array}}\right) \Gamma (\tfrac{k + a}{2} + i - \beta ) \\&\quad \times \frac{\Gamma \left( \frac{n - k}{2} + \beta \right) \Gamma \left( \frac{a}{2} + 1\right) \Gamma \left( \frac{r}{2}\right) }{\Gamma \left( \frac{n - k}{2}\right) \Gamma \left( \frac{a}{2} + 1 - \beta \right) \Gamma \left( \frac{r}{2} + \beta \right) }\, Q^{i - \beta } Q(F)^{\beta } \end{aligned}$$

with

$$\begin{aligned} e_{n, k, r, a}{:=\,} \prod _{p = 0}^{n - r - 1} \frac{\Gamma \left( \frac{n - p}{2}\right) \Gamma \left( \frac{k - p + a}{2}\right) }{\Gamma \left( \frac{n - p + a}{2}\right) \Gamma \left( \frac{k - p}{2}\right) }. \end{aligned}$$

Proof

Although this lemma is stated in [41, Proposition 4.5] only for \(k, r \ge 1\), it is easy to check that it remains true for \(k = 0\) (and \(r = n\)) and for \(r = 0\) (and \(k = n\)) with \(n \ge 1\) as well as for \(n = k = r = 1\). The only nontrivial case concerns the assertion for \(a>0\), \(k = 0\), \(r = n\) and \(i \ge 1\), where we can show (using relation (A.1’)) that the right side is the zero tensor. The case \(a=0\) is covered by Lemma 8. \(\square \)

From Lemma 8, we deduce the next result, which will be used in the proofs of Lemma 11 and Proposition 14.

Lemma 10

Let \(i, j, k \in \mathbb {N}_0\) with \(0\le k \le n\). Then

$$\begin{aligned} \int _{G(n, k)} Q(L)^i Q(L^{\perp })^j \, \nu _{k} (\mathrm {d}L) = \frac{\Gamma \left( \frac{n}{2}\right) \Gamma \left( \frac{k}{2} + i\right) \Gamma \left( \frac{n - k}{2} + j\right) }{\Gamma \left( \frac{n}{2} + i + j\right) \Gamma \left( \frac{k}{2}\right) \Gamma \left( \frac{n - k}{2}\right) } \,Q^{i + j}. \end{aligned}$$

Proof

The cases where \(k\in \{0,n\}\) can be checked easily. For \(1 \le k \le n - 1\), expansion of \(Q(L^{\perp })^{j} = (Q - Q(L))^{j}\), Lemma 8 and relation (A.1’) yield the assertion. \(\square \)

The next lemma will be used at the beginning of Sect. 5.2.

Lemma 11

Let \(j, l, s \in \mathbb {N}_{0}\) with \(j < n\), \(L \in G(n, j)\) and \(u \in L^\perp \cap {\mathbb {S}}^{n-1}\). Then

$$\begin{aligned} \int _{SO(n)} Q(\vartheta L)^l (\vartheta u)^{s} \, \nu (\mathrm {d}\vartheta ) = \frac{\Gamma \left( \frac{n}{2}\right) \Gamma \left( \frac{j}{2} + l\right) \Gamma \left( \frac{s + 1}{2}\right) }{\sqrt{\pi }\Gamma \left( \frac{n + s}{2} + l\right) \Gamma \left( \frac{j}{2}\right) }\, Q^{l + {\frac{s}{2}}}, \end{aligned}$$

if s is even. The same relation holds if the integration is extended over \(O(n)\). If s is odd and \(n\ge 2\) (or \(n=1\) and the integration is extended over \(O(1)\)), then the integral vanishes.

Proof

The case \(n = 1\), \(j = 0\) is checked directly by distinguishing \(l = 0\) or \(l \ne 0\). Hence let \(n \ge 2\). Let I denote the integral we are interested in. Due to symmetry, \(I = 0\) if s is odd. Therefore, let s be even. Let \(\rho \in SO(L^\perp )\). Then, by the right invariance of \(\nu \) and Fubini’s theorem we obtain

$$\begin{aligned} I&= \int _{SO(L^\perp )}\int _{SO(n)}Q(\vartheta \rho L)^l (\vartheta \rho u)^{s}\, \nu (\mathrm {d}\vartheta )\, \nu ^{L^{\perp }}(\mathrm {d}\rho ) \\&= \int _{SO(L^\perp )}\int _{SO(n)}Q(\vartheta L)^l (\vartheta \rho u)^{s}\, \nu (\mathrm {d}\vartheta )\, \nu ^{L^{\perp }}(\mathrm {d}\rho ) \\&= \int _{SO(n)}Q(\vartheta L)^l\vartheta \int _{SO(L^\perp )} (\rho u)^{s}\, \nu ^{L^{\perp }}(\mathrm {d}\rho )\, \nu (\mathrm {d}\vartheta ). \end{aligned}$$

Lemma 7, applied in \(L^\perp \) with \(\text {dim}(L^\perp )\ge 1\), yields

$$\begin{aligned} \int _{SO(L^\perp )} (\rho u)^{s}\, \nu ^{L^{\perp }}(\mathrm {d}\rho ) = \frac{1}{\omega _{n - j}} \int _{{\mathbb {S}}^{n-1}\cap L^\perp } v^{s} \, \mathcal {H}^{n - j - 1} (\mathrm {d}v) = 2 \frac{\omega _{n - j + s}}{\omega _{s + 1} \omega _{n - j}} \,Q(L^\perp )^{\frac{s}{2}}, \end{aligned}$$

and hence (applying a transformation to the integration with respect to \(\vartheta \)) we get

$$\begin{aligned} I&= 2 \frac{\omega _{n - j + s}}{\omega _{s + 1} \omega _{n - j}} \int _{G(n, j)} Q(U)^l Q(U^\perp )^{\frac{s}{2}} \, \nu _j(\mathrm {d}U). \end{aligned}$$

Now Lemma 10 yields the assertion. \(\square \)

The following lemma will be required in Sect. 5.3.

Lemma 12

Let \(u, v \in \mathbb {S}^{n-1}\), \(i, t \in \mathbb {N}_0\) and \(n\ge 1\). Then

$$\begin{aligned} \int _{SO(n)} (\rho v)^{i} \langle u, \rho v \rangle ^t \, \nu (\mathrm {d}\rho ) = \frac{\Gamma \left( \frac{n}{2}\right) \Gamma (t + 1)}{2^t \sqrt{\pi }\Gamma \left( \frac{n + i + t}{2}\right) } \sum _{x = \left( \frac{i - t}{2}\right) ^+}^{\lfloor \frac{i}{2} \rfloor } \left( {\begin{array}{c}i\\ 2x\end{array}}\right) \frac{\Gamma \left( x + \frac{1}{2}\right) }{\Gamma \left( \frac{t - i}{2} + x + 1\right) } u^{i - 2x} Q^x, \end{aligned}$$

if \(i + t\) is even. The same relation holds if the integration is extended over \(O(n)\). If \(i + t\) is odd and \(n\ge 2\) (or \(n=1\) and the integration is extended over \(O(1)\)), then the integral on the left side vanishes.

Proof

First, we assume that \(n\ge 2\). Let I denote the integral we are interested in. By symmetry, \(I = 0\) if \(i + t\) is odd. Thus, in the following we assume that \(i + t\) is even. Applying the transformation

$$\begin{aligned} f: [-1,1] \times ({\mathbb {S}}^{n-1}\cap u^\perp ) \rightarrow {\mathbb {S}}^{n-1}, \, (z, w) \mapsto z u + \sqrt{1 - z^2} w, \end{aligned}$$

with Jacobian \(\mathcal {J}f(z,w) = \sqrt{1-z^2}^{n - 3}\) to the integral I, we get

$$\begin{aligned} I&= \frac{1}{\omega _{n}} \int _{{\mathbb {S}}^{n-1}} v^{i} \langle u, v \rangle ^t \, \mathcal {H}^{n - 1} (\mathrm {d}v) \\&= \frac{1}{\omega _{n}} \int _{-1}^1 \int _{{\mathbb {S}}^{n-1}\cap u^\perp }\left( 1 - z^2 \right) ^{\frac{n - 3}{2}} \left( z u + \sqrt{1 - z^2} w \right) ^{i}{{\left\langle u, z u + \sqrt{1 - z^2} w \right\rangle }}^{t} \, \mathcal {H}^{n - 2}(\mathrm {d}w) \, \mathrm {d}z. \end{aligned}$$

Binomial expansion of \((zu + \sqrt{1 - z^2}w)^i\) yields

$$\begin{aligned} I&= \frac{1}{\omega _{n}} \sum _{m = 0}^{i} \left( {\begin{array}{c}i\\ m\end{array}}\right) u^{i - m} \underbrace{\int _{-1}^1 z^{t + i - m} \left( 1 - z^2 \right) ^{\frac{n + m - 3}{2}} \, \mathrm {d}z}_{ = \mathbb {1} \{ m \text { even} \} B\left( \frac{t + i - m + 1}{2}, \frac{n + m - 1}{2}\right) } \int _{{\mathbb {S}}^{n-1}\cap u^\perp } w^{m}\, \mathcal {H}^{n - 2}(\mathrm {d}w), \end{aligned}$$

where \(B(\cdot ,\cdot )\) denotes the Beta function. From Lemma 7 applied to the integration with respect to w, we obtain

$$\begin{aligned} I&= \frac{\Gamma \left( \frac{n}{2}\right) }{\pi \Gamma \left( \frac{n + i + t}{2}\right) } \sum _{m = 0}^{\lfloor \frac{i}{2} \rfloor } \left( {\begin{array}{c}i\\ 2m\end{array}}\right) \Gamma \left( m + \tfrac{1}{2}\right) \Gamma \left( \tfrac{t + i + 1}{2} - m\right) u^{i - 2m} Q(u^\perp )^{m}. \end{aligned}$$

Since \(Q(u^\perp ) = Q - u^2\), binomial expansion yields

$$\begin{aligned} Q(u^\perp )^{m}&= \sum _{x = 0}^{m} (-1)^{m + x} \left( {\begin{array}{c}m\\ x\end{array}}\right) u^{2m - 2x} Q^{x}. \end{aligned}$$

Legendre’s duplication formula gives

$$\begin{aligned} \left( {\begin{array}{c}i\\ 2m\end{array}}\right) \left( {\begin{array}{c}m\\ x\end{array}}\right) \Gamma \left( m + \tfrac{1}{2}\right)&= \left( {\begin{array}{c}i\\ 2x\end{array}}\right) \Gamma \left( x + \tfrac{1}{2}\right) \left( {\begin{array}{c}\lfloor \frac{i}{2} \rfloor - x\\ m - x\end{array}}\right) \frac{\Gamma \left( \lfloor \frac{i + 1}{2} \rfloor - x + \frac{1}{2}\right) }{\Gamma \left( \lfloor \frac{i + 1}{2} \rfloor - m + \frac{1}{2}\right) }, \end{aligned}$$

and thus, we obtain by a change of the order of summation

$$\begin{aligned} I&= \frac{\Gamma \left( \frac{n}{2}\right) }{\pi \Gamma \left( \frac{n + i + t}{2}\right) } \sum _{x = 0}^{\lfloor \frac{i}{2} \rfloor } \left( {\begin{array}{c}i\\ 2x\end{array}}\right) \Gamma \left( x + \tfrac{1}{2}\right) \Gamma \left( \lfloor \tfrac{i + 1}{2} \rfloor - x + \tfrac{1}{2}\right) u^{i - 2x} Q^{x} \\&\quad \times \sum _{m = x}^{\lfloor \frac{i}{2} \rfloor } (-1)^{m + x} \left( {\begin{array}{c}\lfloor \frac{i}{2} \rfloor - x\\ m - x\end{array}}\right) \frac{\Gamma \left( \tfrac{t + i + 1}{2} - m\right) }{\Gamma \left( \lfloor \frac{i + 1}{2} \rfloor - m + \frac{1}{2}\right) }. \end{aligned}$$

We denote the sum with respect to m by \(S_1\). An index shift by x, applied to \(S_1\), yields

$$\begin{aligned} S_1&= \sum _{m = 0}^{\lfloor \frac{i}{2} \rfloor - x} (-1)^{m} \left( {\begin{array}{c}\lfloor \frac{i}{2} \rfloor - x\\ m\end{array}}\right) \frac{\Gamma \left( \tfrac{t + i + 1}{2} - x - m\right) }{\Gamma \left( \lfloor \frac{i + 1}{2} \rfloor - x - m + \frac{1}{2}\right) }. \end{aligned}$$

Now we conclude from relation (A.1’) that

$$\begin{aligned} S_1&= (-1)^{\lfloor \frac{i}{2} \rfloor - x} \sum _{m = 0}^{\lfloor \frac{i}{2} \rfloor - x} (-1)^{m} \left( {\begin{array}{c}\lfloor \frac{i}{2} \rfloor - x\\ m\end{array}}\right) \frac{\Gamma \left( \frac{t + i + 1}{2} - \lfloor \frac{i}{2} \rfloor + m\right) }{\Gamma \left( \lfloor \frac{i + 1}{2} \rfloor - \lfloor \frac{i}{2} \rfloor + m + \frac{1}{2}\right) } \\&= (-1)^{\lfloor \frac{i}{2} \rfloor - x} \frac{\Gamma \left( \frac{t + i + 1}{2} - \lfloor \frac{i}{2} \rfloor \right) \Gamma \left( \lfloor \frac{i + 1}{2} \rfloor + \lfloor \frac{i}{2} \rfloor - \frac{t + i + 1}{2} - x + \frac{1}{2}\right) }{\Gamma \left( \lfloor \frac{i + 1}{2} \rfloor - x + \frac{1}{2}\right) \Gamma \left( \lfloor \frac{i + 1}{2} \rfloor - \frac{t + i + 1}{2} + \frac{1}{2}\right) } \\&= \overbrace{(-1)^{i + \lfloor \frac{i + 1}{2} \rfloor + \lfloor \frac{i}{2} \rfloor }}^{ = (-1)^{2i} = 1} \frac{\overbrace{\Gamma \left( \tfrac{t + i + 1}{2} - \lfloor \tfrac{i}{2} \rfloor \right) \Gamma \left( \tfrac{t + i + 1}{2} - \lfloor \tfrac{i + 1}{2} \rfloor + \tfrac{1}{2}\right) }^{ \scriptstyle = \Gamma \left( \frac{t + 1}{2}\right) \Gamma \left( \frac{t}{2} + 1\right) }}{\Gamma \left( \lfloor \frac{i + 1}{2} \rfloor - x + \frac{1}{2}\right) \Gamma \left( \frac{t - i}{2} + x + 1\right) }, \end{aligned}$$

where we used (5) with \(c = \frac{t + i + 1}{2} - \lfloor \frac{i + 1}{2} \rfloor - \frac{1}{2}\in \mathbb {N}_{0}\) and \( m = i - \lfloor \frac{i + 1}{2} \rfloor - x \in \mathbb {N}_{0}\). We notice that \(S_1 = 0\) if \(x < \frac{i - t}{2} \). Thus we obtain the assertion by another application of Legendre’s duplication formula.

It remains to confirm the assertion if \(n=1\) and \(i+t\) is even (all other assertions are easy to check). In this case, \(u=\pm v\) and therefore the left-hand side of the asserted equation equals \((\pm 1)^t v^i\). Using Legendre’s duplication formula repeatedly and relation (A.1), we see that the right-hand side equals \((\pm 1)^i v^{i} = (\pm 1)^t v^{i}\), which confirms the assertion. \(\square \)
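
Since \(\nu \) is the Haar probability measure, \(\rho v\) is uniformly distributed on \({\mathbb {S}}^{n-1}\), so Lemma 12 can be tested by Monte Carlo integration over the sphere. The following sketch does this for \(n = 3\) and \(i = t = 2\), where the right side evaluates to \(\tfrac{2}{15} u^{2} + \tfrac{1}{15} Q\) (as matrices, \(\tfrac{2}{15} u u^{T} + \tfrac{1}{15}\mathrm {I}\)); the code recomputes the right side from the stated formula rather than from these simplified values.

```python
import numpy as np
from math import gamma, pi, comb, sqrt

rng = np.random.default_rng(2)
n, i, t = 3, 2, 2
u = np.array([0.0, 0.0, 1.0])

# right side of Lemma 12 in the rank-two case i = 2 (u^{i-2x} Q^x -> u u^T for x = 0, I for x = 1)
pref = gamma(n / 2) * gamma(t + 1) / (2 ** t * sqrt(pi) * gamma((n + i + t) / 2))
rhs = np.zeros((n, n))
for x in range(max((i - t) // 2, 0), i // 2 + 1):
    coeff = comb(i, 2 * x) * gamma(x + 0.5) / gamma((t - i) / 2 + x + 1)
    rhs += pref * coeff * (np.eye(n) if x == 1 else np.outer(u, u))

# left side: rho v is uniform on S^{n-1} under the Haar measure
w = rng.standard_normal((200_000, n))
w /= np.linalg.norm(w, axis=1, keepdims=True)
lhs = np.einsum('k,ki,kj->ij', (w @ u) ** t, w, w) / len(w)
assert np.allclose(lhs, rhs, atol=5e-3)
```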

The next lemma will be useful in the proof of Proposition 14.

Lemma 13

Let \(j, k, n \in \mathbb {N}_0\) with \(j + k \le n\), \(n\ge 1\), and \(U \in G(n, j)\). Then, for any integrable function \(f: G(U, j + k) \rightarrow \mathbb {R}\),

$$\begin{aligned} \int _{G(U^\perp ,k)} f(U + L) \, \nu ^{U^\perp }_k(\mathrm {d}L) = \int _{G(U, j + k)} f(L) \, \nu ^{U}_{j + k}(\mathrm {d}L). \end{aligned}$$

Proof

We consider the map \(H: G(U^\perp , k) \rightarrow G(U, j + k)\), \( L \mapsto U + L\), which is well defined, since \(\dim (U \cap L) = 0\) and hence \(\dim (L + U) = j + k\) for all \(L \in G({U^\perp }, k)\). It is sufficient to show that \(H(\nu ^{U^\perp }_k) = \nu ^{U}_{j + k} \), where \(H(\nu ^{U^\perp }_k)\) is the image measure of \(\nu ^{U^\perp }_k\) under H. Since \(H(\nu ^{U^\perp }_k)\) and \(\nu ^{U}_{j + k}\) are probability measures, and \(\nu ^{U}_{j + k}\) is \(SO(U^\perp )\) invariant by definition, it is sufficient to observe that \(H(\nu ^{U^\perp }_k)\) is \(SO(U^\perp )\) invariant. \(\square \)

The following proposition, which is a generalization of Lemma 9 in the case \(a=2\), will be applied at the end of Sect. 5.3. Its proof uses several of the lemmas provided above.

Proposition 14

Let \(F \in G(n, k)\) with \(0\le j \le k \le n\) and \(m,l\in \mathbb {N}_0\). Then

$$\begin{aligned}&\int _{G(n, n - k + j)} [F, L]^2 Q(L)^m Q(F \cap L)^l \, \nu _{n - k + j} (\mathrm {d}L) \\&\quad = \frac{(n - k + j)! k!}{n! j!} \frac{\Gamma \left( \frac{n}{2} + 1\right) \Gamma \left( \frac{j}{2} + l\right) \Gamma \left( \frac{k}{2}\right) }{\Gamma \left( \frac{n}{2} + m + 1\right) \Gamma \left( \frac{j}{2}\right) \Gamma \left( \frac{k - j}{2}\right) \Gamma \left( \frac{n - k + j}{2} + 1\right) } \\&\qquad \times \sum _{i = 0}^{m} \left( {\begin{array}{c}m\\ i\end{array}}\right) \frac{(l + i - 2)!}{(l - 2)!} \frac{\Gamma \left( \frac{k - j}{2} + i\right) \Gamma \left( \frac{n - k + j}{2} + m - i + 1\right) }{\Gamma \left( \frac{k}{2} + l + i\right) } Q^{m - i} Q(F)^{l + i}. \end{aligned}$$

For \(l \le 1\), the factor \(\frac{(l + i - 2)!}{(l - 2)!}\) in Proposition 14 is interpreted via relation (5), as discussed in Sect. 3. Moreover, \(\Gamma (l + j / 2 )/\Gamma ({j} / {2})\) is zero if \(j = 0\) and \(l \ne 0\), and equals one if \(j = l = 0\).

Proof

Let I denote the integral in which we are interested. If \(j = k\), all summands on the right side of the asserted equation are zero except for \(i = 0\). Thus it is easy to confirm the assertion. Now assume that \(0 \le j < k \le n\), hence \(n \ge 1\). If \(j = l = 0\), then the assertion follows as a special case of Lemma 9. If \(j = 0\) and \(l \ne 0\), then both sides of the asserted equation are zero.

In the following, we consider the remaining cases where \(0< j < k\). Then Lemma 5 yields

$$\begin{aligned} I = d_{n, j, k} \int _{G(F, j)} \int _{G(U, n - k + j)} [F, L]^{j + 2} Q(L)^m Q(F \cap L)^l \, \nu ^{U}_{n - k + j} (\mathrm {d}L) \, \nu ^F_{j} (\mathrm {d}U). \end{aligned}$$

For fixed \(U\in G(F, j)\), we have \(\dim (F \cap L) = j = \dim U\) for \(\nu ^U_{n - k + j}\)-almost all \(L \in G(U, n - k + j)\) and \(U \subset F \cap L\), hence \(U = F \cap L\), and therefore

$$\begin{aligned} I= d_{n, j, k} \int _{G(F, j)} Q(U)^l \int _{G(U, n - k + j)} [F, L]^{j + 2} Q(L)^m \, \nu ^{U}_{n - k + j} (\mathrm {d}L) \, \nu ^F_{j} (\mathrm {d}U). \end{aligned}$$

An application of Lemma 13 shows that

$$\begin{aligned} I= d_{n, j, k} \int _{G(F, j)} Q(U)^l \int _{G(U^\perp , n - k)} [F, U + L]^{j + 2} Q(U + L)^m \, \nu ^{U^\perp }_{n - k} (\mathrm {d}L) \, \nu ^F_{j} (\mathrm {d}U). \end{aligned}$$

As \(U \subset F\) and \(L \subset U^\perp \), we have \([F, U + L] = [F \cap U^\perp , L]^{(U^\perp )}\) and \(Q(U + L)^m = \big ( Q(U) + Q(L) \big )^m\). Expanding the latter yields

$$\begin{aligned} I&= d_{n, j, k} \sum _{\alpha = 0}^{m} \left( {\begin{array}{c}m\\ \alpha \end{array}}\right) \int _{G(F, j)} Q(U)^{l + m - \alpha } \\&\quad \times \int _{G(U^\perp , n - k)} \left( [F \cap U^\perp , L]^{(U^\perp )} \right) ^{j + 2} Q(L)^\alpha \, \nu ^{U^\perp }_{n - k} (\mathrm {d}L) \, \nu ^F_{j} (\mathrm {d}U). \end{aligned}$$

Observe that \(\text {dim}(U^\perp )=n-j>n-k\ge 0\), hence \(\text {dim}(U^\perp )\ge 1\). Therefore Lemma 9 can be used to see that the integral with respect to L can be expressed as

$$\begin{aligned}&e_{n - j, n - k, k - j, j + 2} \frac{\Gamma \left( \frac{n}{2} + 1\right) }{\Gamma \left( \frac{n - k + j}{2} + 1\right) \Gamma \left( \frac{n}{2} + 1 + \alpha \right) } \sum _{\beta = 0}^{\alpha } (-1)^{\beta } \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \\&\quad \times \frac{\Gamma \left( \frac{n - k + j}{2} + 1 + \alpha - \beta \right) \Gamma \left( \frac{k - j}{2} + \beta \right) \Gamma \left( \frac{j}{2} + 2\right) \Gamma \left( \frac{k - j}{2}\right) }{\Gamma \left( \frac{k - j}{2}\right) \Gamma \left( \frac{j}{2} + 2 - \beta \right) \Gamma \left( \frac{k - j}{2} + \beta \right) } Q(U^\perp )^{\alpha - \beta } Q(F \cap U^\perp )^{\beta }, \end{aligned}$$

and thus

$$\begin{aligned} I&= d_{n, j, k} e_{n - j, n - k, k - j, j + 2} \frac{\Gamma \left( \frac{n}{2} + 1\right) }{\Gamma \left( \frac{n - k + j}{2} + 1\right) } \\&\quad \times \sum _{\alpha = 0}^{m} \sum _{\beta = 0}^{\alpha } (-1)^{\beta } \left( {\begin{array}{c}m\\ \alpha \end{array}}\right) \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \frac{\Gamma \left( \frac{n - k + j}{2} + 1 + \alpha - \beta \right) \Gamma \left( \frac{j}{2} + 2\right) }{\Gamma \left( \frac{n}{2} + 1 + \alpha \right) \Gamma \left( \frac{j}{2} + 2 - \beta \right) } \\&\quad \times \int _{G(F, j)} Q(U)^{l + m - \alpha } Q(U^\perp )^{\alpha - \beta } Q(F \cap U^\perp )^{\beta } \, \nu ^F_{j} (\mathrm {d}U). \end{aligned}$$

Observing cancellations and using Legendre’s duplication formula, we get

$$\begin{aligned} d_{n, j, k} e_{n - j, n - k, k - j, j + 2}=\frac{(n - k + j)! k!}{n! j!}. \end{aligned}$$

Expanding \(Q(U^{\perp })^{\alpha - \beta } = (Q - Q(U))^{\alpha - \beta }\), we obtain

$$\begin{aligned} I&= \frac{(n - k + j)! k!}{n! j!} \frac{\Gamma \left( \frac{n}{2} + 1\right) \Gamma \left( \frac{j}{2} + 2\right) }{\Gamma \left( \frac{n - k + j}{2} + 1\right) } \sum _{\alpha = 0}^{m} \sum _{\beta = 0}^{\alpha } \sum _{i = 0}^{\alpha - \beta } (-1)^{\alpha + i} \left( {\begin{array}{c}m\\ \alpha \end{array}}\right) \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \left( {\begin{array}{c}\alpha - \beta \\ i\end{array}}\right) \\&\quad \times \frac{\Gamma \left( \frac{n - k + j}{2} + 1 + \alpha - \beta \right) }{\Gamma \left( \frac{n}{2} + 1 + \alpha \right) \Gamma \left( \frac{j}{2} + 2 - \beta \right) } Q^{i} \int _{G(F, j)} Q(U)^{l + m - \beta - i} Q(F \cap U^\perp )^{\beta } \, \nu ^F_{j} (\mathrm {d}U). \end{aligned}$$

Lemma 10, applied in F, yields

$$\begin{aligned} I&= \frac{(n - k + j)! k!}{n! j!} \frac{\Gamma \left( \frac{n}{2} + 1\right) \Gamma \left( \frac{k}{2}\right) \Gamma \left( \frac{j}{2} + 2\right) }{\Gamma \left( \frac{n - k + j}{2} + 1\right) \Gamma \left( \frac{j}{2}\right) \Gamma \left( \frac{k - j}{2}\right) }\\&\quad \times \sum _{\alpha = 0}^{m} \sum _{\beta = 0}^{\alpha } \sum _{i = 0}^{\alpha - \beta } (-1)^{\alpha + i}{\left( {\begin{array}{c}m\\ \alpha \end{array}}\right) \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \left( {\begin{array}{c}\alpha - \beta \\ i\end{array}}\right) } \\&\quad \times \frac{\Gamma \left( \frac{n - k + j}{2} + 1 + \alpha - \beta \right) }{\Gamma \left( \frac{n}{2} + 1 + \alpha \right) } \frac{\Gamma \left( \frac{j}{2} + l + m - \beta - i\right) \Gamma \left( \frac{k - j}{2} + \beta \right) }{\Gamma \left( \frac{k}{2} + l + m - i\right) \Gamma \left( \frac{j}{2} + 2 - \beta \right) } Q^{i} Q(F)^{l + m - i}. \end{aligned}$$

Using the relation

$$\begin{aligned} \left( {\begin{array}{c}m\\ \alpha \end{array}}\right) \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \left( {\begin{array}{c}\alpha - \beta \\ i\end{array}}\right) = \left( {\begin{array}{c}m\\ i\end{array}}\right) \left( {\begin{array}{c}m - i\\ \beta \end{array}}\right) \left( {\begin{array}{c}m - i - \beta \\ \alpha - i - \beta \end{array}}\right) \end{aligned}$$

and by a change of the order of summation, we conclude that

$$\begin{aligned} I&= \frac{(n - k + j)! k!}{n! j!} \frac{\Gamma \left( \frac{n}{2} + 1\right) \Gamma \left( \frac{k}{2}\right) \Gamma \left( \frac{j}{2} + 2\right) }{\Gamma \left( \frac{n - k + j}{2} + 1\right) \Gamma \left( \frac{j}{2}\right) \Gamma \left( \frac{k - j}{2}\right) } \sum _{i = 0}^{m} \left( {\begin{array}{c}m\\ i\end{array}}\right) Q^{i} Q(F)^{l + m - i} \\&\quad \times ~\frac{1}{\Gamma \left( \frac{k}{2} + l + m - i\right) } \sum _{\beta = 0}^{m - i} \left( {\begin{array}{c}m - i\\ \beta \end{array}}\right) \frac{\Gamma \left( \frac{j}{2} + l + m - \beta - i\right) \Gamma \left( \frac{k - j}{2} + \beta \right) }{ \Gamma \left( \frac{j}{2} + 2 - \beta \right) } \\&\quad \times \sum _{\alpha = i + \beta }^{m} (-1)^{\alpha + i} \left( {\begin{array}{c}m - i - \beta \\ \alpha - i - \beta \end{array}}\right) \frac{\Gamma \left( \frac{n - k + j}{2} + 1 + \alpha - \beta \right) }{\Gamma \left( \frac{n}{2} + 1 + \alpha \right) }. \end{aligned}$$

For the sum with respect to \(\alpha \), we obtain from relation (A.1’) that

$$\begin{aligned}&\sum _{\alpha = 0}^{m - i - \beta } (-1)^{\alpha + \beta } \left( {\begin{array}{c}m - i - \beta \\ \alpha \end{array}}\right) \frac{\Gamma \left( \frac{n - k + j}{2} + i + 1 + \alpha \right) }{\Gamma \left( \frac{n}{2} + i + \beta + 1 + \alpha \right) }\\&\qquad =(-1)^{\beta } \frac{\Gamma \left( \frac{n - k + j}{2} + i + 1\right) \Gamma \left( \frac{k - j}{2} + m - i\right) }{\Gamma \left( \frac{n}{2} + m + 1\right) \Gamma \left( \frac{k - j}{2} + \beta \right) }. \end{aligned}$$
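Since relation (A.1’) is only cited here, the displayed instance can at least be verified numerically. A minimal sketch in Python (the parameter values are merely illustrative):

```python
# Numerical check of the displayed instance of relation (A.1'):
# the alternating Gamma sum on the left against the closed form on the right.
from math import comb, gamma, isclose

def lhs(n, k, j, i, beta, m):
    return sum((-1) ** (a + beta) * comb(m - i - beta, a)
               * gamma((n - k + j) / 2 + i + 1 + a)
               / gamma(n / 2 + i + beta + 1 + a)
               for a in range(m - i - beta + 1))

def rhs(n, k, j, i, beta, m):
    return ((-1) ** beta * gamma((n - k + j) / 2 + i + 1)
            * gamma((k - j) / 2 + m - i)
            / (gamma(n / 2 + m + 1) * gamma((k - j) / 2 + beta)))

for params in [(5, 3, 1, 0, 1, 3), (7, 4, 2, 1, 0, 4), (6, 5, 2, 2, 1, 4)]:
    assert isclose(lhs(*params), rhs(*params), rel_tol=1e-10)
```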

Next, for the resulting sum with respect to \(\beta \), we obtain again from relation (A.1’) that

$$\begin{aligned} \sum _{\beta = 0}^{m - i} (-1)^{m + i + \beta } \left( {\begin{array}{c}m - i\\ \beta \end{array}}\right) \frac{\Gamma \left( \frac{j}{2} + l + \beta \right) }{\Gamma \left( \frac{j}{2} + 2 - m + i + \beta \right) }&= \frac{\Gamma \left( \frac{j}{2} + l\right) \Gamma \left( l + m - i - 1\right) }{\Gamma \left( \frac{j}{2} + 2\right) \Gamma (l - 1)}, \end{aligned}$$

where we first used \(j>0\) and then (5) (and distinguished the cases \(l = 0\), \(l = 1\), \(l \ge 2\)). Reversing the order of summation (that is, replacing i by \(m - i\)) in the resulting expression for I, the assertion of the proposition follows. \(\square \)

5 Proof of Theorem 1

We divide the proof into several steps. First, we treat the translative part of the kinematic integral, for which we refer to the proof of the translative integral formula for curvature measures. Then we consider two “boundary cases” separately. The third and main step requires the explicit calculation of a rotational integral for a tensor-valued function. Once this is accomplished, the proof is finished, except for the asserted description of the coefficients, which at this point are still given in terms of iterated sums of products of binomial coefficients and Gamma functions. In a final step, these coefficients are then shown to have the simple form provided in the statement of the theorem.

5.1 The translative part

The case \(j=n\) is easy to check directly (since then \(s=0\)). Hence we may assume that \(j\le n-1\) in the following. Let \(I_{1}\) denote the integral in which we are interested. We start by decomposing the measure \(\mu \) and by substituting the definition of \( \phi _{j}^{r,s,l} \) for polytopes to get

$$\begin{aligned} I_1&= \int _{G_n} \phi _{j}^{r,s,l} (P \cap g P', \beta \cap g \beta ') \, \mu ( \mathrm {d}g) \\&= \int _{SO(n)} \int _{\mathbb {R}^n} \phi _{j}^{r,s,l} (P \cap (\vartheta P' + t), \beta \cap (\vartheta \beta ' + t)) \, \mathcal {H}^n(\mathrm {d}t) \, \nu ( \mathrm {d}\vartheta ) \\&= c_{n, j}^{r, s, l}\frac{1}{\omega _{n - j}} \int _{SO(n)} \int _{\mathbb {R}^n} \sum _{G \in \mathcal {F}_j(P \cap (\vartheta P' + t))} Q(G)^l \int _{G \cap \beta \cap (\vartheta \beta ' + t)} x^r \, \mathcal {H}^j(\mathrm {d}x) \\&\quad \times \int _{N(P \cap (\vartheta P' + t), G) \cap {\mathbb {S}}^{n-1}} u^s \, \mathcal {H}^{n - j - 1} (\mathrm {d}u) \, \mathcal {H}^n(\mathrm {d}t) \, \nu ( \mathrm {d}\vartheta ). \end{aligned}$$

Let \(\vartheta \in SO(n)\) be fixed for the moment. Neglecting a set of translations \(t\in \mathbb {R}^n\) of measure zero, we can assume that the following is true (see [65, p. 241]). For every j-face \(G \in \mathcal {F}_j(P \cap (\vartheta P' + t))\), there are a unique \(k \in \{j, \ldots , n\}\), a unique \(F \in \mathcal {F}_k(P)\) and a unique \(F' \in \mathcal {F}_{n - k + j}( P')\) such that \(G = F \cap (\vartheta F'+t)\). Conversely, for every \(k \in \{j, \ldots , n\}\), every \(F \in \mathcal {F}_k(P)\) and every \(F' \in \mathcal {F}_{n - k + j}( P' )\), we have \(F \cap (\vartheta F'+t)\in \mathcal {F}_j(P \cap (\vartheta P' + t))\) for almost all \(t\in \mathbb {R}^n\) such that \(F \cap (\vartheta F'+t)\ne \emptyset \). Hence, we get

$$\begin{aligned} I_1&= c_{n, j}^{r, s, l}\frac{1}{\omega _{n - j}} \int _{SO(n)} \sum _{k = j}^n \sum _{F \in \mathcal {F}_k(P)} \sum _{F' \in \mathcal {F}_{n - k + j}(P')} Q(F^0 \cap (\vartheta F')^0)^l \nonumber \\&\quad \times \int _{N(P \cap (\vartheta P' + t), F \cap (\vartheta F' + t)) \cap {\mathbb {S}}^{n-1}} u^s \, \mathcal {H}^{n - j - 1} (\mathrm {d}u) \nonumber \\&\quad \times \int _{\mathbb {R}^n} \int _{F \cap (\vartheta F' + t) \cap \beta \cap (\vartheta \beta ' + t)} x^r \, \mathcal {H}^j(\mathrm {d}x)\, \mathcal {H}^n(\mathrm {d}t) \, \nu ( \mathrm {d}\vartheta ), \end{aligned}$$
(6)

where we use that the integral with respect to u is independent of the choice of a vector \(t \in \mathbb {R}^n\) such that \(\mathrm {relint \, }F \cap \mathrm {relint \, }(\vartheta F' + t) \ne \emptyset \). As the next step, we calculate the integral with respect to t. This can be done as in [65, pp. 241–242] (for details, see [83, Section 4.4.1]). Thus, we can rewrite (6) as

$$\begin{aligned} I_1&= c_{n, j}^{r, s, l}\frac{1}{\omega _{n - j}} \sum _{k = j}^n \sum _{F \in \mathcal {F}_k(P)} \sum _{F' \in \mathcal {F}_{n - k + j}(P')} \mathcal {H}^{n - k + j}(F' \cap \beta ')\\&\quad \times \int _{F \cap \beta } x^r \, \mathcal {H}^k(\mathrm {d}x) \int _{SO(n)} [F, \vartheta F'] \\&\quad \times Q \left( F^0 \cap (\vartheta F')^0 \right) ^l \int _{N(P \cap (\vartheta P' + t), F \cap (\vartheta F' + t)) \cap {\mathbb {S}}^{n-1}} u^s \, \mathcal {H}^{n - j - 1} (\mathrm {d}u) \, \nu ( \mathrm {d}\vartheta ). \end{aligned}$$

5.2 The cases \(k\in \{j,n\}\)

In the summation with respect to k, we have to consider the summands for \(k = j\) and \(k = n\) separately, since the application of some of the auxiliary results requires that \(j< k < n\). Starting with \(k = j\) and denoting the corresponding summand by \(S_{j}\), we get

$$\begin{aligned} S_{j}&= c_{n, j}^{r, s, l}\frac{1}{\omega _{n - j}} \sum _{F \in \mathcal {F}_j(P)} \sum _{F' \in \mathcal {F}_n(P')} \mathcal {H}^n (F' \cap \beta ') \int _{F \cap \beta } x^r \, \mathcal {H}^j (\mathrm {d}x) \int _{SO(n)} [F, \vartheta F'] \\&\quad \times Q \big ( F^0 \cap (\vartheta F')^0 \big )^l \int _{N(P \cap (\vartheta F' + t), F \cap (\vartheta F' + t)) \cap {\mathbb {S}}^{n-1}} u^s \, \mathcal {H}^{n - j - 1} (\mathrm {d}u) \, \nu ( \mathrm {d}\vartheta ) \\&= \phi _{j}^{r,s,l} (P, \beta ) \phi _{n} (P', \beta '). \end{aligned}$$

For \(k = n\), we denote the corresponding summand by \(S_{n}\) and conclude from Fubini’s theorem

$$\begin{aligned} S_{n}&= c_{n, j}^{r, s, l}\frac{1}{\omega _{n - j}} \sum _{F \in \mathcal {F}_n(P)} \sum _{F' \in \mathcal {F}_{j}(P')} \mathcal {H}^{j}(F' \cap \beta ') \int _{F \cap \beta } x^r \, \mathcal {H}^n(\mathrm {d}x) \int _{SO(n)} [F, \vartheta F'] \\&\quad \times Q \big ( F^0 \cap (\vartheta F')^0 \big )^l \int _{N(P \cap (\vartheta P' + t), P \cap (\vartheta F' + t)) \cap {\mathbb {S}}^{n-1}} u^s \, \mathcal {H}^{n - j - 1} (\mathrm {d}u) \, \nu ( \mathrm {d}\vartheta ) \\&= c_{n, j}^{r, s, l}\frac{1}{\omega _{n - j}} \int _{P \cap \beta } x^r \, \mathcal {H}^n(\mathrm {d}x) \sum _{F' \in \mathcal {F}_{j}(P')} \mathcal {H}^{j}(F' \cap \beta ') \\&\quad \times \int _{N(P', F') \cap {\mathbb {S}}^{n-1}} \int _{SO(n)} Q \left( \vartheta F' \right) ^l (\vartheta u)^s \, \nu ( \mathrm {d}\vartheta ) \, \mathcal {H}^{n - j - 1} (\mathrm {d}u). \end{aligned}$$

Applying Lemma 11 to the inner integral with respect to \(\vartheta \), we obtain

$$\begin{aligned} S_{n}&= \mathbb {1} \{ s \text { even} \} c_{n, j}^{r, s, l}{\frac{1}{\omega _{n - j}} \frac{\Gamma \left( \frac{n}{2}\right) \Gamma \left( \frac{j}{2} + l\right) \Gamma \left( \frac{s + 1}{2}\right) }{\sqrt{\pi }\Gamma \left( \frac{n + s}{2} + l\right) \Gamma \left( \frac{j}{2}\right) }} Q^{l + \frac{s}{2}} \int _{P \cap \beta } x^r \, \mathcal {H}^n(\mathrm {d}x) \\&\qquad \times \sum _{F' \in \mathcal {F}_{j}(P')} \mathcal {H}^{j}(F' \cap \beta ') \int _{N(P', F') \cap {\mathbb {S}}^{n-1}} \mathcal {H}^{n - j - 1} (\mathrm {d}u) \\&= c_{n, j}^{s} \, \phi _{n}^{r,0,\frac{s}{2} + l} (P, \beta ) \phi _{j} (P', \beta '), \end{aligned}$$

where

$$\begin{aligned} c_{n, j}^{s} :=\, \mathbb {1} \{ s \text { even} \} \frac{2 \omega _{n - j}}{s! \omega _{s + 1} \omega _{n - j + s}} = \mathbb {1} \{ s \text { even} \} \frac{1}{(2\pi )^{s} \left( \frac{s}{2}\right) !} \frac{\Gamma \left( \frac{n - j + s}{2}\right) }{\Gamma \left( \frac{n - j}{2}\right) }. \end{aligned}$$
(7)
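The equality of the two expressions in (7) follows from Legendre’s duplication formula once \(\omega _m\) is written as \(2 \pi ^{m/2}/\Gamma (m/2)\), the surface area of \({\mathbb {S}}^{m-1}\); this convention for \(\omega _m\) is assumed in the following quick numerical sanity check in Python:

```python
# Check that the two expressions for c_{n,j}^{s} in (7) agree for even s,
# assuming omega_m = 2 * pi**(m/2) / Gamma(m/2) (surface area of S^{m-1}).
from math import pi, gamma, factorial, isclose

def omega(m):
    return 2 * pi ** (m / 2) / gamma(m / 2)

def c_first(n, j, s):
    return 2 * omega(n - j) / (factorial(s) * omega(s + 1) * omega(n - j + s))

def c_second(n, j, s):
    return (gamma((n - j + s) / 2) / gamma((n - j) / 2)
            / ((2 * pi) ** s * factorial(s // 2)))

for n in range(2, 8):
    for j in range(n):          # j <= n - 1, as assumed in this section
        for s in (0, 2, 4):
            assert isclose(c_first(n, j, s), c_second(n, j, s), rel_tol=1e-10)
```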

Hence, we get

$$\begin{aligned} I_1&= c_{n, j}^{r, s, l}\frac{1}{\omega _{n - j}} \sum _{k = j + 1}^{n - 1} \sum _{F \in \mathcal {F}_k(P)} \sum _{F' \in \mathcal {F}_{n - k + j}(P')} \mathcal {H}^{n - k + j}(F' \cap \beta ') \int _{F \cap \beta } x^r \, \mathcal {H}^k(\mathrm {d}x) \\&\quad \times \int _{SO(n)} [F, \vartheta F'] Q\left( F^0 \cap (\vartheta F')^0 \right) ^l \\&\quad \times \int _{N(P \cap (\vartheta P' + t), F \cap (\vartheta F' + t)) \cap {\mathbb {S}}^{n-1}} u^s \, \mathcal {H}^{n - j - 1} (\mathrm {d}u) \, \nu ( \mathrm {d}\vartheta ) \\&\quad + \phi _{j}^{r,s,l} (P, \beta ) \phi _{n} (P', \beta ') + c_{n, j}^{s} \, \phi _{n}^{r,0,\frac{s}{2} + l} (P, \beta ) \phi _{j} (P', \beta '). \end{aligned}$$

For any \(t \in \mathbb {R}^n\) such that \(\mathrm {relint \, }F \cap \mathrm {relint \, }(\vartheta F' + t) \ne \emptyset \), we obtain from [65, Theorem 2.2.1]

$$\begin{aligned} N\left( P \cap (\vartheta P' + t), F \cap (\vartheta F' + t) \right)&= N(P, F) + \vartheta N(P', F'), \end{aligned}$$

and thus

$$\begin{aligned} I_1&= c_{n, j}^{r, s, l}\frac{1}{\omega _{n - j}} \sum _{k = j + 1}^{n - 1} \sum _{F \in \mathcal {F}_k(P)} \sum _{F' \in \mathcal {F}_{n - k + j}(P')} \mathcal {H}^{n - k + j}(F' \cap \beta ') \int _{F \cap \beta } x^r \, \mathcal {H}^k(\mathrm {d}x) \nonumber \\&\quad \times \int _{SO(n)} [F, \vartheta F'] Q\left( F^0 \cap (\vartheta F')^0 \right) ^l \int _{(N(P, F) + \vartheta N(P', F')) \cap {\mathbb {S}}^{n-1}} u^s \, \mathcal {H}^{n - j - 1} (\mathrm {d}u) \, \nu ( \mathrm {d}\vartheta ) \nonumber \\&\quad + \phi _{j}^{r,s,l} (P, \beta ) \phi _{n} (P', \beta ') + c_{n, j}^{s} \, \phi _{n}^{r,0,\frac{s}{2} + l} (P, \beta ) \phi _{j} (P', \beta '). \end{aligned}$$
(8)

In the following, we denote by \(C(\omega ) {:=\,} \{ \lambda x \in \mathbb {R}^n : x \in \omega , \lambda > 0\}\) the cone spanned by a set \(\omega \subset \mathbb {S}^{n-1}\). Moreover, if F is a face of P, we write \(F^\perp \) for the linear subspace orthogonal to \(F^0\). For the next and main step, we may assume that \(j\le n-2\) (since \(j<k\le n-1\)). We define the mapping

$$\begin{aligned} J: \mathcal {B}(F^\perp \cap {\mathbb {S}}^{n-1}) \times \mathcal {B}(F'^\perp \cap {\mathbb {S}}^{n-1}) \rightarrow \mathbb {T}^{2l + s} \end{aligned}$$

by

$$\begin{aligned} J(\omega , \omega ')&{:=\,} \int _{SO(n)} [F, \vartheta F'] Q\left( F^0 \cap (\vartheta F')^0 \right) ^l \int _{(C(\omega ) + \vartheta C(\omega ')) \cap {\mathbb {S}}^{n-1}} u^s \, \mathcal {H}^{n - j - 1} (\mathrm {d}u) \, \nu ( \mathrm {d}\vartheta ) \end{aligned}$$

for \(\omega \in \mathcal {B}(F^\perp \cap {\mathbb {S}}^{n-1})\) and \(\omega ' \in \mathcal {B}(F'^\perp \cap {\mathbb {S}}^{n-1})\). Then J is a finite signed measure on \(\mathcal {B}(F^\perp \cap {\mathbb {S}}^{n-1})\) in the first variable and a finite signed measure on \(\mathcal {B}(F'^\perp \cap {\mathbb {S}}^{n-1})\) in the second variable, but this will not be needed in the following. In fact, we could specialize to the case \(\omega =N(P,F)\cap {\mathbb {S}}^{n-1}\) and \(\omega '=N(P',F')\cap {\mathbb {S}}^{n-1}\) throughout the proof.

Since \([F, \vartheta F'] Q( F^0 \cap (\vartheta F')^0 )^l\) depends only on the linear subspaces \(F^0\) and \((\vartheta F')^0\), we can assume that \(F\in G(n,k)\) and \(F'\in G(n,n-k+j)\) for determining \(J(\omega ,\omega ')\). Moreover, for \(\nu \)-almost all \(\vartheta \in SO(n)\), the linear subspaces \(F^\perp \) and \(\vartheta (F'^\perp )\) are linearly independent. This will be tacitly used in the following.

5.3 The rotational part

In this section, \(\omega ,\omega '\) are fixed and as described above. Due to the right invariance of \(\nu \) and since \(\rho F'=F'\) for \(\rho \in SO(F'^{\perp })\), we obtain

$$\begin{aligned} J(\omega , \omega ')&= \int _{SO(n)} [F, \vartheta F'] Q(F \cap \vartheta F')^l \\&\quad \times \int _{SO(F'^\perp )} \int _{(C(\omega ) + \vartheta \rho C(\omega ')) \cap {\mathbb {S}}^{n-1}} u^s \, \mathcal {H}^{n - j - 1} (\mathrm {d}u) \, \nu ^{F'^\perp }(\mathrm {d}\rho ) \, \nu ( \mathrm {d}\vartheta ), \end{aligned}$$

where we averaged over all such rotations \(\rho \in SO(F'^{\perp })\) and applied Fubini’s theorem. Next, we introduce a multiple \(J_1\) of the inner integral of \(J(\omega ,\omega ')\) and rewrite it by means of a polar coordinate transformation, that is,

$$\begin{aligned} J_1 :=&\, \tfrac{1}{2} \Gamma ({\textstyle \frac{n - j + s}{2}}) \int _{(C(\omega ) + \vartheta \rho C(\omega ')) \cap {\mathbb {S}}^{n-1}} u^s \, \mathcal {H}^{n - j - 1}(\mathrm {d}u)\\ =&\, \int _{C(\omega ) + \vartheta \rho C(\omega ')} x^s e^{-\Vert x\Vert ^2} \, \mathcal {H}^{n - j}(\mathrm {d}x). \end{aligned}$$
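Indeed, for \(\nu \)-almost all \(\vartheta \) the cone \(C(\omega ) + \vartheta \rho C(\omega ')\) is contained in the \((n - j)\)-dimensional subspace \(F^\perp + \vartheta (F'^\perp )\), so that spherical coordinates \(x = r u\) with \(r > 0\) and \(u \in (C(\omega ) + \vartheta \rho C(\omega ')) \cap {\mathbb {S}}^{n-1}\) give

$$\begin{aligned} \int _{C(\omega ) + \vartheta \rho C(\omega ')} x^s e^{-\Vert x\Vert ^2} \, \mathcal {H}^{n - j}(\mathrm {d}x) = \int _{(C(\omega ) + \vartheta \rho C(\omega ')) \cap {\mathbb {S}}^{n-1}} u^s \int _0^\infty r^{n - j + s - 1} e^{-r^2} \, \mathrm {d}r \, \mathcal {H}^{n - j - 1}(\mathrm {d}u), \end{aligned}$$

where the inner integral equals \(\tfrac{1}{2} \Gamma \big (\tfrac{n - j + s}{2}\big )\).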

The bijective transformation (recall that we assume that \(\vartheta \in SO(n)\) is such that \(F^\perp \) and \(\vartheta (F'^\perp )\) are linearly independent subspaces)

$$\begin{aligned}&T: \omega \times \omega ' \times (0,\frac{\pi }{2}) \times (0, \infty ) \rightarrow C(\omega ) + \vartheta \rho C(\omega '), \\&\quad (u, v, \alpha , r) \mapsto r\cos (\alpha ) u + r\sin (\alpha ) \vartheta \rho v, \end{aligned}$$

has the Jacobian

$$\begin{aligned} \mathcal {J}T(u,v,\alpha , r) = r^{n-j-1}(\cos (\alpha ))^{n - k - 1} (\sin (\alpha ))^{k - j - 1} [ F, \vartheta F']. \end{aligned}$$

Hence, we obtain

$$\begin{aligned} J_1&= [ F, \vartheta F'] \int _{\omega } \int _{\omega '} \int _0^{\frac{\pi }{2}} \int _0^\infty (\cos (\alpha ))^{n - k - 1} ( \sin (\alpha ))^{k - j - 1} e^{-\Vert r \cos (\alpha ) u + r \sin (\alpha ) \vartheta \rho v \Vert ^2} \\&\quad \times \left( r \cos (\alpha ) u + r \sin (\alpha ) \vartheta \rho v \right) ^s r^{n-j-1} \, \mathrm {d}r \, \mathrm {d}\alpha \, \mathcal {H}^{k - j - 1}(\mathrm {d}v) \, \mathcal {H}^{n - k - 1}(\mathrm {d}u). \end{aligned}$$

Using binomial expansion, we get

$$\begin{aligned} J_1&= [ F, \vartheta F'] \int _{\omega } \int _{\omega '} \sum _{i = 0}^{s} \left( {\begin{array}{c}s\\ i\end{array}}\right) u^{s - i} (\vartheta \rho v)^{i} \int _0^{\frac{\pi }{2}} \cos (\alpha )^{n - k + s - i - 1} \sin (\alpha )^{k - j + i - 1} \\&\quad \times \int _0^\infty r^{n - j + s - 1} e^{-r^2 (1 + { \scriptstyle 2 \cos (\alpha )\sin (\alpha )} \langle u, \vartheta \rho v \rangle )} \, \mathrm {d}r \, \mathrm {d}\alpha \, \mathcal {H}^{k - j - 1}(\mathrm {d}v) \, \mathcal {H}^{n - k - 1}(\mathrm {d}u). \end{aligned}$$

We factor the exponential function and expand the second factor as a power series to obtain

$$\begin{aligned} J_1&= [ F, \vartheta F'] \int _{\omega } \int _{\omega '} \sum _{i = 0}^{s} \left( {\begin{array}{c}s\\ i\end{array}}\right) u^{s - i} (\vartheta \rho v)^{i}\\&\quad \times \int _0^{\frac{\pi }{2}} \cos (\alpha )^{n - k + s - i - 1} \sin (\alpha )^{k - j + i - 1}\int _0^\infty r^{n - j + s - 1} \\&\quad \times e^{-r^2} \sum _{t = 0}^{\infty } \frac{\left( -2r^{2}\cos (\alpha )\sin (\alpha ) \langle u, \vartheta \rho v \rangle \right) ^{t}}{t!} \, \mathrm {d}r \, \mathrm {d}\alpha \, \mathcal {H}^{k - j - 1}(\mathrm {d}v) \, \mathcal {H}^{n - k - 1}(\mathrm {d}u) \\&= [ F, \vartheta F'] \int _{\omega } \int _{\omega '} \sum _{i = 0}^{s} \left( {\begin{array}{c}s\\ i\end{array}}\right) u^{s - i} (\vartheta \rho v)^{i} \sum _{t = 0}^{\infty } \frac{(-2 \langle u, \vartheta \rho v \rangle )^{t}}{t!} \int _0^\infty r^{n - j + s + 2t - 1} e^{-r^2} \, \mathrm {d}r \\&\quad \times \int _0^{\frac{\pi }{2}} \cos (\alpha )^{n - k + s - i + t - 1} \sin (\alpha )^{k - j + i + t - 1} \, \mathrm {d}\alpha \, \mathcal {H}^{k - j - 1}(\mathrm {d}v) \, \mathcal {H}^{n - k - 1}(\mathrm {d}u), \end{aligned}$$

where we interchanged the integrations with respect to r and \(\alpha \) with the series with respect to t by dominated convergence, which applies for almost all \((\vartheta ,u)\). In fact, for \(\nu \otimes \mathcal {H}^{n - k - 1}\)-almost all pairs \((\vartheta , u) \in SO(n) \times F^{\perp }\) we have \(\vartheta ^{-1} u \not \in F'^{\perp }\), and hence \(|\langle u, \vartheta \rho v \rangle | \le \Vert p_{\vartheta F'^\perp } (u) \Vert < 1\) for all \(\rho \in SO(F'^{\perp })\) and \(v \in F'^{\perp } \cap {\mathbb {S}}^{n-1}\), so that the series converges absolutely and uniformly and yields an integrable upper bound. The integrations with respect to r and \(\alpha \) can now be simplified to

$$\begin{aligned} J_1&= \frac{1}{4} [ F, \vartheta F'] \int _{\omega } \int _{\omega '} \sum _{i = 0}^{s} \left( {\begin{array}{c}s\\ i\end{array}}\right) u^{s - i} (\vartheta \rho v)^{i} \sum _{t = 0}^{\infty } \frac{(-2 \langle u, \vartheta \rho v \rangle )^{t}}{t!} \\&\quad \times \Gamma \big (\tfrac{n - k + s - i + t}{2}\big ) \Gamma \big (\tfrac{k - j + i + t}{2}\big ) \, \mathcal {H}^{k - j - 1}(\mathrm {d}v) \, \mathcal {H}^{n - k - 1}(\mathrm {d}u). \end{aligned}$$
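Here we used the classical integrals

$$\begin{aligned} \int _0^\infty r^{n - j + s + 2t - 1} e^{-r^2} \, \mathrm {d}r = \tfrac{1}{2} \Gamma \big (\tfrac{n - j + s}{2} + t\big ), \qquad \int _0^{\frac{\pi }{2}} \cos (\alpha )^{a - 1} \sin (\alpha )^{b - 1} \, \mathrm {d}\alpha = \frac{\Gamma \left( \frac{a}{2}\right) \Gamma \left( \frac{b}{2}\right) }{2 \Gamma \left( \frac{a + b}{2}\right) } \end{aligned}$$

with \(a = n - k + s - i + t\) and \(b = k - j + i + t\); since \(\frac{a + b}{2} = \frac{n - j + s}{2} + t\), the Gamma function arising from the radial integral cancels against the denominator of the second integral, which explains the factor \(\frac{1}{4}\) and the two remaining Gamma functions.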

The series with respect to t again converges absolutely for almost all \((\vartheta ,u)\). Therefore, Fubini’s theorem yields

$$\begin{aligned} J(\omega , \omega ')&= \frac{1}{2\Gamma \left( \frac{n - j + s}{2}\right) } \int _{\omega } \int _{SO(n)} [F, \vartheta F']^2 \, Q(F \cap \vartheta F')^l \\&\quad \times \sum _{i = 0}^{s} \left( {\begin{array}{c}s\\ i\end{array}}\right) u^{s - i} \sum _{t = 0}^{\infty } \frac{(-2)^t}{t!} \Gamma \big (\tfrac{n - k + s - i + t}{2}\big ) \Gamma \big (\tfrac{k - j + i + t}{2}\big ) \\&\quad \times \int _{\omega '} \int _{SO(F'^\perp )} (\vartheta \rho v)^{i} \langle u, \vartheta \rho v \rangle ^{t} \, \nu ^{F'^\perp }(\mathrm {d}\rho ) \, \mathcal {H}^{k - j - 1}(\mathrm {d}v) \, \nu ( \mathrm {d}\vartheta ) \, \mathcal {H}^{n - k - 1}(\mathrm {d}u). \end{aligned}$$

By the right invariance of the measure \(\nu ^{F'^{\perp }}\), the integrand is now independent of the specific choice of \(v \in F'^{\perp } \cap {\mathbb {S}}^{n-1}\). Thus, for an arbitrary but fixed unit vector \(v_{0} \in F'^{\perp } \cap {\mathbb {S}}^{n-1}\) we obtain

$$\begin{aligned} J(\omega , \omega ')&= \frac{1}{2\Gamma \left( \frac{n - j + s}{2}\right) } \, \mathcal {H}^{k - j - 1}(\omega ') \int _{\omega } \int _{SO(n)} [F, \vartheta F']^2 \, Q(F \cap \vartheta F')^l \\&\quad \times \sum _{i = 0}^{s} \left( {\begin{array}{c}s\\ i\end{array}}\right) u^{s - i} \sum _{t = 0}^{\infty } \frac{(-2)^t}{t!} \Gamma \big (\tfrac{n - k + s - i + t}{2}\big ) \Gamma \big (\tfrac{k - j + i + t}{2}\big ) \\&\quad \times \int _{SO(F'^\perp )} (\vartheta \rho v_{0})^{i} \langle u, \vartheta \rho v_{0} \rangle ^{t} \, \nu ^{F'^\perp }(\mathrm {d}\rho ) \, \nu ( \mathrm {d}\vartheta ) \, \mathcal {H}^{n - k - 1}(\mathrm {d}u). \end{aligned}$$

We denote the integral with respect to \(\rho \) by \(J_{2}\) and obtain

$$\begin{aligned} J_2&= \Vert p_{\vartheta F'^\perp } (u) \Vert ^t \, \vartheta \int _{SO(F'^\perp )} (\rho v_{0})^{i} \Big \langle \pi _{F'^\perp } (\vartheta ^{-1} u), \rho v_{0} \Big \rangle ^t \, \nu ^{F'^\perp }(\mathrm {d}\rho ) \end{aligned}$$

if \(\vartheta ^{-1}u\notin F'\) (which, by an argument analogous to the one above, holds for almost all pairs \((\vartheta , u)\)). We note that the integration over \(SO(F'^\perp )\) yields the same value as an integration over all \(\rho \in O(n)\) which fix \(F'^0\) pointwise, since \(\text {dim}(F'^\perp )\in \{1,\ldots ,n-1\}\) and \(n\ge 2\). Hence, an application of Lemma 12 in \(F'^\perp \) yields

$$\begin{aligned} J_2&= \mathbb {1}\{ i + t \text { even} \} \frac{\Gamma \left( \frac{k - j}{2}\right) }{\sqrt{\pi }} \frac{\Gamma (t + 1)}{2^t \Gamma \left( \frac{k - j + i + t}{2}\right) } \Vert p_{\vartheta F'^\perp } (u) \Vert ^t \\&\quad \times \sum _{x = (\frac{i - t}{2})^+ }^{\lfloor \frac{i}{2} \rfloor } \left( {\begin{array}{c}i\\ 2x\end{array}}\right) \frac{\Gamma (x + \frac{1}{2}) }{\Gamma \left( \frac{t - i}{2} + x + 1\right) } \pi _{\vartheta F'^\perp } (u)^{i - 2x} Q(\vartheta F'^\perp )^{x}. \end{aligned}$$

Thus we conclude

$$\begin{aligned} J(\omega , \omega ')&= \frac{\Gamma \left( \frac{k - j}{2}\right) }{2 \sqrt{\pi }\Gamma \left( \frac{n - j + s}{2}\right) } \mathcal {H}^{k - j - 1}(\omega ') \int _{\omega } \int _{SO(n)} [F, \vartheta F']^2 \, Q(F \cap \vartheta F')^l \\&\quad \times \sum _{i = 0}^{s} (-1)^{i} \left( {\begin{array}{c}s\\ i\end{array}}\right) u^{s - i} \sum _{t = 0}^{\infty } \mathbb {1}\{ i + t \text { even} \} \Gamma \big (\tfrac{n - k + s - i + t}{2}\big ) \Vert p_{\vartheta F'^\perp } (u) \Vert ^t \\&\quad \times \sum _{x = (\frac{i - t}{2})^+ }^{\lfloor \frac{i}{2} \rfloor } \left( {\begin{array}{c}i\\ 2x\end{array}}\right) \frac{\Gamma (x + \frac{1}{2}) }{\Gamma \left( \frac{t - i}{2} + x + 1\right) } \pi _{\vartheta F'^\perp } (u)^{i - 2x} Q(\vartheta F'^\perp )^{x} \, \nu ( \mathrm {d}\vartheta ) \, \mathcal {H}^{n - k - 1}(\mathrm {d}u), \end{aligned}$$

where we used that \((-1)^t = (-1)^i\) provided that \(i + t\) is even.

As the series with respect to t converges absolutely for almost all \((\vartheta , u)\) (using again that we have \(\vartheta ^{-1} u \not \in F' \cup F'^{\perp }\), for \(\nu \otimes \mathcal {H}^{n - k - 1}\)-almost all pairs \((\vartheta , u) \in SO(n) \times F^{\perp }\)), we can rearrange the order of the summations to get

$$\begin{aligned} J(\omega , \omega ')&= \frac{\Gamma \left( \frac{k - j}{2}\right) }{2\sqrt{\pi }\Gamma \left( \frac{n - j + s}{2}\right) } \mathcal {H}^{k - j - 1}(\omega ') \int _{\omega } \int _{SO(n)} [F, \vartheta F']^2 Q(F \cap \vartheta F')^l \\&\quad \times \sum _{i = 0}^{s} \sum _{x = 0}^{\lfloor \frac{i}{2} \rfloor } (-1)^i \left( {\begin{array}{c}s\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ 2x\end{array}}\right) \Gamma (x + \tfrac{1}{2}) u^{s - i} \pi _{\vartheta F'^\perp } (u)^{i - 2x} Q(\vartheta F'^\perp )^{x} \\&\quad \times \sum _{t = i - 2x}^{\infty } \mathbb {1} \{ i + t \text { even} \} \frac{\Gamma \left( \frac{n - k + s - i + t}{2}\right) }{\Gamma \left( \frac{t - i}{2} + x + 1\right) } \Vert p_{\vartheta F'^\perp } (u) \Vert ^t \, \nu ( \mathrm {d}\vartheta ) \, \mathcal {H}^{n - k - 1}(\mathrm {d}u). \end{aligned}$$

We denote the series with respect to t by \(S_{t}\). Then, for \(\vartheta ^{-1}u\notin F'^\perp \), we obtain (after an index shift)

$$\begin{aligned} S_t&= \sum _{t = 0}^{\infty } \mathbb {1} \{ 2i - 2x + t \text { even} \} \frac{\Gamma \left( \frac{n - k + s + t - 2x}{2}\right) }{\Gamma \left( \frac{t}{2} + 1\right) } \Vert p_{\vartheta F'^\perp } (u) \Vert ^{i - 2x + t} \\&= \Vert p_{\vartheta F'^\perp } (u) \Vert ^{i - 2x} \sum _{t = 0}^{\infty } \frac{\Gamma \left( \frac{n - k + s}{2} + t - x\right) }{\Gamma (t + 1)} \Vert p_{\vartheta F'^\perp } (u) \Vert ^{2t} \\&= \Gamma (\tfrac{n - k + s}{2} - x) \Vert p_{\vartheta F'^\perp } (u) \Vert ^{i - 2x} \sum _{t = 0}^{\infty } \left( {\begin{array}{c} - \frac{n - k + s}{2} + x\\ t\end{array}}\right) (- \Vert p_{\vartheta F'^\perp } (u) \Vert ^{2})^t \\&= \Gamma (\tfrac{n - k + s}{2} - x) \Vert p_{\vartheta F'^\perp } (u) \Vert ^{i - 2x} \Vert p_{\vartheta F'} (u) \Vert ^{- n + k - s + 2x}. \end{aligned}$$
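With \(a = \frac{n - k + s}{2} - x > 0\) and \(q = \Vert p_{\vartheta F'^\perp } (u) \Vert < 1\), the last two steps use the identities

$$\begin{aligned} \frac{\Gamma (a + t)}{\Gamma (t + 1)} = \Gamma (a) (-1)^t \left( {\begin{array}{c}-a\\ t\end{array}}\right) \quad \text {and} \quad \sum _{t = 0}^{\infty } \left( {\begin{array}{c}-a\\ t\end{array}}\right) (- q^2)^t = (1 - q^2)^{-a} = \Vert p_{\vartheta F'} (u) \Vert ^{-2a}, \end{aligned}$$

the latter by the binomial series and since \(\Vert p_{\vartheta F'} (u) \Vert ^2 + \Vert p_{\vartheta F'^\perp } (u) \Vert ^2 = 1\) for the unit vector u.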

Inserting this expression for \(S_t\) into \(J(\omega , \omega ')\) and expanding \(Q(\vartheta F'^\perp )^x = (Q - Q(\vartheta F'))^x\), we obtain

$$\begin{aligned} J(\omega , \omega ')&= \frac{\Gamma \left( \frac{k - j}{2}\right) }{2\sqrt{\pi }\Gamma \left( \frac{n - j + s}{2}\right) } \mathcal {H}^{k - j - 1}(\omega ')\\&\quad \times \int _{\omega } \int _{SO(n)} \sum _{i = 0}^{s} \sum _{x = 0}^{\lfloor \frac{i}{2} \rfloor } \sum _{y = 0}^x (-1)^{i + y} \left( {\begin{array}{c}s\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ 2x\end{array}}\right) \left( {\begin{array}{c}x\\ y\end{array}}\right) \Gamma (x + \tfrac{1}{2}) \\&\quad \times \Gamma (\tfrac{n - k + s}{2} - x) u^{s - i} Q^{x - y} [F, \vartheta F']^2 \Vert p_{\vartheta F'} (u) \Vert ^{- n + k - s + 2x} \\&\quad \times p_{\vartheta F'^\perp } (u) ^{i - 2x} Q(\vartheta F')^{y} Q(F \cap \vartheta F')^l \, \nu ( \mathrm {d}\vartheta ) \, \mathcal {H}^{n - k - 1}(\mathrm {d}u). \end{aligned}$$

Changing the order of the summation under the integral gives

$$\begin{aligned} J(\omega , \omega ')&= \frac{\Gamma \left( \frac{k - j}{2}\right) }{2\sqrt{\pi }\Gamma \left( \frac{n - j + s}{2}\right) } \mathcal {H}^{k - j - 1}(\omega ')\\&\quad \times \int _{\omega } \int _{SO(n)} \sum _{x = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{y = 0}^x \sum _{i = 2x}^{s} (-1)^{i + y} \left( {\begin{array}{c}s\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ 2x\end{array}}\right) \left( {\begin{array}{c}x\\ y\end{array}}\right) \Gamma (x + \tfrac{1}{2}) \\&\quad \times \Gamma (\tfrac{n - k + s}{2} - x) u^{s - i} Q^{x - y} [F, \vartheta F']^2 \Vert p_{\vartheta F'} (u) \Vert ^{- n + k - s + 2x} p_{\vartheta F'^\perp } (u) ^{i - 2x} \\&\quad \times Q(\vartheta F')^{y} Q(F \cap \vartheta F')^l \, \nu ( \mathrm {d}\vartheta ) \, \mathcal {H}^{n - k - 1}(\mathrm {d}u). \end{aligned}$$

We denote the integral with respect to \(\vartheta \) in \(J(\omega , \omega ')\) by \(J_{3}\). Since the integrand depends on \(\vartheta \) only through the subspace \(\vartheta F'\) and \(\nu _{n - k + j}\) is the image measure of \(\nu \) under the map \(\vartheta \mapsto \vartheta F'\), we obtain

$$\begin{aligned} J_{3}&= \int _{G(n, n - k + j)} \sum _{x = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{y = 0}^x \sum _{i = 2x}^{s} (-1)^{i + y} \left( {\begin{array}{c}s\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ 2x\end{array}}\right) \left( {\begin{array}{c}x\\ y\end{array}}\right) \Gamma (x + \tfrac{1}{2}) \Gamma (\tfrac{n - k + s}{2} - x) \\&\quad \times u^{s - i} Q^{x - y} [F, G]^2 \Vert p_{G} (u) \Vert ^{- n + k - s + 2x} p_{G^\perp } (u) ^{i - 2x} Q(G)^{y} Q(F \cap G)^l \, \nu _{n - k + j}( \mathrm {d}G). \end{aligned}$$

Since \(n\ge 2\) and \(1\le n-k+j\le n-1\), Lemma 6 yields

$$\begin{aligned} J_{3}&= \frac{\omega _{n - k + j}}{2 \omega _n} \int _{G(u^\perp , n - k + j - 1)}\\&\quad \times \int _{-1}^1 \int _{U^\perp \cap u^\perp \cap {\mathbb {S}}^{n-1}} \sum _{x = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{y = 0}^x \sum _{i = 2x}^{s} (-1)^{i + y} \left( {\begin{array}{c}s\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ 2x\end{array}}\right) \left( {\begin{array}{c}x\\ y\end{array}}\right) \Gamma (x + \tfrac{1}{2}) \\&\quad \times \Gamma (\tfrac{n - k + s}{2} - x) u^{s - i} Q^{x - y} | z |^{n - k + j - 1} \left( 1 - z^2 \right) ^{\frac{k - j - 2}{2}} [F, \mathrm {lin}\{ U, z u + \sqrt{1 - z^2} w \} ]^2 \\&\quad \times \Vert p_{\mathrm {lin}\{ U, z u + \sqrt{1 - z^2} w \}} (u) \Vert ^{- n + k - s + 2x} Q(\mathrm {lin}\{ U, z u + \sqrt{1 - z^2} w \})^{y} \\&\quad \times Q(F \cap \mathrm {lin}\{ U, z u + \sqrt{1 - z^2} w \})^l p_{\mathrm {lin}\{ U, z u + \sqrt{1 - z^2} w \}^\perp } (u)^{i - 2x} \, \\&\quad \times \mathcal {H}^{k - j - 1}(\mathrm {d}w) \, \mathrm {d}z \, \nu ^{u^\perp }_{n - k + j - 1}( \mathrm {d}U). \end{aligned}$$

The required integrability will be clear from (9) below. Since \(u,w \in U^\perp \), we obtain

$$\begin{aligned} p_{\mathrm {lin}\{ U, z u + \sqrt{1 - z^2} w \}^\perp } (u) = \sqrt{1 - z^2}\cdot (\sqrt{1 - z^2} u - |z| \mathrm {sign}(z) w) \end{aligned}$$

and \(\Vert p_{\mathrm {lin}\{ U, z u + \sqrt{1 - z^2} w \}} (u) \Vert = |z|\). Furthermore, since also \(F\subset u^\perp \), we have

$$\begin{aligned}{}[F, \mathrm {lin}\{ U, z u + \sqrt{1 - z^2} w \}]&= [F, U]^{(u^\perp )} |z|, \\ Q(\mathrm {lin}\{ U, z u + \sqrt{1 - z^2} w \})&= Q(U) + (|z| u + \sqrt{1 - z^2} \mathrm {sign}(z) w )^2, \end{aligned}$$

and, for all \(z \in [-1, 1]{\setminus }\{0\}\) and \(w \in U^\perp \cap u^\perp \cap {\mathbb {S}}^{n-1}\),

$$\begin{aligned} Q(F \cap \mathrm {lin}\{ U, z u + \sqrt{1 - z^2} w \}) = Q( F \cap U), \end{aligned}$$

as \(F\subset u^\perp \) and \(U = \mathrm {lin}\{ U, z u + \sqrt{1 - z^2} w \} \cap u^\perp \). Using the fact that the integration with respect to w is invariant under reflection in the origin, we obtain

$$\begin{aligned} J_{3}&= \frac{\omega _{n - k + j}}{2 \omega _n} \int _{G(u^\perp , n - k + j - 1)} \int _{-1}^1 \int _{U^\perp \cap u^\perp \cap {\mathbb {S}}^{n-1}} \\&\quad \times \sum _{x = 0}^{\lfloor \frac{s}{2} \rfloor }\sum _{y = 0}^x \sum _{i = 2x}^{s} (-1)^{i + y} \left( {\begin{array}{c}s\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ 2x\end{array}}\right) \left( {\begin{array}{c}x\\ y\end{array}}\right) \Gamma (x + \tfrac{1}{2}) \\&\quad \times \Gamma (\tfrac{n - k + s}{2} - x) u^{s - i} Q^{x - y} | z |^{j - s + 2x + 1} \left( 1 - z^2 \right) ^{\frac{k - j + i - 2x - 2}{2}} \left( \sqrt{1 - z^2} u - |z| w \right) ^{i - 2x} \\&\quad \times \left( [F, U]^{(u^\perp )}\right) ^2 \left( Q(U) + (|z| u + \sqrt{1 - z^2} w )^2 \right) ^{y} \\&\quad \times Q(F \cap U)^l \, \mathcal {H}^{k - j - 1}(\mathrm {d}w) \mathrm {d}z \nu ^{u^\perp }_{n - k + j - 1}( \mathrm {d}U). \end{aligned}$$

The binomial expansion

$$\begin{aligned} \left( \sqrt{1 - z^2} u - |z| w \right) ^{i - 2x} = \sum _{\alpha = 0}^{i - 2x} (-1)^{\alpha } \left( {\begin{array}{c}i - 2x\\ \alpha \end{array}}\right) \big ( \sqrt{1 - z^2} u \big )^{i - 2x - \alpha } \big ( |z| w \big )^{\alpha } \end{aligned}$$

and a change of the order of summation give

$$\begin{aligned} J_{3}&= \frac{\omega _{n - k + j}}{2 \omega _n} \int _{G(u^\perp , n - k + j - 1)} \left( [F, U]^{(u^\perp )}\right) ^2 \int _{-1}^1 \int _{U^\perp \cap u^\perp \cap {\mathbb {S}}^{n-1}} \sum _{x = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{y = 0}^x \sum _{\alpha = 0}^{s - 2x} (-1)^{y} \left( {\begin{array}{c}x\\ y\end{array}}\right) \\&\quad \times w^\alpha \sum _{i = 2x + \alpha }^{s} (-1)^{i + \alpha } \left( {\begin{array}{c}s\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ 2x\end{array}}\right) \left( {\begin{array}{c}i - 2x\\ \alpha \end{array}}\right) \left( 1 - z^2 \right) ^{i} \Gamma (\tfrac{n - k + s}{2} - x) \\&\quad \times \Gamma (x + \tfrac{1}{2}) u^{s - 2x - \alpha } Q^{x - y} | z |^{j - s + 2x + \alpha + 1} \left( 1 - z^2 \right) ^{\frac{k - j - 4x - \alpha - 2}{2}} Q(F \cap U)^l \\&\quad \times \left( Q(U) + (|z| u + \sqrt{1 - z^2} w)^2 \right) ^{y} \, \mathcal {H}^{k - j - 1}(\mathrm {d}w) \, \mathrm {d}z \, \nu ^{u^\perp }_{n - k + j - 1}( \mathrm {d}U). \end{aligned}$$

With Lemma A.2 we get

$$\begin{aligned} J_{3}= & {} \frac{\omega _{n - k + j}}{2 \omega _n} \sum _{x = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{y = 0}^x \sum _{\alpha = 0}^{s - 2x} (-1)^{y} \left( {\begin{array}{c}x\\ y\end{array}}\right) \left( {\begin{array}{c}s\\ 2x\end{array}}\right) \left( {\begin{array}{c}s - 2x\\ \alpha \end{array}}\right) \Gamma (x + \tfrac{1}{2}) \Gamma (\tfrac{n - k + s}{2} - x) u^{s - 2x - \alpha } Q^{x - y} \nonumber \\&\quad \times \int _{G(u^\perp , n - k + j - 1)} \left( [F, U]^{(u^\perp )}\right) ^2 \int _{-1}^1 | z |^{j + s - 2x - \alpha + 1} \left( 1 - z^2 \right) ^{\frac{k - j + \alpha - 2}{2}} \int _{U^\perp \cap u^\perp \cap {\mathbb {S}}^{n-1}} \nonumber \\&\quad \times w^\alpha Q(F \cap U)^l \left( Q(U) + (|z| u + \sqrt{1 - z^2} w)^2 \right) ^{y} \, \mathcal {H}^{k - j - 1}(\mathrm {d}w) \, \mathrm {d}z \, \nu ^{u^\perp }_{n - k + j - 1}( \mathrm {d}U).\nonumber \\ \end{aligned}$$
(9)
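In the summation with respect to i, the passage to (9) amounts to the evaluation

$$\begin{aligned} \sum _{i = 2x + \alpha }^{s} (-1)^{i + \alpha } \left( {\begin{array}{c}s\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ 2x\end{array}}\right) \left( {\begin{array}{c}i - 2x\\ \alpha \end{array}}\right) (1 - z^2)^{i} = \left( {\begin{array}{c}s\\ 2x\end{array}}\right) \left( {\begin{array}{c}s - 2x\\ \alpha \end{array}}\right) (1 - z^2)^{2x + \alpha } z^{2(s - 2x - \alpha )}, \end{aligned}$$

which also follows from trinomial revision of the binomial coefficients together with the binomial theorem and accounts for the changed exponents of \(|z|\) and \(1 - z^2\).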

At this point, we easily see that the integrals in \(J_{3}\) are finite, since \(j+s-2x-\alpha +1\ge 0\) and \(k-j+\alpha -2\ge -1\). In fact, the absolute values of the integrands have finite integral, which also justifies the application of Lemma 6 above. Therefore, we can freely change the order of summation and integration from now on. We write \(J_4\) for the integral with respect to U multiplied by the factor \( {\omega _{n - k + j}} /{(2 \omega _n)} \).

By (twofold) binomial expansion of \(\big ( Q(U) + (|z| u + \sqrt{1 - z^2} w)^2 \big )^{y}\), we obtain

$$\begin{aligned} J_{4}&= \frac{\omega _{n - k + j}}{2 \omega _n} \sum _{\beta = 0}^{y} \sum _{\gamma = 0}^{2\beta } \left( {\begin{array}{c}y\\ \beta \end{array}}\right) \left( {\begin{array}{c}2\beta \\ \gamma \end{array}}\right) u^{2\beta - \gamma } \int _{G(u^\perp , n - k + j - 1)} \left( [F, U]^{(u^\perp )}\right) ^2 Q(U)^{y - \beta } \\&\quad \times Q(F \cap U)^l \ \int _{-1}^1 | z |^{j + s - 2x - \alpha + 2\beta - \gamma + 1} \left( 1 - z^2 \right) ^{\frac{k - j + \alpha + \gamma - 2}{2}} \, \mathrm {d}z \\&\quad \times \int _{U^\perp \cap u^\perp \cap {\mathbb {S}}^{n-1}} w^{\alpha + \gamma } \, \mathcal {H}^{k - j - 1}(\mathrm {d}w) \, \nu ^{u^\perp }_{n - k + j - 1}( \mathrm {d}U). \end{aligned}$$

Using Lemma 7 and expressing the involved spherical volumes in terms of Gamma functions, we get

$$\begin{aligned} J_{4}&= \frac{\Gamma ( \frac{n}{2})}{\sqrt{\pi }\Gamma ( \frac{n - k + j}{2})} \sum _{\beta = 0}^{y} \sum _{\gamma = 0}^{2\beta } \mathbb {1}\{\alpha + \gamma \text { even}\} \left( {\begin{array}{c}y\\ \beta \end{array}}\right) \left( {\begin{array}{c}2\beta \\ \gamma \end{array}}\right) \\&\quad \times \frac{\Gamma \left( \frac{j + s - \alpha - \gamma }{2} - x + \beta + 1\right) \Gamma \left( \frac{\alpha + \gamma + 1}{2}\right) }{\Gamma \left( \frac{k + s}{2} - x + \beta + 1\right) } u^{2\beta - \gamma } \\&\quad \times \int _{G(u^\perp , n - k + j - 1)} \left( [F, U]^{(u^\perp )}\right) ^2 Q(U)^{y - \beta } Q(F {\cap } U)^l Q(U^\perp \cap u^\perp )^{\frac{\alpha + \gamma }{2}} \, \nu ^{u^\perp }_{n - k + j - 1}( \mathrm {d}U). \end{aligned}$$

With an index shift in the summation with respect to \(\gamma \), we obtain

$$\begin{aligned} J_{4}&= \frac{\Gamma \left( \frac{n}{2}\right) }{\sqrt{\pi }\Gamma \left( \frac{n - k + j}{2}\right) } \sum _{\beta = 0}^{y} \sum _{\gamma = \alpha }^{\alpha + 2\beta } \mathbb {1}\{\gamma \text { even}\} \left( {\begin{array}{c}y\\ \beta \end{array}}\right) \left( {\begin{array}{c}2\beta \\ \gamma - \alpha \end{array}}\right) \\&\quad \times \frac{\Gamma \left( \frac{j + s - \gamma }{2} - x + \beta + 1\right) \Gamma \left( \frac{\gamma + 1}{2}\right) }{\Gamma \left( \frac{k + s }{2} - x + \beta + 1\right) } u^{\alpha + 2\beta - \gamma } \\&\quad \times \int _{G(u^\perp , n - k + j - 1)} \left( [F, U]^{(u^\perp )}\right) ^2 Q(U)^{y - \beta } Q(F \cap U)^l Q(U^\perp \cap u^\perp )^{\frac{\gamma }{2}} \, \nu ^{u^\perp }_{n - k + j - 1}( \mathrm {d}U). \end{aligned}$$

We plug \(J_4\) into \(J_3\) and change the order of summation to get

$$\begin{aligned} J_{3}&= \frac{\Gamma \left( \frac{n}{2}\right) }{\sqrt{\pi }\Gamma \left( \frac{n - k + j}{2}\right) } \sum _{x = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{y = 0}^x \sum _{\beta = 0}^{y}\\&\quad \times \sum _{\gamma = 0}^{s - 2x + 2\beta } (-1)^{y} \mathbb {1}\{\gamma \text { even}\} \left( {\begin{array}{c}s\\ 2x\end{array}}\right) \left( {\begin{array}{c}x\\ y\end{array}}\right) \left( {\begin{array}{c}y\\ \beta \end{array}}\right) \Gamma (x + \tfrac{1}{2}) \Gamma (\tfrac{n - k + s}{2} - x) \\&\quad \times \sum _{\alpha = (\gamma - 2\beta )^+}^{\min \{s - 2x, \gamma \}} \left( {\begin{array}{c}s - 2x\\ \alpha \end{array}}\right) \left( {\begin{array}{c}2\beta \\ \gamma - \alpha \end{array}}\right) \frac{\Gamma \left( \frac{j + s-\gamma }{2} - x + \beta + 1\right) \Gamma \left( \frac{\gamma + 1}{2}\right) }{\Gamma \left( \frac{k + s}{2} - x + \beta + 1\right) } u^{s - 2x + 2\beta - \gamma } Q^{x - y} \\&\quad \times \int _{G(u^\perp , n - k + j - 1)} \left( [F, U]^{(u^\perp )}\right) ^2 Q(U)^{y - \beta } Q(F \cap U)^l Q(U^\perp \cap u ^\perp )^{\frac{\gamma }{2}} \, \nu ^{u^\perp }_{n - k + j - 1}( \mathrm {d}U). \end{aligned}$$

By Vandermonde’s identity (applied to the summation over \(\alpha \)) and after replacing the summation index \(\gamma \), which contributes only for even values, by \(2\gamma \),

$$\begin{aligned} J_{3}&= \frac{\Gamma \left( \frac{n}{2}\right) }{\sqrt{\pi }\Gamma \left( \frac{n - k + j}{2}\right) } \sum _{x = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{y = 0}^x \sum _{\beta = 0}^{y} \sum _{\gamma = 0}^{\lfloor \frac{s}{2} \rfloor - x + \beta } (-1)^y \left( {\begin{array}{c}s\\ 2x\end{array}}\right) \left( {\begin{array}{c}x\\ y\end{array}}\right) \left( {\begin{array}{c}y\\ \beta \end{array}}\right) \left( {\begin{array}{c}s - 2x + 2\beta \\ 2\gamma \end{array}}\right) \Gamma (x + \tfrac{1}{2}) \\&\quad \times \Gamma (\tfrac{n - k + s}{2} - x) \frac{\Gamma \left( \frac{j + s}{2} - x + \beta - \gamma + 1\right) \Gamma \left( \gamma + \frac{1}{2}\right) }{\Gamma \left( \frac{k + s}{2} - x + \beta + 1\right) } u^{s - 2x + 2\beta - 2\gamma } Q^{x-y}\\&\quad \times \int _{G(u^\perp , n - k + j - 1)} \left( [F, U]^{(u^\perp )}\right) ^2 Q(U)^{y - \beta } Q(F \cap U)^l Q(U^\perp \cap u ^\perp )^{\gamma } \, \nu ^{u^\perp }_{n - k + j - 1}( \mathrm {d}U). \end{aligned}$$

Next we expand \(Q(U^\perp \cap u ^\perp )^{\gamma } = (Q(u^\perp ) - Q(U) )^{\gamma }\) to get

$$\begin{aligned} J(\omega , \omega ')&= \\&\quad \frac{\Gamma ( \frac{n}{2}) \Gamma \left( \frac{k - j}{2}\right) }{2\pi \Gamma ( \frac{n - k + j}{2}) \Gamma \left( \frac{n - j + s}{2}\right) } \mathcal {H}^{k - j - 1}(\omega ') \sum _{x = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{y = 0}^x \sum _{\beta = 0}^{y} \sum _{\gamma = 0}^{\lfloor \frac{s}{2} \rfloor - x + \beta } \sum _{\delta = 0}^{\gamma } (-1)^{y + \delta } \left( {\begin{array}{c}s\\ 2x\end{array}}\right) \left( {\begin{array}{c}x\\ y\end{array}}\right) \left( {\begin{array}{c}y\\ \beta \end{array}}\right) \\&\quad \times \left( {\begin{array}{c}s - 2x + 2\beta \\ 2\gamma \end{array}}\right) \left( {\begin{array}{c}\gamma \\ \delta \end{array}}\right) {\textstyle \Gamma (x + \frac{1}{2}) \Gamma \left( \frac{n - k + s}{2} - x\right) } \frac{\Gamma \left( \frac{j + s}{2} - x + \beta - \gamma + 1\right) \Gamma \left( \gamma + \frac{1}{2}\right) }{\Gamma \left( \frac{k + s}{2} - x + \beta + 1\right) } \\&\quad \times Q^{x - y} \int _{\omega } u^{s - 2x + 2\beta - 2\gamma } Q(u^\perp )^{\gamma - \delta } \int _{G(u^\perp , n - k + j - 1)} \left( [F, U]^{(u^\perp )}\right) ^2 \\&\quad \times Q(U)^{y - \beta + \delta } Q(F \cap U)^l \, \nu ^{u^\perp }_{n - k + j - 1}( \mathrm {d}U) \, \mathcal {H}^{n - k - 1}(\mathrm {d}u). \end{aligned}$$

Reversing the order of summation, first with respect to \(\beta \), and then with respect to y, we get

$$\begin{aligned} J(\omega , \omega ')&= \frac{\Gamma ( \frac{n}{2}) \Gamma \left( \frac{k - j}{2}\right) }{2\pi \Gamma \left( \frac{n - k + j}{2}\right) \Gamma \left( \frac{n - j + s}{2}\right) } \mathcal {H}^{k - j - 1}(\omega ') \sum _{x = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{y = 0}^x \sum _{\beta = 0}^{x - y} \sum _{\gamma = 0}^{\lfloor \frac{s}{2} \rfloor - y - \beta } \sum _{\delta = 0}^{\gamma } (-1)^{x + y + \delta } \\&\quad \times \left( {\begin{array}{c}s\\ 2x\end{array}}\right) \left( {\begin{array}{c}x\\ y\end{array}}\right) \left( {\begin{array}{c}x - y\\ \beta \end{array}}\right) \left( {\begin{array}{c}s - 2y - 2\beta \\ 2\gamma \end{array}}\right) \left( {\begin{array}{c}\gamma \\ \delta \end{array}}\right) \Gamma (x + \tfrac{1}{2}) \Gamma (\tfrac{n - k + s}{2} - x) \\&\quad \times \frac{\Gamma \left( \frac{j + s}{2} - y - \beta - \gamma + 1\right) \Gamma \left( \gamma + \frac{1}{2}\right) }{\Gamma \left( \frac{k + s}{2} - y - \beta + 1\right) } Q^{y} \int _{\omega } u^{s - 2y - 2\beta - 2\gamma } Q(u^\perp )^{\gamma - \delta } \\&\quad \times \int _{G(u^\perp , n - k + j - 1)} \left( [F, U]^{(u^\perp )}\right) ^2 Q(U)^{\beta + \delta } Q(F \cap U)^l \, \nu ^{u^\perp }_{n - k + j - 1}( \mathrm {d}U) \, \mathcal {H}^{n - k - 1}(\mathrm {d}u). \end{aligned}$$

A change of the order of summation yields

$$\begin{aligned} J(\omega , \omega ')&= \frac{\Gamma ( \frac{n}{2}) \Gamma \left( \frac{k - j}{2}\right) }{2\pi \Gamma ( \frac{n - k + j}{2}) \Gamma \left( \frac{n - j + s}{2}\right) } \mathcal {H}^{k - j - 1}(\omega ') \sum _{y = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{\beta = 0}^{\lfloor \frac{s}{2} \rfloor - y} \sum _{\gamma = 0}^{\lfloor \frac{s}{2} \rfloor - y - \beta } \sum _{\delta = 0}^{\gamma } (-1)^{y + \delta } \\&\quad \times \sum _{x = y + \beta }^{\lfloor \frac{s}{2} \rfloor } (-1)^x \left( {\begin{array}{c}s\\ 2x\end{array}}\right) \left( {\begin{array}{c}x\\ y\end{array}}\right) \left( {\begin{array}{c}x - y\\ \beta \end{array}}\right) {\textstyle \Gamma (x + \frac{1}{2}) \Gamma \left( \frac{n - k + s}{2} - x\right) } \\&\quad \times \left( {\begin{array}{c}s - 2y - 2\beta \\ 2\gamma \end{array}}\right) \left( {\begin{array}{c}\gamma \\ \delta \end{array}}\right) \frac{\Gamma \left( \frac{j + s}{2} - y - \beta - \gamma + 1\right) \Gamma \left( \gamma + \frac{1}{2}\right) }{\Gamma \left( \frac{k + s}{2} - y - \beta + 1\right) } \\&\quad \times Q^{y} \int _{\omega } u^{s - 2y - 2\beta - 2\gamma } Q(u^\perp )^{\gamma - \delta } \int _{G(u^\perp , n - k + j - 1)} \left( [F, U]^{(u^\perp )}\right) ^2 \\&\quad \times Q(U)^{\beta + \delta } Q(F \cap U)^l \, \nu ^{u^\perp }_{n - k + j - 1}( \mathrm {d}U) \, \mathcal {H}^{n - k - 1}(\mathrm {d}u). \end{aligned}$$

By Legendre’s duplication formula, applied several times, we obtain

$$\begin{aligned}&\left( {\begin{array}{c}s\\ 2x\end{array}}\right) \left( {\begin{array}{c}x\\ y\end{array}}\right) \left( {\begin{array}{c}x - y\\ \beta \end{array}}\right) \Gamma \left( x + \tfrac{1}{2}\right) \\&\quad = \left( {\begin{array}{c}s\\ 2y + 2\beta \end{array}}\right) \left( {\begin{array}{c}y + \beta \\ y\end{array}}\right) \Gamma \left( y + \beta + \tfrac{1}{2}\right) \left( {\begin{array}{c}\lfloor \frac{s}{2} \rfloor - y - \beta \\ x - y - \beta \end{array}}\right) \frac{\Gamma \left( \lfloor \tfrac{s + 1}{2} \rfloor - y - \beta + \tfrac{1}{2}\right) }{\Gamma \left( \lfloor \frac{s + 1}{2} \rfloor - x + \tfrac{1}{2}\right) }. \end{aligned}$$
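The following Python snippet provides a quick numerical confirmation of this identity over a small range of admissible parameters (a sanity check only, with \(\beta \) abbreviated as b):

```python
# Numerical check of the displayed binomial/Gamma identity obtained from
# Legendre's duplication formula; h = floor(s/2), h1 = floor((s+1)/2).
from math import comb, gamma, isclose

def lhs(s, x, y, b):
    return comb(s, 2 * x) * comb(x, y) * comb(x - y, b) * gamma(x + 0.5)

def rhs(s, x, y, b):
    h, h1 = s // 2, (s + 1) // 2
    return (comb(s, 2 * y + 2 * b) * comb(y + b, y) * gamma(y + b + 0.5)
            * comb(h - y - b, x - y - b)
            * gamma(h1 - y - b + 0.5) / gamma(h1 - x + 0.5))

for s in range(7):
    for x in range(s // 2 + 1):
        for y in range(x + 1):
            for b in range(x - y + 1):
                assert isclose(lhs(s, x, y, b), rhs(s, x, y, b), rel_tol=1e-10)
```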

We denote the resulting sum with respect to x by \(S_x\). An index shift and a reversal of the order of summation imply that

$$\begin{aligned} S_{x}&= \sum _{x = 0}^{\lfloor \frac{s}{2} \rfloor - y - \beta } (-1)^{\lfloor \frac{s}{2} \rfloor + x} \left( {\begin{array}{c}\lfloor \frac{s}{2} \rfloor - y - \beta \\ x\end{array}}\right) \frac{\Gamma \left( \tfrac{n - k + s}{2} - \lfloor \frac{s}{2} \rfloor + x\right) }{\Gamma \left( \lfloor \frac{s + 1}{2} \rfloor - \lfloor \frac{s}{2} \rfloor + x + \tfrac{1}{2}\right) }. \end{aligned}$$

Hence, an application of relation (A.1’) and then of relation (5) with \(c = \tfrac{n-k+s}{2}-\lfloor \tfrac{s + 1}{2} \rfloor -\tfrac{1}{2}\) and \(m=\lfloor \frac{s}{2} \rfloor -y-\beta \in \mathbb {N}_0\) yield

$$\begin{aligned} S_{x}&= (-1)^{\lfloor \frac{s}{2} \rfloor } \frac{\Gamma \left( \tfrac{n - k + s}{2} - \lfloor \frac{s}{2} \rfloor \right) \Gamma \left( \lfloor \frac{s + 1}{2} \rfloor + \lfloor \frac{s}{2} \rfloor - \tfrac{n - k + s}{2} - y - \beta + \tfrac{1}{2}\right) }{\Gamma \left( \lfloor \frac{s + 1}{2} \rfloor - y - \beta + \tfrac{1}{2}\right) \Gamma \left( \lfloor \frac{s + 1}{2} \rfloor - \tfrac{n - k + s}{2} + \tfrac{1}{2}\right) } \\&= \overbrace{(-1)^{s + \lfloor \frac{s}{2} \rfloor + \lfloor \frac{s + 1}{2} \rfloor + y + \beta }}^{ = (-1)^{y + \beta }} \frac{\overbrace{\textstyle \Gamma \left( \tfrac{n - k + s}{2} - \lfloor \frac{s}{2} \rfloor \right) \Gamma \left( \tfrac{n - k + s}{2} - \lfloor \frac{s + 1}{2} \rfloor + \tfrac{1}{2}\right) }^{ = \Gamma \left( \frac{n - k}{2}\right) \Gamma \left( \frac{n - k + 1}{2}\right) }}{\Gamma \left( \lfloor \frac{s + 1}{2} \rfloor - y - \beta + \tfrac{1}{2}\right) \Gamma \left( \tfrac{n - k + 1}{2} + y + \beta - \tfrac{s}{2}\right) }, \end{aligned}$$

where we used that \(c\ge 0\), except when \(k=n-1\) and s is odd, in which case \(c=-1/2\). We note that \(S_x = 0\) if \(n - k + s\) is odd and \(n - k + 1 \le s - 2y - 2\beta \). Thus, we obtain

$$\begin{aligned} J(\omega , \omega ')&= \frac{\Gamma ( \frac{n}{2}) \Gamma \left( \frac{n - k}{2}\right) \Gamma \left( \frac{n - k + 1}{2}\right) \Gamma \left( \frac{k - j}{2}\right) }{2\pi \Gamma ( \frac{n - k + j}{2}) \Gamma \left( \frac{n - j + s}{2}\right) } \mathcal {H}^{k - j - 1}(\omega ') \sum _{y = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{\beta = 0}^{\lfloor \frac{s}{2} \rfloor - y} \sum _{\gamma = 0}^{\lfloor \frac{s}{2} \rfloor - y - \beta } \sum _{\delta = 0}^{\gamma } (-1)^{\beta + \delta } \\&\quad \times \left( {\begin{array}{c}s\\ 2y + 2\beta \end{array}}\right) \left( {\begin{array}{c}y + \beta \\ y\end{array}}\right) \left( {\begin{array}{c}s - 2y - 2\beta \\ 2\gamma \end{array}}\right) \left( {\begin{array}{c}\gamma \\ \delta \end{array}}\right) \frac{\Gamma \left( \frac{j + s}{2} - y - \beta - \gamma + 1\right) \Gamma \left( \gamma + \frac{1}{2}\right) }{\Gamma \left( \frac{k + s}{2} - y - \beta + 1\right) } \\&\quad \times \frac{\Gamma \left( y + \beta + \frac{1}{2}\right) }{\Gamma \left( \frac{n - k + 1}{2} + y + \beta - \frac{s}{2}\right) } Q^{y} \int _{\omega } u^{s - 2y - 2\beta - 2\gamma } Q(u^\perp )^{\gamma - \delta } \int _{G(u^\perp , n - k + j - 1)} \\&\quad \times \left( [F, U]^{(u^\perp )}\right) ^2 Q(U)^{\beta + \delta } Q(F \cap U)^l \, \nu ^{u^\perp }_{n - k + j - 1}( \mathrm {d}U) \, \mathcal {H}^{n - k - 1}(\mathrm {d}u). \end{aligned}$$

We conclude from Proposition 14 (applied in \(u^\perp \)) that

$$\begin{aligned} J(\omega , \omega ')&= {\frac{\Gamma \left( \frac{n}{2}\right) \Gamma \left( \frac{n + 1}{2}\right) (n - k + j - 1)!}{(n - 1)! \Gamma ( \frac{n - k + j}{2}) \Gamma \left( \frac{n - k + j + 1}{2}\right) }} \frac{k! \Gamma \left( \frac{k}{2}\right) {\textstyle \Gamma \left( \frac{n - k}{2}\right) \Gamma \left( \frac{n - k + 1}{2}\right) } \Gamma \left( \frac{j}{2} + l\right) }{2 \pi j! \Gamma \left( \frac{j}{2}\right) \Gamma \left( \frac{n - j + s}{2}\right) } \mathcal {H}^{k - j - 1}(\omega ') \\&\quad \times \sum _{y = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{\beta = 0}^{\lfloor \frac{s}{2} \rfloor - y} \sum _{\gamma = 0}^{\lfloor \frac{s}{2} \rfloor - y - \beta } \sum _{\delta = 0}^{\gamma } \sum _{i = 0}^{\beta + \delta } (-1)^{\beta + \delta } \left( {\begin{array}{c}s\\ 2y + 2\beta \end{array}}\right) \left( {\begin{array}{c}y + \beta \\ y\end{array}}\right) \left( {\begin{array}{c}s - 2y - 2\beta \\ 2\gamma \end{array}}\right) \left( {\begin{array}{c}\gamma \\ \delta \end{array}}\right) \\&\quad \times \left( {\begin{array}{c}\beta + \delta \\ i\end{array}}\right) \frac{\Gamma \left( \frac{j + s}{2} - y - \beta - \gamma + 1\right) \Gamma \left( \gamma + \frac{1}{2}\right) }{\Gamma \left( \frac{k + s}{2} - y - \beta + 1\right) } \frac{\Gamma \left( y + \beta + \frac{1}{2}\right) }{\Gamma \left( \frac{n - k + 1}{2} + y + \beta - \frac{s}{2}\right) } \\&\quad \times \frac{(i + l - 2)!}{(l - 2)!} \frac{\Gamma \left( \frac{k - j}{2} + i\right) }{\Gamma \left( \frac{k}{2} + l + i\right) } \frac{\Gamma \left( \frac{n - k + j + 1}{2} + \beta + \delta - i\right) }{\Gamma \left( \frac{n + 1}{2} + \beta + \delta \right) } Q^{y} Q(F)^{l + i} \\&\quad \times \int _{\omega } u^{s - 2y - 2\beta - 2\gamma } Q(u^\perp )^{\beta + \gamma - i} \, \mathcal {H}^{n - k - 1}(\mathrm {d}u). \end{aligned}$$

To simplify the right-hand side, we apply Legendre’s duplication formula three times. Then binomial expansion of \(Q(u^{\perp })^{\beta + \gamma - i} = (Q - u^{2})^{\beta + \gamma - i}\), an index shift in the resulting sum, and a subsequent index shift in the summation with respect to \(\beta \) imply that

$$\begin{aligned} J(\omega , \omega ')&= \frac{k! (n - k - 1)! \Gamma \left( \frac{k}{2}\right) \Gamma \left( \frac{j}{2} + l\right) }{2^{n - j} \sqrt{\pi }j! \Gamma \left( \frac{j}{2}\right) \Gamma \left( \frac{n - j + s}{2}\right) } \mathcal {H}^{k - j - 1}(\omega ') \sum _{y = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{\beta = y}^{\lfloor \frac{s}{2} \rfloor } \sum _{\gamma = 0}^{\lfloor \frac{s}{2} \rfloor - \beta } \sum _{\delta = 0}^{\gamma } \sum _{i = 0}^{\beta + \delta - y} \sum _{m = y + i}^{\beta + \gamma } \\&\quad \times (-1)^{m + y + \gamma + \delta } \left( {\begin{array}{c}s\\ 2\beta \end{array}}\right) \left( {\begin{array}{c}\beta \\ y\end{array}}\right) \left( {\begin{array}{c}s - 2\beta \\ 2\gamma \end{array}}\right) \left( {\begin{array}{c}\gamma \\ \delta \end{array}}\right) \left( {\begin{array}{c}\beta + \delta - y\\ i\end{array}}\right) \left( {\begin{array}{c}\beta + \gamma - y - i\\ m - y - i\end{array}}\right) \\&\quad \times \frac{(i + l - 2)!}{(l - 2)!} \frac{\Gamma \left( \beta + \frac{1}{2}\right) }{\Gamma \left( \frac{n - k + 1}{2} + \beta - \frac{s}{2}\right) } \frac{\Gamma \left( \frac{j + s}{2} - \beta - \gamma + 1\right) \Gamma \left( \gamma + \frac{1}{2}\right) }{\Gamma \left( \frac{k + s}{2} - \beta + 1\right) } \frac{\Gamma \left( \frac{k - j}{2} + i\right) }{\Gamma \left( \frac{k}{2} + l + i\right) } \\&\quad \times \frac{\Gamma \left( \frac{n - k + j + 1}{2} + \beta + \delta - y - i\right) }{\Gamma \left( \frac{n + 1}{2} + \beta + \delta - y\right) } Q^{m - i} Q(F)^{l + i} \int _{\omega } u^{s - 2m} \, \mathcal {H}^{n - k - 1}(\mathrm {d}u). \end{aligned}$$

By a change of the order of summation, we finally obtain

$$\begin{aligned} J(\omega , \omega ')&= \mathcal {H}^{k - j - 1}(\omega ') \sum _{m = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{i = 0}^{m} b_{n, j, k}^{s, l, i} \, {\hat{a}}_{n, j, k}^{s, i, m} \, Q^{m - i} Q(F)^{l + i} \int _{\omega } u^{s - 2m} \, \mathcal {H}^{n - k - 1}(\mathrm {d}u), \end{aligned}$$

where

$$\begin{aligned} b_{n, j, k}^{s, l, i}&:=\, \frac{\Gamma \left( \frac{k}{2}\right) }{2^{n - j} \sqrt{\pi }\Gamma \left( \frac{j}{2}\right) \Gamma \left( \frac{n - j + s}{2}\right) } \frac{k! (n - k - 1)!}{j!} \frac{(i + l - 2)!}{(l - 2)!} \frac{ \Gamma \left( \frac{j}{2} + l\right) \Gamma \left( \frac{k - j}{2} + i\right) }{\Gamma \left( \frac{k}{2} + l + i\right) }, \\ {\hat{a}}_{n, j, k}^{s, i, m}&:=\, \sum _{y = 0}^{m - i} \sum _{\beta = y}^{\lfloor \frac{s}{2} \rfloor } \sum _{\gamma = (m - \beta )^+}^{\lfloor \frac{s}{2} \rfloor - \beta } \sum _{\delta = (i - \beta + y)^+}^{\gamma } (-1)^{m + y + \gamma + \delta } \left( {\begin{array}{c}s\\ 2\beta \end{array}}\right) \left( {\begin{array}{c}\beta \\ y\end{array}}\right) \left( {\begin{array}{c}s - 2\beta \\ 2\gamma \end{array}}\right) \left( {\begin{array}{c}\gamma \\ \delta \end{array}}\right) \\&\quad \quad \times \left( {\begin{array}{c}\beta + \delta - y\\ i\end{array}}\right) \left( {\begin{array}{c}\beta + \gamma - y - i\\ m - y - i\end{array}}\right) \Gamma (\beta + \tfrac{1}{2}) \Gamma (\gamma + \tfrac{1}{2}) \\&\quad \quad \times \frac{\Gamma \left( \frac{j + s}{2} - \beta - \gamma + 1\right) }{\Gamma \left( \frac{k + s}{2} - \beta + 1\right) \Gamma \left( \frac{n - k + 1}{2} + \beta - \frac{s}{2}\right) } \frac{\Gamma \left( \frac{n - k + j + 1}{2} + \beta + \delta - y - i\right) }{\Gamma \left( \frac{n + 1}{2} + \beta + \delta - y\right) }. \end{aligned}$$

5.4 Simplifying the coefficients

In this section, we simplify the coefficients \({\hat{a}}_{n, j, k}^{s, i, m}\) by a repeated change of the order of summation and by repeated application of relations (A.1) and (A.1’).

First, an index shift by \(\beta \), applied to the summation with respect to \(\gamma \), and a change of the order of summation yield

$$\begin{aligned} {\hat{a}}_{n, j, k}^{s, i, m}&= \sum _{y = 0}^{m - i} \sum _{\gamma = m}^{\lfloor \frac{s}{2} \rfloor } \sum _{\beta = y}^{\gamma } \sum _{\delta = (i - \beta + y)^+}^{\gamma - \beta } (-1)^{m + y + \gamma + \beta + \delta } \left( {\begin{array}{c}s\\ 2\beta \end{array}}\right) \left( {\begin{array}{c}\beta \\ y\end{array}}\right) \left( {\begin{array}{c}s - 2\beta \\ 2\gamma - 2\beta \end{array}}\right) \left( {\begin{array}{c}\gamma - \beta \\ \delta \end{array}}\right) \\&\quad \times \left( {\begin{array}{c}\beta + \delta - y\\ i\end{array}}\right) \left( {\begin{array}{c}\gamma - y - i\\ m - y - i\end{array}}\right) \Gamma (\beta + \tfrac{1}{2}) \Gamma (\gamma - \beta + \tfrac{1}{2}) \\&\quad \times \frac{\Gamma \left( \frac{j + s}{2} - \gamma + 1\right) }{\Gamma \left( \frac{k + s}{2} - \beta + 1\right) \Gamma \left( \frac{n - k + 1}{2} + \beta - \frac{s}{2}\right) } \frac{\Gamma \left( \frac{n - k + j + 1}{2} + \beta + \delta - y - i\right) }{\Gamma \left( \frac{n + 1}{2} + \beta + \delta - y\right) }. \end{aligned}$$

Shifting the index of the summation with respect to \(\delta \) by \(\beta \) and changing the order of summation, we obtain

$$\begin{aligned}{\hat{a}}_{n, j, k}^{s, i, m}&= \sum _{y = 0}^{m - i} \sum _{\gamma = m}^{\lfloor \frac{s}{2} \rfloor } \sum _{\delta = i + y }^{\gamma } \sum _{\beta = y}^{\delta } (-1)^{m + y + \gamma + \delta } \left( {\begin{array}{c}s\\ 2\beta \end{array}}\right) \left( {\begin{array}{c}\beta \\ y\end{array}}\right) \left( {\begin{array}{c}s - 2\beta \\ 2\gamma - 2\beta \end{array}}\right) \left( {\begin{array}{c}\gamma - \beta \\ \delta - \beta \end{array}}\right) \left( {\begin{array}{c}\delta - y\\ i\end{array}}\right) \left( {\begin{array}{c}\gamma - y - i\\ m - y - i\end{array}}\right) \\&\quad \times \Gamma (\beta + \tfrac{1}{2}) \Gamma (\gamma - \beta + \tfrac{1}{2}) \frac{\Gamma \left( \frac{j + s}{2} - \gamma + 1\right) }{\Gamma \left( \frac{k + s}{2} - \beta + 1\right) \Gamma \left( \frac{n - k + 1}{2} + \beta - \frac{s}{2}\right) } \frac{\Gamma \left( \frac{n - k + j + 1}{2} + \delta - y - i\right) }{\Gamma \left( \frac{n + 1}{2} + \delta - y\right) }. \end{aligned}$$

From an index shift by \(y\), applied to the summation with respect to \(\beta \), and an index shift by \(- i - y\), applied to the summation with respect to \(\delta \), we conclude that

$$\begin{aligned} {\hat{a}}_{n, j, k}^{s, i, m}&= \sum _{y = 0}^{m - i} \sum _{\gamma = m}^{\lfloor \frac{s}{2} \rfloor } \sum _{\delta = 0}^{\gamma - y - i} \sum _{\beta = 0}^{i + \delta } (-1)^{i + m + \gamma + \delta } \left( {\begin{array}{c}s\\ 2y + 2\beta \end{array}}\right) \left( {\begin{array}{c}y + \beta \\ y\end{array}}\right) \left( {\begin{array}{c}s - 2y - 2\beta \\ 2\gamma - 2y - 2\beta \end{array}}\right) \\&\quad \times \left( {\begin{array}{c}\gamma - y - \beta \\ i + \delta - \beta \end{array}}\right) \left( {\begin{array}{c}i + \delta \\ i\end{array}}\right) \left( {\begin{array}{c}\gamma - y - i\\ m - y - i\end{array}}\right) \Gamma (y + \beta + \tfrac{1}{2}) \Gamma (\gamma - y - \beta + \tfrac{1}{2}) \\&\quad \times \frac{\Gamma \left( \frac{j + s}{2} - \gamma + 1\right) }{\Gamma \left( \frac{k + s}{2} - y - \beta + 1\right) \Gamma \left( \frac{n - k + 1}{2} + y + \beta - \frac{s}{2}\right) } \frac{\Gamma \left( \frac{n - k + j + 1}{2} + \delta \right) }{\Gamma \left( \frac{n + 1}{2} + i + \delta \right) }. \end{aligned}$$
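
As a plausibility check of the index shifts and changes of the order of summation carried out so far, the reindexed quadruple sum can be compared numerically with the defining expression. The following sketch reuses a_hat_def and rgamma from the sketch above; the parameter choice \((n, j, k, s) = (7, 2, 5, 4)\) is only an example.

    def a_hat_reindexed(n, j, k, s, i, m):
        # Transcription of the quadruple sum after the index shifts in beta and delta.
        total = 0.0
        for y in range(0, m - i + 1):
            for gamma in range(m, s // 2 + 1):
                for delta in range(0, gamma - y - i + 1):
                    for beta in range(0, i + delta + 1):
                        total += ((-1) ** (i + m + gamma + delta)
                            * math.comb(s, 2 * y + 2 * beta) * math.comb(y + beta, y)
                            * math.comb(s - 2 * y - 2 * beta, 2 * gamma - 2 * y - 2 * beta)
                            * math.comb(gamma - y - beta, i + delta - beta)
                            * math.comb(i + delta, i)
                            * math.comb(gamma - y - i, m - y - i)
                            * math.gamma(y + beta + 0.5) * math.gamma(gamma - y - beta + 0.5)
                            * math.gamma((j + s) / 2 - gamma + 1)
                            * rgamma((k + s) / 2 - y - beta + 1)
                            * rgamma((n - k + 1) / 2 + y + beta - s / 2)
                            * math.gamma((n - k + j + 1) / 2 + delta)
                            * rgamma((n + 1) / 2 + i + delta))
        return total

    # Example comparison; m ranges over 0, ..., floor(s/2) = 2 here, and the two
    # printed values should agree up to rounding.
    for m in range(0, 3):
        for i in range(0, m + 1):
            print(i, m, a_hat_def(7, 2, 5, 4, i, m), a_hat_reindexed(7, 2, 5, 4, i, m))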

With Legendre’s duplication formula (applied three times), we obtain

$$\begin{aligned}&\left( {\begin{array}{c}s\\ 2y + 2\beta \end{array}}\right) \left( {\begin{array}{c}y + \beta \\ y\end{array}}\right) \left( {\begin{array}{c}s - 2y - 2\beta \\ 2\gamma - 2y - 2\beta \end{array}}\right) \left( {\begin{array}{c}\gamma - y - \beta \\ i + \delta - \beta \end{array}}\right) \left( {\begin{array}{c}i + \delta \\ i\end{array}}\right) \left( {\begin{array}{c}\gamma - y - i\\ m - y - i\end{array}}\right) \\&\qquad \times \Gamma (y + \beta + \tfrac{1}{2}) \Gamma (\gamma - y - \beta + \tfrac{1}{2}) \\&\quad = \left( {\begin{array}{c}s\\ 2i\end{array}}\right) \left( {\begin{array}{c}m - i\\ y\end{array}}\right) \left( {\begin{array}{c}\gamma - i\\ m - i\end{array}}\right) \left( {\begin{array}{c}s - 2i\\ 2\gamma - 2i\end{array}}\right) \left( {\begin{array}{c}\gamma - y - i\\ \delta \end{array}}\right) \left( {\begin{array}{c}i + \delta \\ \beta \end{array}}\right) \Gamma (i + \tfrac{1}{2}) \Gamma (\gamma - i + \tfrac{1}{2}), \end{aligned}$$

and hence

$$\begin{aligned}{\hat{a}}_{n, j, k}^{s, i, m}= & {} \Gamma (i + \tfrac{1}{2}) \left( {\begin{array}{c}s\\ 2i\end{array}}\right) \sum _{y = 0}^{m - i} \sum _{\gamma = m}^{\lfloor \frac{s}{2} \rfloor } \sum _{\delta = 0}^{\gamma - y - i} \sum _{\beta = 0}^{i + \delta } (-1)^{i + m + \gamma + \delta } \\&\times \left( {\begin{array}{c}m - i\\ y\end{array}}\right) \left( {\begin{array}{c}\gamma - i\\ m - i\end{array}}\right) \left( {\begin{array}{c}s - 2i\\ 2\gamma - 2i\end{array}}\right) \left( {\begin{array}{c}\gamma - y - i\\ \delta \end{array}}\right) \left( {\begin{array}{c}i + \delta \\ \beta \end{array}}\right) \Gamma (\gamma - i + \tfrac{1}{2}) \\&\times \frac{\Gamma \left( \frac{j + s}{2} - \gamma + 1\right) }{\Gamma \left( \frac{k + s}{2} - y - \beta + 1\right) \Gamma \left( \frac{n - k + 1}{2} + y + \beta - \frac{s}{2}\right) } \frac{\Gamma \left( \frac{n - k + j + 1}{2} + \delta \right) }{\Gamma \left( \frac{n + 1}{2} + i + \delta \right) }. \end{aligned}$$
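
The preceding rearrangement is applied termwise. The following self-contained sketch (again only a numerical plausibility check, not part of the proof) compares both sides of the displayed binomial–Gamma identity for all admissible index tuples with \(s\) at most 6; the printed maximal relative deviation should be of the order of rounding errors.

    import math

    # Termwise check of the binomial-Gamma identity over admissible index tuples.
    def lhs(s, y, beta, gamma, i, delta, m):
        return (math.comb(s, 2 * y + 2 * beta) * math.comb(y + beta, y)
                * math.comb(s - 2 * y - 2 * beta, 2 * gamma - 2 * y - 2 * beta)
                * math.comb(gamma - y - beta, i + delta - beta)
                * math.comb(i + delta, i) * math.comb(gamma - y - i, m - y - i)
                * math.gamma(y + beta + 0.5) * math.gamma(gamma - y - beta + 0.5))

    def rhs(s, y, beta, gamma, i, delta, m):
        return (math.comb(s, 2 * i) * math.comb(m - i, y) * math.comb(gamma - i, m - i)
                * math.comb(s - 2 * i, 2 * gamma - 2 * i)
                * math.comb(gamma - y - i, delta) * math.comb(i + delta, beta)
                * math.gamma(i + 0.5) * math.gamma(gamma - i + 0.5))

    worst = 0.0
    for s in range(0, 7):
        for m in range(0, s // 2 + 1):
            for i in range(0, m + 1):
                for y in range(0, m - i + 1):
                    for gamma in range(m, s // 2 + 1):
                        for delta in range(0, gamma - y - i + 1):
                            for beta in range(0, i + delta + 1):
                                l = lhs(s, y, beta, gamma, i, delta, m)
                                r = rhs(s, y, beta, gamma, i, delta, m)
                                worst = max(worst, abs(l - r) / max(abs(l), abs(r), 1.0))
    print("maximal relative deviation:", worst)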

Now we define \(a_{n, j, k}^{s, i, m} :=\, ( \Gamma (i + \tfrac{1}{2}) \left( {\begin{array}{c}s\\ 2i\end{array}}\right) )^{-1}{\hat{a}}_{n, j, k}^{s, i, m}\). We first apply relation (A.1) to the summation with respect to \(\beta \), and then we apply relation (A.1’) twice, to the summations with respect to \(\delta \) and \(y\). Thus we obtain

$$\begin{aligned}&\sum _{\beta = 0}^{i + \delta } \left( {\begin{array}{c}i + \delta \\ \beta \end{array}}\right) \frac{1}{\Gamma \left( \frac{k + s}{2} - y - \beta + 1\right) \Gamma \left( \frac{n - k - s + 1}{2} + y + \beta \right) } \\&\quad = \frac{\Gamma \left( \frac{n + 1}{2} + i + \delta \right) }{\Gamma \left( \frac{k + s}{2} - y + 1\right) \Gamma \left( \frac{n - k - s + 1}{2} + i + y + \delta \right) \Gamma \left( \frac{n + 1}{2}\right) }, \end{aligned}$$
$$\begin{aligned}&\sum _{\delta = 0}^{\gamma - y - i} (-1)^{\delta } \left( {\begin{array}{c}\gamma - y - i\\ \delta \end{array}}\right) \frac{\Gamma \left( \frac{n - k + j + 1}{2} + \delta \right) }{\Gamma \left( \frac{n - k - s + 1}{2} + i + y + \delta \right) }\\&\qquad = (-1)^{i + \gamma + y} \frac{\Gamma \left( \frac{n - k + j + 1}{2}\right) \Gamma \left( \frac{j + s}{2} - i - y + 1\right) }{\Gamma \left( \frac{n - k - s + 1}{2} + \gamma \right) \Gamma \left( \frac{j + s}{2} - \gamma + 1\right) }, \end{aligned}$$

where we also used (5) with \(c = \frac{j + s}{2} - i - y \ge 0\) and with \(\gamma - i - y \in \mathbb {N}_{0}\) in place of the parameter \(m\) of (5) (not to be confused with the summation index \(m\)), and

$$\begin{aligned} \sum _{y = 0}^{m - i} (-1)^{m + y} \left( {\begin{array}{c}m - i\\ y\end{array}}\right) \frac{\Gamma \left( \frac{j + s}{2} - i - y + 1\right) }{\Gamma \left( \frac{k + s}{2} - y + 1\right) }&= (-1)^{i} \frac{\Gamma \left( \frac{j + s}{2} - m + 1\right) \Gamma \left( \frac{k - j}{2} + m\right) }{\Gamma \left( \frac{k + s}{2} + 1\right) \Gamma \left( \frac{k - j}{2} + i\right) }. \end{aligned}$$

This gives

$$\begin{aligned} a_{n, j, k}^{s, i, m}&= (-1)^{i} \frac{\Gamma \left( \frac{n - k + j + 1}{2}\right) \Gamma \left( \frac{j + s}{2} - m + 1\right) \Gamma \left( \frac{k - j}{2} + m\right) }{\Gamma \left( \frac{n + 1}{2}\right) \Gamma \left( \frac{k + s}{2} + 1\right) \Gamma \left( \frac{k - j}{2} + i\right) } \\&\quad \times \sum _{\gamma = m}^{\lfloor \frac{s}{2} \rfloor } \left( {\begin{array}{c}\gamma - i\\ m - i\end{array}}\right) \left( {\begin{array}{c}s - 2i\\ 2\gamma - 2i\end{array}}\right) \frac{\Gamma \left( \gamma - i + \tfrac{1}{2}\right) }{\Gamma \left( \frac{n - k - s + 1}{2} + \gamma \right) }. \end{aligned}$$
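
The last of the three displayed summation identities (the sum with respect to \(y\), with the binomial coefficient \(\left( {\begin{array}{c}m - i\\ y\end{array}}\right) \)) can again be checked numerically; in the following self-contained sketch the parameter choice \((j, k, s) = (2, 6, 5)\) is only an example, and the identity does not involve \(n\).

    import math

    # Check of the summation identity for the sum with respect to y.
    def y_sum(j, k, s, i, m):
        return sum((-1) ** (m + y) * math.comb(m - i, y)
                   * math.gamma((j + s) / 2 - i - y + 1) / math.gamma((k + s) / 2 - y + 1)
                   for y in range(0, m - i + 1))

    def y_sum_closed(j, k, s, i, m):
        return ((-1) ** i * math.gamma((j + s) / 2 - m + 1) * math.gamma((k - j) / 2 + m)
                / (math.gamma((k + s) / 2 + 1) * math.gamma((k - j) / 2 + i)))

    # Example with (j, k, s) = (2, 6, 5); both printed values should agree up to rounding.
    for m in range(0, 3):
        for i in range(0, m + 1):
            print(i, m, y_sum(2, 6, 5, i, m), y_sum_closed(2, 6, 5, i, m))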

We deduce from Legendre’s duplication formula that

$$\begin{aligned} \left( {\begin{array}{c}\gamma - i\\ m - i\end{array}}\right) \left( {\begin{array}{c}s - 2i\\ 2\gamma - 2i\end{array}}\right) \Gamma \left( \gamma -i + \tfrac{1}{2}\right) = \sqrt{\pi }\left( {\begin{array}{c}\lfloor \frac{s}{2} \rfloor - i\\ m - i\end{array}}\right) \left( {\begin{array}{c}\lfloor \frac{s}{2} \rfloor - m\\ \gamma - m\end{array}}\right) \frac{\Gamma \left( \lfloor \frac{s + 1}{2} \rfloor - i + \frac{1}{2}\right) }{\Gamma \left( \lfloor \frac{s + 1}{2} \rfloor - \gamma + \tfrac{1}{2}\right) }. \end{aligned}$$

Denoting the resulting sum in \(a_{n, j, k}^{s, i, m}\) with respect to \(\gamma \) by \(S_{4}\), we obtain, after an index shift by \(m\),

$$\begin{aligned} S_{4}&= \sum _{\gamma = 0}^{\lfloor \frac{s}{2} \rfloor - m} \left( {\begin{array}{c}\lfloor \frac{s}{2} \rfloor - m\\ \gamma \end{array}}\right) \frac{1}{\Gamma (\lfloor \frac{s + 1}{2} \rfloor - m - \gamma + \tfrac{1}{2}) \Gamma \left( \frac{n - k - s + 1}{2} + m + \gamma \right) }, \end{aligned}$$

for which relation (A.1) yields

$$\begin{aligned} S_{4}&= \frac{\Gamma \left( \frac{n - k + s}{2} - m\right) }{\Gamma \left( \frac{n - k + 1}{2}\right) \Gamma \left( \frac{n - k}{2}\right) \Gamma \left( \lfloor \frac{s + 1}{2} \rfloor - m + \tfrac{1}{2}\right) }. \end{aligned}$$
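
The evaluation of \(S_{4}\) can be checked in the same way; in the following self-contained sketch the choice \(n - k = 8\) merely ensures that all Gamma arguments are positive.

    import math

    # Check of the closed form for S_4; floor((s + 1) / 2) is (s + 1) // 2 in Python.
    def s4_sum(n, k, s, m):
        return sum(math.comb(s // 2 - m, g)
                   / (math.gamma((s + 1) // 2 - m - g + 0.5)
                      * math.gamma((n - k - s + 1) / 2 + m + g))
                   for g in range(0, s // 2 - m + 1))

    def s4_closed(n, k, s, m):
        return (math.gamma((n - k + s) / 2 - m)
                / (math.gamma((n - k + 1) / 2) * math.gamma((n - k) / 2)
                   * math.gamma((s + 1) // 2 - m + 0.5)))

    # Example with n - k = 8; both printed values should agree up to rounding.
    for s in (3, 4, 5):
        for m in range(0, s // 2 + 1):
            print(s, m, s4_sum(12, 4, s, m), s4_closed(12, 4, s, m))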

We obtain from Legendre’s duplication formula

$$\begin{aligned}&\sqrt{\pi }\left( {\begin{array}{c}\lfloor \frac{s}{2} \rfloor - i\\ m - i\end{array}}\right) \Gamma (\lfloor \tfrac{s + 1}{2} \rfloor - i + \tfrac{1}{2}) S_{4} = \frac{\Gamma \left( \frac{n - k + s}{2} - m\right) }{\Gamma \left( \frac{n - k + 1}{2}\right) \Gamma \left( \frac{n - k}{2}\right) } \left( {\begin{array}{c}s - 2i\\ 2m - 2i\end{array}}\right) \Gamma (m - i + \tfrac{1}{2}). \end{aligned}$$

This gives

$$\begin{aligned} a_{n, j, k}^{s, i, m}&= (-1)^{i} \left( {\begin{array}{c}s - 2i\\ 2m - 2i\end{array}}\right) \Gamma (m - i + \tfrac{1}{2}) \frac{\Gamma \left( \frac{n - k + j + 1}{2}\right) }{\Gamma \left( \frac{n + 1}{2}\right) \Gamma \left( \frac{n - k + 1}{2}\right) \Gamma \left( \frac{n - k}{2}\right) \Gamma \left( \frac{k + s}{2} + 1\right) } \\&\quad \times \frac{\Gamma \left( \frac{n - k + s}{2} - m\right) \Gamma \left( \frac{j + s}{2} - m + 1\right) \Gamma \left( \frac{k - j}{2} + m\right) }{\Gamma \left( \frac{k - j}{2} + i\right) }. \end{aligned}$$
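
At this point the simplification of \({\hat{a}}_{n, j, k}^{s, i, m}\) is complete, and the closed form can be compared end-to-end with the defining quadruple sum. The following sketch reuses a_hat_def from the first sketch; the parameter choice \((n, j, k, s) = (9, 2, 6, 5)\) is only an example, and the two printed columns should agree up to rounding if no transcription error has crept in.

    def a_closed(n, j, k, s, i, m):
        # Closed form for a_{n,j,k}^{s,i,m} derived above.
        return ((-1) ** i * math.comb(s - 2 * i, 2 * m - 2 * i) * math.gamma(m - i + 0.5)
                * math.gamma((n - k + j + 1) / 2) * math.gamma((n - k + s) / 2 - m)
                * math.gamma((j + s) / 2 - m + 1) * math.gamma((k - j) / 2 + m)
                / (math.gamma((n + 1) / 2) * math.gamma((n - k + 1) / 2)
                   * math.gamma((n - k) / 2) * math.gamma((k + s) / 2 + 1)
                   * math.gamma((k - j) / 2 + i)))

    # End-to-end comparison with the defining quadruple sum (a_hat_def and rgamma
    # from the first sketch): \hat a = Gamma(i + 1/2) * binom(s, 2i) * a.
    n, j, k, s = 9, 2, 6, 5
    for m in range(0, s // 2 + 1):
        for i in range(0, m + 1):
            print(i, m, a_hat_def(n, j, k, s, i, m),
                  math.gamma(i + 0.5) * math.comb(s, 2 * i) * a_closed(n, j, k, s, i, m))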

Next, we conclude from further applications of Legendre’s duplication formula that

$$\begin{aligned} \frac{\omega _{n - k} \omega _{k - j}}{\omega _{n - j}} \frac{c_{n, j}^{r, s, l}}{c_{n, k}^{r, s - 2m, l + i}} \left( {\begin{array}{c}s\\ 2i\end{array}}\right) \Gamma (i + \tfrac{1}{2}) b_{n, j, k}^{s, l, i} a_{n, j, k}^{s, i, m}= c_{n, j, k}^{s, l, i, m} \end{aligned}$$

with \(c_{n, j, k}^{s, l, i, m}\) as defined in the statement of Theorem 1.

Finally, returning to (8) and using the definition of the generalized tensorial curvature measures, we get

$$\begin{aligned} I_1 = \sum _{k = j}^{n} \sum _{m = 0}^{\lfloor \frac{s}{2} \rfloor } \sum _{i = 0}^{m} c_{n, j, k}^{s, l, i, m} \, Q^{m - i} \phi _{k}^{r,s - 2m,l + i} (P, \beta ) \phi _{n - k + j} (P', \beta '). \end{aligned}$$

In the last step, we use that \(c_{n, j, j}^{s, l, i, m} = \mathbb {1}\{ i = m = 0 \}\) for \(k=j\). Moreover, in the case \(k=n\) we use that \( \phi _{n}^{r,s - 2m,l + i} \) vanishes for \(m \ne \frac{s}{2}\), so that the summand for \(k = n\) contributes only if \(s\) is even. Hence, for even \(s\), it remains to simplify the sum

$$\begin{aligned}&\sum _{i = 0}^{\frac{s}{2}} c_{n, j, n}^{s, l, i, \frac{s}{2}} \, Q^{\frac{s}{2} - i} \phi _{n}^{r,0,l + i} (P, \beta ) \phi _{j} (P', \beta ') =\sum _{i = 0}^{\frac{s}{2}} c_{n, j, n}^{s, l, i, \frac{s}{2}} \, \frac{\omega _{n + 2l + 2i}}{\omega _{n + s + 2l}} \phi _{n}^{r,0,\frac{s}{2} + l} (P, \beta ) \phi _{j} (P', \beta '). \end{aligned}$$

For this, an application of relation (A.1’) yields

$$\begin{aligned} c_{n,j}^s \, \phi _{n}^{r,0,\frac{s}{2} + l} (P, \beta ) \phi _{j} (P', \beta ')&= \frac{1}{(2\pi )^{s} \left( \frac{s}{2}\right) !} \frac{\Gamma \left( \frac{n + s}{2} + l\right) }{\Gamma (l - 1)} \frac{\Gamma \left( \frac{n}{2} + 1\right) }{\Gamma \left( \frac{n + s}{2} + 1\right) } \frac{\Gamma \left( \frac{n - j + s}{2}\right) }{\Gamma \left( \frac{n - j}{2}\right) }\nonumber \\&\quad \times \sum _{i = 0}^{\frac{s}{2}} (-1)^{i} \left( {\begin{array}{c}\frac{s}{2}\\ i\end{array}}\right) \frac{\Gamma (i + l - 1)}{\Gamma \left( \frac{n}{2} + l + i\right) } \phi _{n}^{r,0,\frac{s}{2} + l} (P, \beta ) \phi _{j} (P', \beta ') \end{aligned}$$
(10)

as required. This completes the proof.