1 Introduction

The concepts of linear 2-normed spaces and 2-metric spaces were introduced and investigated by Gähler [13]. In [6] and [7], Diminnie, Gähler, and White studied 2-inner product spaces.

A classification of results related to the theory of 2-inner product spaces can be found in the book [3], where several properties of 2-inner product spaces are given. In [10], Dragomir et al. prove the corresponding version of the Boas–Bellman inequality in 2-inner product spaces. Other properties of 2-inner product spaces can be found in [4].

In 1989, Misiak [20] generalized the concept of a 2-inner product space in the following way: let \(n\ge 2\) be an integer and let X be a vector space of dimension \(\dim X=d\ge n\) (d may be infinite) over the field of real numbers \({\mathbb {R}}\). An \({\mathbb {R}}\)-valued function \(\langle \cdot ,\cdot \mid \cdot ,...,\cdot \rangle\) on \(X^{n+1}\) satisfying the following properties:

  1. (I1)

    \(\langle v_{1},v_{1}| v_{2},...,v_{n}\rangle \ge 0; \langle v_{1},v_{1}|v_{2},...,v_{n}\rangle = 0\) if and only if \(v_{1},v_{2},...,v_{n}\) are linearly dependent;

  2. (I2)

    \(\langle v_1,v_1| v_{2},...,v_{n}\rangle =\langle v_{i_1},v_{i_1}| v_{i_2},v_{i_3},...,v_{i_n}\rangle\), for every permutation \((i_1,i_2,...,i_n)\) of (1, 2, ..., n);

  3. (I3)

    \(\langle v,w| v_{2},...,v_{n}\rangle =\langle w,v| v_{2},...,v_{n}\rangle\);

  4. (I4)

    \(\langle \alpha v,w| v_{2},...,v_{n}\rangle =\alpha \langle v,w| v_{2},...,v_{n}\rangle\), for every scalar \(\alpha \in {\mathbb {R}}\);

  5. (I5)

    \(\langle v+v',w| v_{2},...,v_{n}\rangle =\langle v,w| v_{2},...,v_{n}\rangle +\langle v',w| v_{2},...,v_{n}\rangle\);

is called an n-inner product on X, and the pair \((X,\langle \cdot ,\cdot |\cdot ,...,\cdot \rangle )\) is called an n-inner product space or n-pre-Hilbert space.

It is easy to see that the n-inner product is a linear function of its first two arguments. Several results related to the theory of n-inner product spaces can be found in [15, 21]: \(\langle v,w|\alpha v_{2},...,v_{n}\rangle =\alpha ^{2}\langle v,w| v_{2},...,v_{n}\rangle\), for every real number \(\alpha\) and for \(v,w,v_{2},...,v_{n} \in X\); \(\langle v,w|v_{2}+v_{2}^{\prime},v_{3},\ldots,v_{n}\rangle -\langle v,w|v_{2}-v_{2}^{\prime},v_{3},\ldots,v_{n}\rangle =\langle v_{2},v_{2}'|v+w,v_{3},...,v_{n}\rangle\) \(-\langle v_{2},v_{2}'|v-w, v_{3},...,v_{n}\rangle\), for all \(v,w,v_{2},v_{3},...,v_{n}, v_{2}' \in X\); and an extension of the Cauchy–Schwarz inequality to arbitrary n:

$$\begin{aligned} |\langle v,w|v_{2},...,v_{n}\rangle |\le \sqrt{\langle v,v|v_{2},...,v_{n}\rangle }\sqrt{\langle w,w|v_{2},...,v_{n}\rangle }, \end{aligned}$$
(1.1)

for all \(v,w,v_{2},...,v_{n} \in X\). The equality holds in (1.1) if and only if \(v,w,v_{2},...,v_{n}\) are linearly dependent.

Other consequences from the above properties can be inferred very easily:

$$\begin{aligned}&\langle 0,w|v_{2},...,v_{n}\rangle =\langle v,0|v_{2},...,v_{n}\rangle =\langle v,w|0,...,v_{n}\rangle =0, \\&\langle v_{2},w|v_{2},...,v_{n}\rangle =\langle v,v_{2}|v_{2},...,v_{n}\rangle =0, \end{aligned}$$

for all \(v,w,v_{2},...,v_{n}\in X\).

Let \((X,\langle \cdot ,\cdot |\cdot ,...,\cdot \rangle )\) be an n-inner product space, \(n\ge 2\). We can define a function \(\Vert \cdot |\cdot ,...,\cdot \Vert\) on \(X\times X\times ...\times X=X^{n}\) by:

$$\begin{aligned} \Vert v|v_{2},...,v_{n}\Vert :=\sqrt{\langle v,v|v_{2},...,v_{n}\rangle }, \end{aligned}$$

for all \(v,v_{2},...,v_{n} \in X\), which is shown in [20] to satisfy the following conditions:

  1. (N1)

    \(\Vert v|v_{2},...,v_{n}\Vert \ge 0\) and \(\Vert v|v_{2},...,v_{n}\Vert =0\) if and only if \(v,v_{2},...,v_{n}\) are linearly dependent;

  2. (N2)

    \(\Vert v|v_{2},...,v_{n}\Vert\) is invariant under permutation;

  3. (N3)

    \(\Vert \alpha v|v_{2},...,v_{n}\Vert =|\alpha |\Vert v|v_{2},...,v_{n}\Vert\), for any scalar \(\alpha \in {\mathbb {R}}\);

  4. (N4)

    \(\Vert v+w|v_{2},...,v_{n}\Vert \le \Vert v|v_{2},...,v_{n}\Vert +\Vert w|v_{2},...,v_{n}\Vert\),

for all \(v,w,v_{2},...,v_{n}\in X.\)

A function \(\Vert \cdot |\cdot ,...,\cdot \Vert\) defined on \(X^{n}\) and satisfying the above conditions is called an n-norm on X and \((X,\Vert \cdot |\cdot ,...,\cdot \Vert )\) is called a linear n-normed space.

It is easy to see that if \((X,\langle \cdot ,\cdot |\cdot ,...,\cdot \rangle )\) is an n-inner product space over the field of real numbers \({\mathbb {R}}\), then \((X,\Vert \cdot |\cdot ,...,\cdot \Vert )\) is a linear n-normed space and the n-norm \(\Vert \cdot |\cdot ,...,\cdot \Vert\) is generated by the n-inner product \(\langle \cdot ,\cdot |\cdot ,...,\cdot \rangle\).

Furthermore, we have the parallelogram law [3]:

$$\begin{aligned} \Vert v+w|v_{2},...,v_{n}\Vert ^{2}+\Vert v-w|v_{2},...,v_{n}\Vert ^{2}=2\Vert v|v_{2},...,v_{n}\Vert ^{2}+2\Vert w|v_{2},...,v_{n}\Vert ^{2}, \end{aligned}$$
(1.2)

for all \(v,w,v_{2},..,v_{n}\in X\) and the polarization identity (see, e.g., [3] and [4]):

$$\begin{aligned} \Vert v+w|v_{2},...,v_{n}\Vert ^{2}-\Vert v-w|v_{2},...,v_{n}\Vert ^{2}=4\langle v,w|v_{2},...,v_{n}\rangle , \end{aligned}$$
(1.3)

for all \(v,w,v_{2},...,v_{n} \in X.\)

The standard n-inner product on an inner product space \(X=(X,\langle \cdot ,\cdot \rangle )\) is given by:

$$\begin{aligned} \langle v,w|v_{2},...,v_{n}\rangle := \begin{vmatrix} \langle v,w\rangle&\langle v,v_{2}\rangle&...&\langle v,v_{n}\rangle \\ \langle v_{2},w\rangle&\langle v_{2},v_{2} \rangle&...&\langle v_{2},v_{n}\rangle \\ \vdots&\vdots&\ddots&\vdots \\ \langle v_{n},w \rangle&\langle v_{n},v_{2} \rangle&...&\langle v_{n},v_{n}\rangle \end{vmatrix}, \end{aligned}$$
(1.4)

which generates the n-norm \(\Vert v|v_{2},...,v_{n}\Vert :=\sqrt{\langle v,v|v_{2},...,v_{n}\rangle }\), representing the volume of the n-dimensional parallelepiped spanned by \(v,v_{2},...,v_{n}\).
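For readers who wish to experiment, the standard n-inner product (1.4) is straightforward to evaluate numerically. The following Python sketch (the function name std_n_inner and the test data are ours, added only for illustration) builds the determinant from an ordinary inner product and checks the Cauchy–Schwarz inequality (1.1) on random vectors for \(n=3\).

```python
import numpy as np

def std_n_inner(v, w, *vs):
    """Standard n-inner product <v,w|v_2,...,v_n> of (1.4):
    the determinant of the matrix of ordinary inner products."""
    rows = [v] + list(vs)            # rows indexed by v, v_2, ..., v_n
    cols = [w] + list(vs)            # columns indexed by w, v_2, ..., v_n
    return np.linalg.det(np.array([[np.dot(r, c) for c in cols] for r in rows]))

rng = np.random.default_rng(0)
v, w, v2, v3 = rng.standard_normal((4, 5))   # n = 3, vectors in R^5

# Cauchy-Schwarz inequality (1.1) for the standard n-inner product
lhs = abs(std_n_inner(v, w, v2, v3))
rhs = np.sqrt(std_n_inner(v, v, v2, v3) * std_n_inner(w, w, v2, v3))
assert lhs <= rhs + 1e-9
```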

Various types of applications of n-inner products and n-norms can be found in the recent papers [2, 16,17,18, 22, 23, 25, 26].

Remark 1.1

The standard n-inner product satisfies also the following additional condition:

  1. (I6)

    If \(v,v_2,\ldots ,v_n\) are linearly dependent, then \(\langle v,w|v_2,\ldots ,v_n\rangle =0\),

for \(v,w,v_2,\ldots ,v_n\in X\).

The motivation of this article is to study another type of n-inner product, built on the properties of the n-inner product except property (I2). We will define the weak n-inner product and the n-iterated 2-inner product, and we will give its representation in terms of the standard k-inner products, \(k\le n\), using Dodgson's identity for determinants. We also present a brief characterization of a linear regression model for random variables in the discrete case. Finally, we generalize the Chebyshev functional using the n-iterated 2-inner product.

2 The weak n-inner product

Let X be a real vector space.

Definition 2.1

An \({\mathbb {R}}\)-valued function \((\cdot ,\cdot \mid \cdot ,...,\cdot )\) on \(X^{n+1}\), \(n\ge 2\), satisfying the following properties:

  1. (P1)

    Positivity: \((x,x| x_{n},...,x_{2})\ge 0\;\text{ and }\; (x,x|x_{n},...,x_{2})= 0\) if and only if \(x, x_{2},x_{3},...,x_{n}\) are linearly dependent;

  2. (P2)

    Interchangeability: \((x,x| x_{n},...,x_{2})=(x_{n},x_{n}| x,x_{n-1},...,x_{2})\);

  3. (P3)

    Symmetry: \((x,y| x_{n},...,x_{2})=(y,x| x_{n},...,x_{2})\);

  4. (P4)

    Homogeneity: \((\alpha x,y| x_{n},...,x_{2})=\alpha (x,y| x_{n},...,x_{2})\), for every scalar \(\alpha \in {\mathbb {R}}\).

  5. (P5)

    Additivity: \((x+x',y| x_{n},...,x_{2})=(x,y| x_{n},...,x_{2})+(x',y| x_{n},...,x_{2})\);

is called a weak n-inner product on X, and the pair \((X,(\cdot ,\cdot |\cdot ,...,\cdot ))\) is called a weak n-inner product space or weak n-pre-Hilbert space.

Remark 2.2

It is easy to see that:

$$\begin{aligned} (0,y|x_{n},...,x_{2})=(x,0|x_{n},...,x_{2})=(x,y|0,...,x_{2})=0. \end{aligned}$$

Remark 2.3

Obviously, an n-inner product is a weak n-inner product, so an n-inner product space is a weak n-inner product space, but the converse is not true. This fact will be shown in Remark 2.14.

For \(n=2\), a weak n-inner product is also an n-inner product. For \(n\ge 3\), a weak n-inner product can be built, for instance, by the formula:

$$\begin{aligned} (x,y|x_{n},\ldots ,x_2)=\Theta (x,y|x_{n},\ldots ,x_2)\cdot \Psi (x_{n-1},\ldots ,x_{2}), \end{aligned}$$

where \(\Theta (x,y|x_{n},\ldots ,x_2)\) is an n-inner product and \(\Psi :X^{n-2}\rightarrow {\mathbb {R}}\) is a function with the properties \(\Psi (x_{n-1},\ldots ,x_{2})\ge 0\), for all \((x_{n-1},\ldots ,x_{2})\), and \(\Psi (x_{n-1},\ldots ,x_{2})=0\) iff \(x_{n-1},\ldots ,x_{2}\) are linearly dependent (in the case \(n=3\), this means \(x_2=0\)).

In the next lemma, we generalize a property that holds in the case of 2-inner products. The proof is based on the method used in [4].

Lemma 2.4

Let \(x_2,\ldots ,x_n,x,y\in X\). If \(x_2,\ldots ,x_n,x\) are linearly dependent, then:

$$\begin{aligned} (x,y|x_n,\ldots ,x_2)=0. \end{aligned}$$
(2.1)

Proof

We consider two cases.

Case 1. \(x_2,\ldots ,x_n,y\) are linearly independent. Consider the vector:

$$\begin{aligned} u=(y,y|x_n,\ldots ,x_2)x-(x,y|x_n,\ldots ,x_2)y. \end{aligned}$$

Then from P1), we have \((u,u|x_n,\ldots ,x_2)\ge 0\). This inequality is equivalent to:

$$\begin{aligned} (y,y|x_n,\ldots ,x_2)[(x,x|x_n,\ldots ,x_2)(y,y|x_n,\ldots ,x_2)-(x,y|x_n,\ldots ,x_2)^2]\ge 0. \end{aligned}$$

Since \(x_2,\ldots ,x_n,x\) are linearly dependent, from (P1) one has \((x,x|x_n,\ldots ,x_2)=0\) and hence:

$$\begin{aligned} -(y,y|x_n,\ldots ,x_2)(x,y|x_n,\ldots ,x_2)^2\ge 0. \end{aligned}$$

Since \(x_2,\ldots ,x_n,y\) are linearly independent, it follows that \((y,y|x_n,\ldots ,x_2)>0\). Consequently, one obtains (2.1).

Case 2. \(x_2,\ldots ,x_n,y\) are linearly dependent. Then, also \(x_2,\ldots ,x_n,x+y\) are linearly dependent. We have:

$$\begin{aligned}&(x,y|x_n,\ldots ,x_2)&\\ =&\frac{1}{2}[(x+y,x+y|x_n,\ldots ,x_2)-(x,x|x_n,\ldots ,x_2)-(y,y|x_n,\ldots ,x_2)].&\end{aligned}$$

Because, by (P1), \((x,x|x_n,\ldots ,x_2)=0\), \((y,y|x_n,\ldots ,x_2)=0\), and \((x+y,x+y|x_n,\ldots ,x_2)=0\) (each of the families \(x_2,\ldots ,x_n,x\), \(x_2,\ldots ,x_n,y\), and \(x_2,\ldots ,x_n,x+y\) being linearly dependent), relation (2.1) follows. \(\square\)

Theorem 2.5

Suppose that \((X,( \cdot ,\cdot |\cdot ,...,\cdot ))\) is a weak n-inner product space over the field of real numbers \({\mathbb {R}}\). Let \(x_2,\ldots , x_n\in X\), \(n\ge 2\), be fixed. Denote \(Y=span\{x_2,\ldots ,x_n\}\). Define the quotient space \(X/Y=\{{\hat{x}}|\;x\in X\}\), where \({\hat{x}}=\{u\in X|u-x\in Y\}\), \(x\in X\). Then, the function \(\psi : (X/Y)^2\rightarrow {\mathbb {R}}\), \(\psi ({\hat{x}},{\hat{y}}):= (x,y|x_{n},...,x_{2})\), \({\hat{x}},{\hat{y}}\in X/Y\), is well defined and is a semi-inner product on X/Y. Moreover, if \(x_2,\ldots ,x_n\) are linearly independent, then \(\psi\) is an inner product.

Proof

Let \(x,x',y,y'\in X\), such that \(x'-x\in Y\) and \(y'-y\in Y\). Using Lemma 2.4, we get \(\psi (\widehat{x'},\hat{y'})=(x',y'|x_{n},...,x_{2}) = (x,y|x_{n},...,x_{2})+(x'-x,y|x_{n},...,x_{2})+(x,y'-y|x_{n},...,x_{2})+(x'-x,y'-y|x_{n},...,x_{2})=(x,y|x_{n},...,x_{2}) =\psi ({\widehat{x}},{\hat{y}})\). This means that \(\psi\) is well defined.

From (P1), we have \(\psi ({\hat{x}},{\hat{x}})=(x,x|x_{n},...,x_{2})\ge 0\). Moreover, if \(\psi ({\hat{x}},{\hat{x}})=0\), then \((x,x|x_{n},\ldots ,x_{2})=0\), which implies that \(x,x_2,\ldots ,x_n\) are linearly dependent. If \(x_2,\ldots , x_n\) are linearly independent, it follows that \(x\in Y\), and then \({\hat{x}}={\hat{0}}\).

The other properties of the inner product follow in a simple manner from conditions P3), P4), and P5). \(\square\)

Theorem 2.6

(Schwarz type inequality) Let \((X,(\cdot ,\cdot |\cdot ,...,\cdot ))\) be a weak n-inner product space. For any \(x,y,x_{2},...,x_{n} \in X\), we have:

$$\begin{aligned} |(x,y|x_{n},...,x_{2})|\le \sqrt{(x,x|x_{n},...,x_{2})}\sqrt{(y,y|x_{n},...,x_{2})}. \end{aligned}$$
(2.2)

If \(x_2,\ldots ,x_n\) are linearly independent, then the equality holds in (2.2) if and only if there exist \(\mu \in {\mathbb {R}}_+\) and \(u\in Y:=span\{x_2,\ldots ,x_n\}\), such that \(y=\mu x+u\).

Proof

By taking into account Theorem 2.5 and the notations given there, we have:

$$\begin{aligned}&|(x,y|x_{n},...,x_{2})|=|\psi ({\hat{x}},{\hat{y}})|\le \sqrt{\psi ({\hat{x}},{\hat{x}})}\sqrt{\psi ({\hat{y}},{\hat{y}})}&\\ =&\sqrt{(x,x|x_{n},...,x_{2})}\sqrt{(y,y|x_{n},...,x_{2})}.&\end{aligned}$$

If \(x_2,\ldots ,x_n\) are linearly independent, then the equality holds in (2.2) iff there is \(\mu \ge 0\), such that \({\hat{y}}=\mu {\hat{x}}\), i.e., there exists \(u\in Y\) for which \(y=\mu x+u\). \(\square\)

Definition 2.7

Let \((X,(\cdot ,\cdot |\cdot ,...,\cdot ))\) be a weak n-inner product space, \(n\ge 2\). We can define a function \(\Vert \cdot |\cdot ,...,\cdot \Vert\) on \(X\times X\times ...\times X=X^{n}\) by:

$$\begin{aligned} \Vert x|x_{n},...,x_{2}\Vert :=\sqrt{(x,x|x_{n},...,x_{2})},\; \text{ for } \text{ all }\; x,x_{2},...,x_{n} \in X. \end{aligned}$$
(2.3)

Proposition 2.8

If \((X,(\cdot ,\cdot |\cdot ,...,\cdot ))\) is a weak n-inner product space, then the function \(\Vert \cdot |\cdot ,...,\cdot \Vert\) defined in (2.3) satisfies the following conditions:

(C1):

\(\Vert x|x_{n},...,x_{2}\Vert \ge 0\) and \(\Vert x|x_{n},...,x_{2}\Vert =0\) if and only if \(x, x_{2},...,x_{n}\) are linearly dependent;

(C2):

\(\Vert x|x_{n},x_{n-1},...,x_{2}\Vert =\Vert x_n|x,x_{n-1},...,x_{2}\Vert\);

(C3):

\(\Vert \alpha x|x_{n},...,x_{2}\Vert =|\alpha |\Vert x|x_{n},...,x_{2}\Vert\), for any scalar \(\alpha \in {\mathbb {R}};\)

(C4):

\(\Vert x+y|x_{n},...,x_{2}\Vert \le \Vert x|x_{n},...,x_{2}\Vert +\Vert y|x_{n},...,x_{2}\Vert\)

for all \(x,y,x_{2},...,x_{n}\in X.\)

Proof

Conditions (C1)–(C4) follow immediately from conditions (P1)–(P5) and Definition 2.7. \(\square\)

Definition 2.9

Let X be a real vector space. A real function \(\Vert \cdot |\cdot ,...,\cdot \Vert\) defined on \(X^{n}\) and satisfying conditions (C1)–(C4) is called a weak n-norm on X and \((X,\Vert \cdot |\cdot ,...,\cdot \Vert )\) is called a linear weak n-normed space.

It follows that if \((X,( \cdot ,\cdot |\cdot ,...,\cdot ))\) is a weak n-inner product space over the field of real numbers \({\mathbb {R}}\), then \((X,\Vert \cdot |\cdot ,...,\cdot \Vert )\) is a linear weak n-normed space and the weak n-norm \(\Vert \cdot |\cdot ,...,\cdot \Vert\) is generated by the weak n-inner product \((\cdot ,\cdot |\cdot ,...,\cdot )\).

Theorem 2.10

Under the conditions of Theorem 2.5, the function \(\varphi :X/Y\rightarrow {\mathbb {R}}_+\), \(\varphi ({\hat{x}}):= \Vert x|x_{n},...,x_{2}\Vert\), \({\hat{x}}\in X/Y\), is well defined and is a semi-norm on X/Y. Moreover, if \(x_2,\ldots ,x_n\) are linearly independent, then \(\varphi\) is a norm.

Proof

It follows immediately from Theorem 2.5, since \(\varphi ({\hat{x}})=\sqrt{\psi ({\hat{x}},{\hat{x}})}\), \({\hat{x}}\in X/Y\), where \(\psi\) is the function defined in that theorem. \(\square\)

In an inner product space, a special weak n-inner product can be defined by recurrence starting from the 2-inner product. Recall that the 2-inner product was studied in [3, 4].

Definition 2.11

Let \((X,\langle \cdot ,\cdot \rangle )\) be a real pre-Hilbert space. The n-iterated 2-inner product, or standard weak n-inner product \((\cdot ,\cdot |\cdot ,\ldots ,\cdot )_*:X^{n+1}\rightarrow {\mathbb {R}}\) is defined for \(n\ge 2\) as follows. For \(n=2\), \((\cdot ,\cdot |\cdot )_*\) coincides with the standard 2-inner product, that is:

$$\begin{aligned} (x,y|z)_*:=\langle x,y|z\rangle =\begin{vmatrix} \langle x,y \rangle&\langle x,z \rangle \\ \langle z,y \rangle&\langle z,z \rangle \end{vmatrix}=\langle x,y\rangle \langle z,z \rangle -\langle x,z\rangle \langle z,y\rangle ,\quad x,y,z \in X. \end{aligned}$$
(2.4)

Then, if \(n\ge 3\) and \(x,y,x_{2},...,x_{n}\in X\), define:

$$\begin{aligned} (x,y|x_{n},...,x_{2})_*:=\begin{vmatrix} (x,y|x_{n-1},...,x_{2})_*&(x,x_{n}|x_{n-1},...,x_{2})_*\\ (x_{n},y|x_{n-1},...,x_{2})_*&(x_{n},x_{n}|x_{n-1},...,x_{2})_* \end{vmatrix}. \end{aligned}$$
(2.5)
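The recurrence (2.4)–(2.5) translates directly into code. The following minimal Python sketch (the function name iterated2 is ours, added only for illustration) is reused in the numerical checks below:

```python
import numpy as np

def iterated2(x, y, *xs):
    """n-iterated 2-inner product (x,y|x_n,...,x_2)_* of Definition 2.11;
    xs lists the vectors after the bar in the order x_n, ..., x_2."""
    if len(xs) == 1:                 # base case (2.4): the standard 2-inner product
        z = xs[0]
        return np.dot(x, y) * np.dot(z, z) - np.dot(x, z) * np.dot(z, y)
    xn, rest = xs[0], xs[1:]         # recurrence (2.5): a 2x2 determinant
    return (iterated2(x, y, *rest) * iterated2(xn, xn, *rest)
            - iterated2(x, xn, *rest) * iterated2(xn, y, *rest))
```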

Theorem 2.12

If \((X,\langle \cdot ,\cdot \rangle )\) is a real pre-Hilbert space, then, for any \(n\ge 2\), the function \((\cdot ,\cdot |\cdot ,\ldots ,\cdot )_*:X^{n+1}\rightarrow {\mathbb {R}}\) given in Definition 2.11 is a weak n-inner product.

Proof

Consider the proposition S(n): the n-iterated 2-inner product satisfies conditions (P1)–(P5). We prove this proposition by mathematical induction, for \(n\ge 2\).

For \(n=2\), S(n) is true, since we know from [3, 4] that the standard 2-inner product, \((x,y|z)_*=\det \begin{pmatrix} \langle x,y \rangle & \langle x,z \rangle \\ \langle z,y \rangle & \langle z,z \rangle \end{pmatrix}\), satisfies conditions (P1)–(P5).

Suppose S(n) is true; we prove that \(S(n+1)\) is true. The \((n+1)\)-iterated 2-inner product is given by:

$$\begin{aligned} (x,y|x_{n+1},x_{n},...,x_{2})_*=\begin{vmatrix} (x,y|x_{n},...,x_{2})_*&(x,x_{n+1}|x_{n},...,x_{2})_*\\ (x_{n+1},y|x_{n},...,x_{2})_*&(x_{n+1},x_{n+1}|x_{n},...,x_{2})_* \end{vmatrix}. \end{aligned}$$

Let us prove (P1) for \(n+1\). First, we prove that \((x,x|x_{n+1},...,x_{2})_* \ge 0\), for \(x, x_2,\ldots ,x_{n+1}\in X\).

Case 1: \((x,x|x_{n},...,x_{2})_*=0\). Then, from property (P1) for n, it follows that \(x, x_{2},...,x_{n}\) are linearly dependent. From the induction hypothesis and from Lemma 2.4, it follows that \((x,x_{n+1}|x_{n},...,x_{2})_*=0\). Then:

$$\begin{aligned} (x,x|x_{n+1},\ldots ,x_{2})_*&= \begin{vmatrix} (x,x|x_{n},...,x_{2})_*&(x,x_{n+1}|x_{n},...,x_{2})_*\\ (x_{n+1},x|x_{n},...,x_{2})_*&(x_{n+1},x_{n+1}|x_{n},...,x_{2})_* \end{vmatrix}&\\&=\begin{vmatrix} 0&0\\ (x_{n+1},x|x_{n},...,x_{2})_*&(x_{n+1},x_{n+1}|x_{n},...,x_{2})_* \end{vmatrix}=0.&\end{aligned}$$

Case 2: \((x,x|x_{n},...,x_{2})_*>0\). From (P1) for n, we have \((z,z|x_{n},...,x_{2})_* \ge 0\), for all \(z\in X\), and then:

$$\begin{aligned} (\lambda x+x_{n+1},\lambda x +x_{n+1}|x_{n},...,x_{2})_* \ge 0,\; \text{ for } \text{ all }\;\lambda \in {\mathbb {R}}. \end{aligned}$$

We obtain the following relation:

$$\begin{aligned} \lambda ^{2}(x,x|x_{n},...,x_{2})_*+2\lambda (x,x_{n+1}|x_{n},...,x_{2})_*+(x_{n+1},x_{n+1}|x_{n},...,x_{2})_* \ge 0,\;\forall \lambda . \end{aligned}$$

Since \((x,x|x_{n},...,x_{2})_*>0\), the discriminant \(\Delta _{\lambda }\) of this polynomial in the variable \(\lambda\) is nonpositive. Hence, \((x,x|x_{n+1},x_{n},...,x_{2})_* = -\frac{1}{4}\Delta _{\lambda }\ge 0.\) Therefore, in both cases, we obtain \((x,x|x_{n+1},...,x_{2})_* \ge 0\).

On the other hand, let us suppose that \((x,x|x_{n+1},x_n,...,x_2)_*=0\), which means that:

$$\begin{aligned} (x,x|x_n,...,x_2)_*(x_{n+1},x_{n+1}|x_n,...,x_2)_*-(x,x_{n+1}|x_n,...,x_2)_*^2=0. \end{aligned}$$

If \((x_{n+1},x_{n+1}|x_n,...,x_2)_*\not =0\), the expression above is equal to \(-\frac{1}{4}\Delta _{\lambda }\), where \(\Delta _{\lambda }\) is the discriminant of the polynomial equation of degree 2 in \(\lambda\): \(Q(\lambda )=0\), where \(Q(\lambda )=(x+\lambda x_{n+1},x+\lambda x_{n+1}|x_n,...,x_2)_*\). Since the discriminant is 0, there exists \(\lambda _0\in {\mathbb {R}}\) for which \(Q(\lambda _0)=0\). From condition (P1) for n, it follows that \(x+\lambda _0x_{n+1}, x_n,...,x_2\) are linearly dependent. Then, there exist numbers \(\alpha , \alpha _i\in {\mathbb {R}}\), not all zero, such that \(\alpha (x+\lambda _0x_{n+1})+\sum _{i=2}^{n}\alpha _i x_i=0\). Therefore, \(x, x_2,...,x_{n+1}\) are linearly dependent. If \((x_{n+1},x_{n+1}|x_n,...,x_2)_*=0\), then \(x_2,...,x_{n+1}\) are linearly dependent from (P1) for n, and then \(x, x_2,...,x_{n+1}\) are linearly dependent. Condition (P1) is completely proved for \(n+1\).

We prove condition (P2) for \(n+1\):

$$\begin{aligned} (x,x|x_{n+1},...,x_{2})_*&=\begin{vmatrix} (x,x|x_{n},...,x_{2})_*&(x,x_{n+1}|x_{n},...,x_{2})_*\\ (x_{n+1},x|x_{n},...,x_{2})_*&(x_{n+1},x_{n+1}|x_{n},...,x_{2})_* \end{vmatrix}&\\&=\begin{vmatrix} (x_{n+1},x_{n+1}|x_{n},...,x_{2})_*&(x_{n+1},x|x_{n},...,x_{2})_*\\ (x,x_{n+1}|x_{n},...,x_{2})_*&(x,x|x_{n},...,x_{2})_* \end{vmatrix}&\\&=(x_{n+1},x_{n+1}|x,x_{n},...,x_{2})_*.&\end{aligned}$$

Consequently, condition (P2) is true for \(n+1\).

We pass to the verification of condition (P3). We have:

$$\begin{aligned} (x,y|x_{n+1},x_{n},...,x_{2})_*=\begin{vmatrix} (x,y|x_{n},...,x_{2})_*&(x,x_{n+1}|x_{n},...,x_{2})_*\\ (x_{n+1},y|x_{n},...,x_{2})_*&(x_{n+1},x_{n+1}|x_{n},...,x_{2})_* \end{vmatrix} \\ = \begin{vmatrix} (y,x|x_{n},...,x_{2})_*&(x_{n+1},y|x_{n},...,x_{2})_*\\ (x,x_{n+1}|x_{n},...,x_{2})_*&(x_{n+1},x_{n+1}|x_{n},...,x_{2})_* \end{vmatrix}=(y,x|x_{n+1},x_{n},...,x_{2})_*, \end{aligned}$$

because \(\det A=\det A^{T}\) for any square matrix A, and \((x,y|x_{n},...,x_{2})_*=(y,x|x_{n},...,x_{2})_*\). Therefore, the \((n+1)\)-iterated 2-inner product satisfies condition (P3) for \(n+1\).

We pass now to condition (P4). Since we have:

$$\begin{aligned}&(\alpha x,y|x_{n+1},x_{n},...,x_{2})_*= \begin{vmatrix} (\alpha x,y|x_{n},...,x_{2})_*&(\alpha x,x_{n+1}|x_{n},...,x_{2})_*\\ (x_{n+1},y|x_{n},...,x_{2})_*&(x_{n+1},x_{n+1}|x_{n},...,x_{2})_* \end{vmatrix} \\&\quad =\begin{vmatrix} \alpha (x,y|x_{n},...,x_{2})_*&\alpha (x,x_{n+1}|x_{n},...,x_{2})_*\\ (x_{n+1},y|x_{n},...,x_{2})_*&(x_{n+1},x_{n+1}|x_{n},...,x_{2})_* \end{vmatrix}=\alpha (x,y|x_{n+1},x_{n},...,x_{2})_*, \end{aligned}$$

it follows that condition (P4) is proved for \(n+1\).

For (P5) for \(n+1\), the determinant \((x+x',y|x_{n+1},x_{n},...,x_{2})_*\) can be expressed by a determinant of second order, having in the first row the entries \((x+x',y|x_{n},...,x_{2})_*\) and \((x+x',x_{n+1}|x_{n},...,x_{2})_*\), respectively, and in the second row, entries which do not depend on x and \(x'\). Then, using the additivity in the first argument of the products above (by the induction hypothesis), and then the additivity of the determinant with respect to the first row, it follows immediately that:

$$\begin{aligned} (x+x',y|x_{n+1},x_{n},...,x_{2})_*=(x,y|x_{n+1},x_{n},...,x_{2})_*+(x',y|x_{n+1},x_{n},...,x_{2})_*. \end{aligned}$$

\(\square\)

Proposition 2.13

Let \((X,\langle \cdot ,\cdot \rangle )\) be a real pre-Hilbert space. For \(x,y,x_2,\ldots ,x_n \in X\), \(n\ge 2\), and \(t\in {\mathbb {R}}\):

$$\begin{aligned} (tx,ty|tx_n,\ldots ,tx_2)_*=t^{2^{n}}(x,y|x_n,\ldots , x_2)_*. \end{aligned}$$
(2.6)

Proof

For \(n=2\), \((tx,ty|tx_2)_*=t^4(x,y|x_2)_*\), by (2.4). For the induction step, each entry of the \(2\times 2\) determinant in (2.5) scales by \(t^{2^{n}}\), so the determinant scales by \(t^{2^{n+1}}\). \(\square\)
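Numerically, reusing the iterated2 sketch given after Definition 2.11, the scaling law (2.6) can be checked for \(n=3\) (test data ours):

```python
import numpy as np

# reusing iterated2 from the sketch after Definition 2.11
rng = np.random.default_rng(1)
x, y, x2, x3 = rng.standard_normal((4, 4))
t = 1.7
lhs = iterated2(t * x, t * y, t * x3, t * x2)    # (tx,ty|tx_3,tx_2)_*
rhs = t ** (2 ** 3) * iterated2(x, y, x3, x2)    # t^{2^n} with n = 3
assert np.isclose(lhs, rhs)
```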

Remark 2.14

Theorem 2.12 allows us to furnish an example of a weak n-inner product which is not an n-inner product. For this, let \(X={\mathbb {R}}^3\) endowed with the usual inner product. Then, from Theorem 2.12, the 3-iterated 2-inner product \((\cdot ,\cdot |\cdot ,\cdot )_*:X^4\rightarrow {\mathbb {R}}\) is a weak 3-inner product, but it is not a 3-inner product. Indeed, if axiom (I2) were true for the 3-iterated 2-inner product, then we would have:

$$\begin{aligned} (x,x|u,v)_*=(v,v|u,x)_*,\quad \text{ for } \text{ all }\;x,u,v\in X. \end{aligned}$$
(2.7)

However, if we choose \(x=(1,0,0)\), \(u=(1,1,1)\), \(v=(2,1,2)\), we have:

$$\begin{aligned} (x,x|u,v)_*&= \begin{vmatrix} \;\;\begin{vmatrix}\langle x,x\rangle&\langle x,v\rangle \\ \langle v,x \rangle&\langle v,v\rangle \end{vmatrix}\;\; \begin{vmatrix}\langle x,u\rangle&\langle x,v\rangle \\ \langle v,u \rangle&\langle v,v\rangle \end{vmatrix}\\ \;&\;\\ \;\;\begin{vmatrix}\langle u,x\rangle&\langle u,v\rangle \\ \langle v,x \rangle&\langle v,v\rangle \end{vmatrix}\;\; \begin{vmatrix}\langle u,u\rangle&\langle u,v\rangle \\ \langle v,u \rangle&\langle v,v\rangle \end{vmatrix} \end{vmatrix}= \begin{vmatrix}\;\begin{vmatrix}1&2\\ 2&9\end{vmatrix}\; \begin{vmatrix}1&2\\ 5&9\end{vmatrix}\\ \;&\;\\ \;\begin{vmatrix}1&5\\ 2&9\end{vmatrix}\; \begin{vmatrix}3&5\\ 5&9\end{vmatrix} \end{vmatrix}&\\&=\begin{vmatrix}5&-1\\-1&2\end{vmatrix}=9,&\end{aligned}$$

and on the other hand:

$$\begin{aligned} (v,v|u,x)_*&= \begin{vmatrix} \;\;\begin{vmatrix}\langle v,v\rangle&\langle v,x\rangle \\ \langle x,v \rangle&\langle x,x\rangle \end{vmatrix}\;\; \begin{vmatrix}\langle v,u\rangle&\langle v,x\rangle \\ \langle x,u \rangle&\langle x,x\rangle \end{vmatrix}\!\!\!\!\!\\ \;&\;\\ \;\;\begin{vmatrix}\langle u,v\rangle&\langle u,x\rangle \\ \langle x,v \rangle&\langle x,x\rangle \end{vmatrix}\;\; \begin{vmatrix}\langle u,u\rangle&\langle u,x\rangle \\ \langle x,u \rangle&\langle x,x\rangle \end{vmatrix}\!\!\!\!\! \end{vmatrix}= \begin{vmatrix}\;\begin{vmatrix}9&2\\ 2&1\end{vmatrix}\; \begin{vmatrix}5&2\\ 1&1\end{vmatrix}\!\!\!\!\!\!\\ \;&\;\\ \;\begin{vmatrix}5&1\\ 2&1\end{vmatrix}\; \begin{vmatrix}3&1\\ 1&1\end{vmatrix}\!\!\!\!\!\! \end{vmatrix}&\\&=\begin{vmatrix}5&3\\3&2\end{vmatrix}=1.&\end{aligned}$$

Hence, relation (2.7) is not true. Consequently, axiom (I2) is not satisfied. Therefore, the 3-iterated 2-inner product is not a 3-inner product.
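The computation in this remark can be reproduced with the iterated2 sketch given after Definition 2.11:

```python
import numpy as np

# reusing iterated2 from the sketch after Definition 2.11
x = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 1.0, 1.0])
v = np.array([2.0, 1.0, 2.0])

print(iterated2(x, x, u, v))   # (x,x|u,v)_* = 9.0
print(iterated2(v, v, u, x))   # (v,v|u,x)_* = 1.0, so axiom (I2) fails
```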

3 Representation of the n-iterated 2-inner product in terms of the standard k-inner products, \((k\le n)\)

In this section, we obtain a representation of the n-iterated 2-inner product, given in Definition 2.11, in terms of the standard k-inner products, \(k\le n\). For this, we use Dodgson's identity for determinants [8, 9]. Historical notes about this identity, in connection with Chiò's formula, can be found in [1]. To express this identity, we adopt the compact notation used by Eves [11]. If \(A=(a_{i,j})_{1\le i,j\le n}\) is a square matrix, denote the determinant of A by \(|a_{1,1}\;\ldots a_{n,n}|\) and the sub-determinant involving rows \(i_1,\ldots ,i_s\) and columns \(j_1,\ldots , j_s\) by \(|a_{i_1,j_1}\;\ldots a_{i_s,j_s}|\). In [11], Theorem 3.6.3, the following Dodgson-type identity \((n\ge 3)\) is proved:

$$\begin{aligned}&|a_{1,1}\;\ldots \;a_{n-2,n-2}|\cdot |a_{1,1}\;\ldots \;a_{n,n}|&\nonumber \\ =&\begin{vmatrix} |a_{1,1}\;\ldots \;a_{n-2,n-2}\; a_{n-1,n-1}|&|a_{1,1}\;\ldots a_{n-2,n-2}\;a_{n-1,n}|\\ |a_{1,1}\;\ldots \;a_{n-2,n-2}\; a_{n,n-1}|&|a_{1,1}\;\ldots a_{n-2,n-2}\;a_{n,n}| \end{vmatrix}.&\end{aligned}$$
(3.1)

For us, it is more convenient to use the following identity \((n\ge 3)\):

$$\begin{aligned}&|a_{2,2}\;\ldots \;a_{n-1,n-1}|\cdot |a_{1,1}\;\ldots \;a_{n,n}|&\nonumber \\ =&\begin{vmatrix} \;|a_{1,1}\;\ldots \;a_{n-2,n-2}\; a_{n-1,n-1}|&|a_{1,2}\;\ldots a_{n-2,n-1}\;a_{n-1,n}|\;\\ \;|a_{2,1}\;\ldots \;a_{n-1,n-2}\; a_{n,n-1}|&|a_{2,2}\;\ldots a_{n-1,n-1}\;a_{n,n}|\; \end{vmatrix}.&\end{aligned}$$
(3.2)

For \(n=3\), one has:

$$\begin{aligned} a_{2,2}\cdot |a_{1,1}\;a_{2,2}\;a_{3,3}|=\begin{vmatrix} \;|a_{1,1}\;a_{2,2}|&|a_{1,2}\;a_{2,3}|\;\\ \;|a_{2,1}\;a_{3,2}|&|a_{2,2}\;a_{3,3}|\; \end{vmatrix}. \end{aligned}$$
(3.3)

Formula (3.2) can be easily obtained by applying formula (3.1). Indeed, first note that:

$$\begin{aligned} |a_{1,1}\;\ldots \;a_{n,n}|=(-1)^{n-1}|a_{1,2}\;\ldots \;a_{n-1,n}\;a_{n,1}|=|a_{2,2}\;\ldots \;a_{n,n}\;a_{1,1}|. \end{aligned}$$

Then, applying rule (3.1) for our new matrix, we find, using the notation \(\varepsilon =(-1)^{n-2}\):

$$\begin{aligned}&|a_{2,2}\;\ldots \;a_{n-1,n-1}|\cdot |a_{2,2}\;\ldots \;a_{n,n}\;a_{1,1}|&\\ =&\begin{vmatrix} |a_{2,2}\;\ldots \;a_{n-1,n-1}\;a_{n,n}|&|a_{2,2}\;\ldots a_{n-1,n-1}\;a_{n,1}|\;\\ |a_{2,2}\;\ldots \;a_{n-1,n-1}\;a_{1,n}|&|a_{2,2}\;\ldots a_{n-1,n-1}\;a_{1,1}|\; \end{vmatrix}&\\ =&\begin{vmatrix} |a_{2,2}\;\ldots \;a_{n-1,n-1}\;a_{n,n}|&\varepsilon |a_{2,1}\;\ldots a_{n-1,n-2}\;a_{n,n-1}|\;\\ \varepsilon |a_{1,2}\;\ldots \;a_{n-2,n-1}\;a_{n-1,n}|&\varepsilon ^2|a_{1,1}\;\ldots a_{n-2,n-2}\;a_{n-1,n-1}|\; \end{vmatrix}&\\ =&\begin{vmatrix} \;|a_{1,1}\;\ldots \;a_{n-2,n-2}\; a_{n-1,n-1}|&|a_{1,2}\;\ldots a_{n-2,n-1}\;a_{n-1,n}|\;\\ \;|a_{2,1}\;\ldots \;a_{n-1,n-2}\; a_{n,n-1}|&|a_{2,2}\;\ldots a_{n-1,n-1}\;a_{n,n}|\; \end{vmatrix}.&\end{aligned}$$

Note that, conversely, from relation (3.2), one can deduce relation (3.1).
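Identity (3.2) is easy to test numerically. The following self-contained Python sketch checks the case \(n=3\), that is, formula (3.3), on a random matrix (variable names ours):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
det = np.linalg.det

# identity (3.3): a_{2,2} |A| equals the 2x2 determinant of connected minors
lhs = A[1, 1] * det(A)
rhs = (det(A[np.ix_([0, 1], [0, 1])]) * det(A[np.ix_([1, 2], [1, 2])])
       - det(A[np.ix_([0, 1], [1, 2])]) * det(A[np.ix_([1, 2], [0, 1])]))
assert np.isclose(lhs, rhs)
```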

Let \((X,\langle \cdot ,\cdot \rangle )\) be an inner product space. For \(x,y,z,v\in X\), from (3.2), for \(n=3\), we deduce:

$$\begin{aligned}&(x,y|v,z)_*=\begin{vmatrix} (x,y|z)_*&(x,v|z)_*\\(v,y|z)_*&(v,v|z)_*\end{vmatrix} =\begin{vmatrix}\;\begin{vmatrix} \langle x,y\rangle&\langle x,z\rangle \\ \langle z,y\rangle&\langle z,z\rangle \end{vmatrix} \; \begin{vmatrix} \langle x,v\rangle&\langle x,z\rangle \\\langle z,v\rangle&\langle z,z\rangle \end{vmatrix}\;\\ \quad \\ \;\begin{vmatrix} \langle v,y\rangle&\langle v,z\rangle \\\langle z,y\rangle&\langle z,z\rangle \end{vmatrix}\; \begin{vmatrix} \langle v,v\rangle&\langle v,z\rangle \\\langle z,v\rangle&\langle z,z\rangle \end{vmatrix}\;\end{vmatrix}&\\&=\begin{vmatrix}\;\begin{vmatrix} \langle x,y\rangle&\langle x,z\rangle \\ \langle z,y\rangle&\langle z,z\rangle \end{vmatrix} \; \begin{vmatrix} \langle x,z\rangle&\langle x,v\rangle \\ \langle z,z\rangle&\langle z,v\rangle \end{vmatrix}\;\\ \quad \\ \;\begin{vmatrix} \langle z,y\rangle&\langle z,z\rangle \\ \langle v,y\rangle&\langle v,z\rangle \end{vmatrix}\; \begin{vmatrix} \langle z,z\rangle&\langle z,v\rangle \\ \langle v,z\rangle&\langle v,v\rangle \end{vmatrix}\;\end{vmatrix} =\langle z,z\rangle \begin{vmatrix}\langle x,y\rangle&\langle x,z\rangle&\langle x,v\rangle \\ \langle z,y\rangle&\langle z,z\rangle&\langle z,v\rangle \\ \langle v,y\rangle&\langle v,z\rangle&\langle v,v\rangle \end{vmatrix}&\\&=\langle z,z\rangle \begin{vmatrix}\langle x,y\rangle&\langle x,v\rangle&\langle x,z\rangle \\ \langle v,y\rangle&\langle v,v\rangle&\langle v,z\rangle \\ \langle z,y\rangle&\langle z,v\rangle&\langle z,z\rangle \end{vmatrix}.&\end{aligned}$$

Hence, we obtained:

$$\begin{aligned} (x,y|v,z)_*=\langle z,z\rangle \langle x,y|v,z\rangle . \end{aligned}$$
(3.4)

Also, using formula (3.2) for \(n=4\), and then formula (3.4), we obtain:

$$\begin{aligned}&\langle z,z|w\rangle \langle z,z\rangle ^2\begin{vmatrix} \langle x,y\rangle&\langle x,z\rangle&\langle x,v\rangle&\langle x,w\rangle \\ \langle z,y\rangle&\langle z,z\rangle&\langle z,v\rangle&\langle z,w\rangle \\ \langle v,y\rangle&\langle v,z\rangle&\langle v,v\rangle&\langle v,w\rangle \\ \langle w,y\rangle&\langle w,z\rangle&\langle w,v\rangle&\langle w,w\rangle \end{vmatrix}&\\ =&\langle z,z\rangle ^2 \begin{vmatrix}\langle z,z\rangle&\langle z,w\rangle \\ \langle w,z\rangle&\langle w,w\rangle \end{vmatrix} \begin{vmatrix} \langle x,y\rangle&\langle x,z\rangle&\langle x,w\rangle&\langle x,v\rangle \\ \langle z,y\rangle&\langle z,z\rangle&\langle z,w\rangle&\langle z,v\rangle \\ \langle w,y\rangle&\langle w,z\rangle&\langle w,w\rangle&\langle w,v\rangle \\ \langle v,y\rangle&\langle v,z\rangle&\langle v,w\rangle&\langle v,v\rangle \end{vmatrix}&\\ \\ =&\langle z,z\rangle ^2 \begin{vmatrix}\;\begin{vmatrix} \langle x,y\rangle&\langle x,z\rangle&\langle x,w\rangle \\ \langle z,y\rangle&\langle z,z\rangle&\langle z,w\rangle \\ \langle w,y\rangle&\langle w,z\rangle&\langle w,w\rangle \end{vmatrix}\; \begin{vmatrix}\langle x,z\rangle&\langle x,w\rangle&\langle x,v\rangle \\ \langle z,z\rangle&\langle z,w\rangle&\langle z,v\rangle \\ \langle w,z\rangle&\langle w,w\rangle&\langle w,v\rangle \end{vmatrix}\\ \quad \\ \;\begin{vmatrix}\langle z,y\rangle&\langle z,z\rangle&\langle z,w\rangle \\ \langle w,y\rangle&\langle w,z\rangle&\langle w,w\rangle \\ \langle v,y\rangle&\langle v,z\rangle&\langle v,w\rangle \end{vmatrix}\; \begin{vmatrix}\langle z,z\rangle&\langle z,w\rangle&\langle z,v\rangle \\ \langle w,z\rangle&\langle w,w\rangle&\langle w,v\rangle \\ \langle v,z\rangle&\langle v,w\rangle&\langle v,v\rangle \end{vmatrix}\; \end{vmatrix}&\\ =&\begin{vmatrix}\;\langle z,z\rangle \begin{vmatrix} \langle x,y\rangle&\langle x,w\rangle&\langle x,z\rangle \\ \langle w,y\rangle&\langle w,w\rangle&\langle w,z\rangle \\ \langle z,y\rangle&\langle z,w\rangle&\langle z,z\rangle \end{vmatrix}\; \langle z,z\rangle \begin{vmatrix}\langle x,v\rangle&\langle x,w\rangle&\langle x,z\rangle \\ \langle w,v\rangle&\langle w,w\rangle&\langle w,z\rangle \\ \langle z,v\rangle&\langle z,w\rangle&\langle z,z\rangle \end{vmatrix}\;\\ \quad \\ \;\langle z,z\rangle \begin{vmatrix}\langle v,y\rangle&\langle v,w\rangle&\langle v,z\rangle \\ \langle w,y\rangle&\langle w,w\rangle&\langle w,z\rangle \\ \langle z,y\rangle&\langle z,w\rangle&\langle z,z\rangle \end{vmatrix}\; \;\langle z,z\rangle \begin{vmatrix}\langle v,v\rangle&\langle v,w\rangle&\langle v,z\rangle \\ \langle w,v\rangle&\langle w,w\rangle&\langle w,z\rangle \\ \langle z,v\rangle&\langle z,w\rangle&\langle z,z\rangle \end{vmatrix}\; \end{vmatrix}&\\ =&\begin{vmatrix} \langle z,z\rangle \langle x,y|w,z\rangle&\langle z,z\rangle \langle x,v|w,z\rangle \\ \langle z,z\rangle \langle v,y|w,z\rangle&\langle z,z\rangle \langle v,v|w,z\rangle \end{vmatrix}&\\ =&\begin{vmatrix} (x,y|w,z)_*&(x,v|w,z)_*\\ (v,y|w,z)_*&(v,v|w,z)_*\end{vmatrix}&\\ =&(x,y|v,w,z)_*.&\end{aligned}$$

Hence:

$$\begin{aligned} (x,y|v,w,z)_*=\langle z,z|w\rangle \langle z,z\rangle ^2\langle x,y|v,w,z\rangle . \end{aligned}$$
(3.5)

The results obtained in (3.4) and (3.5) can be generalized as shown in the next theorem. We extend the definition of the standard k-inner product to \(k=1\) by the convention \(\langle x,y|x_1,\ldots ,x_2\rangle =\langle x,y\rangle\) (the list of vectors after the bar being empty).

Theorem 3.1

Let \((X,\langle \cdot ,\cdot \rangle )\) be an inner product space. For \(n\ge 2\), consider the vectors \(x,y,x_2,\ldots ,x_n\in X\). Then:

$$\begin{aligned} (x,y|x_n,\ldots ,x_2)_*=E_n\cdot \langle x,y|x_n,\ldots ,x_2\rangle , \end{aligned}$$
(3.6)

where \(E_2=1\) and:

$$\begin{aligned} E_n=\prod _{k=2}^{n-1}\langle x_k,x_k|x_{k-1},\ldots ,x_2\rangle ^{2^{n-k-1}},\; (n\ge 3). \end{aligned}$$
(3.7)

Proof

For \(n=2\), the theorem is immediate, since \((x,y|x_2)_*=\langle x,y|x_2\rangle\) and \(E_2=1\). For \(n=3\), the theorem follows from relation (3.4), for the choice \(v=x_3\) and \(z=x_2\); then, \(E_3=\langle z,z\rangle =\langle x_2,x_2\rangle\). For \(n\ge 4\), we prove it by induction. Suppose the theorem is true for some \(n\ge 3\) and let us prove it for \(n+1\). Using the induction hypothesis, we get:

$$\begin{aligned} (x,y|x_{n+1},x_n,\ldots ,x_2)_*=&\begin{vmatrix} (x,y|x_n,\ldots , x_2)_*&(x,x_{n+1}|x_n,\ldots , x_2)_*\\(x_{n+1},y|x_n,\ldots , x_2)_*&(x_{n+1},x_{n+1}|x_n,\ldots , x_2)_*\end{vmatrix} \nonumber \\ =&\begin{vmatrix} E_n\langle x,y|x_n,\ldots , x_2\rangle&E_n\langle x,x_{n+1}|x_n,\ldots , x_2\rangle \\E_n\langle x_{n+1},y|x_n,\ldots , x_2\rangle&E_n\langle x_{n+1},x_{n+1}|x_n,\ldots , x_2\rangle \end{vmatrix}&\nonumber \\ =&(E_n)^2\begin{vmatrix} \langle x,y|x_n,\ldots , x_2\rangle&\langle x,x_{n+1}|x_n,\ldots , x_2\rangle \\\langle x_{n+1},y|x_n,\ldots , x_2\rangle&\langle x_{n+1},x_{n+1}|x_n,\ldots , x_2\rangle \end{vmatrix}.&\end{aligned}$$
(3.8)

We transform all four entries of the above determinant. Each of them is a determinant of order n. First, in the following determinant, reversing the order of the last \(n-1\) rows and then reversing the order of the last \(n-1\) columns, we obtain successively:

$$\begin{aligned} \langle x,y|x_n,\ldots , x_2\rangle =&\begin{vmatrix} \langle x,y\rangle&\langle x,x_n\rangle&\dots&\langle x,x_2\rangle \\ \langle x_n,y\rangle&\langle x_n,x_n\rangle&\dots&\langle x_n,x_2\rangle \\ \vdots&\vdots&\ddots&\vdots \\ \langle x_2,y\rangle&\langle x_2,x_n\rangle&\dots&\langle x_2,x_2\rangle \end{vmatrix}&\nonumber \\ =&(-1)^{\frac{(n-1)(n-2)}{2}} \begin{vmatrix} \langle x,y\rangle&\langle x,x_n\rangle&\dots&\langle x,x_2\rangle \\ \langle x_2,y\rangle&\langle x_2,x_n\rangle&\dots&\langle x_2,x_2\rangle \\ \vdots&\vdots&\ddots&\vdots \\ \langle x_n,y\rangle&\langle x_n,x_n\rangle&\dots&\langle x_n,x_2\rangle \end{vmatrix}&\nonumber \\ =&\begin{vmatrix} \langle x,y\rangle&\langle x,x_2\rangle&\dots&\langle x,x_n\rangle \\ \langle x_2,y\rangle&\langle x_2,x_2\rangle&\dots&\langle x_2,x_n\rangle \\ \vdots&\vdots&\ddots&\vdots \\ \langle x_n,y\rangle&\langle x_n,x_2\rangle&\dots&\langle x_n,x_n\rangle \end{vmatrix}.&\end{aligned}$$
(3.9)

Next, for the second determinant, we reverse the order of all n columns and then reverse the order of the last \(n-1\) rows; we obtain:

$$\begin{aligned}&\langle x,x_{n+1}|x_n,\ldots , x_2\rangle = \begin{vmatrix} \langle x,x_{n+1}\rangle&\langle x,x_n\rangle&\dots&\langle x,x_2\rangle \\ \langle x_n,x_{n+1}\rangle&\langle x_n,x_n\rangle&\dots&\langle x_n,x_2\rangle \\ \vdots&\vdots&\ddots&\vdots \\ \langle x_2,x_{n+1}\rangle&\langle x_2,x_n\rangle&\dots&\langle x_2,x_2\rangle \end{vmatrix}&\nonumber \\&\qquad =(-1)^{\frac{(n-1)n}{2}} \begin{vmatrix} \langle x,x_2\rangle&\langle x,x_3\rangle&\dots&\langle x,x_{n+1}\rangle \\ \langle x_n,x_2\rangle&\langle x_n,x_3\rangle&\dots&\langle x_n,x_{n+1}\rangle \\ \vdots&\vdots&\ddots&\vdots \\ \langle x_2,x_2\rangle&\langle x_2,x_3\rangle&\dots&\langle x_2,x_{n+1}\rangle \end{vmatrix}&\nonumber \\&\quad&\nonumber \\&\qquad =(-1)^{(n-1)^2}\begin{vmatrix} \langle x,x_2\rangle&\langle x,x_3\rangle&\dots&\langle x,x_{n+1}\rangle \\ \langle x_2,x_2\rangle&\langle x_2,x_3\rangle&\dots&\langle x_2,x_{n+1}\rangle \\ \vdots&\vdots&\ddots&\vdots \\ \langle x_n,x_2\rangle&\langle x_n,x_3\rangle&\dots&\langle x_n,x_{n+1}\rangle \\ \end{vmatrix}&, \end{aligned}$$
(3.10)

since \(\frac{(n-1)n}{2}+\frac{(n-2)(n-1)}{2}=(n-1)^2.\)

Similar operations can be made for the third determinant: we reverse the order of all n rows and then reverse the order of the last \(n-1\) columns, and we get:

$$\begin{aligned}&\langle x_{n+1},y|x_n,\ldots , x_2\rangle = \begin{vmatrix} \langle x_{n+1},y\rangle&\langle x_{n+1},x_n\rangle&\dots&\langle x_{n+1},x_2\rangle \\ \langle x_n,y\rangle&\langle x_n,x_n\rangle&\dots&\langle x_n,x_2\rangle \\ \vdots&\vdots&\ddots&\vdots \\ \langle x_2,y\rangle&\langle x_2,x_n\rangle&\dots&\langle x_2,x_2\rangle \end{vmatrix}&\nonumber \\&\qquad =(-1)^{\frac{(n-1)n}{2}} \begin{vmatrix} \langle x_2,y\rangle&\langle x_2,x_n\rangle&\dots&\langle x_2,x_2\rangle \\ \langle x_3,y\rangle&\langle x_3,x_n\rangle&\dots&\langle x_3,x_2\rangle \\ \vdots&\vdots&\ddots&\vdots \\ \langle x_{n+1},y\rangle&\langle x_{n+1},x_n\rangle&\dots&\langle x_{n+1},x_2\rangle \end{vmatrix}\nonumber&\\&\quad&\nonumber \\&\qquad =(-1)^{(n-1)^2}\begin{vmatrix} \langle x_2,y\rangle&\langle x_2,x_2\rangle&\dots&\langle x_2,x_n\rangle \\ \langle x_3,y\rangle&\langle x_3,x_2\rangle&\dots&\langle x_3,x_n\rangle \\ \vdots&\vdots&\ddots&\vdots \\ \langle x_{n+1},y\rangle&\langle x_{n+1},x_2\rangle&\dots&\langle x_{n+1},x_n\rangle \\ \end{vmatrix}.&\end{aligned}$$
(3.11)

Finally, applying property (I2), we have:

$$\begin{aligned}&\langle x_{n+1},x_{n+1}|x_n,\ldots , x_2\rangle = \langle x_2,x_2|x_3,\ldots ,x_{n+1}\rangle&\nonumber \\&\qquad = \begin{vmatrix} \langle x_2,x_2\rangle&\langle x_2,x_3\rangle&\dots&\langle x_2, x_{n+1}\rangle \\ \langle x_3,x_2\rangle&\langle x_3,x_3\rangle&\dots&\langle x_3, x_{n+1}\rangle \\ \vdots&\vdots&\ddots&\vdots \\ \langle x_{n+1},x_2\rangle&\langle x_{n+1},x_3\rangle&\dots&\langle x_{n+1},x_{n+1}\rangle \end{vmatrix}.&\end{aligned}$$
(3.12)

Consider the matrix:

$$\begin{aligned} A=\begin{pmatrix} \langle x,y\rangle&\langle x,x_2\rangle&\dots&\langle x, x_{n+1}\rangle \\ \langle x_2,y\rangle&\langle x_2,x_2\rangle&\dots&\langle x_2, x_{n+1}\rangle \\ \vdots&\vdots&\ddots&\vdots \\ \langle x_{n+1},y\rangle&\langle x_{n+1},x_2\rangle&\dots&\langle x_{n+1},x_{n+1}\rangle \end{pmatrix}. \end{aligned}$$

Denote the elements of A by \(a_{i,j}\), \(1\le i,j\le n+1\). Using the notation given in the beginning of the section, we have \(|A|=|a_{1,1}\;a_{2,2}\;\ldots \;a_{n+1,n+1}|\).

From (3.9), one has \(\langle x,y|x_n,\ldots ,x_2\rangle =|a_{1,1}\;a_{2,2}\;\ldots \;a_{n,n}|\).

From (3.10), one has \(\langle x,x_{n+1}|x_n,\ldots ,x_2\rangle =(-1)^{(n-1)^2}|a_{1,2}\;a_{2,3}\;\ldots \;a_{n,n+1}|\).

From (3.11), one has \(\langle x_{n+1},y|x_n,\ldots ,x_2\rangle =(-1)^{(n-1)^2}|a_{2,1}\;a_{3,2}\;\ldots \;a_{n+1,n}|\).

From (3.12), one has \(\langle x_{n+1},x_{n+1}|x_n,\ldots ,x_2\rangle =|a_{2,2}\;a_{3,3}\;\ldots \;a_{n+1,n+1}|\).

Then, applying formula (3.2) for \(n+1\) instead of n, we arrive at:

$$\begin{aligned}&\begin{vmatrix} \langle x,y|x_n,\ldots , x_2\rangle&\langle x,x_{n+1}|x_n,\ldots , x_2\rangle \\\langle x_{n+1},y|x_n,\ldots , x_2\rangle&\langle x_{n+1},x_{n+1}|x_n,\ldots , x_2\rangle \end{vmatrix}&\nonumber \\ =&|a_{1,1}\;a_{2,2}\;\ldots \;a_{n+1,n+1}|\cdot |a_{2,2}\;a_{3,3}\;\ldots \;a_{n,n}|.&\end{aligned}$$
(3.13)

If we reverse the order of the last n rows and of the last n columns in |A|, the determinant does not change, that is:

$$\begin{aligned} |a_{1,1}\;a_{2,2}\;\ldots \;a_{n+1,n+1}|=|a_{1,1}\;a_{n+1,n+1}\;a_{n,n}\;\ldots \;a_{2,2}|. \end{aligned}$$

However:

$$\begin{aligned} |a_{1,1}\;a_{n+1,n+1}\;a_{n,n}\;\ldots \;a_{2,2}| =&\begin{vmatrix} \langle x,y\rangle&\langle x,x_{n+1}\rangle&\dots&\langle x, x_2\rangle \\ \langle x_{n+1},y\rangle&\langle x_{n+1},x_{n+1}\rangle&\dots&\langle x_{n+1},x_2\rangle \\ \vdots&\vdots&\ddots&\vdots \\ \langle x_2,y\rangle&\langle x_2,x_{n+1}\rangle&\dots&\langle x_2,x_2\rangle \end{vmatrix}&\\ =&\langle x,y|x_{n+1},x_n,\ldots , x_2\rangle .&\end{aligned}$$

Therefore:

$$\begin{aligned} |a_{1,1}\;a_{2,2}\;\ldots \;a_{n+1,n+1}|=\langle x,y|x_{n+1},x_n,\ldots , x_2\rangle . \end{aligned}$$
(3.14)

Also, if we reverse the order of the rows and columns in the determinant \(|a_{2,2}\;a_{3,3}\;\ldots \;a_{n,n}|\), its value does not change. Hence:

$$\begin{aligned} |a_{2,2}\;a_{3,3}\;\ldots \;a_{n,n}|=|a_{n,n}\;a_{n-1,n-1}\; \ldots \; a_{2,2}|. \end{aligned}$$

However:

$$\begin{aligned} |a_{n,n}\;a_{n-1,n-1}\; \ldots \; a_{2,2}| =&\begin{vmatrix} \langle x_n,x_n\rangle&\langle x_n,x_{n-1}\rangle&\dots&\langle x_n, x_2\rangle \\ \langle x_{n-1},x_n\rangle&\langle x_{n-1},x_{n-1}\rangle&\dots&\langle x_{n-1},x_2\rangle \\ \vdots&\vdots&\ddots&\vdots \\ \langle x_2,x_n\rangle&\langle x_2,x_{n-1}\rangle&\dots&\langle x_2,x_2\rangle \end{vmatrix}&\\ =&\langle x_n,x_n|x_{n-1},\ldots ,x_2\rangle .&\end{aligned}$$

Therefore:

$$\begin{aligned} |a_{2,2}\;a_{3,3}\;\ldots \;a_{n,n}|=\langle x_n,x_n|x_{n-1},\ldots ,x_2\rangle . \end{aligned}$$
(3.15)

From relations (3.13), (3.14), and (3.15), we conclude that:

$$\begin{aligned}&\begin{vmatrix} \langle x,y|x_n,\ldots , x_2\rangle&\langle x,x_{n+1}|x_n,\ldots , x_2\rangle \\\langle x_{n+1},y|x_n,\ldots , x_2\rangle&\langle x_{n+1},x_{n+1}|x_n,\ldots , x_2\rangle \end{vmatrix}&\nonumber \\&\qquad \qquad =\langle x,y|x_{n+1},x_n,\ldots , x_2\rangle \langle x_n,x_n|x_{n-1},\ldots ,x_2\rangle .&\end{aligned}$$
(3.16)

Substituting into (3.8), we obtain:

$$\begin{aligned} (x,y|x_{n+1},x_n,\ldots ,x_2)_*=(E_n)^2\langle x,y|x_{n+1},x_n,\ldots , x_2\rangle \langle x_n,x_n|x_{n-1},\ldots ,x_2\rangle . \end{aligned}$$

Since \((E_n)^2\langle x_n,x_n|x_{n-1},\ldots ,x_2\rangle =E_{n+1}\), it finally follows that:

$$\begin{aligned} (x,y|x_{n+1},x_n,\ldots ,x_2)_*=E_{n+1}\langle x,y|x_{n+1},x_n,\ldots , x_2\rangle . \end{aligned}$$

\(\square\)
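Theorem 3.1 can also be verified numerically. The sketch below reuses iterated2 from the sketch after Definition 2.11; the helper std_inner (our name) implements the standard k-inner product (1.4), and the assertion checks identity (3.6) for \(n=4\):

```python
import numpy as np

def std_inner(x, y, *xs):
    """Standard k-inner product <x,y|x_k,...,x_2>, as in (1.4)."""
    rows, cols = [x] + list(xs), [y] + list(xs)
    return np.linalg.det(np.array([[np.dot(r, c) for c in cols] for r in rows]))

# reusing iterated2 from the sketch after Definition 2.11
rng = np.random.default_rng(3)
x, y, x2, x3, x4 = rng.standard_normal((5, 6))

E4 = np.dot(x2, x2) ** 2 * std_inner(x3, x3, x2)   # E_4 from (3.7)
lhs = iterated2(x, y, x4, x3, x2)                  # (x,y|x_4,x_3,x_2)_*
rhs = E4 * std_inner(x, y, x4, x3, x2)             # E_4 <x,y|x_4,x_3,x_2>
assert np.isclose(lhs, rhs)                        # identity (3.6), n = 4
```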

4 Several applications of the n-iterated 2-inner product

1. Let \(X=(X,\langle \cdot ,\cdot \rangle )\) be an inner product space. Let \(x,w,z\in X\). From Definition 2.11, we deduce:

$$\begin{aligned} (x,x|w,z)_*=&(x,x|z)_*(w,w|z)_*-(x,w|z)_*(w,x|z)_*&\nonumber \\ =&\Vert x|z \Vert ^{2}\Vert w|z \Vert ^{2}-(x,w|z)_*^{2}.&\end{aligned}$$
(4.1)

Relation (4.1) can be written as:

$$\begin{aligned} (x,x|w,z)_*=&(\Vert x\Vert ^{2}\Vert w\Vert ^{2}\Vert z\Vert ^{2}+2\langle w,z \rangle \langle z,x \rangle \langle x,w\rangle -\Vert x\Vert ^{2}\langle w,z\rangle ^{2}&\nonumber \\&-\Vert w\Vert ^{2}\langle z,x\rangle ^{2}-\Vert z\Vert ^{2}\langle x,w\rangle ^{2})\Vert z\Vert ^2.&\end{aligned}$$
(4.2)

Since \((x,x|w,z)_* \ge 0\), we obtain the inequality of Lupu and Schwarz [19], given by the following:

$$\begin{aligned} \Vert x\Vert ^{2}\langle w,z\rangle ^{2}+ \Vert w\Vert ^{2}\langle z,x\rangle ^{2}+\Vert z\Vert ^{2}\langle x,w\rangle ^{2} \le \Vert x\Vert ^{2}\Vert w\Vert ^{2}\Vert z\Vert ^{2}+2\langle w,z \rangle \langle z,x \rangle \langle x,w\rangle . \end{aligned}$$
(4.3)
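Inequality (4.3) admits a direct numerical check on random vectors (all names and data are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
x, w, z = rng.standard_normal((3, 5))
sq = lambda a: np.dot(a, a)              # squared Euclidean norm

lhs = (sq(x) * np.dot(w, z) ** 2 + sq(w) * np.dot(z, x) ** 2
       + sq(z) * np.dot(x, w) ** 2)
rhs = sq(x) * sq(w) * sq(z) + 2 * np.dot(w, z) * np.dot(z, x) * np.dot(x, w)
assert lhs <= rhs + 1e-9                 # inequality (4.3)
```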

2. Formula (3.4) can be written in the form:

$$\begin{aligned} (x,y|w,z)_*= \langle x,y|w,z\rangle \Vert z\Vert ^{2}=\begin{vmatrix}\langle x,y\rangle&\langle x,w \rangle&\langle x,z\rangle \\ \langle w,y\rangle&\langle w,w \rangle&\langle w,z\rangle \\ \langle z,y \rangle&\langle z,w \rangle&\langle z,z\rangle \end{vmatrix}\cdot \Vert z\Vert ^{2}. \end{aligned}$$
(4.4)

Therefore, for \(\Vert z\Vert \ne 1\), we have, in general, \((x,y|w,z)_*\ne \langle x,y|w,z\rangle\). Also, since in the case \(x=y\) the determinant in (4.4) is the Gram determinant \(\Gamma (x,w,z)\), from relation (4.4), we can deduce:

$$\begin{aligned} (x,x|w,z)_*= \Gamma (x,w,z)\cdot \Vert z\Vert ^{2}. \end{aligned}$$
(4.5)

Since also \((x,x|z,w)_*=\Gamma (x,z,w)\cdot \Vert w\Vert ^{2}\) and \(\Gamma (x,w,z)=\Gamma (x,z,w)\), it follows that:

$$\begin{aligned} (x,x|z,w)_*\Vert z\Vert ^{2}=(x,x|w,z)_*\Vert w\Vert ^{2}. \end{aligned}$$
(4.6)

3. From Theorem 3.1 for \(n=4\), we find that the 4-iterated 2-inner product can be given in the following way:

$$\begin{aligned} (x,y|v,w,z)_*=&\Vert w|z\Vert ^2\Vert z\Vert ^4\begin{vmatrix} \langle x,y\rangle&\langle x,z \rangle&\langle x,w\rangle&\langle x,v\rangle \\ \langle z,y\rangle&\langle z,z \rangle&\langle z,w\rangle&\langle z,v\rangle \\ \langle w,y \rangle&\langle w,z \rangle&\langle w,w\rangle&\langle w,v\rangle \\\langle v,y \rangle&\langle v,z \rangle&\langle v,w\rangle&\langle v,v\rangle \end{vmatrix}&\nonumber \\ =&\langle x,y|v,w,z\rangle \Vert w|z\Vert ^2\Vert z\Vert ^4.&\end{aligned}$$
(4.7)

From relation (4.7), for \(x=y\), we deduce:

$$\begin{aligned} (x,x|v,w,z)_*=\Gamma (x,v,w,z)\cdot \Vert w|z\Vert ^{2}\Vert z\Vert ^4, \end{aligned}$$
(4.8)

where \(\Gamma (x,v,w,z)\) is the Gram determinant.

In [4], Cho, Matić, and Pečarić defined the Gram determinant of the vectors \(x_1, x_2,...,x_k\) with respect to the vector z by:

$$\begin{aligned} \Gamma (x_1,x_2,...,x_k|z)= \begin{vmatrix} \langle x_1,x_1|z\rangle&\langle x_1,x_2 |z\rangle&...&\langle x_1,x_k|z\rangle \\ \langle x_2,x_1|z\rangle&\langle x_2,x_2 |z\rangle&...&\langle x_2,x_k|z\rangle \\ \vdots&\vdots&\ddots&\vdots \\ \langle x_k,x_1 |z\rangle&\langle x_k,x_2 |z\rangle&...&\langle x_k,x_k|z\rangle \end{vmatrix}. \end{aligned}$$
(4.9)

We consider the following determinant, which can be rewritten using formula (3.3):

$$\begin{aligned}&\begin{vmatrix} \langle x,y|z\rangle&\langle x,w |z\rangle&\langle x,v|z\rangle \\ \langle w,y|z\rangle&\langle w,w |z\rangle&\langle w,v|z\rangle \\ \langle v,y |z\rangle&\langle v,w |z\rangle&\langle v,v|z\rangle \end{vmatrix}&\nonumber \\ =&\frac{1}{\langle w,w|z\rangle }\Big [(x,y|w,z)_*(v,v|w,z)_*-(x,v|w,z)_*(v,y|w,z)_*\Big ]&\nonumber \\ =&\frac{1}{\langle w,w|z\rangle }(x,y|v,w,z)_*.&\end{aligned}$$
(4.10)

From relations (4.7) and (4.10), we find the following identity:

$$\begin{aligned} \begin{vmatrix} \langle x,y|z\rangle&\langle x,w |z\rangle&\langle x,v|z\rangle \\ \langle w,y|z\rangle&\langle w,w |z\rangle&\langle w,v|z\rangle \\ \langle v,y |z\rangle&\langle v,w |z\rangle&\langle v,v|z\rangle \end{vmatrix}=\langle x,y|v,w,z\rangle \Vert z\Vert ^4, \end{aligned}$$
(4.11)

which, for \(y=x\), implies the relation:

$$\begin{aligned} \Gamma (x,w,v|z)=\Gamma (x,w,v,z)\Vert z\Vert ^4. \end{aligned}$$
(4.12)

4. Let \(x, y, e, w\) be vectors in the inner product space X over the field of real numbers, with the vectors \(\{e,x,y\}\) linearly independent, such that:

$$\begin{aligned} ax+by+ce=w, \end{aligned}$$

where \(a,b,c\in {\mathbb {R}}.\)

We want to study the problem of determining the scalars \(a, b, c\). Using the inner product and its properties, we deduce:

$$\begin{aligned} \left\{ \begin{array}{l} a\langle x,x\rangle +b\langle y,x\rangle +c\langle e,x\rangle =\langle w,x\rangle \\ a\langle x,y\rangle +b\langle y,y\rangle +c\langle e,y\rangle =\langle w,y\rangle \\ a\langle x,e\rangle +b\langle y,e\rangle +c\langle e,e\rangle =\langle w,e\rangle .\end{array}\right. \end{aligned}$$
(4.13)

Therefore, we have to solve this system with three equations and three unknowns \(a,b,c\in {\mathbb {R}}.\) The matrix of the system is:

$$\begin{aligned} A=\begin{pmatrix} \langle x,x \rangle&\langle y,x \rangle&\langle e,x \rangle \\ \langle x,y \rangle&\langle y,y \rangle&\langle e,y \rangle \\ \langle x,e \rangle&\langle y,e \rangle&\langle e,e \rangle \end{pmatrix}. \end{aligned}$$

From formula (4.4), we find:

$$\begin{aligned} \det A=\Gamma (x,y,e)=\frac{1}{\Vert e\Vert ^2}(x,x|y,e)_*. \end{aligned}$$

From (P1), \((x,x|y,e)_*\) is zero if and only if the vectors \(x, y, e\) are linearly dependent. However, the vectors \(\{e,x,y\}\) are linearly independent; therefore, we have \((x,x|y,e)_*>0\). Using Cramer's rule, we find that:

$$\begin{aligned} a=\frac{( w,x|y,e)_*}{(x,x|y,e)_*}, \ b=\frac{(w,y|x,e)_*}{(x,x|y,e)_*}, \ c=\frac{\Vert e\Vert ^2(w,e|x,y)_*}{\Vert y\Vert ^2(x,x|y,e)_*}. \end{aligned}$$

In the particular case when \(\Vert e\Vert =1\), we obtain:

$$\begin{aligned} a=\frac{( w,x|y,e)_*}{(x,x|y,e)_*}, \ b=\frac{(w,y|x,e)_*}{(x,x|y,e)_*}, \ c=\langle w,e\rangle -a\langle x,e\rangle -b\langle y,e\rangle . \end{aligned}$$
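The following Python sketch (reusing iterated2 from the sketch after Definition 2.11; the test vectors are ours) compares the Cramer formulas above with a direct solution of system (4.13):

```python
import numpy as np

# reusing iterated2 from the sketch after Definition 2.11
rng = np.random.default_rng(5)
x, y, e = rng.standard_normal((3, 4))      # assumed linearly independent
w = 2.0 * x - 3.0 * y + 0.5 * e            # so a = 2, b = -3, c = 0.5

G = np.array([[np.dot(x, x), np.dot(y, x), np.dot(e, x)],
              [np.dot(x, y), np.dot(y, y), np.dot(e, y)],
              [np.dot(x, e), np.dot(y, e), np.dot(e, e)]])
rhs = np.array([np.dot(w, x), np.dot(w, y), np.dot(w, e)])
a, b, c = np.linalg.solve(G, rhs)          # system (4.13)

# the iterated-product quotients above give the same a and b
assert np.isclose(a, iterated2(w, x, y, e) / iterated2(x, x, y, e))
assert np.isclose(b, iterated2(w, y, x, e) / iterated2(x, x, y, e))
```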

5. Next, we relate the previous calculations to the coefficients that appear in a multiple linear regression model.

A process is called multiple linear regression when we have more than one independent variable [12]. We consider a general linear model \(Z=aV+bW+c\) for two independent discrete random variables V and W and a dependent variable Z, where \(V=\begin{pmatrix} x_{i}\\ \frac{1}{n} \end{pmatrix}_{1\le i\le n}\), \(W=\begin{pmatrix} y_{i}\\ \frac{1}{n} \end{pmatrix}_{1\le i\le n}\), \(Z=\begin{pmatrix} z_{i}\\ \frac{1}{n} \end{pmatrix}_{1\le i\le n}\), with probabilities \(P(V=x_{i})=\dfrac{1}{n}\), \(P(W=y_{i})=\dfrac{1}{n}\), \(P(Z=z_{i})=\dfrac{1}{n}\), for any \(i=\overline{1,n}\).

We can describe the underlying relationship between \(z_i\) and \(x_i, y_i\), involving the error term \(\epsilon _i\), by \(\epsilon _i=z_i-ax_i-by_i-c\).

If we take \(S(a,b,c)=\sum _{i=1}^{n}\epsilon _{i}^2=\sum _{i=1}^{n}(z_i-ax_i-by_i-c)^2\), then we have to find \(\underset{a,b,c\in {\mathbb {R}}}{\min }S(a,b,c)\). Setting the partial derivatives of S equal to zero (the least squares method), we obtain:

$$\begin{aligned} a\sum _{i=1}^{n}x_i+b\sum _{i=1}^{n}y_i+nc=&\sum _{i=1}^{n}z_i,&\\ a\sum _{i=1}^{n}x_i^2+b\sum _{i=1}^{n}x_iy_i+c\sum _{i=1}^{n}x_i=&\sum _{i=1}^{n}x_iz_i,&\\ a\sum _{i=1}^{n}x_iy_i+b\sum _{i=1}^{n}y_i^2+c\sum _{i=1}^{n}y_i=&\sum _{i=1}^{n}y_iz_i.&\end{aligned}$$

By simple calculations, we deduce:

$$\begin{aligned} a=&\frac{\mathrm{Var}(W)\mathrm{Cov}\left( V,Z\right) -\mathrm{Cov}(V,W)\mathrm{Cov}(W,Z)}{\mathrm{Var}\left( V\right) \mathrm{Var}(W)-\mathrm{Cov}^2(V,W)},&\\ b=&\frac{\mathrm{Var}(V)\mathrm{Cov}\left( W,Z\right) -\mathrm{Cov}(V,W)\mathrm{Cov}(V,Z)}{\mathrm{Var}\left( V\right) \mathrm{Var}(W)-\mathrm{Cov}^2(V,W)},&\\ c=&E\left( Z\right) -aE\left( V\right) -bE\left( W\right) .&\end{aligned}$$

Now, we take the vector space \((X={\mathbb {R}}^n,\langle \cdot ,\cdot \rangle ).\) For \(x=(x_{1},x_{2},...,x_{n}), \ y=(y_{1},y_{2},...,y_{n}), \ z=(z_{1},z_{2},...,z_{n}),\) we have:

$$\begin{aligned}&\langle x,y \rangle =x_{1}y_{1}+x_{2}y_{2}+...+x_{n}y_{n}, \ \Vert x\Vert =\sqrt{x_{1}^2+x_{2}^2+...+x_{n}^2}, \\&(x,y|z)_*=\langle x,y \rangle \langle z,z \rangle - \langle x,z \rangle \langle z,y \rangle =\sum _{i=1}^{n}x_{i}y_{i}\sum _{i=1}^{n}z_{i}^2-\sum _{i=1}^{n}x_{i}z_{i}\sum _{i=1}^{n}z_{i}y_{i} \end{aligned}$$

and

$$\begin{aligned} \Vert x|z \Vert =\sqrt{\sum _{i=1}^{n}x_{i}^2\sum _{i=1}^{n}z_{i}^2-\bigg (\sum _{i=1}^{n}x_{i}z_{i}\bigg )^2}. \end{aligned}$$

If \(e=\frac{u}{\Vert u\Vert }\), where \(u=(1,1,...,1)\in {\mathbb {R}}^n\), then the average of vector x is \(\mu _{x}=\bigg \langle \frac{x}{\Vert u\Vert },e\bigg \rangle =\frac{1}{n}\sum _{i=1}^{n}x_{i}\), and we have:

$$\begin{aligned} \bigg \Vert \frac{x}{\Vert u\Vert }|e\bigg \Vert =\sqrt{\frac{1}{n}\sum _{i=1}^{n}x_{i}^2-\bigg ( \frac{1}{n}\sum _{i=1}^{n}x_{i}\bigg )^2}. \end{aligned}$$

Therefore, in \(({\mathbb {R}}^n,\langle \cdot ,\cdot \rangle )\), we define the variance of a vector x by \(var(x):=\bigg \Vert \frac{x}{\Vert u\Vert }|e\bigg \Vert ^2.\)

The standard deviation \(\sigma (x)\) of \(x\in {\mathbb {R}}^n\) is defined by \(\sigma (x):=\sqrt{var(x)}\), so we deduce that \(\sigma (x)=\bigg \Vert \frac{x}{\Vert u\Vert } |e\bigg \Vert .\) Since, using the standard 2-inner product, we have:

$$\begin{aligned} \bigg (\frac{x}{\Vert u\Vert },\frac{y}{\Vert u\Vert }|e\bigg )_*=\frac{1}{n}\sum _{i=1}^{n}x_{i}y_{i}-\bigg (\frac{1}{n}\sum _{i=1}^{n}x_{i}\bigg )\bigg (\frac{1}{n}\sum _{i=1}^{n}y_{i}\bigg ), \end{aligned}$$

it is easy to define the covariance of two vectors x and y by:

$$\begin{aligned} cov(x,y):=\bigg (\frac{x}{\Vert u\Vert },\frac{y}{\Vert u\Vert }|e\bigg )_*. \end{aligned}$$

It is easy to see that we obtain:

$$\begin{aligned} a=&\frac{var(y)cov\left( x,z\right) -cov(x,y)cov(y,z)}{var\left( x\right) var(y)-cov^2(x,y)},&\\ b=&\frac{var(x)cov\left( y,z\right) -cov(x,y)cov(x,z)}{var\left( x\right) var(y)-cov^2(x,y)},&\\ c=&\mu _z -a\mu _x-b\mu _y.&\end{aligned}$$

We observe that, by the vector method, we obtain the same coefficients as by the least squares method.
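This coincidence is easy to confirm numerically. In the sketch below (synthetic data and names ours), the var/cov formulas above are compared with an ordinary least squares fit:

```python
import numpy as np

rng = np.random.default_rng(6)
m = 50                                           # sample size
x, y = rng.standard_normal((2, m))
z = 1.5 * x - 0.7 * y + 2.0 + 0.1 * rng.standard_normal(m)

cov = lambda a, b: np.mean(a * b) - np.mean(a) * np.mean(b)
var = lambda a: cov(a, a)

d = var(x) * var(y) - cov(x, y) ** 2
a = (var(y) * cov(x, z) - cov(x, y) * cov(y, z)) / d
b = (var(x) * cov(y, z) - cov(x, y) * cov(x, z)) / d
c = np.mean(z) - a * np.mean(x) - b * np.mean(y)

# an ordinary least squares fit of z on (x, y, 1) gives the same coefficients
A = np.column_stack([x, y, np.ones(m)])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
assert np.allclose([a, b, c], coef)
```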

6. In [24], the Chebyshev functional is defined by:

$$\begin{aligned} T_{z}(x,y)=\Vert z\Vert ^2\langle x,y\rangle -\langle x,z\rangle \langle y,z\rangle , \end{aligned}$$

for all \(x,y\in X\), where \(z\in X\) is a given nonzero vector.

It is easy to see that, if the standard 2-inner product \((\cdot ,\cdot |\cdot )\) is generated by the inner product \(\langle \cdot ,\cdot \rangle\), then we have \(T_{z}(x,y)=(x,y|z)_*=(x,y|z)\).

Therefore, we generalize this Chebyshev functional to the following functional:

$$\begin{aligned} T_{x_{n},...,x_{2}}(x,y):=(x,y|x_{n},...,x_{2})_*, \end{aligned}$$

which we will call the n-Chebyshev functional; by (2.5), it satisfies:

$$\begin{aligned} T_{x_{n},...,x_{2}}(x,y)=&T_{x_{n-1},...,x_{2}}(x,y)T_{x_{n-1},...,x_{2}}(x_{n},x_{n})&\nonumber \\&\quad -T_{x_{n-1},...,x_{2}}(x,x_{n})T_{x_{n-1},...,x_{2}}(x_{n},y),&\end{aligned}$$
(4.14)

for all \(x,y\in X\), where \(x_{2},...,x_{n}\in X\) are given nonzero vectors.

In a particular case, when \(n=3\), we have:

$$\begin{aligned} T_{w,z}(x,y)=(x,y|w,z)_*=(x,y|z)_*(w,w|z)_*-(x,w|z)_*(w,y|z)_*; \end{aligned}$$

so, we have:

$$\begin{aligned} T_{w,z}(x,x)=&(x,x|w,z)_*=(x,x|z)_*(w,w|z)_*-(x,w|z)_*(w,x|z)_*&\\ =&\Vert x|z\Vert ^2\Vert w|z\Vert ^2-(x,w|z)_*^2&\\ =&(\Vert x\Vert ^2\Vert w\Vert ^2\Vert z\Vert ^2+2\langle w,z\rangle \langle z,x\rangle \langle x,w\rangle -\Vert x\Vert ^2\langle w,z\rangle ^2&\\&-\Vert w\Vert ^2\langle z,x\rangle ^2-\Vert z\Vert ^2\langle x,w\rangle ^2)\Vert z\Vert ^2.&\end{aligned}$$

Therefore, the Cauchy–Schwarz inequality in terms of the n-Chebyshev functional becomes:

$$\begin{aligned} |T_{x_{n},...,x_{2}}(x,y)|^2\le T_{x_{n},...,x_{2}}(x,x)T_{x_{n},...,x_{2}}(y,y). \end{aligned}$$
(4.15)
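Finally, inequality (4.15) can be tested with the iterated2 sketch from Definition 2.11 (test data ours):

```python
import numpy as np

# reusing iterated2 from the sketch after Definition 2.11
rng = np.random.default_rng(7)
x, y, x2, x3, x4 = rng.standard_normal((5, 6))

T = lambda u, v: iterated2(u, v, x4, x3, x2)       # T_{x_4,x_3,x_2}(u,v)
assert T(x, y) ** 2 <= T(x, x) * T(y, y) + 1e-9    # inequality (4.15)
```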

5 Conclusions

In this paper, we exemplified the weak n-inner product only by the n-iterated 2-inner product. This particular case of the weak n-inner product does not exhaust all the possibilities of particular cases. The weak n-inner product is clearly more general than the n-inner product, and consequently, it offers more possibilities. An important connection is the one between the vector method and the least squares method given above. In the future, we will determine a formula for multiple regression with n independent variables.