Introduction

Pascal's triangle is one of the oldest mathematical objects, and interest in it and in its applications has grown significantly in recent years. The main applications are related to the Pascal triangle matrix, which is defined as an infinite matrix containing the binomial coefficients as its elements [1, 2]. Owing to its simple construction in terms of factorials, the basic representation of Pascal's triangle can be given by the matrix exponential. The Pascal matrix is represented by the exponential of a matrix \(A\),

$$\begin{aligned} T^a(P) =\exp (aA), \end{aligned}$$
(1)

where the matrix \(A\) is defined as either a lower-diagonal or an upper-diagonal matrix containing only the sequence of natural numbers. For these matrices we will use the following notation

$$\begin{aligned} \left( A^{sub}\right) _{ij}= \left( A^{up}\right) _{ji}= \left\{ \begin{array}{l@{\quad }l} i,&{}\quad i=j+1\\ 0,&{}\quad \text{ otherwise }, \end{array} \right. \quad i,j=0,1,2,\ldots ,n. \end{aligned}$$
(2)

The matrix \(A\) is nilpotent, that is, when raised to a sufficiently high integer power it degenerates into the zero matrix. This is reminiscent of the quantal lowering operator. In the literature, this matrix is called the creation matrix due to its role in the construction of different types of Appell polynomials [3].
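As a minimal numerical illustration (a sketch assuming Python with NumPy, not a prescription of any particular implementation), one may build the creation matrix of Eq. (2) and check that \(\exp (A^{up})\) reproduces the binomial coefficients of Pascal's triangle:

```python
# Sketch: creation matrix A^{up} of Eq. (2) and the Pascal matrix exp(aA) of Eq. (1).
import numpy as np
from math import comb

def creation_matrix_up(n):
    """(n+1)x(n+1) upper-diagonal creation matrix: (A)_{j,j+1} = j+1."""
    A = np.zeros((n + 1, n + 1))
    for j in range(n):
        A[j, j + 1] = j + 1
    return A

def pascal_matrix(a, A):
    """exp(aA) by the finite exponential series: A is nilpotent, so the sum is exact."""
    size = A.shape[0]
    T, term = np.eye(size), np.eye(size)
    for m in range(1, size):
        term = term @ (a * A) / m
        T += term
    return T

n = 5
T = pascal_matrix(1.0, creation_matrix_up(n))
# Entry (i, j) of exp(A^{up}) should equal the binomial coefficient C(j, i).
assert all(abs(T[i, j] - comb(j, i)) < 1e-12 for i in range(n + 1) for j in range(n + 1))
```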

In this work, we investigate the evolution of the coefficients of a polynomial under simultaneous translations (shifts) of its set of roots. The main result is presented in the “Evolution of Coefficients of Polynomials Induced by Shifts of Roots” section, where we show that the Pascal matrix arises in a natural way as the evolution operator for the coefficients of the polynomial. In this case the creation matrix \(A\) and, correspondingly, the Pascal matrix \(T^a(P)\) are given by finite dimensional matrices. In the “Pascal Matrix as an Adjoint Operator of Differential Operator of Translation” section, we observe that, just as the differential operator of translation acts on the functional space, the Pascal matrix acts on the space formed by the coefficients of the polynomial. It is demonstrated that the matrix \(A\) has to be considered as an operator adjoint to the differential operator \(d/dx\). Correspondingly, the infinite dimensional Pascal matrix \(T^a(P)\) is the adjoint operator of the operator of translation

$$\begin{aligned} T^a=\exp \left( a\frac{d}{dx}\right) . \end{aligned}$$
(3)

Under translations of the roots the coefficients of the polynomial take the form of a sequence of polynomials. This sequence is defined as the result of a transformation by the Pascal matrix built from the upper-diagonal matrix, whereas the sequence of Appell polynomials is defined by the Pascal matrix built from the lower-diagonal matrix. In the “Connection of Polynomial Coefficients with Appell Polynomials” section, we establish the interrelation between these two sequences. In the “Applications of the Pascal Matrix Transformation” section, some applications of the Pascal matrix method are given. The method is applied to calculate the coefficients of the invariant polynomial and to construct a deflation algorithm that decreases the degree of the polynomial when one of its roots is known. It is also shown how the Pascal matrix representation can be exploited in dynamics. In the “Geometrical Interpretation of Pascal Matrix Transformation” section, the geometrical interpretation of the Pascal matrix is given.

Evolution of Coefficients of Polynomials Induced by Shifts of Roots

The main task of this section is to establish the evolution law for the coefficients of a polynomial under simultaneous translations of its roots. If \(F\) is a field and \(q_1,\ldots ,q_n\) are algebraically independent over \(F\), the polynomial

$$\begin{aligned} f(X)=\prod _{i=1}^n(X-q_i) \end{aligned}$$
(4)

is referred to as the generic polynomial over \(F\) of degree \(n\). The polynomial \(f(X)\) of degree \(n\) over the field \(F\) with roots \(q_k,\;k=1,\ldots ,n\) is written in the form

$$\begin{aligned} f(X):=\sum _{k=0}^{n-1}(-)^kP_kX^{n-k}+(-)^{n}P_n, \end{aligned}$$
(5)

where the coefficients \(P_k\in F(q_1,\ldots ,q_n)\) and \(P_0=1\). Since the roots of the generic polynomial \(f(X)\) are algebraically independent, this polynomial is, in some sense, the most general polynomial possible [4]. In this paper we restrict our attention to polynomials with simple roots. The signs of the coefficients in (5) alternate from term to term, which allows the Viète formulae to be written with positive signs only:

$$\begin{aligned} (a)\;P_1=\sum _{i=1}^{n}q_i,\quad (b)\;P_n=\prod _{i=1}^{n}q_i,\quad (c)\;P_k=\sum _{1\le r_1<\ldots <r_k\le n}\prod ^k_{j=1}q_{r_j},\quad k=2,\ldots ,n-1,\nonumber \\ \end{aligned}$$
(6)

The coefficients \(P_k,\;k=1,2,\ldots ,n\) are thus the elementary symmetric polynomials of the roots. The number of monomials in the \(k\)th elementary symmetric polynomial is equal to the binomial coefficient:

$$\begin{aligned} C^n_k=\left( \begin{array}{c} n\\ k \end{array} \right) =\frac{n!}{k!(n-k)!}. \end{aligned}$$

The following theorem holds true.

Theorem 2.1

Let \(q_{k},\; k=1,\ldots ,n\) be the set of roots of the polynomial equation of degree \(n\) (5). Let the differentials of all roots be equal to each other, \( dq_k=dq,\;k=1,\ldots ,n.\) Then the differentials of the coefficients \(P_k\) satisfy the following system of equations:

$$\begin{aligned} dP_{k+1}=(n-k)P_{k}\;dq,\quad k=0,1,2,\ldots ,n-1;\;P_0=1. \end{aligned}$$
(7)

Proof

The coefficients of the polynomial \(f(X)\) are symmetric forms of the roots, where the \(k\)th coefficient \(P_k\) consists of \(C^{n}_k\) monomials of degree \(k\). Respectively, the differential of this coefficient, \(dP_k\), contains \(kC^{n}_k\) monomials of degree \((k-1)\). Since \( dq_k=dq,\;k=1,\ldots ,n,\) the differential \(dP_k\) is equal to \(\Lambda \,dq\), where the symmetric polynomial \(\Lambda \) is a sum of \(kC^{n}_k\) symmetric monomials of degree \((k-1)\). On the other hand, a symmetric polynomial of degree \((k-1)\) can only be expressed as a sum of the \(C^n_{k-1}\) symmetric monomials of degree \((k-1)\). It follows that the symmetric polynomial \(\Lambda \) consists of \(kC^{n}_k/C^n_{k-1}=n-k+1\) copies of the same symmetric polynomial of degree \((k-1)\). This symmetric polynomial is nothing else than the \((k-1)\)th coefficient of the polynomial \(f(X)\), i.e., \(P_{k-1}\). Hence, \(\Lambda =(n-k+1)P_{k-1}\), and

$$\begin{aligned} dP_k=(n-k+1)P_{k-1}\;dq,\quad k=1,2,3,\ldots ,n,\;\;P_0=1. \end{aligned}$$
(8)

\(\square \)
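A quick finite-difference check of Eq. (8) can be made numerically; the sketch below (assuming NumPy, with the elementary symmetric polynomials computed by the standard recurrence) shifts every root by a small \(dq\) and compares \(dP_k\) with \((n-k+1)P_{k-1}\,dq\).

```python
# Sketch: finite-difference verification of Eq. (8).
import numpy as np

def elementary_symmetric(roots):
    """Return [P_0, P_1, ..., P_n] for the given roots (P_0 = 1)."""
    P = np.zeros(len(roots) + 1)
    P[0] = 1.0
    for q in roots:
        P[1:] = P[1:] + q * P[:-1]   # simultaneous update of the recurrence
    return P

rng = np.random.default_rng(0)
roots = rng.normal(size=6)
n, dq = len(roots), 1e-7
P0, P1 = elementary_symmetric(roots), elementary_symmetric(roots + dq)
for k in range(1, n + 1):
    lhs = P1[k] - P0[k]                   # dP_k
    rhs = (n - k + 1) * P0[k - 1] * dq    # (n-k+1) P_{k-1} dq
    assert np.isclose(lhs, rhs, rtol=1e-4, atol=1e-12)
```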

The formulas for the differentials of the coefficients lead to the following evolution equations with respect to the translation parameter \(a\):

$$\begin{aligned} \frac{d}{da}P_{n-p}(a)=(p+1)P_{n-(p+1)}(a),\quad \text{ and }\quad \frac{d}{da}P_{k+1}(a)=(n-k)P_{k}(a). \end{aligned}$$
(9)

Now let us collect the sequence \(\{ P_k(a) \}_{k\ge 0}\) into vector form,

$$\begin{aligned}{}[P_k]:=[P_n,P_{n-1},P_{n-2},\ldots ,P_0]^T. \end{aligned}$$

An application of the creation matrix \(A^{up}\) implies that the vector \([P_k]\) satisfies the differential equation

$$\begin{aligned} \frac{d}{da} [P_{k}(a)]=A^{up}\;[P_k(a)]. \end{aligned}$$
(10)

Differentiating the \((n-p)\)th coefficient, \(P_{n-p}(a),\;m\le n-p\), \(m\) times, we obtain

$$\begin{aligned} \frac{d^m}{da^m}P_{n-p}(a)=(p+1)(p+2)\ldots (p+m)\;P_{n-(p+m)} (a)=\frac{(p+m)!}{p!}P_{n-(p+m)}(a).\nonumber \\ \end{aligned}$$
(11)

For the last coefficient of the polynomial, \(P_n(a)\), we have

$$\begin{aligned} \frac{d^m}{da^m}P_{n}(a)=m!\;P_{n-m}(a), \end{aligned}$$
(12)

Now let us explore the changes of the coefficients under simultaneous translations of the roots by a constant value. Transformations of this kind are called shifts or displacements. First, let us consider the \(n\)th coefficient, \(P_n\), which at the initial point of the translations of the roots has the form

$$\begin{aligned} P_n(0)=\prod _{k=1}^n\;q_k. \end{aligned}$$
(13)

As a result of shifting the roots this product takes the form

$$\begin{aligned} P_n(a)=\prod _{k=1}^n\;(q_k+a). \end{aligned}$$
(14)

In order to evaluate this product take into account that

$$\begin{aligned} f(0)=\prod _{k=1}^n\;(-q_k)=(-1)^nP_n. \end{aligned}$$
(15)

Furthermore, according to (5) one may write

$$\begin{aligned} f(0-a)=(-1)^n\sum _{k=0}^na^kP_{n-k}(0), \end{aligned}$$
(16)

consequently,

$$\begin{aligned} P_n(a)=\sum _{k=0}^n\;a^k \;P_{n-k}(0). \end{aligned}$$
(17)

For the sake of convenience let us re-write this formula in the following equivalent form

$$\begin{aligned} P_n(a)=\sum _{k=0}^n C^{k}_k\;a^k \;P_{n-k}(0). \end{aligned}$$
(18)

In order to obtain similar expressions for the other coefficients of the polynomial \(f(X)\), differentiate \(P_n(a)\) \(k\) times with respect to \(a\), making use of formulae (11), (12). The first order differentiation gives

$$\begin{aligned} \frac{d}{da}P_n(a)=\frac{d}{da}\left( \sum _{k=0}^n C^{k}_k\;a^k \;P_{n-k}\right) =\sum _{k=0}^n kC^{k}_k\;a^{k-1} \;P_{n-k}= P_{n-1}(a). \end{aligned}$$
(19)

Differentiating \(m\)-times \(P_n(a)\), we get

$$\begin{aligned} \frac{d^m}{da^m}P_{n}(a)= & {} \frac{d^m}{da^m}\left( \sum _{k=0}^n C^{k}_{k}\;a^{k} \;P_{n-k}\right) =\sum _{k=m}^n C^{k}_{k}\;\frac{k!}{(k-m)!}\;a^{k-m} \;P_{n-k}(0)\nonumber \\= & {} m!\;P_{n-m}(a). \end{aligned}$$
(20)

By using the formula

$$\begin{aligned} (m-k)C^m_{m-k}=(k+1)C^m_{m-k-1}, \end{aligned}$$
(21)

we arrive at the polynomial expression for the coefficient \(P_{n-m}(a)\):

$$\begin{aligned} P_{n-m}(a)=\sum _{k=m}^n C^{k}_{k-m}\;a^{k-m} \;P_{n-k}(0). \end{aligned}$$
(22)

By using the Pascal matrix, the system of equations (22) is written as follows

$$\begin{aligned}{}[P_k(a)]=\exp \left( aA^{up}\right) [P_k(0)], \end{aligned}$$
(23)

which is the general solution of the matrix equation (10).
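Formula (23) is easy to verify numerically: shift a set of roots by \(a\), recompute the elementary symmetric polynomials, and compare the result with the Pascal matrix acting on the initial coefficient vector. The sketch below reuses the helpers defined in the previous sketches.

```python
# Sketch: numerical check of Eq. (23).
import numpy as np

rng = np.random.default_rng(1)
roots = rng.normal(size=5)
n, a = len(roots), 0.37

P0 = elementary_symmetric(roots)           # [P_0, ..., P_n]
Pa = elementary_symmetric(roots + a)       # coefficients after shifting every root by a
v0 = P0[::-1]                              # vector [P_n, ..., P_0] as in the text
va = pascal_matrix(a, creation_matrix_up(n)) @ v0
assert np.allclose(va, Pa[::-1])
```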

Pascal Matrix as an Adjoint Operator of Differential Operator of Translation

The Taylor expansion of an analytic function \(f(x)\) is given by the convergent power series

$$\begin{aligned} f(x)=f_0+f_1x+f_2x^2+f_3x^3+\cdots +f_nx^n+\cdots , \end{aligned}$$
(24)

where the coefficients are given by the derivatives of \(f(x)\) at the point of expansion, \(x=0\),

$$\begin{aligned} f_k=\frac{1}{k!}\frac{d^kf}{dx^k}\Big |_{x=0}. \end{aligned}$$
(25)

Consider the translation \(x\rightarrow x+a\), \(f(x)\rightarrow f(x+a).\) This translation is expressed by the exponential of the differential operator,

$$\begin{aligned} f(x+a)=\exp \left( a\frac{d}{dx}\right) f(x). \end{aligned}$$
(26)

In the polynomial representation the translation is written as

$$\begin{aligned} f(x+a)=f_0(0)+f_1(0)(x+a)+f_2(0)(x+a)^2+\cdots +f_n(0)(x+a)^n+\cdots \end{aligned}$$
(27)

Opening the brackets, we write this series as a power series in \(x\):

$$\begin{aligned} f(x+a)=f_0(a)+f_1(a)x+f_2(a)x^2+f_3(a)x^3+\cdots +f_n(a)x^n+\cdots \!, \end{aligned}$$
(28)

where the coefficients \(f_k(a)\) are certain polynomials in \(a\).

In order to handle the sequence \(\left\{ f_k(a)\right\} _{k\ge 0}\) in a closed form, introduce the infinite dimensional vector

$$\begin{aligned}{}[f_k(a)]=\left[ f_0(a),f_{1}(a),\ldots ,f_{n-1}(a),f_n(a),\ldots \right] ^T\!. \end{aligned}$$
(29)

Corollary 3.1

The vector \([f_k(0)]\) is transformed into the vector \([f_k(a)]\) under the action of the Pascal matrix according to the formula

$$\begin{aligned} \left[ f_k(a)\right] =\exp \left( aA^{up}\right) \left[ f_k(0)\right] . \end{aligned}$$
(30)

This corollary follows from Theorem 2.1, which is true for any dimension \(n\) of the vectors and of the matrix \(A^{up}\).

The transition (26) is represented by the following identities

$$\begin{aligned} f(x+a)= & {} \sum _k f_k(x+a)^k=\exp \left( a\frac{d}{dx}\right) \sum _k f_kx^k,\nonumber \\ \sum _kf_k(a)x^k= & {} \sum _k \left( \exp \left( aA^{up}\right) \right) _k^{\;i}f_i(0)x^k. \end{aligned}$$
(31)

Consequently we have

$$\begin{aligned} \exp \left( a\frac{d}{dx}\right) \sum _k f_kx^k=\sum _k \left( \exp \left( aA^{up}\right) \right) _k^{\;i}f_i(0)x^k. \end{aligned}$$
(32)
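The identity (32) can be checked on a truncated Taylor series, i.e. an ordinary polynomial, for which the finite Pascal matrix acts exactly. A brief sketch (reusing the helpers from the earlier sketches) follows.

```python
# Sketch: Eq. (32) for a polynomial, i.e. a truncated Taylor series.
import numpy as np

f = np.array([2.0, -1.0, 0.5, 3.0, -0.25])   # f(x) = 2 - x + 0.5x^2 + 3x^3 - 0.25x^4
n, a = len(f) - 1, 0.8
shifted = pascal_matrix(a, creation_matrix_up(n)) @ f   # coefficients of f(x+a)

x = 1.234
lhs = np.polyval(f[::-1], x + a)         # f(x+a) evaluated directly
rhs = np.polyval(shifted[::-1], x)       # polynomial with the transformed coefficients
assert abs(lhs - rhs) < 1e-10
```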

This formula suggests that the Pascal matrix is an adjoint operator to the differential operator of translation. In order to clarify this proposition, let us define the vector consisting of the polynomial sequence \(x^k,\;k=0,1,2,\ldots \):

$$\begin{aligned} \big [x^k\big ]=\left[ x^0,x^1,\ldots ,x^{n-1},x^n,\ldots \right] , \end{aligned}$$
(33)

and consider this vector as a co-vector with respect to the vector of coefficients \([f_k]\). Now consider the scalar product of these vectors, which, evidently, is the analytic function \(f(x)\),

$$\begin{aligned} f(x)=\left( \big [x^k\big ],\left[ f_k\right] \right) . \end{aligned}$$
(34)

Within the framework of this definition formula (32) is written as follows

$$\begin{aligned} f(x+a)=\left( \exp \left( a\frac{d}{dx}\right) \big [x^k\big ],\left[ f_k\right] \right) =\left( \big [x^k\big ],\exp \left( aA^{up}\right) \left[ f_k\right] \right) . \end{aligned}$$
(35)

It is seen that the infinite dimensional matrix \(A^{up}\), acting on the coefficients of the polynomial, is the adjoint operator of the operator of differentiation \(d/dx\), acting on the polynomial function. In order to determine an operator of “x” in this discrete domain, let us recall the Weyl canonical commutation relation

$$\begin{aligned} \exp \left( a\frac{d}{dx}\right) \exp (bx)=\exp (ab)\; \exp (bx)\exp \left( a\frac{d}{dx}\right) . \end{aligned}$$
(36)

This commutation relation shows that, in order to find a discrete operator of “x”, we have to consider the operation of multiplication of an analytic function \(f(x)\) by the function \(\exp (bx)\) and write this operation in the space of coefficients of the Taylor expansion [5]. The product of two functions \(f(x)\) and \(g(x)\) is defined as follows

$$\begin{aligned} F(x)= & {} \left( f_0+f_1x+f_2x^2+\cdots \right) \left( g_0+g_1x+g_2x^2+\cdots \right) \nonumber \\= & {} \left( F_0+F_1x+F_2x^2+F_3x^3+\cdots \right) , \end{aligned}$$
(37)

where coefficients of the resulting polynomial are defined by

$$\begin{aligned} F_k= \sum ^{k}_{i=0}f_ig_{k-i},\quad k=0,1,2,\ldots \end{aligned}$$
(38)

If \(g(x)=\exp (bx)\), then the vector \([F_k]\) of coefficients of the resulting series \(F(x)\) is defined by the following matrix formula

$$\begin{aligned} \big [F_k\big ]=\exp \left( bB^{sub}\right) \big [f_k\big ], \end{aligned}$$
(39)

where the sub-diagonal matrix is defined by [6],

$$\begin{aligned} \big (B^{sub}\big )_{ij}= \left\{ \begin{array}{c@{\quad }c} 1,&{}\quad i=j+1\\ 0,&{}\quad \text{ otherwise }, \end{array} \right. \quad i,j=0,1,2,\ldots . \end{aligned}$$
(40)

Evidently, this matrix is an adjoint operator to the operator “x” if the adjunction is defined with respect to the scalar product (34). The infinite dimensional matrices \(A^{up}\) and \(B^{sub}\) satisfy the same commutation relation as the operators \(d/dx\) and \(x\):

$$\begin{aligned} A^{up}B^{sub}-B^{sub}A^{up}=I,\quad I-\text{ unit } \text{ matrix }. \end{aligned}$$
(41)
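The following sketch (again assuming NumPy and the helpers defined above) illustrates the multiplication rule (39) against the Cauchy product (38) and the commutation relation (41). Note that for matrices truncated to finite size the commutator equals the identity everywhere except in the last diagonal entry, an artefact of cutting the infinite matrices off.

```python
# Sketch: the matrix B^{sub} of Eq. (40), the multiplication rule (39),
# and the truncated commutation relation (41).
import numpy as np
from math import factorial

def B_sub(n):
    """(n+1)x(n+1) sub-diagonal matrix with unit entries: (B)_{j+1,j} = 1."""
    B = np.zeros((n + 1, n + 1))
    for j in range(n):
        B[j + 1, j] = 1.0
    return B

n, b = 6, 0.7
A, B = creation_matrix_up(n), B_sub(n)

# Multiplication of f(x) by exp(bx), compared with the Cauchy product (38).
f = np.arange(1.0, n + 2)                       # some Taylor coefficients f_0..f_n
F_matrix = pascal_matrix(b, B) @ f              # Eq. (39): exp(bB^{sub})[f_k]
e = np.array([b**m / factorial(m) for m in range(n + 1)])   # coefficients of exp(bx)
F_direct = np.array([sum(f[i] * e[k - i] for i in range(k + 1)) for k in range(n + 1)])
assert np.allclose(F_matrix, F_direct)

# Truncated commutator: the identity except in the last diagonal entry.
C = A @ B - B @ A
assert np.allclose(C[:-1, :-1], np.eye(n))
```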

Connection of Polynomial Coefficients with Appell Polynomials

The Pascal matrix and its various generalizations are applied in matrix representations of Appell polynomials [7]. The sequences of Appell polynomials \(\{ H_n(x) \} \) satisfy the relation

$$\begin{aligned} \frac{d}{dx}H_n(x)=nH_{n-1}(x),\quad n=1,2,\ldots , \end{aligned}$$
(42)

with \(H_0(x)=C_0,\;C_0\in \mathbb {R}\setminus \{0\} \). If the initial vector \(\big [H_n(0)\big ]\) is known, then the vector of solutions of these differential equations is given by the Pascal matrix [8]

$$\begin{aligned} \big [H_n(x)\big ]=\exp \left( xA^{sub}\right) \big [H_n(0)\big ], \end{aligned}$$
(43)

where \(A^{sub}\) means the sub-diagonal creation matrix.

We have seen that under a shift of the set of roots the coefficients of the polynomial form a sequence of polynomials which, with respect to the translation parameter, obeys the evolution equations

$$\begin{aligned} \frac{d}{dx}P_{k+1}(x)=(n-k)P_{k}(x),\quad k=0,1,2,\ldots ,n-1, \end{aligned}$$
(44)

which is directly integrated,

$$\begin{aligned} \big [P_n(x)\big ]=\exp \left( xA^{up}\right) \big [P_n(0)\big ], \end{aligned}$$
(45)

where \(A^{up}\) means the upper-diagonal creation matrix. The essential difference between this formula and (43) is that here the Pascal matrix contains the creation matrix given by the upper-diagonal matrix, whereas in (43) it is given by the sub-diagonal matrix.

The question arises: how is the sequence of coefficients of the polynomial \(f(X)\) connected with the class of Appell polynomials? Many polynomials belong to the class of Appell polynomials; the simplest case is the Hermite polynomials.

In order to establish the interrelation between the coefficients of the polynomial \(f(X)\), regarded as functions of the root-shift parameter, and the Appell polynomials, it is necessary to take \(P_n(x)=H_n(x)\); then for the polynomials \(P_k\) with \(k<n\) the following relationship holds true

$$\begin{aligned} P_k(x)=\frac{n!}{(n-k)!k!}H_k(x)=C^n_kH_k(x),\quad k\le n. \end{aligned}$$
(46)

In fact, substitute this formula into the evolution equation (44). We get,

$$\begin{aligned} \frac{d}{dx}\quad \frac{n!}{(n-k-1)!(k+1)!} H_{k+1}(x)= & {} (n-k)\quad \frac{n!}{(n-k)!\,k!} H_{k}(x)\nonumber \\= & {} (k+1)\frac{n!}{(n-k-1)!(k+1)!} H_{k}(x). \end{aligned}$$
(47)

In this way the evolution equations (44) for the coefficients of the polynomial are transformed into the evolution equations (42) for the Appell polynomials,

$$\begin{aligned} \frac{d}{dx}\;H_{k+1}(x)=(k+1) H_{k}(x). \end{aligned}$$

Notice, however, that this correspondence works only up to \(k=n\).
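The correspondence (46) can also be probed numerically: with \(H_k(x):=P_k(x)/C^n_k\) one may check the Appell property \(dH_{k+1}/dx=(k+1)H_k\) for \(k<n\). A sketch follows (reusing the helper above; the coefficients of \(P_k(x)\) are taken from Eq. (22)).

```python
# Sketch: checking the Appell property of H_k(x) = P_k(x)/C^n_k, Eq. (46).
import numpy as np
from math import comb

roots = np.array([1.0, -2.0, 0.5, 3.0])
n = len(roots)
P0 = elementary_symmetric(roots)                      # P_0(0), ..., P_n(0)

def P_poly(k):
    """Coefficients (ascending powers of x) of P_k(x), from Eq. (22) with m = n - k."""
    m = n - k
    return np.array([comb(m + j, j) * P0[k - j] for j in range(k + 1)])

for k in range(n):
    H_next = P_poly(k + 1) / comb(n, k + 1)
    H_k = P_poly(k) / comb(n, k)
    dH = np.polynomial.polynomial.polyder(H_next)     # d/dx H_{k+1}, ascending order
    assert np.allclose(dH, (k + 1) * H_k)             # Appell relation (42)
```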

For almost all classical polynomials, defined in the ordinary way as well as in a generalized way, the corresponding generating functions are known [9]. The generating functions are usually given in a form that reveals the Appell property through the inclusion of the exponential function [3].

Applications of the Pascal Matrix Transformation

Reduction of the Original Polynomial to an Invariant Polynomial and Deflation

The Pascal matrix serves as a faithful representation of the translation operator in a discrete space [10]. This representation provides a simple and powerful tool for investigating various problems in the field of polynomials. As examples, consider the problem of constructing the invariant polynomial and the problem of deflation when one of the roots is known.

If in the polynomial \(f(X)\) we replace \(X\) by

$$\begin{aligned} X=Y+\frac{1}{n}P_1, \end{aligned}$$
(48)

then the resulting polynomial

$$\begin{aligned} r(Y)=f\left( Y+\frac{1}{n}P_1\right) =r_0Y^{n}+\sum _{k=2}^{n-1}(-)^kr_kY^{n-k}+(-)^nr_n, \end{aligned}$$
(49)

will not contain the \((n-1)\)-degree term in the variable \(Y\), since

$$\begin{aligned} r_1=\sum _{k=1}^n\left( q_k-\frac{1}{n}P_1\right) =0. \end{aligned}$$
(50)

The polynomial \(r(Y)\) is called the invariant polynomial [11] .

Applying \(Y=X-P_1/n\) to every root, we get

$$\begin{aligned} y_k=q_k-\frac{1}{n}\sum _{i=1}^{n}q_i=\frac{1}{n}\sum _{i\ne k}^n \big (q_k-q_i\big ),\quad k=1,\ldots ,n. \end{aligned}$$
(51)

Since \(y_k\) consists of differences \((q_k-q_i)\), it remains invariant under simultaneous translations of the type \(q_k\rightarrow q_k+a,\;k=1,\ldots ,n\); the coefficients \(r_k,\;k=0,1,2,\ldots ,n,\) being sums of homogeneous monomials in \(y_k,\;k=1,\ldots ,n\), also remain invariant with respect to these translations. Denote

$$\begin{aligned} Q=\frac{1}{n}P_1. \end{aligned}$$
(52)

The transformation from the original polynomial \(f(X)\) to the invariant polynomial \(r(Y)\) is carried out by the translation \(Y=X-Q\), which is equivalent to the simultaneous translation of the roots \(q_k\rightarrow q_k-Q\). Consequently, in order to calculate the coefficients of the transformed polynomial we have to use formula (23) with the translation parameter \(a = -Q\). Introduce two \((n+1)\)-dimensional vectors \([r_k]\) and \([P_k]\), defined as follows

$$\begin{aligned} \big [r_k\big ]=\big [r_n,r_{n-1},\ldots ,r_1,r_0\big ]^T,\quad \big [P_k\big ]=\big [P_n,P_{n-1},\ldots ,P_1,P_0\big ]^T. \end{aligned}$$
(53)

Then the following formulae hold true

$$\begin{aligned} \big [r_k\big ]=\exp \big (-Q\cdot A^{up}\big )\big [P_k\big ],\quad \big [P_k\big ]=\exp \big (Q\cdot A^{up}\big )\big [r_k\big ], \end{aligned}$$
(54)
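A short sketch of the reduction (54) (reusing the helpers above): shift the coefficient vector by \(-Q=-P_1/n\) and check that the coefficient \(r_1\) of \(Y^{n-1}\) vanishes.

```python
# Sketch: reduction to the invariant polynomial, Eq. (54).
import numpy as np

roots = np.array([2.0, -1.0, 4.0, 0.5])
n = len(roots)
P = elementary_symmetric(roots)[::-1]        # vector [P_n, ..., P_1, P_0]
Q = P[-2] / n                                # Q = P_1 / n, Eq. (52)
r = pascal_matrix(-Q, creation_matrix_up(n)) @ P
assert abs(r[-2]) < 1e-12                    # r_1 = 0: no Y^{n-1} term remains
```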

Consider now the problem of calculating the roots of the polynomial. The effort of root finding can be significantly reduced by the use of deflation. Once a root \(q_1\) of a polynomial \(f(x)\) has been found, consider next the deflated polynomial \(q(x)\), which satisfies

$$\begin{aligned} f(x)=\big (x-q_1\big )q(x). \end{aligned}$$

The roots of \(q(x)\) are exactly the remaining roots of \(f(x)\). Because the degree decreases, the effort of finding the remaining roots decreases. More importantly, with deflation we avoid the blunder of having an iterative method converge twice to the same root instead of separately to two different roots. Deflation can be carried out by synthetic division of \(f(x)\) by \((x-q_1)\), which acts on the array of polynomial coefficients, or by using the Horner scheme [12].

Our method of deflation is based on shifting all roots of the \(n\)-degree polynomial \(f(x,n)\) by the value \((-q_1)\), where \(q_1\) is the known root. This shift is carried out by using the Pascal matrix transformation

$$\begin{aligned} \big [r_k\big ]=\exp \big (-q_1\cdot A^{up}\big )\big [P_k\big ]. \end{aligned}$$
(55)

Here the two \((n+1)\)-dimensional vectors \([r_k]\) and \([P_k]\) are defined as follows

$$\begin{aligned} \big [r_k\big ]=\big [r_n,r_{n-1},\ldots ,r_1,r_0\big ]^T,\quad \big [P_k\big ]=\big [P_n,P_{n-1},\ldots ,P_1,P_0\big ]^T. \end{aligned}$$
(56)

The set of coefficients transformed in such a way necessarily has its last coefficient equal to zero, \(r_n=0\). In fact, this coefficient is defined as the product

$$\begin{aligned} r_n=\prod _{k=1}^n\big (q_k-q_1\big )=0. \end{aligned}$$
(57)

Consequently, the transformed polynomial does not contain the free coefficient, and the problem is reduced to the solution of a polynomial of degree \((n-1)\) [13].

Let \(x_k(0),\,k=1,2,3,\ldots ,n\) be the roots of the polynomial \(f(x,n)\), and let \(\big [P_k(0)\big ]=\big [P_n,P_{n-1},\ldots ,P_1,P_0\big ]^T\) be the vector of coefficients of the polynomial \(f(x,n)\). Let \(x_1(0)\) be the root which is known. Then by the Pascal matrix transformation of the coefficients

$$\begin{aligned} \big [P_k(1)\big ]=\exp \big (-x_1(0)\cdot A^{up}\big )\big [P_k(0)\big ], \end{aligned}$$
(58)

we obtain coefficients of the polynomial with roots \(x_1(1)=0\) and \(x_k(1)=x_k(0)-x_1(0),k=2,3,\ldots ,n\).

Due to the presence of the trivial root, \(x_1(1)=0\), the free coefficient of the new polynomial is equal to zero. The set of the other coefficients forms an \((n-1)\)-degree polynomial \(f(x,n-1)\). Calculate only one of the roots of this polynomial, let it be \(x_2(1)\). This quantity serves as the parameter of the next shift,

$$\begin{aligned} \big [P_k(2)\big ]=\exp \big (-x_2(1)\cdot A^{up}\big )\big [P_k(1)\big ], \end{aligned}$$
(59)

At the \(m\)th step we have

$$\begin{aligned} \big [P_k(m)\big ]=\exp \big (-x_m(m-1)\cdot A^{up}\big )\big [P_k(m-1)\big ], \end{aligned}$$
(60)

At the \(n\)th step we arrive at the linear equation

$$\begin{aligned} x_{n}(n-1)=x_{n}(n-2)-x_{n-1}(n-2). \end{aligned}$$
(61)

At the end of this algorithm we possess \(n\) quantities

$$\begin{aligned} x_1(0),\;\;x_2(1),\;\;x_3(2),\ldots ,x_l(l-1),\ldots ,x_n(n-1), \end{aligned}$$
(62)

that is to say, we find one root at each step of the algorithm. The root \(x_l(l-1)\) is a root of the polynomial of degree \((n-l+1)\). Finally, we arrive at a quadratic equation of the form

$$\begin{aligned} X^2-P_{n-1}(n-1)X+P_{n}(n-1)=0,\;\text{ with } \text{ solutions }\; x_{n-1}(n-2), x_{n}(n-2).\qquad \end{aligned}$$
(63)

Find one of these roots, \(x_{n-1}(n-2)\). Then the other is

$$\begin{aligned} P_1(n-1)=x_n(n-1)=x_n(n-2)-x_{n-1}(n-2). \end{aligned}$$
(64)

Here the unknown is \(x_n(n-2)\), which we find from the last equation:

$$\begin{aligned} x_n(n-2)=x_n(n-1)+x_{n-1}(n-2). \end{aligned}$$
(65)

By inverse passage we obtain all solutions of the original equation, which are defined by the following formula

$$\begin{aligned} x_k(0)=\sum _{l=1}^k x_l(l-1),\quad k=1,2,\ldots ,n. \end{aligned}$$
(66)
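The whole deflation procedure can be sketched in a few lines (reusing the helpers above; np.roots serves here merely as a stand-in for whichever iterative one-root solver is preferred): at each step one root of the current polynomial is found, the coefficient vector is shifted by minus that root, the vanishing free coefficient is dropped, and at the end the roots of the original polynomial are recovered through the cumulative sums of Eq. (66).

```python
# Sketch: root finding by repeated Pascal-matrix deflation.
import numpy as np

def roots_by_pascal_deflation(roots_true):
    coeffs = elementary_symmetric(roots_true)[::-1]     # [P_n, ..., P_0]
    shifts = []                                          # x_1(0), x_2(1), ...
    while len(coeffs) > 1:
        n = len(coeffs) - 1
        # signed coefficients of f: X^n - P_1 X^{n-1} + ... + (-1)^n P_n
        signed = [(-1) ** (n - k) * coeffs[k] for k in range(n + 1)]
        x = np.roots(signed[::-1])[0].real               # one root of the current polynomial
        shifts.append(x)
        coeffs = pascal_matrix(-x, creation_matrix_up(n)) @ coeffs
        assert abs(coeffs[0]) < 1e-6                     # free coefficient r_n = 0, Eq. (57)
        coeffs = coeffs[1:]                              # deflate: drop the zero root
    return np.cumsum(shifts)                             # Eq. (66)

true = np.array([3.0, -1.0, 2.0, 0.5])
found = roots_by_pascal_deflation(true)
assert np.allclose(np.sort(found), np.sort(true))
```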

Applications to Dynamical Systems

The mathematical model of a composed dynamical system with a finite spectrum of energies consists of the following principal elements [14–16]. The state of the \(n\)-ary composed dynamical system is defined by the \(n\)-degree polynomial function \(f(X)\). The coefficients and the roots of the polynomial are interpreted as external and internal energies, respectively.

Let us suppose that the set of roots of the state polynomial \(f(X)\) is the array of the kinetic energies

$$\begin{aligned} q_k=\frac{p_k^2}{2m_k},\quad k=1,2,\ldots n. \end{aligned}$$
(67)

Then the set of coefficients of the state polynomial \(f(X)\) is interpreted as an array of external kinetic energies of the composed dynamical system. Form the \((n+1)\)-dimensional vector of the coefficients of the state polynomial

$$\begin{aligned}{}[K]=\big (P_n,P_{n-1},P_{n-2},\ldots ,P_1,P_0 \big ). \end{aligned}$$
(68)

In classical mechanics the total energy is the sum of the kinetic energy \(K\) and the potential energy \(V(r)\):

$$\begin{aligned} E=K+V(r)=\frac{p^2}{2m}+V(r). \end{aligned}$$
(69)

In the case of composed dynamics this formula is extended as follows

$$\begin{aligned} E_k=q_k+V(r)=\frac{p_k^2}{2m_k}+V(r). \end{aligned}$$
(70)

It is seen that the inclusion of the potential field is accomplished by a shift of the set of roots of the polynomial \(f(X)\). Define the vector of total energies

$$\begin{aligned}{}[E]=\big (E_n,E_{n-1},E_{n-2},\ldots ,E_1,E_0 \big ). \end{aligned}$$
(71)

Under the simultaneous translations of the roots \(q_k\), the coefficients of the state polynomial are transformed according to formula (23); accordingly, the vectors \([K]\) and \([E]\) are related via the following Pascal matrix transformation

$$\begin{aligned}{}[E]=\exp \big (V(r)A^{up}\big )[K]. \end{aligned}$$
(72)

This formula is an analogue of the classical formula (69). An analogue of the classical inverse transformation

$$\begin{aligned} K=E-V(r), \end{aligned}$$
(73)

obviously, is given by the formula

$$\begin{aligned}{}[K]=\exp \big (-V(r)A^{up}\big )[E]. \end{aligned}$$
(74)
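A numerical sketch of the transformations (72) and (74) (reusing the helpers above; the value \(V\) below is simply a numerical stand-in for the potential \(V(r)\)):

```python
# Sketch: the coefficient-level analogue of E = K + V, Eqs. (72) and (74).
import numpy as np

kinetic = np.array([0.5, 1.2, 2.0])                 # internal kinetic energies q_k
n, V = len(kinetic), 0.8                            # V plays the role of V(r)
K = elementary_symmetric(kinetic)[::-1]             # external kinetic energies [P_n, ..., P_0]
E = pascal_matrix(V, creation_matrix_up(n)) @ K     # Eq. (72)
assert np.allclose(E, elementary_symmetric(kinetic + V)[::-1])
assert np.allclose(pascal_matrix(-V, creation_matrix_up(n)) @ E, K)   # Eq. (74)
```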

Geometrical Interpretation of Pascal Matrix Transformation

The Pascal matrix transformation admits the following simple geometrical interpretation. Let us start with the Pythagoras formula

$$\begin{aligned} a^2+b^2=c^2, \end{aligned}$$
(75)

where \(a,b,c\) are the sides of the right-angled triangle \(\triangle {ABC}\) with the right angle at \(C\) and \(a=BC,\;b=AC,\;c=AB\). Consider the following quadratic polynomial

$$\begin{aligned} X^2-2cX+b^2=f(X), \end{aligned}$$
(76)

with roots \(q_1=c-a,\;q_2=c+a\). In the equation \(f(X)=0\), make the substitution \(Y=X-c\); then the quadratic equation is reduced to

$$\begin{aligned} Y^2=c^2-b^2. \end{aligned}$$
(77)

Hence, \(Y=a\).

Consider a rectangle with sides \(q_1=c-a,\;q_2=c+a\), with half-perimeter \(P/2=(q_1+q_2)=2c\) and area \(S=q_1q_2=b^2\). It is seen that the quadratic equation \(f(X)=0\) connects the perimeter of the rectangle with its area.

Under the simultaneous translation \(q_k\rightarrow q_k+a,\;k=1,2\), the coefficients transform as

$$\begin{aligned} \left( \begin{array}{c} P_2(a)\\ P_1(a)\\ P_0=1 \end{array} \right) = \left( \begin{array}{c@{\quad }c@{\quad }c} 1&{}\quad a&{}\quad a^2 \\ 0&{}\quad 1&{}\quad 2a \\ 0&{}\quad 0&{}\quad 1 \end{array} \right) \left( \begin{array}{c} P_2\\ P_1\\ P_0=1 \end{array} \right) . \end{aligned}$$
(78)

Now consider a three-dimensional rectangular parallelepiped with edge lengths \(q_k,\;k=1,2,3\). The volume \(V_3\) of the rectangular parallelepiped is equal to the product of the edges,

$$\begin{aligned} V_3=q_1q_2q_3. \end{aligned}$$
(79)

The total surface area of the parallelepiped is

$$\begin{aligned} V_2=2\big (q_1q_2+q_3q_1+q_2q_3\big ), \end{aligned}$$
(80)

and the total sum of the edge lengths is

$$\begin{aligned} V_1=4\big (q_1+q_2+q_3\big ). \end{aligned}$$
(81)

Let us consider the edge lengths \(q_k,k=1,2,3\) as roots of a cubic polynomial

$$\begin{aligned} f(X)=X^3-P_1X^2+P_2X-P_3, \end{aligned}$$
(82)

where \(P_3\) is the volume \(V\), \( P_3=q_1q_2q_3=V\), the coefficient \(P_2\) is proportional to the total surface area \(S\), \(S=2P_2=2\big (q_1q_2+q_3q_1+q_2q_3\big )\), and the coefficient \(P_1\) is proportional to the perimeter \(L\) (the total edge length), \(L =4\big (q_1+q_2+q_3\big )=4P_1\). Thus, the volume, the total surface area and the perimeter are related via the cubic polynomial equation

$$\begin{aligned} X^3-\frac{1}{4}LX^2+\frac{1}{2}SX-V=0. \end{aligned}$$
(83)

Now let the roots of this polynomial, i.e., the lengths of the edges, undergo a simultaneous translation by the same value \(a\). This transformation is given by the following matrix equation

$$\begin{aligned} \big [P_k(a)\big ]=\exp \big (aA^{up}\big )\big [P_k(0)\big ], \end{aligned}$$
(84)

or, in explicit form [6],

$$\begin{aligned} \left( \begin{array}{c} V(a)\\ S(a)/2\\ L(a)/4\\ 1 \end{array} \right) = \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 1&{}\quad a&{}\quad a^2&{}\quad a^3 \\ 0&{}\quad 1&{}\quad 2a&{}\quad 3a^2 \\ 0&{}\quad 0&{}\quad 1&{}\quad 3a\\ 0&{}\quad 0&{}\quad 0&{}\quad 1 \end{array} \right) \left( \begin{array}{c} V(0)\\ S(0)/2\\ L(0)/4\\ 1 \end{array} \right) . \end{aligned}$$
(85)

These equations establish the dynamics of the volume, the surface area and the perimeter of the rectangular parallelepiped with respect to translations of the edges. There are two combinations of these dynamical values which remain invariant under this transformation. The invariant functions are obtained by using the following Pascal matrix equation

$$\begin{aligned} \big [r_k\big ]=\exp \big (-QA^{up}\big )\big [P_k\big ], \end{aligned}$$
(86)

whose explicit matrix form is

$$\begin{aligned} \left( \begin{array}{c} r_3\\ r_2\\ r_1\\ 1 \end{array} \right) = \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 1&{}\quad -Q&{}\quad Q^2&{}\quad -Q^3 \\ 0&{}\quad 1&{}\quad -2Q&{}\quad 3Q^2 \\ 0&{}\quad 0&{}\quad 1&{}\quad -3Q\\ 0&{}\quad 0&{}\quad 0&{}\quad 1 \end{array} \right) \left( \begin{array}{c} V\\ S/2\\ L/4\\ 1 \end{array} \right) . \end{aligned}$$
(87)

where \(r_1=0\). From this condition it follows that \(12Q=L\).
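A closing numerical sketch (reusing the helpers above) illustrates the parallelepiped dynamics (85) and the invariants obtained from (87) with \(Q=L/12\).

```python
# Sketch: edge-shift dynamics of Eq. (85) and the invariants of Eq. (87).
import numpy as np

q = np.array([1.0, 2.0, 3.0])                        # edge lengths
V, S, L = q.prod(), 2 * (q[0]*q[1] + q[0]*q[2] + q[1]*q[2]), 4 * q.sum()
vec = np.array([V, S / 2, L / 4, 1.0])               # [P_3, P_2, P_1, P_0]
A = creation_matrix_up(3)

a = 0.5                                              # shift every edge by a
shifted = pascal_matrix(a, A) @ vec                  # Eq. (85)
qa = q + a
assert np.allclose(shifted, [qa.prod(),
                             qa[0]*qa[1] + qa[0]*qa[2] + qa[1]*qa[2],
                             qa.sum(), 1.0])

Q = L / 12                                           # the condition 12Q = L
r = pascal_matrix(-Q, A) @ vec                       # Eq. (87)
assert abs(r[2]) < 1e-12                             # r_1 = 0
# r[0] and r[1] are the two combinations that remain invariant under edge translations.
```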