Abstract
Several operations can be defined on the set of all linear recurrent sequences, such as the binomial convolution (Hurwitz product) or the multinomial convolution (Newton product). Using elementary techniques, we prove that this set equipped with the termwise sum and the aforementioned products is an R-algebra, given any commutative ring R with identity. Moreover, we provide explicitly a characteristic polynomial of the Hurwitz product and Newton product of any two linear recurrent sequences. Finally, we also investigate whether these R-algebras are isomorphic, considering also the R-algebras obtained using the Hadamard product and the convolution product.
1 Introduction
Given a commutative ring with identity \((R, +, \cdot )\), we will denote by \({\mathcal {S}}(R)\) the set of all sequences \({\textbf{a}} = (a_n)_{n \ge 0}\) such that \(a_n \in R\), for all \(n \in {\mathbb {N}}\). A sequence \({\textbf{a}} \in {\mathcal {S}}(R)\) is said to be a linear recurrent sequence of order N if its elements satisfy, for all \(n\ge N\),
$$\begin{aligned} a_n = h_1 a_{n-1} + h_2 a_{n-2} + \cdots + h_N a_{n-N}, \end{aligned}$$
for some coefficients \(h_i\in R\), \(i=1,\ldots ,N\), where \(h_N\) is not a zero divisor in R, and we define
$$\begin{aligned} p_a(t) = t^N - h_1 t^{N-1} - h_2 t^{N-2} - \cdots - h_N \end{aligned}$$
as the characteristic polynomial associated to this recurrence relation. The elements \(a_0, \ldots , a_{N-1}\) are called initial conditions. We will denote by \({\mathcal {W}}(R) \subset {\mathcal {S}}(R)\) the set of all linear recurrent sequences. Moreover, given \({\textbf{a}} \in {\mathcal {S}}(R)\), we will write
$$\begin{aligned} A_o(t) = \sum _{n=0}^{+\infty } a_n t^n \quad \text {and} \quad A_e(t) = \sum _{n=0}^{+\infty } \frac{a_n}{n!}\, t^n \end{aligned}$$
for the ordinary generating function (o.g.f.) and the exponential generating function (e.g.f.), respectively.
It is well known that \({\mathcal {S}}(R)\) and \({\mathcal {W}}(R)\) can be equipped with several operations giving them interesting algebraic structures. When R is a field, it is immediate to see that the element-wise sum or product (also called the Hadamard product) of two linear recurrent sequences is still a linear recurrent sequence, see, e.g., [8]. In [7], the authors proved this in the more general case where R is a ring, showing that \({\mathcal {W}}(R)\) is an R-algebra and also giving explicitly the characteristic polynomials of the element-wise sum and Hadamard product of two linear recurrent sequences. Larson and Taft [18, 24] studied this algebraic structure, characterizing the invertible elements and zero divisors. Further studies about the behaviour of linear recurrent sequences under the Hadamard product can be found, e.g., in [6, 11, 13, 25]. Similarly, \({\mathcal {W}}(R)\) equipped with the element-wise sum and the convolution product (or Cauchy product) has been deeply studied. For instance, \({\mathcal {W}}(R)\) is still an R-algebra and the characteristic polynomial of the convolution product of two linear recurrent sequences can be explicitly found [7]. The convolution product of linear recurrent sequences is very important in many applications and has also been studied from a combinatorial point of view [1] and over finite fields [12]. For other results, see, e.g., [21,22,23]. Another important operation between sequences is the binomial convolution (or Hurwitz product). The Hurwitz series ring, introduced in a systematic way by Keigher [14], has been extensively studied by several authors [2,3,4,5, 15, 19]. However, there are few results when focusing on linear recurrent sequences [16, 17].
In this paper, we extend the study of the algebraic structure of linear recurrent sequences, considering in particular the Hurwitz product and the Newton product (the generalization of the Hurwitz product based on multinomial coefficients). Specifically, we prove that \({\mathcal {W}}(R)\) is an R-algebra when equipped with the element-wise sum and the Hurwitz product, as well as when we consider the element-wise sum and the Newton product. We also give explicitly the characteristic polynomials of the Hurwitz and Newton products of two linear recurrent sequences. For the Newton product we also find the inverses explicitly. Moreover, we study the isomorphisms between these algebraic structures, finding that \({\mathcal {W}}(R)\) with the element-wise sum and the Hurwitz product is not isomorphic to the other algebraic structures, whereas if we consider the Newton product, there is an isomorphism with the R-algebra obtained using the Hadamard product. Finally, we provide an overview of the behaviour of linear recurrent sequences under all the different operations considered (element-wise sum, Hadamard product, Cauchy product, Hurwitz product, Newton product) with respect to the characteristic polynomials and their companion matrices.
2 Preliminaries and notation
For any \({\textbf{a}}, {\textbf{b}} \in {\mathcal {S}}(R)\), we will deal with the following operations:
- componentwise sum \(\oplus \), defined by
$$\begin{aligned} {\textbf{a}} \oplus {\textbf{b}} = {\textbf{c}}, \quad c_n = a_n + b_n, \quad \forall n \ge 0; \end{aligned}$$
- componentwise product or Hadamard product \(\odot \), defined by
$$\begin{aligned} {\textbf{a}} \odot {\textbf{b}} = {\textbf{c}}, \quad c_n = a_n \cdot b_n, \quad \forall n \ge 0; \end{aligned}$$
- convolution product \(*\), defined by
$$\begin{aligned} {\textbf{a}} * {\textbf{b}} = {\textbf{c}}, \quad c_n = \sum _{i=0}^n a_i b_{n-i}, \quad \forall n \ge 0; \end{aligned}$$
- binomial convolution product or Hurwitz product \(\star \), defined by
$$\begin{aligned} {\textbf{a}} \star {\textbf{b}} = {\textbf{c}}, \quad c_n = \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) a_i b_{n-i}, \quad \forall n \ge 0; \end{aligned}$$
- multinomial convolution product or Newton product \(\boxtimes \), defined by
$$\begin{aligned} {\textbf{a}} \boxtimes {\textbf{b}} = {\textbf{c}}, \quad c_n = \sum _{i=0}^n \sum _{j=0}^i \left( {\begin{array}{c}n\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ j\end{array}}\right) a_i b_{n-j}, \quad \forall n \ge 0. \end{aligned}$$
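To fix ideas, the five operations can be sketched in Python on finite prefixes of sequences (an illustration over the integers; the paper works over an arbitrary commutative ring R, and the function names are ours):

```python
from math import comb

# The five operations on finite prefixes (lists of equal length) of sequences.
# Every product below is "lower triangular": c_n depends only on terms with
# index at most n, so computing on prefixes is faithful.

def oplus(a, b):       # componentwise sum
    return [x + y for x, y in zip(a, b)]

def hadamard(a, b):    # componentwise (Hadamard) product
    return [x * y for x, y in zip(a, b)]

def cauchy(a, b):      # convolution (Cauchy) product
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(len(a))]

def hurwitz(a, b):     # binomial convolution (Hurwitz) product
    return [sum(comb(n, i) * a[i] * b[n - i] for i in range(n + 1))
            for n in range(len(a))]

def newton(a, b):      # multinomial convolution (Newton) product
    return [sum(comb(n, i) * comb(i, j) * a[i] * b[n - j]
                for i in range(n + 1) for j in range(i + 1))
            for n in range(len(a))]

# Example: the Hurwitz square of (1, 1, 1, ...) is (2^n), since sum_i C(n,i) = 2^n.
assert hurwitz([1] * 6, [1] * 6) == [2 ** n for n in range(6)]
```

For instance, one can check directly on prefixes that \((1, 0, 0, \ldots )\) acts as the identity for both \(\star \) and \(\boxtimes \).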
Remark 1
The Newton product is also called multinomial convolution product, because it is the natural generalization of the binomial convolution product using the multinomial coefficient. Observe indeed that \(\left( {\begin{array}{c}n\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ j\end{array}}\right) = \left( {\begin{array}{c}n\\ n-i, i-j, j\end{array}}\right) \).
In [7], the authors showed that \(({\mathcal {W}}(R), \oplus , \odot )\) and \(({\mathcal {W}}(R), \oplus , *)\) are R-algebras and they are never isomorphic. Moreover, given \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\) and \({\textbf{c}} = {\textbf{a}} \odot {\textbf{b}}\), \({\textbf{d}} = {\textbf{a}} * {\textbf{b}}\), they proved that
$$\begin{aligned} p_c(t) = p_a(t) \otimes p_b(t), \quad p_d(t) = p_a(t) \cdot p_b(t), \end{aligned}$$
(1)
where the operation \(\otimes \) between polynomials is defined as follows. Given two polynomials f(t) and g(t) with coefficients in R, with companion matrices F and G, respectively, \(f(t) \otimes g(t)\) is the characteristic polynomial of the Kronecker product of F and G. In the following, we will denote by \(\otimes \) also the Kronecker product between matrices. To the best of our knowledge, similar results involving the Hurwitz product and the Newton product are still missing.
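The operation \(\otimes \) can be illustrated numerically (a sketch; the representation of a monic polynomial by its list of lower-order coefficients and the helper names are our assumptions):

```python
import numpy as np

# Hypothetical helpers: f ⊗ g computed as the characteristic polynomial of
# the Kronecker product of the companion matrices of f and g. A monic
# polynomial t^N + c_{N-1} t^{N-1} + ... + c_0 is stored as [c_0, ..., c_{N-1}].

def companion(coeffs):
    N = len(coeffs)
    C = np.zeros((N, N))
    if N > 1:
        C[1:, :-1] = np.eye(N - 1)     # subdiagonal of ones
    C[:, -1] = [-c for c in coeffs]    # last column carries the coefficients
    return C

def poly_otimes(f, g):
    # np.poly returns the monic characteristic polynomial of a matrix,
    # highest degree first: [1, c_{MN-1}, ..., c_0].
    return np.poly(np.kron(companion(f), companion(g)))

# The roots multiply: (t - 2) ⊗ (t - 3) = t - 6.
assert np.allclose(poly_otimes([-2], [-3]), [1, -6])
```

With the Fibonacci polynomial \(t^2 - t - 1\) (roots \(\varphi , \psi \)) and \(t - 2\), one gets \(t^2 - 2t - 4\), whose roots are \(2\varphi \) and \(2\psi \).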
Remark 2
Let us observe that the sequences \({\textbf{c}}\) and \({\textbf{d}}\), defined above, recur with characteristic polynomials \(p_c(t)\) and \(p_d(t)\) as given in (1), respectively, but these polynomials are not necessarily the minimal polynomials of recurrence. When R is a ring without zero divisors (see [8] for more details), the minimal polynomial of a linear recurrent sequence \({\textbf{a}}\) is the (unique) monic polynomial f(t) that divides every characteristic polynomial of every linear recurrence relation satisfied by \({\textbf{a}}\). In other words, it is the characteristic polynomial of the linear recurrence relation of least degree satisfied by \({\textbf{a}}\). In general, it is a hard problem to find the minimal polynomials of recurrence of these sequences; for some results, see [6, 11, 18, 21].
Lemma 3
Given \({\textbf{a}} \in {\mathcal {S}}(R)\), we have that \({\textbf{a}} \in {\mathcal {W}}(R)\) and \(p_a(t)\) is its characteristic polynomial if and only if \(p^*_a(t) \cdot A_o(t)\) is a polynomial of degree less than \(\deg (p_a(t))\), where \(p^*_a(t)\) denotes the reciprocal or reflected polynomial of \(p_a(t)\).
Proof
See [7, Lemma 3.2]. \(\square \)
Definition 4
Given two monic polynomials f(t) of degree M and g(t) of degree N, their resultant is \(\text {res}(f(t), g(t)):= \prod _{i=1}^M \prod _{j=1}^{N}(\alpha _i - \beta _j)\), where the \(\alpha _i\)’s and \(\beta _j\)’s are, respectively, the roots of f(t) and g(t), counted with their multiplicities.
3 R-algebras of linear recurrent sequences
The main results of this section are provided in Theorems 5 and 9, where we prove that \(({\mathcal {W}}(R), \oplus , \star )\) and \(({\mathcal {W}}(R), \oplus , \boxtimes )\) are R-algebras, finding also the characteristic polynomial of the Hurwitz and Newton product between two linear recurrent sequences.
Theorem 5
Given \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\), we have that \({\textbf{r}} = {\textbf{a}} \star {\textbf{b}} \in {\mathcal {W}}(R)\) and the characteristic polynomial of \({\textbf{r}}\) is \(\text {res}(p_a(x),p_b(t-x))\) with \(p_b(t-x)\) regarded as a polynomial in x. Moreover, \(({\mathcal {W}}(R), \oplus , \star )\) is an R-algebra.
Proof
It is well known that \(({\mathcal {S}}(R), \oplus , \star )\) is an R-algebra (see, e.g., [14]), thus it is sufficient to show that \({\mathcal {W}}(R)\) is closed under the Hurwitz product in order to prove that \(({\mathcal {W}}(R), \oplus , \star )\) is an R-algebra.
Let M and N be the degrees of \(p_a(t)\) and \(p_b(t)\), respectively. Without loss of generality, let us suppose \(p_a(t)\) and \(p_b(t)\) have distinct roots denoted by \(\alpha _1, \ldots , \alpha _M\) and \(\beta _1, \ldots , \beta _N\), respectively. We consider the ordinary generating function of the sequence \({\textbf{r}} = {\textbf{a}} \star {\textbf{b}}\),
where \(\sum _{m=0}^{+\infty }\left( {\begin{array}{c}m+i\\ i\end{array}}\right) b_{m} t^{m}\) is the o.g.f. of the sequence obtained from the Hadamard product between \({\textbf{b}}\) and \(\left( \left( {\begin{array}{c}m+i\\ i\end{array}}\right) \right) _{m \ge 0}\), i.e.,
Since \(B_o(t)\) is the ordinary generating function of \({\textbf{b}} \in {\mathcal {W}}(R)\), it is a rational function and we can write it as
for some coefficients \(c_j\). Now, we have
and we get that
Thus, from (2) we obtain
where \(\delta (t)\) is a polynomial of degree less than M. Let \(p(t)=\text {res}(p_a(x), p_b(t-x))\), then \({p(t)= \prod _{h=1}^{M} \prod _{l=1}^{N} (t-\alpha _h -\beta _l} )\) and its reciprocal polynomial is \({p^*(t)= \prod _{h=1}^{M} \prod _{l=1}^{N} (1-(\alpha _h + \beta _l)t)}\). In particular, it is possible to rearrange the last formula in the following way
Moreover, we can express the function \(\delta \left( \frac{t}{1-\beta _jt}\right) \) as
with \(\deg (\mu _j(t)) \le M-1\). Applying the same reasoning, we have
with \(\deg (\xi _j(t)) \le M\). Hence, Eq. (5) becomes
We have from (6)
thus, by Lemma 3, \({\textbf{r}}\) is a linear recurrent sequence whose characteristic polynomial is \(p(t)=\text {res}(p_a(x), p_b(t-x))\). \(\square \)
Remark 6
Given \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\), if \(\alpha _1, \ldots , \alpha _M\) and \(\beta _1, \ldots , \beta _N\) are distinct roots of \(p_a(t)\) and \(p_b(t)\), respectively, then, by Theorem 5, the roots of the characteristic polynomial of \({\textbf{a}} \star {\textbf{b}}\) are \(\alpha _i + \beta _j\), for any \(i = 1, \ldots , M\) and \(j = 1, \ldots , N\). Moreover, we would like to highlight that the proof of Theorem 5 can be adapted to the case of multiple roots as well.
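As a numerical illustration of Theorem 5 (a Python sketch; function names are ours): take \({\textbf{a}}\) the Fibonacci sequence, with \(p_a(t) = t^2 - t - 1\), and \(b_n = 2^n\), with \(p_b(t) = t - 2\). Then \(\text {res}(p_a(x), p_b(t-x)) = (t - \varphi - 2)(t - \psi - 2) = t^2 - 5t + 5\), so the Hurwitz product should satisfy \(c_n = 5c_{n-1} - 5c_{n-2}\):

```python
from math import comb

# Check that the Hurwitz product of Fibonacci and (2^n) recurs with
# characteristic polynomial t^2 - 5t + 5, as predicted by Theorem 5.

def hurwitz(a, b):
    return [sum(comb(n, i) * a[i] * b[n - i] for i in range(n + 1))
            for n in range(len(a))]

fib = [0, 1]
while len(fib) < 12:
    fib.append(fib[-1] + fib[-2])
pow2 = [2 ** n for n in range(12)]

c = hurwitz(fib, pow2)
assert all(c[n] == 5 * c[n - 1] - 5 * c[n - 2] for n in range(2, 12))
```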
In the following proposition, we show how to write the Newton product in terms of the Hurwitz and Hadamard products.
Proposition 7
Given \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\), then \({\textbf{a}} \boxtimes {\textbf{b}} = [({\textbf{a}} \star {\textbf{1}}) \odot ({\textbf{b}} \star {\textbf{1}})] \star {\textbf{e}}\), where \({\textbf{1}}:= (1, 1, 1, \ldots )\) and \({\textbf{e}}:= ((-1)^n)_{n \ge 0}\).
Proof
The nth terms of \({\textbf{a}} \star {\textbf{1}}\) and \({\textbf{b}} \star {\textbf{1}}\) are by definition \(\sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) a_i\) and \(\sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) b_i\), respectively. Thus, the nth term of \(({\textbf{a}} \star {\textbf{1}}) \odot ({\textbf{b}} \star {\textbf{1}})\) is \(\sum _{s=0}^n \left( {\begin{array}{c}n\\ s\end{array}}\right) a_s \sum _{t=0}^n \left( {\begin{array}{c}n\\ t\end{array}}\right) b_t\), and consequently
is the nth term of \([({\textbf{a}} \star {\textbf{1}}) \odot ({\textbf{b}} \star {\textbf{1}})] \star {\textbf{e}}\).
Considering the definition of the Newton product, we need to prove the equality
Let \(c_i= \sum _{s=0}^i \left( {\begin{array}{c}i\\ s\end{array}}\right) a_s \sum _{t=0}^i \left( {\begin{array}{c}i\\ t\end{array}}\right) b_t\) and \(d_n= \sum _{i=0}^n \sum _{j=0}^i \left( {\begin{array}{c}n\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ j\end{array}}\right) a_ib_{n-j}\), then (7) is equivalent to
Exploiting Newton’s inversion formula (see, e.g., [10, Equation 5.48]), i.e.,
for some arithmetic functions f and g, Eq. (8) becomes
that is
So, showing that (10) holds is equivalent to proving (7). Now, we can write the left-hand side of (10) as
From (11) we have \(0 \le j \le s\le i \le n\), and we can rewrite (11) in the equivalent form
where, setting \(t = i-j\), so that \(s \le t+j \le n\), we obtain
Observing that
the term (12) becomes
Finally, setting \(m = s-j\), we have \(0 \le m \le s\) and we get
where the last equality is due to Vandermonde’s identity \(\sum _{m=0}^{t} \left( {\begin{array}{c}n-s\\ t-m\end{array}}\right) \left( {\begin{array}{c}s\\ m\end{array}}\right) = \left( {\begin{array}{c}n\\ t\end{array}}\right) \). \(\square \)
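Proposition 7 can also be checked numerically on finite prefixes (a Python sketch; function names are ours, and arbitrary integer sequences stand in for elements of \({\mathcal {S}}(R)\)):

```python
from math import comb

# Sanity check of Proposition 7: a ⊠ b = [(a ⋆ 1) ⊙ (b ⋆ 1)] ⋆ e,
# with 1 = (1, 1, 1, ...) and e = ((-1)^n). Each term of every product
# depends only on earlier terms, so finite prefixes suffice.

def hurwitz(a, b):
    return [sum(comb(n, i) * a[i] * b[n - i] for i in range(n + 1))
            for n in range(len(a))]

def hadamard(a, b):
    return [x * y for x, y in zip(a, b)]

def newton(a, b):
    return [sum(comb(n, i) * comb(i, j) * a[i] * b[n - j]
                for i in range(n + 1) for j in range(i + 1))
            for n in range(len(a))]

N = 6
ones = [1] * N
e = [(-1) ** n for n in range(N)]
a = [3, 1, 4, 1, 5, 9]
b = [2, 7, 1, 8, 2, 8]

lhs = newton(a, b)
rhs = hurwitz(hadamard(hurwitz(a, ones), hurwitz(b, ones)), e)
assert lhs == rhs
```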
Remark 8
The previous proposition can be proved also exploiting the umbral calculus (see [20] for the basic notions). Given \({\textbf{a}}, {\textbf{b}} \in {\mathcal {S}}(R)\), let us consider two linear functionals U and V defined by \(U(W^n) = a_n\) and \(V(Z^n) = b_n\). The nth term of \(({\textbf{a}} \boxtimes {\textbf{b}}) \star {\textbf{1}}\) is \(\sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) \sum _{k=0}^i\sum _{j=0}^k \left( {\begin{array}{c}i\\ k\end{array}}\right) \left( {\begin{array}{c}k\\ j\end{array}}\right) a_kb_{i-j}\) and it can be obtained using the functionals U and V in the following way:
Now, the last quantity can be rewritten as
which is the nth term of the sequence \(({\textbf{a}} \star {\textbf{1}}) \odot ({\textbf{b}} \star {\textbf{1}})\).
Theorem 9
Given \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\), we have that \({\textbf{c}} = {\textbf{a}} \boxtimes {\textbf{b}} \in {\mathcal {W}}(R)\) and the characteristic polynomial of \({\textbf{c}}\) is \(\prod _{i=1}^M \prod _{j=1}^N(t-(\alpha _i+\beta _j+\alpha _i\beta _j))\), where \(M = \deg (p_a(t))\), \(N = \deg (p_b(t))\), \(\alpha _i\)’s are the roots of \(p_a(t)\) and \(\beta _j\)’s the roots of \(p_b(t)\). Moreover, \(({\mathcal {W}}(R), \oplus , \boxtimes )\) is an R-algebra.
Proof
Firstly, we show that \(({\mathcal {S}}(R), \oplus , \boxtimes )\) is an R-algebra. This is an immediate consequence of Proposition 7. Indeed, since \({\textbf{a}} \boxtimes {\textbf{b}} = [({\textbf{a}} \star {\textbf{1}}) \odot ({\textbf{b}} \star {\textbf{1}})] \star {\textbf{e}}\), it is straightforward to see that \(\boxtimes \) satisfies all the properties characterizing \(({\mathcal {S}}(R), \oplus , \boxtimes )\) as an R-algebra. Moreover, since \((1, 0, 0, \ldots )\) is the identity element for the Hurwitz product and \({\textbf{e}}\) is the inverse of \({\textbf{1}}\) with respect to \(\star \), we have that \((1, 0, 0, \ldots )\) is the identity also for the Newton product.
Given \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\), we have \({\textbf{a}} \boxtimes {\textbf{b}} \in {\mathcal {W}}(R)\) by Proposition 7, thus also \(({\mathcal {W}}(R), \oplus , \boxtimes )\) is an R-algebra.
By Theorem 5 and Remark 6, we can observe that, given \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\), then \({\textbf{a}} \star {\textbf{1}}\) and \({\textbf{b}} \star {\textbf{1}}\) are linear recurrent sequences whose characteristic polynomials have roots \(\alpha _i + 1\) and \(\beta _j +1\), for \(i = 1, \ldots , M\) and \(j = 1, \ldots , N\), respectively. Moreover, since \({\textbf{e}}\) is a linear recurrent sequence whose characteristic polynomial is \(t + 1\), then \([({\textbf{a}} \star {\textbf{1}}) \odot ({\textbf{b}} \star {\textbf{1}})] \star {\textbf{e}}\) has characteristic polynomial whose roots are \((\alpha _i+1)(\beta _j + 1) - 1 = \alpha _i + \beta _j + \alpha _i\beta _j\), for \(i = 1, \ldots , M\) and \(j = 1, \ldots , N\). \(\square \)
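A quick numerical illustration of Theorem 9 (a sketch; function names are ours): for \(a_n = 2^n\) (root 2) and \(b_n = 3^n\) (root 3), the only root of the characteristic polynomial of \({\textbf{a}} \boxtimes {\textbf{b}}\) should be \(2 + 3 + 2 \cdot 3 = 11\), and indeed the Newton product is termwise \(11^n\):

```python
from math import comb

# The Newton product of the geometric sequences (2^n) and (3^n) should
# recur with characteristic polynomial t - 11, by Theorem 9; in fact it
# equals (11^n) termwise.

def newton(a, b):
    return [sum(comb(n, i) * comb(i, j) * a[i] * b[n - j]
                for i in range(n + 1) for j in range(i + 1))
            for n in range(len(a))]

N = 8
a = [2 ** n for n in range(N)]
b = [3 ** n for n in range(N)]
assert newton(a, b) == [11 ** n for n in range(N)]
```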
Proposition 10
Let \({\textbf{a}} \in {\mathcal {S}}(R)\) be invertible with respect to the Newton product and let \({\textbf{b}}\) denote its inverse. Then we have
for any \(n \ge 0\).
Proof
Recalling that the identity element for the Newton product is \((1, 0, 0, \ldots )\), we have that \(a_0b_0\) must be 1, i.e., \(b_0 = a_0^{-1}\). When \(n \ge 1\), we have that
i.e.,
Let us define
where \(g(n) = 0\), when \(n \ge 1\) and \(g(0)=1\). By Newton’s inversion formula (9), we obtain
Since \(g(i) = 0\) for all \(i \ge 1\), we have \(f(n) = 1\) for all \(n\ge 0\), thus
or, equivalently,
Let \(d_n= \frac{1}{\sum _{s=0}^n \left( {\begin{array}{c}n\\ s\end{array}}\right) a_s}\). Applying Newton’s inversion formula, we get, for all \(n\ge 0\),
therefore
\(\square \)
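Over \({\mathbb {Q}}\), the Newton inverse can also be computed by solving the triangular system \({\textbf{a}} \boxtimes {\textbf{b}} = (1, 0, 0, \ldots )\) directly: in the nth equation, the unknown \(b_n\) appears (via the \(j = 0\) terms) with coefficient \(\sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) a_i\). A sketch (function names are ours, and that coefficient is assumed nonzero):

```python
from fractions import Fraction
from math import comb

# Compute the Newton inverse over Q by back-substitution in the triangular
# system a ⊠ b = (1, 0, 0, ...).

def newton_term(a, b, n):
    return sum(comb(n, i) * comb(i, j) * a[i] * b[n - j]
               for i in range(n + 1) for j in range(i + 1))

def newton_inverse(a, N):
    a = [Fraction(x) for x in a[:N]]
    b = [Fraction(1) / a[0]]
    for n in range(1, N):
        coeff = sum(comb(n, i) * a[i] for i in range(n + 1))  # multiplies b_n
        rest = sum(comb(n, i) * comb(i, j) * a[i] * b[n - j]  # j >= 1 terms
                   for i in range(n + 1) for j in range(1, i + 1))
        b.append(-rest / coeff)
    return b

a = [1, 2, 3, 4, 5, 6]
b = newton_inverse(a, 6)
assert newton_term(a, b, 0) == 1
assert all(newton_term(a, b, n) == 0 for n in range(1, 6))
```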
Remark 11
We point out that \({\textbf{a}}\) is invertible with respect to the Newton product if and only if all the elements of \({\textbf{a}}\) are invertible elements of R, just as for the Hadamard product.
Let us notice that every \({\textbf{a}} \in {\mathcal {W}}(R)\) can be associated to its monic characteristic polynomial \(p_a(t)\) with coefficients in R, and this polynomial to its companion matrix A. Thus, the results we found for the R-algebras \(({\mathcal {W}}(R), \oplus , \odot )\), \(({\mathcal {W}}(R), \oplus , *)\), \(({\mathcal {W}}(R), \oplus , \star ),\) and \(({\mathcal {W}}(R), \oplus , \boxtimes )\) enable us to equip the set \(\mathcal {P}ol(R)\) of monic polynomials with coefficients in R with some new algebraic structures. Moreover, we can also observe what happens to the roots and to the companion matrices of the characteristic polynomials.
Let us consider \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\) with characteristic polynomials of degree M and N, whose roots are \(\alpha _1, \ldots , \alpha _M\) and \(\beta _1, \ldots , \beta _N\), respectively. The sequences \({\textbf{a}} \oplus {\textbf{b}}\) and \({\textbf{a}} * {\textbf{b}}\) both recur with characteristic polynomial \(p_a(t) \cdot p_b(t)\).
Regarding the Hadamard product, we have already observed that the characteristic polynomial of \({\textbf{c}} = {\textbf{a}} \odot {\textbf{b}}\) is \(p_c(t) = p_a(t) \otimes p_b(t)\), whose roots are \(\alpha _i\beta _j\), for \(i = 1, \ldots , M\) and \(j = 1, \ldots , N\). Thus, starting from the R-algebra \(({\mathcal {W}}(R), \oplus , \odot )\), we can construct the semiring \((\mathcal {P}ol(R), \cdot , \otimes )\) whose identity is the polynomial \(t-1\). Denoting by A, B, and C the companion matrices of \(p_a(t)\), \(p_b(t)\), and \(p_c(t)\), respectively, we have that \(C = A \otimes B\), where \(\otimes \) is the Kronecker product between matrices. Thus C is an \(MN \times MN\) matrix whose eigenvalues are the products of the eigenvalues of A and B.
Similarly, starting from the Hurwitz product, we can construct a new operation in \(\mathcal {P}ol(R)\). Given \({\textbf{c}} = {\textbf{a}} \star {\textbf{b}}\), we proved that \(p_c(t)\) has roots \(\alpha _i + \beta _j\), for \(i = 1, \ldots , M\) and \(j= 1, \ldots , N\). The matrix \(A \otimes I_N + I_M \otimes B\) is an \(MN \times MN\) matrix whose eigenvalues are the sums of the eigenvalues of A and B. Thus, we can define \(p_c(t) = p_a(t) \star p_b(t)\) as the characteristic polynomial of the matrix \(A \otimes I_N + I_M \otimes B\) and we get the semiring \((\mathcal {P}ol(R), \cdot , \star )\).
Finally, given \({\textbf{c}} = {\textbf{a}} \boxtimes {\textbf{b}}\), we know that \(p_c(t)\) has roots \(\alpha _i + \beta _j + \alpha _i\beta _j\), for \(i = 1, \ldots , M\) and \(j = 1, \ldots , N\). In this case, we can define \(p_c(t) = p_a(t) \boxtimes p_b(t)\) as the characteristic polynomial of the matrix \(A \otimes I_N + I_M \otimes B + A \otimes B\), which is an \(MN \times MN\) matrix whose eigenvalues are exactly \(\alpha _i + \beta _j + \alpha _i\beta _j\), for \(i = 1, \ldots , M\) and \(j = 1, \ldots , N\). Thus, we have that \((\mathcal {P}ol(R), \cdot , \boxtimes )\) is another semiring of monic polynomials.
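These three matrix constructions can be checked numerically on the eigenvalues (a floating-point sketch, so comparisons are up to rounding; the matrices and variable names are ours):

```python
import numpy as np

# On p_a(t) = t^2 - t - 1 and p_b(t) = t^2 - 5t + 6: kron(A, B) has
# eigenvalues a_i * b_j, kron(A, I_N) + kron(I_M, B) has a_i + b_j, and
# adding kron(A, B) to the latter yields a_i + b_j + a_i * b_j.

A = np.array([[0.0, 1.0], [1.0, 1.0]])   # companion matrix of t^2 - t - 1
B = np.array([[0.0, -6.0], [1.0, 5.0]])  # companion matrix of t^2 - 5t + 6
IM, IN = np.eye(2), np.eye(2)

alphas = np.linalg.eigvals(A)            # the golden ratio and its conjugate
betas = np.linalg.eigvals(B)             # 2 and 3

C_had = np.kron(A, B)                    # Hadamard-product construction
C_hur = np.kron(A, IN) + np.kron(IM, B)  # Hurwitz-product construction
C_new = C_hur + np.kron(A, B)            # Newton-product construction

for C, rule in [(C_had, lambda x, y: x * y),
                (C_hur, lambda x, y: x + y),
                (C_new, lambda x, y: x + y + x * y)]:
    expected = np.sort([rule(x, y) for x in alphas for y in betas])
    assert np.allclose(np.sort(np.linalg.eigvals(C).real), expected)
```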
4 On isomorphisms between R-algebras
In [7], the authors proved that \(({\mathcal {W}}(R), \oplus , \odot )\) and \(({\mathcal {W}}(R), \oplus , *)\) are never isomorphic as R-algebras. In the following we prove similar results for the other algebraic structures that we have studied in the previous section.
Theorem 12
The R-algebras \(({\mathcal {W}}(R), \oplus , \odot )\) and \(({\mathcal {W}}(R), \oplus , \star )\) are not isomorphic.
Proof
Let us suppose that \(\psi :({\mathcal {W}}(R), \oplus , \odot ) \xrightarrow {}({\mathcal {W}}(R), \oplus , \star )\) is an injective morphism and consider \({\textbf{a}}:=(1, 0, 0, 0, \dots )\) and \({\textbf{b}}:=(0,1, 0, 0, \dots )\).
Then \(\psi ({\textbf{a}} \odot {\textbf{b}} ) = \psi ({\textbf{a}} ) \star \psi ({\textbf{b}} )= (0, 0, \ldots )\) and, by injectivity, \(\psi ({\textbf{a}} ), \psi ({\textbf{b}} ) \ne (0,0, \ldots )\). Let \(A_e^{\psi }(t)\) and \(B_e^{\psi }(t)\) be the exponential generating functions of \(\psi ({\textbf{a}})\) and \(\psi ({\textbf{b}})\), respectively. From \(\psi ({\textbf{a}} ) \star \psi ({\textbf{b}} )=(0,0, \ldots )\), it follows that \(A_e^{\psi }(t)B_e^{\psi }(t)=0\). Thanks to Lemma 3, we have
with \(\deg (h(t)) < \deg (p^*_{\psi ({\textbf{a}})}(t)) \).
Now, let us consider the map \(\gamma : \sum _{n=0}^\infty a_nt^n \xrightarrow {} \sum _{n=0}^\infty \frac{a_n}{n!}t^n\), which is an isomorphism between the ring of ordinary power series and the Hurwitz series ring, see, e.g., [3]. Applying \(\gamma \) to (14), we obtain
where \(p^*_{\psi ({\textbf{a}})}(t)\) can be viewed as a formal series with an infinite number of zero coefficients. Multiplying the equation \(A_e^{\psi }(t)B_e^{\psi }(t)=0\) by \(\gamma (p^*_{\psi ({\textbf{a}})}(t))\), it becomes
which implies \(h(t)B_o^{\psi }(t)=0\). From this, it follows that there is a nonzero element \(w \in R\) such that \(wB_o^{\psi }(t)=0\) ([9, Eq. (2.9)]) and \(w {\textbf{b}} =0\), which is absurd. \(\square \)
Theorem 13
The R-algebras \(({\mathcal {W}}(R), \oplus , \odot )\) and \(({\mathcal {W}}(R), \oplus , \boxtimes )\) are isomorphic.
Proof
The explicit isomorphism is \(\psi : ({\mathcal {W}}(R), \oplus , \odot ) \rightarrow ({\mathcal {W}}(R), \oplus , \boxtimes )\) defined by \(\psi ({\textbf{a}}):= {\textbf{a}} \star {\textbf{e}}\), where \({\textbf{e}} = ((-1)^n)_{n \ge 0}\). Indeed, by Theorem 5, the map \(\psi \) is well-defined (in the sense that a linear recurrent sequence is mapped into a linear recurrent sequence). Moreover, since \({\textbf{1}}\) is the inverse of \({\textbf{e}}\) with respect to the Hurwitz product, it is straightforward to check injectivity and surjectivity. Finally, by Proposition 7, we have
since \({\textbf{e}} \star {\textbf{1}} = (1, 0, 0, \ldots )\). \(\square \)
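The isomorphism of Theorem 13 can also be checked numerically on finite prefixes (a sketch; function names are ours):

```python
from math import comb

# With psi(a) = a ⋆ e and e = ((-1)^n), check psi(a ⊙ b) = psi(a) ⊠ psi(b).

def hurwitz(a, b):
    return [sum(comb(n, i) * a[i] * b[n - i] for i in range(n + 1))
            for n in range(len(a))]

def newton(a, b):
    return [sum(comb(n, i) * comb(i, j) * a[i] * b[n - j]
                for i in range(n + 1) for j in range(i + 1))
            for n in range(len(a))]

N = 6
e = [(-1) ** n for n in range(N)]
a = [2, 3, 5, 7, 11, 13]
b = [1, 4, 9, 16, 25, 36]

lhs = hurwitz([x * y for x, y in zip(a, b)], e)  # psi(a ⊙ b)
rhs = newton(hurwitz(a, e), hurwitz(b, e))       # psi(a) ⊠ psi(b)
assert lhs == rhs
```

On geometric sequences the map is explicit: \(\psi \) sends \((r^n)_{n \ge 0}\) to \(((r-1)^n)_{n \ge 0}\), since the e.g.f. of \({\textbf{a}} \star {\textbf{e}}\) is \(e^{-t}e^{rt}\).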
Theorem 14
Let R be an integral domain. If \(\psi : ({\mathcal {W}}(R), \oplus , *) \rightarrow ({\mathcal {W}}(R),\oplus , \star )\) is a morphism, then \(\psi \) is not injective.
Proof
Let us suppose that \(\psi :({\mathcal {W}}(R), \oplus , *) \rightarrow ({\mathcal {W}}(R), \oplus , \star )\) is an injective morphism. Let us denote by \((\psi ({\textbf{a}}))_i\) the ith term of the sequence \(\psi ({\textbf{a}})\). The nth term of \(\psi ({\textbf{a}}) \star \psi ({\textbf{b}})\) is
Then, considering \(\psi ({\textbf{a}} * {\textbf{b}}) = \psi ({\textbf{a}}) \star \psi ({\textbf{b}})\), for any \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\), we obtain
where we define the formal sequences \({\textbf{f}}:=(1, 2!, 3!, \ldots )\) and \({\textbf{f}}^{-1}:=(1, \frac{1}{2!}, \frac{1}{3!}, \ldots )\).
We define a map \(\tau :({\mathcal {W}}(R), \oplus , *) \xrightarrow {}({\mathcal {W}}(R), \oplus , *)\) such that \(\tau ({\textbf{a}}) = \psi ({\textbf{a}}) \odot {\textbf{f}}^{-1}\). By definition, we have that \(\tau ( {\textbf{a}} \oplus {\textbf{b}} )= \tau ( {\textbf{a}}) \oplus \tau ( {\textbf{b}} )\) and \(\tau ( {\textbf{a}} * {\textbf{b}} )= \tau ( {\textbf{a}})*\tau ( {\textbf{b}} )\), for any \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\), and \(\ker (\tau )= \{{\textbf{0}}\}\), where \({\textbf{0}}= (0, 0, 0, \ldots )\).
Let \(\tilde{\tau }\) be a map that acts over the ordinary generating functions such that if \(A(t)=\sum _{n=0}^{+\infty } a_nt^n\) then \(\tilde{\tau }(A(t))=\sum _{n=0}^{+\infty }(\tau (a))_nt^n\). From the properties of \(\tau \), it follows that
and
When we consider \(A(t)=1\) and \(B(t)=1\), we clearly have
and this implies \(\tilde{\tau }(1)=1\). Indeed, by the injectivity of \(\tau \), we cannot have \(\tilde{\tau }(1)=0\), because otherwise \(\tau (1, 0, \ldots )\) would be \({\textbf{0}}\). In the case that \(A(t)=t\) and \(B(t)=-t\), then
which implies \(\tilde{\tau }(-t)=-\tilde{\tau }(t).\) Moreover, when \(A(t)=t\) and \(B(t)=t^{-1}\), then
so \(\tilde{\tau }(t^{-1})=( \tilde{\tau }(t))^{-1}\).
Lastly, if \(A(t)=B(t)=t\), then
so \(\tilde{\tau }(t^2)=( \tilde{\tau }(t))^2\) and \(\tilde{\tau }(nt)=n\tilde{\tau }(t)\), \(\forall n \in {\mathbb {N}}\).
Now, let us consider \(A(t)=t\) and \(\tilde{\tau }(t)=\sum _{n=0}^{+\infty } s_nt^n\), then from \(\tilde{\tau }(t^2)=( \tilde{\tau }(t))^2\) it follows that
that is, if we consider the square of the ordinary generating function \({\tilde{\tau }}(t)\), seen as the product of \({\tilde{\tau }}(t)\) with itself, it must be equal to \({\tilde{\tau }}(t^2)\); hence the coefficients of the even powers are zero and the coefficients of the odd powers are the \(s_k\)’s. From (16), we obtain \(s_0=s_0^2\), so \(s_0=0\) or \(s_0=1\). In the case that \(s_0=1\), then \(s_i=0\), \(\forall i \ge 1\), and \(\tilde{\tau }(t)=1\). But this cannot happen because, if so, we should have
which implies \(\tilde{\tau }(t-1)=0\), i.e., \(t=1\). Whereas, if \(s_0=0\), then \(s_1=s_1^2\), so \(s_1=0\) or \(s_1=1\). In the case that \(s_1=1\), then \(s_i=0\), \(\forall i \ge 2\), and \(\tilde{\tau }(t)=t\).
In the case that \(s_1=0\), then \(s_2=s_2^2\) so \(s_2=0\) or \(s_2=1\). Repeating the same reasoning and exploiting (16), we get that \(\tilde{\tau }(t)=t^k\) must hold for a fixed \(k \ge 1\).
Let us consider \(A(t)=\frac{1}{1-t}\), then
By definition,
and, since for some \(k\ge 1\) we have \(\tilde{\tau }(t)=t^k\), we also have
Equating the coefficients in the two equivalent power series (17) and (18), we have
Thus, by (15) and the definition of \(\tau \) and \(\tilde{\tau }\), we have
which is not a linear recurrent sequence. \(\square \)
References
Abrate, M., Barbero, S., Cerruti, U., Murru, N.: Colored compositions, Invert operator and elegant compositions with the “black tie”. Discrete Math. 335, 1–7 (2014)
Barbero, S., Cerruti, U., Murru, N.: Some combinatorial properties of the Hurwitz series ring. Ric. Mat. 67, 491–507 (2018)
Barbero, S., Cerruti, U., Murru, N.: On the operations of sequences in rings and binomial type sequences. Ric. Mat. 67, 911–927 (2018)
Benhissi, A.: Ideal structure of Hurwitz series rings. Beitr. Algebra Geom. 48, 251–256 (2007)
Benhissi, A., Koja, F.: Basic properties of Hurwitz series rings. Ric. Mat. 61, 255–273 (2012)
Cakcak, E.: A remark on the minimal polynomial of the product of linear recurring sequences. Finite Fields Their Appl. 4, 87–97 (1998)
Cerruti, U., Vaccarino, F.: R-Algebras of linear recurrent sequences. J. Algebra 175(1), 332–338 (1995)
Everest, G., van der Poorten, A., Shparlinski, I., Ward, T.: Recurrence Sequences, vol. 104. Mathematical Surveys and Monographs, Providence (2003)
Gilmer, R.: On polynomial and power series rings over a commutative ring. Rocky Mt. J. Math. 5, 157–175 (1975)
Graham, R., Knuth, D., Patashnik, O.: Concrete Mathematics: A Foundation for Computer Science. Addison-Wesley, Reading (1994)
Gottfert, R., Niederreiter, H.: On the minimal polynomial of the product of linear recurring sequences. Finite Fields Their Appl. 1, 145–163 (1995)
Haukkanen, P.: On a convolution of linear recurring sequences over finite fields. J. Algebra 149, 179–182 (1992)
Kazenas, S.D.: The Hadamard product and recursively defined sequences. Open J. Discrete Math. 3, 20–24 (2020)
Keigher, W.F.: On the ring of Hurwitz series. Commun. Algebra 25, 1845–1859 (1997)
Keigher, W.F., Pritchard, F.L.: Hurwitz series as formal functions. J. Pure Appl. Algebra 146, 291–304 (2000)
Kurakin, V.L.: Convolution of linear recurrent sequences. Russ. Math. Surv. 48, 235–236 (1993)
Kurakin, V.L.: The structure of Hopf algebras of linear recurrent sequences. Russ. Math. Surv. 48, 177–178 (1993)
Larson, R.G., Taft, E.J.: The algebraic structure of linearly recursive sequences under Hadamard product. Israel J. Math. 72, 118–132 (1990)
Liu, Z.: Hermite and PS-rings of Hurwitz series. Commun. Algebra 28, 299–305 (2000)
Roman, S.M., Rota, G.C.: The umbral calculus. Adv. Math. 27, 95–188 (1978)
Stoll, M.: Bounds for the length of recurrence relations for convolution of P-recursive sequences. Eur. J. Comb. 18, 707–712 (1997)
Szakacs, T.: Convolution of second order linear recursive sequences I. Ann. Math. Inform. 46, 205–216 (2016)
Szakacs, T.: Convolution of second order linear recursive sequences II. Commun. Math. 25, 137–148 (2017)
Taft, E.J.: Hadamard invertibility of linearly recursive sequences in several variables. Discrete Math. 139, 393–397 (1995)
Zierler, N., Mills, W.H.: Products of linear recurring sequences. J. Algebra 27, 147–157 (1973)
Acknowledgements
We would like to thank the anonymous referee for the valuable suggestions that improved the presentation of the paper. The third author acknowledges support from Ripple’s University Blockchain Research Initiative.
Funding
Open access funding provided by Università degli Studi di Trento within the CRUI-CARE Agreement.
Ethics declarations
Conflict of interest
The authors assert that there are no conflicts of interest.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Alecci, G., Barbero, S. & Murru, N. Some notes on the algebraic structure of linear recurrent sequences. Ricerche mat (2023). https://doi.org/10.1007/s11587-023-00826-5