1 Introduction

Given a commutative ring with identity \((R, +, \cdot )\), we will denote by \({\mathcal {S}}(R)\) the set of all sequences \({\textbf{a}} = (a_n)_{n \ge 0}\) such that \(a_n \in R\), for all \(n \in {\mathbb {N}}\). A sequence \({\textbf{a}} \in {\mathcal {S}}(R)\) is said to be a linear recurrent sequence of order N if its elements for all \(n\ge N\) satisfy

$$\begin{aligned} a_n = \sum _{i=1}^{N} h_i a_{n-i} \end{aligned}$$

for some coefficients \(h_i\in R\), \(i=1,\ldots ,N\), where \(h_N\) is not a zero divisor in R, and we define

$$\begin{aligned} p_a(t) = t^N - \sum _{i=1}^{N} h_i t^{N-i} \end{aligned}$$

as the characteristic polynomial associated to this recurrence relation. The elements \(a_0, \ldots , a_{N-1}\) are called initial conditions. We will denote by \({\mathcal {W}}(R) \subset {\mathcal {S}}(R)\) the set of all linear recurrent sequences. Moreover, given \({\textbf{a}} \in {\mathcal {S}}(R)\), we will write

$$\begin{aligned} A_o(t)=\sum _{n=0}^\infty a_nt^n, \quad A_e(t)=\sum _{n=0}^\infty \frac{a_n}{n!}t^n, \end{aligned}$$

for the ordinary generating function (o.g.f.) and the exponential generating function (e.g.f.), respectively.
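As an illustration of the recurrence defined above, the following minimal sketch generates a linear recurrent sequence from its coefficients \(h_1, \ldots, h_N\) and initial conditions (the function name and the Fibonacci example are ours, not from the text):

```python
def linear_recurrence(h, init, length):
    """Extend the initial conditions a_0, ..., a_{N-1} with
    a_n = sum_{i=1}^{N} h[i-1] * a[n-i], where h = [h_1, ..., h_N]."""
    a = list(init)
    N = len(h)
    for n in range(N, length):
        a.append(sum(h[i - 1] * a[n - i] for i in range(1, N + 1)))
    return a

# Fibonacci: h_1 = h_2 = 1, characteristic polynomial p_a(t) = t^2 - t - 1
fib = linear_recurrence([1, 1], [0, 1], 10)
print(fib)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```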

It is well known that \({\mathcal {S}}(R)\) and \({\mathcal {W}}(R)\) can be equipped with several operations giving them interesting algebraic structures. When R is a field, it is immediate to see that the element-wise sum or product (also called the Hadamard product) of two linear recurrent sequences is still a linear recurrent sequence, see, e.g., [8]. In [7], the authors proved this in the more general case where R is a ring, showing that \({\mathcal {W}}(R)\) is an R-algebra and also giving explicitly the characteristic polynomials of the element-wise sum and Hadamard product of two linear recurrent sequences. Larson and Taft [18, 24] studied this algebraic structure, characterizing the invertible elements and the zero divisors. Further studies on the behaviour of linear recurrent sequences under the Hadamard product can be found, e.g., in [6, 11, 13, 25]. Similarly, \({\mathcal {W}}(R)\) equipped with the element-wise sum and the convolution product (or Cauchy product) has been studied in depth. For instance, \({\mathcal {W}}(R)\) is still an R-algebra and the characteristic polynomial of the convolution product of two linear recurrent sequences can be found explicitly [7]. The convolution product of linear recurrent sequences is very important in many applications and has also been studied from a combinatorial point of view [1] and over finite fields [12]. For other results, see, e.g., [21,22,23]. Another important operation between sequences is the binomial convolution (or Hurwitz product). The Hurwitz series ring, introduced in a systematic way by Keigher [14], has been extensively studied by several authors [2,3,4,5, 15, 19]. However, there are few results focusing on linear recurrent sequences [16, 17].

In this paper, we extend the study of the algebraic structure of linear recurrent sequences, considering in particular the Hurwitz product and the Newton product (the generalization of the Hurwitz product involving multinomial coefficients). We prove that \({\mathcal {W}}(R)\) is an R-algebra when equipped with the element-wise sum and the Hurwitz product, as well as when equipped with the element-wise sum and the Newton product. We also give explicitly the characteristic polynomials of the Hurwitz and Newton products of two linear recurrent sequences. For the Newton product we also find the inverses explicitly. Moreover, we study the isomorphisms between these algebraic structures, finding that \({\mathcal {W}}(R)\) with the element-wise sum and the Hurwitz product is not isomorphic to the other algebraic structures, whereas with the Newton product there is an isomorphism with the R-algebra obtained using the Hadamard product. Finally, we provide an overview of the behaviour of linear recurrent sequences under all the operations considered (element-wise sum, Hadamard product, Cauchy product, Hurwitz product, Newton product) with respect to their characteristic polynomials and companion matrices.

2 Preliminaries and notation

For any \({\textbf{a}}, {\textbf{b}} \in {\mathcal {S}}(R)\), we will deal with the following operations:

  • componentwise sum \(\oplus \), defined by

    $$\begin{aligned} {\textbf{a}} \oplus {\textbf{b}} = {\textbf{c}}, \quad c_n = a_n + b_n, \quad \forall n \ge 0; \end{aligned}$$
  • componentwise product or Hadamard product \(\odot \), defined by

    $$\begin{aligned} {\textbf{a}} \odot {\textbf{b}} = {\textbf{c}}, \quad c_n = a_n \cdot b_n, \quad \forall n \ge 0; \end{aligned}$$
  • convolution product \(*\), defined by

    $$\begin{aligned} {\textbf{a}} * {\textbf{b}} = {\textbf{c}}, \quad c_n = \sum _{i=0}^n a_i b_{n-i}, \quad \forall n \ge 0; \end{aligned}$$
  • binomial convolution product or Hurwitz product \(\star \), defined by

    $$\begin{aligned} {\textbf{a}} \star {\textbf{b}} = {\textbf{c}}, \quad c_n = \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) a_i b_{n-i}, \quad \forall n \ge 0; \end{aligned}$$
  • multinomial convolution product or Newton product \(\boxtimes \), defined by

    $$\begin{aligned} {\textbf{a}} \boxtimes {\textbf{b}} = {\textbf{c}}, \quad c_n = \sum _{i=0}^n \sum _{j=0}^i \left( {\begin{array}{c}n\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ j\end{array}}\right) a_i b_{n-j}, \quad \forall n \ge 0. \end{aligned}$$

Remark 1

The Newton product is also called multinomial convolution product, because it is the natural generalization of the binomial convolution product using the multinomial coefficient. Observe indeed that \(\left( {\begin{array}{c}n\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ j\end{array}}\right) = \left( {\begin{array}{c}n\\ n-i, i-j, j\end{array}}\right) \).
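The five operations can be transcribed directly on truncated sequences; this is a minimal sketch (function names are ours), together with a spot check of the multinomial identity of Remark 1:

```python
from math import comb, factorial

def oplus(a, b):
    return [x + y for x, y in zip(a, b)]

def hadamard(a, b):
    return [x * y for x, y in zip(a, b)]

def cauchy(a, b):
    return [sum(a[i] * b[n - i] for i in range(n + 1))
            for n in range(min(len(a), len(b)))]

def hurwitz(a, b):
    return [sum(comb(n, i) * a[i] * b[n - i] for i in range(n + 1))
            for n in range(min(len(a), len(b)))]

def newton(a, b):
    return [sum(comb(n, i) * comb(i, j) * a[i] * b[n - j]
                for i in range(n + 1) for j in range(i + 1))
            for n in range(min(len(a), len(b)))]

# Remark 1: binom(n,i) * binom(i,j) equals the multinomial n!/((n-i)!(i-j)!j!)
n, i, j = 7, 4, 2
assert comb(n, i) * comb(i, j) == \
    factorial(n) // (factorial(n - i) * factorial(i - j) * factorial(j))
```

For example, `hurwitz` applied to two copies of the all-ones sequence returns \((2^n)_{n\ge 0}\), and `newton` with \((1, 0, 0, \ldots)\) acts as the identity.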

In [7], the authors showed that \(({\mathcal {W}}(R), \oplus , \odot )\) and \(({\mathcal {W}}(R), \oplus , *)\) are R-algebras and they are never isomorphic. Moreover, given \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\) and \({\textbf{c}} = {\textbf{a}} \odot {\textbf{b}}\), \({\textbf{d}} = {\textbf{a}} * {\textbf{b}}\), they proved that

$$\begin{aligned} p_c(t) = p_a(t) \otimes p_b(t), \quad p_d(t) = p_a(t) \cdot p_b(t), \end{aligned}$$
(1)

where the operation \(\otimes \) between polynomials is defined as follows. Given two polynomials f(t) and g(t) with coefficients in R, denoting by F and G their companion matrices, respectively, \(f(t) \otimes g(t)\) is the characteristic polynomial of the Kronecker product of F and G. In the following, we will denote by \(\otimes \) also the Kronecker product between matrices. To the best of our knowledge, similar results involving the Hurwitz product and the Newton product are still missing.
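As a numerical sanity check of the Hadamard case (our illustration, not from the paper; it assumes NumPy is available and works in floating point), one can verify that the Hadamard square of the Fibonacci sequence recurs with the characteristic polynomial of \(F \otimes F\):

```python
import numpy as np

def companion(p):
    """Companion matrix of the monic polynomial with coefficient list p,
    highest degree first, e.g. [1, -1, -1] for t^2 - t - 1."""
    N = len(p) - 1
    C = np.zeros((N, N))
    C[0, :] = -np.array(p[1:], dtype=float)
    C[1:, :-1] = np.eye(N - 1)
    return C

F = companion([1, -1, -1])   # Fibonacci: p_a(t) = t^2 - t - 1
K = np.kron(F, F)            # Kronecker product F (x) F
pk = np.poly(K)              # char poly of K: t^4 - t^3 - 4t^2 - t + 1

fib = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
c = [x * x for x in fib]     # Hadamard product of fib with itself
# c should satisfy the order-4 recurrence encoded by pk
for n in range(4, len(c)):
    assert abs(c[n] + sum(pk[i] * c[n - i] for i in range(1, 5))) < 1e-6
```

The roots of \(t^4 - t^3 - 4t^2 - t + 1\) are the pairwise products of the roots of \(t^2 - t - 1\), as expected.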

Remark 2

Let us observe that the sequences \({\textbf{c}}\) and \({\textbf{d}}\), defined above, recur with characteristic polynomials \(p_c(t)\) and \(p_d(t)\) as given in (1), respectively, but these polynomials are not necessarily the minimal polynomials of recurrence. When R is a ring without zero divisors (see [8] for further details), the minimal polynomial of a linear recurrent sequence \({\textbf{a}}\) is the (unique) monic polynomial f(t) that divides any characteristic polynomial of any linear recurrence relation satisfied by \({\textbf{a}}\). In other words, it is the characteristic polynomial of the linear recurrence relation of least degree satisfied by \({\textbf{a}}\). In general, finding the minimal polynomials of recurrence of these sequences is a hard problem; for some results, see [6, 11, 18, 21].

Lemma 3

Given \({\textbf{a}} \in {\mathcal {S}}(R)\), we have that \({\textbf{a}} \in {\mathcal {W}}(R)\) and \(p_a(t)\) is its characteristic polynomial if and only if \(p^*_a(t) \cdot A_o(t)\) is a polynomial of degree less than \(\deg (p_a(t))\), where \(p^*_a(t)\) denotes the reciprocal or reflected polynomial of \(p_a(t)\).

Proof

See [7, Lemma 3.2]. \(\square \)
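Lemma 3 can be observed numerically on the Fibonacci sequence (our illustration, not part of the source), where \(p_a(t) = t^2 - t - 1\), \(p^*_a(t) = 1 - t - t^2\), and \(p^*_a(t)A_o(t) = t\), a polynomial of degree \(1 < 2\):

```python
# Truncated power-series check of Lemma 3 for the Fibonacci sequence.
L = 20
fib = [0, 1]
for n in range(2, L):
    fib.append(fib[n - 1] + fib[n - 2])

p_star = [1, -1, -1]  # coefficients of the reflected polynomial 1 - t - t^2
# truncated Cauchy product p*_a(t) * A_o(t)
prod = [sum(p_star[i] * fib[n - i] for i in range(min(n + 1, 3)))
        for n in range(L)]
print(prod[:6])  # [0, 1, 0, 0, 0, 0]  -> the polynomial t, of degree 1 < 2
```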

Definition 4

Given two monic polynomials f(t) of degree M and g(t) of degree N, their resultant is \(\text {res}(f(t), g(t)):= \prod _{i=1}^M \prod _{j=1}^{N}(\alpha _i - \beta _j)\), where the \(\alpha _i\)’s and \(\beta _j\)’s are, respectively, the roots of f(t) and g(t), counted with their multiplicities.
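Definition 4 can be transcribed directly from the roots; the following sketch (ours, assuming NumPy; exact resultants would instead use a symbolic package) computes the double product numerically:

```python
import numpy as np

def res_from_roots(f_coeffs, g_coeffs):
    """res(f, g) = product over all roots alpha of f and beta of g
    of (alpha - beta); both polynomials monic, coefficients highest
    degree first."""
    alphas = np.roots(f_coeffs)
    betas = np.roots(g_coeffs)
    r = 1.0 + 0.0j
    for a in alphas:
        for b in betas:
            r *= (a - b)
    return r

# res(t^2 - 1, t^2 - 4): roots {1, -1} and {2, -2},
# so (1-2)(1+2)(-1-2)(-1+2) = 9
r = res_from_roots([1, 0, -1], [1, 0, -4])
print(round(r.real))  # 9
```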

3 R-algebras of linear recurrent sequences

The main results of this section are provided in Theorems 5 and 9, where we prove that \(({\mathcal {W}}(R), \oplus , \star )\) and \(({\mathcal {W}}(R), \oplus , \boxtimes )\) are R-algebras, finding also the characteristic polynomial of the Hurwitz and Newton product between two linear recurrent sequences.

Theorem 5

Given \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\), we have that \({\textbf{r}} = {\textbf{a}} \star {\textbf{b}} \in {\mathcal {W}}(R)\) and the characteristic polynomial of \({\textbf{r}}\) is \(\text {res}(p_a(x),p_b(t-x))\) with \(p_b(t-x)\) regarded as a polynomial in x. Moreover, \(({\mathcal {W}}(R), \oplus , \star )\) is an R-algebra.

Proof

It is well known that \(({\mathcal {S}}(R), \oplus , \star )\) is an R-algebra (see, e.g., [14]); thus, it is sufficient to show that \({\mathcal {W}}(R)\) is closed under the Hurwitz product in order to prove that \(({\mathcal {W}}(R), \oplus , \star )\) is an R-algebra.

Let M and N be the degrees of \(p_a(t)\) and \(p_b(t)\), respectively. Without loss of generality, let us suppose \(p_a(t)\) and \(p_b(t)\) have distinct roots denoted by \(\alpha _1, \ldots , \alpha _M\) and \(\beta _1, \ldots , \beta _N\), respectively. We consider the ordinary generating function of the sequence \({\textbf{r}} = {\textbf{a}} \star {\textbf{b}}\),

$$\begin{aligned} R_o(t)= \sum _{n=0}^{+\infty } \left( \sum _{i=0}^{n}\left( {\begin{array}{c}n\\ i\end{array}}\right) a_ib_{n-i}\right) t^n&=\sum _{i=0}^{+\infty } \sum _{n=i}^{+\infty }\left( {\begin{array}{c}n\\ i\end{array}}\right) a_ib_{n-i} t^n \nonumber \\&=\sum _{i=0}^{+\infty }a_it^i\sum _{n=i}^{+\infty }\left( {\begin{array}{c}n\\ i\end{array}}\right) b_{n-i} t^{n-i} \nonumber \\&= \sum _{i=0}^{+\infty }a_it^i\sum _{m=0}^{+\infty }\left( {\begin{array}{c}m+i\\ i\end{array}}\right) b_{m} t^{m}, \end{aligned}$$
(2)

where \(\sum _{m=0}^{+\infty }\left( {\begin{array}{c}m+i\\ i\end{array}}\right) b_{m} t^{m}\) is the o.g.f. of the sequence obtained from the Hadamard product between \({\textbf{b}}\) and \(\left( \left( {\begin{array}{c}m+i\\ i\end{array}}\right) \right) _{m \ge 0}\), i.e.,

$$\begin{aligned} \sum _{m=0}^{+\infty }\left( {\begin{array}{c}m+i\\ i\end{array}}\right) b_{m} t^{m}=\left( \sum _{m=0}^{+\infty }b_{m} t^m\right) \odot \left( \sum _{m=0}^{+\infty }\left( {\begin{array}{c}m+i\\ i\end{array}}\right) t^{m}\right) = B_o(t) \odot \frac{1}{{(1-t)}^{i+1}}. \end{aligned}$$

Since \(B_o(t)\) is the ordinary generating function of \({\textbf{b}} \in {\mathcal {W}}(R)\), it is a rational function and we can write it as

$$\begin{aligned} B_o(t)=\frac{\gamma (t)}{p^*_b(t)}=\sum _{j=1}^{N}\frac{c_j}{(1-\beta _jt)}, \end{aligned}$$

for some coefficients \(c_j\) (in a suitable extension of R). Now, we have

$$\begin{aligned} \frac{1}{1-\beta _jt} \odot \frac{1}{(1-t)^{i+1}}= & {} \sum _{m=0}^ {+\infty }(\beta _{j} t)^m \odot \sum _{m=0}^{+\infty }\left( {\begin{array}{c}m+i\\ i\end{array}}\right) t^{m} \\= & {} \sum _{m=0}^{+\infty }\left( {\begin{array}{c}m+i\\ i\end{array}}\right) (\beta _{j} t)^{m} \\= & {} \frac{1}{(1-\beta _jt)^{i+1}}, \end{aligned}$$

and we get that

$$\begin{aligned} B_o(t) \odot \frac{1}{(1-t)^{i+1}}= \sum _{j=1} ^{N} c_j \frac{1}{1-\beta _jt} \odot \frac{1}{(1-t)^{i+1}}=\sum _{j=1} ^{N} \frac{c_j}{(1-\beta _jt)^{i+1}}. \end{aligned}$$

Thus, from (2) we obtain

$$\begin{aligned} R_o(t) = \sum _{i=0}^{+\infty }a_it^i\sum _{j=1} ^{N} \frac{c_j}{(1-\beta _jt)^{i+1}}&= \sum _{j=1} ^{N} \frac{c_j}{1-\beta _jt} \sum _{i=0}^{+\infty }a_i \frac{t^i}{(1-\beta _jt)^{i}} \nonumber \\&= \sum _{j=1} ^{N} \frac{c_j}{1-\beta _jt} A_o \left( \frac{t}{1-\beta _jt}\right) \nonumber \\&= \sum _{j=1} ^{N} \frac{c_j}{1-\beta _jt} \cdot \frac{\delta \left( \frac{t}{1-\beta _jt}\right) }{p^*_a \left( \frac{t}{1-\beta _jt}\right) }, \end{aligned}$$
(3)

where \(\delta (t)\) is a polynomial of degree less than M. Let \(p(t)=\text {res}(p_a(x), p_b(t-x))\), then \({p(t)= \prod _{h=1}^{M} \prod _{l=1}^{N} (t-\alpha _h -\beta _l} )\) and its reciprocal polynomial is \({p^*(t)= \prod _{h=1}^{M} \prod _{l=1}^{N} (1-(\alpha _h + \beta _l)t)}\). In particular, it is possible to rearrange the last formula in the following way

$$\begin{aligned} p^*(t)= \prod _{h=1}^{M} \prod _{l=1}^{N} (1-\beta _lt -\alpha _ht)&= \prod _{h=1}^{M} \prod _{l=1}^{N} (1-\beta _lt) \left( 1-\frac{\alpha _ht}{1-\beta _lt} \right) \nonumber \\&= \prod _{h=1}^{M} \prod _{l=1}^{N} (1-\beta _lt) \prod _{l=1}^{N} \left( 1-\frac{\alpha _ht}{1-\beta _lt} \right) \nonumber \\&= \prod _{h=1}^{M} p^*_b(t) \prod _{l=1}^{N} \left( 1-\frac{\alpha _ht}{1-\beta _lt} \right) \nonumber \\&= \left[ p^*_b(t) \right] ^M \prod _{l=1}^{N} \prod _{h=1}^{M} \left( 1-\frac{\alpha _ht}{1-\beta _lt} \right) \nonumber \\&= \left[ p^*_b(t) \right] ^M \prod _{l=1}^{N} p^*_a\left( \frac{t}{1-\beta _lt} \right) . \end{aligned}$$
(4)

Combining (3) and (4) we get

$$\begin{aligned} p^*(t) \cdot R_o(t)&= \left[ p^*_b(t) \right] ^M \prod _{l=1}^{N} p^*_a\left( \frac{t}{1-\beta _lt} \right) \left( \sum _{j=1} ^{N} \frac{c_j}{(1-\beta _jt)} \cdot \frac{\delta \left( \frac{t}{1-\beta _jt}\right) }{p^*_a \left( \frac{t}{1-\beta _jt}\right) } \right) \nonumber \\&= \left[ p^*_b(t) \right] ^M \sum _{j=1} ^{N} \frac{c_j}{1-\beta _jt} \cdot \delta \left( \frac{t}{1-\beta _jt}\right) \cdot \prod _{\begin{array}{c} l=1 \\ l\ne j \end{array} }^{N} p^*_a\left( \frac{t}{1-\beta _lt} \right) . \end{aligned}$$
(5)

Moreover, we can express the function \(\delta \left( \frac{t}{1-\beta _jt}\right) \) as

$$\begin{aligned} \delta \left( \frac{t}{1-\beta _jt}\right)= & {} \sum _{h=0}^{M-1} \delta _h \cdot \left( \frac{t}{1-\beta _j t}\right) ^h \nonumber \\= & {} \frac{\sum _{h=0}^{M-1} \delta _h t^h ( 1-\beta _j t )^{M-1-h}}{( 1-\beta _j t )^{M-1}} \nonumber \\= & {} \frac{\mu _j(t)}{( 1-\beta _j t )^{M-1}}, \end{aligned}$$

with \(\deg (\mu _j(t)) \le M-1\). Applying the same reasoning, we have

$$\begin{aligned} p^*_a\left( \frac{t}{1-\beta _jt}\right)= & {} \frac{\sum _{h=0}^{M} f_h t^h ( 1-\beta _j t )^{M-h}}{( 1-\beta _j t )^{M}} \nonumber \\= & {} \frac{\xi _j(t)}{( 1-\beta _j t )^{M}} \end{aligned}$$

with \(\deg (\xi _j(t)) \le M\). Hence, Eq. (5) becomes

$$\begin{aligned} p^*(t) \cdot R_o(t)&=\left[ p^*_b(t) \right] ^M \sum _{j=1} ^{N} \frac{c_j}{1-\beta _jt} \cdot \frac{\mu _j(t)}{( 1-\beta _j t )^{M-1}} \cdot \prod _{\begin{array}{c} l=1 \\ l\ne j \end{array} }^{N} \frac{\xi _l(t)}{( 1-\beta _l t )^{M}} \nonumber \\&= \left[ p^*_b(t) \right] ^M \sum _{j=1} ^{N} c_j\cdot \frac{\mu _j(t)}{( 1-\beta _j t )^{M}} \cdot \frac{{ \prod _{\begin{array}{c} l=1 \\ l\ne j \end{array} }^{N} \xi _l(t)}}{{ \prod _{\begin{array}{c} l=1 \\ l\ne j \end{array} }^{N} ( 1-\beta _l t )^{M}}} \nonumber \\&= \left[ p^*_b(t) \right] ^M \sum _{j=1} ^{N} c_j \mu _j(t) \frac{{ \prod _{\begin{array}{c} l=1 \\ l\ne j \end{array} }^{N} \xi _l(t)}}{\left[ p^*_b(t) \right] ^M} =\sum _{j=1} ^{N} c_j \mu _j(t) { \prod _{\begin{array}{c} l=1 \\ l\ne j \end{array} }^{N} \xi _l(t)}. \end{aligned}$$
(6)

We have from (6)

$$\begin{aligned} \deg \left( p^*(t)R_o(t)\right) =\deg \left( \sum _{j=1} ^{N} c_j \mu _j(t) {\displaystyle \prod _{\begin{array}{c} l=1 \\ l\ne j \end{array} }^{N} \xi _l(t)}\right) \!\le \! M-1+(N\!-\!1)M=MN\!-\!1, \end{aligned}$$

thus, by Lemma 3, \({\textbf{r}}\) is a linear recurrent sequence whose characteristic polynomial is \(p(t)=\text {res}(p_a(x), p_b(t-x))\). \(\square \)

Remark 6

Given \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\), if \(\alpha _1, \ldots , \alpha _M\) and \(\beta _1, \ldots , \beta _N\) are the distinct roots of \(p_a(t)\) and \(p_b(t)\), respectively, then, by Theorem 5, the roots of the characteristic polynomial of \({\textbf{a}} \star {\textbf{b}}\) are \(\alpha _i + \beta _j\), for any \(i = 1, \ldots , M\) and \(j = 1, \ldots , N\). Moreover, we would like to highlight that the proof of Theorem 5 can also be adapted to the case of multiple roots.
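To illustrate Theorem 5 and Remark 6 concretely (the example is ours): with \(a_n = 2^n\) and \(b_n = 3^n\), the binomial theorem gives \(({\textbf{a}} \star {\textbf{b}})_n = \sum_i \binom{n}{i} 2^i 3^{n-i} = 5^n\), matching the predicted root \(\alpha + \beta = 2 + 3 = 5\):

```python
from math import comb

def hurwitz(a, b):
    """Binomial convolution (Hurwitz product) of two truncated sequences."""
    return [sum(comb(n, i) * a[i] * b[n - i] for i in range(n + 1))
            for n in range(min(len(a), len(b)))]

# a_n = 2^n (char poly t - 2), b_n = 3^n (char poly t - 3)
a = [2 ** n for n in range(8)]
b = [3 ** n for n in range(8)]
r = hurwitz(a, b)
# Theorem 5 predicts the characteristic polynomial t - 5, i.e. r_n = 5^n
assert r == [5 ** n for n in range(8)]
```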

In the following proposition, we show how to write the Newton product in terms of the Hurwitz and Hadamard products.

Proposition 7

Given \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\), then \({\textbf{a}} \boxtimes {\textbf{b}} = [({\textbf{a}} \star {\textbf{1}}) \odot ({\textbf{b}} \star {\textbf{1}})] \star {\textbf{e}}\), where \({\textbf{1}}:= (1, 1, 1, \ldots )\) and \({\textbf{e}}:= ((-1)^n)_{n \ge 0}\).

Proof

The nth terms of \({\textbf{a}} \star {\textbf{1}}\) and \({\textbf{b}} \star {\textbf{1}}\) are by definition \(\sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) a_i\) and \(\sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) b_i\), respectively. Thus, the nth term of \(({\textbf{a}} \star {\textbf{1}}) \odot ({\textbf{b}} \star {\textbf{1}})\) is \(\sum _{s=0}^n \left( {\begin{array}{c}n\\ s\end{array}}\right) a_s \sum _{t=0}^n \left( {\begin{array}{c}n\\ t\end{array}}\right) b_t\), and consequently

$$\begin{aligned} \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) (-1)^{n-i}\sum _{s=0}^i \left( {\begin{array}{c}i\\ s\end{array}}\right) a_s \sum _{t=0}^i \left( {\begin{array}{c}i\\ t\end{array}}\right) b_t \end{aligned}$$

is the nth term of \([({\textbf{a}} \star {\textbf{1}}) \odot ({\textbf{b}} \star {\textbf{1}})] \star {\textbf{e}}\).

Considering the definition of the Newton product, we need to prove the equality

$$\begin{aligned} \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) (-1)^{n-i}\sum _{s=0}^i \left( {\begin{array}{c}i\\ s\end{array}}\right) a_s \sum _{t=0}^i \left( {\begin{array}{c}i\\ t\end{array}}\right) b_t = \sum _{i=0}^n \sum _{j=0}^i \left( {\begin{array}{c}n\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ j\end{array}}\right) a_i b_{n-j}, \quad \forall n \ge 0. \end{aligned}$$
(7)

Let \(c_i= \sum _{s=0}^i \left( {\begin{array}{c}i\\ s\end{array}}\right) a_s \sum _{t=0}^i \left( {\begin{array}{c}i\\ t\end{array}}\right) b_t\) and \(d_n= \sum _{i=0}^n \sum _{j=0}^i \left( {\begin{array}{c}n\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ j\end{array}}\right) a_ib_{n-j}\), then (7) is equivalent to

$$\begin{aligned} \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) (-1)^ic_i=(-1)^n d_n. \end{aligned}$$
(8)

Exploiting Newton’s inversion formula (see, e.g., [10, Equation 5.48]), i.e.,

$$\begin{aligned} \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) (-1)^i f(i) = g(n) \Leftrightarrow f(n)=\sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) (-1)^i g(i), \end{aligned}$$
(9)

for some arithmetic functions f and g, Eq. (8) becomes

$$\begin{aligned} \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) (-1)^i (-1)^i d_i = c_n, \end{aligned}$$

that is

$$\begin{aligned} \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) \sum _{k=0}^i \sum _{j=0}^k \left( {\begin{array}{c}i\\ k\end{array}}\right) \left( {\begin{array}{c}k\\ j\end{array}}\right) a_kb_{i-j} = \sum _{s=0}^n \left( {\begin{array}{c}n\\ s\end{array}}\right) a_s \sum _{t=0}^n \left( {\begin{array}{c}n\\ t\end{array}}\right) b_t. \end{aligned}$$
(10)

So, showing that (10) holds is equivalent to proving (7). Now, we can write the left-hand side of (10) as

$$\begin{aligned} \sum _{j=0}^n \sum _{s=j}^n \sum _{i=s}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ s\end{array}}\right) \left( {\begin{array}{c}s\\ j\end{array}}\right) a_sb_{i-j}.\end{aligned}$$
(11)

From (11) we have \(0 \le j \le s\le i \le n\), and we can rewrite (11) in the equivalent form

$$\begin{aligned} \sum _{s=0}^n \sum _{j = 0}^s \sum _{i = s}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) \left( {\begin{array}{c}i\\ s\end{array}}\right) \left( {\begin{array}{c}s\\ j\end{array}}\right) a_s b_{i-j}, \end{aligned}$$

where, setting \(t = i-j\), so that \(s \le t+j \le n\), we obtain

$$\begin{aligned} \sum _{s=0}^n \sum _{j=0}^s \sum _{t=s-j}^{n-j} \left( {\begin{array}{c}n\\ t+j\end{array}}\right) \left( {\begin{array}{c}t+j\\ s\end{array}}\right) \left( {\begin{array}{c}s\\ j\end{array}}\right) a_sb_t. \end{aligned}$$
(12)

Observing that

$$\begin{aligned} \left( {\begin{array}{c}n\\ t+j\end{array}}\right) \left( {\begin{array}{c}t+j\\ s\end{array}}\right)= & {} \frac{n!}{(n-t-j)!(t+j-s)!s!}\\= & {} \frac{n!(n-s)!}{(n-s)!s!(n-t-j)!(t+j-s)!} \\= & {} \left( {\begin{array}{c}n\\ s\end{array}}\right) \left( {\begin{array}{c}n-s\\ n-t-j\end{array}}\right) \end{aligned}$$

the term (12) becomes

$$\begin{aligned} \sum _{s=0}^n \left( {\begin{array}{c}n\\ s\end{array}}\right) a_s \sum _{j=0}^s \sum _{t=s-j}^{n-j} \left( {\begin{array}{c}n-s\\ n-t-j\end{array}}\right) \left( {\begin{array}{c}s\\ j\end{array}}\right) b_t. \end{aligned}$$

Finally, setting \(m = s-j\), we have \(0 \le m \le s\) and we get

$$\begin{aligned}&\sum _{s=0}^n \left( {\begin{array}{c}n\\ s\end{array}}\right) a_s \sum _{m=0}^s \sum _{t=m}^{n-s+m} \left( {\begin{array}{c}n-s\\ n-s-(t-m)\end{array}}\right) \left( {\begin{array}{c}s\\ s-m\end{array}}\right) b_t \\&\quad =\sum _{s=0}^n \left( {\begin{array}{c}n\\ s\end{array}}\right) a_s \sum _{m=0}^s \sum _{t=m}^{n-s+m} \left( {\begin{array}{c}n-s\\ t-m\end{array}}\right) \left( {\begin{array}{c}s\\ m\end{array}}\right) b_t \nonumber \\&\quad = \sum _{s=0}^n \left( {\begin{array}{c}n\\ s\end{array}}\right) a_s \sum _{t=0}^n b_t \sum _{m=0}^{t} \left( {\begin{array}{c}n-s\\ t-m\end{array}}\right) \left( {\begin{array}{c}s\\ m\end{array}}\right) \\ \nonumber&\quad = \sum _{s=0}^n \left( {\begin{array}{c}n\\ s\end{array}}\right) a_s \sum _{t=0}^n \left( {\begin{array}{c}n\\ t\end{array}}\right) b_t ,\nonumber \\ \end{aligned}$$

where the last equality is due to Vandermonde’s identity \(\sum _{m=0}^{t} \left( {\begin{array}{c}n-s\\ t-m\end{array}}\right) \left( {\begin{array}{c}s\\ m\end{array}}\right) = \left( {\begin{array}{c}n\\ t\end{array}}\right) \). \(\square \)
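The identity of Proposition 7 can also be checked numerically on small examples; the following sketch (the test sequences are ours) re-implements the three products from Section 2:

```python
from math import comb

def hadamard(a, b):
    return [x * y for x, y in zip(a, b)]

def hurwitz(a, b):
    return [sum(comb(n, i) * a[i] * b[n - i] for i in range(n + 1))
            for n in range(min(len(a), len(b)))]

def newton(a, b):
    return [sum(comb(n, i) * comb(i, j) * a[i] * b[n - j]
                for i in range(n + 1) for j in range(i + 1))
            for n in range(min(len(a), len(b)))]

L = 8
a = [(n + 1) ** 2 for n in range(L)]   # arbitrary test sequences
b = [2 * n + 1 for n in range(L)]
one = [1] * L                          # the sequence 1 = (1, 1, 1, ...)
e = [(-1) ** n for n in range(L)]      # e_n = (-1)^n

lhs = newton(a, b)
rhs = hurwitz(hadamard(hurwitz(a, one), hurwitz(b, one)), e)
assert lhs == rhs                      # a ⊠ b = [(a ⋆ 1) ⊙ (b ⋆ 1)] ⋆ e
```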

Remark 8

The previous proposition can be proved also exploiting the umbral calculus (see [20] for the basic notions). Given \({\textbf{a}}, {\textbf{b}} \in {\mathcal {S}}(R)\), let us consider two linear functionals U and V defined by \(U(W^n) = a_n\) and \(V(Z^n) = b_n\). The nth term of \(({\textbf{a}} \boxtimes {\textbf{b}}) \star {\textbf{1}}\) is \(\sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) \sum _{k=0}^i\sum _{j=0}^k \left( {\begin{array}{c}i\\ k\end{array}}\right) \left( {\begin{array}{c}k\\ j\end{array}}\right) a_kb_{i-j}\) and it can be obtained using the functionals U and V in the following way:

$$\begin{aligned}{} & {} UV\left( \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) Z^i\sum _{k=0}^i \left( {\begin{array}{c}i\\ k\end{array}}\right) W^k\sum _{j=0}^k\left( {\begin{array}{c}k\\ j\end{array}}\right) Z^{-j}\right) \\{} & {} \quad = UV\left( \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) Z^i \sum _{k=0}^i \left( {\begin{array}{c}i\\ k\end{array}}\right) (W+W/Z)^k \right) \\{} & {} \quad = UV\left( \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) (ZW+W+Z)^i\right) \\{} & {} \quad = UV((ZW+W+Z+1)^n) \end{aligned}$$

Now, the last quantity can be rewritten as

$$\begin{aligned} UV((Z+1)^n(W+1)^n)= & {} U(V(Z+1)^n(W+1)^n)\\= & {} U\left( V\left( \sum _{s=0}^n\left( {\begin{array}{c}n\\ s\end{array}}\right) Z^s\right) (W+1)^n\right) \\= & {} U\left( \sum _{s=0}^n \left( {\begin{array}{c}n\\ s\end{array}}\right) b_s (W+1)^n \right) \\= & {} \sum _{s=0}^n\left( {\begin{array}{c}n\\ s\end{array}}\right) b_s U\left( \sum _{t=0}^n \left( {\begin{array}{c}n\\ t\end{array}}\right) W^t \right) \\= & {} \sum _{s=0}^n\left( {\begin{array}{c}n\\ s\end{array}}\right) b_s \cdot \sum _{t=0}^n\left( {\begin{array}{c}n\\ t\end{array}}\right) a_t, \end{aligned}$$

which is the nth term of the sequence \(({\textbf{a}} \star {\textbf{1}}) \odot ({\textbf{b}} \star {\textbf{1}})\).

Theorem 9

Given \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\), we have that \({\textbf{c}} = {\textbf{a}} \boxtimes {\textbf{b}} \in {\mathcal {W}}(R)\) and the characteristic polynomial of \({\textbf{c}}\) is \(\prod _{i=1}^M \prod _{j=1}^N(t-(\alpha _i+\beta _j+\alpha _i\beta _j))\), where \(M = \deg (p_a(t))\), \(N = \deg (p_b(t))\), \(\alpha _i\)’s are the roots of \(p_a(t)\) and \(\beta _j\)’s the roots of \(p_b(t)\). Moreover, \(({\mathcal {W}}(R), \oplus , \boxtimes )\) is an R-algebra.

Proof

Firstly, we show that \(({\mathcal {S}}(R), \oplus , \boxtimes )\) is an R-algebra. This is an immediate consequence of Proposition 7. Indeed, since \({\textbf{a}} \boxtimes {\textbf{b}} = [({\textbf{a}} \star {\textbf{1}}) \odot ({\textbf{b}} \star {\textbf{1}})] \star {\textbf{e}}\), it is straightforward to see that \(\boxtimes \) satisfies all the properties characterizing \(({\mathcal {S}}(R), \oplus , \boxtimes )\) as an R-algebra. Moreover, since \((1, 0, 0, \ldots )\) is the identity element for the Hurwitz product and \({\textbf{e}}\) is the inverse of \({\textbf{1}}\) with respect to \(\star \), we have that \((1, 0, 0, \ldots )\) is the identity also for the Newton product.

Given \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\), we have \({\textbf{a}} \boxtimes {\textbf{b}} \in {\mathcal {W}}(R)\) by Proposition 7, thus also \(({\mathcal {W}}(R), \oplus , \boxtimes )\) is an R-algebra.

By Theorem 5 and Remark 6, we can observe that, given \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\), the sequences \({\textbf{a}} \star {\textbf{1}}\) and \({\textbf{b}} \star {\textbf{1}}\) are linear recurrent sequences whose characteristic polynomials have roots \(\alpha _i + 1\) and \(\beta _j +1\), for \(i = 1, \ldots , M\) and \(j = 1, \ldots , N\), respectively. Moreover, since \({\textbf{e}}\) is a linear recurrent sequence whose characteristic polynomial is \(t + 1\), then \([({\textbf{a}} \star {\textbf{1}}) \odot ({\textbf{b}} \star {\textbf{1}})] \star {\textbf{e}}\) has characteristic polynomial whose roots are \((\alpha _i+1)(\beta _j + 1) - 1 = \alpha _i + \beta _j + \alpha _i\beta _j\), for \(i = 1, \ldots , M\) and \(j = 1, \ldots , N\). \(\square \)
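Analogously to the Hurwitz case, Theorem 9 can be illustrated on a small example (ours): for \(a_n = 2^n\) and \(b_n = 3^n\), the Newton product should recur with the single root \(2 + 3 + 2 \cdot 3 = 11\):

```python
from math import comb

def newton(a, b):
    """Multinomial convolution (Newton product) of two truncated sequences."""
    return [sum(comb(n, i) * comb(i, j) * a[i] * b[n - j]
                for i in range(n + 1) for j in range(i + 1))
            for n in range(min(len(a), len(b)))]

# a_n = 2^n (root 2), b_n = 3^n (root 3): Theorem 9 predicts the root
# 2 + 3 + 2*3 = 11, so (a ⊠ b)_n should equal 11^n
a = [2 ** n for n in range(7)]
b = [3 ** n for n in range(7)]
assert newton(a, b) == [11 ** n for n in range(7)]
```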

Proposition 10

Let \({\textbf{a}} \in {\mathcal {S}}(R)\) be invertible with respect to the Newton product and denote by \({\textbf{b}}\) its inverse. Then we have

$$\begin{aligned} b_n = (-1)^n \sum _{t=0}^n \left( {\begin{array}{c}n\\ t\end{array}}\right) (-1)^t \frac{1}{\sum _{s=0}^t \left( {\begin{array}{c}t\\ s\end{array}}\right) a_s}, \end{aligned}$$
(13)

for any \(n \ge 0\).

Proof

Recalling that the identity element for the Newton product is \((1, 0, 0, \ldots )\), we have that \(a_0b_0\) must be 1, i.e., \(b_0 = a_0^{-1}\). When \(n \ge 1\), we have that

$$\begin{aligned} \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) (-1)^{n-i} \sum _{s=0}^i \left( {\begin{array}{c}i\\ s\end{array}}\right) a_s \sum _{t=0}^i \left( {\begin{array}{c}i\\ t\end{array}}\right) b_t = 0, \end{aligned}$$

i.e.,

$$\begin{aligned} \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) (-1)^{i} \sum _{s=0}^i \left( {\begin{array}{c}i\\ s\end{array}}\right) a_s \sum _{t=0}^i \left( {\begin{array}{c}i\\ t\end{array}}\right) b_t = 0. \end{aligned}$$

Let us define

$$\begin{aligned} f(i):=\sum _{s=0}^i \left( {\begin{array}{c}i\\ s\end{array}}\right) a_s \sum _{t=0}^i \left( {\begin{array}{c}i\\ t\end{array}}\right) b_t, \quad g(n):= \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) (-1)^if(i), \end{aligned}$$

where \(g(n) = 0\) for \(n \ge 1\) and \(g(0)=1\). By Newton’s inversion formula (9), we obtain

$$\begin{aligned} f(n) = \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) (-1)^i g(i). \end{aligned}$$

Since \(g(i) = 0\) for all \(i \ge 1\), we have \(f(n) = 1\) for all \(n\ge 0\), thus

$$\begin{aligned} \sum _{s=0}^n \left( {\begin{array}{c}n\\ s\end{array}}\right) a_s \sum _{t=0}^n \left( {\begin{array}{c}n\\ t\end{array}}\right) b_t = 1, \end{aligned}$$

or, equivalently,

$$\begin{aligned} \sum _{t=0}^n \left( {\begin{array}{c}n\\ t\end{array}}\right) b_t = \frac{1}{\sum _{s=0}^n \left( {\begin{array}{c}n\\ s\end{array}}\right) a_s}. \end{aligned}$$

Let \(d_n= \frac{1}{\sum _{s=0}^n \left( {\begin{array}{c}n\\ s\end{array}}\right) a_s}\). Applying Newton’s inversion formula, we get, for all \(n\ge 0\),

$$\begin{aligned} d_n =\sum _{t=0}^n \left( {\begin{array}{c}n\\ t\end{array}}\right) (-1)^t (-1)^t b_t \quad \Leftrightarrow \quad (-1)^n b_n = \sum _{t=0}^n \left( {\begin{array}{c}n\\ t\end{array}}\right) (-1)^t d_t, \end{aligned}$$

therefore

$$\begin{aligned} b_n = (-1)^n \sum _{t=0}^n \left( {\begin{array}{c}n\\ t\end{array}}\right) (-1)^t \frac{1}{\sum _{s=0}^t \left( {\begin{array}{c}t\\ s\end{array}}\right) a_s}. \end{aligned}$$

\(\square \)
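Formula (13) can be verified with exact rational arithmetic; the following sketch (the test sequence is ours) computes the inverse and checks it against the identity \((1, 0, 0, \ldots)\):

```python
from fractions import Fraction
from math import comb

def newton(a, b):
    return [sum(comb(n, i) * comb(i, j) * a[i] * b[n - j]
                for i in range(n + 1) for j in range(i + 1))
            for n in range(min(len(a), len(b)))]

def newton_inverse(a):
    """Formula (13); requires each sum_{s<=t} C(t,s) a_s to be invertible."""
    d = [Fraction(1) / sum(comb(t, s) * a[s] for s in range(t + 1))
         for t in range(len(a))]
    return [(-1) ** n * sum(comb(n, t) * (-1) ** t * d[t] for t in range(n + 1))
            for n in range(len(a))]

a = [Fraction(x) for x in [1, 2, 3, 4, 5, 6]]
b = newton_inverse(a)
identity = [Fraction(1)] + [Fraction(0)] * 5
assert newton(a, b) == identity        # a ⊠ b = (1, 0, 0, ...)
```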

Remark 11

We point out that, by the proof of Proposition 10, \({\textbf{a}}\) is invertible with respect to the Newton product if and only if \(\sum _{s=0}^n \left( {\begin{array}{c}n\\ s\end{array}}\right) a_s\) is an invertible element of R for every \(n \ge 0\), in analogy with the Hadamard product, for which \({\textbf{a}}\) is invertible if and only if all the elements of \({\textbf{a}}\) are invertible elements of R.

Let us notice that every \({\textbf{a}} \in {\mathcal {W}}(R)\) can be associated with its monic characteristic polynomial \(p_a(t)\) with coefficients in R, and this polynomial with its companion matrix A. Thus, the results we found for the R-algebras \(({\mathcal {W}}(R), \oplus , \odot )\), \(({\mathcal {W}}(R), \oplus , *)\), \(({\mathcal {W}}(R), \oplus , \star ),\) and \(({\mathcal {W}}(R), \oplus , \boxtimes )\) enable us to endow the set \(\mathcal {P}ol(R)\) of monic polynomials with coefficients in R with some new algebraic structures. Moreover, we can also observe what happens to the roots and to the companion matrices of the characteristic polynomials.

Let us consider \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\) with characteristic polynomials of degree M and N, whose roots are \(\alpha _1, \ldots , \alpha _M\) and \(\beta _1, \ldots , \beta _N\), respectively. The sequences \({\textbf{a}} \oplus {\textbf{b}}\) and \({\textbf{a}} * {\textbf{b}}\) both recur with characteristic polynomial \(p_a(t) \cdot p_b(t)\).

Regarding the Hadamard product, we have already observed that the characteristic polynomial of \({\textbf{c}} = {\textbf{a}} \odot {\textbf{b}}\) is \(p_c(t) = p_a(t) \otimes p_b(t)\), whose roots are \(\alpha _i\beta _j\), for \(i = 1, \ldots , M\) and \(j = 1, \ldots , N\). Thus, starting from the R-algebra \(({\mathcal {W}}(R), \oplus , \odot )\), we can construct the semiring \((\mathcal {P}ol(R), \cdot , \otimes )\), whose identity is the polynomial \(t-1\). Denoting by A, B,  and C the companion matrices of \(p_a(t), p_b(t)\), and \(p_c(t)\), respectively, we have that \(p_c(t)\) is the characteristic polynomial of \(A \otimes B\), where \(\otimes \) is the Kronecker product between matrices. Thus \(A \otimes B\) is an \(MN \times MN\) matrix whose eigenvalues are the products of the eigenvalues of A and B.

Similarly, starting from the Hurwitz product, we can construct a new operation in \(\mathcal {P}ol(R)\). Given \({\textbf{c}} = {\textbf{a}} \star {\textbf{b}}\), we proved that \(p_c(t)\) has roots \(\alpha _i + \beta _j\), for \(i = 1, \ldots , M\) and \(j= 1, \ldots , N\). The matrix \(A \otimes I_N + I_M \otimes B\) is an \(MN \times MN\) matrix whose eigenvalues are the sums of the eigenvalues of A and B. Thus, we can define \(p_c(t) = p_a(t) \star p_b(t)\) as the characteristic polynomial of the matrix \(A \otimes I_N + I_M \otimes B\) and we get the semiring \((\mathcal {P}ol(R), \cdot , \star )\).

Finally, given \({\textbf{c}} = {\textbf{a}} \boxtimes {\textbf{b}}\), we know that \(p_c(t)\) has roots \(\alpha _i + \beta _j + \alpha _i\beta _j\), for \(i = 1, \ldots , M\) and \(j = 1, \ldots , N\). In this case, we can define \(p_c(t) = p_a(t) \boxtimes p_b(t)\) as the characteristic polynomial of the matrix \(A \otimes I_N + I_M \otimes B + A \otimes B\), which is an \(MN \times MN\) matrix whose eigenvalues are exactly \(\alpha _i + \beta _j + \alpha _i\beta _j\), for \(i = 1, \ldots , M\) and \(j = 1, \ldots , N\). Thus, we have that \((\mathcal {P}ol(R), \cdot , \boxtimes )\) is another semiring of monic polynomials.
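The three matrix constructions above can be compared numerically (a sketch assuming NumPy; the choice of A and B, companion matrices of \(t^2 - t - 1\) and \(t^2 - 2t - 1\), is ours):

```python
import numpy as np

def companion(p):
    """Companion matrix of a monic polynomial, coefficients highest
    degree first, e.g. [1, -1, -1] for t^2 - t - 1."""
    N = len(p) - 1
    C = np.zeros((N, N))
    C[0, :] = -np.array(p[1:], dtype=float)
    C[1:, :-1] = np.eye(N - 1)
    return C

A = companion([1, -1, -1])     # t^2 - t - 1, roots alpha
B = companion([1, -2, -1])     # t^2 - 2t - 1, roots beta
Im, In = np.eye(2), np.eye(2)

sum_mat = np.kron(A, In) + np.kron(Im, B)   # eigenvalues alpha + beta
newton_mat = sum_mat + np.kron(A, B)        # alpha + beta + alpha*beta

expected = sorted((a + b).real for a in np.linalg.eigvals(A)
                  for b in np.linalg.eigvals(B))
got = sorted(np.linalg.eigvals(sum_mat).real)
assert np.allclose(expected, got)

expected_n = sorted((a + b + a * b).real for a in np.linalg.eigvals(A)
                    for b in np.linalg.eigvals(B))
got_n = sorted(np.linalg.eigvals(newton_mat).real)
assert np.allclose(expected_n, got_n)
```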

4 On isomorphisms between R-algebras

In [7], the authors proved that \(({\mathcal {W}}(R), \oplus , \odot )\) and \(({\mathcal {W}}(R), \oplus , *)\) are never isomorphic as R-algebras. In the following, we prove similar results for the other algebraic structures studied in the previous section.

Theorem 12

The R-algebras \(({\mathcal {W}}(R), \oplus , \odot )\) and \(({\mathcal {W}}(R), \oplus , \star )\) are not isomorphic.

Proof

Let us suppose that \(\psi :({\mathcal {W}}(R), \oplus , \odot ) \xrightarrow {}({\mathcal {W}}(R), \oplus , \star )\) is an injective morphism and consider \({\textbf{a}}:=(1, 0, 0, 0, \dots )\) and \({\textbf{b}}:=(0,1, 0, 0, \dots )\).

Then \(\psi ({\textbf{a}} \odot {\textbf{b}} ) = \psi ({\textbf{a}} ) \star \psi ({\textbf{b}} )= (0, 0, \ldots )\) and, by injectivity, \(\psi ({\textbf{a}} ), \psi ({\textbf{b}} ) \ne (0,0, \ldots )\). Let \(A_e^{\psi }(t)\) and \(B_e^{\psi }(t)\) be the exponential generating functions of \(\psi ({\textbf{a}})\) and \(\psi ({\textbf{b}})\), respectively. From \(\psi ({\textbf{a}} ) \star \psi ({\textbf{b}} )=(0,0, \ldots )\), it follows that \(A_e^{\psi }(t)B_e^{\psi }(t)=0\). Thanks to Lemma 3, we have

$$\begin{aligned} p^*_{\psi ({\textbf{a}})}(t)A^{\psi }_o(t)=h(t) \end{aligned}$$
(14)

with \(\deg (h(t)) < \deg (p^*_{\psi ({\textbf{a}})}(t)) \).

Now, let us consider the map \(\gamma : \sum _{n=0}^\infty a_nt^n \mapsto \sum _{n=0}^\infty \frac{a_n}{n!}t^n\), which is an isomorphism between the ring of ordinary formal power series and the Hurwitz series ring, see, e.g., [3]. Applying \(\gamma \) to (14), we obtain

$$\begin{aligned} \gamma \left( p^*_{\psi ({\textbf{a}})}(t)\right) A^{\psi }_e(t)&=\gamma (h(t)) \end{aligned}$$

where \(p^*_{\psi ({\textbf{a}})}(t)\) is viewed as a formal power series with only finitely many nonzero coefficients. Multiplying the equation \(A_e^{\psi }(t)B_e^{\psi }(t)=0\) by \(\gamma (p^*_{\psi ({\textbf{a}})}(t))\), we obtain

$$\begin{aligned} \gamma (h(t))B_e^{\psi }(t)&=0 \end{aligned}$$

which implies \(h(t)B_o^{\psi }(t)=0\). It follows that there is a nonzero element \(w \in R\) such that \(wB_o^{\psi }(t)=0\) ([9, Eq. (2.9)]), i.e., \(w\psi ({\textbf{b}})={\textbf{0}}\); since \(\psi \) is an injective R-algebra morphism, this gives \(w {\textbf{b}} ={\textbf{0}}\), whence \(w=0\), which is absurd. \(\square \)

Theorem 13

The R-algebras \(({\mathcal {W}}(R), \oplus , \odot )\) and \(({\mathcal {W}}(R), \oplus , \boxtimes )\) are isomorphic.

Proof

The explicit isomorphism is \(\psi : ({\mathcal {W}}(R), \oplus , \odot ) \rightarrow ({\mathcal {W}}(R), \oplus , \boxtimes )\) defined by \(\psi ({\textbf{a}}):= {\textbf{a}} \star {\textbf{e}}\), where \({\textbf{e}} = ((-1)^n)_{n \ge 0}\). Indeed, by Theorem 5, the map \(\psi \) is well-defined (in the sense that a linear recurrent sequence is mapped into a linear recurrent sequence). Moreover, since \({\textbf{1}}\) is the inverse of \({\textbf{e}}\) with respect to the Hurwitz product, the map \({\textbf{a}} \mapsto {\textbf{a}} \star {\textbf{1}}\) inverts \(\psi \), so \(\psi \) is bijective. Finally, by Proposition 7, we have

$$\begin{aligned} \psi ({\textbf{a}}) \boxtimes \psi ({\textbf{b}}) = ((({\textbf{a}} \star {\textbf{e}}) \star {\textbf{1}}) \odot (({\textbf{b}} \star {\textbf{e}}) \star {\textbf{1}}) )\star {\textbf{e}}=({\textbf{a}} \odot {\textbf{b}}) \star {\textbf{e}}=\psi ({\textbf{a}} \odot {\textbf{b}}), \end{aligned}$$

since \({\textbf{e}} \star {\textbf{1}} = (1, 0, 0, \ldots )\). \(\square \)
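The two facts used in the proof, namely that \({\textbf{e}} \star {\textbf{1}} = (1, 0, 0, \ldots )\) and that \({\textbf{a}} \mapsto {\textbf{a}} \star {\textbf{1}}\) undoes \(\psi \), amount to the classical binomial inversion formula. A quick Python sketch (our own check on truncated sequences; the sample sequence is arbitrary):

```python
# Sketch: e * 1 is the Hurwitz identity, and (a * e) * 1 = a (binomial inversion).
from math import comb

def hurwitz(a, b):
    """Truncated Hurwitz (binomial) product of two sequences."""
    n = min(len(a), len(b))
    return [sum(comb(k, i) * a[i] * b[k - i] for i in range(k + 1))
            for k in range(n)]

n = 12
e = [(-1)**k for k in range(n)]
ones = [1] * n
print(hurwitz(e, ones)[:5])                # -> [1, 0, 0, 0, 0]

a = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]   # arbitrary sample sequence
print(hurwitz(hurwitz(a, e), ones) == a)   # -> True
```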

Theorem 14

Let R be an integral domain. If \(\psi : ({\mathcal {W}}(R), \oplus , *) \rightarrow ({\mathcal {W}}(R),\oplus , \star )\) is a morphism, then \(\psi \) is not injective.

Proof

By contradiction, let us suppose that \(\psi :({\mathcal {W}}(R), \oplus , *) \rightarrow ({\mathcal {W}}(R), \oplus , \star )\) is an injective morphism. Let us denote by \((\psi ({\textbf{a}}))_i\) the \(i\)th term of the sequence \(\psi ({\textbf{a}})\). The \(n\)th term of \(\psi ({\textbf{a}}) \star \psi ({\textbf{b}})\) is

$$\begin{aligned} \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) (\psi ({\textbf{a}}))_i(\psi ({\textbf{b}}))_{n-i}= n!\sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) \frac{(\psi ({\textbf{a}}))_i}{i!}\frac{(\psi ({\textbf{b}}))_{n-i}}{(n-i)!}. \end{aligned}$$

Then, considering \(\psi ({\textbf{a}} * {\textbf{b}}) = \psi ({\textbf{a}}) \star \psi ({\textbf{b}})\), for any \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\), we obtain

$$\begin{aligned} \psi ({\textbf{a}}*{\textbf{b}})=((\psi ({\textbf{a}}) \odot {\textbf{f}}^{-1})\star (\psi ({\textbf{b}}) \odot {\textbf{f}}^{-1})) \odot {\textbf{f}} \end{aligned}$$
(15)

where we define the formal sequences \({\textbf{f}}:=(n!)_{n \ge 0}=(1, 1, 2!, 3!, \ldots )\) and \({\textbf{f}}^{-1}:=\left( \frac{1}{n!}\right) _{n \ge 0}=(1, 1, \frac{1}{2!}, \frac{1}{3!}, \ldots )\).
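The factorial rescaling behind (15) is straightforward to verify on truncated sequences; the following Python sketch (our own check; the sample sequences are arbitrary) confirms that \({\textbf{a}} \star {\textbf{b}} = (({\textbf{a}} \odot {\textbf{f}}^{-1}) * ({\textbf{b}} \odot {\textbf{f}}^{-1})) \odot {\textbf{f}}\):

```python
# Sketch: the Hurwitz product equals the Cauchy product after dividing and
# then multiplying term-wise by factorials (f_n = n!).
from fractions import Fraction
from math import comb, factorial

def hurwitz(a, b):
    n = min(len(a), len(b))
    return [sum(comb(k, i) * a[i] * b[k - i] for i in range(k + 1))
            for k in range(n)]

def cauchy(a, b):
    n = min(len(a), len(b))
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

n = 10
a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]        # arbitrary sample sequences
b = [2, 0, 1, 3, 1, 4, 1, 5, 9, 2]
f = [factorial(k) for k in range(n)]
f_inv = [Fraction(1, factorial(k)) for k in range(n)]

lhs = hurwitz(a, b)
scaled = cauchy([x * g for x, g in zip(a, f_inv)],
                [y * g for y, g in zip(b, f_inv)])
rhs = [x * g for x, g in zip(scaled, f)]
print(lhs == rhs)                          # -> True
```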

We define the map \(\tau :({\mathcal {W}}(R), \oplus , *) \xrightarrow {}({\mathcal {W}}(R), \oplus , *)\) by \(\tau ({\textbf{a}}) = \psi ({\textbf{a}}) \odot {\textbf{f}}^{-1}\). By definition and (15), we have \(\tau ( {\textbf{a}} \oplus {\textbf{b}} )= \tau ( {\textbf{a}}) \oplus \tau ( {\textbf{b}} )\) and \(\tau ( {\textbf{a}} * {\textbf{b}} )= \tau ( {\textbf{a}})*\tau ( {\textbf{b}} )\), for any \({\textbf{a}}, {\textbf{b}} \in {\mathcal {W}}(R)\), and \(\ker (\tau )= \{{\textbf{0}}\}\), where \({\textbf{0}}= (0, 0, 0, \ldots )\).

Let \(\tilde{\tau }\) be the induced map on ordinary generating functions: if \(A(t)=\sum _{n=0}^{+\infty } a_nt^n\), then \(\tilde{\tau }(A(t))=\sum _{n=0}^{+\infty }(\tau ({\textbf{a}}))_nt^n\). From the properties of \(\tau \), it follows that

$$\begin{aligned} \tilde{\tau }(A(t)+B(t))=\tilde{\tau }(A(t))+\tilde{\tau }(B(t)), \quad \tilde{\tau }(A(t)B(t))=\tilde{\tau }(A(t))\tilde{\tau }(B(t)) \end{aligned}$$

and

$$\begin{aligned} \tilde{\tau }(A(t))=\tilde{\tau }(B(t))\quad \Leftrightarrow \quad A(t)=B(t). \end{aligned}$$

When we consider \(A(t)=1\) and \(B(t)=1\), we clearly have

$$\begin{aligned} \tilde{\tau }(1)=\tilde{\tau }(1 \cdot 1)=\tilde{\tau }(1) \tilde{\tau }(1) \end{aligned}$$

and this implies \(\tilde{\tau }(1)=1\). Indeed, since R is an integral domain, the ring of formal power series over R has no zero divisors, so \(\tilde{\tau }(1) \in \{0, 1\}\); by the injectivity of \(\tau \), we cannot have \(\tilde{\tau }(1)=0\), because then \(\tau ((1, 0, \ldots ))\) would be \({\textbf{0}}\). In the case that \(A(t)=t\) and \(B(t)=-t\), then

$$\begin{aligned} 0= \tilde{\tau }(0)=\tilde{\tau }(t -t)=\tilde{\tau }(t) +\tilde{\tau }(-t), \end{aligned}$$

which implies \(\tilde{\tau }(-t)=-\tilde{\tau }(t).\) Moreover, when \(A(t)=1-t\) and \(B(t)=(1-t)^{-1}\) (note that \((1-t)^{-1}=\sum _{n=0}^{+\infty }t^n\) is a formal power series), then

$$\begin{aligned} 1= \tilde{\tau }(1)=\tilde{\tau }((1-t)(1-t)^{-1})=\tilde{\tau }(1-t)\tilde{\tau }((1-t)^{-1}), \end{aligned}$$

so \(\tilde{\tau }((1-t)^{-1})=( \tilde{\tau }(1-t))^{-1}=(1-\tilde{\tau }(t))^{-1}\).

Lastly, if \(A(t)=B(t)=t\), then

$$\begin{aligned} \tilde{\tau }(t^2)=\tilde{\tau }(t \cdot t)=\tilde{\tau }(t)\tilde{\tau }(t), \end{aligned}$$

so \(\tilde{\tau }(t^2)=( \tilde{\tau }(t))^2\); moreover, by additivity, \(\tilde{\tau }(nt)=n\tilde{\tau }(t)\), \(\forall n \in {\mathbb {N}}\).

Now, write \(\tilde{\tau }(t)=\sum _{n=0}^{+\infty } s_nt^n\); then from \(\tilde{\tau }(t^2)=( \tilde{\tau }(t))^2\) it follows that

$$\begin{aligned} \left( \sum _{n=0}^{+\infty } s_nt^n\right) ^2= \sum _{n=0}^{+\infty } \left( \sum _{i=0}^n s_is_{n-i}\right) t^n \Rightarrow {\left\{ \begin{array}{ll} \sum _{i=0}^{2k+1} s_is_{2k+1-i}=0 \\ \sum _{i=0}^{2k} s_is_{2k-i}=s_k \end{array}\right. } \end{aligned}$$
(16)

i.e., the square of the ordinary generating function \({\tilde{\tau }}(t)\), seen as the product of \({\tilde{\tau }}(t)\) with itself, must equal \({\tilde{\tau }}(t^2)\); hence the coefficients of the odd powers vanish and the coefficient of \(t^{2k}\) is \(s_k\). From (16), we obtain \(s_0=s_0^2\), so \(s_0=0\) or \(s_0=1\). In the case that \(s_0=1\), then \(s_i=0\), \(\forall i \ge 1\), and \(\tilde{\tau }(t)=1\). But this cannot happen because, if so, we would have

$$\begin{aligned} \tilde{\tau }(t)=1=\tilde{\tau }(1), \end{aligned}$$

which implies \(\tilde{\tau }(t-1)=0\) and hence, by injectivity, \(t-1=0\), a contradiction. Whereas, if \(s_0=0\), then \(s_1=s_1^2\), so \(s_1=0\) or \(s_1=1\). In the case that \(s_1=1\), then \(s_i=0\), \(\forall i \ge 2\), and \(\tilde{\tau }(t)=t\).

In the case that \(s_1=0\), then \(s_2=s_2^2\), so \(s_2=0\) or \(s_2=1\). Repeating the same reasoning and exploiting (16), we get that \(\tilde{\tau }(t)=t^k\) for some fixed \(k \ge 1\).

Let us consider \(A(t)=\frac{1}{1-t}\), then

$$\begin{aligned} \tilde{\tau }((1-t)^{-1})=(\tilde{\tau }(1)-\tilde{\tau }(t))^{-1} =(1-\tilde{\tau }(t))^{-1}=(1-t^k)^{-1}. \end{aligned}$$

By definition,

$$\begin{aligned} \tilde{\tau } \left( \frac{1}{1-t}\right) =\tilde{\tau }\left( \sum _{n=0}^{+\infty } t^n\right) = \sum _{n=0}^{+\infty } (\tau ({\textbf{1}}))_nt^n, \end{aligned}$$
(17)

and, since \(\tilde{\tau }(t)=t^k\) for some \(k\ge 1\), we also have

$$\begin{aligned} \tilde{\tau } \left( \frac{1}{1-t}\right) =\sum _{n=0}^{+\infty } t^{kn}. \end{aligned}$$
(18)

Equating the coefficients in the two equivalent power series (17) and (18), we have

$$\begin{aligned} \left( \tilde{\tau } \left( \frac{1}{1-t} \right) \right) _n = {\left\{ \begin{array}{ll} 1 &{} \text {if } k \mid n, \\ 0 &{} \text {if } k \not \mid n. \end{array}\right. } \end{aligned}$$

Thus, by (15) and the definition of \(\tau \) and \(\tilde{\tau }\), we have

$$\begin{aligned} \psi ({\textbf{1}})_n = (\tau ({\textbf{1}}) \odot {\textbf{f}})_n = {\left\{ \begin{array}{ll} n! &{} \text {if } k \mid n, \\ 0 &{} \text {if } k \not \mid n, \end{array}\right. } \end{aligned}$$

which is not a linear recurrent sequence, contradicting the fact that \(\psi \) maps \({\mathcal {W}}(R)\) into \({\mathcal {W}}(R)\). \(\square \)