19.1 Introduction

A topic of interest in the field of operator algebras is the connection between properties of dynamical systems and algebraic properties of crossed products associated to them. More specifically, the question of when a certain canonical subalgebra is maximal commutative and has the ideal intersection property, that is, each non-zero ideal of the algebra intersects the subalgebra non-trivially. For a topological dynamical system \((X,\sigma ),\) one may define a crossed product C*-algebra \(C(X) \rtimes _{\tilde{\sigma }} \mathbb {Z} \) where \(\tilde{\sigma }\) is an automorphism of C(X) induced by \(\sigma \). It turns out that the property known as topological freeness of the dynamical system is equivalent to C(X) being a maximal commutative subalgebra of \(C(X) \rtimes _{\tilde{\sigma }} \mathbb {Z}\) and also equivalent to the condition that every non-trivial closed ideal has a non-zero intersection with C(X). An excellent reference for this correspondence is [22]. For analogues, extensions and applications of this theory in the study of dynamical systems, harmonic analysis, quantum field theory, string theory, integrable systems, fractals and wavelets, see [1, 2, 4,5,6, 10, 11, 22].

For any class of graded rings, including gradings given by semigroups or even filtered rings (e.g. Ore extensions), it makes sense to ask whether the ideal intersection property is related to maximal commutativity of the degree zero component. For crossed product-like structures, where one has a natural action, it further makes sense to ask how the above mentioned properties of the degree zero component are related to properties of the action. These questions have been considered recently for algebraic crossed products and Banach algebra crossed products, both in the traditional context of crossed products by groups as well as generalizations to crossed products by groupoids and general categories [3, 8, 9, 12, 13, 15, 16, 20, 21].

Ore extensions constitute an important class of rings, appearing in extensions of differential calculus, in non-commutative geometry, in quantum groups and algebras and as a uniting framework for many algebras appearing in physics and engineering models. An Ore extension of a ring R is an overring with a generator x satisfying \(xr=\sigma (r)x+\varDelta (r), \ r \in R,\) for some endomorphism \(\sigma \) and a \(\sigma \)-derivation \(\varDelta \).

This article aims at giving a description of the centralizer and the center of the coefficient subalgebra \(\mathcal {A}\) in the Ore extension algebra \(\mathcal {A}[x,\tilde{\sigma },\varDelta ],\) where \(\mathcal {A}\) is the algebra of functions with finite support on a countable set X and \(\tilde{\sigma }:\mathcal {A}\rightarrow \mathcal {A}\) is an automorphism of \(\mathcal {A}\) induced by a bijection \(\sigma :X\rightarrow X\). A number of studies on centralizers in Ore extensions have been carried out before in [14, 17, 18], but in settings completely different from the one considered here.

In Sect. 19.2, we recall some notation and basic facts about Ore extensions used throughout the rest of the article. In Sect. 19.3, we give a description of twisted derivations on the algebra of functions on a finite set, from which it is observed that there are no non-trivial derivations on \(\mathbb {R}^n.\) In Sect. 19.4, we give the description of the centralizer of the coefficient algebra \(\mathcal {A}\) and the center of the Ore extension \(\mathcal {A}[x,\tilde{\sigma },0].\) In Sect. 19.5, we turn to the case when \(\mathcal {A}\) is the algebra of functions on a countable set with finite support, and give a description of the centralizer and the center of the Ore extension \(\mathcal {A}[x,\tilde{\sigma },0].\) Sections 19.6 and 19.7 are devoted to the special cases of skew power series rings and skew Laurent rings respectively.

19.2 Ore Extensions. Basic Preliminaries

For general references on Ore extensions, see for example, [7, 19]. For the convenience of the reader, we recall the definition.

Let R be a ring, \(\sigma : R \rightarrow R\) a ring endomorphism (not necessarily injective) and \(\varDelta : R \rightarrow R\) a \(\sigma \)-derivation, that is,

$$\begin{aligned} \varDelta (a+b)=\varDelta (a)+\varDelta (b) \quad \text { and } \quad \varDelta (ab) = \sigma (a) \varDelta (b) + \varDelta (a)b \end{aligned}$$

for all \(a,b\in R\).

Definition 19.1

The Ore extension \(R[x,\sigma ,\varDelta ]\) is defined as the ring generated by R and an element \(x \notin R\) such that \(1,x, x^2, \ldots \) form a basis for \(R[x,\sigma ,\varDelta ]\) as a left R-module and all \(r \in R\) satisfy

$$\begin{aligned} x r = \sigma (r)x + \varDelta (r). \end{aligned}$$
(19.1)

Such a ring always exists and is unique up to isomorphism [7].

From \(\varDelta (1 \cdot 1) = \sigma (1) \cdot \varDelta (1) +\varDelta (1) \cdot 1\) and \(\sigma (1)=1\) we get that \(\varDelta (1)=0\), and we see that \(1_R\) will be the multiplicative identity for \(R[x,\sigma ,\varDelta ]\) as well.

If \(\sigma = id_R\), then we say that \(R[x,id_R,\varDelta ]\) is a differential polynomial ring. If instead \(\varDelta \equiv 0\), then we say that \(R[x,\sigma ,0]\) is a skew polynomial ring. The reader should be aware that some authors use the term skew polynomial ring to mean Ore extensions.

An arbitrary non-zero element \(P \in R[x,\sigma ,\varDelta ]\) can be written uniquely as \(P = \sum \nolimits _{i=0}^n a_i x^i\) for some \(n \in \mathbb {Z}_{\ge 0}\), with \(a_i \in R\) for \(i \in \{0,1,\ldots , n\}\) and \(a_n \ne 0\). The degree of P is defined as \(\deg (P):=n\). We set \(\deg (0) := -\infty \).

Definition 19.2

A \(\sigma \)-derivation \(\varDelta \) is said to be inner if there exists some \(a \in R\) such that \(\varDelta (r) = ar-\sigma (r)a\) for all \(r \in R\). A \(\sigma \)-derivation that is not inner is called outer.
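For instance, a direct computation (using that \(\sigma \) is a ring endomorphism, so that \(\sigma (rs)=\sigma (r)\sigma (s)\)) confirms that every map of this form is indeed a \(\sigma \)-derivation:

$$\begin{aligned} \sigma (r)\varDelta (s)+\varDelta (r)s&=\sigma (r)\big (as-\sigma (s)a\big )+\big (ar-\sigma (r)a\big )s\\&=\sigma (r)as-\sigma (r)\sigma (s)a+ars-\sigma (r)as\\&=a(rs)-\sigma (rs)a=\varDelta (rs). \end{aligned}$$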

Given a ring S we denote its center by Z(S). The centralizer \(C(T)\) of a subset \(T\subseteq S\) is defined as the set of elements of S that commute with every element of T. If T is a commutative subring of S and the centralizer of T in S coincides with T, then T is said to be a maximal commutative subring of S.

19.3 Derivations on Algebras of Functions on a Finite Set

Let \(X=[n](=\{1,2,\ldots ,n\})\) be a finite set and let \(\mathcal {A}=\{f:X\rightarrow \mathbb {R}\}\) denote the algebra of real-valued functions on X with respect to the usual pointwise operations, that is, pointwise addition, scalar multiplication and pointwise multiplication. Writing \(f_i:=f(i)\) for \(i\in X,\) we can identify \(\mathcal {A}\) with \(\mathbb {R}^n.\) Here, \(\mathbb {R}^n\) is equipped with the usual operations of pointwise addition, scalar multiplication and multiplication defined by

$$xy=(x_1y_1,x_2y_2,\ldots ,x_ny_n)$$

for every \(x=(x_1,x_2,\ldots ,x_n)\) and \(y=(y_1,y_2,\ldots ,y_n).\)

Now, let \(\sigma :X\rightarrow X\) be a bijection (that is, a permutation of X) and let \(\tilde{\sigma }:\mathcal {A}\rightarrow \mathcal {A}\) be the automorphism induced by \(\sigma ,\) that is

$$\begin{aligned} \tilde{\sigma }(f) = f \circ \sigma ^{-1} \end{aligned}$$
(19.2)

for every \(f\in \mathcal {A}.\) We would like to consider the Ore extension \(\mathcal {A}[x;\tilde{\sigma },\varDelta ]\) where \(\varDelta \) is a \(\tilde{\sigma }\)-derivation on \(\mathcal {A}\) and x is an indeterminate.

Recall that \(\varDelta \) is a \(\tilde{\sigma }\)-derivation on \(\mathcal {A}\) if it is \(\mathbb {R}\)-linear and for every \(f,g\in \mathcal {A},\)

$$\varDelta (fg) = \tilde{\sigma } (f) \varDelta (g) + \varDelta (f) g.$$

Since \(\mathcal {A}\) can be identified with \(\mathbb {R}^n\), \(\varDelta \) is a linear operator on \(\mathbb {R}^n\) and can be represented by a matrix,

$$\begin{aligned}{}[\varDelta ]=[\varDelta (e_1)|\varDelta (e_2)|\cdots |\varDelta (e_n)],\quad \varDelta (e_i)=\left[ \begin{matrix} k_{1i}\\ k_{2i}\\ \vdots \\ k_{ni}\end{matrix}\right] \end{aligned}$$
(19.3)

where \(\{e_1,e_2,\ldots ,e_n\}\) is the standard basis of \(\mathbb {R}^n.\) In the following Theorem we give the description of the matrix \([\varDelta ]\) in (19.3) above.

Theorem 19.1

Let \(\sigma :X\rightarrow X\) be a bijection and let \(\varDelta \) be a \(\tilde{\sigma }\)-derivation whose standard matrix \(\big [\varDelta \big ]\) is as given by (19.3). Then

  1. \(k_{li}=0\) if \(l\not \in \{i,\sigma (i)\}\),

  2. \(k_{ii}= 0\) if \(i = \sigma (i)\),

  3. \(k_{ji}=-k_{jj}\) for all \(i\ne j = \sigma (i)\).

Proof

1. If \(\sigma (i)=j\), then \(\tilde{\sigma }(e_i)=e_j,\) where \(\tilde{\sigma }\) is as defined by (19.2) and \(\{e_i,\ i=1,2,\ldots ,n\}\) is the standard basis for \(\mathbb {R}^n.\) Therefore from the definition of \(\varDelta ,\)

$$\begin{aligned} \varDelta (e_i^2)&=\tilde{\sigma }(e_i)\varDelta (e_i)+\varDelta (e_i)e_i\\&=e_j\varDelta (e_i)+\varDelta (e_i)e_i\\&=\varDelta (e_i)(e_j+e_i). \end{aligned}$$

Now \(e_i^2=e_i\) and hence

$$\varDelta (e_i)=\varDelta (e_i)(e_i+e_j) = e_{i}k_{ii} + e_{j}k_{ji}.$$

Therefore, \(k_{li}=0\) whenever \(l\ne i,j.\)

2. If \(i = \sigma (i)\), then

$$\begin{aligned} e_{i}k_{ii} = 2e_{i}k_{ii}, \end{aligned}$$

and hence \(k_{ii}\) = 0.

3. For \(i\ne j=\sigma (i)\), \(\varDelta (e_ie_j)=\varDelta (0)=0\) and hence

$$\begin{aligned} 0&=\varDelta (e_ie_j)\\&=\tilde{\sigma }(e_i)\varDelta (e_j)+\varDelta (e_i)e_j\\&=e_j\varDelta (e_j)+\varDelta (e_i)e_j\\&=e_j\big (\varDelta (e_j)+\varDelta (e_i)\big ). \end{aligned}$$

Looking at the jth component we get

$$\begin{aligned} k_{jj}+k_{ji}=0 \ \text { or }\ k_{ji}=-k_{jj}. \end{aligned}$$

   \(\square \)

Corollary 19.3.1

There are no non-zero derivations on \(\mathcal {A}.\)

Proof

If \(\tilde{\sigma }=id,\) that is, \(\tilde{\sigma }(e_i)=e_i\) and \(i = \sigma (i)\), \(i=1,2,\ldots ,n,\) then from Theorem 19.1, it follows that \(k_{ij}=0\) for all \(i\ne j\), and \(k_{ii}=0\) for all \(i=1,2,\ldots , n\). Therefore, \(\varDelta =0.\)   \(\square \)

Example 19.3.1

Let \(n=3\) and let \(\sigma :[3] \rightarrow [3]\) be a permutation such that \(\sigma (1)=2,\ \sigma (2)=3\) and \(\sigma (3)=1.\) Then

$$\begin{aligned} \tilde{\sigma }(f)(1)&=f\Big (\sigma ^{-1}(1)\Big )=f(3)\\ \tilde{\sigma }(f)(2)&=f\Big (\sigma ^{-1}(2)\Big )=f(1)\\ \tilde{\sigma }(f)(3)&=f\Big (\sigma ^{-1}(3)\Big )=f(2) \end{aligned}$$

Therefore \(\tilde{\sigma }(x_1,x_2,x_3)=(x_3,x_1,x_2).\) Let

$$ [\varDelta ]=\left[ \begin{matrix}k_{11}&{}k_{12}&{}k_{13}\\ k_{21}&{}k_{22}&{}k_{23}\\ k_{31}&{}k_{32}&{}k_{33}\end{matrix}\right] $$

be the standard matrix for \(\varDelta \) and let \(x=(x_1,x_2,x_3)\) and \(y=(y_1,y_2,y_3)\) be arbitrary vectors in \(\mathbb {R}^3.\) Then

$$\begin{aligned} \tilde{\sigma }(x)\varDelta (y)&= \left[ \begin{matrix} x_3\\ x_1\\ x_2 \end{matrix}\right] \left[ \begin{matrix} k_{11}y_1 + k_{12}y_2 + k_{13}y_3\\ k_{21}y_1 + k_{22}y_2 + k_{23}y_3\\ k_{31}y_1 + k_{32}y_2 + k_{33}y_3 \end{matrix}\right] \\&=\left[ \begin{matrix} k_{11}x_3y_1 + k_{12}x_3y_2 + k_{13}x_3y_3\\ k_{21}x_1y_1 + k_{22}x_1y_2 + k_{23}x_1y_3\\ k_{31}x_2y_1 + k_{32}x_2y_2 + k_{33}x_2y_3\end{matrix}\right] \end{aligned}$$
$$\begin{aligned} \varDelta (x)y&=\left[ \begin{matrix} k_{11}x_1 + k_{12}x_2 + k_{13}x_3\\ k_{21}x_1 + k_{22}x_2 + k_{23}x_3\\ k_{31}x_1 + k_{32}x_2 + k_{33}x_3 \end{matrix}\right] \left[ \begin{matrix} y_1\\ y_2\\ y_3 \end{matrix}\right] \\&=\left[ \begin{matrix} k_{11}x_1y_1 + k_{12}x_2y_1 + k_{13}x_3y_1\\ k_{21}x_1y_2 + k_{22}x_2y_2 + k_{23}x_3y_2\\ k_{31}x_1y_3 + k_{32}x_2y_3 + k_{33}x_3y_3 \end{matrix}\right] \end{aligned}$$
$$ \tilde{\sigma }(x)\varDelta (y)+\varDelta (x)y=\left[ \begin{matrix} k_{11}(x_1y_1+x_3y_1) + k_{12}(x_2y_1+x_3y_2) + k_{13}(x_3y_1+x_3y_3)\\ k_{21}(x_1y_2+x_1y_1) + k_{22}(x_1y_2+x_2y_2) + k_{23}(x_1y_3+x_3y_2)\\ k_{31}(x_2y_1+x_1y_3) + k_{32}(x_2y_2+x_2y_3) + k_{33}(x_2y_3+x_3y_3) \end{matrix}\right] . $$

Also,

$$\begin{aligned} \varDelta (xy)&=\left[ \begin{matrix} k_{11}&{}k_{12}&{}k_{13}\\ k_{21}&{}k_{22}&{}k_{23}\\ k_{31}&{}k_{32}&{}k_{33}\end{matrix}\right] \left[ \begin{matrix} x_1y_1\\ x_2y_2\\ x_3y_3\end{matrix}\right] \\&=\left[ \begin{matrix} k_{11}x_1y_1 + k_{12}x_2y_2 + k_{13}x_3y_3\\ k_{21}x_1y_1 + k_{22}x_2y_2 + k_{23}x_3y_3\\ k_{31}x_1y_1 + k_{32}x_2y_2 + k_{33}x_3y_3 \end{matrix}\right] . \end{aligned}$$

Therefore, from

$$\varDelta (xy)=\tilde{\sigma }(x)\varDelta (y)+\varDelta (x)y$$

we get

$$k_{11}x_3y_1+k_{12}(x_2y_1+x_3y_2-x_2y_2)+k_{13}x_3y_1=0,$$

from which we obtain \(k_{13}=-k_{11}\) and \(k_{12}=0.\)

Similarly, we obtain \(k_{21}=-k_{22},\ k_{23}=0\), and \(k_{31}=0,\ k_{32}=-k_{33},\) which is in agreement with the assertions of Theorem 19.1.

Setting \(k_{11}=s,\ k_{22}=t\) and \(k_{33}=u,\) we obtain the matrix of \(\varDelta \) as

$$[\varDelta ]=\left[ \begin{matrix} s &{}0&{}-s\\ -t &{}t&{}0\\ 0&{}-u&{}u \end{matrix}\right] .$$
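As a numerical illustration (our own sketch, not part of the example; the values of s, t, u and the 0-indexing are arbitrary choices), one can verify in code that this matrix indeed defines a \(\tilde{\sigma }\)-derivation:

```python
import random

# sigma is the 3-cycle from the example, written 0-indexed: 0 -> 1 -> 2 -> 0
sigma = {0: 1, 1: 2, 2: 0}
sigma_inv = {v: k for k, v in sigma.items()}

def sigma_tilde(f):
    # (sigma_tilde f)(i) = f(sigma^{-1}(i)), so sigma_tilde(x1,x2,x3) = (x3,x1,x2)
    return [f[sigma_inv[i]] for i in range(3)]

def delta(f, s, t, u):
    # multiplication by the matrix [Delta] = [[s,0,-s],[-t,t,0],[0,-u,u]]
    return [s * f[0] - s * f[2],
            -t * f[0] + t * f[1],
            -u * f[1] + u * f[2]]

def mul(f, g):
    # pointwise product in A identified with R^3
    return [a * b for a, b in zip(f, g)]

s, t, u = 2.0, -3.0, 5.0
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(3)]
    y = [random.uniform(-1, 1) for _ in range(3)]
    lhs = delta(mul(x, y), s, t, u)
    rhs = [p + q for p, q in zip(mul(sigma_tilde(x), delta(y, s, t, u)),
                                 mul(delta(x, s, t, u), y))]
    # the sigma_tilde-derivation identity Delta(xy) = sigma_tilde(x)Delta(y) + Delta(x)y
    assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```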

In the next Theorem, we prove that if \(\tilde{\sigma }:\mathbb {R}^n\rightarrow \mathbb {R}^n\) is an automorphism of \(\mathbb {R}^n\) and \(\varDelta :\mathbb {R}^n\rightarrow \mathbb {R}^n\) is an operator on \(\mathbb {R}^n\) that satisfies the conditions of Theorem 19.1, then \(\varDelta \) is a \(\tilde{\sigma }\)-derivation.

Theorem 19.2

Let \(\sigma :X\rightarrow X\) be a bijection on X and let \(\tilde{\sigma }:\mathbb {R}^n\rightarrow \mathbb {R}^n\) be the automorphism induced by \(\sigma .\) Let \(\varDelta :\mathbb {R}^n\rightarrow \mathbb {R}^n\) be a linear operator whose standard matrix \(\big [\varDelta \big ]\) has the following properties:

  1. \(k_{li}=0\) if \(l\not \in \{ i,\sigma (i)\}\),

  2. \(k_{ji}=-k_{jj}\), for \(i \ne j = \sigma (i)\),

  3. \(k_{ii}=0\) if \(i = \sigma (i)\).

Then \(\varDelta \) is a \(\tilde{\sigma }\)-derivation.

Proof

We first do the proof for the standard basis \(\{e_1,e_2,\ldots ,e_n\}\) of \(\mathbb {R}^n.\) Recall that if \(\sigma (i)=j\) then \(\tilde{\sigma }(e_i)=e_j.\) From the definition of \(\varDelta ,\)

$$ \varDelta (e_i)=\left[ \begin{matrix} k_{1i}\\ k_{2i}\\ \vdots \\ k_{ni}\end{matrix}\right] $$

where \(k_{li}=0\) for all \(l\ne i,j\) and \(k_{ji}=-k_{jj}.\)

Now, for \(i\ne j,\) we have \(\varDelta (e_ie_j)=\varDelta (0)=0.\) Moreover,

$$\begin{aligned} \tilde{\sigma }(e_i)\varDelta (e_j)+\varDelta (e_i)e_j&=e_j\varDelta (e_j)+\varDelta (e_i)e_j\\&=e_j\big (\varDelta (e_j)+\varDelta (e_i)\big ). \end{aligned}$$

All the components in the above vector are zero except the jth component which is given by

$$k_{jj}+k_{ji}=k_{jj}-k_{jj}=0.$$

Also, \(\varDelta (e_i^2)=\varDelta (e_i)\), (since \(e_i^2=e_i\)), and

$$\begin{aligned} \tilde{\sigma }(e_i)\varDelta (e_i)+\varDelta (e_i)e_i&=e_j\varDelta (e_i)+\varDelta (e_i)e_i\\&=\varDelta (e_i)(e_i+e_j), \end{aligned}$$

where all the components in the above vector are zero except the ith and jth component. That is (assuming \(i<j\))

$$\tilde{\sigma }(e_i)\varDelta (e_i)+\varDelta (e_i)e_i=\left[ \begin{matrix} 0\\ \vdots \\ k_{ii}\\ 0\\ \vdots \\ k_{ji}\\ 0\\ \vdots \\ 0 \end{matrix}\right] =\varDelta (e_i).$$

Therefore, \(\varDelta \) is a \(\tilde{\sigma }\)-derivation on the standard basis vectors.

Now let \(x=(x_1,x_2,\ldots ,x_n)\) and \(y=(y_1,y_2,\ldots ,y_n)\) be arbitrary vectors in \(\mathbb {R}^n.\) Then

$$x=\sum _{i=1}^nx_ie_i\ \text { and }\ y=\sum _{j=1}^ny_je_j$$

and

$$xy=\sum _{i,j=1}^nx_iy_je_ie_j.$$

Using the fact that both \(\tilde{\sigma }\) and \(\varDelta \) are \(\mathbb {R}\)-linear, we have

$$\begin{aligned} \varDelta (xy)&=\varDelta \left( \sum _{i,j=1}^nx_iy_je_ie_j\right) \\&=\sum _{i,j=1}^n\varDelta \big (x_iy_je_ie_j\big )\\&=\sum _{i,j=1}^nx_iy_j\varDelta \big (e_ie_j\big )\\&=\sum _{i,j=1}^n x_iy_j\big (\tilde{\sigma }(e_i)\varDelta (e_j)+\varDelta (e_i)e_j\big )\\&=\sum _{i,j=1}^n x_iy_j\tilde{\sigma }(e_i)\varDelta (e_j)+\sum _{i,j=1}^nx_iy_j\varDelta (e_i)e_j\\&=\left( \sum _{i=1}^nx_i\tilde{\sigma }(e_i)\right) \left( \sum _{j=1}^n y_j\varDelta (e_j)\right) +\left( \sum _{i=1}^nx_i\varDelta (e_i)\right) \left( \sum _{j=1}^ny_je_j\right) \\&=\left( \tilde{\sigma }\left( \sum _{i=1}^nx_ie_i\right) \right) \left( \varDelta \left( \sum _{j=1}^n y_je_j\right) \right) +\left( \varDelta \left( \sum _{i=1}^nx_ie_i\right) \right) \left( \sum _{j=1}^ny_je_j\right) \\&= \tilde{\sigma }(x)\varDelta (y)+\varDelta (x)y. \end{aligned}$$

Therefore \(\varDelta \) is a \(\tilde{\sigma }\)-derivation on \(\mathbb {R}^n.\)   \(\square \)
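Theorem 19.2 thus gives a recipe for constructing \(\tilde{\sigma }\)-derivations. The sketch below (our own illustration, using 0-indexed points, a dictionary for \(\sigma \), and arbitrary diagonal parameters) builds such a matrix and checks the derivation identity on random vectors:

```python
import random

def induced_automorphism(sigma, f):
    """(sigma_tilde f)(i) = f(sigma^{-1}(i))."""
    n = len(f)
    sigma_inv = {sigma[i]: i for i in range(n)}
    return [f[sigma_inv[i]] for i in range(n)]

def derivation_matrix(sigma, diag):
    """Build [Delta] per Theorem 19.2: K[j][j] = diag[j] and
    K[j][sigma^{-1}(j)] = -diag[j] at non-fixed points j
    (so k_{ji} = -k_{jj} for i != j = sigma(i)); all other entries zero."""
    n = len(sigma)
    sigma_inv = {sigma[i]: i for i in range(n)}
    K = [[0.0] * n for _ in range(n)]
    for j in range(n):
        if sigma[j] != j:               # k_{jj} = 0 at fixed points
            K[j][j] = diag[j]
            K[j][sigma_inv[j]] = -diag[j]
    return K

def apply_op(K, f):
    n = len(f)
    return [sum(K[i][j] * f[j] for j in range(n)) for i in range(n)]

n = 5
sigma = {0: 2, 2: 0, 1: 3, 3: 1, 4: 4}   # two 2-cycles and a fixed point
diag = [random.uniform(-2, 2) for _ in range(n)]
K = derivation_matrix(sigma, diag)
for _ in range(50):
    x = [random.uniform(-1, 1) for _ in range(n)]
    y = [random.uniform(-1, 1) for _ in range(n)]
    lhs = apply_op(K, [a * b for a, b in zip(x, y)])
    sx = induced_automorphism(sigma, x)
    dy, dx = apply_op(K, y), apply_op(K, x)
    rhs = [sx[i] * dy[i] + dx[i] * y[i] for i in range(n)]
    assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```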

19.4 Centralizers in Ore Extensions for Functional Algebras

Consider the Ore extension \(\mathcal {A}[x,\tilde{\sigma },\varDelta ]\), that is, the algebra

$$\mathcal {A}\big [x,\tilde{\sigma },\varDelta \big ]:=\left\{ \sum _{k=0}^m f_kx^k\ :\ f_k\in \mathcal {A}\right\} $$

with the operations of pointwise addition, scalar multiplication, and multiplication given by the relation

$$xf=\tilde{\sigma }(f)x+\varDelta (f)$$

for every \(f\in \mathcal {A}.\)

Our interest is to give a description of the centralizer \(C(\mathcal {A})\) of \(\mathcal {A}\) in the Ore extension \(\mathcal {A}[x,\tilde{\sigma },\varDelta ]\) where \(\tilde{\sigma }\) and \(\varDelta \) are as described before. Using the notation introduced in [14], we define functions \(\pi _k^l:\mathcal {A}\rightarrow \mathcal {A},\) for \(k,l\in \mathbb {Z},\) as follows: \(\pi _0^0=id.\) If m and n are integers such that \(m>n\) or at least one of them is negative, then \(\pi _m^n=0.\) For the remaining cases,

$$\pi _m^n=\tilde{\sigma }\circ \pi _{m-1}^{n-1}+\varDelta \circ \pi _m^{n-1}.$$

It has been proven in [14] that an element \(\sum \nolimits _{k=0}^m f_kx^k\in \mathcal {A}[x,\tilde{\sigma },\varDelta ]\) belongs to the centralizer of \(\mathcal {A}\) if and only if

$$\begin{aligned} gf_k=\sum _{j=k}^m f_j\pi _k^j(g) \end{aligned}$$
(19.4)

holds for all \(k\in \{0,1,\ldots ,m\}\) and all \(g\in \mathcal {A}.\)
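The recursion for \(\pi _m^n\) can be implemented directly. The sketch below (our own illustration; the permutation and the \(\tilde{\sigma }\)-derivation, built as in Theorem 19.2 with all diagonal parameters equal to 1, are arbitrary choices) represents \(\tilde{\sigma }\) and \(\varDelta \) as matrices and checks the identities \(\pi _m^m=\tilde{\sigma }^m\) and \(\pi _0^j=\varDelta ^j\) used later in the text:

```python
n = 4
sigma = {0: 1, 1: 2, 2: 3, 3: 0}   # an arbitrary 4-cycle, for illustration
sigma_inv = {v: k for k, v in sigma.items()}

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]

I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
Z = [[0.0] * n for _ in range(n)]
# matrix of sigma_tilde: (sigma_tilde f)(i) = f(sigma^{-1}(i))
S = [[1.0 if sigma_inv[i] == j else 0.0 for j in range(n)] for i in range(n)]
# a sigma_tilde-derivation as in Theorem 19.2, all diagonal parameters 1
D = [[0.0] * n for _ in range(n)]
for j in range(n):
    D[j][j] = 1.0
    D[j][sigma_inv[j]] = -1.0

def pi(m, k):
    """pi_m^k as an n-by-n matrix: pi_0^0 = id, zero when m > k or an
    index is negative, else pi_m^k = S o pi_{m-1}^{k-1} + D o pi_m^{k-1}."""
    if m == 0 and k == 0:
        return [row[:] for row in I]
    if m < 0 or k < 0 or m > k:
        return [row[:] for row in Z]
    return matadd(matmul(S, pi(m - 1, k - 1)), matmul(D, pi(m, k - 1)))
```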

Observe that since \(\mathcal {A}\) is commutative, the centralizer \(C(\mathcal {A})\) of \(\mathcal {A}\) is also commutative and hence a maximal commutative subalgebra of \(\mathcal {A}[x,\tilde{\sigma },\varDelta ].\)

19.4.1 Centralizer for the Case \(\varDelta =0\)

In this section we treat the simplest case, when \(\varDelta =0.\) Recall that \(\tilde{\sigma }\) permutes the coordinates of elements of \(\mathcal {A},\) and since [n] is a finite set, \(\tilde{\sigma }\) is of finite order. Before we give the description, we need the following definition.

Definition 19.3

For any nonzero \(n\in \mathbb Z,\) set

$$\begin{aligned} Sep^n(X)= & {} \{x\in X~|~x\ne \sigma ^n(x)\}, \\ Per^n(X)= & {} \{x\in X~|~x=\sigma ^n(x)\}. \end{aligned}$$

Observe that \(\tilde{\sigma }^n(h)(x)\ne h(x)\) if and only if \(\sigma ^n(x)\ne x\) for every \(x\in X\) and every \(h\in \mathcal {A}.\) We give the description of the centralizer in the following theorem.

Theorem 19.3

The centralizer \(C(\mathcal {A})\) of \(\mathcal {A}\) in the Ore extension \(\mathcal {A}\big [x,\tilde{\sigma },0\big ]\) is given by

$$ C(\mathcal {A})=\left\{ \sum _{k=0}^mf_kx^k \quad \text {such that} \quad f_k=0 \quad \text {on}\quad Sep^k(X)\right\} . $$

Proof

From Eq. (19.4) and the fact that \(\varDelta =0,\) we see that an element \(\sum \nolimits _{k=0}^m f_kx^k\in \mathcal {A}[x,\tilde{\sigma },\varDelta ]\) belongs to the centralizer of \(\mathcal {A}\) if and only if

$$ gf_k=f_k\tilde{\sigma }^k(g) $$

for every \(k=0,1,\ldots ,m\) and every \(g\in \mathcal {A}\). That is,

$$\begin{aligned} g(x)f_k(x)=f_k(x)\tilde{\sigma }^k(g)(x) \end{aligned}$$
(19.5)

for every \(x\in X.\) Since \(\mathcal {A}\) is commutative, Eq. (19.5) holds at \(x\) if and only if \(x\in Per^k(X)\) or \(f_k(x)=0.\) Therefore, the centralizer \(C(\mathcal {A})\) of \(\mathcal {A}\) is given by

$$C(\mathcal {A})=\left\{ \sum _{k=0}^mf_kx^k\ :\ f_k\in \mathcal {A} \text { where } f_k=0 \text { on } Sep^k(X)\right\} .$$

   \(\square \)
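The description in Theorem 19.3 can be sanity-checked numerically. The sketch below (our own illustration; the permutation, the 0-indexing and the random coefficients are arbitrary choices) implements the multiplication of \(\mathcal {A}[x,\tilde{\sigma },0]\) and verifies that elements with \(f_k=0\) on \(Sep^k(X)\) commute with every degree-zero element \(g\in \mathcal {A}\):

```python
import random

n = 6
sigma = {0: 1, 1: 0, 2: 3, 3: 4, 4: 2, 5: 5}   # a 2-cycle, a 3-cycle, a fixed point
sigma_inv = {v: k for k, v in sigma.items()}

def sigma_pow(i, k):
    for _ in range(k):
        i = sigma[i]
    return i

def sep(k):
    """Sep^k(X) = {i : sigma^k(i) != i}."""
    return {i for i in range(n) if sigma_pow(i, k) != i}

def sigma_tilde_pow(f, k):
    # k-fold application of f -> f o sigma^{-1}
    for _ in range(k):
        f = [f[sigma_inv[i]] for i in range(n)]
    return f

def skew_mul(P, Q):
    """Multiply sum_a P[a] x^a by sum_b Q[b] x^b using
    (f x^a)(g x^b) = f sigma_tilde^a(g) x^{a+b}  (Delta = 0)."""
    R = [[0.0] * n for _ in range(len(P) + len(Q) - 1)]
    for a, f in enumerate(P):
        for b, g in enumerate(Q):
            tg = sigma_tilde_pow(g, a)
            for i in range(n):
                R[a + b][i] += f[i] * tg[i]
    return R

m = 4
F = [[0.0 if i in sep(k) else random.uniform(-1, 1) for i in range(n)]
     for k in range(m + 1)]                      # f_k = 0 on Sep^k(X)
G = [[random.uniform(-1, 1) for _ in range(n)]]  # a degree-zero element g
L, Rt = skew_mul(F, G), skew_mul(G, F)
assert all(abs(a - b) < 1e-12 for ra, rb in zip(L, Rt) for a, b in zip(ra, rb))
```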

19.4.2 Centralizer for the Case \(\varDelta \ne 0\)

Now, suppose \(\tilde{\sigma }\ne id\) is of order \(j\in \mathbb {Z}_{>0},\) that is \(\tilde{\sigma }^j=id\) but \(\tilde{\sigma }^k\ne id\) for all \(k<j\). In the next Theorem, we state a necessary condition for an element in the Ore extension to belong to the centralizer.

Theorem 19.4

If an element \(\sum \nolimits _{k=0}^mf_kx^k\in \mathcal {A}[x,\tilde{\sigma },\varDelta ]\) of degree m belongs to the centralizer of \(\mathcal {A}\), then \(f_m=0 \text { on } Sep^m(X).\)

Proof

As already stated an element \(\sum \nolimits _{k=0}^mf_kx^k\in \mathcal {A}[x,\tilde{\sigma },\varDelta ]\) belongs to the centralizer of \(\mathcal {A}\) if and only if for every \(g\in \mathcal {A}\)

$$gf_k=\sum _{j=k}^mf_j\pi _k^j(g)$$

for \(k=0,1,\ldots ,m.\) Looking at the leading coefficient we have

$$\begin{aligned} gf_m=f_m\pi _m^m(g)=f_m\tilde{\sigma }^m(g) \end{aligned}$$
(19.6)

Since \(\mathcal {A}\) is commutative, Eq. (19.6) holds automatically on \(Per^m(X),\) while on \(Sep^m(X)\) it holds if and only if \(f_m=0.\)   \(\square \)

The above condition is not sufficient to describe all the elements that belong to the centralizer of \(\mathcal {A}\). In the next example we show that conditions satisfied by all elements in the centralizer of \(\mathcal {A}\) are actually quite complicated even for the case when \(n=2.\)

Example 19.4.1

Let \(n=2\) and let \(\sigma :[n]\rightarrow [n]\) be a bijection on [n]. We already know that if \(\sigma =id,\) then \(\varDelta =0\) so we will consider the case \(\sigma \ne id,\) that is \(\tilde{\sigma }(e_1)=e_2\) and \(\tilde{\sigma }(e_2)=e_1.\) In this case, \(\varDelta \) has a standard matrix given by

$$[\varDelta ]=\begin{bmatrix} s&{}-s\\ -t&{}t \end{bmatrix}$$

for some \(s,t\in \mathbb {R}\) with \(s,t\ne 0.\)

A direct calculation shows that an element \(fx\in \mathcal {A}[x,\tilde{\sigma },\varDelta ]\) belongs to the centralizer of \(\mathcal {A}\) if and only if \(f=0.\) So we consider a monomial of degree 2.

Let \(f=\begin{pmatrix} f_1\\ f_2\end{pmatrix} x^2\in \mathcal {A}[x,\tilde{\sigma },\varDelta ]\) be an element in the centralizer of \(\mathcal {A}.\) Then, if \(g=\begin{pmatrix}g_1\\ g_2\end{pmatrix}\in \mathcal {A},\) we have,

$$gf=\begin{pmatrix} g_1\\ g_2\end{pmatrix} \begin{pmatrix} f_1\\ f_2\end{pmatrix} x^2=\begin{pmatrix}g_1f_1\\ g_2f_2\end{pmatrix}x^2.$$

On the other hand, using the fact that for every \(g\in \mathcal {A},\)

$$x^2 g=\tilde{\sigma }^2(g)x^2+\Big [\varDelta (\tilde{\sigma }(g))+\tilde{\sigma }\big (\varDelta (g)\big )\Big ]x+\varDelta ^2(g)$$

and since \(\tilde{\sigma }^2=id\) we have

$$fg=\begin{pmatrix} f_1\\ f_2\end{pmatrix}\begin{pmatrix} g_1\\ g_2\end{pmatrix}x^2+\begin{pmatrix} f_1\\ f_2\end{pmatrix}\Big [\varDelta (\tilde{\sigma }(g))+\tilde{\sigma }\big (\varDelta (g)\big )\Big ]x+\begin{pmatrix} f_1\\ f_2\end{pmatrix}\varDelta ^2(g).$$

Solving \(fg=gf\) and looking at the coefficient of \(x^2\), we get that \(f_1,f_2\) are free variables and hence the centralizer \(C(\mathcal {A})\) is non-trivial. In the more general case, we have the following.

As already seen, an element \(\sum \nolimits _{k=0}^mf_kx^k\in \mathcal {A}[x, \tilde{\sigma },\varDelta ]\) belongs to the centralizer of \(\mathcal {A}\) if and only if for every \(g\in \mathcal {A}\)

$$gf_k=\sum _{j=k}^mf_j\pi _k^j(g)$$

for \(k=0,1,\ldots ,m.\)

Looking at the constant term and using the fact that \(\pi _0^j(g)=\varDelta ^j(g),\) we have

$$\begin{aligned} gf_0= & {} \sum _{j=0}^mf_j\pi _0^j(g)\\= & {} f_0g+\sum _{j=1}^mf_j\varDelta ^j(g) \end{aligned}$$

from which we obtain that

$$\begin{aligned} \sum _{j=1}^mf_j\varDelta ^j(g)=0. \end{aligned}$$
(19.7)

Now, for any \(g\in \mathcal {A},\ g=\begin{pmatrix} g_1\\ g_2\end{pmatrix},\) we have

$$ \varDelta ^j(g)=\varDelta ^j \begin{pmatrix} g_1\\ g_2 \end{pmatrix}=\left[ \begin{matrix} (g_1-g_2)\sum _{k=0}^j\begin{pmatrix}j-1\\ k\end{pmatrix}s^{j-k}t^k\\ -(g_1-g_2)\sum _{k=0}^j\begin{pmatrix}j-1\\ k\end{pmatrix}s^{k}t^{j-k}\end{matrix}\right] . $$

Therefore, from Eq. (19.7), we have

$$ \left[ \begin{matrix} (g_1-g_2)\sum _{j=1}^m f_{j1}\sum _{k=0}^j \begin{pmatrix} j-1\\ k \end{pmatrix} s^{j-k}t^k\\ -(g_1-g_2)\sum _{j=1}^mf_{j2}\sum _{k=0}^j \begin{pmatrix} j-1\\ k \end{pmatrix} s^{k}t^{j-k}\end{matrix}\right] =\left[ \begin{matrix}0\\ 0 \end{matrix}\right] . $$

Since Eq. (19.7) must hold for every \(g\in \mathcal {A},\) we have

$$\begin{aligned} \left[ \begin{matrix} \sum _{j=1}^m f_{j1}\sum _{k=0}^j\begin{pmatrix}j-1\\ k \end{pmatrix} s^{j-k}t^k\\ \sum _{j=1}^mf_{j2}\sum _{k=0}^j\begin{pmatrix}j-1\\ k\end{pmatrix}s^{k}t^{j-k}\end{matrix}\right] =\left[ \begin{matrix} 0\\ 0\end{matrix}\right] . \end{aligned}$$
(19.8)

Since \(s,t\ne 0,\) Eq. (19.8) gives

$$ \left[ \begin{matrix} \sum _{j=1}^m f_{j1}\sum _{k=0}^{j-1}\begin{pmatrix} j-1\\ k \end{pmatrix} s^{j-k-1}t^k\\ \sum _{j=1}^mf_{j2}\sum _{k=0}^{j-1}\begin{pmatrix} j-1\\ k \end{pmatrix} s^{k}t^{j-k-1}\end{matrix}\right] =\left[ \begin{matrix} 0\\ 0 \end{matrix}\right] . $$

Observe that

$$\sum \limits _{k=0}^{j-1} \begin{pmatrix} j-1\\ k \end{pmatrix} s^{j-k-1}t^k=\sum \limits _{k=0}^{j-1} \begin{pmatrix} j-1\\ k \end{pmatrix} s^{k}t^{j-k-1}.$$

Therefore we get a matrix equation of the form

$$\begin{aligned} \left[ \begin{matrix} f_{11}&{}f_{21}&{}f_{31}&{}\cdots &{}f_{m1}\\ f_{12}&{}f_{22}&{}f_{32}&{}\cdots &{}f_{m2}\end{matrix}\right] \left[ \begin{matrix} 1\\ s+t\\ s^2+2st +t^2\\ \vdots \\ \sum _{k=0}^{m-1}\begin{pmatrix} m-1\\ k \end{pmatrix} s^{m-k-1}t^k\end{matrix}\right] =\left[ \begin{matrix} 0\\ 0 \end{matrix}\right] \end{aligned}$$
(19.9)

which always has nontrivial solutions if \(m\geqslant 2.\)
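For instance, with \(m=2\) one can check Eq. (19.7) directly in code. In the sketch below (hypothetical values of s and t and an arbitrary choice of \(f_2\), all our own), \(f_1\) is taken in the nullspace of Eq. (19.9), i.e. \(f_1=-(s+t)f_2\) componentwise:

```python
import random

s, t = 1.5, -0.5                      # hypothetical parameters, s, t != 0
f2 = [2.0, -3.0]                      # f_2 chosen freely
f1 = [-(s + t) * c for c in f2]       # nullspace of (19.9): f_1 + (s+t) f_2 = 0

def delta(g):
    # Delta with standard matrix [[s, -s], [-t, t]]
    return [(g[0] - g[1]) * s, (g[1] - g[0]) * t]

for _ in range(20):
    g = [random.uniform(-1, 1), random.uniform(-1, 1)]
    d1, d2 = delta(g), delta(delta(g))
    # constant-term condition (19.7): f_1 Delta(g) + f_2 Delta^2(g) = 0 pointwise
    total = [f1[i] * d1[i] + f2[i] * d2[i] for i in range(2)]
    assert all(abs(c) < 1e-9 for c in total)
```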

19.4.3 Center of the Ore Extension Algebra

In this section, we give a description of the center of our Ore extension algebra for the case \(\varDelta =0.\)

Theorem 19.5

The center of the Ore extension algebra \(\mathcal {A}[x,\tilde{\sigma },0]\) is given by

$$Z\big (\mathcal {A}[x,\tilde{\sigma },0]\big )=\left\{ \sum _{k=0}^mf_kx^k:\ f_k=0\ \text {on}\ Sep^k(X)\ \text {and}\ \tilde{\sigma }(f_k)=f_k\right\} .$$

Proof

Let \(f=\sum \nolimits _{k=0}^mf_kx^k\) be an element in \(Z\big (\mathcal {A}[x,\tilde{\sigma },0]\big ).\) Then \(f\in C(\mathcal {A}),\) that is, \(f_k(x)=0\) for every \(x\in Sep^k(X).\) Since the Ore extension \(\mathcal {A}[x,\tilde{\sigma },0]\) is generated as an algebra by \(\mathcal {A}\) and x, it is enough to derive conditions under which \(xf=fx.\) Now

$$fx=\left( \sum _{k=0}^mf_kx^k\right) x=\sum _{k=0}^mf_kx^{k+1}.$$

On the other hand,

$$\begin{aligned} xf&=x\sum _{k=0}^mf_kx^k\\&=\sum _{k=0}^mxf_k x^k\\&=\sum _{k=0}^m\tilde{\sigma }(f_k)x^{k+1}. \end{aligned}$$

It follows that \(xf=fx\) if and only if \(\tilde{\sigma }(f_k)=f_k\) for all \(k=0,1,\ldots ,m.\) Therefore,

$$Z\big (\mathcal {A}[x,\tilde{\sigma },0]\big )=\left\{ \sum _{k=0}^mf_kx^k:\ f_k=0 \text { on }Sep^k(X) \text { and }\tilde{\sigma }(f_k)=f_k \right\} .$$

   \(\square \)
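A numerical sanity check of this description (our own illustration; the permutation and coefficients are arbitrary choices): coefficients vanishing on \(Sep^k(X)\) and fixed by \(\tilde{\sigma }\) commute both with x and with every \(g\in \mathcal {A}\):

```python
import random

n = 4
sigma = {0: 1, 1: 0, 2: 2, 3: 3}     # a transposition and two fixed points
sigma_inv = {v: k for k, v in sigma.items()}

def sigma_tilde(f):
    # (sigma_tilde f)(i) = f(sigma^{-1}(i))
    return [f[sigma_inv[i]] for i in range(n)]

def skew_mul(P, Q):
    # (f x^a)(g x^b) = f sigma_tilde^a(g) x^{a+b}  (Delta = 0)
    R = [[0.0] * n for _ in range(len(P) + len(Q) - 1)]
    for a, f in enumerate(P):
        for b, g in enumerate(Q):
            tg = g
            for _ in range(a):
                tg = sigma_tilde(tg)
            for i in range(n):
                R[a + b][i] += f[i] * tg[i]
    return R

# f_0: sigma_tilde-invariant (constant on the orbit {0,1}); Sep^0 is empty
# f_1: zero on Sep^1 = {0,1}, and sigma_tilde-invariant (2, 3 are fixed points)
c = 1.0
F = [[c, c, -2.0, 0.5],
     [0.0, 0.0, random.uniform(-1, 1), random.uniform(-1, 1)]]
X = [[0.0] * n, [1.0] * n]            # the element x
G = [[random.uniform(-1, 1) for _ in range(n)]]

for A in (X, G):
    L, Rt = skew_mul(F, A), skew_mul(A, F)
    assert all(abs(p - q) < 1e-12 for rp, rq in zip(L, Rt) for p, q in zip(rp, rq))
```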

19.5 Infinite Dimensional Case

Let J be a countable subset of \(\mathbb {R}\) and let \(\mathcal {A}\) be the set of functions \(f:J\rightarrow \mathbb {R}\) such that \(f(i)=0\) for all except finitely many \(i\in J.\) Then \(\mathcal {A}\) is a commutative non-unital algebra with respect to the usual pointwise operations of addition, scalar multiplication and multiplication. For \(i\in J,\) let \(e_i\in \mathcal {A}\) denote the characteristic function of \(\{i\},\) that is

$$e_i(j)=\chi _{\{i\}}(j)={\left\{ \begin{array}{ll}1\ \text { if }i=j\\ 0\ \text { if }i\ne j.\end{array}\right. }$$

Then every \(f\in \mathcal {A}\) can be written in the form

$$\begin{aligned} f=\sum _{i\in J}f_ie_i \end{aligned}$$
(19.10)

where \(f_i=0\) for all except finitely many \(i\in J.\)

Let \(\sigma :J\rightarrow J\) be a bijection and let \(\tilde{\sigma }:\mathcal {A}\rightarrow \mathcal {A}\) be the automorphism of \(\mathcal {A}\) induced by \(\sigma ,\) that is,

$$\tilde{\sigma }(f)=f\circ \sigma ^{-1}$$

for every \(f\in \mathcal {A}.\) We can still construct the non-unital Ore extension \(\mathcal {A}[x,\tilde{\sigma },\varDelta ]\) as follows

$$ \mathcal {A}[x,\tilde{\sigma },\varDelta ]:=\left\{ \sum _{k=0}^mf_kx^k \quad \text { where } \quad f_k\in \mathcal {A}\right\} $$

with addition and scalar multiplication given by the usual pointwise operations and multiplication determined by the relation

$$(fx)g=f\tilde{\sigma }(g)x+f\varDelta (g)$$

where \(\varDelta \) is a \(\tilde{\sigma }\)-derivation on \(\mathcal {A}.\)

In the following Theorem, we state necessary and sufficient conditions for \(\varDelta :\mathcal {A}\rightarrow \mathcal {A}\) to be a \(\tilde{\sigma }\)-derivation on \(\mathcal {A}.\)

Theorem 19.6

Let \(\sigma :J \rightarrow J\) be a bijection and let \(\tilde{\sigma }:\mathcal {A}\rightarrow \mathcal {A}\) be the automorphism induced by \(\sigma .\) A linear map \(\varDelta :\mathcal {A}\rightarrow \mathcal {A}\) is a \(\tilde{\sigma }\)-derivation on \(\mathcal {A},\) if and only if, for every \(i\in J\)

  1. \(\varDelta (e_i)=-\varDelta \Big (e_{\sigma (i)}\Big )\) and

  2. \(\varDelta (e_i)(k)=0\) if \(k\not \in \big \{i,\sigma (i)\big \}.\)

Proof

Suppose \(\varDelta \) is a \(\tilde{\sigma }\)-derivation on \(\mathcal {A}\) and let \(\sigma (i)=j,\) then \(\tilde{\sigma }(e_i)=e_j.\) If \(i\ne j,\) then,

$$\varDelta (e_ie_j)=\varDelta (0)=0.$$

On the other hand,

$$\begin{aligned} \varDelta (e_ie_j)&=\tilde{\sigma }(e_i)\varDelta (e_j)+\varDelta (e_i)e_j\\&=e_j\big (\varDelta (e_j)+\varDelta (e_i)\big ). \end{aligned}$$

That is, for every \(k\in J,\)

$$\begin{aligned} \varDelta (e_ie_j)(k)&=\Big [e_j\big (\varDelta (e_i)+\varDelta (e_j)\big )\Big ](k)\\&={\left\{ \begin{array}{ll} \big (\varDelta (e_i)+\varDelta (e_j)\big )(j) &{} \text { if }k= j\\ 0 &{} \text { if } k\ne j. \end{array}\right. } \end{aligned}$$

Therefore, since \(\varDelta (e_ie_j)=0,\) we get \(\varDelta (e_i)=-\varDelta (e_j).\)

Also, for any \(k\in J\)

$$\begin{aligned} \varDelta (e_i^2)(k)&=\big (\tilde{\sigma }(e_i)\varDelta (e_i)+\varDelta (e_i)e_i\big )(k)\\&=\varDelta (e_i)(e_j+e_i)(k)\\&=0\quad \text { if } \quad k\not \in \{i,j\}. \end{aligned}$$

Conversely, suppose \(\varDelta :\mathcal {A}\rightarrow \mathcal {A}\) is an \(\mathbb {R}\)-linear map which satisfies conditions (1) and (2) for some bijection \(\sigma :J\rightarrow J\) of J. We prove that \(\varDelta \) is a \(\tilde{\sigma }\)-derivation on \(\mathcal {A}.\)

Suppose \(\sigma (i)=j\) and consider the characteristic functions \(e_i,e_j\) for \(i\ne j.\) Since \(\varDelta \) is a linear map,

$$\varDelta (e_ie_j)=\varDelta (0)=0.$$

On the other hand,

$$\begin{aligned} \tilde{\sigma }(e_i)\varDelta (e_j)+\varDelta (e_i)e_j&=e_j\varDelta (e_j)+\varDelta (e_i)e_j\\&=e_j\big (\varDelta (e_j)+\varDelta (e_i)\big )\\&=0. \end{aligned}$$

Therefore

$$\begin{aligned} \varDelta (e_ie_j)=\tilde{\sigma }(e_i)\varDelta (e_j)+\varDelta (e_i)e_j \end{aligned}$$
(19.11)

for \(i\ne j\) and the same holds for \(i=j\). The fact that Eq. (19.11) holds for every \(f,g\in \mathcal {A}\) follows from linearity of both \(\tilde{\sigma }\) and \(\varDelta \) and Eq. (19.10). Therefore \(\varDelta \) is a \(\tilde{\sigma }\)-derivation.   \(\square \)
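The conditions of Theorem 19.6 can be illustrated on finitely supported functions directly. In the sketch below (our own assumptions: \(J=\mathbb {Z}\), \(\sigma \) the transposition of 0 and 1, and parameters a, b chosen arbitrarily; note that condition (1) forces \(\varDelta (e_i)=0\) at fixed points), functions are stored as dictionaries with finite support:

```python
import random

# sigma: the transposition (0 1) on J = Z, all other points fixed
def sigma(i):
    return {0: 1, 1: 0}.get(i, i)

# Delta(e_0) = a*e_0 + b*e_1, Delta(e_1) = -Delta(e_0), Delta(e_i) = 0
# otherwise, extended R-linearly; this realizes conditions (1) and (2)
a, b = 2.0, -5.0

def delta(f):
    c = f.get(0, 0.0) - f.get(1, 0.0)
    out = {0: a * c, 1: b * c}
    return {k: v for k, v in out.items() if v != 0.0}

def sigma_tilde(f):
    # f o sigma^{-1}; sigma is an involution here, so sigma^{-1} = sigma
    return {sigma(k): v for k, v in f.items()}

def mul(f, g):
    return {k: f[k] * g[k] for k in f if k in g and f[k] * g[k] != 0.0}

def add(f, g):
    out = dict(f)
    for k, v in g.items():
        out[k] = out.get(k, 0.0) + v
    return {k: v for k, v in out.items() if v != 0.0}

for _ in range(50):
    f = {random.randint(-3, 3): random.uniform(-1, 1) for _ in range(4)}
    g = {random.randint(-3, 3): random.uniform(-1, 1) for _ in range(4)}
    lhs = delta(mul(f, g))
    rhs = add(mul(sigma_tilde(f), delta(g)), mul(delta(f), g))
    keys = set(lhs) | set(rhs)
    # the sigma_tilde-derivation identity on finitely supported functions
    assert all(abs(lhs.get(k, 0.0) - rhs.get(k, 0.0)) < 1e-9 for k in keys)
```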

Remark 19.5.1

It can be seen from Theorem 19.6 above that, if \(\sigma (i)=i\) for all \(i\in J,\) then \(\varDelta =0.\)

19.5.1 Centralizer for \(\mathcal {A}\)

In this section, we give a description of the centralizer \(C(\mathcal {A})\) of \(\mathcal {A}\) in the Ore extension \(\mathcal {A}[x,\tilde{\sigma }, \varDelta ]\) and the center of the Ore extension. Since \(\mathcal {A}\) is commutative, it follows from [14, Proposition 3.3] that if \(\tilde{\sigma }\) is of infinite order, then \(\mathcal {A}\) is maximal commutative, that is, \(C(\mathcal {A})=\mathcal {A}.\) Therefore we will focus on the case when \(\tilde{\sigma }\) is of finite order. We consider two cases.

19.5.1.1 The Case \(\varDelta =0\)

The following Theorem gives the description of the centralizer of \(\mathcal {A}\) in the skew-polynomial ring \(\mathcal {A}[x,\tilde{\sigma },0].\)

Theorem 19.7

The centralizer \(C(\mathcal {A}),\) of \(\mathcal {A}\) in the Ore extension \(\mathcal {A}[x,\tilde{\sigma }, 0]\) is given by

$$ C(\mathcal {A})=\left\{ \sum _{k=0}^mf_kx^k \quad \text { such that } \quad f_k=0 \quad \text { on }\quad Sep^k(X)\right\} $$

where \(Sep^k(X)\) is as defined in Definition 19.3.

Proof

Let \(f=\sum \nolimits _{k=0}^mf_kx^k\in \mathcal {A}[x,\tilde{\sigma },0]\) be an element of degree m which belongs to \(C(\mathcal {A}).\) Then \(fg=gf\) should hold for every \(g\in \mathcal {A}.\)

Now,

$$gf=g\sum \limits _{k=0}^mf_kx^k=\sum \limits _{k=0}^mgf_kx^k.$$

On the other hand,

$$fg=\left( \sum \limits _{k=0}^mf_kx^k\right) g=\sum \limits _{k=0}^mf_k\Big (x^kg\Big )=\sum \limits _{k=0}^mf_k\tilde{\sigma }^k(g)x^k.$$

Therefore, \(gf=fg\) if and only if

$$gf_k=f_k\tilde{\sigma }^k(g)$$

for all \(k=0,1,\ldots ,m.\) Since \(\mathcal {A}\) is commutative and \(\sigma ^k=id\) on \(Per^k(X),\) the above equation holds automatically on \(Per^k(X);\) on \(Sep^k(X)\) it forces \(f_k=0.\) Therefore,

$$ C(\mathcal {A})=\left\{ \sum _{k=0}^mf_kx^k \quad \text { such that } \quad f_k=0 \quad \text { on }\quad Sep^k(X)\right\} . $$

   \(\square \)
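The commutation criterion in the proof above can be tested numerically. The following Python sketch is illustrative only: the choice \(X=\{1,2,3,4\},\) the permutation \(\sigma =(1\,2\,3)(4),\) and the dict-based encoding of functions are our assumptions, not part of the text.

```python
# Sketch verifying Theorem 19.7 on an example: in A[x, sigma~, 0] an element
# sum f_k x^k commutes with every g in A exactly when each f_k vanishes on
# Sep^k(X). All concrete choices below are illustrative assumptions.

X = [1, 2, 3, 4]
sigma = {1: 2, 2: 3, 3: 1, 4: 4}          # the permutation (1 2 3)(4)
sigma_inv = {v: k for k, v in sigma.items()}

def twist(f, k):
    """sigma~^k(f) = f o sigma^{-k}, computed pointwise."""
    g = dict(f)
    for _ in range(k):
        g = {x: g[sigma_inv[x]] for x in X}
    return g

def poly_mul(p, q):
    """Multiply skew polynomials p, q given as lists of coefficient dicts."""
    out = [{x: 0 for x in X} for _ in range(len(p) + len(q) - 1)]
    for k, fk in enumerate(p):
        for l, gl in enumerate(q):
            tg = twist(gl, k)              # x^k g = sigma~^k(g) x^k
            for x in X:
                out[k + l][x] += fk[x] * tg[x]
    return out

def commutes_with_A(p):
    """Check p g = g p for the characteristic functions g = e_i."""
    for i in X:
        e = {x: float(x == i) for x in X}
        if poly_mul(p, [e]) != poly_mul([e], p):
            return False
    return True

# f = f_1 x with f_1 supported on the fixed point 4, i.e. on Per^1(X):
good = [{x: 0 for x in X}, {1: 0, 2: 0, 3: 0, 4: 1}]
# f_1 nonzero on Sep^1(X):
bad = [{x: 0 for x in X}, {1: 1, 2: 0, 3: 0, 4: 0}]
print(commutes_with_A(good), commutes_with_A(bad))  # prints: True False
```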

19.5.1.2 The Case \(\varDelta \ne 0\)

Now, suppose \(\tilde{\sigma }\ne id\) is of order \(j\in \mathbb {Z}_{>0},\) that is, \(\tilde{\sigma }^j=id\) but \(\tilde{\sigma }^k\ne id\) for all \(0<k<j\). In the next Theorem, whose proof is similar to that of Theorem 19.4, we state a necessary condition for an element of the Ore extension to belong to the centralizer.

Theorem 19.8

Let \(\tilde{\sigma }:\mathcal {A}\rightarrow \mathcal {A}\) be an automorphism of \(\mathcal {A}\). If an element \(\sum \nolimits _{k=0}^mf_kx^k\in \mathcal {A}[x,\tilde{\sigma },\varDelta ]\) of degree m belongs to the centralizer of \(\mathcal {A}\), then \(f_m=0\text { on }Sep^m(X).\)

19.5.2 Center of \(\mathcal {A}[x,\tilde{\sigma },\varDelta ]\) When \(\varDelta =0\)

In this section, we give a description of the center of our Ore extension algebra for the case \(\varDelta =0.\)

Theorem 19.9

The center of the Ore extension algebra \(\mathcal {A}[x,\tilde{\sigma },0]\) is given by

$$Z\big (\mathcal {A}[x,\tilde{\sigma },0]\big )=\left\{ \sum _{k=0}^mf_kx^k: f_k=0 \text { on }Sep^k(X) \text { and }\tilde{\sigma }(f_k)=f_k \right\}.$$

Proof

Observe that since \(\mathcal {A}\) is not unital, the proof of Theorem 19.5 does not carry over to Theorem 19.9, because the element \(x\not \in \mathcal {A}[x,\tilde{\sigma },0].\) We therefore argue as follows.

Denote \(\mathcal {A}[x,\tilde{\sigma },0]\) by R and let \(f=\sum \nolimits _{k=0}^mf_kx^k\in Z(R).\) Then \(f\in C(\mathcal {A}),\) that is, \(f_k=0\) on \(Sep^k(X).\) Now let \(g=\sum \nolimits _{l=0}^ng_lx^l\) be an arbitrary element of R. Then

$$\begin{aligned} fg&=\left( \sum _{k=0}^mf_kx^k\right) \left( \sum _{l=0}^ng_lx^l\right) \\&=\sum _{k,l}f_k\big (x^kg_l\big )x^l\\&=\sum _{k,l}f_k\tilde{\sigma }^k(g_l)x^{k+l}. \end{aligned}$$

In the same way, it can be shown that

$$gf=\left( \sum _{l=0}^ng_lx^l\right) \left( \sum _{k=0}^mf_kx^k\right) =\sum _{k,l}g_l\tilde{\sigma }^l(f_k)x^{k+l}.$$

It follows that \(fg=gf\) if and only if

$$\begin{aligned} f_k\tilde{\sigma }^k(g_l)=g_l\tilde{\sigma }^l(f_k). \end{aligned}$$
(19.12)

for all \(k=0,1,\ldots ,m\) and all \(l=0,1,\ldots ,n.\) Now, \(f_k=0\) on \(Sep^k(X),\) and on \(Per^k(X)\) we have \(\sigma ^k=id.\) Therefore, Eq. (19.12) holds if and only if

$$f_kg_l=g_l\tilde{\sigma }^l(f_k)$$

for all \(l=0,1,\ldots ,n.\) Since \(\mathcal {A}\) is commutative, we conclude that (19.12) holds if and only if \(\tilde{\sigma }(f_k)=f_k.\) Therefore

$$Z\big (\mathcal {A}\big [x,\tilde{\sigma },0\big ]\big )=\left\{ \sum _{k=0}^mf_kx^k: f_k=0 \text { on } Sep^k(X)\text { and }\tilde{\sigma }(f_k)=f_k \right\}.$$

   \(\square \)
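Similarly, the two conditions of Theorem 19.9 can be checked on an example. In the Python sketch below (all concrete choices are illustrative assumptions), \(f_3\) vanishes on \(Sep^3(X)\) trivially, since \(\sigma ^3=id,\) and is constant on the orbits of \(\sigma ,\) so \(\tilde{\sigma }(f_3)=f_3;\) the product \(fg\) then agrees with \(gf\) for an arbitrary \(g.\)

```python
# Sketch checking Theorem 19.9 on an example: f = f_3 x^3 is central in
# A[x, sigma~, 0] because f_3 vanishes on Sep^3(X) (empty here, as
# sigma^3 = id) and sigma~(f_3) = f_3. Concrete choices are illustrative.
import random

X = [1, 2, 3, 4]
sigma = {1: 2, 2: 3, 3: 1, 4: 4}          # the permutation (1 2 3)(4)
sigma_inv = {v: k for k, v in sigma.items()}

def twist(f, k):
    """sigma~^k(f) = f o sigma^{-k}, computed pointwise."""
    g = dict(f)
    for _ in range(k):
        g = {x: g[sigma_inv[x]] for x in X}
    return g

def poly_mul(p, q):
    """Multiply skew polynomials given as lists of coefficient dicts."""
    out = [{x: 0 for x in X} for _ in range(len(p) + len(q) - 1)]
    for k, fk in enumerate(p):
        for l, gl in enumerate(q):
            tg = twist(gl, k)              # x^k g = sigma~^k(g) x^k
            for x in X:
                out[k + l][x] += fk[x] * tg[x]
    return out

zero = {x: 0 for x in X}
# f_3 is constant on the orbit {1,2,3} and on {4}, so sigma~(f_3) = f_3.
f3 = {1: 5, 2: 5, 3: 5, 4: 7}
f = [dict(zero), dict(zero), dict(zero), f3]

random.seed(0)
g = [{x: random.randint(-3, 3) for x in X} for _ in range(3)]
print(poly_mul(f, g) == poly_mul(g, f))  # True: f is central
```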

19.6 The Skew Power Series Ring

As before, we let \(X=[n]=\{1,2,\ldots ,n\}\) be a finite set and let \(\mathcal {A}=\{f:X\rightarrow \mathbb {R}\}\) denote the unital algebra of real-valued functions on X with the usual pointwise operations. Let \(\sigma :X\rightarrow X\) be a bijection, that is, a permutation of X, and let \(\tilde{\sigma }:\mathcal {A}\rightarrow \mathcal {A}\) be the automorphism induced by \(\sigma ,\) that is

$$\begin{aligned} \tilde{\sigma }(f)=f\circ \sigma ^{-1} \end{aligned}$$
(19.13)

for every \(f\in \mathcal {A}.\)

Consider the skew ring of formal power series \(\mathcal {A}[x;\tilde{\sigma }]\) over \(\mathcal {A},\) that is, the set

$$ \left\{ \sum _{n=0}^{\infty }f_nx^n\ \text { such that }f_n\in \mathcal {A}\right\} $$

with pointwise addition and multiplication determined by the relation

$$xf=\tilde{\sigma }(f)x.$$

That is, if \(f=\sum _{n=0}^{\infty }f_nx^n\) and \(g=\sum _{n=0}^{\infty }g_nx^n\) are elements of \(\mathcal {A}[x;\tilde{\sigma }],\) then

$$ f+g=\sum _{n=0}^{\infty }\big (f_n+g_n\big ) x^n $$

and

$$ fg=\left( \sum _{n=0}^{\infty }f_nx^n\right) \left( \sum _{n=0}^{\infty }g_nx^n\right) =\sum _{n=0}^{\infty }\left( \sum _{k=0}^nf_k\tilde{\sigma }^k\big (g_{n-k}\big )\right) x^n. $$
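Since multiplication in \(\mathcal {A}[x;\tilde{\sigma }]\) is determined degree by degree, it can be computed on truncations. A Python sketch of the formula above (the concrete \(X,\) \(\sigma ,\) and dict-based representation of functions are illustrative assumptions):

```python
# Sketch of the multiplication rule above for A[x; sigma~], truncated at
# degree N: (fg)_n = sum_{k=0}^{n} f_k * sigma~^k(g_{n-k}).
X = [1, 2, 3]
sigma = {1: 2, 2: 3, 3: 1}                # the 3-cycle (1 2 3)
sigma_inv = {v: k for k, v in sigma.items()}

def twist(f, k):
    """sigma~^k(f) = f o sigma^{-k}, computed pointwise."""
    g = dict(f)
    for _ in range(k):
        g = {x: g[sigma_inv[x]] for x in X}
    return g

def series_mul(f, g, N):
    """Coefficients 0..N of fg for series given as lists of dicts."""
    out = []
    for n in range(N + 1):
        c = {x: 0 for x in X}
        for k in range(n + 1):
            tg = twist(g[n - k], k)
            for x in X:
                c[x] += f[k][x] * tg[x]
        out.append(c)
    return out

# Two series, represented by their coefficients up to degree 2:
f = [{1: 1, 2: 0, 3: 0}, {1: 0, 2: 1, 3: 0}, {1: 0, 2: 0, 3: 1}]
g = [{1: 1, 2: 1, 3: 1}, {1: 2, 2: 0, 3: 0}, {1: 0, 2: 0, 3: 0}]
h = series_mul(f, g, 2)   # coefficients of fg up to degree 2
```

The twisting by \(\tilde{\sigma }^k\) is what distinguishes this product from the ordinary Cauchy product of power series.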

19.6.1 Centralizer of \(\mathcal {A}\) in the Skew Power Series Ring \(\mathcal {A}\big [x;\tilde{\sigma }\big ]\)

In the next Theorem, we give the description of the centralizer of \(\mathcal {A}\) in the skew power series ring \(\mathcal {A}\big [x;\tilde{\sigma }\big ].\)

Theorem 19.10

The centralizer \(C(\mathcal {A}),\) of \(\mathcal {A}\) in the skew power series ring \(\mathcal {A}[x;\tilde{\sigma }]\) is given by

$$ C(\mathcal {A})=\left\{ \sum _{n=0}^{\infty }f_nx^n \quad \text { such that }\quad f_n=0 \quad \text { on } \quad Sep^n(X)\right\} $$

where \(Sep^n(X)\) is as given in Definition 19.3.

Proof

Let \(f=\sum \nolimits _{n=0}^{\infty }f_nx^n\in \mathcal {A}[x;\tilde{\sigma }]\) be an element which belongs to \(C(\mathcal {A}).\) Then \(fg=gf\) should hold for every \(g\in \mathcal {A}.\) Now,

$$gf=g\sum \limits _{n=0}^{\infty }f_nx^n=\sum \limits _{n=0}^{\infty }gf_nx^n.$$

On the other hand,

$$fg=\left( \sum \limits _{n=0}^{\infty }f_nx^n\right) g=\sum \limits _{n=0}^{\infty }f_n \big (x^ng\big )=\sum \limits _{n=0}^{\infty }f_n\tilde{\sigma }^n(g)x^n.$$

Therefore, \(gf=fg\) if and only if

$$gf_n=f_n\tilde{\sigma }^n(g)$$

for all \(n\in \mathbb {N}.\) Since \(\mathcal {A}\) is commutative and \(\sigma ^n=id\) on \(Per^n(X),\) the above equation holds automatically on \(Per^n(X);\) on \(Sep^n(X)\) it forces \(f_n=0.\) Therefore,

$$ C(\mathcal {A})=\left\{ \sum _{n=0}^{\infty }f_nx^n \quad \text { such that }\quad f_n=0\quad \text { on } \quad Sep^n(X)\right\} . $$

   \(\square \)

19.6.2 The Center of the Skew Power Series Ring

The next Theorem gives the description of the center for the skew power series ring \(\mathcal {A}[x;\tilde{\sigma }].\)

Theorem 19.11

The center of the skew power series ring \(\mathcal {A}[x;\tilde{\sigma }]\) is given by

$$Z\big (\mathcal {A}[x;\tilde{\sigma }]\big )=\left\{ \sum _{n=0}^{\infty }f_nx^n: f_n=0 \text { on } Sep^n(X) \text { and }\tilde{\sigma }(f_n)=f_n \right\}.$$

Proof

Let \(f=\sum \nolimits _{n=0}^{\infty }f_nx^n\) be an element of \(Z\big (\mathcal {A}\big [x;\tilde{\sigma }\big ]\big ).\) Then \(f\in C(\mathcal {A}),\) that is, \(f_n=0\) on \(Sep^n(X).\) Since \(f\) already commutes with every element of \(\mathcal {A},\) it is enough to derive conditions under which \(xf=fx.\) Now

$$fx=\left( \sum _{n=0}^{\infty }f_nx^n\right) x=\sum _{n=0}^{\infty }f_nx^{n+1}.$$

On the other hand,

$$\begin{aligned} xf&=x\sum _{n=0}^{\infty }f_nx^n\\&=\sum _{n=0}^{\infty }xf_n x^n\\&=\sum _{n=0}^{\infty }\tilde{\sigma }(f_n)x^{n+1}. \end{aligned}$$

It follows that \(xf=fx\) if and only if \(\tilde{\sigma }(f_n)=f_n\) for all \(n.\) Therefore,

$$Z\big (\mathcal {A}[x;\tilde{\sigma }]\big )=\left\{ \sum _{n=0}^{\infty }f_nx^n: f_n=0 \text { on }Sep^n(X) \text { and }\tilde{\sigma }(f_n)=f_n \right\}.$$

   \(\square \)

19.7 The Skew-Laurent Ring \(\mathcal {A}[x,\ x^{-1};\tilde{\sigma }]\)

The fact that \(\tilde{\sigma }\) is an automorphism of \(\mathcal {A}\) naturally leads us to consider the skew-Laurent ring \(\mathcal {A}[x,\ x^{-1};\tilde{\sigma }].\)

Definition 19.4

Let R be a ring and \(\sigma \) an automorphism of R. By a skew-Laurent ring \(R[x,x^{-1};\sigma ]\) we mean a ring such that:

  1. \(R[x,\ x^{-1};\sigma ]\) is a ring containing R as a subring,

  2. x is an invertible element of \(R[x,\ x^{-1};\sigma ]\),

  3. \(R[x,\ x^{-1};\sigma ]\) is a free left \(R\)-module with basis \(\Big \{1,x,x^{-1},x^2,x^{-2},\ldots \Big \}\),

  4. \(xr=\sigma (r)x\) \(\Big (\text {and } x^{-1}r=\sigma ^{-1}(r)x^{-1}\Big )\) for all \(r\in R.\)
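Condition 4 determines all products of monomials: \(\big (fx^n\big )\big (gx^m\big )=f\,\tilde{\sigma }^n(g)\,x^{n+m}\) for all \(n,m\in \mathbb {Z}.\) A Python sketch of this rule (concrete choices are illustrative assumptions), including the check \(x\big (x^{-1}e_1\big )=e_1\):

```python
# Sketch of the defining relations of a skew-Laurent ring: a monomial
# product is (f x^n)(g x^m) = f * sigma~^n(g) x^{n+m}, where sigma~^n is
# built from sigma^{-1} when n < 0. Concrete choices are illustrative.
X = [1, 2, 3]
sigma = {1: 2, 2: 3, 3: 1}                # the 3-cycle (1 2 3)
sigma_inv = {v: k for k, v in sigma.items()}

def twist(f, n):
    """sigma~^n(f) = f o sigma^{-n}, for any integer n."""
    step = sigma_inv if n >= 0 else sigma
    g = dict(f)
    for _ in range(abs(n)):
        g = {x: g[step[x]] for x in X}
    return g

def mono_mul(f, n, g, m):
    """(f x^n)(g x^m) = f * sigma~^n(g) x^{n+m}."""
    tg = twist(g, n)
    return {x: f[x] * tg[x] for x in X}, n + m

one = {1: 1, 2: 1, 3: 1}
e1 = {1: 1, 2: 0, 3: 0}
# x^{-1} e1 = sigma~^{-1}(e1) x^{-1}; check that x * (x^{-1} e1) = e1:
h, d = mono_mul(one, 1, *mono_mul(one, -1, e1, 0))
print(h == e1, d == 0)  # prints: True True
```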

As before, we let \(X=[n]=\{1,2,\ldots ,n\}\) and let \(\mathcal {A}=\{f:X\rightarrow \mathbb {R}\}\) denote the unital algebra of real-valued functions on X with the usual pointwise operations. Let \(\sigma :X\rightarrow X\) be a bijection, that is, a permutation of X, and let \(\tilde{\sigma }:\mathcal {A}\rightarrow \mathcal {A}\) be the automorphism induced by \(\sigma ,\) as defined by Eq. (19.13).

Consider the skew-Laurent ring \(\mathcal {A}[x,\ x^{-1};\tilde{\sigma }],\) that is the set

$$ \left\{ \sum _{n\in \mathbb {Z}}f_nx^n\ :\ f_n\in \mathcal {A}\text { and }f_n=0\text { for all except finitely many }n \right\} $$

with pointwise addition and multiplication determined by the relations

$$xf=\tilde{\sigma }(f)x\qquad \text {and }\qquad x^{-1}f=\tilde{\sigma }^{-1}(f)x^{-1}.$$

In the next Theorem, we give the description of the centralizer of \(\mathcal {A}\) in the skew-Laurent ring \(\mathcal {A}[x,\ x^{-1};\tilde{\sigma }].\)

Theorem 19.12

The centralizer of \(\mathcal {A}\) in the skew-Laurent extension \(\mathcal {A}[x,\ x^{-1};\tilde{\sigma }]\) is given by

$$ C(\mathcal {A})=\left\{ \sum _{n\in \mathbb {Z}}f_nx^n\ :\ f_n=0 \quad \text { on } \quad Sep^n(X)\right\} $$

where \(Sep^n(X)\) is as given in Definition 19.3.

Proof

Let \(f=\sum \nolimits _{n\in \mathbb {Z}}f_nx^n\in \mathcal {A}[x,\ x^{-1};\tilde{\sigma }]\) be an element which belongs to \(C(\mathcal {A}).\) Then \(fg=gf\) should hold for every \(g\in \mathcal {A}.\) Now,

$$gf=g\sum \limits _{n\in \mathbb {Z}}f_nx^n=\sum \limits _{n\in \mathbb {Z}}gf_nx^n.$$

On the other hand,

$$fg=\left( \sum \limits _{n\in \mathbb {Z}}f_nx^n\right) g=\sum \limits _{n\in \mathbb {Z}}f_n\big (x^ng\big )=\sum \limits _{n\in \mathbb {Z}}f_n\tilde{\sigma }^n(g)x^n.$$

Therefore, \(gf=fg\) if and only if

$$gf_n=f_n\tilde{\sigma }^n(g)$$

for all \(n\in \mathbb {Z}.\) Since \(\mathcal {A}\) is commutative and \(\sigma ^n=id\) on \(Per^n(X),\) the above equation holds automatically on \(Per^n(X);\) on \(Sep^n(X)\) it forces \(f_n=0.\) Therefore,

$$ C(\mathcal {A})=\left\{ \sum _{n\in \mathbb {Z}}f_nx^n\ :\ f_n=0 \quad \text { on }\quad Sep^n(X)\right\} . $$

   \(\square \)
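The same criterion applies to negative powers, since \(Sep^{-n}(X)=Sep^n(X).\) A short Python sketch (with illustrative concrete choices) testing it for \(f=f_{-1}x^{-1}\):

```python
# Sketch checking Theorem 19.12 on a negative power: f = f_{-1} x^{-1}
# commutes with every g in A precisely when f_{-1} vanishes on
# Sep^{-1}(X) = Sep^1(X), the non-fixed points of sigma. Choices of X and
# sigma below are illustrative assumptions.
X = [1, 2, 3, 4]
sigma = {1: 2, 2: 3, 3: 1, 4: 4}          # the permutation (1 2 3)(4)

def twist_inv(f):
    """sigma~^{-1}(f) = f o sigma, computed pointwise."""
    return {x: f[sigma[x]] for x in X}

def commutes(f_m1):
    """Does f_{-1} x^{-1} commute with every e_i in A?"""
    for i in X:
        e = {x: float(x == i) for x in X}
        te = twist_inv(e)
        # (f x^{-1}) e = f * sigma~^{-1}(e) x^{-1};  e (f x^{-1}) = e f x^{-1}
        lhs = {x: f_m1[x] * te[x] for x in X}
        rhs = {x: e[x] * f_m1[x] for x in X}
        if lhs != rhs:
            return False
    return True

print(commutes({1: 0, 2: 0, 3: 0, 4: 2}))  # True: supported on the fixed point
print(commutes({1: 1, 2: 0, 3: 0, 4: 0}))  # False: nonzero on Sep^1(X)
```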