1 Introduction

In \(C^*\)-algebra theory, there is a large literature on the semicrossed product \(A*_{\alpha }\mathbb {Z}_+\), where \(\alpha\) is a \(*\)-endomorphism of the \(C^{*}\)-algebra A; see [5] for a survey of this topic.

One of the problems in operator theory is to determine to what extent a dynamical system can be recovered from its semicrossed product. The authors of [6] studied this problem. They proved that if two systems are outer conjugate, then the associated semicrossed products are isomorphic. Moreover, they studied the converse in many cases. It is convenient to point out that other authors have studied the isomorphism problem for skew polynomial rings; see [9] and [7] and the references therein.

In this work, we study necessary and sufficient conditions for two skew polynomial rings to be isomorphic. In a certain sense, we study some of the results presented in [6] at a purely algebraic level. Since there are many results at the \(C^*\)-algebra level concerning when two semicrossed products are isomorphic, we expect interesting applications at the algebraic level.

The paper is organized as follows: In Sect. 2, we study necessary and sufficient conditions for two skew polynomial rings to be isomorphic in the graded case. In Sect. 3, we pursue the same goals as in Sect. 2 in the case where the involved homomorphisms are injective and the algebras are \(*\)-algebras. In Sect. 4, we study necessary and sufficient conditions for two partial skew polynomial rings to be isomorphic.

2 Isomorphism problem in skew polynomial rings: graded case

Let \(\sigma\) be an endomorphism of a ring S. The skew polynomial ring of endomorphism type \(S[x;\sigma ]\) is the set of polynomials \(\sum _{i=0}^n s_ix^i\), \(s_i\in S\), with the usual addition and with multiplication determined by the rule \(xa=\sigma (a)x\). It is well known that \(S[x;\sigma ]\) is an associative ring with the same multiplicative identity as S. Throughout the rest of the paper, for any ring S with identity, U(S) denotes the set of invertible elements of S.
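The multiplication rule \(xa=\sigma (a)x\) is easy to experiment with on a computer. The following Python sketch implements it for the illustrative choice \(S=\mathbb {Z}[t]\) with \(\sigma\) the substitution \(t\mapsto t^2\); the dictionary representation of elements and all function names are our own conventions, not part of the theory.

```python
# Sketch of S[x; sigma] for S = Z[t], sigma(p)(t) = p(t^2) (illustrative choice).
# An element of S is a dict {t-exponent: integer coefficient}; an element of
# S[x; sigma] is a dict {x-exponent: element of S}.

def s_add(p, q):                      # addition in S
    r = dict(p)
    for i, a in q.items():
        r[i] = r.get(i, 0) + a
    return {i: a for i, a in r.items() if a}

def s_mul(p, q):                      # multiplication in S
    r = {}
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] = r.get(i + j, 0) + a * b
    return {i: a for i, a in r.items() if a}

def sigma(p, n=1):                    # sigma^n(p): substitute t -> t^(2^n)
    return {i * 2 ** n: a for i, a in p.items()}

def skew_mul(f, g):                   # (a x^i)(b x^j) = a sigma^i(b) x^(i+j)
    h = {}
    for i, a in f.items():
        for j, b in g.items():
            h[i + j] = s_add(h.get(i + j, {}), s_mul(a, sigma(b, i)))
    return {i: c for i, c in h.items() if c}

t, tx = {0: {1: 1}}, {1: {1: 1}}      # the elements t and t*x of S[x; sigma]
print(skew_mul(tx, t))                # (t x) t = t sigma(t) x = t^3 x
print(skew_mul(t, tx))                # t (t x) = t^2 x
```

The two products illustrate the noncommutativity: \((tx)t=t^3x\) while \(t(tx)=t^2x\).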

In this section, we study when two skew polynomial rings are isomorphic.

2.1 Graded case

A ring S is graded by the natural numbers if, for each \(i\in \mathbb {N}\), there exist additive subgroups \(S_i\) of S such that \(S=\oplus _{i\ge 0} S_i\) and \(S_iS_j\subseteq S_{i+j}\). We begin this subsection with the following well-known definition.

Definition 2.1

Let S, T be unital algebras graded by the natural numbers. An isomorphism of rings \(f:S\rightarrow T\) is said to be graded if \(f(S_i)\subseteq T_i\), for all \(i\ge 0\), where \(S=\oplus _{i\ge 0} S_i\) and \(T=\oplus _{i\ge 0} T_i\).

It is convenient to point out that skew polynomial rings are naturally graded by the natural numbers: with \(S[x;\sigma ]\) as before, for each \(i\in \mathbb {N}\) we consider the subgroups \(S[x;\sigma ]_i=Sx^i\).

The authors of [6] (pg. 5) used outer conjugacy to prove that two semicrossed products are isomorphic; one may check that the isomorphism defined there is graded. It is convenient to point out that the semicrossed products in [6] are the same as the skew polynomial rings that we work with in this paper. In the next result, we study necessary and sufficient conditions for the existence of a graded isomorphism between skew polynomial rings extending a given isomorphism.

Theorem 2.2

Let A, B be unital algebras, \(\alpha :A\rightarrow A\), \(\gamma :B\rightarrow B\) homomorphisms of algebras and \(\theta :A\rightarrow B\) an isomorphism. Then there exists a graded isomorphism \(\Psi :A[x;\alpha ]\rightarrow B[y;\gamma ]\) with \(\Psi |_{A}=\theta\) if and only if there exist \(s_i\in U(B)\), \(i\ge 0\), with \(s_{i+j}=s_i\gamma ^ i(s_j)\), for all \(i,j\ge 0\), such that \(s_i^ {-1}\theta \circ \alpha ^ i(a) s_i=\gamma ^ i \circ \theta (a)\), for all \(a\in A\) and \(i\ge 1\).

Proof

\(\Rightarrow\)

By assumption, for each \(i\ge 0\) we have that \(\Psi (x^ i)=s_iy^ i\), for some \(s_i\in B\). We claim that \(s_i\in U(B)\), for each \(i\ge 0\); for this we prove that \(s_i\) is left invertible and not a zero divisor. In fact, fix \(i\ge 0\) and suppose that there exists \(b\in B\) such that \(bs_i=0\). Since \(\theta\) is surjective, \(b=\theta (a)=\Psi (a)\), for some \(a\in A\), so \(\Psi (ax^ i)=bs_iy^ i=0\) and it follows that \(ax^ i=0\). Thus, \(a=0\) and we have that \(b=0\); hence \(s_i\) has no nonzero left annihilator. By the fact that \(\Psi\) is a graded isomorphism, we have that for each \(i\ge 0\) there exists \(c_i\in A\) such that \(\Psi (c_ix^ i)=y^ i\), that is, \(\theta (c_i)s_iy^ i=y^ i\). As a consequence, \(\theta (c_i)s_i=1\) and it follows that \(s_i\) is left invertible. Finally, if \(s_ib=0\), for some \(b\in B\), then \(b=\theta (c_i)s_ib=0\). Hence \(s_i\) is left invertible and not a zero divisor, and it follows from ([8], Exercise 1.4(b)) that \(s_i\) is invertible.

Now, for each \(a\in A\), we have that \(\Psi (x^ ia)=\Psi (x^ i)\Psi (a)=s_iy^ i\theta (a)=s_i\gamma ^ i(\theta (a))y^i\) and \(\Psi (\alpha ^ i(a)x^i)=\theta (\alpha ^ i(a))s_iy^ i\). So, \(s_i\gamma ^ i(\theta (a))=\theta (\alpha ^ i(a))s_i\), for all \(i\ge 0\). Moreover, \(s_{i+j}y^{i+j}=\Psi (x^ {i+j})=\Psi (x^ i)\Psi (x^ j)=s_iy^ is_jy^ j=s_i\gamma ^i(s_j)y^{i+j}\). Therefore, \(s_{i+j}=s_i\gamma ^i(s_j)\).

\(\Leftarrow\)

We define \(\Psi :A[x;\alpha ]\rightarrow B[y;\gamma ]\) by \(\Psi (a)=\theta (a)\) and \(\Psi (\sum a_ix^ i)=\sum \theta (a_i)s_iy^ i\). It is easy to see that \(\Psi\) is additive. We claim that \(\Psi\) is multiplicative. In fact, for each \(a_ix^i\) and \(a_jx^j\) in \(A[x;\alpha ]\) we have that

$$\begin{aligned} \Psi (a_ix^ ia_jx^ j)=\Psi (a_i\alpha ^ i(a_j)x^ {i+j})=\theta (a_i)\theta (\alpha ^ i(a_j))s_{i+j}y^ {i+j} \end{aligned}$$

and

$$\begin{aligned} \Psi (a_ix^ i)\Psi (a_jx^ j)= & {} \theta (a_i)s_iy^ i\theta (a_j)s_jy^j=\theta (a_i)s_i\gamma ^ i(\theta (a_j))\gamma ^i(s_j)y^ {i+j}\\= & {} \theta (a_i)\theta (\alpha ^ i(a_j))s_i\gamma ^ i(s_j)y^{i+j} =\theta (a_i)\theta (\alpha ^i(a_j))s_{i+j}y^{i+j}. \end{aligned}$$

So, \(\Psi (a_ix^ ia_jx^ j)=\Psi (a_ix^ i)\Psi (a_jx^ j)\) and we obtain that \(\Psi\) is multiplicative.

Finally, given \(by^ j\in B[y;\gamma ]\), write \(b=\theta (a)\), for some \(a\in A\); then \(\Psi (a\theta ^ {-1}(s_j^ {-1})x^j)=\theta (a)s_j^ {-1}s_jy^ j=by^ j\), so \(\Psi\) is surjective. One easily checks that \(\Psi\) is injective. Therefore, \(\Psi\) is an isomorphism. \(\square\)
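The cocycle and intertwining conditions of Theorem 2.2 can be verified numerically in small examples. Below is a minimal sketch for the toy case \(A=B=M_2(\mathbb {Q})\), \(\theta =\mathrm {id}\), with \(\alpha\) and \(\gamma\) the inner automorphisms given by conjugation by invertible matrices P and Q, and the choice \(s_i=P^iQ^{-i}\); the matrices P, Q and the test element a are illustrative assumptions.

```python
# Sanity check of the conditions in Theorem 2.2 for the toy case
# A = B = M_2(Q), theta = id, alpha = Ad(P), gamma = Ad(Q), s_i = P^i Q^{-i}.
# P, Q and the test element a are arbitrary illustrative choices.
from fractions import Fraction as F

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(X):
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

def mpow(X, n):
    R = [[F(1), F(0)], [F(0), F(1)]]
    for _ in range(n):
        R = mul(R, X)
    return R

P = [[F(1), F(1)], [F(0), F(1)]]
Q = [[F(2), F(0)], [F(1), F(1)]]
s = lambda i: mul(mpow(P, i), inv(mpow(Q, i)))                  # s_i = P^i Q^{-i}
alpha = lambda i, a: mul(mpow(P, i), mul(a, inv(mpow(P, i))))   # alpha^i
gamma = lambda i, a: mul(mpow(Q, i), mul(a, inv(mpow(Q, i))))   # gamma^i

a = [[F(3), F(1)], [F(2), F(5)]]
for i in range(4):
    for j in range(4):
        # cocycle condition: s_{i+j} = s_i * gamma^i(s_j)
        assert s(i + j) == mul(s(i), gamma(i, s(j)))
    # intertwining condition: s_i^{-1} alpha^i(a) s_i = gamma^i(a)
    assert mul(inv(s(i)), mul(alpha(i, a), s(i))) == gamma(i, a)
print("both conditions hold for this example")
```

Exact rational arithmetic (rather than floating point) is used so that the matrix equalities can be tested literally.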

3 Isomorphism problem in skew polynomial rings: \(*\)-algebras

For the next lemma, we assume that A, B are \(*\)-algebras, where \(*:A\rightarrow A\) and \(*:B\rightarrow B\) are involutions, \(\theta :A\rightarrow B\) is a \(*\)-isomorphism, i.e., \(\theta (a^*)=\theta (a)^*\), for all \(a\in A\), and \(\alpha :A\rightarrow A\) and \(\gamma :B\rightarrow B\) are surjective homomorphisms.

We consider an isomorphism \(\Psi : A[x;\alpha ]\rightarrow B[y;\gamma ]\) such that \(\Psi\) is not graded and \(\Psi |_{A}=\theta\). In this case, \(\Psi (x)=b_0+b_1y+fy^ 2\) and \(\Psi ^{-1}(y)=a_0+a_1x+px^ 2\), where \(f\in B[y;\gamma ]\) and \(p\in A[x;\alpha ]\).

The next lemma will be used in the main result of this section.

Lemma 3.1

If \(b_1\) is invertible, then there exists \(w\in U(B)\) such that \(b_1=wb_1^*b_1\) and \(\gamma (b)=w(\theta \circ \alpha \circ \theta ^{-1}(b))w^{-1}\), for all \(b\in B\).

Proof

Since \(b_1\) is invertible, \(b_1^*\) is invertible as well. Thus, \(Bb_1^*b_1=B\) and it follows that \(b_1=w b_1^*b_1\), for some \(w\in B\). Since \(w=b_1(b_1^*b_1)^{-1}\), w is invertible, and we obtain that \(w\theta (\alpha (a))w^{-1}b_1=w\theta (\alpha (a))b_1^*b_1=wb_1^*b_1\theta (\alpha (a))=b_1\theta (\alpha (a))=\gamma (\theta (a))b_1\). Since \(b_1\) is invertible, \(w\theta (\alpha (a))w^{-1}=\gamma (\theta (a))\), and taking \(b=\theta (a)\) we obtain \(w(\theta \circ \alpha \circ \theta ^{-1}(b))w^{-1}=\gamma (b)\), for all \(b\in B\).\(\square\)
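Note that the proof gives the element w of Lemma 3.1 explicitly, namely \(w=b_1(b_1^*b_1)^{-1}\). The following sketch checks the identity \(b_1=wb_1^*b_1\) in the illustrative \(*\)-algebra \(M_2(\mathbb {Q})\), taking the transpose as involution; the particular matrix \(b_1\) is an arbitrary invertible choice.

```python
# Check that w = b_1 (b_1^* b_1)^{-1} satisfies b_1 = w b_1^* b_1 in M_2(Q),
# taking the transpose as the involution *.  b1 is an illustrative choice.
from fractions import Fraction as F

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(X):
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

def star(X):                      # the involution: transpose
    return [[X[j][i] for j in range(2)] for i in range(2)]

det = lambda X: X[0][0] * X[1][1] - X[0][1] * X[1][0]

b1 = [[F(2), F(1)], [F(0), F(1)]]           # invertible: det = 2
w = mul(b1, inv(mul(star(b1), b1)))         # w = b_1 (b_1^* b_1)^{-1}
assert mul(w, mul(star(b1), b1)) == b1      # the identity b_1 = w b_1^* b_1
assert det(w) != 0                          # w is invertible
```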

From now on, we study the isomorphism problem when the involved homomorphisms are injective. We follow the ideas presented in Section 4.2 of [6], since the direct limit construction works at this level of generality.

During the rest of this section, let A and B be unital \(*\)-algebras, i.e., algebras with involution. We consider isomorphisms \(\theta :A\rightarrow B\) and \(\Psi :A[x;\alpha ]\rightarrow B[y;\gamma ]\) such that \(\Psi |_{A}=\theta\), where \(\alpha\) and \(\gamma\) are injective homomorphisms. Moreover, we assume that \(\Psi (x)=b_0+b_1y+fy^ 2\) and \(\Psi ^{-1}(y)=a_0+a_1x+px^ 2\), where \(f\in B[y;\gamma ]\) and \(p\in A[x;\alpha ]\).

We define the direct limit \((A_{\infty },\alpha _{\infty })\) by

$$\begin{aligned} \begin{array}{ccccccccc} A & \xrightarrow {\alpha } & A & \xrightarrow {\alpha } & A & \xrightarrow {\alpha } & \cdots & \longrightarrow & A_{\infty } \\ \alpha \downarrow & & \alpha \downarrow & & \alpha \downarrow & & & & \downarrow \alpha _{\infty } \\ A & \xrightarrow {\alpha } & A & \xrightarrow {\alpha } & A & \xrightarrow {\alpha } & \cdots & \longrightarrow & A_{\infty } \end{array} \end{aligned}$$

Then A is contained in \(A_{\infty }\), say via a map i. The limit map \(\alpha _{\infty }\) is an automorphism. Thus, we can consider the skew Laurent polynomial ring \(A_{\infty }\langle \tilde{x}; \alpha _{\infty } \rangle =\{\sum _{i=n}^{m}a_i\tilde{x}^i:a_i\in A_{\infty },\, n,m\in \mathbb {Z}\}\) with the usual addition and with multiplication determined by the rules \(\tilde{x}a=\alpha _{\infty }(a)\tilde{x}\) and \(\tilde{x}^{-1}a=\alpha _{\infty }^{-1}(a)\tilde{x}^{-1}\). We easily get that \(A[x;\alpha ]\) is isomorphic to the subalgebra generated by i(A) and \(\tilde{x}\).

Note that the skew polynomial ring \(A_{\infty }[\tilde{x};\alpha _{\infty }]\) is contained in \(A_{\infty }\langle \tilde{x};\alpha _{\infty }\rangle\). Let \(ad_{\tilde{x}}\) be defined by \(ad_{\tilde{x}}(f)=\tilde{x} f\tilde{x}^{-1}\), for \(f\in A_{\infty }[\tilde{x};\alpha _{\infty }]\). Restricted to \(A[x;\alpha ]\), the map \(ad_{\tilde{x}}\) is an extension of \(\alpha\) to an injective endomorphism of \(A[x;\alpha ]\).

Hence, we have the direct system

$$\begin{aligned} A[x;\alpha ] \xrightarrow {\; ad_{\tilde{x}^{-1}} \;} A[x;\alpha ] \xrightarrow {\; ad_{\tilde{x}^{-1}} \;} A[x;\alpha ] \longrightarrow \cdots \end{aligned}$$

whose direct limit is \(A_{\infty }[\tilde{x};\alpha _{\infty }]\). Let \(\tilde{i}\) be the injection of \(A[x;\alpha ]\) into \(A_{\infty }[\tilde{x};\alpha _{\infty }]\) extending i. Then one can observe that \(\tilde{i}(x)=\tilde{x}\).

Now, let \(i_k\) be the injection of the k-th copy of A into \(A_{\infty }\) and note that, for each \(a\in A_{\infty }\), \(\alpha _{\infty }(a)=\tilde{x}a\tilde{x}^{-1}\). Since \(i_k(a)=i_{k+1}(\alpha (a))=\alpha _{\infty }(i_{k+1}(a))=\tilde{x} i_{k+1}(a)\tilde{x}^{-1}\), we have \(ad_{\tilde{x}^{-1}}(i_k(a))=\tilde{x}^{-1} i_{k}(a) \tilde{x}=i_{k+1}(a)\). This argument shows that \(ad_{\tilde{x}^{-1}}\) maps \(i_k(A)\) onto \(i_{k+1}(A)\). Now, we consider the k-th copy of \(A[x;\alpha ]\), that is, the subalgebra generated by \(i_k(A)\) and \(\tilde{x}^i\), for \(i\ge 0\), inside \(A_{\infty }\langle \tilde{x};\alpha _{\infty }\rangle\). Thus, \(ad_{\tilde{x}^{-1}}\) maps the k-th copy of \(A[x;\alpha ]\) onto the \((k+1)\)-th copy of \(A[x;\alpha ]\).

By the same methods applied to \(\gamma :B\rightarrow B\), we obtain the isomorphism \(\gamma _{\infty }:B_{\infty }\rightarrow B_{\infty }\), a natural injection \(j:B\rightarrow B_{\infty }\), and the skew polynomial ring \(B_{\infty }[\tilde{y};\gamma _{\infty }]\) with the natural injection \(\tilde{j}:B[y;\gamma ]\rightarrow B_{\infty }[\tilde{y};\gamma _{\infty }]\), where \(\tilde{j}(y)=\tilde{y}\). Note that we have an extension of \(\gamma\) to an injective endomorphism of \(B[y;\gamma ]\), namely \(ad_{\tilde{y}}\), defined by \(ad_{\tilde{y}}(g)=\tilde{y}g\tilde{y}^{-1}\), for \(g\in B[y;\gamma ]\). Moreover, as before, \(ad_{\tilde{y}^{-1}}\) maps the k-th copy of \(B[y;\gamma ]\) onto the \((k+1)\)-th copy of \(B[y;\gamma ]\).

Now, we construct the following sequence

$$\begin{aligned}\begin{array}{ccccccccc} A[x;\alpha ] & \xrightarrow {ad_{\tilde{x}^{-1}}} & A[x;\alpha ] & \xrightarrow {ad_{\tilde{x}^{-1}}} & A[x;\alpha ] & \xrightarrow {ad_{\tilde{x}^{-1}}} & \cdots & \longrightarrow & A_{\infty }[\tilde{x};\alpha _{\infty }] \\ \Psi =\varphi _0 \downarrow & & \varphi _1 \downarrow & & \varphi _2 \downarrow & & & & \downarrow \varphi _{\infty } \\ B[y;\gamma ] & \xrightarrow {ad_{\tilde{y}^{-1}}} & B[y;\gamma ] & \xrightarrow {ad_{\tilde{y}^{-1}}} & B[y;\gamma ] & \xrightarrow {ad_{\tilde{y}^{-1}}} & \cdots & \longrightarrow & B_{\infty }[\tilde{y};\gamma _{\infty }] \end{array} \end{aligned}$$

with \(\varphi _n=ad_{\tilde{y}^{-1}}\circ \varphi _{n-1}\circ ad_{\tilde{x}}\), where \(ad_{\tilde{y}^{-1}}(g)=\tilde{y}^{-1} g \tilde{y}\) and \(ad_{\tilde{x}^{-1}}(f)=\tilde{x}^{-1} f \tilde{x}\), for \(f\in A[x;\alpha ]\) and \(g\in B[y;\gamma ]\). We claim that \(\varphi _n\) sends the n-th copy of \(A[x;\alpha ]\) onto the n-th copy of \(B[y;\gamma ]\). In fact, \(ad_{\tilde{x}}\) maps the n-th copy of \(A[x;\alpha ]\) onto the \((n-1)\)-th copy, which \(\varphi _{n-1}\) maps onto the corresponding copy of \(B[y;\gamma ]\), and \(ad_{\tilde{y}^{-1}}\) maps the latter onto the n-th copy.

Since the diagram clearly commutes, the limit map \(\varphi _{\infty }\) is an isomorphism. Moreover, one can see that \(\varphi _{\infty }\) restricts to an isomorphism of \(A_{\infty }\) onto \(B_{\infty }\) that extends \(\theta\), which we denote by \(\theta _{\infty }\). Finally, \(\varphi _{\infty }(\tilde{x})=j(b_0)+j(b_1)\tilde{y}+\tilde{j}(f)\tilde{y}^2\).

From now on, without loss of generality, we assume that \(\tilde{x}=x\) and \(\tilde{y}=y\).

Now, we state the main result of this section, which is an algebraic version of ([6], Theorem 4.4).

Theorem 3.2

Let \(\alpha\), \(\gamma\) be injective endomorphisms of A and B, respectively. Suppose that there exist isomorphisms \(\theta :A\rightarrow B\) and \(\Psi :A[x;\alpha ]\rightarrow B[y;\gamma ]\) such that \(\Psi |_{A}=\theta\). Then there exists \(w \in U(B)\) such that \(\gamma (b)=w(\theta \circ \alpha \circ \theta ^{-1}(b))w^{-1}\), for all \(b\in B\).

Proof

By the above construction, we have the automorphic systems \((A_{\infty }, \alpha _{\infty })\) and \((B_{\infty }, \gamma _{\infty })\) and an isomorphism \(\Psi _{\infty }:A_{\infty }[x;\alpha _{\infty }]\rightarrow B_{\infty }[y;\gamma _{\infty }]\) with \(\Psi _{\infty }|_{A_{\infty }}=\theta _{\infty }\). Thus, by Lemma 3.1, \(\gamma _{\infty }(b)=w(\theta _{\infty }\circ \alpha _{\infty }\circ \theta ^{-1}_{\infty }(b))w^{-1}\), for all \(b\in B_{\infty }\), where w is such that \(j(b_1)=wj(b_1^{*})j(b_1)\). Thus, \(w=j(b_1)(j(b_1^{*})j(b_1))^{-1}\) is invertible in j(B); hence \(w\in j(B)\). So, restricting to the elements of B, we have the desired result. \(\square\)

4 Isomorphism problem: partial skew polynomial rings

Now, we review the basic concepts of partial actions and the skew extensions by partial actions that will be necessary in this section.

We begin with the following definition that is a particular case of ([4], Definition 2.1).

Definition 4.1

A unital partial action of the additive group \(\mathbb {Z}\) on a ring S is a pair

$$\begin{aligned} \omega = \big (\{D_i\}_{i \in \mathbb {Z}}, \{\omega _i\}_{i \in \mathbb {Z}} \big ), \end{aligned}$$

where, for each \(i \in \mathbb {Z}\), \(D_i\) is a two-sided ideal of S generated by a central idempotent \(1_i\) and \(\omega _i:D_{-i} \rightarrow D_i\) is an isomorphism of rings, such that the following conditions hold, for all \(i,j \in \mathbb {Z}\):

(i) \(D_{0} = S\) and \(\omega _{0}\) is the identity map of S;

(ii) \(\omega _i(D_{-i} D_j)= D_i D_{i+j}\);

(iii) \(\omega _i \circ \omega _j (a)= \omega _{i+j}(a)\), for all \(a \in D_{-j} D_{-(j+i)}\).

Given a unital partial action \(\omega\) of \(\mathbb {Z}\) on a ring S, we have the partial skew Laurent polynomial ring \(S\langle x;\omega \rangle =\displaystyle \bigoplus _{i\in \mathbb {Z}} D_i x^i\), whose elements are

$$\begin{aligned} \displaystyle \sum _{j=s}^{n} a_jx^j,\text { with }a_j\in D_j, \end{aligned}$$

with the usual addition and multiplication defined by

$$\begin{aligned} (a_ix^i)(a_jx^j) = \omega _i(\omega ^{-1}_i(a_i)a_j)x^{i+j}, \end{aligned}$$

see [1] for more details.

It is shown in ([4], Theorem 2.4) that \(S\langle x;\omega \rangle\) is an associative ring whose identity is \(1_Sx^{0}\). Note that we have the injective morphism \(\phi : S\rightarrow S\langle x;\omega \rangle\) defined by \(r\mapsto rx^0\), so we can consider \(S\langle x;\omega \rangle\) as an extension of S. Moreover, we consider the partial skew polynomial ring, which we denote by \(S[x;\omega ]\), as the subring of \(S\langle x;\omega \rangle\) whose elements are the polynomials \(\displaystyle \sum _{i=0}^m b_ix^i\), with the usual addition and the multiplication rule defined as before. For more details about this theory, we refer to [1, 2] and [3] and the references therein.
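Definition 4.1 and the multiplication above can be illustrated concretely by the standard partial action obtained by restricting the shift of \(\mathbb {Z}\) to finitely many coordinates. The sketch below encodes this for three coordinates; the tuple representation and all function names are our own illustrative choices, not notation from [4].

```python
# Toy partial action of Z on S = Q^3: restrict the shift to coordinates
# {0, 1, 2}.  D_n is generated by the idempotent one(n); omega_n is a
# truncated shift, and omega_n^{-1} = omega_{-n}.
import itertools

N = 3
def one(n):                     # central idempotent 1_n generating D_n
    return tuple(1 if 0 <= k - n < N else 0 for k in range(N))

def mul(a, b):                  # componentwise product in S
    return tuple(x * y for x, y in zip(a, b))

def omega(n, a):                # omega_n : D_{-n} -> D_n (shift by n, truncated)
    return mul(one(n), tuple(a[k - n] if 0 <= k - n < N else 0 for k in range(N)))

# axiom (iii): omega_i(omega_j(a)) = omega_{i+j}(a) on D_{-j} D_{-(i+j)}
for i, j in itertools.product(range(-N, N + 1), repeat=2):
    for a in [(1, 2, 3), (5, 0, 7)]:
        d = mul(a, mul(one(-j), one(-(i + j))))   # cut a down to the right ideal
        assert omega(i, omega(j, d)) == omega(i + j, d)

def skew_mul(i, a, j, b):       # (a x^i)(b x^j) = omega_i(omega_i^{-1}(a) b) x^{i+j}
    return (i + j, omega(i, mul(omega(-i, a), b)))

print(skew_mul(1, one(1), 1, one(1)))   # (1_1 x)(1_1 x) = 1_2 x^2
print(skew_mul(1, one(1), 2, one(2)))   # lands in D_3 = 0, so the product is 0
```

Note that here \(D_3=0\), so the product \((1_1x)(1_2x^2)\) vanishes, a phenomenon that does not occur for global actions.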

From now on, for any polynomial \(f\in S[x;\omega ]\), we denote by \(\delta (f)\) the degree of f.

In this section, we consider two unital rings A and B with partial actions \(\omega =\{\omega _i:D_{-i}\rightarrow D_i: i\in \mathbb {Z}\}\) and \(\omega '=\{\omega '_i:D'_{-i}\rightarrow D'_i: i\in \mathbb {Z}\}\), respectively. Let \(\theta :A\rightarrow B\) and \(\Psi :A[x;\omega ]\rightarrow B[y;\omega ']\) be isomorphisms such that \(\theta (D_i)=D'_i\) and \(\Psi |_{A}=\theta\). If A and B are \(*\)-algebras, we suppose that \(\theta\) and the partial actions commute with the involution \(*\), i.e., \((\omega _i(a))^{*}=\omega _i(a^{*})\) and \((\omega '_i(b))^{*}=\omega '_i(b^*)\), for all \(i\in \mathbb {Z}\), \(a\in D_{-i}\) and \(b\in D'_{-i}\), and \(\theta (a^*)=\theta (a)^*\), for all \(a\in A\). Moreover, we assume that \(D_i\) and \(D'_i\) are invariant under \(*\), for all \(i\in \mathbb {Z}\).

Remark 4.2

It is convenient to point out that the study of the isomorphism problem for partial actions is more delicate than the studies of the previous sections. We have the following situations:

Case 1: All the ideals \(D_i\), \(i\ne 0\), are zero. In this case, all the ideals \(D'_i\), \(i\ne 0\), are zero as well. So, \(\Psi =\theta\) and there is nothing to do in this situation.

Case 2: There exists \(j\ne 0\) such that \(D_j\ne 0\). In this section, we assume, without loss of generality, that \(D_1\ne 0\); otherwise we could take the smallest positive integer i such that \(D_i\ne 0\) and adapt the assumptions of the next three lemmas and the statements of the main results of this section accordingly.

According to the last remark, during the rest of this section we assume that \(D_1\ne 0\). We consider that the isomorphism \(\Psi\) is not necessarily graded, i.e., \(\Psi (1_1x)=b_0+b_1y+h\), where \(\delta (h)\ge 2\), and \(\Psi ^{-1}(1'_1y)=a_0+a_1x+l\), with \(\delta (l)\ge 2\).

Now, we have the following lemma.

Lemma 4.3

For all \(a\in A\), the following statements hold.

a) \(\theta (a)b_0=b_0\theta (\omega _1(a1_{-1}))\) and \(\omega '_1(\theta (a)1'_{-1})b_1=b_1\theta (\omega _1(a1_{-1}))\);

b) \(b_0b_0^*\in Z(B)\); moreover, \(b_0^*b_0\) and \(b_1^*b_1\) commute with \(\theta (\omega _1(A1_{-1}))\), and \(b_1b_1^*\) commutes with \(\omega '_1(B1'_{-1})\).

Proof

a) For all \(a\in A\), we have that \(\Psi (1_1xa)=\Psi (\omega _1(a1_{-1})x)=\theta (\omega _1(a1_{-1}))(b_0+b_1y+h)= \theta (\omega _1(a1_{-1}))b_0 +\theta (\omega _1(a 1_{-1}))b_1y+h'\) and \(\Psi (1_1x)\Psi (a)=(b_0+b_1y+h)\theta (a)=b_0\theta (a)+b_1\omega '_1(\theta (a)1'_{-1})y+h''\). Thus, we have that \(b_0\theta (a)=\theta (\omega _1(a1_{-1}))b_0\) and \(b_1\omega '_1(\theta (a)1'_{-1})=\theta (\omega _1(a1_{-1}))b_1\).

b) From part a) we have the formula \(b_0^*\theta (a)=\theta (\omega _1(a1_{-1}))b_0^*\). Thus, \(b_0b_0^*\theta (a)=b_0\theta (\omega _1(a1_{-1}))b_0^*=\theta (a)b_0b_0^*\). So, \(b_0b_0^*\in Z(B)\). Note that, for each \(a\in A\), \(\theta (\omega _1(a1_{-1}))b_0^*b_0=b_0^*\theta (a)b_0=b_0^*b_0\theta (\omega _1(a1_{-1}))\) and it follows that \(b_0^*b_0\) commutes with \(\theta (\omega _1(a1_{-1}))\). The other statements are proved similarly. \(\square\)

The next result is important for proving one of the main results of this section.

Lemma 4.4

The element \(b_1\) is invertible in \(D'_1\) in each of the following situations:

(a) \(D'_1\) is a simple ring;

(b) \(D'_j=0\), for all \(j\ge 2\);

(c) \(\sum _{j\ge 2} D'_j=\oplus _{j\ge 2}D'_j\) and \(D'_j\ne B\), for all \(j\ge 2\).

Proof

(a) By assumption, the center of \(D'_1\) is a division ring. By Lemma 4.3, we have that \(b_1D_1'=D_1'b_1\) and, since \(b_1\ne 0\), we obtain that \(b_1\) is invertible.

(b) By assumption, there exists \(p\in A[x;\omega ]\) such that \(\Psi (p)=1'_1y\). Suppose that \(p=a_0+a_1x\). Thus, we get \(1'_1y=\theta (a_0)+\theta (a_1)b_0+\theta (a_1)b_1y\). Hence, \(\theta (a_1)b_1 =1'_1\). Therefore, by Lemma 4.3, we get the result.

(c) We consider \(p=\sum _{i=0}^n a_ix^ i\) such that \(\Psi (p)=1'_1y\). Note that, by our assumptions, \(\theta (1_1)=1'_1\) and we get that \(1'_1y= \Psi (1_1p)=\Psi (1_1a_0+1_1a_1x+1_1a_2x^2+\cdots +1_1a_nx^n)=\theta (1_1a_0)+\theta (a_1)(b_0+b_1y+h)+\cdots =\theta (1_1a_0)+\theta (a_1)b_0+\theta (a_1)b_1y+\theta (a_1)h+\cdots\), where \(\delta (h)\ge 2\). Now, looking at degree one, we get \(\theta (a_1)b_1=1'_1\). So, by Lemma 4.3, we have that \(b_1\) is invertible.

\(\square\)

The proof of the next lemma uses ideas similar to those of Lemma 3.1, and we include it here for the reader's convenience.

Lemma 4.5

If \(b_1\) is invertible in \(D'_1\), then there exists \(w\in U(D'_1)\) such that \(\omega '_1(b1'_{-1})=w\theta (\omega _1(\theta ^{-1}(b)1_{-1}))w^{-1}\), for all \(b\in B\).

Proof

By assumption, we have that \(b_1^*b_1\) is invertible in \(D'_1\). Thus, \(D'_1b_1^*b_1=D'_1\) and there exists \(w\in U(D'_1)\) such that \(b_1=wb_1^*b_1\). Hence, for each \(a\in A\), we have, using Lemma 4.3, that \(w\theta (\omega _1(a1_{-1}))w^{-1}b_1=w\theta (\omega _1(a1_{-1}))b_1^*b_1=wb_1^*b_1\theta (\omega _1(a1_{-1})) =b_1\theta (\omega _1(a1_{-1}))=\omega '_1(\theta (a)1'_{-1})b_1\). Since \(b_1\) is invertible in \(D'_1\), we have

$$\begin{aligned} w\theta (\omega _1(a1_{-1}))w^{-1}=\omega '_1(\theta (a)1'_{-1}). \end{aligned}$$

Consequently, if \(\theta (a)=b\), then we have that

$$\begin{aligned} \omega '_1(b1'_{-1})=w\theta (\omega _1(\theta ^{-1}(b)1_{-1}))w^{-1}. \end{aligned}$$

\(\square\)

Now, we are ready to prove the first main result of this section.

Theorem 4.6

Let A, B be unital algebras, \(\omega\), \(\omega '\) unital partial actions of \(\mathbb {Z}\) on A and B, respectively, \(\theta :A\rightarrow B\) an isomorphism with \(\theta (D_i)= D'_i\), for all \(i\in \mathbb {Z}\), and \(\Psi :A[x;\omega ]\rightarrow B[y;\omega ']\) an isomorphism such that \(\Psi |_{A}=\theta\). Suppose that either \(D'_1\) is a simple ring, or \(D'_j=0\), for all \(j\ge 2\), or \(\sum _{j\ge 2}D'_j=\oplus _{j\ge 2}D'_j\) with \(D'_j\ne B\), for all \(j\ge 2\). Then there exists \(w\in U(D'_1)\) such that \(\omega '_1(b1'_{-1})=w\theta (\omega _1(\theta ^{-1}(b)1_{-1}))w^{-1}\), for all \(b\in B\).

Proof

By Lemma 4.4, we have that \(b_1\) is invertible. So, the result follows from Lemma 4.5. \(\square\)

It is convenient to point out that partial skew polynomial rings are graded by the natural numbers. Under some conditions, we have a converse of Theorem 4.6, as the next result shows.

Theorem 4.7

Let A, B be unital algebras and \(\omega =\big (\{D_i\}_{i \in \mathbb {Z}}, \{\omega _i\}_{i \in \mathbb {Z}} \big )\), \(\omega '=\big (\{D'_i\}_{i \in \mathbb {Z}}, \{\omega '_i\}_{i \in \mathbb {Z}} \big )\) unital partial actions of \((\mathbb {Z},+)\) on A and B, respectively, where \(1_i\) and \(1'_i\) are the idempotents that generate \(D_i\) and \(D'_i\), for all \(i\in \mathbb {Z}\). Suppose that there exists an isomorphism \(\theta :A\rightarrow B\) with \(\theta (D_i)= D'_i\), for all \(i\in \mathbb {Z}\). Then there exist \(s_i\in U(D'_i)\), \(i\ge 0\), with \(s_{i+j}=s_i\omega '_i(s_j1'_{-i})\) and \(\theta (\omega _i(a1_{-i}))s_i=s_i \omega '_i(\theta (a)1'_{-i})\), for all \(a\in A\) and \(i,j\ge 0\), if and only if \(D_{i+j}\subseteq D_i\), for all \(i,j\ge 0\), and there exists a graded isomorphism \(\Psi :A[x;\omega ]\rightarrow B[y;\omega ']\) such that \(\Psi |_{A}=\theta\).

Proof

\(\Rightarrow\)

By assumption, \(s_{i+j}=s_i \omega '_i(s_j1'_{-i})\), and we have that \(1'_is_{i+j}=s_{i+j}\). Thus, \(1'_i1'_{i+j}=1'_{i+j}\), which implies that \(1_i1_{i+j}=1_{i+j}\). Hence, \(D_{i+j}\subseteq D_i\), for all \(i,j\ge 0\). Next, we define \(\Psi : A[x;\omega ]\rightarrow B[y;\omega ']\) by \(\Psi (a_jx^j)=\theta (a_j)s_jy^j\). We clearly have that \(\Psi\) is additive. We claim that \(\Psi\) is multiplicative. In fact,

$$\begin{aligned} \Psi (a_ix^ia_jx^j)=\Psi (a_i \omega _i(a_j1_{-i})x^{i+j})=\theta (a_i)\theta ( \omega _i(a_j1_{-i}))s_{i+j}y^{i+j} \end{aligned}$$

and

$$\begin{aligned} \Psi (a_ix^i)\Psi (a_jx^j)= & {} \theta (a_i)s_iy^i\theta (a_j)s_jy^j=\theta (a_i)s_i\omega '_i(\theta (a_j)s_j1'_{-i})y^{i+j}\\= & {} \theta (a_i)\theta (\omega _i(a_j1_{-i}))s_{i} \omega '_{i}(s_j1'_{-i})y^{i+j}=\theta (a_i)\theta (\omega _i(a_j1_{-i}))s_{i+j}y^{i+j}. \end{aligned}$$

So, \(\Psi (a_ix^i)\Psi (a_jx^j)=\Psi (a_ix^ia_jx^j)\) and \(\Psi\) is multiplicative. We easily have that \(\Psi\) is injective. Now, given \(by^j\in B[y;\omega ']\), with \(b\in D'_j\), we have that \(by^j=\Psi (\theta ^{-1}(bs_j^{-1})1_jx^j)\), so \(\Psi\) is surjective.

\(\Leftarrow\)

The existence of the invertible elements \(s_i\in D'_i\), when \(D'_i\ne 0\), is proved as in Theorem 2.2. By assumption, we have \(1_ix^i1_jx^j=1_i1_{i+j}x^{i+j}=1_{i+j}x^{i+j}\). As \(\Psi\) is graded, \(\Psi (1_ix^i)=s_iy^i\), for some \(s_i\in D'_i\). Next, for each \(a\in A\), we have that \(\Psi (1_ix^ia)=\Psi (1_ix^i)\Psi (a)=s_iy^i\theta (a)=s_i \omega '_i(\theta (a)1'_{-i})y^i\) and \(\Psi (1_ix^ia)=\Psi (\omega _i(a1_{-i})x^i)=\theta (\omega _i(a1_{-i}))s_iy^i\). Hence, \(s_i \omega '_i(\theta (a)1'_{-i})=\theta (\omega _i(a1_{-i}))s_i\).

Finally, \(s_{i+j}y^{i+j}=\Psi (1_{i+j}x^{i+j})=\Psi (1_ix^i1_jx^j)=s_iy^is_jy^j=s_i \omega '_i(s_j1'_{-i})y^{i+j}\) and we have that \(s_{i+j}=s_i \omega '_i(s_j1'_{-i})\). \(\square\)

Question

In Lemma 4.4, we presented some conditions for \(b_1\) to be invertible. Is \(b_1\) invertible in \(D_1'\) in general?

Until now, we could not find a proof or a counter-example.