1 Introduction

1.1 Motivations

Let \(\mathcal{R}\) be a prime ring with center \(\mathcal{Z(R)}\) and \(d\) be a nonzero derivation of \(\mathcal{R}\). The well-known theorem of Posner [21] states that if \([d(x),x]\in \mathcal{Z(R)}\) for all \(x\in \mathcal{R}\), then \(\mathcal{R}\) must be commutative. Starting from this result, several authors studied the relationship between the structure of a prime ring \(\mathcal{R}\) and the behavior of an additive mapping \(f\) which satisfies the Engel-type condition \([f(x),x]_k=0\). The Engel condition is defined recursively by \([f(x),x]_1=[f(x),x]\) and \([f(x),x]_k=[[f(x),x]_{k-1},x]\) for all \(x\in \mathcal{R}\) and all \(k>1\). Chuang and Lee [12] showed that if \(d\) is a derivation of a semiprime ring \(\mathcal{R}\) and \(\mathcal{I}\) is a left ideal of \(\mathcal{R}\) such that \(d(x^n)\) is central for all \(x\in \mathcal{I}\), where \(n=n(x)>1\) and the integers \(n(x)\) are bounded, then \([\mathcal{I},\mathcal{R}]d(\mathcal{R})=0\). In case \(\mathcal{R}\) is prime, it follows that \(\mathcal{R}\) is commutative. Many researchers in this area analyzed in detail the case when a one-sided ideal of a prime ring satisfies some kind of Engel-type condition. In particular, we refer the reader to the results obtained by Lee [18]. He proved that if \(\mathcal{R}\) is a semiprime ring with a derivation \(d\), \(\mathcal{I}\) a left ideal of \(\mathcal{R}\) and \(k,n\) two fixed positive integers such that \([d(x^n),x^n]_k=0\) for all \(x\in \mathcal{I}\), then \([\mathcal{I},\mathcal{R}]d(\mathcal{R})=0\). As above, in case \(\mathcal{R}\) is prime, it follows that \(\mathcal{R}\) is commutative.
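For instance, for \(k=2\) the Engel bracket expands as

$$\begin{aligned}{}[f(x),x]_2=[[f(x),x],x]=f(x)x^2-2xf(x)x+x^2f(x). \end{aligned}$$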

In a recent paper [1], another related generalization was considered by the first author, Albas and Argac. More precisely, that paper describes what happens if the derivation \(d\) is replaced by an additive mapping \(\delta\) defined as follows: \(\delta(xy)=\delta(x)y+xg(y)\) for all \(x, y\in \mathcal{R}\) and for some derivation \(g\) of \(\mathcal{R}\). Such a mapping \(\delta\) is called a generalized derivation of \(\mathcal{R}\) with associated derivation \(g\). More precisely, it is proved in [1] that if \(\mathcal{R}\) is a prime ring with extended centroid \(\mathcal{C}\) and left Utumi quotient ring \(\mathcal{U}\), \(\mathcal{I}\) is a nonzero left ideal, \(\delta\) is a generalized derivation of \(\mathcal{R}\) and \(n, k \ge 1\) are fixed integers such that \([\delta(x^n), x^n]_k = 0\) for all \(x \in \mathcal{I}\), then \(\delta(x) = xc\) for some \(c \in \mathcal{U}\) and \(\mathcal{I}(c - \lambda) = 0\), for a suitable \(\lambda \in \mathcal{C}\).

Obviously, any derivation of \(\mathcal{R}\) and any mapping of \(\mathcal{R}\) of the form \(f(x)=ax+xb\), for some \(a, b\in \mathcal{R}\), are generalized derivations. The latter are usually called inner generalized derivations. We would like to point out that inner generalized derivations play one of the leading roles in the development of the theory of generalized derivations.
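Indeed, for \(f(x)=ax+xb\) one checks directly that

$$\begin{aligned} f(xy)=axy+xyb=(ax+xb)y+x(yb-by)=f(x)y+xg(y), \end{aligned}$$

where \(g(y)=yb-by\) is the inner derivation induced by \(b\), so \(f\) satisfies the defining identity of a generalized derivation.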

In the current presentation we will continue the study of Engel-type conditions on one-sided ideals of a prime ring, involving a generalized skew derivation \(F\) of \(\mathcal{R}\) instead of a generalized derivation. We now recall the definition of a generalized skew derivation of \(\mathcal{R}\). Let \(\mathcal{R}\) be an associative ring and \(\alpha\) be an automorphism of \(\mathcal{R}\). An additive mapping \(d: \mathcal{R}\longrightarrow \mathcal{R}\) is called a skew derivation of \(\mathcal{R}\) if

$$\begin{aligned} d(xy)=d(x)y+\alpha (x)d(y) \end{aligned}$$

for all \(x, y\in \mathcal{R}\); \(\alpha\) is called an associated automorphism of \(d\). An additive mapping \(F: \mathcal{R}\longrightarrow \mathcal{R}\) is said to be a generalized skew derivation of \(\mathcal{R}\) if there exists a skew derivation \(d\) of \(\mathcal{R}\) with associated automorphism \(\alpha\) such that

$$\begin{aligned} F(xy)=F(x)y+\alpha (x)d(y) \end{aligned}$$

for all \(x, y\in \mathcal{R}\); here \(d\) is said to be an associated skew derivation of \(F\) and \(\alpha\) is called an associated automorphism of \(F\). The notion of generalized skew derivation unifies those of skew derivation and generalized derivation, which are considered as classical additive mappings of non-associative algebras. The behaviour of these mappings has been investigated by many researchers from various points of view, see [37, 19].
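For instance, taking \(\alpha\) to be the identity map of \(\mathcal{R}\), the defining identity becomes

$$\begin{aligned} F(xy)=F(x)y+xd(y), \end{aligned}$$

so \(F\) is an ordinary generalized derivation with associated derivation \(d\); on the other hand, taking \(F=d\) one recovers exactly the skew derivations.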

Here we will prove the following:

Theorem 1

Let \(\mathcal{R}\) be a prime ring, \(\mathcal{Q}_r\) the right Martindale quotient ring of \(\mathcal{R}\), \(\mathcal{C}\) the extended centroid of \(\mathcal{R}\), \(F:\mathcal{R}\rightarrow \mathcal{R}\) a nonzero generalized skew derivation of \(\mathcal{R}\) and \(n,k\ge 1\) fixed integers. If \([F(x^n),x^n]_k=0\) for all \(x\in \mathcal{R}\), then there exists \(\lambda \in \mathcal{C}\) such that \(F(x)=\lambda x\), for all \(x\in \mathcal{R}\).

Theorem 2

Let \(\mathcal{R}\) be a prime ring of characteristic different from \(2\), \(\mathcal{Q}_r\) the right Martindale quotient ring of \(\mathcal{R}\), \(\mathcal{C}\) the extended centroid of \(\mathcal{R}\), \(\mathcal{I}\) a nonzero left ideal of \(\mathcal{R}\), \(F\) a nonzero generalized skew derivation of \(\mathcal{R}\) with associated automorphism \(\alpha\), and \(n,k \ge 1\) fixed integers. If \([F(r^n),r^n]_k=0\) for all \(r \in \mathcal{I}\), then there exists \(\lambda \in \mathcal{C}\) such that \(F(x)=\lambda x\), for all \(x\in \mathcal{I}\).

More precisely one of the following holds:

  1. (1)

    \(\alpha\) is an \(X\)-inner automorphism of \(\mathcal{R}\) and there exist \(b,c \in \mathcal{Q}_r\) and an invertible element \(q\) of \(\mathcal{Q}_r\) such that \(F(x)=bx-qxq^{-1}c\), for all \(x\in \mathcal{Q}_r\). Moreover, there exists \(\gamma \in \mathcal{C}\) such that \(\mathcal{I}(q^{-1}c-\gamma)=(0)\) and \(b-\gamma q \in \mathcal{C}\);

  2. (2)

    \(\alpha\) is an \(X\)-outer automorphism of \(\mathcal{R}\) and there exist \(c \in \mathcal{Q}_r\) and \(\lambda \in \mathcal{C}\) such that \(F(x)=\lambda x-\alpha(x)c\), for all \(x\in \mathcal{Q}_r\), with \(\alpha(\mathcal{I})c=0\).
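
Indeed, in both cases \(F\) acts on \(\mathcal{I}\) as left multiplication by a central element: in case (1), every \(x\in\mathcal{I}\) satisfies \(xq^{-1}c=\gamma x\), hence

$$\begin{aligned} F(x)=bx-qxq^{-1}c=bx-\gamma qx=(b-\gamma q)x, \end{aligned}$$

with \(\lambda=b-\gamma q\in\mathcal{C}\); in case (2), \(\alpha(x)c=0\) for every \(x\in\mathcal{I}\), so \(F(x)=\lambda x\) directly.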

We remark that the conclusions of both theorems follow directly from the results obtained in [18] in case \(F\) is an ordinary derivation of \(\mathcal{R}\), and from the results in [1] in case \(F\) is a generalized derivation.

In what follows, let \(\mathcal{Q}_r\) be the right Martindale quotient ring of \(\mathcal{R}\) and \(\mathcal{C}=\mathcal{Z}(\mathcal{Q}_r)\) the center of \(\mathcal{Q}_r\). The ring \(\mathcal{C}\) is usually called the extended centroid of \(\mathcal{R}\) and is a field when \(\mathcal{R}\) is a prime ring. It should be remarked that \(\mathcal{Q}_r\) is a centrally closed prime \(\mathcal{C}\)-algebra. We refer the reader to [2] for the definitions and the related properties of these objects.

It is well known that automorphisms, derivations and skew derivations of \(\mathcal{R}\) can be extended to \(\mathcal{Q}_r\). Chang [3] showed that a generalized skew derivation \(F\) of \(\mathcal{R}\) can also be extended to the right Martindale quotient ring \(\mathcal{Q}_r\) of \(\mathcal{R}\), that is, there is an extension \(F: \mathcal{Q}_r \longrightarrow \mathcal{Q}_r\) such that \(F(xy)=F(x)y+\alpha(x)d(y)\) for all \(x, y\in \mathcal{Q}_r\), where \(d\) is a skew derivation of \(\mathcal{R}\) and \(\alpha\) is an automorphism of \(\mathcal{R}\). Moreover, setting \(a=F(1)\in \mathcal{Q}_r\), one has \(F(x)=ax+d(x)\) for all \(x\in \mathcal{R}\).
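Indeed, writing \(F(x)=ax+d(x)\) with \(a=F(1)\), one verifies directly that this form is compatible with the defining identity:

$$\begin{aligned} F(xy)=axy+d(xy)=axy+d(x)y+\alpha(x)d(y)=\bigl(ax+d(x)\bigr)y+\alpha(x)d(y)=F(x)y+\alpha(x)d(y). \end{aligned}$$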

Let us also mention that a skew derivation \(d:\mathcal{R }\longrightarrow \mathcal{R }\) is called \(X\)-inner if there exist an element \(q\in \mathcal{Q }_r\) and an automorphism \(\alpha \) of \(\mathcal{R }\) such that \(d(x)=qx-\alpha (x)q\) for all \(x\in \mathcal{R }\). If a skew derivation \(d\) of \(\mathcal{R }\) is not \(X\)-inner, then it is called \(X\)-outer.

Similarly a generalized skew derivation \(F:\mathcal{R }\longrightarrow \mathcal{R }\) is called \(X\)-inner if there exist \(c,q\in \mathcal{Q }_r\) and an automorphism \(\alpha \) of \(\mathcal{R }\) such that \(F(x)=cx-\alpha (x)q\) for all \(x\in \mathcal{R }\). If a generalized skew derivation \(F\) of \(\mathcal{R }\) is not \(X\)-inner, then it is called \(X\)-outer.

1.2 Preliminary results

In all that follows we will make use of the following known results:

Fact 1.1

Let \(\mathcal{R }\) be a prime ring and \(\mathcal{I }\) a two-sided ideal of \(\mathcal{R }\). Then \(\mathcal{I }, \mathcal{R }\) and \(\mathcal{Q }_r\) satisfy the same generalized polynomial identities with coefficients in \(\mathcal{Q }_r\) [8].

Fact 1.2

Let \(\mathcal{R }\) be a prime ring and \(\mathcal{I }\) a two-sided ideal of \(\mathcal{R }\). Then \(\mathcal{I }, \mathcal{R }\) and \(\mathcal{Q }_r\) satisfy the same generalized polynomial identities with automorphisms [10, Theorem 1].

Fact 1.3

Let \(\mathcal{R}\) be a domain and let \(\alpha \in Aut(\mathcal{R})\) be \(X\)-outer (in the sense that it is not \(X\)-inner). If \(\Phi(x_i,\alpha(x_i))\) is a generalized polynomial identity for \(\mathcal{R}\), then \(\mathcal{R}\) also satisfies the non-trivial generalized polynomial identity \(\Phi(x_i,y_i)\), where \(x_i\) and \(y_i\) are distinct indeterminates [17].

Fact 1.4

Let \(\mathcal{R }\) be a prime ring and \(D\) be an \(X\)-outer skew derivation of \(\mathcal{R }\). If \(\Phi (x_i,D(x_i))\) is a generalized polynomial identity for \(\mathcal{R }\), then \(\mathcal{R }\) also satisfies the generalized polynomial identity \(\Phi (x_i,y_i)\), where \(x_i\) and \(y_i\) are distinct indeterminates [11, Theorem 1].

Fact 1.5

Let \(\mathcal{R }\) be a prime ring, \(D\) be an \(X\)-outer skew derivation of \(\mathcal{R }\) and \(\alpha \) be an \(X\)-outer automorphism of \(\mathcal{R }\). If \(\Phi (x_i,D(x_i),\alpha (x_i))\) is a generalized polynomial identity for \(\mathcal{R }\), then \(\mathcal{R }\) also satisfies the generalized polynomial identity \(\Phi (x_i,y_i,z_i)\), where \(x_i, y_i\) and \(z_i\) are distinct indeterminates [11, Theorem 1].

2 Hypercentralizing generalized skew derivations on prime rings

In this section we will consider the case when \(F:\mathcal{R }\rightarrow \mathcal{R }\) is a nonzero generalized skew derivation of a prime ring \(\mathcal{R }\), which satisfies an Engel-type condition on the whole \(\mathcal{R }\). Hence we study the special case \([F(x^n),x^n]_k=0\), for all \(x\in \mathcal{R }\) and prove Theorem 1.

In this sense, our first aim will be to prove the following:

Proposition 2.1

Let \(\mathcal{R}\) be a prime ring, \(n,k\ge 1\) fixed integers, \(b, c\) elements of \(\mathcal{Q}_r\) and \(\alpha\) an automorphism of \(\mathcal{R}\). If

$$\begin{aligned} \left[ br^n-\alpha (r^n)c,r^n\right] _k=0 \end{aligned}$$

for all \(r \in \mathcal{R}\), then either \(\alpha\) is the identity map on \(\mathcal{R}\) and \(b,c \in \mathcal{C}\), or there exists an invertible element \(q\in \mathcal{Q}_r\) such that \(\alpha(x)=qxq^{-1}\), for all \(x\in \mathcal{R}\), with \(q^{-1}c \in \mathcal{C}\) and \(b-c\in \mathcal{C}\). In other words, there exists \(\lambda \in \mathcal{C}\) such that \(bx-\alpha(x)c=\lambda x\), for all \(x\in \mathcal{R}\).

To prove Proposition 2.1 we also need the following lemma (a reduced version of Theorem 1 in [1]):

Lemma 2.1

Let \(\mathcal{R }\) be a non-commutative prime ring, \(a,b\in \mathcal{R }, n,k\ge 1\) fixed integers such that \([ar^n+r^nb,r^n]_k=0\), for any \(r\in \mathcal{R }\). Then \(a,b\in \mathcal{C }\).

We begin with the case when \(\alpha \in Aut(\mathcal{R})\) is an inner automorphism of \(\mathcal{R}\), that is, there exists an invertible element \(q\in \mathcal{Q}_r\) such that \(\alpha(x)=qxq^{-1}\), for all \(x\in \mathcal{R}\). For the sake of clarity, in all that follows we denote

$$\begin{aligned} \Phi (x)=\left[ bx^n-qx^nq^{-1}c,x^n\right] _k \end{aligned}$$
(2.1)

so that \(\Phi (r)=0\) for all \(r\in \mathcal{R }\).

Lemma 2.2

Let \(\mathcal{R }\) be a prime ring, \(n,k\ge 1\) fixed integers, \(b, c, q\) elements of \(\mathcal{Q }_r\). If \(q\) is invertible and

$$\begin{aligned} \left[ br^n-qr^nq^{-1}c,r^n\right] _k=0 \end{aligned}$$

for all \(r\in \mathcal{R }\), then \(q^{-1}c \in \mathcal{C }\) and \(b-c\in \mathcal{C }\).

Proof

By our assumption, \(\mathcal{R}\) satisfies (2.1), that is, \(\Phi(x)\) is a generalized polynomial identity for \(\mathcal{R}\). Since \(\mathcal{R}\) and \(\mathcal{Q}_r\) satisfy the same generalized polynomial identities with automorphisms (Fact 1.2), \(\mathcal{Q}_r\) also satisfies (2.1). Note that if either \(q^{-1}c \in \mathcal{C}\) or \(q\in \mathcal{C}\), then we are done by Lemma 2.1.

Hence we assume that both \(q^{-1}c \notin \mathcal{C}\) and \(q \notin \mathcal{C}\). In this case (2.1) is a non-trivial generalized polynomial identity for \(\mathcal{Q}_r\). By Martindale's theorem [20], \(\mathcal{Q}_r\) is a primitive ring having a nonzero socle with \(\mathcal{C}\) as the associated division ring. In light of Jacobson's theorem [15, p. 75], \(\mathcal{Q}_r\) is isomorphic to a dense ring of linear transformations on some vector space \(V\) over \(\mathcal{C}\). Of course, we may assume that \(\dim_{\mathcal{C}}V\ge 2\).

If \(\{q^{-1}cv,v\}\) is linearly \(\mathcal{C}\)-dependent for all \(v\in V\), then standard arguments show that \(q^{-1}c \in \mathcal{C}\). Hence, consider the case when there exists \(v\in V\) such that \(\{q^{-1}cv,v\}\) is linearly \(\mathcal{C}\)-independent. We will show that this situation leads to a contradiction.

First, suppose \(dim_CV=\infty \). Since \(\{q^{-1}cv,v\}\) are linearly \(C\)-independent, there exist \(v_1,\ldots ,v_{n-1},w_1,\ldots ,w_{n-1} \in V\) such that \(\{q^{-1}cv,v,v_1,\ldots ,v_{n-1},w_1,\ldots ,w_{n-1}\}\) are linearly \(C\)-independent.

By the density of \(\mathcal{Q}_r\) there exists \(r\in \mathcal{Q}_r\) such that

$$\begin{aligned} rv=v_1;\quad rv_i=v_{i+1}, i=1,\ldots ,n-2;\quad rv_{n-1}=0\\ r(q^{-1}cv)=w_1;\quad rw_i=w_{i+1}, i=1,\ldots ,n-2;\quad rw_{n-1}=q^{-2}cv \end{aligned}$$

so that

$$\begin{aligned} r^nv=0,\qquad r^n(q^{-1}cv)=q^{-2}cv \end{aligned}$$

and by (2.1) we get the following contradiction:

$$\begin{aligned} 0=\left[ br^n-qr^nq^{-1}c,r^n\right] _kv=-q^{-2}cv\ne 0. \end{aligned}$$

Suppose now that \(\dim_{\mathcal{C}}V=t\) is finite, so that \(\mathcal{Q}_r\cong M_t(\mathcal{C})\), the ring of all \(t\times t\) matrices over \(\mathcal{C}\). Denote \(p=q^{-1}c=\sum_{ij}e_{ij}p_{ij}\), \(q=\sum_{ij}e_{ij}q_{ij}\), where \(p_{ij}, q_{ij} \in \mathcal{C}\) and \(e_{ij}\) is the usual matrix unit, with \(1\) in the \((i,j)\)-entry and zero elsewhere.

Taking \(x=e_{ii}\) in (2.1) and right multiplying by \(e_{jj}\), for all \(i\ne j\), we obtain the condition

$$\begin{aligned} q_{ii}p_{ij}=0. \end{aligned}$$
(2.2)
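
To see this, note that \(x^n=e_{ii}\) and that, for any matrix \(A\) and any \(j\ne i\), an easy induction on \(k\) gives \([A,e_{ii}]_k e_{jj}=(-1)^k e_{ii}Ae_{jj}\); applying this with \(A=be_{ii}-qe_{ii}q^{-1}c\) yields

$$\begin{aligned} 0=\Phi(e_{ii})e_{jj}=(-1)^{k}e_{ii}\bigl(be_{ii}-qe_{ii}q^{-1}c\bigr)e_{jj}=(-1)^{k+1}q_{ii}p_{ij}e_{ij}, \end{aligned}$$

whence \(q_{ii}p_{ij}=0\).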

Assume that \(p_{ij}\ne 0\), thus \(q_{ii}=0\). Recall that for any \(\varphi \in Aut(\mathcal{Q }_r)\)

$$\begin{aligned} \left[ \varphi (b)x^n-\varphi (q)x^n\varphi (q^{-1}c),x^n\right] _k \end{aligned}$$
(2.3)

is also an identity for \(\mathcal{Q }_r\). Therefore, the matrices \(\varphi (b), \varphi (q)\) and \(\varphi (p)\) must satisfy the condition (2.2). In particular, let \(\varphi (x)=(1+e_{ji})x(1-e_{ji})=x+e_{ji}x-xe_{ji}-e_{ji}xe_{ji}\) and denote \(\varphi (q)=\sum q_{ij}^{\prime }e_{ij}, \varphi (p)=\sum p_{ij}^{\prime }e_{ij}\), for \(q_{ij}^{\prime }, p_{ij}^{\prime } \in \mathcal{C }\). Thus, by (2.2), we have \(q_{ii}^{\prime }p_{ij}^{\prime }=0\), that is \(q_{ij}p_{ij}=0\), which implies \(q_{ij}=0\).

If \(t=2\), it follows that the \(i\)-th row of \(q\) is zero, which is a contradiction, since \(q\) is an invertible matrix. Moreover, in case \(t\ge 3\), for all \(k\ne i,j\), let \(\chi(x)=(1+e_{ki})x(1-e_{ki})=x+e_{ki}x-xe_{ki}-e_{ki}xe_{ki}\) and denote \(\chi(q)=\sum q_{ij}^{\prime\prime}e_{ij}\), \(\chi(p)=\sum p_{ij}^{\prime\prime}e_{ij}\), for \(q_{ij}^{\prime\prime}, p_{ij}^{\prime\prime} \in \mathcal{C}\). Thus, by (2.2), we have \(q_{ii}^{\prime\prime}p_{ij}^{\prime\prime}=0\), that is \(q_{ik}p_{ij}=0\), which implies \(q_{ik}=0\). Therefore, in any case the \(i\)-th row of \(q\) is zero, again a contradiction.

Thus \(p_{ij}=0\), for all \(i\ne j\), that is, \(p\) is a diagonal matrix in \(M_t(\mathcal{C})\). Finally, since for any automorphism \(\varphi\) of \(M_t(\mathcal{C})\) the matrix \(\varphi(q)\) is invertible, the same argument as above shows that \(\varphi(p)\) must be a diagonal matrix. In this case an easy computation forces the contradiction \(p\in \mathcal{C}\).\(\square\)

Lemma 2.3

Let \(\mathcal{R}\) be a prime ring, \(n,k\ge 1\) fixed integers, \(b, c\) elements of \(\mathcal{Q}_r\), and let \(\alpha \in Aut(\mathcal{Q}_r)\) be an outer automorphism of \(\mathcal{Q}_r\). If

$$\begin{aligned} \left[ br^n-\alpha (r^n)c,r^n\right] _k=0 \end{aligned}$$

for all \(r\in \mathcal{R}\), then \(\alpha\) is the identity map on \(\mathcal{Q}_r\) and \(b,c \in \mathcal{C}\), unless \(c=0\) and \(b\in \mathcal{C}\).

Proof

Since \(\mathcal{R}\) and \(\mathcal{Q}_r\) satisfy the same generalized polynomial identities with automorphisms, it follows that \(\mathcal{Q}_r\) satisfies

$$\begin{aligned} \left[ bx^n-\alpha (x^n)c,x^n\right] _k \end{aligned}$$
(2.4)

Of course, in case \(\mathcal{R}\) is commutative we are done, so we suppose that \(\mathcal{R}\) is not commutative. We may also assume that either \(b\notin \mathcal{C}\) or \(c\ne 0\); then, by the main theorem in [9], \(\mathcal{Q}_r\) satisfies a non-trivial generalized polynomial identity (\(\mathcal{Q}_r\) is a GPI-ring). Therefore, by [20, Theorem 3], \(\mathcal{Q}_r\) is a primitive ring and it is a dense subring of the ring of linear transformations of a vector space \(V\) over a division ring \(D\). Moreover, \(\mathcal{Q}_r\) contains nonzero linear transformations of finite rank.

If \(\alpha \) is not Frobenius, then by [10, Theorem 2] and (2.4) we have that \(\mathcal{Q }_r\) satisfies

$$\begin{aligned} \left[ bx^n-y^nc,x^n\right] _k \end{aligned}$$
(2.5)

and \(b,c \in \mathcal{C }\) from Lemma 2.1. In particular, by (2.5), \(\mathcal{Q }_r\) satisfies \(c[y^n,x^n]_k\). Since \(c\ne 0\), it follows that \(\mathcal{Q }_r\) is commutative, a contradiction.

On the other hand, if \(\mathcal{Q }_r\) is a domain, then, by Fact 1.3 and (2.4), we have again that \(\mathcal{Q }_r\) satisfies (2.5) and, as above, we conclude that \(\mathcal{Q }_r\) is commutative.

In light of the previous arguments we assume that \(\alpha\) is Frobenius and \(\dim_DV\ge 2\). Note that if \(char(\mathcal{R})=0\), then \(\alpha(\lambda)=\lambda\) for all \(\lambda\in\mathcal{C}\), since \(\alpha\) is Frobenius. By [2, Theorem 4.7.4] this implies that \(\alpha\) is inner, a contradiction. Thus, we may assume that \(char(\mathcal{R})=p \ne 0\) and \(\alpha(\lambda)=\lambda^{p^t}\), for all \(\lambda \in \mathcal{C}\) and some nonzero fixed integer \(t\); moreover, there exists \(\gamma \in \mathcal{C}\) such that \(\gamma^{p^t}\ne \gamma\).

Let now \(s\ge 1\) be such that \(p^s > k\); then \(\mathcal{Q}_r\) satisfies

$$\begin{aligned} \left[ bx^n-\alpha (x^n)c,x^n\right] _{p^s} \end{aligned}$$

that is

$$\begin{aligned} \left[ bx^n-\alpha (x^n)c,x^{n(p^s)}\right] . \end{aligned}$$
(2.6)

In particular, choose \(\lambda \in \mathcal{C }\) such that \(\lambda ^n=\gamma \), and replace in (2.6) \(x\) by \(\lambda x\). Therefore \(\mathcal{Q }_r\) satisfies

$$\begin{aligned} \lambda ^{n(p^s+1)}\left[ bx^n-\lambda ^{n(p^t-1)} \alpha (x^n)c,x^{n(p^s)}\right] . \end{aligned}$$

that is \(\mathcal{Q }_r\) satisfies

$$\begin{aligned} \left[ bx^n-\gamma ^{(p^t-1)}\alpha (x^n)c,x^{n(p^s)}\right] . \end{aligned}$$
(2.7)

Comparing (2.6) with (2.7), we have that

$$\begin{aligned} \left( \gamma ^{(p^t-1)}-1\right) \left[ \alpha (x^n)c,x^{n(p^s)}\right] \end{aligned}$$

is an identity with automorphisms for \(\mathcal{Q}_r\). Since \(\gamma^{p^t}\ne \gamma\), the scalar \(\gamma^{p^t-1}-1\) is nonzero, hence \(\mathcal{Q}_r\) satisfies

$$\begin{aligned} \left[ \alpha (x^n)c,x^{n(p^s)}\right] . \end{aligned}$$
(2.8)

Moreover, again by (2.6) and (2.8), we have that \(0=[br^n,r^{n(p^s)}]=[b,r^{n(p^s)}]r^n\), for all \(r\in \mathcal{Q}_r\). This implies that \(b\in \mathcal{C}\) (see for example [13], Theorem 1 and Corollary 1). In light of this we may assume that \(c\ne 0\).

Let now \(e^2=e \in Soc(\mathcal{Q }_r)\), then by (2.8),

$$\begin{aligned}{}[\alpha (e)c,e]=0 \end{aligned}$$
(2.9)

and right multiplying by \((1-e)\) it follows

$$\begin{aligned} e\alpha (e)c(1-e)=0. \end{aligned}$$
(2.10)

In particular, choose in (2.10) the following idempotent element: \((1-e)-(1-e)xe=(1-e)(1-xe)\), for any \(x\in \mathcal{Q}_r\) and any idempotent element \(e\in Soc(\mathcal{Q}_r)\); note that \((1-xe)(1-e)=1-e\), so this element is indeed an idempotent. Therefore we have that \(\mathcal{Q}_r\) satisfies:

$$\begin{aligned} \begin{aligned}&(1-e)\alpha (1-e)c(1-e)xe+(1-e)\alpha (1-e)\alpha (x)\alpha (e) c\bigl (e+(1-e)xe\bigr )\\&\quad -(1-e)xe\alpha (1-e)c\bigl (e+(1-e)xe\bigr )\\&\quad +(1-e)xe\alpha (1-e)\alpha (x)\alpha (e)c\bigl (e+(1-e)xe\bigr ) \end{aligned} \end{aligned}$$
(2.11)

and since \(\alpha \) is outer, by (2.11), \(\mathcal{Q }_r\) satisfies

$$\begin{aligned} \begin{aligned}&(1-e)\alpha (1-e)c(1-e)xe+(1-e)\alpha (1-e)y\alpha (e)c\bigl (e+(1-e)xe\bigr )\\&\quad -(1-e)xe\alpha (1-e)c\bigl (e+(1-e)xe\bigr )\\&\quad +(1-e)xe\alpha (1-e)y\alpha (e)c\bigl (e+(1-e)xe\bigr ) \end{aligned} \end{aligned}$$
(2.12)

and in particular \((1-e)\alpha (1-e)r\alpha (e)ce=0\), for all \(r\in \mathcal{Q }_r\). Hence, for all \(e^2=e\in Soc(\mathcal{Q }_r)\):

  1. (1)

    either \((1-e)\alpha (1-e)=0\);

  2. (2)

    or \(\alpha (e)ce=0\).

Consider first the case \((1-e)\alpha(1-e)=0\). Thus (2.12) reduces to

$$\begin{aligned} \begin{aligned}&-(1-e)xe\alpha (1-e)c\bigl (e+(1-e)xe\bigr )\\&+(1-e)xe\alpha (1-e)y\alpha (e)c\bigl (e+(1-e)xe\bigr ) \end{aligned} \end{aligned}$$

and in particular \(\mathcal{Q }_r\) satisfies

$$\begin{aligned} (1-e)xe\alpha (1-e)y\alpha (e)c\bigl (e+(1-e)xe\bigr ). \end{aligned}$$

Now, replacing \(y\) by \(ye\) and using (2.10), it follows that

$$\begin{aligned} (1-e)xe\alpha (1-e)ye\alpha (e)ce. \end{aligned}$$
(2.13)

Notice that if \(e\alpha(1-e)=0\), then, together with \((1-e)\alpha(1-e)=0\), we get the contradiction \(\alpha(1-e)=0\). Therefore (2.13) implies \(e\alpha(e)ce=0\), and by (2.10) we get \(e\alpha(e)c=0\).

Therefore by (2.9), in any case we have \(\alpha (e)ce=0\) for any idempotent element \(e\) of \(\mathcal{Q }_r\).

Consider the following idempotent element: \(ex(1-e)+e\), for any \(x\in \mathcal{Q }_r\). Then \(\mathcal{Q }_r\) satisfies

$$\begin{aligned} \left( \alpha \bigl (ex(1-e)\bigr )+\alpha \bigl (e\bigr )\right) c\left( ex(1-e)+e\right) \end{aligned}$$

that is, since \(\alpha (e)ce=0\)

$$\begin{aligned} \alpha (e)\alpha (x)c\left( ex(1-e)+e\right) \end{aligned}$$

and since \(\alpha \) is outer, \(\mathcal{Q }_r\) satisfies

$$\begin{aligned} \alpha (e)yc\left( ex(1-e)+e\right) \end{aligned}$$

in particular \(\alpha (e)yce=0\), that is \(ce=0\), for all \(e^2=e\in Soc(\mathcal{Q }_r)\). Denote by \(T\) the additive subgroup of \(\mathcal{Q }_r\) generated by all its idempotent elements. Thus \(cT=(0)\), moreover by [16] (p. 18, Corollary), \([\mathcal{Q }_r,\mathcal{Q }_r]\subseteq T\), that is \(c[\mathcal{Q }_r,\mathcal{Q }_r]=0\), which implies \(c=0\), a contradiction.\(\square \)

Remark 2.1

Notice that Lemmas 2.1, 2.2 and 2.3 cover all the possible cases in order to prove Proposition 2.1.

2.1 The Proof of Theorem 1

As remarked above, there exist a skew derivation \(d\) of \(\mathcal{R}\) and an element \(a\in \mathcal{Q}_r\) such that \(F(x)=ax+d(x)\), for all \(x\in \mathcal{R}\). Moreover, we denote by \(\alpha\) the automorphism of \(\mathcal{R}\) associated with \(d\) and \(F\).

In case \(d\) is \(X\)-inner, there exists \(c\in \mathcal{Q }_r\) such that \(d(x)=cx-\alpha (x)c\), for all \(x\in \mathcal{R }\), so that \(F(x)=(a+c)x-\alpha (x)c\) and

$$\begin{aligned} \left[ (a+c)r^n-\alpha (r^n)c,r^n\right] _k=0 \end{aligned}$$

for all \(r\in \mathcal{R }\). Hence by Proposition 2.1, we have that one of the following holds:

  1. (1)

    there exists an invertible element \(q\in \mathcal{Q }_r\) such that \(\alpha (x)=qxq^{-1}\), for all \(x\in \mathcal{R }\), and \(q^{-1}c \in \mathcal{C }, a\in \mathcal{C }\); therefore \(F(x)=ax\).

  2. (2)

    \(\alpha\) is the identity map on \(\mathcal{R}\) and \(a,c \in \mathcal{C}\); also in this case \(F(x)=ax\).

  3. (3)

    \(c=0\) and \(a\in \mathcal{C }\).

Therefore in any case there is \(\lambda \in \mathcal{C }\), such that \(F(x)=\lambda x\), for all \(x\in \mathcal{R }\).

Assume now that \(d\) is not \(X\)-inner. Since \(\mathcal{R }\) satisfies

$$\begin{aligned} \left[ ax^n+d(x^n),x^n\right] _k \end{aligned}$$

then \(\mathcal{R }\) satisfies

$$\begin{aligned} \left[ ax^n+\sum _{i=0}^{n-1} \alpha (x^i)d(x)x^{n-i-1},x^n\right] _k. \end{aligned}$$
(2.14)
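
Here we use the standard expansion of a skew derivation on powers; for instance, for \(n=2\),

$$\begin{aligned} d(x^2)=d(x)x+\alpha(x)d(x)=\sum_{i=0}^{1}\alpha(x^i)d(x)x^{1-i}, \end{aligned}$$

and the general formula \(d(x^n)=\sum_{i=0}^{n-1}\alpha(x^i)d(x)x^{n-i-1}\) follows by induction.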

Since \(d\) is not \(X\)-inner, by Fact 1.4 and (2.14), \(\mathcal{R }\) satisfies

$$\begin{aligned} \left[ ax^n+\sum _{i=0}^{n-1} \alpha (x^i)yx^{n-i-1},x^n\right] _k. \end{aligned}$$
(2.15)

For \(y=0\) in (2.15), it follows that \([ax^n,x^n]_k=0\) and by Lemma 2.1 we get \(a\in \mathcal{C}\). Moreover, if \(\alpha\) is the identity map on \(\mathcal{R}\), then \(d\) is an ordinary derivation of \(\mathcal{R}\) and we are done by the result in [18]. Thus let \(\alpha\) be different from the identity map on \(\mathcal{R}\). Since \(a\in \mathcal{C}\), \(\mathcal{R}\) satisfies

$$\begin{aligned} \left[ \sum _{i=0}^{n-1} \alpha (x^i)yx^{n-i-1},x^n\right] _k. \end{aligned}$$
(2.16)

In case \(\alpha \) is outer, by Fact 1.5, \(\mathcal{R }\) satisfies

$$\begin{aligned} \left[ \sum _{i=0}^{n-1} z^iyx^{n-i-1},x^n\right] _k \end{aligned}$$

and for \(z=0\), we have that \([yx^{n-1},x^n]_k\) is an identity for \(\mathcal{R }\). This implies easily that \(\mathcal{R }\) is commutative.

Finally, consider the case in which there exists an invertible element \(q \in \mathcal{Q}_r\) such that \(\alpha(x)=qxq^{-1}\), for all \(x\in \mathcal{Q}_r\). Hence \(\mathcal{R}\) satisfies

$$\begin{aligned} \left[ \sum _{i=0}^{n-1} (qxq^{-1})^iyx^{n-i-1},x^n\right] _k. \end{aligned}$$
(2.17)

Since \(\alpha \ne 1\), then \(q\notin \mathcal{C}\), therefore (2.17) is a non-trivial generalized polynomial identity for \(\mathcal{R}\). By Martindale's theorem [20], \(\mathcal{Q}_r\) is a primitive ring having a nonzero socle with \(\mathcal{C}\) as the associated division ring. In light of Jacobson's theorem [15, p. 75], \(\mathcal{Q}_r\) is isomorphic to a dense ring of linear transformations on some vector space \(V\) over \(\mathcal{C}\).

Firstly we consider \(dim_{\mathcal{C }}V\ge 3\). Since \(q\notin \mathcal{C }\), there exists \(v\in V\) such that \(\{q^{-1}v,v\}\) are linearly \(\mathcal{C }\)-independent. Moreover, since \(dim_{\mathcal{C }}V\ge 3\), there exists \(w\in V\) such that \(\{q^{-1}v,v,w\}\) are linearly \(\mathcal{C }\)-independent. By the density of \(\mathcal{Q }_r\) there exist \(r, s\in \mathcal{Q }_r\) such that

$$\begin{aligned} rw=0;\quad sw=v;\quad rq^{-1}v=q^{-1}v;\quad rv=v \end{aligned}$$

so that by (2.17) we get the following contradiction:

$$\begin{aligned} 0=\left[ \sum _{i=0}^{n-1} (qrq^{-1})^isr^{n-i-1},r^n\right] _kw=(-1)^kv\ne 0. \end{aligned}$$

Suppose now that \(\dim_{\mathcal{C}}V=2\), so that \(\mathcal{Q}_r\cong M_2(\mathcal{C})\), the ring of all \(2\times 2\) matrices over \(\mathcal{C}\). Denote

$$\begin{aligned} q=\left[ \begin{array}{cc} q_{11} &{} q_{12} \\ q_{21} &{} q_{22} \end{array} \right],\qquad q^{-1}=\frac{1}{\det(q)}\cdot \left[ \begin{array}{cc} q_{22} &{} -q_{12} \\ -q_{21} &{} q_{11} \end{array} \right]. \end{aligned}$$

In (2.17) choose \(x=e_{22}, y=e_{21}\), then by computation we have

$$\begin{aligned} q_{11}q_{22}=0. \end{aligned}$$
(2.18)

Analogously we obtain the following:

$$\begin{aligned} x=e_{11},\quad y=e_{22}\quad \Longrightarrow q_{11}q_{12}=0 \end{aligned}$$
(2.19)

Since \(q\) is invertible, the second column of \(q\) cannot be zero. Thus, by (2.18) and (2.19), it follows that \(q_{11}=0\). This implies \(q_{21}\ne 0\), since the first column of \(q\) cannot be zero.

Let \(\varphi(x)=(1+e_{12})x(1-e_{12})\) be an inner automorphism of \(\mathcal{Q}_r\). Of course, for all \(r,s \in \mathcal{Q}_r\),

$$\begin{aligned} \varphi \left( \left[ \sum _{i=0}^{n-1} (qrq^{-1})^isr^{n-i-1},r^n\right] _k\right)=0 \end{aligned}$$

that is \(\mathcal{Q }_r\) satisfies

$$\begin{aligned} \left[ \sum _{i=0}^{n-1} \left( \varphi (q)x\varphi (q^{-1})\right) ^iyx^{n-i-1},x^n\right] _k. \end{aligned}$$
(2.20)

Moreover, the matrix \(\varphi(q)\) satisfies the same properties as \(q\). If we denote by \(\varphi(q)_{ij}\) the \((i,j)\)-entry of \(\varphi(q)\), we have \(0=\varphi(q)_{11}=q_{21}\), which is a contradiction.

The previous contradictions force \(\dim_{\mathcal{C}}V=1\), that is, \(\mathcal{Q}_r\) is commutative and so is \(\mathcal{R}\).

3 Hypercentralizing generalized skew derivations on left ideals

Finally we complete our study of hypercentralizing generalized skew derivations and consider the extension of Theorem 1 for left ideals. Thus we will consider the case when \(F:\mathcal{R }\rightarrow \mathcal{R }\) is a nonzero generalized skew derivation of a prime ring \(\mathcal{R }\), which satisfies an Engel-type condition on some one-sided ideal \(\mathcal{I }\) of \(\mathcal{R }\). Hence we study the case \([F(x^n),x^n]_k=0\), for all \(x\in \mathcal{I }\). We will prove Theorem 2.

First we need to establish some auxiliary results, beginning with:

Lemma 3.1

Let \(\mathcal{R}\) be a prime ring, \(\mathcal{I}\) be a nonzero left ideal of \(\mathcal{R}\) and \(F\) be a nonzero generalized skew derivation of \(\mathcal{R}\) defined as \(F(x)=bx-qxq^{-1}c\) for some \(b,c\in \mathcal{Q}_r\) and for some invertible element \(q\in \mathcal{Q}_r\). Suppose that \(\mathcal{R}\) does not satisfy any non-trivial generalized polynomial identity. Then there exists \(\gamma \in \mathcal{C}\) such that \(\mathcal{I}(q^{-1}c-\gamma)=(0)\) and \(b-\gamma q\in \mathcal{C}\).

Proof

If there exists \(a\in \mathcal{I}\) such that \(\{a,aq^{-1}c\}\) is linearly \(\mathcal{C}\)-independent, then \([F((xa)^n),(xa)^n]_k\) is a non-trivial generalized polynomial identity for \(\mathcal{R}\), a contradiction. Thus we suppose that \(\{a,aq^{-1}c\}\) is linearly \(\mathcal{C}\)-dependent for all \(a\in \mathcal{I}\). This implies that there exists \(\gamma \in \mathcal{C}\), independent of \(a\), such that \(aq^{-1}c=\gamma a\) for all \(a\in \mathcal{I}\); then \(F(a)=ba-qaq^{-1}c=(b-\gamma q)a\). Hence \(\mathcal{I}\) satisfies \([(b-\gamma q)x^n,x^n]_k\). By [1] we get \(b-\gamma q \in \mathcal{C}\).\(\square\)

Lemma 3.2

Let \(\mathcal{R}=M_t(\mathcal{K})\) be the ring of \(t\times t\) matrices over the field \(\mathcal{K}\) of characteristic different from \(2\), \(\mathcal{I}\) be a nonzero left ideal of \(\mathcal{R}\) and \(F\) be a nonzero generalized skew derivation of \(\mathcal{R}\) defined as \(F(x)=bx-qxq^{-1}c\) for some \(b,c\in \mathcal{R}\) and for some invertible element \(q\in \mathcal{R}\). Suppose that \([F(r^n),r^n]_k=0\) for all \(r \in \mathcal{I}\). If \(\mathcal{I}F(\mathcal{I})=0\), then there exists \(\gamma \in \mathcal{C}\) such that \(\mathcal{I}(q^{-1}c-\gamma)=(0)\) and \(b-\gamma q=0\), that is \(F(\mathcal{I})=0\).

Proof

In case \([F(r^n),r^n]_k=0\) for all \(r\in \mathcal{R}\), we are done by Theorem 1. Thus we assume that \(\mathcal{I}\ne \mathcal{R}\). Since \(\mathcal{I}F(\mathcal{I})=0\), every term of \([F(r^n),r^n]_k\) having a factor \(r^n\) on the left of \(F(r^n)\) vanishes; hence our assumption gives \(F(r^n)r^{nk}=0\), for all \(r \in \mathcal{I}\), that is

$$\begin{aligned} (br^n-qr^nq^{-1}c)r^{nk}=0. \end{aligned}$$
(3.1)

Since there exists a set of matrix units containing the idempotent generator of a given minimal left ideal, we observe that any minimal left ideal is part of a direct sum of minimal left ideals adding up to \(\mathcal{R}\). In light of this and applying Proposition 5 on page 52 in [15], we may assume that any left ideal of \(\mathcal{R}\) is a direct sum of minimal left ideals, each of the form \(\mathcal{R}e_{ii}\).

Moreover, we know that the left ideal \(\mathcal{I }\) has a number of uniquely determined simple components: they are minimal left ideals of \(\mathcal{R }\) and \(\mathcal{I }\) is their direct sum. In view of this argument, we may write \(\mathcal{I }=\mathcal{R }e\) for some \(e=\sum _{i=1}^pe_{ii}\) and \(p\in \{1,2,\ldots ,t\}\), that is, \(\mathcal{I }=\mathcal{R }e=(\mathcal{R }e_{11}+\cdots +\mathcal{R }e_{pp})\), where \(p\le t\). Of course we can suppose \(t\ge 2\). Let us write \(q=\sum _{r,s}q_{rs}e_{rs}, q^{-1}c=\sum _{r,s}c_{rs}e_{rs}, q^{-1}b=\sum _{r,s}b_{rs}e_{rs}\) for \(q_{rs}, c_{rs}, b_{rs} \in \mathcal K \).

For any \(j\le p\), the element \(e_{jj}\) falls in \(\mathcal{I }\). Moreover, left multiplying (3.1) by \(q^{-1}\) we get

$$\begin{aligned} (q^{-1}br^n-r^nq^{-1}c)r^{nk}=0. \end{aligned}$$
(3.2)

Hence, for \(r=e_{jj}\) in (3.2) and left multiplying by \(e_{ii}\), for any \(i\ne j\), we obtain \(e_{ii}(q^{-1}b)e_{jj}=0\), that is \(b_{ij}=0\), for all \(j\le p\) and any \(i\ne j\).

Consider the following automorphisms of \(\mathcal{R}\): for \(j \le p\) (\(k\ne j\)) let

$$\begin{aligned} \lambda (x)&= (1+e_{kj})x(1-e_{kj})=x+e_{kj}x-xe_{kj}-e_{kj}xe_{kj}\\ \varphi (x)&= (1-e_{kj})x(1+e_{kj})=x-e_{kj}x+xe_{kj}-e_{kj}xe_{kj}. \end{aligned}$$

Note that \(\lambda (\mathcal{I }) \subseteq \mathcal{I }\) is a left ideal of \(\mathcal{R }\) satisfying

$$\begin{aligned} \left( \lambda (q^{-1}b)r^n-r^n\lambda (q^{-1}c)\right) r^{nk}=0 \end{aligned}$$

and the same holds for \(\varphi (\mathcal{I }) \subseteq \mathcal{I }\). Set \(\lambda (q^{-1}b)=\sum _{uv}b_{uv}^{\prime }e_{uv}, \varphi (q^{-1}b)=\sum _{uv}b_{uv}^{\prime \prime }e_{uv}\) with \(b_{uv}^{\prime }, b_{uv}^{\prime \prime } \in \mathcal K \). Since the matrices \(\lambda (q^{-1}b)\) and \(\varphi (q^{-1}b)\) must satisfy the condition \(b_{ij}^{\prime }=0\) and \(b_{ij}^{\prime \prime }=0\), for any \(j\le p, i\ne j\), then we obtain:

  1. (1)

    in case we consider \(k\le p, b_{kj}^{\prime }=0\), that is \(b_{jj}=b_{kk}\) (for all \(j\ne k \le p\));

  2. (2)

    in case \(k>p, b_{kj}^{\prime }=0\), that is \(b_{jj}-b_{kk}-b_{jk}=0\); moreover \(b_{kj}^{\prime \prime }=0\), that is \(-b_{jj}+b_{kk}-b_{jk}=0\). Since \(char(R)\ne 2\), we have that both \(b_{jk}=0\) and \(b_{jj}=b_{kk}\), for all \(j\le p\) and \(k>p\).

The previous conditions imply that there exists \(\gamma \in \mathcal K \) such that \(\mathcal{I }(q^{-1}b)=\gamma \mathcal{I }\). Starting from (3.2), we also have that \(\mathcal{I }\) satisfies

$$\begin{aligned} y\left( q^{-1}bx^n-x^nq^{-1}c\right) x^{nk}=0 \end{aligned}$$

and by easy computations for \(x=y\), it follows that

$$\begin{aligned} r^{n+1}q^{-1}cr^{n}=\gamma r^{2n+1} \end{aligned}$$
(3.3)

for all \(r\in \mathcal{I}\). Notice that for any \(j\le p\), the elements \(e_{jj}\) and \(e_{ij}+e_{jj}\) fall in \(\mathcal{I}\), for all \(i\). Moreover, for any \(i,j\le p\), the element \(e_{ii}-e_{jj}\) falls in \(\mathcal{I}\). So by (3.3) we get

$$\begin{aligned} e_{jj}^{n+1}q^{-1}ce_{jj}^n=\gamma e_{jj}^{2n+1},\quad \forall j\le p \end{aligned}$$
(3.4)
$$\begin{aligned}&(e_{ii}-e_{jj})^{n+1}q^{-1}c(e_{ii}-e_{jj})^n=\gamma (e_{ii}-e_{jj})^{2n+1},\nonumber \\&\quad \forall i,j\le p,\quad i\ne j \end{aligned}$$
(3.5)

and

$$\begin{aligned}&(e_{ij}+e_{jj})^{n+1}q^{-1}c(e_{ij}+e_{jj})^n=\gamma (e_{ij}+e_{jj})^{2n+1},\nonumber \\&\quad \forall j\le p, \quad \forall i\ne j. \end{aligned}$$
(3.6)

Therefore:

  • By (3.4) we have \(c_{jj}=\gamma \), for all \(j\le p\);

  • By (3.5), and using \(c_{jj}=\gamma \), we also have \(c_{ij}=0\), for all \(i,j \le p\), with \(i\ne j\);

  • By (3.6), and using again \(c_{jj}=\gamma \), we finally have \(c_{ji}=0\), for all \(j\le p\) and any \(i\ne j\).

Hence \(\mathcal{I}(q^{-1}c-\gamma)=(0)\) and (3.1) implies \((b-\gamma q)r^{n+nk}=0\), for all \(r\in \mathcal{I}\), that is \(b-\gamma q=0\).\(\square\)

Lemma 3.3

Let \(\mathcal{R }\) be a prime ring, \(\mathcal{I }\) a nonzero left ideal of \(\mathcal{R }\) and \(F\) be a generalized skew derivation of \(\mathcal{R }\) with associated automorphism \(\alpha \in \mathrm{Aut}(\mathcal{R })\) and associated skew derivation \(d\). If \(\alpha \) is \(X\)-outer and \(\mathcal{I }F(\mathcal{I })=0\), then \(d(\mathcal{R })=0\) and there exists \(a\in \mathcal{Q }_r\) such that \(F(x)=ax\), for all \(x\in \mathcal{R }\), with \(\mathcal{I }a=0\).

Proof

Let \(r\in \mathcal{R}\) and \(x,y \in \mathcal{I}\). Since \(0=xF(ry)=x(F(r)y+\alpha(r)d(y))\) and \(\alpha\) is \(X\)-outer, then \(0=x(F(r)y+zd(y))\) for all \(z\in \mathcal{R}\). In particular, this means both \(0=xF(\mathcal{R})y\) and \(0=x\mathcal{R}d(\mathcal{I})\). Since \(\mathcal{R}\) is prime, we have both \(\mathcal{I}F(\mathcal{R})=0\) and \(d(\mathcal{I})=0\).

Thus, for all \(r,s \in \mathcal{R }\) we have

$$\begin{aligned} 0=\mathcal{I }F(rs)=\mathcal{I }(F(r)s+\alpha (r)d(s)) =\mathcal{I }\alpha (r)d(s). \end{aligned}$$

Since \(\alpha \) is \(X\)-outer, it follows \(\mathcal{I }\mathcal{R }d(\mathcal{R })=0\), that is \(d(\mathcal{R })=0\). This implies that \(F(xy)=F(x)y\), for all \(x,y \in \mathcal{Q }_r\). In particular choose \(a=F(1)\in \mathcal{Q }_r\), then \(F(x)=ax\), for all \(x\in \mathcal{R }\) and by our assumption \(\mathcal{I }a\mathcal{I }=0\), that is \(\mathcal{I }a=0\).\(\square \)

Lemma 3.4

Let \(\mathcal{R}\) be a prime ring of characteristic different from \(2\), \(\mathcal{I}\) be a nonzero left ideal of \(\mathcal{R}\) and \(F\) be a nonzero generalized skew derivation of \(\mathcal{R}\) defined as \(F(x)=bx-qxq^{-1}c\) for some \(b,c\in \mathcal{Q}_r\) and for some invertible element \(q\in \mathcal{Q}_r\). Suppose that \([F(r^n),r^n]_k=0\) for all \(r\in \mathcal{I}\). If \(\mathcal{I}F(\mathcal{I})=0\), then there exists \(\gamma \in \mathcal{C}\) such that \(\mathcal{I}(q^{-1}c-\gamma)=(0)\) and \(b-\gamma q=0\).

Proof

By Lemma 3.1, we may assume that \(\mathcal{R}\) satisfies the non-trivial generalized polynomial identity

$$\begin{aligned} \left[ bx^n-qx^nq^{-1}c,x^n\right] _k. \end{aligned}$$
(3.7)

It follows from [20] that \(\mathcal{R}\mathcal{C}\) is a primitive ring and hence \(\mathcal{Q}_r\) has non-zero socle \(\mathcal{H}\) with non-zero left ideal \(\mathcal{J}=\mathcal{H}\mathcal{I}\). Moreover, \(\mathcal{J}\) satisfies (3.7). Replacing \(\mathcal{R}\) by \(\mathcal{H}\) and \(\mathcal{I}\) by \(\mathcal{J}\), without loss of generality we may assume that \(\mathcal{R}\) is a simple ring equal to its own socle and \(\mathcal{I}=\mathcal{R}\mathcal{I}\).

Assume first that \(\mathcal{I}[q^{-1}c,\mathcal{I}]=0\). Let \(x,y\in \mathcal{I}\), \(r\in \mathcal{R}\). Then \(0=y[q^{-1}c,rx]=yq^{-1}crx-yrxq^{-1}c\), so that \(xq^{-1}c=\beta_xx\), with \(\beta_x\in \mathcal{C}\), and analogously \(yq^{-1}c=\beta_yy\), \((x+y)q^{-1}c=\beta_{x+y}(x+y)\). From this it is easy to see that \(\beta_x\) is independent of the choice of \(x\in \mathcal{I}\), therefore there exists \(\gamma \in \mathcal{C}\) such that \(\mathcal{I}(q^{-1}c-\gamma)=0\). In this case, \([(b-\gamma q)r^n,r^{n}]_k=0\), for all \(r\in \mathcal{I}\). Hence \(b-\gamma q\in \mathcal{C}\) follows from [1], and, since \(\mathcal{I}F(\mathcal{I})=0\), it follows easily that \(b-\gamma q=0\). Thus we may assume in what follows that there exist \(a_1,a_2 \in \mathcal{I}\) such that \(a_1[q^{-1}c,a_2]\ne 0\).

According to whether \(|\mathcal{C}|=\infty\) or \(|\mathcal{C}|<\infty\), we correspondingly choose \(\mathcal{K}\) to be the algebraic closure of \(\mathcal{C}\) or \(\mathcal{C}\) itself. Note that \(\mathcal{H}\mathcal{I}\otimes_\mathcal{C}\mathcal{K}\) is a completely reducible left \(\mathcal{H}\otimes_\mathcal{C}\mathcal{K}\)-module which satisfies the generalized polynomial identity (3.7). Thus there exists an idempotent \(h\in \mathcal{H}\mathcal{I}\otimes_\mathcal{C}\mathcal{K}\) such that \(b,q,c,q^{-1}c,a_1,a_{2} \in (\mathcal{H}\mathcal{I}\otimes_\mathcal{C}\mathcal{K})h\). By Litoff's Theorem (for a proof see [14]) there exists \(e^2=e\in \mathcal{H}\otimes_\mathcal{C}\mathcal{K}\) such that

$$\begin{aligned} h,hb,bh,hq,qh,hc,ch,hq^{-1}c,q^{-1}ch,a_1,a_{2} \in e(\mathcal H \otimes _\mathcal{C } \mathcal K )e \end{aligned}$$

with \(e(\mathcal{H}\otimes_\mathcal{C}\mathcal{K})e\cong M_t(\mathcal{K})\) for \(t\ge 2\). For all \(x\in e(\mathcal{H}\otimes_\mathcal{C}\mathcal{K})eh \subseteq (\mathcal{H}\mathcal{I}\otimes_\mathcal{C}\mathcal{K})\cap e(\mathcal{H}\otimes_\mathcal{C}\mathcal{K})e\) we have

$$\begin{aligned} 0&= \left[ bx^n-qx^nq^{-1}c,x^n\right] _k\\&= \left[ bhx^n-qhx^nq^{-1}c,hx^n\right] _k\\&= \left[ (ebe)hx^n-(eqe)hx^n(eq^{-1}ce),x^n\right] _k. \end{aligned}$$

Since we assume that \(\mathcal{I}F(\mathcal{I})=0\), we may apply Lemma 3.2. Hence it follows that \([e(q^{-1}c)e, e(\mathcal{H}\otimes_\mathcal{C}\mathcal{K})eh]=0\), which yields the contradiction \(0\ne a_1[q^{-1}c,a_2]=ea_1eh[e(q^{-1}c)e,ea_2eh]=0\).\(\square\)

Here we also state the following easy result on matrix rings, which will be useful in what follows:

Lemma 3.5

Let \(\mathcal{R}=M_t(\mathcal{K})\) be the ring of \(t\times t\) matrices over the field \(\mathcal{K}\) of characteristic different from \(2\), \(b,c\in \mathcal{R}\) and \(n,m\ge 1\) positive integers such that \(x^ncx^mb=0\), for all \(x\in \mathcal{R}\). Then either \(b=0\) or \(c=0\).

Proof

In case \(t=1\), then \(\mathcal{R }\) is a field and there is nothing to prove.

Thus we consider \(t\ge 2\).

Let us write \(c=\sum _{r,s}c_{rs}e_{rs}, b=\sum _{r,s}b_{rs}e_{rs}\), for \(c_{rs}, b_{rs} \in \mathcal K \).

For any \(j\ge 1\) and \(x=e_{jj}\), it follows \(e_{jj}ce_{jj}b=0\) which implies

$$\begin{aligned} c_{jj}b_{ji}=0,\qquad \forall i\ge 1. \end{aligned}$$
(3.8)
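
Indeed, \(e_{jj}ce_{jj}=c_{jj}e_{jj}\), so that

$$\begin{aligned} 0=e_{jj}ce_{jj}b=c_{jj}e_{jj}b=\sum_{i}c_{jj}b_{ji}e_{ji}, \end{aligned}$$

and comparing the \((j,i)\)-entries gives (3.8).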

Our first aim is to show that, if \(c\ne 0\), then \(b\) is a diagonal matrix. To do this, we assume that \(b\) is not diagonal, say \(b_{ji}\ne 0\) for some \(i\ne j\). Consider the following automorphisms of \(\mathcal{R}\): for \(j \ne i\) let

$$\begin{aligned} \lambda (x)=(1+e_{ij})x(1-e_{ij})=x+e_{ij}x-xe_{ij}-e_{ij}xe_{ij}\\ \varphi (x)=(1-e_{ij})x(1+e_{ij})=x-e_{ij}x+xe_{ij}-e_{ij}xe_{ij} \end{aligned}$$

and of course both \(x^n\lambda (c)x^m\lambda (b)=0\) and \(x^n\varphi (c)x^m\varphi (b)=0\), for all \(x\in \mathcal{R }\).

Since the matrices \(\lambda (c), \varphi (c), \lambda (b)\) and \(\varphi (b)\) must satisfy the condition (3.8), then we obtain:

$$\begin{aligned} c_{jj}^{\prime }b_{ji}^{\prime }=0 \Longrightarrow (c_{jj}-c_{ji})b_{ji}=0 \Longrightarrow c_{ji}=0 \end{aligned}$$
(3.9)

moreover

$$\begin{aligned} b_{ij}^{\prime }=b_{ij}+b_{jj}-b_{ii}-b_{ij}\\ b_{ij}^{\prime \prime }=b_{ij}-b_{jj}+b_{ii}-b_{ij}. \end{aligned}$$

If \(b_{ij}^{\prime }=b_{ij}^{\prime \prime }=0\) then \(b_{ij}=b_{ji}\ne 0\), so that by (3.8) and (3.9), it follows \(c_{ii}=c_{ij}=0\). On the other hand, in case \(b_{ij}^{\prime }\ne 0\) (respectively \(b_{ij}^{\prime \prime }\ne 0\)), then \(c_{ii}^{\prime }=c_{ij}^{\prime }=0\) (respectively \(c_{ii}^{\prime \prime }=c_{ij}^{\prime \prime }=0\)). By computation, we have that \(c_{ii}=c_{ij}=0\) in any case.

Therefore, if \(b_{ji}\ne 0\), then the following hold:

$$\begin{aligned} c_{ji}=0,\quad c_{ii}=0,\quad c_{ij}=0,\quad c_{jj}=0. \end{aligned}$$
(3.10)

In case \(t=2\), it follows the contradiction \(c=0\), so that we may assume \(t\ge 3\).

Let now \(k\ne i,j\), and

$$\begin{aligned} \lambda ^{\prime \prime }(x)=(1+e_{ki})x(1-e_{ki})=x+e_{ki}x-xe_{ki}-e_{ki}xe_{ki} \end{aligned}$$

with \(\lambda^{\prime\prime}(b)=\sum_{uv}b_{uv}^{\prime}e_{uv}\), \(\lambda^{\prime\prime}(c)=\sum_{uv}c_{uv}^{\prime}e_{uv}\), for \(b_{uv}^{\prime}, c_{uv}^{\prime}\in \mathcal{K}\). We notice that \(b_{ji}^{\prime}=b_{ji}-b_{jk}\), and also in this case, by applying (3.10), we have two cases:

  • if \(b_{ji}^{\prime}\ne 0\), then \(0=c_{ji}^{\prime}=-c_{jk}\), that is \(c_{jk}=0\);

  • if \(b_{ji}^{\prime }=0\), then \(b_{ji}=b_{jk}\ne 0\), and again \(c_{jk}=0\).

Therefore, for \(b_{ji}\ne 0\) and for all \(k\ne i,j\),

$$\begin{aligned} c_{ji}=0,\quad c_{ii}=0,\quad c_{ij}=0,\quad c_{jj}=0, \quad c_{jk}=0. \end{aligned}$$
(3.11)

Consider again an automorphism of \(\mathcal{R }\), defined as

$$\begin{aligned} \lambda ^{\prime \prime \prime }(x)=(1+e_{kj})x(1-e_{kj})=x+e_{kj}x-xe_{kj}-e_{kj}xe_{kj} \end{aligned}$$

with \(\lambda^{\prime\prime\prime}(b)=\sum_{uv}b_{uv}^{\prime}e_{uv}\), \(\lambda^{\prime\prime\prime}(c)=\sum_{uv}c_{uv}^{\prime}e_{uv}\), for \(b_{uv}^{\prime}, c_{uv}^{\prime}\in \mathcal{K}\). By computation \(b_{ki}^{\prime}=b_{ki}+b_{ji}\). By (3.10), as above we have two cases: if \(b_{ki}^{\prime}\ne 0\), then

$$\begin{aligned} 0&= c_{ki}^{\prime }=c_{ki};\quad 0=c_{ik}^{\prime }=c_{ik};\nonumber \\ 0&= c_{kk}^{\prime }=c_{kk};\quad 0=c_{kj}^{\prime }=c_{kj}. \end{aligned}$$
(3.12)

In case \(b_{ki}^{\prime }=0\), then \(b_{ki}=-b_{ji}\ne 0\), and again (3.12) holds.

Therefore, for \(b_{ji}\ne 0\) and for all \(k\ge 1\),

$$\begin{aligned} c_{ki}=0,\quad c_{ik}=0,\quad c_{kj}=0,\quad c_{jk}=0, \quad c_{kk}=0. \end{aligned}$$
(3.13)

Of course, the case \(t=3\) implies the contradiction \(c=0\). Therefore, we finally assume \(t\ge 4\) and fix \(r\ge 1\), such that \(r\ne i,j,k\). We define the last automorphisms as follows:

$$\begin{aligned} \chi ^{\prime }(x)&= (1+e_{rk})x(1-e_{rk})=x+e_{rk}x-xe_{rk}-e_{rk}xe_{rk}\\ \chi ^{\prime \prime }(x)&= (1+e_{ki})x(1-e_{ki})=x+e_{ki}x-xe_{ki}-e_{ki}xe_{ki} \end{aligned}$$

with \(\chi^{\prime}(b)=\sum_{uv}b_{uv}^{\prime}e_{uv}\), \(\chi^{\prime}(c)=\sum_{uv}c_{uv}^{\prime}e_{uv}\), \(\chi^{\prime\prime}(b)=\sum_{uv}b_{uv}^{\prime\prime}e_{uv}\), \(\chi^{\prime\prime}(c)=\sum_{uv}c_{uv}^{\prime\prime}e_{uv}\), for \(b_{uv}^{\prime}, c_{uv}^{\prime}, b_{uv}^{\prime\prime}, c_{uv}^{\prime\prime}\in \mathcal{K}\).

Since \(b_{ji}^{\prime }=b_{ji}\ne 0\), then, by (3.13), \(c_{kk}^{\prime }=0\), that is \(c_{kr}=0\).

Moreover, in case \(b_{ji}^{\prime \prime }\ne 0\), then by (3.13), \(c_{ri}^{\prime \prime }=0\), that is \(c_{rk}=0\). On the other hand, in case \(0=b_{ji}^{\prime \prime }=b_{ji}-b_{jk}\), then \(b_{jk}\ne 0\). Hence, applying again (3.13), it follows \(c_{rk}=0\), in any case.

The previous argument shows that, if there exists \(i\ne j\) such that \(b_{ji}\ne 0\), then \(c=0\). This contradiction implies that \(b\) must be a diagonal matrix. Finally, for all \(r\ne s\), let

$$\begin{aligned} \chi ^{\prime \prime \prime }(x)=(1+e_{rs})x(1-e_{rs})=x+e_{rs}x-xe_{rs}-e_{rs}xe_{rs}. \end{aligned}$$

Since \(\chi^{\prime\prime\prime}(c)\ne 0\), then \(\chi^{\prime\prime\prime}(b)\) must be a diagonal matrix. In particular the \((r,s)\)-entry of \(\chi^{\prime\prime\prime}(b)\) is zero, that is \(b_{rr}=b_{ss}\), which implies that \(b\) is a central matrix in \(\mathcal{R}\). By the main hypothesis, and assuming \(0\ne b\in \mathcal{K}\), it follows that \(x^ncx^m=0\) for all \(x\in \mathcal{R}\), which means \(c=0\), a contradiction.

Hence we conclude that if \(b\ne 0\), then \(c=0\).\(\square \)

Lemma 3.6

Let \(\mathcal{R}\) be a prime ring of characteristic different from \(2\), \(\mathcal{I}\) be a nonzero left ideal of \(\mathcal{R}\) and \(F\) be a nonzero generalized skew derivation of \(\mathcal{R}\) defined as \(F(x)=bx-qxq^{-1}c\) for some \(b,c\in \mathcal{R}\) and for some invertible element \(q\in \mathcal{R}\). Suppose that \([F(r^n),r^n]_k=0\) for all \(r\in \mathcal{I}\). Then there exists \(\gamma \in \mathcal{C}\) such that \(\mathcal{I}(q^{-1}c-\gamma)=(0)\) and \(b-\gamma q\in \mathcal{C}\).

Proof

We assume that the conclusion of the present lemma does not hold. Moreover, in light of Lemma 3.4, we may also suppose that \(\mathcal{I}F(\mathcal{I})\ne 0\); otherwise we are done. Hence there exist \(a_1,a_2,a_3 \in \mathcal{I}\) such that

$$\begin{aligned} \begin{aligned}&a_1[q^{-1}c,a_1]\ne 0\\&a_2F(a_3)\ne 0. \end{aligned} \end{aligned}$$

Again there exists an idempotent element \(h\in \mathcal{R}\mathcal{I}\) such that \(\mathcal{R}h=\sum_{i=1}^{3}\mathcal{R}a_i+\mathcal{R}q\) and \(a_i=a_ih\), \(q=qh\). Since

$$\begin{aligned}{}[b(x_1h)^n-q(x_1h)^nq^{-1}c,(x_1h)^n]_k \end{aligned}$$

is satisfied by \(\mathcal{R }\), right multiplying by \((1-h)\), we get that \(\mathcal{R }\) satisfies

$$\begin{aligned} (x_1h)^{nk}q(x_1h)^nq^{-1}c(1-h). \end{aligned}$$

In particular, the central simple algebra \(h\mathcal{R }h\cong M_t(\mathcal{C })\) satisfies the generalized identity

$$\begin{aligned} X^{nk}\left( hqh\right) X^n\left( hq^{-1}c(1-h)\right) . \end{aligned}$$

By Lemma 3.5, it follows either \(hqh=0\) or \(hq^{-1}c(1-h)=0\). Since \(hqh=hq \ne 0\), we have \(hq^{-1}c=hq^{-1}ch\). Thus \(hq^{-1}c=hq^{-1}ch \in \mathcal{R }h\). This implies \(F(\mathcal{R }h)\subseteq \mathcal{R }h\). Denote \(\mathcal{R }h=\mathcal J \) and let \(\overline{\mathcal{J }}=\frac{\mathcal{J }}{\mathcal{J }\cap l_\mathcal{R }(\mathcal{J })}\); \(\overline{\mathcal{J }}\) is a prime \(\mathcal{C }\)-algebra with a generalized skew derivation \(\overline{F}\) such that \(\overline{F}(\overline{x})=\overline{F(x)}\) for all \(x\in \mathcal{J }\). Therefore we have \(0=[\overline{F(r^n)},\overline{r^n}]_k\) for all \(\overline{r} \in \overline{\mathcal{J }}\). By Theorem 1 one of the following holds:

  1. (1)

    \(\overline{F}=0\) modulo \(l_\mathcal{R}(\mathcal{J})\), which contradicts the choices of \(a_2,a_3 \in \mathcal{I}\);

  2. (2)

    \([hq^{-1}c,\overline{\mathcal{J}}]=0\) modulo \(l_\mathcal{R}(\mathcal{J})\), which contradicts the choice of \(a_1\) in \(\mathcal{I}\);

  3. (3)

    \(\overline{\mathcal{J }}\) is commutative, in particular \([hq^{-1}c,\overline{\mathcal{J }}]=0\) modulo \(l_\mathcal{R }(\mathcal J )\), once again a contradiction.

In any case we are done.\(\square \)

Lemma 3.7

Let \(\mathcal{R}\) be a prime ring, \(\mathcal{I}\) a nonzero left ideal of \(\mathcal{R}\), \(\alpha\) an automorphism of \(\mathcal{R}\) and \(c\in \mathcal{Q}_r\), and let \(n,m\ge 1\) be fixed integers. Suppose that \([\alpha(r^n)c,r^m]=0\) for all \(r \in \mathcal{I}\). If \(\alpha\) is \(X\)-outer, then \(\alpha(\mathcal{I})c=0\).

Proof

Suppose by contradiction that there exists \(a\in \mathcal{I}\) such that \(\alpha(a)c\ne 0\). By [11, Theorem 2], \(\mathcal{R}\) and \(\mathcal{Q}_r\) satisfy the same generalized polynomial identities with a single skew derivation, therefore \(\mathcal{Q}_r\) satisfies \([\alpha(x^n)c,x^m]\). Since \(\alpha(\mathcal{I})c\ne 0\), then, by the main theorem in [9], \(\mathcal{R}\) is a GPI-ring. By [20, Theorem 3] it follows that \(\mathcal{Q}_r\) is a primitive ring having nonzero socle \(\mathcal{H}\) with the field \(\mathcal{C}\) as its associated division ring. Since \(\mathcal{Q}_r\) and \(\mathcal{H}\) satisfy the same generalized polynomial identities, in order to prove our lemma we replace \(\mathcal{Q}_r\) by its socle and consider \(\mathcal{Q}_r\) as a simple regular ring.

Since \(\mathcal{Q }_r\) is regular, there exists an idempotent element \(h\in \mathcal{Q }_r\mathcal{I }\) such that \(\mathcal{Q }_rh=\mathcal{Q }_ra\), and \(a=ah\).

Firstly we notice that for any idempotent element \(e=e^2\in \mathcal{Q }_r\mathcal{I }\), the following holds:

$$\begin{aligned}{}[\alpha (e)c,e]=0 \end{aligned}$$
(3.14)

that is both

$$\begin{aligned} \alpha (e)ce=e\alpha (e)c \end{aligned}$$
(3.15)

and (right multiplying (3.14) by \(1-e\))

$$\begin{aligned} e\alpha (e)c(1-e)=0 \quad \text{that is}\quad e\alpha (e)c=e\alpha (e)ce. \end{aligned}$$
(3.16)

Since (3.16) holds for any idempotent element, we may replace \(e\) by \(h+(1-h)xh\) for all \(x\in \mathcal{Q }_r\), so that

$$\begin{aligned} \left( h+(1-h)xh\right) \left( \alpha (h)+(1-\alpha (h)) \alpha (x)\alpha (h)\right) c\left( 1-h-(1-h)xh\right) =0. \end{aligned}$$

Since \(\alpha \) is \(X\)-outer and by applying (3.16),

$$\begin{aligned} \left( h+(1-h)xh\right) \left( (1-\alpha (h))y\alpha (h)\right) c\left( 1-h-(1-h)xh\right) =0 \end{aligned}$$

for all \(x,y \in \mathcal{Q }_r\). In particular

$$\begin{aligned} h\left( (1-\alpha (h))y\alpha (h)\right) c(1-h)=0. \end{aligned}$$
(3.17)

By the primeness of \(\mathcal{Q}_r\) and by (3.17), we have two different cases:

Firstly we consider \(h(1-\alpha (h))=0\). In this case

$$\begin{aligned} h=h\alpha (h) \end{aligned}$$
(3.18)

and by (3.15)

$$\begin{aligned} \alpha (h)ch=hc. \end{aligned}$$
(3.19)

Now replace \(e\) by \(h+(1-h)xh\) in (3.15) and get

$$\begin{aligned}&\left( \alpha (h)+(1-\alpha (h))\alpha (x)\alpha (h)\right) c\left( h+(1-h)xh\right) \\&\quad =\left( h+(1-h)xh\right) \left( \alpha (h)+(1-\alpha (h)) \alpha (x)\alpha (h)\right) c \end{aligned}$$

for all \(x\in \mathcal{Q }_r\). Since \(\alpha \) is \(X\)-outer, we have

$$\begin{aligned}&\left( \alpha (h)+(1-\alpha (h))y\alpha (h)\right) c\left( h+(1-h)xh\right) \\&\quad =\left( h+(1-h)xh\right) \left( \alpha (h)+(1-\alpha (h)) y\alpha (h)\right) c \end{aligned}$$

for all \(x,y \in \mathcal{Q }_r\). Moreover since \(h(1-\alpha (h))=0\), it follows that

$$\begin{aligned}&\left( \alpha (h)+(1-\alpha (h))y\alpha (h)\right) c\left( h+(1-h)xh\right) \\&\quad =\left( h+(1-h)xh\right) \alpha (h)c. \end{aligned}$$

In particular

$$\begin{aligned} \left( \alpha (h)+(1-\alpha (h))y\alpha (h)\right) ch=h\alpha (h)c. \end{aligned}$$

By (3.15), we then have \((1-\alpha(h))y\alpha(h)c=0\), for all \(y\in \mathcal{Q}_r\). Therefore, since \(1-\alpha(h)\ne 0\), we get \(\alpha(h)c=0\). This contradicts \(0\ne \alpha(a)c=\alpha(ah)c=\alpha(a)\alpha(h)c\).

Finally, by (3.17), we consider now the case \(\alpha (h)c(1-h)=0\), then

$$\begin{aligned} \alpha (h)c=\alpha (h)ch. \end{aligned}$$
(3.20)

Now replace again \(e\) by \(h+(1-h)xh\) in (3.15) and, by using \(\alpha (h)c(1-h)=0\), it follows

$$\begin{aligned}&\left( \alpha (h)+(1-\alpha (h))\alpha (x)\alpha (h)\right) ch \\&\quad =\left( h+(1-h)xh\right) \left( \alpha (h)+(1-\alpha (h)) \alpha (x)\alpha (h)\right) c \end{aligned}$$

for all \(x\in \mathcal{Q }_r\). Since \(\alpha \) is \(X\)-outer, we have

$$\begin{aligned}&\left( \alpha (h)+(1-\alpha (h))y\alpha (h)\right) ch\nonumber \\&\quad =\left( h+(1-h)xh\right) \left( \alpha (h)+(1-\alpha (h)) y\alpha (h)\right) c \end{aligned}$$
(3.21)

for all \(x,y \in \mathcal{Q }_r\). In particular for \(x=y=0\) we get \(\alpha (h)ch=h\alpha (h)c\), and, by (3.20),

$$\begin{aligned} \alpha (h)c=h\alpha (h)c. \end{aligned}$$
(3.22)

Moreover again by (3.21), for \(y=0\) and for any \(x\in \mathcal{Q }_r\),

$$\begin{aligned} \alpha (h)ch=h\alpha (h)c+(1-h)xh\alpha (h)c \end{aligned}$$

and by using both (3.20) and (3.22), it follows \(0=h\alpha (h)c=\alpha (h)c\). As in the previous case, this is a contradiction.\(\square \)

Lemma 3.8

Let \(\mathcal{R }\) be a prime ring, \(\mathcal{I }\) a nonzero left ideal of \(\mathcal{R }\) and \(F\) be a nonzero generalized skew derivation of \(\mathcal{R }\) (with associated automorphism \(\alpha \)) defined as \(F(x)=bx-\alpha (x) c\) for some \(b,c\in \mathcal{Q }_r\). Suppose that \([F(r^n),r^n]_k=0\) for all \(r \in \mathcal{I }\). If \(\alpha \) is \(X\)-outer, then \(\alpha (\mathcal{I })c=0\) and \(b\in \mathcal{C }\).

Proof

For all \(a\in \mathcal{I}\), \(\mathcal{R}\) satisfies

$$\begin{aligned} \Phi (x,\alpha (x))=\left[ b(xa)^n-\alpha (xa)^nc,(xa)^n\right] _k. \end{aligned}$$

By [11, Theorem 2], \(\mathcal{R}\) and \(\mathcal{Q}_r\) satisfy the same generalized polynomial identities with a single skew derivation, therefore \(\mathcal{Q}_r\) satisfies \(\Phi(x, \alpha(x))\). Notice that, in case \(\alpha(\mathcal{I})c=0\), then \(b\in \mathcal{C}\) follows from [1]. Thus we may suppose that \(\alpha(\mathcal{I})c\ne 0\); hence there exists \(a\in \mathcal{I}\) such that \(\alpha(a)c\ne 0\).

In this case, our aim is to show that a contradiction follows.

Since \(\alpha(\mathcal{I})c\ne 0\), then, by the main theorem in [9], \(\mathcal{R}\) is a GPI-ring. By [20, Theorem 3] it follows that \(\mathcal{Q}_r\) is a primitive ring having nonzero socle \(\mathcal{H}\) with the field \(\mathcal{C}\) as its associated division ring. Since \(\mathcal{Q}_r\) and \(\mathcal{H}\) satisfy the same generalized polynomial identities, in order to prove our lemma we replace \(\mathcal{Q}_r\) by its socle and consider \(\mathcal{Q}_r\) as a simple regular ring.

Since \(\mathcal{Q}_r\) is regular, there exists an idempotent element \(h\in \mathcal{Q}_r\mathcal{I}\) such that \(\mathcal{Q}_rh=\mathcal{Q}_ra\) and \(a=ah\).

Suppose first that \(char(\mathcal{R })=0\).

Since \(\mathcal{R }\) satisfies

$$\begin{aligned} \left[ b(xh)^n-\bigl (\alpha (x)\alpha (h)\bigr )^nc,(xh)^n\right] _k \end{aligned}$$

then by Theorem 3 in [10], \(\mathcal{R }\) satisfies

$$\begin{aligned} \left[ b(xh)^n-\bigl (y\alpha (h)\bigr )^nc,(xh)^n\right] _k \end{aligned}$$

and in particular \(\mathcal{R }\) satisfies

$$\begin{aligned} \left[ \bigl (y\alpha (h)\bigr )^nc,(xh)^n\right] _k. \end{aligned}$$
(3.23)

Replacing \(x\) by \(hx\) in (3.23) and left multiplying by \((1-h)\), it follows

$$\begin{aligned} (1-h)\left( \bigl (y\alpha (h)\bigr )^{n}c\right) ^k(hxh)^n=0 \end{aligned}$$

that is \(0=\alpha (h)ch=\alpha (h)c\). This means \(\alpha (a)c=\alpha (ah)c=\alpha (a)\alpha (h)c=0\), which is a contradiction.

Consider now \(char(\mathcal{R })=p\ne 0\) and let \(t\ge 1\) be such that \(p^t\ge k\).

Thus \(\mathcal{I }\) satisfies \([bx^n-\alpha (x^n)c,x^{np^t}]\).
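This reduction uses the standard behaviour of the Engel bracket in characteristic \(p\); for the reader's convenience we sketch the computation, writing \(A=bx^n-\alpha (x^n)c\) and \(B=x^n\):

$$\begin{aligned} [A,B]_{p^t}=\sum _{i=0}^{p^t}(-1)^i\binom{p^t}{i}B^iAB^{p^t-i}=AB^{p^t}-B^{p^t}A=\left[ A,B^{p^t}\right] , \end{aligned}$$

since \(\binom{p^t}{i}\equiv 0 \pmod p\) for \(0<i<p^t\). As \(p^t\ge k\) and \([A,B]_k=0\), we get \([A,B]_{p^t}=\left[ [A,B]_k,B\right] _{p^t-k}=0\), that is \([bx^n-\alpha (x^n)c,x^{np^t}]=0\) for all \(x\in \mathcal{I }\).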

In case \(b\in \mathcal{C }\), the conclusion follows from Lemma 3.7. Therefore we consider the case \(b\notin \mathcal{C }\).

Let \(h^2=h\) be any idempotent element of \(\mathcal{Q }_r\mathcal{I }\); then

$$\begin{aligned}{}[bh-\alpha (h)c,h]=0 \end{aligned}$$

that is

$$\begin{aligned} bh-\alpha (h)ch-hbh+h\alpha (h)c=0 \end{aligned}$$
(3.24)

moreover, left multiplying (3.24) by \(h\),

$$\begin{aligned} h\alpha (h)c(1-h)=0 \quad \text{ that } \text{ is }\quad h\alpha (h)c=h\alpha (h)ch. \end{aligned}$$
(3.25)

Since (3.24) holds for any idempotent element of \(\mathcal{Q }_r\mathcal{I }\), we may replace \(h\) by \(h+(1-h)xh\) for all \(x\in \mathcal{Q }_r\), so that

$$\begin{aligned} \begin{aligned}&b\left( h+(1-h)xh\right) -\left( \alpha (h)+(1-\alpha (h)) \alpha (x)\alpha (h)\right) c\left( h+(1-h)xh\right) \\&\quad -\left( h+(1-h)xh\right) b\left( h+(1-h)xh\right) \\&\quad +\left( h+(1-h)xh\right) \left( \alpha (h)+(1-\alpha (h)) \alpha (x)\alpha (h)\right) c=0 \end{aligned} \end{aligned}$$

for all \(x\in \mathcal{Q }_r\). Since \(\alpha \) is \(X\)-outer,

$$\begin{aligned}&b\left( h+(1-h)xh\right) -\left( \alpha (h)+(1-\alpha (h)) y\alpha (h)\right) c\left( h+(1-h)xh\right) \nonumber \\&\quad -\left( h+(1-h)xh\right) b\left( h+(1-h)xh\right) \nonumber \\&\quad +\left( h+(1-h)xh\right) \left( \alpha (h)+(1-\alpha (h))y \alpha (h)\right) c=0 \end{aligned}$$
(3.26)

for all \(x,y \in \mathcal{Q }_r\).

Moreover we may also replace \(h\) by \(h+(1-h)xh\) in (3.25), for all \(x\in \mathcal{Q }_r\). By using the same computations as in Lemma 3.7, we get

$$\begin{aligned} h\left( 1-\alpha (h)\right) y\alpha (h)c(1-h)=0. \end{aligned}$$
(3.27)

By the primeness of \(\mathcal{Q }_r\) and by (3.27), for each idempotent element \(h\in \mathcal{Q }_r\mathcal{I }\) either \(h(1-\alpha (h))=0\) or \(\alpha (h)c(1-h)=0\). Thus we have two different cases:

Suppose first that there exists an idempotent element \(h\in \mathcal{Q }_r\mathcal{I }\), such that \(h(1-\alpha (h))=0\). Thus by (3.26) we have

$$\begin{aligned}&b\left( h+(1-h)xh\right) -\left( \alpha (h)+(1-\alpha (h)) y\alpha (h)\right) c\left( h+(1-h)xh\right) \nonumber \\&\quad -\left( h+(1-h)xh\right) b\left( h+(1-h)xh\right) + \left( h+(1-h)xh\right) \alpha (h)c=0.\qquad \end{aligned}$$
(3.28)

For \(x=y=0\) in (3.28) it follows

$$\begin{aligned} bh-\alpha (h)ch-hbh+h\alpha (h)c=0 \end{aligned}$$
(3.29)

on the other hand for \(x=0\) in (3.28), we have

$$\begin{aligned} bh-\left( \alpha (h)+(1-\alpha (h))y\alpha (h)\right) ch-hbh+h\alpha (h)c=0. \end{aligned}$$
(3.30)

Comparing (3.29) and (3.30), one has \((1-\alpha (h))y\alpha (h)ch=0\) for all \(y\in \mathcal{Q }_r\); since \(1-\alpha (h)\ne 0\), by the primeness of \(\mathcal{Q }_r\) it follows that

$$\begin{aligned} \alpha (h)ch=0. \end{aligned}$$
(3.31)

Moreover, right multiplying (3.29) by \(h\) and using (3.31), we also get \(hbh=bh\).

In light of this and by our main hypothesis, \(\mathcal{Q }_r\) satisfies

$$\begin{aligned} \left[ b(hxh)^n,(hxh)^{np^t}\right] . \end{aligned}$$

Since \(h\mathcal{Q }_rh\) is a central simple algebra over its center, by the result in [1], it follows \(bh=\beta h\), for some \(\beta \in \mathcal{C }\). Hence, by (3.30), and using (3.31), the following holds: \(0=h\alpha (h)c=hc\).
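In detail, using \(hbh=bh\), (3.31) and the case assumption \(h\alpha (h)=h\), equation (3.30) unwinds as

$$\begin{aligned} 0=bh-\bigl (\alpha (h)ch+(1-\alpha (h))y\alpha (h)ch\bigr )-hbh+h\alpha (h)c=bh-bh+h\alpha (h)c=h\alpha (h)c=hc. \end{aligned}$$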

From \(bh=\beta h, 0=h\alpha (h)c=hc, \alpha (h)ch=0\) and for \(y=0\) in (3.28), we get

$$\begin{aligned} b\left( h+(1-h)xh\right) -\alpha (h)cxh-\left( h+(1-h)xh\right) b\left( h+(1-h)xh\right) =0.\nonumber \\ \end{aligned}$$
(3.32)

Comparing (3.28) with (3.32) it follows

$$\begin{aligned} (1-\alpha (h))y\alpha (h)cxh=0. \end{aligned}$$

Since \(1-\alpha (h)\ne 0\), by the primeness of \(\mathcal{Q }_r\) one has \(\alpha (h)c\mathcal{Q }_rh=0\), and hence \(\alpha (h)c=0\). In this case, by

$$\begin{aligned} \left[ b(xh)^n-\bigl (\alpha (x)\alpha (h)\bigr )^nc,(xh)^n\right] _k=0 \end{aligned}$$

for all \(x\in \mathcal{Q }_r\), we have that \(\mathcal{Q }_rh\) satisfies

$$\begin{aligned} \left[ bx^n,x^n\right] _k \end{aligned}$$

and again from [1], we have the contradiction \(b \in \mathcal{C }\).

In light of this contradiction and by (3.27), we have that, for every idempotent element \(h\in \mathcal{Q }_r\mathcal{I }\), the following holds:

$$\begin{aligned} \alpha (h)c(1-h)=0. \end{aligned}$$

Using this last in (3.26) we have both:

$$\begin{aligned} \begin{aligned}&\text{ for }\quad x=y=0,\quad bh-\alpha (h)ch-hbh+h\alpha (h)c=0 \end{aligned} \end{aligned}$$
(3.33)

and

$$\begin{aligned}&\text{ for }\quad x=0,\nonumber \\&bh-\alpha (h)ch-\alpha (1-h)y\alpha (h)ch-hbh+h\alpha (h) c+h\alpha (1-h)y\alpha (h)c=0.\nonumber \\ \end{aligned}$$
(3.34)

Comparing (3.33) with (3.34) it follows

$$\begin{aligned} \left( \alpha (1-h)-h\alpha (1-h)\right) \mathcal{Q }_r\alpha (h)c=0. \end{aligned}$$
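In detail, subtracting (3.33) from (3.34) and using \(\alpha (h)ch=\alpha (h)c\) (which is the relation \(\alpha (h)c(1-h)=0\) displayed above), we obtain

$$\begin{aligned} 0=-\alpha (1-h)y\alpha (h)ch+h\alpha (1-h)y\alpha (h)c=\bigl (h\alpha (1-h)-\alpha (1-h)\bigr )y\alpha (h)c \end{aligned}$$

for all \(y\in \mathcal{Q }_r\), which gives the displayed relation.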

As above, if there exists \(h^2=h\in \mathcal{Q }_r\mathcal{I }\) such that \(\alpha (h)c=0\), then we get the contradiction \(b\in \mathcal{C }\). Therefore we may assume that, for any idempotent element \(h\) of \(\mathcal{Q }_r\mathcal{I }\),

$$\begin{aligned} \alpha (1-h)=h\alpha (1-h). \end{aligned}$$
(3.35)

We would like to point out that the previous arguments imply the following:

Remark If \(h^2=h \in \mathcal{Q }_r\) and \(\mathcal J =\mathcal{Q }_rh\) is such that \([bx^n-\alpha (x)^nc,x^{np^t}]=0\) for all \(x\in \mathcal J \), then either \(b\in \mathcal{C }\) and \(\alpha (h)c=0\), or \(\alpha (1-h)=h\alpha (1-h)\).

Starting from (3.35) and left multiplying by \((1-h)\), we also have \((1-h)\alpha (1-h)=0\). Moreover, by applying \(\alpha ^{-1}\), it follows

$$\begin{aligned} 0=\left( \alpha ^{-1}(1-h)\right) (1-h) \end{aligned}$$

that is \(\alpha ^{-1}(1-h) \in \mathcal{Q }_rh\). In particular, this implies \(\alpha ^{-1}(\mathcal{Q }_r(1-h))\subseteq \mathcal{Q }_rh\). Therefore by our main hypothesis, \(\mathcal{Q }_r\) satisfies

$$\begin{aligned} \left[ b \alpha ^{-1}(x(1-h))^n-\alpha \left( \alpha ^{-1}(x(1-h))^n\right) c, \alpha ^{-1}(x(1-h))^{np^t}\right] \end{aligned}$$

that is

$$\begin{aligned} \left[ b \alpha ^{-1}(x(1-h))^n-(x(1-h))^nc, \alpha ^{-1}(x(1-h))^{np^t}\right] . \end{aligned}$$

By applying \(\alpha \) to this last identity, we get that

$$\begin{aligned} \left[ \alpha (b) (x(1-h))^n-\alpha \left( x(1-h)\right) ^n\alpha (c), (x(1-h))^{np^t}\right] \end{aligned}$$

is an identity for \(\mathcal{Q }_r\). In other words,

$$\begin{aligned} \left[ \alpha (b) x^n-\alpha (x)^n\alpha (c), x^{np^t}\right] \end{aligned}$$

is an identity for the left ideal \(\mathcal{Q }_r(1-h)\).

Since we assume \(b\notin \mathcal{C }\), by the above Remark, it follows that \(\alpha (1-f)=f\alpha (1-f)\), where \(f=1-h\). By easy computation, and since \((1-h)\alpha (1-h)=0\), it follows both

$$\begin{aligned} \alpha (h)=1-h,\qquad \alpha (1-h)=h. \end{aligned}$$
(3.36)
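For the reader's convenience, the computation can be carried out as follows: the Remark applied to \(f=1-h\) gives \(\alpha (h)=(1-h)\alpha (h)\), that is \(h\alpha (h)=0\); combining this with (3.35) and \(1=\alpha (h)+\alpha (1-h)\), we get

$$\begin{aligned} h=h\alpha (h)+h\alpha (1-h)=0+\alpha (1-h)=\alpha (1-h), \end{aligned}$$

and therefore \(\alpha (1-h)=h\) and \(\alpha (h)=1-\alpha (1-h)=1-h\).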

Finally consider the following idempotent element of \(\mathcal{Q }_r\mathcal{I }\): \(g=h+(1-h)xh\), for any \(x\in \mathcal{Q }_r\). Since \(\mathcal{Q }_rg\subseteq \mathcal{Q }_rh\), then, by the main assumption, \(\mathcal{Q }_rg\) satisfies

$$\begin{aligned} \left[ b x^n-\alpha (x)^nc, x^{np^t}\right] . \end{aligned}$$

Once again we apply the above stated Remark. Since \(b\notin \mathcal{C }\), then \(\alpha (1-g)=g\alpha (1-g)\). This implies that, for all \(x\in \mathcal{Q }_r\), the following holds:

$$\begin{aligned} \alpha \left( 1-h-(1-h)xh\right) =\left( h+(1-h)xh\right) \alpha \left( 1-h-(1-h)xh\right) \end{aligned}$$

that is

$$\begin{aligned} 1-\alpha (h)-\alpha (1-h)\alpha (x)\alpha (h)=(h+(1-h)xh) \left( 1-\alpha (h)-\alpha (1-h)\alpha (x)\alpha (h)\right) . \end{aligned}$$

Since \(\alpha \) is \(X\)-outer, then \(\mathcal{Q }_r\) satisfies

$$\begin{aligned} 1-\alpha (h)-\alpha (1-h)y\alpha (h)=(h+(1-h)xh)\left( 1-\alpha (h)-\alpha (1-h)y\alpha (h)\right) \end{aligned}$$

and by using (3.36) it follows

$$\begin{aligned} h-hy(1-h)=(h+(1-h)xh)(h-hy(1-h)) \end{aligned}$$

which implies

$$\begin{aligned} (1-h)xh-(1-h)xhy(1-h)=0. \end{aligned}$$

In particular, for \(y=0\), \((1-h)xh=0\) for all \(x\in \mathcal{Q }_r\), that is \((1-h)\mathcal{Q }_rh=0\), which is a contradiction by the primeness of \(\mathcal{Q }_r\).

All the previous contradictions imply that \(\alpha (\mathcal{I })c=0\) and \(b\in \mathcal{C }\).\(\square \)

3.1 The Proof of Theorem 2

Let \(d\) be a skew derivation of \(\mathcal{R }\) and \(a\in \mathcal{Q }_r\), such that \(F(x)=ax+d(x)\), for all \(x\in \mathcal{R }\). We will denote by \(\alpha \) the automorphism of \(\mathcal{R }\) associated with \(d\) and \(F\).

In case \(d\) is \(X\)-inner, there exists \(c\in \mathcal{Q }_r\) such that \(d(x)=cx-\alpha (x)c\), for all \(x\in \mathcal{R }\), so that \(F(x)=(a+c)x-\alpha (x)c\) and

$$\begin{aligned} \left[ (a+c)r^n-\alpha (r^n)c,r^n\right] _k=0 \end{aligned}$$

for all \(r\in \mathcal{I }\). Hence, by applying Lemmas 3.6 and 3.8 we get \(a+c \in \mathcal{C }\) and \(\alpha (\mathcal{I })c=0\), and we are done.

Assume now that \(d\) is not \(X\)-inner. Let \(c\in \mathcal{I }\). Since \(\mathcal{R }\) satisfies

$$\begin{aligned} \left[ a(xc)^n+d((xc)^n),(xc)^n\right] _k \end{aligned}$$

then \(\mathcal{R }\) satisfies

$$\begin{aligned} \left[ a(xc)^n+\sum _{i=0}^{n-1} \alpha (xc)^id(xc)(xc)^{n-i-1},(xc)^n\right] _k. \end{aligned}$$
(3.37)
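Here \(d((xc)^n)\) has been expanded by the usual power rule for skew derivations, which follows by induction from \(d(uv)=d(u)v+\alpha (u)d(v)\):

$$\begin{aligned} d(u^n)=d(u)u^{n-1}+\alpha (u)d(u^{n-1})=\cdots =\sum _{i=0}^{n-1} \alpha (u)^id(u)u^{n-i-1}, \end{aligned}$$

applied here with \(u=xc\).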

Since \(d\) is not \(X\)-inner, by Fact 1.4 and (3.37), \(\mathcal{R }\) satisfies

$$\begin{aligned} \left[ a(xc)^n+\sum _{i=0}^{n-1} \alpha (xc)^i\left( yc+\alpha (y)d(c)\right) (xc)^{n-i-1},(xc)^n\right] _k. \end{aligned}$$
(3.38)

In particular \(\mathcal{R }\) satisfies the following component from (3.38):

$$\begin{aligned} \left[ \sum _{i=0}^{n-1} \alpha (xc)^iyc(xc)^{n-i-1},(xc)^n\right] _k. \end{aligned}$$
(3.39)

In case \(\alpha \) is an inner automorphism of \(\mathcal{R }\), then there exists an invertible element \(q\in \mathcal{Q }_r\) such that \(\alpha (x)=qxq^{-1}\), for all \(x\in \mathcal{R }\). Thus, by (3.39), \(\mathcal{R }\) satisfies

$$\begin{aligned} \left[ \sum _{i=0}^{n-1} (qxcq^{-1})^iyc(xc)^{n-i-1},(xc)^n\right] _k. \end{aligned}$$
(3.40)

Notice that, if \(cq^{-1}=\gamma c\), for some \(\gamma \in \mathcal{C }\), then \(cq=\gamma ^{-1} c\), and left multiplying (3.40) by \(c, \mathcal{R }\) satisfies the non-trivial generalized polynomial identity

$$\begin{aligned} c\left[ \sum _{i=0}^{n-1} (xc)^iyc(xc)^{n-i-1},(xc)^n\right] _k. \end{aligned}$$
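The passage from (3.40) to this identity can be seen as follows: replace \(y\) by \(qy\) in (3.40), left multiply by \(c\), and use \((qxcq^{-1})^iq=q(xc)^i\), \(cq=\gamma ^{-1}c\) and \(c(xc)^{nj}=(cx)^{nj}c\); each term of the Engel bracket then transforms as

$$\begin{aligned} c(xc)^{nj}\left( \sum _{i=0}^{n-1} (qxcq^{-1})^iqyc(xc)^{n-i-1}\right) (xc)^{n(k-j)}=\gamma ^{-1}(cx)^{nj}c\left( \sum _{i=0}^{n-1} (xc)^iyc(xc)^{n-i-1}\right) (xc)^{n(k-j)}, \end{aligned}$$

so that \(c\) times the resulting identity coincides, up to the nonzero factor \(\gamma ^{-1}\), with the displayed one.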

On the other hand, if \(\{cq^{-1},c\}\) are linearly \(\mathcal{C }\)-independent, then (3.40) is a non-trivial generalized polynomial identity for \(\mathcal{R }\). Hence, in any case, \(\mathcal{R }\) is a GPI-ring. Therefore, by [20, Theorem 3] \(\mathcal{Q }_r\) is a primitive ring containing nonzero linear transformations of finite rank. Let \(e^2=e\in \mathcal{Q }_r\mathcal{I }\), so that \(\mathcal{Q }_r\) satisfies

$$\begin{aligned} \left[ \sum _{i=0}^{n-1} (qxeq^{-1})^iye(xe)^{n-i-1},(xe)^n\right] _k. \end{aligned}$$
(3.41)

We replace \(y\) by \(q(1-e)y, x\) by \(ex\), and left multiply (3.41) by \((1-e)\). Thus, by computation it follows that \(\mathcal{Q }_r\) satisfies \((1-e)q(1-e)ye(xe)^{n+nk-1}\) which implies \((1-e)q(1-e)=0\), for all \(e^2=e\in \mathcal{Q }_r\mathcal{I }\). If we replace \(e\) by \(e+(1-e)xe\), for all \(x\in \mathcal{Q }_r\), the following holds:

$$\begin{aligned} \left( (1-e)-(1-e)xe\right) q\left( (1-e)-(1-e)xe\right) =0 \end{aligned}$$

and right multiplying by \(e\), we obtain \((1-e)xeq(1-e)xe=0\). By the primeness of \(\mathcal{Q }_r\), we have \(eq(1-e)=0\); since \((1-e)q(1-e)=0\), this gives \(q(1-e)=eq(1-e)=0\), which is a contradiction because \(q\) is invertible and \(1-e\ne 0\).
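In detail, right multiplying the last displayed identity by \(e\) gives

$$\begin{aligned} 0=\bigl ((1-e)-(1-e)xe\bigr )q\bigl ((1-e)-(1-e)xe\bigr )e=-(1-e)q(1-e)xe+(1-e)xeq(1-e)xe=(1-e)xeq(1-e)xe, \end{aligned}$$

where the first summand vanishes because \((1-e)q(1-e)=0\).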

Consider now the case \(\alpha \) is \(X\)-outer. By Fact 1.5 and (3.39), \(\mathcal{Q }_r\) satisfies

$$\begin{aligned} \left[ \sum _{i=0}^{n-1} (z\alpha (e))^iye(xe)^{n-i-1},(xe)^n\right] _k. \end{aligned}$$
(3.42)

If we replace \(y\) by \(\alpha (1-e)y\) in (3.42), then

$$\begin{aligned} \left[ \alpha (1-e)ye(xe)^{n-1},(xe)^n\right] _k \end{aligned}$$
(3.43)

is a generalized identity for \(\mathcal{Q }_r\). Moreover, left multiplying by \(\alpha (e)\), it follows \(\alpha (e)(xe)^{nk}\alpha (1-e)ye(xe)^{n-1}=0\). By the primeness of \(\mathcal{Q }_r\), we have

$$\begin{aligned} \alpha (e)(xe)^{nk}\alpha (1-e)=0. \end{aligned}$$
(3.44)

First we assume \(e\alpha (1-e)\ne 0\). In this case, for \(x=e\) in (3.44), we get \(\alpha (e)e\alpha (1-e)=0\). Thus, by replacing in (3.43) \(x\) with \(x\alpha (e)\), one has \(\alpha (1-e)ye(x\alpha (e)e)^{n+nk-1}=0\). Once again by the primeness of \(\mathcal{Q }_r\), it follows that \(\alpha (e)e=0\).

Therefore either \(\alpha (e)e=0\) or \(e\alpha (1-e)=0\).

Assume \(\alpha (e)e=0\). Replacing \(x\) by \(ex\) and \(z\) by \(ez\) in (3.42), and left multiplying by \(\alpha (e)\), we have \(\alpha (e)ye(exe)^{n+nk-1}=0\), a contradiction since \(\alpha (e)\ne 0\).
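The collapse of the Engel bracket here can be made explicit: writing \(P=\sum _{i=0}^{n-1} (ez\alpha (e))^iye(exe)^{n-i-1}\), every summand of \(\alpha (e)\left[ P,(exe)^n\right] _k\) that begins with \(\alpha (e)(exe)\) or with \(\alpha (e)(ez\alpha (e))\) vanishes, since \(\alpha (e)e=0\); hence

$$\begin{aligned} 0=\alpha (e)\left[ P,(exe)^n\right] _k=\alpha (e)P(exe)^{nk}=\alpha (e)ye(exe)^{n+nk-1}. \end{aligned}$$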

On the other hand, if \(e\alpha (1-e)=0\), then by (3.43) it follows that \(\alpha (1-e)ye(xe)^{n+nk-1}=0\), which is again a contradiction, since \(\alpha (1-e)\ne 0\).