1 Introduction

Let R be a prime ring of characteristic different from 2. Throughout this paper Z(R) always denotes the center of R, \(Q_r\) the right Martindale quotient ring of R and \(C=Z(Q_r)\) the center of \(Q_r\) (C is usually called the extended centroid of R). Let \(S\subseteq R\) be a subset of R.

An additive map \(d:R\rightarrow R\) is called a derivation of S if \(d(xy)=d(x)y+xd(y)\), for all \(x,y \in S\). An additive map \(G:R\rightarrow R\) is called a generalized derivation of S if there exists a derivation d of R such that \(G(xy)= G(x)y+xd(y)\), for all \(x,y \in S\). The additive map \(F: R\rightarrow R\) is called a Lie derivation of S if \(F([x,y])=[F(x),y]+[x,F(y)]\), for any \(x,y \in S\). Of course any derivation is a Lie derivation. The problem of whether a Lie derivation is a derivation has been studied by several authors (see for example [3, 21] and the references therein).

Motivated by this, here we introduce the definition of g-Lie derivations. More precisely, let \(f,g:R\rightarrow R\) be two additive maps and \(S\subseteq R\) a subset of R. If \(f([x,y])=[f(x),g(y)]+[g(x),f(y)]\), for any \(x,y \in S\), then f is called a g-Lie derivation of S. It is clear that any Lie derivation is a 1-Lie derivation. The simplest example of a g-Lie derivation is the following:

Example 1.1

Let \(n\ge 2\) be an integer and C a field of prime characteristic \(2n-1\). Let R be a prime C-algebra, \(0\ne \lambda \in C\), \(f(x)=\lambda x\) and \(g(x)=nx\), for any \(x\in R\). Then f is a g-Lie derivation of R in the sense of the above definition: indeed \([f(x),g(y)]+[g(x),f(y)]=2n\lambda [x,y]=\lambda [x,y]=f([x,y])\), since \(2n=1\) in C. Moreover f is not a Lie derivation of R, since \([f(x),y]+[x,f(y)]=2\lambda [x,y]\ne \lambda [x,y]\) whenever \([x,y]\ne 0\).
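Example 1.1 is easy to test on a concrete ring. The following sketch (an illustration of ours, not part of the paper) takes \(n=2\), so \(char(C)=3\), with \(C=GF(3)\), \(R=M_2(GF(3))\) and \(\lambda =1\), and checks the g-Lie identity on random matrices.

```python
import numpy as np

p = 3                      # char(C) = 2n - 1 with n = 2
lam, n = 1, 2              # f(x) = lam*x, g(x) = n*x, lam != 0

def comm(x, y):
    return (x @ y - y @ x) % p

def f(m):
    return (lam * m) % p

def g(m):
    return (n * m) % p

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.integers(0, p, (2, 2))
    y = rng.integers(0, p, (2, 2))
    # f([x,y]) = [f(x),g(y)] + [g(x),f(y)], because 2n*lam = lam in char 3
    assert np.array_equal(f(comm(x, y)),
                          (comm(f(x), g(y)) + comm(g(x), f(y))) % p)

# f is not a Lie derivation: [f(x),y] + [x,f(y)] = 2*lam*[x,y] != lam*[x,y]
x, y = np.array([[0, 1], [0, 0]]), np.array([[0, 0], [1, 0]])
assert not np.array_equal(f(comm(x, y)),
                          (comm(f(x), y) + comm(x, f(y))) % p)
```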

One natural question is whether a g-Lie derivation of \(S\subseteq R\) is a Lie derivation of S. Here we take a first step toward this problem. To be more specific, in this paper we study the form of a generalized skew derivation f acting as a g-Lie derivation on the subset \(\{p(r_1,\ldots ,r_n) : r_1,\ldots ,r_n \in R\}\), where g is the automorphism associated with f and \(p(x_1,\ldots ,x_n)\) is a noncentral polynomial in n non-commuting variables.

More precisely, let \(\alpha \) be an automorphism of R. An additive mapping \(d: R\longrightarrow R\) is called a skew derivation of R if

$$ d(xy)=d(x)y+\alpha (x)d(y) $$

for all \(x, y\in R\) and \(\alpha \) is called an associated automorphism of d. An additive mapping \(G: R\longrightarrow R\) is said to be a generalized skew derivation of R if there exists a skew derivation d of R with associated automorphism \(\alpha \) such that

$$ G(xy)=G(x)y+\alpha (x)d(y) $$

for all \(x, y\in R\); d is said to be an associated skew derivation of G and \(\alpha \) is called an associated automorphism of G. Any mapping of R of the form \(G(x)=ax+\alpha (x)b\), for some \(a, b\in R\) and \(\alpha \in Aut(R)\), is called an inner generalized skew derivation. In particular, if \(a=-b\), then G is called an inner skew derivation. If a generalized skew derivation (respectively, a skew derivation) is not inner, then it is usually called outer.
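These defining identities can be checked numerically. The sketch below (illustrative only; the concrete matrices and the choice \(d(x)=\alpha (x)b-bx\) for the associated skew derivation are our own) verifies the rule \(G(xy)=G(x)y+\alpha (x)d(y)\) for an inner generalized skew derivation of \(M_2(\mathbb {R})\), with the inner automorphism \(\alpha (x)=qxq^{-1}\).

```python
import numpy as np

rng = np.random.default_rng(1)
q = np.array([[1.0, 2.0], [0.0, 1.0]])   # invertible, so alpha is inner
qi = np.linalg.inv(q)
a = rng.standard_normal((2, 2))
b = rng.standard_normal((2, 2))

alpha = lambda x: q @ x @ qi             # inner automorphism alpha(x) = q x q^{-1}
G = lambda x: a @ x + alpha(x) @ b       # inner generalized skew derivation
d = lambda x: alpha(x) @ b - b @ x       # candidate associated (inner) skew derivation

for _ in range(50):
    x, y = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
    # d is a skew derivation: d(xy) = d(x)y + alpha(x)d(y)
    assert np.allclose(d(x @ y), d(x) @ y + alpha(x) @ d(y))
    # G is a generalized skew derivation: G(xy) = G(x)y + alpha(x)d(y)
    assert np.allclose(G(x @ y), G(x) @ y + alpha(x) @ d(y))
```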

In light of the above definitions, one can see that the concept of generalized skew derivation unifies the notions of skew derivation and generalized derivation.

It is well known that automorphisms, derivations, and skew derivations of R can be extended to \(Q_r\). In [4] Chang extends the definition of generalized skew derivation to the right Martindale quotient ring \(Q_r\) of R as follows: by a (right) generalized skew derivation we mean an additive mapping \(G: Q_r \longrightarrow Q_r\) such that \(G(xy)=G(x)y+\alpha (x)d(y)\) for all \(x, y\in Q_r\), where d is a skew derivation of R and \(\alpha \) is an automorphism of R. Moreover, \(G(x)=ax+d(x)\) for all \(x\in R\), where \(a=G(1)\in Q_r\).

The main result of this article is

Theorem 1

Let R be a prime ring of characteristic different from 2, \(Q_r\) its right Martindale quotient ring and C its extended centroid. Suppose that F is a nonzero generalized skew derivation of R, with the associated automorphism \(\alpha \), and \(p(x_1,\ldots ,x_n)\) a noncentral polynomial over C, such that

$$F\biggl ([x,y]\biggr )=[F(x),\alpha (y)]+[\alpha (x),F(y)]$$

for all \(x,y \in \{p(r_1,\ldots ,r_n) : r_1,\ldots ,r_n \in R\}\). Then \(\alpha \) is the identity map on R and F is an ordinary derivation of R.

2 Preliminaries

We now collect some facts, which follow from results in [6–9] and will be used in the sequel.

Fact 2.1

In [11] Chuang and Lee investigate polynomial identities with skew derivations. They prove that if \(\Phi (x_i,D(x_i))\) is a generalized polynomial identity for R, where R is a prime ring and D is an outer skew derivation of R, then R also satisfies the generalized polynomial identity \(\Phi (x_i,y_i)\), where \(x_i\) and \(y_i\) are distinct indeterminates.

Fact 2.2

Let R be a prime ring and I be a two-sided ideal of R. Then I, R, and \(Q_r\) satisfy the same generalized polynomial identities with coefficients in \(Q_r\) (see [6]). Furthermore, I, R, and \(Q_r\) satisfy the same generalized polynomial identities with automorphisms (see [8, Theorem 1]).

Remark 2.3

We would like to point out that in [18] Lee proves that every generalized derivation can be uniquely extended to a generalized derivation of U, the Utumi quotient ring of R; thus all generalized derivations of R will be implicitly assumed to be defined on the whole of U. In particular Lee proves the following result:

Theorem 3 in [18]: Every generalized derivation g on a dense right ideal of R can be uniquely extended to U and assumes the form \(g(x)=ax+d(x)\), for some \(a\in U\) and a derivation d on U.

We also need the following:

Remark 2.4

Let R be a non-commutative prime ring of characteristic different from 2 and let \(D_1\) and \(D_2\) be derivations of R such that \(D_1(x)D_2(x)=0\) for all \(x \in R\). Then either \(D_1=0\) or \(D_2=0\).

Proof

It is a reduced version of Theorem 3 in [22].\(\quad \square \)

Remark 2.5

Let R be a prime ring and \(a \in R\). If \([x_1,x_2]a\in Z(R)\), for any \(x_1,x_2 \in R\), then either R is commutative or \(a=0\).

Proof

Since for all \(x_1,x_2 \in R\) we have \([[x_1,x_2]a,[x_1,x_2]]=0\), then \([x_1,x_2][a,[x_1,x_2]]=0\). As a consequence of [19, Theorem 2], either R is commutative or \(a\in Z(R)\). Moreover, in this last case and by our hypothesis, it follows that either \(a=0\) or \(a\ne 0\) and \([x_1,x_2] \in Z(R)\), for all \(x_1,x_2 \in R\), that is R is commutative. \(\quad \square \)

Remark 2.6

Let R be a non-commutative prime ring and \(a\in R\) be such that

$$\begin{aligned}{}[x_1,x_2]a[y_1,y_2]-[y_1,y_2]a[x_1,x_2] \end{aligned}$$
(2.1)

is a generalized polynomial identity for R. Then \(a=0\).

Proof

In relation (2.1) we replace \(y_2\) with \(y_2t\), for any \(t\in R\). Using again (2.1), we have that R satisfies

$$\begin{aligned}{}[y_1,y_2]\biggl [a[x_1,x_2],t\biggr ]+\biggl [[x_1,x_2]a,y_2\biggr ][y_1,t]. \end{aligned}$$
(2.2)

For \(t=a[x_1,x_2]\) in (2.2), it follows that

$$\begin{aligned} \biggl [[x_1,x_2]a,y_2\biggr ]\biggl [y_1,a[x_1,x_2]\biggr ] \end{aligned}$$
(2.3)

is a generalized polynomial identity for R. Let \(x_1,x_2 \in R\) and let \(D_1\) and \(D_2\) be the inner derivations of R induced respectively by \([x_1,x_2]a\) and \(a[x_1,x_2]\). By (2.3) we get \(D_1(y_2)D_2(y_1)=0\), for any \(y_1,y_2 \in R\). By Remark 2.4 we have that either \(D_1=0\) or \(D_2=0\). This means that, for any \(x_1,x_2 \in R\), either \([x_1,x_2]a\in Z(R)\) or \(a[x_1,x_2]\in Z(R)\).

Let \(x_1,x_2 \in R\) be such that \(a[x_1,x_2]\in Z(R)\).

Thus, by (2.2) it follows that \(\biggl [[x_1,x_2]a,y_2\biggr ][y_1,t]=0\), for any \(y_1,y_2,t\in R\) and, using again Remark 2.4, we have that \([x_1,x_2]a\in Z(R)\), for any \(x_1,x_2 \in R\). Therefore, by Remark 2.5 and since R is not commutative, we get \(a=0\). \(\quad \square \)
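The substitution step in this proof can be double-checked numerically. The sketch below (ours, for illustration) verifies on random real matrices that replacing \(y_2\) by \(y_2t\) in (2.1) differs from the expression in (2.2) by exactly two instances of (2.1), so that (2.2) is indeed an identity for R whenever (2.1) is.

```python
import numpy as np

# Generic 3x3 real matrices play the role of ring elements.
rng = np.random.default_rng(1)
comm = lambda u, v: u @ v - v @ u

def delta(a, x1, x2, u, v):
    """Delta = [x1,x2] a [u,v] - [u,v] a [x1,x2], the expression in (2.1)."""
    X, Y = comm(x1, x2), comm(u, v)
    return X @ a @ Y - Y @ a @ X

for _ in range(25):
    a, x1, x2, y1, y2, t = (rng.standard_normal((3, 3)) for _ in range(6))
    X = comm(x1, x2)
    # Substituting y2 -> y2*t in (2.1) minus the expression in (2.2)
    # equals a combination of instances of (2.1):
    lhs = delta(a, x1, x2, y1, y2 @ t)
    expr_2_2 = (comm(y1, y2) @ comm(a @ X, t)
                + comm(X @ a, y2) @ comm(y1, t))
    assert np.allclose(lhs - expr_2_2,
                       delta(a, x1, x2, y1, y2) @ t + y2 @ delta(a, x1, x2, y1, t))
```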

Remark 2.7

Let R be a non-commutative prime ring and \(F:R\rightarrow R\) a generalized derivation of R. If F acts as a Lie derivation of \([R,R]\), then F is an ordinary derivation of R.

Proof

Since F acts as a Lie derivation, we have that \([R,R]\) satisfies \(F([u,v])-[F(u),v]-[u,F(v)]\). Using Remark 2.3, one has that there exist \(a\in U\) and a derivation d on U such that \(F(x)=ax+d(x)\), for any \(x\in R\).

By easy calculations it follows that \([R,R]\) satisfies the generalized identity \(uav-vau\), that is R satisfies

$$\begin{aligned}{}[x_1,x_2]a[y_1,y_2]-[y_1,y_2]a[x_1,x_2]. \end{aligned}$$

Hence, by Remark 2.6, we get \(a=0\) and \(F=d\) is an ordinary derivation of R.\(\quad \square \)
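The "easy calculations" behind Remark 2.7 can be illustrated concretely: for \(F(x)=ax+d(x)\), the Lie-derivation defect on a commutator equals \(vau-uav\), so F is a Lie derivation of \([R,R]\) exactly when \(uav-vau\) vanishes there. The sketch below (ours; d is taken inner for concreteness) checks this on random matrices.

```python
import numpy as np

rng = np.random.default_rng(2)
comm = lambda u, v: u @ v - v @ u

a = rng.standard_normal((3, 3))
c = rng.standard_normal((3, 3))
d = lambda x: c @ x - x @ c        # an inner derivation, chosen for concreteness
F = lambda x: a @ x + d(x)         # generalized derivation F(x) = ax + d(x)

for _ in range(25):
    u, v = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
    defect = F(comm(u, v)) - comm(F(u), v) - comm(u, F(v))
    # The defect is v a u - u a v: the d-part of F satisfies the Lie rule exactly
    assert np.allclose(defect, v @ a @ u - u @ a @ v)
```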

As an easy consequence we also have that

Remark 2.8

Let R be a non-commutative prime ring, \(a\in R\) and \(F:R\rightarrow R\) be such that \(F(x)=ax\), for any \(x\in R\). If F acts as a Lie derivation of \([R,R]\), then \(a=0\), that is \(F=0\).

3 The Case of Inner Generalized Skew Derivations

In the first part of this section we will prove the following:

Proposition 3.1

Let R be a non-commutative prime ring of characteristic different from 2, \(Q_r\) be its right Martindale quotient ring and C be its extended centroid. Suppose that \(\alpha \) is an inner automorphism of R induced by the invertible element \(q\in Q_r\) and F is an inner generalized skew derivation of R defined as follows: \(F(x)=ax+qxq^{-1}b\), for all \(x\in R\) and suitable fixed \(a,b \in Q_r\). If

$$F\biggl ([x,y]\biggr )=[F(x),\alpha (y)]+[\alpha (x),F(y)]$$

for all \(x,y \in [R,R]\), then \(a+b=0\) and either \(q\in C\) or \(q^{-1}b\in C\).

In view of the hypothesis of Proposition 3.1, R satisfies the following generalized polynomial identity

$$\begin{aligned}&\Psi (x_1,x_2,y_1,y_2)=\nonumber \\&\biggl [a[x_1,x_2]+q[x_1,x_2]q^{-1}b,q[y_1,y_2]q^{-1}\biggr ]+\biggl [q[x_1,x_2]q^{-1},a[y_1,y_2]+q[y_1,y_2]q^{-1}b\biggr ]\nonumber \\&\quad -a\biggl [[x_1,x_2],[y_1,y_2]\biggr ]-q\biggl [[x_1,x_2],[y_1,y_2]\biggr ]q^{-1}b. \end{aligned}$$
(3.1)

Lemma 3.2

If \(q^{-1}a \in C\), then \(q^{-1}b \in C\) and \(a+b=0\).

Proof

Left multiplying (3.1) by \(q^{-1}\) and since \(q^{-1}a\in C\), one has that R satisfies

$$\begin{aligned} \biggl ([x_1,x_2](a+q^{-1}bq)[y_1,y_2]-[y_1,y_2](a+q^{-1}bq)[x_1,x_2]\biggr )q^{-1} \end{aligned}$$
(3.2)

so that, right multiplying by q, it follows that

$$\begin{aligned}{}[x_1,x_2](a+q^{-1}bq)[y_1,y_2]-[y_1,y_2](a+q^{-1}bq)[x_1,x_2] \end{aligned}$$
(3.3)

is a generalized polynomial identity for R. Therefore, by Remark 2.6 we get \(a=-q^{-1}bq\). Moreover, since \(q^{-1}a\in C\), we notice that \(0=[q^{-1}a,q]=q^{-1}aq-a\), that is \(q^{-1}aq=a=-q^{-1}bq\). Therefore \(q^{-1}b=-q^{-1}a\in C\) and \(a+b=0\), as required. \(\quad \square \)

Remark 3.3

Notice that, in case \(q\in C\), F is a generalized derivation of R and the conclusion of Proposition 3.1 follows directly from Remark 2.7. Moreover, in light of Lemma 3.2, we are also done in the case \(q^{-1}a \in C\).

We begin with the following

Fact 3.4

(Lemma 1.5 in [12]) Let H be an infinite field and \(m\ge 2\). If \(A_1,\ldots ,A_k\) are non-scalar matrices in \(M_m(H)\), then there exists some invertible matrix \(P\in M_m(H)\) such that each of the matrices \(PA_1P^{-1},\ldots ,PA_kP^{-1}\) has all nonzero entries.

Lemma 3.5

Let \(R=M_m(C)\), \(m\ge 2\), with C infinite, and let Z(R) be the center of R. Let \(a, b, q\) be elements of R with q invertible. If R satisfies \(\Psi (x_1,x_2,y_1,y_2)\), then \(a+b=0\) and one of the following holds:

  (a) \(q \in Z(R)\);

  (b) \(q^{-1}b\in Z(R)\).

Proof

If either \(q \in Z(R)\) or \(q^{-1}a\in Z(R)\), then the conclusion follows from Remark 3.3.

We assume that \(q^{-1}a\notin Z(R)\) and \(q \notin Z(R)\), that is both \(q^{-1}a\) and q are not scalar matrices, and prove that a contradiction follows.

By Fact 3.4, there exists some invertible matrix \(P\in M_m(C)\) such that each of the matrices \(P(q^{-1}a)P^{-1}, PqP^{-1}\) has all nonzero entries. Denote by \(\varphi (x)=PxP^{-1}\) the inner automorphism induced by P. Without loss of generality we may replace q and \(q^{-1}a\) with \(\varphi (q)\) and \(\varphi (q^{-1}a)\), respectively, and denote \(q=\sum q_{lm}e_{lm}\) and \(q^{-1}a=\sum a_{lm}e_{lm}\), for some \(q_{lm}, a_{lm} \in C\).

Let \(e_{ij}\) be the usual matrix unit, with 1 in the (i,j)-entry and zero elsewhere. For any \(i\ne j\), choosing \([x_1,x_2]=[e_{ii},e_{ij}]=e_{ij}\) and \([y_1,y_2]=[e_{ji},e_{ij}]=e_{jj}-e_{ii}\) in (3.1), we obtain

$$\begin{aligned}&\biggl [ae_{ij}+qe_{ij}q^{-1}b,q(e_{jj}-e_{ii})q^{-1}\biggr ]\nonumber \\&\quad +\biggl [qe_{ij}q^{-1},a(e_{jj}-e_{ii})+q(e_{jj}-e_{ii})q^{-1}b\biggr ]\nonumber \\&\quad -2ae_{ij}-2qe_{ij}q^{-1}b=0. \end{aligned}$$
(3.4)

Left multiplying (3.4) by \(e_{ij}q^{-1}\) and right multiplying by \(qe_{ij}\), we get \(4e_{ij}q^{-1}ae_{ij}qe_{ij}=0\), that is \(a_{ji}q_{ji}=0\), which is a contradiction. \(\quad \square \)
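The final step rests on the matrix-unit identity \(e_{ij}ue_{ij}=u_{ji}e_{ij}\), so that \(e_{ij}(q^{-1}a)e_{ij}qe_{ij}=a_{ji}q_{ji}e_{ij}\). The sketch below (illustrative, with generic matrices u, v standing for \(q^{-1}a\) and q) confirms this.

```python
import numpy as np

rng = np.random.default_rng(4)
n, i, j = 3, 0, 1                        # any pair i != j
e = np.zeros((n, n)); e[i, j] = 1.0      # the matrix unit e_ij

u = rng.standard_normal((n, n))          # plays the role of q^{-1}a
v = rng.standard_normal((n, n))          # plays the role of q

# e_ij u e_ij = u_ji e_ij, hence e_ij u e_ij v e_ij = u_ji v_ji e_ij
assert np.allclose(e @ u @ e, u[j, i] * e)
assert np.allclose(e @ u @ e @ v @ e, u[j, i] * v[j, i] * e)
```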

Lemma 3.6

Let \(R=M_m(C)\), \(m\ge 2\). Then Proposition 3.1 holds.

Proof

If one assumes that C is infinite, the conclusion follows from Lemma 3.5.

Now, let E be an infinite field extension of C and let \(\overline{R}=M_m(E)\cong R\otimes _C E\). Consider the generalized polynomial \(\Psi (x_1,x_2,y_1,y_2)\), which is a multilinear generalized polynomial identity for R. Clearly, \(\Psi (x_1,x_2,y_1,y_2)\) is a generalized polynomial identity for \(\overline{R}\) too, and the conclusion follows from Lemma 3.5. \(\quad \square \)

Lemma 3.7

Either \(\Psi (x_1,x_2,y_1,y_2)\) is a nontrivial generalized polynomial identity for R or \(a+b=0\) and one of the following holds:

  (a) \(q \in Z(R)\);

  (b) \(q^{-1}b\in Z(R)\).

Proof

Consider the generalized polynomial (3.1)

$$\begin{aligned} \begin{aligned}&\Psi (x_1,x_2,y_1,y_2)=\\&\biggl [a[x_1,x_2]+q[x_1,x_2]q^{-1}b,q[y_1,y_2]q^{-1}\biggr ]+\biggl [q[x_1,x_2]q^{-1},a[y_1,y_2]+q[y_1,y_2]q^{-1}b\biggr ]\\&\quad -a\biggl [[x_1,x_2],[y_1,y_2]\biggr ]-q\biggl [[x_1,x_2],[y_1,y_2]\biggr ]q^{-1}b. \end{aligned} \end{aligned}$$

By our hypothesis, R satisfies this generalized polynomial identity. Replacing \([x_1,x_2]\) by \(q^{-1}[x_1,x_2]q\) and \([y_1,y_2]\) by \(q^{-1}[y_1,y_2]q\), we have that R satisfies the generalized polynomial identity

$$\begin{aligned}&\biggl [aq^{-1}[x_1,x_2]q+[x_1,x_2]b,[y_1,y_2]\biggr ]\nonumber \\&\quad +\biggl [[x_1,x_2],aq^{-1}[y_1,y_2]q+[y_1,y_2]b\biggr ]\nonumber \\&\quad -a\biggl [q^{-1}[x_1,x_2]q,q^{-1}[y_1,y_2]q\biggr ]-\biggl [[x_1,x_2],[y_1,y_2]\biggr ]b. \end{aligned}$$
(3.5)

If \(\{aq^{-1},1\}\) is linearly independent over C, then (3.5) is a nontrivial generalized polynomial identity for R. Therefore, we may assume in all that follows that \(\{aq^{-1},1\}\) is linearly dependent over C, that is \(aq^{-1}\in C\).

By (3.5) we have that R satisfies

$$\begin{aligned}{}[x_1,x_2](a+b)[y_1,y_2]-[y_1,y_2](a+b)[x_1,x_2] \end{aligned}$$

and by Remark 2.6 it follows that \(a+b=0\). Hence the generalized polynomial (3.1) reduces to

$$\begin{aligned}&\Psi (x_1,x_2,y_1,y_2)=\nonumber \\&\biggl [a[x_1,x_2]-q[x_1,x_2]q^{-1}a,q[y_1,y_2]q^{-1}\biggr ]+\biggl [q[x_1,x_2]q^{-1},a[y_1,y_2]-q[y_1,y_2]q^{-1}a\biggr ]\nonumber \\&\quad -a\biggl [[x_1,x_2],[y_1,y_2]\biggr ]+q\biggl [[x_1,x_2],[y_1,y_2]\biggr ]q^{-1}a. \end{aligned}$$
(3.6)

Left multiplying (3.6) by \(q^{-1}\) and by easy computations, one has that R satisfies

$$\begin{aligned} \begin{aligned}&q^{-1}a\biggl ([x_1,x_2]q[y_1,y_2]q^{-1}+[y_1,y_2]q[x_1,x_2]q^{-1}-[x_1,x_2][y_1,y_2]+[y_1,y_2][x_1,x_2]\biggr )\\&\quad +[x_1,x_2]q^{-1}a[y_1,y_2]-[y_1,y_2]q^{-1}a[x_1,x_2]+[y_1,y_2]a[x_1,x_2]q^{-1}-[x_1,x_2]a[y_1,y_2]q^{-1}. \end{aligned} \nonumber \\ \end{aligned}$$
(3.7)

If \(\{q^{-1}a,1\}\) are linearly dependent over C, then \(q^{-1}a\in C\), that is \(q^{-1}b \in C\) and we are done. On the other hand, if \(\{q^{-1}a,1\}\) are linearly independent over C and since (3.7) is a trivial generalized polynomial identity for R, then R satisfies

$$\begin{aligned}{}[x_1,x_2]q^{-1}a[y_1,y_2]-[y_1,y_2]q^{-1}a[x_1,x_2]+[y_1,y_2]a[x_1,x_2]q^{-1}-[x_1,x_2]a[y_1,y_2]q^{-1}. \nonumber \\ \end{aligned}$$
(3.8)

Moreover, if \(q\in C\), then the conclusion follows from Remark 3.3, so that we may assume \(q\notin C\). In this last case, by (3.8) it follows that R satisfies the nontrivial generalized polynomial identity

$$\begin{aligned}{}[x_1,x_2]q^{-1}a[y_1,y_2]-[y_1,y_2]q^{-1}a[x_1,x_2] \end{aligned}$$

which is a contradiction. \(\quad \square \)

Proof of Proposition 3.1. The generalized polynomial \(\Psi (x_1,x_2,y_1,y_2)\) is a generalized polynomial identity for R. By Lemma 3.7, we may assume that \(\Psi (x_1,x_2,y_1,y_2)\) is a nontrivial generalized polynomial identity for R and, by [6], it follows that \(\Psi (x_1,x_2,y_1,y_2)\) is a nontrivial generalized polynomial identity for \(Q_r\). By the well-known Martindale theorem of [20], \(Q_r\) is a primitive ring having nonzero socle with the field C as its associated division ring. By [15, p. 75], \(Q_r\) is isomorphic to a dense subring of the ring of linear transformations of a vector space V over C, containing nonzero linear transformations of finite rank. Assume first that \(\mathrm{dim}_CV=k\ge 2\) is finite; then \(Q_r\cong M_k(C)\) and the conclusion follows from Lemma 3.6.

Let now \(\mathrm{dim}_CV=\infty \).

Let \(x_0, y_0\in R\). By Litoff’s theorem (see Theorem 4.3.11 in [2]) there exists an idempotent element \(e\in R\) such that \(x_0, y_0, a, b, q, q^{-1}a, q^{-1}b \in eRe \cong M_k(C)\) for some integer k. Of course

$$\begin{aligned}&\biggl [a[r_1,r_2]+q[r_1,r_2]q^{-1}b,q[s_1,s_2]q^{-1}\biggr ]+\biggl [q[r_1,r_2]q^{-1},a[s_1,s_2]+q[s_1,s_2]q^{-1}b\biggr ]\nonumber \\&\quad -a\biggl [[r_1,r_2],[s_1,s_2]\biggr ]-q\biggl [[r_1,r_2],[s_1,s_2]\biggr ]q^{-1}b=0, \quad \forall r_1,r_2,s_1,s_2 \in eRe. \end{aligned}$$
(3.9)

For the sake of clarity, here we write \(F(x)=ax+qxq^{-1}b\), for any \(x\in eRe\). By Lemma 3.6 one of the following holds:

  (a) eRe is commutative; in particular \(q^{-1}b\) and q are central elements of eRe. In this case \(F(x)=(a+b)x\), for any \(x\in eRe\), that is, F is a generalized derivation of eRe. Moreover, since eRe satisfies (3.9) and q is a central element of eRe, we have that F acts as a Lie derivation on the set \([eRe,eRe]\). Thus, by Remark 2.7 it follows that F is an ordinary derivation of eRe; in particular \(F(x_0y_0)=F(x_0)y_0+x_0F(y_0)\).

  (b) \(a+b=0\) and q is a central element of eRe. In this case \(F(x)=ax-xa\), for any \(x\in eRe\), that is, F is an inner ordinary derivation of eRe and once again \(F(x_0y_0)=F(x_0)y_0+x_0F(y_0)\) holds.

  (c) \(a+b=0\) and \(q^{-1}b, q^{-1}a\) are central elements of eRe. In this case \(F(x)=0\), for any \(x\in eRe\); in particular \(F(x_0)=F(y_0)=0\).

Therefore, in any case \(F(x_0y_0)=F(x_0)y_0+x_0F(y_0)\) holds. Since \(x_0, y_0\) were arbitrary, F satisfies the rule \(F(xy)=F(x)y+xF(y)\) for any \(x,y \in R\), that is, F acts as a derivation on R, as required. \(\quad \square \)

Now we extend the previous result to the case in which the automorphism \(\alpha \) is not necessarily inner.

Proposition 3.8

Let R be a non-commutative prime ring of characteristic different from 2, \(Q_r\) be its right Martindale quotient ring and C be its extended centroid. Suppose that F is an inner generalized skew derivation of R, with associated automorphism \(\alpha \), defined as follows: \(F(x)=ax+\alpha (x)b\) for all \(x\in R\) and suitable fixed \(a,b\in Q_r\). If \(F\ne 0\) and

$$F\biggl ([x,y]\biggr )=[F(x),\alpha (y)]+[\alpha (x),F(y)]$$

for all \(x,y \in [R,R]\), then \(\alpha \) is the identity map on R and \(a+b=0\).

In order to prove Proposition 3.8 we need to fix the following useful results:

Remark 3.9

Let R be a prime ring of characteristic different from 2.

If \(\bigl [[x_1,x_2],[y_1,y_2]\bigr ]\in Z(R)\), for all \(x_1,x_2,y_1,y_2\in R\), then R is commutative.

Proof

Since R is a prime ring satisfying the polynomial identity

$$\biggl [ \bigl [[x_1,x_2],[y_1,y_2]\bigr ],x_3\biggr ]$$

then there exists a field K such that R and \(M_t(K)\), the ring of all \(t\times t\) matrices over K, satisfy the same polynomial identities (see [16]).

Suppose \(t\ge 2\). Let \(x_1=e_{11}\), \(x_2=e_{12}\), \(y_1=e_{22}\) and \(y_2=e_{21}\). By calculation we obtain \(\bigl [[x_1,x_2],[y_1,y_2]\bigr ]=[e_{12},e_{21}]=e_{11}-e_{22}\notin Z(R)\), a contradiction. So \(t=1\) and R is commutative. \(\quad \square \)
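The commutator computation \([e_{12},e_{21}]=e_{11}-e_{22}\) used here can be confirmed directly; the sketch below (illustrative) evaluates the matrix units in \(M_2(\mathbb {Z})\).

```python
import numpy as np

def E(i, j, n=2):
    """Matrix unit with a 1 in position (i, j), 0-indexed."""
    m = np.zeros((n, n), dtype=int)
    m[i, j] = 1
    return m

comm = lambda x, y: x @ y - y @ x

# With x1 = e11, x2 = e12, y1 = e22, y2 = e21:
assert np.array_equal(comm(E(0, 0), E(0, 1)), E(0, 1))   # [x1, x2] = e12
assert np.array_equal(comm(E(1, 1), E(1, 0)), E(1, 0))   # [y1, y2] = e21
inner = comm(E(0, 1), E(1, 0))
assert np.array_equal(inner, E(0, 0) - E(1, 1))          # [[x1,x2],[y1,y2]] = e11 - e22
# e11 - e22 is not central: it does not commute with e12
assert not np.array_equal(inner @ E(0, 1), E(0, 1) @ inner)
```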

Remark 3.10

Let R be a prime ring of characteristic different from 2 and \(a\in R\). If \(a\bigl [[x_1,x_2],[y_1,y_2]\bigr ]=0\) (respectively \(\bigl [[x_1,x_2],[y_1,y_2]\bigr ]\,a=0\)), for all \(x_1,x_2,y_1,y_2\in R\), then either \(a=0\) or R is commutative.

Proof

By Remark 3.9 we may assume that the polynomial \(\bigl [[x_1,x_2],[y_1,y_2]\bigr ]\) is not central in R. Therefore \(a=0\) follows from [10]. \(\quad \square \)

Remark 3.11

Let R be a prime ring of characteristic different from 2 and \(a\in R\). If \(\bigl [a[x_1,x_2],[x_1,x_2]\bigr ]=0\) (respectively \(\bigl [[x_1,x_2]a,[x_1,x_2]\bigr ]=0\)), for all \(x_1,x_2\in R\), then \(a\in Z(R)\).

Proof

It is an easy consequence of [1]. \(\quad \square \)

Proof of Proposition 3.8. If there exists an invertible element \(q\in Q_r\) such that \(\alpha (x)=qxq^{-1}\), for all \(x\in R\), then the conclusion follows from Proposition 3.1. Hence, in what follows we assume that \(\alpha \) is not an inner automorphism of R and prove that a contradiction follows. Thus, since R satisfies

$$\begin{aligned}&\biggl [a[x_1,x_2]+\alpha ([x_1,x_2])b,\alpha ([y_1,y_2])\biggr ]\nonumber \\&\quad +\biggl [\alpha ([x_1,x_2]),a[y_1,y_2]+\alpha ([y_1,y_2])b\biggr ]\nonumber \\&\quad -a\biggl [[x_1,x_2],[y_1,y_2]\biggr ]-\alpha \biggl (\biggl [[x_1,x_2],[y_1,y_2]\biggr ]\biggr )b \end{aligned}$$
(3.10)

then

$$\begin{aligned}&\biggl [a[x_1,x_2]+[t_1,t_2]b,[z_1,z_2]\biggr ]\nonumber \\&\quad +\biggl [[t_1,t_2],a[y_1,y_2]+[z_1,z_2]b\biggr ]\nonumber \\&\quad -a\biggl [[x_1,x_2],[y_1,y_2]\biggr ]-\biggl [[t_1,t_2],[z_1,z_2]\biggr ]b \end{aligned}$$
(3.11)

is a generalized identity for R. In particular R satisfies \(a\biggl [[x_1,x_2],[y_1,y_2]\biggr ]\), which implies that \(a=0\) (see Remark 3.10). Hence (3.11) reduces to

$$\begin{aligned}{}[t_1,t_2]b[z_1,z_2]-[z_1,z_2]b[t_1,t_2] \end{aligned}$$

and, by Remark 2.6, we get \(b=0\), which implies the contradiction \(F=0\). \(\quad \square \)
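The reduction of (3.11) at \(a=0\) to \([t_1,t_2]b[z_1,z_2]-[z_1,z_2]b[t_1,t_2]\) can be checked on random matrices (an illustration of ours, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(3)
comm = lambda u, v: u @ v - v @ u

for _ in range(25):
    t1, t2, z1, z2, b = (rng.standard_normal((3, 3)) for _ in range(5))
    T, Z = comm(t1, t2), comm(z1, z2)
    # (3.11) with a = 0: [Tb, Z] + [T, Zb] - [T, Z]b collapses to TbZ - ZbT
    lhs = comm(T @ b, Z) + comm(T, Z @ b) - comm(T, Z) @ b
    assert np.allclose(lhs, T @ b @ Z - Z @ b @ T)
```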

4 The Proof of Theorem 1

As mentioned in the Introduction, we can write \(F(x)=ax+f(x)\), for all \(x\in R\), where \(a\in Q_r\) and f is a skew derivation of R. Let \(\alpha \) be the automorphism associated with f. That is \(f(xy)=f(x)y+\alpha (x)f(y)\), for all \(x,y\in R\).

Remark 4.1

Let S be the additive subgroup generated by the set

$$p(R)=\{p(r_1,\ldots ,r_n) : r_1,\ldots ,r_n\in R\}\ne 0.$$

Of course \(F([x,y])=[F(x),\alpha (y)]+[\alpha (x),F(y)]\), for all \(x,y\in S\). Since \(p(x_1,\ldots ,x_n)\) is not central in R, by [5] and \(char(R)\ne 2\), it follows that there exists a noncentral Lie ideal L of R such that \(L\subseteq S\). Moreover it is well known that there exists a nonzero ideal I of R such that \([I,R]\subseteq L\) (see [14, pp. 4–5], [13, Lemma 2, Proposition 1], [17, Theorem 4]).

Proof of Theorem 1. By Remark 4.1 we may assume that there exists a nonzero ideal I of R such that

$$\begin{aligned}&\biggl [a[x_1,x_2]+f([x_1,x_2]),\alpha ([y_1,y_2])\biggr ]+\biggl [\alpha ([x_1,x_2]),a[y_1,y_2]+f([y_1,y_2])\biggr ]\nonumber \\&\quad -a\biggl [[x_1,x_2],[y_1,y_2]\biggr ]-f\biggl (\biggl [[x_1,x_2],[y_1,y_2]\biggr ]\biggr ) \end{aligned}$$
(4.1)

is satisfied by I. Since I and R satisfy the same generalized identities with derivations and automorphisms, then (4.1) is a generalized differential identity for R, that is R satisfies

$$\begin{aligned}&\biggl [a[x_1,x_2]+f(x_1)x_2+\alpha (x_1)f(x_2)-f(x_2)x_1-\alpha (x_2)f(x_1),\alpha ([y_1,y_2])\biggr ]\nonumber \\&\quad +\biggl [\alpha ([x_1,x_2]),a[y_1,y_2]+f(y_1)y_2+\alpha (y_1)f(y_2)-f(y_2)y_1-\alpha (y_2)f(y_1)\biggr ]\nonumber \\&\quad -a\biggl [[x_1,x_2],[y_1,y_2]\biggr ]-\biggl (f(x_1)x_2+\alpha (x_1)f(x_2)-f(x_2)x_1-\alpha (x_2)f(x_1)\biggr )[y_1,y_2]\nonumber \\&\quad -\alpha ([x_1,x_2])\biggl (f(y_1)y_2+\alpha (y_1)f(y_2)-f(y_2)y_1-\alpha (y_2)f(y_1)\biggr )\nonumber \\&\quad +\biggl (f(y_1)y_2+\alpha (y_1)f(y_2)-f(y_2)y_1-\alpha (y_2)f(y_1)\biggr )[x_1,x_2]\nonumber \\&\quad +\alpha ([y_1,y_2])\biggl (f(x_1)x_2+\alpha (x_1)f(x_2)-f(x_2)x_1-\alpha (x_2)f(x_1)\biggr ) \end{aligned}$$
(4.2)

In all that follows we assume R is not commutative and \(f\ne 0\): indeed, if \(f=0\), then \(F(x)=ax\), for all \(x\in R\), and, by Remark 2.8, we have the contradiction \(F=0\). Moreover, we also assume that \(\alpha \) is not the identity map on R: otherwise F is a generalized derivation of R and, by Remark 2.7, F is an ordinary derivation of R, as required.

In case f is an inner skew derivation of R, then we get the required conclusions by Proposition 3.8. Hence we now assume that f is not inner and show that a number of contradictions follows.

Since \(0\ne f\) is not inner, then, by relation (4.2), it follows that R satisfies

$$\begin{aligned}&\biggl [a[x_1,x_2]+t_1x_2+\alpha (x_1)t_2-t_2x_1-\alpha (x_2)t_1,\alpha ([y_1,y_2])\biggr ]\nonumber \\&\quad +\biggl [\alpha ([x_1,x_2]),a[y_1,y_2]+z_1y_2+\alpha (y_1)z_2-z_2y_1-\alpha (y_2)z_1\biggr ]\nonumber \\&\quad -a\biggl [[x_1,x_2],[y_1,y_2]\biggr ]-\biggl (t_1x_2+\alpha (x_1)t_2-t_2x_1-\alpha (x_2)t_1\biggr )[y_1,y_2]\nonumber \\&\quad -\alpha ([x_1,x_2])\biggl (z_1y_2+\alpha (y_1)z_2-z_2y_1-\alpha (y_2)z_1\biggr )\nonumber \\&\quad +\biggl (z_1y_2+\alpha (y_1)z_2-z_2y_1-\alpha (y_2)z_1\biggr )[x_1,x_2]\nonumber \\&\quad +\alpha ([y_1,y_2])\biggl (t_1x_2+\alpha (x_1)t_2-t_2x_1-\alpha (x_2)t_1\biggr ) \end{aligned}$$
(4.3)

and in particular, comparing the terms of (4.3) that are linear in \(t_1\) (take \(t_2=z_1=z_2=0\)), we get that

$$\begin{aligned} \biggl (t_1x_2-\alpha (x_2)t_1\biggr )\alpha ([y_1,y_2])-\biggl (t_1x_2-\alpha (x_2)t_1\biggr )[y_1,y_2] \end{aligned}$$
(4.4)

is satisfied by R.

If there exists an invertible element \(q\in Q_r\) such that \(q\notin C\) and \(\alpha (x)=qxq^{-1}\), for all \(x\in R\), we replace any \(t_i\) with \(qt_i\) in (4.4) and left multiplying by \(q^{-1}\), one has that R satisfies \([t_1,x_2]\bigl (q[y_1,y_2]q^{-1}-[y_1,y_2]\bigr )\). Since R is not commutative and in light of Remark 2.5, the last relation implies \(q[y_1,y_2]q^{-1}=[y_1,y_2]\), for any \(y_1,y_2 \in R\), that is \(q[y_1,y_2]=[y_1,y_2]q\), for any \(y_1,y_2 \in R\). In this case, it is well known that the contradiction \(q\in C\) follows.

On the other hand, in case \(\alpha \) is not inner, then by (4.4) it follows that R satisfies the generalized identity

$$\begin{aligned} \biggl (t_1x_2-t_3t_1\biggr )[z_1,z_2]-\biggl (t_1x_2-t_3t_1\biggr )[y_1,y_2] \end{aligned}$$
(4.5)

and, for \(y_1=y_2=0\) and \(x_2=t_3\), we have that R satisfies the polynomial identity \([t_1,t_3][z_1,z_2]\), which again implies the contradiction that R is commutative. \(\quad \square \)