1 Introduction

One of the interesting problems in mathematics is to determine the structure of linear (additive) mappings on algebras (rings) that behave at zero products like certain distinguished mappings, such as homomorphisms, derivations, and centralizers. Among these problems is that of characterizing a linear (additive) map \(\delta \) from an algebra (ring) \({\mathcal {A}}\) into an \({\mathcal {A}}\)-bimodule \({\mathcal {M}}\) which satisfies

$$\begin{aligned} ab = 0 \Longrightarrow a \delta ( b) + \delta ( a) b = 0 , ~~~ (a,b \in {\mathcal {A}}) . \end{aligned}$$
(1.1)

In fact, in this case \(\delta \) behaves like a derivation at zero product elements. Recall that a linear map \(d: {\mathcal {A}} \rightarrow {\mathcal {M}}\) is said to be a derivation if \(d(ab) = ad(b) + d(a)b\) for all \(a, b \in {\mathcal {A}}\). Let us now mention some studies done in this regard. In [3, Theorem 4.4], Brešar showed, using a more general approach, that in the case of \({\mathcal {A}}\) being a unital ring generated by its idempotents, every additive map \(\delta : {\mathcal {A}} \rightarrow {\mathcal {M}}\) satisfying (1.1) is of the form \(\delta (a)=d(a)+ca\) (\(a\in {\mathcal {A}}\)), where \(d: {\mathcal {A}} \rightarrow {\mathcal {M}}\) is an additive derivation and \(c\in Z({\mathcal {A}})\), the center of \({\mathcal {A}}\). Jing et al. [12] showed that, for nest algebras on a Hilbert space and standard operator algebras on a Banach space, the set of linear maps \(\delta \) satisfying (1.1) and \(\delta (I)=0\) coincides with the set of inner derivations. Since then, many studies have been carried out in this direction and various results have been obtained; for instance, see [1, 5, 9, 10, 12, 17] and the references therein.
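Note that every derivation trivially satisfies (1.1); indeed, if \(d: {\mathcal {A}} \rightarrow {\mathcal {M}}\) is a derivation and \(ab=0\), then

$$\begin{aligned} a d( b) + d( a) b = d( ab ) = d(0) = 0 . \end{aligned}$$

Condition (1.1) thus asks for this behaviour of a derivation only at zero product elements.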

Let \({\mathcal {A}}\) be an algebra (ring) and \({\mathcal {M}}\) be an \({\mathcal {A}}\)-bimodule. Recall that a linear (additive) map \(\rho : {\mathcal {A}} \rightarrow {\mathcal {M}}\) is said to be a right (left) centralizer if \(\rho (ab) = a\rho (b)\) (\(\rho (ab)=\rho (a)b\)) for all \(a,b \in {\mathcal {A}}\). It is called a centralizer if \(\rho \) is both a left centralizer and a right centralizer. Conditions similar to (1.1) can be formulated for maps behaving like right (left) centralizers or centralizers at zero product elements, as follows:

$$\begin{aligned} \begin{aligned}&ab = 0 \Longrightarrow a \rho ( b) = 0 ,\\&ab = 0 \Longrightarrow \rho ( a)b = 0 , \\&ab = 0 \Longrightarrow a \rho ( b) =\rho ( a)b = 0 , \end{aligned} \end{aligned}$$
(1.2)

where \(a,b \in {\mathcal {A}}\) and \(\rho : {\mathcal {A}} \rightarrow {\mathcal {M}}\) is a linear (additive) map. Characterizing \(\rho \) in these settings is also of interest. In [3], Brešar proves that if \({\mathcal {A}}\) is a prime ring and \(\rho : {\mathcal {A}} \rightarrow {\mathcal {A}}\) is an additive map, then \(\rho \) satisfies the second condition in (1.2) if and only if \(\rho \) is a left centralizer. This problem has been studied by several authors ([14,15,16], among others).

A condition more general than both (1.1) and (1.2) is the following:

$$\begin{aligned} ab = 0 \Longrightarrow a \tau ( b) + \delta ( a ) b = 0 , ~~~ (a,b\in {\mathcal {A}}) , \end{aligned}$$
(1.3)

where \(\delta : {\mathcal {A}} \rightarrow {\mathcal {M}}\) and \(\tau : {\mathcal {A}} \rightarrow {\mathcal {M}}\) are linear (additive) maps. If in (1.3) we take \(\tau =\delta \), then (1.1) is obtained, and if we put \(\delta =0\) or \(\tau =0\), then we recover the conditions in (1.2). Condition (1.3) has also been studied by several authors, and the mappings \(\delta \) and \( \tau \) have been characterized on various algebras (rings) (see [8, 13]). In [2], the authors consider linear maps \(\delta , \tau : {\mathcal {A}} \rightarrow {\mathcal {M}}\) satisfying (1.3) and prove that if the unital algebra \({\mathcal {A}}\) is generated by idempotents, then \(\delta \) and \(\tau \) are of the form \(\delta (a)=d(a)+\delta (1)a\) and \(\tau (a)=d(a)+a\tau (1)\) (\(a\in {\mathcal {A}}\)), where \(d: {\mathcal {A}} \rightarrow {\mathcal {M}}\) is a derivation. Characterizations of the maps \(\delta \) and \(\tau \) are also given when \({\mathcal {A}}\) is a triangular algebra, under some constraints on the bimodule \({\mathcal {M}}\). In this paper, we describe the linear mappings on standard operator algebras on a Banach space that satisfy (1.3), and we derive several results from this description.
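Conversely, maps of the form obtained in [2] always satisfy (1.3): if \(d\) is a derivation, \(\delta (a)=d(a)+\delta (1)a\), \(\tau (a)=d(a)+a\tau (1)\) and \(ab=0\), then

$$\begin{aligned} a \tau ( b) + \delta ( a ) b = a d( b) + d( a) b + ab \tau (1) + \delta (1) ab = d( ab ) + ab \tau (1) + \delta (1) ab = 0 . \end{aligned}$$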

Throughout this paper, all algebras and vector spaces will be over the complex field \(\mathbb {C}\). Let \({\mathcal {X}}\) be a Banach space. We denote by \(B({\mathcal {X}})\) the algebra of all bounded linear operators on \({\mathcal {X}}\), and by \(F({\mathcal {X}})\) the algebra of all finite-rank operators in \(B({\mathcal {X}})\). Recall that a standard operator algebra is any subalgebra of \(B({\mathcal {X}})\) which contains \(F({\mathcal {X}})\). We denote the identity operator of \(B({\mathcal {X}})\) by \(I\). In Theorem 2.1 of this article, we characterize the linear maps \(\delta , \tau : {\mathcal {A}} \rightarrow B({\mathcal {X}})\) satisfying (1.3), where \({\mathcal {A}}\) is a unital standard operator algebra. This theorem is the main result of our paper. We also apply our main result to describe linear maps satisfying (1.1) and (1.2) on standard operator algebras (Corollaries 2.2, 2.3).

Recently, the problem of characterizing linear (additive) maps on \(\star \)-algebras (\(\star \)-rings) behaving like derivations at orthogonal elements for several types of orthogonality conditions has been considered; for instance, see [6, 11]. In particular, the following conditions on a linear (additive) map \(\delta \) from a \(\star \)-algebra (\(\star \)-ring) \({\mathcal {A}}\) into itself are considered:

$$\begin{aligned} \begin{aligned}&ab^{\star }=0 \Longrightarrow a\delta (b)^{\star }+\delta (a)b^{\star }=0 ,\\&a^{\star }b=0 \Longrightarrow a^{\star }\delta (b)+\delta (a)^{\star }b=0 , \\ \end{aligned} \end{aligned}$$
(1.4)

where \(a,b \in {\mathcal {A}}\). As another application of Theorem 2.1, in Theorem 2.4 and Corollary 2.5 we determine the linear maps satisfying (1.4) on a unital standard operator algebra \( {\mathcal {A}} \) on a Hilbert space \({\mathcal {H}}\) such that \( {\mathcal {A}} \) is closed under the adjoint operation.

In Sect. 2 of this paper, we state all the results, and Sect. 3 is devoted to the proof of Theorem 2.1.

2 The Main Results

In this section, we present the results of this paper. The following is the main result of our article, the proof of which will be given in Sect. 3.

Theorem 2.1

Let \( {\mathcal {X}} \) be a Banach space, dim\({\mathcal {X}} \ge 2\), and let \( {\mathcal {A}} \subseteq B ( {\mathcal {X}} ) \) be a unital standard operator algebra. Suppose that \( \delta \) and \( \tau \) are linear maps from \( {\mathcal {A}} \) into \( B ( {\mathcal {X}} ) \) satisfying

$$\begin{aligned} A B = 0 \Longrightarrow A \tau ( B) + \delta ( A ) B = 0 , ~~~ (A, B \in {\mathcal {A}}) . \end{aligned}$$

Then there exist \( R , S, T \in B ( {\mathcal {X}} ) \) such that

$$\begin{aligned} \delta (A) = A S - R A , ~~ ~\tau ( A) = A T - S A \end{aligned}$$

for all \( A \in {\mathcal {A}} \).
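Conversely, any pair of maps of this form satisfies the displayed condition: if \( \delta (A) = AS - RA \), \( \tau (A) = AT - SA \) and \( AB = 0 \), then

$$\begin{aligned} A \tau ( B) + \delta ( A ) B = ABT - ASB + ASB - RAB = (AB) T - R (AB) = 0 . \end{aligned}$$

Thus Theorem 2.1 describes exactly the pairs \( ( \delta , \tau ) \) satisfying (1.3) on \( {\mathcal {A}} \).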

From Theorem 2.1 we obtain the following corollary, which was already proved in [12, Theorem 6]; in this sense, Theorem 2.1 is a generalization of [12, Theorem 6].

Corollary 2.2

Let \( {\mathcal {X}} \) be a Banach space, dim\({\mathcal {X}} \ge 2\), and let \( {\mathcal {A}} \subseteq B ( {\mathcal {X}} ) \) be a unital standard operator algebra. Assume that \( \delta : {\mathcal {A}} \rightarrow B ( {\mathcal {X}} ) \) is a linear map satisfying

$$\begin{aligned} A B = 0 \Longrightarrow A \delta ( B) + \delta ( A ) B = 0 , ~~~ (A, B \in {\mathcal {A}}) . \end{aligned}$$

Then there exist \( R , S \in B ( {\mathcal {X}} ) \) such that

$$\begin{aligned} \delta (A) = A S - R A \end{aligned}$$

for all \( A \in {\mathcal {A}} \) and \( R - S \in Z ( B( {\mathcal {X}} )) \).

Proof

By Theorem 2.1, there exist \( R , S , T \in B ( {\mathcal {X}} ) \) such that

$$\begin{aligned} \delta ( A) = A S - R A = A T - S A \end{aligned}$$

for all \( A \in {\mathcal {A}} \). So,

$$\begin{aligned} A ( S - T ) = ( R - S ) A \end{aligned}$$

for all \( A \in {\mathcal {A}} \). Letting \( A = I \), we arrive at \( S - T= R - S \). Therefore, \( R - S \in Z( {\mathcal {A}} ) \). We show that \( R - S \in Z ( B( {\mathcal {X}} )) \). Let \( A \in B ( {\mathcal {X}} ) \). Since \( \overline{F( {\mathcal {X}} ) }^{SOT} = B( {\mathcal {X}} ) \), there exists a net \( ( F_i )_{i \in I} \) in \(F( {\mathcal {X}} ) \) such that \( F_i \rightarrow A \) in the strong operator topology. On account of \( R - S \in Z ( {\mathcal {A}} ) \), we have \( (R - S ) F_i = F_i ( R - S ) \) for every \( i \in I \). By separate SOT-continuity of the product in \( B ( {\mathcal {X}} ) \), we see that \( ( R - S ) A = \lim _i ( R - S ) F_i = \lim _i F_i ( R - S ) = A ( R - S ) \), the limits being taken in the strong operator topology. Since \( A \in B ( {\mathcal {X}} ) \) is arbitrary, it follows that \( R - S \in Z ( B( {\mathcal {X}} )) \). \(\square \)
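Conversely, any map of the form obtained in Corollary 2.2 satisfies its hypothesis: if \( \delta (A) = AS - RA \) with \( R - S \in Z ( B( {\mathcal {X}} )) \) and \( AB = 0 \), then

$$\begin{aligned} A \delta ( B) + \delta ( A ) B = ABS - ARB + ASB - RAB = A ( S - R ) B = ( S - R ) AB = 0 . \end{aligned}$$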

In the following, we will characterize the linear maps on standard operator algebras behaving like right (left) centralizers or centralizers at zero product elements.

Corollary 2.3

Let \( {\mathcal {X}} \) be a Banach space, dim\({\mathcal {X}} \ge 2\), and let \( {\mathcal {A}} \subseteq B ( {\mathcal {X}} ) \) be a unital standard operator algebra. Assume that \( \rho : {\mathcal {A}} \rightarrow B ( {\mathcal {X}} ) \) is a linear map.

(i)

    \(\rho \) satisfies

    $$\begin{aligned} A B = 0 \Longrightarrow A \rho ( B) = 0 , ~~~ (A, B \in {\mathcal {A}}), \end{aligned}$$

    if and only if there exists \( D \in B ( {\mathcal {X}} ) \) such that \( \rho ( A) = A D \) for all \( A\in {\mathcal {A}} \).

(ii)

    \(\rho \) satisfies

    $$\begin{aligned} A B = 0 \Longrightarrow \rho ( A)B = 0 , ~~~ (A, B \in {\mathcal {A}}), \end{aligned}$$

    if and only if there exists \( D \in B ( {\mathcal {X}} ) \) such that \( \rho ( A) = DA \) for all \( A\in {\mathcal {A}} \).

(iii)

    \(\rho \) satisfies

    $$\begin{aligned} A B = 0 \Longrightarrow A \rho ( B) =\rho ( A)B = 0 , ~~~ (A, B \in {\mathcal {A}}), \end{aligned}$$

    if and only if there exists \( D \in Z(B ( {\mathcal {X}} )) \) such that \( \rho ( A) = DA \) for all \( A \in {\mathcal {A}} \).

Proof

(i) By Theorem 2.1, there exist \( R , S , T \in B ( {\mathcal {X}} ) \) such that

$$\begin{aligned} A S - R A =0 , ~~ ~\rho ( A) = A T - S A \end{aligned}$$

for all \( A \in {\mathcal {A}} \). By setting \(A=I,\) we see that \(S=R\). Hence, \(AS=SA\) for all \(A \in {\mathcal {A}}\). Now, let \(D=T-S \in B ( {\mathcal {X}} ) \); then \(\rho ( A) = A T - S A = AT - AS = A D\) for all \( A \in {\mathcal {A}} \). The converse is immediate; see the remark following this proof.

(ii):

The proof follows by an argument similar to that in (i).

(iii):

It is clear from (i) and (ii).

\(\square \)
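For completeness, the converse directions in Corollary 2.3 amount to the following computation: if \( \rho (A) = AD \) and \( AB = 0 \), then

$$\begin{aligned} A \rho ( B) = A ( BD ) = (AB) D = 0 , \end{aligned}$$

and similarly, if \( \rho (A) = DA \) and \( AB = 0 \), then \( \rho ( A) B = D (AB) = 0 \).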

In the next theorem, we consider standard operator algebras on Hilbert spaces which are closed under the adjoint operation and determine the structure of linear maps on them that act like derivations at a one-sided orthogonality condition. This theorem is an application of Theorem 2.1.

Theorem 2.4

Let \( {\mathcal {A}} \) be a unital standard operator algebra on a Hilbert space \({\mathcal {H}}\) with dim\({\mathcal {H}} \ge 2\), such that \( {\mathcal {A}} \) is closed under the adjoint operation. Suppose that \( \delta : {\mathcal {A}} \rightarrow B( {\mathcal {H}} ) \) is a linear map satisfying

$$\begin{aligned} A B^\star = 0 \Longrightarrow A \delta ( B)^\star + \delta ( A ) B^\star = 0 , ~~~ (A, B \in {\mathcal {A}}) . \end{aligned}$$

Then there exist \( R , S \in B ( {\mathcal {H}} ) \) such that

$$\begin{aligned} \delta (A) = A S - R A \end{aligned}$$

for all \( A \in {\mathcal {A}} \) and \( Re S \in Z ( B( {\mathcal {H}} )) \).

Proof

Define the linear map \( \tau : {\mathcal {A}} \rightarrow B( {\mathcal {H}} ) \) by \(\tau (A)=\delta (A^{\star })^{\star }\). Then by assumption

$$\begin{aligned} A \delta ( B^\star )^\star + \delta ( A) B = 0 \end{aligned}$$

for all \( A, B \in {\mathcal {A}} \) with \( A B = 0 \). Thus,

$$\begin{aligned} A \tau ( B) + \delta ( A) B = 0 \end{aligned}$$

for all \( A, B \in {\mathcal {A}} \) with \( A B = 0 \). So \( \delta \) and \( \tau \) satisfy the conditions of Theorem 2.1 and according to this theorem there exist \( R, S , T \in B( {\mathcal {H}} ) \) such that

$$\begin{aligned} \delta ( A) = A S - R A , ~~~~ \tau ( A) = A T - S A \end{aligned}$$

for all \( A \in {\mathcal {A}} \). Therefore, \( \delta ( A^\star )^\star = AT - S A \) for all \( A \in {\mathcal {A}} \) and hence \( \delta (A)= T^\star A - A S^\star \) for all \( A \in {\mathcal {A}} \). Comparing these relations for \( \delta \), we obtain

$$\begin{aligned} A ( S + S^\star ) = ( R + T^\star ) A \end{aligned}$$

for all \( A \in {\mathcal {A}} \). Letting \( A = I \), we get \( S + S^\star = R + T^\star \). Hence, \( S + S^\star \in Z( {\mathcal {A}} ) \). By using arguments similar to those in the proof of Corollary 2.2, we have \( S + S^\star \in Z ( B ( {\mathcal {H}} )) \). So \( Re S \in Z ( B( {\mathcal {H}} )) \) and the proof is complete. \(\square \)
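As in Corollary 2.2, maps of the form obtained in Theorem 2.4 also satisfy its hypothesis: if \( \delta (A) = AS - RA \) with \( S + S^\star \in Z ( B( {\mathcal {H}} )) \) and \( A B^\star = 0 \), then

$$\begin{aligned} A \delta ( B)^\star + \delta ( A ) B^\star = A S^\star B^\star - A B^\star R^\star + A S B^\star - R A B^\star = A ( S^\star + S ) B^\star = ( S + S^\star ) A B^\star = 0 . \end{aligned}$$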

Corollary 2.5

Let \( {\mathcal {A}} \) be a unital standard operator algebra on a Hilbert space \({\mathcal {H}}\) with dim\({\mathcal {H}} \ge 2\), such that \( {\mathcal {A}} \) is closed under the adjoint operation. Suppose that \( \delta : {\mathcal {A}} \rightarrow B( {\mathcal {H}} ) \) is a linear map satisfying

$$\begin{aligned} A^\star B = 0 \Longrightarrow A^\star \delta ( B) + \delta ( A )^\star B = 0 , ~~~ (A, B \in {\mathcal {A}}) . \end{aligned}$$

Then there exist \( R , S \in B ( {\mathcal {H}} ) \) such that

$$\begin{aligned} \delta (A) = A R - S A \end{aligned}$$

for all \( A \in {\mathcal {A}} \) and \( Re S \in Z ( B( {\mathcal {H}} )) \).

Proof

Define the linear map \( \tau : {\mathcal {A}} \rightarrow B ( {\mathcal {H}} ) \) by \( \tau ( A)= \delta ( A^\star ) ^\star \). Consider \( A, B \in {\mathcal {A}} \) with \( A B^\star = 0\). So, \( ( A^\star )^\star B^\star = 0 \) and by assumption we have

$$\begin{aligned} A \delta ( B^\star ) + \delta ( A^\star )^\star B^\star = 0 . \end{aligned}$$

It follows from the definition of \( \tau \) that

$$\begin{aligned} A \tau ( B)^\star + \tau ( A) B^\star = 0. \end{aligned}$$

Therefore, \( \tau \) satisfies the conditions of Theorem 2.4 and hence there exist \( R_1 , S_1\in B ( {\mathcal {H}} ) \) such that

$$\begin{aligned} \tau ( A) = A S_1 - R_1 A \end{aligned}$$

for all \( A \in {\mathcal {A}} \) and \( Re S_1 \in Z ( B ( {\mathcal {H}} )) \). Therefore, \( \delta ( A^\star )^\star = A S_1 - R_1 A \) and consequently \( \delta ( A) = S_1^{\star } A - A R_1^{\star } \) for all \( A \in {\mathcal {A}} \). Now, by letting \( S = - S_1^{\star } \) and \( R = - R_1^{\star } \), we obtain \( \delta ( A) = A R - SA \) for all \( A \in {\mathcal {A}} \) and \( Re S \in Z ( B ( {\mathcal {H}} )) \). \(\square \)

3 Proof of Theorem 2.1

We prove Theorem 2.1 through the following lemmas. Throughout this section, \( {\mathcal {X}} \), \( {\mathcal {A}} \), \( \delta \), and \( \tau \) are as in the statement of Theorem 2.1.

Lemma 3.1

For all \( A \in {\mathcal {A}} \) and \( X \in F ( {\mathcal {X}} ) \), we have

$$\begin{aligned} A \tau ( X ) + \delta ( A) X = AX \tau (I) + \delta ( A X ) . \end{aligned}$$

Proof

Let \( P \in {\mathcal {A}} \) be an idempotent operator of rank one. Set \( Q = I - P \). Then for all \( A \in {\mathcal {A}} \), we obtain \( APQ = 0 \). So by assumption, we have

$$\begin{aligned} A P \tau ( Q) + \delta ( AP ) Q = 0 . \end{aligned}$$

Therefore,

$$\begin{aligned} AP \tau (I) - AP \tau (P) + \delta ( AP ) - \delta ( AP ) P = 0 . \end{aligned}$$

Hence,

$$\begin{aligned} AP \tau (I) + \delta ( AP ) = AP \tau (P) + \delta (AP) P . \end{aligned}$$
(3.1)

Since \( AQP = 0 \) (\(A \in {\mathcal {A}}\)), it follows that

$$\begin{aligned} A Q \tau (P) + \delta ( AQ) P = 0 . \end{aligned}$$

So,

$$\begin{aligned} A \tau (P) - A P \tau (P) + \delta (A) P - \delta ( AP ) P = 0 . \end{aligned}$$

Consequently,

$$\begin{aligned} A \tau (P) + \delta ( A)P = A P \tau (P) + \delta ( AP ) P. \end{aligned}$$
(3.2)

By comparing (3.1) and (3.2), we obtain

$$\begin{aligned} A \tau (P) + \delta (A) P = AP \tau (I) + \delta (AP) . \end{aligned}$$

By [4, Lemma 1.1], every element \( X \in F ( {\mathcal {X}} ) \) is a linear combination of rank-one idempotents, and so

$$\begin{aligned} A \tau ( X) + \delta ( A) X = AX \tau (I) + \delta (AX) \end{aligned}$$

for all \( A \in {\mathcal {A}} \) and \( X \in F ( {\mathcal {X}} ) \). \(\square \)
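In the last step, the passage from rank-one idempotents to an arbitrary \( X \in F ( {\mathcal {X}} ) \) is by linearity: writing \( X = \sum _i \lambda _i P_i \) with each \( P_i \) a rank-one idempotent, we get

$$\begin{aligned} A \tau ( X) + \delta ( A) X = \sum _i \lambda _i \left( A \tau ( P_i) + \delta ( A) P_i \right) = \sum _i \lambda _i \left( A P_i \tau (I) + \delta ( A P_i ) \right) = AX \tau (I) + \delta (AX) . \end{aligned}$$

The same remark applies to the proof of Lemma 3.2 below.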

Lemma 3.2

For all \( A \in {\mathcal {A}} \) and \( X \in F ( {\mathcal {X}} ) \), we have

$$\begin{aligned} \tau ( X A ) + \delta ( I) X A = X \tau (A) + \delta ( X ) A . \end{aligned}$$

Proof

Let \( P \in {\mathcal {A}} \) be a rank-one idempotent operator, and \( Q = I - P \). So \( PQA = 0 \) and \( QPA = 0\) for all \( A \in {\mathcal {A}} \). By assumption, we have

$$\begin{aligned} P \tau ( QA ) + \delta (P) QA = 0 \end{aligned}$$

and

$$\begin{aligned} Q \tau (PA) + \delta ( Q) PA = 0 . \end{aligned}$$

From these equations we have the following, respectively.

$$\begin{aligned} P \tau ( A) + \delta ( P ) A = P \tau ( PA ) + \delta (P) PA \end{aligned}$$

and

$$\begin{aligned} \tau ( PA) + \delta (I) PA = P \tau (PA) + \delta (P) PA . \end{aligned}$$

Comparing these equations, we get

$$\begin{aligned} \tau ( PA) + \delta (I) PA = P \tau (A) + \delta (P) A . \end{aligned}$$

Now, by [4, Lemma 1.1] we have

$$\begin{aligned} \tau ( X A ) + \delta (I) X A = X \tau (A) + \delta (X) A \end{aligned}$$

for all \( A \in {\mathcal {A}} \) and \( X \in F ( {\mathcal {X}} ) \). \(\square \)

Lemma 3.3

For all \( A,B \in {\mathcal {A}} \), we have

$$\begin{aligned} \delta ( AB) = A \delta ( B) + \delta (A) B - A \delta (I) B . \end{aligned}$$

Proof

Taking \( A= I \) in Lemma 3.1, we find that

$$\begin{aligned} \delta (X) = \tau (X) - X \tau (I) + \delta (I ) X , \end{aligned}$$
(3.3)

for all \( X \in F ( {\mathcal {X}} ) \). Since \( F( {\mathcal {X}} ) \) is an ideal in \( {\mathcal {A}} \), it follows from (3.3) that

$$\begin{aligned} \delta ( AX) = \tau ( AX ) - AX \tau (I) + \delta (I) AX \end{aligned}$$

for all \( A \in {\mathcal {A}} \) and \( X \in F ( {\mathcal {X}} ) \). From this equation and Lemma 3.1, we obtain

$$\begin{aligned} \tau ( AX) = A \tau (X) + \delta (A) X - \delta (I) A X \end{aligned}$$
(3.4)

for all \( A \in {\mathcal {A}} \) and \( X \in F ( {\mathcal {X}} ) \). From (3.4), we have

$$\begin{aligned} \tau ( AB X) = AB \tau (X) + \delta ( AB) X - \delta (I) AB X , \end{aligned}$$
(3.5)

for all \( A , B \in {\mathcal {A}} \) and \( X \in F ( {\mathcal {X}} ) \). On the other hand,

$$\begin{aligned} \tau ( ABX)&= A \tau (BX) + \delta (A) B X - \delta (I) AB X \nonumber \\&= AB \tau (X) + A \delta (B) X - A \delta (I) B X + \delta (A) B X - \delta (I) AB X \end{aligned}$$
(3.6)

for all \( A , B \in {\mathcal {A}} \) and \( X \in F ( {\mathcal {X}} ) \). By comparing (3.5) and (3.6), we see that

$$\begin{aligned} \delta (AB) X = A \delta (B) X + \delta (A) B X- A \delta (I) BX \end{aligned}$$

for all \( A , B \in {\mathcal {A}} \) and \( X \in F ( {\mathcal {X}} ) \). Since \(F ( {\mathcal {X}} ) \) is an essential ideal in the primitive algebra \( B ( {\mathcal {X}} ) \), it follows that

$$\begin{aligned} \delta ( AB ) = A \delta (B) + \delta (A) B - A \delta (I) B \end{aligned}$$

for all \( A , B \in {\mathcal {A}} \). \(\square \)
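The last step rests on the following standard fact, which will also be used in the proofs of Lemmas 3.4 and 3.5: if \( T \in B ( {\mathcal {X}} ) \) and \( T X = 0 \) for all \( X \in F ( {\mathcal {X}} ) \), then \( T = 0 \). Indeed, fix \( x \in {\mathcal {X}} \), choose \( f \in {\mathcal {X}}^{*} \) and \( y \in {\mathcal {X}} \) with \( f(y) = 1 \), and let \( x \otimes f \in F ( {\mathcal {X}} ) \) denote the rank-one operator \( z \mapsto f(z) x \); then

$$\begin{aligned} 0 = T ( x \otimes f ) ( y) = f(y) T x = T x , \end{aligned}$$

so \( T = 0 \). A symmetric argument shows that \( X T = 0 \) for all \( X \in F ( {\mathcal {X}} ) \) also forces \( T = 0 \).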

Lemma 3.4

For all \( A,B \in {\mathcal {A}} \), we have

$$\begin{aligned} \tau (AB) = A \tau (B) + \tau (A) B - A \tau (I) B . \end{aligned}$$

Proof

From Lemma 3.2 and (3.3), we conclude that

$$\begin{aligned} \tau ( X A)&= X \tau (A) + \delta (X) A - \delta (I) X A \nonumber \\&= X \tau ( A) + \tau (X) A - X \tau (I) A , \end{aligned}$$
(3.7)

for all \( A \in {\mathcal {A}} \) and \( X \in F ( {\mathcal {X}} ) \). Now, by using (3.7) for all \( A , B \in {\mathcal {A}} \) and \( X \in F ( {\mathcal {X}} ) \), we calculate \( \tau ( X AB ) \) in two ways and we obtain the following:

$$\begin{aligned} \tau ( X AB ) = X \tau (AB) + \tau (X) AB - X \tau (I) AB \end{aligned}$$

and

$$\begin{aligned} \tau (XAB ) = XA \tau (B) + X \tau (A) B + \tau (X) AB - X \tau (I) AB - X A \tau (I) B . \end{aligned}$$

Comparing these equations, we get

$$\begin{aligned} X \tau (AB) = X A \tau (B) + X \tau (A) B - X A \tau (I) B \end{aligned}$$

for all \( A , B \in {\mathcal {A}} \) and \( X \in F ( {\mathcal {X}} ) \). Since \( F ( {\mathcal {X}} ) \) is an essential ideal in \( B ( {\mathcal {X}} ) \), it follows that

$$\begin{aligned} \tau ( AB ) = A \tau (B) + \tau (A) B - A \tau (I) B \end{aligned}$$

for all \( A , B \in {\mathcal {A}} \). \(\square \)

Lemma 3.5

For all \( A \in {\mathcal {A}} \), we have

$$\begin{aligned} \tau ( A) - A \tau (I) = \delta (A) - \delta (I) A . \end{aligned}$$

Proof

It follows from (3.3) and Lemma 3.3 that

$$\begin{aligned} \tau ( AX ) - AX \tau (I)&= \delta (AX ) - \delta ( I ) A X \\&= A \delta ( X) + \delta (A) X - A \delta (I) X - \delta (I) A X \\&= A \tau (X) - AX \tau (I) + \delta (A) X - \delta (I) AX \end{aligned}$$

for all \( A \in {\mathcal {A}} \) and \( X \in F( {\mathcal {X}} ) \). On the other hand, according to Lemma 3.4, for all \( A \in {\mathcal {A}} \) and \( X \in F ( {\mathcal {X}} ) \), we have

$$\begin{aligned} \tau (AX) - A X \tau (I) = A \tau (X) + \tau ( A) X - A \tau (I) X - A X \tau (I) . \end{aligned}$$

By comparing these equations, we find that

$$\begin{aligned} ( \delta (A) - \delta ( I ) A ) X = ( \tau (A) - A \tau (I) ) X \end{aligned}$$

for all \( A \in {\mathcal {A}} \) and \( X \in F ( {\mathcal {X}} ) \). Since \(F ( {\mathcal {X}} ) \) is an essential ideal in \( B ( {\mathcal {X}} ) \), it follows that

$$\begin{aligned} \delta (A) - \delta (I) A= \tau ( A) - A \tau (I) \end{aligned}$$

for all \( A \in {\mathcal {A}} \). \(\square \)

With these results in hand, we are ready to prove Theorem 2.1.

Proof of Theorem 2.1

Define the linear map \( \Delta : {\mathcal {A}} \rightarrow B ( {\mathcal {X}} ) \) by \( \Delta (A) = \delta (A) - \delta (I) A \). It follows from Lemma 3.3 that

$$\begin{aligned} \Delta (AB)&= \delta ( AB ) - \delta (I) AB \\&= A \delta ( B) + \delta (A) B - A \delta (I ) B - \delta ( I ) AB \\&= A \Delta (B) + \Delta ( A) B . \end{aligned}$$

So \( \Delta \) is a derivation and, according to [7, Theorem 2.5.14], there exists \( S \in B ( {\mathcal {X}} ) \) such that \( \Delta ( A) = A S - S A \) for all \( A \in {\mathcal {A}} \). Set \( R = S - \delta (I) \). From the definition of \( \Delta \), we conclude that \( \delta (A) = A S - R A \) for all \( A \in {\mathcal {A}} \). Also, by Lemma 3.5, we have \( \Delta (A) = \tau (A) - A \tau (I) \) for all \( A \in {\mathcal {A}} \). Set \( T = S + \tau (I) \). Hence, \(\tau ( A) = AT - S A \) for all \( A \in {\mathcal {A}} \). The proof of the theorem is complete. \(\square \)