1 Introduction

Determining maps by their local properties has long been of interest in mathematics. Among such local properties, we can mention mappings on rings or algebras that act on special products as homomorphisms, centralizers, derivations, Lie derivations, and so on. In [4], Brešar described the additive mappings on a prime ring containing a non-trivial idempotent that act as a centralizer, derivation, or homomorphism at zero products, and extensive studies have been conducted in this direction. In [10], additive maps on Banach algebras which behave like derivations at idempotent-product elements have been characterized. The determination of Lie derivations at idempotent-product elements on the operator algebra \({\mathcal {B}}({\mathcal {X}})\) (\({\mathcal {X}}\) a Banach space), on triangular algebras, and on prime rings with a non-trivial idempotent is given in [14, 18, 20], respectively. Another important class of mappings related to the Lie structure are centralizers of Lie algebras, which we call Lie centralizers. This is a classical notion in the theory of Lie and other non-associative algebras. We are only interested in the case where an associative algebra is endowed with the Lie product and thus becomes a Lie algebra. Lie centralizers on algebras have recently been studied extensively from different perspectives. One of these lines of research is the characterization of Lie centralizers at specific products. Below we refer to some of the results obtained for Lie centralizers. Motivated by these developments, in the present article we study additive Lie centralizers at idempotent-product elements on unital Banach algebras.

Let \({\mathcal {U}}\) be an algebra. An additive map \(\varphi :{\mathcal {U}}\rightarrow {\mathcal {U}}\) is said to be a centralizer if \(\varphi (xy)=\varphi (x)y=x\varphi (y)\) for all \(x,y\in {\mathcal {U}}\), and it is called a Lie centralizer if \(\varphi ([x,y])=[\varphi (x),y]\) for all \(x,y\in {\mathcal {U}}\), where \([x,y]=xy-yx\) is the Lie product of x and y in \({\mathcal {U}}\). It is easily checked that \(\varphi \) is a Lie centralizer on \({\mathcal {U}}\) if and only if \(\varphi ([x,y])=[x,\varphi (y)]\) for all \(x,y\in {\mathcal {U}}\). If \({\mathcal {U}}\) is unital with unity 1, a routine verification shows that \(\varphi \) is a centralizer if and only if there is an element c in the center of \({\mathcal {U}}\) such that \(\varphi (x)=cx\) for all \(x\in {\mathcal {U}}\). The notion of Lie centralizer is a classical one in the theory of Lie algebras. Obviously, every centralizer is a Lie centralizer, but the converse is not necessarily true. Recently, in [8] the authors studied non-additive Lie centralizers on triangular rings. Also, in [19] nonlinear Lie centralizers on generalized matrix algebras have been described. In [13] it has been proved that, under some conditions on a unital generalized matrix algebra \({\mathcal {G}}\), if \(\varphi :{\mathcal {G}}\rightarrow {\mathcal {G}}\) is a linear Lie centralizer, then \(\varphi (x)=\lambda x +\mu (x) \), where \(\lambda \) lies in the center of \({\mathcal {G}}\) and \(\mu \) is a linear map from \({\mathcal {G}}\) into the center of \({\mathcal {G}}\) vanishing at commutators. In [2] the authors have studied the characterization of Lie centralizers on non-unital triangular algebras through zero products. In [5], under some mild conditions, the problem of characterizing linear maps behaving like Lie centralizers at idempotent products on triangular algebras is considered. In [11] linear Lie centralizers at zero products on some operator algebras are studied, and in [6] linear Lie centralizers through zero products on a 2-torsion free unital generalized matrix algebra are characterized under some mild conditions. For more results in this direction, we refer to [1, 7, 12, 17, 22] and the references therein. In this article, we consider additive Lie centralizers at idempotent-product elements on Banach algebras. More precisely, we consider the following condition on an additive map \(\varphi \) on a unital Banach algebra \({\mathcal {U}}\) with a non-trivial idempotent p:

$$\begin{aligned} x,y \in {\mathcal {U}},xy=p\Longrightarrow \varphi ([x,y])=[\varphi (x),y]=[x,\varphi (y)] \quad ({{\textbf {P}}}), \end{aligned}$$

and we characterize the structure of \(\varphi \) under some mild conditions. Indeed, we show under certain conditions that \(\varphi (x)=cx+\mu (x)\) for all \(x\in {\mathcal {U}}\), where \(c\in Z({\mathcal {U}})\) and \(\mu :{\mathcal {U}}\rightarrow Z({\mathcal {U}})\) (\(Z({\mathcal {U}})\) is the center of \({\mathcal {U}}\)) is an additive map such that \(\mu ([x,y])=0\) for any \(x,y \in {\mathcal {U}}\) with \(xy=p\). We also present some applications of our main results; in particular, we give an application to von Neumann algebras with suitable projections.
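For later use, we record the elementary computations behind two facts mentioned above: every centralizer is a Lie centralizer, and the defining identity of a Lie centralizer can be written on either side. If \(\varphi \) is a centralizer, then for all \(x,y\in {\mathcal {U}}\),

$$\begin{aligned} \varphi ([x,y])=\varphi (xy)-\varphi (yx)=\varphi (x)y-y\varphi (x)=[\varphi (x),y]. \end{aligned}$$

Moreover, if \(\varphi \) is additive and \(\varphi ([x,y])=[\varphi (x),y]\) for all \(x,y\in {\mathcal {U}}\), then

$$\begin{aligned} \varphi ([x,y])=-\varphi ([y,x])=-[\varphi (y),x]=[x,\varphi (y)] \end{aligned}$$

for all \(x,y\in {\mathcal {U}}\), which gives the stated equivalence of the two forms of the Lie centralizer identity.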

This paper is organized as follows. In Sect. 2, we set out the preliminaries and tools required. Section 3 is devoted to the main results of the article. In Sect. 4, we refer to some applications of the results obtained.

2 Preliminaries and Tools

In this section, we present some of the preliminaries and tools needed for the subsequent sections. It also contains results that may be of independent interest.

We first introduce generalized matrix algebras. We assume that all algebras are over a unital commutative ring \({\mathcal {R}}\). A Morita context consists of two algebras \( {\mathcal {A}} \) and \( {\mathcal {B}} \), two bimodules \({\mathcal {M}}\) and \({\mathcal {N}}\), where \({\mathcal {M}}\) is an \(({\mathcal {A}},{\mathcal {B}}) \)-bimodule and \({\mathcal {N}}\) is a \( ({\mathcal {B}},{\mathcal {A}}) \)-bimodule, and two bimodule homomorphisms, called the pairings, \( \zeta _{{\mathcal {M}} {\mathcal {N}}}: {\mathcal {M}} \underset{{\mathcal {B}}}{\otimes }{\mathcal {N}} \rightarrow {\mathcal {A}} \) and \( \psi _{{\mathcal {N}} {\mathcal {M}}}: {\mathcal {N}} \underset{{\mathcal {A}}}{\otimes }{\mathcal {M}} \rightarrow {\mathcal {B}} \), satisfying two commutative diagrams (not reproduced here).
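These diagrams express the standard associativity conditions of a Morita context; written elementwise, they read

$$\begin{aligned} \zeta _{{\mathcal {M}} {\mathcal {N}}}(m \otimes n)\, m' = m\, \psi _{{\mathcal {N}} {\mathcal {M}}}(n \otimes m') \quad \text {and} \quad \psi _{{\mathcal {N}} {\mathcal {M}}}(n \otimes m)\, n' = n\, \zeta _{{\mathcal {M}} {\mathcal {N}}}(m \otimes n') \end{aligned}$$

for all \(m, m' \in {\mathcal {M}}\) and \(n, n' \in {\mathcal {N}}\).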

If \( ( {\mathcal {A}}, {\mathcal {B}}, {\mathcal {M}}, {\mathcal {N}}, \zeta _{{\mathcal {M}} {\mathcal {N}}}, \psi _{{\mathcal {N}} {\mathcal {M}}} ) \) is a Morita context, then the set

$$\begin{aligned} \begin{bmatrix} {\mathcal {A}} &{} {\mathcal {M}} \\ {\mathcal {N}} &{} {\mathcal {B}} \end{bmatrix} = \left\{ \begin{bmatrix} a &{} m \\ n &{} b \end{bmatrix}: a \in {\mathcal {A}}, m \in {\mathcal {M}}, n \in {\mathcal {N}}, b \in {\mathcal {B}} \right\} \end{aligned}$$

is an algebra under the usual matrix operations. Such an algebra is called a generalized matrix algebra. If \({\mathcal {N}}=0\), then the generalized matrix algebra \(\begin{bmatrix} {\mathcal {A}} &{} {\mathcal {M}} \\ 0 &{} {\mathcal {B}} \end{bmatrix}\) is called a triangular algebra and is denoted by \( Tri ({\mathcal {A}}, {\mathcal {M}}, {\mathcal {B}} ) \).

Let \({\mathcal {A}}\) be an algebra with unity \(1_{{\mathcal {A}}}\), and let \({\mathcal {M}}\) be a left \({\mathcal {A}}\)-module. Then \({\mathcal {M}}\) is called a unital left \({\mathcal {A}}\)-module if \(1_{\mathcal {A}} m = m \) for any \( m \in {\mathcal {M}} \). Unital right modules are defined similarly. A bimodule is unital if it is unital both as a left module and as a right module. A routine verification shows that the generalized matrix algebra \( {\mathcal {G}}= \begin{bmatrix} {\mathcal {A}} &{} {\mathcal {M}} \\ {\mathcal {N}} &{} {\mathcal {B}} \end{bmatrix}\) is unital if and only if \( {\mathcal {A}} \) and \( {\mathcal {B}} \) are unital algebras with unities \(1_{\mathcal {A}}\) and \(1_{\mathcal {B}}\), respectively, \( {\mathcal {M}} \) is a unital \(({\mathcal {A}}, {\mathcal {B}})\)-bimodule, and \( {\mathcal {N}} \) is a unital \(({\mathcal {B}}, {\mathcal {A}})\)-bimodule. In this case, \( I= \begin{bmatrix} 1_{\mathcal {A}} &{} 0 \\ 0 &{} 1_{\mathcal {B}} \end{bmatrix} \) is the unity of \( {\mathcal {G}}\).

The center of an algebra \({\mathcal {U}}\) is denoted by \(Z({\mathcal {U}})\). In [16, Lemma 1] the center of a unital generalized matrix algebra has been characterized. We state this result below.

Lemma 2.1

( [16, Lemma 1]) Let \({\mathcal {G}}= \begin{bmatrix} {\mathcal {A}} &{} {\mathcal {M}} \\ {\mathcal {N}} &{} {\mathcal {B}} \end{bmatrix}\) be a unital generalized matrix algebra. Then

$$\begin{aligned} Z ( {\mathcal {G}} ) = \left\{ \begin{bmatrix} a &{} 0 \\ 0 &{} b \end{bmatrix}: a \in Z ( {\mathcal {A}} ), b \in Z({\mathcal {B}} ), am = mb, na = bn, \forall m \in {\mathcal {M}}, \forall n \in {\mathcal {N}} \right\} . \end{aligned}$$

Remark 2.2

Suppose that the unital generalized matrix algebra \({\mathcal {G}}= \begin{bmatrix} {\mathcal {A}} &{} {\mathcal {M}} \\ {\mathcal {N}} &{} {\mathcal {B}} \end{bmatrix}\) satisfies

$$\begin{aligned} \begin{aligned}&a \in {\mathcal {A}},\, a {\mathcal {M}} =0 \,\, \text {and} \,\, {\mathcal {N}} a = 0 \Rightarrow a = 0; \\ {}&b \in {\mathcal {B}}, \, {\mathcal {M}} b =0 \,\, \text {and} \,\, b {\mathcal {N}} = 0 \Rightarrow b = 0. \end{aligned} \end{aligned}$$
(2.1)

If \( a_0 m = m b_0 \) and \( n a_0 = b_0 n \) for all \( m \in {\mathcal {M}} \) and \( n \in {\mathcal {N}} \), where \( a_0 \in {\mathcal {A}} \) and \( b_0 \in {\mathcal {B}} \) are fixed, then \( a_0 \in Z( {\mathcal {A}} ) \) and \( b_0 \in Z( {\mathcal {B}} ) \). Indeed, for any \( a \in {\mathcal {A}} \), \( m \in {\mathcal {M}} \) and \( n \in {\mathcal {N}} \), we have

$$\begin{aligned} aa_0 m = a m b_0 = a_0 a m, \quad n a a_0 = b_0 n a = n a_0 a. \end{aligned}$$

Hence

$$\begin{aligned} ( a a_0 - a_0 a ) {\mathcal {M}} = 0 \quad \text {and} \quad {\mathcal {N}} ( a a_0 - a_0 a ) = 0. \end{aligned}$$

By (2.1), \( a_0 a = a a_0 \) for any \( a \in {\mathcal {A}} \), and thus \( a_0 \in Z( {\mathcal {A}} ) \). Similarly, it follows that \( b_0 \in Z ( {\mathcal {B}} ) \). So, if \({\mathcal {G}}\) satisfies (2.1), then it follows from Lemma 2.1 that

$$\begin{aligned} Z ( {\mathcal {G}} ) = \left\{ \begin{bmatrix} a &{} 0 \\ 0 &{} b \end{bmatrix}: am = mb, na = bn, \forall m \in {\mathcal {M}}, \forall n \in {\mathcal {N}} \right\} . \end{aligned}$$

The \(({\mathcal {A}}, {\mathcal {B}})\)-bimodule \( {\mathcal {M}} \) is called faithful if \(a {\mathcal {M}} = 0\) (\(a \in {\mathcal {A}}\)) implies \(a= 0\) and \({\mathcal {M}} b = 0\) (\(b \in {\mathcal {B}}\)) implies \(b = 0\). Let \({\mathcal {G}}= \begin{bmatrix} {\mathcal {A}} &{} {\mathcal {M}} \\ {\mathcal {N}} &{} {\mathcal {B}} \end{bmatrix}\) be a generalized matrix algebra. Clearly, if \( {\mathcal {M}} \) or \( {\mathcal {N}} \) is a faithful bimodule, then \({\mathcal {G}}\) satisfies (2.1), but the converse is not necessarily true (see [6, Example 2.1]). Note that if \( Tri ( {\mathcal {A}}, {\mathcal {M}}, {\mathcal {B}} ) \) is a triangular algebra, then (2.1) is equivalent to the faithfulness of \( {\mathcal {M}} \) as an \(({\mathcal {A}}, {\mathcal {B}})\)-bimodule.

On a generalized matrix algebra \({\mathcal {G}}= \begin{bmatrix} {\mathcal {A}} &{} {\mathcal {M}} \\ {\mathcal {N}} &{} {\mathcal {B}} \end{bmatrix}\) define two natural projections \( \pi _1: {\mathcal {G}} \rightarrow {\mathcal {A}} \) and \( \pi _2: {\mathcal {G}} \rightarrow {\mathcal {B}} \) by

$$\begin{aligned} \pi _1 \left( \begin{bmatrix} a &{} m \\ n &{} b \end{bmatrix} \right) = a \quad \text {and} \quad \pi _2 \left( \begin{bmatrix} a &{} m \\ n &{} b \end{bmatrix} \right) = b. \end{aligned}$$

Under the conditions of Lemma 2.1, it follows that \( \pi _1 ( Z ( {\mathcal {G}} )) \subseteq Z ( {\mathcal {A}} ) \) and \( \pi _2 ( Z ( {\mathcal {G}} )) \subseteq Z ( {\mathcal {B}} ) \). In fact, \(\pi _1 ( Z ( {\mathcal {G}} ))\) is a subalgebra of \( Z ( {\mathcal {A}} ) \) and \( \pi _2 ( Z ( {\mathcal {G}} )) \) is a subalgebra of \( Z ( {\mathcal {B}} ) \). The following result is proved in [3].

Lemma 2.3

([3, Proposition 2.1]) Let \({\mathcal {G}}= \begin{bmatrix} {\mathcal {A}} &{} {\mathcal {M}} \\ {\mathcal {N}} &{} {\mathcal {B}} \end{bmatrix}\) be a unital generalized matrix algebra satisfying (2.1). Then there exists a unique algebra isomorphism \( \eta : \pi _1 ( Z ({\mathcal {G}})) \rightarrow \pi _2 ( Z ( {\mathcal {G}} )) \) such that \( a m = m \eta (a) \) and \( n a = \eta (a) n \) for all \(a\in \pi _1( Z ({\mathcal {G}}))\), \( m \in {\mathcal {M}} \) and \( n \in {\mathcal {N}}\).

If \( Tri ( {\mathcal {A}}, {\mathcal {M}}, {\mathcal {B}} ) \) is a unital triangular algebra in which \( {\mathcal {M}} \) is a faithful \(( {\mathcal {A}}, {\mathcal {B}} ) \)-bimodule, then Lemma 2.1, Remark 2.2 and Lemma 2.3 hold for \( Tri ( {\mathcal {A}}, {\mathcal {M}}, {\mathcal {B}} ) \) as well.

Remark 2.4

Let \({\mathcal {U}}\) be a unital algebra with unity 1 and a non-trivial idempotent p (\(p^2 = p, p\ne 0,1\)). Define the following sets:

$$\begin{aligned} \begin{aligned}&{\mathcal {U}}_{11}=p {\mathcal {U}}p; \\ {}&{\mathcal {U}}_{12}=p {\mathcal {U}}(1-p); \\ {}&{\mathcal {U}}_{21}=(1-p ){\mathcal {U}}p; \\ {}&{\mathcal {U}}_{22}=(1-p ){\mathcal {U}}(1- p). \end{aligned} \end{aligned}$$

Every \({\mathcal {U}}_{ij}\) (\(1\le i,j \le 2\)) is a subspace of \({\mathcal {U}}\) such that

$$\begin{aligned} {\mathcal {U}}={\mathcal {U}}_{11}\dotplus {\mathcal {U}}_{12}\dotplus {\mathcal {U}}_{21} \dotplus {\mathcal {U}}_{22} \end{aligned}$$

as a direct sum of subspaces. Also, \({\mathcal {U}}_{11}\) and \({\mathcal {U}}_{22}\) are unital subalgebras of \({\mathcal {U}}\) with unities p and \(1-p\), respectively, \({\mathcal {U}}_{12}\) is a unital \(({\mathcal {U}}_{11}, {\mathcal {U}}_{22})\)-bimodule and \({\mathcal {U}}_{21}\) is a unital \(({\mathcal {U}}_{22}, {\mathcal {U}}_{11})\)-bimodule. In fact, \({\mathcal {U}}\) is a unital generalized matrix algebra of the form

$$\begin{aligned} {\mathcal {U}}=\begin{bmatrix} {\mathcal {U}}_{11}&{} {\mathcal {U}}_{12} \\ {\mathcal {U}}_{21} &{} {\mathcal {U}}_{22} \end{bmatrix}. \end{aligned}$$

This decomposition of \({\mathcal {U}}\) is called the Peirce decomposition of \({\mathcal {U}}\) with respect to the idempotent p. In this case we have

$$\begin{aligned} \pi _1: {\mathcal {U}} \rightarrow {\mathcal {U}}_{11} \quad \text {and} \quad \pi _2: {\mathcal {U}}\rightarrow {\mathcal {U}}_{22}, \end{aligned}$$

where \(\pi _1 \) and \(\pi _2\) are projection maps, \(\pi _1( Z ({\mathcal {U}}))\subseteq Z({\mathcal {U}}_{11}) \) and \(\pi _2( Z ({\mathcal {U}}))\subseteq Z({\mathcal {U}}_{22}) \).

If \({\mathcal {U}}\) is a Banach algebra, then each \({\mathcal {U}}_{ij}\) is a closed subspace, \({\mathcal {U}}\) is the direct sum of these Banach spaces, and the norm of \({\mathcal {U}}\) is equivalent to the \(\ell ^1\)-norm of this decomposition. So \({\mathcal {U}}_{11}\) and \({\mathcal {U}}_{22}\) are Banach subalgebras of \({\mathcal {U}}\), \({\mathcal {U}}_{12}\) is a Banach \(({\mathcal {U}}_{11}, {\mathcal {U}}_{22})\)-bimodule and \({\mathcal {U}}_{21}\) is a Banach \(({\mathcal {U}}_{22}, {\mathcal {U}}_{11})\)-bimodule. Since \({\mathcal {U}}_{11}\) is a unital Banach algebra with unity p, every element of \({\mathcal {U}}_{11}\) is a sum of two invertible elements of \({\mathcal {U}}_{11}\). The set of all invertible elements of \({\mathcal {U}}_{11}\) is denoted by \(Inv({\mathcal {U}}_{11})\), and if \(x_{11}\in Inv({\mathcal {U}}_{11})\), then \(x_{11}^{-1}\) denotes the inverse of \(x_{11}\) in \({\mathcal {U}}_{11}\), so that \(x_{11}x_{11}^{-1}=x_{11}^{-1}x_{11}=p\).
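The fact that every element of \({\mathcal {U}}_{11}\) is a sum of two invertible elements follows from a standard scalar-shift argument: if \(x_{11}\in {\mathcal {U}}_{11}\) and \(\lambda \) is a scalar with \(|\lambda |>\Vert x_{11}\Vert \), then

$$\begin{aligned} x_{11}=(x_{11}-\lambda p)+\lambda p, \end{aligned}$$

where \(\lambda p\) is invertible in \({\mathcal {U}}_{11}\) and \(x_{11}-\lambda p=-\lambda (p-\lambda ^{-1}x_{11})\) is invertible in \({\mathcal {U}}_{11}\) by the Neumann series, since \(\Vert \lambda ^{-1}x_{11}\Vert <1\).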

3 Main Results

In this section the main results of this paper are given. Throughout this section we assume that \({\mathcal {U}}\) is a unital Banach algebra with unity 1 and a non-trivial idempotent p. We consider \({\mathcal {U}}\) as the following generalized matrix algebra

$$\begin{aligned} {\mathcal {U}}=\begin{bmatrix} {\mathcal {U}}_{11}&{} {\mathcal {U}}_{12} \\ {\mathcal {U}}_{21} &{} {\mathcal {U}}_{22} \end{bmatrix} \end{aligned}$$

where each \({\mathcal {U}}_{ij}\) (\(1\le i,j \le 2\)) is as in Remark 2.4. When we write an element \(x_{ij}\in {\mathcal {U}}\), we always mean that \(x_{ij}\in {\mathcal {U}}_{ij}\) (\(1\le i,j \le 2\)). It should also be noted that we denote the matrix \(\begin{bmatrix} p &{} 0 \\ 0 &{} 0 \end{bmatrix}\) by p; which of the two is meant will always be clear from the context.

The following theorem is the main result of this paper.

Theorem 3.1

Suppose that

$$\begin{aligned}{} & {} x_{11}\in {\mathcal {U}}_{11},\, x_{11}{\mathcal {U}}_{12}=0 \,\, \text {and} \,\,{\mathcal {U}}_{21}x_{11}= 0 \Longrightarrow x_{11}= 0; \\{} & {} x_{22}\in {\mathcal {U}}_{22}, \, {\mathcal {U}}_{12}x_{22}=0 \,\, \text {and} \,\, x_{22}{\mathcal {U}}_{21}= 0 \Longrightarrow x_{22}= 0. \end{aligned}$$

Let \( \pi _1 ( Z({\mathcal {U}} )) = Z({\mathcal {U}}_{11}) \) and \( \pi _2 ( Z({\mathcal {U}})) = Z({\mathcal {U}}_{22}) \). Then the additive map \( \varphi : {\mathcal {U}} \rightarrow {\mathcal {U}} \) satisfies \(({{\textbf {P}}})\) if and only if \(\varphi (x)=cx+\mu (x)\) for all \(x\in {\mathcal {U}}\), where \(c\in Z({\mathcal {U}})\), \(\mu :{\mathcal {U}}\rightarrow Z({\mathcal {U}})\) is an additive map in which \(\mu ([x,y])=0\) for any \(x,y \in {\mathcal {U}}\) with \(xy=p\).

Proof. The "if" part is obvious, so we prove only the "only if" part. Let \(\varphi \) satisfy \(({{\textbf {P}}})\). Since \(\varphi \) is additive, it can be decomposed in the following form

$$\begin{aligned} \begin{aligned} \varphi&\Bigl (\begin{bmatrix} x_{11}&{} x_{12}\\ x_{21}&{} x_{22}\end{bmatrix}\Bigl )\\ {}&=\begin{bmatrix} \alpha _1(x_{11})+\beta _1(x_{22})+\tau _1(x_{12})+\gamma _1(x_{21})&{} \alpha _2(x_{11})+\beta _2(x_{22})+\tau _2(x_{12})+\gamma _2(x_{21}) \\ \alpha _3(x_{11})+\beta _3(x_{22})+\tau _3(x_{12})+\gamma _3(x_{21}) &{} \alpha _4(x_{11})+\beta _4(x_{22})+\tau _4(x_{12})+\gamma _4(x_{21}) \end{bmatrix} \end{aligned} \end{aligned}$$

where

$$\begin{aligned}{} & {} \alpha _{1}:{\mathcal {U}}_{11}\rightarrow {\mathcal {U}}_{11}, \alpha _{2}:{\mathcal {U}}_{11}\rightarrow {\mathcal {U}}_{12}, \alpha _{3}:{\mathcal {U}}_{11}\rightarrow {\mathcal {U}}_{21}, \alpha _{4}:{\mathcal {U}}_{11}\rightarrow {\mathcal {U}}_{22}\\{} & {} \beta _{1}:{\mathcal {U}}_{22}\rightarrow {\mathcal {U}}_{11}, \beta _{2}:{\mathcal {U}}_{22}\rightarrow {\mathcal {U}}_{12}, \beta _{3}:{\mathcal {U}}_{22}\rightarrow {\mathcal {U}}_{21}, \beta _{4}:{\mathcal {U}}_{22}\rightarrow {\mathcal {U}}_{22}\\{} & {} \tau _{1}:{\mathcal {U}}_{12}\rightarrow {\mathcal {U}}_{11}, \tau _{2}:{\mathcal {U}}_{12}\rightarrow {\mathcal {U}}_{12}, \tau _{3}:{\mathcal {U}}_{12}\rightarrow {\mathcal {U}}_{21}, \tau _{4}:{\mathcal {U}}_{12}\rightarrow {\mathcal {U}}_{22}\\{} & {} \gamma _{1}:{\mathcal {U}}_{21}\rightarrow {\mathcal {U}}_{11}, \gamma _{2}:{\mathcal {U}}_{21}\rightarrow {\mathcal {U}}_{12}, \gamma _{3}:{\mathcal {U}}_{21}\rightarrow {\mathcal {U}}_{21}, \gamma _{4}:{\mathcal {U}}_{21}\rightarrow {\mathcal {U}}_{22}\end{aligned}$$

are additive maps. Through the following claims, we characterize the structure of \(\varphi \).

Claim 1

\(\alpha _{2}=0,\alpha _{3}=0,\alpha _{4}(x_{11})\in Z({\mathcal {U}}_{22})\) for each \(x_{11}\in {\mathcal {U}}_{11}\) and \(\alpha _{1}(x_{11})x_{11}=x_{11}\alpha _{1}(x_{11})\) for each \(x_{11}\in Inv({\mathcal {U}}_{11})\).

Proof

Suppose that \(x_{11}\in Inv({\mathcal {U}}_{11})\) and \(x_{22}\in {\mathcal {U}}_{22}\). Let \(a=\begin{bmatrix} x_{11}&{} 0 \\ 0 &{} 0 \end{bmatrix}\) and \(b=\begin{bmatrix} x_{11}^{-1} &{} 0 \\ 0 &{} x_{22}\end{bmatrix}.\) Then \(ab=ba=p\) and so

$$\begin{aligned} 0=\varphi ([a,b])=[\varphi (a),b]=[a,\varphi (b)]. \end{aligned}$$

Therefore

$$\begin{aligned} \begin{array}{rlllllll} 0=[\varphi (a),b] &{}= &{}\Bigl [\begin{bmatrix} \alpha _{1}(x_{11}) &{} \alpha _{2}(x_{11}) \\ \alpha _{3}(x_{11}) &{} \alpha _{4}(x_{11}) \end{bmatrix}, \begin{bmatrix} x_{11}^{-1} &{} 0 \\ 0 &{} x_{22}\end{bmatrix}\Bigl ]\\ &{}=&{} \begin{bmatrix} \alpha _{1}(x_{11})x_{11}^{-1}-x_{11}^{-1}\alpha _{1}(x_{11}) &{} \alpha _{2}(x_{11})x_{22}-x_{11}^{-1}\alpha _{2}(x_{11}) \\ \alpha _{3}(x_{11})x_{11}^{-1}-x_{22}\alpha _{3}(x_{11}) &{} \alpha _{4}(x_{11})x_{22}-x_{22}\alpha _{4}(x_{11}) \end{bmatrix}. \end{array} \end{aligned}$$

So

$$\begin{aligned} x_{11}\alpha _{1}(x_{11})=\alpha _{1}(x_{11})x_{11}. \end{aligned}$$

Also

$$\begin{aligned}{} & {} \alpha _{2}(x_{11})x_{22}=x_{11}^{-1}\alpha _{2}(x_{11});\nonumber \\{} & {} \alpha _{3}(x_{11})x_{11}^{-1}=x_{22}\alpha _{3}(x_{11});\nonumber \\{} & {} \alpha _{4}(x_{11})x_{22}=x_{22}\alpha _{4}(x_{11}). \end{aligned}$$
(3.1)

If we put \(x_{22}=0\) in the first and second relations of (3.1), we get

$$\begin{aligned} \alpha _{2}(x_{11})=0 \quad \text {and} \quad \alpha _{3}(x_{11})=0 \end{aligned}$$

for each \(x_{11}\in Inv({\mathcal {U}}_{11})\). Since \(\alpha _{2},\alpha _{3}\) are additive and each element of \({\mathcal {U}}_{11}\) is a sum of two invertible elements of \({\mathcal {U}}_{11}\), it follows that \(\alpha _{2}=0,\alpha _{3}=0\). Also, from the last relation of (3.1) we have \(\alpha _{4}(x_{11})\in Z({\mathcal {U}}_{22})\) for each \(x_{11}\in Inv({\mathcal {U}}_{11})\). Since \(\alpha _{4}\) is additive and \(Z({\mathcal {U}}_{22})\) is a subalgebra of \({\mathcal {U}}_{22}\), and in particular closed under addition, we obtain

$$\begin{aligned} \alpha _{4}(x_{11})\in Z({\mathcal {U}}_{22}) \end{aligned}$$

for each \(x_{11}\in {\mathcal {U}}_{11}\). \(\square \)

Claim 2

\(\beta _{2}=0,\beta _{3}=0\) and \(\beta _{1}(x_{22})\in Z({\mathcal {U}}_{11})\) for each \(x_{22}\in {\mathcal {U}}_{22}\).

Proof

Let \(a,b \in {\mathcal {U}}\) be as in the proof of Claim 1. Using the relation \([a,\varphi (b)]=0\) together with Claim 1, the results are obtained by an argument similar to that of Claim 1. \(\square \)

Claim 3

\(\tau _{1}=0,\tau _{3}=0\) and \(\tau _{4}=0\).

Proof

Let \(x_{12}\in {\mathcal {U}}_{12}\) be arbitrary and define \(a=\begin{bmatrix} p &{} x_{12}\\ 0 &{} 0 \end{bmatrix}\) and \(b=\begin{bmatrix} p &{} 0 \\ 0 &{} 0 \end{bmatrix}.\) Then \(ab=p\) and \(ba=\begin{bmatrix} p &{} x_{12}\\ 0 &{} 0 \end{bmatrix}\). So, using Claim 1,

$$\begin{aligned} \begin{array}{rlllllll} \begin{bmatrix} -\tau _{1}(x_{12}) &{} -\tau _{2}(x_{12}) \\ -\tau _{3}(x_{12}) &{} -\tau _{4}(x_{12}) \end{bmatrix}&{}=&{}\varphi \Bigl (\begin{bmatrix} 0 &{} -x_{12}\\ 0 &{} 0 \end{bmatrix}\Bigl )\\ &{}=&{}\varphi ([a,b])=[\varphi (a),b]\\ &{}=&{}\begin{bmatrix} 0 &{} -\tau _{2}(x_{12}) \\ \tau _{3}(x_{12}) &{} 0 \end{bmatrix}. \end{array} \end{aligned}$$

Comparing the corresponding entries of this equation gives \(\tau _{1}(x_{12})=0\), \(\tau _{4}(x_{12})=0\) and \(2\tau _{3}(x_{12})=0\); since \({\mathcal {U}}\) is a linear space, the last identity yields \(\tau _{3}(x_{12})=0\). \(\square \)

Claim 4

\(\gamma _{1}=0,\gamma _{2}=0\) and \(\gamma _{4}=0\).

Proof

Put \(a=\begin{bmatrix} p &{} 0 \\ 0 &{} 0 \end{bmatrix}\) and \(b=\begin{bmatrix} p &{} 0 \\ x_{21}&{} 0 \end{bmatrix}\), where \(x_{21}\in {\mathcal {U}}_{21}\) is arbitrary. Then \(ab=p\), and hence \(\varphi ([a,b])=[a,\varphi (b)]\). Computing both sides of this relation (using Claim 1) and comparing entries gives the result. \(\square \)

According to Claims 1 and 2, our assumptions \( \pi _1 ( Z({\mathcal {U}} )) = Z({\mathcal {U}}_{11}) \) and \( \pi _2 ( Z({\mathcal {U}})) = Z({\mathcal {U}}_{22}) \), and Lemma 2.3, the following additive maps \({\bar{\alpha }}:{\mathcal {U}}_{11}\rightarrow {\mathcal {U}}_{11}\) and \({\bar{\beta }}:{\mathcal {U}}_{22}\rightarrow {\mathcal {U}}_{22}\) are well defined:

$$\begin{aligned}{} & {} {\bar{\alpha }}(x_{11})=\alpha _{1}(x_{11})-\eta ^{-1}(\alpha _{4}(x_{11})),\quad (x_{11}\in {\mathcal {U}}_{11}) \\{} & {} {\bar{\beta }}(x_{22})=\beta _{4}(x_{22})-\eta (\beta _{1}(x_{22})),\quad (x_{22}\in {\mathcal {U}}_{22}). \end{aligned}$$

Claim 5

$$\begin{aligned}{} & {} \tau _{2}(x_{11}x_{12})=x_{11}\tau _{2}(x_{12})={\bar{\alpha }}(x_{11})x_{12}; \\{} & {} \tau _{2}(x_{12}x_{22})=\tau _{2}(x_{12})x_{22}=x_{12}{\bar{\beta }}(x_{22}) \end{aligned}$$

for each \(x_{11}\in {\mathcal {U}}_{11},x_{12}\in {\mathcal {U}}_{12}, x_{22}\in {\mathcal {U}}_{22}\).

Proof

Let \(x_{11}\in Inv({\mathcal {U}}_{11})\) and \(x_{12}\in {\mathcal {U}}_{12}\) be arbitrary. Put \(a=\begin{bmatrix} x_{11}&{} x_{11}x_{12}\\ 0 &{} 0 \end{bmatrix}\) and \(b=\begin{bmatrix} x_{11}^{-1} &{} 0 \\ 0 &{} 0 \end{bmatrix}.\) So \(ab=p\) and \(ba=\begin{bmatrix} p &{} x_{12}\\ 0 &{} 0 \end{bmatrix}.\) Therefore

$$\begin{aligned} \varphi ([a,b])=[\varphi (a),b]=[a,\varphi (b)]. \end{aligned}$$

According to Claim 1, we have

$$\begin{aligned} \begin{array}{rlllllll} \begin{bmatrix} 0 &{} -\tau _{2}(x_{12}) \\ 0 &{} 0 \end{bmatrix}&{}=&{}\varphi \Bigl (\begin{bmatrix} 0 &{} -x_{12}\\ 0 &{} 0 \end{bmatrix}\Bigl )\\ &{}=&{}\varphi ([a,b])=[\varphi (a),b]\\ &{}=&{}\begin{bmatrix} 0 &{} x_{11}^{-1}\tau _{2}(x_{11}x_{12}) \\ 0 &{} 0 \end{bmatrix}. \end{array} \end{aligned}$$

Thus \(\tau _{2}(x_{11}x_{12})=x_{11}\tau _{2}(x_{12})\) for each \(x_{11}\in Inv({\mathcal {U}}_{11})\) and \(x_{12}\in {\mathcal {U}}_{12}\). Since \(\tau _{2}\) is additive and every element of \({\mathcal {U}}_{11}\) is a sum of two invertible elements, we conclude that \(\tau _{2}(x_{11}x_{12})=x_{11}\tau _{2}(x_{12})\) for each \(x_{11}\in {\mathcal {U}}_{11}\) and \(x_{12}\in {\mathcal {U}}_{12}\). Also, we have

$$\begin{aligned} \begin{array}{rlllllll} \begin{bmatrix} 0 &{} -\tau _{2}(x_{12}) \\ 0 &{} 0 \end{bmatrix}&{}=&{}\varphi ([a,b])=[a,\varphi (b)]\\ &{}=&{}\begin{bmatrix} 0 &{} x_{11}x_{12}\alpha _{4}(x_{11}^{-1})-\alpha _{1}(x_{11}^{-1})x_{11}x_{12}\\ 0 &{} 0 \end{bmatrix}. \end{array} \end{aligned}$$

So

$$\begin{aligned} \tau _{2}(x_{12})=\alpha _{1}(x_{11}^{-1})x_{11}x_{12}-x_{11}x_{12}\alpha _{4}(x_{11}^{-1}) \end{aligned}$$

for each \(x_{11}\in Inv({\mathcal {U}}_{11})\) and \(x_{12}\in {\mathcal {U}}_{12}\). Therefore

$$\begin{aligned} \tau _2(x_{11}^{-1}x_{12})=\alpha _1(x_{11}^{-1})x_{12}-x_{12}\alpha _4(x_{11}^{-1}) \end{aligned}$$

for every \( x_{11}\in Inv({\mathcal {U}}_{11})\) and \( x_{12}\in {\mathcal {U}}_{12}\). From the additivity of \(\alpha _1\), \(\tau _2\) and \(\alpha _4\), together with the fact that every element of \({\mathcal {U}}_{11}\) is a sum of two invertible elements, it follows that

$$\begin{aligned} \tau _2( x_{11}x_{12})=\alpha _1( x_{11}) x_{12}- x_{12}\alpha _4( x_{11}) \end{aligned}$$

for every \( x_{11}\in {\mathcal {U}}_{11}\) and \( x_{12}\in {\mathcal {U}}_{12}\). By Claim 1 and our assumptions \(\alpha _4( x_{11}) \in Z({\mathcal {U}}_{22})=\pi _2(Z({\mathcal {U}}))\), and hence by Lemma 2.3

$$\begin{aligned} \eta ^{-1}(\alpha _4( x_{11})) x_{12}= x_{12}\alpha _4( x_{11}). \end{aligned}$$

Thus we have

$$\begin{aligned} \tau _2( x_{11}x_{12})=\alpha _1( x_{11}) x_{12}-\eta ^{-1}(\alpha _4( x_{11})) x_{12}={\bar{\alpha }}( x_{11}) x_{12}\end{aligned}$$

for every \( x_{11}\in {\mathcal {U}}_{11}\) and \( x_{12}\in {\mathcal {U}}_{12}\). Now, taking \(x_{11}=p\) in the above relation, it follows that

$$\begin{aligned} \tau _2( x_{12})={\bar{\alpha }}(p) x_{12}\end{aligned}$$

for every \( x_{12}\in {\mathcal {U}}_{12}\). So

$$\begin{aligned} \tau _2( x_{12}x_{22})={\bar{\alpha }}(p) x_{12}x_{22}=\tau _2( x_{12}) x_{22}\end{aligned}$$

for every \( x_{12}\in {\mathcal {U}}_{12}\) and \( x_{22}\in {\mathcal {U}}_{22}\).

Let us define \( c=\begin{bmatrix} p&{} x_{12}\\ 0 &{} 0 \end{bmatrix}\) and \(d=\begin{bmatrix} p&{} x_{12}x_{22}\\ 0 &{} - x_{22}\end{bmatrix}\), where \( x_{12}\in {\mathcal {U}}_{12}\) and \( x_{22}\in {\mathcal {U}}_{22}\) are arbitrary. Then we have \(cd=p\) and \(dc= \begin{bmatrix} p&{} x_{12}\\ 0 &{} 0 \end{bmatrix}\). Thus

$$\begin{aligned} \begin{bmatrix} 0&{} -\tau _2( x_{12}) \\ 0 &{} 0 \end{bmatrix}= & {} \varphi ([c,d])=[c,\varphi (d)]\\= & {} \begin{bmatrix} 0&{} \tau _2( x_{12}x_{22})+ x_{12}\alpha _4(p)- x_{12}\beta _4( x_{22})-\alpha _1(p) x_{12}+\beta _1( x_{22}) x_{12}\\ 0 &{} 0 \end{bmatrix}. \end{aligned}$$

Since \(\tau _2( x_{12})=\alpha _1(p) x_{12}-x_{12}\alpha _4(p)\), it follows that

$$\begin{aligned} \tau _2( x_{12}x_{22})= x_{12}\beta _4( x_{22})-\beta _1( x_{22}) x_{12}\end{aligned}$$

for every \( x_{12}\in {\mathcal {U}}_{12}\) and \( x_{22}\in {\mathcal {U}}_{22}\). According to Claim 2, \(\beta _1( x_{22})\in Z({\mathcal {U}}_{11})\), and by assumption \(\pi _1(Z({\mathcal {U}}))=Z({\mathcal {U}}_{11})\); so, in view of the definition of \(\eta \), it follows that

$$\begin{aligned} \tau _2( x_{12}x_{22})= x_{12}\beta _4 ( x_{22}) - x_{12}\eta (\beta _1( x_{22}))= x_{12}{{\bar{\beta }}}( x_{22}), \end{aligned}$$

for every \( x_{12}\in {\mathcal {U}}_{12}\) and \( x_{22}\in {\mathcal {U}}_{22}\). \(\square \)

Claim 6

$$\begin{aligned}{} & {} \gamma _3( x_{21}x_{11})=\gamma _3( x_{21}) x_{11}= x_{21}{\bar{\alpha }}( x_{11}); \\{} & {} \gamma _3( x_{22}x_{21})= x_{22}\gamma _3( x_{21})={\bar{\beta }}( x_{22}) x_{21}\end{aligned}$$

for every \( x_{11}\in {\mathcal {U}}_{11}\), \( x_{21}\in {\mathcal {U}}_{21}\) and \( x_{22}\in {\mathcal {U}}_{22}\).

Proof

For every \( x_{11}\in Inv({\mathcal {U}}_{11})\) and \( x_{21}\in {\mathcal {U}}_{21}\), we set \( a=\begin{bmatrix} x_{11}^{-1}&{} 0 \\ 0 &{} 0 \end{bmatrix}\) and \(b=\begin{bmatrix} x_{11}&{} 0 \\ x_{21}x_{11}&{} 0 \end{bmatrix}\). So \(ab=p\) and \( ba=\begin{bmatrix} p &{} 0 \\ x_{21}&{} 0 \end{bmatrix}.\) Thus we have

$$\begin{aligned} \varphi ([a,b])=[\varphi (a),b]=[a,\varphi (b)]. \end{aligned}$$

Computing these brackets and using Claim 1, for every \( x_{11}\in Inv({\mathcal {U}}_{11})\) and \( x_{21}\in {\mathcal {U}}_{21}\) we obtain the following equalities:

$$\begin{aligned}{} & {} \gamma _3( x_{21}x_{11})x_{11}^{-1}=\gamma _3( x_{21}); \\{} & {} \gamma _3( x_{21})= x_{21}x_{11}\alpha _1(x_{11}^{-1})-\alpha _4(x_{11}^{-1}) x_{21}x_{11}. \end{aligned}$$

Thus for every \( x_{11}\in Inv({\mathcal {U}}_{11})\) and \( x_{21}\in {\mathcal {U}}_{21}\) we have

$$\begin{aligned}{} & {} \gamma _3( x_{21}x_{11})=\gamma _3( x_{21}) x_{11}; \\{} & {} \gamma _3( x_{21}x_{11}^{-1})= x_{21}\alpha _1(x_{11}^{-1})-\alpha _4(x_{11}^{-1}) x_{21}. \end{aligned}$$

Since \(\gamma _3\), \(\alpha _1\) and \(\alpha _4\) are additive and every element of \({\mathcal {U}}_{11}\) is a sum of two invertible elements of \({\mathcal {U}}_{11}\), it follows that

$$\begin{aligned}{} & {} \gamma _3( x_{21}x_{11})=\gamma _3( x_{21}) x_{11};\\{} & {} \gamma _3( x_{21}x_{11})= x_{21}\alpha _1( x_{11})-\alpha _4( x_{11}) x_{21}\\{} & {} = x_{21}\alpha _1( x_{11})- x_{21}\eta ^{-1}(\alpha _4( x_{11})) \\{} & {} =x_{21}{{\bar{\alpha }}}( x_{11}) \end{aligned}$$

for every \( x_{11}\in {\mathcal {U}}_{11}\) and \( x_{21}\in {\mathcal {U}}_{21}\). Thus \(\gamma _3( x_{21})= x_{21}{\bar{\alpha }}(p)\) and hence

$$\begin{aligned} \gamma _3( x_{22}x_{21})= x_{22}x_{21}{\bar{\alpha }}(p)= x_{22}\gamma _3( x_{21}) \end{aligned}$$

for every \( x_{22}\in {\mathcal {U}}_{22}\) and \( x_{21}\in {\mathcal {U}}_{21}\).

Let \(x_{21}\in {\mathcal {U}}_{21}\) and \(x_{22}\in {\mathcal {U}}_{22}\). Put \(c=\begin{bmatrix} p &{} 0 \\ x_{22}x_{21}&{} -x_{22}\end{bmatrix}\) and \(d=\begin{bmatrix} p &{} 0 \\ x_{21}&{} 0 \end{bmatrix}.\) Then \(cd=p\) and \(dc=\begin{bmatrix} p &{} 0 \\ x_{21}&{} 0 \end{bmatrix}.\) Using \(\varphi ([c,d])=[\varphi (c),d]\) together with \(\gamma _{3}(x_{21})=x_{21}\alpha _{1}(p)-\alpha _{4}(p)x_{21}\), we get

$$\begin{aligned} \begin{aligned} \gamma _{3}(x_{22}x_{21})&=\beta _{4}(x_{22})x_{21}-x_{21}\beta _{1}(x_{22})\\ {}&=\beta _{4}(x_{22})x_{21}-\eta (\beta _{1}(x_{22}))x_{21}\\ {}&={\bar{\beta }}(x_{22})x_{21}\end{aligned} \end{aligned}$$

for each \(x_{21}\in {\mathcal {U}}_{21}\) and \(x_{22}\in {\mathcal {U}}_{22}\). \(\square \)

Claim 7

\({\bar{\alpha }}:{\mathcal {U}}_{11}\rightarrow {\mathcal {U}}_{11}\) and \({\bar{\beta }}:{\mathcal {U}}_{22}\rightarrow {\mathcal {U}}_{22}\) are additive centralizers in the form \({\bar{\alpha }}(x_{11})={\bar{\alpha }}(p)x_{11}\) and \({\bar{\beta }}(x_{22})={\bar{\beta }}(1-p)x_{22}\), where \({\bar{\alpha }}(p)\in Z({\mathcal {U}}_{11}) \) and \({\bar{\beta }}(1-p)\in Z({\mathcal {U}}_{22})\).

Proof

It follows from Claims 5 and 6 that

$$\begin{aligned} \tau _{2}(x_{12})={\bar{\alpha }}(p)x_{12}=x_{12}{\bar{\beta }}(1-p) \end{aligned}$$

and

$$\begin{aligned} \gamma _{3}(x_{21})=x_{21}{\bar{\alpha }}(p)={\bar{\beta }}(1-p)x_{21}\end{aligned}$$

for each \(x_{12}\in {\mathcal {U}}_{12}\) and \(x_{21}\in {\mathcal {U}}_{21}\). By Remark 2.2, \({\bar{\alpha }}(p)\in Z({\mathcal {U}}_{11}) \) and \({\bar{\beta }}(1-p)\in Z({\mathcal {U}}_{22})\). Also, from Claims 5 and 6 we have

$$\begin{aligned} \tau _{2}(x_{11}x_{12})={\bar{\alpha }}(x_{11})x_{12}\quad \text {and} \quad \tau _{2}(x_{11}x_{12})={\bar{\alpha }}(p)x_{11}x_{12};\\\gamma _{3}(x_{21}x_{11})=x_{21}{\bar{\alpha }}(x_{11}) \quad \text {and} \quad \gamma _{3}(x_{21}x_{11})=x_{21}x_{11}{\bar{\alpha }}(p) \end{aligned}$$

for each \(x_{11}\in {\mathcal {U}}_{11}\), \(x_{12}\in {\mathcal {U}}_{12}\) and \(x_{21}\in {\mathcal {U}}_{21}\). Hence

$$\begin{aligned} ({\bar{\alpha }}(x_{11})-{\bar{\alpha }}(p)x_{11}){\mathcal {U}}_{12}=0 \quad \text {and} \quad {\mathcal {U}}_{21}({\bar{\alpha }}(x_{11})-{\bar{\alpha }}(p)x_{11})=0 \end{aligned}$$

for each \(x_{11}\in {\mathcal {U}}_{11}\), and from our assumption it follows that \({\bar{\alpha }}(x_{11})={\bar{\alpha }}(p)x_{11}\) for all \(x_{11}\in {\mathcal {U}}_{11}\).

In the same way, using Claims 5 and 6, it can be proved that \({\bar{\beta }}(x_{22})={\bar{\beta }}(1-p)x_{22}\) for all \(x_{22}\in {\mathcal {U}}_{22}\). \(\square \)

Claim 8

The conclusion of the theorem holds.

Proof

Define the additive maps \(\psi :{\mathcal {U}}\rightarrow {\mathcal {U}}\) and \(\mu :{\mathcal {U}}\rightarrow {\mathcal {U}}\) as follows

$$\begin{aligned} \psi (x)=\psi \Bigl (\begin{bmatrix} x_{11}&{} x_{12}\\ x_{21}&{} x_{22}\end{bmatrix}\Bigl )=\begin{bmatrix} {\bar{\alpha }}(x_{11}) &{} \tau _{2}(x_{12}) \\ \gamma _{3}(x_{21}) &{} {\bar{\beta }}(x_{22}) \end{bmatrix} \end{aligned}$$

and

$$\begin{aligned} \mu (x)=\mu \Bigl (\begin{bmatrix} x_{11}&{} x_{12}\\ x_{21}&{} x_{22}\end{bmatrix}\Bigl )=\begin{bmatrix} \eta ^{-1}(\alpha _{4}(x_{11}))+\beta _{1}(x_{22}) &{} 0 \\ 0 &{} \alpha _{4}(x_{11})+\eta (\beta _{1}(x_{22})) \end{bmatrix}, \end{aligned}$$

where \(x=\begin{bmatrix} x_{11}&{} x_{12}\\ x_{21}&{} x_{22}\end{bmatrix}\in {\mathcal {U}}.\) By Claims 5-7, it results that

$$\begin{aligned} \begin{aligned} \psi (x)&=\begin{bmatrix} {\bar{\alpha }}(x_{11}) &{}\tau _{2}(x_{12}) \\ \gamma _{3}(x_{21}) &{} {\bar{\beta }}(x_{22}) \end{bmatrix}\\ {}&=\begin{bmatrix} {\bar{\alpha }}(p)x_{11}&{} {\bar{\alpha }}(p)x_{12}\\ {\bar{\beta }}(1-p)x_{21}&{} {\bar{\beta }}(1-p)x_{22}\end{bmatrix}\\ {}&=cx, \end{aligned} \end{aligned}$$

where \(c=\begin{bmatrix} {\bar{\alpha }}(p)&{} 0\\ 0 &{} {\bar{\beta }}(1-p) \end{bmatrix}\in Z({\mathcal {U}})\). According to Claims 1 and 2, our assumptions and the properties of \(\eta \), we deduce that \(\mu (x)\in Z({\mathcal {U}})\) for each \(x\in {\mathcal {U}}\). Also, by the definition of \({\bar{\alpha }}\) and \({\bar{\beta }}\) and Claims 1-4, we get \(\varphi (x)=\psi (x)+\mu (x)=cx+\mu (x)\) for all \(x\in {\mathcal {U}}\). Finally, for \(x,y\in {\mathcal {U}}\) with \(xy=p\), we have

$$\begin{aligned} \begin{aligned} \mu ([x,y])&=\varphi ([x,y])-c[x,y]\\ {}&=[\varphi (x),y]-[cx,y]\\ {}&=[cx,y]+[\mu (x),y]-[cx,y]=0. \end{aligned} \end{aligned}$$

The last equality holds because \(\mu (x)\in Z({\mathcal {U}})\), and the proof is complete. \(\square \)

Using Theorem 3.1, we characterize Lie centralizers as follows.

Corollary 3.2

Suppose that

$$\begin{aligned} x_{11}\in {\mathcal {U}}_{11},\, x_{11}{\mathcal {U}}_{12}=0 \,\, \text {and} \,\,{\mathcal {U}}_{21}x_{11}= 0 \Longrightarrow x_{11}= 0; \\ x_{22}\in {\mathcal {U}}_{22}, \, {\mathcal {U}}_{12}x_{22}=0 \,\, \text {and} \,\, x_{22}{\mathcal {U}}_{21}= 0 \Longrightarrow x_{22}= 0. \end{aligned}$$

Let \( \pi _1 ( Z({\mathcal {U}} )) = Z({\mathcal {U}}_{11}) \) and \( \pi _2 ( Z({\mathcal {U}})) = Z({\mathcal {U}}_{22}) \). Then the additive map \( \varphi : {\mathcal {U}} \rightarrow {\mathcal {U}} \) is a Lie centralizer if and only if \(\varphi (x)=cx+\mu (x)\) for all \(x\in {\mathcal {U}}\), where \(c\in Z({\mathcal {U}})\), \(\mu :{\mathcal {U}}\rightarrow Z({\mathcal {U}})\) is an additive map in which \(\mu ([x,y])=0\) for any \(x,y \in {\mathcal {U}}\).

Proof

Suppose that \(\varphi \) is a Lie centralizer. Since it then satisfies condition \(({{\textbf {P}}})\), according to Theorem 3.1 there exist an element \(c\in Z({\mathcal {U}})\) and an additive map \(\mu :{\mathcal {U}}\rightarrow Z({\mathcal {U}})\) such that \(\varphi (x)=cx+\mu (x)\) for any \(x\in {\mathcal {U}}\). It remains to prove that \(\mu ([x,y])=0\) for any \(x,y\in {\mathcal {U}}\). This can be proved in the same manner as the final computation in the proof of Claim 8 of Theorem 3.1, since \(\varphi ([x,y])=[\varphi (x),y]\) now holds for all \(x,y\in {\mathcal {U}}\).

The converse is clear. \(\square \)
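In fact, the converse amounts to a one-line computation: if \(\varphi (x)=cx+\mu (x)\) with \(c\in Z({\mathcal {U}})\) and \(\mu :{\mathcal {U}}\rightarrow Z({\mathcal {U}})\) an additive map satisfying \(\mu ([x,y])=0\) for all \(x,y\in {\mathcal {U}}\), then for any \(x,y\in {\mathcal {U}}\),

$$\begin{aligned} \varphi ([x,y])=c[x,y]+\mu ([x,y])=c[x,y]=[cx,y]=[cx+\mu (x),y]=[\varphi (x),y], \end{aligned}$$

where the third and fourth equalities use that c and \(\mu (x)\) lie in \(Z({\mathcal {U}})\); hence \(\varphi \) is a Lie centralizer.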

Assume that \({\mathcal {U}}:=Tri({\mathcal {A}}, {\mathcal {M}}, {\mathcal {B}})\) is a unital triangular algebra with unity \(1=\begin{bmatrix} 1_{{\mathcal {A}}} &{} 0 \\ 0 &{} 1_{{\mathcal {B}}} \end{bmatrix}\). Then \(p=\begin{bmatrix} 1_{{\mathcal {A}}} &{} 0 \\ 0 &{} 0 \end{bmatrix}\) is an idempotent element of \({\mathcal {U}}\), which is called the standard idempotent. In this case, if we consider the Peirce decomposition of \({\mathcal {U}}\) with respect to p, then \({\mathcal {U}}_{11}={\mathcal {A}}\), \({\mathcal {U}}_{22}={\mathcal {B}}\), \({\mathcal {U}}_{12}= {\mathcal {M}}\) and \({\mathcal {U}}_{21}=0\). If \(Tri({\mathcal {A}}, {\mathcal {M}}, {\mathcal {B}})\) is a Banach algebra with respect to some norm, then \({\mathcal {U}}\) is a Banach algebra with the non-trivial idempotent p, \({\mathcal {A}}\) and \({\mathcal {B}}\) are Banach subalgebras of \({\mathcal {U}}\), and \({\mathcal {M}}\) is a Banach \(({\mathcal {A}}, {\mathcal {B}})\)-bimodule.

Using these notations and Theorem 3.1, we obtain the following result, which was proved in [5]. So it can be said that, in the case of Banach algebras, Theorem 3.1 is a generalization of the main results of [5].

Proposition 3.3

Let \({\mathcal {U}}:=Tri({\mathcal {A}}, {\mathcal {M}}, {\mathcal {B}})\) be a unital triangular algebra which is a Banach algebra with respect to some norm. Suppose that \({\mathcal {M}}\) is a faithful \(({\mathcal {A}},{\mathcal {B}})\)-bimodule. Further assume that \(\pi _1(Z({\mathcal {U}}))=Z({\mathcal {A}})\), \(\pi _2(Z({\mathcal {U}}))=Z({\mathcal {B}})\), and let \(p=\begin{bmatrix} 1_{{\mathcal {A}}} &{} 0 \\ 0 &{} 0 \end{bmatrix}\) be the standard idempotent of \({\mathcal {U}}\). Let \(\varphi : {\mathcal {U}}\rightarrow {\mathcal {U}}\) be an additive map. Then:

  (i) \(\varphi \) satisfies \(({{\textbf {P}}})\) if and only if \(\varphi (x)=cx+\mu (x)\) for all \(x\in {\mathcal {U}}\), where \(c\in Z({\mathcal {U}})\), \(\mu :{\mathcal {U}}\rightarrow Z({\mathcal {U}})\) is an additive map in which \(\mu ([x,y])=0\) for any \(x,y \in {\mathcal {U}}\) with \(xy=p\).

  (ii) \(\varphi \) is a Lie centralizer if and only if \(\varphi (x)=cx+\mu (x)\) for all \(x\in {\mathcal {U}}\), where \(c\in Z({\mathcal {U}})\), \(\mu :{\mathcal {U}}\rightarrow Z({\mathcal {U}})\) is an additive map in which \(\mu ([x,y])=0\) for any \(x,y \in {\mathcal {U}}\).

4 Applications

We now give several applications of the results of the previous section.

First, we give an application to von Neumann algebras, for which we need some notation and preliminaries. A von Neumann algebra \({\mathcal {U}}\) is a weakly closed, self-adjoint algebra of operators on a Hilbert space \( {\mathcal {H}} \) containing the identity operator 1. An element of \( {\mathcal {U}}\) is called a projection if it is idempotent and self-adjoint. A projection \(p\in {\mathcal {U}}\) is called a central abelian projection if \( p \in Z ({\mathcal {U}}) \) and \( p {\mathcal {U}}p \) is abelian. The central carrier of \(x\in {\mathcal {U}}\), denoted by \( {\overline{x}}\), is the smallest central projection p satisfying \( px = x\). It is well known that \( {\overline{x}}\) is the projection whose range is the closed linear span of \( \{ ax(h): a\in {\mathcal {U}}, h \in {\mathcal {H}} \}\). For each self-adjoint operator \( b \in {\mathcal {U}}\), the core of b, denoted by \( {\underline{b}} \), is \( \sup \{z \in Z ({\mathcal {U}}): z= z^*, z \le b \} \). If \( p \in {\mathcal {U}}\) is a projection and \( {\underline{p}} = 0\), we call p a core-free projection. A routine verification shows that \({\underline{p}} = 0\) if and only if \( \overline{1-p} = 1\). Note that \({\mathcal {U}}\) is a von Neumann algebra with no central summands of type \(I_1\) if and only if it has a projection p such that \({\underline{p}} = 0\) and \( {\overline{p}} = 1 \). If \({\mathcal {U}}\) is an arbitrary von Neumann algebra, the unit element 1 of \({\mathcal {U}}\) is the sum of two orthogonal central projections \(e_1\) and \(e_2\) such that \({\mathcal {U}}= {\mathcal {U}}e_1\oplus {\mathcal {U}}e_2 \), \( {\mathcal {U}}e_1\) is of type \(I_1\) and \({\mathcal {U}}e_2 \) is a von Neumann algebra with no central summands of type \(I_1\). So \( {\mathcal {U}}e_2\) contains a core-free projection with central carrier \(e_2\). We refer the reader to [15] for the theory of von Neumann algebras.
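The routine verification just mentioned can be carried out as follows, using the standard fact that, for a projection p, the core \({\underline{p}}\) is the largest central projection majorized by p: for every central projection z,

$$\begin{aligned} z \le p \Longleftrightarrow z(1-p)=0 \Longleftrightarrow (1-z)(1-p)=1-p \Longleftrightarrow \overline{1-p} \le 1-z \Longleftrightarrow z \le 1-\overline{1-p}, \end{aligned}$$

so \({\underline{p}}=1-\overline{1-p}\); in particular, \({\underline{p}}=0\) if and only if \(\overline{1-p}=1\).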

Remark 4.1

Let \({\mathcal {U}}\) be a von Neumann algebra with no central summands of type \(I_1\), and \(p\in {\mathcal {U}}\) be a projection such that \({\underline{p}} = 0\) and \( {\overline{p}} = 1 \). We have \(\underline{1-p} = 0\) and \( \overline{1-p} = 1 \). Consider the Peirce decomposition of \({\mathcal {U}}\) with respect to p. It follows from the definition of the central carrier that both \( span\{a p(h): a\in {\mathcal {U}}, h \in {\mathcal {H}} \}\) and \( span\{a (1-p) (h): a\in {\mathcal {U}}, h \in {\mathcal {H}} \}\) are dense in \({\mathcal {H}}\). So \( a \in {\mathcal {U}}\), \( a {\mathcal {U}}p = \{ 0\}\) implies \( a = 0\) and \( a {\mathcal {U}}(1- p ) =\{0 \} \) implies \( a = 0\). From these results, it follows that \({\mathcal {U}}\) satisfies (2.1). By [15, Corollary 5.5.7] we have \(Z(p{\mathcal {U}}p)=pZ({\mathcal {U}})\) and \(Z((1-p){\mathcal {U}}(1-p))=(1-p)Z({\mathcal {U}})\). So \( \pi _1 ( Z({\mathcal {U}} )) = Z({\mathcal {U}}_{11}) \) and \( \pi _2 ( Z({\mathcal {U}})) = Z({\mathcal {U}}_{22}) \).
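In more detail, if \(x_{11}\in {\mathcal {U}}_{11}\) and \(x_{11}{\mathcal {U}}_{12}=0\), then

$$\begin{aligned} x_{11}{\mathcal {U}}(1-p)=x_{11}p\,{\mathcal {U}}(1-p)=x_{11}{\mathcal {U}}_{12}=0, \end{aligned}$$

so \(x_{11}=0\); similarly, if \(x_{22}\in {\mathcal {U}}_{22}\) and \(x_{22}{\mathcal {U}}_{21}=0\), then \(x_{22}{\mathcal {U}}p=x_{22}(1-p){\mathcal {U}}p=x_{22}{\mathcal {U}}_{21}=0\) and hence \(x_{22}=0\). This gives (2.1).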

Now we are ready to state our result.

Theorem 4.2

Let \({\mathcal {U}}\) be a von Neumann algebra with unit element 1, and \(e_1 + e_2 =1\), where \(e_1\) and \(e_2\) are two orthogonal central projections such that \( {\mathcal {U}}e_1\) is of type \(I_1\) and \( {\mathcal {U}}e_2\) is a von Neumann algebra with no central summands of type \(I_1\). Suppose that \(p\in {\mathcal {U}}e_2\) is a core-free projection with central carrier \(e_2\). Let \(\varphi : {\mathcal {U}}\rightarrow {\mathcal {U}}\) be an additive map. Then \(\varphi \) satisfies \(({{\textbf {P}}})\) if and only if \(\varphi (x)=cx+\mu (x)\) \((x\in {\mathcal {U}})\), where \(c\in Z({\mathcal {U}})\), \(\mu :{\mathcal {U}}\rightarrow Z({\mathcal {U}})\) is an additive map in which \(\mu ([x,y])=0\) for any \(x,y \in {\mathcal {U}}\) with \(xy=p\).

Proof

Assume that \(\varphi \) satisfies \(({{\textbf {P}}})\). Let \(a\in {\mathcal {U}}\) be arbitrary and consider \(b, y\in {\mathcal {U}}e_2 \) with \(by=p\). Put \(x:=a e_1 + b \). Since \(b=be_2\) and \(y=ye_2\), it follows that \(xy=p\) and \([x,y]=[b,y]=[b,y]e_2\) (because \({\mathcal {U}}e_1 \subseteq Z({\mathcal {U}})\)). From our assumption it follows that \(\varphi ([x,y])=[\varphi (x),y]=[x,\varphi (y)]\). So

$$\begin{aligned} \begin{aligned} \varphi ([b,y])&=[\varphi (x),y]\\ {}&=[\varphi (x)e_2,y]\\ {}&=[\varphi (ae_1)e_2,y]+[\varphi (b)e_2,y]. \end{aligned} \end{aligned}$$

Multiplying both sides of the above identity by \(e_1\), we get \(\varphi ([b,y])e_1=0\). Hence

$$\begin{aligned} \varphi ([b,y])e_2=[\varphi (ae_1)e_2,y]+[\varphi (b)e_2,y]. \end{aligned}$$
(4.1)

If we assume in (4.1) that \(a=0\), then we arrive at

$$\begin{aligned} \varphi ([b,y])e_2=[\varphi (b)e_2,y]. \end{aligned}$$
(4.2)

for all \(b, y\in {\mathcal {U}}e_2 \) with \(by=p\). Also, we have

$$\begin{aligned} \begin{aligned} \varphi ([b,y])&=[x,\varphi (y)]\\ {}&=[ae_1,\varphi (y)e_1]+[b,\varphi (y)e_2]. \end{aligned} \end{aligned}$$

So by the fact that \({\mathcal {U}}e_1 \subseteq Z({\mathcal {U}})\) we get

$$\begin{aligned} \varphi ([b,y])e_2=[b,\varphi (y)e_2]. \end{aligned}$$
(4.3)

for all \(b, y\in {\mathcal {U}}e_2 \) with \(by=p\). Equations (4.2) and (4.3) show that the additive map \(\lambda : {\mathcal {U}}e_2\rightarrow {\mathcal {U}}e_2\) defined by \(\lambda (xe_2)=\varphi (xe_2)e_2\) satisfies condition \(({{\textbf {P}}})\) on \({\mathcal {U}}e_2\). By our assumption, \( {\mathcal {U}}e_2\) is a von Neumann algebra with no central summands of type \(I_1\), and \(p\in {\mathcal {U}}e_2\) is a projection such that \({\underline{p}} = 0\) and \( {\overline{p}} = e_2 \). If we consider the Peirce decomposition of \({\mathcal {U}}e_2\) with respect to p, then according to Remark 4.1, all the assumptions of Theorem 3.1 are valid for \({\mathcal {U}}e_2\). So by Theorem 3.1, there are \(c_1\in Z({\mathcal {U}}e_2)\subseteq Z({\mathcal {U}})\) and an additive mapping \(\mu _1:{\mathcal {U}}e_2 \rightarrow Z({\mathcal {U}}e_2)\subseteq Z({\mathcal {U}})\) such that

$$\begin{aligned} \varphi (xe_2)e_2=\lambda (xe_2)=c_1xe_2+\mu _1(xe_2) \end{aligned}$$
(4.4)

for all \(x\in {\mathcal {U}}\) and \(\mu _1([xe_2, ye_2])=0 \) for all \(x,y\in {\mathcal {U}}\) with \(xye_2=p\). By (4.1) and (4.4), for all \(a\in {\mathcal {U}}\) and \(b, y\in {\mathcal {U}}e_2 \) with \(by=p\) we have

$$\begin{aligned} \begin{aligned} c_1[b,y]&=c_1[b,y]+\mu _1([b,y])\\ {}&=[\varphi (ae_1)e_2,y]+[c_1b + \mu _1(b),y]\\ {}&=[\varphi (ae_1)e_2,y]+c_1[b,y]. \end{aligned} \end{aligned}$$

If we put \(b=py^{-1}\) in the above equation, where y is an arbitrary invertible element of \({\mathcal {U}}e_2\) (i.e., \(y\in Inv({\mathcal {U}} e_2)\)), then we get

$$\begin{aligned}{}[\varphi (ae_1)e_2,y]=0\end{aligned}$$

for all \(a\in {\mathcal {U}}\) and \(y\in Inv({\mathcal {U}} e_2)\). Since each element of \({\mathcal {U}}e_2\) is a sum of two invertible elements of \({\mathcal {U}}e_2\), it follows that

$$\begin{aligned} \varphi (ae_1)e_2\in Z({\mathcal {U}}e_2)\subseteq Z({\mathcal {U}}) \end{aligned}$$

for all \(a\in {\mathcal {U}}\). Also

$$\begin{aligned} \varphi (x)e_1\in {\mathcal {U}}e_1\subseteq Z({\mathcal {U}}) \end{aligned}$$

for all \(x\in {\mathcal {U}}\). Now by (4.4) we have

$$\begin{aligned} \begin{aligned} \varphi (x)&=\varphi (x)e_1 +\varphi (xe_1)e_2 +\varphi (xe_2)e_2\\ {}&=\varphi (x)e_1 +\varphi (xe_1)e_2 +c_1xe_2+\mu _1(xe_2)\\ {}&=cx+\mu (x), \end{aligned} \end{aligned}$$

for all \(x\in {\mathcal {U}}\), where \(c:=c_1e_2\in Z({\mathcal {U}})\) and \(\mu : {\mathcal {U}}\rightarrow {\mathcal {U}}\) is the additive map defined by \(\mu (x)=\varphi (x)e_1 +\varphi (xe_1)e_2 +\mu _1(xe_2)\). By the above, all three summands lie in \(Z({\mathcal {U}})\), so \(\mu \) maps \({\mathcal {U}}\) into \(Z({\mathcal {U}})\). Finally, for \(x,y\in {\mathcal {U}}\) with \(xy=p\) we have

$$\begin{aligned} \mu ([x,y])=\varphi ([x,y])-c[x,y]=[\varphi (x),y]-[cx,y]=[\mu (x),y]=0. \end{aligned}$$

The converse is clear. \(\square \)

In the following result we characterize Lie centralizers on von Neumann algebras which is a generalization of [9, Corollary 4.3-(ii)].

Corollary 4.3

Suppose that \({\mathcal {U}}\) is a von Neumann algebra, and \(\varphi : {\mathcal {U}}\rightarrow {\mathcal {U}}\) is an additive map. Then \(\varphi \) is a Lie centralizer if and only if \(\varphi (x)=cx+\mu (x)\) \((x\in {\mathcal {U}})\), where \(c\in Z({\mathcal {U}})\), \(\mu :{\mathcal {U}}\rightarrow Z({\mathcal {U}})\) is an additive map in which \(\mu ([x,y])=0\) for any \(x,y \in {\mathcal {U}}\).

Proof

Let \(\varphi \) be a Lie centralizer. If \({\mathcal {U}}\) is an abelian von Neumann algebra, then \(\varphi \) maps \({\mathcal {U}}\) into \(Z({\mathcal {U}})={\mathcal {U}}\). Also, since \({\mathcal {U}}\) is abelian, \(\varphi ([x,y])=\varphi (0)=0\) for any \(x,y \in {\mathcal {U}}\). So in this case the result holds. Now assume that \({\mathcal {U}}\) is non-abelian. In this case the unit element 1 of \({\mathcal {U}}\) is the sum of two orthogonal central projections \(e_1\) and \(e_2\) such that \({\mathcal {U}}= {\mathcal {U}}e_1\oplus {\mathcal {U}}e_2 \), \( {\mathcal {U}}e_1\) is of type \(I_1\) and \({\mathcal {U}}e_2 \) is a von Neumann algebra with no central summands of type \(I_1\). So there exists a core-free projection \(p\in {\mathcal {U}}e_2\) with central carrier \(e_2\). Because \(\varphi \) is a Lie centralizer, it satisfies condition \(({{\textbf {P}}})\) on \({\mathcal {U}}\). Hence, by Theorem 4.2, \(\varphi (x)=cx+\mu (x)\) \((x\in {\mathcal {U}})\), where \(c\in Z({\mathcal {U}})\) and \(\mu :{\mathcal {U}}\rightarrow Z({\mathcal {U}})\) is an additive map. It remains to prove that \(\mu ([x,y])=0\) for any \(x,y\in {\mathcal {U}}\); this can be proved in the same manner as the final computation in the proof of Theorem 4.2, since \(\varphi ([x,y])=[\varphi (x),y]\) now holds for all \(x,y\in {\mathcal {U}}\).

The converse is clear. \(\square \)

In the next remark, we introduce various classes of Banach algebras for which our main results hold.

Remark 4.4

Various unital Banach algebras \({\mathcal {U}}\) have been identified that possess a non-trivial idempotent p such that the Peirce decomposition of \({\mathcal {U}}\) with respect to p satisfies (2.1) and, moreover, \( \pi _1 ( Z({\mathcal {U}} )) = Z({\mathcal {U}}_{11}) \) and \( \pi _2 ( Z({\mathcal {U}})) = Z({\mathcal {U}}_{22}) \). Therefore, Theorem 3.1 and Corollary 3.2 hold for additive mappings on these Banach algebras that satisfy \(({{\textbf {P}}})\) (for the idempotent p). Since these Banach algebras and their appropriate non-trivial idempotents have been treated in various articles, where the mentioned properties are verified, here we only list these Banach algebras and the corresponding non-trivial idempotents p, together with suitable references; a concrete instance of the resulting Peirce decomposition for the first item is written out after the list.

  • \(M_n({\mathcal {A}})\), where \({\mathcal {A}}\) is a unital Banach algebra, \(M_n({\mathcal {A}})\) is the Banach algebra of \(n\times n\) matrices over \({\mathcal {A}}\) with respect to some norm, and \(p=E_{11}\) or \(p=I-E_{11}\) (\(E_{11}\) is the matrix unit and I is the identity matrix) (see [21]).

  • \(T_n({\mathcal {A}})\), where \({\mathcal {A}}\) is a unital Banach algebra, \(T_n({\mathcal {A}})\) is the Banach algebra of all \(n\times n\) upper triangular matrices over \({\mathcal {A}}\) with respect to some norm, and \(p=E_{11}\) or \(p=I-E_{11}\) (see [21]).

  • Unital standard operator algebra \({\mathcal {U}}\) on a complex Banach space \({\mathcal {X}}\) which is a Banach algebra with respect to some norm, where p is any non-trivial idempotent in \({\mathcal {U}}\) (see [20]).

  • Factor von Neumann algebra \({\mathcal {U}}\) on a complex Hilbert space \( {\mathcal {H}} \), where p is any non-trivial idempotent in \({\mathcal {U}}\) (see [20]).

  • Non-trivial nest algebra \(Alg{\mathcal {N}}\) on a complex Hilbert space \( {\mathcal {H}} \), where p is a projection operator on a non-trivial element of \({\mathcal {N}}\) (see [21]).
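To illustrate the first item, for \(M_2({\mathcal {A}})\) and \(p=E_{11}\) the Peirce decomposition of Remark 2.4 takes the explicit form

$$\begin{aligned} {\mathcal {U}}_{11}=\begin{bmatrix} {\mathcal {A}} &{} 0 \\ 0 &{} 0 \end{bmatrix},\quad {\mathcal {U}}_{12}=\begin{bmatrix} 0 &{} {\mathcal {A}} \\ 0 &{} 0 \end{bmatrix},\quad {\mathcal {U}}_{21}=\begin{bmatrix} 0 &{} 0 \\ {\mathcal {A}} &{} 0 \end{bmatrix},\quad {\mathcal {U}}_{22}=\begin{bmatrix} 0 &{} 0 \\ 0 &{} {\mathcal {A}} \end{bmatrix}, \end{aligned}$$

so that \({\mathcal {U}}_{11}\) and \({\mathcal {U}}_{22}\) are isomorphic to \({\mathcal {A}}\) as algebras, \({\mathcal {U}}_{12}\) and \({\mathcal {U}}_{21}\) are isomorphic to \({\mathcal {A}}\) as bimodules, and (2.1) holds simply because \({\mathcal {A}}\) is unital.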