Abstract
The theory developed for solving Levi–Civita functional equations is more comprehensive on groups than it is on semigroups, due to the existence of prime ideals in semigroups. Here we solve on semigroups a particular Levi–Civita equation of importance to the theory, and we use that result together with representation theory to solve the general Levi–Civita equation with three summands on commutative monoids. The results also give the continuous solutions on topological semigroups and monoids.
1 Introduction
Let S be a semigroup, \({{\mathbb {N}}}\) the set of positive integers, and \({{\mathbb {C}}}\) the set of complex numbers. A functional equation of the form
$$\begin{aligned} f(xy) = \sum _{j=1}^{n} g_j(x)h_j(y), \quad x,y \in S, \end{aligned}$$(1)
for \(n\in {{\mathbb {N}}}\) and unknown functions \(f,g_1,\ldots ,g_n,h_1,\ldots ,h_n:S \rightarrow {{\mathbb {C}}}\) is called a Levi–Civita equation. There are two general methods used to describe the solutions of (1). One is based on representation theory (see [3,4,5,6] and [7, Chapter 5]), and the other, for the case that S is an Abelian group, is based on spectral synthesis (see [8, Chapter 10]). These methods describe the general structure of the solution but do not supply explicit formulas for the solution functions. For that one may need to use special properties of S and/or make additional computations. The existence of prime ideals in semigroups complicates matters and rules out the methods found in [5, 6, 8].
A general result giving explicit solution formulas for (1) in the case \(n=2\) on commutative monoids was proved in [2]. Here we do the same for the case \(n=3\), where the complications caused by prime ideals increase significantly over the case \(n=2\).
The outline of the paper is as follows. Section 2 introduces some notation and terminology. In Sect. 3 we quote a result about an important specific instance of (1) for \(n=2\), namely the sine addition formula. The same section contains a fundamental result using representation theory to describe the general structure of solutions of (1). Section 4 contains the solution of a key particular instance of (1) for \(n=3\). Our main result, Theorem 5.1, follows in Sect. 5. The final section contains applications to specific functional equations and examples on a couple of commutative monoids with very different prime ideal structures.
Generally our results are presented in their topological versions, but one may choose the discrete topology.
2 Notation and Terminology
Throughout this paper S denotes a semigroup. A monoid is a semigroup with an identity element generally denoted e.
An additive function on S is a homomorphism from S into \(({{\mathbb {C}}}, +)\).
A polynomial on S is a function of the form \(P(A_1,\ldots ,A_n)\), where \(P \in {{\mathbb {C}}}[x_1,\ldots , x_n]\) and \(A_1,\ldots ,A_n\) are additive functions.
A multiplicative function on S is a homomorphism from S into \(({{\mathbb {C}}}, \cdot )\). If \(\chi : S \rightarrow {{\mathbb {C}}}\) is multiplicative and \(\chi \ne 0\) then we call \(\chi \) an exponential on S. When we say that a function F is nonzero we mean that \(F \ne 0\).
Define the nullspace of a multiplicative \(\chi \) by
$$\begin{aligned} I_{\chi } := \{x \in S \mid \chi (x) = 0\}. \end{aligned}$$
If \(I_{\chi } \ne \emptyset \) then it is a (two-sided) ideal of S and is called the null ideal of \(\chi \). An ideal \(I\subset S\) is said to be a prime ideal if \(I \ne S\) and whenever \(xy\in I\) it follows that either \(x\in I\) or \(y\in I\), so \(S\setminus I\) is a nonempty subsemigroup of S. There is a very close relationship between prime ideals and exponentials on semigroups. For any exponential \(\chi \) it is easy to see that if \(I_{\chi } \ne \emptyset \) then \(I_{\chi }\) is a prime ideal. Conversely, if I is any prime ideal of S and we define \(\chi (x) := 0\) for \(x\in I\) and \(\chi (x) := 1\) for \(x\in S\setminus I\), then \(\chi :S\rightarrow {{\mathbb {C}}}\) is an exponential with null ideal \(I_{\chi } = I\).
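To illustrate the converse construction (this sketch is ours, not part of the paper's argument), the following Python fragment builds the indicator exponential attached to a prime ideal of \(({{\mathbb {N}}},\cdot )\) and checks multiplicativity on a small range; the choice of the ideal of even numbers is just an example.

```python
from itertools import product

def chi_from_prime_ideal(in_ideal):
    """Return the exponential that is 0 on the prime ideal I and 1 on its complement."""
    return lambda x: 0 if in_ideal(x) else 1

# Example prime ideal of (N, *): the multiples of 2.
chi = chi_from_prime_ideal(lambda x: x % 2 == 0)

# chi is multiplicative because xy is even exactly when x or y is even (2 is prime).
assert all(chi(x * y) == chi(x) * chi(y)
           for x, y in product(range(1, 60), repeat=2))
```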
For any subset \(T \subseteq S\) we define \(T^2 := \{t_1t_2 \mid t_1,t_2 \in T\}\). Several subsets of the nullspace \(I_\chi \) of a multiplicative \(\chi \) are significant for the description of solutions of (1). One of them is the set \(P_\chi \subseteq I_\chi \setminus I_\chi ^2\) from [1], which appears in Proposition 3.1 below.
For a topological space X, let C(X) denote the algebra of continuous functions mapping X into \({{\mathbb {C}}}\). Let \({{\mathbb {C}}}^* = {{\mathbb {C}}}\setminus \{0\}\).
3 Supporting Theory
Since (1) for \(n=1\) is \(f(xy) = g_1(x)h_1(y)\), it is not surprising that multiplicative functions are fundamental to the study of (1). A second particular Levi–Civita equation that is important for the general study of (1) is the sine addition formula
$$\begin{aligned} \phi (xy) = \phi (x)\gamma (y) + \phi (y)\gamma (x), \quad x,y \in S, \end{aligned}$$(2)
for unknown functions \(\phi ,\gamma :S \rightarrow {{\mathbb {C}}}\). The following is [1, Theorem 3.1].
Proposition 3.1
Let S be a topological semigroup, and suppose \(\phi ,\gamma :S \rightarrow {{\mathbb {C}}}\) satisfy the sine addition law (2) with \(\phi \ne 0\) and \(\phi \in C(S)\). Then \(\phi ,\gamma \) belong to one of the following families, where \(\chi _1,\chi _2\in C(S)\) are multiplicative.
-
(a)
For \(\chi _1 \ne \chi _2\) there exists \(b\in {{\mathbb {C}}}^*\) such that \(\phi = b(\chi _1 - \chi _2)\) and \(\gamma = (\chi _1 + \chi _2)/2\).
-
(b)
For \(\chi _1 = \chi _2 =: \chi \ne 0\), we have \(\gamma = \chi \) and
$$\begin{aligned} \phi (x) = {\left\{ \begin{array}{ll} A(x)\chi (x) &{} \text { for } x\in S\setminus I_\chi \\ \phi _P(x) &{} \text { for } x \in P_\chi \\ 0 &{} \text { for } x \in I_\chi \setminus P_\chi \end{array}\right. } \end{aligned}$$(3)where \(A\in C(S\setminus I_{\chi })\) is additive and \(\phi _P\in C(P_\chi )\) is the restriction of \(\phi \) to \(P_\chi \). In addition we have the following conditions.
-
(i)
\(\phi (xu) = \phi (ux) = 0\) for all \(x \in I_\chi \setminus P_\chi \) and \(u\in S\setminus I_\chi \).
-
(ii)
If \(x \in \{up, pv, upv\}\) for \(p \in P_\chi \) and \(u,v\in S\setminus I_\chi \), then \(x \in P_\chi \) and we have respectively \(\phi _P(x) = \phi _P(p)\chi (u)\), \(\phi _P(x) = \phi _P(p)\chi (v)\), \(\phi _P(x) = \phi _P(p)\chi (uv)\).
-
(c)
For \(\chi _1 = \chi _2 = 0\), we have \(\gamma = 0\), \(S \ne S^2\), and
$$\begin{aligned} \phi (x) = {\left\{ \begin{array}{ll} \phi _0(x) &{} \text { for } x\in S\setminus S^2 \\ 0 &{} \text { for } x \in S^2 \end{array}\right. } \end{aligned}$$where \(\phi _0\in C(S\setminus S^2)\) is an arbitrary nonzero function.
Conversely, if the pair \((\phi ,\gamma )\) is given by the formulas in (a), (c), or (b) with conditions (i) and (ii) holding, then \((\phi ,\gamma )\) satisfies (2).
Note that \(\phi _P\) may take arbitrary values at some points of its domain (see Example 6.4).
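For orientation (this example is ours, not from [1]), a classical instance of case (a) on the monoid \(({{\mathbb {R}}},+)\) is the trigonometric sine addition formula: taking \(\chi _1(x) = e^{ix}\), \(\chi _2(x) = e^{-ix}\), and \(b = \frac{1}{2i}\) gives
$$\begin{aligned} \phi = \tfrac{1}{2i}(\chi _1 - \chi _2) = \sin , \qquad \gamma = \tfrac{1}{2}(\chi _1 + \chi _2) = \cos , \end{aligned}$$
and indeed \(\sin (x+y) = \sin x \cos y + \sin y \cos x\).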
The following is an immediate consequence of Proposition 3.1.
Corollary 3.2
Let S be a topological semigroup, \(\chi \in C(S)\) an exponential, and \(\phi \in C(S)\) with \(\phi \ne 0\). If \(\phi \) satisfies the special sine addition law
$$\begin{aligned} \phi (xy) = \phi (x)\chi (y) + \phi (y)\chi (x), \quad x,y \in S, \end{aligned}$$(4)
then \(\phi \) has the form (3) where \(A\in C(S\setminus I_{\chi })\) is additive and \(\phi _P \in C(P_\chi )\). In addition we have conditions (i) and (ii) of Proposition 3.1(b).
Conversely, if \(\phi \) has the form (3) where A is additive and \(\phi ,\phi _P\) satisfy conditions (i) and (ii) of Proposition 3.1(b), then \(\phi \) is a solution of (4).
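For example (an illustration of ours), on the monoid \(({{\mathbb {R}}},+)\) every continuous exponential has the form \(\chi (x) = e^{\lambda x}\) and has empty nullspace, so (3) reduces to \(\phi = A\chi \); with \(A(x) = cx\) one checks directly that
$$\begin{aligned} \phi (x+y) = c(x+y)e^{\lambda (x+y)} = \phi (x)\chi (y) + \phi (y)\chi (x). \end{aligned}$$
The behavior on \(P_\chi \) and \(I_\chi \) only becomes visible when \(I_\chi \ne \emptyset \), as in Example 6.4.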
Our fundamental result for describing the general structure of solutions of the Levi–Civita Eq. (1) on commutative monoids is the following, most of which is the result of combining [4, Lemma 2.4 and Theorem 2.5]. For any \(n\in {{\mathbb {N}}}\) consider \({{\mathbb {C}}}^n\) as a vector space of column vectors, and let \({\mathcal {M}}_n({{\mathbb {C}}})\) denote the algebra of \(n\times n\) matrices over \({{\mathbb {C}}}\). To avoid confusion with nullspaces labeled as \(I_{r}\), we denote the \(n\times n\) identity matrix by \(E_n\). A pure polynomial is a polynomial with constant term 0.
Proposition 3.3
Let \(n \in {{\mathbb {N}}}\), let S be a topological commutative monoid, and suppose \(f,g_j,h_j \in C(S)\) satisfy (1), with \(\{g_1,\ldots ,g_n\}\) and \(\{h_1,\ldots ,h_n\}\) linearly independent. Let \(V = span \{g_1,\ldots ,g_n\}\) and \(g = [g_1,\ldots ,g_n]^t\).
There exists an associative and commutative algebra \(({{\mathbb {C}}}^n, +, *)\) with identity element g(e) and regular representation \(R: {{\mathbb {C}}}^n \rightarrow {\mathcal {M}}_n({{\mathbb {C}}})\) such that
with \(R(g(e)) = E_n\) and \(g(x) = R(g(x))g(e)\) for all \(x\in S\).
There exists a similarity matrix \(D \in {\mathcal {M}}_n({{\mathbb {C}}})\) simultaneously transforming the family \(\{R(g(x))\mid x \in S\}\) of commuting matrices into block diagonal form
where each \(M_r\) is lower triangular of the form
for all \(x\in S\). The vector space V is spanned by the matrix elements of the representation \(R\circ g\) of S, that is
so each \(\chi _r\) and \(\rho _r^{i,j}\) belongs to V.
Each \(\chi _r\) is an exponential with nullspace denoted \(I_r\), and \(\chi _1,\ldots ,\chi _s\) are distinct. For each \(r\in \{1,\ldots ,s\}\) and \(j \in \{1,\ldots , d_r - 1\}\) there exist linearly independent additive \(A_{r,j} \in C(S\setminus I_{r})\), pure polynomials \(P_{r,j}(A_{r,1},\ldots , A_{r,d_r-1})\) of degree at most \(d_r - 1\), and functions \(q_{r,j}\in V\) such that
and V has a basis \(B_1\cup \cdots \cup B_s\) where \(B_r = \{\chi _r, q_{r,1}, \ldots , q_{r,d_r - 1} \}\). (Note that the form of \(q_{r,j}\) on \(I_r\) is unspecified.)
Moreover \(f,h_1,\ldots ,h_n \in V\).
Proof
Most of the statement follows from [4, Lemma 2.4 and Theorem 2.5]. The items needing justification are the following.
The first item is (5), which follows from the definition of R and \(g = (R\circ g) g(e)\) as noted near the end of the proof of [3, Theorem 1]. From this it is clear that each \(\chi _r\) and \(\rho _r^{i,j}\) belongs to V.
The second item is the linear independence (unstated in [4]) of the additive functions \(A_{r,j}\). We may assume without loss of generality that \(\{A_{r,1},\ldots , A_{r,d_r-1}\}\) is linearly independent, since if not we could compensate by adjusting the coefficients of the \(P_{r,j}\). \(\square \)
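To make the matrix picture concrete, here is a small numerical illustration of ours (not part of the proof): on \(({{\mathbb {R}}},+)\), a lower triangular \(2\times 2\) family with diagonal entries \(\chi \) and corner entry \(\phi \) is multiplicative exactly because \(\phi \) satisfies the special sine addition law (4).

```python
import numpy as np

lam = 0.3                                   # chi(x) = exp(lam * x)
chi = lambda x: np.exp(lam * x)
phi = lambda x: x * np.exp(lam * x)         # phi = A * chi with A(x) = x additive

def R(x):
    """Lower triangular matrix with diagonal chi(x) and corner entry phi(x)."""
    return np.array([[chi(x), 0.0], [phi(x), chi(x)]])

# R(x + y) = R(x) @ R(y): the corner entry of the product is
# phi(x) chi(y) + chi(x) phi(y), which equals phi(x + y) by (4).
rng = np.random.default_rng(0)
for x, y in rng.uniform(-2.0, 2.0, size=(5, 2)):
    assert np.allclose(R(x + y), R(x) @ R(y))
```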
4 Solution of a Particular Instance of (1) for \(n=3\)
Another particular Levi–Civita equation that plays a critical role in our discussion is
$$\begin{aligned} \psi (xy) = \psi (x)\chi (y) + \chi (x)\psi (y) + \phi (x)\phi (y), \quad x,y \in S, \end{aligned}$$(6)
for unknown \(\psi :S \rightarrow {{\mathbb {C}}}\), where \(\chi :S \rightarrow {{\mathbb {C}}}\) is an exponential and \(\phi :S\rightarrow {{\mathbb {C}}}\) is a nonzero solution of the special sine addition law (4). Observe that \(\psi (xy) = \psi (yx)\) for all \(x,y \in S\) by the symmetry of the right hand side of (6).
The form of a continuous \(\phi \) satisfying (4) on a topological semigroup is described in Corollary 3.2. Now we find the form of continuous \(\psi \) satisfying (6). In order to describe such functions we subdivide \(I_\chi \setminus P_\chi \) as follows. First define
It is easy to see that \(J_\chi \subseteq I_\chi \setminus P_\chi \), since \(J_\chi \subseteq I_\chi ^2\) while \(P_\chi \subseteq I_\chi \setminus I_\chi ^2\). Next define
It is not difficult to verify that \(J_\chi \) and \(K_\chi \) are disjoint. For example, suppose \(x\in I_\chi (I_\chi \setminus P_\chi )\), say \(x = st\) with \(s\in I_\chi \), \(t \in I_\chi \setminus P_\chi \). Then for all \(u \in S\setminus I_\chi \) we have \(ux = (us)t \in J_\chi \), so \(x \notin K_\chi \). It only remains to note that \(S\setminus I_\chi \) is nonempty since \(\chi \) is an exponential. The proof for \(x\in (I_\chi \setminus P_\chi )I_\chi \) is parallel. Thus we can view \(I_\chi \setminus P_\chi \) as the disjoint union
$$\begin{aligned} I_\chi \setminus P_\chi = J_\chi \cup K_\chi \cup L_{\chi }, \end{aligned}$$
where \(L_{\chi } := I_\chi \setminus (P_\chi \cup K_\chi \cup J_\chi )\).
Theorem 4.1
Let S be a topological semigroup, let \(\chi \in C(S)\) be an exponential, and let \(\phi \in C(S)\) be a nonzero solution of the special sine addition law (4) as described in Corollary 3.2, with additive \(A\in C(S\setminus I_\chi )\) and restriction \(\phi _P \in C(P_\chi )\).
If \(\psi \in C(S)\) is a solution of (6), then there exist an additive \(B\in C(S\setminus I_\chi )\) and (restriction) functions \(\psi _P \in C(P_\chi )\), \(\psi _K \in C(K_\chi )\) such that
$$\begin{aligned} \psi (x) = {\left\{ \begin{array}{ll} \left[ B(x) + \frac{1}{2}A(x)^2\right] \chi (x) &{} \text { for } x\in S\setminus I_\chi \\ \psi _P(x) &{} \text { for } x \in P_\chi \\ \psi _K(x) &{} \text { for } x \in K_\chi \\ 0 &{} \text { for } x \in J_\chi \cup L_\chi \end{array}\right. } \end{aligned}$$(7)
and the following conditions hold.
-
(a)
For all \(y \in S\setminus I_\chi \) and \(x\in P_\chi \) we have \(yx,xy \in P_\chi \) and \(\psi _P(yx) = \psi _P(xy) = [\psi _P(x) + \phi _P(x)A(y)]\chi (y)\).
-
(b)
For all \(y \in S\setminus I_\chi \) and \(x\in K_\chi \) we have \(\psi (xy) = \psi (yx) = \psi _K(x)\chi (y)\).
-
(c)
\(\psi (xy) = \psi (yx) = 0\) for all \(y \in S\setminus I_\chi \) and \(x\in J_\chi \cup L_\chi \).
-
(d)
\(\psi (xy) = \psi (yx) = \phi _P(x)\phi _P(y)\) for all \(x,y\in P_\chi \).
-
(e)
\(\psi (xy) = \psi (yx) = 0\) for all \(x \in I_\chi \) and \(y\in I_\chi \setminus P_\chi \).
Conversely, if \(\psi :S \rightarrow {{\mathbb {C}}}\) has the form (7) with additive \(B:S\setminus I_\chi \rightarrow {{\mathbb {C}}}\) and conditions (a)–(e) holding, then \(\psi \) satisfies (6).
Proof
Suppose \(\psi \in C(S)\) is a solution of (6), and let \(\psi _P,\psi _K\) be the respective restrictions of \(\psi \) to \(P_\chi \) and \(K_\chi \), which are disjoint. Recalling that \(\phi \) has the form (3), we get for \(x,y \in S\setminus I_\chi \) that
$$\begin{aligned} \psi (xy) = \psi (x)\chi (y) + \chi (x)\psi (y) + A(x)A(y)\chi (x)\chi (y), \end{aligned}$$
from which it follows that
$$\begin{aligned} \frac{\psi (xy)}{\chi (xy)} = \frac{\psi (x)}{\chi (x)} + \frac{\psi (y)}{\chi (y)} + A(x)A(y). \end{aligned}$$
Thus \(B\in C(S\setminus I_\chi )\) defined by
$$\begin{aligned} B := \frac{\psi }{\chi } - \frac{1}{2}A^2 \quad \text {on } S\setminus I_\chi \end{aligned}$$
is additive, and we have the top case of (7).
Next we show that \(\psi \) vanishes everywhere on \(J_\chi \cup L_\chi \). Note first that if \(x,y\in I_\chi \) and either of them belongs to \(I_\chi \setminus P_\chi \), then by (6) we have
since \(\chi (x) = \chi (y) = 0\) and one of \(\phi (x),\phi (y)\) is zero. Thus we have
confirming part of the bottom case of (7). Now suppose \(x\in L_\chi \). By definition of \(L_\chi \) there exists \(u_0\) and/or \(v_0\) in \(S\setminus I_{\chi }\) such that \(u_0x\), \(xv_0\), or \(u_0xv_0\) belongs to \(J_{\chi }\). In the case \(u_0x \in J_{\chi }\) we have
since \(\chi (x) = 0 = \phi (x)\) (because \(L_\chi \subseteq I_\chi \setminus P_\chi \)). It follows from \(\chi (u_0) \ne 0\) that \(\psi (x) = 0\). The case \(xv_0 \in J_{\chi }\) is parallel. In the case \(u_0xv_0\in J_\chi \) we note that \(\chi (xv_0) = \chi (x) = 0 = \phi (x) = \phi (xv_0)\), the last by condition (i) of Proposition 3.1(b). Thus
so \(\psi (x) = 0\) since \(\chi (u_0)\chi (v_0) \ne 0\). This establishes the bottom case of (7).
Since there is nothing to prove in the two middle cases, (7) is established. It remains to prove conditions (a)–(e).
To prove (a) take \(x\in S\setminus I_\chi \), \(y \in P_\chi \) in (6). Then
since \(\chi (y) = 0\). It follows from condition (ii) of Proposition 3.1(b) that \(xy, yx \in P_\chi \), thus we have condition (a).
For (b) suppose \(x\in S\setminus I_\chi \), \(y \in K_\chi \). Then (6) yields
since \(\chi (y) = \phi (y) = 0\).
To prove (c) let \(y \in S\setminus I_\chi \) and \(x\in J_\chi \cup L_\chi \). Then \(\chi (x) = \psi (x) = \phi (x) = 0\), so \(\psi (xy) = \psi (yx) = 0\) follows immediately from (6).
For (d), taking \(x,y\in P_\chi \) we see that
since \(\chi (x) = \chi (y) = 0\).
Finally, for (e) suppose \(x\in I_\chi \) and \(y \in I_\chi \setminus P_\chi \). Noting that \(\chi (x) = \chi (y) = \phi (y) = 0\), (6) yields \(\psi (xy) = \psi (yx) = 0\).
For the converse, suppose first that \(y \in S\setminus I_\chi \). If \(x \in S\setminus I_\chi \), then we have
For \(x \in P_\chi \) we have by (a) that \(xy\in P_\chi \), and
since \(\chi (x) = 0\). Next, if \(x\in K_\chi \) then we get from (b) that
since \(\chi (x) = \phi (x) = 0\). Lastly, if \(x \in J_\chi \cup L_\chi \) then by (c) we have (6) since \(\psi (xy) = \chi (x) = \psi (x) = \phi (x) = 0\).
The mirror cases with \(x\in S\setminus I_\chi \) are parallel, so we have verified all combinations of x, y with one of them belonging to \(S\setminus I_\chi \).
Now suppose \(y \in P_\chi \). If \(x\in P_\chi \) then (d) yields
since \(\chi (x) = \chi (y) = 0\). If \(x\in I_\chi \setminus P_\chi \) then (e) yields \(\psi (xy) = 0\), which confirms (6) since \(\chi (x) = \chi (y) = \phi (x) = 0\). The remaining cases are left to the reader. \(\square \)
Note that \(\psi _P\) and \(\psi _K\) may take arbitrary values at some points of their respective domains (see Example 6.4). Later, in Examples 6.3 and 6.4, we will see how conditions (a)–(e) can be used to find explicit forms for \(\psi \) on semigroups with very different prime ideal structures.
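To make the top case of (7) concrete, consider \(({{\mathbb {R}}},+)\) with \(\chi (x) = e^{x}\) (so \(I_\chi = \emptyset \)), \(A(x) = x\), \(\phi (x) = xe^{x}\), and \(B(x) = bx\) for some \(b\in {{\mathbb {C}}}\). Then \(\psi (x) = \left( bx + \tfrac{1}{2}x^2\right) e^{x}\), and a direct computation confirms (6):
$$\begin{aligned} \psi (x)\chi (y) + \chi (x)\psi (y) + \phi (x)\phi (y) = \left[ b(x+y) + \tfrac{1}{2}(x+y)^2\right] e^{x+y} = \psi (x+y). \end{aligned}$$
(This example is ours, included only as a sanity check of the formulas.)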
Here we record a simple linear independence result involving exponentials, solutions of the sine addition law, and solutions of (6). In the next section we will see that these three function types make up bases for the solutions of (1) in the case \(n=3\).
Lemma 4.2
Let S be a semigroup.
-
(a)
The set of exponentials on S is linearly independent.
-
(b)
Let \(\chi :S \rightarrow {{\mathbb {C}}}\) be an exponential, and let \(\phi :S \rightarrow {{\mathbb {C}}}\) be a nonzero solution of the special sine addition law (4).
-
(i)
If \(\chi ':S \rightarrow {{\mathbb {C}}}\) is an exponential different from \(\chi \), then \(\{\chi ', \chi , \phi \}\) is linearly independent.
-
(ii)
If \(\phi ':S \rightarrow {{\mathbb {C}}}\) is a solution of (4) with \(\{\phi ,\phi '\}\) linearly independent, then \(\{\chi ,\phi ,\phi '\}\) is linearly independent.
-
(iii)
If \(\psi :S \rightarrow {{\mathbb {C}}}\) satisfies (6) and \(\psi \ne 0\), then \(\{\chi ,\phi ,\psi \}\) is linearly independent.
-
(i)
Proof
Part (a) is [7, Theorem 3.18].
For (b)(i) suppose there exist \(a,b,c\in {{\mathbb {C}}}\) such that \(a\chi ' + b\chi + c\phi = 0\). Then
for all \(x,y \in S\). By part (a) this implies that \(a(\chi ' - \chi ) = c\phi = 0\). Since \(\chi ' \ne \chi \) and \(\phi \ne 0\) we have \(a = c = 0\). Therefore \(b=0\) too since \(\chi \ne 0\).
For part (b)(ii) let \(a\chi + b\phi + c\phi ' = 0\) for some \(a,b,c \in {{\mathbb {C}}}\). Then
for all \(x,y \in S\). Thus \(a=0\). Now \(b\phi + c\phi ' = 0\), so \(b = c = 0\) by the independence of \(\{\phi ,\phi '\}\).
For (b)(iii) suppose \(a\chi + b\phi + c\psi = 0\) for some \(a,b,c \in {{\mathbb {C}}}\). Then
for all \(x,y \in S\). Since \(\{\chi ,\phi \}\) is independent by part (b)(i), this implies \(a = c = 0\). Then \(b=0\) follows. \(\square \)
5 The Main Result
Now we apply Proposition 3.3 to solve the general Levi–Civita equation for \(n=3\).
Theorem 5.1
Let S be a topological commutative monoid, and suppose \(f,g_j,h_j \in C(S)\) satisfy the Levi–Civita equation
$$\begin{aligned} f(xy) = g_1(x)h_1(y) + g_2(x)h_2(y) + g_3(x)h_3(y), \quad x,y \in S, \end{aligned}$$(8)
with \(\{g_1,g_2,g_3\}\) and \(\{h_1,h_2,h_3\}\) linearly independent. Then there exist constants \(c_k, a_{j,k}, b_{j,k}\in {{\mathbb {C}}}\) such that \(f,g_j,h_j\) belong to one of the following families, where \(\chi ,\chi ',\chi _k \in C(S)\) are exponentials, \(\phi \in C(S)\) is a nonzero solution of (4) as described in Corollary 3.2, and \(\psi \in C(S)\) is a nonzero solution of (6) as described in Theorem 4.1.
-
(I)
For distinct exponentials \(\chi _1,\chi _2,\chi _3\) we have
$$\begin{aligned} f = \sum _{k=1}^3 c_k \chi _k, \quad g_j = \sum _{k=1}^3 a_{j,k} \chi _k, \quad h_j = \sum _{k=1}^3 b_{j,k} \chi _k, \end{aligned}$$(9)with coefficients satisfying
$$\begin{aligned} \begin{pmatrix} a_{1,1} &{}\quad a_{2,1} &{}\quad a_{3,1} \\ a_{1,2} &{}\quad a_{2,2} &{}\quad a_{3,2} \\ a_{1,3} &{}\quad a_{2,3} &{}\quad a_{3,3} \end{pmatrix}\begin{pmatrix} b_{1,1} &{}\quad b_{1,2} &{}\quad b_{1,3} \\ b_{2,1} &{}\quad b_{2,2} &{}\quad b_{2,3} \\ b_{3,1} &{}\quad b_{3,2} &{}\quad b_{3,3} \end{pmatrix} = \begin{pmatrix} c_{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad c_{2} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad c_{3} \end{pmatrix}. \end{aligned}$$(10) -
(II)
For distinct exponentials \(\chi ,\chi '\) we have
$$\begin{aligned}&f = c_1 \chi ' + c_2 \chi + c_3 \phi , \quad g_j = a_{j,1} \chi ' + a_{j,2}\chi + a_{j,3}\phi ,\quad \nonumber \\&h_j = b_{j,1} \chi ' + b_{j,2}\chi + b_{j,3} \phi , \end{aligned}$$(11)with coefficients satisfying
$$\begin{aligned} \begin{pmatrix} a_{1,1} &{}\quad a_{2,1} &{}\quad a_{3,1} \\ a_{1,2} &{}\quad a_{2,2} &{}\quad a_{3,2} \\ a_{1,3} &{}\quad a_{2,3} &{}\quad a_{3,3} \end{pmatrix}\begin{pmatrix} b_{1,1} &{}\quad b_{1,2} &{}\quad b_{1,3} \\ b_{2,1} &{}\quad b_{2,2} &{}\quad b_{2,3} \\ b_{3,1} &{}\quad b_{3,2} &{}\quad b_{3,3} \end{pmatrix} = \begin{pmatrix} c_{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad c_{2} &{}\quad c_3 \\ 0 &{}\quad c_3 &{}\quad 0 \end{pmatrix}. \end{aligned}$$(12) -
(III)
$$\begin{aligned}&f = c_1 \chi + c_2 \phi + c_3 \psi , \quad g_j = a_{j,1} \chi + a_{j,2}\phi + a_{j,3} \psi , \quad \nonumber \\&h_j = b_{j,1} \chi + b_{j,2}\phi +b_{j,3} \psi \end{aligned}$$(13)
with coefficients satisfying
$$\begin{aligned} \begin{pmatrix} a_{1,1} &{}\quad a_{2,1} &{}\quad a_{3,1} \\ a_{1,2} &{}\quad a_{2,2} &{}\quad a_{3,2} \\ a_{1,3} &{}\quad a_{2,3} &{}\quad a_{3,3} \end{pmatrix}\begin{pmatrix} b_{1,1} &{}\quad b_{1,2} &{}\quad b_{1,3} \\ b_{2,1} &{}\quad b_{2,2} &{}\quad b_{2,3} \\ b_{3,1} &{}\quad b_{3,2} &{}\quad b_{3,3} \end{pmatrix} = \begin{pmatrix} c_{1} &{}\quad c_2 &{}\quad c_3 \\ c_2 &{}\quad c_3 &{}\quad 0 \\ c_3 &{}\quad 0 &{}\quad 0 \end{pmatrix}. \end{aligned}$$(14)
Conversely, each family (I), (II), (III) constitutes a solution of (8), and \(\{g_1,g_2,g_3\}\) and \(\{h_1,h_2,h_3\}\) are linearly independent provided that the coefficients are chosen so that the matrices in (10), (12), (14) have rank 3.
Proof
Let \(V = span \{g_1,g_2,g_3\}\). We consider three cases as determined by Proposition 3.3.
Case 1: Suppose \(s=3\), so \(d_1 = d_2 = d_3 = 1\). Then V has a basis of the form \(\{\chi _1,\chi _2,\chi _3\}\) for three distinct continuous exponentials. Let \(f,g_j,h_j\) have the form (9) for some \(c_k, a_{j,k}, b_{j,k}\in {{\mathbb {C}}}\). By linear independence such functions satisfy (8) if and only if the constants satisfy (10).
Case 2: Suppose \(s=2\) with \(d_1 = 1\) and \(d_2 = 2\) (the case \(d_1 = 2\), \(d_2 = 1\) is equivalent). Since V is spanned by the matrix elements of the representation \(R\circ g\) of S, we have \(V = span \{\chi ',\chi , \rho ^{2,1}\}\) for distinct exponentials \(\chi ',\chi \), where
as described in Proposition 3.3. Using the formula \(R(g(xy)) = R(g(x))R(g(y))\) we find by matrix multiplication that \(\phi := \rho ^{2,1}\) is a solution of the special sine addition law (4). From \(\dim V = 3\) it follows that \(\phi \ne 0\) and \(\{\chi ',\chi ,\phi \}\) is a basis for V.
Let \(f,g_j,h_j\) have the form (11) for some \(c_k, a_{j,k}, b_{j,k}\in {{\mathbb {C}}}\). Such functions satisfy (8) if and only if
By the linear independence of \(\{\chi ',\chi , \phi \}\) this means the constants must fulfill (12).
Case 3: Suppose \(s=1\), so \(d_1 = 3\). By the same reasoning as in Case 2 we find that \(V = span \{\chi ,\rho ^{2,1},\rho ^{3,2},\rho ^{3,1}\}\), where
The first two formulas show that \(V = span \{\chi ,\phi _1,\phi _2,\rho ^{3,1}\}\), where \(\phi _1 := \rho ^{2,1}\) and \(\phi _2 := \rho ^{3,2}\) are solutions of the special sine addition law (4). Here we split the proof into two sub-cases.
Sub-case 3(a): Suppose \(\{\phi _1,\phi _2\}\) is linearly independent. Then since \(\dim V = 3\) we get from Lemma 4.2 that \(\{\chi , \phi _1, \phi _2\}\) is a basis for V. Re-labeling \(\{\phi _1,\phi _2\}\) as \(\{\phi ,\phi '\}\), let \(f,g_j,h_j\) be given by
for some \(c_k, a_{j,k}, b_{j,k}\in {{\mathbb {C}}}\). Now (8) holds if and only if
for all \(x,y \in S\). By the linear independence of the basis functions we arrive at the constraint
Since the matrix on the right hand side has rank at most 2, there are no non-degenerate solutions in this subcase. That is, either \(\{g_1,g_2,g_3\}\) or \(\{h_1,h_2,h_3\}\) is linearly dependent.
Sub-case 3(b): Suppose \(\{\phi _1,\phi _2\}\) is linearly dependent. Since \(\dim V = 3\) we cannot have \(\phi _1 = \phi _2 = 0\), so we can denote one of these functions as \(\phi \ne 0\) and the other as \(d\phi \) for some \(d \in {{\mathbb {C}}}\). Now \(\rho ^{3,1}\) satisfies
If \(d=0\) then \(\rho ^{3,1}\) is a solution of the special sine addition law (4). In this case, defining \(\phi ' := \rho ^{3,1}\) we have \(\{\phi ,\phi '\}\) linearly independent since \(\dim V = 3\). Thus we revert to Sub-case 3(a) which is already settled. If \(d\ne 0\) then defining \(\psi := d^{-1}\rho ^{3,1}\) we see that \(\psi \) is a solution of (6), and we have \(span \{\chi ,\phi ,\psi \} = V\). Moreover \(\psi \ne 0\) since \(\dim V = 3\), thus \(\{\chi ,\phi ,\psi \}\) is a basis for V.
Let \(f,g_j,h_j\) have the form (13) for some \(c_k, a_{j,k}, b_{j,k}\in {{\mathbb {C}}}\). Then (8) holds if and only if
for all \(x,y \in S\). By the linear independence of the basis functions we arrive at constraint (14).
For the converse, the matrices in the constraints must all have full rank to ensure that \(\{g_1,g_2,g_3\}\) and \(\{h_1,h_2,h_3\}\) are linearly independent. \(\square \)
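The matrix constraints lend themselves to quick numerical sanity checks. The sketch below (ours, not part of the proof) builds a family (I) solution on \(({{\mathbb {R}}},+)\) from three distinct exponentials: the coefficient matrix \({\tilde{A}}\) with entries \({\tilde{A}}_{k,j} = a_{j,k}\) is chosen at random, \({\tilde{B}}\) is solved from (10), and (8) is then verified at random points.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = np.array([0.5, -1.0, 2.0])                 # three distinct exponentials on (R, +)
chis = lambda x: np.exp(lam * x)                 # (chi_1(x), chi_2(x), chi_3(x))

A = rng.normal(size=(3, 3))                      # A[k, j] = a_{j,k}
c = rng.normal(size=3)
B = np.linalg.solve(A, np.diag(c))               # B[j, k] = b_{j,k}, so A @ B = diag(c), i.e. (10)

f = lambda x: c @ chis(x)                        # f   = sum_k c_k chi_k
g = lambda x: A.T @ chis(x)                      # g_j = sum_k a_{j,k} chi_k
h = lambda y: B @ chis(y)                        # h_j = sum_k b_{j,k} chi_k

for x, y in rng.uniform(-1.0, 1.0, size=(5, 2)):
    assert np.isclose(f(x + y), g(x) @ h(y))     # the Levi-Civita equation (8)
```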
In the next section we apply Theorem 5.1 to two particular Levi–Civita equations.
6 Applications and Examples
A well-known particular equation of the form (8) is the cosine-sine functional equation
$$\begin{aligned} f(xy) = f(x)g(y) + g(x)f(y) + h(x)h(y), \quad x,y \in S, \end{aligned}$$(15)
for three unknown functions \(f,g,h:S \rightarrow {{\mathbb {C}}}\).
Corollary 6.1
Let S be a topological commutative monoid, and suppose \(f,g,h \in C(S)\) is a solution of (15) with \(\{f,g,h\}\) linearly independent. Then there exist constants \(c_k, \alpha _{k}, \beta _{k}\in {{\mathbb {C}}}\) such that f, g, h belong to one of the following three families, where \(\chi ,\chi ',\chi _k \in C(S)\) are exponentials, \(\phi \in C(S)\) is a nonzero solution of (4) as described in Corollary 3.2, and \(\psi \in C(S)\) is a nonzero solution of (6) as described in Theorem 4.1.
-
(I)
For distinct exponentials \(\chi _1,\chi _2,\chi _3\) we have
$$\begin{aligned} f = \sum _{k=1}^3 c_k \chi _k, \quad g = \sum _{k=1}^3 \alpha _{k} \chi _k, \quad h = \sum _{k=1}^3 \beta _{k} \chi _k, \end{aligned}$$with coefficients satisfying
$$\begin{aligned} \begin{pmatrix} c_1 &{}\quad \alpha _1 &{}\quad \beta _1 \\ c_2 &{}\quad \alpha _2 &{}\quad \beta _2 \\ c_3 &{}\quad \alpha _3 &{}\quad \beta _3 \end{pmatrix}\begin{pmatrix} \alpha _1 &{}\quad \alpha _2 &{}\quad \alpha _3 \\ c_1 &{}\quad c_2 &{}\quad c_3 \\ \beta _1 &{}\quad \beta _2 &{}\quad \beta _3 \end{pmatrix} = \begin{pmatrix} c_{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad c_{2} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad c_{3} \end{pmatrix}. \end{aligned}$$ -
(II)
For distinct exponentials \(\chi ',\chi \) we have
$$\begin{aligned} f = c_1 \chi ' + c_2 \chi + c_3 \phi , \quad g = \alpha _{1} \chi ' + \alpha _{2}\chi + \alpha _{3}\phi , \quad h = \beta _{1} \chi ' + \beta _{2}\chi + \beta _{3} \phi , \end{aligned}$$with coefficients satisfying
$$\begin{aligned} \begin{pmatrix} c_1 &{}\quad \alpha _{1} &{}\quad \beta _{1} \\ c_2 &{}\quad \alpha _{2} &{}\quad \beta _{2} \\ c_3 &{}\quad \alpha _{3} &{}\quad \beta _{3} \end{pmatrix}\begin{pmatrix} \alpha _{1} &{}\quad \alpha _{2} &{}\quad \alpha _{3} \\ c_1 &{}\quad c_2 &{}\quad c_3 \\ \beta _{1} &{}\quad \beta _{2} &{}\quad \beta _{3} \end{pmatrix} = \begin{pmatrix} c_{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad c_{2} &{}\quad c_3 \\ 0 &{}\quad c_3 &{}\quad 0 \end{pmatrix}. \end{aligned}$$ -
(III)
$$\begin{aligned} f = c_1 \chi + c_2 \phi + c_3 \psi , \quad g = \alpha _{1} \chi + \alpha _{2}\phi + \alpha _{3} \psi , \quad h = \beta _{1} \chi + \beta _{2}\phi + \beta _{3} \psi \end{aligned}$$
with coefficients satisfying
$$\begin{aligned} \begin{pmatrix} c_1 &{}\quad \alpha _1 &{}\quad \beta _1 \\ c_2 &{}\quad \alpha _2 &{}\quad \beta _2 \\ c_3 &{}\quad \alpha _3 &{}\quad \beta _3 \end{pmatrix}\begin{pmatrix} \alpha _1 &{}\quad \alpha _2 &{}\quad \alpha _3 \\ c_1 &{}\quad c_2 &{}\quad c_3 \\ \beta _1 &{}\quad \beta _2 &{}\quad \beta _3 \end{pmatrix} = \begin{pmatrix} c_{1} &{}\quad c_2 &{}\quad c_3 \\ c_2 &{}\quad c_3 &{}\quad 0 \\ c_3 &{}\quad 0 &{}\quad 0 \end{pmatrix}. \end{aligned}$$
The converse is also true provided that the constants are chosen so that the matrices in each constraint have rank 3.
Proof
We apply Theorem 5.1 under the conditions \(g_1 = h_2 = f\), \(h_1 = g_2 =: g\), and \(g_3 = h_3 =: h\). In terms of the coefficients that means \(a_{1,k} = c_k = b_{2,k}\), \(b_{1,k} = a_{2,k} =: \alpha _k\), and \(a_{3,k} = b_{3,k} =: \beta _k\) for all \(k \in \{1,2,3\}\). Enforcing these identifications, the result follows immediately. \(\square \)
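A classical instance of family (III) on \(({{\mathbb {R}}},+)\) (our illustration): with \(\chi (x) = e^{x}\), \(\phi (x) = xe^{x}\), and \(\psi (x) = \tfrac{1}{2}x^2e^{x}\), the triple \(f = \psi \), \(g = \chi \), \(h = \phi \) is a linearly independent solution of (15), since
$$\begin{aligned} \tfrac{1}{2}(x+y)^2 e^{x+y} = \tfrac{1}{2}x^2e^{x}\, e^{y} + e^{x}\, \tfrac{1}{2}y^2e^{y} + xe^{x}\, ye^{y}. \end{aligned}$$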
Another application of Theorem 5.1 is the following. Let \(1_S\) denote the constant exponential function \(\chi = 1\) on S.
Corollary 6.2
Let S be a topological commutative monoid, and suppose \(f,g,h,j,\ell \in C(S)\) satisfy
$$\begin{aligned} f(xy) = g(x) + h(y) + j(x)\ell (y), \quad x,y \in S, \end{aligned}$$(16)
with \(\{g,1_S,j\}\) and \(\{1_S,h,\ell \}\) linearly independent. The solutions belong to the following families, where \(\chi \in C(S)\) is an exponential and \(A, B \in C(S)\) are additive.
-
(a)
For \(\chi \ne 1_S\) and \(A\ne 0\) we have
$$\begin{aligned} f&= d_1e_1 \chi + (b + c_2 + d_2e_2) + c_3A , \quad g = - d_1e_2 \chi + b + c_{3}A, \\ h&= - d_2e_1 \chi + c_{2} + c_{3}A, \quad j = d_1 \chi + d_2, \quad \ell = e_{1} \chi + e_{2}, \end{aligned}$$for \(b,c_j,d_j,e_j \in {{\mathbb {C}}}\) with \(c_3d_1e_1 \ne 0\).
-
(b)
For \(A\ne 0\) we have
$$\begin{aligned} f&= (b_1 + c_1 + d_1e_1) + (b_2 + d_2e_1) A + d_2e_2 \left( \frac{1}{2} A^2 + B\right) , \quad \\ g&= b_{1} + b_{2}A + d_2e_2\left( \frac{1}{2} A^2 + B\right) ,\\ h&= c_{1} + c_{2}A + d_2e_2\left( \frac{1}{2} A^2 + B\right) , \quad j = d_1 + d_2 A, \quad \ell = e_1 + e_2 A, \end{aligned}$$for \(b_j,c_j,d_j,e_j \in {{\mathbb {C}}}\) satisfying \(c_2 + d_1e_2 = b_2 + d_2e_1\) and \(d_2e_2 \ne 0\).
Conversely, in each family above \(f,g,h,j,\ell \) is a solution of (16) with \(\{g,1_S,j\}\) and \(\{1_S,h,\ell \}\) linearly independent.
Proof
Since (16) is the special case \(g_1 = g\), \(g_2 = h_1 = 1_S\), \(g_3 = j\), \(h_2 = h\), and \(h_3 = \ell \) of (8), we obtain the solution forms by specializing Theorem 5.1 to this case. Note that the exponential function \(1_S\) belongs to \(V = span\{g,1_S,j\}\), so without loss of generality we may assign it to any basis for V.
In case (I) the basis consists of three distinct exponentials, one of which is \(1_S\) (which we designate as \(\chi _3\)). Writing \(f,g,h,j,\ell \) as
condition (9) becomes
Simple calculations show that either \(a_1 = 0\) or \(a_2 = 0\), so this case yields no nondegenerate solutions (i.e. no solutions with \(\{g,1_S,j\}\) and \(\{1_S,h,\ell \}\) linearly independent).
In case (II) we have either \(\chi = 1_S\) or \(\chi ' = 1_S\). Suppose \(\chi ' \ne 1_S = \chi \). For \(\chi = 1_S\) Eq. (4) shows that \(\phi \) is additive. Writing \(\phi = A \in C(S)\) we have
with additive \(A \ne 0\), and we get from condition (12) that
Re-naming \(\chi '\) as a new \(\chi \) this leads to solutions of the form (a) above. The condition \(c_3d_1e_1 \ne 0\) is dictated by the linear independence assumptions. If on the other hand \(\chi ' = 1_S \ne \chi \), then a similar calculation shows that there are no nondegenerate solutions.
Finally, in case (III) we have \(\chi = 1_S\), so \(\phi = A\) is additive and nonzero. Now (6) takes the form
$$\begin{aligned} \psi (xy) = \psi (x) + \psi (y) + A(x)A(y), \end{aligned}$$
and by Theorem 4.1 we have \(\psi = B + \frac{1}{2}A^2\) for some additive \(B\in C(S)\). Writing
for additive \(A,B \in C(S)\) with \(A \ne 0\), where the constants satisfy
This leads to solutions of the form (b), with the condition \(d_2e_2 \ne 0\) following from the linear independence assumptions.
The converse is easy to check. \(\square \)
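As a numerical sanity check (ours) of family (b), take \(S = ({{\mathbb {R}}},+)\) with \(A(x) = x\) and \(B(x) = 0\), choose the free constants at random, enforce \(c_2 + d_1e_2 = b_2 + d_2e_1\), and verify (16) at random points:

```python
import numpy as np

rng = np.random.default_rng(2)
b1, c1, d1, e1 = rng.normal(size=4)
b2, d2, e2 = 1.0, 2.0, 3.0                       # d2 * e2 != 0
c2 = b2 + d2 * e1 - d1 * e2                      # the constraint c2 + d1*e2 = b2 + d2*e1

A = lambda x: x                                  # additive A
Q = lambda x: 0.5 * A(x) ** 2                    # (1/2) A^2 + B with B = 0

f = lambda x: (b1 + c1 + d1 * e1) + (b2 + d2 * e1) * A(x) + d2 * e2 * Q(x)
g = lambda x: b1 + b2 * A(x) + d2 * e2 * Q(x)
h = lambda x: c1 + c2 * A(x) + d2 * e2 * Q(x)
j = lambda x: d1 + d2 * A(x)
ell = lambda x: e1 + e2 * A(x)

for x, y in rng.uniform(-2.0, 2.0, size=(5, 2)):
    assert np.isclose(f(x + y), g(x) + h(y) + j(x) * ell(y))   # equation (16)
```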
We close with contrasting examples illustrating the results of Theorem 5.1 on two different monoids, one with a very simple prime ideal structure and one with a very rich prime ideal structure.
In the first example there are very few prime ideals. Let \({\mathfrak {R}}(\alpha )\) denote the real part of \(\alpha \in {{\mathbb {C}}}\). Recall that the continuous exponentials on \(([0,1],\cdot )\) under the usual topology are \(m_0 = 1\) and \(m_{\alpha }\) defined by
$$\begin{aligned} m_{\alpha }(x) = {\left\{ \begin{array}{ll} x^{\alpha } &{} \text { for } x\in (0,1] \\ 0 &{} \text { for } x = 0 \end{array}\right. } \end{aligned}$$
for any \(\alpha \in {{\mathbb {C}}}\) with \({\mathfrak {R}}(\alpha ) > 0\). Note also that the only additive function on \(([0,1],\cdot )\) is \(A = 0\), since \(A(0) = A(0\cdot x) = A(0)+ A(x)\) for all \(x \in [0,1]\). The continuous additive functions on \(((0,1],\cdot )\) have the form \(A(x) = c\log x\) for some \(c \in {{\mathbb {C}}}\).
Example 6.3
Let \(S = ([0,1]\times [0,1],\cdot )\) with the product topology. The prime ideals of S are \(I_1 = \{0\} \times [0,1]\), \(I_2 = [0,1]\times \{0\}\), and \(I_1 \cup I_2\). The continuous exponentials on S have the form \(\chi (x,y) = m_1(x)m_2(y)\), denoted \(\chi = m_1 \otimes m_2\), where each \(m_j:[0,1] \rightarrow {{\mathbb {C}}}\) is either \(m_0\) or \(m_\alpha \) for some \({\mathfrak {R}}(\alpha ) > 0\).
Exponentials \(\chi \in C(S)\) and additive functions \(A\in C(S\setminus I_\chi )\) corresponding to each possible nullspace are the following, where \(c_1,c_2,\alpha ,\beta \in {{\mathbb {C}}}\) with \({\mathfrak {R}}(\alpha ),{\mathfrak {R}}(\beta ) > 0\).
-
(a)
For \(I_\chi = \emptyset \), \(\chi = m_0\otimes m_0 = 1\), and \(A=0\).
-
(b)
For \(I_\chi = I_1\), \(\chi = m_{\alpha }\otimes m_0\), and \(A(x,y) = c_1\log x\) for all \(x \in (0,1]\) and \(y \in [0,1]\).
-
(c)
For \(I_\chi = I_2\), \(\chi = m_0\otimes m_{\alpha }\), and \(A(x,y) = c_2\log y\) for all \(x \in [0,1]\) and \(y \in (0,1]\).
-
(d)
For \(I_\chi = I_1 \cup I_2\), \(\chi = m_{\alpha }\otimes m_{\beta }\), and \(A(x,y) = c_1\log x + c_2\log y\) for all \(x,y \in (0,1]\).
Note that in each case (a)–(d) we have \(I_\chi ^2 = I_\chi \), so \(P_\chi = K_{\chi } = L_{\chi } = \emptyset \) and \(J_\chi = I_\chi \). Thus the forms of \(\phi ,\psi \in C(S)\) given by Corollary 3.2 and Theorem 4.1 simplify to
$$\begin{aligned} \phi (x) = {\left\{ \begin{array}{ll} A(x)\chi (x) &{} \text { for } x\in S\setminus I_\chi \\ 0 &{} \text { for } x \in I_\chi \end{array}\right. } \end{aligned}$$
and
$$\begin{aligned} \psi (x) = {\left\{ \begin{array}{ll} \left[ B(x) + \frac{1}{2}A(x)^2\right] \chi (x) &{} \text { for } x\in S\setminus I_\chi \\ 0 &{} \text { for } x \in I_\chi \end{array}\right. } \end{aligned}$$
for additive \(A,B \in C(S\setminus I_\chi )\).
The solutions of (8) (under the accompanying linear independence assumptions) are obtained by using the appropriate forms above in the formulas of Theorem 5.1.
For the second example we choose the commutative monoid \(S = ({{\mathbb {N}}},\cdot )\), which has infinitely many prime ideals. Let P denote the set of prime numbers, and for each \(p\in P\) define \(C_p: S \rightarrow {{\mathbb {N}}}\cup \{0\}\) by letting \(C_p(x)\) be the multiplicity of p in the prime factorization of x, that is
$$\begin{aligned} C_p(x) := \max \{n \in {{\mathbb {N}}}\cup \{0\} \mid p^n \text { divides } x\}, \quad x \in S. \end{aligned}$$
Then \(C_p\) is additive for each \(p\in P\), and each \(x\in S\) has prime factorization \(x = \prod _{p\in P}p^{C_p(x)}\).
Example 6.4
Let \(S = ({{\mathbb {N}}},\cdot )\) under the discrete topology. For any exponential \(\chi :S \rightarrow {{\mathbb {C}}}\) we have \(\chi (x) = \chi (\prod _{p\in P}p^{C_p(x)}) = \prod _{p\in P}\chi (p)^{C_p(x)}\), so \(\chi \) is completely determined by its values on P. The prime ideals of S are of the form
$$\begin{aligned} I_Q := \{pz \mid p \in Q,\, z \in S\}, \end{aligned}$$
where Q is any nonempty subset of P.
An additive function \(A:S\setminus I_{\chi }\rightarrow {{\mathbb {C}}}\) has the form \(A(x) = \sum _{p \in P\setminus I_\chi } C_p(x)A(p)\), where the values of A on \(P\setminus I_\chi \) may be chosen arbitrarily.
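The following sketch (ours) computes the functions \(C_p\), checks their additivity, and builds an exponential with null ideal \(I_Q\) together with an additive function on \(S\setminus I_Q\); the specific choices of Q and of the values on the primes are arbitrary.

```python
def C(p, x):
    """Multiplicity of the prime p in the positive integer x."""
    n = 0
    while x % p == 0:
        x, n = x // p, n + 1
    return n

# C_p is additive on (N, *): C_p(x*y) = C_p(x) + C_p(y).
assert all(C(p, 12 * 35) == C(p, 12) + C(p, 35) for p in (2, 3, 5, 7))

Q = {2, 7}                                            # null ideal I_Q: multiples of 2 or 7
chi = lambda x: 0 if any(C(p, x) > 0 for p in Q) else 1
A_values = {3: 1.0, 5: -2.0}                          # arbitrary values of A on primes outside Q
A = lambda x: sum(C(p, x) * v for p, v in A_values.items())

# chi is multiplicative, and A is additive on S \ I_Q (its formula extends to all of S).
assert all(chi(x * y) == chi(x) * chi(y) for x in range(1, 40) for y in range(1, 40))
assert A(9 * 25) == A(9) + A(25)
```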
Next we determine the forms of the subsets \(P_\chi , J_{\chi }, K_{\chi }, L_{\chi } \subset I_\chi \) for a given exponential \(\chi \). The definitions of these sets simplify here because S is commutative. Suppose \(\chi :S \rightarrow {{\mathbb {C}}}\) is an exponential with \(I_\chi = I_Q\) for some \(\emptyset \ne Q \subset P\). By definition \(x \in I_Q\) if and only if there exist \(p \in Q\) and \(z \in S\) such that \(x = pz\). Similarly \(x \in I_Q^n\) for some positive integer n if and only if there exist \(p_1,\ldots , p_n \in Q\) and \(z \in S\) such that \(x = p_1 \cdots p_nz\). It is not difficult to verify that
Thus we have
and \( L_{\chi } = \emptyset \).
By Corollary 3.2 the form of nonzero \(\phi :S \rightarrow {{\mathbb {C}}}\) satisfying (4) is given by (3), where \(A: S\setminus I_{\chi } \rightarrow {{\mathbb {C}}}\) is additive and \(\phi _P:P_\chi \rightarrow {{\mathbb {C}}}\) is any function such that conditions (i) and (ii) of Proposition 3.1(b) hold. It is not difficult to work out that
where \(\phi _Q:Q \rightarrow {{\mathbb {C}}}\) is an arbitrary function.
By Theorem 4.1 the form of \(\psi :S \rightarrow {{\mathbb {C}}}\) satisfying (6) with a given nonzero solution \(\phi \) of (4) is (7), where \(B: S\setminus I_{\chi } \rightarrow {{\mathbb {C}}}\) is additive and \(\psi _P:P_\chi \rightarrow {{\mathbb {C}}}\), \(\psi _K :K_\chi \rightarrow {{\mathbb {C}}}\) are functions such that conditions (a)–(e) hold. After a few straightforward calculations we arrive at
where \(\psi _Q:Q \rightarrow {{\mathbb {C}}}\) is an arbitrary function. Here we have applied condition (d) for \(p_1,p_2 \in Q\) to get \(\psi _K(p_1p_2) = \phi _Q(p_1)\phi _Q(p_2)\).
The solutions of (8) (under the accompanying linear independence assumptions) are obtained by substituting the appropriate forms above into the formulas of Theorem 5.1.
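Putting the pieces together, here is a sketch (ours) that takes \(Q = \{2\}\), sets the arbitrary restriction values on \(P_\chi \) and \(K_\chi \) to zero, and verifies (4) and (6) numerically for the resulting \(\phi \) and \(\psi \):

```python
from itertools import product

def C(p, x):
    n = 0
    while x % p == 0:
        x, n = x // p, n + 1
    return n

chi = lambda x: 0 if x % 2 == 0 else 1                 # exponential whose null ideal is the even numbers
A   = lambda x: C(3, x)                                # additive, used for phi
B   = lambda x: C(5, x)                                # additive, used for psi
phi = lambda x: A(x) * chi(x)                          # form (3) with the values on P_chi set to 0
psi = lambda x: (B(x) + A(x) ** 2 / 2) * chi(x)        # form (7) with psi_P = psi_K = 0

for x, y in product(range(1, 60), repeat=2):
    assert phi(x * y) == phi(x) * chi(y) + phi(y) * chi(x)                     # (4)
    assert psi(x * y) == psi(x) * chi(y) + chi(x) * psi(y) + phi(x) * phi(y)   # (6)
```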
References
Ebanks, B.: Around the sine addition law and d'Alembert's equation on semigroups. Results Math. 77, 11 (2022). https://doi.org/10.1007/s00025-021-01548-6
Ebanks, B.: A Levi-Civita equation on monoids, two ways. Annales Math. Silesianae (2022). https://doi.org/10.2478/amsil-2022-0009
Ebanks, B., Ng, C.T.: Levi-Civita functional equations and the status of spectral synthesis on semigroups. Semigroup Forum 103, 469–494 (2021)
Ebanks, B., Ng, C.T.: Levi-Civita functional equations and the status of spectral synthesis on semigroups - II. Semigroup Forum (2022). https://doi.org/10.1007/s00233-021-10248-0
McKiernan, M.A.: The matrix equation \(a(x\circ y) = a(x) + a(x)a(y) + a(y)\). Aequationes Math. 15, 213–223 (1977)
McKiernan, M.A.: Equations of the form \(H(x\circ y) = \Sigma _i f_i(x)g_i(y)\). Aequationes Math. 16, 51–58 (1977)
Stetkær, H.: Functional equations on groups. World Scientific, Singapore (2013)
Székelyhidi, L.: Convolution type functional equations on topological abelian groups. World Scientific, Singapore (1991)
Funding
The author declares that no funds, grants, or other support were received during the preparation of this manuscript.
Ethics declarations
Conflict of interest
The author has no relevant financial or non-financial interests to disclose.
Cite this article
Ebanks, B. Some Levi-Civita Functional Equations on Semigroups. Results Math 77, 154 (2022). https://doi.org/10.1007/s00025-022-01705-5
Keywords
- Levi–Civita functional equation
- sine addition formula
- semigroup
- regular representation
- prime ideal
- monoid