1 Introduction

Let S be a semigroup, \({{\mathbb {N}}}\) the set of positive integers, and \({{\mathbb {C}}}\) the set of complex numbers. A functional equation of the form

$$\begin{aligned} f(xy) = \sum _{i=1}^n g_i(x) h_i(y), \quad \text { for all } x,y \in S, \end{aligned}$$
(1)

for \(n\in {{\mathbb {N}}}\) and unknown functions \(f,g_1,\ldots ,g_n,h_1,\ldots ,h_n:S \rightarrow {{\mathbb {C}}}\) is called a Levi–Civita equation. There are two general methods used to describe the solutions of (1). One is based on representation theory (see [3,4,5,6] and [7, Chapter 5]), and the other, for the case that S is an Abelian group, is based on spectral synthesis (see [8, Chapter 10]). These methods describe the general structure of the solution but do not supply explicit formulas for the solution functions. For that one may need to use special properties of S and/or make additional computations. The existence of prime ideals in semigroups complicates matters and rules out the methods found in [5, 6, 8].

A general result giving explicit solution formulas for (1) in the case \(n=2\) on commutative monoids was proved in [2]. Here we do the same for the case \(n=3\), where the complications caused by prime ideals increase significantly over the case \(n=2\).

The outline of the paper is as follows. Section 2 introduces some notation and terminology. In Sect. 3 we quote a result about an important specific instance of (1) for \(n=2\), namely the sine addition formula. The same section contains a fundamental result using representation theory to describe the general structure of solutions of (1). Section 4 contains the solution of a key particular instance of (1) for \(n=3\). Our main result, Theorem 5.1, follows in Sect. 5. The final section contains applications to specific functional equations and examples on a couple of commutative monoids with very different prime ideal structures.

Generally our results are presented in their topological versions, but one may choose the discrete topology.

2 Notation and Terminology

Throughout this paper S denotes a semigroup. A monoid is a semigroup with an identity element generally denoted e.

An additive function on S is a homomorphism from S into \(({{\mathbb {C}}}, +)\).

A polynomial on S is a function of the form \(P(A_1,\ldots ,A_n)\), where \(P \in {{\mathbb {C}}}[x_1,\ldots , x_n]\) and \(A_1,\ldots ,A_n\) are additive functions.

A multiplicative function on S is a homomorphism from S into \(({{\mathbb {C}}}, \cdot )\). If \(\chi : S \rightarrow {{\mathbb {C}}}\) is multiplicative and \(\chi \ne 0\) then we call \(\chi \) an exponential on S. When we say that a function F is nonzero we mean that \(F \ne 0\).

Define the nullspace of a multiplicative \(\chi \) by

$$\begin{aligned} I_{\chi }:= \{ x \in S \mid \chi (x) = 0\}. \end{aligned}$$

If \(I_{\chi } \ne \emptyset \) then it is a (two-sided) ideal of S and is called the null ideal of \(\chi \). An ideal \(I\subset S\) is said to be a prime ideal if \(I \ne S\) and whenever \(xy\in I\) it follows that either \(x\in I\) or \(y\in I\), so \(S\setminus I\) is a nonempty subsemigroup of S. There is a very close relationship between prime ideals and exponentials on semigroups. For any exponential \(\chi \) it is easy to see that if \(I_{\chi } \ne \emptyset \) then \(I_{\chi }\) is a prime ideal. Conversely, if I is any prime ideal of S and we define \(\chi (x) := 0\) for \(x\in I\) and \(\chi (x) := 1\) for \(x\in S\setminus I\), then \(\chi :S\rightarrow {{\mathbb {C}}}\) is an exponential with null ideal \(I_{\chi } = I\).
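As a concrete illustration of this correspondence, consider the additive monoid \(S = ({{\mathbb {N}}}_0, +)\): the set \(I = \{1,2,3,\ldots \}\) is a prime ideal, since \(x + y \ge 1\) forces \(x \ge 1\) or \(y \ge 1\), and the indicator function of its complement is an exponential with null ideal \(I\). The following Python sketch (purely illustrative; the monoid is chosen for simplicity) verifies the multiplicativity numerically.

```python
def chi(x: int) -> float:
    """Indicator exponential attached to the prime ideal I = {1,2,...} of (N_0, +)."""
    return 1.0 if x == 0 else 0.0

# chi is multiplicative on (N_0, +): chi(x + y) == chi(x) * chi(y) for all x, y.
check = all(chi(x + y) == chi(x) * chi(y) for x in range(20) for y in range(20))
```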

For any subset \(T \subseteq S\) we define \(T^2 := \{t_1t_2 \mid t_1,t_2 \in T\}\). Several subsets of the nullspace \(I_\chi \) of a multiplicative \(\chi \) are significant for the description of solutions of (1). One of them is

$$\begin{aligned} P_{\chi } :=&\{x \in I_\chi \setminus I_\chi ^2 \mid ux, xv, uxv \in I_{\chi }\setminus I_{\chi }^2 \text { for all } u,v\in S\setminus I_{\chi } \}. \end{aligned}$$
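For instance, on the additive monoid \(({{\mathbb {N}}}_0,+)\) with \(\chi \) the indicator function of \(\{0\}\), one has \(I_\chi = \{1,2,\ldots \}\), \(I_\chi ^2 = \{2,3,\ldots \}\), and \(P_\chi = \{1\}\). The brute-force Python sketch below recovers these sets; the truncation bound \(N\) is an artifact of the computation, not of the theory.

```python
# Brute-force computation of I_chi, I_chi^2 and P_chi for the indicator
# exponential of {0} on a truncated copy of (N_0, +).
N = 50
S = range(N)
I = {x for x in S if x >= 1}                       # null ideal I_chi
I2 = {x + y for x in I for y in I if x + y < N}    # I_chi^2 (truncated)
units = [u for u in S if u not in I]               # S \ I_chi = {0}

P = {x for x in I - I2
     if all(u + x in I - I2 and x + v in I - I2 and u + x + v in I - I2
            for u in units for v in units)}
```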

For a topological space X, let C(X) denote the algebra of continuous functions mapping X into \({{\mathbb {C}}}\). Let \({{\mathbb {C}}}^* = {{\mathbb {C}}}\setminus \{0\}\).

3 Supporting Theory

Since (1) for \(n=1\) is \(f(xy) = g_1(x)h_1(y)\), it is not surprising that multiplicative functions are fundamental to the study of (1). A second particular Levi–Civita equation that is important for the general study of (1) is the sine addition formula

$$\begin{aligned} \phi (xy) = \phi (x)\gamma (y) + \gamma (x)\phi (y), \quad x,y \in S, \end{aligned}$$
(2)

for unknown functions \(\phi ,\gamma :S \rightarrow {{\mathbb {C}}}\). The following is [1, Theorem 3.1].

Proposition 3.1

Let S be a topological semigroup, and suppose \(\phi ,\gamma :S \rightarrow {{\mathbb {C}}}\) satisfy the sine addition law (2) with \(\phi \ne 0\) and \(\phi \in C(S)\). Then \(\phi ,\gamma \) belong to one of the following families, where \(\chi _1,\chi _2\in C(S)\) are multiplicative.

  1. (a)

    For \(\chi _1 \ne \chi _2\) there exists \(b\in {{\mathbb {C}}}^*\) such that \(\phi = b(\chi _1 - \chi _2)\) and \(\gamma = (\chi _1 + \chi _2)/2\).

  2. (b)

    For \(\chi _1 = \chi _2 =: \chi \ne 0\), we have \(\gamma = \chi \) and

    $$\begin{aligned} \phi (x) = {\left\{ \begin{array}{ll} A(x)\chi (x) &{} \text { for } x\in S\setminus I_\chi \\ \phi _P(x) &{} \text { for } x \in P_\chi \\ 0 &{} \text { for } x \in I_\chi \setminus P_\chi \end{array}\right. } \end{aligned}$$
    (3)

    where \(A\in C(S\setminus I_{\chi })\) is additive and \(\phi _P\in C(P_\chi )\) is the restriction of \(\phi \) to \(P_\chi \). In addition we have the following conditions.

    1. (i)

      \(\phi (xu) = \phi (ux) = 0\) for all \(x \in I_\chi \setminus P_\chi \) and \(u\in S\setminus I_\chi \).

    2. (ii)

      If \(x \in \{up, pv, upv\}\) for \(p \in P_\chi \) and \(u,v\in S\setminus I_\chi \), then \(x \in P_\chi \) and we have respectively \(\phi _P(x) = \phi _P(p)\chi (u)\), \(\phi _P(x) = \phi _P(p)\chi (v)\), \(\phi _P(x) = \phi _P(p)\chi (uv)\).

  3. (c)

    For \(\chi _1 = \chi _2 = 0\), we have \(\gamma = 0\), \(S \ne S^2\), and

    $$\begin{aligned} \phi (x) = {\left\{ \begin{array}{ll} \phi _0(x) &{} \text { for } x\in S\setminus S^2 \\ 0 &{} \text { for } x \in S^2 \end{array}\right. } \end{aligned}$$

    where \(\phi _0\in C(S\setminus S^2)\) is an arbitrary nonzero function.

Conversely, if the pair \((\phi ,\gamma )\) is given by the formulas in (a), (c), or (b) with conditions (i) and (ii) holding, then \((\phi ,\gamma )\) satisfies (2).

Note that \(\phi _P\) may take arbitrary values at some points of its domain (see Example 6.4).
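To make families (a) and (b) concrete, take \(S = ({{\mathbb {N}}}_0,+)\), on which exponentials have the form \(x \mapsto r^x\) and additive functions the form \(x \mapsto ax\). The Python sketch below, with arbitrarily chosen constants, checks numerically that the formulas of (a), and of (b) on \(S\setminus I_\chi \), satisfy (2).

```python
import math

# Families (a) and (b) of Proposition 3.1 on S = (N_0, +); all constants
# below are arbitrary illustrative choices.
chi1 = lambda x: 2.0 ** x
chi2 = lambda x: 3.0 ** x
b = 1.5

# Family (a): phi = b(chi1 - chi2), gamma = (chi1 + chi2)/2.
phi_a = lambda x: b * (chi1(x) - chi2(x))
gamma_a = lambda x: (chi1(x) + chi2(x)) / 2
ok_a = all(
    math.isclose(phi_a(x + y), phi_a(x) * gamma_a(y) + gamma_a(x) * phi_a(y),
                 abs_tol=1e-9)
    for x in range(6) for y in range(6)
)

# Family (b) off the null ideal: gamma = chi and phi = A * chi, A additive.
chi = lambda x: 2.0 ** x
A = lambda x: 0.7 * x
phi_b = lambda x: A(x) * chi(x)
ok_b = all(
    math.isclose(phi_b(x + y), phi_b(x) * chi(y) + chi(x) * phi_b(y),
                 abs_tol=1e-9)
    for x in range(6) for y in range(6)
)
```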

The following is an immediate consequence of Proposition 3.1.

Corollary 3.2

Let S be a topological semigroup, \(\chi \in C(S)\) an exponential, and \(\phi \in C(S)\) with \(\phi \ne 0\). If \(\phi \) satisfies the special sine addition law

$$\begin{aligned} \phi (xy) = \phi (x)\chi (y) + \chi (x)\phi (y), \quad x,y \in S, \end{aligned}$$
(4)

then \(\phi \) has the form (3) where \(A\in C(S\setminus I_{\chi })\) is additive and \(\phi _P \in C(P_\chi )\). In addition we have conditions (i) and (ii) of Proposition  3.1(b).

Conversely, if \(\phi \) has the form (3) where A is additive and \(\phi ,\phi _P\) satisfy conditions (i) and (ii) of Proposition  3.1(b), then \(\phi \) is a solution of (4).

Our fundamental result for describing the general structure of solutions of the Levi–Civita Eq. (1) on commutative monoids is the following, most of which is the result of combining [4, Lemma 2.4 and Theorem 2.5]. For any \(n\in {{\mathbb {N}}}\) consider \({{\mathbb {C}}}^n\) as a vector space of column vectors, and let \({\mathcal {M}}_n({{\mathbb {C}}})\) denote the algebra of \(n\times n\) matrices over \({{\mathbb {C}}}\). To avoid confusion with nullspaces labeled as \(I_{r}\), we denote the \(n\times n\) identity matrix by \(E_n\). A pure polynomial is a polynomial with constant term 0.

Proposition 3.3

Let \(n \in {{\mathbb {N}}}\), let S be a topological commutative monoid, and suppose \(f,g_j,h_j \in C(S)\) satisfy (1), with \(\{g_1,\ldots ,g_n\}\) and \(\{h_1,\ldots ,h_n\}\) linearly independent. Let \(V = span \{g_1,\ldots ,g_n\}\) and \(g = [g_1,\ldots ,g_n]^t\).

There exists an associative and commutative algebra \(({{\mathbb {C}}}^n, +, *)\) with identity element g(e) and regular representation \(R: {{\mathbb {C}}}^n \rightarrow {\mathcal {M}}_n({{\mathbb {C}}})\) such that

$$\begin{aligned} R(g(xy)) = R(g(x)) R(g(y)), \quad x,y \in S, \end{aligned}$$

with \(R(g(e)) = E_n\) and \(g(x) = R(g(x))g(e)\) for all \(x\in S\).

There exists a similarity matrix \(D \in {\mathcal {M}}_n({{\mathbb {C}}})\) simultaneously transforming the family \(\{R(g(x))\mid x \in S\}\) of commuting matrices into block diagonal form

$$\begin{aligned} D^{-1}R(g(x))D = diag \{M_1(x),\ldots , M_s(x)\}, \quad M_{r}(x) \in {\mathcal {M}}_{d_r}({{\mathbb {C}}}),\quad d_1 + \cdots + d_s = n, \end{aligned}$$

where each \(M_r\) is lower triangular of the form

$$\begin{aligned} M_r(x) = \chi _r(x)E_{d_r} + \left( \rho _r^{i,j}(x) \right) ,\quad \rho _r^{i,j}(x) = 0 \text { for } i \le j \in \{1, \ldots , d_r\}, \end{aligned}$$

for all \(x\in S\). The vector space V is spanned by the matrix elements of the representation \(R\circ g\) of S, that is

$$\begin{aligned} V = span \{R(g(\cdot ))_{i,j}\mid i,j = 1,\ldots , n\}, \end{aligned}$$
(5)

so each \(\chi _r\) and \(\rho _r^{i,j}\) belongs to V.

Each \(\chi _r\) is an exponential with nullspace denoted \(I_r\), and \(\chi _1,\ldots ,\chi _s\) are distinct. For each \(r\in \{1,\ldots ,s\}\) and \(j \in \{1,\ldots , d_r - 1\}\) there exist linearly independent additive \(A_{r,j} \in C(S\setminus I_{r})\), pure polynomials \(P_{r,j}(A_{r,1},\ldots , A_{r,d_r-1})\) of degree at most \(d_r - 1\), and functions \(q_{r,j}\in V\) such that

$$\begin{aligned} q_{r,j}(x) = P_{r,j}(A_{r,1}(x),\ldots , A_{r,d_r-1}(x))\chi _r(x), \quad \text { for all } x \in S\setminus I_{r}, \end{aligned}$$

and V has a basis \(B_1\cup \cdots \cup B_s\) where \(B_r = \{\chi _r, q_{r,1}, \ldots , q_{r,d_r - 1} \}\). (Note that the form of \(q_{r,j}\) on \(I_r\) is unspecified.)

Moreover \(f,h_1,\ldots ,h_n \in V\).

Proof

Most of the statement follows from [4, Lemma 2.4 and Theorem 2.5]. The items needing justification are the following.

The first item is (5), which follows from the definition of R and \(g = (R\circ g) g(e)\) as noted near the end of the proof of [3, Theorem 1]. From this it is clear that each \(\chi _r\) and \(\rho _r^{i,j}\) belongs to V.

The second item is the linear independence (unstated in [4]) of the additive functions \(A_{r,j}\). We may assume without loss of generality that \(\{A_{r,1},\ldots , A_{r,d_r-1}\}\) is linearly independent, since if not we could compensate by adjusting the coefficients of the \(P_{r,j}\). \(\square \)
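The block structure in Proposition 3.3 can be illustrated for \(d_r = 2\) on \(S = ({{\mathbb {R}}},+)\): taking \(\chi (x) = e^x\) and \(\rho (x) = xe^x\) (a solution of the sine addition law), the matrices \(M(x) = \chi (x)E_2 + \rho (x)N\), with \(N\) the nilpotent lower-triangular unit, form a representation. The following sketch, using these illustrative choices only, verifies the homomorphism property numerically.

```python
import numpy as np

# Block form of Proposition 3.3 with d_r = 2 on S = (R, +): chi(x) = e^x
# is an exponential and rho(x) = x e^x solves the sine addition law.
def M(x: float) -> np.ndarray:
    chi = np.exp(x)
    rho = x * np.exp(x)          # rho = A * chi with A(x) = x additive
    return np.array([[chi, 0.0],
                     [rho, chi]])

# M is a representation of (R, +): M(x + y) = M(x) @ M(y), M(0) = E_2.
ok = all(np.allclose(M(x + y), M(x) @ M(y))
         for x in (0.0, 0.3, 1.1) for y in (0.0, -0.5, 2.0))
```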

4 Solution of a Particular Instance of (1) for \(n=3\)

Another particular Levi–Civita equation that plays a critical role in our discussion is

$$\begin{aligned} \psi (xy) = \psi (x)\chi (y) + \chi (x)\psi (y) + \phi (x)\phi (y), \quad x,y \in S, \end{aligned}$$
(6)

for unknown \(\psi :S \rightarrow {{\mathbb {C}}}\), where \(\chi :S \rightarrow {{\mathbb {C}}}\) is an exponential and \(\phi :S\rightarrow {{\mathbb {C}}}\) is a nonzero solution of the special sine addition law (4). Observe that \(\psi (xy) = \psi (yx)\) for all \(x,y \in S\) by the symmetry of the right hand side of (6).

The form of a continuous \(\phi \) satisfying (4) on a topological semigroup is described in Corollary 3.2. Now we find the form of continuous \(\psi \) satisfying (6). In order to describe such functions we subdivide \(I_\chi \setminus P_\chi \) as follows. First define

$$\begin{aligned} J_{\chi } :=&I_\chi (I_\chi \setminus P_\chi ) \cup (I_\chi \setminus P_\chi )I_\chi . \end{aligned}$$

It is easy to see that \(J_\chi \subseteq I_\chi \setminus P_\chi \), since \(J_\chi \subseteq I_\chi ^2\) while \(P_\chi \subseteq I_\chi \setminus I_\chi ^2\). Next define

$$\begin{aligned} K_{\chi } :=&\{x \in I_\chi \setminus P_\chi \mid ux, xv, uxv \in I_{\chi }\setminus J_{\chi } \text { for all } u,v\in S\setminus I_{\chi } \}. \end{aligned}$$

It is not difficult to verify that \(J_\chi \) and \(K_\chi \) are disjoint. For example, suppose \(x\in I_\chi (I_\chi \setminus P_\chi )\), say \(x = st\) with \(s\in I_\chi \), \(t \in I_\chi \setminus P_\chi \). Then for all \(u \in S\setminus I_\chi \) we have \(ux = (us)t \in J_\chi \), so \(x \notin K_\chi \). It only remains to note that \(S\setminus I_\chi \) is nonempty since \(\chi \) is an exponential. The proof for \(x\in (I_\chi \setminus P_\chi )I_\chi \) is parallel. Thus we can view \(I_\chi \setminus P_\chi \) as the disjoint union

$$\begin{aligned} I_\chi \setminus P_\chi = K_\chi \cup J_\chi \cup L_\chi , \end{aligned}$$

where \(L_{\chi } := I_\chi \setminus (P_\chi \cup K_\chi \cup J_\chi )\).
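Continuing the earlier example \(S = ({{\mathbb {N}}}_0,+)\) with \(I_\chi = \{1,2,\ldots \}\) and \(P_\chi = \{1\}\), one finds \(J_\chi = \{3,4,\ldots \}\), \(K_\chi = \{2\}\), and \(L_\chi = \emptyset \). The brute-force sketch below (again truncated at an arbitrary bound, an artifact of the computation) recovers this partition.

```python
# Brute-force partition of I_chi \ P_chi into K, J, L for the indicator
# exponential of {0} on a truncated copy of (N_0, +), where I_chi = {1,2,...}
# and P_chi = {1}.
N = 60
I = set(range(1, N))
P = {1}
units = [0]                                   # S \ I_chi
IP = I - P                                    # I_chi \ P_chi

J = {s + t for s in I for t in IP if s + t < N} | \
    {t + s for t in IP for s in I if t + s < N}           # J_chi (truncated)
K = {x for x in IP
     if all(u + x in I - J and x + v in I - J and u + x + v in I - J
            for u in units for v in units)}               # K_chi
L = IP - (J | K)                                          # L_chi
```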

Theorem 4.1

Let S be a topological semigroup, let \(\chi \in C(S)\) be an exponential, and let \(\phi \in C(S)\) be a nonzero solution of the special sine addition law (4) as described in Corollary  3.2, with additive \(A\in C(S\setminus I_\chi )\) and restriction \(\phi _P \in C(P_\chi )\).

If \(\psi \in C(S)\) is a solution of (6), then there exist an additive \(B\in C(S\setminus I_\chi )\) and (restriction) functions \(\psi _P \in C(P_\chi )\), \(\psi _K \in C(K_\chi )\) such that

$$\begin{aligned} \psi (x) = {\left\{ \begin{array}{ll} \left[ B(x) + \frac{1}{2}A^2(x)\right] \chi (x) &{} \text { for } x\in S\setminus I_\chi \\ \psi _P(x) &{} \text { for } x \in P_\chi \\ \psi _K(x) &{} \text { for } x \in K_\chi \\ 0 &{} \text { for } x \in J_\chi \cup L_\chi \end{array}\right. } \end{aligned}$$
(7)

and the following conditions hold.

  1. (a)

    For all \(y \in S\setminus I_\chi \) and \(x\in P_\chi \) we have \(yx,xy \in P_\chi \) and \(\psi _P(yx) = \psi _P(xy) = [\psi _P(x) + \phi _P(x)A(y)]\chi (y)\).

  2. (b)

    For all \(y \in S\setminus I_\chi \) and \(x\in K_\chi \) we have \(\psi (xy) = \psi (yx) = \psi _K(x)\chi (y)\).

  3. (c)

    \(\psi (xy) = \psi (yx) = 0\) for all \(y \in S\setminus I_\chi \) and \(x\in J_\chi \cup L_\chi \).

  4. (d)

    \(\psi (xy) = \psi (yx) = \phi _P(x)\phi _P(y)\) for all \(x,y\in P_\chi \).

  5. (e)

    \(\psi (xy) = \psi (yx) = 0\) for all \(x \in I_\chi \) and \(y\in I_\chi \setminus P_\chi \).

Conversely, if \(\psi :S \rightarrow {{\mathbb {C}}}\) has the form (7) with additive \(B:S\setminus I_\chi \rightarrow {{\mathbb {C}}}\) and conditions (a)–(e) holding, then \(\psi \) satisfies (6).

Proof

Suppose \(\psi \in C(S)\) is a solution of (6), and let \(\psi _P,\psi _K\) be the respective restrictions of \(\psi \) to \(P_\chi \) and \(K_\chi \), which are disjoint. Recalling that \(\phi \) has the form (3), we get for \(x,y \in S\setminus I_\chi \) that

$$\begin{aligned} \psi (xy) = \psi (x)\chi (y) + \chi (x)\psi (y) + A(x)\chi (x)A(y)\chi (y), \end{aligned}$$

from which it follows that

$$\begin{aligned} \dfrac{\psi }{\chi }(xy) - \dfrac{\psi }{\chi }(x) - \dfrac{\psi }{\chi }(y) = A(x)A(y) = \dfrac{1}{2}[A^2(xy) - A^2(x) - A^2(y)]. \end{aligned}$$

Thus \(B\in C(S\setminus I_\chi )\) defined by

$$\begin{aligned} B(x) := \dfrac{\psi }{\chi }(x) - \dfrac{1}{2}A^2(x), \quad \text { for all } x\in S\setminus I_\chi , \end{aligned}$$

is additive, and we have the top case of (7).

Next we show that \(\psi \) vanishes everywhere on \(J_\chi \cup L_\chi \). Note first that if \(x,y\in I_\chi \) and either of them belongs to \(I_\chi \setminus P_\chi \), then by (6) we have

$$\begin{aligned} \psi (xy) = \psi (x)\chi (y) + \chi (x)\psi (y) + \phi (x)\phi (y) = 0, \end{aligned}$$

since \(\chi (x) = \chi (y) = 0\) and one of \(\phi (x),\phi (y)\) is zero. Thus we have

$$\begin{aligned} \psi (x) = 0 \quad \text { for all } x \in J_\chi , \end{aligned}$$

confirming part of the bottom case of (7). Now suppose \(x\in L_\chi \). By definition of \(L_\chi \) there exist \(u_0\) and/or \(v_0\) in \(S\setminus I_{\chi }\) such that \(u_0x\), \(xv_0\), or \(u_0xv_0\) belongs to \(J_{\chi }\). In the case \(u_0x \in J_{\chi }\) we have

$$\begin{aligned} 0 = \psi (u_0x) = \psi (u_0)\chi (x) + \chi (u_0)\psi (x) + \phi (u_0)\phi (x) = \chi (u_0)\psi (x), \end{aligned}$$

since \(\chi (x) = 0 = \phi (x)\) (because \(L_\chi \subseteq I_\chi \setminus P_\chi \)). It follows from \(\chi (u_0) \ne 0\) that \(\psi (x) = 0\). The case \(xv_0 \in J_{\chi }\) is parallel. In the case \(u_0xv_0\in J_\chi \) we note that \(\chi (xv_0) = \chi (x) = 0 = \phi (x) = \phi (xv_0)\), the last by condition (i) of Proposition 3.1(b). Thus

$$\begin{aligned} 0&= \psi (u_0xv_0) = \psi (u_0)\chi (xv_0) + \chi (u_0)\psi (xv_0) + \phi (u_0)\phi (xv_0) \\&= \chi (u_0)[\psi (x)\chi (v_0) + \chi (x)\psi (v_0) + \phi (x)\phi (v_0)]\\&= \chi (u_0)\psi (x)\chi (v_0), \end{aligned}$$

so \(\psi (x) = 0\) since \(\chi (u_0)\chi (v_0) \ne 0\). This establishes the bottom case of (7).

Since there is nothing to prove in the two middle cases, (7) is established. It remains to prove conditions (a)–(e).

To prove (a) take \(x\in S\setminus I_\chi \), \(y \in P_\chi \) in (6). Then

$$\begin{aligned} \psi (yx) = \psi (xy) = \chi (x)\psi (y) + \phi (x)\phi (y) = \chi (x)[\psi (y) + A(x)\phi _P(y)], \end{aligned}$$

since \(\chi (y) = 0\). It follows from condition (ii) of Proposition 3.1(b) that \(xy, yx \in P_\chi \), thus we have condition (a).

For (b) suppose \(x\in S\setminus I_\chi \), \(y \in K_\chi \). Then (6) yields

$$\begin{aligned} \psi (yx) = \psi (xy) = \chi (x)\psi _K(y), \end{aligned}$$

since \(\chi (y) = \phi (y) = 0\).

To prove (c) let \(y \in S\setminus I_\chi \) and \(x\in J_\chi \cup L_\chi \). Then \(\chi (x) = \psi (x) = \phi (x) = 0\), so \(\psi (xy) = \psi (yx) = 0\) follows immediately from (6).

For (d), taking \(x,y\in P_\chi \) we see that

$$\begin{aligned} \psi (yx) = \psi (xy) = \phi (x)\phi (y) = \phi _P(x)\phi _P(y), \end{aligned}$$

since \(\chi (x) = \chi (y) = 0\).

Finally, for (e) suppose \(x\in I_\chi \) and \(y \in I_\chi \setminus P_\chi \). Noting that \(\chi (x) = \chi (y) = \phi (y) = 0\), (6) yields \(\psi (xy) = \psi (yx) = 0\).

For the converse, suppose first that \(y \in S\setminus I_\chi \). If \(x \in S\setminus I_\chi \), then we have

$$\begin{aligned} \psi (xy)&= \left[ B(xy) + \frac{1}{2}A^2(xy)\right] \chi (xy)\\&= \left[ B(x) + B(y) + \frac{1}{2}A^2(x) + A(x)A(y) + \frac{1}{2}A^2(y)\right] \chi (x)\chi (y)\\&= \left[ B(x) + \frac{1}{2}A^2(x)\right] \chi (x)\chi (y) + \chi (x)\left[ B(y) + \frac{1}{2}A^2(y)\right] \chi (y) + A(x)\chi (x)A(y)\chi (y)\\&= \psi (x)\chi (y) + \chi (x)\psi (y) + \phi (x)\phi (y). \end{aligned}$$

For \(x \in P_\chi \) we have by (a) that \(xy\in P_\chi \), and

$$\begin{aligned} \psi (xy)&= \psi _P(xy) = [\psi _P(x) + \phi _P(x)A(y)]\chi (y)\\&= \psi (x)\chi (y) + \chi (x)\psi (y) + \phi (x)\phi (y) \end{aligned}$$

since \(\chi (x) = 0\). Next, if \(x\in K_\chi \) then we get from (b) that

$$\begin{aligned} \psi (xy)&= \psi _K(x)\chi (y) = \psi (x)\chi (y) + \chi (x)\psi (y) + \phi (x)\phi (y) \end{aligned}$$

since \(\chi (x) = \phi (x) = 0\). Lastly, if \(x \in J_\chi \cup L_\chi \) then by (c) we have (6) since \(\psi (xy) = \chi (x) = \psi (x) = \phi (x) = 0\).

The mirror cases with \(x\in S\setminus I_\chi \) are parallel, so we have verified all combinations of \(x,y\) with one of them belonging to \(S\setminus I_\chi \).

Now suppose \(y \in P_\chi \). If \(x\in P_\chi \) then (d) yields

$$\begin{aligned} \psi (xy)&= \phi _P(x)\phi _P(y) = \psi (x)\chi (y) + \chi (x)\psi (y) + \phi (x)\phi (y) \end{aligned}$$

since \(\chi (x) = \chi (y) = 0\). If \(x\in I_\chi \setminus P_\chi \) then (e) yields \(\psi (xy) = 0\), which confirms (6) since \(\chi (x) = \chi (y) = \phi (x) = 0\). The remaining cases are left to the reader. \(\square \)

Note that \(\psi _P\) and \(\psi _K\) may take arbitrary values at some points of their respective domains (see Example 6.4). Later, in Examples 6.3 and 6.4, we will see how conditions (a)–(e) can be used to find explicit forms for \(\psi \) on semigroups with very different prime ideal structures.
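When \(I_\chi = \emptyset \) (for instance on a group), only the top case of (7) is present, and the solution of (6) reduces to \(\psi = (B + \frac{1}{2}A^2)\chi \). The following sketch, with arbitrarily chosen constants on \(S = ({{\mathbb {Z}}},+)\), checks this formula against (6) numerically.

```python
import math

# Top case of (7) on the group S = (Z, +), where I_chi is empty:
# psi = (B + A^2/2) * chi should solve (6) with phi = A * chi.
# The constants a, b, c are arbitrary illustrative choices.
a, b, c = 0.4, -1.2, 1.5
chi = lambda x: c ** x
phi = lambda x: a * x * chi(x)                       # phi = A * chi
psi = lambda x: (b * x + (a * x) ** 2 / 2) * chi(x)  # psi = (B + A^2/2) * chi

ok = all(
    math.isclose(psi(x + y),
                 psi(x) * chi(y) + chi(x) * psi(y) + phi(x) * phi(y),
                 abs_tol=1e-9)
    for x in range(-4, 5) for y in range(-4, 5)
)
```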

Here we record a simple linear independence result involving exponentials, solutions of the sine addition law, and solutions of (6). In the next section we will see that these three function types make up bases for the solutions of (1) in the case \(n=3\).

Lemma 4.2

Let S be a semigroup.

  1. (a)

    The set of exponentials on S is linearly independent.

  2. (b)

Let \(\chi :S \rightarrow {{\mathbb {C}}}\) be an exponential, and let \(\phi :S \rightarrow {{\mathbb {C}}}\) be a nonzero solution of the special sine addition law (4).

    1. (i)

      If \(\chi ':S \rightarrow {{\mathbb {C}}}\) is an exponential different from \(\chi \), then \(\{\chi ', \chi , \phi \}\) is linearly independent.

    2. (ii)

If \(\phi ':S \rightarrow {{\mathbb {C}}}\) is a solution of (4) with \(\{\phi ,\phi '\}\) linearly independent, then \(\{\chi ,\phi ,\phi '\}\) is linearly independent.

    3. (iii)

      If \(\psi :S \rightarrow {{\mathbb {C}}}\) satisfies (6) and \(\psi \ne 0\), then \(\{\chi ,\phi ,\psi \}\) is linearly independent.

Proof

Part (a) is [7, Theorem 3.18].

For (b)(i) suppose there exist \(a,b,c\in {{\mathbb {C}}}\) such that \(a\chi ' + b\chi + c\phi = 0\). Then

$$\begin{aligned} 0 =&\, a\chi '(xy) + b\chi (xy) + c\phi (xy) \\ =&\, a\chi '(x)\chi '(y) + b\chi (x)\chi (y) + c\big (\phi (x)\chi (y) + \chi (x)\phi (y)\big )\\ =&\, a\chi '(x)\chi '(y) + \big (b\chi (x) + c\phi (x)\big )\chi (y) + c\chi (x)\phi (y)\\ =&\, a\chi '(x)\big (\chi '(y) - \chi (y)\big ) + c\chi (x)\phi (y) \end{aligned}$$

for all \(x,y \in S\). By part (a) this implies that \(a(\chi ' - \chi ) = c\phi = 0\). Since \(\chi ' \ne \chi \) and \(\phi \ne 0\) we have \(a = c = 0\). Therefore \(b=0\) too since \(\chi \ne 0\).

For part (b)(ii) let \(a\chi + b\phi + c\phi ' = 0\) for some \(a,b,c \in {{\mathbb {C}}}\). Then

$$\begin{aligned} 0&= a\chi (xy) + b\phi (xy) + c\phi '(xy) \\&= a\chi (x)\chi (y) + b\big (\phi (x)\chi (y) + \chi (x)\phi (y)\big ) + c\big (\phi '(x)\chi (y) + \chi (x)\phi '(y)\big )\\&= \big (a\chi (x) + b\phi (x) + c\phi '(x)\big )\chi (y) + \chi (x)\big (b\phi (y) + c\phi '(y)\big )\\&= -a \chi (x)\chi (y) \end{aligned}$$

for all \(x,y \in S\). Thus \(a=0\). Now \(b\phi + c\phi ' = 0\), so \(b = c = 0\) by the independence of \(\{\phi ,\phi '\}\).

For (b)(iii) suppose \(a\chi + b\phi + c\psi = 0\) for some \(a,b,c \in {{\mathbb {C}}}\). Then

$$\begin{aligned} 0&= a\chi (xy) + b\phi (xy) + c\psi (xy) \\&= a\chi (x)\chi (y)+b\big (\phi (x)\chi (y)+ \chi (x)\phi (y)\big ) + c\big (\psi (x)\chi (y) + \chi (x)\psi (y) +\phi (x)\phi (y)\big )\\&= \big (a\chi (x) + b\phi (x) + c\psi (x)\big )\chi (y) + \chi (x)\big (b\phi (y) + c\psi (y)\big ) + c\phi (x)\phi (y)\\&= -a \chi (x)\chi (y) + c\phi (x)\phi (y) \end{aligned}$$

for all \(x,y \in S\). Since \(\{\chi ,\phi \}\) is independent by part (b)(i), this implies \(a = c = 0\). Then \(b=0\) follows. \(\square \)
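Lemma 4.2(b)(iii) can also be witnessed numerically: sampling \(\chi ,\phi ,\psi \) at a few points of \(({{\mathbb {N}}}_0,+)\) yields a matrix of full column rank. The choices \(\chi (x) = 2^x\) and \(A(x) = B(x) = x\) below are illustrative.

```python
import numpy as np

# Sampling chi, phi, psi at a few points of (N_0, +) witnesses the linear
# independence in Lemma 4.2(b)(iii).  With chi(x) = 2^x and A(x) = B(x) = x,
# phi solves (4) and psi = (B + A^2/2) * chi solves (6).
chi = lambda x: 2.0 ** x
phi = lambda x: x * chi(x)
psi = lambda x: (x + x ** 2 / 2) * chi(x)

samples = np.array([[chi(x), phi(x), psi(x)] for x in range(6)])
rank = np.linalg.matrix_rank(samples)
```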

5 The Main Result

Now we apply Proposition 3.3 to solve the general Levi–Civita equation for \(n=3\).

Theorem 5.1

Let S be a topological commutative monoid, and suppose \(f,g_j,h_j \in C(S)\) satisfy the Levi–Civita equation

$$\begin{aligned} f(xy) = g_1(x)h_1(y) + g_2(x)h_2(y) + g_3(x)h_3(y), \quad x,y \in S, \end{aligned}$$
(8)

with \(\{g_1,g_2,g_3\}\) and \(\{h_1,h_2,h_3\}\) linearly independent. Then there exist constants \(c_k, a_{j,k}, b_{j,k}\in {{\mathbb {C}}}\) such that \(f,g_j,h_j\) belong to one of the following families, where \(\chi ,\chi ',\chi _k \in C(S)\) are exponentials, \(\phi \in C(S)\) is a nonzero solution of (4) as described in Corollary  3.2, and \(\psi \in C(S)\) is a nonzero solution of (6) as described in Theorem  4.1.

  1. (I)

    For distinct exponentials \(\chi _1,\chi _2,\chi _3\) we have

    $$\begin{aligned} f = \sum _{k=1}^3 c_k \chi _k, \quad g_j = \sum _{k=1}^3 a_{j,k} \chi _k, \quad h_j = \sum _{k=1}^3 b_{j,k} \chi _k, \end{aligned}$$
    (9)

    with coefficients satisfying

    $$\begin{aligned} \begin{pmatrix} a_{1,1} &{}\quad a_{2,1} &{}\quad a_{3,1} \\ a_{1,2} &{}\quad a_{2,2} &{}\quad a_{3,2} \\ a_{1,3} &{}\quad a_{2,3} &{}\quad a_{3,3} \end{pmatrix}\begin{pmatrix} b_{1,1} &{}\quad b_{1,2} &{}\quad b_{1,3} \\ b_{2,1} &{}\quad b_{2,2} &{}\quad b_{2,3} \\ b_{3,1} &{}\quad b_{3,2} &{}\quad b_{3,3} \end{pmatrix} = \begin{pmatrix} c_{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad c_{2} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad c_{3} \end{pmatrix}. \end{aligned}$$
    (10)
  2. (II)

    For distinct exponentials \(\chi ,\chi '\) we have

    $$\begin{aligned}&f = c_1 \chi ' + c_2 \chi + c_3 \phi , \quad g_j = a_{j,1} \chi ' + a_{j,2}\chi + a_{j,3}\phi ,\quad \nonumber \\&h_j = b_{j,1} \chi ' + b_{j,2}\chi + b_{j,3} \phi , \end{aligned}$$
    (11)

    with coefficients satisfying

    $$\begin{aligned} \begin{pmatrix} a_{1,1} &{}\quad a_{2,1} &{}\quad a_{3,1} \\ a_{1,2} &{}\quad a_{2,2} &{}\quad a_{3,2} \\ a_{1,3} &{}\quad a_{2,3} &{}\quad a_{3,3} \end{pmatrix}\begin{pmatrix} b_{1,1} &{}\quad b_{1,2} &{}\quad b_{1,3} \\ b_{2,1} &{}\quad b_{2,2} &{}\quad b_{2,3} \\ b_{3,1} &{}\quad b_{3,2} &{}\quad b_{3,3} \end{pmatrix} = \begin{pmatrix} c_{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad c_{2} &{}\quad c_3 \\ 0 &{}\quad c_3 &{}\quad 0 \end{pmatrix}. \end{aligned}$$
    (12)
  3. (III)
    $$\begin{aligned}&f = c_1 \chi + c_2 \phi + c_3 \psi , \quad g_j = a_{j,1} \chi + a_{j,2}\phi + a_{j,3} \psi , \quad \nonumber \\&h_j = b_{j,1} \chi + b_{j,2}\phi +b_{j,3} \psi \end{aligned}$$
    (13)

    with coefficients satisfying

$$\begin{aligned} \begin{pmatrix} a_{1,1} &{}\quad a_{2,1} &{}\quad a_{3,1} \\ a_{1,2} &{}\quad a_{2,2} &{}\quad a_{3,2} \\ a_{1,3} &{}\quad a_{2,3} &{}\quad a_{3,3} \end{pmatrix}\begin{pmatrix} b_{1,1} &{}\quad b_{1,2} &{}\quad b_{1,3} \\ b_{2,1} &{}\quad b_{2,2} &{}\quad b_{2,3} \\ b_{3,1} &{}\quad b_{3,2} &{}\quad b_{3,3} \end{pmatrix} = \begin{pmatrix} c_{1} &{}\quad c_2 &{}\quad c_3 \\ c_2 &{}\quad c_3 &{}\quad 0 \\ c_3 &{}\quad 0 &{}\quad 0 \end{pmatrix}. \end{aligned}$$
    (14)

Conversely, each family (I), (II), (III) constitutes a solution of (8), and \(\{g_1,g_2,g_3\}\) and \(\{h_1,h_2,h_3\}\) are linearly independent provided that the coefficients are chosen so that the matrices in (10), (12), (14) have rank 3.
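The converse direction can be checked numerically for family (I): choose an invertible left-hand matrix in (10), solve for the right-hand one, and verify (8). The Python sketch below does this on \(({{\mathbb {N}}}_0,+)\); the exponentials \(2^x, 3^x, 5^x\) and all coefficient values are illustrative choices.

```python
import numpy as np

# Converse of Theorem 5.1, family (I), on S = (N_0, +): pick an invertible
# left-hand matrix in (10), solve for the right-hand one, and check (8).
chis = [lambda x: 2.0 ** x, lambda x: 3.0 ** x, lambda x: 5.0 ** x]
c = np.array([1.0, -2.0, 0.5])

Amat = np.array([[1.0, 0.0, 2.0],        # entry (k, j) holds a_{j,k}
                 [0.0, 1.0, 1.0],
                 [1.0, 1.0, 0.0]])
Bmat = np.linalg.inv(Amat) @ np.diag(c)  # entry (j, k) holds b_{j,k}; enforces (10)

def g(j, x):                             # g_j = sum_k a_{j,k} chi_k
    return sum(Amat[k, j] * chis[k](x) for k in range(3))

def h(j, y):                             # h_j = sum_k b_{j,k} chi_k
    return sum(Bmat[j, k] * chis[k](y) for k in range(3))

def f(x):                                # f = sum_k c_k chi_k
    return sum(c[k] * chis[k](x) for k in range(3))

# f(x + y) = g_1(x)h_1(y) + g_2(x)h_2(y) + g_3(x)h_3(y) on a grid of points.
ok = all(np.isclose(f(x + y), sum(g(j, x) * h(j, y) for j in range(3)))
         for x in range(5) for y in range(5))
```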

Proof

Let \(V = span \{g_1,g_2,g_3\}\). We consider three cases as determined by Proposition 3.3.

Case 1: Suppose \(s=3\), so \(d_1 = d_2 = d_3 = 1\). Then V has a basis of the form \(\{\chi _1,\chi _2,\chi _3\}\) for three distinct continuous exponentials. Let \(f,g_j,h_j\) have the form (9) for some \(c_k, a_{j,k}, b_{j,k}\in {{\mathbb {C}}}\). By linear independence such functions satisfy (8) if and only if the constants satisfy (10).

Case 2: Suppose \(s=2\) with \(d_1 = 1\) and \(d_2 = 2\) (the case \(d_1 = 2\), \(d_2 = 1\) is equivalent). Since V is spanned by the matrix elements of the representation \(R\circ g\) of S, we have \(V = span \{\chi ',\chi , \rho ^{2,1}\}\) for distinct exponentials \(\chi ',\chi \), where

$$\begin{aligned} D^{-1}R(g(x))D = \begin{pmatrix} \chi '(x) &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad \chi (x) &{}\quad 0 \\ 0 &{}\quad \rho ^{2,1}(x) &{}\quad \chi (x) \end{pmatrix} \end{aligned}$$

as described in Proposition 3.3. Using the formula \(R(g(xy)) = R(g(x))R(g(y))\) we find by matrix multiplication that \(\phi := \rho ^{2,1}\) is a solution of the special sine addition law (4). From \(\dim V = 3\) it follows that \(\phi \ne 0\) and \(\{\chi ',\chi ,\phi \}\) is a basis for V.

Let \(f,g_j,h_j\) have the form (11) for some \(c_k, a_{j,k}, b_{j,k}\in {{\mathbb {C}}}\). Such functions satisfy (8) if and only if

$$\begin{aligned} 0 =&\sum _{j=1}^{3} [a_{j,1} \chi '(x) + a_{j,2}\chi (x) + a_{j,3}\phi (x)][b_{j,1} \chi '(y) + b_{j,2} \chi (y) + b_{j,3}\phi (y)]\\&- \left( c_1 \chi '(xy) + c_2 \chi (xy) + c_3 \phi (xy)\right) \\ =&\sum _{j=1}^{3} [a_{j,1} \chi '(x) + a_{j,2}\chi (x) + a_{j,3}\phi (x)][b_{j,1} \chi '(y) + b_{j,2} \chi (y) + b_{j,3}\phi (y)]\\&- \left( c_1 \chi '(xy) + c_2 \chi (xy) + c_3 [\phi (x)\chi (y) + \chi (x)\phi (y)]\right) \\ =&\left( \sum _{j=1}^{3} a_{j,1}b_{j,1} - c_1\right) \chi '(x)\chi '(y) + \sum _{j=1}^{3} a_{j,1}b_{j,2}\chi '(x)\chi (y)\\&+ \sum _{j=1}^{3}a_{j,1}b_{j,3}\chi '(x)\phi (y) + \sum _{j=1}^{3} a_{j,2}b_{j,1}\chi (x)\chi '(y)\\&+ \left( \sum _{j=1}^{3} a_{j,2}b_{j,2} - c_2\right) \chi (x)\chi (y) + \left( \sum _{j=1}^{3} a_{j,2}b_{j,3} - c_3\right) \chi (x)\phi (y) \\&+ \sum _{j=1}^{3} a_{j,3}b_{j,1}\phi (x)\chi '(y) + \left( \sum _{j=1}^{3} a_{j,3}b_{j,2} - c_3\right) \phi (x)\chi (y)\\&+\sum _{j=1}^{3} a_{j,3}b_{j,3}\phi (x)\phi (y). \end{aligned}$$

By the linear independence of \(\{\chi ',\chi , \phi \}\) this means the constants must fulfill (12).

Case 3: Suppose \(s=1\), so \(d_1 = 3\). By the same reasoning as in Case 2 we find that \(V = span \{\chi ,\rho ^{2,1},\rho ^{3,2},\rho ^{3,1}\}\), where

$$\begin{aligned} \rho ^{2,1}(xy)&= \rho ^{2,1}(x)\chi (y) + \chi (x)\rho ^{2,1}(y),\\ \rho ^{3,2}(xy)&= \rho ^{3,2}(x)\chi (y) + \chi (x)\rho ^{3,2}(y),\\ \rho ^{3,1}(xy)&= \rho ^{3,1}(x)\chi (y) + \chi (x)\rho ^{3,1}(y) + \rho ^{3,2}(x)\rho ^{2,1}(y). \end{aligned}$$

The first two formulas show that \(V = span \{\chi ,\phi _1,\phi _2,\rho ^{3,1}\}\), where \(\phi _1 := \rho ^{2,1}\) and \(\phi _2 := \rho ^{3,2}\) are solutions of the special sine addition law (4). Here we split the proof into two sub-cases.

Sub-case 3(a): Suppose \(\{\phi _1,\phi _2\}\) is linearly independent. Then since \(\dim V = 3\) we get from Lemma 4.2 that \(\{\chi , \phi _1, \phi _2\}\) is a basis for V. Re-labeling \(\{\phi _1,\phi _2\}\) as \(\{\phi ,\phi '\}\), let \(f,g_j,h_j\) be given by

$$\begin{aligned}&f = c_1 \chi + c_2 \phi + c_3 \phi ', \quad g_j = a_{j,1} \chi + a_{j,2}\phi + a_{j,3} \phi ', \quad \\&h_j = b_{j,1} \chi + b_{j,2}\phi + b_{j,3} \phi ', \end{aligned}$$

for some \(c_k, a_{j,k}, b_{j,k}\in {{\mathbb {C}}}\). Now (8) holds if and only if

$$\begin{aligned} 0 =&\sum _{j=1}^{3} [a_{j,1} \chi (x) + a_{j,2}\phi (x) + a_{j,3} \phi '(x)][b_{j,1} \chi (y) + b_{j,2} \phi (y) + b_{j,3} \phi '(y)]\\&- [c_1 \chi (xy) + c_2 \left( \phi (x)\chi (y) + \chi (x)\phi (y)\right) + c_3 \left( \phi '(x)\chi (y) + \chi (x)\phi '(y)\right) ]\\ =&\left( \sum _{j=1}^{3}a_{j,1}b_{j,1} - c_1\right) \chi (x)\chi (y) \\&+ \left( \sum _{j=1}^{3}a_{j,1}b_{j,2} - c_2\right) \chi (x)\phi (y) + \left( \sum _{j=1}^{3}a_{j,1}b_{j,3} - c_3\right) \chi (x)\phi '(y)\\&+ \left( \sum _{j=1}^{3}a_{j,2}b_{j,1} - c_2\right) \phi (x)\chi (y) + \sum _{j=1}^{3}a_{j,2}b_{j,2}\phi (x)\phi (y) \\&+ \sum _{j=1}^{3}a_{j,2}b_{j,3}\phi (x)\phi '(y) \\&+ \left( \sum _{j=1}^{3}a_{j,3}b_{j,1} - c_3\right) \phi '(x)\chi (y) + \sum _{j=1}^{3}a_{j,3}b_{j,2}\phi '(x)\phi (y) \\&+\sum _{j=1}^{3}a_{j,3}b_{j,3}\phi '(x)\phi '(y) \end{aligned}$$

for all \(x,y \in S\). By the linear independence of the basis functions we arrive at the constraint

$$\begin{aligned} \begin{pmatrix} a_{1,1} &{}\quad a_{2,1} &{}\quad a_{3,1} \\ a_{1,2} &{}\quad a_{2,2} &{}\quad a_{3,2} \\ a_{1,3} &{}\quad a_{2,3} &{}\quad a_{3,3} \end{pmatrix}\begin{pmatrix} b_{1,1} &{}\quad b_{1,2} &{}\quad b_{1,3} \\ b_{2,1} &{}\quad b_{2,2} &{}\quad b_{2,3} \\ b_{3,1} &{}\quad b_{3,2} &{}\quad b_{3,3} \end{pmatrix} = \begin{pmatrix} c_{1} &{}\quad c_2 &{}\quad c_3 \\ c_2 &{}\quad 0 &{}\quad 0 \\ c_3 &{}\quad 0 &{}\quad 0 \end{pmatrix}. \end{aligned}$$

Since the matrix on the right hand side has rank at most 2, there are no non-degenerate solutions in this subcase. That is, either \(\{g_1,g_2,g_3\}\) or \(\{h_1,h_2,h_3\}\) is linearly dependent.

Sub-case 3(b): Suppose \(\{\phi _1,\phi _2\}\) is linearly dependent. Since \(\dim V = 3\) we cannot have \(\phi _1 = \phi _2 = 0\), so we can denote one of these functions as \(\phi \ne 0\) and the other as \(d\phi \) for some \(d \in {{\mathbb {C}}}\). Now \(\rho ^{3,1}\) satisfies

$$\begin{aligned} \rho ^{3,1}(xy) = \rho ^{3,1}(x)\chi (y) + \chi (x)\rho ^{3,1}(y) + d\phi (x)\phi (y), \quad x,y \in S. \end{aligned}$$

If \(d=0\) then \(\rho ^{3,1}\) is a solution of the special sine addition law (4). In this case, defining \(\phi ' := \rho ^{3,1}\) we have \(\{\phi ,\phi '\}\) linearly independent since \(\dim V = 3\). Thus we revert to Sub-case 3(a) which is already settled. If \(d\ne 0\) then defining \(\psi := d^{-1}\rho ^{3,1}\) we see that \(\psi \) is a solution of (6), and we have \(span \{\chi ,\phi ,\psi \} = V\). Moreover \(\psi \ne 0\) since \(\dim V = 3\), thus \(\{\chi ,\phi ,\psi \}\) is a basis for V.

Let \(f,g_j,h_j\) have the form (13) for some \(c_k, a_{j,k}, b_{j,k}\in {{\mathbb {C}}}\). Then (8) holds if and only if

$$\begin{aligned} 0 =&\sum _{j=1}^{3} [a_{j,1} \chi (x) + a_{j,2}\phi (x) + a_{j,3} \psi (x)][b_{j,1} \chi (y) + b_{j,2} \phi (y) + b_{j,3} \psi (y)]\\&- [c_1 \chi (xy) + c_2 \left( \phi (x)\chi (y) + \chi (x)\phi (y)\right) + c_3 \left( \psi (x)\chi (y) \right. \\&\left. + \chi (x)\psi (y) + \phi (x)\phi (y)\right) ]\\ =&\left( \sum _{j=1}^{3}a_{j,1}b_{j,1} - c_1\right) \chi (x)\chi (y) + \left( \sum _{j=1}^{3}a_{j,1}b_{j,2} - c_2\right) \chi (x)\phi (y) \\&+ \left( \sum _{j=1}^{3}a_{j,1}b_{j,3} - c_3\right) \chi (x)\psi (y) + \left( \sum _{j=1}^{3}a_{j,2}b_{j,1} - c_2\right) \phi (x)\chi (y)\\&+ \left( \sum _{j=1}^{3}a_{j,2}b_{j,2} - c_3\right) \phi (x)\phi (y) + \sum _{j=1}^{3}a_{j,2}b_{j,3}\phi (x)\psi (y) \\&+ \left( \sum _{j=1}^{3}a_{j,3}b_{j,1} - c_3\right) \psi (x)\chi (y) + \sum _{j=1}^{3}a_{j,3}b_{j,2}\psi (x)\phi (y)\\&+\sum _{j=1}^{3}a_{j,3}b_{j,3}\psi (x)\psi (y) \end{aligned}$$

for all \(x,y \in S\). By the linear independence of the basis functions we arrive at constraint (14).

For the converse, the matrices in the constraints must all have full rank to ensure that \(\{g_1,g_2,g_3\}\) and \(\{h_1,h_2,h_3\}\) are linearly independent. \(\square \)

In the next section we apply Theorem 5.1 to two particular Levi–Civita equations.

6 Applications and Examples

A well-known particular equation of the form (8) is the cosine-sine functional equation

$$\begin{aligned} f(xy) = f(x)g(y) + g(x)f(y) + h(x)h(y), \quad x,y \in S, \end{aligned}$$
(15)

for three unknown functions \(f,g,h:S \rightarrow {{\mathbb {C}}}\).

Corollary 6.1

Let S be a topological commutative monoid, and suppose \(f,g,h \in C(S)\) form a solution of (15) with \(\{f,g,h\}\) linearly independent. Then there exist constants \(c_k, \alpha _{k}, \beta _{k}\in {{\mathbb {C}}}\) such that \(f,g,h\) belong to one of the following three families, where \(\chi ,\chi ',\chi _k \in C(S)\) are exponentials, \(\phi \in C(S)\) is a nonzero solution of (4) as described in Corollary 3.2, and \(\psi \in C(S)\) is a nonzero solution of (6) as described in Theorem 4.1.

  1. (I)

    For distinct exponentials \(\chi _1,\chi _2,\chi _3\) we have

    $$\begin{aligned} f = \sum _{k=1}^3 c_k \chi _k, \quad g = \sum _{k=1}^3 \alpha _{k} \chi _k, \quad h = \sum _{k=1}^3 \beta _{k} \chi _k, \end{aligned}$$

    with coefficients satisfying

    $$\begin{aligned} \begin{pmatrix} c_1 &{}\quad \alpha _1 &{}\quad \beta _1 \\ c_2 &{}\quad \alpha _2 &{}\quad \beta _2 \\ c_3 &{}\quad \alpha _3 &{}\quad \beta _3 \end{pmatrix}\begin{pmatrix} \alpha _1 &{}\quad \alpha _2 &{}\quad \alpha _3 \\ c_1 &{}\quad c_2 &{}\quad c_3 \\ \beta _1 &{}\quad \beta _2 &{}\quad \beta _3 \end{pmatrix} = \begin{pmatrix} c_{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad c_{2} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad c_{3} \end{pmatrix}. \end{aligned}$$
  2. (II)

    For distinct exponentials \(\chi ',\chi \) we have

    $$\begin{aligned} f = c_1 \chi ' + c_2 \chi + c_3 \phi , \quad g = \alpha _{1} \chi ' + \alpha _{2}\chi + \alpha _{3}\phi , \quad h = \beta _{1} \chi ' + \beta _{2}\chi + \beta _{3} \phi , \end{aligned}$$

    with coefficients satisfying

    $$\begin{aligned} \begin{pmatrix} c_1 &{}\quad \alpha _{1} &{}\quad \beta _{1} \\ c_2 &{}\quad \alpha _{2} &{}\quad \beta _{2} \\ c_3 &{}\quad \alpha _{3} &{}\quad \beta _{3} \end{pmatrix}\begin{pmatrix} \alpha _{1} &{}\quad \alpha _{2} &{}\quad \alpha _{3} \\ c_1 &{}\quad c_2 &{}\quad c_3 \\ \beta _{1} &{}\quad \beta _{2} &{}\quad \beta _{3} \end{pmatrix} = \begin{pmatrix} c_{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad c_{2} &{}\quad c_3 \\ 0 &{}\quad c_3 &{}\quad 0 \end{pmatrix}. \end{aligned}$$
  3. (III)
    $$\begin{aligned} f = c_1 \chi + c_2 \phi + c_3 \psi , \quad g = \alpha _{1} \chi + \alpha _{2}\phi + \alpha _{3} \psi , \quad h = \beta _{1} \chi + \beta _{2}\phi + \beta _{3} \psi \end{aligned}$$

    with coefficients satisfying

$$\begin{aligned} \begin{pmatrix} c_1 &{}\quad \alpha _1 &{}\quad \beta _1 \\ c_2 &{}\quad \alpha _2 &{}\quad \beta _2 \\ c_3 &{}\quad \alpha _3 &{}\quad \beta _3 \end{pmatrix}\begin{pmatrix} \alpha _1 &{}\quad \alpha _2 &{}\quad \alpha _3 \\ c_1 &{}\quad c_2 &{}\quad c_3 \\ \beta _1 &{}\quad \beta _2 &{}\quad \beta _3 \end{pmatrix} = \begin{pmatrix} c_{1} &{}\quad c_2 &{}\quad c_3 \\ c_2 &{}\quad c_3 &{}\quad 0 \\ c_3 &{}\quad 0 &{}\quad 0 \end{pmatrix}. \end{aligned}$$

The converse is also true provided that the constants are chosen so that the matrices in each constraint have rank 3.

Proof

We apply Theorem 5.1 under the conditions \(g_1 = h_2 = f\), \(h_1 = g_2 =: g\), and \(g_3 = h_3 =: h\). In terms of the coefficients that means \(a_{1,k} = c_k = b_{2,k}\), \(b_{1,k} = a_{2,k} =: \alpha _k\), and \(a_{3,k} = b_{3,k} =: \beta _k\) for all \(k \in \{1,2,3\}\). Enforcing these identifications, the result follows immediately. \(\square \)
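Family (III) contains, for example, the classical solution built from an exponential \(\chi \) and an additive function A. A numerical sketch on the monoid \(((0,\infty ),\cdot )\), with the illustrative choices \(\chi (x) = x\), \(A(x) = \log x\), \(f = \tfrac{1}{2}A^2\chi \), \(g = \chi \), \(h = A\chi \) (i.e. coefficients \(c = (0,0,1)\), \(\alpha = (1,0,0)\), \(\beta = (0,1,0)\)):

```python
import math
import random

# Illustrative data on S = ((0, oo), *): chi(x) = x is an exponential,
# A(x) = log x is additive, and psi = (A**2/2)*chi solves (6) with phi = A*chi.
def f(x): return 0.5 * math.log(x) ** 2 * x   # c = (0, 0, 1)
def g(x): return x                            # alpha = (1, 0, 0)
def h(x): return math.log(x) * x              # beta = (0, 1, 0)

def residual(x, y):
    """Deviation from the cosine-sine equation (15)."""
    return f(x * y) - (f(x) * g(y) + g(x) * f(y) + h(x) * h(y))

for _ in range(1000):
    x, y = random.uniform(0.1, 10.0), random.uniform(0.1, 10.0)
    assert abs(residual(x, y)) < 1e-8
```

Note that \(\{f,g,h\}\) is linearly independent here, so this is a genuinely non-degenerate member of family (III).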

Another application of Theorem 5.1 is the following. Let \(1_S\) denote the constant exponential function \(\chi = 1\) on S.

Corollary 6.2

Let S be a topological commutative monoid, and suppose \(f,g,h,j,\ell \in C(S)\) satisfy

$$\begin{aligned} f(xy) = g(x) + h(y) + j(x)\ell (y), \quad x,y \in S, \end{aligned}$$
(16)

with \(\{g,1_S,j\}\) and \(\{1_S,h,\ell \}\) linearly independent. The solutions belong to the following families, where \(\chi \in C(S)\) is an exponential and \(A, B \in C(S)\) are additive.

  1. (a)

    For \(\chi \ne 1_S\) and \(A\ne 0\) we have

    $$\begin{aligned} f&= d_1e_1 \chi + (b + c_2 + d_2e_2) + c_3A , \quad g = - d_1e_2 \chi + b + c_{3}A, \\ h&= - d_2e_1 \chi + c_{2} + c_{3}A, \quad j = d_1 \chi + d_2, \quad \ell = e_{1} \chi + e_{2}, \end{aligned}$$

    for \(b,c_j,d_j,e_j \in {{\mathbb {C}}}\) with \(c_3d_1e_1 \ne 0\).

  2. (b)

    For \(A\ne 0\) we have

    $$\begin{aligned} f&= (b_1 + c_1 + d_1e_1) + (b_2 + d_2e_1) A + d_2e_2 \left( \frac{1}{2} A^2 + B\right) , \quad \\ g&= b_{1} + b_{2}A + d_2e_2\left( \frac{1}{2} A^2 + B\right) ,\\ h&= c_{1} + c_{2}A + d_2e_2\left( \frac{1}{2} A^2 + B\right) , \quad j = d_1 + d_2 A, \quad \ell = e_1 + e_2 A, \end{aligned}$$

    for \(b_j,c_j,d_j,e_j \in {{\mathbb {C}}}\) satisfying \(c_2 + d_1e_2 = b_2 + d_2e_1\) and \(d_2e_2 \ne 0\).

Conversely, in each family above \(f,g,h,j,\ell \) is a solution of (16) with \(\{g,1_S,j\}\) and \(\{1_S,h,\ell \}\) linearly independent.

Proof

Since (16) is the special case \(g_1 = g\), \(g_2 = h_1 = 1_S\), \(g_3 = j\), \(h_2 = h\), and \(h_3 = \ell \) of (8), we obtain the solution forms by specializing Theorem 5.1 to this case. Note that the exponential function \(1_S\) belongs to \(V = \mathrm{span}\{g,1_S,j\}\), so without loss of generality we may include it in any basis for V.

In case (I) the basis consists of three distinct exponentials, one of which is \(1_S\) (which we designate as \(\chi _3\)). Writing \(f,g,h,j,\ell \) as

$$\begin{aligned} f = \sum _{k=1}^3 a_k \chi _k, \quad g = \sum _{k=1}^3 b_{k} \chi _k, \quad h = \sum _{k=1}^3 c_{k} \chi _k, \quad j = \sum _{k=1}^3 d_{k} \chi _k, \quad \ell = \sum _{k=1}^3 e_{k} \chi _k, \end{aligned}$$

condition (9) becomes

$$\begin{aligned} \begin{pmatrix} b_{1} &{}\quad 0 &{}\quad d_{1} \\ b_{2} &{}\quad 0 &{}\quad d_{2} \\ b_{3} &{}\quad 1 &{}\quad d_{3} \end{pmatrix}\begin{pmatrix} 0 &{}\quad 0 &{}\quad 1 \\ c_{1} &{}\quad c_{2} &{}\quad c_{3} \\ e_{1} &{}\quad e_{2} &{}\quad e_{3} \end{pmatrix} = \begin{pmatrix} a_{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad a_{2} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad a_3 \end{pmatrix}. \end{aligned}$$

Simple calculations show that either \(a_1 = 0\) or \(a_2 = 0\), so this case yields no nondegenerate solutions (i.e. no solutions with \(\{g,1_S,j\}\) and \(\{1_S,h,\ell \}\) linearly independent).
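The simple calculations are the following: the (1,2) and (2,1) entries of the matrix product give \(d_1e_2 = 0\) and \(d_2e_1 = 0\), while the (1,1) and (2,2) entries give \(a_1 = d_1e_1\) and \(a_2 = d_2e_2\). Hence

```latex
a_1 a_2 = (d_1 e_1)(d_2 e_2) = (d_1 e_2)(d_2 e_1) = 0 ,
```

so at least one of \(a_1, a_2\) vanishes.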

In case (II) we have either \(\chi = 1_S\) or \(\chi ' = 1_S\). Suppose \(\chi ' \ne 1_S = \chi \). For \(\chi = 1_S\) Eq. (4) shows that \(\phi \) is additive. Writing \(\phi = A \in C(S)\) we have

$$\begin{aligned} f&= a_1\chi ' + a_2 + a_3A , \quad g = b_{1}\chi ' + b_{2} + b_{3}A, \quad h = c_{1}\chi ' + c_{2} + c_{3}A,\\ j&= d_{1}\chi ' + d_{2} + d_{3}A, \quad \ell = e_{1}\chi ' + e_{2} + e_{3}A, \end{aligned}$$

with additive \(A \ne 0\), and we get from condition (12) that

$$\begin{aligned} \begin{pmatrix} b_{1} &{}\quad 0 &{}\quad d_{1} \\ b_{2} &{}\quad 1 &{}\quad d_{2} \\ b_{3} &{}\quad 0 &{}\quad d_{3} \end{pmatrix}\begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 \\ c_{1} &{}\quad c_{2} &{}\quad c_{3} \\ e_{1} &{}\quad e_{2} &{}\quad e_{3} \end{pmatrix} = \begin{pmatrix} a_{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad a_{2} &{}\quad a_3 \\ 0 &{}\quad a_3 &{}\quad 0 \end{pmatrix}. \end{aligned}$$

Renaming \(\chi '\) as a new \(\chi \), this leads to solutions of the form (a) above. The condition \(c_3d_1e_1 \ne 0\) is dictated by the linear independence assumptions. If on the other hand \(\chi ' = 1_S \ne \chi \), then a similar calculation shows that there are no nondegenerate solutions.

Finally, in case (III) we have \(\chi = 1_S\), so \(\phi = A\) is additive and nonzero. Now (6) takes the form

$$\begin{aligned} \psi (xy) = \psi (x) + \psi (y) + A(x)A(y), \quad x,y \in S, \end{aligned}$$

and by Theorem 4.1 we have \(\psi = B + \frac{1}{2}A^2\) for some additive \(B\in C(S)\). Writing

$$\begin{aligned} f&= a_1 + a_2 A + a_3\left( \frac{1}{2}A^2 + B\right) , \quad g = b_{1} + b_{2}A + b_{3}\left( \frac{1}{2}A^2 + B\right) , \quad \\ h&= c_{1} + c_{2}A + c_{3}\left( \frac{1}{2}A^2 + B\right) ,\\ j&= d_{1} + d_{2}A + d_{3}\left( \frac{1}{2}A^2 + B\right) , \quad \ell = e_{1} + e_{2}A + e_{3}\left( \frac{1}{2}A^2 + B\right) , \end{aligned}$$

for additive \(A,B \in C(S)\) with \(A \ne 0\), where the constants satisfy

$$\begin{aligned} \begin{pmatrix} b_{1} &{}\quad 1 &{}\quad d_1 \\ b_{2} &{}\quad 0 &{}\quad d_2 \\ b_{3} &{}\quad 0 &{}\quad d_3 \end{pmatrix}\begin{pmatrix} 1 &{}\quad 0 &{}\quad 0 \\ c_{1} &{}\quad c_{2} &{}\quad c_{3} \\ e_1 &{}\quad e_2 &{}\quad e_3 \end{pmatrix} = \begin{pmatrix} a_{1} &{}\quad a_2 &{}\quad a_3 \\ a_2 &{}\quad a_3 &{}\quad 0 \\ a_3 &{}\quad 0 &{}\quad 0 \end{pmatrix}. \end{aligned}$$

This leads to solutions of the form (b), with the condition \(d_2e_2 \ne 0\) following from the linear independence assumptions.

The converse is easy to check. \(\square \)
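The converse can indeed be checked numerically. A sketch on the monoid \(((0,\infty ),\cdot )\), with the illustrative choices \(\chi (x) = x^2\), \(A = \log \), \(B = -\log \), and otherwise arbitrary constants subject only to the stated side conditions:

```python
import math
import random

def chi(x): return x ** 2        # an exponential on ((0, oo), *), chi != 1
def A(x): return math.log(x)     # additive
def B(x): return -math.log(x)    # additive

# ---- family (a): constants free; c3*d1*e1 != 0 only governs non-degeneracy
b, c2, c3 = 1.0, -2.0, 3.0
d1, d2, e1, e2 = 0.5, 4.0, 2.0, -1.0

def fa(x): return d1 * e1 * chi(x) + (b + c2 + d2 * e2) + c3 * A(x)
def ga(x): return -d1 * e2 * chi(x) + b + c3 * A(x)
def ha(x): return -d2 * e1 * chi(x) + c2 + c3 * A(x)
def ja(x): return d1 * chi(x) + d2
def la(x): return e1 * chi(x) + e2

# ---- family (b): constants must satisfy c2 + d1*e2 = b2 + d2*e1
b1, b2, c1 = 1.0, 2.0, -1.0
D1, D2, E1, E2 = 3.0, 2.0, 0.5, 1.5
C2 = b2 + D2 * E1 - D1 * E2      # enforce the side condition

def Q(x): return 0.5 * A(x) ** 2 + B(x)
def fb(x): return (b1 + c1 + D1 * E1) + (b2 + D2 * E1) * A(x) + D2 * E2 * Q(x)
def gb(x): return b1 + b2 * A(x) + D2 * E2 * Q(x)
def hb(x): return c1 + C2 * A(x) + D2 * E2 * Q(x)
def jb(x): return D1 + D2 * A(x)
def lb(x): return E1 + E2 * A(x)

def residuals(x, y):
    """Deviation of each family from f(xy) = g(x) + h(y) + j(x)l(y)."""
    ra = fa(x * y) - (ga(x) + ha(y) + ja(x) * la(y))
    rb = fb(x * y) - (gb(x) + hb(y) + jb(x) * lb(y))
    return ra, rb

for _ in range(500):
    x, y = random.uniform(0.2, 5.0), random.uniform(0.2, 5.0)
    ra, rb = residuals(x, y)
    assert abs(ra) < 1e-8 and abs(rb) < 1e-8
```

Family (a) satisfies (16) identically in the constants; in family (b) only the single linear relation between the constants is needed, mirroring the statement of the corollary.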

We close with contrasting examples illustrating the results of Theorem 5.1 on two different monoids, one with a very simple prime ideal structure and one with a very rich prime ideal structure.

In the first example there are very few prime ideals. Let \({\mathfrak {R}}(\alpha )\) denote the real part of \(\alpha \in {{\mathbb {C}}}\). Recall that the continuous exponentials on \(([0,1],\cdot )\) under the usual topology are \(m_0 = 1\) and \(m_{\alpha }\) defined by

$$\begin{aligned} m_\alpha (x) := {\left\{ \begin{array}{ll} x^{\alpha } &{} \text { for } x > 0 \\ 0 &{} \text { for } x = 0 \end{array}\right. } \end{aligned}$$

for any \(\alpha \in {{\mathbb {C}}}\) with \({\mathfrak {R}}(\alpha ) > 0\). Note also that the only additive function on \(([0,1],\cdot )\) is \(A = 0\), since \(A(0) = A(0\cdot x) = A(0)+ A(x)\) for all \(x \in [0,1]\). The continuous additive functions on \(((0,1],\cdot )\) have the form \(A(x) = c\log x\) for some \(c \in {{\mathbb {C}}}\).
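The multiplicativity of \(m_\alpha \), including at the absorbing element 0, is easy to verify numerically; the value \(\alpha = 1.5 + 2i\) below is an arbitrary choice with \({\mathfrak {R}}(\alpha ) > 0\):

```python
import cmath
import random

ALPHA = 1.5 + 2.0j  # any alpha with positive real part (illustrative)

def m_alpha(x):
    """The exponential m_alpha on ([0,1], *): x^alpha for x > 0, 0 at x = 0."""
    return cmath.exp(ALPHA * cmath.log(x)) if x > 0 else 0.0

# m_alpha(x*y) = m_alpha(x)*m_alpha(y), also when x or y is 0
for _ in range(500):
    x = random.choice([0.0, random.uniform(0.01, 1.0)])
    y = random.choice([0.0, random.uniform(0.01, 1.0)])
    assert abs(m_alpha(x * y) - m_alpha(x) * m_alpha(y)) < 1e-9
```

The condition \({\mathfrak {R}}(\alpha ) > 0\) is what makes \(m_\alpha \) continuous at 0, since then \(|x^\alpha | = x^{{\mathfrak {R}}(\alpha )} \rightarrow 0\) as \(x \rightarrow 0^+\).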

Example 6.3

Let \(S = ([0,1]\times [0,1],\cdot )\) with the product topology. The prime ideals of S are \(I_1 = \{0\} \times [0,1]\), \(I_2 = [0,1]\times \{0\}\), and \(I_1 \cup I_2\). The continuous exponentials on S have the form \(\chi (x,y) = m_1(x)m_2(y)\), denoted \(\chi = m_1 \otimes m_2\), where each \(m_j:[0,1] \rightarrow {{\mathbb {C}}}\) is either \(m_0\) or \(m_\alpha \) for some \({\mathfrak {R}}(\alpha ) > 0\).

Exponentials \(\chi \in C(S)\) and additive functions \(A\in C(S\setminus I_\chi )\) corresponding to each possible null ideal \(I_\chi \) are the following, where \(c_1,c_2,\alpha ,\beta \in {{\mathbb {C}}}\) with \({\mathfrak {R}}(\alpha ),{\mathfrak {R}}(\beta ) > 0\).

  1. (a)

    For \(I_\chi = \emptyset \), \(\chi = m_0\otimes m_0 = 1\), and \(A=0\).

  2. (b)

    For \(I_\chi = I_1\), \(\chi = m_{\alpha }\otimes m_0\), and \(A(x,y) = c_1\log x\) for all \(x \in (0,1]\) and \(y \in [0,1]\).

  3. (c)

    For \(I_\chi = I_2\), \(\chi = m_0\otimes m_{\alpha }\), and \(A(x,y) = c_2\log y\) for all \(x \in [0,1]\) and \(y \in (0,1]\).

  4. (d)

    For \(I_\chi = I_1 \cup I_2\), \(\chi = m_{\alpha }\otimes m_{\beta }\), and \(A(x,y) = c_1\log x + c_2\log y\) for all \(x,y \in (0,1]\).

Note that in each case (a)–(d) we have \(I_\chi ^2 = I_\chi \), so \(P_\chi = K_{\chi } = L_{\chi } = \emptyset \) and \(J_\chi = I_\chi \). Thus the forms of \(\phi ,\psi \in C(S)\) given by Corollary 3.2 and Theorem 4.1 simplify to

$$\begin{aligned} \phi (x,y) = {\left\{ \begin{array}{ll} A(x,y)\chi (x,y) &{} \text { for } (x,y) \in S\setminus I_\chi \\ 0 &{} \text { for } (x,y) \in I_\chi \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} \psi (x,y) = {\left\{ \begin{array}{ll} [\frac{1}{2}A^2(x,y) + B(x,y)]\chi (x,y) &{} \text { for } (x,y) \in S\setminus I_\chi \\ 0 &{} \text { for } (x,y) \in I_\chi \end{array}\right. } \end{aligned}$$

for additive \(A,B \in C(S\setminus I_\chi )\).

The solutions of (8) (under the accompanying linear independence assumptions) are obtained by using the appropriate forms above in the formulas of Theorem 5.1.
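In case (b), for instance, the simplified forms of \(\phi \) and \(\psi \) can be verified directly against (4) and (6). The parameter choices below (\(\alpha = 2\), \(A(x,y) = \log x\), i.e. \(c_1 = 1\), and \(B(x,y) = 3\log x\)) are illustrative only:

```python
import math
import random

ALPHA = 2.0  # Re(alpha) > 0

def chi(s):
    """chi = m_alpha tensor m_0, with null ideal I_1 = {0} x [0,1]."""
    x, y = s
    return x ** ALPHA if x > 0 else 0.0

def A(s): return math.log(s[0])        # additive on S \ I_1 (c1 = 1)
def B(s): return 3.0 * math.log(s[0])  # another additive function

def phi(s):
    return A(s) * chi(s) if s[0] > 0 else 0.0

def psi(s):
    return (0.5 * A(s) ** 2 + B(s)) * chi(s) if s[0] > 0 else 0.0

def mul(s, t):
    return (s[0] * t[0], s[1] * t[1])

def sample():
    x = random.choice([0.0, random.uniform(0.05, 1.0)])
    return (x, random.uniform(0.0, 1.0))

for _ in range(500):
    s, t = sample(), sample()
    st = mul(s, t)
    # sine addition law (4)
    assert abs(phi(st) - (phi(s) * chi(t) + chi(s) * phi(t))) < 1e-9
    # equation (6)
    assert abs(psi(st) - (psi(s) * chi(t) + chi(s) * psi(t)
                          + phi(s) * phi(t))) < 1e-9
```

Since \(I_\chi ^2 = I_\chi \) here, no separate cases on \(P_\chi \) or \(K_\chi \) arise: the functions are simply \(A\chi \) and \((\tfrac{1}{2}A^2+B)\chi \) off the null ideal and 0 on it.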

For the second example we choose the commutative monoid \(S = ({{\mathbb {N}}},\cdot )\), which has infinitely many prime ideals. Let P denote the set of prime numbers, and for each \(p\in P\) define the function \(C_p: S \rightarrow {{\mathbb {N}}}\cup \{0\}\) by

$$\begin{aligned} C_p(x) := \text { the number of copies of }p\text { in the prime factorization of }x. \end{aligned}$$

Then \(C_p\) is additive for each \(p\in P\), and each \(x\in S\) has prime factorization \(x = \prod _{p\in P}p^{C_p(x)}\).
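The additivity of \(C_p\) is just the statement that prime multiplicities add under multiplication; a direct sketch:

```python
import random

def C(p, x):
    """Multiplicity of the prime p in the factorization of x."""
    n = 0
    while x % p == 0:
        x //= p
        n += 1
    return n

# C_p(x*y) = C_p(x) + C_p(y) for every prime p
for p in (2, 3, 5, 7):
    for _ in range(200):
        x, y = random.randint(1, 10**6), random.randint(1, 10**6)
        assert C(p, x * y) == C(p, x) + C(p, y)
```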

Example 6.4

Let \(S = ({{\mathbb {N}}},\cdot )\) under the discrete topology. For any exponential \(\chi :S \rightarrow {{\mathbb {C}}}\) we have \(\chi (x) = \chi (\prod _{p\in P}p^{C_p(x)}) = \prod _{p\in P}\chi (p)^{C_p(x)}\), so \(\chi \) is completely determined by its values on P. The prime ideals of S are of the form

$$\begin{aligned} I_Q := \bigcup _{p \in Q} p{{\mathbb {N}}}\end{aligned}$$

where Q is any nonempty subset of P.

An additive function \(A:S\setminus I_{\chi }\rightarrow {{\mathbb {C}}}\) has the form \(A(x) = \sum _{p \in P\setminus I_\chi } C_p(x)A(p)\), where the values of A on \(P\setminus I_\chi \) may be chosen arbitrarily.

Next we determine the forms of the subsets \(P_\chi , J_{\chi }, K_{\chi }, L_{\chi } \subset I_\chi \) for a given exponential \(\chi \). The definitions of these sets simplify here because S is commutative. Suppose \(\chi :S \rightarrow {{\mathbb {C}}}\) is an exponential with \(I_\chi = I_Q\) for some \(\emptyset \ne Q \subset P\). By definition \(x \in I_Q\) if and only if there exist \(p \in Q\) and \(z \in S\) such that \(x = pz\). Similarly \(x \in I_Q^n\) for some positive integer n if and only if there exist \(p_1,\ldots , p_n \in Q\) and \(z \in S\) such that \(x = p_1 \cdots p_nz\). It is not difficult to verify that

$$\begin{aligned} P_\chi = I_Q \setminus I_Q^2, \quad I_\chi \setminus P_\chi = I_Q^2, \quad J_{\chi } = I_Q^3, \quad I_\chi \setminus J_{\chi } = I_Q \setminus I_Q^3. \end{aligned}$$

Thus we have

$$\begin{aligned} K_{\chi } = \{x \in I_Q^2 \mid ux \in I_Q \setminus I_Q^3 \text { for all } u\in S\setminus I_Q \} = I_Q^2 \setminus I_Q^3, \end{aligned}$$

and \( L_{\chi } = \emptyset \).

By Corollary 3.2 the form of nonzero \(\phi :S \rightarrow {{\mathbb {C}}}\) satisfying (4) is given by (3), where \(A: S\setminus I_{\chi } \rightarrow {{\mathbb {C}}}\) is additive and \(\phi _P:P_\chi \rightarrow {{\mathbb {C}}}\) is any function such that conditions (i) and (ii) of Proposition 3.1(b) hold. It is not difficult to work out that

$$\begin{aligned} \phi (x) = \left\{ \begin{array}{ll} A(x)\chi (x) &{} \text { for } x\in S\setminus I_Q\\ \phi _Q(p)\chi (z) &{} \text { for } x = pz \in I_Q \setminus I_Q^2 \quad (p \in Q, z \in S\setminus I_Q) \\ 0 &{} \text { for } x \in I_Q^2 \end{array}\right. \end{aligned}$$

where \(\phi _Q:Q \rightarrow {{\mathbb {C}}}\) is an arbitrary function.

By Theorem 4.1 the form of \(\psi :S \rightarrow {{\mathbb {C}}}\) satisfying (6) with a given nonzero solution \(\phi \) of (4) is (7), where \(B: S\setminus I_{\chi } \rightarrow {{\mathbb {C}}}\) is additive and \(\psi _P:P_\chi \rightarrow {{\mathbb {C}}}\), \(\psi _K :K_\chi \rightarrow {{\mathbb {C}}}\) are functions such that conditions (a)–(e) hold. After a few straightforward calculations we arrive at

$$\begin{aligned} \psi (x) = \left\{ \begin{array}{ll} \left[ B(x) + \frac{1}{2}A^2(x)\right] \chi (x) &{} \text {for } x\in S\setminus I_Q \\ \left[ \psi _Q(p) + \phi _Q(p)A(z)\right] \chi (z) &{} \text { for } x = pz \in I_Q \setminus I_Q^2 \quad (p \in Q, z \in S\setminus I_Q)\\ \phi _Q(p_1)\phi _Q(p_2)\chi (z) &{} \text { for } x = p_1p_2z \in I_Q^2 \setminus I_Q^3 \quad (p_j \in Q, z \in S\setminus I_Q) \\ 0 &{} \text { for } x \in I_Q^3 \end{array}\right. \end{aligned}$$

where \(\psi _Q:Q \rightarrow {{\mathbb {C}}}\) is an arbitrary function. Here we have applied condition (d) for \(p_1,p_2 \in Q\) to get \(\psi _K(p_1p_2) = \phi _Q(p_1)\phi _Q(p_2)\).

The solutions of (8) (under the accompanying linear independence assumptions) are obtained by substituting the appropriate forms above into the formulas of Theorem 5.1.
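These formulas can be exercised concretely. A sketch with \(Q = \{2\}\) (so \(I_Q = 2{{\mathbb {N}}}\)) and the illustrative, freely chosen data \(\chi (x) = x\) on odd numbers, \(A = \log \) and \(B = 2\log \) on \(S\setminus I_Q\), \(\phi _Q(2) = 3\), \(\psi _Q(2) = 5\); the sine addition law (4) and Eq. (6) then hold by construction:

```python
import math

PHI_Q, PSI_Q = 3.0, 5.0  # arbitrary values of phi_Q and psi_Q at p = 2

def c2(x):
    """C_2(x): multiplicity of 2 in x."""
    n = 0
    while x % 2 == 0:
        x //= 2
        n += 1
    return n

def chi(x):
    """Exponential with null ideal I_Q = 2N: chi(x) = x on odd x, else 0."""
    return float(x) if x % 2 else 0.0

def A(z): return math.log(z)      # additive on the odd numbers
def B(z): return 2 * math.log(z)  # another additive function

def phi(x):
    k, z = c2(x), x >> c2(x)      # z is the odd part of x
    if k == 0:
        return A(x) * chi(x)
    if k == 1:
        return PHI_Q * chi(z)
    return 0.0                    # x in I_Q^2

def psi(x):
    k, z = c2(x), x >> c2(x)
    if k == 0:
        return (B(x) + 0.5 * A(x) ** 2) * chi(x)
    if k == 1:
        return (PSI_Q + PHI_Q * A(z)) * chi(z)
    if k == 2:
        return PHI_Q * PHI_Q * chi(z)
    return 0.0                    # x in I_Q^3

# verify (4) and (6) on a sample of the monoid (N, *)
for x in range(1, 41):
    for y in range(1, 41):
        assert math.isclose(phi(x * y),
                            phi(x) * chi(y) + chi(x) * phi(y),
                            rel_tol=1e-9, abs_tol=1e-9)
        assert math.isclose(psi(x * y),
                            psi(x) * chi(y) + chi(x) * psi(y)
                            + phi(x) * phi(y),
                            rel_tol=1e-9, abs_tol=1e-9)
```

Changing \(\phi _Q(2)\), \(\psi _Q(2)\), or the values of A and B on the odd primes leaves these checks valid, reflecting the freedom in the formulas above.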