1 Introduction

In this paper S denotes a semigroup. If S is a topological semigroup then C(S) denotes the algebra of continuous functions mapping S into \({\mathbb C}\). In general we give the continuous solutions of our functional equations on topological semigroups, but the discrete topology is allowed.

A function \(A:S \rightarrow {\mathbb C}\) is said to be additive if \(A(xy) = A(x) + A(y)\) for all \(x,y \in S\), and \(M:S \rightarrow {\mathbb C}\) is called multiplicative if \(M(xy) = M(x)M(y)\) for all \(x,y \in S\). If M is multiplicative and \(M \ne 0\) then we call M an exponential. Additive functions and multiplicative functions are considered to be fundamental as they are homomorphisms of S into \(({\mathbb C},+)\) and \(({\mathbb C},\cdot )\), respectively. They are used routinely to describe solutions of other functional equations. Here we take a similar view of solutions \(\sigma ,\kappa :S \rightarrow {\mathbb C}\) of the sine addition law

$$\begin{aligned} \sigma (xy) = \sigma (x)\kappa (y) + \kappa (x)\sigma (y), \quad \text { for all } x,y \in S, \end{aligned}$$
(1)

for \(\kappa \) multiplicative, and we use them to describe solutions of other functional equations on semigroups. We shall refer to a pair \((\sigma ,\kappa )\) satisfying (1) as a sine pair.
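For readers who want a concrete instance: on \(S = ({\mathbb N}, +)\) the pair \(\sigma (x) = x b^x\), \(\kappa (x) = b^x\) is a sine pair with \(\kappa \) multiplicative. The following numerical sketch (with a base \(b\) of our own choosing; it is an illustration, not part of the paper's argument) checks (1) on a grid:

```python
# Sketch: on S = (N, +), kappa(x) = b**x is multiplicative and
# sigma(x) = x * b**x forms a sine pair with it, since
# (x+y)*b**(x+y) = (x*b**x)*b**y + b**x*(y*b**y).
b = 3.0  # our choice of base; any nonzero complex b works

def kappa(x):
    return b ** x

def sigma(x):
    return x * b ** x

# verify sigma(x+y) == sigma(x)*kappa(y) + kappa(x)*sigma(y)
for x in range(1, 6):
    for y in range(1, 6):
        lhs = sigma(x + y)
        rhs = sigma(x) * kappa(y) + kappa(x) * sigma(y)
        assert abs(lhs - rhs) < 1e-6 * abs(lhs)
```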

Our main objective is to solve the cosine–sine functional equation

$$\begin{aligned} f(xy) = f(x)g(y) + g(x)f(y) + h(x)h(y), \quad \text { for all } x,y \in S, \end{aligned}$$
(2)

where \(f,g,h: S \rightarrow {\mathbb C}\). This equation was solved by Chung, Kannappan, and Ng [2] for the case that S is a group. In that setting all solutions can be expressed in terms of additive functions and multiplicative functions. Their result was extended by Ajebbar and Elqorachi [1] to semigroups generated by their squares, and by the author [3] to a larger class of semigroups. In the setting of semigroups there exist solutions that cannot be described using only additive and multiplicative functions.

In [3] we found that a critical role is played by the functional equation

$$\begin{aligned} \psi (xy) = \psi (x)M(y) + M(x)\psi (y) + \sigma (x)\sigma (y), \quad x,y \in S, \end{aligned}$$
(3)

for \(\psi ,M,\sigma :S \rightarrow {\mathbb C}\), where M is multiplicative and \((\sigma , M)\) is a sine pair. Clearly (3) is a special case of (2). The ability to solve (3) when M is multiplicative and \((\sigma , M)\) is a sine pair allows us to find the solution of (2) on all semigroups.

The outline of the paper is as follows. The next section lays some preliminary groundwork, including the solutions of (1) and (3) on semigroups (Proposition 2.2, resp. Proposition 2.4 and Lemma 2.5). Section 3 contains further lemmas to support the solution of (2) on a general semigroup. The final section contains the main result (Theorem 4.1) and an example illustrating some new types of solutions on a semigroup that is not covered by previous results.


2 Preliminaries

Some inspiration for this article comes from a pair of papers by Henrik Stetkær in which he described the solutions of two functional equations in terms of multiplicative functions and solutions of the sine addition law (1). In [7, Theorem 6.1] he took this approach for the cosine addition law

$$\begin{aligned} g(xy) = g(x)g(y) - f(x)f(y), \quad x,y \in S. \end{aligned}$$

(At the time of publication the general solution of (1) on semigroups was not yet known.) In [8, Theorem 5.1] he took the same approach for the functional equation

$$\begin{aligned} f(xy) = f(x)g(y) + g(x)f(y) - g(x)g(y), \quad x,y \in S. \end{aligned}$$

Now the solution of (1) is known and is described in the next proposition. For any subset \(T \subseteq S\) let \(T^2:= \{xy \mid x,y \in T\}\). For any multiplicative function \(M:S \rightarrow {\mathbb C}\), let

$$\begin{aligned} I_M&:= \{x \in S \mid M(x) = 0 \},\\ P_M&:= \{p \in I_M\setminus I_M^2 \mid up,pv,upv \in I_M\setminus I_M^2 \text { for all } u,v \in S\setminus I_M\}. \end{aligned}$$

Remark 2.1

Note that if \(u \in S\setminus I_M\) and \(p \in P_M\), then \(up,pu \in P_M\).

Let \({\mathbb C}^*:= {\mathbb C}\setminus \{0\}\). The following is [4, Theorem 3.1]. It omits the trivial observation that if \(\sigma = 0\) in (1) then \(\kappa \) is arbitrary.

Proposition 2.2

Let S be a topological semigroup, and let \((\sigma ,\kappa )\) be a sine pair on S with \(\sigma \in C(S)\) and \(\sigma \ne 0\). Then \(\kappa \in C(S)\) and we have one of the following cases, where \(M_1,M_2\in C(S)\) are multiplicative functions such that \(\kappa = (M_1 + M_2)/2\).

  1. (A)

    For \(M_1 \ne M_2\) we have \(\sigma = c(M_1 - M_2)\) for some \(c\in {\mathbb C}^*\).

  2. (B)

    For \(M_1 = M_2 \ne 0\) we define \(M:= M_1 = M_2\). Then \(\kappa = M\) and there exist an additive function \(A\in C(S{\setminus } I_{M})\) and a function \(\sigma _P \in C(P_M)\) with either \(A\ne 0\) or \(\sigma _P \ne 0\) such that

    $$\begin{aligned} \sigma (x) = {\left\{ \begin{array}{ll} A(x)M(x) &{} \text { for } x\in S\setminus I_M \\ \sigma _P(x) &{} \text { for } x \in P_M \\ 0 &{} \text { for } x \in I_M \setminus P_M \end{array}\right. } \end{aligned}$$

    and such that the following statements (i) and (ii) hold.

    1. (i)

      \(\sigma (qu) = \sigma (uq) = 0\) for all \(q \in I_M {\setminus } P_M\) and \(u\in S{\setminus } I_M\).

    2. (ii)

      If \(x \in \{up, pv, upv\}\) for \(p \in P_M\) and \(u,v\in S{\setminus } I_M\), then we have respectively \(\sigma _P(x) = \sigma _P(p)M(u)\), \(\sigma _P(x) = \sigma _P(p)M(v)\), or \(\sigma _P(x) = \sigma _P(p)M(uv)\).

  3. (C)

    For \(M_1 = M_2 = 0\) we have \(\kappa = 0\), \(S \ne S^2\), and \(\sigma \) has the form

    $$\begin{aligned} \sigma (x) = {\left\{ \begin{array}{ll} \sigma _0(x) &{} \text { for } x\in S\setminus S^2 \\ 0 &{} \text { for } x \in S^2 \end{array}\right. } \end{aligned}$$
    (4)

    for some nonzero \(\sigma _0\in C(S\setminus S^2)\).

Conversely, in each case \((\sigma ,\kappa )\) is a sine pair with \(\sigma \ne 0\).

Clearly there is a lot of information packed into this result, especially in case (B). As observed in [4], \(\sigma _P\) may take arbitrary values at some, none, or all points of \(P_M\), depending on S and M. So although Proposition 2.2 describes the solutions of (1) in detail, the complexity of the description suggests that it may be desirable to have a short way to refer to such functions. Furthermore, although case (C) may seem rather degenerate, such solutions exist on many semigroups. A simple example is \(S = ({\mathbb N}, +)\), where the element 1 cannot be written as the sum of two elements, so \(S \ne S^2\) (here \(S^2 = S + S\)). Defining \(\sigma (1)\) to be any nonzero complex number and \(\sigma (n) = 0\) for all \(n \ge 2\), we obtain a solution of (1) with \(\kappa = 0\) but \(\sigma \ne 0\).
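The \(({\mathbb N},+)\) example just described can be verified mechanically. A minimal sketch, with an arbitrary nonzero value of our own choosing for \(\sigma (1)\):

```python
# Quasi-trivial sine function on S = (N, +):
# sigma(1) = a (any nonzero value), sigma(n) = 0 for n >= 2, kappa = 0.
a = 2.5  # our arbitrary nonzero choice

def sigma(n):
    return a if n == 1 else 0.0

def kappa(n):
    return 0.0

# Every sum x + y lies in S^2 = {2, 3, ...}, so both sides of (1) vanish.
for x in range(1, 8):
    for y in range(1, 8):
        assert sigma(x + y) == sigma(x) * kappa(y) + kappa(x) * sigma(y) == 0.0
```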

That motivates the following definitions.

Definition 2.3

  1. (a)

    If \((\sigma , M)\) is a sine pair on S with \(\sigma \ne 0\) and M exponential, then we say that \((\sigma , M)\) is a sine-exponential pair on S. Such pairs are described in Proposition 2.2 (B).

  2. (b)

    If \((\sigma , 0)\) is a sine pair on S with \(\sigma \ne 0\), then we say that \(\sigma \) is a quasi-trivial sine function on S. The form of such functions is described in Proposition 2.2 (C). (Clearly such functions exist only if \(S \ne S^2\).)

Next we turn our attention to (3). For any exponential \(M:S \rightarrow {\mathbb C}\) we define

$$\begin{aligned} J_{M}&:= I_M(I_M \setminus P_M) \cup (I_M \setminus P_M)I_M,\\ K_{M}&:= \{x \in I_M \setminus P_M \mid ux, xv, uxv \in I_{M}\setminus J_{M} \text { for all } u,v\in S\setminus I_{M} \}. \end{aligned}$$

The following is [6, Proposition 2.3].

Proposition 2.4

Let S be a topological semigroup, let \(M\in C(S)\) be an exponential, and let \((\sigma , M)\) be a continuous sine-exponential pair on S, say \(\sigma = AM\) on \(S\setminus I_M\) with \(A\in C(S{\setminus } I_M)\) additive as in Proposition 2.2 (B). If \(\psi \in C(S)\) is a solution of (3), then there exist an additive \(B\in C(S{\setminus } I_M)\) and functions \(\psi _P \in C(P_M)\), \(\psi _K \in C(K_M)\) such that

$$\begin{aligned} \psi (x) = {\left\{ \begin{array}{ll} [B(x) + \frac{1}{2}A^2(x)]M(x) &{} \text { for } x\in S\setminus I_M \\ \psi _P(x) &{} \text { for } x \in P_M \\ \psi _K(x) &{} \text { for } x \in K_M \\ 0 &{} \text { for } x \in I_M \setminus (P_M \cup K_M) \end{array}\right. } \end{aligned}$$
(5)

and the following conditions (i)–(iv) hold.

  1. (i)

    For all \(u \in S\setminus I_M\) and \(x\in P_M\) we have \(\psi _P(ux) = \psi _P(xu) = [\psi _P(x) + \sigma (x)A(u)]M(u)\).

  2. (ii)

    For all \(u \in S\setminus I_M\) and \(x\in K_M\) we have \(\psi (xu) = \psi (ux) = \psi _K(x)M(u)\).

  3. (iii)

    For all \(u \in S\setminus I_M\) and \(x\in I_M {\setminus } (P_M \cup K_M)\) we have \(\psi (xu) = \psi (ux) = 0\).

  4. (iv)

    For all \(x,y\in P_M\) we have \(\psi (xy) = \psi (yx) = \sigma (x)\sigma (y)\).

Conversely, if \(\psi :S \rightarrow {\mathbb C}\) has the form (5) with additive \(B:S\setminus I_M\rightarrow {\mathbb C}\) and conditions (i)–(iv) hold, then \(\psi \) satisfies (3).

Here the description of solutions is even more complicated than that of sine-exponential pairs. Again there is the possibility that parts of the solution (namely \(\psi _P\) and \(\psi _K\)) may take arbitrary values at some, none, or all points of their respective domains.
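On \(S = ({\mathbb N},+)\) every exponential \(M(x) = b^x\) vanishes nowhere, so \(I_M = \emptyset \) and (5) reduces to its first line. The following numerical sketch (parameter values are our own choices; it illustrates, but does not replace, the proof) checks that \(\psi = (B + \frac{1}{2}A^2)M\) then solves (3) with \(\sigma = AM\):

```python
# Sketch of (5) on S = (N, +), where I_M is empty:
# psi = (B + A**2/2) * M solves (3) with sigma = A * M.
b, cA, cB = 2.0, 3.0, -1.5   # base of M and slopes of A, B (our choices)

def M(x):      return b ** x
def A(x):      return cA * x          # additive: A(x+y) = A(x) + A(y)
def B(x):      return cB * x          # additive as well
def sigma(x):  return A(x) * M(x)     # sine-exponential pair (sigma, M)
def psi(x):    return (B(x) + 0.5 * A(x) ** 2) * M(x)

# verify psi(x+y) == psi(x)M(y) + M(x)psi(y) + sigma(x)sigma(y)
for x in range(1, 6):
    for y in range(1, 6):
        lhs = psi(x + y)
        rhs = psi(x) * M(y) + M(x) * psi(y) + sigma(x) * sigma(y)
        assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```

The key identity behind the check is \(\frac{1}{2}(A(x)+A(y))^2 = \frac{1}{2}A(x)^2 + \frac{1}{2}A(y)^2 + A(x)A(y)\), which produces exactly the cross term \(\sigma (x)\sigma (y)\).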

Now we complete the discussion of (3) for M multiplicative and \((\sigma , M)\) a sine pair. Proposition 2.4 covers the case that M is exponential. The case \(\psi = 0\) is trivial, since then \(\sigma = 0\) and M is an arbitrary multiplicative function. What remains is the case \(M = 0\) and \(\psi \ne 0\).

Lemma 2.5

Suppose \(\psi ,\sigma :S \rightarrow {\mathbb C}\) satisfy (3) with \(M = 0\), where \(\psi \ne 0\) and \((\sigma , 0)\) is a sine pair.

  1. (A)

    If \(\sigma = 0\) then \(\psi \) is a quasi-trivial sine function.

  2. (B)

    If \(\sigma \ne 0\) (i.e. \(\sigma \) is a quasi-trivial sine function), then

    $$\begin{aligned} \psi (x) = {\left\{ \begin{array}{ll} \psi _0(x) &{} \text { for } x \in S\setminus S^2 \\ \sigma (s)\sigma (t) &{} \text { for } x = st \text { with } s,t \in S\setminus S^2 \\ 0 &{} \text { for } x = stu \text { with } s,t,u \in S \end{array}\right. } \end{aligned}$$
    (6)

    for an arbitrary \(\psi _0:S \setminus S^2 \rightarrow {\mathbb C}\).

We observe that \(S \ne S^2\) in both cases.

Proof

(A) If \(\sigma = 0\) then (3) reduces to \(\psi (xy) = 0\) for all \(x,y \in S\). Since \(\psi \ne 0\) we see that \(\psi \) is a quasi-trivial sine function. (B) Suppose \(\sigma \ne 0\), so \(\sigma \) is a quasi-trivial sine function. If \(x \in S\setminus S^2\) then (3) gives no information about \(\psi (x)\), so we have the first case of (6) for an arbitrary \(\psi _0:S \setminus S^2 \rightarrow {\mathbb C}\). Secondly, if \(x = st\) with \(s,t \in S\setminus S^2\) then (3) immediately gives the second case of (6). Finally, if \(x = stu\) for some \(s,t,u \in S\), then by (3) we have \(\psi (x) = \sigma (s)\sigma (tu) = 0\) since \(tu \in S^2\). That completes (6). \(\square \)
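On \(S = ({\mathbb N}, +)\), where \(S\setminus S^2 = \{1\}\), form (6) can be checked directly. A numerical sketch with parameter values of our own choosing:

```python
# Lemma 2.5(B) on S = (N, +): psi(1) is free, psi(2) = sigma(1)**2,
# and psi(n) = 0 for n >= 3.
a, p0 = 2.0, 7.0   # sigma(1) and the free value psi_0(1) (our choices)

def sigma(n):
    return a if n == 1 else 0.0

def psi(n):
    if n == 1:
        return p0
    if n == 2:
        return a * a
    return 0.0

# (3) with M = 0 reads psi(x + y) = sigma(x) * sigma(y)
for x in range(1, 8):
    for y in range(1, 8):
        assert psi(x + y) == sigma(x) * sigma(y)
```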

We introduce the following definitions.

Definition 2.6

  1. (a)

    If \(\psi , M, \sigma :S \rightarrow {\mathbb C}\) satisfy (3) where \((\sigma , M)\) is a sine-exponential pair, then we say that \((\psi , M, \sigma )\) is a cosine–sine-exponential triple on S. The forms of such triples are described in Proposition 2.4.

  2. (b)

    Suppose \(\psi , M, \sigma :S \rightarrow {\mathbb C}\) satisfy (3) where \(M=0\), \(\sigma \) is a quasi-trivial sine function, and \(\psi \ne 0\). Then we say that \((\psi , \sigma )\) is a quasi-trivial cosine–sine pair on S. The forms of such pairs are described in Lemma 2.5 (B).

3 Preparations for the Solution of (2)

This section contains some lemmas to be used in the solution of (2). We start with the following, which is part of [2, Lemma 4] (though stated for groups, the same proof works for semigroups).

Lemma 3.1

If \(f,h:S \rightarrow {\mathbb C}\) satisfy

$$\begin{aligned} f(xy) = f(x) f(y) + h(x)h(y), \quad x,y \in S, \end{aligned}$$

then there exists \(\alpha \in {\mathbb C}\) for which

$$\begin{aligned} h(xy) = h(x) f(y) + f(x)h(y) + \alpha h(x)h(y), \quad x,y \in S. \end{aligned}$$

We will need some linear independence facts.

Lemma 3.2

Let S be a semigroup.

  1. (a)

    The set of exponentials on S is linearly independent.

  2. (b)

    Suppose \(M, M': S\rightarrow {\mathbb C}\) are distinct exponentials and \((\psi , M, \sigma )\) is a cosine–sine-exponential triple on S. Then \(\{M', M, \sigma \}\) and \(\{M, \sigma , \psi \}\) are linearly independent.

  3. (c)

    Suppose \(M':S \rightarrow {\mathbb C}\) is an exponential and \((\psi ,\sigma )\) is a quasi-trivial cosine–sine pair on S. Then \(\{M', \psi , \sigma \}\) is linearly independent.

Proof

For (a) and (b) see [5, Lemma 4.2].

For (c) suppose \(M'\) is an exponential, \((\psi ,\sigma )\) is a quasi-trivial cosine–sine pair, and there exist constants \(a,b,c \in {\mathbb C}\) such that \(aM' + b\psi + c\sigma = 0\). (Recall that such pairs \((\psi ,\sigma )\) satisfy (3) with \(M=0\) and \(\psi \ne 0 \ne \sigma \).) Then for all \(x,y \in S\) we have

$$\begin{aligned} 0 =&\,aM'(xy) + b\psi (xy) + c\sigma (xy)\nonumber \\ =&\,aM'(x)M'(y) + b\sigma (x)\sigma (y), \end{aligned}$$
(7)

using (3) and \(\sigma (xy) = 0\). Putting \(x=uv\) for \(u,v\in S\) in the preceding equation we get

$$\begin{aligned} 0 = aM'(uv)M'(y) + b\sigma (uv)\sigma (y) = aM'(u)M'(v)M'(y), \quad u,v,y \in S, \end{aligned}$$

since \(\sigma (uv) = 0\). Thus \(a = 0\) since \(M' \ne 0\). From (7) we now get \(b=0\) because \(\sigma \ne 0\). Therefore \(c=0\) too.\(\square \)

The next lemma generalizes [3, Lemma 4.9] and plays a key role in solving (2).

Lemma 3.3

Let S be a topological semigroup, let \(M\in C(S)\) be multiplicative, let \(\delta \in {\mathbb C}\), and suppose \(f,h \in C(S)\) satisfy

$$\begin{aligned} f(xy) = f(x)M(y) + M(x)f(y) + h(x)h(y), \quad x,y \in S, \end{aligned}$$
(8)

where \((h, M + \frac{\delta }{2}h)\) is a sine pair and \(\{f,h\}\) is linearly independent.

If \(\delta \ne 0\) then there exist a multiplicative \(M'\in C(S)\) with \(M' \ne M\) and a nonzero \(\sigma \in C(S)\) such that

$$\begin{aligned} h = \delta ^{-1}(M' - M), \qquad f = \sigma + \delta ^{-2}(M' - M). \end{aligned}$$
(9)

In this case if \(M\ne 0\) then \((\sigma , M)\) is a sine-exponential pair, and if \(M=0\) then \(\sigma \) is a quasi-trivial sine function.

If \(\delta = 0\) then there are again two possibilities. If \(M \ne 0\) then \((h, M)\) is a sine-exponential pair and \((f, M, h)\) is a cosine–sine-exponential triple. If \(M = 0\) then h is a quasi-trivial sine function and \((f, h)\) is a quasi-trivial cosine–sine pair.

Proof

Since \(\{f,h\}\) is linearly independent we have \(f\ne 0\) and \(h\ne 0\). We divide the proof into two parts: \(\delta \ne 0\) and \(\delta = 0\). In each part we apply Proposition 2.2 to the sine pair \((h, M + \frac{\delta }{2}h)\) and consider the resulting cases (A), (B), (C).

Part 1: Suppose \(\delta \ne 0\). In case (A) we have \(h = c(M_1 - M_2)\) and \(M + \frac{\delta }{2}h = \frac{1}{2}(M_1 + M_2)\), for distinct multiplicative \(M_1,M_2\in C(S)\) and \(c\in {\mathbb C}^*\). Eliminating h from these two equations we find that

$$\begin{aligned} 0 = M + \frac{c\delta - 1}{2}M_1 - \frac{c\delta + 1}{2}M_2. \end{aligned}$$

By Lemma 3.2(a) this implies that \(c\delta = \pm 1\) and either \(M_1\) or \(M_2\) coincides with M. Without loss of generality suppose \(c\delta = 1\) and \(M_2 = M\). By (8) we have

$$\begin{aligned} f(xy) = f(x)M(y) + M(x)f(y) + \delta ^{-2}(M_1 - M)(x)(M_1 - M)(y), \end{aligned}$$

and defining \(M':= M_1\) we can write this as

$$\begin{aligned} f(xy)&- \delta ^{-2}M'(xy) + \delta ^{-2} M(xy) \\&= [f(x) - \delta ^{-2}M'(x) + \delta ^{-2} M(x)]M(y) + M(x)[f(y) \\&\quad - \delta ^{-2} M'(y) + \delta ^{-2} M(y)] \end{aligned}$$

for all \(x,y \in S\). Thus \((f - \delta ^{-2}(M' - M), M)\) is a sine pair. Moreover \(f - \delta ^{-2}(M' - M) \ne 0\) by the independence of \(\{f,h\}\), since \(h = \delta ^{-1}(M' - M)\). Defining \(\sigma := f - \delta ^{-2}(M' - M)\) we have (9) with \(\sigma \ne 0\). For the sine pair \((\sigma , M)\) we have two possibilities. If \(M\ne 0\) then \((\sigma ,M)\) is a sine-exponential pair, and if \(M=0\) then \(\sigma \) is a quasi-trivial sine function.

In case (B) we have \(M + \frac{\delta }{2}h = M'\) for some exponential \(M'\in C(S)\), and \((h, M')\) is a sine-exponential pair. Putting \(c:= 2/\delta \) we have \(h = c(M' - M)\), so

$$\begin{aligned} c(M' - M)(xy)&= c(M' - M)(x)M'(y) + M'(x)c(M' - M)(y)\\&= 2cM'(xy) - cM(x)M'(y) - cM'(x)M(y) \end{aligned}$$

for all \(x,y \in S\). That is,

$$\begin{aligned} 0 = c[M'(x) - M(x)][M'(y) - M(y)], \quad x,y \in S, \end{aligned}$$

which is impossible since \(c \ne 0\) and \(M' \ne M\). Therefore this case is excluded.

In case (C) we have \(M + \frac{\delta }{2}h = 0\) and h is a quasi-trivial sine function. From \(0 = (M + \frac{\delta }{2}h)(xy) = M(xy) = M(x)M(y)\) for all \(x,y \in S\) we get \(M=0\), and since \(\delta \ne 0\) and \(h \ne 0\) this case is impossible.

Part 2: Suppose \(\delta = 0\), so \((h, M)\) is a sine pair. If \(M \ne 0\) then \((h, M)\) is a sine-exponential pair and \((f, M, h)\) is a cosine–sine-exponential triple. If \(M = 0\) then h is a quasi-trivial sine function and (8) reduces to \(f(xy) = h(x)h(y)\) for all \(x,y \in S\). Thus \((f, h)\) is a quasi-trivial cosine–sine pair.\(\square \)
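For \(\delta \ne 0\) the conclusion (9) can be sanity-checked numerically. The following sketch uses concrete choices of our own (the exponentials \(2^x\) and \(3^x\) on \(({\mathbb N},+)\) and arbitrary values of \(\delta \) and the additive slope); it illustrates, but does not replace, the proof:

```python
# Check of the delta != 0 case of Lemma 3.3 on S = (N, +):
# h = (M' - M)/delta and f = sigma + (M' - M)/delta**2 satisfy (8).
delta, cA = 0.5, 1.25            # our arbitrary nonzero choices

def M(x):      return 2.0 ** x
def Mp(x):     return 3.0 ** x          # M', a second exponential, M' != M
def sigma(x):  return cA * x * M(x)     # sine-exponential pair (sigma, M)
def h(x):      return (Mp(x) - M(x)) / delta
def f(x):      return sigma(x) + (Mp(x) - M(x)) / delta ** 2

# verify f(x+y) == f(x)M(y) + M(x)f(y) + h(x)h(y)
for x in range(1, 6):
    for y in range(1, 6):
        lhs = f(x + y)
        rhs = f(x) * M(y) + M(x) * f(y) + h(x) * h(y)
        assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```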

Most of the solution families that we will see in the main result are verified in the following lemma. We omit the proof since it is just a translation of [3, Lemma 4.10] into the terminology introduced in the present article.

Lemma 3.4

Let S be a semigroup, let \(M_j,M,M':S \rightarrow {\mathbb C}\) be multiplicative functions, let \((\sigma , M)\) be a sine-exponential pair on S, let \((\psi , M,\sigma )\) be a cosine–sine-exponential triple on S, and let \(f,g,h:S \rightarrow {\mathbb C}\) be a solution of (2) such that \(f\), \(g\), \(h\) belong to one of the following linear spaces V.

  1. (a)

    If \(V = span\{M_1,M_2,M_3\}\), then there exist \(a_j,b_j,c_j \in {\mathbb C}\) satisfying

    $$\begin{aligned} \begin{pmatrix} a_{1} &{}\quad b_{1} &{} \quad c_1 \\ a_{2} &{}\quad b_{2} &{} \quad c_2 \\ a_{3} &{} \quad b_{3} &{}\quad c_3 \end{pmatrix}\begin{pmatrix} b_{1} &{} \quad b_{2} &{}\quad b_{3} \\ a_{1} &{}\quad a_{2} &{}\quad a_{3} \\ c_1 &{} \quad c_2 &{} \quad c_3 \end{pmatrix} = \begin{pmatrix} a_{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad a_{2} &{}\quad 0 \\ 0 &{} \quad 0 &{} \quad a_3 \end{pmatrix} \end{aligned}$$
    (10)

    such that

    $$\begin{aligned} f = \sum _{i=1}^{3} a_i M_i, \quad g = \sum _{i=1}^{3} b_i M_i, \quad h = \sum _{i=1}^{3} c_i M_i. \end{aligned}$$
  2. (b)

    If \(V = span\{M',M,\sigma \}\), then there exist \(a_j,b_j,c_j \in {\mathbb C}\) satisfying

    $$\begin{aligned} \begin{pmatrix} a_{1} &{}\quad b_{1} &{}\quad c_1 \\ a_{2} &{} \quad b_{2} &{} \quad c_2 \\ a_{3} &{} \quad b_{3} &{}\quad c_3 \end{pmatrix}\begin{pmatrix} b_{1} &{}\quad b_{2} &{} \quad b_{3} \\ a_{1} &{}\quad a_{2} &{} \quad a_{3} \\ c_1 &{} \quad c_2 &{} \quad c_3 \end{pmatrix} = \begin{pmatrix} a_{1} &{}\quad 0 &{} \quad 0 \\ 0 &{} \quad a_{2} &{}\quad a_3 \\ 0 &{} \quad a_3 &{} \quad 0 \end{pmatrix} \end{aligned}$$
    (11)

    such that

    $$\begin{aligned} f = a_1 M' + a_2M + a_3 \sigma , \quad g = b_1 M' + b_2M + b_3 \sigma , \quad h = c_1 M' + c_2M + c_3 \sigma . \end{aligned}$$
  3. (c)

    If \(V = span\{M,\sigma ,\psi \}\), then there exist \(a_j,b_j,c_j \in {\mathbb C}\) satisfying

    $$\begin{aligned} \begin{pmatrix} a_{1} &{}\quad b_{1} &{} \quad c_1 \\ a_{2} &{}\quad b_{2} &{} \quad c_2 \\ a_{3} &{}\quad b_{3} &{} \quad c_3 \end{pmatrix}\begin{pmatrix} b_{1} &{} \quad b_{2} &{} \quad b_{3}\\ a_{1} &{}\quad a_{2} &{} \quad a_{3}\\ c_1 &{} \quad c_2 &{} \quad c_3 \end{pmatrix} = \begin{pmatrix} a_{1} &{} \quad a_2 &{} \quad a_3\\ a_2 &{}\quad a_3 &{} \quad 0 \\ a_3 &{}\quad 0 &{}\quad 0 \end{pmatrix} \end{aligned}$$
    (12)

    such that

    $$\begin{aligned} f = a_1M + a_2\sigma + a_3 \psi , \quad g = b_1M + b_2\sigma + b_3 \psi ,\quad h = c_1 M + c_2\sigma + c_3 \psi . \end{aligned}$$

Moreover, if any of \(f\), \(g\), or \(h\) is zero then the corresponding coefficients in each case can be chosen to be zero.

Conversely, in each case if \(f\), \(g\), \(h\) are such linear combinations with coefficients satisfying the stated condition, then these functions satisfy (2).
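As an illustration of part (a), with coefficient values chosen by us rather than taken from the lemma, the classical sine-addition solution \(f = M_1 - M_2\), \(g = (M_1+M_2)/2\), \(h = 0\) (so \((a_j) = (1,-1,0)\), \((b_j) = (\frac{1}{2},\frac{1}{2},0)\), \((c_j) = (0,0,0)\)) satisfies (10) and hence (2):

```python
# One concrete coefficient choice for Lemma 3.4(a):
# f = M1 - M2, g = (M1 + M2)/2, h = 0.
av = [1.0, -1.0, 0.0]
bv = [0.5, 0.5, 0.0]
cv = [0.0, 0.0, 0.0]

# condition (10): rows (a_i, b_i, c_i) times rows (b_j), (a_j), (c_j)
# must equal diag(a_1, a_2, a_3)
m1 = [[av[i], bv[i], cv[i]] for i in range(3)]
m2 = [bv, av, cv]
prod = [[sum(m1[i][k] * m2[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
rhs = [[av[0], 0.0, 0.0], [0.0, av[1], 0.0], [0.0, 0.0, av[2]]]
assert prod == rhs

# the resulting f, g, h solve (2) on S = (N, +) with M1(x) = 1, M2(x) = 2**x
def f(x): return 1.0 - 2.0 ** x
def g(x): return (1.0 + 2.0 ** x) / 2
def h(x): return 0.0

for x in range(1, 6):
    for y in range(1, 6):
        assert abs(f(x + y) - (f(x) * g(y) + g(x) * f(y) + h(x) * h(y))) < 1e-9
```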

The next lemma verifies the remaining solution family we will encounter in Theorem 4.1.

Lemma 3.5

Let S be a semigroup, let \(M:S \rightarrow {\mathbb C}\) be multiplicative, let \((\psi , \sigma )\) be a quasi-trivial cosine–sine pair on S, and let \(f,g,h:S \rightarrow {\mathbb C}\) be a solution of (2). If \(f,g,h \in span \{M, \psi ,\sigma \}\), then there exist \(a_j,b_j,c_j \in {\mathbb C}\) satisfying

$$\begin{aligned} \begin{pmatrix} a_{1} &{} \quad b_{1} &{}\quad c_1 \\ a_{2} &{} \quad b_{2} &{} \quad c_2 \\ a_3 &{} \quad b_3 &{}\quad c_3 \end{pmatrix}\begin{pmatrix} b_{1} &{}\quad b_{2} &{}\quad b_3 \\ a_{1} &{}\quad a_{2} &{} \quad a_3 \\ c_1 &{}\quad c_2 &{} \quad c_3 \end{pmatrix} = \begin{pmatrix} a_{1} &{}\quad 0 &{}\quad 0 \\ 0 &{} \quad 0 &{} \quad 0 \\ 0 &{}\quad 0 &{} \quad a_2 \end{pmatrix} \end{aligned}$$
(13)

such that

$$\begin{aligned} f = a_1 M + a_2\psi + a_3\sigma , \quad g = b_1 M + b_2\psi + b_3\sigma ,\quad h = c_1 M + c_2\psi + c_3\sigma . \end{aligned}$$
(14)

Furthermore, if any of \(f\), \(g\), or \(h\) is zero then the corresponding coefficients can be chosen to be zero.

Conversely, any \(f\), \(g\), \(h\) of the form (14) with coefficients fulfilling (13) satisfy (2).

Proof

Define \(\gamma _1:= M\), \(\gamma _2:= \psi \), \(\gamma _3:= \sigma \), and let \(E \subseteq \{1,2,3\}\) be chosen so that \(\{\gamma _i \mid i\in E \}\) is a basis for the vector space \(V = span \{M, \psi ,\sigma \}\). There is a unique representation of \(f\), \(g\), \(h\) in the form (14) with \(a_k = b_k = c_k = 0\) for all \(k\notin E\). Inserting these forms into (2) we find after some calculation that

$$\begin{aligned} 0&= (2a_1b_1 + c_1^2 - a_1)M(x)M(y) + (a_1b_2 + b_1a_2 + c_1c_2)[M(x)\psi (y) + \psi (x)M(y)]\\&\quad + (a_1b_3 + b_1a_3 + c_1c_3)[M(x)\sigma (y) + \sigma (x)M(y)] + (2a_2b_2 + c_2^2)\psi (x)\psi (y)\\&\quad + (a_2b_3 + b_2a_3 + c_2c_3)[\psi (x)\sigma (y) + \sigma (x)\psi (y)] + (2a_3b_3 + c_3^2 - a_2)\sigma (x)\sigma (y), \end{aligned}$$

for all \(x,y \in S\), since \(\sigma (xy) = 0\) and \(\psi (xy) = \sigma (x)\sigma (y)\). By the linear independence of basis elements, the coefficients of all nonzero terms vanish. Combining this with \(a_k = b_k = c_k = 0\) for all \(k\notin E\), we have (13).

It can be seen from the proof that if any of \(f\), \(g\), or \(h\) is zero then the corresponding coefficients can be chosen to be zero.

The converse is easily verified by substitution.\(\square \)

4 The Main Result

Now we are ready to solve (2). As in [3] we follow the general outline of the proof used in [2, Theorem]. Note that if \(f=0\) in (2), then \(h=0\) and g is an arbitrary function. We omit this trivial case from the theorem statement.

Theorem 4.1

Let S be a topological semigroup, and suppose \(f,g,h\in C(S)\) satisfy (2) with \(f\ne 0\). The solutions are the following families, where \(M_j, M, M' \in C(S)\) are multiplicative, \((\sigma , M)\) is a continuous sine-exponential pair on S, \((\psi , M, \sigma )\) is a continuous cosine–sine-exponential triple on S, \(\tau \in C(S)\) is a quasi-trivial sine function, and either \(\omega = 0\) or \((\omega ,\tau )\) is a continuous quasi-trivial cosine–sine pair on S. In each family we can choose a basis B for V and coefficients \(a_j,b_j,c_j\) so that the coefficients are equal to 0 for each term not appearing in B.

  1. (a)

    \(f,g,h \in V = span\{M_1,M_2,M_3\}\), namely

    $$\begin{aligned} f = \sum _{j=1}^{3} a_j M_j, \quad g = \sum _{j=1}^{3} b_j M_j, \quad h = \sum _{j=1}^{3} c_j M_j \end{aligned}$$

    with \(a_j,b_j,c_j \in {\mathbb C}\) satisfying (10).

  2. (b)

    \(f,g,h \in V = span\{M',M,\sigma \}\), namely

    $$\begin{aligned} f = a_1 M' + a_2M + a_3 \sigma , \quad g = b_1 M' + b_2M + b_3 \sigma , \quad h = c_1 M' + c_2M + c_3 \sigma \end{aligned}$$

    with \(a_j,b_j,c_j \in {\mathbb C}\) satisfying (11).

  3. (c)

    \(f,g,h \in V = span\{M,\sigma ,\psi \}\), namely

    $$\begin{aligned} f = a_1 M + a_2\sigma + a_3 \psi , \\ g = b_1 M + b_2\sigma + b_3\psi ,\\ h = c_1 M + c_2\sigma + c_3 \psi , \end{aligned}$$

    with \(a_j,b_j,c_j \in {\mathbb C}\) satisfying (12).

  4. (d)

    For \(S \ne S^2\) we have \(f,g,h \in V = span\{M,\omega ,\tau \}\), namely

    $$\begin{aligned} f = a_1 M + a_2\omega + a_3\tau , \quad g = b_1 M + b_2\omega + b_3\tau ,\quad h = c_1 M + c_2\omega + c_3\tau , \end{aligned}$$

    with \(a_j,b_j,c_j \in {\mathbb C}\) satisfying (13).

Conversely, the functions in each family satisfy (2).

Proof

Let \(f,g,h\in C(S)\) be a solution of (2) with \(f\ne 0\). First suppose \(\{f,h\}\) is linearly dependent. With \(h = b f\) for some \(b \in {\mathbb C}\), (2) can be written as

$$\begin{aligned} f(xy) = f(x)\left( g + \frac{b^2}{2} f\right) (y) + \left( g + \frac{b^2}{2} f\right) (x)f(y),\quad x,y \in S, \end{aligned}$$

so \((f, g + \frac{b^2}{2} f)\) is a sine pair. We consider cases (A), (B), (C) from Proposition 2.2. In case (A) we have \(f = c(M_1 - M_2)\) and \(g + \frac{b^2}{2} f = \frac{1}{2}(M_1 + M_2)\) for distinct multiplicative functions \(M_1, M_2\in C(S)\) and \(c \in {\mathbb C}^*\). Thus \(f,g,h \in span\{M_1,M_2\}\) and we are in family (a) by Lemma 3.4 (a) with \(M_3:= 0\). In case (B) we have \(g + \frac{b^2}{2} f = M\) for an exponential \(M \in C(S)\), and \(\sigma := f\) where \((\sigma , M)\) is a sine-exponential pair. Thus \(f,g,h \in span\{\sigma ,M\}\) and we are in family (b) with \(M':= 0\). In case (C) we have \(g + \frac{b^2}{2} f = 0\) where f is a quasi-trivial sine function. Hence \(f,g,h \in span\{f\}\) and we are in family (d) with \(M:= 0\), \(\omega := 0\), and \(\tau := f\).

From here on we assume that \(\{f,h\}\) is linearly independent, so in particular \(h \ne 0\).

Computing f((xy)z) and f(x(yz)) using (2), equating the results and using the linear independence of \(\{f, h\}\), we get from [2, section 3, pp. 267–268] for all \(x,y \in S\) that

$$\begin{aligned} g(xy)&= g(x)g(y) + \alpha f(x)f(y) + \beta [f(x)h(y) + h(x)f(y)] + \gamma h(x)h(y),\end{aligned}$$
(15)
$$\begin{aligned} h(xy)&= h(x)g(y) + g(x)h(y) + \beta f(x)f(y) + \gamma [f(x)h(y) \nonumber \\&\quad + h(x)f(y)] + \delta h(x)h(y), \end{aligned}$$
(16)

for some constants \(\alpha , \beta , \gamma , \delta \in {\mathbb C}\). Next, computing g(x(yz)) and g((xy)z) using equations (15), (16), and (2), and comparing the results, we conclude (again by linear independence) that

$$\begin{aligned} \alpha + \beta \delta - \gamma ^2 = 0. \end{aligned}$$
(17)

Now using (2), (15), and the linear independence of \(\{f,h\}\), we find that

$$\begin{aligned} (\lambda f + g)(xy) = (\lambda f + g)(x)(\lambda f + g)(y) + (\mu f + \nu h)(x)(\mu f + \nu h)(y) \end{aligned}$$
(18)

holds for all \(x,y \in S\) if and only if the constants \(\lambda , \mu ,\nu \in {\mathbb C}\) satisfy

$$\begin{aligned} \lambda ^2 + \mu ^2 = \alpha , \quad \mu \nu = \beta , \quad \text { and } \nu ^2 = \lambda + \gamma . \end{aligned}$$
(19)

Here the proof divides into two main cases.

Case 1: Suppose \(\beta = 0\). Now from (17) we have \(\alpha = \gamma ^2\), and the choice \((\lambda , \mu , \nu ) = (-\gamma , 0,0)\) yields a solution of (19). Therefore by (18) we have

$$\begin{aligned} g = M + \gamma f \end{aligned}$$
(20)

for some multiplicative \(M\in C(S)\). Using this to eliminate g from (2) and (16), we arrive at the pair of functional equations

$$\begin{aligned} f(xy)&= 2\gamma f(x)f(y) + f(x)M(y) + M(x)f(y) + h(x)h(y), \end{aligned}$$
(21)
$$\begin{aligned} h(xy)&= h(x)\big (M + 2\gamma f + \frac{\delta }{2} h\big )(y) + \big (M + 2\gamma f + \frac{\delta }{2} h\big )(x)h(y). \end{aligned}$$
(22)

Here we subdivide the proof again.

Subcase 1a: Suppose \(\gamma = 0\). Now (21), (22) reduce to the system

$$\begin{aligned} f(xy)&= f(x)M(y) + M(x)f(y) + h(x)h(y), \\ h(xy)&= h(x)\left( M + \frac{\delta }{2} h\right) (y) + \left( M + \frac{\delta }{2} h\right) (x)h(y), \end{aligned}$$

which is solved in Lemma 3.3. If \(\delta \ne 0\) then we have \(h = \delta ^{-1}(M' - M)\) and \(f = \sigma + \delta ^{-2}(M' - M)\), where \(M'\in C(S)\) is multiplicative, \(M \ne M'\), and there are two alternatives. The first is that \(M\ne 0\) and \((\sigma , M)\) is a sine-exponential pair. In this event by (20) we have \(g = M\), hence \(f,g,h \in span\{M',M, \sigma \}\) and we are in solution family (b). The alternative is that \(M = 0\) and \(\sigma \) is a quasi-trivial sine function. In that case (since \(g = 0\)) we have \(f,g,h \in span\{M', \sigma \}\) with \(M' \ne 0\). Thus we are in family (d) with \(\omega := 0\), \(\tau := \sigma \), and a new \(M:= M'\).

If \(\delta = 0\) there are again two possibilities. If \(M \ne 0\) then \((h, M)\) is a sine-exponential pair and \((f, M, h)\) is a cosine–sine-exponential triple. In this case by (20) we are in family (c) with \(\sigma := h\) and \(\psi := f\). On the other hand if \(M = 0\) then \((f, h)\) is a quasi-trivial cosine–sine pair. By (20) we have \(f,g,h \in span\{f, h\}\), so we are in family (d) with \(M:= 0\), \(\omega := f\), and \(\tau := h\).

Subcase 1b: Suppose \(\gamma \ne 0\). Applying Proposition 2.2 to (22), we consider the cases (A), (B), (C) in turn. Case (A) yields \(h = c(M_1 - M_2)\) and \(M + 2\gamma f + \frac{\delta }{2} h = \frac{1}{2}(M_1 + M_2)\) for distinct multiplicative \(M_1,M_2\in C(S)\) and \(c \in {\mathbb C}^*\). Hence \(f,h \in span\{M_1,M_2,M\}\). Defining \(M_3:= M\) and recalling (20), we are in solution family (a).

In case (B) we have \(h = \sigma \) and \(M + 2\gamma f + \frac{\delta }{2} h = M'\) for some exponential \(M'\in C(S)\), where \((\sigma , M')\) is a sine-exponential pair. Exchanging \(M'\) with M and referring to (20) we are in family (b).

In case (C), h is a quasi-trivial sine function and \(M + 2 \gamma f + \frac{\delta }{2} h = 0\). Thus \(f\in span \{h,M\}\). By (20) we have also \(g\in span \{h,M\}\), so we are in family (d) with \(\tau := h\) and \(\omega := 0\).

This completes Case 1.

Case 2: Suppose \(\beta \ne 0\). Here we proceed exactly as in the proof of [3, Theorem 5.1], except that we use our Lemma 3.1 in place of [3, Lemma 4.1]. The conclusion is that the solution functions \(f\), \(g\), \(h\) again belong to one of the families (a)–(d).

The converse is established by Lemmas 3.4 and 3.5.\(\square \)

We conclude with an example on a semigroup not covered by previous results.

Example 4.2

Let \(S = ({\mathbb N}, +)\) with the discrete topology. Here (2) takes the form

$$\begin{aligned} f(x + y) = f(x)g(y) + g(x)f(y) + h(x)h(y), \quad x,y \in {\mathbb N}. \end{aligned}$$

The exponentials on S are the functions of the form \(M(x) = b^x\) for all \(x \in {\mathbb N}\), where \(b \in {\mathbb C}^*\). The additive functions are the functions of the form \(A(x) = cx\) for all \(x \in {\mathbb N}\), where \(c \in {\mathbb C}\). Since the exponentials vanish nowhere, the sine-exponential pairs are the pairs of the form \((AM, M)\) and the cosine–sine-exponential triples are the triples of the form \(((B + \frac{1}{2}A^2)M,M,AM)\) with M exponential, A and B additive, and \(A \ne 0\).

The quasi-trivial cosine–sine pairs \((\omega ,\tau )\) on S (so \(\tau \) is necessarily a quasi-trivial sine function) are the pairs of functions of the form

$$\begin{aligned} \tau (x):= {\left\{ \begin{array}{ll} a &{} \text { for } x = 1 \\ 0 &{} \text { for } x \ge 2 \end{array}\right. }, \qquad \omega (x):= {\left\{ \begin{array}{ll} b &{} \text { for } x = 1 \\ a^2 &{} \text { for } x = 2 \\ 0 &{} \text { for } x \ge 3 \end{array}\right. }, \end{aligned}$$

where \(a,b \in {\mathbb C}\) with \(a\ne 0\). Examples of solution triples \((f,g,h)\) of (2) belonging to family (d) of Theorem 4.1 are \((\tau , -\frac{c^2}{2}\tau , c\tau )\), \((\tau + c^2\,M, 0, cM)\), \((-M + i\tau , \frac{1}{2}(M + i\tau ), \tau )\), and \((\omega + c\tau , 0, \tau )\), for any \(c \in {\mathbb C}\).
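These triples can be checked by direct computation. A numerical sketch, with parameter values \(a = 1\), \(b = \frac{1}{2}\), \(c = 2\), and \(M(x) = 2^x\) of our own choosing:

```python
# Verify the family-(d) solution triples of Example 4.2 on S = (N, +).
a, b, c = 1.0, 0.5, 2.0   # our parameter choices (a != 0)

def tau(x):   return a if x == 1 else 0.0
def omega(x): return b if x == 1 else (a * a if x == 2 else 0.0)
def M(x):     return 2.0 ** x

def solves2(f, g, h, n=7):
    # does (f, g, h) satisfy f(x+y) = f(x)g(y) + g(x)f(y) + h(x)h(y)?
    return all(abs(f(x + y) - (f(x) * g(y) + g(x) * f(y) + h(x) * h(y))) < 1e-6
               for x in range(1, n) for y in range(1, n))

triples = [
    (tau, lambda x: -(c * c / 2) * tau(x), lambda x: c * tau(x)),
    (lambda x: tau(x) + c * c * M(x), lambda x: 0.0, lambda x: c * M(x)),
    (lambda x: -M(x) + 1j * tau(x), lambda x: 0.5 * (M(x) + 1j * tau(x)), tau),
    (lambda x: omega(x) + c * tau(x), lambda x: 0.0, tau),
]
assert all(solves2(f, g, h) for f, g, h in triples)
```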