In this chapter we collect three papers that correspond to the translations of three parts of the same work originally published as:

  • M. Sce, Monogeneità e totale derivabilità nelle algebre reali e complesse. I, (Italian) Atti Accad. Naz. Lincei. Rend. Cl. Sci. Fis. Mat. Nat. (8) 16 (1954), 30–35.

  • M. Sce, Monogeneità e totale derivabilità nelle algebre reali e complesse. II, (Italian) Atti Accad. Naz. Lincei. Rend. Cl. Sci. Fis. Mat. Nat. (8) 16 (1954), 188–193.

  • M. Sce, Monogeneità e totale derivabilità nelle algebre reali e complesse. III, (Italian) Atti Accad. Naz. Lincei. Rend. Cl. Sci. Fis. Mat. Nat. (8) 16 (1954), 321–325.

2.1 Monogenicity and Total Derivability in Real and Complex Algebras, I

Article I by Michele Sce, presented during the meeting of 16 January 1954 by B. Segre, member of the Academy.

To construct a theory of functions of a hypercomplex variable, a natural way would be to generalize the function theory of a complex variable. However, to pass from functions of a complex variable (for which the uniqueness of the derivative follows from the monogenicity condition) to functions of a hypercomplex variable, there are two possibilities: one is to impose the uniqueness of the derivative, and this yields the theory of totally derivable functions;Footnote 1 the second is to generalize the monogenicity conditions, and this yields the theory of monogenic functions.Footnote 2

In the course held at the Istituto di Alta Matematica in the year 1952–1953, Prof. B. Segre proposed to study algebras for which the notion of total derivability implies the one of monogenicity.

This Note I deals with the search for the conditions that the basis of an algebra, which for simplicity we assume to be an algebra with module [Editors’ note: here module means unit. In the following, we will always translate the term module with the modern term unit], must satisfy in order for this to hold. The forthcoming Notes II, III will deal with the case of algebras of order two, three, four,Footnote 3 and some cases of higher order algebras, whose bases satisfy these conditions. We believe that our results for algebras of the fourth order, in which case we find five algebras, are particularly interesting. Four of them (precisely the bicomplex, the bidual, the quaternions and the algebra of matrices of the second order) are already widely studied; we think that the fifth is considered here for the first time and it would perhaps deserve a deeper study.

1. Let \(\mathscr A\) be a real or complex algebra of order n, with unit. Given a basis u = (u 1, …, u n) we say that the algebras \(\mathscr A'\), \(\mathscr A''\) are the first and second regular representation Footnote 4 of \(\mathscr A\) if their elements are order n matrices X′, X″ defined, for any \(x\in \mathscr A\), by the relations

$$\displaystyle \begin{aligned} xu=uX' \end{aligned} $$
(2.1)

Footnote 5

$$\displaystyle \begin{aligned} ux=uX^{\prime\prime}_{-1}. \end{aligned} $$
(2.2)

We say that an element

$$\displaystyle \begin{aligned}y=y_1u_1+\cdots +y_nu_n=(y_1,\ldots, y_n)(u_1,\ldots,u_n)_{-1}=\eta u_{-1} \end{aligned}$$

belonging to \(\mathscr A\) is a right or left totally derivable function of an element

$$\displaystyle \begin{aligned}x=x_1u_1+\cdots+x_nu_n=\xi u_{-1} \end{aligned}$$

in \(\mathscr A\), if the jacobian matrix \(\partial \eta/\partial \xi = dy/dx\) belongs to \(\mathscr A'\) or its transpose belongs to \(\mathscr A''\).Footnote 6

Finally, we say that the element y in \(\mathscr A\) is a right or left monogenic function Footnote 7 of an element x in \(\mathscr A\), if

$$\displaystyle \begin{aligned} u\frac{dy}{dx}u_{-1}=0 \end{aligned} $$
(2.3)

or

$$\displaystyle \begin{aligned} u\left(\frac{dy}{dx}\right)_{-1}u_{-1}=0. \end{aligned} $$
(2.4)

When performing the change of basis

$$\displaystyle \begin{aligned} u'=uP^{-1}, \end{aligned} $$
(2.5)

since

$$\displaystyle \begin{aligned}x=\xi u_{-1}=\xi' u_{-1}^{\prime}, \qquad y=\eta u_{-1}=\eta' u_{-1}^{\prime} \end{aligned}$$

[Editors’ note: it was Y  in the original manuscript] it turns out that

$$\displaystyle \begin{aligned}\xi'=\xi P_{-1}, \qquad \eta'=\eta P_{-1} \end{aligned}$$

and so

$$\displaystyle \begin{aligned}\frac{d\eta}{d\xi}=\frac{d\eta}{d\eta'}\frac{d\eta'}{d\xi'}\frac{d\xi'}{d\xi}=P^{-1}\frac{d\eta'}{d\xi'}P. \end{aligned}$$

Thus the definition of totally derivable functions is invariant with respect to change of basis (indeed

$$\displaystyle \begin{aligned}xu'=xuP^{-1}=uX'P^{-1}=u'PX'P^{-1}=u'X^{'*} \end{aligned}$$

with \(X'=P^{-1}X^{'*}P\)). On the other hand, when performing the change of basis (2.5), formulas (2.3) and (2.4) become

$$\displaystyle \begin{aligned} u'\frac{\partial \eta'}{\partial \xi'} PP_{-1}u^{\prime}_{-1}=0 \end{aligned} $$
(2.6)
$$\displaystyle \begin{aligned} u' PP_{-1}\left(\frac{\partial \eta'}{\partial \xi'}\right)_{-1} u^{\prime}_{-1}=0; \end{aligned} $$
(2.7)

the conditions (2.6) and (2.7) depend, in general, on the change of basis,Footnote 8 in the sense that they ensure the existence (when |P|≠ 0) of a suitable basis u = u′P such that in that basis y is a monogenic function of x.

After that, the problem of comparing the notion of monogenic function with the one of totally derivable function translates into the search for conditions under which a totally derivable function is monogenic with respect to a suitable basis, namely in the comparison between the conditions that \(dy/dx\) belongs to \(\mathscr A'\) and formulas (2.6) and (2.7).

2. We now consider the functions

$$\displaystyle \begin{aligned}y_i(x)=u_ix=u\xi^{\prime}_{-1},\qquad i=1,\ldots, n; \end{aligned}$$

by virtue of (2.1), we can write

$$\displaystyle \begin{aligned}u_ix=u_iu\xi_{-1}=uU_i^{\prime}\xi_{-1}, \end{aligned}$$

with \(U_i^{\prime }\) in \(\mathscr A'\), and it turns out that

$$\displaystyle \begin{aligned}\xi^{\prime}_{-1}=U^{\prime}_i\xi_{-1}.\end{aligned} $$

But then

$$\displaystyle \begin{aligned}\frac{d u_i x}{dx}=U^{\prime}_i\end{aligned} $$

and y i(x) = u ix are right totally derivable.

Thus, if in our algebra the right totally derivable functions are also monogenic, there should exist a basis such that \(u_ix\) are right or left monogenic in x, that is, it should hold

$$\displaystyle \begin{aligned} uU^{\prime}_i PP_{-1}u_{-1}=0, \end{aligned} $$
(2.8)

or

$$\displaystyle \begin{aligned} u PP_{-1}(U^{\prime}_i)_{-1}u_{-1}=0, \end{aligned} $$
(2.9)

for some nonsingular matrix P.Footnote 9

Since \(U^{\prime }_i\) are elements in \(\mathscr A'\), (2.1), (2.8), and (2.9) allow to deduce

$$\displaystyle \begin{aligned} u_iu PP_{-1}u_{-1}=0,\qquad i=1,\ldots, n \end{aligned} $$
(2.10)
$$\displaystyle \begin{aligned} u PP_{-1}u_iu_{-1}=0,\qquad i=1,\ldots, n. \end{aligned} $$
(2.11)

As each element z of \(\mathscr A\) is a linear combination (with real or complex coefficients) of u i, by taking a linear combination of (2.10) and (2.11) one has

$$\displaystyle \begin{aligned} zu PP_{-1}u_{-1}=0, \end{aligned} $$
(2.12)

or

$$\displaystyle \begin{aligned} u PP_{-1}zu_{-1}=0; \end{aligned} $$
(2.13)

conversely, if (2.12) and (2.13) hold for each element z in \(\mathscr A\), then they hold also for u, thus one reobtains (2.10) and (2.11).

In particular, when taking z equal to the unit then (2.12) and (2.13) give

$$\displaystyle \begin{aligned} u PP_{-1}u_{-1}=0; \end{aligned} $$
(2.14)

from this one reobtains in an obvious way both (2.12) and (2.10), so these latter are equivalent to (2.14).Footnote 10

Given a right totally derivable function y(x), its jacobian matrix \(dy/dx\) will automatically belong to \(\mathscr A'\); let z be the corresponding element in \(\mathscr A\) and let us assume that (2.14) or (2.11) hold. Then also (2.12) or (2.13) hold, so that because of (2.1), we reobtain (2.6) or (2.7); thus y(x) is right or left monogenic.

Thus we may conclude that (2.14) and (2.11) with P nonsingular, are necessary and sufficient conditions for right totally derivable functions in an algebra \(\mathscr A\) to be right or left monogenic (with respect to a suitable basis); condition (2.14) is necessary also for the left monogenicity. [Editors’ Note: see Remark 2.4]Footnote 11

This latter assertion ensures that it is necessary that y(x) be right monogenic in order for right total derivability to imply left monogenicity; thus the algebras in which right total derivability implies left monogenicity are the algebras in which right totally derivable functions are both right and left monogenic.

Let us now recall that the monogenicity conditions are a system of n differential equations while those of total derivability are a system of n(n − 1) differential equations;Footnote 12 moreover the conditions for right and left monogenicity are n + m ≤ 2n.

Since (2.14) or (2.11) translate into linear conditions on the basis elements, and thus in conditions concerning only the algebra, in order to get right total differentiability provided that (2.14) or (2.11) hold, one has to add n(n − 2) differential equations to the n arising from right monogenicity or to the n + m arising from both the right and left monogenicity.

3. We say that an algebra with unit is solenoidal if its bases satisfy relations of the form (2.14) with a nonsingular P and we say, in particular, that it is bisolenoidal if its bases satisfy relations of the form (2.11).

Given two algebras \(\mathscr A=(u_1,\ldots , u_n)\), \(\mathscr B=(v_1,\ldots , v_m)\) we consider their direct sum \(\mathscr S\) and their direct product \(\mathscr P\) whose bases are, respectively

$$\displaystyle \begin{aligned}w=(u^o_1,\ldots, u^o_n, v^o_1,\ldots, v^o_m)=(u^o;v^o) \end{aligned}$$
$$\displaystyle \begin{aligned}w'=(u^o_1v^o_1, u^o_1v^o_2,\ldots, u^o_n v^o_m)=(u^o_1v^o;\ldots;u_n^ov^o), \end{aligned}$$

where

$$\displaystyle \begin{aligned}\mathscr A_o=(u^o_1,\ldots, u^o_n),\qquad \mathscr B_o=(v^o_1,\ldots, v^o_m) \end{aligned}$$

are algebras isomorphic to \(\mathscr A\), \(\mathscr B\).Footnote 13 If \(\mathscr A\) and \(\mathscr B\) are bisolenoidal, so are also \(\mathscr A_o\) and \(\mathscr B_o\) so there exist nonsingular matrices P and Q such that

$$\displaystyle \begin{aligned} u^oPP_{-1}u_i^ou_{-1}^o&=0, \qquad i=1,2,\ldots, n,\\ v^oQQ_{-1}v_k^ov_{-1}^o&=0, \qquad k=1,2,\ldots, m. \end{aligned} $$
(2.15)

From this, by setting

$$\displaystyle \begin{aligned} R=\begin{pmatrix} P & 0\\ 0 & Q\end{pmatrix} \end{aligned}$$

and recalling that in the direct sum \(u_i^ov_k^o=v_k^ou_i^o=0\), one gets

$$\displaystyle \begin{aligned} wRR_{-1}u_i^ow_{-1}=0 \end{aligned}$$
and, analogously, one finds that

$$\displaystyle \begin{aligned}wRR_{-1}v_k^ow_{-1}=0. \end{aligned}$$

Thus, if w i is any element in w, there exists a nonsingular matrix R such that

$$\displaystyle \begin{aligned}wRR_{-1}w_iw_{-1}=0, \end{aligned}$$

i.e., the direct sum of bisolenoidal algebras is bisolenoidal. Assume that only \(\mathscr B\) is solenoidal, that is, (2.15) holds; then, by setting

and recalling that in the direct product u iv k = v ku i, one obtains

Thus, the direct product of algebras, one of which is bisolenoidal is bisolenoidal.Footnote 14.

2.2 Monogenicity and Total Derivability in Real and Complex Algebras, II

Article II by Michele Sce, continuation of Article I published at p. 30 of this volume, presented during the meeting of 13 February 1954 by B. Segre, member of the Academy.

4. Setting \(PP_{-1}=\|a_{ik}\|\), (i, k = 1, …, n), formula (2.14) rewrites as \(\sum_{i,k}a_{ik}u_iu_k=0\) [Editors’ note: \(\|a_{ik}\|\) denotes the matrix with elements \(a_{ik}\)]; thus, since the units \(u_1\) and \(u_2\) of the complex algebras of the second order, i.e., of the complex and dual numbers,Footnote 15 combine according to the rules \(u_1u_i=u_i\), \(u_2^2=-u_1\), and \(u_1u_i=u_i\), \(u_2^2=0\), (i = 1, 2), it turns out that to satisfy (2.14) one must have \(a_{11}=a_{22}\), \(a_{12}=0\) or \(a_{11}=a_{12}=0\). In the second case, \(PP_{-1}\) is singular, so the algebra of dual numbers is not solenoidal and the algebra of complex numbers is the only solenoidal complex algebra of the second order. We now try to satisfy (2.14) for two of the five complex algebras of the third order,Footnote 16 the tripotential and tridual numbers, whose units combine according to the rules \(u_1u_i=u_i\), \(u_2^2=u_3\), and \(u_1u_i=u_i\), (i = 1, 2, 3); in the first case one has \(a_{11}=a_{22}=0\) and \(a_{13}+a_{22}=0\), in the second case \(a_{1i}=0\) and \(PP_{-1}\) is singular. An analogous analysis for the remaining three algebras shows that only the algebra of tridual numbers is not solenoidal.

In the case of ternions whose units combine according to the rules \(u_1^2=u_1\), \(u_2^2=u_2\), and u 1u 3 = u 3u 2 = u 3, (2.11) translate into

$$\displaystyle \begin{aligned} a_{11}u_1+a_{13}u_3=0, \quad a_{22}u_2+a_{32}u_3=0, \quad a_{12}u_3=0 \end{aligned} $$
(2.16)

which ensure that PP −1 is singular; thus the algebra of ternions is not bisolenoidal,Footnote 17 and among complex algebras of the third order, only the solenoidal commutative algebras are bisolenoidal. Considering the 16 multiplication tables which arise from the complex algebras of the 4th orderFootnote 18 together with (2.14) it can be proved that only the algebras with multiplication tables XLI and LV are not solenoidal. To this end, we limit ourselves to observe that since the multiplication rules in the two algebras are, respectively, u 1u i = u i and u 1u i = u i, u 2u 3 = −u 3u 2 = u 4 (i = 1, …, 4), in both cases to satisfy (2.14) the first row of PP −1 must vanish.Footnote 19

Imposing (2.11) for the six noncommutative algebras, one concludes that the only bisolenoidal algebras are those whose units satisfy u 1u i = u i, \(u_2^2=u_2\), u 3 = −u 3, u 2 = u 4, \(u_3^3=\alpha u_4\).Footnote 20

5. In the study of real solenoidal algebras, it is important to bear in mind that also the matrix P in (2.14) and (2.11) is real, thus \(PP_{-1}\) is symmetric and positive definite.Footnote 21 This remark allows us to obtain the converse of the first Theorem in n. 3, namely to show that a direct sum of real algebras is bisolenoidal only if its components are bisolenoidal.

Indeed, if the real algebra \(\mathscr C=(w)=(u^o\ v^o)\), direct sum of the algebras \(\mathscr A\) and \(\mathscr B\), is bisolenoidal there exists a symmetric, positive definite matrix

$$\displaystyle \begin{aligned} A=\begin{pmatrix} A_1 & A_2\\ A_3 & A_4\end{pmatrix} \end{aligned}$$

such that

$$\displaystyle \begin{aligned}w A w_iw_{-1}=u^{o}A_1 w_iu_{-1}^{o}+v^{o}A_4 w_iv_{-1}^{o} =0; \end{aligned}$$

from this relation, and according to the fact that w i is an element either in u o or in v o, one gets

$$\displaystyle \begin{aligned}u^oA_1u_i^ou_{-1}^o=0,\qquad v^oA_4v_k^ov_{-1}^o=0 \end{aligned}$$

with A 1 and A 4 still symmetric and positive definite since they are principal minors of A.

It turns out that the only solenoidal algebra of order two is the one of complex numbers. In fact, besides the two algebras over the complex field, there is the algebra of bireal numbersFootnote 22 which is direct sum of the real field (certainly non solenoidal) with itself.

About the six algebras of the third order,Footnote 23 we have already seen in n. 4 that the three indecomposable algebras cannot satisfy (2.14) with PP −1 symmetric and positive definite; since the three decomposable algebras are not solenoidal by the theorem just proven, one can conclude that there are no real algebras solenoidal of the third order.

Among the real algebras of the fourth order Footnote 24 in addition to the two algebras direct product of the algebra of complex numbers with the one of the bireal numbers and of the dual numbers, only the algebra of quaternions, the algebra of 2 × 2 matrices and the one with multiplication table LXXXI are solenoidal; none of these algebras is bisolenoidal.

Making use of a direct proof, or of theorems that we will provide in n. 6, one can prove that among the algebras of the fourth order only the five mentioned in the assertion can be solenoidal; since, by the second theorem in n. 3, the two direct product algebras are solenoidal, we shall examine only the three noncommutative algebras.

The units of the algebras in table LXXXI can be combined according to the multiplication rules

$$\displaystyle \begin{aligned}u_1u_i=u_i, \ \ (i=1,\ldots,4),\ \ u_2u_3=-u_3u_2=u_4,\ \ u_2u_4=-u_4u_2=-u_3, \end{aligned}$$

[Editors’ note: one needs also the condition \(u_2^2=-u_1\)] thus (2.14) leads to

$$\displaystyle \begin{aligned}a_{11}-a_{22}=a_{12}=a_{13}=a_{14}=0 \end{aligned}$$

which ensures the fact that the algebra is solenoidal;Footnote 25 adding (a 11 + a 22)u 3 = 0 to (2.14), we reobtain all (2.11) which, however, can be satisfied only with PP −1 singular and the algebra is not bisolenoidal.

For the algebra of 2 × 2 matrices, if we select the units e i,k, (i, k = 1, 2) which combine according to e i,he h,k = e i,k, relations (2.11) give rise to

$$\displaystyle \begin{aligned}\sum_{i} a_{j,h+i}e_{1,i}+\sum_{i} a_{j+2,h+i}e_{2,i}=0, \end{aligned}$$

(i, j = 1, 2; h = 0, 2); since, for h = 0 they impose the vanishing of the first two columns of PP −1 and for h = 2 of the remaining two columns, the algebra is not bisolenoidal, not even in the complex field. As the unit of the algebra is e 11 + e 22, (2.14) can be obtained by summing the relations that we have for j = 1, h = 0, and j = h = 2 and this leads to

$$\displaystyle \begin{aligned}a_{11}+a_{23}= a_{12}+a_{24}=a_{31}+a_{43}=a_{32}+a_{44}=0; \end{aligned}$$

these conditions are compatible with the fact that PP −1 is symmetric, positive definite so that the algebra is solenoidal.Footnote 26

In the algebra of quaternions, whose basis is e 0 = 1, e 1, e 2, e 3 = e 1e 2 and satisfies

$$\displaystyle \begin{aligned}e_1^2=e_2^2=-1,\ \ \ e_1e_2+e_2e_1=0, \end{aligned}$$

(2.14) rewrites as

$$\displaystyle \begin{aligned}a_{11}-\sum_{k=2,3,4} a_{kk}+2\sum_{k=2,3,4} a_{1k} e_{k-1}=0; \end{aligned}$$

thus one gets a 11 =∑k=2,3,4a kk, a 1k = 0, (k = 2, 3, 4) and since these conditions are compatible with the fact that PP −1 is positive definite we get that the algebra is solenoidal.

Since \(e_1(e_0,\ldots,e_3) = 2(e_1, -e_0, 0, 0) - (e_0,\ldots,e_3)e_1\), the second of (2.11), once the first of (2.11), namely (2.14), is satisfied, reduces to

$$\displaystyle \begin{aligned}(a_{11}-a_{22})e_1- \sum_{k=3,4}a_{2k} e_{k-1}=0, \end{aligned}$$

and thus it leads to

$$\displaystyle \begin{aligned}\sum_k a_{kk}=0,\ \ \ a_{2k}=0,\ \ (k=3,4). \end{aligned}$$

Then, imposing the remaining (2.11), one obtains that PP −1 must vanish and this excludes that the algebra is bisolenoidal, also over the complex field.

6. Real division algebras are, in addition to the field of real numbers, the algebra of complex numbers and the one of quaternions;Footnote 27 thus, from n. 5, we deduce that also real division algebras of order n > 1 are solenoidal.

An immediate generalization of the proofs in n. 5 allows to state that all regular algebras (total matric algebras) [Editors’ note: this is written in English in the original text; see also Definition 2.3 and the comment after that.] and all the real or complex Clifford algebras are solenoidal but not bisolenoidal.

From the two propositions it follows that simple real or complex algebras, i.e. direct product of a division algebra with an algebra which is regular, of order n > 1 are solenoidal.

In force of the first theorem in n. 5, the real semi-simple algebras, i.e. direct sums of simple algebras, are solenoidal if and only if all their components are so; since there are no simple solenoidal algebras of order 1, 3, 5, 7,Footnote 28 we conclude that real semi-simple algebras of order 1, 3, 5, 7 are not solenoidal.

An algebra \(\mathscr A=(u_1,\ldots ,u_{n-m};u_{n-m+1},\ldots , u_n)=(u';u'')\) of order n not semisimple has a nontrivial subalgebra \(\mathscr R=(u'')\) of order m which is its maximal nilpotent ideal, called radical of the algebra;Footnote 29 in the case of \(\mathscr A\), (2.14) rewrites as:

and, since \(\mathscr R\) and \(\mathscr A - \mathscr R\) are disjoint, this condition imposes the vanishing of the two factors in the right hand side. It follows that: in order that the real algebra \(\mathscr A\) is solenoidal, \(\mathscr A -\mathscr R\) has to be solenoidal.

Since the real algebra \(\mathscr A -\mathscr R\) is semisimple,Footnote 30 from the last two statements it follows that the real algebras of orders n − 1, n − 3, n − 5, n − 7 are not solenoidal. Footnote 31

Algebras with cyclic radical, namely with radical of order m and index m + 1, are direct sum of two algebras one of which is either the algebra of (m + 1)-potential numbers or the algebra of ternions;Footnote 32 since, by virtue of the last statement of n. 5, they are not solenoidal, real algebras with cyclic radical are not solenoidal. In particular, algebras with radical of order 1 are not solenoidal.

Bearing in mind that if \(\mathscr A - \mathscr R\) is simple, its order must divide both the order of \(\mathscr A\) and the one of \(\mathscr R\),Footnote 33 we show that there are no solenoidal real algebras of order 5 and 7.

We know already that there are no semisimple, solenoidal, real algebras of orders 5, 7; thus, recalling the next to the last theorem, algebras whose radicals have orders 4, 2, 1, 0 or 6, 4, 2, 1, 0, respectively, are not solenoidal.

If there existed a real solenoidal algebra \(\mathscr A\) of order 5 with radical \(\mathscr R\) of order 3, the algebra \(\mathscr A -\mathscr R\) of order 2 would be solenoidal and thus it would be the algebra of complex numbers, which is simple; in this case, its order must divide the order of \(\mathscr A\) and this is absurd.

The same reasoning shows that the radical of a real solenoidal algebra of order 7 cannot be of order 5 and that if \(\mathscr R\) is of order 3 then \(\mathscr A-\mathscr R\) cannot be simple. Thus let us suppose that there exists a real, solenoidal algebra of order 7 with radical of order 3—which will not be cyclic—and let \(\mathscr A-\mathscr R\) be direct sum of the algebra of complex numbers with itself; the multiplication table of \(\mathscr A\) would be \(\begin{pmatrix} T_1 & T_2\\ T_3 & T_4\end{pmatrix}\), where \(T_1\) is the multiplication table of \(\mathscr A -\mathscr R\) whose units combine according to the rules \(u_1^2=u_1\), \(u_1u_2=u_2\), \(u_2^2=u_1\), \(u_3^2=u_3u_3u_4=u_4u_4^2=-u_3\), [Editor’s note: \(u_3^2=u_3\), \(u_3u_4=u_4\), \(u_4^2=-u_3\)], \(T_2\) and \(T_3\) are matrices with elements in \(\mathscr R\) and \(T_4\) is the multiplication table of a nilpotent algebra of order 3 whose units combine according to the rules \(u_5^2=u_6u_5=u_7\), \(u_6^2=\alpha u_7\) or \(u_5^2=u_7\), \(u_6^2=\alpha u_7\) (case 1), \(u_5u_6=-u_6u_5=u_7\) (case 2) or it is the table of a zero algebra (case 3).Footnote 34 Then let \(u=(u_1+u_3+\alpha u_5+bu_6+cu_7)\) be the unit of \(\mathscr A\) and let us set

$$\displaystyle \begin{aligned}(u_2+u_4)u_{4+i}=\sum_{k} m_{ik}u_{4+k} \end{aligned}$$

(i, k = 1, 2, 3) with m ik real numbers.

In the first two cases we will have, respectively,

$$\displaystyle \begin{aligned}(u_2+u_4)u_7=(u_2+u_4)u_5^2=(\sum_{i} m_{1i} u_{4+i})u_5=\gamma u_7 \end{aligned}$$
$$\displaystyle \begin{aligned}(u_2+u_4)u_7=(u_2+u_4)u_5u_6=(\sum_{i} m_{1i} u_{4+i})u_6=\gamma u_7 \end{aligned}$$

with γ real (it can also be zero). On the other hand

$$\displaystyle \begin{aligned}u_7=uu_7=(u_1+u_3)u_7=-(u_2+u_4)^2u_7=-\gamma^2 u_7; \end{aligned}$$

thus the real algebra at hand cannot have a radical as in case 1 or 2. Thus let us consider case 3; then

$$\displaystyle \begin{aligned}u_{4+i}=u u_{4+i}=(u_1+u_3)u_{4+i}=-(u_2+u_4)^2u_{4+i}=-\sum_k m_{ik} \Big(\sum_j m_{kj} u_{4+j}\Big), \end{aligned}$$

thus \(\|m_{ik}\|^2 + I_3 = 0\). But a real matrix of odd order cannot satisfy such an equation, thus the radical of \(\mathscr A\) cannot be a zero-algebra of order 3; this completes the proof of the theorem.

2.3 Monogenicity and Total Derivability in Real and Complex Algebras, III

Article III by Michele Sce, continuation of the Notes I, II published in these “Rendiconti” pp. 30–35 and pp. 188–193 presented during the meeting of 13 March 1954 by B. Segre, member of the Academy.

7. Let x and y be elements of the algebra \(\mathscr A\) as in n. 1 and let y (k) denote the partial derivative of y(x) with respect to x k; in order to have that the Pfaffian form y dx is closed it is necessary that

$$\displaystyle \begin{aligned} y^{(k)}u_h-u_k y^{(h)}=0, \qquad (h,k=1,\ldots,n)\end{aligned} $$
(2.17)

so that y(x) is right totally derivable.Footnote 35 If—possibly making a basis change—we assume that u 1 is the unit of \(\mathscr A\), from (2.17) we obtain

$$\displaystyle \begin{aligned} y^{(1)}(u_ku_h-u_hu_k)=0, \qquad (h,k=1,\ldots,n);\end{aligned} $$
(2.18)

so that, if \(\mathscr A\) is not commutative, y (1) is a zero-divisor and the matrix which corresponds to it in the algebra \(\mathscr A'\), first regular representation of \(\mathscr A\), is singular. Since this matrix is, by virtue of the total derivability, the jacobian matrix of y(x) we have that functions y(x) with non-zero jacobian such that y dx is closed are the functions totally derivable in a commutative algebra.

Since, by virtue of the results in n. 2, the form y dx is co-closed if and only if the function y(x) is monogenicFootnote 36 we can state that in the commutative, solenoidal algebras, closed forms are co-closed, thus they are harmonic both in the Hodge and in the de Rham sense.Footnote 37

8. Let \(S_n\) be the vector space associated with \(\mathscr A\) and let y be right monogenic in a domain D, y′ a left monogenic function in a domain D′; then if \(V_{n-1}\) is an (n − 1)-dimensional cycle contained in D ∩ D′ and homologous to zero there, and \(dx^*\) is the adjoint of the form \(dx =\sum_i u_i\, dx_i\), (i = 1, …, n), one has Footnote 38

$$\displaystyle \begin{aligned} \int_{V_{n-1}} y\,dx^*\, y'=0. \end{aligned} $$
(2.19)

Now let us suppose that for the monogenic functions of \(\mathscr A\) there exists an integral formula of Cauchy-type. More precisely, let us assume that in \(\mathscr A\) there exists a function f(x, ξ) which, for ξ fixed, is right monogenic in x in S n except a set \(\mathscr I\), at most (n − 1)-dimensional, of points in which it is not defined, so that for every g(x) left monogenic in a domain D′ one has

$$\displaystyle \begin{aligned} \int_{V_{n-1}} f(x,\xi)\, dx^*\, g(x)=g(\xi) \end{aligned} $$
(2.20)

where V n−1 is an (n − 1)-dimensional cycle encircling ξ and homologous to zero in D′; by virtue of the theorem just stated, V n−1 cannot be homologous to zero in the domain where f(x, ξ) is monogenic and must contain the points in \(\mathscr I\). In particular, (2.20) must hold when g(x) is the unit of the algebra and V n−1 is a sphere centered at ξ and with radius r; setting

$$\displaystyle \begin{aligned}x_1=\xi_1+r\cos\varphi_1\cdots\cos\varphi_{n-1}, \ x_2=\xi_2+r\sin\varphi_1\cos\varphi_2\cdots\cos\varphi_{n-1},\ \ldots \end{aligned}$$
$$\displaystyle \begin{aligned}\ldots, \ x_n=\xi_n+r\sin\varphi_{n-1}, \qquad (0\leq \varphi_1< 2\pi;\ -\frac{\pi}{2}\leq \varphi_i\leq \frac{\pi}{2}, \ldots, i=2,\ldots ,n-1), \end{aligned}$$

since it turns out that

$$\displaystyle \begin{aligned}dx^*=r^{n-2}(x-\xi)\, d\sigma, \end{aligned}$$

where \(d\sigma =\cos ^{n-2}\varphi _{n-1}\cdots \cos \varphi _{2}\, d\varphi _1\cdots d\varphi _{n-1}\) is the area element of the unit sphere,Footnote 39 (2.20) gives

$$\displaystyle \begin{aligned} \int_S f(x,\xi)r^{n-2}(x-\xi)\, d\sigma=1. \end{aligned} $$
(2.21)

Thus in the algebras where the function \(r^{2-n}(x-\xi)^{-1}\) is right monogenic, where defined, namely it satisfies

$$\displaystyle \begin{aligned} &\sum_k \frac{\partial}{\partial x_k}[r^{2-n}(x-\xi)^{-1}]u_k=\\ &=r^{-n} (x-\xi)^{-1}[(n-2)(x-\xi)+r^2 \sum_k u_k(x-\xi)^{-1}u_k]=0, \end{aligned} $$
(2.22)

we can presume that formula (2.20) holds where we have set

$$\displaystyle \begin{aligned} f(x,\xi)=k^{-1} r^{2-n}(x-\xi)^{-1}, \qquad k=\int_S d\sigma. \end{aligned} $$
(2.23)

Since in Clifford algebras, for any x which is not a zero divisor one has \(\sum _k u_k xu_k=-(n-2) \bar x\), with \(x\bar x=r^2\), (2.22) is satisfied and it remains to establish if (2.20), where we have set (2.23), effectively gives an integral formula in Clifford algebras.Footnote 40

9. To obtain real solutions to the equation

$$\displaystyle \begin{aligned} \varOmega h(x_1,\ldots ,x_p)=\sum_{k=1}^p \alpha_k(x_1,\ldots, x_p) \frac{\partial^2 h}{\partial x_k^2}=0 \end{aligned} $$
(2.24)

we can consider the equation

$$\displaystyle \begin{aligned} \varOmega f(x)=0 \end{aligned} $$
(2.25)

where f is a totally derivable function of the element x = x_1u_1 + ⋯ + x_pu_p of the algebra \(\mathscr A\) of order n (p ≤ n). Equation (2.25) is solvable if and only if, with respect to some basis of \(\mathscr A\), it holds Footnote 41

$$\displaystyle \begin{aligned}\sum_{k=1}^p \alpha_k u_k^2=0. \end{aligned}$$

Thus, if p = n and α k are constant, (2.25) is solvable in solenoidal algebras in the complex field; in particular, if (2.24) is elliptic, (2.25) is solvable in solenoidal algebras in the field of real numbers.

The aforementioned method easily extends to partial differential equations of order greater than two; for example, to solve the equation

$$\displaystyle \begin{aligned}\varDelta^n h(x_1,x_2)=0 \end{aligned}$$

we can reduce ourselves to totally derivable functions in an algebra such that

$$\displaystyle \begin{aligned}(u_1^2+u_2^2)^n=0. \end{aligned}$$

Among this type of algebras, particularly relevant are the cyclic algebras of order 2n whose basis \(1, j, j^2, \ldots, j^{2n-1}\) satisfies the relation \((1+j^2)^n=0\). By setting \(\omega = 1+j^2\), we can express j through the imaginary unit and powers of ω; thus one sees that such algebras are direct product of the algebra of complex numbers and algebras of n-potential numbers with basis \(1, \omega, \ldots, \omega^{n-1}\). Footnote 42 This fact ensures that these algebras are solenoidal and such that totally derivable functions \(y(x_0,\ldots , x_{2n-1})\) are harmonic; moreover, a simple inspection of the jacobian matrix which, by definition of total derivability, is of the form

(2.26)

ensures that every y is a harmonic function of all pairs x 2k, x 2k+1. It is worthwhile to note that monogenic functions which are not totally derivable are not even harmonic.

10. Let us now consider noncommutative algebras \(\mathscr A_n\) of order 2n whose basis

$$\displaystyle \begin{aligned}1, i, \omega, i\omega, \ldots, \omega^{n-1}, i\omega^{n-1} \end{aligned}$$

satisfies the relations Footnote 43

$$\displaystyle \begin{aligned}i^2=-1,\ \ \ \omega^{n}=0, \ \ \ \omega i+i\omega =0; \end{aligned}$$

we can write any element a in \(\mathscr A_n\) in the form

$$\displaystyle \begin{aligned} a=\alpha_0 +\alpha_1 \omega+\cdots +\alpha_{n-1}\omega^{n-1} \end{aligned} $$
(2.27)

where α k = a 2k + ia 2k+1 behave among them like ordinary complex numbers, while

$$\displaystyle \begin{aligned} \alpha_k\omega=\omega \overline{\alpha}_{k}. \end{aligned} $$
(2.28)

Besides the algebras \(\mathscr A_n\), we will consider the algebra \(\mathscr Q_n\) of order 4n which can be obtained by maintaining condition ω n = 0 and assuming that α k in (2.27) and (2.28) behave among them like ordinary quaternions.

Taking into account the expression of the product of two elements a =∑kα kω k, b =∑kβ kω k

$$\displaystyle \begin{aligned}ab=\sum_k \alpha_r\tilde{\beta}_s \omega^k, \ \ \ \ (r+s=k), \end{aligned}$$

\(\tilde {\beta }_s=\overline {\beta }_s\) or β s according to the fact that r = k − s is even or odd, with long but not difficult calculations one can establish the following results:

the elements in the center of an algebra \(\mathscr A_n\) or \(\mathscr Q_n\) defined via (2.27) and (2.28) and ω n = 0 have the form \(\sum \alpha _{2k}\omega ^{2k}\) with \(\alpha _{2k}=\overline {\alpha }_{2k}\).

The zero divisors are of the form ω ia thus they are all and the only nilpotent elements in the algebra; these latter constitute the radical, which is of index n and order 2(n − 1) for \(\mathscr A_n\) and 4(n − 1) for \(\mathscr Q_n\).

The only idempotent not nilpotent is the unit thus \(\mathscr A_n\) and \(\mathscr Q_n\) are algebras completely primary.

If one considers only the case n = 2, as we shall do, from the first statement one obtains immediately that \(\mathscr A_2\) and \(\mathscr Q_2\) are normal. Moreover, setting \(\bar a= \bar {\alpha }_0 -\alpha _1 \omega \), if we say that the norm of a is the real number \(a\bar a=\alpha _0\bar {\alpha }_0\), we see that the zero divisors are all the elements with zero norm and only them.

By setting y = α + βω and x = ξ + ηω, monogenic functions y(x) are also harmonic in the components of ξ.

In fact, it is knownFootnote 44 that the monogenicity condition for y(x) may be written as Dy = 0 where D is an operator that behaves like an element in the algebra; since the norm of D is the laplacian associated with the components of ξ, by applying to Dy = 0 the operator \(\bar D\) we obtain the result.

11. The considerations that we made so far, even though maybe not uninteresting, would be in need of being deepened if one wishes to deduce more concrete results; however, it is our belief that such results can be obtained only in special type of algebras like, for example, \(\mathscr A_n\).

About these latter algebras we point out that their zero divisors, in the representative space \(S_{2n}\), form a linear space \(S_{2(n-1)}\); thus, in this case, the study of the variety of the zero divisors—necessary preliminary to look for integral formulas—is trivial. The difficulty in this type of problems is the lack of concrete examples of monogenic functions, especially in the case of noncommutative algebras;Footnote 45 for example, in \(\mathscr A_2\) neither the powers nor the exponential are monogenic functions, and we can only say that such functions are of the form

$$\displaystyle \begin{aligned}w=u(x_1,x_2)+ \dfrac{v(x_1,x_2)}{x_2}(x_2 i+x_3 \omega +x_4 i\omega) \end{aligned}$$

with u + iv holomorphic. In the algebra of quaternions, from a similar property one deduces that Δw (and in particular \(\varDelta x^n\)) are monogenic functions; since such a result is not valid in \(\mathscr A_2\), there is the problem of knowing if in \(\mathscr A_2\) there is a differential operator Ω such that Ωw is monogenic.

Since all these problems appear to be connected among them, an answer, even partial, could shed light on the whole question: this is what we hope to do in another work.

2.4 Comments and Historical Remarks

In this section we revisit, also providing examples, the concepts contained in the previous sections, i.e. in the original papers [27,28,29]. We also add some historical remarks which seem to be nowadays forgotten.

The interest in theories of functions in algebras other than the algebra of complex numbers and generalizing the complex holomorphic functions started after the study of these algebras in the classical works of Gauss, Hamilton, Hankel, Frobenius and goes back to the end of the nineteenth century (see [40] for a list of references). It then continued with the work of Lanczos [17], who considered the generalization to quaternions back in 1919, but his work was mostly unknown until [18], and with the PhD dissertation of Ketchum, see [14]. It was in the early thirties when the study of quaternionic functions started systematically with the works of Moisil [23] and Fueter and in the forties, Krylov [16] and Meilikhson [22] studied the notion of quaternionic differentiability.

It is interesting that in the thirties and forties, various authors considered functions with values in algebras, for example Ward [40] who developed his PhD dissertation on the theory of analytic functions in associative algebras, Nef [24], Spampinato [38], Sobrero [37] and also Fueter [7,8,9] and Haefeli [11]. The interest in these studies continued even later as shown by the works of Kriszten [15] and Rizza [26].

Michele Sce’s works considered in this chapter fit into this field of research. He knew rather well the existing literature and, despite the fact that the circulation of journals, and consequently of papers, was more limited in the fifties, his knowledge of the available works was complete. Sce notes in his papers that some function theories were already developed, specifically the theory of hyperholomorphic (or hyperdifferentiable or monogenic) functions over quaternions, bicomplex numbers, Clifford algebras. In the quaternionic case, the analog of holomorphic functions are the Cauchy–Fueter regular functions, so-called since it was Fueter and his school who developed this function theory.

In his work Sce, as well as a few other authors, quotes the work of Moisil [23] published in 1931, mentioning that this author was using the term monogenic, instead of regular, functions. But the history of the birth of regularity on quaternions is more complicated. For example (as we mentioned before and without any claim to historical completeness), Lanczos already developed this approach in his 1919 dissertation [17], Fueter presented some of his results at the International Congress of Mathematicians in 1928, [6], and Iwanenko and Nikolsky already considered the case of biquaternions in 1930, [13]. It is also likely that, at that time, other Russian researchers were working in this framework. Thus, we believe that it is fair to say that various authors, more or less at the same time, were considering quaternionic functions and a notion of holomorphicity in this context.

The second four dimensional case (over \(\mathbb R\)), namely the one of bicomplex numbers, was started by Scorza Dragoni in [35] and, after the monograph [25], it has attracted attention in more recent times, see [20]. We also note that another four dimensional case, the one of bidual numbers studied by Sobrero [37], has been basically abandoned. We note that we kept the name “bidual” to be consistent with the terminology adopted by the Italian school, even though this algebra is nothing but the complex Grassmann algebra with one generator \(\mathfrak f_1\) such that \(\mathfrak f_1^2=0\). Algebras of order up to four have been studied in [30,31,32,33,34].

The case of Clifford algebra valued functions is widely studied in the literature, starting with the celebrated monograph [3] which has been followed by several other books and hundreds of papers. However, one should notice that the notion of monogenicity treated in this chapter is given for functions from (a subset of) an algebra to itself. This is not the case treated in [3] and subsequent literature, where the functions have values in a Clifford algebra but are defined on Euclidean space identified with the set of paravectors or of vectors in the algebra. It is remarkable that Sce already considered this class of functions, as we shall see in Chap. 5, in relation with the celebrated Fueter theorem nowadays known as the Fueter-Sce-Qian theorem. These functions were eventually considered by Iftimie [12], thus showing that also the Romanian school was continuing the studies in hypercomplex analysis.

Below, we will use some examples to illustrate what is presented in the previous sections and to compare the various concepts. In our examples, we will discuss some particular choices of algebras of order up to four.

We will use standard terminology and notation, like \(A^T\) (instead of \(A_{-1}\) used by Sce) to denote the transpose of a matrix A and, in particular, of a vector. By \(\mathscr A\) we denote a real or complex algebra of order n and by \(\mathscr A'\), \(\mathscr A{''}\) the algebras of n × n matrices which are the first and second regular representation of \(\mathscr A\).

Even though the basic notions and terminology about algebras are well known, we repeat some preliminaries for the sake of completeness.

Definition 2.1

An algebra over a field F is a set \(\mathscr A\) such that \(\mathscr A\) is a vector space over F and there is an F-bilinear mapping \(\mathscr A \times \mathscr A\to \mathscr A\), (a, b)↦ab, i.e.

$$\displaystyle \begin{aligned}k(ab)=(ka)b=a(kb), \qquad \mathrm{for}\ \mathrm{all}\ k\in F,\ a,b\in\mathscr A. \end{aligned}$$

This bilinear map is called multiplication.

In particular, an algebra is called associative (resp. commutative) if the multiplication is associative (resp. commutative).

Even though it is not specified in Sce’s papers, all the algebras considered are associative. This was a standard assumption in the papers written in Italy at that time, unless otherwise specified, and it appears also from the calculations performed in the manuscripts.

Definition 2.2

We say that \(\mathscr A\) has order n if there exist \(u_1,\ldots , u_n\in \mathscr A\) such that every \(a\in \mathscr A\) can be expressed in a unique way as

$$\displaystyle \begin{aligned}a=a_1 u_1+\cdots+a_n u_n, \qquad a_i\in F,\ i=1,\ldots ,n. \end{aligned}$$

We recall that, in this Chapter, we consider associative algebras and that an algebra \(\mathscr A\) is called a division algebra if it is, as a ring, a division ring. We now state more definitions that are used in the book.

Definition 2.3

An algebra is called regular if it is isomorphic to an algebra of m × m matrices.

In n. 6 Sce uses the term regular also referring to the English terminology total matric algebra which, however, seems to be no longer in use.

Definition 2.4

An element \(a\in \mathscr A\) is called nilpotent if \(a^r = 0\) for some \(r\in \mathbb N\) and the least such r is called the index of a. Moreover, a is called properly nilpotent if both ya and ay are zero or nilpotent for every \(y\in \mathscr A\).

Definition 2.5

The set \(\mathscr R\) consisting of zero and of all properly nilpotent elements is called radical of \(\mathscr A\).

An algebra is called semi-simple if its radical is the zero ideal.

An algebra is called simple if its only proper ideal is the zero ideal and \(\mathscr A\) is not a zero algebra of order 1, namely \(\mathscr A\) is not such that ab = 0 for every \(a,b\in \mathscr A\).

It is also useful to recall that

Theorem 2.1

Let \(\mathscr N\) be an ideal of an algebra \(\mathscr A\). Then \(\mathscr A \setminus \mathscr N\) is semi-simple if and only if \(\mathscr N\) is the radical ideal of \(\mathscr A\).

As explained in Sect. 2.1, the notion of total differentiability was introduced by Spampinato in [38]. A function theory starting from this definition was never really developed. In the case of left (resp. right) derivability, meant as the existence of the limit of the left (resp. right) difference quotient

$$\displaystyle \begin{aligned}(q-q_0)^{-1}(f(q)-f(q_0))\qquad (f(q)-f(q_0))(q-q_0)^{-1} \end{aligned}$$

where q, q 0 are quaternions, one obtains affine functions only. This fact was proved by Meilikhson in [22], but the interested reader may find a proof in Sudbery’s paper [39] which made the result commonly known.

It was realized only at a later stage, see for example the works [10, 19, 21] by Gürlebeck, Malonek, Shapiro and others, that in order to obtain a meaningful class of functions one needs to construct the difference quotients differently. By taking these suitable difference quotients, the notion of differentiability that one obtains coincides with the notion of monogenicity, in analogy with the complex case.

In order to write the notion of total derivability used in this work, we recall the next definition (see (2.1), (2.2) and [1, 2]):

Definition 2.6

Let \(\mathscr A\) be a real or complex algebra of order n, with unit and let u = (u_1, …, u_n) be a vector containing the ordered elements in a given basis of \(\mathscr A\). We say that the algebras \(\mathscr A'\), \(\mathscr A''\) are the first and second regular representation of \(\mathscr A\) if their elements are the order n matrices X′, X″ defined, for any \(x\in \mathscr A\), by the relations

$$\displaystyle \begin{aligned}xu=uX' \end{aligned}$$
$$\displaystyle \begin{aligned}ux=u(X'')^T. \end{aligned}$$
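To make these relations concrete, the following is a minimal computational sketch (in Python, using sympy; the encoding of the multiplication table through structure constants \(\gamma_{ij\ell}\), with \(u_iu_j=\sum_\ell \gamma_{ij\ell}u_\ell\), and the function names are our own choices and not part of the original text). It builds X′ and X″ from the defining relations xu = uX′ and ux = u(X″)^T, here for the dual numbers of Example 2.2 below; replacing the array gamma with the structure constants of any other algebra of this chapter yields the corresponding representations.

import sympy as sp

# Structure constants gamma[i][j][l] with u_i u_j = sum_l gamma[i][j][l] u_l.
# Chosen here: the dual numbers, u1 = unit, u2**2 = 0.
gamma = [[[1, 0], [0, 1]],   # u1*u1 = u1,  u1*u2 = u2
         [[0, 1], [0, 0]]]   # u2*u1 = u2,  u2*u2 = 0

n = len(gamma)
x = sp.symbols(f'x1:{n + 1}')  # coordinates of x = x1*u1 + ... + xn*un

def first_representation(x, gamma):
    # x u_j = sum_l (sum_i x_i gamma[i][j][l]) u_l, hence X'_{lj} = sum_i x_i gamma[i][j][l].
    n = len(gamma)
    return sp.Matrix(n, n, lambda l, j: sum(x[i] * gamma[i][j][l] for i in range(n)))

def second_representation(x, gamma):
    # u_j x = sum_l (sum_i x_i gamma[j][i][l]) u_l, and ux = u(X'')^T
    # gives X''_{jl} = sum_i x_i gamma[j][i][l].
    n = len(gamma)
    return sp.Matrix(n, n, lambda j, l: sum(x[i] * gamma[j][i][l] for i in range(n)))

print(first_representation(x, gamma))   # Matrix([[x1, 0], [x2, x1]])
print(second_representation(x, gamma))  # Matrix([[x1, x2], [0, x1]]), i.e. (X')^T here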

Definition 2.7

Let \(\mathscr A\) be an algebra with basis u 1, …, u n, \(x\in \mathscr A\) and \(y:\, \mathscr A\longrightarrow \mathscr A\). Let u = (u 1, …, u n) and let ξ = (ξ 1, …, ξ n), η = (η 1, …, η n) be the coordinates of x, y, respectively, with respect to the given basis, i.e.

$$\displaystyle \begin{aligned}x=\xi u^T,\end{aligned}$$
$$\displaystyle \begin{aligned}y=\eta u^T.\end{aligned}$$

If y is derivable, that is, all the components of η are derivable with respect to the components of ξ, we say that y is right (resp. left) totally derivable if the jacobian belongs to \(\mathscr A'\) (resp. the transpose of the jacobian belongs to \(\mathscr A{''}\)).

Remark 2.1

The notion of right or left total derivability is designed on the notion of right or left differentiability, in the standard sense. In fact, let us consider a function y with values in an algebra with unit \(\mathscr A\), where a basis u 1, …, u n is fixed:

$$\displaystyle \begin{aligned}y(x)=y_1(x)u_1+\cdots +y_n(x)u_n, \end{aligned}$$

where x = x_1u_1 + ⋯ + x_nu_n. Note that x_i, y_i, i = 1, …, n are real or complex and x varies in an open set of \(\mathscr A\) when we identify \(\mathscr A\) with \(\mathbb R^n\) (or \(\mathbb C^n\)). If we assume that the functions y admit derivatives with respect to x_i and we set

$$\displaystyle \begin{aligned}dx=dx_1u_1+\cdots +dx_nu_n, \qquad dy=dy_1u_1+\cdots +dy_nu_n, \end{aligned}$$

with

$$\displaystyle \begin{aligned}dy_\ell=\dfrac{\partial y_\ell}{\partial x_1}dx_1+\cdots +\dfrac{\partial y_\ell}{\partial x_n}dx_n,\qquad \ell=1,\ldots ,n, \end{aligned}$$

then the function y is left differentiable or totally derivable on the left if there exists a function z(x) such that

$$\displaystyle \begin{aligned}dy=dx\, z(x), \end{aligned}$$

y is right differentiable or totally derivable on the right if there exists a function z(x) such that

$$\displaystyle \begin{aligned} dy=z(x)\, dx, \end{aligned} $$
(2.29)

for every dx. Writing z(x) = z 1(x)u 1 + ⋯ + z n(x)u n and setting

$$\displaystyle \begin{aligned}u_iu_j =\sum_{\ell=1}^n \gamma_{ij\ell} u_\ell, \end{aligned}$$

we have that (2.29) becomes

$$\displaystyle \begin{aligned}\sum_{\ell=1}^n dy_\ell u_\ell= \sum_{i,j,\ell=1}^n \gamma_{ij\ell}z_i dx_j u_\ell . \end{aligned}$$

By equating the coefficients in front of the units \(u_\ell\), we deduce:

$$\displaystyle \begin{aligned}dy_\ell=\sum_{i,j=1}^n \gamma_{ij\ell}z_i dx_j, \qquad \ell=1,\ldots, n. \end{aligned}$$

Since \(dy_\ell =\sum _{j=1}^n \dfrac {\partial y_\ell }{\partial x_j}\, dx_j\) and the differentials dx i are independent we obtain

$$\displaystyle \begin{aligned} \sum_{i=1}^n \gamma_{ij\ell}z_i=\dfrac{\partial y_\ell}{\partial x_j}, \quad j,\ell=1,\ldots, n. \end{aligned} $$
(2.30)

Thus the total derivability on the right (2.29) is equivalent to (2.30). The left hand side of (2.30) gives the entries \(x^{\prime}_{\ell j}\) of the matrix X′ of the first regular representation of z. Thus (2.30) expresses the fact that the Jacobian matrix \((\partial y_\ell/\partial x_j)\) belongs to \(\mathscr A'\).

In Definition 2.7, x is varying in the whole algebra, but when a topology can be defined (for example identifying the elements in a real algebra of order n with vectors in \(\mathbb R^n\)) we can consider an open set U in \(\mathscr A\) and have the notion of right or left totally derivable function on U with values in \(\mathscr A\). The notion implies that the jacobian matrix satisfies suitable symmetries, as shown in the following examples.

Example 2.1

Let us consider the case of the real algebra of complex numbers. Then a basis is given for example by {1, i} with \(i^2=-1\), so that u = (1  i). Then x = x_1 + ix_2 and

$$\displaystyle \begin{aligned}xu=(x_1+ix_2,\ -x_2+ix_1)=u\begin{pmatrix} x_1 & -x_2\\ x_2 & x_1\end{pmatrix}. \end{aligned}$$

In this commutative case the two representations \(\mathscr A'\) and \(\mathscr A{''}\) coincide. The jacobian matrix

$$\displaystyle \begin{aligned}\begin{pmatrix} \dfrac{\partial y_1}{\partial x_1} & \dfrac{\partial y_1}{\partial x_2}\\ \dfrac{\partial y_2}{\partial x_1} & \dfrac{\partial y_2}{\partial x_2}\end{pmatrix} \end{aligned}$$

belongs to \(\mathscr A'\) if and only if

$$\displaystyle \begin{aligned}\frac{\partial y_1}{\partial x_1}=\frac{\partial y_2}{\partial x_2}, \qquad \frac{\partial y_1}{\partial x_2}=-\frac{\partial y_2}{\partial x_1} \end{aligned}$$

namely if and only if the Cauchy–Riemann conditions are satisfied.

We know from the general theory that total derivability is independent of the choice of a basis. In this specific example, let us choose the basis {1,  − i}. Then x = x_1 − ix_2 and

$$\displaystyle \begin{aligned}x(1,\ -i)=(x_1-ix_2,\ -x_2-ix_1)=(1,\ -i)\begin{pmatrix} x_1 & -x_2\\ x_2 & x_1\end{pmatrix}. \end{aligned}$$

Thus the conclusion is as above.
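As a quick symbolic cross-check of this example (a sketch in Python with sympy; the test function y = x², the square of the complex variable, is our own choice), one can verify that the jacobian of such a function has the symmetric form required to belong to \(\mathscr A'\):

import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

# Real and imaginary parts of y = (x1 + i*x2)**2.
y1 = x1**2 - x2**2
y2 = 2 * x1 * x2

# Jacobian matrix (partial y_l / partial x_j).
J = sp.Matrix([[sp.diff(y1, x1), sp.diff(y1, x2)],
               [sp.diff(y2, x1), sp.diff(y2, x2)]])

# Membership in A' means J = [[a, -b], [b, a]], i.e. the Cauchy-Riemann conditions.
print(sp.simplify(J[0, 0] - J[1, 1]))   # 0
print(sp.simplify(J[0, 1] + J[1, 0]))   # 0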

Example 2.2

Let us consider the algebra of dual numbers, namely the algebra generated by u_1, u_2 satisfying \(u_1^2=u_1\), \(u_2^2=0\), u_1u_2 = u_2u_1 = u_2. Let x = x_1u_1 + x_2u_2, u = (u_1  u_2). Then, since the algebra is commutative the right and left representations coincide and follow from

$$\displaystyle \begin{aligned}xu=(x_1u_1+x_2u_2,\ x_1u_2)=u\begin{pmatrix} x_1 & 0\\ x_2 & x_1\end{pmatrix}. \end{aligned}$$
Thus the condition of right and left total derivability of y(x) = y 1(x)u 1 + y 2(x)u 2 is then

$$\displaystyle \begin{aligned}\dfrac{\partial y_1}{\partial x_1}=\dfrac{\partial y_2}{\partial x_2}, \quad \dfrac{\partial y_1}{\partial x_2}=0. \end{aligned}$$

To conclude the examples in the case of algebras of second order, we consider the case of hyperbolic numbers:

Example 2.3

Let us consider the algebra of hyperbolic numbers, namely the algebra generated by u_1, u_2 satisfying \(u_1^2=u_1\), \(u_2^2=u_1\), u_1u_2 = u_2u_1 = u_2. Let x = x_1u_1 + x_2u_2, u = (u_1  u_2). Due to the commutative setting, the left and right representations follow from

$$\displaystyle \begin{aligned}xu=(x_1u_1+x_2u_2,\ x_2u_1+x_1u_2)=u\begin{pmatrix} x_1 & x_2\\ x_2 & x_1\end{pmatrix}. \end{aligned}$$
Thus the condition of total derivability, both left and right, is expressed by

$$\displaystyle \begin{aligned}\dfrac{\partial y_1}{\partial x_1}=\dfrac{\partial y_2}{\partial x_2}, \quad \dfrac{\partial y_1}{\partial x_2}=\dfrac{\partial y_2}{\partial x_1}. \end{aligned}$$

Example 2.4

We now consider the case of an algebra of the third order, specifically the algebra of ternions which is defined as the real algebra of upper triangular 2 × 2 matrices. As a basis of the algebra, we choose

$$\displaystyle \begin{aligned}u_1=\begin{pmatrix} 1 & 0\\ 0 & 0\end{pmatrix}, \qquad u_2=\begin{pmatrix} 0 & 0\\ 0 & 1\end{pmatrix}, \qquad u_3=\begin{pmatrix} 0 & 1\\ 0 & 0\end{pmatrix}. \end{aligned}$$
The multiplication rules are

$$\displaystyle \begin{aligned}u_1^2=u_1, \quad u_2^2=u_2, \quad u_3^2=0, \quad u_1u_3=u_3, \quad u_3u_2=u_3, \end{aligned}$$
$$\displaystyle \begin{aligned}u_1u_2=u_2u_1=u_2u_3=u_3u_1=0. \end{aligned}$$

Setting x = x_1u_1 + x_2u_2 + x_3u_3, the first representation can be computed from

$$\displaystyle \begin{aligned}xu=(x_1u_1,\ x_2u_2+x_3u_3,\ x_1u_3)=u\begin{pmatrix} x_1 & 0 & 0\\ 0 & x_2 & 0\\ 0 & x_3 & x_1\end{pmatrix}, \end{aligned}$$

while the second representation follows from

$$\displaystyle \begin{aligned}ux=(x_1u_1+x_3u_3,\ x_2u_2,\ x_2u_3). \end{aligned}$$
Thus the conditions of right total derivability are expressed by

$$\displaystyle \begin{aligned}\dfrac{\partial y_1}{\partial x_2}=\dfrac{\partial y_1}{\partial x_3}=\dfrac{\partial y_2}{\partial x_1}=\dfrac{\partial y_2}{\partial x_3}=\dfrac{\partial y_3}{\partial x_1}=0, \ \ \dfrac{\partial y_1}{\partial x_1}=\dfrac{\partial y_3}{\partial x_3}, \end{aligned}$$

while the left total derivability corresponds to

$$\displaystyle \begin{aligned}\dfrac{\partial y_1}{\partial x_2}=\dfrac{\partial y_1}{\partial x_3}=\dfrac{\partial y_2}{\partial x_1}=\dfrac{\partial y_2}{\partial x_3}=\dfrac{\partial y_3}{\partial x_2}=0, \ \ \dfrac{\partial y_2}{\partial x_2}=\dfrac{\partial y_3}{\partial x_3}. \end{aligned}$$
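The multiplication rules of the ternions can also be cross-checked directly on the matrix model (a small sketch in Python with sympy; the three matrices are the basis we chose at the beginning of this example):

import sympy as sp

# Ternions as upper triangular 2x2 matrices; this basis reproduces the
# multiplication rules listed in Example 2.4.
u1 = sp.Matrix([[1, 0], [0, 0]])
u2 = sp.Matrix([[0, 0], [0, 1]])
u3 = sp.Matrix([[0, 1], [0, 0]])

basis = {'u1': u1, 'u2': u2, 'u3': u3}

# Print the full multiplication table u_i * u_j.
for name_a, a in basis.items():
    for name_b, b in basis.items():
        print(f'{name_a}*{name_b} =', (a * b).tolist())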

Example 2.5

A fourth order algebra which has been widely studied from the point of view of a function theory on it is the one of bicomplex numbers \(\mathbb {B}\mathbb {C}\) with respect to the basis 1, i, j, k so that x = x_1 + x_2i + x_3j + x_4k. We recall that \(i^2 = j^2 = -1\), ij = ji = k. It is a commutative algebra, so that right and left total differentiability coincide. We then have

$$\displaystyle \begin{aligned}xu=(x_1+x_2i+x_3j+x_4k, \, x_1i-x_2+x_3k-x_4j, \, x_1j+x_2k-x_3-x_4i,\, x_1k-x_2j-x_3i+x_4)\end{aligned} $$

Thus, the function y(x) is left or right totally differentiable if and only if the jacobian matrix \(\left (\dfrac {\partial y_i}{\partial x_k}\right )\) satisfies the conditions

$$\displaystyle \begin{aligned} &\frac{\partial y_1}{\partial x_1} =\frac{\partial y_2}{\partial x_2}=\frac{\partial y_3}{\partial x_3} =\frac{\partial y_4}{\partial x_4},\\ &\frac{\partial y_1}{\partial x_2} =-\frac{\partial y_2}{\partial x_1}=\frac{\partial y_3}{\partial x_4} =-\frac{\partial y_4}{\partial x_3},\\ &\frac{\partial y_1}{\partial x_3} =\frac{\partial y_2}{\partial x_4}=-\frac{\partial y_3}{\partial x_1} =-\frac{\partial y_4}{\partial x_2} ,\\ &\frac{\partial y_1}{\partial x_4} =-\frac{\partial y_2}{\partial x_3} =-\frac{\partial y_3}{\partial x_2} =\frac{\partial y_4}{\partial x_1}. \end{aligned} $$
(2.31)

Example 2.6

Let us consider the noncommutative case of quaternions \(\mathbb H\) with respect to the basis 1, i, j, k so that x = x 1 + x 2i + x 3j + x 4k. We recall that i 2 = j 2 = −1, ij = −ji = k. We then have

$$\displaystyle \begin{aligned}xu=(x_1+x_2i+x_3j+x_4k, \, x_1i-x_2-x_3k+x_4j, \, x_1j+x_2k-x_3-x_4i,\, x_1k-x_2j+x_3i-x_4)\end{aligned} $$

Thus, the function y(x) is right totally differentiable if and only if the jacobian matrix \(\left (\dfrac {\partial y_i}{\partial x_k}\right )\) satisfies the conditions

$$\displaystyle \begin{aligned} &\frac{\partial y_1}{\partial x_1} =\frac{\partial y_2}{\partial x_2}=\frac{\partial y_3}{\partial x_3} =\frac{\partial y_4}{\partial x_4},\\ &\frac{\partial y_1}{\partial x_2} =-\frac{\partial y_2}{\partial x_1} =\frac{\partial y_3}{\partial x_4} =-\frac{\partial y_4}{\partial x_3},\\ &\frac{\partial y_1}{\partial x_3} =-\frac{\partial y_3}{\partial x_1} =\frac{\partial y_4}{\partial x_2} =-\frac{\partial y_2}{\partial x_4},\\ &\frac{\partial y_1}{\partial x_4} =\frac{\partial y_2}{\partial x_3} =-\frac{\partial y_3}{\partial x_2} =-\frac{\partial y_4}{\partial x_1}. \end{aligned}$$

Analogously, we have

$$\displaystyle \begin{aligned}ux=(x_1+x_2i+x_3j+x_4k, \, x_1i-x_2+x_3k-x_4j, \, x_1j-x_2k-x_3+x_4i,\, x_1k+x_2j-x_3i-x_4)\end{aligned} $$

Thus the left total derivability conditions are:

$$\displaystyle \begin{aligned} &\frac{\partial y_1}{\partial x_1} =\frac{\partial y_2}{\partial x_2}=\frac{\partial y_3}{\partial x_3} =\frac{\partial y_4}{\partial x_4},\\ &\frac{\partial y_1}{\partial x_2} =-\frac{\partial y_2}{\partial x_1} =-\frac{\partial y_3}{\partial x_4} =\frac{\partial y_4}{\partial x_3},\\ &\frac{\partial y_1}{\partial x_3} =-\frac{\partial y_3}{\partial x_1} =-\frac{\partial y_4}{\partial x_2}=\frac{\partial y_2}{\partial x_4},\\ &\frac{\partial y_1}{\partial x_4} =-\frac{\partial y_2}{\partial x_3} =\frac{\partial y_3}{\partial x_2} =-\frac{\partial y_4}{\partial x_1}. \end{aligned}$$

It is also clear that the 12 = n 2 − n conditions arise from imposing that the 16 = n 2 entries depend on 4 = n parameters. These are definitely different from the conditions expressing the right or left monogenicity, as we shall see below.

Example 2.7

Sce made a comment on the possible interest of the algebra LXXXI in the classification given by Scorza in [34], so we consider also this case. According to n. 10, the basis of the algebra can be written as u = (1, i, ω, iω) with \(i^2=-1\), \(\omega^2=0\), \(i\omega+\omega i=0\). Setting x = x_1 + ix_2 + ωx_3 + iωx_4, easy computations show that

$$\displaystyle \begin{aligned}xu=(x_1+ix_2+\omega x_3+i\omega x_4,\ -x_2+ix_1+\omega x_4 -i\omega x_3,\ \omega x_1+i\omega x_2,\ -\omega x_2+i\omega x_1), \end{aligned}$$

while

$$\displaystyle \begin{aligned}ux=(x_1+ix_2+\omega x_3+i\omega x_4,\ -x_2+ix_1-\omega x_4 +i\omega x_3,\ \omega x_1-i\omega x_2,\ \omega x_2+i\omega x_1). \end{aligned}$$
We deduce that the function y(x) is right totally differentiable if and only if the jacobian matrix \(\left (\dfrac {\partial y_i}{\partial x_k}\right )\) satisfies the conditions

$$\displaystyle \begin{aligned} &\frac{\partial y_1}{\partial x_1} =\frac{\partial y_2}{\partial x_2}=\frac{\partial y_3}{\partial x_3} =\frac{\partial y_4}{\partial x_4},\\ &\frac{\partial y_1}{\partial x_3} =\frac{\partial y_1}{\partial x_4} =\frac{\partial y_2}{\partial x_3} =\frac{\partial y_2}{\partial x_4}=0\\ &\frac{\partial y_1}{\partial x_2} =-\frac{\partial y_2}{\partial x_1} =\frac{\partial y_3}{\partial x_4} =-\frac{\partial y_4}{\partial x_3},\\ &\frac{\partial y_3}{\partial x_1} =-\frac{\partial y_4}{\partial x_2},\\ &\frac{\partial y_3}{\partial x_2} =\frac{\partial y_4}{\partial x_1}. \end{aligned}$$

while the left totally differentiability conditions are

$$\displaystyle \begin{aligned} &\frac{\partial y_1}{\partial x_1} =\frac{\partial y_2}{\partial x_2}=\frac{\partial y_3}{\partial x_3} =\frac{\partial y_4}{\partial x_4},\\ &\frac{\partial y_1}{\partial x_3} =\frac{\partial y_1}{\partial x_4} =\frac{\partial y_2}{\partial x_3} =\frac{\partial y_2}{\partial x_4}=0\\ &\frac{\partial y_1}{\partial x_2} =-\frac{\partial y_2}{\partial x_1} =-\frac{\partial y_3}{\partial x_4} =\frac{\partial y_4}{\partial x_3},\\ &\frac{\partial y_3}{\partial x_1} =\frac{\partial y_4}{\partial x_2},\\ &\frac{\partial y_3}{\partial x_2} =-\frac{\partial y_4}{\partial x_1}. \end{aligned}$$

We now turn to the notions of right or left monogenicity that are made explicit in the examples below, computed again in the cases considered above. We recall that a function y = y(x) with values in \(\mathscr A\) is said to be right monogenic (see (2.3) and [36]) if

$$\displaystyle \begin{aligned}u \left(\frac{dy}{dx}\right) u^T=0 \end{aligned} $$

or left monogenic if (see (2.4))

$$\displaystyle \begin{aligned}u\left(\frac{dy}{dx}\right)^T u^T=0. \end{aligned} $$

By explicitly writing these two conditions using the operator

$$\displaystyle \begin{aligned}D=\sum_{i=1}^n u_i \dfrac{\partial }{\partial x_i} \end{aligned}$$

applied to \(y(x)=\sum _{\ell=1}^n u_\ell y_\ell (x)\), it is clear that the second condition can be expressed as Dy = 0 while the first one is

$$\displaystyle \begin{aligned}\sum_{i,j=1}^n u_j \dfrac{\partial y_j }{\partial x_i} u_i=yD=0 \end{aligned}$$

(where the notation of writing D on the right means that the units in D are written on the right).
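To make the passage from these algebra-valued conditions to scalar equations explicit (an editorial sketch; the multiplication constants \(c_{ijk}\) are not part of the original notation), write \(u_iu_j=\sum _{k=1}^n c_{ijk}u_k\). Then

$$\displaystyle \begin{aligned}Dy=\sum_{i,j=1}^n u_iu_j \dfrac{\partial y_j }{\partial x_i}=\sum_{k=1}^n\left(\sum_{i,j=1}^n c_{ijk}\dfrac{\partial y_j }{\partial x_i}\right)u_k, \qquad yD=\sum_{k=1}^n\left(\sum_{i,j=1}^n c_{jik}\dfrac{\partial y_j }{\partial x_i}\right)u_k, \end{aligned}$$

so each of the two monogenicity conditions amounts to the n scalar equations obtained by equating to zero the coefficient of every \(u_k\); this is how the systems in the examples below are obtained.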

Remark 2.2

The reader may wonder if the right and left monogenicity conditions are related by transposition of matrices. However

$$\displaystyle \begin{aligned}\left(u\frac{dy}{dx}u^T\right)^T=0 \end{aligned}$$

equals

$$\displaystyle \begin{aligned}(u^T)^T\left(\frac{dy}{dx}\right)^Tu^T=u\left(\frac{dy}{dx}\right)^Tu^T \end{aligned}$$

only if \(\mathscr A\) is commutative. And in fact, in this case the two notions coincide.

Example 2.8

In the complex case, the notion of monogenicity (left or right, since we work in a commutative setting) is expressed by

$$\displaystyle \begin{aligned}u\left(\frac{dy}{dx}\right)u^T=\left(\frac{\partial y_1}{\partial x_1}-\frac{\partial y_2}{\partial x_2}\right)+i\left(\frac{\partial y_1}{\partial x_2}+\frac{\partial y_2}{\partial x_1}\right)=0, \end{aligned}$$

which translates into the Cauchy–Riemann equations

$$\displaystyle \begin{aligned}\dfrac{\partial y_1}{\partial x_1}-\dfrac{\partial y_2}{\partial x_2}=0, \qquad \dfrac{\partial y_2}{\partial x_1}+\dfrac{\partial y_1}{\partial x_2}=0,\end{aligned}$$

so the notion, as expected, corresponds to the one of holomorphicity. It is immediate to verify that one obtains exactly the same conditions if one takes the transpose of the jacobian.

Example 2.9

Another case of an algebra of second order that we considered above is the one of dual numbers. Using the basis in Example 2.2, the right (and left) monogenicity conditions are expressed by

$$\displaystyle \begin{aligned} \dfrac{\partial y_1}{\partial x_1}=0, \quad \dfrac{\partial y_1}{\partial x_2}+\dfrac{\partial y_2}{\partial x_1}=0. \end{aligned} $$
(2.32)

In the case of hyperbolic numbers we have:

Example 2.10

Using the basis of hyperbolic numbers given in Example 2.3, the left (and right) monogenicity conditions are expressed by

$$\displaystyle \begin{aligned} &\dfrac{\partial y_1}{\partial x_1}+\dfrac{\partial y_2}{\partial x_2}=0\\ &\dfrac{\partial y_2}{\partial x_1}+\dfrac{\partial y_1}{\partial x_2}=0. \end{aligned} $$
(2.33)

We note that in the literature one may find the above equations written with different signs. In fact, (2.33) expresses the fact that \(y_1u_1 + y_2u_2\) is in the kernel of the operator \(u_1\dfrac {\partial }{\partial x_1}+u_2\dfrac {\partial }{\partial x_2}\), whereas in [20] one finds the conditions characterizing the kernel of \(u_1\dfrac {\partial }{\partial x_1}-u_2\dfrac {\partial }{\partial x_2}\). The two function theories so obtained are different but equivalent.
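For the reader's convenience, here is a direct expansion (an editorial verification, using \(u_1=1\) and \(u_2^2=u_1\) as in Example 2.3) of the two operators just mentioned, which makes the sign difference apparent:

$$\displaystyle \begin{aligned} &\left(u_1\dfrac{\partial }{\partial x_1}+u_2\dfrac{\partial }{\partial x_2}\right)(y_1u_1+y_2u_2)=\left(\dfrac{\partial y_1}{\partial x_1}+\dfrac{\partial y_2}{\partial x_2}\right)u_1+\left(\dfrac{\partial y_1}{\partial x_2}+\dfrac{\partial y_2}{\partial x_1}\right)u_2,\\ &\left(u_1\dfrac{\partial }{\partial x_1}-u_2\dfrac{\partial }{\partial x_2}\right)(y_1u_1+y_2u_2)=\left(\dfrac{\partial y_1}{\partial x_1}-\dfrac{\partial y_2}{\partial x_2}\right)u_1+\left(\dfrac{\partial y_2}{\partial x_1}-\dfrac{\partial y_1}{\partial x_2}\right)u_2. \end{aligned}$$

Setting the first expression equal to zero gives (2.33), while the second leads to the alternative sign convention mentioned above.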

Example 2.11

In the case of ternions, using the basis previously introduced, see Example 2.4, the right and left monogenicity conditions are expressed, respectively, by

$$\displaystyle \begin{aligned}\dfrac{\partial y_1}{\partial x_1}=\dfrac{\partial y_2}{\partial x_2}=0, \ \ \dfrac{\partial y_3}{\partial x_2}+\dfrac{\partial y_1}{\partial x_3}=0, \end{aligned}$$

and

$$\displaystyle \begin{aligned}\dfrac{\partial y_1}{\partial x_1}=\dfrac{\partial y_2}{\partial x_2}=0, \ \ \dfrac{\partial y_2}{\partial x_3}+\dfrac{\partial y_3}{\partial x_1}=0. \end{aligned}$$

Example 2.12

For the algebra \(\mathbb {B}\mathbb {C}\) the monogenicity conditions (left or right) are expressed by \(u\left (\dfrac {dy}{dx}\right )u^T=0\) and taking into account the multiplication rules we obtain

$$\displaystyle \begin{aligned} &\frac{\partial y_1 }{\partial x_1 }-\frac{\partial y_2 }{\partial x_2 }-\frac{\partial y_3 }{\partial x_3 }+\frac{\partial y_4 }{\partial x_4 }=0\\ &\frac{\partial y_1 }{\partial x_2 }+\frac{\partial y_2 }{\partial x_1 }-\frac{\partial y_3 }{\partial x_4 }-\frac{\partial y_4 }{\partial x_3 }=0\\ &\frac{\partial y_1 }{\partial x_3 }-\frac{\partial y_2 }{\partial x_4 }+\frac{\partial y_3 }{\partial x_1 }-\frac{\partial y_4 }{\partial x_2 }=0\\ &\frac{\partial y_1 }{\partial x_4 }+\frac{\partial y_2 }{\partial x_3 }+\frac{\partial y_3 }{\partial x_2 } + \frac{\partial y_4 }{\partial x_1 }=0. \end{aligned} $$
(2.34)

If the function y(x) is totally derivable, then it is also monogenic, since the conditions (2.31) imply that all the equations in (2.34) become identities, already with the basis {1, i, j, k}.

Example 2.13

In the quaternionic case, if we impose the condition of right monogenicity, i.e.,

$$\displaystyle \begin{aligned}u\left(\frac{dy}{dx}\right)u^T=0, \end{aligned} $$
(2.35)

with easy computations we obtain the system:

$$\displaystyle \begin{aligned} &\frac{\partial y_1 }{\partial x_1 }-\frac{\partial y_2 }{\partial x_2 }-\frac{\partial y_3 }{\partial x_3 }-\frac{\partial y_4 }{\partial x_4 }=0\\ &\frac{\partial y_1 }{\partial x_2 }+\frac{\partial y_2 }{\partial x_1 }+\frac{\partial y_3 }{\partial x_4 }-\frac{\partial y_4 }{\partial x_3 }=0\\ &\frac{\partial y_1 }{\partial x_3 }-\frac{\partial y_2 }{\partial x_4 }+\frac{\partial y_3 }{\partial x_1 }+\frac{\partial y_4 }{\partial x_2 }=0\\ &\frac{\partial y_1 }{\partial x_4 }+\frac{\partial y_2 }{\partial x_3 }-\frac{\partial y_3 }{\partial x_2 } + \frac{\partial y_4 }{\partial x_1 }=0 \end{aligned}$$

which corresponds to the well-known Cauchy–Fueter conditions for the right regularity of a quaternionic function. By taking the transpose of the jacobian, with similar calculations, we obtain the Cauchy–Fueter conditions for the left regularity, i.e.,

$$\displaystyle \begin{aligned} &\frac{\partial y_1 }{\partial x_1 }-\frac{\partial y_2 }{\partial x_2 }-\frac{\partial y_3 }{\partial x_3 }-\frac{\partial y_4 }{\partial x_4 }=0\\ &\frac{\partial y_1 }{\partial x_2 }+\frac{\partial y_2 }{\partial x_1 }-\frac{\partial y_3 }{\partial x_4 }+\frac{\partial y_4 }{\partial x_3 }=0\\ &\frac{\partial y_1 }{\partial x_3 }+\frac{\partial y_2 }{\partial x_4 }+\frac{\partial y_3 }{\partial x_1 }-\frac{\partial y_4 }{\partial x_2 }=0\\ &\frac{\partial y_1 }{\partial x_4 }-\frac{\partial y_2 }{\partial x_3 }+\frac{\partial y_3 }{\partial x_2 } + \frac{\partial y_4 }{\partial x_1 }=0 \end{aligned} $$
(2.36)
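In terms of the operator D introduced above, one can check directly that the system (2.36) is Dy = 0 with

$$\displaystyle \begin{aligned}D=\dfrac{\partial }{\partial x_1}+i\dfrac{\partial }{\partial x_2}+j\dfrac{\partial }{\partial x_3}+k\dfrac{\partial }{\partial x_4}, \end{aligned}$$

that is, the classical Cauchy–Fueter operator, while the first system in this example corresponds to yD = 0.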

Remark 2.3

In various works, see [4] and the references therein, we considered functions which are Cauchy–Fueter regular with respect to n > 1 quaternionic variables. The regularity condition is expressed by a system consisting of 4n equations obtained by writing for each of the n quaternionic variables a system of the form (2.36). The analysis of this class of functions is rather complicated and is performed in [4] with algebraic methods based on the construction of a minimal free resolution of the module associated with the system. It is essential to consider n > 1 since in one variable the methods do not provide interesting information, basically because the matrix representing the system (2.36) is square and nondegenerate. It is however interesting to note that, when considering the notion of total derivability, the algebraic study may be meaningful also when n = 1, since the matrix representing the system is not square. Such a study deserves further investigation.
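One way to see this nondegeneracy (an editorial observation, not taken from [4]) is the following: replacing \(\partial /\partial x_i\) by variables \(\xi _i\) in (2.36), the matrix of the system becomes the matrix of left multiplication by the quaternion \(\xi _1+i\xi _2+j\xi _3+k\xi _4\) in the basis (1, i, j, k), and

$$\displaystyle \begin{aligned}\det\begin{pmatrix} \xi_1 & -\xi_2 & -\xi_3 & -\xi_4\\ \xi_2 & \xi_1 & -\xi_4 & \xi_3\\ \xi_3 & \xi_4 & \xi_1 & -\xi_2\\ \xi_4 & -\xi_3 & \xi_2 & \xi_1 \end{pmatrix}=(\xi_1^2+\xi_2^2+\xi_3^2+\xi_4^2)^2, \end{aligned}$$

which vanishes only for ξ = 0.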

Finally, we consider the case of the algebra LXXXI in [34]:

Example 2.14

In the case of the algebra LXXXI, using the basis in Example 2.7 we immediately deduce that the left monogenicity conditions are expressed by:

$$\displaystyle \begin{aligned} &\frac{\partial y_1 }{\partial x_1 }-\frac{\partial y_2 }{\partial x_2 }=0\\ &\frac{\partial y_1 }{\partial x_2 }+\frac{\partial y_2 }{\partial x_1 }=0\\ &\frac{\partial y_1 }{\partial x_3 }+\frac{\partial y_2 }{\partial x_4 }+\frac{\partial y_3 }{\partial x_1 }-\frac{\partial y_4 }{\partial x_2 }=0\\ &\frac{\partial y_1 }{\partial x_4 }-\frac{\partial y_2 }{\partial x_3 }+\frac{\partial y_3 }{\partial x_2 } + \frac{\partial y_4 }{\partial x_1 }=0 . \end{aligned}$$

These relations are formally similar to the conditions of left Cauchy–Fueter regularity, except that in the first two equations only the derivatives with respect to \(x_1\) and \(x_2\) appear. However, from the point of view of algebraic analysis the two systems are different. This is visible, for example, when taking the minimal free resolution in the case of two variables in this algebra: the first syzygies are quadratic and linear, whereas in the Cauchy–Fueter case there are only quadratic syzygies, see [4].

In order to relate the notions of total derivability and of monogenicity (left or right) one needs conditions on the ambient algebra, and this justifies the notions of solenoidal and bisolenoidal algebra. In order to distinguish these algebras among themselves, a fact that may be useful in understanding the computations in Section 2 of [27] (see also Sect. 1.1) is the following

Proposition 2.1

Let \(u = (u_1, \ldots , u_n)\) and \(u'=(u^{\prime }_1,\ldots , u^{\prime }_n)\) be bases of the algebra \(\mathscr A\) and let u = u′P, where P is a nonsingular matrix. Let ξ and \(\xi '=\xi P^T\) be the coordinates with respect to these two bases. Then the jacobian of ξ′ with respect to ξ is \(\left (\dfrac {d\xi '}{d\xi }\right )=P\).

Proof

Let \(P = (p_{ij})\); then the relation \(\xi '=\xi P^T\), written in components, gives

$$\displaystyle \begin{aligned} \xi^{\prime}_1&=p_{11}\xi_1+p_{12}\xi_2+\ldots +p_{1n}\xi_n\\ \xi^{\prime}_2&=p_{21}\xi_1+p_{22}\xi_2+\ldots +p_{2n}\xi_n\\ \ldots &= \ldots\\ \xi^{\prime}_n&=p_{n1}\xi_1+p_{n2}\xi_2+\ldots +p_{nn}\xi_n.\\ \end{aligned}$$

Thus \(\dfrac {\partial \xi _i^{\prime }}{\partial \xi _j}=p_{ij}\) and the statement follows. □
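As a simple illustration of the proposition (an editorial example, not contained in [27]), take the hyperbolic numbers of Example 2.3 with \(u=(u_1,u_2)\), \(u_1=1\), \(u_2^2=u_1\), and the new basis \(u'=\left(\tfrac{u_1+u_2}{2},\tfrac{u_1-u_2}{2}\right)\). Then u = u′P with

$$\displaystyle \begin{aligned}P=\begin{pmatrix} 1 & 1\\ 1 & -1\end{pmatrix},\qquad \xi'=\xi P^T=(\xi_1+\xi_2,\ \xi_1-\xi_2),\qquad \left(\dfrac{d\xi'}{d\xi}\right)=\begin{pmatrix} 1 & 1\\ 1 & -1\end{pmatrix}=P, \end{aligned}$$

in agreement with the statement, since \(x=\xi_1u_1+\xi_2u_2=(\xi_1+\xi_2)u^{\prime}_1+(\xi_1-\xi_2)u^{\prime}_2\).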

Let us recall the following formulas:

$$\displaystyle \begin{aligned} & u_iu PP_{-1}u_{-1}=0,\qquad i=1,\ldots, n \end{aligned} $$
(2.10)
$$\displaystyle \begin{aligned} & u PP_{-1}u_iu_{-1}=0,\qquad i=1,\ldots, n. \end{aligned} $$
(2.11)
$$\displaystyle \begin{aligned} & zu PP_{-1}u_{-1}=0, \end{aligned} $$
(2.12)
$$\displaystyle \begin{aligned} & u PP_{-1}zu_{-1}=0; \end{aligned} $$
(2.13)
$$\displaystyle \begin{aligned} & u PP_{-1}u_{-1}=0. \end{aligned} $$
(2.14)

Definition 2.8

An algebra with unit is solenoidal if its bases satisfy (2.14) with P nonsingular. It is bisolenoidal if it has bases satisfying (2.11) with P nonsingular.

Theorem 2.2

In an algebra with unit we have

(2.10) ⇔ (2.12) ⇔ (2.14)

and

(2.11) ⇔ (2.13) ⇒ (2.14).

Thus, if an algebra is bisolenoidal it is also solenoidal, but not vice versa.

The results in Sect. 2.1 can be summarized as follows:

Theorem 2.3

In an algebra with unit, right totally derivable functions are right monogenic if and only if (2.14) holds, while they are left monogenic if and only if the relations (2.11) hold. In the latter case, the functions are also right monogenic.

Proof

Assume that a function y(x) is right totally derivable. Then the condition of being right monogenic, with respect to a basis obtained via the change of basis given by the nonsingular matrix P, is expressed by the relations (2.10), which hold if and only if (2.14) holds. The condition of being left monogenic, with respect to such a basis, is expressed by (2.11). Since the relations (2.11) imply (2.14), the functions are automatically also right monogenic. □

Remark 2.4

We believe that in the original manuscript there is a typo and the sentence

“(2.14) and (2.11) with P nonsingular, are necessary and sufficient conditions for right totally derivable functions in an algebra \(\mathscr A\) to be right or left monogenic (with respect to a suitable basis)”

should instead be

“(2.14) or (2.11) with P nonsingular, are necessary and sufficient conditions for right totally derivable functions in an algebra \(\mathscr A\) to be right or left monogenic (with respect to a suitable basis).”

The amended sentence is also in accordance with the review of the paper written by B. Crabtree, see [5].

To illustrate these ideas we consider some examples in the case of algebras of low order.

Example 2.15

Let us consider the algebra of dual numbers, see Example 2.2. Setting \(PP^T = A = (a_{jk})\) we compute (2.14):

We obtain \(a_{11}u_1 + a_{12}u_2 = 0\), which immediately gives \(a_{11} = a_{12} = 0\), so that \(PP^T\), and hence P, is singular. From this fact one immediately deduces that the algebra of dual numbers is neither solenoidal nor bisolenoidal. Note that (2.11) for i = 1 reduces to the condition above, while for i = 2 it is an identity.
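For completeness, the computation referred to above can be sketched as follows (an editorial expansion, using \(u_1=1\) and \(u_2^2=0\) as in Example 2.2, without assuming at first that A is symmetric):

$$\displaystyle \begin{aligned}uAu_{-1}=a_{11}u_1^2+a_{12}u_1u_2+a_{21}u_2u_1+a_{22}u_2^2=a_{11}u_1+(a_{12}+a_{21})u_2, \end{aligned}$$

and since \(A=PP^T\) is symmetric, the vanishing of this expression forces \(a_{11}=a_{12}=0\).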

Example 2.16

To illustrate the reasoning in Sect. 2.2, we develop the computations in the case of ternions. We need to compute the elements \(a_{ik}\) of the matrix A using (2.14). We have:

which leads to

$$\displaystyle \begin{aligned}a_{11}u_1+a_{22}u_2+a_{23}u_3+a_{13}u_3=0. \end{aligned}$$

We deduce that \(a_{11} = a_{22} = 0\) and \(a_{13} + a_{23} = 0\), so that

$$\displaystyle \begin{aligned}A=\begin{pmatrix} 0 & a_{12} & a_{13}\\ a_{12} & 0 & -a_{13}\\ a_{13} & -a_{13} & a_{33}\end{pmatrix}. \end{aligned}$$
Although A is nonsingular, it is not positive definite, as one may notice by taking the principal minor, i.e., the determinant of

$$\displaystyle \begin{aligned}\begin{pmatrix} 0 & a_{12}\\ a_{12} & 0\end{pmatrix}, \end{aligned}$$
which has negative value, so A cannot be of the form \(PP^T\). We conclude that the algebra of ternions is not solenoidal over the reals.

We note that, had we considered the ternions over the complex field, the complexified ternion algebra would be solenoidal, since the only condition required on A is that it be nonsingular (see Sect. 1.2, paragraph 4). We now consider the conditions in (2.11):

Taking into account the relations satisfied by \(u_1, u_2, u_3\) we obtain the following system:

$$\displaystyle \begin{aligned} a_{11}u_1^2+a_{13}u_1u_3=0\\ a_{12}u_1u_2+a_{22}u_2^2+a_{23}u_3u_2=0\\ a_{12}u_1u_3+a_{22}u_2u_3+a_{23}u_3^2=0 \end{aligned}$$

which yields

$$\displaystyle \begin{aligned} a_{11}u_1+a_{13}u_3=0\\ a_{22}u_2+a_{23}u_3=0\\ a_{12}u_3=0. \end{aligned}$$

We deduce that \(a_{11} = a_{13} = a_{22} = a_{23} = a_{12} = 0\), so that A, and hence P, is singular. We conclude that the algebra of ternions is not bisolenoidal.

As an algebra of fourth order, we consider the algebra of bicomplex numbers.

Example 2.17

The condition (2.14) is

These conditions translate into

$$\displaystyle \begin{aligned} a_{11}-a_{22}-a_{33}+a_{44}=0\\ a_{12}-a_{34}=0\\ a_{13}-a_{24}=0\\ a_{14}+a_{23}=0, \end{aligned}$$

which shows, after some further computations to verify the positivity, that the bicomplex algebra is solenoidal. Since the algebra is commutative, the conditions (2.11) follow from the previous ones and thus the algebra is also bisolenoidal. Thus right total derivability implies monogenicity (right and left), as shown in Example 2.12 via (2.34). Note that one can take P = A = I, the identity matrix, in the above computations, and this is in accordance with the fact that (right) total derivability implies (right) monogenicity with respect to the same basis.
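Indeed, the choice P = A = I can be checked at a glance (an editorial verification, with the basis 1, i, j, k = ij of the bicomplex numbers): condition (2.14) reduces to

$$\displaystyle \begin{aligned}uu_{-1}=\sum_{k=1}^4 u_k^2=1+i^2+j^2+(ij)^2=1-1-1+1=0. \end{aligned}$$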

Example 2.18

As already noticed in Sect. 1.1, the algebra of quaternions is solenoidal. This can also be verified by direct computations, which lead to

$$\displaystyle \begin{aligned} a_{11}-a_{22}-a_{33}-a_{44}=0\\ a_{12}=0\\ a_{13}=0\\ a_{14}=0, \end{aligned}$$

and thus to the matrix

$$\displaystyle \begin{aligned}A=\begin{pmatrix} a_{22}+a_{33}+a_{44} & 0 & 0 & 0\\ 0 & a_{22} & a_{23} & a_{24}\\ 0 & a_{23} & a_{33} & a_{34}\\ 0 & a_{24} & a_{34} & a_{44}\end{pmatrix}. \end{aligned}$$
It is possible to choose the matrix A so that it is nonsingular and positive definite; for instance, one may take A = diag(3, 1, 1, 1).