
9.1 Introduction

In the late 1960s and early 1970s, there was considerable interest in discrete dyadic analysis based on the discrete Walsh functions, as can be seen from the organization of a series of international workshops dedicated exclusively to this subject; for more information, see the discussion in [39]. From the abstract harmonic analysis point of view, these functions are the kernels of the discrete Walsh transform, which can be viewed as the Fourier transform on finite dyadic groups, i.e., on sets of binary n-tuples equipped with the componentwise addition modulo 2, the operation usually called EXOR in switching theory.

Dr. James Edmund Gibbs from the National Physical Laboratory, Teddington, Middlesex, UK, was deeply involved in these research activities and his work in this area led to the definition of the so-called logical derivative, or the discrete Gibbs derivative, as an operator enabling the differentiation of piecewise constant functions, such as the Walsh functions [13, 14].

By looking for a counterpart of the Walsh (Fourier) analysis in the Boolean domain, J.E. Gibbs defined a transform that he called the Instant Fourier transform [15]. The name comes from the possibility of computing the related spectrum instantaneously through a network consisting of AND and EXOR logic gates. A study of the relationships between discrete Walsh series and the Reed-Muller expressions of Boolean functions was carried out several years later [30]. In the framework of this research, it was recognized that the set of basis functions in terms of which the Instant Fourier transform is defined is identical to the set of basis functions used in the definition of the Reed-Muller (RM) expressions. The spectral interpretation of the Reed-Muller expressions, where the coefficients in the RM-expressions are viewed as spectral coefficients of a Fourier-like transform over the Galois field GF(2), i.e., with computations modulo 2, or equivalently in terms of the logical operations AND and EXOR, was championed by Ph.W. Besslich [25], and the same approach was adopted by other authors [18, 19].

The main difference between these two concepts, the Reed-Muller (RM) transform and the Instant Fourier transform, is in the underlying algebraic structures, the Boolean ring for the Reed-Muller expressions, and the Gibbs algebra defined in terms of the Gibbs multiplication for the Instant Fourier transform [15]. The main idea behind the definition of this new algebraic structure was to derive properties of the Instant Fourier transform (equivalently, the Reed-Muller transform) corresponding to the properties of the classical Fourier transform on the real line.

A generalization of the Gibbs algebra to multiple-valued functions presented in [31] provided a way to define a transform, called the Reed-Muller-Fourier (RMF) transform, that can be viewed as a generalization of the Reed-Muller transform for binary functions to multiple-valued functions. Professor Claudio Moraga immediately realized that the RMF-transform is an interesting concept and already in 1993 started working on the development of the related theory, putting considerable effort into the refinement of its definition and the study of its properties for different classes of multiple-valued functions [24, 42]. That was the motivation for selecting the Reed-Muller-Fourier (RMF) transform as the subject of this chapter in the book dedicated to Professor Moraga on the occasion of his 80th birthday. In particular, we discuss different methods to compute the Reed-Muller-Fourier spectra of multiple-valued functions efficiently in time and space. The definition of the so-called algorithms with constant geometry for the Reed-Muller-Fourier transform, as well as various factorizations of the RMF-matrix derived from factorizations of the Pascal matrices, can be considered as new contributions to the field.

9.2 Reed-Muller-Fourier Transform

In this section, we present the basic definitions of the Reed-Muller-Fourier transform.

9.2.1 The Gibbs Algebra

Denote by G a group of n-ary p-valued sequences \(x=(x_{1}, \ldots , x_{n})\) with the group operation defined as componentwise addition modulo p. Thus, for all \(x=(x_{1}, \ldots , x_{n}), y=(y_{1}, \ldots , y_{n}) \in G\),

$$\begin{aligned} x \oplus y= & {} (x_{1}, \ldots , x_{n}) \oplus (y_{1}, \ldots , y_{n}) \\= & {} ((x_{1} \oplus y_{1}), \ldots , (x_{n} \oplus y_{n})) \bmod {p}. \end{aligned}$$

Denote by \(Z_{q}\), with \(q = p^{n}\), the set of the first q non-negative integers. For each \(x \in G\), the p-adic contraction is defined as the mapping \(\sigma : G \rightarrow Z_{q}\) given by

$$\begin{aligned} \sigma (x) = \sum _{i=1}^{n} x_{i}p^{n-i}. \end{aligned}$$

We denote by P(G) the set of all functions \(f: G \rightarrow Z_{p}\). In P(G), we define the addition as the pointwise addition modulo p,

$$\begin{aligned} (f \oplus g)(x) = f(x) \oplus g(x), \forall x \in G, \end{aligned}$$

and the multiplication as the convolutionwise (Gibbs) multiplication [15], where functions in P(G) are identified with functions on \(Z_{q}\) through the p-adic contraction \(\sigma \),

$$\begin{aligned} (fg)(0)= & {} 0 \\ (fg)(x)= & {} \sum _{s=0}^{\sigma (x)-1} f(\sigma (x)-1-s)g(s), \forall x \in G, \;x \ne 0. \end{aligned}$$
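For illustration, the following is a minimal Python sketch of this operation (our own code), representing functions in P(G) by their value vectors of length \(q = p^{n}\) indexed by \(\sigma (x)\):

```python
import numpy as np

def gibbs_mul(f, g, p):
    # Convolutionwise (Gibbs) multiplication of two functions in P(G),
    # given as value vectors of length q = p**n indexed by sigma(x).
    q = len(f)
    h = np.zeros(q, dtype=int)
    # (fg)(0) = 0 by definition; for x != 0 a finite convolution is computed.
    for x in range(1, q):
        h[x] = sum(f[x - 1 - s] * g[s] for s in range(x)) % p
    return h
```

For example, for \(p=3\) and \(n=1\), gibbs_mul([2, 2, 2], [2, 2, 2], 3) returns [0, 1, 2], i.e., \(W^{2}(x) = \sigma (x) \bmod p\) for the function W defined next.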

Denote by W a particular function in P(G) such that

$$\begin{aligned} W(x) = p-1, \quad \forall x \in G, \end{aligned}$$

and by S the set of the first q positive integer powers of W, i.e., \(S=\{W^{1}, \ldots , W^{q}\}\), with the exponentiation performed in terms of the Gibbs multiplication defined above. The set S is a basis of P(G) with respect to which the Reed-Muller-Fourier (RMF) transform is defined as [31]

$$\begin{aligned} f = \sum _{i=0}^{q-1} c_{i}W^{i+1} \bmod {p}, \end{aligned}$$

where \(c_{i} \in Z_{p}\).
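Continuing the sketch above, the powers of W can be generated with gibbs_mul, and for \(p=3\), \(n=1\), they reproduce the columns of the matrix \(\mathbf{X}_{3}(1)\) given in Sect. 9.2.2:

```python
import numpy as np

p, q = 3, 3
W = np.full(q, p - 1)                 # the constant function W(x) = p - 1
S = [W]                               # S collects W**1, W**2, ..., W**q
for _ in range(q - 1):
    S.append(gibbs_mul(S[-1], W, p))  # Gibbs exponentiation
# S = [[2, 2, 2], [0, 1, 2], [0, 0, 2]]: the columns of X_3(1) below.
```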

As shown in [31], if a p-valued variable \(x_{i}\) is considered as a particular function in P(G), \(f(x_{1}, \ldots , x_{n}) = x_{i}\), then each function in P(G) can be expressed in terms of the variable \(x_{n}\) as

$$\begin{aligned} f(x_{1}, \ldots , x_{n}) = \sum _{i=0}^{q-1} c_{i}\phi _{i+1}(x_{n}) \bmod {p}, c_{i} \in \{0, \ldots , p-1\}, \end{aligned}$$

where

$$\begin{aligned} \phi _{i}(x_{n}) = \left\{ \begin{array}{ll} (p-1)\cdot x_{n}^{\lceil i/2 \rceil }, &{} i\text{-odd, } \\ x_{n}^{i/2}, &{} i\text{-even, } \end{array} \right. \end{aligned}$$

where \(\lceil i/2 \rceil \) denotes the integer part of i/2, and \(x_{n}^{r}=x_{n} \cdot x_{n} \cdots x_{n}\) (r times), with the multiplication defined as the convolutionwise (Gibbs) multiplication in P(G).

This definition corresponds to the interpretation of the RMF-expressions as an analogue of the Fourier series expressions with functions \(\phi _{i}(x_{n})\) viewed as counterparts of the exponential functions. The following alternative definition allows us to consider the RMF-expressions as polynomial expressions. In this way, the RMF-expressions can be viewed as a generalization of the Reed-Muller expressions for binary logic functions to multiple-valued functions. At the same time, being computed modulo p, with no restrictions to p prime, the RMF-expressions can be viewed as a counterpart of GF-expressions for multiple-valued functions [34].

Definition 9.2.1

(Reed-Muller-Fourier expressions) Any p-valued n-variable function \(f(x_{1}, \ldots , x_{n})\) can be expanded in powers of variables \(x_{i}\), \(i=1,\ldots ,n\) as

$$\begin{aligned} f(x_{1}, \ldots , x_{n}) = (-1)^{n}\sum _{a \in V^{n}} q(a)x_{1}^{*a_{1}}\ldots x_{n}^{*a_{n}}, \end{aligned}$$

where \(V^{n}\) is the set of all p-valued n-tuples, \(q(a) \in \{0,1,2, \ldots , p-1\}\), and the exponentiation is defined by \(x^{*0}=-1\) modulo p, while for \(i>0\), \(x^{*i}\) is determined in terms of the convolutionwise (Gibbs) multiplication defined above.

When discussed in the Gibbs algebra, the properties of the RMF-transform correspond more closely to the properties of the classical Fourier transform than to those of the Galois field (GF) transforms [34]. We point out the following two chief differences between the RMF-transforms and the GF-transforms, since these features influence the performance of the computation methods discussed below.

  1.

    The operations used in the definition of the RMF-transform are modulo p operations, even when p is not a prime. Therefore, in computations, these operations can be performed faster than the field operations used in GF-transforms. Note that modulo operations are provided in contemporary programming environments such as CUDA for computing on Graphics Processing Units (GPUs) [26].

  2.

    The RMF-transform matrix is a triangular matrix, but exhibits a Kronecker product structure in the same way as the GF-transform matrices. In this respect, the triangular structure of this transform matrix corresponds directly to that of the RM-transform matrix, which is the RMF-transform matrix for \(p=2\). Recall that the GF-transform matrices are not triangular and have a larger number of non-zero elements than the RMF-transform matrices.

Although the entire theory of RMF-transforms is valid for any p, for simplicity, the discussion of the computation methods will be given for the example of ternary functions (\(p=3\)); therefore, we provide the corresponding examples of the Reed-Muller-Fourier transform for this case. Examples for \(p=4\), elaborated in detail, are presented in [34].

9.2.2 RMF Expressions for \(p=3\)

Consider the ring of integers modulo 3 defined in terms of addition and multiplication modulo 3. For a uniform presentation of all the operations that will be used, we present these operations in Table 9.1. In order to generate the product terms of three-valued variables corresponding to those appearing in the RM-expressions for switching functions and GF-expressions for multiple-valued functions, we define in Table 9.2 the 3AND multiplication and 3EXP exponentiation, denoted by \(\odot \) and \(*\), respectively. Note that the 3AND table is actually the multiplication modulo 3 table multiplied by 2, and \(2 = -1\) modulo 3.

We generate a set of \(3^{n}\) product terms given in the matrix notation by

$$\begin{aligned} \mathbf{X}_{3}(n) = \bigotimes _{i=1}^{n} \left[ \begin{array}{lll} x_{i}^{*0}&x_{i}^{*1}&x_{i}^{*2} \end{array} \right] = \bigotimes _{i=1}^{n} \left[ \begin{array}{lll} 2&x_{i}&x_{i}^{*2} \end{array} \right] , \end{aligned}$$

with 3AND and 3EXP applied to the three-valued variables. In matrix notation, the basis functions are expressed as columns of the matrix

$$\begin{aligned} \mathbf{X}_{3}(1) = \left[ \begin{array}{lll} 2 &{} 0 &{} 0 \\ 2 &{} 1 &{} 0 \\ 2 &{} 2 &{} 2 \end{array} \right] . \end{aligned}$$
Table 9.1 Addition and multiplication modulo 3
Table 9.2 3EXP and 3AND

Definition 9.2.2

Each n-variable 3-valued logic function given by the truth-vector \(\mathbf{F}(n)=[f(0), \ldots , f(3^{n}-1)]^{T}\) can be represented as a Reed-Muller-Fourier (RMF) polynomial given by

$$\begin{aligned} f(x_{1}, \ldots , x_{n}) = (-1)^{n}\mathbf{X}_{3}(n)\mathbf{S}_{f,3}(n), \end{aligned}$$

with calculations modulo 3, where the vector of RMF-coefficients \(\mathbf{S}_{f,3}(n)=[a_{0}, \ldots ,a_{3^{n}-1}]^{T}\) is determined by the matrix relation

$$\begin{aligned} \mathbf{S}_{f,3}(n)= & {} \mathbf{R}_{3}(n)\mathbf{F}(n), \\ \mathbf{R}_{3}(n)= & {} \bigotimes _{i=1}^{n} \mathbf{R}_{3}(1), \end{aligned}$$

where

$$\begin{aligned} \mathbf{R}_{3}(1) = \mathbf{X}_{3}^{-1}(1) = 2\left[ \begin{array}{lll} 1 &{} 0 &{} 0 \\ 1 &{} 2 &{} 0 \\ 1 &{} 1 &{} 1 \end{array} \right] . \end{aligned}$$

Note that \(\mathbf{R}_{3}(1)\) is its own inverse over GF(3), i.e., \(\mathbf{R}_{3}(1) = \mathbf{X}_{3}(1)\) when computed modulo 3.

Example 9.2.1

For \(p=3\) and \(n=2\), the RMF-transform matrix is

$$\begin{aligned} \mathbf{R}_{3}(2)= & {} 2\left[ \begin{array}{rrr} 1 &{} 0 &{} 0 \\ 1 &{} 2 &{} 0 \\ 1 &{} 1 &{} 1 \end{array} \right] \otimes 2\left[ \begin{array}{rrr} 1 &{} 0 &{} 0 \\ 1 &{} 2 &{} 0 \\ 1 &{} 1 &{} 1 \end{array} \right] \end{aligned}$$
$$\begin{aligned}= & {} \left[ \begin{array}{rrrrrrrrr} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 2 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 0 &{} 0 &{} 2 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 2 &{} 0 &{} 2 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 2 &{} 2 &{} 2 &{} 0 &{} 0 &{} 0 \\ 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 \\ 1 &{} 2 &{} 0 &{} 1 &{} 2 &{} 0 &{} 1 &{} 2 &{} 0 \\ 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 1 \end{array} \right] = \mathbf{X}_{3}(2). \end{aligned}$$

The basis functions used to define the RMF-expressions for \(p=3\) are defined as

$$\begin{aligned} 2\mathbf{X}_{3}(2) = [2, x_{2}, x_{2}^{*2}, x_{1}, x_{1} \odot x_{2}, x_{1}\odot x_{2}^{*2}, x_{1}^{*2}, x_{1}^{*2}\odot x_{2}, x_{1}^{*2} \odot x_{2}^{*2}]. \end{aligned}$$

Example 9.2.2

For the ternary function \(f(x_{1},x_{2})=x_{1}\oplus x_{2}\), specified by the function vector \(\mathbf{F}=[0,1,2,1,2,0,2,0,1]^{T}\), the RMF-spectrum is \(\mathbf{S}_{f,3}=[0,2,0,2,0,0,0,0,0]^{T}\).
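This spectrum can be checked directly from Definition 9.2.2; a small numpy sketch (our own code, with the matrices of the examples above):

```python
import numpy as np

R1 = np.array([[2, 0, 0],
               [2, 1, 0],
               [2, 2, 2]])                    # R_3(1) mod 3
R2 = np.kron(R1, R1) % 3                      # R_3(2) of Example 9.2.1
F  = np.array([0, 1, 2, 1, 2, 0, 2, 0, 1])    # f(x1, x2) = x1 + x2 mod 3
S  = (R2 @ F) % 3                             # [0, 2, 0, 2, 0, 0, 0, 0, 0]
```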

Due to its Kronecker product structure, the RMF-matrix \(\mathbf{R}_{3}(n)\) can be defined recursively as

$$\begin{aligned} \mathbf{R}_{3}(n) = 2\left[ \begin{array}{lll} \mathbf{R}_{3}(n-1) &{} \mathbf{0}(n-1) &{} \mathbf{0}(n-1) \\ \mathbf{R}_{3}(n-1) &{} 2\mathbf{R}_{3}(n-1) &{} \mathbf{0}(n-1) \\ \mathbf{R}_{3}(n-1) &{} \mathbf{R}_{3}(n-1) &{} \mathbf{R}_{3}(n-1) \end{array} \right] , \end{aligned}$$
(9.1)

where \(\mathbf{0}(n-1)\) is the \((3^{n-1} \times 3^{n-1})\) matrix all of whose elements are equal to 0.

9.3 Methods for Computing the RMF-Transform

The definition of RMF-expressions and the corresponding arithmetic counterparts can be uniformly extended to functions for p non-prime.

In applications, efficient computation of RMF-spectra is essential for the feasibility of algorithms based on the RMF-coefficients. Therefore, in the following sections we discuss different methods to compute the RMF-coefficients over vectors and over decision diagrams as the underlying data structures.

9.3.1 Cooley-Tukey Algorithms for RMF-Expressions

Elements of the set P(G) (the basis functions in RMF-expressions) can be generated as elements of a matrix defined by the Kronecker product of the basic matrices corresponding to the individual variables, \(\mathbf{X}_{i}(1) = \left[ \begin{array}{cccc} x_{i}^{*0}&x_{i}^{*1}&\ldots&x_{i}^{*(p-1)} \end{array} \right] \), \(i=1, \ldots , n\). Due to the properties of the Kronecker product, the matrix \(\mathbf{R}_{p}(n)\), inverse to \(\mathbf{X}_{p}(n)\), can also be generated as the Kronecker product of the basic matrices \(\mathbf{R}_{p}(1)\) inverse to \(\mathbf{X}_{i}(1)\). Thus,

$$\begin{aligned} \mathbf{R}_{p}(n) = \bigotimes _{i=1}^{n} \mathbf{R}_{p}(1). \end{aligned}$$
(9.2)

Due to this Kronecker product representation, the RMF-transform matrix can be factorized as

$$\begin{aligned} \mathbf{R}_{p}(n) = \mathbf{C}_{1}\mathbf{C}_{2} \cdots \mathbf{C}_{n}, \end{aligned}$$
(9.3)

where factor matrices \(\mathbf{C}_{i}\) are defined as

$$\begin{aligned} \mathbf{C}_{i} = \bigotimes _{k=1}^{n} \mathbf{R}_{k}, \end{aligned}$$

with

$$\begin{aligned} \mathbf{R}_{k} = \left\{ \begin{array}{ll} \mathbf{R}_{p}(1), &{} k=i, \\ \mathbf{I}_{p}, &{} \text{ otherwise, } \end{array} \right. \end{aligned}$$

where \(\mathbf{I}_{p}\) is the \((p \times p)\) identity matrix. See for instance [33].

The computations specified by \(\mathbf{R}_{p}(1)\) can be represented by a flow-graph called the butterfly, by analogy with the corresponding FFT-like algorithms.

Example 9.3.1

Figure 9.1 shows the basic RMF-matrices for \(p=3\) and \(p=4\) and the corresponding butterfly operations.

Fig. 9.1
figure 1

Basic RMF-transform matrices for \(p=3\) and \(p=4\)

Matrices \(\mathbf{C}_{i}\) are obviously sparse, since all factors of the Kronecker product except the one for \(k=i\) are identity matrices, and \(\mathbf{R}_{p}(1)\) is itself a triangular matrix. It follows that, as in the case of FFT, this factorization leads to a fast computation algorithm consisting of butterflies performing the computations specified by \(\mathbf{R}_{p}(1)\).
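For concreteness, the following is a minimal Python sketch of the resulting FFT-like procedure for \(p=3\) (the implementation and its names are ours; the butterfly rows follow \(\mathbf{R}_{3}(1)\)):

```python
import numpy as np

def fast_rmf3(F):
    # Fast RMF-transform for p = 3: n steps of butterflies R_3(1),
    # performed in-place on a vector of length N = 3**n.
    F = np.asarray(F, dtype=int).copy() % 3
    N = F.size
    span = N // 3
    while span >= 1:                      # one step per variable
        for start in range(0, N, 3 * span):
            for j in range(start, start + span):
                a, b, c = F[j], F[j + span], F[j + 2 * span]
                F[j]            = (2 * a) % 3                  # row [2, 0, 0]
                F[j + span]     = (2 * a + b) % 3              # row [2, 1, 0]
                F[j + 2 * span] = (2 * a + 2 * b + 2 * c) % 3  # row [2, 2, 2]
        span //= 3
    return F
```

For instance, fast_rmf3([0, 1, 2, 1, 2, 0, 2, 0, 1]) returns [0, 2, 0, 2, 0, 0, 0, 0, 0], in agreement with Example 9.2.2.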

The algorithms derived from this factorization belong to the class of Good-Thomas FFT algorithms, since the factorization of \(\mathbf{R}_{p}(n)\) in (9.3) is usually reported as the Good-Thomas factorization, referring to the work of Thomas [45] and Good [16]. In the case of FFT, the Good-Thomas algorithms are tightly related to the Cooley-Tukey algorithms, differing from them in the correspondence between single and multiple dimensional indexing [17] and in the requirement that the factors of the transform length be mutually prime. The difference in dealing with twiddle factors in these two classes of FFT algorithms is important in the case of the DFT; however, it is irrelevant in the case of multiple-valued functions due to the computations in finite fields.

Example 9.3.2

For \(p=3\) and \(n=3\), the RMF-transform matrix is defined as

$$\begin{aligned} \mathbf{R}_{3}(3) = \mathbf{R}_{3}(1) \otimes \mathbf{R}_{3}(1) \otimes \mathbf{R}_{3}(1). \end{aligned}$$

Then,

$$\begin{aligned} \mathbf{R}_{3}(3) = \mathbf{C}_{1} \mathbf{C}_{2} \mathbf{C}_{3}, \end{aligned}$$

with

$$\begin{aligned} \mathbf{C}_{1}= & {} \mathbf{R}_{3}(1) \otimes \mathbf{I}_{1} \otimes \mathbf{I}_{1}, \\ \mathbf{C}_{2}= & {} \mathbf{I}_{1} \otimes \mathbf{R}_{3}(1) \otimes \mathbf{I}_{1}, \\ \mathbf{C}_{3}= & {} \mathbf{I}_{1} \otimes \mathbf{I}_{1} \otimes \mathbf{R}_{3}(1), \end{aligned}$$

where \(\mathbf{I}_{1}\) is the \((3 \times 3)\) identity matrix.

Figure 9.2 shows the flow-graph of the Cooley-Tukey algorithm for \(p=3\) and \(n=3\) based on this factorization of \(\mathbf{R}_{3}(3)\).

Fig. 9.2
figure 2

Flow-graph of the Cooley-Tukey algorithm for the RMF-transform for \(p=3\) and \(n=3\)

9.3.2 Constant Geometry Algorithms for RMF-Transform

In this section, we derive the so-called constant geometry algorithms for computing coefficients in RMF-expressions.

We consider the rows of the basic RMF-matrix in GF(p) as vectors of length p

$$\begin{aligned} \mathbf{R}_{p}(1) = (\mathbf{X}_{p}(1))^{-1} = \left[ \begin{array}{rrrr} q_{0,0} &{} q_{0,1} &{} \cdots &{} q_{0,p-1} \\ q_{1,0} &{} q_{1,1} &{} \cdots &{} q_{1,p-1} \\ \vdots &{} \vdots &{} \vdots &{} \vdots \\ q_{p-1,0} &{} q_{p-1,1} &{} \cdots &{} q_{p-1,p-1} \end{array} \right] = \left[ \begin{array}{r} \mathbf{q}_{0} \\ \mathbf{q}_{1} \\ \vdots \\ \mathbf{q}_{p-1} \end{array} \right] . \end{aligned}$$

We also define the zero row vector of p elements, \(\mathbf{0}=[0, 0, \ldots , 0]\).

With this notation, the RMF-transform matrix can be factorized as

$$\begin{aligned} \mathbf{R}_{p}(n)= & {} (\mathbf{X}_{p}(n))^{-1} = \mathbf{Q}^{n} \end{aligned}$$

where

$$\begin{aligned} \mathbf{Q} = \left[ \begin{array}{r} \mathbf{Q}_{0} \\ \mathbf{Q}_{1} \\ \vdots \\ \mathbf{Q}_{p-1} \end{array} \right] , \end{aligned}$$

where the \(\mathbf{Q}_{i}\) are block-diagonal matrices whose non-zero elements are copies of the row vector \(\mathbf{q}_{i}\) of p elements,

$$\begin{aligned} \mathbf{Q}_{i} = diag(\mathbf{q}_{i}, \mathbf{q}_{i}, \ldots , \mathbf{q}_{i}). \end{aligned}$$

Thus, over GF(p), each \(\mathbf{Q}_{i}\), containing \(p^{n-1}\) copies of \(\mathbf{q}_{i}\), is a \((p^{n-1} \times p^{n})\) matrix. Since \(\mathbf{Q}\) consists of the p matrices \(\mathbf{Q}_{i}\) stacked one above the other, it is a \((p^{n} \times p^{n})\) matrix, and the matrix \(\mathbf{R}_{p}(n)\) is obtained as its n-th power.

Example 9.3.3

For \(\mathbf{R}_{3}(1)\), we have

$$\begin{aligned} \mathbf{q}_{0}= & {} [1,0,0], \\ \mathbf{q}_{1}= & {} [1,2,0], \\ \mathbf{q}_{2}= & {} [1,1,1], \\ \mathbf{0}= & {} [0,0,0]. \end{aligned}$$

Therefore, for \(n=3\),

$$\begin{aligned} \mathbf{Q} = \left[ \begin{array}{r} \mathbf{Q}_{0} \\ \mathbf{Q}_{1} \\ \mathbf{Q}_{2} \end{array} \right] , \qquad \mathbf{Q}_{i} = \left[ \begin{array}{rrrr} \mathbf{q}_{i} &{} \mathbf{0} &{} \cdots &{} \mathbf{0} \\ \mathbf{0} &{} \mathbf{q}_{i} &{} \cdots &{} \mathbf{0} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \mathbf{0} &{} \mathbf{0} &{} \cdots &{} \mathbf{q}_{i} \end{array} \right] , \quad i = 0, 1, 2, \end{aligned}$$

where each \(\mathbf{Q}_{i}\) contains \(3^{n-1} = 9\) copies of \(\mathbf{q}_{i}\) on the diagonal, so that \(\mathbf{Q}\) is a \((27 \times 27)\) matrix over GF(3).

Then,

$$\begin{aligned} \mathbf{R}_{3}(3) = \mathbf{Q}^{3}. \end{aligned}$$
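The construction of \(\mathbf{Q}\) is easily mechanized; the following Python sketch (our own illustrative code) builds \(\mathbf{Q}\) for given p and n and checks that \(\mathbf{Q}^{n} = \mathbf{R}_{p}(n)\) for \(p=3\) and \(n=3\):

```python
import numpy as np

def constant_geometry_Q(p, n, R1):
    # Q stacks p blocks Q_i; block Q_i holds p**(n-1) copies of row q_i of
    # R_p(1), the k-th copy occupying columns p*k, ..., p*k + p - 1.
    N, m = p**n, p**(n - 1)
    Q = np.zeros((N, N), dtype=int)
    for i in range(p):
        for k in range(m):
            Q[i * m + k, p * k : p * k + p] = R1[i]
    return Q

R1 = np.array([[2, 0, 0], [2, 1, 0], [2, 2, 2]])   # R_3(1)
Q = constant_geometry_Q(3, 3, R1)
lhs = np.linalg.matrix_power(Q, 3) % 3
rhs = np.kron(np.kron(R1, R1), R1) % 3             # R_3(3) from (9.2)
assert np.array_equal(lhs, rhs)
```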

Example 9.3.4

Figure 9.3 shows the flow-graph of the constant geometry algorithm for \(p=3\) and \(n=3\).

Fig. 9.3
figure 3

Flow-graph of the constant geometry algorithm for \(p=3\) and \(n=3\)

9.3.3 The Difference Between Algorithms

In this section we briefly summarize the main features of Cooley-Tukey algorithms and constant geometry algorithms.

In both algorithms, the computations are determined by the basic transform matrix \(\mathbf{R}_{p}(1)\) and organized as butterflies performed in parallel within each step, with the steps performed sequentially. The number of steps is equal to the number of variables of the function whose spectrum is computed, since in each step the RMF-transform with respect to one variable is performed.

In Cooley-Tukey algorithms and algorithms based on the Good-Thomas factorization, the factor matrices \(\mathbf{C}_{i}\) are mutually different, so the address arithmetic differs from step to step: in each step, the addresses of the memory locations from which the data are read (fetched) have to be recalculated. This recomputation of input addresses in each step is a drawback of the algorithm. The good feature is that the output data, the results of the computation in a step, are saved in the same locations from which the input data were read. It follows that the computations can be performed in-place, meaning that the memory requirements are minimal and equal to those for storing the function vector to be processed. As noticed above, the computations performed in each step of the algorithm consist of \(p^{n-1}\) sets of identical computations defined by the basic RMF-transform matrix \(\mathbf{R}_{p}(1)\). When graphically represented, these computations are called butterflies by analogy with similar computations in FFT.

The distance between the memory locations from which the data are fetched differs from step to step, ranging from \(p^{n-1}\) in the first step down to 1 in the n-th step of the algorithm. Due to this, the time to perform a step differs from step to step. The impact of this feature is experimentally analyzed for the GF-transforms of ternary and quaternary functions in [12, 38]. Similar conclusions can be derived for the RMF-transforms, since the difference between the GF- and RMF-transforms relevant for this consideration is just in the number of non-zero elements in the basic transform matrices; in the case of RMF-transforms, the number of non-zero elements is smaller.

The chief idea in constant geometry algorithms is to perform the computations over data fetched from identical memory locations in all steps. It follows that the address arithmetic is simpler, since the addresses of the memory locations from which data are fetched are computed in the first step and reused in all subsequent steps. This results in a speed-up of the computations; however, the drawback is that in-place computation is impossible, and twice as much memory is required to store the input and output data of each step.

9.3.4 Computing the RMF-Transform over Decision Diagrams

In this section, we present a method for computing the RMF-spectrum over decision diagrams as the underlying data structure for representing the functions whose spectra are computed. To this end, for the sake of completeness of the presentation, we first introduce the concept of Multiple-place decision diagrams (MDDs) used to represent multiple-valued functions. MDDs are a generalization of Binary Decision Diagrams (BDDs) [6], which are defined in terms of the so-called Shannon expansion rules. Alternatively, BDDs can be viewed as graphical representations of disjunctive normal forms, i.e., Sum-of-Product (SOP) expressions, viewed as particular examples of functional expressions for Boolean functions [33]. Therefore, we first introduce the corresponding concepts: the generalized Shannon expansion and the multiple-valued counterpart of the disjunctive normal form.

9.3.5 Multiple-Place Decision Diagrams

We will illustrate the derivation of the generalized Shannon expansion for multiple-valued functions by the following definitions and examples.

Definition 9.3.1

(Characteristic functions) For a multiple-valued variable \(x_{j}\) taking values in the set \(\{0, 1, \ldots , p-1\}\), \(j = 1, \ldots , n\), the characteristic functions \(J_{i}(x_{j})\), \(i = 0, 1, \ldots , p-1\), are defined as \(J_{i}(x_{j}) = 1\) for \(x_{j} = i\), and \(J_{i}(x_{j}) = 0\) for \(x_{j} \ne i\).

Example 9.3.5

For \(p = 3\) and \(n = 2\), the characteristic functions \(J_{i}(x_{j})\) are given in Table 9.3.

Table 9.3 Characteristic functions for \(p=3\) and \(n=2\)

Definition 9.3.2

(Generalized Shannon expansion) The generalized Shannon expansion for ternary logic functions is defined as

$$\begin{aligned} f = J_{0}(x_{i})f_{0} + J_{1}(x_{i})f_{1} + J_{2}(x_{i})f_{2}, \end{aligned}$$
(9.4)

where \(f_{j}\), \(j = 0, 1, 2\), are the co-factors of f for \(x_{i} = j\).
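As a quick illustration of (9.4), the following Python sketch verifies the expansion for a single ternary variable (the function values are arbitrary):

```python
import numpy as np

def J(i, x):
    # Characteristic function J_i(x) of Definition 9.3.1.
    return (np.asarray(x) == i).astype(int)

x = np.arange(3)                  # the three values of a ternary variable
f = np.array([0, 2, 1])           # an arbitrary ternary function f(x)
# Generalized Shannon expansion: f = J_0(x) f_0 + J_1(x) f_1 + J_2(x) f_2.
expanded = sum(J(i, x) * f[i] for i in range(3))
assert np.array_equal(expanded, f)
```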

The recursive application of the generalized Shannon expansion rule to all the variables in a given function f results in a functional expression whose graphical representation yields decision diagrams that are an analogue of the Binary decision diagrams (BDDs) [27], generalized to the representation of multiple-valued functions. These diagrams are called Multiple-place decision trees (MDTs) and Multiple-place decision diagrams (MDDs) [29].

Example 9.3.6

For \(p = 3\) and \(n = 2\), by expanding a given \(f(x_{1},x_{2})\) with respect to \(x_{1}\),

$$\begin{aligned} f(x_{1}, x_{2}) = J_{0}(x_{1})f(x_{1} = 0, x_{2}) + J_{1}(x_{1})f(x_{1} = 1, x_{2}) + J_{2}(x_{1})f(x_{1} = 2, x_{2}). \end{aligned}$$

After the application of the generalized Shannon expansion with respect to \(x_{2}\), it follows that

$$\begin{aligned} f(x_{1}, x_{2})= & {} J_{0}(x_{2})(J_{0}(x_{1})f(x_{1} = 0, x_{2} = 0) + J_{1}(x_{1})f(x_{1} = 1, x_{2} = 0) \\&+ J_{2}(x_{1})f(x_{1} = 2, x_{2} = 0)) + J_{1}(x_{2})(J_{0}(x_{1})f(x_{1} = 0, x_{2} = 1) \\&+ J_{1}(x_{1})f(x_{1} = 1, x_{2} = 1) + J_{2}(x_{1})f(x_{1} = 2, x_{2} = 1)) \\&+ J_{2}(x_{2})(J_{0}(x_{1})f(x_{1} = 0, x_{2} = 2) + J_{1}(x_{1})f(x_{1} = 1, x_{2} = 2) \\&+ J_{2}(x_{1})f(x_{1} = 2, x_{2} = 2)) \\= & {} J_{0}(x_{2})J_{0}(x_{1})f(x_{1} = 0, x_{2} = 0) + J_{0}(x_{2})J_{1}(x_{1})f(x_{1} = 1, x_{2} = 0) \\&+ J_{0}(x_{2})J_{2}(x_{1})f(x_{1} = 2, x_{2} = 0) + J_{1}(x_{2})J_{0}(x_{1})f(x_{1} = 0, x_{2} = 1) \\&+ J_{1}(x_{2})J_{1}(x_{1})f(x_{1} = 1, x_{2} = 1) + J_{1}(x_{2})J_{2}(x_{1})f(x_{1} = 2, x_{2} = 1) \\&+ J_{2}(x_{2})J_{0}(x_{1})f(x_{1} = 0, x_{2} = 2) + J_{2}(x_{2})J_{1}(x_{1})f(x_{1} = 1, x_{2} = 2) \\&+ J_{2}(x_{2})J_{2}(x_{1})f(x_{1} = 2, x_{2} = 2). \end{aligned}$$

Figure 9.4 shows the Multiple-place decision tree (MDT) for ternary functions of two variables.

Fig. 9.4
figure 4

MDT for f in Example 9.3.6

For a function of n variables, the decision tree has \(n+1\) levels, and each level consists of nodes to which the same variable is assigned. The first level contains a single node called the root node. The \((n+1)\)-th level consists of constant nodes holding the coefficients of the functional expression that the decision tree represents graphically.

9.3.6 Reduction of Decision Trees

Decision diagrams are derived by the reduction of decision trees, i.e., by eliminating the redundant information expressed in terms of isomorphic subtrees and the corresponding subdiagrams. The reduction is accomplished by sharing isomorphic subtrees and deleting redundant information in the decision tree. Two subtrees are considered isomorphic if

  1.

    They are rooted in nodes at the same level,

  2.

    The constant nodes of the subtrees represent identical subvectors \(\mathbf{V}_{i}\) in the vector of values of constant nodes \(\mathbf{V}\).

This definition includes different reduction rules used in different decision diagrams for either binary or multiple-valued functions, as well as bit-level and word-level decision diagrams [32].

The minimum possible isomorphic subtrees are equal constant nodes. In this case, the function represented has equal values at the points corresponding to these equal-valued constant nodes.

The maximum possible isomorphic subtrees are p equal subtrees rooted at the nodes pointed by the outgoing edges of the root node. In that case, the function f is independent of the variable assigned to the root node.

Definition 9.3.3

(MDD reduction rules)

  1.

    If all descendant nodes of a node are identical, then delete the node and connect the incoming edges of the deleted node to the corresponding successor. The label of such an incoming edge is re-determined as the product of the label of the initial incoming edge with the sum of the labels of the outgoing edges of the deleted node.

  2.

    Share isomorphic subtrees, i.e., if there are isomorphic subtrees, keep a single subtree and redirect to it the incoming edges of all other isomorphic subtrees.

Fig. 9.5
figure 5

MDD for the function f in Example 9.3.7

Definition 9.3.4

(Cross points) A cross point is a point where an edge longer than one crosses a level in the decision diagram.

Cross points are useful in expressing the impact of deleted nodes, which is important to take into account when computing over decision diagrams or when realizing functions represented by decision diagrams. A cross point is illustrated by the decision diagram in Fig. 9.5 for the function f in Example 9.3.7.

In a decision tree, edges connect nodes at successive levels, and we say that the length of such edges is 1. Due to the reduction, in a decision diagram, edges longer than one, i.e., connecting nodes at non-successive levels, can appear. For example, the length of an edge connecting a node at the \((i-1)\)-th level with a node at the \((i+1)\)-th level is two.

Nodes to which the same decision variable is assigned form a level in the decision tree or the diagram.

A path consists of nodes at different levels, with a single node at each level, from the root node to a constant node. Thus, each path connects the root node with a single node at each level, including the level of constant nodes; the length of a path is the sum of the lengths of the edges it consists of.

9.3.7 Computing the RMF-Transform over MDDs

The computations of the FFT-like algorithms described above can also be performed with decision diagram representations as the underlying data structure.

Example 9.3.7

Figure 9.5 shows the MDD for the function f specified by the function vector

$$\begin{aligned} \mathbf{F}=\left[ 0,1,2,2,2,1,0,0,0,0,0,0,1,1,2,2,2,1,1,1,1,0,0,0,0,0,0\right] ^{T}. \end{aligned}$$

In this figure, the symbol \(a+b\) means that both edges labeled by a and b point to the same node.

The steps of the FFT-like algorithm described above can be performed over this MDD by processing each node in the diagram with the basic butterfly operation specified by \(\mathbf{R}_{3}(1)\). Processing a node means that the inputs to the butterfly operation are the subfunctions represented by the subdiagrams rooted at the nodes pointed to by the outgoing edges of the processed node. For clarity of presentation, we express the impact of deleted nodes through cross points, shown by small circles in Fig. 9.5 and viewed as crossings of a path in the diagram with an imaginary line connecting nodes at the same level. In practical programming implementations, these computations are avoided and the procedure is simplified by using properties of the performed transforms. In particular, the computations are reduced to transforming the related subfunctions and padding with zeros. Depending on the transform, this can be followed by the multiplication of the constant value of a terminal node, or of the subfunction pointed to by an edge of the processed node, by a constant determined by the length of the path between these two nodes. The explanation for this implementation is the following. If a node is deleted from the MDD, the outgoing edges of this node point to identical subfunctions. Therefore, in the considered case of ternary functions, where nodes have three outgoing edges, the subfunction represented by the deleted node has three identical parts; thus, this node represents a periodic subfunction or a constant. Then, due to the properties of the transforms, the spectrum of a constant function is a delta function, while the spectrum of a periodic function is the delta function of length equal to the period, Kronecker multiplied by the spectrum of the periodically repeated subfunction.

If the nodes and cross points are labeled as in Fig. 9.5, the RMF-coefficients for the considered function f are computed as follows. Note that in the computations below the butterfly is applied in the scaled form \(2\mathbf{R}_{3}(1) \bmod 3 = \left[ \begin{array}{lll} 1 &{} 0 &{} 0 \\ 1 &{} 2 &{} 0 \\ 1 &{} 1 &{} 1 \end{array} \right] \); the constant factor does not affect the structure of the computations.

Step 1

$$\begin{aligned} \begin{array}{l} \mathbf{c}_{5} = \mathbf{R}_{3}(1)\left[ \begin{array}{r} 0 \\ 1 \\ 2 \end{array} \right] = \left[ \begin{array}{r} 0 \\ 2 \\ 0 \end{array} \right] \quad \mathbf{c}_{6} = \mathbf{R}_{3}(1)\left[ \begin{array}{r} 2 \\ 2 \\ 1 \end{array} \right] = \left[ \begin{array}{r} 2 \\ 0 \\ 2 \end{array} \right] \\ \\ \mathbf{c}_{7} = \mathbf{R}_{3}(1)\left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right] = \left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right] \quad \mathbf{c}_{8} = \mathbf{R}_{3}(1)\left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right] = \left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right] \\ \\ \mathbf{c}_{9} = \mathbf{R}_{3}(1)\left[ \begin{array}{r} 1 \\ 1 \\ 2 \end{array} \right] = \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] \quad \mathbf{c}_{10} = \mathbf{R}_{3}(1)\left[ \begin{array}{r} 1 \\ 1 \\ 1 \end{array} \right] = \left[ \begin{array}{r} 1 \\ 0 \\ 0 \end{array} \right] \\ \\ \mathbf{c}_{11} = \mathbf{R}_{3}(1)\left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right] = \left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right] . \end{array} \end{aligned}$$
Fig. 9.6
figure 6

MDDs for subfunctions \(c_{i}\), \(i=5,6,7,8,9,10,11\) in Example 9.3.7

Fig. 9.7
figure 7

MDDs for subfunctions \(c_{i}\), \(i=2,3,4\) in Example 9.3.7

Step 2

$$\begin{aligned} \mathbf{c}_{2}= & {} \mathbf{R}_{3}(1)\left[ \begin{array}{r} \mathbf{c}_{5} \\ \mathbf{c}_{6} \\ \mathbf{c}_{7} \end{array} \right] = \left[ \begin{array}{l} \left[ \begin{array}{r} 0 \\ 2 \\ 0 \end{array} \right] \\ \left[ \begin{array}{r} 0 \\ 2 \\ 0 \end{array} \right] + 2 \left[ \begin{array}{r} 2 \\ 0 \\ 2 \end{array} \right] \\ \left[ \begin{array}{r} 0 \\ 2 \\ 0 \end{array} \right] + \left[ \begin{array}{r} 2 \\ 0 \\ 2 \end{array} \right] + \left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right] \end{array} \right] = \left[ \begin{array}{r} 0 \\ 2 \\ 0 \\ 1 \\ 2 \\ 1 \\ 2 \\ 2 \\ 2 \end{array} \right] \\ \end{aligned}$$
$$\begin{aligned} \mathbf{c}_{3}= & {} \mathbf{R}_{3}(1)\left[ \begin{array}{r} \mathbf{c}_{8} \\ \mathbf{c}_{9} \\ \mathbf{c}_{6} \end{array} \right] = \left[ \begin{array}{l} \left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right] \\ \left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right] + 2 \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] \\ \left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right] + \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] + \left[ \begin{array}{r} 2 \\ 0 \\ 2 \end{array} \right] \end{array} \right] = \left[ \begin{array}{r} 0 \\ 0 \\ 0 \\ 2 \\ 0 \\ 2 \\ 0 \\ 0 \\ 0 \end{array} \right] \\ \mathbf{c}_{4}= & {} \mathbf{R}_{3}(1)\left[ \begin{array}{r} \mathbf{c}_{10} \\ \mathbf{c}_{11} \\ \mathbf{c}_{11} \end{array} \right] = \left[ \begin{array}{l} \left[ \begin{array}{r} 1 \\ 0 \\ 0 \end{array} \right] \\ \left[ \begin{array}{r} 1 \\ 0 \\ 0 \end{array} \right] + 2 \left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right] \\ \left[ \begin{array}{r} 1 \\ 0 \\ 0 \end{array} \right] + \left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right] + \left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right] \end{array} \right] = \left[ \begin{array}{r} 1 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right] . \end{aligned}$$
Fig. 9.8
figure 8

RMFDD for f in Example 9.3.7

Step 3

$$\begin{aligned} \mathbf{c}_{1}= & {} \mathbf{R}_{3}(1)\left[ \begin{array}{r} \mathbf{c}_{2} \\ \mathbf{c}_{3} \\ \mathbf{c}_{4} \end{array} \right] \\= & {} \left[ \begin{array}{l} \left[ 0,2,0,1,2,1,2,2,2 \right] ^{T} \\ \left[ 0,2,0,1,2,1,2,2,2\right] ^{T} + 2\left[ 0,0,0,2,0,2,0,0,0\right] ^{T} \\ \left[ 0,2,0,1,2,1,2,2,2\right] ^{T} + \left[ 0,0,0,2,0,2,0,0,0\right] ^{T}+\left[ 1,0,0,1,0,0,1,0,0\right] ^{T} \end{array} \right] ^{T} \\= & {} \left[ 0,2,0,1,2,1,2,2,2,0,2,0,2,2,2,2,2,2,1,2,0,1,2,0,0,2,2\right] ^{T}. \end{aligned}$$

Each step of the computation can be represented by MDDs, which can then be combined into the MDD for the RMF-coefficients. From the spectral interpretation of decision diagrams, after converting the meaning of the nodes and of the corresponding labels at the edges, this MDD for the RMF-spectrum of f becomes the RMFDD for f [33]. Figure 9.8 shows this RMFDD for the considered function f. It can be noticed that, counting from left to right, the third and fourth nodes for \(x_{3}\), as well as the sixth and seventh nodes for the same variable, represent subfunctions that are identical up to multiplication by 2. Thus, the diagram can be simplified by allowing edges with multiplicative attributes, in the same way as is done in BDDs with negated edges; see for instance [27].

9.4 Algorithms Derived from Pascal Matrix Factorizations

In this section, we discuss various factorizations of the RMF-matrix based on its relationships with the Pascal matrix [10]. The Pascal matrix, defined as the matrix of binomial coefficients, is an infinite matrix. In finite domains, however, the term Pascal matrix \(\mathbf{P}_{i}\) refers to the \((i \times i)\) matrix consisting of the first i rows and i columns of the infinite Pascal matrix.

We will distinguish between the Pascal matrix and the relatively recently defined Pascal transform matrix [1], which is the Pascal matrix with a minus sign assigned to the elements of every second column. Note that in modulo p arithmetic, the negative sign of an integer can be interpreted as multiplication by \(p-1\).

9.4.1 RMF-Matrix and Pascal Matrices

There is a strong relationship between the RMF-matrix and both the Pascal matrix and the Pascal transform matrix, which can be expressed as follows.

Observation 9.4.1

The \((p^{n} \times p^{n})\) RMF-matrix is derived from the Pascal matrix of the same dimensions by multiplying every second column by \(p-1\) and reducing the elements of the resulting matrix modulo p. Equivalently, the RMF-matrix is derived from the Pascal transform matrix by replacing the minus sign with multiplication by \(p-1\) and reducing the elements of the resulting matrix modulo p.

Notice that the second column of the \((p^{n} \times p^{n})\) Pascal matrix is the sequence \(\{0,1,2, \ldots , p^{n}-1\}\), which, as remarked above, can be generated by the Gibbs exponentiation of the constant sequence all of whose elements are equal to \(p-1\). When the elements of this sequence are computed modulo p, we get the n-th variable of p-valued functions. In [31], it is pointed out that the n-th variable can be used to generate the RMF-matrix, provided that the elements of every second column of the resulting matrix are multiplied by \(p-1\). The same observation holds for any p. From Observation 9.4.1, it directly follows that a \((p^{n} \times p^{n})\) Pascal matrix \(\mathbf{P}_{n}\) can be converted into the RMF-matrix \(\mathbf{R}_{p}(n)\) of the same dimensions by the following procedure.

Procedure 9.4.1

  1.

    Multiply the elements of every second column of the matrix \(\mathbf{P}_{n}\) by \(p-1\), i.e., perform the columnwise Hadamard product with the vector \([1, (p-1), 1, (p-1), \ldots , (p-1), 1]\).

  2.

    Compute the elements of the produced matrix modulo p. The resulting matrix is the \((p^{n} \times p^{n})\) RMF-matrix.
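Procedure 9.4.1 translates directly into a few lines of code; the following numpy sketch (names are ours) implements it:

```python
import numpy as np
from math import comb

def rmf_from_pascal(p, n):
    # Procedure 9.4.1: columnwise Hadamard product of the Pascal matrix
    # with [1, p-1, 1, p-1, ...], followed by reduction modulo p.
    N = p**n
    P = np.array([[comb(i, j) for j in range(N)] for i in range(N)])
    v = np.array([1 if j % 2 == 0 else p - 1 for j in range(N)])
    return (P * v) % p
```

For example, rmf_from_pascal(3, 2) reproduces the matrix \(\mathbf{R}_{3}(2)\) of Example 9.2.1.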

A similar procedure can be applied to the Pascal transform matrix. Therefore, various factorizations proposed for the Pascal matrix and the Pascal transform matrix can be simply modified into factorizations of the RMF-matrix, as will be illustrated by the following examples. Some of these factorizations can be useful for efficient computation of the RMF-spectra, since they allow parallel implementations exploiting both data and task parallelism.

Example 9.4.1

For \(p=3\) and \(n=2\), the Pascal matrix is

$$\begin{aligned} \mathbf{P}_{3^{2}}=\mathbf{P}_{9}= \left[ \begin{array}{rrrrrrrrr} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 2 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 3 &{} 3 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 4 &{} 6 &{} 4 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 5 &{} 10 &{} 10 &{} 5 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 6 &{} 15 &{} 20 &{} 15 &{} 6 &{} 1 &{} 0 &{} 0 \\ 1 &{} 7 &{} 21 &{} 35 &{} 35 &{} 21 &{} 7 &{} 1 &{} 0 \\ 1 &{} 8 &{} 28 &{} 56 &{} 70 &{} 56 &{} 28 &{} 8 &{} 1 \end{array} \right] . \end{aligned}$$

The columnwise Hadamard product \(\odot \) with the vector \(\mathbf{v}=[1,2,1,2,1,2,1,2,1]\) results in

$$\begin{aligned} \mathbf{v} \odot \mathbf{P}_{9} = \left[ \begin{array}{rrrrrrrrr} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 2 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 4 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 6 &{} 3 &{} 2 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 8 &{} 6 &{} 8 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 10 &{} 10 &{} 20 &{} 5 &{} 2 &{} 0 &{} 0 &{} 0 \\ 1 &{} 12 &{} 15 &{} 40 &{} 15 &{} 12 &{} 1 &{} 0 &{} 0 \\ 1 &{} 14 &{} 21 &{} 70 &{} 35 &{} 42 &{} 7 &{} 2 &{} 0 \\ 1 &{} 16 &{} 28 &{} 112 &{} 70 &{} 112 &{} 28 &{} 16 &{} 1 \end{array} \right] , \end{aligned}$$

which by computing its elements modulo 3 produces the RMF-matrix \(\mathbf{R}_{3}(2)\) in Example 9.2.1.

Example 9.4.2

For \(p=5\) and \(n=1\), the Pascal matrix \(\mathbf{P}_{5}\) is

$$\begin{aligned} \mathbf{P}_{5} = \left[ \begin{array}{rrrrr} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 2 &{} 1 &{} 0 &{} 0 \\ 1 &{} 3 &{} 3 &{} 1 &{} 0 \\ 1 &{} 4 &{} 6 &{} 4 &{} 1 \end{array} \right] . \end{aligned}$$

The columnwise Hadamard product with \(\mathbf{v}=[1,4,1,4,1]\) produces

$$\begin{aligned} \mathbf{v} \odot \mathbf{P}_{5} = \left[ \begin{array}{rrrrr} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 4 &{} 0 &{} 0 &{} 0 \\ 1 &{} 8 &{} 1 &{} 0 &{} 0 \\ 1 &{} 12 &{} 3 &{} 4 &{} 0 \\ 1 &{} 16 &{} 6 &{} 16 &{} 1 \end{array} \right] . \end{aligned}$$

Computing the elements of this matrix modulo 5 produces the RMF-matrix for \(p=5\).

The Pascal matrix \(\mathbf{P}_{5}\) can be factorized as [48]

$$\begin{aligned} \mathbf{P}_{5} = \left[ \begin{array}{rrrrr} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ -2 &{} 1 &{} 1 &{} 0 &{} 0 \\ -5 &{} -1 &{} 2 &{} 1 &{} 0 \\ -9 &{} -6 &{} 1 &{} 3 &{} 1 \end{array} \right] \cdot \left[ \begin{array}{rrrrr} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 0 &{} 0 &{} 0 \\ 2 &{} 1 &{} 1 &{} 0 &{} 0 \\ 3 &{} 2 &{} 1 &{} 1 &{} 0 \\ 5 &{} 3 &{} 2 &{} 1 &{} 1 \end{array} \right] . \end{aligned}$$

When these two matrices are computed modulo 5, we get a factorization of the Pascal matrix modulo 5,

$$\begin{aligned} \mathbf{P}_{5} \bmod 5 = \left[ \begin{array}{rrrrr} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 3 &{} 1 &{} 1 &{} 0 &{} 0 \\ 0 &{} 4 &{} 2 &{} 1 &{} 0 \\ 1 &{} 4 &{} 1 &{} 3 &{} 1 \end{array} \right] \cdot \left[ \begin{array}{rrrrr} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 0 &{} 0 &{} 0 \\ 2 &{} 1 &{} 1 &{} 0 &{} 0 \\ 3 &{} 2 &{} 1 &{} 1 &{} 0 \\ 0 &{} 3 &{} 2 &{} 1 &{} 1 \end{array} \right] , \end{aligned}$$

from which the RMF-matrix \(\mathbf{R}_{5}(1)\) is obtained by the columnwise Hadamard product with \(\mathbf{v}\) as above.

9.4.2 Computing Algorithms

Certain factorizations of Pascal matrices can serve as a basis for deriving computing algorithms for the RMF-spectra.

Example 9.4.3

The Pascal matrix \(\mathbf{P}_{5}\) can be factorized in terms of four binary matrices as [21]

$$\begin{aligned} \mathbf{P}_{5} = \mathbf{V}_{1}\cdot \mathbf{V}_{2} \cdot \mathbf{V}_{3} \cdot \mathbf{V}_{4}, \end{aligned}$$

where

$$\begin{aligned} \mathbf{V}_{1} = \left[ \begin{array}{rrrrr} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 &{} 1 \end{array} \right] , \mathbf{V}_{2} = \left[ \begin{array}{rrrrr} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 &{} 1 \end{array} \right] , \mathbf{V}_{3} = \left[ \begin{array}{rrrrr} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 &{} 1 \end{array} \right] , \mathbf{V}_{4} = \left[ \begin{array}{rrrrr} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 &{} 1 \end{array} \right] . \end{aligned}$$

Their product is

$$\begin{aligned} \mathbf{V}_{1234} = \left[ \begin{array}{rrrrr} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 2 &{} 1 &{} 0 &{} 0 \\ 1 &{} 3 &{} 3 &{} 1 &{} 0 \\ 1 &{} 4 &{} 6 &{} 4 &{} 1 \end{array} \right] . \end{aligned}$$

The columnwise Hadamard product with \(\mathbf{v}=[1,4,1,4,1]\) produces

$$\begin{aligned} \mathbf{V}_{1234-new} = \left[ \begin{array}{rrrrr} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 4 &{} 0 &{} 0 &{} 0 \\ 1 &{} 8 &{} 1 &{} 0 &{} 0 \\ 1 &{} 12 &{} 3 &{} 4 &{} 0 \\ 1 &{} 16 &{} 6 &{} 16 &{} 1 \end{array} \right] . \end{aligned}$$

This matrix, after computing its elements modulo 5, becomes the RMF-matrix \(\mathbf{R}_{5}(1)\). Since the columnwise Hadamard product with \(\mathbf{v}\) can be transferred to the Hadamard product of \(\mathbf{v}\) with the function vector \(\mathbf{F}\) to be processed, this factorization leads to a computing algorithm for the RMF-transform. Figure 9.9 shows the flow-graph of this algorithm for the RMF-transform for \(p=5\) and \(n=1\).

Fig. 9.9
figure 9

Flow-graph of the algorithm in Example 9.4.3

The factorization in Example 9.4.3 can be extended to any p and n, as Example 9.4.4 illustrates for \(p=3\) and \(n=2\). A good feature of this factorization is that there are no multiplications, and the additions are performed over neighboring elements of the function vector; a sketch of the resulting computation scheme is given after Example 9.4.4 below. A drawback of the algorithm is the number of steps, \(p^{n}-1\), which are performed serially. Another good feature is that the computations can be performed in-place, as in the case of the Cooley-Tukey algorithms; therefore, the algorithm is space efficient.

Example 9.4.4

The RMF-matrix \(\mathbf{R}_{3}(2)\) can be factorized into 8 matrices \(\mathbf{R}_{i}\), \(i=1,2,\ldots , 8\), whose elements are 0 except for the elements on the main diagonal and the elements of the subdiagonal in the i bottom rows. In other words, these factor matrices are derived from the identity matrix by replacing with 1 the 0 values of the subdiagonal elements starting from the \((p^{n}-i)\)-th row. For example, the factor matrix \(\mathbf{R}_{4}\) is

$$\begin{aligned} \mathbf{R}_{4} = \left[ \begin{array}{rrrrrrrrr} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 1 \end{array} \right] , \end{aligned}$$

The other factor matrices are defined in the same way.

The product of these matrices \(\mathbf{R}_{i}\) becomes the RMF-matrix \(\mathbf{R}_{3}(2)\) after the multiplication of the elements of every second column by 2.
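The following Python sketch (our own illustrative code) implements this scheme: the column scaling of Procedure 9.4.1 is moved onto the function vector, and the additions prescribed by the factor matrices are applied serially:

```python
import numpy as np

def rmf_via_bidiagonal_factors(F, p):
    # Computes the RMF-spectrum as in Examples 9.4.3 and 9.4.4:
    # p**n - 1 serial steps, each adding neighboring elements (mod p).
    x = np.asarray(F, dtype=int).copy()
    N = x.size
    v = np.array([1 if j % 2 == 0 else p - 1 for j in range(N)])
    x = (v * x) % p                        # absorb the column scaling into F
    for k in range(N - 1, 0, -1):          # factor matrices, right to left
        for r in range(N - 1, N - 1 - k, -1):
            x[r] = (x[r] + x[r - 1]) % p   # add the neighboring element
    return x
```

For \(p=3\) and \(n=2\), rmf_via_bidiagonal_factors(F, 3) agrees with \(\mathbf{R}_{3}(2)\mathbf{F} \bmod 3\).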

As stated above, the recently introduced discrete Pascal transform is defined by a transform matrix which is the Pascal matrix with a minus sign assigned to the elements of every second column [1]. In [31], the RMF-transform matrix is generated as the matrix whose second column is the n-th p-valued variable; the other columns are obtained by the Gibbs exponentiation of the second column, with every second column multiplied by \(p-1\). This multiplication corresponds to the minus sign in the case of the Pascal transform. Since computing the elements of the Pascal transform matrix modulo p gives the RMF-matrix, it follows that the decompositions used to define fast algorithms for the Pascal transform can also be used to compute the RMF-transform.

Example 9.4.5

The Pascal transform matrix for \(p=4\) is

$$\begin{aligned} \mathbf{P}_{4} = \left[ \begin{array}{rrrr} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} -1 &{} 0 &{} 0 \\ 1 &{} -2 &{} 1 &{} 0 \\ 1 &{} -3 &{} 3 &{} -1 \end{array} \right] . \end{aligned}$$

It can be factorized as [28]

$$\begin{aligned} \mathbf{P}_{4} = \mathbf{Y}_{1}\cdot \mathbf{Y}_{2}\cdot \mathbf{Y}_{3} \end{aligned}$$

where

$$\begin{aligned} \mathbf{Y}_{1} = \left[ \begin{array}{rrrr} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 &{} -1 \end{array} \right] , \mathbf{Y}_{2} = \left[ \begin{array}{rrrr} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} -1 &{} 0 \\ 0 &{} 0 &{} 1 &{} -1 \end{array} \right] , \mathbf{Y}_{3} = \left[ \begin{array}{rrrr} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} -1 &{} 0 &{} 0 \\ 0 &{} 1 &{} -1 &{} 0 \\ 0 &{} 0 &{} 1 &{} -1 \end{array} \right] . \end{aligned}$$

This factorization represents a modification of the factorization of the Pascal matrix, corresponding to the modification of the Pascal matrix used to define the Pascal transform matrix. Returning to the modulo computations, we get a factorization of the RMF-matrix.

If the factor matrices \(\mathbf{Y}_{1}\), \(\mathbf{Y}_{2}\), and \(\mathbf{Y}_{3}\) are computed modulo 4, the matrices

$$\begin{aligned} \mathbf{W}_{1} = \left[ \begin{array}{rrrr} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 &{} 3 \end{array} \right] , \mathbf{W}_{2} = \left[ \begin{array}{rrrr} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 3 &{} 0 \\ 0 &{} 0 &{} 1 &{} 3 \end{array} \right] , \mathbf{W}_{3} = \left[ \begin{array}{rrrr} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 3 &{} 0 &{} 0 \\ 0 &{} 1 &{} 3 &{} 0 \\ 0 &{} 0 &{} 1 &{} 3 \end{array} \right] , \end{aligned}$$

are obtained, whose product produces the RMF-matrix for \(p=4\).
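A quick check of this factorization in Python (a sketch; the expected product is the matrix obtained from \(\mathbf{P}_{4}\) by Procedure 9.4.1):

```python
import numpy as np

W1 = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 1, 3]])
W2 = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 1, 3, 0], [0, 0, 1, 3]])
W3 = np.array([[1, 0, 0, 0], [1, 3, 0, 0], [0, 1, 3, 0], [0, 0, 1, 3]])
R4 = (W1 @ W2 @ W3) % 4
# R4 = [[1,0,0,0], [1,3,0,0], [1,2,1,0], [1,1,3,3]]: the RMF-matrix for p = 4.
```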

Another factorization of the RMF-matrix can be derived from a particular way of defining the Pascal matrix as the matrix exponential of an appropriately selected matrix.

For a real or complex \((n \times n)\) matrix \(\mathbf{X}\), the exponential of \(\mathbf{X}\), \(e^{\mathbf{X}}\), is the \((n \times n)\) matrix determined by the power series

$$\begin{aligned} e^{\mathbf{X}} = \sum _{r=0}^{\infty } \frac{1}{r!}\mathbf{X}^{r} = \mathbf{I} + \mathbf{X} + \frac{1}{2!}\mathbf{X}^{2} + \frac{1}{3!}\mathbf{X}^{3} + \cdots , \end{aligned}$$
(9.5)

where \(\mathbf{I}\) is the \((n \times n)\) identity matrix.

Consider the \((p^{n} \times p^{n})\) matrix \(\mathbf{X}\) whose main subdiagonal is the sequence \(\{1,\ldots , p^{n}-1\}\). This is a nilpotent matrix and, therefore, the power series of its exponential is finite having \(p^{n}\) terms. The resulting matrix is the \((p^{n} \times p^{n})\) Pascal matrix [35].

The procedure for converting the Pascal matrix into the RMF-matrix discussed above can be applied to the representation of the Pascal matrix as a matrix exponential, and it follows that the RMF-spectrum of a given function vector \(\mathbf{F}\) can be computed as the sum of the products of the summands in \(e^{\mathbf{X}}\) with the vector \(\mathbf{F}\). This observation offers the possibility of performing the required computations on a parallel architecture, for example, a Graphics Processing Unit (GPU) based architecture, by exploiting task parallelism. A task is viewed as computing the product of a summand \(\frac{1}{k!}\mathbf{X}^{k}\), \(k=1,2, \ldots , p^{n}-1\), with the function vector \(\mathbf{F}\). The addition of the obtained vectors, followed by the componentwise addition with \(\mathbf{F}\) due to the appearance of \(\mathbf{I}\) in (9.5), produces the RMF-spectrum. These tasks can be combined in various ways depending on the number of available processors. Due to the simplicity of the summands \(\frac{1}{k!}\mathbf{X}^{k}\), the computation can be fast. It should be noticed that, with regard to the values of the elements in the summands, this way of computing resembles computing each RMF-spectral coefficient separately, which is also an acceptable way of computing spectral coefficients, especially when dealing with large functions, as discussed for the discrete Walsh transform in [7, 8, 11].

Example 9.4.6

For \(p=3\) and \(n=2\), the RMF-transform matrix can be obtained as

$$\begin{aligned} \mathbf{R}_{3}(2) = {} & \mathbf{I}_{3}(2) + \mathbf{D}_{3}(2) + \frac{1}{2!}\mathbf{D}_{3}(2)^{2} + \frac{1}{3!}\mathbf{D}_{3}(2)^{3} + \frac{1}{4!}\mathbf{D}_{3}(2)^{4} \\ & + \frac{1}{5!}\mathbf{D}_{3}(2)^{5} + \frac{1}{6!}\mathbf{D}_{3}(2)^{6} + \frac{1}{7!}\mathbf{D}_{3}(2)^{7} + \frac{1}{8!}\mathbf{D}_{3}(2)^{8}, \end{aligned}$$

where \(\mathbf{I}_{3}(2)\) is the \((9 \times 9)\) identity matrix, and \(\mathbf{D}_{3}(2)\) is a matrix of the same dimension all of whose elements are 0 except the elements on the main subdiagonal which take values from the ordered sequence \(\{1,2,3,4,5,6,7,8\}\). Thus,

$$\begin{aligned} \mathbf{D}_{3}(2) = \left[ \begin{array}{rrrrrrrrr} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 3 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 5 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 6 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 7 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 \end{array} \right] . \end{aligned}$$

After computing the matrices \(\mathbf{D}_{3}(2)^{i}\) for \(i=2,3, \ldots , 8\), we proceed as follows:

1. Multiply each matrix \(\mathbf{D}_{3}(2)^{i}\) by \(\frac{1}{i!}\), \(i=1,2,\ldots ,8\),

2. Perform a columnwise Hadamard multiplication of the produced matrices with \(\mathbf{v}=[1,2,1,2,1,2,1,2,1]\),

3. Reduce the elements of these matrices modulo 3.

The sum of these matrices, together with the identity matrix \(\mathbf{I}_{3}(2)\) converted in the same way, i.e., multiplied columnwise by \(\mathbf{v}\), produces the RMF-transform matrix, and it follows that the RMF-spectrum of a given function f specified by its function vector \(\mathbf{F}\) can be computed as the sum of the products of these matrices with \(\mathbf{F}\), as illustrated in the sketch below.
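The following short sketch (Python/NumPy) transcribes these steps literally and assembles the \((9 \times 9)\) RMF-matrix. Treating the identity term as also subject to the columnwise multiplication by \(\mathbf{v}\) is our reading, consistent with the alternating diagonal of the RMF-matrix for \(p=4\) obtained above.

```python
from math import factorial
import numpy as np

p, N = 3, 9
D = np.diag(np.arange(1, N), k=-1)       # D_3(2): subdiagonal {1,...,8}
v = np.array([1, 2, 1, 2, 1, 2, 1, 2, 1])

R = np.eye(N, dtype=int) * v % p         # identity term, columnwise scaled by v
Di = np.eye(N, dtype=int)
for i in range(1, N):
    Di = Di @ D
    R = (R + (Di // factorial(i)) * v) % p   # steps 1-3 applied to D_3(2)^i

print(R)
# First rows: [1 0 0 0 0 0 0 0 0], [1 2 0 0 0 0 0 0 0], [1 1 1 0 0 0 0 0 0], ...
# last row:   [1 1 1 1 1 1 1 1 1]
```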

It should be noticed that the non-zero elements in the matrix \(\mathbf{D}_{3}(2)^{i}\) are equal to the elements of the i-th subdiagonal in the RMF-matrix. In other words, the RMF-matrix is decomposed into the sum of \(p^{n}\) matrices, each matrix representing one subdiagonal of it.

Since the involved matrices \(\mathbf{D}_{p}(n)^{i}\) have a small number of non-zero elements, they can be combined to reduce the number of matrices to be multiplied with \(\mathbf{F}\). The number of resulting matrices can be adjusted to the number of processors provided. In this way, we can produce a series of possible factorizations offering a trade-off between the number of processors and the speed of computing.
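One conceivable grouping strategy is sketched below (Python; the round-robin policy and the helper name group_summands are our illustrative choices, not from the text): the converted summand matrices are partitioned into as many groups as there are processors, and the matrices within each group are added beforehand, so that every processor performs a single matrix-vector multiplication with \(\mathbf{F}\).

```python
def group_summands(matrices, num_procs):
    """Round-robin the summand matrices into one group per processor and
    add each group, so every processor multiplies a single combined
    matrix with the function vector F."""
    groups = [matrices[i::num_procs] for i in range(num_procs)]
    return [sum(g[1:], g[0]) for g in groups if g]

# Usage: combined = group_summands(list_of_converted_summands, num_procs=4)
```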

9.5 Concluding Remarks

Multiple-valued logic is an area to which Professor Claudio Moraga has been making important contributions for more than 45 years. Spectral logic in general, and various applications of spectral methods to the analysis of multiple-valued functions, the synthesis of related circuits, and certain optimization problems, can especially be emphasized as topics of his research interest in this area. For that reason, in this chapter, we discussed the Reed-Muller-Fourier transform as a particular spectral transform tailored for the processing of multiple-valued functions, sharing at the same time some important properties of the related operators in classical Fourier analysis and their generalizations.

As a continuation of recent discussions with Professor Moraga and a common friend, Professor Jaakko Astola, about the relationships between the RMF-transform and Pascal matrices [37], we used these relationships to devise certain factorizations of the RMF-matrix which might potentially lead to fast computing algorithms with efficient implementations on different many-core and multi-processor architectures.

9.6 Personal Note

After over 30 years of joint work with Claudio, enjoying all the time a gentle guidance, almost invisible on the surface but deep and very strong inside, and above all his friendship, I could write many pages on what we did, explored, traveled together, organized, and, most importantly, learned. In spite of my desire to express all my gratitude to Claudio here, I am aware that I should spare the reader my reminiscences, and I will point out a single detail that I consider essentially important, not just for me, but also as a message for future generations. Through it I learned how an experienced researcher should support young researchers in their first steps into the research world.

I found some publications by Claudio in 1976, while preparing my BSc thesis on the Walsh and Haar discrete transforms, and wanted to learn from them. As was customary practice at that time, I sent him letters typed on a mechanical typewriter, in my broken English, asking for reprints. In this way we started our communication, and I got the impression that Claudio is a person ready to help and support youngsters in their attempts to learn. By continuously looking for his new publications, I realized that Claudio was regularly contributing to the international symposia on multiple-valued logic (ISMVL). In 1984, it happened that I was lucky and a paper of mine submitted to the ISMVL in Winnipeg, Canada, was accepted. A couple of persons close to me easily concluded that I was now in trouble, having been so crazy as to dare to submit something to a conference without having the slightest idea how I could possibly manage to attend it if the paper were eventually accepted. After some days of deep thinking and much wondering, I addressed Claudio by sending a letter explaining shortly, but quite openly, the situation, with the paper and the comments by the reviewers enclosed. I directly and openly asked him whether, if he possibly liked the idea discussed in the paper, he could accept to be the co-author and present it. I easily assumed that he was going to attend the symposium, knowing that he was a regular participant. The absence of any answer for some time provoked many interesting, comical but friendly, comments on my (mis)understanding of the scientific and research world. I was greatly rewarded for all this ribbing when I received a letter from Claudio saying that he would accept to be the co-author and present the paper under the condition that I accept his corrections to the manuscript. He attached a long list of improvements of various statements and reformulations, so many that the paper was basically completely rewritten. Just the main idea was preserved, emphasized in a very appropriate manner and expressed much better than I could have comprehended it myself. Further, enclosed was a research technical report of the kind that Claudio usually prepared and published at Lehrstuhl 1 (Informatik) of the Dortmund University, Dortmund, Germany. The report contained the rewritten version of the paper as specified in the corrections list. It was easy to realize that this was a gentle push to direct me towards a proper attitude in research work, and I, of course, very gladly and gratefully followed it.

For all these years, I have thought this answer by Claudio was very fortunate for me. If it had been some other, possibly less friendly answer, or, equally important, a less determined and decisive response, not accompanied by a complete solution of the matter, I am not sure how I might have reacted and responded. Without his strong support and the manner in which it was offered, it might easily have happened that I would have stopped submitting to international conferences and trying to find a way to participate in them.

That was the first lecture Claudio gave me personally, and it determined my attitude towards research work and communicating with researchers. I am very grateful that I have continuously been learning from Claudio for many years, and I wish to continue in the same way for a long time.