1 Introduction

Reduced ordered binary decision diagrams (ROBDDs) are a very efficient data structure for representing sets and relations. This has been demonstrated by numerous applications of RelView, a ROBDD-based tool for the manipulation and visualization of relations and for relational programming (see [1]). The use of ROBDDs often leads to remarkable computational power, in particular if a hard problem is solved by computing a very large set of ‘interesting objects’ or ‘potential solutions’ and subsequently selecting an (optimal) solution among them.

In many applications the set of potential solutions is a subset \(\mathcal {R}\) of the power set \(2^X\) for some set X. A method for solving such a task is to start with a logical formula \(\varphi (Y)\) that characterizes whether a set \(Y \in 2^X\) belongs to \(\mathcal {R}\). This formula is then transformed (using correspondences between logical and relation-algebraic constructions) into the form \(\mathfrak {r}_Y\), where \(\mathfrak {r}\) is a relation-algebraic expression that evaluates to a relational vector in the sense of [10]. Finally, \(\mathfrak {r}\) is translated into the programming language of RelView for execution. For some typical examples, see [2, 3, 5]. If Y is represented by a vector v, then very often \(\varphi (Y)\) is equivalent to an inclusion \(\mathfrak {v}\subseteq \mathfrak {w}\), where \(\mathfrak {v}\) and \(\mathfrak {w}\) are relation-algebraic expressions over v that are related to the expression \(\mathfrak {r}\) in a close and simple way. Based on this observation, in [4] a general method was developed that allows one to obtain the expression \(\mathfrak {r}\) directly from the inclusion \(\mathfrak {v}\subseteq \mathfrak {w}\) whenever \(\mathfrak {v}\) and \(\mathfrak {w}\) are of a specific syntactic form, called column-wise extendible vector expressions. Normally this leads to a substantial simplification of the development of \(\mathfrak {r}\).

In certain situations X is a Cartesian product \(X_1\) \(\times \) \(X_2\), which means that \(\mathcal {R}\) is a subset of the set of relations with source \(X_1\) and target \(X_2\). Examples can be found in [5, 7], where the original logic-based development method is applied to get \(\mathfrak {r}\), and in [4, 6], where the method of [4] is used instead. In particular, the examples of [6] demonstrate the superiority of the method of [4] over the logic-based method. However, the method of [4] has an obvious disadvantage: it requires that both sides of the inclusion \(\mathfrak {v}\subseteq \mathfrak {w}\) describing the property \(R \in \mathcal {R}\) are relation-algebraic expressions over a vector representation of R and not over R itself, which is usually given and much simpler.

In this paper we present a new and very general method for computing vector representations \(\mathfrak {r}\) of sets \(\mathcal {R}\) of relations. It overcomes the problem just mentioned, i.e., it can be used for computing \(\mathfrak {r}\) from an inclusion \(\mathfrak {R}\subseteq \mathfrak {S}\), where both sides are relation-algebraic expressions over a relation R and not over a vector representation of R. Decisive for this method is an equation that reduces the vector representation of a composition \(R ;S\) of relations to those of R and S. Together with known results concerning the remaining relation-algebraic operations, this allows the computation of vector representations via a recursive function \(\nu _r\) and the proof of a theorem that shows how \(\mathfrak {r}\) can be obtained directly from \(\mathfrak {R}\subseteq \mathfrak {S}\). As applications we treat some of the problems of [7] with the new method.

2 Relational Preliminaries

In this section we recall some basic facts about (binary, set-theoretic) relations and their operations that are used throughout this paper. For more details on relation algebras, see [10, 11] for example.

Given sets X and Y, the power set \(2^{X \times Y}\) of \(X\times Y\) is the set of relations with source X and target Y, which we will denote by \([{X}\,\leftrightarrow \,{Y}]\). We will write \(R : {X}\,\leftrightarrow \,{Y}\) instead of \(R \in [{X}\,\leftrightarrow \,{Y}]\) and call \({X}\,\leftrightarrow \,{Y}\) the type of R. If X and Y are finite, then we may represent any relation \(R : {X}\,\leftrightarrow \,{Y}\) by a Boolean matrix, in which an entry ‘true’ (or 1) in the row corresponding to \(x \in X\) and in the column corresponding to \(y \in Y\) indicates that \((x,y) \in R\). Analogously, the entry ‘false’ (or 0) indicates that the elements are not in relation R, i.e., \((x,y) \not \in R\). This matrix interpretation is also used by the RelView system to visualize relations. In addition, we will often use Boolean matrix notation and terminology in the remainder of this paper. In particular, we write \(R_{x,y}\) instead of \((x,y) \in R\). Then we speak of a point-wise notation.
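This matrix interpretation can be sketched in a tiny set-theoretic model, representing a relation as a Python set of pairs (all names and data here are illustrative, not from the paper):

```python
# A relation R : X <-> Y modeled as a set of pairs, rendered as the
# Boolean matrix that RelView uses for visualization (illustrative).
X = [1, 2, 3]
Y = ['a', 'b']
R = {(1, 'a'), (3, 'b')}

# Row x has a 1 in column y exactly if (x, y) is in R, i.e. R_{x,y} holds.
matrix = [[int((x, y) in R) for y in Y] for x in X]
assert matrix == [[1, 0], [0, 0], [0, 1]]
```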

The following basic operations on relations will be used: \({\overline{R}}\) (complement), \(R \cup S\) (union), \(R \cap S\) (intersection), \({R}^\textsf{T}\) (transposition or converse), and \(R ;S\) (composition). We assume that transposition and complementation bind more tightly than composition, and that composition binds more tightly than union and intersection. In addition, we have the constants \(\textsf{O}\) (empty relation), \(\textsf{L}\) (universal relation) and \(\textsf{I}\) (identity relation). Note that these constants are polymorphic, e.g., there is an identity relation of type \({X}\,\leftrightarrow \,{X}\) for all X. We will also use \(R \subseteq S\) to indicate that R is included in S. We assume that the reader is familiar with these concepts.

As derived operation we will use \(\text{ syq }(R,S) := {\overline{{R}^\textsf{T} ;{\overline{S}}}} \cap {\overline{{{\overline{R}}}^\textsf{T} ;S}}\), the symmetric quotient of \(R : {X}\,\leftrightarrow \,{Y}\) and \(S : {X}\,\leftrightarrow \,{Z}\). It can easily be shown that \(\text{ syq }(R,S)\) has type \({Y}\,\leftrightarrow \,{Z}\) and that, for all \(y \in Y\) and \(z \in Z\), \(\text{ syq }(R,S)_{y,z}\) holds iff for all \(x \in X\) we have \(R_{x,y}\) iff \(S_{x,z}\).
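The point-wise description of the symmetric quotient can be checked in a small set-theoretic model that represents relations as Python sets of pairs (a sketch; the function name `rel_syq` is ours):

```python
# Point-wise symmetric quotient: (y, z) in syq(R, S) iff the y-column
# of R equals the z-column of S. Relations are sets of pairs.
def rel_syq(R, S, X, Y, Z):
    """(y, z) related iff for all x in X: (x, y) in R <-> (x, z) in S."""
    return {(y, z) for y in Y for z in Z
            if all(((x, y) in R) == ((x, z) in S) for x in X)}

X, Y, Z = {1, 2}, {'a', 'b'}, {'p'}
R = {(1, 'a')}        # column 'a' of R is {1}, column 'b' is empty
S = {(1, 'p')}        # column 'p' of S is {1}
assert rel_syq(R, S, X, Y, Z) == {('a', 'p')}   # columns 'a' and 'p' agree
```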

Besides the well-known lattice-theoretic properties and basic properties of composition and transposition, i.e., \(Q;(R \cup S) = Q ;R \cup Q ;S\), \(Q ;(R \cap S) \subseteq Q ;R \cap Q ;S\), \({(R \cup S)}^\textsf{T} = {R}^\textsf{T} \cup {S}^\textsf{T}\), \({(R \cap S)}^\textsf{T} = {R}^\textsf{T} \cap {S}^\textsf{T}\), \({(R ;S)}^\textsf{T} = {S}^\textsf{T} ;{R}^\textsf{T}\), \(({R}^\textsf{T})^\textsf{T} = R\) and \({\overline{{R}^\textsf{T}}} = {{\overline{R}}}^\textsf{T}\), relations also satisfy the so-called modular inclusion \( Q;R\cap S\subseteq Q;(R\cap {Q}^\textsf{T};S), \) and the so-called Schröder equivalences \( {Q}^\textsf{T} ;{\overline{S}} \subseteq {\overline{R}} \) iff \( Q ;R \subseteq S \) iff \( {\overline{S}} ;{R}^\textsf{T} \subseteq {\overline{Q}}. \) In the remainder of the paper we will use these properties without further mention. Some additional properties are summarized in the following lemma. A proof can be found in [10, 11].

Lemma 2.1

Let \(Q:{X}\,\leftrightarrow \,{Y}\), \(R:{X}\,\leftrightarrow \,{Z}\), \(S:{Y}\,\leftrightarrow \,{Z}\), and \(T:{X}\,\leftrightarrow \,{Z}\). Then we have

(1) \((Q\cap R;\textsf{L});S=Q;S\cap R;\textsf{L}\),

(2) \((Q;S\cap T);\textsf{L}=(Q\cap T;{S}^\textsf{T});\textsf{L}\).

An important class of relations is given by maps (or functions). We call a relation \(Q : {X}\,\leftrightarrow \,{Y}\) univalent (or a partial function) iff \({Q}^\textsf{T};Q\subseteq \textsf{I}\), total iff \(\textsf{I}\subseteq Q;{Q}^\textsf{T}\), injective iff \({Q}^\textsf{T}\) is univalent, surjective iff \({Q}^\textsf{T}\) is total, and a map iff Q is total and univalent. The following lemma collects some important properties of univalent relations.

Lemma 2.2

Assume \(f:{X}\,\leftrightarrow \,{Y}\) to be univalent. Furthermore, let \(Q,R:{Y}\,\leftrightarrow \,{Z}\), \(S:{W}\,\leftrightarrow \,{X}\), \(T:{W}\,\leftrightarrow \,{Y}\), and \(U:{X}\,\leftrightarrow \,{Y}\). Then we have

(1) \(f;(Q\cap R)=f;Q\cap f;R\),

(2) \((S\cap T;{f}^\textsf{T});f=S;f\cap T\),

(3) \(f\cap (f\cap U);\textsf{L}=f\cap U\).
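The defining inclusions for univalent, total and injective relations translate directly into point-wise checks in a small set-of-pairs model of relations (a sketch; all function names are ours):

```python
# Point-wise checks of the map properties in a set-of-pairs model.
def transpose(Q):
    return {(y, x) for (x, y) in Q}

def is_univalent(Q):          # Q^T;Q subset of I: every x has at most one image
    return all(y1 == y2 for (x1, y1) in Q for (x2, y2) in Q if x1 == x2)

def is_total(Q, X):           # I subset of Q;Q^T: every x has at least one image
    return all(any(x0 == x for (x0, _) in Q) for x in X)

f = {(1, 'a'), (2, 'a')}
assert is_univalent(f) and is_total(f, {1, 2})   # f is a map
assert not is_univalent({(1, 'a'), (1, 'b')})    # two images for 1
assert not is_univalent(transpose(f))            # f is not injective
```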

Another important concept is the notion of pairs and the projection relations \(\pi :{X\times Y}\,\leftrightarrow \,{X}\) and \(\rho :{X\times Y}\,\leftrightarrow \,{Y}\). The projection relations have the Cartesian product \(X\times Y\) as source and X resp. Y as target and are defined by \(\pi _{(u_1,u_2),x}\) iff \(u_1 = x\) and \(\rho _{(u_1,u_2),y}\) iff \(u_2 = y\), for all \((u_1,u_2)\in X\times Y\), \(x \in X\) and \(y \in Y\). These relations and the corresponding object \(X\times Y\) can also be defined abstractly (up to isomorphism) by the formulas \({\pi }^\textsf{T};\pi \subseteq \textsf{I}\), \({\rho }^\textsf{T};\rho \subseteq \textsf{I}\), \(\pi ;{\pi }^\textsf{T}\cap \rho ;{\rho }^\textsf{T}=\textsf{I}\) and \({\pi }^\textsf{T};\rho =\textsf{L}\), see [13]. To simplify the presentation, in the remainder of this paper we will overload the projection relations, i.e., consider them as polymorphic. In all such cases it is easy to determine their types from the context using the typing rules of the operations of relation algebra.

Based upon the projection relations we can define the left pairing of two relations \(R:{X}\,\leftrightarrow \,{Z}\) and \(S:{Y}\,\leftrightarrow \,{Z}\) by \([\![{R},{S}]:=\pi ;R \cap \rho ;S\) of type \({X\times Y}\,\leftrightarrow \,{Z}\). When using a point-wise notation we have \([\![{R},{S}]_{(u_1,u_2),z}\) iff \(R_{u_1,z}\) and \(S_{u_2,z}\), for all \((u_1,u_2)\in X\times Y\) and \(z \in Z\). Similar to the left pairing we can define the right pairing of two relations \(R:{Z}\,\leftrightarrow \,{X}\) and \(S:{Z}\,\leftrightarrow \,{Y}\) by \([{R},{S}]\!] := R;{\pi }^\textsf{T}\cap S;{\rho }^\textsf{T}\). Here we obtain \({Z}\,\leftrightarrow \,{X\times Y}\) as type and that \([{R},{S}]\!]_{z,(u_1,u_2)}\) iff \(R_{z,u_1}\) and \(S_{z,u_2}\), for all \((u_1,u_2) \in X\times Y\) and \(z \in Z\). Finally, the parallel composition (or product) \({R}\!\parallel \!{S}:{X\times X'}\,\leftrightarrow \,{Y\times Y'}\) of two relations \(R:{X}\,\leftrightarrow \,{Y}\) and \(S:{X'}\,\leftrightarrow \,{Y'}\) is defined by \({R}\!\parallel \!{S}:=\pi ;R;{\pi }^\textsf{T}\cap \rho ;S;{\rho }^\textsf{T}\), i.e., we have \(({R}\!\parallel \!{S})_{(u_1,u_2),(v_1,v_2)}\) iff \(R_{u_1,v_1}\) and \(S_{u_2,v_2}\), for all \((u_1,u_2)\in X\times X'\) and \((v_1,v_2)\in Y\times Y'\). As a consequence we have \({R}\!\parallel \!{S} = [{\pi ;R},{\rho ;S}]\!] = [\![{R;{\pi }^\textsf{T}},{S;{\rho }^\textsf{T}}]\), where the right pairing is formed w.r.t. the projection relations of \(Y\times Y'\) and the left pairing is formed w.r.t. the projection relations of \(X\times X'\).
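The point-wise descriptions of the two pairings and the parallel composition translate directly into the set-of-pairs model (a sketch; all names and data are ours):

```python
# Point-wise pairings and parallel composition on sets of pairs.
def left_pair(R, S):    # [[R,S] : ((u1,u2),z) iff R_{u1,z} and S_{u2,z}
    return {((u1, u2), z) for (u1, z) in R for (u2, z2) in S if z == z2}

def right_pair(R, S):   # [R,S]] : (z,(u1,u2)) iff R_{z,u1} and S_{z,u2}
    return {(z, (u1, u2)) for (z, u1) in R for (z2, u2) in S if z == z2}

def par(R, S):          # R || S : ((u1,u2),(v1,v2)) iff R_{u1,v1} and S_{u2,v2}
    return {((u1, u2), (v1, v2)) for (u1, v1) in R for (u2, v2) in S}

assert left_pair({(1, 'x'), (2, 'y')}, {(5, 'x')}) == {((1, 5), 'x')}
assert right_pair({('z', 1)}, {('z', 2)}) == {('z', (1, 2))}
assert par({(1, 'a')}, {(2, 'b')}) == {((1, 2), ('a', 'b'))}
```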

From Lemma 2.2(2) we obtain

$$ [{Q},{R}]\!];\pi = (Q;{\pi }^\textsf{T}\cap R;{\rho }^\textsf{T});\pi = Q\cap R;{\rho }^\textsf{T};\pi =Q\cap R;\textsf{L}, $$

so that \([{Q},{R}]\!];\pi =Q\) follows if R is total. Analogously, we get \([{Q},{R}]\!];\rho =R\) if Q is total and similar results for the left pairing and the parallel composition.

The sharpness property of relational products is the question whether the following equation

$$ [{Q},{R}]\!];[\![{S},{T}] = Q;S\cap R;T $$

holds for all relations Q, R, S and T of suitable types. Note that the equation only involves one Cartesian product, and is easy to verify for set-theoretic relations. However, the equation does not follow from the axioms of a relation algebra and of the corresponding projection relations alone. But if we require sufficient additional structure, i.e., the existence of at least one additional Cartesian product, we are able to show sharpness. The lemma below generalizes the equation above slightly. If instantiated with \(S_1=\pi \) and \(S_2=\rho \) it verifies sharpness under the assumption of the existence of the additional Cartesian product \(W\times X\). The lemma itself is an immediate consequence of the approach developed in [8].

Lemma 2.3

Let \(Q_1:{W}\,\leftrightarrow \,{X}\), \(Q_2:{W}\,\leftrightarrow \,{Y}\), \(R_1:{X}\,\leftrightarrow \,{Z}\), \(R_2:{Y}\,\leftrightarrow \,{Z}\), \(S_1:{V}\,\leftrightarrow \,{X}\), and \(S_2:{V}\,\leftrightarrow \,{Y}\) with \(S_1\) and \(S_2\) univalent and

$$ {Q}^\textsf{T}_1;Q_2\cap {R}^\textsf{T}_1;R_2\subseteq {S}^\textsf{T}_1;S_2. $$

Furthermore, assume that the Cartesian product \(W\times X\) exists. Then we have

$$ (Q_1;{S}^\textsf{T}_1\cap Q_2;{S}^\textsf{T}_2) ;(S_1;{R}^\textsf{T}_1\cap S_2;{R}^\textsf{T}_2) = Q_1;{R}^\textsf{T}_1 \cap Q_2;{R}^\textsf{T}_2. $$

In the remainder of this paper we will assume that the Cartesian product for every pair of sets (objects) exists, together with the corresponding projection relations, such that we will always have sharpness.
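For set-theoretic relations sharpness indeed always holds, which can be tested on randomly generated small relations (a sketch in the set-of-pairs model; all function names are ours):

```python
# Numerical check of the sharpness equation [Q,R]];[[S,T] = Q;S n R;T
# on small random set-theoretic relations, where it always holds.
import itertools
import random

def compose(R, S):
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def right_pair(R, S):   # (z,(u1,u2)) iff R_{z,u1} and S_{z,u2}
    return {(z, (u1, u2)) for (z, u1) in R for (z2, u2) in S if z == z2}

def left_pair(R, S):    # ((u1,u2),z) iff R_{u1,z} and S_{u2,z}
    return {((u1, u2), z) for (u1, z) in R for (u2, z2) in S if z == z2}

random.seed(0)
Z, X, Y, V = range(3), range(3), range(3), range(3)
for _ in range(100):
    Q = {p for p in itertools.product(Z, X) if random.random() < 0.4}
    R = {p for p in itertools.product(Z, Y) if random.random() < 0.4}
    S = {p for p in itertools.product(X, V) if random.random() < 0.4}
    T = {p for p in itertools.product(Y, V) if random.random() < 0.4}
    assert compose(right_pair(Q, R), left_pair(S, T)) == compose(Q, S) & compose(R, T)
```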

The relation \(\text{ S}^{X,X'}:{X\times X'}\,\leftrightarrow \,{X'\times X}\) is defined by \(\text{ S}^{X,X'}:=[{\rho },{\pi }]\!]=[\![{{\rho }^\textsf{T}},{{\pi }^\textsf{T}}]\). As usual we will drop the sets X and \(X'\) and simply write \(\text{ S }\) instead of \(\text{ S}^{X,X'}\) if the sets of the Cartesian products are clear from the context. This relation exchanges the two components of a pair, i.e., we get \(\text{ S}_{(u_1,u_2),(v_1,v_2)}\) iff \(u_1=v_2\) and \(u_2=v_1\), for all \((u_1,u_2) \in X\times X'\) and \((v_1,v_2) \in X'\times X\). Furthermore, we have \({\text{ S }}^\textsf{T}=\text{ S }\) and \(\text{ S };\text{ S }=\textsf{I}\), where the two occurrences of \(\text{ S }\) in both equations are different versions of the polymorphic relation.

The partial identity \(\textsf{I}\cap \pi ;\rho ;{\pi }^\textsf{T};{\rho }^\textsf{T}\) has source and target \((X\times Y)\times (Y\times Z)\). From this type information we can infer that the first occurrence of \(\pi \) denotes the first projection relation of the Cartesian product \((X\times Y)\times (Y\times Z)\), whereas the second occurrence of \(\pi \) denotes the first projection relation of the Cartesian product \(Y\times Z\), and the first occurrence of \(\rho \) denotes the second projection relation of \(X\times Y\), whereas the second occurrence of \(\rho \) denotes the second projection relation of \((X\times Y)\times (Y\times Z)\). The relation \(\textsf{I}\cap \pi ;\rho ;{\pi }^\textsf{T};{\rho }^\textsf{T}\) acts as a filter when composing it with a suitable relation. Using point-wise notation, a quadruple \(((u_1,u_2),(v_1,v_2)) \in (X\times Y)\times (Y\times Z)\) is related to itself by the relation \(\textsf{I}\cap \pi ;\rho ;{\pi }^\textsf{T};{\rho }^\textsf{T}\) iff \(u_2=v_1\), representing the condition under which the pair \((u_1,v_2) \in X \times Z\) would be in the composition of two relations, where the first relation contains the pair \((u_1,u_2) \in X \times Y\) and the second relation contains the pair \((v_1,v_2) \in Y \times Z\). Note that we have

$$ \textsf{I}\cap \pi ;\rho ;{\pi }^\textsf{T};{\rho }^\textsf{T}={(\textsf{I}\cap \pi ;\rho ;{\pi }^\textsf{T};{\rho }^\textsf{T})}^\textsf{T} = \textsf{I}\cap \rho ;\pi ;{\rho }^\textsf{T};{\pi }^\textsf{T}. $$

By means of the partial identity \(\textsf{I}\cap \pi ;\rho ;{\pi }^\textsf{T};{\rho }^\textsf{T}\) we now define the relation \(\text{ C}^{X,Y,Z}\) of type \({(X\times Y)\times (Y\times Z)}\,\leftrightarrow \,{X\times Z}\) by \(\text{ C}^{X,Y,Z} :=(\textsf{I}\cap \pi ;\rho ;{\pi }^\textsf{T};{\rho }^\textsf{T});({\pi }\!\parallel \!{\rho })\). This relation removes the common intermediate element. Again, we will usually drop the three sets X, Y and Z of \(\text{ C}^{X,Y,Z}\) and overload the relation.

In the next sections we will also use the relation-level equivalents of the set-theoretic symbol ‘\(\in \)’ as basic relations. These are the (again polymorphic) membership relations \(\text{ M }: {X}\,\leftrightarrow \,{2^X}\), which are described point-wise by \(\text{ M}_{x,Y}\) iff \(x \in Y\), for all elements \(x \in X\) and sets \(Y \in 2^X\). There exists a relation-algebraic axiomatization of membership relations which specifies them up to isomorphism; see [11], for example. But for the applications of the present paper the above point-wise description suffices.
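Point-wise, a membership relation for a small carrier set can be built directly (an illustrative sketch; the encoding of subsets as frozensets is our own choice):

```python
# The membership relation M : X <-> 2^X, built point-wise for a small X.
import itertools

X = {1, 2}
powerset = [frozenset(s) for r in range(len(X) + 1)
            for s in itertools.combinations(sorted(X), r)]
M = {(x, Y) for x in X for Y in powerset if x in Y}

assert len(powerset) == 4                  # 2^X has 2^|X| elements
assert (1, frozenset({1, 2})) in M         # M_{x,Y} iff x in Y
assert (1, frozenset()) not in M
```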

3 Vector Representation of Relations

Relational vectors are relations \(v:{X}\,\leftrightarrow \,{Y}\) with \(v ;\textsf{L}=v\). Such a v can be interpreted as a subset of X in the following sense: if represented by a Boolean matrix, v is a matrix in which every row consists entirely either of 1’s (or ‘true’-entries) or of 0’s (or ‘false’-entries), indicating whether the element corresponding to that row belongs to the subset of X or not. Since Y is irrelevant in this representation, we will always consider vectors with target \(\textbf{1}\!\!\!\textbf{1}\), where \(\textbf{1}\!\!\!\textbf{1}= \{\bot \}\) is a specific singleton set. In this case all relations in \([{X}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}]\) are vectors since for a singleton set we have \(\textsf{L}= \textsf{I}\). Note that a singleton set can also be defined abstractly (up to isomorphism) as a so-called unit, i.e., an object \(\textbf{1}\!\!\!\textbf{1}\) such that \(\textsf{L}= \textsf{I}\) for \(\textsf{L}: {\textbf{1}\!\!\!\textbf{1}}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) and \(\textsf{L}:{X}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) is total for every X.

Relations are specific sets. In the following we concentrate on their representations by means of vectors.

The first step in providing a vector representation of arbitrary relations is to establish a Boolean lattice isomorphism between the set of relations \(R:{X}\,\leftrightarrow \,{Y}\) and the set of vectors \(v:{X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\). Given a relation \(R:{X}\,\leftrightarrow \,{Y}\) and a vector \(v:{X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) we define \(\textit{vec}(R):{X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) and \(\text{ Rel }(v):{X}\,\leftrightarrow \,{Y}\) as shown in the following figure (Fig. 1).

Fig. 1. Definition of the functions \(\textit{vec}\) and \(\text{ Rel }\) (in particular, as used in the proof of Lemma 3.1, \(\textit{vec}(R) = [\![{R},{\textsf{I}}];\textsf{L}\))

The functions \(\textit{vec}:[{X}\,\leftrightarrow \,{Y}]\rightarrow [{X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}]\) and \(\text{ Rel }:[{X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}]\rightarrow [{X}\,\leftrightarrow \,{Y}]\) are inverse to each other, i.e., we have \(\text{ Rel }(\textit{vec}(R))=R\) and \(\textit{vec}(\text{ Rel }(v))=v\), for all \(R:{X}\,\leftrightarrow \,{Y}\) and \(v:{X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\). Furthermore, \(\textit{vec}\) and \(\text{ Rel }\) are monotone w.r.t. inclusion, i.e., \(R\subseteq S\) implies \(\textit{vec}(R)\subseteq \textit{vec}(S)\), for all \(R, S :{X}\,\leftrightarrow \,{Y}\), and \(v\subseteq w\) implies \(\text{ Rel }(v)\subseteq \text{ Rel }(w)\), for all \(v, w : {X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\). We even have \(R\subseteq S\) iff \(\textit{vec}(R)\subseteq \textit{vec}(S)\), for all \(R, S :{X}\,\leftrightarrow \,{Y}\), and \(v\subseteq w\) iff \(\text{ Rel }(v)\subseteq \text{ Rel }(w)\), for all \(v, w :{X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\). Proofs of these facts can be found in [10], for example.
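These facts can be illustrated in the set-of-pairs model, with a fixed dummy element standing in for the singleton set \(\textbf{1}\!\!\!\textbf{1}\) (a sketch; `UNIT` and the function names are ours):

```python
# vec and Rel in the set-of-pairs model: vec(R) is the vector on X*Y
# whose 'true' rows are exactly the pairs of R; UNIT stands in for
# the singleton target.
UNIT = None

def vec(R):             # [X <-> Y]  ->  [X*Y <-> 1]
    return {((x, y), UNIT) for (x, y) in R}

def Rel(v):             # [X*Y <-> 1]  ->  [X <-> Y]
    return {(x, y) for ((x, y), _) in v}

R = {(1, 'a'), (2, 'b')}
S = {(1, 'a')}
assert Rel(vec(R)) == R                      # Rel(vec(R)) = R
assert vec(Rel(vec(R))) == vec(R)            # vec(Rel(v)) = v
assert (S <= R) == (vec(S) <= vec(R))        # S included in R iff vec(S) in vec(R)
```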

The next lemma shows how the five basic operations of relation algebra can be performed directly on the corresponding vectors. Together with the bijectivity of \(\textit{vec}\), properties (1), (3) and (4) of Lemma 3.1 imply that the function \(\textit{vec}\) is an isomorphism from the Boolean lattice \([{X}\,\leftrightarrow \,{Y}]\) of all relations between X and Y to the Boolean lattice \([{X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}]\) of all vectors with source \(X\times Y\), with \(\textit{vec}^{-1} = \text{ Rel }\); in particular, \(\textit{vec}(\textsf{O}) = \textsf{O}\), \(\textit{vec}(\textsf{L}) = \textsf{L}\), \(\text{ Rel }(\textsf{O}) = \textsf{O}\) and \(\text{ Rel }(\textsf{L}) = \textsf{L}\) follow immediately.

Lemma 3.1

Let \(Q,R:{X}\,\leftrightarrow \,{Y}\) and \(S:{Y}\,\leftrightarrow \,{Z}\). Then we have

(1) \(\textit{vec}({\overline{Q}})={\overline{\textit{vec}(Q)}}\),

(2) \(\textit{vec}({Q}^\textsf{T})=\text{ S };\textit{vec}(Q)\),

(3) \(\textit{vec}(Q\cup R)=\textit{vec}(Q)\cup \textit{vec}(R)\),

(4) \(\textit{vec}(Q\cap R)=\textit{vec}(Q)\cap \textit{vec}(R)\),

(5) \(\textit{vec}(R;S)={\text{ C }}^\textsf{T};[\![{\textit{vec}(R)},{\textit{vec}(S)}]\).

Proof

The four properties (1) to (4) were already shown in [10]. In order to prove property (5), we show that the composition \({\text{ C }}^\textsf{T};[\![{\textit{vec}(R)},{\textit{vec}(S)}]\) can essentially be written as the left-hand side of the equation in Lemma 2.3 for suitable \(S_1\) and \(S_2\), and then apply that lemma. To this end, we use the following abbreviations

$$\begin{aligned} i&:= \textsf{I}\cap \pi ;\rho ;{\pi }^\textsf{T};{\rho }^\textsf{T},&\tilde{\pi }&:= \pi \cap \rho ;\pi ;{\rho }^\textsf{T},\\ f&:= \pi ;\rho \cap \rho ;\pi ,&\tilde{\rho }&:= \rho \cap \pi ;\rho ;{\pi }^\textsf{T}. \end{aligned}$$

Obviously, all four relations are univalent and we have \(i;\pi =\tilde{\pi }\), \(i;\rho =\tilde{\rho }\), \(\tilde{\pi };\rho =f\) and \(\tilde{\rho };\pi =f\), where the last four equations follow immediately from Lemma 2.2(2). Furthermore, we have \(\tilde{\pi };[\![{R},{\textsf{I}}]\subseteq \tilde{\pi };\rho =f\) and

$$\begin{aligned} \tilde{\rho };[\![{S},{\textsf{I}}];\textsf{L}\cap f&= \tilde{\rho };(\pi ;S\cap \rho );\textsf{L}\cap f\\&= \tilde{\rho };(\pi \cap \rho ;{S}^\textsf{T});\textsf{L}\cap f{} & {} \text {Lemma 2.1(2)}\\&= (\tilde{\rho };\pi \cap \tilde{\rho };\rho ;{S}^\textsf{T});\textsf{L}\cap f{} & {} \text {Lemma 2.2(1)}\\&= (f\cap \tilde{\rho };\rho ;{S}^\textsf{T});\textsf{L}\cap f\\&= f\cap \tilde{\rho };\rho ;{S}^\textsf{T}{} & {} \text {Lemma 2.2(3)}\\&= \tilde{\rho };\pi \cap \tilde{\rho };\rho ;{S}^\textsf{T}\\&= \tilde{\rho };(\pi \cap \rho ;{S}^\textsf{T}){} & {} \text {Lemma 2.2(1)}\\&= \tilde{\rho };[\![{\textsf{I}},{{S}^\textsf{T}}]. \end{aligned}$$

These two properties imply

$$\begin{aligned} i;[\![{\textit{vec}(R)},{\textit{vec}(S)}]&= i;\pi ;[\![{R},{\textsf{I}}];\textsf{L}\cap i;\rho ;[\![{S},{\textsf{I}}];\textsf{L}{} & {} \text {Lemma 2.2(1)}\\&= \tilde{\pi };[\![{R},{\textsf{I}}];\textsf{L}\cap \tilde{\rho };[\![{S},{\textsf{I}}];\textsf{L}\\&= (\tilde{\pi };[\![{R},{\textsf{I}}]\cap \tilde{\rho };[\![{S},{\textsf{I}}];\textsf{L});\textsf{L}{} & {} \text {Lemma 2.1(1)}\\&= (\tilde{\pi };[\![{R},{\textsf{I}}]\cap f\cap \tilde{\rho };[\![{S},{\textsf{I}}];\textsf{L});\textsf{L}{} & {} \text {see above}\\&= (\tilde{\pi };[\![{R},{\textsf{I}}]\cap \tilde{\rho };[\![{\textsf{I}},{{S}^\textsf{T}}]);\textsf{L}{} & {} \text {see above.} \end{aligned}$$

On the other hand, we have

$$\begin{aligned} \text{ C }&= i;({\pi }\!\parallel \!{\rho })\\&= i;(\pi ;\pi ;{\pi }^\textsf{T}\cap \rho ;\rho ;{\rho }^\textsf{T})\\&= i;\pi ;\pi ;{\pi }^\textsf{T}\cap i;\rho ;\rho ;{\rho }^\textsf{T}{} & {} \text {Lemma 2.2(1)}\\&= \tilde{\pi };\pi ;{\pi }^\textsf{T}\cap \tilde{\rho };\rho ;{\rho }^\textsf{T} \end{aligned}$$

so that the composition \({\text{ C }}^\textsf{T};[\![{\textit{vec}(R)},{\textit{vec}(S)}]\) in fact can be written as the left-hand side of Lemma 2.3 with \(S_1=\tilde{\pi }\) and \(S_2=\tilde{\rho }\).

The following calculation now shows that the additional assumption of Lemma 2.3 is satisfied:

$$\begin{aligned}&{[\![{R},{\textsf{I}}];{[\![{\textsf{I}},{{S}^\textsf{T}}]}^\textsf{T}\cap {(\pi ;{\pi }^\textsf{T})}^\textsf{T};\rho ;{\rho }^\textsf{T}}\\&= [\![{R},{\textsf{I}}];{[\![{\textsf{I}},{{S}^\textsf{T}}]}^\textsf{T}\cap \pi ;{\pi }^\textsf{T};\rho ;{\rho }^\textsf{T}\\&= [\![{R},{\textsf{I}}];{[\![{\textsf{I}},{{S}^\textsf{T}}]}^\textsf{T}\cap \pi ;\textsf{L};{\rho }^\textsf{T}\\&= [\![{R},{\textsf{I}}];{[\![{\textsf{I}},{{S}^\textsf{T}}]}^\textsf{T}{} & {} \pi \text { and }\rho \text { total}\\&\subseteq \rho ;{\pi }^\textsf{T}\\&= \rho ;{\pi }^\textsf{T}\cap \rho ;{\pi }^\textsf{T}\\&= [{\textsf{I}},{\rho ;{\pi }^\textsf{T}}]\!];[\![{\rho ;{\pi }^\textsf{T}},{\textsf{I}}]{} & {} \text {sharpness}\\&= ({\pi }^\textsf{T}\cap \rho ;{\pi }^\textsf{T};{\rho }^\textsf{T});(\rho \cap \pi ;\rho ;{\pi }^\textsf{T})\\&= {\tilde{\pi }}^\textsf{T};\tilde{\rho }, \end{aligned}$$

As a consequence we conclude

$$ (\pi ;{\pi }^\textsf{T};{\tilde{\pi }}^\textsf{T}\cap \rho ;{\rho }^\textsf{T};{\tilde{\rho }}^\textsf{T});(\tilde{\pi };[\![{R},{\textsf{I}}]\cap \tilde{\rho };[\![{\textsf{I}},{{S}^\textsf{T}}]) = \pi ;{\pi }^\textsf{T} ;[\![{R},{\textsf{I}}] \cap \rho ;{\rho }^\textsf{T};[\![{\textsf{I}},{{S}^\textsf{T}}]. $$

We finally obtain

$$\begin{aligned}&{\textit{vec}(R;S)}\\&= [\![{R;S},{\textsf{I}}];\textsf{L}\\&= (\pi ;R;S\cap \rho );\textsf{L}\\&= (\pi ;R\cap \rho ;{S}^\textsf{T});\textsf{L}{} & {} \text {Lemma 2.2(2)}\\&= (\pi ;{\pi }^\textsf{T};[\![{R},{\textsf{I}}]\cap \rho ;{\rho }^\textsf{T};[\![{\textsf{I}},{{S}^\textsf{T}}]);\textsf{L}\\&= (\pi ;{\pi }^\textsf{T};{\tilde{\pi }}^\textsf{T}\cap \rho ;{\rho }^\textsf{T};{\tilde{\rho }}^\textsf{T});(\tilde{\pi };[\![{R},{\textsf{I}}]\cap \tilde{\rho };[\![{\textsf{I}},{{S}^\textsf{T}}]);\textsf{L}{} & {} \text {see above}\\&= {\text{ C }}^\textsf{T};i;[\![{\textit{vec}(R)},{\textit{vec}(S)}]\\&= {\text{ C }}^\textsf{T};[\![{\textit{vec}(R)},{\textit{vec}(S)}],{} & {} {i}^\textsf{T};i=i;i=i={i}^\textsf{T} \end{aligned}$$

i.e., the desired property (5).    \(\square \)
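Property (5) can also be checked point-wise on a small example, again modeling relations as Python sets of pairs with a dummy element standing in for \(\textbf{1}\!\!\!\textbf{1}\) (a sketch; all names are ours):

```python
# Point-wise check of Lemma 3.1(5): vec(R;S) = C^T ; [[vec(R), vec(S)].
UNIT = None

def vec(R):
    return {((x, y), UNIT) for (x, y) in R}

def compose(R, S):
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def transpose(R):
    return {(y, x) for (x, y) in R}

def left_pair(R, S):
    return {((u1, u2), z) for (u1, z) in R for (u2, z2) in S if z == z2}

def C(X, Y, Z):   # (X*Y)*(Y*Z) <-> X*Z: drops the matching middle element
    return {(((u1, u2), (v1, v2)), (u1, v2))
            for u1 in X for u2 in Y for v1 in Y for v2 in Z if u2 == v1}

X, Y, Z = {1, 2}, {'a', 'b'}, {10}
R = {(1, 'a'), (2, 'b')}
S = {('a', 10)}
lhs = vec(compose(R, S))
rhs = compose(transpose(C(X, Y, Z)), left_pair(vec(R), vec(S)))
assert lhs == rhs == {((1, 10), UNIT)}
```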

The relation \(\text{ C }\) allows us to express a composition of two relations as an operation on their corresponding vectors. Sometimes, when computing all relations that satisfy a certain property, it is sufficient to convert only one relation in a series of compositions into a vector. Quite often this even leads to a more efficient implementation (e.g., in RelView) of testing the property. The following lemma was shown in [9] and provides exactly such a translation.

Lemma 3.2

Let \(Q:{W}\,\leftrightarrow \,{X}\), \(R:{X}\,\leftrightarrow \,{Y}\), and \(S:{Y}\,\leftrightarrow \,{Z}\). Then we have

$$ \textit{vec}(Q;R;S)=({Q}\!\parallel \!{{S}^\textsf{T}}) ;\textit{vec}(R). $$

As specific cases we get \(\textit{vec}(Q;R)=({Q}\!\parallel \!{\textsf{I}});\textit{vec}(R)\) by taking S as identity relation and \(\textit{vec}(R;S)=({\textsf{I}}\!\parallel \!{{S}^\textsf{T}}) ;\textit{vec}(R)\) by taking Q as identity relation.
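Lemma 3.2 can likewise be tested point-wise on randomly generated set-theoretic relations (a sketch in the same set-of-pairs model; all names are ours):

```python
# Random point-wise check of Lemma 3.2: vec(Q;R;S) = (Q || S^T);vec(R).
import itertools
import random

UNIT = None

def vec(R):
    return {((x, y), UNIT) for (x, y) in R}

def compose(R, S):
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def transpose(R):
    return {(y, x) for (x, y) in R}

def par(Q, T):   # Q || T : ((u1,u2),(v1,v2)) iff Q_{u1,v1} and T_{u2,v2}
    return {((u1, u2), (v1, v2)) for (u1, v1) in Q for (u2, v2) in T}

random.seed(1)
W, X, Y, Z = range(2), range(2), range(2), range(2)
for _ in range(50):
    Q = {p for p in itertools.product(W, X) if random.random() < 0.5}
    R = {p for p in itertools.product(X, Y) if random.random() < 0.5}
    S = {p for p in itertools.product(Y, Z) if random.random() < 0.5}
    assert vec(compose(compose(Q, R), S)) == compose(par(Q, transpose(S)), vec(R))
```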

4 Vector Representation of Sets of Relations

In [9] so-called vector predicates are introduced for the relational treatment of evolutionary algorithms. They are functions on relations in the usual mathematical sense and model those sets of relations which are built from vectors using as operations only complementation, union, intersection and a restricted version of composition. This approach is continued and refined in [4] in view of the specification of vectors \(v : {2^X}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) which represent subsets \(\mathcal {R}\) of given power sets \(2^X\). The aim is to avoid lengthy and complex logical calculations and to work instead with relation-algebraic specifications of the elements of \(\mathcal {R}\) via inclusions \(\mathfrak {v}\subseteq \mathfrak {w}\) such that v can be obtained from \(\mathfrak {v}\subseteq \mathfrak {w}\) in one step using a general procedure. Decisive for this is that both sides of the inclusions are so-called column-wise extendible vector expressions. These are specific relation-algebraic expressions which can be seen as the syntactical counterpart of vector predicates. Formally, they are defined as follows.

Definition 4.1

Given a variable v of type \({X}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\), the set \(\textbf{VE}_v\) of typed column-wise extendible vector expressions over v is inductively defined as follows:

(1) We have \(v \in \textbf{VE}_v\) and its type is \({X}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\).

(2) If \(w : {Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\), then \(w \in \textbf{VE}_v\) and its type is \({Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\).

(3) If \(\mathfrak {v}\in \textbf{VE}_v\) is of type \({Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\), then \({\overline{\mathfrak {v}}} \in \textbf{VE}_v\) and its type is \({Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\).

(4) If \(\mathfrak {v}, \mathfrak {w}\in \textbf{VE}_v\) are of type \({Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\), then \(\mathfrak {v}\cup \mathfrak {w}\in \textbf{VE}_v\) and \(\mathfrak {v}\cap \mathfrak {w}\in \textbf{VE}_v\) and their types are \({Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\).

(5) If \(\mathfrak {v}\in \textbf{VE}_v\) is of type \({Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) and \(\mathfrak {R}\) is a relation-algebraic expression of type \({Z}\,\leftrightarrow \,{Y}\) in which v does not occur, then \(\mathfrak {R};\mathfrak {v}\in \textbf{VE}_v\) and its type is \({Z}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\).

So, the vector expressions from \(\textbf{VE}_v\) are built from the variable v using vectors and as operations only complementation, union, intersection and left-composition with a relation-algebraic expression in which v does not occur. Note that v is the only variable in such an expression. In the following we also allow the use of derived operations like symmetric quotients and pairings, but these are only seen as abbreviations. For example, \(R ;[\![{v},{S ;v}]\) is considered as a column-wise extendible vector expression over the variable v since unfolding the definition of the left pairing yields \(R ;( \pi ;v \cap \rho ;S ;v)\).

In a column-wise extendible vector expression \(\mathfrak {v}\) over v the variable v can be replaced by a relation R with the same source as v. The result is denoted as \(\mathfrak {v}[R/v]\) and is inductively defined as follows.

Definition 4.2

Given a variable v of type \({X}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\), \(\mathfrak {v}\in \textbf{VE}_v\) and \(R : {X}\,\leftrightarrow \,{Z}\), we define \(\mathfrak {v}[R/v]\) as follows, using induction on the structure of \(\mathfrak {v}\):

(1) \(v[R/v] = R\).

(2) \(w[R/v] = w ;\textsf{L}\), with \(\textsf{L}: {\textbf{1}\!\!\!\textbf{1}}\,\leftrightarrow \,{Z}\).

(3) \({\overline{\mathfrak {w}}}[R/v] = {\overline{\mathfrak {w}[R/v]}}\).

(4) \((\mathfrak {w}\cup \mathfrak {u})[R/v] = \mathfrak {w}[R/v] \cup \mathfrak {u}[R/v]\) and \((\mathfrak {w}\cap \mathfrak {u})[R/v] = \mathfrak {w}[R/v] \cap \mathfrak {u}[R/v]\).

(5) \((\mathfrak {R};\mathfrak {w})[R/v] = \mathfrak {R};(\mathfrak {w}[R/v])\).

For example, for the variable v of type \({X}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\), \(S : {Y}\,\leftrightarrow \,{X}\) and \(w : {Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) we have \(S ;v \cap w \in \textbf{VE}_v\). A replacement of v in \(S ;v \cap w\) by the membership relation \(\text{ M }: {X}\,\leftrightarrow \,{2^X}\) then yields \(S ;\text{ M }\cap w ;\textsf{L}\), which has type \({Y}\,\leftrightarrow \,{2^X}\).
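In the set-of-pairs model the column-wise behaviour of this substitution can be illustrated for the example \(S ;v \cap w\): each column of the substituted expression is the original vector expression evaluated at the corresponding column of R (a sketch; `UNIT` and all data below are our own):

```python
# Column-wise illustration of the substitution on 𝔳 = S;v ∩ w:
# the z-column of (S;v ∩ w)[R/v] = S;R ∩ w;L is 𝔳 evaluated at the
# z-column of R.
UNIT = None

def compose(R, S):
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

S = {('p', 1), ('q', 2)}                 # S : Y <-> X
w = {('p', UNIT)}                        # w : Y <-> 1
R = {(1, 'u'), (2, 'v')}                 # R : X <-> Z with columns 'u', 'v'
L = {(UNIT, 'u'), (UNIT, 'v')}           # L : 1 <-> Z

subst = compose(S, R) & compose(w, L)    # (S;v ∩ w)[R/v]
assert subst == {('p', 'u')}

for z in ['u', 'v']:
    col_R = {(x, UNIT) for (x, z2) in R if z2 == z}          # z-column of R
    col_subst = {(y, UNIT) for (y, z2) in subst if z2 == z}  # z-column of subst
    assert col_subst == compose(S, col_R) & w                # = (S;v ∩ w) at col_R
```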

The general procedure mentioned above for obtaining the vectors \(v : {2^X}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) from the inclusions \(\mathfrak {v}\subseteq \mathfrak {w}\) is presented in [4] as Theorem 1. If we instantiate this theorem in such a way that instead of a general power set \(2^X\) a power set of a Cartesian product \(X\times Y\) is taken, i.e., a vector \(v : {[{X}\,\leftrightarrow \,{Y}]}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) representing a subset \(\mathcal {R}\) of the set \([{X}\,\leftrightarrow \,{Y}]\) of relations is to be computed, then we get the following result.

Theorem 4.1

Let the subset \(\mathcal {R}\) of the set \([{X}\,\leftrightarrow \,{Y}]\) of relations be specified as \(\mathcal {R}= \{ \text{ Rel }(r) \mid r \in [{X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}] \wedge \mathfrak {v}\subseteq \mathfrak {w}\},\) where \(\mathfrak {v}, \mathfrak {w}\in \textbf{VE}_r\). Using the membership relation \(\text{ M }: {X\times Y}\,\leftrightarrow \,{[{X}\,\leftrightarrow \,{Y}]}\) the set \(\mathcal {R}\) is represented by the vector

$$ {{\overline{\textsf{L};( \mathfrak {v}[\text{ M }/r] \cap {\overline{\mathfrak {w}[\text{ M }/r]}} ) }}}^\textsf{T}. $$

Using this result, in [6] numerous vectors are obtained which represent important classes of relations. A certain disadvantage of Theorem 4.1 is that it does not work directly with relations but with their vector representations. If, e.g., \(\mathcal {R}\) is the set of all transitive relations on a set X, then one would like to specify that R is a member of \(\mathcal {R}\) by \(R ;R \subseteq R\) instead of by \({\text{ C }}^\textsf{T} ;[\![{r},{r}] \subseteq r\) as in [6], where r is the vector representation of R. Using the properties of the two functions \(\textit{vec}\) and \(\text{ Rel }\) given in Sect. 3, it is not hard to derive the inclusion \({\text{ C }}^\textsf{T} ;[\![{r},{r}] \subseteq r\) (between column-wise extendible vector expressions over r) from the common specification \(\text{ Rel }(r) ;\text{ Rel }(r) \subseteq \text{ Rel }(r)\) of the transitivity of \(\text{ Rel }(r)\).

In the following we generalize the example of transitive relations and consider arbitrary inclusions \(\mathfrak {R}\subseteq \mathfrak {S}\), where \(\mathfrak {R}\) and \(\mathfrak {S}\) are relation-algebraic expressions that are constructed from a variable R of type \({X}\,\leftrightarrow \,{Y}\) using certain relations (e.g., membership relations and projection relations) and the constants and operations (again including derived ones) of relation algebra. We denote the set of all these relation-algebraic expressions by \(\textbf{RE}_R\). Note that R is the only variable in such an expression. Our aim is to obtain from the specification \(\mathcal {R}= \{ R \in [{X}\,\leftrightarrow \,{Y}] \mid \mathfrak {R}\subseteq \mathfrak {S}\}\) a vector representation of this set in one step, similar to Theorem 4.1. Decisive is the following function, which transforms expressions from \(\textbf{RE}_R\) into expressions from \(\textbf{VE}_r\), where \(r : {X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\).

Definition 4.3

Given a variable r of type \({X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) and a variable R of type \({X}\,\leftrightarrow \,{Y}\), we define the function \(\nu _r : \textbf{RE}_R \rightarrow \textbf{VE}_r\) as follows, using induction on the structure of the argument:

(1) \(\nu _r(R) = r\).

(2) \(\nu _r(S) = \textit{vec}(S)\) for all relations S.

(3) \(\nu _r({\overline{\mathfrak {R}}}) = {\overline{\nu _r(\mathfrak {R})}}\).

(4) \(\nu _r({\mathfrak {R}}^\textsf{T}) = \text{ S };\nu _r(\mathfrak {R})\).

(5) \(\nu _r(\mathfrak {R}\cup \mathfrak {S}) = \nu _r(\mathfrak {R}) \cup \nu _r(\mathfrak {S})\).

(6) \(\nu _r(\mathfrak {R}\cap \mathfrak {S}) = \nu _r(\mathfrak {R}) \cap \nu _r(\mathfrak {S})\).

(7) \(\nu _r(\mathfrak {R};\mathfrak {S}) = {\text{ C }}^\textsf{T} ;[\![{\nu _r(\mathfrak {R})},{\nu _r(\mathfrak {S})}]\).
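Clause (4) employs the relation \(\text{ S }\) that realizes transposition on vectorized relations. Under the assumption that \(\textit{vec}\) lists the entries of R in row-major order of the pairs \((x,y)\) (an ordering convention we fix only for illustration), \(\text{ S }\) is the permutation that swaps the two index components; a small Python sketch:

```python
# Sketch: vec flattens a Boolean matrix R : X <-> Y into a vector indexed by
# pairs (x, y); the row-major ordering below is an assumption for illustration.

def vec(R):
    return [R[x][y] for x in range(len(R)) for y in range(len(R[0]))]

def swap_matrix(nx, ny):
    """Permutation relation S with S ; vec(R) = vec(R^T) under this ordering."""
    n = nx * ny
    S = [[False] * n for _ in range(n)]
    for x in range(nx):
        for y in range(ny):
            S[y * nx + x][x * ny + y] = True  # position of (x,y) in vec(R^T)
    return S

def apply(S, v):
    """Relational composition of a matrix with a vector."""
    return [any(S[i][j] and v[j] for j in range(len(v))) for i in range(len(S))]

R = [[True, False, True], [False, True, False]]       # R : X <-> Y, |X|=2, |Y|=3
RT = [[R[x][y] for x in range(2)] for y in range(3)]  # transpose of R
S = swap_matrix(2, 3)
```

With these definitions `apply(S, vec(R))` coincides with `vec(RT)`, mirroring Lemma 3.1(2).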

In the next lemma we verify that \(\nu _r\) in fact yields a column-wise extendible vector expression over the variable r. Furthermore, we show that if r is instantiated as \(\textit{vec}(R)\), then \(\nu _r\) equals the function \(\textit{vec}\).

Lemma 4.1

Let a variable r of type \({X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) and a variable R of type \({X}\,\leftrightarrow \,{Y}\) be given. For all \(\mathfrak {R}\in \textbf{RE}_R\) we then have \(\nu _r(\mathfrak {R}) \in \textbf{VE}_r\) and, provided r is instantiated as \(\textit{vec}(R)\), also \(\nu _r(\mathfrak {R}) = \textit{vec}(\mathfrak {R})\).

Proof

We use induction on the structure of the expression \(\mathfrak {R}\).

The induction base is that \(\mathfrak {R}\) is the variable R or a relation S. In the first case we have \(\nu _r(R) = r \in \textbf{VE}_r\). So, if r is instantiated as \(\textit{vec}(R)\), then \(\nu _r(\mathfrak {R}) = r = \textit{vec}(R) = \textit{vec}(\mathfrak {R})\). The case that \(\mathfrak {R}\) is a relation S is trivial.

In the first case of the induction step, assume \(\mathfrak {R}\) to be of the form \({\overline{\mathfrak {S}}}\) and that the induction hypothesis holds for the expression \(\mathfrak {S}\). Then \(\nu _r(\mathfrak {S}) \in \textbf{VE}_r\) and Definition 4.3(3) yield \(\nu _r({\overline{\mathfrak {S}}}) \in \textbf{VE}_r\). If r is instantiated as \(\textit{vec}(R)\), we have \(\nu _r(\mathfrak {S}) = \textit{vec}(\mathfrak {S})\) and we obtain

$$ \nu _r({\overline{\mathfrak {S}}}) = {\overline{\nu _r(\mathfrak {S})}} = {\overline{\textit{vec}(\mathfrak {S})}} = \textit{vec}({\overline{\mathfrak {S}}}) $$

using Definition 4.3(3) and Lemma 3.1(1). Next, let \(\mathfrak {R}\) be of the form \({\mathfrak {S}}^\textsf{T}\) and the induction hypothesis hold for the expression \(\mathfrak {S}\). Here \(\nu _r(\mathfrak {S}) \in \textbf{VE}_r\) and Definition 4.3(4) show \(\nu _r({\mathfrak {S}}^\textsf{T}) \in \textbf{VE}_r\). As an instantiation of r as \(\textit{vec}(R)\) yields \(\nu _r(\mathfrak {S}) = \textit{vec}(\mathfrak {S})\), we get

$$ \nu _r({\mathfrak {S}}^\textsf{T}) = \text{ S };\nu _r(\mathfrak {S}) = \text{ S };\textit{vec}(\mathfrak {S}) = \textit{vec}({\mathfrak {S}}^\textsf{T}) , $$

using Definition 4.3(4) and Lemma 3.1(2). The cases where \(\mathfrak {R}\) is a union, an intersection or a composition of expressions can be treated in the same way: when \(\mathfrak {R}\) is a union, Definition 4.3(5) and Lemma 3.1(3) apply, when it is an intersection, Definition 4.3(6) and Lemma 3.1(4) apply, and when it is a composition, Definition 4.3(7) and Lemma 3.1(5) apply.    \(\square \)

Now, we are able to prove the following variant of Theorem 4.1. The main difference in this modified version is that the elements of the set of relations are specified by an inclusion of relation-algebraic expressions over the variable R rather than by an inclusion between column-wise extendible vector expressions over a variable r that stands for the vector representation of R.

Theorem 4.2

Let the subset \(\mathcal {R}\) of the set \([{X}\,\leftrightarrow \,{Y}]\) of relations be specified as \( \mathcal {R}= \{ R \in [{X}\,\leftrightarrow \,{Y}] \mid \mathfrak {R}\subseteq \mathfrak {S}\}, \) where \(\mathfrak {R}, \mathfrak {S}\in \textbf{RE}_R\). Taking a variable r of type \({X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) and using the membership relation \(\text{ M }: {X\times Y}\,\leftrightarrow \,{[{X}\,\leftrightarrow \,{Y}]}\), the set \(\mathcal {R}\) is represented by the vector

$$ {{\overline{\textsf{L};( \nu _r(\mathfrak {R})[\text{ M }/r] \cap {\overline{\nu _r(\mathfrak {S})[\text{ M }/r]}} ) }}}^\textsf{T}. $$

Proof

First, we prove that the original specification of the set \(\mathcal {R}\) is equivalent to the specification

$$ \mathcal {R}= \{ \text{ Rel }(r) \mid r \in [{X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}] \wedge \nu _r(\mathfrak {R}) \subseteq \nu _r(\mathfrak {S}) \}. $$

Let \(R : {X}\,\leftrightarrow \,{Y}\) be given. We have to verify that \(\mathfrak {R}\subseteq \mathfrak {S}\) holds iff there exists \(r : {X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) such that \(\nu _r(\mathfrak {R}) \subseteq \nu _r(\mathfrak {S})\) and \(R = \text{ Rel }(r)\). Here is the proof, where r ranges over \([{X\times Y}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}]\):

$$\begin{aligned} \mathfrak {R}\subseteq \mathfrak {S}&\iff \textit{vec}(\mathfrak {R}) \subseteq \textit{vec}(\mathfrak {S}){} & {} \text {see Sect. 3} \\&\iff \nu _{\textit{vec}(R)}(\mathfrak {R}) \subseteq \nu _{\textit{vec}(R)}(\mathfrak {S}){} & {} \text {by Lemma 4.1} \\&\iff {\exists }r : r = \textit{vec}(R) \wedge \nu _r(\mathfrak {R}) \subseteq \nu _r(\mathfrak {S})\\&\iff {\exists }r : R = \text{ Rel }(r) \wedge \nu _r(\mathfrak {R}) \subseteq \nu _r(\mathfrak {S}){} & {} \text {see Sect. 3} \end{aligned}$$

From Lemma 4.1 we also get that both sides of the inclusion of the second specification of \(\mathcal {R}\) are column-wise extendible vector expressions over r. Hence, Theorem 4.1 is applicable and yields the desired result.    \(\square \)

As an example for applying the previous theorem, we consider again transitive relations, i.e., we use the specification

$$ \mathcal {R}= \{ R \in [{X}\,\leftrightarrow \,{X}] \mid R ;R \subseteq R\}. $$

For arbitrary \(R : {X}\,\leftrightarrow \,{X}\) and \(r : {X\times X}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) we get for the left-hand side of the inclusion \(R ;R \subseteq R\) that \(\nu _r(R ;R) = {\text{ C }}^\textsf{T} ;[\![{\nu _r(R)},{\nu _r(R)}] = {\text{ C }}^\textsf{T} ;[\![{r},{r}]\), such that \(\nu _r(R ;R)[\text{ M }/r] = {\text{ C }}^\textsf{T} ;[\![{\text{ M }},{\text{ M }}]\), with \(\text{ M }: {X\times X}\,\leftrightarrow \,{[{X}\,\leftrightarrow \,{X}]}\) as membership relation. In case of the right-hand side of \(R ;R \subseteq R\) we have \(\nu _r(R) = r\), and this yields \(\nu _r(R)[\text{ M }/r] = \text{ M }\). So, Theorem 4.2 implies that the vector

$$ \textit{trans} := {{\overline{\textsf{L};( {\text{ C }}^\textsf{T} ;[\![{\text{ M }},{\text{ M }}] \cap {\overline{ \text{ M }}} ) }}}^\textsf{T} $$

of type \({[{X}\,\leftrightarrow \,{X}]}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) represents \(\mathcal {R}\), i.e., the set of transitive relations on X.
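The vector \(\textit{trans}\) encodes, column by column of \(\text{ M }\), the direct Boolean-matrix test of \(R ;R \subseteq R\). A brute-force sketch of this underlying test (illustrative only; it enumerates all 512 relations on a 3-element set rather than using ROBDDs):

```python
from itertools import product

def is_transitive(R, n):
    """Direct Boolean-matrix test of the inclusion R;R <= R."""
    return all(not (R[x][y] and R[y][z]) or R[x][z]
               for x in range(n) for y in range(n) for z in range(n))

n = 3
count = sum(
    is_transitive([[bool(bits[x * n + y]) for y in range(n)] for x in range(n)], n)
    for bits in product([0, 1], repeat=n * n)
)
```

The resulting count, 171, agrees with the known number of transitive relations on a 3-element set.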

5 Applications: Kernels and Richardson’s Criterion

A directed simple graph, i.e., a directed graph without multiple edges between the same pair of vertices, on a set of vertices X can be represented by a relation \(R : {X}\,\leftrightarrow \,{X}\). A subset K of X is a kernel of R if for all \(x \in X\) it holds that \(x \notin K\) iff there exists \(y \in K\) such that \(R_{x,y}\). Kernels have been introduced in [12] as a generalization of a solution of a cooperative game. Not every relation has a kernel, and determining whether a relation has a kernel is known to be an NP-complete problem.
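The point-wise kernel condition can be tested directly; a small illustrative sketch (the example graphs are ours) confirming that a 2-cycle has two kernels while an odd 3-cycle has none:

```python
from itertools import combinations

def subsets(n):
    """All subsets of {0,...,n-1} as Python sets."""
    for k in range(n + 1):
        for c in combinations(range(n), k):
            yield set(c)

def is_kernel(R, K, n):
    """Point-wise test: for all x, x not in K  iff  some y in K with R[x][y]."""
    return all((x not in K) == any(R[x][y] for y in K) for x in range(n))

# A 2-cycle has kernels {0} and {1}; an (odd) 3-cycle has no kernel.
two_cycle = [[False, True], [True, False]]
three_cycle = [[False, True, False], [False, False, True], [True, False, False]]

kernels2 = [K for K in subsets(2) if is_kernel(two_cycle, K, 2)]
kernels3 = [K for K in subsets(3) if is_kernel(three_cycle, K, 3)]
```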

There exists a series of sufficient criteria for the existence of kernels which can be tested efficiently. In [7] it was investigated how well these criteria characterize the class of relations that have kernels. This was done by computing the number of relations with a kernel for all sets X with up to 7 elements using RelView. Then the four most popular criteria of the above-mentioned series were considered and for each of them the number of relations satisfying the criterion was computed – again for all X with \(|X| \le 7\) using RelView. The numerical data of [7] show that even in case of the most general of the criteria, the absence of odd cycles (Richardson’s criterion), only a very small portion of the relations with kernels satisfies this property. So, the criteria are far from characterizing the class of relations with kernels. The data also led to the following conjecture for finite sets X: The probability that a relation selected uniformly at random from \([{X}\,\leftrightarrow \,{X}]\) has a kernel tends to zero if |X| tends to infinity.

The key to obtaining the data of [7] are relation-algebraic specifications of five vectors of type \({[{X}\,\leftrightarrow \,{X}]}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) which represent the set of relations on X having kernels and the sets of relations on X satisfying the four criteria. The RelView-programs then are nothing more than translations of the specifications into the programming language of the system. Each vector is obtained from a logical description of the relations in question and its transformation into an equivalent relation-algebraic expression. Especially in case of Richardson’s criterion the development is rather technical and complex. In the following we solve two of the problems of [7] using the new method. A comparison with [7] shows that the new solutions are much simpler and many steps are entirely straightforward.

We start with the characterization of relations having kernels. Assume \(R : {X}\,\leftrightarrow \,{X}\) and \(K \subseteq X\). Using the definition of a kernel and the point-wise description of symmetric quotients, we have that K is a kernel of R iff \(\text{ syq }({\overline{\text{ M }}},R ;\text{ M})_{K,K}\), where \(\text{ M }: {X}\,\leftrightarrow \,{2^X}\) is a membership relation. As a consequence, R has a kernel iff \(\text{ syq }({\overline{\text{ M }}},R ;\text{ M}) \cap \textsf{I}\not = \textsf{O}\). Thus, R has no kernel iff \(\text{ syq }({\overline{\text{ M }}},R ;\text{ M}) \subseteq {\overline{\textsf{I}}}\). In order to obtain a vector \(\textit{kernel} : {[{X}\,\leftrightarrow \,{X}]}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) that represents the set of relations on X with a kernel, we use our method to transform the inclusion \(\text{ syq }({\overline{\text{ M }}},R ;\text{ M}) \subseteq {\overline{\textsf{I}}}\) into a vector \(\textit{nokernel} : {[{X}\,\leftrightarrow \,{X}]}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) and define \(\textit{kernel}\) as the complement of \(\textit{nokernel}\).
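The equivalence of the symmetric-quotient characterization with the point-wise definition can be cross-checked by brute force on small carrier sets. The sketch below implements \(\text{ syq }(A,B) = {\overline{{A}^\textsf{T} ;{\overline{B}}}} \cap {\overline{{{\overline{A}}}^\textsf{T} ;B}}\) on Boolean matrices and compares both kernel tests over all 512 relations on a 3-element set (all helper names are ours):

```python
from itertools import product, combinations

n = 3
subsets = [set(c) for k in range(n + 1) for c in combinations(range(n), k)]

# Membership relation M : X <-> 2^X as a Boolean matrix.
M = [[x in K for K in subsets] for x in range(n)]

def bool_not(A):
    return [[not a for a in row] for row in A]

def comp(A, B):
    return [[any(A[i][k] and B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def syq(A, B):
    """Symmetric quotient syq(A,B) = ~(A^T ; ~B)  cap  ~(~A^T ; B)."""
    left = bool_not(comp(transpose(A), bool_not(B)))
    right = bool_not(comp(transpose(bool_not(A)), B))
    return [[l and r for l, r in zip(lr, rr)] for lr, rr in zip(left, right)]

def has_kernel_syq(R):
    """R has a kernel iff syq(~M, R;M) meets the identity."""
    Q = syq(bool_not(M), comp(R, M))
    return any(Q[j][j] for j in range(len(subsets)))

def has_kernel_def(R):
    """Point-wise kernel definition, as a brute-force reference."""
    return any(all((x not in K) == any(R[x][y] for y in K) for x in range(n))
               for K in subsets)

agree = all(
    has_kernel_syq(R) == has_kernel_def(R)
    for bits in product([0, 1], repeat=n * n)
    for R in [[[bool(bits[x * n + y]) for y in range(n)] for x in range(n)]]
)
```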

For the development of the vector \(\textit{nokernel}\), assume an arbitrary \(r : {X\times X}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) to be given. Using the definition of the symmetric quotient, the definition of the function \(\nu _r\) and Lemma 3.1 in combination with the abbreviations \(m := \textit{vec}(\text{ M})\) and \(\textit{mt} := \textit{vec}({\text{ M }}^\textsf{T})\), we get

$$\begin{aligned}&\nu _r( \text{ syq }({\overline{\text{ M }}},R ;\text{ M}) ) \\&= \nu _r( {\overline{{{\overline{\text{ M }}}}^\textsf{T} ;{\overline{R ;\text{ M }}}}} \cap {\overline{{{\overline{{\overline{\text{ M }}}}}}^\textsf{T} ;R ;\text{ M }}} ) \\&= \nu _r( {\overline{{{\overline{\text{ M }}}}^\textsf{T} ;{\overline{R ;\text{ M }}}}} ) \cap \nu _r( {\overline{{\text{ M }}^\textsf{T} ;R ;\text{ M }}} ) \\&= {\overline{ \nu _r( {{\overline{\text{ M }}}}^\textsf{T} ;{\overline{R ;\text{ M }}} ) }} \cap {\overline{ \nu _r( {\text{ M }}^\textsf{T} ;R ;\text{ M}) }} \\&= {\overline{ {\text{ C }}^\textsf{T};[\![{\nu _r({{\overline{\text{ M }}}}^\textsf{T})},{\nu _r({\overline{R ;\text{ M }}})}] }} \cap {\overline{ {\text{ C }}^\textsf{T};[\![{\nu _r({\text{ M }}^\textsf{T})},{\nu _r(R ;\text{ M})}] }} \\&= {\overline{ {\text{ C }}^\textsf{T};[\![{\textit{vec}({{\overline{\text{ M }}}}^\textsf{T})},{{\overline{\nu _r(R ;\text{ M})}}}] }} \cap {\overline{ {\text{ C }}^\textsf{T};[\![{\textit{vec}({\text{ M }}^\textsf{T})},{\nu _r(R ;\text{ M})}] }} \\&= {\overline{ {\text{ C }}^\textsf{T};[\![{\textit{vec}({\overline{{\text{ M }}^\textsf{T}}})},{{\overline{{\text{ C }}^\textsf{T};[\![{\nu _r(R)},{\nu _r(\text{ M})}]}}}] }} \cap {\overline{ {\text{ C }}^\textsf{T};[\![{\textit{vec}({\text{ M }}^\textsf{T})},{{\text{ C }}^\textsf{T};[\![{\nu _r(R)},{\nu _r(\text{ M})}]}] }} \\&= {\overline{ {\text{ C }}^\textsf{T};[\![{{\overline{\textit{vec}({\text{ M }}^\textsf{T})}}},{{\overline{{\text{ C }}^\textsf{T};[\![{r},{\textit{vec}(\text{ M})}]}}}] }} \cap {\overline{ {\text{ C }}^\textsf{T};[\![{\textit{vec}({\text{ M }}^\textsf{T})},{{\text{ C }}^\textsf{T};[\![{r},{\textit{vec}(\text{ M})}]}] }} \\&= {\overline{ {\text{ C }}^\textsf{T};[\![{{\overline{\textit{mt}}}},{{\overline{{\text{ C }}^\textsf{T};[\![{r},{m}]}}}] }} \cap {\overline{ {\text{ C }}^\textsf{T};[\![{\textit{mt}},{{\text{ C }}^\textsf{T};[\![{r},{m}]}] }} \end{aligned}$$

for the left-hand side of the inclusion \(\text{ syq }({\overline{\text{ M }}},R ;\text{ M}) \subseteq {\overline{\textsf{I}}}\), hence

$$ \nu _r( \text{ syq }({\overline{\text{ M }}},R ;\text{ M}) )[\text{ M }/r] = {\overline{ {\text{ C }}^\textsf{T};[\![{{\overline{\textit{mt} ;\textsf{L}}}},{{\overline{{\text{ C }}^\textsf{T};[\![{\text{ M }},{m ;\textsf{L}}]}}}] }} \cap {\overline{ {\text{ C }}^\textsf{T};[\![{\textit{mt} ;\textsf{L}},{{\text{ C }}^\textsf{T};[\![{\text{ M }},{m ;\textsf{L}}]}] }} $$

for the replacement of r by the membership relation \(\text{ M }: {X\times X}\,\leftrightarrow \,{[{X}\,\leftrightarrow \,{X}]}\). In case of the right-hand side of the inclusion \(\text{ syq }({\overline{\text{ M }}},R ;\text{ M}) \subseteq {\overline{\textsf{I}}}\) we have \(\nu _r ({\overline{\textsf{I}}}) = \textit{vec}({\overline{\textsf{I}}}) = {\overline{\textit{vec}(\textsf{I})}}\) and with the abbreviation \(i := \textit{vec}(\textsf{I})\) we get

$$ \nu _r ({\overline{\textsf{I}}})[\text{ M }/r] = {\overline{i}} ;\textsf{L}= {\overline{i ;\textsf{L}}} $$

for the replacement of r by \(\text{ M }\). Now, Theorem 4.2 immediately yields

$$ \textit{nokernel} := {{\overline{\textsf{L};( {\overline{{\text{ C }}^\textsf{T}\!;[\![{{\overline{\textit{mt} ;\textsf{L}}}},{{\overline{H}}}] }} \cap {\overline{{\text{ C }}^\textsf{T}\!;[\![{\textit{mt} ;\textsf{L}},{H}] }} \cap i ;\textsf{L}) }}}^\textsf{T} \!\!\text {, where } H := {\text{ C }}^\textsf{T}\!;[\![{\text{ M }},{m ;\textsf{L}}]. $$

Next, we consider Richardson’s criterion and assume, as in [7], a finite set X with \(|X| \le 7\). To compute a vector that represents the set of relations on X without odd cycles, in [7] four vectors \(\mathfrak {cyc1}\), \(\mathfrak {cyc3}\), \(\mathfrak {cyc5}\) and \(\mathfrak {cyc7}\) of type \({[{X}\,\leftrightarrow \,{X}]}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) were developed which represent the set of relations on X having a cycle of length 1, 3, 5 and 7, respectively. Then the intersection of \({\overline{\mathfrak {cyc1}}}\), \({\overline{\mathfrak {cyc3}}}\), \({\overline{\mathfrak {cyc5}}}\) and \({\overline{\mathfrak {cyc7}}}\) yielded the desired vector. The developments of \(\mathfrak {cyc5}\) and especially of \(\mathfrak {cyc7}\) are very technical and complex. For example, in case of \(\mathfrak {cyc7}\) the relation-algebraic expression describes that for a given relation \(R : {X}\,\leftrightarrow \,{X}\) there exist three pairs \((u_1,u_2) \in X\times X\), \((v_1,v_2) \in X\times X\) and \((w_1,w_2) \in X\times X\) and an element \(x \in X\) such that the tuples \((u_2,u_1,x,v_2,v_1)\) and \((v_1,w_1,w_2,u_2)\) are paths in R and, therefore, the tuple \((u_2,u_1,x,v_2,v_1,w_1,w_2,u_2)\) is a cycle of length 7 in R.

A relation \(R : {X}\,\leftrightarrow \,{X}\) has no cycle of length 1 if \(R \subseteq {\overline{\textsf{I}}}\), no cycle of length 3 if \(R^3 \subseteq {\overline{\textsf{I}}}\), no cycle of length 5 if \(R^5 \subseteq {\overline{\textsf{I}}}\) and no cycle of length 7 if \(R^7 \subseteq {\overline{\textsf{I}}}\), where powers are defined as usual. The method of this paper allows using these specifications directly leading to four vectors \(\textit{nocyc1}\), \(\textit{nocyc3}\), \(\textit{nocyc5}\) and \(\textit{nocyc7}\) of type \({[{X}\,\leftrightarrow \,{X}]}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) which represent the set of relations on X having no cycle of length 1, 3, 5 and 7, respectively. Then the intersection of \(\textit{nocyc1}\), \(\textit{nocyc3}\), \(\textit{nocyc5}\) and \(\textit{nocyc7}\) yields the vector we are looking for.
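The four inclusions amount to checking that no odd power \(R^k\), \(k \in \{1,3,5,7\}\), meets the identity. A direct Boolean-matrix sketch (illustrative; not the ROBDD-based computation):

```python
def comp(A, B):
    """Relational composition of square Boolean matrices."""
    n = len(A)
    return [[any(A[i][k] and B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def power(R, k):
    P = R
    for _ in range(k - 1):
        P = comp(P, R)
    return P

def no_odd_cycle_up_to_7(R):
    """R^k <= ~I for k = 1, 3, 5, 7: no diagonal entry in any of these powers."""
    n = len(R)
    return all(not power(R, k)[x][x] for k in (1, 3, 5, 7) for x in range(n))

three_cycle = [[False, True, False], [False, False, True], [True, False, False]]
four_cycle = [[False, True, False, False], [False, False, True, False],
              [False, False, False, True], [True, False, False, False]]
```

As expected, the 3-cycle fails the test at \(k = 3\), while the 4-cycle, having only even cycles, passes.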

For the following, let an arbitrary \(r : {X\times X}\,\leftrightarrow \,{\textbf{1}\!\!\!\textbf{1}}\) be given. With regard to the inclusion \(R \subseteq {\overline{\textsf{I}}}\) we have \(\nu _r(R) = r\) and \(\nu _r({\overline{\textsf{I}}}) = \textit{vec}({\overline{\textsf{I}}}) = {\overline{\textit{vec}(\textsf{I})}}\), hence \(\nu _r(R)[\text{ M }/r] = \text{ M }\) and \(\nu _r({\overline{\textsf{I}}})[\text{ M }/r] = {\overline{\textit{vec}(\textsf{I})}} ;\textsf{L}= {\overline{\textit{vec}(\textsf{I}) ;\textsf{L}}}\) for the replacement of r by the membership relation \(\text{ M }: {X\times X}\,\leftrightarrow \,{[{X}\,\leftrightarrow \,{X}]}\). So, Theorem 4.2 yields

$$ \textit{nocyc1} := {{\overline{\textsf{L};( \text{ M }\cap \textit{vec}(\textsf{I}) ;\textsf{L}) }}}^\textsf{T}. $$

In case of the inclusions \(R^3 \subseteq {\overline{\textsf{I}}}\), \(R^5 \subseteq {\overline{\textsf{I}}}\) and \(R^7 \subseteq {\overline{\textsf{I}}}\) we work with the equivalent inclusions \({(R^2)}^\textsf{T} \subseteq {\overline{R}}\), \({(R^3)}^\textsf{T} \subseteq {\overline{R^2}}\) and \({(R^4)}^\textsf{T} \subseteq {\overline{R^3}}\) since these lead to more efficient RelView-programs than the original inclusions. For \({(R^2)}^\textsf{T} \subseteq {\overline{R}}\) we get

$$ \nu _r({(R^2)}^\textsf{T}) = \text{ S };\nu _r(R ;R) = \text{ S };{\text{ C }}^\textsf{T};[\![{\nu _r(R)},{\nu _r(R)}] = \text{ S };{\text{ C }}^\textsf{T};[\![{r},{r}] $$

and, hence, \(\nu _r({(R^2)}^\textsf{T})[\text{ M }/r] = \text{ S };{\text{ C }}^\textsf{T};[\![{\text{ M }},{\text{ M }}]\) for the left-hand side, and

$$ \nu _r({\overline{R}}) = {\overline{\nu _r(R)}} = {\overline{r}} $$

and, hence, \(\nu _r({\overline{R}})[\text{ M }/r] = {\overline{\text{ M }}}\) for the right-hand side. If we apply Theorem 4.2 to these results, we obtain the vector

$$ \textit{nocyc3} := {{\overline{\textsf{L};( \text{ S };{\text{ C }}^\textsf{T};[\![{\text{ M }},{\text{ M }}] \cap \text{ M}) }}}^\textsf{T}. $$

The remaining inclusions can be treated analogously. From \({(R^3)}^\textsf{T} \subseteq {\overline{R^2}}\) we get

$$ \textit{nocyc5} := {{\overline{\textsf{L};( \text{ S };{\text{ C }}^\textsf{T};[\![{\text{ M }},{H}] \cap H ) }}}^\textsf{T} \!\!\text {, where } H := {\text{ C }}^\textsf{T};[\![{\text{ M }},{\text{ M }}], $$

and the inclusion \({(R^4)}^\textsf{T} \subseteq {\overline{R^3}}\) leads to

$$ \textit{nocyc7} := {{\overline{\textsf{L};( \text{ S };{\text{ C }}^\textsf{T};[\![{H},{H}] \cap {\text{ C }}^\textsf{T};[\![{\text{ M }},{H}] ) }}}^\textsf{T} \!\!\text {, where } H := {\text{ C }}^\textsf{T};[\![{\text{ M }},{\text{ M }}]. $$

We have implemented the relation-algebraic specifications developed in this section in RelView and have compared the running times with those given in [7]. Doing so, we have used the same environment as in [7], i.e., version 8.2 of RelView on a PC with 2 CPUs of type Intel® Xeon® E5-2698, each with 20 cores and 3.60 GHz base frequency, 512 GByte RAM, running Arch Linux 5.2.0. We only present the data for \(|X|=7\). Using the RelView-programs resulting from [7], it takes 138.67 s to compute the vector that represents the set of 188 553 949 010 868 relations on X which have a kernel and 32 220.55 s to compute the vector that represents the set of 16 230 843 049 relations on X which satisfy Richardson’s criterion. The RelView-programs we have obtained from the specifications of this paper need 201.31 s for computing the first vector and 18 843.34 s for computing the second one.

From Lemma 3.2 we get \(\nu _r (\mathfrak {R};R ;\mathfrak {S}) = ({\mathfrak {R}}\!\parallel \!{{\mathfrak {S}}^\textsf{T}}) ;\nu _r(R)\) for all \(R : {X}\,\leftrightarrow \,{Y}\) and \(\mathfrak {R}, \mathfrak {S}\in \textbf{RE}_R\), where \(r := \textit{vec}(R)\). In Sect. 3 we have mentioned that this often leads to more efficient implementations. If we continue the calculation of \(\nu _r( \text{ syq }({\overline{\text{ M }}},R ;\text{ M}) )\) presented above with \({\overline{\nu _r (R ;\text{ M})}} = {\overline{({\textsf{I}}\!\parallel \!{{\text{ M }}^\textsf{T}}) ;\nu _r(R)}}\) and \(\nu _r ({\text{ M }}^\textsf{T} ;R ;\text{ M}) = ({{\text{ M }}^\textsf{T}}\!\parallel \!{{\text{ M }}^\textsf{T}}) ;\nu _r(R)\) after the third step, we obtain the variant

$$ \textit{nokernel} := {{\overline{\textsf{L};( {\overline{{\text{ C }}^\textsf{T};[\![{{\overline{\textit{mt} ;\textsf{L}}}},{{\overline{({\textsf{I}}\!\parallel \!{{\text{ M }}^\textsf{T}}) ;\text{ M }}}}] }} \cap {\overline{({{\text{ M }}^\textsf{T}}\!\parallel \!{{\text{ M }}^\textsf{T}}) ;\text{ M }}} \cap i ;\textsf{L}) }}}^\textsf{T} $$

the RelView-implementation of which allows computing the vector representation of the set of relations on X having kernels for \(|X| = 7\) in 189.89 s.

6 Further Applications

We have applied our method to many other classes of specific relations. These include the remaining three criteria for the existence of kernels treated in [7], i.e., bipartite relations, progressively finite relations, and symmetric and irreflexive relations. Also many of the vectors presented in [6] have been redeveloped using the new method. Furthermore, we have applied it to classes of relations not treated so far, viz. lattices, bounded partial orders, finite directed acyclic graphs and arborescences, tournaments, rectangular relations, difunctional relations, general Ferrers relations, strongly connected relations and maps having fixpoints.

Sometimes the specification of a property P(R) of a relation R leads to an inequation \(\mathfrak {R}\not = \textsf{O}\), with a relation-algebraic expression \(\mathfrak {R}\in \textbf{RE}_R\). An example is the specification of R to have a kernel by \(\text{ syq }({\overline{\text{ M }}},R ;\text{ M}) \cap \textsf{I}\not = \textsf{O}\) in Sect. 5. Also the specification of R to be a bounded partial order and of R to be a map with a fixed point leads to such inequations, viz. to \({\overline{{\overline{R}} ;\textsf{L}}} \not = \textsf{O}\) (existence of a least element) and \({\overline{{\overline{{R}^\textsf{T}}} ;\textsf{L}}} \not = \textsf{O}\) (existence of a greatest element) in the first case and to \(R \cap \textsf{I}\not = \textsf{O}\) (existence of a reflexive element) in the second case.
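Read point-wise, the three inequations state the existence of a least element, a greatest element and a reflexive point, respectively; a small illustrative sketch on Boolean matrices (the example orders are ours):

```python
def has_least(R):
    """~(~R ; L) != O : some x with R[x][y] for all y, i.e. a least element."""
    return any(all(row) for row in R)

def has_greatest(R):
    """~(~R^T ; L) != O : some y with R[x][y] for all x, i.e. a greatest element."""
    n = len(R)
    return any(all(R[x][y] for x in range(n)) for y in range(n))

def has_reflexive_point(R):
    """R cap I != O : some x with R[x][x]."""
    return any(R[x][x] for x in range(len(R)))

# The chain 0 <= 1 <= 2 and a two-element antichain (ordered by equality only).
chain = [[x <= y for y in range(3)] for x in range(3)]
antichain = [[x == y for y in range(2)] for x in range(2)]
```

The chain is a bounded partial order; the antichain is a partial order without least or greatest element.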

To cope with the inequation \(\text{ syq }({\overline{\text{ M }}},R ;\text{ M}) \cap \textsf{I}\not = \textsf{O}\), in Sect. 5 we consider the equation \(\text{ syq }({\overline{\text{ M }}},R ;\text{ M}) \cap \textsf{I}= \textsf{O}\) instead, which specifies R to have no kernel. Then we transform it into the equivalent inclusion \(\text{ syq }({\overline{\text{ M }}},R ;\text{ M}) \subseteq {\overline{\textsf{I}}}\), apply our method to the latter and, finally, form the complement of the result to obtain the vector we are actually interested in. Since the carrier set of R is non-empty, \(\text{ syq }({\overline{\text{ M }}},R ;\text{ M}) \cap \textsf{I}\not = \textsf{O}\) holds iff \(\textsf{L}\subseteq \textsf{L};(\text{ syq }({\overline{\text{ M }}},R ;\text{ M}) \cap \textsf{I}) ;\textsf{L}\). We have applied our method also to that inclusion. But the corresponding RelView-program proved to be less efficient than that we have obtained from the approach of Sect. 5.

We have also used the fact that for relations on non-empty carrier sets the inequation \(\mathfrak {R}\not = \textsf{O}\) is equivalent to the inclusion \(\textsf{L}\subseteq \textsf{L};\mathfrak {R};\textsf{L}\) to get vector representations of the set of bounded partial orders and of the set of maps with fixed points. In each case the specific form of the expression \(\mathfrak {R}\) allows applying Lemma 3.2 for computing \(\nu _r(\textsf{L};\mathfrak {R};\textsf{L})\). E.g., for the inclusion \(\textsf{L}\subseteq \textsf{L};(R \cap \textsf{I}) ;\textsf{L}\) we get \(\nu _r(\textsf{L}) = \textsf{L}\) for the left-hand side and \(\nu _r(\textsf{L};(R \cap \textsf{I}) ;\textsf{L}) = ({\textsf{L}}\!\parallel \!{{\textsf{L}}^\textsf{T}}) ;\nu _r(R \cap \textsf{I}) = \textsf{L};(r \cap \textit{vec}(\textsf{I}))\) for the right-hand side. In combination with Theorem 4.2 this yields \( {{\overline{ \textsf{L};{\overline{ \textsf{L};(\text{ M }\cap \textit{vec}(\textsf{I}) ;\textsf{L})}}}}}^\textsf{T} \) as vector representation of the set of relations with a reflexive element, which can be further simplified to \({(\textsf{L};(\text{ M }\cap \textit{vec}(\textsf{I}) ;\textsf{L}))}^\textsf{T}\). With the RelView-program obtained from the simplified specification and the vectors of [6] for the classes of univalent and total relations, respectively, we have been able to compute the vector representation of the set of maps on X having fixed points up to \(|X| = 240\). For 240 elements, 5.55 s are needed to get the result and to store it in a ROBDD with 229 439 vertices. Our RelView-experiments show that the percentage of maps having fixed points decreases from 100% for \(|X|=1\) to 63.284% for \(|X|=240\). This is in accordance with the well-known result that, if \(P_n\) denotes the probability that a map on an n-element set selected uniformly at random has a fixed point, then \(P_n\) tends to \(1 - \frac{1}{e}\) as n tends to infinity.
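The limit \(1 - \frac{1}{e}\) can be made plausible by elementary counting: exactly \(n^n - (n-1)^n\) of the \(n^n\) maps on an n-element set have a fixed point, and \(1 - (1 - \frac{1}{n})^n \rightarrow 1 - \frac{1}{e}\). A sketch comparing this closed form with brute-force enumeration for a small n:

```python
from itertools import product

def maps_with_fixed_point(n):
    """Brute force: count maps f on {0,...,n-1} with f(x) = x for some x."""
    return sum(any(f[x] == x for x in range(n))
               for f in product(range(n), repeat=n))

# Complement counting: exactly n^n - (n-1)^n maps have a fixed point.
n = 5
brute = maps_with_fixed_point(n)
closed = n ** n - (n - 1) ** n
fraction = closed / n ** n   # equals 1 - (1 - 1/n)^n, tending to 1 - 1/e
```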

7 Concluding Remarks

The computational power obtained by the use of ROBDDs and RelView becomes clear if we compare the running times mentioned in Sect. 5 and 6 with the times needed by a “classical” brute-force approach. E.g., if we assume that some algorithm could generate every map on a given finite set X and test the existence of a fixed point in, say, \(10^{-6}\) seconds, it would take \(2.82 \cdot 10^{11}\) seconds (i.e., almost 9000 years) for this task even for \(|X| = 15\), since in this case there are 282 325 794 823 047 151 maps on X having a fixed point. RelView only needs 0.019 s to compute a vector representing this set.
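The figure used in this estimate can be reproduced by complement counting: \(15^{15} - 14^{15}\) maps on a 15-element set have a fixed point. A sketch (the year conversion is ours):

```python
# Complement counting: maps on a 15-element set having a fixed point.
count = 15 ** 15 - 14 ** 15        # all maps minus the fixed-point-free ones

# Time for a hypothetical generate-and-test at 10^-6 s per such map.
seconds = count * 1e-6
years = seconds / (365.25 * 24 * 3600)
```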

Membership relations \(\text{ M }: {X\times Y}\,\leftrightarrow \,{[{X}\,\leftrightarrow \,{Y}]}\) play a central role in our approach. The variable ordering used in RelView allows implementing \(\text{ M }\) by a ROBDD whose number of vertices is linear in the size of \(X\times Y\). Besides the very efficient ROBDD-implementation of the relational operations, this specific feature of the tool seems to be the main reason for the amazing computational power in case of problems that deal with the computation of a subset of a power set.

In addition to applying the theory to further examples and applications, there are at least two theoretical topics related to the material of this paper that we would like to investigate in the future. The first topic is the question of whether it is possible to find, for any property P(R) of relations \(R : {X}\,\leftrightarrow \,{Y}\) expressed in second-order logic, an equivalent finite set of inclusions \(\mathfrak {R}_i \subseteq \mathfrak {S}_i\) with \(\mathfrak {R}_i, \mathfrak {S}_i \in \textbf{RE}_R\) for all i. If this is the case, then the method of this paper could be applied immediately to the inclusions to get a vector that represents the set of all these relations. If this is not the case, it would be interesting to characterize the second-order properties for which an equivalent description by a finite set of inclusions \(\{\mathfrak {R}_1 \subseteq \mathfrak {S}_1, \ldots , \mathfrak {R}_n \subseteq \mathfrak {S}_n\}\) is possible, i.e., for which our method is applicable. The second topic is the above-mentioned conjecture about the probability that a relation selected uniformly at random from \([{X}\,\leftrightarrow \,{X}]\) has a kernel. The conjecture states that this probability tends to zero if |X| tends to infinity. So far we have only been able to prove that, given any (but fixed) \(k \in {\mathbb N}\), the probability that a relation selected uniformly at random from \([{X}\,\leftrightarrow \,{X}]\) has a kernel with at most k elements tends to zero if |X| tends to infinity.