1 Introduction: Relations, Nearness and Granulation

From a general point of view, the approximation of a set A included in a universe U amounts to answering the following questions:

  1. (a)

    What elements of U are surely, or necessarily, in A?

  2. (b)

What elements of U are not in A but sufficiently near to A, that is, possibly in A?

  3. (c)

    What elements of U are surely outside A, that is neither necessarily nor possibly in A, that is, are necessarily outside A?

The difference from a sharp classification is crystal clear, since a sharp classification just provides a binary answer: either an element is inside A or it is outside of it. Otherwise stated, there is no notion of “possibly in although not surely in”. There is no indecision, no nuance: either “Yes” or “No”.

However, classification on the basis of properties or attributes demands subtler answers. Suppose an item u ∈ U is in A while another item u′∈ U is not in A from a classical set-theoretical point of view, although it fulfils properties or attributes very close to those of u. It might not be correct to definitely exclude that u′ belongs to the set A. Think of some dynamic situation, where all the patients in a hospital showing at least the same symptoms as u developed a particular disease α, while a patient u′ who shows almost all the symptoms of u has not developed that disease. Probably it is not safe to rule out the possibility that u′ will develop the disease, too. Therefore, we would say that u′ belongs to a reasonable approximation of the set A of patients suffering from α. Such an approximation is from above, because it enlarges the actual set A: it is possible for u′ to develop α, hence it can belong to A in the future.

The sentence “fulfilling almost all the same properties as” defines a notion of nearness. From a mathematical point of view, “almost all” suggests a preorder, that is, a binary relation R on U which is reflexive (uRu) and transitive (uRu′ and u′Ru″ imply uRu″): the set A of symptoms shown by patient u′ is included in the set B of symptoms of patient u. However, this preorder should be refined, because mathematically the empty set ∅ is also included in B, which is meaningless from a clinical point of view. Actually, the metric underlying the adverb “almost” depends on the particular knowledge and heuristics of the experts.

But in general R could be a binary relation denoting any sort of connection between items (or objects) which groups objects in granules of any kind. Indeed, there are cases in which it is difficult to recognise a “rule” behind the formation of the given granules of objects. In this case some relation R is in action, but this relation does not have any known “nice property”.

If the set \(\mathcal G(U)\) of the granules (subsets) of U is a covering, in many cases behind the granulation there is a tolerance relation (that is, a reflexive and symmetric relation: uRu′ implies u′Ru)—see for instance [5, 18] and [36].

If \(\mathcal G(U)\) forms a family of open subsets of a topological space, then we can recover from it a preorder, or a partial order (a preorder which is antisymmetric: uRu′ and u′Ru imply u = u′). In particular cases one obtains an equivalence relation (reflexive, transitive and symmetric), which is the original situation studied by Zdzisław Pawlak (see [40]).

As we shall see, the reverse constructions also hold, that is, from preorders, partial orders or equivalence relations one obtains topological spaces with different features.

Algebraic structures induced by rough set systems, that is, by the sets of all rough sets, have been widely studied since their inception. Considering only some early results, in [41] it was shown that classical rough sets form Stone algebras. In [25] rough sets were linked to Heyting algebras. Also [8] worked on this topic. In [28] rough set systems were proved to form semi-simple Nelson algebras, hence three-valued Łukasiewicz algebras. This result was improved in [3], in [4] and by other authors. Later, rough sets have been connected to other algebras of logic, such as Post algebras of order three, chain-based lattices, Heyting and bi-Heyting algebras (see [2, 32]). In [6] and [7] rough sets were embedded in the framework of Brouwer-Zadeh lattices and Heyting-Wajsberg algebras. More recently, interesting investigations about more general algebras linked to rough sets have been presented (see [46]).

Situations in which instead of topologies one has to deal with pre-topologies have been studied (see for instance [33] and [34]). Nonetheless, in a number of cases, preorders and partial orders occur (see for instance [16]). In these lessons we shall deal with this specific case.

From an abstract point of view, topologies are Heyting algebras, particular structures which model Intuitionistic Logic in the same way Boolean Algebras model Classical Logic. In this case, then, rough set systems can be made into Nelson algebras which are built from Heyting algebras defined on the granulation.

A natural approach to rough sets is through relation algebra. We can cite as early works: [9, 10, 29] and [11].

It follows that we have to develop our exposition in three different frameworks, which will be connected to each other:

$$\displaystyle \begin{aligned} \textit{Relation algebras} - \textit{Topology} - \textit{Algebraic logics}. \end{aligned} $$

From now on, the sets we shall deal with are intended to be finite. This choice does no harm to real-world applications. Moreover, it avoids some complications which could disturb a beginner’s comprehension. Many of the results are, nonetheless, applicable to the infinite case and, when required to avoid misinterpretation, we shall point out when this is not so.

Since these are lessons, only a few results and proofs are really new, although much of the exposition is novel. We indicate with references when a result is standard or well-known. Otherwise the theses or the proofs are new or were already given in other publications by the author. Moreover, a number of elementary examples will be provided. These examples are connected to each other to show how the topics intertwine.

Usually, in the meta-language, which is classical, we write “∃”, “∀”, “⇒”, “∧”, “∨” and “¬” for “exists”, “for all”, “implies”, “and”, “or” and “not”, respectively. However, in some cases, to avoid confusion, we use “&” instead of “∧”.

2 Lesson 1: The Relational Framework

2.1 Binary Relations and Their Algebra

Let us formally define what we can do with binary relations.

Definition 1

Let U, U′, U″ be three sets. In what follows, u is a dummy element of U and u′ a dummy element of U′ (that is, they represent any element of their domain):

  1. 1.

    A binary relation is a subset R ⊆ U × U′ of ordered pairs 〈u, u′〉 of elements of U and U′.

  2. 2.

    − R := {〈u, u′〉 : 〈u, u′〉∉R} is the complement of R. If R′⊆ U × U′, then R ∩ R′ and R ∪ R′ are the usual set-theoretic operations.

  3. 3.

R⌣ denotes the converse of R: R⌣ := {〈y, x〉 : 〈x, y〉∈ R}. Hence, for all u ∈ U, u′∈ U′, 〈u, u′〉∈ R iff 〈u′, u〉∈ R⌣.

  4. 4.

If Q ⊆ U′× U″, then R ⊗ Q := {〈u, u″〉 : ∃u′∈ U′(〈u, u′〉∈ R ∧〈u′, u″〉∈ Q)}—the right composition of R with Q. Converse is an involution with respect to composition: R⌣⌣ = R and (R ⊗ Q)⌣ = Q⌣ ⊗ R⌣.

  5. 5.

    If A ⊆ U, then \(A^\rightarrow _R:=\{\langle a,u'\rangle :a\in A\wedge u'\in U'\}\) is called the right cylinder of A with respect to R. It is the relational embedding of a subset A of U in U × U′. If B ⊆ U′, then \(B^{\leftarrow }_R:=\{\langle u, b\rangle :b\in B\wedge u\in U\}\) is the left cylinder of B with respect to R. It is the relational embedding of a subset B of U′ in U × U′.

We set \(A^\leftarrow _{R^\smile }:=(A^\rightarrow _R)^\smile \), the left cylinder of A with respect to R⌣, to have the relational embedding of A in U′× U, and \(B^\rightarrow _{R^\smile }:=(B^\leftarrow _R)^\smile \), the right cylinder of B with respect to R⌣, which is the relational embedding of B in U′× U.

A cylinder represents a set in the language of relations. In any ordered pair 〈x, y〉 of a cylinder, the element y is any element of the codomain of the relation. This is the result and meaning of a cylindrification, which we now formally motivate.

  6. 6.

    The operation \(R^\smile \otimes A^\rightarrow _R=\{\langle u',u^{\prime *}\rangle :\exists u(\langle u,u'\rangle \in R\wedge \langle u,u^{\prime *}\rangle \in A^\rightarrow _R)\}\) is a right cylinder of type U′× U′. Since u is a dummy element of U the operation can be rephrased in terms of relations and sets, as it will be formally proved in Lemma 5:

    $$\displaystyle \begin{aligned} R(A):=\{u':\exists u(\langle u, u'\rangle\in R\wedge u\in A)\}. \end{aligned} $$
    (1)

R(A) is called the left Peirce product of R and A, the R-neighbourhood of A, or the filter of A if R is an order relation, in which case we denote it by ↑A, or ↑R A if necessary. Similarly, the operation \(R\otimes B^\rightarrow _{R^\smile }=\{\langle u,u^*\rangle :\exists u'(\langle u,u'\rangle \in R\wedge \langle u',u^*\rangle \in B^\rightarrow _{R^\smile })\}\) can be rephrased in terms of relations and sets as:

    $$\displaystyle \begin{aligned} R^\smile(B):=\{u:\exists u'(\langle u, u'\rangle\in R\wedge u'\in B)\} \end{aligned} $$
    (2)

R⌣(B) is called the left Peirce product of R⌣ and B (equivalently, the right Peirce product of R and B), the R⌣-neighbourhood of B, or the R-ideal of B, denoted also by ↓B or ↓R B if we need to specify the relation. For any u′∈ U′, u ∈ U:

    $$\displaystyle \begin{aligned} (A^\rightarrow_R)^\smile(u')=(A^\leftarrow_{R^\smile})(u')=A, \;\; (B^\rightarrow_{R^\smile})^\smile(u)=B^\leftarrow_R(u)=B.\end{aligned} $$
    (3)
  7. 7.

    Given two relations R ⊆ U × U′ and Z ⊆ U × U″ the relation

    $$\displaystyle \begin{aligned} R\longrightarrow Z=\{\langle u',u''\rangle:\forall u(\langle u,u'\rangle\in R\Rightarrow \langle u,u''\rangle\in Z)\}\end{aligned} $$
    (4)

is called the right residual of R and Z. R ⟶ Z is the largest relation K such that R ⊗ K ⊆ Z:

    $$\displaystyle \begin{aligned} R\otimes K\subseteq Z\;\;\textit{iff}\;\;K\subseteq R\longrightarrow Z.\end{aligned} $$
    (5)

    If R ⊆ U × U′ and W ⊆ U″ × U′ the relation

    $$\displaystyle \begin{aligned} W\longleftarrow R=\{\langle u'',u\rangle:\forall u'(\langle u,u'\rangle\in R\Rightarrow \langle u'',u'\rangle\in W)\}\end{aligned} $$
    (6)

is called the left residual of R and W. W ⟵ R is the largest relation K such that K ⊗ R ⊆ W:

    $$\displaystyle \begin{aligned} K\otimes R\subseteq W\;\;\textit{iff}\;\;K\subseteq W\longleftarrow R.\end{aligned} $$
    (7)


    For any set A ⊆ U and B ⊆ U′, one has: \(R\longrightarrow A^\rightarrow _R=\{\langle u', u^{\prime *}\rangle :\forall u(\langle u,u'\rangle \in R\Rightarrow \langle u,u^{\prime *}\rangle \in A^\rightarrow _R)\}\) and \(R^\smile \longrightarrow B^\rightarrow _{R^\smile }=\{\langle u, u^*\rangle :\forall u'(\langle u',u\rangle \in R^\smile \Rightarrow \langle u',u^*\rangle \in B^\rightarrow _{R^\smile })\}\). Since the elements decorated with ∗ are generic, one can get rid of the cylindrification and rephrase the operations in terms of relations and sets as follows:

    $$\displaystyle \begin{aligned} &R\longrightarrow A=\{u':\forall u(\langle u,u'\rangle\in R\Rightarrow u\in A)\} \end{aligned} $$
    (8)
    $$\displaystyle \begin{aligned} &R^\smile\longrightarrow B=\{u:\forall u'(\langle u,u'\rangle\in R\Rightarrow u'\in B)\}\end{aligned} $$
    (9)

The above operations are fundamental for studying approximations by means of relations.
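Since our universes are finite, the operations of Definition 1 can be prototyped directly, representing a relation as a set of ordered pairs. The following Python sketch (function names are ours, not part of the text) implements converse, composition, the Peirce product and the right residual, and checks the involution laws of point 4:

```python
def converse(R):
    # R⌣ := {(y, x) : (x, y) ∈ R}
    return {(y, x) for (x, y) in R}

def compose(R, Q):
    # R ⊗ Q := {(u, u'') : ∃u' ((u, u') ∈ R and (u', u'') ∈ Q)}
    return {(x, z) for (x, y) in R for (w, z) in Q if y == w}

def peirce(R, A):
    # Left Peirce product R(A) := {u' : ∃u (u ∈ A and (u, u') ∈ R)}
    return {y for (x, y) in R if x in A}

def right_residual(R, Z, U, Up, Upp):
    # R ⟶ Z := {(u', u'') : ∀u ((u, u') ∈ R ⇒ (u, u'') ∈ Z)}
    return {(up, upp) for up in Up for upp in Upp
            if all((u, upp) in Z for u in U if (u, up) in R)}

# A toy relational system 〈U, U', R〉 and a composable Q ⊆ U'×U''
U, Up, Upp = {1, 2, 3}, {'a', 'b'}, {'x', 'y'}
R = {(1, 'a'), (2, 'a'), (2, 'b'), (3, 'b')}
Q = {('a', 'x'), ('b', 'y')}

assert converse(converse(R)) == R                                  # R⌣⌣ = R
assert converse(compose(R, Q)) == compose(converse(Q), converse(R))
print(sorted(peirce(R, {1, 2})))   # R({1, 2}) = {'a', 'b'}
```

The quantifier structure of each definition translates directly into a set comprehension, which is why the sketch stays this short.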

Lemma 2

Given R ⊆ U × U′, Z ⊆ U × U″ and W ⊆ U″ × U′:

$$\displaystyle \begin{aligned} &R\longrightarrow Z=-(R^\smile\otimes -Z); \end{aligned} $$
(10)
$$\displaystyle \begin{aligned} &W\longleftarrow R=-(-W\otimes R^\smile). \end{aligned} $$
(11)

Proof

$$\displaystyle \begin{aligned} -(R^\smile\otimes -Z)&=-\{\langle u',u''\rangle:\exists u(\langle u',u\rangle\in R^\smile\wedge\langle u,u''\rangle\notin Z)\}\\ &=-\{\langle u',u''\rangle:\exists u\neg(\langle u',u\rangle\notin R^\smile\vee\langle u,u''\rangle\in Z)\}\\ &=\{\langle u',u''\rangle:\neg\exists u\neg(\langle u',u\rangle\notin R^\smile\vee\langle u,u''\rangle\in Z)\}\\ &=\{\langle u',u''\rangle:\forall u(\langle u',u\rangle\notin R^\smile\vee\langle u,u''\rangle\in Z)\}\\ &=\{\langle u',u''\rangle:\forall u(\langle u,u'\rangle\notin R\vee\langle u,u''\rangle\in Z)\}\\ &=\{\langle u',u''\rangle:\forall u(\langle u,u'\rangle\in R\Rightarrow\langle u,u''\rangle\in Z)\}\\ &=R\longrightarrow Z \end{aligned} $$

The other proof comes from symmetry. □

The above equations parallel the logical equivalence α ⇒ β ≡ ¬(α ∧ ¬β).
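On small finite universes, equation (10) can be machine-checked exhaustively over all relations R and Z. A sketch (assuming nothing beyond Definition 1; names are ours):

```python
from itertools import combinations, product

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def converse(R):
    return {(y, x) for (x, y) in R}

def compose(R, Q):
    return {(x, z) for (x, y) in R for (w, z) in Q if y == w}

U, Up, Upp = {0, 1}, {'a', 'b'}, {'x', 'y'}
UxUpp = set(product(U, Upp))      # the space of Z, used for complementation
UpxUpp = set(product(Up, Upp))    # the space of R ⟶ Z

def residual(R, Z):
    # R ⟶ Z := {(u', u'') : ∀u ((u, u') ∈ R ⇒ (u, u'') ∈ Z)}
    return {(up, upp) for up in Up for upp in Upp
            if all((u, upp) in Z for u in U if (u, up) in R)}

# Check R ⟶ Z = −(R⌣ ⊗ −Z) for every R ⊆ U×U' and every Z ⊆ U×U''
ok = all(residual(R, Z) == UpxUpp - compose(converse(R), UxUpp - Z)
         for R in subsets(product(U, Up)) for Z in subsets(UxUpp))
print(ok)   # True: (10) holds for all 256 pairs of relations
```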

Definition 3

The structure 〈U, U′, R〉, with R ⊆ U × U′ will be called a relational system. If a is in relation R with b we write 〈a, b〉∈ R. Especially if R is an order relation we also use the notation aRb. If R ⊆ U × U we shall write 〈U, R〉 instead of 〈U, U, R〉.

Example 4

A relation R ⊆ U × U′ will usually be represented by means of a Boolean matrix with rows labelled by the elements of U and columns by the elements of U′. If 〈x, y〉∈ R then the entry of row x, column y is 1; it is 0 otherwise. The operation ⟶ has a higher precedence than the others; thus, for instance, R ⊗ Q ⟶ Z means R ⊗ (Q ⟶ Z).

Let U = {a, b, c, d}, U′ = {α, β, γ}, U″ = {ϰ, λ, μ}, R ⊆ U × U′, Q ⊆ U′× U″. For instance, 〈a, α〉∈ R and 〈α, λ〉∈ Q, thus 〈a, λ〉∈ R ⊗ Q. Analogously, 〈c, γ〉∈ R and 〈γ, μ〉∈ Q, thus 〈c, μ〉∈ R ⊗ Q. On the contrary, there is no intermediate element of U′ linking d and μ. And so on.

Therefore, \(R\otimes R\longrightarrow Z\subsetneq Z\). For instance, 〈a, ι〉∉R ⊗ (R ⟶ Z), while 〈a, ι〉∈ Z. Since 〈a, α〉, 〈a, γ〉∈ R, in order to have 〈a, ι〉∈ R ⊗ (R ⟶ Z) we should have either 〈α, ι〉 or 〈γ, ι〉 in R ⟶ Z. In the former case also 〈b, ι〉∈ R ⊗ (R ⟶ Z), because 〈b, α〉∈ R, too. But 〈b, ι〉∉Z. In the latter case also 〈c, ι〉∈ R ⊗ (R ⟶ Z) because 〈c, γ〉∈ R, but 〈c, ι〉∉Z.

Let A = {a, b, c}. Notice that, for any u′∈ U′, \((A^\rightarrow _R)^\smile (u')=A\). Moreover, one computes R ⟶ A = {β, γ}. Let B = {α, γ}. From the right cylindrification of B one obtains R⌣ ⟶ B = {a, d}.

Lemma 5

Given R ⊆ U × U′, Z ⊆ U × U″, W ⊆ U′× U″, A ⊆ U and B ⊆ U′:

$$\displaystyle \begin{aligned} &{} R\longrightarrow Z=\{\langle u',u''\rangle:R^\smile(u')\subseteq Z^\smile(u'')\} \end{aligned} $$
(12)
$$\displaystyle \begin{aligned} &R\longrightarrow A=\{u':R^\smile(u')\subseteq A\} \end{aligned} $$
(13)
$$\displaystyle \begin{aligned} &{}R^\smile\longrightarrow W=\{\langle u,u''\rangle:R(u)\subseteq W^\smile(u'')\} \end{aligned} $$
(14)
$$\displaystyle \begin{aligned} &{}R^\smile\longrightarrow B=\{u:R(u)\subseteq B\} \end{aligned} $$
(15)
$$\displaystyle \begin{aligned} &{}R\otimes W=\{\langle u,u''\rangle:R(u)\cap W^\smile(u'')\neq\emptyset\} \end{aligned} $$
(16)
$$\displaystyle \begin{aligned} &{}R^\smile(B)=\{u:R(u)\cap B\neq\emptyset\} \end{aligned} $$
(17)
$$\displaystyle \begin{aligned} &{}R^\smile\otimes Z=\{\langle u',u''\rangle:R^\smile(u')\cap Z^\smile(u'')\neq\emptyset\} \end{aligned} $$
(18)
$$\displaystyle \begin{aligned} &{}R(A)=\{u':R^\smile(u')\cap A\neq\emptyset\} \end{aligned} $$
(19)

Proof

(12), (14), (16) and (18) come straightforwardly from the definitions. We just prove a couple of other points.

(14)⇒(15): Let B ⊆ U′ and \(B^\rightarrow _{R^\smile }\) its right cylinder. Then from (14), \(R^\smile \longrightarrow B^\rightarrow _{R^\smile }=\{\langle u,u^*\rangle : R(u)\subseteq (B^\rightarrow _{R^\smile })^\smile (u^*)\}\). But from (3), \((B^\rightarrow _{R^\smile })^\smile (u^*)=B\). Since u∗ is a dummy element, we can dispense with it and with the cylindrification of B, and obtain (15).

(18)⇒(19): Let \(A^\rightarrow _R\) be the right cylinder of a set A ⊆ U. Then from (18), \(R^\smile \otimes A^\rightarrow _R=\{\langle u',u^{\prime *}\rangle : R^\smile (u')\cap (A^\rightarrow _R)^\smile (u^{\prime *})\neq \emptyset \}\). But from (3), \((A^\rightarrow _R)^\smile (u^{\prime *})=A\). Thus \(R^\smile \otimes A^\rightarrow _R=\{\langle u',u^{\prime *}\rangle : R^\smile (u')\cap A\neq \emptyset \}\). Since u′∗ is a dummy element, we can dispense with it and with the cylindrification of A, and obtain (19). □
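The set-level readings (13), (15), (17) and (19) can likewise be checked against the original quantifier definitions. A sketch for a toy relation (names are ours):

```python
from itertools import combinations

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

U, Up = {1, 2, 3}, {'a', 'b'}
R = {(1, 'a'), (2, 'a'), (2, 'b'), (3, 'b')}

def R_of(u):        # the R-neighbourhood R(u)
    return {y for (x, y) in R if x == u}

def Rc_of(up):      # the R⌣-neighbourhood R⌣(u')
    return {x for (x, y) in R if y == up}

for A in subsets(U):
    # (13): R ⟶ A = {u' : R⌣(u') ⊆ A}
    assert {up for up in Up if all(u in A for u in U if (u, up) in R)} \
           == {up for up in Up if Rc_of(up) <= A}
    # (19): R(A) = {u' : R⌣(u') ∩ A ≠ ∅}
    assert {y for (x, y) in R if x in A} == {up for up in Up if Rc_of(up) & A}
for B in subsets(Up):
    # (15): R⌣ ⟶ B = {u : R(u) ⊆ B}
    assert {u for u in U if all(up in B for up in Up if (u, up) in R)} \
           == {u for u in U if R_of(u) <= B}
    # (17): R⌣(B) = {u : R(u) ∩ B ≠ ∅}
    assert {x for (x, y) in R if y in B} == {u for u in U if R_of(u) & B}
print("(13), (15), (17) and (19) confirmed")
```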

3 Lesson 2: The Topological Framework

3.1 Galois Adjunctions and Their Operators

Pre-topologies and topologies are definable from a particular mathematical notion called a Galois adjunction. It is not the usual way to introduce topologies but it is an effective one.

Definition 6

Let O = 〈U, R〉 be an ordered set and L = 〈U, ∨, ∧, 0, 1〉 a bounded lattice such that for any a, b ∈ U, aRb iff a ∧ b = a (equivalently, a ∨ b = b). Let φ : O↦O and θ : L↦L be two operators. Then, given any a, b ∈ U:

  • φ is a projection if it is (a) monotonic: aRb implies 〈φ(a), φ(b)〉∈ R and (b) idempotent: φ(φ(a)) = φ(a).

  • A projection operator φ is a closure if it is increasing: 〈a, φ(a)〉∈ R.

  • A projection operator φ is an interior if it is decreasing: 〈φ(a), a〉∈ R.

  • θ is a modal or possibility operator if it is (a) normal: θ(0) = 0 and (b) additive: θ(a ∨ b) = θ(a) ∨ θ(b).

  • θ is a co-modal or necessity operator if it is (a) co-normal: θ(1) = 1 and (b) multiplicative: θ(a ∧ b) = θ(a) ∧ θ(b).

  • A closure operator θ on a lattice is topological if it is modal.

  • An interior operator θ on a lattice is topological if it is co-modal.

Now we investigate two pairs of operators which are defined by means of a binary relation R. In the definitions of these operators (as well as of many mathematical operators) some logic combinations recur, namely the pairs 〈⇒, ∧〉, 〈∃, ∧〉, 〈∀, ⇒〉, 〈∃, ∀〉 and 〈∀, ∃〉. These combinations enjoy particular mathematical properties which are inherited by the operators they define and which are introduced in the next definition.

Definition 7 (Galois Adjunctions)

Let O and O′ be two pre-ordered sets (possibly lattices) with order ≤, resp. ≤′, and let σ : O↦O′ and ι : O′↦O be two maps such that for all p ∈ O and p′∈ O′

$$\displaystyle \begin{aligned} \iota (p')\leq p\;\; \textit{iff}\;\; p'\leq'\sigma (p) \end{aligned} $$
(20)

then σ is called the upper adjoint of ι and ι is called the lower adjoint of σ. This fact is denoted by \(\mathbf O'\dashv ^{\iota ,\sigma }\mathbf O\) and we say that the pair 〈ι, σ〉 forms a Galois adjunction or an axiality.

Remarks 3.1

The contravariant version of (20), i.e. p ≤ ι(p′) iff p′≤′σ(p), is called a Galois connection and 〈ι, σ〉 a polarity. Galois connections from binary relations were basically introduced in [26] and applied in Formal Concept Analysis (FCA) in [49]. FCA and polarities are not within the scope of this chapter. Galois adjunctions, that is, the covariant form we are dealing with, were introduced in classical Rough Set Theory in [17] under the name “dual Galois connections”. Independently, and in the present general setting, which is derived from Intuitionistic Formal Spaces (see [44] and [45]), they were applied to approximation theory in [37].

Adjoint operators enjoy interesting properties:

Facts 3.1

  1. 1.

    σ preserves all existing infs (i.e. it is multiplicative), thus it is monotonic.

  2. 2.

    ι preserves all existing sups (i.e. it is additive), thus it is monotonic.

  3. 3.

σ(a) ∨ σ(b) ≤ σ(a ∨ b); ι(a′∧ b′) ≤ ι(a′) ∧ ι(b′).

  4. 4.

σι is a closure operator on O′, ισ is an interior operator on O;

  5. 5.

σι(a′∧ b′) ≤ σι(a′) ∧ σι(b′), σι(a′∨ b′) ≥ σι(a′) ∨ σι(b′);

  6. 6.

ισ(a ∨ b) ≥ ισ(a) ∨ ισ(b), ισ(a ∧ b) ≤ ισ(a) ∧ ισ(b);

  7. 7.

    ισι = ι; σισ = σ.

For the proofs of the above Facts, see for instance [37, 38] or [39].
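These facts can be verified exhaustively for a concrete adjunction. Anticipating Sect. 3.2, take ι(B) = R⌣(B) and σ(A) = R ⟶ A for a fixed finite relation R; the sketch below (names are ours) checks (20) together with points 1, 2, 4 and 7:

```python
from itertools import combinations

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

U, Up = {1, 2, 3}, {'a', 'b'}
R = {(1, 'a'), (2, 'a'), (2, 'b'), (3, 'b')}

def iota(B):   # lower adjoint ι(B) = R⌣(B) ⊆ U
    return {x for (x, y) in R if y in B}

def sigma(A):  # upper adjoint σ(A) = R ⟶ A ⊆ U'
    return {y for y in Up if {x for (x, w) in R if w == y} <= A}

As, Bs = subsets(U), subsets(Up)
# (20): ι(B) ⊆ A  iff  B ⊆ σ(A)
assert all((iota(B) <= A) == (B <= sigma(A)) for A in As for B in Bs)
# Points 1-2: σ is multiplicative, ι is additive
assert all(sigma(X & Y) == sigma(X) & sigma(Y) for X in As for Y in As)
assert all(iota(X | Y) == iota(X) | iota(Y) for X in Bs for Y in Bs)
# Point 4: σι is a closure on ℘(U'), ισ an interior on ℘(U)
assert all(B <= sigma(iota(B)) for B in Bs)
assert all(iota(sigma(A)) <= A for A in As)
# Point 7: ισι = ι and σισ = σ
assert all(iota(sigma(iota(B))) == iota(B) for B in Bs)
assert all(sigma(iota(sigma(A))) == sigma(A) for A in As)
print("Facts 3.1 verified on the example")
```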

3.2 Galois Adjunction from Relations

Definition 8

Let R ⊆ U × U′ be a binary relation, A ⊆ U, B ⊆ U′. Then we define the following operators:

  1. 1.

〈i〉 : ℘(U)↦℘(U′); 〈i〉(A) = R(A)

    • the intensional possibility of A. It is also denoted by 〈R⌣〉(A).

  2. 2.

〈e〉 : ℘(U′)↦℘(U); 〈e〉(B) = R⌣(B)

    • the extensional possibility of B. It is also denoted by 〈R〉(B).

  3. 3.

[i] : ℘(U)↦℘(U′); [i](A) = R ⟶ A

    • the intensional necessity of A. It is also denoted by [R⌣](A).

  4. 4.

[e] : ℘(U′)↦℘(U); [e](B) = R⌣ ⟶ B

    • the extensional necessity of B. It is also denoted by [R](B).

  5. 5.

int : ℘(U)↦℘(U); int(A) = 〈e〉[i](A)—the interior of A.

  6. 6.

cl : ℘(U)↦℘(U); cl(A) = [e]〈i〉(A)—the closure of A.

  7. 7.

    \(\mathcal C:\wp (U')\longmapsto \wp (U');\mathcal C(B)=\langle i\rangle [e](B)\)—the co-interior of B.

  8. 8.

    \(\mathcal A:\wp (U')\longmapsto \wp (U');\mathcal A(B)=[i]\langle e\rangle (B)\)—the co-closure of B.

The above notation and terms have the following motivations. In a relational system 〈U, U′, R〉, U can be interpreted as a set of items or objects and U′ as a set of properties, so that 〈u, u′〉∈ R means that object u enjoys property u′. According to this interpretation, if u′∈〈i〉(A), then any element of A has the possibility to enjoy u′. On the other hand, if u′∈ [i](A) then in order to enjoy u′ it is necessary to be a member of A, although this is not a sufficient condition, since there can be elements of A which do not enjoy u′ (to put it another way, at most all the elements of A enjoy u′). A symmetric interpretation holds for 〈e〉(B) and [e](B). The terms “necessity” and “possibility” are also associated with some models for modal logic. A Kripke model is a relational system 〈U, R〉 equipped with a forcing relation \(\Vdash \) between members of U and formulas, such that:

$$\displaystyle \begin{aligned} &u\Vdash\square\alpha\;\; iff\;\;\forall u'(\langle u,u'\rangle\in R\Rightarrow u'\Vdash\alpha)\\ &u\Vdash\Diamond\alpha\;\; iff\;\;\exists u'(\langle u,u'\rangle\in R\wedge u'\Vdash\alpha) \end{aligned} $$

where \(\square \) is the necessity modality and ♢ the possibility. If \([\![ \alpha ]\!] =\{x:x\Vdash \alpha \}\) is the domain of validity of α, then \(u\Vdash \square \alpha \) iff \(u\in [e]([\![ \alpha ]\!] )\), while \(u\Vdash \Diamond \alpha \) iff \(u\in \langle e\rangle ([\![ \alpha ]\!] )\). Therefore, from (15) one has \([e]([\![ \alpha ]\!] )=[\![ \square \alpha ]\!] \) and from (17), \(\langle e\rangle ([\![ \alpha ]\!] )=[\![ \Diamond \alpha ]\!] \). In turn, [i] and 〈i〉 model the modality operators with respect to the converse relation R⌣. For this reason we equate the symbols in the following pairs: ([e], [R]), (〈e〉, 〈R〉), ([i], [R⌣]) and (〈i〉, 〈R⌣〉).
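On a finite relational system the eight operators of Definition 8 take only a few lines to implement. A sketch (function names are ours):

```python
U, Up = {1, 2, 3}, {'a', 'b'}
R = {(1, 'a'), (2, 'a'), (2, 'b'), (3, 'b')}

def R_of(u):  return {y for (x, y) in R if x == u}    # R(u)
def Rc_of(y): return {x for (x, w) in R if w == y}    # R⌣(u')

def poss_i(A): return {y for y in Up if Rc_of(y) & A}   # ⟨i⟩(A) = R(A)
def poss_e(B): return {u for u in U if R_of(u) & B}     # ⟨e⟩(B) = R⌣(B)
def nec_i(A):  return {y for y in Up if Rc_of(y) <= A}  # [i](A) = R ⟶ A
def nec_e(B):  return {u for u in U if R_of(u) <= B}    # [e](B) = R⌣ ⟶ B

def interior(A):   return poss_e(nec_i(A))   # int = ⟨e⟩[i]
def closure(A):    return nec_e(poss_i(A))   # cl  = [e]⟨i⟩
def cointerior(B): return poss_i(nec_e(B))   # 𝒞  = ⟨i⟩[e]
def coclosure(B):  return nec_i(poss_e(B))   # 𝒜  = [i]⟨e⟩

# Modal analogy: [e] plays □ and ⟨e⟩ plays ◇ when U' = U.
print(interior({1, 2}), closure({1}))
```

On this example, interior({1, 2}) returns {1, 2} and closure({1}) returns {1}, matching the neighbourhood formulas given below.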

Notation

We call the operators 〈•〉 and [•] constructors. If X = {x}, for any operator op of the above definition, we shall usually write op(x) instead of op({x}). If needed we write op R to specify the relation from which an operator op is defined.

A relation R ⊆ U × U′ is called serial if R(u) ≠ ∅, for any u ∈ U.

From now on, if not otherwise stated given a relation R ⊆ U × U′, A will denote a subset of the domain U and B a subset of the codomain U′.

Through the notion of a Peirce product one arrives at the notion of a granule:

Definition 9

Let R ⊆ U × U be a binary relation, u ∈ U, A ⊆ U. The set R(u) (i.e. 〈i〉({u})) is called the R-granule of u and \(R(A)=\bigcup \{R(a): a\in A\}\) is called the R-granule of A.

Remarks 3.2

The above definition of R(A) is consistent with (1) of Definition 1 because the operation R(−) is additive. This can be easily proved from the very definition of R-neighbourhoods based on the quantifier ∃. However, we shall see below that there is another more general proof.

A set of granules of U is called a granulation. More generally, granules are subsets of U, so that they are not necessarily generated by some binary relation. For instance, any covering is a granulation although only particular coverings are induced by binary relations (more precisely, particular tolerance relations—see the Introduction). Anyway, in what follows we shall deal with preorders and equivalence relations. These kinds of binary relations induce particular features in the above operators, which will be essential in the algebraic analysis of rough set systems.

We list some straightforward consequences of the above definitions and Lemma 5:

$$\displaystyle \begin{aligned} &{} [i](A)=\{u':R^\smile(u')\subseteq A\} \end{aligned} $$
(21)
$$\displaystyle \begin{aligned} &{} [e](B)=\{u:R(u)\subseteq B\} \end{aligned} $$
(22)
$$\displaystyle \begin{aligned} & \langle i\rangle(A)=R(A)=\{u':R^\smile(u')\cap A\neq\emptyset\} \end{aligned} $$
(23)
$$\displaystyle \begin{aligned} &{} \langle e\rangle(B)=R^\smile(B)=\{u:R(u)\cap B\neq\emptyset\} \end{aligned} $$
(24)
$$\displaystyle \begin{aligned} &{} int(A)=\bigcup\{R^\smile(u'):u'\in[i](A)\}=\bigcup\{R^\smile(u'):R^\smile(u')\subseteq A\} \end{aligned} $$
(25)
$$\displaystyle \begin{aligned} &{} cl(A)=\{u:R(u)\subseteq R(A)\}=\{u:\forall u'(u\in R^\smile(u')\Rightarrow R^\smile(u')\cap A\neq\emptyset)\} \end{aligned} $$
(26)
$$\displaystyle \begin{aligned} & \mathcal C(B)=\bigcup\{R(u):u\in[e](B)\}=\bigcup\{R(u):R(u)\subseteq B\} \end{aligned} $$
(27)
$$\displaystyle \begin{aligned} & \mathcal A(B)=\{u':R^\smile(u')\subseteq R^\smile(B)\}=\{u':\forall u(u'\in R(u)\Rightarrow R(u)\cap B\neq\emptyset)\} \end{aligned} $$
(28)
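Equations (25) and (26) can be cross-checked against the compositional definitions int = 〈e〉[i] and cl = [e]〈i〉. A sketch (names are ours):

```python
from itertools import combinations

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

U, Up = {1, 2, 3, 4}, {'a', 'b', 'c'}
R = {(1, 'a'), (2, 'a'), (2, 'b'), (3, 'b'), (4, 'c')}

def R_of(u):  return {y for (x, y) in R if x == u}
def Rc_of(y): return {x for (x, w) in R if w == y}

def interior(A):  # 〈e〉[i](A)
    nec_i = {y for y in Up if Rc_of(y) <= A}
    return {u for u in U if R_of(u) & nec_i}

def closure(A):   # [e]〈i〉(A)
    poss_i = {y for y in Up if Rc_of(y) & A}
    return {u for u in U if R_of(u) <= poss_i}

for A in subsets(U):
    # (25): int(A) is the union of the R⌣-granules included in A
    assert interior(A) == set().union(*(Rc_of(y) for y in Up if Rc_of(y) <= A), set())
    # (26): cl(A) = {u : R(u) ⊆ R(A)}
    RA = {y for (x, y) in R if x in A}
    assert closure(A) == {u for u in U if R_of(u) <= RA}
print("(25) and (26) confirmed")
```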

The following duality properties are easily obtained by means of the logical equivalences ¬∃≡∀¬ and ¬(α ∧¬β) ≡ α ⇒ β:

Lemma 10

〈e〉(B) = −[e](−B); 〈i〉(A) = −[i](−A)

Moreover, the operators acting on opposite directions fulfil the following adjointness properties (see [37] and [39]):

Theorem 11

Let P = 〈U, U′, R〉 be a relational system. Then for U′ = 〈℘(U′), ⊆〉 and U = 〈℘(U), ⊆〉:

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \textit{1}.\; \mathbf{U'}\dashv^{\langle e\rangle,[i]}\mathbf{U},\;\;\; \textit{2}.\; \mathbf{U}\dashv^{\langle i\rangle,[e]}\mathbf{U'} \end{array} \end{aligned} $$
(29)

Proof

Let A ⊆ U, B ⊆ U′. Then (1): 〈e〉(B) ⊆ A iff for all y ∈ B, 〈e〉(y) ⊆ A, iff for all y ∈ B if xRy then x ∈ A iff for all y ∈ B, y ∈ [i](A), iff B ⊆ [i](A). (2): By symmetry. □
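Both adjunctions of Theorem 11 can be confirmed by brute force on a small relational system (a sketch; names are ours):

```python
from itertools import combinations

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

U, Up = {1, 2, 3}, {'a', 'b'}
R = {(1, 'a'), (2, 'a'), (2, 'b'), (3, 'b')}

def poss_e(B): return {u for u in U if {y for (x, y) in R if x == u} & B}
def poss_i(A): return {y for y in Up if {x for (x, w) in R if w == y} & A}
def nec_i(A):  return {y for y in Up if {x for (x, w) in R if w == y} <= A}
def nec_e(B):  return {u for u in U if {y for (x, y) in R if x == u} <= B}

# 1. 〈e〉(B) ⊆ A iff B ⊆ [i](A);  2. 〈i〉(A) ⊆ B iff A ⊆ [e](B)
assert all((poss_e(B) <= A) == (B <= nec_i(A))
           for A in subsets(U) for B in subsets(Up))
assert all((poss_i(A) <= B) == (A <= nec_e(B))
           for A in subsets(U) for B in subsets(Up))
print("both adjunctions of Theorem 11 hold")
```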

Remarks 3.3

One can verify that all the above operators are isotonic. Moreover, ∃ and ∀ are, from the position of the sub-formulas “y ∈ B” and “x ∈ A” in their definitions, respectively lower and upper adjoints to the pre-image f −1 : ℘(Y )↦℘(X) of a function f : X↦Y . That is, for all A ⊆ X, B ⊆ Y one has ∃f(A) ⊆ B iff A ⊆ f −1(B), and B ⊆∀f(A) iff f −1(B) ⊆ A, where ∃f(A) = {b ∈ Y : ∃a(f(a) = b ∧ a ∈ A)} and ∀f(A) = {b ∈ Y : ∀a(f(a) = b ⇒ a ∈ A)}. Finally, the operators 〈•〉 have the logical structure ∃∧, while the operators [•] have the structure ∀⇒, and we shall see that ∧ is lower adjoint to ⇒. Therefore, since “e” (i.e. R-based) and “i” (i.e. R⌣-based) operators act in opposite directions, the preceding result comes as no surprise.

Remarks 3.4

From (5) it follows that ⊗ is lower adjoint to ⟶ with respect to the ordered set \(\langle \mathcal R(U,U'),\subseteq \rangle \), where \(\mathcal R(U,U')=\{R:R\subseteq U\times U'\}\). Therefore, ⊗ is additive and, from point 6 of Definition 1, R(−) is additive, too. From this observation and Definition 8 one obtains another proof of Theorem 11.

Corollary 12

Let 〈U, U′, R〉 be a relational system. Then for any X, Y belonging to the appropriate domain:

  1. 1.

[i](U) = U′; [e](U′) = U; [•](∅) = ∅ if the corresponding relation is serial.

  2. 2.

〈•〉(∅) = ∅; 〈e〉(U′) = U if R is serial; 〈i〉(U) = U′ if R⌣ is serial.

  3. 3.

〈•〉(X ∪ Y ) = 〈•〉(X) ∪〈•〉(Y ).

  4. 4.

[•](X ∩ Y ) = [•](X) ∩ [•](Y ).

  5. 5.

〈•〉(X ∩ Y ) ⊆〈•〉(X) ∩〈•〉(Y ).

  6. 6.

[•](X ∪ Y ) ⊇ [•](X) ∪ [•](Y ).

  7. 7.

int(X) ⊆ X ⊆ cl(X).

  8. 8.

    \(\mathcal C(Y)\subseteq Y\subseteq \mathcal A(Y)\).

Proof

(1) and (2) are trivial. (3) Because 〈•〉 constructors are lower adjoints. (4) Because [•] constructors are upper adjoints. (5) and (6) can be proved in several ways, each worth mentioning: (a) straightforwardly from point 3 of Facts 3.1; (b) using the distributive properties of quantifiers. For instance one has ∀xA(x) ∨∀xB(x) ⇒∀x(A(x) ∨ B(x)), but not the opposite. Incidentally, this proves that ∀ cannot have an upper adjoint, otherwise it would be additive. (c) A ⊆ X or A ⊆ Y implies A ⊆ X ∪ Y but not the other way around. Also (7) can be proved in several ways: (a) straightforwardly from point 4 of Facts 3.1; (b) from (25) and (26) one trivially obtains int(X) ⊆ X and, respectively, X ⊆ cl(X); (c) by adjointness 〈e〉([i](X)) ⊆ X iff [i](X) ⊆ [i](X) and X ⊆ [e](〈i〉(X)) iff 〈i〉(X) ⊆〈i〉(X); but the rightmost inequalities are tautologies. (8) is obtained by symmetry. □
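The corollary, too, lends itself to exhaustive checking on a small example in which both R and R⌣ are serial (a sketch; names are ours):

```python
from itertools import combinations

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

U, Up = {1, 2, 3}, {'a', 'b'}
R = {(1, 'a'), (2, 'a'), (2, 'b'), (3, 'b')}   # both R and R⌣ are serial

def R_of(u):  return {y for (x, y) in R if x == u}
def Rc_of(y): return {x for (x, w) in R if w == y}
def poss_e(B): return {u for u in U if R_of(u) & B}
def nec_e(B):  return {u for u in U if R_of(u) <= B}
def interior(A): return poss_e({y for y in Up if Rc_of(y) <= A})
def closure(A):  return nec_e({y for y in Up if Rc_of(y) & A})

As, Bs = subsets(U), subsets(Up)
assert nec_e(set(Up)) == U and nec_e(set()) == set()        # points 1-2
assert poss_e(set()) == set() and poss_e(set(Up)) == U      # (seriality)
assert all(poss_e(X | Y) == poss_e(X) | poss_e(Y) for X in Bs for Y in Bs)  # 3
assert all(nec_e(X & Y) == nec_e(X) & nec_e(Y) for X in Bs for Y in Bs)     # 4
assert all(poss_e(X & Y) <= poss_e(X) & poss_e(Y) for X in Bs for Y in Bs)  # 5
assert all(nec_e(X) | nec_e(Y) <= nec_e(X | Y) for X in Bs for Y in Bs)     # 6
assert all(interior(A) <= A <= closure(A) for A in As)                      # 7
print("Corollary 12 verified on the example")
```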

Thus, if U = U′ and R and R⌣ are serial, [•] and 〈•〉 are co-modal, respectively modal, operators on 〈℘(U), ⊆〉, but in general 〈•〉 is not increasing and [•] is not decreasing. Therefore they are not interiors, respectively closures.

Vice versa, \(\mathcal A\) and cl are closure operators, while \(\mathcal C\) and int are interior operators, on 〈℘(U′), ⊆〉, respectively 〈℘(U), ⊆〉. However, they are not modal, respectively co-modal. Indeed, like topological interior operators, int and \(\mathcal C\) are not additive, because the internal constructors [•] are not; but they are not multiplicative either, because the external constructors 〈•〉 are not. Symmetrically, cl and \(\mathcal A\) are neither additive nor multiplicative. We call them pretopological.

However, it is worth noticing that u ∈ [e](B) iff R(u) ⊆ B, that is, if there exists an R-neighbourhood of u included in B, so that [e](B) is similar to the topological definition of an open set. In turn, u ∈〈e〉(B) iff R(u) ∩ B ≠ ∅, that is, if all the R-neighbourhoods of u have non void intersection with B, since R(u) is the least R-neighbourhood of u. Therefore, we are close to the definition of topological operators. We achieve the goal if the properties of [•] and 〈•〉 join those of \(\mathcal C\) and int, respectively \(\mathcal A\) and cl.

We sum up the previous results in the following table:

Modal constructors | Pre-topological operators

[e](B) = {u : R(u) ⊆ B} | \(\mathcal {C}(B)=\bigcup \{R(u):R(u)\subseteq B\}\)

[i](A) = {u′ : R⌣(u′) ⊆ A} | \(int(A)=\bigcup \{R^\smile (u'):R^\smile (u')\subseteq A\}\)

〈e〉(B) = {u : R(u) ∩ B ≠ ∅} | \(\mathcal {A}(B)=\{u':R^\smile (u')\subseteq R^\smile (B)\}\)

〈i〉(A) = {u′ : R⌣(u′) ∩ A ≠ ∅} | cl(A) = {u : R(u) ⊆ R(A)}

Example 13

U = {a, b, c, d}, U′ = {α, β, γ}

Given R ⊆ U × U′, for any operator \(op\in \{[e],[i],\langle e\rangle ,\langle i\rangle ,cl,int,\mathcal C,\mathcal A\}\) we set S op(D) = {op(X) : X ∈ dom(op)}, where D is U or U′ according to the operator. Then we can define the following lattices:

Definition 14

Let 〈U, U′, R〉 be a relational system. Then:

  1. 1.

L 〈i〉(U) = 〈S 〈i〉(U), ∧, ∪, ∅, U′〉, where \(\bigwedge _{i\in I}X_i=\mathcal C(\bigcap _{i\in I}X_i)\)

  2. 2.

    L [i](U) = 〈S [i](U), ∩, ∨, ∅, U′〉, where \(\bigvee _{i\in I}X_i=\mathcal A (\bigcup _{i\in I}X_i)\)

  3. 3.

L 〈e〉(U′) = 〈S 〈e〉(U′), ∧, ∪, ∅, U〉, where \(\bigwedge _{i\in I}X_i=int(\bigcap _{i\in I}X_i)\)

  4. 4.

L [e](U′) = 〈S [e](U′), ∩, ∨, ∅, U〉, where \(\bigvee _{i\in I}X_i=cl(\bigcup _{i\in I}X_i)\)

  5. 5.

L int(U) = 〈S int(U), ∧, ∪, ∅, U〉, where \(\bigwedge _{i\in I}X_i=int(\bigcap _{i\in I}X_i)\)

  6. 6.

L cl(U) = 〈S cl(U), ∩, ∨, ∅, U〉, where \(\bigvee _{i\in I}X_i=cl(\bigcup _{i\in I}X_i)\)

  7. 7.

    \({\mathbf {L}}_{\mathcal A}(U')=\langle {\mathbf {S}}_{\mathcal A}(U'), \cap ,\vee ,\emptyset ,U'\rangle \), where \(\bigvee _{i\in I}X_i=\mathcal A(\bigcup _{i\in I}X_i)\)

  8. 8.

    \({\mathbf {L}}_{\mathcal C}(U')=\langle {\mathbf {S}}_{\mathcal C}(U'), \wedge ,\cup ,\emptyset ,U'\rangle \), where \(\bigwedge _{i\in I}X_i=\mathcal C(\bigcap _{i\in I}X_i)\)

Proposition 15

The structures L op(D) of Definition 14 are complete lattices.

Proof

The proof for L int(U), L cl(U), \({\mathbf {L}}_{\mathcal A}(U')\) and \({\mathbf {L}}_{\mathcal C}(U')\) can be found in section 1.4 of [39]. As for L 〈i〉(U) we have to prove that given 〈i〉(X) and 〈i〉(Y ), \(\mathcal C(\langle i\rangle (X)\cap \langle i\rangle (Y))=inf\{\langle i\rangle (X),\langle i\rangle (Y)\}\). That is: (a) \(\mathcal C(\langle i\rangle (X)\cap \langle i\rangle (Y))\subseteq \langle i\rangle (X)\) and \(\mathcal C(\langle i\rangle (X)\cap \langle i\rangle (Y))\subseteq \langle i\rangle (Y)\), and (b) if 〈i〉(Z) ⊆〈i〉(X) and 〈i〉(Z) ⊆〈i〉(Y ), then \(\langle i\rangle (Z)\subseteq \mathcal C(\langle i\rangle (X)\cap \langle i\rangle (Y))\). But (a) is obvious because \(\mathcal C(\langle i\rangle (X)\cap \langle i\rangle (Y))\subseteq \langle i\rangle (X)\cap \langle i\rangle (Y)\) and 〈i〉(X) ∩〈i〉(Y ) is included both in 〈i〉(X) and in 〈i〉(Y ). As to (b), \(\mathcal C(\langle i\rangle (X)\cap \langle i\rangle (Y))=\langle i\rangle [e](\langle i\rangle (X)\cap \langle i\rangle (Y))=\langle i\rangle ([e]\langle i\rangle (X)\cap [e]\langle i\rangle (Y))\). Suppose 〈i〉(Z) ⊆〈i〉(X) and 〈i〉(Z) ⊆〈i〉(Y ). Then by adjunction, Z ⊆ [e]〈i〉(X) and Z ⊆ [e]〈i〉(Y ), so that Z ⊆ [e]〈i〉(X) ∩ [e]〈i〉(Y ). Therefore, by isotonicity 〈i〉(Z) ⊆〈i〉([e]〈i〉(X) ∩ [e]〈i〉(Y )). The proof for L [i] comes from duality and by symmetry we obtain the proofs for L [e] and L 〈e〉. □
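The infimum computation in L 〈i〉(U) can be tested directly: the sketch below (names are ours) checks that \(\mathcal C(X\cap Y)\) lies in S 〈i〉(U), is a lower bound of X and Y, and dominates every other lower bound:

```python
from itertools import combinations

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

U, Up = {1, 2, 3}, {'a', 'b', 'c'}
R = {(1, 'a'), (2, 'a'), (2, 'b'), (3, 'b'), (3, 'c')}

def poss_i(A): return frozenset(y for y in Up if {x for (x, w) in R if w == y} & A)
def nec_e(B):  return {u for u in U if {y for (x, y) in R if x == u} <= B}
def C(B):      return poss_i(nec_e(B))        # 𝒞 = ⟨i⟩[e]

S_i = {poss_i(A) for A in subsets(U)}         # S⟨i⟩(U), the ⟨i⟩-images
for X in S_i:
    for Y in S_i:
        m = C(X & Y)
        assert m in S_i and m <= X and m <= Y                 # lower bound in S⟨i⟩
        assert all(Z <= m for Z in S_i if Z <= X and Z <= Y)  # and the greatest one
print("𝒞(X ∩ Y) is the infimum in S⟨i⟩(U)")
```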

From the definitions above it follows that the lattice order of L op(D) is inherited from 〈S op(D), ⊆〉.

Lemma 16

Let P = 〈U, U′, R〉 be a relational system. Then for all A ⊆ U, B ⊆ U′: A ∈ S int(U) iff A = 〈e〉(B′), A ∈ S cl(U) iff A = [e](B′), for some B′ ⊆ U′; \(B\in {\mathbf {S}}_{\mathcal {A}}(U')\) iff B = [i](A′), \(B\in {\mathbf {S}}_{\mathcal {C}}(U')\) iff B = 〈i〉(A′), for some A′ ⊆ U.

Proof

If A = 〈e〉(B′) then A = 〈e〉[i]〈e〉(B′), from point 7 of Facts 3.1. Therefore, by definition of int, A = int(〈e〉(B′)) = int(A). Vice-versa, if A = int(A), then A = 〈e〉[i](A). Hence, A = 〈e〉(B′) for B′ = [i](A). The other cases are proved in the same way, by exploiting the appropriate equations of point 7 of Facts 3.1. □

Corollary 17 (See [39])

Let P = 〈U, U′, R〉 be a relational system. Then,

  1. 1.

    〈e〉 is an isomorphism between \({\mathbf {L}}_{\mathcal {A}}(U')\) and L int(U);

  2. 2.

    \(\left [i\right ] \) is an isomorphism between L int(U) and \({\mathbf {L}}_{\mathcal {A}}(U')\) ;

  3. 3.

    \(\left [e\right ] \) is an isomorphism between \({\mathbf {L}}_{\mathcal {C}}(U')\) and L cl(U);

  4. 4.

    〈i〉 is an isomorphism between L cl(U) and \({\mathbf {L}}_{\mathcal {C}}(U')\).

  5. 5.

    the set-theoretic complementation is an anti-isomorphism between L cl(U) and L int(U) and between \({\mathbf {L}}_{\mathcal {C}}(U')\) and \({\mathbf {L}}_{\mathcal {A}}(U')\).

Proof

Let us notice that the proof for an operator requires the proof for its adjoint operator. Then, let us prove points (1) and (2) together. First, let us prove bijection for 〈e〉 and [i]. From Lemma 16 the codomain of 〈e〉 is S int(U) and the codomain of [i] is \({\mathbf {S}}_{\mathcal {A}}(U')\). Moreover, for all A ∈S int(U), A = 〈e〉[i](A) and for all \(B\in {\mathbf {S}}_{\mathcal {A}}(U'), B=[i]\langle e\rangle (B)\). From the adjunction properties we have:

(i) 〈e〉 is surjective onto S int(U) and (ii) [i] is injective from S int(U).

(iii) 〈e〉 is injective from \({\mathbf {S}}_{\mathcal {A}}(U')\) and (iv) [i] is surjective onto \({\mathbf {S}}_{\mathcal {A}}(U')\).

Moreover, if [i] is restricted to S int(U), then its codomain is the set H = {B : B = [i](A) ∧ A ∈S int(U)}. Clearly, \(H\subseteq {\mathbf {S}}_{\mathcal {A}}(U')\). In turn, if 〈e〉 is restricted to \({\mathbf {S}}_{\mathcal {A}}(U')\), then its codomain is the set \(K=\{A:A=\langle e\rangle (B)\wedge B\in {\mathbf {S}}_{\mathcal {A}}(U')\}\). Clearly K ⊆S int(U). Therefore, (i) and (iii) give that 〈e〉 is bijective if restricted to \({\mathbf {S}}_{\mathcal {A}}(U')\), while (ii) and (iv) give that [i] is a bijection whenever restricted to S int(U).

It remains to show that 〈e〉 and [i] preserve joins and meets. For 〈e〉 we proceed as follows: (v) \(\langle e\rangle (\bigvee \limits _{i\in I}(\mathcal {A}(Y_i))):= \langle e\rangle (\mathcal {A}(\bigcup \limits _{i\in I}(\mathcal {A}(Y_i))))\). But \(\langle e\rangle \mathcal {A}=\langle e\rangle \), from point 7 of Facts 3.1. Moreover, 〈e〉 distributes over unions. Hence the right side of (v) equals \(\bigcup \limits _{i\in I}\langle e\rangle (\mathcal {A}(Y_i))\). But in view of Proposition 15 the union of extensional open subsets is open, and from Lemma 16 \(\langle e\rangle (\mathcal {A}(Y_i))\) indeed belongs to S int(U), so that the right side of (v) turns into \(int(\bigcup \limits _{i\in I}\langle e\rangle (\mathcal {A}(Y_i)))=\bigcup \limits _{i\in I}\langle e\rangle (\mathcal A(Y_i))\). (vi) \(\langle e\rangle (\bigcap \limits _{i\in I}\mathcal {A}(Y_i))=\langle e\rangle (\bigcap \limits _{i\in I}[i]\langle e\rangle (Y_i))\). Since [i] distributes over intersections, the right side of (vi) turns into \(\langle e\rangle [i](\bigcap \limits _{i\in I}\langle e\rangle (Y_i))=int(\bigcap \limits _{i\in I}\langle e\rangle (Y_i))\). But \(\langle e\rangle =\langle e\rangle \mathcal {A}\), so that the last term is exactly \(\bigwedge \limits _{i\in I}\langle e\rangle (\mathcal {A}(Y_i))\). Since [i] is the inverse of 〈e〉, qua isomorphism, we have that [i] preserves meets and joins, too.

As to (3) and (4) the results come by symmetry. Finally, (5) is trivial. □

Corollary 18

For any binary relation R, L [e](U′) = L cl(U), L 〈e〉(U′) = L int(U), \({\mathbf {L}}_{\langle i\rangle }(U)={\mathbf {L}}_{\mathcal C}(U')\) , \({\mathbf {L}}_{[i]}(U)={\mathbf {L}}_{\mathcal A}(U')\).

Example 19

Example 13 continued.

Thus, so far we have seen how binary relations induce modal and pretopological operators. However, if U = U′ and R is a preorder, the operators and constructors gain topological properties. To prove that, we first show that if R is a preorder then int and [i] coincide. At this point, since int is an interior operator and [i] is comodal, we immediately obtain that int (aka [i]) is topological (see Definition 6). By duality the same can be proved of cl (aka 〈i〉) and by symmetry for \(\mathcal C\) (i.e. [e]) and \(\mathcal A\) (i.e. 〈e〉).

However, we shall complete the proof in a more specific manner, this time with a focus on the opposite direction: it will be proved that if R is a preorder, then \(\mathcal {C}\) (thus [e]) is the interior operator of a particular topology induced by R. Therefore, now we enter the topological framework.

3.3 Topologies and Relations

Definition 20

Let U be a set. Then:

  • Let Ω(U) be a distributive lattice of subsets of U which is bounded by U and ∅ and is closed under arbitrary unions and finite intersections. Then Ω(U) is called a topology on U, its elements open sets and τ(U) = 〈U, Ω(U)〉 a topological space.

  • \(\mathbb I(X)=\bigcup \{A\in \Omega (U): A\subseteq X\}\) is called the interior of X and \(\mathbb I\) the interior operator of τ(U).

  • \(\mathbb C(X)=\{x:\forall A\in \Omega (U) (x\in A\Rightarrow A\cap X\neq \emptyset )\}\) is called the closure of X and \(\mathbb C\) the closure operator of τ(U). We put \(\varGamma (U)=\{\mathbb C(X):X\subseteq U\}\)—the set of closed sets of τ(U).

Facts 3.2

From the above definitions it follows that:

  • \(\Omega (U)=\{X:X\subseteq U\wedge \mathbb I(X)=X\}=\{\mathbb I(X):X\subseteq U\}\).

  • \(\mathbb I(X)=-\mathbb C(-X)\) and \(\mathbb C(X)=-\mathbb I(-X)\), for any X ⊆ U.

  • Γ(U) = {−A : A ∈ Ω(U)} and Ω(U) = {−A : A ∈ Γ(U)}.

  • The inner logical structure of the operator \(\mathbb I\) is (∀⇒). Indeed \(x\in \mathbb I(X)\) iff there exists A ∈ Ω(U) containing x such that ∀y(y ∈ A ⇒ y ∈ X).

  • The inner logical structure of the operator \(\mathbb C\) is (∃∧). Indeed, \(x\in \mathbb C(X)\) iff for all A ∈ Ω(U) containing x, ∃z(z ∈ A ∧ z ∈ X).

Let now R be a binary relation on a set U, which is assumed to be at most countable. From now on we set P := 〈U, R〉. If R is a preorder, then the family of granules B R(U) = {R(u) : u ∈ U} is a basis of a topology on U (that is, any open set of the topology is the union of a family, possibly empty, of elements of B R(U)). This topology is called an Alexandrov topology. In this kind of topology, R(A) is an open set for any A ⊆ U, because the operator R(−) is additive, i.e. R(A) ∪ R(B) = R(A ∪ B). We denote by ΩR(U) the family {R(A) : A ⊆ U} of open subsets of the topology and by \(\mathbb I_R\) and \(\mathbb C_R\) its interior and, respectively, closure operators.

In Alexandrov spaces the intersection of any family of open sets is open and each point has a least open neighbourhood (indeed the basis B R(U) provides these least open neighbourhoods). Moreover, in any topological space, a specialisation preorder ≼ can be defined as follows: x ≼ y iff for every open set O, if x ∈ O then y ∈ O. When R is a preorder, the Alexandrov topology ΩR(U) is such that its specialisation preorder and R coincide.
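These constructions are easy to experiment with on a finite universe. The following Python sketch (the preorder below is an illustrative example, not one from the text) generates the Alexandrov topology from the basis of granules R(u) and checks that its specialisation preorder recovers R:

```python
def granule(R, x):
    """R(x) = {y : <x, y> in R}."""
    return frozenset(y for (a, y) in R if a == x)

def alexandrov_topology(U, R):
    """Close the basis B_R(U) = {R(u) : u in U} under unions.
    For a preorder R this yields the Alexandrov topology Omega_R(U)."""
    basis = {granule(R, u) for u in U}
    opens = {frozenset()}  # the empty union
    changed = True
    while changed:
        changed = False
        for B in basis:
            for O in list(opens):
                if O | B not in opens:
                    opens.add(O | B)
                    changed = True
    return opens

def specialisation(U, opens):
    """x <= y iff every open set containing x also contains y."""
    return {(x, y) for x in U for y in U
            if all(y in O for O in opens if x in O)}

# an illustrative preorder on {1, 2, 3}: reflexive pairs plus 1 -> 2 -> 3
U = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 3), (1, 3)}
opens = alexandrov_topology(U, R)
print(sorted(map(sorted, opens)))     # [[], [1, 2, 3], [2, 3], [3]]
print(specialisation(U, opens) == R)  # True
```

Note that each granule R(u) is indeed the least open neighbourhood of u in the resulting space, which is why the specialisation preorder coincides with R.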

Remarks 3.5

The definition of a specialisation preorder can be rephrased using the interior operator \(\mathbb I\): x ≼ y iff for all \(A\subseteq U, x\in \mathbb I(A)\) implies \(y\in \mathbb I(A)\). Indeed, given a set X it can be proved that a preorder can be defined by means of any monadic operator ⊙ on ℘(X) as follows:

$$\displaystyle \begin{aligned} x\preceq_\odot y\ \text{iff}\ \forall A\subseteq X, x\in\odot(A)\Longrightarrow y\in\odot(A).\end{aligned} $$

The relation ≼ is a preorder: it is reflexive, because substituting x for y yields a tautology, and it is transitive, because implication is transitive and the universal quantifier distributes over implications. Thus we can call ≼ the specialisation preorder induced by ⊙.

If we denote by \(\mathbb I_R\) the interior operator of an Alexandrov topology ΩR induced by a preorder R, we have \(R=\preceq _{\mathbb {I}_R}\) and \(\Omega _R=\Omega _{\preceq _{\mathbb {I}_R}}\). However, there can be Alexandrov topologies ΩR(U) induced by bases B R(U) such that R is not a preorder, so that \(R\neq \preceq _{\mathbb I_R}\) but \(\Omega _R=\Omega _{\preceq _{\mathbb {I}_R}}\), all the same. We shall illustrate this delicate issue in order to avoid some traps. Moreover, to our knowledge this topic has not been treated before.

Lemma 21

Let 〈U, R〉 be a relational space. Then for all u ∈ U, \(u\in [i](R^\smile (u))\).

Proof

Trivially, \(u\in [i](R^\smile (u))\) iff \(R^\smile (u)\subseteq R^\smile (u)\). □

Theorem 22

Let 〈U, R〉 be a relational space. Then for all A ⊆ U, int(A) = [i](A) if and only if R is a preorder.

Proof

  1. A)

    If ∃A ⊆ U such that int(A) ≠ [i](A), then R is not a preorder (either reflexivity or transitivity fails). Proof. The antecedent holds in two cases: (i) ∃x ∈ [i](A), x ∉ int(A); (ii) ∃x ∈ int(A), x ∉ [i](A). In case (i), from (25) one has that ∀y ∈ [i](A), \(x\notin R^\smile (y)\). In particular, \(x\notin R^\smile (x)\), so that reflexivity fails. In case (ii), ∃y ∈ [i](A) such that \(x\in R^\smile (y)\). Therefore, since y ∈ [i](A), from (21) x must belong to A. Moreover, there must exist z ∉ A with 〈z, x〉∈ R, otherwise x ∈ [i](A). Since 〈x, y〉∈ R, if R were transitive then 〈z, y〉∈ R, so that y ∉ [i](A). Contradiction.

  2. B)

    If R is not a preorder, then ∃A ⊆ U such that int(A) ≠ [i](A). Proof. (i) Take \(A=R^\smile (x)\). From Lemma 21, \(x\in [i](R^\smile (x))\). Suppose R is not reflexive, with 〈x, x〉∉R. Thus \(x\notin R^\smile (x)\). Hence there cannot exist a y such that \(x\in R^\smile (y)\) and \(R^\smile (y)\subseteq R^\smile (x)\). So \(x\notin int(R^\smile (x))\). (ii) Suppose transitivity fails, with 〈x, y〉, 〈y, z〉∈ R and 〈x, z〉∉R. From Lemma 21, \(z\in [i](R^\smile (z))\), but \(y\notin [i](R^\smile (z))\), because \(x\in R^\smile (y)\) while \(x\notin R^\smile (z)\), so that \(R^\smile (y)\nsubseteq R^\smile (z)\). On the contrary, \(y\in R^\smile (z)\) and \(R^\smile (z)\subseteq R^\smile (z)\). Therefore, \(y\in int(R^\smile (z))\). We conclude that \(int(R^\smile (z))\neq [i](R^\smile (z))\).

We write op = op′ if for all elements X of their domain op(X) = op′(X).
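Theorem 22 lends itself to brute-force verification on small universes. In the Python sketch below (the two relations are illustrative, not taken from the text), [i](A) = {y : R⌣(y) ⊆ A} and int(A) is the union of the granules R⌣(y) included in A:

```python
from itertools import product

def box_i(U, R, A):
    """[i](A) = {y : R~(y) included in A}, where R~(y) = {x : <x, y> in R}."""
    return {y for y in U if {x for x in U if (x, y) in R} <= A}

def interior(U, R, A):
    """int(A) = <e>[i](A): union of the granules R~(y) included in A."""
    return {x for x in U for y in U
            if (x, y) in R and {z for z in U if (z, y) in R} <= A}

def is_preorder(U, R):
    reflexive = all((x, x) in R for x in U)
    transitive = all((x, z) in R
                     for (x, y1) in R for (y2, z) in R if y1 == y2)
    return reflexive and transitive

def int_equals_box(U, R):
    """Check int(A) = [i](A) for every A, by brute force over the powerset."""
    Us = list(U)
    return all(interior(U, R, A) == box_i(U, R, A)
               for bits in product([0, 1], repeat=len(Us))
               for A in [{u for u, b in zip(Us, bits) if b}])

U = {1, 2, 3}
preorder = {(1, 1), (2, 2), (3, 3), (1, 2)}
broken = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 3)}  # (1, 3) missing: not transitive
print(is_preorder(U, preorder), int_equals_box(U, preorder))  # True True
print(is_preorder(U, broken), int_equals_box(U, broken))      # False False
```

The second relation exhibits case (ii) of part B of the proof: transitivity fails, and int and [i] already disagree on the granule R⌣(3).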

Corollary 23

In a relational space 〈U, R〉, the following are equivalent: (i) R is a preorder, (ii) \(\mathcal {C}=[e]\) , (iii) int = [i], (iv) cl = 〈i〉, (v) \(\mathcal A=\langle e\rangle \).

Proof

(i)⇔ (iii) is Theorem 22, (ii)⇔ (iv) by duality and the other equivalences by symmetry. □

Corollary 24

Given a relational space 〈U, R〉, if R is a preorder, then \(int,[i],\mathcal {C}\) and [e] are topological interior operators; \(cl, \langle i\rangle ,\mathcal A,\langle e\rangle \) are topological closure operators.

The converse of Corollary 24 holds just partially:

Corollary 25

Let 〈U, R〉 be an SRS. If [•] and 〈•〉 are topological interior, respectively closure, operators, then R is a preorder.

The proof follows from Corollary 23. However, the converse of Corollary 24 does not hold for int, cl, \(\mathcal A\) and \(\mathcal C\) as the following example illustrates and Theorem 30 will prove:

Example 26

$$\displaystyle \begin{aligned}U=\{v,a,b,b',c\},\text{ and } B_R(U)=\{\{a\},\{b,b'\}, \{a,c\}, U\}\end{aligned}$$

R is neither reflexive (e.g. 〈v, v〉∉R) nor transitive (e.g. 〈b, c〉∈ R and 〈c, v〉∈ R, but 〈b, v〉∉R). Therefore, it is not a preorder. Indeed, from the lattices below one verifies that the equalities of Corollary 23 do not hold. However, in these lattices inf = ∩ and sup = ∪. Therefore, they are bounded distributive lattices, hence topologies.

Remarks 3.6

Corollary 24 amends point (iv) of Corollary 1 of [35] and point (ii) of Facts 3 of [36], which erroneously state also the converse implication. However, one can state that if {R(A) : A ⊆ U} is a topology, then 〈U, R〉 is a renaming of the elements of a preorder 〈U′, R〉. To see this, we need some results about the duality between topologies ΩR(U) from preorders R and the specialisation preorder \(\preceq _{\mathbb {I}_R}\).

Lemma 27

If R ⊆ X × X is transitive, then ∀x, y ∈ X, 〈x, y〉∈ R implies R(y) ⊆ R(x). If R is reflexive, then R(y) ⊆ R(x) implies 〈x, y〉∈ R.

Proof

Suppose 〈x, y〉∈ R and z ∈ R(y). Then 〈y, z〉∈ R and by transitivity 〈x, z〉∈ R so that z ∈ R(x). Thus, R(y) ⊆ R(x). Vice-versa, if R(y) ⊆ R(x) then for all a, 〈y, a〉∈ R implies 〈x, a〉∈ R. In particular 〈y, y〉∈ R by reflexivity. Hence 〈x, y〉∈ R. □

Theorem 28

Let 〈U, R〉 be a relational system such that R is a preorder. Then the specialisation preorder induced by [i] coincides with \(R^\smile \) and that induced by [e] coincides with R.

Proof

If x ≼[i] y then for all A ⊆ U, x ∈ [i](A) implies y ∈ [i](A). Therefore, \(R^\smile (x)\subseteq A\) implies \(R^\smile (y)\subseteq A\), for all A. In particular, \(R^\smile (x)\subseteq R^\smile (x)\) implies \(R^\smile (y)\subseteq R^\smile (x)\). But the antecedent is true, so the consequent must be true too, so that \(R^\smile (y)\subseteq R^\smile (x)\). Since R is reflexive, so is \(R^\smile \), and from Lemma 27, \(\langle x, y\rangle \in R^\smile \). The opposite implication is proved analogously, by transitivity. The thesis for [e] and R is a trivial consequence. □

Corollary 29

Let \(\mathcal {C}_R\) be the operator induced by a preorder 〈U, R〉. Then \(\mathcal {C}_R\) is the interior operator \(\mathbb I_R\) of the Alexandrov topology induced by R.

Proof

If R is a preorder then from Corollary 23, \({\mathcal C_R}=[e]\). Therefore, from Theorem 28, the specialisation preorder induced by \(\mathcal {C}_R\) coincides with R which, in turn, coincides with the specialisation preorder of the Alexandrov topology induced by R. □

Obviously, if R is symmetric (as with equivalence relations), then \(R=R^\smile \), with all the simplifications that this fact brings to standard Rough Set Theory. Now we prove that \(\mathcal C_R\) of Example 26 is a topological interior operator, that is, multiplicative. The proof is based on the following fact:

Theorem 30

Let L = 〈L, ∧, ∨〉 be a lattice and ⊙ an interior operator on L such that ⊙(a) ∧⊙(b) ≥⊙(a ∧ b) and L ⊙ = {⊙(x) : x ∈ L} is a sublattice of L. Then ⊙ is multiplicative.

Proof

Since L ⊙ is a sublattice of L, for all x, y ∈ L, ⊙(x) ∧⊙(y) = ⊙(z) for some z ∈ L. Since ⊙(x) ≤ x and ⊙(y) ≤ y, ⊙(x) ∧⊙(y) ≤ x ∧ y. Therefore, ⊙(z) ≤ x ∧ y so that from isotonicity and idempotency of ⊙ we obtain ⊙(z) = ⊙⊙(z) ≤⊙(x ∧ y). To prove multiplicativity of ⊙ we then just need ⊙(z) ≥⊙(x ∧ y), which is given by hypothesis. □

\({\mathbf {L}}_{\mathcal C_R}(U)\) is a sublattice of ℘(U); therefore, in \({\mathbf {L}}_{\mathcal C_R}(U)\), inf = ∩ and \(\mathcal C_R\) fulfils the hypotheses of the theorem. So we obtain that for any X, Y ⊆ U, \(\mathcal C_R(X)\cap \mathcal C_R(Y)=\mathcal C_R(X\cap Y)\).Footnote 2

Let R ⊆ U × U be such that \(\mathcal C_R\) is a topological interior operator. Then \({\mathbf {L}}_{\mathcal C_R}\) is a distributive lattice, hence a topology \(\Omega _{{\mathcal C}_R}(U)\). Let \(\preceq _{{\mathcal C}_R}\) be the specialisation preorder induced by \(\mathcal C_R\). It is possible to prove that the interior operator \(\mathbb I_{\mathcal C_{\preceq _R}}\) and \(\mathcal C_R\) coincide and that there is a transformation from R to \(\preceq _{\mathcal C_R}\). Moreover, this transformation is given by the operation →. However, this topic and its mathematical connections are still under investigation.

Example 31

The specialisation preorder \(\preceq _{\mathcal C}\) induced by the lattice \({\mathbf {L}}_{\mathcal C}(U)\) of Example 26 is the one given in Example 34 below.

3.4 Approximation and Topology

In view of the notions introduced so far, the above three questions can be given precise mathematical answers. Let A ⊆ U, R ⊆ U × U a “nearness relation” of any kind, and x ∈ U:

  1. 1.

    x is necessarily in A if all the elements R-near x are in A. Since u is R-near x if u ∈ R(x), we obtain that x is necessarily in A if R(x) ⊆ A. Let us set:

    $$\displaystyle \begin{aligned} \begin{array}{rcl}{} (lR)(A)=\{x:R(x)\subseteq A\} \end{array} \end{aligned} $$
    (30)

    It is easy to see that for any x ∈ U, {R(X) : x ∈ R(X)} is a neighbourhood system, so that if R is a preorder, x ∈ (lR)(A) iff there is an open set of ΩR(U) containing x and included in A.

  2. 2.

    x is possibly in A if there is some element R-near x which is in A. Thus, x is possibly in A if R(x) ∩ A ≠ ∅. Let us set:

    $$\displaystyle \begin{aligned} \begin{array}{rcl}{} (uR)(A)=\{x: R(x)\cap A\neq\emptyset\} \end{array} \end{aligned} $$
    (31)

    If R is a preorder, R(x) is the least open set containing x, so that the previous condition implies that every open set containing x has non-void intersection with A.

  3. 3.

    Finally, x is necessarily outside A if x ∈ − (uR)(A), that is, if R(x) ⊆−A, i.e. x ∈ (lR)(−A).
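For a finite universe, the three answers above are one-line set comprehensions. A Python sketch; the tolerance-style relation below (x R y iff |x − y| ≤ 1) is illustrative, not an example from the text:

```python
def lower(U, R, A):
    """(lR)(A) = {x : R(x) included in A}: elements necessarily in A (30)."""
    return {x for x in U if {y for y in U if (x, y) in R} <= A}

def upper(U, R, A):
    """(uR)(A) = {x : R(x) meets A}: elements possibly in A (31)."""
    return {x for x in U if {y for y in U if (x, y) in R} & A}

def outside(U, R, A):
    """Elements necessarily outside A: -(uR)(A) = (lR)(-A)."""
    return U - upper(U, R, A)

# illustrative tolerance relation: x R y iff |x - y| <= 1
U = {1, 2, 3, 4}
R = {(x, y) for x in U for y in U if abs(x - y) <= 1}
A = {1, 2}
print(lower(U, R, A))    # {1}
print(upper(U, R, A))    # {1, 2, 3}
print(outside(U, R, A))  # {4}
```

Here 2 and 3 are the undecided elements: possibly, but not necessarily, in A.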

Theorem 32

Given a relational space 〈U, R〉,

$$\displaystyle \begin{aligned} (i)\;(lR)(A)=[e]_R(A).\;(ii)\; (uR)(A)=\langle e\rangle_R(A). \end{aligned} $$
(32)

Proof

(i) From (22) and (30). (ii) From (24) and (31). □

Therefore all the previous results about the extensional constructors apply to the approximation operators.

Definition 33

Given a relational space (U, R) and A ⊆ U:

  1. 1.

    (lR)(A) is called the lower approximation of A.

  2. 2.

    (uR)(A) is called the upper approximation of A.

  3. 3.

    〈U, (lR)〉 is called an approximation space and is denoted by AS(U/R).

In view of Lemma 10, 〈U, (lR)〉 is enough to define an approximation space. If R is a preorder we identify AS(U/R) with the Alexandrov topological space 〈U, ΩR(U)〉, and we have the following correspondences:

  • (lR)(A) is the interior \(\mathbb I_R(A)\) of A,

  • (uR)(A) is the closure \(\mathbb C_R(A)\) of A,

  • (bR)(A) = (uR)(A) ∩−(lR)(A) is the boundary \(\mathbb B_R(A)\) of A,

  • (eR)(A) = −(uR)(A) = (lR)(−A) is the exterior of A, denoted by \(\mathbb E_R(A)\).

The usual topological transformations via the complement hold trivially: −(lR)(A) = (uR)(−A), hence −(uR)(A) = (lR)(−A).

The reader should bear in mind that, for instance, (bR)(A) is a notion which applies to any R, while \(\mathbb B_R(A)\) makes sense only if R is a preorder, and so on.

Example 34

U = {v, a, b, b′, c}

Remarks 3.7

It is worth noticing that, provided R is a preorder:

$$\displaystyle \begin{aligned} (lR)(A)&=\{x:R(x)\subseteq{A}\} \end{aligned} $$
(33)
$$\displaystyle \begin{aligned} &=\bigcup\{R(x):R(x)\subseteq A\}=\bigcup\{R(X):R(X)\subseteq A\} \end{aligned} $$
(34)
$$\displaystyle \begin{aligned} &=\bigcup\{O\in\Omega_R(U):O\subseteq A\}. \end{aligned} $$
(35)

Formula (33) can be used to define lower approximations on the basis of any binary relation R. However, this formula does not guarantee a proper lower approximation, that is, one less than or equal to the set to be approximated, and, dually, (uR) does not guarantee a proper upper approximation. For instance, if R is not reflexive and x ∉ R(x), then x ∈ (lR)(R(x)), trivially, so that (lR)(R(x))⊈R(x). Even worse, if R(x) = ∅, then for any set A, x belongs to (lR)(A), according to (33), while it does not belong to (uR)(A). An odd situation: x necessarily belongs to A but not possibly.

Formula (34) serves the same purpose and by definition the resulting lower approximation is proper. But (33) coincides with (34) only if R is at least a preorder. Indeed, if a ∈ (lR)(A) then R(a) ⊆ A. But by reflexivity of R, a ∈ R(a). It follows that \(a\in \bigcup \{R(x): R(x)\subseteq A\}\) and we can conclude \((lR)(A)\subseteq \bigcup \{R(x): R(x)\subseteq A\}\). Conversely, assume \(a\in \bigcup \{R(x): R(x)\subseteq A\}\). Then for some b ∈ U, a ∈ R(b) and R(b) ⊆ A. By transitivity, R(a) ⊆ R(b) ⊆ A, so that a ∈ (lR)(A) and we conclude \(\bigcup \{R(x): R(x)\subseteq A\}\subseteq (lR)(A)\).
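The odd behaviour of formula (33) under a non-reflexive R, and the repair provided by formula (34), can be seen concretely. The relation below is an illustrative toy example (not from the text), chosen so that R(3) = ∅:

```python
def lower_33(U, R, A):
    """Formula (33): {x : R(x) included in A}. Not necessarily inside A."""
    return {x for x in U if {y for y in U if (x, y) in R} <= A}

def lower_34(U, R, A):
    """Formula (34): union of the granules R(x) included in A. Always inside A."""
    granules = [{y for y in U if (x, y) in R} for x in U]
    return set().union(*(g for g in granules if g <= A))

def upper(U, R, A):
    return {x for x in U if {y for y in U if (x, y) in R} & A}

# illustrative non-reflexive relation with R(3) empty
U = {1, 2, 3}
R = {(1, 1), (2, 1)}
A = {2}
print(lower_33(U, R, A))  # {3}: 3 is "necessarily" in A according to (33) ...
print(upper(U, R, A))     # set(): ... yet 3 is not "possibly" in A
print(lower_34(U, R, A))  # set(): (34) stays below the approximated set
```

Since R(3) = ∅ ⊆ A vacuously, formula (33) puts 3 in the lower approximation of every set, while 3 never enters any upper approximation: exactly the anomaly described above.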

If R is an equivalence relation then 〈U, ΩR(U)〉 is a Pawlak approximation space, R(x) is an equivalence class, B R(U) is a partition and any element of ΩR(U) is the union of equivalence classes so that its complement is a union of equivalence classes, too. As a consequence, 〈U, ΩR(U)〉 is a 0-dimensional topological space, i.e. any open set is closed and vice-versa: they are clopen. In this case the upper approximation can be defined in this way: \((uR)(A)=\bigcap \{O\in \Omega _R(U): A\subseteq O\}\).
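In the Pawlak case the closed characterisation of the upper approximation, and the 0-dimensionality of the space, can be checked directly. A sketch on a hypothetical two-block partition (not an example from the text):

```python
from itertools import combinations

U = {1, 2, 3}
blocks = [frozenset({1, 2}), frozenset({3})]        # a partition of U
R = {(x, y) for B in blocks for x in B for y in B}  # its equivalence relation

# opens of the Pawlak space: all unions of equivalence classes
opens = [frozenset().union(*combo)
         for k in range(len(blocks) + 1)
         for combo in combinations(blocks, k)]

def upper(A):
    """(uR)(A) = {x : R(x) meets A}."""
    return {x for x in U if {y for y in U if (x, y) in R} & A}

A = {1, 3}
covers = [set(O) for O in opens if A <= O]
print(set.intersection(*covers) == upper(A))          # True
# 0-dimensionality: the complement of every open set is open (clopen sets)
print(all(frozenset(U) - O in opens for O in opens))  # True
```

The first check is the identity \((uR)(A)=\bigcap \{O\in \Omega _R(U): A\subseteq O\}\); the second is the clopen property stated above.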

If R is a preorder, then by setting \([\![ \square \alpha ]\!] =(lR)([\![ \alpha ]\!] )\) and \([\![ \Diamond \alpha ]\!] =(uR)([\![ \alpha ]\!] )\) the approximation space 〈U, (lR), (uR)〉 is a model of the modal logic S4 (or, more precisely, if U is finite, S4.1). If R is an equivalence relation, the modelled logic is S5.

Geometrically we can depict this fact by embedding ΩR(U) into the powerset ℘(U) which, with the intersection, union and complementation operators, provides the ambient Boolean algebra. Notice that one can generalise this approach by defining a modal space as a pair 〈L, L′〉 such that L′ is embeddable in L and setting for any a ∈L, \(\square (a)=\bigsqcup \{x: x\in \mathbf {L'}\;\&\; a\wedge x=x\}\) and ♢(a) = ⊓{x : x ∈L′ & a ∨ x = a}, where ⊓, ⊔ give the lattice order of L′, while ∧, ∨ give the order of L, provided the two orders are linked by some coherence property (see [12, 14]).

4 Lesson 3: The Algebraic Framework

If (U, R) is a preorder then the lattice 〈 ΩR(U), ∩, ∪, ∅, U〉 can be made into a Heyting algebra. If R is an equivalence relation, it is a Boolean algebra. Therefore, we have to explore these notions, from a general point of view.

4.1 Heyting Algebras

Definition 35

  • A structure H = 〈X, ∧, ∨, ¬, ⇒, 1, 0〉 is a Heyting algebra if 〈X, ∧, ∨, 1, 0〉 is a bounded lattice, ¬a = a⇒0, and the following holds, for any x, a, b ∈ X:

    $$\displaystyle \begin{aligned} x\wedge a\leq b {\;\;\textit iff\;\;} x\leq a\Longrightarrow b \end{aligned} $$
    (36)

    The operation x⇒y is called the relative pseudo-complementation of x with respect to y and ¬x is called the pseudo-complementation of x.

  • A Heyting algebra such that for any element x, ¬¬x = x (or equivalently x ∨¬x = 1), is called a Boolean algebra.

The relative pseudo-complement x⇒y is the largest element of H (more precisely, of the carrier X) whose meet with the antecedent x is less than or equal to y. In other terms, x⇒y is what x needs to reach y. The relation (36) which defines the relative pseudo-complementation may be re-written by parametrizing the operation with the shared argument a, as follows:

$$\displaystyle \begin{aligned} \wedge_a(x)\leq b{\textit\;\; iff\;\;}x\leq\Longrightarrow_a(b) \end{aligned}$$

From (20) it immediately appears that in a Heyting algebra ∧ is lower adjoint to ⇒ and ⇒ is upper adjoint to ∧. Therefore, ∧ is additive, so that due to the very properties of adjointness Heyting algebras are distributive lattices: for all a, b and c, a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c).

Notice that in Heyting algebras ¬(a ∨ b) = ¬a ∧¬b but ¬(a ∧ b) ≥¬a ∨¬b, witness the killing case ¬(a ∧¬a). In Boolean algebras also the second De Morgan law holds, because ¬¬a = a.

The following standard results will be useful in Sect. 6.5:

Lemma 36 (Cf.[1] and [42])

In any Heyting algebra H: (1) b ≤ a⇒b. (2) ⇒ is monotonic (i.e. order preserving) in the second argument, and antitonic (i.e. order reversing) in the first, that is, a ≤ b implies c⇒a ≤ c⇒b, and b⇒c ≤ a⇒c, for any c. (3) a ≤¬¬a. (4) a⇒b ≤¬b⇒¬a. (5) ¬ is antitonic. (6) ¬¬ is monotonic.

Also the following results are standard, but a glance at their proofs is worthwhile, to see how adjunction works.

Theorem 37

In any Heyting algebra

  1. 1.

    ¬¬ preserves ⇒ and finite meets.

  2. 2.

    ¬(a⇒b) = ¬¬a ∧¬b.

Proof

(1) The proof for ⇒ will be given in a footnote of Theorem 87. As for meet, since ¬¬ is monotonic, ¬¬(x ∧ y) ≤¬¬x and ¬¬(x ∧ y) ≤¬¬y, therefore ¬¬(x ∧ y) ≤¬¬x ∧¬¬y. On the other hand, ¬¬x ∧¬¬y ∧¬(x ∧ y) ≤ 0. From adjunction one obtains ¬¬x ∧¬¬y ≤¬(x ∧ y)⇒0 = ¬¬(x ∧ y). (2) Since ⇒ is monotonic in the second argument, ¬a = a⇒0 ≤ a⇒b. But ¬ is antitonic so that ¬(a⇒b) ≤¬¬a. Moreover, ⇒ is antitonic in the first argument so that b = 1⇒b ≤ a⇒b and, hence, ¬(a⇒b) ≤¬b and we conclude that ¬(a⇒b) ≤¬¬a ∧¬b. On the other hand, since a⇒b ≤¬b⇒¬a and ¬b ∧ (¬b⇒¬a) ≤¬a, one obtains that ¬¬a ∧¬b ∧ (a⇒b) ≤¬¬a ∧¬a ≤ 0 so that by adjunction ¬¬a ∧¬b ≤ (a⇒b)⇒0 = ¬(a⇒b). □

4.2 Heyting Algebras from Topological Spaces

We now define Heyting algebras using the family of open subsets of a topological space. Indeed, abstract Heyting algebras can be considered the pointless companion of the properties of “concrete” topologies, that is, open sets populated by points. In this respect, Heyting algebras are part of Algebraic Geometry. In a sense, Heyting algebras are obtained by “zooming-out” topological spaces, while, in turn, topological spaces are obtained by “zooming-in” Heyting algebras. The duality theorem provides such a zooming-in, which will be discussed in the next section.

Definition 38

Let 〈U, Ω(U)〉 be a topological space, A, B ∈ Ω(U):

$$\displaystyle \begin{aligned} &1:=U \end{aligned} $$
(37)
$$\displaystyle \begin{aligned} &0:=\emptyset \end{aligned} $$
(38)
$$\displaystyle \begin{aligned} &A\wedge B:=A\cap B \end{aligned} $$
(39)
$$\displaystyle \begin{aligned} &A\vee B:= A\cup B \end{aligned} $$
(40)
$$\displaystyle \begin{aligned} &{}A\Longrightarrow B:=\mathbb I(-A\cup B) \end{aligned} $$
(41)
$$\displaystyle \begin{aligned} &\neg A:=A\Longrightarrow\emptyset=\mathbb I(-A)=-\mathbb C(A) \end{aligned} $$
(42)

Theorem 39

Ω(U) equipped with the above operations, is a Heyting algebra.

The proof is folklore in mathematical logic (indeed, it is key to the very duality theorem for Heyting algebras). It is easy to verify that \(X\Longrightarrow Y=\bigcup \{Z:Z\cap X\subseteq Y\}\), so that \(\neg X=\bigcup \{Z:Z\cap X=\emptyset \}\).
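On a finite topology the operations of Definition 38, and the adjunction (36), can be checked exhaustively. A Python sketch; the chain topology below is an illustrative example, not one from the text:

```python
U = frozenset({1, 2, 3})
# a small topology on U (a chain, closed under unions and intersections)
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), U]

def interior(X):
    """Union of the open sets included in X."""
    return frozenset().union(*(O for O in opens if O <= X))

def implies(A, B):
    """A => B := I(-A union B), formula (41)."""
    return interior((U - A) | B)

def neg(A):
    """not-A := A => empty = I(-A), formula (42)."""
    return implies(A, frozenset())

# the adjunction (36): X /\ A <= B  iff  X <= (A => B), for all open triples
print(all(((X & A) <= B) == (X <= implies(A, B))
          for X in opens for A in opens for B in opens))  # True

A = frozenset({1})
print(sorted(neg(A)), sorted(A | neg(A)))  # [] [1]
```

The last line shows A ∨ ¬A ≠ U for A = {1}, so this Heyting algebra is not a Boolean algebra.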

If 〈U, R〉 is a pre-ordered space, by ΩR(U) we denote three objects: (i) the set of all order filters (or up-sets) of 〈U, R〉, (ii) the Alexandrov topology of the topological space τ(U) = 〈U, ΩR(U)〉, (iii) the Heyting algebra with the operations of Definition 38.

Example 40

Example 34 continued. The relational space (U, R) induces the following topological space 〈U, ΩR(U)〉, a.k.a. Heyting algebra:

4.3 Duality

The relationship between finite Heyting algebras H and preorders is expressed in terms of duality. We say that an element x of an (at most countable) Heyting algebra H is co-prime if \(x=\bigvee X\) implies x ∈ X. In other words, x is not the union of elements different from it. Let J(H) be the set of co-prime elements of H and J(H) = (J(H), ⊑) where ⊑ is the reverse of the order ≤ of H. Let Ω(J(H)) = {↑X : X ⊆ J(H)} be the set of all the order filters of J(H) and H(J(H)) = 〈 Ω(J(H)), ⊆〉. Then H and H(J(H)) are lattice isomorphic. The isomorphism φ is given by: φ(a) = {p ∈ J(H) : p ≤ a} (pay attention that ≤ is the order in H). Moreover, J(H(J(H))) and J(H) are order isomorphic.
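For a small lattice of sets, the co-prime elements and the reconstruction via order filters can be computed by brute force. The lattice H below is a made-up example (not the algebra of Example 40):

```python
from itertools import combinations

def coprimes(H):
    """Co-prime (join-irreducible) elements: non-empty x that is not the
    union of the elements of H strictly below it."""
    return [x for x in H
            if x and frozenset().union(*(y for y in H if y < x)) != x]

def order_filters(points, leq):
    """All up-sets of (points, leq), by brute force over subsets."""
    return [frozenset(s)
            for k in range(len(points) + 1)
            for s in combinations(points, k)
            if all(q in s for p in s for q in points if leq(p, q))]

# a made-up lattice of sets, closed under union and intersection
H = [frozenset(s) for s in [set(), {1}, {2}, {1, 2}, {1, 2, 3}]]
J = coprimes(H)
print(sorted(map(sorted, J)))  # [[1], [1, 2, 3], [2]]

# J(H) carries the REVERSE of inclusion; its order filters, ordered by
# inclusion, give back a lattice with as many elements as H
filters = order_filters(J, lambda p, q: q <= p)
print(len(filters) == len(H))  # True
```

Here {1, 2} drops out of J(H), since it is the join of {1} and {2}, while the top element survives: it is not the union of the elements below it.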

Notice that H is used both as an algebraic structure and a constructor of algebraic structures.

The elements of J(H) may be thought of as “abstract points”. If the elements of H are thought of as “properties”, then abstract points are bundles of properties (which means that points have their properties as proxies). This interpretation is supported mathematically. In fact, let 2 = 〈{0, 1}, 0 ≤ 1〉 be the so-called Sierpiński frame. Let ψ : H → 2 be a lattice homomorphism. If the elements of H are seen as properties, then the true kernel ψ −1(1), which is {x : ψ(x) = 1}, is a principal filter ↑p = {x : p ≤ x} in H and, intuitively, gathers together the “virtual points” which fulfil the property p, for some p. If a point a fulfils a property p, we write p ⊨ a. The element p of H which generates the filter is the least “virtual point” fulfilling that property. Otherwise stated, p is both a property and the representative of the “virtual points” which fulfil p itself. Therefore we can denote this homomorphism ψ with \(\overrightarrow {p}\). Under this respect, the isomorphism φ gains a straightforward interpretation:

$$\displaystyle \begin{aligned} \varphi(a)&=\{p\in J(\mathbf{H}):a\in \uparrow_\leq p\} =\{p\in J(\mathbf{H}):a\in \overrightarrow{p}^{\;-1}(1)\}=\{p\in J(\mathbf{H}):p\models a\} \end{aligned} $$

The set \(\mathcal {Hom}(\mathbf {H},\mathbf {2})\) of all the homomorphisms from H to 2 is, thus, the set of properties representing the virtual points which fulfil them. There is a bijection between J(H) and \(\mathcal {Hom}(\mathbf {H},\mathbf {2})\). Therefore we consider J(H) to be the set of abstract points defined by H. Now the reverse order ⊑ is understood: x ⊑ y in J(H) if and only if ↑x ⊆ ↑y and, from Lemma 27, if y ≤ x in H, then ↑x ⊆ ↑y. Actually, this is the order on \(\mathcal {Hom}(\mathbf {H},\mathbf {2})\).

Example 41

In order to appreciate the construction of abstract points through duality, let us start with an abstract version H of the Heyting algebra of Example 40.

Notice, for instance, that there is no homomorphism \(\overrightarrow {\delta }\) because if all the elements of ↑δ were mapped to 1 and the others to 0, then \(\overrightarrow {\delta }(\alpha )=0\), \(\overrightarrow {\delta }(\beta )=0\) so that \(\overrightarrow {\delta }(\alpha \vee \beta )=\overrightarrow {\delta }(\delta )=1\neq \overrightarrow {\delta }(\alpha )\vee \overrightarrow {\delta }(\beta )=0\). And the same happens to all the non co-prime elements of H. The properties fulfilled by δ are: φ(δ) = {p ∈ J(H) : p ≤ δ} = {α, β}.

If H is a lattice of sets (that is, a topological space), it might be the case that the abstract points are fewer than the original points. Indeed, if two points p, p′ cannot be separated by means of an open set (i.e. by means of a “personal” property), the homomorphism makes them collapse onto the same abstract point. In other words, from a topological point of view H and H(J(H)) are isomorphic but not homeomorphic. We shall see later that the collapse of such “redundant” points is called T 0 -ification (or soberification) of the space.Footnote 3

This is what can be seen in our example, indeed.

Example 42

Example 13 continued.

It is evident that H(J( ΩR(U))) and ΩR(U) are isomorphic but the isomorphism φ makes the two “twin” points b and b′ collapse onto the single abstract point {b, b′}. In fact, it maps the two-point element {b, b′} onto the singleton {{b, b′}}. In turn, J( ΩR(U)) is order isomorphic to (U∕≡, ≤), where ≡ is defined as p ≡ p′ iff pRp′ and p′Rp, and ≤ is the order inherited by the equivalence classes from R: [x]≡ ≤ [y]≡ if and only if xRy (equivalently, one can set p ≡ p′ iff \(p\preccurlyeq p'\) and \(p'\preccurlyeq p\), where \(\preccurlyeq \) is the specialisation preorder of the topological space 〈U, ΩR(U)〉).

5 Rough Sets and the Algebras of Rough Set Systems

A rough set is an equivalence class on the powerset ℘(U) modulo the equivalence of the two approximations (lR)(−) and (uR)(−):

Definition 43

Let AS(U/R) be an approximation space, with R any binary relation on U. Two sets A, B ∈ ℘(U) are said to be rough equal, denoted A ≈ B, if (lR)(A) = (lR)(B) and (uR)(A) = (uR)(B). A rough set is an equivalence class modulo ≈. The rough set of a set A is denoted as [A].

Since the two approximations uniquely define a rough set, given any subset A of U, [A] can be represented in the following ways (see Definition 33):

Definition 44

Let AS(U/R) be an approximation space and A ⊆ U.

  1. 1.

    〈(lR)(A), (uR)(A)〉 — increasing representation, Icr(A)

  2. 2.

    〈(uR)(A), (lR)(A)〉 — decreasing representation, Dcr(A)

  3. 3.

    〈(lR)(A), (eR)(A)〉 — disjoint representation, Dsj(A)

  4. 4.

    〈(lR)(A), (bR)(A)〉 — boundary representation, Bdr(A)

Let us focus our attention on the decreasing and disjoint representations:

Definition 45

Let AS(U/R) be an approximation space. Then we set:

$$\displaystyle \begin{aligned} \begin{array}{rcl} Dsj(\mathbf{AS}(U/R))=\{\langle(lR)(A),(eR)(A)\rangle: A\subseteq U\} \end{array} \end{aligned} $$
(43)
$$\displaystyle \begin{aligned} \begin{array}{rcl} Dcr(\mathbf{AS}(U/R))=\{\langle(uR)(A),(lR)(A)\rangle: A\subseteq U\} \end{array} \end{aligned} $$
(44)

The above representations are interchangeable. For instance, the following functions link decreasing represented and disjoint represented rough sets:

$$\displaystyle \begin{aligned} & \rho: Dsj(A)\longmapsto Dcr(A); \rho(\langle a_1, a_2\rangle)=\langle -a_2, a_1\rangle \end{aligned} $$
(45)
$$\displaystyle \begin{aligned} & \rho^{-1}: Dcr(A)\longmapsto Dsj(A); \rho^{-1}(\langle a_1, a_2\rangle)=\langle a_2,-a_1\rangle \end{aligned} $$
(46)

The justification of these functions is straightforward. For instance, ρ operates as follows, given \(a_1=\mathbb I_R(X)\) and \(a_2=-\mathbb C_R(X)\) for some X ⊆ U: \(\rho (\langle \mathbb I_R(X), -\mathbb C_R(X)\rangle) =\langle -(-\mathbb C_R(X)),\mathbb I_R(X)\rangle =\langle \mathbb C_R(X),\mathbb I_R(X)\rangle \), which is the decreasing representation of the rough set of X.
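
These conversions can also be checked exhaustively on a small finite case. The following Python sketch is an assumption: it reconstructs the running example of the chapter (R(a) = {a}, R(c) = {a, c}, R(b) = R(b′) = {b, b′}, R(v) = U, with b1 standing for b′) and verifies that ρ and ρ−1 translate between the two representations for every subset of U.

```python
from itertools import chain, combinations

# Reconstruction (assumption) of the running example; b1 stands for b'.
U = frozenset({"v", "a", "b", "b1", "c"})
R = {"a": {"a"}, "c": {"a", "c"},
     "b": {"b", "b1"}, "b1": {"b", "b1"}, "v": set(U)}

def lower(A): return frozenset(x for x in U if R[x] <= set(A))  # (lR)(A)
def upper(A): return frozenset(x for x in U if R[x] & set(A))   # (uR)(A)
def dsj(A):   return (lower(A), U - upper(A))   # disjoint representation
def dcr(A):   return (upper(A), lower(A))       # decreasing representation

def rho(p):        # Dsj -> Dcr: <a1, a2> |-> <-a2, a1>
    a1, a2 = p
    return (U - a2, a1)

def rho_inv(p):    # Dcr -> Dsj: <a1, a2> |-> <a2, -a1>
    a1, a2 = p
    return (a2, U - a1)

subsets = [set(c) for c in
           chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]
ok = all(rho(dsj(A)) == dcr(A) and rho_inv(dcr(A)) == dsj(A) for A in subsets)
print(ok)
```

The exhaustive check over all 32 subsets confirms the interchangeability claimed above.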

Notation

In view of the above discussion, if R is a preorder, from now on the approximation space AS(U, R) will be considered a topological space and identified with its topology ΩR(U). The system of rough sets from this space in disjoint representation will be denoted by Dsj( ΩR(U)) and in decreasing representation by Dcr( ΩR(U)).

Example 46

Continued from Example 34 where A = {b, b′, c}:

  • Icr(A) = 〈{b, b′}, {b, b′, c, v}〉

  • Dcr(A) = 〈{b, b′, c, v}, {b, b′}〉

  • Dsj(A) = 〈{b, b′}, {a}〉

  • Bdr(A) = 〈{b, b′}, {c, v}〉
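
For readers who like to check such computations mechanically, here is a small Python sketch of the four representations of Definition 44. The preorder is an assumption: a reconstruction of the running example inferred from the approximations reported in the text, with b1 standing for b′.

```python
# Reconstruction (assumption) of the running example; b1 stands for b'.
U = frozenset({"v", "a", "b", "b1", "c"})
R = {"a": {"a"}, "c": {"a", "c"},
     "b": {"b", "b1"}, "b1": {"b", "b1"}, "v": set(U)}

def lower(A):   # (lR)(A) = {x : R(x) is included in A}
    return {x for x in U if R[x] <= set(A)}

def upper(A):   # (uR)(A) = {x : R(x) meets A}
    return {x for x in U if R[x] & set(A)}

A = {"b", "b1", "c"}
Icr = (lower(A), upper(A))                # increasing
Dcr = (upper(A), lower(A))                # decreasing
Dsj = (lower(A), U - upper(A))            # disjoint: second item is (eR)(A)
Bdr = (lower(A), upper(A) - lower(A))     # boundary: second item is (bR)(A)
print(Icr, Dcr, Dsj, Bdr)
```

The output reproduces the four pairs listed above.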

Below we depict the entire rough set system, in disjoint and in decreasing representation, induced by the approximation space AS(U, R), which is depicted in Example 40 and which we identify with the topology ΩR(U).

The following table shows the Dsj image of ℘(U). If only b is in a set X, then the corresponding set with b′ is omitted (for instance, {c, v, b} stands also for {c, v, b′}). First, Dsj(∅) = 〈∅, U〉 and Dsj(U) = 〈U, ∅〉. Then:

The reader can verify that Dcr( ΩR(U)) = ρ(Dsj( ΩR(U))).

For instance, ρ(〈{a}, {b, b′}〉) = 〈−{b, b′}, {a}〉 = 〈{v, a, c}, {a}〉.

Any ordered pair of Dcr( ΩR(U)) has a closed set of 〈U, ΩR(U)〉 as first element and an open set as second, which is included in the closed set. Closed sets are order-ideals in (U, R), while open sets are order-filters. Notice that any open set of 〈U, ΩR(U)〉 is a closed set in \(\langle U,\Omega _{R^\smile }(U)\rangle \), and vice-versa.

Now we have to pay attention to a basic fact. The ordered pair 〈{v, a, c}, ∅〉 is made of the above ingredients, namely, decreasing elements of ΩR(U). Still, it is not the representation of any rough set. Indeed, assume {v, a, c} = (uR)(X) and ∅ = (lR)(X), for some subset X of U. Since a ∈ (uR)(X), R(a) ∩ X ≠ ∅. But R(a) = {a}, so that a ∈ X. Therefore, R(a) ⊆ X and we conclude that a ∈ (lR)(X), which contradicts the assumption (lR)(X) = ∅.

In turn, the ordered pairs of Dsj( ΩR(U)) are made of disjoint open sets. However, the disjoint ordered pair 〈{b, b′}, ∅〉 is made of these ingredients, but it is not a rough set. Indeed, if the second element, ∅, is the complement of the upper approximation (i.e. closure) of a set X, then this upper approximation is U, so that a belongs to it. Hence, from the previous reasoning one obtains that a should be in the lower approximation of X, too, which is not the case.
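
Both failures can be confirmed by brute force. The sketch below enumerates all 32 subsets of the reconstructed running example (an assumption, as before; b1 stands for b′) and shows that neither pair occurs as the representation of a rough set.

```python
from itertools import chain, combinations

# Reconstruction (assumption) of the running example; b1 stands for b'.
U = frozenset({"v", "a", "b", "b1", "c"})
R = {"a": {"a"}, "c": {"a", "c"},
     "b": {"b", "b1"}, "b1": {"b", "b1"}, "v": set(U)}

def lower(A): return frozenset(x for x in U if R[x] <= set(A))
def upper(A): return frozenset(x for x in U if R[x] & set(A))

subsets = [set(c) for c in
           chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]
dcr_pairs = {(upper(A), lower(A)) for A in subsets}
dsj_pairs = {(lower(A), U - upper(A)) for A in subsets}

# <{v,a,c}, {}> is a pair of decreasing ingredients, but not a rough set:
bad_dcr = (frozenset({"v", "a", "c"}), frozenset())
# <{b,b'}, {}> is a pair of disjoint opens, but not a rough set:
bad_dsj = (frozenset({"b", "b1"}), frozenset())
print(bad_dcr in dcr_pairs, bad_dsj in dsj_pairs)
```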

Note that this problem does not arise merely because the lower approximation, in the first case, or the complement of the upper approximation, in the second case, is the empty set. Consider the following example:

Example 47

U = {v, a, b, c} and R is the partial order below:

〈{a}, {c}〉 is a pair of disjoint open sets, but if \(\{a\}=\mathbb I_R(X)\) then R(b)⊈X and, since R(b) = {b}, b ∉ X. So, b ∈ −X. In consequence, R(b) ⊆ −X and \(b\in \mathbb I_R(-X)=-\mathbb C_R(X)\), which, therefore, cannot be {c}.

Actually, the next sections are focused on obtaining Dsj( ΩR(U)) from the lattice of all the ordered pairs of disjoint elements of a topological approximation space ΩR(U), which now we formally define:

Definition 48

Let ΩR(U) be an approximation space with R a preorder. Dsj(U, R) = {〈a 1, a 2〉 : a 1, a 2 ∈ ΩR(U) & a 1 ∩ a 2 = ∅}.

If we want, instead, a lattice of decreasing elements, we have to decide “elements of what?”. Since the first element, in decreasing representation, is the closure of a set X, in general it does not belong to ΩR(U). For instance, 〈{a, c}, {a}〉 is a pair of decreasing elements but it does not represent any rough set. Actually, the first element of a decreasing representation of rough sets is the complement of some open set of ΩR(U), that is, an open set in \(\Omega _{R^\smile }(U)\). If R is an equivalence relation then \(R=R^\smile \), so we do not notice the difference. But if R is a preorder or a partial order then we must take care of it.

In particular, we have to take care of the definition of the operations which manipulate ordered pairs of decreasing elements, because in some cases they transform elements of the topology ΩR(U) into elements of the opposite topology \(\Omega _{R^\smile }(U)\). We shall see this interesting point in due time. For now, our analysis will focus on the disjoint representation, which makes it possible to operate just on elements of a single structure.

Now we have to face another problem.

As we have seen, if we take the set of all ordered pairs of disjoint elements of ΩR(U), which we denote by Dsj(U, R), respectively of all the ordered pairs of decreasing elements in \(\Omega _{R^\smile }(U)\times \Omega _R(U)\), denoted Dcr(U, R), we have elements which do not represent any rough set.

From the above discussion, we need a way to exclude from Dsj(U, R) the ordered pairs 〈X 1, X 2〉 which do not fulfil the following condition:

$$\displaystyle \begin{aligned} X_1\cup X_2\supseteq S \end{aligned} $$
(47)

where S is the set of all isolated points: \(S=\{x:R(x)=\{x\}\}\).

Example 94 below illustrates, with some mathematical means, what we have to do.

From Dcr(U, R) we have to exclude the ordered pairs 〈X 1, X 2〉 which do not fulfil the condition X 1 ∩ S = X 2 ∩ S.

To end this section, we sum up the issue. If R(x) = {x} then topologically x is an isolated point. Isolated points cannot belong to the boundary of any set, for the reason illustrated in Example 47, which formally runs as follows: if x ∈ X, then R(x) ⊆ X so that \(x\in \mathbb I_R(X)\). If x ∉ X, then x ∈ − X so that R(x) ⊆ −X and, in consequence, \(x\in \mathbb I_R(-X)\). In the first case \(x\in \mathbb C_R(X)\) but \(x\notin -\mathbb I_R(X)\). In the second case \(x\in -\mathbb I_R(X)\) but \(x\notin \mathbb C_R(X)\). In both cases \(x\notin \mathbb C_R(X)\cap -\mathbb I_R(X)=\mathbb B_R(X)\).
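
On the reconstructed running example (an assumption, as before; b1 stands for b′), the set S of isolated points and the two exclusion conditions can be verified exhaustively:

```python
from itertools import chain, combinations

# Reconstruction (assumption) of the running example; b1 stands for b'.
U = frozenset({"v", "a", "b", "b1", "c"})
R = {"a": {"a"}, "c": {"a", "c"},
     "b": {"b", "b1"}, "b1": {"b", "b1"}, "v": set(U)}

def lower(A): return {x for x in U if R[x] <= set(A)}
def upper(A): return {x for x in U if R[x] & set(A)}

S = {x for x in U if R[x] == {x}}   # isolated points
subsets = [set(c) for c in
           chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]

# Condition (47): every genuine disjoint pair covers S ...
dsj_ok = all(S <= (lower(A) | (U - upper(A))) for A in subsets)
# ... and every genuine decreasing pair agrees on S:
dcr_ok = all(upper(A) & S == lower(A) & S for A in subsets)
print(S, dsj_ok, dcr_ok)
```

Here S = {a}, which is exactly the point used in the two counterexamples above.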

6 Rough Set Systems, Grothendieck Topologies and Lawvere-Tierney Operators

The fact that any isolated point must belong either to the positive part (lR)(X), or to the negative part − (uR)(X) of a rough set, is a sort of Excluded Middle localized on S. Indeed, in general given x ∈ U and X ⊆ U, the assignment of x to X is given by a three-valued characteristic function:

$$\displaystyle \begin{aligned} \chi_x(X) = \begin{cases} 1 & \text{if {$x\in(lR)(X)$}} \\ 1/2 & \text{if {$x\in (uR)(X)\cap-(lR)(X)$}}\\ 0 & \text{if {$x\in -(uR)(X)$}} \end{cases} \end{aligned} $$
(48)

But if x ∈ S, then χ x takes just the values 0 or 1.
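
A sketch of the three-valued characteristic function (48), again on the reconstructed example (an assumption; b1 stands for b′), makes this localized Excluded Middle visible: on the isolated point a the value 1∕2 never occurs.

```python
from itertools import chain, combinations

# Reconstruction (assumption) of the running example; b1 stands for b'.
U = frozenset({"v", "a", "b", "b1", "c"})
R = {"a": {"a"}, "c": {"a", "c"},
     "b": {"b", "b1"}, "b1": {"b", "b1"}, "v": set(U)}

def lower(A): return {x for x in U if R[x] <= set(A)}
def upper(A): return {x for x in U if R[x] & set(A)}

def chi(x, X):            # the three-valued characteristic function (48)
    if x in lower(X):
        return 1
    if x in upper(X):
        return 0.5        # x lies in the boundary region of X
    return 0

S = {x for x in U if R[x] == {x}}   # isolated points; here S = {"a"}
subsets = [set(c) for c in
           chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]
two_valued = all(chi(x, X) in (0, 1) for x in S for X in subsets)
print(two_valued, chi("c", {"b", "b1", "c"}))
```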

Intuitively, Classical Logic is locally valid on S.

Local validity is a notion widely studied in Algebraic Geometry, which provides a powerful tool, Grothendieck topologies, which we shall apply in this section.

6.1 Grothendieck Topologies and Local Validity

Definition 49 (Grothendieck Topology)

Let P = 〈U, R〉 be a preorder. We recall that ΩR(U) = {R(X) : X ⊆ U} is the set of all order filters over P. A Grothendieck topology on the preorder P is a map J : U⟶℘( ΩR(U)); x ↦ J [x], with J [x] ⊆ ΩR(R(x)), such that:

  GT1.

    R(x) ∈ J [x], ∀x ∈ U,

  GT2.

    \(R(x')\cap S\in J_{[x']}, \forall x'\geq x, \forall S\in J_{[x]}\).

  GT3.

    ∀x ∈ U, ∀S ∈ J [x], ∀S′⊆ R(x) such that S′∈ ΩR(U), if \(\forall x'\in S, R(x')\cap S'\in J_{[x']}\), then S′∈ J [x].

If a filter S belongs to J [x], we say that S covers x. J [x] is called the open-cover system of x. We set G = {J [x] : x ∈ U}, and 〈P, G〉 is called an ordered site.

From a “granular” point of view, Grothendieck topologies formalize the notion “to be locally true” in the following sense: a property P is locally true at a point x in a granulated space S if every granule G such that x ∈ G contains a nonempty granule G′ which has property P, that is, such that G′ is included in the validity set \([\![ P]\!] \).

If S is a topological space, as in the case we are dealing with, then one substitutes “open neighbourhood” for “granule” and obtains that a topological space S has a property P locally valid at a point x if the set of nonempty P-open sets (i.e. the nonempty open sets included in \([\![ P]\!] \)) is cofinal, with respect to reverse inclusion, in the neighbourhood filter of x. By definition, a Grothendieck open-cover of x is a set of open subsets of R(x) in the given topology.

Example 50

In our standard example ΩR(U), suppose \([\![ P]\!] =\{a\}\); then c ∉ \([\![ P]\!] \). But the granules (i.e. open neighbourhoods) containing c are {a, c}, {a, c, b, b′} and U, and all of them contain {a}. So P is locally valid at c. On the contrary, b has three granules containing {a} (they are {a, b, b′}, {a, c, b, b′} and U), but there is an open neighbourhood of b not containing {a}, namely {b, b′}. So P is not locally valid at b (or at b′, of course).

Just as the family of open (closed) sets of a topology induces an interior (closure) operator, an ordered site induces a particular operator. This operator is partially an interior and partially a closure operator of a usual topology (and we shall see why).

6.2 Lawvere-Tierney Operators

Any Grothendieck topology induces on ΩR(U) a closure operator J: J(A) = {x : A ∩ R(x) ∈ J [x]}. In other terms, if \(A=[\![ P]\!] \) then J(A) is the set of points at which P is locally valid. J is a Lawvere-Tierney operator, which we define at the pointless level:

Definition 51 (Lawvere-Tierney Operators)

Given a Heyting algebra H, J : H⟶H is a Lawvere-Tierney operator if the following hold:

  • x ≤ J(x)—inflation,

  • J(J(x)) = J(x)—idempotence,

  • J(x ∧ y) = J(x) ∧ J(y)—multiplicativity.

From multiplicativity we obtain monotonicity: if x ≤ y then J(x) ≤ J(y).

The above properties have the following intuitive motivations:

In the first place, recall that x is more specialised than y if it enjoys more properties than y, that is, if it belongs to open sets from which y is excluded, but not the other way around. Dually, we say that a property P is stronger than Q if its domain of validity is “more specialised” than that of Q. So we have: (i) If the property P is stronger than the property Q, then P is locally stronger than Q. (ii) The domain of validity of P is stronger than the domain of local validity of P. (iii) The domain of local validity of the domain of local validity of a property P equals the domain of local validity of P itself. (iv) The domain of local validity of P ∧ Q is the inf of the domains of local validity of P and Q.

We have seen that given a Grothendieck topology we can produce a Lawvere-Tierney operator connected to it. Now, symmetrically, we restore a Grothendieck topology from a Lawvere-Tierney operator on ΩR(U). The following result can be found in [15] or [23].

Lemma 52

Given a preorder P = 〈U, Rand a Lawvere-Tierney operator J on Ω R(U), the family

$$\displaystyle \begin{aligned} \{J_{[x]}:x\in U\},\quad \text{where}\ J_{[x]}=\{R(x)\cap X: x\in J(X)\} \end{aligned} $$
(49)

is a Grothendieck topology.

Definition 53 (Local Validity in an Ordered Site)

A forcing relation ⊧ between elements of P and the set \(\mathcal I\) of formulas of propositional Intuitionistic logic is a relation \(\models \subseteq U\times \mathcal I\), such that for any formula \(\alpha \in \mathcal I, p\in U\):

$$\displaystyle \begin{aligned} \text{If }p\models \alpha\ \text{then}\ \forall p'(p'\in R(p)\Rightarrow p'\models \alpha). \end{aligned} $$
(50)

which means that if \(p\in [\![ \alpha ]\!] \) then \(p\in [e]_{\models }([\![ \alpha ]\!] )\). Clearly, for any α, \([\![ \alpha ]\!] \) belongs to ΩR(U).

Given an ordered site 〈P, G〉, we say that a formula α is locally valid at a point p ∈P, in symbols p⊧〈l〉(α), if \(R(p)\cap [\![ \alpha ]\!] \) covers p in the topology G, that is, if {p′≥ p : p′⊧ α} belongs to the open-cover system of p in G.

Example 54

The following is a Grothendieck topology which we name G δ for reasons that will be clear soon:

Let us compute J δ({a}), where J δ is the Lawvere-Tierney operator induced by G δ: \(R(a)\cap \{a\}=\{a\}\in J_{[a]}^\delta \), \(R(c)\cap \{a\}=\{a\}\in J_{[c]}^\delta \). For no other x, \(R(x)\cap \{a\}\in J^\delta _{[x]}\). So J δ({a}) = {a, c}. The other cases follow suit. Therefore, if \([\![ P]\!] =\{a\}\) then P is locally valid at c even if \(c\notin [\![ P]\!] \), as anticipated in Example 50.
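
The computation of J δ can be replayed in code. The sketch below is an assumption: it reconstructs the running example (b1 stands for b′), derives the least dense open δ, and checks that the two descriptions of J δ, namely δ⇒(⋅) and 𝕀ℂ(⋅), agree and give J δ({a}) = {a, c}.

```python
from itertools import chain, combinations

# Reconstruction (assumption) of the running example; b1 stands for b'.
U = frozenset({"v", "a", "b", "b1", "c"})
R = {"a": {"a"}, "c": {"a", "c"},
     "b": {"b", "b1"}, "b1": {"b", "b1"}, "v": set(U)}

def interior(Y): return frozenset(x for x in U if R[x] <= set(Y))
def closure(Y):  return frozenset(x for x in U if R[x] & set(Y))

subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]
opens = [A for A in subsets if interior(A) == A]     # the Alexandrov topology

delta = min((X for X in opens if closure(X) == U), key=len)  # least dense open

def J_delta(X):     return interior(closure(X))      # I C (X), i.e. double negation
def J_via_delta(X): return interior((U - delta) | X) # delta => X

print(delta, J_delta(frozenset({"a"})))
```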

Vice-versa, given J δ we can compute G δ. For instance \(J^\delta _{[c]}\) is obtained as c ∈ J δ({a}), J δ({a, c}), J δ({a, b, b′}), J δ({a, b, b′, c}) and J δ(U). Therefore:

$$\displaystyle \begin{aligned} J^\delta_{[c]}&=\{R(c)\cap \{a\},R(c)\cap\{a,c\},R(c)\cap\{a,b,b'\},R(c)\cap\{a,b,b',c\},R(c)\cap U\}\\ &=\{\{a,c\}\cap \{a\},\{a,c\}\cap\{a,c\},\{a,c\}\cap\{a,b,b'\},\{a,c\}\cap\{a,b,b',c\},\{a,c\}\cap U\}\\ &=\{\{a\},\{a,c\}\}. \end{aligned} $$

Notice that for any x, \(J^\delta _{[x]}=\{Y: J^\delta (Y)=J^\delta (R(x))\;\&\; Y\subseteq R(x)\}\).

6.3 Congruence

For the comfort of the reader we recall some definitions and results.

Definition 55 (Congruence)

Let L = 〈U, φ〉 be a set equipped with an n-ary operation φ and ≡ an equivalence relation on U. Then ≡ is called a congruence if for all a 1, …, a n, b 1, …, b n ∈ U, a 1 ∈ [b 1], …, a n ∈ [b n] implies φ(a 1, …, a n) ∈ [φ(b 1, …, b n)]. If this is the case, we say that ≡ is compatible with φ.

Therefore, given a Heyting algebra H and an equivalence relation ≡ on H, we say that ≡ is a ∧-congruence if it is compatible with ∧, that is, if a 1 ∈ [b 1], a 2 ∈ [b 2] implies a 1 ∧ a 2 ∈ [b 1 ∧ b 2]. Similarly we define the notions of ∨-congruence and ⇒-congruence. If all three compatibilities are satisfied, we say that ≡ is a Heyting algebra congruence. Remember that \(0=\bigvee \emptyset \) and \(1=\bigwedge \emptyset \).

Lemma 56

Let L = 〈U, φ〉 and L ′ = 〈U′, φ〉 be two sets equipped with the same n-ary operation φ. Let f be a homomorphism between L and L ′. Set, for all a, b ∈L, a ≡f b iff f(a) = f(b). Then ≡f is a congruence on L.

Proof

The standard and straightforward proof is the following: Since ≡f is defined by means of an equality, then it is an equivalence. Now assume \(a_1\in [b_1]_{\equiv _f}, \ldots , a_n\in [b_n]_{\equiv _f}\). Hence, since f preserves φ, f(φ(a 1, …, a n)) = φ(f(a 1), …, f(a n)). But by assumption f(a i) = f(b i). Therefore, f(φ(a 1, …, a n)) = φ(f(b 1), …, f(b n)) = f(φ(b 1, …, b n)), so that \(\varphi (a_1,\ldots ,a_n)\in [\varphi (b_1,\ldots ,b_n)]_{\equiv _f}\). □

The congruence ≡f is called the kernel of f.

Definition 57 (Quotient Structure)

Let L = 〈U, φ〉 be a set with an n-ary operation φ and ≡ an equivalence relation on L. Then:

  1. 1.

    U∕≡ := {[a] : a ∈ U} is called the quotient set of U.

  2. 2.

For all a, b ∈ U, define φ ≡([a], [b]) := [φ(a, b)]. Then L∕≡ := 〈U∕≡, φ ≡〉 is called the quotient structure of L.

If there is no risk of confusion, we write φ also for φ ≡, thus, for instance, ∧ instead of ∧≡.

Lemma 58

If L is a lattice and ≡ a congruence on L , then L∕≡ is a lattice and the map \(\mathfrak {q}:\mathbf { L}\longmapsto \mathbf {L}/{\equiv };\mathfrak {q}(a)=[a]_{\equiv }\) is a homomorphism. The map \(\mathfrak {q}\) is called the natural quotient map.

Theorem 59 (Fundamental Homomorphism Theorem for Lattices)

Let L and L ′ be lattices and f a homomorphism of L onto L ′. Then the map g : L∕≡f⟶L ′ given by \(g([a]_{\equiv _f})=f(a)\) is independent of the representative a, that is, for all a, b ∈L, \([a]_{\equiv _f}=[b]_{\equiv _f}\) implies \(g([a]_{\equiv _f})=g([b]_{\equiv _f})\).

Moreover g is an isomorphism between L∕≡f and L ′.

Finally, if \(\mathfrak {q}\) is the natural quotient map, then \(\equiv _{\mathfrak {q}}\) and ≡f coincide and the following diagram commutes, that is, \(g\circ \mathfrak {q}=f\):

Example 60

Consider the abstract Heyting algebra of Example 41.

6.4 Lawvere-Tierney Operators and Heyting Algebra Congruences

Lemma 61

Let H be a Heyting algebra and a H . Let us set the following operator on H :

$$\displaystyle \begin{aligned} J^a(x)=a\Longrightarrow x \end{aligned} $$
(51)

Then J a is a Lawvere-Tierney operator.

Proof

This is a standard result in Algebraic Geometry (see [23]). However, we provide a simple proof which exhibits how adjointness properties may be used. Indeed, in any Heyting algebra H, by the adjointness property, y ∧ a ≤ x iff y ≤ a⇒x, for any y. But x ∧ a ≤ x, and one obtains x ≤ a⇒x; hence J a is inflationary. Idempotence follows from the following equations: a⇒(a⇒x) = (a ∧ a)⇒x = a⇒x. This is an application of the Curry property of ∧ and ⇒, which can be obtained from adjointness: x ≤ y⇒(w⇒z) iff x ∧ y ≤ w⇒z iff x ∧ (y ∧ w) ≤ z, that is, x ≤ (y ∧ w)⇒z. Finally, multiplicativity derives again from the multiplicativity of upper adjoints, and ⇒ is an upper adjoint. □
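
In the finite topological setting this lemma can also be tested directly. With ⇒ computed as a⇒x = 𝕀(−a ∪ x), the sketch below (which assumes the reconstructed running example, with b1 for b′) checks the three Lawvere-Tierney axioms for J a, for every open a.

```python
from itertools import chain, combinations, product

# Reconstruction (assumption) of the running example; b1 stands for b'.
U = frozenset({"v", "a", "b", "b1", "c"})
R = {"a": {"a"}, "c": {"a", "c"},
     "b": {"b", "b1"}, "b1": {"b", "b1"}, "v": set(U)}

def interior(Y): return frozenset(x for x in U if R[x] <= set(Y))

subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]
opens = [A for A in subsets if interior(A) == A]

def imp(a, x):   # Heyting implication: a => x = I((-a) | x)
    return interior((U - a) | x)

def is_lawvere_tierney(J):
    inflation = all(x <= J(x) for x in opens)
    idempotence = all(J(J(x)) == J(x) for x in opens)
    multiplicativity = all(J(x & y) == J(x) & J(y)
                           for x, y in product(opens, repeat=2))
    return inflation and idempotence and multiplicativity

all_ok = all(is_lawvere_tierney(lambda x, a=a: imp(a, x)) for a in opens)
print(all_ok)
```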

Lemma 62

Let H be a Heyting algebra and \(\mathcal F\) a filter on H. Let us define, for all a, b ∈H, \(a\equiv _{\mathcal F} b\) iff \(\exists f\in \mathcal F\) such that a ∧ f = b ∧ f. Then \(\equiv _{\mathcal F}\) is a Heyting algebra congruence.

Proof

Suppose \(x\in [y]_{\equiv _{\mathcal F}}\) and \(z\in [w]_{\equiv _{\mathcal F}}\). Then for some \(f\in \mathcal F\), x ∧ f = y ∧ f and z ∧ f = w ∧ f. Now, (x ∨ z) ∧ f = (x ∧ f) ∨ (z ∧ f) = (y ∧ f) ∨ (w ∧ f) = (y ∨ w) ∧ f. Hence, \(x\vee z\in [y\vee w]_{\equiv _{\mathcal F}}\). Dually one obtains ∧-compatibility. We have to prove the case of ⇒, that is, (x⇒z) ∧ f = (y⇒w) ∧ f. Now, let q ∧ x ≤ z. Then q ∧ f ∧ x ≤ z ∧ f. From the congruence assumptions we obtain q ∧ f ∧ y ≤ w ∧ f and, a fortiori, q ∧ f ∧ y ≤ w, which means that q ∧ f ≤ y⇒w. Since q is arbitrary among the elements a such that a ∧ x ≤ z, in particular we can set q = x⇒z, obtaining (x⇒z) ∧ f ≤ (y⇒w) ∧ f. With a symmetric reasoning we obtain (y⇒w) ∧ f ≤ (x⇒z) ∧ f and conclude (x⇒z) ∧ f = (y⇒w) ∧ f. □

In particular, if \(\mathcal F\) is the principal filter generated by the element p, one has that a ∧ f = b ∧ f for some \(f\in \mathcal F\) if and only if a ∧ p = b ∧ p. However, if we set \(\hat {\mathcal F}(a)=a\wedge p\), we indeed have that \(a \equiv _p b\iff \hat {\mathcal F}(a)=\hat {\mathcal F}(b)\) is a congruence, but \(\hat {\mathcal F}\) is not a Lawvere-Tierney operator, because \(\hat {\mathcal F}(a)\leq a\) instead of \(a\leq \hat {\mathcal F}(a)\).

Anyway, there is a Lawvere-Tierney operator J such that ≡J coincides with \(\equiv _{\hat {\mathcal F}}\), and it is J p:

Theorem 63

Given a Heyting algebra H and p ∈H, set a ≡p b if and only if a ∧ p = b ∧ p and let \(a\equiv _{J^p} b\) iff J p(a) = J p(b). Then ≡p coincides with \(\equiv _{J^p}\).

Proof

A proof is given in Proposition 7.4.2 of [39]. A straightforward proof is the following, anyway. The lower adjoint of J p is (⋅) ∧ p. From this, the result is just an application of the adjointness relation (cf. (37)). □

Therefore, given a Heyting algebra H and an element a ∈H, \(\equiv _{J^a}\) is a congruence on H. Here we report a more general result about the relation between congruences and Lawvere-Tierney operators on a Heyting algebra (see [15]):

Theorem 64

Let ≡ be a congruence on a Heyting algebra H. Define:

$$\displaystyle \begin{aligned} J_\equiv:\mathbf{H}\longmapsto\mathbf{H}; J_\equiv(p)=\bigvee\{x:x\equiv p\} \end{aligned} $$
(52)

Then J ≡ is a Lawvere-Tierney operator and x ≡ y iff J ≡(x) = J ≡(y), that is, ≡ and \(\equiv _{J_\equiv }\) coincide.

Proof

We just prove the first part. From (52), p ≤ J ≡(p) and J ≡(J ≡(p)) = J ≡(p), since J ≡(p) is the maximum element of its congruence class. From p ≡ J ≡(p) and q ≡ J ≡(q) one sees that (p ∨ q) ≡ J ≡(p) ∨ J ≡(q). Therefore, again by (52), J ≡(p) ∨ J ≡(q) ≤ J ≡(p ∨ q). Suppose now p ≤ q. Then p ∨ q = q, so that J ≡(p) ≤ J ≡(p) ∨ J ≡(q) ≤ J ≡(p ∨ q) = J ≡(q). Thus, J ≡ is monotone. Similarly, J ≡(p) ∧ J ≡(q) ≤ J ≡(p ∧ q). But from p ∧ q ≤ p and p ∧ q ≤ q monotonicity gives J ≡(p ∧ q) ≤ J ≡(p) and J ≡(p ∧ q) ≤ J ≡(q), so that J ≡(p ∧ q) ≤ J ≡(p) ∧ J ≡(q), from which multiplicativity follows. □

Actually, there is a one-one correspondence between Heyting algebra congruences ≡ and Lawvere-Tierney operators which can be found in [15].

In particular we obtain:

Theorem 65

Let H be a Heyting algebra and J a Lawvere-Tierney operator on H . Let us set for all a, b H ,

$$\displaystyle \begin{aligned} a\equiv_J b \iff J(a)=J(b) \end{aligned} $$
(53)

Then ≡J is a congruence on H.

Later in this chapter we provide a proof for a particular important case of the above theorem that we are going to introduce.

6.5 Dense Elements of a Heyting Algebra

If not otherwise stated, in this section H will denote a Heyting algebra and a, b, a 1, b 1, x, y, … will denote elements of H.

Definition 66

An element x ∈H is said to be dense if ¬x = 0. It is called regular if ¬¬x = x.

The following is immediate:

Theorem 67

An element δ is dense iff ¬¬δ = 1 iff for all x ∈H with x ≠ 0, x ∧ δ ≠ 0.

Theorem 68

If \(\mathcal D\) is the filter of all dense elements of H, then for all a, b ∈H, \(a\equiv _{\mathcal D} b\) iff ¬a = ¬b iff ¬¬a = ¬¬b.

The relation \(\equiv _{\mathcal D}\) is called the Glivenko congruence. The proof is a standard result in Geometric Logic (see [23]). However, we give an algebraic proof in the case \(\mathcal D\) is a principal filter generated by an element δ which, therefore, is the least dense element of H.

Lemma 69

Let δ be the least dense element of H. Then, for all a, δ ∧ a = δ ∧¬¬a.

Proof

a ∨¬a ≤¬¬a ∨¬a and both terms are dense elements in H. Therefore, δ ≤ a ∨¬a ≤¬¬a ∨¬a. Hence, δ ∧ (a ∨¬a) = δ = δ ∧ (¬¬a ∨¬a), which means (δ ∧ a) ∨ (δ ∧¬a) = δ = (δ ∧¬¬a) ∨ (δ ∧¬a). Since δ ∧¬a is disjoint from the other terms of the disjunctions, one obtains δ ∧ a = δ ∧¬¬a. □
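
Lemma 69 can also be confirmed by enumeration on the reconstructed running example (an assumption, as before; b1 stands for b′), where δ = {a, b, b′}:

```python
from itertools import chain, combinations

# Reconstruction (assumption) of the running example; b1 stands for b'.
U = frozenset({"v", "a", "b", "b1", "c"})
R = {"a": {"a"}, "c": {"a", "c"},
     "b": {"b", "b1"}, "b1": {"b", "b1"}, "v": set(U)}

def interior(Y): return frozenset(x for x in U if R[x] <= set(Y))

subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]
opens = [A for A in subsets if interior(A) == A]

def neg(a):      # pseudo-complement: not-a = I(-a)
    return interior(U - a)

dense = [X for X in opens if not neg(X)]       # X is dense iff not-X = 0
delta = frozenset.intersection(*dense)         # here the inf is itself dense
lemma69 = all(delta & a == delta & neg(neg(a)) for a in opens)
print(delta, lemma69)
```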

Corollary 70

Let δ be the least dense element of H. Then, for all a, b, δ ∧ a = δ ∧ b if and only if ¬a = ¬b.

Proof

Suppose δ ∧ a = δ ∧ b. By Lemma 69, δ ∧¬¬a = δ ∧¬¬b. Applying ¬¬ to this equation and using its multiplicativity together with ¬¬δ = 1, one obtains ¬¬a = ¬¬b, hence ¬a = ¬b. Conversely, if ¬a = ¬b then ¬¬a = ¬¬b, so that, by Lemma 69, δ ∧ a = δ ∧¬¬a = δ ∧¬¬b = δ ∧ b. □

Corollary 71

If δ is the least dense element of H , then \(a\equiv _{J^\delta } b\) if and only if ¬¬a = ¬¬b.

Lemma 72

Let a ≤ x ≤ ¬¬a. Then ¬a = ¬x and, thus, ¬¬a = ¬¬x.

Proof

From the hypothesis one has ¬¬¬a ≤¬x ≤¬a, that is, ¬a ≤¬x ≤¬a. Therefore, ¬a = ¬x. □

Lemma 73

Let a ≤ b and, for some y, b ∧ y ≤ a. Then a ∧ y = b ∧ y.

Proof

From a ≤ b and b ∧ y ≤ a one obtains, respectively, a ∧ y ≤ b ∧ y and b ∧ y ∧ y ≤ a ∧ y, that is, b ∧ y ≤ a ∧ y. Therefore, a ∧ y ≤ b ∧ y ≤ a ∧ y, so that a ∧ y = b ∧ y. □

Lemma 74

Let a ≤ x ≤ ¬¬a. Then x⇒a is dense.

Proof

From Lemma 72, ¬¬a = ¬¬x. Moreover, from Theorem 37, ¬(x⇒a) = ¬¬x ∧¬a = ¬¬a ∧¬a = 0. We conclude that x⇒a is dense. □

Lemma 75

Let δ be dense. Then ¬(δ⇒a) = ¬a.

Proof

¬(δ⇒a) = ¬¬δ ∧¬a = 1 ∧¬a = ¬a. □

Corollary 76

Let δ be dense. Then δ⇒a ≤ ¬¬a.

Proof

δ⇒a ≤ ¬¬(δ⇒a) = ¬(¬(δ⇒a)) = ¬¬a, where the last equality holds by Lemma 75. □

Theorem 77

Let H be a Heyting algebra with least dense element.

  1. 1.

Let δ be a dense element. Then δ⇒a = a or δ⇒a = ¬¬a.

  2. 2.

Let δ be the least dense element of H. Then δ⇒a = ¬¬a.

  3. 3.

Let δ be the least dense element of H. Then δ⇒a = ¬a⇒¬δ.

Proof

(1) We have two cases: C1: ¬¬a ∧ δ ≤ a and C2: \(\neg \neg a\wedge \delta \gneq a\). Case C1. By adjointness, from ¬¬a ∧ δ ≤ a one obtains ¬¬a ≤ δ⇒a and, from δ⇒a ≤ ¬¬a, provided by Corollary 76, δ⇒a = ¬¬a. This result encompasses also the case a = 0, because ¬¬0 = 0. Case C2. Let x = δ⇒a. Therefore, from adjunction, x ∧ δ ≤ a. From Corollary 76, x ≤ ¬¬a. It cannot be x = ¬¬a, because by assumption \(\neg \neg a\wedge \delta \gneq a\), so that both x ∧ δ ≤ a and \(x\wedge \delta \gneq a\) ought to be true. Since \(\neg \neg a\wedge \delta \gneq a\), this contradiction occurs for every x such that \(a\lneq x\leq \neg \neg a\). It follows that x must be a itself.

(2) If in the proof of (1) one uses the fact that δ ∧ a = δ ∧¬¬a, because δ is the least dense element, one obtains the result. Alternatively, from Theorem 63 and Lemma 69, δ⇒a = δ⇒¬¬a ≥ ¬¬a. So, from Corollary 76 we obtain the proof. Another proof exploits adjunction: from Lemma 69, ¬¬a ∧ δ = a ∧ δ ≤ a, so that by adjunction ¬¬a ≤ δ⇒a. But from (1), δ⇒a ≤ ¬¬a. (3) is a corollary of (2), because ¬a⇒¬δ = ¬a⇒0 = ¬¬a. □
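
A finite check of Theorem 77, on the reconstructed running example (an assumption, as before; b1 stands for b′): for the least dense δ, δ⇒a coincides with ¬¬a on every open a, and for any dense element d, d⇒a is either a or ¬¬a.

```python
from itertools import chain, combinations

# Reconstruction (assumption) of the running example; b1 stands for b'.
U = frozenset({"v", "a", "b", "b1", "c"})
R = {"a": {"a"}, "c": {"a", "c"},
     "b": {"b", "b1"}, "b1": {"b", "b1"}, "v": set(U)}

def interior(Y): return frozenset(x for x in U if R[x] <= set(Y))

subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]
opens = [A for A in subsets if interior(A) == A]

def imp(a, x): return interior((U - a) | x)   # Heyting a => x
def neg(a):    return imp(a, frozenset())     # not-a = a => 0

dense = [X for X in opens if not neg(X)]
delta = frozenset.intersection(*dense)        # least dense element

thm77_2 = all(imp(delta, a) == neg(neg(a)) for a in opens)        # point (2)
thm77_1 = all(imp(d, a) in (a, neg(neg(a)))                       # point (1)
              for d in dense for a in opens)
print(thm77_2, thm77_1)
```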

Therefore, the Lawvere-Tierney operator J δ, with δ the least dense element of a Heyting algebra H, can be re-written as ¬¬. Now we prove the specialisation to J δ of Theorem 65.

Theorem 78

The relation a ¬¬ b iff ¬¬a = ¬¬b is a congruence on H.

Proof

a ∈ [x]¬¬ and b ∈ [y]¬¬ iff ¬¬a = ¬¬x and ¬¬b = ¬¬y, which implies ¬¬a ∧¬¬b = ¬¬x ∧¬¬y. But in view of Theorem 37.(1), ¬¬ preserves meets, so that one obtains ¬¬(a ∧ b) = ¬¬(x ∧ y) and can conclude that a ∧ b ∈ [x ∧ y]¬¬. Regarding disjunction, ¬¬(a ∨ b) = ¬¬(x ∨ y) iff ¬(¬a ∧¬b) = ¬(¬x ∧¬y). But from the hypothesis of congruence, ¬a = ¬x and ¬b = ¬y; therefore, the latter equation is true. A similar proof holds for ⇒: from the hypothesis, ¬¬a⇒¬¬b = ¬¬x⇒¬¬y. Since ¬¬ preserves ⇒, we obtain ¬¬(a⇒b) = ¬¬(x⇒y) and we conclude that a⇒b ∈ [x⇒y]¬¬. From this and the fact that ¬¬0 = 0 (or because ¬¬¬a = ¬a) one obtains the congruence for ¬. □

Clearly the same result can be obtained from Theorem 63 plus Lemma 62.

So far we have seen the properties of Lawvere-Tierney operators on an abstract Heyting algebra, and in particular the operator J δ. Now we discuss the same properties in terms of the Heyting algebra of an Alexandrov topology ΩR(U). In this way we zoom in on the abstract structures, populate them with points and see how the above manipulations act on them.

In this kind of space there is a fundamental example of a Grothendieck topology and its conjugate Lawvere-Tierney operator, which will be key to our construction: the so-called dense topology and its corresponding local operator.

Definition 79

Let τ(U) = 〈U, Ω(U)〉 be a topological space on a set U and X ∈ Ω(U). Then:

$$\displaystyle \begin{aligned} &X\ \text{is called dense in}\ \tau(U)\ \text{if}\ \mathbb C(X)=U. \end{aligned} $$
(54)
$$\displaystyle \begin{aligned} &X\ \text{is called regular in}\ \tau(U)\ \text{if}\ \mathbb{I}\mathbb{C}(X)=X. \end{aligned} $$
(55)

Facts 6.1

The abstract (pointless) notions of Definition 66 and the concrete (with points) ones coincide. Indeed, for any open set X of a topology Ω(U), \(\mathbb C(X)=U\) iff \(-\mathbb C(X)=\emptyset \) iff \(\mathbb I(-X)=\emptyset \) iff ¬X = 0. Moreover, \(\mathbb {I}\mathbb {C}(X)={\mathbb {I}\mathbb {C}}(--X)=\mathbb I(-\mathbb I(-X))=\neg \neg X\).

Definition 80 (Dense Topology)

Given a finite topological space τ(U) on a set U, the dense topology is obtained by taking the least dense element δ and using the Lawvere-Tierney operator J δ on the Heyting algebra Ω(U).

From the very definition (41) of ⇒ the following is straightforward:

Theorem 81

Given a topological space τ(U), for any X  Ω(U), \(J^\delta (X) =\mathbb I(-\delta \cup X)\).

Theorem 82

For any \(X\in \Omega (U), J^\delta (X)=\mathbb {I}\mathbb {C}(X)\).

Proof

From Theorem 77.(2). □

As before, we call G δ the Grothendieck topology induced by the operator J δ. Therefore, given any open set O ∈ ΩR(U), in the ordered site 〈P, G δ〉 a point p is covered by O iff p ∈¬¬O (i.e. \(p\in \mathbb {I}\mathbb {C}(O)\)). Clearly, if p ∈ O then \(p\in \mathbb {I}\mathbb {C}(O)\), but the interesting case is when p ∉ O.

The following result is well-known and it is the translation at the point-level of the above theorems:

Lemma 83

Let 〈P, G δ〉 be the ordered site induced by the Lawvere-Tierney operator J δ. Let α be any Intuitionistic formula. Let us set, for any p ∈ U, p⊧〈l〉(α) iff \([\![ \alpha ]\!] \) covers p in the ordered site. Then p⊧〈l〉(α) iff ∀p′(p ≤ p′⇒∃p″(p′≤ p″∧ p″⊧α)).

So this is the origin of the Grothendieck topology of Example 54, which the reader can use as an example of what we have just said.

In general, an upper adjoint is just multiplicative. When it is also additive we are in a particular situation that will be analysed later, during the discussion of standard rough set systems. For now we have an operator J δ which does not preserve disjunctions. However, the image of J δ, actually of any Lawvere-Tierney operator, can be made into a Heyting algebra. This structure is very important for Rough Set Systems.

6.6 The Boolean Algebra of the Regular Elements of a Heyting Algebra

Given an operator φ on a lattice L, let us denote by \(\mathcal F_\varphi (\mathbf {L})\) the set of its fixed points: \(\mathcal F_\varphi (\mathbf {L}):=\{x:x\in \mathbf {L}\wedge \varphi (x)=x\}\). If φ is idempotent, then \(\mathcal F_\varphi (\mathbf {L})\) is just the image of φ.

So, we have seen that given a Heyting algebra H the image \(\mathcal F_{J^\delta }(\mathbf {H})\) of the operator J δ inherits from H the operations ∧ and ⇒, but not ∨.

However on \(\mathcal F_{J^\delta }(\mathbf {H})\) one can set a disjunction ⊔ and obtain a Boolean algebra. We prove it in a general manner. The starting point is a classical result:

Lemma 84

[Tarski] Let L be a complete lattice and φ a multiplicative and monotone operator on L . Then the set of fixed points of φ, \(\mathcal F_\varphi (\mathbf {L})\) , is a complete lattice.

Proof (See [15])

Let \(a,b\in \mathcal F_\varphi (\mathbf {L})\). Let \(\mathcal F^{a,b}_\varphi :=\{x:\varphi (x)\leq x\;\&\; a,b\leq x\}\) and set \(a\sqcup b:=\bigwedge \mathcal F^{a,b}_\varphi \). Since L is complete, such an inf exists in L. We have to show that it belongs to \(\mathcal F_\varphi (\mathbf {L})\). Since a, b ≤ a ⊔ b, by monotonicity φ(a), φ(b) ≤ φ(a ⊔ b), which means a, b ≤ φ(a ⊔ b). Similarly it is proved that if \(x\in \mathcal F^{a,b}_\varphi \), then \(\varphi (x)\in \mathcal F^{a,b}_\varphi \), too. But by definition, if \(x\in \mathcal F^{a,b}_\varphi \) then a ⊔ b ≤ x, hence φ(a ⊔ b) ≤ φ(x) ≤ x, so that by definition φ(a ⊔ b) ≤ a ⊔ b and one concludes that \(a\sqcup b\in \mathcal F^{a,b}_\varphi \). Hence \(\varphi (a\sqcup b)\in \mathcal F^{a,b}_\varphi \) and from this, a ⊔ b ≤ φ(a ⊔ b). Therefore, φ(a ⊔ b) = a ⊔ b, so \(a\sqcup b\in \mathcal F_\varphi (\mathbf {L})\) and, moreover, for all \(x\in \mathcal F_\varphi (\mathbf {L})\), a, b ≤ x implies a ⊔ b ≤ x. □

Now we show that in the case of the operator J δ, alias ¬¬, the above operation ⊔ is the double negation of the disjunction ∨ of H:

Lemma 85

Let the lattice L of Lemma 84 be a Heyting algebra H and φ be ¬¬. Then for all \(a,b\in \mathcal F_{\neg \neg }(\mathbf {H})\), a ⊔ b = ¬¬(a ∨ b).

Proof

Since p ≤¬¬p the requirement φ(p) ≤ p turns into ¬¬p = p. Therefore, \(\mathcal F^{a,b}_{\neg \neg }=\{x:\neg \neg (x)=x\;\&\; a,b\leq x\}\). Since ¬¬(a ∨ b) is a fixed point, it belongs to \(\mathcal F_{\neg \neg }\) and we have just to show that for all y such that ¬¬y = y and a, b ≤ y, ¬¬(a ∨ b) ≤ y. Now, if a, b ≤ y, then a ∨ b ≤ y. By monotonicity, ¬¬(a ∨ b) ≤¬¬y = y. □
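
On the reconstructed running example (an assumption, as before; b1 stands for b′) the fixed points of ¬¬ are ∅, {a, c}, {b, b′} and U, and the join produced by the Tarski-style construction can be compared with ¬¬(a ∨ b) by enumeration:

```python
from itertools import chain, combinations

# Reconstruction (assumption) of the running example; b1 stands for b'.
U = frozenset({"v", "a", "b", "b1", "c"})
R = {"a": {"a"}, "c": {"a", "c"},
     "b": {"b", "b1"}, "b1": {"b", "b1"}, "v": set(U)}

def interior(Y): return frozenset(x for x in U if R[x] <= set(Y))

subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]
opens = [A for A in subsets if interior(A) == A]

def neg(a):    return interior(U - a)    # pseudo-complement
def negneg(a): return neg(neg(a))

regular = [x for x in opens if negneg(x) == x]

def sqcup(a, b):
    # Tarski-style join: inf of the regular elements above a and b
    m = U
    for x in regular:
        if a <= x and b <= x:
            m = m & x
    return m

same = all(sqcup(a, b) == negneg(a | b) for a in regular for b in regular)
print(sorted(map(sorted, regular)), same)
```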

As a corollary of Lemma 84, it is easy to show that ⇑φ = {x : x ≤ φ(x)} and ⇓φ = {x : φ(x) ≤ x} are complete lattices. In the case of the operator ¬¬, x ≤ ¬¬x for all x ∈H. Therefore ⇑¬¬ = H and \(\Downarrow _{\neg \neg }=\mathcal F_{\neg \neg }(\mathbf {H})\), because x ∈ ⇓¬¬ iff ¬¬x ≤ x ≤ ¬¬x.

From Lemma 84, it can be proved that if H is a Heyting algebra, then for any Lawvere-Tierney operator J on H, \(\mathcal F_J(\mathbf {H})\) is a Heyting algebra, too (remember that J is idempotent, so that for all x, J(x) is a fixed point):

Theorem 86 (See [15])

Given a Heyting algebra H , for any Lawvere-Tierney operator J on H , the set \(\mathcal F_J(\mathbf {H})=\{J(x):x\in \mathbf {H}\}\) forms a Heyting algebra.

Proof

Since J is multiplicative, \(\mathcal F_J(\mathbf {H})\) is closed under ∧, because if a = J(a) and b = J(b), then a ∧ b = J(a) ∧ J(b) = J(a ∧ b). Define on \(\mathcal F_J(\mathbf {H})\) a disjunction ⊔ as above. We have to show the distributive property for elements of \(\mathcal F_J(\bf H)\): p ∧ (a ⊔ b) = (p ∧ a) ⊔ (p ∧ b). Actually, since one direction always holds, we have to show that p ∧ (a ⊔ b) ≤ (p ∧ a) ⊔ (p ∧ b). Trivially, p ∧ a ≤ ((p ∧ a) ⊔ (p ∧ b)) and p ∧ b ≤ ((p ∧ a) ⊔ (p ∧ b)). Hence, by the adjunction relation, a ≤ p⇒((p ∧ a) ⊔ (p ∧ b)) and b ≤ p⇒((p ∧ a) ⊔ (p ∧ b)). Again by applying adjunction, p ∧ (p⇒((p ∧ a) ⊔ (p ∧ b))) ≤ (p ∧ a) ⊔ (p ∧ b). By monotonicity and multiplicativity of J, J(p) ∧ J(p⇒((p ∧ a) ⊔ (p ∧ b))) ≤ J((p ∧ a) ⊔ (p ∧ b)). One more time by adjunction, J(p⇒((p ∧ a) ⊔ (p ∧ b))) ≤ J(p)⇒J((p ∧ a) ⊔ (p ∧ b)) = p⇒((p ∧ a) ⊔ (p ∧ b)). The equation holds because p, a and b are fixed points of J, which is multiplicative. Therefore, from the proof of Tarski’s theorem one obtains a ⊔ b ≤ p⇒((p ∧ a) ⊔ (p ∧ b)) and, finally, adjunction again gives the thesis: p ∧ (a ⊔ b) ≤ (p ∧ a) ⊔ (p ∧ b). □

But \(\mathcal F_{J^\delta }(\mathbf {H})\) is not only a Heyting algebra. In fact, one has \(a\equiv _{J^\delta } b\) iff ¬a = ¬b and, therefore, from Definition 35, \(\mathcal F_{J^\delta }(\mathbf {H})\) is a Boolean algebra:

Theorem 87

\(\mathcal F_{J^\delta }(\mathbf {H})\) with the operations ∧, ⊔, ¬ forms a Boolean algebra.

If X is an element of the Heyting algebra ΩR(U) one has that the families \(J^X_{[x]}\) are congruence classes of \(\equiv _{J^X}\) and if \(A\in J^X_{[x]}\) then \(J^X(A)=\bigcup J^X_{[x]}\). In other terms, J X(A) is the top element of the \(\equiv _{J^X}\) congruence class of A.

Example 88

Consider the preorder of Example 13 and the corresponding Heyting algebra (aka Alexandrov topology) of Example 40. The elements {a, b, b′}, {a, b, b′, c} and U are dense and {a, b, b′} is the least dense element δ of the algebra.

The arrows represent the action of the operator J δ. The elements connected by arrows form a congruence class of \(\equiv _{J^\delta }\).

$$\displaystyle \begin{aligned} J^\delta(\{a\})\sqcup J^\delta(\{b,b'\}) &{=}J^\delta(J^\delta(\{a\})\cup J^\delta(\{b,b'\}))\\ &{=}J^\delta(\{a,c\}\cup\{b,b'\})=J^\delta(\{a,b,b',c\})\\ &{=}U{=}J^\delta(\{a,b,b'\}){=}J^\delta(\{a\}\cup\{b,b'\}). \end{aligned} $$

Let us see some instances of the operations of the Heyting algebra ΩR(U): {a, b, b′}⇒{a} = {a, c} and ¬{a} = {b, b′}, so that ¬¬{a} = {a, c}. \({\mathcal F}_{J^\delta }({\Omega _R}(U))\) is a Boolean algebra in which {a, c}⊔{b, b′} = J δ({a, c}∪{b, b′}) = ¬¬{a, b, b′, c} = U.
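These particular computations can be verified in code. Below is a minimal Python sketch under our own naming, assuming the running example and δ = {a, b, b′} (b1 stands for b′); it also checks that, here, J δ coincides with ¬¬.

```python
from itertools import chain, combinations

U = frozenset({'v', 'a', 'b', 'b1', 'c'})                 # b1 stands for b'
R = {'v': set(U), 'a': {'a'}, 'b': {'b', 'b1'}, 'b1': {'b', 'b1'}, 'c': {'a', 'c'}}
OPENS = [frozenset(s)
         for s in chain.from_iterable(combinations(sorted(U), r) for r in range(6))
         if all(R[x] <= set(s) for x in s)]

def imp(A, B):
    # Heyting implication: largest open O with O ∩ A ⊆ B
    target = (U - A) | B
    return frozenset().union(*(O for O in OPENS if O <= target))

neg = lambda A: imp(A, frozenset())
delta = frozenset({'a', 'b', 'b1'})                       # the least dense element
Jd = lambda O: imp(delta, O)                              # J^δ(O) = δ ⇒ O

assert imp(delta, frozenset({'a'})) == frozenset({'a', 'c'})   # {a,b,b'} ⇒ {a}
assert neg(frozenset({'a'})) == frozenset({'b', 'b1'})         # ¬{a} = {b,b'}
assert all(Jd(O) == neg(neg(O)) for O in OPENS)                # J^δ = ¬¬ here
assert Jd(frozenset({'a', 'c'}) | frozenset({'b', 'b1'})) == U # {a,c} ⊔ {b,b'} = U
```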

Notice, also, that {a, c} and {b, b′} are regular elements. However, they are not complemented in ΩR(U), because their union is {a, b, b′, c}, not the top element U. On the contrary, U and ∅ are both regular and complemented. If we drop v from P, then in ΩR(U) the aforementioned elements are complemented. Furthermore, in this case all the regular elements are complemented. This, too, is a particular situation that will be discussed during the analysis of standard rough set systems.

We have seen that if \([\![ \alpha ]\!] =\{a\}\), then c⊧〈l〉(α), although cα. Obviously, a⊧〈l〉(α). On the contrary, v⊭〈l〉(α) because \(R(v)\cap \{a\}=\{a\}\notin J^\delta _{[v]}\). In fact, for instance, b ∈ R(v) but \([\![ \alpha ]\!] \not \subseteq R(b)\). Notice that v⊧〈l〉(β) iff \(\delta \subseteq [\![ \beta ]\!] \).

Finally, from the very definition of \(J^\delta _{[c]}\) one can trivially verify that \([\![ \alpha ]\!] \) is locally valid at c in the intuitive sense discussed at the beginning of the section: \(R(c)\cap [\![ \alpha ]\!] =\{a,c\}\cap \{a\}=\{a\}\) and \(\{a\}\in J^\delta _{[c]}\). In Example 54 we have proved it by computing J δ({a}).

Definition 89

Let H be a Heyting algebra and ≡ a congruence on it. If H∕≡ is a Boolean algebra, then ≡ is called a Boolean congruence.

If A ⊆ δ, then \(\equiv _{J^A}\) is a Boolean congruence. However, if \(A\subsetneq \delta \), then a paradoxical situation is obtained. In fact, if \([\![ \alpha ]\!] =A\), then it is not dense, so that there exists an x such that x⊧¬α. Hence \(x\in \neg [\![ \alpha ]\!] =[\![ \alpha ]\!] \Longrightarrow \emptyset =J^A(\emptyset )\). In consequence, \(\emptyset \in J^A_{[x]}\). But since x⊧¬α, \(R(x)\subseteq [\![ \neg \alpha ]\!] \), so that \(R(x)\cap [\![ \alpha ]\!] =\emptyset \), which is a member of \(J^{A}_{[x]}\), and one concludes that x⊧〈l〉(α). That is, ¬α is valid at x and nonetheless α is locally valid at x itself, as well.

Example 90

Consider the operator \(J^{\{b,b'\}}\).

Therefore, if \([\![ \alpha ]\!] =\{b,b'\}\) then a⊧〈l〉(α) and c⊧〈l〉(α) because \(R(a)\cap [\![ \alpha ]\!] =\emptyset \) and \(\emptyset \in J^{\{b,b'\}}_{[a]}\), and the same holds for c. Thus, we have the paradoxical situation that a⊧¬α, c⊧¬α and both a and c force 〈l〉(α).

In view of the above discussion, if a is a non-dense element of a Heyting algebra, that is, ¬a ≠ 0, then we call J a a paradoxical Lawvere-Tierney operator and its related Grothendieck topology a paradoxical Grothendieck topology.
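The paradox is easy to reproduce mechanically. In the Python sketch below (our names; in accordance with the examples above, we read \(J^A_{[x]}\) as the family of opens O with x ∈ J A(O)), A = {b, b′} is not dense, and both a and c force ¬α while being covered by ∅.

```python
from itertools import chain, combinations

U = frozenset({'v', 'a', 'b', 'b1', 'c'})                 # b1 stands for b'
R = {'v': set(U), 'a': {'a'}, 'b': {'b', 'b1'}, 'b1': {'b', 'b1'}, 'c': {'a', 'c'}}
OPENS = [frozenset(s)
         for s in chain.from_iterable(combinations(sorted(U), r) for r in range(6))
         if all(R[x] <= set(s) for x in s)]

def imp(A, B):
    # Heyting implication: largest open O with O ∩ A ⊆ B
    target = (U - A) | B
    return frozenset().union(*(O for O in OPENS if O <= target))

A = frozenset({'b', 'b1'})            # [[α]] = {b, b'}: a non-dense open
JA = lambda O: imp(A, O)              # J^A(O) = A ⇒ O
cover = lambda x: [O for O in OPENS if x in JA(O)]        # our reading of J^A_[x]

assert imp(A, frozenset()) == frozenset({'a', 'c'})       # ¬A = {a, c} ≠ ∅
for x in ('a', 'c'):
    assert not (R[x] & A)             # x ⊧ ¬α ...
    assert frozenset() in cover(x)    # ... yet ∅ ∈ J^A_[x], so x ⊧ ⟨l⟩(α)
```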

6.7 Grothendieck Topologies and Rough Set Systems

Now we see how all the above machinery applies to Rough Set Systems.

Let us start with standard rough sets. Therefore, we are given a set U and an equivalence relation E ⊆ U × U. We know that we have to use the set S of all singletons as the parameter for the Lawvere-Tierney operator J X. What happens to the conjugate Grothendieck topology G S?

The approximation space ΩE(U) is a Heyting algebra which, in particular, is a Boolean algebra. In this algebra the only dense element is the top element U. If the set of singletons in ΩE(U) coincides with U, then for any O ∈ ΩE(U), J U(O) = O, so that G U = {{O} : O ∈ ΩE(U)}. Since U is the least (actually, the only) dense element of the algebra, we know that \(\mathcal F_{J^U}(\Omega _E(U))\) is a Boolean algebra, but in this case the disjunction is ∪ itself. In fact, \(\mathcal F_{J^U}(\Omega _E(U))\) equals ΩE(U) for the very reason that for any O, J U(O) = O, and from the previous results we know that J U(O) = ¬¬O because U is the least dense element of ΩE(U). In turn, ΩE(U) equals the Boolean algebra B(U) = 〈℘(U), ∩, ∪, −, ∅, U〉 because for any x ∈ U, E(x) = {x}. This is consistent with the fact that in the Boolean algebra B(U), ¬¬O = O for any O, because in this algebra the negation ¬ is the set-theoretic complement.
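A quick check of this degenerate case, in Python, assuming the identity equivalence on the four-point universe of Example 91 (b1 stands for b′; all helper names are ours):

```python
from itertools import chain, combinations

U = frozenset({'a', 'b', 'b1', 'c'})
# identity equivalence: every class is a singleton, so Ω_E(U) = ℘(U)
OPENS = [frozenset(s)
         for s in chain.from_iterable(combinations(sorted(U), r) for r in range(5))]

def imp(A, B):
    # Heyting implication in ℘(U): reduces to −A ∪ B
    target = (U - A) | B
    return frozenset().union(*(O for O in OPENS if O <= target))

neg = lambda A: imp(A, frozenset())

assert len(OPENS) == 16                                   # the whole powerset
assert all(imp(U, O) == O for O in OPENS)                 # J^U is the identity
assert all(neg(neg(O)) == O for O in OPENS)               # ¬¬O = O: Boolean
assert all(neg(O) == U - O for O in OPENS)                # ¬ is the complement
```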

Example 91

Let U = {a, b, b′, c} and E = {〈x, x〉 : x ∈ U}. Then:

Things drastically change if the set S of singletons in ΩR(U) is strictly smaller than U. Usually, in the rough set community it is assumed that there are no singleton classes in U∕E. While in many real-world applications this assumption may be acceptable, in a broader framework it is questionable because the real world is not made of just incomplete information but of a mélange of complete and incomplete information. In the complete part Classical Logic is locally valid, while the global logic can be assumed to be three-valued.

Now we frame this issue in the case of Rough Set Systems induced by preorders, because it has a particular logical importance, it offers a broader point of view, and because the issue of singleton granules cannot be avoided if the items we are dealing with are connected by means of a preorder or, even worse, a partial order P. In fact, if in P there is at least one finite maximal chain, the greatest point x of this chain is an isolated point because R(x) = {x}.

In preorders we can have both maximal and pre-maximal points, that is, points x such that if x ≤ y then y ≤ x. If x is a pre-maximal point, then the cardinality of R(x) may be greater than 1. For instance, in our example b and b′ are pre-maximal and R(b) = R(b′) = {b, b′}, and both b and b′ can be in the boundary of some set. For instance, \(b,b'\in \mathbb B(\{a,b\})\).

From the point of view of an information order, however, the larger a piece of information, the smaller the set of items it filters. This is the so-called “Loi de Port Royal” (Law of Port Royal: “L’étendue varie en proportion inverse de la compréhension”, that is, extension varies in inverse proportion to intension). Thus, extension and intension are contravariant. If we intend “strictly contravariant”, that is, if we are interested not in items but in information increases, items which are not separable by means of granules play exactly the same role with respect to our knowledge. If this is the case, one would rather use topological spaces where points are not redundant with respect to the available properties, a.k.a. open sets. These spaces, as we have already mentioned, are called sober or, which is the same in the finite case, T 0 topological spaces:

Definition 92 (T 0-Spaces)

A topological space τ(U) is said to be T 0 if for each two points x, y ∈ U with x ≠ y there exists an open set O such that x ∈ O and y ∉ O, or y ∈ O and x ∉ O.

Therefore, a space is T 0 if any two different points are separable by means of an open set. In our interpretation: any two different items are separable by means of some property. That is, there is at least a property which is enjoyed by one item but not by the other.

T 0 spaces are obtained from any space τ(U) with specialisation preorder ≼, by taking the quotient set U∕ ≡, where x ≡ y if and only if x ≼ y and y ≼ x, and setting for all O ∈ Ω(U), \(\varphi (O)=\{[x]_{\equiv _\preceq }:x\in O\}\). The topology Ω(U∕ ≡) = {φ(O) : O ∈ Ω(U)} is called the T 0 -ification of Ω(U) and φ is an isomorphism between it and Ω(U). Notice that φ is the extension to ℘(U) of the natural map \(\mathfrak {q}\) of Lemma 58: \(\varphi (O)=\{\mathfrak q(x):x\in O\}\).
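The quotient construction can be sketched as follows (Python, our names) on the running preorder, where b ≡ b′ is the only non-trivial identification:

```python
U = {'v', 'a', 'b', 'b1', 'c'}        # b1 stands for b'
up = {'v': set(U), 'a': {'a'}, 'b': {'b', 'b1'}, 'b1': {'b', 'b1'}, 'c': {'a', 'c'}}

# x ≡ y iff x ≼ y and y ≼ x  (y ∈ up[x] means x ≼ y)
cls = lambda x: frozenset(y for y in U if y in up[x] and x in up[y])

# the natural map q sends a point to its class; φ extends it to opens
phi = lambda O: frozenset(cls(x) for x in O)

classes = {cls(x) for x in U}
assert len(classes) == 4                       # v, a, c stay apart; b ≡ b'
assert cls('b') == cls('b1') == frozenset({'b', 'b1'})
assert phi({'b', 'b1'}) == frozenset({frozenset({'b', 'b1'})})
```

As the last assertion shows, the open {b, b′} collapses to a single point of the quotient, which is why the two topologies are isomorphic but not homeomorphic.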

If P = 〈U, ≼〉 is a preorder (in particular the specialisation preorder of a topological space τ(U)), then on U∕ ≡ we can define the relation \([x]_{\equiv _\preceq }\sqsubseteq [y]_{\equiv _\preceq }\iff x\preceq y\). It is immediate to verify that ⊑ is a partial order. Indeed, reflexivity and transitivity are inherited from ≼ and if \([x]_{\equiv _\preceq }\sqsubseteq [y]_{\equiv _\preceq }\) and \([y]_{\equiv _\preceq }\sqsubseteq [x]_{\equiv _\preceq }\) then x ≼ y and y ≼ x so that x ≡ y and, in conclusion, \([x]_{\equiv _\preceq }=[y]_{\equiv _\preceq }\).

Let τ(U) be an Alexandrov space with specialisation preorder ≼. Let P∕ ≡ := 〈U∕ ≡, ⊑〉. Then Ω(P∕ ≡) =  Ω(U∕ ≡) and it is isomorphic to Ω(U). The isomorphism from Ω(U) to Ω(U∕ ≡) is φ. However, the two isomorphic topologies are not homeomorphic, because if x ≠ y but x ≡ y, then \(\mathfrak {q}(x)=\mathfrak {q}(y)\), so that an open set φ(O) of Ω(U∕ ≡) can have fewer points than O.

As it is clear, one can choose a representative of each class [x] and obtain a partial order P′ on the new set U′⊆ U. It is not difficult to verify through \(\mathfrak {q}\) that Ω(P′) and Ω(P∕ ≡) are homeomorphic.

Example 93

Consider again our standard point-level example on U = {v, a, b, b′, c}. The specialisation preorder ≼ is R (in fact ΩR(U) is an Alexandrov topology). Therefore, the T 0-ification of Ω(U) is:

It is clear that duality produces a T 0-ification. Anyway, if we are interested in the granules, then T 0-ification is not an appropriate move. In what follows we show what happens algebraically when the space is not T 0-ified and when it is. We shall see that the algebraic structure of the Rough Set Systems induced in the first case is significantly transformed, as are the logical properties of the systems (that is, the logic they model).

Moreover, we also show the algebraic and logical difference between the case in which H is a generic Heyting algebra and the case in which H is a Boolean algebra. The latter case is the classical one.

First, we present the topic from the point-level perspective of Rough Set Theory. Then we develop it at the abstract level. After that, we zoom-in again to achieve the intermediate, or hybrid, level of algebras of concrete open sets.

7 Algebras of Rough Set Systems

Given a granulation ΩR(U) induced by a preorder P = 〈U, R〉, the issue concerning singletons is how to filter the set Dsj(UR) of all the ordered pairs of disjoint elements of ΩR(U) in order to have just elements actually representing the pairs 〈(lR)(X), −(uR)(X)〉 for some X ⊆ U.

As it is already clear, the solution is obtained by applying the Lawvere-Tierney operator J S, where S is the set of all isolated points of ΩR(U) intended as a topology. That is, S = {x : R(x) = {x}}.

7.1 Defining the Set of Rough Sets

Remember that ΩR(U) is a Heyting algebra and S is an element of this algebra. So for any element O of the algebra (any open set) we can define the operator J S(O) = S⇒O.

What is the mechanism that makes J S act as a filter which is able to discern true from apparent rough sets? The answer, as we are going to see, is that J S forces any element of S to be in the first or in the second element of a rough set when it is represented by an ordered pair of disjoint elements 〈(lR)(X), −(uR)(X)〉. But what is the mathematical and logical significance of J S?

Let us consider again our example:

Example 94

Consider the preorder P of Example 34 and the Heyting algebra or topology or approximation space ΩR(U). The set of all singletons is S = {a}. The action of the operator J S on ΩR(U) is the following:

This topology gives us some information. For instance, in no ordered pair can the open set {b, b′} have ∅ as a partner, even if ¬S = {b, b′}. In fact, the presence of ∅ in \(J^S_{[b]}\) and \(J^S_{[b']}\) tells us that a partner must be found between S and ¬¬S, in our case {a} and {a, c}. Actually, the only elements which can have ∅ as a partner in a disjoint pair are the members of S. Therefore, the following filtration is applied by J S on Dsj(UR):
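The filtration can be enumerated exhaustively. In the Python sketch below (our names), among all disjoint pairs of opens of the running example, only those whose union includes S = {a} survive:

```python
from itertools import chain, combinations

U = frozenset({'v', 'a', 'b', 'b1', 'c'})                 # b1 stands for b'
R = {'v': set(U), 'a': {'a'}, 'b': {'b', 'b1'}, 'b1': {'b', 'b1'}, 'c': {'a', 'c'}}
OPENS = [frozenset(s)
         for s in chain.from_iterable(combinations(sorted(U), r) for r in range(6))
         if all(R[x] <= set(s) for x in s)]

S = frozenset({'a'})                                      # the singleton granule
pairs = [(X1, X2) for X1 in OPENS for X2 in OPENS if not X1 & X2]
kept = [(X1, X2) for X1, X2 in pairs if S <= X1 | X2]

# {b, b'} can no longer have ∅ as a partner: a must sit on one side
assert (frozenset({'b', 'b1'}), frozenset()) in pairs
assert (frozenset({'b', 'b1'}), frozenset()) not in kept
assert (frozenset({'b', 'b1'}), frozenset({'a'})) in kept
```

On this example 17 disjoint pairs shrink to 14 filtered ones.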

Remarks 7.1

The Grothendieck topology above is paradoxical, like the one of Example 90. In a sense, the very filtration rules out the paradoxical situations.

Now we transform Dsj(UR) into Dsj( ΩR(U)) step by step, explaining the inner mathematical and logical mechanisms of the transformation. At the end of the process we will find that Dsj( ΩR(U)) is a particular structure called Nelson algebra that we define at the pointless level:

Definition 95 (Nelson Algebras)

A lattice N = 〈A, ∧, ∨, ∼, →, 0, 1〉 is a Nelson algebra if:

  1. 1.

    〈A, ∧, ∨, ∼, 0, 1〉 is a de Morgan lattice, that is, a distributive lattice such that for all a, b, ∼∼ a = a and ∼ (a ∨ b) =∼ a∧∼ b. Therefore, ∼ (a ∧ b) =∼ a∨∼ b and a ≤ b iff ∼ b ≤∼ a.

  2. 2.

    a∧∼ a ≤ b∨∼ b, so that it is also a Kleene algebra .

  3. 3.

    The operation → fulfils the following adjunction property:

    $$\displaystyle \begin{aligned} {a\wedge c\leq\sim a\vee b\iff c\leq a\longrightarrow b} \end{aligned} $$
    (56)

Nelson algebras are a bit tricky for one who is familiar only with Classical or Intuitionistic logic, that is, with Boolean or Heyting algebras. Indeed, we have that ∼∼ a = a but, nonetheless, ∼ a ∨ a may fail to be 1. This shows that ∼ is not a pseudo-complementation, otherwise the lattice would be a Boolean algebra. In turn, → is not a relative pseudo-complementation. Indeed, the adjunction property fulfilled by → is (56), and we have that ∼ a ≤ a→0. So we can define two other negations and an additional implication. These operations will play a key role in the connection between Nelson algebras and rough set systems.

$$\displaystyle \begin{aligned} &\lrcorner a:=a\longrightarrow 0 \end{aligned} $$
(57)
$$\displaystyle \begin{aligned}&a\supset b:=\sim\lrcorner\sim a\vee b\vee(\lrcorner a\wedge\lrcorner \sim b) \end{aligned} $$
(58)
$$\displaystyle \begin{aligned} &\neg a:=a\supset 0=\sim\lrcorner\sim a \end{aligned} $$
(59)

The negation ∼ is called strong negation because one has \(\sim a\leq \lrcorner a\), and \(\lrcorner \) is intended to be an intuitionistic negation (it is the implication of the 0-element). We shall see that \(\lrcorner \) is far from replicating a pseudo-complementation, because \(a\wedge \lrcorner a\) is not 0 and \(\lrcorner \lrcorner \) is not able to grasp classical tautologies. In a particular case, which we shall discuss below, \(\lrcorner \) is a dual pseudo-complementation (this is the reason for the symbol), while ¬ turns into a real pseudo-complementation. Anyway, all finite Nelson lattices are also Heyting algebras (but there are infinite Nelson lattices which are not Heyting algebras). What we will show is that if a Nelson algebra is semi-simple, then the relative pseudo-complementation is definable by means of the very operations of the Nelson algebra itself.

Definition 96

A Nelson algebra is called semi-simple if \(a\vee \lrcorner a=1\), for any a.

The link between this abstract structure and the concrete structure Dsj( ΩR(U)) is given by the duality theory of Nelson algebras. Therefore the first step is the construction of the dual space of a Nelson algebra from an abstract point of view.

So, let N be a Nelson algebra. We recall that we assume that N is finite and that our meta-theory is Classical Logic. Define on J(N) the following endomorphism:

$$\displaystyle \begin{aligned} f(x)=min_{\leq_N}(J(\mathbf{N})\cap -\{\sim b:b\in\uparrow_{\leq_N} x\}) \end{aligned} $$
(60)

where \(min_{\leq _N}\) and \(\uparrow _{\leq _N}\) refer to the lattice order ≤N of N, which is a partial order. The restriction to J(N) of this order will be denoted by ≤.

It can be proved that f is a linear involutive anti-order isomorphism on J(N) = 〈J(N), ≤〉, that is: x ≤ y implies f(y) ≤ f(x); x ≤ f(x) or f(x) ≤ x; and f(f(x)) = x. Moreover, the following interpolation property holds:

$$\displaystyle \begin{aligned} &\text{if } a\geq f(a),b\geq f(a),a\geq f(b),b\geq f(b)\\ &\text{ then } \exists c\in J(\mathbf{N})\text{ such that } c\leq a,c\leq b,f(a)\leq c, f(b)\leq c. \end{aligned} $$

That is, there is an intermediate element which prevents f and the order of J(N) from crossing. If N were merely a Kleene algebra, then the interpolation property could fail. The space \(\mathcal N(\mathbf {J}(\mathbf {N})):=\langle {J(\mathbf {N}),\leq ,f\rangle }\) is called a Nelson space.

Actually, a Nelson space is any preorder \(\mathcal N(U)=\langle U,\leq ,f\rangle \) where f is an endomorphism with the properties above. A Nelson algebra is restored from a Nelson space \(\mathcal N(U)\) by defining the following operations on Ω(U):

  • 1 := U, 0 := ∅

  • A ∨ B := A ∪ B, A ∧ B := A ∩ B

  • \(A\longrightarrow B=-\mathbb C_\leq (A\cap f(A)\cap -B)\)

  • ∼ A := U ∩−f(A)

One has that \(\mathbf {N}(\mathcal N(U)):=\langle \Omega _\leq (U),\vee ,\wedge ,\longrightarrow ,\sim , 0,1\rangle \) is a Nelson algebra. In particular \(\mathbf {N}(\mathcal N(\mathbf {J}(\bf N)))\) is isomorphic to N.

In the process, we see also that \(\lrcorner A:=-\mathbb C_\leq (A\cap f(A))\)—i.e. A→0.

All this is very useful; but if we start from an approximation space ΩR(U), how do we define the involution f? What is the relation between f and the granulation provided by ΩR(U)?

In order to answer, let us observe, in the first place, that a Nelson space can be split into two parts linked by the involution f.

Notation In order to avoid confusion later, the universe of a Nelson space will be denoted by U ∗.

Definition 97

Given a Nelson space \(\mathcal N(U^*)=\langle U^*,\leq ,f\rangle \), define

$$\displaystyle \begin{aligned} U^+:=\{x\in U^*:x \leq f(x)\}, \text{ with an order }\leq^+ \text{ inherited from } \mathcal N(U^*)\\ U^-:=\{x\in U^*:f(x)\leq x\}, \text{ with an order }\leq^- \text{ inherited from } \mathcal N(U^*) \end{aligned} $$

Let \(k_f:\Omega _\leq (U^*)\longrightarrow \Omega _{\leq ^+}(U^+) \times \Omega _{\leq ^+}(U^+)\) be defined by \(k_f(X):=\langle U^+\cap X, U^+\cap -f(X)\rangle \). Define the following operations, where inside the ordered pairs the operations are those of the Heyting algebra \(\Omega _{\leq ^+}(U^+)\) (i.e. ∧ = ∩, ∨ = ∪ and so on):

  • 1 := 〈U +, ∅〉, 0 := 〈∅, U +〉

  • 〈X 1, X 2〉∨〈Y 1, Y 2〉 := 〈X 1 ∨ Y 1, X 2 ∧ Y 2〉

  • 〈X 1, X 2〉∧〈Y 1, Y 2〉 := 〈X 1 ∧ Y 1, X 2 ∨ Y 2〉

  • ∼〈X 1, X 2〉 := 〈X 2, X 1〉

  • 〈X 1, X 2〉→〈Y 1, Y 2〉 := 〈X 1⇒Y 1, X 1 ∧ Y 2〉

  • \(\lrcorner \langle X_1,X_2\rangle :=\langle \neg X_1,X_1\rangle \)

  • ¬〈X 1, X 2〉 := 〈X 2, ¬X 2〉

It is possible to show that \({\mathbf {N}}_{k_f}(\mathcal N(U^*)):=\langle k_f(\Omega _\leq (U^*)),\wedge ,\vee ,\sim ,\longrightarrow ,\lrcorner ,0,1\rangle \) is a Nelson algebra. Just notice that any X ∈ Ω(U ∗) is an up-set (i.e. a ≤-filter). Therefore, if x ∈ X ∩ U + then for each y ∈ U +, if x ≤ y then y ∈ X and, thus, y ∈ X ∩ U +. That is, X ∩ U + is an up-set in U + (a ≤+-filter), hence it belongs to \(\Omega _{\leq ^+}(U^+)\). Similarly, since X is an up-set in U ∗, f(X) is a down-set in U ∗ (a ≤-ideal) because f is order reversing. In consequence, − f(X) is an up-set in U ∗ and, again, − f(X) ∩ U + is a ≤+-filter.
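The pair operations lend themselves to a direct implementation. The following Python sketch (our names) builds the disjoint pairs of opens of the running example filtered by S = {a} and checks the Kleene condition and the adjunction (56) for the operations just defined:

```python
from itertools import chain, combinations

U = frozenset({'v', 'a', 'b', 'b1', 'c'})                 # b1 stands for b'
R = {'v': set(U), 'a': {'a'}, 'b': {'b', 'b1'}, 'b1': {'b', 'b1'}, 'c': {'a', 'c'}}
OPENS = [frozenset(s)
         for s in chain.from_iterable(combinations(sorted(U), r) for r in range(6))
         if all(R[x] <= set(s) for x in s)]

def imp(A, B):
    # Heyting implication: largest open O with O ∩ A ⊆ B
    target = (U - A) | B
    return frozenset().union(*(O for O in OPENS if O <= target))

S = frozenset({'a'})
N = [(X1, X2) for X1 in OPENS for X2 in OPENS if not X1 & X2 and S <= X1 | X2]

leq  = lambda p, q: p[0] <= q[0] and q[1] <= p[1]         # the order of the pairs
meet = lambda p, q: (p[0] & q[0], p[1] | q[1])
join = lambda p, q: (p[0] | q[0], p[1] & q[1])
snot = lambda p: (p[1], p[0])                             # strong negation ∼
arr  = lambda p, q: (imp(p[0], q[0]), p[0] & q[1])        # the implication →

# Kleene: a ∧ ∼a ≤ b ∨ ∼b, and the adjunction (56)
assert all(leq(meet(a, snot(a)), join(b, snot(b))) for a in N for b in N)
assert all(leq(meet(a, c), join(snot(a), b)) == leq(c, arr(a, b))
           for a in N for b in N for c in N)
```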

Example 98

Although we shall prove only later that Dsj(UR) is a Nelson algebra, in order to follow the construction of the dual space let us assume it is a Nelson algebra and consider a pointless version of it.

Notice that the strong negation of x is symmetric to x. J(N) = {a, b, c, d, g, l, r, 1}. Let us calculate the involution f:\(f(a)=min_{\leq _N} -\{\sim x:x\in \uparrow _{\leq _N}a\}=min_{\leq _N} -\downarrow _{\leq _N}(\sim a)=min_{\leq _N} -\downarrow _{\leq _N}q=min_{\leq _N}\{1\}=1\); \(f(b)=min_{\leq _N} -\downarrow _{\leq _N}(\sim b)=min_{\leq _N} -\downarrow _{\leq _N}p=min_{\leq _N}\{l,o,q,1\}=l\). The reader now has the mechanism. Then:\(f(d)=min_{\leq _N} -\downarrow _{\leq _N} (\sim d)=min_{\leq _N} -\downarrow _{\leq _N}n=min_{\leq _N}\uparrow _{\leq _N}g=g\). \(f(g)=min_{\leq _N} -\downarrow _{\leq _N}i=min_{\leq _N}\uparrow _{\leq _N} d=d\); \(f(l)=min_{\leq _N} \uparrow _{\leq _N}b=b\). f(c) = r and f(r) = c.

The circled elements form the subset U + while the boxed ones form U −.

It is easy to verify, for instance, that given the up-set {c, r, 1, l}, f({c, r, 1, l}) = {c, r, a, b} which is a down-set. Therefore, − f({c, r, 1, l}) = −{c, r, a, b} = {d, g, l, 1} is an up-set.

Now we know what our goal is. So the first step is to transform our approximation space (Heyting algebra, Alexandrov topology) ΩR(U) into the space previously denoted by \(\Omega _{\leq ^+}(U^+)\). Then we have to define from it the duplicate space \(\Omega _{\leq ^-}(U^-)\), an order ≤ glueing the two spaces into a preorder on U ∗ = U + ∪ U −, and, finally, an involution f on U ∗ fulfilling the properties discussed above with respect to ≤. The resulting space \(\mathcal N(U^*)\) is a Nelson space. After that, we are finally in a position to transform ΩR(U) into the Nelson algebra \({\mathbf {N}}_{k_f}(\mathcal N(U^*))\) and prove that its domain (or carrier) is Dsj( ΩR(U)). Since from now to the end of the section the universe U of our approximation space ΩR(U) plays the role of U +, let us use the latter symbol (with our usual R turning into ≤+). Thus, from now on the approximation space ΩR(U) is called \(\Omega _{\leq ^+}(U^+)\).

Particular attention is due to the involution f, because it is strictly connected with the singleton issue. Indeed, look at the role of f in the definition of \({\mathbf {N}}_{k_f}(\Omega _\leq (U^*))\). It actually decides which elements not belonging to the open set X 1 = U + ∩ X are allowed to belong to X 2. In fact, suppose that X ∈ Ω(U ∗) and \(U^+\cap X=\mathbb I_{\leq ^+}(A)\) for some subset A of U + (aka U). We know that \(\mathbb I_{\leq ^+}(A)\) is an up-set in U + and \(\mathbb C_{\leq ^+}(A)\) a down-set in U +, therefore a down-set in U ∗. Vice-versa, if X is a down-set in U ∗, it is a down-set also in U +. But we have seen that f(X) is a down-set in U ∗. Indeed, f(X) ∩ U + “is” the closure \(\mathbb C_{\leq ^+}(A)\). In consequence, − f(X) ∩ U + is the complement of the closure of A, that is, \(-\mathbb C_{\leq ^+}(A)\) (to see that, we need only to prove − (f(X) ∩ U +) = −f(X) ∩ U +).

Now, how can f rule out from − f(X) the elements c of A which are isolated in \(\Omega _{\leq ^+}(U^+)\) and are not already in X? Or, which is the same, how can it put c in f(X)? How does f work?

We need a formal definition of a notion we have already seen:

Definition 99

Let P = 〈P, ≤〉 be a preorder. An element x ∈ P is said to be pre-maximal if ∀y ∈ P(x ≤ y ⇒ y ≤ x). It is called maximal if ∀y ∈ P(x ≤ y ⇒ y = x).

Clearly, if ≤ is a partial order then the two notions coincide. Notice that in the case P is the dual space J(A) of a Nelson algebra or a Heyting algebra A, the order of J(A), x ≤ y ⇔ ↑(y) ⊆ ↑(x), is a partial order because it is induced by the subset relation ⊆, which is an extensional relation, so that if X ⊆ Y and Y ⊆ X then X = Y .

Now, c is isolated in 〈U +, ≤+〉 if \({\uparrow _{\leq ^+}c=\{c\}}\). And this happens if and only if c is a maximal element in ≤+.

We have to pay attention to a fact that may have escaped the reader: U + and U − must have the same cardinality (otherwise f is not an anti-isomorphism); however, it is not required that they be disjoint. In Example 98 they are disjoint, but their definitions do not imply disjointness. So, let us investigate their possibly non-empty intersection.

Theorem 100

Let \(\mathcal N(U^*)=\langle U^*,\leq ,f\rangle \) be a Nelson space. Let B = U + ∩ U −. Then:

  1. 1.

    c ∈ B if and only if f(c) ∈ B.

  2. 2.

    If ≤ is a partial order, then c ∈ B if and only if f(c) = c.

  3. 3.

    If c ∈ B then c is maximal or pre-maximal in 〈U +, ≤+〉 (maximal if ≤+ is a partial order).

Proof

(1) If c ∈ B then c ∈ U + and c ∈ U −, so that f(c) ≤ c ≤ f(c). From this it is immediate to see that f(c) ∈ B, too: c ≤ f(c) gives f(f(c)) ≤ f(c) and f(c) ≤ c gives f(c) ≤ f(f(c)). The reverse is obvious. (2) comes trivially from (1). (3) Suppose now that for x ∈ U +, c ≤ x, so that f(x) ≤ f(c). Since x ∈ U +, x ≤ f(x) and we immediately obtain the chain x ≤ f(x) ≤ f(c) ≤ c ≤ x, so that x ≤ c. Hence c is pre-maximal or maximal (therefore, maximal if ≤ is a partial order). □

Remarks 7.2

The reverse implication of Theorem 100.(3) does not hold: if c is pre-maximal (maximal), then not necessarily c ∈ U + ∩ U −. This is important, because it means that it depends on the application whether a maximal element c is to be set to f(c) = c, which, indeed, is the case of rough set systems, as we are going to see.

We have enough material to proceed with our construction. Given an approximation space \(\Omega _{\leq ^+}(U^+)\), take a copy \(\Omega _{\leq ^-}(U^-)\), where ≤− is the reversal of ≤+ (i.e. x ≤− y iff y ≤+ x). If ≤+ is a preorder (partial order), then ≤− is a preorder (partial order), too. The elements of U − will be decorated with the superscript “−”, while the corresponding elements in U + will be decorated with “+”. As we have just seen, it is possible for some element to have both decorations (if it belongs to U + ∩ U −).

Define now a relation φ ⊆ U + × U − as follows: φ(x +) = {x −}. Therefore, φ −1(x −) = {x +} and, for x +−, φ(x +−) = φ −1(x +−) = {x +−}. Clearly, φ is an order anti-isomorphism between U + and U −. The new relation φ enables the definition of the final order ≤ on U ∗ = U + ∪ U −. The intermediate step is the definition of an order connecting U + and U −. It is defined passing through φ in the composition ≤+ ⊗ φ ⊗ ≤−. It is then possible to prove that ≤ := ≤+ ∪ ≤− ∪ (≤+ ⊗ φ ⊗ ≤−) is the required order on U ∗ = U + ∪ U −: it preserves both ≤+ and ≤− and glues them together in a minimal way. Finally, we define the involution f on U ∗:

$$\displaystyle \begin{aligned} f(x) = \begin{cases} x^- & \text{if {$x=x^+$}} \\ x & \text{if {$x=x^{+-}$}}\\ x^+ & \text{if {$x=x^-$}} \end{cases} \end{aligned} $$
(61)

It is a little bit long, but not difficult, to prove that f satisfies the required properties, so that the space \(\mathcal N(U^*)=\langle U^*,\leq ,f\rangle \) is a Nelson space. In consequence, \({\mathbf {N}}_{k_f}(\mathcal N(\Omega _{\leq }(U^*)))\) is a Nelson algebra of ordered pairs of disjoint elements of \(\Omega _{\leq ^+}(U^+)\).
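The glueing can be sketched concretely. The Python fragment below is our own: it hard-codes the running example, takes S = {a} as the set of points identified with their copies (so that f(a) = a), builds U ∗, the glued order ≤ and the involution f of (61), and checks the properties required of f.

```python
U = {'v', 'a', 'b', 'b1', 'c'}        # b1 stands for b'
up = {'v': set(U), 'a': {'a'}, 'b': {'b', 'b1'}, 'b1': {'b', 'b1'}, 'c': {'a', 'c'}}
S = {'a'}                             # isolated points, identified with their copies

plus  = lambda x: x if x in S else x + '+'
minus = lambda x: x if x in S else x + '-'
Ustar = {plus(x) for x in U} | {minus(x) for x in U}

def dec(z):                           # strip the decoration; 'a' counts as both
    return (z[:-1], z[-1]) if z[-1] in '+-' else (z, '+-')

def leq(p, q):                        # ≤ = ≤+ ∪ ≤− ∪ (≤+ ⊗ φ ⊗ ≤−)
    (x, dx), (y, dy) = dec(p), dec(q)
    if '+' in dx and '+' in dy and y in up[x]:
        return True                   # ≤+ : x ≼ y
    if '-' in dx and '-' in dy and x in up[y]:
        return True                   # ≤− : the reversal of ≤+
    if '+' in dx and '-' in dy and set(up[x]) & set(up[y]):
        return True                   # through φ: some z with x ≼ z and y ≼ z
    return False

f = lambda z: z if z in S else (z[:-1] + '-' if z.endswith('+') else z[:-1] + '+')

assert all(f(f(z)) == z for z in Ustar)                   # f is involutive
assert all(leq(z, f(z)) or leq(f(z), z) for z in Ustar)   # f is "linear"
assert all(not leq(p, q) or leq(f(q), f(p))               # f is order reversing
           for p in Ustar for q in Ustar)
```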

In synthesis, we have transformed a Heyting algebra into a Nelson one by applying a particular filtration which operates as follows. In the first place, let us focus our attention on the involution f. From the above discussion, it is evident that if x ∈ U + then \(f(\uparrow _{\leq ^+} x)\cap U^+=\mathbb C_{\leq ^+}(\{x\})\). Therefore, if x is an isolated point in \(\Omega _{\leq ^+}(U^+)\) (i.e. \(\uparrow _{\leq ^+} x=\{x\}\)) and x∉X but x ∈ f(X), for some X ∈ Ω(U ∗), then x is neither in U + ∩ X nor in U + ∩−f(X). Suppose that for some A ⊆ U +, \(X\cap U^+=\mathbb I_{\leq ^+}(A)\), so that \(U^+\cap -f(X)=-\mathbb C_{\leq ^+}(A)\). Therefore x would belong to the boundary of A, which is impossible for isolated points.

But suppose f(x) = x. Then x∉X ∩ U + if and only if f(x)∉X. In consequence, x ∈ − f(X) and, finally, x is in U + ∩−f(X), that is, \(x\in -\mathbb C_{\leq ^+}(A)\). It follows that if S is the set of isolated points of \(\Omega _{\leq ^+}(U^+)\) and we put f(c) = c for all c ∈ S, then there is a one-one correspondence between the set \(\{-\mathbb C_{\leq ^+}(A):A\in \wp (U^+)\}\) and the set {U + ∩−f(X) : X ∈ Ω(U ∗)}, that is, the set {X 2 : 〈X 1, X 2〉∈ k f( Ω(U ∗))}. This is the tricky part, because \(\Omega _{\leq ^+}(U^+)\) is by definition \(\{\mathbb I_{\leq ^+}(A):A\in \wp (U^+)\}\). Otherwise stated, in 〈X 1, X 2〉 the element X 1 is not affected by the choice of the subset of maximal elements.

Example 101

In this example we use our familiar preorder P = 〈U, R〉. But to follow the construction comfortably, we rename its elements by setting U + instead of U and ≤+ instead of R. Therefore, now our approximation space is called \(\Omega _{\leq ^+}(U^+)\).

It is clear that the codomain of ≤+ ⊗ φ is just a renaming of the codomain of ≤+, from U + to U −, so that ≤+ ⊗ φ ⊗ ≤− amounts to ≤+ ⊗ ≤+ up to renaming. So we obtain:

The above pre-ordered space together with the involution f of (61) is our Nelson space \(\mathcal N(U^*)=\langle U^*,\leq ,f\rangle \). Now let us apply to \(\mathcal N(U^*)\) the transformation k f. We compute some instances.

$$\displaystyle \begin{aligned} k_f(\{b^+, b^{\prime +}, b^-,b^{\prime -},v^-\})&=\langle U^+\cap\{b^+, b^{\prime +}, b^-,b^{\prime -},v^-\},U^+\cap -f(\{b^+, b^{\prime +}, b^-,b^{\prime -},v^-\})\rangle\\ &=\langle \{b^+,b^{\prime +}\},U^+\cap -(\{b^+,b^{\prime +},b^-,b^{\prime -},v^+\})\rangle\\ &=\langle\{b^+,b^{\prime +}\},U^+\cap \{v^-,c^+,a^{+-},c^-\}\rangle=\langle\{b^+,b^{\prime +}\},\{c^+,a^{+-}\}\rangle. \end{aligned} $$

k f({b +, b′ +, b −, b′ −, v −, c −}) = 〈{b +, b′ +}, {a +−}〉.

Since f(a +−) = a +−, for any X ∈ Ω(U ∗) it is not possible to exclude a +− from both X 1 and X 2 in k f(X) = 〈X 1, X 2〉.

Notice, indeed, that in the construction of the present space we have applied the Grothendieck topology of Example 94. On the contrary, consider the Nelson space \(\mathcal N(U^*)\) of Example 98 and set X = {c, r, 1, g, l}. Then k f(X) = 〈{c}, ∅〉, which corresponds in the present space to 〈{b +, b′ +}, ∅〉. So, a (i.e. a +) is excluded both from the interior and from the complement of the closure of X.

Observe now that from Theorem 100 we can put f(c) = c for all c ∈ S, for the very reason that S is the set of isolated, hence maximal, points of \(\Omega _{\leq ^+}(U^+)\). Since S is included in the least dense element δ of \(\Omega _{\leq ^+}(U^+)\), it induces a Boolean congruence.

Indeed, the above construction is an instance of the following general result at the pointless level which shows how to transform Heyting algebras into Nelson ones.

Theorem 102 (Sendlewski)

Let H be a Heyting algebra anda Boolean congruence on H . Then:

$$\displaystyle \begin{aligned} {N}_\equiv(\mathbf{H})=\{\langle a_1,a_2\rangle:a_1\wedge a_2=0 \mathit{\text{ and }} a_1\vee a_2\equiv 1\} \end{aligned} $$
(62)

equipped with the abstract version of the operations of Definition 97 is a Nelson algebra N ≡(H). If 〈a 1, a 2〉 ≅ 〈b 1, b 2〉 iff a 1 = b 1, then ≅ is a congruence with respect to all the operations of N ≡(H) but the strong negation ∼, and N ≡(H)∕≅ is isomorphic to H . Moreover, \({\mathbf {N}}_{k_f}(\mathcal N(\mathbf {J({\mathbf {N}}_\equiv (\mathbf {H}))}))=\mathbf {N_\equiv (\mathbf {H})}\) . Finally, all the Nelson algebras N such that N∕≅ is isomorphic to H are isomorphic to N ≡(H) for some Boolean congruence ≡.

Remarks 7.3

With “abstract version” of the operations we mean, for instance, ∼〈a 1, a 2〉 = 〈a 2, a 1〉 or 〈a 1, a 2〉→〈b 1, b 2〉 = 〈a 1⇒b 1, a 1 ∧ b 2〉.

The congruence ≅ takes into account just the first elements of the ordered pairs. But these are the elements of H. Therefore it is immediate that N ≡(H)∕≅ is isomorphic to H, provided ≡ is a Boolean congruence. If we look at this result from the point of view of Nelson spaces, we see that there is a one-one correspondence between the subsets of the set M of the maximal elements of the dual space of H and the Boolean congruences of H. The least Boolean congruence corresponds to M itself and to our operator J δ discussed above, because M is the least dense element of H. This congruence will play a key role, as we will see after showing how Boolean congruences, that is, subsets of maximal elements, are connected to rough set systems from the point of view of the transformation N ≡.

The dual space of a Heyting algebra is a partial order. So, any pre-maximal element is maximal. But if we start with a preordered system we have to take into account the subsets of pre-maximal and maximal elements. We have adapted the dual construction of Theorem 102, due to [47], to the finite case and preorders. In the process, we have found Theorem 100. Moreover, we have linked the Boolean congruence ≡ to Lawvere-Tierney operators and Grothendieck topologies. This link enables us to understand the importance of ≡ in the construction of rough set systems.

In fact, the rough set companion of Theorem 102, expressed in terms of Lawvere-Tierney operators, is:

Theorem 103

Let Ω R(U) be an approximation space and S the set of its isolated elements. Let us set:

$$\displaystyle \begin{aligned} {N}_{\equiv_{J^S}}({\Omega_R(U)})=\{\langle X_1,X_2\rangle\in \Omega_R(U):X_1\cap X_2=\emptyset\mathit{\text{ and }} X_1\cup X_2\equiv_{J^S}U\} \end{aligned} $$

Then \(Dsj(\Omega _R(U))={N}_{\equiv _{J^S}}({\Omega _R(U)})\).

Proof

The proof is immediate: In the first place, S is a subset of maximal elements of 〈U, R〉. So \(\equiv _{J^S}\) is a Boolean congruence.

Moreover, if \(X_1\cup X_2\equiv _{J^S} U\) then S⇒(X 1 ∪ X 2) = S⇒U = U. It follows that S ⊆ (X 1 ∪ X 2).

Let \(X_1=\mathbb I_R(A)\) and \(X_2=-\mathbb C_R(A)\) for some A ⊆ U. Then, since R(c) = {c} for any isolated c, for each c ∈ S either \(c\in \mathbb I_R(A)\) (when c ∈ A) or \(c\in -\mathbb C_R(A)\) (when c ∉ A). □

Consider Example 101. It is easy to verify that \(k_f(\Omega _\leq (U^*))=N_{\equiv _{J^{\{a\}}}}(\Omega _R(U))\) and that they coincide with the lattice Dsj( ΩR(U)) of Example 46 once we get rid of the decoration +.
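As a quick computational sanity check of Theorem 103, the following Python sketch builds the pairs ⟨𝕀_R(A), −ℂ_R(A)⟩ for every subset A of a toy poset and verifies both disjointness and the filtration clause on the isolated (maximal) points. The poset is hypothetical, chosen only for illustration; it is not one of the chapter's running examples.

```python
from itertools import chain, combinations

W = ['a', 'b', 'c', 'v']
# hypothetical partial order: v <= c <= a and v <= b, plus reflexivity
leq = {(x, x) for x in W} | {('v', 'c'), ('v', 'a'), ('c', 'a'), ('v', 'b')}

def up(x):   return {y for y in W if (x, y) in leq}      # R(x), the up-set of x
def down(x): return {y for y in W if (y, x) in leq}

def interior(A):   # largest open (up-closed) subset of A
    return {x for x in A if up(x) <= set(A)}

def closure(A):    # smallest closed (down-closed) superset of A
    return {y for x in A for y in down(x)}

S = {x for x in W if up(x) == {x}}    # maximal points = isolated points

def dsj(A):        # the rough-set pair <I_R(A), -C_R(A)>
    A = set(A)
    return interior(A), set(W) - closure(A)

for A in chain.from_iterable(combinations(W, k) for k in range(len(W) + 1)):
    X1, X2 = dsj(A)
    assert X1 & X2 == set()    # the pair is disjoint
    assert S <= X1 | X2        # filtration clause: X1 ∪ X2 covers S
```

Every rough-set pair automatically covers the isolated points, which is why \(\equiv _{J^S}\) filters nothing out of Dsj( ΩR(U)).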

We can see the above construction from a different point of view: it provides an information-like interpretation of the filtration clause “≡ 1” which appears not only in the definition of Nelson algebras, but also in those of Stone algebras and Łukasiewicz algebras. Three-valued Łukasiewicz algebras will be linked to rough set systems in the next Section. Now we have to conclude the story about rough set systems from preorders and partial orders, with a particular and interesting case.

7.2 Rough Set Systems Based on Partial Orders and Effective Lattices

7.2.1 Constructive Logic with Strong Negation: CLSN

The Hilbert-style axioms of Nelson Logic, called Constructive Logic with Strong Negation—CLSN—are essentially the axioms for Nelson algebras. There is also an equational definition which can be found in [42]. A Natural Deduction-style set of rules can be found in [39]. Kripke models for CLSN were introduced by Thomason (see [48]). They are partial orders, which for a familiar reason we denote by 〈U +, ≤+〉, equipped with a standard forcing relation ⊧ for positive formulas, that is, if p⊧α then for all p′≥ p, p′⊧α. In Kripke models for Intuitionistic logic, the forcing clause for the intuitionistic negation ¬ is defined as p⊧¬α if for all p′≥ p, p′⊭α. In Thomason’s models the strong negation ∼ of CLSN is defined like a positive formula: p⊧ ∼ α implies that for all p′≥ p, p′⊧ ∼ α. It is only required that if p⊧α then p⊭ ∼ α.

Therefore, in the Intuitionistic case, \([\![ \neg \alpha ]\!] \) can be calculated from \([\![ \alpha ]\!] \). In particular, given a Heyting algebra ΩR(U), \([\![ \neg \alpha ]\!] =-\mathbb C_{R}([\![ \alpha ]\!] )\). On the contrary, there is no function sending \([\![ \alpha ]\!] \) to \([\![ \sim \alpha ]\!] \). Actually, there can be α and β such that \([\![ \alpha ]\!] =[\![ \beta ]\!] \) but \([\![ \sim \alpha ]\!] \neq [\![ \sim \beta ]\!] \), a situation which does not occur for ¬. In a sense, ∼ α is an “explicit negation”, not an “implicit” one like ¬α; one has to positively state where α is false.

Therefore, it is natural to represent the evaluation of a CLSN formula α by means of an ordered pair of disjoint elements \(\kappa (\alpha )=\langle [\![ \alpha ]\!] ,[\![ \sim \alpha ]\!] \rangle \).

Clearly, there can be states p such that p⊭α and p⊭ ∼ α, because the existence of maximal states m such that either m⊧α or m⊧ ∼ α is not required. This behaviour is different from that of intuitionistic negation, because by its very definition if α is not forced by any state above p, then p⊧¬α. Otherwise stated, in models for CLSN it is not required that eventually all formulas are decidable.

This behaviour is mirrored by the dual construction of Nelson algebras. In fact, if there is a maximal p which is required to decide every formula α, then p must be either in \([\![ \alpha ]\!] \) or in \([\![ \sim \alpha ]\!] \). Therefore, in the dual construction one must put f(p) = p, so that p must be maximal. But from the Remarks 7.2 this is not mandatory for maximal elements, as Example 98 shows.

Pay attention that, both for preorders and for partial orders, if p is not maximal or pre-maximal and still we put f(p) = p, then in view of Theorem 100.(3) f cannot be an involutive anti-order isomorphism, and for X = {p : f(p) = p} the filter X does not contain all the dense elements of \(\Omega _{\leq ^+}(U^+)\), so that \({\mathbf {N}}_{\equiv _{J^X}}(\Omega _{\leq ^+}(U^+))/\cong \) is isomorphic to another Heyting algebra \(\mathbf {H'}\neq \Omega _{\leq ^+}(U^+)\).

Example 104

If we set X = {a, c}, then J X gives:

The relations between H and H′ are not in the scope of our chapter. Therefore, we focus our attention on maximal and pre-maximal states.

From a logical point of view, since formulas are evaluated on order filters, the distinction between partial orders and preorders, hence between maximal and pre-maximal states, is not very important. On the contrary, it is relevant from the point of view of rough sets. In fact, in order to be approximated through a relation R on U, a subset A of U is “evaluated” on the points of ΩR(U), so that, as we are going to see, there is a difference if R is a partial order or a preorder. More precisely, the difference concerns the existence of maximal states. From now on, therefore, we consider partial orders or preorders bounded by maximal states.

We have two extreme cases and an intermediate one: (1) No maximal state decides every formula. (2) All the maximal states decide every formula. (3) Some maximal states, but not all, decide every formula.

In the first case, the filtering congruence relation is \(\equiv _{J^\emptyset }\), so it is required that ∅ ⊆ X 1 ∪ X 2, a condition which is always fulfilled. In particular, in the resulting Nelson lattice the pair 〈∅, ∅〉 appears, which represents a state of “complete absence of information”.

Notice that ∼〈∅, ∅〉 = 〈∅, ∅〉. An element a such that ∼ a = a is called central; central elements are thus fixed points of the strong negation. In a Nelson algebra there can be at most one central element.

The intermediate case is the generic one discussed so far. The lattice of Example 98 (a.k.a. Dsj(UR)) illustrates the first extreme case. The second extreme case will be discussed in the next section.

Example 105

Let us drop from our standard preorder P the element b′. Then R turns into a partial order Q on the set W = {a, b, c, v}. Suppose we are given just one CLSN formula α to be evaluated on 〈W, Q〉. Then the situations at the maximal state a are the following (below each diagram the corresponding κ(α) is displayed):

But if the parameter of the operator J X is the set of maximal elements M, then only the first four cases are admitted. Notice that the excluded ordered pairs are not ruled out because they have an empty component: in fact 〈M, ∅〉 and 〈∅, M〉 are admitted (for another counterexample see the next section).

Before analysing the second and fundamental extreme case, we display some interesting relations between the three negations. We need an easy but useful lemma:

Lemma 106

Let ⟨a 1, a 2⟩ be a pair of disjoint elements of a Heyting algebra. Then a 1 ≤¬a 2 and a 2 ≤¬a 1.

Proof

Immediate from adjointness: a 1 ∧ a 2 ≤ 0 if and only if a 1 ≤ a 2⇒0 = ¬a 2 if and only if a 2 ≤ a 1⇒0 = ¬a 1. □

Moreover, it is not difficult to verify that for any two elements a = 〈a 1, a 2〉 and b = 〈b 1, b 2〉 of any Nelson algebra \({\mathbf {N}}_{\equiv _{J^X}}(\mathbf {H})\) of ordered pairs of disjoint elements of a Heyting algebra H (hence with \(\equiv _{J^X}\) not necessarily a Boolean congruence) the following hold:

$$\displaystyle \begin{aligned} & a\leq b\text{ if and only if }a_1\leq_{\mathbf{H}}b_1\text{ and }b_2\leq_{\mathbf{H}}a_2. \end{aligned} $$
(63)
$$\displaystyle \begin{aligned} &(i)\;\neg a\leq\sim a\leq\lrcorner a,\;(ii)\text{ if }a\leq b\text{ then }\sim a\leq\sim b,\;\neg a\leq\neg b,\lrcorner a\leq\lrcorner b \end{aligned} $$
(64)
$$\displaystyle \begin{aligned} &(i)\;\neg\neg\neg a\geq\neg a,\;(ii)\;\sim\sim\sim a=\sim a,\;(iii)\;\lrcorner\lrcorner\lrcorner a\leq\lrcorner a \end{aligned} $$
(65)
$$\displaystyle \begin{aligned}\noalign{} {} &(i)\lrcorner\lrcorner a\geq\sim\lrcorner a=\neg\sim a=\neg\lrcorner a\leq a, (ii)\;\neg\neg a\leq\sim\neg a=\lrcorner \sim a=\lrcorner\neg a\geq a \end{aligned} $$
(66)
$$\displaystyle \begin{aligned} &(i)\;\lrcorner\lrcorner a\geq a\text{ only if }a_2=\neg a_1,\; (ii)\; \neg\neg a\geq a\text{ only if }\neg\neg a_2=a_2 \end{aligned} $$
(67)

The proofs are by easy inspection using Lemma 106. For instance, (66).(ii) is immediate from (i): \(\sim \neg a=\sim \neg \sim \sim a=\sim \sim \lrcorner \sim a=\lrcorner \sim a\). As to (67), a = 〈a 1, a 2〉 and \(\lrcorner \lrcorner a=\langle \neg \neg a_1,\neg a_1\rangle \). Clearly ¬¬a 1 ≥H a 1 and, from Lemma 106, ¬a 1 ≥H a 2, so that the first pair is above the second only if a 2 = ¬a 1. In turn, ¬¬a = 〈¬a 2, ¬¬a 2〉 and, again from Lemma 106, ¬a 2 ≥H a 1, ¬¬a 2 ≥H a 2, so that now we need ¬¬a 2 = a 2 (we recall that 〈a 1, a 2〉≤〈b 1, b 2〉 iff a 1 ≤H b 1 and b 2 ≤H a 2).
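The inequalities (64)–(65) among the three negations can also be machine-checked on a small example. The sketch below is illustrative only: it assumes a hypothetical four-element poset (not one of the running examples), takes the Alexandrov opens to be the up-closed sets, and enumerates all disjoint pairs of open sets.

```python
from itertools import product

W = ['a', 'b', 'c', 'v']
# hypothetical partial order: v <= c <= a and v <= b, plus reflexivity
leq = {(x, x) for x in W} | {('v', 'c'), ('v', 'a'), ('c', 'a'), ('v', 'b')}
up = lambda x: {y for y in W if (x, y) in leq}

# open sets of the Alexandrov topology = up-closed subsets
opens = [frozenset(s) for k in range(2 ** len(W))
         for s in [{w for i, w in enumerate(W) if k >> i & 1}]
         if all(up(x) <= s for x in s)]

def hneg(X):   # Heyting pseudo-complement: interior of the set complement
    c = set(W) - X
    return frozenset(x for x in c if up(x) <= c)

snot   = lambda p: (p[1], p[0])           # strong negation ~
wneg   = lambda p: (p[1], hneg(p[1]))     # the negation ¬<a1,a2> = <a2, ¬a2>
corner = lambda p: (hneg(p[0]), p[0])     # the negation ⌟<a1,a2> = <¬a1, a1>

le = lambda p, q: p[0] <= q[0] and q[1] <= p[1]    # the pair order (63)

for p in [(x, y) for x, y in product(opens, opens) if not x & y]:
    assert le(wneg(p), snot(p)) and le(snot(p), corner(p))   # (64).(i)
    assert snot(snot(snot(p))) == snot(p)                    # (65).(ii)
    assert le(wneg(p), wneg(wneg(wneg(p))))                  # (65).(i)
    assert le(corner(corner(corner(p))), corner(p))          # (65).(iii)
```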

Example 107

Consider the following examples from the lattice Dsj( Ω(U)) of Example 46: \(\lrcorner \lrcorner \langle \{a\},\emptyset \rangle =\langle \{a,c\},\{b,b'\}\rangle \), which is incomparable with 〈{a}, ∅〉 because {a}⊆{a, c} but also ∅ ⊆{b, b′}, whereas by (63) the second components should shrink.

Similarly, ¬¬〈∅, {a}〉 = 〈{b, b′}, {a, c}〉 which is incomparable with 〈∅, {a}〉. \(\lrcorner \lrcorner \langle \emptyset ,\{a\}\rangle = \langle \emptyset ,U\rangle \leq \langle \emptyset ,\{a\}\rangle \text{ and }\) ¬¬〈{b, b′}, {a}〉 = 〈{b, b′}, {a, c}〉≤〈{b, b′}, {a}〉. \(\text{On the contrary, }\lrcorner \lrcorner \langle \{a\},\{b,b'\}\rangle =\langle \{a,c\},\{b,b'\}\rangle \geq \langle \{a\},\{b,b'\}\rangle \) and ¬¬〈∅, {a, c}〉 = 〈{b, b′}, {a, c}〉≥〈∅, {a, c}〉.

7.3 Effective Lattices, the Logic E 0 and Rough Set Systems

Now we have to focus on the particular case in which the preorder, or the partial order, is bounded by a set M of maximal states. From a rough set perspective, this means that the set S of isolated elements coincides with M. What are the logical consequences of this situation? Otherwise stated, what happens if in a model every state has above it a state which decides every formula?

In view of Theorem 83, in such a model, for any formula α, \([\![ \alpha \vee \sim \alpha ]\!] \) is locally valid everywhere. This is the main feature which turns CLSN into a new logic called E 0. This logic (also called Effective Logic 0) was introduced for studying program synthesis and program specification (see [24]) and its algebraic models were studied in [27]. It was presented by means of rules of Natural Deduction by adding to the rules for CLSN the following schemas:

Now, let us translate the two rules into the operations of a Nelson algebra N ≡(H) of ordered pairs of disjoint elements of a Heyting algebra H. In what follows we put a = κ(α), b = κ(β), a = 〈a 1, a 2〉, b = 〈b 1, b 2〉, and so on. Remember that a⟶b = 1 iff a 1 ≤H b 1. Then, since the first element of κ(β∧∼ β) is b 1 ∧ b 2 = 0, we have:

$$\displaystyle \begin{aligned} (i)\;(a\longrightarrow 0)\longrightarrow \mathbf{T}(\sim a); (ii)\;(\sim a\longrightarrow 0)\longrightarrow \mathbf{T}(a) \end{aligned} $$
(68)

Moreover, from the two rules it is possible to derive

$$\displaystyle \begin{aligned} \mathbf{T}(\sim\alpha)\equiv\sim\mathbf{T}(\alpha). \end{aligned} $$
(69)

Let us set T(a) = 〈T 1, T 2〉 and from the above ingredients discover what T 1 and T 2 must be.

From (68).(i) and T(∼ α) ≡∼T(α) one has \(\lrcorner a\longrightarrow \sim \mathbf {T}(a)\). From (68).(ii) one obtains \(\lrcorner \sim a\longrightarrow \mathbf {T}(a)\). Since \(\lrcorner \sim a=\langle \neg a_2, a_2\rangle \) and \(\lrcorner a=\langle \neg a_1,a_1\rangle \) it must be:

$$\displaystyle \begin{aligned} (i)\;\neg a_2\leq_{\mathbf{H}} T_1,\;(ii)\;\neg a_1\leq_{\mathbf{H}} T_2,\text{ so }(iii)\;\neg T_1\leq_{\mathbf{H}}\neg\neg a_2,\;(iv)\;\neg T_2\leq_{\mathbf{H}}\neg\neg a_1 \end{aligned} $$
(70)

From Lemma 106 we have:

$$\displaystyle \begin{aligned} (i) \neg\neg a_1\leq_{\mathbf{H}}\neg a_2,\;\;(ii)\; T_1\leq_{\mathbf{H}}\neg T_2,\;\; (iii)\;T_2\leq_{\mathbf{H}}\neg T_1 \end{aligned} $$
(71)

From (70).(i) and (71).(i), ¬¬a 1 ≤H¬a 2 ≤H T 1. From this, (70).(iv) and (71).(ii), ¬¬a 1 ≤H T 1 ≤H¬T 2 ≤H¬¬a 1. In consequence: ¬¬a 1 = T 1 = ¬T 2. Therefore, ¬T 1 = ¬¬¬a 1 = ¬a 1. From this and (71).(iii), T 2 ≤H¬a 1, and from (70).(ii), ¬a 1 ≤H T 2, so that T 2 = ¬a 1. We conclude that:

$$\displaystyle \begin{aligned} \mathbf{T}(a)=\langle\neg\neg a_1,\neg a_1\rangle \text{ for any } a \text{ of a Nelson algebra modelling } E_0. \end{aligned} $$
(72)

Now, in a generic Nelson algebra we do have \(\lrcorner \lrcorner a=\langle \neg \neg a_1,\neg a_1\rangle \), but we cannot set \(\mathbf { T}(a)=\lrcorner \lrcorner a\) because of (69). In fact, \(\sim \lrcorner \lrcorner a=\neg \neg \sim a=\langle \neg a_1,\neg \neg a_1\rangle \). Therefore it should be ¬a 1 = ¬¬a 2 and ¬a 2 = ¬¬a 1, for all a. However, usually in Nelson algebras these equations do not hold. Look at the following model:

The reader has immediately recognised the problem: b does not decide α or ∼ α.

At this point we must find the conditions for a Nelson algebra to make ¬a 1 = ¬¬a 2, which, obviously, is the same as ¬¬a 1 = ¬a 2.

Lemma 108

Let L be a distributive lattice, a, b, c ∈ L, a ∧ b = 0 = a ∧ c. Then a ∨ b = a ∨ c if and only if b = c.

Proof

If a ∨ b = a ∨ c then (a ∨ c) ∧ b = (a ∨ b) ∧ b = b. Therefore, b = (a ∧ b) ∨ (c ∧ b) = 0 ∨ (c ∧ b) so that b ≤ c. Similarly one proves that c ≤ b. The converse implication is more than trivial. □
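Lemma 108 can be verified exhaustively in the smallest interesting distributive lattice, the powerset of a three-element set; a minimal sketch:

```python
from itertools import chain, combinations

# the distributive lattice P({0, 1, 2}) with meet = ∩, join = ∪, 0 = ∅
L = [frozenset(s) for s in
     chain.from_iterable(combinations(range(3), k) for k in range(4))]

for a in L:
    for b in L:
        for c in L:
            if not a & b and not a & c:              # a ∧ b = 0 = a ∧ c
                assert (a | b == a | c) == (b == c)  # a ∨ b = a ∨ c iff b = c
```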

Theorem 109

Let N ≡(H) be a Nelson algebra of ordered pairs of disjoint elements of a Heyting algebra H . Then for any ⟨a 1, a 2⟩∈ N ≡(H), ¬a 1 = ¬¬a 2 if and only if ≡ is \(\equiv _{J^\delta }\) , where δ is the least dense element of H.

Proof

If \(\langle a_1,a_2\rangle \in N_{\equiv _{J^\delta }}(\mathbf {H})\) then [a 1 ∨ a 2]¬¬ = [1]¬¬. It follows that [a 1 ∨ a 2]¬¬ = [a 1 ∨¬a 1]¬¬, because a 1 ∨¬a 1 is dense. Since a 1 ∧ a 2 = 0 and a 1 ∧¬a 1 = 0, then [a 1]¬¬∧ [a 2]¬¬ = [0]¬¬ and [a 1]¬¬∧¬[a 1]¬¬ = [0]¬¬. Moreover, [a 1 ∨ a 2]¬¬ = [a 1]¬¬⊔ [a 2]¬¬ = [a 1 ∨¬a 1]¬¬ = [a 1]¬¬⊔ [¬a 1]¬¬. Therefore, from Lemma 108, [a 2]¬¬ = [¬a 1]¬¬ and we conclude ¬¬a 2 = ¬¬¬a 1 = ¬a 1. The example above proves the converse (more precisely, it proves the contrapositive of the converse implication, that is, if ≡ is not the minimal Boolean congruence, then—see Lemma 106—¬a 1 ≥H¬¬a 2). □

In [30] the above result is provided by a parallel algebraic and proof-theoretic derivation.

If an approximation space ΩR(U) is induced by an order R upper bounded by maximal elements (i.e. with no infinite ascending chains), like any finite partial order, then the set of maximal elements of R is the set of isolated points, which, according to the philosophy of Rough Set Theory, are completely describable items, that is, items which are describable with no ambiguity by the given properties. In consequence \(Dsj(\Omega _R(U))=N_{\equiv _{J^\delta }}(\Omega _R(U))\). In this case the intrinsic logic (in the sense of [21]) of the rough set system is E 0, not CLSN (see [19]):

Theorem 110

Let Ω R(U) be an approximation space such that R is an order with no infinite ascending chains, and S the set of its maximal elements. Then \(N_{\equiv _{J^S}} (\Omega _R(U))=Dsj(\Omega _R(U))\).

Example 111

Consider the partial order of Example 105. The set of maximal elements is S = {a, b}. Below we show the Nelson space built on 〈W, Q〉 with the usual decorations “+ ” and “−” and the resulting Nelson lattice of ordered pair of elements of ΩQ(W) without the decoration “+ ”.

Pay attention that if we apply the above construction to our preorder 〈U, R〉 enlarging S to a set S′ containing also the pre-maximal states, then \(N_{\equiv _{J^{S'}}}(\Omega _R(U))\) is, indeed, an effective lattice, but it is a sublattice of Dsj( ΩR(U)). For instance, Dsj({a, b}) = 〈{a}, ∅〉, which is not an element of \(N_{\equiv _{J^{S'}}}(\Omega _R(U))\). In fact in this lattice R(b) and R(b′), that is, {b, b′}, must be included either in the first or in the second element of any ordered pair. Practically, \(N_{\equiv _{J^{S'}}}(\Omega _R(U))\) is the above lattice with b′ added to any element containing b. Hence it is a lattice quite different from Dsj( ΩR(U)), which is shown in Example 46.

7.4 Algebraic Logic from Equivalence Relations

Originally, Rough Set Theory was based on equivalence relations (see [40]). Also in this case the intrinsic logic of the resulting rough set system changes drastically according to the operator J X.

Lemma 112

An approximation space Ω R(U) with R an equivalence relation and equipped with the Heyting algebra operations of Definition 38 is a Boolean algebra.

Proof

A Heyting algebra such that ¬¬a = a is a Boolean algebra. The topological space ΩR(U) is made of clopen (closed and open) subsets. Hence, if A ∈ ΩR(U) then − A ∈ ΩR(U). In consequence, for any A, B ∈ ΩR(U), \(\mathbb I(-A\cup B)=-A\cup B\), so that ¬A = −A. Therefore ¬¬A = −− A = A. □

Lemma 113

If B is a Boolean algebra, then for any a ∈ B , \(\equiv _{J^a}\) is a Boolean congruence.

Proof

Trivial: the only dense element of a Boolean algebra is 1 and, since a ≤ 1 for any a, the filter generated by a contains all dense elements (actually the only one). □

Lemma 114

Let B be a Boolean algebra, then for any congruence \(\equiv _{J^a}\) , \({\mathbf {N}}_{\equiv _{J^a}}(\mathbf {B})\) is a semi-simple Nelson algebra.

Proof

For any \(a\in N_{\equiv _{J^a}}(\mathbf {B})\), \(a\vee \lrcorner a=\langle a_1,a_2\rangle \vee \langle \neg a_1,a_1\rangle =\langle a_1\vee \neg a_1, a_2\wedge a_1\rangle =\langle 1,0\rangle =1\). □
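The one-line computation in the proof of Lemma 114 can be replayed over the Boolean algebra of exact sets of a toy equivalence relation. The setup below is hypothetical, chosen only for illustration; bp plays the role of b′.

```python
U = frozenset({'a', 'b', 'bp'})                  # bp plays the role of b'
classes = [frozenset({'a'}), frozenset({'b', 'bp'})]

# exact (clopen) sets: all unions of equivalence classes
exact = [frozenset().union(*[c for i, c in enumerate(classes) if k >> i & 1])
         for k in range(4)]

hneg   = lambda X: U - X                  # Boolean ¬ is the set complement
corner = lambda p: (hneg(p[0]), p[0])     # ⌟<a1, a2> = <¬a1, a1>
join   = lambda p, q: (p[0] | q[0], p[1] & q[1])

pairs = [(x, y) for x in exact for y in exact if not x & y]
for p in pairs:
    assert join(p, corner(p)) == (U, frozenset())   # a ∨ ⌟a = 1: semi-simplicity
```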

Semi-simple Nelson algebras are the same mathematical objects as three-valued Łukasiewicz algebras. We explain this correspondence by showing interesting facts connected to rough sets.

In semi-simple Nelson algebras the operation ¬ is a pseudo-complementation: ¬a, that is, 〈a 2, ¬a 2〉, is the maximal element 〈x 1, x 2〉 such that x 1 ∧ a 1 = 0 and x 2 ∨ a 2 = 1. This may be surprising, because the complement of a 1 in the underlying Boolean algebra is its pseudo-complement ¬a 1, not a 2. But we have to consider that in any Nelson algebra of ordered pairs of a Heyting algebra H, a ≤ b if and only if a 1 ≤H b 1 and b 2 ≤H a 2. Therefore, the orders of the first and the second elements are contravariant. It follows that 〈x 1, x 2〉 is the pseudo-complement of 〈a 1, a 2〉 if x 2 is the minimal element z such that z ∨ a 2 = 1 and x 1 is the maximal element w such that w ∧ a 1 = 0 and w ∧ x 2 = 0. But, ¬a 2 is the minimal complement of a 2 to 1 (it is its supplement, indeed—see Definition 123). And a 2 is the maximal element disjoint from ¬a 2.

However, we give a proof, by introducing another kind of implication, which is fundamental to understand the intrinsic logic of rough set systems from equivalence relations.

Theorem 115

In a Nelson algebra N ≡(B) with B a Boolean algebra, the operation ⊃ defined in (58) is a relative pseudo-complementation

The proof can be found in [32] or in [39]. As a corollary, since ¬a = a ⊃ 0, one obtains that ¬a is the pseudo-complement of a.

Excursus

Now we have enough material to discuss a point we have left open: how to define the Nelson operations on a decreasing representation Dcr( ΩR(U))? If R is an equivalence relation the answer is straightforward, as we shall see. But if R is a preorder some difficulties arise. We recall that any element 〈A 1, A 2〉 of the decreasing representation belongs to \(\Omega _{R^\smile }(U)\times \Omega _R(U)\) and that \(\Omega _{R^\smile }(U)\) is a co-Heyting algebra with respect to ΩR(U).

It is a tricky point. For meet and join there is no problem: a ∧ b = 〈A 1 ∩ B 1, A 2 ∩ B 2〉 and analogously for ∨. For the strong negation just a little effort gives: ∼〈A 1, A 2〉 = 〈−A 2, −A 1〉. Now we apply the map ρ of (45) which transforms a disjoint representation into a decreasing one. Pay attention that before applying ρ we have ∼〈A 1, A 2〉 = 〈A 2, A 1〉, while after applying ρ we have ∼〈A 1, A 2〉 = 〈−A 2, −A 1〉. Let us then verify that ρ(∼ a) =∼ ρ(a): ρ(∼ a) = ρ(〈A 2, A 1〉) = 〈−A 1, A 2〉 = 〈−A 1, −− A 2〉 =∼〈−A 2, A 1〉 =∼ ρ(a). It is more difficult to define the implication. In disjoint representation a⟶b = 〈A 1⟹B 1, A 1 ∩ B 2〉, so ρ(〈A 1⟹B 1, A 1 ∩ B 2〉) = 〈−(A 1 ∩ B 2), A 1⟹B 1〉 = 〈−A 1 ∪−B 2, A 1⟹B 1〉. Now, ρ(a) = 〈−A 2, A 1〉, ρ(b) = 〈−B 2, B 1〉. It follows that given a = 〈A 1, A 2〉 and b = 〈B 1, B 2〉 in decreasing representation, a⟶b = 〈−A 2 ∪ B 1, A 2⟹B 2〉. Notice that \(A_2\Longrightarrow B_2=\mathbb I_R(-A_2\cup B_2)\). Therefore, \(\lrcorner a=a\longrightarrow \langle 0,0\rangle =\langle -A_2, \neg A_2\rangle \). One more time, notice that the first element is a set-theoretic complementation while the second is the interior of a set-theoretic complementation. Finally, the definition of ¬ is interesting, and tricky: in disjoint representation ¬a = 〈a 2, ¬a 2〉. Hence \(\rho (\neg a)=\langle -\neg A_2, A_2\rangle =\langle -\mathbb I_R(- A_2), A_2\rangle \). In consequence, for a in decreasing representation \(\neg a=\langle -\mathbb I_R(A_1),-A_1\rangle =\langle \lrcorner A_1,-A_1\rangle \), where \(\lrcorner \) is the pseudo-complementation of the opposite Heyting algebra \(\Omega _{R^\smile }(U)\). In fact, \(-\mathbb I_R(X)=-\mathbb C_{R^\smile }(X)=\mathbb I_{R^\smile }(-X)=\lrcorner X\). Rephrased with the constructors introduced in the first lesson, if a is in decreasing representation, then ¬a = 〈[i]R(−A 1), −A 1〉 = 〈−〈i〉R(A 1), −A 1〉, while \(\lrcorner a=\langle -A_2,[e]_R(A_2)\rangle \).

At an abstract level, in view of Theorem 11, these operations require a Boolean algebra equipped with a topological modal operator M and a topological co-modal operator L such that 〈M, L〉 is an axiality (an adjoint pair). Alternatively, we need a bi-Heyting algebra providing ¬ and \(\lrcorner \).

But in the case R is an equivalence relation, \(R=R^{\smile }\) and ΩR(U) is a topology of clopen sets. Then things are much easier and the operations are smoother: in the first place, there are no chains of alternate topological operators, that is, \(\mathbb {C}_R\mathbb I_R=\mathbb I_R\) and \(\mathbb {I}_R\mathbb C_R=\mathbb C_R\). This fact simplifies things a lot, since any X i has the form \(\mathbb I_R(Y)\) for some Y ⊆ U. Furthermore, \(\lrcorner a=\langle \neg A_2,\neg A_2\rangle \), ¬a = 〈¬A 1, ¬A 1〉 and a⟶b = 〈A 2⟹B 1, A 2⟹B 2〉, where ¬X i is the set-theoretic complement of X i and X i⟹Y j = −X i ∪ Y j.
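The equivalence-relation case of the Excursus can be checked mechanically. The sketch below uses a toy equivalence relation (all names hypothetical; bp stands for b′) and verifies that ρ commutes with the strong negation and with the implication, in the simplified clopen form just given.

```python
U = frozenset({'a', 'b', 'bp'})                  # bp stands for b'
classes = [frozenset({'a'}), frozenset({'b', 'bp'})]
exact = [frozenset().union(*[c for i, c in enumerate(classes) if k >> i & 1])
         for k in range(4)]                      # the clopen sets

rho = lambda p: (U - p[1], p[0])                 # disjoint -> decreasing

snot_dsj = lambda p: (p[1], p[0])                # ~ on disjoint pairs
snot_dcr = lambda p: (U - p[1], U - p[0])        # ~ on decreasing pairs

imp = lambda X, Y: (U - X) | Y                   # X ==> Y on clopen sets
arr_dsj = lambda p, q: (imp(p[0], q[0]), p[0] & q[1])       # -> on disjoint pairs
arr_dcr = lambda p, q: (imp(p[1], q[0]), imp(p[1], q[1]))   # -> on decreasing pairs

disjoint = [(x, y) for x in exact for y in exact if not x & y]
for p in disjoint:
    assert rho(snot_dsj(p)) == snot_dcr(rho(p))               # rho commutes with ~
    for q in disjoint:
        assert rho(arr_dsj(p, q)) == arr_dcr(rho(p), rho(q))  # and with ->
```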

7.5 Rough Set Interpretation of Tautologies and Contradictions

The following hold in any Nelson algebra \({\mathbf {N}}_{\equiv _{J^x}}(\mathbf {H})\):

$$\displaystyle \begin{aligned}&1\geq a\vee\lrcorner a\geq a\vee \neg a=a\vee\sim a\geq\langle x,0\rangle \end{aligned} $$
(73)
$$\displaystyle \begin{aligned}& 0\leq a\wedge\neg a\leq a\wedge\lrcorner a=a\wedge\sim a\leq\langle 0,x\rangle \end{aligned} $$
(74)

(73) and (74) are easily proved: \(a\vee \lrcorner a=\langle a_1\vee \neg a_1,0\rangle \geq \langle a_1\vee a_2,0\rangle =a\vee \neg a\). Symmetrically, \(a\wedge \neg a=\langle 0,a_2\vee \neg a_2\rangle \leq \langle 0,a_1\vee a_2\rangle =a\wedge \lrcorner a\). But a 1 ∨ a 2 ≥ x, by definition of the domain of the algebra. Therefore, a ∨¬a ≥〈x, 0〉 and \(a\wedge \lrcorner a\leq \langle 0,x\rangle \).

If \({\mathbf {N}}_{\equiv _{J^x}}(\mathbf {H})\) is semi-simple:

$$\displaystyle \begin{aligned}&(i)\;\neg\neg\neg a=\neg a,(ii)\;\lrcorner\lrcorner\lrcorner a=\lrcorner a \end{aligned} $$
(75)
$$\displaystyle \begin{aligned}&(i)\;\lrcorner\lrcorner a=\sim\lrcorner a=\neg\sim a\leq a,\;(ii)\neg\neg a=\sim\neg a=\lrcorner\sim a\geq a \end{aligned} $$
(76)
$$\displaystyle \begin{aligned}&(i)\; a\vee\lrcorner a=\langle 1,0\rangle;\;(ii)\;a\wedge\neg a=\langle 0,1\rangle \end{aligned} $$
(77)

(75) and (76) come from (65), (66) and (67) because ¬¬a 1 = a 1 and ¬¬a 2 = a 2. (77) comes trivially from the fact that a 1 ∨¬a 1 = a 2 ∨¬a 2 = 1, so that \(a\vee \lrcorner a=\langle a_1\vee \neg a_1, 0\rangle =\langle 1,0\rangle \) and a ∧¬a = 〈0, a 2 ∨¬a 2〉 = 〈0, 1〉.

Suppose ΩR(U) is a topology and \(a=Dsj(X)=\langle \mathbb I_R(X),-\mathbb C_R(X)\rangle \) for X ⊆ U. Then:

$$\displaystyle \begin{aligned} &a\vee\sim a=a\vee\neg a=\langle \mathbb I_R(X)\cup -\mathbb C_R(X),\emptyset\rangle=\langle -\mathbb B_R(X),\emptyset \rangle \end{aligned} $$
(78)
$$\displaystyle \begin{aligned} & a\vee\lrcorner a=\langle\mathbb I_R(X)\cup\mathbb I_R-\mathbb I_R(X),\emptyset\rangle=\langle \mathbb I_R(X)\cup\mathbb {I}_R\mathbb {C}_R(-X),\emptyset\rangle \end{aligned} $$
(79)

Opposite relations hold for the contradictions:

$$\displaystyle \begin{aligned} &a\wedge\sim a=a\wedge\lrcorner a=\langle\emptyset,-\mathbb B_R(X)\rangle \end{aligned} $$
(80)
$$\displaystyle \begin{aligned} &a\wedge\neg a=\langle\emptyset,-\mathbb C_R(X)\cup\mathbb I_R--\mathbb C_R(X)\rangle=\langle\emptyset,-\mathbb C_R(X)\cup\mathbb{I}_R\mathbb C_R(X)\rangle \end{aligned} $$
(81)

Therefore, we note that contradictions are not equivalent to 0 and tautologies are not equivalent to 1 because of the presence of an “indecision area”, that is, the boundary of a set.

In the case ΩR(U) is a Boolean algebra, so that Dsj(X) is an element of a semi-simple Nelson algebra, since the open sets of ΩR(U) are clopen, \(\mathbb I_R\mathbb C_R(-X)=\mathbb C_R(-X)=-\mathbb I_R(X)\), so that \(a\vee \lrcorner a=\langle U,\emptyset \rangle \) and a ∧¬a = 〈∅, U〉, as anticipated by (77).

What about, then, \(Dsj(\mathbb B_R(X))\) itself? It is \(\langle \mathbb I_R(\mathbb B_R(X)),-\mathbb C_R(\mathbb B_R(X))\rangle \). But since \(\mathbb B_R(X)\) is the intersection of two closed sets, that is, \(\mathbb C_R(X)\) and \(-\mathbb I_R(X)\), it is itself a closed set. In consequence,

$$\displaystyle \begin{aligned} Dsj(\mathbb B_R(X))=\langle \mathbb I_R\mathbb B_R(X),-\mathbb B_R(X)\rangle=\sim\neg(a\wedge\sim a)=\sim\neg(a\wedge\lrcorner a) \end{aligned} $$
(82)

In the semi-simple case, since \(\mathbb I_R\mathbb B_R(X)=\mathbb B_R(X)\), \(Dsj(\mathbb B_R(X))=\langle \mathbb B_R(X),-\mathbb B_R(X)\rangle \). Therefore, \(\mathbb B_R(X)\) is an exact set.

7.6 Double Negations, Modalities and Approximations

The following are immediate in any Nelson algebra:

$$\displaystyle \begin{aligned} (i)\;\neg\neg a=\langle \neg a_2,\neg\neg a_2\rangle,\;(ii)\;\lrcorner\lrcorner a=\langle\neg\neg a_1,\neg a_1\rangle\;(iii)\;\lrcorner\lrcorner a\leq\neg\neg a \end{aligned} $$
(83)

In what follows, \(\mathbb I\) and \(\mathbb C\) stand for \(\mathbb I_R\) and, respectively, \(\mathbb C_R\).

Theorem 116

Let Ω R(U) be a topology. Then for all X ⊆ U:

$$\displaystyle \begin{aligned} &{}Dsj((lR)(X))=\langle\mathbb I(X),-\mathbb{C}\mathbb{I}(X)\rangle,\; Dsj((uR)(X))=\langle\mathbb{I}\mathbb{C}(X),-\mathbb{C}(X)\rangle \end{aligned} $$
(84)
$$\displaystyle \begin{aligned} &{}\neg\neg Dsj(X)=\langle\mathbb{I}\mathbb{C}(X),\mathbb{ICI}(-X)\rangle,\; \lrcorner\lrcorner Dsj(X)=\langle\mathbb{ICI}(X),\mathbb{I}\mathbb{C}(-X)\rangle \end{aligned} $$
(85)
$$\displaystyle \begin{aligned} &{} \lrcorner\lrcorner Dsj((lR)(X))= \lrcorner\lrcorner Dsj(X),\; \lrcorner\lrcorner Dsj((uR)(X))= \neg\neg Dsj(X) \end{aligned} $$
(86)
$$\displaystyle \begin{aligned} &{} \neg\neg Dsj((lR)(X))= \lrcorner\lrcorner Dsj(X),\; \neg\neg Dsj((uR)(X))= \neg\neg Dsj(X) \end{aligned} $$
(87)
$$\displaystyle \begin{aligned} &{} Dsj((lR)(X))\leq\lrcorner\lrcorner Dsj((lR)(X))=\lrcorner\lrcorner Dsj(X)\leq \end{aligned} $$
(88)
$$\displaystyle \begin{aligned} &\leq\neg\neg Dsj(X)=\neg\neg Dsj((uR)(X))\leq Dsj((uR)(X)) \end{aligned} $$
(89)

Proof

First, \(\neg - X=-\mathbb C(-X)=\mathbb I(X)\), \(\neg \neg -X=\mathbb I-\mathbb I(--X)=\mathbb I-\mathbb I(X)\), and so on. Second, \(\mathbb {I}\mathbb C\mathbb I(X)\subseteq \mathbb I\mathbb C(X)\). In view of these equations and dis-equations the proofs are just easy calculations. For instance, (85) is proved as follows:

$$\displaystyle \begin{aligned} \neg\neg Dsj(X)&=\neg\neg\langle\mathbb I(X),-\mathbb C(X)\rangle= \langle\neg-\mathbb C(X),\neg\neg-\mathbb C(X)\rangle=\\ &=\langle\mathbb I--\mathbb C(X),\mathbb I-\mathbb I--\mathbb C(X)\rangle= \langle \mathbb I \mathbb C(X),\mathbb I-\mathbb I\mathbb C(X)\rangle=\\ &=\langle\mathbb I\mathbb C(X),\mathbb I\mathbb C\mathbb I(-X)\rangle \end{aligned} $$

Theorem 117

Let Ω R(U) be a Boolean algebra, then for all X ⊆ U,

$$\displaystyle \begin{aligned} Dsj((lR)(X))=\lrcorner\lrcorner Dsj(X),\; Dsj((uR)(X))=\neg\neg Dsj(X) \end{aligned} $$
(90)

In other terms, the following diagrams commute:

Proof

(90) is based on the fact that ΩR(U) is made of clopen sets, so that for all X ⊆ U, \(\mathbb I\mathbb C(X)=\mathbb C(X)\) and \(\mathbb C\mathbb I(X)=\mathbb I(X)\). It follows from (85) and (84) that \(\lrcorner \lrcorner Dsj(X)=\langle \mathbb {ICI}(X),\mathbb {I}\mathbb {C}(-X)\rangle =\langle \mathbb I(X),\mathbb I\mathbb C(-X)\rangle =Dsj((lR)(X))\). The other equation comes from duality. □

Actually, \(\lrcorner \lrcorner Dsj(X)=Dsj((lR)(X))=\langle \mathbb I(X),-\mathbb I(X)\rangle \) follows from \(\mathbb {I}\mathbb {C}(-X)=\mathbb C(-X)=-\mathbb I(X)\). Similarly, \(\neg \neg Dsj(X)=Dsj((uR)(X))=\langle \mathbb C(X),-\mathbb C(X)\rangle \). Indeed, if B is a Boolean algebra, then in the semi-simple Nelson algebra N (B), ¬¬a = 〈¬a 2, a 2〉 and \(\lrcorner \lrcorner a=\langle a_1,\neg a_1\rangle \), any a.
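The commuting equations (90) of Theorem 117 are easy to test exhaustively on a toy equivalence relation. The sketch below is illustrative only (bp stands for b′; the setup is hypothetical, not the chapter's running example).

```python
from itertools import chain, combinations

U = frozenset({'a', 'b', 'bp'})                  # bp stands for b'
classes = [frozenset({'a'}), frozenset({'b', 'bp'})]

lower = lambda X: frozenset(x for c in classes if c <= X for x in c)   # (lR)(X)
upper = lambda X: frozenset(x for c in classes if c & X for x in c)    # (uR)(X)
dsj   = lambda X: (lower(X), U - upper(X))

hneg   = lambda X: U - X                 # Heyting ¬ on clopen sets = complement
wneg   = lambda p: (p[1], hneg(p[1]))    # ¬
corner = lambda p: (hneg(p[0]), p[0])    # ⌟

for A in chain.from_iterable(combinations(sorted(U), k) for k in range(4)):
    X = frozenset(A)
    assert corner(corner(dsj(X))) == dsj(lower(X))   # ⌟⌟Dsj(X) = Dsj((lR)(X))
    assert wneg(wneg(dsj(X))) == dsj(upper(X))       # ¬¬Dsj(X) = Dsj((uR)(X))
```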

We know that ¬¬(Dsj( ΩR(U))) is a Boolean algebra if equipped with ∩, ⊔ and ¬, where ¬ is the set-theoretic complementation. In case R is an equivalence relation, we can prove something special:

Theorem 118

Let N be a semi-simple Nelson algebra and ¬¬(N) the lattice of regular elements of N . Then ⊔ coincides with ∨.

In Theorem 117 it was actually proved that in semi-simple Nelson algebras ¬¬ is a topological closure operator, hence it is additive. Another proof is the following: if N is semi-simple, then ¬ is a pseudo-complementation. It follows that ¬¬(¬¬a ∨¬¬b) = ¬(¬¬¬a ∧¬¬¬b) = ¬(¬a ∧¬b). In Sect. 7.8 we shall prove that ¬(x ∧ y) = ¬x ∨¬y. Hence, ¬(¬a ∧¬b) = ¬¬a ∨¬¬b.

In a rough set perspective, one can prove by easy calculation that

$$\displaystyle \begin{aligned} &\neg\neg Dsj(X\cup Y)=\neg\neg Dsj(X)\cup\neg\neg Dsj(Y).\\ \text{Therefore, } & \neg\neg(\neg\neg Dsj(X)\cup\neg\neg Dsj(Y))=\neg\neg\neg\neg Dsj(X\cup Y)=\\ & \neg\neg Dsj(X\cup Y)= \neg\neg Dsj(X)\cup\neg\neg Dsj(Y). \end{aligned} $$

Notice, on the contrary, that Dsj(X ∪ Y ) ≥ Dsj(X) ∪ Dsj(Y ), because \(\mathbb I_R(X\cup Y)\supseteq \mathbb I_R(X)\cup \mathbb I_R(Y)\).
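The additivity of ¬¬∘Dsj, and the mere superadditivity of Dsj itself, can both be confirmed by brute force on the same kind of toy approximation space. Here ¬¬⟨a 1, a 2⟩ is computed directly as ⟨−a 2, a 2⟩, its value in the clopen (semi-simple) case; the universe and classes are hypothetical.

```python
from itertools import chain, combinations

U = frozenset({'a', 'b', 'bp'})                  # bp stands for b'
classes = [frozenset({'a'}), frozenset({'b', 'bp'})]
lower = lambda X: frozenset(x for c in classes if c <= X for x in c)
upper = lambda X: frozenset(x for c in classes if c & X for x in c)
dsj   = lambda X: (lower(X), U - upper(X))

wneg2 = lambda p: (U - p[1], p[1])       # ¬¬<a1, a2> = <¬a2, a2> on clopen sets
join  = lambda p, q: (p[0] | q[0], p[1] & q[1])
le    = lambda p, q: p[0] <= q[0] and q[1] <= p[1]

subsets = [frozenset(s) for s in
           chain.from_iterable(combinations(sorted(U), k) for k in range(4))]
for X in subsets:
    for Y in subsets:
        # ¬¬ Dsj is additive on unions ...
        assert wneg2(dsj(X | Y)) == join(wneg2(dsj(X)), wneg2(dsj(Y)))
        # ... while Dsj itself is only superadditive
        assert le(join(dsj(X), dsj(Y)), dsj(X | Y))
```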

It is natural to ask what is the element x of this algebra such that ¬¬a = J x(a) for any element a.

Theorem 119

Let B be a Boolean algebra. Then for any a ∈ B , the least dense element of \({\mathbf {N}}_{\equiv _{J^a}}(\mathbf {B})\) isa, 0〉.

Proof

An element 〈x 1, x 2〉 is dense if ¬〈x 1, x 2〉 = 〈x 2, ¬x 2〉 = 0. Hence x 2 = 0. But because of the filtration clause, x 1 ∨ 0 ≥B a. In consequence x 1 ≥B a. □

Corollary 120

In any semi-simple Nelson algebra \({\mathbf {N}}_{\equiv _{J^a}}(\mathbf {B})\) , J a, 0〉(x) = ¬¬x, any x.

Proof

From Theorem 77.(1). □

So we have seen that the set \(J^{\langle a,0\rangle }(N_{\equiv _{J^a}}(\mathbf {B}))\) forms a subalgebra of \(N_{\equiv _{J^a}}(\mathbf {B})\). All the elements of this subalgebra are regular and complemented. The set of complemented elements of an algebra is called the center of the algebra.

Theorem 121

Given a semi-simple Nelson algebra, for any element a, if a = ¬¬a then all the three negations are complementations of a.

Proof

From (66) and (76) \(\neg \neg \neg a=\lrcorner \neg \neg a=\sim \neg \neg a\). Therefore, from (77) all the three negations are complementations of ¬¬a. □

In view of (76).(i), the above result does not hold for generic elements.

From the above discussion and Definition 6, \(\sim \lrcorner =\lrcorner \lrcorner =\neg \lrcorner \) is a necessity operator \(\mathbb L\) and \(\sim \neg =\neg \neg =\lrcorner \neg \) is a possibility operator \(\mathbb M\).

Example 122

Consider the usual preorder P without the elements v and c. Then we obtain an equivalence relation E = {〈a, a〉, 〈b, b〉, 〈b′, b′〉, 〈b, b′〉, 〈b′, b〉} on a set Z = {a, b, b′}. The only isolated element is a. Below we depict the Boolean algebra ΩE(Z), the resulting Nelson space and the rough set system Rsj( ΩE(Z)) as the lattice \(N_{\equiv _{J^{\{a\}}}}(\Omega _E(Z))\) without decorations “+ ”:

Notice that 〈Z , ≤, f〉 is not obtained from the dual space of ΩE(Z) but from 〈Z, E〉. In fact, as we have seen, the dual space of ΩE(Z) is a T 0-ification in which b and b′ collapse into a single point {b, b′}.

Dsj({a}) = 〈{a}, {b, b′}〉. Dsj({a, b}) = 〈{a}, ∅〉. Notice that if X is an exact set, that is, X = (lR)(X) = (uR)(X), then Dsj(X) is a regular element of the algebra. The exact sets are {a}, {b, b′}, Z and ∅. Their Dsj-images lie in the center of \(N_{\equiv _{J^{\{a\}}}}(\Omega _E(Z))\). Let us verify some cases: ¬〈{a}, ∅〉 = 〈∅, Z〉. 〈{a}, ∅〉 is the least dense element. ¬〈∅, {a}〉 = 〈{a}, {b, b′}〉. ¬〈∅, {a}〉∧〈∅, {a}〉 = 〈∅, Z〉. ¬〈∅, {a}〉∨〈∅, {a}〉 = 〈{a}, ∅〉. \(\lrcorner \langle \emptyset ,\{a\}\rangle =\langle Z,\emptyset \rangle \). \(\lrcorner \langle \emptyset ,\{a\}\rangle \wedge \langle \emptyset ,\{a\}\rangle =\langle \emptyset ,\{a\}\rangle \). ¬¬〈{a}, ∅〉 = 〈Z, ∅〉. \(\lrcorner \lrcorner \langle \{a\},\emptyset \rangle =\langle \{a\},\{b,b'\}\rangle \).
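These calculations can be double-checked mechanically. A minimal Python sketch encodes each rough set as a pair of frozensets over Z = {a, b, b′}, with ¬〈x1, x2〉 = 〈x2, −x2〉 and \(\lrcorner\)〈x1, x2〉 = 〈−x1, x1〉, and reproduces the values just listed:

```python
Z = frozenset({"a", "b", "b'"})

def comp(x):    return Z - x
def neg(p):     return (p[1], comp(p[1]))   # ¬<x1, x2> = <x2, -x2>
def corn(p):    return (comp(p[0]), p[0])   # ⌐<x1, x2> = <-x1, x1>
def meet(p, q): return (p[0] & q[0], p[1] | q[1])
def join(p, q): return (p[0] | q[0], p[1] & q[1])

x = (frozenset(), frozenset({"a"}))          # the pair <∅, {a}>
assert neg(x) == (frozenset({"a"}), frozenset({"b", "b'"}))
assert meet(neg(x), x) == (frozenset(), Z)   # = <∅, Z>, the bottom element
assert join(neg(x), x) == (frozenset({"a"}), frozenset())
assert corn(x) == (Z, frozenset())
assert meet(corn(x), x) == x                 # ⌐x ∧ x gives back <∅, {a}>

top = (frozenset({"a"}), frozenset())        # <{a}, ∅>, the least dense element
assert neg(neg(top)) == (Z, frozenset())
assert corn(corn(top)) == (frozenset({"a"}), frozenset({"b", "b'"}))
```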

7.7 Negations and Dual Pseudo-Complementation

Given a semi-simple Nelson algebra N (B) obtained from a Boolean algebra B, we know that a ⊃ b is the pseudo-complement of a relative to b and ¬a the pseudo-complement of a. If we reverse the order of ⊃ we obtain another operation:

Definition 123 (Pseudo-Supplementation)

Let L be a bounded lattice. If for all a, b, x ∈L the following holds:

$$\displaystyle \begin{aligned} a\vee x\geq b\text{ if and only if } x\geq a\subset b \end{aligned} $$
(91)

then a ⊂ b is called the pseudo-supplement of a relative to b. The element a ⊂ 1 is called the pseudo-supplement of a.

Therefore, a ⊂ b is the least element x such that a ∨ x ≥ b. In other terms, ⊂ is the lower adjoint of ∨. In consequence ∨ is multiplicative, so that L is distributive.

Theorem 124

Let N be a semi-simple Nelson algebra, then

$$\displaystyle \begin{aligned} a\subset b=\sim(\sim a\supset\sim b) \end{aligned} $$
(92)

Let us verify that for any a in N, \(a\subset 1=\lrcorner a\): \(\sim (\sim a\supset \sim b)=\sim (\sim \lrcorner \sim \sim a\vee \sim b\vee (\lrcorner \sim a\wedge \lrcorner \sim \sim b))=\sim (\sim \lrcorner a\vee \sim b\vee (\lrcorner \sim a\wedge \lrcorner b))\). Therefore, \(a\subset 1=\sim (\sim \lrcorner a\vee 0\vee (\lrcorner \sim a\wedge 0))=\sim (\sim \lrcorner a)=\lrcorner a\).
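Both operations can also be computed by brute force from their adjunction properties, which gives an independent check of (92) and of a ⊂ 1 = \(\lrcorner\)a. The Python sketch below works over all disjoint pairs of subsets of a toy three-element universe (a hypothetical semi-simple Nelson algebra with the largest filtration congruence), computing ⊃ as the relative pseudo-complement and ⊂ as the relative pseudo-supplement:

```python
from itertools import combinations
from functools import reduce

U = frozenset(range(3))
def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]
pairs = [(x, y) for x in subsets(U) for y in subsets(U) if not (x & y)]

def meet(p, q): return (p[0] & q[0], p[1] | q[1])
def join(p, q): return (p[0] | q[0], p[1] & q[1])
def snot(p):    return (p[1], p[0])         # strong negation ∼
def corn(p):    return (U - p[0], p[0])     # ⌐<x1, x2> = <¬x1, x1>
def leq(p, q):  return p[0] <= q[0] and p[1] >= q[1]

def imp(a, b):   # a ⊃ b: the greatest x with a ∧ x ≤ b
    return reduce(join, (x for x in pairs if leq(meet(a, x), b)))

def cimp(a, b):  # a ⊂ b: the least x with a ∨ x ≥ b
    return reduce(meet, (x for x in pairs if leq(b, join(a, x))))

one = (U, frozenset())
for a in pairs:
    assert cimp(a, one) == corn(a)          # a ⊂ 1 = ⌐a
    for b in pairs:
        assert cimp(a, b) == snot(imp(snot(a), snot(b)))  # equation (92)
```

Since the lattice is finite and distributive, the join (resp. meet) of all candidates is itself a candidate, so the reductions really compute the adjoints.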

This justifies why \(\lrcorner \) and ¬ have dual properties, for instance with respect to the De Morgan laws, as we shall see in the next section.

Definition 125 (Co-Heyting Algebras)

A bounded distributive lattice L such that a ⊂ b is defined for all a, b ∈L, is called a co-Heyting algebra.

Definition 126 (Bi-Heyting Algebras)

A bounded distributive lattice L that is both a Heyting and a co-Heyting algebra is called a bi-Heyting algebra.

Notice that the system of all closed subsets of a topological space is a co-Heyting algebra in which given two closed sets X and Y , \(X\subset Y=\mathbb C(Y\cap -X)\) and \(\lrcorner X=\mathbb C(-X)=-\mathbb I(X)\). This justifies the definition of the operations for the decreasing representation Dcr( ΩR(U)) that were provided in the excursus before Sect. 7.5.

Given a co-Heyting algebra, in [22] William Lawvere defined a boundary operation \(\delta (a):=a\wedge \lrcorner a\) and pointed out that this operation fulfils the following rule δ(a ∧ b) = (δ(a) ∧ b) ∨ (a ∧ δ(b)). This rule is consistent with our spatial intuition, if we think of two overlapping sets. Moreover, it is consistent with the Leibniz rule for differentiation of a product: \(\frac {d}{dx}(a\cdot b)=\frac {da}{dx}\cdot b+a\cdot \frac {db}{dx}\).

But in Sect. 7.5 we have seen, indeed, that given a semi-simple Nelson algebra \({\mathbf {N}}_{\equiv _{J^S}}(\Omega _R(U))\), with R an equivalence relation, \(Dsj(X)\wedge \lrcorner Dsj(X)\) “represents” the boundary of X (see [31]).

We now prove that in a semi-simple Nelson algebra N (B) the Leibniz rule holds for \(\delta (x)=x\wedge \lrcorner x\):

$$\displaystyle \begin{aligned} (a\wedge b)\wedge\lrcorner(a\wedge b)&=\langle a_1\wedge b_1,a_2\vee b_2\rangle\wedge\langle\neg(a_1\wedge b_1),a_1\wedge b_1\rangle\\ &=\langle 0,a_2\vee b_2\vee(a_1\wedge b_1)\rangle=\langle 0,(a_2\vee b_2\vee a_1)\wedge(a_2\vee b_2\vee b_1)\rangle\\ &=\langle 0,a_2\vee b_2\vee a_1\rangle\vee\langle 0,a_2\vee b_2\vee b_1\rangle\\ &=\langle a_1\wedge \neg a_1\wedge b_1, a_2\vee b_2\vee a_1\rangle\vee\langle a_1\wedge b_1\wedge\neg b_1,a_2\vee b_2\vee b_1\rangle\\ &=(\langle a_1,a_2\rangle\wedge\langle\neg a_1, a_1\rangle\wedge\langle b_1,b_2\rangle)\vee(\langle a_1,a_2\rangle\wedge\langle b_1,b_2\rangle\wedge\langle\neg b_1,b_1\rangle)\\ &=((a\wedge\lrcorner a)\wedge b)\vee(a\wedge(b\wedge\lrcorner b)) \end{aligned} $$
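The derivation can also be confirmed exhaustively on a small model. A Python sketch over all disjoint pairs of subsets of a toy four-element universe (a hypothetical semi-simple Nelson algebra, with \(\lrcorner\)〈x1, x2〉 = 〈¬x1, x1〉) checks the Leibniz rule for δ(x) = x ∧ \(\lrcorner\)x:

```python
from itertools import combinations

U = frozenset(range(4))
def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]
pairs = [(x, y) for x in subsets(U) for y in subsets(U) if not (x & y)]

def meet(p, q): return (p[0] & q[0], p[1] | q[1])
def join(p, q): return (p[0] | q[0], p[1] & q[1])
def corn(p):    return (U - p[0], p[0])   # ⌐<x1, x2> = <¬x1, x1>
def delta(p):   return meet(p, corn(p))   # boundary operation δ(x) = x ∧ ⌐x

for a in pairs:
    for b in pairs:
        # Leibniz rule: δ(a ∧ b) = (δ(a) ∧ b) ∨ (a ∧ δ(b))
        assert delta(meet(a, b)) == join(meet(delta(a), b), meet(a, delta(b)))
```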

We conclude the section with these straightforward results on rough set systems:

Theorem 127

Let B be a Boolean algebra and ≡ a congruence on B . Then:

$$\displaystyle \begin{aligned} &\mathbf{H}(\mathbf{B}):=\langle N_\equiv(\mathbf{B}),\vee,\wedge,\neg,\supset,0,1\rangle\mathit{\text{ is a Heyting algebra.}}\\ &\mathbf{CH}(\mathbf{B}):=\langle N_\equiv(\mathbf{B}),\vee,\wedge,\lrcorner,\subset,0,1\rangle\mathit{\text{ is a co-Heyting algebra.}}\\ &\mathbf{BH}(\mathbf{B}):=\langle N_\equiv(\mathbf{B}),\vee,\wedge,\neg,\lrcorner,\supset,\subset,0,1\rangle\mathit{\text{ is a bi-Heyting algebra.}} \end{aligned} $$

Corollary 128

Let Ω R(U) be an approximation space with R an equivalence relation. Then H( Ω R(U)) is a Heyting algebra, CH( Ω R(U)) is a co-Heyting algebra, and BH( Ω R(U)) is a bi-Heyting algebra.

7.8 Negations and De Morgan Laws

By definition, both De Morgan laws hold for the strong negation ∼. On the contrary, by means of some calculation we obtain:

$$\displaystyle \begin{aligned} &\neg(a\wedge b)=\neg a\vee\neg b, \end{aligned} $$
(93)
$$\displaystyle \begin{aligned} &\neg(a\vee b)=\langle a_2\wedge b_2,\neg(a_2\wedge b_2)\rangle\leq\langle a_2\wedge b_2,\neg a_2\vee\neg b_2\rangle = \neg a\wedge\neg b \end{aligned} $$
(94)
$$\displaystyle \begin{aligned} &\lrcorner(a\vee b)=\lrcorner a\wedge\lrcorner b \end{aligned} $$
(95)
$$\displaystyle \begin{aligned} &\lrcorner(a\wedge b)=\langle\neg(a_1\wedge b_1),a_1\wedge b_1\rangle\geq\langle\neg a_1\vee\neg b_1,a_1\wedge b_1\rangle\geq\lrcorner a\vee\lrcorner b. \end{aligned} $$
(96)

The same equalities and inequalities hold for double negated elements, too. For instance, ¬(¬¬a ∧¬¬b) = ¬¬¬a ∨¬¬¬b.

Also, \(\neg (\lrcorner \lrcorner a\vee \lrcorner \lrcorner b)\leq \neg \lrcorner \lrcorner a\wedge \neg \lrcorner \lrcorner b\) and so on, while \(\neg (\lrcorner \lrcorner a\wedge \lrcorner \lrcorner b)=\neg \lrcorner \lrcorner a\vee \neg \lrcorner \lrcorner b\).

Note that the above relations hold for the operations ¬ and \(\lrcorner \) in a generic Nelson algebra N. On the contrary, if ¬ is the pseudo-complementation of N qua Heyting algebra (for instance if N is a finite Nelson lattice), then ¬(a ∧ b) ≥¬a ∨¬b, while ¬(a ∨ b) = ¬a ∧¬b. Symmetrically, if \(\lrcorner \) is the dual pseudo-complementation in the co-Heyting algebra N op, then \(\lrcorner (a\vee b)\leq \lrcorner a\wedge \lrcorner b\) and \(\lrcorner (a\wedge b)=\lrcorner a\vee \lrcorner b\).

If the Nelson algebra is semi-simple, things change considerably. In fact, in this case the Nelson operator ¬ really is a pseudo-complementation and \(\lrcorner \) a dual pseudo-complementation. Clearly, we expect the De Morgan laws for Heyting algebras and co-Heyting algebras to hold:

$$\displaystyle \begin{aligned}\neg(a\vee b)=\neg a\wedge\neg b \text{ and }\lrcorner(a\wedge b)=\lrcorner a\vee\lrcorner b.\end{aligned}$$

They hold because the corresponding identities are valid in the underlying Boolean algebra.
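In the semi-simple case all four laws (93)–(96) can be verified exhaustively; over a Boolean base the inequalities (94) and (96) collapse to equalities. A Python sketch over all disjoint pairs of subsets of a toy three-element universe:

```python
from itertools import combinations

U = frozenset(range(3))
def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]
pairs = [(x, y) for x in subsets(U) for y in subsets(U) if not (x & y)]

def meet(p, q): return (p[0] & q[0], p[1] | q[1])
def join(p, q): return (p[0] | q[0], p[1] & q[1])
def neg(p):     return (p[1], U - p[1])   # ¬<x1, x2> = <x2, -x2>
def corn(p):    return (U - p[0], p[0])   # ⌐<x1, x2> = <-x1, x1>

for a in pairs:
    for b in pairs:
        assert neg(meet(a, b)) == join(neg(a), neg(b))     # (93)
        assert neg(join(a, b)) == meet(neg(a), neg(b))     # (94), an equality here
        assert corn(join(a, b)) == meet(corn(a), corn(b))  # (95)
        assert corn(meet(a, b)) == join(corn(a), corn(b))  # (96), an equality here
```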

The law (93) suggests that semi-simple Nelson algebras can be made into Heyting algebras with very peculiar properties, because that law does not hold in general for pseudo-complementation. Symmetrically for their co-Heyting algebras. Indeed, in [43] it was proved that if (93) and (95) hold then \(\lrcorner \neg \) and \(\neg \lrcorner \) are idempotent operators and \(\lrcorner \neg a\) is the least complemented element above a, while \(\neg \lrcorner a\) is the largest complemented element below a.

In fact we have seen in Theorem 117 that in semi-simple Nelson algebras, \(\lrcorner \neg =\neg \neg =\sim \neg \) and \(\neg \lrcorner =\lrcorner \lrcorner =\sim \lrcorner \) correspond to topological modal (possibility) and, respectively, topological co-modal (necessity) operators which project an element a onto the sublattice of regular elements which is, also, the sublattice of complemented elements of the algebra.

Actually, another reason why the above De Morgan laws hold in semi-simple Nelson algebras is a general result by Johnstone (see [20]): in a Heyting algebra H, the De Morgan law (93) is equivalent to the fact that the set of regular elements forms a sublattice of H. And in Theorem 118 it was proved that this is the case for semi-simple Nelson algebras, indeed.

7.9 Changing Information and Changing Logic

We have mentioned that semi-simple Nelson algebras are equivalent to three-valued Łukasiewicz algebras. Now we go into some detail.

Definition 129 (Łukasiewicz Algebra)

A three-valued Łukasiewicz algebra is a distributive lattice 〈A, ∨, ∧, ∼, 0, 1〉 with two additive and multiplicative unary operations φ 1, φ 2 satisfying:

$$\displaystyle \begin{aligned} &\varphi_1(x)\geq\varphi_2(x),\;\varphi_i(x)\vee\sim\varphi_i(x)=1,\; \varphi_i(x)\wedge\sim\varphi_i(x)=0,\;\varphi_i(\sim x)=\sim\varphi_j(x)\ (i\neq j)\\ &\varphi_i(\varphi_j(x))=\varphi_j(x),\;x\vee\varphi_1(x)=\varphi_1(x),\; x\wedge\varphi_2(x)=\varphi_2(x),\;\varphi_i(0)=0,\;\varphi_i(1)=1\\ &\sim x\wedge\varphi_2(x)=0,\;\sim x\vee\varphi_1(x)=1,\; y\wedge(x\vee\sim\varphi_1(x)\vee\varphi_2(y))=y. \end{aligned} $$

It is possible to prove (see [39] or [32]):

Theorem 130

Let B be a Boolean algebra and ≡ a congruence on B . Then: L(B) := 〈N (B), ∨, ∧, ∼, φ 1, φ 2, 0, 1〉 is a three-valued Łukasiewicz algebra, where \(\varphi _1=\lrcorner \neg =\sim \neg =\neg \neg \) and \(\varphi _2=\neg \lrcorner =\sim \lrcorner =\lrcorner \lrcorner \).

Corollary 131

Let Ω R(U) be an approximation space with R an equivalence relation. Then, L( Ω R(U)) is a three-valued Łukasiewicz algebra.

See Example 122.
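The Łukasiewicz axioms can be verified directly for the pair representation. The Python sketch below takes ≡ to be the largest congruence, so that all disjoint pairs over a toy three-element universe are admitted, and checks a selection of axioms for φ1 = ¬¬ and φ2 = \(\lrcorner\lrcorner\), including the interchange law φ1(∼x) = ∼φ2(x) and the determination principle:

```python
from itertools import combinations

U = frozenset(range(3))
def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]
N = [(x, y) for x in subsets(U) for y in subsets(U) if not (x & y)]

def meet(p, q): return (p[0] & q[0], p[1] | q[1])
def join(p, q): return (p[0] | q[0], p[1] & q[1])
def snot(p):    return (p[1], p[0])       # strong negation ∼
def phi1(p):    return (U - p[1], p[1])   # φ1 = ¬¬ = ∼¬ (possibility)
def phi2(p):    return (p[0], U - p[0])   # φ2 = ⌐⌐ = ∼⌐ (necessity)
def leq(p, q):  return p[0] <= q[0] and p[1] >= q[1]

one, zero = (U, frozenset()), (frozenset(), U)
for x in N:
    assert leq(phi2(x), phi1(x))                 # φ1(x) ≥ φ2(x)
    assert join(phi1(x), snot(phi1(x))) == one   # φ1(x) ∨ ∼φ1(x) = 1
    assert meet(phi2(x), snot(phi2(x))) == zero  # φ2(x) ∧ ∼φ2(x) = 0
    assert join(x, phi1(x)) == phi1(x)           # x ∨ φ1(x) = φ1(x)
    assert meet(x, phi2(x)) == phi2(x)           # x ∧ φ2(x) = φ2(x)
    assert phi1(snot(x)) == snot(phi2(x))        # interchange law
    for y in N:                                  # determination principle
        assert meet(y, join(join(x, snot(phi1(x))), phi2(y))) == y
```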

It is interesting to note that our relative pseudo-complementation ⊃ of Theorem 115 coincides with the so-called Moisil residuation \(\sqsupset \) which is definable in three-valued Łukasiewicz algebras: \(a\sqsupset b:=b\vee \sim \varphi _1(a)\vee (\sim \varphi _2(a)\wedge \varphi _1(b))\).

Semi-simple Nelson algebras or three-valued Łukasiewicz algebras represent the case in which the filtration congruence ≡ is generic, that is, ≡ is \(\equiv _{J^a}\) for 0 ≤ a ≤ 1. What happens in the extreme cases, that is, when a = 1 and a = 0?

We have seen that \({\mathbf {N}}_{\equiv _{J^1}}(\Omega _R(U))\) is a Boolean algebra isomorphic to ΩR(U). If, instead, a = 0, then we obtain a Post algebra of order three.

Definition 132 (Post Algebra)

A Post algebra of order three is a Heyting algebra 〈A, ∨, ∧, ⇒, ¬, 0, 1〉 equipped with a three-element chain 0 = e 0 ≤ e 1 ≤ e 2 = 1 and two unary multiplicative and additive operators D 1, D 2 such that, for 1 ≤ i, j ≤ 2:

$$\displaystyle \begin{aligned} &D_1(x)\vee \neg D_1(x)=1,\;D_i(\neg x)=\neg D_i(x),\;D_i(D_j(x))=D_j(x)\\ &x=(D_1(x)\wedge e_1)\vee(D_2(x)\wedge e_2)\text{ - monotonic representation of }x\\ &D_i(x\Longrightarrow y)=(D_1(x)\Longrightarrow D_1(y))\wedge(D_2(x)\Longrightarrow D_2(y))\\ &D_i(e_j)= \begin{cases} 1 & \text{for {$1\leq i\leq j\leq 2$}} \\ 0 & \text{for {$2\geq i\gneq j\geq 0$}} \end{cases} \end{aligned} $$

Let then P = 〈A, ∨, ∧, ⇒, ¬, e 0, e 1, e 2, D 1, D 2, 0, 1〉. Since D i(x) ∨¬D i(x) = D 1(D i(x)) ∨¬D 1(D i(x)) = 1, it is evident that for any x, D i(x) belongs to the center of P.

Post algebras of order three are special cases of three-valued Łukasiewicz algebras. In fact, if a Łukasiewicz algebra L = 〈A, ∨, ∧, ∼, φ 1, φ 2, 0, 1〉 has a chain 0 ≤ δ ≤ 1, by setting D 1 = φ 1, D 2 = φ 2, ¬x =∼ D 1(x), one obtains a Post algebra of order three (see [32]).

Notice that, indeed, ∼ D 1(x) =∼ φ 1(x) =∼∼¬x = ¬x.

Now we exhibit a Post algebra of ordered pairs of disjoint elements (see [32]). We have just noticed that D i(x) is complemented. Moreover, D 1(x) =∼¬(x). In view of (65) and Theorem 121, this suggests that the underlying algebra is Boolean. Therefore, let B be a Boolean algebra and ≡ be the largest congruence on B. Let N (B) be the set of ordered pairs of disjoint elements of B. We have already seen that since ≡ is the largest congruence on B, 1 ≡ 0, so that any pair of disjoint elements of B is admitted by the filtration rule a 1 ∨ a 2 ≡ 1; hence 〈0, 0〉 is admitted too. We know that N (B) can be made into a three-valued Łukasiewicz algebra and how to transform it into a Post algebra of order three. We only need a chain of values. Obviously, e 0 = 〈0, 1〉 and e 2 = 〈1, 0〉. We claim that e 1 is 〈0, 0〉. Clearly, 〈0, 1〉≤〈0, 0〉≤〈1, 0〉. Moreover D 1(〈0, 0〉) =∼¬〈0, 0〉 =∼〈0, 1〉 = 1 and \(D_2(\langle 0,0\rangle )=\sim \lrcorner \langle 0,0\rangle =\sim \langle 1,0\rangle =0\).
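These facts can be checked computationally. A Python sketch over all disjoint pairs of subsets of a toy three-element universe (standing in for an arbitrary finite Boolean algebra with the largest congruence) verifies that 〈0, 0〉 behaves as the middle value e1 and that the monotonic representation of Definition 132 holds:

```python
from itertools import combinations

U = frozenset(range(3))
def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]
pairs = [(x, y) for x in subsets(U) for y in subsets(U) if not (x & y)]

def meet(p, q): return (p[0] & q[0], p[1] | q[1])
def join(p, q): return (p[0] | q[0], p[1] & q[1])
def D1(p): return (U - p[1], p[1])   # D1 = ∼¬ = ¬¬
def D2(p): return (p[0], U - p[0])   # D2 = ∼⌐ = ⌐⌐

e0, e1, e2 = (frozenset(), U), (frozenset(), frozenset()), (U, frozenset())
assert D1(e1) == e2 and D2(e1) == e0   # e1 = <0, 0> behaves as the middle value

for x in pairs:                        # monotonic representation of x
    assert join(meet(D1(x), e1), meet(D2(x), e2)) == x
```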

Now, the largest congruence on B is \(\equiv _{J^0}\). So, we obtain:

Theorem 133

Let B be a Boolean algebra, then

$$\displaystyle \begin{aligned} \mathbf{P}(\mathbf{B}):=\langle N_{\equiv_{J^0}}(\mathbf{B}),\vee,\wedge,\neg,\supset,\sim\neg,D_1,D_2,e_0,e_1,e_2\rangle \end{aligned} $$

is a Post algebra of order three, if D 1 =∼¬, \(D_2=\sim \lrcorner \) , e 0 = 〈∅, U〉, e 1 = 〈∅, ∅〉, and e 2 = 〈U, ∅〉.

From the rough set perspective, given an approximation space ΩR(U) with R an equivalence relation, we obtain a Post algebra of order three if and only if there are no isolated elements:

Theorem 134

Let R be an equivalence relation on a set U such that there are no isolated elements, then \(N_{\equiv _{J^\emptyset }}(\Omega _R(U))=Dsj(\Omega _R(U))\) and

$$\displaystyle \begin{aligned} \mathbf{P}(\Omega_R(U)):=\langle N_{\equiv_{J^\emptyset}}(\Omega_R(U)),\vee,\wedge,\neg,\supset,\sim\neg,D_1,D_2,e_0,e_1,e_2\rangle \end{aligned} $$

is a Post algebra of order three.

We have noticed that the intermediate value of the above Post algebra is 〈0, 0〉 which is the least dense element of the algebra. And we have also noticed that given a Boolean algebra B and a ∈B, 〈a, 0〉 is the least dense element in the lattice \(\langle N_{\equiv _{J^a}}(\bf B),\vee ,\wedge ,\neg ,0,1\rangle \), therefore also in L(B) we can set a chain 0 = 〈0, 1〉 = e 0 ≤ e 1 = 〈a, 0〉≤ e 2 = 〈1, 0〉 = 1.

However, D 1(e 1) = φ 1(e 1) =∼¬〈a, 0〉 =∼〈0, 1〉 = 1, which is consistent with Post algebras, but \(D_2(e_1)=\sim \lrcorner e_1=\sim \langle \neg a, a\rangle =\langle a,\neg a\rangle \gneq \langle 0,1\rangle =0\) whenever a ≠ 0, while in Post algebras D 2(e 1) = D 2(e 0) = 0.

Actually, if one assumes 〈a, 0〉 as intermediate value, one obtains lattices of another kind, called chain-based lattices, namely P 2-lattices, which are generalisations of Post algebras (see [13]).

Example 135

Let P = {a, a′, b, b′} and E be the equivalence relation depicted below together with the Boolean algebra ΩE(P), the resulting Nelson space and Dsj( ΩE(P)) as the Post algebra of order three \(N_{\equiv _{J^{\emptyset }}}(\Omega _E(P))\) without decorations “+ ”. For instance, Dsj({a, b}) = 〈∅, ∅〉, Dsj({a, a′, b}) = 〈{a, a′}, ∅〉, Dsj({a, a′}) = 〈{a, a′}, {b, b′}〉.
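The Dsj-values of this example can be reproduced with a short Python sketch, taking the equivalence classes to be {a, a′} and {b, b′} (as the example's relation E suggests):

```python
P = frozenset({"a", "a'", "b", "b'"})
classes = [frozenset({"a", "a'"}), frozenset({"b", "b'"})]

def lower(X):  # lE(X): union of the classes contained in X
    return frozenset().union(*(c for c in classes if c <= X))

def upper(X):  # uE(X): union of the classes meeting X
    return frozenset().union(*(c for c in classes if c & X))

def dsj(X):    # Dsj(X) = <interior, exterior>
    return (lower(X), P - upper(X))

assert dsj(frozenset({"a", "b"})) == (frozenset(), frozenset())   # the middle value e1
assert dsj(frozenset({"a", "a'", "b"})) == (frozenset({"a", "a'"}), frozenset())
assert dsj(frozenset({"a", "a'"})) == (frozenset({"a", "a'"}), frozenset({"b", "b'"}))
```

Since no point is isolated, 〈∅, ∅〉 is reached as a Dsj-image, as Theorem 134 requires.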

8 Conclusions

In many respects, the logic of rough sets is still to be defined. In the case of classic rough sets based on equivalence relations, we have seen that their logic depends on the geometry of isolated points. In other terms, it depends on the set of items completely describable by the given properties, that is, items which are singled out by the properties, or information, we have. If all the items can be isolated by the properties, then we obtain a Boolean algebra. This is no surprise if we think of Classic Logic as the logic of perfect information: either α or ¬α. If some pieces of information are complete and others are incomplete, and we gather the completely describable items into a set S, then we obtain a semi-simple Nelson algebra, i.e. a three-valued Łukasiewicz algebra, in which the pair 〈S, ∅〉 is the least dense element and a local top element, in the sense that classical tautologies take values between 〈S, ∅〉 and the absolute top element 〈U, ∅〉. Vice versa, all classical contradictions lie between ∼〈S, ∅〉 = 〈∅, S〉 and the absolute bottom element 〈∅, U〉, so that 〈∅, S〉 is a local bottom element. If no items are completely described, then S = ∅ and rough set systems turn into Post algebras of order three, where the local top and bottom elements fuse into a state 〈∅, ∅〉 of complete indecision, a totally uninformed situation. Since it is often assumed that there are no isolated points, or completely described items, the logic of rough sets should be the one modelled by Post algebras of order three, not three-valued Łukasiewicz logic or connected mathematical objects (regular Stone algebras and the like).

However, in our opinion, the real world is a mélange of perfect and imperfect information. Hence three-valued Łukasiewicz logic, or Constructive Logic with Strong Negation plus an axiom for \(a\vee \lrcorner a=1\), only approximates the intrinsic logic of rough sets. These logics are not able to account for the double nature of perfect and imperfect information which is implicit in their algebraic models, as shown by the fact that any such algebra is the product of a Post algebra of order three, modelling the imperfect part, and a Boolean algebra, modelling the perfect part. Look at the Łukasiewicz algebra of Example 122. It is the product of the Boolean algebra B whose (in this case only) atom is {a} and a Post algebra P with elements built on the indiscernible elements b and b′:

Notice that the product of the least (only) dense element of B, that is, the top element 〈{a}, ∅〉 and the least dense element of P, which is the intermediate value 〈∅, ∅〉, gives the least dense element of the resulting three-valued Łukasiewicz algebra.

Finally, if an approximation space is induced by a partial or pre-order bounded by maximal states, then the intrinsic logic of the rough set system is E 0. In particular all the usual approximation spaces induced by a finite partial order are of this kind.