1 Introduction

Rough set theory, proposed by Pawlak [12], is a mathematical tool for data reasoning and may be seen as an extension of classical set theory. It has proved to be an effective approach for dealing with intelligent systems characterized by insufficient and incomplete information, and it has been successfully applied to machine learning, intelligent systems, inductive reasoning, pattern recognition, mereology, image processing, signal analysis, knowledge discovery, decision analysis, expert systems and many other fields [8–11].

One of the strengths of rough set theory is that an unknown target concept can be approximately characterized by the existing knowledge structures in a knowledge base. In rough set theory, knowledge is interpreted as a partition of the domain [23]. That is to say, all objects with identical descriptions are grouped into the same equivalence class. Based on such a classification, any set of objects can be approximated by its lower and upper approximations. Early rough set theory mainly considers an equivalence relation on the universe, which determines a classification (or partition) of the universe. One deals not only with a single classification (or partition) of the universe, but also with a family of classifications (or partitions) of the universe [4, 5]. This leads to the definition of a knowledge base, which is an important concept in rough set theory.

For a given knowledge base, one of the tasks in data mining and knowledge discovery is to generate new knowledge through known knowledge.

The relationship between information systems is a very important topic in the field of artificial intelligence. Mathematically, it can be described as a mapping between information systems. The approximations and reductions in the original system can be regarded as an encoding, while the image system is seen as an interpretive system. The concept of a homomorphism, as a tool for studying relationships between information systems in rough set theory, was introduced by Grzymala-Busse [2, 3]. A homomorphism can be viewed as a special relationship between information systems. As explained in [13], homomorphisms allow one to translate the information contained in one granular world into the granularity of another granular world and thus provide a mechanism for exchanging information between granular worlds. Li et al. [7] studied invariant characteristics of information systems under certain homomorphisms. Wang et al. [16–18, 20, 21] have extensively investigated generalized information systems under homomorphisms. Zhu et al. [26, 27] have also contributed to this line of work.

Although the relationship between relation information systems, which are a kind of generalized information system, has been studied in recent years, few studies focus on relationships between knowledge bases.

Since a knowledge base is a special relation information system, we may attempt to study relationships between knowledge bases by means of the corresponding results for relation information systems.

The purpose of this paper is to investigate relationships between knowledge bases by using homomorphisms between relation information systems. We concentrate on the dependency of knowledge bases and on knowledge reductions in knowledge bases. These structural features of the original knowledge base are preserved in the image knowledge base. That is, invariant characteristics of knowledge bases under homomorphisms are obtained.

The remaining part of this paper is organized as follows: Sect. 2 recalls some basic concepts about knowledge bases, relation information systems and consistent mappings; Sect. 3 recalls R-mappings and homomorphisms between relation information systems, and gives properties of R-mappings; Sect. 4 investigates relationships between knowledge bases and obtains some invariants and inverse invariants, such as knowledge bases, the dependency of knowledge bases, knowledge reductions, coordinate families and necessary relations; Sect. 5 gives lattice characteristics of the dependency of knowledge bases; Sect. 6 concludes this paper and highlights prospects for future development.

2 Preliminaries

In this section, we recall some basic concepts about knowledge bases, relation information systems and consistent mappings.

Throughout this paper, U denotes a nonempty finite set called the universe, \(U\times U\) denotes the cartesian product of U and U, and \(2^U\) denotes the set of all subsets of U. All mappings are assumed to be surjective.

For any \({\mathcal {R}}\subseteq 2^{U\times U}\), denote \(ind({\mathcal {R}})=\bigcap \nolimits _{R\in {\mathcal {R}}}R.\)

The successor neighborhood of \(x\in U\) with respect to \(R\in 2^{U\times U}\) will be denoted by \(R_{s}(x)\), that is, \(R_{s}(x)=\{y\in U:(x,y)\in R\}\) ([22]).

2.1 Knowledge bases

Recall that R is called a binary relation on U if \(R\in 2^{U\times U}\). R may be represented by a matrix. That is, if \(U=\{x_1, x_2, \ldots , x_m\}\), then R may be represented by the following matrix

$$\begin{aligned} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} R(x_1,x_1) &{} R(x_1,x_2) &{} \ldots &{} R(x_1,x_m)\\ R(x_2,x_1) &{} R(x_2,x_2) &{} \ldots &{} R(x_2,x_m)\\ \ldots &{} \ldots &{} \ldots &{}\ldots \\ R(x_m,x_1) &{} R(x_m,x_2) &{} \ldots &{} R(x_m,x_m) \end{array}\right) , \end{aligned}$$

where

$$\begin{aligned} R(x_i,x_j)={\left\{ \begin{array}{ll} 1,\ \ \ \ \ (x_i,x_j)\in R;\\ 0,\ \ \ \ \ (x_i,x_j)\not \in R. \end{array}\right. } \end{aligned}$$

Let R be a binary relation on U. R is called an equivalence relation if R is reflexive, symmetric and transitive.
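To make the matrix representation concrete, the following sketch in Python (an illustration for this note, not part of the cited works; the function names are our own) checks whether a 0/1 matrix encodes a reflexive, symmetric and transitive relation, i.e., an equivalence relation.

```python
# Sketch: test whether a 0/1 matrix R(x_i, x_j) encodes an equivalence relation.
def is_reflexive(M):
    return all(M[i][i] == 1 for i in range(len(M)))

def is_symmetric(M):
    n = len(M)
    return all(M[i][j] == M[j][i] for i in range(n) for j in range(n))

def is_transitive(M):
    n = len(M)
    return all(M[i][k] == 1
               for i in range(n) for j in range(n) for k in range(n)
               if M[i][j] == 1 and M[j][k] == 1)

def is_equivalence(M):
    return is_reflexive(M) and is_symmetric(M) and is_transitive(M)

# Example: the identity relation {(x1,x1),(x2,x2),(x3,x3)} is an equivalence relation.
print(is_equivalence([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # True
```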

In this paper, \({\mathcal {R}}^*(U)\) denotes the set of all equivalence relations on U.

For \(R\in {\mathcal {R}}^*(U)\) and \(x\in U\), \([x]_R=\{y\in U:xRy\}\) denotes the equivalence class of x. Additionally,

$$\begin{aligned} U/R=\{[x]_R:x\in U\}. \end{aligned}$$

If \(R\in {\mathcal {R}}^*(U)\), then for \(x\in U\), \(R_{s}(x)=[x]_{R}\); if \({\mathcal {R}}\subseteq {\mathcal {R}}^*(U)\), then \(ind({\mathcal {R}})\in {\mathcal {R}}^*(U).\)
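For later computations it is convenient to encode a relation as a set of ordered pairs. The sketch below (our own helper names, offered only as an illustration) computes \(ind({\mathcal {R}})\) as a plain intersection, the successor neighborhood \(R_{s}(x)\) and the quotient \(U/R\).

```python
# Sketch: relations as sets of ordered pairs over a finite universe U.
def ind(relations):
    """ind(R) = intersection of all relations in the family R."""
    rels = [set(r) for r in relations]
    out = rels[0]
    for r in rels[1:]:
        out &= r
    return out

def successor(R, x):
    """R_s(x) = {y : (x, y) in R}; equals [x]_R when R is an equivalence relation."""
    return {y for (u, y) in R if u == x}

def quotient(U, R):
    """U/R = {[x]_R : x in U} for an equivalence relation R."""
    return {frozenset(successor(R, x)) for x in U}
```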

Definition 2.1

([28]) A pair \((U,{\mathcal {R}})\) is called a knowledge base, if \({\mathcal {R}}\subseteq {\mathcal {R}}^*(U)\).

Example 2.2

Let \(U=\{x_1,x_2,x_3,x_4,x_5,x_6\}\). Put

$$\begin{aligned} R_1= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 1&{}1&{}0&{}0&{}1&{}0\\ 1&{}1&{}0&{}0&{}1&{}0\\ 0&{}0&{}1&{}1&{}0&{}1\\ 0&{}0&{}1&{}1&{}0&{}1\\ 1&{}1&{}0&{}0&{}1&{}0\\ 0&{}0&{}1&{}1&{}0&{}1\\ \end{array}\right) , ~~~~R_2=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 1&{}0&{}0&{}0&{}0&{}1\\ 0&{}1&{}1&{}1&{}1&{}0\\ 0&{}1&{}1&{}1&{}1&{}0\\ 0&{}1&{}1&{}1&{}1&{}0\\ 0&{}1&{}1&{}1&{}1&{}0\\ 1&{}0&{}0&{}0&{}0&{}1\\ \end{array}\right) ,\\ R_3= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 1&{}1&{}0&{}0&{}1&{}1\\ 1&{}1&{}0&{}0&{}1&{}1\\ 0&{}0&{}1&{}1&{}0&{}0\\ 0&{}0&{}1&{}1&{}0&{}0\\ 1&{}1&{}0&{}0&{}1&{}1\\ 1&{}1&{}0&{}0&{}1&{}1\\ \end{array}\right) , ~~~~R_4=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 1&{}1&{}0&{}0&{}1&{}0\\ 1&{}1&{}0&{}0&{}1&{}0\\ 0&{}0&{}1&{}1&{}0&{}1\\ 0&{}0&{}1&{}1&{}0&{}1\\ 1&{}1&{}0&{}0&{}1&{}0\\ 0&{}0&{}1&{}1&{}0&{}1\\ \end{array}\right) . \end{aligned}$$

Then, \((U, {\mathcal {R}})\) is a knowledge base where \({\mathcal {R}}=\{R_1,R_2,R_3,R_4\}\).
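As an illustration (a self-contained sketch, not part of the original example), the relations of Example 2.2 can be rebuilt from their matrices and their quotients computed; the resulting partitions agree with those listed later in Example 4.13.

```python
# Sketch: rebuild the relations of Example 2.2 from their 0/1 matrices.
U = ['x1', 'x2', 'x3', 'x4', 'x5', 'x6']

M1 = ["110010", "110010", "001101", "001101", "110010", "001101"]
M2 = ["100001", "011110", "011110", "011110", "011110", "100001"]
M3 = ["110011", "110011", "001100", "001100", "110011", "110011"]
M4 = M1  # R_4 has the same matrix as R_1 in this example

def rel(rows):
    """Relation as a set of pairs built from a list of 0/1 rows."""
    return {(U[i], U[j]) for i, row in enumerate(rows)
            for j, c in enumerate(row) if c == '1'}

def quotient(R):
    return {frozenset(y for (u, y) in R if u == x) for x in U}

R1, R2, R3, R4 = rel(M1), rel(M2), rel(M3), rel(M4)
print(sorted(sorted(b) for b in quotient(R1)))  # [['x1','x2','x5'], ['x3','x4','x6']]
print(sorted(sorted(b) for b in quotient(R2)))  # [['x1','x6'], ['x2','x3','x4','x5']]
```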

For each equivalence relation in a knowledge base, one can construct the lower and upper approximations of any subset of the universe, as in the following definition.

Definition 2.3

([8]) Let \((U,{\mathcal {R}})\) be a knowledge base. For any \(X\in 2^U\) and \(R\in {\mathcal {R}}\), define

$$\begin{aligned} \underline{R}(X)= & {} \{x\in U: [x]_R\subseteq X\},\\ \overline{R}(X)= & {} \{x\in U:[x]_R\cap X\ne \emptyset \}. \end{aligned}$$

Then, \(\underline{R}(X)\) and \(\overline{R}(X)\) are called the R-lower approximation and the R-upper approximation of X, respectively.

\(Bnd_R(X) =\overline{R}(X)-\underline{R}(X)\) is called the boundary region of X.

A set is rough if its boundary region is not empty; otherwise, it is crisp. Thus, X is rough if \(\underline{R}(X)\ne \overline{R}(X)\).
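A minimal sketch of Definition 2.3 (the helper names are ours, not from the paper): the R-lower and R-upper approximations and the boundary region of a subset X.

```python
# Sketch of Definition 2.3: approximations of X w.r.t. an equivalence relation R.
def eq_class(R, x):
    return {y for (u, y) in R if u == x}                 # [x]_R

def lower(U, R, X):
    return {x for x in U if eq_class(R, x) <= set(X)}    # R-lower approximation

def upper(U, R, X):
    return {x for x in U if eq_class(R, x) & set(X)}     # R-upper approximation

def boundary(U, R, X):
    return upper(U, R, X) - lower(U, R, X)               # Bnd_R(X); X is rough iff nonempty
```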

It is well known that the elements of a knowledge base are not all of the same importance; some are even redundant. So we often consider knowledge reductions in a knowledge base, that is, we delete unrelated or unimportant elements while keeping the ability of classification. This is the meaning of a knowledge reduction in a knowledge base.

Definition 2.4

([28]) Let \((U,{\mathcal {R}})\) be a knowledge base and \(\mathcal {P}\subseteq {\mathcal {R}}\).

(1) \(\mathcal {P}\) is called equivalent to \({\mathcal {R}}\) (or \(\mathcal {P}\) is called a coordinate subfamily of \({\mathcal {R}}\)), if \(ind(\mathcal {P})=ind({\mathcal {R}})\).

(2) \(R\in \mathcal {P}\) is called independent in \(\mathcal {P}\), if \(ind(\mathcal {P}-\{R\})\ne ind(\mathcal {P})\); \(\mathcal {P}\) is called an independent subfamily of \({\mathcal {R}}\), if every \(R\in \mathcal {P}\) is independent in \(\mathcal {P}\).

(3) \(\mathcal {P}\) is called a knowledge reduction in \({\mathcal {R}}\), if \(\mathcal {P}\) is both coordinate and independent.

Let \((U,{\mathcal {R}})\) be a knowledge base. We denote the set of all coordinate subfamilies (resp., all knowledge reductions) of \({\mathcal {R}}\) by \(co({\mathcal {R}})\) (resp., \(red({\mathcal {R}})\)).

Obviously,

$$\begin{aligned} \mathcal {P}\in red({\mathcal {R}})~~\Longleftrightarrow ~~\mathcal {P}\in co({\mathcal {R}})~ and ~\forall ~\mathcal {Q}\subset \mathcal {P}, \mathcal {Q}\not \in co({\mathcal {R}}). \end{aligned}$$
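Because the universe and the relation family are finite, coordinate subfamilies and knowledge reductions can be enumerated by brute force directly from Definition 2.4 and the equivalence displayed above. A sketch follows (our own function names; feasible only for small families); on the knowledge base of Example 2.2 it returns the two reductions \(\{R_1,R_2\}\) and \(\{R_2,R_4\}\) found later in Example 4.13.

```python
# Sketch: enumerate co(R) and red(R) by brute force (relations as sets of pairs).
from itertools import combinations

def ind(rels):
    rels = [set(r) for r in rels]
    out = rels[0]
    for r in rels[1:]:
        out &= r
    return frozenset(out)

def coordinate_subfamilies(family):
    """All nonempty P (given by relation names) with ind(P) = ind(R)."""
    target, names = ind(family.values()), list(family)
    return [frozenset(P) for k in range(1, len(names) + 1)
            for P in combinations(names, k)
            if ind([family[n] for n in P]) == target]

def reductions(family):
    """Coordinate subfamilies with no proper coordinate subfamily inside them."""
    co = coordinate_subfamilies(family)
    return [P for P in co if not any(Q < P for Q in co)]
```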

Definition 2.5

([28]) Let \((U,{\mathcal {R}})\) and \((U,\mathcal {P})\) be two knowledge bases.

(1) \((U,{\mathcal {R}})\) and \((U,\mathcal {P})\) are called equivalent, if  \(ind({\mathcal {R}})=ind(\mathcal {P})\). We write \((U,{\mathcal {R}}) \backsimeq (U,\mathcal {P})\).

(2) \((U,{\mathcal {R}})\) is called finer than \((U,\mathcal {P})\), or \((U,\mathcal {P})\) is said to depend on \((U,{\mathcal {R}})\), if  \(ind({\mathcal {R}})\subseteq ind(\mathcal {P})\). We write \((U,{\mathcal {R}}) \preccurlyeq (U,\mathcal {P})\).

Obviously,

$$\begin{aligned} (U,{\mathcal {R}}) \backsimeq (U,\mathcal {P})\ \Longleftrightarrow \ (U,{\mathcal {R}}) \preccurlyeq (U,\mathcal {P}), (U,\mathcal {P}) \preccurlyeq (U,{\mathcal {R}}). \end{aligned}$$

For \({\mathcal {R}}\subseteq {\mathcal {R}}^*(U)\) and \(x\in U\), we denote \(\underline{ind({\mathcal {R}})}\) (resp. \(\overline{ind({\mathcal {R}})}\), \(U/{ind({\mathcal {R}})}\), \([x]_{ind({\mathcal {R}})}\)) by \(\underline{{\mathcal {R}}}\) (resp. \(\overline{{\mathcal {R}}}\), \(U/{{\mathcal {R}}}\), \([x]_{{\mathcal {R}}}\)).

For \(\mathcal {P},\mathcal {Q}\subseteq {\mathcal {R}}^*(U)\), \(Pos_{ind(\mathcal {P})}ind(\mathcal {Q})\) is denoted by \(Pos_{\mathcal {P}}{\mathcal {Q}}\), where \(Pos_{\mathcal {P}}{\mathcal {Q}}=\bigcup \nolimits _{X\in U/\mathcal {Q}}\underline{\mathcal {P}}(X)\) is called the \(\mathcal {P}\)-positive region of \(\mathcal {Q}.\)

Theorem 2.6

([28]) Let \((U,\mathcal {P})\) and \((U,\mathcal {Q})\) be two knowledge bases. Then, the following are equivalent:

(1) \((U,\mathcal {P}) \preccurlyeq (U,\mathcal {Q})\);

(2) \(ind(\mathcal {P}\cup \mathcal {Q})=ind(\mathcal {Q})\);

(3) \(Pos_{\mathcal {P}}{\mathcal {Q}}=U\).
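Condition (3) of Theorem 2.6 is directly computable. The following sketch (assumed names, illustrative only) evaluates the positive region \(Pos_{\mathcal {P}}{\mathcal {Q}}\); by the theorem, the dependency \((U,\mathcal {P}) \preccurlyeq (U,\mathcal {Q})\) holds exactly when the result is the whole universe.

```python
# Sketch: P-positive region of Q, given ind(P) and ind(Q) as equivalence relations.
def eq_class(R, x):
    return {y for (u, y) in R if u == x}

def lower(U, R, X):
    return {x for x in U if eq_class(R, x) <= X}

def positive_region(U, indP, indQ):
    """Pos_P(Q): union of the ind(P)-lower approximations of the blocks of U/ind(Q)."""
    blocks = {frozenset(eq_class(indQ, x)) for x in U}
    return set().union(*(lower(U, indP, set(B)) for B in blocks))
```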

Definition 2.7

([25]) Let \(\mathcal {A}=\{X_1,X_2,\ldots ,X_k\}\) and \(\mathcal {B}=\{Y_1,Y_2,\ldots ,Y_l\}\) be two partitions on U.

(1)

$$\begin{aligned} H(\mathcal {A})=\sum \limits _{i=1}^{k}p(X_i)p(U-{X_i}) \end{aligned}$$

is called the information amount of \(\mathcal {A}\), where \(p(X_i)=\frac{\mid X_i\mid }{\mid {U}\mid }\) denotes the probability that an element of U belongs to \(X_i\).

(2)

$$\begin{aligned} H(\mathcal {B}/\mathcal {A})=\sum _{i=1}^{k}\sum \limits _{j=1}^{l}p(X_i\cap Y_j)p(X_i- Y_j) \end{aligned}$$

is called the condition information amount of \(\mathcal {B}\) with respect to \(\mathcal {A}\).
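The two quantities of Definition 2.7 are straightforward to compute; a sketch (names are ours) is given below. The conditional amount vanishing is exactly condition (5) of Theorem 2.8 that follows.

```python
# Sketch of Definition 2.7: information amount and conditional information amount.
def info_amount(U, A):
    """H(A) = sum_i p(X_i) * p(U - X_i) for a partition A of U."""
    n = len(U)
    return sum(len(X) / n * (n - len(X)) / n for X in A)

def cond_info_amount(U, B, A):
    """H(B/A) = sum_{i,j} p(X_i ∩ Y_j) * p(X_i - Y_j), with X_i in A and Y_j in B."""
    n = len(U)
    return sum(len(set(X) & set(Y)) / n * len(set(X) - set(Y)) / n
               for X in A for Y in B)
```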

Theorem 2.8

Let \((U,\mathcal {P})\) and \((U,\mathcal {Q})\) be two knowledge bases. Denote

$$\begin{aligned} U/\mathcal {Q}=\{D_1,D_2,\ldots ,D_r\}. \end{aligned}$$

Then, the following are equivalent:

(1) \((U,\mathcal {P}) \preccurlyeq (U,\mathcal {Q})\);

(2) \(\forall ~j\le r\), \(\underline{\mathcal {P}}(D_j)=D_j\);

(3) \(\bigcup \nolimits _{j=1}^r\underline{\mathcal {P}}(D_j)=U\);

(4) \(U/\mathcal {P}\) refines \(U/\mathcal {Q}\), i.e., \(\forall ~A\in U/\mathcal {P}\), \(\exists ~B\in U/\mathcal {Q}\), \(A\subseteq B\);

(5) \(H((U/{\mathcal {Q}})/(U/{\mathcal {P}}))=0\).

Proof

\((1)\Rightarrow (2)\). \(\forall ~x\in D_j\), we have \(D_j=[x]_{\mathcal {Q}}\). Since \((U,\mathcal {P}) \preccurlyeq (U,\mathcal {Q})\), \(ind(\mathcal {P})\subseteq ind(\mathcal {Q})\), we have \([x]_{\mathcal {P}}\subseteq [x]_{\mathcal {Q}}\). Then, \(\{x\}\subseteq [x]_{\mathcal {P}}\subseteq D_j\). It follows

$$\begin{aligned} D_j=\bigcup \limits _{x\in D_j}\{x\}\subseteq \bigcup \limits _{x\in D_j}[x]_{\mathcal {P}}\subseteq D_j. \end{aligned}$$

Then, \(D_j=\bigcup \nolimits _{x\in D_j}[x]_{\mathcal {P}}\).

Thus, \(\underline{\mathcal {P}}(D_j)=\underline{\mathcal {P}}(\bigcup \nolimits _{x\in D_j}[x]_{\mathcal {P}})=\bigcup \limits _{x\in D_j}[x]_{\mathcal {P}}=D_j\).

\((2)\Rightarrow (3)\). This is obvious.

\((3)\Rightarrow (1)\). \(\forall ~(x,y)\in ind(\mathcal {P})\), we have \(y\in [x]_{\mathcal {P}}\). Since \(\bigcup \nolimits _{j=1}^r\underline{\mathcal {P}}(D_j)=U\), \(x\in \underline{\mathcal {P}}(D_j)\) for some \(j\le r\). Then, \(x,y\in [x]_{\mathcal {P}}\subseteq D_j\). Denote \(D_j=[x^*]_{\mathcal {Q}}\). \(x\in D_j\) implies \([x]_{\mathcal {Q}}=[x^*]_{\mathcal {Q}}\). This implies \(y\in [x]_{\mathcal {Q}}\). Then, \((x,y)\in ind(\mathcal {Q})\). Thus, \(ind(\mathcal {P})\subseteq ind(\mathcal {Q})\). This shows \((U,\mathcal {P}) \preccurlyeq (U,\mathcal {Q})\).

\((1)\Leftrightarrow (4)\). This is obvious.

\((4)\Rightarrow (5)\). Put

$$\begin{aligned} U/\mathcal {P}=\{X_1,X_2,\ldots ,X_k\},~U/\mathcal {Q}=\{Y_1,Y_2,\ldots ,Y_l\}. \end{aligned}$$

\(\forall ~i\), since \(U/\mathcal {P}\) refines \(U/\mathcal {Q}\), we have \(X_i\subseteq Y_{j_i}\) for some \(j_i\le l\). Then, \(X_i- Y_{j_i}=\emptyset \).

\(\forall ~j\ne j_i\), \(Y_j\cap Y_{j_i}=\emptyset \). Then, \(X_i\cap Y_j=\emptyset \).

Thus, \(\forall ~i, j\), \(X_i- Y_j=\emptyset \) or \(X_i\cap Y_j=\emptyset \). This implies

$$\begin{aligned} p(X_i\cap Y_j)p(X_i- Y_j)=0. \end{aligned}$$

Hence, \(H((U/\mathcal {Q})/(U/\mathcal {P}))=0\).

\((5)\Rightarrow (4)\). Since \(H((U/\mathcal {Q})/(U/\mathcal {P}))=0\), we have \(\forall ~i, j\), \(X_i- Y_j=\emptyset \) or \(X_i\cap Y_j=\emptyset \). So \(\forall ~i, j\), \(X_i\subseteq Y_j\) or \(X_i\cap Y_j=\emptyset \).

\(\forall ~i\), \(X_i=\bigcup \nolimits _{j=1}^{l}(X_i\cap Y_j)\). Since \(X_i\ne \emptyset \), we have \(X_i\cap Y_{j_i}\ne \emptyset \) for some \(j_i\le l\). Then, \(X_i\subseteq Y_{j_i}\).

Thus, \(U/\mathcal {P}\) refines \(U/\mathcal {Q}\). \(\square \)

2.2 Relation information systems

Definition 2.9

([8]) An information system is a pair (U, A) of nonempty finite sets U and A, where U is a set of objects and A is a set of attributes; each attribute \(a\in A\) is a function \(a:U\rightarrow V_a\), where \(V_a\) is the set of values (called domain) of attribute a.

If (U, A) is an information system and \(B\subseteq A\), then an equivalence relation (or indiscernibility relation) \(R_B\) can be defined by

$$\begin{aligned} (x,y)\in R_B~\Longleftrightarrow ~ a(x)=a(y),~\forall ~a\in B. \end{aligned}$$

Definition 2.10

([20]) A pair \((U,{\mathcal {R}})\) is called a relation information system, if \({\mathcal {R}}\subseteq 2^{U\times U}\).

Obviously, a knowledge base is a relation information system and a relation information system is not an information system.

Definition 2.11

Let (U, A) be an information system. Put

$$\begin{aligned} {\mathcal {R}}=\{R_{\{a\}}:~a\in A\}. \end{aligned}$$

Then, the pair \((U,{\mathcal {R}})\) is called the relation information system induced by the information system (U, A).

In fact, \((U,{\mathcal {R}})\) in Definition 2.11 is a knowledge base.

2.3 Consistent mappings

Definition 2.12

([20, 21]) Let U and V be finite universes, \(f:U\rightarrow V\) a mapping and \(R\in 2^{U\times U}\). Let

$$\begin{aligned}{}[x]_{f}=\{u\in U:f(u)=f(x)\},\\ (x)_{R}=\{u\in U:R_{s}(u)=R_{s}(x)\}. \end{aligned}$$

Then, \(\{[x]_{f}:x\in U\}\) and \(\{(x)_{R}:x\in U\}\) are two partitions on U. If \([x]_{f}\subseteq R_{s}(u)\) or \([x]_{f}\cap R_{s}(u)= {\emptyset }\) for any \(x,u\in U\), then f is called a type-1 consistent mapping with respect to R on U. If \([x]_{f}\subseteq (x)_{R}\) for any \(x\in U\), then f is called a type-2 consistent mapping with respect to R on U.

Remark 2.13

(1) For each \(x\in U\), \([x]_{f}=f^{-1}(f(x)).\)

(2) If \(R\in {\mathcal {R}}^*(U)\), then for each \(x\in U\), \((x)_{R}=[x]_{R}\).

(3) f is type-1 \(\Longleftrightarrow \) If \([x]_{f}\cap R_{s}(u)\ne {\emptyset }\), then \([x]_{f}\subseteq R_{s}(u)\)

                        \(\Longleftrightarrow \)  If \([x]_{f}\nsubseteq R_{s}(u)\), then \([x]_{f}\cap R_{s}(u)={\emptyset }\).

(4) f is type-2 \(\Longleftrightarrow \)  If \(f(u)=f(x)\), then \(R_{s}(u)=R_{s}(x)\).
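Both consistency conditions of Definition 2.12 can be tested pointwise; the sketch below uses the equivalent formulation of Remark 2.13(4) for the type-2 case (the function names and the dictionary encoding of f are our own assumptions).

```python
# Sketch: type-1 and type-2 consistency of f (a dict x -> f(x)) with respect to R.
def successor(R, x):
    return {y for (u, y) in R if u == x}          # R_s(x)

def fibre(U, f, x):
    return {u for u in U if f[u] == f[x]}         # [x]_f = f^{-1}(f(x))

def is_type1(U, f, R):
    """[x]_f ⊆ R_s(u) or [x]_f ∩ R_s(u) = ∅ for all x, u in U."""
    return all(fibre(U, f, x) <= successor(R, u)
               or not (fibre(U, f, x) & successor(R, u))
               for x in U for u in U)

def is_type2(U, f, R):
    """f(u) = f(x) implies R_s(u) = R_s(x)  (Remark 2.13(4))."""
    return all(successor(R, u) == successor(R, x)
               for x in U for u in U if f[u] == f[x])
```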

3 Homomorphism between relation information systems

In this section, we recall R-mappings and homomorphisms between relation information systems, and give properties of R-mappings.

3.1 R-mappings and their properties

Definition 3.1

([20, 21]) Let \(f:U\rightarrow V\) be a mapping. Define

$$\begin{aligned} \hat{f}:2^{U\times U}\rightarrow 2^{V\times V}, ~R\rightarrow \hat{f}(R)=\bigcup _{x\in U}(\{f(x)\}\times f(R_{s}(x)));\\ \hat{f}^{-1}:2^{V\times V}\rightarrow 2^{U\times U},~ T\rightarrow \hat{f}^{-1}(T)=\bigcup _{y\in V}(\{f^{-1}(y)\}\times f^{-1}(T_{s}(y))). \end{aligned}$$

Then, \(\hat{f}\) and \(\hat{f}^{-1}\) are called the R-mapping and inverse R-mapping induced by f, respectively.

Obviously,

$$\begin{aligned} y_{1}\hat{f}(R)y_{2}\Longleftrightarrow \ \exists ~ x_1,x_2\in U,~ y_{1}=f(x_{1}),~y_{2}=f(x_{2}) ~and~ x_{1}Rx_{2},\\ x_{1}\hat{f}^{-1}(T)x_{2}\Longleftrightarrow \exists ~y_1,y_2\in V, ~ y_{1}=f(x_{1}),~y_{2}=f(x_{2}) ~and~ y_{1}Ty_{2}. \end{aligned}$$

We also denote \(\hat{f}(R)\) by \(\hat{R}\). For \({\mathcal {R}}\subseteq 2^{U\times U}\), denote

$$\begin{aligned} \hat{f}({\mathcal {R}})~or~\hat{{\mathcal {R}}}=\{\hat{R}\mid R\in {\mathcal {R}}\}. \end{aligned}$$
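Under the pair encoding used earlier, the R-mapping and inverse R-mapping of Definition 3.1 reduce to two short constructions; the sketch below (our own names) follows the equivalences displayed after the definition.

```python
# Sketch of Definition 3.1: R-mapping and inverse R-mapping induced by f (a dict x -> f(x)).
def r_map(f, R):
    """f̂(R): y1 f̂(R) y2 iff y1 = f(x1), y2 = f(x2) for some (x1, x2) in R."""
    return {(f[x], f[y]) for (x, y) in R}

def inv_r_map(U, f, T):
    """f̂^{-1}(T): x1 f̂^{-1}(T) x2 iff (f(x1), f(x2)) in T."""
    return {(x1, x2) for x1 in U for x2 in U if (f[x1], f[x2]) in T}
```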

Proposition 3.2

([21]) Let \(f: U\rightarrow V\) be a mapping. Then, for each \(T\in 2^{V\times V}\),

$$\begin{aligned} \hat{f}(\hat{f}^{-1}(T))=T. \end{aligned}$$

Proposition 3.3

([20]) If \(f: U \rightarrow V\) is both type-1 and type-2 consistent with respect to \(R\in 2^{U\times U}\), then

$$\begin{aligned} \hat{f}^{-1}(\hat{f}(R))=R. \end{aligned}$$

Proposition 3.4

If \(f: U \rightarrow V\) is type-2 consistent with respect to \(R\in {{\mathcal {R}}}^*(U)\), then

$$\begin{aligned} x_{1}Rx_{2}~\Longleftrightarrow ~f(x_{1})\hat{R}f(x_{2}). \end{aligned}$$

Proof

“\(\Longrightarrow \)”. This is obvious.

“\(\Longleftarrow \)”. Let \(f(x_{1})\hat{R}f(x_{2})\). Then, there exist \(u_1,u_2\in U\) such that

$$\begin{aligned} f(x_{1})=f(u_1), ~f(x_{2})=f(u_2) ~and ~u_1Ru_2. \end{aligned}$$

Since f is type-2, by Remark 2.13, we have

$$\begin{aligned} R_{s}(x_1)=R_{s}(u_1),~~R_{s}(x_2)=R_{s}(u_2). \end{aligned}$$

Then, \([x_1]_R=[u_1]_R\), \([x_2]_R=[u_2]_R\).

\(u_1Ru_2\) implies \([u_1]_R=[u_2]_R\). Then, \([x_1]_R=[x_2]_R\).

Thus, \(x_{1}Rx_{2}\). \(\square \)

Lemma 3.5

([21]) Let \(f: U\rightarrow V\) be a mapping and \(R\in 2^{U\times U}\). Then,

(1) If R is reflexive, then \(\hat{R}\) is reflexive.

(2) If R is symmetric, then \(\hat{R}\) is symmetric.

(3) If f is type-1 and R is transitive, then \(\hat{R}\) is transitive.

Lemma 3.6

Let \(f: U \rightarrow V\) be both type-1 and type-2 consistent with respect to \(R\in 2^{U\times U}\). Then,

(1) If \(\hat{R}\) is reflexive, then R is reflexive.

(2) If \(\hat{R}\) is symmetric, then R is symmetric.

(3) If \(\hat{R}\) is transitive, then R is transitive.

Proof

(1) Let \(\hat{R}\) be reflexive. Then, for each \(x\in U\), \(f(x)\hat{R}f(x)\). So there exist \(x_1,x_2\in U\) such that \(f(x)=f(x_1), f(x)=f(x_2)\) and \(x_1Rx_2.\)

Since f is type-2, we have \(R_s(x)=R_s(x_1)=R_s(x_2).\)

\(f(x_1)=f(x_2)\) implies \(x_2\in [x_1]_f\). Note that \(x_2\in R_s(x_1)\). Then, \(R_s(x_1)\cap [x_1]_f\ne {\emptyset }\). So

$$\begin{aligned} R_s(x)\cap [x_1]_f\ne {\emptyset }. \end{aligned}$$

Since f is type-1, we have \([x_1]_f\subseteq R_s(x)\).

\(f(x)=f(x_1)\) implies \(x\in [x_1]_f\). Then, \(x\in R_s(x)\).

Thus, R is reflexive.

(2) Let \(\hat{R}\) be symmetric. Suppose \(x_1Rx_2\). Put

$$\begin{aligned} y_1=f(x_1), ~y_2=f(x_2). \end{aligned}$$

Then, \(y_1\hat{R}y_2.\) Since \(\hat{R}\) is symmetric, we have \(y_2\hat{R}y_1\).

Then, there exist \(u_1,u_2\in U\) such that

$$\begin{aligned} y_2=f(u_1), ~y_1=f(u_2) ~and ~u_1Ru_2. \end{aligned}$$

Since \(f(x_1)=f(u_2),~f(x_2)=f(u_1)\) and f is type-2, by Remark 2.13, we have

$$\begin{aligned} R_s(x_1)=R_s(u_2) ~and ~R_s(x_2)=R_s(u_1). \end{aligned}$$

\(f(x_1)=f(u_2)\) implies \(u_2\in [x_1]_f\). Note that \(u_2\in R_s(u_1)\). Then, \([x_1]_f\cap R_s(u_1)\ne {\emptyset }\). Since f is type-1, \([x_1]_f\subseteq R_s(u_1)\).

Thus, \(x_1\in [x_1]_f\subseteq R_s(u_1)=R_s(x_2)\). This implies \(x_2Rx_1\).

Hence, R is symmetric.

(3) Let \(\hat{R}\) be transitive. Suppose \(x_1Rx_2, x_2Rx_3.\) Put

$$\begin{aligned} y_1=f(x_1), ~y_2=f(x_2), ~y_3=f(x_3). \end{aligned}$$

Then,

$$\begin{aligned} y_1\hat{R}y_2, ~y_2\hat{R}y_3. \end{aligned}$$

Since \(\hat{R}\) is transitive, we have \(y_1\hat{R}y_3\). Then, there exist \(u_1,u_2\in U\) such that

$$\begin{aligned} y_1=f(u_1), ~y_3=f(u_2) ~and ~u_1Ru_2. \end{aligned}$$

Note that \(u_2\in R_s(u_1)\). Then, \([u_2]_f\cap R_s(u_1)\ne {\emptyset }\). Since f is type-1,

$$\begin{aligned}{}[u_2]_f\subseteq R_s(u_1). \end{aligned}$$

Since \(f(x_1)=f(u_1)\) and f is type-2, by Remark 2.13, we have \(R_s(x_1)=R_s(u_1)\).

\(f(x_3)=f(u_2)\) implies \(x_3\in [u_2]_f\).

Then, \(x_3\in [u_2]_f\subseteq R_s(u_1)=R_s(x_1).\) This implies \(x_1Rx_3\).

Hence, R is transitive. \(\square \)

Theorem 3.7

Let \(f:U\rightarrow V\) be both type-1 and type-2 consistent with respect to R on U. Then,

(1) \(R\in {\mathcal {R}}^*(U)\) \(\Longleftrightarrow \) \(\hat{R}\in {\mathcal {R}}^*(V)\);

(2) If \(R\in {\mathcal {R}}^*(U)\), then for each \(x\in U\), \(f([x]_R)= [f(x)]_{\hat{R}}\).

Proof

(1) This holds by Lemmas 3.5 and 3.6.

(2) This holds by (1) and Proposition 3.4. \(\square \)

Corollary 3.8

Let \(f:U\rightarrow V\) be both type-1 and type-2 consistent with respect to R on U. If f is injective and \(R\in {\mathcal {R}}^*(U)\), then

$$\begin{aligned} V/\hat{R}=f(U/R). \end{aligned}$$

Proof

This holds by Theorem 3.7. \(\square \)

Lemma 3.9

Let \(f: U\rightarrow V\) be a mapping and \(T\in 2^{V\times V}\). Then,

(1) T is reflexive  \(\Longleftrightarrow \)  \(\hat{f}^{-1}(T)\) is reflexive.

(2) T is symmetric  \(\Longleftrightarrow \)  \(\hat{f}^{-1}(T)\) is symmetric.

(3) T is transitive  \(\Longleftrightarrow \)  \(\hat{f}^{-1}(T)\) is transitive.

Proof

(1) “\(\Longrightarrow \)”. \(\forall ~x\in U\), put \(y=f(x)\).

Since T is reflexive, we have yTy. So \(x\hat{f}^{-1}(T)x\).

Thus, \(\hat{f}^{-1}(T)\) is reflexive.

“\(\Longleftarrow \)”. \(\forall ~y\in V\), \(\exists ~x\in U\), \(y=f(x)\).

Since \(\hat{f}^{-1}(T)\) is reflexive, we have \(x\hat{f}^{-1}(T)x\). Then, there exist \(y_1,y_2\in V\) such that

$$\begin{aligned} y_1=f(x), ~y_2=f(x) ~and ~y_1Ty_2. \end{aligned}$$

Then, yTy.

Thus, T is reflexive.

(2) “\(\Longrightarrow \)”. Suppose \(x_1\hat{f}^{-1}(T)x_2\). Then, there exist \(y_1,y_2\in V\) such that

$$\begin{aligned} y_1=f(x_1), ~y_2=f(x_2) ~and ~y_1Ty_2. \end{aligned}$$

Since T is symmetric, we have \(y_2Ty_1\). Then, \(x_2\hat{f}^{-1}(T)x_1\).

Thus, \(\hat{f}^{-1}(T)\) is symmetric.

“\(\Longleftarrow \)”. Suppose \(y_1Ty_2\). Then, there exist \(x_1,x_2\in U\) such that

$$\begin{aligned} y_1=f(x_1), ~y_2=f(x_2). \end{aligned}$$

Then, \(x_1\hat{f}^{-1}(T)x_2\).

Since \(\hat{f}^{-1}(T)\) is symmetric, we have \(x_2\hat{f}^{-1}(T)x_1\). Then, there exist \(v_1,v_2\in V\) such that

$$\begin{aligned} v_1=f(x_2), ~v_2=f(x_1) ~and ~v_1Tv_2. \end{aligned}$$

Then, \(y_1=v_2,y_2=v_1\). So \(y_2Ty_1\).

Thus, T is symmetric.

(3) “\(\Longrightarrow \)”. Suppose \(x_1\hat{f}^{-1}(T)x_2\), \(x_2\hat{f}^{-1}(T)x_3\). Then, there exist \(y_1,y_2,v_1,v_2\in V\) such that

$$\begin{aligned}&y_1=f(x_1), ~y_2=f(x_2), ~y_1Ty_2,\\&v_1=f(x_2), ~v_2=f(x_3), ~v_1Tv_2. \end{aligned}$$

We have \(y_2=f(x_2)=v_1\). Then, \(y_1Tv_1\).

Since T is transitive, we have

$$\begin{aligned} y_1Tv_2, ~y_1=f(x_1), ~v_2=f(x_3). \end{aligned}$$

Then, \(x_1\hat{f}^{-1}(T)x_3\).

Thus, \(\hat{f}^{-1}(T)\) is transitive.

“\(\Longleftarrow \)”. Suppose \(y_1Ty_2,y_2Ty_3\). Then, there exist \(x_1,x_2,x_3\in U\) such that

$$\begin{aligned} y_1=f(x_1),~y_2=f(x_2),~y_3 =f(x_3). \end{aligned}$$

Then,

$$\begin{aligned} x_1\hat{f}^{-1}(T)x_2, ~ x_2\hat{f}^{-1}(T)x_3. \end{aligned}$$

Since \(\hat{f}^{-1}(T)\) is transitive, \(x_1\hat{f}^{-1}(T)x_3\). Then, there exist \(v_1,v_2\in V\) such that

$$\begin{aligned} v_1=f(x_1),~v_2=f(x_3) ~and ~v_1Tv_2. \end{aligned}$$

We have \(y_1=v_1, y_3=v_2\). Then, \(y_1Ty_3\).

Thus, T is transitive. \(\square \)

Theorem 3.10

Let \(f:U\rightarrow V\) be a mapping. Then,

$$\begin{aligned} T\in {\mathcal {R}}^*(V)~\Leftrightarrow ~ \hat{f}^{-1}(T)\in {\mathcal {R}}^*(U). \end{aligned}$$

Proof

This holds by Lemma 3.9. \(\square \)

3.2 Homomorphisms between relation information systems

In this subsection, we give the definition of a homomorphism between relation information systems.

Definition 3.11

([20]) Let \(f:U\rightarrow V\) be a mapping and \({\mathcal {R}}\subseteq 2^{U\times U}\). If f is type-1 (resp. type-2) consistent with respect to R on U for every \(R\in {\mathcal {R}}\), then f is called type-1 (resp. type-2) consistent with respect to \({\mathcal {R}}\) on U.

Proposition 3.12

([20]) Let \(f: U \rightarrow V\) be a mapping and \({\mathcal {R}}\subseteq 2^{U\times U}\). If f is both type-1 and type-2 consistent with respect to \({\mathcal {R}}\), then \(\hat{f}(ind({\mathcal {R}}))=ind(\hat{f}({\mathcal {R}}))\).

Proposition 3.13

([20]) Let \(f: U \rightarrow V\) be a mapping and \({\mathcal {R}}\subseteq 2^{U\times U}\). If f is both type-1 and type-2 consistent with respect to \({\mathcal {R}}\), then \(\hat{f}^{-1}(\hat{f}(ind({\mathcal {R}})))=ind({\mathcal {R}})\).

Definition 3.14

([20]) Let \(f : U\rightarrow V\) be a mapping and \({\mathcal {R}}\subseteq 2^{U\times U}\). Then, the pair \((V, \hat{{\mathcal {R}}})\) is called an f-induced relation information system of the relation information system \((U,{\mathcal {R}})\).

Definition 3.15

([20]) Let \((U,{\mathcal {R}})\) be a relation information system and \((V,\hat{{\mathcal {R}}})\) an f-induced relation information system of \((U,{\mathcal {R}})\). If f is both type-1 and type-2 consistent with respect to \({\mathcal {R}}\) on U, then f is called a homomorphism from \((U,{\mathcal {R}})\) to \((V,\hat{{\mathcal {R}}})\). We write

$$\begin{aligned} (U,{\mathcal {R}})\sim _f(V,\hat{{\mathcal {R}}}). \end{aligned}$$
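A homomorphism in the sense of Definition 3.15 is simply a mapping that passes both consistency tests for every relation of the family; the self-contained sketch below (our own function names, an illustration only) makes this check explicit.

```python
# Sketch: f is a homomorphism from (U, R) to (V, f̂(R)) iff it is type-1 and
# type-2 consistent with respect to every relation R in the family.
def succ(R, x):
    return {y for (u, y) in R if u == x}

def is_type1(U, f, R):
    fib = lambda x: {u for u in U if f[u] == f[x]}
    return all(fib(x) <= succ(R, u) or not (fib(x) & succ(R, u))
               for x in U for u in U)

def is_type2(U, f, R):
    return all(succ(R, u) == succ(R, x)
               for x in U for u in U if f[u] == f[x])

def is_homomorphism(U, f, family):
    return all(is_type1(U, f, R) and is_type2(U, f, R) for R in family)
```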

4 Invariant characteristics of knowledge bases under homomorphisms

In this section, we investigate relationships between knowledge bases and give invariant characteristics of knowledge bases under homomorphisms.

4.1 Invariant characteristics of knowledge bases and their dependency

Theorem 4.1

If \((U,{\mathcal {R}})\sim _f(V,\hat{{\mathcal {R}}})\), then

$$\begin{aligned} (U,{\mathcal {R}})\text { is a knowledge base} ~\Leftrightarrow ~(V, \hat{{\mathcal {R}}})\text { is a knowledge base}. \end{aligned}$$

Proof

This holds by Theorem 3.7. \(\square \)

Theorem 4.2

Let \((U,{\mathcal {R}})\) be a relation information system and \((V, \hat{{\mathcal {R}}})\) an f-induced relation information system of \((U,{\mathcal {R}})\). Then,

(1) \((V,\hat{{\mathcal {R}}})\) is a knowledge base  \(\Leftrightarrow \)  \((U,\hat{f}^{-1}(\hat{{\mathcal {R}}}))\) is a knowledge base;

(2) If \((U,{\mathcal {R}})\sim _f(V,\hat{{\mathcal {R}}})\), then \(\hat{f}^{-1}(\hat{{\mathcal {R}}})={\mathcal {R}}\).

Proof

(1) This holds by Theorem 3.10.

(2) This holds by Proposition 3.3. \(\square \)

By Theorems 4.1 and 4.2, knowledge bases themselves are invariant and inverse invariant under homomorphisms. Based on this fact, below we consider the question: are other characteristics, such as the dependency of knowledge bases, knowledge reductions and coordinate families, also invariant and inverse invariant under homomorphisms?

Theorem 4.3

Let \((U,{{\mathcal {R}}}^*)\sim _f(V,\hat{f}({{\mathcal {R}}}^*))\). Then, \(\forall ~\mathcal {P},\mathcal {Q}\subseteq {{\mathcal {R}}}^*\),

$$\begin{aligned} (U,\mathcal {P}) \preccurlyeq (U,\mathcal {Q})~\Longleftrightarrow ~ (V,\hat{\mathcal {P}}) \preccurlyeq (V,\hat{\mathcal {Q}}). \end{aligned}$$

Proof

This holds by Proposition 3.12. \(\square \)

Theorem 4.4

Let \((U,{{\mathcal {R}}}^*)\sim _f(V,\hat{f}({{\mathcal {R}}}^*))\). Then, \(\forall ~{\mathcal {S}},\mathcal {T}\subseteq \hat{f}({{\mathcal {R}}}^*)\),

$$\begin{aligned} (V,{\mathcal {S}}) \preccurlyeq (V,\mathcal {T})~\Longleftrightarrow ~ (U,\hat{f}^{-1}({\mathcal {S}})) \preccurlyeq (U,\hat{f}^{-1}(\mathcal {T})). \end{aligned}$$

Proof

This holds by Propositions 3.3 and 3.13. \(\square \)

Theorem 4.5

Let \((U,\mathcal {P})\) and \((U,\mathcal {Q})\) be two knowledge bases. Denote

$$\begin{aligned} U/\mathcal {Q}=\{D_1,D_2,\ldots ,D_r\}. \end{aligned}$$

If f is injective, then the following are equivalent:

(1) \((V,\hat{\mathcal {P}}) \preccurlyeq (V,\hat{\mathcal {Q}})\);

(2) \(\forall ~j\le r\), \(\underline{\hat{\mathcal {P}}}(f(D_j))=f(D_j)\);

(3) \(\bigcup \nolimits _{j=1}^r\underline{\hat{\mathcal {P}}}(f(D_j))=V\).

Proof

This holds by Theorem 2.8 and Corollary 3.8. \(\square \)

4.2 Invariant characteristics of knowledge reductions in knowledge bases

Proposition 4.6

Let \((U,{\mathcal {R}})\sim _f(V,\hat{{\mathcal {R}}})\). If \((U,{\mathcal {R}})\) is a knowledge base, then

(1) \(\mathcal {P}\in co({\mathcal {R}})~\Longleftrightarrow ~ \hat{\mathcal {P}}\in co(\hat{{\mathcal {R}}})\);

(2) \( co(\hat{{\mathcal {R}}})=\hat{f}(co({\mathcal {R}}))\).

Proof

(1) “\(\Rightarrow \)”. Since \(\mathcal {P}\in co({\mathcal {R}})\), we have \(ind(\mathcal {P})=ind({\mathcal {R}}).\) Then,

$$\begin{aligned} \hat{f}(ind(\mathcal {P}))=\hat{f}(ind({\mathcal {R}})). \end{aligned}$$

By Proposition 3.12,

$$\begin{aligned} ind(\hat{\mathcal {P}})=ind(\hat{{\mathcal {R}}}). \end{aligned}$$

Thus, \(\hat{\mathcal {P}}\in co(\hat{{\mathcal {R}}})\).

“\(\Leftarrow \)”. Since \(\hat{\mathcal {P}}\in co(\hat{{\mathcal {R}}})\), we have \(ind(\hat{\mathcal {P}})=ind(\hat{{\mathcal {R}}}).\) By Proposition 3.12,

$$\begin{aligned} \hat{f}(ind(\mathcal {P}))=\hat{f}(ind({\mathcal {R}})). \end{aligned}$$

Then,

$$\begin{aligned} \hat{f}^{-1}(\hat{f}(ind(\mathcal {P})))=\hat{f}^{-1}(\hat{f}(ind({\mathcal {R}}))). \end{aligned}$$

By Proposition 3.13, \(ind(\mathcal {P})=ind({\mathcal {R}}).\)

Thus, \(\mathcal {P}\in co({\mathcal {R}})\).

(2) By (1),

$$\begin{aligned} \hat{f}(co({\mathcal {R}}))= & {} \{\hat{\mathcal {P}}|\mathcal {P}\in co({\mathcal {R}})\}\\= & {} \{\hat{\mathcal {P}}|\hat{\mathcal {P}}\in co(\hat{{\mathcal {R}}})\}\\= & {} co(\hat{{\mathcal {R}}}). \end{aligned}$$

\(\square \)

Theorem 4.7

Let \((U,{\mathcal {R}})\sim _f(V,\hat{{\mathcal {R}}})\). If \((U,{\mathcal {R}})\) is a knowledge base, then

(1) \(\mathcal {P}\in red({\mathcal {R}})~~\Longleftrightarrow ~~\hat{\mathcal {P}}\in red(\hat{{\mathcal {R}}})\);

(2) \( red(\hat{{\mathcal {R}}})=\hat{f}(red({\mathcal {R}}))\).

Proof

(1) “\(\Rightarrow \)”. Since \(\mathcal {P}\in red({\mathcal {R}})\), we have \(\mathcal {P}\in co({\mathcal {R}})\). By Proposition 4.6, \(\hat{\mathcal {P}}\in co(\hat{{\mathcal {R}}})\).

\(\forall ~\emptyset \ne \mathcal {T}\subset \hat{\mathcal {P}}\), pick \(\mathcal {Q}\in 2^{ {\mathcal {R}}}-\{\emptyset \}\) with \(\mathcal {T}=\hat{\mathcal {Q}}\). Then, \(\hat{\mathcal {Q}}\subset \hat{\mathcal {P}}\). By Proposition 3.3,

$$\begin{aligned} \mathcal {Q}=\hat{f}^{-1}(\hat{\mathcal {Q}})\subseteq \hat{f}^{-1}(\hat{\mathcal {P}})=\mathcal {P}. \end{aligned}$$

Suppose \(\mathcal {Q}=\mathcal {P}\). Then, \(\mathcal {T}=\hat{\mathcal {Q}}=\hat{\mathcal {P}}.\) This is a contradiction.

Thus, \(\mathcal {Q}\subset \mathcal {P}.\)

Since \(\mathcal {P}\in red({\mathcal {R}})\), we have \(\mathcal {Q}\not \in co({\mathcal {R}})\). By Proposition 4.6, \(\mathcal {T}=\hat{\mathcal {Q}}\not \in co(\hat{{\mathcal {R}}})\).

Hence, \(\hat{\mathcal {P}}\in red(\hat{{\mathcal {R}}})\).

“\(\Leftarrow \)”. Since \(\hat{\mathcal {P}}\in red(\hat{{\mathcal {R}}})\), we have \(\hat{\mathcal {P}}\in co(\hat{{\mathcal {R}}})\). By Proposition 4.6, \(\mathcal {P}\in co({\mathcal {R}})\).

\(\forall ~\emptyset \ne \mathcal {Q}\subset \mathcal {P}\), \(\hat{\mathcal {Q}}\subseteq \hat{\mathcal {P}}\). Suppose \(\hat{\mathcal {Q}}= \hat{\mathcal {P}}\). By Proposition 3.3,

$$\begin{aligned} \mathcal {Q}=\hat{f}^{-1}(\hat{\mathcal {Q}})= \hat{f}^{-1}(\hat{\mathcal {P}})=\mathcal {P}. \end{aligned}$$

This is a contradiction. Thus, \(\hat{\mathcal {Q}}\subset \hat{\mathcal {P}}.\)

Since \(\hat{\mathcal {P}}\in red(\hat{{\mathcal {R}}})\), we have \(\hat{\mathcal {Q}}\not \in co(\hat{{\mathcal {R}}})\). By Proposition 4.6, \(\mathcal {Q}\not \in co({\mathcal {R}})\).

Hence, \(\mathcal {P}\in red({\mathcal {R}})\).

(2) By (1),

$$\begin{aligned} \hat{f}(red({\mathcal {R}}))= & {} \{\hat{\mathcal {P}}|\mathcal {P}\in red({\mathcal {R}})\}\\= & {} \{\hat{\mathcal {P}}|\hat{\mathcal {P}}\in red(\hat{{\mathcal {R}}})\}\\= & {} red(\hat{{\mathcal {R}}}). \end{aligned}$$

\(\square \)

Note that Theorem 4.7(1) is not a corollary of Theorem 4.4 in [20]. Hence, we have the following remark.

Remark 4.8

The proofs of Theorem 4.4 in [20] and Theorem 4.7(1) are independent of each other.

Proposition 4.9

Let \((U,{\mathcal {R}})\sim _f(V,\hat{{\mathcal {R}}})\). If \((U,{\mathcal {R}})\) is a knowledge base, then

(1) \(\mathcal {T}\in co(\hat{{\mathcal {R}}}) ~\Longleftrightarrow ~ \hat{f}^{-1}(\mathcal {T}) \in co({\mathcal {R}})\);

(2) \( \hat{f}^{-1}(co(\hat{{\mathcal {R}}}))=co({\mathcal {R}})\).

Proof

(1) “\(\Rightarrow \)”. Since \(\mathcal {T}\in co(\hat{{\mathcal {R}}})\), we have \(ind(\mathcal {T})=ind(\hat{{\mathcal {R}}}).\) Then,

$$\begin{aligned} \hat{f}^{-1}(\bigcap \limits _{R\in \mathcal {T}}R)=\hat{f}^{-1}(\bigcap \limits _{R\in \hat{{\mathcal {R}}}}R). \end{aligned}$$

So

$$\begin{aligned} \bigcap \limits _{R\in \mathcal {T}}\hat{f}^{-1}(R)=\bigcap \limits _{R\in \hat{{\mathcal {R}}}}\hat{f}^{-1}(R). \end{aligned}$$

Thus,

$$\begin{aligned} ind(\hat{f}^{-1}(\mathcal {T}))=ind(\hat{f}^{-1}(\hat{{\mathcal {R}}})). \end{aligned}$$

By Proposition 3.13, \(ind(\hat{f}^{-1}(\mathcal {T}))=ind({\mathcal {R}}).\)

Therefore, \(\hat{f}^{-1}(\mathcal {T}) \in co({\mathcal {R}})\).

“\(\Leftarrow \)”. Since \(\hat{f}^{-1}(\mathcal {T}) \in co({\mathcal {R}})\), we have \(ind(\hat{f}^{-1}(\mathcal {T}))=ind({\mathcal {R}}).\)

By Proposition 3.13, \(ind(\hat{f}^{-1}(\mathcal {T}))=ind(\hat{f}^{-1}(\hat{{\mathcal {R}}})).\) Then,

$$\begin{aligned} \bigcap \limits _{R\in \mathcal {T}}\hat{f}^{-1}(R)=\bigcap \limits _{R\in {\mathcal {R}}}\hat{f}^{-1}(\hat{R}). \end{aligned}$$

So

$$\begin{aligned} \hat{f}^{-1}(\bigcap \limits _{R\in \mathcal {T}}R)=\hat{f}^{-1}(\bigcap \limits _{R\in {\mathcal {R}}}\hat{R})=\hat{f}^{-1}(\bigcap \limits _{R\in \hat{{\mathcal {R}}}}R). \end{aligned}$$

Thus,

$$\begin{aligned} \hat{f}(\hat{f}^{-1}(ind(\mathcal {T})))=\hat{f}(\hat{f}^{-1}(ind(\hat{{\mathcal {R}}}))). \end{aligned}$$

This implies \(ind(\mathcal {T})=ind(\hat{{\mathcal {R}}}).\)

Hence, \(\mathcal {T}\in co(\hat{{\mathcal {R}}})\).

(2) By (1),

$$\begin{aligned} \hat{f}^{-1}(co(\hat{{\mathcal {R}}}))= & {} \{\hat{f}^{-1}(\mathcal {T})|\mathcal {T}\in co(\hat{{\mathcal {R}}})\}\\= & {} \{\hat{f}^{-1}(\mathcal {T})| \hat{f}^{-1}(\mathcal {T}) \in co({\mathcal {R}})\}\\= & {} co({\mathcal {R}}). \end{aligned}$$

\(\square \)

Theorem 4.10

Let \((U,{\mathcal {R}})\sim _f(V,\hat{{\mathcal {R}}})\). If \((U,{\mathcal {R}})\) is a knowledge base, then

(1) \(\mathcal {T}\in red(\hat{{\mathcal {R}}})~~\Longleftrightarrow ~~\hat{f}^{-1}(\mathcal {T})\in red({\mathcal {R}})\);

(2) \(\hat{f}^{-1}(red(\hat{{\mathcal {R}}}))=red({\mathcal {R}})\).

Proof

The proof is similar to that of Theorem 4.7. \(\square \)

Note that Theorem 4.10(1) is not a corollary of Theorem 5.10 in [21]. Hence, we have the following remark.

Remark 4.11

The proofs of Theorem 5.10 in [21] and Theorem 4.10(1) are independent of each other.

Definition 4.12

Let \((U,{\mathcal {R}})\) be a knowledge base. Put

$$\begin{aligned} \mathcal {D}(x,y)=\{R\in {\mathcal {R}}|(x,y)\not \in R\}~(x,y\in U). \end{aligned}$$

(1) \(\mathcal {D}(x,y)\) is called the discernibility subfamily of \({\mathcal {R}}\) on x and y.

(2) \({\mathfrak {D}}({\mathcal {R}})=(d_{ij})_{n\times n}\) is called the discernibility matrix of \({\mathcal {R}}\), where \(U=\{x_1,x_2,\ldots ,x_n\}\) and \(d_{ij}=\mathcal {D}(x_i,x_j)\) (\(1\le i,j\le n\)).
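The discernibility subfamilies of Definition 4.12 are cheap to tabulate; a small sketch follows (our own names). Proposition 4.14 below then characterizes \(co({\mathcal {R}})\) as the subfamilies meeting every nonempty entry, and Proposition 4.17 reads the necessary relations off the singleton entries.

```python
# Sketch of Definition 4.12: discernibility subfamilies D(x, y) of a relation family.
def discernibility(U, family):
    """Map each pair (x, y) to the set of names of relations R with (x, y) not in R."""
    return {(x, y): frozenset(n for n, R in family.items() if (x, y) not in R)
            for x in U for y in U}
```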

Example 4.13

We consider the knowledge base \((U, {\mathcal {R}})\) of Example 2.2. Then,

$$\begin{aligned} U/R_1= & {} \{\{x_1,x_2,x_5\},\{x_3,x_4,x_6\}\},\\ U/R_2= & {} \{\{x_1,x_6\},\{x_2,x_3,x_4,x_5\}\},\\ U/R_3= & {} \{\{x_1,x_2,x_5,x_6\},\{x_3,x_4\}\},\\ U/R_4= & {} \{\{x_1,x_2,x_5\},\{x_3,x_4,x_6\}\}. \end{aligned}$$

We can obtain the discernibility matrix \({\mathfrak {D}}({\mathcal {R}})\) as follows:

$$\begin{aligned} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} \emptyset &{}\{R_2\}&{}{\mathcal {R}}&{}{\mathcal {R}}&{}\{R_2\}&{}\{R_1,R_4\}\\ \{R_2\}&{}\emptyset &{}\{R_1,R_3,R_4\}&{}\{R_1,R_3,R_4\}&{}\emptyset &{}\{R_1,R_2,R_4\}\\ {\mathcal {R}}&{}\{R_1,R_3,R_4\}&{}\emptyset &{}\emptyset &{}\{R_1,R_3,R_4\}&{}\{R_2,R_3\}\\ {\mathcal {R}}&{}\{R_1,R_3,R_4\}&{}\emptyset &{}\emptyset &{}\{R_1,R_3,R_4\}&{}\{R_2,R_3\}\\ \{R_2\}&{}\emptyset &{}\{R_1,R_3,R_4\}&{}\{R_1,R_3,R_4\}&{}\emptyset &{}\{R_1,R_2,R_4\}\\ \{R_1,R_4\}&{}\{R_1,R_2,R_4\}&{}\{R_2,R_3\}&{}\{R_2,R_3\}&{}\{R_1,R_2,R_4\}&{}\emptyset \end{array}\right) \end{aligned}$$

Thus,

\(red({\mathcal {R}})=\{\{R_1,R_2\}, \{R_2,R_4\}\}, ~~core({\mathcal {R}})=\{R_2\}\).

Proposition 4.14

Let \((U,{\mathcal {R}})\) be a knowledge base. Then,

$$\begin{aligned} \mathcal {P}\in co({\mathcal {R}})~\Longleftrightarrow ~\ If \ (x,y)\not \in ind({\mathcal {R}}),\ then\ \mathcal {P}\cap \mathcal {D}(x,y)\ne \emptyset . \end{aligned}$$

Proof

“\(\Longrightarrow \)”. Suppose \((x,y)\not \in ind({\mathcal {R}})\). Since \(\mathcal {P}\in co({\mathcal {R}})\), we have \(ind(\mathcal {P})= ind({\mathcal {R}}).\) Then, \((x,y)\not \in ind(\mathcal {P}).\) So \((x,y)\not \in P\) for some \(P\in \mathcal {P}\).

\((x,y)\not \in P\) implies \(P\in \mathcal {D}(x,y)\). Then, \(P\in \mathcal {P}\cap \mathcal {D}(x,y)\).

Thus, \(\mathcal {P}\cap \mathcal {D}(x,y)\ne \emptyset .\)

“\(\Longleftarrow \)”. Suppose \(\mathcal {P}\not \in co({\mathcal {R}})\). Then, \(ind(\mathcal {P})\ne ind({\mathcal {R}}).\) This implies \(ind(\mathcal {P})-ind({\mathcal {R}})\ne \emptyset .\) Pick

$$\begin{aligned} (x,y)\in ind(\mathcal {P})-ind({\mathcal {R}}). \end{aligned}$$

Since \((x,y)\not \in ind({\mathcal {R}})\), we have \(\mathcal {P}\cap \mathcal {D}(x,y)\ne \emptyset .\)

Note that \((x,y)\in ind(\mathcal {P})\). Then, for each \(P\in \mathcal {P}\), \((x,y)\in P\). So \(P\not \in \mathcal {D}(x,y).\) Thus, \(\mathcal {P}\cap \mathcal {D}(x,y)=\emptyset .\) This is a contradiction.

Thus, \(\mathcal {P}\in co({\mathcal {R}})\). \(\square \)

Knowledge reductions can easily be determined from the discernibility subfamilies.

Theorem 4.15

Let \((U,{\mathcal {R}})\) be a knowledge base. Then,

\(\mathcal {P}\in red({\mathcal {R}})\) \(\Longleftrightarrow \) the following conditions are satisfied:

(i) If \((x,y)\not \in ind({\mathcal {R}})\), then \(\mathcal {P}\cap \mathcal {D}(x,y)\ne \emptyset ;\)

(ii) \(\forall ~R\in \mathcal {P}\), \(\exists ~(x_R,y_R)\not \in ind({\mathcal {R}})\), \((\mathcal {P}-\{R\})\cap \mathcal {D}(x_R,y_R)= \emptyset \).

Proof

This holds by Proposition 4.14. \(\square \)

Definition 4.16

Let \((U,{\mathcal {R}})\) be a knowledge base. Put

$$\begin{aligned} core({\mathcal {R}})=\bigcap \limits _{\mathcal {P}\in red({\mathcal {R}})}\mathcal {P}. \end{aligned}$$

Then, \(core({\mathcal {R}})\) is called the core of \({\mathcal {R}}\). If \(R\in core({\mathcal {R}})\), then R is called a necessary relation.

The core can also easily be determined from the discernibility subfamilies.

Proposition 4.17

Let \((U,{\mathcal {R}})\) be a knowledge base. Then, the following are equivalent:

(1) R is a necessary relation;

(2) R is independent in \({\mathcal {R}}\);

(3) \(\exists ~x,y\in U\), \(\mathcal {D}(x,y)=\{R\}\).

Proof

\((1) \Longrightarrow (2)\). Suppose that R is not independent in \({\mathcal {R}}\). Then,

$$\begin{aligned} ind({\mathcal {R}}-\{R\})= ind({\mathcal {R}}). \end{aligned}$$

This implies \({\mathcal {R}}-\{R\}\in co({\mathcal {R}})\). Then, there exists \(\mathcal {P}\subseteq {\mathcal {R}}-\{R\}\) with \(\mathcal {P}\in red({\mathcal {R}})\).

\(\mathcal {P}\subseteq {\mathcal {R}}-\{R\}\) implies \(R\not \in \mathcal {P}\). Then, R is not a necessary relation. This is a contradiction.

\((2) \Longrightarrow (1)\). Suppose that R is not a necessary relation. Then, there exist \(\mathcal {P}\in red({\mathcal {R}})\), \(R\not \in \mathcal {P}\). So \(\mathcal {P}\subseteq {\mathcal {R}}-\{R\}\subseteq {\mathcal {R}}\). This implies

$$\begin{aligned} ind(\mathcal {P})\supseteq ind({\mathcal {R}}-\{R\})\supseteq ind({\mathcal {R}}). \end{aligned}$$

By \(\mathcal {P}\in red({\mathcal {R}})\), \(ind(\mathcal {P})= ind({\mathcal {R}})\). Then, \(ind({\mathcal {R}}-\{R\})= ind({\mathcal {R}}).\) So R is not independent in \({\mathcal {R}}\). This is a contradiction.

\((2) \Longrightarrow (3)\). Since R is independent in \({\mathcal {R}}\), we have \(ind({\mathcal {R}}-\{R\})\ne ind({\mathcal {R}})\). Then, \(ind({\mathcal {R}}-\{R\})-ind({\mathcal {R}})\ne \emptyset \). Pick

$$\begin{aligned} (x,y)\in ind({\mathcal {R}}-\{R\})-ind({\mathcal {R}}). \end{aligned}$$

Denote \({\mathcal {R}}=\{R_1,R_2,\ldots ,R_n\}\). Then, \(R=R_j\) for some \(j\le n\). So

$$\begin{aligned} (x,y)\in \bigcap \limits _{1\le i\le n, i\ne j}R_i-\bigcap \limits _{1\le i\le n}R_i. \end{aligned}$$

This implies \((x,y)\not \in R_j\) and \((x,y)\in R_i~(i\ne j).\)

Thus, \(\mathcal {D}(x,y)=\{R\}\).

\((3) \Longrightarrow (2)\). Since \(\mathcal {D}(x,y)=\{R\}\) for some \(x,y\in U\), we have

$$\begin{aligned} (x,y)\not \in R,~ (x,y)\in R^{'}~( R^{'}\ne R). \end{aligned}$$

Then, \((x,y)\in ind({\mathcal {R}}-\{R\})\). But \((x,y)\not \in ind({\mathcal {R}})\).

Thus, \(ind({\mathcal {R}}-\{R\})\ne ind({\mathcal {R}}).\)

Hence, R is independent in \({\mathcal {R}}\). \(\square \)
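Definition 4.16 and Proposition 4.17(3) suggest two ways to compute the core; a small sketch of both (our own function names) is shown below. On the knowledge base of Examples 2.2 and 4.13 either computation returns \(\{R_2\}\).

```python
# Sketch: the core as the intersection of all reductions (Definition 4.16) and,
# equivalently, as the union of the singleton discernibility entries (Proposition 4.17(3)).
def core_from_reductions(reductions):
    reds = [set(P) for P in reductions]
    out = reds[0]
    for P in reds[1:]:
        out &= P
    return out

def core_from_discernibility(U, family):
    out = set()
    for x in U:
        for y in U:
            d = {n for n, R in family.items() if (x, y) not in R}
            if len(d) == 1:          # D(x, y) is a singleton {R}: R is necessary
                out |= d
    return out
```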

Example 4.18

In Example 4.13, \(R_2\) is a necessary relation.

Proposition 4.19

Let \((U,{\mathcal {R}})\sim _f(V,\hat{{\mathcal {R}}})\). If \((U,{\mathcal {R}})\) is a knowledge base, then \(\forall ~x,y\in U\),

$$\begin{aligned} \hat{f}(\mathcal {D}(x,y))=\mathcal {D}(f(x),f(y)), \end{aligned}$$

where \(\mathcal {D}(f(x),f(y))=\{\hat{R}|~R\in {\mathcal {R}},~(f(x),f(y))\not \in \hat{R}\}.\)

Proof

This holds by Proposition 3.4. \(\square \)

Proposition 4.20

Let \((U,{\mathcal {R}})\sim _f(V,\hat{{\mathcal {R}}})\). If \((U,{\mathcal {R}})\) is a knowledge base, then

$$\begin{aligned} T\in core(\hat{{\mathcal {R}}})~\Longleftrightarrow \exists \ x,y\in U, \ \hat{f}(\mathcal {D}(x,y))=\{T\}. \end{aligned}$$

Proof

This holds by Propositions 4.17 and 4.19. \(\square \)

Theorem 4.21

Let \((U,{\mathcal {R}})\sim _f(V,\hat{{\mathcal {R}}})\). If \((U,{\mathcal {R}})\) is a knowledge base, then

$$\begin{aligned} R\in core({\mathcal {R}})~~\Longleftrightarrow ~~\hat{R}\in core(\hat{{\mathcal {R}}}). \end{aligned}$$

Proof

This holds by Propositions 4.17 and 4.20. \(\square \)

Theorem 4.22

Let \((U,{\mathcal {R}})\sim _f(V,\hat{{\mathcal {R}}})\). If \((U,{\mathcal {R}})\) is a knowledge base, then

(1)

$$\begin{aligned} \hat{f}(core({\mathcal {R}}))=core(\hat{{\mathcal {R}}}); \end{aligned}$$

(2)

$$\begin{aligned} \hat{f}^{-1}(core(\hat{{\mathcal {R}}}))=core({\mathcal {R}}). \end{aligned}$$

Proof

(1) By Theorem 4.21,

$$\begin{aligned} \hat{f}(core({\mathcal {R}}))= & {} \{\hat{R}|R\in core({\mathcal {R}})\}\\= & {} \{\hat{R}|\hat{R}\in core(\hat{{\mathcal {R}}})\}\\= & {} core(\hat{{\mathcal {R}}}). \end{aligned}$$

(2) By Proposition 3.3 and Theorem 4.21,

$$\begin{aligned} \hat{f}^{-1}(core(\hat{{\mathcal {R}}}))= & {} \{\hat{f}^{-1}(T)|T\in core(\hat{{\mathcal {R}}})\}\\= & {} \{\hat{f}^{-1}(\hat{R})|\hat{R}\in core(\hat{{\mathcal {R}}})\}\\= & {} \{\hat{f}^{-1}(\hat{R})|R\in core({\mathcal {R}})\}\\= & {} \{R|R\in core({\mathcal {R}})\}\\= & {} core({\mathcal {R}}). \end{aligned}$$

\(\square \)

Corollary 4.23

Let \((U,{\mathcal {R}})\sim _f(V,\hat{{\mathcal {R}}})\). If \((U,{\mathcal {R}})\) is a knowledge base, then

$$\begin{aligned} T\in core(\hat{{\mathcal {R}}})~~\Longleftrightarrow ~~\hat{f}^{-1}(T)\in core({\mathcal {R}}). \end{aligned}$$

Proof

This holds by Proposition 3.3, Theorems 4.21 and 4.22. \(\square \)

In order to illustrate the practical significance and possible applications of reductions in a knowledge base, the following examples are given.

Table 1 The information system (U, A)

Example 4.24

We consider the information system (U, A) of Table 1, where \(U=\{x_1,x_2,x_3,x_4,x_5,x_6\}\) is a set of six patients, \(A=\{a_1, a_2, a_3\}\), and \(a_1\), \(a_2\) and \(a_3\) stand for headache, muscle pain and body temperature, respectively.

Put \({\mathcal {R}}=\{R_{a_1},R_{a_2},R_{a_3}\}\). Then, \((U, {\mathcal {R}})\) is the knowledge base induced by the information system (U, A).

For convenience, denote \(R_i=R_{a_i}~(i=1,2,3)\). Then,

$$\begin{aligned} U/R_1= & {} \{\{x_1,x_2,x_3\},\{x_4,x_5,x_6\}\},\\ U/R_2= & {} \{\{x_1,x_2,x_3,x_4,x_6\},\{x_5\}\},\\ U/R_3= & {} \{\{x_1,x_4\},\{x_2,x_5\},\{x_3,x_6\}\}. \end{aligned}$$

We can obtain the discernibility matrix \({\mathfrak {D}}({\mathcal {R}})\) of \({\mathcal {R}}\) as follows:

$$\begin{aligned} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} \emptyset &{}\{R_3\}&{}\{R_3\}&{}\{R_1\}&{}{\mathcal {R}}&{}\{R_1,R_3\}\\ \{R_3\}&{}\emptyset &{}\{R_3\}&{}\{R_1,R_3\}&{}\{R_1,R_2\}&{}\{R_1,R_3\}\\ \{R_3\}&{}\{R_3\}&{}\emptyset &{}\{R_1,R_3\}&{}{\mathcal {R}}&{}\{R_1\}\\ \{R_1\}&{}\{R_1,R_3\}&{}\{R_1,R_3\}&{}\emptyset &{}\{R_2,R_3\}&{}\{R_3\}\\ {\mathcal {R}}&{}\{R_1,R_2\}&{}{\mathcal {R}}&{}\{R_2,R_3\}&{}\emptyset &{}\{R_2,R_3\}\\ \{R_1,R_3\}&{}\{R_1,R_3\}&{}\{R_1\}&{}\{R_3\}&{}\{R_2,R_3\}&{}\emptyset \end{array}\right) . \end{aligned}$$

Thus,

\(red({\mathcal {R}})=\{\{R_1,R_3\}\}, ~~core({\mathcal {R}})=\{R_1,R_3\}\).

In order to illustrate the importance and usefulness of the homomorphism between knowledge bases, the following example is given.

Example 4.25

Let \(U=\{x_i|1\le i\le 15\}\). Put

$$\begin{aligned} R_1= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1\\ \end{array}\right) ,\\ R_2= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 1&{}0&{}0&{}1&{}0&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1&{}1\\ 0&{}1&{}1&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0&{}0\\ 0&{}1&{}1&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0&{}0\\ 1&{}0&{}0&{}1&{}0&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1&{}1\\ 0&{}1&{}1&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0&{}0\\ 0&{}1&{}1&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0&{}0\\ 0&{}1&{}1&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0&{}0\\ 0&{}1&{}1&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0&{}0\\ 0&{}1&{}1&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0&{}0\\ 0&{}1&{}1&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0&{}0\\ 1&{}0&{}0&{}1&{}0&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1&{}1\\ 1&{}0&{}0&{}1&{}0&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1&{}1\\ 1&{}0&{}0&{}1&{}0&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1&{}1\\ 1&{}0&{}0&{}1&{}0&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1&{}1\\ 1&{}0&{}0&{}1&{}0&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1&{}1\\ \end{array}\right) , \end{aligned}$$
$$\begin{aligned} R_3= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}0&{}0&{}0&{}0\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}0&{}0&{}0&{}0\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}0&{}0&{}0&{}0\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1&{}1\\ \end{array}\right) ,\\ R_4= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0\\ 1&{}1&{}0&{}1&{}0&{}0&{}1&{}1&{}1&{}1&{}1&{}0&{}0&{}0&{}0\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1\\ 0&{}0&{}1&{}0&{}1&{}1&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}1&{}1\\ \end{array}\right) . \end{aligned}$$

Then, \((U,{\mathcal {R}})\) is a knowledge base where \({\mathcal {R}}=\{R_1,R_2,R_3,R_4\}\).

We have

$$\begin{aligned} U/R_1= & {} \{\{x_1,x_2,x_4,x_7,x_8,x_9,x_{10},x_{11}\},\{x_3,x_5,x_6,x_{12},x_{13},x_{14},x_{15}\}\},\\ U/R_2= & {} \{\{x_1,x_4,x_{11},x_{12},x_{13},x_{14},x_{15}\},\{x_2,x_3,x_5,x_6,x_7,x_8,x_9,x_{10}\}\},\\ U/R_3= & {} \{\{x_1,x_2,x_4,x_7,x_8,x_9,x_{10},x_{11},x_{12},x_{13},x_{14},x_{15}\},\{x_3,x_5,x_6\}\},\\ U/R_4= & {} \{\{x_1,x_2,x_4,x_7,x_8,x_9,x_{10},x_{11}\},\{x_3,x_5,x_6,x_{12},x_{13},x_{14},x_{15}\}\}. \end{aligned}$$

By \(d_{ij}=\mathcal {D}(x_i,x_j)~(1\le i\le 15)\), the discernibility matrix \({\mathfrak {D}}({\mathcal {R}})=(d_{ij})_{15\times 15}\) of \({\mathcal {R}}\) can be obtained as follows:

$$\begin{aligned} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} \emptyset &{}\{R_2\}&{}{\mathcal {R}}&{}\cdots &{}\{R_1,R_4\}&{}\{R_1,R_4\}\\ \{R_2\}&{}\emptyset &{}\{R_1,R_3,R_4\}&{}\cdots &{}\{R_1,R_2,R_4\}&{}\{R_1,R_2,R_4\}\\ {\mathcal {R}}&{}\{R_1,R_3,R_4\}&{}\emptyset &{}\cdots &{}\{R_2,R_3\}&{}\{R_2,R_3\}\\ \emptyset &{}\{R_2\}&{}{\mathcal {R}}&{}\cdots &{}\{R_1,R_4\}&{}\{R_1,R_4\}\\ {\mathcal {R}}&{}\{R_1,R_3,R_4\}&{}\emptyset &{}\cdots &{}\{R_2,R_3\}&{}\{R_2,R_3\}\\ \vdots &{}\vdots &{}\vdots &{}\ddots &{}\vdots &{}\vdots \\ \{R_1,R_4\}&{}\{R_1,R_2,R_4\}&{}\{R_2,R_3\}&{}\cdots &{}\emptyset &{}\emptyset \\ \{R_1,R_4\}&{}\{R_1,R_2,R_4\}&{}\{R_2,R_3\}&{}\cdots &{}\emptyset &{}\emptyset \\ \{R_1,R_4\}&{}\{R_1,R_2,R_4\}&{}\{R_2,R_3\}&{}\cdots &{}\emptyset &{}\emptyset \\ \end{array}\right) . \end{aligned}$$

Then, \(red({\mathcal {R}})=\{\{R_1,R_2\}, \{R_2,R_4\}\}, ~core({\mathcal {R}})=\{R_2\}\).

Let \(V=\{y_1,y_2,y_3,y_4,y_5,y_6\}\). Define a mapping \(f:U\rightarrow V\) as follows:

$$\begin{aligned} \begin{array}{cccccccc} x_1,x_4,x_{11}&{}~~~x_2,x_8&{}~~~x_3,x_6&{}~~~x_5&{}~~~x_7,x_9,x_{10}&{}~~~x_{12},x_{13},x_{14},x_{15}\\ \hline y_1&{}~~~y_2&{}~~~~y_3&{}~~~~y_4&{}~~~y_5&{}~~~~y_6\\ \end{array}. \end{aligned}$$

Then, \((V, \hat{{\mathcal {R}}} )\) is the f-induced relation information system of \((U,{\mathcal {R}})\), where \(\hat{{\mathcal {R}}} =\{\hat{R_1},\hat{R_2},\hat{R_3},\hat{R_4}\}\).

It is easy to verify that f is both type-1 and type-2 consistent with respect to \({\mathcal {R}}\) on U. Thus,

$$\begin{aligned} (U,{\mathcal {R}})\sim _f(V,\hat{{\mathcal {R}}}). \end{aligned}$$

We have

$$\begin{aligned} V/\hat{R_1}= & {} \{\{y_1,y_2,y_5\},\{y_3,y_4,y_6\}\},\\ V/\hat{R_2}= & {} \{\{y_1,y_6\},\{y_2,y_3,y_4,y_5\}\},\\ V/\hat{R_3}= & {} \{\{y_1,y_2,y_5,y_6\},\{y_3,y_4\}\},\\ V/\hat{R_4}= & {} \{\{y_1,y_2,y_5\},\{y_3,y_4,y_6\}\}. \end{aligned}$$

By Theorem 4.1, \((V, \hat{{\mathcal {R}}})\) is a knowledge base.

By Example 4.13, \(red(\hat{{\mathcal {R}}})=\{\{\hat{R_1},\hat{R_2}\}, \{\hat{R_2},\hat{R_4}\}\}, ~core(\hat{{\mathcal {R}}})=\{\hat{R_2}\}.\)

It is obvious that the original knowledge base \((U,{\mathcal {R}})\) is relatively complex and its image system \((V, \hat{{\mathcal {R}}})\) is simpler. However, the original system and its image system have the following equivalent results:

(1) \(\{\hat{R_1},\hat{R_2}\}\), \(\{\hat{R_2},\hat{R_4}\}\) are reductions in \(\hat{{\mathcal {R}}}\) \(\Leftrightarrow \) \(\{R_1,R_2\}\),\(\{R_2,R_4\}\) are reductions in \({\mathcal {R}}\);

(2) \(\hat{R_2}\in core(\hat{{\mathcal {R}}})\) \(\Leftrightarrow \) \(R_2\in core({\mathcal {R}})\).

These results mean that the reductions in the original system and in the image system correspond to each other. The image system \((V,\hat{{\mathcal {R}}})\) is simpler, yet it yields results equivalent to those of the original knowledge base \((U,{\mathcal {R}})\). Therefore, we can study the original system by means of the results of the image system.

5 Lattice characteristics of the dependency of knowledge bases

Let \((U,{\mathcal {R}})\) be a relation information system and \((V, \hat{{\mathcal {R}}})\) an f-induced relation information system of \((U,{\mathcal {R}})\). Denote

$$\begin{aligned} \mathcal {K}({\mathcal {R}})= & {} \{(U,\mathcal {P})|~\mathcal {P}\in 2^{{\mathcal {R}}}-\{\emptyset \}\},\\ \mathcal {K}(\hat{{\mathcal {R}}})= & {} \{(V,\hat{\mathcal {P}})|~\mathcal {P}\in 2^{{\mathcal {R}}}-\{\emptyset \}\}. \end{aligned}$$

Theorem 5.1

\(L=(\mathcal {K}({\mathcal {R}}), \preccurlyeq )\) is a lattice with \(0_L=(U,{\mathcal {R}})\). If \(\mathcal {P},\mathcal {Q}\in 2^{{\mathcal {R}}}-\{\emptyset \}\), then

$$\begin{aligned} (U,\mathcal {P})\wedge (U,\mathcal {Q})= & {} (U,\mathcal {P}\cup \mathcal {Q}),\\ (U,\mathcal {P})\vee (U,\mathcal {Q})= & {} (U,\mathcal {A}(\mathcal {P},\mathcal {Q})), \end{aligned}$$

where \(A(\mathcal {P}, \mathcal {Q})=\{\mathcal {F}\in 2^{{\mathcal {R}}}-\{\emptyset \}|~ind(\mathcal {P})\cup ind(\mathcal {Q})\subseteq ind(\mathcal {F})\},\)

\(\mathcal {A}(\mathcal {P}, \mathcal {Q})=\bigcup A(\mathcal {P}, \mathcal {Q})\).

Proof

Obviously, \(0_L=(U,{\mathcal {R}})\).

Since

$$\begin{aligned} ind(\mathcal {P}\cup \mathcal {Q})=ind(\mathcal {P})\cap ind(\mathcal {Q}), \end{aligned}$$

we have \((U,\mathcal {P}\cup \mathcal {Q})\preccurlyeq (U,\mathcal {P})\), \((U,\mathcal {P}\cup \mathcal {Q})\preccurlyeq (U,\mathcal {Q})\). This implies that \((U,\mathcal {P}\cup \mathcal {Q})\) is a lower bound of \(\{(U,\mathcal {P}),(U,\mathcal {Q})\}\).

Suppose \((U,{\mathcal {S}})\preccurlyeq (U,\mathcal {P})\), \((U,{\mathcal {S}})\preccurlyeq (U,\mathcal {Q})\) (\({\mathcal {S}}\in 2^{{\mathcal {R}}}-\{\emptyset \}\)). Then, \(ind({\mathcal {S}})\subseteq ind(\mathcal {P})\), \(ind({\mathcal {S}})\subseteq ind(\mathcal {Q})\). Thus,

$$\begin{aligned} ind({\mathcal {S}})\subseteq ind(\mathcal {P})\cap ind(\mathcal {Q})=ind(\mathcal {P}\cup \mathcal {Q}). \end{aligned}$$

Thus, \((U,{\mathcal {S}})\preccurlyeq (U,\mathcal {P}\cup \mathcal {Q})\), so \((U,\mathcal {P}\cup \mathcal {Q})\) is the greatest lower bound of \(\{(U,\mathcal {P}),(U,\mathcal {Q})\}\). Then,

$$\begin{aligned} (U,\mathcal {P})\wedge (U,\mathcal {Q})=(U,\mathcal {P}\cup \mathcal {Q}). \end{aligned}$$

\(\forall ~~\mathcal {F}\in A(\mathcal {P}, \mathcal {Q})\),

$$\begin{aligned} ind(\mathcal {P})\subseteq ind(\mathcal {F}),~~ind(\mathcal {Q})\subseteq ind(\mathcal {F}). \end{aligned}$$

Then,

$$\begin{aligned} ind(\mathcal {P})\subseteq \bigcap \limits _{\mathcal {F}\in A(\mathcal {P},\mathcal {Q})}ind(\mathcal {F}),~~ind(\mathcal {Q})\subseteq \bigcap \limits _{\mathcal {F}\in A(\mathcal {P},\mathcal {Q})}ind(\mathcal {F}). \end{aligned}$$

Note that

$$\begin{aligned} \bigcap \limits _{\mathcal {F}\in A(\mathcal {P},\mathcal {Q})}ind(\mathcal {F})=ind(\bigcup \limits _{\mathcal {F}\in A(\mathcal {P},\mathcal {Q})}\mathcal {F})=ind(\mathcal {A}(\mathcal {P},\mathcal {Q})). \end{aligned}$$

Then,

$$\begin{aligned} ind(\mathcal {P})\subseteq ind(\mathcal {A}(\mathcal {P},\mathcal {Q})),~~ind(\mathcal {Q})\subseteq ind(\mathcal {A}(\mathcal {P},\mathcal {Q})). \end{aligned}$$

This implies \((U,\mathcal {P})\preccurlyeq (U,\mathcal {A}(\mathcal {P},\mathcal {Q}))\), \((U,\mathcal {Q})\preccurlyeq (U,\mathcal {A}(\mathcal {P},\mathcal {Q}))\).

Thus, \((U,\mathcal {A}(\mathcal {P},\mathcal {Q}))\) is an upper bound of \(\{(U,\mathcal {P}),(U,\mathcal {Q})\}\).

Suppose \((U,\mathcal {P})\preccurlyeq (U,{\mathcal {S}})\), \((U,\mathcal {Q})\preccurlyeq (U,{\mathcal {S}})\) (\({\mathcal {S}}\in 2^{{\mathcal {R}}}-\{\emptyset \}\)). Then,

$$\begin{aligned} ind(\mathcal {P})\subseteq ind({\mathcal {S}}),~~ind(\mathcal {Q})\subseteq ind({\mathcal {S}}). \end{aligned}$$

So

$$\begin{aligned} ind(\mathcal {P})\cup ind(\mathcal {Q})\subseteq ind({\mathcal {S}}). \end{aligned}$$

This implies \({\mathcal {S}}\in A(\mathcal {P},\mathcal {Q})\). Then, \(ind(\mathcal {A}(\mathcal {P},\mathcal {Q}))\subseteq ind({\mathcal {S}})\). Thus, \((U,\mathcal {A}(\mathcal {P},\mathcal {Q}))\preccurlyeq (U,{\mathcal {S}})\).

This shows \((U,\mathcal {P})\vee (U,\mathcal {Q})=(U,\mathcal {A}(\mathcal {P},\mathcal {Q}))\).

Hence, L is a lattice. \(\square \)

Theorem 5.2

\(L^*=(\mathcal {K}(\hat{{\mathcal {R}}}), \preccurlyeq )\) is a lattice with \(0_{L^*}=(V,\hat{{\mathcal {R}}})\). If \(\mathcal {P},\mathcal {Q}\in 2^{{\mathcal {R}}}-\{\emptyset \}\), then

$$\begin{aligned} (V,\hat{\mathcal {P}})\wedge (V,\hat{\mathcal {Q}})= & {} (V,\hat{\mathcal {P}}\cup \hat{\mathcal {Q}}),\\ (V,\hat{\mathcal {P}})\vee (V,\hat{\mathcal {Q}})= & {} (V,\mathcal {B}(\hat{\mathcal {P}},\hat{\mathcal {Q}})), \end{aligned}$$

where \(B(\hat{\mathcal {P}}, \hat{\mathcal {Q}})=\{\hat{\mathcal {F}}|~\mathcal {F}\in 2^{{\mathcal {R}}}-\{\emptyset \},~ind(\hat{\mathcal {P}})\cup ind(\hat{\mathcal {Q}})\subseteq ind(\hat{\mathcal {F}})\},\)

\(\mathcal {B}(\hat{\mathcal {P}}, \hat{\mathcal {Q}})=\bigcup B(\hat{\mathcal {P}}, \hat{\mathcal {Q}})\).

Proof

Obviously, \(0_{L^*}=(V,\hat{{\mathcal {R}}})\).

Since

$$\begin{aligned} ind(\hat{\mathcal {P}}\cup \hat{\mathcal {Q}})=ind(\hat{\mathcal {P}})\cap ind(\hat{\mathcal {Q}}), \end{aligned}$$

we have \((V,\hat{\mathcal {P}}\cup \hat{\mathcal {Q}})\preccurlyeq (V,\hat{\mathcal {P}})\), \((V,\hat{\mathcal {P}}\cup \hat{\mathcal {Q}})\preccurlyeq (V,\hat{\mathcal {Q}})\). This implies that \((V,\hat{\mathcal {P}}\cup \hat{\mathcal {Q}})\) is a lower bound of \(\{(V,\hat{\mathcal {P}}),(V,\hat{\mathcal {Q}})\}\).

Suppose \((V,\hat{{\mathcal {S}}})\preccurlyeq (V,\hat{\mathcal {P}})\), \((V,\hat{{\mathcal {S}}})\preccurlyeq (V,\hat{\mathcal {Q}})\) (\({\mathcal {S}}\in 2^{{\mathcal {R}}}-\{\emptyset \}\)). Then, \(ind(\hat{{\mathcal {S}}})\subseteq ind(\hat{\mathcal {P}})\), \(ind(\hat{{\mathcal {S}}})\subseteq ind(\hat{\mathcal {Q}})\). Thus,

$$\begin{aligned} ind(\hat{{\mathcal {S}}})\subseteq ind(\hat{\mathcal {P}})\cap ind(\hat{\mathcal {Q}})=ind(\hat{\mathcal {P}}\cup \hat{\mathcal {Q}}). \end{aligned}$$

Thus, \((V,\hat{{\mathcal {S}}})\preccurlyeq (V,\hat{\mathcal {P}}\cup \hat{\mathcal {Q}})\), so \((V,\hat{\mathcal {P}}\cup \hat{\mathcal {Q}})\) is the greatest lower bound of \(\{(V,\hat{\mathcal {P}}),(V,\hat{\mathcal {Q}})\}\). Then,

$$\begin{aligned} (V,\hat{\mathcal {P}})\wedge (V,\hat{\mathcal {Q}})=(V,\hat{\mathcal {P}}\cup \hat{\mathcal {Q}}). \end{aligned}$$

\(\forall ~~\hat{\mathcal {F}}\in B(\hat{\mathcal {P}}, \hat{\mathcal {Q}})\), we have

$$\begin{aligned} ind(\hat{\mathcal {P}})\subseteq ind(\hat{\mathcal {F}}),~~ind(\hat{\mathcal {Q}})\subseteq ind(\hat{\mathcal {F}}). \end{aligned}$$

Then,

$$\begin{aligned} ind(\hat{\mathcal {P}})\subseteq \bigcap \limits _{\hat{\mathcal {F}}\in B(\hat{\mathcal {P}},\hat{\mathcal {Q}})} ind(\hat{\mathcal {F}})=ind(\bigcup \limits _{\hat{\mathcal {F}}\in B(\hat{\mathcal {P}},\hat{\mathcal {Q}})}\hat{\mathcal {F}})=ind(\mathcal {B}(\hat{\mathcal {P}},\hat{\mathcal {Q}})). \end{aligned}$$

This implies \((V,\hat{\mathcal {P}})\preccurlyeq (V,\mathcal {B}(\hat{\mathcal {P}},\hat{\mathcal {Q}}))\). Similarly, \((V,\hat{\mathcal {Q}})\preccurlyeq (V,\mathcal {B}(\hat{\mathcal {P}},\hat{\mathcal {Q}}))\). Then, \((V,\mathcal {B}(\hat{\mathcal {P}},\hat{\mathcal {Q}}))\) is an upper bound of \(\{(V,\hat{\mathcal {P}}),(V,\hat{\mathcal {Q}})\}\).

Suppose \((V,\hat{\mathcal {P}})\preccurlyeq (V,\hat{{\mathcal {S}}})\), \((V,\hat{\mathcal {Q}})\preccurlyeq (V,\hat{{\mathcal {S}}})\) (\({\mathcal {S}}\in 2^{{\mathcal {R}}}-\{\emptyset \}\)). We have

$$\begin{aligned} ind(\hat{\mathcal {P}})\subseteq ind(\hat{{\mathcal {S}}}),~~ind(\hat{\mathcal {Q}})\subseteq ind(\hat{{\mathcal {S}}}). \end{aligned}$$

Then, \(ind(\hat{\mathcal {P}})\cup ind(\hat{\mathcal {Q}})\subseteq ind(\hat{{\mathcal {S}}})\). This implies \(\hat{{\mathcal {S}}}\in B(\hat{\mathcal {P}},\hat{\mathcal {Q}})\). Thus, \(ind(\mathcal {B}(\hat{\mathcal {P}},\hat{\mathcal {Q}}))\subseteq ind(\hat{{\mathcal {S}}})\). Then, \((V,\mathcal {B}(\hat{\mathcal {P}},\hat{\mathcal {Q}}))\preccurlyeq (V,\hat{{\mathcal {S}}})\).

This shows \((V,\hat{\mathcal {P}})\vee (V,\hat{\mathcal {Q}})=(V,\mathcal {B}(\hat{\mathcal {P}},\hat{\mathcal {Q}}))\).

Hence, \(L^*\) is a lattice. \(\square \)
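
To illustrate the meet and join of Theorem 5.2 on the image system of the example in Sect. 4, the following brute-force sketch (again illustrative only; representing each relation \(\hat{R_i}\) as a set of ordered pairs is our own encoding) computes both operations from the partitions \(V/\hat{R_i}\).

```python
# Brute-force meet and join in the image lattice L*; illustrative sketch only.
from itertools import combinations

V = ['y1', 'y2', 'y3', 'y4', 'y5', 'y6']
partitions = {                                  # V/R_hat_i as in Sect. 4
    'R1': [{'y1', 'y2', 'y5'}, {'y3', 'y4', 'y6'}],
    'R2': [{'y1', 'y6'}, {'y2', 'y3', 'y4', 'y5'}],
    'R3': [{'y1', 'y2', 'y5', 'y6'}, {'y3', 'y4'}],
    'R4': [{'y1', 'y2', 'y5'}, {'y3', 'y4', 'y6'}],
}

def rel(name):
    """The equivalence relation R_hat_name as a set of ordered pairs."""
    return {(a, b) for block in partitions[name] for a in block for b in block}

def ind(family):
    """ind(family) is the intersection of the relations in the family."""
    return set.intersection(*(rel(r) for r in family))

families = [set(c) for k in range(1, len(partitions) + 1)
            for c in combinations(partitions, k)]

def meet(P, Q):
    # (V, P) meet (V, Q) = (V, P union Q)
    return P | Q

def join(P, Q):
    # (V, P) join (V, Q) = (V, union of all F with ind(P) u ind(Q) included in ind(F))
    B = [F for F in families if ind(P) | ind(Q) <= ind(F)]
    return set().union(*B)

print(meet({'R1'}, {'R2'}))   # the family {R1, R2}
print(join({'R1'}, {'R4'}))   # the family {R1, R4}
```

For example, since \(ind(\{\hat{R_1}\})=ind(\{\hat{R_4}\})\) and no other relation of \(\hat{{\mathcal {R}}}\) contains this relation, the sketch yields \((V,\{\hat{R_1}\})\vee (V,\{\hat{R_4}\})=(V,\{\hat{R_1},\hat{R_4}\})\).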

Theorem 5.3

\(L~\simeq ~L^*.\)

Proof

Define a mapping \(\theta _{f}:L\rightarrow L^*\) by

$$\begin{aligned} \theta _{f}(U,\mathcal {P})=(V,\hat{\mathcal {P}}),~\forall ~ \mathcal {P}\in 2^{{\mathcal {R}}}-\{\emptyset \}. \end{aligned}$$

Obviously, \(\theta _{f}\) is surjective.

Suppose \(\theta _{f}((U,\mathcal {P}))=\theta _{f}((U,\mathcal {Q}))\). Then, \((V,\hat{\mathcal {P}})=(V,\hat{\mathcal {Q}})\), which implies

$$\begin{aligned} (V,\hat{\mathcal {P}})\preccurlyeq (V,\hat{\mathcal {Q}}), ~(V,\hat{\mathcal {Q}})\preccurlyeq (V,\hat{\mathcal {P}}). \end{aligned}$$

By Theorem 4.3,

$$\begin{aligned} (U,\mathcal {P})\preccurlyeq (U,\mathcal {Q}), ~(U,\mathcal {Q})\preccurlyeq (U,\mathcal {P}). \end{aligned}$$

Then, \((U,\mathcal {P})=(U,\mathcal {Q})\). This implies that \(\theta _{f}\) is injective.

Thus, \(\theta _{f}\) is a bijection.

We have

$$\begin{aligned} \theta _{f}((U,\mathcal {P})\wedge (U,\mathcal {Q}))= & {} \theta _{f}((U,\mathcal {P}\cup \mathcal {Q})) =(V,\hat{f}(\mathcal {P}\cup \mathcal {Q}))=(V,\hat{\mathcal {P}}\cup \hat{\mathcal {Q}})\\= & {} (V,\hat{\mathcal {P}})\wedge (V,\hat{\mathcal {Q}})=\theta _{f}((U,\mathcal {P})) \wedge \theta _{f}((U,\mathcal {Q})). \end{aligned}$$

Claim \(\mathcal {F}\in A(\mathcal {P},\mathcal {Q})\Longleftrightarrow \hat{\mathcal {F}}\in B(\hat{\mathcal {P}},\hat{\mathcal {Q}})\).

In fact, \(\forall ~\mathcal {F}\in A(\mathcal {P},\mathcal {Q})\), \(ind(\mathcal {P})\cup ind(\mathcal {Q})\subseteq ind(\mathcal {F})\).

Then,

$$\begin{aligned} \hat{f}(ind(\mathcal {P}))\cup \hat{f}(ind(\mathcal {Q}))=\hat{f}(ind(\mathcal {P})\cup ind(\mathcal {Q}))\subseteq \hat{f}(ind(\mathcal {F})). \end{aligned}$$

By Proposition 3.12,

$$\begin{aligned} ind(\hat{\mathcal {P}})\cup ind(\hat{\mathcal {Q}})\subseteq ind(\hat{\mathcal {F}}). \end{aligned}$$

Then, \(\hat{\mathcal {F}}\in B(\hat{\mathcal {P}},\hat{\mathcal {Q}})\).

On the other hand, \(\forall ~\hat{\mathcal {F}}\in B(\hat{\mathcal {P}},\hat{\mathcal {Q}})\), \(ind(\hat{\mathcal {P}})\cup ind(\hat{\mathcal {Q}})\subseteq ind(\hat{\mathcal {F}})\).

By Proposition 3.12,

$$\begin{aligned} \hat{f}(ind(\mathcal {P}))\cup \hat{f}(ind(\mathcal {Q}))\subseteq \hat{f}(ind(\mathcal {F})). \end{aligned}$$

Then,

$$\begin{aligned} \hat{f}^{-1}(\hat{f}(ind(\mathcal {P})))\cup \hat{f}^{-1}(\hat{f}(ind(\mathcal {Q})))\!=\!\hat{f}^{-1}(\hat{f}(ind(\mathcal {P}))\cup \hat{f}(ind(\mathcal {Q})))\!\subseteq \! \hat{f}^{-1}(\hat{f}(ind(\mathcal {F}))). \end{aligned}$$

By Proposition 3.13,

$$\begin{aligned} ind(\mathcal {P})\cup ind(\mathcal {Q})\subseteq ind(\mathcal {F}). \end{aligned}$$

Then, \(\mathcal {F}\in A(\mathcal {P},\mathcal {Q})\).

This shows that \(\mathcal {F}\in A(\mathcal {P},\mathcal {Q})\Longleftrightarrow \hat{\mathcal {F}}\in B(\hat{\mathcal {P}},\hat{\mathcal {Q}})\).

We have

$$\begin{aligned} \theta _{f}((U,\mathcal {P})\vee (U,\mathcal {Q}))=\theta _{f}((U,\mathcal {A}(\mathcal {P},\mathcal {Q})))=(V,\hat{f}(\mathcal {A}(\mathcal {P},\mathcal {Q}))) \end{aligned}$$

and

$$\begin{aligned} \theta _{f}((U,\mathcal {P}))\vee \theta _{f}((U,\mathcal {Q}))=(V,\hat{\mathcal {P}})\vee (V,\hat{\mathcal {Q}})=(V,\mathcal {B}(\hat{\mathcal {P}},\hat{\mathcal {Q}})). \end{aligned}$$

Note that

$$\begin{aligned} \hat{f}(\mathcal {A}(\mathcal {P},\mathcal {Q}))=\hat{f}(\bigcup \limits _{\mathcal {F}\in A(\mathcal {P},\mathcal {Q})}\mathcal {F})=\bigcup \limits _{\mathcal {F}\in A(\mathcal {P},\mathcal {Q})}\hat{\mathcal {F}},~~~\mathcal {B}(\hat{\mathcal {P}},\hat{\mathcal {Q}})=\bigcup \limits _{\hat{\mathcal {F}}\in B(\hat{\mathcal {P}},\hat{\mathcal {Q}})}\hat{\mathcal {F}}. \end{aligned}$$

By the claim above,

$$\begin{aligned} \hat{f}(\mathcal {A}(\mathcal {P},\mathcal {Q}))=\mathcal {B}(\hat{\mathcal {P}},\hat{\mathcal {Q}}). \end{aligned}$$

Thus,

$$\begin{aligned} \theta _{f}((U,\mathcal {P})\vee (U,\mathcal {Q}))=\theta _{f}((U,\mathcal {P}))\vee \theta _{f}((U,\mathcal {Q})). \end{aligned}$$

Hence, \(L~\simeq ~L^*.\) \(\square \)

Remark 5.4

Theorem 5.3 shows that L and \(L^*\) have the same lattice structure.

6 Conclusions

In this paper, we have established relationships between knowledge bases by using related results of relation information systems. We have investigated the original knowledge base and its image knowledge base from two aspects (namely, the dependency of knowledge bases and knowledge reductions in knowledge bases) and obtained some invariant amounts and inverse invariant amounts, such as knowledge bases, the dependency of knowledge bases, knowledge reductions, coordinate families and necessary relations. These results will be significant for establishing a framework of granular computing in knowledge bases and have potential applications to knowledge discovery, decision making and reasoning about data. In future work, we will consider concrete applications of these results, such as knowledge discovery in information systems.