
1 Introduction

Rough set theory (RST), proposed by Pawlak [16, 17], has been successfully applied in many fields, such as artificial intelligence, data mining and intelligent information processing [7, 15, 18]. However, Pawlak's RST cannot efficiently handle the situation in which the decision attribute values are fuzzy, i.e., each object may belong to different decision classes to different degrees. Dubois and Prade [6] first presented the rough fuzzy set to solve this problem. Rough fuzzy sets have since been investigated from many perspectives. For example, Yao et al. compared the differences between fuzzy sets and rough sets [20]. Luo et al. discussed the properties of rough fuzzy ideals in algebraic systems [14]. Xu et al. presented a novel granular computing approach based on formal concept analysis for fuzzy datasets [19].

In many applications, information systems may vary over time: the objects, attributes and attribute values may all change [3, 8, 11, 12]. For example, new patients may be added to a medical diagnostic information system, which may also gain additional attributes when new diagnostic instruments are introduced, and a patient's record may be revised over time. How to exploit the prior knowledge and incorporate the new data to update the knowledge therefore becomes a crucial problem. The incremental learning technique is an effective way to solve this problem [1, 4]. For example, Zhang et al. proposed an incremental method for updating approximations based on matrix computation when the object set varies with time [22]. Li et al. presented an incremental method for updating approximations based on the characteristic relation under attribute generalization [9]. Luo et al. proposed two incremental algorithms based on dominance-based RST for the addition/removal of attribute values [13]. Chen et al. presented a matrix-based incremental method for updating approximations under decision-theoretic rough sets when attributes and objects are added simultaneously [2]. For rough fuzzy sets, Cheng et al. proposed an incremental method for updating approximations when the attribute set evolves over time [5]. Zeng et al. presented an incremental method for updating rough fuzzy approximations when the object set evolves over time [21]. However, these works do not consider that a Fuzzy Decision System (FDS) may change in both dimensions, i.e., that the attributes and objects vary simultaneously. In this paper, we present a matrix-based incremental approach for updating approximations in an FDS when the attributes and objects vary simultaneously.

The remainder of this paper is organized as follows. Section 2 introduces the basic concepts of rough fuzzy sets in the FDS. Section 3 gives the matrix representation of the lower and upper rough fuzzy approximations. Section 4 presents an incremental method for updating rough fuzzy approximations when the attribute set and the object set vary simultaneously. Section 5 gives an illustrative example to validate the effectiveness of the proposed method. Section 6 concludes the paper and discusses future work.

2 Preliminaries

In this section, the basic concepts of FDS and rough fuzzy sets are briefly reviewed [6].

Definition 1

[6] An FDS is a 4-tuple \(S=\langle U, C\cup D, V, f\rangle \), where \(U=\{x_{i}|i \in \{1,2,\ldots ,n\}\}\) is a non-empty finite set of objects, called the universe; C is a non-empty finite set of condition attributes and D is a non-empty finite set of fuzzy decision attributes, with \(C\cap D=\emptyset \); \(V=V_{C}\cup V_{D}\), where V is the domain of all attributes, \(V_{C}\) is the domain of the condition attributes and \(V_{D}\) is the domain of the decision attributes; f is an information function from \(U\times (C\cup D)\) to V such that \(f: U\times C\rightarrow V_{C} \) and \(f: U\times D\rightarrow [0,1]\).

Since the classical RST could not deal with the fuzzy concepts in a crisp approximation space, Dubois and Prade introduced the rough fuzzy set model.

Definition 2

[6] Let \(S=\langle U, C\cup D, V, f\rangle \) be an FDS and \(A \subseteq C\). \(\widetilde{d}\) is a fuzzy subset on D, where \(\widetilde{d}(x)\) \((x\in U)\) denotes the degree of membership of x in \(\widetilde{d}\). The lower and upper approximations of \(\widetilde{d}\) with respect to the equivalence relation \(R_{A}\) are a pair of fuzzy sets on U whose membership functions are defined as follows:

$$\begin{aligned} \begin{aligned}&\underline{R_{A}}\widetilde{d}(x)=min\{\widetilde{d}(y)|y \in [x]_{R_{A}}\} \\&\overline{R_{A}}\widetilde{d}(x)=max\{\widetilde{d}(y)|y \in [x]_{R_{A}}\} \end{aligned} \end{aligned}$$
(1)

where \(R_{A}=\{(x,y)\in U\times U | f(x,a)=f(y,a), \forall a\in A\}\), \([x]_{R_{A}}\) denotes the equivalence class of an element x under \(R_{A}\) and \([x]_{R_{A}}=\{y \in U | x R_{A} y\}\).

Example 1

Table 1 illustrates a medical diagnosis FDS \(S=\langle U,C\cup D,V,f\rangle \), where \(U=\{x_{i}|i \in \{1,2,\ldots ,10\}\}\) denotes the set of patients, the condition attribute set is \(C=\{headache, muscle\ pain, sore\ throat, temperature\}=\{c_{1},c_{2},c_{3},c_{4}\}\) and the fuzzy decision attribute set is \(D=\{Flu\}=\{d\}\). The domains are \(V_{c_{1}}=\{no, moderate, heavy\}=\{0,1,2\}\) and \(V_{c_{2}}=V_{c_{3}}=V_{c_{4}}=\{no, yes\}=\{0,1\}\).

Let \(A=\{c_{1},c_{2}\}\subset C\). Then \(U/R_{A}=\{\{x_{1},x_{3},x_{4}\},\{x_{2},x_{5},x_{7},x_{9},x_{10}\},\{x_{6},x_{8}\}\}\), and the fuzzy decision attribute Flu is \(\widetilde{d}=\{\frac{0.8}{x_{1}},\frac{0.3}{x_{2}},\frac{1}{x_{3}},\frac{0.7}{x_{4}},\frac{0.1}{x_{5}},\frac{0.3}{x_{6}},\frac{0.2}{x_{7}},\frac{0}{x_{8}}, \frac{0.4}{x_{9}},\frac{0.2}{x_{10}}\}\).

According to Definition 2, the degrees of membership can be obtained as follows:

$$\begin{aligned} \begin{aligned} \underline{R_{A}}\widetilde{d}(x_{1})&=\underline{R_{A}}\widetilde{d}(x_{3})=\underline{R_{A}}\widetilde{d}(x_{4})=0.8\wedge 1\wedge 0.7=0.7;\\ \underline{R_{A}}\widetilde{d}(x_{2})&=\underline{R_{A}}\widetilde{d}(x_{5})=\underline{R_{A}}\widetilde{d}(x_{7})=\underline{R_{A}}\widetilde{d}(x_{9})=\underline{R_{A}}\widetilde{d}(x_{10})=0.1;\\ \underline{R_{A}}\widetilde{d}(x_{6})&=\underline{R_{A}}\widetilde{d}(x_{8})=0;\\ \overline{R_{A}}\widetilde{d}(x_{1})&=\overline{R_{A}}\widetilde{d}(x_{3})=\overline{R_{A}}\widetilde{d}(x_{4})=0.8\vee 1\vee 0.7=1;\\ \overline{R_{A}}\widetilde{d}(x_{2})&=\overline{R_{A}}\widetilde{d}(x_{5})=\overline{R_{A}}\widetilde{d}(x_{7})=\overline{R_{A}}\widetilde{d}(x_{9})=\overline{R_{A}}\widetilde{d}(x_{10})=0.4;\\ \overline{R_{A}}\widetilde{d}(x_{6})&=\overline{R_{A}}\widetilde{d}(x_{8})=0.3. \end{aligned} \end{aligned}$$

Then the lower and upper approximations of \(\widetilde{d}\) are as follows:

$$\begin{aligned} \begin{aligned}&\underline{R_{A}}\widetilde{d}=\{\frac{0.7}{x_{1}},\frac{0.1}{x_{2}},\frac{0.7}{x_{3}},\frac{0.7}{x_{4}},\frac{0.1}{x_{5}},\frac{0}{x_{6}},\frac{0.1}{x_{7}}, \frac{0}{x_{8}},\frac{0.1}{x_{9}},\frac{0.1}{x_{10}}\}; \\&\overline{R_{A}}\widetilde{d}=\{\frac{1}{x_{1}},\frac{0.4}{x_{2}},\frac{1}{x_{3}},\frac{1}{x_{4}},\frac{0.4}{x_{5}},\frac{0.3}{x_{6}},\frac{0.4}{x_{7}}, \frac{0.3}{x_{8}},\frac{0.4}{x_{9}},\frac{0.4}{x_{10}}\}. \end{aligned} \end{aligned}$$
Table 1. A decision table with a fuzzy decision attribute.
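The computation of Eq. (1) can be sketched in a few lines of Python. The following minimal sketch recomputes the approximations of Example 1 from the partition \(U/R_{A}\) and the membership degrees of \(\widetilde{d}\); it is only an illustration of the definition, not an implementation from the paper.

```python
# Minimal sketch of Eq. (1): rough fuzzy lower/upper approximations
# computed from a partition U/R_A and the membership degrees of d~.
# The data below are taken from Example 1.

partition = [                      # U/R_A
    ["x1", "x3", "x4"],
    ["x2", "x5", "x7", "x9", "x10"],
    ["x6", "x8"],
]
d = {"x1": 0.8, "x2": 0.3, "x3": 1.0, "x4": 0.7, "x5": 0.1,
     "x6": 0.3, "x7": 0.2, "x8": 0.0, "x9": 0.4, "x10": 0.2}

lower, upper = {}, {}
for block in partition:            # every x in a block shares its class [x]_RA
    lo = min(d[y] for y in block)  # lower approximation: min of d~ over the class
    up = max(d[y] for y in block)  # upper approximation: max of d~ over the class
    for x in block:
        lower[x], upper[x] = lo, up

print(lower["x1"], upper["x1"])    # 0.7 1.0, as in Example 1
```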

3 Matrix Representation of the Lower and Upper Approximations in the FDS

In this section, we use a matrix-based method to represent the lower and upper approximations in the FDS. Firstly, the relation matrix in the FDS is introduced. Then two matrix operations are defined. Finally, it is shown that the lower and upper approximations in the FDS can be computed effectively by the matrix-based method.

Definition 3

[10] Let \(S=\langle U,C\cup D,V,f\rangle \) be a FDS, where \(U=\{x_{i}|i \in \{1,2,\ldots ,n\}\}\), \(A\subseteq C\). The relation matrix is defined as \(M^{A}=(m^{A}_{ij})_{n\times n}\), where

$$\begin{aligned} m^{A}_{ij}={\left\{ \begin{array}{ll} 1,&{} {x_{i}\in [x_{j}]_{R_{A}}}\\ 0,&{} otherwise \end{array}\right. } \end{aligned}$$
(2)

Proposition 1

[10] \(M^{A}=(m^{A}_{ij})_{n\times n}\) is a symmetric matrix, and \(m^{A}_{ii}=1(i=1,\ldots ,n)\).
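A minimal Python sketch of Definition 3, using a small hypothetical attribute-value table (the condition values of Table 1 are not repeated here): \(m^{A}_{ij}=1\) exactly when \(x_{i}\) and \(x_{j}\) take the same value on every attribute in A.

```python
import numpy as np

# Sketch of Definition 3: relation matrix M^A from an attribute-value table.
# The values below are hypothetical and only illustrate the construction.
values = np.array([   # rows: objects x1..x5, columns: attributes c1, c2
    [1, 0],
    [0, 1],
    [1, 0],
    [2, 1],
    [0, 1],
])

n = len(values)
M_A = np.array([[int(np.array_equal(values[i], values[j])) for j in range(n)]
                for i in range(n)])

# Proposition 1: M^A is symmetric with ones on the diagonal.
assert (M_A == M_A.T).all() and (np.diag(M_A) == 1).all()
print(M_A)
```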

Example 2

(Continuation of Example 1) Let \(A=\{c_{1},c_{2}\}\), \(B=\{c_{3},c_{4}\}\). Then the relation matrices \(M^{A}\) and \(M^{B}\) can be calculated as follows.

$$\begin{aligned} M^{A}= \left( \begin{array}{cccccccccc} 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ \end{array} \right) \quad M^{B}= \left( \begin{array}{cccccccccc} 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 1 \\ 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 1 \\ \end{array} \right) \end{aligned}$$

From Example 2, it is easy to see that \(m^{A}_{ii}=1\), \(m^{B}_{ii}=1\), \(M^{A}=(M^{A})^{T}\) and \(M^{B}=(M^{B})^{T}\), as stated in Proposition 1.

Definition 4

Let \(S=\langle U,C\cup D,V,f\rangle \) be a FDS. The corresponding relation matrices of \(A,B\subseteq C\) are \(M^{A}=(m^{A}_{ij})_{n\times n}\) and \(M^{B}=(m^{B}_{ij})_{n\times n}\), respectively. Then the dot operation between \(M^{A}\) and \(M^{B}\) is defined as follows.

$$\begin{aligned} M^{A}\bullet M^{B}=(m^{A}_{ij}\cdot m^{B}_{ij})_{n\times n}, \end{aligned}$$
(3)

where \(\cdot \) denotes the multiplication of the corresponding elements, i.e., \(\bullet \) is the element-wise (Hadamard) product of the two matrices.

Proposition 2

Let \(S=\langle U, C\cup D, V, f\rangle \) be a FDS. The corresponding relation matrices of \(A, B\subseteq C\) are \(M^{A}\) and \(M^{B}\), respectively. Then the relation matrix \(M^{A\cup B}\) of \(A\cup B\) is \(M^{A\cup B}=M^{A}\bullet M^{B}\).

Proof

If \(m^{A\cup B}_{ij}=1\), then by Definition 3, \(x_{i}\in [x_{j}]_{R_{A\cup B}}\), so \(x_{i}\in [x_{j}]_{R_{A}}\) and \(x_{i}\in [x_{j}]_{R_{B}}\). Hence \(m^{A}_{ij}=1\) and \(m^{B}_{ij}=1\), that is, \(m^{A\cup B}_{ij}=m^{A}_{ij}\cdot m^{B}_{ij}\). If \(m^{A\cup B}_{ij}=0\), then \(x_{i}\notin [x_{j}]_{R_{A\cup B}}\), that is, \(x_{i}\notin [x_{j}]_{R_{A}}\) or \(x_{i}\notin [x_{j}]_{R_{B}}\). Then \(m^{A}_{ij}=0\) or \(m^{B}_{ij}=0\), and again \(m^{A}_{ij}\cdot m^{B}_{ij}=m^{A\cup B}_{ij}\). The converse can be verified in the same way.
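Proposition 2 can also be checked numerically. The sketch below builds the relation matrices of two hypothetical single-attribute subsets and verifies that the relation matrix of their union equals the element-wise product; the data are placeholders, not taken from Table 1.

```python
import numpy as np

# Sketch of Proposition 2 on hypothetical data: the relation matrix of A ∪ B
# equals the element-wise (Hadamard) product of M^A and M^B.
def relation_matrix(values):
    n = len(values)
    return np.array([[int(np.array_equal(values[i], values[j]))
                      for j in range(n)] for i in range(n)])

vals_A = np.array([[1], [0], [1], [1]])      # hypothetical values on A
vals_B = np.array([[0], [0], [0], [1]])      # hypothetical values on B
vals_AB = np.hstack([vals_A, vals_B])        # values on A ∪ B

M_A, M_B, M_AB = map(relation_matrix, (vals_A, vals_B, vals_AB))
assert (M_AB == M_A * M_B).all()             # M^{A∪B} = M^A • M^B
```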

Example 3

(Continuation of Example 2) According to Definition 4, we can compute \(M^{A}\bullet M^{B}\) as follows.

$$\begin{aligned} M^{A}\bullet M^{B}= \left( \begin{array}{cccccccccc} 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ \end{array} \right) \bullet \left( \begin{array}{cccccccccc} 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 1 \\ 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 1 \\ \end{array} \right) = \left( \begin{array}{cccccccccc} 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 1 \\ \end{array} \right) \end{aligned}$$

It is easy to verify that \(M^{A\cup B}=M^{A}\bullet M^{B}\).

Definition 5

Let \(S=\langle U, C\cup D, V, f\rangle \) be a FDS, where \(U=\{x_{i}|i \in \{1,2,\ldots ,n\}\}\). \(\widetilde{d}\) is a fuzzy subset on D, \(\widetilde{d}(x)\) \((x\in U)\) is the degree of membership of x in \(\widetilde{d}\) and \(M^{A}=(m^{A}_{ij})_{n\times n}\) is the relation matrix of \(A\subseteq C\). We define the \(\otimes _{max}\) operation as follows.

$$\begin{aligned} (M^{A}\otimes _{max}\widetilde{d})(i)=max(m^{A}_{i1}\cdot \widetilde{d} (x_{1}) ,m^{A}_{i2}\cdot \widetilde{d} (x_{2}),\ldots ,m^{A}_{in}\cdot \widetilde{d} (x_{n}))\quad (i=1,2,\ldots ,n) \end{aligned}$$
(4)

where the max operation takes the maximum of the n values.

Theorem 1

Let \(S=\langle U, C\cup D, V, f\rangle \) be a FDS, where \(U=\{x_{i}|i \in \{1,2,\ldots ,n\}\}\). \(\widetilde{d}\) is a fuzzy subset on D and \(M^{A}\) is the relation matrix of \(A\subseteq C\). The upper and lower approximations of \(\widetilde{d}\) are calculated as follows.

$$\begin{aligned} \overline{R_{A}}\widetilde{d}=M^{A}\otimes _{max}\widetilde{d}; \end{aligned}$$
(5)
$$\begin{aligned} \underline{R_{A}}\widetilde{d}=\textit{l}-M^{A}\otimes _{max}\widetilde{d}^{c} \end{aligned}$$
(6)

where \(\textit{l}\) is the column vector whose elements are all 1 and \(\widetilde{d}^{c}\) is the complement of \(\widetilde{d}\), i.e., \(\widetilde{d}^{c}(x)=1-\widetilde{d}(x)\).

Proof

According to Definition 5, \((M^{A}\otimes _{max}\widetilde{d})(i)=max(m^{A}_{i1}\cdot \widetilde{d} (x_{1}) ,m^{A}_{i2}\cdot \widetilde{d} (x_{2}),\ldots ,m^{A}_{in}\cdot \widetilde{d} (x_{n}))\). If \(m^{A}_{ij}=1\), then \(x_{j}\in [x_{i}]_{R_{A}}\) and \(m^{A}_{ij}\cdot \widetilde{d}(x_{j})=\widetilde{d}(x_{j})\); otherwise \(m^{A}_{ij}\cdot \widetilde{d}(x_{j})=0\). Therefore, according to Definition 2, \((M^{A}\otimes _{max}\widetilde{d})(i)=\overline{R_{A}}\widetilde{d}(x_{i})\). In addition, according to \(\underline{R_{A}}\widetilde{d}=\sim \overline{R_{A}}\widetilde{d^{c}}\) and Eq. 5, it is clear that \(\underline{R_{A}}\widetilde{d}(x_{i})=(\textit{l}-M^{A}\otimes _{max}\widetilde{d}^{c})(i)\).
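The operations of Definition 5 and Theorem 1 translate directly into matrix code. The following sketch builds \(M^{A}\) from the partition of Example 1 and reproduces the approximations of Example 4; numpy is used only for convenience, and this is an illustration rather than the paper's implementation.

```python
import numpy as np

# Sketch of Definition 5 and Theorem 1, checked against Examples 1 and 4.
U = ["x1", "x2", "x3", "x4", "x5", "x6", "x7", "x8", "x9", "x10"]
partition = [["x1", "x3", "x4"], ["x2", "x5", "x7", "x9", "x10"], ["x6", "x8"]]
d = np.array([0.8, 0.3, 1.0, 0.7, 0.1, 0.3, 0.2, 0.0, 0.4, 0.2])

# Relation matrix M^A: m_ij = 1 iff x_i and x_j lie in the same block.
block_of = {x: k for k, blk in enumerate(partition) for x in blk}
M_A = np.array([[int(block_of[xi] == block_of[xj]) for xj in U] for xi in U])

def otimes_max(M, v):
    """(M ⊗_max v)(i) = max_j m_ij * v_j, i.e., Eq. (4)."""
    return (M * v).max(axis=1)

upper = otimes_max(M_A, d)          # Eq. (5)
lower = 1 - otimes_max(M_A, 1 - d)  # Eq. (6), with d^c = 1 - d

# Up to floating-point rounding, this matches Example 4:
# upper = (1, 0.4, 1, 1, 0.4, 0.3, 0.4, 0.3, 0.4, 0.4)
# lower = (0.7, 0.1, 0.7, 0.7, 0.1, 0, 0.1, 0, 0.1, 0.1)
print(upper)
print(lower)
```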

Example 4

Consider the FDS \(S=\langle U,C\cup D,V,f\rangle \) shown in Table 1. Let \(A=\{c_{1},c_{2}\}\) and \(\widetilde{d}=\{\frac{0.8}{x_{1}},\frac{0.3}{x_{2}},\frac{1}{x_{3}},\frac{0.7}{x_{4}}, \frac{0.1}{x_{5}},\frac{0.3}{x_{6}},\frac{0.2}{x_{7}},\frac{0}{x_{8}}, \frac{0.4}{x_{9}},\frac{0.2}{x_{10}}\}\). Using the relation matrix \(M^{A}\) from Example 2, we obtain

$$\begin{aligned} \begin{aligned}&\overline{R_{A}}\widetilde{d}=M^{A}\otimes _{max}\widetilde{d}= \left( \begin{array}{cccccccccc} 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ \end{array} \right) \otimes _{max} \left( \begin{array}{c} 0.8 \\ 0.3 \\ 1 \\ 0.7 \\ 0.1 \\ 0.3 \\ 0.2 \\ 0 \\ 0.4 \\ 0.2 \\ \end{array} \right) = \left( \begin{array}{c} 1 \\ 0.4 \\ 1 \\ 1 \\ 0.4 \\ 0.3 \\ 0.4 \\ 0.3 \\ 0.4 \\ 0.4 \\ \end{array} \right) \\&\underline{R_{A}}\widetilde{d}=\textit{l}-M^{A}\otimes _{max}\widetilde{d}^{c}= \left( \begin{array}{c} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ \end{array} \right) - \left( \begin{array}{cccccccccc} 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ \end{array} \right) \otimes _{max} \left( \begin{array}{c} 0.2 \\ 0.7 \\ 0 \\ 0.3 \\ 0.9 \\ 0.7 \\ 0.8 \\ 1 \\ 0.6 \\ 0.8 \\ \end{array} \right) =\left( \begin{array}{c} 0.7 \\ 0.1 \\ 0.7 \\ 0.7 \\ 0.1 \\ 0 \\ 0.1 \\ 0 \\ 0.1 \\ 0.1 \\ \end{array} \right) \end{aligned} \end{aligned}$$

4 Dynamic Maintenance of Approximations in the FDS Under the Variation of Attributes and Objects

In a dynamic information system, the attribute set and the object set may change simultaneously over time. In this section, we discuss methods for incrementally updating approximations in the FDS when the attributes and objects change dynamically. From the above analysis, the relation matrix is the key to computing rough fuzzy approximations with the matrix-based method. If the changed relation matrix can be updated incrementally rather than reconstructed from scratch, the running time is reduced and the lower and upper approximations can be obtained directly. In the following, we discuss how to update the relation matrix incrementally when the attribute and object sets vary.

Let \(S^{t}=\langle U^{t},C^{t}\cup D^{t},V^{t},f^{t}\rangle \) be a FDS at time t and \(S^{t+1}=\langle U^{t+1}, C^{t+1}\cup D^{t+1}, V^{t+1}, f^{t+1}\rangle \) be the FDS at time \(t+1\), where \(U^{t+1}=U^{t}\cup \varDelta U\), \(C^{t+1}=C^{t}\cup \varDelta C\), \(D^{t+1}=D^{t}\cup \varDelta D\). To compute the relation matrices incrementally, we partition the system \(S^{t+1}\) into two subsystems: \(S^{\varDelta U}=\langle \varDelta U,C^{t+1}\cup \varDelta D,V^{\varDelta U},f^{\varDelta U}\rangle \) and \(S^{U^{t}}=\langle U^{t},C^{t+1}\cup D^{t},V^{U^{t}},f^{U^{t}}\rangle \). Then \(S^{U^{t}}\) is partitioned further into two subsystems: \(S^{t}=\langle U^{t},C^{t}\cup D^{t},V^{t},f^{t}\rangle \) and \(S^{\varDelta C}=\langle U^{t},\varDelta C\cup D^{t},V^{\varDelta C},f^{\varDelta C}\rangle \). Suppose \(|U^{t+1}|=n'\), \(|U^{t}|=n\), \(|\varDelta U|=n^{+}\), \(|C^{t+1}|=m'\), \(|C^{t}|=m\), \(|\varDelta C|=m^{+}\). Then \(n'=n+n^{+}\) and \(m'=m+m^{+}\).

Theorem 2

Let \(M^{C^{t+1}}=(^{U_{t+1}}m^{C^{t+1}}_{ij})_{n'\times n'}\) be the relation matrix of FDS \(S^{t+1}\). Then \( (M^{C^{t+1}})_{n'\times n'}=\left( \begin{array}{c|c} (M^{C^{t+1}}_{U_{t}})_{n\times n} &{} (M^{C^{t+1}}_{U_{t}, \varDelta U})_{n\times n^{+}} \\ \hline (M^{C^{t+1}}_{U_{t}, \varDelta U})^{T}_{n^{+}\times n} &{} (M^{C^{t+1}}_{\varDelta U})_{n^{+}\times n^{+}} \\ \end{array} \right) \), where \(M^{C^{t+1}}_{U_{t}}\) denotes the relation matrix of \(U^{t}\) under \(C^{t+1}\), \(M^{C^{t+1}}_{U_{t}, \varDelta U}\) denotes the relation matrix of \(U^{t}\) and \(\varDelta U\) under \(C^{t+1}\) and \(M^{C^{t+1}}_{\varDelta U}\) denotes the relation matrix of \(\varDelta U\) under \(C^{t+1}\).

Proof

According to Definition 3, it is easy to see that the relation matrix \(M^{C^{t+1}}\) can be divided into four parts. Each part can be obtained by Definition 3 directly.

To compute the relation matrix incrementally, we partition it into the four parts given in Theorem 2, where the first part \((M^{C^{t+1}}_{U_{t}})_{n\times n}\) can be computed incrementally.
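The block structure of Theorem 2 can be assembled directly. The sketch below uses small hypothetical blocks (with n = 3 and n+ = 2) only to illustrate the shapes involved.

```python
import numpy as np

# Sketch of Theorem 2: M^{C^{t+1}} assembled from its four blocks.
# The three blocks below are hypothetical placeholders.
M_Ut    = np.eye(3, dtype=int)           # relation matrix of U^t           (n  x n )
M_cross = np.zeros((3, 2), dtype=int)    # block between U^t and ΔU         (n  x n+)
M_dU    = np.eye(2, dtype=int)           # relation matrix of ΔU            (n+ x n+)

M_t1 = np.block([[M_Ut,      M_cross],
                 [M_cross.T, M_dU   ]])  # the partition of Theorem 2
assert (M_t1 == M_t1.T).all()            # still symmetric (Proposition 1)
print(M_t1.shape)                        # (5, 5)
```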

Theorem 3

Suppose \(M^{C^{t+1}}_{U_{t}}=(m^{C^{t+1}}_{ij})_{n\times n}\), \(M^{C^{t}}_{U_{t}}=(m^{C^{t}}_{ij})_{n\times n}\) and \(M^{\varDelta C}_{U_{t}}=(m^{\varDelta C}_{ij})_{n\times n}\) are the relation matrices of \(S^{U^{t}}\), \(S^{t}\) and \(S^{\varDelta C}\), respectively. The elements of the relation matrix \(M^{C^{t+1}}_{U_{t}}\) can be updated as follows.

(1) if \(m^{C^{t}}_{ij}=0\), then \(m^{C^{t+1}}_{ij}=m^{C^{t}}_{ij}\);

(2) if \(m^{C^{t}}_{ij}=1\) and \(m^{\varDelta C}_{ij}=1\), then \(m^{C^{t+1}}_{ij}=m^{C^{t}}_{ij}\);

(3) if \(m^{C^{t}}_{ij}=1\) and \(m^{\varDelta C}_{ij}=0\), then \(m^{C^{t+1}}_{ij}=0\).

Proof

It follows directly from Proposition 2.

According to Theorem 3, we can obtain the matrix \(M^{C^{t+1}}_{U_{t}}\) by updating only the elements covered by case (3), instead of recomputing the whole matrix.
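A minimal sketch of this updating strategy: only the entries equal to 1 at time t are examined, and an entry is set to 0 exactly in case (3). The helper name `update_relation_matrix` and the small matrices in the usage example are hypothetical.

```python
import numpy as np

# Sketch of Theorem 3: update M^{C^t}_{U_t} to M^{C^{t+1}}_{U_t}.
# Entries that are 0 at time t stay 0 (case (1)); only 1-entries are checked.
def update_relation_matrix(M_old, M_delta):
    M_new = M_old.copy()
    for i, j in np.argwhere(M_old == 1):   # positions that need to be checked
        if M_delta[i, j] == 0:             # case (3): ΔC separates x_i and x_j
            M_new[i, j] = 0
    return M_new

# Hypothetical usage: ΔC splits two previously indiscernible objects.
M_old   = np.array([[1, 1], [1, 1]])       # x1 and x2 equivalent under C^t
M_delta = np.array([[1, 0], [0, 1]])       # ΔC separates x1 from x2
print(update_relation_matrix(M_old, M_delta))   # [[1 0] [0 1]]
```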

Theorem 4

Given the relation matrices \(M^{C^{t+1}}_{\varDelta U}=(m^{\varDelta U}_{ij})_{n^{+}\times n^{+}}\), \(M^{C^{t+1}}_{U_{t}}=(m^{C^{t+1}}_{ij})_{n\times n}\). Then \(M^{C^{t+1}}_{U_{t}, \varDelta U}=(m^{U_{t}, \varDelta U}_{ij})_{n\times n^{+}}\) can be updated as follows.

(1) if \(x_{i}\) and \(x_{j}\) are equivalent under \(C^{t+1}\), where \(x_{i}\in U^{t}\), \(i\in \{1,2,\ldots ,n\}\), \(x_{j}\in \varDelta U\), \(j\in \{n+1,n+2,\ldots ,n+n^{+}\}\), then \(m^{U_{t}, \varDelta U}_{[i:]}=m^{\varDelta U}_{[(j-n):]}\) and \(m^{U_{t}, \varDelta U}_{[:(j-n)]}=m^{C^{t+1}}_{[:i]}\). In addition, if \(x_{i}\) and \(x_{i'}\) are equivalent under \(C^{t+1}\) and \(x_{j}\) and \(x_{j'}\) are equivalent under \(\varDelta C\), where \(i'\in \{1,2,\ldots ,n\}\), \(j'\in \{n+1,n+2,\ldots ,n+n^{+}\}\), then \(m^{U_{t}, \varDelta U}_{[i':]}=m^{\varDelta U}_{[(j-n):]}\) and \(m^{U_{t}, \varDelta U}_{[:(j'-n)]}=m^{C^{t+1}}_{[:i]}\);

(2) if \(x_{i}\) and \(x_{j}\) do not satisfy the above conditions, then \(m^{U_{t}, \varDelta U}_{i(j-n)}=0\). In addition, if \(x_{i}\) and \(x_{i'}\), and \(x_{j}\) and \(x_{j'}\), satisfy the above conditions, then \(m^{U_{t}, \varDelta U}_{i'(j-n)}=0\) and \(m^{U_{t}, \varDelta U}_{[:(j'-n)]}=m^{U_{t}, \varDelta U}_{[:(j-n)]}\),

where \(m^{U_{t}, \varDelta U}_{[i:]}\) denotes the ith row of the matrix \(M^{C^{t+1}}_{U_{t}, \varDelta U}\), \(m^{U_{t}, \varDelta U}_{[:j]}\) denotes the jth column of \(M^{C^{t+1}}_{U_{t}, \varDelta U}\), and the other row/column symbols are defined analogously.

Proof

If \(x_{i}\) and \(x_{j}\) are equivalent, then according to Theorem 2, \(m^{U_{t}, \varDelta U}_{[i:]}=m^{\varDelta U}_{[(j-n):]}\). Besides, according to Proposition 1, we have \(m^{U_{t}, \varDelta U}_{[:(j-n)]}=m^{C^{t+1}}_{[:i]}\). In addition, if \(x_{i}\) and \(x_{i'}\), and \(x_{j}\) and \(x_{j'}\), are in the same equivalence classes, then \(m^{U_{t}, \varDelta U}_{[i':]}=m^{U_{t}, \varDelta U}_{[i:]}=m^{\varDelta U}_{[(j-n):]}\) and \(m^{U_{t}, \varDelta U}_{[:(j'-n)]}=m^{U_{t}, \varDelta U}_{[:(j-n)]}=m^{C^{t+1}}_{[:i]} \) by Proposition 1 and the above results. The proof of case (2) is analogous.
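The column-copying part of Theorem 4 can be sketched as follows. The helper `cross_block` and the mapping `match`, which records for each new object an equivalent old object (or None), are hypothetical names introduced here; the reuse of rows of \(M^{C^{t+1}}_{\varDelta U}\) described in the theorem is omitted for brevity.

```python
import numpy as np

# Sketch of case (1) of Theorem 4: build M^{C^{t+1}}_{U_t, ΔU} by reusing
# columns of M^{C^{t+1}}_{U_t} instead of comparing attribute values again.
def cross_block(M_Ut_t1, match):
    n = M_Ut_t1.shape[0]
    cross = np.zeros((n, len(match)), dtype=int)
    for k, i in enumerate(match):
        if i is not None:                 # case (1): copy an existing column
            cross[:, k] = M_Ut_t1[:, i]
        # case (2): no equivalent old object -> the column stays all zero
    return cross

# Hypothetical usage: 3 old objects, 2 new objects; the first new object is
# equivalent to old object x1 (index 0), the second matches no old object.
M_Ut_t1 = np.array([[1, 1, 0],
                    [1, 1, 0],
                    [0, 0, 1]])
print(cross_block(M_Ut_t1, match=[0, None]))
# [[1 0]
#  [1 0]
#  [0 0]]
```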

5 An Illustrative Example

To show how the approximations are computed incrementally when objects and attributes change simultaneously, we give an example to illustrate the proposed method. Let \(S^{t}=\langle U^{t},C^{t}\cup D^{t},V^{t},f^{t}\rangle \) be a FDS at time t, where \(U^{t}=\{x_{i}|i \in \{1,2,\ldots ,10\}\}\) and \(C^{t}=\{c_{i}|1\le i \le 4\}\) (see Table 1). At time \(t+1\), the attributes \(\{c_{5},c_{6}\}\) and the objects \(\{x_{11},x_{12},x_{13}\}\) are added, where \(c_{5}\) denotes runny nose, \(c_{6}\) denotes cough and \(V_{c_{5}}=V_{c_{6}}=\{No,Yes\}=\{0,1\}\). Then \(\varDelta C=\{c_{5},c_{6}\}\), \(\varDelta U=\{x_{11},x_{12},x_{13}\}\), \(U^{t+1}=U^{t}\cup \varDelta U\) and \(C^{t+1}=C^{t}\cup \varDelta C\) (see Table 2).

Table 2. A decision table with a fuzzy decision attribute at time \(t+1\).

Firstly, we compute the relation matrix \(M^{\varDelta C}_{U_{t}}\) according to Definition 3; the relation matrix \(M^{C^{t}}_{U_{t}}\) is the result of Example 3.

$$\begin{aligned} M^{\varDelta C}_{U_{t}}= \left( \begin{array}{cccccccccc} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ \underline{\mathbf{1 }} &{} 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ \underline{\mathbf{0 }} &{} 0 &{} \underline{\mathbf{0 }} &{} 1 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} \underline{\mathbf{1 }} &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} \underline{\mathbf{1 }} &{} 0 &{} 0 &{}\underline{\mathbf{ 1 }} &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} \underline{\mathbf{0 }} &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} \underline{\mathbf{1 }} &{} 1 \\ \end{array} \right) M^{C^{t}}_{U_{t}}= \left( \begin{array}{cccccccccc} 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ \underline{\mathbf{1 }} &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \underline{\mathbf{1 }} &{} 0 &{} \underline{\mathbf{1 }} &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{}\underline{\mathbf{1 }} &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} \underline{\mathbf{1 }} &{} 0 &{} 0 &{} \underline{\mathbf{1 }} &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{}\underline{\mathbf{ 1 }} &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} \underline{\mathbf{1 }} &{} 1 \\ \end{array} \right) \end{aligned}$$

Secondly, we compute the relation matrix \(M^{C^{t+1}}_{U_{t}}\). According to Proposition 1, we only need to compute the elements below the principal diagonal of \(M^{C^{t+1}}_{U_{t}}\). By Theorem 3, it suffices to check whether the elements equal to "1" below the principal diagonal of \(M^{C^{t}}_{U_{t}}\) change, i.e., the eight underlined elements above; such an element changes to 0 exactly when the corresponding element of \(M^{\varDelta C}_{U_{t}}\) is 0. In this way we obtain the matrix \(M^{C^{t+1}}_{U_{t}}\).
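For reference, the following sketch recomputes \(M^{C^{t+1}}_{U_{t}}\) from the two matrices displayed above using the rules of Theorem 3; it is an illustration added here, with the matrix data copied from this example.

```python
import numpy as np

# Reproduces the second step of the example: M^{C^{t+1}}_{U_t} obtained from
# M^{C^t}_{U_t} (Example 3) and M^{ΔC}_{U_t} by the rules of Theorem 3.
M_Ct = np.array([
    [1,0,1,1,0,0,0,0,0,0],
    [0,1,0,0,1,0,1,0,0,0],
    [1,0,1,1,0,0,0,0,0,0],
    [1,0,1,1,0,0,0,0,0,0],
    [0,1,0,0,1,0,1,0,0,0],
    [0,0,0,0,0,1,0,1,0,0],
    [0,1,0,0,1,0,1,0,0,0],
    [0,0,0,0,0,1,0,1,0,0],
    [0,0,0,0,0,0,0,0,1,1],
    [0,0,0,0,0,0,0,0,1,1],
])
M_dC = np.array([
    [1,0,1,0,0,1,0,0,0,0],
    [0,1,0,0,1,0,1,0,1,1],
    [1,0,1,0,0,1,0,0,0,0],
    [0,0,0,1,0,0,0,1,0,0],
    [0,1,0,0,1,0,1,0,1,1],
    [1,0,1,0,0,1,0,0,0,0],
    [0,1,0,0,1,0,1,0,1,1],
    [0,0,0,1,0,0,0,1,0,0],
    [0,1,0,0,1,0,1,0,1,1],
    [0,1,0,0,1,0,1,0,1,1],
])

M_Ct1 = M_Ct.copy()
for i, j in np.argwhere(np.tril(M_Ct, k=-1) == 1):   # the checked 1-entries below the diagonal
    if M_dC[i, j] == 0:                              # case (3) of Theorem 3
        M_Ct1[i, j] = M_Ct1[j, i] = 0                # keep the matrix symmetric
print(M_Ct1)
```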

Thirdly, we compute the relation matrix \(M^{C^{t+1}}_{\varDelta U}\) according to Definition 3.

Then according to Theorem 4,

(1) because \(x_{2}\) and \(x_{11}\) are equivalent, \(m^{U_{t}, \varDelta U}_{[2:]}=m^{\varDelta U}_{[1:]}\) and \(m^{U_{t}, \varDelta U}_{[:1]}=m^{C^{t+1}}_{[:2]}\);

(2) since \(m^{C^{t+1}}_{25}=1\) and \(m^{C^{t+1}}_{27}=1\), we have \(m^{U_{t}, \varDelta U}_{[5:]}=m^{\varDelta U}_{[1:]}\) and \(m^{U_{t}, \varDelta U}_{[7:]}=m^{\varDelta U}_{[1:]}\); since \(m^{\varDelta U}_{11,12}=1\), i.e., \(x_{11}\) and \(x_{12}\) are equivalent, \(m^{U_{t}, \varDelta U}_{[:2]}=m^{C^{t+1}}_{[:2]}\);

(3) because \(x_{2}\) and \(x_{13}\) are not equivalent, \(m^{U_{t}, \varDelta U}_{2,3}=0\). The other elements are obtained in the same way.

In this way we obtain the relation matrix \(M^{C^{t+1}}_{U_{t}, \varDelta U}\), in which the first and second columns are the same as the second column of \(M^{C^{t+1}}_{U_{t}}\), and the second, fifth and seventh rows are the same as the first row of \(M^{C^{t+1}}_{\varDelta U}\). Obviously, this may reduce the computing time compared with reconstructing the matrix from scratch.

Lastly, according to Theorems 1 and 2, we obtain the lower and upper approximations of \(\widetilde{d}^{t+1}\):

$$\begin{aligned} \begin{aligned}&\underline{R_{C_{t+1}}}\widetilde{d}^{t+1}=\{\frac{0.8}{x_{1}},\frac{0.1}{x_{2}},\frac{0.8}{x_{3}},\frac{0.7}{x_{4}},\frac{0.1}{x_{5}},\frac{0.3}{x_{6}},\frac{0.1}{x_{7}}, \frac{0}{x_{8}},\frac{0.2}{x_{9}},\frac{0.2}{x_{10}} ,\frac{0.1}{x_{11}},\frac{0.1}{x_{12}},\frac{0.9}{x_{13}}\}\\&\overline{R_{C_{t+1}}}\widetilde{d}^{t+1}=\{\frac{1}{x_{1}},\frac{0.4}{x_{2}},\frac{1}{x_{3}},\frac{0.7}{x_{4}},\frac{0.4}{x_{5}},\frac{0.3}{x_{6}},\frac{0.4}{x_{7}}, \frac{0}{x_{8}},\frac{0.4}{x_{9}},\frac{0.4}{x_{10}} ,\frac{0.4}{x_{11}},\frac{0.4}{x_{12}},\frac{0.9}{x_{13}}\} \end{aligned} \end{aligned}$$

6 Conclusions

In an FDS, the attribute set and the object set may change simultaneously. How to compute the lower and upper rough fuzzy approximations effectively in this case is a crucial problem. In this paper, we presented a matrix-based incremental method for updating rough fuzzy approximations and gave an example to show the effectiveness of the approach. In future work, we will develop and evaluate algorithms based on the proposed method and extend it to handle the updating of approximations when the attributes, objects and attribute values of an FDS vary simultaneously over time. A parallel strategy to improve the algorithm will also be taken into account.