1 Introduction

Rough set theory was first proposed in the early 1980s by Pawlak (1982). It is a mathematical approach to the analysis of uncertain and vague data, and it plays an important role in many fields of data mining and knowledge discovery. Its range of application is also very broad, covering artificial intelligence, machine learning, decision analysis, decision support, expert systems and so on, and it has proved to be a very effective mathematical tool.

Rough set theory is an extension of set theory for the study of intelligent systems characterized by insufficient and incomplete information. By means of the lower and upper approximations used in rough set theory, knowledge hidden in information systems may be unraveled and expressed in the form of decision rules (Gu et al. 2007; Kryszkiewicz 1999). In Pawlak rough sets, it is assumed that the classification of an object by an equivalence relation is completely certain (Pawlak 1982; Gu et al. 2007): an object either belongs to a class or it does not, and no level of confidence can be attached to its classification.

However, in practice, admitting some level of uncertainty in the classification process may lead to a deeper understanding and a better utilization of the properties of the data being analyzed. To deal with uncertainty in such cases, the variable precision rough set model was proposed by Ziarko (1993). In this model, a threshold denoting the admissible level of uncertainty is introduced to loosen the strict definitions of the lower and upper approximations. The theory thus differs from Pawlak rough set theory: in variable precision rough sets, an object is classified by relation classes with a level of confidence, or fault tolerance, in its correct classification, which helps us to discover related knowledge from seemingly unrelated data.

The extension of Pawlak rough set theory is an important direction in the study of rough sets. In recent years, many scholars have devoted themselves to this direction and have obtained many outstanding results (Gu et al. 2007; Ziarko 1993; Inuiguchi 2004a, b; Yao and Lin 1996; Yao 1998, 2003, 2008; Yao and Wong 1996, 1992). Many results on variable precision rough sets have been obtained since the model was proposed by Ziarko. Certainly, the variable precision rough set model is viewed as an extension of the Pawlak rough set model; the extension not only enriches rough set theory, but also broadens the scope of its application and improves its effectiveness in practice. At present, most studies focus on variable precision fuzzy rough sets (Mieszkowicz-Rolka and Rolka 2004a, b, c, 2005).

In real life, evaluation or inference is usually done by human beings. For example, in disease diagnosis, one disease may present many symptoms while, conversely, the same symptom may be shared by a variety of diseases, so a doctor (or a decision-maker) often finds it difficult to judge whether a person suffers from a certain disease. Fortunately, the rough set model over two universes can be applied in such cases.

The first study of rough sets over two universes was done in 1995; deeper and more systematic research has been carried out recently (Pei and Xu 2004; Shu and He 2007; Wang and Wang 2008; Gong and Sun 2008). However, rough set theory over two universes lacks flexibility in solving uncertainty problems such as disease diagnosis; the variable precision rough set model over two universes can overcome this shortcoming. In this paper, we mainly discuss the variable precision rough set model over two universes and its properties, building on rough sets over two universes. It is not only a direct extension of rough sets over two universes but may also be viewed as an extension of Pawlak rough sets. To present the variable precision rough set model over two universes, we first introduce the concepts of rough sets over two universes and illustrate the related concepts with corresponding examples. Meanwhile, the reverse approximation operators are introduced for the first time, and the properties of both the approximation operators and the reverse approximation operators are discussed. Afterwards, we focus on the variable precision rough set model over two universes and its properties. Furthermore, approximation operators with two parameters are proposed and the related conclusions are studied.

This paper is organized as follows. In Sect. 2, we review the basic concepts of Pawlak rough sets, information systems and decision information systems, and recall variable precision rough sets on a single universe defined via the inclusion degree. In Sect. 3, we mainly discuss the variable precision rough set model over two universes and the properties of the β-lower approximation and the β-upper approximation; in addition, the reverse approximation operators and the approximation operators with two parameters are proposed and their properties are analyzed. An illustrative example is discussed in Sect. 4. Finally, we conclude the paper in Sect. 5.

2 Preliminaries

For completeness and clarity, we introduce some basic knowledge and notions in this section. For convenience, the variable precision rough set model is abbreviated as VPRS-model in places.

2.1 Pawlak rough sets

Let U be a finite and nonempty set called the universe, and let R be an equivalence relation on U, i.e., R is reflexive, symmetric and transitive. The pair (U, R) is said to be a Pawlak approximation space. The equivalence relation R partitions U into disjoint subsets called equivalence classes. If two elements \(x,y\,\in\,{U}\) belong to the same equivalence class, we say that x and y are indiscernible. Given an arbitrary set \(X\,\subseteq\,{U},\) it may be impossible to describe X precisely using the equivalence classes of R. In this case, one may characterize X by a pair of lower and upper approximations (Pawlak 1982; Gu et al. 2007):

$$ \underline{apr}_{R}{(X)}=\{x\in{U}:[x]_{R}\subseteq{X}\} $$
(1)
$$ \overline{apr}_{R}{(X)}=\{x\in{U}:[x]_{R}\cap{X}\neq{\O}\} $$
(2)

where \([x]_{R}=\{y\,\in\,{U}:(x,y)\,\in\,{R}\}\) is the R equivalence class containing x. The pair \((\underline{apr}_{R}{(X)},\overline{apr}_{R}{(X)})\) is called the Pawlak rough sets of X with respect to (U, R).
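To make Eqs. (1) and (2) concrete, the following Python sketch computes both approximations from an explicitly given partition; the universe, partition and target set are illustrative examples, not data from the paper.

```python
# A minimal sketch of Eqs. (1)-(2); the partition U/R and the set X
# below are hypothetical illustration data.

def lower_approx(partition, X):
    """Union of the equivalence classes entirely contained in X."""
    return {x for c in partition if c <= X for x in c}

def upper_approx(partition, X):
    """Union of the equivalence classes that intersect X."""
    return {x for c in partition if c & X for x in c}

U_over_R = [{'x1', 'x2'}, {'x3'}, {'x4', 'x5'}]
X = {'x1', 'x2', 'x4'}
print(lower_approx(U_over_R, X))   # {'x1', 'x2'}
print(upper_approx(U_over_R, X))   # {'x1', 'x2', 'x4', 'x5'}
```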

2.2 Information system and decision information system

An information system is a pair (U, A), where U is a finite universe and A is a finite and nonempty set of attributes, such that \(a:U\,\rightarrow\,{V_{a}}\) for every \(a\,\in\,{A},\) i.e., \(a(x)\,\in\,{V_{a}},\) where \(V_{a}\) is the domain of attribute a. Each subset of attributes \(B\,\subseteq\,{A}\) determines an indiscernibility relation as follows:

$$ R_{B}=\{(x,y)\,\in\,{U\times{U}:a(x)=a(y),\,\forall\, {a\,\in\,{B}}}\} $$
(3)

Obviously, \(R_{B}\) partitions U into a family of disjoint subsets \(U/R_{B}\) called a quotient set of U:

$$ U/R_{B}=\{[x]_{B}:x\,\in\,{U}\} $$
(4)

Let \(X\,\subseteq\,{U}\) and \(B\,\subseteq\,{A};\) one can characterize X by a pair of lower and upper approximations with respect to the knowledge derived from the attribute set B (Gu et al. 2007; Ziarko 1993):

$$\underline{apr}_{B}{(X)}=\{x\,\in\,{U}:[x]_{B}\,\subseteq\,{X}\} =\bigcup\{[x]_{B}:[x]_{B}\,\subseteq\,{X}\} $$
(5)
$$ \overline{apr}_{B}{(X)}=\{x\,\in\,{U}:[x]_{B}\,\cap\,{X}\,\neq\,{\O}\}=\bigcup\{[x]_{B}:[x]_{B}\cap{X}\,\neq\,{\O}\} $$
(6)

The lower approximation \(\underline{apr}_{B}(X)\) is the set of objects that belong to X with certainty, while the upper approximation \(\overline{apr}_{B}(X)\) is the set of objects that possibly belong to X. The pair \((\underline{apr}_{B}{(X)},\,\overline{apr}_{B}{(X)})\) is referred to as the rough sets of X with respect to \((U,R_{B}).\)

An information system \(S=(U,A)\) is referred to as a decision information system if \(A=C\,\cup\,\{d\}\) and \(C\,\cap\,\{d\}={\O},\) where C is the conditional attribute set and {d} is the decision attribute set.
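The quotient set of Eq. (4) can be computed by grouping objects with equal attribute values on B, as in the following sketch; the attribute table is a made-up example, not data from the paper.

```python
# Grouping objects by their values on B, per Eq. (3); toy table.
from collections import defaultdict

table = {'x1': {'a': 1, 'b': 0},
         'x2': {'a': 1, 'b': 0},
         'x3': {'a': 0, 'b': 1}}
B = ('a', 'b')

classes = defaultdict(set)
for x, vals in table.items():
    # objects with identical value tuples on B are indiscernible
    classes[tuple(vals[a] for a in B)].add(x)

print(list(classes.values()))   # [{'x1', 'x2'}, {'x3'}]
```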

2.3 VPRS-model on a universe

Usually, questions concern not the inclusion relation itself but the degree to which one set is included in another. So we introduce the definition of inclusion degree as follows (Mieszkowicz-Rolka and Rolka 2005; Zhang and Leung 1996):

Definition 2.1

Let U be a finite universe. For all \(X,Y\,\subseteq\,{U},\) a function D(Y/X) is called the inclusion degree of X in Y if it satisfies the following conditions:

  (C1) \(0\leq{D(Y/X)}\leq1;\)

  (C2) if \(X\,\subseteq\,{Y},\) then \(D(Y/X)=1;\)

  (C3) if \(X\,\subseteq\,{Y}\,\subseteq\,{Z},\) then \(D(X/Z)\leq{D(X/Y)}.\)

From the definition of inclusion degree, we know that its expression is not unique. In practice, it is usually defined as follows:

$$ D(Y/X)={\frac{|Y\cap{X}|}{|X|}} $$
(7)

where the notation |·| denotes the cardinality of a set; in particular, for a finite set it is the number of elements the set contains.

In this paper, we will adopt the above expression to define the variable precision rough set model on a universe; the same expression will also be used in the definition of variable precision rough sets over two universes.
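Eq. (7) translates directly into code, as the following sketch shows; the sets in the usage line are illustrative.

```python
# Inclusion degree D(Y/X) = |Y ∩ X| / |X| of Eq. (7); X must be nonempty.

def inclusion_degree(Y, X):
    return len(Y & X) / len(X)

print(inclusion_degree({'y1', 'y2'}, {'y1', 'y3'}))   # 0.5
```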

Definition 2.2

Let \(S=(U,A)\) be an information system, P(U) be the power set of U, and D denote the inclusion degree on \(P(U).\) For \(B\,\subseteq\,{A}\) and \(\beta\,\in\,{(0.5,1]},\) the β-lower approximation and the β-upper approximation of any \(X\,\subseteq\,{U}\) in S are defined, respectively, by

$$ \underline{apr}_{B}^{\beta}(X)=\{x_{i}:D(X/[x_{i}]_{B}) \geq\beta\}=\bigcup\{[x_{i}]_{B}:D(X/[x_{i}]_{B})\geq\beta\} $$
(8)
$$ \overline{apr}_{B}^{\beta}(X)=\{x_{i}:D(X/[x_{i}]_{B})>1-\beta\} =\bigcup\{[x_{i}]_{B}:D(X/[x_{i}]_{B})>1-\beta\} $$
(9)

If \(S=(U,A)\) is a decision information system, namely \(A=C\,\cup\,\{d\}\) and \(B\,\subseteq\,{A},\) and suppose \(U/R_{d}=\{D_{1},D_{2},\ldots,D_{r}\},\) then for \(\beta\,\in\,{(0.5,1]}\) the β-lower approximation and the β-upper approximation have the following properties (Kryszkiewicz 1999):

  (A1) \(\underline{apr}_{B}^{\beta}(\sim{D_{j}})= \sim\overline{apr}_{B}^{\beta}(D_{j})\quad (j\leq{r});\)

  (A2) \(\underline {apr}_{B}^{\beta}(D_{i})\,\cap\,\underline {apr}_{B}^{\beta}(D_{j})={\O} \quad (i\neq{j});\)

  (A3) \(\underline{apr}_{B}^{\beta}(D_{j})\,\subseteq\,\overline{apr}_{B}^{\beta}(D_{j}) \quad (j\leq{r});\)

  (A4) \(\overline{apr}_{B}^{\beta}(D_{i})\,\cap\, \overline{apr}_{B}^{\beta}(D_{j})={\O} \quad (i\neq{j})\) (does not necessarily hold),

where \(\sim{D_{j}}=\bigcup(U/R_{d}-\{D_{j}\})\) denotes the complement of \(D_{j}\) in \(U/R_{d}.\)
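To make Definition 2.2 concrete, the following Python sketch computes the β-approximations of Eqs. (8) and (9) from an explicitly given partition; the equivalence classes, target set and threshold are illustrative, not data from the paper.

```python
# beta-lower and beta-upper approximations of Eqs. (8)-(9); toy data.

def d(Y, X):
    return len(Y & X) / len(X)

def beta_lower(partition, X, beta):
    return {x for c in partition if d(X, c) >= beta for x in c}

def beta_upper(partition, X, beta):
    return {x for c in partition if d(X, c) > 1 - beta for x in c}

partition = [{'x1', 'x2', 'x3'}, {'x4', 'x5'}]
X = {'x1', 'x2', 'x4'}
print(beta_lower(partition, X, 0.6))   # {'x1','x2','x3'}: degree 2/3 >= 0.6
print(beta_upper(partition, X, 0.6))   # all five objects: degrees 2/3, 1/2 > 0.4
```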

3 VPRS-model over two universes and its properties

In Gong and Sun (2008), a probabilistic rough set model between different universes was proposed based on a probability measure; a probabilistic rough set model over two universes with two parameters was also given. Based on these ideas, we discuss the variable precision rough set model over two universes, which is defined by the inclusion degree. To introduce its definition, we first give the related definitions of the rough set model over two universes from a general point of view, and then study its properties in this section.

3.1 Rough set model over two universes

Definition 3.1

(Wang and Wang 2008) Let U, V be two finite and nonempty sets called double universes, and let R be an arbitrary binary relation on \(U\times{V}.\) We can define two mappings \(R_{U}:U\,\rightarrow\,{P(V)}\) and \(R_{V}:V\,\rightarrow\,{P(U)}:\)

$$ R_{U}(x)=\{y\,\in\,{V}:xRy, x\,\in\,{U}\} $$
(10)
$$ R_{V}(y)=\{x\,\in\,{U}:xRy, y\,\in\,{V}\} $$
(11)

where \(R_{U}(x)\) and \(R_{V}(y)\) denote the set of all R-related elements of x in V and the set of all R-related elements of y in U, respectively. \(R_{U}(x)\) is called the R relation class of x in V, and \(R_{V}(y)\) is called the R relation class of y in U. P(U) and P(V) denote the power sets of U and V, respectively.

From Definition 3.1, one can easily verify that \(R_{U}(x)\,\subseteq\,{V}\) and \(R_{V}(y)\,\subseteq\,{U}.\) In particular, if for every \(x\,\in\,{U}\) there exists \(y\,\in\,{V}\) such that \((x,y)\,\in\,{R},\) i.e., \(R_{U}(x)\neq{\O},\) we say that the relation R is serial. Conversely, if for every \(y\,\in\,{V}\) there exists \(x\,\in\,{U}\) such that \((x,y)\,\in\,{R},\) i.e., \(R_{V}(y)\,\neq\,{\O},\) we say that the relation R is reverse serial.

Additionally, one can prove that the above definition implies the following equivalences (Pei and Xu 2004):

  (E1) R is serial \(\Leftrightarrow\forall x\,\in\,{U}, R_{U}(x)\,\neq\,{\O}\Leftrightarrow R_{V}(V)=U;\)

  (E2) R is reverse serial \(\Leftrightarrow\forall y\,\in\,{V}, R_{V}(y)\,\neq\,{\O}\,\Leftrightarrow\, R_{U}(U)=V.\)
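The mappings of Definition 3.1 and the seriality conditions (E1)-(E2) are easy to compute when the relation is given as a set of pairs, as in the following sketch; the relation here is a toy example, not data from the paper.

```python
# R_U and R_V of Eqs. (10)-(11), plus the seriality tests (E1)-(E2).

U, V = {'x1', 'x2'}, {'y1', 'y2'}
R = {('x1', 'y1'), ('x1', 'y2'), ('x2', 'y2')}   # hypothetical relation

R_U = {x: {y for (a, y) in R if a == x} for x in U}
R_V = {y: {x for (x, b) in R if b == y} for y in V}

print(all(R_U[x] for x in U))   # True: every R_U(x) nonempty, so R is serial
print(all(R_V[y] for y in V))   # True: every R_V(y) nonempty, so reverse serial
```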

Definition 3.2

(Wang and Wang 2008) Let U, V be two universes and R be an arbitrary binary relation on \(U\times{V}.\) For any \(Y\,\subseteq\,{V},\) the lower approximation and upper approximation of Y are defined, respectively, by

$$ \underline{apr}_{R}(Y)=\{x\,\in\,{U}:R_{U}(x)\,\subseteq\,{Y}\} $$
(12)
$$ \overline{apr}_{R}(Y)=\{x\,\in\,{U}:R_{U}(x)\,\cap\,{Y}\,\neq\,{\O}\} $$
(13)

The lower approximation \(\underline{apr}_{R}(Y)\) denotes the set of objects from which the result or state set Y can be inferred with certainty, relying only on the given information; the upper approximation \(\overline{apr}_{R}(Y)\) denotes the set of objects from which Y can possibly be inferred through the given information.

In addition, if \(\underline{apr}_{R}(Y)=\overline{apr}_{R}(Y),\) we say that Y is a definable set of V over two universes; otherwise, we say that Y is a rough set of V over two universes. The triple (U, V, R) is called a generalized approximation space. The pair \((\underline{apr}_{R}(Y),\overline{apr}_{R}(Y))\) is called the generalized Pawlak rough sets over two universes of Y with respect to (U, V, R).

Obviously, if Y is a definable set of V over two universes, Y can be precisely expressed by the R relation classes \(R_{U}(x);\) conversely, if Y is a rough set of V over two universes, it cannot.

We illustrate this with the following example.

Example 1

Let U, V be two universes, where U denotes the symptom set and V denotes the disease set. Suppose \(U=\{x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\}\) and \(V=\{y_{1},y_{2},y_{3},y_{4}\},\) where each \(x_{i}(i=1,2,{\ldots},6)\) denotes one symptom and each \(y_{i}(i=1,2,3,4)\) stands for a disease. Let R be a binary relation on \(U\times{V};\) for \(x_{i}\,\in\,{U},\) if there exists \(y_{i}\,\in\,{V}\) with \((x_{i},y_{i})\,\in\,{R},\) the relation can be understood as follows: if a person has a certain symptom \(x_{i},\) then he has possibly suffered from the disease \(y_{i}\).

By the definition of the R relation class, let \(Y_{1}=\{y_{2},y_{4}\}\) and \(Y_{2}=\{y_{1},y_{2},y_{4}\}\,\subseteq\,{V}.\) To illustrate the problem, suppose the R-related elements of each \(x_{i}\,\in\,{U} (i=1,2,{\ldots},6)\) in V are given as follows:

$$ \begin{aligned} R_{U}(x_{1})&=\{y_{1},y_{3}\};\quad R_{U}(x_{2})=\{y_{2},y_{4}\};\quad R_{U}(x_{3})=\{y_{1}\};\\ R_{U}(x_{4})&=\{y_{4}\};\quad R_{U}(x_{5})=\{y_{3}\};\quad R_{U}(x_{6})=\{y_{2}\}.\\ \end{aligned} $$

For \(\forall x_{i}\,\in\,{U} (i=1,2,{\ldots},6),\;R_{U}(x_{i})\neq{\O},\) namely, R is serial. In addition, \(R_{U}(x_{1})=\{y_{1},y_{3}\}\) means that if a person has the symptom \(x_{1},\) then he has possibly suffered from the disease \(y_{1}\) or \(y_{3}.\) The remaining relation classes can be interpreted similarly.

Hence, we have the following results by the definitions of the approximation operators:

$$ \begin{aligned} \underline{apr}_{R}(Y_{1})&=\{x_{i}\in{U}:R_{U}(x_{i}) \subseteq{Y_{1}}\}=\{x_{2},x_{4},x_{6}\};\\ \overline{apr}_{R}(Y_{1})&=\{x_{i}\in{U}:R_{U}(x_{i}) \cap{Y_{1}}\neq{\O}\}=\{x_{2},x_{4},x_{6}\};\\ \underline{apr}_{R}(Y_{2})&=\{x_{i}\in{U}:R_{U}(x_{i}) \subseteq{Y_{2}}\}=\{x_{2},x_{3},x_{4},x_{6}\};\\ \overline{apr}_{R}(Y_{2})&=\{x_{i}\in{U}:R_{U}(x_{i}) \cap{Y_{2}}\neq{\O}\}=\{x_{1},x_{2},x_{3},x_{4},x_{6}\}.\\ \end{aligned} $$

Certainly, we can easily see the following two facts:

  1. For \(Y_{1}=\{y_{2},y_{4}\}\,\subseteq\,{V},\) we have \(\underline{apr}_{R}(Y_{1})=\overline{apr}_{R}(Y_{1});\)

  2. For \(Y_{2}=\{y_{1},y_{2},y_{4}\}\,\subseteq\,{V},\) we have \(\underline{apr}_{R}(Y_{2})\neq\overline{apr}_{R}(Y_{2}).\)

Based on the above definitions and facts, one can see that \(Y_{1}\) is a definable set of V over two universes, and \(Y_{2}\) is a rough set of V over two universes. That is to say, \(Y_{1}\) can be precisely diagnosed through the symptoms \(x_{2},\;x_{4}\) and \(x_{6}\): one can determine with certainty that a person has suffered from the diseases \(y_{2}\) and \(y_{4}\) according to these symptoms. In contrast, \(Y_{2}\) cannot be precisely diagnosed: the symptoms \(x_{2},\;x_{3},\;x_{4}\) and \(x_{6}\) certainly indicate the diseases \(y_{1},\;y_{2}\) and \(y_{4},\) but only after adding the symptom \(x_{1}\) does one cover all the symptoms that possibly indicate these diseases.
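The computations of Example 1 can be checked mechanically; the following sketch reproduces the four approximations from the relation classes listed above.

```python
# A check of Example 1 (Definition 3.2) using the classes R_U(x_i) above.

R_U = {'x1': {'y1', 'y3'}, 'x2': {'y2', 'y4'}, 'x3': {'y1'},
       'x4': {'y4'},       'x5': {'y3'},       'x6': {'y2'}}

def lower(Y):
    return {x for x, rx in R_U.items() if rx <= Y}

def upper(Y):
    return {x for x, rx in R_U.items() if rx & Y}

Y1, Y2 = {'y2', 'y4'}, {'y1', 'y2', 'y4'}
print(lower(Y1) == upper(Y1))                 # True: Y1 is definable
print(sorted(lower(Y2)), sorted(upper(Y2)))
# ['x2', 'x3', 'x4', 'x6'] ['x1', 'x2', 'x3', 'x4', 'x6']: Y2 is rough
```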

According to Definition 3.1, since R is a binary relation on \(U\times{V},\) one can also define so-called reverse approximation operators for rough sets over two universes. The specific definition is given as follows.

Definition 3.3

Let U, V be two universes and R be an arbitrary binary relation on \(U\times{V}.\) For any \(X\,\subseteq\,{U},\) the reverse lower approximation and reverse upper approximation of X are defined, respectively, by

$$ \underline{apr}_{R^{-1}}(X)=\{y\,\in\,{V}:R_{V}(y)\,\subseteq\,{X}\} $$
(14)
$$ \overline{apr}_{R^{-1}}(X)=\{y\,\in\,{V}:R_{V}(y)\,\cap\,{X}\neq{\O}\} $$
(15)

By Definition 3.3, the reverse lower approximation \(\underline{apr}_{R^{-1}}(X)\) denotes the set of objects that certainly exhibit some result or state according to the inference made from the information set X, while the reverse upper approximation \(\overline{apr}_{R^{-1}}(X)\) denotes the set of objects that possibly exhibit some result or state according to that inference.

Similarly, if \(\underline{apr}_{R^{-1}}(X)=\overline{apr}_{R^{-1}}(X),\) the set X is called a definable set of U over two universes; otherwise, X is called a rough set of U over two universes. The pair \((\underline {apr}_{R^{-1}}(X),\,\overline{apr}_{R^{-1}}(X))\) is called the generalized reverse Pawlak rough sets over two universes with respect to (U, V, R).

Furthermore, if X is a definable set of U over two universes, X can be precisely expressed by the R relation classes \(R_{V}(y);\) conversely, if X is a rough set of U over two universes, it cannot.

Continuing Example 1, the R relation classes of each \(y_{i}\) in U are given as follows:

$$ \begin{aligned} R_{V}(y_{1})&=\{x_{1},x_{3}\};\quad R_{V}(y_{2})=\{x_{2},x_{6}\};\\ R_{V}(y_{3})&=\{x_{1},x_{5}\};\quad R_{V}(y_{4})=\{x_{2},x_{4}\}.\\ \end{aligned} $$

Therefore, for \(X_{1}=\{x_{1},x_{2},x_{6}\},\;X_{2}=U,\) the reverse lower approximation and reverse upper approximation of the sets \(X_{1}\) and \(X_{2}\) are calculated as follows, respectively.

$$ \begin{aligned} \underline{apr}_{R^{-1}}(X_{1})&=\{y_{i}\in{V}:R_{V}(y_{i}) \subseteq{X_{1}}\}=\{y_{2}\};\\ \overline{apr}_{R^{-1}}(X_{1})&=\{y_{i}\in{V}:R_{V}(y_{i}) \cap{X_{1}}\neq{\O}\}=V;\\ \underline{apr}_{R^{-1}}(X_{2})&=\{y_{i}\in{V}:R_{V}(y_{i}) \subseteq{X_{2}}\}=V;\\ \overline{apr}_{R^{-1}}(X_{2})&=\{y_{i}\in{V}:R_{V}(y_{i}) \cap{X_{2}}\neq{\O}\}=V;\\ \end{aligned} $$

The results show that the set \(X_{1}\) is a rough set of U over two universes, while the set \(X_{2}\) is a definable set of U over two universes. That is to say, according to the symptom set \(X_{1},\) one can affirm that the person has suffered from the disease \(y_{2};\) meanwhile, one can say that the person has possibly suffered from one or several of the diseases \(y_{1},\;y_{2},\;y_{3}\) and \(y_{4}.\) For the symptom set \(X_{2},\) a similar interpretation can be made.
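The reverse approximations above can be verified in the same way; the following sketch recomputes them from the classes \(R_{V}(y_{j})\) just listed.

```python
# A check of the reverse operators (Definition 3.3) on the classes above.

R_V = {'y1': {'x1', 'x3'}, 'y2': {'x2', 'x6'},
       'y3': {'x1', 'x5'}, 'y4': {'x2', 'x4'}}

def rev_lower(X):
    return {y for y, ry in R_V.items() if ry <= X}

def rev_upper(X):
    return {y for y, ry in R_V.items() if ry & X}

X1 = {'x1', 'x2', 'x6'}
print(rev_lower(X1))           # {'y2'}
print(sorted(rev_upper(X1)))   # ['y1', 'y2', 'y3', 'y4'] = V
```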

As can be seen from this example, for rough sets over two universes the relation classes are crucial for identifying whether a set is definable or rough, and they affect whether the final decision-making results are correct. Moreover, the rough set method over two universes differs from the single-universe rough set method in processing uncertain decision-making problems. In traditional rough set theory, decision making mainly proceeds through the following steps: partition the attribute set into condition and decision attribute sets, perform attribute reduction, and construct decision rules from the sampled data; in general, a decision rule yields only one decision result. For the rough set method over two universes, however, once the two universes have been determined, decision making can be achieved through the binary relation between them, and the example shows that multiple decision results can be obtained at the same time.

Based on the above definitions, some properties of the approximation operators and the reverse approximation operators can be obtained.

Theorem 3.1

Let U, V be two universes and R be a serial and reverse serial binary relation on \(U\times{V}.\) For \(\forall X\,\subseteq\,{U},\;Y\,\subseteq\,{V},\) we have the following properties.

$$ \begin{aligned} &(\hbox{PLL})\;\underline{apr}_{R^{-1}}(\underline {apr}_{R}(Y))\subseteq{Y}, \underline {apr}_{R}(\underline {apr}_{R^{-1}}(X))\subseteq{X};\\ &(\hbox{PLU})\;\underline{apr}_{R^{-1}}(\overline{apr}_{R}(Y))\supseteq{Y}, \underline{apr}_{R}(\overline{apr}_{R^{-1}}(X))\supseteq{X};\\ &(\hbox{PUL})\;\overline{apr}_{R^{-1}}(\underline{apr}_{R}(Y))\subseteq{Y}, \overline{apr}_{R}(\underline {apr}_{R^{-1}}(X))\subseteq{X};\\ &(\hbox{PUU})\;\overline{apr}_{R^{-1}}(\overline{apr}_{R}(Y))\supseteq{Y}, \overline{apr}_{R}(\overline{apr}_{R^{-1}}(X))\supseteq{X}. \end{aligned} $$

Proof

Owing to the similarity of the paired statements, we only prove the first part of each property.

(PLL) If \(\underline{apr}_{R^{-1}}(\underline{apr}_{R}(Y))={\O},\) the expression obviously holds.

Otherwise, for \(\forall y\,\in\,\underline {apr}_{R^{-1}}(\underline{apr}_{R}(Y))=V'\,\subseteq\,{V},\) we know

$$ R_{V}(y)\,\subseteq\,\underline{apr}_{R}(Y). $$

Hence, by the Definition 3.1, we have

$$ \{x\,\in\,{U}:xRy,\, y\,\in\,{V'}\}\,\subseteq\,\{x\,\in\,{U}:R_{U}(x)\,\subseteq\,{Y}\} $$

Therefore, for every \(y\,\in\,{V'}\) and \(x\,\in\,{U},\) if xRy, then \(R_{U}(x)\,\subseteq\,{Y}.\)

Since R is reverse serial, each \(y\,\in\,{V'}\) admits such an x; then \(y\,\in\,{R_{U}(x)}\,\subseteq\,{Y},\) i.e., \(\underline {apr}_{R^{-1}}(\underline {apr}_{R}(Y))\,\subseteq\,{Y}.\)

(PLU) For \(\forall y\,\in\,{Y},\) since R is a reverse serial binary relation, we have

$$ {\O}\neq{R_{V}(y)}\,\subseteq\,{U}. $$

Therefore, for every \(x\,\in\,{R_{V}(y)}\) we have xRy. Meanwhile, since \(y\,\in\,{R_{U}(x)}\,\cap\,{Y},\) Definition 3.1 gives \(R_{U}(x)\,\cap\,{Y}\neq{\O}.\)

According to the Definition 3.2, we have \(R_{V}(y)\,\subseteq\,\overline{apr}_{R}(Y).\)

Hence, we can obtain \(\underline {apr}_{R^{-1}}(\overline{apr}_{R}(Y))\,\supseteq\,{Y},\) which is the required inclusion.

Note that the reverse inclusion does not hold in general: for \(y\,\in\,\underline {apr}_{R^{-1}}(\overline{apr}_{R}(Y)),\) every \(x\,\in\,{R_{V}(y)}\) only satisfies \(R_{U}(x)\,\cap\,{Y}\neq{\O},\) which does not force \(y\,\in\,{Y}.\)

(PUL) For \(\forall y\in{\overline{apr}_{R^{-1}}(\underline {apr}_{R}(Y))}=V'\,\subseteq\,{V},\) we have

$$ R_{V}(y)\,\cap\,{\underline{apr}_{R}(Y)}\neq{\O}. $$

Therefore, for each \(y\,\in\,{V'}\) there exists \(x\,\in\,{R_{V}(y)}\,\cap\,\underline{apr}_{R}(Y);\) by Definitions 3.1 and 3.2, xRy and \(R_{U}(x)\,\subseteq\,{Y}.\) Hence, we have \(y\,\in\,{R_{U}(x)}\,\subseteq\,{Y}\).

(PUU) Since R is serial and reverse serial, the lower approximations are contained in the corresponding upper approximations; hence (PUU) follows from (PLU). \(\square\)

In general, the above properties show that the approximation operators and the reverse approximation operators are not mutually inverse; composing \(\underline{apr}_{R^{-1}}\) with \(\overline{apr}_{R}\) only recovers a superset of the original set. Moreover, the above properties do not necessarily hold for an arbitrary binary relation over two universes.

3.2 VPRS-model over two universes

In this part, by setting threshold β, we present the notions of variable precision rough sets over two universes.

Definition 3.4

Let UV be two universes, R be an arbitrary binary relation on \(U\times{V},\;0.5<\beta\leq1,\) for any subset \(Y\,\subseteq\,{V},\) the β-lower approximation and β-upper approximation of Y in V are defined, respectively, by

$$ \underline{apr}_{R}^{\beta}(Y)=\{x\,\in\,{U}:D(Y/R_{U}(x))\geq\beta\} $$
(16)
$$ \overline{apr}_{R}^{\beta}(Y)=\{x\,\in\,{U}:D(Y/R_{U}(x))>1-\beta\} $$
(17)

The β-lower approximation \(\underline{apr}_{R}^{\beta}(Y)\) denotes the set of objects from which the result or state set Y can be inferred, with confidence level β, relying only on the given information. In contrast, the β-upper approximation \(\overline{apr}_{R}^{\beta}(Y)\) denotes the set of objects from which Y can possibly be inferred, with confidence level β, through the given information.

Obviously, by this definition, if β = 1, then we easily know the following facts: \(\underline{apr}_{R}^{\beta}(Y)=\underline{apr}_{R}(Y)\) and \(\overline{apr}_{R}^{\beta}(Y)=\overline{apr}_{R}(Y),\) i.e., the variable precision rough set model encompasses the rough set model as a special case. So we may say that the VPRS-model over two universes is an extension of the rough set model over two universes.
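Definition 3.4 only replaces the set-inclusion tests of Definition 3.2 by inclusion-degree thresholds, as the following sketch shows; the relation classes and Y are illustrative, not data from the paper.

```python
# beta-approximations over two universes, Eqs. (16)-(17); toy data.

def d(Y, X):
    return len(Y & X) / len(X)

def beta_lower(R_U, Y, beta):
    return {x for x, rx in R_U.items() if d(Y, rx) >= beta}

def beta_upper(R_U, Y, beta):
    return {x for x, rx in R_U.items() if d(Y, rx) > 1 - beta}

R_U = {'x1': {'y1', 'y2', 'y3'}, 'x2': {'y2'}}
Y = {'y1', 'y2'}
print(beta_lower(R_U, Y, 0.6))   # {'x1', 'x2'}: degrees 2/3 and 1
print(beta_lower(R_U, Y, 1.0))   # {'x2'}: beta = 1 recovers Eq. (12)
```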

Similarly, the reverse β-lower approximation and reverse β-upper approximation will be obtained as follows.

Definition 3.5

Let U, V be two universes, R be an arbitrary binary relation on \(U\times{V},\) and \(0.5<\beta\leq1.\) For any subset \(X\subseteq{U},\) the reverse β-lower approximation and reverse β-upper approximation of X in V are defined, respectively, by

$$ \underline{apr}_{R^{-1}}^{\beta}(X)=\{y\,\in\,{V}: D(X/R_{V}(y))\geq{\beta}\} $$
(18)
$$ \overline{apr}_{R^{-1}}^{\beta}(X)=\{y\,\in\,{V}:D(X/R_{V}(y))>1-\beta\} $$
(19)

where \(D(X/R_{V}(y))={\frac{|X\,\cap\,{R_{V}(y)}|}{|R_{V}(y)|}}.\)

In the same sense as Definition 3.3, the reverse β-lower approximation \(\underline{apr}_{R^{-1}}^{\beta}(X)\) denotes the set of objects that certainly exhibit some result or state, with confidence level β, according to the inference made from the information set X. The reverse β-upper approximation \(\overline{apr}_{R^{-1}}^{\beta}(X)\) denotes the set of objects that possibly exhibit some result or state, with confidence level β, according to that inference.

Especially, if \(\beta=1,\) then \(\underline{apr}_{R^{-1}}^{\beta}(X)=\underline{apr}_{R^{-1}}(X),\; \overline{apr}_{R^{-1}}^{\beta}(X)=\overline{apr}_{R^{-1}}(X).\) That is to say, the reverse variable precision approximation operators are an extension of the reverse approximation operators.

3.3 The properties of VPRS-model over two universes

Let U, V be two finite universes and R be an arbitrary binary relation on \(U\times{V}.\) Similar to the properties of Pawlak rough sets, we can obtain the following properties of the VPRS-model over two universes:

  • (P1) \(\underline{apr}_{R}^{\beta}({\O})=\overline{apr}_{R}^{\beta}({\O})={\O},\)

    $$ \underline{apr}_{R}^{\beta}(V)=\overline{apr}_{R}^{\beta}(V) =\bigcup\{x_{i}\in{U}:R_{U}(x_{i})\neq{\O}\}; $$
  • (P2) \(\forall Y\,\subseteq\,{V}, \underline{apr}_{R}^{\beta}(Y)\,\subseteq\,\overline{apr}_{R}^{\beta}(Y);\)

  • (P3) \(\forall Y\,\subseteq\,{V}, \underline {apr}_{R}(Y)\,\subseteq\,\underline{apr}_{R}^{\beta}(Y)\,\subseteq\, \overline{apr}_{R}^{\beta}(Y)\,\subseteq\,\overline{apr}_{R}(Y);\)

  • (P4) If \(Y_{1}\,\subseteq\,{Y_{2}}\,\subseteq\,{V},\) then \(\underline{apr}_{R}^{\beta}(Y_{1})\,\subseteq\,\underline{apr}_{R}^{\beta}(Y_{2}),\; \overline{apr}_{R}^{\beta} (Y_{1})\,\subseteq\,\overline{apr}_{R}^{\beta}(Y_{2});\)

  • (P5) If \(Y_{1},Y_{2}\,\subseteq\,{V}\) and \(Y_{1}\,\cap\,{Y_{2}}={\O},\) then \(\underline{apr}_{R}^{\beta}(Y_{1})\,\cap\,\underline{apr}_{R}^{\beta}(Y_{2})={\O},\)

    $$ \overline{apr}_{R}^{\beta} (Y_{1})\,\cap\,\overline{apr}_{R}^{\beta}(Y_{2})={\O}\;(\hbox{not necessarily holds}); $$
  • (P6) \(\forall Y_{1},Y_{2}\,\subseteq\,{V}, \underline{apr}_{R}^{\beta}(Y_{1}\, \cap\,{Y_{2}})\,\subseteq\,\underline{apr}_{R}^{\beta}(Y_{1})\,\cap\,\underline{apr}_{R}^{\beta}(Y_{2}),\)

    $$ \overline{apr}_{R}^{\beta}(Y_{1} \,\cap\,{Y_{2}})\,\subseteq\,\overline{apr}_{R}^{\beta}(Y_{1})\,\cap\,\overline{apr}_{R}^{\beta}(Y_{2}); $$
  • (P7) \(\forall Y_{1},Y_{2}\,\subseteq\,{V}, \;\underline{apr}_{R}^{\beta}(Y_{1} \,\cup\,{Y_{2}})\,\supseteq\,\underline {apr}_{R}^{\beta}(Y_{1})\,\cup\,\underline {apr}_{R}^{\beta}(Y_{2}),\)

    $$ \overline{apr}_{R}^{\beta}(Y_{1} \cup{Y_{2}})\,\supseteq\,\overline{apr}_{R}^{\beta}(Y_{1}) \,\cup\,\overline{apr}_{R}^{\beta}(Y_{2}); $$
  • (P8) \(\forall Y\,\subseteq\,{V},\; \sim{Y}=V-Y, \;\underline {apr}_{R}^{\beta}(\sim{Y})=\sim \overline{apr}_{R}^{\beta}(Y),\)

    $$ \overline{apr}_{R}^{\beta}(\sim{Y})=\sim \underline {apr}_{R}^{\beta}(Y); $$
  • (P9) \(\forall Y\,\subseteq\,{V},\) if \(0.5<\beta_{1}\leq\beta_{2}\leq1,\) then

    $$ \underline{apr}_{R}^{\beta_{1}}(Y)\supseteq\underline {apr}_{R}^{\beta_{2}}(Y),\quad \overline{apr}_{R}^{\beta_{1}}(Y)\subseteq \overline{apr}_{R}^{\beta_{2}}(Y). $$

Proof

(P1) By Definition 3.4, the relation \(\underline{apr}_{R}^{\beta}({\O})=\overline{apr}_{R}^{\beta} ({\O})={\O}\) obviously holds.

Because R is a binary relation on \(U\times{V},\) we have

$$ \begin{aligned} \underline{apr}_{R}^{\beta}(V)&=\{x_{i}\in{U}:D(V/R_{U}(x_{i}))\geq\beta\}\\ &=\left\{x_{i}\in{U}:{\frac{|V\cap{R_{U}(x_{i})}|}{|R_{U}(x_{i})|}} \geq\beta\right\}(\forall x_{i}\in{U}, R_{U}(x_{i})\subseteq{V})\\ &=\left\{x_{i}\in{U}:{\frac{|R_{U}(x_{i})|}{|R_{U}(x_{i})|}} =1\geq\beta\right\} =\bigcup\{x_{i}\in{U}:R_{U}(x_{i})\neq{\O}\}\\ \overline{apr}_{R}^{\beta}(V)&=\{x_{i}\in{U}:D(V/R_{U}(x_{i}))=1>1-\beta\}\\ &=\bigcup\{x_{i}\in{U}:R_{U}(x_{i})\neq{\O}\}. \end{aligned} $$

(P2) For \(\forall Y\,\subseteq\,{V},\) if \(x_{i}\,\in\,\underline{apr}_{R}^{\beta}(Y),\) then \(D(Y/R_{U}(x_{i}))\geq\beta.\) Since \(\beta\,\in\,{(0.5,1]},\) we have \(D(Y/R_{U}(x_{i}))\geq\beta>1-\beta,\) that is to say, \(x_{i}\,\in\,\overline{apr}_{R}^{\beta}(Y);\) namely, \(\underline {apr}_{R}^{\beta}(Y)\,\subseteq\,\overline{apr}_{R}^{\beta}(Y).\)

(P3) By property (P2), we only need to prove that \(\underline {apr}_{R}(Y)\,\subseteq\,\underline {apr}_{R}^{\beta}(Y)\) and \(\overline{apr}_{R}^{\beta}(Y)\,\subseteq\,\overline{apr}_{R}(Y).\)

By Definition 3.4, when \(\beta=1,\;\underline {apr}_{R}^{\beta}(Y)=\underline {apr}_{R}(Y)\) and \(\overline{apr}_{R}^{\beta}(Y)=\overline{apr}_{R}(Y);\) thus, if \(x_{i}\,\in\,\underline {apr}_{R}(Y),\) then \(D(Y/R_{U}(x_{i}))=1.\)

Therefore, for \(\beta\,\in\,{(0.5,1]},\;D(Y/R_{U}(x_{i}))= 1\geq\beta\,\Rightarrow\,{x_{i}\in\underline{apr}_{R}^{\beta}}(Y).\)

By the above argument, the first conclusion is proved; certainly, the second conclusion can be obtained in the same way.

(P4)

$$ \begin{aligned} \underline{apr}_{R}^{\beta}(Y_{1})&=\{x_{i}\in{U}: D(Y_{1}/R_{U}(x_{i}))\geq\beta\}\\ &=\left\{x_{i}\in{U}:{\frac{|Y_{1}\cap{R_{U}(x_{i})}|} {|R_{U}(x_{i})|}}\geq\beta\right\}\\ \end{aligned} $$

Since \(Y_{1}\,\subseteq\,{Y_{2}}\,\subseteq\,{V},\) for \(\forall x_{i}\in{U}\) we have \(Y_{1}\cap{R_{U}(x_{i})}\,\subseteq\,{Y_{2}\cap{R_{U}(x_{i})}},\) hence

$$ |Y_{1}\,\cap\,{R_{U}(x_{i})}|\leq{|Y_{2}\,\cap\,{R_{U}(x_{i})}|} $$

Consequently, \( {\frac{|Y_{2}\cap{R_{U}(x_{i})}|} {|R_{U}(x_{i})|}}\geq\beta\,\Rightarrow\,{\frac{|Y_{1}\cap{R_{U}(x_{i})}|} {|R_{U}(x_{i})|}}\geq\beta\)

That is

$$ \begin{aligned} &\left\{x_{i}\in{U}:{\frac{|Y_{2}\,\cap{R_{U}(x_{i})}|} {|R_{U}(x_{i})|}}\geq\beta\right\}\supseteq \left\{x_{i}\in{U}:{\frac{|Y_{1}\cap{R_{U}(x_{i})}|} {|R_{U}(x_{i})|}}\geq\beta\right\}\\ &\quad\Leftrightarrow\underline {apr}_{R}^{\beta}(Y_{2})\supseteq\underline {apr}_{R}^{\beta}(Y_{1}).\\ \end{aligned} $$

(P5) We only need to prove the first relation, \(\underline {apr}_{R}^{\beta}(Y_{1})\cap\underline {apr}_{R}^{\beta}(Y_{2})={\O};\) the second relation does not necessarily hold, and we will cite an example to illustrate this fact. We prove the first relation by contradiction.

Suppose there exists an element \(x_{i}\in\underline{apr}_{R}^{\beta}(Y_{1})\,\cap\,\underline{apr}_{R}^{\beta}(Y_{2}),\) then

$$ x_{i}\in\underline{apr}_{R}^{\beta}(Y_{1})\quad\hbox{and}\quad x_{i}\in\underline {apr}_{R}^{\beta}(Y_{2}) $$

Definition 3.4, together with Eq. (7), implies that

$${\frac{|Y_{1}\,\cap\,{R_{U}(x_{i})}|}{|R_{U}(x_{i})|}} \geq\beta\,\Leftrightarrow\,{|Y_{1}\,\cap\,{R_{U}(x_{i})}|} \geq\beta{|R_{U}(x_{i})|} $$
(20)
$${\frac{|Y_{2}\,\cap\,{R_{U}(x_{i})}|}{|R_{U}(x_{i})|}} \geq\beta\,\Leftrightarrow\,{|Y_{2}\,\cap\,{R_{U}(x_{i})}|} \geq\beta{|R_{U}(x_{i})|} $$
(21)

By expressions (20) and (21), owing to \({\frac{1}{2}}<\beta\leq1,\) we have

$$ |Y_{1}\,\cap\,{R_{U}(x_{i})}|\geq\beta{|R_{U}(x_{i})|} >{\frac{1}{2}}{|R_{U}(x_{i})|}$$
(22)
$$|Y_{2}\cap{R_{U}(x_{i})}|\geq\beta{|R_{U}(x_{i})|}>{\frac{1}{2}}{|R_{U}(x_{i})|}$$
(23)

Thus

$$ |Y_{1}\,\cap\,{R_{U}(x_{i})}|+|Y_{2}\,\cap\,{R_{U}(x_{i})}|>{|R_{U}(x_{i})|} $$
(24)

According to \(Y_{1},Y_{2}\,\subseteq\,{V}\) and \(Y_{1}\,\cap\,{Y_{2}}={\O},\) we can get

$$ |Y_{1}\,\cap\,{R_{U}(x_{i})}|+|Y_{2}\,\cap\,{R_{U}(x_{i})}|\leq{|R_{U}(x_{i})|} $$
(25)

Obviously, expressions (24) and (25) are contradictory; therefore, the original conclusion holds.

(P6) For \(\forall Y_{1},\,Y_{2}\,\subseteq\,{V},\) if \(x_{i}\in\underline{apr}_{R}^{\beta}(Y_{1}\,\cap\,{Y_{2}}),\) then

$${\frac{|Y_{1}\,\cap\,{Y_{2}}\,\cap\,{R_{U}(x_{i})}|}{|R_{U}(x_{i})|}}\geq\beta $$

Since \(Y_{1}\,\cap\,{Y_{2}}\,\cap\,{R_{U}(x_{i})}\,\subseteq\,{Y_{j}\,\cap\,{R_{U}(x_{i})}}\;(j=1,2),\) we obtain the following two results:

$$ {\frac{|Y_{1}\,\cap\,{R_{U}(x_{i})}|}{|R_{U}(x_{i})|}}\geq\beta \quad\hbox{and}\quad{\frac{|Y_{2}\,\cap\,{R_{U}(x_{i})}|}{|R_{U}(x_{i})|}}\geq\beta$$

That is \(x_{i}\,\in\,\underline{apr}_{R}^{\beta}(Y_{1})\) and \(x_{i}\,\in\,\underline{apr}_{R}^{\beta}(Y_{2}),\) namely, \(\underline{apr}_{R}^{\beta}(Y_{1} \,\cap\,{Y_{2}})\,\subseteq\,\underline{apr}_{R}^{\beta}(Y_{1})\,\cap\, \underline{apr}_{R}^{\beta}(Y_{2}).\) The proof of the second conclusion is the same as the previous one.

(P7) For \(\forall Y_{1},\,Y_{2}\,\subseteq\,{V},\) if \(x_{i}\,\in\,\underline{apr}_{R}^{\beta}(Y_{1})\,\cup\,\underline{apr}_{R}^{\beta}(Y_{2}),\) then

$${\frac{|Y_{1}\,\cap\,{R_{U}(x_{i})}|}{|R_{U}(x_{i})|}}\geq\beta \quad\hbox{or}\quad{\frac{|Y_{2}\,\cap\,{R_{U}(x_{i})}|}{|R_{U}(x_{i})|}}\geq\beta $$

Since \((Y_{1}\,\cup\,{Y_{2}})\,\cap\,{R_{U}(x_{i})}\,\supseteq\,{Y_{j}\,\cap\,{R_{U}(x_{i})}}\;(j=1,2),\) we obtain the following fact:

$${\frac{|(Y_{1}\cup{Y_{2}})\cap{R_{U}(x_{i})}|} {|R_{U}(x_{i})|}}\geq\beta$$

That is, \(x_{i}\in\underline{apr}_{R}^{\beta}(Y_{1}\,\cup\,{Y_{2}}),\) namely, \(\underline{apr}_{R}^{\beta}(Y_{1}\, \cup\,{Y_{2}})\,\supseteq\,\underline{apr}_{R}^{\beta}(Y_{1})\,\cup\,\underline{apr}_{R}^{\beta}(Y_{2}).\)

Analogously, the proof of the second conclusion is the same as the previous one.

(P8)

$$ \begin{aligned} \forall Y\,\subseteq\,{V}, x_{i}\,\in\,\underline {apr}_{R}^{\beta}(\sim{Y})&\Leftrightarrow\frac{|\sim{Y}\,\cap\,{R_{U} (x_{i})}|}{|R_{U}(x_{i})|}\geq\beta\\ &\Leftrightarrow\frac{|(V-Y)\,\cap\,{R_{U}(x_{i})}|}{|R_{U}(x_{i})|} \geq\beta\\ &\Leftrightarrow\frac{|(V\,\cap\,{R_{U}(x_{i})})-(Y\,\cap\,{R_{U}(x_{i})})|} {|R_{U}(x_{i})|}\geq\beta\quad (R_{U}(x_{i})\,\subseteq\,{V})\\ &\Leftrightarrow\frac{|R_{U}(x_{i})-(Y\,\cap\,{R_{U}(x_{i})})|} {|R_{U}(x_{i})|}\geq\beta\\ &\Leftrightarrow1-\frac{|Y\,\cap\,{R_{U}(x_{i})}|}{|R_{U}(x_{i})|} \geq\beta\quad (|A-B|=|A|-|A\,\cap\,{B}|)\\ &\Leftrightarrow\frac{|Y\,\cap\,{R_{U}(x_{i})}|}{|R_{U}(x_{i})|}\leq1-\beta\\ &\Leftrightarrow x_{i}\,\not\in\,\overline{apr}_{R} ^{\beta}(Y)\Leftrightarrow x_{i}\,\in\,\sim\overline{apr}_{R}^{\beta}(Y). \end{aligned} $$

(P9) For \(\forall Y\subseteq{V},\) and \(0.5<\beta_{1}\leq\beta_{2}\leq1,\) if \(x_{i}\in\underline {apr}_{R}^{\beta_{2}}(Y),\) then

$${\frac{|Y\,\cap\,{R_{U}(x_{i})}|}{|R_{U}(x_{i})|}}\geq\beta_{2}\geq\beta_{1}$$

Thus, we know that \(x_{i}\,\in\,\underline{apr}_{R}^{\beta_{1}}(Y),\) namely, \(\underline{apr}_{R}^{\beta_{1}}(Y)\,\supseteq\,\underline{apr}_{R}^{\beta_{2}}(Y).\)

Analogously, the proof of the second conclusion is the same as the previous one.

Remark

If the binary relation R is serial, the second conclusion of property (P1) becomes \(\underline{apr}_{R}^{\beta}(V)=\overline{apr}_{R}^{\beta}(V)=U.\) In addition, by property (P8), the β-lower approximation and the β-upper approximation are dual to each other.

In the following, we give an example to illustrate the second conclusion of property (P5).

Example 2

Let U, V be two universes, suppose \(U=\{x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\}\) and \(V=\{y_{1},y_{2},y_{3},y_{4},y_{5}\},\) and let R be a binary relation on \(U\times{V}\) whose relation classes are given as follows:

$$ \begin{aligned} R_{U}(x_{1})&=\{y_{1},y_{2},y_{4}\};\quad R_{U}(x_{2})=\{y_{3}\};\quad R_{U}(x_{3})=\{y_{2},y_{4}\};\\ R_{U}(x_{4})&=\{y_{4},y_{5}\};\quad R_{U}(x_{5})=\{y_{1},y_{2}\};\quad R_{U}(x_{6})=\{y_{3},y_{5}\}.\\ \end{aligned} $$

Suppose \(Y_{1}=\{y_{1},y_{2}\},\;Y_{2}=\{y_{3}\},\;Y_{3}=\{y_{3},y_{4}\}\) and \(\beta=0.6,\) then

$$ \begin{aligned} \underline {apr}_{R}^{\beta}(Y_{1})&=\{x_{1},x_{5}\};\quad\overline{apr}_{R} ^{\beta}(Y_{1})=\{x_{1},x_{3},x_{5}\};\\ \underline {apr}_{R}^{\beta}(Y_{2})&=\{x_{2}\};\quad\overline{apr}_{R} ^{\beta}(Y_{2})=\{x_{2},x_{6}\};\\ \underline {apr}_{R}^{\beta}(Y_{3})&=\{x_{2}\};\quad\overline{apr}_{R} ^{\beta}(Y_{3})=\{x_{2},x_{3},x_{4},x_{6}\}. \end{aligned} $$

Obviously, \(Y_{1}\,\cap\,{Y_{2}}={\O},\,Y_{1}\,\cap\,{Y_{3}}={\O}\) and \(Y_{1},\,Y_{2},\,Y_{3}\,\subseteq\,{V},\) but

$$ \begin{aligned} \overline{apr}_{R}^{\beta}(Y_{1})&\cap\overline{apr}_{R}^{\beta} (Y_{2})={\O},\\ \overline{apr}_{R}^{\beta}(Y_{1})&\cap \overline{apr}_{R}^{\beta}(Y_{3})=\{x_{3}\}\neq{\O}. \end{aligned} $$

So we may say that the second conclusion of property (P5) does not necessarily hold.
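The counterexample can be confirmed by direct computation, as in the sketch below.

```python
# A check of Example 2 at beta = 0.6, using the classes R_U(x_i) above.

def d(Y, X):
    return len(Y & X) / len(X)

R_U = {'x1': {'y1', 'y2', 'y4'}, 'x2': {'y3'}, 'x3': {'y2', 'y4'},
       'x4': {'y4', 'y5'}, 'x5': {'y1', 'y2'}, 'x6': {'y3', 'y5'}}

def upper(Y, beta=0.6):
    return {x for x, rx in R_U.items() if d(Y, rx) > 1 - beta}

Y1, Y2, Y3 = {'y1', 'y2'}, {'y3'}, {'y3', 'y4'}
print(upper(Y1) & upper(Y2))   # set(): empty for this pair
print(upper(Y1) & upper(Y3))   # {'x3'}: nonempty although Y1, Y3 are disjoint
```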

Moreover, for the reverse β-lower approximation and reverse β-upper approximation, we obtain the same results as the above properties.

3.4 VPRS-model with two parameters over two universes

As an extension of variable precision rough sets over two universes, the concept of the VPRS-model with two parameters over two universes is introduced in this section; afterwards, the related properties are discussed.

Definition 3.6

Let U, V be two universes, R be an arbitrary binary relation on \(U\times{V},\;0.5<\beta\leq\alpha\leq1,\) and let D denote the inclusion degree. For any subset \(Y\,\subseteq\,{V},\) the α-lower approximation and β-upper approximation of Y in U, containing two parameters, are defined, respectively, by

$$ \underline{apr}_{R}^{\alpha}(Y)=\{x\,\in\,{U}:D(Y/R_{U}(x))\geq\alpha\} $$
(26)
$$ \overline{apr}_{R}^{\beta}(Y)=\{x\,\in\,{U}:D(Y/R_{U}(x))>1-\beta\} $$
(27)

Analogously, the α-lower approximation and β-upper approximation can be interpreted in the same way as the β-lower (upper) approximation.

Obviously, if \(\alpha=\beta,\) the VPRS-model with two parameters reduces to the VPRS-model stated previously; that is, it may be viewed as an extension of the VPRS-model. By the definition, if \(\alpha\geq\beta,\) the α-lower approximation contains no more elements than the β-lower approximation, and the β-upper approximation contains no fewer elements than the α-upper approximation. According to the definition and the properties of the VPRS-model over two universes, we have the following theorem.
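A sketch of Definition 3.6 with separate parameters makes the monotonicity of the coming theorem visible; the relation classes and Y are illustrative, not data from the paper.

```python
# alpha-lower and beta-upper operators of Eqs. (26)-(27); toy data.

def d(Y, X):
    return len(Y & X) / len(X)

def alpha_lower(R_U, Y, alpha):
    return {x for x, rx in R_U.items() if d(Y, rx) >= alpha}

def beta_upper(R_U, Y, beta):
    return {x for x, rx in R_U.items() if d(Y, rx) > 1 - beta}

R_U = {'x1': {'y1', 'y2', 'y3'}, 'x2': {'y1', 'y2'}, 'x3': {'y3'}}
Y = {'y1', 'y2'}
print(alpha_lower(R_U, Y, 0.9))   # {'x2'}: only degree 1 passes
print(alpha_lower(R_U, Y, 0.6))   # {'x1', 'x2'}: lower grows as alpha shrinks
print(beta_upper(R_U, Y, 0.6))    # {'x1', 'x2'}
```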

Theorem 3.2

Let the triple A = (U, V, R) be a generalized approximation space. If \(0.5<\alpha_{1}\leq\alpha_{2}\leq1\) and \(0.5<\beta_{1}\leq\beta_{2}\leq1,\) then for any subset \(Y\,\subseteq\,{V}\)

  • (T1) \(\underline {apr}_{R}^{\alpha_{2}}(Y)\,\subseteq\,\underline {apr}_{R}^{\alpha_{1}}(Y);\)

  • (T2) \(\overline{apr}_{R}^{\beta_{1}}(Y)\,\subseteq\, \overline{apr}_{R}^{\beta_{2}}(Y).\)

The result (T1) shows that the lower approximation operator is monotonically decreasing with respect to the parameter α, while the conclusion (T2) shows that the upper approximation operator is monotonically increasing with respect to the parameter β.

Furthermore, for \(0.5<\beta<\alpha\leq1,\) we can obtain the following conclusions with respect to the approximation operators.

Theorem 3.3

Let the triple \(A=(U,V,R)\) be a generalized approximation space and D denote the inclusion degree. For any subset \(Y\,\subseteq\,{V},\) the following expressions hold.

  • (T3) \(\lim\limits_{\beta\uparrow\alpha}{\underline {apr}_{R}^{\beta}}(Y)=\bigcap\limits_{\beta<\alpha}{\underline {apr}_{R}^{\beta}}(Y)=\underline {apr}_{R}^{\alpha}(Y);\)

  • (T4) \(\lim\limits_{\alpha\downarrow\beta}{\overline{apr}_{R}^{\alpha}} (Y)=\bigcap\limits_{\alpha>\beta}{\overline{apr}_{R}^{\alpha}}(Y)=\overline{apr}_{R}^{\beta}(Y).\)

Proof

(T3) For \(\beta<\alpha,\) by the Theorem 3.2, we have

$$ \underline{apr}_{R}^{\beta}(Y)\,\supseteq\,\underline{apr}_{R}^{\alpha}(Y). $$

Therefore

$$ \lim\limits_{\beta\uparrow\alpha}{\underline {apr}_{R}^{\beta}}(Y)=\bigcap\limits_{\beta<\alpha}{\underline {apr}_{R}^{\beta}}(Y)\supseteq\underline {apr}_{R}^{\alpha}(Y). $$

In addition, suppose there exists \(x_{0}\,\in\,\bigcap\limits_{\beta<\alpha}{\underline {apr}_{R}^{\beta}}(Y)\) such that \(x_{0}\,\not\in\,\underline{apr}_{R}^{\alpha}(Y).\)

Then, for \(\forall \beta<\alpha,\) we have \(x_{0}\,\in\,\underline{apr}_{R}^{\beta}(Y)\) and \(x_{0}\not\in\underline {apr}_{R}^{\alpha}(Y).\) According to Definition 3.6 and the arbitrariness of the parameter β, we know

$$ D(Y/R_{U}(x_{0}))\geq\alpha\quad\hbox{and}\quad D(Y/R_{U}(x_{0}))<\alpha. $$

Obviously, this is a contradiction. So we can obtain \(\bigcap\limits_{\beta<\alpha}{\underline{apr}_{R}^{\beta}}(Y)=\underline {apr}_{R}^{\alpha}(Y).\)

(T4) For \(\alpha>\beta,\) similarly, we have

$$ \overline{apr}_{R}^{\alpha}(Y)\,\supseteq\,\overline{apr}_{R}^{\beta}(Y). $$

Therefore

$$ \lim\limits_{\alpha\downarrow\beta}{\overline{apr}_{R}^{\alpha}}(Y)= \bigcap\limits_{\alpha>\beta}{\overline{apr}_{R}^{\alpha}}(Y) \,\supseteq\,\overline{apr}_{R}^{\beta}(Y). $$

On the other hand, suppose there exists \(x_{0}\,\in\,\bigcap\limits_{\alpha>\beta}{\overline{apr}_{R}^{\alpha}} (Y)\) such that \(x_{0}\not\in\overline{apr}_{R}^{\beta}(Y).\)

For \(\forall \alpha>\beta,\) we have \(x_{0}\,\in\,\overline{apr}_{R}^{\alpha}(Y),\;x_{0}\,\not\in\, \overline{apr}_{R}^{\beta}(Y).\)

As in the above proof, by Definition 3.6 and the arbitrariness of the parameter α, we know

$$ D(Y/R_{U}(x_{0}))\geq1-\beta\quad\hbox{and}\quad D(Y/R_{U}(x_{0}))<1-\beta. $$

This is again a contradiction. Then, we can obtain \(\bigcap\limits_{\alpha>\beta}{\overline{apr}_{R}^{\alpha}}(Y) =\overline{apr}_{R}^{\beta}(Y).\) \(\square\)

Analogously to the definitions of the reverse β-lower approximation and reverse β-upper approximation over two universes, the notions of the reverse approximation operators containing two parameters can be proposed.

Definition 3.7

Let U, V be two universes, R be an arbitrary binary relation on \(U\times{V},\;0.5<\beta\leq\alpha\leq1,\) and let D denote the inclusion degree. For any subset \(X\,\subseteq\,{U},\) the reverse α-lower approximation and reverse β-upper approximation of X in V, containing two parameters, are defined, respectively, by

$$ \underline{apr}_{R^{-1}}^{\alpha}(X)=\{y\in{V}:D(X/R_{V}(y))\geq\alpha\} $$
(28)
$$ \overline{apr}_{R^{-1}}^{\beta}(X)=\{y\,\in\,{V}:D(X/R_{V}(y))>1-\beta\} $$
(29)

The reverse α-lower approximation and reverse β-upper approximation can be interpreted in the same way as the reverse β-lower (upper) approximation.

Resorting to Theorems 3.2 and 3.3, similar conclusions for the reverse approximation operators containing two parameters over two universes can be obtained.

4 An illustrative example

Let us illustrate the above concepts by the following example.

Suppose the symptom set is \(U=\{x_{1},x_{2},{\ldots},x_{10}\}\) and the disease set is \(V=\{y_{1},y_{2},{\ldots},y_{5}\},\) denoting ten symptoms and five diseases, respectively. It is well known that a disease may be accompanied by several symptoms, which establishes a corresponding relation between the set U and the set V. Without loss of generality, the relation R is given, in the form of a matrix, as follows:

$$ \begin{aligned} R(x_{i},y_{j})= \left[ \begin{array}{lllll} 1&0 & 0 & 0 &1\\ 0& 1 & 0 & 1 &0\\ 0& 0 & 1 & 1 & 1\\ 0& 0 & 0& 1 & 0\\ 1& 0 & 1 & 0 & 1\\ 0& 1 & 1& 0 & 1\\ 1& 1 & 0 & 1 & 0\\ 1& 0 & 0 & 1 & 0\\ 0& 1 & 0 & 0 & 1\\ 0& 1 & 0 & 1 & 1 \end{array} \right]\quad (i=1,2,{\ldots},10,\; j=1,2,{\ldots},5) \end{aligned} $$

where the value of \(R(x_{i},y_{j})\) indicates whether the element \(x_{i} (\in{U})\) is related to the element \(y_{j} (\in{V}).\) If \(R(x_{i},y_{j})=1,\) then \(x_{i}\) is related to \(y_{j},\) i.e., \((x_{i},y_{j})\in{R};\) conversely, \(R(x_{i},y_{j})=0\) denotes that \(x_{i}\) is not related to \(y_{j},\) i.e., \((x_{i},y_{j})\,\not\in\,{R}.\)

By the above relation matrix, we can obtain the R-related elements to each \(x_{i} (i=1,2,{\ldots},10)\) in V, as follows:

$$ \begin{aligned} R_{U}(x_{1})&=\{y_{1},y_{5}\};\quad R_{U}(x_{2})=\{y_{2},y_{4}\};\quad R_{U}(x_{3})=\{y_{3},y_{4},y_{5}\};\\ R_{U}(x_{4})&=\{y_{4}\};\quad R_{U}(x_{5})=\{y_{1},y_{3},y_{5}\};\quad R_{U}(x_{6})=\{y_{2},y_{3},y_{5}\};\\ R_{U}(x_{7})&=\{y_{1},y_{2},y_{4}\};\quad R_{U}(x_{8})=\{y_{1},y_{4}\};\quad R_{U}(x_{9})=\{y_{2},y_{5}\};\\ R_{U}(x_{10})&=\{y_{2},y_{4},y_{5}\}. \end{aligned} $$

Consider the subset \(Y=\{y_{2},y_{4}\}\,\subseteq\,{V},\) which means that a person has suffered from the diseases \(y_{2}\) and \(y_{4}.\) By Definition 3.2, we obtain the lower and upper approximations of Y as follows:

$$ \begin{aligned} \underline{apr}_{R}(Y)&=\{x_{2},x_{4}\},\\ \overline{apr}_{R}(Y)&=\{x_{2},x_{3},x_{4},x_{6},x_{7}, x_{8},x_{9},x_{10}\}. \end{aligned} $$

We can see from the results that if a person has simultaneously suffered from the diseases \(y_{2}\) and \(y_{4},\) then he (she) is certainly accompanied by the symptoms \(x_{2}\) and \(x_{4};\) besides these, he (she) possibly has one or several of the symptoms \(x_{3},\,x_{6},\,x_{7},\,x_{8},\,x_{9}\) and \(x_{10}.\)

By Definition 3.4, if \(\beta=0.65,\) then the β-lower (upper) approximations are given by

$$ \begin{aligned} \underline {apr}^{0.65}_{R}(Y)&=\{x_{2},x_{4},x_{7},x_{10}\},\\ \overline{apr}^{0.65}_{R}(Y)&=\{x_{2},x_{4},x_{7},x_{8},x_{9},x_{10}\}. \end{aligned} $$

Based on the above results, if a person has simultaneously suffered from the diseases \(y_{2}\) and \(y_{4},\) then at the confidence level \(\beta=0.65\) he (she) is accompanied by the symptoms \(x_{2},\,x_{4},\,x_{7}\) and \(x_{10};\) in addition, he (she) possibly has one or both of the symptoms \(x_{8}\) and \(x_{9}.\)
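The diagnosis computations above can be reproduced from the relation matrix, as in the following sketch.

```python
# Reproducing the Section 4 computations from the 10 x 5 matrix R(x_i, y_j).

M = [[1,0,0,0,1],[0,1,0,1,0],[0,0,1,1,1],[0,0,0,1,0],[1,0,1,0,1],
     [0,1,1,0,1],[1,1,0,1,0],[1,0,0,1,0],[0,1,0,0,1],[0,1,0,1,1]]
R_U = {f'x{i+1}': {f'y{j+1}' for j in range(5) if M[i][j]} for i in range(10)}

def d(Y, X):
    return len(Y & X) / len(X)

Y, beta = {'y2', 'y4'}, 0.65
print(sorted(x for x, rx in R_U.items() if rx <= Y))             # ['x2', 'x4']
print(sorted(x for x, rx in R_U.items() if d(Y, rx) >= beta))
# ['x10', 'x2', 'x4', 'x7']
print(sorted(x for x, rx in R_U.items() if d(Y, rx) > 1 - beta))
# ['x10', 'x2', 'x4', 'x7', 'x8', 'x9']
```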

The previous results show that the lower approximation is contained in the β-lower approximation, while the upper approximation contains the β-upper approximation. In disease diagnosis, we hope that the symptoms that are sure to appear are as many as possible, and that the symptoms that only possibly appear are as few as possible; obviously, this is consistent with practice.

Similar to the previous case, we can also obtain the R-related elements of each \(y_{j} (j=1,2,{\ldots},5)\) in U from the relation matrix; the corresponding results are given as follows:

$$ \begin{aligned} R_{V}(y_{1})&=\{x_{1},x_{5},x_{7},x_{8}\};\quad R_{V}(y_{2})=\{x_{2},x_{6},x_{7},x_{9},x_{10}\};\\ R_{V}(y_{3})&=\{x_{3},x_{5},x_{6}\};\quad R_{V}(y_{4})=\{x_{2},x_{3},x_{4},x_{7},x_{8},x_{10}\};\\ R_{V}(y_{5})&=\{x_{1},x_{3},x_{5},x_{6},x_{9},x_{10}\}. \end{aligned} $$

Consider the subset \(X=\{x_{2},\,x_{7},\,x_{8},\,x_{10}\}\,\subseteq\,{U},\) which means that a person has the symptoms \(x_{2},\,x_{7},\,x_{8}\) and \(x_{10}.\) By Definition 3.3, the reverse lower and upper approximations of X are obtained as follows:

$$ \begin{aligned} \underline {apr}_{R^{-1}}(X)&=\varnothing,\\ \overline{apr}_{R^{-1}}(X)&=\{y_{1},y_{2},y_{4},y_{5}\}. \end{aligned} $$

The above results show the following fact: if a person only has the symptoms \(x_{2},\,x_{7},\,x_{8}\) and \(x_{10},\) we cannot be sure that he (she) has been suffering from any particular disease; we can only say that he (she) has possibly suffered from one or several of the diseases \(y_{1},y_{2},y_{4}\) and \(y_{5}.\)

According to Definition 3.5, if \(\beta=0.65,\) the reverse β-lower and β-upper approximations are obtained as follows:

$$ \begin{aligned} \underline {apr}^{0.65}_{R^{-1}}(X)&=\{y_{4}\},\\ \overline{apr}^{0.65}_{R^{-1}}(X)&=\{y_{1},y_{2},y_{4}\}. \end{aligned} $$

Using the above results, we know that if a person has the symptoms \(x_{2},x_{7},x_{8}\) and \(x_{10},\) then at the confidence level \(\beta=0.65\) he (she) has suffered from the disease \(y_{4};\) apart from this, he (she) has possibly suffered from the diseases \(y_{1}\) and \(y_{2}.\)

Similarly, it can be seen from the previous results that the reverse lower approximation is included in the reverse β-lower approximation, while the reverse upper approximation contains the reverse β-upper approximation. As in the previous case, a similar interpretation can be made: in practice, when a doctor knows some symptoms, he (she) hopes to judge accurately all the diseases a person suffers from, and meanwhile to exclude as many of the merely possible diseases as he (she) can. Certainly, this case is also reasonable.

From a general point of view, as with Pawlak rough sets on a single universe, if the lower approximation and the complement of the upper approximation in the universe are viewed as the positive region and the negative region of the VPRS-model, respectively, then the difference between the upper approximation and the lower approximation is the boundary region of the VPRS-model. According to Theorem 3.2, the boundary region of the VPRS-model proposed in this paper is smaller than that of the classical rough set model over two universes; simultaneously, its positive region and negative region are bigger than those of the classical rough set model over two universes. Therefore, the approximation accuracy of the VPRS-model is higher than that of the classical rough set model over two universes.

Summarizing the above analysis, one can see that the variable precision rough set model over two universes is better suited than the rough set model to multi-decision making problems such as disease diagnosis.

5 Conclusions

In this paper we have introduced the VPRS-model over two universes based on the VPRS-model on a single universe, and have mainly discussed nine properties of the β-lower approximation and the β-upper approximation in the VPRS-model over two universes. Obviously, if the threshold β = 1, the VPRS-model becomes the rough set model over two universes. In addition, the reverse approximation operators have been introduced and their properties analyzed together with those of the approximation operators; in general, the results show that the two families of operators are not mutually inverse. As a generalization of the VPRS-model over two universes, the approximation operators with two parameters and their properties have been discussed; the VPRS-model is the special case in which the two parameters are equal. The VPRS-model is viewed as a generalization of rough sets and can be applied more widely and effectively in practice. Certainly, the illustrative example has shown that the (reverse) β-approximation operators handle uncertainty problems, especially multi-decision making problems, better than the (reverse) approximation operators. Through the analysis of this paper, we have also seen that the results may be useful for applications of rough sets and further enrich rough set theory.