1 Introduction

Our classical mathematical approaches for modelling, computing and reasoning are crisp, deterministic and precise in nature. However, most real-world problems intrinsically involve uncertainties and ambiguities. Such problems arise especially in Medical Sciences, Economics, Ecology, Engineering, Environmental Sciences, Social Sciences and many other fields, which depend heavily on modelling uncertainties and therefore cannot be handled by classical mathematical approaches alone.

With the passage of time, numerous analysts, mathematicians and researchers have attempted to devise appropriate tools and mathematical theories to manage these uncertainties. Theories such as Probability Theory, Fuzzy Set Theory, Rough Set Theory, Soft Set Theory, Interval Mathematics, Vague Set Theory, Graph Theory, Automata Theory and Decision-Making Theory have been formulated to solve such problems, but have been found only partially successful. These theories narrowed the gap between classical mathematical models and vague real-world data.

However, most of these theories have their inherent deficiencies, perhaps because of the inadequacy of their parameterization tools, as mentioned in Molodtsov (1999).

In 1965, Zadeh (1965) laid the foundation of fuzzy set theory. Fuzzy set theory is an important and successful mathematical tool that is well suited to coping with uncertainties. It relies on the fuzzy membership function, with the help of which we can determine the membership degree of an element with respect to a set. However, a difficulty remains in how to fix the membership function in each particular case, because the nature of the membership function is extremely individualistic. There are different generalizations of fuzzy sets, for example, intuitionistic fuzzy sets, vague sets, interval-valued fuzzy sets, neutrosophic fuzzy sets and many more.

The bipolar fuzzy set is another extension of the fuzzy set, given by Zhang (1994). The membership degree of a bipolar fuzzy set is extended from the interval [0, 1] to \([-1, 1]\). In a bipolar fuzzy set, a membership degree of 0 implies that the element is irrelevant to the corresponding property, a membership degree in (0, 1] shows that the element somewhat satisfies the property, and a membership degree in \([-1, 0)\) shows that the element somewhat satisfies the implicit counter-property.

Rough set theory, presented by Pawlak (1982), is another successful mathematical approach for examining vagueness in data. In this approach, vagueness is expressed by a boundary region of a set, not by a partial membership as in fuzzy set theory. It is based on the assumption that we can always associate some information (data/knowledge) with every object in the universe. Pawlak used the upper and lower approximations of a collection of objects to investigate how close the objects are to the information attached to them. This theory has been effectively applied to many fields such as data mining, machine learning, data analysis, and medicine.

To overcome these difficulties, in 1999, the Russian researcher Molodtsov (1999) proposed the notion of soft sets, which can be viewed as a completely new mathematical approach for modelling vagueness and uncertainties: a soft set is associated with an adequate set of parameters and is thus free from the aforementioned difficulties. Soft sets are intended to capture and to defuse the conflicts among existing fuzzy set theories, and soft set theory is a consistent and unified theory implied implicitly by existing fuzzy theories. Thus, soft set theory is a generalization of fuzzy set theory (Liu et al. 2019) that was proposed to tackle uncertainty in a non-parametric manner.

Unlike classical mathematics, where an exact solution of a mathematical model is required, soft set theory takes an approximate description of an object as its starting point. The choice of adequate parameterization tools, such as words, real numbers and functions, makes soft set theory very convenient and easy to apply in practice. Many interesting applications of soft set theory can be seen in Çağman et al. (2011), Jiang et al. (2011), Zou and Xiao (2008), Çağman and Enginoğlu (2010) and Herwan (2010).

Recently, many researchers have been engaged in studying the properties and applications of soft set theory. Maji et al. (2003) defined new concepts on soft sets and carried out a detailed theoretical study of them. Ali et al. (2009) studied some new operations in soft set theory. Maji et al. (2001) proposed the notion of fuzzy soft sets. Naz and Shabir (2013) introduced the idea of bipolar soft sets. Shabir and Shaheen (2017) initiated the idea of fuzzified rough approximations of a set based on a fuzzy tolerance relation.

Fuzzy soft set theory (Maji et al. 2001) is useful for solving real-world problems and supports decision-making in critical circumstances. Applications of fuzzy soft set theory can also be seen in Alcantud et al. (2019), Çelik and Yamak (2013), Gogoi et al. (2014), Han et al. (2015), Kalayathankal and Singh (2010), Karaaslan and Çağman (2018), Liu et al. (2019) and Xiao (2018). Fuzzy bipolar soft sets (Malik and Shabir 2019) have the capacity to deal with the uncertainty, as well as the bipolarity, of the information in many situations.

In 1975, Rosenfeld (1975) initiated the concept of fuzzy graphs. Further generalizations of fuzzy graphs can be seen in Akram et al. (2020), Luqman et al. (2019) and Shahzadi and Akram (2019). Akram (2013a) and Akram (2011) introduced certain ideas of bipolar fuzzy graphs and defined some operations on them. The theory of bipolar fuzzy graphs was developed further in Rashmanlou et al. (2015), Sarwar and Akram (2017), and Singh and Kumar (2014). Akram and Nawaz (2015a), Akram and Nawaz (2015b), and Akram and Nawaz (2016) first dealt with soft graphs and fuzzy soft graphs.

In this article, we introduce a new tool for the fuzzification of bipolar rough sets. It comprises both the fuzzification of the information system, when the attribute values are linguistic terms, and an (intransitive) bipolar fuzzy tolerance relation, which is used to measure the compatibility of \((\alpha , \beta )\)-indiscernible objects, that is, objects which do not have exactly the same attributes but are similar or compatible up to certain degrees \(\alpha \) and \(\beta \).

The organization of this article is as follows. Section 2 gives the fundamental definitions of bipolar rough sets and bipolar fuzzy relations. In Sect. 3, the notion of an \((\alpha , \beta )\)-bipolar fuzzified rough set is introduced and some of its basic properties are investigated. Section 4 gives the concepts of accuracy and roughness measures for \((\alpha , \beta )\)-bipolar fuzzified rough sets. Finally, Sect. 5 presents some conclusions.

2 Preliminaries

In this section, we give the essential definitions and initial results required in the upcoming sections of the study. Throughout this section, we will use \(\mathcal {U}\) for an initial universe (non-empty and finite), E for the set of parameters, A for a non-empty subset of the parameter set E and \(P(\mathcal {U})\) for the power set of \(\mathcal {U}\), unless stated otherwise.

2.1 Rough sets

In rough set theory (Pawlak 1982), the equivalence relation plays a significant role in coping with uncertainty in data sets. This relation partitions the universe into classes, generally called granules of information (data). Hence, in rough set theory, we deal with clusters of objects rather than with single objects.

Definition 2.1

(Karaaslan 2016) An information system (or knowledge representation system) is a pair \(\mathcal {IS} = (\mathcal {U}, A_t)\), where \(\mathcal {U}\) is a non-empty finite set of objects and \(A_t\) is a non-empty finite set of attributes (parameters) such that each attribute \(a\in A_t\) is a function \(a : \mathcal {U} \longrightarrow V_a\), where \(V_a\) is the set of values (called the domain) of attribute a.

Definition 2.2

(Karaaslan 2016) Let \(\mathcal {IS} = (\mathcal {U}, A_t)\) be an information system. Then with any \(B \subseteq A_t\), there is an associated equivalence relation:

$$\begin{aligned}\mathfrak {R} = \mathcal {I}nd_{\mathcal {IS}}(B) = \big \{ (x, y) \in \mathcal {U}^2 \mid \forall \ a\in B, a(x) = a(y)\big \},\end{aligned}$$

where \(\mathcal {I}nd_{\mathcal {IS}}(B)\) is called the B-indiscernibility relation.

Definition 2.3

(Pawlak 1982) Let \(\mathcal {U}\) be a non-empty finite universe and \(\mathfrak {R}\) be an equivalence relation over \(\mathcal {U}\). Then the pair \(P = (\mathcal {U}, \mathfrak {R})\) is called a Pawlak approximation space.

Using the indiscernibility relation \(\mathfrak {R}\), for every subset \(X \subseteq \mathcal {U}\), we can define the following two crisp sets:

$$\begin{aligned} \mathfrak {R}_* (X)= & {} \big \{x \in \mathcal {U} \mid [x]_\mathfrak {R} \subseteq X\big \}, \end{aligned}$$
(2.1)
$$\begin{aligned} \mathfrak {R}^* (X)= & {} \big \{x \in \mathcal {U} \mid [x]_\mathfrak {R} \cap X \ne \emptyset \big \}. \end{aligned}$$
(2.2)

Equations (2.1) and (2.2) are known as lower and upper approximations of X with respect to the Pawlak approximation space \(P = (\mathcal {U}, \mathfrak {R})\), respectively. Here,

$$\begin{aligned}{[x]}_\mathfrak {R} = \big \{y \in \mathcal {U} \mid (x, y) \in \mathfrak {R} \big \}.\end{aligned}$$
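For concreteness, the following is a minimal Python sketch of Eqs. (2.1) and (2.2); the representation of \(\mathfrak {R}\) as a set of ordered pairs and all function names are illustrative assumptions, not part of the theory above.

```python
# A minimal sketch: Pawlak lower and upper approximations of X in (U, R),
# where the equivalence relation R is stored as a set of ordered pairs.

def equivalence_class(x, R):
    """[x]_R = {y in U : (x, y) in R}."""
    return {y for (a, y) in R if a == x}

def lower_approximation(X, U, R):
    """R_*(X) = {x in U : [x]_R is a subset of X}   (Eq. 2.1)."""
    return {x for x in U if equivalence_class(x, R) <= set(X)}

def upper_approximation(X, U, R):
    """R^*(X) = {x in U : [x]_R meets X}            (Eq. 2.2)."""
    return {x for x in U if equivalence_class(x, R) & set(X)}

# Usage: U = {1,...,6} partitioned into blocks {1,2}, {3,4}, {5,6}; X = {1,2,3}.
U = {1, 2, 3, 4, 5, 6}
blocks = [{1, 2}, {3, 4}, {5, 6}]
R = {(x, y) for B in blocks for x in B for y in B}   # induced equivalence relation
X = {1, 2, 3}
print(lower_approximation(X, U, R))   # {1, 2}
print(upper_approximation(X, U, R))   # {1, 2, 3, 4}
```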

Definition 2.4

(Pawlak 1982) Let \(\mathfrak {R}_* (X)\) and \(\mathfrak {R}^* (X)\) be lower and upper approximations of \(X \subseteq \mathcal {U}\). Then the sets

  1. (i)

    \(\mathbb {P}os_\mathfrak {R} (X) = \mathfrak {R}_* (X),\)

  2. (ii)

    \(\mathbb {B}nd_\mathfrak {R} (X) = \mathfrak {R}^* (X) - \mathfrak {R}_* (X),\)

  3. (iii)

    \(\mathbb {N}eg_\mathfrak {R} (X) = \big (\mathfrak {R}^* (X)\big )^c,\)

  4. (iv)

    \(\overline{\mathbb {E}dg}_\mathfrak {R} (X)= \mathfrak {R}^* (X) - X,\)

  5. (v)

    \(\underline{\mathbb {E}dg}_\mathfrak {R} (X) = X - \mathfrak {R}_* (X)\)

are called the \(\mathfrak {R}\)-\(positive\ region\) (\(\mathfrak {R}^+\)-region), \(\mathfrak {R}\)-\(boundary\ region\) (\(\mathfrak {R}\mathbb {B}_{nd}\)-region), \(\mathfrak {R}\)-\(negative\ region\) (\(\mathfrak {R}^-\)-region), \(\mathfrak {R}\)-\(external\ edge\) (\(\mathfrak {R}\mathbb {E}_{ex}\)) and \(\mathfrak {R}\)-\(internal\ edge\) (\(\mathfrak {R}\mathbb {E}_{in}\)) of X, respectively, where \(\big (\mathfrak {R}^* (X)\big )^c = \mathcal {U} - \mathfrak {R}^* (X).\)

The set X is said to be crisp (exact or definable with respect to \(\mathfrak {R}\)) if and only if \(\mathfrak {R}_* (X) = \mathfrak {R}^* (X)\); equivalently \(\mathbb {B}nd_\mathfrak {R} (X) = \emptyset \). And the set X is said to be rough (inexact or undefinable with respect to \(\mathfrak {R}\)) if and only if \(\mathfrak {R}_* (X) \ne \mathfrak {R}^* (X)\); equivalently \(\mathbb {B}nd_\mathfrak {R} (X) \ne \emptyset \).

Note that sometimes the pair \(\big (\mathfrak {R}_* (X)\), \(\mathfrak {R}^* (X)\big ) \in P(\mathcal {U})\times P(\mathcal {U})\) is referred to as the rough set approximation of X with respect to \(\mathfrak {R}\).

It can be seen that the lower approximation \(\mathfrak {R}_*(X)\) of a set X is the greatest definable set contained in X, and the upper approximation \(\mathfrak {R}^*(X)\) is the least definable set containing X with respect to the equivalence relation \(\mathfrak {R}\).

Other fundamental properties of the lower and upper approximation operators of a set X are listed in the following theorem, which has been adapted from Pawlak (1982).

Theorem 2.5

Let \(P = (\mathcal {U}, \mathfrak {R})\) be a Pawlak approximation space. Then for any \(X, Y \subseteq \mathcal {U}\), the following properties hold for the lower and upper approximations:

  1. 1.

    \(\mathfrak {R}_* (X) \subseteq X \subseteq \mathfrak {R}^* (X)\);

  2. 2.

    \(X \subseteq Y\) implies \(\mathfrak {R}_* (X) \subseteq \mathfrak {R}_* (Y)\) and \(\mathfrak {R}^* (X) \subseteq \mathfrak {R}^* (Y)\);

  3. 3.

    \(\mathfrak {R}_* (\emptyset ) = \emptyset = \mathfrak {R}^* (\emptyset )\);

  4. 4.

    \(\mathfrak {R}_* (\mathcal {U}) = \mathcal {U} = \mathfrak {R}^* (\mathcal {U})\);

  5. 5.

    \(\mathfrak {R}_* (\mathfrak {R}_* (X)) = \mathfrak {R}_* (X) = \mathfrak {R}^* (\mathfrak {R}_* (X))\);

  6. 6.

    \(\mathfrak {R}^* (\mathfrak {R}^* (X)) = \mathfrak {R}^* (X) = \mathfrak {R}_* (\mathfrak {R}^* (X))\);

  7. 7.

    \(\mathfrak {R}_* ( X \cap Y) = \mathfrak {R}_* (X) \cap \mathfrak {R}_* (Y)\);

  8. 8.

    \(\mathfrak {R}^* ( X \cap Y) \subseteq \mathfrak {R}^* (X) \cap \mathfrak {R}^* (Y)\);

  9. 9.

    \(\mathfrak {R}^* ( X \cup Y) = \mathfrak {R}^* (X) \cup \mathfrak {R}^* (Y)\);

  10. 10.

    \(\mathfrak {R}_* ( X \cup Y) \supseteq \mathfrak {R}_* (X) \cup \mathfrak {R}_* (Y)\);

  11. 11.

    \(\mathfrak {R}_* ( X - Y) \subseteq \mathfrak {R}_* (X) - \mathfrak {R}_* (Y)\);

  12. 12.

    \(\mathfrak {R}^* ( X - Y) \supseteq \mathfrak {R}^* (X) - \mathfrak {R}^* (Y)\);

  13. 13.

    \((\mathfrak {R}_* (X))^c = \mathfrak {R}^* (X^c)\); where \(X^c = \mathcal {U} - X\)

  14. 14.

    \((\mathfrak {R}^* (X))^c = \mathfrak {R}_* (X^c)\). The following axioms are the counterparts of the law \(X \cup X^c = \mathcal {U}\) for approximations:

  15. 15.

    \(\mathfrak {R}^* (X) \cup \mathfrak {R}_* (X^c) = \mathcal {U}\);

  16. 16.

    \(\mathfrak {R}^* (X) \cup \mathfrak {R}^* (X^c) = \mathcal {U}\);

  17. 17.

    \(\mathfrak {R}_* (X) \cup \mathfrak {R}^* (X^c) = \mathcal {U}\);

  18. 18.

    \(\mathfrak {R}_* (X) \cup \mathfrak {R}_* (X^c) = \big (\mathbb {B}nd_\mathfrak {R} (X)\big )^c\).

    The following axioms are the equivalent forms of the law \(X \cap X^c = \emptyset \) for approximations:

  19. 19.

    \(\mathfrak {R}^* (X) \cap \mathfrak {R}_* (X^c) = \emptyset \);

  20. 20.

    \(\mathfrak {R}_* (X) \cap \mathfrak {R}_* (X^c) = \emptyset \);

  21. 21.

    \(\mathfrak {R}_* (X) \cap \mathfrak {R}^* (X^c) = \emptyset \);

  22. 22.

    \(\mathfrak {R}^* (X) \cap \mathfrak {R}^* (X^c) = \mathbb {B}nd_\mathfrak {R} (X)\).

    De Morgan’s laws have the following equivalent forms for approximations:

  23. 23.

    \((\mathfrak {R}_*(X) \cup \mathfrak {R}_* (Y))^c = \mathfrak {R}^* (X^c) \cap \mathfrak {R}^* (Y^c)\);

  24. 24.

    \((\mathfrak {R}_*(X) \cup \mathfrak {R}^* (Y))^c = \mathfrak {R}^* (X^c) \cap \mathfrak {R}_* (Y^c)\);

  25. 25.

    \((\mathfrak {R}^*(X) \cup \mathfrak {R}_* (Y))^c = \mathfrak {R}_ * (X^c) \cap \mathfrak {R}^* (Y^c)\);

  26. 26.

    \((\mathfrak {R}^*(X) \cup \mathfrak {R}^* (Y))^c = \mathfrak {R}_* (X^c) \cap \mathfrak {R}_* (Y^c)\);

  27. 27.

    \((\mathfrak {R}_*(X) \cap \mathfrak {R}_* (Y))^c = \mathfrak {R}^* (X^c) \cup \mathfrak {R}^* (Y^c)\);

  28. 28.

    \((\mathfrak {R}_*(X) \cap \mathfrak {R}^* (Y))^c = \mathfrak {R}^* (X^c) \cup \mathfrak {R}_* (Y^c)\);

  29. 29.

    \((\mathfrak {R}^*(X) \cap \mathfrak {R}_* (Y))^c = \mathfrak {R}_* (X^c) \cup \mathfrak {R}^* (Y^c)\);

  30. 30.

    \((\mathfrak {R}^*(X) \cap \mathfrak {R}^* (Y))^c = \mathfrak {R}_* (X^c) \cup \mathfrak {R}_* (Y^c)\).

    Moreover, we have

  31. 31.

    X is definable \(\Longleftrightarrow \mathfrak {R}_* (X) = X \Longleftrightarrow \mathfrak {R}^* (X) = X \Longleftrightarrow \mathfrak {R}_* (X) = \mathfrak {R}^* (X)\).

  32. 32.

    If X and Y are definable, then

    \(\mathfrak {R}_* (X \cup Y) = \mathfrak {R}_* (X) \cup \mathfrak {R}_* (Y)\) and \(\mathfrak {R}^* (X \cap Y) = \mathfrak {R}^* (X) \cap \mathfrak {R}^* (Y)\).

  33. 33.

    If X and Y are definable, then

    \(\mathfrak {R}_* (X - Y) = \mathfrak {R}_* (X) - \mathfrak {R}_* (Y)\) and \(\mathfrak {R}^* (X - Y) = \mathfrak {R}^* (X) - \mathfrak {R}^* (Y)\).

2.2 Fuzzy relations and some related concepts

Here we discuss some basic notions related to fuzzy sets and fuzzy relations.

Definition 2.6

(Zadeh 1965) Let \(\mathcal {U}\) be a non-empty finite set, called the universe. A fuzzy set (or fuzzy subset) \(\lambda \) on \(\mathcal {U}\) is a function from \(\mathcal {U}\) into the unit closed interval [0, 1], that is, \(\lambda : \mathcal {U} \longrightarrow [0, 1].\)

The value \(\lambda (u)\) of \(\lambda \) at \(u \in \mathcal {U}\) denotes the membership degree of u in \(\lambda \).

  1. (i)

    \(\lambda (u) = 1\) means full membership.

  2. (ii)

    \(\lambda (u) = 0\) means non-membership.

  3. (iii)

    \(0< \lambda (u) < 1\) means partial membership.

For the two extreme cases \(\emptyset \) (the empty fuzzy set) and \(\mathcal {U}\) (the fuzzy entire set), the membership degree functions are defined by \(\forall u \in \mathcal {U}\), \(\emptyset (u) = 0\) and \(\mathcal {U}(u) = 1\), respectively.

The collection of all fuzzy sets over \(\mathcal {U}\) is represented by \(\mathcal {F}(\mathcal {U})\).

Definition 2.7

(Shabir and Shaheen 2017) A fuzzy subset \(\mu \in \mathcal {F}(\mathcal {U} \times \mathcal {U})\) is said to be a fuzzy binary relation (or fuzzy relation) over \(\mathcal {U}\), that is, \(\mu : \mathcal {U} \times \mathcal {U} \longrightarrow [0, 1]\).

Definition 2.8

(Shabir and Shaheen 2017) Let \(\mu \) be a fuzzy relation over \(\mathcal {U}\). Then

  1. (i)

    \(\mu \) is said to be a serial fuzzy relation if for each \(x \in \mathcal {U}\), \(\exists \ y \in \mathcal {U}\) such that \(\mu (x, y) = 1\).

  2. (ii)

    \(\mu \) is said to be a reflexive fuzzy relation if for all \(x \in \mathcal {U}\), \(\mu (x, x) = 1\).

  3. (iii)

    \(\mu \) is said to be a symmetric fuzzy relation if for all \(x, y \in \mathcal {U}\), \(\mu (x, y) = \mu (y, x)\).

  4. (iv)

    \(\mu \) is said to be a transitive fuzzy relation if for all \(x, z \in \mathcal {U}\),

    $$\begin{aligned}\mu (x, z) \ge \bigvee _{y \in \mathcal {U}} \mu (x, y) \wedge \mu (y, z).\end{aligned}$$

Definition 2.9

Let \(\mu \) be a fuzzy relation over \(\mathcal {U}\). Then \(\mu \) is called a tolerance relation over \(\mathcal {U}\) (also called a proximity relation) if it is a reflexive and symmetric fuzzy relation.

These proximity relations can intuitively be interpreted as measures of ‘likeness’ or ‘sameness’ among the elements of \(\mathcal {U}\). When \(\mu \) is a fuzzy compatibility relation, compatibility classes are defined in terms of a specified membership degree \(\alpha \). An \(\alpha \)-compatibility class (\(\alpha \)-cut) is a subset \(\mu _{\alpha }\) of \(\mathcal {U} \times \mathcal {U}\) defined by \(\mu _{\alpha } = \{ (x, y) \in \mathcal {U} \times \mathcal {U} : \mu (x, y) \ge \alpha \}\), where \(\alpha \in [0, 1]\).

Definition 2.10

A fuzzy relation is said to be a fuzzy equivalence relation (or similarity relation) if it is a reflexive, symmetric and transitive fuzzy relation.
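As a computational illustration of Definitions 2.8–2.10 and of the \(\alpha \)-compatibility classes discussed above, here is a minimal Python sketch; storing the relation as a dictionary over pairs and all function names are assumptions made only for this sketch.

```python
# A minimal sketch: property checks and alpha-cuts for a fuzzy relation mu over
# a finite universe U, stored as a dict {(x, y): degree in [0, 1]}.

def is_reflexive(mu, U):
    """mu(x, x) = 1 for all x in U."""
    return all(mu[(x, x)] == 1 for x in U)

def is_symmetric(mu, U):
    """mu(x, y) = mu(y, x) for all x, y in U."""
    return all(mu[(x, y)] == mu[(y, x)] for x in U for y in U)

def is_sup_min_transitive(mu, U):
    """mu(x, z) >= sup_y min(mu(x, y), mu(y, z)) for all x, z in U."""
    return all(mu[(x, z)] >= max(min(mu[(x, y)], mu[(y, z)]) for y in U)
               for x in U for z in U)

def alpha_cut(mu, alpha):
    """mu_alpha = {(x, y) : mu(x, y) >= alpha}."""
    return {pair for pair, degree in mu.items() if degree >= alpha}

# Usage: a tolerance relation (reflexive and symmetric, but not transitive).
U = ['a', 'b', 'c']
mu = {('a', 'a'): 1.0, ('b', 'b'): 1.0, ('c', 'c'): 1.0,
      ('a', 'b'): 0.8, ('b', 'a'): 0.8,
      ('b', 'c'): 0.7, ('c', 'b'): 0.7,
      ('a', 'c'): 0.2, ('c', 'a'): 0.2}
print(is_reflexive(mu, U), is_symmetric(mu, U))   # True True
print(is_sup_min_transitive(mu, U))               # False: mu(a,c) < min(0.8, 0.7)
print(alpha_cut(mu, 0.75))   # diagonal pairs together with ('a','b') and ('b','a')
```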

2.3 Bipolar fuzzy sets, bipolar fuzzy relations and some related notions

The bipolar fuzzy set is an extension of Zadeh's fuzzy set. Bipolar fuzzy models deliver more accuracy, flexibility and comparability to the system than classical and fuzzy models do.

Definition 2.11

(Samanta and Pal 2014) Let \(\mathcal {U}\) be a finite non-empty universe. A bipolar fuzzy set \(\lambda \) over \(\mathcal {U}\) is defined as

$$\begin{aligned}\lambda = \{ (x, \lambda ^P (x), \lambda ^N (x)) : x\in \mathcal {U} \},\end{aligned}$$

where \(\lambda ^P : \mathcal {U} \longrightarrow [0, 1]\) and \(\lambda ^N : \mathcal {U} \longrightarrow [-1, 0]\) are mappings, which are called positive membership degree and negative membership degree, respectively.

The positive membership degree \(\lambda ^P (x)\) denotes the satisfaction degree of an element x to the property and the negative membership degree \(\lambda ^N (x)\) denotes the satisfaction degree of x to somewhat implicit counter-property.

In bipolar fuzzy sets \(\lambda \),

  1. (i)

    If \(\lambda ^P (x)\ne 0\) and \(\lambda ^N (x)= 0\), then x is regarded as having only positive satisfaction for \(\lambda \).

  2. (ii)

    If \(\lambda ^P (x)= 0\) and \(\lambda ^N (x)\ne 0\), then x does not satisfy the property of \(\lambda \) but somewhat satisfies the counter-property.

  3. (iii)

    It is possible for an element x to have \(\lambda ^P (x)\ne 0\) and \(\lambda ^N (x)\ne 0\) when the membership function of the property overlaps that of its counter property over some portion of \(\mathcal {U}\).

  4. (iv)

    If \(\lambda ^P (x)= 0\) and \(\lambda ^N (x)= 0\), then it is an indeterministic situation to investigate the property of x in \(\lambda \).

The collection of all bipolar fuzzy sets over the universe \(\mathcal {U}\) is represented by \(\mathcal {BF}(\mathcal {U})\).

Definition 2.12

(Samanta and Pal 2014) Let \(\lambda , \xi \in \mathcal {BF}(\mathcal {U})\). Then \(\lambda \) is said to be contained in \(\xi \), that is, \(\lambda \subseteq \xi \) if and only if \(\lambda ^P (x) \le \xi ^P (x)\) and \(\lambda ^N (x) \ge \xi ^N (x)\) for all \(x \in \mathcal {U}\).

Definition 2.13

(Samanta and Pal 2014) Let \(\lambda = \{ (x, \lambda ^P (x), \lambda ^N (x)) : x\in \mathcal {U} \}\) be a bipolar fuzzy set over the universe \(\mathcal {U}\), \(\alpha \in (0, 1]\)   and   \(\beta \in [-1, 0)\). Then we define \((\alpha , \beta )- cut\ level\ set\) of \(\lambda \) to be the crisp set

$$\begin{aligned}\lambda ^\alpha _\beta = \{x \in \mathcal {U} : \lambda ^P (x) \ge \alpha \ \text {and}\ \lambda ^N (x) \le \beta \}.\end{aligned}$$

Example 2.14

To illustrate the above definition, let us consider the bipolar fuzzy set

$$\begin{aligned} \lambda= & {} \big \{(x_1, 0.5, -0.4), (x_2, 0.6, -0.3), (x_3, 0.4, -0.8), (x_4, 0.7, -0.9),\\&(x_5, 1, -0.6), (x_6, 1, -0.2), (x_7, 0.3, -1), (x_8, 0, 0)\big \}.\end{aligned}$$

For \(\alpha = 0.6\) and \(\beta = -0.5\), the \((\alpha , \beta )- cut\ level\ set\) of \(\lambda \) is given as

$$\begin{aligned}\lambda ^\alpha _\beta = \{x_4, x_5\}.\end{aligned}$$
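The same computation can be written as a short Python sketch (the dictionary representation and the name cut_level_set are illustrative assumptions); it reproduces the cut level set of Example 2.14.

```python
# A minimal sketch: (alpha, beta)-cut level set of a bipolar fuzzy set stored
# as {x: (positive_degree, negative_degree)}.

def cut_level_set(bipolar_set, alpha, beta):
    """{x : lambda^P(x) >= alpha and lambda^N(x) <= beta}."""
    return {x for x, (pos, neg) in bipolar_set.items()
            if pos >= alpha and neg <= beta}

# The bipolar fuzzy set of Example 2.14.
lam = {'x1': (0.5, -0.4), 'x2': (0.6, -0.3), 'x3': (0.4, -0.8),
       'x4': (0.7, -0.9), 'x5': (1.0, -0.6), 'x6': (1.0, -0.2),
       'x7': (0.3, -1.0), 'x8': (0.0, 0.0)}
print(cut_level_set(lam, alpha=0.6, beta=-0.5))   # {'x4', 'x5'}
```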

Definition 2.15

(Yang et al. 2013) Let \(\mathcal {U}\) be a non-empty finite universe. Then a mapping

$$\begin{aligned}R = (\lambda ^P, \lambda ^N) : \mathcal {U} \times \mathcal {U} \longrightarrow [0, 1] \times [-1, 0]\end{aligned}$$

is said to be a bipolar fuzzy relation (or bipolar fuzzy binary relation) over \(\mathcal {U}\), where \(\lambda ^P (x, y) \in [0, 1]\) and \(\lambda ^N (x, y) \in [-1, 0]\) for all \(x, y \in \mathcal {U}\).

Remark 2.16

A bipolar fuzzy relation \(R = \big ( \lambda ^P (x, y), \lambda ^N (x, y) \big )\) over the universe \(\mathcal {U}\) is a special case of bipolar fuzzy set of the form \(R = \{ ((x, y), \lambda ^P (x, y), \lambda ^N (x, y)) : (x, y) \in \mathcal {U} \times \mathcal {U} \}\).

Definition 2.17

(Yang et al. 2013) A bipolar fuzzy relation R over the universe \(\mathcal {U}\) is said to be bipolar fuzzy reflexive relation if \(\lambda ^P (x, x) = 1\) and \(\lambda ^N (x, x) = -1\) for each \(x \in \mathcal {U}\).

Definition 2.18

(Yang et al. 2013) A bipolar fuzzy relation R over the universe \(\mathcal {U}\) is said to be bipolar fuzzy symmetric relation if \(\lambda ^P (x, y) = \lambda ^P (y, x)\) and \(\lambda ^N (x, y) = \lambda ^N (y, x)\) for all \(x, y \in \mathcal {U}\).

Definition 2.19

A bipolar fuzzy reflexive and bipolar fuzzy symmetric relation over \(\mathcal {U}\) is called a bipolar fuzzy tolerance relation (also called bipolar fuzzy proximity relation or bipolar fuzzy compatibility relation).

These bipolar fuzzy proximity relations can intuitively be interpreted as measures of ‘likeness’ or ‘sameness’ among the elements of \(\mathcal {U}\). When \(R = (\lambda ^P, \lambda ^N)\) is a bipolar fuzzy compatibility relation, compatibility classes are defined in terms of specified membership degrees \(\alpha \) and \(\beta \). An \((\alpha , \beta )\)-compatibility class (or \((\alpha , \beta )\)-cut) is a subset \(R_{(\alpha , \beta )}\) of \(\mathcal {U} \times \mathcal {U}\) defined by \(R_{(\alpha , \beta )} = \{ (x, y) \in \mathcal {U} \times \mathcal {U} : \lambda ^P(x, y) \ge \alpha \ \text {and}\ \lambda ^N(x, y) \le \beta \}\), where \(\alpha \in [0, 1]\) and \(\beta \in [-1, 0]\).

Definition 2.20

Let \(\mathcal {U}\) be a non-empty finite universe and R be a bipolar fuzzy relation over \(\mathcal {U}\). Then the pair

\(\mathbb {P} = (\mathcal {U}, R)\) is called a bipolar fuzzy approximation space (\(\mathcal {BFA}\)-space).

Definition 2.21

Let \(R = \big ( \lambda ^P (x, y), \lambda ^N (x, y) \big )\) be a bipolar fuzzy relation over \(\mathcal {U}\). Then for \(\alpha \in [0, 1]\), \(\beta \in [-1, 0]\), the crisp relation:

$$\begin{aligned}R_{(\alpha , \beta )} = \big \{ (x, y) \in \mathcal {U} \times \mathcal {U} : \lambda ^P (x, y) \ge \alpha \ \text {and}\ \lambda ^N (x, y) \le \beta \big \}\end{aligned}$$

is said to be the \((\alpha , \beta )\)-cut relation of R.

Definition 2.22

Let \(R = \big ( \lambda ^P (x, y), \lambda ^N (x, y) \big )\) be a bipolar fuzzy relation over \(\mathcal {U}\), where \(\mathcal {U} = \{x_1, x_2, \ldots , x_n \}\). By considering \(a_{ij} = \lambda ^P(x_i, x_j)\) and \(b_{ij} = \lambda ^N(x_i, x_j)\), \(i = 1, 2,\ldots , n\); \(j = 1, 2,\ldots , n\), the bipolar fuzzy relation R can be represented with the help of a pair of matrices given as

$$\begin{aligned} \lambda ^P = (a_{ij})_{n \times n}^P = \begin{bmatrix} a_{11} &{} a_{12} &{} \cdots &{} a_{1n}\\ a_{21} &{} a_{22} &{} \cdots &{} a_{2n} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ a_{n1} &{} a_{n2} &{} \cdots &{} a_{nn} \end{bmatrix}\quad \hbox { and }\quad \lambda ^N = (b_{ij})_{n \times n}^N = \begin{bmatrix} b_{11} &{} b_{12} &{} \cdots &{} b_{1n}\\ b_{21} &{} b_{22} &{} \cdots &{} b_{2n} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ b_{n1} &{} b_{n2} &{} \cdots &{} b_{nn} \end{bmatrix}. \end{aligned}$$

The matrices are called the positive membership matrix and the negative membership matrix, respectively.

Remark 2.23

  1. (i)

    Let \(R = \big ( \lambda ^P (x, y), \lambda ^N (x, y) \big )\) be a bipolar fuzzy relation over \(\mathcal {U}\), where \(\mathcal {U}\) is a finite universe. If \(\lambda ^P = (a_{ij})_{n \times n}^P\) and \(\lambda ^N = (b_{ij})_{n \times n}^N\), then bipolar fuzzy reflexivity implies that \(a_{ii} = 1\) and \(b_{ii} = -1\). As a result, we can observe the numbers on the principal diagonal of \(\lambda ^P\) and \(\lambda ^N\) to judge whether R is bipolar fuzzy reflexive or not.

  2. (ii)

    R is a bipolar fuzzy symmetric relation if and only if \(\lambda ^P\) and \(\lambda ^N\) are symmetric matrices.

Example 2.24

Let \(R = \big ( \lambda ^P (x, y), \lambda ^N (x, y) \big )\) be a bipolar fuzzy relation over \(\mathcal {U}\), where \(\mathcal {U} = \{x_1, x_2, x_3, x_4\}\) defined by the following pair of matrices:

(figure a: the pair of membership matrices \(\lambda ^P\) and \(\lambda ^N\))

Clearly \(\lambda ^P (x_i, x_i) = 1\) and \(\lambda ^N (x_i, x_i) = -1\); \(\forall i = 1, 2, 3, 4.\) Hence, \(R = \big ( \lambda ^P (x, y), \lambda ^N (x, y) \big )\) is a bipolar fuzzy reflexive relation over \(\mathcal {U}\).

Example 2.25

Let \(R = \big ( \lambda ^P (x, y), \lambda ^N (x, y) \big )\) be a bipolar fuzzy relation over \(\mathcal {U}\), where \(\mathcal {U} = \{x_1, x_2, x_3\}\) defined by the following pair of matrices:

(figure b: the pair of membership matrices \(\lambda ^P\) and \(\lambda ^N\))

Clearly both \(\lambda ^P\) and \(\lambda ^N\) are symmetric matrices, so \(R = \big ( \lambda ^P (x, y), \lambda ^N (x, y) \big )\) is a bipolar fuzzy symmetric relation over \(\mathcal {U}\).
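A minimal Python sketch of this matrix view, covering the \((\alpha , \beta )\)-cut relation of Definition 2.21 and the checks of Remark 2.23, is given below; the nested-list representation, the 0-based indices and all names are assumptions made only for illustration, and the matrix values used are hypothetical.

```python
# A minimal sketch: a bipolar fuzzy relation R given by its positive membership
# matrix P (entries in [0, 1]) and negative membership matrix N (entries in
# [-1, 0]), both n x n nested lists.

def is_bipolar_reflexive(P, N):
    """Remark 2.23(i): every diagonal entry is 1 in P and -1 in N."""
    return all(P[i][i] == 1 and N[i][i] == -1 for i in range(len(P)))

def is_bipolar_symmetric(P, N):
    """Remark 2.23(ii): both P and N are symmetric matrices."""
    n = len(P)
    return all(P[i][j] == P[j][i] and N[i][j] == N[j][i]
               for i in range(n) for j in range(n))

def cut_relation(P, N, alpha, beta):
    """Definition 2.21: {(i, j) : P[i][j] >= alpha and N[i][j] <= beta}."""
    n = len(P)
    return {(i, j) for i in range(n) for j in range(n)
            if P[i][j] >= alpha and N[i][j] <= beta}

# Usage on a small bipolar fuzzy tolerance relation (hypothetical values):
P = [[1.0, 0.8], [0.8, 1.0]]
N = [[-1.0, -0.4], [-0.4, -1.0]]
print(is_bipolar_reflexive(P, N), is_bipolar_symmetric(P, N))   # True True
print(cut_relation(P, N, alpha=0.7, beta=-0.3))   # {(0, 0), (0, 1), (1, 0), (1, 1)}
```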

Lemma 2.26

A bipolar fuzzy relation \(R = \big ( \lambda ^P (x, y), \lambda ^N (x, y) \big )\) over the universe \(\mathcal {U}\) is reflexive if and only if \(\forall \ \alpha \in [0, 1]\ \text {and}\ \beta \in [-1, 0]\), \(R_{(\alpha , \beta )}\) is a reflexive relation.

Proof

Suppose \(R = \big ( \lambda ^P (x, y), \lambda ^N (x, y) \big )\) is a bipolar fuzzy reflexive relation over \(\mathcal {U}\) and consider

$$\begin{aligned}R_{(\alpha , \beta )} = \big \{ (x, y) \in \mathcal {U} \times \mathcal {U} : \lambda ^P (x, y) \ge \alpha \ \quad \text {and} \quad \ \lambda ^N (x, y) \le \beta \big \}.\end{aligned}$$

As R is a bipolar fuzzy reflexive relation, so \(\forall \ \alpha \in [0, 1]\ \text {and}\ \beta \in [-1, 0]\),

$$\begin{aligned}\lambda ^P (x, x) = 1 \ge \alpha \ \quad \text {and} \quad \ \lambda ^N (x, x) = -1 \le \beta .\end{aligned}$$

This implies that \((x, x) \in R_{(\alpha , \beta )}\) and hence \(R_{(\alpha , \beta )}\) is reflexive.

Conversely, assume that each \(R_{(\alpha , \beta )}\) is reflexive \(\forall \ \alpha \in [0, 1]\ \text {and}\ \beta \in [-1, 0]\). Suppose that \(R = \big ( \lambda ^P (x, y), \lambda ^N (x, y) \big )\) is not a bipolar fuzzy reflexive relation over \(\mathcal {U}\). Then \(\exists \)\(x \in \mathcal {U}\) such that

$$\begin{aligned}\lambda ^P (x, x) \ne 1 \ \quad \text {or} \quad \ \lambda ^N (x, x) \ne -1 .\end{aligned}$$

Taking \(\alpha = 1\) and \(\beta = -1\), \((x, x) \notin R_{(\alpha , \beta )}\). This implies \(R_{(\alpha , \beta )}\) is not reflexive, which is a contradiction. Hence, R is a bipolar fuzzy reflexive relation. \(\square \)

Lemma 2.27

A bipolar fuzzy relation \(R = \big ( \lambda ^P (x, y), \lambda ^N (x, y) \big )\) over the universe \(\mathcal {U}\) is symmetric if and only if \(\forall \ \alpha \in [0, 1]\ \text {and}\ \beta \in [-1, 0]\), \(R_{(\alpha , \beta )}\) is a symmetric relation.

Proof

Suppose \(R = \big ( \lambda ^P (x, y), \lambda ^N (x, y) \big )\) is a bipolar fuzzy symmetric relation over \(\mathcal {U}\) and consider

$$\begin{aligned}R_{(\alpha , \beta )} = \big \{ (x, y) \in \mathcal {U} \times \mathcal {U} : \lambda ^P (x, y) \ge \alpha \ \quad \text {and}\quad \ \lambda ^N (x, y) \le \beta \big \}.\end{aligned}$$

Let \((x, y) \in R_{(\alpha , \beta )}\). Then \(\lambda ^P (x, y) \ge \alpha \) and \(\lambda ^N (x, y) \le \beta \).

As R is a bipolar fuzzy symmetric relation, so

$$\begin{aligned}\lambda ^P (x, y) = \lambda ^P (y, x) \ \quad \text {and} \quad \ \lambda ^N (x, y) = \lambda ^N (y, x).\end{aligned}$$

This implies that

$$\begin{aligned}\lambda ^P (y, x) = \lambda ^P (x, y) \ge \alpha \ \quad \text {and}\quad \ \lambda ^N (y, x) = \lambda ^N (x, y) \le \beta .\end{aligned}$$

This shows that \((y, x) \in R_{(\alpha , \beta )}\) and hence \(R_{(\alpha , \beta )}\) is symmetric.

Conversely, assume that each \(R_{(\alpha , \beta )}\) is symmetric \(\forall \ \alpha \in [0, 1]\ \text {and}\ \beta \in [-1, 0]\). For any \(x, y \in \mathcal {U}\), consider

$$\begin{aligned}\lambda ^P (x, y) = \alpha \ \quad \text {and}\quad \ \lambda ^N (x, y) = \beta .\end{aligned}$$

Then \((x, y) \in R_{(\alpha , \beta )}\). As \(R_{(\alpha , \beta )}\) is symmetric, so \((y, x) \in R_{(\alpha , \beta )}\). This implies that

$$\begin{aligned} \lambda ^P (y, x) \ge \alpha = \lambda ^P (x, y)\ \quad \text {and}\ \quad \lambda ^N (y, x) \le \beta = \lambda ^N (x, y). \end{aligned}$$
(2.3)

Now taking

$$\begin{aligned}\lambda ^P (y, x) = \alpha _1\ \quad \text {and}\ \quad \lambda ^N (y, x) = \beta _1.\end{aligned}$$

Then this implies \((y, x) \in R_{(\alpha _1, \beta _1)}\). As \(R_{(\alpha _1, \beta _1)}\) is symmetric, so \((x, y) \in R_{(\alpha _1, \beta _1)}\), that is,

$$\begin{aligned} \lambda ^P (x, y) \ge \alpha _1 = \lambda ^P (y, x)\quad \text {and}\quad \ \lambda ^N (x, y) \le \beta _1 = \lambda ^N (y, x). \end{aligned}$$
(2.4)

From (2.3) and (2.4), we have \(\lambda ^P (x, y) = \lambda ^P (y, x)\ \text {and}\ \lambda ^N (x, y) = \lambda ^N (y, x)\).

Hence, R is a bipolar fuzzy symmetric relation. \(\square \)

3 Bipolar fuzzified rough approximations for \((\alpha , \beta )\)-indiscernible objects

In this section, we introduce the notion of bipolar fuzzified rough approximations of \(X \subseteq \mathcal {U}\) on the basis of bipolar fuzzy approximation space \(\mathbb {P} = (\mathcal {U}, R)\). We introduce the notion of \((\alpha , \beta )\)-indiscernible objects. Also, we discuss some fundamental properties of bipolar fuzzified rough approximations.

Throughout this section, we will utilize R for a bipolar fuzzy tolerance relation over \(\mathcal {U}\), unless stated otherwise.

Definition 3.1

Let \(\mathbb {P} = (\mathcal {U}, R)\) be a bipolar fuzzy approximation space, where \(\mathcal {U}\) is a non-empty finite set of objects and R is a bipolar fuzzy tolerance relation characterized by its positive and negative membership functions given as \(\mu _R^+ : \mathcal {U} \times \mathcal {U} \longrightarrow [0, 1]\) and \(\mu _R^- : \mathcal {U} \times \mathcal {U} \longrightarrow [-1, 0]\). For any \(\alpha \in (0, 1]\) and \(\beta \in [-1, 0)\), the \(fuzzified\ lower\ \alpha -positive\), \(upper\ \alpha - positive\), \(lower\ \beta -negative\) and \(upper\ \beta - negative\)\(bipolar\ rough\ approximations\) of \(X \subseteq \mathcal {U}\) are defined as

$$\begin{aligned} \underline{R}_{\alpha }^+ (X)&= \big \{x \in \mathcal {U} : \mu _R^+ (x, y) < \alpha \ \text {for\ all}\ y \in X^c\big \},\\ \overline{R}_{\alpha }^+ (X)&= \big \{x \in \mathcal {U} : \mu _R^+ (x, y) \ge \alpha \ \text {for\ some}\ y \in X\big \},\\ \underline{R}_{\beta }^- (X)&= \big \{x \in \mathcal {U} : \mu _R^- (x, y) \le \beta \ \text {for\ some}\ y \in X\big \},\\ \overline{R}_{\beta }^ - (X)&= \big \{x \in \mathcal {U} : \mu _R^- (x, y) > \beta \ \text {for\ all}\ y \in X^c\big \}.\\ \end{aligned}$$

The knowledge regarding an element x of \(\mathcal {U}\) depicted by the above-defined operators is as follows:

  • \(\underline{R}_{\alpha }^+ (X)\) represents a crisp set that contains elements \(x \in \mathcal {U}\) equivalent to all elements \(y \in X^c\) with positive membership degree less than a certain \(\alpha \in [0, 1]\).

  • \(\overline{R}_{\alpha }^+ (X)\) represents a crisp set that contains elements \(x \in \mathcal {U}\) equivalent to at least one element \(y \in X\) with positive membership degree greater than or equal to a certain \(\alpha \in [0, 1]\).

  • \(\underline{R}_{\beta }^- (X)\) represents a crisp set that contains elements \(x \in \mathcal {U}\) equivalent to at least one element \(y \in X\) with negative membership degree less than or equal to a certain \(\beta \in [-1, 0]\).

  • \(\overline{R}_{\beta }^- (X)\) represents a crisp set that contains elements \(x \in \mathcal {U}\) equivalent to all elements \(y \in X^c\) with negative membership degree greater than a certain \(\beta \in [-1, 0]\).
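A minimal Python sketch of the four operators of Definition 3.1 is given below; the dictionary representation of \(\mu _R^+\) and \(\mu _R^-\) and all function names are illustrative assumptions, and the toy relation in the usage example is hypothetical.

```python
# A minimal sketch of the four fuzzified bipolar rough approximation operators
# of Definition 3.1. The bipolar fuzzy tolerance relation R is stored as two
# dicts over U x U: mu_pos (values in [0, 1]) and mu_neg (values in [-1, 0]).

def lower_pos(X, U, mu_pos, alpha):
    """Fuzzified lower alpha-positive: mu^+(x, y) < alpha for all y in X^c."""
    Xc = set(U) - set(X)
    return {x for x in U if all(mu_pos[(x, y)] < alpha for y in Xc)}

def upper_pos(X, U, mu_pos, alpha):
    """Fuzzified upper alpha-positive: mu^+(x, y) >= alpha for some y in X."""
    return {x for x in U if any(mu_pos[(x, y)] >= alpha for y in X)}

def lower_neg(X, U, mu_neg, beta):
    """Fuzzified lower beta-negative: mu^-(x, y) <= beta for some y in X."""
    return {x for x in U if any(mu_neg[(x, y)] <= beta for y in X)}

def upper_neg(X, U, mu_neg, beta):
    """Fuzzified upper beta-negative: mu^-(x, y) > beta for all y in X^c."""
    Xc = set(U) - set(X)
    return {x for x in U if all(mu_neg[(x, y)] > beta for y in Xc)}

# Usage on a toy tolerance relation over U = {a, b, c} (hypothetical values):
U = ['a', 'b', 'c']
mu_pos = {('a', 'a'): 1, ('b', 'b'): 1, ('c', 'c'): 1,
          ('a', 'b'): 0.9, ('b', 'a'): 0.9, ('a', 'c'): 0.3, ('c', 'a'): 0.3,
          ('b', 'c'): 0.4, ('c', 'b'): 0.4}
mu_neg = {('a', 'a'): -1, ('b', 'b'): -1, ('c', 'c'): -1,
          ('a', 'b'): -0.8, ('b', 'a'): -0.8, ('a', 'c'): -0.1, ('c', 'a'): -0.1,
          ('b', 'c'): -0.2, ('c', 'b'): -0.2}
X = {'a', 'b'}
print(lower_pos(X, U, mu_pos, 0.5))    # {'a', 'b'}
print(upper_pos(X, U, mu_pos, 0.5))    # {'a', 'b'}
print(lower_neg(X, U, mu_neg, -0.5))   # {'a', 'b'}
print(upper_neg(X, U, mu_neg, -0.5))   # {'a', 'b'}
```

With these operators, the pairs \(\underline{FB}_{\mathbb {P}} (X)\) and \(\overline{FB}_{\mathbb {P}} (X)\) of Definition 3.2 below are obtained directly as (lower_pos, lower_neg) and (upper_pos, upper_neg), respectively.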

Definition 3.2

Let \(\underline{R}_{\alpha }^+ (X)\), \(\overline{R}_{\alpha }^+ (X)\), \(\underline{R}_{\beta }^- (X)\) and \(\overline{R}_{\beta }^- (X)\) be \(fuzzified\ lower\ \alpha -positive\), \(upper\ \alpha - positive\), \(lower\ \beta -negative\) and \(upper\ \beta - negative\)\(bipolar\ rough\ approximations\) of X, respectively. Then the two pairs given as

$$\begin{aligned} \underline{FB}_{\mathbb {P}} (X)&= \big (\underline{R}_{\alpha }^+ (X), \underline{R}_{\beta }^- (X)\big ),\\ \overline{FB}_{\mathbb {P}} (X)&= \big (\overline{R}_{\alpha }^+ (X), \overline{R}_{\beta }^- (X)\big )\\ \end{aligned}$$

are called \((\alpha , \beta )\)-bipolar fuzzified rough approximations of X with respect to \(\mathcal {BFA}\)-space \(\mathbb {P} = (\mathcal {U}, R)\).

Definition 3.3

Let \(\underline{FB}_{\mathbb {P}} (X)\) and \(\overline{FB}_{\mathbb {P}} (X)\) be the \((\alpha , \beta )\)-bipolar fuzzified rough approximations of \(X \subseteq \mathcal {U}\). Then the sets,

  1. (i)

    \(\mathcal {F}POS_\mathbb {P} (X) = \big (\underline{R}_{\alpha }^+ (X), \overline{R}_{\beta }^- (X)\big ),\)

  2. (ii)

    \(\mathcal {F}BND_\mathbb {P} (X) = \overline{FB}_\mathbb {P} (X) - \underline{FB}_\mathbb {P} (X) = \big (\overline{R}_{\alpha }^+ (X) \setminus \underline{R}_{\alpha }^+ (X), \underline{R}_{\beta }^- (X)\setminus \overline{R}_{\beta }^- (X) \big ),\)

  3. (iii)

    \(\mathcal {F}NEG_\mathbb {P} (X) = \mathcal {U} - \overline{FB}_\mathbb {P} (X) = \bigg (\big ( \overline{R}_{\alpha }^+ (X) \big )^c, \big ( \underline{R}_{\beta }^- (X) \big )^c \bigg ),\)

are known as the \((\alpha , \beta )\)-bipolar fuzzified positive region, \((\alpha , \beta )\)-bipolar fuzzified boundary region and \((\alpha , \beta )\)-bipolar fuzzified negative region of X, respectively.

The knowledge about \(x \in X \subseteq \mathcal {U}\) depicted by these regions is as follows:

  • \(x \in \mathcal {F}POS_\mathbb {P} (X)\) means that x is certainly contained in \(\underline{R}_{\alpha }^+ (X)\) and \(\overline{R}_{\beta }^- (X)\).

  • \(x \in \mathcal {F}BND_\mathbb {P} (X)\) means that x may or may not be contained in \(\overline{R}_{\alpha }^+ (X) - \underline{R}_{\alpha }^+ (X)\) and \(\underline{R}_{\beta }^- (X) - \overline{R}_{\beta }^- (X)\).

  • \(x \in \mathcal {F}NEG_\mathbb {P} (X)\) means that x is definitely not contained in \(\underline{R}_{\alpha }^+ (X)\) and \(\overline{R}_{\beta }^- (X)\).
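Building on the operators sketched after Definition 3.1, the three regions of Definition 3.3 can be assembled as pairs of crisp sets; this is again only a sketch and assumes the hypothetical helpers lower_pos, upper_pos, lower_neg and upper_neg from that sketch are in scope.

```python
# A minimal sketch of the regions of Definition 3.3, reusing the four operators
# sketched after Definition 3.1 (illustrative names).

def positive_region(X, U, mu_pos, mu_neg, alpha, beta):
    """F-POS(X) = (lower alpha-positive, upper beta-negative)."""
    return lower_pos(X, U, mu_pos, alpha), upper_neg(X, U, mu_neg, beta)

def boundary_region(X, U, mu_pos, mu_neg, alpha, beta):
    """F-BND(X) = (upper_pos minus lower_pos, lower_neg minus upper_neg)."""
    return (upper_pos(X, U, mu_pos, alpha) - lower_pos(X, U, mu_pos, alpha),
            lower_neg(X, U, mu_neg, beta) - upper_neg(X, U, mu_neg, beta))

def negative_region(X, U, mu_pos, mu_neg, alpha, beta):
    """F-NEG(X) = (complement of upper_pos, complement of lower_neg)."""
    return (set(U) - upper_pos(X, U, mu_pos, alpha),
            set(U) - lower_neg(X, U, mu_neg, beta))
```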

Definition 3.4

If \(\alpha \in (0, 1]\), \(\beta \in [-1, 0)\) and R is a bipolar fuzzy relation over the universe \(\mathcal {U}\) characterized by its positive and negative membership functions given as \(\mu _R^+ : \mathcal {U} \times \mathcal {U} \longrightarrow [0, 1]\) and \(\mu _R^- : \mathcal {U} \times \mathcal {U} \longrightarrow [-1, 0]\), then the objects x and y in \(\mathcal {U}\) will be called \((\alpha , \beta )-indiscernible\) if

$$\begin{aligned}\mu _R^+ (x, y) \ge \alpha \quad \text {and}\quad \mu _R^- (x, y) \le \beta .\end{aligned}$$

Proposition 3.5

Let \(\mathbb {P} = (\mathcal {U}, R)\) be a bipolar fuzzy approximation space, \(X \subseteq \mathcal {U}\) and \(\alpha _1, \alpha _2 \in (0, 1]\) be such that \(\alpha _1 \le \alpha _2\). Then

  1. (i)

    \(\underline{R}_{\alpha _1}^+ (X) \subseteq \underline{R}_{\alpha _2}^+ (X)\);

  2. (ii)

    \(\overline{R}_{\alpha _2}^+ (X) \subseteq \overline{R}_{\alpha _1}^+ (X)\).

Proof

  1. (i)

    Let \(x \in \underline{R}_{\alpha _1}^+ (X)\). Then by Definition 3.1, we have \(\mu _R^+ (x, y) < \alpha _1\ \text {for\ all}\ y \in X^c\). But since \(\alpha _1 \le \alpha _2\), so we have \(\mu _R^+ (x, y) < \alpha _1 \le \alpha _2\ \text {for\ all}\ y \in X^c\), that is, \(\mu _R^+ (x, y) < \alpha _2\ \text {for\ all}\ y \in X^c\). This implies that \(x \in \underline{R}_{\alpha _2}^+ (X)\). Hence, \(\underline{R}_{\alpha _1}^+ (X) \subseteq \underline{R}_{\alpha _2}^+ (X)\).

  2. (ii)

    Let \(x \in \overline{R}_{\alpha _2}^+ (X)\). Then by Definition 3.1, we have \(\mu _R^+ (x, y) \ge \alpha _2\ \text {for\ some}\ y \in X\). But since \(\alpha _1 \le \alpha _2\), it follows that \(\mu _R^+ (x, y) \ge \alpha _2 \ge \alpha _1\ \text {for\ some}\ y \in X\), that is, \(\mu _R^+ (x, y) \ge \alpha _1\ \text {for\ some}\ y \in X\). This shows that \(x \in \overline{R}_{\alpha _1}^+ (X)\). Hence, \(\overline{R}_{\alpha _2}^+ (X) \subseteq \overline{R}_{\alpha _1}^+ (X)\).

\(\square \)

Proposition 3.6

Let \(\mathbb {P} = (\mathcal {U}, R)\) be a bipolar fuzzy approximation space, \(X \subseteq \mathcal {U}\) and \(\beta _1, \beta _2 \in [-1, 0)\) be such that \(\beta _1 \le \beta _2\). Then

  1. (i)

    \(\underline{R}_{\beta _1}^- (X) \subseteq \underline{R}_{\beta _2}^- (X)\);

  2. (ii)

    \(\overline{R}_{\beta _2}^- (X) \subseteq \overline{R}_{\beta _1}^- (X)\).

Proof

  1. 1.

    Let \(x \in \underline{R}_{\beta _1}^- (X)\). Then by Definition 3.1, we have \(\mu _R^- (x, y) \le \beta _1\ \text {for\ some}\ y \in X\). But since \(\beta _1 \le \beta _2\), so we have \(\mu _R^- (x, y) \le \beta _1 \le \beta _2\ \text {for\ some}\ y \in X\), that is, \(\mu _R^- (x, y) \le \beta _2\ \text {for\ some}\ y \in X\). This implies that \(x \in \underline{R}_{\beta _2}^- (X)\). Hence, \(\underline{R}_{\beta _1}^- (X) \subseteq \underline{R}_{\beta _2}^- (X)\).

  2. 2.

    Let \(x \in \overline{R}_{\beta _2}^- (X)\). Then by Definition 3.1, we have \(\mu _R^- (x, y) > \beta _2\ \text {for\ all}\ y \in X^c\). But since \(\beta _1 \le \beta _2\), it follows that \(\mu _R^- (x, y) > \beta _2 \ge \beta _1\ \text {for\ all}\ y \in X^c\), that is, \(\mu _R^- (x, y) > \beta _1\ \text {for\ all}\ y \in X^c\). This shows that \(x \in \overline{R}_{\beta _1}^- (X)\). Hence, \(\overline{R}_{\beta _2}^- (X) \subseteq \overline{R}_{\beta _1}^- (X)\).

\(\square \)

Example 3.7

To illustrate the above two propositions, let us consider a bipolar fuzzy tolerance relation \(R = \big ( \lambda ^P (x, y), \lambda ^N (x, y) \big )\) over \(\mathcal {U}\), where \(\mathcal {U} = \{u, v, w, x, y, z\}\), defined by the following pair of matrices:

(figure c: the pair of membership matrices \(\lambda ^P\) and \(\lambda ^N\))

Let \(X = \{u, v, x\} \subseteq \mathcal {U}\). Then \(X^c = \{w, y, z\}\). For \(\alpha _1 = 0.75\) and \(\alpha _2 = 0.9375\), we have

$$\begin{aligned} \underline{R}_{\alpha _1}^+ (X)&= \{u\},\\ \overline{R}_{\alpha _1}^+ (X)&=\{u, v, w, x, z\},\\ \underline{R}_{\alpha _2}^+ (X)&=\{u, v\},\\ \overline{R}_{\alpha _2}^+ (X)&=\{u, v, x, z\}. \end{aligned}$$

Clearly we can see that \(\underline{R}_{\alpha _1}^+ (X) \subseteq \underline{R}_{\alpha _2}^+ (X)\) and \(\overline{R}_{\alpha _2}^+ (X) \subseteq \overline{R}_{\alpha _1}^+ (X)\).

Similarly for \(\beta _1 = -0.75\) and \(\beta _2 = -0.25\), the fuzzified lower and upper \(\beta \)-negative approximations for \(\beta _1 \) and \(\beta _2\) of X using Definition 3.1 are

$$\begin{aligned} \underline{R}_{\beta _1}^- (X)&= \{u, v, x, z\},\\ \overline{R}_{\beta _1}^- (X)&=\{u\},\\ \underline{R}_{\beta _2}^- (X)&=\{u, v, w, x, y, z\},\\ \overline{R}_{\beta _2}^- (X)&=\{\}.\\ \end{aligned}$$

Clearly we can see that \(\underline{R}_{\beta _1}^- (X) \subseteq \underline{R}_{\beta _2}^- (X)\) and \(\overline{R}_{\beta _2}^- (X) \subseteq \overline{R}_{\beta _1}^- (X)\).

Proposition 3.8

Let \(\mathbb {P} = (\mathcal {U}, R)\) be a bipolar fuzzy approximation space and \(\alpha \in (0, 1]\). Then for any \(X,Y \subseteq \mathcal {U}\), we have

  1. (i)

    \(X \subseteq Y \Longrightarrow \underline{R}_{\alpha }^+ (X) \subseteq \underline{R}_{\alpha }^+ (Y)\);

  2. (ii)

    \(X \subseteq Y \Longrightarrow \overline{R}_{\alpha }^+ (X) \subseteq \overline{R}_{\alpha }^+ (Y)\).

Proof

  1. (i)

    Let \(x \in \underline{R}_{\alpha }^+ (X)\). Then by Definition 3.1, \(\mu _R^+ (x, y) < \alpha \ \text {for all}\ y\in X^c\). As \(X \subseteq Y\), \(Y^c \subseteq X^c\). Thus, in particular, \(\mu _R^+ (x, y) < \alpha \ \text {for all}\ y\in Y^c\). This implies that \(x \in \underline{R}_{\alpha }^+ (Y)\). Hence, \(\underline{R}_{\alpha }^+ (X) \subseteq \underline{R}_{\alpha }^+ (Y)\).

  2. (ii)

    Let \(x \in \overline{R}_{\alpha }^+ (X)\). Then by Definition 3.1, \(\mu _R^+ (x, y) \ge \alpha \ \text {for some}\ y\in X\). As \(X \subseteq Y\) and \(y \in X\), \(y \in Y\). Thus, it follows that \(\mu _R^+ (x, y) \ge \alpha \ \text {for some}\ y\in Y\), which implies that \(x \in \overline{R}_{\alpha }^+ (Y)\). Hence, \(\overline{R}_{\alpha }^+ (X) \subseteq \overline{R}_{\alpha }^+ (Y)\).

\(\square \)

Proposition 3.9

Let \(\mathbb {P} = (\mathcal {U}, R)\) be a bipolar fuzzy approximation space and \(\beta \in [-1, 0)\). Then for any \(X,Y \subseteq \mathcal {U}\), we have

  1. (i)

    \(X \subseteq Y \Longrightarrow \underline{R}_{\beta }^- (X) \subseteq \underline{R}_{\beta }^- (Y)\);

  2. (ii)

    \(X \subseteq Y \Longrightarrow \overline{R}_{\beta }^- (X) \subseteq \overline{R}_{\beta }^- (Y)\).

Proof

  1. (i)

    Let \(x \in \underline{R}_{\beta }^- (X)\). Then by Definition 3.1, \(\mu _R^- (x, y) \le \beta \ \text {for some}\ y\in X\). As \(X \subseteq Y\) and \(y \in X\), \(y \in Y\). Thus, it follows that \(\mu _R^- (x, y) \le \beta \ \text {for some}\ y\in X \subseteq Y\). This implies that \(x \in \underline{R}_{\beta }^- (Y)\). Hence, \(\underline{R}_{\beta }^- (X) \subseteq \underline{R}_{\beta }^- (Y)\).

  2. (ii)

    Let \(x \in \overline{R}_{\beta }^- (X)\). Then by Definition 3.1, \(\mu _R^- (x, y) > \beta \ \text {for all}\ y\in X^c\). As \(X \subseteq Y\) so, \(Y^c \subseteq X^c\). Thus, in particular, \(\mu _R^- (x, y) > \beta \ \text {for all}\ y\in Y^c\). This implies that \(x \in \overline{R}_{\beta }^- (Y)\). Hence, \(\overline{R}_{\beta }^- (X) \subseteq \overline{R}_{\beta }^- (Y)\).\(\square \)

Example 3.10

To illustrate Proposition 3.8 and Proposition 3.9, we revisit Example 3.7, where \(\mathcal {U} = \{u, v, w, x, y, z\}\). We already know that for \(X = \{u, v, x\} \subseteq \mathcal {U}\), the fuzzified lower and upper \(\alpha -positive\) approximations of X for \(\alpha = 0.9375\) are given as

$$\begin{aligned} \underline{R}_{\alpha }^+ (X)&= \{u, v\},\\ \overline{R}_{\alpha }^+ (X)&=\{u, v, x, z\}. \end{aligned}$$

Now, let \(Y = \{u, v, w, x\} \subseteq \mathcal {U}\). Then the fuzzified lower and upper \(\alpha -positive\) approximations of Y for \(\alpha = 0.9375\) are given as

$$\begin{aligned} \underline{R}_{\alpha }^+ (Y)&= \{u, v\},\\ \overline{R}_{\alpha }^+ (Y)&=\{u, v, w, x, y, z\}. \end{aligned}$$

Clearly we can see that \(\underline{R}_{\alpha }^+ (X) \subseteq \underline{R}_{\alpha }^+ (Y)\) and \(\overline{R}_{\alpha }^+ (X) \subseteq \overline{R}_{\alpha }^+ (Y)\).

Similarly, the fuzzified lower and upper \(\beta -negative\) approximations of X and Y for \(\beta = -0.75\) are given as

$$\begin{aligned} \underline{R}_{\beta }^- (X)&= \{u, v, x, z\},\\ \overline{R}_{\beta }^- (X)&=\{u\}.\\ \underline{R}_{\beta }^- (Y)&= \{u, v, w, x, y, z\},\\ \overline{R}_{\beta }^- (Y)&=\{u\}. \end{aligned}$$

Clearly we can see that \(\underline{R}_{\beta }^- (X) \subseteq \underline{R}_{\beta }^- (Y)\) and \(\overline{R}_{\beta }^- (X) \subseteq \overline{R}_{\beta }^- (Y)\).

Proposition 3.11

Let \(\mathbb {P} = (\mathcal {U}, R)\) be a bipolar fuzzy approximation space, \(X \subseteq \mathcal {U}\) and \(\alpha \in (0, 1]\). Then for any \(R_1, R_2 \subseteq R\), we have

  1. (i)

    \(R_1 \subseteq R_2 \Longrightarrow \underline{R_2}_{\alpha }^+ (X) \subseteq \underline{R_1}_{\alpha }^+ (X)\);

  2. (ii)

    \(R_1 \subseteq R_2 \Longrightarrow \overline{R_1}_{\alpha }^+ (X) \subseteq \overline{R_2}_{\alpha }^+ (X)\).

Proof

  1. (i)

    Let \(x \in \underline{R_2}_{\alpha }^+ (X)\). This implies that \(\mu _{R_2}^+ (x, y) < \alpha \ \text {for\ all}\ y \in X^c\). Since \(R_1 \subseteq R_2\) so, it follows that \(\mu _{R_1}^+ (x, y) \le \mu _{R_2}^+ (x, y) < \alpha \) for all \(y \in X^c\). Thus, we have \(x \in \underline{R_1}_{\alpha }^+ (X)\). Hence, \(\underline{R_2}_{\alpha }^+ (X) \subseteq \underline{R_1}_{\alpha }^+ (X).\)

  2. (ii)

    Suppose \(x \in \overline{R_1}_{\alpha }^+ (X) = \{x \in \mathcal {U} : \mu _{R_1}^+ (x, y) \ge \alpha \ \text {for\ some}\ y \in X\}\). As \(R_1 \subseteq R_2\) so, it follows that \(\mu _{R_1}^+ (x, y) \le \mu _{R_2}^+ (x, y)\). That is, \(\mu _{R_2}^+ (x, y) \ge \mu _{R_1}^+ (x, y) \ge \alpha \ \text {for\ some}\ y \in X\). This implies that \(\mu _{R_2}^+ (x, y) \ge \alpha \ \text {for\ some}\ y \in X\), so \(x \in \overline{R_2}_{\alpha }^+ (X)\). Hence, \(\overline{R_1}_{\alpha }^+ (X) \subseteq \overline{R_2}_{\alpha }^+ (X)\).

\(\square \)

Proposition 3.12

Let \(\mathbb {P} = (\mathcal {U}, R)\) be a bipolar fuzzy approximation space, \(X \subseteq \mathcal {U}\) and \(\beta \in [-1, 0)\). Then for any \(R_1, R_2 \subseteq R\), we have

  1. (i)

    \(R_1 \subseteq R_2 \Longrightarrow \underline{R_1}_{\beta }^- (X) \subseteq \underline{R_2}_{\beta }^- (X)\);

  2. (ii)

    \(R_1 \subseteq R_2 \Longrightarrow \overline{R_2}_{\beta }^- (X) \subseteq \overline{R_1}_{\beta }^- (X)\).

Proof

  1. (i)

    Suppose \(x \in \underline{R_1}_{\beta }^- (X) = \big \{x \in \mathcal {U} : \mu _{R_1}^- (x, y) \le \beta \ \text {for\ some}\ y \in X\big \}\). Since \(R_1 \subseteq R_2\) so, it follows that \(\mu _{R_1}^- (x, y) \ge \mu _{R_2}^- (x, y)\). That is, \(\mu _{R_2}^- (x, y) \le \mu _{R_1}^- (x, y) \le \beta \ \text {for\ some}\ y \in X\). This implies that \(\mu _{R_2}^- (x, y) \le \beta \ \text {for\ some}\ y \in X\), so \(x \in \underline{R_2}_{\beta }^- (X)\). Hence, \(\underline{R_1}_{\beta }^- (X) \subseteq \underline{R_2}_{\beta }^- (X)\).

  2. (ii)

    Assume that \(x \in \overline{R_2}_{\beta }^- (X)\). This implies that \(\mu _{R_2}^- (x, y) > \beta \ \text {for\ all}\ y \in X^c\). As \(R_1 \subseteq R_2\), it follows that \(\mu _{R_1}^- (x, y) \ge \mu _{R_2}^- (x, y)\), that is, \(\mu _{R_1}^- (x, y) \ge \mu _{R_2}^- (x, y) > \beta \ \text {for\ all}\ y \in X^c\). This implies that \(\mu _{R_1}^- (x, y) > \beta \ \text {for\ all}\ y \in X^c\), so \(x \in \overline{R_1}_{\beta }^- (X)\). Hence, \(\overline{R_2}_{\beta }^- (X) \subseteq \overline{R_1}_{\beta }^- (X)\).

\(\square \)

Theorem 3.13

Let \(\mathbb {P} = (\mathcal {U}, R)\) be a bipolar fuzzy approximation space and \(\alpha \in (0, 1]\). Then for \(X, Y \subseteq \mathcal {U}\), we have

  1. 1.

    \(\underline{R}_{\alpha }^+ (X) \subseteq X \subseteq \overline{R}_{\alpha }^+ (X)\);

  2. 2.

    \(\underline{R}_{\alpha }^+ (\emptyset ) = \emptyset = \overline{R}_{\alpha }^+ (\emptyset )\);

  3. 3.

    \(\underline{R}_{\alpha }^+ (\mathcal {U}) = \mathcal {U} = \overline{R}_{\alpha }^+ (\mathcal {U})\);

  4. 4.

    \(\underline{R}_{\alpha }^+ (X^c) = \big (\overline{R}_{\alpha }^+ (X)\big )^c\);

  5. 5.

    \(\overline{R}_{\alpha }^+ (X^c) = \big (\underline{R}_{\alpha }^+ (X)\big )^c\);

  6. 6.

    \(\underline{R}_{\alpha }^+ (X \cap Y) = \underline{R}_{\alpha }^+ (X) \cap \underline{R}_{\alpha }^+ (Y)\);

  7. 7.

    \(\underline{R}_{\alpha }^+ (X \cup Y) \supseteq \underline{R}_{\alpha }^+ (X) \cup \underline{R}_{\alpha }^+ (Y)\);

  8. 8.

    \(\overline{R}_{\alpha }^+ (X \cup Y) = \overline{R}_{\alpha }^+ (X) \cup \overline{R}_{\alpha }^+ (Y)\);

  9. 9.

    \(\overline{R}_{\alpha }^+ (X \cap Y) \subseteq \overline{R}_{\alpha }^+ (X) \cap \overline{R}_{\alpha }^+ (Y)\).

Proof

  1. 1.

    For the first inclusion, let \(x \in \underline{R}_{\alpha }^+ (X)\) and suppose \(x \notin X\), that is, \(x \in X^c\). Since R is reflexive, \(\mu _R^+ (x, x) = 1 \ge \alpha \), which contradicts \(\mu _R^+ (x, y) < \alpha \) for all \(y \in X^c\). Hence, \(\underline{R}_{\alpha }^+ (X) \subseteq X\). For the other inclusion, let \(x \in X \subseteq \mathcal {U}\). Then \(\mu _R^+ (x, x) = 1 \ge \alpha \) with \(x\in X\). This implies that \(x \in \overline{R}_{\alpha }^+ (X)\). Hence, \(\underline{R}_{\alpha }^+ (X) \subseteq X \subseteq \overline{R}_{\alpha }^+ (X)\), as required.

  2. 2.

    This is the direct consequence of the definitions of \(fuzzified\ lower\ \alpha -positive\) and \(fuzzified \ upper\ \alpha - positive\) approximations of X (as given in Definition 3.1).

  3. 3.

    This is similar to the proof of (1).

  4. 4.

    For any \(x \in \mathcal {U}\),

    $$\begin{aligned} x \in \underline{R}_{\alpha }^+ (X^c)&\Longleftrightarrow \mu _R^+ (x, y) < \alpha \ \text {for all}\ y\in (X^c)^c = X\\&\Longleftrightarrow \mu _R^+ (x, y) \ngeq \alpha \ \text {for any}\ y\in X\\&\Longleftrightarrow x \notin \overline{R}_{\alpha }^+ (X)\\&\Longleftrightarrow x \in \big (\overline{R}_{\alpha }^+ (X)\big )^c. \end{aligned}$$

    Hence, \(\underline{R}_{\alpha }^+ (X^c) = \big (\overline{R}_{\alpha }^+ (X)\big )^c\).

  5. 5.

    For any \(x \in \mathcal {U}\),

    $$\begin{aligned} x \in \overline{R}_{\alpha }^+ (X^c)&\Longleftrightarrow \mu _R^+ (x, y) \ge \alpha \ \text {for some}\ y\in X^c\\&\Longleftrightarrow \text {it is not the case that}\ \mu _R^+ (x, y) < \alpha \ \text {for all}\ y\in X^c\\&\Longleftrightarrow x \notin \underline{R}_{\alpha }^+ (X)\\&\Longleftrightarrow x \in \big (\underline{R}_{\alpha }^+ (X)\big )^c. \end{aligned}$$

    Hence, \(\overline{R}_{\alpha }^+ (X^c) = \big (\underline{R}_{\alpha }^+ (X)\big )^c\).

  6. 6.

    Since we know that \(X \cap Y \subseteq X\) and \(X \cap Y \subseteq Y\), so it follows from part (i) of Proposition 3.8 that \(\underline{R}_{\alpha }^+ (X \cap Y) \subseteq \underline{R}_{\alpha }^+ (X)\) and \(\underline{R}_{\alpha }^+ (X \cap Y) \subseteq \underline{R}_{\alpha }^+ (Y)\). So we get,

    $$\begin{aligned}\underline{R}_{\alpha }^+ (X \cap Y) \subseteq \underline{R}_{\alpha }^+ (X) \cap \underline{R}_{\alpha }^+ (Y).\end{aligned}$$

    For the reverse inclusion, suppose \(x \in \underline{R}_{\alpha }^+ (X) \cap \underline{R}_{\alpha }^+ (Y)\). Then \(x \in \underline{R}_{\alpha }^+ (X)\) and \(x \in \underline{R}_{\alpha }^+ (Y)\). It follows that \(\mu _R^+ (x, y) < \alpha \ \text {for all}\ y\in X^c\) and \(\mu _R^+ (x, z) < \alpha \ \text {for all}\ z\in Y^c\), which further implies that \(\mu _R^+ (x, w) < \alpha \ \text {for all}\ w \in X^c \cup Y^c = (X \cap Y)^c\) and thus we get \(x \in \underline{R}_{\alpha }^+ (X \cap Y)\), that is,

    $$\begin{aligned}\underline{R}_{\alpha }^+ (X) \cap \underline{R}_{\alpha }^+ (Y) \subseteq \underline{R}_{\alpha }^+ (X \cap Y).\end{aligned}$$

    Hence, \(\underline{R}_{\alpha }^+ (X \cap Y) = \underline{R}_{\alpha }^+ (X) \cap \underline{R}_{\alpha }^+ (Y)\).

  7. 7.

    As we know that \(X \subseteq X \cup Y\) and \(Y \subseteq X \cup Y\), so it follows from part (i) of Proposition 3.8 that \(\underline{R}_{\alpha }^+ (X) \subseteq \underline{R}_{\alpha }^+ (X \cup Y)\) and \(\underline{R}_{\alpha }^+ (Y) \subseteq \underline{R}_{\alpha }^+ (X \cup Y)\). Thus, it implies that

    $$\begin{aligned}\underline{R}_{\alpha }^+ (X) \cup \underline{R}_{\alpha }^+ (Y) \subseteq \underline{R}_{\alpha }^+ (X \cup Y).\end{aligned}$$
  8. 8.

    Since we know that \(X \subseteq X \cup Y\) and \(Y \subseteq X \cup Y\), by part (ii) of Proposition 3.8, it follows that \(\overline{R}_{\alpha }^+ (X) \subseteq \overline{R}_{\alpha }^+ (X \cup Y)\) and \(\overline{R}_{\alpha }^+ (Y) \subseteq \overline{R}_{\alpha }^+ (X \cup Y)\). Thus, it implies that

    $$\begin{aligned}\overline{R}_{\alpha }^+ (X) \cup \overline{R}_{\alpha }^+ (Y) \subseteq \overline{R}_{\alpha }^+ (X \cup Y).\end{aligned}$$

    For the reverse inclusion, assume that \(x \in \overline{R}_{\alpha }^+ (X \cup Y)\). Then by Definition 3.1, it follows that \(\mu _R^+ (x, y) \ge \alpha \ \text {for some}\ y\in X \cup Y\). This gives that \(\mu _R^+ (x, y) \ge \alpha \ \text {for some}\ y\in X\) or \(\mu _R^+ (x, y) \ge \alpha \ \text {for some}\ y\in Y\). This implies that \(x \in \overline{R}_{\alpha }^+ (X)\) or \(x \in \overline{R}_{\alpha }^+ (Y)\), and hence \(x \in \overline{R}_{\alpha }^+ (X) \cup \overline{R}_{\alpha }^+ (Y)\), that is,

    $$\begin{aligned}\overline{R}_{\alpha }^+ (X \cup Y) \subseteq \overline{R}_{\alpha }^+ (X) \cup \overline{R}_{\alpha }^+ (Y).\end{aligned}$$

    Hence, \(\overline{R}_{\alpha }^+ (X \cup Y) = \overline{R}_{\alpha }^+ (X) \cup \overline{R}_{\alpha }^+ (Y)\).

  9. 9.

    As we know that \(X \cap Y\subseteq X\) and \(X \cap Y \subseteq Y\), so by part (ii) of Proposition 3.8, it follows that \(\overline{R}_{\alpha }^+ (X \cap Y) \subseteq \overline{R}_{\alpha }^+ (X)\) and \(\overline{R}_{\alpha }^+ (X \cap Y) \subseteq \overline{R}_{\alpha }^+ (Y)\). Thus it implies that

    $$\begin{aligned}\overline{R}_{\alpha }^+ (X \cap Y) \subseteq \overline{R}_{\alpha }^+ (X) \cap \overline{R}_{\alpha }^+ (Y).\end{aligned}$$

\(\square \)

Theorem 3.14

Let \(\mathbb {P} = (\mathcal {U}, R)\) be a bipolar fuzzy approximation space and \(\beta \in [-1, 0)\). Then for \(X, Y \subseteq \mathcal {U}\), we have

  1. 1.

    \(\overline{R}_{\beta }^- (X) \subseteq X \subseteq \underline{R}_{\beta }^- (X)\);

  2. 2.

    \(\overline{R}_{\beta }^- (\emptyset ) = \emptyset = \underline{R}_{\beta }^- (\emptyset )\);

  3. 3.

    \(\overline{R}_{\beta }^- (\mathcal {U}) = \mathcal {U} = \underline{R}_{\beta }^- (\mathcal {U})\);

  4. 4.

    \(\underline{R}_{\beta }^- (X^c) = \big (\overline{R}_{\beta }^- (X)\big )^c\);

  5. 5.

    \(\overline{R}_{\beta }^- (X^c) = \big (\underline{R}_{\beta }^- (X)\big )^c\);

  6. 6.

    \(\underline{R}_{\beta }^- (X \cap Y) \subseteq \underline{R}_{\beta }^- (X) \cap \underline{R}_{\beta }^- (Y)\);

  7. 7.

    \(\underline{R}_{\beta }^- (X \cup Y) = \underline{R}_{\beta }^- (X) \cup \underline{R}_{\beta }^- (Y)\);

  8. 8.

    \(\overline{R}_{\beta }^- (X \cup Y) \supseteq \overline{R}_{\beta }^- (X) \cup \overline{R}_{\beta }^- (Y)\);

  9. 9.

    \(\overline{R}_{\beta }^- (X \cap Y) = \overline{R}_{\beta }^- (X) \cap \overline{R}_{\beta }^- (Y)\).

Proof

  1. 1.

    For the first inclusion, let \(x \in \overline{R}_{\beta }^- (X)\) and suppose \(x \notin X\), that is, \(x \in X^c\). Since R is reflexive, \(\mu _R^- (x, x) = -1 \le \beta \), which contradicts \(\mu _R^- (x, y) > \beta \) for all \(y \in X^c\). Hence, \(\overline{R}_{\beta }^- (X) \subseteq X\). For the other inclusion, let \(x \in X \subseteq \mathcal {U}\). Then \(\mu _R^- (x, x) = -1 \le \beta \) with \(x\in X\). This implies that \(x \in \underline{R}_{\beta }^- (X)\). Hence, \(\overline{R}_{\beta }^- (X) \subseteq X \subseteq \underline{R}_{\beta }^- (X)\), as required.

  2. 2.

    This is the direct consequence of the definition.

  3. 3.

    This is similar to the proof of (1).

  4. 4.

    For any \(x \in \mathcal {U}\),

    $$\begin{aligned} x \in \underline{R}_{\beta }^- (X^c)&\Longleftrightarrow \mu _R^- (x, y) \le \beta \ \text {for some}\ y\in X^c\\&\Longleftrightarrow \text {it is not the case that}\ \mu _R^- (x, y) > \beta \ \text {for all}\ y\in X^c\\&\Longleftrightarrow x \notin \overline{R}_{\beta }^- (X)\\&\Longleftrightarrow x \in \big (\overline{R}_{\beta }^- (X)\big )^c.\\ \end{aligned}$$

    Hence, \(\underline{R}_{\beta }^- (X^c) = \big (\overline{R}_{\beta }^- (X)\big )^c\).

  5. 5.

    For any \(x \in \mathcal {U}\),

    $$\begin{aligned} x \in \overline{R}_{\beta }^- (X^c)&\Longleftrightarrow \mu _R^- (x, y) > \beta \ \text {for all}\ y\in (X^c)^c = X\\&\Longleftrightarrow \mu _R^- (x, y) \le \beta \ \text {for no}\ y\in X\\&\Longleftrightarrow x \notin \underline{R}_{\beta }^- (X)\\&\Longleftrightarrow x \in \big (\underline{R}_{\beta }^- (X)\big )^c.\\ \end{aligned}$$

    Hence, \(\overline{R}_{\beta }^- (X^c) = \big (\underline{R}_{\beta }^- (X)\big )^c\).

  6. 6.

    Since \(X \cap Y \subseteq X\) and \(X \cap Y \subseteq Y\), it follows from part (i) of Proposition 3.9 that \(\underline{R}_{\beta }^- (X \cap Y) \subseteq \underline{R}_{\beta }^- (X)\) and \(\underline{R}_{\beta }^- (X \cap Y) \subseteq \underline{R}_{\beta }^- (Y)\). Hence,

    $$\begin{aligned}\underline{R}_{\beta }^- (X \cap Y) \subseteq \underline{R}_{\beta }^- (X) \cap \underline{R}_{\beta }^- (Y).\end{aligned}$$
  7. 7.

    Since \(X \subseteq X \cup Y\) and \(Y \subseteq X \cup Y\), by part (i) of Proposition 3.9 it follows that \(\underline{R}_{\beta }^- (X) \subseteq \underline{R}_{\beta }^- (X \cup Y)\) and \(\underline{R}_{\beta }^- (Y) \subseteq \underline{R}_{\beta }^- (X \cup Y)\). Thus,

    $$\begin{aligned}\underline{R}_{\beta }^- (X) \cup \underline{R}_{\beta }^- (Y) \subseteq \underline{R}_{\beta }^- (X \cup Y).\end{aligned}$$

    For the reverse inclusion, assume that \(x \in \underline{R}_{\beta }^- (X \cup Y)\). Then, by Definition 3.1, \(\mu _R^- (x, y) \le \beta \ \text {for some}\ y\in X \cup Y\); hence \(\mu _R^- (x, y) \le \beta \ \text {for some}\ y\in X\) or \(\mu _R^- (x, y) \le \beta \ \text {for some}\ y\in Y\). This implies that \(x \in \underline{R}_{\beta }^- (X)\) or \(x \in \underline{R}_{\beta }^- (Y)\), and therefore \(x \in \underline{R}_{\beta }^- (X) \cup \underline{R}_{\beta }^- (Y)\), that is,

    $$\begin{aligned}\underline{R}_{\beta }^- (X \cup Y) \subseteq \underline{R}_{\beta }^- (X) \cup \underline{R}_{\beta }^- (Y).\end{aligned}$$

    Hence, \(\underline{R}_{\beta }^- (X \cup Y) = \underline{R}_{\beta }^- (X) \cup \underline{R}_{\beta }^- (Y)\).

  8. 8.

    Since \(X \subseteq X \cup Y\) and \(Y \subseteq X \cup Y\), by part (ii) of Proposition 3.9 it follows that \(\overline{R}_{\beta }^- (X) \subseteq \overline{R}_{\beta }^- (X \cup Y)\) and \(\overline{R}_{\beta }^- (Y) \subseteq \overline{R}_{\beta }^- (X \cup Y)\). Thus,

    $$\begin{aligned}\overline{R}_{\beta }^- (X) \cup \overline{R}_{\beta }^- (Y) \subseteq \overline{R}_{\beta }^- (X \cup Y).\end{aligned}$$
  9. 9.

    Since \(X \cap Y \subseteq X\) and \(X \cap Y \subseteq Y\), it follows from part (ii) of Proposition 3.9 that \(\overline{R}_{\beta }^- (X \cap Y) \subseteq \overline{R}_{\beta }^- (X)\) and \(\overline{R}_{\beta }^- (X \cap Y) \subseteq \overline{R}_{\beta }^- (Y)\). Hence,

    $$\begin{aligned}\overline{R}_{\beta }^- (X \cap Y) \subseteq \overline{R}_{\beta }^- (X) \cap \overline{R}_{\beta }^- (Y).\end{aligned}$$

    For the reverse inclusion, suppose \(x \in \overline{R}_{\beta }^- (X) \cap \overline{R}_{\beta }^- (Y)\). Then \(x \in \overline{R}_{\beta }^- (X)\) and \(x \in \overline{R}_{\beta }^- (Y)\). It follows that \(\mu _R^- (x, y) > \beta \ \text {for all}\ y\in X^c\) and \(\mu _R^- (x, z) > \beta \ \text {for all}\ z\in Y^c\), which implies that \(\mu _R^- (x, w) > \beta \ \text {for all}\ w \in X^c \cup Y^c = (X \cap Y)^c\). Thus, we get \(x \in \overline{R}_{\beta }^- (X \cap Y)\), that is,

    $$\begin{aligned}\overline{R}_{\beta }^- (X) \cap \overline{R}_{\beta }^- (Y) \subseteq \overline{R}_{\beta }^- (X \cap Y).\end{aligned}$$

    Hence, \(\overline{R}_{\beta }^- (X \cap Y) = \overline{R}_{\beta }^- (X) \cap \overline{R}_{\beta }^- (Y)\).

\(\square \)
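The \(\beta \)-negative operators of Theorem 3.14 can be checked in the same spirit. In the sketch below the two operators are encoded exactly as they are used in the proofs of parts (4), (5), (7) and (9): the lower approximation collects the elements having some \(\beta \)-related neighbour inside the set, and the upper approximation is its dual. The negative membership values are again illustrative and are not those of Example 3.7.

```python
# Illustrative check of Theorem 3.14; the negative membership values are
# made up for demonstration and are NOT those of Example 3.7.

U = ["u", "v", "w", "x"]
mu_neg = {  # mu_R^-(a, b), assumed symmetric with mu_R^-(a, a) = -1
    ("u", "u"): -1.0, ("u", "v"): -0.8, ("u", "w"): -0.3, ("u", "x"): -0.1,
    ("v", "v"): -1.0, ("v", "w"): -0.9, ("v", "x"): -0.2,
    ("w", "w"): -1.0, ("w", "x"): -0.6,
    ("x", "x"): -1.0,
}

def mun(a, b):
    return mu_neg.get((a, b), mu_neg.get((b, a)))

def lower_neg(X, beta):
    # x is in the lower beta-negative approximation if mu_R^-(x, y) <= beta
    # for some y in X (the condition used in part 7 of the proof).
    return {x for x in U if any(mun(x, y) <= beta for y in X)}

def upper_neg(X, beta):
    # x is in the upper beta-negative approximation if mu_R^-(x, y) > beta
    # for all y outside X (the condition used in parts 5 and 9 of the proof).
    return {x for x in U if all(mun(x, y) > beta for y in U if y not in X)}

def comp(Z):
    return set(U) - Z

beta = -0.7
X, Y = {"u", "v"}, {"v", "w"}

# Parts (4) and (5): duality between the two operators.
assert lower_neg(comp(X), beta) == comp(upper_neg(X, beta))
assert upper_neg(comp(X), beta) == comp(lower_neg(X, beta))
# Part (7): the lower approximation distributes over unions.
assert lower_neg(X | Y, beta) == lower_neg(X, beta) | lower_neg(Y, beta)
# Part (9): the upper approximation distributes over intersections.
assert upper_neg(X & Y, beta) == upper_neg(X, beta) & upper_neg(Y, beta)
```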

Remark 3.15

If \(\alpha = 1\) and \(\beta = -1\), then \(\underline{R}_{\alpha }^+ (X) = X = \overline{R}_{\alpha }^+ (X)\) and \(\overline{R}_{\beta }^- (X) = X = \underline{R}_{\beta }^- (X)\).

Example 3.16

It is noted that the inclusions in parts (7) and (9) of Theorem 3.13 and in parts (6) and (8) of Theorem 3.14 may be strict; this can be verified using the bipolar fuzzy relation of Example 3.7, where \(\mathcal {U} = \{u, v, w, x, y, z\}\). Let \(X, Y \subseteq \mathcal {U}\) be such that \(X = \{u, v, w\}\) and \(Y = \{v, w, x\}\). Then \(X \cup Y = \{u, v, w, x\}\) and \(X \cap Y = \{v, w\}\). For \(\alpha = 0.625\), the fuzzified lower and upper \(\alpha \)-positive approximations of \(X, Y, X \cup Y\ \text {and}\ X \cap Y\) are given as

$$\begin{aligned} \underline{R}_{\alpha }^+ (X)&= \{\},\\ \overline{R}_{\alpha }^+ (X)&=\{u, v, w, x, y, z\},\\ \underline{R}_{\alpha }^+ (Y)&=\{\},\\ \overline{R}_{\alpha }^+ (Y)&=\{u, v, w, x, y, z\},\\ \underline{R}_{\alpha }^+ (X \cup Y)&= \{u\},\\ \overline{R}_{\alpha }^+ (X \cup Y)&=\{u, v, w, x, y, z\},\\ \underline{R}_{\alpha }^+ (X \cap Y)&=\{\},\\ \overline{R}_{\alpha }^+ (X \cap Y)&=\{v, w, y, z\}.\\ \end{aligned}$$

Clearly, \(\underline{R}_{\alpha }^+ (X) \cup \underline{R}_{\alpha }^+ (Y) = \{\}\) and \(\underline{R}_{\alpha }^+ (X \cup Y) = \{u\}\). So \(\underline{R}_{\alpha }^+ (X) \cup \underline{R}_{\alpha }^+ (Y) \subset \underline{R}_{\alpha }^+ (X \cup Y)\), which shows that the inclusion in part (7) of Theorem 3.13 may hold strictly.

Similarly, \(\overline{R}_{\alpha }^+ (X) \cap \overline{R}_{\alpha }^+ (Y) = \{u, v, w, x, y, z\}\) and \(\overline{R}_{\alpha }^+ (X \cap Y) = \{v, w, y, z\}\). So we have \(\overline{R}_{\alpha }^+ (X \cap Y) \subset \overline{R}_{\alpha }^+ (X) \cap \overline{R}_{\alpha }^+ (Y)\), which shows that the inclusion in part (9) of Theorem 3.13 may be strict.

Now, for \(\beta = -0.7\), the fuzzified lower and upper \(\beta \)-negative approximations of \(X, Y, X \cup Y\ \text {and}\ X \cap Y\) are given as

$$\begin{aligned} \underline{R}_{\beta }^- (X)&= \{u, v, w, x, y, z\},\\ \overline{R}_{\beta }^- (X)&=\{\},\\ \underline{R}_{\beta }^- (Y)&=\{u, v, w, x, y, z\},\\ \overline{R}_{\beta }^- (Y)&=\{\},\\ \underline{R}_{\beta }^- (X \cup Y)&= \{u, v, w, x, y, z\},\\ \overline{R}_{\beta }^- (X \cup Y)&=\{u\},\\ \underline{R}_{\beta }^- (X \cap Y)&=\{u, v, w, x, z\},\\ \overline{R}_{\beta }^- (X \cap Y)&=\{\}.\\ \end{aligned}$$

Clearly we can see that \(\underline{R}_{\beta }^- (X \cap Y) =\{u, v, w, x, z\}\) and \(\underline{R}_{\beta }^- (X) \cap \underline{R}_{\beta }^- (Y) =\{u, v, w, x, y, z\}\). So we have \(\underline{R}_{\beta }^- (X \cap Y) \subset \underline{R}_{\beta }^- (X) \cap \underline{R}_{\beta }^- (Y)\), which shows that the inclusion in part (6) of Theorem 3.14 might be strict.

Also we can see that \(\overline{R}_{\beta }^- (X) \cup \overline{R}_{\beta }^- (Y)=\{\}\) and \(\overline{R}_{\beta }^- (X \cup Y) = \{u\}\). So \(\overline{R}_{\beta }^- (X) \cup \overline{R}_{\beta }^- (Y) \subset \overline{R}_{\beta }^- (X \cup Y)\), which shows that the inclusion in part (8) of Theorem 3.14 may hold strictly.
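The strictness claims can also be confirmed mechanically by transcribing the approximation sets listed above; the short check below simply restates those sets and asserts the four strict inclusions.

```python
# The approximation sets listed in Example 3.16, transcribed directly;
# the asserts confirm that the four inclusions are indeed strict.
U = {"u", "v", "w", "x", "y", "z"}

low_pos = {"X": set(), "Y": set(), "XuY": {"u"}, "XnY": set()}
upp_pos = {"X": U, "Y": U, "XuY": U, "XnY": {"v", "w", "y", "z"}}
low_neg = {"X": U, "Y": U, "XuY": U, "XnY": {"u", "v", "w", "x", "z"}}
upp_neg = {"X": set(), "Y": set(), "XuY": {"u"}, "XnY": set()}

# Theorem 3.13 (7): strict inclusion of the union of lower approximations.
assert low_pos["X"] | low_pos["Y"] < low_pos["XuY"]
# Theorem 3.13 (9): strict inclusion of the upper approximation of X ∩ Y.
assert upp_pos["XnY"] < upp_pos["X"] & upp_pos["Y"]
# Theorem 3.14 (6): strict inclusion of the lower approximation of X ∩ Y.
assert low_neg["XnY"] < low_neg["X"] & low_neg["Y"]
# Theorem 3.14 (8): strict inclusion of the union of upper approximations.
assert upp_neg["X"] | upp_neg["Y"] < upp_neg["XuY"]
```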

4 Accuracy and roughness measures for \((\alpha , \beta )\)-bipolar fuzzified rough sets

To express the quality of an approximation, we introduce accuracy measures. In Pawlak (1982), Pawlak proposed two numerical measures for characterizing the imprecision of rough set approximations; they indicate how accurately the information associated with an equivalence relation describes a given classification.

A significant use of the \((\alpha , \beta )\)-bipolar fuzzified rough approximations of bipolar fuzzy sets is that they provide a scheme for analyzing how precisely the positive and negative membership functions of a bipolar fuzzy relation R describe the objects. In this section, we introduce the accuracy measure (degree of accuracy) and the roughness measure (degree of roughness) for \((\alpha , \beta )\)-bipolar fuzzified rough sets and investigate some of their basic properties.

According to Pawlak (1982), the accuracy measure \(\eta _{\mathfrak {R}} (X)\) of a subset X of the universe \(\mathcal {U}\) with respect to a Pawlak approximation space \(P = (\mathcal {U}, \mathfrak {R})\) is the ratio of the cardinality of the lower approximation \(\mathfrak {R}_*(X)\) to the cardinality of the upper approximation \(\mathfrak {R}^*(X)\), that is,

$$\begin{aligned}\eta _{\mathfrak {R}} (X) = \frac{|\mathfrak {R}_* (X)|}{|\mathfrak {R}^* (X)|}.\end{aligned}$$

Similarly, on the basis of the accuracy measure, the roughness measure (degree of roughness) is defined as

$$\begin{aligned}\rho _{\mathfrak {R}} (X) = 1 - \eta _{\mathfrak {R}} (X).\end{aligned}$$
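For a classical Pawlak approximation space these two measures are computed directly from the equivalence classes of \(\mathfrak {R}\). The short Python sketch below illustrates the computation on a small, purely illustrative partition, which is not taken from any example in this paper.

```python
# Accuracy and roughness in a classical Pawlak approximation space.
# The universe and the equivalence classes below are illustrative only.

U = {1, 2, 3, 4, 5, 6}
classes = [{1, 2}, {3, 4, 5}, {6}]   # equivalence classes of the relation

def lower(X):
    # union of the classes entirely contained in X
    return {x for c in classes if c <= X for x in c}

def upper(X):
    # union of the classes that meet X
    return {x for c in classes if c & X for x in c}

def accuracy(X):
    # ratio of the cardinalities, for non-empty X (so upper(X) is non-empty)
    return len(lower(X)) / len(upper(X))

X = {1, 2, 3}
eta = accuracy(X)   # |{1, 2}| / |{1, 2, 3, 4, 5}| = 2 / 5 = 0.4
rho = 1 - eta       # roughness = 0.6
print(eta, rho)
```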

Following a similar procedure, we have the following.

Definition 4.1

Let \(\lambda = \{ (x, \lambda ^P (x), \lambda ^N (x)) : x\in \mathcal {U} \}\) be a bipolar fuzzy set over the universe \(\mathcal {U}\) and \(\mathbb {P} = (\mathcal {U}, R)\) be a bipolar fuzzy approximation space. Then, for any non-empty subset X of \(\mathcal {U}\), the measure of accuracy of the \((\alpha , \beta )\)-bipolar fuzzified rough set with respect to X, denoted by \(\mathbb {MA}(X)\), is defined as the ordered pair

$$\begin{aligned}\mathbb {MA}(X) = \big (\mathfrak {X}^{\alpha ^+}, \mathfrak {X}^{\beta ^-}\big ),\end{aligned}$$

where \(\mathfrak {X}^{\alpha ^+} = \frac{|\underline{R}_{\alpha }^+ (X)|}{|\overline{R}_{\alpha }^+ (X)|}\) and \(\mathfrak {X}^{\beta ^-} = \frac{|\overline{R}_{\beta }^- (X)|}{|\underline{R}_{\beta }^- (X)|}.\)

Here \(|\underline{R}_{\alpha }^+ (X)|\), \(|\overline{R}_{\alpha }^+ (X)|\), \(|\overline{R}_{\beta }^- (X)|\) and \(|\underline{R}_{\beta }^- (X)|\) denote the cardinalities of the sets \(\underline{R}_{\alpha }^+ (X)\), \(\overline{R}_{\alpha }^+ (X)\), \(\overline{R}_{\beta }^- (X)\) and \(\underline{R}_{\beta }^- (X)\), respectively.

Similarly, the measure of roughness of the \((\alpha , \beta )\)-bipolar fuzzified rough set with respect to X, denoted by \(\mathbb {MR}(X)\), is defined as

$$\begin{aligned}\mathbb {MR}(X) = (1, 1) - \mathbb {MA}(X) = (1, 1) - \big (\mathfrak {X}^{\alpha ^+}, \mathfrak {X}^{\beta ^-}\big ).\end{aligned}$$

From the above definition, we conclude the following.

  1. (i)

    \(0 \le \mathfrak {X}^{\alpha ^+} \le 1\) and \(0 \le \mathfrak {X}^{\beta ^-} \le 1\) for any subset X of \(\mathcal {U}\).

  2. (ii)

    Conventionally, for the empty set \(\emptyset \), we define \(\mathbb {MA}(\emptyset ) = (1, 1)\) and \(\mathbb {MR}(\emptyset ) = (0, 0)\).

  3. (iii)

    Also it can be seen that \(\mathbb {MA}(X) = (1, 1)\ \text {if\ and\ only\ if}\ X = \mathcal {U}\), because \(\underline{R}_{\alpha }^+ (\mathcal {U}) = \mathcal {U} = \overline{R}_{\alpha }^+ (\mathcal {U})\) and \(\overline{R}_{\beta }^- (\mathcal {U}) = \mathcal {U} = \underline{R}_{\beta }^- (\mathcal {U})\).

  4. (iv)

    When \(\alpha = 1\) and \(\beta = -1\), then for any \(X \subseteq \mathcal {U}\), \(\mathbb {MA}(X) = (1, 1)\) and \(\mathbb {MR}(X) = (0, 0)\).

Example 4.2

To illustrate the above definition, we revisit Example 3.7, where \(\mathcal {U} = \{u, v, w, x, y, z\}\). We already know that for \(X = \{u, v, x\} \subseteq \mathcal {U}\), the \((\alpha , \beta )\)-bipolar fuzzified rough approximations of X for \(\alpha = 0.75\) and \(\beta = -0.75\) are given as

$$\begin{aligned} \underline{R}_{\alpha }^+ (X)&= \{u\},\\ \overline{R}_{\alpha }^+ (X)&=\{u, v, w, x, z\},\\ \underline{R}_{\beta }^- (X)&=\{u, v, x, z\},\\ \overline{R}_{\beta }^- (X)&=\{u\}.\\ \end{aligned}$$

Therefore, the measures of accuracy and roughness of the \((\alpha , \beta )\)-bipolar fuzzified rough set with respect to X are, respectively, given as

$$\begin{aligned}\mathbb {MA}(X) = (1/5, 1/4) = (0.2, 0.25), \\ \mathbb {MR}(X) = (1, 1) - (0.2, 0.25) = (0.8, 0.75). \end{aligned}$$

Hence, the positive membership function \(\mu _R^+\) of R describes the objects of the universe \(\mathcal {U}\) accurately up to degree 0.2, and the negative membership function \(\mu _R^-\) of R describes them accurately up to degree 0.25.
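The arithmetic of this example can be reproduced directly from the four approximation sets; the following minimal sketch implements Definition 4.1 and recovers the values above.

```python
# Measures of accuracy and roughness from Definition 4.1, applied to the
# approximation sets computed in Example 4.2.

low_pos = {"u"}                       # lower alpha-positive approximation of X
upp_pos = {"u", "v", "w", "x", "z"}   # upper alpha-positive approximation of X
low_neg = {"u", "v", "x", "z"}        # lower beta-negative approximation of X
upp_neg = {"u"}                       # upper beta-negative approximation of X

def measure_accuracy(low_pos, upp_pos, low_neg, upp_neg):
    # MA(X) = (|lower alpha-positive| / |upper alpha-positive|,
    #          |upper beta-negative| / |lower beta-negative|)
    return (len(low_pos) / len(upp_pos), len(upp_neg) / len(low_neg))

def measure_roughness(ma):
    # MR(X) = (1, 1) - MA(X), taken componentwise
    return (1 - ma[0], 1 - ma[1])

ma = measure_accuracy(low_pos, upp_pos, low_neg, upp_neg)
mr = measure_roughness(ma)
print(ma)   # (0.2, 0.25)
print(mr)   # (0.8, 0.75)
```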

5 Conclusion

In this work, we have presented a general framework for the fuzzification of bipolar rough sets using a bipolar fuzzy tolerance relation. We introduced the notion of a bipolar fuzzy approximation space (\(\mathcal {BFA}\)-space) \(\mathbb {P} = (\mathcal {U}, R)\), on the basis of which we defined the fuzzified lower \(\alpha \)-positive, fuzzified upper \(\alpha \)-positive, fuzzified lower \(\beta \)-negative and fuzzified upper \(\beta \)-negative approximations, and then introduced the concept of an \((\alpha , \beta )\)-bipolar fuzzified rough set. We have also studied some basic properties of \((\alpha , \beta )\)-bipolar fuzzified rough sets. Finally, we have discussed accuracy and roughness measures for \((\alpha , \beta )\)-bipolar fuzzified rough sets. In the future, based on the concepts and operations defined in this paper, researchers may investigate algebraic structures of \((\alpha , \beta )\)-bipolar fuzzified rough sets.