1 Introduction

Almost every concept has an opposite. Every matter therefore has two sides, a positive side and a negative side, depending on the observer's point of view, and each side can be measured with a certain degree of membership. In this article, we focus on problems that involve both positive and negative preferences; such a problem is called a bipolar preference problem.

Bipolarity is an important topic in several domains, for example, psychology, multi-criteria decision making, artificial intelligence and qualitative reasoning. In real-life situations, both positive and negative preferences are useful for handling a problem, and this observation motivates the present work.

In a traditional fuzzy set, the membership degrees of elements range over the interval [0, 1]. The membership degree expresses the degree to which an element satisfies the property, or constraint, corresponding to the fuzzy set. However, some elements may be irrelevant to that property, while others may satisfy its counter-property; a membership degree confined to [0, 1] cannot distinguish these cases. A set representation that could express this kind of difference would be more informative than the traditional fuzzy set representation. Based on these observations, Zhang (1994) introduced an extension of the fuzzy set, named the bipolar fuzzy set (BFS).

Major advantages of the BFS theory include

  • It formalizes a unified approach to polarity and fuzziness,

  • It captures the bipolar or double-sided nature of human perception and cognition,

  • It provides a basis for bipolar cognitive modeling and multi-agent decision analysis.

Like classical (crisp) matrices, fuzzy matrices (FMs) are now a very rich topic for modeling uncertain situations that occur in science, engineering, automata theory, the logic of binary relations, medical diagnosis, etc. FMs were first defined by Thomason (1977), who discussed the convergence of the powers of a fuzzy matrix. The theory of fuzzy matrices was developed by Kim and Roush (1980) as an extension of Boolean matrices. The fuzzy algebra with max–min operations and its matrix theory have been considered by many authors, e.g., Bhowmik and Pal (2008a), Gavalec (1997), Khan and Pal (2006, 2007), Pal (2001), Shyamal and Pal (2004) and Xin (1992), who, among other topics, studied controllable fuzzy matrices. The transitivity of matrices over a path algebra (i.e., an additively idempotent semiring) is discussed by Hashimoto (1983a, b, 1985). Generalized fuzzy matrices, matrices over an incline, and results on the transitive closure, determinant, adjoint matrices, convergence of powers and conditions for nilpotency are considered by Duan (2004) and Lur et al. (2004). In FMs, the rows and columns are taken as certain, but they may also be uncertain; Pal (2015a) introduced matrices whose rows and columns are uncertain and investigated various properties of these matrices along with applications.

There are some limitations in dealing with uncertainties by fuzzy sets. To overcome these difficulties, Atanassov (1983) introduced the theory of intuitionistic fuzzy sets as a generalization of fuzzy sets. Based on this concept, Pal (2001) defined the intuitionistic fuzzy determinant, and intuitionistic fuzzy matrices (IFMs) were defined in 2002 (Pal et al. 2002). Bhowmik and Pal (2008a, b, 2009, 2010a) and Bhowmik et al. (2008) introduced some results on IFMs, intuitionistic circulant fuzzy matrices and generalized intuitionistic fuzzy matrices. Shyamal and Pal (2002, 2005) defined distances between IFMs and hence a metric on IFMs; they also cited a few applications of IFMs. In Mondal and Pal (2013b), the similarity relations, invertibility conditions and eigenvalues of IFMs are studied, along with idempotency, regularity, permutation matrices and the spectral radius of IFMs. The intuitionistic fuzzy incline matrix and determinant are studied in Mondal and Pal (2014). The parameterization tool of IFMs enhances the flexibility of their applications. For other works on IFMs, see Adak et al. (2012a, b, 2013) and Pradhan and Pal (2012, 2013a, b).

The concept of interval-valued fuzzy matrices (IVFMs), a generalization of FMs, was introduced and developed in Shyamal and Pal (2006) by extending the max–min operation of fuzzy algebra. We introduced the interval-valued fuzzy vector space (Mondal 2012) and the rank of an IVFM together with its associated properties (Mondal and Pal 2016). Pal (2015b) defined a new type of IVFM whose rows and columns, as well as elements, are uncertain.

Combining IFMs and IVFMs, a new fuzzy matrix called the interval-valued intuitionistic fuzzy matrix (IVIFM) is defined by Khan and Pal (2005). For other works on IVIFMs, see Bhowmik and Pal (2010b, 2012).

After the invention of BFSs (Cacioppo et al. 1997; Zhang 1994, 1998), fuzzy equilibrium relations and bipolar fuzzy clustering (Zhang 1999), bipolar fuzzy partial ordering for clustering and coordination (Zhang 2002), bipolar logic and bipolar fuzzy logic (Zhang 2002), and Yin-Yang bipolar logic and bipolar fuzzy logic (Zhang and Zhang 2004) were introduced. Bipolar-valued fuzzy sets were introduced by Lee (2000a, b). Since then, many authors, e.g., Benferhat et al. (2006), Dubois and Prade (2008) and Dudziak and Pekala (2010), have been working on this topic. Samanta and Pal introduced bipolar fuzzy hypergraphs (Samanta and Pal 2012a) and investigated irregular bipolar fuzzy graphs (Samanta and Pal 2012b) and bipolar fuzzy intersection graphs (Samanta and Pal 2014). But, to the best of our knowledge, no one has introduced the bipolar fuzzy matrix.

1.1 Motivation

Bipolar (fuzzy) set theory has become popular due to its wide applications in modeling real-life situations. Bipolar logic is also used to represent bistable devices in computer and communication systems, particularly electrical and electronic systems. Often, to model or solve such problems, a matrix is constructed, i.e., a matrix is used as a tool. If uncertainties are present in the system, then a fuzzy matrix is considered instead of a crisp matrix. To model or solve a problem relating to a bipolar uncertain system, a bipolar fuzzy matrix is essential.

Motivated by the above works and the properties of BFSs, we define bipolar fuzzy matrices. In this article, we introduce some basic properties of bipolar fuzzy elements using the max–min composition. Bipolar fuzzy relations, bipolar fuzzy matrices and their basic properties are also introduced. The properties of the transitive closure and power convergence are investigated with examples.

2 Preliminaries

The BFS is one of the extensions of fuzzy sets, with positive and negative membership values. In this section, some basic notions of BFSs are introduced, together with some basic binary and unary operations, viz., \(+,\cdot ,\times ,-,\lnot ,\Rightarrow \), on BFSs.

Definition 1

(Bipolar fuzzy set) A BFS \({\mathcal {B}}_F\) in X (universe of discourse) is an object having the form

$$\begin{aligned} {\mathcal {B}}_F=\{(x,\mu _{\underline{n}}(x),\mu _{\overline{p}}(x))\} \end{aligned}$$

where \(\mu _{\underline{n}}: X\rightarrow [-1,0]\) and \(\mu _{\overline{p}}: X\rightarrow [0,1]\) are two mappings.

The positive membership degree \(\mu _{\overline{p}}(x)\) denotes the satisfaction degree of an element x to the property corresponding to a BFS \({\mathcal {B}}_F\), and the negative membership degree \(\mu _{\underline{n}}(x)\) denotes the satisfaction degree of x to some implicit counter-property of \({\mathcal {B}}_F\).

If \(\mu _{\overline{p}}(x)\ne 0\) and \(\mu _{\underline{n}}(x)=0\), then x is regarded as having only positive satisfaction for \({\mathcal {B}}_F\). If \(\mu _{\overline{p}}(x)=0\) and \(\mu _{\underline{n}}(x)\ne 0\), then x does not satisfy the property of \({\mathcal {B}}_F\) but somewhat satisfies its counter-property. It is also possible for an element x to have both \(\mu _{\overline{p}}(x)\ne 0\) and \(\mu _{\underline{n}}(x)\ne 0\), when the membership function of the property overlaps that of its counter-property over some portion of the domain (Lee 2000b).

To illustrate the definition, we consider the following examples.

Example 1

Let us consider five students from a school and one student from another school. If we consider the property 'co-operation' of each of the five students with the particular student from the other school, the counter-property is 'competition'. Then, the bipolar function is (competition, co-operation), and the corresponding BFS is

$$\begin{aligned}&B_1=\left\{ (-0.5,0.3),(-0.7,0.5),(-0.3,0.2),\right. \\&\quad \left. (-0.8,0.6),(-0.1,0.4)\right\} . \end{aligned}$$

Here, in the first element, 0.3 denotes the satisfaction degree of co-operation of the first of the five students with the student from the other school, and \(-0.5\) denotes the corresponding satisfaction degree of competition. The other elements are interpreted similarly.

Following is another example of BFS.

Example 2

Suppose that the minimum height of an adult man is 2 ft and the maximum height is 8 ft; therefore, the height of the shortest adult man is 2 ft and that of the tallest adult man is 8 ft, and for all intermediate heights the membership functions are linear. The geometrical presentation of this BFS \(B_2\) is shown in Fig. 1.

We take the property to be 'tall man', so the counter-property is 'short man', and their corresponding linear membership functions are

$$\begin{aligned} \mu _{\overline{p}}(x)=\frac{x-2}{6}\quad \text{ and }\quad \mu _{\underline{n}}(x)=-\frac{8-x}{6}\ (\text{ say }). \end{aligned}$$
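These two linear membership functions can be sketched directly in code. The following snippet is a minimal illustration (the function names `mu_p` and `mu_n` are ours, and heights are clamped to the domain [2, 8] ft, as the example assumes):

```python
# Linear membership functions of Example 2 (heights in feet).
# mu_p measures the property "tall"; mu_n measures the counter-property "short".
def mu_p(x):
    x = min(max(x, 2.0), 8.0)   # clamp to the domain [2, 8]
    return (x - 2.0) / 6.0

def mu_n(x):
    x = min(max(x, 2.0), 8.0)
    return -(8.0 - x) / 6.0

# A 5 ft man is tall to degree 0.5 and short to degree -0.5.
print(mu_p(5.0), mu_n(5.0))  # 0.5 -0.5
```

At the endpoints, the shortest man gets \((-1,0)\) and the tallest man gets \((0,1)\), as the figure suggests.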
Fig. 1 Bipolar fuzzy set with linear membership function

Example 3

In Example 1, the universe X for \(B_1\) is a discrete set, but for \(B_2\) (Example 2) the universe X is a continuous and bounded set. For a continuous unbounded universe X, if the counter-property is exactly opposite to the property, then the geometrical presentation of the BFS looks like Fig. 2.

Fig. 2 Bipolar fuzzy set with nonlinear membership function

A second geometrical interpretation of a BFS is that the bipolar fuzzy membership function maps the universe of discourse X into the square region OPQR, as shown in Fig. 3.

Fig. 3 Second geometrical interpretation of a BFS

In arithmetic operations (such as addition and multiplication), only the membership values of a BFS are needed. So, from now on we represent a BFS as

$$\begin{aligned} {\mathcal {B}}_F=\{x=(-x_n,x_p)|x\in X\} \end{aligned}$$

where \(-x_n\in [-1,0]\), i.e., \(x_n\in [0,1]\), and \(x_p\in [0,1]\) are, respectively, the negative and positive membership degrees of \(x\in X\) in \({\mathcal {B}}_F\).

Definition 2

(Equality) Let \(x,y\in {\mathcal {B}}_F\), where \(x=(-x_n,x_p)\) and \(y=(-y_n,y_p)\). The equality of x and y is denoted by \(x=y\) and is defined by: \(x=y\) if and only if \(x_n=y_n\) and \(x_p=y_p\).

Definition 3

Let \(x,y\in {\mathcal {B}}_F\), where \(x=(-x_n,x_p)\), \(y=(-y_n,y_p)\) and \(x_n,x_p,y_n,y_p\in [0,1]\). Then the following operations are defined by Zhang and Zhang (2004).

  1.

    The disjunction of x and y is denoted by \(x+y\) and is defined by

    $$\begin{aligned} x+y= & {} (-x_n,x_p)+(-y_n,y_p)\\= & {} (-\max \{x_n,y_n\},\max \{x_p,y_p\})\\= & {} (-\{x_n\vee y_n\},\{x_p\vee y_p\}). \end{aligned}$$
  2.

    The parallel conjunction of x and y is denoted by \(x\cdot y\) and is defined by

    $$\begin{aligned} x\cdot y= & {} (-x_n,x_p)\cdot (-y_n,y_p)\\= & {} (-\min \{x_n,y_n\},\min \{x_p,y_p\})\\= & {} (-\{x_n\wedge y_n\},\{x_p\wedge y_p\}). \end{aligned}$$
  3.

    The serial conjunction of x and y is denoted by \(x\times y\) and is defined by

    $$\begin{aligned} x\times y= & {} (-x_n,x_p)\times (-y_n,y_p)\\= & {} (-\{(x_n\wedge y_p)\vee (x_p\wedge y_n)\},\\&\{(x_n\wedge y_n)\vee (x_p\wedge y_p)\}). \end{aligned}$$
  4.

    The negation of x is denoted by \(-x\) and is defined by \(-x=-(-x_n,x_p)=(-x_p,x_n).\)

  5.

    The complement of x is denoted by \(\lnot x\) and is defined by

    $$\begin{aligned} \lnot x= & {} \lnot (-x_n,x_p)=(\lnot (-x_n),\lnot x_p)\\= & {} (-1+x_n,1-x_p). \end{aligned}$$
  6.

    The implication of x to y is denoted by \(x\Rightarrow y\) and is defined by \((x\Rightarrow y)=\lnot x+y\).
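The six operations above admit a direct implementation. The following is a minimal sketch, assuming each bipolar fuzzy element \((-x_n,x_p)\) is stored as a Python pair `(neg, pos)` with `neg` in [-1, 0] and `pos` in [0, 1]; all function names are ours:

```python
# Bipolar fuzzy elements are pairs (neg, pos), neg in [-1,0], pos in [0,1].
def disj(x, y):                      # x + y : componentwise maximum of degrees
    return (-max(-x[0], -y[0]), max(x[1], y[1]))

def par_conj(x, y):                  # x . y : componentwise minimum of degrees
    return (-min(-x[0], -y[0]), min(x[1], y[1]))

def ser_conj(x, y):                  # x × y : serial (cross) conjunction
    xn, xp, yn, yp = -x[0], x[1], -y[0], y[1]
    return (-max(min(xn, yp), min(xp, yn)),
            max(min(xn, yn), min(xp, yp)))

def neg(x):                          # -x : swap the two degrees
    return (-x[1], -x[0])

def compl(x):                        # ¬x : (-1 + x_n, 1 - x_p)
    return (-1.0 - x[0], 1.0 - x[1])

def implies(x, y):                   # x ⇒ y = ¬x + y
    return disj(compl(x), y)

x, y = (-0.5, 0.3), (-0.1, 0.8)
print(disj(x, y), par_conj(x, y), ser_conj(x, y))
```

With \(x=(-0.5,0.3)\) and \(y=(-0.1,0.8)\), the snippet prints \((-0.5, 0.8)\), \((-0.1, 0.3)\) and \((-0.5, 0.3)\).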

3 Some properties on BFSs

In this section, we introduce some basic definitions related to BFSs, and then we prove some simple properties about them.

Definition 4

(Zero element) The zero element of a BFS is denoted by \(o_b\) and is defined by \(o_b=(0,0)\).

Definition 5

(Unit element) The unit element of a BFS is denoted by \(i_b\) and is defined by \(i_b=(-1,1)\).

Definition 6

(Identity element) The identity element of a BFS in respect to serial conjunction is denoted by \(e_b\) and is defined by \(e_b=(0,1)\).

Proposition 1

Let \({\mathcal {B}}_F\) be a BFS and \(x,y,z\in {\mathcal {B}}_F\), where \(x=(-x_n,x_p)\), \(y=(-y_n,y_p)\) and \(z=(-z_n,z_p)\). Then the following properties are satisfied:

  (a)

    \(x+y=y+x,\ x\cdot y=y\cdot x,\ x\times y=y\times x\).

  (b)

    \(x+(y+z)=(x+y)+z,\ x\cdot (y\cdot z)=(x\cdot y)\cdot z,\ x\times (y\times z)=(x\times y)\times z\).

  (c)

    \(x+o_b=o_b+x=x,\ x\cdot i_b=i_b\cdot x=x,\ x\times e_b=e_b\times x=x\).

  (d)

    An inverse element does not exist, except for the identity, with respect to each of the operations.

  (e)

    \(x\cdot (y+z)=x\cdot y+x\cdot z,\ x\times (y+z)=x\times y+x\times z\).

  (f)

    \(x-y=-(y-x)\) [where \(x-y=x+(-y)\)].

  (g)

    \(x\cdot (-y),\ (-x)\cdot y,\ -(x\cdot y)\) are not equal, but \(x\times (-y)=(-x)\times y=-(x\times y)\).

  (h)

    \(x\cdot (y-z)\ne x\cdot y-x\cdot z\), but \(x\times (y-z)=x\times y-x\times z\).

Proof

Given that \(x=(-x_n,x_p)\), \(y=(-y_n,y_p)\) and \(z=(-z_n,z_p)\).

(a)

$$\begin{aligned} \begin{array}{rcl} x\times y &{} = &{} (-x_n,x_p)\times (-y_n,y_p)\\ &{} = &{} (-\{(x_n\wedge y_p)\vee (x_p\wedge y_n)\},\{(x_n\wedge y_n)\vee (x_p\wedge y_p)\})\\ &{} = &{} (-\{(x_p\wedge y_n)\vee (x_n\wedge y_p)\},\{(x_n\wedge y_n)\vee (x_p\wedge y_p)\})\\ &{} = &{} (-\{(y_n\wedge x_p)\vee (y_p\wedge x_n)\},\{(y_n\wedge x_n)\vee (y_p\wedge x_p)\})\\ &{} = &{} (-y_n,y_p)\times (-x_n,x_p)=y\times x. \end{array} \end{aligned}$$

The proofs are similar for other cases.

(b)

$$\begin{aligned} x\times (y\times z)= & {} (-x_n,x_p)\times \{(-y_n,y_p)\times (-z_n,z_p)\}\\= & {} (-x_n,x_p)\times (-\{y_n\wedge z_p\}\vee \{y_p\wedge z_n\},\{y_n\wedge z_n\}\\&\vee \{y_p\wedge z_p\})\\= & {} (-[x_n\wedge \{(y_n\wedge z_n)\vee (y_p\wedge z_p)\}]\\&\vee [x_p\wedge \{(y_n\wedge z_p)\vee (y_p\wedge z_n)\}],\\&[x_n\wedge \{(y_n\wedge z_p)\vee (y_p\wedge z_n)\}]\\&\vee [x_p\wedge \{(y_n\wedge z_n)\vee (y_p\wedge z_p)\}])\\= & {} (-a_n,a_p)\ (\text{ say }) \end{aligned}$$

where

$$\begin{aligned} a_n= & {} [x_n\wedge \{(y_n\wedge z_n)\vee (y_p\wedge z_p)\}]\\&\vee [x_p\wedge \{(y_n\wedge z_p)\vee (y_p\wedge z_n)\}]\\= & {} [x_n\wedge \{(y_p\wedge z_p)\vee (y_n\wedge z_n)\}]\\&\vee [\{x_p\wedge (y_n\wedge z_p)\}\vee \{x_p\wedge (y_p\wedge z_n)\}]\\= & {} [\{x_n\wedge (y_p\wedge z_p)\}\vee \{x_n\wedge (y_n\wedge z_n)\}]\\&\vee [\{(x_p\wedge y_n)\wedge z_p\}\vee \{(x_p\wedge y_p)\wedge z_n\}]\\= & {} [\{(x_n\wedge y_p)\wedge z_p\}\vee \{(x_n\wedge y_n)\wedge z_n\}]\\&\vee [\{(x_p\wedge y_n)\wedge z_p\}\vee \{(x_p\wedge y_p)\wedge z_n\}]\\= & {} \{(x_n\wedge y_p)\wedge z_p\}\vee [\{(x_n\wedge y_n)\wedge z_n\}\\&\vee \{(x_p\wedge y_n)\wedge z_p\}]\vee \{(x_p\wedge y_p)\wedge z_n\}\\= & {} \{(x_n\wedge y_p)\wedge z_p\}\vee \{(x_p\wedge y_n)\wedge z_p\}\\&\vee [\{(x_n\wedge y_n)\wedge z_n\}\vee \{(x_p\wedge y_p)\wedge z_n\}]\\= & {} [\{(x_n\wedge y_p)\vee (x_p\wedge y_n)\}\wedge z_p]\\&\vee [\{(x_n\wedge y_n)\vee (x_p\wedge y_p)\}\wedge z_n]. \end{aligned}$$

Similarly, \(a_p=[\{(x_n\wedge y_p)\vee (x_p\wedge y_n)\}\wedge z_n]\vee [\{(x_n\wedge y_n)\vee (x_p\wedge y_p)\}\wedge z_p]\).

$$\begin{aligned} \begin{array}{rcl} \text{ Also, } (x\times y)\times z &{} = &{} \{(-x_n,x_p)\times (-y_n,y_p)\}\times (-z_n,z_p)\\ &{} = &{} (-\{(x_n\wedge y_p)\vee (x_p\wedge y_n)\},\{(x_n\wedge y_n)\\ &{}&{}\vee (x_p\wedge y_p)\})\times (-z_n,z_p)\\ &{} = &{} (-[\{(x_n\wedge y_p)\vee (x_p\wedge y_n)\}\wedge z_p]\\ &{}&{}\vee [\{(x_n\wedge y_n)\vee (x_p\wedge y_p)\}\wedge z_n],\\ &{}&{} [\{(x_n\wedge y_p)\vee (x_p\wedge y_n)\}\wedge z_n]\\ &{}&{}\vee [\{(x_n\wedge y_n)\vee (x_p\wedge y_p)\}\wedge z_p]). \end{array} \end{aligned}$$

Hence, \(x\times (y\times z)=(x\times y)\times z\).

Other two proofs are similar.

(c) \(x\times e_b=(-x_n,x_p)\times (0,1)=(-x_n,x_p)\) and \(e_b\times x=(0,1)\times (-x_n,x_p)=(-x_n,x_p)\).

Hence, \(x\times e_b=e_b\times x=x\), where \(e_b=(0,1)\) is called the identity in respect to serial conjunction.

Other proofs are similar.

(d) Let \(a=(-a_n,a_p)\in {\mathcal {B}}_F\) be the inverse of \(x=(-x_n,x_p)\) in respect to the disjunction (\(+\)) operation.

Then, \(x+a=a+x=o_b\), i.e., \((-x_n,x_p)+(-a_n,a_p)=(-a_n,a_p)+(-x_n,x_p)=(0,0)\)

or, \((-\max \{x_n,a_n\},\max \{x_p,a_p\})=(-\max \{a_n,x_n\},\max \{a_p,x_p\})=(0,0)\).

Thus, \(\max \{x_n,a_n\}=0\) and \(\max \{x_p,a_p\}=0\), which implies that \(x_n=x_p=a_n=a_p=0\). But \(x=(-x_n,x_p)\) is an arbitrary element of \({\mathcal {B}}_F\).

Hence, the inverse element does not exist in respect to the operation disjunction except zero element \(o_b\).

The other proofs can be done in a similar way.

(e)

$$\begin{aligned} \begin{array}{rcl} x\cdot (y+z) &{} = &{} (-x_n,x_p)\cdot \{(-y_n,y_p)+(-z_n,z_p)\}\\ &{} = &{} (-x_n,x_p)\cdot (-\{y_n\vee z_n\},\{y_p\vee z_p\})\\ &{} = &{} (-x_n\wedge \{y_n\vee z_n\},x_p\wedge \{y_p\vee z_p\})\\ &{} = &{} (-\{x_n\wedge y_n\}\vee \{x_n\wedge z_n\},\{x_p\wedge y_p\}\vee \{x_p\wedge z_p\})\\ &{} = &{} (-x_n\wedge y_n,x_p\wedge y_p)+(-x_n\wedge z_n,x_p\wedge z_p)\\ &{} = &{} (-x_n,x_p)\cdot (-y_n,y_p)+(-x_n,x_p)\cdot (-z_n,z_p)\\ &{}=&{}x\cdot y+x\cdot z. \end{array} \end{aligned}$$

Similarly, we can prove the other result.

(f) \(x-y=x+(-y)=(-x_n,x_p)+(-y_p,y_n)=(-x_n\vee y_p,x_p\vee y_n)\).

$$\begin{aligned} \begin{array}{rcl} \text{ Also, } -(y-x) &{} = &{} -\{y+(-x)\}=-\{(-y_n,y_p)+(-x_p,x_n)\}\\ &{}=&{}-(-y_n\vee x_p,y_p\vee x_n)\\ &{} = &{} (-y_p\vee x_n,y_n\vee x_p)\\ &{}=&{}(-x_n\vee y_p,x_p\vee y_n). \end{array} \end{aligned}$$

Hence, \(x-y=-(y-x)\).

(g) \(x\cdot (-y)=(-x_n,x_p)\cdot (-y_p,y_n)=(-x_n\wedge y_p,x_p\wedge y_n)\)

\((-x)\cdot y=(-x_p,x_n)\cdot (-y_n,y_p)=(-x_p\wedge y_n,x_n\wedge y_p)\)

and \(-(x\cdot y)=-\{(-x_n,x_p)\cdot (-y_n,y_p)\}=-(-x_n\wedge y_n,x_p\wedge y_p)=(-x_p\wedge y_p,x_n\wedge y_n)\).

Thus, \(x\cdot (-y)\), \((-x)\cdot y\) and \(-(x\cdot y)\) are not equal. Actually,

$$\begin{aligned} x\cdot (-y)=-\{(-x)\cdot y\}\ne -(x\cdot y)=(-x)\cdot (-y). \end{aligned}$$

But, for serial conjunction

\(x\times (-y)=(-x_n,x_p)\times (-y_p,y_n)=(-\{x_n\wedge y_n\}\vee \{x_p\wedge y_p\},\{x_n\wedge y_p\}\vee \{x_p\wedge y_n\})\)

$$\begin{aligned} \begin{array}{rcl} (-x)\times y &{} = &{} (-x_p,x_n)\times (-y_n,y_p)\\ &{} = &{} (-\{x_p\wedge y_p\}\vee \{x_n\wedge y_n\},\{x_p\wedge y_n\}\vee \{x_n\wedge y_p\})\\ &{} = &{} (-\{x_n\wedge y_n\}\vee \{x_p\wedge y_p\},\{x_n\wedge y_p\}\vee \{x_p\wedge y_n\}) \end{array} \end{aligned}$$

and

$$\begin{aligned} \begin{array}{rcl} -(x\times y) &{} = &{} -\{(-x_n,x_p)\times (-y_n,y_p)\}\\ &{} = &{} -(-\{(x_n\wedge y_p)\vee (x_p\wedge y_n)\},\{(x_n\wedge y_n)\\ &{}&{}\vee (x_p\wedge y_p)\})\\ &{} = &{} (-\{x_n\wedge y_n\}\vee \{x_p\wedge y_p\},\{x_n\wedge y_p\}\\ &{}&{}\vee \{x_p\wedge y_n\}). \end{array} \end{aligned}$$

Hence, \(x\times (-y)=(-x)\times y=-(x\times y)\).

(h) Using (e) and (g), we can easily prove that

$$\begin{aligned} \begin{array}{rcl} x\cdot (y-z) &{} = &{} x\cdot \{y+(-z)\}=x\cdot y+x\cdot (-z)\\ &{} \ne &{} x\cdot y-x\cdot z\quad [\text{ Since } x\cdot (-z)\ne -(x\cdot z)]. \end{array} \end{aligned}$$

But, \(x\times (y-z)=x\times \{y+(-z)\}=x\times y+x\times (-z)=x\times y-x\times z\). \(\square \)
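The identities of Proposition 1 can be spot-checked numerically. The sketch below (our own `(neg, pos)` pair representation; a sanity check on sample elements, not a proof) verifies associativity of \(\times\), the identity \(e_b\), and property (f):

```python
# Spot-checking identities from Proposition 1 on sample elements; this is a
# numerical sanity check, not a proof. Elements are (neg, pos) pairs.
def add(x, y):                       # disjunction x + y: note that
    return (min(x[0], y[0]), max(x[1], y[1]))   # -max(xn,yn) = min(-xn,-yn)

def ser(x, y):                       # serial conjunction x × y
    xn, xp, yn, yp = -x[0], x[1], -y[0], y[1]
    return (-max(min(xn, yp), min(xp, yn)), max(min(xn, yn), min(xp, yp)))

def neg(x):                          # negation -x
    return (-x[1], -x[0])

def sub(x, y):                       # x - y = x + (-y)
    return add(x, neg(y))

x, y, z = (-0.5, 0.3), (-0.1, 0.8), (-0.7, 0.2)

assert ser(x, ser(y, z)) == ser(ser(x, y), z)   # (b): × is associative
e_b = (0.0, 1.0)
assert ser(x, e_b) == x == ser(e_b, x)          # (c): e_b is the × identity
assert sub(x, y) == neg(sub(y, x))              # (f): x - y = -(y - x)
print("sample identities verified")
```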

Remark 1

(\({\mathcal {B}}_F,+\)), (\({\mathcal {B}}_F,\cdot \)) and (\({\mathcal {B}}_F,\times \)) are all commutative monoids (in particular, Abelian groupoids), with identities \(o_b\), \(i_b\) and \(e_b\), respectively.

Theorem 1

De Morgan’s laws are satisfied on BFS \({\mathcal {B}}_F\). That is, if \(x=(-x_n,x_p)\) and \(y=(-y_n,y_p)\in {\mathcal {B}}_F\) then

  (a)

    \(\lnot (x+y)=(\lnot x)\cdot (\lnot y)\) and

  (b)

    \(\lnot (x\cdot y)=(\lnot x)+(\lnot y)\).

Proof

(a)

$$\begin{aligned} \begin{array}{rcl} \lnot (x+y) &{} = &{} \lnot \{(-x_n,x_p)+(-y_n,y_p)\}\\ &{}=&{}\lnot (-\{x_n\vee y_n\},\{x_p\vee y_p\})\\ &{} = &{} (-1+x_n\vee y_n,1-x_p\vee y_p). \end{array} \end{aligned}$$

Also,

$$\begin{aligned} \begin{array}{rcl} (\lnot x)\cdot (\lnot y) &{} = &{} \{\lnot (-x_n,x_p)\}\cdot \{\lnot (-y_n,y_p)\}\\ &{} = &{} (-1+x_n,1-x_p)\cdot (-1+y_n,1-y_p)\\ &{} = &{} (-\{1-x_n\}\wedge \{1-y_n\},\{1-x_p\}\wedge \{1-y_p\})\\ &{} = &{} (-1+x_n\vee y_n,1-x_p\vee y_p)=\lnot (x+y). \end{array} \end{aligned}$$

Similarly, we can prove the second part. \(\square \)

Example 4

Let \(x=(-0.5,0.3)\) and \(y=(-0.1,0.8)\), then \(\lnot x=(-1+0.5,1-0.3)=(-0.5,0.7)\) and \(\lnot y=(-1+0.1,1-0.8)=(-0.9,0.2)\).

Therefore, \((\lnot x)+(\lnot y)=(-0.5,0.7)+(-0.9,0.2)=(-0.9,0.7)\) and \((\lnot x)\cdot (\lnot y)=(-0.5,0.7)\cdot (-0.9,0.2)=(-0.5,0.2)\).

Also, \(x+y=(-0.5,0.3)+(-0.1,0.8)=(-0.5,0.8)\) and \(x\cdot y=(-0.5,0.3)\cdot (-0.1,0.8)=(-0.1,0.3)\).

Thus, \(\lnot (x+y)=(-1+0.5,1-0.8)=(-0.5,0.2)\) and \(\lnot (x\cdot y)=(-1+0.1,1-0.3)=(-0.9,0.7)\). Hence, \(\lnot (x+y)=(\lnot x)\cdot (\lnot y)\) and \(\lnot (x\cdot y)=(\lnot x)+(\lnot y)\).
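Beyond this single example, De Morgan's laws can be checked on many random samples. The following sketch (helper names ours; a numerical check, not a proof) verifies both laws on 1000 random bipolar fuzzy elements:

```python
# Randomized sanity check of De Morgan's laws on bipolar fuzzy elements
# stored as (neg, pos) pairs.
import random

def add(x, y):   # disjunction
    return (min(x[0], y[0]), max(x[1], y[1]))

def mul(x, y):   # parallel conjunction
    return (max(x[0], y[0]), min(x[1], y[1]))

def compl(x):    # complement: (-1 + x_n, 1 - x_p)
    return (-1.0 - x[0], 1.0 - x[1])

random.seed(2024)
for _ in range(1000):
    x = (-random.random(), random.random())
    y = (-random.random(), random.random())
    assert compl(add(x, y)) == mul(compl(x), compl(y))
    assert compl(mul(x, y)) == add(compl(x), compl(y))
print("De Morgan's laws hold on 1000 random samples")
```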

4 Bipolar fuzzy relation

In this section, we define the Cartesian product of two BFSs and the bipolar fuzzy relation. Several basic properties are also investigated.

Definition 7

(Cartesian product of BFSs) Let \(X_1\) and \(X_2\) be two universes of discourse and let \(A=\{x=(-x_n,x_p)|x\in X_1\}\), \(B=\{y=(-y_n,y_p)|y\in X_2\}\) be two BFSs. The Cartesian product of A and B is denoted by \(A\times B\) and is defined by

$$\begin{aligned} A\times B=\{(x,y)|x\in X_1\quad \text{ and }\quad y\in X_2\}. \end{aligned}$$

Definition 8

(Bipolar fuzzy relation) A bipolar fuzzy relation between two BFSs A and B is defined as a BFS in \(A\times B\). If R is a relation between A and B, \(x\in A\), \(y\in B\), and \(-r_n(x,y)\) and \(r_p(x,y)\) denote the negative and positive membership values with which x is related to y under R, then \(r=(-r_n,r_p)\in R\).

Now, we define an order relation '\(\le \)'.

Definition 9

(Inclusion) Let \({\mathcal {B}}_F\) be a BFS over X and let \(x,y\in {\mathcal {B}}_F\) where \(x=(-x_n,x_p)\) and \(y=(-y_n,y_p)\), then \(x\le y\) if and only if \(x_n\le y_n\) and \(x_p\le y_p\). That is, \(x\le y\) if and only if \(x+y=y\).

Definition 10

Let \({\mathcal {B}}_F\) be a BFS over X and let \(x,y\in {\mathcal {B}}_F\), where \(x=(-x_n,x_p)\) and \(y=(-y_n,y_p)\), then \(x<y\) if and only if \(x\le y\) and \(x\ne y\).

Proposition 2

The relation '\(\le \)' is a partial order relation on a BFS.

Proof

I. Since \(x_n\le x_n\) and \(x_p\le x_p\), so we write \(x\le x\) for all \(x\in {\mathcal {B}}_F\).

That is, the relation ‘\(\le \)’ is reflexive.

II. Let \(x\le y\) and \(y\le x\) for any \(x,y\in {\mathcal {B}}_F\). Then,

$$\begin{aligned} \begin{array}{rl} &{} x_n\le y_n,\ x_p\le y_p\quad \text{ and }\quad y_n\le x_n,\ y_p\le x_p.\\ \text{ or } &{} x_n=y_n\quad \text{ and }\quad x_p=y_p\\ \text{ or } &{} x=y. \end{array} \end{aligned}$$

Thus, \(x\le y\) and \(y\le x\) implies \(x=y\) for any \(x,y\in {\mathcal {B}}_F\). That is, the relation ‘\(\le \)’ is antisymmetric.

III. Let \(x\le y\) and \(y\le z\) for any \(x,y,z\in {\mathcal {B}}_F\). Then,

$$\begin{aligned} \begin{array}{rl} &{} x_n\le y_n,\ x_p\le y_p \text{ and } y_n\le z_n,\ y_p\le z_p\\ \text{ or } &{} x_n\le y_n\le z_n \text{ and } x_p\le y_p\le z_p\\ \text{ or } &{} x_n\le z_n \text{ and } x_p\le z_p\\ \text{ or } &{} x\le z. \end{array} \end{aligned}$$

Thus, \(x\le y\) and \(y\le z\) implies \(x\le z\) for any \(x,y,z\in {\mathcal {B}}_F\). That is, the relation ‘\(\le \)’ is transitive.

Hence, the relation ‘\(\le \)’ in a BFS is a partial order relation. \(\square \)

Proposition 3

Let \({\mathcal {B}}_F\) be a BFS over X and let \(x,y,z\in {\mathcal {B}}_F\) where \(x=(-x_n,x_p),y=(-y_n,y_p)\) and \(z=(-z_n,z_p)\), then

  (a)

    \(o_b\le x\le i_b\), for any x.

  (b)

    If \(x\le y\) then \(x+z\le y+z\) and \(x\cdot z\le y\cdot z\).

  (c)

    \(x\le x+y\) and \(y\le x+y\); moreover, \(x+y\) is the least upper bound of x and y. In other words, if there is an element z satisfying \(x\le z\) and \(y\le z\), then \(x+y\le z\).

  (d)

    \(x\cdot y\le x\) and \(x\cdot y\le y\). That is, \(x\cdot y\) is a lower bound of x and y.

  (e)

    \(x\cdot y\cdot z\le x\cdot y\).

Proof

  (a)

    Since \(x=(-x_n,x_p)\in {\mathcal {B}}_F\), therefore \(0\le x_n,x_p\le 1\). Hence, \(o_b\le x\le i_b\).

  (b)

    Let \(x\le y\), then \(x_n\le y_n\) and \(x_p\le y_p\). Therefore, \(\max \{x_n,z_n\}{\le }\max \{y_n,z_n\}\) and \(\max \{x_p,z_p\}\le \max \{y_p,z_p\}\). Thus, \(x+z\le y+z\). Also, \(\min \{x_n,z_n\}\le \min \{y_n,z_n\}\) and \(\min \{x_p,z_p\}\le \min \{y_p,z_p\}\). Hence, \(x\cdot z\le y\cdot z\).

  (c)

    We know that \(x_n\le \max \{x_n,y_n\}\) and \(x_p\le \max \{x_p,y_p\}\), so \(x\le x+y\). Similarly, \(y\le x+y\). Thus, \(x+y\) is an upper bound of x and y. If possible, let \(z\ne x+y\) be the least upper bound of x and y; then

    $$\begin{aligned} \begin{array}{rl} &{} x\le z\quad \text{ and }\quad y\le z,\\ \mathrm{i.e.,} &{} x_n\le z_n,\ x_p\le z_p\quad \text{ and }\quad y_n\le z_n,\ y_p\le z_p,\\ \mathrm{i.e.,} &{} \max \{x_n,y_n\}\le z_n\quad \text{ and }\quad \max \{x_p,y_p\}\le z_p. \end{array} \end{aligned}$$

    Thus

    $$\begin{aligned} x+y\le z \end{aligned}$$
    (1)

    Also, since \(x+y\) is an upper bound of x and y and z is the least upper bound,

    $$\begin{aligned} z\le x+y \end{aligned}$$
    (2)

    From Eqs. (1) and (2), we get \(x+y=z\). That is, \(x+y\) is the least upper bound of x and y.

  (d)

    Similarly, we can prove that \(x\cdot y\) is the greatest lower bound of x and y.

  (e)

    We know that \(\min \{x_n,z_n,y_n\}\le \min \{x_n,y_n\}\) and \(\min \{x_p,z_p,y_p\}\le \min \{x_p,y_p\}\). Therefore, \(x\cdot z\cdot y\le x\cdot y\).

\(\square \)

Hence, under the max–min operations, every pair of elements of a BFS, viewed as a partially ordered set, has a least upper bound and a greatest lower bound. So, a BFS is a lattice.
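This lattice structure can be illustrated concretely. In the sketch below (pair representation and helper names ours), the join is the disjunction \(x+y\) and the meet is the conjunction \(x\cdot y\):

```python
# Order of Definition 9 on (neg, pos) pairs, and the lattice operations.
def leq(x, y):
    # x <= y iff x_n <= y_n and x_p <= y_p; stored values are -x_n and x_p.
    return -x[0] <= -y[0] and x[1] <= y[1]

def lub(x, y):                       # join: x + y
    return (min(x[0], y[0]), max(x[1], y[1]))

def glb(x, y):                       # meet: x . y
    return (max(x[0], y[0]), min(x[1], y[1]))

x, y = (-0.5, 0.3), (-0.1, 0.8)      # an incomparable pair
j, m = lub(x, y), glb(x, y)
assert not leq(x, y) and not leq(y, x)
assert leq(x, j) and leq(y, j)       # x + y is an upper bound
assert leq(m, x) and leq(m, y)       # x . y is a lower bound
z = (-0.6, 0.9)                      # another upper bound of x and y ...
assert leq(x, z) and leq(y, z) and leq(j, z)   # ... dominating x + y
```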

5 Bipolar fuzzy matrix

In order to develop the theory of bipolar fuzzy matrices (BFMs), we begin with the concept of a bipolar fuzzy algebra. A bipolar fuzzy algebra is a mathematical system \(({\mathcal {B}}_F,+,\cdot )\) with two binary operations \(+\) and \(\cdot \) defined on \({\mathcal {B}}_F\) satisfying the following properties:

  (P1)

    Idempotent: \(x+x=x\), \(x\cdot x=x\)

  (P2)

    Commutativity: \(x+y=y+x\), \(x\cdot y=y\cdot x\)

  (P3)

    Associativity: \(x+(y+z)=(x+y)+z\), \(x\cdot (y\cdot z)=(x\cdot y)\cdot z\)

  (P4)

    Absorption: \(x+(x\cdot y)=x\), \(x\cdot (x+y)=x\)

  (P5)

    Distributivity: \(x\cdot (y+z)=(x\cdot y)+(x\cdot z)\), \(x+(y\cdot z)=(x+y)\cdot (x+z)\)

  (P6)

    Universal bounds: \(x+o_b=x\), \(x+i_b=i_b\), \(x\cdot o_b=o_b\), \(x\cdot i_b=x\)

where \(x=(-x_n,x_p),\ y=(-y_n,y_p)\) and \(z=(-z_n,z_p)\in {\mathcal {B}}_F\).

Proof

Most of the results are already proved. The proofs of the absorption law and the second distributive law are given below.

(P4) To prove the absorption property, we take the left-hand side of first as

$$\begin{aligned} \begin{array}{rcl} x+(x\cdot y) &{} = &{} (-x_n,x_p)+\{(-x_n,x_p)\cdot (-y_n,y_p)\}\\ &{} = &{} (-x_n,x_p)+(-\min \{x_n,y_n\},\min \{x_p,y_p\})\\ &{} = &{} (-\max [x_n,\min \{x_n,y_n\}],\max [x_p,\min \{x_p,y_p\}])\\ &{} = &{} (-x_n,x_p)=x. \end{array} \end{aligned}$$

Similarly, we can prove the second part.

(P5) For second distributive law, we have

$$\begin{aligned} \begin{array}{rcl} x+(y\cdot z) &{} = &{} (-x_n,x_p)+\{(-y_n,y_p)\cdot (-z_n,z_p)\}\\ &{} = &{} (-x_n,x_p)+(-y_n\wedge z_n,y_p\wedge z_p)\\ &{} = &{} (-x_n\vee \{y_n\wedge z_n\},x_p\vee \{y_p\wedge z_p\})\\ &{} = &{} (-\{x_n\vee y_n\}\wedge \{x_n\vee z_n\},\{x_p\vee y_p\}\wedge \{x_p\vee z_p\})\\ &{} = &{} (-x_n\vee y_n,x_p\vee y_p)\cdot (-x_n\vee z_n,x_p\vee z_p)\\ &{} = &{} \{(-x_n,x_p)+(-y_n,y_p)\}\cdot \{(-x_n,x_p)\\ &{}&{}+(-z_n,z_p)\}\\ &{} = &{} (x+y)\cdot (x+z). \end{array} \end{aligned}$$

Hence, \(({\mathcal {B}}_F,+,\cdot )\) is a bipolar fuzzy algebra. \(\square \)
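The algebra laws can be spot-checked numerically. The sketch below (our `(neg, pos)` representation; a sanity check on sample elements, not a proof) verifies idempotency, absorption and the second distributive law:

```python
# Numerical spot-check of idempotency (P1), absorption (P4) and the second
# distributive law (P5) for the bipolar fuzzy algebra.
def add(x, y):                       # x + y
    return (min(x[0], y[0]), max(x[1], y[1]))

def mul(x, y):                       # x . y
    return (max(x[0], y[0]), min(x[1], y[1]))

x, y, z = (-0.3, 0.5), (-0.5, 0.6), (-0.4, 0.4)

assert add(x, x) == x and mul(x, x) == x                   # idempotency
assert add(x, mul(x, y)) == x and mul(x, add(x, y)) == x   # absorption
assert add(x, mul(y, z)) == mul(add(x, y), add(x, z))      # distributivity
print("bipolar fuzzy algebra laws verified on the sample")
```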

Definition 11

(Bipolar fuzzy matrix) A bipolar fuzzy matrix (BFM) is a matrix over the bipolar fuzzy algebra. The zero matrix \(O_m\) of order \(m\times m\) is the matrix in which every entry is \(o_b=(0,0)\), and the identity matrix \(I_m\) of order \(m\times m\) is the matrix in which every diagonal entry is \(i_b=(-1,1)\) and every other entry is \(o_b=(0,0)\).

The set of all rectangular BFMs of order \(l\times m\) is denoted by \({\mathcal {M}}_{lm}\) and that of square BFMs of order \(m\times m\) is denoted by \({\mathcal {M}}_m\).

From the definition, we conclude that if \(A=(a_{ij})_{l\times m}\in {\mathcal {M}}_{lm}\), then \(a_{ij}=(-a_{ijn},a_{ijp})\in {\mathcal {B}}_F\), where \(a_{ijn},a_{ijp}\in [0,1]\) are the negative and positive membership values of the element \(a_{ij}\), respectively.

5.1 Operations on BFM

The operations on BFMs are as follows:

Definition 12

Let \(A=(a_{ij}),\ B=(b_{ij})\in {\mathcal {M}}_{lm}\) be two BFMs, so that \(a_{ij},b_{ij}\in {\mathcal {B}}_F\). Then

$$\begin{aligned} \begin{array}{rcl} A+B &{} = &{} (a_{ij}+b_{ij})_{l\times m}\\ &{}=&{}(-\max \{a_{ijn},b_{ijn}\},\max \{a_{ijp},b_{ijp}\})_{l\times m}\\ \text{ and } A\cdot B &{} = &{} (a_{ij}\cdot b_{ij})_{l\times m}\\ &{}=&{}(-\min \{a_{ijn},b_{ijn}\},\min \{a_{ijp},b_{ijp}\})_{l\times m}. \end{array} \end{aligned}$$
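Definition 12 translates directly into code. The sketch below (entries as `(neg, pos)` pairs, helper names ours) forms the elementwise sum and product of two BFMs of the same order:

```python
# Elementwise sum (componentwise max of degrees) and product (componentwise
# min of degrees) of two BFMs of equal order; entries are (neg, pos) pairs.
def mat_add(A, B):
    return [[(min(a[0], b[0]), max(a[1], b[1])) for a, b in zip(ra, rb)]
            for ra, rb in zip(A, B)]

def mat_mul(A, B):
    return [[(max(a[0], b[0]), min(a[1], b[1])) for a, b in zip(ra, rb)]
            for ra, rb in zip(A, B)]

A = [[(-0.3, 0.5), (-0.5, 0.6)]]
B = [[(-0.1, 0.8), (-0.6, 0.2)]]
print(mat_add(A, B))   # [[(-0.3, 0.8), (-0.6, 0.6)]]
```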

Definition 13

Let \(A=(a_{ij})\in {\mathcal {M}}_{lm}\) and \(B=(b_{ij})\in {\mathcal {M}}_{mq}\) be two BFMs, so that \(a_{ij},b_{ij}\in {\mathcal {B}}_F\). Then

$$\begin{aligned} \begin{array}{rcl} &{}&{}A\odot B = \left( \sum \limits _{k=1}^m a_{ik}\cdot b_{kj}\right) _{l\times q}\\ &{}&{}\quad =\left( -\max \limits _{k=1}^m[\min \{a_{ikn},b_{kjn}\}],\max \limits _{k=1}^m[\min \{a_{ikp},b_{kjp}\}]\right) _{l\times q}\\ &{}&{}\text{ and } A\otimes B = \left( \prod \limits _{k=1}^m\{a_{ik}+b_{kj}\}\right) _{l\times q}\\ &{}&{}\quad =\left( -\min \limits _{k=1}^m[\max \{a_{ikn},b_{kjn}\}],\min \limits _{k=1}^m[\max \{a_{ikp},b_{kjp}\}]\right) _{l\times q} \end{array} \end{aligned}$$
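The two products of Definition 13 can be sketched in the same pair representation (helper names ours). Note that a maximum over negative degrees \(x_n\) corresponds to a minimum over the stored values \(-x_n\), and vice versa:

```python
# Max-min product (A ⊙ B) and min-max product (A ⊗ B) of BFMs with entries
# stored as (neg, pos) pairs.
def mm_prod(A, B):          # A ⊙ B
    m = len(B)
    return [[(min(max(A[i][k][0], B[k][j][0]) for k in range(m)),
              max(min(A[i][k][1], B[k][j][1]) for k in range(m)))
             for j in range(len(B[0]))] for i in range(len(A))]

def xm_prod(A, B):          # A ⊗ B
    m = len(B)
    return [[(max(min(A[i][k][0], B[k][j][0]) for k in range(m)),
              min(max(A[i][k][1], B[k][j][1]) for k in range(m)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[(-0.3, 0.5), (-0.5, 0.6)],
     [(-0.1, 0.8), (-0.2, 0.7)]]
I2 = [[(-1.0, 1.0), (0.0, 0.0)],     # identity matrix I of order 2
      [(0.0, 0.0), (-1.0, 1.0)]]
assert mm_prod(A, I2) == A == mm_prod(I2, A)   # I is the identity for ⊙
```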

Proposition 4

If the BFMs A, B and C are conformable for the corresponding operations, then

  (a)

    \(A+B=B+A\), \(A\cdot B=B\cdot A\).

  (b)

    \(A+(B+C)=(A+B)+C\), \(A\cdot (B\cdot C)=(A\cdot B)\cdot C\).

  (c)

    \(A\cdot (B+C)=A\cdot B+A\cdot C\), \(A+(B\cdot C)=(A+B)\cdot (A+C)\).

  (d)

    \(A+O=O+A=A\), \(A\cdot O=O\cdot A=O\), where O is the zero matrix of appropriate order.

  (e)

    \(A\odot B\ne B\odot A\), \(A\otimes B\ne B\otimes A\), in general.

  (f)

    \(A\odot (B\odot C)=(A\odot B)\odot C\), \(A\otimes (B\otimes C)=(A\otimes B)\otimes C\).

  (g)

    \(A\odot I=I\odot A=A\), where I is the identity matrix of appropriate order. (Note that I is not an identity for \(\otimes \): the identity for \(\otimes \) is the matrix whose diagonal entries are \(o_b\) and whose remaining entries are \(i_b\).)

  (h)

    \(A\odot (B\cdot C)\ne (A\odot B)\cdot (A\odot C)\), \(A\otimes (B+C)\ne (A\otimes B)+(A\otimes C)\), in general. (Note that \(\odot \) does distribute over \(+\) and \(\otimes \) over \(\cdot \), since \(\wedge \) distributes over \(\vee \) and vice versa.)

Proof

The proofs of (a), (b) and (d) are simple for matrices A, B and C of the same order.

(c) This property can be proved by using the distributive property on BFS.

(e) From the definition, \(A\odot B\) and \(A\otimes B\) are possible only if the order of A is \(q\times r\) and that of B is \(r\times s\), that is, only if the number of columns of A equals the number of rows of B.

Thus, if the order of A is \(q\times r\) and that of B is \(r\times s\) with \(q\ne s\), then \(B\odot A\) and \(B\otimes A\) do not exist.

Now, if both matrices are square of the same order, say m, then both products are conformable, but the equality does not hold in general. To verify this, we consider the following example.

Let

$$\begin{aligned} A= & {} \left[ \begin{array}{c@{\quad }c@{\quad }c} (-0.3,0.5) &{}\quad (-0.5,0.6) &{}\quad (-0.4,0.4)\\ (-0.1,0.8) &{}\quad (-0.2,0.7) &{}\quad (-0.3,0.6)\\ (-0.6,0.3) &{}\quad (-0.7,0.2) &{}\quad (-0.8,0.1) \end{array}\right] \text{ and } \\ B= & {} \left[ \begin{array}{c@{\quad }c@{\quad }c} (-0.1,0.8) &{}\quad (-0.2,0.9) &{}\quad (-0.3,0.7)\\ (-0.4,0.6) &{}\quad (-0.5,0.4) &{}\quad (-0.6,0.2)\\ (-0.2,0.8) &{}\quad (-0.3,0.6) &{}\quad (-0.4,0.5) \end{array}\right] \end{aligned}$$

then,

$$\begin{aligned} A\odot B=\left[ \begin{array}{c@{\quad }c@{\quad }c} (-0.4,0.6) &{}\quad (-0.5,0.5) &{}\quad (-0.5,0.5)\\ (-0.2,0.8) &{}\quad (-0.3,0.8) &{}\quad (-0.3,0.7)\\ (-0.4,0.3) &{}\quad (-0.5,0.3) &{}\quad (-0.6,0.3) \end{array}\right] \end{aligned}$$

and

$$\begin{aligned} B\odot A=\left[ \begin{array}{c@{\quad }c@{\quad }c} (-0.3,0.8) &{}\quad (-0.3,0.7) &{}\quad (-0.3,0.6)\\ (-0.6,0.5) &{}\quad (-0.6,0.6) &{}\quad (-0.6,0.4)\\ (-0.4,0.6) &{}\quad (-0.4,0.6) &{}\quad (-0.4,0.6) \end{array}\right] . \end{aligned}$$

Note that, \(A\odot B\ne B\odot A\).

Again,

$$\begin{aligned} A\otimes B=\left[ \begin{array}{c@{\quad }c@{\quad }c} (-0.3,0.6) &{}\quad (-0.3,0.6) &{}\quad (-0.3,0.5)\\ (-0.1,0.7) &{}\quad (-0.2,0.6) &{}\quad (-0.3,0.6)\\ (-0.6,0.6) &{}\quad (-0.6,0.4) &{}\quad (-0.6,0.2) \end{array}\right] \end{aligned}$$

and

$$\begin{aligned} B\otimes A=\left[ \begin{array}{c@{\quad }c@{\quad }c} (-0.2,0.7) &{}\quad (-0.2,0.7) &{}\quad (-0.3,0.7)\\ (-0.4,0.3) &{}\quad (-0.5,0.2) &{}\quad (-0.4,0.2)\\ (-0.3,0.5) &{}\quad (-0.3,0.5) &{}\quad (-0.3,0.5) \end{array}\right] . \end{aligned}$$

Here also, \(A\otimes B\ne B\otimes A\).
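The two compositions can be checked mechanically. The sketch below is ours, not from the paper: it stores each bipolar fuzzy value as a signed pair \((n,p)\) with \(n\in [-1,0]\), so that \(-\max\) of the negative magnitudes becomes \(\min\) of the signed values, and recomputes the products for the matrices of this example.

```python
# Max-min (odot) and min-max (otimes) compositions of BFMs, with each
# bipolar fuzzy value stored as a signed pair (n, p), n in [-1, 0].

def odot(A, B):
    # (A odot B)_ij = sum_k a_ik . b_kj  (max-min composition)
    m = len(B)
    return [[(min(max(A[i][k][0], B[k][j][0]) for k in range(m)),
              max(min(A[i][k][1], B[k][j][1]) for k in range(m)))
             for j in range(len(B[0]))] for i in range(len(A))]

def otimes(A, B):
    # (A otimes B)_ij = prod_k (a_ik + b_kj)  (min-max composition)
    m = len(B)
    return [[(max(min(A[i][k][0], B[k][j][0]) for k in range(m)),
              min(max(A[i][k][1], B[k][j][1]) for k in range(m)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[(-0.3, 0.5), (-0.5, 0.6), (-0.4, 0.4)],
     [(-0.1, 0.8), (-0.2, 0.7), (-0.3, 0.6)],
     [(-0.6, 0.3), (-0.7, 0.2), (-0.8, 0.1)]]
B = [[(-0.1, 0.8), (-0.2, 0.9), (-0.3, 0.7)],
     [(-0.4, 0.6), (-0.5, 0.4), (-0.6, 0.2)],
     [(-0.2, 0.8), (-0.3, 0.6), (-0.4, 0.5)]]

print(odot(A, B) == odot(B, A), otimes(A, B) == otimes(B, A))  # False False
```

Running this reproduces the matrices displayed above and confirms that neither composition is commutative for this pair.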

(f) Let \(A\in {\mathcal {M}}_{qr}\), \(B\in {\mathcal {M}}_{rs}\) and \(C\in {\mathcal {M}}_{st}\), and let \(B\odot C=(d_{ij})\in {\mathcal {M}}_{rt}\). Then, the ijth entry of \(B\odot C\) is

$$\begin{aligned} d_{ij}=\sum \limits _{k=1}^s b_{ik}\cdot c_{kj}\quad \mathrm{where}\quad B=(b_{ij})\quad \text{ and }\quad C=(c_{ij}). \end{aligned}$$

Therefore, the ijth entry of \(A\odot (B\odot C)\) is

$$\begin{aligned} \begin{array}{rcl} \sum \limits _{l=1}^r a_{il}\cdot d_{lj} &{} = &{} \sum \limits _{l=1}^r a_{il}\cdot \left( \sum \limits _{k=1}^s b_{lk}\cdot c_{kj}\right) \quad [\text{ where } A=(a_{ij})]\\ &{} = &{} \sum \limits _{l=1}^r\sum \limits _{k=1}^s a_{il}\cdot b_{lk}\cdot c_{kj}\\ &{} = &{} \sum \limits _{k=1}^s\sum \limits _{l=1}^r a_{il}\cdot b_{lk}\cdot c_{kj}\\ &{} = &{} \sum \limits _{k=1}^s\left( \sum \limits _{l=1}^r a_{il}\cdot b_{lk}\right) \cdot c_{kj}\\ &{} = &{} \sum \limits _{k=1}^s e_{ik}\cdot c_{kj}\quad [\text{ where }A\odot B=(e_{ik})\in {\mathcal {M}}_{qs}]. \end{array} \end{aligned}$$

This is the ijth entry of \((A\odot B)\odot C\).

Thus, \(A\odot (B\odot C)=(A\odot B)\odot C\).

The proof of the second part is similar.

(g) Let \(A=(a_{ij})\) be a square matrix and \(I=(e_{ij})\) the identity matrix of the same order, say m. The ijth entry of \(A\odot I=(b_{ij})\) is \(b_{ij}=\sum \limits _{k=1}^m a_{ik}\cdot e_{kj}\). Therefore,

$$\begin{aligned} \begin{array}{rcl} b_{ij} &{} = &{} a_{i1}\cdot e_{1j}+a_{i2}\cdot e_{2j}+\cdots +a_{i,j-1}\cdot e_{j-1,j}\\ &{}&{}+a_{ij}\cdot e_{jj}+a_{i,j+1}\cdot e_{j+1,j}+\cdots +a_{im}\cdot e_{mj}\\ &{} = &{} a_{i1}\cdot o_b+a_{i2}\cdot o_b+\cdots +a_{i,j-1}\cdot o_b+a_{ij}\cdot i_b\\ &{}&{}+a_{i,j+1}\cdot o_b+\cdots +a_{im}\cdot o_b\\ &{} = &{} a_{ij}. \end{array} \end{aligned}$$

Thus, \(A\odot I=A\). Similarly, \(I\odot A=A\). Hence, \(A\odot I=I\odot A=A\).

The second part can be proved in a similar way.

(h) To verify this result, we consider the following example.

Let

$$\begin{aligned} A= & {} \left[ \begin{array}{c@{\qquad }c} (-0.3,0.6) &{} (-0.2,0.8)\\ (-0.4,0.5) &{} (-0.1,0.7) \end{array}\right] ,\\ B= & {} \left[ \begin{array}{c@{\qquad }c} (-0.2,0.7) &{} (-0.5,0.6)\\ (-0.3,0.8) &{} (-0.4,0.5) \end{array}\right] \end{aligned}$$

and

$$\begin{aligned} C=\left[ \begin{array}{c@{\qquad }c} (-0.4,0.7) &{} (-0.5,0.6)\\ (-0.6,0.4) &{} (-0.7,0.7) \end{array}\right] . \end{aligned}$$

Therefore

$$\begin{aligned}&B\otimes C=\left[ \begin{array}{c@{\qquad }c} (-0.4,0.6) &{} (-0.5,0.7)\\ (-0.4,0.5) &{} (-0.5,0.7) \end{array}\right] \\&\text{ and } A\odot (B\otimes C)=\left[ \begin{array}{c@{\qquad }c} (-0.3,0.6) &{} (-0.3,0.7)\\ (-0.4,0.5) &{} (-0.4,0.7) \end{array}\right] . \end{aligned}$$

Now,

$$\begin{aligned}&A\odot B=\left[ \begin{array}{c@{\qquad }c} (-0.2,0.8) &{} (-0.3,0.6)\\ (-0.2,0.7) &{} (-0.4,0.5) \end{array}\right] \\&\quad \text{ and } A\odot C=\left[ \begin{array}{c@{\qquad }c} (-0.3,0.6) &{} (-0.3,0.7)\\ (-0.4,0.5) &{} (-0.4,0.7) \end{array}\right] . \end{aligned}$$

Thus,

$$\begin{aligned} (A\odot B)\otimes (A\odot C)=\left[ \begin{array}{c@{\qquad }c} (-0.3,0.6) &{} (-0.3,0.7)\\ (-0.3,0.5) &{} (-0.3,0.7) \end{array}\right] . \end{aligned}$$

Therefore, \(A\odot (B\otimes C)\ne (A\odot B)\otimes (A\odot C)\).

The second part can be verified in a similar way. \(\square \)
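The failure of distributivity in part (h) can also be confirmed numerically. The following sketch is ours (signed-pair encoding, with the negative part stored in \([-1,0]\)); it recomputes both sides for the \(2\times 2\) matrices of the example.

```python
# Check that A odot (B otimes C) != (A odot B) otimes (A odot C) for the
# 2x2 counterexample; entries are signed pairs (n, p) with n in [-1, 0].

def odot(A, B):
    # max-min composition
    m = len(B)
    return [[(min(max(A[i][k][0], B[k][j][0]) for k in range(m)),
              max(min(A[i][k][1], B[k][j][1]) for k in range(m)))
             for j in range(len(B[0]))] for i in range(len(A))]

def otimes(A, B):
    # min-max composition
    m = len(B)
    return [[(max(min(A[i][k][0], B[k][j][0]) for k in range(m)),
              min(max(A[i][k][1], B[k][j][1]) for k in range(m)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[(-0.3, 0.6), (-0.2, 0.8)], [(-0.4, 0.5), (-0.1, 0.7)]]
B = [[(-0.2, 0.7), (-0.5, 0.6)], [(-0.3, 0.8), (-0.4, 0.5)]]
C = [[(-0.4, 0.7), (-0.5, 0.6)], [(-0.6, 0.4), (-0.7, 0.7)]]

lhs = odot(A, otimes(B, C))
rhs = otimes(odot(A, B), odot(A, C))
print(lhs == rhs)  # False: the two sides differ in the (2,1) entry
```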

6 Convergence of BFM

In this section, we introduce the concept of convergence and power of convergence of a BFM.

A sequence of matrices \(A_1,A_2,A_3,\ldots ,A_m,A_{m+1},\ldots \), that is, \(\{A_m\}\), is said to converge to a finite matrix A (if it exists) if

$$\begin{aligned} \lim \limits _{m\rightarrow \infty } A_m=A. \end{aligned}$$

Definition 14

(Power of convergence of a BFM) The least positive integer p is said to be the power of convergence of a BFM A with respect to a binary composition \(*\) if

$$\begin{aligned} A^{p+n}=A^{p+n-1}=A^{p+n-2}=\cdots =A^{p+1}=A^p, \end{aligned}$$

where \(n\in {\mathbb {N}}\) (set of natural numbers) and

$$\begin{aligned} A^2=A*A,\ A^3=A*A*A=A^2*A \text{ and } \text{ so } \text{ on }. \end{aligned}$$

The number p is called the index of A and is denoted by i(A).

Definition 15

The partial order relation ‘\(\le \)’ over \({\mathcal {M}}_m\) is defined as \(A\le B\) if and only if \(a_{ij}\le b_{ij}\) for all \(i,j\in \{1,2,3,\ldots ,m\}\) where \(A=(a_{ij}),\ B=(b_{ij})\in {\mathcal {M}}_m\). That is, \(A\le B\) if and only if \(A+B=B\). \(A<B\) holds if and only if \(A\le B\) and \(A\ne B\).

Definition 16

Let \(A=(a_{ij})\in {\mathcal {M}}_m\) be a BFM. The ijth entry of the square matrix \(A^r\) is denoted by \(a_{ij}^{(r)}\) and obviously

$$\begin{aligned} a_{ij}^{(r)}=\sum \limits _{1\le j_1,j_2,\ldots ,j_{r-1}\le m} \{a_{ij_1}\cdot a_{j_1j_2}\cdot a_{j_2j_3}\cdots a_{j_{r-1}j}\}. \end{aligned}$$
(3)

Definition 17

A matrix A is said to be nilpotent of order k if \(A^k=O_m\) for some \(k\in {\mathbb {N}}\), and A is idempotent if \(A^2=A\).

Lemma 1

Let \(A=(a_{ij})\in {\mathcal {M}}_m\) be a BFM. If \(r>m\), then

$$\begin{aligned} A^r\le \sum \nolimits _{k=0}^{m-1} A^k,\quad \text{ where }\quad A^0=I_m. \end{aligned}$$

As a result, \(A^{r+1}\le \sum \nolimits _{k=1}^m A^k\).

Proof

Let \(B=\sum \nolimits _{k=0}^{m-1} A^k\). Now, \(a_{ii}^{(r)}\le i_b=b_{ii}\), since \(a_{ii}^{(0)}=i_b\).

If \(i\ne j\), we consider an arbitrary summand of the right-hand side of equality (3), i.e., \(a_{ij_1}\cdot a_{j_1j_2}\cdot a_{j_2j_3}\cdots a_{j_{r-1}j}\).

Since \(i,j_1,j_2,j_3,\ldots ,j_{r-1},j\in \{1,2,3,\ldots ,m\}\) and \(r+1>m\), there are s, t such that \(j_s=j_t\) (\(0\le s<t\le r,\ j_0=i,\ j_r=j\)). Deleting \(a_{j_sj_{s+1}}\cdot a_{j_{s+1}j_{s+2}}\cdot a_{j_{s+2}j_{s+3}}\cdots a_{j_{t-1}j_t}\) from the summand \(a_{ij_1}\cdot a_{j_1j_2}\cdot a_{j_2j_3}\cdots a_{j_{r-1}j}\), we obtain

$$\begin{aligned}&a_{ij_1}\cdot a_{j_1j_2}\cdot a_{j_2j_3}\cdots a_{j_{r-1}j}\\&\quad \le a_{ij_1}\cdot a_{j_1j_2}\cdot a_{j_2j_3}\cdots a_{j_{s-1}j_s}\cdot a_{j_tj_{t+1}}\cdots a_{j_{r-1}j}\\&\quad \quad [\text{ by } \text{ Proposition } 3\text{(e) }]. \end{aligned}$$

If the number \(s+r-t+2\) of subscripts on the right-hand side of the above inequality is still more than m, the same deletion is applied again.

Therefore, there is a positive integer \(q\le m-1\) such that

$$\begin{aligned} a_{ij_1}\cdot a_{j_1j_2}\cdot a_{j_2j_3}\cdots a_{j_{r-1}j}\le a_{il_1}\cdot a_{l_1l_2}\cdot a_{l_2l_3}\cdots a_{l_{q-1}j}. \end{aligned}$$

Hence, by definition of \(A^r\) we have

$$\begin{aligned} \begin{array}{rll} &{}&{}a_{ij}^{(r)} \le \sum \limits _{k=1}^{m-1} a_{ij}^{(k)}\le b_{ij},\\ &{}&{}\quad \text{ i.e., }\quad A^r \le \sum \limits _{k=1}^{m-1} A^k\le \sum \limits _{k=0}^{m-1} A^k=B. \end{array} \end{aligned}$$

\(\square \)

Definition 18

Let \(A,B,C\in {\mathcal {M}}_m\). The BFM A is said to be transitive, if \(A^2\le A\). The BFM B is said to be transitive closure of matrix A, if B is transitive, \(A\le B\) and \(B\le C\) for any transitive matrix C, satisfying \(A\le C\). The transitive closure of A is denoted by t(A).

Theorem 2

Let \(A\in {\mathcal {M}}_m\) be a BFM. Then, the transitive closure of A is given by \(t(A)=\sum \nolimits _{k=1}^m A^k\).

Proof

Let \(B=\sum \nolimits _{k=1}^m A^k\); obviously, \(A\le B\). Since \({\mathcal {M}}_m\) is idempotent under addition, we have

$$\begin{aligned} \begin{array}{rcl} B^2 &{} = &{} \sum \limits _{k=2}^{2m} A^k\le \sum \limits _{k=1}^{2m} A^k\\ \text{ i.e., } B^2 &{} \le &{} B+\sum \limits _{k=m+1}^{2m} A^k. \end{array} \end{aligned}$$

By Lemma 1,

$$\begin{aligned} A^k\le \sum \limits _{l=1}^{m} A^l=B \text{ for } k>m. \end{aligned}$$

Hence, \(B^2\le B\).

If there is a matrix C such that \(A\le C\) and \(C^2\le C\), then \(A^2\le AC\le C^2\le C\), and by induction we have \(A^k\le C^k\le C\) for all positive integers k. Hence, \(B\le C\).

Thus, by the definition of transitive closure, \(B=t(A)=\sum \nolimits _{k=1}^m A^k\). \(\square \)

Example 5

Let

$$\begin{aligned} A= & {} \left[ \begin{array}{c@{\quad }c} (-0.3,0.5) &{} (-0.4,0.6)\\ (-0.2,0.4) &{} (-0.1,0.7) \end{array}\right] \\ \text{ Then },\quad A^2= & {} \left[ \begin{array}{c@{\quad }c} (-0.3,0.5) &{}\quad (-0.4,0.6)\\ (-0.2,0.4) &{}\quad (-0.1,0.7) \end{array}\right] \\&\odot \left[ \begin{array}{c@{\quad }c} (-0.3,0.5) &{}\quad (-0.4,0.6)\\ (-0.2,0.4) &{}\quad (-0.1,0.7) \end{array}\right] \\= & {} \left[ \begin{array}{c@{\quad }c} (-0.3,0.5) &{}\quad (-0.3,0.6)\\ (-0.2,0.4) &{}\quad (-0.2,0.7) \end{array}\right] \\ \text{ Thus, } t(A)= & {} A+A^2\\= & {} \left[ \begin{array}{c@{\quad }c} (-0.3,0.5) &{} (-0.4,0.6)\\ (-0.2,0.4) &{} (-0.2,0.7) \end{array}\right] . \end{aligned}$$
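Theorem 2 can be exercised on this example with a short sketch of ours (signed-pair encoding, negative part stored in \([-1,0]\)): it computes \(t(A)=\sum \nolimits _{k=1}^m A^k\) and checks that the result is transitive, using \(A\le B\Leftrightarrow A+B=B\) from Definition 15.

```python
# Transitive closure t(A) = A + A^2 + ... + A^m of an m x m BFM, with
# entries stored as signed pairs (n, p), n in [-1, 0].

def badd(a, b):
    # elementwise BFM addition
    return (min(a[0], b[0]), max(a[1], b[1]))

def odot(A, B):
    # max-min composition
    m = len(B)
    return [[(min(max(A[i][k][0], B[k][j][0]) for k in range(m)),
              max(min(A[i][k][1], B[k][j][1]) for k in range(m)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[badd(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def leq(A, B):
    # A <= B iff A + B = B (Definition 15)
    return mat_add(A, B) == B

def closure(A):
    m = len(A)
    power, total = A, A
    for _ in range(m - 1):
        power = odot(power, A)
        total = mat_add(total, power)
    return total

A = [[(-0.3, 0.5), (-0.4, 0.6)], [(-0.2, 0.4), (-0.1, 0.7)]]
T = closure(A)
print(leq(A, T), leq(odot(T, T), T))  # True True: A <= t(A), t(A) transitive
```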

The convergence of powers of a fuzzy matrix was first studied by Thomason (1977), who also pointed out that the powers of a general fuzzy matrix either converge or oscillate with a finite period. After that, many authors studied the convergence of fuzzy matrices (Bhowmik and Pal 2008b; Duan 2004; Lur et al. 2004; Mondal and Pal 2014). Hashimoto (1983a, b, 1985) introduced the transitivity condition on the power of convergence of fuzzy matrices. Using these concepts, we study some properties of the convergence of powers of BFMs.

Definition 19

(Periodicity of BFM) Let \(A\in {\mathcal {M}}_m\) be a BFM. If there exist two least positive integers s and t such that \(A^{s+t}=A^s\) holds, then t is said to be the periodicity of A and s the starting point of A corresponding to t.

Let \(r>s\) be any positive integer. Then \(r-s>0\), and multiplying both sides of \(A^{s+t}=A^s\) by \(A^{r-s}\), we get \(A^{r+t}=A^r\), which means that every \(r>s\) is also a starting point corresponding to t.

Proposition 5

The powers of a BFM \(A\in {\mathcal {M}}_m\) either converge to \(A^p\) for some finite natural number p or oscillate with a finite period.

Proof

The operations involved are max–min or min–max, so the powers of A contain no new membership values at all: every negative and positive membership value appearing in a power of A is already a negative or positive membership value of A, respectively. Since the number of entries of A is finite and the max–min and min–max operations are deterministic, only finitely many distinct matrices can occur among the powers of A. Thus, if A does not converge in its powers, then it must oscillate with a finite period. \(\square \)

The following example shows that the matrix A converges and the matrix B oscillates.

Example 6

Let

$$\begin{aligned} A=\left[ \begin{array}{c@{\quad }c@{\quad }c} (-0.7,0.8) &{}\quad (-0.2,0.3) &{}\quad (-0.3,0.4)\\ (-0.3,0.2) &{}\quad (-0.6,0.7) &{}\quad (-0.4,0.5)\\ (-0.4,0.3) &{}\quad (-0.5,0.4) &{}\quad (-0.8,0.6) \end{array}\right] . \end{aligned}$$

Therefore,

$$\begin{aligned} \begin{array}{rcl} A^2 &{} = &{} \left[ \begin{array}{c@{\quad }c@{\quad }c} (-0.7,0.8) &{}\quad (-0.2,0.3) &{}\quad (-0.3,0.4)\\ (-0.3,0.2) &{}\quad (-0.6,0.7) &{}\quad (-0.4,0.5)\\ (-0.4,0.3) &{}\quad (-0.5,0.4) &{}\quad (-0.8,0.6) \end{array}\right] \\ &{}&{}\odot \left[ \begin{array}{c@{\quad }c@{\quad }c} (-0.7,0.8) &{}\quad (-0.2,0.3) &{}\quad (-0.3,0.4)\\ (-0.3,0.2) &{}\quad (-0.6,0.7) &{}\quad (-0.4,0.5)\\ (-0.4,0.3) &{}\quad (-0.5,0.4) &{}\quad (-0.8,0.6) \end{array}\right] \\ &{} = &{} \left[ \begin{array}{c@{\quad }c@{\quad }c} (-0.7,0.8) &{}\quad (-0.3,0.4) &{}\quad (-0.3,0.4)\\ (-0.4,0.3) &{}\quad (-0.6,0.7) &{}\quad (-0.4,0.5)\\ (-0.4,0.3) &{}\quad (-0.5,0.4) &{}\quad (-0.8,0.6) \end{array}\right] \ne A. \end{array} \end{aligned}$$

Again

$$\begin{aligned} \begin{array}{rcl} A^3 &{} = &{} A^2\odot A\\ &{} = &{}\left[ \begin{array}{c@{\quad }c@{\quad }c} (-0.7,0.8) &{}\quad (-0.3,0.4) &{}\quad (-0.3,0.4)\\ (-0.4,0.3) &{}\quad (-0.6,0.7) &{}\quad (-0.4,0.5)\\ (-0.4,0.3) &{}\quad (-0.5,0.4) &{}\quad (-0.8,0.6) \end{array}\right] \\ &{}&{}\odot \left[ \begin{array}{c@{\quad }c@{\quad }c} (-0.7,0.8) &{}\quad (-0.2,0.3) &{}\quad (-0.3,0.4)\\ (-0.3,0.2) &{}\quad (-0.6,0.7) &{}\quad (-0.4,0.5)\\ (-0.4,0.3) &{}\quad (-0.5,0.4) &{}\quad (-0.8,0.6) \end{array}\right] \\ &{} = &{} \left[ \begin{array}{c@{\quad }c@{\quad }c} (-0.7,0.8) &{}\quad (-0.3,0.4) &{}\quad (-0.3,0.4)\\ (-0.4,0.3) &{}\quad (-0.6,0.7) &{}\quad (-0.4,0.5)\\ (-0.4,0.3) &{}\quad (-0.5,0.4) &{}\quad (-0.8,0.6) \end{array}\right] =A^2. \end{array} \end{aligned}$$

Hence, the powers of the matrix A converge at the power \(p=2\). That is, \(i(A)=2\).
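The index of convergence can be found by direct iteration. The sketch below is ours (signed-pair encoding, negative part in \([-1,0]\)) and recovers \(i(A)=2\) for the matrix above; the cap `max_iter` is an arbitrary safeguard for oscillating matrices.

```python
# Iterate the max-min powers of a BFM until A^{p+1} = A^p, returning the
# index of convergence p, or None if the powers do not stabilize.

def odot(A, B):
    # max-min composition on signed pairs (n, p), n in [-1, 0]
    m = len(B)
    return [[(min(max(A[i][k][0], B[k][j][0]) for k in range(m)),
              max(min(A[i][k][1], B[k][j][1]) for k in range(m)))
             for j in range(len(B[0]))] for i in range(len(A))]

def index_of_convergence(A, max_iter=100):
    power = A
    for p in range(1, max_iter):
        nxt = odot(power, A)
        if nxt == power:
            return p
        power = nxt
    return None  # powers did not stabilize within max_iter: A oscillates

A = [[(-0.7, 0.8), (-0.2, 0.3), (-0.3, 0.4)],
     [(-0.3, 0.2), (-0.6, 0.7), (-0.4, 0.5)],
     [(-0.4, 0.3), (-0.5, 0.4), (-0.8, 0.6)]]
print(index_of_convergence(A))  # 2
```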

We consider another matrix

$$\begin{aligned} B=\left[ \begin{array}{ccc} (-0.3,0.5) &{}\quad (-0.2,0.6) &{}\quad (-0.4,0.3)\\ (-0.1,0.9) &{}\quad (-0.5,0.4) &{}\quad (0,0.6)\\ (-0.9,0) &{}\quad (-0.3,0.7) &{}\quad (-0.2,0.2) \end{array}\right] . \end{aligned}$$

For this matrix, after calculating the values of the power of matrix B we see that \(B^9=B^{11}=B^{13}=\cdots \). Therefore, B oscillates with period 2, and the starting point is 9, where

$$\begin{aligned} B^9=\left[ \begin{array}{c@{\quad }c@{\quad }c} (-0.3,0.5) &{}\quad (-0.3,0.6) &{}\quad (-0.4,0.5)\\ (-0.1,0.6) &{}\quad (-0.5,0.6) &{}\quad (-0.1,0.6)\\ (-0.4,0.6) &{}\quad (-0.4,0.6) &{}\quad (-0.3,0.6) \end{array}\right] . \end{aligned}$$

Also, \(B^{10}=B^{12}=B^{14}=\cdots \). So, another starting point is 10, where

$$\begin{aligned} B^{10}=\left[ \begin{array}{c@{\quad }c@{\quad }c} (-0.4,0.6) &{}\quad (-0.3,0.5) &{}\quad (-0.3,0.6)\\ (-0.1,0.6) &{}\quad (-0.5,0.6) &{}\quad (-0.1,0.6)\\ (-0.3,0.6) &{}\quad (-0.4,0.6) &{}\quad (-0.4,0.6) \end{array}\right] . \end{aligned}$$

Theorem 3

Let \(A\in {\mathcal {M}}_m\) be a BFM. If either \(A^q\le A^r\) or \(A^r\le A^q\) holds for every \(q<r\), then A converges.

Proof

Let \(A=(a_{ij})\) and \(A^q\le A^r\) for every \(q<r\). Then

$$\begin{aligned} a_{ij}^{(q)}\le a_{ij}^{(r)}\Rightarrow a_{ijn}^{(q)}\le a_{ijn}^{(r)}\quad \text{ and }\quad a_{ijp}^{(q)}\le a_{ijp}^{(r)}. \end{aligned}$$

Therefore,

$$\begin{aligned} a_{ijn}^{(q)}\le a_{ijn}^{(r)}\le a_{ijn}^{(r+1)}\le a_{ijn}^{(r+2)}\le \cdots \end{aligned}$$

and

$$\begin{aligned} a_{ijp}^{(q)}\le a_{ijp}^{(r)}\le a_{ijp}^{(r+1)}\le a_{ijp}^{(r+2)}\le \cdots \end{aligned}$$

Since \(a_{ijn}^{(q)}\le a_{ijn}^{(r)}\) and \(a_{ijp}^{(q)}\le a_{ijp}^{(r)}\), and the max–min operation is deterministic, only a finite number of distinct negative and positive membership values can occur in each position in the powers of A, so that

$$\begin{aligned} a_{ijn}^{(q)}\le a_{ijn}^{(r)}\le a_{ijn}^{(r+1)}\le a_{ijn}^{(r+2)}\le \cdots \le a_{ijn}^{(s)}=a_{ijn}^{(s+1)}=\cdots \end{aligned}$$

and

$$\begin{aligned} a_{ijp}^{(q)}\le a_{ijp}^{(r)}\le a_{ijp}^{(r+1)}\le a_{ijp}^{(r+2)}\le \cdots \le a_{ijp}^{(t)}=a_{ijp}^{(t+1)}=\cdots \end{aligned}$$

for some finite natural numbers s and t. In other words, only a finite number of distinct BFMs occur among the powers of A, and the powers are eventually stationary. Hence, A converges.

Similarly, it can be shown that A converges when \(A^r\le A^q\) for every \(q<r\). \(\square \)

Definition 20

Let \(A=(a_{ij})\in {\mathcal {M}}_m\) be a BFM. A is said to be row-diagonally dominant if \(a_{ij}\le a_{ii}\) for \(1\le i,j\le m\), and column-diagonally dominant if \(a_{ji}\le a_{ii}\) for \(1\le i,j\le m\). The matrix A is called diagonally dominant if it is both row- and column-diagonally dominant.

The diagonal dominance property is very important in matrix theory and the theory of determinants. Here, we investigate some results using it.
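A small sketch of ours for checking these dominance conditions, assuming the signed-pair encoding (negative part in \([-1,0]\)) and the componentwise order of Definition 15:

```python
# Row/column diagonal dominance checks for a BFM with entries stored as
# signed pairs (n, p): a <= b iff |a_n| <= |b_n| and a_p <= b_p, i.e.
# a[0] >= b[0] and a[1] <= b[1] in signed form.

def entry_leq(a, b):
    return a[0] >= b[0] and a[1] <= b[1]

def row_dominant(A):
    return all(entry_leq(A[i][j], A[i][i])
               for i in range(len(A)) for j in range(len(A)))

def col_dominant(A):
    return all(entry_leq(A[j][i], A[i][i])
               for i in range(len(A)) for j in range(len(A)))

def diagonally_dominant(A):
    return row_dominant(A) and col_dominant(A)

# The matrix A of Example 6 is both row- and column-diagonally dominant:
A = [[(-0.7, 0.8), (-0.2, 0.3), (-0.3, 0.4)],
     [(-0.3, 0.2), (-0.6, 0.7), (-0.4, 0.5)],
     [(-0.4, 0.3), (-0.5, 0.4), (-0.8, 0.6)]]
print(diagonally_dominant(A))  # True
```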

Theorem 4

Let \(A\in {\mathcal {M}}_m\) be a BFM. If \(A^q\le A^r\) for every \(q<r\) and A is row- or column-diagonally dominant, then A converges to \(A^l\) for some \(l\le m-1\).

Proof

Let A be row-diagonally dominant. Now,

$$\begin{aligned} \begin{array}{rcl} a_{iin}^{(k)} &{} = &{} \sum \limits _{j_1,j_2,\ldots ,j_{k-1}} a_{ij_1n}a_{j_1j_2n}a_{j_2j_3n}\cdots a_{j_{k-1}in}\\ &{} = &{} \sum \limits _{j_1} a_{ij_1n}\left( \sum \limits _{j_2,j_3,\ldots ,j_{k-1}}a_{j_1j_2n}a_{j_2j_3n}\cdots a_{j_{k-1}in}\right) \\ &{} \le &{} \sum \limits _{j_1} a_{ij_1n}=a_{iin}\quad [\text{ by } \text{ Proposition } 3\text{(d) }]. \end{array} \end{aligned}$$

Similarly, \(a_{iip}^{(k)}\le a_{iip}\). Therefore, \(a_{ii}^{(k)}\le a_{ii}\). On the other hand, \(a_{ii}\le a_{ii}^{(k)}\) for \(k\ge 1\) by our assumption.

Therefore, \(a_{ii}^{(k)}=a_{ii}\) for \(k\ge 1\). Also, using Lemma 1, we conclude that

$$\begin{aligned} A^{m-1}=A^m. \end{aligned}$$

Hence, A converges to \(A^l\) for some \(l\le m-1\). \(\square \)

Theorem 5

Let \(A\in {\mathcal {M}}_m\) be a BFM. If \(A^q\le A^r\) for every \(q<r\) and A is row- or column-diagonally dominant, then A is power-convergent and converges to t(A) (transitive closure of A).

Proof

By Theorem 3, if \(A^q\le A^r\) for every \(q<r\), then A converges. Taking \(q=1\), \(r=2\), we get \(A\le A^2\). Similarly, \(A^2\le A^3\le A^4\le \cdots \). Now

$$\begin{aligned} t(A)=\sum \limits _{k=1}^m A^k=A+A^2+A^3+\cdots +A^m. \end{aligned}$$

Again since A is row- or column-diagonally dominant, A converges to \(A^l\) for some \(l\le m-1\). Then,

$$\begin{aligned} A\le A^2\le A^3\le \cdots \le A^l=A^{l+1}=A^{l+2}=\cdots =A^m. \end{aligned}$$

Therefore, \(t(A)=A^l\).

Thus, A power-converges to t(A). \(\square \)

Example 7

The matrix A of order 3 in Example 6 is both row- and column-diagonally dominant, and \(i(A)=2=3-1\). If all entries of a matrix A are the same, then the BFM power-converges with index 1.

Also, for the matrix A of Example 6, \(A\le A^2\). Therefore, A converges to

$$\begin{aligned} t(A)=A+A^2+A^3=A+A^2+A^2=A^2\quad [\text{ since } A^3=A^2] \end{aligned}$$

7 An application on online education

Nowadays, online education is a very popular learning system. However, the effectiveness of the entire system depends on various characteristics, such as the strength of the network signal, the writing and presentation quality of the teacher, the authenticity of the site, the teacher's knowledge of the topic and the capability of the learner.

Suppose a group of five students wants to learn a topic through an online learning system, and assume that there are six valid websites for learning this topic. We take the positive membership value as the learning level of a student, and the negative membership value as the inability to achieve that learning level from the lecture of a particular site. Then, the 'students and sites' learning system can be written as the following matrix:

Here, \(W_1,\ W_2,\ W_3,\ W_4,\ W_5,\ W_6\) are six different websites and \(S_1,\ S_2,\ S_3,\ S_4,\ S_5\) are five different students. In the first entry, \((-0.2,0.6)\), the values 0.6 and \(-0.2\) represent, respectively, the learning and non-learning capacity of student \(S_1\) with respect to website \(W_1\).

8 Conclusion

It is well known that matrix theory is an essential tool for modeling a large number of problems that occur in science, engineering, medical science and even social science. At the same time, the world is full of uncertainty, and in many problems it is observed that the same attribute carries both positive and negative information. So, the bipolar fuzzy matrix is now essential for modeling and solving problems containing bipolar information. In this paper, we introduce for the first time the bipolar fuzzy relation and the bipolar fuzzy matrix based on bipolar fuzzy algebras. Also, some results on the transitive closure and the convergence of powers are investigated. Further work can be done on the bipolar fuzzy matrix: its determinant, invertible matrices, rank, eigenvalues, etc. We are working on these topics.