1 Introduction

Multi-attribute group decision making (MAGDM) theory has been applied in many fields, such as economics, management science, and engineering [1–5]. Due to the complexity and uncertainty of objective things and the ambiguity of human thinking, many attributes are more suitably described by Zadeh’s fuzzy sets [6], interval numbers [7], triangular fuzzy numbers [8], intuitionistic fuzzy sets (IFSs) [9], and interval-valued intuitionistic fuzzy sets (IVIFSs) [10]. Atanassov [11, 12] proposed IFSs and IVIFSs, which can be seen as extensions of Zadeh’s fuzzy set. By introducing the non-membership and hesitancy degrees, IFSs and IVIFSs are more suitable than numerical values, fuzzy sets, or linguistic variables for expressing the decision maker’s satisfaction and/or dissatisfaction degrees [13–16]. Many studies have also revealed that IFSs and IVIFSs are useful tools for handling imprecise data and vague expressions, and there are many studies of MAGDM problems under intuitionistic fuzzy and/or interval-valued intuitionistic fuzzy environments. However, there is still no fully satisfactory ranking function for IFSs and IVIFSs, because the existing ranking functions all have their limitations. For the comparison of two IVIFSs, Xu and Chen [17] introduced the concepts of the score function and the accuracy degree function. To improve the ranking result, Wang et al. [18] proposed another two indices to supplement Xu and Chen’s ranking procedure. Several other ranking functions exist as well. Ye [19] proposed a new accuracy degree function to rank IVIFSs, but Wang [20] pointed out that Ye’s method has some mistakes. Wu and Chiclana [21, 22] introduced attitudinal expected score functions based on Yager’s continuous OWA (COWA) operator. Wang et al. [23] proposed a score function of IVIFSs using cumulative prospect theory, but its calculation is complex.

Inspired by Zhang and Xu [24], this paper further develops a new, effective ranking function of IVIFSs based on the concept of TOPSIS. TOPSIS is one of the important techniques for multi-attribute decision making (MADM) problems. It simultaneously considers the shortest distance from a positive ideal solution (PIS) and the farthest distance from a negative ideal solution (NIS), and alternatives are thereby ranked according to their relative closeness coefficients with respect to the PIS [25, 26]. TOPSIS has been widely applied to crisp and fuzzy MADM problems [27–29]. Although Zhang and Xu’s method is interesting, their ranking function can also produce counter-intuitive cases. Szmidt and Kacprzyk [30] pointed out that a well-constructed ranking function needs to simultaneously consider the amount and the reliability of IVIF information. Motivated by [24, 30], this paper proposes a new ranking function of IVIFSs, which combines the concept of TOPSIS with the amount and reliability information of IVIFSs.

The rest of the paper is organized as follows. Section 2 briefly introduces some basic concepts of IVIFSs and ranking measures. Section 3 develops the ranking method of IVIFSs considering the amount and reliability information of IVIFSs. Section 4 puts forward a new MAGDM method with attribute values expressed as IVIFNs. Section 5 studies a numerical example to show the applicability and feasibility of the proposed method. Section 6 concludes the paper.

2 Some Concepts and Notations of Intuitionistic Fuzzy and Interval-Valued Intuitionistic Fuzzy Sets

In what follows, some basic concepts of IFSs and IVIFSs are introduced to facilitate the discussions.

2.1 Intuitionistic Fuzzy Sets

Atanassov [11] proposed the concept of IFSs as follows.

Definition 1

[11] Let \(Z = \{ z_{1} ,z_{2} , \ldots ,z_{n} \}\) be a finite universe of discourse, then

$$U = \{ {<}z_{j} ,\mu_{U} (z_{j} ),\upsilon_{U} (z_{j} ){>}\,|z_{j} \in Z\}$$
(1)

is called an IFS, which assigns to each element \(z_{j}\) a membership degree \(\mu_{U} (z_{j} )\) and a nonmembership degree \(\upsilon_{U} (z_{j} )\), where \(\mu_{U} (z_{j} ) \in [0,1]\), \(\upsilon_{U} (z_{j} ) \in [0,1]\), and \(0 \le \mu_{U} (z_{j} ) + \upsilon_{U} (z_{j} ) \le 1\). Denote

$$\pi_{U} (z_{j} ) = 1 - \mu_{U} (z_{j} ) - \upsilon_{U} (z_{j} )$$
(2)

which is called the hesitation degree or intuitionistic index of an element \(z_{j}\) in \(U\). It reflects the uncertainty of the information and thus helps the decision maker describe fuzzy information. Obviously, \(0 \le \pi_{U} (z_{j} ) \le 1\) for every \(z_{j} \in Z\). If \(\pi_{U} (z_{j} ) = 0\), then the IFS \(U\) reduces to a fuzzy set, i.e., \(U = \{{<} z_{j} ,\mu_{U} (z_{j} ),1 - \mu_{U} (z_{j} ){>}\, |z_{j} \in Z\}\). The set

$$U^{C} = \{{<} z_{j} ,\upsilon_{U} (z_{j} ),\mu_{U} (z_{j} ){>}\, |z_{j} \in Z\}$$
(3)

is called the complement of \(U\).

When there is only one element in \(Z\), we briefly write the IFS \(U\) given in Eq. (1) as U = <\(\mu ,\upsilon\)>.

2.2 Interval-Valued Intuitionistic Fuzzy Sets

In some situations, it is very difficult to express \(\mu_{U} (z_{j} )\) and \(\upsilon_{U} (z_{j} )\) precisely with crisp numbers owing to the complexity and uncertainty of objective things, but we can express them with intervals. For this reason, Atanassov and Gargov [12] extended the IFS to the IVIFS.

Definition 2

[12] Let \(Z = \{ z_{1} ,z_{2} , \ldots ,z_{n} \}\) be a finite universe of discourse, then

$$\tilde{U} = \{{<} z_{j} ,\tilde{\mu }_{{\tilde{U}}} (z_{j} ),\tilde{\upsilon }_{{\tilde{U}}} (z_{j} ){>}\, |z_{j} \in Z\}$$
(4)

is called an IVIFS, where \(\tilde{\mu }_{{\tilde{U}}} (z_{j} ) = [\mu_{{\tilde{U}}}^{ - } (z_{j} ),\mu_{{\tilde{U}}}^{ + } (z_{j} )]\) and \(\tilde{\upsilon }_{{\tilde{U}}} (z_{j} ) = [\upsilon_{{\tilde{U}}}^{ - } (z_{j} ),\upsilon_{{\tilde{U}}}^{ + } (z_{j} )]\) are intervals expressing the membership and non-membership degrees of \(z_{j}\). \({<} z_{j} ,\tilde{\mu }_{{\tilde{U}}} (z_{j} ),\tilde{\upsilon }_{{\tilde{U}}} (z_{j}){>}\) is called an IVIF value (IVIFV) or an IVIF number (IVIFN) [17]. The hesitation degree of an IVIFN \({<} \tilde{\mu }_{{\tilde{U}}} (z_{j} ),\tilde{\upsilon }_{{\tilde{U}}} (z_{j} ){>}\) is defined as \(\tilde{\pi }_{{\tilde{U}}} (z_{j} ) = [\pi_{{\tilde{U}}}^{ - } (z_{j} ),\pi_{{\tilde{U}}}^{ + } (z_{j} )]\), where \(\pi_{{\tilde{U}}}^{ - } (z_{j} ) = 1 - \mu_{{\tilde{U}}}^{ + } (z_{j} ) - \upsilon_{{\tilde{U}}}^{ + } (z_{j} )\) and \(\pi_{{\tilde{U}}}^{ + } (z_{j} ) = 1 - \mu_{{\tilde{U}}}^{ - } (z_{j} ) - \upsilon_{{\tilde{U}}}^{ - } (z_{j} )\) for all \(z_{j} \in Z\).

Denote an IVIFN \({<} \tilde{\mu }_{{\tilde{U}}} (z_{j} ),\tilde{\upsilon }_{{\tilde{U}}} (z_{j} ){>}\) by \(\tilde{A} = {<} \tilde{\mu }_{{\tilde{A}}} ,\tilde{\upsilon }_{{\tilde{A}}}{>}\) or \(\tilde{A} = {<} \tilde{\mu }_{{\tilde{A}}} ,\tilde{\upsilon }_{{\tilde{A}}} ,\tilde{\pi }_{{\tilde{A}}}{>}\), where

$$\tilde{\mu }_{{\tilde{A}}} = [\mu_{{\tilde{A}}}^{ - } ,\mu_{{\tilde{A}}}^{ + } ] \subset [0,1],\quad \tilde{\upsilon }_{{\tilde{A}}} = [\upsilon_{{\tilde{A}}}^{ - } ,\upsilon_{{\tilde{A}}}^{ + } ] \subset [0,1],\quad \mu_{{\tilde{A}}}^{ + } + \upsilon_{{\tilde{A}}}^{ + } \le 1$$
(5)
$$\tilde{\pi }_{{\tilde{A}}} = [\pi_{{\tilde{A}}}^{ - } ,\pi_{{\tilde{A}}}^{ + } ] \subset [0,1],\quad \pi_{{\tilde{A}}}^{ - } = 1 - \mu_{{\tilde{A}}}^{ + } - \upsilon_{{\tilde{A}}}^{ + } ,\quad \pi_{{\tilde{A}}}^{ + } = 1 - \mu_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ - }$$
(6)

The complementary set \(\tilde{A}^{c}\) of an IVIFN \(\tilde{A}\) is defined as \(\tilde{A}^{c} = {<} \tilde{\upsilon }_{{\tilde{A}}} ,\tilde{\mu }_{{\tilde{A}}}{>}\).
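To make these notions concrete, the following minimal Python sketch wraps an IVIFN together with the validity conditions of Eq. (5), the hesitation interval of Eq. (6), and the complement. The class and field names are our own illustrative choices, not notation from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IVIFN:
    mu_lo: float  # mu^-: lower bound of the membership interval
    mu_hi: float  # mu^+: upper bound of the membership interval
    nu_lo: float  # nu^-: lower bound of the non-membership interval
    nu_hi: float  # nu^+: upper bound of the non-membership interval

    def __post_init__(self):
        # Validity conditions of Eq. (5): intervals within [0,1], mu^+ + nu^+ <= 1.
        assert 0.0 <= self.mu_lo <= self.mu_hi <= 1.0
        assert 0.0 <= self.nu_lo <= self.nu_hi <= 1.0
        assert self.mu_hi + self.nu_hi <= 1.0 + 1e-12

    @property
    def pi(self):
        # Hesitation interval [pi^-, pi^+] of Eq. (6).
        return (1.0 - self.mu_hi - self.nu_hi, 1.0 - self.mu_lo - self.nu_lo)

    def complement(self):
        # A^c = <nu, mu>: membership and non-membership intervals swapped.
        return IVIFN(self.nu_lo, self.nu_hi, self.mu_lo, self.mu_hi)

A = IVIFN(0.4, 0.5, 0.2, 0.3)
print(A.pi)           # approximately (0.2, 0.4)
print(A.complement()) # IVIFN(mu_lo=0.2, mu_hi=0.3, nu_lo=0.4, nu_hi=0.5)
```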

Definition 3

[12] Let \(\tilde{A}_{i} = {<} \tilde{\mu }_{{\tilde{A}_{i} }} ,\tilde{\upsilon }_{{\tilde{A}_{i} }}{>}\) \((i = 1,2)\) be any IVIFNs, then

  1. (1)

    If \(\mu_{{\tilde{A}_{1} }}^{ - } \le \mu_{{\tilde{A}_{2} }}^{ - } ,\mu_{{\tilde{A}_{1} }}^{ + } \le \mu_{{\tilde{A}_{2} }}^{ + }\) and \(\upsilon_{{\tilde{A}_{1} }}^{ - } \ge \upsilon_{{\tilde{A}_{2} }}^{ - } ,\upsilon_{{\tilde{A}_{1} }}^{ + } \ge \upsilon_{{\tilde{A}_{2} }}^{ + }\), then \(\tilde{A}_{1} \;\) is not bigger than \(\tilde{A}_{2}\), denoted by \(\tilde{A}_{1} \le \tilde{A}_{2}\);

  2. (2)

    If \(\tilde{A}_{1} \le \tilde{A}_{2}\) and \(\tilde{A}_{1} \ge \tilde{A}_{2}\), then \(\tilde{A}_{1} \;\) is equal to \(\tilde{A}_{2}\).

From Definition 3, \(\tilde{A}^{ * } = {<} [1,1],[0,0]{>}\) is the biggest IVIFN and \(\tilde{A}^{ - } = {<} [0,0],[1,1]{>}\) is the smallest IVIFN.

For the comparison of two IVIFNs, Xu and Chen [17] introduced the concepts of the score function \(S(\tilde{A})\) and the accuracy degree function \(H(\tilde{A})\). Namely, let \(\tilde{A} = {<} \tilde{\mu }_{{\tilde{A}}} ,\tilde{\upsilon }_{{\tilde{A}}} ,\tilde{\pi }_{{\tilde{A}}}{>}\) be an IVIFN, then the score function of \(\tilde{A}\) is defined as follows:

$$S(\tilde{A}) = \frac{1}{2}\left( {\mu_{{\tilde{A}}}^{ - } + \mu_{{\tilde{A}}}^{ + } - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } } \right)$$
(7)

The accuracy function is defined as follows:

$$H(\tilde{A}) = \frac{1}{2}\left( {\mu_{{\tilde{A}}}^{ - } + \mu_{{\tilde{A}}}^{ + } + \upsilon_{{\tilde{A}}}^{ - } + \upsilon_{{\tilde{A}}}^{ + } } \right)$$
(8)

Xu and Chen [17] gave the following definition to compare two IVIFNs:

Definition 4

Let \(\tilde{A}_{i} = {<} \tilde{\mu }_{{\tilde{A}_{i} }} ,\tilde{\upsilon }_{{\tilde{A}_{i} }}{>}\) \((i = 1,2)\) be any two IVIFNs, \(S(\tilde{A}_{i} )\) and \(H(\tilde{A}_{i} )\;(i = 1,2)\) are respectively the score and accuracy functions of \(\tilde{A}_{i}\), then

  1. (1)

    If \(S(\tilde{A}_{1} )\; > S(\tilde{A}_{2} )\), then \(\tilde{A}_{1} \;\) is larger than \(\tilde{A}_{2}\), denoted by \(\tilde{A}_{1} > \tilde{A}_{2}\);

  2. (2)

    If \(S(\tilde{A}_{1} )\; = S(\tilde{A}_{2} )\), then

    1. (a)

      If \(H(\tilde{A}_{1} )\; = H(\tilde{A}_{2} )\), then there is no difference between \(\tilde{A}_{1} \;\) and \(\tilde{A}_{2}\), denoted by \(\tilde{A}_{1} \sim \tilde{A}_{2}\);

    2. (b)

      If \(H(\tilde{A}_{1} )\; > H(\tilde{A}_{2} )\), then \(\tilde{A}_{1}\) is larger than \(\tilde{A}_{2}\), denoted by \(\tilde{A}_{1} > \tilde{A}_{2}\).

Wang et al. [18] proposed another two indices to supplement the ranking procedure. These two indices are called the membership uncertain index \(g_{1} (\tilde{A})\) and the hesitation uncertain index \(g_{2} (\tilde{A})\). They are defined as follows:

$$g_{1} (\tilde{A}) = \mu_{{\tilde{A}}}^{ + } - \mu_{{\tilde{A}}}^{ - } + \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + }$$
(9)

and

$$g_{2} (\tilde{A}) = \mu_{{\tilde{A}}}^{ + } - \mu_{{\tilde{A}}}^{ - } + \upsilon_{{\tilde{A}}}^{ + } - \upsilon_{{\tilde{A}}}^{ - }$$
(10)

In the case where \(S(\tilde{A}_{1} )\; = S(\tilde{A}_{2} )\) and \(H(\tilde{A}_{1} )\; = H(\tilde{A}_{2} )\), one can further consider these two indices:

  1. (1)

    If \(g_{1} (\tilde{A}_{1} )\; < g_{1} (\tilde{A}_{2} )\), then \(\tilde{A}_{1} \;\) is larger than \(\tilde{A}_{2}\), denoted by \(\tilde{A}_{1} > \tilde{A}_{2}\);

  2. (2)

    If \(g_{1} (\tilde{A}_{1} )\; = g_{1} (\tilde{A}_{2} )\), then

    1. (a)

      If \(g_{2} (\tilde{A}_{1} )\; < g_{2} (\tilde{A}_{2} )\), then \(\tilde{A}_{1} \;\) is larger than \(\tilde{A}_{2}\), denoted by \(\tilde{A}_{1} > \tilde{A}_{2}\);

    2. (b)

      If \(g_{2} (\tilde{A}_{1} )\; = g_{2} (\tilde{A}_{2} )\), then \(\tilde{A}_{1} \;\) is equal to \(\tilde{A}_{2}\), denoted by \(\tilde{A}_{1} = \tilde{A}_{2}\).
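The whole comparison procedure of Eqs. (7)–(10) amounts to a lexicographic test on \((S, H, g_{1}, g_{2})\): larger \(S\) and \(H\) indicate the larger IVIFN, while smaller \(g_{1}\) and \(g_{2}\) break the remaining ties. The following Python sketch (tuple layout and names are our own illustration) implements this rule.

```python
# IVIFNs are plain tuples (mu_lo, mu_hi, nu_lo, nu_hi).

def S(a):   # score function, Eq. (7)
    return (a[0] + a[1] - a[2] - a[3]) / 2

def H(a):   # accuracy function, Eq. (8)
    return (a[0] + a[1] + a[2] + a[3]) / 2

def g1(a):  # membership uncertain index, Eq. (9)
    return a[1] - a[0] + a[2] - a[3]

def g2(a):  # hesitation uncertain index, Eq. (10)
    return a[1] - a[0] + a[3] - a[2]

def compare(a, b, tol=1e-9):
    """Return 1 if a > b, -1 if a < b, 0 if equal under the combined rules."""
    # Higher S wins, higher H breaks S-ties, LOWER g1 and g2 break further ties.
    for key, sign in ((S, 1), (H, 1), (g1, -1), (g2, -1)):
        da, db = key(a), key(b)
        if abs(da - db) > tol:  # treat near-equal floats as ties
            return sign if da > db else -sign
    return 0

a1 = (0.25, 0.5, 0.25, 0.5)
a2 = (0.125, 0.625, 0.25, 0.5)
print(compare(a1, a2))  # 1: a1 > a2 via the g1 tie-break (S and H are equal)
```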

There are also several ranking functions. For example, Ye [19] proposed a new accuracy function as follows:

$${\text{YH}}(\tilde{A}) = \mu_{{\tilde{A}}}^{ - } + \mu_{{\tilde{A}}}^{ + } + \frac{{\upsilon_{{\tilde{A}}}^{ - } + \upsilon_{{\tilde{A}}}^{ + } }}{2} - 1$$
(11)

Wang [20] pointed out that Ye’s method has some mistakes.

Wu and Chiclana [21, 22] introduced new attitudinal expected score functions based on Yager’s COWA operator, in which \(\lambda \in [0,1]\) reflects the decision maker’s attitudinal character. The formulas are given as follows:

$${\text{AES}}1(\tilde{A}) = \frac{{(1 - \lambda )(\mu_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ - } ) + \lambda (\mu_{{\tilde{A}}}^{ + } - \upsilon_{{\tilde{A}}}^{ + } ) + 1}}{2}$$
(12)

and

$${\text{AES}}2(\tilde{A}) = (1 - \lambda )(\mu_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } ) + \lambda (\mu_{{\tilde{A}}}^{ + } - \upsilon_{{\tilde{A}}}^{ - } )$$
(13)

Xu [31] introduced the distance measure between two IVIFSs \(\tilde{B}_{1}\) and \(\tilde{B}_{2}\) as follows:

$$d(\tilde{B}_{1} ,\tilde{B}_{2} ) = \frac{1}{4n}\sum\limits_{j = 1}^{n} {\left[ {\left| {\mu_{{\tilde{B}_{1} }}^{ - } (z_{j} ) - \mu_{{\tilde{B}_{2} }}^{ - } (z_{j} )} \right| + \left| {\mu_{{\tilde{B}_{1} }}^{ + } (z_{j} ) - \mu_{{\tilde{B}_{2} }}^{ + } (z_{j} )} \right|} \right.} + \left| {\upsilon_{{\tilde{B}_{1} }}^{ - } (z_{j} ) - \upsilon_{{\tilde{B}_{2} }}^{ - } (z_{j} )} \right| + \left| {\upsilon_{{\tilde{B}_{1} }}^{ + } (z_{j} ) - \upsilon_{{\tilde{B}_{2} }}^{ + } (z_{j} )} \right|\left. { + \left| {\pi_{{\tilde{B}_{1} }}^{ - } (z_{j} ) - \pi_{{\tilde{B}_{2} }}^{ - } (z_{j} )} \right| + \left| {\pi_{{\tilde{B}_{1} }}^{ + } (z_{j} ) - \pi_{{\tilde{B}_{2} }}^{ + } (z_{j} )} \right|} \right]$$
(14)

Motivated by Eq. (14), the distance measure between IVIFNs \(\tilde{A}_{i} = {<} \tilde{\mu }_{{\tilde{A}_{i} }} ,\tilde{\upsilon }_{{\tilde{A}_{i} }}{>}\) \((i = 1,2)\) is defined as

$$d(\tilde{A}_{1} ,\tilde{A}_{2} ) = \frac{1}{4}\left[ {\left| {\mu_{{\tilde{A}_{1} }}^{ - } - \mu_{{\tilde{A}_{2} }}^{ - } } \right| + \left| {\mu_{{\tilde{A}_{1} }}^{ + } - \mu_{{\tilde{A}_{2} }}^{ + } } \right|} \right. + \left| {\upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{2} }}^{ - } } \right| + \left| {\upsilon_{{\tilde{A}_{1} }}^{ + } - \upsilon_{{\tilde{A}_{2} }}^{ + } } \right|\left. { + \left| {\pi_{{\tilde{A}_{1} }}^{ - } - \pi_{{\tilde{A}_{2} }}^{ - } } \right| + \left| {\pi_{{\tilde{A}_{1} }}^{ + } - \pi_{{\tilde{A}_{2} }}^{ + } } \right|} \right]$$
(15)

Let \(\tilde{A} = {<} \tilde{\mu }_{{\tilde{A}}} ,\tilde{\upsilon }_{{\tilde{A}}}{>} = {<} [\mu_{{\tilde{A}}}^{ - } ,\mu_{{\tilde{A}}}^{ + } ],[\upsilon_{{\tilde{A}}}^{ - } ,\upsilon_{{\tilde{A}}}^{ + } ]{>}\), \(\tilde{A}^{ * } = {<} [1,1],[0,0]{>}\) and \(\tilde{A}^{ - } = {<} [0,0],[1,1]{>}\), then according to Eq. (15), the distance measures of \(\tilde{A}\) with respect to \(\tilde{A}^{ * }\) and \(\tilde{A}^{ - }\) are given as follows:

$$d(\tilde{A},\tilde{A}^{ * } ) = \frac{1}{4}\left[ {\left| {1 - \mu_{{\tilde{A}}}^{ - } } \right| + \left| {1 - \mu_{{\tilde{A}}}^{ + } } \right|} \right. + \left| {\upsilon_{{\tilde{A}}}^{ - } } \right| + \left| {\upsilon_{{\tilde{A}}}^{ + } } \right|\left. { + \left| {\pi_{{\tilde{A}}}^{ - } } \right| + \left| {\pi_{{\tilde{A}}}^{ + } } \right|} \right] = \frac{1}{2}\left( {2 - \mu_{{\tilde{A}}}^{ - } - \mu_{{\tilde{A}}}^{ + } } \right)$$
(16)

and

$$d(\tilde{A},\tilde{A}^{ - } ) = \frac{1}{4}\left[ {\left| {\mu_{{\tilde{A}}}^{ - } } \right| + \left| {\mu_{{\tilde{A}}}^{ + } } \right|} \right. + \left| {1 - \upsilon_{{\tilde{A}}}^{ - } } \right| + \left| {1 - \upsilon_{{\tilde{A}}}^{ + } } \right|\left. { + \left| {\pi_{{\tilde{A}}}^{ - } } \right| + \left| {\pi_{{\tilde{A}}}^{ + } } \right|} \right] = \frac{1}{2}\left( {2 - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } } \right)$$
(17)

According to the idea of TOPSIS [25], we define the closeness coefficient of \(\tilde{A}\) as follows:

$$C(\tilde{A}) = \frac{{d(\tilde{A},\tilde{A}^{ - } )}}{{d(\tilde{A},\tilde{A}^{ - } ) + d(\tilde{A},\tilde{A}^{ * } )}}$$
(18)

That is,

$$C(\tilde{A}) = \frac{{2 - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } }}{{2 - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } + 2 - \mu_{{\tilde{A}}}^{ - } - \mu_{{\tilde{A}}}^{ + } }}$$
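A minimal Python sketch of Eqs. (15)–(18), using an illustrative tuple layout (mu_lo, mu_hi, nu_lo, nu_hi); for the sample IVIFN the printed distances match the closed forms of Eqs. (16) and (17).

```python
def hesitation(a):
    # [pi^-, pi^+] of an IVIFN given as (mu_lo, mu_hi, nu_lo, nu_hi).
    return (1 - a[1] - a[3], 1 - a[0] - a[2])

def dist(a, b):  # normalized Hamming-type distance, Eq. (15)
    pa, pb = hesitation(a), hesitation(b)
    return (abs(a[0] - b[0]) + abs(a[1] - b[1]) + abs(a[2] - b[2])
            + abs(a[3] - b[3]) + abs(pa[0] - pb[0]) + abs(pa[1] - pb[1])) / 4

POS = (1.0, 1.0, 0.0, 0.0)  # A* = <[1,1],[0,0]>, the biggest IVIFN
NEG = (0.0, 0.0, 1.0, 1.0)  # A- = <[0,0],[1,1]>, the smallest IVIFN

def closeness(a):  # TOPSIS closeness coefficient, Eq. (18)
    d_neg, d_pos = dist(a, NEG), dist(a, POS)
    return d_neg / (d_neg + d_pos)

a = (0.4, 0.5, 0.2, 0.3)
# Eq. (16): d(A, A*) = (2 - 0.4 - 0.5)/2 = 0.55; Eq. (17): d(A, A-) = 0.75.
print(dist(a, POS), dist(a, NEG), closeness(a))  # approx 0.55, 0.75, 0.577
```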

All the above-mentioned ranking functions do not simultaneously consider the amount (i.e., \(S(\tilde{A}) = (\mu_{{\tilde{A}}}^{ - } + \mu_{{\tilde{A}}}^{ + } - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } )/2\)) and reliability (i.e., \(1/[1 + (\pi_{{\tilde{A}}}^{ - } + \pi_{{\tilde{A}}}^{ + } )/2]\)) information of IVIFSs, and their ranking results sometimes exhibit counter-intuitive phenomena, as Szmidt and Kacprzyk [30] argued. Motivated by Eq. (18) and the suggestion of Szmidt and Kacprzyk [30], we will construct a new ranking function of IVIFSs in the following section. Then, based on the new ranking function, we will develop a new MAGDM method, whose detailed process is shown in Fig. 1.

Fig. 1 The calculation process of the proposed method

3 The New Ranking Index Considering the Amount and Reliability Information of IVIFSs

In this section, we focus on establishing a new ranking method of IVIFSs. Firstly, we give the definition of a ranking function \(R\) as follows.

Definition 5

Let \(\tilde{A} = {<} \tilde{\mu }_{{\tilde{A}}} ,\tilde{\upsilon }_{{\tilde{A}}}{>}\) be any IVIFN. Then, the ranking function \(R\) of the IVIFN \(\tilde{A}\) is defined as follows:

$$R(\tilde{A}) = \frac{1}{3}\left( {\frac{{\mu_{{\tilde{A}}}^{ - } + \mu_{{\tilde{A}}}^{ + } }}{2} + \frac{{1 + S(\tilde{A})}}{{1 + (\pi_{{\tilde{A}}}^{ - } + \pi_{{\tilde{A}}}^{ + } )/2}}} \right)\frac{{d(\tilde{A},\tilde{A}^{ - } )}}{{d(\tilde{A},\tilde{A}^{ - } ) + d(\tilde{A},\tilde{A}^{ * } )}}$$
(19)

which is usually called the \(R\) value of the IVIFN for short.

It is easily derived from Eq. (19) that

$$R(\tilde{A}) = \frac{1}{3}\left( {\frac{{\mu_{{\tilde{A}}}^{ - } + \mu_{{\tilde{A}}}^{ + } }}{2} + \frac{{2 + \mu_{{\tilde{A}}}^{ - } + \mu_{{\tilde{A}}}^{ + } - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } }}{{2 - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } + 2 - \mu_{{\tilde{A}}}^{ - } - \mu_{{\tilde{A}}}^{ + } }}} \right)\frac{{2 - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } }}{{2 - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } + 2 - \mu_{{\tilde{A}}}^{ - } - \mu_{{\tilde{A}}}^{ + } }}$$
(20)

It is easy to see that \(0 \le R(\tilde{A}) \le 1\).

For \(\tilde{A}^{ * } = {<} [1,1],[0,0]{>}\), we have \(R(\tilde{A}^{ * } ) = 1\); for \(\tilde{A}^{ - } = {<} [0,0],[1,1]{>}\), we have \(R(\tilde{A}^{ - } ) = 0\). From Eq. (19), we can see that the \(R\) value contains both the amount information (i.e., \(S(\tilde{A}) = (\mu_{{\tilde{A}}}^{ - } + \mu_{{\tilde{A}}}^{ + } - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } )/2\)) and the reliability information (i.e., \(1/[1 + (\pi_{{\tilde{A}}}^{ - } + \pi_{{\tilde{A}}}^{ + } )/2]\)) of IVIFSs. The \(R\) value has some good properties, given below.
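Before turning to these properties, note that the closed form of Eq. (20) makes the \(R\) value a few lines of code. The sketch below (illustrative tuple layout, as before) also verifies the two boundary cases just mentioned.

```python
def r_value(a):
    # R value of an IVIFN (mu_lo, mu_hi, nu_lo, nu_hi) via Eq. (20).
    mu, nu = a[0] + a[1], a[2] + a[3]   # mu^- + mu^+ and nu^- + nu^+
    den = (2 - nu) + (2 - mu)           # common denominator in Eq. (20)
    amount = (2 + mu - nu) / den        # equals (1 + S)/(1 + (pi^- + pi^+)/2)
    closeness = (2 - nu) / den          # TOPSIS closeness of Eq. (18)
    return (mu / 2 + amount) * closeness / 3

print(r_value((1.0, 1.0, 0.0, 0.0)))  # 1.0 for the biggest IVIFN A*
print(r_value((0.0, 0.0, 1.0, 1.0)))  # 0.0 for the smallest IVIFN A-
```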

Proposition 1

Let \(\tilde{A}_{i} = {<} \tilde{\mu }_{{\tilde{A}_{i} }} ,\tilde{\upsilon }_{{\tilde{A}_{i} }}{>}\) \((i = 1,2)\) be any two IVIFNs. Assume that \(\mu_{{\tilde{A}_{1} }}^{ - } \ge \mu_{{\tilde{A}_{2} }}^{ - } ,\mu_{{\tilde{A}_{1} }}^{ + } \ge \mu_{{\tilde{A}_{2} }}^{ + }\) and \(\upsilon_{{\tilde{A}_{1} }}^{ - } \le \upsilon_{{\tilde{A}_{2} }}^{ - } ,\upsilon_{{\tilde{A}_{1} }}^{ + } \le \upsilon_{{\tilde{A}_{2} }}^{ + }\) , then \(R(\tilde{A}_{2} ) \le R(\tilde{A}_{1} )\).

Proof

Let \(\tilde{A}_{i} = {<} \tilde{\mu }_{{\tilde{A}_{i} }} ,\tilde{\upsilon }_{{\tilde{A}_{i} }}{>}\) \((i = 1,2)\) be any IVIFNs, and suppose that \(\mu_{{\tilde{A}_{1} }}^{ - } \ge \mu_{{\tilde{A}_{2} }}^{ - }\),\(\mu_{{\tilde{A}_{1} }}^{ + } \ge \mu_{{\tilde{A}_{2} }}^{ + }\) and \(\upsilon_{{\tilde{A}_{1} }}^{ - } \le \upsilon_{{\tilde{A}_{2} }}^{ - }\), \(\upsilon_{{\tilde{A}_{1} }}^{ + } \le \upsilon_{{\tilde{A}_{2} }}^{ + }\).

Denote \(\Delta_{1}^{ - } = \mu_{{\tilde{A}_{1} }}^{ - } - \mu_{{\tilde{A}_{2} }}^{ - }\), \(\Delta_{1}^{ + } = \mu_{{\tilde{A}_{1} }}^{ + } - \mu_{{\tilde{A}_{2} }}^{ + }\) and \(\Delta_{2}^{ - } = \upsilon_{{\tilde{A}_{2} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ - }\), \(\Delta_{2}^{ + } = \upsilon_{{\tilde{A}_{2} }}^{ + } - \upsilon_{{\tilde{A}_{1} }}^{ + }\); then by Eq. (20), we get

$$R(\tilde{A}_{1} ) = \frac{1}{3}\left( {\frac{{\mu_{{\tilde{A}_{1} }}^{ - } + \mu_{{\tilde{A}_{1} }}^{ + } }}{2} + \frac{{2 + \mu_{{\tilde{A}_{1} }}^{ - } + \mu_{{\tilde{A}_{1} }}^{ + } - \upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ + } }}{{2 - \upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ + } + 2 - \mu_{{\tilde{A}_{1} }}^{ - } - \mu_{{\tilde{A}_{1} }}^{ + } }}} \right)\frac{{2 - \upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ + } }}{{2 - \upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ + } + 2 - \mu_{{\tilde{A}_{1} }}^{ - } - \mu_{{\tilde{A}_{1} }}^{ + } }} \ge \frac{1}{3}\left[ {\frac{{\mu_{{\tilde{A}_{1} }}^{ - } + \mu_{{\tilde{A}_{1} }}^{ + } }}{2} + \frac{{2 + \mu_{{\tilde{A}_{1} }}^{ - } - \Delta_{1}^{ - } + \mu_{{\tilde{A}_{1} }}^{ + } - \Delta_{1}^{ + } - \upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ + } }}{{2 - \upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ + } + 2 - \left( {\mu_{{\tilde{A}_{1} }}^{ - } - \Delta_{1}^{ - } } \right) - \left( {\mu_{{\tilde{A}_{1} }}^{ + } - \Delta_{1}^{ + } } \right)}}} \right]\frac{{2 - \upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ + } }}{{2 - \upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ + } + 2 - \left( {\mu_{{\tilde{A}_{1} }}^{ - } - \Delta_{1}^{ - } } \right) - \left( {\mu_{{\tilde{A}_{1} }}^{ + } - \Delta_{1}^{ + } } \right)}} = \frac{1}{3}\left[ {\frac{{\mu_{{\tilde{A}_{1} }}^{ - } + \mu_{{\tilde{A}_{1} }}^{ + } }}{2} + \frac{{2 + \mu_{{\tilde{A}_{2} }}^{ - } + \mu_{{\tilde{A}_{2} }}^{ + } - \upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ + } }}{{2 - \upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ + } + 2 - \mu_{{\tilde{A}_{2} }}^{ - } - \mu_{{\tilde{A}_{2} }}^{ + } }}} \right]\frac{{2 - \upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ + } }}{{2 - \upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ + } + 2 - \mu_{{\tilde{A}_{2} }}^{ - } - \mu_{{\tilde{A}_{2} }}^{ + } }},$$

Similarly, we have

$$R(\tilde{A}_{2} ) = \frac{1}{3}\left( {\frac{{\mu_{{\tilde{A}_{2} }}^{ - } + \mu_{{\tilde{A}_{2} }}^{ + } }}{2} + \frac{{2 + \mu_{{\tilde{A}_{2} }}^{ - } + \mu_{{\tilde{A}_{2} }}^{ + } - \upsilon_{{\tilde{A}_{2} }}^{ - } - \upsilon_{{\tilde{A}_{2} }}^{ + } }}{{2 - \upsilon_{{\tilde{A}_{2} }}^{ - } - \upsilon_{{\tilde{A}_{2} }}^{ + } + 2 - \mu_{{\tilde{A}_{2} }}^{ - } - \mu_{{\tilde{A}_{2} }}^{ + } }}} \right)\frac{{2 - \upsilon_{{\tilde{A}_{2} }}^{ - } - \upsilon_{{\tilde{A}_{2} }}^{ + } }}{{2 - \upsilon_{{\tilde{A}_{2} }}^{ - } - \upsilon_{{\tilde{A}_{2} }}^{ + } + 2 - \mu_{{\tilde{A}_{2} }}^{ - } - \mu_{{\tilde{A}_{2} }}^{ + } }} = \frac{1}{3}\left( {\frac{{\mu_{{\tilde{A}_{2} }}^{ - } + \mu_{{\tilde{A}_{2} }}^{ + } }}{2} + 1 - \frac{{2 - 2\mu_{{\tilde{A}_{2} }}^{ - } - 2\mu_{{\tilde{A}_{2} }}^{ + } }}{{2 - \upsilon_{{\tilde{A}_{2} }}^{ - } - \upsilon_{{\tilde{A}_{2} }}^{ + } + 2 - \mu_{{\tilde{A}_{2} }}^{ - } - \mu_{{\tilde{A}_{2} }}^{ + } }}} \right)\left( {1 - \frac{{2 - \mu_{{\tilde{A}_{2} }}^{ - } - \mu_{{\tilde{A}_{2} }}^{ + } }}{{2 - \upsilon_{{\tilde{A}_{2} }}^{ - } - \upsilon_{{\tilde{A}_{2} }}^{ + } + 2 - \mu_{{\tilde{A}_{2} }}^{ - } - \mu_{{\tilde{A}_{2} }}^{ + } }}} \right) \le \frac{1}{3}\left( {\frac{{\mu_{{\tilde{A}_{2} }}^{ - } + \mu_{{\tilde{A}_{2} }}^{ + } }}{2} + 1 - \frac{{2 - 2\mu_{{\tilde{A}_{2} }}^{ - } - 2\mu_{{\tilde{A}_{2} }}^{ + } }}{{2 - \upsilon_{{\tilde{A}_{2} }}^{ - } + \Delta_{2}^{ - } - \upsilon_{{\tilde{A}_{2} }}^{ + } + \Delta_{2}^{ + } + 2 - \mu_{{\tilde{A}_{2} }}^{ - } - \mu_{{\tilde{A}_{2} }}^{ + } }}} \right)\left( {1 - \frac{{2 - \mu_{{\tilde{A}_{2} }}^{ - } - \mu_{{\tilde{A}_{2} }}^{ + } }}{{2 - \upsilon_{{\tilde{A}_{2} }}^{ - } + \Delta_{2}^{ - } - \upsilon_{{\tilde{A}_{2} }}^{ + } + \Delta_{2}^{ + } + 2 - \mu_{{\tilde{A}_{2} }}^{ - } - \mu_{{\tilde{A}_{2} }}^{ + } }}} \right)$$

Thus, we can obtain that

$$R(\tilde{A}_{2} ) \le \frac{1}{3}\left( {\frac{{\mu_{{\tilde{A}_{1} }}^{ - } + \mu_{{\tilde{A}_{1} }}^{ + } }}{2} + \frac{{2 + \mu_{{\tilde{A}_{2} }}^{ - } + \mu_{{\tilde{A}_{2} }}^{ + } - \upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ + } }}{{2 - \upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ + } + 2 - \mu_{{\tilde{A}_{2} }}^{ - } - \mu_{{\tilde{A}_{2} }}^{ + } }}} \right)\frac{{2 - \upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ + } }}{{2 - \upsilon_{{\tilde{A}_{1} }}^{ - } - \upsilon_{{\tilde{A}_{1} }}^{ + } + 2 - \mu_{{\tilde{A}_{2} }}^{ - } - \mu_{{\tilde{A}_{2} }}^{ + } }} \le R(\tilde{A}_{1} )$$

which completes the proof.□

Proposition 2

Let \(\tilde{A} = {<} \tilde{\mu }_{{\tilde{A}}} ,\tilde{\upsilon }_{{\tilde{A}}}{>}\) be any IVIFN, the R value is defined by Eq. (19). If \(\mu_{{\tilde{A}}}^{ - } = \upsilon_{{\tilde{A}}}^{ - }\), \(\mu_{{\tilde{A}}}^{ + } = \upsilon_{{\tilde{A}}}^{ + }\) , then \(R(\tilde{A})\) is increasing with respect to \(\tilde{\mu } = \mu_{{\tilde{A}}}^{ - } + \mu_{{\tilde{A}}}^{ + }\).

Proof

Let \(\tilde{A} = {<} \tilde{\mu }_{{\tilde{A}}} ,\tilde{\upsilon }_{{\tilde{A}}}{>}\) be an IVIFN, and then we can calculate the \(R\) value of \(\tilde{A}\) as follows:

$$R(\tilde{A}) = \frac{1}{3}\left( {\frac{{\mu_{{\tilde{A}}}^{ - } + \mu_{{\tilde{A}}}^{ + } }}{2} + \frac{{2 + \mu_{{\tilde{A}}}^{ - } + \mu_{{\tilde{A}}}^{ + } - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } }}{{2 - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } + 2 - \mu_{{\tilde{A}}}^{ - } - \mu_{{\tilde{A}}}^{ + } }}} \right)\frac{{2 - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } }}{{2 - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } + 2 - \mu_{{\tilde{A}}}^{ - } - \mu_{{\tilde{A}}}^{ + } }}.$$

If \(\mu_{{\tilde{A}}}^{ - } = \upsilon_{{\tilde{A}}}^{ - } ,\mu_{{\tilde{A}}}^{ + } = \upsilon_{{\tilde{A}}}^{ + }\), then

$$\begin{aligned} R(\tilde{A}) &= \frac{1}{3}\left( {\frac{{\mu_{{\tilde{A}}}^{ - } + \mu_{{\tilde{A}}}^{ + } }}{2} + \frac{{2 + \mu_{{\tilde{A}}}^{ - } + \mu_{{\tilde{A}}}^{ + } - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } }}{{2 - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } + 2 - \mu_{{\tilde{A}}}^{ - } - \mu_{{\tilde{A}}}^{ + } }}} \right)\frac{{2 - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } }}{{2 - \upsilon_{{\tilde{A}}}^{ - } - \upsilon_{{\tilde{A}}}^{ + } + 2 - \mu_{{\tilde{A}}}^{ - } - \mu_{{\tilde{A}}}^{ + } }} \\ &= \frac{{\mu_{{\tilde{A}}}^{ - } + \mu_{{\tilde{A}}}^{ + } }}{12} + \frac{1}{{6\left( {2 - \mu_{{\tilde{A}}}^{ - } - \mu_{{\tilde{A}}}^{ + } } \right)}} \end{aligned}$$

is an increasing function with respect to \(\tilde{\mu } = \mu_{{\tilde{A}}}^{ - } + \mu_{{\tilde{A}}}^{ + }\).□

The conclusion of Proposition 2 is consistent with our intuition.

Based on the above analysis, in what follows, we develop a new method for ranking IVIFNs.

Definition 6

Let \(\tilde{A}_{i} = {<} \tilde{\mu }_{{\tilde{A}_{i} }} ,\tilde{\upsilon }_{{\tilde{A}_{i} }}{>}\) \((i = 1,2)\) be any two IVIFNs, then

  1. (1)

    If \(R(\tilde{A}_{1} ) > R(\tilde{A}_{2} )\), then \(\tilde{A}_{1} \;\) is larger than \(\tilde{A}_{2}\), denoted by \(\tilde{A}_{1} > \tilde{A}_{2}\);

  2. (2)

    If \(R(\tilde{A}_{1} ) < R(\tilde{A}_{2} )\), then \(\tilde{A}_{1} \;\) is smaller than \(\tilde{A}_{2}\), denoted by \(\tilde{A}_{1} < \tilde{A}_{2}\);

  3. (3)

If \(R(\tilde{A}_{1} ) = R(\tilde{A}_{2} )\), then \(\tilde{A}_{1} \;\) is equal to \(\tilde{A}_{2}\), denoted by \(\tilde{A}_{1} = \tilde{A}_{2}\).

The \(R\) value considers not only the amount information but also the reliability information when ranking IVIFNs.

Example 1

Let \(\tilde{A}_{1} = {<} [0.4,0.4],[0.1,0.1]{>}\) and \(\tilde{A}_{2} = {<} [0.6,0.6],[0.36,0.36]{>}\) be two IVIFNs; their scores are \(S(\tilde{A}_{1} ) = 0.3\) and \(S(\tilde{A}_{2} ) = 0.24\), so by the methods of [17, 18, 21, 23] the ranking result is \(\tilde{A}_{1} \succ \tilde{A}_{2}\). However, Wang et al. [23] pointed out that the rational result should be \(\tilde{A}_{1} \prec \tilde{A}_{2}\).

Let us consider two candidates \(\tilde{A}_{1}\) and \(\tilde{A}_{2}\). The support ratio of \(\tilde{A}_{1} = {<} [0.4,0.4],[0.1,0.1]{>}\) is 40 %, while the support ratio of \(\tilde{A}_{2} = {<} [0.6,0.6],[0.36,0.36]{>}\) is 60 %, so we should select \(\tilde{A}_{2}\) as the better candidate. To see the performance of the proposed ranking function, we have \(R(\tilde{A}_{1} ) = 0.26\) and \(R(\tilde{A}_{2} ) = 0.3669\); the ranking order is thus \(\tilde{A}_{1} \prec \tilde{A}_{2}\), which agrees with the vote explanation.
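This example can be checked numerically with the closed form of Eq. (20); the standalone sketch below re-states the \(R\) function so it runs on its own. The decimals it prints may differ slightly from the rounded values quoted above, but the ordering \(\tilde{A}_{1} \prec \tilde{A}_{2}\) is the same.

```python
def r_value(a):
    # R value of an IVIFN (mu_lo, mu_hi, nu_lo, nu_hi), closed form of Eq. (20).
    mu, nu = a[0] + a[1], a[2] + a[3]
    den = (2 - nu) + (2 - mu)
    return (mu / 2 + (2 + mu - nu) / den) * ((2 - nu) / den) / 3

A1 = (0.4, 0.4, 0.1, 0.1)
A2 = (0.6, 0.6, 0.36, 0.36)
print(r_value(A1), r_value(A2))   # approx 0.2533 and 0.3677
assert r_value(A1) < r_value(A2)  # A1 < A2, agreeing with the vote reading
```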

Example 2

Given the following five IVIFNs \(\tilde{\alpha }_{1} = {<} [0.6,0.6],[0.05,0.10]{>}\), \(\tilde{\alpha }_{2} = {<} [0.6,0.6],[0.10,0.15]{>}\), \(\tilde{\alpha }_{3} = {<} [0.5,0.55],[0,0.05]{>}\), \(\tilde{\alpha }_{4} = {<} [0.2,0.25],[0.25,0.3]{>}\), and \(\tilde{\alpha }_{5} = {<} [0,0.05],[0.80,0.85]{>}\), which are adopted from [17, 21, 22], we rank them using the methods discussed previously. The computational results are listed in Table 1.

Table 1 The ranking values of the existing methods and the proposed \(R\) value

According to Table 1, we can get the following ranking results:

  1. (1)

    By the method [17], we get \(\tilde{\alpha }_{1} \succ \tilde{\alpha }_{3} \succ \tilde{\alpha }_{2} \succ \tilde{\alpha }_{4} \succ \tilde{\alpha }_{5}\);

  2. (2)

    By the method [21, 22], when \(\lambda = 0.1\) or 0.5, we have \(\tilde{\alpha }_{1} \succ \tilde{\alpha }_{3} \succ \tilde{\alpha }_{2} \succ \tilde{\alpha }_{4} \succ \tilde{\alpha }_{5}\); when \(\lambda = 1\), we get \(\tilde{\alpha }_{1} = \tilde{\alpha }_{3} \succ \tilde{\alpha }_{2} \succ \tilde{\alpha }_{4} \succ \tilde{\alpha }_{5}\);

  3. (3)

    By our proposed ranking method and the extension method [24] with Eq. (18), we get \(\tilde{\alpha }_{1} \succ \tilde{\alpha }_{2} \succ \tilde{\alpha }_{3} \succ \tilde{\alpha }_{4} \succ \tilde{\alpha }_{5}\).

Note that in most practical voting cases, \(\tilde{\alpha }_{2} \succ \tilde{\alpha }_{3}\) better matches our intuition, and this example also shows that the newly proposed ranking function has some advantages.
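The ordering produced by the proposed method in item (3) can be reproduced with the same standalone sketch of Eq. (20) used for Example 1; the five tuples below are the IVIFNs listed above.

```python
def r_value(a):
    mu, nu = a[0] + a[1], a[2] + a[3]
    den = (2 - nu) + (2 - mu)
    return (mu / 2 + (2 + mu - nu) / den) * ((2 - nu) / den) / 3

alphas = {
    "alpha1": (0.6, 0.6, 0.05, 0.10),
    "alpha2": (0.6, 0.6, 0.10, 0.15),
    "alpha3": (0.5, 0.55, 0.0, 0.05),
    "alpha4": (0.2, 0.25, 0.25, 0.30),
    "alpha5": (0.0, 0.05, 0.80, 0.85),
}
ranking = sorted(alphas, key=lambda k: r_value(alphas[k]), reverse=True)
print(ranking)  # ['alpha1', 'alpha2', 'alpha3', 'alpha4', 'alpha5']
```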

4 Interval-Valued Intuitionistic Fuzzy MAGDM

In this section, we adopt the above ranking method of IVIFSs to solve MAGDM problems in which the ratings of alternatives on attributes are expressed with IVIFSs. In the following, we first describe the IVIF MAGDM problem and then propose the corresponding group decision method.

4.1 Model Description of IVIF MAGDM Problem

A group of decision makers works together to find the best alternative from all feasible alternatives assessed on multiple attributes. Such a decision problem is called a MAGDM problem, which involves a set \(D = \{ D_{1} ,D_{2} , \ldots ,D_{s} \}\) of decision makers working together to select the best alternative from a set \(X = \{ x_{1} ,x_{2} , \ldots ,x_{n} \}\) of \(n\) alternatives with respect to a set \(O = \{ o_{1} ,o_{2} , \ldots ,o_{m} \}\) of \(m\) attributes. For the decision maker \(D_{k}\), the ratings of alternatives \(x_{i} \in X\) on attributes \(o_{j} \in O\) are expressed with the IVIFNs \(\tilde{x}_{ijk} = {<} \tilde{\mu }_{ijk} ,\tilde{\upsilon }_{ijk}{>}\), where \(\tilde{\mu }_{ijk} = [\mu_{ijk}^{ - } ,\mu_{ijk}^{ + } ]\) and \(\tilde{\upsilon }_{ijk} = [\upsilon_{ijk}^{ - } ,\upsilon_{ijk}^{ + } ]\) are intervals expressing the membership (satisfaction) and nonmembership (dissatisfaction) degree intervals of the alternative \(x_{i} \in X\) on the attribute \(o_{j} \in O\) with respect to the fuzzy concept “excellence” given by the decision maker \(D_{k}\), satisfying the conditions \(0 \le \mu_{ijk}^{ - } \le \mu_{ijk}^{ + } \le 1\), \(0 \le \upsilon_{ijk}^{ - } \le \upsilon_{ijk}^{ + } \le 1\) and \(0 \le \mu_{ijk}^{ + } + \upsilon_{ijk}^{ + } \le 1\) \((i = 1,2, \ldots ,n;j = 1,2, \ldots ,m)\). Thus, a MAGDM problem can be expressed by the decision matrices \({\tilde{\mathbf{D}}}_{k} = ( {<} \tilde{\mu }_{ijk} ,\tilde{\upsilon }_{ijk}{>} )_{n \times m}\), where \(k = 1,2, \ldots ,s\).

In real decision situations, attributes may have different importance. Let \(\varvec{w} = (w_{1} ,w_{2} , \ldots ,w_{m} )^{\text{T}}\) be the weight vector of all attributes, where \(w_{j} \in [0,1]\) (\(j = 1,2, \ldots ,m\)) is the weight of the attribute \(o_{j} \in O\), and \(\sum\nolimits_{j = 1}^{m} {w_{j} = 1}\). The attribute weight information is usually unknown and/or partially known due to the decision makers’ insufficient knowledge or time limitations in the decision making process. Therefore, the determination of attribute weights is an important issue in MAGDM problems. In Sect. 4.2, we put forward two methods to determine the weights of attributes for these two cases, respectively.

4.2 Weight Determining Method

Attribute weights are important for MAGDM, and different weights often lead to different final ranking results. MAGDM problems involve many decision makers, and each decision maker has his/her own preferences because of different knowledge backgrounds and familiarity with the decision problems. We should therefore consider every decision maker’s viewpoint in the final decision. Taking into account every decision maker’s preference about the importance degree of each attribute, this paper determines the weights of attributes with respect to every decision maker as follows:

Suppose that the attribute weight vector is \(\varvec{w}^{(k)} = \left( {w_{1}^{(k)} ,w_{2}^{(k)} , \ldots ,w_{m}^{(k)} } \right)^{\text{T}}\) with respect to the decision maker \(D_{k}\) \((k = 1,2, \ldots ,s)\), and the corresponding decision matrix is \({\tilde{\mathbf{D}}}_{k} = ( {<} \tilde{\mu }_{ijk} ,\tilde{\upsilon }_{ijk}{>} )_{n \times m}\). According to Eq. (20), we calculate the R value of each IVIFN \(\tilde{x}_{ijk} = {<} \tilde{\mu }_{ijk} ,\tilde{\upsilon }_{ijk}{>}\). Then, we can get the ranking decision matrix \(Q_{k} = (R_{ijk} )_{n \times m}\), where \(R_{ijk}\) can be rewritten as follows:

$$R_{ijk} = \frac{1}{3}\left( {\frac{{\mu_{ijk}^{ - } + \mu_{ijk}^{ + } }}{2} + \frac{{2 + \mu_{ijk}^{ - } + \mu_{ijk}^{ + } - \upsilon_{ijk}^{ - } - \upsilon_{ijk}^{ + } }}{{2 - \upsilon_{ijk}^{ - } - \upsilon_{ijk}^{ + } + 2 - \mu_{ijk}^{ - } - \mu_{ijk}^{ + } }}} \right)\frac{{2 - \upsilon_{ijk}^{ - } - \upsilon_{ijk}^{ + } }}{{2 - \upsilon_{ijk}^{ - } - \upsilon_{ijk}^{ + } + 2 - \mu_{ijk}^{ - } - \mu_{ijk}^{ + } }}$$

In the following, we will develop approaches to determine the attribute weights when the weight information is completely unknown or partly known. Each decision maker has his/her own viewpoint about the importance degrees of the attributes. Based on the ranking decision matrix \(Q_{k} = (R_{ijk} )_{n \times m}\), the overall score of each alternative can be expressed as follows:

$$R_{ik} = \sum\limits_{j = 1}^{m} {\frac{{w_{j}^{(k)} }}{3}\left( {\frac{{\mu_{ijk}^{ - } + \mu_{ijk}^{ + } }}{2} + } \right.\left. {\frac{{2 + \mu_{ijk}^{ - } + \mu_{ijk}^{ + } - \upsilon_{ijk}^{ - } - \upsilon_{ijk}^{ + } }}{{2 - \upsilon_{ijk}^{ - } - \upsilon_{ijk}^{ + } + 2 - \mu_{ijk}^{ - } - \mu_{ijk}^{ + } }}} \right)\frac{{2 - \upsilon_{ijk}^{ - } - \upsilon_{ijk}^{ + } }}{{2 - \upsilon_{ijk}^{ - } - \upsilon_{ijk}^{ + } + 2 - \mu_{ijk}^{ - } - \mu_{ijk}^{ + } }}}$$
(21)
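Equation (21) is simply a weighted sum of the \(R\) values across the attributes of one decision maker’s matrix. A minimal sketch follows, with a made-up \(2 \times 2\) decision matrix used purely for illustration.

```python
def r_value(a):
    # R value of an IVIFN (mu_lo, mu_hi, nu_lo, nu_hi), closed form of Eq. (20).
    mu, nu = a[0] + a[1], a[2] + a[3]
    den = (2 - nu) + (2 - mu)
    return (mu / 2 + (2 + mu - nu) / den) * ((2 - nu) / den) / 3

def overall_scores(decision_matrix, weights):
    """Eq. (21): decision_matrix[i][j] is the IVIFN rating of alternative i
    on attribute j, and weights[j] is the attribute weight w_j^(k)."""
    return [sum(w * r_value(x) for w, x in zip(weights, row))
            for row in decision_matrix]

D1 = [[(0.4, 0.5, 0.2, 0.3), (0.5, 0.6, 0.1, 0.2)],   # alternative x1
      [(0.3, 0.4, 0.4, 0.5), (0.6, 0.7, 0.1, 0.2)]]   # alternative x2
print(overall_scores(D1, [0.6, 0.4]))  # one overall score R_ik per alternative
```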

4.2.1 Unknown Weight Information

If all alternatives take equal values on the lth attribute, this attribute does not help to rank the alternatives, so we can set its weight to 0. Conversely, if the values on the lth attribute differ greatly among the alternatives, the lth attribute will play a great role in ranking the alternatives; in this case we should give it a greater weight.

Considering the attribute \(o_{l}\), the weighted square deviation of the \(R\) values of alternatives \(x_{i}\) and \(x_{j}\) is \((w_{l}^{(k)} )^{2} (R_{il} - R_{jl} )^{2}\). Then, the weighted square deviation of the \(R\) values between the alternative \(x_{i}\) and all other alternatives on the lth attribute is \(\sum\nolimits_{j = 1}^{n} {(w_{l}^{(k)} )^{2} (R_{il} - R_{jl} )^{2} }\). Further, the weighted square deviation of the \(R\) values among all alternatives on the lth attribute is \(\sum\nolimits_{i = 1}^{n} {\sum\nolimits_{j = 1}^{n} {(w_{l}^{(k)} )^{2} (R_{il} - R_{jl} )^{2} } }\). Therefore, the optimum weights should maximize the total weighted square deviation of the \(R\) values, and the optimization model can be structured as follows:

$$\begin{aligned} &{ \hbox{max} }\;\left\{ {\sum\limits_{l = 1}^{m} {\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {(w_{l}^{(k)} )^{2} (R_{il} - R_{jl} )^{2} } } } } \right\} \hfill \\ &{\text{s}} . {\text{t}}.\;\left\{ \begin{aligned} \sum\limits_{l = 1}^{m} {w_{l}^{(k)} } = 1 \hfill \\ w_{l}^{(k)} \ge 0 \, (l = 1,2, \ldots ,m) \hfill \\ \end{aligned} \right. \hfill \\ \end{aligned}$$
(22)

To solve the above model, denote the Lagrange function as follows:

$$L(\varvec{w}^{(k)} ,\lambda ) = \sum\limits_{l = 1}^{m} {\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {(w_{l}^{(k)} )^{2} (R_{il} - R_{jl} )^{2} } } } + 2\lambda \left( {\sum\limits_{l = 1}^{m} {w_{l}^{(k)} } - 1} \right)$$
(23)

Let the partial derivative of \(L(\varvec{w}^{(k)} ,\lambda )\) be equal to zero, respectively, i.e.,

$$\left\{ \begin{array}{l} \frac{{\partial L(\varvec{w}^{(k)} ,\lambda )}}{{\partial w_{l}^{(k)} }} = 2\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {w_{l}^{(k)} (R_{il} - R_{jl} )^{2} } } + 2\lambda = 0 \hfill \\ \frac{{\partial L(\varvec{w}^{(k)} ,\lambda )}}{\partial \lambda } = 2\left( {\sum\limits_{l = 1}^{m} {w_{l}^{(k)} } - 1} \right) = 0 \hfill \\ \end{array} \right.$$
(24)

Then, we have

$$w_{l}^{(k)} = \frac{1}{{J_{l} \sum\nolimits_{t = 1}^{m} {1/J_{t} } }},$$
(25)

where

$$J_{l} = \sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {(R_{il} - R_{jl} )^{2} } } ,\;\left( {l = 1,2, \ldots ,m} \right).$$
(26)
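A direct transcription of Eqs. (25)–(26) into Python follows. Per the discussion at the start of this subsection, an attribute on which all alternatives tie (\(J_{l} = 0\)) is assigned weight 0 rather than dividing by zero; the \(R\) matrix is made-up illustrative data.

```python
def deviation_weights(R):
    """Eqs. (25)-(26): R[i][l] is the R value of alternative i on attribute l."""
    n, m = len(R), len(R[0])
    J = [sum((R[i][l] - R[j][l]) ** 2 for i in range(n) for j in range(n))
         for l in range(m)]                      # Eq. (26)
    # An attribute with identical values carries no ranking information.
    inv = [0.0 if Jl == 0 else 1.0 / Jl for Jl in J]
    total = sum(inv)
    return [v / total for v in inv]              # Eq. (25): w_l proportional to 1/J_l

R = [[0.40, 0.55, 0.30],
     [0.50, 0.50, 0.60],
     [0.45, 0.52, 0.20]]
w = deviation_weights(R)
print(w, sum(w))  # approximately [0.200, 0.789, 0.012]; the weights sum to 1
```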

4.2.2 Partial Attribute Weight Information

Due to the complexity and uncertainty of practical decision making problems and the inherent subjective nature of human thinking, the attribute weight information is usually incomplete [31–34]. Generally, there are additional constraint conditions on the weight vector \(\varvec{w}\). We denote by \(H\) the set of known weight information.

Obviously, the greater the value \(R_{ik} (\varvec{w}^{(k)} )\) given by Eq. (21), the better the alternative \(x_{i}\). A reasonable attribute weight vector should maximize \(R_{ik} (\varvec{w}^{(k)} )\) when only the alternative \(x_{i}\) is considered. Thus, we can construct the following optimization model:

$$\begin{aligned} &\hbox{max}\,R_{ik} (\varvec{w}^{(k)} ) = \sum\limits_{j = 1}^{m} {w_{j}^{(k)} \times } \frac{1}{3} \times \left( {\frac{{\mu_{ijk}^{ - } + \mu_{ijk}^{ + } }}{2} + \frac{{2 + \mu_{ijk}^{ - } + \mu_{ijk}^{ + } - \upsilon_{ijk}^{ - } - \upsilon_{ijk}^{ + } }}{{\left( {2 - \upsilon_{ijk}^{ - } - \upsilon_{ijk}^{ + } } \right) + \left( {2 - \mu_{ijk}^{ - } - \mu_{ijk}^{ + } } \right)}}} \right) \times \frac{{2 - \upsilon_{ijk}^{ - } - \upsilon_{ijk}^{ + } }}{{\left( {2 - \upsilon_{ijk}^{ - } - \upsilon_{ijk}^{ + } } \right) + \left( {2 - \mu_{ijk}^{ - } - \mu_{ijk}^{ + } } \right)}} \hfill \\ &{\text{s}} . {\text{t}} .\left\{ \begin{array}{l} \varvec{w}^{(k)} \in H \hfill \\ \sum\limits_{j = 1}^{m} {w_{j}^{(k)} } = 1 \hfill \\ w_{j}^{(k)} \ge 0,\quad j = 1,2, \ldots ,m \hfill \\ \end{array} \right. \hfill \\ \end{aligned}$$
(27)

However, all alternatives \(x_{i} \,(i = 1,2, \ldots ,n)\) should be considered in the above analysis, i.e., they should be treated as a whole. Thus, we can construct the following optimization model:

$$\begin{aligned} \hbox{max} \left\{ {R_{k} (\varvec{w}^{(k)} ) = \sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {\frac{{w_{j}^{(k)} }}{3}\left( {\frac{{\mu_{ijk}^{ - } + \mu_{ijk}^{ + } }}{2} + \frac{{2 + \mu_{ijk}^{ - } + \mu_{ijk}^{ + } - \upsilon_{ijk}^{ - } - \upsilon_{ijk}^{ + } }}{{2 - \upsilon_{ijk}^{ - } - \upsilon_{ijk}^{ + } + 2 - \mu_{ijk}^{ - } - \mu_{ijk}^{ + } }}} \right)\frac{{2 - \upsilon_{ijk}^{ - } - \upsilon_{ijk}^{ + } }}{{2 - \upsilon_{ijk}^{ - } - \upsilon_{ijk}^{ + } + 2 - \mu_{ijk}^{ - } - \mu_{ijk}^{ + } }}} } } \right\} \hfill \\ {\text{s}} . {\text{t}} .\left\{ \begin{aligned} \varvec{w}^{(k)} \in H \hfill \\ \sum\limits_{j = 1}^{m} {w_{j}^{(k)} } = 1 \hfill \\ w_{j}^{(k)} \ge 0,\quad j = 1,2, \ldots ,m \hfill \\ \end{aligned} \right. \hfill \\ \end{aligned}$$
(28)

With the help of Matlab or Lingo software, the optimal weight vector \(\varvec{w}^{(k)} = (w_{1}^{(k)} ,w_{2}^{(k)} , \ldots ,w_{m}^{(k)} )^{T}\) can be obtained.
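Since the \(R\) values are fixed once the decision matrices are given, model (28) is a linear programme in \(\varvec{w}^{(k)}\) and can also be solved with open-source tools. The following SciPy sketch uses a made-up \(R\) matrix and made-up constraints standing in for \(H\); it illustrates the solution step and is not the paper’s data.

```python
import numpy as np
from scipy.optimize import linprog

R = np.array([[0.40, 0.55, 0.30],   # R_ijk for one decision maker k:
              [0.50, 0.50, 0.60],   # rows = alternatives, cols = attributes
              [0.45, 0.52, 0.20]])

c = -R.sum(axis=0)                  # maximise sum_i sum_j w_j R_ij -> minimise c^T w
A_eq, b_eq = np.ones((1, 3)), [1.0] # sum_j w_j = 1
A_ub = np.array([[1.0, -1.0, 0.0]]) # an illustrative H constraint: w_1 <= w_2
b_ub = [0.0]
bounds = [(0.1, 0.5)] * 3           # illustrative interval bounds from H

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)                        # optimal weight vector, here [0.4, 0.5, 0.1]
```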

4.3 Algorithm of New MAGDM Method Under IVIF Environment

For the above-mentioned MAGDM problem, the new decision making method is given as follows:

  • Step 1 For the decision maker \(D_{k}\) \((k = 1,2, \ldots ,s)\), suppose that the corresponding attribute weight vector [35] is \(\varvec{w}^{(k)} = (w_{1}^{(k)} ,w_{2}^{(k)} , \ldots ,w_{m}^{(k)} )^{\text{T}}\);

  • Step 2 Determine the attribute weight vector \(\varvec{w}^{(k)}\) according to Eqs. (25)–(28) in Sect. 4.2;

  • Step 3 Determine the ranking orders of the alternatives for the decision maker \(D_{k}\) \((k = 1,2, \ldots ,s)\) according to the decreasing orders of \(R_{ik} (i = 1,2, \ldots ,n)\) calculated by Eq. (21).

  • Step 4 Determine the group order of the alternatives and the best alternative by using social choice functions such as the Borda function and the Copeland function [36]. In this paper, we select the weighted Borda function [37], whose formula is given as follows:

    $${\text{BF}}(x_{i} ) = \sum\limits_{k = 1}^{s} {u_{k} N(x_{i} } \succ_{k} x_{j} )$$
    (29)

    where \(x_{i} \succ_{k} x_{j}\) means that \(x_{i}\) is better than \(x_{j}\) in the kth decision maker’s viewpoint, \(N(x_{i} \succ_{k} x_{j} )\) is the number of alternatives \(x_{j}\) that \(x_{i}\) beats in the kth decision maker’s ranking, and \(u_{k}\) represents the importance degree of the kth decision maker. The alternative that achieves the largest value of \({\text{BF}}(x_{i} )\) is the best alternative. A small sketch of this aggregation is given below.
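The sketch below implements the weighted Borda aggregation of Eq. (29): each decision maker’s ranking gives an alternative one point per alternative it beats, and the points are weighted by the decision maker’s importance degree. The expert rankings are illustrative stand-ins, not the results of Sect. 5; the importance degrees are those of the numerical example.

```python
def weighted_borda(rankings, u):
    """Eq. (29): rankings[k] lists alternatives from best to worst for
    decision maker k, and u[k] is that decision maker's importance degree."""
    scores = {x: 0.0 for x in rankings[0]}
    for ranking, u_k in zip(rankings, u):
        n = len(ranking)
        for pos, x in enumerate(ranking):
            scores[x] += u_k * (n - 1 - pos)  # N(x >_k x_j): alternatives beaten
    return scores

rankings = [["x4", "x1", "x5", "x2", "x3"],  # illustrative ranking of expert D1
            ["x1", "x4", "x5", "x2", "x3"],  # ... expert D2
            ["x4", "x1", "x2", "x5", "x3"],  # ... expert D3
            ["x1", "x5", "x4", "x2", "x3"]]  # ... expert D4
u = [0.25, 0.40, 0.15, 0.25]
scores = weighted_borda(rankings, u)
print(max(scores, key=scores.get), scores)   # x1 wins under these inputs
```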

5 A Numerical Example Analysis

We discuss a decision problem concerning a manufacturing company that wants to find the best global supplier for one of the most critical parts used in its assembly process. This example is adopted from [38]. The company hires four experts (decision makers) \(D_{1}\), \(D_{2}\), \(D_{3}\), \(D_{4}\) to evaluate five candidate suppliers: \(x_{1}\), \(x_{2}\), \(x_{3}\), \(x_{4}\), \(x_{5}\). The evaluation attributes are \(o_{j} \,\) \((j = 1,2, \ldots ,5)\), which are defined as follows: \(o_{1}\) (overall cost of the product), \(o_{2}\) (quality of the product), \(o_{3}\) (service performance of the supplier), \(o_{4}\) (supplier’s profile), and \(o_{5}\) (risk factor). The information about the attribute weights given by the decision makers is as follows:

$$H = \{ w_{1} \le 0.3,\;0.1 \le w_{2} \le 0.2,\;\,0.2 \le w_{3} \le 0.5,\,\;0.1 \le w_{4} \le 0.3,\;w_{5} \le 0.4,\;\,w_{3} - w_{2} \ge w_{5} - w_{4} ,\,\;w_{4} \ge w_{1} ,\,\;w_{3} - w_{1} \le 0.1\}$$

Suppose that the importance degrees of the experts are \(\varvec{u} = (u_{1} ,u_{2} ,u_{3} ,u_{4} )^{\text{T}} = (0.25,0.4,0.15,0.25)^{\text{T}}\). The attribute values given by the four experts are expressed with IVIFNs, which are shown in Tables 2, 3, 4, and 5.

Table 2 Decision matrix for expert \(D_{1}\)
Table 3 Decision matrix for expert \(D_{2}\)
Table 4 Decision matrix for expert \(D_{3}\)
Table 5 Decision matrix for expert \(D_{4}\)

According to the proposed method, we can obtain the ranking order of the alternatives; the decision results are reported in Tables 6, 7, 8, and 9.

Table 6 Decision results of the suppliers for the experts (Case 1)
Table 7 Weighted Borda scores of the suppliers for the experts (Case 1)
Table 8 Decision results of the suppliers for the experts (Case 2)
Table 9 Weighted Borda scores of the suppliers for the experts (Case 2)

Case 1

The weight information is completely unknown.

The result is reported in Tables 6 and 7.

The weighted Borda scores of the suppliers can be obtained as in Table 7.

The ranking order of the five suppliers is \(x_{4} \succ x_{1} \succ x_{5} \succ x_{2} \succ x_{3}\). The most desirable supplier is \(x_{4}\).

Case 2

The weight information is partially known.

The result is reported in Tables 8 and 9.

By Eq. (29), the weighted Borda scores of the suppliers can be obtained as in Table 9.

The ranking order of the five suppliers is \(x_{1} \succ x_{4} \succ x_{5} \succ x_{2} \succ x_{3}\). The most desirable supplier is \(x_{1}\).

6 Conclusion

In the study of ranking IVIFSs, many ranking functions have been proposed, but most of them still have drawbacks. To develop a better ranking function, this paper constructs a new ranking function, named the \(R\) value, which considers the amount and reliability information of IVIFSs. The \(R\) value also considers the closeness of an IVIFS to the maximum IVIFS based on the concept of TOPSIS. For MAGDM problems, we then develop a weighting method by establishing optimization models based on the \(R\) value. Because the operational laws of IVIFSs still have some shortcomings, which lead some group decision making methods to unconvincing results, this paper uses social choice functions to avoid additional operations on IVIFSs. Finally, a supplier selection problem is used to illustrate the feasibility and effectiveness of the developed method. The method proposed in this paper can also be applied to other MAGDM problems, such as project selection, staff performance evaluation, and investment selection, as well as to other fields such as cluster analysis and information retrieval.