1 Introduction

Multiattribute decision making (MADM) is an important part of decision-making theory, and it has been widely used in many areas such as engineering, scientific research, and artificial intelligence (Wan and Li 2013; Li and Ren 2015; Liang 2018; Yu et al. 2019). The key problem in MADM is how to evaluate the alternatives under the attributes accurately and then select the most desirable alternative from the alternative set.

Due to a lack of knowledge and the uncertainty of information, it is difficult for a decision maker to evaluate alternatives under the attributes precisely. Instead, people evaluate them with uncertain and vague assessments, and how to deal with this uncertainty and vagueness is an interesting and important subject. Fuzzy sets (FSs) (Zadeh 1965) are well suited to dealing with uncertainty and vagueness, and they are often used to evaluate the alternatives in MADM. As an extension of FSs, intuitionistic fuzzy sets (IFSs) (Bustince and Burillo 1996) evaluate the alternatives through membership and non-membership degrees. As a generalization of IFSs, interval-valued intuitionistic fuzzy sets (IVIFSs) (Atanassov 1994; Atanassov and Gargov 1989) take intervals instead of crisp numbers as membership and non-membership degrees. The IVIFS is therefore more flexible than the IFS in modeling imprecision and vagueness, and research on IVIFSs has become a hot topic: many researchers have studied the IVIFS (An et al. 2018; Nguyen 2019) and applied it to fields such as project management and emergency management.

However, when IVIFSs are employed to evaluate the alternatives in MADM, we face the problem of how to compare them. Ranking methods of the IVIFSs are used to order the aggregated results, and a well-defined ranking method directly affects whether the decision maker selects the most desirable alternative. How to rank the IVIFSs is thus an important topic, which has attracted the attention of many scholars. Sahin (2016) proposed a fuzzy multiattribute decision-making method based on an improved accuracy function of the IVIFS. Gao et al. (2016) presented an interval-valued intuitionistic fuzzy (IVIF) multiattribute decision-making method based on a revised fuzzy entropy and a new score function. Chen et al. (2017) suggested a group decision-making method based on IVIF numbers, in which a new score function was used to rank the aggregated results. Zhang and Xu (2017) proposed an improved accuracy function to rank IVIF values and applied it to MADM. Wang and Chen (2018) proposed a new MADM method based on a novel score function of the IVIF numbers and a linear programming method. Gong and Ma (2019) proposed a score function and an accuracy function for the IVIFS. Although many ranking methods of the IVIFSs have been proposed and applied to MADM, they cannot rank the IVIFSs well, because they neglect important information that affects the ranking orders. As a result, a decision maker applying these defective ranking methods may fail to select the most appropriate alternative in MADM.

The information conveyed by the IVIFS is very important for constructing a score function. In this paper, we develop an information-based score function of the IVIFS and apply it to MADM. The information-based score function considers the amount of information, the reliability, the certainty information, and the relative closeness degree. It not only overcomes the shortcomings of the existing ranking methods but also ranks the IVIFSs well.

2 Basic concepts

2.1 Interval-valued intuitionistic fuzzy sets

In this section, the concept of IVIFSs and their operations and relations (Atanassov and Gargov 1989; Wan and Dong 2015; Atanassov 1994) will be introduced.

Let \(X = \{ x_{1} ,x_{2} , \ldots ,x_{n} \}\) be the universe of discourse. An IVIFS \(\widetilde{a}\) in the finite set \(X\) is denoted by

$$\widetilde{a} = \{ < x_{i} ,\mu_{\widetilde{a}} (x_{i} ),\upsilon_{\widetilde{a}} (x_{i} ) > \mid x_{i} \in X\},$$

where \(\mu_{{\widetilde{a}}} (x_{i} ) = [\mu_{{\widetilde{a}}}^{L} (x_{i} ),\mu_{{\widetilde{a}}}^{U} (x_{i} )] \subseteq [0,1]\) and \(\upsilon_{{\widetilde{a}}} (x_{i} ) = [\upsilon_{{\widetilde{a}}}^{L} (x_{i} ),\upsilon_{{\widetilde{a}}}^{U} (x_{i} )] \subseteq [0,1]\) denote the interval membership degree and the interval non-membership degree of an element \(x_{i}\) to the IVIFS \(\widetilde{a}\), respectively, such that \(x_{i} \in X\), \(0 \le \mu_{{\widetilde{a}}}^{L} (x_{i} ),\mu_{{\widetilde{a}}}^{U} (x_{i} ) \le 1\), \(0 \le \upsilon_{{\widetilde{a}}}^{L} (x_{i} ),\) \(\upsilon_{{\widetilde{a}}}^{U} (x_{i} ) \le 1\) and \(0 \le \mu_{{\widetilde{a}}}^{U} (x_{i} ) + \upsilon_{{\widetilde{a}}}^{U} (x_{i} ) \le 1\).

The interval hesitancy degree of an element \(x_{i}\) to the IVIFS \(\widetilde{a}\) is denoted by \(\pi_{\widetilde{a}} (x_{i} ) = [\pi_{\widetilde{a}}^{L} (x_{i} ),\pi_{\widetilde{a}}^{U} (x_{i} )] \subseteq [0,1]\), where \(\pi_{\widetilde{a}}^{L} (x_{i} ) = 1 - \mu_{\widetilde{a}}^{U} (x_{i} ) - \upsilon_{\widetilde{a}}^{U} (x_{i} )\) and \(\pi_{\widetilde{a}}^{U} (x_{i} ) = 1 - \mu_{\widetilde{a}}^{L} (x_{i} ) - \upsilon_{\widetilde{a}}^{L} (x_{i} )\). For every \(x_{i} \in X\), if \(\mu_{\widetilde{a}}^{L} (x_{i} ) = \mu_{\widetilde{a}}^{U} (x_{i} )\) and \(\upsilon_{\widetilde{a}}^{L} (x_{i} ) = \upsilon_{\widetilde{a}}^{U} (x_{i} )\), the IVIFS reduces to an IFS. For convenience, let \(\mu_{\widetilde{a}} (x_{i} ) = [a,b]\) and \(\upsilon_{\widetilde{a}} (x_{i} ) = [c,d]\); then \(\widetilde{a} = < [a,b],[c,d] >\) is called an IVIFS. When \(a = b\) and \(c = d\), the IVIFS reduces to the IFS \(\widetilde{a} = < [a,a],[c,c] >\).

For any two IVIFSs \(\widetilde{a}_{\alpha } = \,< [a_{\alpha } ,b_{\alpha } ],[c_{\alpha } ,d_{\alpha } ] >\) and \(\widetilde{a}_{\beta } = \,< [a_{\beta } ,b_{\beta } ],[c_{\beta } ,d_{\beta } ] >,\) the relations and operations are given as follows:

  1. \(\widetilde{a}_{\alpha } \le \widetilde{a}_{\beta }\) if and only if \(a_{\alpha } \le a_{\beta }\), \(b_{\alpha } \le b_{\beta }\), \(c_{\alpha } \ge c_{\beta }\) and \(d_{\alpha } \ge d_{\beta }\);

  2. \(\widetilde{a}_{\alpha } = \widetilde{a}_{\beta }\) if and only if \(\widetilde{a}_{\alpha } \le \widetilde{a}_{\beta }\) and \(\widetilde{a}_{\alpha } \ge \widetilde{a}_{\beta }\);

  3. The complement of \(\widetilde{a}_{\alpha }\) is \(\widetilde{a}_{\alpha }^{C} = < [c_{\alpha } ,d_{\alpha } ],[a_{\alpha } ,b_{\alpha } ] >\);

  4. \(\widetilde{a}_{\alpha } + \widetilde{a}_{\beta } = < [a_{\alpha } + a_{\beta } - a_{\alpha } a_{\beta } ,b_{\alpha } + b_{\beta } - b_{\alpha } b_{\beta } ],[c_{\alpha } c_{\beta } ,d_{\alpha } d_{\beta } ] >\);

  5. \(\lambda \widetilde{a}_{\alpha } = < [1 - (1 - a_{\alpha } )^{\lambda } ,1 - (1 - b_{\alpha } )^{\lambda } ],[c_{\alpha }^{\lambda } ,d_{\alpha }^{\lambda } ] >\), where \(\lambda > 0\).
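To make these operations concrete, the following is a minimal Python sketch (ours, not part of the cited sources) that encodes an IVIFS value \(< [a,b],[c,d] >\) as a 4-tuple \((a, b, c, d)\) and implements relations 3–5:

```python
def complement(x):
    """Relation 3: swap the membership and non-membership intervals."""
    a, b, c, d = x
    return (c, d, a, b)

def add(x, y):
    """Relation 4: probabilistic sum on memberships, product on non-memberships."""
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1 + a2 - a1 * a2, b1 + b2 - b1 * b2, c1 * c2, d1 * d2)

def scale(lam, x):
    """Relation 5: scalar multiplication with lambda > 0."""
    a, b, c, d = x
    return (1 - (1 - a) ** lam, 1 - (1 - b) ** lam, c ** lam, d ** lam)
```

As a quick sanity check, `scale(2, x)` agrees with `add(x, x)`, as the operations above imply.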

The normalized Hamming distance of two IVIFSs \(\widetilde{a}_{\alpha }\) and \(\widetilde{a}_{\beta }\) is defined as follows:

$$\begin{aligned} D(\widetilde{a}_{\alpha } ,\widetilde{a}_{\beta } ) & = \frac{1}{4}\left[ |a_{\alpha } - a_{\beta } | + |b_{\alpha } - b_{\beta } | + |c_{\alpha } - c_{\beta } | \right. \\ & \quad + |d_{\alpha } - d_{\beta } | + |(1 - a_{\alpha } - c_{\alpha } ) - (1 - a_{\beta } - c_{\beta } )| \\ & \quad \left. + |(1 - b_{\alpha } - d_{\alpha } ) - (1 - b_{\beta } - d_{\beta } )| \right] \\ \end{aligned}$$
(1)
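A direct transcription of Eq. (1) in the same 4-tuple encoding (our sketch) reads:

```python
def hamming(x, y):
    """Normalized Hamming distance of Eq. (1).

    The last two terms compare the hesitancy bounds 1-a-c and 1-b-d
    of the two IVIFSs.
    """
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (abs(a1 - a2) + abs(b1 - b2) + abs(c1 - c2) + abs(d1 - d2)
            + abs((a2 + c2) - (a1 + c1))    # = |(1-a1-c1) - (1-a2-c2)|
            + abs((b2 + d2) - (b1 + d1))) / 4

# e.g. hamming((0.3, 0.5, 0.2, 0.4), (0.35, 0.45, 0.25, 0.35)) ~ 0.1
```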

Let \(\widetilde{a}_{i} = < [a_{i} ,b_{i} ],[c_{i} ,d_{i} ] >\) (\(i = 1,2, \ldots ,n\)) be IVIFSs (Xu 2007), where \(0 \le a_{i} ,b_{i} \le 1\), \(0 \le c_{i} ,d_{i} \le 1\) and \(0 \le b_{i} + d_{i} \le 1\). Let \(W = (w_{1} ,w_{2} , \ldots ,w_{n} )^{\rm T}\) be the weight vector of the interval-valued intuitionistic fuzzy weighted averaging (IVIFWA) operator \(g_{w}\), where \(w_{i}\) is the weight of the IVIFS \(\widetilde{a}_{i}\), \(w_{i} \in [0,1]\) and \(\sum\nolimits_{i = 1}^{n} {w_{i} = 1}\). The IVIFWA operator \(g_{w}\) is defined as follows:

$$\begin{aligned} g_{w} (\widetilde{a}_{1} ,\widetilde{a}_{2} , \ldots ,\widetilde{a}_{n} ) & = \left\langle {\left[ {1 - \prod\limits_{i = 1}^{n} {(1 - a_{i} )^{{w_{i} }} } ,1 - \prod\limits_{i = 1}^{n} {(1 - b_{i} )^{{w_{i} }} } } \right],} \right. \\ & \quad \left. {\left[ {\prod\limits_{i = 1}^{n} {c_{i}^{{w_{i} }} } ,\prod\limits_{i = 1}^{n} {d_{i}^{{w_{i} }} } } \right]} \right\rangle = < [a,b],[c,d] > , \\ \end{aligned}$$
(2)

where \(a = 1 - \prod\nolimits_{i = 1}^{n} {(1 - a_{i} )^{{w_{i} }} }\), \(b = 1 - \prod\nolimits_{i = 1}^{n} {(1 - b_{i} )^{{w_{i} }} }\), \(c = \prod\nolimits_{i = 1}^{n} {c_{i}^{{w_{i} }} }\) and \(d = \prod\nolimits_{i = 1}^{n} {d_{i}^{{w_{i} }} }\), \(i = 1,2, \ldots ,n\).
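The operator can be transcribed directly; the sketch below is our illustration, using Python's `math.prod` (available from Python 3.8):

```python
import math

def ivifwa(values, weights):
    """IVIFWA operator of Eq. (2): values is a list of (a, b, c, d)
    tuples; weights are nonnegative and sum to 1."""
    a = 1 - math.prod((1 - v[0]) ** w for v, w in zip(values, weights))
    b = 1 - math.prod((1 - v[1]) ** w for v, w in zip(values, weights))
    c = math.prod(v[2] ** w for v, w in zip(values, weights))
    d = math.prod(v[3] ** w for v, w in zip(values, weights))
    return (a, b, c, d)

# Idempotency check: ivifwa([x, x], [0.5, 0.5]) returns x.
```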

2.2 A critical analysis of the existing ranking methods of the IVIFSs

Ranking methods are often used to order the IVIFSs. Although there has been much research on ranking methods for IVIFSs (Ye 2009; Liu and Luo 2017), some of these methods have drawbacks. We enumerate several existing ranking methods of the IVIFSs and discuss their disadvantages.

2.2.1 Xu’s ranking method of the IVIFS

In order to rank the IVIFSs, Xu (2007) defined the score function and the accuracy function as follows:

For any IVIFS \(\widetilde{a}\), the score function of the IVIFS is

$$S_{X} (\widetilde{a}) = \frac{a + b - c - d}{2} .$$
(3)

The accuracy function of the IVIFS is

$$A_{X} (\widetilde{a}) = \frac{a + b + c + d}{2} .$$
(4)

For two IVIFSs \(\widetilde{a}_{\alpha }\) and \(\widetilde{a}_{\beta }\),

  (1) If \(S_{X} (\widetilde{a}_{\alpha } ) > S_{X} (\widetilde{a}_{\beta } )\), then \(\widetilde{a}_{\alpha } > \widetilde{a}_{\beta }\);

  (2) If \(S_{X} (\widetilde{a}_{\alpha } ) < S_{X} (\widetilde{a}_{\beta } )\), then \(\widetilde{a}_{\alpha } < \widetilde{a}_{\beta }\);

  (3) If \(S_{X} (\widetilde{a}_{\alpha } ) = S_{X} (\widetilde{a}_{\beta } )\), then

    (a) if \(A_{X} (\widetilde{a}_{\alpha } ) > A_{X} (\widetilde{a}_{\beta } )\), then \(\widetilde{a}_{\alpha } > \widetilde{a}_{\beta }\);

    (b) if \(A_{X} (\widetilde{a}_{\alpha } ) < A_{X} (\widetilde{a}_{\beta } )\), then \(\widetilde{a}_{\alpha } < \widetilde{a}_{\beta }\);

    (c) if \(A_{X} (\widetilde{a}_{\alpha } ) = A_{X} (\widetilde{a}_{\beta } )\), then \(\widetilde{a}_{\alpha } = \widetilde{a}_{\beta }\).

Example 2.1

Suppose that \(\widetilde{a}_{1} = < [0.3,0.5],[0.2,0.4] >\) and \(\widetilde{a}_{2} = < [0.35,0.45],[0.25,0.35] >\) are two IVIFSs. Using Eqs. (3) and (4), we have \(S_{X} (\widetilde{a}_{1} ) = S_{X} (\widetilde{a}_{2} ) = 0.1\) and \(A_{X} (\widetilde{a}_{1} ) = A_{X} (\widetilde{a}_{2} ) = 0.7\), so \(S_{X} (\widetilde{a})\) and \(A_{X} (\widetilde{a})\) cannot rank these two IVIFSs.
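The tie is easy to reproduce; a small check of ours:

```python
def s_x(x):
    """Xu's score function, Eq. (3)."""
    a, b, c, d = x
    return (a + b - c - d) / 2

def a_x(x):
    """Xu's accuracy function, Eq. (4)."""
    a, b, c, d = x
    return (a + b + c + d) / 2

a1, a2 = (0.3, 0.5, 0.2, 0.4), (0.35, 0.45, 0.25, 0.35)
print(round(s_x(a1), 10), round(s_x(a2), 10))  # 0.1 0.1: the score cannot separate them
print(round(a_x(a1), 10), round(a_x(a2), 10))  # 0.7 0.7: neither can the accuracy
```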

2.2.2 Yue’s ranking method of the IVIFS

Assume that \(\mu_{\widetilde{\alpha }} = [a_{\alpha } ,b_{\alpha } ]\) and \(\mu_{\widetilde{\beta }} = [a_{\beta } ,b_{\beta } ]\) are two intervals. Xu and Da (2002) presented the possibility degree ranking method for two intervals, i.e., the possibility degree of \(\mu_{\widetilde{\alpha }} \ge \mu_{\widetilde{\beta }}\) is defined as follows:

$$P_{X} (\mu_{{\widetilde{a}}} \ge \mu_{{\widetilde{\beta }}} ) = \hbox{max} \left\{ {1 - \hbox{max} \left\{ {\frac{{b_{\beta } - a_{\alpha } }}{{b_{\alpha } - a_{\alpha } + b_{\beta } - a_{\beta } }},0} \right\},0} \right\}.$$
(5)

Motivated by Eq. (5), Yue (2016) gave the possibility degree ranking method of IVIFSs, i.e., the possibility degree of \(\widetilde{a}_{\alpha } \ge \widetilde{a}_{\beta }\) is defined as follows:

$$P_{X} (\widetilde{a}_{\alpha } \ge \widetilde{a}_{\beta } ) = \frac{1}{2}[P_{X} (\mu_{{\widetilde{\alpha }}} \ge \mu_{{\widetilde{\beta }}} ) + P_{X} (\upsilon_{{\widetilde{\beta }}} \ge \upsilon_{{\widetilde{\alpha }}} )].$$
(6)

where \(P_{X} (\mu_{{\widetilde{\alpha }}} \ge \mu_{{\widetilde{\beta }}} )\) and \(P_{X} (\upsilon_{{\widetilde{\beta }}} \ge \upsilon_{{\widetilde{\alpha }}} )\) are calculated by Eq. (5).

When \(P_{X} (\widetilde{a}_{\alpha } \ge \widetilde{a}_{\beta } ) > 0.5\), then \(\widetilde{a}_{\alpha } > \widetilde{a}_{\beta }\). When \(P_{X} (\widetilde{a}_{\alpha } \ge \widetilde{a}_{\beta } ) < 0.5\), then \(\widetilde{a}_{\alpha } < \widetilde{a}_{\beta }\). When \(P_{X} (\widetilde{a}_{\alpha } \ge \widetilde{a}_{\beta } ) = 0.5\) and \(P_{X} (\widetilde{a}_{\beta } \ge \widetilde{a}_{\alpha } ) = 0.5\), Yue's method cannot rank the two IVIFSs.

Example 2.2

Suppose that \(\widetilde{a}_{3} = < [0.2,0.2],[0.3,0.3] >\) and \(\widetilde{a}_{4} = < [0.25,0.25],[0.35,0.35] >\) are two IVIFSs. We find that Eq. (6) cannot rank these two IVIFSs: both membership intervals and both non-membership intervals are degenerate, so the denominator \(b_{\alpha } - a_{\alpha } + b_{\beta } - a_{\beta }\) in Eq. (5) is zero and the possibility degree is undefined.

Example 2.3

Suppose that \(\widetilde{a}_{5} = < [0.5,0.6],[0.3,0.4] >\) and \(\widetilde{a}_{6} = < [0.2,0.3],[0.1,0.2] >\) are two IVIFSs. Using Eq. (6), we have \(P_{X} (\widetilde{a}_{5} \ge \widetilde{a}_{6} ) = 0.5\) and \(P_{X} (\widetilde{a}_{6} \ge \widetilde{a}_{5} ) = 0.5\). This means that Yue's method cannot rank these two IVIFSs; thus, Yue's method has a drawback.
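Both failure modes can be reproduced with a few lines (our sketch):

```python
def p_interval(m1, m2):
    """Eq. (5): possibility degree of interval m1 >= interval m2."""
    lo1, hi1 = m1
    lo2, hi2 = m2
    width = (hi1 - lo1) + (hi2 - lo2)  # zero when both intervals are degenerate
    return max(1 - max((hi2 - lo1) / width, 0), 0)

def p_yue(x, y):
    """Eq. (6): possibility degree of IVIFS x >= IVIFS y."""
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (p_interval((a1, b1), (a2, b2)) + p_interval((c2, d2), (c1, d1))) / 2

# Example 2.3: both directions give 0.5, so the pair stays unranked.
a5, a6 = (0.5, 0.6, 0.3, 0.4), (0.2, 0.3, 0.1, 0.2)
print(p_yue(a5, a6), p_yue(a6, a5))  # 0.5 0.5

# Example 2.2: the intervals are degenerate, so Eq. (5) divides by zero;
# p_yue((0.2, 0.2, 0.3, 0.3), (0.25, 0.25, 0.35, 0.35)) raises ZeroDivisionError.
```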

2.2.3 Gong and Ma’s score function of the IVIFS

Gong and Ma (2019) gave a score function of the IVIFS as follows:

$$S_{\rm GM} (\widetilde{a}) = \frac{d + c - b - a}{2} + \frac{a + b + 2(ab - cd)}{a + b + c + d}.$$
(7)

Then, Gong and Ma (2019) proposed the following property for the score function of the IVIFS:

Property 1

Let \(\widetilde{a}\) be an IVIFS. \(S(\widetilde{a})\) has the following properties:

  (1) If \(b,c,d\) are fixed, then \(\partial S(\widetilde{a})/\partial a > 0\);

  (2) If \(a,c,d\) are fixed, then \(\partial S(\widetilde{a})/\partial b > 0\);

  (3) If \(a,b,d\) are fixed, then \(\partial S(\widetilde{a})/\partial c < 0\);

  (4) If \(a,b,c\) are fixed, then \(\partial S(\widetilde{a})/\partial d < 0\).

Though \(S_{\rm GM} (\widetilde{a})\) satisfies Property 1, information such as the reliability is not taken into account. Assume that \(\widetilde{a}_{\alpha }\) and \(\widetilde{a}_{\beta }\) are two IVIFSs. When \(a_{\alpha } + b_{\alpha } = a_{\beta } + b_{\beta }\), \(c_{\alpha } + d_{\alpha } = c_{\beta } + d_{\beta }\) and \(a_{\alpha } b_{\alpha } - c_{\alpha } d_{\alpha } = a_{\beta } b_{\beta } - c_{\beta } d_{\beta }\), \(S_{\rm GM} (\widetilde{a})\) cannot rank the two IVIFSs.

Example 2.4

Suppose that \(\widetilde{a}_{7} = < [0.35,0.45],[0.2,0.3] >\) and \(\widetilde{a}_{8} = < [0.3,0.5],[0.15,0.35] >\) are two IVIFSs. From our intuition, \(\widetilde{a}_{7}\) is not as big as \(\widetilde{a}_{8}\). Using Eq. (7), we have \(S_{\rm GM} (\widetilde{a}_{7} ) = S_{\rm GM} (\widetilde{a}_{8} ) = 0.6154\), so \(S_{\rm GM} (\widetilde{a})\) cannot rank these two IVIFSs. Thus, \(S_{\rm GM} (\widetilde{a})\) has a drawback.
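The tie can be checked directly (our sketch):

```python
def s_gm(x):
    """Gong and Ma's score function, Eq. (7)."""
    a, b, c, d = x
    return (d + c - b - a) / 2 + (a + b + 2 * (a * b - c * d)) / (a + b + c + d)

a7, a8 = (0.35, 0.45, 0.2, 0.3), (0.3, 0.5, 0.15, 0.35)
print(round(s_gm(a7), 4), round(s_gm(a8), 4))  # 0.6154 0.6154
```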

2.2.4 Wang and Chen’s score function of the IVIFS

Wang and Chen (2018) introduced the score function of the IVIFS as follows:

$$S_{\rm NWC} (\widetilde{a}) = \frac{(a + b)(a + c) - (c + d)(b + d)}{2}.$$
(8)

Taking the derivative of \(S_{\rm NWC} (\widetilde{a})\) with respect to \(b\), we have

$$\partial S_{\rm NWC} (\widetilde{a})/\partial b = (a - d)/2.$$

Taking the derivative of \(S_{\rm NWC} (\widetilde{a})\) with respect to \(c\), we have

$$\partial S_{\rm NWC} (\widetilde{a})/\partial c = (a - d)/2.$$

When \(a - d < 0\), \(S_{\rm NWC} (\widetilde{a})\) increases as \(b\) decreases; when \(a - d > 0\), \(S_{\rm NWC} (\widetilde{a})\) increases as \(c\) increases. Both behaviors are the opposite of Property 1. In addition, when \(a = b = c = d\), \(S_{\rm NWC} (\widetilde{a}) = 0\) and the IVIFSs cannot be ranked.

Example 2.5

Assume that \(\widetilde{a}_{9} = < [0.3,0.3],[0.3,0.3] >\) and \(\widetilde{a}_{10} = < [0.4,0.4],[0.4,0.4] >\) are two IVIFSs. From our intuition, \(\widetilde{a}_{9}\) is not as big as \(\widetilde{a}_{10}\), and their scores should not both be zero. Using Eq. (8), we have \(S_{\rm NWC} (\widetilde{a}_{9} ) = S_{\rm NWC} (\widetilde{a}_{10} ) = 0\), so \(S_{\rm NWC} (\widetilde{a})\) cannot rank these two IVIFSs. Thus, \(S_{\rm NWC} (\widetilde{a})\) has a drawback.
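Again, the counterexample is a one-liner (our sketch):

```python
def s_nwc(x):
    """Wang and Chen's score function, Eq. (8)."""
    a, b, c, d = x
    return ((a + b) * (a + c) - (c + d) * (b + d)) / 2

print(s_nwc((0.3, 0.3, 0.3, 0.3)), s_nwc((0.4, 0.4, 0.4, 0.4)))  # 0.0 0.0
```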

2.2.5 Sahin’s accuracy function of the IVIFS

Sahin (2016) suggested an accuracy function of the IVIFS as follows:

$$K(\widetilde{a}) = \frac{a + b(1 - a - c) + b + a(1 - b - d)}{2}.$$
(9)

When \(a = b = 0\), then \(K(\widetilde{a}) = 0\), and \(c\) and \(d\) have no effect on \(K(\widetilde{a})\). Thus, Eq. (9) cannot rank such IVIFSs.

Example 2.6

Suppose that \(\widetilde{a}_{11} = < [0,0],[0.1,0.1] >\) and \(\widetilde{a}_{12} = < [0,0],[0.9,0.9] >\) are two IVIFSs. From the relations of the IVIFSs, \(\widetilde{a}_{12}\) is smaller than \(\widetilde{a}_{11}\). Using Eq. (9), we have \(K(\widetilde{a}_{11} ) = K(\widetilde{a}_{12} ) = 0\), which is contrary to the relations of the IVIFSs. \(K(\widetilde{a})\) cannot rank these two IVIFSs; thus, \(K(\widetilde{a})\) has a drawback.

2.2.6 Gao et al.’s score function of the IVIFS

Considering the amount of information, Gao et al. (2016) defined the score function to rank the IVIFSs as follows:

$$G(\widetilde{a}) = \frac{1}{4}(a - c + b - d)\left( {1 + \frac{1}{a + b - ac + bd}} \right).$$
(10)

When \(a = b = 0\), Eq. (10) has the problem of dividing by zero. When \(a = c\) and \(b = d\), then \(G(\widetilde{a}) = 0\). In both cases, Gao et al.'s score function cannot rank the IVIFSs.

Example 2.7

Suppose that \(\widetilde{a}_{13} = \,< [0,0],[0.2,0.2] >\) and \(\widetilde{a}_{14} = \,< [0,0],[0.3,0.3] >\) are two IVIFSs. Using Eq. (10), we cannot calculate the scores of \(\widetilde{a}_{13}\) and \(\widetilde{a}_{14}\). Thus, \(G(\widetilde{a})\) cannot rank these two IVIFSs.

2.2.7 Zhang and Xu’s accuracy function of the IVIFS

The accuracy function of the IVIFS defined by Zhang and Xu (2017) is given as follows:

$$\begin{aligned} F(\widetilde{a}) & = \frac{1}{2}\left( {\frac{(a - c) + (b - d)(1 - a - c)}{2}} \right. \\ & \quad { + }\left. {\frac{(b - d) + (a - c)(1 - b - d)}{2}} \right). \\ \end{aligned}$$
(11)

When \(a = c\) and \(b = d\), then \(F(\widetilde{a}) = 0\). Thus, \(F(\widetilde{a})\) cannot distinguish the IVIFSs.

Example 2.8

\(\widetilde{a}_{15} = < [0.3,0.5],[0.3,0.5] >\) and \(\widetilde{a}_{16} = < [0.4,0.5],[0.4,0.5] >\) are two IVIFSs. Using Eq. (11), we have \(F(\widetilde{a}_{15} ) = F(\widetilde{a}_{16} ) = 0\). Thus, \(F(\widetilde{a})\) cannot rank these two IVIFSs.
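The counterexamples of Sects. 2.2.5–2.2.7 can all be reproduced with the following sketch (ours):

```python
def k_sahin(x):
    """Sahin's accuracy function, Eq. (9)."""
    a, b, c, d = x
    return (a + b * (1 - a - c) + b + a * (1 - b - d)) / 2

def g_gao(x):
    """Gao et al.'s score function, Eq. (10); the second factor is
    undefined when a + b - a*c + b*d = 0, e.g. for a = b = 0."""
    a, b, c, d = x
    return (a - c + b - d) * (1 + 1 / (a + b - a * c + b * d)) / 4

def f_zx(x):
    """Zhang and Xu's accuracy function, Eq. (11)."""
    a, b, c, d = x
    return (((a - c) + (b - d) * (1 - a - c)) / 2
            + ((b - d) + (a - c) * (1 - b - d)) / 2) / 2

print(k_sahin((0, 0, 0.1, 0.1)), k_sahin((0, 0, 0.9, 0.9)))    # Example 2.6: 0.0 0.0
print(f_zx((0.3, 0.5, 0.3, 0.5)), f_zx((0.4, 0.5, 0.4, 0.5)))  # Example 2.8: 0.0 0.0
# Example 2.7: g_gao((0, 0, 0.2, 0.2)) raises ZeroDivisionError.
```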

From the above analyses, we know that the existing ranking methods cannot rank the IVIFSs well and that \(S_{\rm NWC} (\widetilde{a})\) does not satisfy Property 1. So they have drawbacks.

3 An information-based score function of the IVIFS

In order to overcome the disadvantages of the existing ranking methods of the IVIFSs and rank the IVIFSs well, we will propose an information-based score function using the information conveyed by the IVIFS.

Zhang and Xu (2012) presented a ranking index of the IFS based on the idea of TOPSIS, which is shown as follows:

$$\begin{aligned} L(\widetilde{a}) & = 1 - \frac{{D(\widetilde{a}, < 1,0 > )}}{{D(\widetilde{a}, < 1,0 > ) + D(\widetilde{a}, < 0,1 > )}} \\ & = 1 - \frac{1 - a}{2 - a - c}. \\ \end{aligned}$$
(12)

Szmidt and Kacprzyk (2009) defined the amount of information of the IFS as \(a + c\) and the reliability as \(a - c\). Accordingly, \(a + c\) and \(b + d\) measure the amount of information of the IVIFS, and \(a - c\) and \(b - d\) measure its reliability. The reliability is itself a kind of information (Li and Ren 2015).

Li and Ren (2015) illustrated that the bigger \(a + c\), the bigger the score; likewise, the bigger \(a - c\), the bigger the score. They constructed a ranking index of the IFS based on the amount of information and the reliability as follows:

$$R(\widetilde{a}) = \frac{1}{3}\left( {a + \frac{1 + a - c}{2 - a - c}} \right)\frac{{D(\widetilde{a}, < 0,1 > )}}{{D(\widetilde{a}, < 1,0 > ) + D(\widetilde{a}, < 0,1 > )}},$$
(13)

where \(D(\widetilde{a}, < 0,1 > )\) is the distance between an IFS \(\widetilde{a}\) and the smallest IFS \(< 0,1 >\), and \(D(\widetilde{a}, < 1,0 > )\) is the distance between the IFS \(\widetilde{a}\) and the greatest IFS \(< 1,0 >\).

According to Eq. (1) and the idea of TOPSIS method, the relative closeness degree of an IVIFS \(\widetilde{a}\) to the greatest IVIFS \(< [1,1],[0,0] >\) is defined as follows:

$$\begin{aligned} & \frac{{D(\widetilde{a}, < [0,0],[1,1] > )}}{{D(\widetilde{a}, < [1,1],[0,0] > ) + D(\widetilde{a}, < [0,0],[1,1] > )}} \\ & \quad = \frac{2 - c - d}{4 - a - b - c - d}. \\ \end{aligned}$$
(14)

From Eqs. (12) and (13), we know the relative closeness degree can determine to a certain extent which IVIFS is bigger. The bigger the relative closeness degree, the bigger the score. The relative closeness degree is also a kind of information conveyed by the IVIFS.

The distance between an IFS \(\widetilde{a} = < [a,a],[c,c] >\) and its complement \(\widetilde{a}^{C} = < [c,c],[a,a] >\) is

$$D(\widetilde{a},\widetilde{a}^{C} ) = \left| {a - c} \right|.$$
(15)

The distance between an IVIFS \(\widetilde{a}\) and its complement \(\widetilde{a}^{C}\) is

$$D(\widetilde{a},\widetilde{a}^{C} ) = \frac{{\left| {a - c} \right| + \left| {b - d} \right|}}{2}.$$
(16)

Szmidt and Kacprzyk (2005) defined \(1 - D(\widetilde{a},\widetilde{a}^{C} )\) as the uncertainty. Thus, \(D(\widetilde{a},\widetilde{a}^{C} )\) is the certainty information of the IVIFS, and it is often used to construct information measures. The certainty information can also determine, to a certain extent, which IVIFS is bigger: the greater the certainty, the higher the score.

Motivated by Eqs. (3), (13) and (16), we construct an information-based score function of the IVIFS from the amount of information, the reliability, the certainty information, and the relative closeness degree.

Definition 1

The information-based score function of the IVIFS is defined as follows:

$$\begin{aligned} S(\widetilde{a}) & = [1 + a + b - c - d + 0.5(\left| {a - c} \right| + \left| {b - d} \right|)] \\ & \quad [(1 + a + c)e^{a - c + a + b} /e^{3} + (1 + b + d)e^{b - d - c - d} /e] \\ & \quad [(2 - c - d)/(4 - a - b - c - d)]/16, \\ \end{aligned}$$
(17)

where \(0.5(\left| {a - c} \right| + \left| {b - d} \right|)\) is the distance between \(\widetilde{a}\) and its complement \(\widetilde{a}^{C}\), i.e., the certainty information; \((2 - c - d)/(4 - a - b - c - d)\) is the relative closeness degree; \(a + c = 1 - \pi_{\widetilde{a}}^{U}\) and \(b + d = 1 - \pi_{\widetilde{a}}^{L}\) are the amount of information; and \(a - c\) and \(b - d\) are the reliability of the IVIFS.
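A direct transcription of Eq. (17) (our sketch, with the IVIFS again encoded as a 4-tuple \((a,b,c,d)\)):

```python
import math

def s_info(x):
    """The information-based score function of Eq. (17)."""
    a, b, c, d = x
    certainty = 0.5 * (abs(a - c) + abs(b - d))     # distance to the complement, Eq. (16)
    first = 1 + a + b - c - d + certainty           # reliability plus certainty
    second = ((1 + a + c) * math.exp(a - c + a + b - 3)
              + (1 + b + d) * math.exp(b - d - c - d - 1))  # amount of information
    closeness = (2 - c - d) / (4 - a - b - c - d)   # relative closeness, Eq. (14)
    return first * second * closeness / 16

print(s_info((1, 1, 0, 0)), s_info((0, 0, 1, 1)))  # 1.0 0.0 (the bounds of Property 2)
# The tie of Example 2.1 is now broken:
print(round(s_info((0.3, 0.5, 0.2, 0.4)), 4))      # 0.0266
print(round(s_info((0.35, 0.45, 0.25, 0.35)), 4))  # 0.0261
```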

The information-based score function conforms to Property 1.

Proof

(1) When \(a \ge c\) and \(b,c,d\) are fixed, then we have

$$\begin{aligned} & S(\widetilde{a}) = [1 + 1.5a + b - 1.5c - d + 0.5|b - d|] \\ & \quad [(1 + a + c)e^{a - c + a + b} /e^{3} + (1 + b + d)e^{b - d - c - d} /e] \\ & \quad [(2 - c - d)/(4 - a - b - c - d)]/16. \\ \end{aligned}$$

Let \(M_{1} = [1 + 1.5a + b - 1.5c - d + 0.5|b - d|]\) \([(1 + a + c)e^{a - c + a + b} /e^{3} + (1 + b + d)e^{b - d - c - d} /e]\) and \(N = (2 - c - d)/(4 - a - b - c - d)\). By observation, \(a\) has a positive effect on \(M_{1}\), so taking the derivative of \(M_{1}\) with respect to \(a\) gives \(\partial M_{1} /\partial a > 0\). Taking the derivative of \(N\) with respect to \(a\), we have \(\partial N/\partial a = (2 - c - d)/(4 - a - b - c - d)^{2} \ge 0\). Given that \(M_{1} \ge 0\) and \(N \ge 0\), thus

$$\partial S(\widetilde{a})/\partial a = N\,\partial M_{1} /\partial a + M_{1}\,\partial N/\partial a \ge 0.$$

When \(a < c\) and \(b,c,d\) are fixed, then we have

$$\begin{aligned} & S(\widetilde{a}) = (1 + 0.5a + b - 0.5c - d + 0.5|b - d|) \\ & \quad [(1 + a + c)e^{a - c + a + b} /e^{3} + (1 + b + d)e^{b - d - c - d} /e] \\ & \quad [(2 - c - d)/(4 - a - b - c - d)]/16. \\ \end{aligned}$$

Let \(M_{2} = (1 + 0.5a + b - 0.5c - d + 0.5|b - d|)\) \([(1 + a + c)e^{a - c + a + b} /e^{3} + (1 + b + d)e^{b - d - c - d} /e]\), we have \(\partial M_{2} /\partial a > 0\). Given that \(M_{2} \ge 0\) and \(N \ge 0\), thus

$$\partial S(\widetilde{a})/\partial a = N\partial M_{2} /\partial a + M_{2} \partial N/\partial a \ge 0.$$

Only when \(c = d = 1\) do we have \(N = 0\), \(\partial N/\partial a = 0\) and \(\partial S(\widetilde{a})/\partial a = N\,\partial M_{2} /\partial a + M_{2}\,\partial N/\partial a = 0\).

Thus, when \(b,c,d\) are fixed, then \(\partial S(\widetilde{a})/\partial a > 0\).

(2) When \(b \ge d\) and \(a,c,d\) are fixed, then we have

$$\begin{aligned} S(\widetilde{a}) & = (1 + a + 1.5b - c - 1.5d + 0.5|a - c|) \\ & \quad [(1 + a + c)e^{a - c + a + b} /e^{3} + (1 + b + d)e^{b - d - c - d} /e] \\ & \quad [(2 - c - d)/(4 - a - b - c - d)]/16. \\ \end{aligned}$$

Let \(M_{3} = (1 + a + 1.5b - c - 1.5d + 0.5|a - c|)\) \([(1 + a + c)e^{a - c + a + b} /e^{3} + (1 + b + d)e^{b - d - c - d} /e]\). By observation, \(b\) has a positive effect on \(M_{3}\), so taking the derivative of \(M_{3}\) with respect to \(b\) gives \(\partial M_{3} /\partial b \ge 0\). Taking the derivative of \(N\) with respect to \(b\), we have \(\partial N/\partial b = (2 - c - d)/(4 - a - b - c - d)^{2} \ge 0\). Given that \(M_{3} \ge 0\) and \(N \ge 0\), thus

$$\partial S(\widetilde{a})/\partial b = N\,\partial M_{3} /\partial b + M_{3}\,\partial N/\partial b \ge 0.$$

When \(b < d\) and \(a,c,d\) are fixed, then we have

$$\begin{aligned} S(\widetilde{a}) & = (1 + a + 0.5b - c - 0.5d + 0.5|a - c|) \\ & \quad [(1 + a + c)e^{a - c + a + b} /e^{3} + (1 + b + d)e^{b - d - c - d} /e] \\ & \quad [(2 - c - d)/(4 - a - b - c - d)]/16 \\ \end{aligned}$$

Let \(M_{4} = (1 + a + 0.5b - c - 0.5d + 0.5|a - c|)\) \([(1 + a + c)e^{a - c + a + b} /e^{3} + (1 + b + d)e^{b - d - c - d} /e]\), the same as above, we have

$$\partial S(\widetilde{a})/\partial b = N\,\partial M_{4} /\partial b + M_{4}\,\partial N/\partial b \ge 0.$$

Only when \(c = d = 1\) do we have \(N = 0\), \(\partial N/\partial b = 0\) and

$$\partial S(\widetilde{a})/\partial b = N\,\partial M_{4} /\partial b + M_{4}\,\partial N/\partial b = 0.$$

Thus, when \(a,c,d\) are fixed, then \(\partial S(\widetilde{a})/\partial b > 0\).

(3) When \(a \ge c\) and \(a,b,d\) are fixed, by observation, \(c\) has a negative effect on \(M_{1}\). Taking the derivative of \(M_{1}\) with respect to \(c\), we have \(\partial M_{1} /\partial c \le 0\). Taking the derivative of \(N\) with respect to \(c\), we have \(\partial N/\partial c = (a + b - 2)/(4 - a - b - c - d)^{2} \le 0\). Given that \(M_{1} \ge 0\) and \(N \ge 0\), thus

$$\partial S(\widetilde{a})/\partial c = N\,\partial M_{1} /\partial c + M_{1}\,\partial N/\partial c \le 0.$$

When \(a < c\) and \(a,b,d\) are fixed, the same as the above proof, we have

$$\partial S(\widetilde{a})/\partial c = N\,\partial M_{2} /\partial c + M_{2}\,\partial N/\partial c \le 0.$$

Only when \(c = d = 1\) do we have \(N = 0\), \(\partial N/\partial c < 0\) and \(M_{1} = M_{2} = 0\). So

$$\begin{aligned} \partial S(\widetilde{a})/\partial c & = N\,\partial M_{1} /\partial c + M_{1}\,\partial N/\partial c = 0, \\ \partial S(\widetilde{a})/\partial c & = N\,\partial M_{2} /\partial c + M_{2}\,\partial N/\partial c = 0. \\ \end{aligned}$$

Thus, when \(a,b,d\) are fixed, then \(\partial S(\widetilde{a})/\partial c < 0\).

(4) When \(b \ge d\) and \(a,b,c\) are fixed, by observation, \(d\) has a negative effect on \(M_{3}\), so taking the derivative of \(M_{3}\) with respect to \(d\) gives \(\partial M_{3} /\partial d \le 0\). Taking the derivative of \(N\) with respect to \(d\), we have \(\partial N/\partial d = (a + b - 2)/(4 - a - b - c - d)^{2} \le 0\). Given that \(M_{3} \ge 0\) and \(N \ge 0\), thus

$$\partial S(\widetilde{a})/\partial d = N\,\partial M_{3} /\partial d + M_{3}\,\partial N/\partial d \le 0.$$

When \(b < d\) and \(a,b,c\) are fixed, the same as above, we have

$$\partial S(\widetilde{a})/\partial d = N\,\partial M_{4} /\partial d + M_{4}\,\partial N/\partial d \le 0.$$

Only when \(c = d = 1\) do we have \(N = 0\), \(\partial N/\partial d < 0\) and \(M_{3} = M_{4} = 0\). So

$$\begin{aligned} \partial S(\widetilde{a})/\partial d & = N\,\partial M_{3} /\partial d + M_{3}\,\partial N/\partial d = 0, \\ \partial S(\widetilde{a})/\partial d & = N\,\partial M_{4} /\partial d + M_{4}\,\partial N/\partial d = 0. \\ \end{aligned}$$

Thus, when \(a,b,c\) are fixed, then \(\partial S(\widetilde{a})/\partial d < 0\).

Property 2

Let \(\widetilde{a}\) be an IVIFS; then \(S(\widetilde{a}) \in [0,1]\).

Proof

When \(\widetilde{a} = \,< [1,1],[0,0] >\), then using Eq. (17), we have \(S(\widetilde{a}) = 1\).

When \(\widetilde{a} = \,< [0,0],[1,1] >\), then using Eq. (17), we have \(S(\widetilde{a}) = 0\).

From the above proof, we know that \(S(\widetilde{a})\) satisfies Property 1, so \(S(\widetilde{a})\) increases with the increasing of \(a\) and \(b\) and decreases with the increasing of \(c\) and \(d\). When \(a = b = 1\) and \(c = d = 0\), \(S(\widetilde{a})\) attains the maximum score value 1. When \(a = b = 0\) and \(c = d = 1\), \(S(\widetilde{a})\) attains the minimum score value 0.

Therefore, we have \(S(\widetilde{a}) \in [0,1]\).

4 Comparison with the existing ranking methods

In this section, we use related examples to show the effectiveness and superiority of the information-based score function. First, to demonstrate its effectiveness, we verify that the information-based score function conforms to the relations of the IVIFSs and compare it with the existing ranking methods. Second, comparative analyses are used to show its superiority. For convenience, Eq. (6) is abbreviated as \(P_{X}\) in this section.

Example 4.1

\(\widetilde{a}_{17} = \,< [0.15,0.35],[0.2,0.3] >\), \(\widetilde{a}_{18} = \,< [0.25,0.35],[0.2,0.3] >\), \(\widetilde{a}_{19} = \,< [0.5,0.6],[0.3,0.4] >\), \(\widetilde{a}_{20} = \,< [0.5,0.6],[0.1,0.4] >\), \(\widetilde{a}_{21} = \,< [0.5296,0.7],[0.1516,0.2551] >\), \(\widetilde{a}_{22} = \,< [0.5476,0.6565],[0.1,0.2213] >\). \(\widetilde{a}_{23} = \,< [0.8,0.8],[0.2,0.2] >\), \(\widetilde{a}_{24} = \,< [0,0.4],[0.4,0.4] >\), \(\widetilde{a}_{25} = \,< [0.2,0.6],[0.2,0.4] >\), and \(\widetilde{a}_{26} = \,< [0.2,0.3],[0.1,0.5] >\) are IVIFSs. The values of these IVIFSs calculated by Eqs. (6), (7), (9)–(11) and (17) are shown in Table 1. Let us analyze the ranking orders by using the relations of the IVIFSs.

Table 1 Calculation results of the ranking methods in Example 4.1

\(\widetilde{a}_{17}\) and \(\widetilde{a}_{18}\) have the same non-membership interval and \(0.25 > 0.15\), so \(\widetilde{a}_{18} > \widetilde{a}_{17}\). \(\widetilde{a}_{19}\) and \(\widetilde{a}_{20}\) have the same membership interval and \(0.3 > 0.1\), so \(\widetilde{a}_{19} < \widetilde{a}_{20}\). From the membership intervals of \(\widetilde{a}_{21}\) and \(\widetilde{a}_{22}\), we have \(0.5476 - 0.5296 = 0.018\), \(0.6565 - 0.7 = - 0.0435\) and \(0.018 - 0.0435 = - 0.0255\). From the non-membership intervals of \(\widetilde{a}_{21}\) and \(\widetilde{a}_{22}\), we have \(0.1516 - 0.1 = 0.0516\), \(0.2551 - 0.2213 = 0.0338\) and \(0.0516 + 0.0338 = 0.0854\). Since \(0.0854 - 0.0255 > 0\), the advantage of \(\widetilde{a}_{22}\) in non-membership outweighs its disadvantage in membership, so \(\widetilde{a}_{22}\) is bigger than \(\widetilde{a}_{21}\). Since \(0.8 > 0\), \(0.8 > 0.4\) and \(0.2 < 0.4\), \(0.2 < 0.4\), we have \(\widetilde{a}_{23} > \widetilde{a}_{24}\). From \(0.2 - 0.2 = 0\), \(0.6 - 0.3 = 0.3\) and \(0.2 - 0.1 = 0.1\), \(0.4 - 0.5 = - 0.1\), \(\widetilde{a}_{25}\) is bigger than \(\widetilde{a}_{26}\). From Table 1, we know that the ranking orders calculated by \(S(\widetilde{a})\) conform to the relations of the IVIFSs. Furthermore, the ranking orders calculated by \(S(\widetilde{a})\) are consistent with those of Eqs. (6), (7) and (9)–(11). Thus, \(S(\widetilde{a})\) is effective.

In order to show the superiority of the information-based score function, comparative analyses are performed using special IVIFSs, for example, IVIFSs with \(a = b = 0\), \(a + b = c + d\) or \(a = b = c = d\). \(\widetilde{a}_{27} = < [0.5,0.5],[0.5,0.5] >\) and \(\widetilde{a}_{28} = < [0,0],[0,0] >\) are two such special IVIFSs. Using the information-based score function of the IVIFS and the existing ranking methods, we calculate the ranking values of the special IVIFSs. The values are shown in Table 2.

Table 2 Comparison with the existing ranking methods

From Table 2, we know that the existing ranking methods produce unreasonable results (in bold type), giving the same ranking value for two different IVIFSs. For instance, \(P_{X}\) cannot rank any of the listed pairs except \(\widetilde{a}_{11}\) and \(\widetilde{a}_{12}\). \(S_{\rm NWC} (\widetilde{a})\) cannot rank \(\widetilde{a}_{9}\), \(\widetilde{a}_{10}\), \(\widetilde{a}_{27}\) and \(\widetilde{a}_{28}\). \(K(\widetilde{a})\) cannot rank \(\widetilde{a}_{11}\), \(\widetilde{a}_{12}\), \(\widetilde{a}_{15}\), \(\widetilde{a}_{16}\), \(\widetilde{a}_{27}\) and \(\widetilde{a}_{28}\). \(S_{\rm GM} (\widetilde{a})\) cannot rank \(\widetilde{a}_{1}\), \(\widetilde{a}_{2}\), \(\widetilde{a}_{9}\) to \(\widetilde{a}_{12}\), \(\widetilde{a}_{15}\) and \(\widetilde{a}_{16}\), and it cannot calculate the score of \(\widetilde{a}_{28}\). \(G(\widetilde{a})\) cannot rank \(\widetilde{a}_{9}\), \(\widetilde{a}_{10}\), \(\widetilde{a}_{15}\) and \(\widetilde{a}_{16}\), and it cannot calculate the scores of \(\widetilde{a}_{11}\), \(\widetilde{a}_{12}\) and \(\widetilde{a}_{28}\). Using \(S(\widetilde{a})\), we can rank all these special IVIFSs well: the shortcomings of the existing ranking methods are overcome, and the information conveyed by the IVIFSs ensures that \(S(\widetilde{a})\) can distinguish them.

Note: “bold” denotes the unreasonable case. The symbol “×” means “cannot calculate the value.”

The membership and non-membership intervals of an IVIFS affect the ranking value significantly. For an IFS, these intervals degenerate to points, so the IFS is a special case of the IVIFS with only two parameters, the membership and the non-membership. With so few parameters, important information affecting the ranking value is easily left out when constructing a ranking method for the IFS. When the IVIFSs reduce to IFSs, our information-based score function can still rank the IFSs well.

Example 4.2

\(\widetilde{a}_{29} = < [0.5,0.5],[0.45,0.45] >\), \(\widetilde{a}_{30} = < [0.25,0.25],[0.05,0.05] >\), \(\widetilde{a}_{31} = < [0.6,0.6],[0.2,0.2] >\) and \(\widetilde{a}_{32} = < [0.7,0.7],[0.3,0.3] >\) are IFSs. Using Eq. (17), we have

$$\begin{aligned} S(\widetilde{a}_{29} ) & = 0.022,S(\widetilde{a}_{30} ) = 0.0368, \\ S(\widetilde{a}_{31} ) & = 0.1014\;{\text{and}}\;S(\widetilde{a}_{32} ) = 0.116. \\ \end{aligned}$$

Thus,

$$S(\widetilde{a}_{29} ) < S(\widetilde{a}_{30} )\;{\text{and}}\;S(\widetilde{a}_{31} ) < S(\widetilde{a}_{32} )$$

The membership degree of \(\widetilde{a}_{29}\) exceeds that of \(\widetilde{a}_{30}\) by \(0.5 - 0.25 = 0.25\), while its non-membership exceeds that of \(\widetilde{a}_{30}\) by \(0.45 - 0.05 = 0.4\). Since the excess in non-membership outweighs the excess in membership, \(\widetilde{a}_{29}\) is smaller than \(\widetilde{a}_{30}\). The membership of \(\widetilde{a}_{32}\) exceeds that of \(\widetilde{a}_{31}\) by \(0.7 - 0.6 = 0.1\), and its non-membership exceeds that of \(\widetilde{a}_{31}\) by \(0.3 - 0.2 = 0.1\). The information amount of \(\widetilde{a}_{32}\) is 1, whereas that of \(\widetilde{a}_{31}\) is 0.8; \(1 > 0.8\) is one reason why \(\widetilde{a}_{32}\) is bigger than \(\widetilde{a}_{31}\), and \(0.7 > 0.6\) is another. In addition, the relative closeness degree of \(\widetilde{a}_{32}\) is bigger than that of \(\widetilde{a}_{31}\), i.e., \(0.7 > 0.6667\). Thus, we select \(\widetilde{a}_{30}\) and \(\widetilde{a}_{32}\) as the better IFSs.

Using Eq. (8), we have

$$\begin{aligned} S_{\rm NWC} (\widetilde{a}_{29} ) & = 0.0475,\;S_{\rm NWC} (\widetilde{a}_{30} ) = 0.06, \\ S_{\rm NWC} (\widetilde{a}_{31} ) & = 0.32\;{\text{and}}\;S_{\rm NWC} (\widetilde{a}_{32} ) = 0.4. \\ \end{aligned}$$

Thus,

$$S_{\rm NWC} (\widetilde{a}_{29} ) < S_{\rm NWC} (\widetilde{a}_{30} )\;{\text{and}}\;S_{\rm NWC} (\widetilde{a}_{31} ) < S_{\rm NWC} (\widetilde{a}_{32} ).$$

The ranking orders calculated by Eq. (17) are consistent with those of Eq. (8).

Using Eq. (7), we have

$$\begin{aligned} S_{\rm GM} (\widetilde{a}_{29} ) & = 0.5263,\;S_{\rm GM} (\widetilde{a}_{30} ) = 0.8333, \\ S_{\rm GM} (\widetilde{a}_{31} ) & = 0.75\;{\text{and}}\;S_{\rm GM} (\widetilde{a}_{32} ) = 0.7. \\ \end{aligned}$$

Thus,

$$S_{\rm GM} (\widetilde{a}_{29} ) < S_{\rm GM} (\widetilde{a}_{30} )\;{\text{and}}\;S_{\rm GM} (\widetilde{a}_{31} ) > S_{\rm GM} (\widetilde{a}_{32} ).$$

\(S_{\rm GM} (\widetilde{a}_{31} ) > S_{\rm GM} (\widetilde{a}_{32} )\) is not consistent with the above analyses, because the information used to rank the IFSs was not taken into account.

Using Eq. (13), we have

$$\begin{aligned} R(\widetilde{a}_{29} ) & = 0.2619,\,R(\widetilde{a}_{30} ) = 0.1781, \\ R(\widetilde{a}_{31} ) & = 0.3926\;{\text{and}}\;R(\widetilde{a}_{32} ) = 0.49. \\ \end{aligned}$$

Thus, \(R(\widetilde{a}_{29} ) > R(\widetilde{a}_{30} )\) and \(R(\widetilde{a}_{31} ) < R(\widetilde{a}_{32} )\).

Equation (13) only considers that the membership of \(\widetilde{a}_{29}\) is bigger than that of \(\widetilde{a}_{30}\); it ignores the important influence of the non-membership on the ranking value. Thus, its results cannot really reflect the ranking orders.

From the above analyses, we know the information-based score function conforms to the relations of the IVIFSs and Property 1. Furthermore, it can rank the IVIFSs well. So it is reasonable and better than the existing ranking methods.

5 Applications of the information-based score function in MADM

In this section, we use three illustrative examples adapted from Wang and Chen (2018) to demonstrate the implementation process of the proposed MADM method based on the information-based score function. A comparative analysis of the computational results is also conducted to show the superiority of the proposed MADM method.

When \(< [h,y],[z,g] >\) is the IVIF weight of an attribute, the maximum weight range proposed by Chen and Huang (2017) is \([h,1 - z]\). However, they overlooked that the sub-range \([y,1 - z]\) is impossible for the weight; taking this maximum impossible weight range \([y,1 - z]\) into account leaves the weight range \([h,y]\). In addition, to ensure the rationality of the weights, the weight relation imposed in the linear programming model should be consistent with the weight relation calculated by Eq. (17). We apply these ranges and relations of the weights in the following examples.

Example 5.1

Let \(x_{1}\), \(x_{2}\), \(x_{3}\) and \(x_{4}\) be four alternatives and \(C_{1}\), \(C_{2}\) and \(C_{3}\) be three attributes. Assume that the IVIF weights \(\widetilde{w}_{1}\), \(\widetilde{w}_{2}\) and \(\widetilde{w}_{3}\) of the attributes \(C_{1}\), \(C_{2}\) and \(C_{3}\) are

$$\begin{aligned} \widetilde{w}_{1} & = < [0.1,0.4],[0.2,0.55] >, \\ \widetilde{w}_{2} & = < [0.2,0.5],[0.15,0.45] >, \\ \widetilde{w}_{3} & = < [0.25,0.6],[0.15,0.38] >. \\ \end{aligned}$$

The ranges of the weights are \(0.1 \le w_{1}^{*} \le 0.4\), \(0.2 \le w_{2}^{*} \le 0.5\) and \(0.25 \le w_{3}^{*} \le 0.6\). Using Eq. (17), we have \(S(\widetilde{w}_{1} ) = 0.0097\), \(S(\widetilde{w}_{2} ) = 0.0207\) and \(S(\widetilde{w}_{3} ) = 0.0371\). Thus, the weight relation is \(w_{3}^{*} \ge w_{2}^{*} \ge w_{1}^{*}\).

Assume that the decision matrix \(\widetilde{R} = (\widetilde{r}_{ij} )_{4 \times 3}\) provided by the decision maker is as follows:

$$\widetilde{R} = (\widetilde{r}_{ij} )_{4 \times 3} = \begin{pmatrix} < [0.4,0.5],[0.3,0.4] > & < [0.4,0.6],[0.2,0.4] > & < [0.1,0.3],[0.5,0.6] > \\ < [0.53,0.7],[0.05,0.1] > & < [0.6,0.63],[0.16,0.3] > & < [0.49,0.7],[0.1,0.2] > \\ < [0.3,0.6],[0.3,0.4] > & < [0.5,0.6],[0.3,0.4] > & < [0.5,0.6],[0.1,0.3] > \\ < [0.7,0.8],[0.1,0.2] > & < [0.6,0.7],[0.1,0.3] > & < [0.3,0.4],[0.1,0.2] > \\ \end{pmatrix}$$
  • Step 1: Based on \(\widetilde{R} = (\widetilde{r}_{ij} )_{4 \times 3}\), the ranges, and relation of the weights, we have the following linear programming model:

    $$\begin{aligned} & \hbox{max} \left\{ {M = \sum\limits_{j = 1}^{3} {w_{j} \left[ {\sum\limits_{i = 1}^{4} {\sum\limits_{k = 1}^{4} {D(\widetilde{r}_{ij} ,\widetilde{r}_{kj} )} } } \right]} } \right\} \\ & \quad s.t. \, \left\{ \begin{array}{l} 0.1 \le w_{1}^{*} \le 0.4 \hfill \\ 0.2 \le w_{2}^{*} \le 0.5 \hfill \\ 0.25 \le w_{3}^{*} \le 0.6 \hfill \\ w_{3}^{*} \ge w_{2}^{*} \ge w_{1}^{*} \hfill \\ w_{1}^{*} + w_{2}^{*} + w_{3}^{*} = 1 \hfill \\ \end{array} \right. \\ \end{aligned}$$
    (18)
  • Step 2: Solving Eq. (18), we have the optimal weights, i.e., \(w_{1}^{*} = 0.2, \, w_{2}^{*} = 0.2,\) \(w_{3}^{*} = 0.6\).

  • Step 3: Based on Eq. (2) and the above optimal weights of attributes, then we have the aggregated results as follows:

    $$\begin{aligned} \widetilde{a}_{1} & = < [0.2347,0.4149],[0.3758,0.5102] >, \\ \widetilde{a}_{2} & = < [0.5221,0.6871],[0.0956,0.1888] >, \\ \widetilde{a}_{3} & = < [0.4652,0.6000],[0.1552,0.3366] >, \\ \widetilde{a}_{4} & = < [0.4717,0.5807],[0.1000,0.2169] >. \\ \end{aligned}$$
  • Step 4: Using Eq. (17), then we obtain

    $$\begin{aligned} S(\widetilde{a}_{1} ) & = 0.0099,\,S(\widetilde{a}_{2} ) = 0.1295, \\ S(\widetilde{a}_{3} ) & = 0.0636,\;S(\widetilde{a}_{4} ) = 0.0856. \\ \end{aligned}$$

Because \(S(\widetilde{a}_{2} ) > S(\widetilde{a}_{4} ) > S(\widetilde{a}_{3} ) > S(\widetilde{a}_{1} )\), thus the preference order of the alternatives is

$$x_{2} \succ x_{4} \succ x_{3} \succ x_{1} .$$

where “\(\succ\)” means “is better than.” This result is distinct from the preference order of Wang and Chen (2018), which is

$$x_{4} \succ x_{2} \succ x_{3} \succ x_{1} .$$
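The whole pipeline of Example 5.1 (Steps 1–4) can be reproduced end to end. The sketch below is our illustration, not the authors' code; it assumes SciPy's `linprog` is available for the linear program of Eq. (18) and restates, for self-containment, the Hamming distance of Eq. (1), the IVIFWA operator of Eq. (2), and the score function of Eq. (17):

```python
import math
from scipy.optimize import linprog  # assumption: SciPy is available

def hamming(x, y):
    """Normalized Hamming distance, Eq. (1)."""
    return (sum(abs(p - q) for p, q in zip(x, y))
            + abs((x[0] + x[2]) - (y[0] + y[2]))
            + abs((x[1] + x[3]) - (y[1] + y[3]))) / 4

def ivifwa(values, weights):
    """IVIFWA operator, Eq. (2)."""
    return (1 - math.prod((1 - v[0]) ** w for v, w in zip(values, weights)),
            1 - math.prod((1 - v[1]) ** w for v, w in zip(values, weights)),
            math.prod(v[2] ** w for v, w in zip(values, weights)),
            math.prod(v[3] ** w for v, w in zip(values, weights)))

def s_info(x):
    """Information-based score function, Eq. (17)."""
    a, b, c, d = x
    return ((1 + a + b - c - d + 0.5 * (abs(a - c) + abs(b - d)))
            * ((1 + a + c) * math.exp(a - c + a + b - 3)
               + (1 + b + d) * math.exp(b - d - c - d - 1))
            * (2 - c - d) / (4 - a - b - c - d) / 16)

# Decision matrix of Example 5.1: rows are alternatives, columns attributes.
R = [[(0.40, 0.50, 0.30, 0.40), (0.40, 0.60, 0.20, 0.40), (0.10, 0.30, 0.50, 0.60)],
     [(0.53, 0.70, 0.05, 0.10), (0.60, 0.63, 0.16, 0.30), (0.49, 0.70, 0.10, 0.20)],
     [(0.30, 0.60, 0.30, 0.40), (0.50, 0.60, 0.30, 0.40), (0.50, 0.60, 0.10, 0.30)],
     [(0.70, 0.80, 0.10, 0.20), (0.60, 0.70, 0.10, 0.30), (0.30, 0.40, 0.10, 0.20)]]

# Step 1: objective coefficient D_j = sum of pairwise column distances.
D = [sum(hamming(ri[j], rk[j]) for ri in R for rk in R) for j in range(3)]

# Step 2: maximize D @ w, i.e. minimize -D @ w, under Eq. (18)'s constraints.
res = linprog(c=[-dj for dj in D],
              A_ub=[[1, -1, 0], [0, 1, -1]], b_ub=[0, 0],   # w1 <= w2 <= w3
              A_eq=[[1, 1, 1]], b_eq=[1],
              bounds=[(0.1, 0.4), (0.2, 0.5), (0.25, 0.6)])
w = res.x  # the paper reports the optimum w* = (0.2, 0.2, 0.6)

# Steps 3-4: aggregate each alternative and score the aggregated result.
scores = [s_info(ivifwa(row, w)) for row in R]
order = sorted(range(len(R)), key=lambda i: -scores[i])
print([f"x{i + 1}" for i in order])  # expected: ['x2', 'x4', 'x3', 'x1']
```

Since the objective is linear, the optimum sits at a vertex of the feasible region; here the third attribute spreads the alternatives the most, which pushes \(w_{3}^{*}\) to its upper bound 0.6 and matches the optimal weights reported above.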

Example 5.2

Let \(x_{1}\), \(x_{2}\), \(x_{3}\) be three alternatives and \(C_{1}\), \(C_{2}\) and \(C_{3}\) be three attributes. Assume that the IVIF weights \(\widetilde{w}_{1}\), \(\widetilde{w}_{2}\) and \(\widetilde{w}_{3}\) of the attributes \(C_{1}\), \(C_{2}\) and \(C_{3}\) are

$$\begin{aligned} \widetilde{w}_{1} & = < [0.25,0.25],[0.25,0.25] >, \\ \widetilde{w}_{2} & = < [0.35,0.35],[0.4,0.4] >, \\ \widetilde{w}_{3} & = < [0.3,0.3],[0.65,0.65] >. \\ \end{aligned}$$

Assume that the decision matrix \(\widetilde{R} = (\widetilde{r}_{ij} )_{3 \times 3}\) provided by the decision maker is as follows:

$$\widetilde{R} = (\widetilde{r}_{ij} )_{3 \times 3} = \begin{pmatrix} < [0.45,0.66],[0.15,0.2] > & < [0.5,0.7],[0.13,0.28] > & < [0.3,0.8],[0.16,0.2] > \\ < [0.3,0.48],[0.2,0.25] > & < [0.6,0.7],[0.2,0.2] > & < [0.45,0.47],[0.5,0.5] > \\ < [0.15,0.2],[0.45,0.5] > & < [0.7,0.75],[0.05,0.1] > & < [0.6,0.6],[0.3,0.3] > \\ \end{pmatrix}$$
  • Step 1: Based on \(\widetilde{R} = (\widetilde{r}_{ij} )_{3 \times 3}\), the ranges, and relation of the weights, we have the linear programming model as follows:

    $$\begin{aligned} & \hbox{max} \left\{ {M = \sum\limits_{j = 1}^{3} {w_{j} \left[ {\sum\limits_{i = 1}^{3} {\sum\limits_{k = 1}^{3} {D(\widetilde{r}_{ij} ,\widetilde{r}_{kj} )} } } \right]} } \right\} \\ & \quad s.t. \, \left\{ {\begin{array}{*{20}l} {0.25 \le w_{1}^{*} \le 0.75} \hfill \\ {0.35 \le w_{2}^{*} \le 0.6} \hfill \\ {0.3 \le w_{3}^{*} \le 0.35} \hfill \\ {w_{3}^{*} \le w_{2}^{*} \le w_{1}^{*} } \hfill \\ {w_{1}^{*} + w_{2}^{*} + w_{3}^{*} = 1} \hfill \\ \end{array} } \right. \\ \end{aligned}$$
    (19)
  • Step 2: Solving Eq. (19), we have the optimal weights, i.e., \(w_{1}^{*} = 0.35, \, w_{2}^{*} = 0.35,w_{3}^{*} = 0.3\).

  • Step 3: Based on Eq. (2) and the above optimal weights of attributes, then we have the aggregated results as follows:

    $$\begin{aligned} \widetilde{a}_{1} & = \,< [0.4281,0.7225],[0.1455,0.2250] >, \\ \widetilde{a}_{2} & = \,< [0.4647,0.5686],[0.2633,0.2847] >, \\ \widetilde{a}_{3} & = \,< [0.5291,0.5675],[0.1847,0.2442] >. \\ \end{aligned}$$
  • Step 4: Using Eq. (17), then we obtain

    $$S(\widetilde{a}_{1} ) = 0.1018,S(\widetilde{a}_{2} ) = 0.0531,S(\widetilde{a}_{3} ) = 0.0763.$$

Because \(S(\widetilde{a}_{1} ) > S(\widetilde{a}_{3} ) > S(\widetilde{a}_{2} )\), thus the preference order of the alternatives is

$$x_{1} \succ x_{3} \succ x_{2} .$$

This result is distinct from the preference order of Wang and Chen (2018), which is

$$x_{3} \succ x_{1} \succ x_{2} .$$

Example 5.3

Assume the IVIF weights \(\widetilde{w}_{1}\), \(\widetilde{w}_{2}\) and \(\widetilde{w}_{3}\) of the attributes \(C_{1}\), \(C_{2}\) and \(C_{3}\) are

$$\begin{aligned} \widetilde{w}_{1} & = \,< [0.1,0.4],[0.2,0.55] >, \\ \widetilde{w}_{2} & = \,< [0.2,0.5],[0.15,0.45] >, \\ \widetilde{w}_{3} & = \,< [0.25,0.6],[0.15,0.38] >. \\ \end{aligned}$$

Assume that the decision matrix \(\widetilde{R} = (\widetilde{r}_{ij} )_{4 \times 3}\) provided by the decision maker is as follows:

$$\widetilde{R} = (\widetilde{r}_{ij} )_{4 \times 3} = \begin{pmatrix} < [0.32,0.51],[0.34,0.43] > & < [0.41,0.6],[0.1,0.3] > & < [0.41,0.6],[0.19,0.4] > \\ < [0.32,0.75],[0.03,0.11] > & < [0.51,0.6],[0.1,0.3] > & < [0.42,0.7],[0.1,0.21] > \\ < [0.42,0.6],[0.29,0.4] > & < [0.4,0.5],[0.2,0.4] > & < [0.45,0.6],[0.1,0.33] > \\ < [0.61,0.7],[0.08,0.22] > & < [0.4,0.6],[0.14,0.2] > & < [0.45,0.7],[0.1,0.29] > \\ \end{pmatrix}$$
  • Step 1: Based on \(\widetilde{R} = (\widetilde{r}_{ij} )_{4 \times 3}\), the ranges, and relation of the weights, we have the following linear programming model:

    $$\begin{aligned} & \hbox{max} \left\{ {M = \sum\limits_{j = 1}^{3} {w_{j} \left[ {\sum\limits_{i = 1}^{4} {\sum\limits_{k = 1}^{4} {D(\widetilde{r}_{ij} ,\widetilde{r}_{kj} )} } } \right]} } \right\} \\ & \quad s.t. \, \left\{ {\begin{array}{*{20}l} {0.1 \le w_{1}^{*} \le 0.4} \hfill \\ {0.2 \le w_{2}^{*} \le 0.5} \hfill \\ {0.25 \le w_{3}^{*} \le 0.6} \hfill \\ {w_{3}^{*} \ge w_{2}^{*} \ge w_{1}^{*} } \hfill \\ {w_{1}^{*} + w_{2}^{*} + w_{3}^{*} = 1} \hfill \\ \end{array} } \right. \\ \end{aligned}$$
    (20)
  • Step 2: Solving Eq. (20), we have the optimal weights, i.e., \(w_{1}^{*} = 1/3, \, w_{2}^{*} = 1/3,\) \(w_{3}^{*} = 1/3\).

  • Step 3: Based on Eq. (2) and the above optimal weights of attributes, then we have the aggregated results as follows:

    $$\begin{aligned} \widetilde{a}_{1} & = < [0.3814,0.5720],[0.1836,0.3723] >, \\ \widetilde{a}_{2} & = < [0.4218,0.6892],[0.0670,0.1907] >, \\ \widetilde{a}_{3} & = < [0.4237,0.5691],[0.1797,0.3752] >, \\ \widetilde{a}_{4} & = < [0.4951,0.6698],[0.1039,0.2337] >. \\ \end{aligned}$$
  • Step 4: Using Eq. (17), then we obtain

    $$S(\widetilde{a}_{1} ) = 0.043,S(\widetilde{a}_{2} ) = 0.1133,S(\widetilde{a}_{3} ) = 0.0472,S(\widetilde{a}_{4} ) = 0.1072$$

Because \(S(\widetilde{a}_{2} ) > S(\widetilde{a}_{4} ) > S(\widetilde{a}_{3} ) > S(\widetilde{a}_{1} )\), thus the preference order of the alternatives is

$$x_{2} \succ x_{4} \succ x_{3} \succ x_{1} .$$

This result is distinct from the preference order of Wang and Chen (2018), which is

$$x_{4} \succ x_{2} \succ x_{3} \succ x_{1} .$$

From Example 5.1 to Example 5.3, our ranking orders all differ from those of Wang and Chen (2018). One reason is that Wang and Chen's method did not take into account the maximum impossible weight ranges and the relations of the weights. Another is that, in Step 1, Wang and Chen (2018) used the defective \(S_{\rm NWC} (\widetilde{a})\) to calculate the scores, whereas we use the distances between the IVIFSs instead of a ranking method. In Step 4, Wang and Chen (2018) used the defective \(S_{\rm NWC} (\widetilde{a})\) again to score the aggregated results, so their ranking orders are not accurate. Our information-based score function, by contrast, is well defined and is used only to rank the aggregated results in Step 4. Thus, the orders ranked by our method are better than those of Wang and Chen (2018).

6 Conclusions

In this paper, we enumerated several existing ranking methods that have drawbacks in ranking the IVIFSs. We then proposed an information-based score function considering the amount of information, the reliability, the certainty information, and the relative closeness degree. It is proved that the information-based score function increases with the increasing of \(a,b\) and decreases with the increasing of \(c,d\). Related examples show the effectiveness of the information-based score function, and the calculations on special IVIFSs show that it is more reasonable than the existing ranking methods. Finally, based on the distance between IVIFSs and the ranges and relations of the weights, we applied the information-based score function and the linear programming method to three illustrative examples. The comparison with Wang and Chen's MADM method shows that the information-based score function is well defined and that our MADM method is superior to that of Wang and Chen (2018).