
1 Introduction

To better describe uncertainty and fuzziness, Zadeh [1] proposed the theory of fuzzy sets (FSs) in 1965. In 1986, Atanassov [2] proposed the concept of intuitionistic fuzzy sets (IFSs). An IFS, which is a generalization of a fuzzy set, takes into account the neutral state 'neither this nor that' and describes the vagueness of a problem more precisely through two indices: the degrees of membership and non-membership. In practical decision making problems, decision makers often quantify decision information with interval numbers instead of crisp numbers, because they can only provide the approximate ranges of the degrees of membership and non-membership. In other words, the degrees of membership and non-membership are usually expressed by interval numbers. Therefore, based on IFSs, Atanassov and Gargov [3] further proposed the concept of interval-valued intuitionistic fuzzy sets (IVIFSs). In recent years, many scholars have been studying IFSs and IVIFSs, and entropy is a hotspot of this research field.

The concept of entropy, which originated in thermodynamics, was introduced into information theory by Shannon to measure the uncertainty of information. In 1965, entropy was first used by Zadeh [1] to measure the fuzziness of a fuzzy set. Later, in 1996, Burillo and Bustince [4] defined an entropy for intuitionistic fuzzy sets to measure the degree of hesitation. In 2001, Szmidt and Kacprzyk gave the definition of a new non-probabilistic intuitionistic fuzzy entropy based on the geometric interpretation of intuitionistic fuzzy sets. Regarding entropy on IVIFSs, Guo [5] and Liu [6] presented axiomatic definitions of interval-valued intuitionistic fuzzy entropy. Wang and Wei [7] extended the entropy formula for IFSs and proposed a new formula for interval-valued intuitionistic fuzzy entropy based on Guo's axiomatic definition. Gao and Wei [8] defined a new formula based on the improved Hamming distance for IVIFSs. However, the definition in paper [5] has a defect: the constraint for the maximum value of entropy considers only one aspect of uncertainty, fuzziness, and neglects the other aspect, lack of knowledge.

By analyzing the papers on intuitionistic fuzzy entropy, we point out that both aspects of uncertainty should be taken into account to measure the knowledge of an IFS adequately. As an extension of IFSs, IVIFSs also involve both fuzziness and lack of knowledge. Therefore, this paper improves the axiomatic definition of entropy for IVIFSs and proposes a new formula, which reflects the amount of information better. Finally, two applications are presented to verify the rationality of the proposed approach.

2 Preliminaries

Definition 1

[2] Let X be a finite and non-empty universe of discourse. An intuitionistic fuzzy set A is given by:

$$ A=\left\{ {\langle x,u_A \left( x \right) ,v_A \left( x \right) \rangle \left| {x\in X} \right. } \right\} , $$

where \(u_A \left( x \right) \in \left[ {0,1} \right] \) denotes the degree of membership of x to A and \(v_A \left( x \right) \in \left[ {0,1} \right] \) denotes the degree of non-membership of x to A. For every \(x\in X\), the following condition is satisfied:

$$ 0\le u_A \left( x \right) +v_A \left( x \right) \le 1. $$

For a given \(x\in X\), \(\pi _A \left( x \right) =1-u_A \left( x \right) -v_A \left( x \right) \) is called the intuitionistic fuzzy index or the hesitation margin.

Definition 2

[3] Let X be a finite and non-empty universe of discourse. An interval-valued intuitionistic fuzzy set A is given by:

$$ \begin{array}{l} \mathop A\limits ^\sim =\left\{ {\langle x,\mathop {u}\limits ^{\sim }{}_{{A}} \left( x \right) ,\mathop {v}\limits ^{\sim }{}_{{A}} \left( x \right) \rangle \left| {x\in X} \right. } \right\} \\ \;\;\;=\left\{ {\langle x,\left[ {u_A^- \left( x \right) ,u_A^+ \left( x \right) } \right] ,\left[ {v_A^- \left( x \right) ,v_A^+ \left( x \right) } \right] \rangle \left| {x\in X} \right. } \right\} \\ \end{array}, $$

where \(u_A^- \left( x \right) \in \left[ {0,1} \right] , u_A^+ \left( x \right) \in \left[ {0,1} \right] , v_A^- \left( x \right) \in \left[ {0,1} \right] , v_A^+ \left( x \right) \in \left[ {0,1} \right] . \left[ {u_A^- \left( x \right) ,u_A^+ \left( x \right) } \right] \) and \(\left[ {v_A^- \left( x \right) ,v_A^+ \left( x \right) } \right] \) denote the degrees of membership and non-membership of x to A, with the condition:

$$ u_A^+ \left( x \right) +v_A^+ \left( x \right) \le 1. $$

For a given \(x\in X, \mathop {\pi }\limits ^{\sim }{}_{{A}} \left( x \right) =\left[ {1-u_A^+ \left( x \right) -v_A^+ \left( x \right) ,1-u_A^- \left( x \right) -v_A^- \left( x \right) } \right] \) is called the interval-valued intuitionistic fuzzy index or the hesitation margin.

Definition 3

[3] Let \(\mathop A\limits ^\sim ,\mathop B\limits ^\sim \in IVIFS\left( X \right) , \mathop A\limits ^\sim =\left\{ \langle x,\left[ {u_A^- \left( x \right) ,u_A^+ \left( x \right) } \right] ,\left[ v_A^- \left( x \right) ,v_A^+\right. \right. \) \(\left. \left. \left( x \right) \right] \rangle \left| {x\in X} \right. \right\} , \mathop B\limits ^\sim =\left\{ {\langle x,\left[ {u_B^- \left( x \right) ,u_B^+ \left( x \right) } \right] ,\left[ {v_B^- \left( x \right) ,v_B^+ \left( x \right) } \right] \rangle \left| {x\in X} \right. } \right\} \). The following basic operations can be defined:

(1) \(\mathop A\limits ^\sim \subseteq \mathop B\limits ^\sim \) if and only if \(\left\{ {\begin{array}{l} u_A^- \left( x \right) \le u_B^- \left( x \right) ,u_A^+ \left( x \right) \le u_B^+ \left( x \right) \\ v_A^- \left( x \right) \ge v_B^- \left( x \right) ,v_A^+ \left( x \right) \ge v_B^+ \left( x \right) \\ \end{array}} \right. \);

(2) \(\mathop A\limits ^\sim =\mathop B\limits ^\sim \) if and only if \(\mathop A\limits ^\sim \subseteq \mathop B\limits ^\sim ,\mathop A\limits ^\sim \supseteq \mathop B\limits ^\sim \);

(3) \(\mathop {A^{C}}\limits ^\sim =\left\{ {\langle x,\left[ {v_A^- \left( x \right) ,v_A^+ \left( x \right) } \right] ,\left[ {u_A^- \left( x \right) ,u_A^+ \left( x \right) } \right] \rangle \left| {x\in X} \right. } \right\} \).

Definition 4

[8] For two IVIFSs \(\mathop A\limits ^\sim =\left\{ \langle x_i ,\left[ {u_A^- \left( {x_i } \right) ,u_A^+ \left( {x_i } \right) } \right] ,\left[ {v_A^- \left( {x_i } \right) ,v_A^+ \left( {x_i } \right) } \right] \rangle \right. \left. \left| {x_i \in X} \right. \right\} \) and \(\mathop B\limits ^\sim =\left\{ {\langle x_i ,\left[ {u_B^- \left( {x_i } \right) ,u_B^+ \left( {x_i } \right) } \right] ,\left[ {v_B^- \left( {x_i } \right) ,v_B^+ \left( {x_i } \right) } \right] \rangle \left| {x_i \in X} \right. } \right\} \), the Hamming distance measure and the weighted Hamming distance measure between \(\mathop A\limits ^\sim \) and \(\mathop B\limits ^\sim \) are defined as follows:

$$\begin{aligned} \begin{array}{l} d\left( {\mathop A\limits ^\sim ,\mathop B\limits ^\sim } \right) =\frac{1}{4n}\sum \nolimits _{i=1}^n {\left[ {\left| {u_A^- \left( {x_i } \right) -u_B^- \left( {x_i } \right) } \right| +\left| {u_A^+ \left( {x_i } \right) -u_B^+ \left( {x_i } \right) } \right| } \right. } \\ \qquad \qquad \qquad \left. +\left| {v_A^- \left( {x_i } \right) -v_B^- \left( {x_i } \right) } \right| +\left| {v_A^+ \left( {x_i } \right) -v_B^+ \left( {x_i } \right) } \right| \right. \\[2pt] \qquad \qquad \qquad \,\left. +\left| {\pi _A^- \left( {x_i } \right) -\pi _B^- \left( {x_i } \right) } \right| +\left| {\pi _A^+ \left( {x_i } \right) -\pi _B^+ \left( {x_i } \right) } \right| \right] , \\ \end{array} \end{aligned}$$
(1)
$$\begin{aligned} \begin{array}{l} d\left( {\mathop A\limits ^\sim ,\mathop B\limits ^\sim } \right) _w =\frac{1}{4n}\sum \nolimits _{i=1}^n {w_i \left[ {\left| {u_A^- \left( {x_i } \right) -u_B^- \left( {x_i } \right) } \right| +\left| {u_A^+ \left( {x_i } \right) -u_B^+ \left( {x_i } \right) } \right| } \right. } \\ \qquad \qquad \qquad \,\,\left. +\left| {v_A^- \left( {x_i } \right) -v_B^- \left( {x_i } \right) } \right| +\left| {v_A^+ \left( {x_i } \right) -v_B^+ \left( {x_i } \right) } \right| \right. \\[2pt] \qquad \qquad \qquad \,\,\left. +\left| {\pi _A^- \left( {x_i } \right) -\pi _B^- \left( {x_i } \right) } \right| +\left| {\pi _A^+ \left( {x_i } \right) -\pi _B^+ \left( {x_i } \right) } \right| \right] . \\ \end{array} \end{aligned}$$
(2)
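As a quick sketch (not the authors' code), formula (1) can be implemented directly. Each IVIFS is assumed to be stored as a list of elements `((u_minus, u_plus), (v_minus, v_plus))`, with the hesitation interval derived as in Definition 2; the function names are illustrative:

```python
def hesitation(elem):
    """Return the hesitation interval [pi-, pi+] of one IVIFS element."""
    (um, up), (vm, vp) = elem
    return (1 - up - vp, 1 - um - vm)

def hamming_distance(A, B):
    """Normalized Hamming distance between two IVIFSs, formula (1)."""
    n = len(A)
    total = 0.0
    for a, b in zip(A, B):
        (uam, uap), (vam, vap) = a
        (ubm, ubp), (vbm, vbp) = b
        pam, pap = hesitation(a)
        pbm, pbp = hesitation(b)
        total += (abs(uam - ubm) + abs(uap - ubp)
                  + abs(vam - vbm) + abs(vap - vbp)
                  + abs(pam - pbm) + abs(pap - pbp))
    return total / (4 * n)
```

For example, the distance between the crisp elements \(\langle [1,1],[0,0]\rangle \) and \(\langle [0,0],[1,1]\rangle \) is 1, the maximum possible value.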

3 Improved Axiomatic Definition and Formula of the Entropy for Interval-Valued Intuitionistic Fuzzy Sets

3.1 The Necessity of Improving the Axiomatic Definition and Formula of Interval-Valued Intuitionistic Fuzzy Entropy

Definition 5

[5, 6] \(\forall \mathop A\limits ^\sim \in IVIFS\left( X \right) \), the mapping \(E:IVIFS\left( X \right) \rightarrow \left[ {0,1} \right] \) is called an entropy if E satisfies the following conditions:

Condition 1: \(E\left( {\mathop A\limits ^\sim } \right) =0\) if and only if \(\mathop A\limits ^\sim \) is a crisp set;

Condition 2: \(E\left( {\mathop A\limits ^\sim } \right) =1\) if and only if \(\left[ {v_A^- \left( {x_i } \right) ,v_A^+ \left( {x_i } \right) } \right] =\left[ {u_A^- \left( {x_i } \right) ,u_A^+ \left( {x_i } \right) } \right] \) for every \(x_i \in X\);

Condition 3: \(E\left( {\mathop A\limits ^\sim } \right) =E\left( {\mathop {A^{C}}\limits ^\sim } \right) \) for every \(\mathop A\limits ^\sim \in IVIFS\left( X \right) \);

Condition 4: For any \(\mathop B\limits ^\sim \in IVIFS\left( X \right) \), if \(\mathop A\limits ^\sim {\subseteq } \mathop B\limits ^\sim \) when \(u_B^- \left( {x_i } \right) {\le }\,v_B^- \left( {x_i } \right) ,u_B^+ \left( {x_i } \right) \le v_B^+ \left( {x_i } \right) \) for every \(x_i \in X\), or \(\mathop A\limits ^\sim \supseteq \mathop B\limits ^\sim \) when \(u_B^- \left( {x_i } \right) \ge v_B^- \left( {x_i}\right) , u_B^+ \left( {x_i } \right) \ge v_B^+ \left( {x_i } \right) \) for every \(x_i \in X\), then \(E\left( {\mathop A\limits ^\sim } \right) \le E\left( {\mathop B\limits ^\sim } \right) \).

Based on Definition 5, papers [7] and [8] gave concrete entropy formulas, respectively:

$$\begin{aligned} E_1 \left( {\mathop A\limits ^\sim } \right)&=\frac{1}{n}\sum \nolimits _{i=1}^n {\frac{\min \left\{ {u_A^- \left( {x_i } \right) ,v_A^- \left( {x_i } \right) } \right\} +\min \left\{ {u_A^+ \left( {x_i } \right) ,v_A^+ \left( {x_i } \right) } \right\} +\pi _A^- \left( {x_i } \right) +\pi _A^+ \left( {x_i } \right) }{\max \left\{ {u_A^- \left( {x_i } \right) ,v_A^- \left( {x_i } \right) } \right\} +\max \left\{ {u_A^+ \left( {x_i } \right) ,v_A^+ \left( {x_i } \right) } \right\} +\pi _A^- \left( {x_i } \right) +\pi _A^+ \left( {x_i } \right) }} \\&=\frac{1}{n}\sum \nolimits _{i=1}^n {\frac{2-\left| {u_A^- \left( {x_i } \right) -v_A^- \left( {x_i } \right) } \right| -\left| {u_A^+ \left( {x_i } \right) -v_A^+ \left( {x_i } \right) } \right| +\pi _A^- \left( {x_i } \right) +\pi _A^+ \left( {x_i } \right) }{2+\left| {u_A^- \left( {x_i } \right) -v_A^- \left( {x_i } \right) } \right| +\left| {u_A^+ \left( {x_i } \right) -v_A^+ \left( {x_i } \right) } \right| +\pi _A^- \left( {x_i } \right) +\pi _A^+ \left( {x_i } \right) }} , \end{aligned}$$
$$ E_2 \left( {\mathop A\limits ^\sim } \right) =\frac{\min \left\{ {d\left( {\mathop A\limits ^\sim ,\mathop P\limits ^\sim } \right) ,d\left( {\mathop A\limits ^\sim ,\mathop Q\limits ^\sim } \right) } \right\} }{\max \left\{ {d\left( {\mathop A\limits ^\sim ,\mathop P\limits ^\sim } \right) ,d\left( {\mathop A\limits ^\sim ,\mathop Q\limits ^\sim } \right) } \right\} }, $$

where \(\mathop P\limits ^\sim =\left\{ {\langle x_i ,\left[ {1,1} \right] ,\left[ {0,0} \right] \rangle \left| {x_i \in X} \right. } \right\} \) and \(\mathop Q\limits ^\sim =\left\{ {\langle x_i ,\left[ {0,0} \right] ,\left[ {1,1} \right] \rangle \left| {x_i \in X} \right. } \right\} , d\left( {\mathop A\limits ^\sim ,\mathop P\limits ^\sim } \right) \) and \(d\left( {\mathop A\limits ^\sim ,\mathop Q\limits ^\sim } \right) \) are calculated by formula (1).

Moreover, different definitions, which are equivalent to Definition 5 in essence, have been proposed by Zhang [9], Szmidt [10] and Ye [11]. They also define entropy formulas for IVIFSs; the entropy formula in [11] extends the intuitionistic fuzzy entropy formula based on trigonometric functions proposed in [12].

Fuzzy entropy measures the fuzziness of a fuzzy set, but researchers differ on the definition of intuitionistic fuzzy entropy. Burillo and Bustince [4] defined the intuitionistic fuzzy entropy as a measure of the degree of hesitation, namely the entropy measures how far an intuitionistic fuzzy set is from a fuzzy set. We call it the B-B axiom for short. The constraint for the maximum value is \(u_A \left( {x_i } \right) =v_A \left( {x_i } \right) =0\) (i.e., \(\pi _A \left( {x_i } \right) =1\)). Szmidt and Kacprzyk [9] defined the entropy on intuitionistic fuzzy sets to measure how far an intuitionistic fuzzy set is from a crisp set, and this entropy (the S-K axiom for short) is a measure of fuzziness; it is maximal if and only if \(u_A \left( {x_i } \right) =v_A \left( {x_i } \right) \). Pal and Bustince [13] pointed out that the uncertainty of an intuitionistic fuzzy set includes both fuzziness and lack of knowledge. In order to quantify the uncertainty of an IFS better, they proposed the concept of two-tuple entropy, a pair \(\left( {E_I ,E_F } \right) \) consisting of \(E_I \) based on the B-B axiom and \(E_F \) based on the S-K axiom. Szmidt and Kacprzyk [14] pointed out that the two situations, one with maximal entropy when \(u_A \left( {x_i } \right) =v_A \left( {x_i } \right) \) and the other when \(u_A \left( {x_i } \right) =v_A \left( {x_i } \right) =0\), are equivalent from the point of view of the entropy measure, but completely different from the perspective of decision making. The degree of lack of information should also be considered when we deal with decision making problems. They therefore proposed a new index, \(K\left( x \right) =1-0.5\left( {E\left( x \right) +\pi \left( x \right) } \right) \), without changing the axioms of paper [9], where \(E\left( x \right) \) is the ratio-based entropy in [9]. 
The greater \(K\left( x \right) \) is, the more information the IFS presents. It is difficult to cope with practical decision making problems using the two-tuple entropy; furthermore, the constraint for the minimal entropy in the B-B axiom neglects the inherent fuzziness of an IFS. Relatively speaking, the measure \(K\left( x \right) \) is a better choice.

Example 1

Let \(A_1 =\langle 0.3,0.3\rangle \) and \(A_2 =\langle 0.1,0.1\rangle \). We can calculate that \(K_1 \left( x \right) =0.3\) and \(K_2 \left( x \right) =0.1\), namely \(A_1\) presents more information than \(A_2\). Obviously, we cannot distinguish \(A_1\) from \(A_2 \) if only fuzziness is considered.
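A small sketch of this computation. We assume the ratio-based entropy of [9] in the equivalent closed form \(E(x)=\left( \min (u,v)+\pi \right) /\left( \max (u,v)+\pi \right) \) (an assumption on the concrete form; the function names are ours):

```python
# K(x) = 1 - 0.5*(E(x) + pi(x)) for a single intuitionistic fuzzy value <u, v>,
# where E is assumed to be the ratio-based entropy in its closed form
# E(x) = (min(u, v) + pi) / (max(u, v) + pi).

def sk_entropy(u, v):
    pi = 1 - u - v                       # hesitation margin
    return (min(u, v) + pi) / (max(u, v) + pi)

def k_index(u, v):
    pi = 1 - u - v
    return 1 - 0.5 * (sk_entropy(u, v) + pi)

print(k_index(0.3, 0.3))  # approximately 0.3, as in Example 1
print(k_index(0.1, 0.1))  # approximately 0.1
```

Both values agree with Example 1, confirming that \(A_1\) carries more information than \(A_2\) despite the two having the same (maximal) fuzziness.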

In fact, there is a third view of intuitionistic fuzzy entropy which considers both fuzziness and lack of information. Lv [15] defined a measure of fuzziness (i.e., \(f_A \left( x \right) =1-\left| {u_A \left( x \right) -v_A \left( x \right) } \right| \)) and gave a new axiomatic definition of intuitionistic fuzzy entropy, which satisfies the following conditions. Firstly, the entropy attains its minimal value if and only if the IFS A is a crisp set. Secondly, the constraint for the maximal entropy is \(u_A \left( {x_i } \right) =v_A \left( {x_i } \right) =0\). Thirdly, the entropy increases as the fuzziness and the degree of hesitation (i.e., the degree of lack of information) increase (the monotonicity of intuitionistic fuzzy entropy).

The definitions of intuitionistic fuzzy entropy in papers [16-19] also take the two aspects, fuzziness and degree of hesitation, into account. Although the constraints in the four papers have some small differences, the definitions are essentially the same. The four formulas are as follows:

$$\begin{aligned} E_1 \left( A \right) =\frac{1}{n}\sum \nolimits _{i=1}^n {\left[ {1-\sqrt{\left( {1-\pi _A \left( {x_i } \right) } \right) ^{2}-u_A \left( {x_i } \right) v_A \left( {x_i } \right) }} \right] } , \end{aligned}$$
(3)
$$\begin{aligned} E_2 \left( A \right) =\frac{1}{n}\sum \nolimits _{i=1}^n {\sqrt{\frac{\left( {1-\left| {u_A \left( {x_i } \right) -v_A \left( {x_i } \right) } \right| } \right) ^{2}+\pi _A^2 \left( {x_i } \right) }{2}}} , \end{aligned}$$
(4)
$$\begin{aligned} E_3 \left( A \right) =\frac{1}{n}\sum \nolimits _{i=1}^n {\frac{1-\left| {u_A \left( {x_i } \right) -v_A \left( {x_i } \right) } \right| +\pi _A \left( {x_i } \right) }{2}} , \end{aligned}$$
(5)
$$\begin{aligned} E_4 \left( A \right) =\frac{1}{n}\sum \nolimits _{i=1}^n {\frac{1-\left| {u_A \left( {x_i } \right) -v_A \left( {x_i } \right) } \right| ^{2}+\pi _A^2 \left( {x_i } \right) }{2}} . \end{aligned}$$
(6)

Example 2

Take the above \(A_1 =\langle 0.3,0.3\rangle \) and \(A_2 =\langle 0.1,0.1\rangle \) for example. We calculate that \(E_1 \left( {A_1 } \right) =0.4804\), \(E_1 \left( {A_2 } \right) =0.8268\); \(E_2 \left( {A_1 } \right) =0.7616\), \(E_2 \left( {A_2 } \right) =0.9055\); \(E_3 \left( {A_1 } \right) =0.7\), \(E_3 \left( {A_2 } \right) =0.9\); \(E_4 \left( {A_1 } \right) =0.58\), \(E_4 \left( {A_2 } \right) =0.82\). The greater the entropy, the less information the intuitionistic fuzzy set expresses, so the results calculated through formulas (3)-(6) agree with the result calculated by K, namely the intuitionistic fuzzy entropies defined in papers [15-19] can quantify the amount of information better.
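These values can be checked with a short script implementing (3)-(6) for a single intuitionistic fuzzy value (n = 1); the function names are ours:

```python
import math

# Entropy formulas (3)-(6) for one intuitionistic fuzzy value <u, v>.

def e1(u, v):
    pi = 1 - u - v
    return 1 - math.sqrt((1 - pi) ** 2 - u * v)

def e2(u, v):
    pi = 1 - u - v
    return math.sqrt(((1 - abs(u - v)) ** 2 + pi ** 2) / 2)

def e3(u, v):
    pi = 1 - u - v
    return (1 - abs(u - v) + pi) / 2

def e4(u, v):
    pi = 1 - u - v
    return (1 - abs(u - v) ** 2 + pi ** 2) / 2

for u, v in [(0.3, 0.3), (0.1, 0.1)]:
    print(round(e1(u, v), 4), round(e2(u, v), 4),
          round(e3(u, v), 4), round(e4(u, v), 4))
```

Running it reproduces the eight values listed in Example 2.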

IVIFSs are extensions of IFSs, so, likewise, the entropy for IVIFSs should differentiate between the cases \(\left[ {v_A^- \left( {x_i } \right) ,v_A^+ \left( {x_i } \right) } \right] = \left[ {u_A^- \left( {x_i } \right) ,u_A^+ \left( {x_i } \right) } \right] \) and \(\left[ {v_A^- \left( {x_i } \right) ,v_A^+ \left( {x_i } \right) } \right] =\left[ {u_A^- \left( {x_i } \right) ,u_A^+ \left( {x_i } \right) } \right] =\left[ {0,0} \right] \) from the point of view of decision making. The former only means that the fuzziness of an IVIFS is maximal, while in the latter the fuzziness and the degree of lack of knowledge both reach their maximum. So it is more reasonable to use \(\left[ {v_A^- \left( {x_i } \right) ,v_A^+ \left( {x_i } \right) } \right] =\left[ {u_A^- \left( {x_i } \right) ,u_A^+ \left( {x_i } \right) } \right] =\left[ {0,0} \right] \) as the constraint for the maximal value when we define the entropy for IVIFSs.

3.2 The Improved Axiomatic Definition and Formula for Interval-Valued Intuitionistic Fuzzy Sets

Definition 6

\(\forall \mathop A\limits ^\sim \in IVIFS\left( X \right) \), the mapping \(E:IVIFS\left( X \right) \rightarrow \left[ {0,1} \right] \) is called an entropy if E satisfies the following conditions:

Condition 1: \(E\left( {\mathop A\limits ^\sim } \right) =0\) if and only if \(\mathop A\limits ^\sim \) is a crisp set;

Condition 2: \(E\left( {\mathop A\limits ^\sim } \right) =1\) if and only if \(\left[ {v_A^- \left( {x_i } \right) ,v_A^+ \left( {x_i } \right) } \right] =\left[ {u_A^- \left( {x_i } \right) ,u_A^+ \left( {x_i } \right) } \right] =\left[ {0,0} \right] \) for every \(x_i \in X\);

Condition 3: \(E\left( {\mathop A\limits ^\sim } \right) =E\left( {\mathop {A^{C}}\limits ^\sim } \right) \) for every \(\mathop A\limits ^\sim \in IVIFS\left( X \right) \);

Condition 4: For any \(\mathop B\limits ^\sim \in IVIFS\left( X \right) \), if \(\mathop A\limits ^\sim \subseteq \mathop B\limits ^\sim \) when \(u_B^- \left( {x_i } \right) \le v_B^- \left( {x_i } \right) ,u_B^+ \left( {x_i } \right) \le v_B^+ \left( {x_i } \right) \) for every \(x_i \in X\), or \(\mathop A\limits ^\sim \supseteq \mathop B\limits ^\sim \) when \(u_B^- \left( {x_i } \right) \ge v_B^- \left( {x_i } \right) ,u_B^+ \left( {x_i } \right) \ge v_B^+ \left( {x_i } \right) \) for every \(x_i \in X\), then \(E\left( {\mathop A\limits ^\sim } \right) \le E\left( {\mathop B\limits ^\sim } \right) \).

Theorem 1

Let \(X=\left\{ {x_1 ,x_2 ,\ldots ,x_n } \right\} \) be a universe. \(\mathop A\limits ^\sim =\left\{ \langle x_i ,\left[ {u_A^- \left( {x_i } \right) ,u_A^+ \left( {x_i } \right) } \right] ,\right. \left. \left[ {v_A^- \left( {x_i } \right) ,v_A^+ \left( {x_i } \right) } \right] \rangle \left| {x_i \in X} \right. \right\} \), the formula of the entropy is as follows:

$$\begin{aligned} E\left( {\mathop A\limits ^\sim } \right) =\frac{1}{n}\sum \nolimits _{i=1}^n {\frac{2-\left| {u_A^+ \left( {x_i } \right) -v_A^+ \left( {x_i } \right) } \right| ^{2}-\left| {u_A^- \left( {x_i } \right) -v_A^- \left( {x_i } \right) } \right| ^{2}+\left( {\pi _A^- \left( {x_i } \right) } \right) ^{2}+\left( {\pi _A^+ \left( {x_i } \right) } \right) ^{2}}{4}} \end{aligned}$$
(7)

Proof

Condition 1:

\(\mathop A\limits ^\sim \) is a crisp set, namely

$$ \left[ {u_A^- \left( {x_i } \right) ,u_A^+ \left( {x_i } \right) } \right] =\left[ {0,0} \right] , \quad \left[ {v_A^- \left( {x_i } \right) ,v_A^+ \left( {x_i } \right) } \right] =\left[ {1,1} \right] $$

or

$$ \left[ {u_A^- \left( {x_i } \right) ,u_A^+ \left( {x_i } \right) } \right] =\left[ {1,1} \right] , \left[ {v_A^- \left( {x_i } \right) ,v_A^+ \left( {x_i } \right) } \right] =\left[ {0,0} \right] . $$

Then \(E\left( {\mathop A\limits ^\sim } \right) =0\).

Conversely, if \(E\left( {\mathop A\limits ^\sim } \right) =0\), since

$$ 2+\left( {\pi _A^- \left( {x_i } \right) } \right) ^{2}+\left( {\pi _A^+ \left( {x_i } \right) } \right) ^{2}\ge 2, \left| {u_A^+ \left( {x_i } \right) -v_A^+ \left( {x_i } \right) } \right| ^{2}+\left| {u_A^- \left( {x_i } \right) -v_A^- \left( {x_i } \right) } \right| ^{2}\le 2, $$

it follows that

$$ \left[ {\pi _A^- \left( {x_i } \right) ,\pi _A^+ \left( {x_i } \right) } \right] =\left[ {0,0} \right] \ \mathrm{{and}}\ \left| {u_A^+ \left( {x_i } \right) -v_A^+ \left( {x_i } \right) } \right| =\left| {u_A^- \left( {x_i } \right) -v_A^- \left( {x_i } \right) } \right| =1, $$

namely \(\mathop A\limits ^\sim \) is a crisp set.

Condition 2: If \(\left[ {v_A^- \left( {x_i} \right) , v_A^+ \left( {x_i } \right) } \right] =\left[ {u_A^- \left( {x_i} \right) , u_A^+ \left( {x_i } \right) } \right] =\left[ {0,0} \right] \), it is obvious that \(E\left( {\mathop A\limits ^\sim } \right) =1\). Conversely, if \(E\left( {\mathop A\limits ^\sim } \right) =1\), since

$$ 2+\left( {\pi _A^- \left( {x_i } \right) } \right) ^{2}+\left( {\pi _A^+ \left( {x_i } \right) } \right) ^{2}\le 4, \left| {u_A^+ \left( {x_i } \right) -v_A^+ \left( {x_i } \right) } \right| ^{2}+\left| {u_A^- \left( {x_i } \right) -v_A^- \left( {x_i } \right) } \right| ^{2}\ge 0, $$

it follows that \(\left[ {\pi _A^- \left( {x_i } \right) ,\pi _A^+ \left( {x_i } \right) } \right] =\left[ {1,1} \right] \) and \(\left[ {v_A^- \left( {x_i} \right) , v_A^+ \left( {x_i } \right) } \right] =\left[ {u_A^- \left( {x_i} \right) , u_A^+ \left( {x_i } \right) } \right] =\left[ {0,0} \right] \).

Condition 3: For the two IVIFSs \(\mathop A\limits ^\sim \) and \(\mathop {A^{C}}\limits ^\sim \), \(\left[ {\pi _A^- \left( {x_i } \right) ,\pi _A^+ \left( {x_i } \right) } \right] =\left[ {\pi _{A^{C}}^- \left( {x_i } \right) ,\pi _{A^{C}}^+ \left( {x_i } \right) } \right] \), so Condition 3 obviously holds.

Condition 4:

$$\begin{aligned} \begin{array}{l} E\left( {\mathop A\limits ^\sim } \right) =\frac{1}{n}\sum \nolimits _{i=1}^n {\frac{2-\left| {u_A^+ \left( {x_i } \right) -v_A^+ \left( {x_i } \right) } \right| ^{2}-\left| {u_A^- \left( {x_i } \right) -v_A^- \left( {x_i } \right) } \right| ^{2}+\left( {\pi _A^- \left( {x_i } \right) } \right) ^{2}+\left( {\pi _A^+ \left( {x_i } \right) } \right) ^{2}}{4}} \\ =\frac{1}{n}\sum \nolimits _{i=1}^n {\frac{2+u_A^+ \left( {x_i } \right) \left( {v_A^+ \left( {x_i } \right) -1} \right) +v_A^+ \left( {x_i } \right) \left( {u_A^+ \left( {x_i } \right) -1} \right) +u_A^- \left( {x_i } \right) \left( {v_A^- \left( {x_i } \right) -1} \right) +v_A^- \left( {x_i } \right) \left( {u_A^- \left( {x_i } \right) -1} \right) }{2}} \\ \end{array} \end{aligned}$$
(8)

and

if \(u_B^- \left( {x_i } \right) \le v_B^- \left( {x_i } \right) ,u_B^+ \left( {x_i } \right) \le v_B^+ \left( {x_i } \right) \) and \(\mathop A\limits ^\sim \subseteq \mathop B\limits ^\sim \) for every \(x_i \in X\),

then \(v_A^- \left( {x_i } \right) \ge v_B^- \left( {x_i } \right) \ge u_B^- \left( {x_i } \right) \ge u_A^- \left( {x_i } \right) \) and \(v_A^+ \left( {x_i } \right) \ge v_B^+ \left( {x_i } \right) \ge u_B^+ \left( {x_i } \right) \ge u_A^+ \left( {x_i } \right) \),

so

$$ u_A^+ \left( {x_i } \right) \left( {v_A^+ \left( {x_i } \right) -1} \right) \le u_B^+ \left( {x_i } \right) \left( {v_B^+ \left( {x_i } \right) -1} \right) , v_A^+ \left( {x_i } \right) \left( {u_A^+ \left( {x_i } \right) -1} \right) \le v_B^+ \left( {x_i } \right) \left( {u_B^+ \left( {x_i } \right) -1} \right) , $$
$$ u_A^- \left( {x_i } \right) \left( {v_A^- \left( {x_i } \right) -1} \right) \le u_B^- \left( {x_i } \right) \left( {v_B^- \left( {x_i } \right) -1} \right) , v_A^- \left( {x_i } \right) \left( {u_A^- \left( {x_i } \right) -1} \right) \le v_B^- \left( {x_i } \right) \left( {u_B^- \left( {x_i } \right) -1} \right) , $$

then

$$ E\left( {\mathop A\limits ^\sim } \right) \le E\left( {\mathop B\limits ^\sim } \right) . $$

In the same way, when \(u_B^- \left( {x_i } \right) \ge v_B^- \left( {x_i } \right) ,u_B^+ \left( {x_i } \right) \ge v_B^+ \left( {x_i } \right) \) and \(\mathop A\limits ^\sim \supseteq \mathop B\limits ^\sim \) for every \(x_i \in X\), we can conclude that:

$$ E\left( {\mathop A\limits ^\sim } \right) \le E\left( {\mathop B\limits ^\sim } \right) . $$

Thus, Condition 4 holds.
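A minimal sketch of formula (7), assuming each IVIFS element is stored as a pair of intervals `((u_minus, u_plus), (v_minus, v_plus))`. It reproduces the boundary cases of Conditions 1-3: a crisp set gives 0, the completely unknown element \(\langle [0,0],[0,0]\rangle \) gives 1, and an IVIFS and its complement receive equal entropy.

```python
def ivif_entropy(A):
    """Improved IVIFS entropy, formula (7). Each element of A is given
    as ((u_minus, u_plus), (v_minus, v_plus))."""
    n = len(A)
    total = 0.0
    for (um, up), (vm, vp) in A:
        pim, pip = 1 - up - vp, 1 - um - vm   # hesitation interval
        total += (2 - abs(up - vp) ** 2 - abs(um - vm) ** 2
                  + pim ** 2 + pip ** 2) / 4
    return total / n
```

For example, `ivif_entropy([((1, 1), (0, 0))])` is 0 (a crisp set) and `ivif_entropy([((0, 0), (0, 0))])` is 1 (maximal fuzziness and maximal lack of knowledge).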

4 Applications in Multi-attribute Decision Making Problems

Consider a multi-attribute decision making problem with the attribute set \(C=\left\{ {c_1 ,c_2 ,\ldots ,c_n } \right\} \) and the alternative set \(A=\left\{ {a_1 ,a_2 ,\ldots ,a_m } \right\} \). Let \(W=\left\{ w_1 ,w_2 ,\ldots ,\right. \left. w_n \right\} \) be the weight set, which is unknown, where \(\sum _{j=1}^n {w_j =1} \), \(w_j \in \left[ {0,1} \right] \). \(\mathop d\limits ^\sim _{ij} =\left\{ {\langle x_{ij} ,\left[ {u_A^- \left( {x_{ij} } \right) ,u_A^+ \left( {x_{ij} } \right) } \right] ,\left[ {v_A^- \left( {x_{ij} } \right) ,v_A^+ \left( {x_{ij} } \right) } \right] \rangle } \right\} \) denotes the evaluation of how the ith alternative \(a_i\) satisfies the jth attribute \(c_j\). The steps for dealing with the problem are as follows:

Step 1: construct the decision matrix;

Step 2: calculate the entropy of every attribute, \(E_j =\sum _{i=1}^m {e_{ij}}\), by using (7), where \(e_{ij}\) is the entropy of \(\mathop d\limits ^\sim _{ij}\) and \(j=1,2,\ldots ,n\);

Step 3: calculate the weight value of each attribute by using the model as follows [16]:

$$\begin{aligned} w_j =\frac{E_j^{-1} }{\sum _{j=1}^n {E_j^{-1} } }. \end{aligned}$$
(9)

Step 4: let \(A^{{*}}=\left\{ {\langle c_j ,\left[ {1,1} \right] ,\left[ {0,0} \right] \rangle \left| {c_j \in C} \right. } \right\} \) be the positive ideal point. Calculate the weighted Hamming distance \(d_i\) between every alternative and the positive ideal point;

Step 5: sort the \(d_i\), \(i=1,2,\ldots ,m\), and obtain the best alternative. The smaller \(d_i \) is, the closer alternative \(a_i\) is to the positive ideal point, and thus the better the alternative.
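The five steps above can be sketched in Python. The decision matrix `D` below is illustrative only (the paper's Tables 1 and 2 are not reproduced here), and the function names are ours; the sketch combines entropy formula (7), weight model (9), and weighted distance (2) to the positive ideal point \(\langle [1,1],[0,0]\rangle \):

```python
def entropy_term(elem):
    """Entropy of one IVIFS value ((u-, u+), (v-, v+)), per formula (7)."""
    (um, up), (vm, vp) = elem
    pim, pip = 1 - up - vp, 1 - um - vm
    return (2 - abs(up - vp) ** 2 - abs(um - vm) ** 2
            + pim ** 2 + pip ** 2) / 4

def rank_alternatives(D):
    """D[i][j] = ((u-, u+), (v-, v+)): alternative i under attribute j."""
    m, n = len(D), len(D[0])
    # Step 2: per-attribute entropy E_j (any common scale factor cancels in (9)).
    E = [sum(entropy_term(D[i][j]) for i in range(m)) for j in range(n)]
    # Step 3: entropy weights, formula (9).
    inv = [1 / e for e in E]
    w = [x / sum(inv) for x in inv]
    # Step 4: weighted Hamming distance (2) to the ideal point <[1,1],[0,0]>.
    d = []
    for i in range(m):
        total = 0.0
        for j in range(n):
            (um, up), (vm, vp) = D[i][j]
            pim, pip = 1 - up - vp, 1 - um - vm
            total += w[j] * (abs(um - 1) + abs(up - 1)   # ideal u = [1, 1]
                             + vm + vp                    # ideal v = [0, 0]
                             + pim + pip)                 # ideal pi = [0, 0]
        d.append(total / (4 * n))
    # Step 5: the smaller d_i, the better alternative a_i.
    best = min(range(m), key=lambda i: d[i])
    return w, d, best
```

With any valid decision matrix, the returned weights sum to 1 and the best alternative is the one with the smallest weighted distance to the positive ideal point.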

Example 3

A company plans to invest in one of the following four projects: (1) \(a_1 \), an automaker; (2) \(a_2 \), a food company; (3) \(a_3\), a computer company; (4) \(a_4\), a firearms manufacturer. Three factors should be considered: (i) \(c_1 \), the risk analysis; (ii) \(c_2\), the development analysis; (iii) \(c_3\), the production environment pressure analysis [8].

Step 1: the decision matrix is as follows [8] (Table 1):

Table 1 Decision matrix 1

Step 2: calculate entropy of each attribute by using (7): \(E_1 =0.4525, E_2 =0.4650, E_3 =0.5025\);

Step 3: calculate the weight value of each attribute by using (9): \(w_1 =0.3480, w_2 =0.3386, w_3 =0.3134\);

Step 4: from (2), we get the weighted Hamming distances: \(d_1 =0.6114\), \(d_2 =0.3813\), \(d_3 =0.5196\), \(d_4 =0.4092\);

Step 5: the order of the distances is \(d_2 <d_4 <d_3 <d_1 \), so \(a_2 >a_4 >a_3 >a_1 \); the second alternative is the best.

The above result agrees with the one in [8]. The degree of lack of knowledge is taken into account while calculating the attribute weights, which further verifies that its impact should not be neglected in decision making problems.

Example 4

Consider a manufacturer selection problem. A supplier is to choose one of three manufacturers (i.e., alternatives \(a_j \left( {j=1,2,3} \right) \)). Five evaluation indexes (i.e., attributes) should be considered: quality of product (\(c_1 \)), cost of product (\(c_2 \)), time of delivery (\(c_3 \)), transportation cost (\(c_4 \)), and service attitude (\(c_5 \)) [20].

Step 1: decision matrix is as follows [20] (Table 2):

Table 2 Decision matrix 2

Step 2: from (7), the entropy of each attribute: \(E_1 =0.4833\), \(E_2 =0.5133\), \(E_3 =0.5400\), \(E_4 =0.5333\), \(E_5 =0.4800\);

Step 3: from (9), we can get each weight of attributes: \(w_1 =0.2105\), \(w_2 =0.1982\), \(w_3 =0.1884\), \(w_4 =0.1908\), \(w_5 =0.2120\);

Step 4: from (2), the weighted Hamming distances between each alternative and the positive ideal point are: \(d_1 =0.6077\), \(d_2 =0.6600\), \(d_3 =0.5278\);

Step 5: the order of the distances is \(d_2 >d_1 >d_3 \), so the order of the alternatives is \(a_3 >a_1 >a_2 \). The third alternative is the best, which agrees with the result in [20] when \(q=1\), \(q=2\), \(q\rightarrow 0\) and \(q\rightarrow \infty \).

5 Conclusion

By analyzing the research on intuitionistic fuzzy entropy, we explain that the uncertainty of an IFS should include both fuzziness and the degree of lack of information. Interval-valued intuitionistic fuzzy sets are extensions of intuitionistic fuzzy sets, so we should neglect neither fuzziness nor lack of knowledge when defining the entropy measure. This paper improves the existing axiomatic definitions and formulas of entropy for IVIFSs, and the improved entropy is applied to two decision making problems. The examples further demonstrate the correctness of the new entropy and its effectiveness in tackling practical problems.