Literature Review

Since Zadeh [1] introduced the concept of a fuzzy set, many theories and approaches concerning imprecision and vagueness have come into existence. Intuitionistic fuzzy sets (IFSs), proposed by Atanassov [2], are one of the primary generalizations of conventional fuzzy set theory. Atanassov pointed out the drawbacks of Zadeh's fuzzy set theory, and IFSs have proved extremely helpful in dealing with uncertainty and vagueness. Entropy is an important concept in the study of fuzzy set theory and its extensions to IFSs. The idea of fuzzy entropy was first introduced by Zadeh [1]. Then Yager [3], Szmidt and Kacprzyk [4] and Kaufman [5] proposed various entropies on fuzzy sets. Recently, Joshi and Kumar [50] introduced a new (RS)-norm fuzzy information measure corresponding to the (RS)-norm entropy proposed by Joshi and Kumar [49]. The notion of intuitionistic fuzzy entropy was first presented by Bustince and Burillo [6], and Szmidt and Kacprzyk [4] introduced a non-probabilistic type of intuitionistic fuzzy entropy. Later, Zhang et al. [7] and Hung [8] suggested intuitionistic fuzzy entropies based on the distance measure between IFSs. Vlachos and Sergiadis [9] generalized De Luca and Termini's [10] concept of non-probabilistic entropy to IFSs. Zeng et al. [11] proposed an intuitionistic fuzzy entropy based on similarity between IFSs. Chen and Li [12] proposed different kinds of entropies on IFSs.

Pedrycz [13], Yager [14] and Zadeh [15] proved the usefulness of fuzzy sets in tackling problems with uncertain information in many fields such as pattern recognition, decision making and logical reasoning [16]. IFSs, too, have proved useful in handling fuzzy MADM problems [16]. In most fuzzy MADM problems, the information provided by experts may not be sufficient to choose the best alternative because the facts may be fuzzy or uncertain in nature. This may be due to the subjectivity of experts; lack of knowledge, time or data about the problem domain; or, above all, lack of expertise in the relevant field. Therefore, the alternatives with uncertainty are represented by IFSs. A large amount of literature is available on solving fuzzy MADM problems using IFSs, but very little is available on solving fuzzy MADM problems with unknown attribute weights and, in particular, with partially known attribute weights.

In MADM problems, the experts must evaluate the various alternatives for different attributes and choose the most desirable alternative. Attribute weights play an important role in the decision making process, as improper assignment of attribute weights may change the ranking of alternatives. Chen and Li [12] categorized attribute weights into two classes: subjective weights and objective weights. Subjective weights are determined only according to the preferences of decision makers. The AHP method [17], the weighted least squares method [18] and the Delphi method [19] belong to this category. Objective methods determine weights by solving mathematical models without considering the decision maker's preferences. The entropy method, multi-objective programming [20, 21] and principal element analysis [20], among others, belong to this category. In many practical problems the decision maker's expertise and experience matter, but when it is difficult to obtain such reliable subjective weights, the use of objective weights is appropriate. In general, attribute weights cannot be represented by crisp numbers.

The entropy method is one of the most representative approaches to solving MADM problems with unknown or partially known weight information. Chen and Li [12] suggested several methods to solve MADM problems with unknown attribute-weight information. The traditional entropy method focuses on using the discrimination of data to determine the weights of attributes: if an attribute discriminates the data more significantly, it is given a higher weight. In contrast, we focus on using the credibility of data to determine the attribute weights through IF entropy measures. This concept is entirely different from the traditional entropy method, but our method can be combined with the traditional one. Szmidt and Kacprzyk [4] proposed a different concept for assessing IF entropy; in our research, we adopt their concept, because it can measure the whole of the missing information that may be required for certainty. The traditional entropy is based on the concept of probability and measures the discrimination of attributes when applied in MADM, whereas the meaning of IF entropy is different: it represents the credibility of the data when applied in MADM.

From the above discussion, the role of intuitionistic fuzzy sets in solving MADM problems can be easily appreciated. Depending upon the situation, there is a need to develop measures which not only satisfy the requirements but are also generalized forms of the existing measures. Apart from this, they should be quite efficient and should perform consistently. The role of parameters in any information measure is very important. For example, in any problem related to the environment, different parameters may represent different environmental factors like humidity, temperature, pressure, etc. Thus, the presence of parameters makes an information measure more suitable from the application point of view. Inspired by this, our main emphasis is on developing new information measures and MADM methods based on them to solve problems containing multiple attributes. The present communication is a sequel in this direction.

This paper is organized as follows. After the introductory section, basic concepts and definitions of the theory of fuzzy sets and intuitionistic fuzzy sets are discussed in the "Preliminaries" section. The "A New Parametric Intuitionistic Fuzzy Entropy" section is devoted to the introduction of a new intuitionistic fuzzy entropy, establishing its validity and discussing some of its mathematical properties. In the "A Comparison with Other Existing Measures" section, the performance of the proposed measure is compared with some existing measures in the literature. A new multiple attribute decision making (MADM) method is proposed by using the concept of TOPSIS in "The New MADM Method Using Proposed IF Entropy" section. In the "Numerical Examples" section, the proposed MADM method is explained with the help of numerical examples. Finally, the paper is concluded in the "Concluding Remarks" section.

Preliminaries

Now, we introduce some basic definitions and concepts regarding fuzzy sets and IFSs.

Definition 2.1

(See [1]) Let \(X= (z_1, z_2, \ldots , z_n)\) be a finite universe of discourse. A fuzzy set G is given by

$$\begin{aligned} G=\{\langle z_i,\mu _G (z_i)\rangle /z_i\in X\}, \end{aligned}$$
(1)

where \(\mu _G:X\rightarrow [0,1]\) is the membership function of G. The number \(\mu _G (z_i)\) defines the belongingness degree of \(z_i\in X\) in G.

Definition 2.2

A fuzzy set \(\tilde{G}\) is called a sharpened version of fuzzy set G if it satisfies the following conditions:

$$\begin{aligned} \mu _{\tilde{G}} (z_i)\le \mu _G (z_i),\qquad \mathrm { if} \,\mu _G (z_i)\le 0.5; \forall i \end{aligned}$$

and

$$\begin{aligned} \mu _{\tilde{G}}(z_i)\ge \mu _G (z_i),\qquad \mathrm { if} \,\mu _G (z_i)\ge 0.5; \forall i. \end{aligned}$$

De Luca and Termini [10] axiomatized fuzzy entropy, and the axioms proposed by them are widely acclaimed as a criterion for defining any fuzzy entropy. In fuzzy set theory, fuzzy entropy is a measure of fuzziness which represents the average amount of difficulty or ambiguity in deciding whether a particular element belongs to the set or not.

Definition 2.3

(See [10]). A measure of fuzziness in a fuzzy set should satisfy at least the following axioms:

  1. P1

    (Sharpness) H(G) is minimum if and only if G is a crisp set, i.e., \(\mu _G (z_i)=0\) or 1 for all \(z_i\in X\).

  2. P2

    (Maximality) H(G) is maximum if and only if G is the most fuzzy set, i.e., \(\mu _G (z_i)=0.5\) for all \(z_i \in X\).

  3. P3

    (Resolution) \(H (G)\ge H(\tilde{G})\), where \(\tilde{G}\) is the sharpened version of G.

  4. P4

    (Symmetry) \(H (G)=H(G^c)\), where \( G^c\) is the complement of G, i.e., \(\mu _{G^c} (z_i)=1-\mu _G (z_i)\) for all \(z_i\in X\).

Since \(\mu _G (z_i)\) and \((1-\mu _G (z_i))\) represent the same degree of fuzziness, De Luca and Termini [10] defined the fuzzy entropy of a fuzzy set G as:

$$\begin{aligned} H (G)=-\frac{1}{n}\sum _{i=1}^n\left[ \mu _G (z_i)\log (\mu _G (z_i))+(1-\mu _G (z_i))\log (1-\mu _G (z_i))\right] . \end{aligned}$$
(2)

Later on, Bhandari and Pal [22] made a survey of information measures on fuzzy sets. Corresponding to Renyi’s entropy [23], they introduced a new measure of fuzzy entropy as:

$$\begin{aligned} H_\alpha (G)=\frac{1}{n(1-\alpha )}\sum _{i=1}^n\log \left[ \mu _G (z_i)^\alpha +(1-\mu _G (z_i))^\alpha \right] ;\quad \alpha \ne 1,\alpha >0. \end{aligned}$$
(3)
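For concreteness, the following minimal Python sketch evaluates the fuzzy entropies (2) and (3), assuming a fuzzy set is represented simply as a list of membership grades; the function names, the convention \(0\log 0=0\) and the use of natural logarithms are ours.

```python
import math

def xlogx(t):
    return t * math.log(t) if t > 0 else 0.0  # convention: 0 * log 0 = 0

def deluca_termini(mu):
    """H(G) of Eq. (2) for a fuzzy set given by its membership grades mu."""
    return -sum(xlogx(m) + xlogx(1 - m) for m in mu) / len(mu)

def bhandari_pal(mu, a):
    """H_alpha(G) of Eq. (3); requires a > 0 and a != 1."""
    return sum(math.log(m ** a + (1 - m) ** a) for m in mu) / (len(mu) * (1 - a))

G = [0.1, 0.5, 0.9]
print(deluca_termini(G), bhandari_pal(G, a=2.0))
```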

Zadeh’s [1] idea of fuzzy sets was extended to intuitionistic fuzzy sets by Atanassov [2] as:

Definition 2.4

(See [2]) An intuitionistic fuzzy set G in a finite universe of discourse \(X= (z_1, z_2, \ldots , z_n)\) is given by

$$\begin{aligned} G=\big \{\langle z_i, \mu _G (z_i), \nu _G (z_i)\rangle /z_i \in X\big \}, \end{aligned}$$
(4)

where \(\mu _G: X\rightarrow [0,1]\) and \(\nu _G: X\rightarrow [0,1]\), satisfying \(0\le \mu _G (z_i)+\nu _G (z_i)\le 1\) for all \(z_i\in X\). Here \(\mu _G (z_i)\) and \(\nu _G (z_i)\) denote the degree of membership and the degree of non-membership, respectively, of \(z_i\in X\) to the set G. For each IFS G in X, \(\pi _G (z_i)=1-\mu _G (z_i)-\nu _G (z_i)\), \(z_i\in X\), represents the hesitancy degree of \(z_i\in X\) and is also called the intuitionistic index. Obviously, if \(\pi _G (z_i)=0\) for all \(z_i\in X\), then the IFS becomes a fuzzy set. Thus, fuzzy sets are particular cases of IFSs.

Definition 2.5

(See [24]). Let IFS(X) denote the family of all IFSs in the universe X and let \(G, H\in IFS (X)\) be given by

$$\begin{aligned} G= & {} \{\langle z_i, \mu _G (z_i), \nu _G (z_i)\rangle /z_i \in X\},\nonumber \\ H= & {} \{\langle z_i, \mu _H (z_i), \nu _H (z_i)\rangle /z_i \in X\}. \end{aligned}$$
(5)

Then the usual set operations and relations are defined as follows:

(i):

\(G\subseteq H\) if and only if \(\mu _G (z_i)\le \mu _H (z_i)\) and \(\nu _G (z_i)\ge \nu _H (z_i)\) for all \(z_i\in X\);

(ii):

\(G=H\) if and only if \(G\subseteq H\) and \(H\subseteq G\);

(iii):

\(G^c=\{\langle z_i, \nu _G (z_i), \mu _G (z_i)\rangle /z_i\in X\}\);

(iv):

\(G\cap H=\{\langle z_i, \mu _G (z_i)\wedge \mu _H (z_i), \nu _G (z_i)\vee \nu _H (z_i)\rangle /z_i\in X\}\);

(v):

\(G\cup H=\{\langle z_i, \mu _G (z_i)\vee \mu _H (z_i), \nu _G (z_i)\wedge \nu _H (z_i)\rangle /z_i\in X\}\).

Szmidt and Kacprzyk [25] first formulated the axioms for an intuitionistic fuzzy entropy measure as an extension of De Luca and Termini's [10] axioms for fuzzy sets. The set of axioms for an intuitionistic fuzzy entropy measure is:

Definition 2.6

(See [25]). An entropy on IFS(X) is a real-valued function \(E:IFS (X)\rightarrow [0,1]\), which satisfies the following axioms:

(IFS1):

\(E(G)=0\) if and only if G is a crisp set, i.e., \(\mu _G (z_i)=0\), \(\nu _G (z_i)=1\) or \(\mu _G (z_i)=1\), \(\nu _G (z_i)=0\) for all \(z_i\in X\).

(IFS2):

\(E(G)=1\) if and only if \(\mu _G (z_i)=\nu _G (z_i)\) for all \(z_i\in X\).

(IFS3):

\(E(G)\le E(H)\) if and only if \(G\subseteq H\), i.e., if \(\mu _G (z_i)\le \mu _H (z_i)\) and \(\nu _G (z_i)\ge \nu _H (z_i)\) for \(\mu _H (z_i)\le \nu _H (z_i)\), or if \(\mu _G (z_i)\ge \mu _H (z_i)\) and \(\nu _G (z_i)\le \nu _H (z_i)\), for \(\mu _H (z_i)\ge \nu _H (z_i)\) for any \(z_i\in X\).

(IFS4):

\(E(G)=E(G^c)\).

Definition 2.7

(See [26]). Let \( G=\{\langle z_i, \mu _G (z_i), \nu _G (z_i)\rangle / z_i\in X\}\) and \(H=\{\langle z_i, \mu _H (z_i), \nu _H (z_i)\rangle / z_i\in X\}\) be two IFSs, where the weight of \(z_i\) is \(u_i\). Then the weighted Hamming distance measure between G and H is defined as follows:

$$\begin{aligned} s (G, H)=\frac{1}{2}\sum _{i=1}^n u_i (|\mu _G (z_i)-\mu _H (z_i)|+|\nu _G (z_i)-\nu _H (z_i)|+|\pi _G (z_i)-\pi _H (z_i)|). \end{aligned}$$
(6)
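As an illustration, the following sketch computes this weighted Hamming distance, assuming each IFS is given as a list of \((\mu , \nu )\) pairs over the same universe; the function name and the data layout are our own conventions.

```python
def weighted_hamming(G, H, u):
    """s(G, H) of Eq. (6); G, H are lists of (mu, nu) pairs, u the weights u_i."""
    total = 0.0
    for (mg, ng), (mh, nh), ui in zip(G, H, u):
        pg, ph = 1 - mg - ng, 1 - mh - nh  # hesitancy degrees pi
        total += ui * (abs(mg - mh) + abs(ng - nh) + abs(pg - ph))
    return total / 2

# Example on a two-element universe with equal weights
print(weighted_hamming([(0.3, 0.5), (0.6, 0.2)],
                       [(0.4, 0.4), (0.5, 0.3)], [0.5, 0.5]))
```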

Throughout this paper, IFS(X) and FS(X) will represent the set of all intuitionistic fuzzy sets and set of all fuzzy sets respectively.

With these ideas in mind, we now introduce a new parametric intuitionistic fuzzy entropy on IFSs with \(\alpha \) and \(\beta \) as parameters.

A New Parametric Intuitionistic Fuzzy Entropy

In the following, we recall the entropy \(H_\alpha ^\beta (A)\) of a probability distribution \(A=\{p_1, p_2, \ldots , p_n\}\) with \(\sum _{i=1}^n p_i=1\), given by

$$\begin{aligned} H_\alpha ^\beta (A)=\frac{1}{2^{1-\alpha }-2^{1-\beta }}\left[ \left( \sum _{i=1}^np_i^\alpha \right) -\left( \sum _{i=1}^np_i^\beta \right) \right] \end{aligned}$$
(7)

where \(\alpha >1\) and \(0<\beta <1\), or \(0<\alpha <1\) and \(\beta >1\); this entropy was studied by Sharma and Taneja [27].

Corresponding to (7), we propose the following family of intuitionistic fuzzy entropies of an IFS G:

$$\begin{aligned} E_\alpha ^\beta (G)= & {} \frac{1}{n\left( 2^{1-\alpha }-2^{1-\beta }\right) }\nonumber \\&\times \sum _{i=1}^n\Big [\Big ((\mu _G (z_i)^\alpha +\nu _G (z_i)^\alpha )\times (\mu _G (z_i)+\nu _G (z_i))^{1-\alpha }+2^{1-\alpha }\pi _G (z_i)\Big )\nonumber \\&\quad -\left( (\mu _G (z_i)^\beta +\nu _G (z_i)^\beta )\times (\mu _G (z_i)+\nu _G (z_i))^{1-\beta }+2^{1-\beta }\pi _G (z_i)\right) \Big ];\nonumber \\&\quad \mathrm {if}\, \alpha ,\beta>0;\ \mathrm {either}\ \alpha>1,\beta<1\ \mathrm {or}\ \alpha <1, \beta >1, \end{aligned}$$
(8)

Particular Cases

  1. 1.

    If \(\beta =1\), then (8) becomes

    $$\begin{aligned} E_\alpha ^1 (G)= \frac{1}{n\left( 2^{1-\alpha }-1\right) }\sum _{i=1}^n\Big [\big ((\mu _G (z_i)^\alpha +\nu _G (z_i)^\alpha )\times (\mu _G (z_i)+\nu _G (z_i))^{1-\alpha }+2^{1-\alpha }\pi _G (z_i)\big )-1\Big ], \end{aligned}$$
    (9)

    which is an intuitionistic fuzzy entropy of order-\(\alpha \) studied by Joshi and Kumar [28].

  2. 2.

    If \(\alpha =1, \beta \rightarrow 1\) or \(\beta =1, \alpha \rightarrow 1\), then (8) becomes

    $$\begin{aligned} E_\alpha ^\beta (G)= & {} -\frac{1}{n}\sum _{i=1}^n\Big [\mu _G (z_i) \log (\mu _G (z_i))+\nu _G (z_i) \log (\nu _G (z_i))\nonumber \\&-\, (1-\pi _G (z_i))\log (1-\pi _G (z_i))-\pi _G (z_i)\Big ], \end{aligned}$$
    (10)

    which was studied by Vlachos and Sergiadis [9].

  3. 3.

    If \(\pi _G (z_i)=0\), (8) becomes an ordinary fuzzy entropy as:

    $$\begin{aligned} E_\alpha ^\beta (G)&=\frac{1}{n\left( 2^{1-\alpha }-2^{1-\beta }\right) }\sum _{i=1}^n\Big \{\Big (\mu _G (z_i)^\alpha +(1-\mu _G (z_i))^\alpha \Big )\nonumber \\&\quad -\left( \mu _G (z_i)^\beta +(1-\mu _G (z_i))^\beta \right) \Big \}. \end{aligned}$$
    (11)
  4. 4.

    If \(\pi _G (z_i)=0\) and \(\beta =1\), then (8) becomes a parametric fuzzy entropy with \(\alpha \) as a parameter:

    $$\begin{aligned} E_\alpha ^1 (G)=\frac{1}{n \left( 2^{1-\alpha }-1\right) }\sum _{i=1}^n\left\{ \left( \mu _G (z_i)^\alpha + (1-\mu _G (z_i))^\alpha \right) -1\right\} , \end{aligned}$$
    (12)

    which is an entropy slightly different from that of Hooda [29].
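Before validating the measure, we give a minimal numerical sketch of (8), again assuming an IFS is a list of \((\mu , \nu )\) pairs; the function name, the parameter choice in the checks and the handling of the degenerate case \(\mu =\nu =0\) (where the product term vanishes in the limit) are ours.

```python
def entropy_ab(G, alpha, beta):
    """E_alpha^beta(G) of Eq. (8); needs alpha>1, 0<beta<1 or 0<alpha<1, beta>1."""
    c = 2 ** (1 - alpha) - 2 ** (1 - beta)
    total = 0.0
    for mu, nu in G:
        pi = 1 - mu - nu

        def term(a):
            if mu + nu == 0:  # limit case: the product term tends to 0
                return 2 ** (1 - a) * pi
            return (mu ** a + nu ** a) * (mu + nu) ** (1 - a) + 2 ** (1 - a) * pi

        total += term(alpha) - term(beta)
    return total / (len(G) * c)

# Sanity checks against Theorem 3.1 below: crisp sets give 0, mu = nu gives 1
print(entropy_ab([(1.0, 0.0), (0.0, 1.0)], alpha=2.0, beta=0.5))   # -> 0.0
print(entropy_ab([(0.3, 0.3), (0.1, 0.1)], alpha=2.0, beta=0.5))   # -> 1.0
```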

A natural question now arises: is the proposed entropy measure reasonable? We answer this question in the following theorem by showing that the proposed entropy measure obeys the axioms (IFS1)–(IFS4).

Theorem 3.1

The measure \(E_\alpha ^\beta (G)\) is a valid entropy measure for IFSs; i.e., it satisfies all the axioms given in Definition 2.6.

Proof

(IFS1) Let G be a crisp set, i.e., \(\mu _G (z_i)=0\), \(\nu _G (z_i)=1\) or \(\mu _G (z_i)=1\), \(\nu _G (z_i)=0\) for all \(z_i\in X\). Then from (8), we have \(E_\alpha ^\beta (G)=0\).

Conversely, if \(E_\alpha ^\beta (G)=0\), then

$$\begin{aligned}&\frac{1}{n\left( 2^{1-\alpha }-2^{1-\beta }\right) }\sum _{i=1}^n\Big [\Big ((\mu _G (z_i)^\alpha +\nu _G (z_i)^\alpha )\times (\mu _G (z_i)+\nu _G (z_i))^{1-\alpha }+2^{1-\alpha }\pi _G (z_i)\Big )\nonumber \\&\quad -\,\left( (\mu _G (z_i)^\beta +\nu _G (z_i)^\beta )\times (\mu _G (z_i)+\nu _G (z_i))^{1-\beta }+2^{1-\beta }\pi _G (z_i)\right) \Big ]=0. \end{aligned}$$
(13)

Since \(\alpha \ne \beta \), this implies

$$\begin{aligned}&\left( (\mu _G (z_i)^\alpha +\nu _G (z_i)^\alpha )\times (\mu _G (z_i)+\nu _G (z_i))^{1-\alpha }+2^{1-\alpha }\pi _G (z_i)\right) \nonumber \\&\quad =\left( (\mu _G (z_i)^\beta +\nu _G (z_i)^\beta )\times (\mu _G (z_i)+\nu _G (z_i))^{1-\beta }+2^{1-\beta }\pi _G (z_i)\right) . \end{aligned}$$
(14)

Therefore (14) will hold only if \(\mu _G (z_i)=0, \nu _G (z_i)=1\) or \(\mu _G (z_i)=1, \nu _G (z_i)=0\) for all \(z_i\in X\).

Hence \(E_\alpha ^\beta (G)=0\) if and only if G is a crisp set. This proves (IFS1).

(IFS2) \(\displaystyle E_\alpha ^\beta (G)=1\) if and only if \(\mu _G (z_i)=\nu _G (z_i)\quad \forall z_i\in X.\)

First, let \(\mu _G (z_i)=\nu _G (z_i)\) for all \(z_i\in X\). Then (8) gives

$$\begin{aligned} E_\alpha ^\beta (G)=\frac{1}{n\left( 2^{1-\alpha }-2^{1-\beta }\right) }\sum _{i=1}^n \left( 2^{1-\alpha }-2^{1-\beta }\right) =1. \end{aligned}$$

Conversely, let \(E_\alpha ^\beta (G)=1\),

$$\begin{aligned}&\Rightarrow \Big ((\mu _G (z_i)^\alpha +\nu _G (z_i)^\alpha )\times (\mu _G (z_i)+\nu _G (z_i))^{1-\alpha }+2^{1-\alpha }\pi _G (z_i)\Big ) \nonumber \\&\quad -\left( (\mu _G (z_i)^\beta +\nu _G (z_i)^\beta )\times (\mu _G (z_i)+\nu _G (z_i))^{1-\beta }+2^{1-\beta }\pi _G (z_i)\right) =\left( 2^{1-\alpha }-2^{1-\beta }\right) , \end{aligned}$$
(15)

which implies

$$\begin{aligned} \left( \mu _G (z_i)^\alpha +\nu _G (z_i)^\alpha \right) \times (\mu _G (z_i)+\nu _G (z_i))^{1-\alpha }+2^{1-\alpha }\pi _G (z_i)=2^{1-\alpha } \end{aligned}$$
(16)

and

$$\begin{aligned} \left( \mu _G (z_i)^\beta +\nu _G (z_i)^\beta \right) \times (\mu _G (z_i)+\nu _G (z_i))^{1-\beta }+2^{1-\beta }\pi _G (z_i)=2^{1-\beta }. \end{aligned}$$
(17)

From (16), we get

$$\begin{aligned} (\mu _G (z_i)+\nu _G (z_i))^{1-\alpha }\times \left[ \frac{\mu _G (z_i)^\alpha +\nu _G (z_i)^\alpha }{2}-\left( \frac{\mu _G (z_i)+\nu _G (z_i)}{2}\right) ^\alpha \right] =0. \end{aligned}$$
(18)

Therefore (18) will hold only if either

$$\begin{aligned} \mu _G (z_i)+\nu _G (z_i)=0 \Rightarrow \mu _G (z_i)=\nu _G (z_i)=0\quad \forall z_i\in X, \end{aligned}$$
(19)

or

$$\begin{aligned} \left[ \frac{\mu _G (z_i)^\alpha +\nu _G (z_i)^\alpha }{2}-\left( \frac{\mu _G (z_i)+\nu _G (z_i)}{2}\right) ^\alpha \right] =0. \end{aligned}$$
(20)

Now consider the following function:

$$\begin{aligned} f(t)=t^z\qquad \mathrm {where}\quad t\in [0, 1]. \end{aligned}$$
(21)

On differentiating with respect to t, (21) gives

$$\begin{aligned} f'(t)= & {} zt^{z-1},\nonumber \\ f''(t)= & {} z(z-1)t^{z-2}. \end{aligned}$$
(22)

Since \(f''(t)>0\) for \(z>1\) and \(f''(t)<0\) for \(z<1\), f(t) is convex for \(z>1\) and concave for \(z<1\). Therefore, for any two points \(t_1\) and \(t_2\) in [0, 1], the following inequalities hold:

$$\begin{aligned} \frac{f(t_1)+f(t_2)}{2}-f\left( \frac{t_1+t_2}{2}\right)\ge & {} 0\quad \mathrm {for}\quad z>1, \end{aligned}$$
(23)
$$\begin{aligned} \frac{f(t_1)+f(t_2)}{2}-f\left( \frac{t_1+t_2}{2}\right)\le & {} 0\quad \mathrm {for}\quad z<1, \end{aligned}$$
(24)

with equality only for \(t_1=t_2\). Therefore, from (21)–(24), we conclude that (20) will hold only if \(\mu _G (z_i)=\nu _G (z_i)\) for all \(z_i\in X\). Similarly, we may prove the same for (17).

(IFS3). \(E_\alpha ^\beta (G)\le E_\alpha ^\beta (H)\) if and only if \(G\subseteq H\), i.e., if \(\mu _G (z_i)\le \mu _H (z_i)\) and \(\nu _G (z_i)\ge \nu _H (z_i)\) for \(\mu _H (z_i)\le \nu _H (z_i)\), or if \(\mu _G (z_i)\ge \mu _H (z_i)\) and \(\nu _G (z_i)\le \nu _H (z_i)\), for \(\mu _H (z_i)\ge \nu _H (z_i)\) for any \(z_i\in X\).

To prove (8) satisfies (IFS3), it suffices to prove that the function

$$\begin{aligned} f(x, y)=\frac{1}{\left( 2^{1-\alpha }-2^{1-\beta }\right) }\Bigg \{\left[ (x^\alpha +y^\alpha )(x+y)^{1-\alpha }+2^{1-\alpha }(1-x-y)\right] \nonumber \\ -\left[ (x^\beta +y^\beta )(x+y)^{1-\beta }+2^{1-\beta }(1-x-y)\right] \Bigg \}, \end{aligned}$$
(25)

where \(x,y\in [0,1]\), is increasing with respect to x (for \(x\le y\)) and decreasing with respect to y (for \(x\ge y\)). Taking partial derivatives of f with respect to x and y, respectively, we get

$$\begin{aligned} \frac{\partial f(x,y)}{\partial x}&= \left[ \frac{(1-\alpha )(x^\alpha +y^\alpha )(x+y)^{-\alpha }+\alpha (x+y)^{1-\alpha }x^{\alpha -1}-2^{1-\alpha }}{ \left( 2^{1-\alpha }-2^{1-\beta }\right) }\right] \nonumber \\&\ \quad -\left[ \frac{(1-\beta )(x^\beta +y^\beta )(x+y)^{-\beta }+\beta (x+y)^{1-\beta }x^{\beta -1}-2^{1-\beta }}{ \left( 2^{1-\alpha }-2^{1-\beta }\right) }\right] \end{aligned}$$
(26)

and

$$\begin{aligned} \frac{\partial f(x,y)}{\partial y}&= \left[ \frac{(1-\alpha )(x^\alpha +y^\alpha )(x+y)^{-\alpha }+\alpha (x+y)^{1-\alpha }y^{\alpha -1}-2^{1-\alpha }}{ \left( 2^{1-\alpha }-2^{1-\beta }\right) }\right] \nonumber \\&\ \quad -\left[ \frac{(1-\beta )(x^\beta +y^\beta )(x+y)^{-\beta }+\beta (x+y)^{1-\beta }y^{\beta -1}-2^{1-\beta }}{ \left( 2^{1-\alpha }-2^{1-\beta }\right) }\right] \end{aligned}$$
(27)

For critical points of f, we put \(\partial f(x,y)/\partial x=0\) and \(\partial f(x,y)/\partial y=0\). This gives

$$\begin{aligned} x=y. \end{aligned}$$
(28)

From (26), (27) and (28), we get

$$\begin{aligned} \frac{\partial f(x,y)}{\partial x}\ge 0\quad \mathrm {when}\, x\le y, \alpha<1, \beta>1 \, \mathrm {and \, also\, for}\,\alpha >1, \beta <1, \end{aligned}$$
(29)
$$\begin{aligned} \frac{\partial f(x,y)}{\partial y}\le 0\quad \mathrm {when}\quad x\ge y, \alpha<1, \beta>1\, \mathrm {and\, also\, for}\,\alpha >1, \beta <1, \end{aligned}$$
(30)

for all \(x,y\in [0, 1]\). Thus f(x, y) is an increasing function of x and a decreasing function of y.

Now, let us consider the two sets \(G, H\in IFS (X)\) such that \(G\subseteq H\). Let the finite universe of discourse \(X=\{z_1, z_2, \ldots , z_n\}\) be partitioned into two disjoint sets \(X_1\) and \(X_2\) with \(X=X_1\cup X_2\).

Further, let us suppose that all \(z_i\in X_1\) satisfy the condition

$$\begin{aligned} \mu _G (z_i)\le \mu _H (z_i)\le \nu _H (z_i)\le \nu _G (z_i), \end{aligned}$$
(31)

and for all \(z_i\in X_2\),

$$\begin{aligned} \mu _G (z_i)\ge \mu _H (z_i)\ge \nu _H (z_i)\ge \nu _G (z_i). \end{aligned}$$
(32)

Thus, from the monotonicity of the function f and (8), we obtain that \(E_\alpha ^\beta (G)\le E_\alpha ^\beta (H)\) when \(G\subseteq H\).

(IFS4) \(E_\alpha ^\beta (G)=E_\alpha ^\beta (G^c)\).

We know that \(G^c=\{\langle z_i, \nu _G (z_i), \mu _G (z_i)\rangle /z_i\in X\}\) for \(z_i\in X\) and

$$\begin{aligned} \mu _{G^c} (z_i)=\nu _G (z_i),\qquad \qquad \nu _{G^c} (z_i)=\mu _G (z_i). \end{aligned}$$
(33)

Thus from (8), we have

$$\begin{aligned} E_\alpha ^\beta (G)=E_\alpha ^\beta (G^c). \end{aligned}$$
(34)

Therefore, \(E_\alpha ^\beta (G)\) is a valid intuitionistic fuzzy entropy measure. \(\square \)

The proposed measure (8) also satisfies the following additional properties.

Theorem 3.2

Let G and H be two intuitionistic fuzzy sets defined in \(X=\{z_1, z_2,\ldots , z_n\}\), where \(G=\{\langle z_i, \mu _G (z_i), \nu _G (z_i)\rangle /z_i\in X\}\), \(H=\{\langle z_i, \mu _H (z_i), \nu _H (z_i)\rangle /z_i\in X\}\), such that for all \(z_i\in X\) either \(G\subseteq H\) or \(H\subseteq G\); then

$$\begin{aligned} E_\alpha ^\beta (G\cup H)+E_\alpha ^\beta (G\cap H)=E_\alpha ^\beta (G)+E_\alpha ^\beta (H). \end{aligned}$$
(35)

Proof

Let us separate X into two parts \(X_1\) and \(X_2\), such that

$$\begin{aligned} X_1=\{z_i\in X:G\subseteq H\},\qquad \qquad X_2=\{z_i\in X: G\supseteq H\}. \end{aligned}$$
(36)

This implies that for each \(z_i\in X_1\),

$$\begin{aligned} \mu _G (z_i)\le \mu _H (z_i),\qquad \qquad \nu _G (z_i)\ge \nu _H (z_i), \end{aligned}$$
(37)

and for each \(z_i\in X_2\),

$$\begin{aligned} \mu _G (z_i)\ge \mu _H (z_i),\qquad \qquad \nu _G (z_i)\le \nu _H (z_i). \end{aligned}$$
(38)

From (8), we have,

$$\begin{aligned} E_\alpha ^\beta (G\cup H)= & {} \frac{1}{n\left( 2^{1-\alpha }-2^{1-\beta }\right) }\sum _{i=1}^n\Bigg \{\Big [(\mu _{G\cup H} (z_i)^\alpha +\nu _{G\cup H} (z_i)^\alpha )\times (\mu _{G\cup H} (z_i)\nonumber \\&+\,\nu _{G\cup H} (z_i))^{1-\alpha }+2^{1-\alpha }\pi _{G\cup H} (z_i)\Big ]\nonumber \\&-\,\Big [(\mu _{G\cup H} (z_i)^\beta +\nu _{G\cup H} (z_i)^\beta )\times (\mu _{G\cup H} (z_i)+\nu _{G\cup H} (z_i))^{1-\beta }+2^{1-\beta }\pi _{G\cup H} (z_i)\Big ]\Bigg \};\nonumber \\&=\frac{1}{n\left( 2^{1-\alpha }-2^{1-\beta }\right) }\Bigg \{\sum _{X_1}\Bigg (\Big [(\mu _H (z_i)^\alpha +\nu _H (z_i)^\alpha )\times (\mu _H (z_i)\nonumber \\&+\,\nu _H (z_i))^{1-\alpha }+2^{1-\alpha }\pi _H (z_i)\Big ]\nonumber \\&-\,\Big [(\mu _H (z_i)^\beta +\nu _H (z_i)^\beta )\times (\mu _H (z_i)+\nu _H (z_i))^{1-\beta }+2^{1-\beta }\pi _H (z_i)\Big ]\Bigg )\nonumber \\&+\,\sum _{X_2}\Bigg (\Big [(\mu _G (z_i)^\alpha +\nu _G (z_i)^\alpha )\times (\mu _G (z_i)+\nu _G (z_i))^{1-\alpha }+2^{1-\alpha }\pi _G (z_i)\Big ]\nonumber \\&-\,\Big [(\mu _G (z_i)^\beta +\nu _G (z_i)^\beta )\times (\mu _G (z_i)+\nu _G (z_i))^{1-\beta }+2^{1-\beta }\pi _G (z_i)\Big ]\Bigg )\Bigg \}. \end{aligned}$$
(39)

Similarly,

$$\begin{aligned}&E_\alpha ^\beta (G\cap H)\nonumber \\&\quad =\frac{1}{n\left( 2^{1-\alpha }-2^{1-\beta }\right) }\Bigg \{\sum _{X_1}\Bigg (\Big [(\mu _G (z_i)^\alpha +\nu _G (z_i)^\alpha )\times (\mu _G (z_i)\nonumber \\&\qquad +\,\nu _G (z_i))^{1-\alpha }+2^{1-\alpha }\pi _G (z_i)\Big ]\nonumber \\&\qquad -\,\Big [(\mu _G (z_i)^\beta +\nu _G (z_i)^\beta )\times (\mu _G (z_i)+\nu _G (z_i))^{1-\beta }+2^{1-\beta }\pi _G (z_i)\Big ]\Bigg )\nonumber \\&\qquad +\,\sum _{X_2}\Bigg (\Big [(\mu _H (z_i)^\alpha +\nu _H (z_i)^\alpha )\times (\mu _H (z_i)+\nu _H (z_i))^{1-\alpha }+2^{1-\alpha }\pi _H (z_i)\Big ]\nonumber \\&\qquad -\,\Big [(\mu _H (z_i)^\beta +\nu _H (z_i)^\beta )\times (\mu _H (z_i)+\nu _H (z_i))^{1-\beta }+2^{1-\beta }\pi _H (z_i)\Big ]\Bigg )\Bigg \}. \end{aligned}$$
(40)

From (39) and (40),

$$\begin{aligned} E_\alpha ^\beta (G\cup H)+E_\alpha ^\beta (G\cap H)=E_\alpha ^\beta (G)+E_\alpha ^\beta (H). \end{aligned}$$
(41)

This proves the theorem.

Corollary For any \(G\in IFS (X)\) and its complement \(G^c\),

$$\begin{aligned} E_\alpha ^\beta (G)=E_\alpha ^\beta (G^c)=E_\alpha ^\beta (G\cup G^c)=E_\alpha ^\beta (G\cap G^c). \end{aligned}$$
(42)

\(\square \)

Theorem 3.3

The measure \(E_\alpha ^\beta (G)\) attains its maximum value when the set is the most intuitionistic fuzzy set and its minimum value when the set is a crisp set. Moreover, these extreme values do not depend on \(\alpha \) and \(\beta \).

Proof

It has already been proved in (IFS1) and (IFS2) of Theorem 3.1 that \(E_\alpha ^\beta (G)\) attains its maximum value if and only if G is the most intuitionistic fuzzy set, i.e., \(\mu _G (z_i)=\nu _G (z_i)\) for all \(z_i\in X\), and its minimum value when G is a crisp set, i.e., \(\mu _G (z_i)=1, \nu _G (z_i)=0\) or \(\mu _G (z_i)=0, \nu _G (z_i)=1\). Therefore, it is sufficient to prove that the minimum and maximum values are free of \(\alpha \) and \(\beta \).

Suppose G is the most intuitionistic fuzzy set, i.e., \(\mu _G (z_i)=\nu _G (z_i)\) for all \(z_i\in X\). Then from (8),

$$\begin{aligned} E_\alpha ^\beta (G)= & {} \frac{1}{n\left( 2^{1-\alpha }-2^{1-\beta }\right) }\sum _{i=1}^n\Bigg \{\Big [(\mu _G (z_i)^\alpha +\nu _G (z_i)^\alpha )\times (\mu _G (z_i)\nonumber \\&+\,\nu _G (z_i))^{1-\alpha }+2^{1-\alpha }\pi _G (z_i)\Big ]\nonumber \\&-\,\Big [(\mu _G (z_i)^\beta +\nu _G (z_i)^\beta )\times (\mu _G (z_i)+\nu _G (z_i))^{1-\beta }+2^{1-\beta }\pi _G (z_i)\Big ]\Bigg \}; \end{aligned}$$
(43)
$$\begin{aligned}&\Rightarrow \frac{1}{n\left( 2^{1-\alpha }-2^{1-\beta }\right) }\sum _{i=1}^n\left( 2^{1-\alpha }-2^{1-\beta }\right) =1, \end{aligned}$$
(44)

which does not contain \(\alpha \) and \(\beta \).

On the other hand, if G is a crisp set, i.e., \(\mu _G (z_i)=1\) and \(\nu _G (z_i)=0\), or \(\mu _G (z_i)=0\) and \(\nu _G (z_i)=1\), for all \(z_i\in X\), then \(E_\alpha ^\beta (G)=0\) for all admissible values of \(\alpha \) and \(\beta \). This proves the theorem. \(\square \)

Now, we demonstrate the performance of the proposed \((\alpha , \beta )\)-norm intuitionistic fuzzy entropy by comparing it with other existing measures of intuitionistic fuzzy entropy in the literature.

A Comparison with Other Existing Measures

Let \(G=\{\langle z_i,\mu _G (z_i), \nu _G (z_i)\rangle /z_i\in X\}\) be an intuitionistic fuzzy set in \(X=\{z_1, z_2, \ldots , z_n\}\). For any positive real number n, [30] defined an intuitionistic fuzzy set \(G^n\) as follows:

$$\begin{aligned} G^n=\{\langle z_i, [\mu _G (z_i)]^n, 1-[1-\nu _G (z_i)]^n\rangle /z_i\in X\}. \end{aligned}$$
(45)

Consider the intuitionistic fuzzy set G in \(X=\{6, 7, 8, 9, 10\}\), defined in [30] as:

$$\begin{aligned} G=\{(6, 0.1, 0.8), (7, 0.3, 0.5), (8, 0.6, 0.2), (9, 0.9, 0.0), (10, 1.0, 0.0)\}. \end{aligned}$$
(46)

Based on the characterization of linguistic variables suggested in [30], we compare the performance of the proposed measure with some existing measures of intuitionistic fuzzy entropy suggested by various researchers:

  1. 1.

    Burillo and Bustince’s entropy \((E_{BB})\)[6];

  2. 2.

    Zeng and Li’s entropy \((E_{ZL})\) [31];

  3. 3.

    Szmidt and Kacprzyk’s entropy \((E_{SK})\)[25];

  4. 4.

    Vlachos and Sergiadis's entropy \((E_{SV})\) [9];

  5. 5.

    Zhang and Jiang’s IF entropy \((E_{ZJ})\) [32];

  6. 6.

    Hung and Yang’s entropy \((E_{hc}^2\) and \(E_r^{1/2})\) [33];

  7. 7.

    Ye’s IF entropy measure \((E_Y)\)[34];

  8. 8.

    Wei et al.'s entropy measure \((E_{Wei})\) [35];

  9. 9.

    Verma and Sharma’s exponential intuitionistic fuzzy entropy measure \((E_{VS})\) [36];

  10. 10.

    Wei et al.'s entropy measure \((E_{W})\) [35];

  11. 11.

    Wang and Wang's entropy \((E_{WW})\) [37];

  12. 12.

    Liu and Ren's intuitionistic fuzzy entropy \((E_{LR})\) [38].

Hung and Yang [33] and Hwang and Yang [39] established that entropy measures of IFSs should satisfy the following requirement for good performance:

$$\begin{aligned} E (G^{1/2})> E (G)> E (G^2)> E (G^3)> E (G^4). \end{aligned}$$
(47)
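The following sketch, which reuses the entropy_ab function from the earlier sketch, builds the modified sets \(G^n\) of (45) for the set G of (46) and checks requirement (47) numerically; the parameter pair \(\alpha =2\), \(\beta =0.5\) is our illustrative choice.

```python
G = [(0.1, 0.8), (0.3, 0.5), (0.6, 0.2), (0.9, 0.0), (1.0, 0.0)]  # Eq. (46)

def modifier(G, n):
    """G^n of Eq. (45): mu -> mu^n and nu -> 1 - (1 - nu)^n."""
    return [(mu ** n, 1 - (1 - nu) ** n) for mu, nu in G]

values = [entropy_ab(modifier(G, n), alpha=2.0, beta=0.5)
          for n in (0.5, 1, 2, 3, 4)]
print(values)
print(all(a > b for a, b in zip(values, values[1:])))  # expected: True, i.e. (47) holds
```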

The computed numerical values of the different entropy measures are tabulated in Table 1.

Table 1 Numerical Values of the various entropy measures under \(G^{1/2}\), G, \(G^2\), \(G^3\) and \(G^4\)

On analyzing Table 1, we get the following results:

$$\begin{aligned}&E_{BB} (G^{1/2})< E_{BB} (G)< E_{BB} (G^2)< E_{BB} (G^3)< E_{BB} (G^4);\\&E_{ZL} (G^{1/2})> E_{ZL} (G)> E_{ZL} (G^2)> E_{ZL} (G^3)> E_{ZL} (G^4);\\&E_{SK} (G^{1/2})> E_{SK} (G)> E_{SK} (G^2)> E_{SK} (G^3)> E_{SK} (G^4);\\&E_{SV} (G^{1/2})> E_{SV} (G)> E_{SV} (G^2)> E_{SV} (G^3)> E_{SV} (G^4);\\&E_{ZJ} (G^{1/2})> E_{ZJ} (G)< E_{ZJ} (G^2)> E_{ZJ} (G^3)> E_{ZJ} (G^4);\\&E_{hc}^2 (G^{1/2})< E_{hc}^2 (G)> E_{hc}^2 (G^2)> E_{hc}^2 (G^3)> E_{hc}^2 (G^4);\\&E_r^{1/2} (G^{1/2})< E_r^{1/2} (G)> E_r^{1/2} (G^2)> E_r^{1/2} (G^3)> E_r^{1/2} (G^4);\\&E_{Wei} (G^{1/2})> E_{Wei} (G)> E_{Wei} (G^2)> E_{Wei} (G^3)> E_{Wei} (G^4);\\&E_Y (G^{1/2})> E_Y (G)> E_Y (G^2)> E_Y (G^3)> E_Y (G^4);\\&E_{VS} (G^{1/2})> E_{VS} (G)> E_{VS} (G^2)> E_{VS} (G^3)> E_{VS} (G^4);\\&E_W (G^{1/2})< E_W (G)> E_W (G^2)> E_W (G^3)> E_W (G^4);\\&E_{WW} (G^{1/2})> E_{WW} (G)> E_{WW} (G^2)> E_{WW} (G^3)> E_{WW} (G^4);\\&E_{LR} (G^{1/2})> E_{LR} (G)> E_{LR} (G^2)> E_{LR} (G^3)> E_{LR} (G^4);\\&E_\alpha ^\beta (G^{1/2})> E_\alpha ^\beta (G)> E_\alpha ^\beta (G^2)> E_\alpha ^\beta (G^3)> E_\alpha ^\beta (G^4). \end{aligned}$$

Thus, from the above analysis, we find that \(E_{ZL}\), \(E_{SK}\), \(E_{SV}\), \(E_{Wei}\), \(E_Y\), \(E_{VS}\), \(E_{WW}\), \(E_{LR}\) and \(E_\alpha ^\beta \) follow the sequence (47), whereas \(E_{BB}\), \(E_{ZJ}\), \(E_{hc}^2\), \(E_r^{1/2}\) and \(E_W\) do not. This means that the performance of \(E_{ZL}\), \(E_{SK}\), \(E_{SV}\), \(E_{Wei}\), \(E_Y\), \(E_{VS}\), \(E_{WW}\), \(E_{LR}\) and \(E_\alpha ^\beta \) is better than that of \(E_{BB}\), \(E_{ZJ}\), \(E_{hc}^2\), \(E_r^{1/2}\) and \(E_W\).

Let us take one more example from Hung and Yang [33] for further comparison.

To analyze how different IFSs “LARGE” in X affect the above entropy measures, we reduce the hesitancy degree of “8”, which is the middle point of X. First, suppose that

$$\begin{aligned} ``\hbox {Large}''=G_{1}=\{(6, 0.1, 0.8), (7, 0.3, 0.5), (8, 0.5, 0.4), (9, 0.9, 0.0), (10, 1.0, 0.0)\}. \end{aligned}$$

To compare the different entropy measures, we use the IFSs \(G_1^\frac{1}{2}\), \(G_1\), \(G_1^2\), \(G_1^3\) and \(G_1^4\). The comparison results are presented in Table 2.

Table 2 Numerical Values of the various IF entropies under \(G_1^{1/2}\), \(G_1\), \(G_1^2\), \(G_1^3\) and \(G_1^4\)

Analysis of the above table gives:

$$\begin{aligned}&E_{ZL} (G_1^{1/2})< E_{ZL} (G_1)> E_{ZL} (G_1^2)> E_{ZL} (G_1^3)> E_{ZL} (G_1^4);\\&E_{SK} (G_1^{1/2})< E_{SK} (G_1)> E_{SK} (G_1^2)> E_{SK} (G_1^3)> E_{SK} (G_1^4);\\&E_{SV} (G_1^{1/2})> E_{SV} (G_1)> E_{SV} (G_1^2)> E_{SV} (G_1^3)> E_{SV} (G_1^4);\\&E_{ZJ} (G_1^{1/2})< E_{ZJ} (G_1)> E_{ZJ} (G_1^2)> E_{ZJ} (G_1^3)> E_{ZJ} (G_1^4);\\&E_{Wei} (G_1^{1/2})> E_{Wei} (G_1)> E_{Wei} (G_1^2)> E_{Wei} (G_1^3)> E_{Wei} (G_1^4);\\&E_{VS} (G_1^{1/2})> E_{VS} (G_1)> E_{VS} (G_1^2)> E_{VS} (G_1^3)> E_{VS} (G_1^4);\\&E_{WW} (G_1^{1/2})< E_{WW} (G_1)> E_{WW} (G_1^2)> E_{WW} (G_1^3)> E_{WW} (G_1^4);\\&E_{LR} (G_1^{1/2})< E_{LR} (G_1)> E_{LR} (G_1^2)> E_{LR} (G_1^3)> E_{LR} (G_1^4);\\&E_\alpha ^\beta (G_1^{1/2})> E_\alpha ^\beta (G_1)> E_\alpha ^\beta (G_1^2)> E_\alpha ^\beta (G_1^3)> E_\alpha ^\beta (G_1^4). \end{aligned}$$

From the above analysis, we observe that \(E_{ZL}\), \(E_{SK}\), \(E_{ZJ}\), \(E_{WW}\) and \(E_{LR}\) do not follow the pattern (47), whereas \(E_{SV}\), \(E_{Wei}\), \(E_{VS}\) and \(E_\alpha ^\beta \) do. Thus the performance of \(E_{SV}\), \(E_{Wei}\), \(E_{VS}\) and \(E_\alpha ^\beta \) is better than that of \(E_{ZL}\), \(E_{SK}\), \(E_{ZJ}\), \(E_{WW}\) and \(E_{LR}\).

Now, we consider another IFS “LARGE” in X from Hung and Yang [33], defined as:

$$\begin{aligned} G_2=\{(6,0.1, 0.8), (7, 0.3, 0.5), (8, 0.5, 0.5), (9, 0.9, 0.0), (10, 1.0, 0.0)\}. \end{aligned}$$

In \(G_2\), the hesitancy degree of “8” is reduced to zero. Based on this, we obtain Table 3:

Table 3 Numerical Values of the various IF entropies under \(G_2^{1/2}\), \(G_2\), \(G_2^2\), \(G_2^3\) and \(G_2^4\)

On analyzing the above table, we observe that

$$\begin{aligned}&E_{ZL} (G_2^{1/2})< E_{ZL} (G_2)> E_{ZL} (G_2^2)> E_{ZL} (G_2^3)> E_{ZL} (G_2^4);\\&E_{SK} (G_2^{1/2})< E_{SK} (G_2)> E_{SK} (G_2^2)> E_{SK} (G_2^3)> E_{SK} (G_2^4);\\&E_{SV} (G_2^{1/2})> E_{SV} (G_2)> E_{SV} (G_2^2)> E_{SV} (G_2^3)> E_{SV} (G_2^4);\\&E_{ZJ} (G_2^{1/2})< E_{ZJ} (G_2)> E_{ZJ} (G_2^2)> E_{ZJ} (G_2^3)> E_{ZJ} (G_2^4);\\&E_{Wei} (G_2^{1/2})> E_{Wei} (G_2)> E_{Wei} (G_2^2)> E_{Wei} (G_2^3)> E_{Wei} (G_2^4);\\&E_{VS} (G_2^{1/2})> E_{VS} (G_2)> E_{VS} (G_2^2)> E_{VS} (G_2^3)> E_{VS} (G_2^4);\\&E_{WW} (G_2^{1/2})< E_{WW} (G_2)> E_{WW} (G_2^2)> E_{WW} (G_2^3)> E_{WW} (G_2^4);\\&E_{LR} (G_2^{1/2})< E_{LR} (G_2)> E_{LR} (G_2^2)> E_{LR} (G_2^3)> E_{LR} (G_2^4);\\&E_\alpha ^\beta (G_2^{1/2})> E_\alpha ^\beta (G_2)> E_\alpha ^\beta (G_2^2)> E_\alpha ^\beta (G_2^3)> E_\alpha ^\beta (G_2^4). \end{aligned}$$

Again, \(E_{ZL}\), \(E_{SK}\), \(E_{ZJ}\), \(E_{WW}\) and \(E_{LR}\) fail to meet the requirement, whereas \(E_{SV}\), \(E_{Wei}\), \(E_{VS}\) and \(E_\alpha ^\beta \) follow the pattern.

Finally, we take one more example from Liu and Ren [38].

Suppose that there are four IF sets denoted as IFNs: \(G_3=(0.4, 0.1)\), \(G_4=(0.6, 0.3)\), \(G_5=(0.2, 0.6)\) and \(G_6= (0.13, 0.565)\). It can be observed that \(G_4\) is less fuzzy than \(G_3\) and \(G_5\) is less fuzzy than \(G_6\). Numerical values of the entropy measures are displayed in Table 4.

Table 4 Numerical values of six entropies

From Table 4, we can observe that the entropies \(E_{ZJ}\), \(E_{Wei}\) and \(E_{VS}\) cannot differentiate between the alternatives \(G_3\) and \(G_4\), and the entropies \(E_W\) and \(E_{WW}\) are unable to differentiate between \(G_5\) and \(G_6\), while \(E_{SV}\) and the proposed entropy \(E_\alpha ^\beta \) distinguish all the above alternatives. Thus, on the basis of the above examples, we can say that \(E_{SV}\) and \(E_\alpha ^\beta \) perform better than \(E_{ZJ}\), \(E_{Wei}\), \(E_W\), \(E_{WW}\) and \(E_{VS}\). Moreover, the proposed entropy \(E_\alpha ^\beta \) contains parameters, which makes it more flexible from the application point of view, whereas \(E_{SV}\) does not. Therefore, the proposed entropy measure \(E_\alpha ^\beta \) is not only flexible in nature but also performs consistently, and the proposed entropy formula is considerably good.

The New MADM Method Using Proposed IF Entropy

For a MADM problem, suppose there is a set \(Z=(Z_1, Z_2, \ldots , Z_m)\) of m equally probable alternatives and a set \(O=(e_1, e_2, \ldots , e_n)\) of n attributes. Out of the given m alternatives, we have to select the most suitable one. The degree to which the alternative \(Z_i (i=1, 2, \ldots , m)\) satisfies the attribute \(e_j (j=1, 2, \ldots , n)\) is represented by the intuitionistic fuzzy number (IFN) \(\tilde{x}_{ij}= (p_{ij}, q_{ij})\), where \(p_{ij}\) denotes the membership degree and \(q_{ij}\) the non-membership degree of the alternative \(Z_i\), satisfying \(0\le p_{ij}\le 1\), \(0\le q_{ij}\le 1\) and \(0\le p_{ij}+q_{ij}\le 1\) for \(i=1, 2, \ldots , m\) and \(j=1, 2, \ldots , n\). In MADM problems, intuitionistic fuzzy values are calculated using the statistical method suggested by Liu and Wang [40].

To obtain the degrees to which the alternatives \(Z_i\) \((i=1, 2, \ldots , m)\) satisfy or do not satisfy the attributes \(e_j\) \((j=1, 2, \ldots , n)\), we use the statistical tool proposed by Liu and Wang [40]. Suppose we invite a team of N decision makers to deliver their judgments. The team members are expected to answer “yes”, “no” or “I don't know” to the question whether the alternative \(Z_i (i=1, 2, \ldots , m)\) satisfies the attribute \(e_j (j=1, 2, \ldots n)\). Let \(n_{yes} (i, j)\) denote the number of experts who answer affirmatively and \(n_{no} (i, j)\) the number who answer negatively. Then, the degrees to which the alternatives \(Z_i\) satisfy and/or do not satisfy the attributes \(e_j\) may be computed as:

$$\begin{aligned} p_{ij}=\frac{n_{yes} (i, j)}{N} \quad \mathrm {and} \quad q_{ij}=\frac{n_{no} (i, j)}{N}. \end{aligned}$$
(48)
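A small sketch of (48), assuming the answer counts are stored as \(m\times n\) nested lists; the function name is ours.

```python
def ifns_from_votes(n_yes, n_no, N):
    """Eq. (48): turn 'yes'/'no' counts from N experts into IFNs (p_ij, q_ij)."""
    return [[(y / N, no / N) for y, no in zip(row_yes, row_no)]
            for row_yes, row_no in zip(n_yes, n_no)]

# Example: 2 alternatives, 2 attributes, N = 100 experts
print(ifns_from_votes([[70, 50], [60, 80]], [[20, 30], [25, 10]], 100))
```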

Thus, the MADM problem can be represented by the following intuitionistic fuzzy decision matrix \(X=(\tilde{x}_{ij})_{m\times n}\):

$$\begin{aligned} X=\begin{pmatrix} (p_{11}, q_{11}) & (p_{12}, q_{12}) & \cdots & (p_{1n}, q_{1n})\\ (p_{21}, q_{21}) & (p_{22}, q_{22}) & \cdots & (p_{2n}, q_{2n})\\ \vdots & \vdots & \ddots & \vdots \\ (p_{m1}, q_{m1}) & (p_{m2}, q_{m2}) & \cdots & (p_{mn}, q_{mn}) \end{pmatrix}. \end{aligned}$$
(49)

Considering that the attributes have different importance degrees, the weight vector of all attributes, given by the decision makers, is defined as \(u=(u_1, u_2, \ldots , u_n)^T\) with \(0\le u_j\le 1\) \((j=1, 2, \ldots , n)\) and \(\sum \nolimits _{j=1}^nu_j=1\), where \(u_j\) is the importance degree of the jth attribute. Sometimes, the information about attribute weights is completely unknown or only partially known because of the decision maker's limited expertise about the problem domain, lack of knowledge, time pressure, etc. To get the optimal alternative, we should use methods or optimization models to determine the weight vector of the attributes. In the present communication, two methods based on the proposed entropy are discussed for determining the weights of attributes.

When Weights are Unknown

Based on the work of Chen and Li [12], we use Eq. (8) to determine the weights of the attributes when they are completely unknown:

$$\begin{aligned} u_j=\frac{1-e_j}{n-\sum _{j=1}^n e_j}, \quad j=1, 2, \ldots , n, \end{aligned}$$
(50)

where \(e_j=\frac{1}{m}\sum _{i=1}^mE_\alpha ^\beta (\tilde{x}_{ij})\) and

$$\begin{aligned} E_\alpha ^\beta (\tilde{x}_{ij})&= \frac{1}{2^{1-\alpha }-2^{1-\beta }}\Big \{\Big [(p_{ij}^\alpha +q_{ij}^\alpha )\times (p_{ij}+q_{ij})^{1-\alpha }+2^{1-\alpha }\pi _{ij}\Big ]\nonumber \\&\quad -\,\Big [(p_{ij}^\beta +q_{ij}^\beta )\times (p_{ij}+q_{ij})^{1-\beta }+2^{1-\beta }\pi _{ij}\Big ]\Big \}, \end{aligned}$$
(51)

with \(\pi _{ij}=1-p_{ij}-q_{ij}\), is the IF entropy measure of the IFN \(\tilde{x}_{ij}=(p_{ij}, q_{ij})\).

According to entropy theory, a criterion whose entropy values across the alternatives are small provides decision makers with useful information, so such a criterion should be assigned a bigger weight; otherwise, the criterion will be judged unimportant by most decision makers and should be assigned a very small weight.
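A sketch of this weighting scheme, combining (50) and (51) and reusing the entropy_ab sketch from above (applied to a single IFN); the decision-matrix layout, a list of m rows of \((p_{ij}, q_{ij})\) pairs, and the function name are ours.

```python
def entropy_weights(X, alpha, beta):
    """Attribute weights u_j of Eq. (50) from an m x n IFN decision matrix X."""
    m, n = len(X), len(X[0])
    # e_j: mean entropy of attribute j over the m alternatives, Eqs. (50)-(51)
    e = [sum(entropy_ab([X[i][j]], alpha, beta) for i in range(m)) / m
         for j in range(n)]
    denom = n - sum(e)
    return [(1 - ej) / denom for ej in e]
```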

When Weights are Partially Known

In general, there are additional constraints on the weight vector \(u=(u_1, u_2, \ldots , u_n)\). Sometimes, the information about attribute weights is only partially known due to lack of expertise, time limits or lack of knowledge about the problem domain. To get the optimal alternative, we should use optimization methods to determine the weight vector of the attributes. Let the set of known weight information be denoted by H. Under an intuitionistic fuzzy environment, to obtain the weights of attributes for a multiple attribute decision making problem when only partial information about them is available, we use the minimum entropy principle introduced by Wang and Wang [37] and determine the weight vector of attributes by constructing the following programming model.

Now, the overall entropy of the alternative \(Z_i\) is

$$\begin{aligned} E (Z_i)&=\sum _{j=1}^n E_\alpha ^\beta (\tilde{x}_{ij}) =\frac{1}{2^{1-\alpha }-2^{1-\beta }}\sum _{j=1}^n \Big \{\Big [(p_{ij}^\alpha +q_{ij}^\alpha )\times (p_{ij}+q_{ij})^{1-\alpha }+2^{1-\alpha }\pi _{ij}\Big ]\nonumber \\&\quad -\,\Big [(p_{ij}^\beta +q_{ij}^\beta )\times (p_{ij}+q_{ij})^{1-\beta }+2^{1-\beta }\pi _{ij}\Big ]\Big \}, \end{aligned}$$
(52)

Since each alternative is assessed in a fairly competitive environment, the weight coefficients corresponding to the same attributes should be equal; to determine the optimal weights, the following model can be constructed:

$$\begin{aligned} \min \quad E= & {} \sum _{i=1}^m\sum _{j=1}^n u_j E_\alpha ^\beta (\tilde{x}_{ij})\nonumber \\= & {} \frac{1}{2^{1-\alpha }-2^{1-\beta }}\sum _{i=1}^m\sum _{j=1}^nu_j\Big \{\Big [(p_{ij}^\alpha +q_{ij}^\alpha )\times (p_{ij}+q_{ij})^{1-\alpha }+2^{1-\alpha }\pi _{ij}\Big ]\nonumber \\&-\,\Big [(p_{ij}^\beta +q_{ij}^\beta )\times (p_{ij}+q_{ij})^{1-\beta }+2^{1-\beta }\pi _{ij}\Big ]\Big \},\\&\mathrm {s.t.}\quad \sum _{j=1}^n u_j=1,\ u_j\in H.\nonumber \end{aligned}$$
(53)

On solving model (53), e.g., by using MATLAB software, we get the optimal weight vector \(u=(u_1, u_2, \ldots , u_n)^T=\arg \min E\).
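As an illustration of solving (53) outside MATLAB, the sketch below uses scipy.optimize.linprog (assuming SciPy is available) on the concrete model (65)–(66) of Example 6.3 below, whose objective coefficients play the role of \(\sum _i E_\alpha ^\beta (\tilde{x}_{ij})\).

```python
from scipy.optimize import linprog

c = [0.3926, 0.4326, 0.1748]                         # coefficients from Eq. (65)
bounds = [(0.25, 0.75), (0.35, 0.60), (0.30, 0.35)]  # the constraint set H
res = linprog(c, A_eq=[[1, 1, 1]], b_eq=[1], bounds=bounds)
print(res.x)  # expected: approximately [0.30, 0.35, 0.35], cf. Eq. (67)
```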

In summary, the procedural steps of decision making method are listed as follows:

  1. 1.

    Determine the weights of the attributes from Eq. (50) or by solving model (53).

  2. 2.

    Define the Best Solution \(Z^+\) and Worst Solution \(Z^-\) as:

    $$\begin{aligned} Z^+=\Big (\Big (\alpha _1^+,\beta _1^+\Big ), \Big (\alpha _2^+, \beta _2^+\Big ), \ldots , \Big (\alpha _n^+, \beta _n^+\Big )\Big ), \end{aligned}$$
    (54)

    where \((\alpha _j^+, \beta _j^+)=(\sup (\mu _G (z_i)), \inf (\nu _G (z_i)))=(1, 0), j=1, 2, \ldots , n\) and \(z_i\in X\).

    $$\begin{aligned} \mathrm {and}\quad Z^-=\Big (\Big (\alpha _1^-,\beta _1^-\Big ), \Big (\alpha _2^-, \beta _2^-\Big ), \ldots , \Big (\alpha _n^-, \beta _n^-\Big )\Big ), \end{aligned}$$
    (55)

    where \((\alpha _j^-, \beta _j^-)=(\inf (\mu _G (z_i)), \sup (\nu _G (z_i)))=(0, 1), j=1, 2, \ldots , n\) and \(z_i\in X\).

  3. 3.

    Using Definition 2.7, the distance measures of the \(Z_i\)'s from \(Z^+\) and \(Z^-\) are given as follows:

    $$\begin{aligned} s (Z_i, Z^+)= & {} \frac{1}{2}\sum _{j=1}^n u_j (|p_{ij}-\alpha _j^+|+|q_{ij}-\beta _j^+|+|r_{ij}-\pi _j^+|),\nonumber \\= & {} \frac{1}{2}\sum _{j=1}^n u_j (|1-p_{ij}|+|q_{ij}|+|1-p_{ij}-q_{ij}|). \end{aligned}$$
    (56)
    $$\begin{aligned} s (Z_i, Z^-)= & {} \frac{1}{2}\sum _{j=1}^n u_j (|p_{ij}-\alpha _j^-|+|q_{ij}-\beta _j^-|+|r_{ij}-\pi _j^-|),\nonumber \\= & {} \frac{1}{2}\sum _{j=1}^n u_j (|p_{ij}|+|1-q_{ij}|+|1-p_{ij}-q_{ij}|). \end{aligned}$$
    (57)

    where \(r_{ij}=1-p_{ij}-q_{ij}\) and \(\pi _j=1-\alpha _j-\beta _j\).

  4. 4.

    Determine the relative degrees of closeness \(D_i\)’s as:

    $$\begin{aligned} D_i=\frac{s (Z_i, Z^-)}{s (Z_i, Z^-)+s (Z_i, Z^+)}. \end{aligned}$$
    (58)
  5. 5.

    Rank the alternatives according to the values of the \(D_i\)'s in descending order. The alternative nearest to \(Z^+\) and farthest from \(Z^-\) is the best alternative (a sketch of these steps is given below).
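The sketch below covers Steps 2–4, assuming the decision matrix is a list of m rows of \((p_{ij}, q_{ij})\) pairs and the weight vector u has already been determined; since \(Z^+=(1, 0)\) and \(Z^-=(0, 1)\) componentwise, the absolute values in (56) and (57) simplify. The function names are ours.

```python
def closeness_degrees(X, u):
    """Relative closeness D_i of Eq. (58) for each alternative (row of X)."""
    D = []
    for row in X:
        # Eqs. (56) and (57) with r_ij = 1 - p_ij - q_ij >= 0
        s_plus = sum(ui * ((1 - p) + q + (1 - p - q))
                     for (p, q), ui in zip(row, u)) / 2
        s_minus = sum(ui * (p + (1 - q) + (1 - p - q))
                      for (p, q), ui in zip(row, u)) / 2
        D.append(s_minus / (s_minus + s_plus))
    return D

# Step 5: rank the alternatives by descending D_i
X = [[(0.7, 0.2), (0.5, 0.3)], [(0.6, 0.25), (0.8, 0.1)]]
D = closeness_degrees(X, [0.4, 0.6])
print(sorted(range(len(D)), key=lambda i: -D[i]))
```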

Numerical Examples

Now we illustrate the application of the proposed MADM method with the help of the following examples.

Case 1 When the weights of the attributes are unknown.

Example 6.1 Consider a supplier selection problem with four possible alternatives \(Z_i (i=1, 2, 3, 4)\) and three attributes \(e_j (j=1, 2, 3)\). The ratings of the alternatives are displayed in the intuitionistic fuzzy decision matrix represented by Table 6 (this example is taken from Herrera and Herrera-Viedma [41] and Ye [42]). The membership degrees (satisfactory degrees) \(p_{ij}\) and non-membership degrees (non-satisfactory degrees) \(q_{ij}\) of the alternatives \(Z_i\) with respect to the attributes \(e_j\) may be obtained using the statistical method proposed by Liu and Wang [40] (taking the number of experts \(N=100\) in (48)). Suppose that the “yes” and “no” answers of the experts are distributed as shown in Table 5.

Table 5 Distribution of “yes” and “no” answers from 100 experts

Using the formula

$$\begin{aligned} p_{ij}=\frac{n_{yes} (i, j)}{N} \quad \mathrm {and} \quad q_{ij}=\frac{n_{no} (i, j)}{N}, \end{aligned}$$
(59)

we obtain the degrees to which the alternatives \(Z_i\) \((i=1, 2, 3, 4)\) satisfy or do not satisfy the attributes \(e_j\) \((j=1, 2, 3)\) as follows:

The IF decision matrix corresponding to Table 5 is given in Table 6.

Table 6 Intuitionistic fuzzy decision matrix

The specific calculations are as under:

  1. 1.

    Using Eq. (50) (taking \(\alpha =50\) and \(\beta =0.7\)), the calculated attribute weight vector is:

    $$\begin{aligned} u=(u_1, u_2, u_3)^T=(0.2981, 0.3047, 0.3973)^T. \end{aligned}$$
  2. 2.

    The Best Solution \((Z^+)\) and Worst Solution \((Z^-)\) are respectively given as:

    $$\begin{aligned} Z^+= & {} ((\alpha _1^+, \beta _1^+), (\alpha _2^+, \beta _2^+), (\alpha _3^+, \beta _3^+))=((1, 0), (1, 0), (1, 0));\\ Z^-= & {} ((\alpha _1^-, \beta _1^-), (\alpha _2^-, \beta _2^-), (\alpha _3^-, \beta _3^-))=((0, 1), (0, 1), (0, 1)). \end{aligned}$$
  3. 3.

    The distance measures of \(Z_i\)’s from \(Z^+\) and \(Z^-\) are:

    $$\begin{aligned} s (Z_1, Z^+)= & {} 0.6256, s (Z_2, Z^+)=0.3859, s (Z_3, Z^+)=0.4857, s (Z_4, Z^+)=0.4221;\\ s (Z_1, Z^-)= & {} 0.5923, s (Z_2, Z^-)=0.7859, s (Z_3, Z^-)=0.7039, s (Z_4, Z^-)=0.8358. \end{aligned}$$
  4. 4.

    The calculated values of \(D_i\)’s, the relative degrees of closeness, are:

    $$\begin{aligned} D_1=0.4863, D_2=0.6707, D_3=0.5917, D_4=0.6644. \end{aligned}$$

Thus, the ranking order of all alternatives is \(Z_2\succ Z_4\succ Z_3\succ Z_1\) and \(Z_2\) is the desirable alternative.

Let us consider one more example for further clarity.

Example 6.2 This example is adapted from [43]. Consider a supplier selection problem with five possible alternatives \(Z_i (i=1, 2, 3, 4, 5)\) and six attributes \(e_j (j=1, 2, \ldots , 6)\). The ratings of the alternatives are displayed in the following intuitionistic fuzzy decision matrix D.

(60)

The specific calculation steps are as follows:

  1. 1.

    Taking \(\alpha =50\) and \(\beta =0.7\) in (50), the computed attribute weight vector is:

    $$\begin{aligned} u=(u_1, u_2, u_3, u_4, u_5, u_6)^T= (0.0382, 0.5762, 0.1367, 0.1077, 0.0896, 0.0516) \end{aligned}$$
    (61)
  2. 2.

    The Best Solution \((Z^+)\) and Worst Solution \((Z^-)\) are:

    $$\begin{aligned} Z^+&=((\alpha _1^+, \beta _1^+), (\alpha _2^+, \beta _2^+), (\alpha _3^+, \beta _3^+), (\alpha _4^+, \beta _4^+), (\alpha _5^+, \beta _5^+), (\alpha _6^+, \beta _6^+))\nonumber \\&=((1, 0), (1, 0), (1, 0), (1, 0), (1, 0), (1, 0)),\nonumber \\ Z^-&=((\alpha _1^-, \beta _1^-), (\alpha _2^-, \beta _2^-), (\alpha _3^-, \beta _3^-), (\alpha _4^-, \beta _4^-), (\alpha _5^-, \beta _5^-), (\alpha _6^-, \beta _6^-))\nonumber \\&=((0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1)). \end{aligned}$$
    (62)
  3. 3.

    The computed distance measures of \(Z_i\)’s from \(Z^+\) and \(Z^-\) are:

    $$\begin{aligned} s (Z_1, Z^+)&=0.6491, s (Z_2, Z^+)=0.3993, s (Z_3, Z^+)=0.3213, s (Z_4, Z^+)\nonumber \\&\quad =0.1477, s (Z_5, Z^+)=0.4475; \nonumber \\ s (Z_1, Z^-)&=0.5918, s (Z_2, Z^-)=0.7887, s (Z_3, Z^-)=0.8701, s (Z_4, Z^-)\nonumber \\&\quad =0.9538, s (Z_5, Z^-)=0.8496. \end{aligned}$$
    (63)
  4. 4.

    The computed values of \(D_i\)’s, i.e., the relative degrees of closeness are:

    $$\begin{aligned} D_1=0.4769, D_2=0.6639, D_3=0.7303, D_4=0.8659, D_5=0.6550. \end{aligned}$$
    (64)
  5. 5.

    Ranking the alternatives according to the values of \(D_i\)’s in descending order, the sequence of alternatives so obtained is \(Z_4\succ Z_3\succ Z_2\succ Z_5\succ Z_1\) and \(Z_4\) is the most desirable alternative.

Comparison with Other Methods

By applying the method proposed by Xu [43], the preference order of all alternatives is \(Z_4\succ Z_3\succ Z_5\succ Z_2\succ Z_1\) and \(Z_4\) is the best alternative.

If we apply the method suggested by Boran et al. [44] to Example 6.2, the sequence of alternatives so obtained is \(Z_4\succ Z_5\succ Z_3\succ Z_2\succ Z_1\) and \(Z_4\) is the most desirable alternative.

Using the method proposed by Ye [45], the preference order of the alternatives is \(Z_4\succ Z_5\succ Z_3\succ Z_2\succ Z_1\) and \(Z_4\) is the most desirable alternative.

On applying Li’s method [46] to example 6.2, the sequence of alternatives so obtained is \(Z_4\succ Z_5\succ Z_3\succ Z_2\succ Z_1\) and \(Z_4\) is the most suitable option.

If we use the MADM method based on the intuitionistic fuzzy weighted geometric averaging (IFWGA) operator proposed by Chen and Chang [47] to compute Example 6.2, the sequence of preferences so obtained is \(Z_4\succ Z_3\succ Z_5\succ Z_2\succ Z_1\) with \(Z_4\) as the best alternative.

All the methods used for comparison choose \(Z_4\) as the best option. Xu's method [43] is effective only if all attributes have equal weights, which is possible only in some specific applications, for example risk assessment and medical diagnosis [12]. Boran et al. [44] use the definition of IVIFSs to calculate the attribute weights in decision making problems under an interval-valued intuitionistic fuzzy (IVIF) environment, which does not consider the decision matrix for decision making. Li's method [46] is effective only in solving MADM problems in which the attribute weights as well as the ratings of alternatives on attributes are denoted by interval-valued intuitionistic fuzzy sets (IVIFSs). In our proposed MADM method, we measure the relative degrees of closeness of the different alternatives from the best possible solution and the worst possible solution, whereas Ye's method [45] is based on correlation coefficients with the best solution only. The entropy-based attribute weight method introduced in this communication is not only simple and objective but also considers all the alternatives on all attributes.

Case 2 When the weights of the attributes are partially known.

Example 6.3 This example is adapted from Li [48]. In this example, we consider a washing machine selection problem. Suppose there are three washing machines \(Z_i (i=1, 2, 3)\) from which one is to be selected. The evaluation attributes are \(e_1\) (Economical), \(e_2\) (Function) and \(e_3\) (Operationality). The membership degrees (satisfactory degrees) \(p_{ij}\) and non-membership degrees (non-satisfactory degrees) \(q_{ij}\) of the alternatives \(Z_i\) with respect to the attributes \(e_j\) may be obtained using the statistical method proposed by Liu and Wang [40].

The intuitionistic fuzzy decision matrix (calculated as in the above examples) provided by the decision makers is shown in Table 7.

Table 7 Intuitionistic fuzzy decision matrix

Let the weights of attributes satisfy the following set

$$\begin{aligned} H=\{0.25\le u_1\le 0.75, 0.35\le u_2\le 0.60, 0.30\le u_3\le 0.35\}. \end{aligned}$$

The specific calculation steps are as under:

  1. 1.

    Using Eq. (53), the following programming model can be established:

    $$\begin{aligned} \min E=0.3926 u_1 +0.4326 u_2+0.1748 u_3, \end{aligned}$$
    (65)
    $$\begin{aligned} s.t.\quad {\left\{ \begin{array}{ll} 0.25\le u_1\le 0.75 \\ 0.35\le u_2\le 0.60\\ 0.30\le u_3\le 0.35\\ u_1+u_2+u_3=1. \end{array}\right. } \end{aligned}$$
    (66)

    Solving the above programming model by using MATLAB software, we get the weight vector as follows:

    $$\begin{aligned} u=(0.30, 0.35, 0.35)^T. \end{aligned}$$
    (67)
  2. 2.

    The Best Solution \((Z^+)\) and Worst Solution \((Z^-)\) are respectively given as:

    $$\begin{aligned} Z^+= & {} ((\alpha _1^+, \beta _1^+), (\alpha _2^+, \beta _2^+), (\alpha _3^+, \beta _3^+))=((1, 0), (1, 0), (1, 0));\\ Z^-= & {} ((\alpha _1^-, \beta _1^-), (\alpha _2^-, \beta _2^-), (\alpha _3^-, \beta _3^-))=((0, 1), (0, 1), (0, 1)). \end{aligned}$$
  3. 3.

    The calculated distances of the \(Z_i\)'s \((i=1, 2, 3)\) from \(Z^+\) and \(Z^-\) are:

    $$\begin{aligned} s (Z_1, Z^+)= & {} 0.2850, s (Z_2, Z^+)=0.3645, s (Z_3, Z^+)=0.4075;\\ s (Z_1, Z^-)= & {} 0.8125, s (Z_2, Z^-)=0.7100, s (Z_3, Z^-)=0.7425. \end{aligned}$$
  4. 4.

    The calculated values of \(D_i\)’s, the relative degrees of closeness, are as follows:

    $$\begin{aligned} D_1=0.7403, D_2=0.6608, D_3=0.6457. \end{aligned}$$

Arranging the alternatives in descending order according to the values of the \(D_i\)'s, we get the sequence \(Z_1\succ Z_2\succ Z_3\), and \(Z_1\) is the best alternative.

If we apply the intuitionistic fuzzy weighted geometric averaging (IFWGA) operator method introduced by Chen and Chang [47], the preferential sequence of alternatives so obtained is \(Z_1\succ Z_2\succ Z_3\), which coincides with that of the proposed method. This shows that the performance of the proposed information measure, and of the MADM method based on it, is considerably good. Also, the best alternative agrees with that of Li [48].

On analyzing the output of the above three examples, we may observe that the proposed method not only gives an optimal alternative but also provides the decision makers with useful information for the choice of alternatives. The fuzzy decision making method with entropy weights is more practical and effective for dealing with partially known and unknown information about criteria weights.

The above method can also be used to solve the following types of MADM problems:

  1. 1.

    Suppose a person wants to buy a car and five types of cars are available in the market. Suppose he makes four attributes the basis for purchasing the car: (1) price, (2) comfort, (3) design, (4) safety.

  2. 2.

    Suppose a person wants to insure himself with some insurance company. Suppose he has five options available, and the company considers four attributes to check the suitability of the customer, namely (1) age, (2) adequate weight, (3) cholesterol level, (4) blood pressure.

  3. 3.

    Suppose a doctor wants to diagnose patients on the basis of some symptoms of disease. Let the four possible diseases with closely related symptoms be (1) \(D_1\), (2) \(D_2\), (3) \(D_3\), (4) \(D_4\). Let the doctor consider the following four symptoms to decide the possibility of a particular disease: (1) \(A_1\), (2) \(A_2\), (3) \(A_3\), (4) \(A_4\).

  4. 4.

    Suppose a person wants to choose a school for his children. He has five schools as possible alternatives and considers the following four attributes to decide: (1) transport facility, (2) academic profile of the teachers, (3) previous years' results of the school, (4) discipline.

Concluding Remarks

Intuitionistic fuzzy sets play an important role in solving MADM problems. In this communication, we have proposed a new parametric intuitionistic fuzzy entropy and presented a MADM model based on the proposed entropy, in which intuitionistic fuzzy sets represent the characteristics of alternatives. We have discussed two cases for calculating the weights of the attributes: one for unknown attribute weights and the other for partially known attribute weights. Using the minimum entropy principle, the optimal criteria weights are obtained from the proposed entropy-based model. Problems based on multiple attributes, like evaluation of project management risks (which depends on many factors), site selection, credit evaluation, etc., can also be solved by the proposed MADM method. The techniques proposed in this paper can efficiently help the decision maker. In the future, this idea of intuitionistic fuzzy sets will be extended to interval-valued intuitionistic fuzzy sets for determining the weights of experts in MADM problems under an intuitionistic fuzzy environment, and the results will be reported elsewhere.