1 Introduction

Multi-criteria optimization is the process of determining the best feasible solution with respect to the established criteria. Real-world problems are often characterized by several incommensurable and conflicting attributes, so that a solution satisfying all the attributes simultaneously is almost impossible to find. The solution is therefore a set of non-inferior solutions, or a compromise solution reflecting the preferences of the decision-makers. Since each attribute affecting a multiple attribute decision-making (MADM) problem has its own importance/weightage, the attributes need to be evaluated very carefully. Improper assignment of attribute weights may lead to the wrong choice of best alternative, which may ultimately appear in the form of monetary loss to the company/firm involved. Many methods have been suggested by researchers worldwide to determine attribute weights. Realizing the importance of attribute weights, Chen and Li [4] categorized them into two groups: subjective weights and objective weights.

Subjective weights are assigned by decision-makers according to their intuition. The Delphi method [12], the analytic hierarchy process (AHP) [30] and the weighted least square method [6] are examples of this category. In practice, however, a decision-maker may not be well versed in all aspects of a problem, may have limited expertise in the problem domain, or may face a paucity of time, and therefore may not be able to judge all the attributes involved. In such cases, when reliable subjective attribute weights are not available, objective attribute weights become helpful. Objective attribute weights are determined by solving mathematical models. Multi-objective programming [7, 9], principal element analysis [9] and the entropy method belong to this category. Among these, the entropy method is one of the most trusted methods for determining attribute weights and has gained much popularity with authors. In the present communication, we propose two methods for obtaining objective attribute weights: the first approach determines the attribute weights when they are completely unknown or incompletely known, and the second approach treats the case when we have partial information about the attribute weights.

In our day-to-day life, we often face uncertain situations involving vagueness or fuzziness, for example, 'high speed' or 'very intelligent.' Before the introduction of fuzzy set theory by Zadeh [38], probability was the only way to measure uncertainty. But to measure uncertainty using probability theory, the data should be given in the form of crisp numbers. The uncertainty involved in vague terms like 'very smart' cannot be computed using probability theory. Also, the importance of criteria and the impact of alternatives on criteria provided by decision-makers are difficult to express precisely by crisp data in a supplier selection problem. For such problems involving vagueness, the fuzzy set theory proposed by Zadeh [38] has proved to be a powerful tool for quantifying the vagueness. The ability of fuzzy sets to deal with practical problems has made them popular among authors, and various generalizations have been proposed, for example, type-2 fuzzy sets [39]. Among these extensions, the intuitionistic fuzzy sets proposed by Atanassov [1] gained much popularity with researchers. He [2] proved with the help of an example that fuzzy sets alone are not capable of handling practical situations completely. To the existing structure of fuzzy sets, Atanassov [2] added one more factor, called the intuitionistic index or hesitancy degree, to introduce a new structure called the 'Intuitionistic Fuzzy Set (IFS).' It has now been established in many studies that the IFSs proposed by Atanassov [2] are more capable of dealing with real-world problems than fuzzy sets. The introduction of intuitionistic fuzzy entropy by Burillo and Bustince [3] attracted the attention of researchers worldwide, and several authors introduced intuitionistic fuzzy entropies from their own points of view [11, 29, 34, 39]. Recently, Joshi and Kumar [17, 20,21,22] proposed four new intuitionistic fuzzy entropies and applied them to solving multiple attribute decision-making problems.

To solve MADM problems using IFSs, various methods have been suggested [15, 16, 18, 19, 23,24,25,26, 28]. In these methods, the characteristics of the alternatives are represented by IFSs, the attribute weights are represented by intuitionistic fuzzy numbers (IFNs), and a score function or accuracy function for IFSs is used to calculate the degree of closeness to the ideal solutions. These values may not give sufficient information about the alternatives. To address this issue, we suggest an MADM method based on weighted correlation coefficients using entropy weights under an intuitionistic fuzzy environment. Instead of computing the difference between an alternative and the ideal solutions, we suggest an evaluation criterion based on weighted correlation coefficients and rank the alternatives according to relative closeness coefficients to select the most desirable alternative.

The technique for order preference by similarity to ideal solution (TOPSIS) is one of the most widely used techniques for dealing with multiple attribute problems. In this technique, we seek the solution which is nearest to the best (positive ideal) solution and farthest from the worst (negative ideal) solution. To compute the distance between the ideal solutions and the alternatives, different distance measures may be used, and it has been observed that the output of the TOPSIS method depends upon the distance measure chosen. In this paper, we show with the help of an example that the best alternatives obtained by using two different distance measures may not coincide. Ye [37] proposed an evaluation criterion based on the weighted correlation coefficient between an alternative and the positive ideal solution to determine the best possible alternative. But Ye [37] considered only the correlation between the alternatives and the positive ideal solution; the correlation between the alternatives and the negative ideal solution was not considered, whereas the concept of TOPSIS utilizes the distances of the alternatives from the positive as well as the negative ideal solution to decide the best alternative. The main problem with the distance-based TOPSIS method is the change of best alternative with a change of distance measure, and the main motive of this paper is to present a novel MADM method whose output is independent of the distance measure used. Based on the proposed intuitionistic fuzzy entropy and the concept of TOPSIS, we introduce a new MADM method using 'weighted correlation coefficients.' The prime aims of this communication are: (1) to introduce a new information measure based on IFSs and establish its existence, and (2) to propose a new MADM method based on weighted correlation coefficients.

To achieve these aims, the paper is organized as follows: The contributions of earlier researchers in the field and the prime aims of this manuscript are presented in Sect. 1. The basic concepts and definitions required to understand the manuscript are given in Sect. 2. A new intuitionistic fuzzy information measure is introduced and validated in Sect. 3. Some major properties of the proposed measure are studied in Sect. 4. A new TOPSIS-based method for solving MADM problems is introduced in Sect. 5. The proposed MADM method is explained with the help of two illustrative examples in Sect. 6. Finally, concluding remarks are given in Sect. 7.

2 Preliminaries

In this section, we present some basic concepts and definitions of fuzzy sets and IFSs that are needed in the sequel.

Definition 2.1

(See [38]) A fuzzy set (FS) \({\tilde{J}}\) in a finite universe of discourse \(X= \{g_1, g_2, \ldots , g_m\}\) is given by

$$\begin{aligned} {\tilde{J}}=\left\{ \langle g_i, \mu _{{\tilde{J}}} (g_i)\rangle | g_i\in X\right\} \end{aligned}$$
(2.1)

where \(\mu _{{\tilde{J}}}: X\rightarrow [0, 1]\) is the membership function of \({\tilde{J}}\). The number \(\mu _{{\tilde{J}}} (g_i)\) is called the membership degree of \(g_i\in X\) in \({\tilde{J}}\).

The concept of FSs was generalized to IFSs by Atanassov [1] as follows:

Definition 2.2

(See [1]) An IFS J on a universe of discourse \(X=\{g_1, g_2, \ldots , g_m\}\) is given as

$$\begin{aligned} J=\left\{ \langle g_i, \mu _J (g_i), \nu _J (g_i)\rangle | g_i\in X\right\} , \end{aligned}$$
(2.2)

where

$$\begin{aligned} \mu _J : X\rightarrow [0, 1] \quad \nu _J : X\rightarrow [0, 1] \end{aligned}$$
(2.3)

with the condition \(0\le \mu _J (g_i)+\nu _J (g_i)\le 1, \forall g_i\in X\). The numbers \(\mu _J (g_i)\) and \(\nu _J (g_i),\) respectively, denote the membership and non-membership degrees of \(g_i\in X\) to the set J.

For each IFS J in X, the number \(\pi _J (g_i)=1-\mu _J (g_i)- \nu _J (g_i)\), \(g_i\in X,\) represents the hesitancy degree of \(g_i\in X\); \(\pi _J (g_i)\) is also called the intuitionistic index of \(g_i\) in J.

Obviously, when \(\pi _J (g_i)=0\), i.e., \(\nu _J (g_i)=1-\mu _J (g_i)\) for all \(g_i\in X\), the IFS J becomes an ordinary FS. Therefore, FSs are special cases of IFSs.
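These definitions translate directly into code. The following Python sketch (ours, purely illustrative; the dictionary encoding and the name `hesitancy` are assumptions, not part of the original formulation) stores an IFS over a finite universe as element → (μ, ν) pairs and derives the hesitancy degree:

```python
from typing import Dict, Tuple

# An IFS over a finite universe X, encoded as element -> (mu, nu);
# the hesitancy degree pi is derived rather than stored.
IFS = Dict[str, Tuple[float, float]]

def hesitancy(mu: float, nu: float) -> float:
    """Intuitionistic index pi = 1 - mu - nu."""
    if not (0.0 <= mu and 0.0 <= nu and mu + nu <= 1.0):
        raise ValueError("need mu, nu >= 0 and mu + nu <= 1")
    return 1.0 - mu - nu

J: IFS = {"g1": (0.5, 0.3), "g2": (0.7, 0.3)}
for g, (mu, nu) in J.items():
    print(g, hesitancy(mu, nu))  # g1 -> ~0.2 (genuine hesitation), g2 -> ~0 (ordinary FS)
```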

From here onwards, \({\textit{FS}}(X)\) and \({\textit{IFS}} (X)\) will, respectively, denote the set of all FSs and IFSs on X.

Definition 2.3

(See [8]) For any \(J, K\in IFS (X)\) given by

$$\begin{aligned} J&=\left\{ \langle g_i, \mu _J (g_i), \nu _J (g_i)\rangle |g_i\in X\right\} ,\nonumber \\ K&=\left\{ \langle g_i, \mu _K (g_i), \nu _K (g_i)\rangle |g_i\in X\right\} ; \end{aligned}$$
(2.4)

the usual set relations and operations are defined as follows (a small computational sketch is given after the list):

  1. (1)

    \(J\subseteq K\) if and only if \(\mu _J (g_i)\le \mu _K (g_i), \nu _J (g_i)\ge \nu _K (g_i)\) for \(\mu _K (g_i)\le \nu _K (g_i)\) OR \(\mu _J (g_i)\ge \mu _K (g_i), \nu _J (g_i)\le \nu _K (g_i)\) for \(\mu _K (g_i)\ge \nu _K (g_i)\) for all \(g_i\in X\);

  2. (2)

    \(J=K\) if and only if \(J\subseteq K\) and \(K\subseteq J\);

  3. (3)

The complement of the set J, denoted as \(J^c\), is \(J^c=\left\{ \langle g_i, \nu _J (g_i), \mu _J (g_i)\rangle | g_i\in X\right\} \);

  4. (4)

\(J\cap K=\left\{ \langle g_i, \mu _J (g_i)\wedge \mu _K (g_i), \nu _J (g_i)\vee \nu _K (g_i)\rangle \mid g_i \in X\right\} \);

  5. (5)

\(J\cup K=\left\{ \langle g_i, \mu _J (g_i)\vee \mu _K (g_i), \nu _J (g_i)\wedge \nu _K (g_i)\rangle \mid g_i\in X\right\} \).
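For concreteness, here is the computational sketch announced above (ours; the dictionary-based encoding is an assumption, not part of [8]), implementing operations (3)–(5) element-wise:

```python
from typing import Dict, Tuple

IFS = Dict[str, Tuple[float, float]]  # element -> (mu, nu)

def complement(J: IFS) -> IFS:
    """J^c swaps membership and non-membership degrees (Definition 2.3(3))."""
    return {g: (nu, mu) for g, (mu, nu) in J.items()}

def intersection(J: IFS, K: IFS) -> IFS:
    """Element-wise min of mu, max of nu (Definition 2.3(4))."""
    return {g: (min(J[g][0], K[g][0]), max(J[g][1], K[g][1])) for g in J}

def union(J: IFS, K: IFS) -> IFS:
    """Element-wise max of mu, min of nu (Definition 2.3(5))."""
    return {g: (max(J[g][0], K[g][0]), min(J[g][1], K[g][1])) for g in J}

J = {"g1": (0.5, 0.3), "g2": (0.6, 0.2)}
K = {"g1": (0.4, 0.5), "g2": (0.7, 0.1)}
print(intersection(J, K))  # {'g1': (0.4, 0.5), 'g2': (0.6, 0.2)}
print(union(J, K))         # {'g1': (0.5, 0.3), 'g2': (0.7, 0.1)}
```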

Hung and Yang [11] proposed the axiomatic definition of IFS entropy from a probabilistic viewpoint as follows:

Definition 2.4

(See [11]) A real-valued function \(E:{\textit{IFS}} (X)\rightarrow [0, 1]\) is called an entropy on \({\textit{IFS}}(X)\) if it satisfies the following postulates:

\(IF_1\) :

(Sharpness): \(E (J)=0\) if and only if J is a crisp set, i.e., \(\mu _J (g_i)=0\), \(\nu _J (g_i)=1\) or \(\mu _J (g_i)=1\), \(\nu _J (g_i)=0\) \(\forall g_i\in X\);

\(IF_2\) :

(Maximality): E(J) attains its maximum value if \(\mu _J (g_i)=\nu _J (g_i)=\pi _J (g_i)=\frac{1}{3}, \forall g_i\in X\);

\(IF_3\) :

(Symmetry): \(E (J)=E (J^c)\), where \(J^c\) denotes the complement of J;

\(IF_4\) :

(Resolution): \(E (J)\le E(K)\) if J is sharper than K, that is, \(\mu _J\le \mu _K\) and \(\nu _J\le \nu _K\) for \(\max (\mu _K, \nu _K)\le \frac{1}{3}\), and \(\mu _J\ge \mu _K\) and \(\nu _J\ge \nu _K\) for \(\min (\mu _K, \nu _K)\ge \frac{1}{3}\).

Definition 2.5

(See [10]) For any \(J, K\in IFS (X)\), the correlation coefficient is given by

$$\begin{aligned} P (J, K)=\frac{C (J, K)}{\sqrt{T(J)\cdot T(K)}} \end{aligned}$$
(2.5)

where \( C (J, K)=\sum _{i=1}^m (\mu _J (g_i) \mu _K (g_i)+\nu _J (g_i) \nu _K (g_i))\) is the correlation of the two IFSs J and K, and \(T (J)=\sum _{i=1}^m (\mu _J (g_i)^2+\nu _J (g_i)^2)\) and \(T (K)=\sum _{i=1}^m (\mu _K (g_i)^2+\nu _K (g_i)^2)\) are their informational intuitionistic energies, respectively. Here \(\mu _J (g_i)\) and \(\nu _J (g_i)\) denote the membership and non-membership degrees of \(g_i\) in J, and \(\mu _K (g_i)\) and \(\nu _K (g_i)\) those of \(g_i\) in K.
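Equation (2.5) transcribes directly into code; the following Python sketch (ours, with illustrative names) computes C, T and P for IFSs encoded as element → (μ, ν) dictionaries:

```python
import math
from typing import Dict, Tuple

IFS = Dict[str, Tuple[float, float]]

def correlation(J: IFS, K: IFS) -> float:
    """C(J, K): sum of mu_J*mu_K + nu_J*nu_K over the universe."""
    return sum(J[g][0] * K[g][0] + J[g][1] * K[g][1] for g in J)

def energy(J: IFS) -> float:
    """Informational intuitionistic energy T(J): sum of mu^2 + nu^2."""
    return sum(mu ** 2 + nu ** 2 for mu, nu in J.values())

def corr_coeff(J: IFS, K: IFS) -> float:
    """Gerstenkorn-Manko correlation coefficient P(J, K) of Eq. (2.5)."""
    return correlation(J, K) / math.sqrt(energy(J) * energy(K))

J = {"g1": (0.5, 0.3), "g2": (0.6, 0.2)}
print(corr_coeff(J, J))  # 1.0 (up to rounding), consistent with property (iii)
```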

The correlation coefficient of two IFSs J and K satisfies the following properties:

  1. (i).

    \(\displaystyle 0\le P (J, K)\le 1\).

  2. (ii).

    \(P (J, K)=P (K, J)\).

  3. (iii).

    \(P (J, K)=1\) if \(J=K\).

With these concepts and ideas in mind, we now propose, in the next section, a new two-parametric IF entropy with 'R' and 'S' as parameters.

3 A New Parametric IF Information Measure

3.1 Background

We start with the probabilistic background. Let \(\triangle _n=\left\{ C= (c_1, c_2, \ldots , c_n);c_i\ge 0, \sum _{i=1}^n c_i=1\right\} , n\ge 2\), be a set of complete probability distributions. For some \(C\in \triangle _n\), Sharma and Mittal [32] studied the entropy given by

$$\begin{aligned} H_R^S (C)=\frac{1}{(S-R)}\left[ \sum _{i=1}^n\left( c_i^R-c_i^S\right) \right] , \end{aligned}$$
(3.1)

where either \(R>1; 0<S<1\) or \(0<R<1; S>1\).
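The limiting cases listed next are easy to check numerically. A short Python sketch (ours) implements (3.1) and compares it with the Tsallis and Shannon forms near the limits:

```python
import math

def sharma_mittal(c, R, S):
    """Sharma-Mittal entropy (3.1): (1/(S-R)) * sum(c_i^R - c_i^S)."""
    return sum(ci ** R - ci ** S for ci in c) / (S - R)

def tsallis(c, R):
    """Tsallis / Havrda-Charvat form (3.2): (sum(c_i^R) - 1) / (1 - R)."""
    return (sum(ci ** R for ci in c) - 1.0) / (1.0 - R)

def shannon(c):
    """Shannon entropy (3.3)."""
    return -sum(ci * math.log(ci) for ci in c if ci > 0)

c = [0.5, 0.3, 0.2]
print(sharma_mittal(c, R=2.0, S=1.0 + 1e-9), tsallis(c, 2.0))    # both ~0.62
print(sharma_mittal(c, R=1.0 + 1e-9, S=1.0 - 1e-9), shannon(c))  # both ~1.0297
```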

Limiting and Particular Cases:

  1. 1.

    If \(R=1\) or \(S=1\), then the measure (3.1) becomes

    $$\begin{aligned} H_R (C)=\frac{1}{1-R}\left( \sum _{i=1}^n c_i^R-1\right) \end{aligned}$$
    (3.2)

    which is the entropy studied by Tsallis [33] and Havrda–Charvát [13]. The only difference between the two entropies is the normalizing factor: the Havrda–Charvát [13] entropy is normalized, whereas the Tsallis entropy [33] is not.

  2. 2.

    If we take \(R=1\) and \(S\rightarrow 1\) or \(S=1\) and \(R\rightarrow 1\), then (3.1) becomes

    $$\begin{aligned} H_1^1 (C)=-\sum _{i=1}^n c_i\log (c_i) \end{aligned}$$
    (3.3)

    which is the well-known Shannon entropy [31].

In the next subsection, we generalize the concept proposed by Sharma and Mittal [32] from probabilistic settings to intuitionistic fuzzy settings to introduce a new intuitionistic fuzzy information measure.

3.2 Definition

For any \(J\in IFS (X)\), we define

$$\begin{aligned} H_R^S (J)=\frac{1}{m (S-R)}\sum _{i=1}^m\left[ \left( \mu _J (g_i)^R+\nu _J (g_i)^R+\pi _J (g_i)^R\right) -\left( \mu _J (g_i)^S+\nu _J (g_i)^S+\pi _J (g_i)^S\right) \right] , \end{aligned}$$
(3.4)

where either \(R>1, 0<S<1\) or \(0<R<1, S>1\). The measure (3.4) takes into account the membership, non-membership and hesitancy degrees of the IFS.
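For reference, a direct Python implementation of (3.4) (our sketch; representing J as a list of (μ, ν) pairs is an assumption):

```python
from typing import List, Tuple

def if_entropy(J: List[Tuple[float, float]], R: float, S: float) -> float:
    """Proposed parametric IF entropy H_R^S(J) of Eq. (3.4); pi = 1 - mu - nu.
    Requires R > 1, 0 < S < 1 (or the symmetric case 0 < R < 1, S > 1)."""
    m = len(J)
    total = 0.0
    for mu, nu in J:
        pi = 1.0 - mu - nu
        total += (mu ** R + nu ** R + pi ** R) - (mu ** S + nu ** S + pi ** S)
    return total / (m * (S - R))

print(if_entropy([(1.0, 0.0)], R=10, S=0.3) == 0.0)  # True: crisp set (postulate IF_1)
print(if_entropy([(1/3, 1/3)], R=10, S=0.3))         # ~0.2224: the maximum (postulate IF_2)
```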

Particular Cases:

  1. 1.

    If \(R=1\), then (3.4) becomes an intuitionistic fuzzy entropy studied by Hung and Yang [11]

    $$\begin{aligned} H_{HY} (J)=\left\{ \begin{array}{ll} \frac{1}{m (S-1)}\sum _{i=1}^m\left[ 1-\left( \mu _J (g_i)^S+\nu _J (g_i)^S+\pi _J (g_i)^S\right) \right] , &{} S>0\, (\ne 1);\\ -\frac{1}{m}\sum _{i=1}^m\left[ \mu _J (g_i)\log \mu _J (g_i)+\nu _J (g_i)\log \nu _J (g_i)+\pi _J (g_i)\log \pi _J (g_i)\right] , &{} S=1. \end{array}\right. \end{aligned}$$
    (3.5)
  2. 2.

    If \(R=1\) and \(\pi _J (g_i)=0\), then (3.4) becomes

    $$\begin{aligned} H_1^S (J)=\left\{ \begin{array}{ll} \frac{1}{m (S-1)}\sum _{i=1}^m\left[ 1-\left( \mu _J (g_i)^S+\nu _J (g_i)^S\right) \right] , &{} S>0\, (\ne 1);\\ -\frac{1}{m} \sum _{i=1}^m \left[ \mu _J (g_i)\log \mu _J (g_i)+\nu _J (g_i)\log \nu _J (g_i) \right] , &{} S=1. \end{array}\right. \end{aligned}$$
    (3.6)

In the next subsection, we justify the existence of the proposed measure (3.4).

3.3 Justification

Before establishing the validity of the proposed measure, we prove a property required in the proof of the justification.

Property 3.1

Under the hypothesis of postulate \(IF_4\) (Resolution), we have

$$\begin{aligned}&\left| \mu _J (g_i)-\frac{1}{3}\right| +\left| \nu _J (g_i)-\frac{1}{3}\right| +\left| \pi _J (g_i)-\frac{1}{3}\right| \nonumber \\&\quad \ge \left| \mu _K (g_i)-\frac{1}{3}\right| +\left| \nu _K (g_i)-\frac{1}{3}\right| +\left| \pi _K (g_i)-\frac{1}{3}\right| \end{aligned}$$
(3.7)
$$\begin{aligned}&{\mathrm {and}} \quad \left( \mu _J (g_i)-\frac{1}{3}\right) ^2+\left( \nu _J (g_i)-\frac{1}{3}\right) ^2+\left( \pi _J (g_i)-\frac{1}{3}\right) ^2\qquad \qquad \qquad \qquad \qquad \nonumber \\&\qquad \qquad \quad \ge \left( \mu _K (g_i)-\frac{1}{3}\right) ^2+\left( \nu _K (g_i)-\frac{1}{3}\right) ^2+\left( \pi _K (g_i)-\frac{1}{3}\right) ^2 \end{aligned}$$
(3.8)

Proof

If \(\mu _J (g_i)\le \mu _K (g_i)\) and \(\nu _J (g_i)\le \nu _K (g_i)\) with \(\max \{\mu _K (g_i), \nu _K (g_i)\}\le \frac{1}{3}\), then \(\mu _J (g_i)\le \mu _K (g_i)\le \frac{1}{3}\), \(\nu _J (g_i)\le \nu _K (g_i)\le \frac{1}{3}\) and \(\pi _J (g_i)\ge \pi _K (g_i)\ge \frac{1}{3}\), which implies that (3.7) and (3.8) hold. Similarly, if \(\mu _J (g_i)\ge \mu _K (g_i)\) and \(\nu _J (g_i)\ge \nu _K (g_i)\) with \(\min \{\mu _K (g_i), \nu _K (g_i)\}\ge \frac{1}{3}\), then (3.7) and (3.8) hold. \(\square \)

Theorem 3.2

Measure (3.4) is a valid intuitionistic fuzzy information measure.

Proof

To establish (3.4) as a valid intuitionistic fuzzy information measure, we prove that it satisfies the postulates of Definition 2.4.

\(IF_1\)::

If \(H_R^S (J)=0\) then

$$\begin{aligned} \left( \mu _J (g_i)^R+\nu _J (g_i)^R+\pi _J (g_i)^R\right) -\left( \mu _J (g_i)^S+\nu _J (g_i)^S+\pi _J (g_i)^S\right) =0. \end{aligned}$$
(3.9)

Since either \(R>1, 0<S<1\) or \(0<R<1, S>1\), for every \(x\in [0, 1]\) we have \(x^{\max (R, S)}\le x\le x^{\min (R, S)}\), with equality if and only if \(x\in \{0, 1\}\); hence (3.9) is possible only in the following cases:

1. Either \(\mu _J (g_i)=1\), i.e., \(\nu _J (g_i)=\pi _J (g_i)=0\) or

2. \(\nu _J (g_i)=1\), i.e., \(\mu _J (g_i)=\pi _J (g_i)=0\) or

3. \(\pi _J (g_i)=1\), i.e., \(\mu _J (g_i)=\nu _J (g_i)=0\).

In all the above cases, \(H_R^S (J)=0\) implies that J is a crisp set. Conversely, if J is a crisp set then either \(\mu _J (g_i)=1\) and \(\nu _J (g_i)=\pi _J (g_i)=0\) or \(\nu _J (g_i)=1\) and \(\mu _J (g_i)=\pi _J (g_i)=0\) or \(\pi _J (g_i)=1\) and \(\mu _J (g_i)=\nu _J (g_i)=0\). This implies that

$$\begin{aligned} \left( \mu _J (g_i)^R+\nu _J (g_i)^R+\pi _J (g_i)^R\right) -\left( \mu _J (g_i)^S+\nu _J (g_i)^S+\pi _J (g_i)^S\right) =0. \end{aligned}$$
(3.10)

Since each bracketed difference vanishes, \(H_R^S (J)=0\). Hence, \(H_R^S (J)=0\) if and only if J is a crisp set.

\(IF_2\)::

Since \(\mu _J (g_i)+\nu _J (g_i)+\pi _J (g_i)=1\), to obtain the maximum value of the intuitionistic fuzzy entropy \(H_R^S (J)\), we set \(\phi (\mu _J, \nu _J, \pi _J)=\mu _J (g_i)+\nu _J (g_i)+\pi _J (g_i)-1\) and, introducing the Lagrange multiplier \(\lambda \), consider

$$\begin{aligned} \varPhi (\mu _J, \nu _J, \pi _J)=H_R^S (\mu _J, \nu _J, \pi _J)+\lambda \phi (\mu _J, \nu _J, \pi _J). \end{aligned}$$
(3.11)

To find the maximum value of \(H_R^S (J)\), we differentiate (3.11) partially with respect to \(\mu _J, \nu _J, \pi _J\) and \(\lambda \) and equate the derivatives to zero, which gives \(\mu _J (g_i)=\nu _J (g_i)=\pi _J (g_i)=\frac{1}{3}\). It may be noted that all the first-order partial derivatives vanish if and only if \(\mu _J (g_i)=\nu _J (g_i)=\pi _J (g_i)=\frac{1}{3}\); hence this is the only stationary point of \(H_R^S (J)\). Next, to prove that \(H_R^S (J)\) is a concave function of \(J\in {\textit{IFS}}(X)\), we calculate its Hessian at the stationary point. The Hessian of \(H_R^S (J)\) is given by

$$\begin{aligned} {\hat{H}}=\frac{1}{m (S-R)} \left( \begin{array}{ccc} p &{}\quad 0 &{}\quad 0\\ 0 &{}\quad p &{}\quad 0\\ 0 &{}\quad 0 &{}\quad p\\ \end{array} \right) , \end{aligned}$$
(3.12)

where \(p=R(R-1)3^{(2-R)}-S(S-1)3^{(2-S)}\). For all R, S such that \( R>1, 0<S<1\) or \(0<R<1, S>1\), the factor \(\frac{p}{m(S-R)}\) is negative, so \({\hat{H}}\) is a negative definite matrix and hence \(H_R^S (J)\) is a concave function attaining its maximum value at the point \(\mu _J (g_i)=\nu _J (g_i)=\pi _J (g_i)=\frac{1}{3}\) (a numerical spot-check is given after the proof).

\(IF_3\)::

It is clear from the definition that \(H_R^S (J)=H_R^S (J^c)\), since the expression (3.4) is symmetric in \(\mu _J\) and \(\nu _J\).

\(IF_4\)::

Let J be sharper than K in the sense of \(IF_4\). If \(\max \{\mu _K (g_i), \nu _K (g_i)\}\le \frac{1}{3}\), then \(\mu _J (g_i)\le \mu _K (g_i)\le \frac{1}{3}\) and \(\nu _J (g_i)\le \nu _K (g_i)\le \frac{1}{3}\), which implies \(\pi _J (g_i)\ge \pi _K (g_i)\ge \frac{1}{3}\). Hence, by Property 3.1 and the concavity of \(H_R^S\), we conclude that \(H_R^S (J)\le H_R^S (K)\). Similarly, if \(\min \{\mu _K (g_i), \nu _K (g_i)\}\ge \frac{1}{3}\), then \(\mu _J (g_i)\ge \mu _K (g_i)\ge \frac{1}{3}\) and \(\nu _J (g_i)\ge \nu _K (g_i)\ge \frac{1}{3}\), and Property 3.1 again yields \(H_R^S (J)\le H_R^S (K)\).

Hence, \(H_R^S (J)\) satisfies all the postulates of intuitionistic fuzzy entropy, and therefore \(H_R^S (J)\) is a valid intuitionistic fuzzy information measure. \(\square \)
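As announced above, the maximality argument can be spot-checked numerically. The following sketch (ours) verifies that the diagonal Hessian entry \(p/(m(S-R))\) of (3.12) is negative for an admissible pair (R, S) and that a grid search over the simplex attains the maximum of the per-element entropy at \(\mu =\nu =\pi =\frac{1}{3}\):

```python
import itertools

R, S = 10.0, 0.3

# p from Eq. (3.12); p/(m(S-R)) < 0 for R > 1, 0 < S < 1, so the Hessian is negative definite
p = R * (R - 1) * 3 ** (2 - R) - S * (S - 1) * 3 ** (2 - S)
print(p / (S - R) < 0)  # True

def h(mu, nu):
    """Single-element version of (3.4), i.e. m = 1."""
    pi = 1.0 - mu - nu
    return ((mu ** R + nu ** R + pi ** R) - (mu ** S + nu ** S + pi ** S)) / (S - R)

# Grid search over the simplex mu + nu + pi = 1 in steps of 0.01
grid = ((i / 100, j / 100) for i, j in itertools.product(range(101), repeat=2) if i + j <= 100)
print(max(grid, key=lambda t: h(*t)))  # (0.33, 0.33): mu = nu = pi ~ 1/3
```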

4 Properties of the Proposed Measure

Now, we discuss some major properties of the proposed information measure.

Theorem 4.1

Let J and K be two IFSs defined on \(X=\{g_1, g_2, \ldots , g_m\}\), where \(J=\left\{ \langle g_i, \mu _J (g_i), \nu _J (g_i)\rangle \mid g_i\in X\right\} \) and \(K=\left\{ \langle g_i, \mu _K (g_i), \nu _K (g_i)\rangle \mid g_i\in X\right\} \), such that for all \(g_i\in X\) either \(J\subseteq K\) or \(J\supseteq K\); then,

$$\begin{aligned} H_R^S (J\cup K)+H_R^S (J\cap K)= H_R^S (J)+H_R^S (K). \end{aligned}$$
(4.1)

Proof

Let us partition X into two parts \(X_1\) and \(X_2\), such that

$$\begin{aligned} X_1=\{g_i\in X: J\subseteq K\}, \quad X_2=\{g_i\in X: J\supseteq K\}. \end{aligned}$$
(4.2)

That is, for all \(g_i\in X_1\),

$$\begin{aligned} \mu _J (g_i)\le \mu _K (g_i),\quad \nu _J (g_i)\ge \nu _K (g_i) \end{aligned}$$
(4.3)

and for all \(g_i\in X_2\),

$$\begin{aligned} \mu _J (g_i)\ge \mu _K (g_i),\quad \nu _J (g_i)\le \nu _K (g_i). \end{aligned}$$
(4.4)

Using (3.4), we have

$$\begin{aligned} H_R^S (J\cup K)&=\frac{1}{m (S-R)}\sum _{i=1}^m\left[ \left( \mu _{J\cup K} (g_i)^R+\nu _{J\cup K} (g_i)^R+\pi _{J\cup K} (g_i)^R\right) \right. \nonumber \\&\quad \left. -\left( \mu _{J\cup K} (g_i)^S+\nu _{J\cup K} (g_i)^S+\pi _{J\cup K} (g_i)^S\right) \right] , \end{aligned}$$
(4.5)
$$\begin{aligned} H_R^S (J\cup K)&=\frac{1}{m (S-R)}\left\{ \sum _{X_1}\left[ \left( \mu _K (g_i)^R+\nu _K (g_i)^R+\pi _K (g_i)^R\right) -\left( \mu _K (g_i)^S+\nu _K (g_i)^S+\pi _K (g_i)^S\right) \right] \right. \nonumber \\&\quad \left. +\sum _{X_2}\left[ \left( \mu _J (g_i)^R+\nu _J (g_i)^R+\pi _J (g_i)^R\right) -\left( \mu _J (g_i)^S+\nu _J (g_i)^S+\pi _J (g_i)^S\right) \right] \right\} . \end{aligned}$$
(4.6)

Similarly,

$$\begin{aligned} H_R^S (J\cap K)&=\frac{1}{m (S-R)}\left\{ \sum _{X_1}\left[ \left( \mu _J (g_i)^R+\nu _J (g_i)^R+\pi _J (g_i)^R\right) -\left( \mu _J (g_i)^S+\nu _J (g_i)^S+\pi _J (g_i)^S\right) \right] \right. \nonumber \\&\quad \left. +\sum _{X_2}\left[ \left( \mu _K (g_i)^R+\nu _K (g_i)^R+\pi _K (g_i)^R\right) -\left( \mu _K (g_i)^S+\nu _K (g_i)^S+\pi _K (g_i)^S\right) \right] \right\} . \end{aligned}$$
(4.7)

From (4.6) and (4.7), we have

$$\begin{aligned} H_R^S (J\cup K)+H_R^S (J\cap K)=H_R^S (J)+ H_R^S (K). \end{aligned}$$
(4.8)

This proves the theorem. \(\square \)

Corollary:

For any \(J\in {\textit{IFS}} (X)\) and its complement \(J^c\),

$$\begin{aligned} H_R^S (J)=H_R^S (J^c)=H_R^S (J\cup J^c)=H_R^S (J\cap J^c). \end{aligned}$$
(4.9)

5 The New MADM Method Using the Proposed IF Entropy

Consider an MADM problem with m non-inferior alternatives \(Z=(\varPhi _1, \varPhi _2, \ldots , \varPhi _m)\) and a set of n attributes \(\chi =(\chi _1, \chi _2, \ldots , \chi _n)\). Our target is to choose the most desirable \(\varPhi _i\, (i=1, 2, \ldots , m)\) satisfying \(\chi _j\, (j=1, 2, \ldots , n)\). The degrees to which a particular alternative satisfies a specific attribute, as awarded by the decision-makers, are denoted by intuitionistic fuzzy numbers (IFNs) \(\tilde{x}_{ij}= (\mu _{ij}, \nu _{ij})\), where \(\mu _{ij}\) and \(\nu _{ij}\), respectively, represent the membership and non-membership degrees satisfying \(0\le \mu _{ij}\le 1\), \(0\le \nu _{ij}\le 1\) and \(0\le \mu _{ij}+\nu _{ij}\le 1\). The \(\mu _{ij}\) and \(\nu _{ij}\) are computed by using the formula suggested by Liu and Wang [28] given by

$$\begin{aligned} \mu _{ij}=\frac{n_{\mathrm{yea}} (i, j)}{T} \quad {\mathrm {and}} \quad \nu _{ij}=\frac{n_{\mathrm{ne}} (i, j)}{T}, \end{aligned}$$
(5.1)

where \(n_{\mathrm{yea}} (i, j)\) denotes the number of decision-makers (DMs) supporting the alternative \(\varPhi _i\, (i=1, 2, \ldots , m)\) for the attribute \(\chi _j\,(j=1, 2, \ldots , n)\), \(n_{\mathrm{ne}} (i,j)\) represents the number of DMs opposing \(\varPhi _i\) for \(\chi _j\), and T denotes the total number of DMs (a small sketch of this construction follows the decision matrix below). The whole MADM problem can be compiled in the form of the IF decision matrix \(X=(\tilde{x}_{ij})_{m\times n}\) given by

$$\begin{aligned} {\mathbf{X}}=(\tilde{x}_{ij})_{m\times n}= \begin{array}{c@{\quad }c} &{} \begin{array}{cccc} \chi _1 &{}\quad \chi _2 &{}\quad \ldots &{}\quad \chi _n \end{array}\\ \begin{array}{c} \varPhi _1 \\ \varPhi _2\\ \vdots \\ \varPhi _m \end{array} &{} \left( \begin{array}{cccc} (\mu _{11}, \nu _{11}) &{}\quad (\mu _{12}, \nu _{12}) &{}\quad \ldots &{} (\mu _{1n}, \nu _{1n})\\ (\mu _{21}, \nu _{21}) &{}\quad (\mu _{22}, \nu _{22}) &{}\quad \ldots &{} (\mu _{2n}, \nu _{2n})\\ \vdots &{}\quad \vdots &{}\quad &{} \vdots \\ (\mu _{m1}, \nu _{m1}) &{}\quad (\mu _{m2}, \nu _{m2}) &{}\quad \ldots &{} (\mu _{mn}, \nu _{mn})\\ \end{array} \right) \end{array}. \end{aligned}$$
(5.2)
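As mentioned above, here is a small sketch (ours) of the Liu–Wang statistic (5.1); the vote counts below are hypothetical and are not taken from Table 1:

```python
def ifn_from_votes(n_yes: int, n_no: int, T: int):
    """Eq. (5.1): mu = n_yes / T, nu = n_no / T.
    Abstaining experts (T - n_yes - n_no) become the hesitancy degree."""
    assert 0 <= n_yes + n_no <= T
    return (n_yes / T, n_no / T)

# Hypothetical votes of T = 100 experts for one alternative over three attributes
votes = [(60, 25), (55, 30), (70, 20)]
print([ifn_from_votes(y, n, 100) for y, n in votes])
# [(0.6, 0.25), (0.55, 0.3), (0.7, 0.2)]
```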

The attribute weights play an eminent role in the solution of an MADM problem. Proper assignment of attribute weights leads to the choice of the most desirable alternative, whereas improper assignment may lead to the wrong selection. This implies that DMs must exercise due care while assigning the attribute weights. But due to the complex nature of the problem, lack of knowledge about the problem domain or time pressure, DMs may express themselves in the form of intervals instead of precise numbers. To cover such cases, we divide the process of determining the attribute weights into two parts as follows:

5.1 If Attribute Weights are Completely Unknown or Incompletely Known

When the attribute weights are unknown to us, we use the method proposed by Chen et al. [4] and Ye [37] to determine them as follows:

$$\begin{aligned} u_j=\frac{1-E_j}{n-\sum _{j=1}^n E_j}, \quad j=1, 2, \ldots , n, \end{aligned}$$
(5.3)

where \(E_j=\frac{1}{m}\sum _{i=1}^m H_R^S (\tilde{x}_{ij})\) is the mean entropy of the jth attribute and, for a single IFN \(\tilde{x}_{ij}=(\mu _{ij}, \nu _{ij})\) with \(\pi _{ij}=1-\mu _{ij}-\nu _{ij}\),

$$\begin{aligned} H_R^S \left( \tilde{x}_{ij}\right) =\frac{1}{(S-R)}\left[ \left( \mu _{ij}^R+\nu _{ij}^R+\pi _{ij}^R\right) -\left( \mu _{ij}^S+\nu _{ij}^S+\pi _{ij}^S\right) \right] , \end{aligned}$$

where either \(R>1, 0<S<1\) or \(0<R<1, S>1\).
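A compact sketch (ours) of the entropy-weight computation (5.3); the 2 × 3 decision matrix below is hypothetical:

```python
def entropy_weights(X, R=10.0, S=0.3):
    """Objective attribute weights via Eq. (5.3).
    X is an m x n matrix of (mu, nu) pairs; E_j is the mean entropy of column j."""
    m, n = len(X), len(X[0])

    def h(mu, nu):  # per-IFN entropy, the single-element case of Eq. (3.4)
        pi = 1.0 - mu - nu
        return ((mu ** R + nu ** R + pi ** R) - (mu ** S + nu ** S + pi ** S)) / (S - R)

    E = [sum(h(*X[i][j]) for i in range(m)) / m for j in range(n)]
    return [(1.0 - Ej) / (n - sum(E)) for Ej in E]

X = [[(0.6, 0.25), (0.55, 0.30), (0.70, 0.20)],
     [(0.5, 0.40), (0.65, 0.25), (0.60, 0.30)]]
u = entropy_weights(X)
print(u, sum(u))  # the weights sum to 1 by construction
```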

5.2 If the Information About Attribute Weights is Partial

In general, there are additional constraints on the attribute weight vector \(u=(u_1, u_2, \ldots , u_n)\). As discussed earlier, it may not always be possible for DMs to deliver their judgment in the form of precise numbers; in such cases, we have only partial information about the attribute weights. Let the set of known constraints on the attribute weights be denoted by P. We use the minimum entropy principle proposed by Wang and Wang [35] to determine the attribute weights in such cases as follows:

The overall entropy of the alternative \(\varPhi _i\) is given by

$$\begin{aligned} E (\varPhi _i)=\sum _{j=1}^n H_R^S \left( \tilde{x}_{ij}\right) =\frac{1}{(S-R)}\sum _{j=1}^n\left[ \left( \mu _{ij}^R+\nu _{ij}^R+\pi _{ij}^R\right) -\left( \mu _{ij}^S+\nu _{ij}^S+\pi _{ij}^S\right) \right] . \end{aligned}$$
(5.4)

Since all the alternatives compete on a fair basis, the weight coefficients corresponding to the same attribute should be equal across alternatives. Therefore, to obtain the optimal attribute weights, we construct the following programming model:

$$\begin{aligned} \min E= & {} \sum _{i=1}^m\sum _{j=1}^n u_j H_R^S \left( \tilde{x}_{ij}\right) =\frac{1}{(S-R)}\sum _{i=1}^m\sum _{j=1}^n u_j\left[ \left( \mu _{ij}^R+\nu _{ij}^R+\pi _{ij}^R\right) -\left( \mu _{ij}^S+\nu _{ij}^S+\pi _{ij}^S\right) \right] ,\nonumber \\& {} \quad {\mathrm{s.t.}}\quad \sum _{j=1}^n u_j=1, \quad u_j\in P. \end{aligned}$$
(5.5)

On solving the above model (5.5), we get the optimal weight vector \(u=(u_1, u_2, \ldots , u_n)^{\mathrm{T}}={\mathrm {arg}}\min E\) (a computational sketch is given below).
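Since the objective in (5.5) is linear in the \(u_j\) and P is typically a set of box constraints, the model reduces to a linear program. A sketch (ours) using SciPy's `linprog`; the coefficients happen to be those that arise in Example 6.2 below and are shown here purely for illustration:

```python
from scipy.optimize import linprog

# Minimize .2007*u1 + .2055*u2 + .1890*u3 subject to box constraints and sum(u) = 1
c = [0.2007, 0.2055, 0.1890]
res = linprog(c,
              A_eq=[[1.0, 1.0, 1.0]], b_eq=[1.0],
              bounds=[(0.25, 0.75), (0.35, 0.60), (0.30, 0.35)])
print(res.x)  # [0.30, 0.35, 0.35]
```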

In the next subsection, we introduce a new MADM method based on weighted correlation coefficients and proposed IF entropy.

5.3 The Proposed MADM Method

The procedure of the proposed MADM method is summarized as follows:

  1. 1.

    Determine the attribute weights using (5.3) (when the weights are completely unknown) or by solving the model (5.5) (when partial weight information is available).

  2. 2.

    Determine the positive ideal solution \((\varPhi ^+)\) and negative ideal solution \((\varPhi ^-)\) as follows:

    $$\begin{aligned} \varPhi ^+=\left( \left( \phi _1^+,\psi _1^+\right) , \left( \phi _2^+, \psi _2^+\right) , \ldots , \left( \phi _n^+, \psi _n^+\right) \right) , \end{aligned}$$
    (5.6)

    where \((\phi _j^+, \psi _j^+)=(\sup (\mu _J (g_i)), \inf (\nu _J (g_i)))=(1, 0), j=1, 2, \ldots , n\) and \(g_i\in X\).

    $$\begin{aligned} {\mathrm {and}}\quad \varPhi ^-=\left( \left( \phi _1^-,\psi _1^-\right) , \left( \phi _2^-, \psi _2^-\right) , \ldots , \left( \phi _n^-, \psi _n^-\right) \right) , \end{aligned}$$
    (5.7)

    where \((\phi _j^-, \psi _j^-)=(\inf (\mu _J (g_i)), \sup (\nu _J (g_i)))=(0, 1), j=1, 2, \ldots , n\) and \(g_i\in X\).

  3. 3.

    Using the correlation coefficient between intuitionistic fuzzy sets suggested by Gerstenkorn and Manko [10], the correlation between each alternative \(\varPhi _i\ (i=1, 2, \ldots , m)\) and the best solution \(\varPhi ^+\), with entropy weights for the criteria, can be measured by the weighted correlation coefficient

    $$\begin{aligned} CR_i \left( \varPhi ^+, \varPhi _i\right) =\frac{C \left( \varPhi ^+, \varPhi _i\right) }{\sqrt{T \left( \varPhi ^+\right) T (\varPhi _i)}} =\frac{\sum _{j=1}^n u_j \mu _{\varPhi _i} (\chi _j)}{\sqrt{\sum _{j=1}^n u_j \left( \mu _{\varPhi _i} (\chi _j)^2+\nu _{\varPhi _i} (\chi _j)^2\right) }}. \end{aligned}$$
    (5.8)

    Similarly, the correlation between each alternative \(\varPhi _i\ (i=1, 2, \ldots , m)\) and the worst solution \(\varPhi ^-\), with entropy weights for the criteria, can be measured by the weighted correlation coefficient

    $$\begin{aligned} CR_i \left( \varPhi ^-, \varPhi _i\right) =\frac{C \left( \varPhi ^-, \varPhi _i\right) }{\sqrt{T \left( \varPhi ^-\right) T (\varPhi _i)}} =\frac{\sum _{j=1}^n u_j \nu _{\varPhi _i} (\chi _j)}{\sqrt{\sum _{j=1}^n u_j \left( \mu _{\varPhi _i} (\chi _j)^2+\nu _{\varPhi _i} (\chi _j)^2\right) }}. \end{aligned}$$
    (5.9)
  4. 4.

    Compute the relative closeness coefficients as follows:

    $$\begin{aligned} S_i=\frac{CR_i \left( \varPhi ^-, \varPhi _i\right) }{CR_i \left( \varPhi ^-, \varPhi _i\right) +CR_i \left( \varPhi ^+, \varPhi _i\right) }. \end{aligned}$$
    (5.10)
  5. 5.

    Rank the alternatives according to the values of \(S_i\) in descending order; the alternative corresponding to the largest value of \(S_i\) is the best alternative (a computational sketch of Steps 2–5 follows this list).
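A compact end-to-end sketch of Steps 2–5 (ours; the decision matrix and weights below are hypothetical). Note that, by (5.8)–(5.9), only the membership degrees contribute to the correlation with \(\varPhi ^+\) and only the non-membership degrees to the correlation with \(\varPhi ^-\):

```python
import math

def weighted_corr(u, alt, ideal):
    """Weighted correlation coefficient of Eqs. (5.8)-(5.9);
    alt and ideal are lists of (mu, nu) pairs, u the attribute weights."""
    C = sum(uj * (a[0] * p[0] + a[1] * p[1]) for uj, a, p in zip(u, alt, ideal))
    T_alt = sum(uj * (a[0] ** 2 + a[1] ** 2) for uj, a in zip(u, alt))
    T_ideal = sum(uj * (p[0] ** 2 + p[1] ** 2) for uj, p in zip(u, ideal))
    return C / math.sqrt(T_alt * T_ideal)

def closeness(X, u):
    """Relative closeness coefficients (5.10) for every alternative."""
    n = len(u)
    pos = [(1.0, 0.0)] * n  # positive ideal solution (5.6)
    neg = [(0.0, 1.0)] * n  # negative ideal solution (5.7)
    return [weighted_corr(u, alt, neg) /
            (weighted_corr(u, alt, neg) + weighted_corr(u, alt, pos)) for alt in X]

X = [[(0.6, 0.25), (0.55, 0.30)],  # hypothetical alternative 1
     [(0.5, 0.40), (0.65, 0.25)]]  # hypothetical alternative 2
print(closeness(X, [0.5, 0.5]))    # rank in descending order of these values
```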

6 Illustrative Examples

Now, we illustrate the application of the proposed MADM method with the help of examples as follows:

This example is adapted from Joshi and Kumar [14].

Case 1. If Attribute Weights are Unknown.

Example 6.1

Consider an example of a construction company that wants to prepare a list of potential suppliers for supplying the raw material required for its construction works. Out of a number of quotations invited, four are shortlisted, say \(\varPhi _1, \varPhi _2, \varPhi _3\) and \(\varPhi _4\), which are to be ranked. The company has fixed three criteria, say \((\chi _1)\) quality, \((\chi _2)\) proximity to site and \((\chi _3)\) emergency stock, on the basis of which the suppliers are to be ranked. To ensure a fair selection of suppliers, a team comprising experts/decision-makers with different backgrounds, expertise and knowledge has been constituted.

The membership degrees (satisfactory degrees) \(\mu _{ij}\) and non-membership degrees (non-satisfactory degrees) \(\nu _{ij}\) for the alternatives \(\varPhi _i\) \((i=1, 2, \ldots , m)\) satisfying the attributes \(\chi _j\) \((j=1, 2, \ldots , n)\) may be obtained using the statistical method proposed by Liu and Wang [28] [taking the number of experts \(T=100\) in (5.1)]. Suppose that the responses of the decision-makers in the form of 'yes' or 'no' are distributed as given in Table 1.

Table 1 Responses of DMs

Using the formula (5.1), the IF decision matrix corresponding to Table 1 is given in Table 2.

Table 2 IF Decision Matrix (Case 1)

Now, we compute the IF information matrix corresponding to the IF decision matrix given in Table 2 using (3.4) (taking \(R=10\) and \(S=.3\)). The resultant matrix is given in Table 3.

Table 3 IF Information Matrix (Case 1)

The specific calculations are as under:

  1. 1.

    Using (5.3), the computed attribute weight vector is:

    $$\begin{aligned} u=(u_1, u_2, u_3)^{\mathrm{T}}=(.3342, .3338, .3320)^{\mathrm{T}}. \end{aligned}$$
  2. 2.

    The \(\varPhi ^+\) and \(\varPhi ^-\) are given by:

    $$\begin{aligned} \varPhi ^+= \left( \left( \phi _1^+, \psi _1^+\right) , \left( \phi _2^+, \psi _2^+\right) , \left( \phi _3^+, \psi _3^+\right) \right) =((1, 0), (1, 0), (1, 0)); \\ \varPhi ^-= \left( \left( \phi _1^-, \psi _1^-\right) , \left( \phi _2^-, \psi _2^-\right) , \left( \phi _3^-, \psi _3^-\right) \right) =((0, 1), (0, 1), (0, 1)). \end{aligned}$$
  3. 3.

    Using (5.8) and (5.9), the computed correlation coefficients are

    $$\begin{aligned} CR_1 \left( \varPhi ^+, \varPhi _1\right) =.6621, \quad CR_2 \left( \varPhi ^+, \varPhi _2\right) =.9386,\quad CR_3 \left( \varPhi ^+, \varPhi _3\right) =.8560,\quad CR_4 \left( \varPhi ^+, \varPhi _4\right) =.9254;\\ CR_1 \left( \varPhi ^-, \varPhi _1\right) =.6827, \quad CR_2 \left( \varPhi ^-, \varPhi _2\right) =.6910,\quad CR_3 \left( \varPhi ^-, \varPhi _3\right) =.6883,\quad CR_4 \left( \varPhi ^-, \varPhi _4\right) =.7001. \end{aligned}$$
  4. 4.

    Computed values of relative closeness coefficients using (5.10) are given by

    $$\begin{aligned} S_1=.5077; \quad S_2=.4240;\quad S_3=.4457;\quad S_4=.4307. \end{aligned}$$
    (6.1)

Arranging the alternatives in descending order of the values of \(S_i\), we have \(\varPhi _1\succ \varPhi _3\succ \varPhi _4\succ \varPhi _2\), with \(\varPhi _1\) as the most desirable alternative.

Why Is the New MADM Method Needed?

In this part, we justify the need for the proposed method. To do so, we first describe the conventional distance-based TOPSIS method. The procedural steps are given below.

  1. 1.

    Step 1 and Step 2 of conventional TOPSIS method are same as that of proposed method.

  2. 2.

    In Step 3, we calculate the distances of \(\varPhi _i\ (i=1, 2, \ldots , m)\) from \(\varPhi ^+\) and \(\varPhi ^-\) by using some distance measure. Let these distances be denoted by \(D_i (\varPhi ^+, \varPhi _i)\) and \(D_i (\varPhi ^-, \varPhi _i)\).

  3. 3.

    In Step 4, we compute the relative closeness coefficients say \(S_i\) as follows:

    $$\begin{aligned} S_i=\frac{D_i \left( \varPhi ^-, \varPhi _i\right) }{D_i \left( \varPhi ^+, \varPhi _i\right) +D_i \left( \varPhi ^-, \varPhi _i\right) }. \end{aligned}$$
    (6.2)
  4. 4.

    In Step 5, the alternatives are ranked according to the values of \(S_i\)s in descending order.

The only difference between the proposed MADM method (Sect. 5.3) and the conventional TOPSIS method lies in Step 3: in the conventional TOPSIS method, weighted distance measures are used to compute the separation of the alternatives from the ideal solutions, whereas the proposed MADM method uses weighted correlation coefficients. Now, we recompute Example 6.1 using different distance measures and observe the difference in outputs (a code sketch comparing the two measures follows the list below).

  1. 1.

    First, we use the weighted Hamming distance measure between IFSs given by

    $$\begin{aligned} HD (J, K)=\frac{1}{2}\sum _{j=1}^n\left[ u_j\left( \left| \mu _J (\chi _j)-\mu _K (\chi _j)\right| +\left| \nu _J (\chi _j)-\nu _K (\chi _j)\right| +\left| \pi _J (\chi _j)-\pi _K (\chi _j)\right| \right) \right] ; \end{aligned}$$
    (6.3)

    to compute the distances of the \(\varPhi _i\) from \(\varPhi ^+\) and \(\varPhi ^-\). The distances so obtained are given by

    $$\begin{aligned} D_1 \left( \varPhi ^+, \varPhi _1\right)&=.6163,\quad D_2 \left( \varPhi ^+, \varPhi _2\right) =.3832,\quad D_3 \left( \varPhi ^+, \varPhi _3\right) =.4834,\quad D_4 \left( \varPhi ^+, \varPhi _4\right) =.4162; \end{aligned}$$
    (6.4)
    $$\begin{aligned} D_1 \left( \varPhi ^-, \varPhi _1\right)&=.6003,\quad D_2 \left( \varPhi ^-, \varPhi _2\right) =.7832, \quad D_3 \left( \varPhi ^-, \varPhi _3\right) =.6998,\quad D_4 \left( \varPhi ^-, \varPhi _4\right) =.8333. \end{aligned}$$
    (6.5)

    The calculated values of relative closeness coefficients \(S_i (i=1, 2, 3, 4)\) using (6.2) are given by

    $$\begin{aligned} S_1=.4934;\quad S_2=.6715;\quad S_3=.5914;\quad S_4=.6669. \end{aligned}$$
    (6.6)

    The preferential sequence based on (6.6) is given by

    $$\begin{aligned} \varPhi _2\succ \varPhi _4\succ \varPhi _3\succ \varPhi _1 \end{aligned}$$
    (6.7)

    with \(\varPhi _2\) as the most suitable option.

  2. 2.

    Now, we recompute the above example using the distance measure between IFSs proposed by Wang and Xin [36], given by

    $$\begin{aligned} WX (J, K)=\sum _{j=1}^n u_j \left( \begin{array}{l} \frac{\left| \mu _J (\chi _j)-\mu _K (\chi _j)\right| +\left| \nu _J (\chi _j)-\nu _K (\chi _j)\right| }{4}\\ \quad +\frac{\max \left( \left| \mu _J (\chi _j)-\mu _K (\chi _j)\right| , \left| \nu _J (\chi _j)-\nu _K (\chi _j)\right| \right) }{2} \end{array} \right) . \end{aligned}$$
    (6.8)

    The distances obtained by using (6.8) are given by

    $$\begin{aligned} D_1 \left( \varPhi ^+, \varPhi _1\right)&=.5622, \quad D_2 \left( \varPhi ^+, \varPhi _2\right) =.3416, \quad D_3 \left( \varPhi ^+, \varPhi _3\right) =.4376, \quad D_4 \left( \varPhi ^+, \varPhi _4\right) =.3538; \end{aligned}$$
    (6.9)
    $$\begin{aligned} D_1 \left( \varPhi ^-, \varPhi _1\right)&=.5461, \quad D_2 \left( \varPhi ^-, \varPhi _2\right) =.7416, \quad D_3 \left( \varPhi ^-, \varPhi _3\right) =.6540, \quad D_4 \left( \varPhi ^-, \varPhi _4\right) =.7709. \end{aligned}$$
    (6.10)

    The computed values of relative closeness coefficients \(S_i (i=1, 2, 3, 4)\) using (6.2) are given by

    $$\begin{aligned} S_1=.4928; \quad S_2=.6846;\quad S_3=.5991;\quad S_4=.6854. \end{aligned}$$
    (6.11)

    Thus, the sequence of preferences is given by

    $$\begin{aligned} \varPhi _4\succ \varPhi _2\succ \varPhi _3\succ \varPhi _1 \end{aligned}$$
    (6.12)

    with \(\varPhi _4\) as the best alternative.

  3. 3.

    If we compute the above example by using the MADM method proposed by Ye [37], the sequence of preferences so obtained is given by

    $$\begin{aligned} \varPhi _2\succ \varPhi _4\succ \varPhi _3\succ \varPhi _1. \end{aligned}$$
    (6.13)

    But the method proposed by Ye [37] considers the correlation of the alternatives with the positive ideal solution only.
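As referenced above, here is a sketch (ours) of the conventional distance-based TOPSIS with the two distance measures (6.3) and (6.8); the decision matrix and weights are hypothetical, the point being only that different measures can induce different rankings:

```python
def hamming(u, J, K):
    """Weighted Hamming distance of Eq. (6.3); J, K are lists of (mu, nu)."""
    d = 0.0
    for uj, (m1, n1), (m2, n2) in zip(u, J, K):
        p1, p2 = 1 - m1 - n1, 1 - m2 - n2
        d += uj * (abs(m1 - m2) + abs(n1 - n2) + abs(p1 - p2))
    return d / 2

def wang_xin(u, J, K):
    """Wang-Xin distance of Eq. (6.8)."""
    d = 0.0
    for uj, (m1, n1), (m2, n2) in zip(u, J, K):
        dm, dn = abs(m1 - m2), abs(n1 - n2)
        d += uj * ((dm + dn) / 4 + max(dm, dn) / 2)
    return d

def topsis(X, u, dist):
    """Conventional TOPSIS: closeness coefficients (6.2) under a distance measure."""
    n = len(u)
    pos, neg = [(1.0, 0.0)] * n, [(0.0, 1.0)] * n
    return [dist(u, alt, neg) / (dist(u, alt, pos) + dist(u, alt, neg)) for alt in X]

X = [[(0.6, 0.25), (0.55, 0.30)], [(0.5, 0.40), (0.65, 0.25)]]  # hypothetical
u = [0.5, 0.5]
print(topsis(X, u, hamming))   # closeness under (6.3)
print(topsis(X, u, wang_xin))  # closeness under (6.8); compare the induced rankings
```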

A Graphical Analysis: To gain a clearer understanding of the results discussed above, we represent them graphically as follows:

Fig. 1 A comparative ranking of suppliers

From the above discussion, it is clear that the output of the conventional TOPSIS method varies with the distance measure used. Therefore, it is natural to seek decision-making methods whose output does not depend on the distance measure used and remains consistent. The proposed MADM method is a step in this direction; this justifies its need (Fig. 1).

Case 2. If Attribute Weights are Partially Known

Example 6.2

This example is adapted from Joshi and Kumar [14]. Consider an example of a finance company that wants to invest funds. The company has three alternatives, say (1) share market \((\varPhi _1)\), (2) mutual funds \((\varPhi _2)\) and (3) real estate \((\varPhi _3)\). Three attributes have been fixed by the company to make the optimum choice, say \((\chi _1)\) highest returns, \((\chi _2)\) emergency withdrawal and \((\chi _3)\) security. To choose the best option, the company has constituted a committee of experts.

The intuitionistic fuzzy decision matrix (calculated as in Case 1) provided by the decision-makers is given in Table 4.

Table 4 The IF Decision Matrix (Case 2)

The information matrix corresponding to the intuitionistic fuzzy decision matrix of Table 4 is given in Table 5.

Table 5 The IF Information Matrix (Case 2)

Let the set of available weight information be given by the following set

$$\begin{aligned} P=\{.25\le u_1\le .75,\ .35\le u_2\le .60,\ .30\le u_3\le .35\}. \end{aligned}$$

The computational Steps are as follows:

  1. 1.

    Using (5.5), we construct the following programming model:

    $$\begin{aligned} \min E=.2007 u_1 +.2055 u_2+.1890 u_3, \end{aligned}$$
    (6.14)
    $$ {\mathrm{s.t.}}\quad \left\{ \begin{array}{l} .25\le u_1\le .75 \\ .35\le u_2\le .60\\ .30\le u_3\le .35\\ u_1+u_2+u_3=1. \end{array}\right.$$
    (6.15)

    Solving the above programming model with the help of MATLAB software, the weight vector so obtained is given by:

    $$\begin{aligned} u=(.30, .35, .35)^{\mathrm{T}}. \end{aligned}$$
    (6.16)
  2. 2.

    The \(\varPhi ^+\) and \(\varPhi ^-\) are given by:

    $$\begin{aligned} \varPhi ^+= \left( \left( \phi _1^+, \psi _1^+\right) , \left( \phi _2^+, \psi _2^+\right) , \left( \phi _3^+, \psi _3^+\right) \right) =((1, 0), (1, 0), (1, 0)); \end{aligned}$$
    (6.17)
    $$\begin{aligned} \varPhi ^-= \left( \left( \phi _1^-, \psi _1^-\right) , \left( \phi _2^-, \psi _2^-\right) , \left( \phi _3^-, \psi _3^-\right) \right) =((0, 1), (0, 1), (0, 1)). \end{aligned}$$
    (6.18)
  3. 3.

    Using (5.8) and (5.9), the computed correlation coefficients are

    $$\begin{aligned} CR_1 \left( \varPhi ^+, \varPhi _1\right) =.9575,\quad CR_2 \left( \varPhi ^+, \varPhi _2\right) =.8705,\quad CR_3 \left( \varPhi ^+, \varPhi _3\right) =.8698; \\ CR_1 \left( \varPhi ^-, \varPhi _1\right) =.6724,\quad CR_2 \left( \varPhi ^-, \varPhi _2\right) =.6233,\quad CR_3 \left( \varPhi ^-, \varPhi _3\right) =.5967. \end{aligned}$$
  4. 4.

    The computed values of relative closeness coefficients \(S_i\)’s are

    $$\begin{aligned} S_1=.4125; \quad S_2=.4173; \quad S_3=.4069. \end{aligned}$$
    (6.19)

Ranking the alternatives in descending order of the values of \(S_i\), we get the sequence \(\varPhi _2\succ \varPhi _1\succ \varPhi _3\), and \(\varPhi _2\) is the best alternative.

If we solve the above example by using the methods proposed by Li [27] and by Chen and Tsao [5], we get \(\varPhi _1\) as the best alternative. This difference in output is due to the difference in the distance measures used in the calculations.

7 Conclusions

IFSs play an important role in solving MADM problems. By using IFSs, more accurate attribute weights can be determined from the incomplete and sometimes confusing information obtained from the decision-makers. In this paper, we have introduced a new parametric IF entropy. In addition, a new MADM method based on the proposed IF entropy measure and weighted correlation coefficients has been proposed, and two numerical examples have been used to illustrate it. The techniques offered in this paper can efficiently help the decision-maker in assigning the attribute weights. In the future, we intend to extend the proposed intuitionistic fuzzy entropy to interval-valued IFSs; the results will be reported elsewhere.