1 Introduction

To cope with vague and uncertain information appropriately, Zadeh (1965) proposed the theory of fuzzy sets (FSs) as an extension of classical sets, characterized by a membership degree (\(\mu\)). Before the development of FS theory, probability was the only tool for quantifying uncertainty, but probability requires the uncertainty to be expressed in precise numbers. Indistinct terms such as high speed, very rich, and very intelligent can be quantified by FS theory. Therefore, FSs are more useful than classical sets and probability theory for expressing uncertainty. Owing to this flexibility, FSs find applications in decision-making, risk evaluation, etc. Various valuable generalizations (Arya and Kumar 2020a) of fuzzy set theory have been proposed, but FSs are not sufficient to handle the hesitancy degree.

Intuitionistic fuzzy set (IFS) theory was proposed by Atanassov (1986) as a prominent extension of FSs. IFSs are characterized by two degrees, membership \((\mu \in [0,1])\) and non-membership \((\nu \in [0,1])\), satisfying \(0 \le (\mu + \nu ) \le 1\). Atanassov added a third factor to the existing framework of FSs, called the ‘intuitionistic index’ \((\pi )\), with the equation \(\mu + \nu + \pi =1\). Ratika and Kumar (2020) introduced a measure to compute the degree of distance between intuitionistic fuzzy sets based on Renyi–Tsallis entropy. Thus, IFSs are more flexible for handling uncertain real-life problems. Consider an example on voting. During voting, the section of people that votes in favor of the government comprises the membership degree, and the section that votes against the government constitutes the non-membership degree. In reality, another section of people exists that is undecided whether to vote for or against the government. In the FSs proposed by Zadeh (1968), this section of people does not get due representation; it is exactly this section that Atanassov's intuitionistic index \((\pi )\) accounts for. This improved the adaptability of FSs for real-world problems.

Weight determination is a key issue in fuzzy MCDM, as considered by different authors (Arya and Kumar 2020, 2020c; Fahmi et al. 2019; Joshi and Kumar 2018). Among the available methods, the entropy weight method is the most frequently employed; entropy has thus become a central topic in FS theory. Several researchers have considered intuitionistic fuzzy entropy (IFE) and suggested different formulas. A new definition and formula of entropy for IFSs was introduced by Zhu and Li (2016). Bhandari and Pal (1975) were the first to suggest a generalized entropy. A cosine function-based IFE was suggested by Liu and Ren (2015), and logarithmic function-based entropies were suggested by Mao et al. (2013), Xiong et al. (2017) and Mishra et al. (2017). Joshi and Kumar (2018) implemented a parameter-based entropy measure; by varying the parameter, the measure is both flexible and consistent.

Multi-criteria decision-making (MCDM) techniques are useful for choosing the best alternative among multiple decision alternatives; MCDM is therefore an important area where IFSs have a wide scope of applications. Many theories and tools have been developed by different authors for solving MCDM problems (Chen and Chang 2016; Chen et al. 2016; Zeng et al. 2019; Rani et al. 2019). Opricovic (1998) introduced the VIKOR (VIsekriterijumska Optimizacija i Kompromisno Resenje) method, Benayoun et al. (1966) suggested ELECTRE (ELimination Et Choice Translating REality), and the PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations) technique was introduced by Brans and Mareschel (1984). Each method has its own advantages and drawbacks. Citing the drawbacks of PROMETHEE and ELECTRE, Opricovic introduced an extended VIKOR method. The VIKOR method (Opricovic and Tzeng 2007; Arya and Kumar 2020c) provides a compromise solution, which makes it suitable for practical applications. The VIKOR methods employed by researchers so far are based on distance measures, but Joshi and Kumar (2018) observed that the output of distance measure-based decision-making methods may vary with the distance measure used. Many decision-making methods use score functions or accuracy functions to decide the most suitable alternative; however, Ye (2010) argued that accuracy functions or score functions do not provide adequate information about alternatives. To the best of our knowledge, no study has been conducted so far on a weighted correlation coefficient-based VIKOR approach. Therefore, in this paper, we propose a scale-invariant entropy measure together with a weighted correlation coefficient-based VIKOR approach.

Further, the criteria weight vector plays a decisive part in resolving MCDM problems; selecting the most desirable alternative is feasible only with a proper evaluation of the criteria weights. Surveying the literature, Chen and Li (2010) split criteria weighting into objective and subjective methods. In subjective methods, the criteria weights are decided purely according to the preferences of the decision-makers (DMs) (Arya and Kumar 2020; Hwang and Lin 1987; Joshi and Kumar 2018), while in objective methods, the criteria weights are determined by solving mathematical programming models (Jamkhaneh 2018; Mahmood et al. 2018), which neglect the subjective judgment of the DMs. The entropy method is another reliable and trusted approach in the objective weight evaluation category. This paper adapts an entropy method to determine the criteria weights and to reflect both the objective and the subjective information of the DMs; such an integrated method is more meaningful and desirable for determining criteria weights. This study makes the following contributions: we develop a new information measure for IFSs, called the logarithmic intuitionistic fuzzy (IF) information measure, based on Renyi's concept. The proposed measure has some valuable properties, which are proved to establish its applicability. Next, we study a novel MCDM problem using weighted correlation coefficients where the weight information on the criteria is completely known or partially unknown. Finally, a VIKOR-based ranking method is implemented to rank the alternatives.

The motivation of this paper is as follows: to compute the distance between ideal solutions and alternatives, different distance measures are generally used. For instance, Xiao and Wang (2017) proposed an intuitionistic fuzzy VIKOR method based on a distance measure and calculated the group utility and individual regret by that distance measure, whereas we evaluate the correlation between the alternatives and the positive and negative ideal solutions to decide the best alternative. We do not use a distance measure because, in some special cases, it cannot successfully determine the closeness of each alternative. In this paper, we introduce a new MCDM method using a weighted correlation coefficient-based VIKOR approach. For measuring the degree of correlation, we use the correlation coefficient for IFSs, and a scale-invariant entropy measure is further proposed for computing the uncertainty.

Since the continuous advancement of science requires experimentation with ever more complex physical systems of nature and the analysis of the complex data structures arising from them, there has always been a quest for new, more general measures of uncertainty that could explain such complex phenomena more accurately. With this view, several one- and two-parameter generalizations of the entropy functional have been proposed in the literature, although not all of them have significant applications with experimental validity.

The major contributions of our proposed work are as follows:

  • We propose a new entropy measure that is scale invariant for complete probability distributions, whereas other entropy measures do not satisfy the scale-invariance property. We refer to this generalized entropy as the Logarithmic \(\beta\)-Norm Entropy (LNE).

  • We also propose an intuitionistic fuzzy scale-invariant entropy and prove its validity, whereas other intuitionistic fuzzy measures do not satisfy the scale-invariance property. We call this entropy the Logarithmic \(\beta\)-Norm intuitionistic fuzzy entropy.

  • Thereafter, we suggest a correlation coefficient-based VIKOR approach for ranking and for measuring uncertainty in place of a distance measure. To the best of our knowledge, a correlation coefficient-based VIKOR approach has not been used to date.

  • Finally, we compare our proposed work with existing distance measure-based VIKOR methods and conclude that the proposed work attains the best result.

To achieve the proposed objectives, the present paper is arranged as follows. Section 1 describes the contributions of earlier researchers in this field, the origin of our motivation and the goals to be attained. In Sect. 2, the existing literature related to the proposed work is reviewed and a new scale-invariant entropy measure from the probabilistic viewpoint is defined; a new intuitionistic fuzzy scale-invariant measure analogous to the well-known Renyi and Tsallis entropies is then proposed and validated. Section 3 recalls some basic definitions. In Sect. 4, we propose a scale-invariant information measure for intuitionistic fuzzy sets and prove its validity. In Sect. 5, we introduce a new MCDM technique that builds on the newly proposed measure and a weighted correlation coefficient-based VIKOR approach. The application of the suggested MCDM method to actual problems is described in Sect. 6 with the help of an example on a supplier selection problem. The last section draws the conclusions.

2 Scale-invariant generalized information measure

Let \(\Gamma _n=\{ \Omega =(\varpi _1,\varpi _2,...,\varpi _n):\varpi _i\ge 0, i= 1,2,...,n; \,\,\sum _{i=1}^{n}\varpi _i=1\},n \ge 2\) be the set of discrete probability distributions. For any probability distribution \(\Omega =(\varpi _1,\varpi _2,...,\varpi _n) \in \Gamma _n\), Shannon defined an information measure given by

$$\begin{aligned} H_{Shannon}(\Omega ) = -\sum _{i=1}^{n}(\varpi _i)\log (\varpi _i). \end{aligned}$$
(1)

Renyi (1961) generalized Shannon's measure as:

$$\begin{aligned} H_{Renyi}(\Omega ) = {\left\{ \begin{array}{ll} \frac{1}{1-\beta }\left[ \log \left( \sum _{i=1}^{n}\varpi _i^{\beta }\right) \right] , \beta >0(\ne 1);\\ -\sum _{i=1}^{n}(\varpi _i)\log (\varpi _i), \beta = 1. \end{array}\right. } \end{aligned}$$
(2)

Remark 1

1. If \(\beta =2\) in (2), then (2) becomes the Renyi index or collision entropy

$$\begin{aligned} i.e., \quad H_{Renyi}^{\beta =2}(\Omega )=-\log \left( \sum _{i=1}^{n}\varpi _{i}^{2}\right) . \end{aligned}$$
(3)

2. If \(\beta \rightarrow \infty\), the Renyi entropy \(H_\beta\) converges to the min-entropy \(H_\infty\):

$$\begin{aligned} H_\infty (\Omega ) = -\log (\max _i(\varpi _i)). \end{aligned}$$
(4)
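As a quick numerical illustration of (1)–(4), the following sketch computes the Shannon and Renyi entropies together with the two special cases above (the function names are ours, for illustration only):

```python
import math

def shannon(p, base=2):
    """Shannon entropy, Eq. (1)."""
    return -sum(x * math.log(x, base) for x in p if x > 0)

def renyi(p, beta, base=2):
    """Renyi entropy, Eq. (2); beta > 0, with beta = 1 giving Shannon."""
    if beta == 1:
        return shannon(p, base)
    return math.log(sum(x ** beta for x in p), base) / (1 - beta)

p = [0.5, 0.25, 0.25]
collision = renyi(p, 2)             # Renyi index / collision entropy, Eq. (3)
min_entropy = -math.log(max(p), 2)  # limiting case beta -> infinity, Eq. (4)
```

For this distribution the collision entropy lies below the Shannon entropy and above the min-entropy, in line with the fact that \(H_\beta\) is nonincreasing in \(\beta\).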

However, in the information theory literature, there exist several versions of Shannon's entropy (1948). We introduce a new information measure \(_\beta H_\mathrm{new}(\Omega )\) : \(\Gamma _n \rightarrow \mathfrak {R}^+\) (the set of positive real numbers), \(n \ge 2\), as follows:

$$\begin{aligned} \begin{aligned} _\beta H_\mathrm{new}(\Omega ){}&= \frac{1}{\beta ^{-1}-\beta }\left[ \log \left( \sum _{i=1}^{n}\varpi _i^\beta \right) ^{\frac{1}{\beta }} - \log \left( \sum _{i=1}^{n}\varpi _i^{\frac{1}{\beta }}\right) ^{\beta }\right] \\&= \frac{1}{\beta ^{-1}-\beta }\left[ \log \frac{\left( \sum _{i=1}^{n}\varpi _i^\beta \right) ^{\frac{1}{\beta }}}{\left( \sum _{i=1}^{n}\varpi _i^{\frac{1}{\beta }}\right) ^{\beta }}\right] \\ _\beta H_\mathrm{new}(\Omega ){}&= \frac{1}{\beta ^{-1}-\beta }\left[ \log \left( \frac{\Vert \Omega \Vert _{\beta }}{\Vert \Omega \Vert _{\beta ^{-1}}}\right) \right] \\&= \frac{1}{\beta ^{-1}-\beta }\left[ \log \Vert \Omega \Vert _{\beta } - \log \Vert \Omega \Vert _{\beta ^{-1}}\right] , \end{aligned} \end{aligned}$$
(5)

where \(\Vert \Omega \Vert _{\beta }\) denotes the \(\beta\)-Norm of the function \(\Omega =\{\varpi _1,\varpi _2,...,\varpi _n\}\) defined as \(\Vert \Omega \Vert _{\beta } = \left( \sum _{i=1}^{n}\varpi _i^\beta \right) ^{\beta ^{-1}}\).

3. In the limit \(\beta \rightarrow 1\), (5) recovers the Shannon entropy.

4. \(_\beta H_\mathrm{new}(\Omega ) = _{\beta ^{-1}} H_\mathrm{new}(\Omega )\), that is, (5) is symmetric with respect to \((\beta ,\beta ^{-1})\).

Several generalized versions of the existing entropies (Shannon, Renyi and Tsallis) are available in the literature. Noting its similarity with the \(\beta\)-norm, we denote the generalized entropy given in (5) as the Logarithmic \(\beta\)-Norm entropy for \(\Omega \in \Gamma _n\). It is interesting to note that \(_\beta H_\mathrm{new}(\Omega )\) is symmetric in the tuning parameters \((\beta ,\beta ^{-1})\).

The major advantage of the Logarithmic \(\beta\)-Norm entropy in (5) is its scale-invariance property:

\(_\beta H_\mathrm{new}(c\; \Omega ) = _\beta H_\mathrm{new}(\Omega )\) for any \(\Omega \in \Gamma _n\) and \(c , \beta > 0\). This striking property is satisfied neither by the Shannon entropy nor by its existing generalizations such as the Renyi and Tsallis entropies. Therefore, the Logarithmic \(\beta\)-Norm entropy appears to be a parametric generalization of the Shannon and Renyi entropies that is scale invariant over the complete probability distributions \(\Gamma _n\).
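The scale-invariance, the \((\beta ,\beta ^{-1})\)-symmetry and the additivity over independent combinations are all easy to verify numerically; a minimal sketch under our own naming (`lne` is not from the paper; base-2 logarithms):

```python
import math

def lne(p, beta):
    """Logarithmic beta-Norm Entropy, Eq. (5), written via the beta-norms of p."""
    norm = lambda q, b: sum(x ** b for x in q) ** (1.0 / b)
    return (math.log2(norm(p, beta)) - math.log2(norm(p, 1.0 / beta))) / (1.0 / beta - beta)

p, q, beta, c = [0.5, 0.3, 0.2], [0.6, 0.4], 2.5, 7.0
assert abs(lne([c * x for x in p], beta) - lne(p, beta)) < 1e-9        # scale invariance
assert abs(lne(p, beta) - lne(p, 1.0 / beta)) < 1e-9                   # symmetry in (beta, 1/beta)
product = [pi * qj for pi in p for qj in q]                            # independent combination
assert abs(lne(product, beta) - (lne(p, beta) + lne(q, beta))) < 1e-9  # extensivity
```

The invariance is immediate from the formula: scaling \(\Omega\) by \(c\) multiplies both norms by \(c\), which cancels in the log-ratio.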

2.1 Properties of generalized measure presented in (5)

Theorem 2.1

For any \(\Omega \in {\Gamma }_n\), \(_\beta H_\mathrm{new}(\Omega )\) satisfies the following properties:

  (a) \(_\beta H_\mathrm{new}(\Omega ) \ge 0\) for all \(\beta > 0(\ne 1)\).    [Non-negativity]

  (b) \(_\beta H_\mathrm{new} (\varpi _1,\varpi _2,...,\varpi _n)\) is a symmetric function of \((\varpi _1,\varpi _2,...,\varpi _n)\).

  (c) \(_\beta H_\mathrm{new} (0,1)=0= _\beta H_\mathrm{new}(1,0)\).    [Decisivity]

  (d) For any \(\Omega = (\varpi _1,\varpi _2,...,\varpi _n) \in \Gamma _n\), we have \(_\beta H_\mathrm{new}(\Omega ) = _\beta H_\mathrm{new}(\varpi _1,\varpi _2,...,\varpi _n,0)\).    [Expandability]

  (e) For \(\Omega = (\varpi _1,\varpi _2,...,\varpi _n) \in \Gamma _n\) and \(\Xi = (\xi _1,\xi _2,...,\xi _m) \in \Gamma _m\), let us define their independent combination as \(\Omega * \Xi = (\varpi _i \xi _j)_{i=1,...,n;\,j=1,...,m}\). Then \(_\beta H_\mathrm{new}(\Omega *\Xi )= _\beta H_\mathrm{new}(\Omega ) + _\beta H_\mathrm{new}(\Xi )\).    [Shannon additivity/Extensivity]

  (f) \(_\beta H_\mathrm{new} (\frac{1}{2},\frac{1}{2})=1\).    [Normalization]

  (g) \(_\beta H_\mathrm{new}(\varpi _1,\varpi _2,...,\varpi _n) \le _\beta H_\mathrm{new} (\frac{1}{n},\frac{1}{n},...,\frac{1}{n})=\log (n)\).

  (h) \(_\beta H_\mathrm{new}(\varpi _1,\varpi _2,...,\varpi _n)\) is continuous in \(\varpi _i\) for all \(i=1,2,...,n\) and \(\beta > 0\).

  (i) For \(\beta > 0\) such that \(ln\Vert \Omega \Vert _{\beta ^{-1}}\) is convex in \(\Omega\), \(_\beta H_\mathrm{new}(\Omega )\) is concave in \(\Omega \in \Gamma _n\).

Proof

(a). The entropy \(_\beta H_\mathrm{new}(\Omega )\) is non-negative for all \(\beta >0\).

We consider the following cases:

Case (i): When \(\beta >1\), we have \(\beta ^{-1}<1\), and hence

$$\begin{aligned} \left( \sum _{i=1}^{n}\varpi _i^{\beta }\right) ^{\frac{1}{\beta }} < 1 \text { and } \left( \sum _{i=1}^{n}\varpi _i^{\frac{1}{\beta }} \right) ^{\beta } >1. \end{aligned}$$
(6)

Taking logarithms on both sides of (6),

$$\begin{aligned}&\log \left( \sum _{i=1}^{n}\varpi _i^{\beta }\right) ^{\frac{1}{\beta }}< \log 1 =0 \,\,\, and \nonumber \\&\log \left( \sum _{i=1}^{n}\varpi _i^{\frac{1}{\beta }}\right) ^{\beta } >\log 1 =0. \end{aligned}$$
(7)

From (7), we get

$$\begin{aligned} \log \left( \sum _{i=1}^{n}\varpi _i^{\beta }\right) ^{\frac{1}{\beta }} - \log \left( \sum _{i=1}^{n}\varpi _i^{\frac{1}{\beta }}\right) ^{\beta }<0. \end{aligned}$$
(8)

Since \(\beta >1\), we have \((\beta ^{-1}-\beta )<0.\)

Therefore, from (8), we get

$$\begin{aligned} \begin{aligned} _\beta H_\mathrm{new}(\Omega ) {}&=\frac{1}{\beta ^{-1}-\beta }\left[ \log \left( \sum _{i=1}^{n}\varpi _i^\beta \right) ^{\frac{1}{\beta }} -\log \left( \sum _{i=1}^{n}\varpi _i^{\frac{1}{\beta }} \right) ^{\beta }\right] \\&= \frac{1}{\beta ^{-1}-\beta }\log \left[ \frac{\left( \sum _{i=1}^{n}\varpi _i^\beta \right) ^{\frac{1}{\beta }}}{\left( \sum _{i=1}^{n}\varpi _i^{\frac{1}{\beta }} \right) ^{\beta }}\right] \\&= \frac{1}{\beta ^{-1}-\beta }\left[ \log \Vert \Omega \Vert _{\beta } - \log \Vert \Omega \Vert _{\beta ^{-1}}\right] >0, \end{aligned} \end{aligned}$$

i.e.,    \(_\beta H_\mathrm{new}(\Omega )>0.\)

Case (ii): Similarly, for \(\beta <1\) we have \(\beta ^{-1} >1\), which implies \((\beta ^{-1}-\beta )>0\), and we get

$$\begin{aligned} \log \left( \sum _{i=1}^{n}\varpi _i^{\beta }\right) ^{\frac{1}{\beta }}-\log \left( \sum _{i=1}^{n}\varpi _i^{\frac{1}{\beta }}\right) ^{\beta }>0. \end{aligned}$$
(9)

Therefore, from (9), we get

$$\begin{aligned} \begin{aligned} _\beta H_\mathrm{new}(\Omega ) {}&=\frac{1}{\beta ^{-1}-\beta }\left[ \log \left( \sum _{i=1}^{n}\varpi _i^\beta \right) ^{\frac{1}{\beta }} -\log \left( \sum _{i=1}^{n}\varpi _i^{\frac{1}{\beta }} \right) ^{\beta }\right] \\&= \frac{1}{\beta ^{-1}-\beta }\log \left[ \frac{\left( \sum _{i=1}^{n}\varpi _i^\beta \right) ^{\frac{1}{\beta }}}{\left( \sum _{i=1}^{n}\varpi _i^{\frac{1}{\beta }} \right) ^{\beta }}\right] \\&= \frac{1}{\beta ^{-1}-\beta }\left[ \log \Vert \Omega \Vert _{\beta } - \log \Vert \Omega \Vert _{\beta ^{-1}}\right] >0, \end{aligned} \end{aligned}$$
(10)

i.e.,    \(_\beta H_\mathrm{new}(\Omega ) > 0\).

Combining cases (i) and (ii) with property (c) gives non-negativity, i.e., \(_\beta H_\mathrm{new} \ge 0\).

(b–c). Properties (b) and (c) follow directly from the definition of the Logarithmic Norm entropy.

(d). The property follows trivially from definition (5).

(e). First, we note that, for any \(\beta > 0\), we have

$$\begin{aligned} \Vert \Omega * \Xi \Vert _{\beta }= & {} \left( \sum _{i=1}^n \sum _{j=1}^m (\varpi _i \xi _j)^{\beta } \right) ^{\frac{1}{\beta }} = \left( \sum _{i=1}^n (\varpi _i)^{\beta } \sum _{j=1}^m ( \xi _j)^{\beta } \right) ^{\frac{1}{\beta }} \\= & {} \Vert \Omega \Vert _{\beta }.\Vert \Xi \Vert _{\beta }. \end{aligned}$$

Therefore, for \(\beta \ne 1 (\beta > 0)\), we get

$$\begin{aligned} \begin{aligned} _\beta H_\mathrm{new}(\Omega *\Xi ){}&= \frac{1}{\beta ^{-1}-\beta }\left[ ln\Vert \Omega *\Xi \Vert _{\beta } - ln\Vert \Omega *\Xi \Vert _{\beta ^{-1}}\right] \\&= \frac{1}{\beta ^{-1}-\beta }\left[ ln\Vert \Omega \Vert _{\beta } + ln\Vert \Xi \Vert _{\beta } - ln\Vert \Omega \Vert _{\beta ^{-1}} - ln\Vert \Xi \Vert _{\beta ^{-1}}\right] \\&= _\beta H_\mathrm{new}(\Omega ) + _\beta H_\mathrm{new}(\Xi ). \end{aligned} \end{aligned}$$

(h). The proof for the case \(\beta \ne 1\) follows directly from the continuity of the norm functionals \(\Vert \Omega \Vert _{\beta }\) and the Logarithmic function.

(i). Let us assume that the convexity hypothesis of property (i) holds, and take \(\Omega _1 , \Omega _2 \in \Gamma _n\), \(\lambda \in [0,1]\). For \(\beta < 1\), by the reverse Minkowski inequality, we have

$$\begin{aligned} \Vert \lambda \; \Omega _1 + (1-\lambda )\;\Omega _2\Vert _{\beta } \ge \lambda \;\Vert \Omega _1\Vert _{\beta } + (1-\lambda )\;\Vert \Omega _2\Vert _{\beta }. \end{aligned}$$
(11)

Combining this with the monotonicity and concavity of the logarithmic function, we get

$$\begin{aligned}&ln \; \Vert \lambda \Omega _1 \!+\! (1-\lambda )\; \Omega _2\Vert _{\beta } \!\ge \! ln\;[\lambda \; \Vert \Omega _1\Vert _{\beta } \!+\! (1-\lambda )\; \Vert \Omega _2\Vert _{\beta }] \\&\quad \ge \lambda \; ln\; \Vert \Omega _1\Vert _{\beta }+ (1-\lambda )\; ln\; \Vert \Omega _2\Vert _{\beta }. \end{aligned}$$

On the other hand, by the assumed convexity of \(ln\; \Vert \Omega \Vert _{\beta ^{-1}}\), we get

$$\begin{aligned}&ln \;\Vert \lambda \; \Omega _1 + (1-\lambda )\;\Omega _2\Vert _{\beta ^{-1}} \le \lambda \; ln\; \Vert \Omega _1\Vert _{\beta ^{-1}}\\&\quad + (1-\lambda )\; ln \;\Vert \Omega _2\Vert _{\beta ^{-1}}. \end{aligned}$$

Thus, since \((\beta ^{-1}-\beta )>0\) for \(\beta < 1\), we finally get

$$\begin{aligned} \begin{aligned}&_\beta H_\mathrm{new}(\lambda \; \Omega _1 + (1-\lambda )\;\Omega _2){}\\&\quad = \frac{1}{\beta ^{-1}-\beta }\left[ ln\; \Vert \lambda \; \Omega _1 + (1-\lambda )\;\Omega _2\Vert _{\beta } \right. \\&\qquad \left. - ln\; \Vert \lambda \; \Omega _1 + (1-\lambda )\;\Omega _2\Vert _{\beta ^{-1}}\right] \\&\quad \ge \frac{1}{\beta ^{-1}-\beta }\left[ \lambda \;ln\; \Vert \Omega _1\Vert _{\beta } + (1-\lambda )\;ln\; \Vert \Omega _2\Vert _{\beta } \right. \\&\qquad \left. - \lambda \; ln\; \Vert \Omega _1\Vert _{\beta ^{-1}} - (1-\lambda ) \;ln \;\Vert \Omega _2\Vert _{\beta ^{-1}}\right] \\&\quad = \frac{1}{\beta ^{-1}-\beta }\left[ \lambda \; \{ln\; \Vert \Omega _1\Vert _{\beta } - ln\; \Vert \Omega _1\Vert _{\beta ^{-1}}\}+(1-\lambda )\;\{ln\; \Vert \Omega _2\Vert _{\beta } \right. \\&\qquad \left. - ln\; \Vert \Omega _2\Vert _{\beta ^{-1}}\}\right] \\&\quad = \lambda \; _\beta H_\mathrm{new}(\Omega _1) + (1-\lambda )\;_\beta H_\mathrm{new}(\Omega _2). \end{aligned} \end{aligned}$$
(12)

This proves the concavity of scale-invariant entropy. \(\square\)

3 Scale-invariant intuitionistic fuzzy information measure

Definition 3.1

(Zadeh 1968) Let \(Y = (y_1,y_2,...,y_n)\) be a non-empty set. An FS \({{\tilde{S}}}\) is given by

$$\begin{aligned} {{\tilde{S}}} = \{ \langle y_i,\mu _{{{\tilde{S}}}}(y_i)\rangle |y_i \in Y \}, \end{aligned}$$
(13)

where \(\mu _{{{\tilde{S}}}}: Y \rightarrow [0,1]\) denotes the membership function and \(\mu _{{{\tilde{S}}}}(y_i) \in [0,1]\) represents the membership degree of \(y_i \in Y\) in \({{\tilde{S}}}\).

Atanassov (1986) extended the idea of FSs by adding one more component, named the "Hesitancy Degree", thus presenting a new concept called the "Intuitionistic Fuzzy Set (IFS)".

Definition 3.2

(Atanassov 1986) For a universe of discourse \(Y = (y_1,y_2,...,y_n)\), an IFS \({{\hat{S}}}\) is given by

$$\begin{aligned} {{\hat{S}}} = \{ \langle y_i,\mu _{{{\hat{S}}}}(y_i),\nu _{{{\hat{S}}}}(y_i)\rangle |y_i \in Y \}, \end{aligned}$$
(14)

where \(\mu _{{{\hat{S}}}}(y_i)\) and \(\nu _{{{\hat{S}}}}(y_i)\) denote the membership and non-membership degrees of \(y_i \in Y\) in \({{\hat{S}}}\), satisfying \(0 \le \mu _{{{\hat{S}}}}(y_i)+\nu _{{{\hat{S}}}}(y_i) \le 1\). The number \(\pi _{{{\hat{S}}}}(y_i) = 1- \mu _{{{\hat{S}}}}(y_i)-\nu _{{{\hat{S}}}}(y_i)\) denotes the intuitionistic index or hesitancy degree. If \(\pi _{{{\hat{S}}}}(y_i) =0\), the IFS reduces to an FS. For an IFS, the pair \((\mu _{{{\hat{S}}}}(y_i) , \nu _{{{\hat{S}}}}(y_i))\) is termed an intuitionistic fuzzy number (IFN). Every IFN is represented as \(\lambda = (\mu _{\lambda }, \nu _{\lambda })\), where \(\mu _{\lambda }\) and \(\nu _{\lambda }\) lie in [0, 1] with \(\mu _{\lambda }+ \nu _{\lambda } \le 1\). In addition, \({{\tilde{S}}}(\lambda ) = \mu _{\lambda }- \nu _{\lambda }\) and \({{\hat{H}}}(\lambda ) = \mu _{\lambda }+ \nu _{\lambda }\) represent the "score value" and "accuracy degree" of \(\lambda\), respectively.
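The hesitancy, score and accuracy of an IFN follow directly from these definitions; a small illustrative sketch (the helper names are ours):

```python
def hesitancy(mu, nu):
    """Intuitionistic index pi = 1 - mu - nu."""
    return 1.0 - mu - nu

def score(mu, nu):
    """Score value S(lambda) = mu - nu."""
    return mu - nu

def accuracy(mu, nu):
    """Accuracy degree H(lambda) = mu + nu."""
    return mu + nu

# a voting-style IFN (0.6, 0.3): 60% in favor, 30% against, 10% undecided
pi = hesitancy(0.6, 0.3)
```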

In this article, IFS(Y) and FS(Y) denote the set of all IFSs and the set of all FSs on Y, respectively. Logarithms are taken to base \('2'\) unless otherwise stated.

Definition 3.3

(Xu 2007; Yager 2006) For IFNs \(\lambda _1 = (\mu _{\lambda _1}\), \(\nu _{\lambda _1})\) and \(\lambda _2 = (\mu _{\lambda _2}\), \(\nu _{\lambda _2})\), and a generic IFN \(\lambda = (\mu _{\lambda }, \nu _{\lambda })\), the set operations are stated as below:

  1. \(\lambda _1 +\lambda _2 = (\mu _{\lambda _ 1} +\mu _{\lambda _2} -\mu _{\lambda _1} \mu _{\lambda _ 2},\nu _{\lambda _1} \nu _{\lambda _ 2} )\),

  2. \(\lambda _1 *\lambda _ 2 = (\mu _{\lambda _ 1} \mu _{\lambda _ 2}, \nu _{\lambda _ 1}+ \nu _{\lambda _ 2} -\nu _{\lambda _ 1} \nu _{\lambda _ 2})\),

  3. \(\beta \lambda =\left( 1-(1-\mu _{\lambda })^{\beta }, (\nu _{\lambda })^{\beta } \right) ; \beta > 0\),

  4. \(\lambda ^{\beta } = \left( (\mu _{\lambda })^{\beta } ,1-(1-\nu _{\lambda })^{\beta }\right) ; \beta >0\).
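These four operations can be sketched as plain functions on \((\mu ,\nu )\) pairs (an illustrative transcription with our own function names; note that each operation again yields a valid IFN):

```python
def ifn_sum(l1, l2):
    """lambda1 + lambda2 of Definition 3.3."""
    (m1, n1), (m2, n2) = l1, l2
    return (m1 + m2 - m1 * m2, n1 * n2)

def ifn_prod(l1, l2):
    """lambda1 * lambda2 of Definition 3.3."""
    (m1, n1), (m2, n2) = l1, l2
    return (m1 * m2, n1 + n2 - n1 * n2)

def ifn_scalar(beta, lam):
    """beta * lambda, beta > 0."""
    m, n = lam
    return (1 - (1 - m) ** beta, n ** beta)

def ifn_power(lam, beta):
    """lambda ** beta, beta > 0."""
    m, n = lam
    return (m ** beta, 1 - (1 - n) ** beta)

mu, nu = ifn_sum((0.5, 0.3), (0.4, 0.2))
assert 0 <= mu + nu <= 1  # closure: the sum is again an IFN
```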

Definition 3.4

(Xia and Xu 2012) Let \(\lambda _i = (\mu _{\lambda _i}\), \(\nu _{\lambda _ i})\), \((i = 1,2,...,n)\), be a group of IFNs, and let \(\gimel = (\gimel _1,\gimel _2,...,\gimel _n)^T\) be the weight vector of the \(\lambda _i\), where \(\gimel _i \in [0,1]\) and \(\sum _{i=1}^{n} \gimel _i = 1\). The function SIFWA : \(U^n \rightarrow U\) defined as

$$\begin{aligned}&\hbox {SIFWA} (\lambda _1,\lambda _2 ,...,\lambda _ n) = \gimel _1 \lambda _ 1 +\gimel _2 \lambda _ 2 +...+\gimel _n \lambda _ n \nonumber \\&\quad = \left( \frac{\prod _{i=1}^{n} \mu _{\lambda _ i}^{\gimel _i}}{\prod _{i=1}^{n} \mu _{\lambda _ i}^{\gimel _i} + \prod _{i=1}^{n}(1- \mu _{\lambda _i})^{\gimel _i}}, \frac{\prod _{i=1}^{n} \nu _{\lambda _i}^{\gimel _i}}{\prod _{i=1}^{n} \nu _{\lambda _ i}^{\gimel _i} + \prod _{i=1}^{n}(1- \nu _{\lambda _ i})^{\gimel _i}}\right) \nonumber \\ \end{aligned}$$
(15)

is known as symmetric intuitionistic fuzzy weighted averaging (SIFWA) operator.
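A direct transcription of (15), under our own naming and assuming Python 3.8+ for `math.prod`:

```python
from math import prod

def sifwa(lambdas, weights):
    """Symmetric IF weighted averaging operator, Eq. (15).
    lambdas: list of (mu, nu) pairs; weights: nonnegative, summing to 1."""
    t_mu = prod(m ** w for (m, _), w in zip(lambdas, weights))
    f_mu = prod((1 - m) ** w for (m, _), w in zip(lambdas, weights))
    t_nu = prod(n ** w for (_, n), w in zip(lambdas, weights))
    f_nu = prod((1 - n) ** w for (_, n), w in zip(lambdas, weights))
    return (t_mu / (t_mu + f_mu), t_nu / (t_nu + f_nu))

# idempotency: aggregating identical IFNs returns that IFN
mu, nu = sifwa([(0.6, 0.3)] * 3, [1 / 3] * 3)
```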

Definition 3.5

(Atanassov 1986) (Operations on IFSs). For any \({{\hat{T}}},{{\hat{S}}} \in IFS(Y)\) given by:

$$\begin{aligned} {{\hat{T}}}= & {} \{ \langle y_i,\mu _{{{\hat{T}}}}(y_i),\nu _{{{\hat{T}}}}(y_i)\rangle |y_i \in Y \}, \end{aligned}$$
(16)
$$\begin{aligned} {{\hat{S}}}= & {} \{ \langle y_i,\mu _{{{\hat{S}}}}(y_i),\nu _{{{\hat{S}}}}(y_i)\rangle |y_i \in Y \}, \end{aligned}$$
(17)

the regular set operations and relations are considered as:

  1. \({{\hat{T}}} \subseteq {{\hat{S}}}\) if and only if \(\mu _{{{\hat{T}}}}(y_i)\le \mu _{{{\hat{S}}}}(y_i)\) and \(\nu _{{{\hat{T}}}}(y_i)\ge \nu _{{{\hat{S}}}}(y_i)\) when \(\mu _{{{\hat{S}}}}(y_i)\le \nu _{{{\hat{S}}}}(y_i)\), or \(\mu _{{{\hat{T}}}}(y_i)\ge \mu _{{{\hat{S}}}}(y_i)\) and \(\nu _{{{\hat{T}}}}(y_i)\le \nu _{{{\hat{S}}}}(y_i)\) when \(\mu _{{{\hat{S}}}}(y_i)\ge \nu _{{{\hat{S}}}}(y_i)\), for any \(y_i \in Y\);

  2. \({{\hat{T}}} = {{\hat{S}}}\) if and only if \({{\hat{T}}} \subseteq {{\hat{S}}}\) and \({{\hat{S}}} \subseteq {{\hat{T}}}\);

  3. \({{{\hat{T}}}}^c = \{ \langle y_i,\nu _{{{\hat{T}}}}(y_i),\mu _{{{\hat{T}}}}(y_i)\rangle |y_i \in Y \}\);

  4. \({{\hat{T}}} \cap {{\hat{S}}} = \{\langle y_i, \mu _{{{\hat{T}}}}(y_i)\wedge \mu _{{{\hat{S}}}}(y_i), \nu _{{{\hat{T}}}}(y_i)\vee \nu _{{{\hat{S}}}}(y_i)\rangle | y_i \in Y \}\);

  5. \({{\hat{T}}} \cup {{\hat{S}}} = \{\langle y_i, \mu _{{{\hat{T}}}}(y_i)\vee \mu _{{{\hat{S}}}}(y_i), \nu _{{{\hat{T}}}}(y_i)\wedge \nu _{{{\hat{S}}}}(y_i)\rangle | y_i \in Y \}\).

4 Entropy for FSs and IFSs

Definition 4.1

(Zadeh 1965) (Fuzzy entropy). A real function \({{\tilde{\phi }}} : FS(Y) \rightarrow [0,\infty )\) is termed a fuzzy entropy if it satisfies the following properties:

  1. \({{\tilde{\phi }}} ({{\tilde{S}}}) = 0\) if and only if \({{\tilde{S}}}\) is a crisp set, for all \({{\tilde{S}}} \in FS(Y)\).

  2. \({{\tilde{\phi }}} ({{\tilde{S}}})\) is maximum if and only if \(\mu _{{{\tilde{S}}}} = 0.5\), for all \({{\tilde{S}}} \in FS(Y)\).

  3. For any \({{\tilde{T}}},{{\tilde{S}}} \in FS(Y)\), \({{\tilde{\phi }}}({{\tilde{T}}}) \le {{\tilde{\phi }}}({{\tilde{S}}})\) if \({{\tilde{T}}}\) is crisper than \({{\tilde{S}}}\), that is, \(\mu _{{{\tilde{T}}}}\le \mu _{{{\tilde{S}}}}\) if \(\mu _{{{\tilde{S}}}} \le 0.5\) and \(\mu _{{{\tilde{T}}}} \ge \mu _{{{\tilde{S}}}}\) if \(\mu _{{{\tilde{S}}}} \ge 0.5\).

  4. \({{\tilde{\phi }}}({{\tilde{S}}}) = {{\tilde{\phi }}}(({{\tilde{S}}})^c)\), where \(({{{\tilde{S}}}})^c\) denotes the complement of \({{\tilde{S}}} \in FS(Y)\).

Since for an IFS \(\mu + \nu + \pi = 1\), viewing \((\mu , \nu , \pi )\) as a probability distribution, Hung and Yang (2006) gave a new entropy for IFSs by extending the idea of De Luca and Termini (1972).

Definition 4.2

(Atanassov 1986) (Intuitionistic fuzzy entropy). A real function \({{\bar{\phi }}}: IFS(Y) \rightarrow [0, \infty )\) is known as an entropy on IFS(Y) if the following properties are satisfied:

  1. \({{\bar{\phi }}} ({{\hat{S}}}) = 0\) if and only if \({{\hat{S}}}\) is a crisp set.

  2. \({{\bar{\phi }}} ({{\hat{S}}})\) is maximum at \(\mu _{{{\hat{S}}}} = \nu _{{{\hat{S}}}} = \pi _{{{\hat{S}}}} = \frac{1}{3}\).

  3. \({{\bar{\phi }}}({{\hat{T}}}) \le {{\bar{\phi }}}({{\hat{S}}})\) if and only if \({{\hat{T}}}\) is crisper than \({{\hat{S}}}\), that is, \(\mu _{{{\hat{T}}}} \ge \mu _{{{\hat{S}}}} , \nu _{{{\hat{T}}}} \ge \nu _{{{\hat{S}}}}\) if \(\min (\mu _{{{\hat{S}}}} ,\nu _{{{\hat{S}}}}) \ge \frac{1}{3}\), and \(\mu _{{{\hat{T}}}} \le \mu _{{{\hat{S}}}} , \nu _{{{\hat{T}}}} \le \nu _{{{\hat{S}}}}\) if \(\max ( \mu _{{{\hat{S}}}} , \nu _{{{\hat{S}}}} ) \le \frac{1}{3}\).

  4. \({{\bar{\phi }}}({{\hat{S}}}) = {{\bar{\phi }}}(({{\hat{S}}})^c)\), where \(({{\hat{S}}})^c\) denotes the complement of \({{\hat{S}}}\).

Definition 4.3

(Szmidt and Kacprzyk 2002) An entropy on IFS(Y) is a real-valued function A: IFS\((Y) \rightarrow [0,1]\), which fulfills the properties as below:

\(\aleph _1\). \(A({{\hat{T}}})= 1\) if and only if \(\mu _{{{\hat{T}}}}(y_i) = \nu _{{{\hat{T}}}}(y_i)\), for all \(y_i \in Y\).

\(\aleph _2\). \(A({{\hat{T}}}) = 0\) if and only if \({{\hat{T}}}\) is crisp set, i.e., \(\mu _{{{\hat{T}}}}(y_i) = 0, \nu _{{{\hat{T}}}}(y_i) = 1\) or \(\mu _{{{\hat{T}}}}(y_i) = 1, \nu _{{{\hat{T}}}}(y_i) = 0\) for all \(y_i \in Y\).

\(\aleph _3\). \(A({{\hat{T}}}) \le A({{\hat{S}}})\) if and only if \({{\hat{T}}} \subseteq {{\hat{S}}}\).

\(\aleph _4\). \(A({{\hat{T}}}) = A(({{\hat{T}}})^c)\).

Definition 4.4

[Correlation coefficients (Gerstenkorn and Manko 1991)]. Let \(\hat{S_1} = \{ \langle y_i,\mu _{\hat{S_1}}(y_i), \nu _{\hat{S_1}}(y_i) \rangle |y_i \in Y \}\) and \(\hat{S_2} = \{ \langle y_i,\mu _{\hat{S_2}}(y_i) , \nu _{\hat{S_2}}(y_i) \rangle |y_i \in Y \}\) be two IFSs. Gerstenkorn and Manko defined the correlation coefficient \(\beth {(\hat{S_1} , \hat{S_2})}\) between \(\hat{S_1}\) and \(\hat{S_2}\) as follows:

$$\begin{aligned} \beth (\hat{S_1}, \hat{S_2}) = \frac{\digamma (\hat{S_1} ,\hat{S_2})}{\sqrt{\psi (\hat{S_1}) \psi (\hat{S_2})}}, \end{aligned}$$
(18)

where

$$\begin{aligned} \digamma (\hat{S_1} ,\hat{S_2}) = \sum _{i=1}^{n} \left( \mu _{\hat{S_1}}(y_i)\mu _{\hat{S_2}}(y_i) + \nu _{\hat{S_1}}(y_i) \nu _{\hat{S_2}}(y_i)\right) \end{aligned}$$
(19)

represents the correlation between two IFSs \(\hat{S_1}\) and \(\hat{S_2}\), and

$$\begin{aligned} \psi (\hat{S_1})= & {} \sum _{i=1}^{n}\left( (\mu _{\hat{S_1}}(y_i))^2 + (\nu _{\hat{S_1}}(y_i))^2 \right) , \end{aligned}$$
(20)
$$\begin{aligned} \psi (\hat{S_2})= & {} \sum _{i=1}^{n}\left( (\mu _{\hat{S_2}}(y_i))^2 + (\nu _{\hat{S_2}}(y_i))^2 \right) , \end{aligned}$$
(21)

are the respective informational energies of \(\hat{S_1}\) and \(\hat{S_2}\).

The correlation coefficient \(\beth (\hat{S_1} ,\hat{S_2})\) satisfies the following properties:

  1. \(0 \le \beth (\hat{S_1} ,\hat{S_2}) \le 1\).

  2. \(\beth (\hat{S_1} ,\hat{S_2}) = \beth (\hat{S_2} ,\hat{S_1})\).

  3. \(\beth (\hat{S_1} ,\hat{S_2}) = 1\) if \(\hat{S_1} = \hat{S_2}\).
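Equations (18)–(21) and the listed properties translate directly into code; a minimal sketch with our own function names, taking each IFS as a list of \((\mu ,\nu )\) pairs:

```python
import math

def correlation(s1, s2):
    """Correlation of two IFSs, Eq. (19)."""
    return sum(m1 * m2 + n1 * n2 for (m1, n1), (m2, n2) in zip(s1, s2))

def energy(s):
    """Informational energy, Eqs. (20)-(21)."""
    return sum(m * m + n * n for m, n in s)

def corr_coeff(s1, s2):
    """Correlation coefficient, Eq. (18)."""
    return correlation(s1, s2) / math.sqrt(energy(s1) * energy(s2))

s1 = [(0.7, 0.2), (0.4, 0.5)]
s2 = [(0.6, 0.3), (0.5, 0.4)]
assert abs(corr_coeff(s1, s1) - 1.0) < 1e-9  # property 3
```

The upper bound in property 1 is the Cauchy–Schwarz inequality applied to the paired membership and non-membership values.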

With the help of the above concepts, we now introduce a new entropy measure for IFSs, called the Logarithmic intuitionistic fuzzy entropy.

Corresponding to the Shannon entropy, De Luca and Termini (1972) proposed a fuzzy entropy as below:

$$\begin{aligned} H_{LT}({{\tilde{S}}})= & {} -\frac{1}{n}\sum _{i=1}^{n}[\mu _{{{\tilde{S}}}}(y_i)\log (\mu _{{{\tilde{S}}}}(y_i))\nonumber \\&+(1-\mu _{{{\tilde{S}}}}(y_i))\log (1-\mu _{{{\tilde{S}}}}(y_i))], \end{aligned}$$
(22)

where \({{\tilde{S}}} \in FS(Y)\) and \(y_i \in Y\). Bhandari and Pal (1975) extended Renyi's idea to introduce a new fuzzy entropy, given by

$$\begin{aligned}&H_{BP}({{\tilde{S}}})\nonumber \\&\quad =\frac{1}{n(1-\alpha )}\sum _{i=1}^{n}log\left[ (\mu _{{{\tilde{S}}}}(y_i))^{\alpha }+(1-\mu _{{{\tilde{S}}}}(y_i))^{\alpha }\right] . \end{aligned}$$
(23)
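The two fuzzy entropies (22) and (23) can be sketched as follows; this is our own minimal illustration, and the base-2 logarithm is an assumption chosen so that both measures attain the maximum value 1 at \(\mu = 0.5\) (the source does not fix the base):

```python
import math

def h_lt(mu, base=2.0):
    """De Luca-Termini fuzzy entropy, Eq. (22); `mu` is a list of
    membership degrees.  0*log(0) is taken as 0 by convention."""
    def term(x):
        if x in (0.0, 1.0):          # crisp element contributes nothing
            return 0.0
        return -(x * math.log(x, base) + (1 - x) * math.log(1 - x, base))
    return sum(term(x) for x in mu) / len(mu)

def h_bp(mu, alpha, base=2.0):
    """Bhandari-Pal fuzzy entropy of order alpha (alpha > 0, alpha != 1),
    Eq. (23)."""
    n = len(mu)
    return sum(math.log(x**alpha + (1 - x)**alpha, base)
               for x in mu) / (n * (1 - alpha))
```

Both functions return 1 for a maximally fuzzy set (\(\mu = 0.5\) everywhere) and 0 for a crisp set, as the axioms require.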

Further extending the idea of Verma and Sharma (2014), we propose a scale-invariant information measure for IFSs.

Definition 4.5

For any \({{\hat{T}}} \in IFS(Y)\), we define:

$$\begin{aligned} \begin{aligned} A_{\beta }({{\hat{T}}})&= \frac{1}{n(\beta ^{-1}-\beta )}\sum _{i=1}^{n}log \left[ \frac{\left( ((\mu _{{{\hat{T}}}}(y_i))^{\beta } + (\nu _{{{\hat{T}}}}(y_i))^{\beta }) \times (\mu _{{{\hat{T}}}}(y_i) + \nu _{{{\hat{T}}}}(y_i))^{(1-\beta )} + 2^{1-\beta } \pi _{{{\hat{T}}}}(y_i)\right) ^{\frac{1}{\beta }} }{\left( ((\mu _{{{\hat{T}}}}(y_i))^{\beta ^{-1}} + (\nu _{{{\hat{T}}}}(y_i))^{\beta ^{-1}}) \times (\mu _{{{\hat{T}}}}(y_i) + \nu _{{{\hat{T}}}}(y_i))^{(1-\beta ^{-1})} + 2^{1-\beta ^{-1}} \pi _{{{\hat{T}}}}(y_i)\right) ^{\beta } }\right] ;\\&\quad \beta > 0 (\ne 1). \end{aligned} \end{aligned}$$
(24)

Then (24) is a Logarithmic \(\beta\)-Norm intuitionistic fuzzy information measure.

Particular cases:

  1. If \(\beta =1\), then (24) reduces to the Vlachos and Sergiadis (2007) entropy.

  2. If \(\beta =1\) and the hesitancy degree \(\pi _{{{\hat{T}}}}(y_i)=0\), then (24) reduces to the De Luca and Termini (1972) entropy.

  3. If \(\pi _{{{\hat{T}}}}(y_i)=0\), then (24) reduces to the fuzzy information measure corresponding to the measure (5).

  4. \(A_{\beta }({{\hat{T}}}) = A_{\beta ^{-1}}({{\hat{T}}})\), i.e., (24) is symmetric in \(\beta\) and \(\beta ^{-1}\) in the intuitionistic case, and it also satisfies the translation-invariance property.
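A numerical sketch of the measure (24) makes these particular cases easy to check; this is our own illustration, and the base-2 logarithm is an assumption chosen so that \(A_{\beta }\) equals 1 when \(\mu = \nu\) for every element (the source does not state the base):

```python
import math

def a_beta(ifs, beta):
    """Logarithmic beta-norm IF entropy of Eq. (24).  `ifs` is a list of
    (mu, nu) pairs with mu + nu > 0 (assumed, to avoid 0**negative);
    base-2 logarithms are assumed."""
    n = len(ifs)
    inv = 1.0 / beta
    total = 0.0
    for mu, nu in ifs:
        pi = 1.0 - mu - nu                     # hesitancy degree
        s = mu + nu
        # numerator and denominator of the bracket in Eq. (24)
        num = ((mu**beta + nu**beta) * s**(1 - beta)
               + 2**(1 - beta) * pi) ** inv
        den = ((mu**inv + nu**inv) * s**(1 - inv)
               + 2**(1 - inv) * pi) ** beta
        total += math.log2(num / den)
    return total / (n * (inv - beta))
```

The checks below confirm axiom \(\aleph _1\) (value 1 at \(\mu = \nu\)), axiom \(\aleph _2\) (value 0 on crisp sets), and the symmetry \(A_{\beta } = A_{\beta ^{-1}}\) of case 4.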

We now establish the validity of the measure (24).

4.1 Proof of validity for (24)

Theorem 4.1

The measure \(A_{\beta }({{\hat{T}}})\) in (24) is a valid intuitionistic fuzzy entropy of order \(\beta\).

Proof

We validate the measure (24) by satisfying the axioms \(\aleph _1 - \aleph _4\).

(\(\aleph _1\)). We show that \(A_{\beta }({{\hat{T}}}) = 1\) if and only if \(\mu _{{{\hat{T}}}}(y_i) = \nu _{{{\hat{T}}}}(y_i)\). Putting \(\mu _{{{\hat{T}}}}(y_i) = \nu _{{{\hat{T}}}}(y_i)\) in (24) gives \(A_{\beta }({{\hat{T}}}) = 1\). Conversely, suppose \(A_{\beta }({{\hat{T}}}) = 1\); then we have to show that \(\mu _{{{\hat{T}}}}(y_i) = \nu _{{{\hat{T}}}}(y_i)\) for all \(y_i \in Y\). For this, we use the following inequality:

$$\begin{aligned} \frac{\omega ^{\beta } + \kappa ^{\beta }}{2} \ge \left( \frac{\omega + \kappa }{2} \right) ^{\beta }; 0 \le \omega , \kappa \le 1 \,and\, \beta > 1 , \end{aligned}$$
(25)

where equality holds if and only if \(\omega = \kappa\). For \(\beta > 1\), we have

$$\begin{aligned} \frac{1}{\beta ^{-1} - \beta }\log \left[ \frac{\left( (\omega ^{\beta } + \kappa ^{\beta }) \times (\omega + \kappa )^{1-\beta } + 2^{1-\beta }(1-\omega - \kappa )\right) ^{\frac{1}{\beta }}}{\left( (\omega ^{\beta ^{-1}} \!+\! \kappa ^{\beta ^{-1}}) \times (\omega \!+\! \kappa )^{1-\beta ^{-1}} \!+\! 2^{1-\beta ^{-1}}(1-\omega \!-\! \kappa )\right) ^{\beta }}\right] \!\le \! 1.\nonumber \\ \end{aligned}$$
(26)

By taking \(\mu _{{{\hat{T}}}}(y_i) = \omega , \nu _{{{\hat{T}}}}(y_i) = \kappa\) and \(\mu _{{{\hat{T}}}}(y_i) + \nu _{{{\hat{T}}}}(y_i)+ \pi _{{{\hat{T}}}}(y_i) = 1\) in (26), we have

$$\begin{aligned} {}\frac{1}{n(\beta ^{-1}-\beta )}log \left[ \frac{\left( ((\mu _{{{\hat{T}}}}(y_i))^{\beta } + (\nu _{{{\hat{T}}}}(y_i))^{\beta }) \times (\mu _{{{\hat{T}}}}(y_i) + \nu _{{{\hat{T}}}}(y_i))^{(1-\beta )} + 2^{1-\beta } \pi _{{{\hat{T}}}}(y_i)\right) ^{\frac{1}{\beta }} }{\left( ((\mu _{{{\hat{T}}}}(y_i))^{\beta ^{-1}} + (\nu _{{{\hat{T}}}}(y_i))^{\beta ^{-1}}) \times (\mu _{{{\hat{T}}}}(y_i) + \nu _{{{\hat{T}}}}(y_i))^{(1-\beta ^{-1})} + 2^{1-\beta ^{-1}} \pi _{{{\hat{T}}}}(y_i)\right) ^{\beta } }\right] \le 1 . \end{aligned}$$
(27)

Therefore, \(A_{\beta }({{\hat{T}}}) = 1\) if and only if \(\mu _{{{\hat{T}}}}(y_i) = \nu _{{{\hat{T}}}}(y_i)\).

(\(\aleph _2\)). Let \({{\hat{T}}}\) be a crisp set. This implies, either \(\mu _{{{\hat{T}}}}(y_i) = 1\), \(\nu _{{{\hat{T}}}}(y_i) = 0\); or \(\mu _{{{\hat{T}}}}(y_i) = 0\), \(\nu _{{{\hat{T}}}}(y_i) = 1\).

Then from (24), we find that \(A_{\beta }({{\hat{T}}}) = 0\). Conversely, let \(A_{\beta }({{\hat{T}}}) = 0\). Therefore, (24) gives

$$\begin{aligned} {}\frac{1}{n(\beta ^{-1}-\beta )} \sum _{i=1}^{n}\log \left[ \frac{\left( ((\mu _{{{\hat{T}}}}(y_i))^{\beta } + (\nu _{{{\hat{T}}}}(y_i))^{\beta }) \times (\mu _{{{\hat{T}}}}(y_i) + \nu _{{{\hat{T}}}}(y_i))^{(1-\beta )} + 2^{1-\beta } \pi _{{{\hat{T}}}}(y_i)\right) ^{\frac{1}{\beta }} }{\left( ((\mu _{{{\hat{T}}}}(y_i))^{\beta ^{-1}} + (\nu _{{{\hat{T}}}}(y_i))^{\beta ^{-1}}) \times (\mu _{{{\hat{T}}}}(y_i) + \nu _{{{\hat{T}}}}(y_i))^{(1-\beta ^{-1})} + 2^{1-\beta ^{-1}} \pi _{{{\hat{T}}}}(y_i)\right) ^{\beta } }\right] = 0. \end{aligned}$$
(28)

Therefore, (28) will hold only if

$$\begin{aligned} \begin{aligned}&{}\left( ((\mu _{{{\hat{T}}}}(y_i))^{\beta } + (\nu _{{{\hat{T}}}}(y_i))^{\beta }) \times (\mu _{{{\hat{T}}}}(y_i) + \nu _{{{\hat{T}}}}(y_i))^{(1-\beta )}\right. \\&\left. \quad + 2^{1-\beta } \pi _{{{\hat{T}}}}(y_i)\right) ^{\frac{1}{\beta }} = \left( ((\mu _{{{\hat{T}}}}(y_i))^{\beta ^{-1}} + (\nu _{{{\hat{T}}}}(y_i))^{\beta ^{-1}}) \right. \\&\qquad \left. \times (\mu _{{{\hat{T}}}}(y_i) + \nu _{{{\hat{T}}}}(y_i))^{(1-\beta ^{-1})}+ 2^{1-\beta ^{-1}} \pi _{{{\hat{T}}}}(y_i)\right) ^{\beta }. \end{aligned} \end{aligned}$$
(29)

Since \(\beta \ne 1\), equality (29) holds only when \(\mu _{{{\hat{T}}}}(y_i), \nu _{{{\hat{T}}}}(y_i) \in \{0,1\}\) with \(\pi _{{{\hat{T}}}}(y_i) = 0\), i.e., only when \({{\hat{T}}}\) is a crisp set. We may therefore conclude that \({{\hat{T}}}\) is a crisp set if and only if \(A_{\beta }({{\hat{T}}}) = 0\).

(\(\aleph _3\)). We now prove that (24) satisfies \(A_{\beta }({{\hat{T}}}) \le A_{\beta }({{\hat{S}}})\) whenever \({{\hat{T}}} \subseteq {{\hat{S}}}\). For this, we define the function:

$$\begin{aligned} \rho (g_1 , g_2) = \frac{1}{\beta ^{-1}-\beta }\log \left[ \frac{\left( (g_1^{\beta } + g_2^{\beta })(g_1 + g_2)^{1-\beta } + 2^{1-\beta }(1-g_1 - g_2)\right) ^{\frac{1}{\beta }}}{\left( (g_1^{\beta ^{-1}} + g_2^{\beta ^{-1}}) (g_1 + g_2)^{1-\beta ^{-1}} + 2^{1-\beta ^{-1}}(1-g_1 - g_2)\right) ^{\beta }} \right] ;(g_1,g_2) \in [0,1]. \end{aligned}$$
(30)

We study the monotonicity of \(\rho\) with respect to \(g_1\) and \(g_2\). Calculating the critical points by setting \(\frac{\partial {\rho }(g_1,g_2)}{\partial {g_1}} = 0\) and \(\frac{ \partial {\rho }(g_1,g_2)}{\partial {g_2}} = 0\) gives

$$\begin{aligned} g_1 = g_2. \end{aligned}$$


Some cases that arise are:

$$\begin{aligned}&{1.} \,\, \mathrm{When}\,\,\, g_1 \le g_2, \,\frac{\partial {\rho }(g_1,g_2)}{\partial {g_1}}\ge 0 \,\, \mathrm{for}\; \mathrm{both}\,\, \beta>1\,\, \mathrm{and} \,\,\beta<1.\\&{2.} \,\, \mathrm{When}\,\,\, g_1 \le g_2, \,\frac{\partial {\rho }(g_1,g_2)}{\partial {g_2}}\le 0 \,\, \mathrm{for}\; \mathrm{both}\,\, \beta >1 \,\,\mathrm{and}\,\, \beta <1. \end{aligned}$$

From cases 1 and 2 above, we conclude that \(\rho (g_1,g_2)\) is an increasing function. Similarly:

$$\begin{aligned}&{3.} \,\, \mathrm{When} \,\,g_1 \ge g_2\,, \frac{\partial {\rho }(g_1,g_2)}{\partial {g_1}}\le 0\,\, \mathrm{for}\; \mathrm{both}\,\, \beta>1 \,\,\mathrm{and}\,\, \beta<1.\\&{4.} \,\, \mathrm{When} \,\,g_1 \ge g_2\,, \,\frac{\partial {\rho }(g_1,g_2)}{\partial {g_2}}\ge 0\,\, \mathrm{for}\; \mathrm{both}\,\, \beta >1 \,\,\mathrm{and}\,\, \beta <1. \end{aligned}$$

From cases 3 and 4 above, we conclude that \(\rho (g_1,g_2)\) is a decreasing function.

Therefore, we may conclude that \(A_{\beta }({{\hat{T}}}) \le A_{\beta }({{\hat{S}}})\) if \({{\hat{T}}} \subseteq {{\hat{S}}}\).

(\(\aleph _4\)). \(A_{\beta }({{\hat{T}}}) = A_{\beta }(({{\hat{T}}})^c)\).

We know that \(({{\hat{T}}})^c = \{\langle y_i,\nu _{{{\hat{T}}}}(y_i) , \mu _{{{\hat{T}}}}(y_i) \rangle |y_i \in Y \}\). This implies

$$\begin{aligned} \mu _{({{\hat{T}}})^c}(y_i) = \nu _{{{\hat{T}}}}(y_i), \nu _{({{\hat{T}}})^c}(y_i) = \mu _{{{\hat{T}}}}(y_i). \end{aligned}$$

We get from (24)

$$\begin{aligned} A_{\beta }({{\hat{T}}}) = A_{\beta }(({{\hat{T}}})^c). \end{aligned}$$
(31)

Therefore, \(A_{\beta }({{\hat{T}}})\) is a valid intuitionistic fuzzy entropy measure. Measure (24) also satisfies the following additional properties.

Theorem 4.2

For any two IFSs \({{\hat{T}}}\) and \({{\hat{S}}}\) satisfying \({{\hat{T}}} \subseteq {{\hat{S}}}\) or \({{\hat{S}}} \subseteq {{\hat{T}}}\), the following holds:

$$\begin{aligned} A_{\beta }({{\hat{T}}} \cup {{\hat{S}}}) + A_{\beta }({{\hat{T}}} \cap {{\hat{S}}}) = A_{\beta }({{\hat{T}}}) + A_{\beta }({{\hat{S}}}). \end{aligned}$$
(32)

Proof

Suppose \(Y_1\) and \(Y_2\) are two parts of Y,

$$\begin{aligned} Y_1 = \{y_i \in Y : {{\hat{T}}} \subseteq {{\hat{S}}}\},Y_2 = \{y_i \in Y : {{\hat{S}}} \subseteq {{\hat{T}}}\}. \end{aligned}$$
(33)

We get for all \(y_i \in Y_1\) and \(y_i \in Y_2\),

$$\begin{aligned} \begin{aligned}&\mu _{{{\hat{T}}}}(y_i) \le \mu _{{{\hat{S}}}}(y_i), \nu _{{{\hat{T}}}}(y_i) \ge \nu _{{{\hat{S}}}}(y_i);\\&\mu _{{{\hat{T}}}}(y_i) \ge \mu _{{{\hat{S}}}}(y_i), \nu _{{{\hat{T}}}}(y_i) \le \nu _{{{\hat{S}}}}(y_i). \end{aligned} \end{aligned}$$
(34)

From (33) and (34), identity (32) may be proved easily. \(\square\)

Corollary

Let \({{\hat{T}}} \in IFS(Y)\) and let \(({{\hat{T}}})^c\) denote the complement of \({{\hat{T}}}\). Then

$$\begin{aligned} A_{\beta }({{\hat{T}}})= A_{\beta }(({{\hat{T}}})^c)= A_{\beta }({{\hat{T}}} \cup ({{\hat{T}}})^c)= A_{\beta }({{\hat{T}}} \cap ({{\hat{T}}})^c). \end{aligned}$$
(35)

Theorem 4.3

If \({{\hat{T}}}\) is a crisp set, then \(A_{\beta }({{\hat{T}}})\) attains its minimum value, and if \({{\hat{T}}}\) is the most intuitionistic fuzzy set, then \(A_{\beta }({{\hat{T}}})\) attains its maximum value. Hence, the maximum and minimum values are attained independently of the parameter \(\beta\).

5 Application of the proposed measure in decision-making

The VIKOR method of Opricovic (1998) determines the best alternative on the basis of the closeness of the alternatives to the extreme solutions, i.e., the positive and negative ideal solutions. Based on the proposed intuitionistic fuzzy measure, this section presents a stepwise algorithm for the proposed VIKOR method. The determination of justified criteria weights is therefore quite important; the criteria weights are divided into two types, namely subjective and objective weights.

5.1 Algorithm to determine criteria weights

Criteria weights play a deciding role in finding the solution of multi-criteria decision-making (MCDM) problems. The algorithm to determine the weights based on the proposed entropy is as follows:

  1. An MCDM problem may be represented by an \(m \times n\) matrix, with \(m\) rows representing the alternatives \((\varphi _i)_{1 \le i \le m}\) and \(n\) columns representing the criteria \((\varrho _j)_{1 \le j \le n}\). Consider the intuitionistic fuzzy decision matrix specified by

    (36)

    where \(d_{ij} = (\bar{\mu _{ij}},\bar{\nu _{ij}})\); \(i = 1,2,...,m\) and \(j = 1,2,...,n\).

  2. In this step, using (24), the intuitionistic fuzzy entropy of each criterion column of \({{\tilde{D}}}\) is obtained as \(F_j\), \(j = 1,2,...,n\).

The whole process of criteria weight evaluation is divided into two parts as follows:

5.1.1 (a) In case the criteria weights are known partially

Practically, there are two constraints in criteria weight determination. First, it is difficult to find an expert with expertise in all fields; second, hiring experts from all fields is a costly affair. It is, however, easy to extract their views in forms other than precise numbers, for example, as linguistic variables or intervals. In such cases, only partial details about the criteria weights are available. The details provided by the decision-makers can be compiled into a set denoted by \(W^T\). We use the minimum-entropy concept proposed by Wang and Wang (2012) to extract the criteria weights from the set \(W^T\).

The entropy value of an alternative \(\varphi _i\) over the criteria \(\varrho _j\) is specified by:

$$\begin{aligned} F_j(\varphi _i)= \sum _{j=1}^n \zeta _j F_{\beta }^{\beta ^{-1}}(d_{ij}), \end{aligned}$$
(37)

where

$$\begin{aligned} \begin{aligned} {}&F_{\beta }^{\beta ^{-1}}(d_{ij}) \\&\quad = \frac{1}{n(\beta ^{-1}-\beta )}\sum _{i=1}^{m}log \left[ \frac{\left( ((\mu _{T}(y_i))^{\beta } + (\nu _{T}(y_i))^{\beta }) \times (\mu _{T}(y_i) + \nu _{T}(y_i))^{(1-\beta )} + 2^{1-\beta } \pi _{T}(y_i)\right) ^{\frac{1}{\beta }} }{\left( ((\mu _{T}(y_i))^{\beta ^{-1}} + (\nu _{T}(y_i))^{\beta ^{-1}}) \times (\mu _{T}(y_i) + \nu _{T}(y_i))^{(1-\beta ^{-1})} + 2^{1-\beta ^{-1}} \pi _{T}(y_i)\right) ^{\beta } }\right] . \end{aligned} \end{aligned}$$
(38)

To determine the optimal criteria weights, we construct a linear programming model specified by

$$\begin{aligned} \begin{aligned} {}&min F = \sum _{i=1}^m F_j(\varphi _i) = \sum _{i=1}^m \left[ \sum _{j=1}^n \zeta _j F_{\beta }^{\beta ^{-1}}(d_{ij})\right] \\&\quad = \frac{1}{n(\beta ^{-1}-\beta )}\sum _{i=1}^{m} \sum _{j=1}^n \zeta _j \log \left[ \frac{\left( ((\mu _{T}(y_i))^{\beta } + (\nu _{T}(y_i))^{\beta }) \times (\mu _{T}(y_i) + \nu _{T}(y_i))^{(1-\beta )} + 2^{1-\beta } \pi _{T}(y_i)\right) ^{\frac{1}{\beta }} }{\left( ((\mu _{T}(y_i))^{\beta ^{-1}} + (\nu _{T}(y_i))^{\beta ^{-1}}) \times (\mu _{T}(y_i) + \nu _{T}(y_i))^{(1-\beta ^{-1})} + 2^{1-\beta ^{-1}} \pi _{T}(y_i)\right) ^{\beta } }\right] , \end{aligned} \end{aligned}$$
(39)

subject to \(\sum _{j=1}^n \zeta _j = 1\) and \(\zeta _j \in W^T\). On solving (39), the criteria weight vector is given by \(\arg \min F = (\zeta _1,\zeta _2,...,\zeta _n)^{'}\).

5.1.2 (b) In case the criteria weights are unknown

In the case where the criteria weights are unknown, we employ the procedure introduced by Chen and Li (2010) as follows:

$$\begin{aligned} \zeta _j = \frac{1- F_j}{n- \sum _{j=1}^n F_j}; j= 1,2,...,n, \end{aligned}$$
(40)

where

$$\begin{aligned} \begin{aligned} {}&F_{\beta }^{\beta ^{-1}}(d_{ij}) \\&\quad = \frac{1}{n(\beta ^{-1}-\beta )}\sum _{i=1}^{m}log \left[ \frac{\left( ((\mu _{T}(y_i))^{\beta } + (\nu _{T}(y_i))^{\beta }) \times (\mu _{T}(y_i) + \nu _{T}(y_i))^{(1-\beta )} + 2^{1-\beta } \pi _{T}(y_i)\right) ^{\frac{1}{\beta }} }{\left( ((\mu _{T}(y_i))^{\beta ^{-1}} + (\nu _{T}(y_i))^{\beta ^{-1}}) \times (\mu _{T}(y_i) + \nu _{T}(y_i))^{(1-\beta ^{-1})} + 2^{1-\beta ^{-1}} \pi _{T}(y_i)\right) ^{\beta } }\right] . \end{aligned} \end{aligned}$$
(41)
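Equation (40) is a one-line computation once the per-criterion entropy values \(F_j\) are known; the sketch below (our own function name) shows it, noting that criteria with lower entropy carry more discriminating information and so receive larger weights:

```python
def objective_weights(F):
    """Chen-Li entropy weights, Eq. (40): F is the list of per-criterion
    entropy values F_j, and the returned weights sum to 1."""
    n = len(F)
    denom = n - sum(F)
    return [(1.0 - f) / denom for f in F]
```

For example, with entropies 0.7, 0.5 and 0.9, the second criterion (lowest entropy) gets the largest weight.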

5.2 VIKOR method

In the VIKOR method, multi-criteria decision-making (MCDM) problems are solved in the setting of IFSs. We introduce a new criteria-weighting method that combines subjective and objective weights. On the basis of correlation coefficients, this section presents a stepwise algorithm for the proposed IF-VIKOR method. Consider an MCDM problem with \(m\) alternatives \(\varphi _i (i=1,2,...,m)\) and decision-makers \(FY_j (j=1,2,...,n)\) who decide on the best alternative. Each decision-maker has a weight \(\omega _j (j=1,2,...,n)\) satisfying \(\sum _{j=1}^n \omega _j = 1\). The proposed VIKOR method is summarized in the following steps.

5.2.1 (a) Construction of IF decision matrix

Compile the information obtained from the decision-makers \(FY^p\) into an intuitionistic fuzzy decision matrix. Let \(d_{ij}^p = (\mu _{ij}^p,\nu _{ij}^p)\) be the IFN given by the \(p\)th decision-maker. The aggregated entries \(d_{ij}\) of the matrix are IFNs computed as below:

$$\begin{aligned} \begin{aligned}&d_{ij} = \sum _{p=1}^k \omega _p d_{ij}^p \\&= \left( \frac{\prod _{p=1}^k (\mu _{ij}^p)^{\omega _p}}{\prod _{p=1}^k (\mu _{ij}^p)^{\omega _p} + \prod _{p=1}^k(1-\mu _{ij}^p)^{\omega _p}}, \frac{\prod _{p=1}^k (\nu _{ij}^p)^{\omega _p}}{\prod _{p=1}^k (\nu _{ij}^p)^{\omega _p} + \prod _{p=1}^k(1-\nu _{ij}^p)^{\omega _p}}\right) ,\\&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\mathrm{where} \,\,i = 1,2,...,m \,\,\mathrm{and}\,\, j = 1,2,...,n. \end{aligned}\nonumber \\ \end{aligned}$$
(42)
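The aggregation in Eq. (42) can be sketched as follows; this is our own minimal illustration of the weighted geometric fusion, with one helper applied to the membership and non-membership components separately:

```python
def aggregate_ifn(ifns, omega):
    """Aggregate the IFNs d_ij^p = (mu^p, nu^p) of k decision-makers with
    weights omega (summing to 1) into one IFN, per Eq. (42)."""
    def fuse(values):
        num = 1.0      # weighted geometric mean of the degrees
        comp = 1.0     # weighted geometric mean of the complements
        for v, w in zip(values, omega):
            num *= v ** w
            comp *= (1.0 - v) ** w
        return num / (num + comp)
    mus = [m for m, _ in ifns]
    nus = [n for _, n in ifns]
    return (fuse(mus), fuse(nus))
```

With a single decision-maker of weight 1, or identical opinions, the aggregation returns the input IFN unchanged, which is a quick sanity check on the formula.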

5.2.2 (b) Normalize the IF decision matrix

Let \(({{\tilde{t}}}_{ij})\) be the intuitionistic fuzzy decision matrix normalized by the method proposed by Xu and Hu (2010):

$$\begin{aligned} {{\tilde{t}}}_{ij} = \frac{\beth (\Upsilon _j^{++},d_{ij})}{\beth (\Upsilon _j^{++},\Upsilon _j^{--})}. \end{aligned}$$
(43)

5.2.3 (c) Determination of subjective weights

Let \(\zeta _j^{p} = (\mu _j^p ,\nu _j^p)\) be the weight given by the decision-maker \(FY^p\). We then calculate the IF weights \((\zeta _j)\) for the different criteria by using the SIFWA operator:

$$\begin{aligned} \begin{aligned} \zeta _j&= SIFWA\left( \zeta _j^1,\zeta _j^2,...,\zeta _j^k\right) = \sum _{p=1}^k \omega _p \zeta _j^p \\&= \left( \frac{\prod _{p=1}^k (\mu _{j}^p)^{\omega _p}}{\prod _{p=1}^k (\mu _{j}^p)^{\omega _p} + \prod _{p=1}^k(1-\mu _{j}^p)^{\omega _p}}, \frac{\prod _{p=1}^k (\nu _{j}^p)^{\omega _p}}{\prod _{p=1}^k (\nu _{j}^p)^{\omega _p} + \prod _{p=1}^k(1-\nu _{j}^p)^{\omega _p}}\right) , \end{aligned}\nonumber \\ \end{aligned}$$
(44)

where \(\zeta _j = (\mu _j,\nu _j), j= 1,2,...,n\).

5.2.4 (d) Normalize the subjective weights

We obtain the normalized subjective weights \((\zeta _j^z)\) (Li et al. 2015; Boran et al. 2011), satisfying \(\sum _{j=1}^n \zeta _j^z = 1\), as:

$$\begin{aligned} \zeta _j^z = \frac{\mu _j + \tau _j\left( \frac{\mu _j}{\mu _j+\nu _j}\right) }{\sum _{j=1}^n \left( \mu _j + \tau _j\left( \frac{\mu _j}{\mu _j+\nu _j}\right) \right) }, \end{aligned}$$
(45)

where \(\tau _j = 1-\mu _j-\nu _j\).
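Equation (45) converts the intuitionistic weights into crisp weights summing to 1 by crediting each criterion with a share of its hesitancy; a minimal sketch (function name ours):

```python
def normalize_subjective(zeta):
    """Normalize IF weights zeta_j = (mu_j, nu_j) into crisp weights
    summing to 1, per Eq. (45); tau_j = 1 - mu_j - nu_j is the hesitancy,
    split in proportion to mu_j / (mu_j + nu_j)."""
    scores = [m + (1.0 - m - n) * (m / (m + n)) for m, n in zeta]
    total = sum(scores)
    return [s / total for s in scores]
```

The returned vector always sums to 1, as required before combining with the objective weights.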

5.2.5 (e) Determination of objective weights

The objective weights \(\zeta _j^b\) are calculated as described in Sect. 5.1 (a) and (b).

5.2.6 (f) Determine the solutions and cost criteria

We define the relative intuitionistic fuzzy best solution \(\Upsilon _j^{++} = (\mu _j^{++},\nu _j^{++})\) and the relative intuitionistic fuzzy worst solution \(\Upsilon _j^{--} = (\mu _j^{--},\nu _j^{--})\) as follows:

$$\begin{aligned} \Upsilon _j^{++} = {\left\{ \begin{array}{ll} max_i\ d_{ij} , &{} \text {for benefit criteria}\\ min_i\ d_{ij} , &{} \text {for cost criteria}; \end{array}\right. } \end{aligned}$$
(46)

and

$$\begin{aligned} \Upsilon _j^{--} = {\left\{ \begin{array}{ll} min_i\ d_{ij} , &{} \text {for benefit criteria}\\ max_i\ d_{ij} , &{} \text {for cost criteria}. \end{array}\right. } \end{aligned}$$
(47)
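A sketch of Eqs. (46)–(47) for one criterion column follows. Since the text does not fix how two IFNs are compared under \(\max\) and \(\min\), the sketch assumes the common score function \(\mu - \nu\) as the ordering; both the ordering and the function name are our assumptions:

```python
def ideal_solutions(column, benefit=True):
    """Relative IF best/worst solutions for one criterion column, per
    Eqs. (46)-(47).  `column` is a list of IFNs (mu, nu); IFNs are
    compared by the score mu - nu (an assumed ordering)."""
    best = max(column, key=lambda d: d[0] - d[1])
    worst = min(column, key=lambda d: d[0] - d[1])
    # benefit criteria: higher score is better; cost criteria: reversed
    return (best, worst) if benefit else (worst, best)
```

For a cost criterion the roles of best and worst simply swap, mirroring the two cases of (46) and (47).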

5.2.7 (g) Calculation of \(T_i\) and \(R_i\)

Let us find the values of \(T_i\), \(R_i\) and \(Q_i\), \(i=1,2,...,m\), where \(T_i\) is the group utility value, \(R_i\) is the individual regret value, and \(Q_i\) is the compromise value:

$$\begin{aligned} T_i= & {} \sum _{j=1}^n \left( \Theta \zeta _j^z + (1-\Theta )\zeta _j^b\right) \tilde{t_{ij}} = \sum _{j=1}^n \widehat{w_j}\tilde{t_{ij}}, \end{aligned}$$
(48)
$$\begin{aligned} R_i= & {} \max _j (\widehat{w_j}\tilde{t_{ij}}), \end{aligned}$$
(49)

where \(\widehat{w_j} = \Theta \zeta _j^z + (1-\Theta )\zeta _j^b\) is the amalgamation of the subjective and objective weights, and \(\Theta\) governs the trade-off between them, lying between 0 and 1, i.e., \(\Theta \in [0,1]\); we take \(\Theta = 0.5\).

5.2.8 (h) Determination of VIKOR index \((Q_i)\)

$$\begin{aligned} Q_i = \Psi \frac{T_i-T^{--}}{T^{++} - T^{--}} + (1-\Psi )\frac{R_i-R^{--}}{R^{++}-R^{--}}, \end{aligned}$$
(50)

where \(\Psi\) and \((1-\Psi )\) represent the weights for \(T_i\) and \(R_i\), respectively; we set \(\Psi = 0.5\). In (50), we take \(T^{++} = \max _i T_i\), \(T^{--} = \min _i T_i\), \(R^{++} = \max _i R_i\) and \(R^{--} = \min _i R_i\).
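Steps (g) and (h) can be sketched together; this is our own minimal illustration of Eqs. (48)–(50) for a normalized decision matrix and combined weights:

```python
def vikor_indices(t_norm, w_hat, psi=0.5):
    """Group utility T_i and individual regret R_i (Eqs. (48)-(49)) and
    VIKOR index Q_i (Eq. (50)).  `t_norm` is the m x n normalized matrix
    and `w_hat` the n combined weights."""
    T = [sum(w * t for w, t in zip(w_hat, row)) for row in t_norm]
    R = [max(w * t for w, t in zip(w_hat, row)) for row in t_norm]
    Tp, Tm = max(T), min(T)          # T^{++}, T^{--}
    Rp, Rm = max(R), min(R)          # R^{++}, R^{--}
    Q = [psi * (Ti - Tm) / (Tp - Tm) + (1 - psi) * (Ri - Rm) / (Rp - Rm)
         for Ti, Ri in zip(T, R)]
    return T, R, Q
```

The alternative with the smallest \(Q_i\) (here the third row) is ranked first, matching Step (i) below.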

5.2.9 (i) The values of \(T_i , R_i\) and \(Q_i\) are arranged in ascending order

Then we rank the alternatives. The alternatives satisfy the following conditions:

\({{\bar{C1}}}\) (acceptable advantage) If \(Q(\Upsilon ^2)-Q(\Upsilon ^1) \ge \frac{1}{n-1}\), where \(\Upsilon ^1\) and \(\Upsilon ^2\) are the first- and second-ranked alternatives in the \(Q_i\) column.

\({{\bar{C2}}}\) (acceptable stability) The alternative \(\Upsilon ^1\) should also be ranked first in \(R_i\) and \(T_i\) columns.

The alternative \(\Upsilon ^1\) will be the most desirable one if both conditions are satisfied concurrently. If the two conditions are not satisfied simultaneously, then we proceed to compromise solutions as follows:

  1. If the condition \({{\bar{C2}}}\) is not satisfied, then \((\Upsilon ^1,\Upsilon ^2)\) is the set of compromise solutions.

  2. If the condition \({{\bar{C1}}}\) is not satisfied, then the set \((\Upsilon ^1,\Upsilon ^2,...,\Upsilon ^{M})\) constitutes the compromise solution, where \(\Upsilon ^{M}\) is defined by

$$\begin{aligned} Q(\Upsilon ^{M})- Q(\Upsilon ^1) < \frac{1}{n-1}. \end{aligned}$$
(51)

The number n in (51) denotes the total number of criteria and the number M represents the maximum of ranks of the alternatives in column \(Q_i\) satisfying (51).

Fig. 1
figure 1

Graphical display of the proposed decision-making method

The step-by-step procedure of the proposed VIKOR method is demonstrated with the help of Fig. 1. After compiling the information from the different experts, the IF decision matrix is constructed using Eq. (42). In the next step, the IF decision matrix is normalized with the help of Eq. (43). In the following steps, the criteria weights and the extreme solutions are determined with the help of Eqs. (44)–(47). After that, the VIKOR index is determined from the group utility and individual regret values using Eq. (50). In the final step, the alternatives are ranked.

6 Numerical example

We now solve a numerical problem by applying the proposed multi-criteria decision-making (MCDM) VIKOR method.

6.1 Approach 1: in case criteria weights are known partially

A laptop manufacturer wants to hire a supplier for the supply of parts. It receives five options, say \(\varphi _i(i=1,2,3,4,5)\). To select the most appropriate supplier, the company has hired five experts, say \(FY_i (i=1,2,3,4,5)\). The council of the company has fixed five criteria, given by Functionality \((\xi _1)\), Reliability \((\xi _2)\), Customer Satisfaction \((\xi _3)\), Quality \((\xi _4)\), and Cost \((\xi _5)\).

Tables 1 and 2 express the ratings of the alternatives and the criteria weights as linguistic terms represented by IFNs. Note that the IFNs could be defined on the basis of historical data and/or a questionnaire answered by experts of the concerned domain. The experts' assessments and the importance weights of the criteria are shown in Tables 3 and 4.

Step 1: Intuitionistic fuzzy decision matrix obtained by Eq. 42 is depicted in Table 5.

Step 2: The subjective criteria weights calculated by Eq. 44 are depicted in Table 6.

Step 3: In this step, we normalized the subjective criteria weights with the help of Eq. 45 given in Table 7.

Step 4: In order to compute the objective weights, let us consider the partially known weight information given by the set:

$$\begin{aligned} {{\hat{M}}}= & {} \{ 0.15 \le \zeta _1 \le 0.30, 0.20 \le \zeta _2 \le 0.45, 0.25 \nonumber \\\le & {} \zeta _3 \le 0.50, 0.10 \le \zeta _4 \le 0.25 ,0.30 \le \zeta _5 \le 0.65 \}.\nonumber \\ \end{aligned}$$
(52)

The programming model for the objective weights is constructed as:

$$\begin{aligned} \begin{aligned} \min F&= 0.7104 \zeta _1 +0.5325 \zeta _2 + 0.5979 \zeta _3 +0.2518 \zeta _4 \\&\quad + 0.7127 \zeta _5,\\&\text {subject to } {\left\{ \begin{array}{ll} 0.15 \le \zeta _1 \le 0.30;\\ 0.20 \le \zeta _2 \le 0.45;\\ 0.25 \le \zeta _3 \le 0.50;\\ 0.10 \le \zeta _4 \le 0.25;\\ 0.30 \le \zeta _5 \le 0.65;\\ \zeta _1 +\zeta _2 +\zeta _3 +\zeta _4 +\zeta _5 = 1. \end{array}\right. } \end{aligned} \end{aligned}$$
(53)

The objective criteria weight vector is obtained by solving (53):

$$\begin{aligned} \zeta = (0.15, 0.20, 0.25, 0.10, 0.30)^{'}. \end{aligned}$$
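Model (53) is a linear program over box constraints with a single sum constraint, so it admits a simple greedy solution: start every weight at its lower bound and assign the remaining mass to the cheapest objective coefficients first. The sketch below (our own solver, not from the source) applies it to the data of (53); note that the lower bounds there already sum to 1, so the lower-bound point is the only feasible solution:

```python
def min_weighted_sum(costs, lower, upper):
    """Minimize sum(c_j * z_j) subject to lower_j <= z_j <= upper_j and
    sum(z_j) = 1, as in model (53).  Greedy: start at the lower bounds,
    then give the leftover mass to the smallest coefficients first."""
    z = list(lower)
    remaining = 1.0 - sum(lower)
    for j in sorted(range(len(costs)), key=lambda j: costs[j]):
        add = min(upper[j] - lower[j], remaining)
        z[j] += add
        remaining -= add
    return z

# Data of model (53)
costs = [0.7104, 0.5325, 0.5979, 0.2518, 0.7127]
lower = [0.15, 0.20, 0.25, 0.10, 0.30]
upper = [0.30, 0.45, 0.50, 0.25, 0.65]
```

Running the solver on these data reproduces the vector \(\zeta = (0.15, 0.20, 0.25, 0.10, 0.30)\) reported in the text.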

Step 5: Normalized intuitionistic Fuzzy decision matrix is obtained by Eq. 43 which is depicted in Table 8.

Table 1 Ratings of the alternatives in linguistic terms
Table 2 Ratings of the criteria weights in linguistic terms
Table 3 Decision-maker’s output
Table 4 Importance weights for criteria

Step 6: The values of \(T_i\), \(R_i\) and \(Q_i\) for all alternatives are calculated using Eqs. (48)–(50), where \(T_i\) is the utility value, \(R_i\) is the individual regret and \(Q_i\) is the VIKOR index.

Step 7: Next, we rank the values of \(T\), \(R\) and \(Q\), which are calculated in Table 9 and depicted in Table 10. The first rank is given to the alternative having the lowest value among all the alternatives, and so on.

Step 8: In this step, we provide the preferential sequences of the alternatives on the basis of the rating scale; for example, corresponding to \(T\), \(\varphi _4\) has the lowest rating, whereas \(\varphi _5\) has the highest rating. In a similar fashion, we order the alternatives with respect to \(R\) and \(Q\) depending upon their corresponding values.

Analysis of Table 11 reveals that the alternatives \(\varphi _5\) and \(\varphi _1\) are ranked first and second, respectively, in the \(Q_i\) column. Also, \(Q(\varphi _1) -Q(\varphi _5) = 0.0616 -0 =0.0616 < \frac{1}{5-1} = 0.25\); therefore, \({{\bar{C1}}}\) is not satisfied. Moreover, the alternative \(\varphi _5\) is also ranked first in the columns of \(T\) and \(R\), which means \({{\bar{C2}}}\) is satisfied. Since \(\varphi _5> \varphi _1>\varphi _2>\varphi _3>\varphi _4\), we obtain \(\varphi _5\) as the best alternative.

6.2 Sensitivity analysis

In this section, we carry out a sensitivity analysis to show the behavior of the compromise solution. We analyze how the reliability of the solution and the fuzzy information are affected by changing the weight \(\Psi\); in this way, the compromise solution becomes more versatile for practical purposes. We have taken distinct values of \(\Psi\) to show its impact on the compromise solution. The values obtained by changing the weight \(\Psi\) are shown in Table 12, and the ranking obtained is \(\varphi _5> \varphi _1>\varphi _2>\varphi _3>\varphi _4\). Analysis of Table 12 shows that the same ranking sequence is obtained for the different values of \(\Psi\): \(\Psi\) has no effect on the ranking sequence, and consequently the compromise solution has taken all the linguistic information into account. The sensitivity outcomes at the distinct values of \(\Psi\) are depicted in Fig. 2.

Table 5 IF decision matrix
Table 6 Subjective criteria weights

6.3 Approach 2: in case of unknown criteria weights

In this section, we solve the above example for the case of unknown criteria weights. The step-by-step computational procedure is as follows:

Step 1: First, we calculate the criteria weights with the help of Eq. (40). The calculated values are shown in (54):

$$\begin{aligned}&\zeta _1 = 0.1861,\zeta _2 = 0.1980, \zeta _3 = 0.2083 , \zeta _4 = 0.2222 \nonumber \\&\,\,and\,\,\zeta _5 = 0.1854. \end{aligned}$$
(54)

Step 2: Next, we calculate the positive \((\Upsilon _j^{++})\) and negative \((\Upsilon _j^{--})\) ideal solutions using Eqs. (46) and (47). The computed values are depicted in Table 13.

Table 7 Normalized subjective criteria weights
Table 8 Normalized intuitionistic fuzzy decision matrix \((\tilde{t_{ij}})\)
Table 9 The values of T, R and Q
Table 10 Rating for T, R and Q
Table 11 Preferential sequences of alternatives

Step 3: In this step, we have computed the values of \(T_i\) , \(R_i\) and \(Q_i\) using Eqs. (48)–(50) corresponding to the different alternatives. The calculated values are demonstrated in Table 14.

Step 4: Depending on the values of \(T_i\) , \(R_i\) and \(Q_i\), ranking was assigned to the different alternatives as shown in Table 15.

Step 5: The preferential sequences of alternatives on the basis of Table 14 are shown in Table 16.

Analysis of Table 16 shows that the alternatives \(\varphi _5\) and \(\varphi _1\) are ranked first and second, respectively, in the \(Q_i\) column. Also, \(Q(\varphi _1) -Q(\varphi _5) = 0.3990-0 =0.3990 > \frac{1}{5-1} = 0.25\); therefore, \({{\bar{C1}}}\) is satisfied. Moreover, the alternative \(\varphi _5\) is also ranked first in the columns of \(T\) and \(R\), which means \({{\bar{C2}}}\) is satisfied. Since \(\varphi _5> \varphi _1>\varphi _2>\varphi _3>\varphi _4\), we obtain \(\varphi _5\) as the best alternative. Therefore, the supplier represented by the alternative \(\varphi _5\) is the most preferred one.

Table 12 Values of \(T_i, R_i\) and \(Q_i\) obtained on changing weight \((\Psi )\)

6.3.1 Comparative analysis

To further explore the efficacy and performance of the proposed work, a comparison has been carried out on the same numerical example with the same assumptions and weight information. Radhika and Sammeer Kumar (2017) introduced an MCDM method that ranks the available alternatives based on a distance measure. Divsalar (2017) presented the VIKOR method as a qualitative multi-attribute group decision-making approach based on extended hesitant fuzzy linguistic term set (EHFLTS) distance measures. Victor (2017) presented a hybrid system combining FCM with the VIKOR method, which

Fig. 2
figure 2

Analysis of the \(\Psi\) to the outcomes of the alternatives

Table 13 Positive ideal and negative ideal solution

is also based on a distance measure, whereas Afful-Dadzie (2014) proposed a fuzzy VIKOR framework to rank alternatives in a fuzzy environment, using linguistic variables to deal with uncertainty and subjectivity. We have proposed a new entropy measure, which is scale invariant for complete probability distributions, together with a correlation coefficient-based VIKOR approach for ranking and for measuring uncertainty in place of a distance measure. From the exploration of Table 17, it is observed that the ranking orders obtained by the different researchers differ, but the optimal alternative is the same, which reveals that the proposed method has evident reliability in the intuitionistic fuzzy domain. Moreover, the strategy of the proposed work is more efficient than the existing approaches and is valid for different multi-criteria decision-making (MCDM) problems.

Table 14 The values of T, R and Q
Table 15 Rating for T, R and Q
Table 16 Preferential sequences of alternatives
Table 17 Comparative results

The proposed work can easily handle the cases where the weight information is unknown or only partially known, whereas the compared approaches cannot manage such situations. The proposed work incorporates entropy measures, in addition to the normalization of subjective and objective weights, to compute more consistent and reliable information. Additionally, we have evaluated the objective weight vector by using a mathematical programming model. The main advantage of this work is that we have normalized the decision matrix using the correlation coefficient within the VIKOR method. Secondly, the proposed work provides an adjustable technique to deal with MCDM problems where the weight information is partially known or fully unknown. Thus, we can conclude that the outcomes of the proposed work are more practicable and straightforward for ranking.

7 Conclusion

In this paper, we generalized an entropy measure for intuitionistic fuzzy sets, namely a scale-invariant generalization of Shannon entropy. Moreover, a novel multi-criteria decision-making (MCDM) process using a weighted correlation coefficient-based VIKOR approach was introduced. The working of the proposed decision-making process is thoroughly illustrated with the help of two numerical examples on the selection of the most appropriate supplier by ranking. The numerical examples are discussed under two different approaches to the evaluation of criteria weights: in the first approach, we consider the case of partially known criteria weights, whereas unknown criteria weights are discussed in the second approach. The output of the proposed MCDM method is compared with other well-known decision-making methods such as the VIKOR method. The proposed IF entropy measure can be extended to more general sets such as interval-valued intuitionistic fuzzy sets, picture fuzzy sets, q-rung orthopair fuzzy sets, etc. Further, some representative computational intelligence algorithms can be used to solve such problems, for example monarch butterfly optimization (MBO), the earthworm optimization algorithm (EWA), elephant herding optimization (EHO), and the moth search (MS) algorithm.