1 Introduction

Multi-criteria decision making (MCDM), a branch of decision theory, is one of the fastest growing research areas of operations research, from both theoretical and practical points of view. In recent times, several new approaches have been developed to solve real-life MCDM problems such as renewable energy selection, flood reservoir management, cloud service provider selection, and vehicle insurance service quality assessment. Recently, the medical decision-making domain has received wide attention from researchers. For instance, Harris (2003) scrutinized the process of finding and choosing a suitable physician for patients and concluded that decision support tools need to be developed for healthcare systems. Madupu (2009) focused on selecting the best physician based on the types of information resources. Fhkam and Lam (2010) studied the aspects that affect patients' choice of doctors. Verhoef et al. (2014) centered their study on the rapid development of information technology that allows patients to rate physicians on online websites on the basis of their experience. To date, only a few researchers have focused on developing new decision support tools for the selection of suitable physicians. Sun et al. (2017) developed an integrated approach based on the TODIM and ELECTRE methods to solve the physician selection problem under a single-valued neutrosophic fuzzy environment. Hu et al. (2017) extended the VIKOR method and established a projection-based VIKOR method under an interval neutrosophic fuzzy setting for solving a doctor selection problem.

In MCDM problems, decision makers desire to obtain the most proficient solution, but this is not always possible owing to the inaccurate and uncertain information available in a given situation. Fuzzy set (FS) theory, developed by Zadeh (1965), has gained considerable attention in the field of decision making because FSs are well suited to dealing with uncertain information. Since an FS is characterized only by a membership degree, it cannot express support and opposition in the decision information concurrently. Atanassov (1986) pioneered the concept of intuitionistic fuzzy sets (IFSs) to overcome this shortcoming of FSs. As an extension of FSs, IFSs are characterized by membership, non-membership and hesitation degrees and satisfy the condition that the sum of the membership and non-membership degrees is less than or equal to 1. In recent times, IFSs have been widely applied in decision making, medical diagnosis, image processing and pattern recognition, and many decision-making methods have been developed under the intuitionistic fuzzy environment (Tanev 1995; Ansari et al. 2018; Mishra et al. 2016, 2017a, b, c; Rani and Jain 2017; Rani et al. 2018; Mishra et al. 2018a, b).

In 1989, Atanassov (1989) pioneered the concept of the intuitionistic fuzzy set of second type (IFT2S) together with its geometrical interpretation. It is characterized by membership and non-membership degrees satisfying the condition that the sum of the squares of the membership and non-membership degrees is less than or equal to 1. He also discussed the general form of the IFT2S, namely the intuitionistic fuzzy set of n-th type or p-th type (IFS-nT or IFS-pT) (Atanassov 1999; Atanassov et al. 2017; Atanassov and Vassilev 2018). After that, many authors (Vassilev 2006; Vassilev et al. 2008; Parvathi et al. 2012) focused their studies on IFT2Ss and IFS-nTs. Later, Yager (2013, 2014) studied the same concept as IFT2Ss under the new name Pythagorean fuzzy set (PFS). Some authors (Yager 2013, 2014; Yager and Abbasov 2013) gave a figure showing the regions of, and the relation between, PFSs and IFSs. Owing to their capability in handling complicated decision-making problems, PFSs have attracted much attention from decision makers in a very short period of time. Yager and Abbasov (2013) discussed concepts related to PFSs and showed a relationship between Pythagorean fuzzy numbers (PFNs) and complex numbers. In addition, they discussed a decision-making approach in which the criterion values are given by complex numbers. Zhang and Xu (2014) defined basic operations such as addition, multiplication, union and intersection for PFNs and then extended an MCDM method, namely TOPSIS, to the Pythagorean fuzzy environment. Recently, a growing body of literature has shown the rapid growth and potential of PFSs (Gou et al. 2016; Peng and Yang 2015; Zeng et al. 2016; Peng and Yang 2016; Peng et al. 2017; Li and Zeng 2018).

Due to the diverse nature of MCDM problems, several methods have been developed to cope with them under different fuzzy environments. The weighted aggregated sum product assessment (WASPAS) method, proposed by Zavadskas et al. (2014), is a novel utility theory-based approach that has been widely extended to different domains. WASPAS integrates the weighted sum model (WSM) and the weighted product model (WPM) and is more accurate than either alone. For instance, Zavadskas et al. (2013) focused their study on verifying the robustness of the MOORA (multi-objective optimization on the basis of ratio analysis) and WASPAS approaches. Bagočius et al. (2014) developed an MCDM approach based on WASPAS to select and rank candidate locations for wind farms and then assessed the types of wind turbines in the Baltic Sea offshore area. Bitarafan et al. (2014) presented a WASPAS- and SWARA-based approach to analyze real-time intelligent sensors for structural health monitoring of bridges. Recently, Mardani et al. (2017) presented an overview of two utility-based MCDM approaches, SWARA (stepwise weight assessment ratio analysis) and WASPAS, and their applications in different fuzzy settings. Peng and Dai (2017) extended the MABAC (multi-attributive border approximation area comparison), WASPAS and COPRAS (complex proportional assessment) approaches to solve hesitant fuzzy soft decision-making problems. Mishra et al. (2018a, b) developed an intuitionistic fuzzy WASPAS method to compare the performance of telephone service providers in Madhya Pradesh (India). Mishra and Rani (2018) introduced the WASPAS approach under an interval-valued intuitionistic fuzzy environment and applied it to a reservoir flood control management policy selection problem. As the IFT2S is an effective concept for dealing with uncertainty and vagueness, the present study is set within the context of IFT2Ss.
In the medical domain, patients want to select the best physician from among many, a choice influenced by several factors such as rating scores, quality of care, comfort, available reviews and other considerations; as a result, physician selection can be treated as an MCDM problem. In this paper, a novel approach is developed to handle the MCDM problem of physician selection under the IFT2S environment. The contributions of the present study are as follows:

  1. Divergence and entropy measures are an important concern for researchers in the study of uncertainty. Thus, we develop new entropy and divergence measures for IFT2Ss.

  2. The classical WASPAS method is extended to handle MCDM problems under the IFT2S environment with objective and subjective criterion weights.

  3. In this method, the criterion weights are calculated using the proposed entropy and divergence measures.

  4. A decision-making problem of selecting a suitable physician from a set of alternatives is used to illustrate the applicability of the proposed WASPAS method.

  5. A comparative study and sensitivity analysis confirm the validity of the results obtained by the proposed method.

The remainder of this paper is organized as follows: Sect. 2 explains some fundamental results. Sect. 3 first reviews the existing literature and then develops new entropy and divergence measures. Sect. 4 extends the classical WASPAS method to the IFT2S context with unknown criterion and decision-maker weights, based on the proposed divergence and entropy measures. Sect. 5 presents a numerical study to illustrate the validity and applicability of the proposed WASPAS method, together with a comparative study with existing methods and a sensitivity analysis. Finally, Sect. 6 gives the concluding remarks.

2 Basic concepts

In this section, some fundamental concepts related to intuitionistic fuzzy sets (IFSs), intuitionistic fuzzy sets of second type (IFT2Ss) and information measures (entropy and divergence measures) of IFT2Ss are presented.

Definition 2.1

(Atanassov 1986) An IFS \( M \) in a finite universe of discourse \( X = \left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\} \) is defined as

$$ M = \left\{ {\left. {\left\langle {x_{i} ,\mu_{M} (x_{i} ),\nu_{M} (x_{i} )} \right\rangle } \right|\,x_{i} \in X} \right\}, $$

where \( \mu_{M} :X \to \left[ {0,1} \right] \) is the degree of membership and \( \nu_{M} :X \to \left[ {0,1} \right] \) the degree of non-membership of the element \( x_{i} \in X \) in \( M \) such that \( 0 \le \mu_{M} (x_{i} ) + \nu_{M} (x_{i} ) \le 1. \) The hesitant degree of \( x_{i} \in X \) in \( M \) is expressed by \( \pi_{M} (x_{i} ) = 1 - \mu_{M} (x_{i} ) - \nu_{M} (x_{i} ). \) For ease, the intuitionistic fuzzy number (IFN) is denoted by \( \alpha = M\left( {\mu_{\alpha } ,\nu_{\alpha } } \right) \) which satisfies \( \mu_{\alpha } ,\nu_{\alpha } \in \left[ {0,1} \right] \) and \( 0 \le \mu_{\alpha } + \nu_{\alpha } \le 1. \)

Definition 2.2

(Atanassov 1989; Yager 2013) An IFT2S (or PFS) \( A \) in a finite universe of discourse \( X \) is defined as

$$ A = \left\{ {\left. {\left\langle {x_{i} ,\mu_{A} (x_{i} ),\nu_{A} (x_{i} )} \right\rangle } \right|x_{i} \in X} \right\}, $$

where \( \mu_{A} :X \to \left[ {0,1} \right] \) and \( \nu_{A} :X \to \left[ {0,1} \right] \) denote the degrees of membership and non-membership of the element \( x_{i} \in X, \) respectively and for every \( x_{i} \in X, \)

$$ 0 \le \left( {\mu_{A} \left( {x_{i} } \right)} \right)^{2} + \left( {\nu_{A} \left( {x_{i} } \right)} \right)^{2} \le 1. $$

For each \( x_{i} \in X, \) the degree of indeterminacy or hesitation is given by \( \pi_{A} (x_{i} ) = \sqrt {1 - \mu_{A}^{2} (x_{i} ) - \nu_{A}^{2} (x_{i} )} . \) Zhang and Xu (2014) denoted the Pythagorean fuzzy number (PFN) by \( \eta = A(\mu_{\eta } ,\nu_{\eta } ) \) which fulfills \( \mu_{\eta } ,\nu_{\eta } \in \left[ {0,1} \right] \) and \( 0 \le \mu_{\eta }^{2} + \nu_{\eta }^{2} \le 1. \)

Definition 2.3

(Zhang and Xu 2014) Let \( \eta = A(\mu_{\eta } ,\nu_{\eta } ) \) be a PFN. Then, the score function and the accuracy function of \( \eta \) are defined by

$$ {\mathbb{S}}\left( \eta \right) = \left( {\mu_{\eta } } \right)^{2} - \left( {\nu_{\eta } } \right)^{2} ,\quad \hbar \left( \eta \right) = \left( {\mu_{\eta } } \right)^{2} + \left( {\nu_{\eta } } \right)^{2} ,\quad {\text{where}}\;{\mathbb{S}}\left( \eta \right) \in \left[ { - 1,1} \right]\;{\text{and}}\;\hbar \left( \eta \right) \in \left[ {0,1} \right]. $$

Since \( {\mathbb{S}}\left( \eta \right) \in \left[ { - 1,1} \right], \) when several score values are aggregated by the linear weighted summation method, positive score values may be offset by negative ones. Therefore, a modified score function of PFNs is defined as follows:

Definition 2.4

Let \( \eta = A(\mu_{\eta } ,\nu_{\eta } ) \) be a PFN. Then, the normalized score and uncertainty functions of \( \eta \) are defined by

$$ {\mathbb{S}}^{*} \left( \eta \right) = \frac{1}{2}\left( {{\mathbb{S}}\left( \eta \right) + 1} \right),\quad \hbar^{^\circ } \left( \eta \right) = 1 - \hbar \left( \eta \right),\quad {\text{such}}\;{\text{that}}\;{\mathbb{S}}^{*} \left( \eta \right),\;\hbar^{^\circ } \left( \eta \right) \in \left[ {0,1} \right]. $$

For any two PFNs \( \eta_{1} = A(\mu_{{\eta_{1} }} ,\nu_{{\eta_{1} }} ) \) and \( \eta_{2} = A(\mu_{{\eta_{2} }} ,\nu_{{\eta_{2} }} ), \)

  (i) If \( {\mathbb{S}}^{*} \left( {\eta_{1} } \right) > {\mathbb{S}}^{*} \left( {\eta_{2} } \right), \) then \( \eta_{1} > \eta_{2} ; \)

  (ii) If \( {\mathbb{S}}^{*} \left( {\eta_{1} } \right) = {\mathbb{S}}^{*} \left( {\eta_{2} } \right), \) then

    (a) if \( \hbar^{^\circ } \left( {\eta_{1} } \right) > \hbar^{^\circ } \left( {\eta_{2} } \right), \) then \( \eta_{1} < \eta_{2} ; \)

    (b) if \( \hbar^{^\circ } \left( {\eta_{1} } \right) = \hbar^{^\circ } \left( {\eta_{2} } \right), \) then \( \eta_{1} = \eta_{2} . \)
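The ranking rules of Definition 2.4 can be sketched in code as follows (a minimal illustration; the helper names and the representation of a PFN as a \( (\mu ,\nu ) \) tuple are our own assumptions):

```python
def score(mu, nu):
    return mu**2 - nu**2                  # S(eta) in [-1, 1]

def accuracy(mu, nu):
    return mu**2 + nu**2                  # h(eta) in [0, 1]

def normalized_score(mu, nu):
    return 0.5 * (score(mu, nu) + 1.0)    # S*(eta) in [0, 1]

def uncertainty(mu, nu):
    return 1.0 - accuracy(mu, nu)         # h°(eta) in [0, 1]

def compare(p, q):
    """Return 1 if p > q, -1 if p < q, 0 if p = q, per Definition 2.4."""
    s1, s2 = normalized_score(*p), normalized_score(*q)
    if s1 != s2:
        return 1 if s1 > s2 else -1
    u1, u2 = uncertainty(*p), uncertainty(*q)
    if u1 != u2:
        # larger uncertainty ranks lower, by rule (a)
        return -1 if u1 > u2 else 1
    return 0
```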

Definition 2.5

(Atanassov 1989; De et al. 2000; Yager 2013, 2014) Let \( \eta = A(\mu_{\eta } ,\nu_{\eta } ), \)\( \eta_{1} = A(\mu_{{\eta_{1} }} ,\nu_{{\eta_{1} }} ) \) and \( \eta_{2} = A(\mu_{{\eta_{2} }} ,\nu_{{\eta_{2} }} ) \) be three PFNs. Then, the following operations hold for PFNs:

  (i) \( \eta^{c} = A\left( {\nu_{\eta } ,\mu_{\eta } } \right); \)

  (ii) \( \eta_{1} \oplus \eta_{2} = A\left( {\sqrt {\mu_{{\eta_{1} }}^{2} + \mu_{{\eta_{2} }}^{2} - \mu_{{\eta_{1} }}^{2} \mu_{{\eta_{2} }}^{2} } ,\nu_{{\eta_{1} }} \nu_{{\eta_{2} }} } \right); \)

  (iii) \( \eta_{1} \otimes \eta_{2} = A\left( {\mu_{{\eta_{1} }} \mu_{{\eta_{2} }} ,\sqrt {\nu_{{\eta_{1} }}^{2} + \nu_{{\eta_{2} }}^{2} - \nu_{{\eta_{1} }}^{2} \nu_{{\eta_{2} }}^{2} } } \right); \)

  (iv) \( \lambda \eta = A\left( {\sqrt {1 - \left( {1 - \mu_{\eta }^{2} } \right)^{\lambda } } ,\left( {\nu_{\eta } } \right)^{\lambda } } \right),\quad \lambda > 0; \)

  (v) \( \eta^{\lambda } = A\left( {\left( {\mu_{\eta } } \right)^{\lambda } ,\sqrt {1 - \left( {1 - \nu_{\eta }^{2} } \right)^{\lambda } } } \right),\quad \lambda > 0. \)
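The operations of Definition 2.5 translate directly into code. The sketch below (function names are ours; PFNs are assumed to be \( (\mu ,\nu ) \) pairs) implements operations (ii)–(v); each is closed, i.e., it returns a pair that again satisfies \( \mu^{2} + \nu^{2} \le 1 \):

```python
import math

def pfn_add(p, q):
    """eta1 (+) eta2, Definition 2.5(ii)."""
    (m1, n1), (m2, n2) = p, q
    return (math.sqrt(m1**2 + m2**2 - m1**2 * m2**2), n1 * n2)

def pfn_mul(p, q):
    """eta1 (x) eta2, Definition 2.5(iii)."""
    (m1, n1), (m2, n2) = p, q
    return (m1 * m2, math.sqrt(n1**2 + n2**2 - n1**2 * n2**2))

def pfn_scale(lam, p):
    """lambda * eta, Definition 2.5(iv), lambda > 0."""
    m, n = p
    return (math.sqrt(1.0 - (1.0 - m**2) ** lam), n ** lam)

def pfn_power(p, lam):
    """eta ** lambda, Definition 2.5(v), lambda > 0."""
    m, n = p
    return (m ** lam, math.sqrt(1.0 - (1.0 - n**2) ** lam))
```

Note that \( 1 \cdot \eta = \eta \) and \( \eta^{1} = \eta, \) which is a quick consistency check on (iv) and (v).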

Definition 2.6

(Peng et al. 2017) An entropy measure \( E:{\text{PFS}}\left( X \right) \to \left[ {0,1} \right] \) is a mapping which satisfies the following axioms:

  • (A1) \( 0 \le E\left( A \right) \le 1; \)

  • (A2) \( E\left( A \right) = 0 \) if \( A \) is a crisp set;

  • (A3) \( E\left( A \right) = 1 \) if \( \mu_{A} (x_{i} )\, = \,\nu_{A} (x_{i} ),\,\forall \,x_{i} \, \in \,X; \)

  • (A4) \( E\left( A \right) = E\left( {A^{c} } \right); \)

  • (A5) For each \( x_{i} \in X, \)\( E\left( A \right) \le E\left( B \right) \) if \( A \) is less than \( B, \)\( {\text{i}}.{\text{e}}.,\;\mu_{A} (x_{i} ) \le \mu_{B} (x_{i} ) \le \nu_{B} (x_{i} ) \le \nu_{A} (x_{i} ) \) or \( \nu_{A} (x_{i} ) \le \nu_{B} (x_{i} ) \le \mu_{B} (x_{i} ) \le \mu_{A} (x_{i} ). \)

Definition 2.7

(Montes et al. 2015) A real-valued function \( D:{\text{PFSs}}(X) \times {\text{PFSs}}(X) \to {\mathbb{R}} \) is said to be a divergence measure for PFSs if it satisfies the following requirements:

  • (P1) \( D\left( {A,\,B} \right)\, = \,D\left( {B,\,A} \right); \)

  • (P2) \( D\left( {A,\,B} \right)\, = \,0 \) if and only if \( A\, = \,B; \)

  • (P3) \( D\left( {A\, \cap \,C,\,B\, \cap \,C} \right)\, \le \,D\left( {A,\,B} \right) \) for every \( C \in {\text{PFSs}}(X); \)

  • (P4) \( D\left( {A\, \cup \,C,\,B\, \cup \,C} \right)\, \le \,D\left( {A,\,B} \right) \) for every \( C \in {\text{PFSs}}(X). \)

3 Information measures for IFT2Ss

Since entropy and divergence measures play a significant role in handling uncertainty, in this section some information measures (entropy and divergence measures) are proposed for IFT2Ss. Only a few entropy measures have been developed for PFSs (equivalently, IFT2Ss), and no study on divergence measures for PFSs is available in the literature.

3.1 Entropy measures for IFT2Ss

Corresponding to Mishra and Rani (2017), we propose a new entropy measure for IFT2Ss, given as

$$ \begin{aligned} E_{1} (A) & = \tfrac{1}{{n\,\left( {1 - {\text{e}}^{{{\raise0.5ex\hbox{$\scriptstyle { - 1}$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 2$}}}} } \right)}}\sum\limits_{i\, = 1}^{n} {\left[ {\left\{ {1 - {\text{e}}^{{ - \left( {\tfrac{{\nu_{A}^{2} \left( {x_{i} } \right)\, + \,1 - \,\mu_{A}^{2} \left( {x_{i} } \right)}}{2}} \right)}} } \right\}} \right.} I_{{\left[ {\mu_{A}^{2} \left( {x_{i} } \right)\, \ge \,\nu_{A}^{2} \left( {x_{i} } \right)} \right]}} \\ & \quad \left. { + \,\left\{ {1 - {\text{e}}^{{ - \left( {\tfrac{{\mu_{A}^{2} \left( {x_{i} } \right)\, + \,1 - \,\nu_{A}^{2} \left( {x_{i} } \right)}}{2}} \right)}} } \right\}I_{{\left[ {\mu_{A}^{2} \left( {x_{i} } \right)\, < \,\nu_{A}^{2} \left( {x_{i} } \right)} \right]}} } \right]. \\ \end{aligned} $$
(1)
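As an illustration of Eq. (1), the measure can be computed as follows (a sketch under the assumption that a PFS is represented as a list of \( (\mu ,\nu ) \) pairs; the function name is ours):

```python
import math

def entropy_E1(pfs):
    """Entropy measure E1 of Eq. (1); pfs is a list of (mu_i, nu_i) pairs."""
    n = len(pfs)
    total = 0.0
    for mu, nu in pfs:
        m2, n2 = mu**2, nu**2
        if m2 >= n2:                      # indicator I[mu^2 >= nu^2]
            total += 1.0 - math.exp(-(n2 + 1.0 - m2) / 2.0)
        else:                             # indicator I[mu^2 < nu^2]
            total += 1.0 - math.exp(-(m2 + 1.0 - n2) / 2.0)
    return total / (n * (1.0 - math.exp(-0.5)))
```

A crisp set yields entropy 0 and a set with \( \mu_{A} (x_{i} ) = \nu_{A} (x_{i} ) \) for all \( x_{i} \) yields entropy 1, matching axioms (A2) and (A3).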

Theorem 3.1

The mapping \( E_{1} \left( A \right), \) defined by (1), is an entropy measure for IFT2S\( \left( X \right). \)

Proof

In order to prove this theorem, the function (1) must satisfy the axiomatic requirements (A1)–(A5) of Definition 2.6.

(A1) For PFSs, since \( 0\, \le \,\mu_{A}^{2} \left( {x_{i} } \right)\, + \,\nu_{A}^{2} \left( {x_{i} } \right)\, \le \,1, \) it is clear from (1) that \( 0\, \le \,E_{1} \left( A \right)\, \le \,1. \)

(A2) Let \( A \) be a crisp set, i.e., \( \mu_{A} \left( {x_{i} } \right)\, = \,1, \)\( \nu_{A} \left( {x_{i} } \right)\, = \,0 \) or \( \mu_{A} \left( {x_{i} } \right)\, = \,0, \)\( \nu_{A} \left( {x_{i} } \right)\, = \,1. \) Then, we obtain \( E_{1} \left( A \right)\, = 0. \)

Conversely, suppose that \( E_{1} \left( A \right)\, = 0 \) and let \( \psi_{A} (x_{i} ) = \frac{{\mu_{A}^{2} (x_{i} )\, + \,1\, - \,\nu_{A}^{2} (x_{i} )}}{2}. \) Under this substitution, (1) becomes

$$ E_{1} \left( A \right) = \tfrac{1}{{n\left( {1 - {\text{e}}^{{{\raise0.7ex\hbox{${ - 1}$} \!\mathord{\left/ {\vphantom {{ - 1} 2}}\right.\kern-0pt} \!\lower0.7ex\hbox{$2$}}}} } \right)}}\sum\limits_{i\, = 1}^{n} {\left[ {\left\{ {1 - {\text{e}}^{{ - \left( {1 - \,\psi_{A} \left( {x_{i} } \right)} \right)}} } \right\}I_{{\left[ {\psi_{A} \left( {x_{i} } \right)\,\, \ge \,\tfrac{1}{2}} \right]}} + \left\{ {1 - {\text{e}}^{{ - \psi_{A} \left( {x_{i} } \right)}} } \right\}I_{{\left[ {\psi_{A} \left( {x_{i} } \right)\,\, < \,\tfrac{1}{2}} \right]}} } \right]} . $$
(2)

Corresponding to Mishra and Rani (2017), (2) becomes zero if and only if \( \psi_{A} (x_{i} ) = 0 \) or 1, i.e.,

$$ \frac{{\mu_{A}^{2} (x_{i} ) + 1 - \nu_{A}^{2} (x_{i} )}}{2} = 0 $$
(3)
$$ {\text{or}}\;\frac{{\mu_{A}^{2} (x_{i} ) + 1 - \nu_{A}^{2} (x_{i} )}}{2} = 1. $$
(4)

From Definition 2.2, \( 0\, \le \,\mu_{A}^{2} \left( {x_{i} } \right)\, + \,\nu_{A}^{2} \left( {x_{i} } \right)\, \le \,1. \) Now, solving (3) and (4), we obtain \( \mu_{A} \left( {x_{i} } \right) = 0, \) \( \nu_{A} \left( {x_{i} } \right) = 1 \) or \( \mu_{A} \left( {x_{i} } \right) = 1, \) \( \nu_{A} \left( {x_{i} } \right) = 0, \) \( \forall \,x_{i} \, \in \,X, \) which implies that \( A \) is a crisp set.

(A3) It can easily be seen that if \( \mu_{A} (x_{i} )\, = \,\nu_{A} (x_{i} ),\,\forall \,x_{i} \, \in \,X, \) then \( E_{1} \left( A \right)\, = 1. \)

Conversely, if \( E_{1} \left( A \right)\, = 1, \) then it implies that

$$ \begin{aligned} E_{1} \left( A \right) & = \tfrac{1}{{n\left( {1 - {\text{e}}^{{{\raise0.7ex\hbox{${ - 1}$} \!\mathord{\left/ {\vphantom {{ - 1} 2}}\right.\kern-0pt} \!\lower0.7ex\hbox{$2$}}}} } \right)}}\sum\limits_{i\, = 1}^{n} {\left[ {\left\{ {1 - {\text{e}}^{{ - \left( {1 - \,\psi_{A} \left( {x_{i} } \right)} \right)}} } \right\}I_{{\left[ {\psi_{A} \left( {x_{i} } \right)\,\, \ge \,\tfrac{1}{2}} \right]}} + \left\{ {1 - {\text{e}}^{{ - \psi_{A} \left( {x_{i} } \right)}} } \right\}I_{{\left[ {\psi_{A} \left( {x_{i} } \right)\,\, < \,\tfrac{1}{2}} \right]}} } \right]} = 1, \\ & \Rightarrow \psi_{A} \left( {x_{i} } \right) = \frac{1}{2},\quad \forall x_{i} \in X, \\ & \Rightarrow \frac{{\mu_{A}^{2} (x_{i} ) + 1 - \nu_{A}^{2} (x_{i} )}}{2} = \frac{1}{2},\quad \forall x_{i} \in X. \\ & \Rightarrow \mu_{A}^{2} \left( {x_{i} } \right) = \nu_{A}^{2} \left( {x_{i} } \right),\quad \forall x_{i} \in X, \\ & \Rightarrow \mu_{A} \left( {x_{i} } \right) = \nu_{A} \left( {x_{i} } \right),\quad \forall x_{i} \in X. \\ \end{aligned} $$

(A4) It is obvious from (1) that \( E_{1} (A)\, = \,E_{1} (A^{c} ). \)

(A5) Let

$$ \psi \left( {x,y} \right) = \left[ {\left\{ {1 - {\text{e}}^{{ - \left( {\tfrac{y\, + \,1 - \,x}{2}} \right)}} } \right\}I_{{\left[ {x\, \ge \,y} \right]}} + \left\{ {1 - {\text{e}}^{{ - \left( {\tfrac{x\, + \,1\, - \,y}{2}} \right)}} } \right\}I_{{\left[ {x\, < \,y} \right]}} } \right], $$
(5)

where \( x,\,y\, \in \,\left[ {0,\,1} \right]. \) Differentiating (5) partially w.r.t. ‘x’ and ‘y’ respectively, we have

$$ \frac{{\partial \psi \left( {x,y} \right)}}{\partial x} = \frac{1}{2}\left[ { - \left\{ {{\text{e}}^{{ - \left( {\tfrac{y\, + 1\, - \,x}{2}} \right)}} } \right\}I_{{\left[ {x\, \ge \,y} \right]}} + \left\{ {{\text{e}}^{{ - \left( {\tfrac{x\, + 1\, - \,y}{2}} \right)}} } \right\}I_{{\left[ {x\, < \,y} \right]}} } \right], $$
(6)
$$ \frac{{\partial \psi \left( {x,y} \right)}}{\partial y} = \frac{1}{2}\left[ {\left\{ {e^{{ - \left( {\tfrac{y\, + 1\, - \,x}{2}} \right)}} } \right\}I_{{\left[ {x\, \ge \,y} \right]}} - \left\{ {e^{{ - \left( {\tfrac{x\, + 1\, - \,y}{2}} \right)}} } \right\}I_{{\left[ {x\, < \,y} \right]}} } \right]. $$
(7)

To obtain the critical point of \( \psi \left( {x,\,y} \right), \) we set \( \tfrac{{\partial \psi \left( {x,\,y} \right)}}{\partial x}\, = 0 \) and \( \tfrac{{\partial \psi \left( {x,\,y} \right)}}{\partial y}\, = 0, \) which gives \( x = y. \) From (6) and (7), we get

\( \tfrac{{\partial \psi \left( {x,\,y} \right)}}{\partial x}\, \ge \,0 \) for \( x\, \le \,y \) and \( \tfrac{{\partial \psi \left( {x,\,y} \right)}}{\partial x}\, \le \,0 \) for \( x\, \ge \,y. \)

Thus, \( \psi \left( {x,\,y} \right) \) is an increasing function w.r.t. \( x \) for \( x \le \,y \) and decreasing for \( x\, \ge \,y. \) In the same way, we can prove that \( \tfrac{{\partial \psi \left( {x,\,y} \right)}}{\partial y}\, \le \,0 \) for \( x\, \le \,y \) and \( \tfrac{{\partial \psi \left( {x,\,y} \right)}}{\partial y}\, \ge \,0 \) for \( x\, \ge \,y. \)

Suppose \( A\, \subseteq \,B \) and that the universal set \( X \) is partitioned into two parts \( X_{1} \) and \( X_{2} \) such that \( X_{1} \cap X_{2} = \phi \) and \( X = \,X_{1} \, \cup \,X_{2} . \) Furthermore, assume that \( \mu_{A} (x_{i} )\, \le \,\mu_{B} (x_{i} )\, \le \,\nu_{B} (x_{i} )\, \le \,\nu_{A} (x_{i} ) \) for \( x_{i} \, \in \,X_{1} \) and \( \mu_{A} (x_{i} )\, \ge \,\mu_{B} (x_{i} )\, \ge \,\nu_{B} (x_{i} )\, \ge \,\nu_{A} (x_{i} ) \) for \( x_{i} \, \in \,X_{2} . \) Then, by the monotonicity of \( \psi \left( {x,\,y} \right), \) we have \( E_{1} \left( A \right)\, \le \,E_{1} \left( B \right). \)

Theorem 3.2

For \( A \in {\text{IFT2S}}\left( X \right), \) a real-valued function \( E_{2} \left( A \right) \) defined by

$$ \begin{aligned} E_{2} \left( A \right) & = \tfrac{1}{{n\,\sqrt e \,\left( {\sqrt e \, - 1} \right)}}\sum\limits_{i\, = 1}^{n} {\left[ {{\text{e}} - \left( {\tfrac{{\mu_{A}^{2} \left( {x_{i} } \right)\, + 1 - \,\nu_{A}^{2} \left( {x_{i} } \right)}}{2}} \right)} \right.} {\text{e}}^{{\left( {\tfrac{{\mu_{A}^{2} \left( {x_{i} } \right)\, + 1 - \,\nu_{A}^{2} \left( {x_{i} } \right)}}{2}} \right)}} \\ & \quad \left. { - \left( {\tfrac{{\nu_{A}^{2} \left( {x_{i} } \right)\, + 1 - \,\mu_{A}^{2} \left( {x_{i} } \right)}}{2}} \right){\text{e}}^{{\left( {\tfrac{{\nu_{A}^{2} \left( {x_{i} } \right)\, + 1 - \,\mu_{A}^{2} \left( {x_{i} } \right)}}{2}} \right)}} } \right] \\ \end{aligned} $$
(8)

is an entropy measure for IFT2S( \( X \) ).

Proof

The proof is similar to Theorem 3.1.
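Since \( \tfrac{{\mu_{A}^{2} (x_{i} ) + 1 - \nu_{A}^{2} (x_{i} )}}{2} \) and \( \tfrac{{\nu_{A}^{2} (x_{i} ) + 1 - \mu_{A}^{2} (x_{i} )}}{2} \) always sum to 1, Eq. (8) can be evaluated with a single auxiliary quantity per element. A minimal computational sketch (function name ours; a PFS is assumed to be a list of \( (\mu ,\nu ) \) pairs):

```python
import math

def entropy_E2(pfs):
    """Entropy measure E2 of Eq. (8); pfs is a list of (mu_i, nu_i) pairs."""
    n = len(pfs)
    total = 0.0
    for mu, nu in pfs:
        a = (mu**2 + 1.0 - nu**2) / 2.0   # b = (nu^2 + 1 - mu^2)/2 = 1 - a
        b = 1.0 - a
        total += math.e - a * math.exp(a) - b * math.exp(b)
    se = math.sqrt(math.e)
    return total / (n * se * (se - 1.0))
```

As with \( E_{1}, \) a crisp set gives 0 (since \( {\text{e}} - 1 \cdot {\text{e}}^{1} - 0 \cdot {\text{e}}^{0} = 0 \)) and \( \mu_{A} = \nu_{A} \) gives 1 (since \( {\text{e}} - \sqrt {\text{e}} = \sqrt {\text{e}} \left( {\sqrt {\text{e}} - 1} \right) \)).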

Several further entropy measures for IFT2Ss can be obtained by generalizing intuitionistic fuzzy entropy measures (Ansari et al. 2018; Mishra et al. 2017a; Mishra 2016), as follows:

$$ \begin{aligned} E_{3} (A) & = \frac{1}{{n\left( {{\text{e}}^{{(1 - 0.5^{\alpha } )}} - 1} \right)}}\sum\limits_{i\, = 1}^{n} {\left[ {1 - \left( {\frac{{\mu_{A}^{2} (x_{i} ) + 1 - \nu_{A}^{2} (x_{i} )}}{2}} \right){\text{e}}^{{\left( {1 - \,\left( {\frac{{\mu_{A}^{2} (x_{i} )\, + 1 - \,\nu_{A}^{2} (x_{i} )}}{2}} \right)^{\alpha } } \right)}} } \right.} \\ & \quad \left. { - \,\left( {\frac{{v_{A}^{2} (x_{i} ) + 1 - \mu_{A}^{2} (x_{i} )}}{2}} \right){\text{e}}^{{\left( {1 - \,\left( {\frac{{\nu_{A}^{2} (x_{i} )\, + 1 - \,\mu_{A}^{2} (x_{i} )}}{2}} \right)^{\alpha } } \right)}} } \right]. \\ \end{aligned} $$
(9)
$$ \begin{aligned} E_{4} (A) & = \frac{1}{{n{\text{e}}^{{{\raise0.5ex\hbox{$\scriptstyle \beta $} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 2$}}}} \left( {{\text{e}}^{{{\raise0.5ex\hbox{$\scriptstyle \beta $} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 2$}}}} - 1} \right)}}\sum\limits_{i\, = 1}^{n} {\left[ {{\text{e}}^{\beta } - \left( {\frac{{\mu_{A}^{2} (x_{i} ) + 1 - \nu_{A}^{2} (x_{i} )}}{2}} \right){\text{e}}^{{\beta \,\left( {\frac{{\mu_{A}^{2} (x_{i} )\, + 1 - \,\nu_{A}^{2} (x_{i} )}}{2}} \right)}} } \right.} \\ & \quad \left. { - \left( {\frac{{v_{A}^{2} (x_{i} ) + 1 - \mu_{A}^{2} (x_{i} )}}{2}} \right){\text{e}}^{{\beta \,\left( {\frac{{v_{A}^{2} (x_{i} )\, + 1 - \,\mu_{A}^{2} (x_{i} )}}{2}} \right)}} } \right],\quad \beta \ne 1(\beta > 0). \\ \end{aligned} $$
(10)
$$ \begin{aligned} E_{5} (A) & = - \frac{2}{n}\sum\limits_{i\, = 1}^{n} {\left[ {\ln \left\{ {\left( {\frac{{\mu_{A}^{2} (x_{i} ) + 1 - \nu_{A}^{2} (x_{i} )}}{2}} \right){\text{e}}^{{\left( {\frac{{\mu_{A}^{2} (x_{i} )\, + 1 - \,\nu_{A}^{2} (x_{i} )}}{2}} \right)}} } \right.} \right.} \\ & \quad + \left. {\left. {\left( {\frac{{\nu_{A}^{2} (x_{i} ) + 1 - \mu_{A}^{2} (x_{i} )}}{2}} \right){\text{e}}^{{\left( {\frac{{\nu_{A}^{2} (x_{i} )\, + 1 - \,\mu_{A}^{2} (x_{i} )}}{2}} \right)}} } \right\} - 1} \right]. \\ \end{aligned} $$
(11)
$$ \begin{aligned} E_{6} (A) & = \frac{2}{n}\sum\limits_{i\, = 1}^{n} {\ln \left[ {\left\{ {\left( {\frac{{\mu_{A}^{2} (x_{i} ) + 1 - \nu_{A}^{2} (x_{i} )}}{2}} \right){\text{e}}^{{\left( {\frac{{\nu_{A}^{2} (x_{i} )\, + 1 - \,\mu_{A}^{2} (x_{i} )}}{2}} \right)}} } \right.} \right.} \, \\ & \quad + \left. {\left. {\left( {\frac{{\nu_{A}^{2} (x_{i} ) + 1 - \mu_{A}^{2} (x_{i} )}}{2}} \right){\text{e}}^{{\left( {\frac{{\mu_{A}^{2} (x_{i} )\, + 1 - \,\nu_{A}^{2} (x_{i} )}}{2}} \right)}} } \right\}} \right]. \\ \end{aligned} $$
(12)
$$ E_{7} (A) = \frac{1}{n}\sum\limits_{i = 1}^{n} {\left[ {1 - \sin \left\{ {\frac{{\left| {\mu_{A}^{2} (x_{i} ) - \nu_{A}^{2} (x_{i} )} \right|}}{{2\left( {1 + \pi_{A}^{2} (x_{i} )} \right)}}\pi } \right\}} \right]} . $$
(13)
$$ E_{8} (A) = \frac{1}{2n}\sum\limits_{i = 1}^{n} {\left[ {\sin \left( {\frac{{\mu_{A}^{2} (x_{i} ) + 1 - \nu_{A}^{2} (x_{i} )}}{2}} \right)\pi + \sin \left( {\frac{{\nu_{A}^{2} (x_{i} ) + 1 - \mu_{A}^{2} (x_{i} )}}{2}} \right)\pi } \right]} . $$
(14)

3.2 Divergence measure for IFT2Ss

Theorem 3.3

Let \( A,B \in {\text{IFT2Ss}}\left( X \right). \) Corresponding to Mishra et al. (2017b), the Jensen divergence measure for IFT2Ss is defined as

$$ \begin{aligned} D_{1} \left( {A,B} \right) & = - \frac{1}{{n\sqrt e \left( {\sqrt e - 1} \right)}}\sum\limits_{i\, = 1}^{n} {\left[ {\left( {\tfrac{{\left( {\mu_{A}^{2} (x_{i} ) + \mu_{B}^{2} (x_{i} )} \right) + 2 - \left( {\nu_{A}^{2} (x_{i} ) + \nu_{B}^{2} (x_{i} )} \right)}}{4}} \right){\text{e}}^{{\left( {\tfrac{{\left( {\mu_{A}^{2} (x_{i} )\, + \,\mu_{B}^{2} (x_{i} )\,} \right)\, + \,2\, - \,\left( {\nu_{A}^{2} (x_{i} )\, + \,\nu_{B}^{2} (x_{i} )} \right)}}{4}} \right)}} } \right.} \\ & \quad + \left( {\tfrac{{\left( {\nu_{A}^{2} (x_{i} )\, + \,\nu_{B}^{2} (x_{i} )\,} \right)\, + \,2\, - \,\left( {\mu_{A}^{2} (x_{i} )\, + \,\mu_{B}^{2} (x_{i} )} \right)}}{4}} \right){\text{e}}^{{\left( {\tfrac{{\left( {\nu_{A}^{2} (x_{i} )\, + \,\nu_{B}^{2} (x_{i} )\,} \right)\, + \,2\, - \,\left( {\mu_{A}^{2} (x_{i} )\, + \,\mu_{B}^{2} (x_{i} )} \right)}}{4}} \right)}} \\ & \quad - \frac{1}{2}\left\{ {\tfrac{{\left( {\mu_{A}^{2} (x_{i} )\, + 1 - \,\,\nu_{A}^{2} (x_{i} )\,} \right)}}{2}{\text{e}}^{{\tfrac{{\left( {\mu_{A}^{2} (x_{i} )\, + 1 - \,\,\nu_{A}^{2} (x_{i} )\,} \right)}}{2}}} } \right. + \tfrac{{\left( {\nu_{A}^{2} (x_{i} )\, + 1 - \,\,\mu_{A}^{2} (x_{i} )\,} \right)}}{2}{\text{e}}^{{\tfrac{{\left( {\nu_{A}^{2} (x_{i} )\, + 1 - \,\,\mu_{A}^{2} (x_{i} )\,} \right)}}{2}}} \\ & \quad \left. {\left. { + \tfrac{{\left( {\mu_{B}^{2} (x_{i} )\, + 1 - \,\,\nu_{B}^{2} (x_{i} )\,} \right)}}{2}{\text{e}}^{{\tfrac{{\left( {\mu_{B}^{2} (x_{i} )\, + 1 - \,\,\nu_{B}^{2} (x_{i} )\,} \right)}}{2}}} + \tfrac{{\left( {\nu_{B}^{2} (x_{i} )\, + 1 - \,\,\mu_{B}^{2} (x_{i} )\,} \right)}}{2}{\text{e}}^{{\tfrac{{\left( {\nu_{B}^{2} (x_{i} )\, + 1 - \,\,\mu_{B}^{2} (x_{i} )\,} \right)}}{2}}} } \right\}} \right] \\ \end{aligned} $$
(15)

is a divergence measure for IFT2Ss(\( X \)).

Proof

To prove this theorem, the measure (15) must satisfy conditions (P1)–(P4) of Definition 2.7. If \( A = \,\left( {1,\,0,\,0} \right) \) and \( B = \left( {0,\,1,\,0} \right) \) (or \( A = \,\left( {0,\,0,\,1} \right),\,B\, = \,\left( {0,\,1,\,0} \right), \) or \( A = \left( {1,\,0,\,0} \right),\;B = \left( {0,\,0,\,1} \right) \)), then \( D_{1} (A,\,B)\, \) attains its maximum value; thus, \( D_{1} \left( {A,\,B} \right)\, \ge \,0. \)

  • (P1) It is obvious from (15) that \( D_{1} \left( {A,\,B} \right)\, = \,D_{1} \left( {B,\,A} \right). \)

  • (P2) It is evident from (15).

  • (P3) For every \( A,B,C \in {\text{IFT2Ss}}(X), \) we have

    $$ \begin{aligned} \tfrac{\begin{subarray}{l} \left( {\hbox{min} \left\{ {\mu_{A}^{2} (x_{i} ),\,\mu_{C}^{2} (x_{i} )} \right\}} \right) \\ + \,2\, - \left( {\hbox{max} \left\{ {\nu_{A}^{2} (x_{i} ),\,\nu_{C}^{2} (x_{i} )} \right\}} \right) \end{subarray} }{4}{\text{e}}^{{\left( {\tfrac{\begin{subarray}{l} \left( {\hbox{min} \left\{ {\mu_{A}^{2} (x_{i} ),\,\mu_{C}^{2} (x_{i} )} \right\}} \right) \\ + \,2\, - \left( {\hbox{max} \left\{ {\nu_{A}^{2} (x_{i} ),\,\nu_{C}^{2} (x_{i} )} \right\}} \right) \end{subarray} }{4}} \right)}} & \le \left( {\tfrac{\begin{subarray}{l} \left( {\mu_{A}^{2} (x_{i} )\, + \,\mu_{B}^{2} (x_{i} )} \right)\,\, \\ + \,2\, - \,\left( {\nu_{A}^{2} (x_{i} )\, + \,\nu_{B}^{2} (x_{i} )} \right) \end{subarray} }{4}} \right){\text{e}}^{{\left( {\tfrac{\begin{subarray}{l} \left( {\mu_{A}^{2} (x_{i} )\, + \,\mu_{B}^{2} (x_{i} )} \right)\,\, \\ + \,2\, - \,\left( {\nu_{A}^{2} (x_{i} )\, + \,\nu_{B}^{2} (x_{i} )} \right) \end{subarray} }{4}} \right)}} \\ & \Rightarrow \tfrac{\begin{subarray}{l} \left( {\hbox{max} \left\{ {\nu_{A}^{2} (x_{i} ),\,\nu_{C}^{2} (x_{i} )} \right\}} \right)\, \\ + 2\, - \,\left( {\hbox{min} \left\{ {\mu_{A}^{2} (x_{i} ),\,\mu_{C}^{2} (x_{i} )} \right\}} \right) \end{subarray} }{4}{\text{e}}^{{\left( {\tfrac{\begin{subarray}{l} \left( {\hbox{max} \left\{ {\nu_{A}^{2} (x_{i} ),\,\nu_{C}^{2} (x_{i} )} \right\}} \right)\, \\ + 2\, - \,\left( {\hbox{min} \left\{ {\mu_{A}^{2} (x_{i} ),\,\mu_{C}^{2} (x_{i} )} \right\}} \right) \end{subarray} }{4}} \right)}} \le \left( {\tfrac{\begin{subarray}{l} \left( {\nu_{A}^{2} (x_{i} )\, + \,\nu_{B}^{2} (x_{i} )} \right)\, \\ + \,2\, - \,\left( {\mu_{A}^{2} (x_{i} )\, + \,\mu_{B}^{2} (x_{i} )} \right) \end{subarray} }{4}} \right){\text{e}}^{{\left( {\tfrac{\begin{subarray}{l} \left( {\nu_{A}^{2} (x_{i} )\, + \,\nu_{B}^{2} (x_{i} )} \right)\, \\ + \,2\, - \,\left( {\mu_{A}^{2} (x_{i} )\, + \,\mu_{B}^{2} (x_{i} )} \right) \end{subarray} }{4}} \right)}} \\ & \Rightarrow 
\tfrac{\begin{subarray}{l} \left( {\hbox{min} \left\{ {\mu_{A}^{2} (x_{i} ),\,\mu_{C}^{2} (x_{i} )} \right\}} \right) \\ + 1 - \left( {\hbox{max} \left\{ {\nu_{A}^{2} (x_{i} ),\,\nu_{C}^{2} (x_{i} )} \right\}} \right) \end{subarray} }{2}{\text{e}}^{{\left( {\tfrac{\begin{subarray}{l} \left( {\hbox{min} \left\{ {\mu_{A}^{2} (x_{i} ),\,\mu_{C}^{2} (x_{i} )} \right\}} \right) \\ + 1 - \left( {\hbox{max} \left\{ {\nu_{A}^{2} (x_{i} ),\,\nu_{C}^{2} (x_{i} )} \right\}} \right) \end{subarray} }{2}} \right)}} \le \tfrac{{\left( {\mu_{A}^{2} (x_{i} )\, + 1 - \,\,\nu_{A}^{2} (x_{i} )} \right)}}{2}{\text{e}}^{{\tfrac{{\left( {\mu_{A}^{2} (x_{i} )\, + 1 - \,\,\nu_{A}^{2} (x_{i} )} \right)}}{2}}} \\ & \Rightarrow \tfrac{\begin{subarray}{l} \left( {\hbox{max} \left\{ {\nu_{A}^{2} (x_{i} ),\,\nu_{C}^{2} (x_{i} )} \right\}} \right) \\ + 1 - \left( {\hbox{min} \left\{ {\mu_{A}^{2} (x_{i} ),\,\mu_{C}^{2} (x_{i} )} \right\}} \right) \end{subarray} }{2}{\text{e}}^{{\left( {\tfrac{\begin{subarray}{l} \left( {\hbox{max} \left\{ {\nu_{A}^{2} (x_{i} ),\,\nu_{C}^{2} (x_{i} )} \right\}} \right) \\ + 1 - \left( {\hbox{min} \left\{ {\mu_{A}^{2} (x_{i} ),\,\mu_{C}^{2} (x_{i} )} \right\}} \right) \end{subarray} }{2}} \right)}} \le \tfrac{{\left( {\nu_{A}^{2} (x_{i} )\, + 1 - \,\,\mu_{A}^{2} (x_{i} )} \right)}}{2}{\text{e}}^{{\tfrac{{\left( {\nu_{A}^{2} (x_{i} )\, + 1 - \,\,\mu_{A}^{2} (x_{i} )} \right)}}{2}}} \\ & \Rightarrow \tfrac{\begin{subarray}{l} \left( {\hbox{min} \left\{ {\mu_{B}^{2} (x_{i} ),\,\mu_{C}^{2} (x_{i} )} \right\}} \right) \\ + 1 - \left( {\hbox{max} \left\{ {\nu_{B}^{2} (x_{i} ),\,\nu_{C}^{2} (x_{i} )} \right\}} \right) \end{subarray} }{2}{\text{e}}^{{\left( {\tfrac{\begin{subarray}{l} \left( {\hbox{min} \left\{ {\mu_{B}^{2} (x_{i} ),\,\mu_{C}^{2} (x_{i} )} \right\}} \right) \\ + 1 - \left( {\hbox{max} \left\{ {\nu_{B}^{2} (x_{i} ),\,\nu_{C}^{2} (x_{i} )} \right\}} \right) \end{subarray} }{2}} \right)}} \le \tfrac{{\left( {\mu_{B}^{2} (x_{i} )\, + 1 - \,\,\nu_{B}^{2} (x_{i} 
)} \right)}}{2}{\text{e}}^{{\tfrac{{\left( {\mu_{B}^{2} (x_{i} )\, + 1 - \,\,\nu_{B}^{2} (x_{i} )} \right)}}{2}}} \\ \end{aligned} $$

    and

    $$ \tfrac{\begin{subarray}{l} \left( {\hbox{max} \left\{ {\nu_{B}^{2} (x_{i} ),\,\nu_{C}^{2} (x_{i} )} \right\}} \right) \\ + 1 - \left( {\hbox{min} \left\{ {\mu_{B}^{2} (x_{i} ),\,\mu_{C}^{2} (x_{i} )} \right\}} \right) \end{subarray} }{2}{\text{e}}^{{\left( {\tfrac{\begin{subarray}{l} \left( {\hbox{max} \left\{ {\nu_{B}^{2} (x_{i} ),\,\nu_{C}^{2} (x_{i} )} \right\}} \right) \\ + 1 - \left( {\hbox{min} \left\{ {\mu_{B}^{2} (x_{i} ),\,\mu_{C}^{2} (x_{i} )} \right\}} \right) \end{subarray} }{2}} \right)}} \le \tfrac{{\left( {\nu_{B}^{2} (x_{i} )\, + 1 - \,\,\mu_{B}^{2} (x_{i} )} \right)}}{2}{\text{e}}^{{\tfrac{{\left( {\nu_{B}^{2} (x_{i} )\, + 1 - \,\,\mu_{B}^{2} (x_{i} )} \right)}}{2}}} . $$

    This implies that \( D_{1} \left( {A \cap \,C,\,B\, \cap \,C} \right)\, \le \,D_{1} \left( {A,\,B} \right). \)

  • (P4) The proof is similar to that of (P3).

Theorem 3.4

The mapping defined by

$$ \begin{aligned} D_{2} \left( {A,B} \right) & = - \frac{1}{{2n\left( {1 - {\text{e}}^{ - 1} } \right)}}\,\sum\limits_{i\, = 1}^{n} {\left[ {\left\{ {{\text{e}}^{{ - \left( {\tfrac{{\left( {\mu_{B}^{2} (x_{i} )\, - \,\mu_{A}^{2} (x_{i} )\,} \right)\, + \,\left( {\nu_{A}^{2} (x_{i} )\, - \,\nu_{B}^{2} (x_{i} )} \right)}}{4}} \right)}} + {\text{e}}^{{ - \left( {\tfrac{{\left( {\mu_{A}^{2} (x_{i} )\, - \,\mu_{B}^{2} (x_{i} )\,} \right)\, + \,\left( {\nu_{B}^{2} (x_{i} )\, - \,\nu_{A}^{2} (x_{i} )} \right)}}{4}} \right)}} - 2} \right\}} \right.} I_{{\left[ {\mu_{A}^{2} \left( {x_{i} } \right)\, \ge \,\nu_{A}^{2} \left( {x_{i} } \right)} \right]}} \\ & \quad + \,\left. {\left\{ {{\text{e}}^{{ - \left( {\tfrac{{\left( {\mu_{A}^{2} (x_{i} )\, - \,\mu_{B}^{2} (x_{i} )\,} \right)\, + \,\left( {\nu_{B}^{2} (x_{i} )\, - \,\nu_{A}^{2} (x_{i} )} \right)}}{4}} \right)}} + {\text{e}}^{{ - \left( {\tfrac{{\left( {\mu_{B}^{2} (x_{i} )\, - \,\mu_{A}^{2} (x_{i} )\,} \right)\, + \,\left( {\nu_{A}^{2} (x_{i} )\, - \,\nu_{B}^{2} (x_{i} )} \right)}}{4}} \right)}} - 2} \right\}} \right]I_{{\left[ {\mu_{A}^{2} \left( {x_{i} } \right)\, < \,\nu_{A}^{2} \left( {x_{i} } \right)} \right]}} \\ \end{aligned} $$
(16)

is a divergence measure for IFT2Ss.

Proof

The proof is similar to that of Theorem 3.3.

4 Novel WASPAS approach for MCDM problems

This section extends the classical weighted aggregated sum product assessment (WASPAS) method to the IFT2S context. The proposed method is developed to handle MCDM problems in which the criteria and decision-expert weights are completely unknown. The algorithm of the WASPAS approach is as follows:

Step 1 Formulate the MCDM problem and then construct the decision matrix.

In an MCDM problem, let \( R = \left\{ {R_{1} ,R_{2} , \ldots ,R_{m} } \right\} \) be a set of alternatives and \( Q = \left\{ {Q_{1} ,Q_{2} , \ldots ,Q_{n} } \right\} \) be a set of criteria. A set of decision experts \( \left\{ {B_{1} ,B_{2} , \ldots B_{l} } \right\} \) gives their decisions on each alternative \( R_{i} \) with respect to the criterion \( Q_{j} \) in terms of linguistic variables. Let \( N\, = \,\left( {h_{ij}^{(k)} } \right),\,\,i\, = 1(1)m,\,j\, = 1(1)n \) be the linguistic decision matrix provided by the decision experts, where \( h_{ij}^{(k)} \) denotes the evaluation of alternative \( R_{i} \) with respect to criterion \( Q_{j} \) in terms of linguistic variables by the \( k^{th} \) decision expert.

Step 2 Determine the weights of the decision experts.

During the decision-making process, determining the weights of the decision experts is an important concern. For this purpose, suppose that the significances of the decision experts are expressed as linguistic variables given in terms of intuitionistic fuzzy numbers of second type (IFT2Ns). For the evaluation of the kth decision expert, let \( B_{k} \, = \,\left[ {\mu_{k} ,\,\nu_{k} ,\,\pi_{k} } \right] \) be the IFT2N; then the weight of the kth decision expert is obtained using the following formula (Boran et al. 2009):

$$ \lambda_{k} = \tfrac{{\left( {\mu_{k}^{2} \, + \,\pi_{k}^{2} \, \times \,\left( {\frac{{\mu_{k}^{2} }}{{\mu_{k}^{2} \, + \,\nu_{k}^{2} }}} \right)} \right)}}{{\sum\limits_{k\, = \,1}^{\ell } {\left( {\mu_{k}^{2} \, + \,\pi_{k}^{2} \, \times \,\left( {\frac{{\mu_{k}^{2} }}{{\mu_{k}^{2} \, + \,\nu_{k}^{2} }}} \right)} \right)} }},\quad k = 1\left( 1 \right)\ell . $$
(17)

Clearly, \( \lambda_{k} \, \ge \,0 \) and \( \sum\limits_{k\, = 1}^{\ell } {\lambda_{k} \, = \,1.} \)
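Equation (17) can be sketched in Python as follows; the function name and the sample IFT2Ns below are illustrative, not taken from the paper's tables:

```python
def expert_weights(experts):
    """Decision-expert weights from IFT2Ns (mu, nu, pi) via Eq. (17)."""
    scores = [mu**2 + pi**2 * (mu**2 / (mu**2 + nu**2))
              for mu, nu, pi in experts]
    total = sum(scores)
    return [s / total for s in scores]

# Illustrative linguistic ratings of three experts (not the paper's data)
lam = expert_weights([(0.90, 0.35, 0.26), (0.80, 0.50, 0.33), (0.70, 0.60, 0.39)])
```

By construction the resulting weights are nonnegative and sum to 1, as required by Eq. (17).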

Step 3: Aggregate the decision matrix.

To construct the aggregated intuitionistic fuzzy decision matrix of second type, all the individual matrices need to be combined into a single group matrix based on the decision experts’ opinions. To this end, the intuitionistic fuzzy weighted averaging operator of second type (IFT2WAO), equivalently the Pythagorean fuzzy weighted averaging operator (PFWAO) (Yager 2013), is applied, which yields \( {\mathbb{N}} = \left( {\varepsilon_{ij} } \right)_{m\, \times \,n} , \) where

$$ \varepsilon_{ij} = {\text{PFWA}}_{\lambda } \left( {h_{ij}^{(1)} ,h_{ij}^{(2)} , \ldots ,h_{ij}^{(\ell )} } \right) = \left( {\sqrt {1 - \prod\limits_{k = 1}^{\ell } {\left( {1 - \left( {\mu_{ij}^{(k)} } \right)^{2} } \right)^{{\lambda_{k} }} } } ,\;\prod\limits_{k = 1}^{\ell } {\left( {\nu_{ij}^{(k)} } \right)^{{\lambda_{k} }} } } \right). $$
(18)
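The aggregation in Eq. (18) can be sketched as follows; the function name is illustrative, and the inputs are the \( (\mu, \nu) \) pairs of one cell of the decision matrix across experts:

```python
def pfwa(pfns, lam):
    """Pythagorean fuzzy weighted average of expert opinions, Eq. (18).

    pfns: list of (mu, nu) pairs h_ij^(k) across experts;
    lam:  expert weights lambda_k from Eq. (17).
    """
    prod_mu, prod_nu = 1.0, 1.0
    for (mu, nu), w in zip(pfns, lam):
        prod_mu *= (1.0 - mu**2) ** w   # accumulates prod (1 - mu^2)^lambda_k
        prod_nu *= nu ** w              # accumulates prod nu^lambda_k
    return ((1.0 - prod_mu) ** 0.5, prod_nu)

# Two experts' opinions on one cell, with weights 0.6 and 0.4 (illustrative)
eps = pfwa([(0.8, 0.4), (0.7, 0.5)], [0.6, 0.4])
```

The result is again a PFN: its membership and non-membership degrees satisfy \( \mu^{2} + \nu^{2} \le 1 \) and lie between the corresponding input degrees.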

Step 4: Calculate the weights of the criteria.

To determine the relative importance of each criterion, we develop the following procedure with the help of the proposed entropy and divergence measures.

The criteria are not assumed to be of equal importance. Let \( w = \left( {w_{1} ,w_{2} , \ldots ,w_{n} } \right)^{T} , \) with \( \sum\nolimits_{j = 1}^{n} {w_{j} = 1} \) and \( w_{j} \in \left[ {0,\,\,1} \right], \) be the importance vector of the criterion set. To obtain \( w, \) we apply the following procedure:

(1) Calculate the objective weight \( w_{j}^{o} \) for each criterion using the divergence and entropy measures

$$ w_{j}^{o} = \tfrac{{\sum\limits_{i = \,1}^{m} {\left[ {\tfrac{1}{m\, - 1}\sum\limits_{k = 1,\,k \ne i}^{m} {D_{1} \left( {\eta_{ij} ,\,\eta_{kj} } \right)} \, + \left( {1\, - \,E_{1} \left( {\eta_{ij} } \right)} \right)} \right]} }}{{\sum\limits_{j = \,1}^{n} {\sum\limits_{i = \,1}^{m} {\left[ {\tfrac{1}{m\, - 1}\sum\limits_{k = 1,\,k \ne i}^{m} {D_{1} \left( {\eta_{ij} ,\,\eta_{kj} } \right)\,} + \left( {1\, - \,E_{1} \left( {\eta_{ij} } \right)} \right)} \right]} } }},\quad j = 1,2, \ldots ,n. $$
(19)

Here, \( D_{1} \left( {\eta_{ij} ,\,\eta_{kj} } \right) \) denotes the divergence measure between \( \eta_{ij} \) and \( \eta_{kj} , \) and \( E_{1} \left( {\eta_{ij} } \right) \) denotes the entropy measure of \( \eta_{ij} . \)
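Step (1) can be sketched generically by taking the divergence and entropy measures as inputs (their closed forms, \( D_{1} \) and \( E_{1} \), are defined earlier in the paper); the names and the stub measures in the example are illustrative only:

```python
def objective_weights(matrix, divergence, entropy):
    """Objective criterion weights via Eq. (19).

    matrix[i][j]: aggregated value eta_ij for alternative i, criterion j;
    divergence(a, b), entropy(a): the measures D1 and E1, supplied by caller.
    """
    m, n = len(matrix), len(matrix[0])
    raw = []
    for j in range(n):
        total = 0.0
        for i in range(m):
            # average divergence of eta_ij from the other alternatives
            avg_div = sum(divergence(matrix[i][j], matrix[k][j])
                          for k in range(m) if k != i) / (m - 1)
            total += avg_div + (1.0 - entropy(matrix[i][j]))
        raw.append(total)
    denom = sum(raw)
    return [r / denom for r in raw]

# Toy 2x2 matrix of (mu, nu) pairs with stand-in measures (illustrative)
toy = [[(0.9, 0.3), (0.5, 0.5)], [(0.4, 0.8), (0.5, 0.5)]]
w_o = objective_weights(toy, lambda a, b: abs(a[0] - b[0]), lambda a: 0.5)
```

Criteria on which the alternatives diverge strongly, or whose values carry little fuzziness (low entropy), receive larger objective weights.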

(2) Calculate the subjective weight \( w_{j}^{s} \) for each criterion

First, construct the subjective weight vector \( \left( {w_{j\left( k \right)}^{s} } \right) \) for the kth decision expert as follows:

$$ w_{j}^{s} = \left[ {w_{j\left( k \right)}^{s} } \right]_{n\, \times 1} = \left[ {\begin{array}{*{20}c} {w_{1\left( k \right)}^{s} } \\ {w_{2\left( k \right)}^{s} } \\ \vdots \\ {w_{n\left( k \right)}^{s} } \\ \end{array} } \right], $$
(20)

where \( w_{j\left( k \right)}^{s} \) is the subjective weight of \( Q_{j} \) assigned by the kth decision expert, \( j = 1\left( 1 \right)n, \)\( k = 1\left( 1 \right)\ell . \) The subjective weight of each criterion is then calculated as

$$ w_{j}^{s} = \left( {{{\left( {w_{j\left( 1 \right)}^{s} \oplus \,w_{j\left( 2 \right)}^{s} \oplus \cdots \oplus w_{j\left( \ell \right)}^{s} \,} \right)} \mathord{\left/ {\vphantom {{\left( {w_{j\left( 1 \right)}^{s} \oplus \,w_{j\left( 2 \right)}^{s} \oplus \cdots \oplus w_{j\left( \ell \right)}^{s} \,} \right)} \ell }} \right. \kern-0pt} \ell }} \right). $$
(21)

Let \( w_{j\left( k \right)}^{s} = \left( {p_{ijk} } \right) \) denote the weight assigned by the \( k \)th decision expert, where \( p_{ijk} = \left\langle {\mu_{ijk} ,\,\,\nu_{ijk} } \right\rangle , \)\( k = 1\left( 1 \right)\,\ell \) is an IFT2N (equivalently, a PFN). Next, the combined subjective criterion weight is constructed as follows:

$$ w_{j}^{s} = \left\langle {\left[ {\sqrt {1 - \prod\limits_{k = 1}^{\ell } {\left( {1 - \mu_{ijk}^{2} } \right)^{{{1 \mathord{\left/ {\vphantom {1 \ell }} \right. \kern-0pt} \ell }}} } } } \right]} \right.,\;\left. {\left[ {\prod\limits_{k = 1}^{\ell } {\left( {\nu_{ijk} } \right)^{{{1 \mathord{\left/ {\vphantom {1 \ell }} \right. \kern-0pt} \ell }}} } } \right]} \right\rangle . $$
(22)

Here, \( w_{j}^{s} = \left\langle {\mu_{ij*} ,\,\nu_{ij*} } \right\rangle \) is also a PFN.

(3) Compute the aggregated (combined) weight using Eqs. (19)–(22):

$$ w_{j} = \gamma w_{j}^{s} + \left( {1 - \gamma } \right)w_{j}^{o} \,, $$
(23)

where \( \gamma \in \left[ {0,\,1} \right] \) is the aggregation coefficient that balances the subjective and objective weights.
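Assuming both weight vectors have already been reduced to crisp values (e.g., via their score values), the blend in Eq. (23) is a simple convex combination; names below are illustrative:

```python
def combined_weights(subjective, objective, gamma=0.5):
    """Combined criterion weights, Eq. (23): gamma blends the two sources."""
    return [gamma * ws + (1.0 - gamma) * wo
            for ws, wo in zip(subjective, objective)]

# gamma = 0.5 averages the two weight vectors
w = combined_weights([0.2, 0.8], [0.6, 0.4], gamma=0.5)  # ~ [0.4, 0.6]
```

Since both input vectors sum to 1, so does the combined vector, for any \( \gamma \in [0, 1] \).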

Step 5 Compute the weighted sum model (WSM) measure \( {\mathbb{C}}_{i}^{(1)} \) for each alternative using the formula

$$ {\mathbb{C}}_{i}^{(1)} \, = \,\sum\limits_{j = \,1}^{n} {w_{j} \,\eta_{ij} } . $$
(24)

Step 6 Compute the weighted product model (WPM) measure \( {\mathbb{C}}_{i}^{(2)} \) for each alternative using the following formula

$$ {\mathbb{C}}_{i}^{(2)} \, = \,\prod\limits_{j\, = 1}^{n} {\left( {\eta_{ij} } \right)^{{w_{j} }} } . $$
(25)

Step 7 Calculate the aggregated measure of the WASPAS method for each alternative as follows:

$$ {\mathbb{C}}_{i} \, = \,\lambda \,{\mathbb{C}}_{i}^{(1)} \, + \,\left( {1 - \,\lambda } \right)\,{\mathbb{C}}_{i}^{(2)} , $$
(26)

where \( \lambda \in \left[ {0,\,\,1} \right] \) is the aggregation coefficient of decision precision, introduced to estimate the accuracy of WASPAS based on the exactness of the initial criteria (when \( \lambda = 0 \) and \( \lambda = 1, \) WASPAS reduces to the WPM and the WSM, respectively). It has been shown that aggregated methods are more accurate than either single model.
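Steps 5–7 can be illustrated with a crisp analogue. In the paper, the sums and products in Eqs. (24)–(26) operate on PFNs through Pythagorean fuzzy arithmetic; here each `scores[i][j]` is assumed to be a defuzzified value, and the classical power form of the WPM is used:

```python
def waspas(scores, weights, lam=0.5):
    """Crisp sketch of Eqs. (24)-(26): WSM, power-form WPM, and their blend."""
    result = []
    for row in scores:
        wsm = sum(w * x for w, x in zip(weights, row))   # Eq. (24)
        wpm = 1.0
        for w, x in zip(weights, row):
            wpm *= x ** w                                # Eq. (25)
        result.append(lam * wsm + (1.0 - lam) * wpm)     # Eq. (26)
    return result

# Two alternatives over two equally weighted criteria (illustrative values)
c = waspas([[0.9, 0.8], [0.5, 0.4]], [0.5, 0.5])
```

Setting \( \lambda = 1 \) reproduces the WSM measure and \( \lambda = 0 \) the WPM measure; alternatives with uniformly higher criterion scores obtain higher \( {\mathbb{C}}_{i} . \)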

Step 8 Rank the alternatives according to the decreasing (crisp score) values of \( {\mathbb{C}}_{i} . \)
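For Step 8, the paper ranks by the crisp score \( {\mathbb{S}}^{*} \left( {{\mathbb{C}}_{i} } \right) \); a commonly used Pythagorean fuzzy score function is sketched below (the exact form of \( {\mathbb{S}}^{*} \) used in the paper may differ, and the PFNs here are illustrative):

```python
def pf_score(mu, nu):
    """A widely used PFN score: squared membership minus squared non-membership."""
    return mu**2 - nu**2

# Rank three hypothetical WASPAS measures (mu, nu) by decreasing score
ranked = sorted([("R1", 0.62, 0.45), ("R2", 0.71, 0.38), ("R3", 0.55, 0.60)],
                key=lambda t: pf_score(t[1], t[2]), reverse=True)
```

A larger membership degree and a smaller non-membership degree both push an alternative toward the top of the ranking.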

Step 9 End.

5 Physician selection problem

In this section, the proposed WASPAS method is applied to a physician selection decision-making problem, which verifies its applicability and effectiveness.

In India, www.practo.com is one of the most frequently visited online physician rating websites; it collects information about the facilities, the reviews given by patients and the ratings of every doctor. On this site, a patient may rate a physician on a five-star scale on the basis of his/her experience and post a comment/review related to the rating score. For example, a patient may review a particular physician by noting that the diagnosis was accurate, the waiting time was too long, or the medical staff's behavior was quite rude. These reviews thus help patients make a suitable decision when selecting a doctor, and patients' decisions are strongly influenced by the available reviews: a patient who wants to select a physician may peruse the reviews of physicians with respect to location, hospitality, specialty and many other factors. However, choosing a suitable physician remains a challenge for patients owing to the large number of reviews available on online websites. In this section, the proposed PF-WASPAS approach is used to assess the most suitable physician for patients based on several factors.

Let us consider a set of three decision makers \( \left( {B_{1} ,\,B_{2} ,\,B_{3} } \right) \) who want to choose a physician for patients based on the available online reviews. After initial screening, five physicians \( \left( {R_{1} ,R_{2} ,\,R_{3} ,\,R_{4} ,\,R_{5} } \right) \) are selected for further assessment. The ratings of the physicians are assessed in a five-star marking system based on seven criteria, namely (1) ease of appointment (\( Q_{1} \)), (2) punctuality (\( Q_{2} \)), (3) polite staff (\( Q_{3} \)), (4) accurate diagnosis (\( Q_{4} \)), (5) behavior of the physician in the presence of a patient (\( Q_{5} \)), (6) time spent with patients (\( Q_{6} \)) and (7) follow-up after the visit (\( Q_{7} \)). The decision makers provide their reviews of the physicians in terms of PFNs with respect to the above-mentioned seven criteria. The implementation of the proposed WASPAS method for the present application proceeds as follows:

Step 1 Construct a decision matrix.

In this method, the decision makers first express their estimations concerning the ratings of the physicians over the preferred factors. Table 1 presents the linguistic variables used to evaluate the relative importance of the preferred criteria, and Table 2 the linguistic variables used to evaluate the importance of the decision experts. Table 3 presents the importance degree of each decision expert \( \left( {B_{1} ,\,B_{2} ,\,B_{3} } \right), \) evaluated using Eq. (17). To evaluate the performance ratings of the alternatives with respect to each criterion, the decision makers express the linguistic rating variables in Table 4.

Table 1 Linguistic terms that denote the importance of the alternatives for physician selection
Table 2 Importance weights in linguistic terms for DEs’
Table 3 Decision experts’ weights
Table 4 Evaluation of physician selection alternative w.r.t. different criteria and decision experts

Using Eq. (18), the aggregated intuitionistic fuzzy decision matrix of second type is estimated for physician selection based on the decision experts’ opinions; the result is shown in Table 5.

Table 5 Intuitionistic fuzzy aggregated decision matrix of second type for physician selection

Based on Table 5 and Eq. (19), the objective weight of each criterion, using the entropy and divergence measures for IFT2Ss, is computed as follows:

$$ w_{j}^{o} = \left( {0.1270,\,0.2499,\,0.0766,\,0.1247,\,0.1660,\,0.1750,\,0.0808} \right)^{T} . $$

Corresponding to Table 1, the subjective weight matrix \( \left( {w_{j\left( k \right)}^{s} } \right) \) given in Table 6 is constructed. Using Eqs. (20) and (21), the aggregated subjective weight \( \left( {w_{j}^{s} } \right) \) of the criteria is computed.

Table 6 Weights of criteria demonstrated by decision experts

Based on Eq. (23), the combined weight \( \left( {w_{j} } \right) \) of each criterion (with \( \gamma = 0.5 \)) is computed as follows:

$$ w_{j} = \left( {0.1593,\,0.1867,\,0.0829,\,0.1438,\,0.1548,\,0.1764,\,0.0962} \right)^{T} . $$

Using Table 5 and Eqs. (24) and (25), the WSM \( \left( {{\mathbb{C}}_{i}^{\left( 1 \right)} } \right) \) and WPM \( ({\mathbb{C}}_{i}^{\left( 2 \right)} ) \) measures are evaluated for each physician and given in Table 7. Based on Eq. (26), the WASPAS measure \( \left( {{\mathbb{C}}_{i} } \right) \) and its score value \( {\mathbb{S}}^{*} \left( {{\mathbb{C}}_{i} } \right) \) are given in Table 7 for each physician (with \( \lambda \, = 0.5 \)). The ranking order of the five physicians is then determined as \( R_{3} \succ R_{2} \succ R_{1} \succ R_{5} \succ R_{4} . \)

Table 7 Computational assessment of proposed WASPAS method

5.1 Sensitivity analysis and comparison

In this section, the outcomes of the developed method are investigated through a comparison and a sensitivity analysis. Owing to the popularity of PFSs, various MCDM approaches have been introduced in recent years within the framework of PFSs. Each of these methods has its own characteristics and steps that distinguish it from the others. For the comparison, we have tried to choose methods that perform well in the literature and are applicable to the considered MCDM problem. After a literature survey, the methods of Zhang and Xu (2014), Ren et al. (2016) and Peng and Yang (2016) were selected for the comparative analysis. The ranking result obtained by the proposed method is compared with these existing approaches in Table 8. The ranking order of the developed approach coincides with that of the existing approaches (except the method of Ren et al. (2016)), and we observe that \( R_{3} \) is the best physician.

Table 8 Final ranking results of different methods

We also perform a sensitivity analysis based on different sets of criteria weights and different parameter values. The values \( \lambda = 0.0, 0.2, 0.5, 0.8, 1.0 \) and \( \gamma = 0.0, 0.2, 0.5, 0.8, 1.0 \) are chosen for this analysis. Varying \( \lambda \) allows us to assess the sensitivity of the method when moving from the weighted sum model to the weighted product model, while varying \( \gamma \) analyzes the sensitivity of the method to the balance of subjective and objective criteria weights. Table 9 presents the weight sets used in the sensitivity analysis. As can be seen in Table 9, in each set one criterion has the highest weight and the remaining criteria have lower weights; this pattern allows us to consider a wide range of criteria weights when analyzing the sensitivity of the method to changes in the weights. The ranking results for the different sets and parameter values are shown in Table 10, together with the correlations between the results of the different sets; all values of \( r_{p} \) vary between 0.6 and 1.0. Next, Figs. 1, 2 and 3 depict the ranking orders under the different weights and parameter values. According to these figures, for the precision parameter \( \gamma = 0.2,0.5,0.8, \) the alternative \( R_{3} \) has the highest rank for \( \lambda \, = 0.0 \) to \( \lambda \, = 0.5, \) while \( R_{2} \) has the highest rank for \( \lambda \, = 0.6 \) to \( \lambda \, = 1.0. \) Therefore, the proposed approach exhibits good stability under different weights and parameter values, although using more subjective weight (increasing the value of \( \gamma \)) increases the sensitivity of the method.
Based on this analysis, it can be concluded that combining the subjective and objective weights of the criteria increases the stability of the proposed approach.

Table 9 Simulated weights of criteria in different parameter values
Table 10 Ranking results with different sets of criteria weights and various values of parameter
Fig. 1
figure 1

Sensitivity results of the \( {\mathbb{C}}_{i} \) values w.r.t. precision parameter for \( \lambda = 0.2 \)

Fig. 2
figure 2

Sensitivity results of the \( {\mathbb{C}}_{i} \) values w.r.t. precision parameter for \( \lambda = 0.5 \)

Fig. 3
figure 3

Sensitivity results of the \( {\mathbb{C}}_{i} \) values w.r.t. precision parameter for \( \lambda = 0.8 \)

6 Conclusions

In this study, a modified WASPAS method is proposed to handle MCDM problems within the context of IFT2Ss in which the criteria and decision-maker weights are completely unknown. To find the weights of the criteria and the decision makers, new formulae are developed based on novel divergence and entropy measures for IFT2Ss. In the present method, the evaluation scores of the alternatives are expressed by linguistic variables and then given in terms of IFT2Ns. The applicability and feasibility of the proposed WASPAS approach are demonstrated through a physician selection decision-making problem with intuitionistic fuzzy information of second type.

A comparative study with different existing methods also reveals the validity of the results obtained by the proposed method. Finally, a sensitivity analysis is presented to examine the impact of different weights, which further shows the feasibility of the method. On the basis of the comparative and sensitivity analyses, it can be concluded that combining the subjective and objective weights of the criteria increases the stability of the proposed approach.