1 Introduction

For any element, when determining its degree of membership in a set, a decision maker may hesitate among several possible numerical values (Torra and Narukawa 2009) rather than specify a margin of error (Atanassov 1986; Atanassov and Gargov 1989; Zadeh 1975) or a possibility distribution over the possible values (Dubois and Prade 1980). To address this issue, Torra and Narukawa (2009) extended fuzzy sets (Zadeh 1965) and introduced hesitant fuzzy sets (HFSs), in which the membership degree of an element is represented simultaneously by several numerical values in [0, 1]. Consequently, hesitant fuzzy sets, as an efficient tool for dealing with uncertainty, have attracted increasing attention (Farhadinia 2013; Peng et al. 2013; Qian et al. 2013; Rodríguez et al. 2012, 2013; Wei 2012; Xia and Xu 2011; Xia et al. 2013; Xu and Xia 2011a, b; Zhang 2013; Zhang and Wei 2013; Zhu et al. 2012; Zhu and Xu 2013).

Recently, Chen et al. (2013a, b) defined the interval-valued hesitant fuzzy set (IVHFS) and investigated several IVHF operators for aggregating IVHF information. Based on these operators, some new IVHF aggregation operators and MAGDM methods (Chen et al. 2013b; Wei and Zhao 2013; Wei et al. 2013; Zhang and Wu 2014) have also been developed for dealing with MAGDM problems in the context of IVHFSs. However, these existing interval-valued hesitant fuzzy MAGDM methods have some inherent limitations: (1) the weight information for decision makers and attributes must be given ahead of time, which is unavoidably more or less subjective and inadequate. (2) These methods usually perform aggregation operations on interval-valued hesitant fuzzy arguments, so the dimensionality of the aggregated IVHF elements may increase; in particular, if the dimensions of the input IVHF elements are even moderately large, the dimensionality of the fused IVHF elements becomes very large, which not only increases the computational burden but also causes greater loss of the original information. (3) In many MAGDM problems with IVHF information, the weight information for decision makers and attributes is only partially known or entirely unknown, a situation these methods cannot handle.

To overcome the above drawbacks, this study develops a new technique for MAGDM under IVHF environments with incomplete weight information. The motivations come mainly from three aspects: (1) the developed method should avoid loss of the original information and reduce the computational complexity; (2) to avoid subjective randomness, the DMs' weights should not be given in advance or assumed to be equal in GDM problems; (3) handling the situation in which the weight information for decision makers and attributes is only partially known or entirely unknown is a challenging task.

This novel method consists of three parts. First, using the maximizing group consensus method, we establish a quadratic programming model to objectively derive the most desirable weight vector of the decision makers. Second, based on the maximizing deviation method, a new model is built to objectively derive the optimal weight vector of the attributes. Finally, motivated by the classical TOPSIS, we develop an extended TOPSIS to determine the best alternative(s), which comprises two stages. The first stage, named IVHF-TOPSIS, computes the individual relative closeness coefficient of every alternative to the individual interval-valued hesitant fuzzy positive ideal solution (IVHFPIS). The second stage is the standard TOPSIS, which determines the group relative closeness coefficient of every alternative to the group positive ideal solution (GPIS) and selects the most desirable alternative, namely the one with the maximum group relative closeness coefficient.

The paper is organized as follows: Sect. 2 introduces some preliminaries regarding HFSs and IVHFSs; Sect. 3 develops a new approach for handling IVHF-MAGDM problems with incomplete weight information; Sect. 4 gives an investment example to show the validity and practicality of our method; Sect. 5 presents some conclusions.

2 Preliminaries

Torra (2010) presented the concept of HFS as follows:

Definition 2.1

(Torra 2010) Let us assume \(X\) is a reference set. An HFS \(A\) on \(X\) is a function \({h_A}\left( x \right)\) that returns a subset of \(\left[ {0,1} \right]\) when applied to \(X\).

The HFS is characterized by

$$A=\left\{ {\left. {\left\langle {x,{h_A}\left( x \right)} \right\rangle } \right|x \in X} \right\}$$
(1)

where \({h_A}\left( x \right)\) consists of some values in \(\left[ {0,1} \right]\) and is simply called a hesitant fuzzy element (HFE) in Xia and Xu (2011), expressed as \(h={h_A}\left( x \right)\).

Let \({l_h}\) represent the number of values in \(h\). For simplicity, we arrange the values in \(h\) in descending order, that is, \(h=\left\{ {\left. {{h^{\sigma \left( i \right)}}} \right|i=1,2, \ldots ,{l_h}} \right\}\); here, we denote \({h^{\sigma \left( i \right)}}\) as the ith largest value in \(h\).

Torra (2010) gave the following laws for any three HFEs, \(h\), \({h_1}\) and \({h_2}\):

  1. \({h^c}=\bigcup\nolimits_{{\gamma \in h}} {\left\{ {1 - \gamma } \right\}}\);

  2. \({h_1} \cup {h_2}=\bigcup\nolimits_{{{\gamma _1} \in {h_1},{\gamma _2} \in {h_2}}} {\left\{ {{\gamma _1} \vee {\gamma _2}} \right\}}\);

  3. \({h_1} \cap {h_2}=\bigcup\nolimits_{{{\gamma _1} \in {h_1},{\gamma _2} \in {h_2}}} {\left\{ {{\gamma _1} \wedge {\gamma _2}} \right\}}\).
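As an illustration, the three laws above can be encoded in a few lines; this is a sketch under the assumption that an HFE is represented as a Python set of membership values (the function names are ours, not the paper's):

```python
# Sketch: an HFE is modeled as a Python set of membership values in [0, 1].
def hfe_complement(h):
    """Law 1: h^c = {1 - gamma : gamma in h}."""
    return {round(1 - g, 12) for g in h}

def hfe_union(h1, h2):
    """Law 2: pairwise maxima gamma1 v gamma2."""
    return {max(g1, g2) for g1 in h1 for g2 in h2}

def hfe_intersection(h1, h2):
    """Law 3: pairwise minima gamma1 ^ gamma2."""
    return {min(g1, g2) for g1 in h1 for g2 in h2}

h1, h2 = {0.2, 0.5}, {0.4, 0.7}
assert hfe_complement(h1) == {0.5, 0.8}
assert hfe_union(h1, h2) == {0.4, 0.5, 0.7}
assert hfe_intersection(h1, h2) == {0.2, 0.4, 0.5}
```

Note that union and intersection act pairwise on all value combinations, which is one reason the number of values in an HFE can grow under repeated operations.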

Actually, in many MAGDM problems, experts may have difficulty assigning crisp numbers as their preference values, whereas these values can be indicated by subintervals of [0,1]. Thus, Chen et al. (2013a, b) proposed the concept of the IVHFS, which is briefly reviewed here.

Assume that \(D\left( {\left[ {0,1} \right]} \right)\) includes all closed subintervals of \(\left[ {0,1} \right]\), that is,

$$D\left( {\left[ {0,1} \right]} \right)=\left\{ {\left. {a=\left[ {{a^L},{a^U}} \right]} \right|{a^L} \leq {a^U},{a^L},{a^U} \in \left[ {0,1} \right]} \right\}.$$

Definition 2.2

(Xu and Da 2002) Let \(a=\left[ {{a^L},{a^U}} \right],b=\left[ {{b^L},{b^U}} \right] \in D\left( {\left[ {0,1} \right]} \right)\). Then,

  1. \(a=b \Leftrightarrow \left[ {{a^L},{a^U}} \right]=\left[ {{b^L},{b^U}} \right] \Leftrightarrow {a^L}={b^L}\;{\text{and}}\;{a^U}={b^U}\);

  2. \(a+b=\left[ {{a^L},{a^U}} \right]+\left[ {{b^L},{b^U}} \right]=\left[ {{a^L}+{b^L},{a^U}+{b^U}} \right]\);

  3. \(\lambda a=\lambda \left[ {{a^L},{a^U}} \right]=\left[ {\lambda {a^L},\lambda {a^U}} \right]\);

  4. The complement of \(a\) is denoted by \({a^c}={\left[ {{a^L},{a^U}} \right]^c}=\left[ {1 - {a^U},1 - {a^L}} \right]\).

For intervals \(a=\left[ {{a^L},{a^U}} \right]\) and \(b=\left[ {{b^L},{b^U}} \right]\), Xu and Da (2002) developed a comparison law as follows:

Definition 2.3

(Xu and Da 2002) Let \(\tilde {a}=\left[ {{a^L},{a^U}} \right],\tilde {b}=\left[ {{b^L},{b^U}} \right] \in D\left( {\left[ {0,1} \right]} \right)\), and \({l_{\tilde {a}}}={a^U} - {a^L}\), \({l_{\tilde {b}}}={b^U} - {b^L}\). Then, the possibility degree of \(\tilde {a} \geq \tilde {b}\) is described as:

$$p\left( {\tilde {a} \geq \tilde {b}} \right)=\hbox{max} \left\{ {1 - \hbox{max} \left( {\frac{{{b^U} - {a^L}}}{{{l_{\tilde {a}}}+{l_{\tilde {b}}}}},0} \right),0} \right\}.$$
(2)

To rank \({\tilde {a}_i}=\left[ {a_{i}^{L},a_{i}^{U}} \right] \in D\left( {\left[ {0,1} \right]} \right)\) (\(i=1,2, \dots ,n\)), a complementary matrix was introduced as:

$$P=\left[ {\begin{array}{*{20}{c}} {{p_{11}}}&{{p_{12}}}& \cdots &{{p_{1n}}} \\ {{p_{21}}}&{{p_{22}}}& \cdots &{{p_{2n}}} \\ \vdots & \vdots & \ddots & \vdots \\ {{p_{n1}}}&{{p_{n2}}}& \cdots &{{p_{nn}}} \end{array}} \right]$$

where \({p_{ij}}=p\left( {{{\tilde {a}}_i} \geq {{\tilde {a}}_j}} \right)\), \({p_{ij}} \geq 0\), \({p_{ij}}+{p_{ji}}=1\), \({p_{ii}}=\frac{1}{2}\), \(i,j=1,2, \cdots ,n\).

Let \({p_i}=\sum\limits_{{j=1}}^{n} {{p_{ij}}}\), \(i=1,2, \dots ,n\). Then, we can sort the \({\tilde {a}_i}=\left[ {a_{i}^{L},a_{i}^{U}} \right]\) (\(i=1,2, \dots ,n\)) in descending order according to the values of \({p_i}\).
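The possibility degree of Eq. (2) and the row-sum ranking of the complementary matrix translate directly into code. The sketch below is an illustrative encoding (intervals as (lower, upper) tuples); the tie convention for two degenerate point intervals is our assumption, chosen so that \(p_{ii}=\frac{1}{2}\) still holds:

```python
def possibility_degree(a, b):
    """p(a >= b) per Eq. (2); a and b are (lower, upper) subintervals of [0, 1]."""
    la, lb = a[1] - a[0], b[1] - b[0]
    if la + lb == 0:  # both intervals are points; tie convention is our assumption
        return 0.5 if a[0] == b[0] else (1.0 if a[0] > b[0] else 0.0)
    return max(1 - max((b[1] - a[0]) / (la + lb), 0), 0)

def rank_intervals(intervals):
    """Build the complementary matrix P with p_ij = p(a_i >= a_j) and
    return the indices sorted by the row sums p_i, largest first."""
    n = len(intervals)
    P = [[possibility_degree(intervals[i], intervals[j]) for j in range(n)]
         for i in range(n)]
    return sorted(range(n), key=lambda i: sum(P[i]), reverse=True)

# Example: the second interval dominates, then the third, then the first.
order = rank_intervals([(0.1, 0.3), (0.4, 0.6), (0.2, 0.5)])  # [1, 2, 0]
```

Because \(p_{ij}+p_{ji}=1\), only the upper triangle of \(P\) actually needs to be computed in practice.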

Definition 2.4

(Chen et al. 2013a, b) Suppose \(X\) is a fixed set. An IVHFS \(\tilde {A}\) on \(X\) is a function \(\tilde {A}:X \to D\left( {\left[ {0,1} \right]} \right)\) and is expressed as:

$$\tilde {A}=\left\{ {\left. {\left\langle {x,{{\tilde {h}}_{\tilde {A}}}\left( x \right)} \right\rangle } \right|x \in X} \right\}.$$
(3)

For simplicity, \(\widetilde {h}={\tilde {h}_{\tilde {A}}}\left( x \right)\) is named an interval-valued hesitant fuzzy element (IVHFE). If \(\tilde {\gamma } \in \widetilde {h}\), then \(\tilde {\gamma }=\left[ {{{\tilde {\gamma }}^L},{{\tilde {\gamma }}^U}} \right]\), where \({\tilde {\gamma }^L}=\inf \tilde {\gamma }\) and \({\tilde {\gamma }^U}=\sup \tilde {\gamma }\) denote the lower and upper limits of \(\tilde {\gamma }\), respectively. If \({\tilde {\gamma }^L}={\tilde {\gamma }^U}\) for all \(\tilde {\gamma } \in \widetilde {h}\), then the IVHFE reduces to an HFE.

Denote \({l_{\tilde {h}}}\) as the number of intervals in \(\tilde {h}\). For simplicity, we rank the values in \(\tilde {h}\) in descending order, that is, \(\tilde {h}=\left\{ {\left. {{{\tilde {h}}^{\sigma \left( j \right)}}} \right|j=1,2, \dots ,{l_{\tilde {h}}}} \right\}\). Here, we denote \({\tilde {h}^{\sigma \left( j \right)}}\) as the jth largest interval in \(\tilde {h}\).

Definition 2.5

(Chen et al. 2013a, b) Suppose that \(\widetilde {h}\), \({\tilde {h}_1}\) and \({\tilde {h}_2}\) are IVHFEs; we define several operations on them:

  1. \({\widetilde {h}^c}=\left\{ {\left. {\left[ {1 - {{\tilde {\gamma }}^U},1 - {{\tilde {\gamma }}^L}} \right]} \right|\tilde {\gamma } \in \widetilde {h}} \right\};\)

  2. \({\tilde {h}_1} \cup {\tilde {h}_2}=\left\{ {\left. {\left[ {\tilde {\gamma }_{1}^{L} \vee \tilde {\gamma }_{2}^{L},\tilde {\gamma }_{1}^{U} \vee \tilde {\gamma }_{2}^{U}} \right]} \right|{{\tilde {\gamma }}_1} \in {{\tilde {h}}_1},{{\tilde {\gamma }}_2} \in {{\tilde {h}}_2}} \right\};\)

  3. \({\tilde {h}_1} \cap {\tilde {h}_2}=\left\{ {\left. {\left[ {\tilde {\gamma }_{1}^{L} \wedge \tilde {\gamma }_{2}^{L},\tilde {\gamma }_{1}^{U} \wedge \tilde {\gamma }_{2}^{U}} \right]} \right|{{\tilde {\gamma }}_1} \in {{\tilde {h}}_1},{{\tilde {\gamma }}_2} \in {{\tilde {h}}_2}} \right\};\)

  4. \({\widetilde {h}^\lambda }=\left\{ {\left. {\left[ {{{\left( {{{\tilde {\gamma }}^L}} \right)}^\lambda },{{\left( {{{\tilde {\gamma }}^U}} \right)}^\lambda }} \right]} \right|\tilde {\gamma } \in \widetilde {h}} \right\},\) \(\lambda>0;\)

  5. \(\lambda \widetilde {h}=\left\{ {\left. {\left[ {1 - {{\left( {1 - {{\tilde {\gamma }}^L}} \right)}^\lambda },1 - {{\left( {1 - {{\tilde {\gamma }}^U}} \right)}^\lambda }} \right]} \right|\tilde {\gamma } \in \widetilde {h}} \right\},\) \(\lambda>0;\)

  6. \({\tilde {h}_1} \oplus {\tilde {h}_2}=\left\{ {\left. {\left[ {\tilde {\gamma }_{1}^{L}+\tilde {\gamma }_{2}^{L} - \tilde {\gamma }_{1}^{L}\tilde {\gamma }_{2}^{L},\tilde {\gamma }_{1}^{U}+\tilde {\gamma }_{2}^{U} - \tilde {\gamma }_{1}^{U}\tilde {\gamma }_{2}^{U}} \right]} \right|{{\tilde {\gamma }}_1} \in {{\tilde {h}}_1},{{\tilde {\gamma }}_2} \in {{\tilde {h}}_2}} \right\};\)

  7. \({\tilde {h}_1} \otimes {\tilde {h}_2}=\left\{ {\left. {\left[ {\tilde {\gamma }_{1}^{L}\tilde {\gamma }_{2}^{L},\tilde {\gamma }_{1}^{U}\tilde {\gamma }_{2}^{U}} \right]} \right|{{\tilde {\gamma }}_1} \in {{\tilde {h}}_1},{{\tilde {\gamma }}_2} \in {{\tilde {h}}_2}} \right\}.\)
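A minimal sketch of operations 1, 6, and 7, with an IVHFE represented as a list of (lower, upper) tuples (an illustrative encoding; the function names are ours):

```python
def ivhfe_complement(h):
    """Operation 1: [1 - gU, 1 - gL] for each interval of h."""
    return [(1 - g[1], 1 - g[0]) for g in h]

def ivhfe_sum(h1, h2):
    """Operation 6 (algebraic sum): [gL1 + gL2 - gL1*gL2, gU1 + gU2 - gU1*gU2]
    over all pairs of intervals drawn from h1 and h2."""
    return [(g1[0] + g2[0] - g1[0] * g2[0], g1[1] + g2[1] - g1[1] * g2[1])
            for g1 in h1 for g2 in h2]

def ivhfe_product(h1, h2):
    """Operation 7 (algebraic product): [gL1*gL2, gU1*gU2] over all pairs."""
    return [(g1[0] * g2[0], g1[1] * g2[1]) for g1 in h1 for g2 in h2]
```

The Cartesian-product structure of operations 6 and 7 is what makes the dimensionality of aggregated IVHFEs grow, which is the computational drawback noted in the introduction.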

Generally, the numbers of intervals in different IVHFEs are not the same. Thus, we let \(l=\hbox{max} \left\{ {{l_{{{\tilde {h}}_1}}},{l_{{{\tilde {h}}_2}}}} \right\}\), where \({l_{{{\tilde {h}}_1}}}\) and \({l_{{{\tilde {h}}_2}}}\) denote the numbers of intervals in the IVHFEs \({\tilde {h}_1}\) and \({\tilde {h}_2}\), respectively. To make operations between \({\tilde {h}_1}\) and \({\tilde {h}_2}\) more exact, the following approach, proposed in Xu and Zhang (2013), can be used to extend the shorter IVHFE until the two IVHFEs have equal length.

Definition 2.6

(Xu and Zhang 2013) Let \(\tilde {h}\) be an IVHFE and \({\tilde {h}^+}\), \({\tilde {h}^ - }\) be the largest and smallest intervals in \(\tilde {h}\), respectively. The interval \(\bar {\tilde {h}}=\eta {\tilde {h}^+}+\left( {1 - \eta } \right){\tilde {h}^ - }\) is said to be an extension value, in which \(\eta \in [0,1]\) indicates the controlling parameter given by the DM based on her/his risk appetite.
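A sketch of this length-equalization step follows. Picking \({\tilde h^+}\) and \({\tilde h^-}\) by interval midpoint is a simplifying assumption of ours; the paper compares intervals via the possibility degree of Eq. (2):

```python
def pad_ivhfe(h, target_len, eta=0.5):
    """Append the extension value eta*h_plus + (1 - eta)*h_minus (Definition 2.6)
    until the IVHFE h (a list of (lower, upper) tuples) has target_len intervals.
    eta = 1 is optimistic (repeat the largest interval), eta = 0 pessimistic."""
    h_plus = max(h, key=lambda g: g[0] + g[1])   # largest interval by midpoint (assumption)
    h_minus = min(h, key=lambda g: g[0] + g[1])  # smallest interval by midpoint (assumption)
    ext = (eta * h_plus[0] + (1 - eta) * h_minus[0],
           eta * h_plus[1] + (1 - eta) * h_minus[1])
    return list(h) + [ext] * (target_len - len(h))
```

For example, with \(\eta = 1\) an optimistic DM pads the IVHFE by repeating its largest interval.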

Chen et al. (2013a) developed an IVHF Hamming distance between two IVHFEs \({\tilde {h}_1}\) and \({\tilde {h}_2}\) as:

$$d\left( {{{\tilde {h}}_1},{{\tilde {h}}_2}} \right)=\frac{1}{{2l}}\sum\limits_{{i=1}}^{l} {\left( {\left| {{{\left( {\tilde {h}_{1}^{{\sigma \left( i \right)}}} \right)}^L} - {{\left( {\tilde {h}_{2}^{{\sigma \left( i \right)}}} \right)}^L}} \right|+\left| {{{\left( {\tilde {h}_{1}^{{\sigma \left( i \right)}}} \right)}^U} - {{\left( {\tilde {h}_{2}^{{\sigma \left( i \right)}}} \right)}^U}} \right|} \right)}$$
(4)

where \(l=\hbox{max} \left\{ {{l_{{{\tilde {h}}_1}}},{l_{{{\tilde {h}}_2}}}} \right\}\), and \(\tilde {h}_{1}^{{\sigma \left( i \right)}}\) and \(\tilde {h}_{2}^{{\sigma \left( i \right)}}\) are the ith largest intervals in \({\tilde {h}_1}\) and \({\tilde {h}_2}\), respectively.
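Equation (4) translates directly, assuming the two IVHFEs have already been extended to the same length and sorted in descending order (a minimal sketch):

```python
def ivhf_hamming(h1, h2):
    """IVHF Hamming distance of Eq. (4) between two equal-length IVHFEs,
    each a list of (lower, upper) tuples sorted in descending order."""
    l = len(h1)
    assert l == len(h2), "extend the shorter IVHFE first (Definition 2.6)"
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
               for a, b in zip(h1, h2)) / (2 * l)
```

The factor \(\frac{1}{2l}\) averages over the \(l\) positions and the two interval endpoints, so the distance stays in [0, 1].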

3 A novel method for MAGDM with IVHF information

3.1 Problem description

A MAGDM problem with IVHF information is presented as follows: Suppose \(X=\left\{ {{x_1},{x_2}, \ldots ,{x_m}} \right\}\) is a collection of \(m\) alternatives; \(C=\left\{ {{c_1},{c_2}, \ldots ,{c_n}} \right\}\) is a collection of \(n\) attributes with weight vector \(w={\left( {{w_1},{w_2}, \ldots ,{w_n}} \right)^T}\), satisfying \({w_j} \in \left[ {0,1} \right]\), \(j=1,2, \ldots ,n\), and \(\sum\nolimits_{{j=1}}^{n} {{w_j}=1.}\) Let \(D=\left\{ {{d_1},{d_2}, \ldots ,{d_p}} \right\}\) represent \(p\) DMs with weight vector \(\omega ={\left( {{\omega _1},{\omega _2}, \ldots ,{\omega _p}} \right)^T}\), satisfying \({\omega _k} \in \left[ {0,1} \right]\), \(k=1,2, \dots ,p\), and \(\sum\nolimits_{{k=1}}^{p} {{\omega _k}=1} .\) In addition, let \({\tilde {A}^{\left( k \right)}}={\left( {\tilde {a}_{{ij}}^{{\left( k \right)}}} \right)_{m \times n}}\) be an IVHF decision matrix, where \(\tilde {a}_{{ij}}^{{\left( k \right)}}=\left\{ {\left. {{{\left( {\tilde {a}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right|t=1,2, \dots ,{l_{\tilde {a}_{{ij}}^{{\left( k \right)}}}}} \right\}\) consists of the possible interval-valued membership degrees of alternative \({x_i} \in X\) with respect to attribute \({c_j} \in C\), provided by the DM \({d_k} \in D\).

Generally, there exist benefit and cost attributes in an MAGDM problem. For each \(k=1,2, \ldots ,p\), we will obtain the normalized decision matrix \({\tilde {B}^{\left( k \right)}}={\left( {\tilde {b}_{{ij}}^{{\left( k \right)}}} \right)_{m \times n}}\) from the original decision matrices \({\tilde {A}^{\left( k \right)}}={\left( {\tilde {a}_{{ij}}^{{\left( k \right)}}} \right)_{m \times n}}\) by using the following transformation formula:

$$\tilde {b}_{{ij}}^{{\left( k \right)}}=\left\{ \begin{gathered} \tilde {a}_{{ij}}^{{\left( k \right)}},\quad {\text{for benefit attribute}}\quad {c_j} \hfill \\ {\left( {\tilde {a}_{{ij}}^{{\left( k \right)}}} \right)^c},\quad {\text{for cost attribute}}\quad {c_j} \hfill \\ \end{gathered} \right.,\quad i=1,2, \ldots ,m,\quad j=1,2, \ldots ,n,\quad k=1,2, \ldots ,p.$$
(5)
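The normalization of Eq. (5) can be sketched as follows, with each matrix entry an IVHFE stored as a list of (lower, upper) tuples (an illustrative encoding):

```python
def normalize_matrix(A, cost_columns):
    """Eq. (5): benefit attributes pass through unchanged; cost attributes are
    replaced by their complements [1 - upper, 1 - lower] interval by interval."""
    return [[[(1 - g[1], 1 - g[0]) for g in A[i][j]] if j in cost_columns
             else list(A[i][j])
             for j in range(len(A[i]))]
            for i in range(len(A))]
```

For instance, a cost-attribute interval [0.3, 0.5] becomes [0.5, 0.7] after normalization.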

In most cases, the numbers of intervals in the different IVHFEs \(\tilde {b}_{{ij}}^{{\left( k \right)}}\) of \({\tilde {B}^{\left( k \right)}}\) (\(k=1,2, \dots ,p\)) are not the same. Assume that

$$l=\hbox{max} \left\{ {\left. {{l_{\tilde {b}_{{ij}}^{{\left( k \right)}}}}} \right|i=1,2, \dots ,m{\text{,}}\quad {\text{ }}j=1,2, \dots ,n{\text{,}}\quad k=1,2, \dots ,p} \right\}.$$

By the use of the regulations mentioned in Xu and Zhang (2013), the normalized decision matrices \({\tilde {B}^{\left( k \right)}}={\left( {\tilde {b}_{{ij}}^{{\left( k \right)}}} \right)_{m \times n}}\) can be replaced by the corresponding decision matrices \({\tilde {H}^{\left( k \right)}}={\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)_{m \times n}}\) (\(k=1,2, \dots ,p\)), such that \({l_{\tilde {h}_{{ij}}^{{\left( k \right)}}}}=l\), \(\forall\)\(i=1,2, \dots ,m\), \(j=1,2, \dots ,n\), and \(k=1,2, \dots ,p\).

Remark 3.1

Neutrosophic theory is also an effective tool for handling the uncertainty associated with ambiguity in a manner analogous to human thought, and it has been applied successfully to GDM problems (Abdel-Basset et al. 2017; Mohamed et al. 2017). However, a neutrosophic set is characterized by a truth-membership function, an indeterminacy-membership function, and a falsity-membership function, which differs from a hesitant fuzzy set. Hence, the methods of Abdel-Basset et al. (2017) and Mohamed et al. (2017) cannot handle GDM problems in interval-valued hesitant fuzzy environments.

3.2 A quadratic programming model for the weights of decision makers

First, for each \(k=1,2, \dots ,p,\) the individual decision matrices

$${\tilde {H}^{\left( k \right)}}={\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)_{m \times n}}={\left( {\left\{ {\left. {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right|t=1,2, \dots ,l} \right\}} \right)_{m \times n}}$$

can be aggregated into the group decision matrix

$$\tilde {H}={\left( {{{\tilde {h}}_{ij}}} \right)_{m \times n}}={\left( {\left\{ {\left. {\tilde {h}_{{ij}}^{{\sigma \left( t \right)}}} \right|t=1,2, \dots ,l} \right\}} \right)_{m \times n}},$$

where

$${\tilde {h}_{ij}}=\mathop \oplus \limits_{{k=1}}^{p} \left( {{\omega _k}\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)=\left\{ {\left. {\sum\limits_{{k=1}}^{p} {{\omega _k}{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} } \right|t=1,2, \dots ,l} \right\}=\left\{ {\left. {\left[ {\sum\limits_{{k=1}}^{p} {{\omega _k}{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} ,\sum\limits_{{k=1}}^{p} {{\omega _k}{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} } \right]} \right|t=1,2, \dots ,l} \right\}$$
(6)
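Equation (6) is a position-wise convex combination of the individual matrices. A minimal sketch, assuming every IVHFE has already been extended to the common length \(l\) and sorted:

```python
def aggregate_group(matrices, omega):
    """Group matrix of Eq. (6): at entry (i, j), position t is the omega-weighted
    average of the t-th intervals across the p individual decision matrices.
    Each matrix is a nested list: matrices[k][i][j] is a list of (low, up) tuples."""
    p = len(matrices)
    m, n, l = len(matrices[0]), len(matrices[0][0]), len(matrices[0][0][0])
    return [[[(sum(omega[k] * matrices[k][i][j][t][0] for k in range(p)),
               sum(omega[k] * matrices[k][i][j][t][1] for k in range(p)))
              for t in range(l)]
             for j in range(n)]
            for i in range(m)]
```

Because the combination is taken position by position rather than over a Cartesian product, the aggregated IVHFEs keep length \(l\), which is exactly how the method avoids the dimensionality blow-up of the operator-based approaches.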

Then, the issue of how to determine the weights of the decision makers can be discussed in the following two cases:

  1. If for all \(k=1,2, \dots ,p\), the \({\tilde {H}^{\left( k \right)}}\) are equal to each other, that is, \({\tilde {H}^{\left( k \right)}}=\tilde {H}\), then it is rational to assign the decision makers \({d_k}\) the same weight \(\frac{1}{p}\).

  2. If not all \({\tilde {H}^{\left( k \right)}}\) (\(k=1,2, \dots ,p\)) are equal, that is, there exist at least two matrices \({\tilde {H}^{\left( {{k_1}} \right)}}\) and \({\tilde {H}^{\left( {{k_2}} \right)}}\) (\({k_1},{k_2} \in \left\{ {1,2, \dots ,p} \right\}\)) such that \({\tilde {H}^{\left( {{k_1}} \right)}} \ne {\tilde {H}^{\left( {{k_2}} \right)}}\), then we introduce the deviation variables

$$\begin{gathered} e_{{ij}}^{{\left( k \right)}}=d\left( {\tilde {h}_{{ij}}^{{\left( k \right)}},{{\tilde {h}}_{ij}}} \right)=\frac{{\sum\nolimits_{{t=1}}^{l} {\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {\tilde {h}_{{ij}}^{{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {\tilde {h}_{{ij}}^{{\sigma \left( t \right)}}} \right)}^U}} \right|} \right)} }}{{2l}} \hfill \\ {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} =\frac{{\sum\nolimits_{{t=1}}^{l} {\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - \sum\nolimits_{{q=1}}^{p} {{\omega _q}} {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( q \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - \sum\nolimits_{{q=1}}^{p} {{\omega _q}} {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( q \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right|} \right)} }}{{2l}} \hfill \\ \end{gathered}$$
(7)

for all \(i=1,2, \dots ,m\), \(j=1,2, \dots ,n\), \(k=1,2, \dots ,p\). Then, the overall squared deviation between the \({\tilde {H}^{\left( k \right)}}\) (\(k \in \{ 1,2, \dots ,p\}\)) and \(\tilde {H}\) is given by

$$\begin{aligned} e\left( \omega \right)= & \frac{1}{{2mnpl}}\sum\limits_{{k=1}}^{p} {\sum\limits_{{i=1}}^{m} {\sum\limits_{{j=1}}^{n} {\sum\limits_{{t=1}}^{l} {\left( {{{\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - \sum\limits_{{q=1}}^{p} {{\omega _q}} {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( q \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right)}^2}+{{\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - \sum\limits_{{q=1}}^{p} {{\omega _q}} {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( q \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right)}^2}} \right)} } } } \\ {\kern 1pt} = & \frac{1}{{2mnpl}}\sum\limits_{{k=1}}^{p} {\sum\limits_{{i=1}}^{m} {\sum\limits_{{j=1}}^{n} {\sum\limits_{{t=1}}^{l} {\left( {{{\left( {\sum\limits_{{q=1}}^{p} {{\omega _q}\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( q \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right)} } \right)}^2}+{{\left( {\sum\limits_{{q=1}}^{p} {{\omega _q}\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( q \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right)} } \right)}^2}} \right)} } } } \\ = & \frac{1}{{2mnpl}}\sum\limits_{{k=1}}^{p} {\sum\limits_{{i=1}}^{m} {\sum\limits_{{j=1}}^{n} {\sum\limits_{{t=1}}^{l} {\left( \begin{gathered} \left( {\sum\limits_{{{q_1}=1}}^{p} {{\omega _{{q_1}}}\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( {{q_1}} \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right)} } \right) \cdot \left( 
{\sum\limits_{{{q_2}=1}}^{p} {{\omega _{{q_2}}}\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( {{q_2}} \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right)} } \right)+ \hfill \\ \left( {\sum\limits_{{{q_1}=1}}^{p} {{\omega _{{q_1}}}\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( {{q_1}} \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right)} } \right) \cdot \left( {\sum\limits_{{{q_2}=1}}^{p} {{\omega _{{q_2}}}\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( {{q_2}} \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right)} } \right) \hfill \\ \end{gathered} \right)} } } } \\ = & \frac{1}{{2mnpl}}\sum\limits_{{k=1}}^{p} {\sum\limits_{{i=1}}^{m} {\sum\limits_{{j=1}}^{n} {\sum\limits_{{t=1}}^{l} {\left( \begin{gathered} \sum\limits_{{{q_1}=1}}^{p} {\sum\limits_{{{q_2}=1}}^{p} {{\omega _{{q_1}}}{\omega _{{q_2}}}\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( {{q_1}} \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right)\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( {{q_2}} \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right)} } + \hfill \\ \sum\limits_{{{q_1}=1}}^{p} {\sum\limits_{{{q_2}=1}}^{p} {{\omega _{{q_1}}}{\omega _{{q_2}}}\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( {{q_1}} \right)}}} 
\right)}^{\sigma \left( t \right)}}} \right)}^U}} \right)\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( {{q_2}} \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right)} } \hfill \\ \end{gathered} \right)} } } } \\ {\kern 1pt} = & \sum\limits_{{{q_1}=1}}^{p} {\sum\limits_{{{q_2}=1}}^{p} {{\omega _{{q_1}}}{\omega _{{q_2}}}\left( {\frac{1}{{2mnpl}}\sum\limits_{{k=1}}^{p} {\sum\limits_{{i=1}}^{m} {\sum\limits_{{j=1}}^{n} {\sum\limits_{{t=1}}^{l} {\left( \begin{gathered} \left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( {{q_1}} \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right)\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( {{q_2}} \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right)+ \hfill \\ \left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( {{q_1}} \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right)\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( {{q_2}} \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right) \hfill \\ \end{gathered} \right)} } } } } \right)} } \\ \end{aligned}$$
(8)

It is obvious that \(e\left( \omega \right)\) is a function of the decision makers' weight vector \(\omega ={\left( {{\omega _1},{\omega _2}, \dots ,{\omega _p}} \right)^T}\). Let \(G={\left( {{g_{{q_1}{q_2}}}} \right)_{p \times p}}\) be a matrix, where

$${g_{{q_1}{q_2}}}=\frac{1}{{2mnpl}}\sum\limits_{{k=1}}^{p} {\sum\limits_{{i=1}}^{m} {\sum\limits_{{j=1}}^{n} {\sum\limits_{{t=1}}^{l} {\left( \begin{gathered} \left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( {{q_1}} \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right)\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( {{q_2}} \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right)+ \hfill \\ \left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( {{q_1}} \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right)\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( {{q_2}} \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right) \hfill \\ \end{gathered} \right)} } } } ,\quad {q_1},{q_2}=1,2, \dots ,p.$$
(9)

Thus, Eq. (8) can be rewritten as

$$e\left( \omega \right)={\omega ^T}G\omega$$
(10)

Therefore, from the standpoint of maximizing group consensus, one can utilize the following model to obtain the weights of the decision makers in GDM environments:

$$\begin{gathered} \hbox{min} e\left( \omega \right)={\omega ^T}G\omega \hfill \\ {\text{s.t.}}\left\{ \begin{gathered} \sum\limits_{{k=1}}^{p} {{\omega _k}} =1, \hfill \\ {\omega _k} \geq 0,{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} k=1,2, \dots ,p, \hfill \\ \end{gathered} \right. \hfill \\ \end{gathered}$$
(M-1)

Letting \(E={\left( {1,1, \dots ,1} \right)^T}\), one has

$$\begin{gathered} \hbox{min} e\,\left( \omega \right)={\omega ^T}G\omega \hfill \\ {\text{s.t.}}\left\{ \begin{gathered} {E^T}\omega =1, \hfill \\ \omega \geq 0 \hfill \\ \end{gathered} \right. \hfill \\ \end{gathered}$$
(M-2)

If we temporarily ignore the condition \(\omega \geq 0\), model (M-2) becomes

$$\begin{gathered} \hbox{min} e\,\left( \omega \right)={\omega ^T}G\omega \hfill \\ {\text{s.t.}}{\kern 1pt} {\kern 1pt} {E^T}\omega =1 \hfill \\ \end{gathered}$$
(M-3)

Theorem 3.1

Suppose that \({\tilde {H}^{\left( k \right)}}={\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)_{m \times n}}\) (\(k=1,2, \dots ,p\)) are \(p\) IVHF decision matrices and \(\tilde {H}={\left( {{{\tilde {h}}_{ij}}} \right)_{m \times n}}\) is the group IVHF decision matrix derived from Eq. (6). If not all of the \({\tilde {H}^{\left( k \right)}}\) (\(k=1,2, \dots ,p\)) are the same, then (M-3) has the optimal solution:

$${\omega ^ * }=\frac{{{G^{ - 1}}E}}{{{E^T}{G^{ - 1}}E}}$$
(11)

Proof

Because not all of \({\tilde {H}^{\left( k \right)}}\) (\(k=1,2, \dots ,p\)) are equal, there exists at least one matrix \({\tilde {H}^{\left( {{k_0}} \right)}}\) (\({k_0} \in \left\{ {1,2, \dots ,p} \right\}\)) such that \({\tilde {H}^{\left( {{k_0}} \right)}} \ne \tilde {H}\). Thus, there exists \({i_0} \in \left\{ {1,2, \dots ,m} \right\}\), \({j_0} \in \left\{ {1,2, \dots ,n} \right\}\), and \({t_0} \in \left\{ {1,2, \dots ,l} \right\}\), satisfying \({\left( {\tilde {h}_{{{i_0}{j_0}}}^{{\left( {{k_0}} \right)}}} \right)^{\sigma \left( {{t_0}} \right)}} \ne \tilde {h}_{{{i_0}{j_0}}}^{{\sigma \left( {{t_0}} \right)}}\), i.e., \({\left( {{{\left( {\tilde {h}_{{{i_0}{j_0}}}^{{\left( {{k_0}} \right)}}} \right)}^{\sigma \left( {{t_0}} \right)}}} \right)^L} \ne {\left( {\tilde {h}_{{{i_0}{j_0}}}^{{\sigma \left( {{t_0}} \right)}}} \right)^L}\) or \({\left( {{{\left( {\tilde {h}_{{{i_0}{j_0}}}^{{\left( {{k_0}} \right)}}} \right)}^{\sigma \left( {{t_0}} \right)}}} \right)^U} \ne {\left( {\tilde {h}_{{{i_0}{j_0}}}^{{\sigma \left( {{t_0}} \right)}}} \right)^U}\). Therefore, we have

$${\left( {{{\left( {{{\left( {\tilde {h}_{{{i_0}{j_0}}}^{{\left( {{k_0}} \right)}}} \right)}^{\sigma \left( {{t_0}} \right)}}} \right)}^L} - {{\left( {\tilde {h}_{{{i_0}{j_0}}}^{{\sigma \left( {{t_0}} \right)}}} \right)}^L}} \right)^2}+{\left( {{{\left( {{{\left( {\tilde {h}_{{{i_0}{j_0}}}^{{\left( {{k_0}} \right)}}} \right)}^{\sigma \left( {{t_0}} \right)}}} \right)}^U} - {{\left( {\tilde {h}_{{{i_0}{j_0}}}^{{\sigma \left( {{t_0}} \right)}}} \right)}^U}} \right)^2}>0$$

Thus,

$$e\left( \omega \right)=\frac{1}{{2mnpl}}\sum\limits_{{k=1}}^{p} {\sum\limits_{{i=1}}^{m} {\sum\limits_{{j=1}}^{n} {\sum\limits_{{t=1}}^{l} {\left( {{{\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - \sum\limits_{{q=1}}^{p} {{\omega _q}} {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( q \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right)}^2}+{{\left( {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - \sum\limits_{{q=1}}^{p} {{\omega _q}} {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( q \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right)}^2}} \right)} } } }>0$$
(12)

Obviously, according to Eq. (9), we have

$${g_{{q_1}{q_2}}}={g_{{q_2}{q_1}}},\quad \forall {q_1},{q_2}=1,2, \ldots ,p.$$

Therefore, \(G={\left( {{g_{{q_1}{q_2}}}} \right)_{p \times p}}\) is a symmetric matrix. According to Eqs. (10) and (12), we have

$$e\left( \omega \right)={\omega ^T}G\omega>0;$$

Because \(\omega \ne 0\) stands for the weight vector of experts, \(G={\left( {{g_{{q_1}{q_2}}}} \right)_{p \times p}}\) is a positive definite and therefore nonsingular matrix. Then, we can derive the most desirable solution of (M-3) by the following procedure:

First, the Lagrange function is constructed as

$$L\left( {\omega ,\lambda } \right)={\omega ^T}G\omega +\lambda \left( {{E^T}\omega - 1} \right),$$
(13)

in which \(\lambda\) represents the Lagrange multiplier.

Second, setting the partial derivatives of \(L\left( {\omega ,\lambda } \right)\) with respect to \(\omega\) and \(\lambda\) to zero yields the following system of equations:

$$\left\{ \begin{gathered} \frac{{\partial L\left( {\omega ,\lambda } \right)}}{{\partial \omega }}=2G\omega +\lambda E=0 \hfill \\ \frac{{\partial L\left( {\omega ,\lambda } \right)}}{{\partial \lambda }}={E^T}\omega - 1=0 \hfill \\ \end{gathered} \right.$$
(14)

Solving Eq. (14) yields the optimal solution as

$${\omega ^ * }=\frac{{{G^{ - 1}}E}}{{{E^T}{G^{ - 1}}E}}$$

Because \(\frac{{{\partial ^2}L\left( {\omega ,\lambda } \right)}}{{\partial {\omega ^2}}}=2G\) is a positive definite matrix, \(e\left( \omega \right)={\omega ^T}G\omega\) is a strictly convex function. Consequently, \({\omega ^ * }=\frac{{{G^{ - 1}}E}}{{{E^T}{G^{ - 1}}E}}\) is the unique optimal solution of (M-3). This completes the proof. □

If \({\omega ^ * }=\frac{{{G^{ - 1}}E}}{{{E^T}{G^{ - 1}}E}} \geq 0\), it is also the unique optimal solution of (M-3); otherwise, we use the LINGO software package to solve model (M-3) numerically.
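As a minimal numerical sketch, the closed-form solution \({\omega ^ * }={{G^{ - 1}}E}/{{{E^T}{G^{ - 1}}E}}\) takes only a few lines of linear algebra; the matrix below is a hypothetical positive definite example, not one assembled from Eq. (9):

```python
import numpy as np

# Hypothetical symmetric positive definite matrix G (in the paper,
# Eq. (9) would produce G from the experts' IVHF decision matrices).
G = np.array([[2.0, 0.5, 0.3],
              [0.5, 1.5, 0.4],
              [0.3, 0.4, 1.8]])
E = np.ones(G.shape[0])           # E = (1, 1, ..., 1)^T

# Closed-form optimum of (M-3): omega* = G^{-1}E / (E^T G^{-1} E).
G_inv_E = np.linalg.solve(G, E)   # solve a linear system, no explicit inverse
omega = G_inv_E / (E @ G_inv_E)

print(omega, omega.sum())         # the weights sum to 1
```

Using `np.linalg.solve` instead of forming \(G^{-1}\) explicitly is numerically preferable; if any component of `omega` comes out negative, one falls back to solving (M-3) as a constrained program, as noted above.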

3.3 Determining the optimal weights of attributes based on the maximizing deviation method

Assume that \(\Delta\) is a collection of the given weight information (Kim and Ahn 1999; Kim et al. 1999; Park 2004; Park and Kim 1997), where \(\Delta\) may take the following forms for \(i \ne j\):

Form 1.

A weak ranking \(\left\{ {{w_i} \geq {w_j}} \right\};\)

Form 2.

A strict ranking \(\left\{ {{w_i} - {w_j} \geq {\alpha _i}} \right\}\left( {{\alpha _i}>0} \right);\)

Form 3.

A ranking of differences \(\left\{ {{w_i} - {w_j} \geq {w_k} - {w_l}} \right\},\quad {\text{for}}\quad j \ne k \ne l;\)

Form 4.

A ranking with multiples \(\left\{ {{w_i} \geq {\alpha _i}{w_j}} \right\}\left( {0 \leq {\alpha _i} \leq 1} \right);\)

Form 5.

An interval form \(\left\{ {{\alpha _i} \leq {w_i} \leq {\alpha _i}+{\varepsilon _i}} \right\}\left( {0 \leq {\alpha _i} \leq {\alpha _i}+{\varepsilon _i} \leq 1} \right).\)

In what follows, motivated by the maximizing deviation method proposed by Wang (1997), we build a model to obtain the most desirable weights of attributes. For each \({c_j} \in C\), the deviation of \({x_i} \in X\) from all other alternatives with respect to the expert \({d_k} \in D\) is defined by:

$$D_{{ij}}^{{\left( k \right)}}=\sum\limits_{{q=1}}^{m} {d\left( {\tilde {h}_{{ij}}^{{\left( k \right)}},\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)} =\frac{{\sum\nolimits_{{q=1}}^{m} {\sum\nolimits_{{t=1}}^{l} {\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right|} \right)} } }}{{2l}}$$
(15)

\(i=1,2, \dots ,m\), \(j=1,2, \dots ,n\), \(k=1,2, \dots ,p\). Let

$$D_{j}^{{\left( k \right)}}=\sum\limits_{{i=1}}^{m} {D_{{ij}}^{{\left( k \right)}}} =\sum\limits_{{i=1}}^{m} {\sum\limits_{{q=1}}^{m} {d\left( {\tilde {h}_{{ij}}^{{\left( k \right)}},\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)} } =\frac{{\sum\nolimits_{{i=1}}^{m} {\sum\nolimits_{{q=1}}^{m} {\sum\nolimits_{{t=1}}^{l} {\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right|} \right)} } } }}{{2l}}.$$
(16)

Further, let

$$\begin{aligned} D\left( w \right) & =\sum\limits_{{k=1}}^{p} {{\omega _k}\left( {\sum\limits_{{j=1}}^{n} {{w_j}D_{j}^{{\left( k \right)}}} } \right)} \\ & =\sum\limits_{{k=1}}^{p} {{\omega _k}\left( {\sum\limits_{{j=1}}^{n} {{w_j}\left( {\sum\limits_{{i=1}}^{m} {D_{{ij}}^{{\left( k \right)}}} } \right)} } \right)} \\ & =\sum\limits_{{k=1}}^{p} {{\omega _k}\left( {\sum\limits_{{j=1}}^{n} {{w_j}\left( {\sum\limits_{{i=1}}^{m} {\left( {\sum\limits_{{q=1}}^{m} {d\left( {\tilde {h}_{{ij}}^{{\left( k \right)}},\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)} } \right)} } \right)} } \right)} \\ & =\frac{{\sum\nolimits_{{k=1}}^{p} {{\omega _k}\left( {\sum\nolimits_{{j=1}}^{n} {{w_j}\left( {\sum\nolimits_{{i=1}}^{m} {\sum\nolimits_{{q=1}}^{m} {\sum\nolimits_{{t=1}}^{l} {\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right|} \right)} } } } \right)} } \right)} }}{{2l}} \\ {\kern 1pt} & =\frac{{\sum\nolimits_{{k=1}}^{p} {{\omega _k}\left( {\sum\nolimits_{{j=1}}^{n} {\sum\nolimits_{{i=1}}^{m} {\sum\nolimits_{{q=1}}^{m} {\sum\nolimits_{{t=1}}^{l} {\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right|} \right){w_j}} } } } } \right)} 
}}{{2l}}. \\ \end{aligned}$$
(17)

From the above discussion, a model is built to obtain the best weight vector \(w\) by maximizing \(D\left( w \right)\):

$$\begin{gathered} \hbox{max} \,D\left( w \right)=\frac{{\sum\nolimits_{{k=1}}^{p} {{\omega _k}\left( {\sum\nolimits_{{j=1}}^{n} {\sum\nolimits_{{i=1}}^{m} {\sum\nolimits_{{q=1}}^{m} {\sum\nolimits_{{t=1}}^{l} {\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right|} \right){w_j}} } } } } \right)} }}{{2l}} \hfill \\ {\text{s.t.}}{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {w_j} \geq 0,{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} j=1,2, \dots ,n,{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} \sum\limits_{{j=1}}^{n} {w_{j}^{2}} =1 \hfill \\ \end{gathered}$$
(M-4)

To obtain the solution of (M-4), we construct a Lagrange function as below:

$$L\left( {w,\lambda } \right)=\frac{{\sum\nolimits_{{k=1}}^{p} {{\omega _k}\left( {\sum\nolimits_{{j=1}}^{n} {\sum\nolimits_{{i=1}}^{m} {\sum\nolimits_{{q=1}}^{m} {\sum\nolimits_{{t=1}}^{l} {\left( \begin{gathered} \left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+ \hfill \\ \left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right| \hfill \\ \end{gathered} \right){w_j}} } } } } \right)} }}{{2l}}+\frac{\lambda }{2}\left( {\sum\limits_{{j=1}}^{n} {w_{j}^{2}} - 1} \right),$$
(18)

where \(\lambda\) represents the Lagrange multiplier.

Differentiating \(L\left( {w,\lambda } \right)\) with respect to \({w_j}\) and \(\lambda\), we have

$$\frac{{\partial L}}{{\partial {w_j}}}=\frac{{\sum\nolimits_{{k=1}}^{p} {\sum\nolimits_{{i=1}}^{m} {\sum\nolimits_{{q=1}}^{m} {\sum\nolimits_{{t=1}}^{l} {\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right|} \right){\omega _k}} } } } }}{{2l}}+\lambda {w_j}=0,$$
(19)
$$\frac{{\partial L}}{{\partial \lambda }}=\frac{1}{2}\left( {\sum\limits_{{j=1}}^{n} {w_{j}^{2}} - 1} \right)=0.$$
(20)

It follows from Eq. (19) that

$${w_j}=\frac{{ - \sum\nolimits_{{k=1}}^{p} {\sum\nolimits_{{i=1}}^{m} {\sum\nolimits_{{q=1}}^{m} {\sum\nolimits_{{t=1}}^{l} {\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right|} \right){\omega _k}} } } } }}{{2\lambda l}}.$$
(21)

Substituting Eq. (21) into Eq. (20) yields

$$\lambda =\frac{{ - \sqrt {\sum\nolimits_{{j=1}}^{n} {{{\left( {\sum\nolimits_{{k=1}}^{p} {\sum\nolimits_{{i=1}}^{m} {\sum\nolimits_{{q=1}}^{m} {\sum\nolimits_{{t=1}}^{l} {\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right|} \right){\omega _k}} } } } } \right)}^2}} } }}{{2l}}.$$
(22)

Then, we have

$${w_j}=\frac{{\sum\nolimits_{{k=1}}^{p} {\sum\nolimits_{{i=1}}^{m} {\sum\nolimits_{{q=1}}^{m} {\sum\nolimits_{{t=1}}^{l} {\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right|} \right){\omega _k}} } } } }}{{\sqrt {\sum\nolimits_{{j=1}}^{n} {{{\left( {\sum\nolimits_{{k=1}}^{p} {\sum\nolimits_{{i=1}}^{m} {\sum\nolimits_{{q=1}}^{m} {\sum\nolimits_{{t=1}}^{l} {\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right|} \right){\omega _k}} } } } } \right)}^2}} } }}.$$
(23)

Upon normalizing \({w_j}\) (\(j=1,2, \dots ,n\)), we obtain

$$w_{j}^{ * }=\frac{{{w_j}}}{{\sum\nolimits_{{j=1}}^{n} {{w_j}} }}=\frac{{\sum\nolimits_{{k=1}}^{p} {\sum\nolimits_{{i=1}}^{m} {\sum\nolimits_{{q=1}}^{m} {\sum\nolimits_{{t=1}}^{l} {\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right|} \right){\omega _k}} } } } }}{{\sum\nolimits_{{j=1}}^{n} {\sum\nolimits_{{k=1}}^{p} {\sum\nolimits_{{i=1}}^{m} {\sum\nolimits_{{q=1}}^{m} {\sum\nolimits_{{t=1}}^{l} {\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right|} \right){\omega _k}} } } } } }}$$
(24)

which is the most desirable weighting vector of attributes.
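The closed-form weights of Eq. (24) can be sketched numerically. In the snippet below, a normalized IVHF array `H[k, i, j, t] = [lower, upper]` is generated at random purely for illustration, and the expert weights `omega` are assumed given; the constant factors \(1/2l\) and \(-2\lambda l\) of Eqs. (21)–(23) cancel under the normalization, so they are omitted:

```python
import numpy as np

# Toy normalized IVHF data: H[k, i, j, t] = [lower, upper] is membership
# interval t of alternative i under attribute j for expert k.
# These numbers are illustrative, not the paper's Tables 4-6.
rng = np.random.default_rng(0)
low = rng.uniform(0.0, 0.5, size=(2, 3, 4, 2))       # p=2, m=3, n=4, l=2
H = np.stack([low, low + rng.uniform(0.0, 0.5, size=low.shape)], axis=-1)

omega = np.array([0.5, 0.5])      # expert weights (assumed known here)

# Pairwise deviations of Eq. (24): for each expert k and attribute j,
# sum |L_ij - L_qj| + |U_ij - U_qj| over all alternative pairs (i, q)
# and all positions t.
diff = np.abs(H[:, :, None, :, :, :] - H[:, None, :, :, :, :])  # (p,m,m,n,l,2)
dev = diff.sum(axis=(1, 2, 4, 5))                               # (p, n)

w = omega @ dev                   # numerator of Eq. (24), per attribute
w = w / w.sum()                   # normalization step of Eq. (24)
print(w)
```

Attributes on which the alternatives differ more receive larger weights, which is exactly the intuition of the maximizing deviation method.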

Furthermore, the constrained optimization model below is built to address the situation in which the information for the weight vector is partially known:

$$\begin{gathered} \hbox{max} D\left( w \right)=\hbox{max} \frac{{\sum\nolimits_{{k=1}}^{p} {{\omega _k}\left( {\sum\nolimits_{{j=1}}^{n} {\sum\nolimits_{{i=1}}^{m} {\sum\nolimits_{{q=1}}^{m} {\sum\nolimits_{{t=1}}^{l} {\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{qj}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right|} \right){w_j}} } } } } \right)} }}{{2l}} \hfill \\ {\text{s.t.}}{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} w \in \Delta ,{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {w_j} \geq 0,{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} j=1,2, \dots ,n,{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} \sum\limits_{{j=1}}^{n} {{w_j}} =1. \hfill \\ \end{gathered}$$
(M-5)
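Because the objective of (M-5) is linear in \(w\) and the five forms of \(\Delta\) listed above are linear constraints, (M-5) is a linear program, so any LP solver can stand in for LINGO. A sketch with SciPy, using hypothetical deviation coefficients `c` and a \(\Delta\) combining one Form 1 and one Form 5 constraint:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical objective coefficients: c[j] is the bracketed deviation
# sum of (M-5) for attribute j, with omega and the 1/2l factor folded in.
c = np.array([0.42, 0.35, 0.51, 0.30])

# Partial weight information Delta, e.g.
#   Form 1: w_1 >= w_2          ->  -w_1 + w_2 <= 0
#   Form 5: 0.15 <= w_4 <= 0.3  ->  bounds on w_4
A_ub = np.array([[-1.0, 1.0, 0.0, 0.0]])
b_ub = np.array([0.0])
A_eq = np.ones((1, 4))            # sum_j w_j = 1
b_eq = np.array([1.0])
bounds = [(0, 1), (0, 1), (0, 1), (0.15, 0.3)]

# linprog minimizes, so negate c to maximize D(w).
res = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)
```

With these illustrative coefficients, the solver puts as much weight as possible on the attribute with the largest deviation coefficient while respecting \(\Delta\).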

3.4 Extended TOPSIS approach for the MAGDM with IVHF information

This subsection extends the classical TOPSIS, originally introduced by Hwang and Yoon (1981), to MAGDM problems under IVHF environments.

The flowchart of the extended TOPSIS method is shown in Fig. 1. The extended method includes the following steps:

Fig. 1 The flowchart of the developed method

Step 1. The experts \({d_k} \in D\) furnish the IVHF decision matrices \({\tilde {A}^{\left( k \right)}}={\left( {\tilde {a}_{{ij}}^{{\left( k \right)}}} \right)_{m \times n}}\), which are transformed into normalized decision matrices \({\tilde {H}^{\left( k \right)}}={\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)_{m \times n}}\) via Eq. (5), for \(k=1,2, \dots ,p\).

Step 2. In cases where we are unaware of the weighting information of experts, we can acquire the weights of experts through Eq. (11).

Step 3. In cases where we are entirely unaware of the weighting information of attributes, we can acquire the attribute weights through Eq. (24); in cases where we are partially aware of the weighting information of attributes, we can derive the attribute weights by solving model (M-5).

Step 4. Determine the IVHFPIS \(\tilde {h}_{+}^{{\left( k \right)}}=\left\{ {\tilde {h}_{{+1}}^{{\left( k \right)}},\tilde {h}_{{+2}}^{{\left( k \right)}}, \dots ,\tilde {h}_{{+n}}^{{\left( k \right)}}} \right\}\) and the IVHFNIS \(\tilde {h}_{ - }^{{\left( k \right)}}=\left\{ {\tilde {h}_{{ - 1}}^{{\left( k \right)}},\tilde {h}_{{ - 2}}^{{\left( k \right)}}, \dots ,\tilde {h}_{{ - n}}^{{\left( k \right)}}} \right\}\) for each decision maker \({d_k}\) by the following equations:

$$\tilde {h}_{{+j}}^{{\left( k \right)}}=\mathop {\hbox{max} }\limits_{i} \left\{ {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right\}=\left\{ {\left. {\mathop {\hbox{max} }\limits_{i} \left\{ {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right\}} \right|t=1,2, \dots ,l} \right\}=\left\{ {\left. {\left[ {\mathop {\hbox{max} }\limits_{i} {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L},\mathop {\hbox{max} }\limits_{i} {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right]} \right|t=1,2, \dots ,l} \right\}\quad j=1,2, \dots ,n,$$
(25)
$$\tilde {h}_{{ - j}}^{{\left( k \right)}}=\mathop {\hbox{min} }\limits_{i} \left\{ {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right\}=\left\{ {\left. {\mathop {\hbox{min} }\limits_{i} \left\{ {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right\}} \right|t=1,2, \dots ,l} \right\}=\left\{ {\left. {\left[ {\mathop {\hbox{min} }\limits_{i} {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L},\mathop {\hbox{min} }\limits_{i} {{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right]} \right|t=1,2, \dots ,l} \right\}\quad j=1,2, \dots ,n.$$
(26)
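Equations (25) and (26) amount to endpoint-wise maxima and minima over the alternatives. A sketch for a single expert, using a small hypothetical normalized matrix `Hk[i, j, t] = [lower, upper]`:

```python
import numpy as np

# Hypothetical normalized IVHF matrix for one expert:
# Hk[i, j, t] = [lower, upper], with m=3 alternatives, n=2 attributes, l=2.
Hk = np.array([[[[0.3, 0.5], [0.2, 0.4]], [[0.6, 0.8], [0.5, 0.7]]],
               [[[0.4, 0.6], [0.3, 0.5]], [[0.5, 0.7], [0.4, 0.6]]],
               [[[0.2, 0.4], [0.1, 0.3]], [[0.7, 0.9], [0.6, 0.8]]]])

# Eqs. (25)/(26): endpoint-wise max/min over the alternatives (axis 0).
h_plus = Hk.max(axis=0)           # IVHFPIS, shape (n, l, 2)
h_minus = Hk.min(axis=0)          # IVHFNIS, shape (n, l, 2)

print(h_plus[1])                  # ideal intervals for attribute 2
```

Note that the resulting ideal intervals need not coincide with any single alternative's row; they are assembled endpoint by endpoint.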

Step 5. Compute \(d_{{+i}}^{{\left( k \right)}}\) for each alternative \({x_i}\) from the IVHFPIS \(h_{+}^{{\left( k \right)}}\) regarding the decision maker \({d_k}\) as:

$$d_{{+i}}^{{\left( k \right)}}=\sum\limits_{{j=1}}^{n} {{w_j}d\left( {\tilde {h}_{{ij}}^{{\left( k \right)}},\tilde {h}_{{+j}}^{{\left( k \right)}}} \right)} =\frac{{\sum\nolimits_{{j=1}}^{n} {\sum\nolimits_{{t=1}}^{l} {{w_j}\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{+j}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{+j}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right|} \right)} } }}{{2l}}.$$
(27)

In a similar way, compute \(d_{{ - i}}^{{\left( k \right)}}\) for each alternative \({x_i}\) from the IVHFNIS \(h_{ - }^{{\left( k \right)}}\) with respect to the decision maker \({d_k}\) as:

$$d_{{ - i}}^{{\left( k \right)}}=\sum\limits_{{j=1}}^{n} {{w_j}d\left( {\tilde {h}_{{ij}}^{{\left( k \right)}},\tilde {h}_{{ - j}}^{{\left( k \right)}}} \right)} =\frac{{\sum\nolimits_{{j=1}}^{n} {\sum\nolimits_{{t=1}}^{l} {{w_j}\left( {\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L} - {{\left( {{{\left( {\tilde {h}_{{ - j}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^L}} \right|+\left| {{{\left( {{{\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U} - {{\left( {{{\left( {\tilde {h}_{{ - j}}^{{\left( k \right)}}} \right)}^{\sigma \left( t \right)}}} \right)}^U}} \right|} \right)} } }}{{2l}}.$$
(28)

Step 6. Compute the RCC of each alternative \({x_i}\) to the IVHFPIS \(h_{+}^{{\left( k \right)}}\) regarding the decision maker \({d_k}\) as:

$$C_{i}^{{\left( k \right)}}=\frac{{d_{{ - i}}^{{\left( k \right)}}}}{{d_{{+i}}^{{\left( k \right)}}+d_{{ - i}}^{{\left( k \right)}}}}.$$
(29)
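Equations (27)–(29) can be sketched in the same array layout; the matrix `Hk` and the attribute weights `w` below are hypothetical:

```python
import numpy as np

# Hypothetical normalized IVHF matrix for one expert (m=3, n=2, l=2),
# with assumed attribute weights w.
Hk = np.array([[[[0.3, 0.5], [0.2, 0.4]], [[0.6, 0.8], [0.5, 0.7]]],
               [[[0.4, 0.6], [0.3, 0.5]], [[0.5, 0.7], [0.4, 0.6]]],
               [[[0.2, 0.4], [0.1, 0.3]], [[0.7, 0.9], [0.6, 0.8]]]])
w = np.array([0.6, 0.4])

h_plus = Hk.max(axis=0)           # IVHFPIS, Eq. (25)
h_minus = Hk.min(axis=0)          # IVHFNIS, Eq. (26)
l = Hk.shape[2]

# Eqs. (27)-(28): weighted mean absolute endpoint distance to the ideals.
d_plus = (w * (np.abs(Hk - h_plus).sum(axis=(2, 3)) / (2 * l))).sum(axis=1)
d_minus = (w * (np.abs(Hk - h_minus).sum(axis=(2, 3)) / (2 * l))).sum(axis=1)

# Eq. (29): relative closeness coefficient of each alternative.
C = d_minus / (d_plus + d_minus)
print(C)
```

Stacking the vector `C` across the experts \(k=1,2, \dots ,p\) as columns produces the relative-closeness coefficient matrix of Eq. (30).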

After calculating the \(C_{i}^{{\left( k \right)}}\) for every decision maker \({d_k}\) (\(k=1,2, \dots ,p\)), we then construct the relative-closeness coefficient matrix as below:

$$C={\left( {\begin{array}{*{20}{c}} {C_{1}^{{\left( 1 \right)}}}&{C_{1}^{{\left( 2 \right)}}}& \cdots &{C_{1}^{{\left( p \right)}}} \\ {C_{2}^{{\left( 1 \right)}}}&{C_{2}^{{\left( 2 \right)}}}& \cdots &{C_{2}^{{\left( p \right)}}} \\ \cdots & \cdots & \cdots & \cdots \\ {C_{m}^{{\left( 1 \right)}}}&{C_{m}^{{\left( 2 \right)}}}& \cdots &{C_{m}^{{\left( p \right)}}} \end{array}} \right)_{m \times p}}$$
(30)

Steps 4–6 extend the standard TOPSIS to IVHF environments; accordingly, this procedure can be called IVHF–TOPSIS.

Step 7. Determine the group positive ideal solution (GPIS) \(\tilde {h}_{+}^{G}\) and the group negative ideal solution (GNIS) \(\tilde {h}_{ - }^{G}\), respectively, by the following formulas:

$$\tilde {h}_{+}^{G}=\left\{ {\mathop {\hbox{max} }\limits_{i} \left\{ {C_{i}^{{\left( 1 \right)}}} \right\},\mathop {\hbox{max} }\limits_{i} \left\{ {C_{i}^{{\left( 2 \right)}}} \right\}, \dots ,\mathop {\hbox{max} }\limits_{i} \left\{ {C_{i}^{{\left( p \right)}}} \right\}} \right\},$$
(31)
$$\tilde {h}_{ - }^{G}=\left\{ {\mathop {\hbox{min} }\limits_{i} \left\{ {C_{i}^{{\left( 1 \right)}}} \right\},\mathop {\hbox{min} }\limits_{i} \left\{ {C_{i}^{{\left( 2 \right)}}} \right\}, \dots ,\mathop {\hbox{min} }\limits_{i} \left\{ {C_{i}^{{\left( p \right)}}} \right\}} \right\}.$$
(32)

Step 8. For any alternative \({x_i}\) from the GPIS \(\tilde {h}_{+}^{G}\) and the GNIS \(\tilde {h}_{ - }^{G}\), compute the separation measures \(d_{{+i}}^{G}\) and \(d_{{ - i}}^{G}\), respectively, as follows:

$$d_{{+i}}^{G}=\sum\limits_{{k=1}}^{p} {{\omega _k}d\left( {C_{i}^{{\left( k \right)}},\mathop {\hbox{max} }\limits_{i} \left\{ {C_{i}^{{\left( k \right)}}} \right\}} \right)} =\sum\limits_{{k=1}}^{p} {{\omega _k}\left| {C_{i}^{{\left( k \right)}} - \left( {\mathop {\hbox{max} }\limits_{i} \left\{ {C_{i}^{{\left( k \right)}}} \right\}} \right)} \right|} ,$$
(33)
$$d_{{ - i}}^{G}=\sum\limits_{{k=1}}^{p} {{\omega _k}d\left( {C_{i}^{{\left( k \right)}},\mathop {\hbox{min} }\limits_{i} \left\{ {C_{i}^{{\left( k \right)}}} \right\}} \right)} =\sum\limits_{{k=1}}^{p} {{\omega _k}\left| {C_{i}^{{\left( k \right)}} - \left( {\mathop {\hbox{min} }\limits_{i} \left\{ {C_{i}^{{\left( k \right)}}} \right\}} \right)} \right|} .$$
(34)

Step 9. For each alternative \({x_i}\), compute the group relative-closeness coefficient (GRCC) \(C_{i}^{G}\) with respect to the GPIS as below:

$$C_{i}^{G}=\frac{{d_{{ - i}}^{G}}}{{d_{{+i}}^{G}+d_{{ - i}}^{G}}}.$$
(35)

Step 10. Based on the GRCCs \(C_{i}^{G}\), we rank all alternatives \({x_i}\), \(i=1,2, \dots ,m\), and obtain the most desirable alternative. A larger value of \(C_{i}^{G}\) means that the alternative \({x_i}\) is farther from the group negative ideal solution \(\tilde {h}_{ - }^{G}\) and closer to the group positive ideal solution \(\tilde {h}_{+}^{G}\). Thus, the alternative(s) with the largest GRCC can be selected as the optimal one(s).

4 Illustrative examples

This section first gives an investment example to demonstrate the proposed method and then provides a comparative discussion with other methods to show the advantages of the developed method.

4.1 An investment problem in an IVHF environment

Example 4.1

Consider an investment problem adapted from (Herrera and Herrera-Viedma 2000; Xu 2006), which is composed of five alternatives, four attributes and three decision makers (DMs). The five alternatives are specified as follows: a truck industry (\({x_1}\)), a drug company (\({x_2}\)), a refrigerator company (\({x_3}\)), an arms company (\({x_4}\)) and a television company (\({x_5}\)). The four attributes include the investment risk (\({c_1}\)), the investment return (\({c_2}\)), the social–political impact (\({c_3}\)) and the investment environment (\({c_4}\)). Assume that three experts \({d_k}\) (\(k=1,2,3\)) furnish the IVHF decision matrices \({\tilde {A}^{\left( k \right)}}={\left( {\tilde {a}_{{ij}}^{{\left( k \right)}}} \right)_{m \times n}}\) (\(k=1,2,3\)), shown in Tables 1, 2 and 3.

Table 1 Interval-valued hesitant fuzzy decision matrix \({\tilde {A}^{\left( 1 \right)}}\)
Table 2 Interval-valued hesitant fuzzy decision matrix \({\tilde {A}^{\left( 2 \right)}}\)
Table 3 Interval-valued hesitant fuzzy decision matrix \({\tilde {A}^{\left( 3 \right)}}\)

To apply our method to seek the optimal alternative, two situations are considered as follows:

Situation 1

We are entirely unaware of the weight information for attributes.

Step 1. Since all attributes \({c_j}\) (\(j=1,2,3,4\)) are benefit types, the attribute values of \({\tilde {A}^{\left( k \right)}}={\left( {\tilde {a}_{{ij}}^{{\left( k \right)}}} \right)_{5 \times 4}}\) (\(k=1,2,3\)) need not be transformed. Under the assumption that all three DMs \({d_k}\) (\(k=1,2,3\)) are pessimists, \({\tilde {A}^{\left( k \right)}}={\left( {\tilde {a}_{{ij}}^{{\left( k \right)}}} \right)_{m \times n}}\) is normalized into \({\tilde {H}^{\left( k \right)}}={\left( {\tilde {h}_{{ij}}^{{\left( k \right)}}} \right)_{m \times n}}\), \(k=1,2,3\) (refer to Tables 4, 5, 6).

Table 4 Interval-valued hesitant fuzzy decision matrix \({\tilde {H}^{\left( 1 \right)}}\)
Table 5 Interval-valued hesitant fuzzy decision matrix \({\tilde {H}^{\left( 2 \right)}}\)
Table 6 Interval-valued hesitant fuzzy decision matrix \({\tilde {H}^{\left( 3 \right)}}\)

Step 2

Calculating the weights of decision makers through Eq. (11) yields

$$\omega =\left( {\frac{1}{3},\frac{1}{3},\frac{1}{3}} \right).$$

Step 3. By Eq. (24), attribute weights are generated as follows:

$$w={\left( {0.2836,\;0.2367,\;0.2566,\;0.2232} \right)^T}.$$

Step 4. Using Eqs. (25) and (26), we identify the IVHFPIS \(\tilde {h}_{+}^{{\left( k \right)}}\) and the IVHFNIS \(\tilde {h}_{ - }^{{\left( k \right)}}\) for each decision maker \({d_k}\), respectively, \(k=1,2,3\).

$$\begin{gathered} \tilde {h}_{+}^{{\left( 1 \right)}}=\left\{ \begin{gathered} \left\{ {\left[ {0.7,0.8} \right],\left[ {0.6,0.7} \right],\left[ {0.5,0.7} \right],\left[ {0.5,0.7} \right],\left[ {0.5,0.7} \right]} \right\},\left\{ {\left[ {0.7,0.9} \right],\left[ {0.6,0.7} \right],\left[ {0.5,0.6} \right],\left[ {0.5,0.6} \right],\left[ {0.5,0.6} \right]} \right\}, \hfill \\ \left\{ {\left[ {0.8,0.9} \right],\left[ {0.7,0.8} \right],\left[ {0.6,0.7} \right],\left[ {0.6,0.7} \right],\left[ {0.6,0.7} \right]} \right\},\left\{ {\left[ {0.8,0.9} \right],\left[ {0.6,0.7} \right],\left[ {0.5,0.6} \right],\left[ {0.5,0.6} \right],\left[ {0.5,0.6} \right]} \right\} \hfill \\ \end{gathered} \right\} \hfill \\ \tilde {h}_{ - }^{{\left( 1 \right)}}=\left\{ \begin{gathered} \left\{ {\left[ {0.4,0.5} \right],\left[ {0.1,0.3} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right]} \right\},\left\{ {\left[ {0.3,0.5} \right],\left[ {0.3,0.4} \right],\left[ {0.2,0.3} \right],\left[ {0.2,0.3} \right],\left[ {0.1,0.3} \right]} \right\}, \hfill \\ \left\{ {\left[ {0.3,0.4} \right],\left[ {0.1,0.3} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right]} \right\},\left\{ {\left[ {0.5,0.6} \right],\left[ {0.2,0.3} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right]} \right\} \hfill \\ \end{gathered} \right\} \hfill \\ \tilde {h}_{+}^{{\left( 2 \right)}}=\left\{ \begin{gathered} \left\{ {\left[ {0.7,0.9} \right],\left[ {0.7,0.8} \right],\left[ {0.5,0.6} \right],\left[ {0.5,0.6} \right],\left[ {0.5,0.6} \right]} \right\},\left\{ {\left[ {0.7,0.9} \right],\left[ {0.7,0.8} \right],\left[ {0.6,0.7} \right],\left[ {0.6,0.7} \right],\left[ {0.6,0.7} \right]} \right\}, \hfill \\ \left\{ {\left[ {0.8,0.9} \right],\left[ {0.6,0.7} \right],\left[ {0.5,0.6} \right],\left[ {0.4,0.5} \right],\left[ {0.3,0.4} \right]} \right\},\left\{ {\left[ {0.8,0.9} \right],\left[ {0.6,0.7} \right],\left[ {0.5,0.7} \right],\left[ 
{0.5,0.7} \right],\left[ {0.5,0.7} \right]} \right\} \hfill \\ \end{gathered} \right\} \hfill \\ \tilde {h}_{ - }^{{\left( 2 \right)}}=\left\{ \begin{gathered} \left\{ {\left[ {0.4,0.5} \right],\left[ {0.2,0.3} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right]} \right\},\left\{ {\left[ {0.2,0.3} \right],\left[ {0.1,0.3} \right],\left[ {0.1,0.3} \right],\left[ {0.1,0.3} \right],\left[ {0.1,0.2} \right]} \right\}, \hfill \\ \left\{ {\left[ {0.2,0.3} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right]} \right\},\left\{ {\left[ {0.6,0.7} \right],\left[ {0.3,0.5} \right],\left[ {0.2,0.3} \right],\left[ {0.2,0.3} \right],\left[ {0.1,0.3} \right]} \right\} \hfill \\ \end{gathered} \right\} \hfill \\ \tilde {h}_{+}^{{\left( 3 \right)}}=\left\{ \begin{gathered} \left\{ {\left[ {0.8,0.9} \right],\left[ {0.7,0.8} \right],\left[ {0.6,0.7} \right],\left[ {0.6,0.7} \right],\left[ {0.6,0.7} \right]} \right\},\left\{ {\left[ {0.8,0.9} \right],\left[ {0.7,0.8} \right],\left[ {0.6,0.7} \right],\left[ {0.6,0.7} \right],\left[ {0.6,0.7} \right]} \right\}, \hfill \\ \left\{ {\left[ {0.7,0.9} \right],\left[ {0.7,0.8} \right],\left[ {0.5,0.6} \right],\left[ {0.5,0.6} \right],\left[ {0.5,0.6} \right]} \right\},\left\{ {\left[ {0.7,0.9} \right],\left[ {0.7,0.8} \right],\left[ {0.6,0.7} \right],\left[ {0.5,0.6} \right],\left[ {0.5,0.6} \right]} \right\} \hfill \\ \end{gathered} \right\} \hfill \\ \tilde {h}_{ - }^{{\left( 3 \right)}}=\left\{ \begin{gathered} \left\{ {\left[ {0.2,0.3} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right]} \right\},\left\{ {\left[ {0.5,0.6} \right],\left[ {0.5,0.6} \right],\left[ {0.2,0.4} \right],\left[ {0.1,0.3} \right],\left[ {0.1,0.2} \right]} \right\}, \hfill \\ \left\{ {\left[ {0.2,0.3} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right],\left[ {0.1,0.2} \right]} \right\},\left\{ 
{\left[ {0.3,0.5} \right],\left[ {0.3,0.4} \right],\left[ {0.2,0.3} \right],\left[ {0.1,0.3} \right],\left[ {0.1,0.3} \right]} \right\} \hfill \\ \end{gathered} \right\}. \hfill \\ \end{gathered}$$

Step 5

Use Eqs. (27) and (28) to compute \(d_{{+i}}^{{\left( k \right)}}\) and \(d_{{ - i}}^{{\left( k \right)}}\) for each alternative \({x_i}\) for the decision maker \({d_k}\):

$$d_{{+1}}^{{\left( 1 \right)}}=0.1923,\quad d_{{ - 1}}^{{\left( 1 \right)}}=0.2206,\quad d_{{+2}}^{{\left( 1 \right)}}=0.2388,\quad d_{{ - 2}}^{{\left( 1 \right)}}=0.1741,\quad d_{{+3}}^{{\left( 1 \right)}}=0.1525,\quad d_{{ - 3}}^{{\left( 1 \right)}}=0.2603,\quad d_{{+4}}^{{\left( 1 \right)}}=0.1449,\quad d_{{ - 4}}^{{\left( 1 \right)}}=0.2679,\quad d_{{+5}}^{{\left( 1 \right)}}=0.2105,\quad d_{{ - 5}}^{{\left( 1 \right)}}=0.2023,$$

$$d_{{+1}}^{{\left( 2 \right)}}=0.1902,\quad d_{{ - 1}}^{{\left( 2 \right)}}=0.2162,\quad d_{{+2}}^{{\left( 2 \right)}}=0.2100,\quad d_{{ - 2}}^{{\left( 2 \right)}}=0.1965,\quad d_{{+3}}^{{\left( 2 \right)}}=0.2612,\quad d_{{ - 3}}^{{\left( 2 \right)}}=0.1453,\quad d_{{+4}}^{{\left( 2 \right)}}=0.2709,\quad d_{{ - 4}}^{{\left( 2 \right)}}=0.1355,\quad d_{{+5}}^{{\left( 2 \right)}}=0.2001,\quad d_{{ - 5}}^{{\left( 2 \right)}}=0.2063,$$

$$d_{{+1}}^{{\left( 3 \right)}}=0.2118,\quad d_{{ - 1}}^{{\left( 3 \right)}}=0.2162,\quad d_{{+2}}^{{\left( 3 \right)}}=0.2051,\quad d_{{ - 2}}^{{\left( 3 \right)}}=0.1965,\quad d_{{+3}}^{{\left( 3 \right)}}=0.2050,\quad d_{{ - 3}}^{{\left( 3 \right)}}=0.1453,\quad d_{{+4}}^{{\left( 3 \right)}}=0.2432,\quad d_{{ - 4}}^{{\left( 3 \right)}}=0.1355,\quad d_{{+5}}^{{\left( 3 \right)}}=0.1159,\quad d_{{ - 5}}^{{\left( 3 \right)}}=0.2063.$$

Step 6. Use Eq. (29) to compute the RCC \(C_{i}^{{\left( k \right)}}\) of each alternative \({x_i}\) regarding the IVHFPIS \(\tilde {h}_{+}^{{\left( k \right)}}\) of the decision maker \({d_k}\) as

$$C_{1}^{{\left( 1 \right)}}={\text{0.5343,}}\quad C_{2}^{{\left( 1 \right)}}={\text{0.4216,}}\quad C_{3}^{{\left( 1 \right)}}={\text{0.6305,}}\quad C_{4}^{{\left( 1 \right)}}={\text{0.6489,}}\quad C_{5}^{{\left( 1 \right)}}={\text{0.4901}},$$
$$C_{1}^{{\left( 2 \right)}}={\text{0.5320,}}\quad C_{2}^{{\left( 2 \right)}}={\text{0.4834}},\quad C_{3}^{{\left( 2 \right)}}={\text{0.3574,}}\quad C_{4}^{{\left( 2 \right)}}={\text{0.3334,}}\quad C_{5}^{{\left( 2 \right)}}={\text{0.5076}},$$
$$C_{1}^{{\left( 3 \right)}}={\text{0.5227,}}\quad C_{2}^{{\left( 3 \right)}}={\text{0.5377,}}\quad C_{3}^{{\left( 3 \right)}}={\text{0.5379,}}\quad C_{4}^{{\left( 3 \right)}}={\text{0.4519}},\quad C_{5}^{{\left( 3 \right)}}={\text{0.7389}}.$$
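Assuming the RCC of Eq. (29) is the standard TOPSIS closeness ratio \(C = d_-/(d_+ + d_-)\) (an assumption, since Eq. (29) is given earlier in the paper), the Step 6 values for decision maker \({d_1}\) can be reproduced from the Step 5 distances up to rounding:

```python
# Step 5 distances of each alternative to the IVHFPIS/IVHFNIS for d1.
d_plus  = [0.1923, 0.2388, 0.1525, 0.1449, 0.2105]
d_minus = [0.2206, 0.1741, 0.2603, 0.2679, 0.2023]

def rcc(dp, dm):
    """Relative-closeness coefficient: larger means closer to the ideal."""
    return dm / (dp + dm)

C1 = [rcc(dp, dm) for dp, dm in zip(d_plus, d_minus)]
print([round(c, 4) for c in C1])  # matches the printed C_i^(1) up to rounding
```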

Then, we construct the relative-closeness coefficient matrix as:

$$C={\left( {\begin{array}{*{20}{c}} {{\text{0.5343}}}&{{\text{0.5320}}}&{{\text{0.5227}}} \\ {{\text{0.4216}}}&{{\text{0.4834}}}&{{\text{0.5377}}} \\ {{\text{0.6305}}}&{{\text{0.3574}}}&{{\text{0.5379}}} \\ {{\text{0.6489}}}&{{\text{0.3334}}}&{{\text{0.4519}}} \\ {{\text{0.4901}}}&{{\text{0.5076}}}&{{\text{0.7389}}} \end{array}} \right)_{5 \times 3}}$$

Step 7. Use Eqs. (31) and (32) to get the GPIS and GNIS, respectively, as:

$$\begin{gathered} \tilde {h}_{+}^{G}=\left\{ {{\text{0.6489}},{\text{0.5320}},{\text{0.7389}}} \right\} \hfill \\ \tilde {h}_{ - }^{G}=\left\{ {{\text{0.4216}},{\text{0.3334}},{\text{0.4519}}} \right\}. \hfill \\ \end{gathered}$$

Step 8. Use Eqs. (33) and (34) to compute \(d_{{+i}}^{G}\) and \(d_{{ - i}}^{G}\) of the alternative \({x_i}\) regarding the GPIS \(\tilde {h}_{+}^{G}\) and the GNIS \(\tilde {h}_{ - }^{G}\), respectively, as follows:

$$d_{{+1}}^{G}={\text{0.1103}},\quad d_{{ - 1}}^{G}={\text{0.1274}},\quad d_{{+2}}^{G}={\text{0.1591}},\quad d_{{ - 2}}^{G}={\text{0.0786}},\quad d_{{+3}}^{G}={\text{0.1313}},\quad d_{{ - 3}}^{G}={\text{0.1063}},\quad d_{{+4}}^{G}={\text{0.1619}},\quad d_{{ - 4}}^{G}={\text{0.0758}},\quad d_{{+5}}^{G}={\text{0.0611}},\quad d_{{ - 5}}^{G}={\text{0.1766}}.$$

Step 9. Use Eq. (35) to compute the GRCC \(C_{i}^{G}\) of each alternative \({x_i}\) with respect to the GPIS \(\tilde {h}_{+}^{G}\) as:

$$C_{1}^{G}={\text{0.5360,}}\quad C_{2}^{G}={\text{0.3308}},\quad C_{3}^{G}={\text{0.4474}},\quad C_{4}^{G}={\text{0.3188}},\quad C_{5}^{G}={\text{0.7430}}.$$

Step 10

According to the GRCC values \(C_{i}^{G}\), the alternatives \({x_i}\) (\(i=1,2,3,4,5\)) are ranked as \({x_5} \succ {x_1} \succ {x_3} \succ {x_2} \succ {x_4}\); therefore, \({x_5}\) is chosen as the best alternative.
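Steps 7 through 10 can be sketched together. The GPIS and GNIS are the column-wise maxima and minima of the relative-closeness coefficient matrix, and the printed distances of Eqs. (33) and (34) are reproduced if those distances are taken as weighted mean absolute deviations with equal expert weights of 1/3 (an assumption inferred from the numbers, not stated in this excerpt):

```python
# RCC matrix C from Step 6: rows = alternatives x1..x5, columns = experts d1..d3.
C = [
    [0.5343, 0.5320, 0.5227],
    [0.4216, 0.4834, 0.5377],
    [0.6305, 0.3574, 0.5379],
    [0.6489, 0.3334, 0.4519],
    [0.4901, 0.5076, 0.7389],
]
lam = [1 / 3, 1 / 3, 1 / 3]            # expert weights (assumed equal here)

gpis = [max(col) for col in zip(*C)]   # column-wise best RCCs
gnis = [min(col) for col in zip(*C)]   # column-wise worst RCCs

grcc = []
for row in C:
    d_plus  = sum(l * abs(c - p) for l, c, p in zip(lam, row, gpis))
    d_minus = sum(l * abs(c - n) for l, c, n in zip(lam, row, gnis))
    grcc.append(d_minus / (d_plus + d_minus))

# Rank alternatives by decreasing GRCC (Step 10).
ranking = sorted(range(5), key=lambda i: -grcc[i])
print([round(g, 4) for g in grcc], [i + 1 for i in ranking])
```

Running this reproduces the printed GRCC values up to rounding and the ranking \({x_5} \succ {x_1} \succ {x_3} \succ {x_2} \succ {x_4}\).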

Situation 2

The weight information for the attributes is partially known and given as follows:

$$\Delta =\left\{ {0.2 \leq {w_1} \leq 0.3,\;0.25 \leq {w_2} \leq 0.35,\;0.3 \leq {w_3} \leq 0.35,\;0.4 \leq {w_4} \leq 0.5,\;{w_j} \geq 0,\;j=1,2,3,4,\;\sum\limits_{{j=1}}^{4} {{w_j}} =1} \right\}$$

Step 1′

Same as Step 1.

Step 2′

Same as Step 2.

Step 3′

Construct the following model via (M-5):

$$\left\{ \begin{gathered} \hbox{max} \; D\left( w \right)={\text{4.7600}}{w_1}+{\text{3.9733}}{w_2}+{\text{4.3067}}{w_3}+{\text{3.7467}}{w_4} \hfill \\ {\text{s.t.}}\quad w \in \Delta \hfill \\ \end{gathered} \right.$$

Solving this model yields the attribute weight vector \(w={\left( {{\text{0.2000}},{\text{0.2500}},{\text{0.3000}},{\text{0.4000}}} \right)^T}\).
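Model (M-5) above is a linear program: a linear objective over box bounds plus the normalization \(\sum\nolimits_j {w_j}=1\). For this special structure the optimum can be found greedily, raising the weights with the largest objective coefficients first. The sketch below uses the (M-5) objective coefficients but hypothetical bounds \(0.1 \leq {w_j} \leq 0.5\), chosen only so that the sketch has a feasible region; they are not the \(\Delta\) of this example:

```python
# Greedy solver for: max sum(coef[j]*w[j]) s.t. lower <= w <= upper, sum(w) = 1.
# Valid because the feasible set is a box intersected with one simplex constraint.
def max_deviation_weights(coef, lower, upper):
    w = list(lower)
    slack = 1.0 - sum(lower)
    assert slack >= 0 and sum(upper) >= 1.0, "infeasible bounds"
    # Spend the remaining mass on the largest coefficients first.
    for j in sorted(range(len(coef)), key=lambda j: -coef[j]):
        add = min(slack, upper[j] - lower[j])
        w[j] += add
        slack -= add
    return w

coef = [4.7600, 3.9733, 4.3067, 3.7467]          # objective of (M-5)
w = max_deviation_weights(coef, lower=[0.1] * 4, upper=[0.5] * 4)
print([round(v, 4) for v in w])  # -> [0.5, 0.1, 0.3, 0.1]
```

With these hypothetical bounds, all slack goes to \(w_1\) (largest coefficient) and then \(w_3\) (next largest), as the greedy order dictates.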

Step 4′

Same as Step 4.

Step 5′

Use Eqs. (27) and (28) to calculate \(d_{{+i}}^{{\left( k \right)}}\) and \(d_{{ - i}}^{{\left( k \right)}}\) of the alternative \({x_i}\) regarding the decision maker \({d_k}\):

$$d_{{+1}}^{{\left( 3 \right)}}={\text{0.1800}},\quad d_{{ - 1}}^{{\left( 3 \right)}}={\text{0.3110}},\quad d_{{+2}}^{{\left( 3 \right)}}={\text{0.2120}},\quad d_{{ - 2}}^{{\left( 3 \right)}}={\text{0.2790}},\quad d_{{+3}}^{{\left( 3 \right)}}={\text{0.2665}},\quad d_{{ - 3}}^{{\left( 3 \right)}}={\text{0.2245}},\quad d_{{+4}}^{{\left( 3 \right)}}={\text{0.2575}},\quad d_{{ - 4}}^{{\left( 3 \right)}}={\text{0.2335}},\quad d_{{+5}}^{{\left( 3 \right)}}={\text{0.1815}},\quad d_{{ - 5}}^{{\left( 3 \right)}}={\text{0.3095}}.$$

Step 6′

Use Eq. (29) to compute the RCC \(C_{i}^{{\left( k \right)}}\) of each alternative \({x_i}\) with respect to the IVHFPIS \(\tilde {h}_{+}^{{\left( k \right)}}\) of the decision maker \({d_k}\) as

$$C_{1}^{{\left( 1 \right)}}={\text{0.6331,}}\quad C_{2}^{{\left( 1 \right)}}={\text{0.4401}},\quad C_{3}^{{\left( 1 \right)}}={\text{0.5429,}}\quad C_{4}^{{\left( 1 \right)}}={\text{0.5440}},\quad C_{5}^{{\left( 1 \right)}}={\text{0.4687}},$$
$$C_{1}^{{\left( 2 \right)}}={\text{0.4789,}}\quad C_{2}^{{\left( 2 \right)}}={\text{0.5576,}}\quad C_{3}^{{\left( 2 \right)}}={\text{0.3459,}}\quad C_{4}^{{\left( 2 \right)}}={\text{0.3503,}}\quad C_{5}^{{\left( 2 \right)}}={\text{0.4900}},$$
$$C_{1}^{{\left( 3 \right)}}={\text{0.6334,}}\quad C_{2}^{{\left( 3 \right)}}={\text{0.5682,}}\quad C_{3}^{{\left( 3 \right)}}={\text{0.4572,}}\quad C_{4}^{{\left( 3 \right)}}={\text{0.4756,}}\quad C_{5}^{{\left( 3 \right)}}={\text{0.6303}}.$$

Then, we construct the relative-closeness coefficient matrix as:

$$C={\left( {\begin{array}{*{20}{c}} {{\text{0.6331}}}&{{\text{0.4789}}}&{{\text{0.6334}}} \\ {{\text{0.4401}}}&{{\text{0.5576}}}&{{\text{0.5682}}} \\ {{\text{0.5429}}}&{{\text{0.3459}}}&{{\text{0.4572}}} \\ {{\text{0.5440}}}&{{\text{0.3503}}}&{{\text{0.4756}}} \\ {{\text{0.4687}}}&{{\text{0.4900}}}&{{\text{0.6303}}} \end{array}} \right)_{5 \times 3}}$$

Step 7′. Utilize Eqs. (31) and (32) to identify the GPIS and GNIS, respectively, as:

$$\begin{gathered} \tilde {h}_{+}^{G}=\left\{ {{\text{0.6331}},{\text{0.5576}},{\text{0.6334}}} \right\} \hfill \\ \tilde {h}_{ - }^{G}=\left\{ {{\text{0.4401}},{\text{0.3459}},{\text{0.4572}}} \right\} \hfill \\ \end{gathered}$$

Step 8′. Utilize Eqs. (33) and (34) to compute \(d_{{+i}}^{G}\) and \(d_{{ - i}}^{G}\) of the alternative \({x_i}\) relating to the GPIS \(\tilde {h}_{+}^{G}\) and the GNIS \(\tilde {h}_{ - }^{G}\), respectively, as follows:

$$d_{{+1}}^{G}={\text{0.0262}},\quad d_{{ - 1}}^{G}={\text{0.1674}},\quad d_{{+2}}^{G}={\text{0.0861}},\quad d_{{ - 2}}^{G}={\text{0.1076}},\quad d_{{+3}}^{G}={\text{0.1594}},\quad d_{{ - 3}}^{G}={\text{0.0343}},\quad d_{{+4}}^{G}={\text{0.1514}},\quad d_{{ - 4}}^{G}={\text{0.0422}},\quad d_{{+5}}^{G}={\text{0.0784}},\quad d_{{ - 5}}^{G}={\text{0.1153}}.$$

Step 9′. Use Eq. (35) to compute the GRCC \(C_{i}^{G}\) of each alternative \({x_i}\) with respect to the GPIS \(\tilde {h}_{+}^{G}\) as:

$$C_{1}^{G}={\text{0.8645}},\quad C_{2}^{G}={\text{0.5556}},\quad C_{3}^{G}={\text{0.1771}},\quad C_{4}^{G}={\text{0.2181}},\quad C_{5}^{G}={\text{0.5954}}.$$

Step 10′

Rank all alternatives \({x_i}\) (\(i=1,2,3,4,5\)) by the GRCC \(C_{i}^{G}\): clearly, \({x_1} \succ {x_5} \succ {x_2} \succ {x_4} \succ {x_3}\), so \({x_1}\) is the best alternative.

4.2 Comparison analysis with other IVHF-MADM methods

This subsection compares the new method with other IVHF-MADM methods to demonstrate its merits.

4.2.1 Comparison with the IVHF-MADM methods based on TOPSIS

Recently, Xu and Zhang (2013) developed an approach to solving MADM problems with IVHF information. Compared with that method, ours has the following advantages. First, the method in Xu and Zhang (2013) addresses only MADM problems, whereas our method provides a procedure for handling MAGDM problems in IVHF settings; in particular, it constructs a quadratic programming model to obtain the weight vector of experts, which is not considered in Xu and Zhang (2013). Second, although Xu and Zhang (2013) established a model to obtain the attribute weight vector, that model determines the attribute weights from only a single individual IVHF decision matrix and cannot determine the importance weights of attributes in group decision-making environments; our method obtains the optimal attribute weights from all of the individual IVHF decision matrices. Finally, the TOPSIS method in Xu and Zhang (2013) has only one stage, while the extended TOPSIS proposed here has two: the first stage, the IVHF-TOPSIS, calculates the individual RCC of each alternative to the individual IVHFPIS; the second stage, the standard TOPSIS, calculates the GRCC of each alternative to the GPIS and selects the most desirable alternative, namely the one with the maximum group relative-closeness coefficient.

4.2.2 Comparison with the IVHF-MADM methods based on aggregation operators

Recently, many aggregation operators have been developed to accommodate IVHF arguments (Wei and Zhao 2013; Wei et al. 2013; Zhang and Wu 2014), including the IVHFWA, IVHFWG, GIVHFWA, GIVHFWG, IVHFOWA, IVHFOWG, GIVHFOWA, GIVHFOWG, IVHFHA, IVHFHG, GIVHFHA, GIVHFHG, HIVFEWA, HIVFEOWA, I-HIVFEOWA, HIVFEWG, HIVFEOWG, I-HIVFEOWG, A-IVHFWA, and A-IVHFWG operators. Based on these operators, several IVHF-MADM methods (Wei and Zhao 2013; Wei et al. 2013; Zhang and Wu 2014) have also been proposed to handle MADM problems in the context of IVHFSs. However, these operators and approaches have some inherent weaknesses, as follows:

(1) The existing operators and approaches aggregate the IVHF arguments, so the dimension of the fused IVHFE may grow as the aggregation proceeds, which increases the computational burden and causes loss of original information. In contrast, our method performs no such aggregation but deals with the interval-valued hesitant fuzzy arguments directly; consequently, it does not increase the dimension of any IVHFE and retains as much of the original preference information as possible.

(2) Our method uses the maximizing group consensus and the maximizing deviation methods to derive the weight vectors of experts and attributes, respectively, whereas the existing methods (Wei and Zhao 2013; Wei et al. 2013; Zhang and Wu 2014) assign these weight vectors in advance. Our method is therefore more objective and reasonable.
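The dimensionality growth described in (1) is easy to see concretely: aggregating two IVHFEs interval-by-interval produces one output interval per combination of inputs, so the lengths multiply. The sketch below uses hypothetical data and the usual algebraic-sum weighted-average form for two arguments; treat the exact formula as an assumption:

```python
from itertools import product

def ivhfwa2(h1, h2, w1, w2):
    """Weighted average of two IVHFEs, given as lists of (lo, hi) intervals.

    One output interval per pair of inputs, so len(result) = len(h1) * len(h2).
    """
    out = []
    for (a1, b1), (a2, b2) in product(h1, h2):
        lo = 1 - (1 - a1) ** w1 * (1 - a2) ** w2
        hi = 1 - (1 - b1) ** w1 * (1 - b2) ** w2
        out.append((lo, hi))
    return out

h1 = [(0.3, 0.5), (0.4, 0.6)]               # hypothetical IVHFE, 2 intervals
h2 = [(0.2, 0.4), (0.3, 0.5), (0.5, 0.7)]   # hypothetical IVHFE, 3 intervals
agg = ivhfwa2(h1, h2, 0.5, 0.5)
print(len(agg))  # 6: the dimension has grown from 2 and 3 to 2 * 3
```

Chaining such aggregations over many attributes multiplies the lengths again at every step, which is exactly the computational and information-loss concern raised above.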

5 Conclusions

This paper has proposed a novel method for MAGDM problems with imperfect weight information under IVHF environments. Its main contributions are threefold.

(1) Inspired by the idea that the group should reach the solution with the largest degree of agreement, we have established a quadratic programming model to obtain the most desirable weights of decision makers.

(2) The maximizing deviation method has been employed to acquire the optimal weights of attributes.

(3) An extended TOPSIS method has been proposed to solve MAGDM problems with IVHF information. The extended TOPSIS comprises two stages: the IVHF-TOPSIS and the standard TOPSIS. The former calculates the RCC of each alternative to the IVHFPIS, while the latter calculates the GRCC of each alternative to the GPIS, by which all alternatives are ranked and the optimal alternative(s) with the maximum GRCC are selected.
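The consensus principle behind contribution (1) can be illustrated numerically. The sketch below is NOT the paper's quadratic programming model; it is a simpler heuristic in the same spirit, weighting each expert inversely to the deviation of their (hypothetical) score matrix from the group average:

```python
def consensus_weights(matrices):
    """matrices: one flat list of scores per expert, all the same length.

    Experts closer to the group-average scores receive larger weights;
    the weights are normalized to sum to 1.
    """
    n = len(matrices)
    mean = [sum(col) / n for col in zip(*matrices)]
    dev = [sum(abs(x, ) if False else abs(x - m) for x, m in zip(mat, mean)) / len(mean)
           for mat in matrices]
    inv = [1.0 / (1.0 + d) for d in dev]     # smaller deviation -> larger weight
    s = sum(inv)
    return [v / s for v in inv]

M1 = [0.6, 0.4, 0.5, 0.7]   # hypothetical flattened score matrix of expert 1
M2 = [0.6, 0.4, 0.5, 0.7]   # expert 2 agrees with expert 1 exactly
M3 = [0.2, 0.9, 0.1, 0.3]   # expert 3 is an outlier
lam = consensus_weights([M1, M2, M3])
print([round(v, 4) for v in lam])  # the outlier receives the smallest weight
```

The paper's quadratic program formalizes the same intuition with an explicit optimization rather than this ad hoc inverse-deviation rule.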

The validity and practicality of our method have been demonstrated with an investment example, and its merits have been shown by comparison with other IVHF-MADM approaches. The comparison shows that the proposed method has lower computational complexity and loses less information, and is therefore simpler and more effective. In addition, the proposed method builds a programming model, based on maximum consensus analysis, to determine the weights of the DMs. Furthermore, it can efficiently address MAGDM problems in which the evaluative ratings of alternatives take the form of IVHFEs, the weight information of the DMs is completely unknown, and the attribute weights are completely unknown or only partially known. However, the proposed method requires all IVHFEs in a MAGDM problem to have the same length; otherwise, extra interval numbers must be added to the shorter IVHFEs. Adding such interval numbers is problematic because it changes the original judgments offered by the DMs: depending on which interval numbers are added, different IVHFEs are produced, and none of them equals the original one. Circumventing this drawback is thus a meaningful and challenging task and a direction for our future research. In addition, we will continue to study applications of the new method to other decision-making problems in interval-valued hesitant fuzzy environments. Furthermore, we will extend the new theoretical results to other types of uncertain environments.