1 Introduction

Considering that the difficulty of establishing the membership degree of an element to a given set sometimes arises not because we have a margin of error (as in intuitionistic fuzzy sets [1], interval-valued fuzzy sets [32], or interval-valued intuitionistic fuzzy sets [2]) or some possibility distribution on the possible values (as in type-2 fuzzy sets [6, 14]), but because we have several possible numerical values, Torra [20] presented the concept of hesitant fuzzy sets (HFSs), which permits the membership degree of an element to be a set of several possible values between 0 and 1 and thus can be used to manage situations where people hesitate among several values when expressing their opinions. In the short time since their first appearance, hesitant fuzzy sets have received increasing attention [15, 22, 24, 30]. Rodríguez et al. [17] presented an overview of hesitant fuzzy sets with the aim of providing a clear perspective on the different concepts, tools, and trends related to them.

The aim of multi-criteria group decision making (MCGDM) problems is to find the most desirable alternative(s) among a set of feasible alternatives according to the preferences provided by a group of decision makers. In some practical group decision making processes, owing to time pressure, lack of knowledge or data, or the decision makers' (DMs') limited attention and information-processing capacities, the DMs cannot provide their preference information as a single exact numerical value, a margin of error, or some possibility distribution on the possible values, but only as several possible numerical values represented by hesitant fuzzy elements (HFEs) [24]. To deal with such situations, some hesitant fuzzy aggregation operators [10, 13, 16, 18, 19, 31, 35, 36] have recently been developed for aggregating hesitant fuzzy information, including the HFEWA, HFEWG, HFEOWA, HFEOWG, HFEHA, HFEHG, HFMSM, WHFMSM, GHFPEWA, GHFPEWG, GHFGSCWA, GHFGSCGM, GHFHWA, GHFHWG, GQHFHWA, GQHFHWG, IHFHWA, IHFHWG, IQHFHWA, IQHFHWG, IGHFHWA, IGHFHWG, IGQHFHWA, IGQHFHWG, HFHWA, HFHWG, GHFHWA, GHFHWG, HFHOWA, HFHOWG, GHFHOWA, GHFHOWG, HFHHA, HFHWG, GHFHHA, GHFHHG, and THFPRI-OR operators. Moreover, based on these aggregation operators, some methods have been developed for handling multi-criteria decision making (MCDM) or multi-criteria group decision making (MCGDM) problems with hesitant fuzzy information in which the criterion values take the form of hesitant fuzzy elements (HFEs) [24].

However, these operators and methods have some drawbacks, as follows: (1) An important topic in hesitant fuzzy MCGDM is how to determine the weights of both criteria and decision makers. All the aforementioned operators and methods only consider situations where the criteria weights are completely known or partially known and the weights of decision makers (DMs) are completely known. Furthermore, these weight vectors are provided by the decision makers in advance and are therefore more or less subjective and insufficient. However, in many practical problems, owing to time pressure, lack of knowledge or data, and the decision makers' limited expertise about the problem domain, the weight information on both criteria and decision makers is usually completely unknown or is very difficult to obtain. Thus, how to obtain the weight vectors of both criteria and decision makers is an interesting issue and is worth studying in depth. Recently, some studies [8, 11, 29] have been devoted to addressing this issue and developed weight generation processes for completely unknown weights within the hesitant fuzzy environment. For example, Hu et al. [8] constructed an entropy weight model to determine the criteria weights based on the proposed entropy measures. Liu et al. [11] took advantage of the linear programming technique for multidimensional analysis of preference (LINMAP) to determine the attribute weights objectively in hesitant fuzzy multiple attribute decision making. Xu and Zhang [29] established an optimization model based on the maximizing deviation method to determine the optimal relative weights of attributes under a hesitant fuzzy environment. However, the existing weight generation methods in [8, 11, 29] only investigated multi-criteria single-person decision making with hesitant fuzzy information and did not consider multi-criteria group decision making (MCGDM) with hesitant fuzzy information. In addition, in MCGDM with hesitant fuzzy information, the experts have their own inherent value systems and considerations, and thus disagreement among the experts is inevitable. In such a case, consensus turns out to be very important in group decision making. Consensus makes it possible for a group to reach a final decision that all group members can support despite their differing opinions. Clearly, it is preferable that the experts have achieved a high level of consensus concerning their preferences before reaching a desirable decision result. The existing weight generation methods in [8, 11, 29] did not consider any consensus issue. To address this issue, in this paper we develop two nonlinear optimization models for MCGDM problems with hesitant fuzzy information, one minimizing the divergence among the individual hesitant fuzzy decision matrices and the other minimizing the divergence between each individual hesitant fuzzy decision matrix and the collective hesitant fuzzy decision matrix, from which two exact formulae are obtained to derive the weights of decision makers and criteria, respectively. (2) These operators and methods need to perform some aggregation operations on the input hesitant fuzzy arguments, which leads to increasing dimensions of the aggregated hesitant fuzzy elements. Consequently, these operators and methods increase the computational complexity and may cause the loss of decision information. In real applications, large computational complexity means high costs of decision making.
To address this issue, in this paper we use a simple additive weighting operator to aggregate all the individual hesitant fuzzy decision matrices into the collective hesitant fuzzy decision matrix and then derive the collective overall HFE corresponding to each alternative from the collective hesitant fuzzy decision matrix. In contrast to the other operators, the additive weighting operator does not increase the dimensions of the fused hesitant fuzzy arguments. Comparison analysis shows that the developed operators and methods obtain the same optimal alternative as the other methods on the premise that the dimensions of the fused hesitant fuzzy arguments are not increased.

As a generalization of hesitant fuzzy sets, interval-valued hesitant fuzzy sets [4, 5] permit the membership of an element to be a set of several possible interval values, because it is somewhat difficult for experts to assign exact numerical values for the membership degrees of certain elements to a set in some practical problems [4, 5]. Interval-valued hesitant fuzzy sets can efficiently manage MCGDM problems in which experts hesitate among several possible interval values, rather than exact numerical values, when assessing an alternative. To date, some interval-valued hesitant fuzzy aggregation operators [9, 33, 34] have been proposed for aggregating interval-valued hesitant fuzzy information, such as the interval-valued hesitant fuzzy Hamacher synergetic weighted averaging (IVHFHSWA) operator, the interval-valued hesitant fuzzy Hamacher synergetic weighted geometric (IVHFHSWG) operator, the induced generalized interval-valued hesitant fuzzy ordered weighted averaging (IGIVHFOWA) operator, the induced generalized interval-valued hesitant fuzzy ordered weighted geometric (IGIVHFOWG) operator, the Archimedean t-conorm- and t-norm-based interval-valued hesitant fuzzy weighted averaging (A-IVHFWA) operator, and the Archimedean t-conorm- and t-norm-based interval-valued hesitant fuzzy weighted geometric (A-IVHFWG) operator. Furthermore, based on these operators, some methods have been developed for handling MCDM and MCGDM problems with interval-valued hesitant fuzzy information in which the criterion values take the form of interval-valued hesitant fuzzy elements (IVHFEs) [5]. However, these operators and methods have the same shortcomings as those discussed above. To address these issues, we extend the above results to interval-valued hesitant fuzzy situations.

The remainder of this paper is arranged as follows. In Sect. 2, we give a brief review of hesitant fuzzy sets and interval-valued hesitant fuzzy sets. In Sect. 3, we first develop two nonlinear optimization models: one minimizes the divergence among the individual hesitant fuzzy decision matrices to derive the weights of decision makers, and the other minimizes the divergence between each individual hesitant fuzzy decision matrix and the collective hesitant fuzzy decision matrix to obtain the criterion weights. Then, on the basis of the simple additive weighting operator and the score function, we develop a method for ranking the given alternatives and selecting the optimal alternative. Section 4 extends the results obtained in Sect. 3 to interval-valued hesitant fuzzy environments. In Sect. 5, two illustrative examples are employed to demonstrate the effectiveness and practicality of the developed methods; this section also makes a comparison analysis with the other methods. Section 6 ends this paper with some concluding remarks.

2 Preliminaries

2.1 Hesitant Fuzzy Sets (HFSs)

Torra [20] proposed the notion of hesitant fuzzy sets to manage the situations in which several numerical values are possible for the definition of the membership of an element to a given set.

Definition 2.1

Torra [20]. Let X be a reference set. A hesitant fuzzy set (HFS) A on X is defined in terms of a function \(h_{A} \left( x \right)\) that, when applied to X, returns a subset of [0,1].

To be easily understood, the HFS can be expressed by a mathematical symbol

$$A = \left\{ {\left. {\left\langle {x,h_{A} \left( x \right)} \right\rangle } \right|x \in X} \right\}$$
(1)

where \(h_{A} \left( x \right)\) is a set of values in [0,1], denoting the possible membership degrees of the element \(x \in X\) to the set A. For convenience, Xia and Xu [24] called \(h = h_{A} \left( x \right)\) a hesitant fuzzy element (HFE).

Let \(l_{h}\) denote the number of values in the HFE h. For convenience, the values in the HFE h are arranged in descending order, i.e., \(h = \left\{ {\left. {h^{\sigma \left( i \right)} } \right|i = 1,2, \ldots ,l_{h} } \right\}\), where \(h^{\sigma \left( i \right)}\) is the ith largest value in h.

Example 2.1

Let \(X = \left\{ {x_{1} ,x_{2} ,x_{3} } \right\}\), \(A = \left\{ {\left\langle {x_{1} ,\left\{ {0.7,0.5} \right\}} \right\rangle ,\left\langle {x_{2} ,\left\{ {0.5,0.3,0.2} \right\}} \right\rangle ,\left\langle {x_{3} ,\left\{ {0.8,0.7} \right\}} \right\rangle } \right\}\), and \(h = \left\{ {0.5,0.3,0.2} \right\}\). Then, A is a HFS on X, h is a HFE, and \(l_{h} = 3\).

Given three HFEs h, \(h_{1}\), and \(h_{2}\), Torra [20] defined the following operations:

  1. (1)

    \(h^{c} = \bigcup\nolimits_{\gamma \in h} {\left\{ {1 - \gamma } \right\}} ;\)

  2. (2)

    \(h_{1} \cup h_{2} = \bigcup\nolimits_{{\gamma_{1} \in h_{1} ,\gamma_{2} \in h_{2} }} {\left\{ {\gamma_{1} \vee \gamma_{2} } \right\}}\)

  3. (3)

    \(h_{1} \cap h_{2} = \bigcup\nolimits_{{\gamma_{1} \in h_{1} ,\gamma_{2} \in h_{2} }} {\left\{ {\gamma_{1} \wedge \gamma_{2} } \right\}}\)

Xia and Xu [24] defined the following comparison rules for HFEs:

Definition 2.2

Xia and Xu [24]. For a HFE \(h = \bigcup\nolimits_{\gamma \in h} {\left\{ \gamma \right\}}\), \(s\left( h \right) = \frac{{\sum\nolimits_{\gamma \in h} \gamma }}{{l_{h} }}\) is called the score function of h, where \(l_{h}\) is the number of elements in h. For two HFEs \(h_{1}\) and \(h_{2}\), if \(s\left( {h_{1} } \right) > s\left( {h_{2} } \right)\), then \(h_{1} > h_{2}\); if \(s\left( {h_{1} } \right) = s\left( {h_{2} } \right)\), then \(h_{1} = h_{2}\).

Let \(h_{1}\) and \(h_{2}\) be two HFEs. In most cases, \(l_{{h_{1} }} \ne l_{{h_{2} }}\); for convenience, let \(l = { \hbox{max} }\left\{ {l_{{h_{1} }} ,l_{{h_{2} }} } \right\}\). To compare \(h_{1}\) and \(h_{2}\), Xu and Xia [28] suggested that we should extend the shorter HFE until the length of both HFEs was the same. The simplest way to extend the shorter HFE is to append the same value repeatedly; in principle, any value can be appended. In practice, the selection of the appended value depends primarily on the decision makers' risk preferences. To address this issue, Xu and Zhang [29] developed the following method:

Definition 2.3

Xu and Zhang [29]. Assume a HFE \(h = \left\{ {\left. {h^{\sigma \left( i \right)} } \right|i = 1,2, \ldots ,l_{h} } \right\}\), and stipulate that \(h^{ + }\) and \(h^{ - }\) are the maximum and minimum values in the HFE h, respectively; then we call \(\bar{h} = \eta h^{ + } + \left( {1 - \eta } \right)h^{ - }\) an extension value, where \(\eta\) (\(0 \le \eta \le 1\)) is the parameter determined by the DM according to his/her risk preference.

As a result, we can add different values to the HFE using \(\eta\) according to the DM's risk preference. If \(\eta = 1\), then the extension value \(\bar{h} = h^{ + }\), which shows that the DM's risk preference is risk-seeking; if \(\eta = 0\), then \(\bar{h} = h^{ - }\), which means that the DM's risk preference is risk-averse; if \(\eta = \frac{1}{2}\), then \(\bar{h} = \frac{{h^{ + } + h^{ - } }}{2}\), which indicates that the DM's risk preference is risk-neutral. Clearly, the parameter \(\eta\) provided by the DM reflects his/her risk preference and affects the final decision results.

Example 2.2

Let \(h_{1} = \left\{ {0.4,0.3,0.1} \right\}\) and \(h_{2} = \left\{ {0.8,0.7} \right\}\) be two HFEs. It is clear that \(l_{{h_{1} }} = 3\), \(l_{{h_{2} }} = 2\), and \(l_{{h_{1} }} > l_{{h_{2} }}\). Therefore, by Xu and Zhang's method (suppose \(\eta = 0\)), we can extend \(h_{2}\) to the following: \(\bar{h}_{2} = \left\{ {0.8,0.7,0.7} \right\}\).
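
For readers who wish to reproduce this step, the following minimal Python sketch (our own illustration; the function name extend_hfe is not from the paper) implements the extension rule of Definition 2.3 and reproduces Example 2.2:

```python
# Sketch of Definition 2.3: pad a shorter HFE with the extension value
# eta*h_max + (1 - eta)*h_min until it reaches the target length.

def extend_hfe(h, target_len, eta=0.0):
    """eta = 1: risk-seeking (append h+); eta = 0: risk-averse (append h-);
    eta = 0.5: risk-neutral (append the midpoint)."""
    h = sorted(h, reverse=True)                  # values in descending order
    pad = eta * max(h) + (1 - eta) * min(h)      # extension value of Definition 2.3
    return h + [pad] * (target_len - len(h))

# Example 2.2: extend h2 = {0.8, 0.7} to the length of h1 = {0.4, 0.3, 0.1}
h1, h2 = [0.4, 0.3, 0.1], [0.8, 0.7]
print(extend_hfe(h2, max(len(h1), len(h2)), eta=0.0))   # [0.8, 0.7, 0.7]
```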

To aggregate hesitant fuzzy information, we define some new operational laws on the HFEs \(h_{1} = \left\{ {\left. {h_{1}^{\sigma \left( i \right)} } \right|i = 1,2, \ldots ,l_{{h_{1} }} } \right\}\), \(h_{2} = \left\{ {\left. {h_{2}^{\sigma \left( i \right)} } \right|i = 1,2, \ldots ,l_{{h_{2} }} } \right\}\), and \(h = \left\{ {\left. {h^{\sigma \left( i \right)} } \right|i = 1,2, \ldots ,l_{h} } \right\}\):

$$h_{1} \oplus h_{2} = \left\{ {\left. {h_{1}^{\sigma \left( i \right)} + h_{2}^{\sigma \left( i \right)} } \right|i = 1,2, \ldots ,l} \right\}$$
$$h_{1} { \ominus }h_{2} = \left\{ {\left. {h_{1}^{\sigma \left( i \right)} - h_{2}^{\sigma \left( i \right)} } \right|i = 1,2, \ldots ,l} \right\}$$
$$\lambda h = \left\{ {\left. {\lambda h^{\sigma \left( i \right)} } \right|i = 1,2, \ldots ,l_{h} } \right\},\quad 0 \le \lambda \le 1$$

where \(h_{1}^{\sigma \left( i \right)}\) and \(h_{2}^{\sigma \left( i \right)}\) are the ith largest values in \(h_{1}\) and \(h_{2}\), respectively, and it is assumed that \(l_{{h_{1} }} = l_{{h_{2} }} = l\); otherwise, the shorter one can be extended by using Definition 2.3 until both of them have the same length.
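
A short Python sketch of these element-wise laws and of the score function of Definition 2.2 is given below (our own illustration; the HFEs are assumed to have already been sorted in descending order and extended to a common length via Definition 2.3):

```python
def score(h):                                    # s(h) in Definition 2.2
    return sum(h) / len(h)

def hfe_add(h1, h2):                             # h1 (+) h2, element-wise
    return [x + y for x, y in zip(h1, h2)]

def hfe_sub(h1, h2):                             # h1 (-) h2, element-wise
    return [x - y for x, y in zip(h1, h2)]

def hfe_scale(lam, h):                           # lambda * h, 0 <= lambda <= 1
    return [lam * x for x in h]

h1 = [0.4, 0.3, 0.1]
h2 = [0.8, 0.7, 0.7]                             # h2 of Example 2.2 after extension
print(hfe_add(hfe_scale(0.5, h1), hfe_scale(0.5, h2)))   # approx. [0.6, 0.5, 0.4]
print(score([0.5, 0.3, 0.2]))                            # approx. 0.333
```

In the decision methods below, ⊕ and the scalar product are always applied with weights that sum to one (see Eqs. (15) and (26)), so the resulting values remain in [0,1].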

2.2 Interval-Valued Hesitant Fuzzy Sets (IVHFSs)

It should be noted that hesitant fuzzy sets permit the membership of an element to be a set of several possible values. All these possible values are crisp real numbers that belong to [0,1]. However, in some practical problems, it is somewhat difficult for experts to assign exact values for the membership degrees of certain elements to a set, but a range of values belonging to [0,1] may be assigned [4, 5]. For such cases, Chen et al. [4, 5] introduced the concept of interval-valued hesitant fuzzy set (IVHFS). Next, we briefly review the IVHFS.

Throughout this paper, let \(D\left( {\left[ {0,1} \right]} \right) = \left\{ {\left. {a = \left[ {a^{L} ,a^{U} } \right]} \right|a^{L} \le a^{U} ,a^{L} ,a^{U} \in \left[ {0,1} \right]} \right\}\) stand for the set of all closed subintervals of [0,1].

Definition 2.4

Xu and Da [26]. If \(a = \left[ {a^{L} ,a^{U} } \right],b = \left[ {b^{L} ,b^{U} } \right] \in D\left( {\left[ {0,1} \right]} \right)\), then we define

  1. (1)

    \(a = b \Leftrightarrow \left[ {a^{L} ,a^{U} } \right] = \left[ {b^{L} ,b^{U} } \right] \Leftrightarrow a^{L} = b^{L} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\text{and}}{\kern 1pt} {\kern 1pt} {\kern 1pt} a^{U} = b^{U} ;\)

  2. (2)

    \(a + b = \left[ {a^{L} ,a^{U} } \right] + \left[ {b^{L} ,b^{U} } \right] = \left[ {a^{L} + b^{L} ,a^{U} + b^{U} } \right];\)

  3. (3)

    \(\lambda a = \lambda \left[ {a^{L} ,a^{U} } \right] = \left[ {\lambda a^{L} ,\lambda a^{U} } \right];\)

  4. (4)

    The complement of a is denoted by \(a^{c} = \left[ {a^{L} ,a^{U} } \right]^{c} = \left[ {1 - a^{U} ,1 - a^{L} } \right]\).

In order to compare \(a = \left[ {a^{L} ,a^{U} } \right]\) and \(b = \left[ {b^{L} ,b^{U} } \right]\), Xu and Da [26] gave the following definition.

Definition 2.5

Xu and Da [26]. Let \(a = \left[ {a^{L} ,a^{U} } \right],b = \left[ {b^{L} ,b^{U} } \right] \in D\left( {\left[ {0,1} \right]} \right)\), and let \({\text{len}}\left( a \right) = a^{U} - a^{L}\) and \({\text{len}}\left( b \right) = b^{U} - b^{L}\). Then the degree of possibility of \(a \ge b\) is defined as

$$p\left( {a \ge b} \right) = \hbox{max} \left\{ {1 - \hbox{max} \left( {\frac{{b^{U} - a^{L} }}{{{\text{len}}(a) + {\text{len}}(b)}},0} \right),0} \right\}$$

To rank the interval numbers \(a_{i} = \left[ {a_{i}^{L} ,a_{i}^{U} } \right] \in D\left( {\left[ {0,1} \right]} \right)\) (\(i = 1,2, \ldots ,n\)), based on Definition 2.5, Xu and Da [26] developed a complementary matrix as

$$P = \left[ {\begin{array}{cccc} {p_{11} } & {p_{12} } & \cdots & {p_{1n} } \\ {p_{21} } & {p_{22} } & \cdots & {p_{2n} } \\ \vdots & \vdots & \vdots & \vdots \\ {p_{n1} } & {p_{n2} } & \cdots & {p_{nn} } \\ \end{array} } \right]$$

where \(p_{ij} = p\left( {a_{i} \ge a_{j} } \right)\), \(p_{ij} \ge 0\), \(p_{ij} + p_{ji} = 1\), \(p_{ii} = \frac{1}{2}\), \(i,j = 1,2, \ldots ,n\).

Summing all elements in each line of the matrix P, we have

$$p_{i} = \sum\limits_{j = 1}^{n} {p_{ij} } \quad i = 1,2, \ldots ,n$$

Then we can rank the \(a_{i} = \left[ {a_{i}^{L} ,a_{i}^{U} } \right]\) (\(i = 1,2, \ldots ,n\)) in descending order according to the values of p i (\(i = 1,2, \ldots ,n\)).
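
The following Python sketch (our own illustration with made-up interval data) builds the complementary matrix P of possibility degrees and ranks the intervals by their row sums, as described above:

```python
def possibility(a, b):
    """p(a >= b) of Definition 2.5 for intervals a = [aL, aU], b = [bL, bU].
    Assumes len(a) + len(b) > 0."""
    len_a, len_b = a[1] - a[0], b[1] - b[0]
    return max(1 - max((b[1] - a[0]) / (len_a + len_b), 0), 0)

def rank_intervals(intervals):
    n = len(intervals)
    P = [[possibility(intervals[i], intervals[j]) for j in range(n)]
         for i in range(n)]                       # complementary matrix P
    p = [sum(row) for row in P]                   # row sums p_i
    return sorted(range(n), key=lambda i: p[i], reverse=True)

# indices of the intervals in descending order
print(rank_intervals([[0.3, 0.5], [0.4, 0.6], [0.1, 0.2]]))   # [1, 0, 2]
```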

Definition 2.6

Chen et al. [4, 5]. Let X be a fixed set. An interval-valued hesitant fuzzy set (IVHFS) on X is defined in terms of a function that, when applied to X, returns a subset of D([0,1]).

To be easily understood, we express the IVHFS by a mathematical symbol:

$$\tilde{A} = \left\{ {\left. {\left\langle {x,\tilde{h}_{{\tilde{A}}} \left( x \right)} \right\rangle } \right|x \in X} \right\}$$
(2)

where \(\tilde{h}_{{\tilde{A}}} (x)\) denotes all possible interval membership degrees of the element \(x \in X\) to the set \(\tilde{A}\). For convenience, Chen et al. [5] called \(\widetilde{h} = \tilde{h}_{{\tilde{A}}} \left( x \right)\) an interval-valued hesitant fuzzy element (IVHFE). If \(\tilde{\gamma } \in \widetilde{h}\), then \(\tilde{\gamma }\) is an interval number and can be denoted by \(\tilde{\gamma } = \left[ {\tilde{\gamma }^{L} ,\tilde{\gamma }^{U} } \right]\), where \(\tilde{\gamma }^{L} = \inf \tilde{\gamma }\) and \(\tilde{\gamma }^{U} = \sup \tilde{\gamma }\) express the lower and upper limits of \(\tilde{\gamma }\), respectively. Obviously, if \(\tilde{\gamma }^{L} = \tilde{\gamma }^{U}\) for any \(\tilde{\gamma } \in \widetilde{h}\), then the IVHFEs reduce to the HFEs.

Let \(l_{{\tilde{h}}}\) denote the number of intervals in the IVHFE \(\tilde{h}\). For convenience, the intervals in the IVHFE \(\tilde{h}\) are arranged in descending order, i.e., \(\tilde{h} = \left\{ {\left. {\tilde{h}^{\sigma \left( i \right)} } \right|i = 1,2, \ldots ,l_{{\tilde{h}}} } \right\}\), where \(\tilde{h}^{\sigma \left( i \right)}\) is the ith largest interval in \(\tilde{h}\).

Example 2.3

Let \(X = \left\{ {x_{1} ,x_{2} ,x_{3} } \right\}\), \(\tilde{A} = \left\{ {\left\langle {x_{1} ,\left\{ {\left[ {0.7,0.8} \right],\left[ {0.5,0.6} \right]} \right\}} \right\rangle ,\left\langle {x_{2} ,\left\{ {\left[ {0.3,0.5} \right],\left[ {0.3,0.4} \right],\left[ {0.1,0.3} \right]} \right\}} \right\rangle ,\left\langle {x_{3} ,\left\{ {\left[ {0.6,0.8} \right],\left[ {0.6,0.7} \right]} \right\}} \right\rangle } \right\}\), and \(\tilde{h} = \left\{ {\left[ {0.3,0.5} \right],\left[ {0.3,0.4} \right],\left[ {0.1,0.3} \right]} \right\}\). Then, \(\tilde{A}\) is an IVHFS on X, \(\tilde{h}\) is an IVHFE, and \(l_{{\tilde{h}}} = 3\).

In most situations, the numbers of intervals in different IVHFEs may differ. For convenience, let \(l = \hbox{max} \left\{ {l_{{\tilde{h}_{1} }} ,l_{{\tilde{h}_{2} }} } \right\}\), where \(l_{{\tilde{h}_{1} }}\) and \(l_{{\tilde{h}_{2} }}\) are the numbers of intervals in the IVHFEs \(\tilde{h}_{1}\) and \(\tilde{h}_{2}\), respectively. In order to operate more accurately between two IVHFEs, they should have the same number of intervals. To address this issue, similar to Definition 2.3, Xu and Zhang [29] developed the following method to extend the shorter IVHFE until the lengths of both IVHFEs are the same.

Definition 2.7

Xu and Zhang [29]. Assume an IVHFE \(\tilde{h} = \left\{ {\left. {\tilde{h}^{\sigma \left( i \right)} } \right|i = 1,2, \ldots ,l_{{\tilde{h}}} } \right\}\), and stipulate that \(\tilde{h}^{ + }\) and \(\tilde{h}^{ - }\) are the maximum and minimum intervals in the IVHFE \(\tilde{h}\), respectively; then we call \(\bar{\tilde{h}} = \eta \tilde{h}^{ + } + \left( {1 - \eta } \right)\tilde{h}^{ - }\) an extension value, where \(\eta\) (\(0 \le \eta \le 1\)) is the parameter determined by the DM according to his/her risk preference.

Consequently, we can add different values to the IVHFE using \(\eta\) according to the DM's risk preference. If \(\eta = 1\), then the extension value \(\bar{\tilde{h}} = \tilde{h}^{ + }\), which shows that the DM's risk preference is risk-seeking; if \(\eta = 0\), then \(\bar{\tilde{h}} = \tilde{h}^{ - }\), which means that the DM's risk preference is risk-averse; if \(\eta = \frac{1}{2}\), then \(\bar{\tilde{h}} = \frac{{\tilde{h}^{ + } + \tilde{h}^{ - } }}{2}\), which indicates that the DM's risk preference is risk-neutral. Clearly, the parameter \(\eta\) provided by the DM reflects his/her risk preference and affects the final decision results. In this paper, we assume that the decision makers are all risk-averse.

Definition 2.8

Chen et al. [5]. For an IVHFE \(\tilde{h}\), \(s\left( {\tilde{h}} \right) = \frac{{\sum\nolimits_{{\tilde{\gamma } \in \widetilde{h}}} {\tilde{\gamma }} }}{{l_{{\tilde{h}}} }}\) is called the score function of \(\tilde{h}\). For two IVHFEs \(\tilde{h}_{1}\) and \(\tilde{h}_{2}\), if \(s\left( {\tilde{h}_{1} } \right) \ge s\left( {\tilde{h}_{2} } \right)\), then \(\tilde{h}_{1} \ge \tilde{h}_{2}\).
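
As a small illustration (our own sketch, not from the paper), the score of an IVHFE can be computed as the interval-valued average of its intervals, and two score intervals can then be compared with the possibility degree of Definition 2.5:

```python
def ivhfe_score(h):
    """h: list of intervals [gL, gU]; returns the average interval (Definition 2.8)."""
    n = len(h)
    return [sum(g[0] for g in h) / n, sum(g[1] for g in h) / n]

def possibility(a, b):                            # p(a >= b) of Definition 2.5
    return max(1 - max((b[1] - a[0]) / ((a[1] - a[0]) + (b[1] - b[0])), 0), 0)

h1 = [[0.3, 0.5], [0.3, 0.4], [0.1, 0.3]]         # the IVHFE of Example 2.3
h2 = [[0.6, 0.8], [0.6, 0.7]]
s1, s2 = ivhfe_score(h1), ivhfe_score(h2)
print(s1, s2)                                     # approx. [0.233, 0.4] and [0.6, 0.75]
print(possibility(s2, s1))                        # 1.0, so h2 is ranked above h1
```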

3 Nonlinear Optimization Models for Multi-Criteria Group Decision Making Under Hesitant Fuzzy Situations

3.1 Problem Description

First, a multi-criteria group decision making (MCGDM) problem with hesitant fuzzy information can be formulated as follows: Let \(X = \left\{ {x_{1} ,x_{2} , \ldots ,x_{m} } \right\}\) be a set of m alternatives, let \(C = \left\{ {c_{1} ,c_{2} , \ldots ,c_{n} } \right\}\) be a collection of n criteria, whose weight vector is \(w = \left( {w_{1} ,w_{2} , \ldots ,w_{n} } \right)^{T}\), with \(w_{j} \in \left[ {0,1} \right]\), \(j = 1,2, \ldots ,n\), and \(\sum\nolimits_{j = 1}^{n} {w_{j} } = 1\), and let \(D = \left\{ {d_{1} ,d_{2} , \ldots ,d_{p} } \right\}\) be a set of p decision makers, whose weight vector is \(\omega = \left( {\omega_{1} ,\omega_{2} , \ldots ,\omega_{p} } \right)^{T}\), with \(\omega_{k} \in \left[ {0,1} \right]\), \(k = 1,2, \ldots ,p\), and \(\sum\nolimits_{k = 1}^{p} {\omega_{k} } = 1\). Let \(A^{\left( k \right)} = \left( {a_{ij}^{\left( k \right)} } \right)_{m \times n}\) be a hesitant fuzzy decision matrix, where \(a_{ij}^{\left( k \right)} = \left\{ {\left. {\left( {a_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right|t = 1,2, \ldots ,l_{{a_{ij}^{\left( k \right)} }} } \right\}\) is a HFE denoting the set of all the possible degrees to which the alternative \(x_{i} \in X\) satisfies the criterion \(c_{j} \in C\), given by the decision maker \(d_{k} \in D\).

In general, there are benefit criteria and cost criteria in an MCGDM problem. For such cases, we need to transform the hesitant fuzzy decision matrices \(A^{\left( k \right)} = \left( {a_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) into the normalized hesitant fuzzy decision matrices \(B^{\left( k \right)} = \left( {b_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) by the following equation [27]:

$$b_{ij}^{\left( k \right)} = \left\{ \begin{aligned} a_{ij}^{\left( k \right)},{\text{for benefit criterion}}\,c_{j} \\ \left( {a_{ij}^{\left( k \right)} } \right)^{c},{\text{for cost criterion}}\,c_{j}\\ \end{aligned} \right.\quad i = 1,2, \ldots ,\,m,\,j = 1,2, \ldots ,n,\,k = 1,2, \ldots ,p$$
(3)

where \(\left( {a_{ij}^{\left( k \right)} } \right)^{c}\) is the complement of \(a_{ij}^{\left( k \right)}\), such that \(\left( {a_{ij}^{\left( k \right)} } \right)^{c} = \left\{ {\left. {1 - \left( {a_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right|t = 1,2, \ldots ,l_{{a_{ij}^{\left( k \right)} }} } \right\}\).

In most situations, the numbers of elements in the different HFEs \(b_{ij}^{\left( k \right)}\) of \(B^{\left( k \right)}\) (\(k = 1,2, \ldots ,p\)) are different. In order to operate more accurately between these HFEs, we should extend the shorter ones until all of them have the same length. Let \(l = \hbox{max} \left\{ {\left. {l_{{b_{ij}^{\left( k \right)} }} } \right|i = 1,2, \ldots ,m,{\kern 1pt} {\kern 1pt} \, j = 1,2, \ldots ,n,{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} k = 1,2, \ldots ,p} \right\}\). By the regulation method proposed by Xu and Zhang [29], we transform the hesitant fuzzy decision matrices \(B^{\left( k \right)} = \left( {b_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) into the corresponding hesitant fuzzy decision matrices \(H^{\left( k \right)} = \left( {h_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)), such that \(l_{{h_{ij}^{\left( k \right)} }} = l\) for all \(i = 1,2, \ldots ,m\), \(j = 1,2, \ldots ,n\), and \(k = 1,2, \ldots ,p\).
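
The pre-processing of Sect. 3.1 can be summarized by the short Python sketch below (our own illustration with made-up data): Eq. (3) complements the HFEs of the cost criteria, and every HFE is then extended to the common length l with the risk-averse choice η = 0:

```python
def normalize(A, is_cost):
    """A: m x n matrix of HFEs (lists of values); is_cost[j] marks cost criteria (Eq. 3)."""
    return [[sorted(([1 - v for v in a] if is_cost[j] else a), reverse=True)
             for j, a in enumerate(row)] for row in A]

def extend_all(matrices, eta=0.0):
    """Pad every HFE in every matrix to the global maximum length l (Definition 2.3)."""
    l = max(len(h) for A in matrices for row in A for h in row)
    def pad(h):
        val = eta * max(h) + (1 - eta) * min(h)
        return h + [val] * (l - len(h))
    return [[[pad(h) for h in row] for row in A] for A in matrices]

A1 = [[[0.5, 0.3], [0.6, 0.4, 0.2]],
      [[0.7, 0.6], [0.8]]]                 # one DM, two alternatives, two criteria
H = extend_all([normalize(A1, is_cost=[False, True])])
print(H[0])   # all HFEs complemented where needed and padded to length 3
```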

3.2 A Nonlinear Optimization Model for Determining Decision Makers’ Weights

For any two HFEs \(h_{1} = \left\{ {\left. {h_{1}^{\sigma \left( i \right)} } \right|i = 1,2, \ldots ,l_{{h_{1} }} } \right\}\) and \(h_{2} = \left\{ {\left. {h_{2}^{\sigma \left( i \right)} } \right|i = 1,2, \ldots ,l_{{h_{2} }} } \right\}\), we define the square deviation between h 1 and h 2 as

$$d\left( {h_{1} ,h_{2} } \right) = \sum\limits_{i = 1}^{l} {\left( {h_{1}^{\sigma \left( i \right)} - h_{2}^{\sigma \left( i \right)} } \right)^{2} }$$
(4)

where \(l = \hbox{max} \left\{ {l_{{h_{1} }} ,l_{{h_{2} }} } \right\}\), and \(h_{1}^{\sigma \left( i \right)}\) and \(h_{2}^{\sigma \left( i \right)}\) are the ith largest values in h 1 and h 2, respectively.

If we take the weight of each HFE into account, then we define the weighted square deviation between h 1 and h 2 as

$$d\left( {\omega_{1} h_{1} ,\omega_{2} h_{2} } \right) = \sum\limits_{i = 1}^{l} {\left( {\omega_{1} h_{1}^{\sigma \left( i \right)} - \omega_{2} h_{2}^{\sigma \left( i \right)} } \right)^{2} }$$
(5)

where ω 1 and ω 2 are the weights of h 1 and h 2, respectively, \(\omega = \left( {\omega_{1} ,\omega_{2} } \right)^{T}\), \(\omega_{i} \ge 0\), \(i = 1,2\), \(\omega_{1} + \omega_{2} = 1\).

Based on Eq. (5), we define the weighted square deviation between each pair of the individual hesitant fuzzy decision matrices \(\left( {H^{\left( k \right)} ,H^{\left( q \right)} } \right)\) as

$$d\left( {\omega_{k} H^{\left( k \right)} ,\omega_{q} H^{\left( q \right)} } \right) = \sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\omega_{k} \left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} -\, \omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{2} } } }$$
(6)

where \(\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)}\) and \(\left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)}\) are the tth largest values in \(h_{ij}^{\left( k \right)}\) and \(h_{ij}^{\left( q \right)}\), respectively.

Furthermore, if we consider all the weighted square deviations among all the pairs of the individual hesitant fuzzy decision matrices, then it follows from Eq. (6) that

$$f\left( \omega \right) = \sum\limits_{k = 1}^{p} {\sum\limits_{q = 1,q \ne k}^{p} {d\left( {\omega_{k} H^{\left( k \right)} ,\omega_{q} H^{\left( q \right)} } \right)} } = \sum\limits_{k = 1}^{p} {\sum\limits_{q = 1,q \ne k}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\omega_{k} \left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} - \omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{2} } } } } }$$
(7)

It is highly likely that the individual hesitant fuzzy decision matrices are largely dispersed if their weights are not considered. Therefore, the weights should be incorporated into each hesitant fuzzy decision matrix. In group decision-making problems, the experts, because they usually come from different specialty fields and have different backgrounds and levels of knowledge, usually have diverging opinions. Consensus measures the degree of agreement among the decision makers on the solution of the problem. The larger the value of the consensus measure, the closer a decision maker is to the group, and the more desirable the decision result. Consensus is a pathway to a true group decision because it can guarantee that the final result is supported by all the group members despite their different opinions. Clearly, it is preferable that the experts have achieved a high level of consensus concerning their preferences before reaching a desirable decision result. In the case where all the individual weighted hesitant fuzzy decision matrices \(\omega_{k} H^{\left( k \right)} = \left( {\omega_{k} h_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) are the same, the group obviously has a high consensus. Nevertheless, in actual applications, this case generally does not occur, since the experts may have different experiences and specialties. In order to achieve maximum consensus, the weighted hesitant fuzzy decision matrices should come closer to each other. Based on this idea, the following nonlinear optimization model (M-1) is constructed to minimize the sum of squared distances between all pairs of weighted hesitant fuzzy decision matrices and make the group consensus as high as possible:

$$\begin{aligned} & {\rm{min}}\,f\left( \omega \right) = \sum\limits_{k = 1}^p {\sum\limits_{q = 1,q \ne k}^p {\sum\limits_{i = 1}^m {\sum\limits_{j = 1}^n {\sum\limits_{t = 1}^l {{{\left( {{\omega _k}{{\left( {h_{ij}^{\left( k \right)}} \right)}^{\sigma \left( t \right)}} - {\omega _q}{{\left( {h_{ij}^{\left( q \right)}} \right)}^{\sigma \left( t \right)}}} \right)}^2}} } } } } \hfill \\ & {\text{s}}.{\text{t}}.\left\{ \begin{aligned} &{\sum\limits_{k = 1}^p {{\omega _k}} = 1,}\\ & {{\omega _k} \ge 0,k = 1,2, \ldots ,p,} \hfill \\ \end{aligned} \right. \hfill \\ \end{aligned}$$
(M-1)

Theorem 3.1

Model (M-1) is equivalent to (M-2) below in a matrix form

$$\begin{aligned} &\hbox{min}\,f\left( \omega \right) = \omega^{T} G\omega \hfill \\ & {\text{s}}. {\text{t}}.\left\{ \begin{aligned} & e^{T} \omega = 1, \hfill \\ & \omega \ge 0 \hfill \\ \end{aligned} \right. \hfill \\ \end{aligned}$$
(M-2)

where \(\omega = \left( {\omega_{1} ,\omega_{2} , \ldots ,\omega_{p} } \right)^{T}\), \(e = \left( {1,1, \ldots ,1} \right)^{T}\), and

$$G = \left( {g_{kq} } \right)_{p \times p} = 2\left[ {\begin{array}{*{20}c} {\left( {p - 1} \right)\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\left( {h_{ij}^{\left( 1 \right)} } \right)^{\sigma \left( t \right)} } \right)^{2} } } } } & { - \sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {h_{ij}^{\left( 1 \right)} } \right)^{\sigma \left( t \right)} \left( {h_{ij}^{\left( 2 \right)} } \right)^{\sigma \left( t \right)} } } } } & \cdots & { - \sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {h_{ij}^{\left( 1 \right)} } \right)^{\sigma \left( t \right)} \left( {h_{ij}^{\left( p \right)} } \right)^{\sigma \left( t \right)} } } } } \\ { - \sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {h_{ij}^{\left( 2 \right)} } \right)^{\sigma \left( t \right)} \left( {h_{ij}^{\left( 1 \right)} } \right)^{\sigma \left( t \right)} } } } } & {\left( {p - 1} \right)\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\left( {h_{ij}^{\left( 2 \right)} } \right)^{\sigma \left( t \right)} } \right)^{2} } } } } & \cdots & { - \sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {h_{ij}^{\left( 2 \right)} } \right)^{\sigma \left( t \right)} \left( {h_{ij}^{\left( p \right)} } \right)^{\sigma \left( t \right)} } } } } \\ \cdots & \cdots & \cdots & \cdots \\ { - \sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {h_{ij}^{\left( p \right)} } \right)^{\sigma \left( t \right)} \left( {h_{ij}^{\left( 1 \right)} } \right)^{\sigma \left( t \right)} } } } } & { - \sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {h_{ij}^{\left( p \right)} } \right)^{\sigma \left( t \right)} \left( {h_{ij}^{\left( 2 \right)} } \right)^{\sigma \left( t \right)} } } } } & \cdots & {\left( {p - 1} \right)\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\left( {h_{ij}^{\left( p \right)} } \right)^{\sigma \left( t \right)} } \right)^{2} } } } } \\ \end{array} } \right]$$
(8)

Proof

$$\begin{aligned} f\left( \omega \right) &= \sum\limits_{k = 1}^{p} {\sum\limits_{q = 1,q \ne k}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\omega_{k} \left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} - \omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{2} } } } } } \hfill \\&= \sum\limits_{k = 1}^{p} {\sum\limits_{q = 1,q \ne k}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\omega_{k}^{2} \left( {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{2} + \omega_{q}^{2} \left( {\left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{2} } \right)} } } } }\\ &\quad -\, 2\sum\limits_{k = 1}^{p} {\sum\limits_{q = 1,q \ne k}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\omega_{k} \omega_{q} \left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } } } } } \hfill \\ & = \sum\limits_{k = 1}^{p} {\sum\limits_{q = 1,q \ne k}^{p} {\left( {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\omega_{k}^{2} \left( {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{2} } } } } \right)} } \\ &\quad +\, \sum\limits_{k = 1}^{p} {\sum\limits_{q = 1,q \ne k}^{p} {\left( {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\omega_{q}^{2} \left( {\left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{2} } } } } \right)} } \\ & \quad -\,2\sum\limits_{k = 1}^{p} {\sum\limits_{q = 1,q \ne k}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\omega_{k} \omega_{q} \left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } } } } } \hfill \\ &= \sum\limits_{k = 1}^{p} {\left( {\left( {p - 1} \right)\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\omega_{k}^{2} \left( {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{2} } } } } \right)} \\ &\quad +\, \sum\limits_{q = 1}^{p} {\left( {\left( {p - 1} \right)\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\omega_{q}^{2} \left( {\left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{2} } } } } \right)} \hfill \\& \quad -\, 2\sum\limits_{k = 1}^{p} {\sum\limits_{q = 1,q \ne k}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\omega_{k} \omega_{q} \left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } } } } } \hfill \\ &= \sum\limits_{k = 1}^{p} {\left( {2\left( {p - 1} \right)\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\omega_{k}^{2} \left( {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{2} } } } } \right)} \\ &\quad -\, 2\sum\limits_{k = 1}^{p} {\sum\limits_{q = 1,q \ne k}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\omega_{k} \omega_{q} \left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } } } } } \hfill \\ &= \sum\limits_{k = 1}^{p} {\left( {2\left( {p - 1} \right)\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} 
{\left( {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{2} } } } } \right)} \omega_{k}^{2} \\ &\quad + \,\sum\limits_{k = 1}^{p} {\sum\limits_{q = 1,q \ne k}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( { - 2\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)\omega_{k} \omega_{q} } } } } } \hfill \\ &= \sum\limits_{k = 1}^{p} {g_{kk} } \omega_{k}^{2} + \sum\limits_{k = 1}^{p} {\sum\limits_{q = 1,q \ne k}^{p} {g_{kq} \omega_{k} \omega_{q} } }\\& = \sum\limits_{k = 1}^{p} {\sum\limits_{q = 1}^{p} {g_{kq} \omega_{k} \omega_{q} } } = \omega^{T} G\omega \hfill \\ \end{aligned}$$

which completes the proof of Theorem 3.1. □

Theorem 3.2

For the model (M-2), if there exist \(k_{0} ,q_{0} \in \left\{ {1,2, \ldots ,p} \right\}\), and \(k_{0} \ne q_{0}\), satisfying \(H^{{\left( {k_{0} } \right)}} \ne cH^{{\left( {q_{0} } \right)}}\) for any \(c \in \left( { - \infty , + \infty } \right)\), then matrix G determined by Eq. (8) is positive definite and, hence, nonsingular and invertible.

Proof

Obviously, \(f\left( \omega \right) = \omega^{T} G\omega \ge 0\). Now, we will prove that \(f\left( \omega \right) \ne 0\) if there exist \(k_{0} ,q_{0} \in \left\{ {1,2, \ldots ,p} \right\}\), and \(k_{0} \ne q_{0}\), satisfying \(H^{{\left( {k_{0} } \right)}} \ne cH^{{\left( {q_{0} } \right)}}\) for any \(c \in \left( { - \infty , + \infty } \right)\).

Suppose that there exists a weight vector \(\omega = \left( {\omega_{1} ,\omega_{2} , \ldots ,\omega_{p} } \right)^{T}\) such that \(f\left( \omega \right) = \omega^{T} G\omega = 0\). Then, for all i, j, t, k, and q, we have \(\omega_{k} \left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} = \omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)}\), i.e., \(\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} = \frac{{\omega_{q} }}{{\omega_{k} }}\left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)}\), which contradicts the assumption that there exist \(k_{0} ,q_{0} \in \left\{ {1,2, \ldots ,p} \right\}\), and \(k_{0} \ne q_{0}\), satisfying \(H^{{\left( {k_{0} } \right)}} \ne cH^{{\left( {q_{0} } \right)}}\) for any \(c \in \left( { - \infty , + \infty } \right)\). Thus, \(f\left( \omega \right) = \omega^{T} G\omega > 0\). In addition, according to Eq. (8), we have \(g_{kq} = g_{qk}\), \(\forall k,q = 1,2, \ldots ,p\), i.e., \(G = \left( {g_{kq} } \right)_{p \times p}\) is a symmetric matrix. According to the definition of positive definiteness, \(G = \left( {g_{kq} } \right)_{p \times p}\) is positive definite and, hence, nonsingular and invertible, i.e., \(G^{ - 1}\) exists. This completes the proof of Theorem 3.2. □

Remark 3.1

Theorem 3.2 shows us that G is positive definite as long as not all \(H^{\left( k \right)} = \left( {h_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) are proportional to one another. If all \(H^{\left( k \right)} = \left( {h_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) are proportional to one another, then a complete weighted consensus is reached. However, in real situations, this case generally does not occur due to the fact that the experts may come from different fields and thus have different experiences and specialties. In the following, we consider the general case where not all \(H^{\left( k \right)} = \left( {h_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) are proportional to one another, and it is always assumed that there exist \(k_{0} ,q_{0} \in \left\{ {1,2, \ldots ,p} \right\}\), and \(k_{0} \ne q_{0}\), satisfying \(H^{{\left( {k_{0} } \right)}} \ne cH^{{\left( {q_{0} } \right)}}\) for any \(c \in \left( { - \infty , + \infty } \right)\).

Lemma 3.1

Let Ω be the feasible set of (M-2). Then, Ω is a closed convex set, and (M-2) is a convex quadratic program.

Proof

Based on the definition of a convex set [3], it is clear that Ω is a closed convex set. Because \(\frac{{\partial^{2} f\left( \omega \right)}}{{\partial \omega^{2} }} = 2G\) is a positive definite matrix, \(f\left( \omega \right) = \omega^{T} G\omega\) is a strictly convex function. Because the constraints of (M-2) are linear, (M-2) is a convex quadratic program. This completes the proof of Lemma 3.1. □

Lemma 3.2

Ma et al. [12]. Let \(F = \left( {f_{ij} } \right)_{p \times p}\) be a symmetric matrix such that \(f_{ij} \le 0\) for \(i \ne j\) and \(f_{ii} > 0\). Then, \(F^{ - 1} \ge \left[ 0 \right]_{p \times p}\) (i.e., \(F^{ - 1}\) is a nonnegative matrix) if and only if F is positive definite.

Theorem 3.3

Let \(H^{\left( k \right)} = \left( {h_{ij}^{\left( k \right)} } \right)_{m \times n} = \Bigg( {\left\{ {\left. {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right|t = 1,2, \ldots ,l} \right\}} \Bigg)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) be p hesitant fuzzy decision matrices. Assume that there exist \(k_{0} ,q_{0} \in \left\{ {1,2, \ldots ,p} \right\}\), and \(k_{0} \ne q_{0}\), satisfying \(H^{{\left( {k_{0} } \right)}} \ne cH^{{\left( {q_{0} } \right)}}\) for any \(c \in \left( { - \infty , + \infty } \right)\). Then, the unique optimal solution to the model (M-2) is

$$\omega^{ * } = \frac{{G^{ - 1} e}}{{e^{T} G^{ - 1} e}}$$
(9)

Proof

We first construct the following Lagrange function:

$$L\left( {\omega ,\lambda } \right) = \omega^{T} G\omega + 2\lambda \left( {e^{T} \omega - 1} \right)$$
(10)

where λ is the Lagrange multiplier.

Differentiating Eq. (10) with respect to ω and λ and setting these partial derivatives equal to zero, we obtain the following equations:

$$\frac{{\partial L\left( {\omega ,\lambda } \right)}}{\partial \omega } = 2G\omega + 2\lambda e = 0$$
(11)
$$\frac{{\partial L\left( {\omega ,\lambda } \right)}}{\partial \lambda } = 2\left( {e^{T} \omega - 1} \right) = 0$$
(12)

By Theorem 3.2, G is invertible. Thus, the optimal solutions to Eqs. (11) and (12) are derived as follows:

$$\omega^{ * } = \frac{{G^{ - 1} e}}{{e^{T} G^{ - 1} e}}$$
(13)
$$\lambda^{ * } = - \frac{1}{{e^{T} G^{ - 1} e}}$$
(14)

According to Theorem 3.2 and Eq. (8), G is a positive definite matrix, \(g_{kq} \le 0\) (\(k \ne q\)), and \(g_{kk} > 0\). Thus, it follows from Lemma 3.2 that \(G^{ - 1} \ge \left[ 0 \right]_{p \times p}\), i.e., \(G^{ - 1}\) is a nonnegative matrix. Therefore, \(\omega^{ * } \ge 0\), which means that the weight vector \(\omega^{ * }\) satisfies the nonnegativity constraint. Combining this with Lemma 3.1, \(\omega^{ * } = \frac{{G^{ - 1} e}}{{e^{T} G^{ - 1} e}}\) is the unique optimal solution to the model (M-2), which completes the proof. □
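
To illustrate Theorem 3.3, the following Python/NumPy sketch (our own illustration; the matrices H1 and H2 are made-up normalized hesitant fuzzy decision matrices) builds the matrix G of Eq. (8) and computes the decision makers' weights by Eq. (9):

```python
import numpy as np

def dm_weights(H):
    """H: list of p decision matrices, each an m x n array of equal-length HFEs."""
    p = len(H)
    # flatten each H^(k) into a vector of all (h_ij^(k))^sigma(t) values
    V = np.array([[v for row in Hk for h in row for v in h] for Hk in H])
    G = -2.0 * (V @ V.T)                                        # off-diagonal entries of Eq. (8)
    np.fill_diagonal(G, 2.0 * (p - 1) * np.sum(V * V, axis=1))  # diagonal entries of Eq. (8)
    e = np.ones(p)
    Ginv_e = np.linalg.solve(G, e)                              # G^{-1} e
    return Ginv_e / (e @ Ginv_e)                                # Eq. (9)

H1 = [[[0.5, 0.3], [0.8, 0.6]], [[0.7, 0.6], [0.4, 0.2]]]
H2 = [[[0.6, 0.4], [0.7, 0.5]], [[0.6, 0.5], [0.5, 0.3]]]
omega = dm_weights([H1, H2])
print(omega, omega.sum())          # nonnegative weights summing to 1
```

By Theorem 3.2, G is invertible as long as the individual matrices are not all proportional to one another, which holds for the illustrative data above.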

3.3 A Nonlinear Optimization Model for Determining the Weight Vector of the Criteria

First, to get the group opinion, we aggregate all the individual hesitant fuzzy decision matrices \(H^{\left( k \right)} = \left( {h_{ij}^{\left( k \right)} } \right)_{m \times n} = \left( {\left\{ {\left. {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right|t = 1,2, \ldots ,l} \right\}} \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) into the collective hesitant fuzzy decision matrix \(H = \left( {h_{ij} } \right)_{m \times n} = \left( {\left\{ {\left. {h_{ij}^{\sigma \left( t \right)} } \right|t = 1,2, \ldots ,l} \right\}} \right)_{m \times n}\), where

$$h_{ij} = \mathop \oplus \limits_{k = 1}^{p} \left( {\omega_{k} h_{ij}^{\left( k \right)} } \right) = \left\{ {\left. {\sum\limits_{k = 1}^{p} {\omega_{k} \left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } } \right|t = 1,2, \ldots ,l} \right\}$$
(15)

In what follows, we further investigate the approach to determining the weight vector \(w = \left( {w_{1} ,w_{2} , \ldots ,w_{n} } \right)^{T}\) of the criteria c j (\(j = 1,2, \ldots ,n\)):

In the cases where the decision maker \(d_{k}\)'s opinion is consistent with the group opinion, the individual hesitant fuzzy decision matrix \(H^{\left( k \right)}\) should be equal to the collective hesitant fuzzy decision matrix H, i.e., \(h_{ij}^{\left( k \right)} = h_{ij}\), for all \(i = 1,2, \ldots ,m\), \(j = 1,2, \ldots ,n\). By Eq. (15), we have

$$\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} = \sum\limits_{q = 1}^{p} {\omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \quad {\text{for}}\,{\text{all}}\,i = 1,2, \ldots ,m,\,j = 1,2, \ldots ,n,\,\,{\text{and}}\,t = 1,2, \ldots ,l$$
(16)

Noting that each criterion \(c_{j}\) has its own importance weight \(w_{j}\), we can express the weighted form of Eq. (16) as

$$w_{j} \left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} = \sum\limits_{q = 1}^{p} {w_{j} \omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \quad {\text{for}}\,{\text{all}}\,i = 1,2, \ldots ,m,\,j = 1,2, \ldots ,n,\,{\text{and}}\,t = 1,2, \ldots ,l$$
(17)

However, Eq. (17) generally does not hold because the decision makers may have different experiences and specialties. As a result, we define a deviation variable \(e_{ij}^{\left( k \right)} \left( w \right)\) as

$$e_{ij}^{\left( k \right)} \left( w \right) = \sum\limits_{t = 1}^{l} {\left( {w_{j} \left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} - \sum\limits_{q = 1}^{p} {w_{j} \omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } } \right)^{2} } \quad {\text{for}}\,{\text{all}}\,i = 1,2, \ldots ,m,\,j = 1,2, \ldots ,n,\,k = 1,2, \ldots ,p$$
(18)

and construct a deviation function

$$\begin{aligned} e\left( w \right) &= \sum\limits_{k = 1}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {e_{ij}^{\left( k \right)} \left( w \right)} } } = \sum\limits_{k = 1}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {w_{j} \left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} - \sum\limits_{q = 1}^{p} {w_{j} \omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } } \right)^{2} } } } } \hfill \\ & = \sum\limits_{k = 1}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} - \sum\limits_{q = 1}^{p} {\omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } } \right)^{2} w_{j}^{2} } } } } \hfill \\ \end{aligned}$$
(19)

The following nonlinear optimization model (M-3) is established to obtain a desirable decision result with as high a group consensus as possible:

$$\begin{aligned} & \hbox{min}\,e\left( w \right) = \sum\limits_{k = 1}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} - \sum\limits_{q = 1}^{p} {\omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } } \right)^{2} w_{j}^{2} } } } } \hfill \\ & {\text{s}}.{\text{t}}.\left\{ \begin{aligned} & \sum\limits_{j = 1}^{n} {w_{j} } = 1, \hfill \\ & w_{j} \ge 0, j = 1,2, \ldots ,n \hfill \\ \end{aligned} \right. \hfill \\ \end{aligned}$$
(M-3)

In the following, the Lagrangian multiplier technique is utilized to solve the model (M-3). To do so, we first construct the Lagrange function:

$$\begin{aligned} L\left( {w,\lambda } \right)& = \sum\limits_{k = 1}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} - \sum\limits_{q = 1}^{p} {\omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } } \right)^{2} w_{j}^{2} } } } } \\&\quad- 2\lambda \left( {\sum\limits_{j = 1}^{n} {w_{j} } - 1} \right) \end{aligned}$$
(20)

where λ is the Lagrange multiplier.

Differentiating Eq. (20) with respect to \(w_{j}\) (\(j = 1,2, \ldots ,n\)) and \(\lambda\), and setting these partial derivatives equal to zero, we obtain the following set of equations:

$$\frac{\partial L}{{\partial w_{j} }} = 2\sum\nolimits_{k = 1}^{p} {\sum\nolimits_{i = 1}^{m} {\sum\limits_{t = 1}^{l} {\left( {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} - \sum\nolimits_{q = 1}^{p} {\omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } } \right)^{2} w_{j} } } } - 2\lambda = 0$$
(21)
$$\frac{\partial L}{\partial \lambda } = - 2\left( {\sum\nolimits_{j = 1}^{n} {w_{j} } - 1} \right) = 0$$
(22)

It follows from Eq. (21) that

$$w_{j} = \frac{\lambda }{{\sum\nolimits_{k = 1}^{p} {\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{t = 1}^{l} {\left( {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} - \sum\nolimits_{q = 1}^{p} {\omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } } \right)^{2} } } } }}$$
(23)

Putting Eq. (23) into Eq. (22), we get

$$\lambda = \frac{1}{{\sum\nolimits_{j = 1}^{n} {\left( {\frac{1}{{\sum\nolimits_{k = 1}^{p} {\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{t = 1}^{l} {\left( {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} - \sum\limits_{q = 1}^{p} {\omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } } \right)^{2} } } } }}} \right)} }}$$
(24)

Then, by Eqs. (23) and (24), we have

$$w_{j} = \frac{{\frac{1}{{\sum\nolimits_{j = 1}^{n} {\left( {\frac{1}{{\sum\nolimits_{k = 1}^{p} {\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{t = 1}^{l} {\left( {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} - \sum\nolimits_{q = 1}^{p} {\omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } } \right)^{2} } } } }}} \right)} }}}}{{\sum\nolimits_{k = 1}^{p} {\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{t = 1}^{l} {\left( {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} - \sum\nolimits_{q = 1}^{p} {\omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } } \right)^{2} } } } }}$$
(25)

Remark

It is noted that [23] used the maximizing deviation method [21] to determine the optimal criterion weights under the assumption that the criterion weights are completely unknown. The idea of the maximizing deviation method is that a criterion with a larger deviation value among alternatives should be assigned a larger weight, while a criterion with a smaller deviation value among alternatives should be assigned a smaller weight. Different from the maximizing deviation method, the approach proposed in this paper uses the maximizing group consensus method to determine the weight vector of the criteria. The main idea of the maximizing group consensus method is that the criteria weights should make all the weighted individual hesitant fuzzy decision matrices as close to the collective hesitant fuzzy decision matrix as possible.

Based on the above analysis, we next develop an approach to the MCGDM problem with hesitant fuzzy information, which is composed of the following steps (a computational sketch of Steps 4-7 is given after the procedure):

  1. Step 1

    For a MCGDM problem, the decision maker \(d_{k} \in D\) constructs the hesitant fuzzy decision matrix \(A^{\left( k \right)} = \left( {a_{ij}^{\left( k \right)} } \right)_{m \times n}\), where \(a_{ij}^{\left( k \right)}\) is a HFE, given by the DM \(d_{k} \in D\), for the alternative \(x_{i} \in X\) with respect to the criterion \(c_{j} \in C\). Utilize Eq. (3) to transform the hesitant fuzzy decision matrices \(A^{\left( k \right)} = \left( {a_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) into the normalized hesitant fuzzy decision matrices \(B^{\left( k \right)} = \left( {b_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)), and then extend them, as described in Sect. 3.1, into the hesitant fuzzy decision matrices \(H^{\left( k \right)} = \left( {h_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) whose HFEs all have length l.

  2. Step 2

    Utilize Eq. (9) to obtain decision makers’ weights \(\omega = \left( {\omega_{1} ,\omega_{2} , \ldots ,\omega_{p} } \right)^{T}\).

  3. Step 3

    Utilize Eq. (25) to obtain the weights of the criteria \(w = \left( {w_{1} ,w_{2} , \ldots ,w_{n} } \right)^{T}\).

  4. Step 4

    Utilize Eq. (15) to aggregate all the individual hesitant fuzzy decision matrices \(H^{\left( k \right)} = \left( {h_{ij}^{\left( k \right)} } \right)_{m \times n} = \left( {\left\{ {\left. {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right|t = 1,2, \ldots ,l} \right\}} \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) into the collective hesitant fuzzy decision matrix \(H = \left( {h_{ij} } \right)_{m \times n} = \left( {\left\{ {\left. {h_{ij}^{\sigma \left( t \right)} } \right|t = 1,2, \ldots ,l} \right\}} \right)_{m \times n}\).

  5. Step 5

    Utilize the additive weighting operator to get the collective overall HFE h i corresponding to each alternative x i , where

    $$h_{i} = \mathop \oplus \limits_{j = 1}^{n} w_{j} h_{ij} = \left\{ {\left. {\sum\limits_{j = 1}^{n} {w_{j} h_{ij}^{\sigma \left( t \right)} } } \right|t = 1,2, \ldots ,l} \right\},\quad i = 1,2, \ldots ,m$$
    (26)
  6. Step 6

    Calculate the scores \(s\left( {h_{i} } \right)\) (\(i = 1,2, \ldots ,m\)) of the overall HFEs \(h_{i}\) (\(i = 1,2, \ldots ,m\)) as

    $$s\left( {h_{i} } \right) = \frac{{\sum\nolimits_{t = 1}^{l} {\sum\nolimits_{j = 1}^{n} {w_{j} h_{ij}^{\sigma \left( t \right)} } } }}{l}$$
    (27)
  7. Step 7

    Rank the alternatives x i (\(i = 1,2, \ldots ,m\)) in accordance with the ranking of the HFEs h i (\(i = 1,2, \ldots ,m\)), and then select the optimal one.

  8. Step 8

    End.
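
The sketch below (our own illustration, continuing the hypothetical data used earlier) implements Steps 4-7: it aggregates the individual matrices by Eq. (15), forms the collective overall HFE of each alternative by Eq. (26), scores it by Eq. (27), and ranks the alternatives:

```python
import numpy as np

def rank_alternatives(H, omega, w):
    """H: (p, m, n, l) array of HFE values; omega: DM weights; w: criteria weights."""
    H = np.asarray(H, dtype=float)
    collective = np.einsum('k,kijt->ijt', omega, H)   # Eq. (15): collective matrix h_ij
    overall = np.einsum('j,ijt->it', w, collective)   # Eq. (26): collective overall HFE h_i
    scores = overall.mean(axis=1)                     # Eq. (27): s(h_i)
    return scores, np.argsort(-scores)                # ranking, best alternative first

H1 = [[[0.5, 0.3], [0.8, 0.6]], [[0.7, 0.6], [0.4, 0.2]]]
H2 = [[[0.6, 0.4], [0.7, 0.5]], [[0.6, 0.5], [0.5, 0.3]]]
scores, ranking = rank_alternatives([H1, H2],
                                    omega=np.array([0.49, 0.51]),
                                    w=np.array([0.5, 0.5]))
print(scores, ranking)        # the alternative with the highest score comes first
```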

4 Extended Nonlinear Optimization Models Under Interval-Valued Hesitant Fuzzy Situations

In this section, we extend the results obtained in Sect. 3 to interval-valued hesitant fuzzy environments.

Similar to Sect. 3.1, a MCGDM problem with interval-valued hesitant fuzzy information can be summarized as follows: Let \(X = \left\{ {x_{1} ,x_{2} , \ldots ,x_{m} } \right\}\) be a set of m alternatives, let \(C = \left\{ {c_{1} ,c_{2} , \ldots ,c_{n} } \right\}\) be a collection of n criteria, whose weight vector is \(w = \left( {w_{1} ,w_{2} , \ldots ,w_{n} } \right)^{T}\), with \(w_{j} \in \left[ {0,1} \right]\), \(j = 1,2, \ldots ,n\), and \(\sum\nolimits_{j = 1}^{n} {w_{j} } = 1\), and let \(D = \left\{ {d_{1} ,d_{2} , \ldots ,d_{p} } \right\}\) be a set of p decision makers, whose weight vector is \(\omega = \left( {\omega_{1} ,\omega_{2} , \ldots ,\omega_{p} } \right)^{T}\), with \(\omega_{k} \in \left[ {0,1} \right]\), \(k = 1,2, \ldots ,p\), and \(\sum\limits_{k = 1}^{p} {\omega_{k} } = 1\). Let \(\tilde{A}^{\left( k \right)} = \left( {\tilde{a}_{ij}^{\left( k \right)} } \right)_{m \times n}\) be an interval-valued hesitant fuzzy decision matrix, where \(\tilde{a}_{ij}^{\left( k \right)} = \left\{ {\left. {\left( {\tilde{a}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right|t = 1,2, \ldots ,l_{{\tilde{a}_{ij}^{\left( k \right)} }} } \right\}\) is an IVHFE denoting the set of all the possible interval degrees to which the alternative \(x_{i} \in X\) satisfies the criterion \(c_{j} \in C\), given by the decision maker \(d_{k} \in D\).

The following equation [27] is utilized to transform the interval-valued hesitant fuzzy decision matrices \(\tilde{A}^{\left( k \right)} = \left( {\tilde{a}_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) into the normalized interval-valued hesitant fuzzy decision matrices \(\tilde{B}^{\left( k \right)} = \left( {\tilde{b}_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)):

$$\tilde{b}_{ij}^{\left( k \right)} = \left\{ \begin{aligned} \tilde{a}_{ij}^{\left( k \right)} ,{\kern 1pt} \quad {\text{for benefit criterion}}\,{\kern 1pt} c_{j} \hfill \\ \left( {\tilde{a}_{ij}^{\left( k \right)} } \right)^{c} ,\quad {\text{for cost criterion}}\,{\kern 1pt} c_{j} \hfill \\ \end{aligned} \right.\quad i = 1,2, \ldots ,m,\,j = 1,2, \ldots ,n,\,k = 1,2, \ldots ,p$$
(28)

where \(\left( {\tilde{a}_{ij}^{\left( k \right)} } \right)^{c}\) is the complement of \(\tilde{a}_{ij}^{\left( k \right)}\), such that \(\left( {\tilde{a}_{ij}^{\left( k \right)} } \right)^{c} = \left\{ {\left. {\left[ {1 - \left( {\left( {\tilde{a}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} ,1 - \left( {\left( {\tilde{a}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} } \right]} \right|t = 1,2, \ldots ,l_{{\tilde{a}_{ij}^{\left( k \right)} }} } \right\}\).

In most situations, the numbers of elements in the different IVHFEs \(\tilde{b}_{ij}^{\left( k \right)}\) of \(\tilde{B}^{\left( k \right)}\) (\(k = 1,2, \ldots ,p\)) are different. To operate on these IVHFEs consistently, we should extend the shorter ones until all of them have the same length. Let \(l = \hbox{max} \left\{ {\left. {l_{{\tilde{b}_{ij}^{\left( k \right)} }} } \right|i = 1,2, \ldots ,m,\,j = 1,2, \ldots ,n,\,k = 1,2, \ldots ,p} \right\}\). Following the regulations mentioned in [29], we transform the interval-valued hesitant fuzzy decision matrices \(\tilde{B}^{\left( k \right)} = \left( {\tilde{b}_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) into the corresponding interval-valued hesitant fuzzy decision matrices \(\tilde{H}^{\left( k \right)} = \left( {\tilde{h}_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)), such that \(l_{{\tilde{h}_{ij}^{\left( k \right)} }} = l\) for all \(i = 1,2, \ldots ,m\), \(j = 1,2, \ldots ,n\), and \(k = 1,2, \ldots ,p\).

For any two IVHFEs \(\tilde{h}_{1} = \left\{ {\left. {\tilde{h}_{1}^{\sigma \left( i \right)} } \right|i = 1,2, \ldots ,l_{{\tilde{h}_{1} }} } \right\} = \left\{ {\left. {\left[ {\left( {\tilde{h}_{1}^{\sigma \left( i \right)} } \right)^{L} ,\left( {\tilde{h}_{1}^{\sigma \left( i \right)} } \right)^{U} } \right]} \right|i = 1,2, \ldots ,l_{{\tilde{h}_{1} }} } \right\}\) and \(\tilde{h}_{2} = \left\{ {\left. {\tilde{h}_{2}^{\sigma \left( i \right)} } \right|i = 1,2, \ldots ,l_{{\tilde{h}_{2} }} } \right\} = \left\{ {\left. {\left[ {\left( {\tilde{h}_{2}^{\sigma \left( i \right)} } \right)^{L} ,\left( {\tilde{h}_{2}^{\sigma \left( i \right)} } \right)^{U} } \right]} \right|i = 1,2, \ldots ,l_{{\tilde{h}_{2} }} } \right\}\), we define the square deviation between \(\tilde{h}_{1}\) and \(\tilde{h}_{2}\) as

$$d\left( {\tilde{h}_{1} ,\tilde{h}_{2} } \right) = \sum\limits_{i = 1}^{l} {\left( {\left( {\left( {\tilde{h}_{1}^{\sigma \left( i \right)} } \right)^{L} - \left( {\tilde{h}_{2}^{\sigma \left( i \right)} } \right)^{L} } \right)^{2} + \left( {\left( {\tilde{h}_{1}^{\sigma \left( i \right)} } \right)^{U} - \left( {\tilde{h}_{2}^{\sigma \left( i \right)} } \right)^{U} } \right)^{2} } \right)}$$
(29)

where \(l = \hbox{max} \left\{ {l_{{\tilde{h}_{1} }} ,l_{{\tilde{h}_{2} }} } \right\}\), and \(\tilde{h}_{1}^{\sigma \left( i \right)}\) and \(\tilde{h}_{2}^{\sigma \left( i \right)}\) are the ith largest intervals in \(\tilde{h}_{1}\) and \(\tilde{h}_{2}\), respectively.

If we take the weight of each IVHFE into account, then we define the weighted square deviation between \(\tilde{h}_{1}\) and \(\tilde{h}_{2}\) as

$$d\left( {\omega_{1} \tilde{h}_{1} ,\omega_{2} \tilde{h}_{2} } \right) = \sum\limits_{i = 1}^{l} {\left( {\left( {\omega_{1} \left( {\tilde{h}_{1}^{\sigma \left( i \right)} } \right)^{L} - \omega_{2} \left( {\tilde{h}_{2}^{\sigma \left( i \right)} } \right)^{L} } \right)^{2} + \left( {\omega_{1} \left( {\tilde{h}_{1}^{\sigma \left( i \right)} } \right)^{U} - \omega_{2} \left( {\tilde{h}_{2}^{\sigma \left( i \right)} } \right)^{U} } \right)^{2} } \right)}$$
(30)

where \(\omega_{1}\) and \(\omega_{2}\) are the weights of \(\tilde{h}_{1}\) and \(\tilde{h}_{2}\), respectively, \(\omega = \left( {\omega_{1} ,\omega_{2} } \right)^{T}\), \(\omega_{i} \ge 0\), \(i = 1,2\), \(\omega_{1} + \omega_{2} = 1\).
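
To make Eqs. (29) and (30) concrete, the short sketch below computes the (weighted) square deviation between two IVHFEs, each stored as an array of \(\left[ {L,U} \right]\) bounds already sorted in decreasing order and extended to the common length \(l\); the helper name is our own.

```python
import numpy as np

def square_deviation(h1, h2, w1=1.0, w2=1.0):
    """Weighted square deviation of Eq. (30); with w1 = w2 = 1 it is Eq. (29).

    h1, h2: arrays of shape (l, 2); row t holds [lower, upper] of the
    t-th largest interval.
    """
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    diff = w1 * h1 - w2 * h2          # bound-wise differences
    return float(np.sum(diff ** 2))   # sum of squared lower- and upper-bound gaps

# Example with two IVHFEs of common length l = 2:
h1 = [[0.3, 0.5], [0.2, 0.4]]
h2 = [[0.4, 0.6], [0.1, 0.3]]
print(square_deviation(h1, h2))             # Eq. (29)
print(square_deviation(h1, h2, 0.6, 0.4))   # Eq. (30) with weights 0.6 and 0.4
```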

Based on Eq. (30), we define the weighted square deviation between each pair of the individual interval-valued hesitant fuzzy decision matrices \(\left( {\tilde{H}^{\left( k \right)} ,\tilde{H}^{\left( q \right)} } \right)\) as

$$d\left( {\omega_{k} \tilde{H}^{\left( k \right)} ,\omega_{q} \tilde{H}^{\left( q \right)} } \right) = \sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\left( {\omega_{k} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} - \omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} } \right)^{2} + \left( {\omega_{k} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} - \omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} } \right)^{2} } \right)} } }$$
(31)

where \(\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)}\) and \(\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)}\) are the tth largest intervals in \(\tilde{h}_{ij}^{\left( k \right)}\) and \(\tilde{h}_{ij}^{\left( q \right)}\), respectively.

Furthermore, we define the weighted square deviations among all the pairs of the individual interval-valued hesitant fuzzy decision matrices as

$$\begin{aligned} \tilde{f}\left( \omega \right) &= \sum\limits_{k = 1}^{p} {\sum\limits_{q = 1,q \ne k}^{p} {d\left( {\omega_{k} \tilde{H}^{\left( k \right)} ,\omega_{q} \tilde{H}^{\left( q \right)} } \right)} } \hfill \\ & = \sum\limits_{k = 1}^{p} {\sum\limits_{q = 1,q \ne k}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\left( {\omega_{k} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} - \omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} } \right)^{2} + \left( {\omega_{k} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} - \omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} } \right)^{2} } \right)} } } } } \hfill \\ \end{aligned}$$
(32)

Then, similar to (M-1), the following nonlinear optimization model (M-4) is constructed to make the group consensus as high as possible:

$$\begin{aligned} & \hbox{min}\,\tilde{f}\left( \omega \right) = \sum\limits_{k = 1}^{p} {\sum\limits_{q = 1,q \ne k}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\left( {\omega_{k} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} - \omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} } \right)^{2} + \left( {\omega_{k} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} - \omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} } \right)^{2} } \right)} } } } } \hfill \\ & {\text{s}} . {\text{t}} .\left\{ \begin{aligned} & \sum\limits_{k = 1}^{p} {\omega_{k} } = 1, \hfill \\ & \omega_{k} \ge 0, k = 1,2, \ldots ,p, \hfill \\ \end{aligned} \right. \hfill \\ \end{aligned}$$
(M-4)

Theorem 4.1

Model (M-4) is equivalent to the model (M-5) below in matrix form

$$\begin{aligned} & \hbox{min}\,\tilde{f}\left( \omega \right) = \omega^{T} \tilde{G}\omega \hfill \\ & {\text{s}} . {\text{t}} .\left\{ \begin{aligned} & e^{T} \omega = 1, \hfill \\ & \omega \ge 0 \hfill \\ \end{aligned} \right. \hfill \\ \end{aligned}$$
(M-5)

where \(\omega = \left( {\omega_{1} ,\omega_{2} , \ldots ,\omega_{p} } \right)^{T}\), \(e = \left( {1,1, \ldots ,1} \right)^{T}\), and

$$\tilde{G} = \left( {\tilde{g}_{kq} } \right)_{p \times p} ,\quad \tilde{g}_{kq} = \left\{ \begin{aligned} & 2\left( {p - 1} \right)\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\left( {\left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} } \right)^{2} + \left( {\left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} } \right)^{2} } \right)} } } ,\quad k = q \hfill \\ & - 2\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} + \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} } \right)} } } ,\quad k \ne q \hfill \\ \end{aligned} \right.$$
(33)

Proof

$$\begin{aligned} \tilde{f}\left( \omega \right) & = \sum\limits_{{k = 1}}^{p} {\sum\limits_{{q = 1,q \ne k}}^{p} {\sum\limits_{{i = 1}}^{m} {\sum\limits_{{j = 1}}^{n} {\sum\limits_{{t = 1}}^{l} {\left( {\left( {\omega _{k} \left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} - \omega _{q} \left( {\left( {\tilde{h}_{{ij}}^{{\left( q \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} } \right)^{2} + \left( {\omega _{k} \left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} - \omega _{q} \left( {\left( {\tilde{h}_{{ij}}^{{\left( q \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} } \right)^{2} } \right)} } } } } \\ & = \sum\limits_{{k = 1}}^{p} {\sum\limits_{{q = 1,q \ne k}}^{p} {\sum\limits_{{i = 1}}^{m} {\sum\limits_{{j = 1}}^{n} {\sum\limits_{{t = 1}}^{l} {\left( {\omega _{k}^{2} \left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} } \right)^{2} + \omega _{q}^{2} \left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( q \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} } \right)^{2} + \omega _{k}^{2} \left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} } \right)^{2} + \omega _{q}^{2} \left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( q \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} } \right)^{2} } \right)} } } } } \\ & \quad - 2\sum\limits_{{k = 1}}^{p} {\sum\limits_{{q = 1,q \ne k}}^{p} {\sum\limits_{{i = 1}}^{m} {\sum\limits_{{j = 1}}^{n} {\sum\limits_{{t = 1}}^{l} {\left( {\omega _{k} \omega _{q} \left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} \left( {\left( {\tilde{h}_{{ij}}^{{\left( q \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} + \omega _{k} \omega _{q} \left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} \left( {\left( {\tilde{h}_{{ij}}^{{\left( q \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} } \right)} } } } } \\ & = \sum\limits_{{k = 1}}^{p} {\omega _{k}^{2} \sum\limits_{{q = 1,q \ne k}}^{p} {\left( {\sum\limits_{{i = 1}}^{m} {\sum\limits_{{j = 1}}^{n} {\sum\limits_{{t = 1}}^{l} {\left( {\left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} } \right)^{2} + \left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} } \right)^{2} } \right)} } } } \right)} } \\ & \quad + \sum\limits_{{k = 1}}^{p} {\sum\limits_{{q = 1,q \ne k}}^{p} {\omega _{q}^{2} \left( {\sum\limits_{{i = 1}}^{m} {\sum\limits_{{j = 1}}^{n} {\sum\limits_{{t = 1}}^{l} {\left( {\left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( q \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} } \right)^{2} + \left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( q \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} } \right)^{2} } \right)} } } } \right)} } \\ & \quad - 2\sum\limits_{{k = 1}}^{p} {\sum\limits_{{q = 1,q \ne k}}^{p} {\sum\limits_{{i = 1}}^{m} {\sum\limits_{{j = 1}}^{n} {\sum\limits_{{t = 1}}^{l} {\omega _{k} \omega _{q} \left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} \left( {\left( {\tilde{h}_{{ij}}^{{\left( q \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} + \left( {\left( {\tilde{h}_{{ij}}^{{\left( k 
\right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} \left( {\left( {\tilde{h}_{{ij}}^{{\left( q \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} } \right)} } } } } \\ & = \sum\limits_{{k = 1}}^{p} {\left( {\omega _{k}^{2} \left( {p - 1} \right)\sum\limits_{{i = 1}}^{m} {\sum\limits_{{j = 1}}^{n} {\sum\limits_{{t = 1}}^{l} {\left( {\left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} } \right)^{2} + \left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} } \right)^{2} } \right)} } } } \right)} \\ & \quad + \sum\limits_{{q = 1}}^{p} {\omega _{q}^{2} \left( {p - 1} \right)\left( {\sum\limits_{{i = 1}}^{m} {\sum\limits_{{j = 1}}^{n} {\sum\limits_{{t = 1}}^{l} {\left( {\left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( q \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} } \right)^{2} + \left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( q \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} } \right)^{2} } \right)} } } } \right)} \\ & \quad - 2\sum\limits_{{k = 1}}^{p} {\sum\limits_{{q = 1,q \ne k}}^{p} {\sum\limits_{{i = 1}}^{m} {\sum\limits_{{j = 1}}^{n} {\sum\limits_{{t = 1}}^{l} {\omega _{k} \omega _{q} \left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} \left( {\left( {\tilde{h}_{{ij}}^{{\left( q \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} + \left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} \left( {\left( {\tilde{h}_{{ij}}^{{\left( q \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} } \right)} } } } } \\ & = \sum\limits_{{k = 1}}^{p} {\left( {2\left( {p - 1} \right)\sum\limits_{{i = 1}}^{m} {\sum\limits_{{j = 1}}^{n} {\sum\limits_{{t = 1}}^{l} {\left( {\left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} } \right)^{2} + \left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} } \right)^{2} } \right)} } } } \right)\omega _{k}^{2} } \\ & \quad + \sum\limits_{{k = 1}}^{p} {\sum\limits_{{q = 1,q \ne k}}^{p} {\sum\limits_{{i = 1}}^{m} {\sum\limits_{{j = 1}}^{n} {\sum\limits_{{t = 1}}^{l} {\left( { - 2\left( {\left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} \left( {\left( {\tilde{h}_{{ij}}^{{\left( q \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{L} + \left( {\left( {\tilde{h}_{{ij}}^{{\left( k \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} \left( {\left( {\tilde{h}_{{ij}}^{{\left( q \right)}} } \right)^{{\sigma \left( t \right)}} } \right)^{U} } \right)} \right)\omega _{k} \omega _{q} } } } } } \\ & = \sum\limits_{{k = 1}}^{p} {\tilde{g}_{{kk}} } \omega _{k}^{2} + \sum\limits_{{k = 1}}^{p} {\sum\limits_{{q = 1,q \ne k}}^{p} {\tilde{g}_{{kq}} \omega _{k} \omega _{q} } } = \sum\limits_{{k = 1}}^{p} {\sum\limits_{{q = 1}}^{p} {\tilde{g}_{{kq}} \omega _{k} \omega _{q} } } = \omega ^{T} \tilde{G}\omega \\ \end{aligned}$$

which completes the proof of Theorem 4.1. □

Theorem 4.2

For the model (M-5), if there exist \(k_{0} ,q_{0} \in \left\{ {1,2, \ldots ,p} \right\}\), and \(k_{0} \ne q_{0}\), satisfying \(\tilde{H}^{{\left( {k_{0} } \right)}} \ne c\tilde{H}^{{\left( {q_{0} } \right)}}\) for any \(c \in \left( { - \infty , + \infty } \right)\), then the matrix \(\tilde{G}\) determined by Eq. (33) is positive definite and, hence, nonsingular and invertible.

Proof

Obviously, \(\tilde{f}\left( \omega \right) = \omega^{T} \tilde{G}\omega \ge 0\). Now, we will prove that \(\tilde{f}\left( \omega \right) \ne 0\) if there exist \(k_{0} ,q_{0} \in \left\{ {1,2, \ldots ,p} \right\}\), and \(k_{0} \ne q_{0}\), satisfying \(\tilde{H}^{{\left( {k_{0} } \right)}} \ne c\tilde{H}^{{\left( {q_{0} } \right)}}\) for any \(c \in \left( { - \infty , + \infty } \right)\).

Suppose that there exists a weight vector \(\omega = \left( {\omega_{1} ,\omega_{2} , \ldots ,\omega_{p} } \right)^{T}\) such that \(\tilde{f}\left( \omega \right) = \omega^{T} \tilde{G}\omega = 0\). Then, for all i, j, t, k, and q, we have \(\omega_{k} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} = \omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{L}\) and \(\omega_{k} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} = \omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{U}\), i.e., \(\frac{{\left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} }}{{\left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} }} = \frac{{\left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} }}{{\left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} }} = \frac{{\omega_{q} }}{{\omega_{k} }}\), which contradicts the assumption that there exist \(k_{0} ,q_{0} \in \left\{ {1,2, \ldots ,p} \right\}\), and \(k_{0} \ne q_{0}\), satisfying \(\tilde{H}^{{\left( {k_{0} } \right)}} \ne c\tilde{H}^{{\left( {q_{0} } \right)}}\) for any \(c \in \left( { - \infty , + \infty } \right)\). Thus, \(\tilde{f}\left( \omega \right) = \omega^{T} \tilde{G}\omega > 0\). In addition, according to Eq. (33), we have \(\tilde{g}_{kq} = \tilde{g}_{qk}\), \(\forall k,q = 1,2, \ldots ,p\), i.e., \(\tilde{G} = \left( {\tilde{g}_{kq} } \right)_{p \times p}\) is a symmetric matrix. Therefore, by the definition of positive definiteness, \(\tilde{G} = \left( {\tilde{g}_{kq} } \right)_{p \times p}\) is positive definite and, hence, nonsingular and invertible, i.e., \(\tilde{G}^{ - 1}\) exists. This completes the proof of Theorem 4.2. □

Remark 4.1

Theorem 4.2 shows us that \(\tilde{G}\) is positive definite as long as not all \(\tilde{H}^{\left( k \right)} = \left( {\tilde{h}_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) are proportional to one another. If all \(\tilde{H}^{\left( k \right)} = \left( {\tilde{h}_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) are proportional to one another, then a complete weighted consensus is reached. However, in real situations, this case generally does not occur due to the fact that the experts may come from different fields and thus have different experiences and specialties. In the following, we consider the general case in which not all \(\tilde{H}^{\left( k \right)} = \left( {\tilde{h}_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) are proportional to one another, and it is always assumed that there exist \(k_{0} ,q_{0} \in \left\{ {1,2, \ldots ,p} \right\}\), and \(k_{0} \ne q_{0}\), satisfying \(\tilde{H}^{{\left( {k_{0} } \right)}} \ne c\tilde{H}^{{\left( {q_{0} } \right)}}\) for any \(c \in \left( { - \infty , + \infty } \right)\).

Similar to Lemma 3.1, we have the following result:

Lemma 4.1

Let \(\varOmega\) be the feasible set of (M-5). Then, \(\varOmega\) is a closed convex set, and (M-5) is a convex quadratic program.

Similar to Theorem 3.3, we have the following theorem:

Theorem 4.3

Let \(\tilde{H}^{\left( k \right)} = \left( {\tilde{h}_{ij}^{\left( k \right)} } \right)_{m \times n} = \left( {\left\{ {\left. {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right|t = 1,2, \ldots ,l} \right\}} \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) be p interval-valued hesitant fuzzy decision matrices. Assume that there exist \(k_{0} ,q_{0} \in \left\{ {1,2, \ldots ,p} \right\}\), \(k_{0} \ne q_{0}\), satisfying \(\tilde{H}^{{\left( {k_{0} } \right)}} \ne c\tilde{H}^{{\left( {q_{0} } \right)}}\) for any \(c \in \left( { - \infty , + \infty } \right)\). Then, the unique optimal solution to the model (M-5) is

$$\omega^{ * } = \frac{{\tilde{G}^{ - 1} e}}{{e^{T} \tilde{G}^{ - 1} e}}$$
(34)
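
A brief computational sketch of Theorem 4.3: it assembles \(\tilde{G}\) entrywise according to Eq. (33) and then evaluates Eq. (34). The \((p, m, n, l, 2)\) array layout, the function name, and the toy data are our own illustrative conventions, not part of the model.

```python
import numpy as np

def expert_weights(H):
    """H: array of shape (p, m, n, l, 2) with the normalized IVHF matrices;
    the last axis holds the [lower, upper] bounds of the t-th largest interval.
    Returns omega* of Eq. (34), built from the matrix G of Eq. (33)."""
    p = H.shape[0]
    flat = H.reshape(p, -1)              # all bounds of expert k in one vector
    inner = flat @ flat.T                # inner[k, q] = sum_{i,j,t} (L_k L_q + U_k U_q)
    G = -2.0 * inner                     # off-diagonal entries of Eq. (33)
    np.fill_diagonal(G, 2.0 * (p - 1) * np.diag(inner))   # diagonal entries
    e = np.ones(p)
    Ginv_e = np.linalg.solve(G, e)       # G^{-1} e without forming the inverse
    return Ginv_e / (e @ Ginv_e)         # Eq. (34)

# Toy data: p = 3 experts, m = 2 alternatives, n = 2 criteria, l = 2 intervals
# (sorting by sigma(t) is omitted for brevity; it does not affect the formula).
rng = np.random.default_rng(0)
low = rng.uniform(0.1, 0.5, size=(3, 2, 2, 2))
H = np.stack([low, low + rng.uniform(0.1, 0.4, size=low.shape)], axis=-1)
omega = expert_weights(H)
print(omega, omega.sum())                # the weights sum to 1
```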

In what follows, we further investigate the approach to determining the weight vector \(w = \left( {w_{1} ,w_{2} , \ldots ,w_{n} } \right)^{T}\) of the criteria \(c_{j}\) (\(j = 1,2, \ldots ,n\)) from the angle of maximizing the group consensus.

First, to get the group opinion, we aggregate all the individual interval-valued hesitant fuzzy decision matrices \(\tilde{H}^{\left( k \right)} = \left( {\tilde{h}_{ij}^{\left( k \right)} } \right)_{m \times n} = \left( {\left\{ {\left. {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right|t = 1,2, \ldots ,l} \right\}} \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) into the collective interval-valued hesitant fuzzy decision matrix \(\tilde{H} = \left( {\tilde{h}_{ij} } \right)_{m \times n} = \left( {\left\{ {\left. {\tilde{h}_{ij}^{\sigma \left( t \right)} } \right|t = 1,2, \ldots ,l} \right\}} \right)_{m \times n}\), where

$$\begin{aligned}\tilde{h}_{ij} &= \mathop \oplus \limits_{k = 1}^{p} \left( {\omega_{k} \tilde{h}_{ij}^{\left( k \right)} } \right) = \left\{ {\left. {\sum\limits_{k = 1}^{p} {\omega_{k} \left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } } \right|t = 1,2, \ldots ,l} \right\} \\ &= \left\{ {\left. {\left[ {\sum\limits_{k = 1}^{p} {\omega_{k} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} } ,\sum\limits_{k = 1}^{p} {\omega_{k} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} } } \right]} \right|t = 1,2, \ldots ,l} \right\} \end{aligned}$$
(35)

In the cases where the decision maker \(d_{k}\)’s opinion is consistent with the group opinion, the individual interval-valued hesitant fuzzy decision matrix \(\tilde{H}^{\left( k \right)}\) should be equal to the collective interval-valued hesitant fuzzy decision matrix \(\tilde{H}\), i.e., \(\tilde{h}_{ij}^{\left( k \right)} = \tilde{h}_{ij}\), for all \(i = 1,2, \ldots ,m\), \(j = 1,2, \ldots ,n\). By Eq. (35), we have

$$\begin{aligned} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} &= \sum\limits_{q = 1}^{p} {\omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} } \,{\text{and}}\\ \,\left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} &= \sum\limits_{q = 1}^{p} {\omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} } \quad {\text{for}}\,{\text{all}}\,i = 1,2, \ldots ,m,\,j = 1,2, \ldots ,n,\,{\text{and}}\,t = 1,2, \ldots ,l \end{aligned}$$
(36)

Noting that each criterion \(c_{j}\) has its own importance weight \(w_{j}\), we can express the weighted form of Eq. (36) as

$$\begin{aligned}w_{j} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L}& = \sum\limits_{q = 1}^{p} {w_{j} \omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} } \,{\text{and}}\\ \,w_{j} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} &= \sum\limits_{q = 1}^{p} {w_{j} \omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} } \quad {\text{for}}\,{\text{all}}\,i = 1,2, \ldots ,m,\,j = 1,2, \ldots ,n,\,{\text{and}}\,t = 1,2, \ldots ,l \end{aligned}$$
(37)

However, Eq. (37) generally does not hold because the decision makers may have different experiences and specialties. As a result, we define a deviation variable \(\tilde{e}_{ij}^{\left( k \right)} \left( w \right)\) as

$$\tilde{e}_{ij}^{\left( k \right)} \left( w \right) = \sum\limits_{t = 1}^{l} {\left( {\left( {w_{j} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} - \sum\limits_{q = 1}^{p} {w_{j} \omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} } } \right)^{2} + \left( {w_{j} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} - \sum\limits_{q = 1}^{p} {w_{j} \omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} } } \right)^{2} } \right)} \quad {\text{for}}\,{\text{all}}\,i = 1,2, \ldots ,m,\,j = 1,2, \ldots ,n,\,k = 1,2, \ldots ,p$$
(38)

and construct a deviation function

$$\begin{aligned} \tilde{e}\left( w \right)& = \sum\limits_{k = 1}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\tilde{e}_{ij}^{\left( k \right)} \left( w \right)} } } \hfill \\&= \sum\limits_{k = 1}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\left( {w_{j} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} - \sum\limits_{q = 1}^{p} {w_{j} \omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} } } \right)^{2} + \left( {w_{j} \left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} - \sum\limits_{q = 1}^{p} {w_{j} \omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} } } \right)^{2} } \right)} } } } \hfill \\ &= \sum\limits_{k = 1}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\left( {\left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} - \sum\limits_{q = 1}^{p} {\omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} } } \right)^{2} + \left( {\left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} - \sum\limits_{q = 1}^{p} {\omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} } } \right)^{2} } \right)w_{j}^{2} } } } } \hfill \\ \end{aligned}$$
(39)

The following nonlinear optimization model (M-6) is established to obtain a desirable decision result with as high group consensus as possible:

$$\begin{aligned} & \hbox{min}\,\tilde{e}\left( w \right) = \sum\limits_{k = 1}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\left( {\left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} - \sum\limits_{q = 1}^{p} {\omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} } } \right)^{2} + \left( {\left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} - \sum\limits_{q = 1}^{p} {\omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} } } \right)^{2} } \right)w_{j}^{2} } } } } \hfill \\ & {\text{s}} . {\text{t}} .\left\{ \begin{aligned} & \sum\limits_{j = 1}^{n} {w_{j} } = 1, \hfill \\ & w_{j} \ge 0,\,j = 1,2, \ldots ,n \hfill \\ \end{aligned} \right. \hfill \\ \end{aligned}$$
(M-6)

Similar to Subsect. 3.3, by using the Lagrangian multiplier technique, the solution to the model (M-6) can be derived as

$$w_{j} = \frac{{\frac{1}{{\sum\nolimits_{j = 1}^{n} {\left( {\frac{1}{{\sum\nolimits_{k = 1}^{p} {\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{t = 1}^{l} {\left( {\left( {\left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} - \sum\limits_{q = 1}^{p} {\omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} } } \right)^{2} + \left( {\left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} - \sum\nolimits_{q = 1}^{p} {\omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} } } \right)^{2} } \right)} } } }}} \right)} }}}}{{\sum\nolimits_{k = 1}^{p} {\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{t = 1}^{l} {\left( {\left( {\left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} - \sum\nolimits_{q = 1}^{p} {\omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{L} } } \right)^{2} + \left( {\left( {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} - \sum\nolimits_{q = 1}^{p} {\omega_{q} \left( {\left( {\tilde{h}_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } \right)^{U} } } \right)^{2} } \right)} } } }},\quad j = 1,2, \ldots ,n$$
(40)
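
The closed-form solution (40) amounts to computing, for each criterion, the total squared deviation of the individual interval bounds from the \(\omega\)-weighted group bounds and then normalizing the reciprocals of these totals. A sketch under the same illustrative \((p, m, n, l, 2)\) array layout as above (our own convention):

```python
import numpy as np

def criteria_weights(H, omega):
    """H: (p, m, n, l, 2) normalized IVHF matrices; omega: (p,) expert weights.
    Returns w of Eq. (40); assumes no criterion has zero total deviation."""
    group = np.einsum('q,qijtb->ijtb', omega, H)   # omega-weighted group intervals
    dev = (H - group[None, ...]) ** 2              # squared bound-wise deviations
    D = dev.sum(axis=(0, 1, 3, 4))                 # per-criterion totals, shape (n,)
    return (1.0 / D) / np.sum(1.0 / D)             # reciprocal normalization, Eq. (40)

# Minimal usage with toy data (p = 3, m = 2, n = 4, l = 2):
rng = np.random.default_rng(1)
low = rng.uniform(0.1, 0.5, size=(3, 2, 4, 2))
H = np.stack([low, low + 0.2], axis=-1)
omega = np.array([0.4, 0.35, 0.25])
w = criteria_weights(H, omega)
print(w, w.sum())                                  # the weights sum to 1
```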

On the basis of the above analysis, we next develop an approach to MCGDM problems with interval-valued hesitant fuzzy information, which consists of the following steps:

  1. Step 1

    For a MCGDM problem, the decision maker \(d_{k} \in D\) constructs the interval-valued hesitant fuzzy decision matrix \(\tilde{A}^{\left( k \right)} = \left( {\tilde{a}_{ij}^{\left( k \right)} } \right)_{m \times n}\), where \(\tilde{a}_{ij}^{\left( k \right)}\) is an IVHFE, given by the DM \(d_{k} \in D\), for the alternative \(x_{i} \in X\) with respect to the attribute \(c_{j} \in C\). Utilize Eq. (28) to transform the interval-valued hesitant fuzzy decision matrices \(\tilde{A}^{\left( k \right)} = \left( {\tilde{a}_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) into the normalized interval-valued hesitant fuzzy decision matrices \(\tilde{H}^{\left( k \right)} = \left( {\tilde{h}_{ij}^{\left( k \right)} } \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)).

  2. Step 2

    Utilize Eq. (34) to obtain decision makers’ weights \(\omega = \left( {\omega_{1} ,\omega_{2} , \ldots ,\omega_{p} } \right)^{T}\).

  3. Step 3

    Utilize Eq. (40) to obtain the weights of the criteria \(w = \left( {w_{1} ,w_{2} , \ldots ,w_{n} } \right)^{T}\).

  4. Step 4

    Utilize Eq. (35) to aggregate all the individual interval-valued hesitant fuzzy decision matrices \(\tilde{H}^{\left( k \right)} = \left( {\tilde{h}_{ij}^{\left( k \right)} } \right)_{m \times n} = \left( {\left\{ {\left. {\left( {\tilde{h}_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} } \right|t = 1,2, \ldots ,l} \right\}} \right)_{m \times n}\) (\(k = 1,2, \ldots ,p\)) into the collective interval-valued hesitant fuzzy decision matrix \(\tilde{H} = \left( {\tilde{h}_{ij} } \right)_{m \times n} = \left( {\left\{ {\left. {\tilde{h}_{ij}^{\sigma \left( t \right)} } \right|t = 1,2, \cdots ,l} \right\}} \right)_{m \times n}\).

  5. Step 5

    Utilize the additive weighting operator to get the collective overall IVHFE \(\tilde{h}_{i}\) corresponding to each alternative \(x_{i}\), where

    $$\begin{aligned} \tilde{h}_{i} &= \mathop \oplus \limits_{j = 1}^{n} w_{j} \tilde{h}_{ij} = \left\{ {\left. {\sum\limits_{j = 1}^{n} {w_{j} \tilde{h}_{ij}^{\sigma \left( t \right)} } } \right|t = 1,2, \ldots ,l} \right\} \\ &= \left\{ {\left. {\left[ {\sum\limits_{j = 1}^{n} {w_{j} \left( {\tilde{h}_{ij}^{\sigma \left( t \right)} } \right)^{L} } ,\sum\limits_{j = 1}^{n} {w_{j} \left( {\tilde{h}_{ij}^{\sigma \left( t \right)} } \right)^{U} } } \right]} \right|t = 1,2, \ldots ,l} \right\},\,\quad i = 1,2, \ldots ,m \end{aligned}$$
    (41)
  6. Step 6

    Calculate the scores \(s\left( {\tilde{h}_{i} } \right)\) (\(i = 1,2, \ldots ,m\)) of the overall IVHFEs \(\tilde{h}_{i}\) (\(i = 1,2, \ldots ,m\)), where

    $$\begin{aligned}s\left( {\tilde{h}_{i} } \right) &= \,\frac{{\sum\nolimits_{t = 1}^{l} {\sum\nolimits_{j = 1}^{n} {w_{j} \tilde{h}_{ij}^{\sigma \left( t \right)} } } }}{l} \\ &= \left[ {\frac{{\sum\nolimits_{t = 1}^{l} {\sum\nolimits_{j = 1}^{n} {w_{j} \left( {\tilde{h}_{ij}^{\sigma \left( t \right)} } \right)^{L} } } }}{l},\frac{{\sum\nolimits_{t = 1}^{l} {\sum\nolimits_{j = 1}^{n} {w_{j} \left( {\tilde{h}_{ij}^{\sigma \left( t \right)} } \right)^{U} } } }}{l}} \right] \end{aligned}$$
    (42)

    To rank these scores \(s\left( {\tilde{h}_{i} } \right)\) (\(i = 1,2, \ldots ,m\)), based on Definition 2.5, we first develop a complementary matrix as

    $$P = \left[ {\begin{array}{*{20}c} {p_{11} } & {p_{12} } & \cdots & {p_{1m} } \\ {p_{21} } & {p_{22} } & \cdots & {p_{2m} } \\ \vdots & \vdots & \vdots & \vdots \\ {p_{m1} } & {p_{m2} } & \cdots & {p_{mm} } \\ \end{array} } \right]_{m \times m}$$

    where \(p_{ij} = p\left( {s\left( {\tilde{h}_{i} } \right) \ge s\left( {\tilde{h}_{j} } \right)} \right)\), \(p_{ij} \ge 0\), \(p_{ij} + p_{ji} = 1\), \(p_{ii} = \frac{1}{2}\), \(i,j = 1,2, \ldots ,m\).

    Summing all elements in each line of the matrix P, we have

    $$p_{i} = \sum\limits_{j = 1}^{m} {p_{ij} } ,\quad i = 1,2, \ldots ,m$$

    Then we can rank the \(s\left( {\tilde{h}_{i} } \right)\) (\(i = 1,2, \ldots ,m\)) in descending order according to the values of \(p_{i}\) (\(i = 1,2, \ldots ,m\)); a small computational sketch of this ranking step is given after the step list below.

  7. Step 7

    Rank the alternatives \(x_{i}\) (\(i = 1,2, \ldots ,m\)) in accordance with the ranking of the scores \(s\left( {\tilde{h}_{i} } \right)\) (\(i = 1,2, \ldots ,m\)), and then select the optimal one.

  8. Step 8

    End.
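
As referenced in Step 6, the sketch below reproduces the interval-score ranking numerically. Since Definition 2.5 is not restated in this section, it assumes the commonly used possibility-degree formula \(p(a \ge b) = \min \{ \max \{ (a^{U} - b^{L})/((a^{U} - a^{L}) + (b^{U} - b^{L})),0\} ,1\}\), which reproduces the complementary matrix reported later in Example 5.2; the function names and data handling are our own, and the interval scores of Example 5.2 are used only for concreteness.

```python
import numpy as np

def possibility(a, b):
    """Possibility degree p(a >= b) for intervals a = [aL, aU], b = [bL, bU];
    a commonly used formula, assumed here in place of Definition 2.5."""
    aL, aU = a
    bL, bU = b
    width = (aU - aL) + (bU - bL)
    if width == 0:                                 # two degenerate intervals
        return 0.5 if aL == bL else float(aL > bL)
    return min(max((aU - bL) / width, 0.0), 1.0)

def rank_interval_scores(scores):
    """scores: (m, 2) array of interval scores s(h_i) = [lower, upper].
    Builds the complementary matrix P and ranks the alternatives by the
    row sums p_i."""
    m = len(scores)
    P = np.array([[possibility(scores[i], scores[j]) for j in range(m)]
                  for i in range(m)])
    p = P.sum(axis=1)
    return np.argsort(-p), P, p

# Interval scores of Eq. (42) as they will appear in Example 5.2:
s = np.array([[0.3892, 0.5124], [0.4347, 0.5518], [0.3983, 0.5185],
              [0.3635, 0.4846], [0.4054, 0.5225]])
order, P, p = rank_interval_scores(s)
print([f"x{i + 1}" for i in order])   # x2 > x5 > x3 > x1 > x4
```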

5 Illustrative Examples

In this section, two numerical examples are used to illustrate the applicability and effectiveness of our methods under hesitant fuzzy and interval-valued hesitant fuzzy environments.

Example 5.1

Let us suppose that an investment company wants to invest a sum of money in the best option (adapted from [7, 25]). There is a panel with five possible alternatives in which to invest the money: (1) \(x_{1}\) is a car industry; (2) \(x_{2}\) is a food company; (3) \(x_{3}\) is a computer company; (4) \(x_{4}\) is an arms company; and (5) \(x_{5}\) is a TV company. The investment company must make a decision according to the following four criteria (whose weight vector \(w = \left( {w_{1} ,w_{2} ,w_{3} ,w_{4} } \right)^{T}\) is to be determined): (1) \(c_{1}\) is the risk analysis; (2) \(c_{2}\) is the growth analysis; (3) \(c_{3}\) is the social–political impact analysis; and (4) \(c_{4}\) is the environmental impact analysis. Suppose that the five alternatives \(x_{i}\) (\(i = 1,2,3,4,5\)) are to be evaluated by three decision makers \(d_{k}\) (\(k = 1,2,3\)) (whose weight vector \(\omega = \left( {\omega_{1} ,\omega_{2} ,\omega_{3} } \right)^{T}\) is to be determined) under the above four criteria \(c_{j}\) (\(j = 1,2,3,4\)). The decision makers construct, respectively, three hesitant fuzzy decision matrices \(A^{\left( k \right)} = \left( {a_{ij}^{\left( k \right)} } \right)_{5 \times 4}\) (\(k = 1,2,3\)), listed in Tables 1, 2, and 3, where \(a_{ij}^{\left( k \right)}\) is a HFE denoting all the possible values, given by the decision maker \(d_{k}\), for the alternative \(x_{i}\) under the criterion \(c_{j}\).

Table 1 Hesitant fuzzy decision matrix \(A^{(1)}\) provided by the decision maker \(d_{1}\)
Table 2 Hesitant fuzzy decision matrix \(A^{(2)}\) provided by the decision maker \(d_{2}\)
Table 3 Hesitant fuzzy decision matrix \(A^{(3)}\) provided by the decision maker \(d_{3}\)

In what follows, we utilize the developed method to find the best alternative(s).

  1. Step 1

    Considering that all the criteria \(c_{j}\) (\(j = 1,2,3,4\)) are the benefit type criteria, the hesitant fuzzy decision matrices \(A^{\left( k \right)} = \left( {a_{ij}^{\left( k \right)} } \right)_{5 \times 4}\) (\(k = 1,2,3\)) do not need normalization. Suppose that all the decision makers \(d_{k}\) (\(k = 1,2,3\)) are pessimistic; then we utilize Definition 2.3 to transform the hesitant fuzzy decision matrices \(A^{\left( k \right)} = \left( {a_{ij}^{\left( k \right)} } \right)_{5 \times 4}\) (\(k = 1,2,3\)) into the corresponding hesitant fuzzy decision matrices \(H^{\left( k \right)} = \left( {h_{ij}^{\left( k \right)} } \right)_{5 \times 4}\) (\(k = 1,2,3\)) (see Tables 4, 5, 6), such that \(l_{{h_{ij}^{\left( k \right)} }} = 5\) for all \(i = 1,2,3,4,5\), \(j = 1,2,3,4\), and \(k = 1,2,3\).

    Table 4 Hesitant fuzzy decision matrix \(H^{(1)}\)
    Table 5 Hesitant fuzzy decision matrix \(H^{(2)}\)
    Table 6 Hesitant fuzzy decision matrix \(H^{(3)}\)
  2. Step 2

    Utilize Eq. (9) to get the decision makers’ weight vector:

    $$\omega = \left( {0.3447,0.3263,0.3290} \right)$$
  3. Step 3

    Utilize Eq. (25) to get the optimal weight vector of criteria:

    $$w = \left( {0.3463,0.2236,0.1789,0.2511} \right)^{T}$$
  4. Step 4

    Utilize Eq. (15) to aggregate all the individual hesitant fuzzy decision matrices \(H^{\left( k \right)}\) (\(k = 1,2,3\)) into the collective hesitant fuzzy decision matrix H (see Table 7).

    Table 7 Collective hesitant fuzzy decision matrix H
  5. Step 5

    Utilize Eq. (26) to get the collective overall HFE \(h_{i}\) corresponding to each alternative \(x_{i}\) as follows:

    $$h_{1} = \left\{ {0.6978,0.5776,0.4909,0.4457,0.4256} \right\}$$
    $$h_{2} = \left\{ {0.6479,0.5261,0.4423,0.3786,0.3352} \right\}$$
    $$h_{3} = \left\{ {0.6849,0.5431,0.4157,0.3834,0.3775} \right\}$$
    $$h_{4} = \left\{ {0.7501,0.5930,0.5284,0.5140,0.5057} \right\}$$
    $$h_{5} = \left\{ {0.7106,0.5801,0.4955,0.4693,0.4634} \right\}$$
  6. Step 6

    Utilize Eq. (27) to calculate the scores \(s\left( {h_{i} } \right)\) (\(i = 1,2,3,4,5\)) of the overall HFEs \(h_{i}\) (\(i = 1,2,3,4,5\)) as

    $$s\left( {h_{1} } \right) = 0.5275,\quad \,s\left( {h_{2} } \right) = 0.4660,\,\quad s\left( {h_{3} } \right) = 0.4809,\,\quad s\left( {h_{4} } \right) = 0.5782,\,\quad s\left( {h_{5} } \right) = 0.5438$$
  7. Step 7

    According to Definition 2.2, the ranking of the alternatives \(x_{i}\) (\(i = 1,2,3,4,5\)) is as follows:

    $$x_{4} > x_{5} > x_{1} > x_{3} > x_{2}$$

    and the optimal one is \(x_{4}\).

In order to clearly demonstrate the advantages of the developed methods, we use the hesitant fuzzy weighted averaging (HFWA) operator-based MCGDM method [24] to revisit Example 5.1, which includes the following steps:

  1. Step 1

    Utilize the HFWA operator [24]:

    $${\text{HFWA}}\left( {h_{ij}^{\left( 1 \right)} ,h_{ij}^{\left( 2 \right)} ,h_{ij}^{\left( 3 \right)} } \right) = \mathop \oplus \limits_{k = 1}^{3} \left( {\omega_{k} h_{ij}^{\left( k \right)} } \right) = \bigcup\nolimits_{{t_{1} = 1,2, \ldots ,l_{{h_{ij}^{\left( 1 \right)} }} ,t_{2} = 1,2, \ldots ,l_{{h_{ij}^{\left( 2 \right)} }} ,t_{3} = 1,2, \ldots ,l_{{h_{ij}^{\left( 3 \right)} }} }} {\left\{ {1 - \prod\limits_{k = 1}^{3} {\left( {1 - \left( {h_{ij}^{\left( k \right)} } \right)^{{t_{k} }} } \right)^{{\omega_{k} }} } } \right\}} \quad i = 1,2,3,4,5,\,j = 1,2,3,4$$

    to aggregate all the individual hesitant fuzzy decision matrices \(H^{\left( k \right)} = \left( {h_{ij}^{\left( k \right)} } \right)_{5 \times 4}\) (\(k = 1,2,3\)) into the collective hesitant fuzzy decision matrix \(H = \left( {h_{ij} } \right)_{5 \times 4}\), which is not listed here because of space limitations. To be consistent with Example 5.1, the same decision makers' weights as obtained there, i.e., \(\omega_{1} = 0.3447\), \(\omega_{2} = 0.3263\), and \(\omega_{3} = 0.3290\), are adopted here. Let \(L = \left( {l_{{h_{ij} }} } \right)_{5 \times 4}\), where \(l_{{h_{ij} }}\) is the dimension of the collective hesitant fuzzy element \(h_{ij}\); a small sketch contrasting this dimension growth with the fixed-length aggregation used in our method is given after this list.

    $$L = \left( {l_{{h_{ij} }} } \right)_{5 \times 4} = \left( {\begin{array}{*{20}c} {45} & {12} & {12} & {60} \\ {75} & {24} & {12} & {40} \\ {36} & {32} & {45} & {18} \\ {12} & {12} & {36} & {20} \\ {12} & {24} & {45} & {27} \\ \end{array} } \right)$$
  2. Step 2

    Utilize the HFWA operator [24]:

    $${\text{HFWA}}\left( {h_{i1} ,h_{i2} ,h_{i3} ,h_{i4} } \right) = \mathop \oplus \limits_{j = 1}^{4} \left( {w_{j} h_{ij} } \right) = \bigcup\nolimits_{{t_{1} = 1,2, \ldots ,l_{{h_{i1} }} ,t_{2} = 1,2, \ldots ,l_{{h_{i2} }} ,t_{3} = 1,2, \ldots ,l_{{h_{i3} }} ,t_{4} = 1,2, \ldots ,l_{{h_{i4} }} }} {\left\{ {1 - \prod\limits_{j = 1}^{4} {\left( {1 - \left( {h_{ij} } \right)^{{t_{j} }} } \right)^{{w_{j} }} } } \right\}} ,\quad i = 1,2,3,4,5$$

    to aggregate all the preference values \(h_{ij}\) (\(j = 1,2,3,4\)) in the ith line of H, and then derive the collective overall preference value \(h_{i}\) (\(i = 1,2,3,4,5\)) of the alternative \(x_{i}\) (\(i = 1,2,3,4,5\)). To be consistent with Example 5.1, the criteria weights \(w_{1} = 0.2694\), \(w_{2} = 0.2850\), \(w_{3} = 0.2694\), and \(w_{4} = 0.1762\) are adopted here. The collective overall preference values are not listed here because of space limitations. The dimensions of the collective overall preference values \(h_{i}\) (\(i = 1,2,3,4,5\)) are shown below:

    $$l_{{h_{1} }} = 388800,\,\quad l_{{h_{2} }} = 864000,\,\quad l_{{h_{3} }} = 933120,\,\quad l_{{h_{4} }} = 103680,\,\quad l_{{h_{5} }} = 349920$$
  3. Step 3

    According to Definition 2.2, we calculate the score values \(s\left( {h_{i} } \right)\) (\(i = 1,2,3,4,5\)) of \(h_{i}\) (\(i = 1,2,3,4,5\)):

    $$s\left( {h_{1} } \right) = 0.6125,\,\quad s\left( {h_{2} } \right) = 0.5481,\,\quad s\left( {h_{3} } \right) = 0.6000,\,\quad s\left( {h_{4} } \right) = 0.6709,\,\quad s\left( {h_{5} } \right) = 0.6364$$
  4. Step 4

    Get the priority of the alternatives \(x_{i}\) (\(i = 1,2,3,4,5\)) by ranking \(s\left( {h_{i} } \right)\) (\(i = 1,2,3,4,5\)) as follows: \(x_{4} > x_{5} > x_{1} > x_{3} > x_{2}\). Thus, the best alternative is \(x_{4}\).
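
As referenced in Step 1 of this comparison, the dimension of each HFE aggregated by the HFWA operator equals the product of the input dimensions, whereas the additive weighting of Eq. (26) keeps the common length \(l\). The toy sketch below (our own illustration; the values are not taken from the example) contrasts the two behaviors for a single matrix entry.

```python
from itertools import product

def hfwa(hfes, weights):
    """HFWA aggregation: the union over all combinations of the experts' values
    of 1 - prod_k (1 - gamma_k)^(omega_k), so the output dimension is the
    product of the input dimensions (duplicates kept, as in the definition)."""
    result = []
    for combo in product(*hfes):
        value = 1.0
        for gamma, weight in zip(combo, weights):
            value *= (1.0 - gamma) ** weight
        result.append(1.0 - value)
    return result

# Three experts' HFEs for one (alternative, criterion) entry (toy values):
h1, h2, h3 = [0.6, 0.5, 0.4], [0.7, 0.5], [0.8, 0.6, 0.3]
omega = [0.3447, 0.3263, 0.3290]
print(len(hfwa([h1, h2, h3], omega)))   # 3 * 2 * 3 = 18 values for one entry

# The additive weighting used in this paper first extends every HFE to the
# common length l (here l = 3, repeating the smallest value under a
# pessimistic rule) and then averages position by position:
h2_ext = [0.7, 0.5, 0.5]
collective = [sum(weight * gamma for weight, gamma in zip(omega, gammas))
              for gammas in zip(h1, h2_ext, h3)]
print(len(collective))                  # the dimension stays at l = 3
```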

It is easy to see that the ranking order of the alternatives obtained by Xia and Xu's method [24] is the same as that obtained by our method, which shows the effectiveness and reasonableness of our method. However, the dimension \(l_{{h_{i} }}\) of the collective overall preference value \(h_{i}\) obtained with Xia and Xu's method is very large, which increases the computational complexity: their method requires 2640124 operations, whereas our method requires only 197 and thus has a much lower computational complexity, i.e., it is more computationally efficient. Furthermore, using MATLAB 7.0, it takes more than 2 h to obtain the optimal alternative with Xia and Xu's method, while our method takes less than 1 s. Thus, our method is considerably more time-saving and convenient. Based on the above analysis, compared with the other methods, our method not only is more appropriate for handling hesitant fuzzy MCGDM problems, but also reduces the computational complexity and the information loss.

To show the advantages of the proposed criteria weight generation process, we next compare the proposed method with the other criteria weight generation methods mentioned in the existing references [8, 11, 29].

The existing criteria weight generation methods in [8, 11, 29] only investigated multi-criteria single person decision making with hesitant fuzzy information and did not consider multi-criteria group decision making (MCGDM) with hesitant fuzzy information. Therefore, to facilitate the comparison, we first use Eq. (15) to aggregate all the individual hesitant fuzzy decision matrices \(H^{\left( k \right)} = \left( {h_{ij}^{\left( k \right)} } \right)_{5 \times 4}\) (\(k = 1,2,3\)) into the collective hesitant fuzzy decision matrix \(H = \left( {h_{ij} } \right)_{5 \times 4}\), which has been shown in Table 7. Then, we employ the existing criteria weight generation methods in [8, 11, 29] to derive the optimal weight vector of criteria. Finally, we calculate the group consensus measures by the following formula:

$${\text{GCM}} = 1 - \sum\limits_{k = 1}^{p} {\sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{n} {\sum\limits_{t = 1}^{l} {\left( {\left( {h_{ij}^{\left( k \right)} } \right)^{\sigma \left( t \right)} - \sum\limits_{q = 1}^{p} {\omega_{q} \left( {h_{ij}^{\left( q \right)} } \right)^{\sigma \left( t \right)} } } \right)^{2} w_{j}^{2} } } } }$$
(43)
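
Eq. (43) is straightforward to evaluate once the normalized individual matrices and the two weight vectors are available. A sketch under the same illustrative \((p, m, n, l)\) array layout as before (our own convention):

```python
import numpy as np

def group_consensus_measure(H, omega, w):
    """Eq. (43). H: (p, m, n, l) array of hesitant fuzzy values; omega: (p,)
    decision makers' weights; w: (n,) criteria weights."""
    group = np.einsum('q,qijt->ijt', omega, H)       # omega-weighted group values
    dev = (H - group[None, ...]) ** 2                # squared deviations
    weighted = dev * (w[None, None, :, None] ** 2)   # multiply by w_j squared
    return 1.0 - float(weighted.sum())

# Toy check with p = 2, m = 1, n = 2, l = 2:
H = np.array([[[[0.6, 0.4], [0.7, 0.5]]],
              [[[0.5, 0.3], [0.8, 0.6]]]])
print(group_consensus_measure(H, np.array([0.5, 0.5]), np.array([0.6, 0.4])))
```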

The calculated results are summarized in Table 8.

Table 8 The criteria weights and group consensus measures with respect to different criteria weight generation methods

According to Eq. (43), the group consensus measure can be understood as the closeness between the individual preferences and the collective one; it measures the degree of agreement among the decision makers on the solution of the problem. The larger the value of GCM, the closer the individual opinions are to the group opinion, and the more desirable the decision result. In a practical MCGDM problem with hesitant fuzzy information, because the experts have their own inherent value systems and considerations, disagreement among the experts is inevitable. In such a case, consensus turns out to be very important in group decision making because it makes it possible for a group to reach a final decision that all group members can support despite their differing opinions. It is preferable that the experts achieve a high level of consensus concerning their preferences before the selection process is applied. The existing criteria weight generation methods in [8, 11, 29] did not consider any consensus issue. In contrast, our proposed method determines the optimal weight vector of criteria based on the idea of maximizing the group decision consensus. From Table 8, we can see that, among the four criteria weight generation methods, the method proposed in this paper produces the largest group consensus measure. Therefore, from the point of view of group decision consensus, our criteria weight generation method is more efficient and useful than the other three in [8, 11, 29], and the decision result obtained with our method is more reasonable and convincing than those obtained with the other three methods.

Example 5.2

In Example 5.1, suppose that the decision makers construct, respectively, three interval-valued hesitant fuzzy decision matrices \(\tilde{A}^{\left( k \right)} = \left( {\tilde{a}_{ij}^{\left( k \right)} } \right)_{5 \times 4}\) (\(k = 1,2,3\)), listed in Tables 9, 10, and 11, where \(\tilde{a}_{ij}^{\left( k \right)}\) is an IVHFE denoting all the possible interval values, given by the decision maker \(d_{k}\), for the alternative \(x_{i}\) under the criterion \(c_{j}\). In what follows, we utilize the developed method to find the optimal alternative(s), which consists of the following steps:

Table 9 Interval-valued hesitant fuzzy decision matrix \(\tilde{A}^{(1)}\) provided by the decision maker \(d_{1}\)
Table 10 Interval-valued hesitant fuzzy decision matrix \(\tilde{A}^{(2)}\) provided by the decision maker \(d_{2}\)
Table 11 Interval-valued hesitant fuzzy decision matrix \(\tilde{A}^{(3)}\) provided by the decision maker \(d_{3}\)
  1. Step 1

    Considering that all the criteria \(c_{j}\) (\(j = 1,2,3,4\)) are the benefit type criteria, the interval-valued hesitant fuzzy decision matrices \(\tilde{A}^{\left( k \right)} = \left( {\tilde{a}_{ij}^{\left( k \right)} } \right)_{5 \times 4}\) (\(k = 1,2,3\)) do not need normalization. Suppose that all the decision makers \(d_{k}\) (\(k = 1,2,3\)) are pessimistic; then we utilize Definition 2.7 to transform the interval-valued hesitant fuzzy decision matrices \(\tilde{A}^{\left( k \right)} = \left( {\tilde{a}_{ij}^{\left( k \right)} } \right)_{5 \times 4}\) (\(k = 1,2,3\)) into the corresponding interval-valued hesitant fuzzy decision matrices \(\tilde{H}^{\left( k \right)} = \left( {\tilde{h}_{ij}^{\left( k \right)} } \right)_{5 \times 4}\) (\(k = 1,2,3\)) (see Tables 12, 13, 14), such that \(l_{{\tilde{h}_{ij}^{\left( k \right)} }} = 5\) for all \(i = 1,2,3,4,5\), \(j = 1,2,3,4\), and \(k = 1,2,3\).

    Table 12 Interval-valued hesitant fuzzy decision matrix \(\tilde{H}^{(1)}\)
    Table 13 Interval-valued hesitant fuzzy decision matrix \(\tilde{H}^{(2)}\)
    Table 14 Interval-valued hesitant fuzzy decision matrix \(\tilde{H}^{(3)}\)
  2. Step 2

    Utilize Eq. (34) to get the weights of the decision makers:

    $$\omega = \left( {0.3316,0.3531,0.3153} \right)$$
  3. Step 3

    Utilize Eq. (40) to get the optimal weight vector of criteria:

    $$w = \left( {0.3011,0.1610,0.1875,0.3504} \right)^{T}$$
  4. Step 4

    Utilize Eq. (35) to aggregate all the individual interval-valued hesitant fuzzy decision matrices \(\tilde{H}^{\left( k \right)}\) (\(k = 1,2,3\)) into the collective interval-valued hesitant fuzzy decision matrix \(\tilde{H}\), which is shown in Table 15.

    Table 15 Collective interval-valued hesitant fuzzy decision matrix \({\tilde{H}}\)
  5. Step 5

    Utilize Eq. (41) to get the collective overall IVHFE \(\tilde{h}_{i}\) corresponding to each alternative \(x_{i}\) as follows:

    $$\tilde{h}_{1} = \left\{ {\left[ {0.5729,0.7446} \right],\left[ {0.4524,0.5791} \right],\left[ {0.3585,0.4635} \right],\left[ {0.2984,0.3984} \right],\left[ {0.2639,0.3763} \right]} \right\}$$
    $$\tilde{h}_{2} = \left\{ {\left[ {0.5786,0.7202} \right],\left[ {0.4309,0.5375} \right],\left[ {0.4013,0.5136} \right],\left[ {0.3814,0.4938} \right],\left[ {0.3814,0.4938} \right]} \right\}$$
    $$\tilde{h}_{3} = \left\{ {\left[ {0.5697,0.7229} \right],\left[ {0.4510,0.5659} \right],\left[ {0.3421,0.4531} \right],\left[ {0.3200,0.4309} \right],\left[ {0.3086,0.4196} \right]} \right\}$$
    $$\tilde{h}_{4} = \left\{ {\left[ {0.5676,0.6981} \right],\left[ {0.4132,0.5319} \right],\left[ {0.3311,0.4424} \right],\left[ {0.2694,0.3920} \right],\left[ {0.2363,0.3589} \right]} \right\}$$
    $$\tilde{h}_{5} = \left\{ {\left[ {0.5963,0.7429} \right],\left[ {0.4744,0.5934} \right],\left[ {0.3554,0.4620} \right],\left[ {0.3231,0.4298} \right],\left[ {0.2776,0.3843} \right]} \right\}$$
  6. Step 6

    Utilize Eq. (42) to calculate the scores \(s\left( {\tilde{h}_{i} } \right)\) (\(i = 1,2,3,4,5\)) of the overall IVHFEs \(\tilde{h}_{i}\) (\(i = 1,2,3,4,5\)) as shown below:

    $$s\left( {\tilde{h}_{1} } \right) = \left[ {0.3892,0.5124} \right],\,\quad s\left( {\tilde{h}_{2} } \right) = \left[ {0.4347,0.5518} \right],\,\quad s\left( {\tilde{h}_{3} } \right) = \left[ {0.3983,0.5185} \right],\quad \,s\left( {\tilde{h}_{4} } \right) = \left[ {0.3635,0.4846} \right],\,\quad s\left( {\tilde{h}_{5} } \right) = \left[ {0.4054,0.5225} \right]$$

    To rank these scores \(s\left( {\tilde{h}_{i} } \right)\) (\(i = 1,2,3,4,5\)), based on Definition 2.5, we first develop a complementary matrix as

    $$P = \left[ {\begin{array}{*{20}c} {0.5000} & {0.3234} & {0.4689} & {0.6095} & {0.4454} \\ {0.6766} & {0.5000} & {0.6470} & {0.7904} & {0.6252} \\ {0.5311} & {0.3530} & {0.5000} & {0.6422} & {0.4766} \\ {0.3905} & {0.2096} & {0.3578} & {0.5000} & {0.3327} \\ {0.5546} & {0.3748} & {0.5234} & {0.6673} & {0.5000} \\ \end{array} } \right]_{5 \times 5}$$

    where \(p_{ij} = p\left( {s\left( {\tilde{h}_{i} } \right) \ge s\left( {\tilde{h}_{j} } \right)} \right)\), \(i,j = 1,2, \cdots ,5\).

    Summing all elements in each line of the matrix P, we have

    $$p_{1} = 2.3471,\,\quad p_{2} = 3.2392,\,\quad p_{3} = 2.5029,\,\quad p_{4} = 1.7907,\,\quad p_{5} = 2.6201$$

    Then we can rank the \(s\left( {\tilde{h}_{i} } \right)\) (\(i = 1,2,3,4,5\)) in descending order according to the values of \(p_{i}\) (\(i = 1,2,3,4,5\)):

    $$s\left( {\tilde{h}_{2} } \right) > s\left( {\tilde{h}_{5} } \right) > s\left( {\tilde{h}_{3} } \right) > s\left( {\tilde{h}_{1} } \right) > s\left( {\tilde{h}_{4} } \right)$$
  7. Step 7

    Rank the alternatives x i (\(i = 1,2, \ldots ,m\)) in accordance with the ranking of the scores \(s\left( {\tilde{h}_{i} } \right)\) (\(i = 1,2, \ldots ,m\)):

    $$x_{2} > x_{5} > x_{3} > x_{1} > x_{4}$$

    thus \(x_{2}\) is the best alternative.

6 Conclusions

In this paper, we have developed two nonlinear optimization models for dealing with MCGDM problems in which the criterion values are expressed in HFEs, and the weight information about both the decision makers and the criteria is unknown. First, we have minimized the divergence among the individual hesitant fuzzy decision matrices to establish a nonlinear optimization model for determining the optimal weights of decision makers under hesitant fuzzy situations. It has been shown that the solution to this model can be derived from an exact formula. Then, another nonlinear optimization model has been developed to obtain the weights of criteria from the viewpoint of maximizing group consensus on the basis of all the individual hesitant fuzzy decision matrices, whose solution can also be derived from a simple formula. Moreover, a simple additive weighting operator has been utilized to aggregate all the hesitant fuzzy criterion values corresponding to each alternative, and then the score function has been utilized to rank the given alternatives and to select the best alternative. Finally, we have extended all the above results to interval-valued hesitant fuzzy environments, and have applied the developed models to an investment decision problem. A comparison analysis has shown that the developed methods have some prominent advantages over the other hesitant fuzzy or interval-valued hesitant fuzzy MCGDM methods.