1 Introduction

Zadeh introduced fuzzy sets (FSs) [38], which have the capability to model nonstatistical imprecision and vague concepts. Since then, several extensions have been developed, including interval-valued fuzzy sets (IVFSs) [39], intuitionistic fuzzy sets (IFSs) [1, 2, 4, 11, 13, 14, 26], interval-valued intuitionistic fuzzy sets (IVIFSs) [5, 12, 25], type-2 fuzzy sets (T2FSs) [39], fuzzy multisets (FMSs) [16, 17] and hesitant fuzzy sets (HFSs) [20, 30]. Zhu et al. [40] studied the interrelationships among these fuzzy sets, developed the dual hesitant fuzzy set (DHFS), and investigated the basic operations and properties of DHFSs [41]. They also applied DHFSs to group forecasting [40]. As a more comprehensive fuzzy set, DHFSs encompass FSs, IFSs, HFSs and FMSs as special cases under certain conditions; for example, when the membership and nonmembership sets of a DHFS are nonempty closed intervals, the DHFS reduces to an IVFS or an IVIFS. DHFSs have therefore drawn more and more scholars’ attention. For clustering DHFSs, Wang et al. [23] defined correlation measures for dual hesitant fuzzy information and discussed their properties in detail. Farhadinia [9] proposed an approach for deriving the correlation coefficient of DHFSs. To solve multi-attribute decision-making problems under dual hesitant fuzzy environments, Ye [35] proposed a correlation coefficient between DHFSs as an extension of the existing correlation coefficients for hesitant fuzzy sets [31] and intuitionistic fuzzy sets [36], and Chen et al. [8] proposed correlation-coefficient-based approaches to multi-attribute decision-making with dual hesitant fuzzy information. Wang et al. [22] developed aggregation operators for dual hesitant fuzzy information and used them in multi-attribute decision-making. Wang et al. 
[21] developed some generalized dual hesitant fuzzy aggregation operators, which encompass some existing operators as particular cases, and applied them to multi-attribute decision-making problems under dual hesitant fuzzy environments. Dual hesitant fuzzy information has also found many applications in real life. Li [15] developed a model for evaluating clothing creative design with dual hesitant fuzzy information. Yu and Li [37] proposed a method for multi-criteria decision-making under dual hesitant fuzzy environments and applied it to teaching quality assessment. Xu [29] devised an application model for evaluating mechanical product design quality with dual hesitant fuzzy information. Zhang [43] developed a model for evaluating computer network security with dual hesitant fuzzy information. Beyond decision-making problems, various soft computing techniques have been applied in different engineering fields [6, 7, 19, 24, 27, 42].

From the analysis above, we can see that the DHFS is a very useful tool for dealing with uncertainty, and many methods have been developed for multi-attribute decision-making problems with dual hesitant fuzzy information. These methods, however, assume that all attributes are at the same priority level and that a low satisfaction under one attribute can be fully compensated by satisfactions under other attributes. In many real applications there exists a prioritization relationship over the attributes, and such compensation between attributes should not be allowed. The existing methods cannot deal with this situation. In this paper, we develop a method for solving prioritized multi-attribute decision-making problems with dual hesitant fuzzy information.

Yager [32, 33] studied this kind of situation with crisp attribute values and proposed some prioritized aggregation operators. The core of Yager’s aggregation approach is the introduction of importance weights that enforce the prioritization between attributes; that is, the weights of the attributes are determined by the attribute satisfactions and the prioritization relationship. In this process, the weight of an attribute must be compared with its satisfaction, which is straightforward when the satisfactions are crisp values. In this paper, we adopt Yager’s idea in [32, 33] to aggregate dual hesitant fuzzy information under a prioritization relationship. The key problem is again to determine the weights of the attributes from the prioritization relationship and the attribute satisfactions, which are now represented by DHFEs instead of crisp values. Deriving the weights inevitably involves comparing DHFEs. Based on a score function and an accuracy function of a DHFE, Zhu et al. [40] proposed a comparison method. However, when two DHFEs have different deviations between the mean membership and the mean nonmembership values, only the score function is used and the accuracy function is not involved, so their hesitations are ignored in the ranking process. This may cause problems at times, especially when a DHFE contains a large hesitation degree, which signifies the decision-maker’s high uncertainty or risk level. Under such a condition, simply discarding the hesitations in DHFEs may yield risky or misleading recommendations. Therefore, it is necessary to propose more effective comparison methods for DHFEs to provide more reliable decision aid.

To refine the hesitations of intuitionistic fuzzy numbers, Huang and Li [10] proposed an objective approach to determining the parameter \( \alpha \) in the \( D_{\alpha } \) operator introduced by Atanassov [3], so that the hesitation in an intuitionistic fuzzy number can be further refined and characterized. Motivated by the idea in [10], we propose a correctional score function to compare DHFEs. The method can distinguish two DHFEs by considering their hesitation degrees, and it emphasizes the influence of the nonmembership degrees. Based on the new comparison method for DHFEs and the prioritization relationship, the weights of the attributes are derived for each alternative. The proposed dice similarity measure for DHFSs is then utilized to rank the alternatives.

The rest of the paper is organized as follows. Section 2 reviews some necessary concepts. In Sect. 3, a correctional score function of DHFEs is defined and the dice similarity measure is extended to DHFSs. In Sect. 4, a decision-making method is established by means of the correctional score function and the dice similarity measure of DHFSs. An example is presented to illustrate the developed approach in Sect. 5. Finally, concluding remarks are given in Sect. 6.

2 Preliminaries

In this section, some necessary concepts are reviewed including IFS, HFS, DHFS and the dice similarity measure.

2.1 Intuitionistic fuzzy set and hesitant fuzzy set

Definition 1

[38]. Let \( X \) be a universe of discourse. Then a fuzzy set is defined by

$$ A = \left\{ {\left\langle {x,\mu_{A} (x)} \right\rangle \left| {x \in X} \right.} \right\} $$
(1)

which is characterized by a membership function \( \mu_{A} :X \to [0,1] \), where \( \mu_{A} (x) \) denotes the degree of membership of the element \( x \in X \) to the set \( A \).

Atanassov [2] extended fuzzy set to intuitionistic fuzzy set (IFS), shown as follows.

Definition 2

[2]. An IFS \( F \) in a universe of discourse \( X \) is given by

$$ F = \left\{ {\left\langle {x,\mu_{F} (x),\nu_{F} (x)} \right\rangle \left| {x \in X} \right.} \right\}, $$
(2)

where \( 0 \le \mu_{F} (x) + \nu_{F} (x) \le 1 \), \( \forall x \in X \), \( \mu_{F} :X \to [0,1] \) and \( \nu_{F} :X \to [0,1] \). The numbers \( \mu_{F} (x) \) and \( \nu_{F} (x) \) represent, respectively, the membership degree and non-membership degree of the element \( x \in X \) to the set \( F \). The hesitation degree of the element \( x \) is \( \pi_{F} (x) = 1 - \mu_{F} (x) - \nu_{F} (x) \).

However, when giving the membership degree of an element, the difficulty in establishing it often arises not from a margin of error but from a set of possible values over which some possibility is distributed. For such cases, Torra [20] proposed the hesitant fuzzy set (HFS).

Definition 3

[20]. Given a fixed set \( X \), a HFS on \( X \) is defined in terms of a function that, when applied to \( X \), returns a subset of \( [0,1] \). A HFS \( E \) can be expressed in mathematical symbols as:

$$ E = \left\{ {\left\langle {x,h_{E} (x)} \right\rangle \left| {x \in X} \right.} \right\} $$
(3)

where \( h_{E} (x) \) is a set of some values in \( [0,1] \) and denotes the possible membership degree of the element \( x \in X \) to the set \( E \). For convenience, Xia and Xu [28] call \( h = h_{E} (x) \) a hesitant fuzzy element (HFE).

2.2 Dual hesitant fuzzy set

Zhu et al. [40] defined the dual hesitant fuzzy set (DHFS) in terms of two functions that return a set of membership values and a set of nonmembership values, respectively, for each element in the domain.

Definition 4

[40]. Let \( X \) be a fixed set, then a DHFS on \( X \) is described as:

$$ D = \left\{ {\left\langle {x,h(x),g(x)} \right\rangle \left| {x \in X} \right.} \right\} , $$
(4)

where \( h(x) \) and \( g(x) \) are two sets of some values in \( [0,1] \), denoting the possible membership degrees and non-membership degrees of the element \( x \in X \) to the set \( D \) respectively, with the conditions: \( 0 \le \gamma ,\;\eta \le 1,\;\;\;\;0 \le \gamma^{ + } + \eta^{ + } \le 1 \), where \( \gamma \in h(x) \), \( \eta \in g(x) \), \( \gamma^{ + } \in h^{ + } = \cup_{x \in X} \hbox{max} \left\{ {\gamma \left| {\gamma \in h(x)} \right.} \right\} \) and \( \eta^{ + } \in g^{ + } = \cup_{x \in X} \hbox{max} \left\{ {\eta \left| {\eta \in g(x)} \right.} \right\} \).

For convenience, the pair \( d(x) = (h(x),\;g(x)) \) is called a DHFE, denoted by \( d = (h,\;g) \), with the conditions: \( \gamma \in h \), \( \eta \in g \), \( \gamma^{ + } = \hbox{max} \left\{ {\gamma \left| {\gamma \in h} \right.} \right\} \), \( \eta^{ + } = \hbox{max} \left\{ {\eta |\eta \in g} \right\} \), \( 0 \le \gamma ,\eta \le 1,\;\;\;\;0 \le \gamma^{ + } + \eta^{ + } \le 1 \).

To compare DHFEs, Zhu et al. [40] gave the following comparison laws.

Definition 5

[40]. Let \( d_{i} = (h_{{d_{i} }} ,\;g_{{d_{i} }} )(i = 1,2) \) be any two DHFEs, \( s(d_{i} ) = \frac{1}{{\# h_{{d_{i} }} }}\sum\nolimits_{{\gamma \in h_{{d_{i} }} }} \gamma - \frac{1}{{\# g_{{d_{i} }} }}\sum\nolimits_{{\eta \in g_{{d_{i} }} }} \eta \) the score function of \( d_{i} \), and \( p(d_{i} ) = \frac{1}{{\# h_{{d_{i} }} }}\sum\nolimits_{{\gamma \in h_{{d_{i} }} }} \gamma + \frac{1}{{\# g_{{d_{i} }} }}\sum\nolimits_{{\eta \in g_{{d_{i} }} }} \eta \) the accuracy function of \( d_{i} \), where \( \# h_{{d_{i} }} \) and \( \# g_{{d_{i} }} \) are the numbers of the elements in \( h_{{d_{i} }} \) and \( g_{{d_{i} }} \) respectively. Then

  • If \( s(d_{1} ) > s(d_{2} ) \), then \( d_{1} \) is superior to \( d_{2} \), denoted by \( d_{1} \succ d_{2} \);

  • If \( s(d_{1} ) = s(d_{2} ) \), then if \( p(d_{1} ) = p(d_{2} ) \), \( d_{1} \) is equivalent to \( d_{2} \), denoted by \( d_{1} \sim d_{2} \). Otherwise, if \( p(d_{1} ) > p(d_{2} ) \), then \( d_{1} \) is superior to \( d_{2} \), denoted by \( d_{1} \succ d_{2} \).

Example 1

Let \( A = \{ (0.3,\;0.4),\;(0.1,\;0.2)\} \), \( B = \{ (0.5,\;0.6),\;(0.3,\;0.4)\} \) and \( C = \{ (0.6),\;(0.3,\;0.4)\} \) be three DHFEs. Ranking them by Definition 5, we obtain:

$$ s(A) = \frac{0.3 + 0.4}{2} - \frac{0.1 + 0.2}{2} = 0.2;\;s(B) = \frac{0.5 + 0.6}{2} - \frac{0.3 + 0.4}{2} = 0.2; $$

\( s(C) = 0.6 - \frac{0.3 + 0.4}{2} = 0.25 \). Since \( s(A) = s(B) < s(C) \), \( C \) is superior to both \( A \) and \( B \), and we need the accuracy function values to compare \( A \) and \( B \). Since \( p(A) = \frac{0.3 + 0.4}{2} + \frac{0.1 + 0.2}{2} = 0.5 \) and \( p(B) = \frac{0.5 + 0.6}{2} + \frac{0.3 + 0.4}{2} = 0.9 \), \( B \) is superior to \( A \). Finally, the ranking of the three DHFEs is \( C \succ B \succ A \).
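The comparison laws of Definition 5 can be sketched in a few lines of Python; the helper names `score` and `accuracy` are our own, and the data are those of Example 1.

```python
# Score and accuracy functions of a DHFE d = (h, g) per Definition 5,
# applied to the DHFEs of Example 1.

def score(h, g):
    """s(d): mean membership minus mean nonmembership."""
    return sum(h) / len(h) - sum(g) / len(g)

def accuracy(h, g):
    """p(d): mean membership plus mean nonmembership."""
    return sum(h) / len(h) + sum(g) / len(g)

A = ([0.3, 0.4], [0.1, 0.2])
B = ([0.5, 0.6], [0.3, 0.4])
C = ([0.6],      [0.3, 0.4])

# s(A) = s(B) = 0.2 < s(C) = 0.25, so C ranks first; the tie between
# A and B is broken by the accuracy function: p(A) = 0.5 < p(B) = 0.9.
print([round(score(*d), 4) for d in (A, B, C)])
print([round(accuracy(*d), 4) for d in (A, B)])
```

Running the sketch reproduces the ranking \( C \succ B \succ A \) of Example 1.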

The score function in Definition 5 is the deviation between the mean membership and the mean nonmembership. When two DHFEs, such as \( A \) and \( C \), have different deviations between the mean membership and the mean nonmembership values, their hesitations are not considered in the ranking process. This treatment may cause problems, especially when a DHFE contains a large hesitation, which signifies the decision-maker’s high uncertainty or risk level. In the next section, based on the \( D_{\alpha } \) operator [3] and the method for determining the parameter \( \alpha \) in [10], we propose a correctional score function that considers the hesitation degree of a DHFE and compares DHFEs more effectively.

2.3 A dice similarity measure

The dice similarity measure proposed in [34] can overcome some disadvantages of the cosine similarity measure [18]. Therefore, the dice similarity measure will be extended for DHFSs in the next section.

Definition 6

[34]. Let \( X = (x_{1} ,\;x_{2} , \ldots ,\;x_{n} ) \) and \( Y = (y_{1} ,\;y_{2} , \ldots ,\;y_{n} ) \) be two vectors of length \( n \) where all the coordinates are positive. Then the dice similarity measure is defined as follows:

$$ D = \frac{2X \cdot Y}{{\left\| X \right\|_{2}^{2} + \left\| Y \right\|_{2}^{2} }} = \frac{{2\sum\nolimits_{i = 1}^{n} {x_{i} y_{i} } }}{{\sum\nolimits_{i = 1}^{n} {x_{i}^{2} } + \sum\nolimits_{i = 1}^{n} {y_{i}^{2} } }}, $$
(5)

where \( X \cdot Y = \sum\nolimits_{i = 1}^{n} {x_{i} y_{i} } \) is the inner product of the vectors \( X \) and \( Y \), and \( \left\| X \right\|_{2} = \sqrt {\sum\nolimits_{i = 1}^{n} {x_{i}^{2} } } \) and \( \left\| Y \right\|_{2} = \sqrt {\sum\nolimits_{i = 1}^{n} {y_{i}^{2} } } \) are the Euclidean norms of \( X \) and \( Y \).

The dice similarity measure takes values in the interval \( [0,\;1] \). It is, however, undefined when \( x_{i} = y_{i} = 0\;(i = 1,\;2, \ldots ,n) \); in this case, we let the dice similarity measure be zero.
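A minimal sketch of Eq. (5), including the zero-vector convention just stated (the function name is our own):

```python
# Dice similarity measure between two real vectors, Eq. (5).

def dice(x, y):
    inner = sum(a * b for a, b in zip(x, y))
    denom = sum(a * a for a in x) + sum(b * b for b in y)
    if denom == 0:        # all x_i = y_i = 0: define the measure as zero
        return 0.0
    return 2 * inner / denom

print(dice([1, 2, 3], [1, 2, 3]))  # identical vectors give 1.0
print(dice([0, 0], [0, 0]))        # zero-vector convention gives 0.0
```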

3 A correctional score function and a dice similarity measure of DHFSs

In this section, we will propose a correctional score function of DHFE based on the \( D_{\alpha } \) operator [3] and the determination method of the parameter \( \alpha \) in [10], and then extend the dice similarity measure to DHFSs.

3.1 A correctional score function

To characterize the hesitation degree of a DHFE, a correctional score function is proposed in this section for comparing DHFEs. We consider the following two aspects of the comparison of DHFEs.

First, more attention should be paid to negative comments in decision-making problems. Take online shopping for instance: when customers find certain negative comments, most respond cautiously even if there are other, positive ones. In this paper, we regard the nonmembership degrees of DHFEs as negative comments and treat them as more important in the comparison of DHFEs.

Second, the hesitation degrees should also be considered in the ranking of DHFEs. In Example 1, the score function value of DHFE \( C \) is greater than those of \( A \) and \( B \), so \( C \) is superior to the other two DHFEs; in this comparison, the hesitation degrees of the DHFEs are not considered. To address this issue, we split the hesitation degree into two parts based on the \( D_{\alpha } \) operator [3], adding one part to the membership function and attributing the remainder to the nonmembership function.

In order to refine hesitations in IFSs, Atanassov [3] proposed a \( D_{\alpha } \) operator as follows.

Definition 7

[3]. Let \( \alpha \in \left[ {0,\;1} \right] \) be a fixed number, and \( X \) be a fixed set. For an IFS \( F = \left\{ {\left\langle {x,\mu_{F} (x),\nu_{F} (x)} \right\rangle \left| {x \in X} \right.} \right\} \), the operator \( D_{\alpha } \) is defined as: \( D_{\alpha } (F) = \left\{ {\left\langle {x,\mu_{F} (x) + \alpha \cdot \pi_{F} (x),\nu_{F} (x) + (1 - \alpha ) \cdot \pi_{F} (x)} \right\rangle \left| {x \in X} \right.} \right\} \), where \( \pi_{F} (x) \) is the hesitant degree of the element \( x \in X \) to the set \( F \).

Since \( \mu_{F} (x) + \alpha \cdot \pi_{F} (x) + \nu_{F} (x) + (1 - \alpha ) \cdot \pi_{F} (x) = \mu_{F} (x) + \nu_{F} (x) + \pi_{F} (x) = 1 \), it is apparent that \( D_{\alpha } \) effectively reduces an IFS \( F \) to a fuzzy set with a membership function \( \mu_{F} (x) + \alpha \cdot \pi_{F} (x) \).

The nature of the \( D_{\alpha } \) operator is to divide \( \pi_{F} (x) \) into two parts, attributing one part to the membership function and the remainder to the nonmembership function; \( \alpha \) is the key parameter that determines how much of the hesitation is attributed to each.

Huang and Li [10] gave an improved formula for determining the value of \( \alpha \),

$$ \alpha = \frac{1}{2} + \frac{{\mu_{F} (x) - \nu_{F} (x)}}{2} + \frac{{\mu_{F} (x) - \nu_{F} (x)}}{2}\pi_{F} (x) = \mu_{F} (x) + \frac{{\pi_{F} (x)}}{2} + \frac{{\mu_{F} (x) - \nu_{F} (x)}}{2}\pi_{F} (x). $$

Through extensive examples, Huang and Li [10] showed that the above formula takes both the score function (the deviation \( \mu_{F} (x) - \nu_{F} (x) \)) and the hesitation function into account, and yields an attribution rule consistent with a “following-the-herd” principle: when the number of “support” votes exceeds that of “opposition” (i.e., \( \mu_{F} (x) - \nu_{F} (x) > 0 \)), a larger share of the hesitation function \( \pi_{F} (x) \) is attributed to the membership function (a larger \( \alpha \)); when “opposition” outnumbers “support” (i.e., \( \mu_{F} (x) - \nu_{F} (x) < 0 \)), more of the hesitation function is attributed to the nonmembership function (a smaller \( \alpha \)). We use this idea to deal with the hesitations of DHFEs.
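A quick numerical check of this attribution rule, on an illustrative intuitionistic fuzzy value \( (\mu ,\nu ) \) of our own choosing:

```python
# Huang-Li formula for alpha on an intuitionistic fuzzy value (mu, nu),
# illustrating the "following-the-herd" behaviour.

def alpha(mu, nu):
    pi = 1 - mu - nu                              # hesitation degree
    return 0.5 + (mu - nu) / 2 + ((mu - nu) / 2) * pi

print(alpha(0.5, 0.2))  # support exceeds opposition -> alpha > 0.5
print(alpha(0.2, 0.5))  # opposition exceeds support -> alpha < 0.5
```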

Let \( d = (h,\;g) \) be a DHFE, where \( h \) and \( g \) are two sets of values in \( [0,\;1] \). We call \( \frac{1}{\# h}\sum\nolimits_{\gamma \in h} \gamma \) the mean membership value and \( \frac{1}{\# g}\sum\nolimits_{\gamma \in g} \gamma \) the mean nonmembership value of \( d \). The mean hesitation degree of \( d \) is \( 1 - \frac{1}{\# h}\sum\nolimits_{\gamma \in h} \gamma - \frac{1}{\# g}\sum\nolimits_{\gamma \in g} \gamma \), denoted by \( \pi \). The mean membership and mean nonmembership values then constitute an intuitionistic fuzzy value, the basic element of an IFS, so we adopt the idea of the \( D_{\alpha } \) operator and the method of determining the parameter \( \alpha \) in [10] to refine the hesitations in DHFSs while treating the nonmembership as more important. Based on the above analysis, a correctional score function of DHFEs is defined as follows.

Definition 8

Let \( d = (h,\;g) \) be a DHFE, \( h \) and \( g \) be two sets of some values in \( \left[ {0,1} \right] \), respectively. Then the correctional score function of \( d \) is proposed as follows:

$$ S_{{D_{\alpha } }} \left( d \right) = \frac{{1 + S_{h} \left( h \right) - S_{g} \left( g \right)}}{2}, $$
(6)

where \( S_{h} \left( h \right) = \frac{1}{\# h}\sum\nolimits_{\gamma \in h} \gamma + \alpha \pi \), \( S_{g} \left( g \right) = \frac{1}{\# g}\sum\nolimits_{\gamma \in g} \gamma + \left( {1 - \alpha } \right)\pi \), \( \pi = 1 - \frac{1}{\# h}\sum\nolimits_{\gamma \in h} \gamma - \frac{1}{\# g}\sum\nolimits_{\gamma \in g} \gamma \) and \( \alpha = \frac{1}{2} + \frac{{\frac{1}{\# h}\sum\nolimits_{\gamma \in h} \gamma - \frac{1}{\# g}\sum\nolimits_{\gamma \in g} \gamma }}{2} + \frac{{\frac{1}{\# h}\sum\nolimits_{\gamma \in h} \gamma - \frac{1}{\# g}\sum\nolimits_{\gamma \in g} \gamma }}{2}\pi \). \( S_{h} (h) \) and \( S_{g} (g) \) are called the correctional membership value and the correctional nonmembership value, respectively.

It is obvious that the correctional score function \( S_{{D_{\alpha } }} (d) \in [0,1] \). For two DHFEs \( d_{1} \) and \( d_{2} \), if \( S_{{D_{\alpha } }} (d_{1} ) > S_{{D_{\alpha } }} (d_{2} ) \), then \( d_{1} \) is superior to \( d_{2} \), denoted by \( d_{1} \succ d_{2} . \)

From Definition 8, we can see that the hesitation degree \( \pi \) of \( d \) is divided by the parameter \( \alpha \) into two parts, one added to the mean membership and the remainder to the mean nonmembership of \( d \). According to Definition 5, \( s\left( d \right) = \frac{1}{\# h}\sum\nolimits_{\gamma \in h} \gamma - \frac{1}{\# g}\sum\nolimits_{\eta \in g} \eta \) is the score function of \( d \), so \( \alpha = \frac{1}{2} + \frac{s(d)}{2} + \frac{s(d)}{2}\pi = \frac{1}{\# h}\sum\nolimits_{\gamma \in h} \gamma + \frac{\pi }{2} + \frac{s(d)}{2}\pi \). It is apparent that \( \alpha \) varies not only with the score function value \( s(d) \) but also with the mean hesitation degree \( \pi \): \( \alpha \) increases in \( \pi \) for a positive score function value and decreases in \( \pi \) for a negative one.

Thus the correctional score function can characterize the hesitation of DHFEs. In addition, if a DHFE has a larger nonmembership value, it receives a worse correctional score function value, so the correctional score function pays more attention to the nonmembership value. We analyze this with the following example.

Example 2

Let \( A = \{ (0.3,0.4),\;(0.1,0.2)\} \), \( B = \{ (0.5,0.6),\;(0.3,0.4)\} \) and \( C = \{ (0.6),\;(0.3,\;0.4)\} \) be the three DHFEs of Example 1. We calculate their correctional score function values and rank them.

From Definition 8, the mean hesitation degree of \( A \) is \( \pi_{A} = 0.5 \), so \( \alpha_{A} = \frac{1}{2} + \frac{{\frac{0.3 + 0.4}{2} - \frac{0.1 + 0.2}{2}}}{2} \times (1 + \pi_{A} ) = 0.65 \). Splitting the hesitation degree accordingly, we get \( S_{h} (h_{A} ) = 0.675 \) and \( S_{g} (g_{A} ) = 0.325 \), so \( S_{{D_{\alpha } }} (A) = \frac{1 + 0.675 - 0.325}{2} = 0.675 \).

By a similar way, we get \( \pi_{B} = 0.1 \), \( \alpha_{B} = 0.61 \), \( S_{h} (h_{B} ) = 0.611 \), \( S_{g} (g_{B} ) = 0.389 \) and \( S_{{D_{\alpha } }} (B) \,= \,\frac{1 + 0.611 - 0.389}{2} \,=\, 0.611 \);

\( \pi_{C} = 0.05,\alpha_{C} = 0.6312 \), \( S_{h} (h_{C} ) \,= \,0.6316 \),\( S_{g} (g_{C} ) \,=\, 0.3684 \) and \( S_{{D_{\alpha } }} (C) = \frac{1 + 0.6316 - 0.3684}{2} = 0.6316 \). Obviously, \( S_{{D_{\alpha } }} (A) > S_{{D_{\alpha } }} (C) > S_{{D_{\alpha } }} (B) \), and the rank of the three DHFEs is \( A \succ C \succ B \).
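The computations of Example 2 can be reproduced with a direct implementation of Definition 8 (the function name is our own):

```python
# Correctional score function S_{D_alpha} of a DHFE d = (h, g),
# per Definition 8, reproducing the values of Example 2.

def correctional_score(h, g):
    mh = sum(h) / len(h)                   # mean membership value
    mg = sum(g) / len(g)                   # mean nonmembership value
    pi = 1 - mh - mg                       # mean hesitation degree
    s = mh - mg                            # score function of Definition 5
    alpha = 0.5 + s / 2 + (s / 2) * pi     # splitting parameter
    Sh = mh + alpha * pi                   # correctional membership value
    Sg = mg + (1 - alpha) * pi             # correctional nonmembership value
    return (1 + Sh - Sg) / 2

A = ([0.3, 0.4], [0.1, 0.2])
B = ([0.5, 0.6], [0.3, 0.4])
C = ([0.6],      [0.3, 0.4])
for name, d in [("A", A), ("B", B), ("C", C)]:
    print(name, round(correctional_score(*d), 4))
# the values 0.675, 0.611, 0.6316 give the ranking A > C > B
```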

The result differs from that of Example 1, as analyzed below.

  1. \( B \) and \( C \) have the same mean nonmembership value, but the mean membership value of \( C \) is larger, so \( C \succ B \) in both Examples 1 and 2. This result accords with people’s cognition.

  2. Using the correctional score function \( S_{{D_{\alpha } }} \), we get \( A \succ C \), while the method in Definition 5 gives \( C \succ A \). With the method in Definition 5, only the score function is used to compare \( A \) and \( C \), since they have different deviations between the mean membership and the mean nonmembership values, and the accuracy function is not invoked; their hesitations are therefore not considered in the ranking process. In contrast, the correctional score function refines the mean hesitation degrees of \( A \) and \( C \), splitting them between the mean membership and the mean nonmembership values.

  3. The comparison results of \( A \) and \( B \) also differ between the two methods. The reason is that, compared with \( A \), \( B \) has a larger nonmembership value while its membership value is not large enough; the correctional score function emphasizes the influence of the nonmembership value and allots little of the mean hesitation degree to the mean membership degree of \( B \).

In Figs. 1 and 2, we further analyze how the correctional score function value changes with the mean nonmembership value and the mean membership value for the two DHFEs \( A \) and \( B \).

Fig. 1 The change of the correctional score function value with the mean nonmembership value

Fig. 2 The change of the correctional score function value with the mean membership value

In Fig. 1, the correctional score function values of \( A \) and \( B \) decrease as their respective mean nonmembership values increase; in Fig. 2, they increase as their respective mean membership values increase.

Obviously, the correctional score function accords with people’s cognition:

  1. A DHFE has a better score function value when it has a larger membership value and a smaller nonmembership value;

  2. A DHFE has a worse score function value when it has a larger nonmembership value and its membership value is not large enough.

We can also note that when \( \alpha = \frac{1}{2} \) in Definition 8, the correctional score function reduces to \( S_{{D_{\alpha } }} (d) = \frac{1 + s(d)}{2} \), an increasing linear transform of the score function in Definition 5 that yields the same ranking. So the correctional score function is a generalization of the score function in Definition 5 and is more effective for comparing DHFEs.

3.2 The dice similarity measure of DHFSs

To use the dice similarity measure in decision-making problems with dual hesitant fuzzy information, we extend the dice similarity measure in Definition 6 to DHFSs.

Definition 9

Let \( X = (x_{1} ,\;x_{2} , \ldots ,x_{n} ) \) be a fixed set, \( D_{1} = \left\{ {\left\langle {x_{i} ,d_{{D_{1} }} (x_{i} )} \right\rangle \left| {x_{i} \in X} \right.} \right\} \) and \( D_{2} = \left\{ {\left\langle {x_{i} ,d_{{D_{2} }} (x_{i} )} \right\rangle \left| {x_{i} \in X} \right.} \right\} \) be two DHFSs, where \( d_{{D_{1} }} (x) = (h_{{D_{1} }} (x),\;g_{{D_{1} }} (x)) \) and \( d_{{D_{2} }} (x) = (h_{{D_{2} }} (x),\;g_{{D_{2} }} (x)) \), and \( w = (w_{1} ,\;w_{2} , \ldots ,w_{n} ) \) be a weighting vector with \( w_{i} > 0 \) and \( \sum\nolimits_{i = 1}^{n} {w_{i} } = 1 \). Then the dice similarity measure of DHFSs is defined as follows:

$$ PD(D_{1} ,D_{2} ) = \sum\limits_{i = 1}^{n} {w_{i} } \frac{{2S_{{D_{\alpha } }} (d_{{D_{1} }} (x_{i} ))S_{{D_{\alpha } }} (d_{{D_{2} }} (x_{i} ))}}{{S_{{D_{\alpha } }} (d_{{D_{1} }} (x_{i} ))^{2} + S_{{D_{\alpha } }} (d_{{D_{2} }} (x_{i} ))^{2} }} $$

where \( d_{{D_{1} }} (x_{i} ) \) and \( d_{{D_{2} }} (x_{i} ) \) are DHFEs, \( S_{{D_{\alpha } }} (d_{{D_{1} }} ) \in [0,1] \), \( S_{{D_{\alpha } }} (d_{{D_{2} }} ) \in [0,\;1] \).

Obviously, the above dice similarity measure of DHFSs has the same properties as the dice similarity measure in Definition 6. In the next section, we will use the dice similarity measure of DHFSs to rank alternatives.
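As a sketch, Definition 9 can be implemented directly from the correctional score values of the component DHFEs; the two DHFSs and the weighting vector below are illustrative, and all names are our own.

```python
# Dice similarity PD(D1, D2) between two DHFSs per Definition 9, built on
# the correctional score S_{D_alpha} of each DHFE (Definition 8).

def correctional_score(h, g):
    mh, mg = sum(h) / len(h), sum(g) / len(g)
    pi = 1 - mh - mg
    alpha = 0.5 + (mh - mg) / 2 * (1 + pi)
    return (1 + (mh + alpha * pi) - (mg + (1 - alpha) * pi)) / 2

def pd(D1, D2, w):
    """Weighted dice similarity of two DHFSs given as lists of (h, g) pairs."""
    total = 0.0
    for (h1, g1), (h2, g2), wi in zip(D1, D2, w):
        s1, s2 = correctional_score(h1, g1), correctional_score(h2, g2)
        total += wi * 2 * s1 * s2 / (s1 ** 2 + s2 ** 2)
    return total

D1 = [([0.3, 0.4], [0.1, 0.2]), ([0.6], [0.3, 0.4])]
D2 = [([0.5, 0.6], [0.3, 0.4]), ([0.6], [0.3, 0.4])]
print(round(pd(D1, D1, [0.6, 0.4]), 6))  # a DHFS compared with itself gives 1.0
print(round(pd(D1, D2, [0.6, 0.4]), 6))
```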

4 An approach for multiple attribute decision making with dual hesitant fuzzy information

In this section, we shall utilize the correctional score function and the dice similarity measure of DHFSs to develop an approach for solving prioritized decision-making problems with dual hesitant fuzzy information.

For a multi-attribute decision-making problem, let \( A = \left\{ {A_{1} ,\;A_{2} , \ldots ,A_{m} } \right\} \) be a discrete set of alternatives and \( G = \{ G_{1} ,G_{2} , \ldots ,G_{n} \} \) a collection of attributes. Suppose that there is a prioritization between the attributes expressed by the linear ordering \( G_{1} \succ G_{2} \succ \cdots \succ G_{n} \), indicating that attribute \( G_{j} \) has a higher priority than \( G_{s} \) if \( j < s \). If the decision makers anonymously provide several membership or nonmembership values for the alternative \( A_{i} \) under the attribute \( G_{j} \), these values can be considered as a DHFE \( d_{ij} \); if two decision makers provide the same value, it appears only once in \( d_{ij} \), whether as a membership or a nonmembership value. We can thus construct a dual hesitant fuzzy decision matrix \( H = (d_{ij} )_{m \times n} \), whose entries \( d_{ij} \) are DHFEs.

For prioritized multi-attribute decision-making, we first obtain the priority-induced importance weights of the attributes with respect to each alternative \( A_{i} \). Using the correctional score function, for each attribute we let \( G_{j} (A_{i} ) = S_{{D_{\alpha } }} (d_{ij} ) \), the degree of satisfaction of \( A_{i} \) under \( G_{j} \).

Then we obtain for each \( G_{j} \) its un-normalized importance weights \( T_{ij} \) with respect to alternative \( A_{i} \):

$$ T_{i1} = 1,\;T_{ij} = \prod\nolimits_{k = 1}^{j - 1} {G_{k} (A_{i} )(j = 2, \ldots ,n)}. $$

Using this we obtain the normalized importance weights:

$$ w_{ij} = \frac{{T_{ij} }}{{\sum\nolimits_{j = 1}^{n} {T_{ij} } }},\;(j = 1,\;2, \ldots ,n). $$

Once we have the normalized priority-based importance weights, the next step is to rank the alternatives by means of the dice similarity measure.

We assume that \( A_{i} = (d_{i1} ,\;d_{i2} , \ldots ,d_{in} )\;(i = 1,\;2, \ldots ,m) \) is the DHFS corresponding to alternative \( A_{i} \) and that \( A^{*} = (d_{1}^{ *} ,\;d_{2}^{ *} , \ldots ,d_{n}^{*} ) \), where \( d_{j}^{ *} = (\{ 1\} ,\;\{ 0\} )\;(j = 1,\;2, \ldots ,n) \), is the DHFS corresponding to the ideal alternative. Then, taking the attribute weights into account, we obtain the dice similarity measure between \( A^{*} \) and \( A_{i} \) according to Definition 9:

$$ PD(A^{*} ,\;A_{i} ) = \sum\limits_{j = 1}^{n} {w_{j} \frac{{2S_{{D_{\alpha } }} (d_{j}^{*} )S_{{D_{\alpha } }} (d_{ij} )}}{{S_{{D_{\alpha } }} (d_{j}^{ *} )^{2} + S_{{D_{\alpha } }} (d_{ij} )^{2} }}.} $$
(7)

The bigger the value of \( PD(A^{*} ,A_{i} ) \), the closer the alternative \( A_{i} \) is to the ideal alternative \( A^{*} \). Therefore, according to the dice similarity measure, the best alternative can be selected.

From the above discussion, we utilize the following steps to solve the prioritized multiple attribute decision making problems:

Step 1. Calculate the un-normalized importance weights \( T_{ij} \;(i = 1,\;2, \ldots ,m;\;j = 1,\;2, \ldots ,n) \) for alternative \( A_{i} \):

$$ T_{i1} = 1,\;T_{ij} = \prod\limits_{k = 1}^{j - 1} {G_{k} (A_{i} )(j = 2, \ldots ,n)} $$
(8)

Step 2. Calculate the normalized importance weights \( w_{ij} (i = 1,\;2, \ldots ,m,\;\;j = 1,\;2, \ldots ,n) \):

$$ w_{ij} = \frac{{T_{ij} }}{{\sum\nolimits_{j = 1}^{n} {T_{ij} } }}(i = 1,\,2, \ldots ,m,\;j = 1,\;2, \ldots ,n) $$
(9)

Step 3. By Eq. (7), calculate the dice similarity measures \( PD(A^{*} ,A_{i} )(i = 1,\;2, \ldots ,m) \) between the ideal alternative \( A^{*} \) and the alternative \( A_{i} (i = 1,\;2, \ldots ,m) \).

Step 4. Rank the alternatives \( A_{i} (i = 1,\;2, \ldots ,m) \) and select the best one(s) according to the values \( PD(A^{*} ,A_{i} )(i = 1,\;2, \ldots ,m) \).
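The four steps can be sketched end-to-end in Python. The small decision matrix below is a made-up illustration (it is not the data of Table 1), and all names are our own.

```python
# End-to-end sketch of Steps 1-4 under the priority ordering G1 > G2 > G3,
# for two alternatives described by illustrative DHFEs.

def correctional_score(h, g):
    """S_{D_alpha} of a DHFE (h, g), per Definition 8."""
    mh, mg = sum(h) / len(h), sum(g) / len(g)
    pi = 1 - mh - mg
    alpha = 0.5 + (mh - mg) / 2 * (1 + pi)
    return (1 + (mh + alpha * pi) - (mg + (1 - alpha) * pi)) / 2

# H[i][j] is the DHFE of alternative A_i under attribute G_j.
H = [
    [([0.6, 0.7], [0.2]), ([0.5], [0.3, 0.4]), ([0.4, 0.5], [0.3])],
    [([0.5], [0.4]),      ([0.6, 0.8], [0.1]), ([0.7], [0.2])],
]

results = []
for row in H:
    sat = [correctional_score(h, g) for h, g in row]   # G_j(A_i)
    # Step 1: T_i1 = 1, T_ij = product of the satisfactions of higher-priority attributes
    T = [1.0]
    for s in sat[:-1]:
        T.append(T[-1] * s)
    # Step 2: normalize the weights
    w = [t / sum(T) for t in T]
    # Step 3: dice similarity to the ideal alternative, whose DHFEs are ({1}, {0}),
    # so S_{D_alpha}(d*) = 1 and each term reduces to 2s / (1 + s^2)
    results.append(sum(wi * 2 * s / (1 + s ** 2) for wi, s in zip(w, sat)))

# Step 4: rank alternatives by PD(A*, A_i); larger is better
print([round(r, 4) for r in results])
```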

5 Illustrative example

In this section, the proposed approach is demonstrated by an example of teacher evaluation and the result is compared with the results using the correlation coefficient of DHFSs in [35] and the aggregation operator of DHFSs in [22].

Here we consider the problem of selecting the best teacher in a Chinese university. The evaluation has attracted great attention from the school, the university president and the dean of the management school, and the human resource officer sets up a panel of decision makers that takes full responsibility for it. There are five candidates \( A_{i} (i = 1,\;2,\;3,\;4,\;5) \), evaluated according to the following four attributes: ① \( G_{1} \), morality; ② \( G_{2} \), research capability; ③ \( G_{3} \), teaching skill; ④ \( G_{4} \), education background. The prioritization relationship over the attributes is \( G_{1} \succ G_{2} \succ G_{3} \succ G_{4} \). The five candidates \( A_{i} (i = 1,\;2,\;3,\;4,\;5) \) are evaluated with dual hesitant fuzzy values by three decision makers under the above four attributes, and the resulting dual hesitant fuzzy decision matrix is given in Table 1.

Table 1 Dual hesitant fuzzy decision matrix

In order to select the most desirable candidate, we apply the steps in Sect. 4 to the problem as follows:

Step 1. Utilize Eq. (8) to calculate the values of \( T_{ij} (i = 1,\;2,\;3,\;4,\;5,\;j = 1,\;2,\;3,\;4) \) as follows:

$$ (T_{ij} )_{5 \times 4} = \left[ {\begin{array}{*{20}c} 1 & {0.6466} & {0.1361} & {0.053} \\ 1 & {0.8} & {0.622} & {0.2246} \\ 1 & {0.7894} & {0.2968} & {0.203} \\ 1 & {0.3648} & {0.2071} & {0.0763} \\ 1 & {0.7} & {0.3934} & {0.1312} \\ \end{array} } \right] .$$

Step 2. Utilize Eq. (9) to calculate the values of \( w_{ij} (i = 1,\;2,\;3,\;4,\;5,\;\;j = 1,\;2,\;3,\;4) \) as follows:

$$ (w_{ij} )_{5 \times 4} = \left[ {\begin{array}{*{20}c} {0.5448} & {0.3522} & {0.0741} & {0.0289} \\ {0.3778} & {0.3023} & {0.2350} & {0.0849} \\ {0.4368} & {0.3448} & {0.1297} & {0.0887} \\ {0.6067} & {0.2213} & {0.1257} & {0.0463} \\ {0.4495} & {0.3147} & {0.1768} & {0.0590} \\ \end{array} } \right]. $$
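As a quick consistency check, the normalized weights can be recomputed directly from the printed \( T \) matrix; by Eq. (9) every row of \( (w_{ij} ) \) must sum to 1. (Small discrepancies against hand-computed tables can arise when \( T \) is itself rounded before normalization.)

```python
import numpy as np

# T matrix from Step 1 (values as printed, rounded to 4 decimal places)
T = np.array([
    [1, 0.6466, 0.1361, 0.0530],
    [1, 0.8000, 0.6220, 0.2246],
    [1, 0.7894, 0.2968, 0.2030],
    [1, 0.3648, 0.2071, 0.0763],
    [1, 0.7000, 0.3934, 0.1312],
])
w = T / T.sum(axis=1, keepdims=True)   # Eq. (9): row-wise normalization
print(np.round(w, 4))
```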

Step 3. Utilize Eq. (7) to calculate the values of \( PD(A^{*} ,A_{i} )(i = 1,\;2,\;3,\;4,\;5) \) as follows:

$$ PD(A^{*} ,A_{1} ) = 0.7156,\;PD(A^{*} ,A_{2} ) = 0.745,\;PD(A^{*} ,A_{3} ) = 0.9268,\;PD(A^{*} ,A_{4} ) = 0.7032,\;PD(A^{*} ,A_{5} ) = 0.8459. $$

Step 4. Rank the candidates \( A_{i} \left( {i = 1,2,3,4,5} \right) \) in accordance with \( PD(A^{*} ,A_{i} ) \) and we get \( A_{3} \succ A_{5} \succ A_{2} \succ A_{1} \succ A_{4} \). Thus the most desirable candidate is \( A_{3} \).

In Step 3 of the proposed decision method, we use the dice similarity measure for DHFSs to rank the alternatives. The ranking could instead be obtained with the correlation coefficient method in [35] or the DHFWA aggregation operator in [22], which are given as follows (all parameters in these formulas are explained in detail in [35] and [22]):

$$ \rho_{DHFS} (A,\;B) = \frac{{C_{DHFS} (A,B)}}{{\sqrt {C{}_{DHFS}(A,A)} \sqrt {C_{DHFS} (B,B)} }} $$
$$ = \frac{{\sum\nolimits_{i = 1}^{n} {\left( {\frac{1}{{k_{i} }}\sum\nolimits_{s = 1}^{{k_{i} }} {h_{A\sigma (s)} (x_{i} )h_{B\sigma (s)} (x_{i} )} + \frac{1}{{l_{i} }}\sum\nolimits_{t = 1}^{{l_{i} }} {g_{A\sigma (t)} (x_{i} )g_{B\sigma (t)} (x_{i} )} } \right)} }}{{\sqrt {\sum\nolimits_{i = 1}^{n} {\left( {\frac{1}{{k_{i} }}\sum\nolimits_{s = 1}^{{k_{i} }} {h_{A\sigma (s)}^{2} (x_{i} )} + \frac{1}{{l_{i} }}\sum\nolimits_{t = 1}^{{l_{i} }} {g_{A\sigma (t)}^{2} (x_{i} )} } \right)} } \sqrt {\sum\nolimits_{i = 1}^{n} {\left( {\frac{1}{{k_{i} }}\sum\nolimits_{s = 1}^{{k_{i} }} {h_{B\sigma (s)}^{2} (x_{i} )} + \frac{1}{{l_{i} }}\sum\nolimits_{t = 1}^{{l_{i} }} {g_{B\sigma (t)}^{2} (x_{i} )} } \right)} } }} $$

and

$$ DHFWA_{w} (d_{1} ,\;d_{2} , \ldots ,d_{n} ) = \mathop \oplus \limits_{j = 1}^{n} (w_{j} d_{j} )\, = \, \cup_{{\gamma_{j} \in h_{j} ,\eta_{j} \in g_{j} }} \left\{ {\left\{ {1 - \prod\limits_{j = 1}^{n} {(1 - \gamma_{j} )^{{w_{j} }} } } \right\},\;\left\{ {\prod\limits_{j = 1}^{n} {(\eta_{j} )^{{w_{j} }} } } \right\}} \right\} $$
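Both formulas can be sketched in Python. This is a minimal, hedged implementation: hesitant membership and nonmembership sets are plain lists, and for the correlation coefficient they are assumed to be already sorted and padded to equal lengths \( k_{i} \) and \( l_{i} \) across the two DHFSs, as the formula requires.

```python
import numpy as np
from itertools import product

def corr_dhfs(A, B):
    """Correlation coefficient rho_DHFS (Ye [35]).
    A, B: one (h, g) pair per element x_i; h and g are lists of
    membership / nonmembership values, assumed sorted and padded to
    equal lengths across A and B for each x_i."""
    def inner(X, Y):
        s = 0.0
        for (hx, gx), (hy, gy) in zip(X, Y):
            s += float(np.dot(hx, hy)) / len(hx)   # membership part, factor 1/k_i
            s += float(np.dot(gx, gy)) / len(gx)   # nonmembership part, factor 1/l_i
        return s
    return inner(A, B) / (np.sqrt(inner(A, A)) * np.sqrt(inner(B, B)))

def dhfwa(ds, w):
    """DHFWA operator (Wang et al. [22]) for DHFEs ds = [(h_j, g_j), ...]."""
    hs = sorted({1 - np.prod([(1 - g) ** wj for g, wj in zip(gam, w)])
                 for gam in product(*[h for h, _ in ds])})
    gs = sorted({np.prod([e ** wj for e, wj in zip(eta, w)])
                 for eta in product(*[g for _, g in ds])})
    return hs, gs
```

For example, `corr_dhfs(A, A)` returns 1 for any DHFS `A`, and `dhfwa` enumerates every combination of membership (resp. nonmembership) values, matching the union over \( \gamma_{j} \in h_{j} ,\;\eta_{j} \in g_{j} \) in the operator above.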

Using the weights obtained in Step 2, we rank the candidates with these two methods.

Using the correlation coefficient of DHFSs, we can get the following results:

$$ \rho_{WDHFS} (A^{*} ,A_{1} ) = 0.6008,\;\rho_{WDHFS} (A^{*} ,A_{2} ) = 0.7878,\;\rho_{WDHFS} (A^{*} ,A_{3} ) = 0.8155,\; $$
$$ \rho_{WDHFS} (A^{*} ,A_{4} ) = 0.5634,\;\rho_{WDHFS} (A^{*} ,A_{5} ) = 0.7841 $$

By ranking the candidates \( A_{i} (i = 1,\;2,\;3,\;4,\;5) \) according to the values of \( \rho_{WDHFS} (A^{*} ,A_{i} ) \), we get \( A_{3} \succ A_{2} \succ A_{5} \succ A_{1} \succ A_{4} \). The most desirable candidate is \( A_{3} \).

Using the aggregation operator and the comparison method for DHFEs in [22], we get the following result:

$$ DHFWA_{w} (A_{1} ) = \{ (0.3951,\;0.3989,\;0.402,\;0.4039,\;0.4058,\;0.4107,\;0.4643,\;0.4677,\;0.4704,\;0.4721,\;0.4738,\;0.4781),\;(0.4199,\;0.4257,\;0.4402,\;0.4462)\} . $$

Because the aggregation values are lengthy, we give only the one above as a representative. The score values of the candidates \( A_{i} (i = 1,\;2,\;3,\;4,\;5) \) are then:

$$ s(A_{1} ) = 0.0039,\;s(A_{2} ) = 0.3395,\;s(A_{3} ) = 0.3438,\;s(A_{4} ) = - 0.1445,\;s(A_{5} ) = 0.2026. $$

So we get the ranking order \( A_{3} \succ A_{2} \succ A_{5} \succ A_{1} \succ A_{4} \). Thus the most desirable candidate is \( A_{3} \).
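In all three methods, the final ranking is simply a descending sort of the per-candidate values; e.g., from the score values above:

```python
# Score values s(A_i) from the DHFWA-based method
scores = {"A1": 0.0039, "A2": 0.3395, "A3": 0.3438, "A4": -0.1445, "A5": 0.2026}
ranking = sorted(scores, key=scores.get, reverse=True)
print(" > ".join(ranking))   # prints: A3 > A2 > A5 > A1 > A4
```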

Compared with these alternative ranking methods, the proposed approach identifies the same optimal alternative, although the full rankings differ slightly. Moreover, the dice similarity measure has a more concise procedure and is easier to compute.

It should be pointed out that the weighted correlation coefficient in [35] and the DHFWA operator with its comparison method for DHFEs in [22] were proposed to solve dual hesitant fuzzy decision-making problems under the assumption that all attributes are at the same priority level; in both methods, a single weighting vector is given for all the alternatives. In contrast, the characteristic of the proposed dual hesitant fuzzy decision-making approach is that it can handle problems with a prioritization relationship between attributes: for each alternative, a different weighting vector is derived from the proposed comparison method for DHFEs, the attribute satisfactions, and the prioritization relationship.

6 Conclusion

In this paper, we investigated the dual hesitant fuzzy multi-attribute decision-making problem with a prioritization relationship between attributes. We proposed a correctional score function for DHFEs. Then, we utilized the correctional score function and the dice similarity measure to develop an approach to dual hesitant fuzzy multi-attribute decision-making problems in which the attributes are at different priority levels. Finally, a practical example of teacher evaluation was given to illustrate the developed approach and demonstrate its practicality and effectiveness.

As noted, the comparison method based on the correctional score function still lacks sufficient theoretical support, so further study of the correctional score function is needed. In addition, DHFSs are a suitable tool for representing the uncertain information that is widespread in daily life, and there are potential applications of our approach in fields such as data mining, information retrieval, and pattern recognition. These may be topics for future research.