1 Introduction

Multi-attribute decision making (MADM) is a familiar human activity. In classical MADM, the assessments of alternatives are assumed to be precisely known [1]. However, because of the inherent vagueness of human preferences and the uncertainty of the objects themselves, the attributes involved in decision making problems are better described by fuzzy concepts. Fuzzy sets were introduced by Zadeh [2], and many scholars have since contributed to fuzzy decision making [3, 4, 5]. The hesitant fuzzy set (HFS), introduced by Torra and Narukawa as an extension of the fuzzy set [6, 7], permits the membership of an element in a given set to take several different values, which makes it a useful tool for describing and handling uncertain information in MADM. However, the HFS captures only membership and ignores non-membership. Therefore, Zhu et al. proposed the dual hesitant fuzzy set (DHFS) [8]. Compared with classical fuzzy set theory, the DHFS adds non-membership degrees, describing the fuzzy nature of the objective world more comprehensively and showing strong flexibility and practicality.

In recent years, hesitant fuzzy MADM has received increasing attention. Herrera-Viedma et al. emphasized the fuzzy preference matrix and studied its different forms of expression [9]. Xu et al. defined distance and similarity measures for HFSs [10, 11]. Zhang et al. proposed hesitant fuzzy power aggregation operators to aggregate MADM evaluation values [12]. Xia and Xu also developed several aggregation operators for hesitant fuzzy information and applied them to MADM problems in a hesitant fuzzy environment [13]. In real life, people often judge the truth values of fuzzy propositions in language. Lin defined the hesitant fuzzy linguistic set by combining the advantages of linguistic evaluation values and the HFS [14]. On the basis of multi-hesitant fuzzy sets, the authors of [15] proposed multi-hesitant fuzzy linguistic term sets.

The studies above assume that the evaluation information is complete. However, owing to the complexity of the decision environment, the hesitation of decision makers, and improper operations during the decision process, the information is often incomplete. To solve MADM problems with missing information, the missing values are usually completed by some method. This paper therefore proposes a MADM method based on the maximum similarity of incomplete dual hesitant fuzzy sets (IDHFSs). We define the similarity of the incomplete dual hesitant fuzzy elements (IDHFEs) in IDHFSs and compare the pairwise similarities of the alternatives to complete the missing data from the most similar ones. An example is then provided to illustrate that the proposed approach is flexible and effective.

2 Preliminaries

The DHFS, as a generalization of the fuzzy set, permits the membership degree and the non-membership degree of an element to a set to be represented by several possible values between 0 and 1, which can describe situations where people hesitate in expressing their preferences over objects in the process of decision making.

Definition 1 [17].

Let X be a fixed set; then a DHFS on X is defined as:

$$ D = \{ \langle x,h(x),g(x) \rangle \mid x \in X \} $$

where \( h(x) \) and \( g(x) \) are two sets of values in [0, 1], representing the possible membership degrees and non-membership degrees of \( x \in X \), respectively. In addition,

$$ 0 \le \gamma ,\eta \le 1,\quad 0 \le \gamma^{ + } + \eta^{ + } \le 1 $$

where \( \gamma \in h(x) \), \( \eta \in g(x) \), \( \gamma^{ + } \in h^{ + } (x) = \bigcup_{\gamma \in h(x)} \max \left\{ \gamma \right\} \) and \( \eta^{ + } \in g^{ + } (x) = \bigcup_{\eta \in g(x)} \max \left\{ \eta \right\} \). A DHFS is composed of dual hesitant fuzzy elements (DHFEs), each denoted by \( d(x) = (h(x),g(x)) \) (\( d = (h,g) \) for short).
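To make the construction concrete, a minimal Python sketch of a DHFE might look as follows; the class name and the validity checks are ours, added for illustration, and are not part of the original definition.

```python
# A minimal sketch of a dual hesitant fuzzy element d = (h, g) from
# Definition 1; the class name and checks are illustrative, not standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DHFE:
    h: List[float] = field(default_factory=list)  # possible membership degrees
    g: List[float] = field(default_factory=list)  # possible non-membership degrees

    def __post_init__(self):
        assert all(0.0 <= v <= 1.0 for v in self.h + self.g)
        if self.h and self.g:  # constraint gamma+ + eta+ <= 1 of Definition 1
            assert max(self.h) + max(self.g) <= 1.0 + 1e-9

d = DHFE(h=[0.5, 0.4, 0.3], g=[0.4, 0.3])  # an entry of D^0 in Sect. 4.2
```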

To compare DHFSs, Zhu et al. gave the following comparison laws:

Definition 2 [8].

Let \( A_{i} = \left( {h_{{A_{i} }} ,g_{{A_{i} }} } \right) \) \( (i = 1,2) \) be any two DHFEs, let \( s_{{A_{i} }} = \frac{1}{{l_{h} }}\sum\limits_{\gamma \in h} \gamma - \frac{1}{{l_{g} }}\sum\limits_{\eta \in g} \eta \) be the score function of \( A_{i} \), and let \( p_{{A_{i} }} = \frac{1}{{l_{h} }}\sum\limits_{\gamma \in h} \gamma + \frac{1}{{l_{g} }}\sum\limits_{\eta \in g} \eta \) be the accuracy function of \( A_{i} \), where \( l_{h} \) and \( l_{g} \) are the numbers of elements in \( h \) and \( g \), respectively. Then

  1. (i)

    if \( s_{{A_{1} }} > s_{{A_{2} }} \), then \( A_{1} \) is superior to \( A_{2} \), denoted by \( A_{1} \succ A_{2} \);

  2. (ii)

    if \( s_{{A_{1} }} = s_{{A_{2} }} \), then

    1. (1)

      if \( p_{{A_{1} }} = p_{{A_{2} }} \), then \( A_{1} \) is equivalent to \( A_{2} \), denoted by \( A_{1} \sim A_{2} \);

    2. (2)

      if \( p_{{A_{1} }} > p_{{A_{2} }} \), then \( A_{1} \) is superior to \( A_{2} \), denoted by \( A_{1} \succ A_{2} \).
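As a small illustration of these comparison laws, the score and accuracy functions and the ordering they induce can be coded directly from Definition 2, using the DHFE sketch above:

```python
def score(d: DHFE) -> float:
    # s_A = mean of memberships minus mean of non-memberships (Definition 2)
    return sum(d.h) / len(d.h) - sum(d.g) / len(d.g)

def accuracy(d: DHFE) -> float:
    # p_A = mean of memberships plus mean of non-memberships (Definition 2)
    return sum(d.h) / len(d.h) + sum(d.g) / len(d.g)

def compare(a: DHFE, b: DHFE) -> int:
    """1 if a is superior, -1 if b is superior, 0 if equivalent."""
    if score(a) != score(b):
        return 1 if score(a) > score(b) else -1
    if accuracy(a) == accuracy(b):
        return 0
    return 1 if accuracy(a) > accuracy(b) else -1

# Example with two entries of D^0 from Sect. 4.2: u2 beats u1 under c1,
# since score = 0.3167 > 0.05, so compare(...) returns 1.
assert compare(DHFE([0.7, 0.6, 0.4], [0.3, 0.2]),
               DHFE([0.5, 0.4, 0.3], [0.4, 0.3])) == 1
```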

3 Similarity of Incomplete Dual Hesitant Fuzzy Elements

In real life, owing to the preferences of experts and the influence of external factors, some information values cannot be given. Hence there are cases where the DHFEs are incomplete when the information values are represented by DHFEs. We therefore divide DHFEs into complete dual hesitant fuzzy elements and incomplete dual hesitant fuzzy elements, with the corresponding definitions as follows:

Definition 3.

Let \( U = \left\{ {u_{1} ,u_{2} , \ldots ,u_{n} } \right\} \) be a fixed set and \( A_{i} = (h_{i} ,g_{i} ) \) \( (i = 1,2, \ldots ,n) \) be any DHFEs on \( U \).

  1. (1)

    if \( n_{i}^{h} = \left( {n_{i} } \right)^{*} = n_{i}^{g} \), then the DHFEs are called complete dual hesitant fuzzy elements, denoted by CDHFEs; a matrix composed of CDHFEs is called a complete dual hesitant fuzzy matrix, denoted by CDHFM; a set composed of CDHFEs is called a complete dual hesitant fuzzy set, denoted by CDHFS.

  2. (2)

    if \( n_{i}^{h} \ne \left( {n_{i} } \right)^{*} \) or \( n_{i}^{g} \ne \left( {n_{i} } \right)^{*} \), then the DHFEs are called incomplete dual hesitant fuzzy elements, denoted by IDHFEs; a matrix composed of IDHFEs is called an incomplete dual hesitant fuzzy matrix, denoted by IDHFM; a set composed of IDHFEs is called an incomplete dual hesitant fuzzy set, denoted by IDHFS.

where \( n_{i}^{h} \) and \( n_{i}^{g} \) are the numbers of values in \( h_{i} \) and \( g_{i} \), respectively, and \( \left( {n_{i} } \right)^{*} \) is the larger of \( n_{i}^{h} \) and \( n_{i}^{g} \).
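A completeness test following Definition 3 is a one-liner. Since the worked example in Sect. 4.2 pads every set to three values, the sketch below takes the target count \( (n_{i})^{*} \) as a parameter rather than fixing one reading of it:

```python
def is_complete(d: DHFE, n_star: int) -> bool:
    # Definition 3: a DHFE is a CDHFE when both h and g hold (n)* values;
    # (n)* is passed in explicitly (it is 3 for every element in Sect. 4.2).
    return len(d.h) == n_star and len(d.g) == n_star
```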

Singh [16] has proposed similarity measures for CDHFSs; however, they are not applicable to IDHFSs. We improve the definition of the similarity measures in [16] and define a similarity degree for IDHFEs.

Definition 4.

Let \( A(u_{\mu } ) = (h(u_{\mu } ),g(u_{\mu } )) \) and \( B(u_{\nu } ) = (h(u_{\nu } ),g(u_{\nu } )) \) be two IDHFEs. The similarity degree \( s_{IDHFEs} (A,B) \) between A and B is defined as:

$$ s_{IDHFEs} (A,B) = \frac{2\left( \frac{1}{p}\sum\limits_{k = 1}^{p} h_{\mu k} \cdot \frac{1}{q}\sum\limits_{s = 1}^{q} h_{\nu s} + \frac{1}{\alpha }\sum\limits_{l = 1}^{\alpha } g_{\mu l} \cdot \frac{1}{\beta }\sum\limits_{r = 1}^{\beta } g_{\nu r} \right)}{\left( \frac{1}{p}\sum\limits_{k = 1}^{p} h_{\mu k} \right)^{2} + \left( \frac{1}{q}\sum\limits_{s = 1}^{q} h_{\nu s} \right)^{2} + \left( \frac{1}{\alpha }\sum\limits_{l = 1}^{\alpha } g_{\mu l} \right)^{2} + \left( \frac{1}{\beta }\sum\limits_{r = 1}^{\beta } g_{\nu r} \right)^{2}} $$
(1)

If we take weight of each element \( x \in X \) for membership function and non-membership function into account, then

$$ s_{IDHFEs} (A,B) = \frac{2\left( \sum\limits_{k = 1}^{p} \omega_{\mu k} h_{\mu k} \cdot \sum\limits_{s = 1}^{q} \omega_{\nu s} h_{\nu s} + \sum\limits_{l = 1}^{\alpha } w_{\mu l} g_{\mu l} \cdot \sum\limits_{r = 1}^{\beta } w_{\nu r} g_{\nu r} \right)}{\left( \sum\limits_{k = 1}^{p} \omega_{\mu k} h_{\mu k} \right)^{2} + \left( \sum\limits_{s = 1}^{q} \omega_{\nu s} h_{\nu s} \right)^{2} + \left( \sum\limits_{l = 1}^{\alpha } w_{\mu l} g_{\mu l} \right)^{2} + \left( \sum\limits_{r = 1}^{\beta } w_{\nu r} g_{\nu r} \right)^{2}} $$
(2)

where \( \sum\limits_{k = 1}^{p} {\omega_{\mu k} h_{\mu k} } \), \( \sum\limits_{s = 1}^{q} {\omega_{\nu s} h_{\nu s} } \), \( \sum\limits_{l = 1}^{\alpha } {w_{\mu l} g_{\mu l} } \) and \( \sum\limits_{r = 1}^{\beta } {w_{\nu r} g_{\nu r} } \) are the weighted sums of the membership values and non-membership values, respectively. In particular, if every element has the same importance, then (2) reduces to (1).
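A direct implementation of formula (1) is short. Under an IDHFE each average simply runs over the values that are actually present, which is what makes the measure applicable to incomplete elements. A minimal sketch, building on the DHFE class above:

```python
def s_idhfe(a: DHFE, b: DHFE) -> float:
    # Similarity degree of Definition 4, formula (1). Each average runs
    # over the values actually present, so missing entries need no padding.
    mh_a, mh_b = sum(a.h) / len(a.h), sum(b.h) / len(b.h)
    mg_a, mg_b = sum(a.g) / len(a.g), sum(b.g) / len(b.g)
    num = 2.0 * (mh_a * mh_b + mg_a * mg_b)
    den = mh_a**2 + mh_b**2 + mg_a**2 + mg_b**2
    return num / den
# For formula (2), replace each plain average by the corresponding
# weighted sum over the element weights omega and w.
```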

It can easily be proved that the similarity degree of Definition 4 satisfies the following properties:

Proposition 1.

Let \( A \) and \( B \) be any two IDHFSs; then \( s_{IDHFEs} (A,B) \) is a similarity degree satisfying the following properties (a numeric check follows the list):

  1. (i)

    \( 0\, \le\, s_{IDHFEs} (A,B) \,\le \,1 \);

  2. (ii)

    \( s_{IDHFEs} (A,B) = 1 \) if and only if \( A = B \);

  3. (iii)

    \( s_{IDHFEs} (A,B) = s_{IDHFEs} (B,A) \);

  4. (iv)

    Let C be any IDHFS, if \( A \,\subseteq\, B \,\subseteq\, C \), then \( s_{IDHFEs} (A,B) \,\ge\, s_{IDHFEs} (A,C) \), \( s_{IDHFEs} (B,C)\, \ge\, s_{IDHFEs} (A,C) \).
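Properties (i) and (iii) can be spot-checked numerically with the sketch above; boundedness in (i) follows from \( 2xy \le x^{2} + y^{2} \) applied to the numerator of formula (1).

```python
# Numeric spot-check of Proposition 1 (i) and (iii) on two sample IDHFEs.
a = DHFE(h=[0.6, 0.4, 0.3], g=[0.3])       # incomplete: one g value missing
b = DHFE(h=[0.7, 0.6, 0.4], g=[0.3, 0.2])
assert 0.0 <= s_idhfe(a, b) <= 1.0                  # property (i)
assert abs(s_idhfe(a, b) - s_idhfe(b, a)) < 1e-12   # property (iii)
assert abs(s_idhfe(a, a) - 1.0) < 1e-12             # s(A, A) = 1
```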

Definition 5.

Let \( A = (h_{D} (u_{\mu } ),g_{D} (u_{\mu } )) \) and \( B = (h_{D} (u_{\nu } ),g_{D} (u_{\nu } )) \) be two IDHFEs; then the similarity matrix of A and B on attribute \( c_{j} \) is defined as:

$$ R_{{c_{j} }} = (x_{\mu \nu } )_{m \times m} ,{\text{where}}\,x_{\mu \nu } = s_{{c_{j} }} (A,B),1\, \le\, j\, \le\, n $$

Definition 6.

Let \( A = (h_{D} (u_{\mu } ),g_{D} (u_{\mu } )) \) and \( B = (h_{D} (u_{\nu } ),g_{D} (u_{\nu } )) \) be two IDHFEs; then the similarity aggregation matrix \( R_{\aleph } \) is defined as:

$$ R_{\aleph } = (r_{\mu \nu } )_{m \times m} ,\quad {\text{where}}\;r_{\mu \nu } = \frac{1}{n}\sum\limits_{j = 1}^{n} {x_{{c_{j} (\mu \nu )}} } $$
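Definitions 5 and 6 translate into a per-attribute pairwise matrix followed by an element-wise arithmetic mean; averaging the printed attribute matrices of Sect. 4.2 does reproduce \( R_{\aleph} \) there, which is why the mean form is used in this sketch (again building on the earlier functions):

```python
from typing import Sequence

def similarity_matrix(column: Sequence[DHFE]) -> List[List[float]]:
    # R_{c_j} of Definition 5: pairwise similarities of the m alternatives
    # under a single attribute c_j.
    m = len(column)
    return [[s_idhfe(column[u], column[v]) for v in range(m)] for u in range(m)]

def aggregation_matrix(D: Sequence[Sequence[DHFE]]) -> List[List[float]]:
    # R_aleph of Definition 6: element-wise mean of the n attribute matrices.
    m, n = len(D), len(D[0])
    Rs = [similarity_matrix([D[u][j] for u in range(m)]) for j in range(n)]
    return [[sum(R[u][v] for R in Rs) / n for v in range(m)] for u in range(m)]
```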

4 Application of Proposed Similarity Degree in MADM

4.1 Method of Proposed Similarity Degree in MADM

When we complete the data of IDHFEs based on maximum similarity, we need to operate on and reorganize the information system several times, so we adopt the following notation:

  1. (1)

    Denote the similarity matrix as \( R_{{c_{j} }}^{t} \) after the tth operation on the information system;

  2. (2)

    Denote the similarity aggregation matrix as \( R_{{\aleph (c_{j} )}}^{t} \) after the tth operation on the information system;

  3. (3)

    Denote the information system as \( D^{t} \) after the tth operation;

  4. (4)

    Denote the fully completed information system as \( D \).

Note: For the two IDHFEs whose similarity degree is the greatest, we complete each one by appending the weighted sum of the other until both become CDHFEs, and then order the membership and non-membership degrees of the resulting CDHFEs from largest to smallest.
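The padding rule in the Note is terse, and the filled-in values of the worked example cannot all be reproduced from the text alone, so the following sketch is only one plausible reading: each missing slot is filled with the weighted sum of the partner element's available values, after which both sets are reordered descending.

```python
def complete_pair(a: DHFE, b: DHFE, w: List[float], n_star: int) -> None:
    # Hypothetical padding rule (one reading of the Note): append the
    # weighted sum of the partner's values until each set holds n_star
    # values, then order descending as the Note prescribes.
    def pad(own: List[float], partner: List[float]) -> None:
        fill = sum(wi * vi for wi, vi in zip(w, partner))
        while len(own) < n_star:
            own.append(fill)
        own.sort(reverse=True)
    pad(a.h, b.h); pad(b.h, a.h)
    pad(a.g, b.g); pad(b.g, a.g)
```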

Based on the above analysis, when the information in the system is incomplete, we use the following method to find the best alternative in a MADM problem.

Let \( U = \left\{ {u_{1} ,u_{2} , \ldots ,u_{m} } \right\} \) be a set of alternatives and \( C = \left\{ {c_{1} ,c_{2} , \ldots ,c_{n} } \right\} \) be a set of attributes. We construct the decision matrix \( D^{0} = (d_{\mu \nu } )_{m \times n} \), where \( d_{\mu \nu } = (h_{\mu \nu } ,g_{\mu \nu } ) \) \( (1 \le \mu \le m,\;1 \le \nu \le n) \). The method involves the following steps:

  • Step 1. Let \( t = 1 \) and calculate the similarity degrees \( s_{{c_{j} }} (u_{\mu } ,u_{\nu } ) \) of any two alternatives \( u_{\mu } \) and \( u_{\nu } \) under attribute \( c_{j} \) in \( D^{t - 1} \) to obtain the similarity matrix \( R_{{c_{j} }}^{t} = (x_{\mu \nu } )_{m \times m} \) and the similarity aggregation matrix \( R_{{\aleph (c_{j} )}}^{t} = (r_{\mu \nu } )_{m \times m} \);

  • Step 2. Scan the similarity aggregation matrix \( R_{{\aleph (c_{j} )}}^{t} = (r_{\mu \nu } )_{m \times m} \) and find the pair of alternatives containing IDHFEs whose similarity is the maximum. According to the Note, we update matrix \( D^{t - 1} \) to obtain matrix \( D^{t} \):

    1. (i)

      If \( D^{t} \ne D \), set \( t = t + 1 \) and return to Step 1;

    2. (ii)

      If \( D^{t} = D \), go to Step 3;

  • Step 3. Calculate the similarity degrees between the alternatives \( u_{{c_{j} }}^{i} \) and the ideal alternatives \( u_{{c_{j} }}^{*} \) using formula (2) (i = 1, 2, …, m; j = 1, 2, …, n);

  • Step 4. Rank the alternatives according to the results of Step 3;

  • Step 5. Select the best alternative according to the ranking in Step 4 (the whole procedure is sketched in code below).
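Putting the pieces together, the procedure can be sketched as the loop below. The ideal alternative is built position-wise (largest membership and smallest non-membership per slot), which reproduces the \( u_{{c_{j} }}^{*} \) of Sect. 4.2; the final aggregation of the per-attribute similarities into a single ranking score is our simplification (a plain sum, whereas the paper's \( \hat{s} \) may be weighted).

```python
def rank_alternatives(D: List[List[DHFE]], w: List[float], n_star: int) -> List[int]:
    # Illustrative skeleton of Steps 1-5 of Sect. 4.1 (not verbatim).
    m, n = len(D), len(D[0])
    # Steps 1-2: repeatedly complete the most similar pair still holding IDHFEs.
    while any(not is_complete(d, n_star) for row in D for d in row):
        R = aggregation_matrix(D)
        cand = [(R[u][v], u, v) for u in range(m) for v in range(u + 1, m)
                if any(not is_complete(D[x][j], n_star)
                       for x in (u, v) for j in range(n))]
        _, mu, nu = max(cand)
        for j in range(n):
            complete_pair(D[mu][j], D[nu][j], w, n_star)
    # Ideal alternative per attribute: position-wise max of h, min of g
    # (this reproduces u*_{c_j} of the worked example in Sect. 4.2).
    ideal = [DHFE(h=[max(D[u][j].h[k] for u in range(m)) for k in range(n_star)],
                  g=[min(D[u][j].g[k] for u in range(m)) for k in range(n_star)])
             for j in range(n)]
    # Steps 3-5: rank by aggregated similarity to the ideal alternative.
    total = [sum(s_idhfe(D[u][j], ideal[j]) for j in range(n)) for u in range(m)]
    return sorted(range(m), key=lambda u: -total[u])
```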

4.2 Practical Example

Here we take the example from [16] to illustrate the utility of the proposed method, and we show that the results obtained using the proposed method are the same as those of Ref. [16].

There is an investment company that wants to invest a sum of money in the best option. A panel offers four possible alternatives: \( u_{1} \) is a car company; \( u_{2} \) is a food company; \( u_{3} \) is a computer company; \( u_{4} \) is an arms company. The investment company must make a decision according to the following three attributes: \( c_{1} \), the risk analysis; \( c_{2} \), the growth analysis; \( c_{3} \), the environmental impact analysis. The attribute weight vectors for the membership degrees and non-membership degrees are given as w = (0.35, 0.25, 0.40)T and z = (0.30, 0.40, 0.30)T, respectively. The four possible alternatives \( u_{i} \) (i = 1, 2, 3, 4) are evaluated with dual hesitant fuzzy information by three decision makers under the three attributes \( c_{j} \) (j = 1, 2, 3), as listed in the following dual hesitant fuzzy decision matrix \( D^{0} \) (“\( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\Delta } } \)” represents a missing value):

$$ D^{0} = \left( {\begin{array}{*{20}c} {\left\{ {\left\{ {0.5,0.4,0.3} \right\},\left\{ {0.4,0.3} \right\}} \right\}} & {\left\{ {\left\{ {0.6,0.4,{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\Delta } }} \right\},\left\{ {0.4,0.2} \right\}} \right\}} & {\left\{ {\left\{ {0.3,0.2,0.1} \right\},\left\{ {0.6,0.5} \right\}} \right\}} \\ {\left\{ {\left\{ {0.7,0.6,0.4} \right\},\left\{ {0.3,0.2} \right\}} \right\}} & {\left\{ {\left\{ {0.7,0.6,{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\Delta } }} \right\},\left\{ {0.3,0.2} \right\}} \right\}} & {\left\{ {\left\{ {0.7,0.6,0.4} \right\},\left\{ {0.2,0.1} \right\}} \right\}} \\ {\left\{ {\left\{ {0.6,0.4,0.3} \right\},\left\{ {0.3,{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\Delta } }} \right\}} \right\}} & {\left\{ {\left\{ {0.6,0.5,{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\Delta } }} \right\},\left\{ {0.3,{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\Delta } }} \right\}} \right\}} & {\left\{ {\left\{ {0.6,0.5,{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\Delta } }} \right\},\left\{ {0.3,0.1} \right\}} \right\}} \\ {\left\{ {\left\{ {0.8,0.7,0.6} \right\},\left\{ {0.2,0.1} \right\}} \right\}} & {\left\{ {\left\{ {0.7,0.6,{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\Delta } }} \right\},\left\{ {0.2,{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\Delta } }} \right\}} \right\}} & {\left\{ {\left\{ {0.4,0.3,{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\Delta } }} \right\},\left\{ {0.2,0.1} \right\}} \right\}} \\ \end{array} } \right) $$
  • Step 1. Let \( t = 1 \) and calculate the similarity degrees \( s_{{c_{j} }} (u_{\mu } ,u_{\nu } ) \) of any two alternatives \( u_{\mu } \) and \( u_{\nu } \) under attribute \( c_{j} \) to obtain the similarity matrix \( R_{{c_{j} }}^{1} = (x_{\mu \nu } )_{4 \times 4} \), where \( 1 \le \mu ,\nu \le 4,\;1 \le j \le 3 \). Then we get the similarity aggregation matrix \( R_{{\aleph (c_{j} )}}^{1} = (r_{\mu \nu } )_{4 \times 4} \) of \( c_{j} \).

    $$ \begin{aligned} R_{{c_{1} }}^{1} = \left( {\begin{array}{*{20}c} 1 & {0.942} & {0.968} & {0.852} \\ {0.942} & 1 & {0.959} & {0.976} \\ {0.968} & {0.959} & 1 & {0.895} \\ {0.852} & {0.976} & {0.895} & 1 \\ \end{array} } \right)\,R_{{c_{2} }}^{1} = \left( {\begin{array}{*{20}c} 1 & {0.965} & {0.971} & {0.926} \\ {0.965} & 1 & {0.981} & {0.983} \\ {0.971} & {0.981} & 1 & {0.983} \\ {0.926} & {0.983} & {0.983} & 1 \\ \end{array} } \right) \hfill \\ R_{{c_{3} }}^{1} = \left( {\begin{array}{*{20}c} 1 & {0.599} & {0.763} & {0.729} \\ {0.599} & 1 & {0.924} & {0.728} \\ {0.763} & {0.924} & 1 & {0.908} \\ {0.729} & {0.728} & {0.908} & 1 \\ \end{array} } \right)\,R_{{\aleph (c_{j} )}}^{1} = \left( {\begin{array}{*{20}c} 1 & {0.835} & {0.900} & {0.835} \\ {0.835} & 1 & {0.954} & {0.895} \\ {0.900} & {0.954} & 1 & {0.928} \\ {0.835} & {0.895} & {0.928} & 1 \\ \end{array} } \right) \hfill \\ \end{aligned} $$
  • Step 2. We can easily see from \( R_{{\aleph (c_{j} )}}^{1} \) that \( r_{23} = r_{32} \) is the largest entry, i.e., the similarity of \( u_{2} \) and \( u_{3} \) is the largest. Updating matrix \( D^{0} \), we obtain matrix \( D^{1} \).

    $$ D^{1} = \left( {\begin{array}{*{20}c} {\left\{ {\left\{ {0.5,0.4,0.3} \right\},\left\{ {0.4,0.3} \right\}} \right\}} & {\left\{ {\left\{ {0.6,0.4,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\Delta } } \right\},\left\{ {0.4,0.2} \right\}} \right\}} & {\left\{ {\left\{ {0.3,0.2,0.1} \right\},\left\{ {0.6,0.5} \right\}} \right\}} \\ {\left\{ \begin{aligned} & \left\{ {0.7,0.6,0.4} \right\}, \\ & \left\{ {0.3,0.2,0.105} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.7,0.6,0.38} \right\}, \\ & \left\{ {0.3,0.2,0.105} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.7,0.6,0.4} \right\}, \\ & \left\{ {0.2,0.13,0.1} \right\} \\ \end{aligned} \right\}} \\ {\left\{ \begin{aligned} & \left\{ {0.6,0.4,0.3} \right\}, \\ & \left\{ {0.3,0.155,0.155} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.6,0.5,0.45} \right\}, \\ & \left\{ {0.3,0.155,0.155} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.6,0.57,0.5} \right\}, \\ & \left\{ {0.3,0.1,0.095} \right\} \\ \end{aligned} \right\}} \\ {\left\{ {\left\{ {0.8,0.7,0.6} \right\},\left\{ {0.2,0.1} \right\}} \right\}} & {\left\{ {\left\{ {0.7,0.6,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\Delta } } \right\},\left\{ {0.2,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\Delta } } \right\}} \right\}} & {\left\{ {\left\{ {0.4,0.3,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\Delta } } \right\},\left\{ {0.2,0.1} \right\}} \right\}} \\ \end{array} } \right) $$

Thus \( D^{1} \ne D \); return to Step 1.

Let \( t = 2 \) and calculate the similarity degrees \( s_{{c_{j} }} (u_{\mu } ,u_{\nu } ) \) of any two alternatives \( u_{\mu } \) and \( u_{\nu } \) of attribute \( c_{j} \) to obtain the similarity matrix \( R_{{c_{j} }}^{2} = (x_{\mu \nu } )_{4 \times 4} \), where \( 1 \,\le\, \mu, \,\nu\, \le\, 4,1 \,\le\, j \,\le\, 3 \). Then we get the similarity aggregation matrix \( R_{{\aleph (c_{j} )}}^{2} = (r_{\mu \nu } )_{4 \times 4} \) of \( c_{j} \).

$$ \begin{aligned} R_{{c_{1} }}^{2} = \left( {\begin{array}{*{20}c} 1 & {0.948} & {0.997} & {0.852} \\ {0.948} & 1 & {0.966} & {0.968} \\ {0.997} & {0.966} & 1 & {0.882} \\ {0.852} & {0.968} & {0.882} & 1 \\ \end{array} } \right)\,R_{{c_{2} }}^{2} = \left( {\begin{array}{*{20}c} 1 & {0.901} & {0.932} & {0.926} \\ {0.901} & 1 & {0.996} & {0.948} \\ {0.932} & {0.996} & 1 & {0.956} \\ {0.926} & {0.948} & {0.956} & 1 \\ \end{array} } \right) \hfill \\ R_{{c_{3} }}^{2} = \left( {\begin{array}{*{20}c} 1 & {0.650} & {0.682} & {0.729} \\ {0.650} & 1 & {0.998} & {0.730} \\ {0.682} & {0.998} & 1 & {0.737} \\ {0.729} & {0.730} & {0.737} & 1 \\ \end{array} } \right)\,R_{{\aleph (c_{j} )}}^{2} = \left( {\begin{array}{*{20}c} 1 & {0.833} & {0.871} & {0.835} \\ {0.833} & 1 & {0.987} & {0.882} \\ {0.871} & {0.987} & 1 & {0.858} \\ {0.835} & {0.882} & {0.858} & 1 \\ \end{array} } \right) \hfill \\ \end{aligned} $$

Since the data of \( u_{2} \) and \( u_{3} \) are now complete, we disregard \( s(u_{2} ,u_{3} ) \). We can easily see from \( R_{{\aleph (c_{j} )}}^{2} \) that \( r_{24} = r_{42} \) is the largest remaining entry, i.e., the similarity of \( u_{2} \) and \( u_{4} \) is the largest. Updating matrix \( D^{1} \), we obtain matrix \( D^{2} \).

$$ D^{2} = \left( {\begin{array}{*{20}c} {\left\{ {\left\{ {0.5,0.4,0.3} \right\},\left\{ {0.4,0.3} \right\}} \right\}} & {\left\{ {\left\{ {0.6,0.4,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\Delta } } \right\},\left\{ {0.4,0.2} \right\}} \right\}} & {\left\{ {\left\{ {0.3,0.2,0.1} \right\},\left\{ {0.6,0.5} \right\}} \right\}} \\ {\left\{ \begin{aligned} & \left\{ {0.7,0.6,0.4} \right\}, \\ & \left\{ {0.3,0.2,0.105} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.7,0.6,0.38} \right\}, \\ & \left\{ {0.3,0.2,0.105} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.7,0.6,0.4} \right\}, \\ & \left\{ {0.2,0.13,0.1} \right\} \\ \end{aligned} \right\}} \\ {\left\{ \begin{aligned} & \left\{ {0.6,0.4,0.3} \right\}, \\ & \left\{ {0.3,0.155,0.155} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.6,0.5,0.45} \right\}, \\ & \left\{ {0.3,0.155,0.155} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.6,0.57,0.5} \right\}, \\ & \left\{ {0.3,0.1,0.095} \right\} \\ \end{aligned} \right\}} \\ {\left\{ \begin{aligned} & \left\{ {0.8,0.7,0.6} \right\}, \\ & \left\{ {0.2,0.197,0.1} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.7,0.6,0.564} \right\}, \\ & \left\{ {0.2,0.197,0.197} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.4,0.57,0.3} \right\}, \\ & \left\{ {0.2,0.1,0.1425} \right\} \\ \end{aligned} \right\}} \\ \end{array} } \right) $$

Thus \( D^{2} \ne D \); return to Step 1.

Let \( t = 3 \) and calculate the similarity degrees \( s_{{c_{j} }} (u_{\mu } ,u_{\nu } ) \) of any two alternatives \( u_{\mu } \) and \( u_{\nu } \) under attribute \( c_{j} \) to obtain the similarity matrix \( R_{{c_{j} }}^{3} = (x_{\mu \nu } )_{4 \times 4} \), where \( 1 \le \mu ,\nu \le 4,\;1 \le j \le 3 \). Then we get the similarity aggregation matrix \( R_{{\aleph (c_{j} )}}^{3} = (r_{\mu \nu } )_{4 \times 4} \) of \( c_{j} \).

$$ \begin{aligned} R_{{c_{1} }}^{3} = \left( {\begin{array}{*{20}c} 1 & {0.948} & {0.997} & {0.871} \\ {0.948} & 1 & {0.966} & {0.979} \\ {0.997} & {0.966} & 1 & {0.898} \\ {0.871} & {0.979} & {0.898} & 1 \\ \end{array} } \right)\,R_{{c_{2} }}^{3} = \left( {\begin{array}{*{20}c} 1 & {0.901} & {0.932} & {0.864} \\ {0.901} & 1 & {0.996} & {0.996} \\ {0.932} & {0.996} & 1 & {0.985} \\ {0.864} & {0.996} & {0.985} & 1 \\ \end{array} } \right) \hfill \\ R_{{c_{3} }}^{3} = \left( {\begin{array}{*{20}c} 1 & {0.650} & {0.682} & {0.754} \\ {0.650} & 1 & {0.998} & {0.968} \\ {0.682} & {0.998} & 1 & {0.973} \\ {0.754} & {0.968} & {0.973} & 1 \\ \end{array} } \right)\,R_{{\aleph (c_{j} )}}^{3} = \left( {\begin{array}{*{20}c} 1 & {0.833} & {0.871} & {0.829} \\ {0.833} & 1 & {0.987} & {0.981} \\ {0.871} & {0.987} & 1 & {0.973} \\ {0.829} & {0.981} & {0.973} & 1 \\ \end{array} } \right) \hfill \\ \end{aligned} $$

Since the data of \( u_{2} \), \( u_{3} \), and \( u_{4} \) are now complete, we disregard \( s(u_{2} ,u_{3} ) \), \( s(u_{2} ,u_{4} ) \), and \( s(u_{3} ,u_{4} ) \). We can easily see from \( R_{{\aleph (c_{j} )}}^{3} \) that \( r_{13} = r_{31} \) is the largest remaining entry, i.e., the similarity of \( u_{1} \) and \( u_{3} \) is the largest. Updating matrix \( D^{2} \), we obtain matrix \( D^{3} \).

$$ D^{3} = \left( {\begin{array}{*{20}c} {\left\{ \begin{aligned} & \left\{ {0.5,0.4,0.3} \right\}, \\ & \left\{ {0.4,0.3,0.2058} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.6,0.515,0.4} \right\}, \\ & \left\{ {0.4,0.2058,0.2} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.3,0.2,0.1} \right\}, \\ & \left\{ {0.6,0.5,0.168} \right\} \\ \end{aligned} \right\}} \\ {\left\{ \begin{aligned} & \left\{ {0.7,0.6,0.4} \right\}, \\ & \left\{ {0.3,0.2,0.105} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.7,0.6,0.38} \right\}, \\ & \left\{ {0.3,0.2,0.105} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.7,0.6,0.4} \right\}, \\ & \left\{ {0.2,0.13,0.1} \right\} \\ \end{aligned} \right\}} \\ {\left\{ \begin{aligned} & \left\{ {0.6,0.4,0.3} \right\}, \\ & \left\{ {0.3,0.155,0.155} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.6,0.5,0.45} \right\}, \\ & \left\{ {0.3,0.155,0.155} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.6,0.57,0.5} \right\}, \\ & \left\{ {0.3,0.1,0.095} \right\} \\ \end{aligned} \right\}} \\ {\left\{ \begin{aligned} & \left\{ {0.8,0.7,0.6} \right\}, \\ & \left\{ {0.2,0.197,0.1} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.7,0.6,0.564} \right\}, \\ & \left\{ {0.2,0.197,0.197} \right\} \\ \end{aligned} \right\}} & {\left\{ \begin{aligned} & \left\{ {0.4,0.57,0.3} \right\}, \\ & \left\{ {0.2,0.1,0.1425} \right\} \\ \end{aligned} \right\}} \\ \end{array} } \right) $$

Thus \( D^{3} = D \); go to Step 3.

  • Step 3. Calculate the similarity degrees between the alternatives \( u_{{c_{j} }}^{i} \) and the ideal alternatives \( u_{{c_{j} }}^{*} \) using formula (2) (i = 1, 2, …, m; j = 1, 2, …, n):

    1. (1)

      The ideal alternatives \( u_{{c_{j} }}^{*} \): \( u_{{c_{1} }}^{*} = (\left\{ {0.8,0.7,0.6} \right\},\left\{ {0.2,0.155,0.1} \right\}) \); \( u_{{c_{2} }}^{*} = (\left\{ {0.7,0.6,0.564} \right\},\left\{ {0.2,0.155,0.105} \right\}); \) \( u_{{c_{3} }}^{*} = (\left\{ {0.7,0.6,0.5} \right\},\left\{ {0.2,0.1,0.095} \right\}) \)

    2. (2)

      The similarity degrees:

      $$ \begin{aligned} & s_{DHFEs} (u_{{c_{1} }}^{1} ,u_{{c_{1} }}^{*} ) = 0.852,s_{DHFEs} (u_{{c_{1} }}^{2} ,u_{{c_{1} }}^{*} ) = 0.978,s_{DHFEs} (u_{{c_{1} }}^{3} ,u_{{c_{1} }}^{*} ) = 0.897, \\ & s_{DHFEs} (u_{{c_{1} }}^{4} ,u_{{c_{1} }}^{*} ) = 0.999; \\ & s_{DHFEs} (u_{{c_{2} }}^{1} ,u_{{c_{2} }}^{*} ) = 0.962,s_{DHFEs} (u_{{c_{2} }}^{2} ,u_{{c_{2} }}^{*} ) = 0.993,s_{DHFEs} (u_{{c_{2} }}^{3} ,u_{{c_{2} }}^{*} ) = 0.980, \\ & s_{DHFEs} (u_{{c_{2} }}^{4} ,u_{{c_{2} }}^{*} ) = 0.997; \\ & s_{DHFEs} (u_{{c_{3} }}^{1} ,u_{{c_{3} }}^{*} ) = 0.598,s_{DHFEs} (u_{{c_{3} }}^{2} ,u_{{c_{3} }}^{*} ) = 0.998, s_{DHFEs} (u_{{c_{3} }}^{3} ,u_{{c_{3} }}^{*} ) = 0.955, \\ & s_{DHFEs} (u_{{c_{3} }}^{4} ,u_{{c_{3} }}^{*} ) = 0.955. \\ \end{aligned} $$
  • Step 4. Rank the alternatives according to the results of Step 3;

    $$ \hat{s}_{{u_{2} }} > \hat{s}_{{u_{4} }} > \hat{s}_{{u_{3} }} > \hat{s}_{{u_{1} }} $$
  • Step 5. Select the best alternative according to the Step 4.

    $$ u_{2} \succ u_{4} \succ u_{3} \succ u_{1} $$

Therefore, \( u_{2} \) is the best alternative to choose.

4.3 Advantages of the Proposed Similarity Measures

  1. (i)

    Singh proposed distance and similarity measures for MADM with DHFSs in [16]. The result of the proposed method matches that of Singh's existing method. However, the method in [16] relies on the decision maker's preference parameter being known in advance. The approach we propose completes the missing data when the decision maker's risk preference is unknown, extracting rules from the existing information to solve the problem of missing information. Thus the proposed method is more practical and general.

  2. (ii)

    As mentioned above, we define the similarity degree for IDHFEs and give some related properties. Using the maximum-similarity method to complete the IDHFEs not only preserves the maximum similarity among the hesitant fuzzy information but also solves the problem of information loss in IDHFSs.

5 Conclusions

In the real world, the HFS or DHFS is adequate for dealing with the vagueness of decision makers' judgments. However, owing to the preferences of experts and the influence of external factors, some information values cannot be given, so the fuzzy information is incomplete. To solve this problem, we propose a method for completing information values based on the maximum similarity degree in IDHFSs. An important advantage of the proposed method is that it not only solves the problem of information loss in IDHFSs but also preserves the maximum similarity among the hesitant fuzzy information.

Future work will extend the proposed method to wider areas such as machine learning, clustering, and reasoning.