1 Introduction

As a generalization of Zadeh’s fuzzy set theory (Zadeh 1965), Atanassov (1986) initiated the concept of intuitionistic fuzzy sets (IFSs) to deal with uncertainty more effectively. Unlike fuzzy sets, which are characterized by membership functions alone, intuitionistic fuzzy sets also incorporate non-membership functions for a more elaborate description of uncertain information. The concept of vague sets proposed by Gau and Buehrer (1993) is regarded as another extension of fuzzy sets. Bustince and Burillo (1996) later pointed out that the notion of intuitionistic fuzzy sets and that of vague sets coincide. Intuitionistic fuzzy sets are agile and flexible in dealing with vagueness and uncertainty, so the theory of intuitionistic fuzzy sets has been widely used in many areas, such as uncertainty reasoning and decision making in uncertain environments.

The definition of similarity measures between IFSs is one of the most interesting topics in IFS theory. A similarity measure is defined to compare the information carried by IFSs. As an important tool for decision making (Szmidt and Kacprzyk 2002), pattern recognition (Papakostas 2013; Ngan 2016), machine learning (Szmidt and Kacprzyk 2004) and image processing (Balasubramaniam and Ananthi 2014), the similarity measure between IFSs has received much attention in recent years.

Szmidt and Kacprzyk (2000) extended the well-known Hamming distance and Euclidean distance to construct intuitionistic fuzzy similarity measures and compared them with the approaches used for ordinary fuzzy sets. However, Wang and Xin (2005) argued that the distance measures in Szmidt and Kacprzyk (2000) were not effective in some cases; therefore, several new distance measures were proposed and applied to pattern recognition (Wang and Xin 2005). Grzegorzewski (2004) also extended the Hamming distance, the Euclidean distance and their normalized counterparts to IFSs. Later, Chen (2007) pointed out, by means of counterexamples, that some errors existed in Grzegorzewski (2004). Hung and Yang (2004) extended the Hausdorff distance to IFSs and proposed three similarity measures. On the other hand, instead of extending the well-known measures, some studies defined new similarity measures for IFSs. Li and Cheng (2002) suggested a new similarity measure for IFSs based on the membership degree and the non-membership degree. Afterward, Li (2004) defined two dissimilarity measures between intuitionistic fuzzy sets of a finite set and proved that both of these measures are metrics. Mitchell (2003) showed that the similarity measure of Li and Cheng (2002) had some counterintuitive cases and modified it from a statistical point of view. Moreover, Liang and Shi (2003) presented some examples to show that the similarity measure proposed by Li and Cheng (2002) was not reasonable under some conditions and therefore proposed several new similarity measures for IFSs. Li et al. (2007) analyzed, compared and summarized the existing similarity measures between IFSs/vague sets by their counterintuitive examples in pattern recognition. Ye (2011) conducted a similar comparative study of the existing similarity measures between IFSs and proposed a cosine similarity measure and a weighted cosine similarity measure. Hwang et al. (2012) proposed a similarity measure for IFSs in which the Sugeno integral is used for aggregation, and applied it to clustering problems. Xu (2007) introduced a series of similarity measures for IFSs and applied them to multiple attribute decision-making problems based on intuitionistic fuzzy information. Xu and Chen (2008) introduced a series of distance and similarity measures, which are various combinations and generalizations of the weighted Hamming distance, the weighted Euclidean distance and the weighted Hausdorff distance. Xu and Yager (2009) developed a similarity measure between IFSs and applied it to consensus analysis in group decision making based on intuitionistic fuzzy preference relations. Xia and Xu (2010) proposed a series of distance measures based on the intuitionistic fuzzy point operators.

In addition to the aforementioned studies, some attempts have been made to define similarity measures based on the relationships between the distance measure, the similarity measure and the entropy of IFSs. Zeng and Guo (2008) investigated the relationship among the normalized distance, the similarity measure, the inclusion measure and the entropy of interval-valued fuzzy sets. It was also shown that the similarity measure, the inclusion measure and the entropy of interval-valued fuzzy sets can be induced by the normalized distance of interval-valued fuzzy sets based on their axiomatic definitions. Wei et al. (2011) introduced an approach to construct similarity measures from entropy measures for IFSs.

Besides, many other kinds of similarity measures between IFSs are emerging. Boran and Akay (2014) proposed a new general type of similarity measure for IFSs with two parameters, expressing the \(L_{p}\) norm and the level of uncertainty, respectively. This similarity measure also behaves well on the known counterintuitive cases. Zhang and Yu (2013) presented a new distance measure based on interval comparison, where IFSs are transformed into symmetric triangular fuzzy numbers. A comparison with the widely used methods indicated that the method in Zhang and Yu (2013) preserves much more of the original information. Li and Deng (2012) introduced an axiomatic definition of the similarity measure of IFSs, investigated the relationship between the entropy and the similarity measure of IFSs in detail, and proved that the similarity measure and the entropy of IFSs can be transformed into each other based on their axiomatic definitions. Papakostas (2013) investigated the main theoretical and computational properties of distance and similarity measures, as well as the relationships between them, and carried out a comparison between distance and similarity measures from the point of view of pattern recognition.

Among the proposed similarity measures, some are constructed from well-known distance measures, such as the Hamming distance (Wang and Xin 2005), the Euclidean distance (Szmidt and Kacprzyk 2000) and the Hausdorff distance (Grzegorzewski 2004; Chen 2007; Hung and Yang 2004). Other similarity measures are defined based on linear or nonlinear relationships of the membership and non-membership functions of IFSs (Li and Cheng 2002; Liang and Shi 2003). There are also other kinds of similarity measures, e.g., similarity defined by entropy measures for IFSs, similarity induced by interval comparison and cosine similarity (Ye 2011; Zeng and Guo 2008; Wei et al. 2011; Boran and Akay 2014; Zhang and Yu 2013; Li and Deng 2012; Papakostas 2013).

On the basis of the above-mentioned research articles, Szmidt (2014) has provided a highly informative survey and in-depth analysis of the major IFS similarity measures in the literature. The ideas of two-term (i.e., membership and non-membership values) and three-term (i.e., membership, non-membership and hesitancy values) representations of IFSs have been extensively discussed and employed by Szmidt (2014) to characterize existing IFS distance and similarity measures. Further, Szmidt (2014) provides a detailed analysis of existing works concerning the correlation of IFSs, a measure closely related to similarity and distance, again in terms of the two-term and three-term representations.

A closer examination of the existing similarity measures between IFSs shows that it is highly non-trivial to construct a truly robust IFS similarity measure. Some of the existing measures fail to fully satisfy the axiomatic definition of similarity, as demonstrated by the counterintuitive examples that researchers have identified for them. Others lack a definite physical meaning and have complicated expressions. We can note that most of the existing measures, when proposed, were first and foremost postulated at the “formula” level. It is therefore desirable to define an easier-to-understand similarity measure for IFSs, and the definition of similarity measures remains an open problem attracting growing interest.

In this paper, we propose a new similarity measure with a relatively simple expression. The proposed similarity measure can be considered as the consistency between two IFSs. We define it by direct operations on the membership function, non-membership function, hesitation function and the upper bound of the membership function of two IFSs, instead of basing it on a distance measure or on the relationship of the membership and non-membership functions. The computation of the proposed similarity involves only multiplication and square root extraction, without any additional parameters, which keeps it simple and concise. Illustrative examples reveal that the proposed measure satisfies the properties of the axiomatic definition of similarity measures. In addition, several comparative examples are provided to show the performance of the proposed similarity measure.

The remainder of this paper is organized as follows. Section 2 presents some preliminary definitions related to IFSs and similarity measures between IFSs. Some classical existing similarity measures are introduced in Sect. 3. The new similarity measure, along with its interpretation, is presented in Sect. 4. A comparison between the proposed similarity measure and the existing similarity measures is carried out in Sect. 5. The application of the proposed similarity measure is presented in Sect. 6, followed by the conclusion of this paper.

2 Preliminaries

In this section, we briefly recall some basic knowledge related to IFSs and similarity measure to facilitate subsequent exposition.

Definition 1

Let \(X=\{x_1 ,x_2 ,\ldots ,x_n \}\) be a universe of discourse, then a fuzzy set A in X is defined as follows:

$$\begin{aligned} A=\left\{ {\left\langle {x,\mu _A (x)} \right\rangle \left| {x\in X} \right. } \right\} \end{aligned}$$
(1)

where \(\mu _A (x){:}\,X\rightarrow [0,1]\) is the membership degree.

Definition 2

An IFS A in X defined by Atanassov can be written as:

$$\begin{aligned} A=\left\{ {\left\langle {x,\mu _A (x),v_A (x)} \right\rangle \left| {x\in X} \right. } \right\} \end{aligned}$$
(2)

where \(\mu _A (x){:}\,X\rightarrow [0,1]\) and \(v_A (x){:}\,X\rightarrow [0,1]\) are the membership degree and the non-membership degree, respectively, with the condition:

$$\begin{aligned} 0\le \mu _A (x)+v_A (x)\le 1 \end{aligned}$$
(3)

\(\pi _A (x)\) determined by the following expression:

$$\begin{aligned} \pi _A (x)=1-\mu _A (x)-v_A (x) \end{aligned}$$
(4)

is called the hesitancy degree of the element \(x\in X\) to the set A, and \(\pi _A (x)\in [0,1]\), \(\forall x\in X\).

\(\pi _A (x)\) is also called the intuitionistic index of x to A. Greater \(\pi _A (x)\) indicates more vagueness on x. Obviously, when \( \pi _A (x)=0\), \(\forall x\in X\), the IFS degenerates into an ordinary fuzzy set.

In the sequel, the couple \(\left\langle {\mu _A (x),v_A (x)} \right\rangle \) is also called an intuitionistic fuzzy value (IFV) for clarity. Let IFSs(X) denote the set of all IFSs in X.

It is worth noting that besides Definition 2, there are other possible representations of IFSs in the literature. Hong and Kim (1999) proposed to use the interval representation \(\left[ {\mu _A (x),1-v_A (x)} \right] \) of an intuitionistic fuzzy set A in X instead of the pair \(\left\langle {\mu _A (x),v_A (x)} \right\rangle \). This approach is equivalent to the interval-valued fuzzy set interpretation of IFSs, where \(\mu _A (x)\) and \(1-v_A (x)\) represent the lower and upper bounds of the membership degree, respectively. Obviously, \(\left[ {\mu _A (x),1-v_A (x)} \right] \) is a valid interval, since \(\mu _A (x)\le 1-v_A (x)\) always holds when \(\mu _A (x)+v_A (x)\le 1\).
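To make the interval representation concrete, the following minimal Python sketch (our own illustration; the class name IFV and its members are not from the cited literature) stores the pair \(\left\langle {\mu ,v} \right\rangle \) and derives the hesitancy degree and the interval bounds:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IFV:
    """An intuitionistic fuzzy value <mu, nu> with mu + nu <= 1."""
    mu: float  # membership degree, in [0, 1]
    nu: float  # non-membership degree, in [0, 1]

    def __post_init__(self):
        if not (0 <= self.mu <= 1 and 0 <= self.nu <= 1 and self.mu + self.nu <= 1):
            raise ValueError("require mu, nu in [0, 1] and mu + nu <= 1")

    @property
    def pi(self) -> float:
        """Hesitancy degree pi = 1 - mu - nu, Eq. (4)."""
        return 1.0 - self.mu - self.nu

    @property
    def interval(self) -> tuple:
        """Hong-Kim interval representation [mu, 1 - nu] of the membership degree."""
        return (self.mu, 1.0 - self.nu)
```

For example, IFV(0.5, 0.3).interval gives (0.5, 0.7), and its hesitancy degree pi evaluates to 0.2 (up to floating-point rounding); the interval is always valid because \(\mu \le 1-v\) follows from \(\mu +v\le 1\).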

Definition 3

For \(A\in \hbox {IFSs}(X)\) and \(B\in \hbox {IFSs}(X)\), some relations between them are defined as:

  1. (R1)

    \(A\subseteq B\) iff \(\forall x\in X\), \(\mu _A (x)\le \mu _B (x)\) and \(v_A (x)\ge v_B (x)\);

  2. (R2)

    \(A=B\) iff \(\forall x\in X\), \(\mu _A (x)=\mu _B (x)\) and \(v_A (x)=v_B (x)\);

  3. (R3)

    \(A^{C}=\left\{ {\left\langle {x,v_A (x),\mu _A (x)} \right\rangle \left| {x\in X} \right. } \right\} \), where \(A^{C}\) is the complement of A.

Definition 4

Let D denote a mapping \(D{:}\,\hbox {IFS}\times \hbox {IFS}\rightarrow [0,1]\). If \(D(A,B)\) satisfies the following properties, then \(D(A,B)\) is called a distance between \(A\in \hbox {IFSs}(X)\) and \(B\in \hbox {IFSs}(X)\):

  1. (DP1)

    \(0\le D(A,B)\le 1\);

  2. (DP2)

    \(D(A,B)=0\), if and only if \(A=B\);

  3. (DP3)

    \(D(A,B)=D(B,A)\);

  4. (DP4)

    If \(A\subseteq B\subseteq C\), then \(D(A,B)\le D(A,C)\), and \(D(B,C)\le D(A,C)\).

Definition 5

A mapping \(S{:}\,\hbox {IFS}\times \hbox {IFS}\rightarrow [0,1]\) is called a degree of similarity between \(A\in \hbox {IFSs}(X)\) and \(B\in \hbox {IFSs}(X)\), if \(S(A,B)\) satisfies the following properties:

  1. (SP1)

    \(0\le S(A,B)\le 1\);

  2. (SP2)

    \(S(A,B)=1\), if and only if \(A=B\);

  3. (SP3)

    \(S(A,B)=S(B,A)\);

  4. (SP4)

    If \(A\subseteq B\subseteq C\), then \(S(A,B)\ge S(A,C)\), and \(S(B,C)\ge S(A,C)\).

Because distance and similarity measures are complementary concepts, similarity measures can be used to define distance measures, and vice versa.
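For instance, it follows directly from Definitions 4 and 5 that if S satisfies (SP1)–(SP4), then

$$\begin{aligned} D(A,B)=1-S(A,B) \end{aligned}$$

satisfies (DP1)–(DP4), and conversely, so either notion can be taken as primitive.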

3 Existing similarity measures

In this section, we briefly review some existing similarity measures between intuitionistic fuzzy sets. Here we only present the expressions of these similarity measures; critical analyses of them will be presented in Sect. 5. An illustrative code sketch of several of the reviewed measures is given at the end of this section.

Let \(X=\{x_1 ,x_2 ,\ldots ,x_n \}\) be a universe of discourse. Suppose that A and B are two intuitionistic fuzzy sets in the universe of X. The existing similarity measures between the intuitionistic fuzzy sets A and B are reviewed as follows:

  1. (1)

    Dengfeng and Chuntian’s similarity measure \(S_\mathrm{DC}\) (Li and Cheng 2002):

    $$\begin{aligned} S_\mathrm{DC} (A,B)=1-\root p \of {\frac{\sum _{i=1}^n {\left| {\varphi _A (x_i )-\varphi _B (x_i )} \right| ^{p}} }{n}} \end{aligned}$$
    (5)

    where \(\varphi _A (x_i )=\frac{\mu _A (x_i )+1-v_A (x_i )}{2}\), \(\varphi _B (x_i )=\frac{\mu _B (x_i )+1-v_B (x_i )}{2}\).

  2. (2)

    Mitchell’s similarity measure \(S_\mathrm{HB}\) (Mitchell 2003):

    $$\begin{aligned} S_\mathrm{HB} (A,B)=\frac{1}{2}\left( {\rho _\mu (A,B)+\rho _v (A,B)} \right) \end{aligned}$$
    (6)

    where \(\rho _\mu (A,B)=1-\root p \of {\frac{\sum _{i=1}^n {\left| {\mu _A (x_i )-\mu _B (x_i )} \right| ^{p}} }{n}}\),

    $$\begin{aligned} \rho _v (A,B)=1-\root p \of {\frac{\sum _{i=1}^n {\left| {v_A (x_i )-v_B (x_i )} \right| ^{p}} }{n}}. \end{aligned}$$
  3. (3)

    Hong and Kim’s similarity measure \(S_\mathrm{HK}\) (Hong and Kim 1999):

    $$\begin{aligned}&S_\mathrm{HK} (A,B)=1\nonumber \\&\quad -\frac{\sum _{i=1}^n {\left| {\left( {\mu _A (x_i )-\mu _B (x_i )} \right) -\left( {v_A (x_i )-v_B (x_i )} \right) } \right| } }{2n}\nonumber \\ \end{aligned}$$
    (7)
  4. (4)

    Liang and Shi’s similarity measures \(S_e^p \), \(S_s^p \) and \(S_h^p \) (Liang and Shi 2003):

    $$\begin{aligned} S_e^p (A,B)=1-\root p \of {\frac{\sum _{i=1}^n {\left( {\phi _\mu (x_i )+\phi _v (x_i )} \right) ^{p}} }{n}} \end{aligned}$$
    (8)

    where

    $$\begin{aligned} \phi _\mu (x_i )= & {} \left| {\mu _A (x_i )-\mu _B (x_i )} \right| /2,\nonumber \\ \phi _v (x_i )= & {} \left| {\left( {1-v_A (x_i )} \right) -\left( {1-v_B (x_i )} \right) } \right| /2.\nonumber \\ S_s^p (A,B)= & {} 1-\root p \of {\frac{\sum _{i=1}^n {\left| {\psi _{s1} (x_i )+\psi _{s2} (x_i )} \right| ^{p}} }{n}} \end{aligned}$$
    (9)

    where

    $$\begin{aligned}&\psi _{s1} (x_i )=\left| {m_{A1} (x_i )-m_{B1} (x_i )} \right| /2, \nonumber \\&\psi _{s2} (x_i )=\left| {m_{A2} (x_i )-m_{B2} (x_i )} \right| /2,\nonumber \\&m_{A1} (x_i )=\left| {\mu _A (x_i )+m_A (x_i )} \right| /2,\nonumber \\&m_{B1} (x_i )=\left| {\mu _B (x_i )+m_B (x_i )} \right| /2,\nonumber \\&m_{A2} (x_i )=\left| {1-v_A (x_i )+m_A (x_i )} \right| /2,\nonumber \\&m_{B2} (x_i )=\left| {1-v_B (x_i )+m_B (x_i )} \right| /2,\nonumber \\&m_A (x_i )=\left| {1-v_A (x_i )+\mu _A (x_i )} \right| /2,\nonumber \\&m_B (x_i )=\left| {1-v_B (x_i )+\mu _B (x_i )} \right| /2.\nonumber \\&S_h^p (A,B)=1 -\root p \of {\frac{\sum _{i=1}^n {\left( {\eta _1 (x_i )+\eta _2 (x_i )+\eta _3 (x_i )} \right) ^{p}} }{3n}}\nonumber \\ \end{aligned}$$
    (10)

    where

    $$\begin{aligned} \eta _1 (x_i )= & {} \phi _\mu (x_i )+\phi _v (x_i ) (\text {defined in}\,S_e^p ),\\ \eta _2 (x_i )= & {} \left| {\varphi _A (x_i )-\varphi _B (x_i )} \right| (\text {defined in}\,S_\mathrm{DC} ),\\ \eta _3 (x_i )= & {} \max \left( {l_A (x_i ),l_B (x_i )} \right) -\min \left( {l_A (x_i ),l_B (x_i )} \right) ,\\ l_A (x_i )= & {} \left( {1-\mu _A (x_i )-v_A (x_i )} \right) /2,\\ l_B (x_i )= & {} \left( {1-\mu _B (x_i )-v_B (x_i )} \right) /2. \end{aligned}$$
  5. (5)

    Chen’s similarity measure \(S_\mathrm{C}\) (Chen 1995):

    $$\begin{aligned}&S_\mathrm{C} (A,B)=1\nonumber \\&\quad -\frac{\sum _{i=1}^n {\left| {\left( {\mu _A (x_i )-v_A (x_i )} \right) -\left( {\mu _B (x_i )-v_B (x_i )} \right) } \right| } }{2n}\nonumber \\ \end{aligned}$$
    (11)
  6. (6)

    Ye’s cosine similarity measure \(C_\mathrm{IFS}\) (Ye 2011) :

    $$\begin{aligned}&C_\mathrm{IFS} (A,B) =\frac{1}{n}\sum \nolimits _{i=1}^n \nonumber \\&\quad {\frac{\mu _A (x_i )\mu _B (x_i )+v_A (x_i )v_B (x_i )}{\sqrt{\left( {\mu _A (x_i )} \right) ^{2}+\left( {v_A (x_i )} \right) ^{2}}\sqrt{\left( {\mu _B (x_i )} \right) ^{2}+\left( {v_B (x_i )} \right) ^{2}}}}\nonumber \\ \end{aligned}$$
    (12)
  7. (7)

    The similarity measure \(S_\mathrm{O}\) proposed by Li et al. (2002):

    $$\begin{aligned}&S_\mathrm{O} (A,B)=1\nonumber \\&\quad -\sqrt{\frac{\sum _{i=1}^n {\left( {\left( {\mu _A (x_i )-\mu _B (x_i )} \right) ^{2}+\left( {v_A (x_i )-v_B (x_i )} \right) ^{2}} \right) } }{2n}}\nonumber \\ \end{aligned}$$
    (13)
  8. (8)

    Hung and Yang’s similarity measures \(S_\mathrm{HY}^1 \), \(S_\mathrm{HY}^2 \) and \(S_\mathrm{HY}^3 \) (Hung and Yang 2004):

    $$\begin{aligned} S_\mathrm{HY}^1 (A,B)= & {} 1-d_\mathrm{H} (A,B) \end{aligned}$$
    (14)
    $$\begin{aligned} S_\mathrm{HY}^2 (A,B)= & {} \frac{e^{-d_\mathrm{H} (A,B)}-e^{-1}}{1-e^{-1}} \end{aligned}$$
    (15)
    $$\begin{aligned} S_\mathrm{HY}^3 (A,B)= & {} \frac{1-d_\mathrm{H} (A,B)}{1+d_\mathrm{H} (A,B)} \end{aligned}$$
    (16)

    where \(d_\mathrm{H} (A,B)=\frac{1}{n}\sum _{i=1}^n \max \big ( \left| {\mu _A (x_i )-\mu _B (x_i )} \right| ,\left| {v_A (x_i )-v_B (x_i )} \right| \big ) \).

  9. (9)

    Li and Xu’s similarity measure \(S_\mathrm{LX}\) (Li and Xu 2001):

    $$\begin{aligned}&S_\mathrm{LX} (A,B)\nonumber \\&\quad =1-\frac{\sum _{i=1}^n {\left| {\left( {\mu _A (x_i )-v_A (x_i )} \right) -\left( {\mu _B (x_i )-v_B (x_i )} \right) } \right| } }{4n} \nonumber \\&\qquad -\frac{\sum _{i=1}^n {\left( {\left| {\mu _A (x_i )-\mu _B (x_i )} \right| +\left| {v_A (x_i )-v_B (x_i )} \right| } \right) } }{4n}\nonumber \\ \end{aligned}$$
    (17)
  10. (10)

    Boran and Akay’s similarity measure \(S_t^p \) (Boran and Akay 2014):

    $$\begin{aligned}&S_t^p (A,B) =1 \nonumber \\&\quad -\root p \of {\sum _{i=1}^n {\frac{1}{2n(t+1)^{p}}\left\{ {\begin{array}{l} \left| {t\left( {\mu _A (x_i )-\mu _B (x_i )} \right) -\left( {v_A (x_i )-v_B (x_i )} \right) } \right| ^{p} \\ +\left| {t\left( {v_A (x_i )-v_B (x_i )} \right) -\left( {\mu _A (x_i )-\mu _B (x_i )} \right) } \right| ^{p} \\ \end{array}} \right\} } }\nonumber \\ \end{aligned}$$
    (18)

    where \(p=1,2,3,\ldots \) identifies the \(L_p \) norm and \(t=2,3,4,\ldots \) is a parameter for adjusting the effect of the hesitation margin.
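For concreteness, the sketch below (our own illustration; the function names and the \((\mu ,v)\)-pair encoding are ours, not code from the cited papers) implements a few of the reviewed measures in Python. Each function takes two IFSs over the same universe, encoded as lists of \((\mu ,v)\) pairs:

```python
import math

# Each IFS is encoded as a list of (mu, nu) pairs over the universe {x_1, ..., x_n}.

def s_c(a, b):
    """Chen's measure S_C, Eq. (11)."""
    return 1.0 - sum(abs((ma - va) - (mb - vb))
                     for (ma, va), (mb, vb) in zip(a, b)) / (2.0 * len(a))

def s_dc(a, b, p=1):
    """Dengfeng and Chuntian's measure S_DC, Eq. (5)."""
    phi = lambda m, v: (m + 1.0 - v) / 2.0  # midpoint of the interval [mu, 1 - nu]
    return 1.0 - (sum(abs(phi(ma, va) - phi(mb, vb)) ** p
                      for (ma, va), (mb, vb) in zip(a, b)) / len(a)) ** (1.0 / p)

def c_ifs(a, b):
    """Ye's cosine measure C_IFS, Eq. (12); assumes no element has mu = nu = 0."""
    return sum((ma * mb + va * vb) / (math.hypot(ma, va) * math.hypot(mb, vb))
               for (ma, va), (mb, vb) in zip(a, b)) / len(a)

def s_hy1(a, b):
    """Hung and Yang's S_HY^1, Eq. (14), from the Hausdorff-style distance d_H."""
    d_h = sum(max(abs(ma - mb), abs(va - vb))
              for (ma, va), (mb, vb) in zip(a, b)) / len(a)
    return 1.0 - d_h
```

For instance, on the single-element IFSs \(A=\left\langle {0.3,0.3} \right\rangle \) and \(B=\left\langle {0.4,0.4} \right\rangle \), the calls s_c([(0.3, 0.3)], [(0.4, 0.4)]), s_dc(...) and c_ifs(...) all return 1 although \(A\ne B\); this is the violation of (SP2) analyzed in Sect. 5.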

4 A new similarity measure

Let \(A=\left\{ {\left\langle {x,\mu _A (x),v_A (x)} \right\rangle \left| {x\in X} \right. } \right\} \) and \(B=\big \{ \left\langle x,\mu _B (x), v_B (x) \right\rangle \left| {x\in X} \right. \big \}\) be two IFSs in \(X=\{x_1 ,x_2 ,\ldots ,x_n \}\). If we consider A and B in their interval representations, the information carried by them is determined not only by the lower and upper bounds, but also by the length of the interval. So we can define a similarity measure between A and B as:

$$\begin{aligned}&S_\mathrm{Y} (A,B) =\frac{1}{3n}\sum _{i=1}^n \nonumber \\&\quad {\left( {\begin{array}{l} 2\sqrt{\mu _A (x_i )\mu _B (x_i )}+2\sqrt{v_A (x_i )v_B (x_i )}+\sqrt{\pi _A (x_i )\pi _B (x_i )} \\ +\,\sqrt{\left( {1-\mu _A (x_i )} \right) \left( {1-\mu _B (x_i )} \right) }+\sqrt{\left( {1-v_A (x_i )} \right) \left( {1-v_B (x_i )} \right) } \\ \end{array}} \right) }\nonumber \\ \end{aligned}$$
(19)

In this definition, we apply the concept of consistency to describe the similarity between two IFSs. For an IFS \(A=\left\langle {x,\mu _A (x),v_A (x)} \right\rangle \), the membership degree of x can be written as the interval \(\left[ {\mu _A (x),1-v_A (x)} \right] \). Similarly, its non-membership degree can be regarded as \(\left[ {v_A (x),1-\mu _A (x)} \right] \). The span of these intervals is \(1-\mu _A (x)-v_A (x)\), i.e., \(\pi _A (x)\). Hence, in Eq. (19), \(\sqrt{\mu _A (x_i )\mu _B (x_i )}\) and \(\sqrt{v_A (x_i )v_B (x_i )}\) represent the consistency degrees between the lower bounds of the membership degree and of the non-membership degree, respectively. Accordingly, \(\sqrt{\left( {1-v_A (x_i )} \right) \left( {1-v_B (x_i )} \right) }\) and \(\sqrt{\left( {1-\mu _A (x_i )} \right) \left( {1-\mu _B (x_i )} \right) }\) describe the consistency degrees between the upper bounds of the membership degree and of the non-membership degree, respectively. \(\sqrt{\pi _A (x_i )\pi _B (x_i )}\) represents the consistency degree between the interval lengths. Different coefficients are assigned to the items to differentiate their importance. The factor \(1/(3n)\) is adopted to normalize the similarity into the interval [0, 1].
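For reference, Eq. (19) translates line by line into the following Python sketch (our own illustration; the name s_y and the \((\mu ,v)\)-pair encoding are ours):

```python
import math

def s_y(a, b):
    """Proposed similarity measure S_Y of Eq. (19).

    a, b: IFSs over the same universe, given as lists of (mu, nu) pairs.
    """
    n = len(a)
    total = 0.0
    for (ma, va), (mb, vb) in zip(a, b):
        # Hesitancy degrees, clamped at 0 to absorb floating-point round-off.
        pa = max(0.0, 1.0 - ma - va)
        pb = max(0.0, 1.0 - mb - vb)
        total += (2.0 * math.sqrt(ma * mb)              # lower bounds of membership
                  + 2.0 * math.sqrt(va * vb)            # lower bounds of non-membership
                  + math.sqrt(pa * pb)                  # interval lengths
                  + math.sqrt((1.0 - ma) * (1.0 - mb))  # upper bounds of non-membership
                  + math.sqrt((1.0 - va) * (1.0 - vb))) # upper bounds of membership
    return total / (3.0 * n)
```

On the pair \(A=\left\langle {0.3,0.3} \right\rangle \), \(B=\left\langle {0.4,0.4} \right\rangle \) discussed in Sect. 5, s_y([(0.3, 0.3)], [(0.4, 0.4)]) evaluates to about 0.988, strictly less than 1, so (SP2) is respected where several existing measures fail.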

Theorem 1

\(S_\mathrm{Y} (A,B)\) is a similarity measure between two IFSs A and B in X.

Proof

For the sake of simplicity, IFSs A and B are denoted by \(A=\left\{ {\left\langle {\mu _A (x_i ),v_A (x_i )} \right\rangle } \right\} \) and \(B=\left\{ {\left\langle {\mu _B (x_i ),v_B (x_i )} \right\rangle } \right\} \), respectively.

(SP1) For each \(x,y\in [0,+\infty )\), we have \(0\le \sqrt{xy}\le \frac{x+y}{2}\). Since \(0\le \mu (x_i )\le 1\), \(0\le v(x_i )\le 1\), \(0\le \pi (x_i )\le 1\) and \(0\le 1-v(x_i )\le 1\), we can get:

  1. (i)
    $$\begin{aligned}&0\le 2\sqrt{\mu _A (x_i )\mu _B (x_i )}+2\sqrt{v_A (x_i )v_B (x_i )} \\&\quad +\sqrt{\pi _A (x_i )\pi _B (x_i )} +\sqrt{\left( {1-\mu _A (x_i )} \right) \left( {1-\mu _B (x_i )} \right) }\\&\quad +\sqrt{\left( {1-v_A (x_i )} \right) \left( {1-v_B (x_i )} \right) } \le 2\cdot \frac{\mu _A (x_i )+\mu _B (x_i )}{2}\\&\quad +2\cdot \frac{v_A (x_i )+v_B (x_i )}{2}+\frac{\pi _A (x_i )+\pi _B (x_i )}{2} \\&\quad +\frac{1-\mu _A (x_i )+1-\mu _B (x_i )}{2}+\frac{1-v_A (x_i )+1-v_B (x_i )}{2} \\&\quad =2+\frac{\mu _A (x_i )+v_A (x_i )+\pi _A (x_i )}{2}\\&\quad +\frac{\mu _B (x_i )+v_B (x_i )+\pi _B (x_i )}{2} =3 , \end{aligned}$$

    and

    $$\begin{aligned}&0\le \sum \nolimits _{i=1}^n \\&\quad {\left( {\begin{array}{l} 2\sqrt{\mu _A (x_i )\mu _B (x_i )}+2\sqrt{v_A (x_i )v_B (x_i )}+\sqrt{\pi _A (x_i )\pi _B (x_i )} \\ +\sqrt{\left( {1-\mu _A (x_i )} \right) \left( {1-\mu _B (x_i )} \right) }+\sqrt{\left( {1-v_A (x_i )} \right) \left( {1-v_B (x_i )} \right) } \\ \end{array}} \right) } \\&\le 3n .\\ \end{aligned}$$

So we have \(0\le S_\mathrm{Y} (A,B)\le 1\).

(SP2) We know that \(\sqrt{xy}\) reaches its upper bound \(\frac{x+y}{2}\) if and only if \(x=y\). Therefore, we have:

  1. (ii)
    $$\begin{aligned}&S_\mathrm{Y} (A,B)=1 \\&\quad \Leftrightarrow 2\sqrt{\mu _A (x_i )\mu _B (x_i )}+2\sqrt{v_A (x_i )v_B (x_i )}\\&\quad +\sqrt{\pi _A (x_i )\pi _B (x_i )} +\sqrt{\left( {1-\mu _A (x_i )} \right) \left( {1-\mu _B (x_i )} \right) }\\&\quad +\sqrt{\left( {1-v_A (x_i )} \right) \left( {1-v_B (x_i )} \right) }=3 \\&\quad \Leftrightarrow \mu _A (x_i )=\mu _B (x_i ),v_A (x_i )=v_B (x_i ),\pi _A (x_i )\\&\quad =\pi _B (x_i ), 1-\mu _A (x_i )=1-\mu _B (x_i ),1-v_A (x_i )\\&\quad =1-v_B (x_i ) \Leftrightarrow A=B . \end{aligned}$$

Thus, \(S_\mathrm{Y} (A,B)=1\), if and only if \(A=B\).

(SP3) It is easy to note that the expression of \(S_\mathrm{Y} (A,B)\) is commutative. So we have \(S_\mathrm{Y} (A,B)=S_\mathrm{Y} (B,A)\).

(SP4) Let \(C=\left\{ {\left\langle {\mu _C (x_i ),v_C (x_i )} \right\rangle } \right\} \) be another IFS in X, satisfying \(A\subseteq B\subseteq C\). We have \(0\le \mu _A (x_i )\le \mu _B (x_i )\le \mu _C (x_i )\le 1\) and \(0\le v_C (x_i )\le v_B (x_i )\le v_A (x_i )\le 1\) for all \(x_i \in X\). Based on Eq. (19), the similarity measures \(S_\mathrm{Y} (B,C)\) and \(S_\mathrm{Y} (A,C)\) can be written as:

$$\begin{aligned}&S_\mathrm{Y} (B,C)=\frac{1}{3n}\sum \nolimits _{i=1}^n \nonumber \\&\quad {\left( {\begin{array}{l} 2\sqrt{\mu _B (x_i )\mu _C (x_i )}+2\sqrt{v_B (x_i )v_C (x_i )}+\sqrt{\pi _B (x_i )\pi _C (x_i )} \\ +\sqrt{\left( {1-\mu _B (x_i )} \right) \left( {1-\mu _C (x_i )} \right) }+\sqrt{\left( {1-v_B (x_i )} \right) \left( {1-v_C (x_i )} \right) } \\ \end{array}} \right) }\\&S_\mathrm{Y} (A,C)=\frac{1}{3n}\sum \nolimits _{i=1}^n \\&\quad {\left( {\begin{array}{l} 2\sqrt{\mu _A (x_i )\mu _C (x_i )}+2\sqrt{v_A (x_i )v_C (x_i )}+\sqrt{\pi _A (x_i )\pi _C (x_i )} \\ +\sqrt{\left( {1-\mu _A (x_i )} \right) \left( {1-\mu _C (x_i )} \right) }+\sqrt{\left( {1-v_A (x_i )} \right) \left( {1-v_C (x_i )} \right) } \\ \end{array}} \right) } \end{aligned}$$

For \(a,b\in [0,1]\), \(a+b\le 1\), we define a function f as:

$$\begin{aligned}&f(x,y)=2\sqrt{ax}+2\sqrt{by}+\sqrt{(1-a-b)(1-x-y)} \\&\quad +\,\sqrt{(1-a)(1-x)}+\sqrt{(1-b)(1-y)} \end{aligned}$$

where \(x,y\in [0,1]\), \(x+y\in [0,1]\).

Then we have:

$$\begin{aligned} \frac{\partial f}{\partial x}= & {} \frac{\sqrt{a}}{\sqrt{x}}-\frac{\sqrt{1-a-b}}{2\sqrt{1-x-y}}-\frac{\sqrt{1-a}}{2\sqrt{1-x}} \\= & {} \frac{(a-x)(1-b)}{2\sqrt{x(1-x-y)}\left( {\sqrt{a(1-x-y)}+\sqrt{(1-a-b)x}} \right) } \\&+\frac{a-x}{2\sqrt{x(1-x)}\left( {\sqrt{a(1-x)}+\sqrt{(1-a)x}} \right) } ,\\ \frac{\partial f}{\partial y}= & {} \frac{\sqrt{b}}{\sqrt{y}}-\frac{\sqrt{1-a-b}}{2\sqrt{1-x-y}}-\frac{\sqrt{1-b}}{2\sqrt{1-y}} \\= & {} \frac{(b-y)(1-a)}{2\sqrt{y(1-x-y)}\left( {\sqrt{b(1-x-y)}+\sqrt{(1-a-b)y}} \right) } \\&+\frac{b-y}{2\sqrt{y(1-y)}\left( {\sqrt{b(1-y)}+\sqrt{(1-b)y}} \right) } .\\ \end{aligned}$$

Given \(a\le x\le 1\) and \(b\le 1\), we have \(\frac{\partial f}{\partial x}\le 0\), which means that f is a decreasing function of x when \(x\ge a\).

For \(0\le x\le a\) and \(b\le 1\), we can get \(\frac{\partial f}{\partial x}\ge 0\), which means that f is an increasing function of x when \(x\le a\).

Similarly, we can also get \(\frac{\partial f}{\partial y}\ge 0\) for \(0\le y\le b\), \(a\le 1\) and \(\frac{\partial f}{\partial y}\le 0\) for \(b\le y\le 1\), \(a\le 1\). These indicate that f is an increasing function of y when \(y\le b\), but a decreasing function when \(y\ge b\).

Given \(a=\mu _A (x_i )\), \(b=v_A (x_i )\) and two couples \(\left( {\mu _B (x_i ),v_B (x_i )} \right) \), \(\left( {\mu _C (x_i ),v_C (x_i )} \right) \), satisfying \(a=\mu _A (x_i )\le \mu _B (x_i )\le \mu _C (x_i )\) and \(v_C (x_i )\le v_B (x_i )\le v_A (x_i )=b\), we can get:

$$\begin{aligned}&f(\mu _C (x_i ),v_C (x_i ))\le f(\mu _B (x_i ),v_C (x_i ))\\&\quad \le f(\mu _B (x_i ),v_B (x_i )). \end{aligned}$$

And then

  1. (iii)
    $$\begin{aligned}&2\sqrt{\mu _A (x_i )\mu _C (x_i )}+2\sqrt{v_A (x_i )v_C (x_i )}\\&+\sqrt{\pi _A (x_i )\pi _C (x_i )} +\sqrt{\left( {1-\mu _A (x_i )} \right) \left( {1-\mu _C (x_i )} \right) }\\&+\sqrt{\left( {1-v_A (x_i )} \right) \left( {1-v_C (x_i )} \right) }\le 2\sqrt{\mu _A (x_i )\mu _B (x_i )}\\&+2\sqrt{v_A (x_i )v_B (x_i )}+\sqrt{\pi _A (x_i )\pi _B (x_i )} \\&+\sqrt{\left( {1-\mu _A (x_i )} \right) \left( {1-\mu _B (x_i )} \right) }\\&+\sqrt{\left( {1-v_A (x_i )} \right) \left( {1-v_B (x_i )} \right) } \\ \end{aligned}$$

Therefore, \(S_\mathrm{Y} (A,B)\ge S_\mathrm{Y} (A,C)\).

In such a way, if we suppose \(a=\mu _C (x_i )\), \(b=v_C (x_i )\), considering another two couples \(\left( {\mu _B (x_i ),v_B (x_i )} \right) \) and \(\left( {\mu _A (x_i ),v_A (x_i )} \right) \), we have: \(\mu _A (x_i )\le \mu _B (x_i )\le \mu _C (x_i )=a\), \(b=v_C (x_i )\le v_B (x_i )\le v_A (x_i )\).

Hence, it follows that

\(f(\mu _A (x_i ),v_A (x_i ))\le f(\mu _B (x_i ),v_A (x_i ))\le f(\mu _B (x_i ),v_B (x_i )),\) which can be written as:

  1. (iv)
    $$\begin{aligned}&2\sqrt{\mu _A (x_i )\mu _C (x_i )}+2\sqrt{v_A (x_i )v_C (x_i )}\\&+\sqrt{\pi _A (x_i )\pi _C (x_i )} +\sqrt{\left( {1-\mu _A (x_i )} \right) \left( {1-\mu _C (x_i )} \right) }\\&+\sqrt{\left( {1-v_A (x_i )} \right) \left( {1-v_C (x_i )} \right) } \le 2\sqrt{\mu _B (x_i )\mu _C (x_i )}\\&+2\sqrt{v_B (x_i )v_C (x_i )}+\sqrt{\pi _B (x_i )\pi _C (x_i )}\\&+\sqrt{\left( {1-\mu _B (x_i )} \right) \left( {1-\mu _C (x_i )} \right) }\\&+\sqrt{\left( {1-v_B (x_i )} \right) \left( {1-v_C (x_i )} \right) } .\\ \end{aligned}$$

Then we have \(S_\mathrm{Y} (B,C)\ge S_\mathrm{Y} (A,C)\).

So the similarity measure \(S_\mathrm{Y} (A,B)\) satisfies all properties in Definition 5. It is a similarity measure between IFSs. \(\square \)

Considering the weights of \(x_i \), we can define the weighted similarity between two IFSs as:

$$\begin{aligned}&S_\mathrm{WY} (A,B) \nonumber \\&\quad =\frac{1}{3}\sum \nolimits _{i=1}^n {w_i \left( {\begin{array}{l} 2\sqrt{\mu _A (x_i )\mu _B (x_i )}+2\sqrt{v_A (x_i )v_B (x_i )}+\sqrt{\pi _A (x_i )\pi _B (x_i )} \\ +\sqrt{\left( {1-\mu _A (x_i )} \right) \left( {1-\mu _B (x_i )} \right) }+\sqrt{\left( {1-v_A (x_i )} \right) \left( {1-v_B (x_i )} \right) } \\ \end{array}} \right) } \end{aligned}$$
(20)

where \(w_i \) is the weight of the feature \(x_i \), with \(w_i \in [0,1]\) and \(\sum _{i=1}^n {w_i } =1\).
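A sketch of the weighted measure in Eq. (20) follows the same pattern (again our own illustration; with uniform weights \(w_i =1/n\) it reduces to \(S_\mathrm{Y}\)):

```python
import math

def s_wy(a, b, w):
    """Weighted similarity S_WY of Eq. (20); w_i >= 0 and sum(w) == 1."""
    total = 0.0
    for (ma, va), (mb, vb), wi in zip(a, b, w):
        pa = max(0.0, 1.0 - ma - va)  # hesitancy degrees, clamped for round-off
        pb = max(0.0, 1.0 - mb - vb)
        total += wi * (2.0 * math.sqrt(ma * mb) + 2.0 * math.sqrt(va * vb)
                       + math.sqrt(pa * pb)
                       + math.sqrt((1.0 - ma) * (1.0 - mb))
                       + math.sqrt((1.0 - va) * (1.0 - vb)))
    return total / 3.0
```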

Theorem 2

\(S_\mathrm{WY} (A,B)\) is a similarity measure between two IFSs A and B in X.

Proof

(SP1) Considering the expression (i), we can get:

$$\begin{aligned} 0\le & {} \sum \nolimits _{i=1}^n {w_i \left( {\begin{array}{l} 2\sqrt{\mu _A (x_i )\mu _B (x_i )}+2\sqrt{v_A (x_i )v_B (x_i )}+\sqrt{\pi _A (x_i )\pi _B (x_i )} \\ +\sqrt{\left( {1-\mu _A (x_i )} \right) \left( {1-\mu _B (x_i )} \right) }+\sqrt{\left( {1-v_A (x_i )} \right) \left( {1-v_B (x_i )} \right) } \\ \end{array}} \right) } \\\le & {} \sum \nolimits _{i=1}^n {3w_i } =3\cdot \sum \nolimits _{i=1}^n {w_i } =3 \end{aligned}$$

Therefore, \(0\le S_\mathrm{WY} (A,B)\le 1\).

(SP2) Given the implication (ii), we have:

$$\begin{aligned}&S_\mathrm{WY} (A,B)=1 \\&\Leftrightarrow w_i \left( {\begin{array}{l} 2\sqrt{\mu _A (x_i )\mu _B (x_i )}+2\sqrt{v_A (x_i )v_B (x_i )}+\sqrt{\pi _A (x_i )\pi _B (x_i )} \\ +\sqrt{\left( {1-\mu _A (x_i )} \right) \left( {1-\mu _B (x_i )} \right) }+\sqrt{\left( {1-v_A (x_i )} \right) \left( {1-v_B (x_i )} \right) } \\ \end{array}} \right) \\&=3w_i \\&\Leftrightarrow 2\sqrt{\mu _A (x_i )\mu _B (x_i )}+2\sqrt{v_A (x_i )v_B (x_i )}+\sqrt{\pi _A (x_i )\pi _B (x_i )} \\&+\sqrt{\left( {1-\mu _A (x_i )} \right) \left( {1-\mu _B (x_i )} \right) }+\sqrt{\left( {1-v_A (x_i )} \right) \left( {1-v_B (x_i )} \right) }\\&=3 \\&\Leftrightarrow \mu _A (x_i )=\mu _B (x_i ),v_A (x_i )=v_B (x_i ),\pi _A (x_i )=\pi _B (x_i ), \\&1-\mu _A (x_i )=1-\mu _B (x_i ),1-v_A (x_i )=1-v_B (x_i ) \\&\Leftrightarrow A=B \\ \end{aligned}$$

So we get \(S_\mathrm{WY} (A,B)=1\Leftrightarrow A=B\).

(SP3) It is obvious that \(S_\mathrm{WY} (A,B)\) satisfies SP3.

(SP4) Since all \(w_i \ge 0\), we can multiply inequalities (iii) and (iv) by \(w_i \), obtaining:

$$\begin{aligned} \begin{array}{l} w_i \left( {\begin{array}{l} 2\sqrt{\mu _A (x_i )\mu _C (x_i )}+2\sqrt{v_A (x_i )v_C (x_i )}+\sqrt{\pi _A (x_i )\pi _C (x_i )} \\ +\sqrt{\left( {1-\mu _A (x_i )} \right) \left( {1-\mu _C (x_i )} \right) }+\sqrt{\left( {1-v_A (x_i )} \right) \left( {1-v_C (x_i )} \right) } \\ \end{array}} \right) \\ \le w_i \left( {\begin{array}{l} 2\sqrt{\mu _A (x_i )\mu _B (x_i )}+2\sqrt{v_A (x_i )v_B (x_i )}+\sqrt{\pi _A (x_i )\pi _B (x_i )} \\ +\sqrt{\left( {1-\mu _A (x_i )} \right) \left( {1-\mu _B (x_i )} \right) }+\sqrt{\left( {1-v_A (x_i )} \right) \left( {1-v_B (x_i )} \right) } \\ \end{array}} \right) \\ \end{array},\\ \begin{array}{l} w_i \left( {\begin{array}{l} 2\sqrt{\mu _A (x_i )\mu _C (x_i )}+2\sqrt{v_A (x_i )v_C (x_i )}+\sqrt{\pi _A (x_i )\pi _C (x_i )} \\ +\sqrt{\left( {1-\mu _A (x_i )} \right) \left( {1-\mu _C (x_i )} \right) }+\sqrt{\left( {1-v_A (x_i )} \right) \left( {1-v_C (x_i )} \right) } \\ \end{array}} \right) \\ \le w_i \left( {\begin{array}{l} 2\sqrt{\mu _B (x_i )\mu _C (x_i )}+2\sqrt{v_B (x_i )v_C (x_i )}+\sqrt{\pi _B (x_i )\pi _C (x_i )} \\ +\sqrt{\left( {1-\mu _B (x_i )} \right) \left( {1-\mu _C (x_i )} \right) }+\sqrt{\left( {1-v_B (x_i )} \right) \left( {1-v_C (x_i )} \right) } \\ \end{array}} \right) \\ \end{array}. \end{aligned}$$

We obtain \(S_\mathrm{WY} (A,B)\ge S_\mathrm{WY} (A,C)\) and \(S_\mathrm{WY} (B,C)\ge S_\mathrm{WY} (A,C)\).

Therefore, \(S_\mathrm{WY} (A,B)\) is a similarity measure between IFSs A and B. \(\square \)

Table 1 Comparison of similarity measures (counterintuitive cases are in bold type)

5 Numerical comparisons

In order to illustrate the superiority of the proposed similarity measure, a comparison between the proposed similarity measure and the existing similarity measures is conducted based on the numerical cases in Boran and Akay (2014), which are widely used as counterintuitive examples. Table 1 presents the results with \(p=1\) for \(S_\mathrm{HB} \), \(S_e^p \), \(S_s^p \), \(S_h^p \) and with \(p=1\), \(t=2\) for \(S_t^p \).

We can see that \(S_\mathrm{C} (A,B)=S_\mathrm{DC} (A,B)=C_\mathrm{IFS} (A,B)=1\) for the two different IFSs \(A=\left\langle {0.3,0.3} \right\rangle \) and \(B=\left\langle {0.4,0.4} \right\rangle \). This indicates that the second axiomatic requirement of a similarity measure (SP2) is not satisfied by \(S_\mathrm{C} (A,B)\), \(S_\mathrm{DC} (A,B)\) and \(C_\mathrm{IFS} (A,B)\). The same problem is illustrated by \(S_\mathrm{C} (A,B)=S_\mathrm{DC} (A,B)=1\) when \(A=\left\langle {0.5,0.5} \right\rangle \), \(B=\left\langle {0,0} \right\rangle \) and when \(A=\left\langle {0.4,0.2} \right\rangle \), \(B=\left\langle {0.5,0.3} \right\rangle \). As for \(S_\mathrm{H} \), \(S_\mathrm{O} \), \(S_\mathrm{HB} \), \(S_e^p \), \(S_s^p \) and \(S_h^p \), different pairs of A, B may yield identical results, which is unsuitable for pattern recognition applications: Table 1 shows that \(S_\mathrm{HB} =0.9\) both for \(A=\left\langle {0.3,0.3} \right\rangle \), \(B=\left\langle {0.4,0.4} \right\rangle \) and for \(A=\left\langle {0.3,0.4} \right\rangle \), \(B=\left\langle {0.4,0.3} \right\rangle \). The situation is even worse for \(S_\mathrm{HY}^1 \), \(S_\mathrm{HY}^2 \) and \(S_\mathrm{HY}^3 \), where all the cases take the same similarity degree except cases 3 and 4.

Moreover, we can notice an interesting situation when comparing cases 3 and 4. For the three IFSs \(A=\left\langle {1,0} \right\rangle \), \(B=\left\langle {0.5,0.5} \right\rangle \) and \(C=\left\langle {0,0} \right\rangle \), it is intuitively more reasonable to take the similarity degrees \(S_F (A,C)=0.15\) and \(S_F (B,C)=0.25\) than \(S_t^p (A,C)=0.5\) and \(S_t^p (B,C)=0.833\). Furthermore, the three IFSs \(A=\left\langle {0.4,0.2} \right\rangle \), \(B=\left\langle {0.5,0.3} \right\rangle \) and \(C=\left\langle {0.5,0.2} \right\rangle \) can be written in the form of interval values as \(A=\left[ {0.4,0.8} \right] \), \(B=\left[ {0.5,0.7} \right] \) and \(C=\left[ {0.5,0.8} \right] \), respectively. In this sense, the similarity degree between A and C should be no less than the similarity degree between A and B; this is confirmed by all similarity measures except \(S_\mathrm{C} \), \(S_\mathrm{DC} \) and \(S_t^p \) (underlined cases), and our proposed similarity measure is in agreement with this analysis.

Among the measures listed in Table 1, \(S_t^p \) appears to be the only other measure without counterintuitive cases. However, it brings a new problem with the choice of the parameters p and t, which is still open and is also faced by the similarity measures \(S_\mathrm{HB} \), \(S_e^p \), \(S_s^p \) and \(S_h^p \). Therefore, we can say that our proposal is a satisfactory similarity measure with a relatively simple expression: it satisfies all the axiomatic properties, exhibits none of the counterintuitive cases and requires no choice of additional parameters.

In order to study the effectiveness of the proposed similarity measure for IFSs in the application of pattern recognition, we consider the pattern recognition problem discussed in Ye (2011) and Li and Cheng (2002).

Suppose there are m patterns, which can be represented by IFSs \(A_j =\left\{ {\left\langle {x_i ,\mu _{A_j } (x_i ),v_{A_j } (x_i )} \right\rangle \left| {x_i \in X} \right. } \right\} \), \(A_j \in \hbox {IFSs}(X)\), \(j=1,2,\ldots ,m\). Let the sample to be recognized be denoted as \(B=\left\{ {\left\langle {x_i ,\mu _B (x_i ),v_B (x_i )} \right\rangle \left| {x_i \in X} \right. } \right\} \). According to the recognition principle of maximum degree of similarity between IFSs, the process of assigning B to \(A_k \) is described by:

$$\begin{aligned} k=\arg \mathop {\max }\limits _{j=1,2,\ldots ,m} \{S(A_j ,B)\} \end{aligned}$$
(21)

Example 1

Assume that there exist three known patterns \(A_1 \), \(A_2 \) and \(A_3 \), with class labels \(C_1 \), \(C_2 \) and \(C_3 \), respectively. Each pattern can be expressed by an IFS in \(X=\{x_1 ,x_2 ,x_3 \}\) as:

$$\begin{aligned} A_1= & {} \left\{ {\left\langle {x_1 ,1,0} \right\rangle ,\left\langle {x_2 ,0.8,0} \right\rangle ,\left\langle {x_3 ,0.7,0.1} \right\rangle } \right\} ,\\ A_2= & {} \left\{ {\left\langle {x_1 ,0.8,0.1} \right\rangle ,\left\langle {x_2 ,1,0} \right\rangle ,\left\langle {x_3 ,0.9,0} \right\rangle } \right\} ,\\ A_3= & {} \left\{ {\left\langle {x_1 ,0.6,0.2} \right\rangle ,\left\langle {x_2 ,0.8,0} \right\rangle ,\left\langle {x_3 ,1,0} \right\rangle } \right\} . \end{aligned}$$

The sample B to be recognized is:

$$\begin{aligned} B=\left\{ {\left\langle {x_1 ,0.5,0.3} \right\rangle ,\left\langle {x_2 ,0.6,0.2} \right\rangle ,\left\langle {x_3 ,0.8,0.1} \right\rangle } \right\} . \end{aligned}$$

The similarity degrees between \(A_i \ (i=1,2,3)\) and B calculated by Eq. (19) are:

$$\begin{aligned} S_\mathrm{Y} (A_1 ,B)= & {} 0.888,\\ S_\mathrm{Y} (A_2 ,B)= & {} 0.910,\\ S_\mathrm{Y} (A_3 ,B)= & {} 0.942. \end{aligned}$$
Table 2 Similarity measures between the known patterns and the unknown pattern in Example 2 (patterns not discriminated are in bold type)

It can be observed that the pattern B should be classified to \(A_3 \) with class label \(C_3 \), according to the recognition principle of maximum degree of similarity between IFSs. This result is in agreement with the ones obtained in Ye (2011) and Li and Cheng (2002).

Let us assume that the weights of \(x_1 \), \(x_2 \) and \(x_3 \) are 0.5, 0.3, and 0.2, respectively, as assumed in Ye (2011). Considering Eq. (20), we can get:

$$\begin{aligned} S_\mathrm{WY} (A_1 ,B)= & {} 0.850,\\ S_\mathrm{WY} (A_2 ,B)= & {} 0.914,\\ S_\mathrm{WY} (A_3 ,B)= & {} 0.955. \end{aligned}$$

According to Eq.(21), B can be recognized as \(A_3 \), which is identical to the result obtained in Ye (2011) and Li and Cheng (2002).
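The whole of Example 1 can be checked with a short self-contained script (our own illustration) implementing Eqs. (19) and (21); it yields similarity degrees of about 0.889, 0.910 and 0.942 (the values above, up to rounding) and assigns B to \(A_3 \):

```python
import math

def s_y(a, b):
    """S_Y of Eq. (19) for IFSs given as lists of (mu, nu) pairs."""
    def term(ma, va, mb, vb):
        pa, pb = max(0.0, 1 - ma - va), max(0.0, 1 - mb - vb)  # hesitancy degrees
        return (2 * math.sqrt(ma * mb) + 2 * math.sqrt(va * vb) + math.sqrt(pa * pb)
                + math.sqrt((1 - ma) * (1 - mb)) + math.sqrt((1 - va) * (1 - vb)))
    return sum(term(ma, va, mb, vb)
               for (ma, va), (mb, vb) in zip(a, b)) / (3 * len(a))

# Known patterns A_1, A_2, A_3 and the sample B from Example 1.
patterns = {
    "A1": [(1.0, 0.0), (0.8, 0.0), (0.7, 0.1)],
    "A2": [(0.8, 0.1), (1.0, 0.0), (0.9, 0.0)],
    "A3": [(0.6, 0.2), (0.8, 0.0), (1.0, 0.0)],
}
B = [(0.5, 0.3), (0.6, 0.2), (0.8, 0.1)]

# Recognition principle of Eq. (21): pick the pattern of maximum similarity.
scores = {name: s_y(a, B) for name, a in patterns.items()}
print({name: round(s, 3) for name, s in scores.items()})
# -> {'A1': 0.889, 'A2': 0.91, 'A3': 0.942}
print("B is classified to", max(scores, key=scores.get))  # -> A3
```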

To make our similarity measure more transparent and comparable with the measures proposed earlier by other authors, the example analyzed in Hwang et al. (2012) will be discussed next.

Example 2

Assume that there are three IFS patterns in \(X=\{x_1 ,x_2 ,x_3 \}\). The three patterns are denoted as follows:

$$\begin{aligned} A_1= & {} \left\{ {\left\langle {x_1 ,0.3,0.3} \right\rangle ,\left\langle {x_2 ,0.2,0.2} \right\rangle ,\left\langle {x_3 ,0.1,0.1} \right\rangle } \right\} ,\\ A_2= & {} \left\{ {\left\langle {x_1 ,0.2,0.2} \right\rangle ,\left\langle {x_2 ,0.2,0.2} \right\rangle ,\left\langle {x_3 ,0.2,0.2} \right\rangle } \right\} \\ A_3= & {} \left\{ {\left\langle {x_1 ,0.4,0.4} \right\rangle ,\left\langle {x_2 ,0.4,0.4} \right\rangle ,\left\langle {x_3 ,0.4,0.4} \right\rangle } \right\} \end{aligned}$$

Assume that a sample \(B=\big \{ \left\langle {x_1 ,0.3,0.3} \right\rangle ,\left\langle {x_2 ,0.2,0.2} \right\rangle ,\left\langle {x_3 ,0.1,0.1} \right\rangle \big \}\) is to be classified.

The similarity degrees \(S(A_1 ,B)\), \(S(A_2 ,B)\) and \(S(A_3 ,B)\) calculated for all similarity measures listed in Sect. 3 are shown in Table 2.

The proposed similarity measure \(S_\mathrm{Y} \) can be calculated by Eq.(19) as:

$$\begin{aligned} S_\mathrm{Y} (A_1 ,B)= & {} 1,\\ S_\mathrm{Y} (A_2 ,B)= & {} 0.991,\\ S_\mathrm{Y} (A_3 ,B)= & {} 0.944. \end{aligned}$$

It is obvious that B is equal to \(A_1 \), which indicates that sample B should be classified to \(A_1 \). However, the similarity degrees \(S(A_1 ,B)\), \(S(A_2 ,B)\) and \(S(A_3 ,B)\) are equal to each other when \(S_\mathrm{C} \), \(S_\mathrm{H} \), \(S_\mathrm{DC} \) and \(C_\mathrm{IFS} \) are employed; these four similarity measures are not capable of discriminating among the three patterns. The results of \(S_\mathrm{Y} (A_i ,B)\ (i=1,2,3)\), by contrast, lead to the correct classification. This means that the proposed similarity measure performs at least as well as the majority of the existing measures.

Table 3 Symptoms characteristic for the patients
Table 4 Symptoms characteristic for the diagnoses
Table 5 Proposed similarity measure \(S_\mathrm{Y} \) between each patient’s symptoms and the considered set of possible diagnoses

6 Applications in pattern recognition

In this section, to illustrate the efficiency of the proposed similarity measure in practice, we will apply it in the area of pattern recognition. Examples on medical diagnosis and clustering will be discussed.

6.1 Medical diagnosis

The medical diagnosis problem discussed in Boran and Akay (2014), De et al. (2001), Own (2009), Szmidt and Kacprzyk (2004), Vlachos and Sergiadis (2007), Wei et al. (2011) and Ye (2011) will be presented as an application in pattern recognition. In this paper, we propose an alternative approach to medical diagnosis using the newly defined similarity measure.

Suppose that there are four patients Al, Bob, Joe and Ted, represented as P = {Al, Bob, Joe, Ted}. Their symptoms are S = {Temperature, Headache, Stomach pain, Cough, Chest pain}. The set of diagnoses is defined as D = {Viral fever, Malaria, Typhoid, Stomach problem, Chest problem}. The intuitionistic fuzzy relation \(P\rightarrow S\) is presented in Table 3. Table 4 gives the intuitionistic fuzzy relation \(S\rightarrow D\). Each element of the tables is given in the form of an IFV, a pair of numbers corresponding to the membership and non-membership values, respectively.

In order to make a proper diagnosis for each patient, we calculate the similarity degree between each patient and each diagnosis. According to the principle of maximum similarity degree, the highest similarity degree indicates the proper diagnosis. Table 5 presents the similarity degrees \(S_\mathrm{Y} \) between patients and diagnoses. According to these similarity degrees, the conclusion can be drawn that Al suffers from Viral fever, Bob suffers from Stomach problem, Joe suffers from Typhoid, and Ted suffers from Viral fever. The diagnosis results for this case obtained in previous studies have been presented in Boran and Akay (2014). It is clear that our proposed method provides the same results as those obtained by the methods presented in Boran and Akay (2014), Own (2009) and Vlachos and Sergiadis (2007). Moreover, the operations involved in our proposed similarity measure are addition, multiplication and square root extraction, implemented directly on the membership degree, non-membership degree and hesitancy degree, without any other parameters, which reduces the computational complexity.

6.2 Cluster analysis

Let \(O=\{O_1 ,O_2 ,\ldots ,O_n \}\) be a set of n objects. The aim of cluster analysis is to cluster O into c clusters. We will apply our proposed similarity measure in the clustering algorithm for IFSs (Algorithm-IFSC) proposed by Xu et al. (2008).

Let \(X=\{x_1 ,x_2 ,\ldots ,x_n \}\) be a discrete universe of discourse, and let \(w=(w_1 ,w_2 ,\ldots ,w_n )\) be the weight vector of the elements \(x_i\,(i=1,2,\ldots ,n)\), with \(w_i \ge 0\), \(i=1,2,\ldots ,n\), and \(\sum \nolimits _{i=1}^n {w_i } =1\). Let \(A_j (j=1,2,\ldots ,m)\) be a collection of m IFSs representing different objects, where

$$\begin{aligned} A_j =\left\{ {\left. {\left\langle {x_i ,\mu _{A_j } (x_i ),v_{A_j } (x_i )} \right\rangle } \right| x_i \in X} \right\} ,j=1,2,\ldots ,m \end{aligned}$$

and \(\pi _{A_j } (x_i )=1-\mu _{A_j } (x_i )-v_{A_j } (x_i ),j=1,2,\ldots ,m\) is the degree of hesitation of \(x_{i}\) to \(A_{j}\).

Table 6 Car data set

On the basis of our proposed similarity measure \(S_\mathrm{WY}\), the Algorithm-IFSC can be described as follows:

  • Step 1 Utilize Eq. (20) to calculate the association coefficients of IFSs \(A_{i}\) (\(i=1,2,\ldots ,m\)), and then construct an association matrix \(C=(c_{ij} )_{m\times m} \), where \(c_{ij} =S_\mathrm{WY} (A_i ,A_j )\), \(i,j=1,2,\ldots ,m\).

  • Step 2 If the association matrix \(C=(c_{ij} )_{m\times m} \) is an equivalent association matrix, then we construct a \(\lambda \)-cutting matrix \(C_\lambda =(_\lambda c_{ij} )_{m\times m} \) of C by Eq. (22); otherwise, we compose the association matrix C by using Eq. (23) to derive an equivalent association matrix \(\bar{{C}}\), and then construct a \(\lambda \)-cutting matrix \(\bar{{C}}_\lambda =(_\lambda \bar{{c}}_{ij} )_{m\times m} \) of \(\bar{{C}}\) by using (22).

    $$\begin{aligned} _\lambda c_{ij}= & {} \left\{ {\begin{array}{l@{\quad }l} 0&{}\hbox {if } c_{ij} <\lambda , \\ 1&{}\hbox {if } c_{ij} \ge \lambda , \\ \end{array}} \right. \quad i,j=1,2,\ldots ,m \end{aligned}$$
    (22)
    $$\begin{aligned} \bar{{c}}_{ij}= & {} \mathop {\max }\limits _k \{\min \{c_{ik} ,c_{kj} \}\},i,j=1,2,\ldots ,m \end{aligned}$$
    (23)
  • Step 3 If all elements of the \(i\hbox {th}\) line (column) in \(C_\lambda \) (or \(\bar{{C}}_\lambda )\) are the same as the corresponding elements of the \(j\hbox {th}\) line (column) in \(C_\lambda \) (or \(\bar{{C}}_\lambda )\), then the IFSs \(A_{i}\) and \(A_{j}\) are of the same type. By this principle, we can classify all these m IFSs \(A_j (j=1,2,\ldots ,m)\).

Remarks \(C^{2}=C\circ C=(\bar{{c}}_{ij} )_{m\times m} \) is called the composition matrix of C. If \(C^{2}\subseteq C\), i.e., \(\mathop {\max }\nolimits _k \{\min \{c_{ik} ,c_{kj} \}\}\le c_{ij} \) for all \(i,j=1,2,\ldots ,m\), then C is called an equivalent association matrix. For the association matrix C, after finitely many compositions \(C\rightarrow C^{2}\rightarrow C^{4}\rightarrow \cdots \rightarrow C^{2k}\rightarrow \cdots \), there must exist a positive integer k such that \(C^{2k}=C^{2(k+1)}\), and this \(C^{2k}\) is an equivalent association matrix.
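The composition, equivalence check and \(\lambda \)-cutting of Steps 2–3 can be sketched in a few lines of Python with NumPy (our own illustration; the association matrix c is assumed to be a NumPy array built in Step 1 with \(S_\mathrm{WY}\)):

```python
import numpy as np

def compose(c):
    """Max-min composition (C o C)_{ij} = max_k min(c_{ik}, c_{kj}), Eq. (23)."""
    m = len(c)
    return np.array([[np.max(np.minimum(c[i, :], c[:, j])) for j in range(m)]
                     for i in range(m)])

def equivalent(c):
    """Square C -> C^2 -> C^4 -> ... until a fixed point C^{2k} = C^{2(k+1)}."""
    c2 = compose(c)
    while not np.array_equal(c2, c):
        c, c2 = c2, compose(c2)
    return c

def classify(c_eq, lam):
    """lambda-cutting matrix, Eq. (22), then group objects with identical rows."""
    cut = (c_eq >= lam).astype(int)
    groups = {}
    for i, row in enumerate(map(tuple, cut)):
        groups.setdefault(row, []).append(i)
    return list(groups.values())
```

Applied to the association matrix C computed below, the squaring chain stops at \(C^{8}\), and, e.g., classify(equivalent(C), 0.945) returns the four classes \(\{A_1 ,A_6 \}\), \(\{A_2 ,A_3 ,A_7 ,A_8 \}\), \(\{A_4 ,A_9 \}\) and \(\{A_5 ,A_{10} \}\) of the sensitivity analysis below (indices are 0-based in the code).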

Based on Algorithm-IFSC, we conduct an experiment on a real-world data set from Xu et al. (2008). Each data point has six attributes: \(x_{1}\): fuel economy; \(x_{2}\): aerodynamic degree; \(x_{3}\): price; \(x_{4}\): comfort; \(x_{5}\): design; \(x_{6}\): safety. The data set is shown in Table 6.

Now we utilize the Algorithm-IFSC to cluster the ten new cars \(A_i (i=1,2,\ldots ,10)\):

Based on the association coefficients calculated by Eq. (20), we can construct the association matrix:

$$\begin{aligned} C= \left( {\begin{array}{llllllllll} 1.0000 &{} 0.9121 &{} 0.8964 &{} 0.9166 &{} 0.8759 &{} 0.9891 &{} 0.9114 &{} 0.8747 &{} 0.8953 &{} 0.8970 \\ 0.9121 &{} 1.0000 &{} 0.9872 &{} 0.9223 &{} 0.9052 &{} 0.9137 &{} 0.9821 &{} 0.9604 &{} 0.9114 &{} 0.9233 \\ 0.8964 &{} 0.9872 &{} 1.0000 &{} 0.9351 &{} 0.9223 &{} 0.8880 &{} 0.9797 &{} 0.9578 &{} 0.9353 &{} 0.9131 \\ 0.9166 &{} 0.9223 &{} 0.9351 &{} 1.0000 &{} 0.9116 &{} 0.8956 &{} 0.9136 &{} 0.8906 &{} 0.9843 &{} 0.9038 \\ 0.8759 &{} 0.9052 &{} 0.9223 &{} 0.9116 &{} 1.0000 &{} 0.8771 &{} 0.9049 &{} 0.9265 &{} 0.9041 &{} 0.9467 \\ 0.9891 &{} 0.9137 &{} 0.8880 &{} 0.8956 &{} 0.8771 &{} 1.0000 &{} 0.9048 &{} 0.8777 &{} 0.8691 &{} 0.9146 \\ 0.9114 &{} 0.9821 &{} 0.9797 &{} 0.9136 &{} 0.9049 &{} 0.9048 &{} 1.0000 &{} 0.9735 &{} 0.9119 &{} 0.9246 \\ 0.8747 &{} 0.9604 &{} 0.9578 &{} 0.8906 &{} 0.9265 &{} 0.8777 &{} 0.9735 &{} 1.0000 &{} 0.8915 &{} 0.9432 \\ 0.8953 &{} 0.9114 &{} 0.9353 &{} 0.9843 &{} 0.9041 &{} 0.8691 &{} 0.9119 &{} 0.8915 &{} 1.0000 &{} 0.9051 \\ 0.8970 &{} 0.9233 &{} 0.9131 &{} 0.9038 &{} 0.9467 &{} 0.9146 &{} 0.9246 &{} 0.9432 &{} 0.9051 &{} 1.0000 \\ \end{array}} \right) . \end{aligned}$$

Calculate the composition matrix of C:

$$\begin{aligned} C^{2}=C\circ C= \left( {\begin{array}{llllllllll} 1.0000 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 0.9116 &{} 0.9891 &{} 0.9136 &{} 0.9121 &{} 0.9166 &{} 0.9146 \\ 0.9166 &{} 1.0000 &{} 0.9872 &{} 0.9351 &{} 0.9265 &{} 0.9146 &{} 0.9821 &{} 0.9735 &{} 0.9353 &{} 0.9432 \\ 0.9166 &{} 0.9872 &{} 1.0000 &{} 0.9353 &{} 0.9265 &{} 0.9137 &{} 0.9821 &{} 0.9735 &{} 0.9353 &{} 0.9432 \\ 0.9166 &{} 0.9351 &{} 0.9353 &{} 1.0000 &{} 0.9223 &{} 0.9166 &{} 0.9351 &{} 0.9351 &{} 0.9843 &{} 0.9223 \\ 0.9116 &{} 0.9265 &{} 0.9265 &{} 0.9223 &{} 1.0000 &{} 0.9146 &{} 0.9265 &{} 0.9432 &{} 0.9223 &{} 0.9467 \\ 0.9891 &{} 0.9146 &{} 0.9137 &{} 0.9166 &{} 0.9146 &{} 1.0000 &{} 0.9146 &{} 0.9146 &{} 0.9114 &{} 0.9146 \\ 0.9136 &{} 0.9821 &{} 0.9821 &{} 0.9351 &{} 0.9265 &{} 0.9146 &{} 1.0000 &{} 0.9735 &{} 0.9353 &{} 0.9432 \\ 0.9121 &{} 0.9735 &{} 0.9735 &{} 0.9351 &{} 0.9432 &{} 0.9146 &{} 0.9735 &{} 1.0000 &{} 0.9353 &{} 0.9432 \\ 0.9166 &{} 0.9353 &{} 0.9353 &{} 0.9843 &{} 0.9223 &{} 0.9114 &{} 0.9353 &{} 0.9353 &{} 1.0000 &{} 0.9131 \\ 0.9146 &{} 0.9432 &{} 0.9432 &{} 0.9223 &{} 0.9467 &{} 0.9146 &{} 0.9432 &{} 0.9432 &{} 0.9131 &{} 1.0000 \\ \end{array}} \right) \end{aligned}$$

We find that \(C^{2}\subseteq C\) does not hold, i.e., the association matrix C is not an equivalent association matrix. Thus, we further calculate:

$$\begin{aligned}&C^{4}=C^{2}\circ C^{2}= \left( {\begin{array}{llllllllll} 1.0000 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 0.9891 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 0.9166 \\ 0.9166 &{} 1.0000 &{} 0.9872 &{} 0.9353 &{} 0.9432 &{} 0.9166 &{} 0.9821 &{} 0.9735 &{} 0.9353 &{} 0.9432 \\ 0.9166 &{} 0.9872 &{} 1.0000 &{} 0.9353 &{} 0.9432 &{} 0.9166 &{} 0.9821 &{} 0.9735 &{} 0.9353 &{} 0.9432 \\ 0.9166 &{} 0.9353 &{} 0.9353 &{} 1.0000 &{} 0.9351 &{} 0.9166 &{} 0.9353 &{} 0.9353 &{} 0.9843 &{} 0.9353 \\ 0.9166 &{} 0.9432 &{} 0.9432 &{} 0.9351 &{} 1.0000 &{} 0.9166 &{} 0.9432 &{} 0.9432 &{} 0.9353 &{} 0.9467 \\ 0.9891 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 1.0000 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 0.9166 \\ 0.9166 &{} 0.9821 &{} 0.9821 &{} 0.9353 &{} 0.9432 &{} 0.9166 &{} 1.0000 &{} 0.9735 &{} 0.9353 &{} 0.9432 \\ 0.9166 &{} 0.9735 &{} 0.9735 &{} 0.9353 &{} 0.9432 &{} 0.9166 &{} 0.9735 &{} 1.0000 &{} 0.9353 &{} 0.9432 \\ 0.9166 &{} 0.9353 &{} 0.9353 &{} 0.9843 &{} 0.9353 &{} 0.9166 &{} 0.9353 &{} 0.9353 &{} 1.0000 &{} 0.9353 \\ 0.9166 &{} 0.9432 &{} 0.9432 &{} 0.9353 &{} 0.9467 &{} 0.9166 &{} 0.9432 &{} 0.9432 &{} 0.9353 &{} 1.0000 \\ \end{array}} \right) \\&C^{8}=C^{4}\circ C^{4}= \left( {\begin{array}{llllllllll} 1.0000 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 0.9891 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 0.9166 \\ 0.9166 &{} 1.0000 &{} 0.9872 &{} 0.9353 &{} 0.9432 &{} 0.9166 &{} 0.9821 &{} 0.9735 &{} 0.9353 &{} 0.9432 \\ 0.9166 &{} 0.9872 &{} 1.0000 &{} 0.9353 &{} 0.9432 &{} 0.9166 &{} 0.9821 &{} 0.9735 &{} 0.9353 &{} 0.9432 \\ 0.9166 &{} 0.9353 &{} 0.9353 &{} 1.0000 &{} 0.9353 &{} 0.9166 &{} 0.9353 &{} 0.9353 &{} 0.9843 &{} 0.9353 \\ 0.9166 &{} 0.9432 &{} 0.9432 &{} 0.9353 &{} 1.0000 &{} 0.9166 &{} 0.9432 &{} 0.9432 &{} 0.9353 &{} 0.9467 \\ 0.9891 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 1.0000 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 0.9166 \\ 0.9166 &{} 0.9821 &{} 0.9821 &{} 0.9353 &{} 0.9432 &{} 0.9166 &{} 1.0000 &{} 0.9735 &{} 0.9353 &{} 0.9432 \\ 0.9166 &{} 0.9735 &{} 0.9735 &{} 0.9353 &{} 0.9432 &{} 0.9166 &{} 0.9735 &{} 1.0000 &{} 0.9353 &{} 0.9432 \\ 0.9166 &{} 0.9353 &{} 0.9353 &{} 0.9843 &{} 0.9353 &{} 0.9166 &{} 0.9353 &{} 0.9353 &{} 1.0000 &{} 0.9353 \\ 0.9166 &{} 0.9432 &{} 0.9432 &{} 0.9353 &{} 0.9467 &{} 0.9166 &{} 0.9432 &{} 0.9432 &{} 0.9353 &{} 1.0000 \\ \end{array}} \right) \\&C^{16}=C^{8}\circ C^{8}= \left( {\begin{array}{llllllllll} 1.0000 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 0.9891 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 0.9166 \\ 0.9166 &{} 1.0000 &{} 0.9872 &{} 0.9353 &{} 0.9432 &{} 0.9166 &{} 0.9821 &{} 0.9735 &{} 0.9353 &{} 0.9432 \\ 0.9166 &{} 0.9872 &{} 1.0000 &{} 0.9353 &{} 0.9432 &{} 0.9166 &{} 0.9821 &{} 0.9735 &{} 0.9353 &{} 0.9432 \\ 0.9166 &{} 0.9353 &{} 0.9353 &{} 1.0000 &{} 0.9353 &{} 0.9166 &{} 0.9353 &{} 0.9353 &{} 0.9843 &{} 0.9353 \\ 0.9166 &{} 0.9432 &{} 0.9432 &{} 0.9353 &{} 1.0000 &{} 0.9166 &{} 0.9432 &{} 0.9432 &{} 0.9353 &{} 0.9467 \\ 0.9891 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 1.0000 &{} 0.9166 &{} 0.9166 &{} 0.9166 &{} 0.9166 \\ 0.9166 &{} 0.9821 &{} 0.9821 &{} 0.9353 &{} 0.9432 &{} 0.9166 &{} 1.0000 &{} 0.9735 &{} 0.9353 &{} 0.9432 \\ 0.9166 &{} 0.9735 &{} 0.9735 &{} 0.9353 &{} 0.9432 &{} 0.9166 &{} 0.9735 &{} 1.0000 &{} 0.9353 &{} 0.9432 \\ 0.9166 &{} 0.9353 &{} 0.9353 &{} 0.9843 &{} 0.9353 &{} 0.9166 &{} 0.9353 &{} 0.9353 &{} 1.0000 &{} 0.9353 \\ 0.9166 &{} 0.9432 &{} 0.9432 &{} 0.9353 &{} 0.9467 &{} 0.9166 &{} 0.9432 &{} 0.9432 &{} 0.9353 &{} 1.0000 \\ \end{array}} \right) \end{aligned}$$

We note that \(C^{16}=C^{8}\), which indicates that \(C^{8}\) is an equivalent association matrix.

Since the confidence level \(\lambda \) has a close relationship with the elements of the equivalent association matrix \(C^{8}\), in the following we give a detailed sensitivity analysis with respect to the confidence level \(\lambda \). By Eq. (22), we get all the possible classifications of the ten new cars \(A_i (i=1,2,\ldots ,10)\):

  1. (1)

    If \(0\le \lambda \le 0.9166\), all cars \(A_i (i=1,2,\ldots ,10)\) are of the same type:

    $$\begin{aligned} \left\{ {A_1 ,A_2 ,A_3 ,A_4 ,A_5 ,A_6 ,A_7 ,A_8 ,A_9 ,A_{10} } \right\} . \end{aligned}$$
  2. (2)

    If \(0.9166<\lambda \le 0.9353\), all cars \(A_i (i=1,2,\ldots ,10)\) are classified into the following two types:

    $$\begin{aligned} \left\{ {A_1 ,A_6 } \right\} , \left\{ {A_2 ,A_3 ,A_4 ,A_5 ,A_7 ,A_8 ,A_9 ,A_{10} } \right\} . \end{aligned}$$
  3. (3)

    If \(0.9353<\lambda \le 0.9432\), all cars \(A_i (i=1,2,\ldots ,10)\) are classified into the following three types:

    $$\begin{aligned} \left\{ {A_1 ,A_6 } \right\} , \left\{ {A_2 ,A_3 ,A_5 ,A_7 ,A_8 ,A_{10} } \right\} , \left\{ {A_4 ,A_9 } \right\} . \end{aligned}$$
  4. (4)

    If \(0.9432<\lambda \le 0.9467\), all cars \(A_i (i=1,2,\ldots ,10)\) are classified into the following four types:

    $$\begin{aligned} \left\{ {A_1 ,A_6 } \right\} , \left\{ {A_2 ,A_3 ,A_7 ,A_8 } \right\} , \left\{ {A_4 ,A_9 } \right\} , \left\{ {A_5 ,A_{10} } \right\} . \end{aligned}$$
  5. (5)

    If \(0.9467<\lambda \le 0.9735\), all cars \(A_i (i=1,2,\ldots ,10)\) are classified into the following five types:

    $$\begin{aligned} \left\{ {A_1 ,A_6 } \right\} , \left\{ {A_2 ,A_3 ,A_7 ,A_8 } \right\} , \left\{ {A_4 ,A_9 } \right\} , \left\{ {A_5 } \right\} , \left\{ {A_{10} } \right\} . \end{aligned}$$
  6. (6)

    If \(0.9735<\lambda \le 0.9821\), all cars \(A_i (i=1,2,\ldots ,10)\) are classified into the following six types:

    $$\begin{aligned} \left\{ {A_1 ,A_6 } \right\} , \left\{ {A_2 ,A_3 ,A_7 } \right\} , \left\{ {A_4 ,A_9 } \right\} , \left\{ {A_5 } \right\} , \left\{ {A_8 } \right\} , \left\{ {A_{10} } \right\} . \end{aligned}$$
  7. (7)

    If \(0.9821<\lambda \le 0.9843\), all cars \(A_i (i=1,2,\ldots ,10)\) are classified into the following seven types:

    $$\begin{aligned} \left\{ {A_1 ,A_6 } \right\} , \left\{ {A_2 ,A_3 } \right\} , \left\{ {A_4 ,A_9 } \right\} , \left\{ {A_5 } \right\} , \left\{ {A_7 } \right\} , \left\{ {A_8 } \right\} , \left\{ {A_{10} } \right\} . \end{aligned}$$
  8. (8)

    If \(0.9843<\lambda \le 0.9872\), all cars \(A_i (i=1,2,\ldots ,10)\) are classified into the following eight types:

    $$\begin{aligned}&\left\{ {A_1 ,A_6 } \right\} , \left\{ {A_2 ,A_3 } \right\} , \left\{ {A_4 } \right\} , \left\{ {A_5 } \right\} , \left\{ {A_7 } \right\} ,\\&\qquad \left\{ {A_8 } \right\} , \left\{ {A_9 } \right\} , \left\{ {A_{10} } \right\} . \end{aligned}$$
  9. (9)

    If \(0.9872<\lambda \le 0.9891\), all cars \(A_i (i=1,2,\ldots ,10)\) are classified into the following nine types:

    $$\begin{aligned}&\left\{ {A_1 ,A_6 } \right\} , \left\{ {A_2 } \right\} , \left\{ {A_3 } \right\} , \left\{ {A_4 } \right\} , \left\{ {A_5 } \right\} , \left\{ {A_7 } \right\} ,\\&\qquad \left\{ {A_8 } \right\} , \left\{ {A_9 } \right\} , \left\{ {A_{10} } \right\} . \end{aligned}$$
  10. (10)

    If \(0.9891<\lambda \le 1\), all cars \(A_i (i=1,2,\ldots ,10)\) are classified into the following ten types, i.e., none of them are of the same class:

    $$\begin{aligned}&\left\{ {A_1 } \right\} , \left\{ {A_2 } \right\} , \left\{ {A_3 } \right\} , \left\{ {A_4 } \right\} , \left\{ {A_5 } \right\} , \left\{ {A_6 } \right\} , \left\{ {A_7 } \right\} ,\\&\qquad \left\{ {A_8 } \right\} , \left\{ {A_9 } \right\} , \left\{ {A_{10} } \right\} . \end{aligned}$$

We can find that, with the increase in the confidence level, more and more patterns are differentiated. Comparing these clustering results with those obtained by Xu et al. (2008), we note that the clustering algorithm based on our proposed similarity measure is more sensitive to the confidence level. There is only one possible partition for a given number of clusters. For example, if these cars are classified into four types, the result can only be \(\left\{ {A_1 ,A_6 } \right\} \), \(\left\{ {A_2 ,A_3 ,A_7 ,A_8 } \right\} \), \(\left\{ {A_4 ,A_9 } \right\} \), \(\left\{ {A_5 ,A_{10} } \right\} \). This is helpful for the final decision, since it reduces uncertainty. Moreover, the reasonable clustering result presented in Hwang et al. (2012) is also included in the results yielded by our proposed similarity measure.

7 Conclusion

Although a number of similarity measures between IFSs have been proposed to cope with uncertainty in information systems, many of them produce counterintuitive results in some cases. In this study, a new similarity measure and a weighted similarity measure between IFSs are proposed. The new similarity measure is calculated by direct operations on the membership degree \(\mu _A (x)\), the non-membership degree \(v_A (x)\), the hesitancy degree \(\pi _A (x)\), as well as the upper bound of membership \(1-v_A (x)\). In some special cases where existing similarity measures cannot provide reasonable results, the proposed similarity measure shows great capacity for discriminating IFSs. Moreover, the application of the proposed similarity measure in the field of pattern recognition has been investigated through numerical examples as well as the practice of medical diagnosis and cluster analysis. It has been illustrated that the proposed similarity measure performs as well as or better than previous measures. However, our proposed similarity measure is not a perfect one: it still lacks a definite physical meaning. Efforts are continuing to look for better similarity measures for the exploration and exploitation of IFSs.