Abstract
The intuitionistic fuzzy set (IFS) is one of the most robust and trustworthy tools for portraying imprecise information with the help of membership degrees. The similarity measure, one of the information measures, plays an important role in treating imperfect and ambiguous information to reach a final decision by determining the degree of similarity between pairs of numbers. Motivated by this, this paper presents novel distance and similarity measures among IFSs based on transformation techniques, together with their characteristics. In the study, the given IFSs are transformed into right-angled triangles over a unit square area, and based on the intersection of the triangles, novel distance and similarity measures are proposed. An algorithm for solving decision-making problems with the proposed similarity measure is developed and its performance is examined on numerous examples, such as pattern recognition and clustering analysis. The reliability of the developed measure is investigated by applying it to clustering and pattern recognition problems, and the results are compared with some prevailing studies. From this investigation, we conclude that several existing measures fail to give classification results under different instances, such as “division by zero problems” or “counter-intuitive cases”, while the proposed measure successfully overcomes these drawbacks.
Introduction
Nowadays, decision making (DM) is one of the most frequently used processes of our daily life, whose target is to choose the best alternative out of the available ones under numerous known or unknown criteria. Multicriteria decision making (MCDM) is a part of DM and is regarded as a cognition-based human activity. In order to handle and aggregate information gathered from several resources, the most important step is data collection. Traditionally, all the information is given in the form of crisp numbers. But in human cognition, it is often challenging to represent working situations using crisp-number-based primitive data handling techniques. These methods lead decision-makers to vague conclusions as well as uncertain decisions. Therefore, to deal with such risks and to examine the process, a large-scale family of theories such as the fuzzy set (FS) [1] and its extensions, the intuitionistic FS (IFS) [2], interval-valued IFS [3], and linguistic interval-valued IFS [4], have been proposed by researchers. Under such theories, decision-makers adapt their DM criteria to the particular situation, whether it involves human cognition or pattern recognition. In these theories, each element is represented using two degrees, namely a membership degree (MD) and a non-membership degree (NMD), such that their sum does not exceed one.
Recently, DM problems with uncertain information have become a hot research topic, which involves three influential phases:
1. how to arrange the information using a suitable scale to read the data;
2. how to aggregate the distinct attribute benefits and accumulate the overall preference value;
3. how to order things to find the finest alternative(s).
In phase 1), an IFS is one of the most widely used mediums to assess the information in terms of ordered pairs of MDs and NMDs. This representation is more reliable and also gives the hesitancy degree along with the pair of degrees. In phase 2), information measures (IMs) play a noteworthy role in treating imperfect and ambiguous information to reach the final decision. Different kinds of IMs, such as distance, similarity, inclusion, and entropy, exist in the literature, and among them, the notion of the similarity measure (SM) is one of the most effective tools to estimate the degree of similarity between pairs of objects. Finally, in phase 3), the combined values acquired from the foregoing phases are ordered with suitable measures.
Up to now, IFSs have been applied by many researchers to address decision-making problems (DMPs) by adopting the notion of SMs. Chen et al. [5] gave the notion of measuring the degree of similarity among vague sets and put forth two SMs. The authors in [6] pointed out that the measures given in [5] do not fit well in some instances, with some counterintuitive cases. To resolve this, they gave a set of modified SMs and showed the validity of their proposed measures with the aid of examples. The authors in [7] presented SMs for both discrete and continuous sets and used them to solve pattern recognition problems. Mitchell [8] pointed out that the SM given in [7] may give irrational results in some instances, and hence modified it and employed the modified measure to solve pattern recognition problems. The authors in [9] presented a SM by improving some of the existing measures and validated it with the help of several numerical cases. In [10], the authors presented distance and SMs among IFSs using the concept of the Hausdorff distance and studied their related features. In [11], the authors presented SMs for IFSs and gave their application in medical diagnostic reasoning. In [12], the authors presented some IMs for IFSs to solve pattern recognition problems and also discussed the relationship between SMs and distance measures. Liu [13] pointed out that the measures recommended in [6] produce illogical results in some cases and, consequently, proposed another SM and proved its related properties. Xu [14] presented SMs for IFSs. Song et al. [15] put forward weighted SMs along with their application in pattern recognition problems. Chen et al. [16] developed a SM by transforming IFSs into right-angled triangular numbers and showed the effectiveness of the recommended measure by applying it to various pattern recognition problems. Garg [17] developed an improved cosine SM for IFSs.
In [18], the authors presented some SMs for IFSs based on the connection numbers of set pair analysis theory. In [19], the authors proposed distance measures to calculate the separation within pairs of IFSs by transforming them into isosceles triangles. In [20], they proposed a SM based on transformation techniques and gave its applications in various pattern recognition problems. In [21], the authors presented distance measures for cubic IFSs and applied them to solve pattern recognition and medical diagnosis problems. Furthermore, IMs have been applied by various scholars and researchers in many other areas too. For more details, we refer the reader to [22,23,24,25,26,27,28,29,30,31,32,33,34] and their corresponding references.
From the existing studies, it is worth noticing that SMs are essential tools for processing the uncertainty associated with FSs and IFSs. Although different MCDM methods based on SMs among IFSs have been explored and used in real-life problems such as pattern recognition and clustering analysis, most of the measures fail to give classification results in some situations. On account of the counterintuitive aspects [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28] of some existing intuitionistic fuzzy MCDM methods, decision makers may encounter difficulties in reaching a conclusion due to the “division by zero problem” or indistinguishable results. Therefore, in order to overcome the drawbacks of the prevailing SM-based MCDM methods, more optimized measures are required to handle DMPs under diverse circumstances. Hence, the present paper intends to propose some novel distance and similarity measures by transforming IFSs into right-angled triangles, and consequently MCDM methods based on the proposed measures to address the above issues.
To address these problems, the fundamental objectives of the presented research are given as follows:
1. to transform the collective IFS information into right-angled triangles over a unit square area;
2. to propose new distance and similarity measures for IFSs, based on the diagonal of the unit square and the intersection of the transformed right-angled triangles, to overcome the drawbacks of several existing studies [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28, 32];
3. to present a novel decision-making algorithm based on the proposed SMs and validate it by several numerical examples;
4. to develop a clustering algorithm based on the proposed SMs to distinguish objects of the same pattern.
To accomplish the above four goals, we divide the paper into six sections. In Section 2, we review the fundamental prevailing works on IFSs and SMs. In Section 3, we first transform the given collective data into right-angled triangles over a unit square area. Based on this transformation, we introduce novel distance and similarity measures among IFSs to estimate the degree of separation and similarity, respectively, among pairs of IFSs. A comprehensive description of their origin is also given. In Section 4, the superiority of the recommended SM over the existing SMs [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28, 32] is shown through several illustrative examples. The results show that the proposed measures perform well in the instances where the existing measures suffer from the “division by zero problem” or “counter-intuitive cases”. Section 5 presents an algorithm based on the proposed SMs to solve diverse kinds of problems, such as pattern recognition and clustering analysis, together with comparative studies with several existing algorithms to demonstrate the effectiveness of the developed measure. Also, a clustering algorithm based on the proposed similarity measure is applied to classify objects with distinct values of the expert's confidence level. Finally, concluding remarks are stated in Section 6.
Preliminaries
This section reviews some basic concepts related to IFSs. For it, we denote the universal set by \(\mathcal {U}\) and let \(\Phi (\mathcal {U})\) be the set of all IFSs.
Definition 1
[1] A FS \(\mathcal {F}\) on \(\mathcal {U}\) is stated as \(\mathcal {F}=\big \{\big \langle x, \zeta _{\mathcal {F}}(x)\big \rangle \mid x \in \mathcal {U} \big \},\)
where \(\zeta_{\mathcal {F}} : \mathcal {U} \rightarrow [0,1]\) represents the degree to which an element belongs to the FS \(\mathcal {F}\) and is named the membership function.
Definition 2
[2, 35] An IFS \(\mathcal {I}\) is given as \(\mathcal {I}=\big \{\big \langle x, \zeta _{\mathcal {I}}(x), \vartheta _{\mathcal {I}}(x)\big \rangle \mid x \in \mathcal {U} \big \},\)
where \(\zeta_{\mathcal {I}}, \vartheta_{\mathcal {I}} : \mathcal {U} \rightarrow [0,1]\) represent the MD and NMD, respectively, with \(0 \le \zeta_{\mathcal {I}}(x)+ \vartheta_{\mathcal {I}}(x)\le 1\) \(\forall\) x, and \(h_{\mathcal {I}}(x)=1-\zeta_{\mathcal {I}}(x)-\vartheta_{\mathcal {I}}(x)\) gives the hesitation degree of x in \(\mathcal {I}\).
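A minimal sketch of an IF value as in Definition 2 (the class name and field names are our own conventions, not the paper's):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IFValue:
    mu: float   # membership degree (zeta)
    nu: float   # non-membership degree (vartheta)

    def __post_init__(self):
        # Both degrees lie in [0, 1] and their sum must not exceed 1.
        if not (0.0 <= self.mu <= 1.0 and 0.0 <= self.nu <= 1.0):
            raise ValueError("degrees must lie in [0, 1]")
        if self.mu + self.nu > 1.0:
            raise ValueError("mu + nu must not exceed 1")

    @property
    def hesitation(self) -> float:
        # h(x) = 1 - zeta(x) - vartheta(x)
        return 1.0 - self.mu - self.nu
```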
Definition 3
[2] For two IFSs \(\mathcal {I}=\{\langle x,\zeta _{\mathcal {I}}(x), \vartheta_{\mathcal {I}}(x)\rangle \mid x \in \mathcal {U} \}\) and \(\mathcal {J}=\{\langle x,\zeta_{\mathcal {J}}(x), \vartheta_{\mathcal {J}}(x)\rangle \mid x \in \mathcal {U} \}\) defined on \(\mathcal {U}\), we have
(i) \(\mathcal {I} \subseteq \mathcal {J}\) if \(\zeta_{\mathcal {I}}(x) \le \zeta_{\mathcal {J}}(x)\) and \(\vartheta_{\mathcal {I}}(x) \ge \vartheta_{\mathcal {J}}(x)\) \(\forall\) x.
(ii) \(\mathcal {I}=\mathcal {J}\) \(\Leftrightarrow\) \(\mathcal {I} \subseteq \mathcal {J}\) and \(\mathcal {J} \subseteq \mathcal {I}\).
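Definition 3 translates directly into element-wise checks; a sketch, with IFSs encoded as lists of (MD, NMD) pairs (our own encoding convention):

```python
def ifs_subset(I, J):
    """I subset of J iff mu_I <= mu_J and nu_I >= nu_J element-wise (Definition 3)."""
    return all(mi <= mj and ni >= nj
               for (mi, ni), (mj, nj) in zip(I, J))

def ifs_equal(I, J):
    # Equality holds iff each set contains the other.
    return ifs_subset(I, J) and ifs_subset(J, I)
```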
Definition 4
[8] A function \(\mathcal {S}:\Phi (\mathcal {U}) \times \Phi (\mathcal {U}) \rightarrow [0,1]\) is called SM, if it satisfies the following characteristics:
(P1) \(0\le \mathcal {S}(\mathcal {I}, \mathcal {J})\le 1\).
(P2) \(\mathcal {S}(\mathcal {I}, \mathcal {J})=1\) \(\Leftrightarrow\) \(\mathcal {I}=\mathcal {J}\).
(P3) \(\mathcal {S}(\mathcal {I}, \mathcal {J})=\mathcal {S}(\mathcal {J}, \mathcal {I})\).
(P4) If \(\mathcal {I}\subseteq \mathcal {J}\subseteq \mathcal {K}\) then, \(\mathcal {S}(\mathcal {I}, \mathcal {K})\le \mathcal {S}(\mathcal {I}, \mathcal {J})\) and \(\mathcal {S}(\mathcal {I}, \mathcal {K})\le \mathcal {S}(\mathcal {J}, \mathcal {K})\) where \(\mathcal {I}, \mathcal {J}, \mathcal {K} \in \Phi (\mathcal {U})\).
Later on, the axiomatic definition of the distance measure \(\mathcal {D}\) [12] was introduced. A distance measure, which is the complement of a SM, is a function satisfying the following axioms:
(P1) \(0\le \mathcal {D}(\mathcal {I},\mathcal {J})\le 1\).
(P2) \(\mathcal {D}(\mathcal {I}, \mathcal {J})=0\) \(\Leftrightarrow\) \(\mathcal {I}=\mathcal {J}\).
(P3) \(\mathcal {D}(\mathcal {I},\mathcal {J})=\mathcal {D}(\mathcal {J}, \mathcal {I})\).
(P4) If \(\mathcal {I}\subseteq \mathcal {J}\subseteq \mathcal {K}\) then, \(\mathcal {D}(\mathcal {I}, \mathcal {K})\ge \mathcal {D}(\mathcal {I}, \mathcal {J})\) and \(\mathcal {D}(\mathcal {I}, \mathcal {K})\ge \mathcal {D}(\mathcal {J}, \mathcal {K})\) where \(\mathcal {I}, \mathcal {J}, \mathcal {K} \in \Phi (\mathcal {U})\).
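Axioms (P1)-(P4) can be exercised concretely. The sketch below uses the standard normalized Hamming similarity for IFSs, a well-known measure from the literature used here only for illustration (it is not the measure proposed in this paper, and the function name is ours):

```python
def hamming_similarity(I, J):
    # Normalized Hamming similarity between IFSs given as lists of
    # (MD, NMD) pairs: S = 1 - (1/2n) * sum(|d_mu| + |d_nu|).
    n = len(I)
    return 1.0 - sum(abs(mi - mj) + abs(ni - nj)
                     for (mi, ni), (mj, nj) in zip(I, J)) / (2 * n)
```

With I ⊆ J ⊆ K, the measure attains values in [0, 1], equals 1 exactly on identical sets, is symmetric, and decreases as the sets move further apart, matching (P1)-(P4).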
Under IFS environment, several authors [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28] have outlined the numerous SMs to estimate the degree of similarity among IFSs. These measures are summarized in Table 1. From these existing studies, it has been investigated that most of the prevailing measures fail to solve the real-world DMPs due to “division by zero problem” or “counter-intuitive cases”. Therefore, the existing measures give biased and contradictory outputs which concludes that the decision made cannot be optimal. Motivated by this fact, in this work, we propose a novel distance and similarity measure by applying the concept of transforming the given IFSs into right-angled triangles. The detailed description of the proposed measures is given in the next section.
Novel Proposed Distance and Similarity Measure Between IFSs
Let \(\mathcal {P}=\big \{\big \langle x_j,\zeta _{\mathcal {P}}(x_j), \vartheta _{\mathcal {P}}(x_j)\big \rangle \mid j=1,2, \ldots , n \big \}\) and \(\mathcal {Q}=\big \{\big \langle x_j, \zeta _{\mathcal {Q}}(x_j), \vartheta _{\mathcal {Q}}(x_j)\big \rangle \mid j=1,2, \ldots , n \big \}\) be two IFSs defined on the universal set \(\mathcal {U}\). Then clearly \(\big [\zeta _{\mathcal {P}}(x_j), 1-\vartheta _{\mathcal {P}}(x_j)\big ]\) is the intuitionistic fuzzy (IF) value of the element \(x_j \in \mathcal {U}\) in the IFS \(\mathcal {P}\). For convenience, the IF values \(\big [\zeta _{\mathcal {P}}(x_j), 1-\vartheta _{\mathcal {P}}(x_j)\big ]\) and \(\big [\zeta _{\mathcal {Q}}(x_j),1-\vartheta _{\mathcal {Q}}(x_j)\big ]\) of the IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) are represented as \([p_1(x_j), p_2(x_j)]\) and \([q_1(x_j), q_2(x_j)]\) on the interval [0, 1] of the y-axis and x-axis, respectively, as shown in Fig. 1a. We denote the distance between the points \(p_1(x_j)\) and \(p_2(x_j)\) by \(l_p(x_j),\) i.e., \(l_p(x_j)=p_2(x_j)-p_1(x_j)=1-\zeta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {P}}(x_j)=h_{\mathcal {P}}(x_j),\) and the distance between the points \(q_1(x_j)\) and \(q_2(x_j)\) by \(l_q(x_j),\) i.e., \(l_q(x_j)=q_2(x_j)-q_1(x_j)= 1-\zeta _{\mathcal {Q}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)=h_{\mathcal {Q}}(x_j)\). Thus, based on this, the vertices of the transformed right-angled triangles of the IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) are denoted by \(O_{\mathcal {P}}\) and \(O_{\mathcal {Q}},\) respectively, as depicted in Fig. 1a.
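The interval form and the length \(l(\cdot)\) above can be sketched as follows (helper names are our own):

```python
def to_interval(mu: float, nu: float) -> tuple[float, float]:
    """Return [p1, p2] = [mu, 1 - nu] for an IF value (mu, nu)."""
    return mu, 1.0 - nu

def length(mu: float, nu: float) -> float:
    """l = p2 - p1 = 1 - mu - nu, i.e., the hesitation degree h."""
    p1, p2 = to_interval(mu, nu)
    return p2 - p1
```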
During the formulation, we transform the given IFSs into right-angled triangles in a unit square area and construct the SM based on the intersection of the transformed triangles. The distance measure between the IFSs is defined as half of the sum of the lengths of the straight lines \(X_1Y_1\) and \(X_2Y_2\), as shown in Fig. 1a. Moreover, Fig. 1b depicts that when two IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) are equal, i.e., \(\mathcal {P}=\mathcal {Q}\), the point \(X_1\) coincides with \(Y_1\) and the point \(X_2\) coincides with \(Y_2\). This gives that the lengths of \(X_1Y_1\) and \(X_2Y_2\) are zero and hence the distance between the IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) becomes zero.
The co-ordinates of the points \(p_1\), \(p_2\), \(q_1\), \(q_2\), \(O_{\mathcal {P}}\) and \(O_{\mathcal {Q}}\) are \(\big (0,p_1(x_j)\big )\), \(\big (0,p_2(x_j)\big )\), \(\big (q_1(x_j),0\big )\), \(\big (q_2(x_j),0\big )\), \(\big (1,p_1(x_j)\big )\) and \(\big (q_1(x_j),1\big ),\) respectively. Therefore, the equations of the straight lines \(p_1 O_{\mathcal {P}}\), \(p_2 O_{\mathcal {P}}\), \(q_1 O_{\mathcal {Q}}\), \(q_2 O_{\mathcal {Q}}\) are \(y=p_1(x_j)\), \(y=-xl_p(x_j)+p_2(x_j)\), \(x=q_1(x_j)\) and \(y=\tfrac{q_2(x_j)-x}{l_q(x_j)},\) respectively. Now, in order to find the lengths of the straight lines \(X_1Y_1\) and \(X_2Y_2\), we need to compute the co-ordinates of the points \(X_1\), \(X_2\), \(Y_1\) and \(Y_2\).
\(X_1\) point: Since \(X_1\) is the point of intersection of the straight lines \(p_1O_{\mathcal {P}}\) and \(q_1O_{\mathcal {Q}}\) therefore, x-coordinate of the point \(X_1\) is \(q_1(x_j)\) and the y-coordinate of the point \(X_1\) is \(p_1(x_j)\). Hence, co-ordinates of the point \(X_1\) are \(\big (q_1(x_j)\), \(p_1(x_j)\big )\).
\(X_2\) point: As \(X_2\) is the intersection point of the straight lines \(p_2O_{\mathcal {P}}\) and \(q_2O_{\mathcal {Q}}\), solving \(y=-xl_p(x_j)+p_2(x_j)\) and \(y=\tfrac{q_2(x_j)-x}{l_q(x_j)}\) simultaneously gives the x-coordinate of the point \(X_2\) as \(\tfrac{q_2(x_j)-p_2(x_j)l_q(x_j)}{1-l_p(x_j)l_q(x_j)}\). Further, substituting this value into \(y=-xl_p(x_j)+p_2(x_j)\), the y-coordinate of the point \(X_2\) is obtained as \(\tfrac{p_2(x_j)-q_2(x_j)l_p(x_j)}{1-l_p(x_j)l_q(x_j)}\). Hence, the co-ordinates of the point \(X_2\) are \(\Big (\tfrac{q_2(x_j)-p_2(x_j)l_q(x_j)}{1-l_p(x_j)l_q(x_j)}, \tfrac{p_2(x_j)-q_2(x_j)l_p(x_j)}{1-l_p(x_j)l_q(x_j)}\Big )\).
\(Y_1\) point: Since \(Y_1\) is the point of intersection of the straight lines OZ and \(q_1O_{\mathcal {Q}}\), the x-coordinate of the point \(Y_1\) is the same as the x-coordinate of the point \(X_1,\) i.e., \(q_1(x_j)\). Also, since the straight line OZ is one of the diagonals of the unit square, for any point on the line OZ, the x-coordinate equals the y-coordinate. Hence, the co-ordinates of the point \(Y_1\) are \(\big (q_1(x_j),q_1(x_j)\big )\).
\(Y_2\) point: \(Y_2\) is the point on the diagonal OZ, and Fig. 1a depicts that the x-coordinates of the points \(X_2\) and \(Y_2\) are the same. Therefore, the x-coordinate of the point \(Y_2\) is \(\tfrac{q_2(x_j)-p_2(x_j)l_q(x_j)}{1-l_p(x_j)l_q(x_j)}\). Also, as explained earlier, the y-coordinate of the point \(Y_2\) equals its x-coordinate, i.e., \(\tfrac{q_2(x_j)-p_2(x_j)l_q(x_j)}{1-l_p(x_j)l_q(x_j)}\). Hence, the coordinates of the point \(Y_2\) are \(\Big (\tfrac{q_2(x_j)-p_2(x_j)l_q(x_j)}{1-l_p(x_j)l_q(x_j)}, \tfrac{q_2(x_j)-p_2(x_j)l_q(x_j)}{1-l_p(x_j)l_q(x_j)}\Big )\).
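The stated coordinates of \(X_2\) can be cross-checked numerically by writing the two lines as \(l_p x + y = p_2\) and \(x + l_q y = q_2\) and solving with Cramer's rule; a sketch (function names are ours):

```python
def intersect(a1, b1, c1, a2, b2, c2):
    # Cramer's rule for the system a1*x + b1*y = c1, a2*x + b2*y = c2.
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

def x2_closed_form(p2, lp, q2, lq):
    # x-coordinate of X2 as stated in the text.
    return (q2 - p2 * lq) / (1.0 - lp * lq)
```

The intersection point returned by the generic solver lies on both lines and its x-coordinate matches the closed form.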
Finally, assume that \(\overline{X_1Y_1}(x_j)\) and \(\overline{X_2Y_2}(x_j)\) denote the lengths of the straight lines \(X_1Y_1\) and \(X_2Y_2,\) respectively, corresponding to the element \(x_j\) of the universal set \(\mathcal {U}\). Then, from the co-ordinates obtained above, \(\overline{X_1Y_1}(x_j)=\big |p_1(x_j)-q_1(x_j)\big |\) and \(\overline{X_2Y_2}(x_j)=\Big |\tfrac{p_2(x_j)-q_2(x_j)l_p(x_j)}{1-l_p(x_j)l_q(x_j)}-\tfrac{q_2(x_j)-p_2(x_j)l_q(x_j)}{1-l_p(x_j)l_q(x_j)}\Big |\).
Based on the above discussion and calculation, we outline the new distance measure between two IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) (with description as shown in Fig. 1a) as follows:
Definition 5
The distance measure between two IFSs \(\mathcal {P}=\big \{\big \langle x,\zeta _{\mathcal {P}}(x), \vartheta _{\mathcal {P}}(x)\big \rangle \mid x \in \mathcal {U} \big \}\) and \(\mathcal {Q}=\big \{\big \langle x,\zeta _{\mathcal {Q}}(x)\), \(\vartheta _{\mathcal {Q}}(x)\big \rangle \mid x \in \mathcal {U} \big \}\) on \(\mathcal {U}=\{x_1,x_2, \ldots , x_n\}\) is:
where \(h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j) \ne 1\) for all \(x_j \in \mathcal {U}\).
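The displayed formula of Eq. (5) is not reproduced in this excerpt. As a hedged reconstruction, the sketch below computes \(\mathcal {D}(\mathcal {P}, \mathcal {Q})\) directly from the geometric definition above, i.e., \(\mathcal {D} = \tfrac{1}{2n}\sum_j \big (\overline{X_1Y_1}(x_j)+\overline{X_2Y_2}(x_j)\big )\), using the coordinates of \(X_1\), \(Y_1\), \(X_2\), \(Y_2\) derived earlier; the function name and the list-of-(MD, NMD)-pairs encoding are our own conventions, not the paper's verbatim formula:

```python
def distance(P, Q):
    """Reconstructed sketch of the proposed distance between two IFSs,
    given as lists of (mu, nu) pairs; requires h_P * h_Q != 1 per element."""
    total = 0.0
    for (mp, vp), (mq, vq) in zip(P, Q):
        p1, p2, lp = mp, 1.0 - vp, 1.0 - mp - vp   # interval and hesitation of P
        q1, q2, lq = mq, 1.0 - vq, 1.0 - mq - vq   # interval and hesitation of Q
        x1y1 = abs(p1 - q1)                        # X1 = (q1, p1), Y1 = (q1, q1)
        x = (q2 - p2 * lq) / (1.0 - lp * lq)       # common x-coordinate of X2, Y2
        x2y2 = abs((-lp * x + p2) - x)             # X2 = (x, -lp*x + p2), Y2 = (x, x)
        total += x1y1 + x2y2
    return total / (2 * len(P))
```

On \(\mathcal {P}=\{(0.3,0.4)\}\) and \(\mathcal {Q}=\{(0.5,0.2)\}\) the value works out by hand to \(17/70 \approx 0.243\) under this reconstruction, and the sketch reproduces it.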
Next, for IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) defined on \(\mathcal {U}=\{x_1,x_2, \ldots , x_n\}\) satisfying \(h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j) \ne 1\), the distance measure \(\mathcal {D}(\mathcal {P}, \mathcal {Q})\), given in Eq. (5), has the following characteristics.
Theorem 1
The proposed distance measure \(\mathcal {D}\) satisfies the inequality given as: \(0\le \mathcal {D}(\mathcal {P}\), \(\mathcal {Q})\le 1\).
Proof
For IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) and from Fig. 1a, it is depicted that minimum and maximum values of \(\overline{X_1Y_1}\), \(\overline{X_2Y_2}\) are 0 and 1, respectively. Therefore,
which implies that
Hence, \(0 \le \mathcal {D}(\mathcal {P}, \mathcal {Q}) \le 1\), and the result follows.
Theorem 2
\(\mathcal {D}(\mathcal {P}, \mathcal {Q})=0\) \(\Leftrightarrow\) \(\mathcal {P}=\mathcal {Q}\).
Proof
If \(\mathcal {P}=\mathcal {Q}\), then \(\zeta _{\mathcal {P}}(x_j)=\zeta _{\mathcal {Q}}(x_j)\), \(\vartheta _{\mathcal {P}}(x_j)=\vartheta _{\mathcal {Q}}(x_j)\) and \(h_{\mathcal {P}}(x_j)=h_{\mathcal {Q}}(x_j)\) \(\forall\) \(j=1,2, \ldots ,n\). Then, clearly, \(\mathcal {D}(\mathcal {P}, \mathcal {Q})=0\). Conversely, let \(\mathcal {D}(\mathcal {P}, \mathcal {Q})=0\), so that \(\overline{X_1Y_1}(x_j)=0\) and \(\overline{X_2Y_2}(x_j)=0\) \(\forall\) j.
Now, Eq. (6) gives that \(\zeta _{\mathcal {P}}(x_j)=\zeta _{\mathcal {Q}}(x_j)\) \(\forall\) j. Further, using \(\zeta _{\mathcal {P}}(x_j)=\zeta _{\mathcal {Q}}(x_j)\) in Eq. (7), we obtain that
Thus, we have two possibilities:
(i) When \(\vartheta _{\mathcal {P}}(x_j)=\vartheta _{\mathcal {Q}}(x_j)\). Also, from Eq. (6) we have \(\zeta _{\mathcal {P}}(x_j)=\zeta _{\mathcal {Q}}(x_j)\) \(\forall\) j. Hence, \(\mathcal {P}=\mathcal {Q}\).
(ii) When \(\zeta _{\mathcal {P}}(x_j)=1\). It implies that \(\vartheta _{\mathcal {P}}(x_j)=0\). Also, \(\zeta _{\mathcal {P}}(x_j)=\zeta _{\mathcal {Q}}(x_j)\) gives that \(\zeta _{\mathcal {Q}}(x_j)=1\) and hence, \(\vartheta _{\mathcal {Q}}(x_j)=0\) \(\forall\) j. Thus, \(\mathcal {P}=\mathcal {Q}\).
Theorem 3
The proposed distance measure is symmetric, i.e., \(\mathcal {D}(\mathcal {P}, \mathcal {Q})=\mathcal {D}(\mathcal {Q}, \mathcal {P})\).
Proof
From Eq. (5), we have
Theorem 4
If \(\mathcal {P}\subseteq \mathcal {Q}\subseteq \mathcal {R}\) then, \(\mathcal {D}(\mathcal {P}, \mathcal {R})\ge \mathcal {D}(\mathcal {P}, \mathcal {Q})\) and \(\mathcal {D}(\mathcal {P},\mathcal {R})\ge \mathcal {D}(\mathcal {Q},\mathcal {R})\).
Proof
\(\mathcal {P}\subseteq \mathcal {Q}\subseteq \mathcal {R}\) signifies that \(\zeta _{\mathcal {P}} \le \zeta _{\mathcal {Q}} \le \zeta _{\mathcal {R}}\) and \(\vartheta _{\mathcal {P}} \ge \vartheta _{\mathcal {Q}} \ge \vartheta _{\mathcal {R}}\) \(\forall\) x. From Eq. (5), we have
Since \(\zeta _{\mathcal {P}}(x_j) \le \zeta _{\mathcal {Q}}(x_j)\) and \(\vartheta _{\mathcal {P}}(x_j) \ge \vartheta _{\mathcal {Q}}(x_j)\), it follows that \(\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j) \ge 0\); therefore, \(\big |\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)\big |=\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)\). Also, we obtain that \(\big (1-\zeta _{\mathcal {P}}(x_j)\big )\big (1-\vartheta _{\mathcal {Q}}(x_j)\big ) \ge \big (1-\zeta _{\mathcal {Q}}(x_j)\big )\big (1-\vartheta _{\mathcal {P}}(x_j)\big )\) \(\Rightarrow\) \(\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)+\vartheta _{\mathcal {P}}(x_j) - \vartheta _{\mathcal {Q}}(x_j) + \zeta _{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j) - \zeta _{\mathcal {Q}}(x_j)\vartheta _{\mathcal {P}}(x_j) \ge 0.\)
Hence,
In the similar manner, using the given conditions, we have
Now,
where \(\mathcal {A}_j=\zeta _{\mathcal {R}}(x_j)-\zeta _{\mathcal {Q}}(x_j)\) and
Further, to show that \(\mathcal {D}(\mathcal {P}, \mathcal {R})-\mathcal {D}(\mathcal {P}, \mathcal {Q})\ge 0\), it is sufficient to prove that \(\mathcal {A}_j, \mathcal {B}_j \ge 0\) \(\forall\) j. Since, \(\zeta _{\mathcal {R}}(x_j) \ge \zeta _{\mathcal {Q}}(x_j)\) therefore, \(\mathcal {A}_j \ge 0\). Now,
Thus \({\mathcal {A}}_j,{\mathcal {B}}_j \ge 0\) \(\forall\) j. Hence, the result.
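As a numerical cross-check of Theorem 4, the sketch below samples random chains \(\mathcal {P}\subseteq \mathcal {Q}\subseteq \mathcal {R}\) and verifies the monotonicity property. Since the closed form of Eq. (5) is not reproduced in this excerpt, the `distance` body is our reconstruction from the geometric derivation of Section 3, not the paper's verbatim formula, and the helper names are ours:

```python
import random

def distance(P, Q):
    # Reconstructed sketch of the proposed distance (an assumption).
    total = 0.0
    for (mp, vp), (mq, vq) in zip(P, Q):
        p2, lp = 1.0 - vp, 1.0 - mp - vp
        q2, lq = 1.0 - vq, 1.0 - mq - vq
        x = (q2 - p2 * lq) / (1.0 - lp * lq)   # common x-coordinate of X2, Y2
        total += abs(mp - mq) + abs((-lp * x + p2) - x)
    return total / (2 * len(P))

def random_chain(rng, n=3):
    """Sample IFSs P, Q, R with P subset of Q subset of R:
    membership degrees increase, non-membership degrees decrease."""
    P, Q, R = [], [], []
    for _ in range(n):
        mp, mq, mr = sorted(rng.random() for _ in range(3))
        vr = rng.uniform(0.0, 1.0 - mr)        # smallest NMD goes with largest MD
        vq = rng.uniform(vr, 1.0 - mq)
        vp = rng.uniform(vq, 1.0 - mp)
        P.append((mp, vp)); Q.append((mq, vq)); R.append((mr, vr))
    return P, Q, R
```

Every sampled chain should satisfy \(\mathcal {D}(\mathcal {P}, \mathcal {R}) \ge \mathcal {D}(\mathcal {P}, \mathcal {Q})\) and \(\mathcal {D}(\mathcal {P}, \mathcal {R}) \ge \mathcal {D}(\mathcal {Q}, \mathcal {R})\), up to floating-point tolerance.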
Further, based on the distance measure, we can define a novel SM as given in the following definition:
Definition 6
A SM between two IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) is given as
where \(h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j) \ne 1\) \(\forall\) \(x_j \in \mathcal {U}\).
Theorem 5
For \(\mathcal {P}, \mathcal {Q}, \mathcal {R} \in \Phi (\mathcal {U})\), a similarity measure \(\mathcal {S}\) between \(\mathcal {P}\) and \(\mathcal {Q}\), denoted by \(\mathcal {S}(\mathcal {P}, \mathcal {Q})\), satisfies the following properties:
(P1) \(0\le \mathcal {S}(\mathcal {P}, \mathcal {Q})\le 1\).
(P2) \(\mathcal {S}(\mathcal {P}, \mathcal {Q})={1}\) \(\Leftrightarrow\) \(\mathcal {P}= \mathcal {Q}\).
(P3) \(\mathcal {S}(\mathcal {P}, \mathcal {Q})=\mathcal {S}(\mathcal {Q}, \mathcal {P})\).
(P4) If \(\mathcal {P}\subseteq \mathcal {Q}\subseteq \mathcal {R}\) then, \(\mathcal {S}(\mathcal {P},\mathcal {R})\le \mathcal {S}(\mathcal {P}, \mathcal {Q})\) and \(\mathcal {S}(\mathcal {P}, \mathcal {R})\le \mathcal {S}(\mathcal {Q},\mathcal {R})\).
Proof
For IFSs \(\mathcal {P}, \mathcal {Q}, \mathcal {R}\) and by Definition 6, we can obtain that \(\mathcal {S}(\mathcal {P}, \mathcal {Q})=1-\mathcal {D}(\mathcal {P}, \mathcal {Q})\). Thus, from it, we have
(P1) As \(0 \le \mathcal {D}(\mathcal {P},\mathcal {Q}) \le 1\). Therefore, \(0 \le 1-\mathcal {D}(\mathcal {P}, \mathcal {Q}) \le 1\). Hence, \(0 \le \mathcal {S}(\mathcal {P}, \mathcal {Q}) \le 1\).
(P2) Since, \(\mathcal {D}(\mathcal {P}, \mathcal {Q})=0\) \(\Leftrightarrow\) \(\mathcal {P}=\mathcal {Q}\). It implies that \(1-\mathcal {D}(\mathcal {P}, \mathcal {Q})=1\) \(\Leftrightarrow\) \(\mathcal {P}=\mathcal {Q}\) which gives that \(\mathcal {S}(\mathcal {P}, \mathcal {Q})=1\) \(\Leftrightarrow\) \(\mathcal {P}=\mathcal {Q}\).
(P3) \(\mathcal {D}(\mathcal {P}, \mathcal {Q})=\mathcal {D}(\mathcal {Q}, \mathcal {P})\) gives that \(1-\mathcal {D}(\mathcal {P}, \mathcal {Q})=1-\mathcal {D}(\mathcal {Q}, \mathcal {P})\). It implies \(\mathcal {S}(\mathcal {P}, \mathcal {Q})=\mathcal {S}(\mathcal {Q}, \mathcal {P})\).
(P4) For \(\mathcal {P} \subseteq \mathcal {Q} \subseteq \mathcal {R}\) we have, \(\mathcal {D}(\mathcal {P}, \mathcal {R}) \ge \mathcal {D}(\mathcal {P}, \mathcal {Q})\) and \(\mathcal {D}(\mathcal {P}, \mathcal {R}) \ge \mathcal {D}(\mathcal {Q}, \mathcal {R})\) which gives that \(1-\mathcal {D}(\mathcal {P}, \mathcal {R}) \le 1-\mathcal {D}(\mathcal {P}, \mathcal {Q})\) and \(1-\mathcal {D}(\mathcal {P}, \mathcal {R}) \le 1-\mathcal {D}(\mathcal {Q}, \mathcal {R})\). It implies \(\mathcal {S}(\mathcal {P}, \mathcal {R}) \le \mathcal {S}(\mathcal {P}, \mathcal {Q})\) and \(\mathcal {S}(\mathcal {P}, \mathcal {R}) \le \mathcal {S}(\mathcal {Q}, \mathcal {R})\).
Superiority Analysis of the Stated Measure
In this section, we analyze the drawbacks of the several existing SMs [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28] of IFSs with the help of some numerical examples. The existing SMs for IFSs are summarized in Table 1.
In the following, we give some “counterintuitive cases” to illustrate the drawbacks of the existing SMs [5, 7, 11, 22, 25].
Example 1
Consider three IFSs \(\mathcal {P}= \{\langle x,0.00,0.00\rangle \}\), \(\mathcal {Q}= \{\langle x,0.49,0.51\rangle \}\) and \(\mathcal {R}= \{\langle x,0.50,0.50\rangle \}\) defined on \(\mathcal {U}=\{x\}\). From these given sets, it is obvious that the similarity between the IFSs \(\mathcal {Q}\) and \(\mathcal {R}\) is more than the IFSs \(\mathcal {P}\) and \(\mathcal {R}\). Now, in order to show the superiority of the presented SM \(\mathcal {S}\), we apply the prevailing similarity measures \(S_C\) [5], \(S_{DC}\) [7], \(S_{SK}\) [11], \(S_{VS}\) [22], \(S_Y\) [25] and the proposed similarity measure \(\mathcal {S}\) on these IFSs, and tabulate the obtained results in Table 2.
From this table, it is evident that the similarity measures \(S_C\) [5] and \(S_{DC}\) [7] indicate that the similarity degree between \(\mathcal {P}\) and \(\mathcal {R}\) is higher than that between \(\mathcal {Q}\) and \(\mathcal {R}\). On the other hand, the measure \(S_{SK}\) [11] suggests that the similarity degree between \(\mathcal {P}\) and \(\mathcal {R}\) is identical to that between \(\mathcal {Q}\) and \(\mathcal {R}\). The measures \(S_{VS}\) [22] and \(S_Y\) [25] fail to determine the similarity degree between \(\mathcal {P}\) and \(\mathcal {R}\) due to the “division by zero problem,” whereas on applying the proposed SM we find that the similarity degree between \(\mathcal {Q}\) and \(\mathcal {R}\) is greater than that between \(\mathcal {P}\) and \(\mathcal {R}\). Thus, the prevailing measures \(S_C\) [5], \(S_{DC}\) [7] and \(S_{SK}\) [11] suggest that the IFSs \(\mathcal {P}\) and \(\mathcal {R}\) are more similar, which is counter to the fact that \(\mathcal {Q}\) and \(\mathcal {R}\) are more alike, whereas the proposed measure \(\mathcal {S}\) gives the accurate result. Therefore, the proposed measure \(\mathcal {S}\) gives better and more reliable results than the existing SMs [5, 7, 11, 22, 25]. Moreover, the similarity measures [5, 7, 11] calculate the degree of similarity between \(\mathcal {P}\) and \(\mathcal {R}\) to be exactly 1 even though \(\mathcal {P}\) and \(\mathcal {R}\) are not equal. Thus, the similarity measures [5, 7, 11] do not satisfy property (P2) of a SM as stated in Theorem 5. Hence, from this analysis, we conclude that the existing SMs have drawbacks in that they do not satisfy certain characteristics.
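Under our reconstruction of the proposed measure from Section 3 (an assumption, since the closed form of Eq. (5) is not reproduced in this excerpt; function names are ours), the ordering claimed in Example 1 can be checked numerically:

```python
def similarity(P, Q):
    # S = 1 - D, with D reconstructed from the geometric derivation of
    # Section 3 (our assumption of the closed form, not the paper's verbatim).
    total = 0.0
    for (mp, vp), (mq, vq) in zip(P, Q):
        p2, lp = 1.0 - vp, 1.0 - mp - vp
        q2, lq = 1.0 - vq, 1.0 - mq - vq
        x = (q2 - p2 * lq) / (1.0 - lp * lq)
        total += abs(mp - mq) + abs((-lp * x + p2) - x)
    return 1.0 - total / (2 * len(P))

# Example 1 data: P = <0.00, 0.00>, Q = <0.49, 0.51>, R = <0.50, 0.50>.
P = [(0.00, 0.00)]
Q = [(0.49, 0.51)]
R = [(0.50, 0.50)]
# The reconstruction yields a strictly higher similarity for (Q, R) than
# for (P, R), matching the intuition discussed above.
```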
Next, we utilize 6 groups of IFSs to verify the superiority of the stated SM over the various existing SMs [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28].
Example 2
Consider 6 groups of diverse IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) in order to compare the results of the proposed SM with the existing SMs [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28]. The obtained results for each group are tabulated in Table 3. From this table, it can be seen that the developed SM \(\mathcal {S}\) works well where these existing SMs fail and overcomes their drawbacks. In this table, we can notice that indistinguishable results occur for different pairs of input data under the existing SMs, which leads to the conclusion that these measures are inadequate for discriminating different input samples precisely. The drawbacks of the existing SMs relative to the proposed SM are as follows:
1. The different input cases 1, 4 and 5 cannot be distinguished using the similarity measure \(S_C\) [5] because of indistinguishable results. Besides this, the measurement values obtained using \(S_C\) [5] are exactly 1 for input cases 1, 4 and 5, i.e., \(S_C(\mathcal {P}, \mathcal {Q})=1\) when \(\mathcal {P}=(0.3, 0.3)\), \(\mathcal {Q}=(0.4, 0.4)\) (Case 1), \(\mathcal {P}=(0.5, 0.5)\), \(\mathcal {Q}=(0.0, 0.0)\) (Case 4), \(\mathcal {P}=(0.4, 0.2)\), \(\mathcal {Q}=(0.5, 0.3)\) (Case 5), even though in each case \(\mathcal {P}\ne \mathcal {Q}\). In addition, the similarity measures \(S_{DC}\) [7], \(S_{SK}\) [11] and \(S_Y\) [25] suffer from the same problem for different case groups. Thus, the existing SMs [5, 7, 11, 25] do not satisfy property (P2) as stated in Theorem 5, and this is the cause of the failure of these measures in some particular cases.
2. It is also seen from the table that the similarity measure \(S_{HK}\) [6] fails to discriminate cases 1, 2 and 5, as well as cases 2 and 3, e.g., \(S_{HK}(\mathcal {P}, \mathcal {Q})=S_{HK}(\mathcal {P}1, \mathcal {Q}1)=S_{HK}(\mathcal {P}2, \mathcal {Q}2)=0.9\) when \(\mathcal {P}=(0.3, 0.3)\), \(\mathcal {Q}=(0.4, 0.4)\) (Case 1), \(\mathcal {P}1=(0.3, 0.4)\), \(\mathcal {Q}1=(0.4, 0.3)\) (Case 2) and \(\mathcal {P}2=(0.4, 0.2)\), \(\mathcal {Q}2=(0.5, 0.3)\) (Case 5). In a similar manner, the similarity measures given in [5,6,7,8,9,10,11,12,13, 16, 22,23,24, 26, 28] yield illogical results, as these measures produce identical outcomes for different input values and hence are incapable of distinguishing the pairs.
3. Some of the existing SMs suffer from the “division by zero” problem and thus are incapable of classifying or ranking the objects, for instance, the measures \(S_{VS}\) [22] and \(S_Y\) [25] when \(\mathcal {P}=(1,0)\), \(\mathcal {Q}=(0,0)\) (Case 3) and \(\mathcal {P}=(0.5, 0.5)\), \(\mathcal {Q}=(0,0)\) (Case 4).
Therefore, from this analysis, we conclude that the prevailing measures [5,6,7,8,9,10,11,12,13, 16, 22,23,24,25,26, 28] fail to make the accurate decision for the considered input data, whereas the presented SM is consistent for each group. The developed SM \(\mathcal {S}\) and the SM \(S_S\) [15] have “no counter-intuitive cases”, as presented in Table 3 for each group. Hence, the presented SM overcomes the drawbacks of the existing measures.
Next, we illustrate an example from the pattern recognition field to show that the proposed SM gives valid outcomes where existing SMs in the literature do not.
Example 3
Consider three given patterns \({\mathcal {P}}_1\), \({\mathcal {P}}_2\) and \({\mathcal {P}}_3\) represented by IFSs, defined on \({\mathcal {U}}=\{x_1,x_2,x_3\}\) as:
Further, consider an unknown pattern \(\mathcal {Q}\) whose ratings are summarized in IFSs as
and the main target is to classify the unknown pattern \(\mathcal {Q}\) into one of the known classes \({\mathcal {P}}_i\) \((i=1,2,3)\).
To achieve this, we estimate the degree of similarity between \({\mathcal {P}}_i\) and \(\mathcal {Q}\) using the proposed SM \(\mathcal {S}\) and the prevailing SMs [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28]. The results corresponding to each measure are outlined in Table 4, from which it is seen that the unknown pattern \(\mathcal {Q}\) is classified with pattern \({\mathcal {P}}_3\) by the proposed measure \(\mathcal {S}\). However, most of the existing SMs produce identical outcomes, due to which the unknown pattern “cannot be recognized”. For example, the measures \(S_C\) [5], \(S_{DC}\) [7], \(S_{HY1}\) [10], \(S_{HY2}\) [10], \(S_{HY3}\) [10], \(S_{WX}\) [12] and \(S_{L}\) [13] give identical results for \(S({\mathcal {P}}_1,\mathcal {Q})\) and \(S({\mathcal {P}}_3,\mathcal {Q})\), and the measure \(S_N\) [28] produces the same outcome for \(S_N({\mathcal {P}}_2,\mathcal {Q})\) and \(S_N({\mathcal {P}}_3,\mathcal {Q})\). Besides these, the measures \(S_{HK}\) [6], \(S_{LS1}\) [9], \(S_{LS2}\) [9], \(S_{M}\) [8], \(S_{SK}\) [11], \(S_{HY4}\) [23], \(S_{HY5}\) [23], \(S_{HY6}\) [23], \(S_{HY7}\) [23], \(S_{HY8}\) [24], \(S_{HY9}\) [24] and \(S_{HY10}\) [24] produce identical values for \(S({\mathcal {P}}_1,\mathcal {Q})\), \(S({\mathcal {P}}_2,\mathcal {Q})\) and \(S({\mathcal {P}}_3,\mathcal {Q})\). Apart from these, the measure \(S_{VS}\) [22] is unable to compute the degree of similarity of \(\mathcal {Q}\) with any of the three known patterns \({\mathcal {P}}_i\) due to the “division by zero” problem. Hence, it is concluded that the prevailing SMs [5,6,7,8,9,10,11,12,13, 22,23,24,25, 28] fail to reach a decision in this case, whereas the proposed measure \(\mathcal {S}\) is effective in giving better results and making optimal decisions in such cases as well.
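The recognition principle used in this and the following examples — assign \(\mathcal {Q}\) to the class attaining the maximum similarity degree — is independent of the particular SM. A minimal sketch with the SM supplied as a parameter; the inline measure is a complement-of-normalized-Hamming-distance stand-in (not the paper's Eq. (8)), and the data are hypothetical, not those of Example 3:

```python
# Pattern-recognition principle: classify the unknown pattern Q with the
# known pattern P_i that attains the maximum similarity degree.

def recognize(patterns, q, sm):
    """Return the 1-based index of the most similar known pattern and all scores."""
    scores = {i + 1: sm(p, q) for i, p in enumerate(patterns)}
    return max(scores, key=scores.get), scores

# Stand-in SM: 1 minus the normalized Hamming distance between two IFSs,
# each given as a list of (mu, nu) pairs over the universe U.
hamming_sm = lambda p, q: 1 - sum(
    abs(a - c) + abs(b - d) for (a, b), (c, d) in zip(p, q)) / (2 * len(p))

# Hypothetical IFSs over U = {x1, x2, x3} (for illustration only).
P1 = [(0.2, 0.7), (0.4, 0.5), (0.3, 0.6)]
P2 = [(0.5, 0.4), (0.6, 0.3), (0.5, 0.3)]
P3 = [(0.7, 0.2), (0.8, 0.1), (0.6, 0.2)]
Q  = [(0.7, 0.1), (0.8, 0.2), (0.7, 0.2)]

best, scores = recognize([P1, P2, P3], Q, hamming_sm)
print(best)  # → 3
```

Any SM, including the proposed \(\mathcal {S}\), can be plugged in as the `sm` argument without changing the classification rule.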
Applications
In this section, we present an approach to solve the DMPs using proposed SMs followed by several illustrative examples.
Proposed DM Approach
Consider a set of alternatives \({\mathcal {P}}_1, {\mathcal {P}}_2, \ldots , {\mathcal {P}}_m\) which need to be evaluated to find the finest among them over the different parameters \(x_j\) of the universal set \(\mathcal {U}\). Each alternative is assessed under the IFS environment, where an expert gives a preference for each \({\mathcal {P}}_i\) as an IFN \((\zeta _{ij},\vartheta _{ij})\), \(1\le i\le m\), \(1 \le j \le n\), such that \(\zeta _{ij},\vartheta _{ij},\zeta _{ij}+\vartheta _{ij}\in [0,1]\). Then the steps involved in finding the finest alternative based on the proposed SM are summarized as follows:
-
Step 1:
Prepare the collective information in the decision matrix.
-
Step 2:
Transform the given IFSs into right-angled triangle information.
-
Step 3:
Compute the degree of similarity \({\mathcal {S}}_i\) between the alternative \({\mathcal {P}}_i\) and the ideal set by using the proposed SM.
-
Step 4:
Rank the given alternatives and select the finest one with index \(k=\arg \max \limits _{1\le i\le m} \{{\mathcal {S}}_i\}\).
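Under the assumption that the ideal set assigns the IFN \((1, 0)\) to every criterion (a common choice, not stated explicitly above), the four steps can be sketched as follows; a complement-of-normalized-Euclidean-distance SM stands in for the proposed measure of Eq. (8), so Step 2's triangle transformation is folded into the stand-in:

```python
# Sketch of the proposed DM approach. The right-angled-triangle
# transformation and the paper's SM (Eq. (8)) are not reproduced; a
# complement-of-normalized-Euclidean-distance SM stands in for illustration.
import math

IDEAL = (1.0, 0.0)  # assumed ideal IFN on every criterion

def similarity_to_ideal(row):
    """S_i: similarity of one alternative's IFN row to the ideal set."""
    n = len(row)
    d = math.sqrt(sum((z - IDEAL[0]) ** 2 + (v - IDEAL[1]) ** 2
                      for z, v in row) / (2 * n))
    return 1 - d

def rank(matrix):
    """Steps 3-4: similarity to the ideal set, then k = argmax_i S_i (1-based)."""
    scores = [similarity_to_ideal(row) for row in matrix]
    return max(range(len(scores)), key=scores.__getitem__) + 1, scores

# Step 1: hypothetical decision matrix, rows = alternatives P_i,
# columns = criteria x_j, entries = IFNs (zeta_ij, vartheta_ij).
matrix = [
    [(0.6, 0.3), (0.5, 0.4), (0.7, 0.2)],  # P1
    [(0.8, 0.1), (0.6, 0.3), (0.6, 0.2)],  # P2
    [(0.4, 0.5), (0.7, 0.2), (0.5, 0.4)],  # P3
]
k, scores = rank(matrix)
print(k)  # → 2
```

With this hypothetical data, the second alternative is closest to the ideal set; substituting the proposed SM changes only `similarity_to_ideal`, not the ranking machinery.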
Applications in Pattern Recognition
To demonstrate the functionality of the proposed SM in fields such as pattern recognition and clustering analysis, we solve some standard benchmark problems and compare the results with those of some existing SMs [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28] to demonstrate its superiority, as follows.
Example 4
[7, 16, 24, 25] Consider the three patterns \({\mathcal {P}}_i\) \((i=1,2,3)\) represented by IFSs as:
Consider an unknown pattern \(\mathcal {Q}\) given by
which needs to be classified into one of the given patterns \({\mathcal {P}}_i\). To recognize \(\mathcal {Q}\), we compute the measure of similarity between \({\mathcal {P}}_i\) and \(\mathcal {Q}\) using the proposed SM \(\mathcal {S}\) and the existing SMs [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28]. The computed results are given in Table 5. From these results, it is analyzed that the unknown pattern \(\mathcal {Q}\) is classified with the known pattern \({\mathcal {P}}_3\) by the proposed SM. Although the existing measures also recognize \(\mathcal {Q}\) as \({\mathcal {P}}_3\), most of the SMs [5,6,7,8,9,10, 12, 22, 23, 26, 28] produce identical outcomes for other patterns, which leads to unreasonable results. For instance, the SMs \(S_C\) [5], \(S_{HK}\) [6], \(S_{DC}\) [7], \(S_{LS1}\) [9], \(S_{LS2}\) [9], \(S_{M}\) [8], \(S_{HY1}\) [10], \(S_{HY2}\) [10], \(S_{HY3}\) [10], \(S_{WX}\) [12], \(S_{HY4}\) [23], \(S_{HY5}\) [23], \(S_{HY6}\) [23], \(S_{HY8}\) [24], \(S_{BA}\) [26] and \(S_{N}\) [28] give the same results for patterns \({\mathcal {P}}_1\) and \({\mathcal {P}}_2\), i.e., \(S({\mathcal {P}}_1,\mathcal {Q})= S({\mathcal {P}}_2, \mathcal {Q})\), although clearly \({\mathcal {P}}_1\ne {\mathcal {P}}_2\). Besides this, the SM \(S_{VS}\) [22] fails to provide any valid result due to the division-by-zero problem. Thus, the proposed SM \(\mathcal {S}\) gives better results and overcomes the drawbacks of some of the existing measures [5,6,7,8,9,10, 12, 16, 22, 23, 26, 28].
Example 5
Consider three patterns \({\mathcal {P}}_i\) \((i=1,2,3)\) represented by IFSs as:
Further, consider an unknown pattern \(\mathcal {Q}\), to be classified into one of the given patterns \({\mathcal {P}}_i\), represented as:
In order to recognize the unknown pattern \(\mathcal {Q}\), the prevailing similarity measures [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28] and the proposed measure \(\mathcal {S}\) are utilized, and the corresponding results are tabulated in Table 6. From this table, it is analyzed that most of the existing SMs fail to reach a decision due to various shortcomings, outlined as follows:
-
1.
The SMs \(S_C\) [5], \(S_{HK}\) [6], \(S_{DC}\) [7], \(S_{LS1}\) [9], \(S_{LS2}\) [9], \(S_M\) [8], \(S_{HY1}\) [10], \(S_{HY2}\) [10], \(S_{HY3}\) [10], \(S_{WX}\) [12], \(S_{L}\) [13], \(S_{HY4}\) [23], \(S_{HY5}\) [23], \(S_{HY6}\) [23], \(S_{HY8}\) [24], \(S_{HY9}\) [24], \(S_{HY10}\) [24], \(S_{BA}\) [26], \(S_{CL}\) [16], \(S_{N}\) [28] give identical values of \(S({\mathcal {P}}_1,\mathcal {Q})\), \(S({\mathcal {P}}_2,\mathcal {Q})\) and \(S({\mathcal {P}}_3,\mathcal {Q})\), i.e., \(S({\mathcal {P}}_1, \mathcal {Q})=S({\mathcal {P}}_2, \mathcal {Q})=S({\mathcal {P}}_3, \mathcal {Q})\), as a result of which the pattern \(\mathcal {Q}\) cannot be recognized.
-
2.
The SM \(S_{VS}\) [22] fails to assess the similarity degree between the patterns \({\mathcal {P}}_2\) and \(\mathcal {Q}\) due to the “division by zero” problem.
-
3.
The prevailing measures \(S_{SK}\) [11], \(S_{HY7}\) [24], \(S_Y\) [25] and \(S_S\) [15] recognize the unknown pattern \(\mathcal {Q}\) as the known pattern \({\mathcal {P}}_1\), which coincides with the result of the proposed measure.
This investigation leads to the conclusion that the proposed measure \(\mathcal {S}\) can also be applied to real-life pattern recognition problems on which most of the existing SMs [5,6,7,8,9,10, 12, 13, 16, 22, 23, 26, 28] fail.
Example 6
[16, 26] Consider three patterns \({\mathcal {P}}_i\) \((i=1,2,3)\) given in IFSs as
Consider an unknown pattern \(\mathcal {Q}\) whose rating values are represented as an IFS given by
and the target is to classify \(\mathcal {Q}\) into one of the classes \({\mathcal {P}}_i\). For this, we compute the degree of similarity between \({\mathcal {P}}_i\) and \(\mathcal {Q}\) utilizing some of the existing measures [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28] along with the proposed SM \(\mathcal {S}\), and tabulate the corresponding results in Table 7.
It is analyzed from this table that the proposed SM \(\mathcal {S}\) has advantages over several existing SMs, whose shortcomings are outlined as follows.
-
1.
Several existing SMs are unsuccessful in recognizing the unknown pattern \(\mathcal {Q}\) in any of the classes \({\mathcal {P}}_i\) due to identical outcomes. For instance, the measures \(S_{HK}\) [6], \(S_{M}\) [8], \(S_{WX}\) [12], \(S_{HY4}\) [23], \(S_{HY5}\) [23], \(S_{HY6}\) [23] and \(S_{HY8}\) [24] give the same values of \(S({\mathcal {P}}_1, \mathcal {Q})\) and \(S({\mathcal {P}}_3, \mathcal {Q})\), and the measure \(S_L\) [13] gives identical results for \(S({\mathcal {P}}_1, \mathcal {Q})\) and \(S({\mathcal {P}}_2, \mathcal {Q})\).
-
2.
Another counter-intuitive case arises for the SMs \(S_{LS2}\) [9], \(S_{HY1}\) [10], \(S_{HY2}\) [10] and \(S_{HY3}\) [10]. It is noticed from the table that these measures give identical results for \(S({\mathcal {P}}_1,\mathcal {Q})\), \(S({\mathcal {P}}_2,\mathcal {Q})\) and \(S({\mathcal {P}}_3,\mathcal {Q})\), i.e., \(S({\mathcal {P}}_1,\mathcal {Q}) = S({\mathcal {P}}_2,\mathcal {Q}) = S({\mathcal {P}}_3,\mathcal {Q})\), and thus we are unable to recognize the pattern \(\mathcal {Q}\).
-
3.
The measure \(S_{VS}\) [22] fails to determine the similarity degree between the patterns \({\mathcal {P}}_1\) and \(\mathcal {Q}\) due to the “division by zero” problem.
-
4.
The prevailing measures \(S_{C}\) [5], \(S_{DC}\) [7], \(S_{SK}\) [11], \(S_{HY7}\) [24], \(S_{HY9}\) [24], \(S_{HY10}\) [24], \(S_{Y}\) [25], \(S_{BA}\) [26], \(S_{S}\) [15], \(S_{CL}\) [16] and \(S_{N}\) [28] classify the pattern \(\mathcal {Q}\) with the known pattern \({\mathcal {P}}_3\), as the degree of similarity between \({\mathcal {P}}_3\) and \(\mathcal {Q}\) obtained using these measures is maximum. This coincides with the result of the proposed measure; thus, the proposed measure and these existing measures have “no counter-intuitive cases”, as shown in Table 7.
Therefore, it is concluded that the proposed SM \(\mathcal {S}\) is more efficient than some of the prevailing measures [6, 8,9,10, 12, 22,23,24], since in some cases these existing SMs are unable to reach a decision. The overall time complexity of the proposed decision-making approach is O(mn), where m is the number of alternatives and n the number of criteria of a given MCDM problem. Furthermore, to discuss the comparison from the perspective of computational cost, we computed the time elapsed and the memory utilized by the CPU during the execution of the proposed decision-making algorithm and several existing approaches. The CPU memory utilized by the proposed method is found to be \(5.8746 \times 10^{-4}\) megabytes. The time corresponding to each algorithm is listed in Table 8, which gives a quantitative analysis of the computational cost of the proposed and existing methods. Although there is no significant difference between the execution times of the proposed measure and the prevailing approaches, the proposed measure has the following benefits over the other existing SM-based algorithms: (i) it obtains the finest alternative without counter-intuitive cases [5,6,7,8,9,10, 12, 16, 22, 23, 26, 28]; (ii) it is free from the division-by-zero problem [22, 25].
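The kind of cost measurement reported above can be reproduced in spirit with the standard library; the decision routine below is an O(mn) stand-in loop (not the paper's Eq. (8)), and the matrix dimensions are arbitrary:

```python
# Sketch of how elapsed time and peak memory of an SM-based DM run can be
# measured; the similarity computation is an illustrative stand-in.
import time
import tracemalloc

def run_dm(matrix):
    """O(mn): one pass over m alternatives x n criteria, returning argmax index."""
    scores = [1 - sum(abs(z - 1) + v for z, v in row) / (2 * len(row))
              for row in matrix]
    return max(range(len(scores)), key=scores.__getitem__)

matrix = [[(0.5, 0.3)] * 50 for _ in range(200)]  # m = 200, n = 50

tracemalloc.start()
t0 = time.perf_counter()
best = run_dm(matrix)
elapsed = time.perf_counter() - t0
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"best index: {best}, time: {elapsed:.6f} s, peak: {peak / 2**20:.4f} MB")
```

The figures obtained this way depend on the machine and the stand-in SM, so they indicate only the order of magnitude, in line with the O(mn) bound stated above.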
Application in Clustering Problem
In this section, we demonstrate the application of the stated measure in the clustering problem.
Definition 7
For a collection of “m” IFSs, \({\mathcal {P}}_i\), a similarity matrix is given as \(\mathcal {C}=(c_{ik})_{m \times m}\), where \(c_{ik}=S({\mathcal {P}}_i,\mathcal {P}_k)\) represents the SM between \({\mathcal {P}}_i\) and \({\mathcal {P}}_k\) and satisfies \(0 \le c_{ik} \le 1\), \(c_{ii}=1\) and \(c_{ik}=c_{ki}\).
Definition 8
[32] A matrix \({\mathcal {C}}^2 = \mathcal {C}\circ \mathcal {C}=(\bar{c}_{ik})_{m \times m}\) where \(\bar{c}_{ik}=\max \limits _{u}\big (\min (c_{iu},c_{uk})\big )\) is called similarity composition matrix.
Definition 9
[32] If \({\mathcal {C}}^2 \subseteq \mathcal {C},\) i.e., \(\max \limits _{u} \big (\min (c_{iu},c_{uk})\big ) \le c_{ik}\) \(\forall\) i, k, then \(\mathcal {C}\) is termed an “equivalent similarity matrix (ESM)”.
Theorem 6
[32] For a similarity matrix \(\mathcal {C}=(c_{ik})_{m \times m}\), in the sequence of compositions \(\mathcal {C}\rightarrow {\mathcal {C}}^{2}\rightarrow {\mathcal {C}}^{4}\rightarrow \ldots \rightarrow {\mathcal {C}}^{2^{z}}\rightarrow \ldots\), there exists \(z\in \mathbf {Z}^+\) such that \({\mathcal {C}}^{2^{z}}={\mathcal {C}}^{2^{z+1}}\), and then \({\mathcal {C}}^{2^{z}}\) is an ESM.
Definition 10
[32] For an ESM \(\mathcal {C}=(c_{ik})_{m \times m}\), the matrix \({\mathcal {C}}_{\lambda }=(c^{\lambda }_{ik})_{m \times m}\) is termed the \(\lambda\)-cutting matrix of \(\mathcal {C}\), where
and \(\lambda \in [0,1]\) is the “confidence level”.
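Definitions 8–10 and Theorem 6 translate directly into code. The sketch below builds the max–min composition, squares it to a fixpoint to obtain an ESM, and reads the clusters off the \(\lambda\)-cutting matrix (rows of \({\mathcal {C}}_{\lambda }\) that coincide belong to one class); the \(4\times 4\) matrix is hypothetical:

```python
# Definitions 8-10 and Theorem 6 as code: max-min composition, repeated
# squaring to an equivalent similarity matrix (ESM), lambda-cutting, and
# cluster extraction.
import numpy as np

def compose(c):
    """C∘C with (C²)_{ik} = max_u min(c_{iu}, c_{uk}) (Definition 8)."""
    # (i, u, k) entry of the broadcast minimum is min(c[i, u], c[u, k]);
    # taking the max over the middle index u gives the composition.
    return np.max(np.minimum(c[:, :, None], c[None, :, :]), axis=1)

def to_esm(c):
    """Square repeatedly until C^{2^z} = C^{2^{z+1}} (Theorem 6)."""
    while True:
        c2 = compose(c)
        if np.array_equal(c2, c):
            return c
        c = c2

def clusters(c, lam):
    """Lambda-cutting matrix (Definition 10) and its induced classes."""
    cut = (c >= lam).astype(int)
    groups = {}
    for i, row in enumerate(map(tuple, cut)):
        groups.setdefault(row, []).append(i + 1)  # 1-based, as in the paper
    return list(groups.values())

c = np.array([[1.0, 0.9, 0.4, 0.3],
              [0.9, 1.0, 0.5, 0.3],
              [0.4, 0.5, 1.0, 0.8],
              [0.3, 0.3, 0.8, 1.0]])
esm = to_esm(c)
print(clusters(esm, 0.75))  # → [[1, 2], [3, 4]]
```

Running the same pipeline on the \(10\times 10\) matrix of Example 7 with \(\lambda =0.8202\) would follow Steps 2–4 below.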
Example 7
[32] Consider the dataset of ten cars \({\mathcal {P}}_i\) \((i=1,2, \ldots , 10)\). These cars are characterized by six criteria \({\mathcal {Q}}_j\) \((j=1,2, \ldots ,6)\), namely: \({\mathcal {Q}}_1:\) Fuel economy, \({\mathcal {Q}}_2:\) Aerodynamic degree, \({\mathcal {Q}}_3:\) Price, \({\mathcal {Q}}_4:\) Comfort, \({\mathcal {Q}}_5:\) Design and \({\mathcal {Q}}_6:\) Safety. The data of the cars \({\mathcal {P}}_i\) are tabulated in Table 9. Now, we utilize the proposed SM \(\mathcal {S}\) to cluster the cars \({\mathcal {P}}_i\) through the following steps:
Step 1: By using Eq. (8), calculate the degrees of similarity between the cars, i.e., \(\mathcal {S}({\mathcal {P}}_i,{\mathcal {P}}_k)\) \((i,k=1,2,\ldots,10)\). Thus, a similarity matrix \(\mathcal {C}\) is obtained as:
Step 2: Compute the matrix \({\mathcal {C}}^2\), using Definition 8, given as:
Since \({\mathcal {C}}^2 \ne \mathcal {C}\), we compute \({\mathcal {C}}^4\).
Since \({\mathcal {C}}^4 \ne {\mathcal {C}}^2\), we compute \({\mathcal {C}}^8\).
Since \({\mathcal {C}}^8 \ne {\mathcal {C}}^4\), we compute \({\mathcal {C}}^{16}\).
As \({\mathcal {C}}^{16}={\mathcal {C}}^8\), \({\mathcal {C}}^{16}\) is an ESM.
Step 3: Assume \(\lambda =0.8202\), and by Definition 10, \({\mathcal {C}}_{\lambda }\) becomes
Step 4: From Eq. (10), we divide \({\mathcal {P}}_{i}\) into three classes as \(\{{\mathcal {P}}_1,{\mathcal {P}}_6\}\), \(\{{\mathcal {P}}_4,{\mathcal {P}}_9\}\), \(\{{\mathcal {P}}_2,{\mathcal {P}}_3, {\mathcal {P}}_5, {\mathcal {P}}_7, {\mathcal {P}}_8, {\mathcal {P}}_{10}\}\).
Different values of \(\lambda\) produce different \(\lambda\)-cutting matrices and, consequently, different clustering outcomes. Accordingly, a comprehensive sensitivity investigation for \(\lambda\) is provided in Table 10, where the confidence level \(\lambda\) is varied from its least value to its highest. From the obtained outcomes, we conclude that as \(\lambda\) increases, more and more patterns become differentiated. Besides this, for a particular number of clusters there is only one possible partition. For instance, if the cars \({\mathcal {P}}_i\) are classified into four classes, the obtained outcome is \(\{{\mathcal {P}}_1, {\mathcal {P}}_6\}\), \(\{{\mathcal {P}}_2, {\mathcal {P}}_3, {\mathcal {P}}_7, {\mathcal {P}}_8\}\), \(\{{\mathcal {P}}_4,{\mathcal {P}}_9\}\), \(\{{\mathcal {P}}_5,{\mathcal {P}}_{10}\}\). This is useful in taking the final decision, as it reduces the uncertainty in choosing \(\lambda\).
Furthermore, the clustering distribution of the ten cars \(\mathcal {P}_i\) is given in Fig. 2. This figure shows that the cars \(\mathcal {P}_i\) are principally separated into two groups: \(\{\mathcal {P}_2, \mathcal {P}_3, \mathcal {P}_4\), \(\mathcal {P}_5\), \(\mathcal {P}_7\), \(\mathcal {P}_8, \mathcal {P}_9, \mathcal {P}_{10}\}\) and \(\{\mathcal {P}_1, \mathcal {P}_6\}\). Moreover, when the confidence level is relaxed, the overall trend can be observed from Fig. 2.
The clustering results tabulated in Table 10 are confirmed by existing works. For instance, the two-cluster outcome \(\{\mathcal {P}_1, \mathcal {P}_6\}\), \(\{\mathcal {P}_2, \mathcal {P}_3, \mathcal {P}_4, \mathcal {P}_5, \mathcal {P}_7, \mathcal {P}_8, \mathcal {P}_9, \mathcal {P}_{10}\}\) is supported by [32] and [19]. The three-cluster result \(\{\mathcal {P}_1, \mathcal {P}_6\}\), \(\{\mathcal {P}_4,\mathcal {P}_9\}\), \(\{\mathcal {P}_2, \mathcal {P}_3, \mathcal {P}_5, \mathcal {P}_7, \mathcal {P}_8, \mathcal {P}_{10}\}\) is validated by [27] and [19]. The four-cluster outcome \(\{\mathcal {P}_1, \mathcal {P}_6\}\), \(\{\mathcal {P}_4,\mathcal {P}_9\}\), \(\{\mathcal {P}_5,\mathcal {P}_{10}\}\), \(\{\mathcal {P}_2, \mathcal {P}_3, \mathcal {P}_7, \mathcal {P}_8\}\) is identical to the results of [19, 27, 29]. The five-cluster outcome \(\{\mathcal {P}_1, \mathcal {P}_6\}\), \(\{\mathcal {P}_4,\mathcal {P}_9\}\), \(\{\mathcal {P}_5\), \(\mathcal {P}_{10}\}\), \(\{\mathcal {P}_2, \mathcal {P}_3, \mathcal {P}_7\}\), \(\{\mathcal {P}_8\}\), the seven-cluster outcome \(\{\mathcal {P}_1\}\), \(\{\mathcal {P}_6\}\), \(\{\mathcal {P}_4\), \(\mathcal {P}_9\}\), \(\{\mathcal {P}_5\}\), \(\{\mathcal {P}_{10}\}\), \(\{\mathcal {P}_2, \mathcal {P}_3, \mathcal {P}_7\}\), \(\{\mathcal {P}_8\}\), and the eight-cluster outcome \(\{\mathcal {P}_1\}\), \(\{\mathcal {P}_6\}\), \(\{\mathcal {P}_4,\mathcal {P}_9\}\), \(\{\mathcal {P}_5\}\), \(\{\mathcal {P}_{10}\}\), \(\{\mathcal {P}_2, \mathcal {P}_7\}\), \(\{\mathcal {P}_3\}\), \(\{\mathcal {P}_8\}\) are supported by [19]. The nine-cluster result \(\{\mathcal {P}_1\}\), \(\{\mathcal {P}_6\}\), \(\{\mathcal {P}_4,\mathcal {P}_9\}\), \(\{\mathcal {P}_5\}\), \(\{\mathcal {P}_{10}\}\), \(\{\mathcal {P}_2\}\), \(\{\mathcal {P}_7\}\), \(\{\mathcal {P}_3\}\), \(\{\mathcal {P}_8\}\) is validated by [32].
Example 8
[29] Consider the dataset of fifteen patterns \(\mathcal {P}_i\) \((i=1,2, \ldots , 15)\), given as \(\mathcal {P}_1=\langle 0.910,0.080\rangle\), \(\mathcal {P}_2=\langle 0.930,0.070\rangle\), \(\mathcal {P}_3=\langle 0.870,0.120\rangle\), \(\mathcal {P}_4=\langle 0.850,0.140\rangle\), \(\mathcal {P}_5=\langle 0.790,0.200\rangle\), \(\mathcal {P}_6=\langle 0.190,0.800\rangle\), \(\mathcal {P}_7=\langle 0.100\), \(0.820\rangle\), \(\mathcal {P}_8=\langle 0.450,0.550\rangle\), \(\mathcal {P}_9=\langle 0.030,0.820\rangle\), \(\mathcal {P}_{10}=\langle 0.070\), \(0.730\rangle\), \(\mathcal {P}_{11}=\langle 0.500,0.500\rangle\), \(\mathcal {P}_{12}=\langle 0.910,0.080\rangle\), \(\mathcal {P}_{13}=\langle 0.400,0.500\rangle\), \(\mathcal {P}_{14}=\langle 0.420,0.480\rangle\), \(\mathcal {P}_{15}=\langle 0.460\), \(0.460\rangle\).
Now, we utilize the proposed SM \(\mathcal {S}\) to cluster the patterns \(\mathcal {P}_i\) through the following steps:
Step 1: By using Eq. (8), calculate the similarity matrix as:
Step 2: Compute \(\mathcal {C}^2\), \(\mathcal {C}^4, \ldots\) until \(\mathcal {C}^{2^z}=\mathcal {C}^{2^{z+1}}\) for some positive integer z. We observe that \(\mathcal {C}^8=\mathcal {C}^4\); therefore, \(\mathcal {C}^8\) is an ESM, given by
Step 3: Taking \(\lambda =0.9397\), the \(\lambda\)-cutting matrix \(\mathcal {C}_{\lambda }\) is obtained by applying Definition 10 as:
Step 4: By using Eq. (11), the given \(\mathcal {P}_{i}\) are categorized as: \(\{\mathcal {P}_1,\mathcal {P}_2,\mathcal {P}_3,\mathcal {P}_4,\mathcal {P}_5,\mathcal {P}_{12}\}\), \(\{\mathcal {P}_6\}\), \(\{\mathcal {P}_7,\mathcal {P}_9, \mathcal {P}_{10}\}\), and \(\{\mathcal {P}_8\), \(\mathcal {P}_{11}\), \(\mathcal {P}_{13}\), \(\mathcal {P}_{14}\), \(\mathcal {P}_{15}\}\).
Apart from these, the complete results for different values of \(\lambda\) are listed in Table 11.
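Step 1 of this example can be sketched directly from the fifteen patterns listed above; a complement-of-Hamming-distance SM is used as a stand-in for the proposed Eq. (8), so the resulting matrix illustrates only the construction, not the paper's numerical values:

```python
# Step 1 of Example 8 sketched: build the similarity matrix C = (c_ik)
# from the fifteen single-element IFSs, using a complement-of-Hamming
# stand-in for the paper's Eq. (8).
P = [(0.91, 0.08), (0.93, 0.07), (0.87, 0.12), (0.85, 0.14), (0.79, 0.20),
     (0.19, 0.80), (0.10, 0.82), (0.45, 0.55), (0.03, 0.82), (0.07, 0.73),
     (0.50, 0.50), (0.91, 0.08), (0.40, 0.50), (0.42, 0.48), (0.46, 0.46)]

def sim(p, q):
    """Stand-in SM on single-element IFSs (mu, nu)."""
    return 1 - (abs(p[0] - q[0]) + abs(p[1] - q[1])) / 2

C = [[sim(p, q) for q in P] for p in P]
# C satisfies Definition 7: c_ii = 1, c_ik = c_ki, 0 <= c_ik <= 1.
print(C[0][11])   # P1 and P12 are identical patterns, so similarity 1.0
```

The matrix built this way feeds directly into Steps 2–4 (composition to an ESM and \(\lambda\)-cutting).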
Conclusion
The key contribution of this work is outlined below:
-
1.
A novel SM between pairs of IFSs is proposed by transforming the given IFSs into right-angled triangles over a unit square area, and several of its properties are investigated. In particular, comparative studies with several existing SMs, given in Table 1, are conducted to establish the superiority of the proposed measure.
-
2.
The validity as well as the superiority of the proposed SM over the existing SMs is summarized in Section 4, which demonstrates that the existing SMs fail to give classification results under different instances such as “division by zero problems” or “counter-intuitive cases”, so decision makers may face obstacles in making the optimal choice.
-
3.
An algorithm to solve DMPs with the proposed SM is developed and implemented to show its performance in numerous examples such as pattern recognition and clustering analysis. Compared with the other existing SM-based algorithms, the proposed measure has the following benefits: (i) it obtains the finest alternative without counter-intuitive cases [5,6,7,8,9,10, 12, 16, 22, 23, 26, 28]; (ii) it is free from the division-by-zero problem [22, 25].
-
4.
Further, based on the proposed SM, a novel clustering algorithm is given to classify the given objects under different confidence levels of the expert.
In the future, there is scope to extend this research to other uncertain environments. In the present work, the interactions between the different attributes are not considered; this limitation will be addressed in our future work. We will also try to define more generalized algorithms to solve more complex problems such as brain hemorrhage diagnosis, healthcare, nonlinear systems, control systems, and others.
References
Zadeh LA. Fuzzy sets. Inf Control. 1965;8:338–53.
Atanassov KT. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986;20(1):87–96.
Atanassov K, Gargov G. Interval-valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989;31:343–9.
Garg H, Kumar K. Linguistic interval-valued Atanassov intuitionistic fuzzy sets and their applications to group decision-making problems. IEEE Trans Fuzzy Syst. 2019;27(12):2302–11.
Chen SM. Measures of similarity between vague sets. Fuzzy Sets Syst. 1995;74(2):217–23.
Hong DH, Kim C. A note on similarity measures between vague sets and between elements. Inf Sci. 1999;115:83–96.
Dengfeng L, Chuntian C. New similarity measure of intuitionistic fuzzy sets and application to pattern recognitions. Pattern Recognit Lett. 2002;23:221–5.
Mitchell HB. On the Dengfeng–Chuntian similarity measure and its application to pattern recognition. Pattern Recognit Lett. 2003;24:3101–4.
Liang Z, Shi P. Similarity measures on intuitionistic fuzzy sets. Pattern Recognit Lett. 2003;24:2687–93.
Hung WL, Yang MS. Similarity measures of intuitionistic fuzzy sets based on Hausdorff distance. Pattern Recognit Lett. 2004;25:1603–11.
Szmidt E, Kacprzyk J. A similarity measure for intuitionistic fuzzy sets and its application in supporting medical diagnostic reasoning. Lect Notes Comput Sci. 2004;3070:388–93.
Wang W, Xin X. Distance measure between intuitionistic fuzzy sets. Pattern Recognit Lett. 2005;26(13):2063–9.
Liu HW. New similarity measures between intuitionistic fuzzy sets and between elements. Math Comput Model. 2005;42:61–70.
Xu ZS. Some similarity measures of intuitionistic fuzzy sets and their applications to multiple attribute decision making. Fuzzy Optim Decis Making. 2007;6:109–21.
Song Y, Wang X, Lei L, Xue A. A new similarity measure between intuitionistic fuzzy sets and its application to pattern recognition. Abstr Appl Anal. 2014;2014:Article ID 384241, 11 pages.
Chen SM, Cheng SH, Lan TC. A novel similarity measure between intuitionistic fuzzy sets based on the centroid points of transformed fuzzy numbers with applications to pattern recognition. Inf Sci. 2016;343–344:15–40.
Garg H. An improved cosine similarity measure for intuitionistic fuzzy sets and their applications to decision-making process. Hacettepe Journal of Mathematics and Statistics. 2018;47(6):1585–601.
Garg H, Kumar K. An advanced study on the similarity measures of intuitionistic fuzzy sets based on the set pair analysis theory and their application in decision making. Soft Comput. 2018;22(15):4959–70.
Jiang Q, Jin X, Lee SJ, Yao S. A new similarity/distance measure between intuitionistic fuzzy sets based on the transformed isosceles triangles and its applications to pattern recognition. Expert Syst Appl. 2019;116:439–53.
Chen SM, Chang CH. A novel similarity measure between Atanassov’s intuitionistic fuzzy sets based on transformation techniques with applications to pattern recognition. Inf Sci. 2015;291:96–114.
Garg H, Kaur G. Novel distance measures for cubic intuitionistic fuzzy sets and their applications to pattern recognitions and medical diagnosis. Granular Computing. 2020;5(2):169–84.
Vlachos IK, Sergiadis GD. Intuitionistic fuzzy information - application to pattern recognition. Pattern Recognit Lett. 2007;28(2):197–206.
Hung WL, Yang MS. Similarity measures of intuitionistic fuzzy sets based on Lp metric. Int J Approx Reason. 2007;46:120–36.
Hung WL, Yang MS. On similarity measures between intuitionistic fuzzy sets. Int J Intell Syst. 2008;23(3):364–83.
Hung WL, Yang MS. On similarity measures between intuitionistic fuzzy sets. Math Comput Model. 2008;23(3):364–83.
Boran FE, Akay D. A biparametric similarity measure on intuitionistic fuzzy sets with applications to pattern recognition. Inf Sci. 2014;255:45–57.
Khan MS, Lohani QD. A similarity measure for Atanassov intuitionistic fuzzy sets and its application to clustering. In: 2016 International Workshop on Computational Intelligence (IWCI). IEEE; 2016. p. 232–9.
Ngan RT, Ali M, Son LH. Equality of intuitionistic fuzzy sets: a new proximity measure and applications in medical diagnosis. Appl Intell. 2018;48(2):499–525.
Hwang CM, Yang MS, Hung WL, Lee MG. A similarity measure of intuitionistic fuzzy sets based on the Sugeno integral with its application to pattern recognition. Inf Sci. 2012;189:93–109.
Singh S, Garg H. Distance measures between type-2 intuitionistic fuzzy sets and their application to multicriteria decision-making process. Appl Intell. 2017;46(4):788–99.
Garg H. Distance and similarity measure for intuitionistic multiplicative preference relation and its application. Int J Uncertain Quantif. 2017;7(2):117–33.
Xu ZS, Chen J, Wu JJ. Cluster algorithm for intuitionistic fuzzy sets. Inf Sci. 2008;178:3775–90.
Hwang CM, Yang MS, Hung WL. New similarity measures of intuitionistic fuzzy sets based on the Jaccard index with its application to clustering. Int J Intell Syst. 2018;33(8):1672–88.
Dhivya J, Sridevi B. A novel similarity measure between intuitionistic fuzzy sets based on the mid points of transformed triangular fuzzy numbers with applications to pattern recognition and medical diagnosis. Applied Mathematics-A Journal of Chinese Universities. 2019;34(2):229–52.
Xu ZS. Intuitionistic fuzzy aggregation operators. IEEE Trans Fuzzy Syst. 2007;15:1179–87.
Ethics declarations
Conflicts of Interest
The authors declare that they have no conflict of interest.
Ethical Approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Cite this article
Garg, H., Rani, D. Novel Similarity Measure Based on the Transformed Right-Angled Triangles Between Intuitionistic Fuzzy Sets and its Applications. Cogn Comput 13, 447–465 (2021). https://doi.org/10.1007/s12559-020-09809-2