Introduction

Nowadays, decision making (DM) is one of the most frequently used processes in our daily life, whose target is to select the best alternative out of the available ones under numerous known or unknown criteria. Multicriteria decision making (MCDM) is a part of DM and is regarded as a cognition-based human activity. In order to handle and aggregate information gathered from several sources, the most important step is data collection. Traditionally, all the information is given in the form of crisp numbers. However, in human cognition processes, it is often challenging to describe working situations using primitive, crisp-number-based data handling techniques. These methods lead decision-makers to vague conclusions as well as uncertain decisions. Therefore, to deal with such risks and to examine the process, a large family of theories such as fuzzy sets (FSs) [1] and their extensions, including intuitionistic FSs (IFSs) [2], interval-valued IFSs [3] and linguistic interval-valued IFSs [4], has been proposed by researchers. Using such theories, decision-makers can adapt their DM criteria to the particular situation, whether it involves human cognition or pattern recognition. In these theories, each element is represented using two degrees, namely a membership degree (MD) and a non-membership degree (NMD), such that their sum does not exceed one.

Recently, DM problems with uncertain information have become a hot research topic, whose treatment involves three influential phases:

  1. how to arrange the information using a suitable scale to read the data.

  2. how to aggregate the distinct attribute benefits and accumulate the overall preference value.

  3. how to order things to find the finest alternative(s).

In phase 1), an IFS is one of the most widely used media for assessing information in terms of ordered pairs of MDs and NMDs. This representation is more reliable and also conveys the hesitancy degree associated with each pair. In phase 2), information measures (IMs) play a noteworthy role in treating imperfect and ambiguous information to reach the final decision. Different kinds of IMs, such as distance, similarity, inclusion and entropy, exist in the literature, and among them the similarity measure (SM) is one of the most effective tools for estimating the degree of similarity between pairs of objects. Finally, in phase 3), the combined values acquired from the foregoing phases are ordered with suitable measures.

Up to now, IFSs have been applied by many researchers to address decision-making problems (DMPs) by adopting the notion of SMs. Chen et al. [5] introduced the idea of measuring the degree of similarity between vague sets and put forth two SMs. The authors in [6] pointed out that the measures given in [5] do not fit well in some instances and exhibit counterintuitive cases. To resolve this, they gave a set of modified SMs and showed the validity of their proposed measures with the aid of examples. The authors in [7] presented SMs for both discrete and continuous sets and used them to solve pattern recognition problems. Mitchell [8] pointed out that the SM given in [7] may give irrational results in some instances, modified it accordingly, and employed the modified measure to solve pattern recognition problems. The authors in [9] presented an SM by improving some of the existing measures and validated it with the help of several numerical cases. In [10], the authors presented distance and similarity measures between IFSs using the concept of the Hausdorff distance and studied their related features. In [11], the authors presented SMs for IFSs and gave their application in medical diagnostic reasoning. In [12], the authors presented some IMs for IFSs to solve pattern recognition problems and also discussed the relationship between SMs and distance measures. Liu [13] pointed out that the measures recommended in [6] produce illogical results in some cases and, consequently, proposed another SM and proved its related properties. Xu [14] presented SMs for IFSs. Song et al. [15] put forward weighted SMs along with their application to pattern recognition problems. Chen et al. [16] developed an SM by transforming IFSs into right-angled triangular numbers and showed the effectiveness of the recommended measure by applying it to various pattern recognition problems. Garg [17] developed an improved cosine SM for IFSs. In [18], the authors presented some SMs for IFSs based on the connection numbers of set pair analysis theory. In [19], the authors proposed distance measures to calculate the separation between pairs of IFSs by transforming them into isosceles triangles. In [20], they proposed an SM based on transformation techniques and gave its applications to various pattern recognition problems. In [21], the authors presented distance measures for cubic IFSs and applied them to solve pattern recognition and medical diagnosis problems. Furthermore, IMs have been applied by various scholars and researchers in many other areas too. For more details, we refer the reader to [22,23,24,25,26,27,28,29,30,31,32,33,34] and the references therein.

From the existing studies, it is worth noticing that SMs are essential tools for processing the uncertainty associated with FSs and IFSs. Although different SM-based MCDM methods for IFSs have been explored and used in real-life problems such as pattern recognition and clustering analysis, most of the measures fail to give classification results in some situations. On account of the counterintuitive aspects [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28] of some existing intuitionistic fuzzy MCDM methods, decision makers may encounter difficulties in reaching a conclusion due to the “division by zero problem” or indistinguishable results. Therefore, in order to overcome the drawbacks of the prevailing SM-based MCDM methods, more refined measures are required to handle DMPs under diverse circumstances. Hence, the present paper intends to introduce some novel distance and similarity measures obtained by transforming IFSs into right-angled triangles and, consequently, MCDM methods based on the proposed measures to address the above issues.

To address these problems, the fundamental objectives of the presented research are as follows:

  1. to transform the collective IFS information into right-angled triangles over a unit square area;

  2. to propose new distance and similarity measures for IFSs, based on the diagonal of the unit square and the intersection of the transformed right-angled triangles, to overcome the drawbacks of several existing studies [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28, 32];

  3. to present a novel decision-making algorithm based on the proposed SMs and validate it through several numerical examples;

  4. to develop a clustering algorithm based on the proposed SMs to distinguish objects of the same pattern.

To accomplish the above four goals, we divide the paper into 6 sections. In Section 2, we review the fundamental prevailing works on IFSs and SMs. In Section 3, we first transform the given collective data into right-angled triangles over a unit square area. Based on this transformation, we introduce novel distance and similarity measures between IFSs to estimate the degrees of separation and similarity, respectively, between pairs of IFSs. A comprehensive description of their origin is also given. In Section 4, the superiority of the recommended SM over the existing SMs [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28, 32] is shown through several illustrative examples. These results show that the proposed measures perform well in instances where the existing measures suffer from the “division by zero problem” or “counter-intuitive cases”. Section 5 presents an algorithm based on the proposed SMs to solve diverse kinds of problems, such as pattern recognition and clustering analysis, together with comparative studies against several existing algorithms to demonstrate the effectiveness of the developed measures. Also, a clustering algorithm based on the proposed similarity measure is applied to classify objects under distinct values of the expert's confidence level. Finally, concluding remarks are stated in Section 6.

Preliminaries

This section reviews some basic concepts related to IFSs. Throughout, we denote the universal set by \(\mathcal {U}\) and let \(\Phi (\mathcal {U})\) be the set of all IFSs.

Definition 1

[1] A FS \(\mathcal {F}\) on \(\mathcal {U}\) is defined as

$$\begin{aligned} \mathcal {F} = \{ (x, \zeta_{\mathcal {F}}(x)) \mid x \in \mathcal {U} \} \end{aligned}$$
(1)

where \(\zeta_{\mathcal {F}} : \mathcal {U} \rightarrow [0,1]\) represents the degree to which an element belongs to the FS \(\mathcal {F}\) and is called the membership function.

Definition 2

[2, 35] An IFS \(\mathcal {I}\) is given as

$$\begin{aligned} \mathcal {I}=\{\langle x,\zeta_{\mathcal {I}}(x), \vartheta_{\mathcal {I}}(x)\rangle \mid x \in \mathcal {U} \} \end{aligned}$$
(2)

where \(\zeta_{\mathcal {I}}, \vartheta_{\mathcal {I}} : \mathcal {U} \rightarrow [0,1]\) represent the MD and NMD, respectively, with \(0 \le \zeta_{\mathcal {I}}(x)+ \vartheta_{\mathcal {I}}(x)\le 1\)  \(\forall\) x, and \(h_{\mathcal {I}}(x)=1-\zeta_{\mathcal {I}}(x)-\vartheta_{\mathcal {I}}(x)\) gives the hesitation degree of x in \(\mathcal {I}\).

Definition 3

[2] For two IFSs \(\mathcal {I}=\{\langle x,\zeta _{\mathcal {I}}(x), \vartheta_{\mathcal {I}}(x)\rangle \mid x \in \mathcal {U} \}\) and \(\mathcal {J}=\{\langle x,\zeta_{\mathcal {J}}(x), \vartheta_{\mathcal {J}}(x)\rangle \mid x \in \mathcal {U} \}\) defined on \(\mathcal {U}\), we have

  (i) \(\mathcal {I} \subseteq \mathcal {J}\) if \(\zeta_{\mathcal {I}}(x) \le \zeta_{\mathcal {J}}(x)\) and \(\vartheta_{\mathcal {I}}(x) \ge \vartheta_{\mathcal {J}}(x)\)  \(\forall\) x.

  (ii) \(\mathcal {I}=\mathcal {J}\) \(\Leftrightarrow\) \(\mathcal {I} \subseteq \mathcal {J}\) and \(\mathcal {J} \subseteq \mathcal {I}\).

Definition 4

[8] A function \(\mathcal {S}:\Phi (\mathcal {U}) \times \Phi (\mathcal {U}) \rightarrow [0,1]\) is called an SM if it satisfies the following characteristics:

(P1) \(0\le \mathcal {S}(\mathcal {I}, \mathcal {J})\le 1\).

(P2) \(\mathcal {S}(\mathcal {I}, \mathcal {J})=1\) \(\Leftrightarrow\) \(\mathcal {I}=\mathcal {J}\).

(P3) \(\mathcal {S}(\mathcal {I}, \mathcal {J})=\mathcal {S}(\mathcal {J}, \mathcal {I})\).

(P4) If \(\mathcal {I}\subseteq \mathcal {J}\subseteq \mathcal {K}\) then, \(\mathcal {S}(\mathcal {I}, \mathcal {K})\le \mathcal {S}(\mathcal {I}, \mathcal {J})\) and \(\mathcal {S}(\mathcal {I}, \mathcal {K})\le \mathcal {S}(\mathcal {J}, \mathcal {K})\) where \(\mathcal {I}, \mathcal {J}, \mathcal {K} \in \Phi (\mathcal {U})\).

Later on, the axiomatic definition of the distance measure \(\mathcal {D}\) [12] was introduced. A distance measure, which is the complement of an SM, is a function satisfying the following axioms:

(P1) \(0\le \mathcal {D}(\mathcal {I},\mathcal {J})\le 1\).

(P2) \(\mathcal {D}(\mathcal {I}, \mathcal {J})=0\) \(\Leftrightarrow\) \(\mathcal {I}=\mathcal {J}\).

(P3) \(\mathcal {D}(\mathcal {I},\mathcal {J})=\mathcal {D}(\mathcal {J}, \mathcal {I})\).

(P4) If \(\mathcal {I}\subseteq \mathcal {J}\subseteq \mathcal {K}\) then, \(\mathcal {D}(\mathcal {I}, \mathcal {K})\ge \mathcal {D}(\mathcal {I}, \mathcal {J})\) and \(\mathcal {D}(\mathcal {I}, \mathcal {K})\ge \mathcal {D}(\mathcal {J}, \mathcal {K})\) where \(\mathcal {I}, \mathcal {J}, \mathcal {K} \in \Phi (\mathcal {U})\).

Under the IFS environment, several authors [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28] have outlined numerous SMs to estimate the degree of similarity between IFSs. These measures are summarized in Table 1. From these existing studies, it has been found that most of the prevailing measures fail to solve real-world DMPs due to the “division by zero problem” or “counter-intuitive cases”. Consequently, the existing measures give biased and contradictory outputs, which implies that the decision made may not be optimal. Motivated by this fact, in this work we propose novel distance and similarity measures by transforming the given IFSs into right-angled triangles. A detailed description of the proposed measures is given in the next section.

Table 1 Existing similarity measures between IFSs

Novel Proposed Distance and Similarity Measure Between IFSs

Let \(\mathcal {P}=\big \{\big \langle x_j,\zeta _{\mathcal {P}}(x_j), \vartheta _{\mathcal {P}}(x_j)\big \rangle \mid j=1,2, \ldots , n \big \}\) and \(\mathcal {Q}=\big \{\big \langle x_j, \zeta _{\mathcal {Q}}(x_j), \vartheta _{\mathcal {Q}}(x_j)\big \rangle \mid j=1,2, \ldots , n \big \}\) be two IFSs defined on the universal set \(\mathcal {U}\). Then clearly \(\big [\zeta _{\mathcal {P}}(x_j), 1-\vartheta _{\mathcal {P}}(x_j)\big ]\) is the intuitionistic fuzzy (IF) value of the element \(x_j \in \mathcal {U}\) in the IFS \(\mathcal {P}\). For convenience, the IF values \(\big [\zeta _{\mathcal {P}}(x_j), 1-\vartheta _{\mathcal {P}}(x_j)\big ]\) and \(\big [\zeta _{\mathcal {Q}}(x_j),1-\vartheta _{\mathcal {Q}}(x_j)\big ]\) of the IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) are represented as \([p_1(x_j), p_2(x_j)]\) and \([q_1(x_j), q_2(x_j)]\) on the interval [0, 1] of the y-axis and x-axis, respectively, as shown in Fig. 1a. We denote the distance between the points \(p_1(x_j)\) and \(p_2(x_j)\) by \(l_p(x_j)\), i.e., \(l_p(x_j)=p_2(x_j)-p_1(x_j)=1-\zeta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {P}}(x_j)=h_{\mathcal {P}}(x_j)\), and the distance between the points \(q_1(x_j)\) and \(q_2(x_j)\) by \(l_q(x_j)\), i.e., \(l_q(x_j)=q_2(x_j)-q_1(x_j)= 1-\zeta _{\mathcal {Q}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)=h_{\mathcal {Q}}(x_j)\). Based on this, the vertices of the transformed right-angled triangles of the IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) are denoted by \(O_{\mathcal {P}}\) and \(O_{\mathcal {Q}}\), respectively, as depicted in Fig. 1a.

Fig. 1

The transformed right-angled triangles of IFSs \(\mathcal {P}\) and \(\mathcal {Q}\)

During the formulation, we transform the given IFSs into right-angled triangles in a unit square area and construct the SM based on the intersection of the transformed triangles. The distance measure between the IFSs is defined as half of the sum of the lengths of the straight lines \(X_1Y_1\) and \(X_2Y_2\), as shown in Fig. 1a. Moreover, Fig. 1b depicts that when two IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) are equal, i.e., \(\mathcal {P}=\mathcal {Q}\), the point \(X_1\) coincides with \(Y_1\) and the point \(X_2\) coincides with \(Y_2\). Hence the lengths of \(X_1Y_1\) and \(X_2Y_2\) are zero, and the distance between the IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) becomes zero.

The co-ordinates of the points \(p_1\), \(p_2\), \(q_1\), \(q_2\), \(O_{\mathcal {P}}\) and \(O_{\mathcal {Q}}\) are \(\big (0,p_1(x_j)\big )\), \(\big (0,p_2(x_j)\big )\), \(\big (q_1(x_j),0\big )\), \(\big (q_2(x_j),0\big )\), \(\big (1,p_1(x_j)\big )\) and \(\big (q_1(x_j),1\big )\), respectively. Therefore, the equations of the straight lines \(p_1 O_{\mathcal {P}}\), \(p_2 O_{\mathcal {P}}\), \(q_1 O_{\mathcal {Q}}\) and \(q_2 O_{\mathcal {Q}}\) are \(y=p_1(x_j)\), \(y=-xl_p(x_j)+p_2(x_j)\), \(x=q_1(x_j)\) and \(y=\tfrac{q_2(x_j)-x}{l_q(x_j)}\), respectively. Now, in order to find the lengths of the straight lines \(X_1Y_1\) and \(X_2Y_2\), we need to compute the co-ordinates of the points \(X_1\), \(X_2\), \(Y_1\) and \(Y_2\).

\(X_1\) point: Since \(X_1\) is the point of intersection of the straight lines \(p_1O_{\mathcal {P}}\) and \(q_1O_{\mathcal {Q}}\), the x-coordinate of the point \(X_1\) is \(q_1(x_j)\) and its y-coordinate is \(p_1(x_j)\). Hence, the co-ordinates of the point \(X_1\) are \(\big (q_1(x_j), p_1(x_j)\big )\).

\(X_2\) point: As \(X_2\) is the intersection point of the straight lines \(p_2O_{\mathcal {P}}\) and \(q_2O_{\mathcal {Q}}\), the x-coordinate of the point \(X_2\) is obtained as:

$$\begin{aligned} -xl_p(x_j)+p_2(x_j)= & {} \tfrac{q_2(x_j)-x}{l_q(x_j)} \nonumber \\ \Rightarrow x= & {} \tfrac{q_2(x_j)-p_2(x_j)l_q(x_j)}{1-l_p(x_j)l_q(x_j)} \end{aligned}$$
(3)

Further, the y-coordinate corresponding to the point \(X_2\) is obtained as:

$$\begin{aligned} y= & {} -l_p(x_j)\left( \tfrac{q_2(x_j)-p_2(x_j)l_q(x_j)}{1-l_p(x_j)l_q(x_j)}\right) +p_2(x_j) \nonumber \\= & {} \tfrac{p_2(x_j)-q_2(x_j)l_p(x_j)}{1-l_p(x_j)l_q(x_j)} \end{aligned}$$
(4)

Hence, the co-ordinates of the point \(X_2\) are

$$\left( \tfrac{q_2(x_j)-p_2(x_j)l_q(x_j)}{1-l_p(x_j)l_q(x_j)}, \tfrac{p_2(x_j)-q_2(x_j)l_p(x_j)}{1-l_p(x_j)l_q(x_j)}\right) .$$

\(Y_1\) point: Since \(Y_1\) is the point of intersection of the straight lines OZ and \(q_1O_{\mathcal {Q}}\), the x-coordinate of \(Y_1\) is the same as the x-coordinate of \(X_1\), i.e., \(q_1(x_j)\). Also, since the straight line OZ is one of the diagonals of the unit square, for any point on OZ the x-coordinate equals the y-coordinate. Hence, the co-ordinates of the point \(Y_1\) are \(\big (q_1(x_j),q_1(x_j)\big )\).

\(Y_2\) point: \(Y_2\) is the point on the diagonal OZ, and Fig. 1a depicts that the x-coordinates of the points \(X_2\) and \(Y_2\) coincide. Therefore, the x-coordinate of the point \(Y_2\) is \(\tfrac{q_2(x_j)-p_2(x_j)l_q(x_j)}{1-l_p(x_j)l_q(x_j)}\). Also, as explained earlier, the y-coordinate of \(Y_2\) equals its x-coordinate. Hence, the coordinates of the point \(Y_2\) are

$$\left( \tfrac{q_2(x_j)-p_2(x_j)l_q(x_j)}{1-l_p(x_j)l_q(x_j)},\tfrac{q_2(x_j)-p_2(x_j)l_q(x_j)}{1-l_p(x_j)l_q(x_j)}\right) .$$

Finally, let \(\overline{X_1Y_1}(x_j)\) and \(\overline{X_2Y_2}(x_j)\) denote the lengths of the straight lines \(X_1Y_1\) and \(X_2Y_2\), respectively, corresponding to the element \(x_j\) of the universal set \(\mathcal {U}\). Then,

$$\begin{aligned} \overline{X_1Y_1}(x_j)= & {} \big |\text {y-coordinate of}~ Y_1-\text {y-coordinate of}~ X_1\big | \\= & {} \big |q_1(x_j)-p_1(x_j)\big |=\big |\zeta _Q(x_j)-\zeta _P(x_j)\big | \\ \text {and~} \overline{X_2Y_2}(x_j)= & {} \big |\text {y-coordinate of}~ Y_2-\text {y-coordinate of}~ X_2\big | \\= & {} \left| \tfrac{q_2(x_j)-p_2(x_j)l_q(x_j)}{1-l_p(x_j)l_q(x_j)}-\tfrac{p_2(x_j)-q_2(x_j)l_p(x_j)}{1-l_p(x_j)l_q(x_j)}\right| \\= & {} \left| \tfrac{q_2(x_j)-p_2(x_j)l_q(x_j)-p_2(x_j)+q_2(x_j)l_p(x_j)}{1-l_p(x_j)l_q(x_j)}\right| \\= & {} \left| \tfrac{ \begin{aligned} \vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)+h_{\mathcal {P}}(x_j)-h_{\mathcal {Q}}(x_j)+ \\ h_{\mathcal {Q}}(x_j)\vartheta _{\mathcal {P}}(x_j)-h_{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j) \end{aligned} }{1-h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j)}\right| \end{aligned}$$
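As a quick numerical sanity check of the closed-form coordinates derived above, the following sketch compares the point \(X_2\) returned by a generic two-line intersection routine with the closed form of Eqs. (3) and (4). The helper name intersect and the sample membership values are our own illustrative assumptions, not part of the paper.

```python
# Sanity check of the closed-form coordinates of X2 (Eqs. (3)-(4)).

def intersect(a, b, c, d):
    """Intersection of the line through a, b with the line through c, d."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = a, b, c, d
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / den
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / den
    return px, py

zp, vp = 0.3, 0.4                      # sample MD and NMD of P at x_j (assumed)
zq, vq = 0.5, 0.2                      # sample MD and NMD of Q at x_j (assumed)
p1, p2, q1, q2 = zp, 1 - vp, zq, 1 - vq
lp, lq = p2 - p1, q2 - q1              # hesitation degrees h_P and h_Q

# X2 is the intersection of p2 O_P, through (0, p2) and (1, p1),
# with q2 O_Q, through (q2, 0) and (q1, 1).
x2_num = intersect((0, p2), (1, p1), (q2, 0), (q1, 1))
x2_closed = ((q2 - p2 * lq) / (1 - lp * lq), (p2 - q2 * lp) / (1 - lp * lq))
assert all(abs(u - v) < 1e-12 for u, v in zip(x2_num, x2_closed))
```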

Based on the above discussion and calculation, we define the new distance measure between two IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) (as depicted in Fig. 1a) as follows:

Definition 5

The distance measure between two IFSs \(\mathcal {P}=\big \{\big \langle x,\zeta _{\mathcal {P}}(x), \vartheta _{\mathcal {P}}(x)\big \rangle \mid x \in \mathcal {U} \big \}\) and \(\mathcal {Q}=\big \{\big \langle x,\zeta _{\mathcal {Q}}(x)\), \(\vartheta _{\mathcal {Q}}(x)\big \rangle \mid x \in \mathcal {U} \big \}\) on \(\mathcal {U}=\{x_1,x_2, \ldots , x_n\}\) is:

$$\begin{aligned} \mathcal {D}(\mathcal {P}, \mathcal {Q})&= \tfrac{1}{2{n}}\sum \limits _{j=1}^n\left( \overline{X_1Y_1}(x_j)+\overline{X_2Y_2}(x_j)\right) \nonumber \\&= \tfrac{1}{2n} \sum \limits _{j=1}^n \left[ \big |\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)\big |+ \left| \tfrac{ \begin{aligned} \vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)+h_{\mathcal {P}}(x_j)-h_{\mathcal {Q}}(x_j) + \\ h_{\mathcal {Q}}(x_j) \vartheta _{\mathcal {P}}(x_j)-h_{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j) \end{aligned} }{1-h_{\mathcal {P}}(x_j) h_{\mathcal {Q}}(x_j)}\right| \right] \end{aligned}$$
(5)

where \(h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j) \ne 1\) for all \(x_j \in \mathcal {U}\).
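The measure of Eq. (5) translates directly into code. The following is a minimal sketch; the function name ifs_distance and the representation of an IFS as a list of (MD, NMD) pairs are our own conventions, not the paper's.

```python
def ifs_distance(P, Q):
    """Distance measure of Eq. (5) between two IFSs given as lists of
    (membership, non-membership) pairs over the same universal set."""
    n = len(P)
    total = 0.0
    for (zp, vp), (zq, vq) in zip(P, Q):
        hp, hq = 1 - zp - vp, 1 - zq - vq   # hesitation degrees
        assert hp * hq != 1                 # condition of Definition 5
        num = vp - vq + hp - hq + hq * vp - hp * vq
        total += abs(zq - zp) + abs(num / (1 - hp * hq))
    return total / (2 * n)
```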

Next, for IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) defined on \(\mathcal {U}=\{x_1,x_2, \ldots , x_n\}\) satisfying \(h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j) \ne 1\), the distance measure \(\mathcal {D}(\mathcal {P}, \mathcal {Q})\), given in Eq. (5), has the following characteristics.

Theorem 1

The proposed distance measure \(\mathcal {D}\) satisfies the inequality \(0\le \mathcal {D}(\mathcal {P}, \mathcal {Q})\le 1\).

Proof

For the IFSs \(\mathcal {P}\) and \(\mathcal {Q}\), Fig. 1a shows that the minimum and maximum values of \(\overline{X_1Y_1}\) and \(\overline{X_2Y_2}\) are 0 and 1, respectively. Therefore,

$$0 \le \tfrac{1}{2}\left( \overline{X_1Y_1}(x_j)+\overline{X_2Y_2}(x_j)\right) \le 1$$

which implies that

$$\begin{aligned} 0 \le \tfrac{1}{2n} \sum \limits _{j=1}^n \left[ \big |\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)\big |+ \left| \tfrac{ \begin{aligned} \vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)+h_{\mathcal {P}}(x_j)-h_{\mathcal {Q}}(x_j)+ \\ h_{\mathcal {Q}}(x_j) \vartheta _{\mathcal {P}}(x_j)-h_{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j) \end{aligned} }{1-h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j)}\right| \right] \le 1. \end{aligned}$$

Hence, \(0 \le \mathcal {D}(\mathcal {P}, \mathcal {Q}) \le 1\), which proves the result.

Theorem 2

\(\mathcal {D}(\mathcal {P}, \mathcal {Q})=0\) \(\Leftrightarrow\) \(\mathcal {P}=\mathcal {Q}\).

Proof

\(\mathcal {P}=\mathcal {Q}\) implies \(\zeta _{\mathcal {P}}(x_j)=\zeta _{\mathcal {Q}}(x_j)\), \(\vartheta _{\mathcal {P}}(x_j)=\vartheta _{\mathcal {Q}}(x_j)\) and \(h_{\mathcal {P}}(x_j)=h_{\mathcal {Q}}(x_j)\)  \(\forall\) \(j=1,2, \ldots ,n\). Then, clearly, \(\mathcal {D}(\mathcal {P}, \mathcal {Q})=0\). Conversely,

$$\begin{aligned}&\mathcal {D}(\mathcal {P}, \mathcal {Q})=0 \nonumber \\&\Rightarrow \tfrac{1}{2n} \sum \limits _{j=1}^n \left[ \big |\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)\big |+ \left| \tfrac{ \begin{aligned} \vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)+h_{\mathcal {P}}(x_j)-h_{\mathcal {Q}}(x_j)+ \\ h_{\mathcal {Q}}(x_j)\vartheta _{\mathcal {P}}(x_j)-h_{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j) \end{aligned} }{1-h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j)}\right| \right] =0 \nonumber \\&\Rightarrow \zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)=0 \end{aligned}$$
(6)
$$\begin{aligned} \text {and} \left( \begin{aligned} \vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)+h_{\mathcal {P}}(x_j)-h_{\mathcal {Q}}(x_j)+ \\ h_{\mathcal {Q}}(x_j)\vartheta _{\mathcal {P}}(x_j)-h_{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j) \end{aligned} \right) =0 ~~ \forall ~j \end{aligned}$$
(7)

Now, Eq. (6) gives that \(\zeta _{\mathcal {P}}(x_j)=\zeta _{\mathcal {Q}}(x_j)\)  \(\forall\) j. Further, using \(\zeta _{\mathcal {P}}(x_j)=\zeta _{\mathcal {Q}}(x_j)\) in Eq. (7), we obtain that

$$\begin{aligned}&\left( \begin{aligned}&\vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)+\big (1-\zeta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {P}}(x_j)\big )-\big (1-\zeta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)\big ) \\&+\big (1-\zeta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)\big ) \vartheta _{\mathcal {P}}(x_j)-\big (1-\zeta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {P}}(x_j)\big )\vartheta _{\mathcal {Q}}(x_j) \end{aligned} \right) =0 \\&\Rightarrow \big (\vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j))\big (1-\zeta _{\mathcal {P}}(x_j)\big )=0 \\&\Rightarrow \vartheta _{\mathcal {P}}(x_j)=\vartheta _{\mathcal {Q}}(x_j) ~~\text {or}~~ \zeta _{\mathcal {P}}(x_j)=1 ~ \forall ~ j \end{aligned}$$

Thus, we have two possibilities:

  (i) When \(\vartheta _{\mathcal {P}}(x_j)=\vartheta _{\mathcal {Q}}(x_j)\): combined with \(\zeta _{\mathcal {P}}(x_j)=\zeta _{\mathcal {Q}}(x_j)\) \(\forall\) j from Eq. (6), this gives \(\mathcal {P}=\mathcal {Q}\).

  (ii) When \(\zeta _{\mathcal {P}}(x_j)=1\): this implies \(\vartheta _{\mathcal {P}}(x_j)=0\). Also, \(\zeta _{\mathcal {P}}(x_j)=\zeta _{\mathcal {Q}}(x_j)\) gives \(\zeta _{\mathcal {Q}}(x_j)=1\) and hence \(\vartheta _{\mathcal {Q}}(x_j)=0\) \(\forall\) j. Thus, \(\mathcal {P}=\mathcal {Q}\).

Theorem 3

The proposed distance measure is symmetric, i.e., \(\mathcal {D}(\mathcal {P}, \mathcal {Q})=\mathcal {D}(\mathcal {Q}, \mathcal {P})\).

Proof

From Eq. (5), we have

$$\begin{aligned}&\mathcal {D}(\mathcal {P}, \mathcal {Q}) = \tfrac{1}{2n} \sum \limits _{j=1}^n \left[ \big |\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)\big |+ \left| \tfrac{ \begin{aligned} \vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)+h_{\mathcal {P}}(x_j)-h_{\mathcal {Q}}(x_j)+ \\ h_{\mathcal {Q}}(x_j)\vartheta _{\mathcal {P}}(x_j)-h_{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j) \end{aligned} }{1-h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j)}\right| \right] \\&= \tfrac{1}{2n} \sum \limits _{j=1}^n \left[ \big |\zeta _{\mathcal {P}}(x_j)-\zeta _{\mathcal {Q}}(x_j)\big |+ \left| \tfrac{ \begin{aligned} \vartheta _{\mathcal {Q}}(x_j)-\vartheta _{\mathcal {P}}(x_j)+h_{\mathcal {Q}}(x_j)-h_{\mathcal {P}}(x_j)+ \\ h_{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j)-h_{\mathcal {Q}}(x_j)\vartheta _{\mathcal {P}}(x_j) \end{aligned} }{1-h_{\mathcal {Q}}(x_j) h_{\mathcal {P}}(x_j)}\right| \right] \\&= \mathcal {D}(\mathcal {Q}, \mathcal {P}) \end{aligned}$$

Theorem 4

If \(\mathcal {P}\subseteq \mathcal {Q}\subseteq \mathcal {R}\) then, \(\mathcal {D}(\mathcal {P}, \mathcal {R})\ge \mathcal {D}(\mathcal {P}, \mathcal {Q})\) and \(\mathcal {D}(\mathcal {P},\mathcal {R})\ge \mathcal {D}(\mathcal {Q},\mathcal {R})\).

Proof

\(\mathcal {P}\subseteq \mathcal {Q}\subseteq \mathcal {R}\) signifies that \(\zeta _{\mathcal {P}} \le \zeta _{\mathcal {Q}} \le \zeta _{\mathcal {R}}\) and \(\vartheta _{\mathcal {P}} \ge \vartheta _{\mathcal {Q}} \ge \vartheta _{\mathcal {R}}\) \(\forall\) x. From Eq. (5), we have

$$\begin{aligned}&\mathcal {D}(\mathcal {P}, \mathcal {Q}) = \tfrac{1}{2n} \sum \limits _{j=1}^n \left[ \big |\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)\big |+ \left| \tfrac{ \begin{aligned} \vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)+h_{\mathcal {P}}(x_j)-h_{\mathcal {Q}}(x_j)+ \\ h_{\mathcal {Q}}(x_j)\vartheta _{\mathcal {P}}(x_j)-h_{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j) \end{aligned}}{1-h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j)}\right| \right] \\&= \tfrac{1}{2n} \sum \limits _{j=1}^n \left[ \big |\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)\big |+ \left| \tfrac{ \begin{aligned} \vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)+\big (1-\zeta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {P}}(x_j)\big )- \big (1-\zeta _{\mathcal {Q}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)\big ) \\ +\big (1-\zeta _{\mathcal {Q}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)\big ) \vartheta _{\mathcal {P}}(x_j)-\big (1-\zeta _{\mathcal {P}}(x_j)- \vartheta _{\mathcal {P}}(x_j)\big )\vartheta _{\mathcal {Q}}(x_j) \end{aligned} }{1-h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j)}\right| \right] \\&= \tfrac{1}{2n} \sum \limits _{j=1}^n \left[ \big |\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)\big |+ \left| \tfrac{ \begin{aligned} \zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)+\vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)+ \\ \zeta _{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {Q}}(x_j)\vartheta _{\mathcal {P}}(x_j) \end{aligned} }{1-h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j)}\right| \right] \end{aligned}$$

Since \(\zeta _{\mathcal {P}}(x_j) \le \zeta _{\mathcal {Q}}(x_j)\) and \(\vartheta _{\mathcal {P}}(x_j) \ge \vartheta _{\mathcal {Q}}(x_j)\), we have \(\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j) \ge 0\) and therefore \(\big |\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)\big |\) \(=\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)\). Also, we obtain \(\big (1-\zeta _{\mathcal {P}}(x_j)\big )\big (1-\vartheta _{\mathcal {Q}}(x_j)\big ) \ge \big (1-\zeta _{\mathcal {Q}}(x_j)\big )\big (1-\vartheta _{\mathcal {P}}(x_j)\big )\), i.e., \(\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)+\vartheta _{\mathcal {P}}(x_j) - \vartheta _{\mathcal {Q}}(x_j) + \zeta _{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j)\)  \(- \zeta _{\mathcal {Q}}(x_j)\vartheta _{\mathcal {P}}(x_j) \ge 0\)

$$\begin{aligned} \Rightarrow \left| \begin{aligned}&\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)+\\&\vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)+ \\&\zeta _{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {Q}}(x_j)\vartheta _{\mathcal {P}}(x_j) \end{aligned} \right| = \left( \begin{aligned}&\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)+ \\&\vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)+ \\&\zeta _{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {Q}}(x_j)\vartheta _{\mathcal {P}}(x_j) \end{aligned} \right) . \end{aligned}$$

Hence,

$$\begin{aligned} \mathcal {D}(\mathcal {P}, \mathcal {Q}) = \tfrac{1}{2n} \sum \limits _{j=1}^n \left[ \big (\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)\big )+ \left( \tfrac{ \begin{aligned} \zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)+\vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)+\\ \zeta _{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {Q}}(x_j)\vartheta _{\mathcal {P}}(x_j) \end{aligned} }{1-h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j)}\right) \right] . \end{aligned}$$

In a similar manner, using the given conditions, we have

$$\begin{aligned}&\mathcal {D}(\mathcal {Q}, \mathcal {R}) = \tfrac{1}{2n} \sum \limits _{j=1}^n \left[ \big (\zeta _{\mathcal {R}}(x_j)-\zeta _{\mathcal {Q}}(x_j)\big )+ \left( \tfrac{\begin{aligned} \zeta _{\mathcal {R}}(x_j)-\zeta _{\mathcal {Q}}(x_j)+\vartheta _{\mathcal {Q}}(x_j)- \vartheta _{\mathcal {R}}(x_j) \\ +\zeta _{\mathcal {Q}}(x_j)\vartheta _{\mathcal {R}}(x_j)-\zeta _{\mathcal {R}}(x_j)\vartheta _{\mathcal {Q}}(x_j) \end{aligned}}{1-h_{\mathcal {Q}}(x_j)h_{\mathcal {R}}(x_j)}\right) \right] \\ \text {and}&\\&\mathcal {D}(\mathcal {P}, \mathcal {R}) =\tfrac{1}{2n} \sum \limits _{j=1}^n \left[ \big (\zeta _{\mathcal {R}}(x_j)-\zeta _{\mathcal {P}}(x_j)\big )+ \left( \tfrac{ \begin{aligned} \zeta _{\mathcal {R}}(x_j)-\zeta _{\mathcal {P}}(x_j)+\vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {R}}(x_j)+\\ \zeta _{\mathcal {P}}(x_j)\vartheta _{\mathcal {R}}(x_j)-\zeta _{\mathcal {R}}(x_j)\vartheta _{\mathcal {P}}(x_j) \end{aligned} }{1-h_{\mathcal {P}}(x_j)h_{\mathcal {R}}(x_j)}\right) \right] \end{aligned}$$

Now,

$$\begin{aligned}&\mathcal {D}(\mathcal {P}, \mathcal {R})-\mathcal {D}(\mathcal {P}, \mathcal {Q}) \\&= \tfrac{1}{2n} \sum \limits _{j=1}^n \left[ \begin{aligned}&\big (\zeta _{\mathcal {R}}(x_j)-\zeta _{\mathcal {Q}}(x_j)\big ) \\&+\left( \tfrac{\zeta _{\mathcal {R}}(x_j)-\zeta _{\mathcal {P}}(x_j)+\vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {R}}(x_j)+\zeta _{\mathcal {P}}(x_j)\vartheta _{\mathcal {R}}(x_j)-\zeta _{\mathcal {R}}(x_j)\vartheta _{\mathcal {P}}(x_j)}{1-h_{\mathcal {P}}(x_j)h_{\mathcal {R}}(x_j)}\right) \\&- \left( \tfrac{\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)+\vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j)+\zeta _{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {Q}}(x_j)\vartheta _{\mathcal {P}}(x_j)}{1-h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j)}\right) \end{aligned} \right] \\&= \tfrac{1}{2n} \sum \limits _{j=1}^n \big (\mathcal {A}_j+\mathcal {B}_j\big ) \end{aligned}$$

where \(\mathcal {A}_j=\zeta _{\mathcal {R}}(x_j)-\zeta _{\mathcal {Q}}(x_j)\) and

$$\begin{aligned}\mathcal {B}_j= \left( \tfrac{ \begin{aligned}&\zeta _{\mathcal {R}}(x_j)-\zeta _{\mathcal {P}}(x_j)+\vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {R}}(x_j) \\&+\zeta _{\mathcal {P}}(x_j)\vartheta _{\mathcal {R}}(x_j)-\zeta _{\mathcal {R}}(x_j)\vartheta _{\mathcal {P}}(x_j) \end{aligned}}{1-h_{\mathcal {P}}(x_j)h_{\mathcal {R}}(x_j)}\right) - \left( \tfrac{ \begin{aligned}&\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)+ \vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j) \\&+\zeta _{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {Q}}(x_j)\vartheta _{\mathcal {P}}(x_j) \end{aligned}}{1-h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j)}\right) \end{aligned}$$

Further, to show that \(\mathcal {D}(\mathcal {P}, \mathcal {R})-\mathcal {D}(\mathcal {P}, \mathcal {Q})\ge 0\), it is sufficient to prove that \(\mathcal {A}_j, \mathcal {B}_j \ge 0\) \(\forall\) j. Since \(\zeta _{\mathcal {R}}(x_j) \ge \zeta _{\mathcal {Q}}(x_j)\), we have \(\mathcal {A}_j \ge 0\). Now,

$$\begin{aligned} \mathcal {B}_j&=\left( \tfrac{ \begin{aligned}&\zeta _{\mathcal {R}}(x_j)-\zeta _{\mathcal {P}}(x_j)+\vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {R}}(x_j) \\&+\zeta _{\mathcal {P}}(x_j)\vartheta _{\mathcal {R}}(x_j)-\zeta _{\mathcal {R}}(x_j)\vartheta _{\mathcal {P}}(x_j) \end{aligned}}{1-h_{\mathcal {P}}(x_j)h_{\mathcal {R}}(x_j)}\right) - \left( \tfrac{ \begin{aligned}&\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)+ \vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}}(x_j) \\&+\zeta _{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {Q}}(x_j)\vartheta _{\mathcal {P}}(x_j) \end{aligned}}{1-h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j)}\right) \\&=\tfrac{\begin{aligned}&\big (\zeta _{\mathcal {R}}(x_j)\vartheta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {Q}}(x_j) \vartheta _{\mathcal {R}}(x_j)\big )\big (\zeta _{\mathcal {P}}(x_j)+\vartheta _{\mathcal {P}}(x_j)-1\big ) \big (\zeta _{\mathcal {Q}}(x_j)+\vartheta _{\mathcal {Q}}(x_j)-1\big )\\&+\big (2-\zeta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {P}}(x_j)\big ) \Big (\zeta _{\mathcal {P}}(x_j)\big (\zeta _{\mathcal {R}}(x_j)-\zeta _{\mathcal {Q}}(x_j)\big ) +\vartheta _{\mathcal {P}}(x_j)\big (\vartheta _{\mathcal {Q}}(x_j)-\vartheta _{\mathcal {R}}(x_j)\big )\Big ) \end{aligned}}{\big (1-h_{\mathcal {P}}(x_j)h_{\mathcal {R}}(x_j)\big )\big (1-h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j)\big )}\\&\ge 0 \end{aligned}$$

Thus \({\mathcal {A}}_j,{\mathcal {B}}_j \ge 0\) \(\forall\) j. Hence, the result.

Further, based on the distance measure, we can define a novel SM as given in the following definition:

Definition 6

A SM between two IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) is given as

$$\begin{aligned}&\mathcal {S}(\mathcal {P}, \mathcal {Q}) = 1-\left( \tfrac{1}{2{n}}\sum \limits _{j=1}^n\left( \overline{X_1Y_1}(x_j)+\overline{X_2Y_2}(x_j)\right) \right) \nonumber \\&= 1-\left( \tfrac{1}{2n} \sum \limits _{j=1}^n \left[ \big |\zeta _{\mathcal {Q}}(x_j)-\zeta _{\mathcal {P}}(x_j)\big |+ \left| \tfrac{ \begin{aligned} \vartheta _{\mathcal {P}}(x_j)-\vartheta _{\mathcal {Q}} (x_j)+h_{\mathcal {P}}(x_j)-h_{\mathcal {Q}}(x_j) + \\ h_{\mathcal {Q}}(x_j)\vartheta _{\mathcal {P}}(x_j)-h_{\mathcal {P}}(x_j)\vartheta _{\mathcal {Q}}(x_j) \end{aligned} }{1-h_{\mathcal {P}}(x_j) h_{\mathcal {Q}}(x_j)}\right| \right] \right) \end{aligned}$$
(8)

where \(h_{\mathcal {P}}(x_j)h_{\mathcal {Q}}(x_j) \ne 1\) \(\forall\) \(x_j \in \mathcal {U}\).
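In code, Eq. (8) is simply the complement of the distance measure; a one-line sketch reusing the hypothetical ifs_distance from the earlier sketch:

```python
def ifs_similarity(P, Q):
    """Similarity measure of Eq. (8), the complement of Eq. (5)."""
    return 1 - ifs_distance(P, Q)
```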

Theorem 5

For \(\mathcal {P}, \mathcal {Q}, \mathcal {R} \in \Phi (\mathcal {U})\), a similarity measure \(\mathcal {S}\) between \(\mathcal {P}\) and \(\mathcal {Q}\), denoted by \(\mathcal {S}(\mathcal {P}, \mathcal {Q})\), satisfies the following properties:

(P1) \(0\le \mathcal {S}(\mathcal {P}, \mathcal {Q})\le 1\).

(P2) \(\mathcal {S}(\mathcal {P}, \mathcal {Q})={1}\) \(\Leftrightarrow\) \(\mathcal {P}= \mathcal {Q}\).

(P3) \(\mathcal {S}(\mathcal {P}, \mathcal {Q})=\mathcal {S}(\mathcal {Q}, \mathcal {P})\).

(P4) If \(\mathcal {P}\subseteq \mathcal {Q}\subseteq \mathcal {R}\) then, \(\mathcal {S}(\mathcal {P},\mathcal {R})\le \mathcal {S}(\mathcal {P}, \mathcal {Q})\) and \(\mathcal {S}(\mathcal {P}, \mathcal {R})\le \mathcal {S}(\mathcal {Q},\mathcal {R})\).

Proof

For the IFSs \(\mathcal {P}, \mathcal {Q}, \mathcal {R}\), Definition 6 gives \(\mathcal {S}(\mathcal {P}, \mathcal {Q})=1-\mathcal {D}(\mathcal {P}, \mathcal {Q})\). Thus, we have

(P1) As \(0 \le \mathcal {D}(\mathcal {P},\mathcal {Q}) \le 1\). Therefore, \(0 \le 1-\mathcal {D}(\mathcal {P}, \mathcal {Q}) \le 1\). Hence, \(0 \le \mathcal {S}(\mathcal {P}, \mathcal {Q}) \le 1\).

(P2) Since, \(\mathcal {D}(\mathcal {P}, \mathcal {Q})=0\) \(\Leftrightarrow\) \(\mathcal {P}=\mathcal {Q}\). It implies that \(1-\mathcal {D}(\mathcal {P}, \mathcal {Q})=1\) \(\Leftrightarrow\) \(\mathcal {P}=\mathcal {Q}\) which gives that \(\mathcal {S}(\mathcal {P}, \mathcal {Q})=1\) \(\Leftrightarrow\) \(\mathcal {P}=\mathcal {Q}\).

(P3) \(\mathcal {D}(\mathcal {P}, \mathcal {Q})=\mathcal {D}(\mathcal {Q}, \mathcal {P})\) gives that \(1-\mathcal {D}(\mathcal {P}, \mathcal {Q})=1-\mathcal {D}(\mathcal {Q}, \mathcal {P})\). It implies \(\mathcal {S}(\mathcal {P}, \mathcal {Q})=\mathcal {S}(\mathcal {Q}, \mathcal {P})\).

(P4) For \(\mathcal {P} \subseteq \mathcal {Q} \subseteq \mathcal {R}\) we have, \(\mathcal {D}(\mathcal {P}, \mathcal {R}) \ge \mathcal {D}(\mathcal {P}, \mathcal {Q})\) and \(\mathcal {D}(\mathcal {P}, \mathcal {R}) \ge \mathcal {D}(\mathcal {Q}, \mathcal {R})\) which gives that \(1-\mathcal {D}(\mathcal {P}, \mathcal {R}) \le 1-\mathcal {D}(\mathcal {P}, \mathcal {Q})\) and \(1-\mathcal {D}(\mathcal {P}, \mathcal {R}) \le 1-\mathcal {D}(\mathcal {Q}, \mathcal {R})\). It implies \(\mathcal {S}(\mathcal {P}, \mathcal {R}) \le \mathcal {S}(\mathcal {P}, \mathcal {Q})\) and \(\mathcal {S}(\mathcal {P}, \mathcal {R}) \le \mathcal {S}(\mathcal {Q}, \mathcal {R})\).

Superiority Analysis of the Stated Measure

In this section, we analyze the drawbacks of several existing SMs [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28] for IFSs with the help of some numerical examples. The existing SMs for IFSs are summarized in Table 1.

In the following, we give some “counterintuitive cases” to illustrate the drawbacks of the existing SMs [5, 7, 11, 22, 25].

Example 1

Consider three IFSs \(\mathcal {P}= \{\langle x,0.00,0.00\rangle \}\), \(\mathcal {Q}= \{\langle x,0.49,0.51\rangle \}\) and \(\mathcal {R}= \{\langle x,0.50,0.50\rangle \}\) defined on \(\mathcal {U}=\{x\}\). From these given sets, it is obvious that the similarity between the IFSs \(\mathcal {Q}\) and \(\mathcal {R}\) is greater than that between \(\mathcal {P}\) and \(\mathcal {R}\). Now, in order to show the superiority of the presented SM \(\mathcal {S}\), we apply the prevailing similarity measures \(S_C\) [5], \(S_{DC}\) [7], \(S_{SK}\) [11], \(S_{VS}\) [22], \(S_Y\) [25] and the proposed similarity measure \(\mathcal {S}\) to these IFSs and tabulate the obtained results in Table 2.

Table 2 Similarity measure results for IFSs \(\mathcal {P}\), \(\mathcal {Q}\) and \(\mathcal {R}\)

From this table, it is evident that the similarity measures \(S_C\) [5] and \(S_{DC}\) [7] indicate that the similarity degree between \(\mathcal {P}\) and \(\mathcal {R}\) is higher than that between \(\mathcal {Q}\) and \(\mathcal {R}\). On the other hand, the measure \(S_{SK}\) [11] yields an identical similarity degree for the pair \(\mathcal {P}\), \(\mathcal {R}\) and the pair \(\mathcal {Q}\), \(\mathcal {R}\). The measures \(S_{VS}\) [22] and \(S_Y\) [25] fail to determine the similarity degree between \(\mathcal {P}\) and \(\mathcal {R}\) due to the “division by zero problem,” whereas on applying the proposed SM we obtain that the similarity degree between \(\mathcal {Q}\) and \(\mathcal {R}\) is greater than that between \(\mathcal {P}\) and \(\mathcal {R}\). Thus, the prevailing measures \(S_C\) [5], \(S_{DC}\) [7] and \(S_{SK}\) [11] suggest that the IFSs \(\mathcal {P}\) and \(\mathcal {R}\) are more similar, which is counter to the fact that \(\mathcal {Q}\) and \(\mathcal {R}\) are more alike, whereas the proposed measure \(\mathcal {S}\) gives the accurate result. Therefore, the proposed measure \(\mathcal {S}\) gives better and more reliable results than the existing SMs [5, 7, 11, 22, 25]. Moreover, the similarity measures [5, 7, 11] calculate the degree of similarity between \(\mathcal {P}\) and \(\mathcal {R}\) to be exactly 1, even though \(\mathcal {P}\) and \(\mathcal {R}\) are not equal. Thus, the similarity measures [5, 7, 11] do not satisfy property (P2) of an SM as stated in Theorem 5. Hence, from this analysis, we can conclude that the existing SMs have the drawback of not satisfying certain characteristics.
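For the proposed measure, the Table 2 entries can be reproduced with the sketches from the previous section (ifs_similarity as defined there, on a single-element universe):

```python
P, Q, R = [(0.00, 0.00)], [(0.49, 0.51)], [(0.50, 0.50)]
print(ifs_similarity(P, R))   # 0.75
print(ifs_similarity(Q, R))   # 0.99, larger than S(P, R) as expected
```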

Next, we utilize 6 groups of IFSs to verify the superiority of the stated SM over various existing SMs [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28].

Example 2

Consider 6 groups of diverse IFSs \(\mathcal {P}\) and \(\mathcal {Q}\) in order to compare the results of the proposed SM with the existing SMs [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28]. The obtained results for each group are tabulated in Table 3. From this table, it is seen that the developed SM \(\mathcal {S}\) works well where these existing SMs fail and overcomes their drawbacks. In the table, we notice that identical results occur for different pairs of input data under the existing SMs, which leads to the conclusion that these measures are inadequate to discriminate different input samples precisely. The drawbacks of the existing SMs relative to the proposed SM are as follows:

  1. The different input cases 1, 4 and 5 cannot be distinguished using the similarity measure \(S_C\) [5] because of its indistinguishable results. Besides this, the measurement values obtained using \(S_C\) [5] are exactly 1 for input cases 1, 4 and 5, i.e., \(S_C(\mathcal {P}, \mathcal {Q})=1\) when \(\mathcal {P}=(0.3, 0.3)\), \(\mathcal {Q}=(0.4, 0.4)\) (Case 1), \(\mathcal {P}=(0.5, 0.5)\), \(\mathcal {Q}=(0.0, 0.0)\) (Case 4), and \(\mathcal {P}=(0.4, 0.2)\), \(\mathcal {Q}=(0.5, 0.3)\) (Case 5), although in each case \(\mathcal {P}\ne \mathcal {Q}\). In addition, the similarity measures \(S_{DC}\) [7], \(S_{SK}\) [11] and \(S_Y\) [25] suffer from the same problem for different groups of cases. Thus, the existing SMs [5, 7, 11, 25] do not satisfy property (P2) as stated in Theorem 5, and this is the cause of the failure of these measures in some particular cases.

  2. It is also seen from the table that the similarity measure \(S_{HK}\) [6] fails to discriminate cases 1, 2 and 5, as well as cases 2 and 3, i.e., \(S_{HK}(\mathcal {P}, \mathcal {Q})=S_{HK}(\mathcal {P}1, \mathcal {Q}1)=S_{HK}(\mathcal {P}2, \mathcal {Q}2)=0.9\) when \(\mathcal {P}=(0.3, 0.3)\), \(\mathcal {Q}=(0.4, 0.4)\) (Case 1), \(\mathcal {P}1=(0.3, 0.4)\), \(\mathcal {Q}1=(0.4, 0.3)\) (Case 2) and \(\mathcal {P}2=(0.4, 0.2)\), \(\mathcal {Q}2=(0.5, 0.3)\) (Case 5). In a similar manner, the similarity measures given in [5,6,7,8,9,10,11,12,13, 16, 22,23,24, 26, 28] yield illogical results, as these measures produce identical outcomes for different input values and hence are incapable of distinguishing the pairs.

  3. Some of the existing SMs suffer from the “division by zero” problem and thus are incapable of classifying or ranking the objects: for instance, the measures \(S_{VS}\) [22] and \(S_Y\) [25] when \(\mathcal {P}=(1,0)\), \(\mathcal {Q}=(0,0)\) (Case 3) and \(\mathcal {P}=(0.5, 0.5)\), \(\mathcal {Q}=(0,0)\) (Case 4).

Therefore, from this analysis, we conclude that the prevailing measures [5,6,7,8,9,10,11,12,13, 16, 22,23,24,25,26, 28] fail to make an accurate decision for the considered input data, whereas the presented SM is consistent for each group. The developed SM \(\mathcal {S}\) and the SM \(S_S\) [15] show “no counter-intuitive cases”, as presented in Table 3 for each group. Hence, the presented SM overcomes the drawbacks of the existing measures.

Table 3 Comparative study results of SMs of Example 2\(\big (p=1, t=2\) in \(S_{BA}\) and \(p=1\) in \(S_{DC},S_M,S_{LS}\big )\)

Next, we illustrate an example from the pattern recognition field to show that the proposed SM gives valid outcomes where existing SMs in the literature fail.

Example 3

Consider three given patterns \({\mathcal {P}}_1\), \({\mathcal {P}}_2\) and \({\mathcal {P}}_3\) represented by IFSs, defined on \({\mathcal {U}}=\{x_1,x_2,x_3\}\) as:

$$\begin{aligned} {\mathcal {P}}_1=\left\{ \langle x_1,0.2,0.3\rangle , \langle x_2,0.1,0.4\rangle , \langle x_3,0.2,0.6\rangle \right\} \\ {\mathcal {P}}_2=\left\{ \langle x_1,0.3,0.2\rangle , \langle x_2,0.4,0.1\rangle , \langle x_3,0.5,0.3\rangle \right\} \\ {\mathcal {P}}_3=\left\{ \langle x_1,0.2,0.3\rangle , \langle x_2,0.4,0.1\rangle , \langle x_3,0.5,0.3\rangle \right\} \end{aligned}$$

Further, consider an unknown pattern \(\mathcal {Q}\) whose ratings are summarized in the IFS

$$\begin{aligned} \mathcal {Q} =\left\{ \langle x_1,0.1,0.2\rangle , \langle x_2,0.4,0.5\rangle , \langle x_3,0.0,0.0\rangle \right\} \end{aligned}$$

and the main target is to match the unknown pattern \(\mathcal {Q}\) with one of the known classes \({\mathcal {P}}_i\) \((i=1,2,3)\).

To achieve this, we estimate the degree of similarity between \({\mathcal {P}}_i\) and \(\mathcal {Q}\) using the proposed SM \(\mathcal {S}\) and the prevailing SMs [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28]. The results corresponding to each measure are outlined in Table 4. From this table, it is seen that the unknown pattern \(\mathcal {Q}\) is classified to the pattern \({\mathcal {P}}_3\) by the proposed measure \(\mathcal {S}\). However, most of the existing SMs produce identical outcomes, due to which the unknown pattern “cannot be recognized”. For example, the measures \(S_C\) [5], \(S_{DC}\) [7], \(S_{HY1}\) [10], \(S_{HY2}\) [10], \(S_{HY3}\) [10], \(S_{WX}\) [12], \(S_{L}\) [13] give identical results for \(S({\mathcal {P}}_1,\mathcal {Q})\) and \(S({\mathcal {P}}_3,\mathcal {Q})\), and the measure \(S_N\) [28] produces the same outcomes for \(S_N({\mathcal {P}}_2,\mathcal {Q})\) and \(S_N({\mathcal {P}}_3,\mathcal {Q})\). Besides these, the measures \(S_{HK}\) [6], \(S_{LS1}\) [9], \(S_{LS2}\) [9], \(S_{M}\) [8], \(S_{SK}\) [11], \(S_{HY4}\) [23], \(S_{HY5}\) [23], \(S_{HY6}\) [23], \(S_{HY7}\) [23], \(S_{HY8}\) [24], \(S_{HY9}\) [24] and \(S_{HY10}\) [24] produce identical values for \(S({\mathcal {P}}_1,\mathcal {Q})\), \(S({\mathcal {P}}_2,\mathcal {Q})\) and \(S({\mathcal {P}}_3,\mathcal {Q})\). Apart from these, the measure \(S_{VS}\) [22] is unsuccessful in computing the degree of similarity of \(\mathcal {Q}\) with all three known patterns \({\mathcal {P}}_i\) due to the “division by zero problem”. Hence, it is concluded that the prevailing SMs [5,6,7,8,9,10,11,12,13, 22,23,24,25, 28] fail to reach any decision in this case, whereas the proposed measure \(\mathcal {S}\) is effective in giving better results and making optimal decisions in such cases as well.

Table 4 Comparative analysis of Example 3\(\big (p=1\) in \(S_{DC},S_M,S_{LS}\big )\)
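Assigning \(\mathcal {Q}\) to the pattern of maximum similarity reproduces the conclusion of Table 4; a short sketch using the hypothetical ifs_similarity defined earlier:

```python
patterns = {
    "P1": [(0.2, 0.3), (0.1, 0.4), (0.2, 0.6)],
    "P2": [(0.3, 0.2), (0.4, 0.1), (0.5, 0.3)],
    "P3": [(0.2, 0.3), (0.4, 0.1), (0.5, 0.3)],
}
Q = [(0.1, 0.2), (0.4, 0.5), (0.0, 0.0)]
scores = {name: ifs_similarity(p, Q) for name, p in patterns.items()}
print(max(scores, key=scores.get))   # "P3" (similarity approx. 0.814)
```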

Applications

In this section, we present an approach to solve DMPs using the proposed SMs, followed by several illustrative examples.

Proposed DM Approach

Consider a set of alternatives \({\mathcal {P}}_1, {\mathcal {P}}_2, \ldots , {\mathcal {P}}_m\) which need to be evaluated in order to find the finest among them over the different parameters \(x_j\) of the universal set \(\mathcal {U}\). Each alternative is assessed under the IFS environment, where an expert gives a preference for each \({\mathcal {P}}_i\) as an IFN \((\zeta _{ij},\vartheta _{ij})\), \(1\le i\le m\), \(1 \le j \le n\), such that \(\zeta _{ij},\vartheta _{ij},\zeta _{ij}+\vartheta _{ij}\in [0,1]\). Then the steps involved in finding the finest alternative based on the proposed SM are summarized as follows (a code sketch is given after the list):

  Step 1: Prepare the collective information in the decision matrix.

  Step 2: Transform the given IFSs into right-angled triangle information.

  Step 3: Compute the degree of similarity \({\mathcal {S}}_i\) between each alternative \({\mathcal {P}}_i\) and the ideal set by using the proposed SM.

  Step 4: Rank the given alternatives, the finest one having index \(k=\arg \max \limits _{1\le i\le m} \{{\mathcal {S}}_i\}\).
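A compact sketch of Steps 1 to 4 is given below; taking the ideal set as (1, 0) on every criterion is our own assumption for benefit-type criteria, and ifs_distance is the hypothetical helper sketched after Eq. (5).

```python
def rank_alternatives(decision_matrix):
    """decision_matrix[i] holds the IFN pairs (zeta_ij, vartheta_ij) of P_i."""
    n = len(decision_matrix[0])
    ideal = [(1.0, 0.0)] * n                             # assumed ideal set
    scores = [1 - ifs_distance(row, ideal) for row in decision_matrix]
    k = max(range(len(scores)), key=scores.__getitem__)  # arg max of S_i
    return k, scores
```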

Applications in Pattern Recognition

To demonstrate the functionality of the proposed SM in various fields such as pattern recognition and clustering analysis, we solve some standard benchmark problems and compare the results with those of some existing SMs [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28] to prove its superiority, as follows.

Example 4

[7, 16, 24, 25] Consider the three patterns \({\mathcal {P}}_i\) \((i=1,2,3)\) represented by IFSs as:

$$\begin{aligned} {\mathcal {P}}_1=\left\{ \langle x_1,1.0,0.0\rangle , \langle x_2,0.8,0.0\rangle , \langle x_3,0.7,0.1\rangle \right\} \\ {\mathcal {P}}_2=\left\{ \langle x_1,0.8,0.1\rangle , \langle x_2,1.0,0.0\rangle , \langle x_3,0.9,0.0\rangle \right\} \\ {\mathcal {P}}_3=\left\{ \langle x_1,0.6,0.2\rangle , \langle x_2,0.8,0.0\rangle , \langle x_3,1.0,0.0\rangle \right\} \end{aligned}$$

Consider an unknown pattern \(\mathcal {Q}\) given by

$$\begin{aligned} \mathcal {Q} =\left\{ \langle x_1,0.5,0.3\rangle , \langle x_2,0.6,0.2\rangle , \langle x_3,0.8,0.1\rangle \right\} \end{aligned}$$

which needs to be classified as one of the given patterns \({\mathcal {P}}_i\). To recognize \(\mathcal {Q}\), we compute the measure of similarity between \({\mathcal {P}}_i\) and \(\mathcal {Q}\) using the proposed SM \(\mathcal {S}\) and the existing SMs [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28]. The computed results are given in Table 5. From these results, it is seen that the unknown pattern \(\mathcal {Q}\) is classified to the known pattern \({\mathcal {P}}_3\) by the proposed SM. Although the existing measures also recognize \(\mathcal {Q}\) as \({\mathcal {P}}_3\), most of the SMs [5,6,7,8,9,10, 12, 22, 23, 26, 28] produce identical outcomes for other patterns, which leads to unreasonable results. For instance, the SMs \(S_C\) [5], \(S_{HK}\) [6], \(S_{DC}\) [7], \(S_{LS1}\) [9], \(S_{LS2}\) [9], \(S_{M}\) [8], \(S_{HY1}\) [10], \(S_{HY2}\) [10], \(S_{HY3}\) [10], \(S_{WX}\) [12], \(S_{HY4}\) [23], \(S_{HY5}\) [23], \(S_{HY6}\) [23], \(S_{HY8}\) [24], \(S_{BA}\) [26] and \(S_{N}\) [28] give the same results for patterns \({\mathcal {P}}_1\) and \({\mathcal {P}}_2\), i.e., \(S({\mathcal {P}}_1,\mathcal {Q})= S({\mathcal {P}}_2, \mathcal {Q})\), although clearly \({\mathcal {P}}_1\ne {\mathcal {P}}_2\). Besides this, the SM \(S_{VS}\) [22] fails to provide any valid result due to the division by zero problem. Thus, the proposed SM \(\mathcal {S}\) gives better results and overcomes the drawbacks of some of the existing measures [5,6,7,8,9,10, 12, 16, 22, 23, 26, 28].

Table 5 Comparative analysis of Example 4\(\big (p=1, t=2\) in \(S_{BA}\) and \(p=1\) in \(S_{DC},S_M,S_{LS}\big )\)

Example 5

Consider three patterns \({\mathcal {P}}_i\) \((i=1,2,3)\) represented by IFSs as:

$$\begin{aligned} {\mathcal {P}}_1=\left\{ \langle x_1,0.34,0.34\rangle , \langle x_2,0.19,0.48\rangle , \langle x_3,0.02,0.12\rangle \right\} \\ {\mathcal {P}}_2=\left\{ \langle x_1,0.35,0.33\rangle , \langle x_2,0.20,0.47\rangle , \langle x_3,0.00,0.14\rangle \right\} \\ {\mathcal {P}}_3=\left\{ \langle x_1,0.33,0.35\rangle , \langle x_2,0.21,0.46\rangle , \langle x_3,0.01,0.13\rangle \right\} \end{aligned}$$

Further, consider an unknown pattern \(\mathcal {Q}\) which is to be classified into one of the given patterns \({\mathcal {P}}_i\) and is represented as:

$$\begin{aligned} \mathcal {Q} =\left\{ \langle x_1,0.37,0.31\rangle , \langle x_2,0.23,0.44\rangle , \langle x_3,0.04,0.1\rangle \right\} \end{aligned}$$

In order to recognize the unknown pattern \(\mathcal {Q}\), the prevailing similarity measures [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28] and the proposed measure \(\mathcal {S}\) are utilized, and the corresponding results are tabulated in Table 6. From this table, it is seen that most of the existing SMs fail to reach any decision due to various shortcomings, which are outlined as follows:

Table 6 Comparative analysis results of Example 5\(\big (p=1, t=2\) in \(S_{BA}\) and \(p=1\) in \(S_{DC},S_M,S_{LS}\big )\)
  1. The SMs \(S_C\) [5], \(S_{HK}\) [6], \(S_{DC}\) [7], \(S_{LS1}\) [9], \(S_{LS2}\) [9], \(S_M\) [8], \(S_{HY1}\) [10], \(S_{HY2}\) [10], \(S_{HY3}\) [10], \(S_{WX}\) [12], \(S_{L}\) [13], \(S_{HY4}\) [23], \(S_{HY5}\) [23], \(S_{HY6}\) [23], \(S_{HY8}\) [24], \(S_{HY9}\) [24], \(S_{HY10}\) [24], \(S_{BA}\) [26], \(S_{CL}\) [16], \(S_{N}\) [28] give identical values of \(S({\mathcal {P}}_1,\mathcal {Q})\), \(S({\mathcal {P}}_2,\mathcal {Q})\) and \(S({\mathcal {P}}_3,\mathcal {Q})\), i.e., \(S({\mathcal {P}}_1, \mathcal {Q})=S({\mathcal {P}}_2, \mathcal {Q})=S({\mathcal {P}}_3, \mathcal {Q})\), as a result of which the pattern \(\mathcal {Q}\) cannot be recognized.

  2. The SM \(S_{VS}\) [22] fails to assess the similarity degree between patterns \({\mathcal {P}}_2\) and \(\mathcal {Q}\) due to the “division by zero problem”.

  3. The prevailing measures \(S_{SK}\) [11], \(S_{HY7}\) [24], \(S_Y\) [25] and \(S_S\) [15] recognize the unknown pattern \(\mathcal {Q}\) as the known pattern \({\mathcal {P}}_1\), which coincides with the result of the proposed measure.

This investigation leads to the conclusion that the proposed measure \(\mathcal {S}\) can also be applied to those real-life pattern recognition problems in which most of the existing SMs [5,6,7,8,9,10, 12, 13, 16, 22, 23, 26, 28] fail.

Example 6

[16, 26] Consider three patterns \({\mathcal {P}}_i\) \((i=1,2,3)\) given in IFSs as

$$\begin{aligned} {\mathcal {P}}_1=\left\{ \langle x_1,0.5,0.3\rangle , \langle x_2,0.7,0.0\rangle , \langle x_3,0.4,0.5\rangle , \langle x_4,0.7,0.3\rangle \right\} \\ {\mathcal {P}}_2=\left\{ \langle x_1,0.5,0.2\rangle , \langle x_2,0.6,0.1\rangle , \langle x_3,0.2,0.7\rangle , \langle x_4,0.7,0.3\rangle \right\} \\ {\mathcal {P}}_3=\left\{ \langle x_1,0.5,0.4\rangle , \langle x_2,0.7,0.1\rangle , \langle x_3,0.4,0.6\rangle , \langle x_4,0.7,0.2\rangle \right\} \end{aligned}$$

Consider an unknown pattern \(\mathcal {Q}\) whose rating values are represented as an IFS given by

$$\begin{aligned} \mathcal {Q} = \left\{ \langle x_1,0.4,0.3\rangle , \langle x_2,0.7,0.1\rangle , \langle x_3,0.3,0.6\rangle , \langle x_4,0.7,0.3\rangle \right\} \end{aligned}$$

and the target is to classify \(\mathcal {Q}\) into one of the classes \({\mathcal {P}}_i\). For this, we compute the degree of similarity between \({\mathcal {P}}_i\) and \(\mathcal {Q}\) by utilizing some of the existing measures [5,6,7,8,9,10,11,12,13, 15, 16, 22,23,24,25,26, 28] along with the proposed SM \(\mathcal {S}\) and tabulate the corresponding results in Table 7.

Table 7 Comparative analysis results of Example 6\(\big (p=1, t=2\) in \(S_{BA}\) and \(p=1\) in \(S_{DC},S_M,S_{LS}\big )\)

It is analyzed from this table that the proposed SM \(\mathcal {S}\) has some advantages over several existing SMs, whose shortcomings are outlined as follows.

  1. 1.

    The existing SMs are unsuccessful in recognizing the unknown pattern \(\mathcal {Q}\) in any of the classes \({\mathcal {P}}_i\) due to identical outcomes. For instance, the measures \(S_{HK}\) [6], \(S_{M}\) [8], \(S_{WX}\) [12], \(S_{HY4}\) [23], \(S_{HY5}\) [23], \(S_{HY6}\) [23] and \(S_{HY8}\) [24] give the same values of \(S({\mathcal {P}}_1, \mathcal {Q})\) and \(S({\mathcal {P}}_3, \mathcal {Q})\) and the measure \(S_L\) [13] give the identical results for \(S({\mathcal {P}}_1, \mathcal {Q})\) and \(S({\mathcal {P}}_2, \mathcal {Q})\).

  2. 2.

    Another counter-intuitive case can be provided for the SMs \(S_{LS2}\) [9], \(S_{HY1}\) [10], \(S_{HY2}\) [10] and \(S_{HY3}\) [10]. It is noticed from the table that, these existing measures give the similar results for \(S({\mathcal {P}}_1,\mathcal {Q})\), \(S({\mathcal {P}}_2,\mathcal {Q})\) and \(S({\mathcal {P}}_3,\) \(\mathcal {Q})\), i.e., \(S({\mathcal {P}}_1,\mathcal {Q}) = S({\mathcal {P}}_2,\mathcal {Q}) = S({\mathcal {P}}_3,\mathcal {Q})\) and thus, we are unable to recognize the pattern \(\mathcal {Q}\).

  3. 3.

    The measure \(S_{VS}\) [22] fails to determine the similarity degree between the patterns \({\mathcal {P}}_1\) and \(\mathcal {Q}\) due to the “division by zero” problem.

  4. 4.

    The prevailing measures \(S_{C}\) [5], \(S_{DC}\) [7], \(S_{SK}\) [11], \(S_{HY7}\) [24], \(S_{HY9}\) [24], \(S_{HY10}\) [24], \(S_{Y}\) [25], \(S_{BA}\) [26], \(S_{S}\) [15], \(S_{CL}\) [16] and \(S_{N}\) [28] classify the pattern \(\mathcal {Q}\) into the known pattern \({\mathcal {P}}_3\), since the degree of similarity between \({\mathcal {P}}_3\) and \(\mathcal {Q}\) obtained with these measures is maximum. This coincides with the result of the proposed measure. Thus, the proposed measure and these existing measures exhibit “no counter-intuitive cases”, as shown in Table 7.

Therefore, it is concluded that the proposed SM \(\mathcal {S}\) is more efficient than some of the prevailing measures [6, 8,9,10, 12, 22,23,24], since in some cases these existing SMs are unable to reach a decision. Moreover, the overall time complexity of the proposed decision-making approach is O(mn), where m is the number of alternatives and n is the number of criteria of a given MCDM problem. Furthermore, to compare the approaches from the perspective of computational cost, we have recorded the time elapsed and the memory utilized by the CPU during the execution of the proposed decision-making algorithm and several existing approaches. The CPU memory utilized by the proposed method is found to be \(5.8746 \times 10^{-4}\) megabytes. The time corresponding to each algorithm is listed in Table 8, which gives a quantitative analysis of the computational cost of the proposed and the existing methods (a sketch of how such measurements can be collected follows Table 8). Although there is no significant difference between the execution times of the proposed measure and the prevailing approaches, the proposed measure offers the following benefits over the other existing SM-based algorithms: (i) it obtains the finest alternative without counter-intuitive cases [5,6,7,8,9,10, 12, 16, 22, 23, 26, 28]; and (ii) it is free from the division by zero problem [22, 25].

Table 8 Comparative study results of computational cost
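The entries of Table 8 are hardware- and implementation-dependent. As a rough illustration only, elapsed time and peak memory of a decision routine can be recorded in Python as sketched below, where classify is a hypothetical placeholder for the SM-based decision procedure under test, not the exact routine used to produce Table 8.

```python
import time
import tracemalloc

def measure(classify, *args, repeats=1000):
    """Record the average elapsed time and peak memory of one call (illustrative)."""
    tracemalloc.start()
    t0 = time.perf_counter()
    for _ in range(repeats):
        classify(*args)
    elapsed = (time.perf_counter() - t0) / repeats
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"avg time: {elapsed:.3e} s, peak memory: {peak / 1e6:.4e} MB")
```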

Application in Clustering Problem

In this section, we demonstrate the application of the proposed measure to the clustering problem.

Definition 7

For a collection of m IFSs \({\mathcal {P}}_i\) \((i=1,2,\ldots,m)\), a similarity matrix is defined as \(\mathcal {C}=(c_{ik})_{m \times m}\), where \(c_{ik}=S({\mathcal {P}}_i,{\mathcal {P}}_k)\) denotes the SM between \({\mathcal {P}}_i\) and \({\mathcal {P}}_k\) and satisfies \(0 \le c_{ik} \le 1\), \(c_{ii}=1\) and \(c_{ik}=c_{ki}\).
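Under Definition 7, the similarity matrix can be assembled from any SM by exploiting symmetry. A minimal sketch follows, assuming each pattern is an array of (MD, NMD) pairs and using the stand-in sim of the earlier sketch in place of Eq. (8).

```python
import numpy as np

def sim(A, B):
    # Stand-in SM (normalized Hamming distance); replace with Eq. (8).
    return 1.0 - np.abs(np.asarray(A) - np.asarray(B)).sum() / (2 * len(A))

def similarity_matrix(patterns):
    """Definition 7: C = (c_ik) with c_ik = S(P_i, P_k), c_ii = 1, c_ik = c_ki."""
    m = len(patterns)
    C = np.eye(m)
    for i in range(m):
        for k in range(i + 1, m):
            C[i, k] = C[k, i] = sim(patterns[i], patterns[k])
    return C
```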

Definition 8

[32] The matrix \({\mathcal {C}}^2 = \mathcal {C}\circ \mathcal {C}=(\bar{c}_{ik})_{m \times m}\), where \(\bar{c}_{ik}=\max \limits _{u}\big (\min (c_{iu},c_{uk})\big )\), is called the similarity composition matrix.
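Definition 8 is the usual max-min composition; with NumPy it can be written as a single broadcast, as in the sketch below.

```python
import numpy as np

def compose(C):
    """Definition 8: (C ∘ C)_ik = max_u min(c_iu, c_uk)."""
    # Build the (i, u, k) tensor of min(c_iu, c_uk), then take the max over u.
    return np.minimum(C[:, :, None], C[None, :, :]).max(axis=1)
```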

Definition 9

[32] If \({\mathcal {C}}^2 \subseteq \mathcal {C}\), i.e., \(\max \limits _{u} \big (\min (c_{iu},c_{uk})\big ) \le c_{ik}\) for all i, k, then \(\mathcal {C}\) is termed an “equivalent similarity matrix (ESM)”.

Theorem 6

[32] For a similarity matrix \(\mathcal {C}=(c_{ik})_{m \times m}\), in the sequence of compositions \(\mathcal {C}\rightarrow {\mathcal {C}}^{2}\rightarrow {\mathcal {C}}^{4}\rightarrow \ldots \rightarrow {\mathcal {C}}^{2^{z}}\rightarrow \ldots\), if there exists \(z\in \mathbf {Z}^+\) such that \({\mathcal {C}}^{2^{z}}={\mathcal {C}}^{2^{z+1}}\), then \({\mathcal {C}}^{2^{z}}\) is an ESM.

Definition 10

[32] For an ESM \(\mathcal {C}=(c_{ik})_{m \times m}\), the matrix \({\mathcal {C}}_{\lambda }=(c^{\lambda }_{ik})_{m \times m}\) is termed the \(\lambda\)-cutting matrix of \(\mathcal {C}\), where

$$\begin{aligned} c^{\lambda }_{ik}= {\left\{ \begin{array}{ll} 1 &{}; \quad c_{ik} \ge \lambda \\ 0 &{}; \quad c_{ik} < \lambda \end{array}\right. } \end{aligned}$$
(9)

and \(\lambda \in [0,1]\) is the “confidence level”.
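Putting Definitions 8-10 and Theorem 6 together, the clustering procedure reduces to repeated squaring of \(\mathcal {C}\) until a fixed point is reached, thresholding at \(\lambda\), and grouping objects whose rows of the cutting matrix coincide. A minimal sketch follows, assuming the similarity matrix C has already been computed (e.g., with the proposed SM of Eq. (8)).

```python
import numpy as np

def compose(C):
    # Max-min composition of Definition 8.
    return np.minimum(C[:, :, None], C[None, :, :]).max(axis=1)

def equivalent_similarity_matrix(C, max_iter=64):
    """Theorem 6: square C repeatedly until C^(2^z) = C^(2^(z+1))."""
    for _ in range(max_iter):
        C2 = compose(C)
        if np.allclose(C2, C):
            return C
        C = C2
    raise RuntimeError("max-min composition did not reach a fixed point")

def lambda_cut(E, lam):
    """Definition 10 / Eq. (9): lambda-cutting matrix of an ESM E."""
    return (E >= lam).astype(int)

def clusters(C, lam):
    """Objects with identical rows of the cutting matrix form one cluster."""
    cut = lambda_cut(equivalent_similarity_matrix(C), lam)
    groups = {}
    for i, row in enumerate(cut):
        groups.setdefault(tuple(row), []).append(i + 1)  # 1-based labels
    return list(groups.values())
```

For instance, calling clusters(C, 0.8202) with the matrix \(\mathcal {C}\) of Step 1 in Example 7 below should reproduce the three classes obtained in Step 4.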

Example 7

[32] Consider the dataset of ten cars \({\mathcal {P}}_i\) \((i=1,2, \ldots , 10)\). These cars are characterized by six criteria \({\mathcal {Q}}_j\) \((j=1,2, \ldots ,6)\), namely: \({\mathcal {Q}}_1:\) Fuel economy, \({\mathcal {Q}}_2:\) Aerodynamic degree, \({\mathcal {Q}}_3:\) Price, \({\mathcal {Q}}_4:\) Comfort, \({\mathcal {Q}}_5:\) Design and \({\mathcal {Q}}_6:\) Safety. The data of the cars \({\mathcal {P}}_i\) are tabulated in Table 9. Now, we utilize the proposed SM \(\mathcal {S}\) to cluster the cars \({\mathcal {P}}_i\), which involves the following steps:

Table 9 Input data of Example 7

Step 1: By using Eq. (8), calculate the degrees of similarity between the cars, i.e., \(\mathcal {S}({\mathcal {P}}_i,{\mathcal {P}}_k)\) (i,k=1,2, ..., 10). Thus, a similarity matrix \(\mathcal {C}\) is obtained as:

$$\begin{aligned} \mathcal {C}= \begin{pmatrix} 1.0000 &{} 0.6537 &{} 0.6235 &{} 0.7311 &{} 0.6047 &{} 0.8738 &{} 0.6553 &{} 0.5731 &{} 0.6841 &{} 0.5708 \\ 0.6537 &{} 1.0000 &{} 0.9007 &{} 0.6567 &{} 0.6811 &{} 0.6731 &{} 0.9033 &{} 0.7994 &{} 0.6623 &{} 0.6867 \\ 0.6235 &{} 0.9007 &{} 1.0000 &{} 0.7298 &{} 0.7306 &{} 0.6463 &{} 0.8929 &{} 0.7835 &{} 0.7342 &{} 0.6855 \\ 0.7311 &{} 0.6567 &{} 0.7298 &{} 1.0000 &{} 0.6667 &{} 0.6764 &{} 0.6647 &{} 0.6271 &{} 0.9075 &{} 0.6254 \\ 0.6047 &{} 0.6811 &{} 0.7306 &{} 0.6667 &{} 1.0000 &{} 0.6522 &{} 0.7381 &{} 0.8188 &{} 0.6790 &{} 0.8930 \\ 0.8738 &{} 0.6731 &{} 0.6463 &{} 0.6764 &{} 0.6522 &{} 1.0000 &{} 0.6777 &{} 0.5871 &{} 0.6563 &{} 0.6233 \\ 0.6553 &{} 0.9033 &{} 0.8929 &{} 0.6647 &{} 0.7381 &{} 0.6777 &{} 1.0000 &{} 0.8254 &{} 0.6666 &{} 0.7125 \\ 0.5731 &{} 0.7994 &{} 0.7835 &{} 0.6271 &{} 0.8188 &{} 0.5871 &{} 0.8254 &{} 1.0000 &{} 0.6295 &{} 0.8202 \\ 0.6841 &{} 0.6623 &{} 0.7342 &{} 0.9075 &{} 0.6790 &{} 0.6563 &{} 0.6666 &{} 0.6295 &{} 1.0000 &{} 0.6349 \\ 0.5708 &{} 0.6867 &{} 0.6855 &{} 0.6254 &{} 0.8930 &{} 0.6233 &{} 0.7125 &{} 0.8202 &{} 0.6349 &{} 1.0000 \\ \end{pmatrix} \end{aligned}$$

Step 2: Compute the matrix \({\mathcal {C}}^2\), using Definition 8, given as:

$$\begin{aligned} {\mathcal {C}}^2= \begin{pmatrix} 1.0000 &{} 0.6731 &{} 0.7298 &{} 0.7311 &{} 0.6790 &{} 0.8738 &{} 0.6777 &{} 0.6553 &{} 0.7311 &{} 0.6553 \\ 0.6731 &{} 1.0000 &{} 0.9007 &{} 0.7298 &{} 0.7994 &{} 0.6777 &{} 0.9033 &{} 0.8254 &{} 0.7342 &{} 0.7994 \\ 0.7298 &{} 0.9007 &{} 1.0000 &{} 0.7342 &{} 0.7835 &{} 0.6777 &{} 0.9007 &{} 0.8254 &{} 0.7342 &{} 0.7835 \\ 0.7311 &{} 0.7298 &{} 0.7342 &{} 1.0000 &{} 0.7298 &{} 0.7311 &{} 0.7298 &{} 0.7298 &{} 0.9075 &{} 0.6855 \\ 0.6790 &{} 0.7994 &{} 0.7835 &{} 0.7298 &{} 1.0000 &{} 0.6777 &{} 0.8188 &{} 0.8202 &{} 0.7306 &{} 0.8930 \\ 0.8738 &{} 0.6777 &{} 0.6777 &{} 0.7311 &{} 0.6777 &{} 1.0000 &{} 0.6777 &{} 0.6777 &{} 0.6841 &{} 0.6777 \\ 0.6777 &{} 0.9033 &{} 0.9007 &{} 0.7298 &{} 0.8188 &{} 0.6777 &{} 1.0000 &{} 0.8254 &{} 0.7342 &{} 0.8208 \\ 0.6553 &{} 0.8254 &{} 0.8254 &{} 0.7298 &{} 0.8202 &{} 0.6777 &{} 0.8254 &{} 1.0000 &{} 0.7342 &{} 0.8202 \\ 0.7311 &{} 0.7342 &{} 0.7342 &{} 0.9075 &{} 0.7306 &{} 0.6841 &{} 0.7342 &{} 0.7342 &{} 1.0000 &{} 0.6855 \\ 0.6553 &{} 0.7994 &{} 0.7835 &{} 0.6855 &{} 0.8930 &{} 0.6777 &{} 0.8202 &{} 0.8202 &{} 0.6855 &{} 1.0000 \\ \end{pmatrix} \end{aligned}$$

Since \({\mathcal {C}}^2 \ne \mathcal {C}\), we compute \({\mathcal {C}}^4\).

$$\begin{aligned} {\mathcal {C}}^4= \begin{pmatrix} 1.0000 &{} 0.7311 &{} 0.7311 &{} 0.7311 &{} 0.7306 &{} 0.8738 &{} 0.7311 &{} 0.7311 &{} 0.7311 &{} 0.7298 \\ 0.7311 &{} 1.0000 &{} 0.9007 &{} 0.7342 &{} 0.8202 &{} 0.7298 &{} 0.9033 &{} 0.8254 &{} 0.7342 &{} 0.8202 \\ 0.7311 &{} 0.9007 &{} 1.0000 &{} 0.7342 &{} 0.8202 &{} 0.7311 &{} 0.9007 &{} 0.8254 &{} 0.7342 &{} 0.8202 \\ 0.7311 &{} 0.7342 &{} 0.7342 &{} 1.0000 &{} 0.7342 &{} 0.7311 &{} 0.7342 &{} 0.7342 &{} 0.9075 &{} 0.7342 \\ 0.7306 &{} 0.8202 &{} 0.8202 &{} 0.7342 &{} 1.0000 &{} 0.7298 &{} 0.8202 &{} 0.8202 &{} 0.7342 &{} 0.8930 \\ 0.8738 &{} 0.7298 &{} 0.7311 &{} 0.7311 &{} 0.7298 &{} 1.0000 &{} 0.7298 &{} 0.7298 &{} 0.7311 &{} 0.6855 \\ 0.7311 &{} 0.9033 &{} 0.9007 &{} 0.7342 &{} 0.8202 &{} 0.7298 &{} 1.0000 &{} 0.8254 &{} 0.7342 &{} 0.8202 \\ 0.7311 &{} 0.8254 &{} 0.8254 &{} 0.7342 &{} 0.8202 &{} 0.7298 &{} 0.8254 &{} 1.0000 &{} 0.7342 &{} 0.8202 \\ 0.7311 &{} 0.7342 &{} 0.7342 &{} 0.9075 &{} 0.7342 &{} 0.7311 &{} 0.7342 &{} 0.7342 &{} 1.0000 &{} 0.7342 \\ 0.7298 &{} 0.8202 &{} 0.8202 &{} 0.7342 &{} 0.8930 &{} 0.6855 &{} 0.8202 &{} 0.8202 &{} 0.7342 &{} 1.0000 \\ \end{pmatrix} \end{aligned}$$

Since \({\mathcal {C}}^4 \ne {\mathcal {C}}^2\), we compute \({\mathcal {C}}^8\).

$$\begin{aligned} {\mathcal {C}}^8= \begin{pmatrix} 1.0000 &{} 0.7311 &{} 0.7311 &{} 0.7311 &{} 0.7311 &{} 0.8738 &{} 0.7311 &{} 0.7311 &{} 0.7311 &{} 0.7311 \\ 0.7311 &{} 1.0000 &{} 0.9007 &{} 0.7342 &{} 0.8202 &{} 0.7311 &{} 0.9033 &{} 0.8254 &{} 0.7342 &{} 0.8202 \\ 0.7311 &{} 0.9007 &{} 1.0000 &{} 0.7342 &{} 0.8202 &{} 0.7311 &{} 0.9007 &{} 0.8254 &{} 0.7342 &{} 0.8202 \\ 0.7311 &{} 0.7342 &{} 0.7342 &{} 1.0000 &{} 0.7342 &{} 0.7311 &{} 0.7342 &{} 0.7342 &{} 0.9075 &{} 0.7342 \\ 0.7311 &{} 0.8202 &{} 0.8202 &{} 0.7342 &{} 1.0000 &{} 0.7311 &{} 0.8202 &{} 0.8202 &{} 0.7342 &{} 0.8930 \\ 0.8738 &{} 0.7311 &{} 0.7311 &{} 0.7311 &{} 0.7311 &{} 1.0000 &{} 0.7311 &{} 0.7311 &{} 0.7311 &{} 0.7311 \\ 0.7311 &{} 0.9033 &{} 0.9007 &{} 0.7342 &{} 0.8202 &{} 0.7311 &{} 1.0000 &{} 0.8254 &{} 0.7342 &{} 0.8202 \\ 0.7311 &{} 0.8254 &{} 0.8254 &{} 0.7342 &{} 0.8202 &{} 0.7311 &{} 0.8254 &{} 1.0000 &{} 0.7342 &{} 0.8202 \\ 0.7311 &{} 0.7342 &{} 0.7342 &{} 0.9075 &{} 0.7342 &{} 0.7311 &{} 0.7342 &{} 0.7342 &{} 1.0000 &{} 0.7342 \\ 0.7311 &{} 0.8202 &{} 0.8202 &{} 0.7342 &{} 0.8930 &{} 0.7311 &{} 0.8202 &{} 0.8202 &{} 0.7342 &{} 1.0000 \\ \end{pmatrix} \end{aligned}$$

Since \({\mathcal {C}}^8 \ne {\mathcal {C}}^4\), we compute \({\mathcal {C}}^{16}\).

$$\begin{aligned} {\mathcal {C}}^{16}= \begin{pmatrix} 1.0000 &{} 0.7311 &{} 0.7311 &{} 0.7311 &{} 0.7311 &{} 0.8738 &{} 0.7311 &{} 0.7311 &{} 0.7311 &{} 0.7311 \\ 0.7311 &{} 1.0000 &{} 0.9007 &{} 0.7342 &{} 0.8202 &{} 0.7311 &{} 0.9033 &{} 0.8254 &{} 0.7342 &{} 0.8202 \\ 0.7311 &{} 0.9007 &{} 1.0000 &{} 0.7342 &{} 0.8202 &{} 0.7311 &{} 0.9007 &{} 0.8254 &{} 0.7342 &{} 0.8202 \\ 0.7311 &{} 0.7342 &{} 0.7342 &{} 1.0000 &{} 0.7342 &{} 0.7311 &{} 0.7342 &{} 0.7342 &{} 0.9075 &{} 0.7342 \\ 0.7311 &{} 0.8202 &{} 0.8202 &{} 0.7342 &{} 1.0000 &{} 0.7311 &{} 0.8202 &{} 0.8202 &{} 0.7342 &{} 0.8930 \\ 0.8738 &{} 0.7311 &{} 0.7311 &{} 0.7311 &{} 0.7311 &{} 1.0000 &{} 0.7311 &{} 0.7311 &{} 0.7311 &{} 0.7311 \\ 0.7311 &{} 0.9033 &{} 0.9007 &{} 0.7342 &{} 0.8202 &{} 0.7311 &{} 1.0000 &{} 0.8254 &{} 0.7342 &{} 0.8202 \\ 0.7311 &{} 0.8254 &{} 0.8254 &{} 0.7342 &{} 0.8202 &{} 0.7311 &{} 0.8254 &{} 1.0000 &{} 0.7342 &{} 0.8202 \\ 0.7311 &{} 0.7342 &{} 0.7342 &{} 0.9075 &{} 0.7342 &{} 0.7311 &{} 0.7342 &{} 0.7342 &{} 1.0000 &{} 0.7342 \\ 0.7311 &{} 0.8202 &{} 0.8202 &{} 0.7342 &{} 0.8930 &{} 0.7311 &{} 0.8202 &{} 0.8202 &{} 0.7342 &{} 1.0000 \\ \end{pmatrix} \end{aligned}$$

Since \({\mathcal {C}}^{16}={\mathcal {C}}^8\), \({\mathcal {C}}^{16}\) is an ESM.

Step 3: Taking \(\lambda =0.8202\) and applying Definition 10, \({\mathcal {C}}_{\lambda }\) becomes

$$\begin{aligned} {\mathcal {C}}_{\lambda }= \begin{pmatrix} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 1 \\ 0 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 1 \\ 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 1 \\ 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 1 \\ 0 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 1 \\ 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 1 \\ \end{pmatrix} \end{aligned}$$
(10)

Step 4: From Eq. (10), we divide the cars \({\mathcal {P}}_{i}\) into three classes: \(\{{\mathcal {P}}_1,{\mathcal {P}}_6\}\), \(\{{\mathcal {P}}_4,{\mathcal {P}}_9\}\) and \(\{{\mathcal {P}}_2,{\mathcal {P}}_3, {\mathcal {P}}_5, {\mathcal {P}}_7, {\mathcal {P}}_8, {\mathcal {P}}_{10}\}\).

Different values of \(\lambda\) produce different \(\lambda\)-cutting matrices and, consequently, different clustering outcomes. Accordingly, a comprehensive sensitivity analysis with respect to \(\lambda\) is provided in Table 10 (a sketch of such a sweep follows the table), where the confidence level \(\lambda\) is varied from its least value to its highest one. From the obtained outcomes, we conclude that as \(\lambda\) increases, more and more patterns become differentiated. Besides, for a particular number of clusters, there is only one possible partition. For instance, if the cars \({\mathcal {P}}_i\) are classified into four classes, the outcome is \(\{{\mathcal {P}}_1, {\mathcal {P}}_6\}\), \(\{{\mathcal {P}}_2, {\mathcal {P}}_3, {\mathcal {P}}_7, {\mathcal {P}}_8\}\), \(\{{\mathcal {P}}_4,{\mathcal {P}}_9\}\), \(\{{\mathcal {P}}_5,{\mathcal {P}}_{10}\}\). This is useful in reaching the final decision, as it reduces the uncertainty in choosing \(\lambda\).

Table 10 Clustering results for different confidence levels in Example 7
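Since the partition can only change when \(\lambda\) crosses an entry of the ESM, the sensitivity analysis of Table 10 can be automated by sweeping \(\lambda\) over the distinct entries, as in the following sketch (reusing equivalent_similarity_matrix and clusters from the earlier sketch).

```python
import numpy as np

def lambda_sweep(C):
    # Assumes equivalent_similarity_matrix() and clusters() defined above.
    E = equivalent_similarity_matrix(C)
    for lam in sorted(set(E.ravel())):
        parts = clusters(C, lam)
        print(f"lambda = {lam:.4f}: {len(parts)} cluster(s) -> {parts}")
```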

Furthermore, the clustering distribution of the ten cars \(\mathcal {P}_i\) is given in Fig. 2. This figure shows that the cars \(\mathcal {P}_i\) are principally separated into two groups: \(\{\mathcal {P}_2, \mathcal {P}_3, \mathcal {P}_4, \mathcal {P}_5, \mathcal {P}_7, \mathcal {P}_8, \mathcal {P}_9, \mathcal {P}_{10}\}\) and \(\{\mathcal {P}_1, \mathcal {P}_6\}\). Moreover, when the confidence level is kept at a relaxed (low) level, the overall trend can be observed from Fig. 2.

Fig. 2

The clustering effect diagram of ten cars \(\mathcal {P}_i\)

The clustering results tabulated in Table 10 are confirmed by the existing works. For instance, the two-cluster outcomes \(\{\mathcal {P}_1, \mathcal {P}_6\}\), \(\{\mathcal {P}_2, \mathcal {P}_3, \mathcal {P}_4, \mathcal {P}_5, \mathcal {P}_7, \mathcal {P}_8, \mathcal {P}_9, \mathcal {P}_{10}\}\) are supported by [32] and [19]. The three-cluster results \(\{\mathcal {P}_1, \mathcal {P}_6\}\), \(\{\mathcal {P}_4,\mathcal {P}_9\}\), \(\{\mathcal {P}_2, \mathcal {P}_3, \mathcal {P}_5, \mathcal {P}_7, \mathcal {P}_8, \mathcal {P}_{10}\}\) are validated by [27] and [19]. The four-cluster outcomes \(\{\mathcal {P}_1, \mathcal {P}_6\}\), \(\{\mathcal {P}_4,\mathcal {P}_9\}\), \(\{\mathcal {P}_5,\mathcal {P}_{10}\}\), \(\{\mathcal {P}_2, \mathcal {P}_3, \mathcal {P}_7, \mathcal {P}_8\}\) are identical to the results of [19, 27, 29]. The five-cluster outcomes \(\{\mathcal {P}_1, \mathcal {P}_6\}\), \(\{\mathcal {P}_4,\mathcal {P}_9\}\), \(\{\mathcal {P}_5, \mathcal {P}_{10}\}\), \(\{\mathcal {P}_2, \mathcal {P}_3, \mathcal {P}_7\}\), \(\{\mathcal {P}_8\}\), the seven-cluster outcomes \(\{\mathcal {P}_1\}\), \(\{\mathcal {P}_6\}\), \(\{\mathcal {P}_4, \mathcal {P}_9\}\), \(\{\mathcal {P}_5\}\), \(\{\mathcal {P}_{10}\}\), \(\{\mathcal {P}_2, \mathcal {P}_3, \mathcal {P}_7\}\), \(\{\mathcal {P}_8\}\) and the eight-cluster outcomes \(\{\mathcal {P}_1\}\), \(\{\mathcal {P}_6\}\), \(\{\mathcal {P}_4,\mathcal {P}_9\}\), \(\{\mathcal {P}_5\}\), \(\{\mathcal {P}_{10}\}\), \(\{\mathcal {P}_2, \mathcal {P}_7\}\), \(\{\mathcal {P}_3\}\), \(\{\mathcal {P}_8\}\) are supported by [19]. The nine-cluster results \(\{\mathcal {P}_1\}\), \(\{\mathcal {P}_6\}\), \(\{\mathcal {P}_4,\mathcal {P}_9\}\), \(\{\mathcal {P}_5\}\), \(\{\mathcal {P}_{10}\}\), \(\{\mathcal {P}_2\}\), \(\{\mathcal {P}_7\}\), \(\{\mathcal {P}_3\}\), \(\{\mathcal {P}_8\}\) are validated by [32].

Example 8

[29] Consider the dataset of fifteen patterns \(\mathcal {P}_i\) \((i=1,2, \ldots , 15)\), given as \(\mathcal {P}_1=\langle 0.910,0.080\rangle\), \(\mathcal {P}_2=\langle 0.930,0.070\rangle\), \(\mathcal {P}_3=\langle 0.870,0.120\rangle\), \(\mathcal {P}_4=\langle 0.850,0.140\rangle\), \(\mathcal {P}_5=\langle 0.790,0.200\rangle\), \(\mathcal {P}_6=\langle 0.190,0.800\rangle\), \(\mathcal {P}_7=\langle 0.100,0.820\rangle\), \(\mathcal {P}_8=\langle 0.450,0.550\rangle\), \(\mathcal {P}_9=\langle 0.030,0.820\rangle\), \(\mathcal {P}_{10}=\langle 0.070,0.730\rangle\), \(\mathcal {P}_{11}=\langle 0.500,0.500\rangle\), \(\mathcal {P}_{12}=\langle 0.910,0.080\rangle\), \(\mathcal {P}_{13}=\langle 0.400,0.500\rangle\), \(\mathcal {P}_{14}=\langle 0.420,0.480\rangle\), \(\mathcal {P}_{15}=\langle 0.460,0.460\rangle\).

Now, we utilize the proposed SM \(\mathcal {S}\) in order to cluster the patterns \(\mathcal {P}_i\), which involves the subsequent steps:

Step 1: By using Eq. (8), calculate the similarity matrix as:

$$\begin{aligned} \mathcal {C}=\left( \begin{array}{ccccccccccccccc} 1.0000 &{} 0.9803 &{} 0.9598 &{} 0.9397 &{} 0.8794 &{} 0.2764 &{} 0.1888 &{} 0.5373 &{} 0.1212 &{} 0.1635 &{} 0.5875 &{} 1.0000 &{} 0.4912 &{} 0.5114 &{} 0.5507 \\ 0.9803 &{} 1.0000 &{} 0.9403 &{} 0.9204 &{} 0.8603 &{} 0.2604 &{} 0.1728 &{} 0.5200 &{} 0.1052 &{} 0.1470 &{} 0.5700 &{} 0.9803 &{} 0.4735 &{} 0.4935 &{} 0.5328 \\ 0.9598 &{} 0.9403 &{} 1.0000 &{} 0.9799 &{} 0.9196 &{} 0.3166 &{} 0.2304 &{} 0.5773 &{} 0.1643 &{} 0.2076 &{} 0.6275 &{} 0.9598 &{} 0.5333 &{} 0.5534 &{} 0.5923 \\ 0.9397 &{} 0.9204 &{} 0.9799 &{} 1.0000 &{} 0.9397 &{} 0.3367 &{} 0.2512 &{} 0.5973 &{} 0.1858 &{} 0.2296 &{} 0.6475 &{} 0.9397 &{} 0.5543 &{} 0.5744 &{} 0.6131 \\ 0.8794 &{} 0.8603 &{} 0.9196 &{} 0.9397 &{} 1.0000 &{} 0.3970 &{} 0.3136 &{} 0.6573 &{} 0.2503 &{} 0.2957 &{} 0.7075 &{} 0.8794 &{} 0.6173 &{} 0.6374 &{} 0.6756 \\ 0.2764 &{} 0.2604 &{} 0.3166 &{} 0.3367 &{} 0.3970 &{} 1.0000 &{} 0.9379 &{} 0.7428 &{} 0.8959 &{} 0.9236 &{} 0.6925 &{} 0.2764 &{} 0.7524 &{} 0.7322 &{} 0.7002 \\ 0.1888 &{} 0.1728 &{} 0.2304 &{} 0.2512 &{} 0.3136 &{} 0.9379 &{} 1.0000 &{} 0.6720 &{} 0.9586 &{} 0.9466 &{} 0.6200 &{} 0.1888 &{} 0.6776 &{} 0.6567 &{} 0.6243 \\ 0.5373 &{} 0.5200 &{} 0.5773 &{} 0.5973 &{} 0.6573 &{} 0.7428 &{} 0.6720 &{} 1.0000 &{} 0.6213 &{} 0.6750 &{} 0.9500 &{} 0.5373 &{} 0.9725 &{} 0.9725 &{} 0.9680 \\ 0.1212 &{} 0.1052 &{} 0.1643 &{} 0.1858 &{} 0.2503 &{} 0.8959 &{} 0.9586 &{} 0.6213 &{} 1.0000 &{} 0.9313 &{} 0.5675 &{} 0.1212 &{} 0.6236 &{} 0.6020 &{} 0.5691 \\ 0.1635 &{} 0.1470 &{} 0.2076 &{} 0.2296 &{} 0.2957 &{} 0.9236 &{} 0.9466 &{} 0.6750 &{} 0.9313 &{} 1.0000 &{} 0.6200 &{} 0.1635 &{} 0.6804 &{} 0.6582 &{} 0.6239 \\ 0.5875 &{} 0.5700 &{} 0.6275 &{} 0.6475 &{} 0.7075 &{} 0.6925 &{} 0.6200 &{} 0.9500 &{} 0.5675 &{} 0.6200 &{} 1.0000 &{} 0.5875 &{} 0.9250 &{} 0.9450 &{} 0.9800 \\ 1.0000 &{} 0.9803 &{} 0.9598 &{} 0.9397 &{} 0.8794 &{} 0.2764 &{} 0.1888 &{} 0.5373 &{} 0.1212 &{} 0.1635 &{} 0.5875 &{} 1.0000 &{} 0.4912 &{} 0.5114 &{} 0.5507 \\ 0.4912 &{} 0.4735 &{} 0.5333 &{} 0.5543 &{} 0.6173 &{} 0.7524 &{} 0.6776 &{} 0.9725 &{} 0.6236 &{} 0.6804 &{} 0.9250 &{} 0.4912 &{} 1.0000 &{} 0.9789 &{} 0.9428 \\ 0.5114 &{} 0.4935 &{} 0.5534 &{} 0.5744 &{} 0.6374 &{} 0.7322 &{} 0.6567 &{} 0.9725 &{} 0.6020 &{} 0.6582 &{} 0.9450 &{} 0.5114 &{} 0.9789 &{} 1.0000 &{} 0.9637 \\ 0.5507 &{} 0.5328 &{} 0.5923 &{} 0.6131 &{} 0.6756 &{} 0.7002 &{} 0.6243 &{} 0.9680 &{} 0.5691 &{} 0.6239 &{} 0.9800 &{} 0.5507 &{} 0.9428 &{} 0.9637 &{} 1.0000 \\ \end{array}\right) \end{aligned}$$

Step 2: Compute \(\mathcal {C}^2\), \(\mathcal {C}^4\), \(\ldots\) until \(\mathcal {C}^{2^z}=\mathcal {C}^{2^{z+1}}\) for some positive integer z. We observe that \(\mathcal {C}^8=\mathcal {C}^4\); therefore, \(\mathcal {C}^8\) is an ESM, given by

$$\begin{aligned} \mathcal {C}^8=\left( \begin{array}{ccccccccccccccc} 1.0000 &{} 0.9803 &{} 0.9598 &{} 0.9598 &{} 0.9397 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 1.0000 &{} 0.7075 &{} 0.7075 &{} 0.7075 \\ 0.9803 &{} 1.0000 &{} 0.9598 &{} 0.9598 &{} 0.9397 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.9803 &{} 0.7075 &{} 0.7075 &{} 0.7075 \\ 0.9598 &{} 0.9598 &{} 1.0000 &{} 0.9799 &{} 0.9397 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.9598 &{} 0.7075 &{} 0.7075 &{} 0.7075 \\ 0.9598 &{} 0.9598 &{} 0.9799 &{} 1.0000 &{} 0.9397 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.9598 &{} 0.7075 &{} 0.7075 &{} 0.7075 \\ 0.9397 &{} 0.9397 &{} 0.9397 &{} 0.9397 &{} 1.0000 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.9397 &{} 0.7075 &{} 0.7075 &{} 0.7075 \\ 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 1.0000 &{} 0.9379 &{} 0.7524 &{} 0.9379 &{} 0.9379 &{} 0.7524 &{} 0.7075 &{} 0.7524 &{} 0.7524 &{} 0.7524 \\ 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.9379 &{} 1.0000 &{} 0.7524 &{} 0.9586 &{} 0.9466 &{} 0.7524 &{} 0.7075 &{} 0.7524 &{} 0.7524 &{} 0.7524 \\ 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7524 &{} 0.7524 &{} 1.0000 &{} 0.7524 &{} 0.7524 &{} 0.9680 &{} 0.7075 &{} 0.9725 &{} 0.9725 &{} 0.9680 \\ 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.9379 &{} 0.9586 &{} 0.7524 &{} 1.0000 &{} 0.9466 &{} 0.7524 &{} 0.7075 &{} 0.7524 &{} 0.7524 &{} 0.7524 \\ 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.9379 &{} 0.9466 &{} 0.7524 &{} 0.9466 &{} 1.0000 &{} 0.7524 &{} 0.7075 &{} 0.7524 &{} 0.7524 &{} 0.7524 \\ 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7524 &{} 0.7524 &{} 0.9680 &{} 0.7524 &{} 0.7524 &{} 1.0000 &{} 0.7075 &{} 0.9680 &{} 0.9680 &{} 0.9800 \\ 1.0000 &{} 0.9803 &{} 0.9598 &{} 0.9598 &{} 0.9397 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 1.0000 &{} 0.7075 &{} 0.7075 &{} 0.7075 \\ 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7524 &{} 0.7524 &{} 0.9725 &{} 0.7524 &{} 0.7524 &{} 0.9680 &{} 0.7075 &{} 1.0000 &{} 0.9789 &{} 0.9680 \\ 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7524 &{} 0.7524 &{} 0.9725 &{} 0.7524 &{} 0.7524 &{} 0.9680 &{} 0.7075 &{} 0.9789 &{} 1.0000 &{} 0.9680 \\ 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7075 &{} 0.7524 &{} 0.7524 &{} 0.9680 &{} 0.7524 &{} 0.7524 &{} 0.9800 &{} 0.7075 &{} 0.9680 &{} 0.9680 &{} 1.0000 \\ \end{array}\right) \end{aligned}$$

Step 3: Taking \(\lambda =0.9397\), the \(\lambda\)-cutting matrix \(\mathcal {C}_{\lambda }\) is obtained by applying Definition 10 as:

$$\begin{aligned} \mathcal {C}_{\lambda }=\left( \begin{array}{ccccccccccccccc} 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 1 \\ 1 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 &{} 1 \\ \end{array}\right) \end{aligned}$$
(11)

Step 4: By using Eq. (11), the given \(\mathcal {P}_{i}\) are categorized as: \(\{\mathcal {P}_1,\mathcal {P}_2,\mathcal {P}_3,\mathcal {P}_4,\mathcal {P}_5,\mathcal {P}_{12}\}\), \(\{\mathcal {P}_6\}\), \(\{\mathcal {P}_7,\mathcal {P}_9, \mathcal {P}_{10}\}\), and \(\{\mathcal {P}_8\), \(\mathcal {P}_{11}\), \(\mathcal {P}_{13}\), \(\mathcal {P}_{14}\), \(\mathcal {P}_{15}\}\).

Apart from these, the complete results for different values of \(\lambda\) are listed in Table 11.

Table 11 Results for different confidence levels of Example 8

Conclusion

The key contribution of this work is outlined below:

  1. 1.

    A novel SM between pairs of IFSs is proposed by transforming the given IFSs into right-angled triangles within a square area, and several of its properties are investigated. In particular, comparative studies with the existing SMs listed in Table 1 are carried out to demonstrate the superiority of the proposed measure over them.

  2. 2.

    The validity as well as the superiority of the proposed SM over the existing SMs is summarized in Section 4, which demonstrates that the existing SMs fail to give classification results in various instances, such as the “division by zero” problem or “counter-intuitive cases”, and hence decision makers may face obstacles in making the optimal choice.

  3. 3.

    An algorithm to solve DMPs with the proposed SM is developed and applied to numerous examples, such as pattern recognition and clustering analysis, to show its performance. Compared with the other existing SM-based algorithms, the proposed measure offers the following benefits: (i) it obtains the finest alternative without counter-intuitive cases [5,6,7,8,9,10, 12, 16, 22, 23, 26, 28]; and (ii) it is free from the division by zero problem [22, 25].

  4. 4.

    Further, based on the proposed SM, a novel clustering algorithm is given to classify the given objects under different confidence levels of the expert.

In the future, there is scope for extending this research to other uncertain environments. Moreover, the present work does not consider the interactions between the different attributes; this limitation will be addressed in our future work. We also intend to define more generalized algorithms in order to solve more complex problems such as brain hemorrhage detection, healthcare, nonlinear systems, and control systems.