
1 Introduction

As a generalization of the fuzzy set, the intuitionistic fuzzy set (IFS) was introduced by Atanassov [1] to deal with the uncertainty of imperfect information. Since an IFS represents information by a membership degree, a non-membership degree and a hesitancy degree expressing a lack of information, it is more powerful for dealing with vagueness and uncertainty than the fuzzy set (FS). Many measures [2, 4–6, 10, 13–15, 17, 23] have been proposed by scholars to evaluate IFSs. Basically, it is desired that a measure defined on IFSs should be able to evaluate the degrees of fuzziness and intuitionism of the imperfect information. Among the most interesting measures in IFS theory, the knowledge measure is an essential tool for evaluating the amount of knowledge conveyed by the information contained in an IFS. Based on a knowledge measure of IFSs, an entropy measure and a similarity measure between IFSs can be constructed.

Entropy, first mentioned in 1965 by Zadeh [26], describes the fuzziness of an FS. In order to measure the degree of fuzziness of FSs, De Luca and Termini [7] introduced a non-probabilistic entropy, also called a measure of a quantity of information. Kaufmann [11] proposed a method for measuring the fuzziness of a fuzzy set by a metric distance between its membership function and the membership function of its nearest crisp set. Yager [24] suggested an entropy measure expressed by the distance between a fuzzy set and its complement. In 1996, Bustince and Burillo [3] first introduced the notion that entropy on IFSs can be used to evaluate the intuitionism of an IFS. Szmidt and Kacprzyk [18] reformulated De Luca and Termini's axioms and proposed an entropy measure for IFSs based on a geometric interpretation, as a ratio of the distance from the IFS to the nearer crisp set and the distance to the other (farther) one. Hung and Yang [9] gave axiomatic definitions of the entropy of IFSs by using the concept of probability. Vlachos and Sergiadis [21] pointed out that entropy, as a measure of fuzziness, can measure both fuzziness and intuitionism for IFSs. On the other hand, Szmidt et al. [19] emphasized that entropy alone may not be a satisfactory dual measure of knowledge from the viewpoint of decision making, and introduced a new measure of knowledge for IFSs that involves both the entropy and the hesitation margin. Dengfeng and Chuntian [8] gave an axiomatic definition of similarity measures between IFSs and proposed similarity measures based on high and low membership functions. Ye [25] proposed cosine and weighted cosine similarity measures for IFSs and applied them to a small medical diagnosis problem. However, Li et al. [12] pointed out that there are counterintuitive examples in pattern recognition among these existing similarity measures. Many unreasonable results of other measures for IFSs are also revealed in [19, 25]. The main reason for the unreasonable cases of the existing entropy and similarity measures is that there is no reliable measurement of the amount of knowledge carried by IFSs that could be used for measuring and comparing them. In this paper, we present a new knowledge measure for IFSs that provides reliable results. The performance evaluation of the proposed measure is twofold: assessing how reasonable the measure is, and indicating its accuracy in comparison with other measures.

2 Basic Concept of Intuitionistic Fuzzy Sets

For any elements of the universe of discourse X, an intuitionistic fuzzy set A is described by:

$$A = \left\{ {\left( {x,\mu_{A} (x),\nu_{A} (x)} \right) |x \in X} \right\},$$
(1)

where \(\mu_{A} (x)\) denotes the degree of membership and \(\nu_{A} (x)\) the degree of non-membership of x to A, with \(\mu_{A} :X \to [0,1]\) and \(\nu_{A} :X \to [0,1]\) such that \(0 \le \mu_{A} \left( x \right) + \nu_{A} \left( x \right) \le 1,\;\forall x \in X\). To measure the hesitancy of membership of an element to an IFS, Atanassov introduced a third function given by:

$$\pi_{A} \left( x \right) = 1 - \mu_{A} \left( x \right) - \nu_{A} \left( x \right) ,$$
(2)

which is called the intuitionistic fuzzy index or the hesitation margin. It is obvious that \(0 \le \pi_{A} \left( x \right) \le 1,\;\forall x \in X\). If \(\pi_{A} \left( x \right) = 0,\;\forall x \in X\), then \(\mu_{A} \left( x \right) + \nu_{A} \left( x \right) = 1\) and the IFS A reduces to an ordinary fuzzy set. The complement of an IFS A, denoted by \(A^{c}\), is defined as [1]:

$$A^{c} = \left\{ {\left( {x,\nu_{A} (x),\mu_{A} (x)} \right) |x \in X} \right\}.$$
(3)

Many measures for IFSs have been presented, such as the well-known measures given by De Luca and Termini [7], Szmidt and Kacprzyk [18], Wang [22] and Zhang [27]. However, Szmidt and Kacprzyk [19] found some problems with the existing distance, entropy and similarity measures. To deal with these situations, Szmidt and Kacprzyk proposed in [20] a measure of the amount of knowledge for IFSs, considering both the entropy measure and the hesitation margin, as follows:

$$K\left( x \right) = 1 - 0.5(E\left( x \right) + \pi \left( x \right)) .$$
(4)

However, this measure also gives unreasonable results because it assigns equal amounts of knowledge to two different IFSs. For example, in the case of the two singleton IFSs \(A = \left\langle {x,0.5,0.5} \right\rangle\) and \(B = \left\langle {x,0,0.5} \right\rangle\), Eq. (4) gives \(K\left( A \right) = 0.5\) and \(K\left( B \right) = 0.5\). It can well be argued that the amount of knowledge for \(A = \left\langle {x,0.5,0.5} \right\rangle\) should be bigger than for \(B = \left\langle {x,0,0.5} \right\rangle\) from the viewpoint of decision making. To overcome this drawback, we propose a new measure of the amount of knowledge carried by IFSs.
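As a quick check of this drawback, the following minimal Python sketch evaluates Eq. (4) for the two singleton IFSs, assuming that the entropy E is instantiated with the Szmidt and Kacprzyk ratio entropy (recalled later as Eq. (10)); other entropies could be plugged in instead, and the function names are ours, chosen only for this illustration.

```python
# Minimal sketch of the knowledge measure K = 1 - 0.5*(E + pi) from Eq. (4)
# for a singleton IFS <x, mu, nu>.  The entropy E is assumed here to be the
# Szmidt-Kacprzyk ratio entropy (Eq. (10) below); [20] allows other choices.

def sk_entropy(mu, nu):
    """Per-element Szmidt-Kacprzyk entropy (Eq. (10))."""
    pi = 1.0 - mu - nu
    return (min(mu, nu) + pi) / (max(mu, nu) + pi)

def k_eq4(mu, nu):
    """Knowledge measure from Eq. (4): K = 1 - 0.5*(E + pi)."""
    pi = 1.0 - mu - nu
    return 1.0 - 0.5 * (sk_entropy(mu, nu) + pi)

print(k_eq4(0.5, 0.5))  # 0.5 for A = <x, 0.5, 0.5>
print(k_eq4(0.0, 0.5))  # 0.5 for B = <x, 0, 0.5> -- the same value, the drawback
```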

3 A New Proposed Measure of Knowledge for the Intuitionistic Fuzzy Sets

A knowledge measure of an FS evaluates its distance to the most fuzzy set, i.e. the set with membership and non-membership grades equal to 0.5. Motivated by this idea, we define a new measure of the amount of knowledge for IFSs as follows:

Definition 1

Let A be an IFS in the finite universe of discourse \(X = \left\{ {x_{1} , x_{2} , \ldots ,x_{n} } \right\}\). The new knowledge measure of A is defined as the normalized Euclidean distance from A to the most intuitionistic fuzzy set, i.e. \(F = \left\langle {x,0,0} \right\rangle\), and is expressed as:

$$K_{F} \left( A \right) = \frac{1}{n\sqrt 2 }\mathop \sum \limits_{i = 1}^{n} \sqrt {\left( {\mu_{A} \left( {x_{i} } \right) - 0} \right)^{2} + \left( {\nu_{A} \left( {x_{i} } \right) - 0} \right)^{2} + \left( {\pi_{A} \left( {x_{i} } \right) - 1} \right)^{2} } .$$
(5)

Hence, the proposed knowledge measure evaluates the quantity of information of an IFS A as its normalized Euclidean distance from the reference level of zero information. For example, the knowledge measure equals 1 for crisp sets \((\mu_{A} \left( {x_{i} } \right) = 1 \;{\text{or}} \;\nu_{A} \left( {x_{i} } \right) = 1)\) and 0 for the most intuitionistic fuzzy set \(F = \left\langle {x,0,0} \right\rangle .\)

The entropy measure for IFSs, as the dual measure of the amount of knowledge, is defined as:

$$E_{F} \left( A \right) = 1 - K_{F} \left( A \right) .$$
(6)
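The following short Python sketch, given only for illustration, computes \(K_{F}\) from Eq. (5) and its dual entropy \(E_{F}\) from Eq. (6); the set is represented as a list of (membership, non-membership) pairs, and the function names k_f and e_f are ours.

```python
import math

def k_f(ifs):
    """Knowledge measure K_F from Eq. (5).

    `ifs` is a list of (mu, nu) pairs, one per element of X; pi = 1 - mu - nu.
    """
    n = len(ifs)
    total = 0.0
    for mu, nu in ifs:
        pi = 1.0 - mu - nu
        # Euclidean distance of (mu, nu, pi) from the most intuitionistic
        # fuzzy point F = (0, 0, 1), normalized by sqrt(2).
        total += math.sqrt(mu ** 2 + nu ** 2 + (pi - 1.0) ** 2)
    return total / (n * math.sqrt(2.0))

def e_f(ifs):
    """Dual entropy measure E_F = 1 - K_F from Eq. (6)."""
    return 1.0 - k_f(ifs)

print(k_f([(1.0, 0.0)]))  # 1.0 for a crisp element
print(k_f([(0.0, 0.0)]))  # 0.0 for the most intuitionistic fuzzy set F = <x, 0, 0>
```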

Theorem 1

Let A be an IFS in \(X = \left\{ {x_{1} , x_{2} , \ldots ,x_{n} } \right\}\). The proposed knowledge measure \(K_{F}\) of A is a metric and satisfies the following axiomatic properties.

  1. (P1)

    \(K_{F} \left( A \right) = 1\) iff A is a crisp set

  2. (P2)

    \(K_{F} \left( A \right) = 0\) iff \(\pi_{A} \left( {x_{i} } \right) = 1\) for all \(x_{i} \in X\)

  3. (P3)

    \(0 \le K_{F} \left( A \right) \le 1\)

  4. (P4)

    \(K_{F} \left( A \right) = K_{F} \left( {A^{c} } \right)\)

Proof

  1. (P1)

    Bearing in mind that \(\mu_{A} \left( {x_{i} } \right) + \nu_{A} \left( {x_{i} } \right) + \pi_{A} \left( {x_{i} } \right) = 1\), we have:

    $$\begin{aligned} K_{F} \left( A \right) = 1 & \Leftrightarrow \mathop \sum \limits_{i = 1}^{n} \sqrt {\left( {\mu_{A} \left( {x_{i} } \right)} \right)^{2} + \left( {\nu_{A} \left( {x_{i} } \right)} \right)^{2} + \left( {1 - \pi_{A} \left( {x_{i} } \right)} \right)^{2} } = n\sqrt 2 \\ & \Leftrightarrow \mathop \sum \limits_{i = 1}^{n} \sqrt {\left( {\mu_{A} \left( {x_{i} } \right)} \right)^{2} + \left( {\nu_{A} \left( {x_{i} } \right)} \right)^{2} + \left( {\mu_{A} \left( {x_{i} } \right) + \nu_{A} \left( {x_{i} } \right)} \right)^{2} } = n\sqrt 2 \\ & \Leftrightarrow \mathop \sum \limits_{i = 1}^{n} \sqrt {\left( {\mu_{A} \left( {x_{i} } \right) + \nu_{A} \left( {x_{i} } \right)} \right)^{2} - \mu_{A} \left( {x_{i} } \right)\nu_{A} \left( {x_{i} } \right)} = n. \\ \end{aligned}$$

    Since \(0 \le \mu_{A} \left( {x_{i} } \right)\nu_{A} \left( {x_{i} } \right) \le \left( {\mu_{A} \left( {x_{i} } \right) + \nu_{A} \left( {x_{i} } \right)} \right)^{2} \le 1\), the following inequality holds:

    $$\begin{aligned} & 0 \le \left( {\mu_{A} \left( {x_{i} } \right) + \nu_{A} \left( {x_{i} } \right)} \right)^{2} - \mu_{A} \left( {x_{i} } \right)\nu_{A} \left( {x_{i} } \right) \le 1 \Leftrightarrow \\ & 0 \le \sqrt {\left( {\mu_{A} \left( {x_{i} } \right) + \nu_{A} \left( {x_{i} } \right)} \right)^{2} - \mu_{A} \left( {x_{i} } \right)\nu_{A} \left( {x_{i} } \right)} \le 1. \\ \end{aligned}$$

    Hence every summand in the last sum above is at most 1, so \(\sum\nolimits_{i = 1}^{n} {\sqrt {\left( {\mu_{A} \left( {x_{i} } \right) + \nu_{A} \left( {x_{i} } \right)} \right)^{2} - \mu_{A} \left( {x_{i} } \right)\nu_{A} \left( {x_{i} } \right)} } = n \Leftrightarrow\)

    \(\left( {\mu_{A} \left( {x_{i} } \right) = 1\;{\text{and}}\;\nu_{A} \left( {x_{i} } \right) = 0} \right)\;{\text{or}}\;\left( {\mu_{A} \left( {x_{i} } \right) = 0\;{\text{and}}\;\nu_{A} \left( {x_{i} } \right) = 1} \right)\) for every \(x_{i} \in X\), which means that A is a crisp set.

  2. (P2)

    From Eq. (5) we have

    $$\begin{aligned} K_{F} \left( A \right) = 0 & \Leftrightarrow \mathop \sum \limits_{i = 1}^{n} \sqrt {\left( {\mu_{A} \left( {x_{i} } \right)} \right)^{2} + \left( {\nu_{A} \left( {x_{i} } \right)} \right)^{2} + \left( {1 - \pi_{A} \left( {x_{i} } \right)} \right)^{2} } = 0 \\ & \Leftrightarrow \mu_{A} \left( {x_{i} } \right) = 0,\;\nu_{A} \left( {x_{i} } \right) = 0,\;\pi_{A} \left( {x_{i} } \right) = 1\;{\text{for all}}\;x_{i} \in X \Leftrightarrow \pi_{A} \left( {x_{i} } \right) = 1. \\ \end{aligned}$$
  3. (P3)

    From (P1) we have

    $$\begin{aligned} & 0 \le \sqrt {\left( {\mu_{A} \left( {x_{i} } \right) + \nu_{A} \left( {x_{i} } \right)} \right)^{2} - \mu_{A} \left( {x_{i} } \right)\nu_{A} \left( {x_{i} } \right)} \le 1 \\ & \quad \Leftrightarrow 0 \le \sqrt {\left( {\mu_{A} \left( {x_{i} } \right)} \right)^{2} + \left( {\nu_{A} \left( {x_{i} } \right)} \right)^{2} + \mu_{A} \left( {x_{i} } \right)\nu_{A} \left( {x_{i} } \right)} \le 1 \\ & \quad \Leftrightarrow 0 \le \frac{1}{\sqrt 2 }\sqrt {2\left( {\mu_{A} \left( {x_{i} } \right)} \right)^{2} + 2\left( {\nu_{A} \left( {x_{i} } \right)} \right)^{2} + 2\mu_{A} \left( {x_{i} } \right)\nu_{A} \left( {x_{i} } \right)} \le 1 \\ & \quad \Leftrightarrow 0 \le \frac{1}{n\sqrt 2 }\mathop \sum \limits_{i = 1}^{n} \sqrt {\left( {\mu_{A} \left( {x_{i} } \right)} \right)^{2} + \left( {\nu_{A} \left( {x_{i} } \right)} \right)^{2} + \left( {\mu_{A} \left( {x_{i} } \right) + \nu_{A} \left( {x_{i} } \right)} \right)^{2} } \le 1. \\ \end{aligned}$$

    Having in mind that \(1 - \pi_{A} \left( {x_{i} } \right) = \mu_{A} \left( {x_{i} } \right) + \nu_{A} \left( {x_{i} } \right)\), this gives \(0 \le K_{F} \left( A \right) \le 1\), which proves (P3).

  4. (P4)

    Combining Eqs. (5) and (3), and noting that \(\pi_{{A^{c} }} \left( {x_{i} } \right) = \pi_{A} \left( {x_{i} } \right)\), we have

    $$K_{F} \left( {A^{c} } \right) = \frac{1}{n\sqrt 2 }\mathop \sum \limits_{i = 1}^{n} \sqrt {\left( {\nu_{A} \left( {x_{i} } \right)} \right)^{2} + \left( {\mu_{A} \left( {x_{i} } \right)} \right)^{2} + \left( {1 - \pi_{A} \left( {x_{i} } \right)} \right)^{2} } = K_{F} \left( A \right).$$

    This completes the proof.

A geometrical interpretation of the proposed knowledge measure \(K_{F}\) for IFSs is shown in Fig. 1. The IFS A is mapped into the triangle MNF, where each element x of A corresponds to a point of the triangle MNF with coordinates \(\left( {\mu_{A} (x), \nu_{A} (x),\pi_{A} (x)} \right)\) fulfilling Eq. (2). Point M(1,0,0) represents elements fully belonging to A \((\mu_{A} \left( x \right) = 1)\). Point N(0,1,0) represents elements fully not belonging to A \((\nu_{A} \left( x \right) = 1)\). Point F(0,0,1) represents elements of full hesitation, i.e. we are not able to say whether they belong or do not belong to A. In this way, the whole information about the IFS A can be described by the location of the corresponding point \(\left( {\mu_{A} (x), \nu_{A} (x),\pi_{A} (x)} \right)\) inside the triangle MNF, or rather by its distance to the point F(0,0,1) (distance c in Fig. 1). Clearly, the smaller the distance c, the smaller the knowledge measure (and the higher the degree of fuzziness). When the distance c = 0, the point F represents elements with a zero information level \((K_{F} = 0)\), i.e. the highest degree of fuzziness.
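For a single element x (i.e. n = 1 in Eq. (5)), the relation between the distance c and the knowledge measure can be written explicitly as:

$$c = \sqrt {\mu_{A}^{2} \left( x \right) + \nu_{A}^{2} \left( x \right) + \left( {\pi_{A} \left( x \right) - 1} \right)^{2} } ,\quad K_{F} = \frac{c}{\sqrt 2 },$$

so that c = 0 at the point F (giving \(K_{F} = 0\)) and \(c = \sqrt 2\) at the crisp points M and N (giving \(K_{F} = 1\)).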

Fig. 1

Geometrical representation of an IFS and its distance to the most intuitionistic fuzzy set F

4 Comparative Examples

In order to verify the validity and capability of the new measures, some comparative examples are presented in this section.

Example 1

Let us calculate the knowledge measure \(K_{F}\) for the two singleton IFSs \(A = \left\langle {x,0.5,0.5} \right\rangle\) and \(B = \left\langle {x,0,0.5} \right\rangle\). Applying the proposed knowledge measure \(K_{F}\) from Eq. (5) we obtain \(K_{F} \left( A \right) = 0.866\) and \(K_{F} \left( B \right) = 0.5\). Thus, \(K_{F} \left( A \right) > K_{F} \left( B \right)\) indicates that the amount of knowledge of A is bigger than that of B, while the Szmidt and Kacprzyk measure gave \(K\left( A \right) = K\left( B \right) = 0.5\) in [20]. The result is consistent with our intuition because A and B have the same non-membership degrees, but A carries additional information about the membership degree. Therefore, the amount of knowledge of A should be bigger than that of B.
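These values can be reproduced with the illustrative k_f sketch given after Eq. (6):

```python
# Example 1 values, using the illustrative k_f function defined earlier.
A = [(0.5, 0.5)]   # <x, 0.5, 0.5>
B = [(0.0, 0.5)]   # <x, 0, 0.5>
print(round(k_f(A), 3))  # 0.866
print(round(k_f(B), 3))  # 0.5
```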

Example 2

In this example we compare our knowledge-based entropy measure with some existing entropy measures. We first recall some widely used entropy measures for IFSs as follows (a small code sketch of two of them appears after the list).

  1. (a)

    Bustince and Burillo [3]:

    $$E_{bb} \left( A \right) = \mathop \sum \limits_{i = 1}^{n} \pi_{A} \left( {x_{i} } \right);$$
    (7)
  2. (b)

    Hung and Yang [9]:

    $$E_{hc}^{\alpha } \left( A \right) = \left\{ {\begin{array}{*{20}l} {\frac{1}{\alpha - 1}\left[ {1 - \left( {\mu_{A}^{\alpha } + \nu_{A}^{\alpha } + \pi_{A}^{\alpha } } \right)} \right],} & {\alpha \ne 1,\;\alpha > 0} \\ { - \left( {\mu_{A} \ln \mu_{A} + \nu_{A} \ln \nu_{A} + \pi_{A} \ln \pi_{A} } \right),} & {\alpha = 1} \\ \end{array} } \right.,$$
    (8)
    $$E_{r}^{\beta } \left( A \right) = \frac{1}{1 - \beta }\ln \left( {\mu_{A}^{\beta } + \nu_{A}^{\beta } + \pi_{A}^{\beta } } \right),\quad 0 < \beta < 1;$$
    (9)
  3. (c)

    Szmidt and Kacprzyk [18]:

    $$E_{sk} \left( A \right) = \frac{1}{n}\mathop \sum \limits_{i = 1}^{n} \frac{{\hbox{min} \left( {\mu_{A} \left( {x_{i} } \right),\nu_{A} \left( {x_{i} } \right)} \right) + \pi_{A} \left( {x_{i} } \right)}}{{\hbox{max} \left( {\mu_{A} \left( {x_{i} } \right),\nu_{A} \left( {x_{i} } \right)} \right) + \pi_{A} \left( {x_{i} } \right)}};$$
    (10)
  4. (d)

    Vlachos and Sergiadis [21]:

    $$E_{vs1} \left( A \right) = - \frac{1}{n\ln 2}\mathop \sum \limits_{i = 1}^{n} \left[ {\begin{array}{*{20}c} {\mu_{A} \left( {x_{i} } \right)\ln \mu_{A} \left( {x_{i} } \right) + \nu_{A} \left( {x_{i} } \right)\ln \nu_{A} \left( {x_{i} } \right) - } \\ {\left( {1 - \pi_{A} \left( {x_{i} } \right)} \right)\ln \left( {1 - \pi_{A} \left( {x_{i} } \right)} \right) - \pi_{A} \left( {x_{i} } \right)\ln 2} \\ \end{array} } \right],$$
    (11)
    $$E_{vs2} \left( A \right) = \frac{1}{n}\mathop \sum \limits_{i = 1}^{n} \frac{{2\mu_{A} \left( {x_{i} } \right)\nu_{A} \left( {x_{i} } \right) + \pi_{A}^{2} \left( {x_{i} } \right)}}{{\mu_{A}^{2} \left( {x_{i} } \right) + \nu_{A}^{2} \left( {x_{i} } \right) + \pi_{A}^{2} \left( {x_{i} } \right)}}.$$
    (12)
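The sketch referenced above gives illustrative per-element implementations of two of the recalled measures, Eq. (10) and Eq. (12); the function names are ours, chosen only for this example, and averaging over all elements yields the full measures.

```python
# Illustrative per-element versions of two recalled entropies for a singleton
# IFS <x, mu, nu>; averaging over all elements gives Eqs. (10) and (12).

def e_sk(mu, nu):
    """Szmidt-Kacprzyk entropy, Eq. (10), for one element."""
    pi = 1.0 - mu - nu
    return (min(mu, nu) + pi) / (max(mu, nu) + pi)

def e_vs2(mu, nu):
    """Vlachos-Sergiadis entropy, Eq. (12), for one element."""
    pi = 1.0 - mu - nu
    return (2.0 * mu * nu + pi ** 2) / (mu ** 2 + nu ** 2 + pi ** 2)

print(e_sk(0.5, 0.5), e_vs2(0.5, 0.5))  # 1.0 1.0 for <x, 0.5, 0.5>
print(e_sk(0.4, 0.4), e_vs2(0.4, 0.4))  # 1.0 1.0 for <x, 0.4, 0.4> -- indistinguishable
```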

Let us consider the seven single-element IFSs given by \(A_{1} = \left\langle {x, 0.7,0.2} \right\rangle\), \(A_{2} = \left\langle {x, 0.5,0.3} \right\rangle\), \(A_{3} = \left\langle {x, 0.5,0} \right\rangle\), \(A_{4} = \left\langle {x, 0.5,0.5} \right\rangle\), \(A_{5} = \left\langle {x, 0.5,0.4} \right\rangle\), \(A_{6} = \left\langle {x, 0.6,0.2} \right\rangle\) and \(A_{7} = \left\langle {x, 0.4,0.4} \right\rangle\). These IFSs are used to compare the recalled entropy measures with our new measure \(E_{F}\) from formula (6). The calculated results of the particular measures are summarized in the columns of Table 1.

Table 1 Comparison of the entropy measures

It can be seen that the recalled measures give some unreasonable cases (in bold type). For instance, the measures \(E_{hc}^{\alpha }\) and \(E_{r}^{\beta }\) \((\alpha = 1/2, 1, 2 \;{\text{and}} \;3;\;\beta = 1/3 \;{\text{and}}\; 1/2)\) from Hung and Yang [9] cannot distinguish the two different IFSs \(A_{3} = \left\langle {x, 0.5,0} \right\rangle\) and \(A_{4} = \left\langle {x, 0.5,0.5} \right\rangle\). The measures \(E_{sk}\) from [18] and \(E_{vs1}\) and \(E_{vs2}\) from [21] give the same entropy values for the two different IFSs \(A_{4} = \left\langle {x, 0.5,0.5} \right\rangle\) and \(A_{7} = \left\langle {x, 0.4,0.4} \right\rangle\). In turn, \(E_{bb}\) evaluates entropy by the hesitation margin alone, ignoring the fuzziness arising from the relation between the membership and non-membership degrees, as in the cases \(A_{1} = \left\langle {x, 0.7,0.2} \right\rangle\) and \(A_{5} = \left\langle {x, 0.5,0.4} \right\rangle\), or \(A_{2} = \left\langle {x, 0.5,0.3} \right\rangle\), \(A_{6} = \left\langle {x, 0.6,0.2} \right\rangle\) and \(A_{7} = \left\langle {x, 0.4,0.4} \right\rangle\). Based on the results of the new entropy measure \(E_{F}\) (last column), we can rank the IFSs in order of increasing entropy as follows: \(A_{4} \prec A_{1} \prec A_{5} \prec A_{6} \prec A_{2} \prec A_{7} \prec A_{3}\). Thus, among the tested sets, the most fuzzy set is \(A_{3}\) and the sharpest set is \(A_{4}\). This order is consistent only with Bustince and Burillo's measure [3], \(E_{bb} \left( A \right) = \mathop \sum \limits_{i = 1}^{n} \pi_{A} (x_{i} )\), which is a function of the hesitation margin \(\pi_{A}\) alone. Therefore, \(E_{bb}\) is not able to capture the influence of the relationship between \(\mu_{A}\) and \(\nu_{A}\) on the degree of fuzziness. Nevertheless, it indicates the overwhelming importance of the hesitation margin in evaluating the degree of fuzziness for IFSs.
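As an illustration only, this ranking can be reproduced with the e_f sketch given after Eq. (6) by sorting the seven sets by increasing \(E_{F}\):

```python
# Reproducing the E_F ranking of the last column of Table 1 with the
# illustrative e_f function defined earlier (E_F = 1 - K_F).
sets = {
    "A1": (0.7, 0.2), "A2": (0.5, 0.3), "A3": (0.5, 0.0), "A4": (0.5, 0.5),
    "A5": (0.5, 0.4), "A6": (0.6, 0.2), "A7": (0.4, 0.4),
}
ranked = sorted(sets, key=lambda name: e_f([sets[name]]))
print(ranked)  # ['A4', 'A1', 'A5', 'A6', 'A2', 'A7', 'A3']
```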

Example 3

We consider the data classification problem known as "Saturday Mornings", introduced by Quinlan [16] and solved by using decision trees and selecting the minimal possible tree. The presented example is quite small, but it is a challenge for many classification and machine learning methods. The objects of classification were attributes describing the weather on Saturday mornings, and each attribute was assigned a set of disjoint linguistic values as follows [16]: Outlook = {sunny, overcast, rain}, Temperature = {cold, mild, hot}, Humidity = {high, normal} and Windy = {true, false}. Each object belongs to one of the two classes C = {P, N}, where P denotes positive and N negative. Quinlan pointed out the best ranking of the attributes, in the context of the amount of knowledge, as: Outlook \(\succ\) Humidity \(\succ\) Windy \(\succ\) Temperature.

The representation of the "Saturday Mornings" data in terms of IFSs is shown in Table 2. The results of evaluating the amount of knowledge of the "Saturday Mornings" data for the particular attributes are \(K_{F} \left( {Outlook} \right) = 0.51\), \(K_{F} \left( {Temperature} \right) = 0.25\), \(K_{F} \left( {Humidity} \right) = 0.498\) and \(K_{F} \left( {Windy} \right) = 0.49\). From the viewpoint of decision making, the best attribute is the most informative one, i.e. the one with the highest amount of knowledge \(K_{F}\). Therefore, the ranking of the attributes indicated by \(K_{F}\) is: Outlook \(\succ\) Humidity \(\succ\) Windy \(\succ\) Temperature, which is exactly the same as in the results presented by Quinlan [16] and Szmidt et al. [20].
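The attribute ranking follows directly from the \(K_{F}\) values reported above; a trivial sketch:

```python
# Ranking the "Saturday Mornings" attributes by the K_F values reported above
# (highest amount of knowledge first).
k_values = {"Outlook": 0.51, "Temperature": 0.25, "Humidity": 0.498, "Windy": 0.49}
ranking = sorted(k_values, key=k_values.get, reverse=True)
print(ranking)  # ['Outlook', 'Humidity', 'Windy', 'Temperature']
```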

Table 2 The representation of “Saturday Mornings” data in terms of IFSs

5 Conclusion

We have discussed some features of the measures for IFSs existing in the literature and proposed a new knowledge measure for IFSs. The newly proposed measures have been verified by comparison with the existing measures in the illustrative examples. From the obtained results we can see that the proposed measure overcomes the drawbacks of the existing measures. The new measure captures the relationship between positive and negative information, as well as the strong influence of a lack of information on the amount of knowledge. Finally, the proposed measure gives reasonable results in comparison with other measures when dealing with the data classification problem.