Abstract
The knowledge measure can be regarded as a dual of the entropy measure for fuzzy sets. In the present work, a new entropy-based knowledge measure is proposed for FSs that complies with the extended De Luca and Termini axioms, and some of its major properties are discussed. Comparison of the proposed measure with various existing fuzzy measures indicates that it has a greater ability to discriminate between FSs. Moreover, a fuzzy accuracy measure is introduced based on the proposed measure and some of its properties are investigated. Considering the significance of integrated weights, a new multiple attribute decision-making (MADM) model is introduced in the fuzzy set environment. The proposed knowledge measure is utilized to calculate the weight vector, both when the weights are partially known and when they are completely unknown. Finally, an example is employed to illustrate the effectiveness and consistency of the new MADM method.
1 Introduction
Fuzzy set (FS) theory, introduced by Zadeh (1965), is an important tool for measuring the degree of fuzziness. A fuzzy set assigns to each element of a set only a membership degree in the interval [0,1], and can thus be described in terms of "belong to" and "not belong to". Xuecheng (1992) presented axiomatic definitions of the entropy, distance and similarity measures of FSs and systematically considered the basic relations between these measures. Entropy, defined as a measure of uncertain information or fuzziness in FS theory, has been widely applied in finance, information technology, communication and decision-making. De Luca and Termini (1972) suggested the fuzzy entropy of FSs on the basis of the Shannon entropy function and defined a set of axioms. Yager (1979) obtained an entropy measure of an FS from the distance between a fuzzy set and its complement. Later on, Higashi and Klir (1982) extended this concept to a general class of fuzzy complements. Various authors have applied FSs in distinct fields such as pattern recognition, decision-making and image processing (Chen and Chen 2001; Chen et al. 2013; Chen and Chen 2014; Fahmi et al. 2019). The knowledge measure is usually regarded as the dual of fuzzy entropy or uncertainty: an entropy measure cannot capture all uncertainties in FSs, and a knowledge measure represents the difference of a fuzzy set from the most fuzzy set. It may be noted that the definitions of knowledge and entropy measures satisfy the following condition: if M \(\le\) N, then knowledge(M) \(\ge\) knowledge(N) or entropy(M) \(\le\) entropy(N). The notion of the order \(\le\) plays a vital role in the theory of entropy and knowledge measures, as in classical ordered set theory.
The intuitionistic fuzzy set (IFS), proposed by Atanassov (1986) as an extension of the fuzzy set, is characterized by a hesitancy degree that evaluates the gap between 1 and the sum of the membership and non-membership grades. Several models have been developed in the intuitionistic fuzzy environment (Chen et al. 2016a, b; Fahmi et al. 2019; Mahmood et al. 2018; Jamkhaneh and Garg 2018; Rani et al. 2019; Zeng et al. 2019). Some functional forms in which the \(l_2\) metric appears as a particular case were studied by Lubbe (1981). Lad et al. (2015) revisited a long-standing question in information theory: whether there is a dual function for Shannon (1948) entropy. Szmidt et al. (2014) introduced a standard axiom system for the knowledge measure in IFS theory as a dual of the axiom system for intuitionistic fuzzy entropy. Guo (2016) defined a dual axiomatic system for the knowledge measure in IFS theory and developed a generalized knowledge measure. Zhang et al. (2019) presented some necessary and equivalent conditions for the classical order property on the basis of Szmidt and Kacprzyk's (2007) axioms. However, the knowledge measures of Nguyen (2015) and Guo (2016) may produce contradictory results due to their use of the distance measure. Therefore, Wang et al. (2018) introduced a new knowledge measure for IFSs to fill this gap.
Based on the given information, the attribute weights in MADM are usually classified as subjective and objective weights. For subjective information, the decision expert assigns weights to the attributes; the analytic hierarchy process (AHP) (Saaty 1980) and the Delphi method (Hwang and Lin 1987) are examples of methods for determining subjective attribute weights. On the other hand, objective attribute weights are evaluated by solving mathematical programming models, and one of the most important approaches is the entropy method (Shannon 1948). Chen and Li (2010) presented a model to assess attribute weights by utilizing intuitionistic fuzzy (IF) entropy in the IFS setting. In the literature, various authors have addressed MADM problems (Seikh and Mandal 2019; Joshi and Kumar 2017, 2018; Wang et al. 2017; Liu and You 2017a; Liu et al. 2017b; Zafar and Akram 2018; Wang et al. 2019).
In this paper, our aim is to investigate the duality among fuzzy information measures and present a new axiomatic definition of the knowledge measure. The main contributions of this study are summarized as follows: (i) a new knowledge measure for FSs is proposed and its validity is proved with the help of examples; (ii) an accuracy measure is introduced as a generalization of the proposed knowledge measure, which may be considered a dual of the inaccuracy of fuzzy sets; (iii) a score function is developed based on the combination of subjective and objective weights; (iv) the proposed knowledge measure is applied to MADM problems in the fuzzy environment.
The remainder of this article is structured as follows: Sect. 2 recalls some basic concepts of fuzzy sets. In Sect. 3, we propose a fuzzy knowledge measure and prove some of its major properties. Section 4 carries out a comparative analysis with the help of numerical examples. In Sect. 5, we introduce an accuracy measure for fuzzy sets and investigate its properties. In Sect. 6, the new knowledge measure is applied to MADM problems in the fuzzy environment and an example is employed to check the applicability of the proposed approach. Finally, conclusions and future scope are presented in Sect. 7.
2 Preliminaries
The following notions are used throughout. Let \(Z=\{z_1,z_2,\ldots ,z_n\}\) be a fixed set, and let FSs(Z) denote the class of all fuzzy sets of Z.
Definition 2.1
(Zadeh 1965) A fuzzy set M on Z is described as
$$\begin{aligned} M=\{(z,\mu _M(z))\,|\,z \in Z\}, \end{aligned}$$
where \(\mu _M:Z\rightarrow [0,1]\) denotes the membership function. The value \(\mu _M(z) \in [0,1]\) is the membership degree of \(z \in Z\) in M.
Definition 2.2
(Zadeh 1965) Let \(M,N \in FSs(Z)\). Then, we have
-
(a)
Complement: \({\bar{M}}=\{(z,1-\mu _M(z))|z \in Z\}.\)
-
(b)
Union: \(M\cup N =\{(z,\max(\mu _M(z),\mu _N(z)))\,|\,z \in Z\}\).
-
(c)
Intersection: \(M\cap N=\{(z,\min(\mu _M(z),\mu _N(z)))\,|\,z \in Z\}\).
Definition 2.3
(Kosko 1986) Let \(M \in FSs(Z)\). The crisp set nearest to M, denoted \(M^{{\rm near}}\), and the crisp set farthest from M, denoted \(M^{{\rm far}}\), are defined by
$$\begin{aligned} \mu _{M^{{\rm near}}}(z)={\left\{ \begin{array}{ll} 1, &{} \mu _M(z)\ge 0.5,\\ 0, &{} \mu _M(z)< 0.5, \end{array}\right. } \end{aligned}$$
and
$$\begin{aligned} \mu _{M^{{\rm far}}}(z)={\left\{ \begin{array}{ll} 0, &{} \mu _M(z)\ge 0.5,\\ 1, &{} \mu _M(z)< 0.5. \end{array}\right. } \end{aligned}$$
In FS theory, the degree of imprecision of a FS is measured by fuzzy entropy. De Luca and Termini (1972) gave the following axiomatic definition of a fuzzy entropy measure of FSs.
Definition 2.4
(De Luca and Termini 1972) Let \(M \in FSs(Z)\); then H(M) (the measure of fuzziness) should satisfy at least the following four properties:

- A-1 (Sharpness): H(M) is minimum if and only if M is a crisp set.
- A-2 (Maximality): H(M) is maximum if and only if M is the most fuzzy set.
- A-3 (Resolution): \(H(M)\ge H(M^*)\), if \(M^*\) is crisper than M.
- A-4 (Symmetry): \(H(M)=H({\bar{M}})\), where \({\bar{M}}\) is the complement set of M, i.e. \(\mu _{{\bar{M}}} (z_i)=1-\mu _M(z_i)\).
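For reference, an entropy satisfying axioms A-1 to A-4 is commonly written in the normalized Shannon form \(H(M)=-\frac{1}{n}\sum _{i=1}^{n}[\mu _M(z_i)\log_2\mu _M(z_i)+(1-\mu _M(z_i))\log_2(1-\mu _M(z_i))]\). A minimal sketch, assuming this standard form (the paper's own display is not reproduced here):

```python
import math

def dlt_entropy(mu):
    """Normalized De Luca-Termini entropy of a fuzzy set, given the list
    of membership degrees mu (values in [0, 1])."""
    def h(x):
        # Shannon-type term; 0*log(0) is taken as 0 by convention (A-1).
        if x in (0.0, 1.0):
            return 0.0
        return -(x * math.log2(x) + (1.0 - x) * math.log2(1.0 - x))
    return sum(h(x) for x in mu) / len(mu)

crisp = [0.0, 1.0, 1.0]        # A-1: entropy 0
most_fuzzy = [0.5, 0.5, 0.5]   # A-2: entropy 1
```

Since \(h(x)=h(1-x)\), the symmetry axiom A-4 holds automatically, and sharpened sets (A-3) receive lower values.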
3 A new knowledge measure for FSs
Entropy and knowledge measures are both important tools in the study of fuzzy sets. Fuzzy entropy estimates the degree of uncertainty, disorder and irregularity in FSs, whereas a fuzzy knowledge measure quantifies the degree of certainty, order and regularity. The entropy of a FS provides the average amount of ambiguity present in it; in a similar way, we can think about the average amount of knowledge present in a FS. This kind of knowledge measure is called the dual measure of fuzzy entropy.
According to Definition 2.4, an axiomatic definition for the fuzzy knowledge measure can be defined as follows:
Definition 3.1
A real function S: FSs(Z) \(\rightarrow R^+\) is called a knowledge measure if it satisfies the following four properties:

- S1: For all \(M\in FSs(Z)\), S(M) is maximum if and only if M is a crisp set.
- S2: S(M) is minimum if and only if M is the most fuzzy set.
- S3: \(S(M^{*})\ge S(M)\), where \(M^*\) is crisper than M.
- S4: \(S(M)=S({\bar{M}})\), where \({\bar{M}}\) denotes the complement of M.
Now, we introduce the following knowledge measure for a FS:
$$\begin{aligned} S(M)=\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _M^2(z_i)+(1-\mu _M(z_i))^2\right) \right] . \end{aligned}$$(1)
Remark 1
Throughout this study, fuzzy entropy and knowledge measures are considered in normalized form for comparative investigation.
Theorem 3.1
The measure S(M) defined in Eq. (1) is a valid knowledge measure for FSs.
Proof
To establish validity, the proposed knowledge measure defined in Eq. (1) must fulfill the axiomatic requirements given in Definition 3.1.
(S1.) First, suppose that \(S(M)=1\),
which is possible only when \(\mu _M(z_i)=0\) or 1.
Conversely, suppose that set M is crisp, i.e. \(\mu _M(z_i)=0\) or 1 for all \(z_i\in Z.\)
Then, for \(\mu _M(z_i)=0\) or 1, we have
Therefore, \(S(M)=1\) iff M is crisp, i.e. \(\mu _M(z_i)=0\) or 1 for all \(z_i\in Z.\)
(S2.) Let S(M) be minimum. Then, \(S(M)=0.\)
which holds if \(\mu _M(z_i)=0.5\) for all \(1\le i\le n\).
Conversely, assume that M is the most fuzzy set. Then,
Therefore, S(M) is minimum iff M is the most fuzzy set, i.e. \(\mu _M(z_i)=0.5\) for all \(z_i\in Z\).
(S3.) Let \(M^*\) be crisper than M,
i.e. (1) \(\mu _M(z_i)\le \mu _{M^*}(z_i)\), if \(\mu _M(z_i)\ge 0.5\) and
(2) \(\mu _M(z_i)\ge \mu _{M^*}(z_i)\), if \(\mu _M(z_i)< 0.5\). Differentiating (1) with respect to \(\mu _M(z_i)\), we have
Thus, S(M) is an increasing function of \(\mu _M(z_i)\) on (0.5,1] and a decreasing function of \(\mu _M(z_i)\) on [0,0.5).
This implies \(\mu _M(z_i)\le \mu _{M^*}(z_i)\)
and \(\mu _M(z_i)\ge \mu _{M^*}(z_i)\)
From Eqs. (2) and (3), we get \(S(M^*)\ge S(M)\), where \(M^*\) is crisper than M.
(S4.) We have
Therefore,
Hence, the validity of proposed knowledge measure S(M) for FSs is established.
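The axioms S1–S4 can also be checked numerically. A small sketch, assuming S(M) takes the closed form restated later in Eq. (14):

```python
import math

def knowledge(mu):
    """Proposed knowledge measure
    S(M) = log2[(2/n) * sum(mu_i^2 + (1 - mu_i)^2)],
    per the closed form of Eq. (14)."""
    n = len(mu)
    return math.log2((2.0 / n) * sum(x * x + (1.0 - x) ** 2 for x in mu))

# S1: crisp sets attain the maximum value 1.
# S2: the most fuzzy set (all 0.5) attains the minimum value 0.
# S3: a sharpened set scores at least as high.
# S4: complementation (mu -> 1 - mu) leaves S unchanged.
```

For instance, `knowledge([0.5])` returns 0 and `knowledge([1.0])` returns 1, matching the normalized range.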
Now, we investigate some major properties of the proposed measure (1).
Theorem 3.2
For any \(M,N \in FSs(Z),\) \(S(M\cup N)+S(M\cap N)=S(M)+S(N)\).
Proof
Let
where \(\mu _M(z)\) and \(\mu _N(z)\) represent the membership degrees of M and N, respectively.
If \(z\in Z_1\), then \(\mu _{M\cup N}(z)=\max\{\mu _M(z),\mu _N(z)\}=\mu _M(z)\) and \(\mu _{M\cap N}(z)=\min\{\mu _M(z),\mu _N(z)\}=\mu _N(z)\).
If \(z\in Z_2\), then \(\mu _{M\cup N}(z)=\max\{\mu _M(z),\mu _N(z)\}=\mu _N(z)\) and \(\mu _{M\cap N}(z)=\min\{\mu _M(z),\mu _N(z)\}=\mu _M(z)\).
Now, using Eq. (1) for all \(z_i\in Z\), we have
On simplifying, we get
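The case analysis above rests on the fact that, coordinate-wise, the pair \((\mu _{M\cup N}(z), \mu _{M\cap N}(z))\) is simply a rearrangement of \((\mu _M(z), \mu _N(z))\). A quick numeric sketch of this swap property:

```python
import random

random.seed(1)
for _ in range(100):
    m = [random.random() for _ in range(5)]   # membership degrees of M
    n = [random.random() for _ in range(5)]   # membership degrees of N
    union = [max(a, b) for a, b in zip(m, n)]
    inter = [min(a, b) for a, b in zip(m, n)]
    # At each z_i, {union_i, inter_i} equals {m_i, n_i} as a multiset,
    # which is exactly what the Z_1 / Z_2 case split expresses.
    for a, b, u, v in zip(m, n, union, inter):
        assert sorted([a, b]) == sorted([u, v])
```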
4 Comparative study
To demonstrate the performance and effectiveness of the proposed knowledge measure for FSs, some previously developed measures for FSs will be borrowed for comparison.
The fuzzy measure proposed by Yager (1979) is shown below.
The fuzzy measure proposed by Kosko (1986) is shown below.
The fuzzy measure proposed by Pal and Pal (1992) is shown below.
The fuzzy measure proposed by Li and Liu (2008) is shown below.
The fuzzy measure proposed by Hwang and Yang (2008) is shown below.
The fuzzy measure proposed by Joshi and Kumar (2018) is shown below.
Example 1
Let us consider four FSs defined on \(Z=\{z\}\): \(M_1 = \langle z, 0.5\rangle, M_2 = \langle z, 0.25\rangle, M_3 =\langle z, 0.13\rangle, M_4 = \langle z, 0.2\rangle.\) These FSs are utilized to compare the values of the recalled fuzzy measures together with the proposed measure S(M). Table 1 shows the comparative results.
It is clear from the outcomes depicted in Table 1 that the recalled fuzzy measures have some unreasonable cases (shown in bold type). For example, the measures \(H_{Y},H_{{\rm Pal}}\) cannot discriminate between the two different FSs \(M_1= \langle z, 0.5\rangle\) and \(M_4= \langle z, 1 \rangle\), i.e. \(H_{Y}(M_1)=H_{Y}(M_4)=1, H_{{\rm Pal}}(M_1)=H_{{\rm Pal}}(M_4)=0.74\), and \(H_{\alpha '(=0.5)}^{\beta '(=15)}(M_1)=H_{\alpha '(=0.5)}^{\beta '(=15)}(M_3)=0.421.\) The measure \(H_{{\rm Pal}}\) gives the same value 0.65 for the two different FSs \(M_1= \langle z, 0.5\rangle\) and \(M_3= \langle z, 0.25\rangle\). This example indicates that only the proposed fuzzy measure has no contradictory/unreasonable case. Based on the results of the new fuzzy measure, we can rank the FSs in accordance with decreasing fuzzy measure as follows: \(M_3 \succ M_2 \succ M_4 \succ M_1.\) The fuzzy measures \(H_{LL}, H_{HY}\) yield a similar but not identical ranking order. Thus, the proposed fuzzy measure is more effective and reasonable for distinguishing the entropy of FSs in comparison with the other fuzzy measures.
Example 2
Let \(M \in FS(Z)\). Then, the modifier for the FS M is given by
Let us consider a FS \(M_1\) of \(Z=\{3,4,5,6,7\}\) defined as
Considering the features of linguistic hedges, we take \(M_1\) as "good" on Z. We can compute the following FSs:
The hedges described by the above FSs are considered as: \(M_1^\frac{1}{2}\) may be described as “very bad”, \(M_1^2\) may be described as “extreme”, \(M_1^3\) may be described as “common”, \(M_1^4\) may be described as “perfect”.
Intuitively, from \(M_1^\frac{1}{2}\) to \(M_1^4\), the loss of information hidden in them becomes less, and the entropy conveyed by them decreases. So the following relation holds:
For comparison, the entropy measures \(H_{Y_1}(M_1),H_K(M_1),H_{{\rm Pal}}(M_1),H_{LL}(M_1)\), \(H_{HY}(M_1),H_{\alpha '}^{\beta '}(M_1)\) and the knowledge measure \(S(M_1)\) are employed. Table 2 presents the outcomes of the diverse measures for comparative analysis.
We can notice that the FS \(M_1\) obtains more entropy than the FS \(M_1^\frac{1}{2}\) if the entropy measures \(H_{Y_1}(M_1)\), \(H_K(M_1)\) and \(H_{LL}(M_1)\) are implemented; the ranking outcomes are listed below.
It is clear that these ranking outcomes do not satisfy the sequence given in Eq. 6, whereas the entropy measures \(H_{{\rm Pal}}(M_1)\), \(H_{HY}(M_1)\), \(H_{\alpha '}^{\beta '}(M_1)\) and \(S(M_1)\) perform well. These results indicate that the measures \(H_{Y_1}\), \(H_K\) and \(H_{LL}\) are not suitable for discriminating the uncertainty information of FSs with linguistic terms. Moreover, the proposed knowledge measure performs much better by satisfying Eq. 7.
Example 3
Let us consider another FS \(M_2\) defined on Z as:
We calculate \(M_2^\frac{1}{2}\), \(M_2^2\),\(M_2^3\) and \(M_2^4.\) Now we compare only \(H_{{\rm Pal}}(M_2)\),\(H_{HY}(M_2)\),\(H_{\alpha '}^{\beta '}(M_2)\) and \(S(M_2)\) (Table 3).
Moreover, the results produced by the entropy measures \(H_{{\rm Pal}}(M_2), H_{\alpha '}^{\beta '}(M_2)\) are also unreasonable, as shown in the equations below.
Therefore, the entropy measures \(H_{{\rm Pal}}(M_2), H_{\alpha '}^{\beta '}(M_2)\) are not suitable for differentiating the information conveyed by FSs, whereas \(H_{HY}(M_2)\) and \(S(M_2)\) satisfy the ranking order. This example once again indicates the effectiveness of \(H_{HY}(M_2)\) and the proposed \(S(M_2)\); in particular, \(S(M_2)\) satisfies the ranking order in Eq. 7. From the above analysis, we can conclude that the performance of the proposed measure is much better than that of the others.
In the subsequent section, the knowledge measure is generalized.
5 Fuzzy accuracy measure
5.1 Background
The notion of inaccuracy was introduced by Kerridge (1961) and may be thought of as a generalization of Shannon (1948) entropy. It has been extensively used as a model for measuring error in experimental results. Suppose that an expert states the probabilities of the various possible outcomes of an experiment. The statement can lack exactness in two ways: either the expert does not have sufficient information, so the statement is vague, or some of the information the expert has is incorrect. All problems of statistical inference involve making statements that may be inaccurate in either or both of these ways. Kerridge (1961) proposed an inaccuracy measure that accounts for both types of error. Suppose that the expert assumes that the probability of the \({i}\hbox{th}\) event is \(q_i\) when the true probability is \(p_i\). Then, the inaccuracy of the observer, as developed by Kerridge (1961), can be measured as:
$$\begin{aligned} I_{{\rm Kerridge}}(P';Q')=-\sum _{i=1}^{n}p_i\log q_i, \end{aligned}$$(8)
where \(P' = (p_1, p_2,\ldots , p_n)\) and \(Q' = (q_1, q_2,\ldots , q_n)\) are the discrete probability distributions with \(\displaystyle \sum\nolimits_{i=1}^{n}p_i=1=\sum\nolimits_{i=1}^{n}q_i.\)
If \(p_i=q_i\), then Eq. 8 recovers the Shannon entropy:
$$\begin{aligned} H_{{\rm Shannon}}(P')=-\sum _{i=1}^{n}p_i\log p_i. \end{aligned}$$
For the probability distributions \(P'\) and \(Q'\), the Kullback and Leibler (1951) divergence measure is defined as:
$$\begin{aligned} D_{KL}(P',Q')=\sum _{i=1}^{n}p_i\log \frac{p_i}{q_i}. \end{aligned}$$
We have
which implies \(I_{{\rm Kerridge}}(P',Q')=D_{KL}(P',Q')+H_{{\rm Shannon}}(P').\)
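The decomposition \(I_{{\rm Kerridge}}=D_{KL}+H_{{\rm Shannon}}\) is easy to verify numerically. A sketch, assuming base-2 logarithms and strictly positive distributions:

```python
import math

def shannon(p):
    # H(P') = -sum p_i log2 p_i
    return -sum(pi * math.log2(pi) for pi in p)

def kl(p, q):
    # D_KL(P', Q') = sum p_i log2(p_i / q_i)
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q))

def kerridge(p, q):
    # Kerridge inaccuracy I(P'; Q') = -sum p_i log2 q_i
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
assert abs(kerridge(p, q) - (kl(p, q) + shannon(p))) < 1e-12
```

When `p == q`, the divergence term vanishes and the inaccuracy reduces to the Shannon entropy, as stated above.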
For all \(M,N \in FSs(Z)\), corresponding to Eq. 9, Verma and Sharma (2011) defined the fuzzy inaccuracy measure as:
where D(M, N) is Verma and Sharma (2011) fuzzy divergence and H(M) is De Luca and Termini (1972) fuzzy entropy.
When \(M=N\), we get
Thus, I(M; N) is a generalization of the De Luca and Termini fuzzy entropy H(M).
Now, we introduce a new dual of inaccuracy, which we call the fuzzy accuracy of a FS N relative to M, as follows:
Also, for \(M=N\), we have
Therefore, \(I_{{\rm Acc}}(M;N)\) can be considered a generalization of the proposed fuzzy knowledge measure S(M) and is called the fuzzy accuracy measure.
Keeping the above concepts in mind, we discuss some properties of the proposed fuzzy accuracy measure from a mathematical viewpoint in the next subsection.
5.2 Properties of the Proposed Accuracy Measure
Theorem 5.1
\(I_{{\rm Acc}}(M;N)\) is maximum if M and N are crisp sets and \(M=N, \;\forall \; M,N \in FSs(Z).\)
Proof
Since we are computing the fuzzy accuracy of N relative to M, the maximum accuracy that N may attain equals the amount of knowledge contained in M, that is, S(M). Furthermore, S(M) attains the largest value 1 according to our definition.
Suppose \(\mu _{M}(z_i)=\mu _{N}(z_i)=1\) or 0, then we have
Hence, \(I_{{\rm Acc}}(M;N)=1\) if \(\mu _{M}(z_i)=\mu _{N}(z_i)=0\) or 1. We have
Now, for \(\mu _{M}(z_i)=\mu _{N}(z_i)\)
Therefore, \(I_{{\rm Acc}}(M;N)=S(M)\) if \(\mu _{M}(z_i)=\mu _{N}(z_i).\) This proves the theorem.
Theorem 5.2
Let \(M, N, C \in FSs(Z)\) be such that \(M\subset N \subset C\); then,

1. \(I_{{\rm Acc}}(M;N)\le I_{{\rm Acc}}(M;C)\) for \(\mu _{M}(z_i)\ge \frac{1}{2},\)
2. \(I_{{\rm Acc}}(M;N)\ge I_{{\rm Acc}}(M;C)\) for \(\mu _{M}(z_i)\le \frac{1}{2}.\)
Proof
Taking \(\mu _{M}(z_i)=\alpha _i\) and \(\mu _{N}(z_i)=\beta _i\), we have
Therefore,
and
and \(\frac{\partial I_{{\rm Acc}}}{\partial \beta _i}\ge 0\) for \(\alpha _i \ge 0.5.\)
Hence, \(I_{{\rm Acc}}\) is increasing in the first and second arguments if \(0.5\le \alpha _i \le \beta _i\), and decreasing if \(\alpha _i\le \beta _i \le 0.5.\) \(\square\)
Theorem 5.3
Let M and N be any two FSs; then,

1. \(I_{{\rm Acc}}(M;N)=I_{{\rm Acc}}({\bar{M}};{\bar{N}}),\)
2. \(I_{{\rm Acc}}(M;{\bar{M}})=I_{{\rm Acc}}({\bar{M}};M),\)
3. \(I_{{\rm Acc}}(M;{\bar{N}})=I_{{\rm Acc}}({\bar{M}};N),\)
4. \(I_{{\rm Acc}}(M;N)+I_{{\rm Acc}}({\bar{M}};N)=I_{{\rm Acc}}({\bar{M}};{\bar{N}})+I_{{\rm Acc}}(M;{\bar{N}}).\)
Proof
The proofs of these properties are straightforward and follow by direct calculation.
Next, we obtain a knowledge measure with the help of the accuracy measure.
Theorem 5.4
\(\displaystyle S(M)=\frac{I_{{\rm Acc}}(M,M^{{\rm near}})}{I_{{\rm Acc}}(M,M^{{\rm far}})}\) is a valid knowledge measure.
Proof
We prove that S(M) holds the axiomatic requirements given in Definition 3.1.
(S1.) Let M be a crisp (non-fuzzy) set, so that \(M=M^{{\rm near}}.\)
Now,
Case 1: When \(\mu _{M}(z_i)\ge 0.5\), then in view of Eq. 12 and Definition 2.3, we have \(I_{{\rm Acc}}(M^{{\rm near}};M^{{\rm near}})=1.\)
Case 2: When \(\mu _{M}(z_i) < 0.5\), then in view of Eq. 12 and Definition 2.3, we have \(I_{{\rm Acc}}(M^{{\rm near}};M^{{\rm near}})=1.\)
For both cases, similarly, we prove that \(I_{{\rm Acc}}(M;M^{{\rm far}})=1\). Therefore,
Conversely, suppose that \(S(M)=1.\) So, \(I_{{\rm Acc}}(M;M^{{\rm near}})=1.\)
That is, \(M=M^{{\rm near}}.\)
Hence, M is a non-fuzzy or crisp set.
(S2.) Let M be the most fuzzy set, that is, \(\mu _{M}(z_i)=0.5 \;\forall \; z_i \in Z.\)
Then, \(\mu _{M^{{\rm near}}}(z_i)=1\). So, \(I_{{\rm Acc}}(M,M^{{\rm near}})=0.\)
Therefore, \(\displaystyle S(M)=\frac{I_{{\rm Acc}}(M,M^{{\rm near}})}{I_{{\rm Acc}}(M,M^{{\rm far}})}=0.\)
Conversely, suppose \(S(M)=0\); then, we have \(M=M_F\)= most fuzzy set.
(S3.) Let \(M^*\) be a sharpened version of M. Then,
It is evident that
and hence \(I_{{\rm Acc}}(M^*,M^{{\rm near}})=I_{{\rm Acc}}(M^*,M^{*{\rm near}}),\)
and \(I_{{\rm Acc}}(M^*,M^{{\rm far}})=I_{{\rm Acc}}(M^*,M^{*{\rm far}}).\)
We have \(M\subset M^*\subset M^{{\rm near}}\) for \(\mu _{M}(z_i)\ge 0.5\).
From Theorem 5.2, we have \(I_{{\rm Acc}}(M^*,M^{*{\rm near}})\ge I_{{\rm Acc}}(M,M^{{\rm near}}).\)
Similarly, we can prove that
(S4.) Using Theorem 5.3, we have
Also, notice that \({\bar{M}}^{{\rm near}}=\bar{M^{{\rm near}}}\) and \({\bar{M}}^{{\rm far}}=\bar{M^{{\rm far}}}.\)
Since \(I_{{\rm Acc}}(M,M^{{\rm near}})={I_{{\rm Acc}}({\bar{M}},\bar{M^{{\rm near}}})}\) and \(I_{{\rm Acc}}(M,M^{{\rm far}})={I_{{\rm Acc}}({\bar{M}},\bar{M^{{\rm far}}})}\).
Therefore,
Hence, S(M) is a valid knowledge measure.
The developed accuracy measure can be employed to measure errors in pattern recognition, computer vision and memory research. In the real world, accuracy/similarity/dissimilarity data are found in many forms, such as ratings of pairs, substitution errors, correlations between occurrences and sortings of objects. Applications of the proposed accuracy measure can be further implemented for decision-making in real-world problems.
6 Application of the proposed knowledge measure in solving MADM model
In this section, a new model, based on the proposed knowledge measure, for solving the decision-making problems, has been demonstrated under the FSs environment. The improved score function is used to make the ranking results more accurate and flexible. Moreover, a numerical example has been considered to validate the approach.
6.1 Proposed approach
In a decision-making problem, let the set of all alternatives be \(G'=\{G'_i: 1\le i\le m\}\) and the finite set of all attributes be \(A'=\{A'_j:1\le j\le n\}\). The attribute weight vector is \(w=(w_1,w_2,\ldots ,w_n)^T\) such that \(\textstyle \sum _{j=1}^{n}w_j=1\). Owing to the limitations of the DM's knowledge and the paucity of time and experience, the preference or evaluation information provided for each attribute is represented in fuzzy form. The fuzzy decision matrix provided by the DMs is defined as:
-
1.
If the attribute weights are partially or completely known, then MADM techniques play an important role in solving problems based on fuzzy information under multiple attributes. In practical situations, however, the attribute weights are often partially known or completely unknown (Wei 2008). Therefore, the attribute weights must be determined before solving the MADM problem. Weights given directly by decision makers are subjective, and partial information about attribute weights cannot be exploited sufficiently in this way. Hence, we propose a new method to compute the weight vector based on the proposed measure: we take the whole amount of information as the objective function and, by minimizing the sum of all information under all attributes, construct the following model.
$$\begin{aligned}&{\text{Min}}\;T=\sum _{j=1}^{n}\sum _{i=1}^{m}S(\nu _{ij})\, w_j\nonumber \\& {\text{s.t.}} {\left\{ \begin{array}{ll} \sum _{j=1}^{n}w_j=1\\ w_j\ge 0,j=1,2,\ldots ,n\\ w\in H \end{array}\right. } \end{aligned}$$(13)where H denotes the set of incomplete information about the attribute weights and \(S(\nu _{ij})\) is the information measure calculated by our proposed measure. If the attribute weights are completely unknown, then we determine the knowledge amount for each criterion \(A'_j,1\le j\le n\), using Formula (14).
$$\begin{aligned} \begin{aligned} S(M)=\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _M^2(z_i)+(1-\mu _M(z_i))^2\right) \right] . \end{aligned} \end{aligned}$$(14) -
2.
Calculate the weights \(w_j\) for each criterion \(A'_j\) as given in (15),
$$\begin{aligned} w_j=\frac{\sum _{i=1}^{m}S(\nu _{ij})}{\sum _{j=1}^{n}\sum _{i=1}^{m}S(\nu _{ij})}. \end{aligned}$$(15) -
3.
Now, we develop a score function \(S(G'_i)\) as follows:
$$\begin{aligned} S(G'_i) =\sum _{j=1}^{n}\nu _{ij}[\xi w_j^s+w_j^o(1-\xi )]^T, \end{aligned}$$(16)where \([\xi w_j^s+w_j^o(1-\xi )]^T\) in Eq. 16 combines the subjective (\(w_j^s\)) and objective (\(w_j^o\)) weights. The parameter \(\xi \in (0,1)\) denotes the relative importance of the subjective weights versus the objective weights and can take any value in (0,1).
-
4.
The alternative with the highest score is the most desirable choice.
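The steps above can be sketched end to end. The decision matrix below is purely hypothetical (the paper's data table is not reproduced here), the subjective weights are those used later in Sect. 6.2, and \(S(\nu _{ij})\) is taken as Eq. (14) with n = 1:

```python
import math

def s(x):
    # Knowledge of a single fuzzy value (Eq. 14 with n = 1).
    return math.log2(2.0 * (x * x + (1.0 - x) ** 2))

def objective_weights(matrix):
    # Eq. (15): w_j proportional to the column-wise knowledge totals.
    cols = list(zip(*matrix))
    totals = [sum(s(v) for v in col) for col in cols]
    grand = sum(totals)
    return [t / grand for t in totals]

def scores(matrix, w_subj, w_obj, xi):
    # Eq. (16): score of each alternative under the integrated weights.
    combined = [xi * ws + (1.0 - xi) * wo for ws, wo in zip(w_subj, w_obj)]
    return [sum(v * w for v, w in zip(row, combined)) for row in matrix]

# Hypothetical 5 x 4 fuzzy decision matrix (rows: alternatives G'_1..G'_5).
matrix = [
    [0.5, 0.6, 0.3, 0.4],
    [0.4, 0.7, 0.5, 0.3],
    [0.8, 0.6, 0.9, 0.7],
    [0.6, 0.5, 0.7, 0.6],
    [0.7, 0.4, 0.6, 0.8],
]
w_subj = [0.2, 0.4, 0.3, 0.1]       # expert-given weights (Sect. 6.2)
w_obj = objective_weights(matrix)   # weights-completely-unknown case
ranked = sorted(range(len(matrix)), # step 4: highest score first
                key=lambda i: -scores(matrix, w_subj, w_obj, 0.5)[i])
```

Varying `xi` between 0 and 1 shifts the balance between subjective and objective weights, mirroring the sensitivity analysis reported in Sect. 6.2.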
6.2 Numerical example and discussion
Suppose a company wishes to invest a sum of money in a project. Based on the complexities of economic development, there are five candidate companies (alternatives): (\(G'_1\)) a washing machine company, (\(G'_2\)) a software company, (\(G'_3\)) a chemical company, (\(G'_4\)) a computer company and (\(G'_5\)) an LCD company.
The investment company evaluates these five companies using the following four attributes, which are: (\(A'_1\)): Capital gain, (\(A'_2\)): Investment risk, (\(A'_3\)): Social and political impact, (\(A'_4\)): Environmental impact.
To find the desirable alternative, based on their knowledge and experience in the problem domain, the experts assign the following weights: \(\displaystyle w_1^o=0.2,w_2^o=0.4,w_3^o=0.3,w_4^o=0.1.\)
Case 1 The information regarding the attribute weights is partially known. This partial information is denoted by the set H,
The overall information of each attribute can be determined as follows:
The programming model to find the attribute weights using Eq. 13 can be constructed as: Min \(T=0.102w_1+0.055w_2+0.013w_3+0.013w_4\)
Then, the weights vector can be obtained as:
Determining the score using \(\sum _{j=1}^{4}\nu (G'_i,A'_j) [\xi w_j^s+w_j^o(1-\xi )]^T, i=1,2,\ldots ,5,\) we compute \(S(G'_i), i=1,2,\ldots ,5,\) for various values of \(\xi\); the results are summarized in Table 4 and Fig. 1.
Using the above results, we have obtained the following order ranking of the alternatives.
- For \(\xi =0.1\): \(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1).\)
- For \(\xi =0.2\): \(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1).\)
- For \(\xi =0.3\): \(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1).\)
- For \(\xi =0.4\): \(S(G'_3)\succ S(G'_5)\succ S(G'_4)\succ S(G'_2)\succ S(G'_1).\)
- For \(\xi =0.5\): \(S(G'_3)\succ S(G'_5)\succ S(G'_4)\succ S(G'_2)\succ S(G'_1).\)
- For \(\xi =0.6\): \(S(G'_3)\succ S(G'_5)\succ S(G'_4)\succ S(G'_2)\succ S(G'_1).\)
- For \(\xi =0.7\): \(S(G'_3)\succ S(G'_5)\succ S(G'_4)\succ S(G'_2)\succ S(G'_1).\)
- For \(\xi =0.8\): \(S(G'_3)\succ S(G'_5)\succ S(G'_4)\succ S(G'_2)\succ S(G'_1).\)
- For \(\xi =0.9\): \(S(G'_3)\succ S(G'_5)\succ S(G'_4)\succ S(G'_2)\succ S(G'_1).\)
From the above ranking results, it is clear that \(G'_3\) is the most suitable alternative; for small \(\xi\) the sequence of alternatives is \(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1)\). The same problem was solved using the method proposed by Xia and Xu (2012) based on \(E_M^{1.5}\) and \(CE_M^{1.5}\); the attribute weight vector obtained is \(w=(0.29,0.27,0.15,0.29)^T\).
The resulting ranking of the alternatives is \(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1)\), which is the same as that obtained by the proposed method. Clearly, the output of the proposed method is reliable and effective.
Case 2 When there is no information about the attribute weights, the weights can be obtained by the proposed approach; the total knowledge amount of all fuzzy information so obtained is listed below:
The attribute weights can be obtained as \(w_1=K_1/\sum _{i=1}^{4}K_i=0.2677\); \(w_2=K_2/\sum _{i=1}^{4}K_i=0.2361\); \(w_3=K_3/\sum _{i=1}^{4}K_i=0.2565\); \(w_4=K_4/\sum _{i=1}^{4}K_i=0.2397\). Therefore, the weight vector is \(w=(0.268,0.236,0.257,0.24)^T.\) Computing the score using \(\sum _{j=1}^{4}\nu (G'_i,A'_j)[\xi w_j^s+w_j^o(1-\xi )]^T, i=1,2,\ldots ,5,\) we compute \(S(G'_i), i=1,2,\ldots ,5,\) for diverse values of \(\xi\); the results are summarized in Table 5 and Fig. 2.
Using the above results, we obtain the following order of ranks of the alternatives \(G'_i,(i=1,2,3,4,5)\).
For every \(\xi =0.1,0.2,\ldots ,0.9\): \(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1).\)
Table 5 and Fig. 2 display the scores of the alternatives for different values of the coefficient \(\xi .\) From the above discussion, it is clear that \(G'_3\) is the most suitable alternative. For comparative study, we also solved the same problem using the method of Xia and Xu (2012); the resulting weight vector is \(w=(0.2577,0.2161,0.2765,0.2497)^T\) and the ranking of the alternatives is \(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1)\). For further comparison, the same example was solved using the method suggested by Singh et al. (2019), which gives the preferential sequence \(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1)\), with \(G'_3\) being the best alternative. It is natural to ask which method is more accurate and reliable. All of the above methods agree that \(G'_3\) is the best choice for investment. Even though the weight vector acquired by the proposed method differs from that of Xia and Xu (2012), this distinction has no impact on selecting the most effective company in which to invest. In general, the solution of a multi-attribute decision-making problem only needs to provide the best alternative, and the ordering of the other alternatives is beyond the ultimate goal of an MADM problem.
This example shows that the proposed models for solving multi-attribute decision-making problems are capable of obtaining reasonable results. Compared with the methods of Xia and Xu (2012) and Singh et al. (2019), the proposed model is more concise and less complicated, which reduces the computational burden. Thus, the superiority of the new knowledge measure over the existing knowledge measures is confirmed.
A discussion on the value of \(\xi\): Setting \(\xi =0.1\), i.e. decreasing the importance of the subjective weights (\(w_j^s\)) and correspondingly increasing the importance of the objective weights (\(w_j^o\)), yields the outcomes shown in Tables 4 and 5. Giving equal importance to both objective and subjective weights, i.e. \(\xi =0.5\), the preference sequence obtained is \(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1)\), with \(S(G'_3)\) as the best alternative. Conversely, setting \(\xi =0.9\), i.e. increasing the importance of \(w_j^s\) relative to \(w_j^o\), produces the outputs also shown in Tables 4 and 5. A close examination of Tables 4 and 5 demonstrates that, as the preference given to the objective and subjective weights shifts, the best alternative remains unaltered, although the preference sequence of the other alternatives may change.
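The sensitivity analysis above can be sketched numerically. The snippet below combines subjective and objective attribute weights as \(w = \xi w^s + (1-\xi)w^o\) and ranks alternatives by their weighted scores; all numerical values (the weight vectors `w_s`, `w_o` and the score matrix `S`) are hypothetical placeholders for illustration, not the data of the paper's example.

```python
import numpy as np

# Hypothetical illustration data: subjective weights w_s, objective weights
# w_o (e.g. derived from a knowledge measure), and a score matrix S whose
# rows are five alternatives G'_1..G'_5 over four attributes.
w_s = np.array([0.30, 0.20, 0.30, 0.20])   # assumed subjective weights
w_o = np.array([0.26, 0.22, 0.28, 0.24])   # assumed objective weights
S = np.array([
    [0.40, 0.45, 0.35, 0.42],
    [0.50, 0.48, 0.46, 0.49],
    [0.80, 0.78, 0.82, 0.79],
    [0.70, 0.68, 0.72, 0.69],
    [0.60, 0.58, 0.62, 0.61],
])

def rank_alternatives(xi):
    """Combine weights as w = xi*w_s + (1 - xi)*w_o and rank by weighted score."""
    w = xi * w_s + (1 - xi) * w_o          # convex combination keeps sum(w) = 1
    scores = S @ w                          # weighted score of each alternative
    # Descending order of scores gives the preference sequence (1-based indices).
    return [int(i) + 1 for i in np.argsort(-scores)]

# With this data the best alternative is stable across xi, mirroring the
# paper's observation that shifting xi leaves the top choice unaltered.
for xi in (0.1, 0.5, 0.9):
    print(xi, rank_alternatives(xi))
```

Because each convex combination of normalized weight vectors is again normalized, varying \(\xi\) only reweights the attributes; with uniformly ordered score rows, the top alternative cannot change, which is the behaviour reported in Tables 4 and 5.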
7 Conclusions
In this paper, we have introduced and studied a new knowledge measure in a fuzzy environment. The mathematical properties of the proposed knowledge measure are also discussed. A comparative study was presented to demonstrate the usefulness and feasibility of the developed knowledge measure over the existing ones. Besides this, a fuzzy inaccuracy measure based on the proposed knowledge measure is also developed and validated. Further, a new decision-making method is proposed for the cases in which the attribute weight vector is partially known or completely unknown. To calculate the attribute weights, we have constructed a programming model based on the developed knowledge measure. Finally, an example has been considered to demonstrate the validity and feasibility of the proposed decision-making process. A key advantage of the proposed approach is its computational simplicity for fuzzy sets.
In future, the developed MADM model can be extended to IFSs, interval-valued intuitionistic fuzzy sets and hesitant fuzzy sets. Apart from this, the developed inaccuracy measure can be applied to other realistic problems such as social network analysis, medical diagnosis, remote sensing and speech recognition.
References
Atanassov KT (1986) Intuitionistic fuzzy sets. Fuzzy Sets Syst 20(1):87–96
Chen SJ, Chen SM (2001) A new method to measure the similarity between fuzzy numbers. IEEE Int Conf Fuzzy Syst 3:1123–1126
Chen SM, Chen SW (2014) Fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and the probabilities of trends of fuzzy logical relationships. IEEE Trans Cybern 45(3):391–403
Chen TY, Li CH (2010) Determining objective weights with intuitionistic fuzzy entropy measures: a comparative analysis. Inf Sci 180(21):4207–4222
Chen SM, Manalu GM, Pan JS, Liu HC (2013) Fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and particle swarm optimization techniques. IEEE Trans Cybern 43(3):1102–1117
Chen SM, Cheng SH, Lan TC (2016a) A novel similarity measure between intuitionistic fuzzy sets based on the centroid points of transformed fuzzy numbers with applications to pattern recognition. Inf Sci 343:15–40
Chen SM, Cheng SH, Lan TC (2016b) Multicriteria decision making based on the TOPSIS method and similarity measures between intuitionistic fuzzy values. Inf Sci 367:279–295
De Luca A, Termini S (1972) A definition of a non-probabilistic entropy in the setting of fuzzy sets theory. Inf Control 20(4):301–312
Fahmi A, Amin F, Ullah H (2019) Multiple attribute group decision making based on weighted aggregation operators of triangular neutrosophic cubic fuzzy numbers. Granul Comput 2019:1–3
Guo K (2016) Knowledge measure for Atanassov’s intuitionistic fuzzy sets. IEEE Trans Fuzzy Syst 24(5):1072–1078
Higashi M, Klir GJ (1982) On measures of fuzziness and fuzzy complements. Int J Gen Syst 8(3):169–180
Hwang CL, Lin MJ (1987) Group decision making under multiple criteria: methods and applications. Springer, Berlin
Hwang CH, Yang MS (2008) On entropy of fuzzy sets. Int J Uncertain Fuzzy Knowl-Based Syst 16:519–527
Jamkhaneh EB, Garg H (2018) Some new operations over the generalized intuitionistic fuzzy sets and their application to decision making process. Granul Comput 3(2):111–122
Joshi R, Kumar S (2017) An \((R, S)\)-norm fuzzy information measure with its applications in multiple-attribute decision-making. Comput Appl Math 37(3):2943–2964
Joshi R, Kumar S (2018) A novel fuzzy decision-making method using entropy weights-based correlation coefficients under intuitionistic fuzzy. Int J Fuzzy Syst 21(1):232–242
Kerridge DF (1961) Inaccuracy and inference. J R Stat Soc 23:184–194
Kosko B (1986) Fuzzy entropy and conditioning. Inf Sci 40(2):165–174
Kullback S, Leibler RA (1951) On information and sufficiency. Ann Math Stat 22(1):79–86
Lad F, Sanfilippo G, Agro G (2015) Extropy: complementary dual of entropy. Stat Sci 30(1):40–58
Li P, Liu B (2008) Entropy of credibility distributions for fuzzy variables. IEEE Trans Fuzzy Syst 16:123–129
Liu P, You X (2017) Probabilistic linguistic TODIM approach for multiple attribute decision-making. Granul Comput 2(4):333–342
Liu P, Chen SM, Liu J (2017) Multiple attribute group decision making based on intuitionistic fuzzy interaction partitioned Bonferroni mean operators. Inf Sci 411:98–121
Lubbe JCA (1981) A generalized probabilistic theory of the measurement of certainty and information [Ph.D. thesis]. Department of Electrical Engineering, Delft University of Technology, Delft, The Netherlands
Mahmood T, Liu P, Ye J, Khan Q (2018) Several hybrid aggregation operators for triangular intuitionistic fuzzy set and their application in multicriteria decision making. Granul Comput 3(2):153–168
Nguyen H (2015) A new knowledge-based measure for intuitionistic fuzzy sets and its application in multiple attribute group decision making. Expert Syst Appl 42(22):8766–8774
Pal NR, Pal SR (1992) Higher order fuzzy entropy and hybrid entropy of a set. Inf Sci 61(3):211–231
Rani P, Jain D, Hooda DS (2019) Extension of intuitionistic fuzzy TODIM technique for multi-criteria decision making method based on shapley weighted divergence measure. Granul Comput 4(3):407–420
Saaty TL (1980) The analytic hierarchy process. McGraw-Hill, New York
Seikh MR, Mandal U (2019) Intuitionistic fuzzy Dombi aggregation operators and their application to multiple attribute decision-making. Granul Comput 2019:1–6. https://doi.org/10.1007/s41066-019-00209-y
Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27(3):379–423
Singh S, Lalotra S, Sharma S (2019) Dual concepts in fuzzy theory: entropy and knowledge measure. Int J Intell Syst 34(5):1034–1059
Szmidt E, Kacprzyk J, Bujnowski P (2014) How to measure amount of knowledge conveyed by Atanassov’s intuitionistic fuzzy sets. Inf Sci 257:276–285
Szmidt E, Kacprzyk J (2007) Some problems with entropy measures for the Atanassov intuitionistic fuzzy sets. In: International workshop on fuzzy logic and applications, pp 291–297
Verma RK, Sharma BD (2011) A measure of inaccuracy between two fuzzy sets. Cybern Inf Technol 11(2):13–23
Wang C, Fu X, Meng S, He Y (2017) Multi-attribute decision-making based on the SPIFGIA operators. Granul Comput 2(4):321–331
Wang HD, Pan XH, He SF (2019) A new interval type-2 fuzzy VIKOR method for multi-attribute decision making. Int J Fuzzy Syst 21(1):145–156
Wang G, Zhang J, Song Y, Li Q (2018) An entropy-based knowledge measure for Atanassov’s intuitionistic fuzzy sets and its application to multiple attribute decision making. Entropy 20(12):981
Wei GW (2008) Maximizing deviation method for multiple attribute decision making in intuitionistic fuzzy setting. Know-Based Syst 21(8):833–836
Xia M, Xu Z (2012) Entropy/cross-entropy-based group decision making under intuitionistic fuzzy environment. Inf Fusion 13(1):31–47
Xuecheng L (1992) Entropy, distance measure and similarity measure of fuzzy sets and their relations. Fuzzy Sets Syst 52(3):305–318
Yager RR (1979) On the measure of fuzziness and negation. Part 1: Membership in the unit interval. Int J Gen Syst 5(4):221–229
Zadeh LA (1965) Fuzzy sets. Inf Control 8(3):338–353
Zafar F, Akram M (2018) A novel decision-making method based on rough fuzzy information. Int J Fuzzy Syst 20(3):1000–1014
Zeng S, Chen SM, Kuo LW (2019) Multi-attribute decision making based on novel score function of intuitionistic fuzzy values and modified VIKOR method. Inf Sci 488:76–92
Zhang Z, Yuan S, Ma C, Xu J, Zhang J (2019) A parametric method for knowledge measure of intuitionistic fuzzy sets. In: Advances in computer communication and computational sciences, vol 924, pp 199–210
Acknowledgements
The authors express their sincere gratitude to the editor and the anonymous reviewers for their valuable comments, which helped improve the quality of the paper.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Arya, V., Kumar, S. Knowledge measure and entropy: a complementary concept in fuzzy theory. Granul. Comput. 6, 631–643 (2021). https://doi.org/10.1007/s41066-020-00221-7