1 Introduction

Fuzzy set (FS) theory, introduced by Zadeh (1965), is an important tool for measuring the degree of fuzziness. A fuzzy set assigns to each element of a set only a membership degree in the interval [0,1], and can thus be described in terms of “belong to” and “not belong to”. Xuecheng (1992) presented axiomatic definitions of entropy, distance and similarity measures of FSs and systematically studied the basic relations between these measures. Entropy, defined as a measure of uncertain information or fuzziness in FS theory, has been widely applied in finance, information technology, communication and decision-making. De Luca and Termini (1972) suggested the fuzzy entropy of FSs on the basis of the Shannon entropy function and defined a set of axioms for it. Yager (1979) obtained an entropy measure of an FS from the distance between a fuzzy set and its complement, and Higashi and Klir (1982) later extended this concept to a general class of fuzzy complements. Various authors have applied FSs in distinct fields such as pattern recognition, decision-making and image processing (Chen and Chen 2001; Chen et al. 2013; Chen and Chen 2014; Fahmi et al. 2019). A knowledge measure is usually regarded as the dual of fuzzy entropy or uncertainty; an entropy measure alone cannot capture all uncertainties in FSs. A knowledge measure quantifies the difference of a fuzzy set from the most fuzzy set. It may be noted that knowledge and entropy measures satisfy the following condition: if M \(\le\) N, then knowledge(M) \(\ge\) knowledge(N) and entropy(M) \(\le\) entropy(N). The order \(\le\) plays a vital role in the theory of entropy and knowledge measures, as it does in classical ordered set theory.

The intuitionistic fuzzy set (IFS), proposed by Atanassov (1986) as an extension of the fuzzy set, is characterized by a hesitancy degree that evaluates the gap between 1 and the sum of the membership and non-membership grades. Several models have been developed in the intuitionistic fuzzy environment (Chen et al. 2016a, b; Fahmi et al. 2019; Mahmood et al. 2018; Jamkhaneh and Garg 2018; Rani et al. 2019; Zeng et al. 2019). Some functional forms in which the \(l_2\) metric appears as a particular case were studied by Lubbe (1981). Lad et al. (2015) revisited a long-standing inquiry in information theory, namely whether there is a dual counterpart of Shannon (1948) entropy. Szmidt et al. (2014) introduced a standard axiom system of knowledge measure in IFS theory as a dual axiom system of intuitionistic fuzzy entropy. Guo (2016) defined a dual axiomatic system of knowledge measure for IFSs and developed a generalized knowledge measure. Zhang et al. (2019) presented some necessary and equivalent conditions for the classical order property on the basis of Szmidt and Kacprzyk’s (2007) axioms. However, the knowledge measures of Nguyen (2015) and Guo (2016) may yield contradictory results owing to their use of a distance measure. Therefore, Wang et al. (2018) introduced a new knowledge measure for IFSs to fill this gap.

Based on the given information, the attribute weights in MADM are usually classified as subjective and objective. Subjective weights are assigned to the attributes by the decision expert; the analytic hierarchy process (AHP) (Saaty 1980) and the Delphi method (Hwang and Lin 1987) are examples of techniques for determining them. Objective attribute weights, on the other hand, are evaluated by solving mathematical programming models, and one of the foremost approaches is the entropy method (Shannon 1948). Chen and Li (2010) presented a model to assess attribute weights by utilizing intuitionistic fuzzy (IF) entropy within the IFS setting. In the literature, various authors have addressed MADM problems (Seikh and Mandal 2019; Joshi and Kumar 2017, 2018; Wang et al. 2017; Liu and You 2017a; Liu et al. 2017b; Zafar and Akram 2018; Wang et al. 2019).

In this paper, our aim is to investigate the duality between fuzzy information measures and to present a new axiomatic definition of the knowledge measure. The main contributions of this study are summarized as follows: (i) A new knowledge measure for FSs is proposed and its validity is proved with the help of examples. (ii) An accuracy measure is introduced as a generalization of the proposed knowledge measure; it may be considered a dual of the inaccuracy of fuzzy sets. (iii) A score function is developed based on the combination of subjective and objective weights. (iv) The proposed knowledge measure is applied to MADM problems under fuzzy conditions.

The remainder of this article is structured as follows: Sect. 2 recalls some basic concepts of fuzzy sets. In Sect. 3, we propose a fuzzy knowledge measure and prove some major properties of this measure. Section 4 carries out a comparative analysis with the help of numerical examples. In Sect. 5, we introduce an accuracy measure for fuzzy sets and investigate its properties. In Sect. 6, the new knowledge measure is applied to MADM problems in a fuzzy environment, and an example is employed to check the applicability of the proposed approach. Finally, conclusions and future scope are given in Sect. 7.

2 Preliminaries

Some basic notions are used throughout the following sections. Let \(Z=\{z_1,z_2,\ldots ,z_n\}\) be a fixed set, and let FSs(Z) denote the class of all fuzzy sets of Z.

Definition 2.1

(Zadeh 1965) A fuzzy set M on Z is described as:

$$\begin{aligned} M=\{(z,\mu _M(z))|z\in Z\}, \end{aligned}$$

where \(\mu _M:Z\rightarrow [0,1]\) denotes the membership function. The value \(\mu _M(z) \in [0,1]\) is the membership degree of \(z \in Z\) in M.

Definition 2.2

(Zadeh 1965) Let \(M,N \in FSs(Z)\). Then, we have

  (a) Complement: \({\bar{M}}=\{(z,1-\mu _M(z))|z \in Z\}.\)

  (b) Union: \(M\cup N =\{(z,\max(\mu _M(z),\mu _N(z)))|z \in Z\}\).

  (c) Intersection: \(M\cap N=\{(z,\min(\mu _M(z),\mu _N(z)))|z \in Z\}\).

Definition 2.3

(Kosko 1986) Let FSs(Z) be the collection of all FSs in Z. For any \(P \in FSs(Z)\), the nearest crisp set \(P_{{\rm near}}\) and the farthest crisp set \(P_{{\rm far}}\) are defined as follows:

$$\begin{aligned} \mu _{P_{{\rm near}}}(z)=\left\{ \begin{array}{lr} 1: &{}\mu _P(z)\ge {\frac{1}{2}},\\ 0: &{}\mu _P(z)<{\frac{1}{2}}, \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} \mu _{P_{{\rm far}}}(z)=\left\{ \begin{array}{lr} 1: &{}\mu _P(z)<{\frac{1}{2}},\\ 0: &{}\mu _P(z)\ge {\frac{1}{2}}. \end{array} \right. \end{aligned}$$
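The nearest and farthest crisp sets of Definition 2.3 can be sketched directly in Python (the function names are ours):

```python
def nearest_crisp(mu):
    """mu_{P_near}: round each membership grade to the nearer of 0 and 1."""
    return [1.0 if m >= 0.5 else 0.0 for m in mu]

def farthest_crisp(mu):
    """mu_{P_far}: the complement of the nearest crisp set."""
    return [0.0 if m >= 0.5 else 1.0 for m in mu]

mu_P = [0.1, 0.5, 0.7]
print(nearest_crisp(mu_P))   # [0.0, 1.0, 1.0]
print(farthest_crisp(mu_P))  # [1.0, 0.0, 0.0]
```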

In FS theory, the difficulty of imprecision with respect to a FS is measured by fuzzy entropy. De Luca and Termini (1972) gave the following axiomatic definition of a fuzzy entropy measure of FSs.

Definition 2.4

(De Luca and Termini 1972) Let \(M \in FSs(Z)\); then H(M) (the measure of fuzziness) should have at least the following four properties:

A-1 (Sharpness)::

H(M) is minimum if and only if M is a crisp set.

A-2 (Maximality)::

H(M) is maximum if and only if M is the most fuzzy set.

A-3 (Resolution)::

\(H(M)\ge H(M^*)\), if \(M^*\) is crisper than M.

A-4 (Symmetry)::

\(H(M)=H({\bar{M}})\), where \({\bar{M}}\) is the complement of M, i.e. \(\mu _{{\bar{M}}} (z_i)=1-\mu _M(z_i).\)

3 A new knowledge measure for FSs

Entropy and knowledge measures are both important tools in the study of fuzzy sets. Fuzzy entropy estimates the degree of uncertainty, disorder and irregularity in an FS, whereas a fuzzy knowledge measure quantifies the degree of certainty, order and regularity. The entropy of an FS provides the average amount of ambiguity present in it; in a similar way, we can think of the average amount of knowledge present in an FS. Such a knowledge measure is called the dual measure of fuzzy entropy.

According to Definition 2.4, an axiomatic definition for the fuzzy knowledge measure can be defined as follows:

Definition 3.1

A real function S: FSs(Z)\(\rightarrow R^+\) is called a knowledge measure if it satisfies the following four properties:

S1.:

For all \(M\in FSs(Z)\), S(M) is maximum if and only if M is a crisp set.

S2.:

S(M) is minimum if and only if M is the most fuzzy set.

S3.:

\(S(M^{*})\ge S(M)\), where \(M^*\) is crisper than M.

S4.:

\(S(M)=S({\bar{M}})\), where \({\bar{M}}\) denotes complement of M.

Now, we introduce the following knowledge measure for an FS:

$$\begin{aligned} S(M)=\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _M^2(z_i)+(1-\mu _M(z_i))^2\right) \right] . \end{aligned}$$
(1)
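A minimal Python sketch of the measure in Eq. (1); the function name `knowledge` is ours:

```python
import math

def knowledge(mu):
    """Proposed knowledge measure S(M) of Eq. (1).

    mu -- list of membership degrees mu_M(z_i) in [0, 1].
    """
    n = len(mu)
    total = sum(m ** 2 + (1 - m) ** 2 for m in mu)
    return math.log2(2.0 * total / n)

# S is 1 for a crisp set and 0 for the most fuzzy set (all grades 0.5).
print(knowledge([0.0, 1.0, 1.0]))  # 1.0
print(knowledge([0.5, 0.5, 0.5]))  # 0.0
```

Each summand \(\mu^2+(1-\mu)^2\) lies in [0.5, 1], so the argument of the logarithm lies in [1, 2] and S takes values in [0, 1].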

Remark 1

For comparative investigation throughout this study, fuzzy entropy and knowledge measures are considered in normalized form. The next theorem establishes the validity of the proposed measure.

Theorem 3.1

The measure S(M) defined in Eq. 1 is a valid knowledge measure for FSs.

Proof

For validity, the proposed knowledge measure defined in Eq. 1 must fulfill the axiomatic requirements given in Definition 3.1.

(S1.) First, suppose that \(S(M)=1\),

$$\begin{aligned} \Rightarrow \log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _M^2(z_i)+\left( 1-\mu _M(z_i)\right) ^2\right) \right] =1, \end{aligned}$$

which is possible only when \(\mu _M(z_i)=0\) or 1.

Conversely, suppose that set M is crisp, i.e. \(\mu _M(z_i)=0\) or 1 for all \(z_i\in Z.\)

Then, for \(\mu _M(z_i)=0\) or 1, we have

$$\begin{aligned} \log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _M^2(z_i)+\left( 1-\mu _M(z_i)\right) ^2\right) \right] =1. \end{aligned}$$

Therefore, \(S(M)=1\) if and only if M is crisp, i.e. \(\mu _M(z_i)=0\) or 1 for all \(z_i\in Z.\)

(S2.) Let S(M) be minimum. Then, \(S(M)=0.\)

$$\begin{aligned} \Rightarrow \log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _M^2(z_i)+\left( 1-\mu _M(z_i)\right) ^2\right) \right] =0, \end{aligned}$$

which holds if and only if \(\mu _M(z_i)=0.5\) for all \(1\le i\le n\), since each summand \(\mu _M^2(z_i)+(1-\mu _M(z_i))^2\ge 0.5\) with equality only at \(\mu _M(z_i)=0.5\).

Conversely, assume that M is the most fuzzy set. Then,

$$\begin{aligned} \log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _M^2(z_i)+\left( 1-\mu _M(z_i)\right) ^2\right) \right] =0. \\ \Rightarrow S(M)=0. \end{aligned}$$

Therefore, S(M) is minimum iff M is the most fuzzy set, i.e. \(\mu _M(z_i)=0.5\) for all \(z_i\in Z\).

(S3.) Let \(M^*\) be crisper than M,

i.e. (1) \(\mu _M(z_i)\le \mu _{M^*}(z_i)\), if \(\mu _M(z_i)\ge 0.5\) and

(2) \(\mu _M(z_i)\ge \mu _{M^*}(z_i)\), if \(\mu _M(z_i)< 0.5\). Differentiating Eq. (1) with respect to \(\mu _M(z_i)\), we have

$$\begin{aligned} \frac{\partial S(M)}{\partial \mu _M(z_i)}=\frac{4\mu _M(z_i)-2}{\ln 2\sum _{i=1}^{n}\left( \mu _M^2(z_i)+\left( 1-\mu _M(z_i)\right) ^2\right) }. \end{aligned}$$

Hence, S(M) is an increasing function of \(\mu _M(z_i)\) on (0.5,1] and a decreasing function of \(\mu _M(z_i)\) on [0,0.5).

Thus, for \(\mu _M(z_i)\le \mu _{M^*}(z_i)\),

$$\begin{aligned} \Rightarrow S(M^*)\ge S(M)\; \mathrm {in}\; (0.5,1], \end{aligned}$$
(2)

and, for \(\mu _M(z_i)\ge \mu _{M^*}(z_i)\),

$$\begin{aligned} \Rightarrow S(M^*)\ge S(M)\; \mathrm {in}\; [0,0.5). \end{aligned}$$
(3)

From Eqs. (2) and (3), we get \(S(M^*)\ge S(M)\), where \(M^*\) is crisper than M.

(S4.) We have

$$\begin{aligned} S({\bar{M}}) &= \log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _{{\bar{M}}}^2(z_i)+(1-\mu _{{\bar{M}}}(z_i))^2\right) \right] \\ &= \log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( (1-\mu _{M}(z_i))^2+\mu _{M}^2(z_i)\right) \right] = S(M). \end{aligned}$$

Therefore,

$$\begin{aligned} S(M)=S({\bar{M}}). \end{aligned}$$

Hence, the validity of proposed knowledge measure S(M) for FSs is established.

Now, we investigate some major properties of the proposed measure (1).

Theorem 3.2

For any \(M,N \in FSs(Z),\) \(S(M\cup N)+S(M\cap N)=S(M)+S(N)\).

Proof

Let

$$\begin{aligned} Z_1=\{ z\in Z \;\vert \; \mu _M(z)\ge \mu _N(z)\}, \end{aligned}$$
(4)
$$\begin{aligned} Z_2=\{ z\in Z\; \vert \; \mu _M(z)<\mu _N(z)\}, \end{aligned}$$
(5)

where \(\mu _M(z)\) and \(\mu _N(z)\) represent the membership degrees of M and N, respectively.

If \(z\in Z_1\), then \(\mu _{M\cup N}(z)=\max\{\mu _M(z),\mu _N(z)\}=\mu _M(z)\) and \(\mu _{M\cap N}(z)=\min\{\mu _M(z),\mu _N(z)\}=\mu _N(z)\).

If \(z\in Z_2\), then \(\mu _{M\cup N}(z)=\max\{\mu _M(z),\mu _N(z)\}=\mu _N(z)\) and \(\mu _{M\cap N}(z)=\min\{\mu _M(z),\mu _N(z)\}=\mu _M(z)\).

Now, using Eq. (1) for all \(z_i\in Z\), we have

$$\begin{aligned}&S(M\cup N)+S(M\cap N) \\ &\quad =\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _{M\cup N}^2(z_i)+\left( 1-\mu _{M\cup N}(z_i)\right) ^2\right) \right] \\ &\qquad +\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _{M\cap N}^2(z_i)+\left( 1-\mu _{M\cap N}(z_i)\right) ^2\right) \right] \\ &\quad =\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _M^2(z_i)+(1-\mu _M(z_i))^2\right) \right] \\ &\qquad +\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _N^2(z_i)+(1-\mu _N(z_i))^2\right) \right] \\&\quad =S(M)+S(N). \end{aligned}$$

On simplifying, we get

$$\begin{aligned} S(M\cup N)+S(M\cap N)=S(M)+S(N). \end{aligned}$$

4 Comparative study

To demonstrate the performance and effectiveness of the proposed knowledge measure for FSs, some previously developed measures for FSs will be borrowed for comparison.

The fuzzy measure proposed by Yager (1979) is shown below.

$$\begin{aligned} H_{Y_1}(M)=1-\frac{d_p(M,M^c)}{n^\frac{1}{p}}. \end{aligned}$$

The fuzzy measure proposed by Kosko (1986) is shown below.

$$\begin{aligned} H_K(M)=\frac{d_p(M,M_{{\rm near}})}{d_p(M,M_{{\rm far}})}. \end{aligned}$$

The fuzzy measure proposed by Pal and Pal (1992) is shown below.

$$\begin{aligned} H_{{\rm Pal}}(M)=\frac{1}{n}\sum _{i=1}^{n}\left[ \mu _M({z_i})e^{1-\mu _M({z_i})}+(1-\mu _M({z_i}))e^{\mu _M({z_i})} \right] . \end{aligned}$$

The fuzzy measure proposed by Li and Liu (2008) is shown below.

$$\begin{aligned} H_{LL}(M)=\sum _{i=1}^{n}S(cr(\xi _M=z_i)). \end{aligned}$$

The fuzzy measure proposed by Hwang and Yang (2008) is shown below.

$$\begin{aligned} H_{HY}(M)= &\, {} \frac{1}{1-e^\frac{-1}{2}} \sum _{i=1}^{n}\left[ \left( 1-e^{-\mu _{M^c}({z_i})}\right) I_{\left[ \mu _M({z_i})\ge \frac{1}{2}\right] } \right. \\&\left. + \left( 1-e^{-\mu _M{(z_i)}}\right) I_{\left[ \mu _M{(z_i)}<\frac{1}{2}\right] }\right] . \end{aligned}$$

The fuzzy measure proposed by Joshi and Kumar (2018) is shown below.

$$\begin{aligned}&H_{\alpha '}^{\beta '}(M)\\&\quad =\frac{\alpha '\times \beta '}{n({\alpha '}-\beta ')}\left[ \sum _{i=1}^{n}\left\{ \left( \mu _M{(z_i)}^{\beta '}+\left( 1-\mu _M{(z_i)} \right) ^{\beta '} \right) ^\frac{1}{\beta '}\right. \right. \\&\quad \left. -\left. \left( \mu _M{(z_i)}^{\alpha '}+\left( 1-\mu _M{(z_i)} \right) ^{\alpha '} \right) ^\frac{1}{\alpha '} \right\} \right] . \\&S(M)=\log_{2}\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _M^2(z_i)+(1-\mu _M(z_i))^2\right) \right] \text {(our proposed one).} \end{aligned}$$

Example 1

Let us consider four FSs defined in \(Z=\{z\}\): \(M_1 = \langle z, 0.5\rangle, M_2 = \langle z, 0.25\rangle, M_3 =\langle z, 0.13\rangle, M_4 = \langle z, 0.2\rangle.\) These FSs are used to compare the values of the recalled fuzzy measures together with the proposed measure S(M). Table 1 shows the comparative results of the specific measures.

Table 1 Comparison of the fuzzy measures

It is clear from the outcomes depicted in Table 1 that the recalled fuzzy measures exhibit some unreasonable cases (shown in bold type). For example, the measures \(H_{Y},H_{{\rm Pal}}\) cannot discriminate the two different FSs \(M_1= \langle z, 0.5\rangle\) and \(M_4= \langle z, 1 \rangle\), i.e. \(H_{Y}(M_1)=H_{Y}(M_4)=1, H_{{\rm Pal}}(M_1)=H_{{\rm Pal}}(M_4)=0.74\), and \(H_{\alpha '(=0.5)}^{\beta '(=15)}(M_1)=H_{\alpha '(=0.5)}^{\beta '(=15)}(M_3)=0.421.\) The measure \(H_{{\rm Pal}}\) also gives the same value 0.65 for the two different FSs \(M_1= \langle z, 0.5\rangle\) and \(M_3= \langle z, 0.25\rangle\). This example indicates that only the proposed fuzzy measure has no contradictory or unreasonable case. Based on the new fuzzy measure, we can rank the FSs in accordance with the decreasing fuzzy measure as follows: \(M_3 \succ M_2 \succ M_4 \succ M_1.\) The fuzzy measures \(H_{LL}, H_{HY}\) give ranking orders that resemble, but do not exactly match, this one. Thus, the proposed fuzzy measure is more effective and reasonable for distinguishing the entropy of FSs in comparison with the other fuzzy measures.
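Assuming the grades listed at the start of the example (\(M_4=\langle z,0.2\rangle\)), the values of the proposed measure for the four singleton FSs can be recomputed as follows; the helper name `knowledge` is ours:

```python
import math

def knowledge(mu):
    # Proposed measure S(M) of Eq. (1).
    n = len(mu)
    return math.log2(2.0 * sum(m**2 + (1 - m)**2 for m in mu) / n)

# Singleton FSs from Example 1 with grades 0.5, 0.25, 0.13 and 0.2.
for name, m in [("M1", 0.5), ("M2", 0.25), ("M3", 0.13), ("M4", 0.2)]:
    print(name, round(knowledge([m]), 4))
```

The most fuzzy singleton \(M_1\) yields S = 0, and S grows as the grade moves away from 0.5.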

Example 2

Let \(M \in FSs(Z)\). Then, the modifier of the FS M is given by

$$\begin{aligned} M^n=\{(z,(\mu _M(z))^n)/z\in Z\}. \end{aligned}$$

Let us consider an FS \(M_1\) of \(Z=\{3,4,5,6,7\}\) defined as

$$\begin{aligned} M_1=\{(3,0.1),(4,0.3),(5,0.4),(6,0.9),(7,1)\}. \end{aligned}$$

Considering the features of linguistic hedges, we regard \(M_1\) as “good” on Z. We can compute the following FSs:

$$\begin{aligned} M_1^\frac{1}{2}= &\, {} \{(3,0.316),(4,0.548),(5,0.632),(6,0.949),(7,1)\}, \\ M_1^2= &\, {} \{(3,0.01),(4,0.09),(5,0.16),(6,0.81),(7,1)\}, \\ M_1^3= &\, {} \{(3,0.001),(4,0.027),(5,0.064),(6,0.729),(7,1)\}, \\ M_1^4= &\, {} \{(3,0),(4,0.008),(5,0.026),(6,0.656),(7,1)\}. \end{aligned}$$

The hedges described by the above FSs are considered as: \(M_1^\frac{1}{2}\) may be described as “very bad”, \(M_1^2\) may be described as “extreme”, \(M_1^3\) may be described as “common”, \(M_1^4\) may be described as “perfect”.

Intuitively, from \(M_1^\frac{1}{2}\) to \(M_1^4\), the information hidden in the sets becomes less ambiguous: the entropy conveyed by them decreases, while the knowledge increases. So the following relations hold:

$$\begin{aligned}&H(M_1^{\frac{1}{2}})> H(M_1)>H(M_1^2)>H(M_1^3)>H(M_1^4). \end{aligned}$$
(6)
$$\begin{aligned}&S(M_1^{\frac{1}{2}})< S(M_1)<S(M_1^2)<S(M_1^3)<S(M_1^4). \end{aligned}$$
(7)
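The monotone relation in Eq. (7) can be checked numerically; the helper `knowledge` implements Eq. (1), and raising each grade to the listed exponents reproduces the modifier sets above:

```python
import math

def knowledge(mu):
    # Proposed measure S(M) of Eq. (1).
    n = len(mu)
    return math.log2(2.0 * sum(m**2 + (1 - m)**2 for m in mu) / n)

mu_M1 = [0.1, 0.3, 0.4, 0.9, 1.0]
powers = [0.5, 1, 2, 3, 4]          # M1^(1/2), M1, M1^2, M1^3, M1^4
values = [knowledge([m**p for m in mu_M1]) for p in powers]
print([round(v, 3) for v in values])

# Knowledge grows monotonically from M1^(1/2) to M1^4, as Eq. (7) requires.
assert all(a < b for a, b in zip(values, values[1:]))
```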

To make a comparison, the entropy measures \(H_{Y_1}(M_1),H_K(M_1),H_{{\rm Pal}}(M_1),H_{LL}(M_1)\), \(H_{HY}(M_1),H_{\alpha '}^{\beta '}(M_1)\) and the knowledge measure \(S(M_1)\) are employed. In Table 2, we present the outcomes based on the diverse measures for comparative analysis.

Table 2 Fuzziness values using various fuzzy measures

We can notice that the FS \(M_1\) is assigned more entropy than the FS \(M_1^\frac{1}{2}\) when the entropy measures \(H_{Y_1}(M_1)\), \(H_K(M_1)\) and \(H_{LL}(M_1)\) are implemented; the ranking outcomes are listed below.

$$\begin{aligned}&H_{Y_1}(M_1^\frac{1}{2})> H_{Y_1}(M_1)> H_{Y_1}(M_1^2)>H_{Y_1}(M_1^4)<H_{Y_1}(M_1^3), \\&H_K(M_1)>H_K(M_1^\frac{1}{2})> H_K(M_1^2)>H_K(M_1^4)< H_K(M_1^3), \\&H_{LL}(M_1^3)> H_{LL}(M_1^\frac{1}{2})> H_{LL}(M_1)<H_{LL}(M_1^4)>H_{LL}(M_1^2). \end{aligned}$$

It is clear that these ranking outcomes do not satisfy the sequence given in Eq. 6, whereas the entropy measures \(H_{{\rm Pal}}(M_1)\), \(H_{HY}(M_1)\), \(H_{\alpha '}^{\beta '}(M_1)\) and \(S(M_1)\) perform well. These results indicate that the former entropy measures are not suitable for discriminating the uncertainty information of FSs with linguistic terms. Moreover, the proposed knowledge measure performs well by satisfying Eq. 7.

Example 3

Let us consider another FS \(M_2\) defined in Z as:

$$\begin{aligned} M_2=\{(3,0.2),(4,0.3),(5,0.4),(6,0.7),(7,0.8)\}. \end{aligned}$$

We calculate \(M_2^\frac{1}{2}\), \(M_2^2\), \(M_2^3\) and \(M_2^4\), and now compare only \(H_{{\rm Pal}}(M_2)\), \(H_{HY}(M_2)\), \(H_{\alpha '}^{\beta '}(M_2)\) and \(S(M_2)\) (Table 3).

Table 3 Fuzziness values with \(H_{{\rm Pal}}(M_1)\), \(H_{HY}(M_1)\), \(H_{\alpha '(=0.5)}^{\beta '(=15)}(M_2)\) and \(S(M_2)\)

Moreover, the results produced by the entropy measures \(H_{{\rm Pal}}(M_2)\) and \(H_{\alpha '}^{\beta '}(M_2)\) are also unreasonable, as shown in the relations below.

$$\begin{aligned}&H_{{\rm Pal}}(M_2)> H_{{\rm Pal}}(M_2^\frac{1}{2})> H_{{\rm Pal}}(M_2^2)> H_{{\rm Pal}}(M_2^4)>H_{{\rm Pal}}(M_2^3), \\&H_{\alpha '}^{\beta '}(M_2)>H_{\alpha '}^{\beta '}(M_2^\frac{1}{2})> H_{\alpha '}^{\beta '}(M_2^2)> H_{\alpha '}^{\beta '}(M_2^3)>H_{\alpha '}^{\beta '}(M_2^4). \end{aligned}$$

Therefore, the entropy measures \(H_{{\rm Pal}}(M_2)\) and \(H_{\alpha '}^{\beta '}(M_2)\) are not suitable for differentiating the information conveyed by FSs, whereas \(H_{HY}(M_2)\) and \(S(M_2)\) do satisfy the ranking order. This example once again indicates the effectiveness of \(H_{HY}(M_2)\) and of the proposed measure \(S(M_2)\); in particular, \(S(M_2)\) satisfies the ranking order in Eq. 7. From the above analysis, we conclude that the performance of the proposed measure is better than that of the others.

In the subsequent section, the knowledge measure is generalized.

5 Fuzzy accuracy measure

5.1 Background

The concept of inaccuracy was introduced by Kerridge (1961) and may be thought of as a generalization of Shannon (1948) entropy. It has been extensively used as a model for measuring the error in experimental results. Suppose that an expert states the probabilities of the various possible outcomes of an experiment. His statement can lack exactness in two ways: he may not have sufficient information, so that his statement is vague, or some of the information he does have may be incorrect. All problems of statistical inference are concerned with making statements that can be inaccurate in either or both of these ways, and Kerridge (1961) proposed an inaccuracy measure that accounts for both types of error. Suppose that the expert asserts that the probability of the \({i}\hbox{th}\) event is \(q_i\) when the true probability is \(p_i\). Then, the inaccuracy of the observer, as formulated by Kerridge (1961), is measured as:

$$\begin{aligned} I_{{\rm Kerridge}}(P',Q')=-\sum (p_i) \log(q_i), \end{aligned}$$
(8)

where \(P' = (p_1, p_2,\ldots , p_n)\) and \(Q' = (q_1, q_2,\ldots , q_n)\) are the discrete probability distributions with \(\displaystyle \sum\nolimits_{i=1}^{n}p_i=1=\sum\nolimits_{i=1}^{n}q_i.\)

If \(p_i=q_i\), then Eq. 8 recovers Shannon entropy as:

$$\begin{aligned} H_{{\rm Shannon}} (P')=-\sum p_i \;\log\; p_i.\end{aligned}$$

For the probability distributions \(P'\) and \(Q'\), Kullback and Leibler (1951) divergence measure is defined as:

$$\begin{aligned} D_{KL}(P',Q')=\sum _{i=1}^{n}p_i\; \log\;\frac{p_i}{q_i}. \end{aligned}$$
(9)

We have

$$\begin{aligned} D_{KL}(P',Q')=-H_{{\rm Shannon}}(P')+I_{{\rm Kerridge}}(P',Q'), \end{aligned}$$

which implies \(I_{{\rm Kerridge}}(P',Q')=D_{KL}(P',Q')+H_{{\rm Shannon}}(P').\)
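The decomposition above can be verified numerically; the function names are ours, and natural logarithms are used throughout (any fixed base gives the same identity):

```python
import math

def shannon(p):
    # H(P') = -sum p_i log p_i.
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kerridge(p, q):
    # Kerridge inaccuracy I(P'; Q') = -sum p_i log q_i, Eq. (8).
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

def kl(p, q):
    # D_KL(P' || Q') = sum p_i log(p_i / q_i), Eq. (9).
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.2, 0.5, 0.3]
q = [0.4, 0.4, 0.2]
# The identity I(P'; Q') = D_KL(P' || Q') + H(P').
print(abs(kerridge(p, q) - (kl(p, q) + shannon(p))) < 1e-9)  # True
```

For \(p_i=q_i\) the inaccuracy reduces to the Shannon entropy, as stated above.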

For all \(M,N \in FSs(Z)\), corresponding to Eq. 9, Verma and Sharma (2011) defined the fuzzy inaccuracy measure as:

$$\begin{aligned} I(M;N)= &\, {} -\frac{1}{n}\sum _{i=1}^{n}\left[ \mu _M(z_i)\log(\mu _N(z_i))\right. \\&\quad \left. +(1-\mu _M(z_i))\log(1-\mu _N(z_i)) \right] \\ I(M;N)= &\, {} D(M,N)+H(M), \end{aligned}$$

where D(M; N) is the Verma and Sharma (2011) fuzzy divergence and H(M) is the De Luca and Termini (1972) fuzzy entropy.

When \(M=N\), we get

$$\begin{aligned} I(M,N)=H(M). \end{aligned}$$

Thus, I(M; N) is a generalization of the De Luca and Termini fuzzy entropy H(M).

Now, we introduce a new dual of inaccuracy, which we call the fuzzy accuracy of an FS N relative to M, as follows:

$$\begin{aligned} I_{{\rm Acc}}(M;N)=&\frac{1}{2}\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left\{ \mu _M^2(z_i)+(1-\mu _M(z_i))^2 \right\} \right] \nonumber \\&+\frac{1}{2}\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left\{ \mu _M(z_i)\mu _N(z_i)\right. \right. \nonumber \\&\left. \left. +(1-\mu _M(z_i))(1-\mu _N(z_i)) \right\} \right] . \end{aligned}$$
(10)

Also, for \(M=N\), we have

$$\begin{aligned} I_{{\rm Acc}}(M;N)=S(M). \end{aligned}$$

Therefore, \(I_{{\rm Acc}}(M;N)\) can be considered a generalization of the proposed fuzzy knowledge measure S(M) and is called the fuzzy accuracy measure.
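Equation (10) and its reduction to S(M) for \(N=M\) can be sketched as follows; the function names are ours:

```python
import math

def knowledge(mu):
    # Proposed measure S(M) of Eq. (1).
    n = len(mu)
    return math.log2(2.0 * sum(m**2 + (1 - m)**2 for m in mu) / n)

def accuracy(mu_M, mu_N):
    """Fuzzy accuracy I_Acc(M; N) of Eq. (10)."""
    n = len(mu_M)
    self_term = sum(a**2 + (1 - a)**2 for a in mu_M)
    cross_term = sum(a * b + (1 - a) * (1 - b) for a, b in zip(mu_M, mu_N))
    return 0.5 * math.log2(2.0 * self_term / n) + 0.5 * math.log2(2.0 * cross_term / n)

mu_M = [0.8, 0.3, 0.6]
mu_N = [0.7, 0.4, 0.9]
print(accuracy(mu_M, mu_N))
# For N = M the accuracy reduces to the knowledge measure S(M).
print(abs(accuracy(mu_M, mu_M) - knowledge(mu_M)) < 1e-9)  # True
```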

Keeping the above concepts in mind, we discuss some properties of the proposed fuzzy accuracy measure from a mathematical viewpoint in the next subsection.

5.2 Properties of the Proposed Accuracy Measure

Theorem 5.1

\(I_{{\rm Acc}}(M;N)\) is maximum if M and N are crisp sets with \(M=N\), \(\forall \; M,N \in FSs(Z).\)

Proof

Since we are computing the fuzzy accuracy of N relative to M, the maximum accuracy that N may attain is equal to the amount of knowledge that lies in M, that is, S(M). Furthermore, S(M) attains the largest value 1 according to our definition.

Suppose \(\mu _{M}(z_i)=\mu _{N}(z_i)=1\) or 0, then we have

$$\begin{aligned} I_{{\rm Acc}}(M;N)= &\, {} \frac{1}{2}\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\mu _{M}^2(z_i)+\left( 1-\mu _{M}(z_i) \right) ^2 \right] \\&+\frac{1}{2}\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\mu _{M}(z_i)\mu _{N}(z_i)\right. \\&\left. +\left( 1-\mu _{M}(z_i) \right) \left( 1-\mu _{N}(z_i) \right) \right] =1. \end{aligned}$$

Hence, \(I_{{\rm Acc}}(M;N)=1\) if \(\mu _{M}(z_i)=\mu _{N}(z_i)=0\) or 1. We have

$$\begin{aligned} I_{{\rm Acc}}(M;N)= &\, {} \frac{1}{2}\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\mu _{M}^2(z_i)+\left( 1-\mu _{M}(z_i) \right) ^2 \right] \\&+\frac{1}{2}\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\mu _{M}(z_i)\mu _{N}(z_i)\right. \\&\left. +\left( 1-\mu _{M}(z_i) \right) \left( 1-\mu _{N}(z_i) \right) \right] . \end{aligned}$$

Now, for \(\mu _{M}(z_i)=\mu _{N}(z_i)\)

$$\begin{aligned} I_{{\rm Acc}}(M;N)= &\, {} \frac{1}{2}\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\mu _{M}^2(z_i)+\left( 1-\mu _{M}(z_i) \right) ^2 \right] \\&+\frac{1}{2}\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\mu _{M}(z_i)\mu _{M}(z_i)\right. \\&\left. +\left( 1-\mu _{M}(z_i) \right) \left( 1-\mu _{M}(z_i) \right) \right] . \\= &\, {} \log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\mu _{M}^2(z_i)+\left( 1-\mu _{M}(z_i) \right) ^2 \right] =S(M). \end{aligned}$$

Therefore, \(I_{{\rm Acc}}(M;N)=S(M)\) if \(\mu _{M}(z_i)=\mu _{N}(z_i).\) This proves the theorem.

Theorem 5.2

Let \(M, N, C \in FSs(Z)\) be such that \(M\subset N \subset C\); then,

  1. \(I_{{\rm Acc}}(M;N)\le I_{{\rm Acc}}(M;C)\) for \(\mu _{M}(z_i)\ge \frac{1}{2},\)

  2. \(I_{{\rm Acc}}(M;N)\ge I_{{\rm Acc}}(M;C)\) for \(\mu _{M}(z_i)\le \frac{1}{2}.\)

Proof

$$\begin{aligned} I_{{\rm Acc}}(M;N)&=\frac{1}{2}\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _{M}^2(z_i)+\left( 1-\mu _{M}(z_i) \right) ^2\right) \right] \\&+\frac{1}{2}\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _{M}(z_i)\mu _{N}(z_i)\right) +\left( 1-\mu _{M}(z_i)\right) \left( 1-\mu _{N}(z_i)\right) \right] . \end{aligned}$$

Taking \(\mu _{M}(z_i)=\alpha _i\) and \(\mu _{N}(z_i)=\beta _i\), we get

$$\begin{aligned}&=\frac{1}{2}\log_2 \left[ \frac{2}{n}\sum _{i=1}^{n}\left( \alpha _i^2+\left( 1-\alpha _i\right) ^2\right) \right] \\&\quad +\frac{1}{2}\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \alpha _i\beta _i+(1-\alpha _i)(1-\beta _i)\right) \right] . \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{\partial I_{{\rm Acc}}}{\partial \alpha _i}= &\, {} \frac{2\alpha _i-1}{\ln 2\sum _{i=1}^{n}\left( \alpha _i^2+\left( 1-\alpha _i\right) ^2 \right) }\\&+\frac{2\beta _i-1}{2\ln 2\sum _{i=1}^{n}\left( \alpha _i\beta _i+(1-\alpha _i)(1-\beta _i)\right) } \end{aligned}$$

and

$$\begin{aligned}&\frac{\partial I_{{\rm Acc}}}{\partial \beta _i}=\frac{2\alpha _i-1}{2\ln 2\sum _{i=1}^{n}\left( \alpha _i\beta _i+(1-\alpha _i)(1-\beta _i)\right) }. \\&\implies \frac{\partial I_{{\rm Acc}}}{\partial \alpha _i}\ge 0\; \text {for}\; \alpha _i\ge 0.5\; \text {and} \;\beta _i \ge 0.5, \end{aligned}$$

and \(\frac{\partial I_{{\rm Acc}}}{\partial \beta _i}\ge 0\) for \(\alpha _i \ge 0.5.\)

Hence, \(I_{{\rm Acc}}\) is increasing in both arguments if \(0.5\le \alpha _i \le \beta _i\), and decreasing in both arguments if \(\alpha _i\le \beta _i \le 0.5.\) \(\square\)

Theorem 5.3

Let M,N and C be any three FSs; then,

  1. \(I_{{\rm Acc}}(M;N)=I_{{\rm Acc}}({\bar{M}};{\bar{N}}),\)

  2. \(I_{{\rm Acc}}(M;{\bar{M}})=I_{{\rm Acc}}({\bar{M}};M),\)

  3. \(I_{{\rm Acc}}(M;{\bar{N}})=I_{{\rm Acc}}({\bar{M}};N),\)

  4. \(I_{{\rm Acc}}(M;N)+I_{{\rm Acc}}({\bar{M}};N)=I_{{\rm Acc}}({\bar{M}};{\bar{N}})+I_{{\rm Acc}}(M;{\bar{N}}).\)

Proof

The proofs of these properties are straightforward and follow by direct calculation.

Next, we obtain a knowledge measure with the help of the accuracy measure.

Theorem 5.4

\(\displaystyle S(M)=\frac{I_{{\rm Acc}}(M,M^{{\rm near}})}{I_{{\rm Acc}}(M,M^{{\rm far}})}\) is a valid knowledge measure.

Proof

We prove that S(M) holds the axiomatic requirements given in Definition 3.1.

(S1.) Let M be a crisp (i.e. non-fuzzy) set, so that \(M=M^{{\rm near}}.\)

Now,

$$\begin{aligned} I_{{\rm Acc}}(M;M^{{\rm near}})&=\frac{1}{2}\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _{M}^2(z_i)+\left( 1-\mu _{M}(z_i) \right) ^2\right) \right] \nonumber \\&+\frac{1}{2}\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _{M}(z_i)\mu _{M^{{\rm near}}}(z_i)\right) \right. \nonumber \\&\left. +\left( 1-\mu _{M}(z_i)\right) \left( 1-\mu _{M^{{\rm near}}}(z_i)\right) \right] . \end{aligned}$$
(11)
$$\begin{aligned} I_{{\rm Acc}}(M^{{\rm near}};M^{{\rm near}})&=\frac{1}{2}\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _{M^{{\rm near}}}^2(z_i)+\left( 1-\mu _{M^{{\rm near}}}(z_i) \right) ^2\right) \right] \nonumber \\&+\frac{1}{2}\log_2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _{M^{{\rm near}}}(z_i)\mu _{M^{{\rm near}}}(z_i)\right) \right. \nonumber \\&\left. +\left( 1-\mu _{M^{{\rm near}}}(z_i)\right) \left( 1-\mu _{M^{{\rm near}}}(z_i)\right) \right] . \end{aligned}$$
(12)

Case 1: When \(\mu _{M}(z_i)\ge 0.5\), then in view of Eq. 12 and Definition 2.3, we have \(I_{{\rm Acc}}(M^{{\rm near}};M^{{\rm near}})=1.\)

Case 2: When \(\mu _{M}(z_i) < 0.5\), then in view of Eq. 12 and Definition 2.3, we have \(I_{{\rm Acc}}(M^{{\rm near}};M^{{\rm near}})=1.\)

Similarly, in both cases we can show that \(I_{{\rm Acc}}(M;M^{{\rm far}})=1\). Therefore,

$$\begin{aligned} \displaystyle S(M)=\frac{I_{{\rm Acc}}(M,M^{{\rm near}})}{I_{{\rm Acc}}(M,M^{{\rm far}})}=1. \end{aligned}$$

Conversely, suppose that \(S(M)=1.\) So, \(I_{{\rm Acc}}(M;M^{{\rm near}})=1.\)

That is, \(M=M^{{\rm near}}.\)

Hence, M is a non-fuzzy or crisp set.

(S2.) Let M be the most fuzzy set, that is, \(\mu _{M}(z_i)=0.5 \;\forall \; z_i \in Z.\)

Then, \(\mu _{M^{{\rm near}}}(z_i)=1\). So, \(I_{{\rm Acc}}(M,M^{{\rm near}})=0.\)

Therefore, \(\displaystyle S(M)=\frac{I_{{\rm Acc}}(M,M^{{\rm near}})}{I_{{\rm Acc}}(M,M^{{\rm far}})}=0.\)

Conversely, if \(S(M)=0\), then \(M=M_F\), the most fuzzy set.

(S3.) Let \(M^*\) be a sharpened version of M. We show that

$$\begin{aligned} S(M^*)\ge S(M). \end{aligned}$$

It is evident that

$$\begin{aligned} M^{{\rm near}}=M^{{*\rm near}} \text {and}\; M^{{\rm far}}=M^{{*\rm far}}, \end{aligned}$$

and hence \(I_{{\rm Acc}}(M^*,M^{{\rm near}})=I_{{\rm Acc}}(M^*,M^{{*\rm near}}),\)

and \(I_{{\rm Acc}}(M^*,M^{{\rm far}})=I_{{\rm Acc}}(M^*,M^{{*\rm far}}).\)

We have \(M\subset M^*\subset M^{{\rm near}}\) for \(\mu _{M}(z_i)\ge 0.5\).

From Theorem 5.2, we have \(I_{{\rm Acc}}(M^*,M^{{*\rm near}})\ge I_{{\rm Acc}}(M,M^{{\rm near}}).\)

Similarly, we can prove that

$$\begin{aligned} I_{{\rm Acc}}(M^*,M^{{*\rm far}})\ge & {} I_{{\rm Acc}}(M,M^{{\rm far}}). \\ \Rightarrow \frac{I_{{\rm Acc}}(M,M^{{\rm near}})}{I_{{\rm Acc}}(M,M^{{\rm far}})}\le & {} \frac{I_{{\rm Acc}}(M^*,M^{{*\rm near}})}{I_{{\rm Acc}}(M^*,M^{{*\rm far}})}. \\ \Rightarrow S(M^*)\ge S(M). \end{aligned}$$

(S4.) Using Theorem 5.3, we have

$$\begin{aligned} I_{{\rm Acc}}(M;N)=I_{{\rm Acc}}({\bar{M}};{\bar{N}}). \end{aligned}$$

Also, notice that \({\bar{M}}^{{\rm near}}=\bar{M^{{\rm near}}}\) and \({\bar{M}}^{{\rm far}}=\bar{M^{{\rm far}}}.\)

Since \(I_{{\rm Acc}}(M,M^{{\rm near}})={I_{{\rm Acc}}({\bar{M}},\bar{M^{{\rm near}}})}\) and \(I_{{\rm Acc}}(M,M^{{\rm far}})={I_{{\rm Acc}}({\bar{M}},\bar{M^{{\rm far}}})}\).

Therefore,

$$\begin{aligned} S(M)=\frac{I_{{\rm Acc}}(M,M^{{\rm near}})}{I_{{\rm Acc}}(M,M^{{\rm far}})}=\frac{I_{{\rm Acc}}({\bar{M}},\bar{M^{{\rm near}}})}{I_{{\rm Acc}}({\bar{M}},\bar{M^{{\rm far}}})}=\frac{I_{{\rm Acc}}({\bar{M}},{\bar{M}}^{{\rm near}})}{I_{{\rm Acc}}({\bar{M}},{\bar{M}}^{{\rm far}})}=S({\bar{M}}). \end{aligned}$$

Hence, S(M) is a valid knowledge measure.

The developed accuracy measure can be employed to measure errors in pattern recognition, computer vision and memory. In the real world, accuracy/similarity/dissimilarity data are found in many forms, such as ratings of pairs, errors of substitution, correlations between occurrences and sortings of objects. The proposed accuracy measure can be further applied to decision-making in real-world problems.

6 Application of the proposed knowledge measure in solving MADM model

In this section, a new model, based on the proposed knowledge measure, for solving the decision-making problems, has been demonstrated under the FSs environment. The improved score function is used to make the ranking results more accurate and flexible. Moreover, a numerical example has been considered to validate the approach.

6.1 Proposed approach

In a decision-making problem, the set of all alternatives is defined as \(G'=\{G'_i: 1\le i\le m\}\) and the finite set of all attributes is defined as \(A'=\{A'_j:1\le j\le n\}\). The attribute weight vector is \(w=(w_1,w_2,\ldots ,w_n)^T\) such that \(\textstyle \sum _{j=1}^{n}w_j=1\). Owing to the limitations of the DM's knowledge and the paucity of time and experience, the preference or evaluation information provided for each attribute is represented in fuzzy form. The fuzzy decision matrix provided by the DMs is defined as:

(Figure a: the fuzzy decision matrix with entries \(\nu _{ij}\))
  1.

    If the attribute weights are partially or completely known, then MADM techniques play an important role in solving problems based on fuzzy information under multiple attributes. In practical situations, however, the attribute weights are often only partially known or completely unknown (Wei 2008). Therefore, the attribute weights must be determined before solving the MADM problem. Asking the decision makers for the weights directly is subjective, and partial information about the attribute weights cannot be exploited sufficiently. Hence, we propose a new method to compute the weight vector based on the proposed measure: we take the whole information as the objective function of an optimization problem. By minimizing the sum of all information under all attributes, we construct the following model.

    $$\begin{aligned}&{\text{Min}}\;T=\sum _{j=1}^{n}w_j\sum _{i=1}^{m}S(\nu _{ij})\nonumber \\& {\text{s.t.}} {\left\{ \begin{array}{ll} \sum _{j=1}^{n}w_j=1\\ w_j\ge 0,j=1,2,\ldots ,n\\ w\in H \end{array}\right. } \end{aligned}$$
    (13)

    where H denotes the set of unknown/incomplete information about the attribute weights and \(S(\nu _{ij})\) is the information measure calculated by our proposed measure. If the attribute weights are completely unknown, then we determine the knowledge amount of each criterion \(A'_j,1\le j\le n\), using Formula (14).

    $$\begin{aligned} S(M)=\log _2\left[ \frac{2}{n}\sum _{i=1}^{n}\left( \mu _M^2(z_i)+(1-\mu _M(z_i))^2\right) \right] . \end{aligned}$$
    (14)
  2.

    Calculate the weights \(w_j\) for each criterion \(A'_j\) as given in (15),

    $$\begin{aligned} w_j=\frac{\sum _{i=1}^{m}S(\nu _{ij})}{\sum _{j=1}^{n}\sum _{i=1}^{m}S(\nu _{ij})}. \end{aligned}$$
    (15)
  3.

    Now, we develop a score function \(S(G'_i)\) as follows:

    $$\begin{aligned} S(G'_i) =\sum _{j=1}^{n}\nu _{ij}[\xi w_j^s+w_j^o(1-\xi )]^T, \end{aligned}$$
    (16)

    where \([\xi w_j^s+w_j^o(1-\xi )]^T\) in Eq. 16 combines the subjective weights (\(w_j^s\)) and the objective weights (\(w_j^o\)). The parameter \(\xi \in (0,1)\) denotes the relative importance of the subjective weights over the objective ones and can take any value in this interval.

  4.

    The alternative with the highest score is the most desirable choice.
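The four steps above can be sketched in Python. This is a minimal sketch under two assumptions: the per-entry knowledge values \(S(\nu _{ij})\) are taken to be Formula (14) applied to a single membership grade (i.e. with n = 1), and the decision matrix `nu` is a small hypothetical example, not the one of Sect. 6.2.

```python
import math

def knowledge(mu):
    # Formula (14) for a single membership grade (n = 1):
    # S = log2(2 * (mu^2 + (1 - mu)^2)); 0 at mu = 0.5, 1 at mu in {0, 1}.
    return math.log2(2 * (mu ** 2 + (1 - mu) ** 2))

def objective_weights(nu):
    # Formula (15): weights proportional to the column-wise knowledge totals K_j.
    m, n = len(nu), len(nu[0])
    K = [sum(knowledge(nu[i][j]) for i in range(m)) for j in range(n)]
    total = sum(K)
    return [k / total for k in K]

def scores(nu, w_s, w_o, xi):
    # Formula (16): score each alternative with the combined weight
    # xi * w_s + (1 - xi) * w_o.
    n = len(nu[0])
    w = [xi * w_s[j] + (1 - xi) * w_o[j] for j in range(n)]
    return [sum(row[j] * w[j] for j in range(n)) for row in nu]

# Hypothetical decision matrix: 3 alternatives evaluated on 2 attributes.
nu = [[0.7, 0.4], [0.9, 0.5], [0.6, 0.8]]
w_o = objective_weights(nu)
sc = scores(nu, w_s=[0.5, 0.5], w_o=w_o, xi=0.5)
best = max(range(len(sc)), key=sc.__getitem__)  # index of the best alternative
```

Under Formula (15), attributes whose columns carry more knowledge (membership grades far from 0.5) receive larger weights.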

6.2 Numerical example and discussion

Suppose a company wishes to invest a sum of money in a project. Based on the complexities of economic development, there are five candidate companies (alternatives): (\(G'_1\)) a washing machine company, (\(G'_2\)) a software company, (\(G'_3\)) a chemical company, (\(G'_4\)) a computer company and (\(G'_5\)) an LCD company.

The investment company evaluates these five companies using the following four attributes, which are: (\(A'_1\)): Capital gain, (\(A'_2\)): Investment risk, (\(A'_3\)): Social and political impact, (\(A'_4\)): Environmental impact.

(Figure b: the fuzzy decision matrix of the five alternatives over the four attributes)

To find the most desirable alternative, the experts assign the following weights based on their knowledge and experience in the problem domain: \(\displaystyle w_1^o=0.2,w_2^o=0.4,w_3^o=0.3,w_4^o=0.1.\)

Case 1 The information regarding the attribute weights is partially known. The partial information about the attribute weights is denoted by the set H:

$$\begin{aligned} H&=\{ 0.10\le w_1\le 0.13,0.13\le w_2\le 0.19,0.24\le w_3\\&\quad \le 0.30,0.40\le w_4\le 0.60\}. \end{aligned}$$

The overall information of each attribute can be determined as follows:

$$\begin{aligned} K_1=\sum _{i=1}^{5}S(\nu _{i1})=1.152;\; K_2=\sum _{i=1}^{5}S(\nu _{i2})=1.016; \\ K_3=\sum _{i=1}^{5}S(\nu _{i3})=1.104;\; K_4=\sum _{i=1}^{5}S(\nu _{i4})=1.032. \end{aligned}$$

The programming model to find the attribute weights using Eq. 13 can be constructed as: Min \(T=0.102w_1+0.055w_2+0.013w_3+0.013w_4\)

$$\begin{aligned} \text {such that} {\left\{ \begin{array}{ll} w\in H\\ \sum _{j=1}^{4}w_j=1\\ w_j\ge 0,j=1,2,3,4. \end{array}\right. } \end{aligned}$$

Then, the weights vector can be obtained as:

$$\begin{aligned} w=(0.31,0.30,0.13,0.26)^T. \end{aligned}$$
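A model of the form (13) can be handed to an off-the-shelf linear-programming solver. The sketch below assumes SciPy is available and uses the objective coefficients and the bounds from the set H as stated above; it illustrates the mechanics of the optimization step rather than reproducing the exact reported vector (the split between \(w_3\) and \(w_4\) is not unique here, since their objective coefficients coincide).

```python
from scipy.optimize import linprog

# Objective coefficients of Min T as stated in the text.
c = [0.102, 0.055, 0.013, 0.013]
# Bounds on each w_j taken from the set H.
bounds = [(0.10, 0.13), (0.13, 0.19), (0.24, 0.30), (0.40, 0.60)]
# Equality constraint: the weights must sum to 1.
res = linprog(c, A_eq=[[1, 1, 1, 1]], b_eq=[1.0], bounds=bounds)
w = res.x  # one optimal weight vector
```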

Determining the score using \(\sum _{j=1}^{4}\nu (G'_i,A'_j) [\xi w_j^s+w_j^o(1-\xi )]^T,\; i=1,2,\ldots ,5,\) we compute \(S(G'_i)\) for various values of \(\xi\); the results are given in Table 4 and Fig. 1.

Table 4 Values of Score function for various values of \(\xi\)
Fig. 1

Analysis of the effect of the coefficient \(\xi\) on the outcome of the alternatives

Using the above results, we have obtained the following order ranking of the alternatives.

  • For \(\xi =0.1\):\(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1).\)

  • For \(\xi =0.2\):\(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1).\)

  • For \(\xi =0.3\):\(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1).\)

  • For \(\xi =0.4\):\(S(G'_3)\succ S(G'_5)\succ S(G'_4)\succ S(G'_2)\succ S(G'_1).\)

  • For \(\xi =0.5\):\(S(G'_3)\succ S(G'_5)\succ S(G'_4)\succ S(G'_2)\succ S(G'_1).\)

  • For \(\xi =0.6\):\(S(G'_3)\succ S(G'_5)\succ S(G'_4)\succ S(G'_2)\succ S(G'_1).\)

  • For \(\xi =0.7\):\(S(G'_3)\succ S(G'_5)\succ S(G'_4)\succ S(G'_2)\succ S(G'_1).\)

  • For \(\xi =0.8\):\(S(G'_3)\succ S(G'_5)\succ S(G'_4)\succ S(G'_2)\succ S(G'_1).\)

  • For \(\xi =0.9\):\(S(G'_3)\succ S(G'_5)\succ S(G'_4)\succ S(G'_2)\succ S(G'_1).\)

From the above ranking results, it is clear that \(G'_3\) is the most suitable alternative. The sequence of alternatives so obtained is: \(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1)\). The same problem was solved using the method proposed by Xia and Xu (2012) based on \(E_M^{1.5}\) and \(CE_M^{1.5}\); the attribute weight vector is obtained as \(w=(0.29,0.27,0.15,0.29)^T\).

The ranking of the alternatives is \(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1)\), which is the same as that obtained by the proposed method. This agreement supports the reliability and effectiveness of the proposed method.

Case 2 When there is no information about the attribute weights, the weights can be obtained by the proposed approach; the total knowledge amount of the fuzzy information under each attribute is listed below:

$$\begin{aligned} K_1=\sum _{i=1}^{5}S(\nu _{i1})=1.152;\; K_2=\sum _{i=1}^{5}S(\nu _{i2})=1.016; \\ K_3=\sum _{i=1}^{5}S(\nu _{i3})=1.104;\; K_4=\sum _{i=1}^{5}S(\nu _{i4})=1.032. \end{aligned}$$

The attribute weights can be obtained as \(w_1=K_1/\sum _{j=1}^{4}K_j=0.2677\), \(w_2=K_2/\sum _{j=1}^{4}K_j=0.2361\), \(w_3=K_3/\sum _{j=1}^{4}K_j=0.2565\) and \(w_4=K_4/\sum _{j=1}^{4}K_j=0.2398\). Therefore, the weight vector is \(w=(0.268,0.236,0.257,0.24)^T.\) Computing the score using \(\sum _{j=1}^{4}\nu (G'_i,A'_j)[\xi w_j^s+w_j^o(1-\xi )]^T,\; i=1,2,\ldots ,5,\) we obtain \(S(G'_i)\) for diverse values of \(\xi\); the results are summarized in Table 5 and Fig. 2.
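The normalization in the preceding paragraph is plain arithmetic and can be reproduced directly; the \(K_j\) values below are the ones reported in the text.

```python
# Column-wise knowledge amounts K_j reported in the text.
K = [1.152, 1.016, 1.104, 1.032]
# Formula (15): normalize the column totals to obtain the attribute weights.
w = [k / sum(K) for k in K]
```

Rounded to four decimals this gives \(w \approx (0.2677, 0.2361, 0.2565, 0.2398)\), matching the values above.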

Table 5 Values of the score function for diverse values of \(\xi\)
Fig. 2

Analysis of the effect of the coefficient \(\xi\) on the outcome of the alternatives

Using the above results, we obtain the following order of ranks of the alternatives \(G'_i,(i=1,2,3,4,5)\).

For every \(\xi \in \{0.1,0.2,\ldots ,0.9\}\), the ranking is \(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1).\)

Table 5 and Fig. 2 display the scores of the alternatives for each value of the coefficient \(\xi .\) From the above discussion, it is clear that \(G'_3\) is the most suitable choice. For a comparative study, we also solved the same problem using the method of Xia and Xu (2012). The resulting weight vector is \(w=(0.2577,0.2161,0.2765,0.2497)^T\), and the ranking of the alternatives is \(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1)\). For further comparison, the same example was solved using the method suggested by Singh et al. (2019); the preference sequence obtained is \(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1)\), with \(G'_3\) being the best alternative. It is natural to ask which method is more accurate and reliable. All of the above methods agree that \(G'_3\) is the best choice for investment. Even though the ranking order obtained by the proposed method may differ from that of Xia and Xu's (2012) method, this difference has no impact on selecting the best company to invest in. In general, the solution of a multi-attribute decision-making problem only needs to provide the best alternative; the order of the remaining alternatives is beyond the ultimate goal of an MADM problem.

This example shows that the proposed models for solving multi-attribute decision-making problems are capable of producing reasonable results. Compared with the methods of Xia and Xu (2012) and Singh et al. (2019), the proposed approach is more concise and less complicated, which reduces the computational burden. Thus, the superiority of the new knowledge measure over existing knowledge measures is confirmed.

A discussion on the value of \(\xi :\) If we substitute \(\xi =0.1\), i.e. decrease the importance of the subjective weights (\(w_j^s\)) and correspondingly increase the importance of the objective weights (\(w_j^o\)), the outcomes are as shown in Tables 4 and 5. If we give equal importance to both objective and subjective weights, i.e. \(\xi =0.5\), the preference sequence obtained is \(S(G'_3)\succ S(G'_4)\succ S(G'_5)\succ S(G'_2)\succ S(G'_1)\), with \(G'_3\) as the best alternative. On the other hand, if \(\xi =0.9\), i.e. we increase the importance of \(w_j^s\) relative to \(w_j^o\), the outputs are as shown in Tables 4 and 5. A close examination of Tables 4 and 5 demonstrates that, on shifting the preference given to the objective and subjective weights, the best alternative always remains unaltered, although the preference sequence of the alternatives may change.
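The weight mechanism behind this discussion can be illustrated numerically. In the sketch below, \(w^o\) is the expert-assigned vector from the text; as a stand-in for \(w^s\) we use the weight vector computed in Case 1, an assumption made purely for illustration since the roles of the two vectors are not restated at this point.

```python
# Expert-assigned (objective) weights from the text.
w_o = [0.2, 0.4, 0.3, 0.1]
# Assumed subjective weights: the Case 1 vector, used here only for illustration.
w_s = [0.31, 0.30, 0.13, 0.26]

def combined(xi):
    # Eq. (16) weight mechanism: xi * w_s + (1 - xi) * w_o.
    return [xi * s + (1 - xi) * o for s, o in zip(w_s, w_o)]

# xi = 0 recovers w_o, xi = 1 recovers w_s, xi = 0.5 weighs them equally.
w_half = combined(0.5)
```

Because both component vectors sum to 1, every convex combination also sums to 1, so the combined vector remains a valid weight vector for any \(\xi \in (0,1)\).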

7 Conclusions

In this paper, we have introduced and studied a new knowledge measure in a fuzzy environment. The mathematical properties of the proposed knowledge measure are discussed, and a comparative study is presented to prove its usefulness and feasibility over existing measures. Besides this, a fuzzy accuracy measure based on the proposed knowledge measure is developed and validated. Further, a new decision-making method is proposed that handles both completely known and unknown attribute weight vectors. To calculate the attribute weights, we constructed a programming model based on the developed knowledge measure. Finally, an example has been considered to demonstrate the validity and feasibility of the proposed decision-making process. The main advantage of the proposed approach is its computational simplicity for fuzzy sets.

In future, the developed MADM model can be extended to IFSs, interval-valued intuitionistic fuzzy sets and hesitant fuzzy sets. Apart from this, the developed accuracy measure can also be applied to other realistic problems such as social network analysis, medical diagnosis, remote sensing, speech recognition and so on.