1 Introduction

As a generalization of Zadeh’s fuzzy sets [1], Atanassov’s intuitionistic fuzzy sets (AIFSs), first formulated by Atanassov [2], can express and process uncertainty much better by introducing a hesitation degree. In fuzzy sets, the membership value μ(x) of x in X (called the universe) is a single real number in [0, 1], and the non-membership of x is taken as 1 − μ(x). For AIFSs, however, both the membership value μ(x) and the non-membership value v(x) are taken into account when describing any x in X, with the constraint that their sum is less than or equal to 1. Thus, an AIFS is expressed by an ordered pair of real numbers (μ(x), v(x)). The difference between 1 and μ(x) + v(x), i.e., 1 − μ(x) − v(x), is called the hesitancy degree. As a novel mathematical tool for handling uncertainty, AIFS theory is attracting more and more attention from researchers [3]. Theoretical research on AIFSs mainly concentrates on their relationship with other generalized fuzzy sets [4–6], intuitionistic fuzzy operations [7, 8], and some important mathematical measures for AIFSs [10–18]. Meanwhile, the application of AIFSs has been extended to many areas such as information fusion [19, 20], multi-attribute decision making [21], intuitionistic fuzzy optimization [22, 23], and data mining [24].

The measure of uncertainty is crucial in all theories of uncertainty. The concept of uncertainty is intricately connected with the concept of information. Therefore, measures from information theory are often used as references for describing uncertainty. For example, in probability theory, the Shannon entropy was developed. In fuzzy set theory and its related applications, some entropy-like measures have also been proposed to represent uncertainty [25, 26]. These measures quantify only one aspect of uncertainty, i.e., fuzziness. But fuzzy sets are associated with another kind of uncertainty, which is related to a lack of specificity. To make a distinction between them, we can say that fuzziness measures graduality while specificity is related to granularity. Yager [27, 28] has proposed different measures of specificity for fuzzy sets. Yager’s specificity measures quantify the degree to which a fuzzy subset focuses on (points to) one and only one element as its member.

Inspired by those works on fuzzy uncertainty, many researchers have constructed uncertainty measures for AIFSs. An axiomatic foundation for the entropy of AIFSs was first proposed by Burillo and Bustince [10] in 1996. Szmidt and Kacprzyk [15] proposed another set of axioms for intuitionistic fuzzy entropy by extending the axiomatic entropy of fuzzy sets. Both axiomatic structures define non-probabilistic entropy measures. The difference between these two definitions lies in the fact that the entropy of Burillo and Bustince [10] measures how far an AIFS is from a fuzzy set, whereas the definition given by Szmidt and Kacprzyk [15] measures how far an AIFS is from a classical set. Thus, the former is often called intuitionism, while the latter is called fuzziness.

However, it has been demonstrated that these two interesting approaches cannot capture all facets of the uncertainty associated with AIFSs. Either fuzziness or intuitionism alone may not be a satisfactory uncertainty measure for AIFSs. The reason is that a fuzziness measure answers the question about the difference between the membership and non-membership degrees, but does not consider any peculiarities of the membership and non-membership degrees themselves, as discussed by Szmidt et al. [16]. So two situations, one with the maximal entropy for a membership function equal to the non-membership function (e.g., both equal to 0.5), and another where we know absolutely nothing (i.e., both equal to 0), are equal from the point of view of the entropy measure. Nevertheless, in decision-making applications, these two situations are clearly different. On the other hand, intuitionism essentially measures the difference between 1 and μ(x) + v(x), so the uncertainty degrees of AIFSs with equal hesitancy are difficult to discriminate. In this sense, some researchers have attempted to define uncertainty measures for AIFSs based on both fuzziness and intuitionism [12, 13, 16–18].

On closer examination of existing uncertainty measures, we can find that the uncertainty caused by fuzziness decreases with the difference between μ(x) and v(x), while the uncertainty caused by intuitionism, or lack of knowledge, increases with the hesitancy degree π(x). All existing uncertainty measures for AIFSs can obviously be applied to fuzzy sets by setting v(x) = 1 − μ(x). Then another interesting question arises: how do we measure the uncertainty of a classical subset of X? Since a classical set can be regarded as a special AIFS with all membership and non-membership degrees equal to 0 or 1 and all hesitancy degrees equal to 0 for ∀x ∈ X, we have |μ(x) − v(x)| = 1 and π(x) = 0. The monotonicity properties of fuzziness and intuitionism then imply that the uncertainty degree of every classical set is minimal, always equal to 0. However, classical sets with different numbers of elements should carry different uncertainty grades. For example, the full set A = X, representing full ignorance, is more uncertain than a singleton set, which is completely precise. In fact, this contradiction also exists for other non-singleton subsets of X. To overcome such contradictions, a general uncertainty measure for AIFSs is desirable.

So far, there are few uncertainty measures that can quantify the uncertainty degree of fuzzy sets, AIFSs, and classical sets simultaneously. From our point of view, another kind of uncertainty, related to the cardinality of a set, must be considered. This kind of uncertainty is also determined by the inverse of the degree to which a set points to one and only one element as its member. In this paper, we call it non-specificity, which is different from the so-called non-specificity defined by Pal et al. in [13]. Our proposed non-specificity means that two or more alternatives are left unspecified, which represents a degree of imprecision. It only concerns those sets with cardinality larger than 1. Non-specificity is a distinctive uncertainty type in the theory of intuitionistic fuzzy sets when compared with the entropy-type uncertainty. Thus, only by combining non-specificity, fuzziness, and intuitionism together can we define a general uncertainty measure suitable for all AIFSs, including fuzzy sets and classical sets.

In this paper, we address the problem of a general uncertainty measure for AIFSs. The proposed uncertainty measure comprises three aspects: non-specificity, fuzziness, and intuitionism. Meanwhile, we uncover some axiomatic properties of the proposed measure to demonstrate its generality. The main advantage of our proposed general uncertainty measure lies in its capability of discriminating the uncertainty degrees of classical sets, fuzzy sets, and Atanassov’s intuitionistic fuzzy sets in a unified way. Hence, it provides an alternative way to quantify the uncertainty of imprecise information.

We organize the rest of this paper as follows. Section 2 gives a brief recall of AIFSs. In Section 3, we briefly review and analyze existing axiomatic entropy measures for AIFSs. Non-specificity, fuzziness, intuitionism, and a general uncertainty measure consisting of them are put forward in Section 4, where we also provide some properties of the proposed measures along with their proofs. In Section 5, illustrative examples are given to show the performance of the proposed measure. The proposed uncertainty measure is applied to solve the problem of multi-attribute decision making in Section 6. This paper is concluded in Section 7.

2 Preliminaries

In this section, we briefly recall the basic concepts related to AIFSs, and then list some operations defined on AIFSs. The connection between AIFSs and classical sets is also presented.

Definition 1

[1]. Let X = {x 1,x 2,⋯ ,x n } be the universe of discourse, then a fuzzy set A in X is defined as:

$$ A=\left\{ {\left\langle {x,\mu_{A} (x)} \right\rangle \left| {x\in X} \right.} \right\} $$
(1)

where μ A (x) : X → [0, 1] is the membership degree.

Definition 2

[2]. An intuitionistic fuzzy set A in X defined by Atanassov can be written as:

$$ A=\left\{ {\left\langle {x,\mu_{A} (x),v_{A} (x)} \right\rangle \left| {x\in X} \right.} \right\} $$
(2)

where μ A (x):X → [0, 1] and v A (x):X → [0, 1] are membership degree and non-membership degree, respectively, with the condition:

$$ 0\le \mu_{A} (x)+v_{A} (x)\le 1 $$
(3)

π A (x), determined by the following equation:

$$ \pi_{A} (x)=1-\mu_{A} (x)-v_{A} (x) $$
(4)

is called the hesitancy degree of the element x ∈ X to the set A, and π A (x)∈[0,1], ∀x ∈ X.

π A (x) is also called the intuitionistic index of x belonging to A. A greater π A (x) indicates more vagueness about x. Obviously, when π A (x) = 0, ∀x ∈ X, an AIFS degenerates into an ordinary fuzzy set.

It is worth noting that, besides Definition 2, there are other possible representations of AIFSs proposed in the literature. Hong and Kim [11] proposed the interval representation [μ A (x),1 − v A (x)] for Atanassov’s intuitionistic fuzzy set A in X instead of the pair 〈μ A (x),v A (x)〉. This approach is equivalent to the interpretation of interval-valued fuzzy sets, where μ A (x) and 1 − v A (x) represent the lower and upper bounds of the membership degree, respectively. Obviously, [μ A (x),1 − v A (x)] is a valid interval, since μ A (x) + v A (x) ≤ 1 always implies μ A (x) ≤ 1 − v A (x).

Definition 3

[9]. Let A = {〈x, μ A (x),v A (x)〉|xX} and B = {〈x, μ B (x),v B (x)〉|xX} be two Atanassov’s intuitionistic fuzzy sets over X. Some operations on them are defined as:

AB if and only if μ A (x) ≤ μ B (x), v A (x) ≥ v B (x), ∀xX;

A = B if and only if μ A (x) = μ B (x), v A (x) = v B (x),∀xX;

A C = {〈x, v A (x), μ A (x)〉|xX}, where A C is the complement of A;

$$A\cap B=\left\{ {\left\langle {x,\min \left( {\mu_{A} (x),\mu_{B} (x)} \right),\max \left( {v_{A} (x),v_{B} (x)} \right)} \right\rangle \left| {x\in X} \right.} \right\}; $$
$$A\cup B=\left\{ {\left\langle {x,\max \left( {\mu_{A} (x),\mu_{B} (x)} \right),\min \left( {v_{A} (x),v_{B} (x)} \right)} \right\rangle \left| {x\in X} \right.} \right\}; $$
$$\begin{array}{@{}rcl@{}} A+B&=&\left\{\left\langle x,\mu_{A} (x)+\mu_{B} (x)\right.\right.\\ &&\left.-\mu_{A} (x)\cdot \mu _{B} (x),v_{A} (x)\cdot v_{B} (x)\rangle \left| {x\in X}\right. \right\}; \end{array} $$
$$A\cdot B=\left\{ {\left\langle {x,\mu_{A} (x)\cdot \mu_{B} (x),v_{A} (x)+v_{B} (x)-v_{A} (x)\cdot v_{B} (x)} \right\rangle \left| {x\in X} \right.} \right\}. $$
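For readers who want to experiment with these operations, a minimal Python sketch follows (ours, not part of [9]; representing an AIFS over a fixed universe as a list of (μ, v) pairs is an assumption of the sketch):

```python
def aifs_complement(A):
    """A^C: swap membership and non-membership degrees."""
    return [(v, m) for (m, v) in A]

def aifs_intersection(A, B):
    """A ∩ B: min of memberships, max of non-memberships."""
    return [(min(ma, mb), max(va, vb)) for (ma, va), (mb, vb) in zip(A, B)]

def aifs_union(A, B):
    """A ∪ B: max of memberships, min of non-memberships."""
    return [(max(ma, mb), min(va, vb)) for (ma, va), (mb, vb) in zip(A, B)]

def aifs_sum(A, B):
    """A + B: probabilistic sum of memberships, product of non-memberships."""
    return [(ma + mb - ma * mb, va * vb) for (ma, va), (mb, vb) in zip(A, B)]

def aifs_product(A, B):
    """A · B: product of memberships, probabilistic sum of non-memberships."""
    return [(ma * mb, va + vb - va * vb) for (ma, va), (mb, vb) in zip(A, B)]
```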

Let X = {x 1, x 2,⋯ , x n } be the universe of discourse. A denotes an arbitrary nonempty subset of X, i.e., A ⊆ X and A ≠ ∅. Then A can be expressed as an AIFS as: \(\tilde {{A}}=\left \{ {\left \langle {x_{i} ,\mu _{\tilde {{A}}} (x_{i} ),v_{\tilde {{A}}} (x_{i} )} \right \rangle } \right \}\) , where \(\mu _{\tilde {{A}}} (x_{i} )=1\) for x i ∈ A and \(\mu _{\tilde {{A}}} (x_{i} )=0\) for x i ∉ A, i = 1,2,⋯ , n; \(v_{\tilde {{A}}} (x_{i} )=1-\mu _{\tilde {{A}}} (x_{i} )\) , and \(\pi _{\tilde {{A}}} (x_{i} )=0\), ∀x i ∈ X.

For example, in the universe X = {x 1, x 2, x 3, x 4}, two subsets A = {x 1, x 2} and B = {x 1, x 2, x 3} can, respectively, be written in the form of AIFS as:

$$\tilde{{A}}=\left\{ {\left\langle {x_{1} ,1,0} \right\rangle ,\left\langle {x_{2} ,1,0} \right\rangle ,\left\langle {x_{3} ,0,1} \right\rangle ,\left\langle {x_{4} ,0,1} \right\rangle } \right\}, $$
$$\tilde{{B}}=\left\{ {\left\langle {x_{1} ,1,0} \right\rangle ,\left\langle {x_{2} ,1,0} \right\rangle ,\left\langle {x_{3} ,1,0} \right\rangle ,\left\langle {x_{4} ,0,1} \right\rangle } \right\}. $$

In this sense, AIFSs can be considered as a general form of fuzzy sets and classical sets. For clarity, in the following discussions, no differentiation among classical sets, fuzzy sets, and intuitionistic fuzzy sets will be made, and the tilde over \(\tilde {{A}}\) will be omitted.

3 Entropy measures for AIFSs

As discussed earlier, most existing uncertainty measures are defined based on entropy. Burillo and Bustince [10] first presented the definition of entropy for AIFSs as follows.

Definition 4

[10] Let A I F S(X) denote the set of all AIFSs over X. A mapping E I :A I F S(X)→[0,1] is called an entropy if it satisfies the following properties:

  • (E I 1) E I (A) = 0 if and only if A is a fuzzy set;

  • (E I 2) E I (A) = 1 if and only if μ A (x) = v A (x) = 0 for every x ∈ X;

  • (E I 3) E I (A) = E I (A C);

  • (E I 4) E I (A) ≥ E I (B) if μ A (x) ≤ μ B (x) and v A (x) ≤ v B (x) for every x ∈ X.

We can notice that this entropy measure increases with the hesitancy degree. This definition basically considers entropy as a measure of how far an AIFS is from a fuzzy set. Therefore, the full uncertainty degree of an AIFS cannot be captured by this definition.

Describing the difference between fuzzy sets and classical sets, De Luca and Termini [25] defined an entropy measure \(E_{LT}^{FS} \) for fuzzy sets as:

$$\begin{array}{@{}rcl@{}} E_{LT}^{FS} (A)&=&-\frac{1}{n}\sum\limits_{i=1}^{n}\left[ \mu_{A} (x_{i} )\log_{2} \mu_{A} (x_{i} )\right.\\ &&\left.+\left( {1-\mu_{A} (x_{i} )} \right)\log_{2} \left( {1-\mu_{A} (x_{i} )} \right) \right] \end{array} $$
(5)
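As a quick illustration (our sketch, with the usual convention 0 · log2 0 = 0 made explicit), (5) can be computed as follows:

```python
import math

def de_luca_termini(mu):
    """De Luca-Termini entropy of a fuzzy set given by its membership list, per (5)."""
    def xlog2x(p):
        return p * math.log2(p) if p > 0 else 0.0  # convention: 0*log2(0) = 0
    return -sum(xlog2x(m) + xlog2x(1 - m) for m in mu) / len(mu)

print(de_luca_termini([0.5, 0.5]))  # 1.0: maximal fuzziness
print(de_luca_termini([0.0, 1.0]))  # 0.0: a classical set
```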

Then Szmidt and Kacprzyk [15] extended this definition to measure the entropy of AIFSs as follows.

Definition 5

[15]. A mapping E F :A I F S(X)→[0,1] is said to be an entropy if it satisfies the following axioms:

  • (E F 1) E F (A) = 0 if and only if A is a set;

  • (E F 2) E F (A) = 1 if and only if μ A (x i ) = v A (x i ) for every x i ∈ X;

  • (E F 3) E F (A) = E F (A C);

  • (E F 4) E F (A) ≤ E F (B) if μ A (x i ) ≤ μ B (x i ) and v A (x i ) ≥ v B (x i ) for μ B (x i ) ≤ v B (x i ) or μ A (x i ) ≥ μ B (x i ) and v A (x i ) ≤ v B (x i ) for μ B (x i ) ≥ v B (x i ).

We can find that this definition focuses only on fuzziness, without considering intuitionism. It ignores the entropy changes caused by the hesitancy degree when μ A (x) = v A (x) for all x ∈ X.

These two definitions provide us with two different ways of measuring the uncertainty of AIFSs. In fact, these two kinds of entropy exist in an AIFS simultaneously. Firstly, like a fuzzy set, an AIFS is associated with fuzziness. A measure of such fuzziness is related to the departure of the AIFS from its closest classical set. Such uncertainty may be called the fuzziness (or “fuzzy-type” uncertainty) associated with an AIFS. On the other hand, in an AIFS there exists another kind of uncertainty that is related to lack of knowledge. This aspect of uncertainty concerns how much we do not know about the membership and non-membership, i.e., it is related to the hesitancy 1 − μ A (x) − v A (x). This uncertainty may be called intuitionism-type uncertainty. Essentially, E I (A) in Definition 4 models the intuitionistic aspect of uncertainty, while E F (A) in Definition 5 models the fuzzy aspect of uncertainty. Thus, although both are interesting uncertainty measures for AIFSs, neither is competent to capture intuitionistic fuzzy uncertainty comprehensively.

To illustrate these analyses, let us consider the following examples.

Example 1

Let X be a universe, and let A and B be two AIFSs in X. For some fixed x ∈ X, suppose μ A (x) = 0.2, v A (x) = 0.5; μ B (x) = 0.6, v B (x) = 0.3. Thus, π A (x) = 0.3 whereas π B (x) = 0.1. E I (A) models entropy in terms of intuitionism, so in principle E I (A) ≥ E I (B) should hold. But clearly the conditions μ A (x) ≤ μ B (x) and v A (x) ≤ v B (x) of (E I 4) do not hold for at least one x ∈ X.

Example 2

Let A = {< x 1, 0.5, 0.5 >,< x 2, 0.3, 0.3 >} and B = {< x 1, 0.2, 0.2 >, < x 2, 0.1, 0.1 >} be two AIFSs on X = {x 1, x 2}. Then we have E F (A) = E F (B) = 1. But clearly, in terms of lack of knowledge (hesitancy, intuitionism), these two sets differ significantly. So the intuitionism-type uncertainty, which is the special and intrinsic property of an AIFS, is not reflected in this formulation.

In order to construct a comprehensive uncertainty measure to capture all kinds of uncertain information in AIFSs, Mao et al. [12] refined the axiomatic criteria for the intuitionistic fuzzy entropy measure and proposed a general model as follows:

Definition 6

[12]. The intuitionistic fuzzy entropy measure of an AIFS A is a real-valued function of π A and Δ A =|μ A −v A |, i.e., E I F (A) = g(π A , Δ A ): A I F S(X) → [0, 1], satisfying the following axiomatic principles:

  • (E I F 1) E I F (A) = 0 if and only if A is a classical set;

  • (E I F 2) E I F (A) = 1 if and only if μ A (x i ) = v A (x i ) = 0 for every x i ∈ X;

  • (E I F 3) E I F (A) = E I F (A C);

  • (E I F 4) g(π A , Δ A ) is a real-valued continuous function, increasing with respect to the first variable π A and decreasing with respect to the second variable Δ A =|μ A −v A |.

That is to say, the entropy value increases with enhanced fuzziness under the same intuitionism; similarly, it decreases with weakened intuitionism under the same fuzziness.

Pal et al. [13] have also reformulated the axioms for measures of uncertainty associated with an AIFS in a manner that captures both types of uncertainty present in it. They applied the pair (E I , E F ) to quantify the uncertainty hidden in AIFSs, with definitions of E I and E F analogous to Definition 4 and Definition 5, respectively. Moreover, a definition of E I F can be obtained by taking both E I and E F into account. This indicates that the uncertainty degree of AIFSs can be well measured by the combination of fuzziness and intuitionism.

Since classical sets can be regarded as a special type of AIFSs, an uncertainty measure for AIFSs should be capable of discriminating the uncertainty degrees of classical sets. However, neither fuzziness, intuitionism, nor their combination makes sense for classical sets. The reason is that the fuzziness and intuitionism of a classical set are both zero, yet it contains another kind of uncertainty, namely non-specificity, which is related to its cardinality. As for an AIFS, the non-specificity can be regarded as the inverse concept of the degree to which an AIFS focuses on one and only one element as its member. Obviously, non-specificity-based uncertainty is beyond the scope of fuzziness and intuitionism.

This can be demonstrated by the next example.

Example 3

Let X = {x 1, x 2, x 3} be the universe. Three subsets A = {x 1}, B = {x 1, x 2} and C = {x 1, x 2, x 3} can, respectively, be expressed by AIFSs as:

$$A=\{<x_{1} ,1,0>,<x_{2} ,0,1>,<x_{3} ,0,1>\}, $$
$$B=\{<x_{1} ,1,0>,<x_{2} ,1,0>,<x_{3} ,0,1>\}, $$
$$C=\{<x_{1} ,1,0>,<x_{2} ,1,0>,<x_{3} ,1,0>\}. $$

Given Definition 4 and Definition 5, we have E I (A) = E I (B) = E I (C) = 0 and E F (A) = E F (B) = E F (C) = 0, which means these three subsets contain equivalent uncertainty information. But, obviously, the uncertainty degree of the full set C is the largest, while A is a singleton set whose uncertainty degree is 0.

This can be further illustrated by an application in target recognition, where the discernment frame is X = {x 1, x 2, x 3}, i.e., the unknown target can be recognized as any subset of X. Three recognition reports are obtained as:

  1. R1:

    the target is {x 1};

  2. R2:

    the target is {x 1, x 2};

  3. R3:

    the target is {x 1, x 2, x 3}.

We can notice that R1 is fully precise, while R3 is completely uncertain, providing little knowledge about the target. So R1 is more useful for the decision maker than the other two reports. It is straightforward that the uncertainty degree of {x 1} is 0, and the full set {x 1, x 2, x 3} contains the largest uncertainty. Such analysis is potentially applicable to other situations. Therefore, existing entropy measures for AIFSs cannot discriminate the uncertainty degrees of those special AIFSs, namely classical sets. A general uncertainty measure suitable for all AIFSs (general AIFSs, fuzzy sets, and classical sets) is desirable.

4 General uncertainty measure for AIFSs

We should bear in mind that a properly constructed uncertainty measure for AIFSs comprises three kinds of uncertainty: non-specificity, fuzziness, and intuitionism. The non-specificity is determined by the inverse of the degree to which an AIFS focuses on one and only one element as its member. For a classical set, its non-specificity is closely related to its cardinality. As for an AIFS, we can define its non-specificity based on the differences between the maximum membership degree and the other possible membership degrees. The fuzziness and intuitionism can be constructed according to Definition 5 and Definition 4, respectively.

4.1 Non-specificity

As mentioned earlier, non-specificity means that two or more alternatives are left unspecified, which represents a degree of imprecision. It is usually inversely related to the degree to which the true alternative is in the set. So if the set consisting of all possible alternatives focuses on one and only one element, it has the lowest non-specificity degree. Let X = {x 1, x 2,⋯ , x n } be the universe of discourse. A singleton set contains only one element, so it is inherently specific, and its non-specificity degree should be zero. Intuitively, the full set X = {x 1, x 2,⋯ , x n }, representing full ignorance, implies the maximum non-specificity. Since the fuzziness and intuitionism of a classical set are both 0, the only measurable uncertainty for a classical set is the non-specificity, which refers directly to its cardinality. Regarding the well-known Hartley measure [29], shown in the next definition, as the standard measure of non-specificity for classical sets, we can determine the range of non-specificity as [0, log2 n], with n being the cardinality of the universe.

Definition 7

[29]. Let X = {x 1, x 2, ⋯ , x n } be the universe of discourse, and A be any subset of X. Then, the Hartley measure is defined as:

$$ H(A)=\log_{2} \left( {\left| A \right|} \right) $$
(6)

where |⋅| denotes the cardinality.

Yager [30] has investigated the specificity of AIFSs based on the relationship between the membership degrees of different elements. Since non-specificity is the inverse concept of specificity, we can define it in the same way. Motivated by Yager’s work, we provide the following procedure for calculating the non-specificity of an AIFS A over X = {x 1, x 2,⋯ , x n }.

  1. (1)

    Determine the largest of the membership values: α = max[μ A (x 1), μ A (x 2),⋯ , μ A (x n )]. We can assume this occurs at x ∗ , thus α = μ A (x ∗ ).

  2. (2)

    For all x ≠ x ∗ , let M A (x) = min[α,1 − v A (x)].

  3. (3)

    The non-specificity of A is defined as

    $$ N(A)=\log_{2} \left( {n+\sum\limits_{x\ne x^{\ast }} {\left( {M_{A} (x)-\alpha } \right)} } \right) $$
    (7)

    Considering μ A (x ∗ ) ≤ 1 − v A (x ∗ ), we have M A (x ∗ ) = min[α,1 − v A (x ∗ )] = α, i.e., M A (x ∗ ) − α = 0. So (7) is equivalent to the following formula:

    $$ N(A)=\log_{2} \left( {n+\sum\limits_{x} {\left( {M_{A} (x)-\alpha } \right)} } \right) $$
    (8)

This non-specificity measure can be interpreted as follows. If we use the interval representation [μ A (x),1 − v A (x)] to express the AIFS A, then 1 − v A (x) represents the maximum membership degree of x belonging to A. For x ≠ x ∗ , although its membership degree is less than α, its membership can still take the value α under the condition α ≤ 1 − v A (x). So M A (x) − α reflects the minimum difference between α and the possible membership degree of x. Moreover, since M A (x) − α is non-positive, N(A) is capable of depicting the non-specificity of an AIFS.

In the following example, we will illustrate the implementation of this procedure, together with its intuitive performance.

Example 4

Let A and B be two AIFSs on X = {x 1, x 2, x 3, x 4, x 5} given as:

$$\begin{array}{@{}rcl@{}} A&=&\left\{\left\langle {x_{1} ,0.3,0.6} \right\rangle ,\left\langle {x_{2} ,0.6,0} \right\rangle,\right.\left\langle {x_{3} ,0.4,0.6} \right\rangle ,\\ &&\left.\left\langle {x_{4} ,0.2,0.7} \right\rangle ,\left\langle {x_{5} ,0,0.5} \right\rangle \right\}, \end{array} $$
$$\begin{array}{@{}rcl@{}} B&=&\left\{ \left\langle {x_{1} ,0.3,0.6} \right\rangle ,\left\langle {x_{2} ,0,0.6} \right\rangle ,\left\langle {x_{3} ,0.4,0.6} \right\rangle ,\right.\\ &&\left.\left\langle {x_{4} ,0.7,0.2} \right\rangle ,\left\langle {x_{5} ,0,0.5} \right\rangle\right\}. \end{array} $$

For AIFS A, we have α A = max[μ A (x 1), μ A (x 2), μ A (x 3), μ A (x 4), μ A (x 5)] = μ A (x 2) = 0.6. Then we can get:

$$M_{A} (x_{1} )=\min [\alpha_{A} ,1-v_{A} (x_{1} )]=\min [0.6,0.4]=0.4; $$
$$M_{A} (x_{3} )=\min [\alpha_{A} ,1-v_{A} (x_{3} )]=\min [0.6,0.4]=0.4; $$
$$M_{A} (x_{4} )=\min [\alpha_{A} ,1-v_{A} (x_{4} )]=\min [0.6,0.3]=0.3; $$
$$M_{A} (x_{5} )=\min [\alpha_{A} ,1-v_{A} (x_{5} )]=\min [0.6,0.5]=0.5. $$

The non-specificity of A can be obtained as:

$$\begin{array}{@{}rcl@{}} N(A)&=&\log_{2} \left( 5+(0.4-0.6)+(0.4-0.6)+(0.3-0.6)\right.\\ &&\left.+(0.5-0.6) \right)=2.07. \end{array} $$

Analogously, we have α B = max[μ B (x 1), μ B (x 2), μ B (x 3), μ B (x 4), μ B (x 5)] = μ B (x 4) = 0.7. Then we can get:

$$M_{B} (x_{1} )=\min [\alpha_{B} ,1-v_{B} (x_{1} )]=\min [0.7,0.4]=0.4; $$
$$M_{B} (x_{2} )=\min [\alpha_{B} ,1-v_{B} (x_{2} )]=\min [0.7,0.4]=0.4 $$
$$M_{B} (x_{3} )=\min [\alpha_{B} ,1-v_{B} (x_{3} )]=\min [0.7,0.4]=0.4; $$
$$M_{B} (x_{5} )=\min [\alpha_{B} ,1-v_{B} (x_{5} )]=\min [0.7,0.5]=0.5. $$

The non-specificity of B can be obtained as:

$$\begin{array}{@{}rcl@{}} N(B)&=&\log_{2} \left( 5+(0.4-0.7)+(0.4-0.7)+(0.4-0.7)\right.\\ &&\left.+(0.5-0.7)\right)=1.96. \end{array} $$

Intuitively, we can notice that the degree to which B focuses on x 4 is stronger than the degree to which A focuses on x 2. Moreover, if we let Δ be the difference between the largest membership degree and the second largest membership degree, we have Δ A < Δ B . So B should be more specific than A. Comparing N(A) and N(B), we find that the proposed non-specificity measure coincides with this intuitive analysis.
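Steps (1)–(3) are easy to transcribe into code. The following Python sketch (ours; the list-of-degrees representation and the function name non_specificity are illustrative choices) reproduces the values of Example 4:

```python
import math

def non_specificity(mu, nu):
    """Non-specificity N(A) of an AIFS given as parallel lists of
    membership (mu) and non-membership (nu) degrees, following (7)."""
    n = len(mu)
    alpha = max(mu)                      # step (1): largest membership value
    star = mu.index(alpha)               # an index where the maximum occurs
    total = n
    for i in range(n):
        if i != star:
            m_i = min(alpha, 1 - nu[i])  # step (2): M_A(x) = min[alpha, 1 - v_A(x)]
            total += m_i - alpha         # step (3): accumulate M_A(x) - alpha
    return math.log2(total)

# The two AIFSs of Example 4:
mu_A = [0.3, 0.6, 0.4, 0.2, 0.0]; nu_A = [0.6, 0.0, 0.6, 0.7, 0.5]
mu_B = [0.3, 0.0, 0.4, 0.7, 0.0]; nu_B = [0.6, 0.6, 0.6, 0.2, 0.5]
print(round(non_specificity(mu_A, nu_A), 2))  # 2.07
print(round(non_specificity(mu_B, nu_B), 2))  # 1.96
```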

Let us now look at some properties of the proposed non-specificity measure.

Corollary 1

For arbitrary AIFS A in the universe X = {x 1, x 2, ⋯, x n }, its non-specificity satisfies N(A) ∈ [0, log2 n].

Proof

Since M A (x) ≤ α, we have \(n+\sum \limits _{x\ne x^{\ast }} {\left ({M_{A} (x)-\alpha } \right )} \le n\)

Considering 0 ≤ M A (x) and α ≤ 1, we get:

$$\begin{array}{@{}rcl@{}} n+\sum\limits_{x\ne x^{\ast }} {\left( {M_{A} (x)-\alpha } \right)} &=&n+\sum\limits_{x\ne x^{\ast }} {M_{A} (x)} -(n-1)\alpha\\ &\ge& n\!-\!(n-1)\alpha \ge n-(n-1)\!=\!1. \end{array} $$

Hence, \(1\le n+\sum \limits _{x\ne x^{\ast }} {\left ({M_{A} (x)-\alpha } \right )} \le n\).

Consequently, we obtain N(A) ∈ [0, log2 n].

Corollary 2

If A is a subset of X={x 1 ,x 2 ,⋯ ,x n }, its non-specificity measure N(A) coincides with its Hartley measure H(A).

Proof

Without any loss of generality, we can suppose that A = {x 1, x 2,⋯ , x m } with 1 ≤ m ≤ n. A can be rewritten as A = {〈x 1,1,0〉,⋯ ,〈x m ,1,0〉,〈x m+1,0,1〉,⋯ ,〈x n ,0,1〉}. So we have μ A (x i ) = 1 for all x i ∈ A, and hence α = 1.

Since no generality will be lost by taking x ∗ = x 1, we have M A (x 2) = ⋯ = M A (x m ) = 1 and M A (x m+1) = ⋯ = M A (x n ) = 0.

Then it follows that:

$$\begin{array}{@{}rcl@{}} n+\sum\limits_{x\ne x^{\ast }} {\left( {M_{A} (x)-\alpha } \right)} &=&n+\sum\limits_{x\ne x^{\ast }} {M_{A} (x)} -(n-1)\alpha\\ &=&n+(m-1)-(n-1)=m. \end{array} $$

Hence, \(N(A)=\log _{2} \left ({n+\sum \limits _{x\ne x^{\ast }} {\left ({M_{A} (x)-\alpha } \right )} } \right )=\log _{2} m=\log _{2} \left ({\left | A \right |} \right )=H(A)\).

This shows that the non-specificity measure N(A) coincides with the Hartley measure H(A) on classical sets.

Corollary 3

If A is a classical fuzzy set in X={x 1 ,x 2 ,⋯ ,x n }, its non-specificity degree is:

$$N(A)=\log_{2} \left( {n+\sum\limits_{x\ne x^{\ast }} {\left( {\mu_{A} (x)-\alpha } \right)}} \right). $$

Proof

Without loss of generality, we can take α = μ A (x 1), i.e., x ∗ = x 1. Since A is a classical fuzzy set, we have μ A (x) + v A (x) = 1 for every x ∈ X. α = μ A (x 1) = max[μ A (x 1), μ A (x 2),⋯ , μ A (x n )] indicates that μ A (x 1) ≥ μ A (x) = 1 − v A (x), ∀x ∈ X.

And then,

$$\begin{array}{@{}rcl@{}} M_{A} (x)&=&\min [1-v_{A} (x),\alpha ]=\min [1-v_{A} (x),\mu_{A} (x_{1} )]\\ &=&1-v_{A} (x)=\mu_{A} (x). \end{array} $$

Finally we get

$$\begin{array}{@{}rcl@{}} N(A)&=&\log_{2} \left( {n+\sum\limits_{x\ne x^{\ast }} {\left( {M_{A} (x)-\alpha } \right)} } \right)\\ &=&\log_{2} \left( {n+\sum\limits_{x\ne x^{\ast }} {\left( {\mu_{A} (x)-\alpha } \right)} } \right). \end{array} $$

Corollary 4

For an AIFS A over X={x 1 ,x 2 ,⋯ ,x n }, if there exists at least one x∈X such that μ A (x)=1, its non-specificity degree can be written as:

$$N(A)=\log_{2} \left( {n-\sum\limits_{x\ne x^{\ast }} {v_{A} (x)} } \right). $$

Proof

Given μ A (x ∗ ) = 1, we get α = 1 and M A (x) = min[α,1 − v A (x)] = 1 − v A (x).

Consequently, we have:

$$\begin{array}{@{}rcl@{}} N(A)&=&\log_{2} \left( {n+\sum\limits_{x\ne x^{\ast }} {\left( {M_{A} (x)-\alpha } \right)} } \right)\\ &=&\log_{2} \left( {n+\sum\limits_{x\ne x^{\ast }} {\left( {1-v_{A} (x)-1} \right)} } \right)\\ &=&\log_{2} \left( {n-\sum\limits_{x\ne x^{\ast}} {v_{A} (x)} } \right). \end{array} $$

Corollary 5

For an AIFS A over X={x 1 ,x 2 ,⋯ ,x n }, if α=μ A (x ∗ )= max[μ A (x 1 ),μ A (x 2 ),⋯ ,μ A (x n )] and v A (x) ≤ v A (x ∗ ) for all x∈X, its non-specificity is N(A)= log2 n.

Proof

Since μ A (x ) + v A (x ) ≤ 1, then μ A (x ) ≤ 1 − v A (x ). Based on v A (x) ≤ v A (x ), we can also get 1 − v A (x)≥1 − v A (x ) ≥ μ A (x ) = α. So we have M A (x) = min[α,1 − v A (x)] = α.

Consequently,

$$\begin{array}{@{}rcl@{}} N(A)&=&\log_{2} \left( {n+\sum\limits_{x\ne x^{\ast }} {\left( {M_{A} (x)-\alpha } \right)} } \right)\\ &=&\log_{2} \left( {n+\sum\limits_{x\ne x^{\ast }} {\left( {\alpha -\alpha } \right)} } \right)=\log_{2} n. \end{array} $$

Corollary 5 describes a situation where an AIFS contains the greatest non-specificity. The condition v A (x) ≤ v A (x ∗ ) implies the possibility that all elements of the AIFS A can take the same membership degree; thus, its non-specificity is maximal. It is straightforward that when all elements have the same membership grades and non-membership grades in A, the non-specificity of A is maximal. This also explains the maximum non-specificity of the full set X.

Next, we will give the property of function M for a further investigation on the non-specificity measure.

Corollary 6

[30]. Given an AIFS A in X = {x 1, x 2, ⋯, x n }, for each x ≠ x ∗ , there exists a value λ x ∈ [0, 1] such that M A (x) = μ A (x) + λ x π A (x).

Proof

By the definition of M A (x), we should consider two cases. One case happens when α ≥ 1 − v A (x) for x ≠ x ∗ . Then we have M A (x) = 1 − v A (x) = μ A (x) + π A (x), i.e., M A (x) = μ A (x) + λ x π A (x) with λ x = 1.

Considering another case where α ≤ 1 − v A (x), we have μ A (x) ≤ M A (x) = α ≤ 1 − v A (x). It is obvious that there exists a value λ x ∈ [0, 1] such that M A (x) = μ A (x) + λ x (1 − v A (x) − μ A (x)) = μ A (x) + λ x π A (x).

In the light of this result we can view our formulation for the non-specificity of an AIFS A as the non-specificity of a particular classical fuzzy set compatible with A.

Let A be an AIFS in X = {x 1, x 2,⋯ , x n }, and B be a classical fuzzy set defined as:

μ B (x) = min[μ A (x ∗ ),1 − v A (x)] and v B (x) = 1 − μ B (x), for all x ∈ X.

By Corollary 3, we can get the non-specificity of B as:

$$\begin{array}{@{}rcl@{}} N(B)&=&\log_{2} \left( {n+\sum\limits_{x\ne x^{\ast }} {\left( {\mu_{B} (x)-\alpha } \right)} } \right)\\ &=&\log_{2} \left( {n+\!\sum\limits_{x\ne x^{\ast }} {\left( {\min \left[ {\mu_{A} (x^{\ast }),1\!-v_{A} (x)} \right]-\alpha } \right)} }\! \right)\!. \end{array} $$

So we have N(A) = N(B). Thus the non-specificity of the AIFS A equals that of B, a particular standard fuzzy set compatible with A. A natural question then arises: why do we use this special compatible standard fuzzy set, instead of an arbitrary classical fuzzy set F associated with A satisfying μ F (x) = μ A (x) + λ x π A (x) with λ x ∈ [0, 1]? We will borrow a theorem from [30] and modify it to show the uniqueness of our choice.

Corollary 7

Assume X={x 1 ,x 2 ,⋯ ,x n }. Let A={〈x,μ A (x),v A (x)〉|x∈X} be an AIFS on X and α= max[μ A (x)]=μ A (x ∗ ). B denotes the classical fuzzy set associated with A such that μ B (x)= min[μ A (x ∗ ),1−v A (x)] and v B (x)=1−μ B (x), for all x∈X. F is an arbitrary classical fuzzy set associated with A satisfying μ F (x)=μ A (x)+λ x π A (x) with λ x ∈[0,1]. Then the non-specificity of B is not smaller than that of F, i.e., N(F) ≤ N(B).

Proof

In this case we have:

$$\begin{array}{@{}rcl@{}} N(B)&=&\log_{2} \left( {n+\sum\limits_{x\ne x^{\ast }} {\left( {\mu_{B} (x)-\alpha } \right)} } \right)\\ &=&\log_{2} \left( {n+\sum\limits_{x\ne x^{\ast }} {\left( {\min \left[ {\alpha ,1-v_{A} (x)} \right]-\alpha } \right)} } \right). \end{array} $$

For convenience, we assume that β = max[μ F (x)] = μ F (x ∗∗).

Since \(\mu _{F} (x^{\ast })=\mu _{A} (x^{\ast })+\lambda _{x^{\ast }} \pi _{A} (x^{\ast })\ge \alpha \), we have β = max[μ F (x)] ≥ α.

The non-specificity grades of F and B can be respectively written as:

$$\begin{array}{@{}rcl@{}} N(F)&=&\log_{2} \left( {n+\sum\limits_{x\ne x^{\ast \ast }} {\left( {\mu _{F} (x)-\beta } \right)} } \right) \\ &=&\log_{2} \left( {n-\!(n-\!1)\beta +\mu_{F} (x^{\ast })+\!\sum\limits_{x\ne x^{\ast \ast },x^{\ast }} {\mu_{F} (x)} } \right) \\ &=&\log_{2} \left( {n\!+\left( {\mu_{F} (x^{\ast })-\beta } \right)+\!\sum\limits_{x\ne x^{\ast \ast },x^{\ast }}\! {\left( {\mu_{F} (x)\!-\beta } \right)} }\! \right)\\ \end{array} $$
$$\begin{array}{@{}rcl@{}} N(B)&=&\log_{2} \left( {n+\sum\limits_{x\ne x^{\ast }} {\left( {\mu_{B} (x)-\alpha } \right)} } \right) \\ &=&\log_{2} \left( {n+\sum\limits_{x\ne x^{\ast }} {\left( {\min \left[ {\alpha ,1-v_{A} (x)} \right]-\alpha } \right)} } \right) \\ &=&\log_{2} \left( {n-(n-1)\alpha \!+\!\sum\limits_{x\ne x^{\ast }} {\left( {\min \left[ {\alpha ,1-v_{A} (x)} \right]} \right)} } \right) \\ &=&\log _{2}\left( \vphantom{\sum\limits_{x\ne x^{\ast },x^{\ast \ast}}} n-(n-1)\alpha +\min \left[ {\alpha ,1-v_{A} (x^{\ast \ast })} \right]\right.\\ &&\left.+\sum\limits_{x\ne x^{\ast },x^{\ast \ast }} {\left( {\min \left[ {\alpha ,1-v_{A} (x)} \right]} \right)} \right) \\ &=&\log_{2} \left( \vphantom{\sum\limits_{x\ne x^{\ast },x^{\ast \ast }}}n+\left( {\min \left[ {\alpha ,1-v_{A} (x^{\ast \ast })} \right]-\alpha } \right)\right.\\ &&+\left.\sum\limits_{x\ne x^{\ast },x^{\ast \ast }} {\left( {\min \left[ {\alpha ,1-v_{A} (x)} \right]-\alpha } \right)} \right) \\ \end{array} $$

When x ≠ x ∗ , x ∗∗ , two cases should be considered:

  1. (1)

    α ≤ 1 − v A (x), then min[α,1 − v A (x)] − α = 0, thus μ F (x) − β ≤ 0 ≤ min[α,1 − v A (x)] − α.

  2. (2)

    α ≥ 1 − v A (x), then min[α,1 − v A (x)] − α = 1 − v A (x) − α. Moreover, considering the relations μ F (x) = μ A (x) + λ x π A (x) = μ A (x) + λ x (1 − μ A (x) − v A (x)) = (1 − λ x )μ A (x) + λ x (1 − v A (x)) ≤ 1 − v A (x), we get: μ F (x) − β ≤ 1 − v A (x) − β ≤ 1 − v A (x) − α.

Thus for any x ≠ x ∗ , x ∗∗ , we have μ F (x) − β ≤ min[α,1 − v A (x)] − α.

Given 1 − v A (x ∗∗ ) ≥ μ F (x ∗∗ ) = β ≥ α, we have min[α,1 − v A (x ∗∗ )] − α = α − α = 0. Comparing with μ F (x ∗ ) − β ≤ 0, we obtain μ F (x ∗ ) − β ≤ min[α,1 − v A (x ∗∗ )] − α.

Considering both μ F (x ∗ ) − β ≤ min[α,1 − v A (x ∗∗ )] − α and μ F (x) − β ≤ min[α,1 − v A (x)] − α for x ≠ x ∗ , x ∗∗ , we obtain N(F) ≤ N(B).

Thus we see that our definition of the non-specificity of an AIFS equals the largest non-specificity of any classical fuzzy subset consistent with the AIFS. That is to say, the non-specificity of an AIFS is not smaller than that of any classical fuzzy set associated with it, which coincides with our intuitive cognition.

Another question of interest is finding the associated standard fuzzy subset F with the smallest non-specificity. Inspired by the definition of the vortex subset given by Yager [30], we can conclude that the classical fuzzy set with the smallest non-specificity must be a vortex subset. We first introduce this concept.

Definition 8

[30]. Let A = {〈x, μ A (x), v A (x)〉|x∈X} be an AIFS on X = {x 1, x 2,⋯ , x n }. F denotes a classical fuzzy set associated with A. F is called a vortex subset of A, focused on x ∗ , if it satisfies μ F (x ∗ ) = 1 − v A (x ∗ ) and μ F (x) = μ A (x) for all x ≠ x ∗ .

From this definition we can note that there exist n vortex subsets associated with A. We denote by F j the vortex subset in which x ∗ = x j . For the vortex subset focused on x j , we have:

μ F (x j ) = μ A (x j ) + λ x π A (x j ) with λ x = 1, and μ F (x) = μ A (x) + λ x π A (x) for x ≠ x j with λ x = 0.

Corollary 8

Assume A={〈x,μ A (x),v A (x)〉|x∈X} is an AIFS on X={x 1 ,x 2 ,⋯ ,x n }. If F is any consistent classical fuzzy subset associated with A, then there exists some vortex subset F k such that N(F k ) ≤ N(F).

Proof

Let μ F (x k ) = max[μ F (x)], then we have:

$$N(F)=\log_{2} \left( {n+\sum\limits_{x\ne x_{k} } {\left( {\mu_{F} (x)-\mu_{F} (x_{k} )} \right)} } \right) $$

Let F k be the vortex subset of A focusing on x k , then we have:

\(\mu _{F_{k} } (x_{k} )=1-v_{A} (x_{k} )\) and \(\mu _{F_{k} } (x_{j} )=\mu _{A} (x_{j} )\) for x j x k .

\(\mu _{F} (x_{k} )=\mu _{A} (x_{k} )+\lambda _{x_{k} } \pi _{A} (x_{k} )\ge \mu _{F} (x)=\mu _{A} (x)+\lambda _{x} \pi _{A} (x)\) also indicates that:

$$\mu_{F_{k} } (x_{k} )=\mu_{A} (x_{k} )+\pi_{A} (x_{k} )\ge \mu_{A} (x)=\mu_{F_{k} } (x). $$

So the non-specificity of F k is:

$$N(F_{k} )=\log_{2} \left( {n+\sum\limits_{x\ne x_{k} } {\left( {\mu_{F_{k} } (x)-\mu_{F_{k} } (x_{k} )} \right)} } \right). $$

For x ≠ x k , regarding μ F (x) = μ A (x) + λ x π A (x) with λ x ∈ [0, 1] and \(\mu _{F_{k} } (x)=\mu _{A} (x)+0\cdot \pi _{A} (x)\) , we have \(\mu _{F_{k} } (x)\le \mu _{F} (x)\).

Moreover, given \(\mu _{F} (x_{k} )=\mu _{A} (x_{k} )\le 1-v_{A} (x_{k} )=\mu _{F_{k} } (x_{k} )\), we can get \(\mu _{F_{k} } (x)-\mu _{F_{k} } (x_{k} )\le \mu _{F} (x)-\mu _{F} (x_{k} )\). So we can finally get N(F k ) ≤ N(F).

This theorem says that for any classical fuzzy set F associated with A, we can find a vortex subset F k such that N(F k ) ≤ N(F). So the classical fuzzy subset with the smallest non-specificity must be a vortex subset.
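The following sketch (ours, reusing the non_specificity function from the earlier sketch) enumerates the n vortex subsets of Definition 8 and selects one with the smallest non-specificity, in line with Corollary 8:

```python
def vortex_subsets(mu, nu):
    """The n vortex subsets F_j of an AIFS (Definition 8), as membership lists."""
    n = len(mu)
    return [[1 - nu[j] if i == j else mu[i] for i in range(n)]
            for j in range(n)]

# For the AIFS A of Example 4, find the vortex subset with smallest N:
mu_A = [0.3, 0.6, 0.4, 0.2, 0.0]; nu_A = [0.6, 0.0, 0.6, 0.7, 0.5]
best = min(vortex_subsets(mu_A, nu_A),
           key=lambda f: non_specificity(f, [1 - m for m in f]))
print(best)  # the minimizing vortex subset (a classical fuzzy set)
```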

4.2 Fuzziness and intuitionism

The fuzziness of an AIFS is determined by the difference between the membership and non-membership degrees. Definition 5 gives the axiomatic requirements of a fuzziness measure, so any entropy measure satisfying E F 1–E F 4 can be treated as a fuzziness measure for AIFSs. In this sense, many proposed entropy measures for AIFSs are actually fuzziness measures, such as Szmidt’s entropy [15], Vlachos’ intuitionistic fuzzy entropy measure [17], and Xia’s entropy measure [18]. Here we define a new fuzziness measure for AIFSs as follows.

Definition 9

Let A = {〈x, μ A (x), v A (x)〉|xX} be an AIFS in X = {x 1, x 2,⋯ , x n }. Then the fuzziness measure of A can be defined as:

$$\begin{array}{@{}rcl@{}} E_{F} (A)&=&-\frac{1}{n}\sum\limits_{i=1}^{n} \left[\left( {\mu_{A} (x_{i} )+0.5\pi_{A} (x_{i} )} \right)\log_{2} \left( \mu_{A} (x_{i} )\right.\right.\\ &&\left.\left.+0.5\pi _{A} (x_{i} ) \right)+\left( v_{A} (x_{i} )\right.\right.\\ &&\left.\left.+0.5\pi_{A} (x_{i} ) \right)\log_{2} \left( {v_{A} (x_{i} )+0.5\pi_{A} (x_{i} )} \right) \right] \end{array} $$
(9)
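A direct transcription of (9) may read as follows (our sketch, with the convention 0 · log2 0 = 0; the set A of Example 2 serves as a check):

```python
import math

def fuzziness(mu, nu):
    """Fuzziness E_F(A) of an AIFS per (9), with 0*log2(0) taken as 0."""
    def xlog2x(p):
        return p * math.log2(p) if p > 0 else 0.0
    total = 0.0
    for m, v in zip(mu, nu):
        pi = 1 - m - v                                    # hesitancy degree
        total += xlog2x(m + 0.5 * pi) + xlog2x(v + 0.5 * pi)
    return -total / len(mu)

print(fuzziness([1.0, 0.0], [0.0, 1.0]))   # 0.0: a classical set
print(fuzziness([0.5, 0.3], [0.5, 0.3]))   # ≈ 1.0: set A of Example 2
```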

Lemma 1

E F (A) satisfies all properties in Definition 5

Proof

E F 1: Let A be a set with membership and non-membership values either 0 or 1 for all x i ∈ X. Since π A (x i ) = 0 for all x i ∈ X, we obtain E F (A) = 0 from (9).

Suppose E F (A) = 0. Since every term in the summation is non-positive, we deduce that every term should equal zero, i.e.,

(μ A (x i )+0.5π A (x i ))log2(μ A (x i )+0.5π A (x i ))=0 and (v A (x i )+0.5π A (x i ))log2(v A (x i )+0.5π A (x i ))=0.

So each of μ A (x i )+0.5π A (x i ) and v A (x i )+0.5π A (x i ) can only take the value 0 or 1.

Taking μ A (x i )+0.5π A (x i ) + v A (x i )+0.5π A (x i ) = μ A (x i ) + π A (x i ) + v A (x i ) = 1 into account, we get the following two cases:

  1. (1)

    μ A (x i )+0.5π A (x i ) = 1, v A (x i )+0.5π A (x i ) = 0, thus v A (x i ) = π A (x i ) = 0, μ A (x i ) = 1;

  2. (2)

    μ A (x i )+0.5π A (x i ) = 0, v A (x i )+0.5π A (x i ) = 1, thus μ A (x i ) = π A (x i ) = 0, v A (x i ) = 1.

This set of equations implies that the membership and non-membership values are either 0 or 1 for all x i X. So A is a set.

E F 2: Let v A (x i ) = μ A (x i ) for all x i ∈ X. Applying this condition to μ A (x i ) + π A (x i ) + v A (x i ) = 1 yields μ A (x i ) = v A (x i ) = 0.5(1 − π A (x i )).

So we have:

$$\begin{array}{@{}rcl@{}} &&\left( \mu_{A} (x_{i} )+0.5\pi_{A} (x_{i} ) \right)\log_{2} \left( {\mu_{A} (x_{i} )+0.5\pi_{A} (x_{i} )} \right)\\ &&\quad+\left( v_{A} (x_{i} )+ 0.5\pi_{A} (x_{i} )\right)\log_{2} \left( {v_{A} (x_{i} )+0.5\pi_{A} (x_{i} )} \right)\\ &&\quad=2\left( {\mu_{A} (x_{i} )+0.5\pi_{A} (x_{i} )} \right)\log_{2} \left( {\mu_{A} (x_{i} )+0.5\pi_{A} (x_{i} )} \right) \\ &&\quad=2\left( 0.5(1-\pi_{A} (x_{i} ))\right.\\ &&\quad\left.+0.5\pi_{A} (x_{i} )\right)\log_{2} \left( {0.5(1-\pi_{A} (x_{i} ))+0.5\pi_{A} (x_{i} )} \right) \\ &&\quad=2\cdot 0.5\log_{2} (0.5)=-1 \\ \end{array} $$

By (9) we can get E F (A) = 1.

Let us now suppose that E F (A) = 1. Writing μ A (x i ) − v A (x i ) = t A (x i ) with t A (x i ) ∈ [−1, 1], (9) can be rewritten as:

$$\begin{array}{@{}rcl@{}} E_{F} (A)&=&-\frac{1}{n}\sum\limits_{i=1}^{n} \left[ \left( \mu_{A} (x_{i} )+0.5\left( 1-\mu_{A} (x_{i} )-v_{A} (x_{i} ) \right) \right)\right.\\ &&\times \log_{2} \left( \mu_{A} (x_{i} )+0.5\left( 1-\mu_{A} (x_{i} )-v_{A} (x_{i} ) \right) \right)\\ &&+\left( v_{A} (x_{i} )+0.5\left( 1-\mu_{A} (x_{i} )-v_{A} (x_{i} ) \right) \right)\\ &&\left.\times \log_{2} \left( v_{A} (x_{i} )+0.5\left( 1-\mu_{A} (x_{i} )-v_{A} (x_{i} ) \right) \right) \right]\\ &=&\frac{1}{n\ln 2}\sum\limits_{i=1}^{n} \left[ -0.5\left( 1+t_{A} (x_{i} ) \right)\ln \left( 0.5\left( 1+t_{A} (x_{i} ) \right) \right)\right.\\ &&\left.-0.5\left( 1-t_{A} (x_{i} ) \right)\ln \left( 0.5\left( 1-t_{A} (x_{i} ) \right) \right) \right] \end{array} $$
(10)

Consider the following function:

$$\begin{array}{@{}rcl@{}} f(t)&=&-\left( {0.5+0.5t} \right)\ln \left( {0.5+0.5t} \right)\\ &&-\left( {0.5-0.5t} \right)\ln \left( {0.5-0.5t} \right) \end{array} $$

where t ∈ [−1, 1]. Then we have f ′(t) = 0.5ln(1 − t) − 0.5ln(1 + t), from which the following inequalities can be obtained: f ′(t) ≤ 0 for t ∈ [0, 1] and f ′(t) ≥ 0 for t ∈ [−1, 0]. So f(t) attains its maximum value ln 2 when t = 0.

We can observe that each summand in (10) has the form f(t A (x i ))/ln 2. Therefore, E F (A) ≤ 1, and E F (A) attains its maximum value 1 if and only if v A (x i ) = μ A (x i ) for all x i ∈ X.

E F 3: Evaluating (9) for A C = {〈x, v A (x), μ A (x)〉|xX} yields E F (A) = E F (A C).

E F 4: Let us reconsider the function f(t). f ′(t) ≤ 0 for t ∈ [0, 1] and f ′(t) ≥ 0 for t ∈ [−1, 0] indicate that f(t) is increasing with respect to t for t ∈ [−1, 0] and decreasing for t ∈ [0, 1]. The condition μ A (x i ) ≤ μ B (x i ) ≤ v B (x i ) ≤ v A (x i ) implies μ A (x i ) − v A (x i ) ≤ μ B (x i ) − v B (x i ) ≤ 0, i.e., t A (x i ) ≤ t B (x i ) ≤ 0. Then it follows that f(t A (x i )) ≤ f(t B (x i )), thus E F (A) ≤ E F (B).

Similarly, considering μ A (x i ) ≥ μ B (x i ) ≥ v B (x i ) ≥ v A (x i ), we have μ A (x i ) − v A (x i ) ≥ μ B (x i ) − v B (x i ) ≥ 0. From the monotonicity of f(t), we again obtain f(t A (x i )) ≤ f(t B (x i )), i.e., E F (A) ≤ E F (B).

In the light of property E F 4, we can also get the following corollary.

Corollary 9

E F (A) is a decreasing function with respect to the argument Δ A (x i )=|μ A (x i )−v A (x i )| for all x i ∈X

Proof

For convenience, we assume that μ A (x i ) − v A (x i ) = t A (x i ), t A (x i )∈[−1,1]. Let us consider the following function:

$$\begin{array}{@{}rcl@{}} f(t)&=&-\left( {0.5+0.5t} \right)\ln \left( {0.5+0.5t} \right)\\ &&-\left( {0.5-0.5t} \right)\ln \left( {0.5-0.5t} \right), \quad t\in [-1,1]. \end{array} $$

Given y = |t|∈[0,1], we have:

f(t) = g(y) = −(0.5+0.5y)ln(0.5+0.5y)−(0.5−0.5y)ln(0.5−0.5y) for t ∈ [0,1],

f(t) = h(y) = −(0.5−0.5y)ln(0.5−0.5y)−(0.5+0.5y)ln(0.5+0.5y) for t ∈ [−1, 0].

We can note that g(y) and h(y) have forms identical to f(t). f ′(t) ≤ 0 for t ∈ [0, 1] indicates that f(t) is decreasing in t on [0, 1]. Since y = |t| ∈ [0, 1], g(y) and h(y) are decreasing in y. Finally, from the expression in (10), we obtain that E F (A) is a decreasing function with respect to the argument Δ A (x i ) = |μ A (x i ) − v A (x i )| for all x i ∈ X.

Corollary 10

If A reduces to a classical fuzzy set, E F (A) reduces to the De Luca–Termini entropy \(E_{LT}^{FS} \).

Proof

It is straightforward from (9) by assigning π A (x i ) = 0 for all x i ∈ X.

Now we come to the concept of intuitionism. As discussed earlier, Definition 4 demonstrates the axiomatic requirements of the intuitionism measure for an AIFS. We can see that the intuitionism measure is an increasing function of the hesitancy degree, so any increasing function f: π A (x i ) → [0,1] satisfying f(0) = 0 and f(1) = 1 can be applied to define a normalized intuitionism measure. Obviously, the direct use of the hesitancy degree is the most straightforward choice, and the hesitancy values π A (x i ) are commonly used to define an intuitionism measure. Motivated by its simplicity and good mathematical properties, we also adopt this form. So the intuitionism measure of an AIFS A = {〈x, μ A (x), v A (x)〉|x∈X} in X = {x 1, x 2,⋯ , x n } can be defined as:

$$ E_{I} =\frac{1}{n}\sum\limits_{i=1}^{n} {\pi_{A} (x_{i} )} $$
(11)

Obviously, E I is increasing with the value of π A (x i ).

Combining E I and E F , we can get the normalized intuitionistic fuzzy entropy of an AIFS as:

$$ E_{IF} =\frac{1}{2}\left( {E_{I} +E_{F} } \right) $$
(12)

It is obvious that E I F satisfies all requirements in Definition 6.
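Continuing the sketches above (and reusing the fuzziness function), (11) and (12) can be written as:

```python
def intuitionism(mu, nu):
    """Intuitionism E_I(A): the mean hesitancy degree, per (11)."""
    return sum(1 - m - v for m, v in zip(mu, nu)) / len(mu)

def if_entropy(mu, nu):
    """Normalized intuitionistic fuzzy entropy E_IF(A), per (12)."""
    return 0.5 * (intuitionism(mu, nu) + fuzziness(mu, nu))
```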

To illustrate the properties of the proposed fuzziness measure and intuitionism measure more intuitively, Figs. 1 and 2 depict the fuzziness and intuitionism of AIFSs in X = {x}, respectively. In each figure, the color of each point (μ A (x), v A (x), π A (x)) on the simplex denotes the value of E F or E I of the set A = {〈x, μ A (x), v A (x)〉} corresponding to that point. As a comparison, Fig. 3 illustrates the intuitionistic fuzzy entropy E I F defined in (12).

Fig. 1 Fuzziness measure E F of AIFSs in X = {x}

Fig. 2 Intuitionism measure E I of AIFSs in X = {x}

Fig. 3 Intuitionistic fuzzy entropy measure E I F of AIFSs in X = {x}

By now, three kinds of uncertainty measure have been presented. Their combination can be applied to define a general uncertainty measure capable of capturing all kinds of uncertainty. In most cases the actual value is not as important as the relative uncertainty, which gives us considerable freedom in selecting the actual form of the uncertainty measure to be used. So we can present a form of general uncertainty measure as:

$$ U(A)=N(A)+E_{I} (A)+E_{F} (A) $$
(13)

We should bear in mind that each properly constructed uncertainty measure for AIFSs, due to the very sense of uncertainty, should take values in the interval [0,1]; thus the general uncertainty measure must not exceed 1. So the non-specificity measure of an AIFS A in X = {x 1, x 2,⋯ , x n } should be normalized as:

$$ N_{N} (A)=N(A)/\log_{2} n $$
(14)

Then a normalized general uncertainty measure of A can be defined as:

$$ GU(A)=\frac{1}{3}\left( {N_{N} (A)+E_{I} (A)+E_{F} (A)} \right) $$
(15)
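Putting the pieces together (reusing non_specificity, intuitionism, and fuzziness from the sketches above), (14) and (15) read:

```python
import math

def general_uncertainty(mu, nu):
    """Normalized general uncertainty GU(A), per (14) and (15)."""
    n_norm = non_specificity(mu, nu) / math.log2(len(mu))           # (14)
    return (n_norm + intuitionism(mu, nu) + fuzziness(mu, nu)) / 3  # (15)

# AIFS A_1 of Example 6 below:
mu = [0.3, 0.6, 0.4, 0.2, 0.0]; nu = [0.6, 0.0, 0.6, 0.7, 0.5]
print(round(general_uncertainty(mu, nu), 2))  # 0.65
```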

5 Illustrative examples

To further illustrate the performance of the proposed general uncertainty measure, we employ the following examples.

Example 5

Let X = {x 1, x 2, x 3, x 4, x 5} be the universe of discourse. Five subsets of X are given as:

$$\begin{array}{@{}rcl@{}} A_{1}&=&\{x_{1} \},\quad A_{2} =\{x_{1} ,x_{2} \},\quad A_{3} =\{x_{1} ,x_{2} ,x_{3} \},\\ A_{4}&=&\{x_{1} ,x_{2} ,x_{3} ,x_{4} \},\quad A_{5} =\{x_{1} ,x_{2} ,x_{3} ,x_{4} ,x_{5} \}. \end{array} $$

These classical sets can be written in forms of AIFSs as:

$$A_{1} =\left\{ {\left\langle {x_{1} ,1,0} \right\rangle ,\left\langle {x_{2} ,0,1} \right\rangle ,\left\langle {x_{3} ,0,1} \right\rangle ,\left\langle {x_{4} ,0,1} \right\rangle ,\left\langle {x_{5} ,0,1} \right\rangle } \right\}, $$
$$A_{2} =\left\{ {\left\langle {x_{1} ,1,0} \right\rangle ,\left\langle {x_{2} ,1,0} \right\rangle ,\left\langle {x_{3} ,0,1} \right\rangle ,\left\langle {x_{4} ,0,1} \right\rangle ,\left\langle {x_{5} ,0,1} \right\rangle } \right\}, $$
$$A_{3} =\left\{ {\left\langle {x_{1} ,1,0} \right\rangle ,\left\langle {x_{2} ,1,0} \right\rangle ,\left\langle {x_{3} ,1,0} \right\rangle ,\left\langle {x_{4} ,0,1} \right\rangle ,\left\langle {x_{5} ,0,1} \right\rangle } \right\}, $$
$$A_{4} =\left\{ {\left\langle {x_{1} ,1,0} \right\rangle ,\left\langle {x_{2} ,1,0} \right\rangle ,\left\langle {x_{3} ,1,0} \right\rangle ,\left\langle {x_{4} ,1,0} \right\rangle ,\left\langle {x_{5} ,0,1} \right\rangle } \right\}, $$
$$A_{5} =\left\{ {\left\langle {x_{1} ,1,0} \right\rangle ,\left\langle {x_{2} ,1,0} \right\rangle ,\left\langle {x_{3} ,1,0} \right\rangle ,\left\langle {x_{4} ,1,0} \right\rangle ,\left\langle {x_{5} ,1,0} \right\rangle } \right\}. $$

Table 1 gives all kinds of uncertainty for these five classical sets. We can note that the fuzziness, intuitionism, and intuitionistic fuzzy entropy all remain unchanged across the different classical sets; thus they cannot discriminate the uncertainty hidden in different sets. From A 1 to A 5, the non-specificity degree increases, which also changes the normalized non-specificity and the general uncertainty. Moreover, the increasing general uncertainty coincides with the positive correlation between the cardinality of a set and its uncertainty degree. So we can conclude that the general uncertainty measure is competent to discriminate the uncertainty of classical sets.

Table 1 Uncertainty degree of each classical set in Example 5
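The behavior reported in Table 1 can be regenerated with the sketches above; the short loop below (ours) confirms that the general uncertainty grows with the cardinality of a classical set while fuzziness and intuitionism stay at 0:

```python
universe = 5
for m in range(1, universe + 1):          # A_m = {x_1, ..., x_m} as an AIFS
    mu = [1.0] * m + [0.0] * (universe - m)
    nu = [1.0 - x for x in mu]
    print(m, round(general_uncertainty(mu, nu), 3))
# prints GU = 0.0, 0.144, 0.228, 0.287, 0.333 for |A| = 1, ..., 5
```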

Example 6

Let X = {x 1, x 2, x 3, x 4, x 5} be the universe of discourse. Two AIFSs in X are given as:

$$\begin{array}{@{}rcl@{}} A_{1} &=&\left\{\left\langle {x_{1} ,0.3,0.6} \right\rangle ,\left\langle {x_{2} ,0.6,0} \right\rangle ,\left\langle {x_{3} ,0.4,0.6} \right\rangle ,\right.\\ &&\left.\left\langle {x_{4} ,0.2,0.7} \right\rangle ,\left\langle {x_{5} ,0,0.5}\right\rangle \right\}, \end{array} $$
$$\begin{array}{@{}rcl@{}} A_{2} &=&{A_{1}^{C}} =\left\{\left\langle {x_{1} ,0.6,0.3} \right\rangle ,\left\langle {x_{2} ,0,0.6} \right\rangle ,\left\langle {x_{3} ,0.6,0.4} \right\rangle,\right.\\ &&\left.\left\langle {x_{4} ,0.7,0.2} \right\rangle ,\left\langle {x_{5} ,0.5,0} \right\rangle \right\}. \end{array} $$

By (7), (9) and (11), we can get:

$$N(A_{1} )=2.07, E_{F} (A_{1} )=0.87, E_{I} (A_{1} )=0.18; $$
$$N(A_{2} )=2.20, E_{F} (A_{2} )=0.87, E_{I} (A_{2} )=0.18. $$

Considering (14) and (15), we have:

$$N_{N} (A_{1} )=0.89, \quad GU(A_{1} )=0.65; $$
$$N_{N} (A_{2} )=0.95, \quad GU(A_{2} )=0.67. $$

Generally, the information conveyed by an AIFS is different from that of its complement. From this example we can see that the general uncertainty measure can reflect this difference, due to the participation of non-specificity. The third property in each of Definitions 4, 5, and 6 shows that the fuzziness measure, the intuitionism measure, and their combination all lack the capability to discriminate an AIFS from its complement. Equation (7) shows that in most cases the non-specificity grades of an AIFS and its complement are distinct. In this example, for AIFS A 1, the membership grade of x 2 is maximal, and it is quite different from the other membership grades. For AIFS A 2, although its greatest membership grade is 0.7, occurring at x 4, there exist two identical membership grades \(\mu _{A_{2} } (x_{1} )=\mu _{A_{2} } (x_{3} )=0.6\), which are close to the maximum value 0.7. So the non-specificity of A 2 is greater than that of A 1, and thus A 2 should be more uncertain than A 1. The general uncertainty values obtained by (15) coincide with this analysis.

Example 7

Let X = {x 1, x 2} be the universe of discourse. A typical AIFS in X is defined as A = {〈x 1, a, b〉,〈x 2, b, a〉}.

By the equivalent definition of non-specificity measure in (8), we can get:

$$\begin{array}{@{}rcl@{}} N(A)&=&\log_{2} \left( 2+\min (1-b,\max (a,b))\right.\\ &&\left.+\min (1-a,\max (a,b))-2\cdot \max (a,b)\right). \end{array} $$

Since a + b ≤ 1, we have a ≤ 1 − b and b ≤ 1 − a.

Then the following cases should be considered:

(1) b ≤ a ≤ 1 − b, then b ≤ 0.5, and we have:

  1. (i)

    For 1 − a ≥ a, i.e., a ≤ 0.5, N(A) = log2(2 + a + a−2⋅a) = 1,

  2. (ii)

    For 1 − a ≤ a, i.e., a ≥ 0.5, N(A) = log2(2 + a+(1 − a)−2⋅a) = log2(3−2⋅a).

    (2) a ≤ b ≤ 1 − a, then a ≤ 0.5, and we have:

  3. (iii)

    For 1 − b ≥ b, i.e., b ≤ 0.5, N(A) = log2(2 + b + b−2⋅b) = 1.

  4. (iv)

    For 1 − b ≤ b, i.e., b ≥ 0.5, N(A) = log2(2 + b+(1 − b)−2⋅b) = log2(3−2⋅b).

Taking all cases into account, we can get:

$$N(A)=\left\{ {\begin{array}{ll} \log_{2} \left( {3-2b} \right), & a<0.5,\ b>0.5 \\ \log_{2} \left( {3-2a} \right), & a>0.5,\ b<0.5 \\ 1, & a\le 0.5,\ b\le 0.5 \end{array}} \right. $$

In this case, n = 2 implies log2 n = 1, thus N N (A) = N(A). To visualize the non-specificity of A more expressively, Fig. 4 shows the value of N(A), denoted by the color of each point (a, b) on the simplex.

Fig. 4 The value of N(A)
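The case analysis above is easy to cross-check numerically; the following sketch (ours, reusing non_specificity) compares the closed form with the general definition (7) on a small grid:

```python
import math

def closed_form_N(a, b):
    """Closed-form N(A) for A = {<x1,a,b>, <x2,b,a>} derived above."""
    if a < 0.5 < b:
        return math.log2(3 - 2 * b)
    if b < 0.5 < a:
        return math.log2(3 - 2 * a)
    return 1.0

for a in [0.1, 0.3, 0.5, 0.7, 0.9]:
    for b in [0.1, 0.3, 0.5, 0.7, 0.9]:
        if a + b <= 1:   # only admissible intuitionistic pairs
            assert abs(closed_form_N(a, b)
                       - non_specificity([a, b], [b, a])) < 1e-9
```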

We can also get the fuzziness, intuitionism, intuitionistic fuzzy entropy, and the normalized general uncertainty degree of A as:

$$\begin{array}{@{}rcl@{}} E_{F} (A)&=&-\left( {\frac{1+a-b}{2}} \right)\log_{2} \left( {\frac{1+a-b}{2}} \right)\\ &&-\left( {\frac{1-a+b}{2}} \right)\log_{2} \left( {\frac{1-a+b}{2}} \right), \end{array} $$
$$E_{I} (A)=1-a-b, $$
$$E_{IF} (A)=\frac{E_{I} (A)+E_{F} (A)}{2}, $$
$$GU(A)=\frac{N(A)+E_{I} (A)+E_{F} (A)}{3}. $$

As a comparison, Figs. 5 and 6 illustrate graphs of E I F (A) and G U(A), respectively. We can notice that by adding the non-specificity degree to the intuitionistic fuzzy entropy, the measured uncertainty of A increases. Moreover, the general uncertainty is more sensitive to changes in A in the cases a < 0.5, b > 0.5 and a > 0.5, b < 0.5, providing more detailed information about the distribution of uncertainty. We can also note that the general uncertainty measure attains its maximum (equal to 1) when both the membership value and non-membership value are equal to 0. The general uncertainty degree is equal to 0 for the two sets {x 1} ({ < x 1, 1, 0 >, < x 2, 0, 1 >}) and {x 2} ({ < x 1, 0, 1 >, < x 2, 1, 0 >}).

Fig. 5 Graph of E I F (A)

6 Application in MADM problems

The uncertainty measure of AIFSs can be used in multi-attribute decision making (MADM) for determining attribute weights in terms of the information content of each attribute [20, 31]. Generally speaking, if an attribute can discriminate the data more effectively, it is given a higher weight. In the intuitionistic fuzzy environment, this method is based on the uncertainty measures of AIFSs. The basic principle of determining attribute weights is that the smaller the uncertainty value of the intuitionistic fuzzy assessment information of the alternatives under an attribute, the bigger the weight assigned to that attribute, and vice versa. Assume that there are m alternatives A 1, A 2,⋯ , A m to be assessed over n attributes x 1, x 2,⋯ , x n . The intuitionistic fuzzy decision matrix D is expressed as follows:

$$ \mathbf{D}=\left( {\left\langle {\mu_{ij} ,v_{ij} } \right\rangle } \right)_{m\times n} =\begin{array}{cc} & \begin{array}{cccc} x_{1} & x_{2} & \cdots & x_{n} \end{array} \\ \begin{array}{c} A_{1} \\ A_{2} \\ \vdots \\ A_{m} \end{array} & \left( {\begin{array}{cccc} \left\langle {\mu_{11} ,v_{11} } \right\rangle & \left\langle {\mu_{12} ,v_{12} } \right\rangle & \cdots & \left\langle {\mu_{1n} ,v_{1n} } \right\rangle \\ \left\langle {\mu_{21} ,v_{21} } \right\rangle & \left\langle {\mu_{22} ,v_{22} } \right\rangle & \cdots & \left\langle {\mu_{2n} ,v_{2n} } \right\rangle \\ \vdots & \vdots & \ddots & \vdots \\ \left\langle {\mu_{m1} ,v_{m1} } \right\rangle & \left\langle {\mu_{m2} ,v_{m2} } \right\rangle & \cdots & \left\langle {\mu_{mn} ,v_{mn} } \right\rangle \end{array}} \right) \end{array} $$
(16)

According to this principle, the attribute weights can be determined as follows:

$$ w_{j} =\frac{1-UN_{j} }{n-\sum\nolimits_{j=1}^{n} {UN_{j} } },j=1,2,\cdots ,n $$
(17)

where UN_j is the uncertainty degree of the AIFS I_j = {〈A_i, μ_ij, v_ij〉 | A_i ∈ A}, with A = {A_1, A_2, ⋯, A_m}.
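
As an illustration, the weighting rule (17) can be implemented in a few lines. The following Python sketch is our own; the function name attribute_weights and the plain-list representation are illustrative choices, not part of the original method:

```python
def attribute_weights(UN):
    """Weights from per-attribute uncertainty degrees UN_j, per (17):
    w_j = (1 - UN_j) / (n - sum of all UN_j)."""
    n = len(UN)
    denom = n - sum(UN)
    return [(1 - u) / denom for u in UN]
```

By construction, the weights are non-negative (as long as each UN_j ≤ 1) and sum to one, since the numerators 1 - UN_j add up exactly to the denominator.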

Next, we use a numerical example to illustrate the proposed method for determining objective weights in a MADM problem.

Example 8

Suppose that the MADM problem involves four alternatives assessed on six attributes. The intuitionistic fuzzy decision matrix D is given as follows:

$$\mathbf{D}=\begin{array}{cc} & \begin{array}{cccccc} x_{1} & x_{2} & x_{3} & x_{4} & x_{5} & x_{6} \end{array} \\ \begin{array}{c} A_{1} \\ A_{2} \\ A_{3} \\ A_{4} \end{array} & \left( \begin{array}{cccccc} \langle 0.39,0.33\rangle & \langle 0.31,0.46\rangle & \langle 0.11,0.35\rangle & \langle 0.24,0.69\rangle & \langle 0.36,0.16\rangle & \langle 0.01,0.53\rangle \\ \langle 0.13,0.58\rangle & \langle 0.27,0.49\rangle & \langle 0.31,0.68\rangle & \langle 0.91,0.08\rangle & \langle 0.01,0.14\rangle & \langle 0.04,0.72\rangle \\ \langle 0.39,0.49\rangle & \langle 0.70,0.10\rangle & \langle 0.47,0.02\rangle & \langle 0.21,0.62\rangle & \langle 0.01,0.04\rangle & \langle 0.26,0.11\rangle \\ \langle 0.54,0.34\rangle & \langle 0.95,0.00\rangle & \langle 0.08,0.02\rangle & \langle 0.22,0.27\rangle & \langle 0.19,0.02\rangle & \langle 0.08,0.46\rangle \end{array} \right) \end{array}. $$

Then we can get the AIFSs I_j (j = 1, 2, ⋯, 6):

$$\begin{array}{@{}rcl@{}} I_{1} &=& \left\{ <A_{1},0.39,0.33>, <A_{2},0.13,0.58>, \right.\\ && \left. <A_{3},0.39,0.49>, <A_{4},0.54,0.34> \right\}, \end{array} $$
$$\begin{array}{@{}rcl@{}} I_{2} &=& \left\{ <A_{1},0.31,0.46>, <A_{2},0.27,0.49>, \right.\\ && \left. <A_{3},0.70,0.10>, <A_{4},0.95,0.00> \right\}, \end{array} $$
$$\begin{array}{@{}rcl@{}} I_{3} &=& \left\{ <A_{1},0.11,0.35>, <A_{2},0.31,0.68>, \right.\\ && \left. <A_{3},0.47,0.02>, <A_{4},0.08,0.02> \right\}, \end{array} $$
$$\begin{array}{@{}rcl@{}} I_{4} &=& \left\{ <A_{1},0.24,0.69>, <A_{2},0.91,0.08>, \right.\\ && \left. <A_{3},0.21,0.62>, <A_{4},0.22,0.27> \right\}, \end{array} $$
$$\begin{array}{@{}rcl@{}} I_{5} &=& \left\{ <A_{1},0.36,0.16>, <A_{2},0.01,0.14>, \right.\\ && \left. <A_{3},0.01,0.04>, <A_{4},0.19,0.02> \right\}, \end{array} $$
$$\begin{array}{@{}rcl@{}} I_{6} &=& \left\{ <A_{1},0.01,0.53>, <A_{2},0.04,0.72>, \right.\\ && \left. <A_{3},0.26,0.11>, <A_{4},0.08,0.46> \right\}. \end{array} $$
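
In code, each I_j is simply the j-th column of D. A minimal sketch, with the matrix represented as plain (μ, v) tuples (our own convention, not the paper's):

```python
# Rows of D hold the assessments of A_1..A_4; entries are (mu, v) pairs.
D = [
    [(0.39, 0.33), (0.31, 0.46), (0.11, 0.35), (0.24, 0.69), (0.36, 0.16), (0.01, 0.53)],
    [(0.13, 0.58), (0.27, 0.49), (0.31, 0.68), (0.91, 0.08), (0.01, 0.14), (0.04, 0.72)],
    [(0.39, 0.49), (0.70, 0.10), (0.47, 0.02), (0.21, 0.62), (0.01, 0.04), (0.26, 0.11)],
    [(0.54, 0.34), (0.95, 0.00), (0.08, 0.02), (0.22, 0.27), (0.19, 0.02), (0.08, 0.46)],
]
# I[j] gathers the assessments of all alternatives under attribute x_{j+1}.
I = [[row[j] for row in D] for j in range(len(D[0]))]
```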

By (15), we can get the uncertainty values of I_j (j = 1, 2, ⋯, 6) as follows:

$$GU(I_{1})=0.7091, \quad GU(I_{2})=0.5686, \quad GU(I_{3})=0.7960, $$
$$GU(I_{4})=0.5631, \quad GU(I_{5})=0.9173, \quad GU(I_{6})=0.7580. $$
Fig. 6 Graph of GU(A)

Substituting these uncertainty values into (17), we obtain the attribute weights:

$$w_{1} = 0.1723, \quad w_{2} = 0.2556, \quad w_{3} = 0.1208, \quad w_{4} = 0.2589, \quad w_{5} = 0.0490, \quad w_{6} = 0.1434. $$
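
Feeding these GU values into the attribute_weights sketch from above reproduces the published weights up to rounding in the last digit:

```python
GU = [0.7091, 0.5686, 0.7960, 0.5631, 0.9173, 0.7580]
w = attribute_weights(GU)
print([round(x, 4) for x in w])
# -> [0.1723, 0.2556, 0.1209, 0.2588, 0.049, 0.1434]
#    (w_3 and w_4 differ from the text in the fourth decimal,
#     presumably because the GU values above are themselves rounded)
```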

In the following, a real-life example adapted from [32] is provided to further discuss the application of the proposed uncertainty measure in decision making under uncertainty.

Example 9

Located in Central China along the middle reaches of the Changjiang (Yangtze) River, Hubei Province lies in a transitional belt where physical conditions and landscapes change from north to south and from east to west. Hubei Province is thus well known as "a land of rice and fish", since the region enjoys favorable physical conditions, a diversity of natural resources, and suitability for growing various crops. At the same time, however, there are restrictive factors for developing agriculture, such as a tight man–land relation caused by the constant degradation of natural resources and growing population pressure on the land resource reserve. Despite a strong desire to raise their standard of living, people living in the area are frustrated by their limited ability to accelerate economic development, owing to a dramatic decline in the quantity and quality of natural resources and a deteriorating environment. Based on the differences in environment and natural resources, Hubei Province can be roughly divided into seven agro-ecological regions:

A_1: Wuhan–Ezhou–Huanggang; A_2: Northeast of Hubei; A_3: Southeast of Hubei; A_4: Jianghan region; A_5: North of Hubei; A_6: Northwest of Hubei; A_7: Southwest of Hubei.

In order to prioritize these agro-ecological regions A_i (i = 1, 2, ⋯, 7) according to their comprehensive functions, a committee of three decision makers (DMs) d_k (k = 1, 2, 3) has been formed, with weighting vector λ = (0.5, 0.2, 0.3)^T. The attributes considered in the assessment of A_i (i = 1, 2, ⋯, 7) are: c_1 – ecological benefit; c_2 – economic benefit; c_3 – social benefit.

Assume that the importance of the attributes c_j (j = 1, 2, 3) is completely unknown. The individual opinion of DM d_k on the agro-ecological regions A_i with respect to attribute c_j is expressed as an individual decision matrix \(\mathbf {R}^{(k)}=\left ({r_{ij}^{(k)} } \right )_{7\times 3} \), where \(r_{ij}^{(k)} =\left \langle {\mu _{ij}^{(k)} ,v_{ij}^{(k)} } \right \rangle \) (i = 1, 2, ⋯, 7; j = 1, 2, 3) are IFVs.

$$\mathbf{R}^{(1)}=\begin{array}{cc} & \begin{array}{ccc} c_{1} & c_{2} & c_{3} \end{array} \\ \begin{array}{c} A_{1} \\ A_{2} \\ A_{3} \\ A_{4} \\ A_{5} \\ A_{6} \\ A_{7} \end{array} & \left( \begin{array}{ccc} \langle 0.8,0.1\rangle & \langle 0.9,0.1\rangle & \langle 0.7,0.2\rangle \\ \langle 0.7,0.3\rangle & \langle 0.6,0.2\rangle & \langle 0.6,0.1\rangle \\ \langle 0.5,0.4\rangle & \langle 0.7,0.3\rangle & \langle 0.6,0.1\rangle \\ \langle 0.9,0.1\rangle & \langle 0.7,0.1\rangle & \langle 0.8,0.2\rangle \\ \langle 0.6,0.1\rangle & \langle 0.8,0.2\rangle & \langle 0.5,0.1\rangle \\ \langle 0.3,0.6\rangle & \langle 0.5,0.4\rangle & \langle 0.4,0.5\rangle \\ \langle 0.5,0.2\rangle & \langle 0.4,0.6\rangle & \langle 0.5,0.5\rangle \end{array} \right) \end{array}, $$
$$\mathbf{R}^{(2)}=\begin{array}{cc} & \begin{array}{ccc} c_{1} & c_{2} & c_{3} \end{array} \\ \begin{array}{c} A_{1} \\ A_{2} \\ A_{3} \\ A_{4} \\ A_{5} \\ A_{6} \\ A_{7} \end{array} & \left( \begin{array}{ccc} \langle 0.9,0.1\rangle & \langle 0.8,0.2\rangle & \langle 0.8,0.1\rangle \\ \langle 0.8,0.2\rangle & \langle 0.5,0.1\rangle & \langle 0.7,0.2\rangle \\ \langle 0.5,0.5\rangle & \langle 0.7,0.2\rangle & \langle 0.8,0.2\rangle \\ \langle 0.9,0.1\rangle & \langle 0.9,0.1\rangle & \langle 0.7,0.3\rangle \\ \langle 0.5,0.2\rangle & \langle 0.6,0.3\rangle & \langle 0.6,0.2\rangle \\ \langle 0.4,0.6\rangle & \langle 0.3,0.4\rangle & \langle 0.5,0.5\rangle \\ \langle 0.3,0.5\rangle & \langle 0.5,0.3\rangle & \langle 0.6,0.4\rangle \end{array} \right) \end{array}, $$
$$\mathbf{R}^{(3)}=\begin{array}{cc} & \begin{array}{ccc} c_{1} & c_{2} & c_{3} \end{array} \\ \begin{array}{c} A_{1} \\ A_{2} \\ A_{3} \\ A_{4} \\ A_{5} \\ A_{6} \\ A_{7} \end{array} & \left( \begin{array}{ccc} \langle 0.7,0.1\rangle & \langle 0.9,0.1\rangle & \langle 0.9,0.1\rangle \\ \langle 0.9,0.1\rangle & \langle 0.6,0.2\rangle & \langle 0.6,0.2\rangle \\ \langle 0.4,0.5\rangle & \langle 0.8,0.1\rangle & \langle 0.7,0.1\rangle \\ \langle 0.8,0.1\rangle & \langle 0.7,0.2\rangle & \langle 0.9,0.1\rangle \\ \langle 0.6,0.3\rangle & \langle 0.8,0.2\rangle & \langle 0.7,0.2\rangle \\ \langle 0.2,0.7\rangle & \langle 0.5,0.1\rangle & \langle 0.3,0.1\rangle \\ \langle 0.4,0.6\rangle & \langle 0.7,0.3\rangle & \langle 0.5,0.5\rangle \end{array} \right) \end{array}. $$

In the following, the proposed uncertainty measure GU in (15) is used to derive the attribute weighting vector w, with which the alternative agro-ecological regions are finally ranked.

Step 1 :

Aggregate all of the individual opinions into a group opinion by using the intuitionistic fuzzy weighted averaging (IFWA) operator [33]. The group opinion is denoted by \(\mathbf{R}=\left( r_{ij} \right)_{7\times 3}\), where

$$\begin{array}{@{}rcl@{}} r_{ij} &=&IFWA_{\lambda } \left( {r_{ij}^{(1)} ,r_{ij}^{(2)} ,r_{ij}^{(3)} } \right)\\ &=&\left\langle {1-\prod\limits_{k=1}^{3} {\left( {1-\mu_{ij}^{(k)} } \right)^{\lambda_{k} }} ,\prod\limits_{k=1}^{3} {\left( {v_{ij}^{(k)} } \right)^{\lambda_{k} }} } \right\rangle . \end{array} $$

Thus the group opinion is shown as:

$$\mathbf{R}=\begin{array}{cc} & \begin{array}{ccc} c_{1} & c_{2} & c_{3} \end{array} \\ \begin{array}{c} A_{1} \\ A_{2} \\ A_{3} \\ A_{4} \\ A_{5} \\ A_{6} \\ A_{7} \end{array} & \left( \begin{array}{ccc} \langle 0.803,0.100\rangle & \langle 0.885,0.115\rangle & \langle 0.801,0.141\rangle \\ \langle 0.801,0.199\rangle & \langle 0.582,0.174\rangle & \langle 0.622,0.141\rangle \\ \langle 0.472,0.447\rangle & \langle 0.734,0.199\rangle & \langle 0.681,0.115\rangle \\ \langle 0.877,0.100\rangle & \langle 0.759,0.123\rangle & \langle 0.824,0.176\rangle \\ \langle 0.582,0.160\rangle & \langle 0.770,0.217\rangle & \langle 0.590,0.141\rangle \\ \langle 0.294,0.628\rangle & \langle 0.465,0.264\rangle & \langle 0.394,0.309\rangle \\ \langle 0.435,0.334\rangle & \langle 0.530,0.424\rangle & \langle 0.522,0.478\rangle \end{array} \right) \end{array}. $$
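
A minimal Python sketch of the IFWA aggregation (our own helper, assuming IFVs are stored as (μ, v) tuples) reproduces, for instance, the entry r_11 = <0.803, 0.100> from the individual opinions <0.8, 0.1>, <0.9, 0.1>, <0.7, 0.1> with λ = (0.5, 0.2, 0.3):

```python
from math import prod

def ifwa(ifvs, weights):
    """IFWA aggregation of IFVs (mu_k, v_k) with weights w_k:
    <1 - prod_k (1 - mu_k)^w_k, prod_k v_k^w_k>."""
    mu = 1 - prod((1 - m) ** w for (m, _), w in zip(ifvs, weights))
    v = prod(n ** w for (_, n), w in zip(ifvs, weights))
    return (mu, v)

lam = (0.5, 0.2, 0.3)
print(ifwa([(0.8, 0.1), (0.9, 0.1), (0.7, 0.1)], lam))
# -> (0.8034..., 0.1000...), i.e. r_11 = <0.803, 0.100>
```
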
Step 2 :

Calculate the uncertainty degree of each attribute c_j (j = 1, 2, 3) with the general uncertainty measure GU in (15), and then determine the attribute weights by formula (17). The resulting weighting vector is w = (0.345, 0.346, 0.309)^T.

Step 3 :

With the obtained w and \(\mathbf{R}=\left( r_{ij} \right)_{7\times 3}\), make a group assessment r_i of each alternative A_i (i = 1, 2, ⋯, 7) by using the IFWA operator again, where

$$\begin{array}{@{}rcl@{}} r_{i} &=&IFWA_{w} \left( {r_{i1} ,r_{i2} ,r_{i3} } \right)\\ &=&\left\langle {1-\prod\limits_{j=1}^{3} {\left( {1-\mu_{ij} } \right)^{w_{j} }} ,\prod\limits_{j=1}^{3} {\left( {v_{ij} } \right)^{w_{j} }} } \right\rangle ,\\ i&=&1,2,\cdots ,7. \end{array} $$

Thus the group assessments on the agro-ecological regions are shown as:

$$\begin{array}{@{}rcl@{}} r_{1} &=&<0.836,0.117>,\quad r_{2} =<0.686,0.171>,\\ r_{3} &=&<0.644,0.222>, \end{array} $$
$$\begin{array}{@{}rcl@{}} r_{4} &=&<0.827,0.128>, \quad r_{5} =<0.662,0.171>,\\ r_{6} &=&<0.472,0.374>, \quad r_{7} =<0.497,0.405>. \end{array} $$
Step 4 :

In order to rank the alternatives, evaluate the values of r_i (i = 1, 2, ⋯, 7) by the following technique [34]:

$$\begin{array}{@{}rcl@{}} z(r_{i} )&=&\left( {1-\frac{1}{2}\pi_{r_{i} } } \right)\left( {\mu_{r_{i} } +\frac{1}{2}\pi_{r_{i} } } \right)\\ &=&\frac{1}{4}\left( {1+\mu_{r_{i} } +v_{r_{i} } } \right)\left( {1+\mu_{r_{i} } -v_{r_{i} } } \right) \end{array} $$

where the larger the value of z(r_i) ∈ [0, 1], the better the alternative corresponding to r_i.

Then we can get:

$$z(r_{1} )=0.839, \quad z(r_{2} )=0.704, \quad z(r_{3} )=0.663, $$
$$\begin{array}{@{}rcl@{}} z(r_{4} )&=&0.830, \quad z(r_{5} )=0.683, \quad z(r_{6} )=0.507,\\ z(r_{7} )&=&0.519. \end{array} $$
Step 5 :

Finally, rank all agro-ecological regions with respect to the set of attributes according to the values of z(r_i) (i = 1, 2, ⋯, 7). The preference order of the seven agro-ecological regions is:

$$A_{1} \succ A_{4} \succ A_{2} \succ A_{5} \succ A_{3} \succ A_{7} \succ A_{6} . $$
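
Steps 4 and 5 can be verified with the following sketch (ours, under the same illustrative tuple convention as the snippets above):

```python
# Aggregated group assessments r_i = (mu, v) from Step 3
r = {1: (0.836, 0.117), 2: (0.686, 0.171), 3: (0.644, 0.222),
     4: (0.827, 0.128), 5: (0.662, 0.171), 6: (0.472, 0.374),
     7: (0.497, 0.405)}

def z(mu, v):
    # z = (1 + mu + v)(1 + mu - v) / 4, the evaluation technique of Step 4
    return (1 + mu + v) * (1 + mu - v) / 4

scores = {i: z(*p) for i, p in r.items()}
order = sorted(scores, key=scores.get, reverse=True)
print(order)  # -> [1, 4, 2, 5, 3, 7, 6], i.e. A_1 > A_4 > A_2 > A_5 > A_3 > A_7 > A_6
```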

With subjectively pre-assigned weighting vectors, Xu and Yager [32] ranked all of the alternatives by combining the dynamic intuitionistic fuzzy weighted averaging (DIFWA) operator with closeness coefficients. It is worth noticing that our ranking list is identical to theirs. To further examine the effectiveness of our uncertainty measure, we compare this result with those listed in [35], where several different entropy measures are used to derive the weights of the attributes c_j (j = 1, 2, 3). The main results of these measures [15, 36, 37] are shown in Table 2. It is clear that attribute c_2 is considered the most important attribute by all measures. Including the result based on our uncertainty measure, four out of five measures take attribute c_1 as the second most important. Table 2 also shows that the resulting ranking orders coincide exactly with the one produced by our general uncertainty measure. Hence the proposed uncertainty measure can be applied well in practice.

Table 2 Main results of Example 9 by different entropy measures

7 Conclusions

We have presented a general uncertainty measure for Atanassov's intuitionistic fuzzy sets. First, we critically reviewed existing axiomatic definitions of entropy measures and pointed out that they cannot capture all kinds of uncertainty inherent in Atanassov's intuitionistic fuzzy sets. We then argued that an Atanassov's intuitionistic fuzzy set carries three kinds of uncertainty, namely non-specificity, fuzziness, and intuitionism. Non-specificity is determined by the inverse of the degree to which an intuitionistic fuzzy set points to one and only one element as its member. We defined a non-specificity measure based on the difference between the maximum membership grade and the other membership grades, proved that it can also quantify the non-specificity of classical sets, and presented its other properties with proofs. Following existing axiomatic characterizations of fuzzy entropy, we defined a new measure to depict the fuzziness of an intuitionistic fuzzy set. For its simplicity and good mathematical properties, the hesitancy degree is used directly to define the intuitionism measure. Finally, we gave an expression for the general uncertainty measure as the mean value of non-specificity, fuzziness, and intuitionism. The performance of the proposed general uncertainty measure was demonstrated by examples.

The proposed uncertainty measure can be applied to quantify the uncertainty of classical sets, fuzzy sets, and AIFSs alike; it is a general uncertainty measure for all three kinds of sets. It is worth noticing, however, that neither the forms of the non-specificity, fuzziness, and intuitionism measures nor the manner of their combination is unique. Many other measures could be used to construct a general uncertainty measure that captures all kinds of uncertainty. In fact, our proposed non-specificity measure gives the maximum non-specificity of a set, so follow-up research will be dedicated to seeking more general non-specificity measures. In the future, we shall also continue working on the application of the general uncertainty measure in real-life domains.