1 Introduction

Dempster-Shafer evidence theory is widely used in uncertainty reasoning and decision making [1, 2]. Two BPAs derived from distinct sources can be combined with Dempster’s combination rule, and when the two BPAs are highly similar the result is generally reasonable. When they are highly dissimilar, however, the combination may yield a counterintuitive and undesirable result [3, 4]. This raises the question of how to determine whether two BPAs are highly dissimilar.

For a long time, the conflict coefficient k [2] in Dempster’s combination rule, i.e., the mass that the combined BPA assigns to the empty set before normalization, was regarded as the only way to quantify the dissimilarity between two BPAs. However, k is inappropriate as a quantitative measure of dissimilarity. A study in [5] specifically points out this weakness, showing that a high value of k does not always indicate a high dissimilarity between two BPAs. In recent years, researchers have proposed many dissimilarity measures. A survey and classification of the existing measures is provided in [6, 7], covering the composite distances [8, 9], the Minkowski family [10,11,12,13], the inner product family [14, 15], the fidelity family [16, 17], the information-based distances [18, 19] and the two-dimensional distances [5].

Among the dissimilarity measures, the distance between betting commitments proposed in [10] is the maximum difference between the pignistic probabilities transformed from two BPAs. In [5], it is combined with the conflict coefficient k into a two-dimensional measure of the dissimilarity between two BPAs, and four conditions under which Dempster’s combination rule should be used are defined. As the distance between betting commitments and the conflict coefficient k are complementary in character, the two-dimensional measure performs well in guiding the use of Dempster’s combination rule. However, the shortcomings of the distance between betting commitments should not be ignored. It is classified as a pseudo-metric in [6] because it violates one of the standard metric properties. In [20], some unreasonable results obtained from it are pointed out and it is suggested that it be used with caution. The essential component of the distance between betting commitments is the pignistic probability function proposed in [22], which has increasingly been used as a probability measure for decision making in recent years [23,24,25]. The pignistic probability function transforms a BPA into pignistic probabilities. Since the transformation maps the power set of the frame onto the frame itself, some of the information contained in the BPA is lost. This loss of information may be one cause of the shortcomings of the distance between betting commitments. In order to use as much of the information contained in the BPA as possible, Jousselme et al. [11] proposed a distance defined directly on the power set. As it uses more of the information contained in the BPA than many other measures, its performance is satisfactory in most cases, and it is widely used to quantify the dissimilarity between two BPAs.

In this paper, the PSD pignistic probability function is proposed, which can be regarded as an improvement of the pignistic probability function. It transforms a BPA into pignistic probabilities on the power set. Based on it, the distance between PSD betting commitments is proposed. The new distance is a metric, and the relevant proof is provided. Like the distance proposed by Jousselme et al., the new distance operates on the power set and retains more of the information contained in the BPA. Numerical examples confirm that the results obtained from it are more reasonable.

The rest of the paper is organized as follows. In Section 2, the basic definitions of Dempster-Shafer evidence theory and the standard metric properties are reviewed, and the property that the distance between betting commitments fails to satisfy is indicated. In Section 3, the PSD pignistic probability function and the distance between PSD betting commitments are defined, and a proof that the proposed distance is a metric is provided. In Section 4, the rationality of the new distance is demonstrated by comparing it with three existing dissimilarity measures. In Section 5, its application to combining conflicting BPAs is presented through two examples. Section 6 concludes the paper.

2 Background

The background knowledge presented in this section covers three main points: (1) the basic definitions frequently used in Dempster-Shafer evidence theory, (2) the geometrical interpretation of the BPA, on the basis of which the dissimilarity between two BPAs can be quantified by various distances, and (3) the definition of the distance between betting commitments, together with the standard metric property it does not satisfy.

It should be noted that the BPAs in this paper are always assumed to be derived from distinct sources.

2.1 Basics of Dempster-Shafer evidence theory

In Dempster-Shafer evidence theory, the notation Ω is used to denote the frame of discernment, which is a non-empty set with mutually exclusive and exhaustive elements. The power set of Ω, denoted by \(2^{\Omega }\), consists of all \(2^{\left | {\Omega } \right |}\) subsets of Ω.

Definition 2.1

(Basic probability assignment) [1]. Let Ω be a frame of discernment, a subset of which is denoted by A. The function \(m: 2^{\Omega } \to [0,1]\) is called a basic probability assignment (BPA) if it satisfies the following two conditions:

$$ m(\emptyset )=0 \quad \text{and} \quad \sum\limits_{A\subseteq {\Omega} } {m(A)} =1 $$
(1)

For every \(A\subseteq {\Omega } \), m(A) is the basic probability mass of A, which reflects the degree of support that the evidence gives to A. If m(A) > 0, A is called a focal element of Ω.

Based on Definition 2.1, G. Shafer [2] defined three functions in one-to-one correspondence with the BPA: the belief function \(Bel(A)\), the plausibility function \(Pl(A)\) and the commonality function \(Q(A)\). They are calculated as follows:

$$ \begin{array}{l} Bel(A)=\sum\limits_{B\subseteq A} {m(B)} \\ Pl(A)=\sum\limits_{B\cap A\ne \emptyset } {m(B)} \\ Q(A)=\sum\limits_{A\subseteq B} {m(B)} \\ \end{array} $$
(2)
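To make the three formulas concrete, they can be sketched in Python (this sketch is not part of the original paper): a BPA is modeled as a dict mapping frozensets (focal elements) to masses, and the BPA below is purely illustrative.

```python
# Belief, plausibility and commonality computed from a BPA.
# A BPA is modeled as a dict mapping frozensets (focal elements) to masses.
def bel(m, A):
    return sum(v for B, v in m.items() if B <= A)   # sum over B a subset of A

def pl(m, A):
    return sum(v for B, v in m.items() if B & A)    # sum over B intersecting A

def q(m, A):
    return sum(v for B, v in m.items() if A <= B)   # sum over B containing A

# Illustrative BPA on Omega = {'a', 'b', 'c'}
m = {frozenset('a'): 0.5, frozenset('ab'): 0.25, frozenset('abc'): 0.25}
A = frozenset('ab')
# Bel(A) = 0.5 + 0.25 = 0.75, Pl(A) = 1.0, Q(A) = 0.25 + 0.25 = 0.5
```

The three sums range over the same focal elements and differ only in the subset relation tested, which mirrors (2) term by term.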

Definition 2.2

(Dempster’s combination rule) [1]. Let \(m_{1}\) and \(m_{2}\) be two BPAs on the frame of discernment Ω. The combined BPA, denoted by \(m_{1\oplus 2}\), is defined as:

$$ m_{1\oplus 2} (A)=\left\{ {\begin{array}{ll} {\frac{1}{1-k}\sum\limits_{B\cap C=A} {m_{1} (B)m_{2} (C)} ,} & {A\ne \emptyset } \\ {0,} & {A=\emptyset } \\ \end{array} } \right. $$
(3)

with

$$ k=\sum\limits_{B\cap C=\emptyset } {m_{1} (B)} m_{2} (C) $$
(4)

where k is the mass of the combined BPA assigned to the empty set before normalization, called the conflict coefficient.
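Definition 2.2 can be sketched directly in Python (an illustration added here, not from the original paper); the two BPAs below are chosen so that the conflict coefficient k is easy to verify by hand.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPAs (dicts: frozenset -> mass) with Dempster's rule.
    Returns the combined BPA and the conflict coefficient k."""
    k = 0.0
    combined = {}
    for (B, v1), (C, v2) in product(m1.items(), m2.items()):
        inter = B & C
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            k += v1 * v2                 # mass falling on the empty set
    # normalization assumes k < 1, i.e. the BPAs are not in total conflict
    return {A: v / (1.0 - k) for A, v in combined.items()}, k

m1 = {frozenset('a'): 0.75, frozenset('b'): 0.25}
m2 = {frozenset('a'): 0.25, frozenset('b'): 0.75}
m12, k = dempster_combine(m1, m2)
# k = 0.75*0.75 + 0.25*0.25 = 0.625; the surviving mass is renormalized
```

Note how a large k leaves only a small mass to renormalize, which is precisely the situation in which the rule can produce the counterintuitive results discussed in Section 1.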

Definition 2.3

(Pignistic probability function) [21, 22]. Let \({\Omega } =\left \{ {\theta _{1} ,\theta _{2} ,...,\theta _{n} } \right \}\) be a frame of discernment, where \(\theta _{i}\) (1 ≤ i ≤ n) is a singleton element of Ω, and let m be a BPA on Ω. The corresponding pignistic probability function \(BetP_{m} : {\Omega } \to [0,1]\) of m is defined as:

$$ BetP_{m} (\theta_{i} )=\sum\limits_{A\subseteq {\Omega} } {m(A)\frac{\left| {\theta_{i} \cap A} \right|}{\left| A \right|}} =\sum\limits_{\theta_{i} \in A,A\subseteq {\Omega} } {\frac{m(A)}{\left| A \right|}} $$
(5)

where \(\left | A \right |\) is the cardinality of A. For \(B\subseteq {\Omega } \),

$${\begin{array}{*{20}c} {BetP_{m} (B)=\sum\limits_{A\subseteq {\Omega} } {m(A)\frac{\left| {B\cap A} \right|}{\left| A \right|}} } & {or}& {BetP_{m} (B)=\sum\limits_{\theta_{i} \in B} {BetP_{m} (\theta_{i} )} } \\ \end{array} } $$
(6)

The pignistic probability function transforms the BPA into pignistic probabilities. Since the transformation is from the power set of Ω to the set itself, some information is lost. The computational process of the transformation can be divided into two steps as follows.

  1. Step 1:

    Distribute m(A) equally among the singleton elements \(\theta _{j}\) that belong to A, and sum the shares received by each \(\theta _{j}\) to obtain \(BetP_{m} (\theta _{j} )\);

  2. Step 2:

    For each \(\theta _{k} \in B\), sum \(BetP_{m} (\theta _{k} )\) to obtain \(BetP_{m} (B)\).
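The two steps above collapse into the single sum of (6); a minimal Python sketch (added for illustration, with the vacuous BPA of the later Example 3.1):

```python
def betp(m, B):
    """Pignistic probability of a subset B: each focal element A spreads
    its mass m(A) evenly over its singletons (Step 1), and the shares of
    the singletons in B are summed (Step 2)."""
    return sum(v * len(B & A) / len(A) for A, v in m.items())

# Illustrative vacuous BPA on Omega = {'a', 'b', 'c'}
m = {frozenset('abc'): 1.0}
# each singleton receives 1/3; BetP of the full frame is 1
```

Because the codomain is Ω rather than \(2^{\Omega }\), the values on non-singleton sets are fully determined by the singleton values, which is the information loss discussed above.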

2.2 Geometrical interpretation of the BPA

The geometrical interpretation of the BPA makes it possible to measure the dissimilarity between two BPAs via a distance, which is described clearly in [26, 27].

Definition 2.4

(Geometrical interpretation of the BPA). Let Ω be a frame of discernment, let m be a BPA on Ω, and let Ψ be the vector space generated by the subsets of Ω. The corresponding vector of m in Ψ can be defined as:

$$ \overset{\rightharpoonup}{\boldsymbol{m}} =[m(A_{1} ),m(A_{2} ),...,m(A_{2^{\left| {\Omega} \right|}} )]^{T} $$
(7)

where \(A_{i} \subseteq {\Omega } \), \(i=1,2,...,2^{\left | {\Omega } \right |}\), \(\sum \limits _{i=1}^{2^{\left | {\Omega } \right |}} {m(A_{i} )=1} \).

After transforming two BPAs into vectors, researchers can use different kinds of distances to measure the dissimilarity between them.
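Definition 2.4 amounts to fixing an order on the \(2^{\left | {\Omega } \right |}\) subsets of Ω and reading off the mass vector; a small sketch (the BPA and subset ordering are illustrative assumptions):

```python
from itertools import combinations

def bpa_to_vector(m, omega):
    """List all 2^|Omega| subsets (empty set first, then by size) and
    read off the corresponding mass vector of the BPA m."""
    elems = sorted(omega)
    subsets = [frozenset(c) for r in range(len(elems) + 1)
               for c in combinations(elems, r)]
    return [m.get(A, 0.0) for A in subsets]

m = {frozenset('a'): 0.6, frozenset('ab'): 0.4}
v = bpa_to_vector(m, 'abc')   # 8-dimensional vector summing to 1
```

Any fixed ordering works, as long as the same ordering is used for both BPAs being compared.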

Based on the degree to which they satisfy the standard metric properties, Jousselme and Maupin [6] classified the existing distances into four groups: metric, pseudo-metric, semi-metric and non-metric. The standard metric properties are as follows:

Definition 2.5

(Standard metric properties). Let Ψ be a vector space. If the function d : Ψ × Ψ → R satisfies the following properties for ∀A,B,C ∈ Ψ, it is called a metric.

  1. (p1)

    Non-negativity: d(A,B) ≥ 0;

  2. (p2)

    Symmetry: d(A,B) = d(B,A);

  3. (p3)

    Definiteness: \(d(A,B)=0\Leftrightarrow A=B\);

  4. (p4)

    Triangle inequality: d(A,B) ≤ d(A,C) + d(B,C).

In [6], (p3) is divided into (p3)\(^{\prime }\) and (p3)\(^{\prime \prime }\), which are as follows:

  1. (p3)\(^{\prime }\)

    Reflexivity: d(A,A) = 0;

  2. (p3)\(^{\prime \prime }\)

    Separability: \(d(A,B)=0\Rightarrow A=B\).

If a function d satisfies all the properties except (p3)\(^{\prime \prime }\), it is called a pseudo-metric. A semi-metric satisfies all the properties except (p4). If d does not satisfy (p1) and (p3)\(^{\prime }\), it is called a non-metric.

2.3 Distance between betting commitments

Definition 2.6

(Distance between betting commitments). Let \(m_{1}\) and \(m_{2}\) be two BPAs on the frame of discernment Ω, and let \(BetP_{m_{1} } \) and \(BetP_{m_{2} } \) be their corresponding pignistic probability functions. Then the distance between betting commitments is defined as:

$$ difBetP_{m_{1}}^{m_{2} } =\max\limits_{A\subseteq {\Omega} } (| {BetP_{m_{1} } (A)-BetP_{m_{2} } (A)} |) $$
(8)

Influenced by the pignistic probability function, the distance between betting commitments operates on Ω, which entails a loss of the information contained in the BPA. In what follows, we write \(d_{Bet} (m_{1} ,m_{2} )\) for \(difBetP_{m_{1}}^{m_{2}}\).

\(d_{Bet}\) is classified as a pseudo-metric in [6], which means that it does not satisfy (p3)\(^{\prime \prime }\). We cite an example to demonstrate this.

Example 2.1

Let m 1 and m 2 be two BPAs on the frame of discernment \({\Omega } =\left \{ {\theta _{1} ,\theta _{2} ,\theta _{3} } \right \}\), which are defined as follows:

$$m_{1} \left( {\left\{ {\theta_{1} } \right\}} \right)=\frac{1}{3},\quad m_{1} \left( {\left\{ {\theta_{2} } \right\}} \right)=\frac{1}{3},\quad m_{1} \left( {\left\{ {\theta_{3} } \right\}} \right)=\frac{1}{3}; $$
$$m_{2} \left( {\left\{ {\theta_{1} ,\theta_{2} ,\theta_{3} } \right\}} \right)=1. $$

Then we have \(d_{Bet} (m_{1} ,m_{2} )=0\). However, it is obvious that \(m_{1} \ne m_{2}\). This example indicates that \(d_{Bet}\) does not satisfy (p3)\(^{\prime \prime }\).
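Example 2.1 can be checked numerically; the sketch below (added for illustration) enumerates all non-empty subsets of Ω and uses exact rational arithmetic to avoid rounding.

```python
from fractions import Fraction
from itertools import combinations

def nonempty_subsets(omega):
    elems = sorted(omega)
    return [frozenset(c) for r in range(1, len(elems) + 1)
            for c in combinations(elems, r)]

def betp(m, B):
    # pignistic probability of subset B, per (6)
    return sum(v * Fraction(len(B & A), len(A)) for A, v in m.items())

def d_bet(m1, m2, omega):
    # maximum difference of betting commitments, per (8)
    return max(abs(betp(m1, B) - betp(m2, B)) for B in nonempty_subsets(omega))

omega = 'abc'
third = Fraction(1, 3)
m1 = {frozenset('a'): third, frozenset('b'): third, frozenset('c'): third}
m2 = {frozenset('abc'): Fraction(1)}
# d_bet(m1, m2, omega) == 0 although m1 != m2, so (p3)'' fails
```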

As the proof that \(d_{Bet}\) satisfies the other properties is similar to that for the proposed distance in Section 3, it is omitted here.

3 The proposed function and distance

3.1 Power-set-distribution pignistic probability function

Example 3.1

Let \(m(\left \{ {\theta _{1} ,\theta _{2} ,\theta _{3} } \right \})=1\) be a BPA on the frame of discernment \({\Omega } =\left \{ {\theta _{1} ,\theta _{2} ,\theta _{3} } \right \}\), where \(\theta _{i}\) (1 ≤ i ≤ 3) is a singleton element of Ω. What information is contained in this BPA?

The general explanation is as follows: any of the singleton elements of Ω may have a support degree of 1, where the singleton elements are \(\left \{ {\theta _{1} } \right \}\), \(\left \{ {\theta _{2} } \right \}\), \(\left \{ {\theta _{3} } \right \}\). In [22], BetP is proposed by Smets on the basis of this explanation. When BetP is used to transform the BPA \(m(\left \{ {\theta _{1} ,\theta _{2} ,\theta _{3} } \right \})=1\) into pignistic probabilities, the value of \(m(\left \{ {\theta _{1} ,\theta _{2} ,\theta _{3} } \right \})\) is distributed equally among the singleton elements of \(\left \{ {\theta _{1} ,\theta _{2} ,\theta _{3} } \right \}\). Since the transformation maps the power set of Ω onto the set itself, it leads to a considerable loss of information.

In order to reduce the information loss, the transformation should be modified so that it becomes an injective mapping. One practical way is to distribute the mass of each non-singleton focal element among its non-empty subsets, which yields an improved pignistic probability function. It operates on the power set of Ω, and its information loss is less than that of BetP. In this case, how should the information contained in the BPA \(m(\left \{ {\theta _{1} ,\theta _{2} ,\theta _{3} } \right \})=1\) be interpreted? We propose the following explanation: any of the singleton and non-singleton elements may have a support degree of 1, where the singleton and non-singleton elements are \(\left \{ {\theta _{1} } \right \}\), \(\left \{ {\theta _{2} } \right \}\), \(\left \{ {\theta _{3} } \right \}\), \(\left \{ {\theta _{1} ,\theta _{2} } \right \}\), \(\left \{ {\theta _{1} ,\theta _{3} } \right \}\), \(\left \{ {\theta _{2} ,\theta _{3} } \right \}\), \(\left \{ {\theta _{1} ,\theta _{2} ,\theta _{3} } \right \}\).

As the improved function operates on the power set of Ω, we call it the power-set-distribution (PSD) pignistic probability function. It is defined as follows:

Definition 3.1

(PSD pignistic probability function). Let Ω be a frame of discernment and m a BPA on Ω. The corresponding power-set-distribution (PSD) pignistic probability function \(PBetP_{m} : 2^{\Omega } \to [0,1]\) of m is defined as:

$$ PBetP_{m} (B)=\sum\limits_{A\subseteq {\Omega} } {m(A)\frac{2^{\left| {B\cap A} \right|}-1}{2^{\left| A \right|}-1}} $$
(9)

where B denotes a subset of Ω and \(\left | A \right |\) is the cardinality of A.
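Formula (9) translates directly into code (an added illustration); the vacuous BPA from Example 3.1 is used as input.

```python
from fractions import Fraction

def pbetp(m, B):
    """PSD pignistic probability of a subset B of the frame, per (9):
    each focal element A contributes m(A) * (2^|B∩A| - 1) / (2^|A| - 1)."""
    return sum(v * Fraction(2**len(B & A) - 1, 2**len(A) - 1)
               for A, v in m.items())

m = {frozenset('abc'): Fraction(1)}      # m({theta1, theta2, theta3}) = 1
# singletons receive (2^1-1)/(2^3-1) = 1/7, pairs 3/7, the full frame 1
```

Unlike BetP, distinct non-singleton subsets receive distinct values here, so the transform retains more of the structure of the BPA.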

3.2 Distance between PSD betting commitments

On the basis of Definition 3.1, the definition of the distance between PSD betting commitments is proposed as follows:

Definition 3.2

(Distance between PSD betting commitments). Let \(m_{1}\) and \(m_{2}\) be two BPAs on the frame of discernment Ω, and let \(PBetP_{m_{1} } \) and \(PBetP_{m_{2} } \) be their corresponding PSD pignistic probability functions. Then the distance between PSD betting commitments is defined as:

$$ difPBetP_{m_{1} }^{m_{2} } \,=\,\max\limits_{A\subseteq {\Omega} } (| {PBetP_{m_{1} } (A)\,-\,PBetP_{m_{2} } (A)} |) $$
(10)

Influenced by the PSD pignistic probability function, the new distance operates on the power set of Ω, and it therefore uses more of the information contained in the BPA than \(d_{Bet}\). In what follows, we write \(d_{PBet} (m_{1} ,m_{2} )\) for \(difPBetP_{m_{1} }^{m_{2} } \). \(d_{PBet}\) is most useful for BPAs that contain non-singleton focal elements; when it is applied to BPAs that contain only singleton focal elements, the result coincides with that of \(d_{Bet}\).
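As a sketch of Definition 3.2 (added for illustration), the code below evaluates \(d_{PBet}\) on the two BPAs of Example 2.1, for which \(d_{Bet}\) vanished; the non-zero result previews the separability property proved in Section 3.3.

```python
from fractions import Fraction
from itertools import combinations

def nonempty_subsets(omega):
    elems = sorted(omega)
    return [frozenset(c) for r in range(1, len(elems) + 1)
            for c in combinations(elems, r)]

def pbetp(m, B):
    # PSD pignistic probability per (9)
    return sum(v * Fraction(2**len(B & A) - 1, 2**len(A) - 1)
               for A, v in m.items())

def d_pbet(m1, m2, omega):
    # maximum difference of PSD betting commitments, per (10)
    return max(abs(pbetp(m1, B) - pbetp(m2, B))
               for B in nonempty_subsets(omega))

omega = 'abc'
third = Fraction(1, 3)
m1 = {frozenset('a'): third, frozenset('b'): third, frozenset('c'): third}
m2 = {frozenset('abc'): Fraction(1)}
# the maximum is attained on the two-element subsets: |2/3 - 3/7| = 5/21
```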

3.3 Proof of the properties

\(d_{PBet}\) is a metric, as it satisfies all the standard metric properties. We prove these properties in this section.

Property 1 (Non-negativity)

\(d_{PBet} (m_{1} ,m_{2} )\ge 0\);

Proof

This property follows directly from (10).

$$d_{PBet} (m_{1} ,m_{2}) \,=\,\max\limits_{A\subseteq {\Omega} } (| {PBetP_{m_{1} } (A)\,-\,PBetP_{m_{2} } (A)} |)\ge 0 $$

Property 2 (Symmetry)

\(d_{PBet} (m_{1} ,m_{2} )=d_{PBet} (m_{2} ,m_{1} )\);

Proof

This is straightforward via (10).

$$\begin{array}{lll} d_{PBet} (m_{1} ,m_{2})&=&\max\limits_{A\subseteq {\Omega} } (| {PBetP_{m_{1} } (A)-PBetP_{m_{2} } (A)} |) \\ &=&\max\limits_{A\subseteq {\Omega} } (| {PBetP_{m_{2} } (A)-PBetP_{m_{1} } (A)} |) \\ &=&d_{PBet} (m_{2} ,m_{1} ) \\ \end{array} $$

Property 3\(^{\prime }\) (Reflexivity)

\(d_{PBet} (m_{1} ,m_{1} )=0\);

Proof

This property can be proved via (10).

$$d_{PBet} (m_{1} ,m_{1} )\,=\,\max\limits_{A\subseteq {\Omega} } (| {PBetP_{m_{1} } (A)\,-\,PBetP_{m_{1} } (A)} |)=0 $$

Property 3\(^{\prime \prime }\) (Separability)

\(d_{PBet} (m_{1} ,m_{2} )=0\Rightarrow m_{1} =m_{2} \);

Proof

Suppose that \(m_{1}\) and \(m_{2}\) are two BPAs on the frame of discernment \({\Omega } =\left \{ {\theta _{1} ,\theta _{2} ,...,\theta _{n} } \right \}\), which are detailed in Table 1, where \(A_{i} \subseteq {\Omega } \) and \(A_{i} \ne \emptyset \). The total number of such subsets \(A_{i}\) is \(2^{n}-1\), and they are listed in a fixed order.

Table 1 Two BPAs

If \(d_{PBet} (m_{1} ,m_{2} )=\max \limits _{A\subseteq {\Omega } } (| PBetP_{m_{1} } (A)-PBetP_{m_{2} }\) (A)|) = 0, we can get the following equations:

$$\left\{ {\begin{array}{l} PBetP_{m_{1} } (A_{1} )-PBetP_{m_{2} } (A_{1} )=0 \\ PBetP_{m_{1} } (A_{2} )-PBetP_{m_{2} } (A_{2} )=0 \\ \qquad\qquad\qquad\qquad{\vdots} \\ PBetP_{m_{1} } (A_{2^{n}-1} )-PBetP_{m_{2} } (A_{2^{n}-1} )=0 \\ \end{array}} \right. $$

By (9), we have:

$$\begin{array}{@{}rcl@{}} &&\left\{\!\! {\begin{array}{l} \left( \frac{2^{\left| {A_{1} \cap A_{1} } \right|}-1}{2^{\left| {A_{1} } \right|}-1}x_{1} +\frac{2^{\left| {A_{1} \cap A_{2} } \right|}-1}{2^{\left| {A_{2} } \right|}-1}x_{2} +{\cdots} +\frac{2^{\left| {A_{1} \cap A_{2^{n}-1} } \right|}-1}{2^{\left| {A_{2^{n}-1} } \right|}-1}x_{_{2^{n}-1}} \right)-\left( \frac{2^{\left| {A_{1} \cap A_{1} } \right|}-1}{2^{\left| {A_{1} } \right|}-1}y_{1} +\frac{2^{\left| {A_{1} \cap A_{2} } \right|}-1}{2^{\left| {A_{2} } \right|}-1}y_{2} +{\cdots} +\frac{2^{\left| {A_{1} \cap A_{2^{n}-1} } \right|}-1}{2^{\left| {A_{2^{n}-1} } \right|}-1}y_{_{2^{n}-1}} \right)=0 \\ \left( \frac{2^{\left| {A_{2} \cap A_{1} } \right|}-1}{2^{\left| {A_{1} } \right|}-1}x_{1} +\frac{2^{\left| {A_{2} \cap A_{2} } \right|}-1}{2^{\left| {A_{2} } \right|}-1}x_{2} +{\cdots} +\frac{2^{\left| {A_{2} \cap A_{2^{n}-1} } \right|}-1}{2^{\left| {A_{2^{n}-1} } \right|}-1}x_{_{2^{n}-1}} \right)-\left( \frac{2^{\left| {A_{2} \cap A_{1} } \right|}-1}{2^{\left| {A_{1} } \right|}-1}y_{1} +\frac{2^{\left| {A_{2} \cap A_{2} } \right|}-1}{2^{\left| {A_{2} } \right|}-1}y_{2} +{\cdots} +\frac{2^{\left| {A_{2} \cap A_{2^{n}-1} } \right|}-1}{2^{\left| {A_{2^{n}-1} } \right|}-1}y_{_{2^{n}-1}} \right)=0 \\ { }{\vdots} \\ \left( \frac{2^{\left| {A_{2^{n}-1} \cap A_{1} } \right|}-1}{2^{\left| {A_{1} } \right|}-1}x_{1} +\frac{2^{\left| {A_{2^{n}-1} \cap A_{2} } \right|}-1}{2^{\left| {A_{2} } \right|}-1}x_{2} \,+\,{\cdots} \,+\,\frac{2^{\left| {A_{2^{n}-1} \cap A_{2^{n}-1} } \right|}-1}{2^{\left| {A_{2^{n}-1} } \right|}-1}x_{_{2^{n}-1}} \right)\,-\,\left( \frac{2^{\left| {A_{2^{n}-1} \cap A_{1} } \right|}-1}{2^{\left| {A_{1} } \right|}-1}y_{1} \,+\,\frac{2^{\left| {A_{2^{n}-1} \cap A_{2} } \right|}-1}{2^{\left| {A_{2} } \right|}-1}y_{2} \,+\,{\cdots} \,+\,\frac{2^{\left| {A_{2^{n}-1} \cap A_{2^{n}-1} } \right|}-1}{2^{\left| {A_{2^{n}-1} } \right|}-1}y_{_{2^{n}-1}} \right)\,=\,0 \end{array}} \right.\\ &&\left\{\!\! 
{\begin{array}{l} \frac{2^{\left| {A_{1} \cap A_{1} } \right|}-1}{2^{\left| {A_{1} } \right|}-1}(x_{1} -y_{1} )+\frac{2^{\left| {A_{1} \cap A_{2} } \right|}-1}{2^{\left| {A_{2} } \right|}-1}(x_{2} -y_{2} )+{\cdots} +\frac{2^{\left| {A_{1} \cap A_{2^{n}-1} } \right|}-1}{2^{\left| {A_{2^{n}-1} } \right|}-1}(x_{_{2^{n}-1}} -y_{_{2^{n}-1}})=0 \\ \frac{2^{\left| {A_{2} \cap A_{1} } \right|}-1}{2^{\left| {A_{1} } \right|}-1}(x_{1} -y_{1} )+\frac{2^{\left| {A_{2} \cap A_{2} } \right|}-1}{2^{\left| {A_{2} } \right|}-1}(x_{2} -y_{2} )+{\cdots} +\frac{2^{\left| {A_{2} \cap A_{2^{n}-1} } \right|}-1}{2^{\left| {A_{2^{n}-1} } \right|}-1}(x_{_{2^{n}-1}} -y_{_{2^{n}-1}})=0 \\ {\vdots} \\ \frac{2^{\left| {A_{2^{n}-1} \cap A_{1} } \right|}-1}{2^{\left| {A_{1} } \right|}-1}(x_{1} -y_{1} )+\frac{2^{\left| {A_{2^{n}-1} \cap A_{2} } \right|}-1}{2^{\left| {A_{2} } \right|}-1}(x_{2} -y_{2} )+{\cdots} +\frac{2^{\left| {A_{2^{n}-1} \cap A_{2^{n}-1} } \right|}-1}{2^{\left| {A_{2^{n}-1} } \right|}-1}(x_{_{2^{n}-1}} -y_{_{2^{n}-1}})=0 \\ \end{array}} \right. \end{array} $$

It is equivalent to the equation as follows:

$$\begin{array}{@{}rcl@{}} &&\left[ {{\begin{array}{ccccc} {\frac{2^{\left| {A_{1} \cap A_{1} } \right|}-1}{2^{\left| {A_{1} } \right|}-1}} & {\frac{2^{\left| {A_{1} \cap A_{2} } \right|}-1}{2^{\left| {A_{2} } \right|}-1}} & {\cdots} & {\frac{2^{\left| {A_{1} \cap A_{2^{n}-1} } \right|}-1}{2^{\left| {A_{2^{n}-1} } \right|}-1}} \\ {\frac{2^{\left| {A_{2} \cap A_{1} } \right|}-1}{2^{\left| {A_{1} } \right|}-1}} & {\frac{2^{\left| {A_{2} \cap A_{2} } \right|}-1}{2^{\left| {A_{2} } \right|}-1}} & {\cdots} & {\frac{2^{\left| {A_{2} \cap A_{2^{n}-1} } \right|}-1}{2^{\left| {A_{2^{n}-1} } \right|}-1}} \\ {\vdots} & {\vdots} & {\ddots} & {\vdots} \\ {\frac{2^{\left| {A_{2^{n}-1} \cap A_{1} } \right|}-1}{2^{\left| {A_{1} } \right|}-1}} & {\frac{2^{\left| {A_{2^{n}-1} \cap A_{2} } \right|}-1}{2^{\left| {A_{2} } \right|}-1}} & {\cdots} & {\frac{2^{\left| {A_{2^{n}-1} \cap A_{2^{n}-1} } \right|}-1}{2^{\left| {A_{2^{n}-1} } \right|}-1}} \\ \end{array} }} \right]\left[ {{\begin{array}{c} {x_{1} -y_{1} } \\ {x_{2} -y_{2} } \\ {\vdots} \\ {x_{_{2^{n}-1}} -y_{_{2^{n}-1}}} \\ \end{array} }} \right]=0\\ &&\left[ {{\begin{array}{ccccc} {2^{\left| {A_{1} \cap A_{1} } \right|}-1} & {2^{\left| {A_{1} \cap A_{2} } \right|}-1} & {\cdots} & {2^{\left| {A_{1} \cap A_{2^{n}-1} } \right|}-1} \\ {2^{\left| {A_{2} \cap A_{1} } \right|}-1} & {2^{\left| {A_{2} \cap A_{2} } \right|}-1} & {\cdots} & {2^{\left| {A_{2} \cap A_{2^{n}-1} } \right|}-1} \\ {\vdots} & {\vdots} & {\ddots} & {\vdots} \\ {2^{\left| {A_{2^{n}-1} \cap A_{1} } \right|}-1} & {2^{\left| {A_{2^{n}-1} \cap A_{2} } \right|}-1} & {\cdots} & {2^{\left| {A_{2^{n}-1} \cap A_{2^{n}-1} } \right|}-1} \\ \end{array} }} \right]\left[ {{\begin{array}{ccccc} {\frac{1}{2^{\left| {A_{1} } \right|}-1}} & 0 & \cdots & 0 \\ 0 & {\frac{1}{2^{\left| {A_{2} } \right|}-1}} & \cdots & 0 \\ {\vdots} & {\vdots} & {\ddots} & {\vdots} \\ 0 & 0 & {\cdots} & {\frac{1}{2^{\left| {A_{2^{n}-1} } \right|}-1}} \\ \end{array} }} \right]\left[ {{\begin{array}{ccc} 
{x_{1} -y_{1} } \\ {x_{2} -y_{2} } \\ {\vdots} \\ {x_{_{2^{n}-1}} -y_{_{2^{n}-1}}} \\ \end{array} }} \right]=0 \end{array} $$

We set

$$R=\left[ {{\begin{array}{ccccc} {\frac{2^{\left| {A_{1} \cap A_{1} } \right|}-1}{2^{\left| {A_{1} } \right|}-1}} & {\frac{2^{\left| {A_{1} \cap A_{2} } \right|}-1}{2^{\left| {A_{2} } \right|}-1}} & {\cdots} & {\frac{2^{\left| {A_{1} \cap A_{2^{n}-1} } \right|}-1}{2^{\left| {A_{2^{n}-1} } \right|}-1}} \\ {\frac{2^{\left| {A_{2} \cap A_{1} } \right|}-1}{2^{\left| {A_{1} } \right|}-1}} & {\frac{2^{\left| {A_{2} \cap A_{2} } \right|}-1}{2^{\left| {A_{2} } \right|}-1}} & {\cdots} & {\frac{2^{\left| {A_{2} \cap A_{2^{n}-1} } \right|}-1}{2^{\left| {A_{2^{n}-1} } \right|}-1}} \\ {\vdots} & {\vdots} & {\ddots} & {\vdots} \\ {\frac{2^{\left| {A_{2^{n}-1} \cap A_{1} } \right|}-1}{2^{\left| {A_{1} } \right|}-1}} & {\frac{2^{\left| {A_{2^{n}-1} \cap A_{2} } \right|}-1}{2^{\left| {A_{2} } \right|}-1}} & {\cdots} & {\frac{2^{\left| {A_{2^{n}-1} \cap A_{2^{n}-1} } \right|}-1}{2^{\left| {A_{2^{n}-1} } \right|}-1}} \\ \end{array} }} \right] $$
$$S=\left[ {{\begin{array}{cccc} {2^{\left| {A_{1} \cap A_{1} } \right|}-1} & {2^{\left| {A_{1} \cap A_{2} } \right|}-1} & {\cdots} & {2^{\left| {A_{1} \cap A_{2^{n}-1} } \right|}-1} \\ {2^{\left| {A_{2} \cap A_{1} } \right|}-1} & {2^{\left| {A_{2} \cap A_{2} } \right|}-1} & {\cdots} & {2^{\left| {A_{2} \cap A_{2^{n}-1} } \right|}-1} \\ {\vdots} & {\vdots} & {\ddots} & {\vdots} \\ {2^{\left| {A_{2^{n}-1} \cap A_{1} } \right|}-1} & {2^{\left| {A_{2^{n}-1} \cap A_{2} } \right|}-1} & {\cdots} & {2^{\left| {A_{2^{n}-1} \cap A_{2^{n}-1} } \right|}-1} \\ \end{array} }} \right] $$
$$T=\left[ {{\begin{array}{cccc} {\frac{1}{2^{\left| {A_{1} } \right|}-1}} & 0 & \cdots & 0 \\ 0 & {\frac{1}{2^{\left| {A_{2} } \right|}-1}} & \cdots & 0 \\ {\vdots} & {\vdots} & {\ddots} & {\vdots} \\ 0 & 0 & {\cdots} & {\frac{1}{2^{\left| {A_{2^{n}-1} } \right|}-1}} \\ \end{array} }} \right] $$

Then we can get R = S T and \(R\left [ {{\begin {array}{ccc} {x_{1} -y_{1} } \\ {x_{2} -y_{2} } \\ {\vdots } \\ {x_{_{2^{n}-1}} -y_{_{2^{n}-1}}} \\ \end {array} }} \right ]=0\).

The positive definiteness of the Jaccard index matrix was established by M. Bouchard in [28]. S resembles the Jaccard index matrix but is more complex, and a general mathematical proof of its positive definiteness is difficult. In the following, S is listed for n = 1,2,3,4. After being diagonalized, each of these matrices becomes an identity matrix, which demonstrates that S is non-singular for n = 1,2,3,4. As S is already 31 × 31 for n = 5 and keeps growing, the results for n ≥ 5 are not listed; we expect that the diagonalized matrix remains an identity matrix. A general mathematical proof of the positive definiteness is left for future work.

$$\text{For } n=1, S=[1]; \text{For } n=2, S=\left[ {{\begin{array}{ccc} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 3 \\ \end{array} }} \right]\overset{diagonalized}{\longrightarrow} \left[ {{\begin{array}{cccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} }} \right];$$
$$\text{For } n=3, S=\left[ {{\begin{array}{cccccccc} 1 & 0 & 0 & 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 1 \\ 1 & 1 & 0 & 3 & 1 & 1 & 3 \\ 1 & 0 & 1 & 1 & 3 & 1 & 3 \\ 0 & 1 & 1 & 1 & 1 & 3 & 3 \\ 1 & 1 & 1 & 3 & 3 & 3 & 7 \\ \end{array} }} \right]\overset{diagonalized}{\longrightarrow} \left[ {{\begin{array}{cccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \end{array} }} \right]; $$
$$\text{For } n=4, S=\left[ {{\begin{array}{ccccccccccccccc} 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 \\[-.2pt] 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 \\[-.2pt] 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 \\[-.2pt] 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 1 \\[-.2pt] 1 & 1 & 0 & 0 & 3 & 1 & 1 & 1 & 1 & 0 & 3 & 3 & 1 & 1 & 3 \\[-.2pt] 1 & 0 & 1 & 0 & 1 & 3 & 1 & 1 & 0 & 1 & 3 & 1 & 3 & 1 & 3 \\[-.2pt] 1 & 0 & 0 & 1 & 1 & 1 & 3 & 0 & 1 & 1 & 1 & 3 & 3 & 1 & 3 \\[-.2pt] 0 & 1 & 1 & 0 & 1 & 1 & 0 & 3 & 1 & 1 & 3 & 1 & 1 & 3 & 3 \\[-.2pt] 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 3 & 1 & 1 & 3 & 1 & 3 & 3 \\[-.2pt] 0 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 3 & 1 & 1 & 3 & 3 & 3 \\[-.2pt] 1 & 1 & 1 & 0 & 3 & 3 & 1 & 3 & 1 & 1 & 7 & 3 & 3 & 3 & 7 \\[-.2pt] 1 & 1 & 0 & 1 & 3 & 1 & 3 & 1 & 3 & 1 & 3 & 7 & 3 & 3 & 7 \\[-.2pt] 1 & 0 & 1 & 1 & 1 & 3 & 3 & 1 & 1 & 3 & 3 & 3 & 7 & 3 & 7 \\[-.2pt] 0 & 1 & 1 & 1 & 1 & 1 & 1 & 3 & 3 & 3 & 3 & 3 & 3 & 7 & 7 \\[-.2pt] 1 & 1 & 1 & 1 & 3 & 3 & 3 & 3 & 3 & 3 & 7 & 7 & 7 & 7 & {15} \\[-.2pt] \end{array} }} \right]\overset{diagonalized}{\longrightarrow}\left[ {{\begin{array}{cccccccccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-.2pt] 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-.2pt] 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-.2pt] 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-.2pt] 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-.2pt] 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-.2pt] 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-.2pt] 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-.2pt] 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\[-.2pt] 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\[-.2pt] 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\[-.2pt] 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 
\\[-.2pt] 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\[-.2pt] 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\[-.2pt] 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\[-.2pt] \end{array} }} \right]. $$

Since S is non-singular, the rank of S is \(2^{n}-1\), i.e., \(rank(S)=2^{n}-1\). Obviously, \(rank(T)=2^{n}-1\). As R = ST, it follows that \(rank(R)=2^{n}-1\).

Because \(rank(R)=2^{n}-1\) and \(R\left [ {{\begin {array}{c} {x_{1} -y_{1} } \\ {x_{2} -y_{2} } \\ {\vdots } \\ {x_{2^{n}-1} -y_{2^{n}-1} } \\ \end {array} }} \right ]=0\), it follows that \(\left [ {{\begin {array}{c} {x_{1} -y_{1} } \\ {x_{2} -y_{2} } \\ {\vdots } \\ {x_{2^{n}-1} -y_{2^{n}-1} } \\ \end {array} }} \right ]=0\). That is, \(x_{1} =y_{1}\), \(x_{2} =y_{2}\), ..., \(x_{2^{n}-1} =y_{2^{n}-1} \), which is equivalent to \(m_{1} =m_{2}\). □
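The non-singularity of S used above can be checked numerically for small n; the sketch below (added for illustration, using NumPy's rank computation) rebuilds S from its definition and verifies full rank for n = 1,...,4.

```python
import numpy as np
from itertools import combinations

def s_matrix(n):
    """Matrix S with entries 2^|Ai ∩ Aj| - 1, indexed by the 2^n - 1
    non-empty subsets of an n-element frame in a fixed order."""
    subs = [frozenset(c) for r in range(1, n + 1)
            for c in combinations(range(n), r)]
    return np.array([[2**len(A & B) - 1 for B in subs] for A in subs],
                    dtype=float)

for n in range(1, 5):
    S = s_matrix(n)
    assert np.linalg.matrix_rank(S) == 2**n - 1   # S is non-singular
```

This only confirms the cases tabulated in the paper; it is not a substitute for the general proof left for future work.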

Property 4 (Triangle inequality)

$$d_{PBet} (m_{1} ,m_{2} )\le d_{PBet} (m_{1} ,m_{3} )+d_{PBet} (m_{3} ,m_{2} ); $$

Proof

Suppose that there are three BPAs on the frame of discernment Ω, denoted by \(m_{1}\), \(m_{2}\) and \(m_{3}\). Their corresponding PBetP values are detailed in Table 2, where \(A_{i} \subseteq {\Omega } \) \((1\le i\le 2^{n}-1)\) and \(A_{i} \ne \emptyset \).

Table 2 The corresponding PBetP of \(m_{1}\), \(m_{2}\) and \(m_{3}\)

Assume that,

$$\begin{array}{lll} d_{PBet} (m_{1} ,m_{2} )&=&\max\limits_{A\subseteq {\Omega} } (\left| {PBetP_{m_{1} } (A)-PBetP_{m_{2} } (A)} \right|) \\ &=&\left| {PBetP_{m_{1} } (A_{i} )-PBetP_{m_{2} } (A_{i} )} \right|=\left| {x_{i} -y_{i} } \right| \\ \end{array}. $$

Then the following two inequalities certainly hold:

$$\begin{array}{lll} d_{PBet} (m_{1} ,m_{3} )&=&\max\limits_{A\subseteq {\Omega} } (\left| {PBetP_{m_{1} } (A)-PBetP_{m_{3} } (A)} \right|) \\ &\ge& \left| {PBetP_{m_{1} } (A_{i} )-PBetP_{m_{3} } (A_{i} )} \right|\\ &=&\left| {x_{i} -z_{i} } \right| \\ \end{array} $$
$$\begin{array}{lll} d_{PBet} (m_{3} ,m_{2} )&=&\max\limits_{A\subseteq {\Omega} } (\left| {PBetP_{m_{3} } (A)-PBetP_{m_{2} } (A)} \right|) \\ &\ge& \left| {PBetP_{m_{3} } (A_{i} )-PBetP_{m_{2} } (A_{i} )} \right|=\left| {z_{i} -y_{i} } \right| \\ \end{array} $$

Then we obtain the following inequality:

$$\begin{array}{lll} d_{PBet} (m_{1} ,m_{3} )+d_{PBet} (m_{3} ,m_{2} )\!&\ge& \left| {x_{i} -z_{i} } \right|+\left| {z_{i} -y_{i} } \right| \\ &\ge& \left| {x_{i} -z_{i} +z_{i} -y_{i} } \right| \\ &=&\left| {x_{i} \,-\,y_{i} } \right|\,=\,d_{PBet} (m_{1} ,m_{2} ) \\ \end{array} $$

That is,

$$d_{PBet} (m_{1} ,m_{2} )\le d_{PBet} (m_{1} ,m_{3} )+d_{PBet} (m_{3} ,m_{2} ). $$

The proofs above establish that \(d_{PBet}\) is a metric.

4 Numerical comparisons

In [11], a distance between two BPAs is proposed, which is defined as:

$$ d_{J} (m_{1} ,m_{2} )=\sqrt {\frac{1}{2}\left( {\overset{\rightharpoonup}{\boldsymbol{m}}_{1} -\overset{\rightharpoonup}{\boldsymbol{m}}_{2} } \right)^{T}\underset{=}{\boldsymbol{D}}\left( {\overset{\rightharpoonup}{\boldsymbol{m}}_{1} -\overset{\rightharpoonup}{\boldsymbol{m}}_{2} } \right)} $$
(11)

where \(\overset {\rightharpoonup }{\boldsymbol {m}}\) is a \(2^{\left | {\Omega } \right |}\)-dimensional column vector generated from a BPA, \(\underset {=}{\boldsymbol {D}}\) is a \(2^{\left | {\Omega } \right |}\times 2^{\left | {\Omega } \right |}\) matrix whose elements are \(J(A,B)=\left | {A\cap B} \right |/\left | {A\cup B} \right |\), and A and B are subsets of Ω (we define \(\left | {\emptyset \cap \emptyset } \right |/\left | {\emptyset \cup \emptyset } \right |=0\)). \(d_{J}\) is proved to be a metric in [28], and it is widely used to measure the dissimilarity between two BPAs.
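A sketch of (11), added for illustration: it enumerates only the non-empty subsets of Ω to build the Jaccard matrix (the empty set can be omitted, since every BPA assigns it zero mass), and the two BPAs below are illustrative.

```python
import numpy as np
from itertools import combinations

def d_j(m1, m2, omega):
    """Jousselme distance between two BPAs, per (11)."""
    elems = sorted(omega)
    subs = [frozenset(c) for r in range(1, len(elems) + 1)
            for c in combinations(elems, r)]
    # Jaccard matrix D with entries |A ∩ B| / |A ∪ B|
    D = np.array([[len(A & B) / len(A | B) for B in subs] for A in subs])
    diff = np.array([m1.get(A, 0.0) - m2.get(A, 0.0) for A in subs])
    return float(np.sqrt(0.5 * diff @ D @ diff))

m1 = {frozenset('a'): 1.0}
m2 = {frozenset('b'): 1.0}
# maximal disagreement on disjoint singletons gives d_J = 1
```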

In [15], a cosine similarity measure between two BPAs is proposed, which is defined as:

$$ Sim(m_{1} ,m_{2} )=\cos \theta =\frac{\left\langle {\overset{\rightharpoonup}{\boldsymbol{m}}_{1} ,\overset{\rightharpoonup}{\boldsymbol{m}}_{2} } \right\rangle }{\left\| {\overset{\rightharpoonup}{\boldsymbol{m}}_{1} } \right\|\left\| {\overset{\rightharpoonup}{\boldsymbol{m}}_{2} } \right\|} $$
(12)

where \(\overset {\rightharpoonup }{\boldsymbol {m}}\) is a \(2^{\left | {\Omega } \right |}\)-dimensional column vector generated from a BPA, \(\theta\) is the angle between \(\overset {\rightharpoonup }{\boldsymbol {m}}_{1}\) and \(\overset {\rightharpoonup }{\boldsymbol {m}}_{2}\), \(\left \langle {\overset {\rightharpoonup }{\boldsymbol {m}}_{1} ,\overset {\rightharpoonup }{\boldsymbol {m}}_{2} } \right \rangle \) is their inner product, and \(\left \| {\overset {\rightharpoonup }{\boldsymbol {m}}} \right \|\) is the norm of \(\overset {\rightharpoonup }{\boldsymbol {m}}\). \(\cos \theta \) is proved to be a semi-pseudo-metric in [28]. Since \(\cos \theta \) is a similarity measure, the corresponding dissimilarity measure can be defined as \(1-\cos \theta \).
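Read literally, (12) treats the two BPAs as plain \(2^{|\Omega|}\)-dimensional vectors; the sketch below assumes the ordinary Euclidean inner product (the measure in [15] may weight the inner product differently, so this is only an illustration of the form of (12)):

```python
import itertools
import math

def powerset(frame):
    """All non-empty subsets of the frame of discernment."""
    return [frozenset(c) for r in range(1, len(frame) + 1)
            for c in itertools.combinations(frame, r)]

def cos_similarity(m1, m2, frame):
    """cos(theta) between BPA vectors, assuming the plain Euclidean inner product."""
    subsets = powerset(frame)
    v1 = [m1.get(a, 0.0) for a in subsets]
    v2 = [m2.get(a, 0.0) for a in subsets]
    dot = sum(x * y for x, y in zip(v1, v2))
    norm1 = math.sqrt(sum(x * x for x in v1))
    norm2 = math.sqrt(sum(x * x for x in v2))
    return dot / (norm1 * norm2)

frame = ("t1", "t2", "t3")
m = {frozenset({"t1"}): 0.6, frozenset({"t1", "t2"}): 0.4}
# Identical BPAs have dissimilarity 1 - cos(theta) = 0 (up to rounding):
print(abs(1 - cos_similarity(m, m, frame)) < 1e-9)  # -> True
```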

In this section, three examples are given to demonstrate the performance of \(d_{PBet}\): Example 4.1 (one of the cases in Example 1 in [29]), Example 4.2 (Example 5, Fig. 2 in [20]) and Example 4.3 (Example 1, Fig. 6 in [12]). In these examples, \(d_{PBet}\) is compared with \(d_{Bet}\), \(d_{J}\) and \(1-\cos \theta \), each of which can be used to measure the dissimilarity between two BPAs.

Example 4.1

Let \(m_{1}\), \(m_{2}\), \(m_{3}\) and \(m_{4}\) be four BPAs on the frame of discernment \({\Omega } =\left \{ {\theta _{1} ,\theta _{2} ,\theta _{3} } \right \}\), defined as follows:

$$m_{1} \left( {\left\{ {\theta_{1} } \right\}} \right)=\frac{1}{3},\quad m_{1} \left( {\left\{ {\theta_{2} } \right\}} \right)=\frac{1}{3},\quad m_{1} \left( {\left\{ {\theta_{3} } \right\}} \right)=\frac{1}{3}; $$
$$m_{2} \left( {\left\{ {\theta_{1} ,\theta_{2} ,\theta_{3} } \right\}} \right)=1; $$
$$m_{3} \left( {\left\{ {\theta_{1} } \right\}} \right)=\frac{1}{3},\quad m_{3} \left( {\left\{ {\theta_{2} } \right\}} \right)=\frac{1}{3},\quad m_{3} \left( {\left\{ {\theta_{1} ,\theta_{2} } \right\}} \right)=\frac{1}{3}; $$
$$m_{4} \left( {\left\{ {\theta_{1} } \right\}} \right)=1. $$

For \(m_{1}\) and \(m_{2}\), the results of \(d_{Bet}\), \(d_{J}\), \(1-\cos \theta \) and \(d_{PBet}\) are as follows:

\(d_{Bet} (m_{1} ,m_{2} )=0; \quad d_{J} (m_{1} ,m_{2} )=0.5774; \quad 1-\cos \theta =0; \quad d_{PBet} (m_{1} ,m_{2} )=0.2381. \)

Since \(m_{1}\) and \(m_{2}\) are not the same, the dissimilarity between them should not be 0. Accordingly, both \(d_{Bet}(m_{1},m_{2})=0\) and \(1-\cos \theta =0\) are counterintuitive, while \(d_{J}(m_{1},m_{2})=0.5774\) and \(d_{PBet}(m_{1},m_{2})=0.2381\) are acceptable.
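The \(d_{J}\) values quoted in this example can be reproduced directly from definition (11); a small self-contained check (values rounded to four decimals):

```python
import itertools
import math

def powerset(frame):
    """All non-empty subsets of the frame of discernment."""
    return [frozenset(c) for r in range(1, len(frame) + 1)
            for c in itertools.combinations(frame, r)]

def d_jousselme(m1, m2, frame):
    """d_J from (11), with D[A][B] = |A & B| / |A | B|."""
    subsets = powerset(frame)
    v = [m1.get(a, 0.0) - m2.get(a, 0.0) for a in subsets]
    return math.sqrt(0.5 * sum(
        v[i] * v[j] * len(a & b) / len(a | b)
        for i, a in enumerate(subsets) for j, b in enumerate(subsets)))

frame = ("t1", "t2", "t3")
s = lambda *xs: frozenset(xs)
m1 = {s("t1"): 1/3, s("t2"): 1/3, s("t3"): 1/3}
m2 = {s("t1", "t2", "t3"): 1.0}
m3 = {s("t1"): 1/3, s("t2"): 1/3, s("t1", "t2"): 1/3}
m4 = {s("t1"): 1.0}

print(round(d_jousselme(m1, m2, frame), 4))  # -> 0.5774
print(round(d_jousselme(m1, m3, frame), 4))  # -> 0.3333
print(round(d_jousselme(m2, m3, frame), 4))  # -> 0.5774
print(round(d_jousselme(m1, m4, frame), 4))  # -> 0.5774
```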

For \(m_{1}\) and \(m_{3}\), the results of \(d_{J}\) and \(d_{PBet}\) are as follows:

$$d_{J} (m_{1} ,m_{3} )=0.3333; \quad d_{PBet} (m_{1} ,m_{3} )=0.3333. $$

It is obvious that \(m_{1}\) and \(m_{3}\) are not the same, and both \(d_{J}(m_{1},m_{3})=0.3333\) and \(d_{PBet}(m_{1},m_{3})=0.3333\) reflect this.

For \(m_{2}\) and \(m_{3}\), the results of \(d_{J}\) and \(d_{PBet}\) are as follows:

$$d_{J} (m_{2} ,m_{3} )=0.5774; \quad d_{PBet} (m_{2} ,m_{3} )=0.5714. $$

Since \(m_{1}\) and \(m_{3}\) are not the same, the dissimilarity between \(m_{1}\) and \(m_{2}\) should differ from that between \(m_{2}\) and \(m_{3}\). But from the results we can see that \(d_{J}(m_{1},m_{2})=d_{J}(m_{2},m_{3})=0.5774\), which is counterintuitive. In contrast, \(d_{PBet}(m_{1},m_{2})=0.2381\ne d_{PBet}(m_{2},m_{3})=0.5714\) reflects the difference between the two dissimilarities, which demonstrates that \(d_{PBet}\) is more reasonable than \(d_{J}\) here.

For \(m_{1}\) and \(m_{4}\), the results of \(d_{J}\) and \(d_{PBet}\) are as follows:

$$d_{J} (m_{1} ,m_{4} )=0.5774; \quad d_{PBet} (m_{1} ,m_{4} )=0.6667. $$

As we can see, \(m_{4}\) is absolutely confident in \(\theta_{1}\), while \(m_{1}\) and \(m_{2}\) can both be considered highly uncertain sources: \(m_{1}\) is fully random, and \(m_{2}\) corresponds to the fully ignorant source. Although \(m_{1}\) and \(m_{2}\) differ in the intrinsic nature of their uncertainty, from a decision-making point of view both leave the decision-maker facing complete indeterminacy. Intuitively, then, \(m_{1}\) should be closer to \(m_{2}\) than to \(m_{4}\). Since \(d_{J}(m_{1},m_{2})=d_{J}(m_{1},m_{4})=0.5774\) is counterintuitive, \(d_{J}\) does not characterize the difference between these two very different cases well. As \(d_{PBet}(m_{1},m_{2})=0.2381<d_{PBet}(m_{1},m_{4})=0.6667\) is consistent with the analysis, \(d_{PBet}\) characterizes the difference well.

To sum up, in this example \(d_{PBet}\) is more reasonable than the other three dissimilarity measures.

Example 4.2

Let \(m_{1}\) and \(m_{2}\) be two BPAs on the frame of discernment \({\Omega } =\left \{ {\theta _{1} ,\theta _{2} ,\theta _{3} } \right \}\); the number of non-empty subsets of Ω is \(2^{3}-1=7\). Here we set \(m_{1} (\left \{ {\theta _{3} } \right \})=1\) and keep it unchanged, while \(m_{2}\) changes from steps 1 to 20 as follows: in step 1, \(m_{2}\) begins with an identical distribution (each value is \(\frac {1}{7}\)) of the mass over the 7 non-empty subsets; from steps 2 to 20, each step increases \(m_{2} (\left \{ {\theta _{1} ,\theta _{2} ,\theta _{3} } \right \})\) by Δ and decreases each of the 6 other non-empty subsets by \(\frac {\Delta }{6}\) (so the masses still sum to 1). The value \(\frac {6}{133}\) is assigned to Δ to make sure that \(m_{2} (\left \{ {\theta _{1} ,\theta _{2} ,\theta _{3} } \right \})=1\) in step 20. The comparisons of \(d_{Bet}\), \(d_{J}\), \(1-\cos \theta \) and \(d_{PBet}\) over all 20 steps are detailed in Table 3 and illustrated in Fig. 1.
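The step schedule for \(m_{2}\) can be checked in exact arithmetic; a short sketch confirming that the masses stay normalized at every step and that \(m_{2}(\left \{ {\theta _{1} ,\theta _{2} ,\theta _{3} } \right \})\) reaches exactly 1 at step 20:

```python
import itertools
from fractions import Fraction

frame = ("t1", "t2", "t3")
subsets = [frozenset(c) for r in range(1, 4)
           for c in itertools.combinations(frame, r)]  # the 7 non-empty subsets
full = frozenset(frame)
delta = Fraction(6, 133)

def m2_at(step):
    """m2 in the given step (1..20) of Example 4.2, in exact arithmetic."""
    inc = (step - 1) * delta
    m = {a: Fraction(1, 7) - inc / 6 for a in subsets}
    m[full] = Fraction(1, 7) + inc
    return m

for step in range(1, 21):
    assert sum(m2_at(step).values()) == 1   # normalized at every step
assert m2_at(20)[full] == 1                 # all mass on the full set at step 20
assert all(v == 0 for a, v in m2_at(20).items() if a != full)
print("step 20: m2(Omega) =", m2_at(20)[full])
```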

Table 3 Comparisons of the four dissimilarity measures
Fig. 1 Comparisons of the four dissimilarity measures

From steps 1 to 20, \(m_{1} (\left \{ {\theta _{3} } \right \})=1\) is kept unchanged, while \(m_{2} (\left \{ {\theta _{3} } \right \})\) decreases from \(\frac {1}{7}\) to 0. Intuitively, the distance between \(m_{1}\) and \(m_{2}\) should increase gradually. In Fig. 1, however, \(d_{Bet}\) keeps an invariable value of 0.6667, which mistakenly indicates that \(m_{2}\) stays unchanged as \(m_{1}\) does. This is obviously unreasonable, and it also demonstrates that \(d_{Bet}\) is a pseudo-metric that does not satisfy (p3)\(^{\prime \prime }\) in the standard metric properties. The values of \(1-\cos \theta \) are acceptable from steps 1 to 19. However, \(m_{1}\) and \(m_{2}\) are not completely conflicting in step 20, so the dissimilarity should not be 1 there; accordingly, \(1-\cos \theta \) is faulty. Since both \(d_{J}\) and \(d_{PBet}\) are metrics, their values increase gradually over all 20 steps and are reasonable.

Example 4.3

Let \({\Omega } =\left \{ {\theta _{1} ,\theta _{2} ,...,\theta _{20} } \right \}\) be a frame of discernment. \(m_{1}\) and \(m_{2}\) are two BPAs on Ω defined as follows:

$$m_{1} \left( {\left\{ {\theta_{2} ,\theta_{3} ,\theta_{4} } \right\}} \right)=0.05,\quad m_{1} \left( {\left\{ {\theta_{7} } \right\}} \right)=0.05, $$
$$m_{1} \left( {\Omega} \right)=0.1,\quad m_{1} \left( A \right)=0.8; $$
$$m_{2} (A^{\ast })=1\quad \text{with}\quad A^{\ast }=\left\{ {\theta_{1} ,\theta_{2} ,\theta_{3} ,\theta_{4} ,\theta_{5} } \right\}. $$

Here we set 20 steps. From steps 1 to 20, \(m_{2}\) is unchanged, while \(m_{1}\) changes as follows: in step 1, A is \(\left \{ {\theta _{1} } \right \}\); from steps 2 to 20, one more element \(\theta _{i}\) (i = 2,3,...,20) is added to A at each step. The comparisons of \(d_{Bet}\), \(d_{J}\), \(1-\cos \theta \) and \(d_{PBet}\) over all 20 steps are detailed in Table 4 and illustrated in Fig. 2.

Table 4 Comparisons of the four dissimilarity measures
Fig. 2 Comparisons of the four dissimilarity measures

As can be seen from Fig. 2, the value of \(1-\cos \theta \) stays unchanged at 1 from steps 1 to 4 and from steps 6 to 20. Since \(m_{1}\) and \(m_{2}\) are not completely conflicting in these steps, this is obviously unreasonable. \(d_{Bet}\), \(d_{J}\) and \(d_{PBet}\) show a similar overall trend: all three decrease as A approaches \(A^{\ast }\) from steps 1 to 4, reach their respective minimum values when A coincides with \(A^{\ast }\) at step 5, and then rise as A departs from \(A^{\ast }\) in the remaining steps.

Further analysis reveals more. From steps 3 to 8, \(d_{PBet}\) varies over a wider range than \(d_{Bet}\) and \(d_{J}\), indicating a better sensitivity; from steps 10 to 20, it varies over a narrower range, indicating a worse sensitivity. The reason is as follows. The dissimilarity varies within [0,1]. Steps 3 to 8 are adjacent to step 5 (the minimum value), while steps 10 to 20 are far from it. From steps 3 to 8, the dissimilarity has a large room to vary, from the minimum value up towards 1; from steps 10 to 20, the dissimilarity has already approached its maximum value while still rising, so the room left is small. On one hand, the better sensitivity from steps 3 to 8 enables \(d_{PBet}\) to better distinguish these steps; on the other hand, it makes \(d_{PBet}\) approach its maximum value too quickly, which leads to the worse sensitivity from steps 10 to 20.

To summarize, \(d_{Bet}\) and \(1-\cos \theta \) are not metrics, and the results obtained from them are sometimes obviously unreasonable. \(d_{J}\) and \(d_{PBet}\) are metrics, and the results obtained from them are reasonable. Compared with \(d_{J}\), \(d_{PBet}\) has both an advantage and a disadvantage: it is more accurate in some cases (Example 4.1), and its sensitivity is better when the room for variation is large, but worse when that room is small. \(d_{PBet}\) is a proper measure for quantifying the dissimilarity between BPAs; as it uses more of the information contained in a BPA (it works over the power set) than many other dissimilarity measures, its performance is better.

To demonstrate the superiority of \(d_{PBet}\) further, \(d_{PBet}\) and \(d_{J}\) are compared through applications in combining conflicting BPAs in Section 5.

5 Applications of the proposed distance

In this section, \(d_{PBet}\) is used to replace \(1-\cos \theta \) and \(d_{J}\) in two methods presented in [16] and [30], respectively. Both methods are used for combining conflicting BPAs.

After importing \(d_{PBet}\) into the method in [16], Method 1 is generated as follows:

Suppose that there are \(n\) (\(n\ge 3\)) BPAs on the frame of discernment Ω; \(m_{i}\) (\(1\le i\le n\)) and \(m_{j}\) (\(1\le j\le n\)) denote any two of them.

  1. Step 1:

    Calculate the value of \(d_{PBet}(m_{i},m_{j})\); the corresponding similarity measure between \(m_{i}\) and \(m_{j}\) is then obtained, which is defined as:

    $$ Sim(m_{i} ,m_{j} )=1-d_{PBet} (m_{i} ,m_{j} ) $$
    (13)

    When \(i=j\), it is obvious that \(Sim(m_{i},m_{j})=1\). For ease of description, we use \(S_{ij}\) to denote \(Sim(m_{i},m_{j})\) below.

  2. Step 2:

    Use \(S_{ij}\) to build the similarity measure matrix (SMM), which is as follows:

    $$ SMM=\left[ {{\begin{array}{ccccccc} 1 & {S_{12} } & {\cdots} & {S_{1j} } & \cdots & {S_{1n} } \\ {S_{21} } & 1 & {\cdots} & {S_{2j} } & \cdots & {S_{2n} } \\ {\vdots} & {\vdots} & & {\vdots} & & \vdots \\ {S_{i1} } & {S_{i2} } & {\cdots} & {S_{ij} } & {\cdots} & {S_{in} } \\ {\vdots} & {\vdots} & & {\vdots} & & \vdots \\ {S_{n1} } & {S_{n2} } & {\cdots} & {S_{nj} } & {\cdots} & 1 \\ \end{array} }} \right] $$
    (14)
  3. Step 3:

    Calculate the support degree of \(m_{i}\), which is defined as:

    $$ Sup(m_{i} )=\sum\limits_{j=1,j\ne i}^{n} {S_{ij} } $$
    (15)
  4. Step 4:

    Calculate the confidence degree of \(m_{i}\), which is defined as:

    $$ \omega_{i} =\frac{Sup(m_{i} )}{\max\limits_{1\le i\le n} \left[ {Sup(m_{i} )} \right]} $$
    (16)
  5. Step 5:

    Modify the original BPAs \(m_{i}\) via the following equation to obtain the new BPAs \(m_{i}^{\prime }\):

    $$ m_{i}^{\prime }(A)=\left\{ {\begin{array}{ll} {\omega_{i} m_{i} (A),} & {A\subset {\Omega} } \\ {1-\sum\limits_{B\subset {\Omega} } {\omega_{i} m_{i} (B)} ,} & {A={\Omega} } \\ \end{array} }\right. $$
    (17)
  6. Step 6:

    Use Dempster’s combination rule to combine the new BPAs \(m_{i}^{\prime }\); the combined result is then obtained.
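The six steps above can be sketched end to end. In the sketch below, `dist` is a pluggable distance function standing in for \(d_{PBet}\) (whose PSD pignistic transform is defined earlier in the paper and is not re-implemented here), and `toy_dist` is a hypothetical stand-in used only to make the demo run; the sketch shows the pipeline, not the exact measure:

```python
def dempster(m1, m2):
    """Dempster's rule of combination for BPAs given as {frozenset: mass}."""
    raw, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb  # mass assigned to the empty set
    return {a: v / (1.0 - conflict) for a, v in raw.items()}

def method1(bpas, frame, dist):
    """Steps 1-6 of Method 1; `dist` plays the role of d_PBet."""
    n = len(bpas)
    # Steps 1-2: similarity measure matrix S_ij = 1 - dist(m_i, m_j)
    S = [[1.0 - dist(bpas[i], bpas[j]) for j in range(n)] for i in range(n)]
    # Step 3: support degrees
    sup = [sum(S[i][j] for j in range(n) if j != i) for i in range(n)]
    # Step 4: confidence degrees
    omega = [s / max(sup) for s in sup]
    # Step 5: discount each BPA, moving the removed mass to Omega
    full = frozenset(frame)
    discounted = []
    for w, m in zip(omega, bpas):
        md = {a: w * v for a, v in m.items() if a != full}
        md[full] = 1.0 - sum(md.values())
        discounted.append(md)
    # Step 6: combine the modified BPAs with Dempster's rule
    out = discounted[0]
    for m in discounted[1:]:
        out = dempster(out, m)
    return out

# Hypothetical stand-in distance (half the L1 difference of the mass vectors):
def toy_dist(m1, m2):
    keys = set(m1) | set(m2)
    return 0.5 * sum(abs(m1.get(a, 0.0) - m2.get(a, 0.0)) for a in keys)

frame = ("t1", "t2")
bpas = [{frozenset({"t1"}): 0.8, frozenset(frame): 0.2},
        {frozenset({"t1"}): 0.7, frozenset(frame): 0.3},
        {frozenset({"t2"}): 0.9, frozenset(frame): 0.1}]
result = method1(bpas, frame, toy_dist)
print(round(sum(result.values()), 4))  # the combined BPA is normalized -> 1.0
```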

The example of fault diagnosis in [16] is quoted to show the performance of Method 1. The details are as follows.

Example 5.1

Let \({\Omega } =\left \{ {\theta _{1} ,\theta _{2} ,\theta _{3} ,\theta _{4} } \right \}\) be a frame of discernment, where \(\theta _{i}\) (1 ≤ i ≤ 4) represents a running condition of the equipment: \(\theta _{1} =\left \{ {normal} \right \}\), \(\theta _{2} =\left \{ {unbalance} \right \}\), \(\theta _{3} =\left \{ {eccentricity} \right \}\) and \(\theta _{4} =\left \{ {base\ loosening} \right \}\). \(m_{1}\), \(m_{2}\) and \(m_{3}\) are BPAs obtained experimentally from three sensors, detailed in Table 5. It is known that some interference is added to the second sensor, so \(m_{2}\) cannot reflect the running conditions correctly.

Table 5 \(m_{1}\), \(m_{2}\) and \(m_{3}\) obtained from three sensors

Among the three BPAs, \(m_{2}\) is incorrect, while \(m_{1}\) and \(m_{3}\) are correct and both assign most of their belief to \(\theta_{2}\). Intuitively, a reasonable combined result of the three BPAs should indicate that \(\theta_{2}\) is most likely to happen. Four different methods are used to combine the three BPAs: Dempster’s combination rule, Deng’s method (using \(d_{J}\)), Wen’s method (using \(\cos \theta \)) and Method 1 (using \(d_{PBet}\)). The results obtained from the different combination methods are detailed in Table 6.

Table 6 Results obtained from different combination methods

As can be seen from Table 6, all four methods give their largest mass of belief to \(\theta_{2}\), so their combined results are reasonable. Moreover, Method 1 (using \(d_{PBet}\)) gives \(\theta_{2}\) a larger mass of belief than the other three methods, so it has a better convergence rate and its result is more convenient for decision making.

After importing \(d_{PBet}\) into the method in [30], Method 2 is generated. Its first three steps are the same as in Method 1. Steps 4, 5 and 6 of Method 2 are as follows:

Step 4 of Method 2: Calculate the confidence degree of \(m_{i}\), which is defined as:

$$ \omega_{i} =\frac{Sup(m_{i} )}{\sum\limits_{i=1}^{n} {Sup(m_{i} )} } $$
(18)

Step 5 of Method 2: Compute the modified (weighted) average \(m^{\prime }\) of the evidence:

$$ m^{\prime }(A)=\sum\limits_{i=1}^{n} {\omega_{i} \times m_{i}(A)} $$
(19)

Step 6 of Method 2: Use Dempster’s combination rule to combine \(m^{\prime }\) with itself \(n-1\) times.
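Steps 4 to 6 can be sketched as follows, assuming the support degrees \(Sup(m_{i})\) from Step 3 are already available; the weighted-average BPA is combined with itself \(n-1\) times using Dempster's rule:

```python
def dempster(m1, m2):
    """Dempster's rule of combination for BPAs given as {frozenset: mass}."""
    raw, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb
    return {a: v / (1.0 - conflict) for a, v in raw.items()}

def method2(bpas, sup):
    """Steps 4-6 of Method 2: normalize supports into weights (18),
    form the weighted-average BPA (19), then combine it n-1 times (Step 6)."""
    n = len(bpas)
    total = sum(sup)
    omega = [s / total for s in sup]           # (18)
    avg = {}
    for w, m in zip(omega, bpas):
        for a, v in m.items():
            avg[a] = avg.get(a, 0.0) + w * v   # (19)
    out = avg
    for _ in range(n - 1):                     # Step 6
        out = dempster(out, avg)
    return out

s = lambda *xs: frozenset(xs)
bpas = [{s("a"): 0.6, s("b"): 0.4},
        {s("a"): 0.7, s("b"): 0.3},
        {s("a"): 0.5, s("b"): 0.5}]
out = method2(bpas, sup=[1.0, 1.0, 1.0])  # equal supports for the demo
print(round(sum(out.values()), 4))  # the combined BPA is normalized -> 1.0
```

With equal supports the average assigns 0.6 to {a} and 0.4 to {b}, and repeated combination sharpens the majority hypothesis, illustrating the convergence behaviour discussed in Example 5.2 below.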

The example in [30] is quoted to show the performance of Method 2. The details are as follows.

Example 5.2

In a ballistic target identification system, there are five sensors for judging the target. It is known that the types of the target could be warhead, bait and fragment; that is, the frame of discernment is \({\Omega } =\left \{ {A(warhead),B(bait),C(fragment)} \right \}\). Suppose the real target is A; the BPAs obtained from the sensors are as follows:

$$m_{1} \left( A \right)=0.5,\quad m_{1} \left( B \right)=0.2,\quad m_{1} \left( C \right)=0.3; $$
$$m_{2} \left( A \right)=0,\quad m_{2} \left( B \right)=0.9,\quad m_{2} \left( C \right)=0.1; $$
$$m_{3} \left( A \right)=0.55,\quad m_{3} \left( B \right)=0.1,\quad m_{3} \left( {A,C} \right)=0.35; $$
$$m_{4} \left( A \right)=0.55,\quad m_{4} \left( B \right)=0.1,\quad m_{4} \left( {A,C} \right)=0.35; $$
$$m_{5} \left( A \right)=0.6,\quad m_{5} \left( B \right)=0.1,\quad m_{5} \left( {A,C} \right)=0.3. $$

The BPAs are combined in turn: first \(m_{1}\) and \(m_{2}\) are combined, then one more BPA is added each time until all of them are combined. The results obtained from the different combination methods are detailed in Table 7.
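The failure of Dempster's rule in this example is easy to reproduce: once \(m_{2}\) (with \(m_{2}(A)=0\)) has been combined, no focal set containing A survives, so the combined mass of A stays 0 no matter how many supporting BPAs are added. A minimal check:

```python
def dempster(m1, m2):
    """Dempster's rule of combination for BPAs given as {frozenset: mass}."""
    raw, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb
    return {a: v / (1.0 - conflict) for a, v in raw.items()}

s = lambda *xs: frozenset(xs)
bpas = [
    {s("A"): 0.5,  s("B"): 0.2, s("C"): 0.3},
    {s("B"): 0.9,  s("C"): 0.1},               # m2(A)=0: the faulty sensor
    {s("A"): 0.55, s("B"): 0.1, s("A", "C"): 0.35},
    {s("A"): 0.55, s("B"): 0.1, s("A", "C"): 0.35},
    {s("A"): 0.6,  s("B"): 0.1, s("A", "C"): 0.3},
]
combined = bpas[0]
for m in bpas[1:]:
    combined = dempster(combined, m)
    print("m({A}) =", combined.get(s("A"), 0.0))  # 0.0 at every stage
```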

Table 7 Results obtained from different combination methods

As can be seen from Table 7, although more and more BPAs supporting A take part in the combination, Dempster’s combination rule concludes that the target cannot be A throughout. When the number of BPAs is more than 2, such results are unreasonable, because most of the BPAs assign most of their belief to A while only one BPA gives its largest mass of belief to B. Such unexpected behavior shows that it is risky to combine conflicting BPAs with Dempster’s combination rule. In the same circumstances, the results obtained from the other three methods indicate that the target is most likely to be A, which is reasonable. The three methods differ in convergence rate: as Method 2 (using \(d_{PBet}\)) gives A a larger mass of belief than the other two methods, it has a better convergence rate and its result is more convenient for decision making.

To sum up, when \(d_{PBet}\) is used in the methods for combining conflicting BPAs, the results of the new methods (Method 1 and Method 2) are reasonable. Moreover, the new methods have better convergence rates, which makes them more convenient for decision making.

6 Conclusion

In this paper, the PSD pignistic probability function was proposed, which transforms a BPA into pignistic probabilities on the power set of the frame of discernment. Compared with the pignistic probability function, which works over the frame of discernment itself, it retains more of the information contained in the BPA. Based on the proposed function, a new dissimilarity measure called the distance between PSD betting commitments was defined. The new distance is a metric, and the relevant proof was provided. To demonstrate its performance, we compared it with three existing dissimilarity measures, and its applications in combining conflicting BPAs were also presented. The results indicate that it is a better measure for quantifying the dissimilarity between BPAs.

The disadvantage of the new distance lies in its computational burden, which increases exponentially with the number of singleton elements in Ω. But as its computational burden is at the same level as that of Jousselme’s distance, it is acceptable.