1 Introduction

During multi-AUV task execution, a large number of heterogeneous multi-sensor systems are involved, so the available information is diverse and complex. When observing an object, the information from any single sensor contains only a partial understanding of it and cannot support an accurate decision (Zhongli 2017; Chongzhao et al. 2010; Chaozhong and Bei 2008). If the data from different sensors can be synthesized, analyzed and understood with high efficiency and high precision, the accuracy of the decision can be greatly improved. By combining, summarizing and reasoning over the information obtained from multiple sensors according to certain rules, a consistent interpretation and description of the observed object can finally be obtained (Sultana et al. 2018; Chen et al. 2018; Kuncheva et al. 2001; Sinha et al. 2008; Blasch et al. 2011; Thomopoulos et al. 1989). Multi-sensor information fusion has shown broad application prospects in many fields, and the object of fusion has gradually developed from raw sensor data to all the information related to the observed object (Wang 2015; Park 2011; Waltz and Lirms 1990; Han and Han 2004).

Decision fusion corresponds to the highest level of multi-sensor information fusion. Its results are more intuitive, can be applied directly to auxiliary decision-making processes such as combat command, medical emergency, rescue and relief, and fault diagnosis, and it better reflects the capability of the fusion system (Singh and Bailey 1997; Murphy 2000; Chen 2010; Hong 2014). Recently, Northwestern Polytechnical University proposed an information fusion theory based on variational Bayesian joint optimization (Quan et al. 2018). Zhou Bin and Wang Qing proposed a threat assessment method for sonar targets based on multi-source information fusion (Bin et al. 2018).

Among the existing methods for processing uncertain information, fuzzy set theory is subjective (Zadeh 1965), rough set theory lacks a mechanism for handling uncertainty (Pawlak 1991), and Bayesian theory has weak information expression ability and poor robustness of its reasoning (Hall 1992). These limitations make the processing complex and the results poor when such methods are applied to decision fusion. For this reason, we adopt evidence theory, which has good ability to represent and reason about uncertainty (Shafer 1976; Laengle et al. 1997). In this paper, the hazard decision problem of multiple underwater AUVs is studied: we use evidential reasoning to design a single-AUV information model, build a distributed decision-making framework for multiple AUVs, and present a new multi-AUV hazard measure function.

2 Key fusion theory in danger decision-making

2.1 Recognition framework and belief assignment

The recognition framework \(\theta\) is a finite non-empty set. \(R\) is the power set \(2^{\theta }\) of the recognition framework and represents the set of all possible propositions; \((\theta ,R)\) is called the propositional space. If the recognition framework contains \(N\) elements, then \(R\) contains at most \(2^{N}\) subsets. Because \(R\) has the properties of a set, union, intersection, complement and inclusion relations can be defined on it. The recognition framework is the basis of evidential reasoning: every concept and function of evidential reasoning is defined on a recognition framework, and the combination rule requires that the pieces of evidence be defined on the same recognition framework.

Definition 2.1

\(\theta\) is a recognition framework. The basic belief assignment (BBA) on the recognition framework is defined as a function m: \(2^{\theta } \to [0,1]\) satisfying

$$\sum {\{ m(A)\,|\,A \subseteq \theta \} } = 1$$
(1)

For \(A \subseteq \theta\), \(m(A)\) is also called the basic probability assignment. \(m(A)\) is the belief measure assigned to proposition A: it supports the occurrence of proposition A itself, but not of any proper subset of A.
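As a concrete illustration, a basic belief assignment can be represented in code as a mapping from subsets of the recognition framework to mass values. The following Python sketch is not part of the original work; the danger frame and the helper name is_valid_bba are illustrative assumptions, and the check corresponds to Eq. (1).

```python
# A minimal illustrative sketch (not from the paper): a BBA over a danger frame,
# represented as a dict from frozenset (focal element) to mass.

THETA = frozenset({"no_danger", "primary", "level_two", "grave"})  # recognition framework

bba = {
    frozenset({"grave"}): 0.5,                  # mass committed to a single hypothesis
    frozenset({"primary", "level_two"}): 0.3,   # mass committed to a coarser proposition
    THETA: 0.2,                                 # remaining ignorance assigned to the whole frame
}

def is_valid_bba(m, theta=THETA, tol=1e-9):
    """Check that every focal element is a subset of theta and that masses sum to 1 (Eq. 1)."""
    return all(a <= theta for a in m) and abs(sum(m.values()) - 1.0) < tol

assert is_valid_bba(bba)
```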

Definition 2.2

\(\theta\) is a recognition framework. The belief function (Bel) derived from the basic belief assignment on the recognition framework is defined as Bel: \(2^{\theta } \to [0,1]\)

$$Bel(A) = \sum\limits_{B \subseteq A} {m(B)} ,$$
(2)

\(Bel(A)\) denotes the total belief committed to proposition A, i.e., the sum of the basic belief values of all subsets of A.

Definition 2.3

\(\theta\) is a recognition framework. The plausibility function derived from the basic belief assignment on the recognition framework is defined as Pl: \(2^{\theta } \to [0,1]\)

$$Pl(A) = \sum\limits_{B \cap A \ne \emptyset } {m(B)} ,$$
(3)

\(Pl(A)\) expresses the degree to which one does not doubt proposition A; it is the sum of the basic belief values of all focal sets that intersect A. \([Bel(A),Pl(A)]\) constitutes the belief interval of A and represents the uncertainty about A.

Definition 2.4

\(\theta\) is a recognition framework. The commonality function derived from the basic belief assignment on the recognition framework is defined as q: \(2^{\theta } \to [0,1]\)

$$q(A) = \sum\limits_{A \subseteq B \subseteq \theta } {m(B)} .$$
(4)

The commonality function has no direct intuitive meaning, but it can be used to simplify calculations.
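Continuing the illustrative sketch above, the belief, plausibility and commonality functions of Definitions 2.2–2.4 can be computed from a BBA by summing masses over the appropriate subsets; the helper names are again only assumptions for illustration.

```python
def bel(m, a):
    """Belief of A: sum of m(B) over all focal elements B contained in A (Eq. 2)."""
    return sum(v for b, v in m.items() if b <= a)

def pl(m, a):
    """Plausibility of A: sum of m(B) over all focal elements B intersecting A (Eq. 3)."""
    return sum(v for b, v in m.items() if b & a)

def q(m, a):
    """Commonality of A: sum of m(B) over all focal elements B containing A (Eq. 4)."""
    return sum(v for b, v in m.items() if a <= b)

grave = frozenset({"grave"})
print(bel(bba, grave), pl(bba, grave))  # belief interval [Bel, Pl] of {grave}: 0.5 0.7
```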

Definition 2.5

\(\theta\) is a recognition framework and A is a subset of \(\theta\).

  1. \((m(A),A)\) is called a body of evidence; a piece of evidence consists of several such bodies of evidence.

  2. If \(m(A) > 0\), A is called a focal element of the evidence.

  3. The union of all focal elements is called the core of the evidence.

  4. If \(\theta\) is the only focal element of a belief function, it is called a vacuous belief function.

2.2 Confidence assignment combination rules

Two or more belief functions can be combined by the combination rule: a new belief function is obtained by computing the orthogonal sum of the belief assignments from the different sources.

Let \(m_{1}\) be the basic belief assignment corresponding to the belief function \(Bel_{1}\) on the recognition framework \(\theta\), with focal elements \(A_{1} , \ldots ,A_{k}\) and basic belief values \(m_{1} (A_{1} ), \ldots ,m_{1} (A_{k} )\), satisfying:

$$\sum\limits_{i = 1}^{k} {m_{1} \left( {A_{i} } \right)} = 1.$$
(5)

Let \(m_{2}\) be the basic belief assignment corresponding to the belief function \(Bel_{2}\) on the recognition framework \(\theta\), with focal elements \(B_{1} , \ldots ,B_{l}\) and basic belief values \(m_{2} (B_{1} ), \ldots ,m_{2} (B_{l} )\), satisfying \(\sum\nolimits_{j = 1}^{l} {m_{2} (B_{j} )} = 1\).

For a non-empty set \(A\) with \(A = A_{i} \cap B_{j}\), the total belief assigned to \(A\) is:

$$m(A) = \sum\limits_{{A_{i} \cap B_{j} = A}} {m_{1} \left( {A_{i} } \right)m_{2} \left( {B_{j} } \right)} .$$
(6)

However, if \(A_{i} \cap B_{j} = \emptyset\) for some pairs, then \(m(\emptyset ) = \sum\limits_{{A_{i} \cap B_{j} = \emptyset }} {m_{1} (A_{i} )m_{2} (B_{j} )} > 0\) and the total belief assigned to non-empty sets is less than 1. The mass falling on the empty intersections must therefore be redistributed, which is equivalent to multiplying the remaining masses by \([1 - \sum\limits_{{A_{i} \cap B_{j} = \emptyset }} {m_{1} (A_{i} )m_{2} (B_{j} )} ]^{ - 1}\), called the normalization factor.

In summary, the basic belief assignment obtained by combining two belief assignments is:

$$m(A) = \frac{{\sum\limits_{{A_{i} \cap B_{j} = A}} {m_{1} \left( {A_{i} } \right)m_{2} \left( {B_{j} } \right)} }}{{1 - \sum\limits_{{A_{i} \cap B_{j} = \emptyset }} {m_{1} \left( {A_{i} } \right)m_{2} \left( {B_{j} } \right)} }} = \frac{1}{c}\sum\limits_{{A_{i} \cap B_{j} = A}} {m_{1} \left( {A_{i} } \right)m_{2} \left( {B_{j} } \right)} .$$
(7)

Similarly, the basic belief assignment obtained by combining more than two belief assignments is:

$$m(P) = \frac{{\sum\limits_{{ \cap A_{i} = P}} {\prod\limits_{1 \le s \le m} {m_{s} \left( {A_{i} } \right)} } }}{{1 - \sum\limits_{{ \cap A_{i} = \emptyset }} {\prod\limits_{1 \le s \le m} {m_{s} \left( {A_{i} } \right)} } }}.$$
(8)

2.3 Confidence assignment and probability conversion

The generalized principle of insufficient reason is applied as follows:

For a belief space \((\varOmega ,R,Bel)\), let \(A \in R\) with \(A = A_{1} \cup A_{2} \cup \cdots \cup A_{n}\). Because of the lack of information, \(m(A)\) cannot be allocated further among the subsets of A. In order to make decisions on R, a probability distribution is established. For every atom \(x \in R\):

$$BetP(x) = \sum\limits_{x \subseteq A,A \in R} {\frac{m(A)}{|A|}} = \sum\limits_{A \in R} {m(A)\frac{|x \cap A|}{|A|}} .$$
(9)

|A| is the number of atoms of R in A. For \(B \in R:\)

$$BetP(B) = \sum\limits_{A \in R} {m(A)\frac{|B \cap A|}{|A|}} .$$
(10)

Given a belief space \((\varOmega ,R,Bel)\), let m be the basic belief assignment corresponding to Bel, and let \(BetP( \cdot ;m)\) be a probability function defined on R; the parameter m is written explicitly to indicate the underlying basic belief assignment.

Hypothesis 1: for every atom \(x \in R\), \(BetP(x;m)\) depends only on the values \(m(X)\) with \(x \subseteq X\), \(X \in R\).

Hypothesis 2: \(BetP(x;m)\) is a continuous function of the values \(m(X)\), \(x \subseteq X\), \(X \in R\).

Hypothesis 3: let G be a permutation of \(\varOmega\); for \(X \subseteq \varOmega\), let \(G(X) = \{ G(x):x \in X\}\), and let \(m^{\prime} = G(m)\) be the basic belief assignment obtained after the permutation, i.e. \(m^{\prime}(G(X)) = m(X)\) for \(X \in R\). Then for any atom x of R, \(BetP(x;m) = BetP(G(x);G(m)).\)

Hypothesis 4: let \((\varOmega ,R,Bel)\) be a belief space and let X be such that for all \(A \in R\), \(A \cong A \cup X\), i.e. \(Bel(A) = Bel(A \cup X).\) Consider the belief space \((\varOmega^{\prime},R^{\prime},Bel^{\prime})\) with \(\varOmega^{\prime} = \varOmega - X,\) where \(R^{\prime}\) is the Boolean algebra built from the atoms of R.

Let \(BetP(x;m)\) and \(BetP^{\prime}(x;m^{\prime})\) be the probabilities derived from \(Bel(m)\) and \(Bel(m^{\prime}),\) respectively. Then for every atom \(x \in R^{\prime}\): \(BetP(x;m) = BetP^{\prime}(x;m^{\prime}).\)

Let \((\varOmega ,R)\) be a propositional space, m the basic belief assignment on R, and |A| the number of atoms of R in A. Under Hypotheses 1–4, for any atom x of R:

$$BetP(x;m) = \sum\limits_{x \subseteq A \in R} {\frac{m(A)}{|A|}} .$$
(11)
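The pignistic transformation of Eqs. (9)–(11) spreads the mass of each focal element uniformly over its atoms. A minimal sketch, again assuming the dictionary representation and helper names used above:

```python
def betp(m):
    """Pignistic probability of every atom of the frame (Eqs. 9 and 11)."""
    p = {}
    for a, v in m.items():
        share = v / len(a)                 # m(A) split equally among the |A| atoms of A
        for atom in a:
            p[atom] = p.get(atom, 0.0) + share
    return p

def betp_of(m, b):
    """Pignistic probability of a (possibly non-atomic) proposition B (Eq. 10)."""
    return sum(v * len(b & a) / len(a) for a, v in m.items())
```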

3 Building a single AUV information model

The multi-AUV system uses a distributed structure (Bo et al. 2004), which provides better reliability and fault redundancy. In a distributed decision-making multi-AUV system, the distributed AUVs make decisions and take actions to solve problems by coordinating their knowledge, goals, skills and plans. The AUVs in a distributed system can have different domain expert knowledge and different decision-making functions, and they can observe different features of the environment or different areas of the environment.

In the multi-AUV system, it is assumed that the feature information of the environment observed and extracted by \(AUV_{i} (1 \le i \le I)\) can be represented by a feature vector \(S^{i} = \left( {s_{1}^{i} ,s_{2}^{i} , \ldots ,s_{N}^{i} } \right),\) where \(N\) is the dimension of the feature vector. \(\theta = \left\{ {\theta_{1} ,\theta_{2} , \ldots ,\theta_{n} } \right\}\) is the recognition framework, and \(\theta_{k} (k = 1,2, \ldots ,n)\) is the hypothesis that the observation belongs to pattern class k. \(\phi (S^{i} ,\theta_{k} )\) is a measure function between the feature vector \(S^{i}\) and \(\theta_{k}\); \(\phi ( \cdot )\) is a decreasing function and satisfies \(0 \le \phi (S^{i} ,\theta_{k} ) \le 1\). Each \(\phi (S^{i} ,\theta_{k} )\) generates a simple belief function (Rogova and Nimier 2004):

$$m_{k}^{i} (\theta_{k} ) = \phi \left( {\bar{S}^{i} ,\theta_{k} } \right),$$
(12)
$$m_{k}^{i} (\theta ) = 1 - \phi \left( {\bar{S}^{i} ,\theta_{k} } \right),$$
(13)
$$m_{k}^{i} (A) = 0,\quad \forall A \ne \theta_{k} \subset \theta .$$
(14)

All the \(m_{k}^{i}\) can then be combined according to the combination rule, which gives the basic belief assignment of \(AUV_{i}\):

$$m^{i} \left( {\theta_{k} } \right) = \frac{{m_{k}^{i} \prod\limits_{j \ne k} {\left( {1 - m_{j}^{i} } \right)} }}{{\sum\limits_{k} {m_{k}^{i} \prod\limits_{j \ne k} {\left( {1 - m_{j}^{i} } \right)} + \prod\limits_{j} {\left( {1 - m_{j}^{i} } \right)} } }}.$$
(15)
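To illustrate Eqs. (12)–(15), the sketch below builds the BBA of one AUV directly from its measure values: each \(\phi (S^{i} ,\theta_{k} )\) defines a simple belief function with focal elements \(\{\theta_{k}\}\) and \(\theta\), and Eq. (15) is the closed form of their combination. The residual mass placed on \(\theta\) is included so that the result remains a valid BBA; the function name and inputs are illustrative assumptions, not the authors' implementation.

```python
def single_auv_bba(phis, frame):
    """
    BBA of one AUV from its measure values phi_k = phi(S^i, theta_k) in [0, 1].
    Eqs. (12)-(14) define one simple belief function per phi_k; Eq. (15) is the
    closed form of their combination. Assumes not all phi_k are equal to 1.
    """
    def prod_others(k):
        p = 1.0
        for j, v in phis.items():
            if j != k:
                p *= 1.0 - v
        return p

    num = {k: v * prod_others(k) for k, v in phis.items()}   # m_k * prod_{j != k}(1 - m_j)
    ignorance = 1.0
    for v in phis.values():
        ignorance *= 1.0 - v                                  # prod_j (1 - m_j)
    denom = sum(num.values()) + ignorance
    m = {frozenset({k}): v / denom for k, v in num.items()}   # singleton masses, Eq. (15)
    m[frozenset(frame)] = ignorance / denom                   # residual mass kept on theta
    return m
```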

4 Transferable belief model

The transferable belief model (TBM) is a two-level model (Rogova and Kasturi 2001; Smets and Kennes 1994). The first level, the credal level, is where beliefs are quantified, assigned and updated. The second level, the pignistic level, is where beliefs are converted into probabilities and decisions are made on the basis of these probabilities. The TBM does not depend on any underlying probability theory; it can be regarded as a "pure" evidence theory model, completely separated from the probability model. The TBM mirrors the distinction between human thought and action, that is, between "reasoning" (how beliefs are affected by evidence) and "behaving" (choosing the behavioral option that appears to be the best).

In the multi-AUV system, the AUVs do not share information directly with each other; instead, they communicate with a designated fusion center. Each AUV can be heterogeneous and has its own domain knowledge, its own set of hypotheses and a belief assignment over those hypotheses, and each AUV can extract different features of the danger. Each AUV generates a belief value for each hypothesis of the recognition framework and transmits these beliefs to the fusion center. The fusion center combines the basic belief values of the individual AUVs to generate belief values for the whole environment. The decision center then generates the probability of each hypothesis of the recognition framework and makes decisions based on these probabilities. The TBM-based distributed decision system framework is shown in Fig. 1.
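Assuming the illustrative helpers sketched earlier (combine_all and betp), the two levels of the fusion center in Fig. 1 could be chained as follows; this is a schematic of the pipeline, not the authors' implementation.

```python
def fusion_center_decide(auv_bbas):
    """Two-level TBM decision: combine at the credal level, decide at the pignistic level."""
    fused = combine_all(*auv_bbas)           # orthogonal sum of the AUVs' BBAs (Eq. 8)
    probs = betp(fused)                      # pignistic transformation (Eq. 11)
    decision = max(probs, key=probs.get)     # hypothesis with the highest pignistic probability
    return decision, probs
```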

Fig. 1 The transferable belief model of multi-AUV

5 Results and discussions

5.1 Information model construction of AUV

To verify the validity of the decision method, we assume that multiple AUVs are in a vertical formation moving in a two-dimensional plane containing dangerous points. Sonar is used to detect dangerous points that may pose a danger to the formation (these points may be obstacles or unknown dynamic objects); information fusion is then used to analyze the possible threat level to the formation so that countermeasures can be taken.

The danger information is {danger location, danger orientation, number of dangers}. These three kinds of information are observed by \(AUV_{1}\), \(AUV_{2}\) and \(AUV_{3}\), respectively. Let the recognition framework of \(AUV_{1}\) be \(\theta_{1}\): {no danger, primary danger, level two danger, grave danger}; the recognition framework of \(AUV_{2}\) be \(\theta_{2}\): {towards formation direction, away from formation direction}; and the recognition framework of \(AUV_{3}\) be \(\theta_{3}\): {a small number, a large number, multiple quantity, dense}.

The feature vector of \(AUV_{1}\) is the danger position coordinate \(S_{P} = (x,y).\) Four peripheral reference vectors \(w_{1} ,w_{2} ,w_{3} ,w_{4}\) are defined according to the detection range characteristics of the whole formation. A measure function is defined as follows:

$$\phi \left( {S_{p} ,\theta_{1}^{k} } \right) = \exp \left( { - \gamma^{k} \left( {d^{k} } \right)} \right),$$
(16)
$$k = 1,2,3,4;\quad \gamma^{k} > 0;\quad d^{k} = ||S_{p} - w_{k} ||.$$

The belief assignment of \(AUV_{1}\) can then be obtained: \(m_{p}\)(no danger), \(m_{p}\)(primary danger), \(m_{p}\)(level two danger), \(m_{p}\)(grave danger).

The feature vector of \(AUV_{2}\) is the danger direction angle \(S_{d} = (\varphi ).\) Based on an analysis of the sonar perception range, the formation heading is taken as the axis; the formation position and the farthest point of the forward-looking sonar range are taken as two reference points, and the angle between the danger and this axis satisfies \(0^{ \circ } \le \varphi \le 180^{ \circ }.\) A measure function is defined as follows:

$$\phi \left( {S_{d} ,\theta_{2}^{k} } \right) = \frac{\cos (\varphi ) + 1}{2}(k = 1,2).$$
(17)

The belief assignment of \(AUV_{2}\) can then be obtained: \(m_{d}\)(towards formation direction), \(m_{d}\)(away from formation direction).

The feature vector of \(AUV_{3}\) is the number of dangerous points in the whole detection range, \(S_{n} = \{ a\} .\) The maximum number of points that the AUV can detect is denoted by b. We divide the detection area into four regions and define the measure function as follows:

$$\phi \left( {S_{n} ,\theta_{3}^{k} } \right) = 1 - \frac{{a_{k} }}{b}(k = 1,2,3,4),\sum {a_{k} = a \le b} .$$
(18)

The belief assignment of \(AUV_{3}\) can then be obtained: \(m_{n}\)(a small number), \(m_{n}\)(a large number), \(m_{n}\)(multiple quantity), \(m_{n}\)(dense).
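The three measure functions of Eqs. (16)–(18) translate directly into code. In the sketch below, the reference vectors, the scale parameters \(\gamma^{k}\) and the detection limit b are placeholder values rather than values from the paper, and single_auv_bba refers to the earlier illustrative sketch.

```python
import math

def phi_position(s_p, w_k, gamma_k):
    """Eq. (16): AUV1 measure, decreasing with the distance to reference vector w_k."""
    return math.exp(-gamma_k * math.dist(s_p, w_k))

def phi_direction(phi_deg):
    """Eq. (17): AUV2 measure for a danger bearing phi in [0, 180] degrees."""
    return (math.cos(math.radians(phi_deg)) + 1.0) / 2.0

def phi_count(a_k, b):
    """Eq. (18): AUV3 measure for one region, with sum_k a_k = a <= b."""
    return 1.0 - a_k / b

# Hypothetical reference vectors and parameters, used only to show how AUV1's BBA is built.
refs = {"no_danger": (0.0, 0.0), "primary": (5.0, 0.0),
        "level_two": (10.0, 0.0), "grave": (15.0, 0.0)}
phis = {k: phi_position((12.0, 1.0), w, gamma_k=0.2) for k, w in refs.items()}
m_p = single_auv_bba(phis, refs.keys())
```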

5.2 Conversion rules in different recognition frameworks

The three AUVs described above have different recognition frameworks, but the final decision must be made under a single common recognition framework. To handle the belief assignments of AUVs with different internal structures, belief conversion rules are required.

The frameworks of \(AUV_{1}\) and \(AUV_{3}\) can be converted into each other directly. We give the rule that {a small number} corresponds to {no danger}, {a large number} corresponds to {primary danger}, {multiple quantity} corresponds to {level two danger}, and {dense} corresponds to {grave danger}.

For the belief conversion between \(AUV_{1}\) and \(AUV_{2}\), we divide the dangerous area detected by the AUV into three regions and give a conversion rule: in region one, {towards formation direction} corresponds to {grave danger} and {away from formation direction} corresponds to {level two danger}; in region two, {towards formation direction} corresponds to {level two danger} and {away from formation direction} corresponds to {primary danger}; in region three, {towards formation direction} corresponds to {primary danger} and {away from formation direction} corresponds to {no danger} (Table 1).

Table 1 Confidence conversion for AUV1 and AUV2
Table 2 Confidence value of single AUV in danger decision-making
Table 3 Confidence value of multi-AUV decision-making on danger after information fusion
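One simple way to realize the conversion rules of Sect. 5.2 in code is a lookup table keyed by region and hypothesis; the region indices, labels and data structures below are schematic assumptions, since the text does not specify the region boundaries.

```python
# Schematic lookup tables for the conversion rules; labels and region indices are assumptions.
AUV3_TO_DANGER = {"small": "no_danger", "large": "primary",
                  "multiple": "level_two", "dense": "grave"}

AUV2_TO_DANGER = {
    (1, "towards"): "grave",      (1, "away"): "level_two",   # region one
    (2, "towards"): "level_two",  (2, "away"): "primary",     # region two
    (3, "towards"): "primary",    (3, "away"): "no_danger",   # region three
}

def convert_auv2_bba(m_d, region):
    """Relabel the focal elements of AUV2's BBA onto the common danger frame (Table 1)."""
    out = {}
    for focal, mass in m_d.items():
        mapped = frozenset(AUV2_TO_DANGER[(region, h)] for h in focal)
        out[mapped] = out.get(mapped, 0.0) + mass
    return out
```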

5.3 Verification and conclusion of dangerous decision-making

To validate the danger decision fusion method, we verify it by computer simulation. MATLAB is used to generate random points in a blank area, which represent various dangers. Four robots form the formation and are drawn as four squares representing the four AUVs: three AUVs perform feature extraction and belief assignment, and one AUV acts as the fusion center and makes the decision, computing with the danger decision fusion method given above. Five states along the voyage are computed, arranged clockwise from state 1 to state 5. We compare the danger decision made by a single AUV with the danger decision made by multiple AUVs. Using the above danger decision-making method, the formation can determine what danger level it is in, which improves the survivability of the whole formation (Fig. 2).

Fig. 2 The clockwise direction from the lower left corner is state 1 to state 5


From the calculation results we can see that when a single AUV makes the danger decision, at state 2 it cannot clearly decide that it is in grave danger, and it even assigns part of the decision value to no danger. For primary danger, level two danger and grave danger, its analysis is not accurate enough and the boundaries are blurred. In state 1, its distinction between no danger and primary danger is not ideal. In some cases it cannot single out one decision directly, which prolongs the AUV decision-making time. When the multi-AUV data are fused, compared with single-AUV decision-making, the fuzzy expression of a decision no longer appears and the chosen decision stands out clearly from the other options. In state 2, the formation clearly gives the decision of grave danger. In state 1, the decision that was not prominent enough for a single AUV becomes, through multi-AUV information fusion, a much more prominent no-danger decision. In state 5, compared with the single AUV, the credibility of the no-danger decision is enhanced. Through multi-AUV information fusion, the formation can better understand its own environment and strengthen its ability to deal with dangers (Fig. 3; Tables 2, 3).

Fig. 3 Decision-making without fusion and decision-making after fusion

In the histogram, yellow represents the fused decision and blue represents the single-AUV decision. Compared with the fused decision, the single-AUV decision sometimes gives the same belief value to two danger levels and cannot produce a salient result. After fusion, the belief is concentrated on one danger level, and that danger level is output. By fusing the decisions of multiple AUVs, a definite decision is obtained.

6 Conclusion

In this paper, a multi-AUV danger decision-making method is proposed, which can improve the survivability of the formation during task execution. The feasibility of the multi-AUV method is verified by calculation, and it can improve the survivability and decision-making ability of multiple AUVs in future practical environments. Based on the transferable belief model, the information or decisions provided by each AUV are fused at the belief level, and the final decision is then made. In the whole danger decision-making process, selecting an appropriate recognition framework for each AUV and effectively designing the AUV information model are two key factors for improving the overall decision. Finer recognition frameworks and measure functions will improve the accuracy of decision-making. In future work, we can introduce discount factors to strengthen the weight of reliable information and improve the overall decision-making efficiency. In the decision-making model, improving the effective observation of the danger is also a way to improve the accuracy of decision-making.