
1 Introduction

The sky screen is a system widely used to measure the velocity of a flying projectile on a shooting range. A projectile passing through the screen produces a trigger pulse whose rising edge marks the instant the projectile nose crosses the screen, whose falling edge marks the instant the projectile base crosses it, and whose duration gives the time of flight through the screen [1, 2]. Methods for suppressing non-projectile signals were given in Refs. [3–5]. However, those methods work only when shock waves and flying insects occur independently. Based on the above analysis, a Hopfield neural network pattern-recognition method is proposed in this paper. A pattern is decided by recalling the stored memory closest to it, the memory being established from typical learning samples. This approach offers many advantages: it can deal with very complex environmental information, blurred background knowledge, and problems with unclear inference rules, and it tolerates larger sample defects and distortions.

In this paper, several typical projectile signals acquired by the sky screen at the shooting range were identified. The recognition rate and the error rate are calculated as follows [6]:

Recognition rate = (number of identified signal/Theoretical valid signal number) × 100 %

Error rate = (number of misidentified signals/Theoretical valid signal number) × 100 %
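As a minimal sketch (Python assumed; the counts below are illustrative, not results from the paper), the two rates can be computed as:

```python
def recognition_rates(identified, misidentified, theoretical_valid):
    """Recognition and error rates as percentages, per the definitions above."""
    recognition = 100.0 * identified / theoretical_valid
    error = 100.0 * misidentified / theoretical_valid
    return recognition, error

# Illustrative counts only: 93 of 100 theoretically valid signals identified,
# 4 signals misidentified.
print(recognition_rates(93, 4, 100))  # (93.0, 4.0)
```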

2 Hopfield Associative Neural Network

2.1 Discrete Hopfield Network Theory

Hopfield networks are divided into two types: discrete and continuous [7]. The discrete Hopfield neural network (DHNN) applied in this paper is a discrete-time system. The basic structure of a discrete Hopfield network is given in Fig. 1. The network consists of n units, denoted \( N_{1} ,N_{2} , \ldots ,N_{n - 1} ,N_{n} \). It is essentially a single-layer feedback neural network with the same number of nodes in the input layer and the output layer; the transfer functions are \( f_{1} ,\,f_{2} , \ldots ,f_{n} \) and the thresholds are \( \theta_{1} ,\theta_{2} , \ldots ,\theta_{n} \). In a discrete Hopfield network, all nodes generally use the same transfer function, the sign function [8]:

$$ f_{1} (x) = f_{2} (x) = \cdots = f_{n} (x) = {\text{sgn}}(x) $$
(1)

and all thresholds are equal to 0:

$$ \theta_{1} = \theta_{2} = \cdots = \theta_{n} = 0 $$
(2)

Here, \( x = (x_{1} ,x_{2} , \ldots ,x_{n} ),\;x \in \{ { - 1, + 1} \}^{n} \) is the input of the network; \( y = ({y}_{1} ,{y}_{2} , \ldots ,{y}_{n} ),\;{y} \in \{ - 1, + 1\}^{n} \) is the output of the network; \( V(t) = (V_{1} (t),V_{2} (t), \ldots ,V_{n} (t)),\;V(t) \in \{ - 1, + 1\}^{n} \) is the state of the network at time t, where \( t \in \{ {0,1,2, \ldots } \} \) is the discrete time variable; and \( w_{ij} \) is the connection weight from \( N_{i} \) to \( N_{j} \). Since the Hopfield network is symmetric, \( w_{ij} = w_{ji} ,\;i,j \in \left\{ {1,2, \ldots ,n} \right\} \).

Fig. 1

Neural network program flowchart

The connection strengths among all n nodes in the network are expressed by the matrix W, which is an n × n matrix.

The figure shows the structure of a discrete Hopfield network, a single-layer feedback network that can handle bipolar discrete data (input x ∈ {−1, +1}). When the network runs, the whole operation is a process of repeated feedback. If the network is stable, then as the feedback operation is repeated, the changes in the network state diminish until a steady state is reached and the state no longer changes. At that point, the stable output of the network can be read out. The formula is expressed as follows:

$$ \left\{ {\begin{array}{*{20}l} {v_{j} (0) = x_{j} } \\ {v_{j} (t + 1) = f_{j} \left( {\sum\limits_{i = 1}^{n} {w_{ij} v_{i} (t) - \theta_{j} } } \right)} \\ \end{array} } \right. $$
(3)

\( f_{j} \) is determined by formula (1). If after a certain time t the network state no longer changes, that is, V(t + 1) = V(t), then the output is:

$$ y = V\left( t \right) $$
(4)

Asynchronous (serial) mode. At time t, the state of one neuron \( N_{j} \) is updated according to formula (3), while the states of the remaining n − 1 neurons are kept unchanged:

$$ V_{j} (t + 1) =\,\text{sgn} \left( {\sum\limits_{i = 1}^{n} {w_{ij} v_{i} (t)} } \right) .$$
(5)

For the other neurons:

$$ V_{i} (t + 1) = V_{i} (t)\quad i \in \left\{ {1,2, \ldots ,n} \right\},\;i \ne j . $$
(6)

The update is called sequential if the neuron updated by formula (5) is selected in a deterministic order, and random if the neuron is selected according to a preset probability.

Synchronous (parallel) mode. At any time t, the states of some neurons are updated according to formula (3). An important special case is when, at a given time, the states of all neurons change simultaneously according to formula (3), i.e., \( V_{j} (t + 1) = \text{sgn} \left( {\sum\nolimits_{i = 1}^{n} {w_{ij} v_{i} (t)} } \right),\quad j \in \left\{ {1,2, \ldots ,n} \right\} \), which can be written as:

$$ V\left( {t + 1} \right) = {\text{sgn}}\left( {V\left( t \right) \times W } \right) .$$
(7)

If, starting from any initial state x(0) at t = 0, the network state no longer changes after a finite time, the network is called stable:

$$ V(t + \Delta t) = V(t),\quad \Delta t > 0 $$
(8)
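The two update modes can be sketched in Python (numpy assumed; sgn(0) is taken as +1, a common convention not fixed by the text):

```python
import numpy as np

def async_update(W, v):
    """One asynchronous sweep: each neuron j is updated in turn by
    formula (5) while all other neurons stay unchanged (formula (6))."""
    v = v.copy()
    for j in range(len(v)):
        h = W[:, j] @ v               # sum_i w_ij * v_i(t), threshold 0
        v[j] = 1 if h >= 0 else -1    # sgn, with sgn(0) taken as +1
    return v

def sync_update(W, v):
    """Synchronous (parallel) update by formula (7): V(t+1) = sgn(V(t) W)."""
    return np.where(v @ W >= 0, 1, -1)

# A stored pattern is a fixed point of both update modes: with the Hebb
# weights W = x x^T (diagonal zeroed), sgn(x W) = x.
x = np.array([1, -1, 1, -1])
W = np.outer(x, x)
np.fill_diagonal(W, 0)
```

Both `sync_update(W, x)` and `async_update(W, x)` return `x` itself, and a one-bit corruption of `x` is repaired by a single asynchronous sweep.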

3 Application of Hopfield Network for Typical Projectile Signal

3.1 Extraction of Pattern Feature

The main purpose of feature extraction is to concentrate the pattern information that carries significant classification differences. Another purpose is to reduce the size of the data set, improving recognition efficiency and reducing the amount of computation. Feature extraction and selection are therefore very important.

According to the characteristics of the signals collected by the sky screen, we use a one-dimensional moment feature extraction method. One-dimensional moment features are generated from a one-dimensional pattern sequence. For a finite pattern sequence \( \{ {x_{j}^{(i)} ,\;i = 1,2, \ldots ,c,\;j = 1,2, \ldots ,N^{(i)} } \} \), we define the r-order moments and central moments, respectively, as [9]

$$ C_{r}^{i} = E[x^{(i)r} ] \approx \frac{1}{{N^{(i)} }}\sum\limits_{j = 1}^{{N^{(i)} }} {x_{j}^{(i)r} } \quad i = 1,2, \ldots , c $$
(9)
$$ D_{r}^{i} = E\left[ {\left( {x^{(i)} - \bar{x}^{(i)} } \right)^{r} } \right] \approx \frac{1}{{N^{(i)} }}\sum\limits_{j = 1}^{{N^{(i)} }} {\left( {x_{j}^{(i)} - \bar{x}^{(i)} } \right)^{r} } \quad i = 1,2, \ldots , c $$
(10)

where \( \bar{x}^{(i)} \) is the mean vector of the pattern sequence. The commonly used moment features are:

  1. Mean:

    $$m^{(i)} = E\left[ {x^{(i)} } \right] \approx \frac{1}{{N^{(i)} }}\sum\limits_{j = 1}^{{N^{(i)} }} {x_{j}^{(i)} } \quad i = 1,2, \ldots ,c $$
    (11)

  2. Variance:

    $$\sigma^{(i)} = E\left[ {(x^{(i)} - \bar{x}^{(i)} )^{2} } \right] \approx \frac{1}{{N^{(i)} }}\sum\limits_{j = 1}^{{N^{(i)} }} {(x_{j}^{(i)} - \bar{x}^{(i)} )^{2} } \quad i = 1,\,2, \ldots,\,c $$
    (12)

  3. Skewness:

    $${\text{sk}}^{(i)} = \frac{{E[(x^{(i)} - \bar{x}^{(i)} )^{3} ]}}{{\sigma^{(i)3} }} \approx \frac{1}{{N^{(i)} }}\sum\limits_{j = 1}^{{N^{(i)} }} {\left( {\frac{{x_{j}^{(i)} - \bar{x}^{(i)} }}{{\sigma^{(i)} }}} \right)^{3} } \quad i = 1,\,2, \ldots,\,c $$
    (13)

  4. Kurtosis:

    $${\text{ku}}^{(i)} = \frac{{E[(x^{(i)} - \bar{x}^{(i)} )^{4} ]}}{{\sigma^{(i)4} }} - 3 \approx \frac{1}{{N^{(i)} }}\sum\limits_{j = 1}^{{N^{(i)} }} {\left( {\frac{{x_{j}^{(i)} - \bar{x}^{(i)} }}{{\sigma^{(i)} }}} \right)^{4} - 3} \quad i = 1,\,2, \ldots,\,c .$$
    (14)

In general, the mean feature reflects the pattern cluster center; the variance reflects the degree of deviation of the patterns around the mean vector; and the skewness and kurtosis give the shape information of the sample distribution. The skewness describes the degree of asymmetry of the pattern samples about the mean vector: sk > 0 indicates a right-skewed pattern and sk < 0 a left-skewed one, while the kurtosis reflects the peakedness or flatness of the pattern samples.

ku > 0 indicates a distribution more peaked than the Gaussian distribution, and ku < 0 a distribution flatter than the Gaussian distribution.
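A direct numpy transcription of formulas (11)–(14) (assuming, as is standard, that the σ in the denominators of (13) and (14) is the standard deviation, i.e. the square root of (12)):

```python
import numpy as np

def moment_features(x):
    """Mean, variance, skewness and excess kurtosis of a 1-D pattern
    sequence, following formulas (11)-(14)."""
    x = np.asarray(x, dtype=float)
    m = x.mean()                              # (11) mean
    var = ((x - m) ** 2).mean()               # (12) variance
    sigma = np.sqrt(var)                      # standard deviation
    sk = (((x - m) / sigma) ** 3).mean()      # (13) skewness
    ku = (((x - m) / sigma) ** 4).mean() - 3  # (14) excess kurtosis
    return m, var, sk, ku
```

For a symmetric sequence the skewness is zero; for example, `moment_features([1, 2, 3, 4, 5])` gives mean 3, variance 2, skewness 0 and kurtosis −1.3.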

3.2 Associative Memory Algorithm for Discrete Hopfield Neural Networks

The following are the steps of the discrete Hopfield asynchronous update algorithm using the Hebb rule:

  1. Initialize the weights: set W = [0].

  2. Input the p sample patterns \( x^{1} ,x^{2} , \ldots ,x^{p} \) to the network and determine the network weights.

  3. Initialize with the unknown input pattern \( x^{l} \): \( x_{j} (0) = x_{j}^{l} ,\;1 \le j \le n \), where \( x_{j} \) is the j-th component of the input pattern, \( x_{j} \in \left\{ { - 1, + 1} \right\} \).

  4. Iterate until convergence: \( x_{j} (t + 1) =\,\text{sgn} [ {\sum\limits_{i = 1}^{n} {W_{ij} x_{i} (t)} } ],\;1 \le j \le n \), where \( x_{j} (t) \) is the output state of neuron j at time t.

  5. When \( x_{j} (t + 1) = x_{j} (t),\;1 \le j \le n \), the steady-state output is reached; it indicates the stored pattern that best matches the unknown input pattern.
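The five steps above can be sketched as follows (numpy assumed; the Hebb rule W = Σₖ xᵏ(xᵏ)ᵀ with a zeroed diagonal is the standard form implied by step 2):

```python
import numpy as np

def train_hebb(patterns):
    """Steps 1-2: start from W = 0 and accumulate the Hebb outer
    products of the p sample patterns; the diagonal is zeroed."""
    n = len(patterns[0])
    W = np.zeros((n, n), dtype=int)
    for x in patterns:
        W += np.outer(x, x)
    np.fill_diagonal(W, 0)
    return W

def recall(W, probe, max_iter=100):
    """Steps 3-5: initialize with the unknown pattern and iterate the
    asynchronous update until the state no longer changes."""
    v = np.array(probe)
    for _ in range(max_iter):
        prev = v.copy()
        for j in range(len(v)):
            v[j] = 1 if W[:, j] @ v >= 0 else -1
        if np.array_equal(v, prev):   # steady state: x_j(t+1) = x_j(t)
            return v
    return v
```

With two orthogonal 16-component patterns stored, a probe with one corrupted component converges back to the stored pattern that best matches it.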

3.3 Simulation and Application for Typical Signal of Sky Screens

Determination of pattern neurons and feature extraction. The memory capacity of a Hopfield network trained by the Hebb rule is limited; when the pattern vectors are orthogonal, the number of patterns the network can stably store equals the number of neurons n. Based on the data collected in sky-screen firing tests, the penetrator signal was selected, using 16 neuron nodes. Ten training samples and test samples were selected.

The sky-screen data analyzed in this paper were collected in field rifle experiments, with a sampling frequency of 1 MHz and SNR ≥ 6 dB. All signal data were saved on a computer as text. One-dimensional moment feature vectors were extracted using Matlab, as shown in Table 1.

Table 1 Typical signal characteristics

Encoding of pattern features as binary vectors. Using cluster encoding:

$$ \begin{aligned} x^{1} & = \left( { 1, - 1, - 1, - 1,\; 1, - 1, - 1, - 1,\; 1, - 1, - 1, - 1,\; 1, - 1, - 1, - 1} \right)^{\text{T}} \\ x^{2} & = \left( { - 1, 1, - 1, - 1,\; - 1, 1, - 1, - 1,\; - 1, 1, - 1, - 1,\; - 1, 1, - 1, - 1} \right)^{\text{T}} \\ x^{3} & = \left( { - 1, - 1, 1, - 1,\; - 1, - 1, 1, - 1,\; - 1, - 1, 1, - 1,\; - 1, - 1, 1, - 1} \right)^{\text{T}} \\ \end{aligned} $$

The network consists of a single layer of saturated linear neurons whose outputs are fed back to the inputs through the weight matrix. The neuron outputs are set to the initial output vector and take discrete binary values, here 1 and −1. The steady-state values are designed as follows:

  • The five-round burst signal is memorized by \( x^{1}. \)

  • The single-shot sky-screen projectile signal is memorized by \( x^{2}. \)

  • The penetrator signal characteristics are memorized by \( x^{3}. \)

The network is updated repeatedly; when it reaches a steady state at some point, the output vector equals the initial output and stabilizes there. The final output vector gives the classification of the initial vector.
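Putting the pieces together, a minimal sketch (numpy assumed; the three cluster codes are generated from the repeating +1 position pattern shown above) of classification by steady-state recall:

```python
import numpy as np

def cluster_code(k, groups=4, group_size=4):
    """Cluster encoding: +1 in position k of each 4-element group, -1 elsewhere."""
    block = -np.ones(group_size, dtype=int)
    block[k] = 1
    return np.tile(block, groups)

x1, x2, x3 = (cluster_code(k) for k in range(3))

# Hebb weights storing the three steady states, diagonal zeroed.
W = sum(np.outer(x, x) for x in (x1, x2, x3))
np.fill_diagonal(W, 0)

def classify(probe, max_iter=100):
    """Update repeatedly until steady state; the final vector is the class."""
    v = probe.copy()
    for _ in range(max_iter):
        prev = v.copy()
        for j in range(len(v)):
            v[j] = 1 if W[:, j] @ v >= 0 else -1
        if np.array_equal(v, prev):
            return v
    return v
```

A probe equal to x³ with two corrupted components converges back to x³, i.e. it is classified as the penetrator signal.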

Matlab simulation results: recognition of multiple-burst signals from the sky screen.

Projectile: 30 mm caliber; firing rate: 7,500 rounds/min; duration: 10 s

  1. Matlab simulation interface (Fig. 2):

    Fig. 2

    Multiple bursts signal recognition of sky screens

  2. Analysis

From Table 2, compared with level-based signal recognition, the recognition rate of the auto-associative neural network is increased by 3 % in the sky-screen repeating-fire test with 30 mm caliber projectiles at a rate of 7,500 rounds/min over a 10 s duration; the accuracy and reliability of the system were thus fully verified. To further test the performance of the method, the sample set was expanded to 50 training and test samples; the recognition rate of the proposed approach reached 93 % at a signal-to-noise ratio greater than 6 dB. Especially when typical noise such as shock waves, flying insects, and other confounding factors is present at low SNR, the neural network identification method is far superior to level-based recognition.

Table 2 Analysis of Hopfield network identification result

4 Results

In this paper, the test accuracy and reliability of the sky-screen system were analyzed based on its working principle and a variety of interference noises, such as warhead shock waves, projectile-base shock waves, mosquitoes and birds, and vibration noise. An auto-associative neural network approach was then used to identify and eliminate typical interference factors. Through live-fire tests and simulation, the accuracy and reliability of the system were fully verified, and the prototype test system was shown to achieve its technical specifications.