
1 Introduction

In recent decades, the demands placed on the reliability of integrated circuits have grown steadily, while the processes and structures of large-scale integrated circuits have become increasingly complex. The amount of test data required for fault detection has increased dramatically, and testing and protection have become more difficult, prompting scholars in related fields to move beyond traditional approaches and seek new solutions [1]. With the advent of the big-data era, the maturing of neural network theory in deep learning, and the improvement of high-speed parallel computing, using machine learning to analyze parameter changes and identify the failure state of a device under test (whether it has failed, the failure type, and its causes) has been explored by scholars at home and abroad through a series of practical studies. Artificial Neural Networks (ANNs) have shown remarkable performance in fields such as computer vision, pattern recognition, and the natural sciences, and deep learning excels at classifying high-dimensional data such as images, audio, and video [2]. To date, hundreds of application models have been developed on the basis of neural network theory, and many of them have become classic methods in fields such as signal processing, computer vision, and optimal design, promoting progress and interconnection across these fields and carrying milestone significance for science as a whole [3]. The powerful pattern recognition capability of neural networks is the key to achieving classification goals in these fields. The essence of integrated circuit failure diagnosis is pattern recognition, that is, classification of failure states, with the normal working condition treated as one class. Applying neural network theory to failure analysis in the field of integrated circuit reliability is therefore theoretically feasible.

Applying neural network theory to failure analysis practice in the field of integrated circuit reliability is a milestone in the development of integrated circuit reliability. It provides practical examples and data samples for theoretical research in this field, promoting its improvement and development. At the same time, it is of great significance for targeted early warning and protection planning against integrated circuit device failures, offering new ideas and approaches for the reliability maintenance of integrated circuit products. It also marks the transition of Prognostics and Health Management (PHM) from a physical-model, experience-driven phase to a data-driven phase based on mathematical models [4].

2 Fault Diagnosis System Design

2.1 Overview of Neural Network Principles

An ANN is a model that simulates the regions and neurons of the human brain and their ability to learn. It models a biological neuron with a single neuron node and models a biological neural network with a topology of such nodes. Research on and application of ANNs have achieved substantial results, with many applications in pattern recognition, prediction, and early warning, providing new problem-solving approaches in engineering, biomedicine, the humanities and social sciences, and economics.

The development of ANNs began with the construction of the single-neuron McCulloch-Pitts (M-P) model to simulate a single biological neuron. The structure of a single neural network node model is shown in Fig. 1.

Fig. 1.

Block diagram of a single neural network node model

The output calculation formula of a single neuron node model is shown in Eq. (1).

$$ y = f\left( \sum\limits_{i = 1}^{n} w_{i} x_{i} + b \right) $$
(1)

where y denotes the output of the neuron node; xi denotes the i-th input of the neuron node; f denotes the activation function of the neuron node; wi denotes the weight connecting the neuron node to the i-th node of the previous layer; and b denotes the bias of the neuron node.
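As a minimal illustration of Eq. (1), the following Python sketch computes the output of a single neuron node; the sigmoid activation and the example weights are assumptions chosen for demonstration, not values from this design.

```python
import math

def neuron_output(x, w, b, f=lambda s: 1.0 / (1.0 + math.exp(-s))):
    """Single M-P neuron node: y = f(sum_i w_i * x_i + b), as in Eq. (1)."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return f(s)

# Hypothetical example with five inputs, matching the five-parameter samples used later.
x = [0.2, 0.5, 0.1, 0.9, 0.4]
w = [0.3, -0.7, 0.5, 0.2, 0.1]
print(neuron_output(x, w, b=0.05))
```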

With the development of science and technology in other fields, a variety of neural network models for failure prediction have been proposed and developed on the basis of the M-P model according to specific application requirements. The most significant and widely applied of these are described below [5].

  1. The perceptron model is the next step in development after the M-P model. It is a single-layer neural network formed by a topology of multiple M-P nodes, with only one layer of computation between the input layer and the output layer; it is equivalent to a feedforward neural network without a hidden layer.

  2. The feedforward neural network is a multi-layer perceptron model with one or more hidden layers between the input layer and the output layer; the multi-layer depth structure gives higher calculation accuracy. It is a one-way, non-feedback network model whose structure and parameters (weights and offsets) are fixed, that is, it has no independent learning function. In general it is not portable; the model can only be modified or migrated by changing the original code or by transmitting new parameters through other means.

  3. The back-propagation (BP) neural network is a feedforward neural network with a learning function: it adds training to the feedforward network. Input samples are divided into training samples (with expected outputs) and test samples (with unknown outputs). Training samples are fed through the network in the forward direction, and the network parameters are updated by feeding back the difference between the expected output and the calculated output. When the output error or the number of feedback iterations reaches the set threshold, the network parameters are considered mature, and test samples are then evaluated as in an ordinary feedforward network; a minimal training sketch is given after this list.

  4. The Recurrent Neural Network (RNN) is better suited than feedforward networks to analyzing time-series input sequences, in which the input characteristic parameters are variables related to time rather than constant data. RNNs are mostly used for handwriting and speech recognition.

  5. The Convolutional Neural Network (CNN) is a feedforward neural network with convolutional computation and a deep structure that simulates biological vision. Sharing network parameters (weights and biases) within its convolution kernels, together with the sparse representation produced by pooling, makes it particularly suitable for processing samples with very large data volumes. CNNs are mostly used for image recognition.

In addition to the models above, a large number of other models have been proposed in different development periods. The characteristic parameter in this work is the chip-level power supply current, with only five parameters per sample, so a BP neural network is used. However, the back-propagation algorithm of the BP network is difficult to implement on an FPGA. The FPGA therefore implements only the corresponding feedforward neural network, the software platform (training platform) carries out the training process, and the mature, trained network parameters are transmitted to the neural network module in the FPGA through a serial protocol.
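As an illustration of this partition, the sketch below quantizes trained weights to fixed-point fractions and packs them for serial transfer; the 16-bit fixed-point format, byte order, and serial framing are assumptions for demonstration only, not the actual protocol of this design.

```python
import struct

FRAC_BITS = 12  # assumed fixed-point format: signed 16-bit, 12 fractional bits

def to_fixed(value: float) -> int:
    """Quantize a trained weight or bias to a signed 16-bit fixed-point integer."""
    raw = int(round(value * (1 << FRAC_BITS)))
    return max(-32768, min(32767, raw))

def pack_parameters(params) -> bytes:
    """Pack all network parameters into a byte stream for the serial (UART) link."""
    return b"".join(struct.pack("<h", to_fixed(p)) for p in params)

# Hypothetical trained parameters, flattened layer by layer.
trained = [0.731, -0.205, 0.044, 1.250, -0.983]
frame = pack_parameters(trained)
# frame would then be written to the serial port, e.g. with pyserial:
# serial.Serial("COM3", 115200).write(frame)
```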

2.2 Feedforward Neural Network Implementation

The number of hidden-layer neuron nodes is related to the computational requirements of the actual problem and to the numbers of input and output nodes. If the number of hidden-layer nodes is too small, calculation accuracy suffers; if it is too large, the training process is prone to over-fitting. Empirical formulas for selecting the number of hidden-layer nodes are given in Eqs. (2) and (3) [6].

$$ n_{1} = \sqrt {n + m} + a $$
(2)
$$ n_{1} = \log_{2} n $$
(3)

where n denotes the number of input-layer nodes; m denotes the number of output-layer nodes; n1 denotes the number of hidden-layer nodes; and a denotes a constant in the interval [1, 10].
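For the node counts used later in this design (n = 5 inputs, m = 2 outputs), the two empirical formulas can be evaluated with a short script such as the one below; the specific values of a are illustrative.

```python
import math

n, m = 5, 2  # input and output node counts used in this design

# Eq. (2): n1 = sqrt(n + m) + a, with a a constant in [1, 10]
for a in (1, 5, 10):
    print(f"Eq. (2) with a={a}: n1 = {math.sqrt(n + m) + a:.1f}")

# Eq. (3): n1 = log2(n)
print(f"Eq. (3): n1 = {math.log2(n):.1f}")
```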

Due to the constraints of the look-up table (LUT) resources of the system carrier, a Xilinx XC7A100T FPGA, the number of input nodes is compressed to 5 in the final design. A high-temperature fault injection test is performed on the system under test; the network distinguishes the normal state from the high-temperature fault, so the number of output nodes is 2 (a two-class problem) [7-10]. Although a single hidden layer can solve most current pattern recognition problems, the number of hidden layers is set to 2, which provides a margin for calculation accuracy. The structure of the feedforward neural network is shown in Fig. 2.

Fig. 2.

Structure diagram of feedforward neural network

The functional simulation results of the feedforward neural network are shown in Fig. 3. Datain1–datain5 are the five parameters IN1–IN5 of the input sample, and OUT1_1–OUT1_2, OUT2_1–OUT2_4, and OUT3_1–OUT3_2 are the outputs of the neuron nodes of the first hidden layer, the second hidden layer, and the output layer, respectively. Done1, Done2, and Done3 are the end flags of the feedforward operation of the corresponding layers.

Fig. 3.

Functional simulation results of feedforward neural networks
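To cross-check a functional simulation such as the one in Fig. 3, a software reference model of the same forward path can be used. The sketch below assumes a 5-2-4-2 topology inferred from the signal names in Fig. 3 and uses placeholder parameters; the actual trained weights, activation function, and fixed-point format of the FPGA design are not reproduced here.

```python
import numpy as np

def layer(x, W, b, act=np.tanh):
    """One fully connected layer of the feedforward path."""
    return act(W @ x + b)

# Topology inferred from the Fig. 3 signal names: 5 inputs -> 2 -> 4 -> 2 outputs.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 5)), np.zeros(2)   # first hidden layer (OUT1_1-OUT1_2)
W2, b2 = rng.normal(size=(4, 2)), np.zeros(4)   # second hidden layer (OUT2_1-OUT2_4)
W3, b3 = rng.normal(size=(2, 4)), np.zeros(2)   # output layer (OUT3_1-OUT3_2)

datain = np.array([0.12, 0.48, 0.33, 0.91, 0.07])  # IN1-IN5, placeholder values
out1 = layer(datain, W1, b1)
out2 = layer(out1, W2, b2)
out3 = layer(out2, W3, b3)   # compare against OUT3_1, OUT3_2 in the waveform
print(out3)
```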

2.3 Implementation of Output Classification Function

As described in the previous section, the input sample parameters IN1–IN5 of the neural network failure prediction model are the processed chip-level power supply current parameters, and the output values OUT1 and OUT2 are fixed-point decimals in the same format as the inputs. As the last link of the failure analysis system, the feedforward neural network module needs to convert the output values of the output nodes of the model into the classification result (normal state or high-temperature fault) [11].

The output values of the output-layer nodes are classified with a SoftMax function, also called the normalized exponential function. The essence of the SoftMax function is to normalize an array, highlighting its largest value and suppressing the components that lie far below the maximum [12]. Concretely, the output values of all output nodes are treated as an array, the SoftMax function is applied to this array, and the SoftMax value of each element represents that element's contribution rate to the classification of the array. For a neural network, the number of output nodes equals the number of classification categories, and the output node with the largest contribution gives the classification result. If SoftMax(OUT1) > SoftMax(OUT2), the classification result is 1; otherwise, the classification result is 2. The SoftMax function is given by Eq. (4).

$$ soft\hbox{max} (x_{i} ) = \frac{{\exp (x_{i} )}}{{\sum\limits_{j = 1}^{n} {\exp (x_{j} )} }} $$
(4)

where xi is the output value of each output-layer node and n is the number of output nodes. Figure 4 shows the simulation results of the SoftMax function module.

Fig. 4.

Simulation result of SoftMax function module

As shown in Fig. 4, data_in1 and data_in2 are the output values OUT1 and OUT2 of the neural network failure prediction model, and exp1 and exp2 are their corresponding exponentials. The RTL implementation of the exponential function is completed by instantiating a CORDIC core, whose underlying identity is given in Eq. (5). Sum is the sum of the exponentials, that is, the denominator of Eq. (4). Out1 and out2 are SoftMax(OUT1) and SoftMax(OUT2). Since SoftMax(OUT1) > SoftMax(OUT2), the classification result is 1.

$$ \exp (x) = \sinh x + \cosh x $$
(5)
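A software sketch of this classification step is given below; it mirrors Eqs. (4) and (5) but uses ordinary floating point rather than the fixed-point CORDIC arithmetic of the FPGA module, so it should be read as an illustrative reference, not the RTL behaviour.

```python
import math

def exp_via_cordic_identity(x: float) -> float:
    """Eq. (5): exp(x) = sinh(x) + cosh(x), the identity used by the CORDIC core."""
    return math.sinh(x) + math.cosh(x)

def softmax(values):
    """Eq. (4): normalized exponential over the output-node values."""
    exps = [exp_via_cordic_identity(v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical output-node values OUT1 and OUT2 from the network.
out1, out2 = 0.82, -0.15
s1, s2 = softmax([out1, out2])
result = 1 if s1 > s2 else 2   # classification result as defined after Eq. (4)
print(s1, s2, result)
```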

3 The Simulation Results and Analysis

Figures 5 and 6 show the resource utilization and power consumption reports of the FPGA-based feedforward neural network (including the serial port receiving module).

Fig. 5.

Resource utilization of the feedforward neural network module

Fig. 6.

Power consumption of the feedforward neural network module

This module occupies 82% of the look-up table (LUT) resources of the Xilinx XC7A100T FPGA, 1% of the distributed RAM (LUTRAM) resources, 7% of the flip-flop (FF) resources, 6% of the I/O resources, and 38% of the global buffer (BUFG) resources.

The total power consumption of the module is 0.524 W, of which dynamic power accounts for 80% and static power for 20%. The power consumed by clocks, signals, logic cells, and I/O accounts for 21%, 39%, 39%, and 1% of the total, respectively.

4 Conclusions

This paper implements the failure analysis system on an FPGA carrier. The failure analysis system includes a feature parameter processing module and a feedforward neural network module. The feature parameter processing module comprises an SPI protocol algorithm, a fixed-point fraction recovery algorithm for the digital signal, a Kalman filter algorithm, and a dispersion normalization algorithm; the feedforward neural network comprises the establishment of the network structure and the output classification algorithm. Each part of the algorithm is designed in Verilog code, with some Xilinx auxiliary IP cores instantiated. Functional simulation, synthesis, gate-level simulation, place and route, and post-route simulation are carried out in sequence on the Vivado platform.