#### 3.1. Algorithm Description

The three-layer BP neural network model adopted in this paper is composed of an input layer, a hidden layer, and an output layer.

Figure 1 shows a typical three-layer BP neural network model.

The network is feedforward in the sense that the input of each layer's nodes comes only from the output of the previous layer's nodes. The input signal is first transmitted to the hidden layer nodes; the output of the hidden layer nodes, after the activation function, is then transmitted to the output nodes, which finally produce the output results.

(1) For the input layer nodes $i$ $(i = 1, 2, \dots, n)$, the output ${O}_{i}$ is equal to the input ${X}_{i}$, and this value is transmitted to the second layer.

(2) For the hidden layer nodes $j$ $(j = 1, 2, \dots, p)$, the input ${I}_{j}$ and output ${O}_{j}$ are:

$$ I_j = \sum_{i=1}^{n} \omega_{ji} O_i - \theta_j, \qquad O_j = f(I_j) $$

where ${\omega}_{ji}$ is the weight between hidden layer node $j$ and input layer node $i$, ${\theta}_{j}$ is the bias of hidden layer node $j$, and $f$ is a sigmoid function with the following expression:

$$ f(x) = \frac{1}{1 + e^{-x}} $$

(3) For the output layer nodes $k$ $(k = 1, 2, \dots, m)$, the input ${I}_{k}$ and output ${y}_{k}$ are:

$$ I_k = \sum_{j=1}^{p} \omega_{kj} O_j - \theta_k, \qquad y_k = f(I_k) $$

where ${\omega}_{kj}$ is the connection weight between output layer node $k$ and hidden layer node $j$, and ${\theta}_{k}$ is the bias of output layer node $k$.
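The forward pass described in steps (1)–(3) can be sketched in Python. This is a minimal illustration, not the authors' implementation: the bias is assumed to be subtracted from the weighted sum, and all array names are illustrative.

```python
import numpy as np

def sigmoid(x):
    # Sigmoid activation f(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_hidden, theta_hidden, w_output, theta_output):
    """One forward pass through a three-layer BP network.

    x            : input vector, shape (n,)      -- O_i = X_i
    w_hidden     : hidden-layer weights (p, n)   -- omega_ji
    theta_hidden : hidden-layer biases (p,)      -- theta_j
    w_output     : output-layer weights (m, p)   -- omega_kj
    theta_output : output-layer biases (m,)      -- theta_k
    """
    I_j = w_hidden @ x - theta_hidden      # hidden-layer input
    O_j = sigmoid(I_j)                     # hidden-layer output
    I_k = w_output @ O_j - theta_output    # output-layer input
    y_k = sigmoid(I_k)                     # network output
    return O_j, y_k
```

Because the sigmoid maps every input into $(0, 1)$, all hidden and output activations lie strictly between 0 and 1.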

For a given training sample $({x}_{p1}, {x}_{p2}, \dots, {x}_{pn})$, where $p$ is the sample index $(p = 1, 2, \dots, P)$, the mean square error between the network output and the training target can be expressed as:

$$ E_p = \frac{1}{2} \sum_{l=1}^{m} \left( t_{pl} - y_{pl} \right)^2 $$

where ${t}_{pl}$ is the target output of the $l$th output unit for the $p$th sample, and ${y}_{pl}$ is the network output of the $l$th output unit for the $p$th sample. BP network training consists of forward calculation within the network and backpropagation of the error; it aims to minimize the output error of the network by adjusting the connection weights within the network. The BP algorithm adjusts the connection weights between the input layer and the hidden layer, and between the hidden layer and the output layer, in the multi-layer feedforward network.
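As an illustration, the per-sample error can be computed directly. The factor of $1/2$ is the conventional choice (it cancels when differentiating) and is an assumption here, since the original equation is not shown.

```python
import numpy as np

def sample_error(t_p, y_p):
    # E_p = 1/2 * sum_l (t_pl - y_pl)^2
    return 0.5 * np.sum((t_p - y_p) ** 2)
```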

The essence of data fusion using the BP neural network is to train the weights by gradient descent on the error function so as to minimize the error value. The weight training is described as follows.

(1) Calculate the correction coefficient of the connection weights between the hidden layer and the output layer. Here ${y}_{pl} = {y}_{k}$, and the network training rule makes ${E}_{p}$ descend along its gradient in each training cycle, so the correction equation of the weight coefficient is:

$$ \Delta \omega_{kj} = -\eta \frac{\partial E_p}{\partial \omega_{kj}} $$

where ${I}_{k}$ is the input of node $k$ in the output layer and $\eta$ is the step length of the gradient search, $0 < \eta < 1$. By the chain rule, the equation below is obtained:

$$ \frac{\partial E_p}{\partial \omega_{kj}} = \frac{\partial E_p}{\partial I_k} \cdot \frac{\partial I_k}{\partial \omega_{kj}} = \frac{\partial E_p}{\partial I_k}\, O_j $$

The backpropagation error signal of the output layer is defined as follows:

$$ \delta_k = -\frac{\partial E_p}{\partial I_k} $$

Taking the derivative of both sides, the equation below is obtained:

$$ \delta_k = (t_{pk} - y_k)\, f'(I_k) = (t_{pk} - y_k)\, y_k (1 - y_k) $$

The correction equation of the weight coefficient is as follows:

$$ \Delta \omega_{kj} = \eta\, \delta_k O_j $$

Therefore, error backpropagation adjusts the weights from the hidden layer to the output layer as follows:

$$ \omega_{kj}(t+1) = \omega_{kj}(t) + \eta\, \delta_k O_j $$

(2) Calculate the correction coefficient of the connection weights between the input layer and the hidden layer. Here the relevant node output is ${O}_{j}$, and the correction equation of the weight coefficient is:

$$ \Delta \omega_{ji} = -\eta \frac{\partial E_p}{\partial \omega_{ji}} $$

where ${I}_{j}$ is the input of node $j$ in the hidden layer and $\eta$ is the step length of the gradient search, $0 < \eta < 1$. By the chain rule, the equation below is obtained:

$$ \frac{\partial E_p}{\partial \omega_{ji}} = \frac{\partial E_p}{\partial I_j} \cdot \frac{\partial I_j}{\partial \omega_{ji}} = \frac{\partial E_p}{\partial I_j}\, O_i $$

The backpropagation error signal of the hidden layer is defined as follows:

$$ \delta_j = -\frac{\partial E_p}{\partial I_j} $$

Taking the derivative of both sides, the equation below is obtained:

$$ \delta_j = f'(I_j) \sum_{k=1}^{m} \delta_k \omega_{kj} = O_j (1 - O_j) \sum_{k=1}^{m} \delta_k \omega_{kj} $$

Therefore, error backpropagation adjusts the weights from the input layer to the hidden layer as follows:

$$ \omega_{ji}(t+1) = \omega_{ji}(t) + \eta\, \delta_j O_i $$
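The two correction steps can be combined into a single gradient-descent update per training sample. The sketch below is a minimal illustration under standard sigmoid and subtracted-threshold conventions, not the authors' code; `eta` corresponds to the step length $\eta$.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bp_update(x, t, w_hidden, theta_hidden, w_output, theta_output, eta=0.1):
    """One BP training step (gradient descent on E_p) for a single sample.

    Computes delta_k for the output layer, backpropagates it to obtain
    delta_j for the hidden layer, then applies the corrections
    Delta w = eta * delta * (input to that weight), in place.
    """
    # Forward pass
    I_j = w_hidden @ x - theta_hidden
    O_j = sigmoid(I_j)
    I_k = w_output @ O_j - theta_output
    y = sigmoid(I_k)

    # Output-layer error signal: delta_k = (t_k - y_k) y_k (1 - y_k)
    delta_k = (t - y) * y * (1.0 - y)
    # Hidden-layer error signal: delta_j = O_j (1 - O_j) sum_k delta_k w_kj
    delta_j = O_j * (1.0 - O_j) * (w_output.T @ delta_k)

    # Weight and bias corrections (biases move with the opposite sign
    # because they enter the node input with a minus sign)
    w_output += eta * np.outer(delta_k, O_j)
    theta_output -= eta * delta_k
    w_hidden += eta * np.outer(delta_j, x)
    theta_hidden -= eta * delta_j
    return y
```

Repeating this update over the training set drives $E_p$ down along its gradient, which is exactly the training rule stated above.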

The training process of the BP neural network is shown in Figure 2. Once the network structure is determined, training begins. BP neural network learning consists of two processes: forward signal propagation and error backpropagation. During forward propagation, the input sample signal enters the network at the input layer and, after hidden layer processing, is transmitted to the output layer. If the output result is not consistent with the expected value, the error is backpropagated and the weights are adjusted according to the correction coefficients above, until the output of the network's output layer meets the requirements.

#### 3.2. Fault Diagnosis Steps Using Neural Network Information Fusion

Neural network knowledge representation is implicit: knowledge is expressed in the topology and connection weights of the network. An expert system built on neural network technology uses the network as a unified system for information storage and processing, so knowledge storage and the reasoning carried out during problem solving both take place in the system's neural network module, which thus unifies the knowledge base and the inference engine. First, feature data is extracted from the existing equipment characteristic signals and, after preprocessing (normalization), used as the input of the neural network. Data extracted from known fault results is used as the network output to build the BP neural network, and the training sample set formed from the existing characteristic data and known fault data is used for learning and network self-adaptation, so that the correspondence between the network's weights and thresholds and the known fault results achieves the expected output. Once training is complete, the successfully trained BP neural network can be used for fault diagnosis. The process of fault diagnosis is as follows:

(1) Input the fault sample to the nodes of the input layer; this is also the output of the neurons in this layer;

(2) Obtain the output of hidden layer neurons by Equation (22) and take it as the input of the output layer;

(3) Obtain the output of neurons in the output layer from Equations (2)–(5);

(4) Determine the final output result of neurons in the output layer by the threshold function.
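Steps (1)–(4) can be sketched as follows. This is a hypothetical illustration: the min-max normalization and the 0.5 decision threshold are assumptions, not choices specified in the text.

```python
import numpy as np

def diagnose(raw_signal, w_hidden, theta_hidden, w_output, theta_output,
             threshold=0.5):
    """Run a trained three-layer BP network on one fault sample.

    Step (1): normalize the raw signal (min-max scaling assumed here).
    Steps (2)-(3): forward pass through hidden and output layers.
    Step (4): apply a threshold function to the output-layer activations.
    """
    lo, hi = raw_signal.min(), raw_signal.max()
    x = (raw_signal - lo) / (hi - lo) if hi > lo else np.zeros_like(raw_signal)
    O_j = 1.0 / (1.0 + np.exp(-(w_hidden @ x - theta_hidden)))   # hidden output
    y = 1.0 / (1.0 + np.exp(-(w_output @ O_j - theta_output)))   # network output
    return (y >= threshold).astype(int)   # 1 = fault class indicated
```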

Diesel engine fault diagnosis first extracts data from the fault signals to be diagnosed for preprocessing; the preprocessed fault data is then input into the successfully trained neural network. The fault diagnosis steps using neural network information fusion are shown in Figure 3. First, a pressure sensor is used to measure the voltage signal at the key points of the components to be diagnosed and a temperature sensor is used to measure their temperature signal; fuzzy set theory is then used for analysis.