BP neural network MATLAB code: explanation and implementation steps

1. Introduction and structural parameters of BP neural network

A neural network is a common mathematical model in machine learning. It processes information through a structure modeled on the synaptic connections of biological neurons. When applying a neural network, the information-processing units generally fall into three categories: input units, output units, and hidden units. As the names suggest, input units receive signals and data from the outside; output units deliver the system's processing results; and hidden units sit between the input and output units, with a structure that cannot be observed from outside the network. Beyond these three kinds of units, the connection strength between neurons is determined by parameters such as weights.

1.1 structure and composition of BP neural network

The following figure shows the interface that often appears when training a neural network. From it we can see that this is a BP network with 2 inputs, 1 output, and a single hidden layer of 5 neurons, known as a 2-5-1 network structure.
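The forward pass of such a 2-5-1 network can be sketched in a few lines. A minimal NumPy illustration follows; the weights here are random placeholders for the shape of the computation, not trained values:

```python
import numpy as np

# Hypothetical weights for illustration; in practice these are learned.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((5, 2))   # input layer -> hidden layer (5 neurons)
b1 = rng.standard_normal((5, 1))
W2 = rng.standard_normal((1, 5))   # hidden layer -> output layer
b2 = rng.standard_normal((1, 1))

def forward(x):
    """Forward pass of a 2-5-1 BP network: tanh hidden layer, linear output
    (the counterparts of MATLAB's 'tansig' and 'purelin' transfer functions)."""
    h = np.tanh(W1 @ x + b1)       # hidden-layer activation
    return W2 @ h + b2             # linear output

x = np.array([[3.0], [7.0]])       # one 2-dimensional input sample
y = forward(x)
print(y.shape)                     # a single scalar output, shape (1, 1)
```

Training then adjusts W1, b1, W2, b2 to minimize the prediction error on the sample data.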

1.2 parameter interpretation of BP neural network training interface

It should be noted that:
1. Validation checks ("generalization"): during training, if the validation-set mean square error (MSE) fails to decrease for 6 consecutive epochs, the network stops training early to prevent overfitting.
2. mu: for the trainlm algorithm used here, mu is the damping (adjustment) parameter of the Levenberg-Marquardt method, not an error accuracy. It modulates each weight update between gradient descent (large mu) and the Gauss-Newton method (small mu): MATLAB decreases mu after a successful step and increases it when a step fails, which helps training move past difficult regions of the error surface. A related but distinct parameter is the momentum constant mc of momentum training (traingdm), which lies between 0 and 1 and is included in the weight-update expression to avoid getting stuck in a local minimum.
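As a rough sketch of what mu controls in Levenberg-Marquardt training: each weight step solves (JᵀJ + mu·I)·Δw = Jᵀe, where J is the Jacobian of the errors with respect to the weights and e is the error vector. The matrices below are toy numbers for illustration only, not taken from the network above:

```python
import numpy as np

def lm_step(J, e, mu):
    """One Levenberg-Marquardt weight step: solve (J^T J + mu*I) dw = J^T e."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + mu * np.eye(n), J.T @ e)

# Toy Jacobian (3 errors, 2 weights) and error vector.
J = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
e = np.array([0.5, -0.2, 0.1])

small_mu = lm_step(J, e, mu=1e-3)  # near Gauss-Newton: large, aggressive step
large_mu = lm_step(J, e, mu=1e3)   # near gradient descent: small, cautious step
print(np.linalg.norm(small_mu) > np.linalg.norm(large_mu))  # True
```

This is why raising mu after a failed step makes training more conservative, and lowering it after a successful step speeds convergence.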

2. Steps to implement a BP network

  1. Read the data
  2. Split the data into training and prediction (test) sets
  3. Normalize the training sample data
  4. Construct the BP neural network
  5. Configure the network parameters (number of epochs, learning rate, training goal error, etc.)
  6. Train the BP neural network
  7. Normalize the test samples
  8. Predict with the trained BP neural network
  9. Inverse-normalize the prediction results and compute the error
  10. Compare the real and predicted values of the test set and their error
3. MATLAB code


%% BP neural network implemented in MATLAB
% Clear environment variables
clear all
clc

%% Step 1: read data
input=randi([1 20],2,200);       % Input data: 2 rows x 200 samples, integers in [1,20]
output=input(1,:)'+input(2,:)';  % Output data: sum of the two inputs (target to learn)

%% Step 2: set training data and prediction data
input_train = input(:,1:190);
output_train =output(1:190,:)';
input_test = input(:,191:200);
output_test =output(191:200,:)';
% Number of nodes
inputnum=2;
hiddennum=5;   % Empirical formula for hidden nodes: p = sqrt(m+n) + a (a = 1~10), so values 2~13 can be tested
outputnum=1;
%% Step 3: normalize the training sample data
[inputn,inputps]=mapminmax(input_train);   % Normalize to [-1,1]; inputps stores the settings for reuse
[outputn,outputps]=mapminmax(output_train);
%% Step 4: build the BP neural network
net=newff(inputn,outputn,hiddennum,{'tansig','purelin'},'trainlm');  % tansig hidden layer, purelin output, Levenberg-Marquardt training

W1=net.iw{1,1};   % Weights from input layer to hidden layer
B1=net.b{1};      % Biases (thresholds) of hidden-layer neurons

W2=net.lw{2,1};   % Weights from hidden layer to output layer
B2=net.b{2};      % Biases (thresholds) of output-layer neurons

%% Step 5: configure network parameters (epochs, learning rate, training goal error, etc.)
net.trainParam.epochs=1000;     % Maximum number of training epochs, set to 1000
net.trainParam.lr=0.01;         % Learning rate, set to 0.01
net.trainParam.goal=0.00001;    % Training goal (minimum MSE), set to 0.00001

%% Step 6: train the BP neural network
net=train(net,inputn,outputn);  % Train with the normalized input samples inputn and output samples outputn

%% Step 7: normalize the test samples
inputn_test=mapminmax('apply',input_test,inputps);  % Apply the same normalization as the training inputs

%% Step 8: BP neural network prediction
an=sim(net,inputn_test);  % Simulate the trained network on the test inputs

%% Step 9: inverse-normalize the prediction results and compute the error
test_simu=mapminmax('reverse',an,outputps);  % Map the predictions back to the original scale
error=test_simu-output_test;                 % Error between predicted and real values

%% Step 10: compare the real and predicted values and their error
figure(1)
plot(output_test,'bo-')
hold on
plot(test_simu,'r*-')
plot(error,'square','MarkerFaceColor','b')
legend('Real value','Predicted value','Error')
xlabel('Sample index')
ylabel('Value')
[c,l]=size(output_test);
MAE1=sum(abs(error))/l;
MSE1=error*error'/l;
RMSE1=MSE1^(1/2);
disp(['-----------------------Error calculation--------------------------'])
disp(['With ',num2str(hiddennum),' hidden nodes, the error results are:'])
disp(['Mean absolute error MAE:     ',num2str(MAE1)])
disp(['Mean square error MSE:       ',num2str(MSE1)])
disp(['Root mean square error RMSE: ',num2str(RMSE1)])

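The three error metrics computed in steps 9 and 10 are easy to sanity-check by hand. For example, with toy predicted and real values in NumPy (not the network's actual output):

```python
import numpy as np

# Hypothetical real vs. predicted values, for illustration only.
y_true = np.array([8.0, 15.0, 11.0, 20.0])
y_pred = np.array([8.5, 14.0, 11.0, 21.0])

error = y_pred - y_true
mae = np.mean(np.abs(error))   # sum(abs(error))/l in the MATLAB code
mse = np.mean(error ** 2)      # error*error'/l
rmse = np.sqrt(mse)            # MSE^(1/2)

print(mae)    # 0.625
print(mse)    # 0.5625
print(rmse)   # 0.75
```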

4. BP code operation results

4.1 error calculation between predicted and true values (MAE, MSE, RMSE)

4.2 performance analysis of BP network training

  1. Plot of the predicted values, real values, and errors
  2. Change of the mean square error of the training, validation, and test sets (and overall) with the number of training epochs
    Note: the small circle marks the mean square error at the epoch where training stopped
  3. Training state plots for each stage of the BP neural network
  4. Regression (correlation) plots for each sample set and for all data
  5. Error distribution histograms for the training, validation, and test sets

5. Conclusion

  1. After a week of hard work, the idea and complete code of the BP network are finally written up here for everyone to learn from.
  2. Readers only need to plug in their own data to get the corresponding results. If you have any questions, please leave a comment.
  3. Corrections are welcome if anything is inaccurate.

6. MATLAB code

1. BP neural network prediction code

2. BP neural network prediction code and explanation

3. BP neural network data classifier code

4. Particle swarm optimization (PSO) optimized BP neural network regression prediction code

5. Genetic algorithm (GA) optimized BP neural network regression prediction code

6. Genetic algorithm (GA) optimized BP neural network regression prediction with explanation

7. Genetic algorithm (GA) optimized BP neural network classification code

8. Sparrow search algorithm (SSA) optimized BP neural network classification code

Tags: MATLAB Machine Learning neural networks

Posted on Tue, 07 Dec 2021 00:02:11 -0500 by Billett