BP neural network and its application [neural network iv]

Typical case analysis of BP neural network

[example 5-1] 60 gasoline samples were scanned with a Fourier transform near-infrared spectrometer. The scanning range was 900~1700 nm with a sampling interval of 2 nm, so the spectral curve of each sample contains 401 wavelength points. The octane number of each sample was also determined by a traditional laboratory method. The task is to use a BP neural network (and, for comparison, an RBF neural network) to build a mathematical model between the near-infrared spectra of the gasoline samples and their octane numbers, and to evaluate the model's performance.
The MATLAB code is as follows:

>> clear all;
%% Generate the training and test sets
load spectra_data.mat
%Randomly permute the sample indices
temp = randperm(size(NIR,1));
%Training set: 50 samples
P_train = NIR(temp(1:50),:)';
T_train = octane(temp(1:50),:)';
%Test set: 10 samples
P_test = NIR(temp(51:end),:)';
T_test = octane(temp(51:end),:)';
N = size(P_test,2);              %Number of test samples
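
The statement of the example also asks for the model itself and its evaluation. A minimal sketch of the remaining steps, assuming the toolbox's mapminmax normalization and a 9-neuron hidden layer (both illustrative choices, not the book's), might look like this:

%Normalize inputs and targets to [0,1] (illustrative preprocessing)
[p_train,ps_input] = mapminmax(P_train,0,1);
p_test = mapminmax('apply',P_test,ps_input);
[t_train,ps_output] = mapminmax(T_train,0,1);
%Create and train the BP network (9 hidden neurons is an arbitrary choice)
net = newff(p_train,t_train,9);
net.trainParam.epochs = 1000;
net.trainParam.goal = 1e-3;
net = train(net,p_train,t_train);
%Simulate on the test set and undo the normalization
t_sim = sim(net,p_test);
T_sim = mapminmax('reverse',t_sim,ps_output);
err = T_sim - T_test;
RMSE = sqrt(mean(err.^2))        %Root-mean-square error on the test set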

Overview of BP neural network

A linear neural network can only solve linearly separable problems, a limitation that stems from its single-layer structure. A BP neural network contains one or more hidden layers and can therefore handle linearly non-separable problems. Historically, because no suitable learning algorithm for multilayer networks was known, neural network research fell into a slump for a time. The basic idea of the BP algorithm is that learning consists of two phases: forward propagation of the signal and back propagation of the error. During forward propagation, an input sample is fed in at the input layer, processed layer by layer through the hidden layers, and passed on to the output layer. If the actual output of the output layer differs from the expected output (the teacher signal), the algorithm switches to the error back-propagation phase. In back propagation, the output error is passed back toward the input layer, layer by layer through the hidden layers, and apportioned among all units of each layer; the resulting per-unit error signals serve as the basis for correcting each unit's weights. Forward signal propagation and backward error propagation with weight adjustment are carried out repeatedly, and this continual weight adjustment is the network's learning and training process. The process continues until the network's output error falls to an acceptable level or a preset number of training epochs is reached.
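
As a concrete illustration of the two phases, here is a minimal sketch of a single forward/backward pass for a one-hidden-layer network with sigmoid hidden units and a linear output unit; all sizes and values are made up for illustration:

%One gradient-descent step for a 1-hidden-layer network (illustrative only)
x = [0.5; -0.2]; d = 0.7;         %Input sample and teacher signal
W1 = randn(3,2); b1 = randn(3,1); %Hidden layer: 3 sigmoid units
W2 = randn(1,3); b2 = randn;      %Output layer: 1 linear unit
lr = 0.1;                         %Learning rate
%Forward propagation
h = logsig(W1*x + b1);            %Hidden-layer output
y = W2*h + b2;                    %Network output
e = d - y;                        %Output error
%Error back propagation: apportion the error to each layer
delta2 = e;                       %Output-layer error signal (linear unit)
delta1 = (W2'*delta2).*h.*(1-h);  %Hidden-layer error signal
%Weight correction based on the per-layer error signals
W2 = W2 + lr*delta2*h';  b2 = b2 + lr*delta2;
W1 = W1 + lr*delta1*x';  b1 = b1 + lr*delta1;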

BP neural network algorithm

[example 5-2] BP neural network training using the additional momentum method.
The program implementation code is as follows:

>> clear all;
%initialization
P=[-6.0 -6.1 -4.1 -4.0 5.0 -5.1 6.0 6.1];
T=[0 0 0.97 0.99 0.01 0.03 1.0 1.0];
[R,Q]=size(P);
[S,Q]=size(T);
disp('The bias B is fixed at 3 and will not learn')
Z1=menu('Initialize Weights with:',...          	%Make menu
    'W0=[-0.9];B0=3;',...                  	%Use the given initial value
    'Pick Values with Mouse/Arrow Keys',...  	%Click a point on the graph as the initial value
    'Random Initial Condition [Default]');       	%Random initial value (default)
disp('')
B0=3;
if Z1==1
    W0=[-0.9];              	%Use the given initial weight
else
    W0=rands(1,1);          	%Random initial weight (mouse-pick branch omitted in this excerpt)
end
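
The rest of this menu-driven demo depends on graphics callbacks and is not reproduced here. As a self-contained alternative, the additional momentum method can be sketched with the traingdm training function; the network size and parameter values below are illustrative assumptions:

%A minimal momentum-training sketch reusing P and T above (illustrative)
net=newff(minmax(P),[5,1],{'tansig','logsig'},'traingdm');
net.trainParam.lr=0.05;        %Learning rate
net.trainParam.mc=0.9;         %Momentum constant
net.trainParam.epochs=1000;    %Maximum number of epochs
net.trainParam.goal=1e-3;      %Performance goal
[net,tr]=train(net,P,T);
a=sim(net,P)                   %Simulate the trained network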

[example 5-3] the gradient descent method with an adaptive learning rate is used to train a BP neural network.
The program implementation code is as follows:

>> clear all;
P=[-1 -1 2 2;0 5 0 5];        %Input vectors
T=[-1 -1 1 1];                %Expected output
net=newff(minmax(P),[3,1],{'tansig','purelin'},'traingda');  %3 hidden neurons, adaptive-lr training
net.trainParam.show=50;       %Show progress every 50 epochs
net.trainParam.lr=0.05;       %Initial learning rate
net.trainParam.lr_inc=1.05;   %Learning rate increase factor
net.trainParam.epochs=300;    %Maximum number of epochs
net.trainParam.goal=1e-5;     %Performance goal
[net,tr]=train(net,P,T);      %Train the network
%Simulate the network
a=sim(net,P)

[example 5-4] a BP neural network is trained by the resilient backpropagation (Rprop) method.
The program implementation code is as follows:

>> clear all;
P=[-1 -1 2 2;0 5 0 5];        %Input vectors
T=[-1 -1 1 1];                %Expected output
net=newff(minmax(P),[3,1],{'tansig','purelin'},'trainrp');  %3 hidden neurons, Rprop training
net.trainParam.show=10;       %Show progress every 10 epochs
net.trainParam.epochs=300;    %Maximum number of epochs
net.trainParam.goal=1e-5;     %Performance goal
[net,tr]=train(net,P,T);
%Simulate the network
a=sim(net,P)

Design of BP neural network

[example 5-5] take fitting a sine function as an example to illustrate the design of a BP neural network.
The program implementation code is as follows:

>> clear all;
%Construct the validation sample set val
P=[-1:0.05:1];                         		%Input vector of the training samples
t0=sin(3*pi*P);                             	%Sine function to be fitted
t=sin(3*pi*P)+0.15*randn(size(P));            	%Target vector of the training samples (with noise)
val.P=[-0.975:0.05:0.975];              		%Input vector of the validation samples
val.T=sin(3*pi*val.P)+0.15*randn(size(val.P));  	%Target vector of the validation samples
%Construct the network
net=newff([-1 1],[20 1],{'tansig','purelin'},'traingdx');
net.trainParam.show=25;
net.trainParam.epochs=300;
net=init(net);                      			%Initialize the network
%Train the network, passing val as the validation set
[net,tr]=train(net,P,t,[],[],val);
save net2 net;              				%Save the trained network
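
After training, the saved network can be reloaded and its fit inspected against the noise-free sine; a minimal sketch (the plotting details are illustrative):

load net2 net;                 %Reload the saved network
y=sim(net,P);                  %Network output on the training inputs
figure;
plot(P,t0,'-',P,t,'+',P,y,'o') %True sine, noisy samples, network fit
legend('true function','noisy samples','network output')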

BP neural network toolbox functions

[example 5-6] use newcf function to create BP neural network for given data and conduct simulation.
The program implementation code is as follows:

>> clear all;
X=[2 3;-1 2;2 3];        	%Input training set
T=[3 4;2 1];          	%Target set
net=newcf(X,T,5);        	%Create a cascade-forward BP network with 5 hidden neurons
net=train(net,X,T);      	%Network training
X1=X; 
disp('Output network simulation data:')
y=sim(net,X1)

Run the program and the training record is shown in Figure 5-20.


[example 5-8] a cascade-forward BP neural network is built with the cascadeforwardnet function and used for a simple curve fit.
The program implementation code is as follows:

>> clear all;
[x,t] = simplefit_dataset;    %Built-in simple fitting dataset
net = cascadeforwardnet(10);  %Cascade-forward network with 10 hidden neurons
net = train(net,x,t);         %Train the network
view(net)                     %Display the network diagram
y = net(x);                   %Simulate the network
perf = perform(net,y,t)       %Evaluate performance (default: mse)
Run the program, and the output results are as follows:
perf =
   2.0202e-05

[example 5-10] the sigmoid function compresses the spacing of data far from the origin while, relatively speaking, stretching the spacing of data near the origin. Given linearly spaced data, after processing with the sigmoid function the values with larger absolute value are squeezed closer together, while the values with smaller absolute value end up relatively more spread out.
The program implementation code is as follows:

>> clear all;
x=-3:.2:3;                                  %Linearly spaced raw data
subplot(2,1,1);plot(x,x,'+');
hold on;
plot([0,0],x([8,24]),'^m','LineWidth',3.5)	%Mark two raw data points on the Y axis
plot(zeros(1,length(x)),x,'+');
grid on;
title('raw data')
y=logsig(x);							%Compute y with the sigmoid (logsig) function
subplot(2,1,2);plot(x,y,'+')				%Display y
hold on;
plot(zeros(1,length(y)),y,'+')
plot([0,0],y([8,24]),'^m','LineWidth',3.5)
grid on
title('after sigmoid processing')



[example 5-13] calculate the weight and threshold change from a given gradient, learning rate, and momentum constant.
The program implementation code is as follows:

>> clear all;
gW = rand(3,2);               %A random gradient
lp.lr = 0.45;                 %Learning rate
lp.mc = 0.8;                  %Momentum constant
ls = [];                      %Empty learning state (first call)
[dW,ls] = learngdm([],[],[],[],[],[],[],gW,[],[],lp,ls)
Run the program, and the output results are as follows:
dW =
    0.3043    0.3128
    0.1301    0.0306
    0.3023    0.1147
ls = 
  struct with fields:
    dw: [3×2 double]
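
Since gW is generated with rand, the displayed values differ from run to run. On the first call the learning state is empty, so, assuming MATLAB's documented update rule dW = mc*dWprev + (1-mc)*lr*gW with dWprev = 0, the result can be checked directly:

dW_expected = (1-lp.mc)*lp.lr*gW    %Should match the dW returned above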

[example 5-15] train a network on the given input and target data with the traingd function.
The program implementation code is as follows:

>> clear all;
p = [-1 -1 2 2; 0 5 0 5];
t = [-1 -1 1 1];
%Create a BP neural network with the feedforwardnet function
net = feedforwardnet(3,'traingd');
%Turn off data division (discussed later) for this example
net.divideFcn = '';
%Modify some default training parameters
net.trainParam.show = 50;
net.trainParam.lr = 0.05;
net.trainParam.epochs = 300;
net.trainParam.goal = 1e-5;
%Conduct network training
[net,tr] = train(net,p,t);
%Conduct simulation
a = net(p)

traingdm is the training function for the gradient descent with momentum BP algorithm. The calling format of the function is as follows:
net.trainFcn = 'traingdm'
[net,tr] = train(net,...)
The meaning of its parameters is the same as that of trainbfg function parameters.
[example 5-16] use the feedforwardnet function to create a BP neural network, and train it with the traingdm function.
The program implementation code is as follows:

>> clear all;
p = [-1 -1 2 2; 0 5 0 5];
t = [-1 -1 1 1];
net = feedforwardnet(3,'traingdm');   %3 hidden neurons, momentum training
net.trainParam.lr = 0.05;             %Learning rate
net.trainParam.mc = 0.9;              %Momentum constant
net = train(net,p,t);
y = net(p)

[example 5-17] create a BP neural network, and then use the msereg function to evaluate its performance.
The program implementation code is as follows:

>> clear all;
%Create a BP neural network
net=newff([-6,6],[4,1],{'tansig','purelin'},'trainlm','learngdm','msereg');
p=[-6 -3 0 3 6];
t=[0 1 1 1 0];
y=net(p)
e=t-y   						%Error vector
net.performParam.ratio=20/(20+1);  	%Set the performance ratio
perf=msereg(e,net)
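
msereg blends the mean squared error with the mean squared weights. Assuming the documented form perf = ratio*mse(e) + (1-ratio)*msw, where msw is taken here as the mean of the squared weights and biases (an assumption), the value can be checked by hand:

g = net.performParam.ratio;                         %Performance ratio
wb = getwb(net);                                    %Weights and biases as one vector (assumed basis of msw)
perf_check = g*mean(e(:).^2) + (1-g)*mean(wb.^2)    %Should approximate perf above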

3) plotes function
The plotes function is used to draw the error surface of a single neuron. The call format of the function is as follows:
plotes(WV,BV,ES,V)
Where WV is the N-dimensional row vector of weight values; BV is the M-dimensional row vector of threshold values; ES is the M×N matrix of errors; V is the viewing angle, with default value [-37.5,30].
The error surface drawn by the plotes function is determined by the weights and thresholds, and is calculated with the errsurf function.
[example 5-19] draw the error surface and contour according to the input sample and target data.
The program implementation code is as follows:

>> clear all;
p = [3 2 4];  				%Input samples
t = [0.4 0.8 1];  				%Target data
wv = -4:0.4:4;  				%Range of weight values
bv = wv;  					%Range of threshold values
ES = errsurf(p,t,wv,bv,'logsig');  	%Calculate the error surface
plotes(wv,bv,ES,[60 30]);  		%Draw the error surface


4) plotep function
The plotep function is used to draw the position of a weight and threshold on the error surface. The call format of the function is as follows:
H = plotep(W,B,E)
Where W is the current weight; B is the current threshold; E is the current error of the neuron.
H = plotep(W,B,E,H)
Where H is the information vector holding the current weight and threshold position.
[example 5-20] calculate and plot the position of a weight and threshold on the error surface for the given input samples and target data.
The program implementation code is as follows:

>> clear all;
x=[4.5 4.5 4.5];  			%Input samples
t=[0.4 0.45 0.5];  			%Target data
wv=-4:0.4:4;  				%Range of weight values
bv=wv;  					%Range of threshold values
ES=errsurf(x,t,wv,bv,'logsig');  	%Calculate the error surface
plotes(wv,bv,ES,[60,50]);  		%Draw the error surface
wv=-4;bv=0;
net=newlind(x,t);  			%Design a linear network
y=net(x);
e=t-y;
E=sumsqr(e);
plotep(wv,bv,E);  			%Plot the weight/threshold position on the error surface

Application of BP neural network

[example 5-21] a BP neural network can be used to classify two types of patterns, as shown in Figure 5-34.
The training samples determined from the two pattern classes in Figure 5-34 are as follows:
P=[1 3;-1 2;-2 1;-3 0];T=[0.3 0.7 0.8 0.2]
Since the problem is simple, the steepest-descent BP algorithm is used to train the network. The MATLAB code is as follows:

>> clear all;
%Define input vector and target vector
P=[1 3;-1 2;-2 1;-3 0]';
T=[0.3 0.7 0.8 0.2];
%Create the BP neural network and define the training function and parameters
net=newff(minmax(P),[5 1],{'logsig','logsig'},'traingd');
net.trainParam.goal=0.001;       %Performance goal
net.trainParam.epochs=5000;      %Maximum number of epochs
[net,tr]=train(net,P,T);       	%Train the network
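
Once training converges, the network can be simulated on the training patterns to confirm that the two classes are separated; a minimal sketch (the 0.5 decision threshold is an illustrative choice):

Y=sim(net,P)                     %Outputs should approximate T=[0.3 0.7 0.8 0.2]
class=Y>0.5                      %Class labels with a 0.5 decision threshold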

[example 5-24] demonstrate the application of BP neural network in pattern recognition.
A network is designed and trained to recognize the 26 letters of the alphabet. A digital imaging system analyzes each letter and converts it into a digital signal. Figure 5-58 shows a grid diagram of the letter A.
The MATLAB code is as follows:

>> clear all;
[alphabet,targets]=prprob;       %26 letters as input vectors and their target vectors
[R,Q]=size(alphabet);
[S2,Q]=size(targets);
S1=10;                           %Number of hidden-layer neurons
P=alphabet;
net=newff(minmax(P),[S1,S2],{'logsig','logsig'},'traingdx');  %Construct the BP neural network
net.LW{2,1}=net.LW{2,1}*0.01;    %Scale down the output-layer weights
net.b{2}=net.b{2}+0.01;          %Shift the output-layer thresholds slightly
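
The excerpt stops before the training step; a minimal sketch of how it is typically completed (the goal and epoch values are illustrative assumptions):

net.trainParam.goal=0.01;          %Performance goal (illustrative)
net.trainParam.epochs=5000;        %Maximum number of epochs (illustrative)
[net,tr]=train(net,P,targets);     %Train on the noise-free letters
%Test: present the letter A (column 1) and take the strongest output
y=sim(net,alphabet(:,1));
[~,k]=max(y)                       %k==1 indicates the letter A was recognized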

Application of BP neural network in estimation

[example 5-25] it is planned to design an instrument that estimates the cholesterol content of serum by spectral analysis of blood samples. Blood samples from 264 patients were collected and analyzed spectroscopically at 21 wavelengths. For the same samples, the levels of hdl, ldl, and vldl cholesterol were also measured by serum separation, giving the target data.
The MATLAB code is as follows:

>>  clear all;
load choles_all;         								%choles_all is a dataset shipped with the toolbox
[pn,meanp,stdp,tn,meant,stdt]=prestd(p,t);   			%Normalize inputs and targets to zero mean, unit variance
%Perform principal component analysis, keeping the components that account for 99.9% of the variation
[ptrans,transMat]=prepca(pn,0.001);
[R,Q]=size(ptrans)            							%Check the size of the transformed data matrix
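
The excerpt ends after the PCA step. A minimal sketch of the usual continuation — splitting the samples into training, validation, and test sets and training a network — follows; the split pattern and the 5-neuron hidden layer are illustrative assumptions:

%Split the transformed data: roughly 1/2 training, 1/4 validation, 1/4 test
iitst=2:4:Q; iival=4:4:Q; iitr=[1:4:Q 3:4:Q];
val.P=ptrans(:,iival); val.T=tn(:,iival);
test.P=ptrans(:,iitst); test.T=tn(:,iitst);
ptr=ptrans(:,iitr); ttr=tn(:,iitr);
%Create and train a BP network (5 hidden neurons, an illustrative choice)
net=newff(minmax(ptr),[5 3],{'tansig','purelin'},'trainlm');
[net,tr]=train(net,ptr,ttr,[],[],val,test);
an=sim(net,ptrans);                  %Simulate on all samples
a=poststd(an,meant,stdt);            %Undo the target normalization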
