[BP prediction] Optimizing a BP neural network with the atom search optimization algorithm for data prediction (MATLAB source code)

1 Introduction to the BP neural network prediction algorithm

1.1 Principle of the BP neural network

Neural networks are the basis of deep learning. They are widely used in machine learning and deep learning, for example in function approximation, pattern recognition, classification models, image classification, CTR prediction based on deep learning, data compression, and data mining. The following introduces the principle of the BP neural network.

The simplest three-layer BP neural network is shown in the figure below.

It includes an input layer, a hidden layer, and an output layer, with weights on the connections between nodes. A sample has m input features, which may be ID features or continuous features. There can be multiple hidden layers; the number of hidden-layer nodes is generally set to a power of 2, such as 512, 256, 128, 64, or 32. There are n output results; usually there is a single output, as in a classification or regression model.

In 1989, Robert Hecht-Nielsen proved that any continuous function on a closed interval can be approximated by a BP network with a single hidden layer; this is the universal approximation theorem. Therefore, a three-layer BP network can realize any mapping from m dimensions to n dimensions. The BP neural network is mainly divided into two processes:

  1. Working signal forward transfer subprocess

  2. Error signal reverse transfer subprocess

1 Working signal forward transfer subprocess

Let $\omega_{ij}$ denote the weight between node $i$ and node $j$, $b_j$ the bias of node $j$, and $x_j$ the output value of node $j$. The output of each node is computed from the outputs of all nodes in the previous layer, the weights between those nodes and the current node, the bias of the current node, and the activation function. The specific calculation formula is as follows:

$$S_j = \sum_{i} \omega_{ij}\,x_i + b_j, \qquad x_j = f(S_j)$$

where $f$ is the activation function; the sigmoid function or the tanh function is generally selected. The forward-transfer process is relatively simple and can be computed from front to back.
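As a concrete illustration, here is a minimal MATLAB sketch of the forward pass through a single hidden layer. The variable names and sizes are illustrative assumptions, not taken from the source code:

% Minimal forward-pass sketch for a three-layer BP network (illustrative).
m = 4; H = 8; n = 1;                      % Illustrative layer sizes.
x  = rand(m,1);  d = rand(n,1);           % One training sample and its target.
W1 = randn(H,m); b1 = zeros(H,1);         % Input-to-hidden weights and biases.
W2 = randn(n,H); b2 = zeros(n,1);         % Hidden-to-output weights and biases.
f  = @(s) 1./(1+exp(-s));                 % Sigmoid activation f(S_j).
a1 = f(W1*x + b1);                        % Hidden-layer outputs x_j = f(S_j).
y  = f(W2*a1 + b2);                       % Network outputs.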

2 Error signal reverse transfer subprocess

In the BP neural network, the error-signal reverse-transfer subprocess is more involved; it is based on the Widrow-Hoff learning rule. Suppose the expected outputs of the output layer are $d_j$; the error function of the regression model is then

$$E(w,b) = \frac{1}{2}\sum_{j=1}^{n}\bigl(d_j - x_j\bigr)^2$$

The main purpose of the BP neural network is to repeatedly modify the weights and biases so as to minimize the value of the error function. The Widrow-Hoff learning rule adjusts the weights and biases of the network along the steepest-descent direction of the sum of squared errors. According to the gradient descent method, the correction of the weight vector is proportional to the negative gradient of $E(w,b)$ at the current position; for the j-th output node,

$$\Delta \omega_{ij} = -\eta\,\frac{\partial E(w,b)}{\partial \omega_{ij}}$$

where $\eta$ is the learning rate.

Suppose the selected activation function is the sigmoid function

$$f(x) = \frac{1}{1+e^{-x}}$$

Differentiating the activation function gives

$$f'(x) = f(x)\bigl(1 - f(x)\bigr)$$

Then for $\omega_{ij}$ we have

$$\Delta \omega_{ij} = \eta\,\delta_j\,x_i$$

where

$$\delta_j = (d_j - x_j)\,f'(S_j)$$

Similarly, for $b_j$ we have

$$\Delta b_j = \eta\,\delta_j$$

This is the famous δ learning rule: it reduces the error between the actual output and the expected output of the system by adjusting the connection weights between neurons. This rule is also called the Widrow-Hoff learning rule or the error-correction learning rule.

The above computes the adjustments for the weights between the hidden layer and the output layer and for the biases of the output layer; computing the adjustments for the input-to-hidden weights and the hidden-layer biases is more involved. Suppose $\omega_{ki}$ is the weight between the k-th node of the input layer and the i-th node of the hidden layer; then

$$\Delta \omega_{ki} = \eta\,\delta_i\,x_k$$

where

$$\delta_i = f'(S_i)\sum_{j}\omega_{ij}\,\delta_j$$

With the above formulas, according to the gradient descent method, the weights and biases between the hidden layer and the output layer are updated as

$$\omega_{ij} \leftarrow \omega_{ij} + \eta\,\delta_j\,x_i, \qquad b_j \leftarrow b_j + \eta\,\delta_j$$

Similarly, the weights and biases between the input layer and the hidden layer are updated as

$$\omega_{ki} \leftarrow \omega_{ki} + \eta\,\delta_i\,x_k, \qquad b_i \leftarrow b_i + \eta\,\delta_i$$

This completes the derivation of the BP neural network update formulas.
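Putting the derivation together, here is a minimal MATLAB sketch of one gradient-descent update step, continuing the illustrative names from the forward-pass sketch above (d is the target vector and eta the learning rate; both are assumptions for illustration, not part of the original source):

% One backpropagation update, following the formulas derived above.
eta    = 0.1;                          % Learning rate (illustrative).
delta2 = (d - y).*y.*(1-y);            % Output delta: (d_j - x_j) f'(S_j).
delta1 = (W2'*delta2).*a1.*(1-a1);     % Hidden delta: f'(S_i) sum_j w_ij delta_j.
W2 = W2 + eta*delta2*a1';              % Hidden-to-output weight update.
b2 = b2 + eta*delta2;                  % Output-layer bias update.
W1 = W1 + eta*delta1*x';               % Input-to-hidden weight update.
b1 = b1 + eta*delta1;                  % Hidden-layer bias update.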

2 Atom search optimization algorithm

The Atom Search Optimization (ASO) algorithm is a novel intelligent algorithm based on a molecular-dynamics model, proposed in 2019. It simulates the displacement of atoms caused by mutual forces and system constraints in a molecular system composed of atoms. Between neighbouring atoms there are mutual forces (attraction and repulsion), and the globally best atom imposes a geometric constraint on the other atoms. Attraction drives the atoms to explore the whole search space widely, while repulsion enables them to exploit promising regions effectively. The algorithm is characterized by strong optimization ability and fast convergence.
1. Principle of the atom search optimization algorithm

In ASO, every candidate solution is modelled as an atom whose mass is derived from its fitness, so that better atoms are heavier. Each atom experiences an interaction force from neighbouring atoms, modelled on the Lennard-Jones potential and scaled by the depth weight alpha, together with a geometric constraint force that pulls it towards the current best atom, scaled by the multiplier weight beta. The resulting acceleration (total force divided by mass) drives the velocity and position updates, and both force terms decay over the iterations so that the search gradually shifts from global exploration to local exploitation.

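The demonstration code in section 4 calls an external Acceleration helper that is not included in this post. As a rough, heavily simplified sketch of what such a helper computes, keeping only the mass calculation and the constraint force towards the best atom from the description above (the Lennard-Jones-style interaction term scaled by alpha is omitted for brevity, and the signature is simplified, so this is not the full ASO acceleration):

function Atom_Acc=Acceleration_Sketch(Atom_Pop,Fitness,Iteration,Max_Iteration,X_Best,beta)
% Simplified ASO acceleration sketch: constraint force only (illustrative).
[Atom_Num,Dim]=size(Atom_Pop);
% Mass: lower (better) fitness gives a heavier atom; normalize over the population.
M=exp(-(Fitness-min(Fitness))./(max(Fitness)-min(Fitness)+eps));
m=M./sum(M);
% The multiplier weight decays over the iterations (exploration -> exploitation).
lambda=beta*exp(-20*Iteration/Max_Iteration);
Atom_Acc=zeros(Atom_Num,Dim);
for i=1:Atom_Num
    G=lambda*(X_Best-Atom_Pop(i,:));   % Constraint force towards the best atom.
    Atom_Acc(i,:)=G./m(i);             % Acceleration = force / mass.
end
end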
3 Steps of optimizing the BP neural network with ASO

Step 1: initialize the weights and thresholds of the BP neural network.
Step 2: compute the length of the decision vector of the atom search optimization algorithm (all weights and thresholds concatenated), and select the mean square error as the optimization objective function (a sketch of such a fitness function follows this list).
Step 3: set the algorithm stopping criterion, and use the atom search optimization algorithm to optimize the weight and threshold parameters of the neural network.
Step 4: assign the optimized weight and threshold parameters to the BP neural network.
Step 5: train and test the optimized BP neural network, and carry out error analysis and accuracy comparison against the unoptimized BP neural network.
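As a minimal sketch of the objective function in Step 2, the decision vector can be decoded into network parameters and scored by the mean square error. The function name BP_Fitness, the argument names, and the plain-MATLAB network evaluation below are illustrative assumptions, not part of the original source:

function mse=BP_Fitness(x,P,T,InNum,HidNum,OutNum)
% Decode the decision vector x (1-by-D row) into BP weights/thresholds
% and return the MSE. P: InNum-by-Q training inputs; T: OutNum-by-Q targets.
W1=reshape(x(1:HidNum*InNum),HidNum,InNum);            % Input-to-hidden weights.
b1=x(HidNum*InNum+1:HidNum*InNum+HidNum)';             % Hidden thresholds.
ofs=HidNum*InNum+HidNum;
W2=reshape(x(ofs+1:ofs+OutNum*HidNum),OutNum,HidNum);  % Hidden-to-output weights.
b2=x(ofs+OutNum*HidNum+1:end)';                        % Output thresholds.
H=1./(1+exp(-(W1*P+b1)));                              % Sigmoid hidden layer (implicit expansion, R2016b+).
Y=W2*H+b2;                                             % Linear output layer.
mse=mean((T(:)-Y(:)).^2);                              % Mean square error.
end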

4 Demonstration code

%--------------------------------------------------------------------------
% Atom Search Optimization.
function [X_Best,Fit_XBest,Function_Best]=ASO(alpha,beta,Fun_Index,Atom_Num,Max_Iteration)

% Dim: Dimension of search space.
% Atom_Pop: Population (position) of atoms.
% Atom_V:  Velocity of atoms.
% Acc: Acceleration of atoms.
% M: Mass of atoms. 
% Atom_Num: Number of atom population.
% Fitness: Fitness of atoms.
% Max_Iteration: Maximum of iterations.
% X_Best: Best solution (position) found so far. 
% Fit_XBest: Best result corresponding to X_Best. 
% Function_Best: The best fitness value over the iterations. 
% Low: The low bound of search space.
% Up: The up bound of search space.
% alpha: Depth weight.
% beta: Multiplier weight.

% Use the recommended default weights when they are not supplied.
if nargin < 2
    alpha=50;
    beta=0.2;
end


   Iteration=1;
   [Low,Up,Dim]=Test_Functions_Range(Fun_Index); 
 
   % Randomly initialize positions and velocities of atoms.
     if size(Up,2)==1
         Atom_Pop=rand(Atom_Num,Dim).*(Up-Low)+Low;
         Atom_V=rand(Atom_Num,Dim).*(Up-Low)+Low;
     end
   
     if size(Up,2)>1
        for i=1:Dim
           Atom_Pop(:,i)=rand(Atom_Num,1).*(Up(i)-Low(i))+Low(i);
           Atom_V(:,i)=rand(Atom_Num,1).*(Up(i)-Low(i))+Low(i);
        end
     end

 % Compute the function fitness of each atom (preallocated for speed).
     Fitness=zeros(1,Atom_Num);
     for i=1:Atom_Num
       Fitness(i)=Test_Functions(Atom_Pop(i,:),Fun_Index,Dim);
     end
       Function_Best=zeros(Max_Iteration,1);
       [Best_Fitness,Index]=min(Fitness);
       Function_Best(1)=Best_Fitness;
       X_Best=Atom_Pop(Index,:);
     
 % Calculate acceleration.
 Atom_Acc=Acceleration(Atom_Pop,Fitness,Iteration,Max_Iteration,Dim,Atom_Num,X_Best,alpha,beta);


 % Main iteration loop.
 for Iteration=2:Max_Iteration 
           Function_Best(Iteration)=Function_Best(Iteration-1);
           % Velocity update with random inertia, then position update.
           Atom_V=rand(Atom_Num,Dim).*Atom_V+Atom_Acc;
           Atom_Pop=Atom_Pop+Atom_V;     
    
    
         for i=1:Atom_Num
       % Relocate atom out of range.  
           TU= Atom_Pop(i,:)>Up;
           TL= Atom_Pop(i,:)<Low;
           Atom_Pop(i,:)=(Atom_Pop(i,:).*(~(TU+TL)))+((rand(1,Dim).*(Up-Low)+Low).*(TU+TL));
           % Evaluate the relocated atom. 
           Fitness(i)=Test_Functions(Atom_Pop(i,:),Fun_Index,Dim);
         end
        [Best_Fitness,Index]=min(Fitness);      
     
        if Best_Fitness<Function_Best(Iteration)
             Function_Best(Iteration)=Best_Fitness;
             X_Best=Atom_Pop(Index,:);
        else
             % No improvement: reinsert the best-so-far atom at a random index.
             r=fix(rand*Atom_Num)+1;
             Atom_Pop(r,:)=X_Best;
        end
     
      % Calculate acceleration.
       Atom_Acc=Acceleration(Atom_Pop,Fitness,Iteration,Max_Iteration,Dim,Atom_Num,X_Best,alpha,beta);
 end

Fit_XBest=Function_Best(Iteration); 
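The routine above depends on the external helpers Test_Functions, Test_Functions_Range, and Acceleration, which are not shown here. Assuming they are on the MATLAB path, an illustrative call might look like this (the parameter values are arbitrary):

% Illustrative call: alpha=50 (depth weight), beta=0.2 (multiplier weight),
% benchmark function index 1, 50 atoms, 1000 iterations.
[X_Best,Fit_XBest,Function_Best]=ASO(50,0.2,1,50,1000);
plot(Function_Best);                     % Convergence curve.
xlabel('Iteration'); ylabel('Best fitness');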

5 Simulation results

