Although the code is implemented in Java, Yuanbo's analysis of the problem is very clear:

https://www.cnblogs.com/nce4/p/10044871.html

## 1. Introduction to the particle swarm optimization algorithm

Particle swarm optimization (PSO), proposed by Kennedy and Eberhart in 1995, is an evolutionary algorithm designed by simulating the foraging behavior of bird flocks.

Basic idea:

Starting from a set of random solutions, the algorithm searches for the optimal solution through iteration, evaluating the quality of each solution by its fitness.

Scene setting:

A flock of birds is randomly searching for food. There is only one piece of food in the area. None of the birds knows where the food is, but each bird knows how far it is from the food. So what is the best strategy for finding the food? The simplest and most effective one is to search the area surrounding the bird nearest to the food.

Some concepts of particle swarm optimization:

- Particle: a bird in the scene;
- Population: the flock of birds in the scene;
- Position: the current position of a particle (bird);
- Velocity: the flying speed of a bird, i.e., the moving speed of a particle;
- Fitness: an evaluation of the distance between the bird and the food, i.e., between the particle and the target.

## 2. Algorithm analysis

Algorithm flow

Process description

1. First, randomly generate particles to form a population; the number of particles (the population size) can be controlled;

2. Calculate the fitness value of each particle;

3. Update the velocity and position of each particle by comparing its current fitness value with pBest (the best value this particle has found in previous generations) and gBest (the best value the whole population has found in previous generations);

4. Check whether the exit condition is met (the iteration limit is reached, or the error of the best solution is within a set threshold); if not, go back to step 2.
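Sketched in Java on a toy one-dimensional problem (minimizing f(x) = x²), the four steps above look like this; the parameter values (w = 0.5, c1 = c2 = 2) are common defaults, not values from the original post:

```java
import java.util.Random;

// Toy PSO minimizing f(x) = x * x, following steps 1-4 above.
public class SimplePso {
    static double fitness(double x) {
        return x * x;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        int scale = 20, maxGen = 100;          // population size, iteration limit
        double w = 0.5, c1 = 2.0, c2 = 2.0;    // inertia factor, learning factors
        double[] x = new double[scale], v = new double[scale];
        double[] pBest = new double[scale], pBestFit = new double[scale];
        double gBest = 0, gBestFit = Double.MAX_VALUE;

        // Step 1: randomly generate particles to form a population
        for (int i = 0; i < scale; i++) {
            x[i] = rnd.nextDouble() * 20 - 10;
            v[i] = rnd.nextDouble() * 2 - 1;
            pBest[i] = x[i];
            pBestFit[i] = fitness(x[i]);
            if (pBestFit[i] < gBestFit) { gBestFit = pBestFit[i]; gBest = pBest[i]; }
        }

        // Step 4: loop until the iteration limit is reached
        for (int t = 0; t < maxGen; t++) {
            for (int i = 0; i < scale; i++) {
                // Step 3: velocity and position update
                v[i] = w * v[i]
                     + c1 * rnd.nextDouble() * (pBest[i] - x[i])
                     + c2 * rnd.nextDouble() * (gBest - x[i]);
                x[i] += v[i];
                // Step 2: evaluate fitness, update pBest and gBest
                double f = fitness(x[i]);
                if (f < pBestFit[i]) { pBestFit[i] = f; pBest[i] = x[i]; }
                if (f < gBestFit) { gBestFit = f; gBest = x[i]; }
            }
        }
        System.out.println("best fitness found: " + gBestFit);
    }
}
```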

Update of speed and position

The core of the particle swarm optimization algorithm is updating the velocity and position of each particle.

Speed update

v: The current velocity of the particle; w is the inertia factor; Position is the current position of the particle;

pBest is the best position of the current particle in history; gBest is the best position in the population; c1 and c2 are learning factors, learning from pBest and gBest respectively.

Three-part interpretation of the velocity update

w*v: the inertia part; the particle keeps flying along its current speed and direction without deviation. Without the inertia part, particles would quickly move toward pBest and gBest and easily fall into local optima. With inertia, particles tend to explore the space freely and can find the optimum over the whole space.

c1 * rand() * (pBest - position): the self-cognition part; the particle is drawn back toward the best position in its own history. Without this part, particles would quickly move toward gBest and easily fall into a local optimum.

c2 * rand() * (gBest - position): the social-cognition part; the particle learns from the best position found by the population. Without this part, each particle would move only toward its own pBest and settle into its own optimum, so the process as a whole would not converge.

Position update

position = position + v

After the velocity update, each particle simply moves by its new velocity.

## 3. The TSP problem

TSP (the traveling salesman problem), also translated as the traveling merchant problem or the salesman problem, is one of the famous problems in mathematics. Suppose a traveling salesman wants to visit n cities; he must choose a path subject to the restriction that each city is visited exactly once and that he returns to the starting city at the end. The goal is to find the path with the minimum total distance among all such paths.

The TSP is a combinatorial optimization problem and an NP-complete (NPC) problem. It falls into two categories: symmetric TSP and asymmetric TSP.
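To make the objective concrete, here is a brute-force sketch that enumerates every tour of a tiny made-up symmetric instance; this is feasible only for very small n, which is why heuristics such as PSO are used:

```java
// Brute-force TSP: enumerate all tours starting from city 0 and keep the
// shortest. The distance matrix used in testing is a made-up example.
public class BruteForceTsp {
    // Returns the length of the shortest closed tour over all cities.
    static double best(double[][] d) {
        int n = d.length;
        int[] rest = new int[n - 1];           // cities to permute (all but city 0)
        for (int i = 1; i < n; i++) rest[i - 1] = i;
        return permute(d, rest, 0, Double.MAX_VALUE);
    }

    // Recursively generate every ordering of rest[k..] by swapping.
    static double permute(double[][] d, int[] rest, int k, double best) {
        if (k == rest.length) {
            double len = d[0][rest[0]];
            for (int i = 0; i + 1 < rest.length; i++) len += d[rest[i]][rest[i + 1]];
            len += d[rest[rest.length - 1]][0];  // return to the start city
            return Math.min(best, len);
        }
        for (int i = k; i < rest.length; i++) {
            swap(rest, k, i);
            best = permute(d, rest, k + 1, best);
            swap(rest, k, i);                    // restore before the next branch
        }
        return best;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}
```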

## 4. Solving the TSP with the particle swarm optimization algorithm

Implementation of the algorithm

Representation of a particle: a solution of the TSP, i.e., a sequence of cities, represents a particle;

Representation of velocity: a swap sequence (a list of exchange operations on the city sequence) represents the velocity of a particle.

Definition of the fitness function: the path length of the current sequence is the fitness value, computed from longitude and latitude coordinates.
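The post does not show its distance function, so as an illustration, a great-circle (haversine) helper is one reasonable way to turn latitude/longitude coordinates into a path length; the Earth radius and the method names here are assumptions:

```java
// Sketch of a fitness helper: great-circle (haversine) distance between two
// cities given as (latitude, longitude) in degrees. The exact formula the
// original post uses is not shown, so haversine is an assumption here.
public class GeoDistance {
    static final double EARTH_RADIUS_KM = 6371.0;

    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    // Fitness of a tour: total length of the closed path over all cities,
    // where coords[i] = {latitude, longitude} of city i.
    static double tourLength(double[][] coords, int[] path) {
        double total = 0;
        for (int i = 0; i < path.length; i++) {
            int a = path[i], b = path[(i + 1) % path.length];
            total += haversineKm(coords[a][0], coords[a][1], coords[b][0], coords[b][1]);
        }
        return total;
    }
}
```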

Definition of the inertia factor: the particle's own swap sequence serves as the inertia part.
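The swap-sequence representation above can be sketched as follows; the post's SO class is not shown, so this minimal pair-of-indices version, and the exact direction of `minus` (building the swaps that turn one path into another), are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the swap-operator representation. An SO is a pair of
// positions to swap; a list of SOs is a "velocity". This version of `minus`
// builds a swap sequence that transforms path a into path b, and `exchange`
// applies such a sequence to a path.
public class SwapOps {
    record SO(int i, int j) {}

    // Swap sequence v = b - a: applying v to a copy of a yields b.
    static List<SO> minus(int[] a, int[] b) {
        int[] work = a.clone();                 // do not mutate the input
        List<SO> seq = new ArrayList<>();
        for (int i = 0; i < work.length; i++) {
            if (work[i] != b[i]) {
                int j = i + 1;
                while (work[j] != b[i]) j++;    // locate b[i] in the rest of work
                seq.add(new SO(i, j));
                int tmp = work[i]; work[i] = work[j]; work[j] = tmp;
            }
        }
        return seq;
    }

    // Apply a swap sequence to a path in place.
    static void exchange(int[] path, List<SO> seq) {
        for (SO so : seq) {
            int tmp = path[so.i()]; path[so.i()] = path[so.j()]; path[so.j()] = tmp;
        }
    }
}
```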

Java code implementation

Update of speed and position

Update formula: Vii = w*Vi + ra*(Pid - Xid) + rb*(Pgd - Xid)

```java
private void evolution() {
    for (int t = 0; t < MAX_GEN; t++) {
        for (int k = 0; k < scale; k++) {
            ArrayList<SO> vii = new ArrayList<>();

            // Part 1: inertia, keep part of the particle's own swap sequence
            int len = (int) (w * listV.get(k).size());
            for (int i = 0; i < len; i++) {
                vii.add(listV.get(k).get(i));
            }

            // Part 2: self cognition, swap sequence toward the particle's own best
            // ra * (Pid - Xid)
            ArrayList<SO> a = minus(mUnits.get(k).getPath(), Pd.get(k).getPath());
            float ra = random.nextFloat();
            len = (int) (ra * a.size());
            for (int i = 0; i < len; i++) {
                vii.add(a.get(i));
            }

            // Part 3: social cognition, swap sequence toward the global best
            // rb * (Pgd - Xid)
            ArrayList<SO> b = minus(mUnits.get(k).getPath(), Pgd.getPath());
            float rb = random.nextFloat();
            len = (int) (rb * b.size());
            for (int i = 0; i < len; i++) {
                vii.add(b.get(i));
            }

            // Store the new velocity at index k, so velocities stay aligned
            // with their particles while the loop still reads listV.get(k)
            listV.set(k, vii);

            // Apply the swap sequence to move the particle to its next position
            exchange(mUnits.get(k).getPath(), vii);
        }

        // Update fitness values, pBest and gBest
        for (int i = 0; i < scale; i++) {
            mUnits.get(i).upDateFitness();
            if (Pd.get(i).getFitness() > mUnits.get(i).getFitness()) {
                Pd.put(i, mUnits.get(i));
            }
            if (Pgd.getFitness() > Pd.get(i).getFitness()) {
                Pgd = Pd.get(i);
                bestT = t;
            }
        }
    }
}
```