1. Foreword
1.1 Gmapping algorithm
1.1.1 Introduction
Taking the laser SLAM algorithm gmapping as an example, this article introduces the prerequisites, workflow, and principles of building an environment map with a mobile robot. In practice, using gmapping to build a map is straightforward; it generally breaks down into understanding the algorithm, installing it, tuning its parameters, running it, and saving the resulting map.
Gmapping is a SLAM algorithm based on a 2D lidar that uses RBPF (Rao-Blackwellized Particle Filter) to build a 2D occupancy grid map.
1.1.2 Topic subscription and publication
The topics and services of the gmapping package are shown in the figure below:
As the figure shows, gmapping subscribes to the coordinate-transform topic /tf and the lidar scan topic /scan, and publishes the occupancy grid map topic /map.
1.1.3 TF transformation details
The /tf transform is divided into two parts. The TF tree used by the gmapping package is shown in the figure below:
In short, given its prerequisites (/tf, /odom, /scan), the gmapping algorithm can publish /map.
1.1.4 Gmapping pseudo code
1.1.5 Gmapping source code
The gmapping source code cannot be covered clearly in a single article; here we only outline the call flow of the algorithm's functions. A module-by-module analysis of the source code may follow in a future article.
1.2 Grid map creation
1.2.1 Map classification
There are three common map types for mobile robots: metric (scale) maps, topological maps, and semantic maps.
Metric map: has real physical dimensions, e.g. grid maps, feature maps, and point cloud maps; commonly used for map construction, localization, SLAM, and small-scale path planning.
Topological map: has no real physical scale; it only encodes the connectivity and distances between places, like a railway network; commonly used for very large-scale path planning.
Semantic map: a labeled metric map, widely regarded as the future of SLAM (the combination of SLAM and deep learning); commonly used in human-robot interaction.
This article focuses on metric maps.
1.2.2 Grid map construction algorithm
(1) Why use an occupancy grid mapping algorithm to build maps?
First, building a grid map requires a lidar sensor.
Second, lidar measurements are noisy; generally speaking, they are "not necessarily accurate".
For example, from the same pose at different times, the robot may measure inconsistent distances to a fixed obstacle: one frame reads 5 m, the next 5.1 m. Should the cell at 5 m or the one at 5.1 m be marked as an obstacle? This ambiguity is why the occupancy grid mapping algorithm is used.
(2) What is a grid map construction algorithm?
To solve this problem, we introduce the concept of the Occupancy Grid Map: the map is rasterized into cells, and each cell is in one of three states: occupied, free, or unknown (the initialization state).
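A minimal sketch of such a three-state grid (the type and field names here are illustrative assumptions, not taken from gmapping):

```cpp
#include <cstdint>
#include <vector>

// Each cell holds one of three states; every cell starts as Unknown.
enum class CellState : std::uint8_t { Unknown, Free, Occupied };

struct OccupancyGrid {
    int width, height;
    std::vector<CellState> cells;

    OccupancyGrid(int w, int h)
        : width(w), height(h), cells(w * h, CellState::Unknown) {}

    // Row-major access: cell (x, y) lives at index y * width + x
    CellState& at(int x, int y) { return cells[y * width + x]; }
};
```

Storing the states in one flat vector mirrors the `pMap[]` array used later in this article's source code.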
(3) Derivation of grid map construction algorithm
(4) Examples
First, assume l_occ = 0.9 and l_free = -0.7. Clearly, the larger a cell's accumulated value, the more strongly it indicates the cell is occupied; conversely, the smaller the value, the more strongly it indicates the cell is free. (This resolves the earlier problem that lidar observations are "not necessarily accurate".)
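A small sketch of this accumulation: a cell observed as occupied twice and as free once ends up at 0.9 + 0.9 - 0.7 = 1.1 > 0, i.e. leaning occupied even though one measurement disagreed. The function name is illustrative:

```cpp
#include <vector>

// Fold a sequence of observations (true = hit, false = miss) into one
// cell's log-odds value, starting from the unknown state (0).
double accumulateLogOdds(const std::vector<bool>& hits,
                         double l_occ = 0.9, double l_free = -0.7) {
    double l = 0.0; // 0 means unknown: equal odds of free and occupied
    for (bool hit : hits)
        l += hit ? l_occ : l_free;
    return l;
}
```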
The figure above shows the map being updated with two frames of laser scan data. In the result, the darker a cell, the more likely it is free, and the lighter, the more likely it is occupied. Note that the commonly used laser SLAM algorithms each have their own map conventions; none of them is right or wrong.
1.3 Lidar grid map
1.3.1 Algorithm core
Clearly, the whole derivation leads to the conclusion shown in the figure below. Here l_free and l_occ are assumed to be fixed values, generally one positive and one negative.
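The figure is not reproduced here, but the standard log-odds update rules it presumably shows are:

$$l_i \leftarrow l_i + l_{\mathrm{free}} \quad \text{if cell } i \text{ is traversed by the beam (free)}$$

$$l_i \leftarrow l_i + l_{\mathrm{occ}} \quad \text{if cell } i \text{ is hit by the beam (occupied)}$$

with $l_{\mathrm{occ}} > 0 > l_{\mathrm{free}}$, so hits push a cell's value up and misses push it down.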
Then each cell is judged against the lidar data: if the cell is determined to be free, the first formula is applied; if it is determined to be occupied, the second formula is applied. After many frames of lidar data have been processed, each cell holds an accumulated value; we can then set our own threshold and compare it against this value to decide the cell's final state.
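As a sketch, the integer form of this rule used later in this article's source code (cells default to 50, clamped to [0, 100], with log_free = -1 and log_occ = 2) can be written as:

```cpp
#include <algorithm>

// Clamped log-odds update for one cell value.
// Free observations subtract 1 (floor at 0); hits add 2 (cap at 100).
int updateCell(int data, bool hit, int log_free = -1, int log_occ = 2) {
    data += hit ? log_occ : log_free;
    return std::min(100, std::max(0, data));
}
```

After processing, a threshold such as "data > 50 means occupied" can classify each cell; the exact threshold is a design choice.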
1.3.2 Algorithm input data
Lidar data packet (each scan point contains an angle, with counterclockwise as the positive direction, and a distance; each frame of laser data contains several scan points, and the packet contains several frames of lidar data)
Carrier pose data packet (each pose contains the carrier's position and heading angle in the world coordinate system; the initial heading coincides with the positive x axis of the world frame, with counterclockwise as the positive direction)
Map parameters (the height and width of the map to be built, the map origin, the values of l_free and l_occ, and the map resolution)
It is assumed that the lidar coordinate system coincides with the carrier coordinate system.
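The three inputs listed above might be shaped as follows; the struct and field names here are illustrative assumptions (the actual code uses `GeneralLaserScan` for scans and `Eigen::Vector3d` for poses):

```cpp
#include <vector>

// One frame of lidar data: parallel arrays of angles and ranges.
struct LaserScanFrame {
    std::vector<double> angle_readings;  // radians, counterclockwise positive
    std::vector<double> range_readings;  // meters, one per angle
};

// One carrier pose in the world coordinate system.
struct RobotPose {
    double x, y;   // position
    double theta;  // heading, counterclockwise from the world x axis
};

// Parameters of the map to be built.
struct MapParams {
    int width, height;          // map size in cells
    double origin_x, origin_y;  // world coordinates of the map origin
    double resolution;          // meters per cell, e.g. 0.05
    int log_free, log_occ;      // update increments, one negative, one positive
};
```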
1.3.3 Algorithm flow
Taking the processing of the first lidar frame as an example, the flow is introduced from top to bottom.
(1) Read a frame of lidar data and the robot pose corresponding to the frame
```cpp
// Obtain the lidar scan and the robot pose for this frame
GeneralLaserScan scan = scans[i];
Eigen::Vector3d robotPose = robot_poses[i];
```
(2) Calculate the grid sequence number of the robot position in this frame
```cpp
// Get the grid index of the robot pose in this frame
GridIndex robotIndex = ConvertWorld2GridIndex(robotPose(0), robotPose(1));
```
That is, convert from the world coordinate system to the grid coordinate system; each grid index consists of two numbers, x and y. The conversion depends on the map resolution: for example, with a resolution of 0.05, 1 m is represented by 20 cells.
For example, the robot position (1,1) in the world coordinate system corresponds to (20,20) in the grid coordinate system.
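A minimal sketch of such a conversion, assuming the map origin sits at world (0, 0); the real `ConvertWorld2GridIndex` also adds an index offset so that negative world coordinates can be covered:

```cpp
#include <cmath>

struct GridIndex { int x, y; };

// Convert a world position (meters) to a grid index at the given resolution.
GridIndex convertWorldToGrid(double wx, double wy, double resolution = 0.05) {
    return { static_cast<int>(std::floor(wx / resolution + 0.5)),
             static_cast<int>(std::floor(wy / resolution + 0.5)) };
}
```

With resolution 0.05, world position (1, 1) maps to grid index (20, 20), matching the example above.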
Note: distinguish the world coordinate system from the pixel coordinate system. Their y axes point in opposite directions; everything else matches. The pixel (grid) coordinate system is used for map display.
(3) Traverse all scan points of this frame's lidar data and perform the following operations
- Step 1
Compute, for each laser scan point, the grid index of its cell in the pixel coordinate system
```cpp
// The "world" frame defined here is not the true world frame but the pixel
// coordinate system, whose y axis is flipped; hence the minus sign on laser_y
double laser_x = dist * cos(theta + angle);
double laser_y = -dist * sin(theta + angle);
// Position of the laser scan point in the (pixel) world coordinate system
double world_x = laser_x + robotPose(0);
double world_y = laser_y + robotPose(1);
// Convert that position into a grid index
GridIndex mapIndex = ConvertWorld2GridIndex(world_x, world_y);
```
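This projection can be sketched as a standalone function (the name and output parameters are illustrative):

```cpp
#include <cmath>

// Project one scan point (dist, angle) into the pixel-style world frame.
// The minus sign on laser_y flips the y axis from the counterclockwise
// world frame to the pixel frame.
void scanPointToWorld(double dist, double angle,
                      double robot_x, double robot_y, double theta,
                      double& world_x, double& world_y) {
    double laser_x = dist * std::cos(theta + angle);
    double laser_y = -dist * std::sin(theta + angle);
    world_x = laser_x + robot_x;
    world_y = laser_y + robot_y;
}
```

For instance, a point 3 m straight ahead of a robot at (1, 1) with heading 0 lands at world position (4, 1).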
- Step 2
Draw a line from the grid index of the current robot pose to the grid index of the laser scan point, to find all free cells in between
```cpp
// Trace a line from the robot's cell to the scan point's cell
// Purpose: find the free cells between the two points and store their
// indices in a std::vector<GridIndex>
std::vector<GridIndex> freeIndex = TraceLine(robotIndex.x, robotIndex.y, mapIndex.x, mapIndex.y);
```
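`TraceLine` is typically implemented with Bresenham's line algorithm. A sketch under that assumption, excluding the endpoint cell (which is updated separately as occupied):

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Collect the cells on the line from (x0, y0) up to, but not including,
// (x1, y1), using Bresenham's integer line stepping.
std::vector<std::pair<int,int>> traceLine(int x0, int y0, int x1, int y1) {
    std::vector<std::pair<int,int>> cells;
    int dx = std::abs(x1 - x0), dy = std::abs(y1 - y0);
    int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
    int err = dx - dy;
    int x = x0, y = y0;
    while (x != x1 || y != y1) {
        cells.push_back({x, y});
        int e2 = 2 * err;
        if (e2 > -dy) { err -= dy; x += sx; }
        if (e2 <  dx) { err += dx; y += sy; }
    }
    return cells;
}
```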
- Step 3
Traverse all free cells and update their states
```cpp
data += mapParams.log_free; // log_free = -1, so data decreases
```
- Step 4
Update the state of the cell hit by this laser scan point
```cpp
data += mapParams.log_occ; // log_occ = 2, so data increases
```
1.3.4 Grid map construction code
```cpp
// Occupancy grid mapping algorithm
// Input: lidar data and robot pose data
// Purpose: traverse all frames and assign new values in pMap[] to every free
// cell passed through and every hit cell; the computation in the middle is the
// practical implementation of occupancy grid mapping
void OccupanyMapping(std::vector<GeneralLaserScan>& scans, std::vector<Eigen::Vector3d>& robot_poses)
{
  std::cout << "Scans Size:" << scans.size() << std::endl;
  std::cout << "Poses Size:" << robot_poses.size() << std::endl;

  // Traverse all lidar data frames
  for(int i = 0; i < scans.size(); i++)
  {
    // Obtain the lidar scan and the robot pose for this frame
    GeneralLaserScan scan = scans[i];
    Eigen::Vector3d robotPose = robot_poses[i];

    // Get the grid index of the robot pose in this frame
    GridIndex robotIndex = ConvertWorld2GridIndex(robotPose(0), robotPose(1));

    // Check whether the robot's grid index lies within the map bounds we set
    if(isValidGridIndex(robotIndex) == false) continue;

    // Traverse all scan points of this frame's lidar data
    for(int id = 0; id < scan.range_readings.size(); id++)
    {
      // Take out the distance and angle of this scan point
      double dist = scan.range_readings[id];
      double angle = scan.angle_readings[id];

      // Skip abnormal readings without processing them
      if(std::isinf(dist) || std::isnan(dist)) continue;

      // Robot heading: angle between the robot's x axis and the world x axis
      double theta = robotPose(2);

      // Coordinates in the lidar frame rotated parallel to the world (pixel)
      // frame. The "world" frame defined here is not the true world frame but
      // the pixel coordinate system, whose y axis is flipped; hence the minus
      // sign on laser_y. The lidar angle and the robot heading both increase
      // counterclockwise.
      double laser_x = dist * cos(theta + angle);
      double laser_y = -dist * sin(theta + angle);

      // Position of the scan point in the (pixel) world coordinate system
      double world_x = laser_x + robotPose(0);
      double world_y = laser_y + robotPose(1);

      // Convert that position into a grid index
      GridIndex mapIndex = ConvertWorld2GridIndex(world_x, world_y);

      // Skip scan points that fall outside the 900x900 map we set
      if(isValidGridIndex(mapIndex) == false) continue;

      // Trace a line from the robot's cell to the scan point's cell
      // Purpose: find the free cells between the two points and store their
      // indices in a std::vector<GridIndex>
      std::vector<GridIndex> freeIndex = TraceLine(robotIndex.x, robotIndex.y, mapIndex.x, mapIndex.y);

      // Traverse all free cells the laser beam passes through
      for(int k = 0; k < freeIndex.size(); k++)
      {
        GridIndex tmpIndex = freeIndex[k];
        // Convert the grid index to a linear array index used to store
        // each cell's data
        int linearIndex = GridIndexToLinearIndex(tmpIndex);
        // Take out the value stored for this cell
        int data = pMap[linearIndex];
        // Free-cell rule: data += mapParams.log_free
        if(data > 0) // default data = 50
          data += mapParams.log_free; // log_free = -1, so data decreases
        else
          data = 0;
        // Write the new value back to the free cell, with a minimum of 0
        pMap[linearIndex] = data;
      }

      // Update the cell hit by the laser scan point
      int tmpIndex = GridIndexToLinearIndex(mapIndex);
      int data = pMap[tmpIndex];
      // Hit-cell rule: data += mapParams.log_occ
      if(data < 100) // default data = 50
        data += mapParams.log_occ; // log_occ = 2, so data increases
      else
        data = 100;
      // Write the new value back to the hit cell, with a maximum of 100
      pMap[tmpIndex] = data;
      // At this point, the free cells and hit cell of one scan point in one
      // pose have been reassigned in pMap
    }
    // At this point, the free and hit cells of one frame of laser data in one
    // pose have been reassigned
  }
  // At this point, the free and hit cells of all frames of laser data have
  // been reassigned
}
```
Reference link
- https://blog.csdn.net/zhao_ke_xue/article/details/109040676