LeGO-LOAM analysis: point cloud segmentation

Reprinted from: https://zhuanlan.zhihu.com/p/382460472
LeGO-LOAM is a lidar SLAM algorithm. The corresponding paper is "LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain", and the code is open source.

Next, we will give a brief analysis of LeGO-LOAM by combining the paper and the code. LeGO-LOAM is divided into four processes, and the code structure roughly follows these four processes, so we will introduce the algorithm in four chapters as well. This chapter first introduces the overall pipeline of LeGO-LOAM and then focuses on point cloud segmentation.

LeGO-LOAM introduction

At first, I thought the "LeGO" in LeGO-LOAM referred to LEGO bricks, as if the SLAM algorithm could be assembled the way LEGO blocks are. It was only after reading the paper that it became clear: "Le" stands for Lightweight and "GO" stands for Ground-Optimized. In other words, LeGO-LOAM is a lightweight, ground-optimized lidar SLAM algorithm.

The hardware platform adopted by LeGO-LOAM is the Jackal UGV, and the overall system architecture is shown in the figure below.

LeGO-LOAM is divided into four steps:

  1. Segmentation (filter out noise)
  2. Feature Extraction (distinctive planar and edge features)
  3. Lidar Odometry (Label Matching, Two-step L-M Optimization)
  4. Lidar Mapping (pose-graph SLAM)

First, the input raw point cloud is segmented: the ground is detected and the remaining points are clustered. Planar and edge features are then extracted from the segmented point cloud. These features are matched to output the pose, and finally the point clouds are registered to build a global map, with loop closure detection used to optimize the generated map.

Next, let's introduce point cloud segmentation.

Point cloud segmentation

The main flow of point cloud segmentation is: first extract the ground, then cluster the remaining points, and finally pass the segmented point cloud to the next step, feature extraction.

Since the robot may operate in a complex environment, small objects (such as leaves) can form trivial and unreliable features, because the same leaves are unlikely to be seen in two consecutive scans. To improve efficiency and reliability, clusters with fewer than 30 points are discarded during segmentation (in fact, the code also keeps clusters with at least 5 points that span at least 3 scan lines).

After this process, only points that are likely to represent large objects, such as tree trunks and ground points, are retained for further processing. In the figure below, (a) is the original point cloud and (b) is the result of segmentation: red represents the ground, and the remaining points are the segmented clusters. You can see that a lot of noise has been removed.

Code analysis

The code for point cloud segmentation is in "LeGO-LOAM/src/imageProjection.cpp". Each of the four stages of LeGO-LOAM runs as an independent ROS node, and the entry function is as follows

int main(int argc, char** argv){

    ros::init(argc, argv, "lego_loam");
    
    ImageProjection IP;   // Work flow

    ROS_INFO("\033[1;32m---->\033[0m Image Projection Started.");

    ros::spin();
    return 0;
}

The main workflow actually lives in "ImageProjection IP", so why does the node work without any explicit function call? The answer is that the constructor of the ImageProjection class sets up the subscribers and their callbacks. As soon as the IP object is created, the constructor runs and the publish/subscribe machinery starts; when a point cloud message arrives, the point cloud processing callback is triggered.

Let's move on to the constructor

 ImageProjection():
        nh("~"){
// 1. Point cloud callback, main workflow
        subLaserCloud = nh.subscribe<sensor_msgs::PointCloud2>(pointCloudTopic, 1, &ImageProjection::cloudHandler, this);

// 2. Publishers for the ground cloud, segmented clouds, and related outputs
        pubFullCloud = nh.advertise<sensor_msgs::PointCloud2> ("/full_cloud_projected", 1);
        pubFullInfoCloud = nh.advertise<sensor_msgs::PointCloud2> ("/full_cloud_info", 1);

        pubGroundCloud = nh.advertise<sensor_msgs::PointCloud2> ("/ground_cloud", 1);
        pubSegmentedCloud = nh.advertise<sensor_msgs::PointCloud2> ("/segmented_cloud", 1);
        pubSegmentedCloudPure = nh.advertise<sensor_msgs::PointCloud2> ("/segmented_cloud_pure", 1);
        pubSegmentedCloudInfo = nh.advertise<cloud_msgs::cloud_info> ("/segmented_cloud_info", 1);
        pubOutlierCloud = nh.advertise<sensor_msgs::PointCloud2> ("/outlier_cloud", 1);

        nanPoint.x = std::numeric_limits<float>::quiet_NaN();
        nanPoint.y = std::numeric_limits<float>::quiet_NaN();
        nanPoint.z = std::numeric_limits<float>::quiet_NaN();
        nanPoint.intensity = -1;

// 3. Allocate resources
        allocateMemory();
// 4. Initialization
        resetParameters();
    }

In the source code, resources are allocated in the constructor. Some are smart pointers, which release their resources automatically, but the four raw arrays (queueIndX, queueIndY, allPushedIndX, allPushedIndY) are never freed, which leaks memory. It would be better to release them in a destructor.
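A minimal sketch of such a destructor (assuming the four arrays are allocated with new[] in allocateMemory, as in the open-source code; this destructor is not part of the original source):

    // Hypothetical destructor: frees the four raw BFS arrays allocated in
    // allocateMemory(); the point cloud members are smart pointers and
    // clean up on their own
    ~ImageProjection(){
        delete[] queueIndX;
        delete[] queueIndY;
        delete[] allPushedIndX;
        delete[] allPushedIndY;
    }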

The real point cloud processing function is the point cloud callback "cloudHandler". Its flow is very clear and is divided into seven steps.

 void cloudHandler(const sensor_msgs::PointCloud2ConstPtr& laserCloudMsg){

        // 1. Convert ros message to pcl point cloud
        copyPointCloud(laserCloudMsg);
        // 2. Start and end angles of scanning
        findStartEndAngle();
        // 3. Range image projection
        projectPointCloud();
        // 4. Mark ground point cloud
        groundRemoval();
        // 5. Point cloud segmentation
        cloudSegmentation();
        // 6. Publish the segmented point clouds
        publishCloud();
        // 7. Reset the parameters for the next iteration
        resetParameters();
    }

Next, we analyze these seven steps one by one. The core steps are "3. Range image projection, 4. Ground point marking, and 5. Point cloud segmentation".

1. copyPointCloud

This step converts the ROS message into a PCL point cloud. Velodyne lidars are treated specially here. Taking a 16-line lidar as an example, the sensor rotates and scans with 16 beams to obtain a 360° point cloud, and Velodyne lidars tag each point with which of the 16 lines it belongs to; this tag is called the ring index. For other lidars, the beam a point belongs to is computed from its vertical angle instead. The point cloud is therefore saved into "laserCloudIn" and, when the ring field is available, "laserCloudInRing".
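To read the ring field, the code relies on a custom PCL point type. A minimal sketch of such a type (the open-source code defines an equivalent one in utility.h; the exact layout here is illustrative):

    // Point type with a per-point ring index, registered with PCL so that
    // pcl::fromROSMsg can fill the "ring" field from the Velodyne driver
    struct PointXYZIR
    {
        PCL_ADD_POINT4D;                // x, y, z plus padding
        PCL_ADD_INTENSITY;              // intensity
        uint16_t ring;                  // which of the N_SCAN beams
        EIGEN_MAKE_ALIGNED_OPERATOR_NEW
    } EIGEN_ALIGN16;

    POINT_CLOUD_REGISTER_POINT_STRUCT (PointXYZIR,
        (float, x, x) (float, y, y) (float, z, z)
        (float, intensity, intensity) (uint16_t, ring, ring)
    )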

 void copyPointCloud(const sensor_msgs::PointCloud2ConstPtr& laserCloudMsg){
        // 1. Read ROS point cloud and convert it to PCL point cloud
        cloudHeader = laserCloudMsg->header;
        // cloudHeader.stamp = ros::Time::now(); // Ouster lidar users may need to uncomment this line
        pcl::fromROSMsg(*laserCloudMsg, *laserCloudIn);
        // 2. Remove NaN points from the point cloud
        std::vector<int> indices;
        pcl::removeNaNFromPointCloud(*laserCloudIn, *laserCloudIn, indices);
        // 3. If the "ring" of the point cloud passes, it is saved as lasercloud Inring
        if (useCloudRing == true){
            pcl::fromROSMsg(*laserCloudMsg, *laserCloudInRing);
            if (laserCloudInRing->is_dense == false) {
                ROS_ERROR("Point cloud is not in dense format, please remove NaN points first!");
                ros::shutdown();
            }  
        }
    }

2. findStartEndAngle

This step finds the start angle and end angle of one full rotation of the lidar. The chosen start point is points[0] and the chosen end point is points[laserCloudIn->points.size() - 1]. Does that mean the points are assumed to arrive in scan order? Note also that the last point does not necessarily have the maximum angle.

In addition, note the sign convention in the atan2 calls (the angle is negated so that it increases along the scan direction). The results of this step will be used in feature extraction.

void findStartEndAngle(){
        // start and end orientation of this cloud
        segMsg.startOrientation = -atan2(laserCloudIn->points[0].y, laserCloudIn->points[0].x);
        segMsg.endOrientation   = -atan2(laserCloudIn->points[laserCloudIn->points.size() - 1].y,
                                                     laserCloudIn->points[laserCloudIn->points.size() - 1].x) + 2 * M_PI;
        if (segMsg.endOrientation - segMsg.startOrientation > 3 * M_PI) {
            segMsg.endOrientation -= 2 * M_PI;
        } else if (segMsg.endOrientation - segMsg.startOrientation < M_PI)
            segMsg.endOrientation += 2 * M_PI;
        segMsg.orientationDiff = segMsg.endOrientation - segMsg.startOrientation;
    }
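As a quick sanity check of the normalization (the numbers are illustrative): if the first point is at azimuth 10°, startOrientation = −10°; if the last point is at azimuth 5°, endOrientation = −5° + 360° = 355°. The difference is 365°, which lies inside (180°, 540°), so neither branch fires, and orientationDiff ≈ 365°, i.e. roughly one full revolution (2π). The two corrections only kick in when the raw difference leaves that interval.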

3. projectPointCloud

In this step, the point cloud is projected onto a range image: the raw point cloud covers a sphere around the sensor, and we project that sphere onto a cylinder and unroll it. The figure below is a simple example.

Taking the 16-line lidar as an example, the range image has 16 rows and 1800 columns, so the final image is 16 × 1800. The image can be stored as an array: the row and column come from each point's vertical and horizontal angles, and the range becomes the pixel value. The resulting range image, shown in the figure below, looks similar to a depth map.
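As a concrete example, using common VLP-16 parameters (ang_res_x = 0.2°, ang_res_y = 2.0°, ang_bottom = 15.1°; these values are assumed from the open-source utility.h): a point with vertical angle −13° falls in row (−13 + 15.1) / 2.0 ≈ 1, and sweeping 360° at 0.2° per column yields exactly the 1800 columns.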

After understanding the principle, let's start analyzing the code

void projectPointCloud(){
        // range image projection
        float verticalAngle, horizonAngle, range;
        size_t rowIdn, columnIdn, index, cloudSize; 
        PointType thisPoint;

        cloudSize = laserCloudIn->points.size();
        // 1. Traverse point cloud
        for (size_t i = 0; i < cloudSize; ++i){
            thisPoint.x = laserCloudIn->points[i].x;
            thisPoint.y = laserCloudIn->points[i].y;
            thisPoint.z = laserCloudIn->points[i].z;
            // 2.1 if there is a ring index, use it directly
            if (useCloudRing == true){
                rowIdn = laserCloudInRing->points[i].ring;
            }
            else{
                // 2.2 if not, the index is obtained by calculating the angle
                verticalAngle = atan2(thisPoint.z, sqrt(thisPoint.x * thisPoint.x + thisPoint.y * thisPoint.y)) * 180 / M_PI;
                rowIdn = (verticalAngle + ang_bottom) / ang_res_y;
            }
            if (rowIdn < 0 || rowIdn >= N_SCAN)
                continue;

            horizonAngle = atan2(thisPoint.x, thisPoint.y) * 180 / M_PI;
            // 3. Calculate the column index
            columnIdn = -round((horizonAngle-90.0)/ang_res_x) + Horizon_SCAN/2;
            if (columnIdn >= Horizon_SCAN)
                columnIdn -= Horizon_SCAN;

            if (columnIdn < 0 || columnIdn >= Horizon_SCAN)
                continue;
            // 4. Compute the range; points closer than sensorMinimumRange (1 m by default) are dropped, typically returns from the robot itself
            range = sqrt(thisPoint.x * thisPoint.x + thisPoint.y * thisPoint.y + thisPoint.z * thisPoint.z);
            if (range < sensorMinimumRange)
                continue;
            // 5. Save the distance to the matrix rangeMat
            rangeMat.at<float>(rowIdn, columnIdn) = range;
            // 6. The intensity field is reused here to pack the row and column indices
            thisPoint.intensity = (float)rowIdn + (float)columnIdn / 10000.0;
            // 7. Save the point into the arrays: fullCloud keeps row/column in intensity, fullInfoCloud keeps the range in intensity
            index = columnIdn  + rowIdn * Horizon_SCAN;
            fullCloud->points[index] = thisPoint;
            fullInfoCloud->points[index] = thisPoint;
            fullInfoCloud->points[index].intensity = range; // the corresponding range of a point is saved as "intensity"
        }
    }
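Since later stages rely on this packing, here is a minimal sketch of how the row and column can be recovered from intensity (the helper name is made up for illustration):

    // Recover (row, column) from intensity = row + column / 10000.0
    void decodeRowColumn(float intensity, int& rowIdn, int& columnIdn){
        rowIdn = (int)intensity;                                // integer part
        columnIdn = (int)round((intensity - rowIdn) * 10000.0); // fractional part
    }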

4. groundRemoval

The following processing is all based on the range image. To detect the ground, only the rows with ring index below groundScanInd are checked: below 7 for a 16-line lidar, below 20 for a 32-line one. Prior knowledge of the lidar's mounting is used here.

The principle of ground detection is shown in the figure above: for a point A and its vertically adjacent point B in the range image, check whether the angle α between them is small enough. If it is, the points are judged to be ground. The threshold chosen here is 10°: if the angle is below 10°, it is ground.
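For a feel of the numbers (illustrative values): two vertically adjacent points 1 m apart horizontally with a 5 cm height difference give angle = atan2(0.05, 1.0) ≈ 2.9°, well below 10°, so they are marked as ground; points on a wall would instead give a nearly vertical difference vector and an angle close to 90°.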

void groundRemoval(){
        size_t lowerInd, upperInd;
        float diffX, diffY, diffZ, angle;
        // groundMat
        // -1, no valid info to check if ground or not
        //  0, initial value, after validation, means not ground
        //  1, ground
        // 1. Traverse the rows below groundScanInd
        for (size_t j = 0; j < Horizon_SCAN; ++j){
            for (size_t i = 0; i < groundScanInd; ++i){

                lowerInd = j + ( i )*Horizon_SCAN;
                upperInd = j + (i+1)*Horizon_SCAN;

                if (fullCloud->points[lowerInd].intensity == -1 ||
                    fullCloud->points[upperInd].intensity == -1){
                    // no info to check, invalid points
                    groundMat.at<int8_t>(i,j) = -1;
                    continue;
                }
                    
                diffX = fullCloud->points[upperInd].x - fullCloud->points[lowerInd].x;
                diffY = fullCloud->points[upperInd].y - fullCloud->points[lowerInd].y;
                diffZ = fullCloud->points[upperInd].z - fullCloud->points[lowerInd].z;
                // 2. Compute the angle between two vertically adjacent points
                angle = atan2(diffZ, sqrt(diffX*diffX + diffY*diffY) ) * 180 / M_PI;
                // 3. If within 10° of the sensor mount angle, mark both points as ground
                if (abs(angle - sensorMountAngle) <= 10){
                    groundMat.at<int8_t>(i,j) = 1;
                    groundMat.at<int8_t>(i+1,j) = 1;
                }
            }
        }
        // extract ground cloud (groundMat == 1)
        // mark entry that doesn't need to label (ground and invalid point) for segmentation
        // note that ground remove is from 0~N_SCAN-1, need rangeMat for mark label matrix for the 16th scan
        // 4. Ground points and points with no range measurement are marked -1 and excluded from segmentation
        for (size_t i = 0; i < N_SCAN; ++i){
            for (size_t j = 0; j < Horizon_SCAN; ++j){
                if (groundMat.at<int8_t>(i,j) == 1 || rangeMat.at<float>(i,j) == FLT_MAX){
                    labelMat.at<int>(i,j) = -1;
                }
            }
        }
        // 5. Publish the ground point cloud (only if someone subscribes)
        if (pubGroundCloud.getNumSubscribers() != 0){
            for (size_t i = 0; i <= groundScanInd; ++i){
                for (size_t j = 0; j < Horizon_SCAN; ++j){
                    if (groundMat.at<int8_t>(i,j) == 1)
                        groundCloud->push_back(fullCloud->points[j + i*Horizon_SCAN]);
                }
            }
        }
    }

5. cloudSegmentation

After removing the ground, the remaining points are segmented. A BFS (breadth-first search) is used here, implemented iteratively with a queue rather than recursion. Starting from point (0,0), every unlabeled point seeds a search that examines its four neighbors (up, down, left, right); if the angle between a point and its neighbor is greater than 60°, they are considered part of the same cluster. Finally, a segment is considered valid if it contains at least 30 points (segments with at least 5 points spanning at least 3 scan lines are also accepted).

cloudSegmentation is divided into two parts. The first, labelComponents, assigns labels by growing clusters outward from each unlabeled point.

  void cloudSegmentation(){
        // segmentation process
        for (size_t i = 0; i < N_SCAN; ++i)
            for (size_t j = 0; j < Horizon_SCAN; ++j)
                // 1. If the point is unlabeled, grow a cluster from it
                if (labelMat.at<int>(i,j) == 0)
                    labelComponents(i, j);

Let's first analyze labelComponents, and then continue to introduce cloudSegmentation.

The author states that using std::queue/std::vector/std::deque slowed the program down greatly. I am not sure whether this was caused by dynamic reallocation (when a vector outgrows its preallocated size, it allocates a larger buffer, typically twice as big, and copies the old contents over; the library does this automatically). With a fixed, reserved size I would not expect a significant slowdown. In any case, the author replaces the queue with four plain arrays, which reads a little awkwardly.
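For contrast, a minimal sketch of the same BFS written with std::queue (this is what the four arrays emulate; note that the array version also keeps the complete visit history in allPushedIndX/Y, which is needed later to invalidate failed segments):

    // Equivalent BFS skeleton using std::queue, for comparison only
    std::queue<std::pair<int, int> > q;
    q.push(std::make_pair(row, col));
    while (!q.empty()){
        std::pair<int, int> p = q.front();
        q.pop();
        // ... examine the four neighbors of (p.first, p.second) and push
        // those that pass the angle test, exactly as in the code below
    }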

The principle of point cloud segmentation is shown in the figure below. A and B are two consecutive points scanned by the lidar, and β is the angle between the line AB and the laser beam to the farther point. If β is large (greater than 60° in the code), the two points lie on a surface facing the sensor and are treated as the same object. Conversely, if β is small, the surface is nearly parallel to the beam, which typically indicates occlusion or separate objects, and the two points are treated as different objects. The point cloud is segmented with this criterion.
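For a quick numeric check (illustrative values, with α = 0.2°, the horizontal angular resolution): neighboring returns at 10.00 m and 9.95 m give β = atan2(9.95·sin 0.2°, 10.00 − 9.95·cos 0.2°) ≈ 35°, below 60°, so the points fall into different clusters; returns at 10.00 m and 9.99 m give β ≈ 74°, above 60°, so they join the same cluster. The flatter the surface facing the sensor, the closer β gets to 90°.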

void labelComponents(int row, int col){
        // use std::queue std::vector std::deque will slow the program down greatly
        float d1, d2, alpha, angle;
        int fromIndX, fromIndY, thisIndX, thisIndY; 
        bool lineCountFlag[N_SCAN] = {false};

        queueIndX[0] = row;
        queueIndY[0] = col;
        int queueSize = 1;
        int queueStartInd = 0;
        int queueEndInd = 1;

        allPushedIndX[0] = row;
        allPushedIndY[0] = col;
        int allPushedIndSize = 1;
        // 1. The initial value is [row, col], and the length is 1
        while(queueSize > 0){
            // Pop point
            fromIndX = queueIndX[queueStartInd];
            fromIndY = queueIndY[queueStartInd];
            --queueSize;
            ++queueStartInd;
            // Mark popped point
            labelMat.at<int>(fromIndX, fromIndY) = labelCount;
            // Loop through all the neighboring grids of popped grid
            // 2. Visit the four neighbors (up, down, left, right) and compute the angle
            for (auto iter = neighborIterator.begin(); iter != neighborIterator.end(); ++iter){
                // new index
                thisIndX = fromIndX + (*iter).first;
                thisIndY = fromIndY + (*iter).second;
                // index should be within the boundary
                if (thisIndX < 0 || thisIndX >= N_SCAN)
                    continue;
                // at range image margin (left or right side)
                if (thisIndY < 0)
                    thisIndY = Horizon_SCAN - 1;
                if (thisIndY >= Horizon_SCAN)
                    thisIndY = 0;
                // prevent infinite loop (caused by put already examined point back)
                if (labelMat.at<int>(thisIndX, thisIndY) != 0)
                    continue;

                d1 = std::max(rangeMat.at<float>(fromIndX, fromIndY), 
                              rangeMat.at<float>(thisIndX, thisIndY));
                d2 = std::min(rangeMat.at<float>(fromIndX, fromIndY), 
                              rangeMat.at<float>(thisIndX, thisIndY));
                // 2.1 pick the horizontal or vertical angular resolution depending on the neighbor direction
                if ((*iter).first == 0)
                    alpha = segmentAlphaX;
                else
                    alpha = segmentAlphaY;

                angle = atan2(d2*sin(alpha), (d1 -d2*cos(alpha)));
                // 2.2 if the angle is greater than 60°, the neighbor joins the cluster
                if (angle > segmentTheta){

                    queueIndX[queueEndInd] = thisIndX;
                    queueIndY[queueEndInd] = thisIndY;
                    ++queueSize;
                    ++queueEndInd;

                    labelMat.at<int>(thisIndX, thisIndY) = labelCount;
                    lineCountFlag[thisIndX] = true;

                    allPushedIndX[allPushedIndSize] = thisIndX;
                    allPushedIndY[allPushedIndSize] = thisIndY;
                    ++allPushedIndSize;
                }
            }
        }
        // check if this segment is valid
        bool feasibleSegment = false;
        // 3. A cluster with at least 30 points is considered valid
        if (allPushedIndSize >= 30)
            feasibleSegment = true;
        // 4. A cluster with at least 5 points spanning at least 3 scan lines is also valid
        else if (allPushedIndSize >= segmentValidPointNum){
            int lineCount = 0;
            for (size_t i = 0; i < N_SCAN; ++i)
                if (lineCountFlag[i] == true)
                    ++lineCount;
            if (lineCount >= segmentValidLineNum)
                feasibleSegment = true;            
        }
        // segment is valid, mark these points
        // 5. If valid, increment labelCount, which counts how many clusters have been found
        if (feasibleSegment == true){
            ++labelCount;
        }else{ // segment is invalid, mark these points
            // 6. If invalid, mark the points as 999999
            for (size_t i = 0; i < allPushedIndSize; ++i){
                labelMat.at<int>(allPushedIndX[i], allPushedIndY[i]) = 999999;
            }
        }
    }

Next, we continue with the rest of cloudSegmentation, which generates the segmented point cloud from the labels assigned above.

        int sizeOfSegCloud = 0;
        // extract segmented cloud for lidar odometry
        for (size_t i = 0; i < N_SCAN; ++i) {
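            // Note: the +5 / -5 on startRingIndex / endRingIndex leave a margin
            // at both ends of each row, since feature extraction later computes
            // curvature from the 5 neighbors on each side of a point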

            segMsg.startRingIndex[i] = sizeOfSegCloud-1 + 5;

            for (size_t j = 0; j < Horizon_SCAN; ++j) {
                if (labelMat.at<int>(i,j) > 0 || groundMat.at<int8_t>(i,j) == 1){
                    // outliers that will not be used for optimization (always continue)
                    // 1. If the label is 999999 (invalid), keep every 5th point above the ground rows as an outlier and skip the rest
                    if (labelMat.at<int>(i,j) == 999999){
                        if (i > groundScanInd && j % 5 == 0){
                            outlierCloud->push_back(fullCloud->points[j + i*Horizon_SCAN]);
                            continue;
                        }else{
                            continue;
                        }
                    }
                    // majority of ground points are skipped
                    // 2. For ground points, keep only every 5th column (downsampling)
                    if (groundMat.at<int8_t>(i,j) == 1){
                        if (j%5!=0 && j>5 && j<Horizon_SCAN-5)
                            continue;
                    }
                    // 3. What remains is either downsampled ground or a valid segmented point
                    // mark ground points so they will not be considered as edge features later
                    segMsg.segmentedCloudGroundFlag[sizeOfSegCloud] = (groundMat.at<int8_t>(i,j) == 1);
                    // mark the points' column index for marking occlusion later
                    segMsg.segmentedCloudColInd[sizeOfSegCloud] = j;
                    // save range info
                    segMsg.segmentedCloudRange[sizeOfSegCloud]  = rangeMat.at<float>(i,j);
                    // save seg cloud
                    segmentedCloud->push_back(fullCloud->points[j + i*Horizon_SCAN]);
                    // size of seg cloud
                    ++sizeOfSegCloud;
                }
            }

            segMsg.endRingIndex[i] = sizeOfSegCloud-1 - 5;
        }
        
        // extract segmented cloud for visualization
        // 4. The segmented point cloud does not include the ground
        if (pubSegmentedCloudPure.getNumSubscribers() != 0){
            for (size_t i = 0; i < N_SCAN; ++i){
                for (size_t j = 0; j < Horizon_SCAN; ++j){
                    if (labelMat.at<int>(i,j) > 0 && labelMat.at<int>(i,j) != 999999){
                        segmentedCloudPure->push_back(fullCloud->points[j + i*Horizon_SCAN]);
                        segmentedCloudPure->points.back().intensity = labelMat.at<int>(i,j);
                    }
                }
            }
        }
    }

6. publishCloud

The next step is to publish the segmented point clouds. This function is very simple; the main thing is to understand the role of each published cloud. The code is as follows

void publishCloud(){
        // 1. Publish the segmentation info message (downsampled ground plus segmented points)
        segMsg.header = cloudHeader;
        pubSegmentedCloudInfo.publish(segMsg);
        // 2. Publish the outlier cloud
        sensor_msgs::PointCloud2 laserCloudTemp;

        pcl::toROSMsg(*outlierCloud, laserCloudTemp);
        laserCloudTemp.header.stamp = cloudHeader.stamp;
        laserCloudTemp.header.frame_id = "base_link";
        pubOutlierCloud.publish(laserCloudTemp);
        // 3. Segmented cloud including ground (the same points as in 1, as a PointCloud2)
        pcl::toROSMsg(*segmentedCloud, laserCloudTemp);
        laserCloudTemp.header.stamp = cloudHeader.stamp;
        laserCloudTemp.header.frame_id = "base_link";
        pubSegmentedCloud.publish(laserCloudTemp);
        // 4. Full cloud after range image projection
        if (pubFullCloud.getNumSubscribers() != 0){
            pcl::toROSMsg(*fullCloud, laserCloudTemp);
            laserCloudTemp.header.stamp = cloudHeader.stamp;
            laserCloudTemp.header.frame_id = "base_link";
            pubFullCloud.publish(laserCloudTemp);
        }
        // 5. Ground point cloud
        if (pubGroundCloud.getNumSubscribers() != 0){
            pcl::toROSMsg(*groundCloud, laserCloudTemp);
            laserCloudTemp.header.stamp = cloudHeader.stamp;
            laserCloudTemp.header.frame_id = "base_link";
            pubGroundCloud.publish(laserCloudTemp);
        }
        // 6. Segmented cloud without ground
        if (pubSegmentedCloudPure.getNumSubscribers() != 0){
            pcl::toROSMsg(*segmentedCloudPure, laserCloudTemp);
            laserCloudTemp.header.stamp = cloudHeader.stamp;
            laserCloudTemp.header.frame_id = "base_link";
            pubSegmentedCloudPure.publish(laserCloudTemp);
        }
        // 7. Full cloud after projection, with range stored in intensity
        if (pubFullInfoCloud.getNumSubscribers() != 0){
            pcl::toROSMsg(*fullInfoCloud, laserCloudTemp);
            laserCloudTemp.header.stamp = cloudHeader.stamp;
            laserCloudTemp.header.frame_id = "base_link";
            pubFullInfoCloud.publish(laserCloudTemp);
        }
    }

7. resetParameters

The last step resets the temporary variables for the next frame. There is not much to explain.

    void resetParameters(){
        laserCloudIn->clear();
        groundCloud->clear();
        segmentedCloud->clear();
        segmentedCloudPure->clear();
        outlierCloud->clear();

        rangeMat = cv::Mat(N_SCAN, Horizon_SCAN, CV_32F, cv::Scalar::all(FLT_MAX));
        groundMat = cv::Mat(N_SCAN, Horizon_SCAN, CV_8S, cv::Scalar::all(0));
        labelMat = cv::Mat(N_SCAN, Horizon_SCAN, CV_32S, cv::Scalar::all(0));
        labelCount = 1;

        std::fill(fullCloud->points.begin(), fullCloud->points.end(), nanPoint);
        std::fill(fullInfoCloud->points.begin(), fullInfoCloud->points.end(), nanPoint);
    }

Summary

This concludes the point cloud segmentation part of LeGO-LOAM. If you need to adapt it to your own setup, pay attention to the parameters for different lidar models in "LeGO-LOAM/include/utility.h", such as the angular resolutions and the number of beams searched for ground.
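For reference, a sketch of the segmentation-related parameters (values as commonly configured for the VLP-16 in the open-source utility.h; double-check them against your own sensor):

    // Per-sensor parameters used by the segmentation stage (VLP-16 example)
    extern const int N_SCAN = 16;                // number of beams (rows)
    extern const int Horizon_SCAN = 1800;        // columns per revolution
    extern const float ang_res_x = 0.2;          // horizontal resolution (deg)
    extern const float ang_res_y = 2.0;          // vertical resolution (deg)
    extern const float ang_bottom = 15.0+0.1;    // angle of the lowest beam below horizontal (deg)
    extern const int groundScanInd = 7;          // rows searched for ground

    extern const float sensorMinimumRange = 1.0;         // drop closer returns (m)
    extern const float segmentTheta = 60.0/180.0*M_PI;   // cluster angle threshold
    extern const int segmentValidPointNum = 5;           // min points for small clusters
    extern const int segmentValidLineNum = 3;            // min scan lines for small clusters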

The two figures above illustrating the segmentation principle are taken from the following two papers; refer to them to deepen your understanding.

  • Fast Range Image-Based Segmentation of Sparse 3D Laser Scans for Online Operation, IROS 2016
  • Efficient Online Segmentation for Sparse 3D Laser Scans, 2017
