ROS-3DSLAM: Lidar Part Code Analysis A

1, Program introduction:
2, Reading analysis:
1. utility package: header file and definitions
2. imageProjection: depth map generation node
3. imuPreintegration: IMU preintegration node
3, Reflections:
4, References:

2021@SDUSC
Tuesday, October 26, 2021 - Saturday, October 30, 2021

1, Program introduction:

Through previous study, I have become familiar with the information flow, structure, and functions of the LVI-SAM package, and have analyzed the feature extraction code in detail.

This week, before analyzing the factor-graph optimization package, I decided to review the findings of the other students in my group, so that I can approach the follow-up code more effectively. At the same time, I can learn from their approaches to reading code and improve my own knowledge and skills.

My approach this time is similar to last week's information-flow analysis: reading the source code alongside blog commentary to build understanding quickly.

2, Reading analysis:

Since other students in the group have already parsed the code of the whole package, repeating that work here would add little. It is more useful to record my own questions and understanding instead.

In Monday's group discussion, I found that my understanding of the information flow deviated somewhat from my classmates'. This week I need to revisit the information flow.

1. utility package: header file and definitions

This header file mainly imports dependencies and defines shared classes and utility functions.

The most important dependency it pulls in is the Point Cloud Library (PCL):

The Point Cloud Library (PCL) is a standalone, large-scale, open project for 2D/3D image and point cloud processing. PCL is released under the terms of the BSD license and is therefore free for commercial and research use.

The most important class defined here is ParamServer, and the most important method is imuConverter. In addition, the meanings of the other utility functions must be mastered in order to read the rest of the code.

// Utility functions: their main jobs are conversion between IMU messages and plain
// values, getting the ROS time of a message, and computing point-cloud distances.
template<typename T>
sensor_msgs::PointCloud2 publishCloud(ros::Publisher *thisPub, T thisCloud, ros::Time thisStamp, std::string thisFrame)
{
    sensor_msgs::PointCloud2 tempCloud;
    pcl::toROSMsg(*thisCloud, tempCloud);
    tempCloud.header.stamp = thisStamp;
    tempCloud.header.frame_id = thisFrame;
    if (thisPub->getNumSubscribers() != 0)
        thisPub->publish(tempCloud);
    return tempCloud;
}

template<typename T>
double ROS_TIME(T msg)
{
    return msg->header.stamp.toSec();
}

template<typename T>
void imuAngular2rosAngular(sensor_msgs::Imu *thisImuMsg, T *angular_x, T *angular_y, T *angular_z)
{
    *angular_x = thisImuMsg->angular_velocity.x;
    *angular_y = thisImuMsg->angular_velocity.y;
    *angular_z = thisImuMsg->angular_velocity.z;
}

template<typename T>
void imuAccel2rosAccel(sensor_msgs::Imu *thisImuMsg, T *acc_x, T *acc_y, T *acc_z)
{
    *acc_x = thisImuMsg->linear_acceleration.x;
    *acc_y = thisImuMsg->linear_acceleration.y;
    *acc_z = thisImuMsg->linear_acceleration.z;
}

template<typename T>
void imuRPY2rosRPY(sensor_msgs::Imu *thisImuMsg, T *rosRoll, T *rosPitch, T *rosYaw)
{
    double imuRoll, imuPitch, imuYaw;
    tf::Quaternion orientation;
    tf::quaternionMsgToTF(thisImuMsg->orientation, orientation);
    tf::Matrix3x3(orientation).getRPY(imuRoll, imuPitch, imuYaw);
    *rosRoll = imuRoll;
    *rosPitch = imuPitch;
    *rosYaw = imuYaw;
}

float pointDistance(PointType p)
{
    return sqrt(p.x*p.x + p.y*p.y + p.z*p.z);
}

float pointDistance(PointType p1, PointType p2)
{
    return sqrt((p1.x-p2.x)*(p1.x-p2.x) + (p1.y-p2.y)*(p1.y-p2.y) + (p1.z-p2.z)*(p1.z-p2.z));
}
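As a quick illustration of how these helpers are used elsewhere in the package, here is a minimal usage sketch I wrote; the publisher pubCloud, the cloud laserCloudOut, and the frame name are illustrative placeholders, not names from the LVI-SAM source:

// Hypothetical usage sketch of the utility helpers above.
pcl::PointCloud<PointType>::Ptr laserCloudOut(new pcl::PointCloud<PointType>());
// ... fill laserCloudOut with points ...

// publishCloud only publishes if someone is subscribed, and returns the ROS message.
sensor_msgs::PointCloud2 msg = publishCloud(&pubCloud, laserCloudOut, ros::Time::now(), "base_link");

// Extract angular velocity from an incoming IMU message into plain doubles.
double wx, wy, wz;
imuAngular2rosAngular(&thisImu, &wx, &wy, &wz);

// Range of a point from the sensor origin.
float r = pointDistance(laserCloudOut->points.front());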

2. imageProjection: depth map generation node

This node processes the input IMU and odometry information together with the lidar scan, and generates the corresponding depth map.

It starts by defining data structures, initializing state, subscribing to messages, and so on.

The focus is the cloudHandler callback.

Like the feature extraction code I analyzed earlier, cloudHandler here also realizes its work through six important functions (a sketch of the call order follows the list):

  • cachePointCloud
  • deskewInfo
  • projectPointCloud
  • cloudExtraction
  • publishClouds
  • resetParameters
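Putting them together, the callback is essentially a fixed pipeline. As I read the source, it looks like this (simplified; early-exit checks on the first two steps are kept, everything else is elided):

void cloudHandler(const sensor_msgs::PointCloud2ConstPtr& laserCloudMsg)
{
    if (!cachePointCloud(laserCloudMsg))   // 1. buffer and validate the incoming scan
        return;
    if (!deskewInfo())                     // 2. prepare IMU/odometry data for deskewing
        return;
    projectPointCloud();                   // 3. project points into the range image
    cloudExtraction();                     // 4. extract the valid points per scan ring
    publishClouds();                       // 5. publish the deskewed cloud and info
    resetParameters();                     // 6. reset state for the next frame
}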

Questions:

  • Timestamps

  • Mathematical concepts: quaternions; Euler angles (roll, pitch, yaw); axis-angle; rotation matrices

  • deskewInfo() function

  • The most important functionality here is laser motion-distortion correction (deskewing), implemented by the deskewPoint() function (a minimal sketch of the idea appears after this list):

    For the principle, refer to ly's seventh blog post, which resolved my confusion.

  • tf types: effectively they record poses in coordinate frames. All the variables defined here come from the tf package.

    It is worth noting that tf::Quaternion here is exactly the quaternion discussed above.

    // Extract the attitude angles from the IMU odometry message
    tf::Quaternion orientation;
    tf::quaternionMsgToTF(startOdomMsg.pose.pose.orientation, orientation);
    double roll, pitch, yaw;
    tf::Matrix3x3(orientation).getRPY(roll, pitch, yaw);
  • Affine transformation?
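To make the deskewing idea concrete, here is a minimal sketch of the principle written with Eigen; it is my own illustration, not the LVI-SAM deskewPoint() code, and the names (deskewPointSketch, relTime, qStartToEnd) are mine. Each point carries a relative time within the sweep; the rotation accumulated up to that time is interpolated, and the point is rotated back into the frame of the scan start. Translation is handled the same way.

#include <Eigen/Geometry>

// Illustrative deskew of one lidar point (simplified).
// relTime: point time within the sweep, normalized to [0, 1].
// qStartToEnd: sensor rotation from sweep start to sweep end (e.g., from IMU integration).
Eigen::Vector3f deskewPointSketch(const Eigen::Vector3f& p,
                                  float relTime,
                                  const Eigen::Quaternionf& qStartToEnd)
{
    // Interpolate the rotation at this point's capture time.
    Eigen::Quaternionf qAtPoint =
        Eigen::Quaternionf::Identity().slerp(relTime, qStartToEnd);
    // Express the point in the coordinate frame of the sweep start.
    return qAtPoint * p;
}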

Summary:

In short, this file processes the IMU and odometry data to obtain usable correction data, and then applies that data to the lidar point cloud frame for deskewing, yielding the final corrected result.

Among these functions, the most important is deskewInfo(), which carries out most of the main work of this file.

3. imuPreintegration: IMU preintegration node

This node performs IMU preintegration.

Although I already had a preliminary understanding of the IMU, my classmates' blogs deepened it and made me realize how much knowledge I still need to fill in here.

Two callback functions matter most in this code: imuHandler and odometryHandler.

Both of them are deeply tied to the third-party library GTSAM.

Questions:

  • GTSAM library: GTSAM 4.0 is a BSD-licensed C++ library that implements sensor fusion for robotics and computer vision applications, including SLAM (simultaneous localization and mapping), VO (visual odometry), and SfM (structure from motion). It uses factor graphs and Bayes networks as the underlying computing paradigm rather than sparse matrices, optimizing for the most probable configuration or an optimal plan. Coupled with a capable sensor front end (not provided here), GTSAM powers many impressive autonomous systems, in both academia and industry.

    This is an important library that carries considerable weight in this code.
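    To get a feel for the factor-graph paradigm, below is a minimal, self-contained GTSAM example I wrote for myself; it is not from the LVI-SAM code. A prior factor anchors the first pose, a between factor encodes an odometry-style constraint, and the optimizer recovers the second pose.

    #include <gtsam/geometry/Pose3.h>
    #include <gtsam/inference/Symbol.h>
    #include <gtsam/nonlinear/NonlinearFactorGraph.h>
    #include <gtsam/nonlinear/Values.h>
    #include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
    #include <gtsam/slam/PriorFactor.h>
    #include <gtsam/slam/BetweenFactor.h>

    using gtsam::symbol_shorthand::X;

    int main()
    {
        gtsam::NonlinearFactorGraph graph;

        // Anchor the first pose at the origin with a tight prior.
        auto priorNoise = gtsam::noiseModel::Isotropic::Sigma(6, 0.01);
        graph.add(gtsam::PriorFactor<gtsam::Pose3>(X(0), gtsam::Pose3(), priorNoise));

        // Odometry-style constraint: move 1 m forward along x between X(0) and X(1).
        auto odomNoise = gtsam::noiseModel::Isotropic::Sigma(6, 0.1);
        gtsam::Pose3 delta(gtsam::Rot3(), gtsam::Point3(1.0, 0.0, 0.0));
        graph.add(gtsam::BetweenFactor<gtsam::Pose3>(X(0), X(1), delta, odomNoise));

        // Initial guesses (deliberately off), then optimize.
        gtsam::Values initial;
        initial.insert(X(0), gtsam::Pose3());
        initial.insert(X(1), gtsam::Pose3(gtsam::Rot3(), gtsam::Point3(0.5, 0.2, 0.0)));
        gtsam::Values result = gtsam::LevenbergMarquardtOptimizer(graph, initial).optimize();
        result.print("optimized: ");
        return 0;
    }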

  • IMU preintegration:

    High IMU sampling frequency -> large amount of data -> need to reduce the data to be processed -> sample the measurements at intervals.

    However, this creates a problem:

    During back-end optimization, each iterative solve updates and adjusts the PVQ (position, velocity, orientation) states. Once a PVQ state is adjusted (say, the one at the first second), every intermediate step and all subsequent trajectory states must be integrated again. At 100 Hz, with 100 measurements collected between two instants one second apart, 100 integrations have to be recomputed.

    The purpose of preintegration is to simplify this process, turning the 100 integrations into one.

    In addition, the inaccuracy introduced by this simplification, the preintegration error, must also be dealt with.
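    To pin down what "turning 100 integrations into one" means, here is the standard form of the preintegrated quantities in the usual on-manifold formulation (the notation is mine, not taken from the code). Between keyframes $i$ and $j$, with gyro and accelerometer measurements $\omega_k$, $a_k$, bias estimates $b_g$, $b_a$, and sample interval $\Delta t$:

    $$\Delta R_{ij} = \prod_{k=i}^{j-1} \operatorname{Exp}\big((\omega_k - b_g)\,\Delta t\big), \qquad \Delta v_{ij} = \sum_{k=i}^{j-1} \Delta R_{ik}\,(a_k - b_a)\,\Delta t,$$

    $$\Delta p_{ij} = \sum_{k=i}^{j-1} \Big( \Delta v_{ik}\,\Delta t + \tfrac{1}{2}\,\Delta R_{ik}\,(a_k - b_a)\,\Delta t^2 \Big)$$

    These increments depend only on the IMU measurements and the bias estimates, never on the absolute state at time $i$. So when the optimizer adjusts the state at $i$, the increments are reused as a single relative-motion constraint instead of re-integrating all 100 samples; small bias updates are absorbed by a first-order correction, which is the preintegration error handling mentioned above.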

  • From the tf package:

    // map -> odom
    tf::Transform map_to_odom;
    tf::TransformBroadcaster tfMap2Odom;
    // odom -> base_link
    tf::TransformBroadcaster tfOdom2BaseLink;

    This can be understood as: the robot's odometry = the robot's current pose in the map minus its starting pose in the map.

  • iSAM2

  • imuHandler:

    The job of imuHandler is not complex: it takes the raw IMU data and converts it into thisImu for processing.

    The timestamp information is then used to add this frame to the IMU preintegrator.

    Then, starting from the state and bias at the time of the previous-frame laser odometry, the prediction for the current frame is added to obtain the state at the current time.

    The core functionality is implemented by GTSAM in the middle part of the function.

    void imuHandler(const sensor_msgs::Imu::ConstPtr& imuMsg)
    {
        sensor_msgs::Imu thisImu = imuConverter(*imuMsg);
        // publish static tf
        tfMap2Odom.sendTransform(tf::StampedTransform(map_to_odom, thisImu.header.stamp, "map", "odom"));

        imuQueOpt.push_back(thisImu);
        imuQueImu.push_back(thisImu);

        if (doneFirstOpt == false)
            return;

        double imuTime = ROS_TIME(&thisImu);
        double dt = (lastImuT_imu < 0) ? (1.0 / 500.0) : (imuTime - lastImuT_imu);
        lastImuT_imu = imuTime;

        // integrate this single imu message
        imuIntegratorImu_->integrateMeasurement(
            gtsam::Vector3(thisImu.linear_acceleration.x, thisImu.linear_acceleration.y, thisImu.linear_acceleration.z),
            gtsam::Vector3(thisImu.angular_velocity.x, thisImu.angular_velocity.y, thisImu.angular_velocity.z), dt);

        // predict odometry
        gtsam::NavState currentState = imuIntegratorImu_->predict(prevStateOdom, prevBiasOdom);

        // publish odometry
        nav_msgs::Odometry odometry;
        odometry.header.stamp = thisImu.header.stamp;
        odometry.header.frame_id = "odom";
        odometry.child_frame_id = "odom_imu";

        // transform imu pose to lidar
        gtsam::Pose3 imuPose = gtsam::Pose3(currentState.quaternion(), currentState.position());
        gtsam::Pose3 lidarPose = imuPose.compose(imu2Lidar);

        odometry.pose.pose.position.x = lidarPose.translation().x();
        odometry.pose.pose.position.y = lidarPose.translation().y();
        odometry.pose.pose.position.z = lidarPose.translation().z();
        odometry.pose.pose.orientation.x = lidarPose.rotation().toQuaternion().x();
        odometry.pose.pose.orientation.y = lidarPose.rotation().toQuaternion().y();
        odometry.pose.pose.orientation.z = lidarPose.rotation().toQuaternion().z();
        odometry.pose.pose.orientation.w = lidarPose.rotation().toQuaternion().w();

        odometry.twist.twist.linear.x = currentState.velocity().x();
        odometry.twist.twist.linear.y = currentState.velocity().y();
        odometry.twist.twist.linear.z = currentState.velocity().z();
        odometry.twist.twist.angular.x = thisImu.angular_velocity.x + prevBiasOdom.gyroscope().x();
        odometry.twist.twist.angular.y = thisImu.angular_velocity.y + prevBiasOdom.gyroscope().y();
        odometry.twist.twist.angular.z = thisImu.angular_velocity.z + prevBiasOdom.gyroscope().z();

        // information for VINS initialization
        odometry.pose.covariance[0] = double(imuPreintegrationResetId);
        odometry.pose.covariance[1] = prevBiasOdom.accelerometer().x();
        odometry.pose.covariance[2] = prevBiasOdom.accelerometer().y();
        odometry.pose.covariance[3] = prevBiasOdom.accelerometer().z();
        odometry.pose.covariance[4] = prevBiasOdom.gyroscope().x();
        odometry.pose.covariance[5] = prevBiasOdom.gyroscope().y();
        odometry.pose.covariance[6] = prevBiasOdom.gyroscope().z();
        odometry.pose.covariance[7] = imuGravity;
        pubImuOdometry.publish(odometry);

        // publish imu path
        static nav_msgs::Path imuPath;
        static double last_path_time = -1;
        if (imuTime - last_path_time > 0.1)
        {
            last_path_time = imuTime;
            geometry_msgs::PoseStamped pose_stamped;
            pose_stamped.header.stamp = thisImu.header.stamp;
            pose_stamped.header.frame_id = "odom";
            pose_stamped.pose = odometry.pose.pose;
            imuPath.poses.push_back(pose_stamped);
            while (!imuPath.poses.empty() && abs(imuPath.poses.front().header.stamp.toSec() - imuPath.poses.back().header.stamp.toSec()) > 3.0)
                imuPath.poses.erase(imuPath.poses.begin());
            if (pubImuPath.getNumSubscribers() != 0)
            {
                imuPath.header.stamp = thisImu.header.stamp;
                imuPath.header.frame_id = "odom";
                pubImuPath.publish(imuPath);
            }
        }

        // publish transformation
        tf::Transform tCur;
        tf::poseMsgToTF(odometry.pose.pose, tCur);
        tf::StampedTransform odom_2_baselink = tf::StampedTransform(tCur, thisImu.header.stamp, "odom", "base_link");
        tfOdom2BaseLink.sendTransform(odom_2_baselink);
    }
  • odometryHandler

    Compared with imuHandler above, odometryHandler is considerably more complex. The first part mainly checks that IMU data exists and handles system initialization; the rest is the normal processing. The code is shown in truncated pieces below, with the analysis continuing between them.

    void odometryHandler(const nav_msgs::Odometry::ConstPtr& odomMsg)
    {
        double currentCorrectionTime = ROS_TIME(odomMsg);

        // make sure we have imu data to integrate
        if (imuQueOpt.empty())
            return;

        float p_x = odomMsg->pose.pose.position.x;
        float p_y = odomMsg->pose.pose.position.y;
        float p_z = odomMsg->pose.pose.position.z;
        float r_x = odomMsg->pose.pose.orientation.x;
        float r_y = odomMsg->pose.pose.orientation.y;
        float r_z = odomMsg->pose.pose.orientation.z;
        float r_w = odomMsg->pose.pose.orientation.w;
        int currentResetId = round(odomMsg->pose.covariance[0]);
        gtsam::Pose3 lidarPose = gtsam::Pose3(gtsam::Rot3::Quaternion(r_w, r_x, r_y, r_z), gtsam::Point3(p_x, p_y, p_z));

        // correction pose jumped, reset imu pre-integration
        if (currentResetId != imuPreintegrationResetId)
        {
            resetParams();
            imuPreintegrationResetId = currentResetId;
            return;
        }

        // 0. initialize system
        if (systemInitialized == false)
        {
            resetOptimization();

            // pop old IMU message
            while (!imuQueOpt.empty())
            {
                if (ROS_TIME(&imuQueOpt.front()) < currentCorrectionTime - delta_t)
                {
                    lastImuT_opt = ROS_TIME(&imuQueOpt.front());
                    imuQueOpt.pop_front();
                }
                else
                    break;
            }
            // initial pose
            prevPose_ = lidarPose.compose(lidar2Imu);
            gtsam::PriorFactor<gtsam::Pose3> priorPose(X(0), prevPose_, priorPoseNoise);
            graphFactors.add(priorPose);
            // initial velocity
            prevVel_ = gtsam::Vector3(0, 0, 0);
            gtsam::PriorFactor<gtsam::Vector3> priorVel(V(0), prevVel_, priorVelNoise);
            graphFactors.add(priorVel);
            // initial bias
            prevBias_ = gtsam::imuBias::ConstantBias();
            gtsam::PriorFactor<gtsam::imuBias::ConstantBias> priorBias(B(0), prevBias_, priorBiasNoise);
            graphFactors.add(priorBias);
            // add values
            graphValues.insert(X(0), prevPose_);
            graphValues.insert(V(0), prevVel_);
            graphValues.insert(B(0), prevBias_);
            // optimize once
            optimizer.update(graphFactors, graphValues);
            graphFactors.resize(0);
            graphValues.clear();

            imuIntegratorImu_->resetIntegrationAndSetBias(prevBias_);
            imuIntegratorOpt_->resetIntegrationAndSetBias(prevBias_);

            key = 1;
            systemInitialized = true;
            return;
        }

        // reset graph for speed
        if (key == 100)
        {
            // get updated noise before reset
            gtsam::noiseModel::Gaussian::shared_ptr updatedPoseNoise = gtsam::noiseModel::Gaussian::Covariance(optimizer.marginalCovariance(X(key-1)));
            gtsam::noiseModel::Gaussian::shared_ptr updatedVelNoise  = gtsam::noiseModel::Gaussian::Covariance(optimizer.marginalCovariance(V(key-1)));
            gtsam::noiseModel::Gaussian::shared_ptr updatedBiasNoise = gtsam::noiseModel::Gaussian::Covariance(optimizer.marginalCovariance(B(key-1)));
            // reset graph
            resetOptimization();
            // add pose
            gtsam::PriorFactor<gtsam::Pose3> priorPose(X(0), prevPose_, updatedPoseNoise);
            graphFactors.add(priorPose);
            // add velocity
            gtsam::PriorFactor<gtsam::Vector3> priorVel(V(0), prevVel_, updatedVelNoise);
            graphFactors.add(priorVel);
            // add bias
            gtsam::PriorFactor<gtsam::imuBias::ConstantBias> priorBias(B(0), prevBias_, updatedBiasNoise);
            graphFactors.add(priorBias);
            // add values
            graphValues.insert(X(0), prevPose_);
            graphValues.insert(V(0), prevVel_);
            graphValues.insert(B(0), prevBias_);
            // optimize once
            optimizer.update(graphFactors, graphValues);
            graphFactors.resize(0);
            graphValues.clear();

            key = 1;
        }

    Next, the IMU information and pose information are added to the optimizer for computation and optimization.

    Finally, note the check on whether the optimization succeeded: the failureDetection() function (a sketch of what such a check looks like appears after the last code piece below).

        // 1. integrate imu data and optimize
        while (!imuQueOpt.empty())
        {
            // pop and integrate imu data that is between two optimizations
            sensor_msgs::Imu *thisImu = &imuQueOpt.front();
            double imuTime = ROS_TIME(thisImu);
            if (imuTime < currentCorrectionTime - delta_t)
            {
                double dt = (lastImuT_opt < 0) ? (1.0 / 500.0) : (imuTime - lastImuT_opt);
                imuIntegratorOpt_->integrateMeasurement(
                        gtsam::Vector3(thisImu->linear_acceleration.x, thisImu->linear_acceleration.y, thisImu->linear_acceleration.z),
                        gtsam::Vector3(thisImu->angular_velocity.x, thisImu->angular_velocity.y, thisImu->angular_velocity.z), dt);
                lastImuT_opt = imuTime;
                imuQueOpt.pop_front();
            }
            else
                break;
        }
        // add imu factor to graph
        const gtsam::PreintegratedImuMeasurements& preint_imu = dynamic_cast<const gtsam::PreintegratedImuMeasurements&>(*imuIntegratorOpt_);
        gtsam::ImuFactor imu_factor(X(key - 1), V(key - 1), X(key), V(key), B(key - 1), preint_imu);
        graphFactors.add(imu_factor);
        // add imu bias between factor
        graphFactors.add(gtsam::BetweenFactor<gtsam::imuBias::ConstantBias>(B(key - 1), B(key), gtsam::imuBias::ConstantBias(),
                         gtsam::noiseModel::Diagonal::Sigmas(sqrt(imuIntegratorOpt_->deltaTij()) * noiseModelBetweenBias)));
        // add pose factor
        gtsam::Pose3 curPose = lidarPose.compose(lidar2Imu);
        gtsam::PriorFactor<gtsam::Pose3> pose_factor(X(key), curPose, correctionNoise);
        graphFactors.add(pose_factor);
        // insert predicted values
        gtsam::NavState propState_ = imuIntegratorOpt_->predict(prevState_, prevBias_);
        graphValues.insert(X(key), propState_.pose());
        graphValues.insert(V(key), propState_.v());
        graphValues.insert(B(key), prevBias_);
        // optimize
        optimizer.update(graphFactors, graphValues);
        optimizer.update();
        graphFactors.resize(0);
        graphValues.clear();
        // Overwrite the beginning of the preintegration for the next step.
        gtsam::Values result = optimizer.calculateEstimate();
        prevPose_  = result.at<gtsam::Pose3>(X(key));
        prevVel_   = result.at<gtsam::Vector3>(V(key));
        prevState_ = gtsam::NavState(prevPose_, prevVel_);
        prevBias_  = result.at<gtsam::imuBias::ConstantBias>(B(key));
        // Reset the optimization preintegration object.
        imuIntegratorOpt_->resetIntegrationAndSetBias(prevBias_);
        // check optimization
        if (failureDetection(prevVel_, prevBias_))
        {
            resetParams();
            return;
        }

    // To maintain real-time performance, imuIntegratorImu_ must obtain the latest bias immediately after each odometry-triggered optimization;
    // at the same time, the buffered IMU measurements in imuQueImu are re-propagated with the updated bias,
    // preserving real-time performance and accuracy as far as possible.

    After optimization, the latest data must keep flowing in so that real-time performance and accuracy are maintained.

        // 2. after optimization, re-propagate imu odometry preintegration
        prevStateOdom = prevState_;
        prevBiasOdom  = prevBias_;
        // first pop imu message older than current correction data
        double lastImuQT = -1;
        while (!imuQueImu.empty() && ROS_TIME(&imuQueImu.front()) < currentCorrectionTime - delta_t)
        {
            lastImuQT = ROS_TIME(&imuQueImu.front());
            imuQueImu.pop_front();
        }
        // re-propagate
        if (!imuQueImu.empty())
        {
            // reset bias using the newly optimized bias
            imuIntegratorImu_->resetIntegrationAndSetBias(prevBiasOdom);
            // integrate imu messages from the beginning of this optimization
            for (int i = 0; i < (int)imuQueImu.size(); ++i)
            {
                sensor_msgs::Imu *thisImu = &imuQueImu[i];
                double imuTime = ROS_TIME(thisImu);
                double dt = (lastImuQT < 0) ? (1.0 / 500.0) : (imuTime - lastImuQT);

                imuIntegratorImu_->integrateMeasurement(
                        gtsam::Vector3(thisImu->linear_acceleration.x, thisImu->linear_acceleration.y, thisImu->linear_acceleration.z),
                        gtsam::Vector3(thisImu->angular_velocity.x, thisImu->angular_velocity.y, thisImu->angular_velocity.z), dt);
                lastImuQT = imuTime;
            }
        }

        ++key;
        doneFirstOpt = true;
    }
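    For reference, the failureDetection() check mentioned above is essentially a sanity check on the optimized state. Here is a sketch of the shape such a check takes; the thresholds and the name failureDetectionSketch are illustrative, not the exact LVI-SAM values:

    // Illustrative failure check after optimization (thresholds are placeholders).
    bool failureDetectionSketch(const gtsam::Vector3& velCur,
                                const gtsam::imuBias::ConstantBias& biasCur)
    {
        // An implausibly large velocity suggests the optimization diverged.
        if (velCur.norm() > 30.0)
            return true;
        // Implausibly large accelerometer or gyro biases likewise indicate failure.
        if (biasCur.accelerometer().norm() > 1.0 || biasCur.gyroscope().norm() > 1.0)
            return true;
        return false;
    }

    When it returns true, resetParams() is called and the handler bails out, as shown in the code above.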

Summary:

I found that there were still some gaps in my understanding of the basic ROS coordinate frames, so I worked through them here:

map, odom, base_link, base_laser

map is the map coordinate frame; as the name suggests, it represents the real world.

odom is the odometry coordinate frame, effectively a virtual frame defined by the transformation between the pose actually computed by odometry and the true pose.

base_link is the robot body frame, coincident with the center of the robot.

base_laser is the lidar coordinate frame, determined by where the lidar is mounted on the robot.

The transform between the last two frames is fixed.

odom is the hardest concept to understand. The first thing to note is that although the odom frame shares its name with the odom topic, they are two different things; the relationship between them is that the pose in the odom topic supplies the transformation matrix for the odom -> base_link tf relationship. map and odom coincide at the beginning of the motion, but as the robot moves they drift apart; this drift is the accumulated odometry error, and the map -> odom tf transform is what has to be corrected.
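A small sketch of how this correction is typically computed with tf; it is my own illustration, not code from this package, and the names computeMapToOdom, mapToBase, and odomToBase are mine. Given the robot's pose in map (from localization) and its pose in odom (from odometry), the map -> odom transform is the composition of one with the inverse of the other:

#include <tf/transform_datatypes.h>

// Illustrative computation of the map -> odom correction transform.
// mapToBase:  robot pose in the map frame (e.g., from scan matching / localization)
// odomToBase: robot pose in the odom frame (from the odometry)
tf::Transform computeMapToOdom(const tf::Transform& mapToBase,
                               const tf::Transform& odomToBase)
{
    // map -> odom = (map -> base_link) * (odom -> base_link)^-1
    return mapToBase * odomToBase.inverse();
}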

In addition, I need to learn more about the GTSAM package.

3, Reflections:

Beyond a deeper understanding of the code as a whole, I think my code-reading ability has also been exercised.

In later code reading, I should pay attention to the following three things:

1. Look up and study the third-party libraries involved.

2. Work through the relevant mathematics instead of simply quoting it; writing out the derivations deepens understanding.

3. Don't be vague about the advanced C++ idioms used in the code.
