Original article: Coming soon as open source! Multi-modal visual place recognition based on dynamics-invariant perception
Title: Multi-Modal Visual Place Recognition in Dynamics-Invariant Perception Space
**From:** School of Automation, Southeast University
**Authors:** Lin Wu, Teng Wang and Changyin Sun
Code (to be open-sourced): https://github.com/fiftywu/Multimodal-VPR
Visual place recognition is an essential and challenging problem in robotics. In this letter, we are the first to explore the fusion of semantic and visual modalities in a dynamics-invariant space to improve place recognition in dynamic environments. First, we design a novel deep learning architecture that generates the static semantic segmentation and recovers the static image directly from the corresponding dynamic image. We then use the spatial pyramid matching (SPM) model to encode the static semantic segmentation into feature vectors, while the static image is encoded with the popular bag-of-words (BoW) model. On top of these multi-modal features, we measure the similarity between a query image and a target landmark by the joint similarity of their semantic and visual encodings. Extensive experiments demonstrate the effectiveness and robustness of the proposed method in dynamic environments.
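To make the SPM encoding of the static semantic segmentation concrete, here is a minimal sketch of a spatial pyramid histogram over class labels. The grid levels and per-cell normalization are assumptions for illustration; the paper's exact configuration may differ.

```python
import numpy as np

def spm_encode(seg_map, num_classes, levels=(1, 2, 4)):
    """Encode a semantic segmentation map with a spatial pyramid.

    seg_map: 2-D array of integer class labels.
    At each pyramid level, the map is split into level x level cells,
    a class histogram is computed per cell, and all histograms are
    concatenated into one feature vector.
    """
    h, w = seg_map.shape
    feats = []
    for lv in levels:
        for i in range(lv):
            for j in range(lv):
                cell = seg_map[i * h // lv:(i + 1) * h // lv,
                               j * w // lv:(j + 1) * w // lv]
                hist = np.bincount(cell.ravel(), minlength=num_classes).astype(float)
                hist /= hist.sum() + 1e-12  # per-cell L1 normalization
                feats.append(hist)
    return np.concatenate(feats)
```

With levels (1, 2, 4) the vector concatenates 1 + 4 + 16 = 21 cell histograms, so coarse global layout and finer local layout are both captured.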
Visual place recognition
Visual place recognition (VPR), a key component of SLAM systems, is the task of determining whether a robot is located in a place it has visited previously. Current work usually treats it as an image retrieval task that matches the current observation against a set of reference landmarks, designing various feature descriptors to measure landmark similarity. These methods typically assume the system operates in a static environment. The real world, however, is complex and dynamic: dynamic objects make the scene's appearance inconsistent across time, which increases feature-matching errors.
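Viewed as image retrieval, VPR reduces to scoring a query descriptor against every reference landmark descriptor and picking the best match. A minimal sketch (descriptor extraction abstracted away; cosine similarity is one common choice, and real systems add geometric verification):

```python
import numpy as np

def retrieve_best_match(query_desc, ref_descs):
    """Return the index and score of the most similar reference landmark.

    query_desc: 1-D feature descriptor of the current observation.
    ref_descs:  2-D array with one row per reference landmark.
    """
    q = query_desc / (np.linalg.norm(query_desc) + 1e-12)
    R = ref_descs / (np.linalg.norm(ref_descs, axis=1, keepdims=True) + 1e-12)
    sims = R @ q                      # cosine similarity to every landmark
    best = int(np.argmax(sims))
    return best, float(sims[best])
```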
Dynamics-invariant perception
Dynamics-invariant perception refers to eliminating the dynamic content of a scene (such as pedestrians and vehicles) and transforming it into the corresponding static content. A typical work is "Empty Cities: A Dynamic-Object-Invariant Space for Visual SLAM" (IEEE Transactions on Robotics, 2021). Building on this, we made several improvements and proposed a coarse-to-fine approach for dynamic-to-static image translation (Pattern Recognition, 2022). In the IEEE SPL letter, we design a novel deep neural network architecture to directly infer the static semantics (i.e., the static semantic segmentation map) and the static image from the input dynamic scene image. In particular, we also use the static semantics as a prior to improve the quality of static image generation. The static semantic segmentation results and dynamic-to-static image translation results are shown in Figure 2 and Figure 3 (the experimental dataset was created with the autonomous-driving simulator CARLA).
Visual place recognition experiments
To compare VPR recall against current mainstream image translation methods, we use Pix2Pix, MGAN, SRMGAN and SSGGNet to restore static images and then extract BoW features from them to measure image similarity. The table reports the recall of the different models. Our method, which uses both BoW and SPM encodings, performs best and improves substantially over the runner-up SSGGNet-BoW, which underlines the importance of the SPM-based semantic features. In addition, SSGGNet-BoW outperforms Pix2Pix-BoW, MGAN-BoW and SRMGAN-BoW, further verifying the effectiveness of using static semantics to guide static image generation.
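The recall figures in the table can be understood as the fraction of queries whose ground-truth landmark appears among the top-N retrieved candidates. A minimal sketch (top-1 by default; any distance thresholds or localization tolerances used in the actual evaluation are omitted):

```python
import numpy as np

def recall_at_n(sim_matrix, gt_indices, n=1):
    """Fraction of queries whose ground-truth landmark is in the top-n.

    sim_matrix: (num_queries, num_refs) similarity scores.
    gt_indices: ground-truth reference index for each query.
    """
    hits = 0
    for q, gt in enumerate(gt_indices):
        top_n = np.argsort(sim_matrix[q])[::-1][:n]  # indices of n highest scores
        hits += int(gt in top_n)
    return hits / len(gt_indices)
```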
References
T. Wang, L. Wu and C. Sun, "A Coarse-to-Fine Approach for Dynamic-to-Static Image Translation," in Pattern Recognition, 2022, doi: 10.1016/j.patcog.2021.108373.
L. Wu, T. Wang and C. Sun, "Multi-Modal Visual Place Recognition in Dynamics-Invariant Perception Space," in IEEE Signal Processing Letters, 2021, doi: 10.1109/LSP.2021.3123907.
B. Bescos, C. Cadena and J. Neira, "Empty Cities: A Dynamic-Object-Invariant Space for Visual SLAM," in IEEE Transactions on Robotics, 2021, doi: 10.1109/TRO.2020.3031267.
P. Isola, J. Zhu, T. Zhou and A. Efros, "Image-to-Image Translation with Conditional Adversarial Networks," in CVPR, 2017, https://arxiv.org/pdf/1611.07004.pdf.