
Localization of mobile robot in prior 3D LiDAR maps using stereo image sequence
I.V. Belkin 1,2, A.A. Abramenko 2, V.D. Bezuglyi 1, D.A. Yudin 1,3

1 Moscow Institute of Physics and Technology, 141700, Russia, Moscow Region, Dolgoprudny, Institutskiy per. 9;
2 LLC Integrant, 127204, Russia, Moscow, Dolgoprudnenskoe highway 3;
3 Artificial Intelligence Research Institute (AIRI), 121170, Russia, Moscow, Kutuzovsky ave. 32 c1


DOI: 10.18287/2412-6179-CO-1369

Pages: 406-417.

Article language: English.

Abstract:
The paper studies real-time stereo image-based localization of a vehicle in a prior 3D LiDAR map. A novel localization approach for a mobile ground robot is proposed, combining conventional computer vision techniques, neural-network-based image analysis, and numerical optimization. It matches a noisy depth image against the visible point cloud using a modified Nelder-Mead optimization method. A deep neural network for semantic image segmentation is used to eliminate dynamic obstacles. The visible point cloud is extracted using a 3D mesh representation of the map. The proposed approach is evaluated on the KITTI dataset and on a custom dataset collected with a ClearPath Husky mobile robot. It shows a stable absolute translation error of about 0.11-0.13 m and a rotation error of 0.42-0.62 deg. The standard deviation of the obtained absolute metrics is the smallest among state-of-the-art approaches, so our method provides a more stable pose estimate. This stability is achieved primarily through the use of multiple data frames during the optimization step and the elimination of dynamic obstacles from the depth image. The method's performance is demonstrated on different hardware platforms, including the energy-efficient Nvidia Jetson Xavier AGX. With a parallel code implementation, we achieve an input stereo image processing speed of 14 frames per second on the Xavier AGX.
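
The matching step the abstract describes can be illustrated with a minimal Python sketch. This is not the authors' implementation: it uses SciPy's standard Nelder-Mead in place of the paper's modified variant, matches a single frame rather than multiple frames, and all identifiers (se3, depth_cost) and data (the synthetic map, calibration matrix, rendered depth image) are hypothetical stand-ins.

    # Sketch: refine a 6-DoF camera pose against a prior map by comparing
    # projected map-point depths with a (noisy) stereo depth image.
    import numpy as np
    from scipy.optimize import minimize

    def se3(pose):
        # Rotation matrix and translation from (tx, ty, tz, roll, pitch, yaw).
        tx, ty, tz, r, p, y = pose
        cr, sr, cp, sp, cy, sy = np.cos(r), np.sin(r), np.cos(p), np.sin(p), np.cos(y), np.sin(y)
        R = (np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]]) @
             np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]) @
             np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]]))
        return R, np.array([tx, ty, tz])

    def depth_cost(pose, pts_map, K, depth_img, static_mask):
        # Project visible map points at the candidate pose and compare their
        # depths with the depth image under a truncated L1 cost.
        R, t = se3(pose)
        pts = pts_map @ R.T + t
        pts = pts[pts[:, 2] > 0.1]          # keep points in front of the camera
        uvw = pts @ K.T
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        h, w = depth_img.shape
        ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        u, v, z = u[ok], v[ok], pts[ok, 2]
        ok = static_mask[v, u]              # drop pixels on dynamic obstacles
        if not ok.any():
            return 1.0                      # truncation value: no overlap at all
        return np.minimum(np.abs(depth_img[v[ok], u[ok]] - z[ok]), 1.0).mean()

    # Toy data: a random "visible" point cloud, a pinhole camera, and a depth
    # image rendered at a ground-truth pose offset with added noise.
    rng = np.random.default_rng(0)
    K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    pts_map = rng.uniform([-3, -2, 4], [3, 2, 12], size=(2000, 3))
    R_true, t_true = se3([0.2, -0.1, 0.15, 0.01, -0.02, 0.03])
    cam = pts_map @ R_true.T + t_true
    uvw = cam @ K.T
    u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
    ok = (u >= 0) & (u < 640) & (v >= 0) & (v < 480)
    depth_img = np.full((480, 640), np.inf)
    depth_img[v[ok], u[ok]] = cam[ok, 2] + rng.normal(0.0, 0.05, ok.sum())
    static_mask = np.ones((480, 640), dtype=bool)   # all pixels static in this toy case

    # A wide initial simplex keeps the search from stalling on the
    # piecewise-constant cost introduced by pixel rounding.
    x0 = np.zeros(6)
    simplex = np.vstack([x0] + [x0 + 0.05 * np.eye(6)[i] for i in range(6)])
    res = minimize(depth_cost, x0, args=(pts_map, K, depth_img, static_mask),
                   method='Nelder-Mead',
                   options={'initial_simplex': simplex, 'xatol': 1e-4, 'fatol': 1e-6})
    print('estimated pose:', res.x.round(3))

Here static_mask plays the role of the semantic segmentation output: in the paper, pixels classified as dynamic obstacles would be excluded so they do not contribute to the cost, and the truncated L1 cost limits the influence of depth noise and occlusions.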

Keywords:
vehicle localization, optimization, deep learning, stereo camera, semantic segmentation, embedded systems.

Acknowledgments:
This work was supported in part (theoretical investigation and methodology) by the Analytical Center for the Government of the Russian Federation in accordance with the subsidy agreement (agreement identifier 000000D730321P5Q0002; grant No. 70-2021-00138).

Citation:
Belkin IV, Abramenko AA, Bezuglyi VD, Yudin DA. Localization of mobile robot in prior 3D LiDAR maps using stereo image sequence. Computer Optics 2024; 48(3): 406-417. DOI: 10.18287/2412-6179-CO-1369.

References:

  1. Myasnikov VV, Dmitriev EA. The accuracy dependency investigation of simultaneous localization and mapping on the errors from mobile device sensors. Computer Optics 2019; 43(3): 492-503. DOI: 10.18287/2412-6179-2019-43-3-492-503.
  2. Goshin YV, Kotov AP. Method for camera motion parameter estimation from a small number of corresponding points using quaternions. Computer Optics 2020; 44(3): 446-453. DOI: 10.18287/2412-6179-CO-683.
  3. Mur-Artal R, Tardós JD. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans Robot 2017; 33(5): 1255-1262.
  4. Labbé M, Michaud F. RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation. J Field Robot 2019; 36(2): 416-446.
  5. Zhang J, Singh S. Laser–visual–inertial odometry and mapping with high robustness and low drift. J Field Robot 2018; 35(8): 1242-1264.
  6. Shan T, Englot B. LeGO-LOAM: Lightweight and ground-optimized lidar odometry and mapping on variable terrain. 2018 IEEE/RSJ Int Conf on Intelligent Robots and Systems (IROS) 2018: 4758-4765.
  7. Caselitz T, Steder B, Ruhnke M, Burgard W. Monocular camera localization in 3D LiDAR maps. 2016 IEEE/RSJ Int Conf on Intelligent Robots and Systems (IROS) 2016: 1926-1931.
  8. Kim Y, Jeong J, Kim A. Stereo camera localization in 3D LiDAR maps. 2018 IEEE/RSJ Int Conf on Intelligent Robots and Systems (IROS) 2018: 1-9.
  9. Huang H, Ye H, Sun Y, Liu M. GMMLoc: Structure consistent visual localization with Gaussian mixture models. IEEE Robot Autom Lett 2020; 5: 5043-5050.
  10. Ding X, Wang Y, Li D, Tang L, Yin H, Xiong R. Laser map aided visual inertial localization in changing environment. 2018 IEEE/RSJ Int Conf on Intelligent Robots and Systems (IROS) 2018: 4794-4801.
  11. Zuo X, Geneva P, Yang Y, Ye W, Liu Y, Huang G. Visual-inertial localization with prior LiDAR map constraints. IEEE Robot Autom Lett 2019; 4: 3394-3401.
  12. Neuhold G, Ollmann T, Rota Bulo S, Kontschieder P. The mapillary vistas dataset for semantic understanding of street scenes. Proc IEEE Int Conf on Computer Vision 2017: 4990-4999.
  13. Wang Y, Lai Z, Huang G, Wang BH, Van Der Maaten L, Campbell M, Weinberger KQ. Anytime stereo image depth estimation on mobile devices. 2019 Int Conf on Robotics and Automation (ICRA) 2019: 5893-5900.
  14. Kasatkin N, Yudin D. Real-time approach to neural network-based disparity map generation from stereo images. In Book: Kryzhanovsky B, Dunin-Barkowski W, Redko V, Tiumentsev Y, Klimov VV, eds. Advances in neural computation, machine learning, and cognitive research V. Cham: Springer Nature Switzerland AG; 2022.
  15. Chen Y, Wang G. EnforceNet: Monocular camera localization in large scale indoor sparse LiDAR point cloud. arXiv Preprint. 2019. Source: <https://arxiv.org/abs/1907.07160>.
  16. Pauls JH, Petek K, Poggenhans F, Stiller C. Monocular localization in HD maps by combining semantic segmentation and distance transform. 2020 IEEE/RSJ Int Conf on Intelligent Robots and Systems (IROS) 2020: 4595-4601.
  17. Magnusson M, Lilienthal A, Duckett T. Scan registration for autonomous mining vehicles using 3D-NDT. J Field Robot 2007; 24(10): 803-827.
  18. Han D, Zou Z, Wang L, Xu C. A robust stereo camera localization method with prior LiDAR map constrains. 2019 IEEE Int Conf on Robotics and Biomimetics (ROBIO) 2019: 2001-2006.
  19. Sun M, Yang S, Liu H. Scale-aware camera localization in 3D LiDAR maps with a monocular visual odometry. Comput Animat Virtual Worlds 2019; 30(3-4): e1879.
  20. Yu H, Zhen W, Yang W, Scherer S. Line-based camera pose estimation in point cloud of structured environments. arXiv Preprint. 2019. Source: <https://arxiv.org/abs/1912.05013>.
  21. Yu H, Zhen W, Yang W, Zhang J, Scherer S. Monocular camera localization in prior LiDAR maps with 2D-3D line correspondences. 2020 IEEE/RSJ Int Conf on Intelligent Robots and Systems (IROS) 2020: 4588-4594.
  22. Lu Y, Huang J, Chen Y, Heisele B. Monocular localization in urban environments using road markings. 2017 IEEE Intelligent Vehicles Symposium (IV) 2017: 468-474.
  23. Jeong J, Cho Y, Kim A. HDMI-Loc: Exploiting high definition map image for precise localization via bitwise particle filter. IEEE Robot Autom Lett 2020; 5: 6310-6317.
  24. Qiu K, Liu T, Shen S. Model-based global localization for aerial robots using edge alignment. IEEE Robot Autom Lett 2017; 2: 1256-1263.
  25. Wong D, Kawanishi Y, Deguchi D, Ide I, Murase H. Monocular localization within sparse voxel maps. 2017 IEEE Intelligent Vehicles Symposium (IV) 2017: 499-504.
  26. Pascoe G, Maddern WP, Stewart AD, Newman P. FARLAP: Fast robust localisation using appearance priors. 2015 IEEE Int Conf on Robotics and Automation (ICRA) 2015: 6366-6373.
  27. Pascoe G, Maddern WP, Newman P. Robust direct visual localisation using normalised information distance. British Machine Vision Conf 2015: 1-13.
  28. Oishi S, Kawamata Y, Yokozuka M, Koide K, Banno A, Miura J. C*: Cross-modal simultaneous tracking and rendering for 6-DoF monocular camera localization beyond modalities. IEEE Robot Autom Lett 2020; 5: 5229-5236.
  29. Wolcott RW, Eustice R. Visual localization within LIDAR maps for automated urban driving. 2014 IEEE/RSJ Int Conf on Intelligent Robots and Systems 2014: 176-183.
  30. Neubert P, Schubert S, Protzel P. Sampling-based methods for visual navigation in 3D maps by synthesizing depth images. 2017 IEEE/RSJ Int Conf on Intelligent Robots and Systems (IROS) 2017: 2492-2498.
  31. Lu Y, Lee J, Yeh SH, Cheng HM, Chen B, Song D. Sharing heterogeneous spatial knowledge: Map fusion between asynchronous monocular vision and lidar or other prior inputs. In Book: Amato NM, Hager G, Thomas S, Torres-Torriti M, eds. Robotics research. Cham: Springer Nature Switzerland AG; 2020: 727-741.
  32. Bao H, Xie W, Qian Q, Chen D, Zhai S, Wang N, Zhang G. Robust tightly-coupled visual-inertial odometry with pre-built maps in high latency situations. IEEE Trans Vis Comput Graph 2022; 28(5): 2212-2222.
  33. Ye H, Huang H, Liu M. Monocular direct sparse localization in a prior 3D surfel map. 2020 IEEE Int Conf on Robotics and Automation (ICRA) 2020: 8892-8898.
  34. Huang H, Sun Y, Ye H, Liu M. Metric monocular localization using signed distance fields. 2019 IEEE/RSJ Int Conf on Intelligent Robots and Systems (IROS) 2019: 1195-1201.
  35. Zuo X, Ye W, Yang Y, Zheng R, Vidal-Calleja T, Huang G, Liu Y. Multimodal localization: Stereo over LiDAR map. J Field Robot 2020; 37(6): 1003-1026.
  36. Cattaneo D, Sorrenti DG, Valada A. CMRNet++: Map and camera agnostic monocular visual localization in LiDAR maps. arXiv Preprint. 2020. Source: <https://arxiv.org/abs/2004.13795>.
  37. Tang S, Tang C, Huang R, Zhu S, Tan P. Learning camera localization via dense scene matching. Proc IEEE/CVF Conf on Computer Vision and Pattern Recognition 2021: 1831-1841.
  38. Chang MF, Mangelson J, Kaess M, Lucey S. HyperMap: Compressed 3D map for monocular camera registration. 2021 IEEE Int Conf on Robotics and Automation (ICRA) 2021: 11739-11745.
  39. Spangenberg R, Langner T, Adfeldt S, Rojas R. Large scale semi-global matching on the CPU. 2014 IEEE Intelligent Vehicles Symposium Proc 2014: 195-201.
  40. Hirschmüller H. Accurate and efficient stereo processing by semi-global matching and mutual information. 2005 IEEE Computer Society Conf on Computer Vision and Pattern Recognition (CVPR’05) 2005; 2: 807-814.
  41. Hirschmüller H. Stereo processing by semiglobal matching and mutual information. IEEE Trans Pattern Anal Mach Intell 2007; 30(2): 328-341.
  42. Hirschmüller H. Semi-global matching-motivation, developments and applications. Photogrammetric Week 2011; 11: 173-184.
  43. Hirschmüller H, Buder M, Ernst I. Memory efficient semi-global matching. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2012; 3: 371-376.
  44. Menze M, Geiger A. Object scene flow for autonomous vehicles. Conf on Computer Vision and Pattern Recognition (CVPR) 2015: 1-10.
  45. Geiger A, Lenz P, Urtasun R. Are we ready for autonomous driving? The KITTI vision benchmark suite. Conf on Computer Vision and Pattern Recognition (CVPR) 2012: 3354-3361.
  46. Shepel I, Adeshkin V, Belkin I, Yudin DA. Occupancy grid generation with dynamic obstacle segmentation in stereo images. IEEE Trans Intell Transp Syst 2021; 23(9): 14779-14789.
  47. Belkin I, Abramenko A, Yudin D. Real-time lidar-based localization of mobile ground robot. Procedia Computer Sci 2021; 186: 440-448.
  48. Koide K. Interactive SLAM: Open source 3D LIDAR-based mapping framework. 2019. Source: <https://github.com/SMRT-AIST/interactive_slam>.
  49. Marton ZC, Rusu RB, Beetz M. On fast surface reconstruction methods for large and noisy datasets. Proc IEEE Int Conf on Robotics and Automation (ICRA) 2009: 3218-3223.
  50. Rusu RB, Cousins S. 3D is here: Point Cloud Library (PCL). 2011 IEEE Int Conf on Robotics and Automation 2011: 1-4.
  51. Del Moral P. Nonlinear filtering: Interacting particle resolution. Comptes Rendus de l’Académie des Sciences – Series I – Mathematics 1997; 325(6): 653-658.
  52. Nelder J, Mead R. A simplex method for function minimization. Comput J 1965; 7: 308-313.
  53. Geiger A, Lenz P, Stiller C, Urtasun R. Vision meets robotics: The KITTI dataset. Int J Rob Res 2013; 32: 1231-1237.
  54. Yabuuchi K, Wong DR, Ishita T, Kitsukawa Y, Kato S. Visual localization for autonomous driving using pre-built point cloud maps. 2021 IEEE Intelligent Vehicles Symposium (IV) 2021: 913-919.
  55. Laser odometry and localization. 2019. Source: <https://github.com/jyakaranda/LOL>.
  56. MapIV. Iris: Visual localization in pre-build pointcloud maps. 2020. Source: <https://github.com/MapIV/iris>.
