
Method for camera motion parameter estimation from a small number of corresponding points using quaternions
Ye.V. Goshin ¹, A.P. Kotov ¹,²

¹ Samara National Research University, Moskovskoye Shosse 34, Samara, 443086, Russia;
² IPSI RAS – Branch of the FSRC "Crystallography and Photonics" RAS, Molodogvardeyskaya 151, Samara, 443001, Russia


DOI: 10.18287/2412-6179-CO-683

Pages: 446-453.

Full text of article: Russian language.

Abstract:
In this paper, we study methods for determining camera motion parameters from a set of corresponding points. Unlike the traditional approach, the corresponding points are used not to estimate the fundamental matrix but to determine the motion parameters directly. In addition, we use a multi-view image formation model based on representing three-dimensional points and motion parameters as quaternions. We propose a method for determining motion parameters that includes selecting the least noisy matches with the RANSAC algorithm. The study presents results of experiments on the Middlebury and ETH3D test datasets, which contain sets of images with known motion parameter values. Using a program written in Python, a comparative experiment was conducted to evaluate the accuracy and reliability of the estimates obtained with the proposed method under conditions of a small number of corresponding points and a shallow scene depth. The experiments showed that, under these conditions, the reliability of parameter estimation with the proposed method significantly exceeds that of traditional methods based on computing the fundamental matrix.
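The core representational idea named in the abstract — holding the camera rotation as a unit quaternion and applying it to three-dimensional points in a motion model p' = R p + t — can be sketched in a few lines of Python. This is an illustrative example only, not the authors' implementation; all function names are our own:

```python
import math

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`."""
    ax, ay, az = axis
    n = math.sqrt(ax * ax + ay * ay + az * az)
    s = math.sin(angle / 2.0) / n
    return (math.cos(angle / 2.0), ax * s, ay * s, az * s)

def quat_mul(a, b):
    """Hamilton product; composing two rotations = multiplying their quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def rotate_point(q, p):
    """Rotate 3-D point p by unit quaternion q via q * (0, p) * q_conjugate."""
    w, x, y, z = q
    conj = (w, -x, -y, -z)
    _, rx, ry, rz = quat_mul(quat_mul(q, (0.0,) + tuple(p)), conj)
    return (rx, ry, rz)

# Rigid camera motion of the kind the paper estimates: p' = R p + t,
# with the rotation R stored as a quaternion.
q = quat_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)  # 90 degrees about z
t = (0.5, 0.0, 0.0)
p = (1.0, 0.0, 0.0)
moved = tuple(c + tc for c, tc in zip(rotate_point(q, p), t))
print(moved)  # approximately (0.5, 1.0, 0.0)
```

In an estimation setting, the four quaternion components (constrained to unit norm) and the three translation components become the unknowns fitted to the corresponding points, with a RANSAC-style loop scoring each candidate motion by how many matches it explains and discarding the rest as outliers.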

Keywords:
epipolar geometry, quaternion, motion parameters.

Citation:
Goshin YeV, Kotov AP. Method for camera motion parameter estimation from a small number of corresponding points using quaternions. Computer Optics 2020; 44(3): 446-453. DOI: 10.18287/2412-6179-CO-683.

Acknowledgements:
This work was supported by the Russian Foundation for Basic Research (projects No. 17-29-03112, 19-29-01235) and the RF Ministry of Science and Higher Education within a state contract with the “Crystallography and Photonics” Research Center of the RAS under agreement 007-ГЗ/Ч3363/26.

