Fusion of information from multiple Kinect sensors for 3D object reconstruction
Ruchay A.N., Dorofeev K.A., Kolpakov V.I.

Federal Research Centre of Biological Systems and Agro-technologies of the Russian Academy of Sciences, Orenburg, Russia,

Department of Mathematics, Chelyabinsk State University, Chelyabinsk, Russia


Abstract:
In this paper, we estimate the accuracy of 3D object reconstruction using multiple Kinect sensors. First, we discuss the calibration of multiple Kinect sensors and analyze the accuracy and resolution of the depth data. Next, we evaluate the precision of the coordinate mapping between sensor data used to register the depth and color images. Finally, we test the proposed system for 3D object reconstruction with four Kinect V2 sensors and present reconstruction accuracy results. Experiments and computer simulations are carried out using MATLAB and Kinect V2.
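The fusion step outlined above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each sensor's extrinsic calibration has already been estimated as a 4x4 rigid transform mapping that sensor's frame into a common world frame, and that each Kinect's depth map has already been converted to a point cloud. The function name `fuse_point_clouds` is hypothetical.

```python
import numpy as np

def fuse_point_clouds(clouds, extrinsics):
    """Fuse per-sensor point clouds into one cloud in a common world frame.

    clouds     : list of (N_i, 3) arrays, points in each sensor's own frame
    extrinsics : list of 4x4 rigid transforms (sensor -> world), assumed to
                 come from a multi-Kinect calibration step
    """
    fused = []
    for pts, T in zip(clouds, extrinsics):
        R, t = T[:3, :3], T[:3, 3]
        # Apply the rigid transform p' = R p + t to every point of this sensor.
        fused.append(pts @ R.T + t)
    # Concatenate all transformed clouds into a single (sum N_i, 3) array.
    return np.vstack(fused)
```

With four calibrated Kinect V2 sensors, the same call would simply take four clouds and four calibration transforms; downstream steps (outlier removal, surface reconstruction) would operate on the fused array.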

Keywords:
multiple sensors, Kinect, 3D object reconstruction, fusion.

Citation:
Ruchay AN, Dorofeev KA, Kolpakov VI. Fusion of information from multiple Kinect sensors for 3D object reconstruction. Computer Optics 2018; 42(5): 898-903. DOI: 10.18287/2412-6179-2018-42-5-898-903.


© 2009, IPSI RAS
151, Molodogvardeiskaya str., Samara, 443001, Russia; E-mail: journal@computeroptics.ru; Tel: +7 (846) 242-41-24 (Executive secretary), +7 (846) 332-56-22 (Issuing editor); Fax: +7 (846) 332-56-20