Proportional Error Back-Propagation (PEB): Real-Time Automatic Loop Closure Correction for Maintaining Global Consistency in 3D Reconstruction with Minimal Computational Cost

Abstract

This paper introduces a robust, real-time loop closure correction technique for achieving global consistency in 3D reconstruction. The underlying idea is to back-propagate the cumulative transformation error that builds up while merging pairs of consecutive frames in a sequence of shots taken by an RGB-D or depth camera. The proposed algorithm assumes that the first and last frames of the sequence roughly overlap. To verify the robustness and reliability of the proposed method, namely Proportional Error Back-Propagation (PEB), it has been applied to numerous case studies covering a wide range of experimental conditions, including different scanning trajectories with reversely directed motions within them, and the results are presented. The main contribution of the proposed algorithm is its considerably low computational cost, which makes it suitable for real-time 3D reconstruction applications. Moreover, it requires neither manual input nor intervention from the user, which renders the whole process automatic.
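To make the core idea concrete, the following is a minimal sketch of proportionally distributing the accumulated loop closure drift over a sequence of camera poses, under the paper's assumption that the first and last frames roughly overlap. It is not the authors' implementation: the function and variable names (e.g., correct_poses, abs_poses) are illustrative, it uses NumPy and SciPy, and it interpolates the rotational and translational parts of the correction separately via a rotation vector, which is a simplification of any exact SE(3) formulation.

```python
# Illustrative sketch only: distributes the loop-closure drift proportionally
# along the pose chain. Assumes at least two poses and small residual drift.
import numpy as np
from scipy.spatial.transform import Rotation


def correct_poses(abs_poses):
    """abs_poses: list of 4x4 camera-to-world poses P_0 ... P_{n-1}.

    Because the first and last frames roughly overlap, P_{n-1} should roughly
    equal P_0. The residual drift D = inv(P_0) @ P_{n-1} is the accumulated
    error; frame k absorbs a k/(n-1) fraction of the inverse drift, so the
    correction grows proportionally along the sequence and vanishes at frame 0.
    """
    n = len(abs_poses)
    drift = np.linalg.inv(abs_poses[0]) @ abs_poses[-1]
    inv_drift = np.linalg.inv(drift)

    # Split the full correction into a rotation vector and a translation.
    full_rotvec = Rotation.from_matrix(inv_drift[:3, :3]).as_rotvec()
    full_trans = inv_drift[:3, 3]

    corrected = []
    for k, P in enumerate(abs_poses):
        frac = k / (n - 1)
        C = np.eye(4)
        C[:3, :3] = Rotation.from_rotvec(frac * full_rotvec).as_matrix()
        C[:3, 3] = frac * full_trans
        corrected.append(P @ C)  # last pose is pulled back onto the first
    return corrected
```

At frac = 1 the composed correction equals the inverse drift exactly, so the last corrected pose coincides with the first; intermediate frames receive proportionally smaller corrections. The per-frame cost is a handful of small matrix operations, which is consistent with the low computational overhead the abstract emphasizes.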
