Dense visual SLAM for RGB-D cameras

Kerl C, Sturm J, Cremers D (2013)


Publication Type: Conference contribution

Publication year: 2013


Pages Range: 2100-2106

Conference Proceedings Title: IEEE International Conference on Intelligent Robots and Systems

Event location: Tokyo, Japan

ISBN: 9781467363587

DOI: 10.1109/IROS.2013.6696650

Abstract

In this paper, we propose a dense visual SLAM method for RGB-D cameras that minimizes both the photometric and the depth error over all pixels. In contrast to sparse, feature-based methods, this allows us to better exploit the available information in the image data which leads to higher pose accuracy. Furthermore, we propose an entropy-based similarity measure for keyframe selection and loop closure detection. From all successful matches, we build up a graph that we optimize using the g2o framework. We evaluated our approach extensively on publicly available benchmark datasets, and found that it performs well in scenes with low texture as well as low structure. In direct comparison to several state-of-the-art methods, our approach yields a significantly lower trajectory error. We release our software as open-source. © 2013 IEEE.
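The entropy-based keyframe and loop-closure criterion mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the differential-entropy formula for a Gaussian pose estimate is standard, but the threshold value and the covariance handling here are illustrative assumptions.

```python
import numpy as np

def pose_entropy(cov):
    """Differential entropy of a Gaussian pose estimate with covariance cov:
    H = 0.5 * ln((2*pi*e)^d * det(cov)), where d is the pose dimension
    (6 for a rigid-body motion)."""
    d = cov.shape[0]
    return 0.5 * np.log(((2 * np.pi * np.e) ** d) * np.linalg.det(cov))

def needs_new_keyframe(cov_current, cov_reference, threshold=0.9):
    """Entropy-ratio test (sketch): compare the entropy of the current
    frame-to-keyframe pose estimate against that of a reference estimate.
    When the ratio alpha falls below an empirically chosen threshold,
    tracking quality relative to the keyframe has degraded and a new
    keyframe would be inserted. The threshold of 0.9 is an assumption,
    not a value taken from the paper."""
    alpha = pose_entropy(cov_current) / pose_entropy(cov_reference)
    return alpha < threshold
```

The same similarity score can be reused for loop-closure candidates: frames whose entropy ratio against an older keyframe stays high are plausible matches, and every validated match becomes an edge in the pose graph that is then optimized with g2o.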


How to cite

APA:

Kerl, C., Sturm, J., & Cremers, D. (2013). Dense visual SLAM for RGB-D cameras. In IEEE International Conference on Intelligent Robots and Systems (pp. 2100-2106). JPN.

MLA:

Kerl, Christian, Jürgen Sturm, and Daniel Cremers. "Dense visual SLAM for RGB-D cameras." Proceedings of the 2013 26th IEEE/RSJ International Conference on Intelligent Robots and Systems: New Horizon, IROS 2013, JPN 2013. 2100-2106.
