Pauly O, Diotte B, Habert S, Weidert S, Euler E, Fallavollita P, Navab N (2014): Relevance-based visualization to improve surgeon perception
Publication Type: Conference contribution
Publication year: 2014
Publisher: Springer Verlag
Book Volume: 8498 LNCS
Pages Range: 178-185
Conference Proceedings Title: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Event location: JPN
ISBN: 9783319075204
DOI: 10.1007/978-3-319-07521-1_19
In computer-aided interventions, the surgeon's visual feedback is vital, and enhancing the relevant objects in the scene aids the perception of this feedback. In this paper, we present a learning-based labeling of the surgical scene using a depth camera (comprising RGB and depth range sensors). The depth sensor is used for background extraction, and Random Forests are used to segment the color images. The end result is a labeled scene with surgeon-hand, surgical-instrument, and background labels. We evaluated the method in 10 simulated surgeries with 5 clinicians and demonstrated that the approach provides surgeons with a dissected surgical scene, enhanced visualization, and improved depth perception. © 2014 Springer International Publishing Switzerland.
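The abstract outlines a two-stage pipeline: depth-based background extraction followed by Random Forest classification of the foreground color pixels into hand, instrument, and background labels. The sketch below illustrates that idea under stated assumptions only; the per-pixel features (raw RGB values), the depth tolerance, and the use of scikit-learn's RandomForestClassifier are illustrative choices, not the authors' exact pipeline.

```python
# Minimal sketch of the two-stage labeling idea from the abstract:
# a depth threshold separates background, then a Random Forest
# classifies the remaining pixels by color. Features, thresholds,
# and hyperparameters here are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

LABELS = {0: "background", 1: "hand", 2: "instrument"}  # assumed label set


def foreground_mask(depth_mm, background_depth_mm, tol_mm=30):
    """Pixels noticeably closer than the empty-scene depth count as foreground."""
    return (background_depth_mm - depth_mm) > tol_mm


def pixel_features(rgb):
    """Per-pixel color features; the paper's actual descriptors may differ."""
    return rgb.reshape(-1, 3).astype(np.float32) / 255.0


def train(rgb_frames, depth_frames, label_maps, background_depth_mm):
    """Fit a Random Forest on foreground pixels of annotated training frames."""
    X, y = [], []
    for rgb, depth, labels in zip(rgb_frames, depth_frames, label_maps):
        mask = foreground_mask(depth, background_depth_mm).ravel()
        X.append(pixel_features(rgb)[mask])
        y.append(labels.ravel()[mask])
    clf = RandomForestClassifier(n_estimators=50, max_depth=12, n_jobs=-1)
    clf.fit(np.concatenate(X), np.concatenate(y))
    return clf


def label_scene(clf, rgb, depth, background_depth_mm):
    """Return an H x W label map: 0 = background, RF predictions elsewhere."""
    out = np.zeros(depth.shape, dtype=np.int64)          # default: background
    mask = foreground_mask(depth, background_depth_mm)
    if mask.any():
        out[mask] = clf.predict(pixel_features(rgb)[mask.ravel()])
    return out
```

The label map produced by `label_scene` could then drive the relevance-based visualization described in the paper, e.g. by highlighting instrument pixels and dimming the background.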
APA:
Pauly, O., Diotte, B., Habert, S., Weidert, S., Euler, E., Fallavollita, P., & Navab, N. (2014). Relevance-based visualization to improve surgeon perception. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (pp. 178-185). JPN: Springer Verlag.
MLA:
Pauly, Olivier, et al. "Relevance-based visualization to improve surgeon perception." Proceedings of the 5th International Conference on Information Processing in Computer-Assisted Interventions, IPCAI 2014, Springer Verlag, 2014, pp. 178-185.