Towards unsupervised image captioning with shared multimodal embeddings
Laina I, Rupprecht C, Navab N (2019)
Publication Type: Conference contribution
Publication year: 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Book Volume: 2019-October
Pages Range: 7413-7423
Conference Proceedings Title: Proceedings of the IEEE International Conference on Computer Vision
Event location: Seoul, KOR
ISBN: 9781728148038
Understanding images without explicit supervision has become an important problem in computer vision. In this paper, we address image captioning by generating language descriptions of scenes without learning from annotated pairs of images and their captions. The core component of our approach is a shared latent space that is structured by visual concepts and in which the two modalities should be indistinguishable. A language model is first trained to encode sentences into semantically structured embeddings. Image features that are translated into this embedding space can then be decoded into descriptions through the same language model, just like sentence embeddings. This translation is learned from weakly paired images and text using a loss that is robust to noisy assignments, together with a conditional adversarial component. Our approach allows us to exploit large text corpora outside the annotated distributions of image/caption data. Our experiments show that the proposed domain alignment learns a semantically meaningful representation that outperforms previous work.
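For orientation only, the following is a minimal PyTorch-style sketch of the shared-embedding idea the abstract describes: sentences and pre-extracted image features are mapped into one latent space, and a single decoder generates captions from either modality. All module names, architectures, and dimensions here are hypothetical illustrations, not the authors' implementation.

import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Encodes a token sequence into the shared latent space (hypothetical)."""
    def __init__(self, vocab_size, embed_dim=256, latent_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, latent_dim, batch_first=True)

    def forward(self, tokens):                  # tokens: (B, T)
        _, h = self.rnn(self.embed(tokens))     # final hidden state
        return h.squeeze(0)                     # (B, latent_dim)

class ImageToEmbedding(nn.Module):
    """Translates pre-extracted image features into the same latent space."""
    def __init__(self, feat_dim=2048, latent_dim=512):
        super().__init__()
        self.map = nn.Sequential(
            nn.Linear(feat_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, img_feats):               # img_feats: (B, feat_dim)
        return self.map(img_feats)              # (B, latent_dim)

class CaptionDecoder(nn.Module):
    """Decodes a latent vector into tokens; shared by both modalities."""
    def __init__(self, vocab_size, embed_dim=256, latent_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, latent_dim, batch_first=True)
        self.out = nn.Linear(latent_dim, vocab_size)

    def forward(self, latent, tokens):          # teacher forcing
        h0 = latent.unsqueeze(0)                # (1, B, latent_dim)
        out, _ = self.rnn(self.embed(tokens), h0)
        return self.out(out)                    # (B, T, vocab_size)

In the training scheme the abstract outlines, the sentence encoder and decoder would first be trained as a text autoencoder on a large corpus; the image-to-embedding mapping would then be trained, e.g. with a noise-robust matching loss and a conditional adversarial critic, so that image embeddings become indistinguishable from sentence embeddings and can be decoded by the same language model.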
APA:
Laina, I., Rupprecht, C., & Navab, N. (2019). Towards unsupervised image captioning with shared multimodal embeddings. In Proceedings of the IEEE International Conference on Computer Vision (pp. 7413-7423). Seoul, KOR: Institute of Electrical and Electronics Engineers Inc.
MLA:
Laina, Iro, Christian Rupprecht, and Nassir Navab. "Towards unsupervised image captioning with shared multimodal embeddings." Proceedings of the 17th IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, KOR, Institute of Electrical and Electronics Engineers Inc., 2019, pp. 7413-7423.