TransDeepLab: Convolution-Free Transformer-Based DeepLab v3+ for Medical Image Segmentation

Azad R, Heidari M, Shariatnia M, Aghdam EK, Karimijafarbigloo S, Adeli E, Merhof D (2022)


Publication Type: Conference contribution

Publication year: 2022

Publisher: Springer Science and Business Media Deutschland GmbH

Book Volume: 13564 LNCS

Pages Range: 91-102

Conference Proceedings Title: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Event location: Virtual, Online

ISBN: 9783031169182

DOI: 10.1007/978-3-031-16919-9_9

Abstract

Convolutional neural networks (CNNs) have been the de facto standard in a diverse set of computer vision tasks for many years. In particular, deep networks based on seminal architectures such as the U-shaped model with skip connections or atrous convolution with pyramid pooling have been tailored to a wide range of medical image analysis tasks. The main advantage of such architectures is that they are adept at capturing versatile local features. However, it is generally agreed that CNNs fail to capture long-range dependencies and spatial correlations due to the intrinsically confined receptive field of convolution operations. Alternatively, the Transformer, profiting from the global information modeling of its self-attention mechanism, has recently attained remarkable performance in natural language processing and computer vision. Nevertheless, previous studies show that both local and global features are critical for a deep model in dense prediction, such as segmenting complicated structures with disparate shapes and configurations. This paper proposes TransDeepLab, a novel DeepLab-like pure-Transformer model for medical image segmentation. Specifically, we exploit a hierarchical Swin Transformer with shifted windows to extend DeepLabv3+ and to model the Atrous Spatial Pyramid Pooling (ASPP) module. To the best of our knowledge, we are the first to model the seminal DeepLab architecture with a pure Transformer. Extensive experiments on various medical image segmentation tasks verify that our approach performs on par with or superior to most contemporary methods that combine Vision Transformers and CNNs, while significantly reducing model complexity. The code and trained models are publicly available on GitHub.

How to cite

APA:

Azad, R., Heidari, M., Shariatnia, M., Aghdam, E.K., Karimijafarbigloo, S., Adeli, E., & Merhof, D. (2022). TransDeepLab: Convolution-Free Transformer-Based DeepLab v3+ for Medical Image Segmentation. In Islem Rekik, Ehsan Adeli, Sang Hyun Park, Celia Cintas (Eds.), Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (pp. 91-102). Virtual, Online: Springer Science and Business Media Deutschland GmbH.

MLA:

Azad, Reza, et al. "TransDeepLab: Convolution-Free Transformer-Based DeepLab v3+ for Medical Image Segmentation." Proceedings of the 5th International Workshop on Predictive Intelligence in Medicine, PRIME 2022, held in conjunction with the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022, Virtual, Online. Ed. Islem Rekik, Ehsan Adeli, Sang Hyun Park, Celia Cintas. Springer Science and Business Media Deutschland GmbH, 2022. 91-102.
