APRNet: A 3D Anisotropic Pyramidal Reversible Network with Multi-Modal Cross-Dimension Attention for Brain Tissue Segmentation in MR Images

Yuzhou Zhuang, Huazhong University of Science and Technology
Hong Liu, Huazhong University of Science and Technology
Enmin Song, Huazhong University of Science and Technology
Chih Cheng Hung, Kennesaw State University


Brain tissue segmentation in multi-modal magnetic resonance (MR) images is important for the clinical diagnosis of brain diseases. Owing to blurred boundaries, low contrast, and intricate anatomical relationships between brain tissue regions, automatic brain tissue segmentation without prior knowledge remains challenging. This paper presents a novel 3D fully convolutional network (FCN) for brain tissue segmentation, called APRNet. We first propose a 3D anisotropic pyramidal convolutional reversible residual sequence (3DAPC-RRS) module that integrates intra-slice and inter-slice information without significant memory consumption; we then design a multi-modal cross-dimension attention (MCDA) module that automatically captures the effective information in each dimension of multi-modal images; finally, we combine the 3DAPC-RRS and MCDA modules in a 3D FCN with multiple encoding streams and one decoding stream to form the overall architecture of APRNet. We evaluated APRNet on two benchmark challenges, MRBrainS13 and iSeg-2017. The experimental results show that APRNet yields state-of-the-art segmentation results on both benchmark datasets and achieves the best segmentation performance on the cerebrospinal fluid region. By exploiting the complementary information of different modalities, our approach segments brain tissue regions in both adult and infant MR images, achieving average Dice coefficients of 87.22% and 93.03% on the MRBrainS13 and iSeg-2017 test sets, respectively. The proposed method is beneficial for quantitative brain analysis in clinical studies, and our code is publicly available.
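The core idea behind the anisotropic pyramidal convolution can be illustrated with a minimal sketch: a 3D convolution is decomposed into an intra-slice (1 x k x k) kernel and an inter-slice (k x 1 x 1) kernel whose responses are summed, so in-plane detail and through-plane context are captured with far fewer weights than a full k x k x k kernel. The function names, kernel choices, and single-branch structure below are illustrative assumptions, not the exact 3DAPC-RRS module from the paper.

```python
import numpy as np

def conv3d_same(vol, kernel):
    """Naive 3D convolution with zero padding ('same' output size)."""
    kd, kh, kw = kernel.shape
    padded = np.pad(vol, ((kd // 2,) * 2, (kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros(vol.shape, dtype=float)
    D, H, W = vol.shape
    for d in range(D):
        for h in range(H):
            for w in range(W):
                out[d, h, w] = np.sum(padded[d:d + kd, h:h + kh, w:w + kw] * kernel)
    return out

def anisotropic_conv(vol):
    """Sum of an intra-slice and an inter-slice convolution (illustrative).

    The averaging kernels stand in for learned weights; a trained network
    would learn separate filters for each branch.
    """
    intra = np.ones((1, 3, 3)) / 9.0   # 1 x 3 x 3: in-plane context
    inter = np.ones((3, 1, 1)) / 3.0   # 3 x 1 x 1: through-plane context
    return conv3d_same(vol, intra) + conv3d_same(vol, inter)
```

Because both kernels average to 1, an interior voxel of a constant volume maps to twice the constant, which makes the decomposition easy to sanity-check.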