CFNet: A medical image segmentation method using the multi-view attention mechanism and adaptive fusion strategy
Department
Computer Science
Document Type
Article
Publication Date
1-1-2023
Abstract
Image feature extraction methods based on the attention mechanism have contributed significantly to the accuracy of medical image segmentation. However, current attention mechanisms rely on single-view information for feature extraction, which limits their ability to extract effective features. In this study, we adopt the encoder-decoder structure of U-Net as the basic network and construct a medical image segmentation method based on a multi-view attention mechanism and an adaptive fusion strategy; we refer to this new network as CFNet. The first component of CFNet is a cross-scale feature fusion method (CFF) that employs a new multi-view attention mechanism (MAM) for feature extraction. It effectively extracts features across a multi-receptive-field space and produces more effective cross-scale fusion features in the skip connections. The second component is a fusion weight adaptive allocation strategy (FAS), which guides the cross-scale fusion features to connect effectively to the decoder features, bridging the semantic gap. We evaluated CFNet on two publicly available medical image segmentation datasets: MoNuSeg and LGG. The experimental results show that CFNet achieves better performance than current state-of-the-art methods in medical image segmentation. Extensive ablation studies further validate our method.
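The abstract does not give the exact formulation of the fusion weight adaptive allocation strategy (FAS). As a minimal illustrative sketch only, the code below shows one common way such adaptive fusion is realized: per-branch scalar weights normalized with a softmax so the skip-connection and decoder features are blended with learned, data-dependent proportions. The function names and the two-branch setup are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def softmax(w):
    """Numerically stable softmax over a 1-D weight vector."""
    e = np.exp(w - np.max(w))
    return e / e.sum()

def adaptive_fuse(skip_feat, decoder_feat, weights):
    """Blend a cross-scale skip feature with a decoder feature using
    softmax-normalized weights. This is a generic stand-in for FAS;
    the paper's exact formulation is not specified in the abstract."""
    a = softmax(weights)  # normalized fusion weights, sum to 1
    return a[0] * skip_feat + a[1] * decoder_feat

# Toy feature maps of shape (channels, height, width).
skip = np.ones((4, 8, 8))
dec = np.zeros((4, 8, 8))

# Equal raw weights -> softmax gives 0.5/0.5, so the fused map is 0.5.
fused = adaptive_fuse(skip, dec, np.array([0.0, 0.0]))
```

In a trained network the raw weights would be learnable parameters (or predicted from the features themselves), letting the model decide per layer how much the encoder's cross-scale features should contribute relative to the decoder's.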
Journal Title
Biomedical Signal Processing and Control
Journal ISSN
1746-8094
Volume
79
Digital Object Identifier (DOI)
10.1016/j.bspc.2022.104112