Article

A Medical Image Segmentation Method Based on Improved UNet 3+ Network

1
Institute of Disaster and Emergency Medicine, Tianjin University, Tianjin 300072, China
2
Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
*
Author to whom correspondence should be addressed.
Diagnostics 2023, 13(3), 576; https://doi.org/10.3390/diagnostics13030576
Submission received: 2 December 2022 / Revised: 16 January 2023 / Accepted: 1 February 2023 / Published: 3 February 2023
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract
In recent years, segmentation detail and computational efficiency have become increasingly important in medical image segmentation for clinical applications. In deep learning, UNet, based on a convolutional neural network, is one of the most commonly used models. UNet 3+ modified UNet by adopting full-scale skip connections. However, full-scale feature fusion results in excessively redundant computation. This study aimed to reduce the network parameters of UNet 3+ while further improving its feature extraction capability. First, to eliminate redundancy and improve computational efficiency, we prune the full-scale skip connections of UNet 3+. In addition, we use the Convolutional Block Attention Module (CBAM) to capture more essential features and thus improve feature representation. The performance of the proposed model was validated on three different types of datasets: skin cancer segmentation, breast cancer segmentation, and lung segmentation. The number of parameters is reduced by about 36% and 18% compared to UNet and UNet 3+, respectively. The results show that the proposed method not only outperforms the comparison models on a variety of evaluation metrics but also achieves more accurate segmentation results. The proposed models have fewer network parameters, enhance feature extraction, and improve segmentation performance efficiently. Furthermore, the models have great potential for application in computer-aided diagnosis with medical imaging.

1. Introduction

Medical imaging provides a wealth of information to help clinicians reach the best possible diagnosis. However, current medical imaging diagnosis relies primarily on manual interpretation, which adds to doctors’ workload and can lead to misjudgment. Computer-aided diagnosis has been shown to be a reliable tool for reducing the strain on clinicians and shortening the time needed to evaluate medical images [1,2,3]. Among related topics, the automatic segmentation of medical images is one of the current research hotspots.
Since deep learning was first applied to this task, many researchers have employed convolutional neural networks (CNNs) to enable the automatic segmentation of medical images [4,5,6]. Representative CNN models include FCN [7], SegNet [8], PSPNet [9], DeepLab [10,11], and UNet [12]. The encoder-decoder-based UNet architecture, in particular, is widely used for medical image segmentation. It uses skip connections to combine the encoder’s low-level feature maps with the corresponding decoder’s high-level feature maps. However, the direct skip connections in UNet limit its capacity to capture abundant features [13]. UNet++ [14,15] introduces nested and dense skip pathways in these connections, superimposing high-resolution encoder features onto the deeper decoder layers and reducing the semantic gap between feature maps. However, UNet++ has more parameters than UNet. In addition, the edge and location information of the image is easily diluted by the down-sampling and up-sampling operations of a deep network, so the low-level feature maps are not fully exploited. Subsequently, UNet 3+ [16] aggregated feature maps from a full-scale perspective. Although UNet 3+ can capture full-scale coarse-grained and fine-grained semantic feature maps, the features of adjacent layers contribute very similarly to the segmentation result when connected to the corresponding decoder layer. Therefore, full-scale feature fusion involves excessive redundant computation. By pruning the skip connections of UNet 3+ to reduce this redundancy, we propose an optimized multi-scale skip connection segmentation architecture named Ref-UNet 3+. We also show that the segmentation performance of this model does not degrade.
In addition, many medical image segmentation and classification tasks have used attention mechanisms, which make a neural network focus on and select significant features while suppressing unnecessary ones. Recent studies include the following: He et al. [17] proposed a classification model using a category attention block to identify diabetic retinopathy with small lesions and imbalanced data distribution. Hu et al. [18] proposed a parallel deep learning segmentation algorithm based on a hybrid attention mechanism that can extract multi-scale feature maps. Xiao et al. [19] proposed an MRI brain disease detection model based on transferred residual networks combined with a convolutional block attention module (CBAM), which performs well in two-class and multi-class tasks. Canayaz [20] proposed an EfficientNet consisting of attention blocks and showed that the attention mechanism plays a critical role in extracting deep-level features. Niu et al. [21] proposed a multi-scale attention-based convolutional neural network that enhances the network’s feature expression ability. This study further introduces a lightweight attention module, CBAM [22], into the decoder to enhance the feature extraction and representation capabilities of the network.
In summary, the main contributions of this paper are as follows:
(1)
We proposed an improved model, Ref-UNet 3+, which reduces unnecessary redundant computation.
(2)
We used the advanced attention module CBAM to enhance the feature extraction ability of the network.
(3)
Algorithm comparison and quantitative analysis were performed on three different medical imaging modalities (skin cancer segmentation, breast cancer segmentation, and lung segmentation), demonstrating the superior performance of our proposed model.

2. Methods

Figure 1 shows the proposed network model. The model is pruned on the basis of UNet 3+, and an attention mechanism is then added to each decoder layer. Taking $X_{de}^{3}$ as an example, the features from $X_{en}^{1}$, $X_{en}^{3}$, and $X_{de}^{4}$ are concatenated, passed through a 3 × 3 convolution and a BN layer, sent to the CBAM module, and then forwarded to the next layer for the subsequent operations.
Compared with UNet and UNet 3+, the improved model provides more accurate segmentation results with fewer parameters by using multi-scale feature fusion and attention mechanisms.

2.1. Redesigned Multi-Scale Skip Connections

The redesigned skip connections consider the positive impact of specific multi-scale feature maps on segmentation. In UNet 3+, each decoder layer receives adjacent multi-scale feature maps whose contributions to segmentation are similar, resulting in excessive redundant computation. As an example, in UNet 3+ the feature map $X_{de}^{3}$ is obtained from $X_{en}^{1}$ and $X_{en}^{2}$ through different max-pooling operations, the same-scale encoder layer $X_{en}^{3}$ is fed into a plain 3 × 3 convolution layer followed by a sigmoid function, and the larger-scale decoder layers $X_{de}^{5}$ and $X_{de}^{4}$ are fed into a 3 × 3 convolution layer followed by bilinear up-sampling and a sigmoid function. However, the feature maps $X_{en}^{2}$ and $X_{en}^{3}$, as well as $X_{de}^{5}$ and $X_{de}^{4}$, differ little in their contribution to segmentation. Our proposed model therefore prunes UNet 3+ accordingly and retains only the low-level and high-level semantic feature maps that contribute most.
Formally, we formulate the multi-scale skip connections as follows: let $i$ index the current encoder-decoder level and let $N$ be the total number of levels in the network. The stack of feature maps represented by $X_{de}^{i}$ is computed as:
$$
X_{de}^{i}=
\begin{cases}
A\left(C\left(\left[\,S\!\left(X_{en}^{i}\right),\ \left\{U\!\left(X_{de}^{k}\right)\right\}_{k=i+1}^{N-2}\,\right]\right)\right), & i=1\\[4pt]
A\left(C\left(\left[\,S\!\left(X_{en}^{i}\right),\ U\!\left(X_{de}^{k}\right)\big|_{k=i+1},\ M\!\left(X_{en}^{j}\right)\big|_{j=i-2}\,\right]\right)\right), & i>1,\ j>0
\end{cases}
\tag{1}
$$
where the function $A$ represents the convolutional block attention module (CBAM) followed by a ReLU activation function, $C$ denotes a convolution and batch normalization operation, $S$ represents a convolution followed by batch normalization and a ReLU activation function, $U$ and $M$ denote up-sampling and down-sampling, respectively, and $[\,\cdot\,]$ indicates concatenation of the multi-scale feature maps.
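To make the pruned connection pattern concrete, the following is a minimal PyTorch sketch of a single decoder stage for the case $i>1$ in Equation (1). The class and argument names are ours, the per-branch channel reduction and the down-sampling factor of 4 (two resolution levels from $X_{en}^{i-2}$) are assumptions, and the attention block can be the CBAM of Section 2.2; this is an illustrative sketch, not the authors’ reference implementation.

```python
import torch
import torch.nn as nn


class ConvBNReLU(nn.Sequential):
    """3x3 convolution followed by batch normalization and ReLU (the S operation in Eq. (1))."""
    def __init__(self, in_ch, out_ch):
        super().__init__(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )


class PrunedDecoderStage(nn.Module):
    """One decoder stage X_de^i (i > 1) of the pruned skip connections:
    concat[S(X_en^i), U(X_de^{i+1}), M(X_en^{i-2})] -> Conv + BN (C) -> attention + ReLU (A)."""
    def __init__(self, ch_en_same, ch_de_below, ch_en_shallow, out_ch, attention=None):
        super().__init__()
        self.same = ConvBNReLU(ch_en_same, out_ch)          # S: same-scale encoder feature
        self.up = nn.Sequential(                            # U: bilinear up-sampling of X_de^{i+1}
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=True),
            ConvBNReLU(ch_de_below, out_ch),
        )
        self.down = nn.Sequential(                          # M: max pooling of X_en^{i-2}
            nn.MaxPool2d(kernel_size=4, stride=4),
            ConvBNReLU(ch_en_shallow, out_ch),
        )
        self.fuse = nn.Sequential(                          # C: convolution + batch normalization
            nn.Conv2d(3 * out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.attention = attention if attention is not None else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x_en_same, x_de_below, x_en_shallow):
        fused = torch.cat(
            [self.same(x_en_same), self.up(x_de_below), self.down(x_en_shallow)], dim=1
        )
        return self.act(self.attention(self.fuse(fused)))   # A: attention module followed by ReLU
```

For the example $X_{de}^{3}$ above, `x_en_same` would be $X_{en}^{3}$, `x_de_below` would be $X_{de}^{4}$, and `x_en_shallow` would be $X_{en}^{1}$.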

2.2. CBAM in the Decoder

Among the many attention models, CBAM is a lightweight feedforward convolutional attention module that can be integrated into any CNN architecture for end-to-end training [22,23]. Figure 2a illustrates the CBAM structure, Figure 2b the channel attention, and Figure 2c the spatial attention. By combining channel attention and spatial attention, CBAM produces more expressive feature information and thus extracts contextual information more efficiently from images at various scales [24]. In our model, each decoder layer produces a feature map $F \in \mathbb{R}^{C \times H \times W}$ through a convolution operation, and this feature map $F$ is taken as the input of CBAM. The channel attention map $M_{c} \in \mathbb{R}^{C \times 1 \times 1}$ and the channel-refined feature map $F'$ are computed first, then the spatial attention map $M_{s} \in \mathbb{R}^{1 \times H \times W}$ is derived, and finally the refined feature map $F''$ is output. The relevant calculations are summarized as follows:
$$F' = M_{c}(F) \otimes F \tag{2}$$
$$F'' = M_{s}(F') \otimes F' \tag{3}$$
where $\otimes$ represents element-wise multiplication of the corresponding feature matrices.
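For reference, the following is a compact PyTorch sketch of CBAM consistent with Equations (2) and (3). The reduction ratio of 16 and the 7 × 7 spatial kernel are the defaults of the original CBAM paper rather than values stated in this article.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel attention M_c: shared MLP over global average- and max-pooled descriptors."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)                        # M_c(F), shape (B, C, 1, 1)


class SpatialAttention(nn.Module):
    """Spatial attention M_s: 7x7 convolution over channel-wise average and max maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))  # M_s(F'), shape (B, 1, H, W)


class CBAM(nn.Module):
    """F'  = M_c(F)  * F   (Eq. (2));   F'' = M_s(F') * F'   (Eq. (3))."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, f):
        f = self.ca(f) * f      # channel-refined feature map F'
        return self.sa(f) * f   # spatially refined output F''
```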

3. Experiments and Results

We conducted experiments on three different types of medical imaging datasets to validate the performance of the proposed model. Figure 3 shows a sample from each dataset. The models in this paper are implemented in Python with the PyTorch deep learning framework. The experiments were run under Windows 10 on a dual-core Intel(R) i7-11700 CPU with 32 GB of RAM and an NVIDIA GeForce RTX 3080 GPU with 10 GB of memory. For fairness, the training parameters of the compared models were kept consistent, and the reported results are the average of five random validation runs.

3.1. Datasets

3.1.1. Skin Cancer Segmentation

The dataset used in this study is from the 2018 ISIC Challenge [25] and is available at https://www.kaggle.com/datasets/tschandl/isic2018-challenge-task1-data-segmentation/ (accessed on 15 November 2022). It consists of 2594 images and 2594 corresponding ground-truth masks. In this implementation, each sample was rescaled to 256 × 256 pixels.

3.1.2. Breast Cancer Segmentation

This dataset contains breast cancer ultrasound images [26] and is available at https://www.kaggle.com/datasets/aryashah2k/breast-ultrasound-images-dataset/ (accessed on 15 November 2022). It consists of 780 images with an average size of 500 × 500 pixels. The experimental data comprise 647 benign and malignant samples, which were resized to 256 × 256 pixels in this implementation.

3.1.3. Lung Segmentation

This dataset is taken from the Lung Nodule Analysis (LUNA) challenge and the 2017 Kaggle Data Science Bowl and is available at https://www.kaggle.com/datasets/kmader/finding-lungs-in-ct-data/ (accessed on 15 November 2022). The original image size was 512 × 512 pixels. In this implementation, each sample was rescaled to 256 × 256 pixels.
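For all three datasets, the preprocessing described above reduces to loading image/mask pairs and rescaling them to 256 × 256 pixels. The following is a minimal PyTorch sketch of such a dataset class; the directory layout (parallel `images/` and `masks/` folders with matching file names) is an assumption for illustration and does not reflect the layout of the Kaggle archives themselves.

```python
import os

import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class SegDataset(Dataset):
    """Generic image/mask dataset: every sample is rescaled to 256 x 256 pixels."""
    def __init__(self, image_dir, mask_dir, size=256):
        self.image_dir, self.mask_dir = image_dir, mask_dir
        self.names = sorted(os.listdir(image_dir))          # assumes matching file names
        self.to_tensor = transforms.Compose([
            transforms.Resize((size, size)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = Image.open(os.path.join(self.image_dir, name)).convert("RGB")
        mask = Image.open(os.path.join(self.mask_dir, name)).convert("L")
        # Binarize the ground-truth mask so every pixel is exactly 0 or 1.
        return self.to_tensor(image), (self.to_tensor(mask) > 0.5).float()
```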

3.2. Quantitative Analysis Approaches

To evaluate the model reasonably, we considered the following evaluation indicators: accuracy (ACC), sensitivity (SE), precision (PRE), F1-score, Jaccard similarity (JS), and Dice coefficient (DC).
$$\mathrm{ACC} = \frac{\mathrm{TP} + \mathrm{TN}}{\mathrm{TP} + \mathrm{TN} + \mathrm{FP} + \mathrm{FN}} \tag{4}$$
$$\mathrm{SE} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}} \tag{5}$$
$$\mathrm{PRE} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}} \tag{6}$$
$$\mathrm{F1\text{-}score} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Sensitivity}}{\mathrm{Precision} + \mathrm{Sensitivity}} \tag{7}$$
$$\mathrm{JS} = \frac{\left| \mathrm{GT} \cap \mathrm{SR} \right|}{\left| \mathrm{GT} \cup \mathrm{SR} \right|} \tag{8}$$
$$\mathrm{DC} = \frac{2\left| \mathrm{GT} \cap \mathrm{SR} \right|}{\left| \mathrm{GT} \right| + \left| \mathrm{SR} \right|} \tag{9}$$
In Equations (4)–(6), true positive (TP) is the number of target pixels correctly segmented as target, false positive (FP) is the number of background pixels incorrectly segmented as target, true negative (TN) is the number of background pixels correctly segmented as background, and false negative (FN) is the number of target pixels incorrectly segmented as background. In Equations (8) and (9), ground truth (GT) and segmentation result (SR) denote the true labels and the generated prediction maps, respectively.
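As a sketch of how these metrics follow from the pixel counts, the function below computes Equations (4)–(9) for a pair of binary masks. The small epsilon added to the denominators is a common numerical safeguard, not a value from the paper.

```python
import torch


def segmentation_metrics(pred, target, eps=1e-7):
    """Pixel-wise metrics of Equations (4)-(9) for binary (0/1) masks of the same shape."""
    pred, target = pred.float().flatten(), target.float().flatten()
    tp = (pred * target).sum()                    # target pixels segmented as target
    tn = ((1 - pred) * (1 - target)).sum()        # background pixels segmented as background
    fp = (pred * (1 - target)).sum()              # background pixels segmented as target
    fn = ((1 - pred) * target).sum()              # target pixels segmented as background

    acc = (tp + tn) / (tp + tn + fp + fn + eps)
    se = tp / (tp + fn + eps)                     # sensitivity (recall)
    pre = tp / (tp + fp + eps)                    # precision
    f1 = 2 * pre * se / (pre + se + eps)
    js = tp / (tp + fp + fn + eps)                # |GT ∩ SR| / |GT ∪ SR|
    dc = 2 * tp / (2 * tp + fp + fn + eps)        # 2|GT ∩ SR| / (|GT| + |SR|)
    return {"ACC": acc, "SE": se, "PRE": pre, "F1": f1, "JS": js, "DC": dc}
```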

3.3. Loss Function

The model in this paper is an end-to-end deep learning network. The Dice coefficient loss is commonly used as the loss function in medical image segmentation; the Dice coefficient is a set-similarity measure that quantifies the overlap between two samples and takes values in the range [0, 1].
The Dice coefficient loss is computed as:
$$L_{dice} = 1 - \frac{2\left| \mathrm{GT} \cap \mathrm{SR} \right|}{\left| \mathrm{GT} \right| + \left| \mathrm{SR} \right|} \tag{10}$$
where $\left| \mathrm{GT} \cap \mathrm{SR} \right|$ is the intersection between the label and the prediction, and $\left| \mathrm{GT} \right| + \left| \mathrm{SR} \right|$ denotes the sum of the elements of the label and the prediction.
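A minimal PyTorch sketch of Equation (10) is shown below. The soft (probability-weighted) overlap and the smoothing term are standard implementation details assumed here rather than stated in the paper.

```python
import torch
import torch.nn as nn


class DiceLoss(nn.Module):
    """Dice coefficient loss of Equation (10): 1 - 2|GT ∩ SR| / (|GT| + |SR|)."""
    def __init__(self, smooth=1.0):
        super().__init__()
        self.smooth = smooth                               # avoids division by zero on empty masks

    def forward(self, logits, target):
        prob = torch.sigmoid(logits)                       # prediction map SR in [0, 1]
        prob, target = prob.flatten(1), target.flatten(1)  # flatten per sample
        inter = (prob * target).sum(dim=1)                 # |GT ∩ SR|
        union = prob.sum(dim=1) + target.sum(dim=1)        # |GT| + |SR|
        dice = (2 * inter + self.smooth) / (union + self.smooth)
        return 1 - dice.mean()
```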

3.4. Results

3.4.1. Skin Cancer Segmentation

In this implementation, we adopted the Adam [27] optimizer with a weight decay of 0.0001, a data augmentation ratio of 0.5, and a learning rate of 2 × 10⁻⁴. The number of iterations was 200, and the loss function was the Dice coefficient loss. The encoder of the U-shaped network uses 64→128→256→512→1024 channels, and the decoder layers are adjusted accordingly for the different models. The proposed models, Ref-UNet 3+ and CBAM+Ref-UNet 3+, were compared with UNet and UNet 3+ in terms of training loss and validation accuracy; the results are shown in Figure 4 and Figure 5. The proposed approach achieves a smaller loss, quicker convergence, and higher accuracy, clearly demonstrating the robustness of the proposed model.
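The training configuration described above can be set up as in the following sketch. `CBAMRefUNet3Plus` is a hypothetical constructor standing in for the proposed network, `train_set` is a `SegDataset` as sketched in Section 3.1, `DiceLoss` is the loss sketched in Section 3.3, and the batch size is an assumption, as it is not reported in the paper.

```python
import torch
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CBAMRefUNet3Plus().to(device)          # hypothetical constructor for CBAM+Ref-UNet 3+
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, weight_decay=1e-4)
criterion = DiceLoss()                         # Dice coefficient loss (Section 3.3)
loader = DataLoader(train_set, batch_size=8, shuffle=True)   # batch size not stated in the paper

for epoch in range(200):                       # 200 training iterations/epochs as reported
    model.train()
    for images, masks in loader:
        images, masks = images.to(device), masks.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), masks)
        loss.backward()
        optimizer.step()
```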
Table 1 shows the results of the 5-fold cross-validation and the averages. Our proposed models achieve excellent segmentation performance, with the number of parameters reduced by roughly 36% and 18% compared to UNet and UNet 3+, respectively. Among them, CBAM+Ref-UNet 3+ surpasses UNet, UNet 3+, and Ref-UNet 3+. Its average F1-score in the testing phase reaches 0.8970, which is 0.76 and 1.00 percentage points higher than UNet and UNet 3+, respectively. In addition, its average JS is 0.8136, which is 1.28 percentage points higher than UNet and 1.60 percentage points higher than UNet 3+, and its average DC on skin cancer segmentation is 0.8848. Hence, the results show that the proposed CBAM+Ref-UNet 3+ is feasible and effective, and segmentation performance is improved.

3.4.2. Breast Cancer Segmentation

In this experiment, the parameter settings are the same as for the skin cancer segmentation dataset: the Adam optimizer with a learning rate of 2 × 10⁻⁴, 200 iterations, a data augmentation ratio of 0.5, and the Dice coefficient loss.
Figure 6 shows that the loss of the proposed models decreases faster and reaches smaller values. The Dice metric in Figure 7 shows that the proposed models achieve the highest Dice scores and climb steadily with the number of iterations. Overall, CBAM+Ref-UNet 3+ delivers the best segmentation performance.
Table 2 summarizes the results of the different methods on this dataset. On F1-score, the average for UNet is 0.6678, for UNet 3+ 0.6564, for Ref-UNet 3+ 0.6656, and for CBAM+Ref-UNet 3+ 0.6858. On JS, CBAM+Ref-UNet 3+ scores 0.5228, which is 2.16 and 3.38 percentage points higher than UNet and UNet 3+, respectively. On DC, CBAM+Ref-UNet 3+ scores 0.7132, which is 6.28 and 7.98 percentage points higher than UNet and UNet 3+, respectively. Our proposed modules therefore provide better performance.

3.4.3. Lung Segmentation

In this experiment, we used the Adam optimizer with a learning rate of 2 × 10⁻⁴. The number of iterations was 200, the data augmentation ratio was 0.5, and the loss function was the Dice coefficient loss. In addition, because the lung segmentation dataset is small and the images are not complex, we built a set of encoder layers with fewer channels, 8→16→32→64→128, and the decoder layers were adjusted accordingly for the different models.
Figure 8 and Figure 9 show the training loss and the mean IoU on the lung segmentation dataset. CBAM+Ref-UNet 3+ converges faster in training loss and provides the highest mean IoU score, which confirms the validity of the proposed segmentation methods.
The comparison results are given in Table 3. It can be seen that CBAM+Ref-UNet 3+ achieves the highest average scores on multiple metrics. Moreover, CBAM+Ref-UNet 3+ can accomplish the best segmentation performance with fewer parameters.

3.5. Computation Time

Table 4 shows the number of parameters, the floating-point operations, the training time, and the computation time for one test sample for each model, using the skin cancer segmentation dataset as an example. Our improved models require fewer parameters and floating-point operations. In addition, compared to UNet 3+, the proposed models have much shorter training times and per-sample computation times.
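The parameter counts and per-sample timings in Table 4 can be reproduced in spirit with the small utilities below. This is a simplified benchmark under our own assumptions (random input, fixed number of warm-up and timed runs), not the paper's exact measurement protocol.

```python
import time

import torch


def count_parameters_m(model):
    """Trainable parameters in millions, as in the Parameters column of Table 4."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6


@torch.no_grad()
def seconds_per_sample(model, input_size=(1, 3, 256, 256), runs=100, device="cpu"):
    """Rough per-sample inference time for a single 256 x 256 input."""
    model = model.to(device).eval()
    x = torch.randn(*input_size, device=device)
    for _ in range(10):                          # warm-up passes
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()                 # wait for queued GPU work before timing
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs
```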

3.6. Visual Analysis

This section shows partial visual segmentation results for the skin, breast, and lung datasets in Figure 10, Figure 11 and Figure 12, respectively. The segmentation results of each method are binarized with a threshold of 0.5 [28]. First, in the skin images, the presented methods produce sharper boundaries that are comparable to the ground-truth (GT) masks. Second, in the breast images, none of the methods produce perfect segmentations, but Ref-UNet 3+ and CBAM+Ref-UNet 3+ can accurately locate the lesion. Lastly, in the lung images, the segmentation accuracy of every model is high, but our models handle the details best.
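The binarization step amounts to a one-line thresholding of the predicted probability map. A minimal sketch, assuming the network outputs logits that are passed through a sigmoid:

```python
import torch


def binarize(pred_logits, threshold=0.5):
    """Binarize a prediction map at 0.5 before visual comparison with the GT mask."""
    return (torch.sigmoid(pred_logits) > threshold).float()
```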

4. Discussion

The above results show that our proposed model is the best overall and outperforms the other models on a variety of metrics. There are, however, some drawbacks; for instance, on small datasets and on medical images with simple structures, the number of convolutions has minimal impact on performance, so the proposed model does not improve performance enough in those cases, and further details need to be extracted and used to enhance the feature fusion.
In practical applications, it is more important to minimize the network parameters and computation time of current deep learning-based medical image segmentation models than to further increase their accuracy. Our first contribution is therefore an improved skip connection structure. Moreover, we added an attention mechanism to the model; the attention mechanism selectively focuses on the image regions of interest to obtain more detailed information, which effectively improves the feature representation ability of the model. Finally, the models were validated on three datasets.
First, we found that CBAM+Ref-UNet 3+ was optimal on all evaluation metrics for the skin images, which have a large sample size and distinct lesion boundaries; its segmentation time on the test set was also the best. Furthermore, we set up two network configurations for the small-sample breast dataset to examine whether our proposed models retain good segmentation accuracy with different numbers of convolutions. The two proposed models outperform the comparison methods on various metrics, particularly PRE, JS, and DC. Finally, we used the lung dataset to verify the validity of the proposed models. The lung is larger and more regular in shape than other organs, so all models have limited room to increase segmentation accuracy; nevertheless, our model achieves the best segmentation performance with the fewest parameters because of the attention mechanism.
Although the research in this paper has achieved some results, the following areas need further exploration in the future: (1) accelerating the convolution operations and optimizing the loss function to improve the performance of our models; and (2) determining whether adding deformable convolution, which is used to enhance the transformation modeling capability of CNNs [29,30,31,32,33], can further enhance the feature extraction capability.

5. Conclusions

In this paper, we propose an improved UNet 3+ model combined with CBAM. Our findings are three-fold: First, the proposed model achieves excellent segmentation performance with fewer parameters. Second, the enhanced feature extraction enables the model to understand the image better while improving the accuracy and completeness of the segmentation. Lastly, the proposed model delivers better segmentation performance than UNet and UNet 3+.

Author Contributions

Conceptualization, L.L.; methodology, Y.X.; software, Y.X. and X.W.; validation, L.L. and Y.X.; formal analysis, L.L., S.H. and Y.X.; investigation, L.L., D.L. and Y.X.; resources, L.L., S.H. and Y.X.; data curation, X.W., D.L. and Y.X.; writing—original draft preparation, Y.X.; writing—review and editing, L.L., Y.X. and S.H.; visualization, L.L. and Y.X.; supervision, L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China, grant numbers 2019YFC1606306 and 2021YFC2600504.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be found here: [https://www.kaggle.com] (accessed on 15 November 2022).

Acknowledgments

The authors thank the five anonymous reviewers for their helpful comments and suggestions. The authors also thank the Editor for the kind assistance and beneficial comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Takahashi, R.; Kajikawa, Y. Computer-aided diagnosis: A survey with bibliometric analysis. Int. J. Med. Inform. 2017, 101, 58–67. [Google Scholar] [CrossRef] [PubMed]
  2. Ahmad, O.F.; Soares, A.S.; Mazomenos, E.; Brandao, P.; Vega, R.; Seward, E.; Stoyanov, D.; Chand, M.; Lovat, L.B. Artificial intelligence and computer-aided diagnosis in colonoscopy: Current evidence and future directions. Lancet Gastroenterol. Hepatol. 2019, 4, 71–80. [Google Scholar] [CrossRef]
  3. Mori, Y.; Kudo, S.; Berzin, T.M.; Misawa, M.; Takeda, K. Computer-aided diagnosis for colonoscopy. Endoscopy 2017, 49, 813–819. [Google Scholar] [CrossRef] [PubMed]
  4. Chen, L.; Bentley, P.; Mori, K.; Misawa, K.; Fujiwara, M.; Rueckert, D. DRINet for Medical Image Segmentation. IEEE T Med. Imaging 2018, 37, 2453–2462. [Google Scholar] [CrossRef] [PubMed]
  5. Zhang, Z.; Wu, C.D.; Coleman, S.; Kerr, D. DENSE-INception U -net for medical image segmentation. Comput. Meth. Prog. Bio. 2020, 192. [Google Scholar] [CrossRef]
  6. Girum, K.B.; Crehange, G.; Lalande, A. Learning With Context Feedback Loop for Robust Medical Image Segmentation. IEEE T Med. Imaging 2021, 40, 1542–1554. [Google Scholar] [CrossRef]
  7. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE T Pattern Anal. 2017, 39, 640–651. [Google Scholar] [CrossRef]
  8. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE T Pattern Anal. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
  9. Zhao, H.S.; Shi, J.P.; Qi, X.J.; Wang, X.G.; Jia, J.Y. Pyramid Scene Parsing Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6230–6239. [Google Scholar]
  10. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE T Pattern Anal. 2018, 40, 834–848. [Google Scholar] [CrossRef]
  11. Chen, L.C.E.; Zhu, Y.K.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. Lect. Notes Comput. Sc. 2018, 11211, 833–851. [Google Scholar]
  12. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. Med. Image Comput. Comput.-Assist. Interv. Pt Iii 2015, 9351, 234–241. [Google Scholar]
  13. Ma, H.; Zou, Y.N.; Liu, P.X. MHSU-Net: A more versatile neural network for medical image segmentation. Comput. Meth. Prog. Bio. 2021, 208. [Google Scholar] [CrossRef]
  14. Zhou, Z.W.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J.M. UNet++: A Nested U-Net Architecture for Medical Image Segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (DLMIA 2018); Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 11045, pp. 3–11. [Google Scholar]
  15. Zhou, Z.W.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J.M. UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation. IEEE Trans. Med. Imaging 2020, 39, 1856–1867. [Google Scholar] [CrossRef]
  16. Huang, H.M.; Lin, L.F.; Tong, R.F.; Hu, H.J.; Zhang, Q.W.; Iwamoto, Y.; Han, X.H.; Chen, Y.W.; Wu, J. Unet 3+: A Full-Scale Connected Unet for Medical Image Segmentation. In Proceedings of the ICASSP 2020–2020 IEEE International Conference on Acoustics, Speech and Signal Processing, Virtual, 4–8 May 2020; pp. 1055–1059. [Google Scholar]
  17. He, A.L.; Li, T.; Li, N.; Wang, K.; Fu, H.Z. CABNet: Category Attention Block for Imbalanced Diabetic Retinopathy Grading. IEEE T Med. Imaging 2021, 40, 143–153. [Google Scholar] [CrossRef]
  18. Hu, H.X.; Li, Q.Q.; Zhao, Y.F.; Zhang, Y. Parallel Deep Learning Algorithms With Hybrid Attention Mechanism for Image Segmentation of Lung Tumors. IEEE T Ind. Inform. 2021, 17, 2880–2889. [Google Scholar] [CrossRef]
  19. Xiao, Y.T.; Yin, H.S.; Wang, S.H.; Zhang, Y.D. TReC: Transferred ResNet and CBAM for Detecting Brain Diseases. Front. Neuroinform. 2021, 15. [Google Scholar] [CrossRef]
  20. Canayaz, M. C plus EffxNet: A novel hybrid approach for COVID-19 diagnosis on CT images based on CBAM and EfficientNet. Chaos Soliton. Fract. 2021, 151. [Google Scholar] [CrossRef]
  21. Niu, J.; Li, H.; Zhang, C.; Li, D.A. Multi-scale attention-based convolutional neural network for classification of breast masses in mammograms. Med. Phys. 2021, 48, 3878–3892. [Google Scholar] [CrossRef] [PubMed]
  22. Woo, S.H.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. Lect. Notes Comput. Sc. 2018, 11211, 3–19. [Google Scholar]
  23. Gul, M.S.K.; Mukati, M.U.; Batz, M.; Forchhammer, S.; Keinert, J. Light-Field View Synthesis Using a Convolutional Block Attention Module. In Proceedings of the IEEE International Conference on Image Processing (ICIP), 2021; pp. 3398–3402. [Google Scholar]
  24. Wang, J.J.; Yu, Z.S.; Luan, Z.Y.; Ren, J.W.; Zhao, Y.H.; Yu, G. RDAU-Net: Based on a Residual Convolutional Neural Network With DFP and CBAM for Brain Tumor Segmentation. Front. Oncol. 2022, 12, 805263. [Google Scholar] [CrossRef] [PubMed]
  25. Codella, N.C.F.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.A.; Dusza, S.W.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H.; et al. Skin Lesion Analysis toward Melanoma Detection: A Challenge at the 2017 International Symposium on Biomedical Imaging (Isbi), Hosted by the International Skin Imaging Collaboration (Isic). In Proceedings of the 2018 IEEE 15th international symposium on biomedical imaging, Washington, DC, USA, 4–7 April 2018; pp. 168–172. [Google Scholar]
  26. Al-Dhabyani, W.; Gomaa, M.; Khaled, H.; Fahmy, A. Dataset of breast ultrasound images. Data Brief 2020, 28, 104863. [Google Scholar] [CrossRef]
  27. Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  28. Alom, M.Z.; Yakopcic, C.; Hasan, M.; Taha, T.M.; Asari, V.K. Recurrent residual U-Net for medical image segmentation. J. Med. Imaging 2019, 6, 014006. [Google Scholar] [CrossRef]
  29. Dai, J.F.; Qi, H.Z.; Xiong, Y.W.; Li, Y.; Zhang, G.D.; Hu, H.; Wei, Y.C. Deformable Convolutional Networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 764–773. [Google Scholar]
  30. Zhu, X.Z.; Hu, H.; Lin, S.; Dai, J.F. Deformable ConvNets v2: More Deformable, Better Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9300–9308. [Google Scholar]
  31. Zhao, F.Q.; Wu, Z.W.; Wang, L.; Lin, W.L.; Gilmore, J.H.; Xia, S.R.; Shen, D.G.; Li, G. Spherical Deformable U-Net: Application to Cortical Surface Parcellation and Development Prediction. IEEE Trans. Med. Imaging 2021, 40, 1217–1228. [Google Scholar] [CrossRef]
  32. Hellmann, F.; Ren, Z.; Andre, E.; Schuller, B.W. Deformable Dilated Faster R-CNN for Universal Lesion Detection in CT Images. IEEE Eng. Med. Bio. 2021, 2896–2902. [Google Scholar]
  33. Gurita, A.; Mocanu, I.G. Image Segmentation Using Encoder-Decoder with Deformable Convolutions. Sensors 2021, 21, 1570. [Google Scholar] [CrossRef]
Figure 1. An improved UNet 3+ model combined with CBAM.
Figure 2. CBAM structure. (a) Convolutional Block Attention Module. (b) Channel Attention Module. (c) Spatial Attention Module.
Figure 3. Medical image segmentation samples: skin cancer lesion segmentation (left), breast cancer segmentation (middle), and lung segmentation (right).
Figure 4. Comparison of training loss of different models for skin cancer segmentation.
Figure 5. Comparison of validation accuracy of different models for skin cancer segmentation.
Figure 6. Comparison of training loss of different models for breast cancer segmentation.
Figure 7. Comparison of validation dice of different models for breast cancer segmentation.
Figure 8. Training loss of the proposed models against UNet and UNet 3+.
Figure 9. Mean IoU of the proposed models against UNet and UNet 3+.
Figure 10. Visualization of skin cancer segmentation results. From left to right: original image, ground truth, and the segmentation results of UNet, UNet 3+, Ref-UNet 3+, and CBAM+Ref-UNet 3+.
Figure 11. Visualization of breast cancer segmentation results. From left to right: original image, ground truth, and the segmentation results of UNet, UNet 3+, Ref-UNet 3+, and CBAM+Ref-UNet 3+.
Figure 12. Visualization of lung segmentation results. From left to right: original image, ground truth, and the segmentation results of UNet, UNet 3+, Ref-UNet 3+, and CBAM+Ref-UNet 3+.
Table 1. Experimental results of proposed methods for skin cancer segmentation and comparison against other methods. The bold data denotes the best value.

Methods | Parameters | k-Fold | ACC | PRE | SE | F1-Score | JS | DC
UNet | 32.92 M | 1-fold | 0.961 | 0.891 | 0.898 | 0.894 | 0.809 | 0.879
 | | 2-fold | 0.955 | 0.873 | 0.888 | 0.880 | 0.786 | 0.871
 | | 3-fold | 0.958 | 0.883 | 0.903 | 0.893 | 0.806 | 0.880
 | | 4-fold | 0.957 | 0.890 | 0.892 | 0.891 | 0.803 | 0.881
 | | 5-fold | 0.958 | 0.876 | 0.903 | 0.889 | 0.800 | 0.876
 | | Average | 0.9578 | 0.8826 | 0.8968 | 0.8894 | 0.8008 | 0.8774
UNet 3+ | 25.71 M | 1-fold | 0.960 | 0.898 | 0.891 | 0.894 | 0.809 | 0.877
 | | 2-fold | 0.955 | 0.855 | 0.901 | 0.877 | 0.782 | 0.870
 | | 3-fold | 0.957 | 0.879 | 0.903 | 0.891 | 0.803 | 0.881
 | | 4-fold | 0.956 | 0.884 | 0.895 | 0.889 | 0.801 | 0.879
 | | 5-fold | 0.957 | 0.864 | 0.905 | 0.884 | 0.793 | 0.874
 | | Average | 0.9570 | 0.8760 | 0.8990 | 0.8870 | 0.7976 | 0.8762
Ref-UNet 3+ | 21.00 M | 1-fold | 0.961 | 0.894 | 0.897 | 0.896 | 0.811 | 0.877
 | | 2-fold | 0.954 | 0.864 | 0.892 | 0.878 | 0.782 | 0.868
 | | 3-fold | 0.958 | 0.879 | 0.908 | 0.893 | 0.806 | 0.881
 | | 4-fold | 0.955 | 0.884 | 0.888 | 0.886 | 0.795 | 0.878
 | | 5-fold | 0.958 | 0.885 | 0.895 | 0.890 | 0.802 | 0.873
 | | Average | 0.9572 | 0.8812 | 0.8960 | 0.8886 | 0.7992 | 0.8754
CBAM+Ref-UNet 3+ | 21.02 M | 1-fold | 0.963 | 0.900 | 0.901 | 0.900 | 0.819 | 0.882
 | | 2-fold | 0.958 | 0.886 | 0.892 | 0.889 | 0.800 | 0.879
 | | 3-fold | 0.961 | 0.893 | 0.911 | 0.901 | 0.821 | 0.889
 | | 4-fold | 0.959 | 0.892 | 0.901 | 0.896 | 0.812 | 0.885
 | | 5-fold | 0.961 | 0.898 | 0.900 | 0.899 | 0.816 | 0.889
 | | Average | 0.9604 | 0.8938 | 0.9010 | 0.8970 | 0.8136 | 0.8848
Table 2. Experimental results of proposed methods for breast cancer segmentation and comparison against other methods. The bold data denotes the best one.

Methods | Parameters | k-Fold | ACC | PRE | SE | F1-Score | JS | DC
UNet | 32.92 M | 1-fold | 0.942 | 0.600 | 0.687 | 0.641 | 0.470 | 0.636
 | | 2-fold | 0.935 | 0.670 | 0.673 | 0.672 | 0.506 | 0.658
 | | 3-fold | 0.933 | 0.716 | 0.640 | 0.676 | 0.511 | 0.660
 | | 4-fold | 0.952 | 0.669 | 0.683 | 0.676 | 0.511 | 0.657
 | | 5-fold | 0.934 | 0.649 | 0.701 | 0.674 | 0.508 | 0.641
 | | Average | 0.9392 | 0.6608 | 0.6768 | 0.6678 | 0.5012 | 0.6504
UNet 3+ | 25.71 M | 1-fold | 0.944 | 0.564 | 0.723 | 0.634 | 0.464 | 0.624
 | | 2-fold | 0.937 | 0.644 | 0.698 | 0.670 | 0.504 | 0.650
 | | 3-fold | 0.934 | 0.712 | 0.648 | 0.679 | 0.514 | 0.650
 | | 4-fold | 0.935 | 0.729 | 0.554 | 0.629 | 0.459 | 0.613
 | | 5-fold | 0.930 | 0.672 | 0.668 | 0.670 | 0.504 | 0.630
 | | Average | 0.9360 | 0.6642 | 0.6582 | 0.6564 | 0.4890 | 0.6334
Ref-UNet 3+ | 21.00 M | 1-fold | 0.942 | 0.572 | 0.704 | 0.631 | 0.461 | 0.624
 | | 2-fold | 0.938 | 0.695 | 0.689 | 0.692 | 0.529 | 0.672
 | | 3-fold | 0.937 | 0.682 | 0.679 | 0.681 | 0.516 | 0.670
 | | 4-fold | 0.948 | 0.650 | 0.660 | 0.655 | 0.487 | 0.652
 | | 5-fold | 0.931 | 0.644 | 0.695 | 0.669 | 0.494 | 0.633
 | | Average | 0.9392 | 0.6486 | 0.6854 | 0.6656 | 0.4974 | 0.6502
CBAM+Ref-UNet 3+ | 21.02 M | 1-fold | 0.942 | 0.625 | 0.678 | 0.650 | 0.482 | 0.701
 | | 2-fold | 0.945 | 0.700 | 0.734 | 0.717 | 0.558 | 0.736
 | | 3-fold | 0.933 | 0.731 | 0.624 | 0.673 | 0.507 | 0.718
 | | 4-fold | 0.961 | 0.705 | 0.760 | 0.731 | 0.577 | 0.739
 | | 5-fold | 0.927 | 0.665 | 0.651 | 0.658 | 0.490 | 0.672
 | | Average | 0.9416 | 0.6852 | 0.6894 | 0.6858 | 0.5228 | 0.7132
Table 3. Experimental results of proposed methods for lung segmentation and comparison against other existing methods. The bold data denotes the best value.

Methods | Parameters | k-Fold | ACC | PRE | SE | F1-Score | JS | DC
UNet | 0.60 M | 1-fold | 0.985 | 0.982 | 0.954 | 0.968 | 0.937 | 0.968
 | | 2-fold | 0.977 | 0.947 | 0.961 | 0.954 | 0.912 | 0.956
 | | 3-fold | 0.990 | 0.993 | 0.966 | 0.980 | 0.960 | 0.976
 | | 4-fold | 0.987 | 0.991 | 0.956 | 0.973 | 0.947 | 0.968
 | | 5-fold | 0.988 | 0.994 | 0.954 | 0.974 | 0.949 | 0.971
 | | Average | 0.9854 | 0.9814 | 0.9582 | 0.9698 | 0.9410 | 0.9678
UNet 3+ | 0.40 M | 1-fold | 0.984 | 0.977 | 0.953 | 0.965 | 0.932 | 0.964
 | | 2-fold | 0.978 | 0.937 | 0.937 | 0.955 | 0.913 | 0.958
 | | 3-fold | 0.980 | 0.996 | 0.956 | 0.976 | 0.953 | 0.973
 | | 4-fold | 0.986 | 0.987 | 0.955 | 0.971 | 0.944 | 0.967
 | | 5-fold | 0.988 | 0.993 | 0.955 | 0.974 | 0.949 | 0.972
 | | Average | 0.9832 | 0.9780 | 0.9512 | 0.9682 | 0.9382 | 0.9668
Ref-UNet 3+ | 0.33 M | 1-fold | 0.985 | 0.963 | 0.972 | 0.968 | 0.937 | 0.967
 | | 2-fold | 0.977 | 0.949 | 0.958 | 0.953 | 0.911 | 0.957
 | | 3-fold | 0.989 | 0.992 | 0.964 | 0.978 | 0.956 | 0.974
 | | 4-fold | 0.987 | 0.993 | 0.954 | 0.973 | 0.947 | 0.968
 | | 5-fold | 0.989 | 0.989 | 0.963 | 0.976 | 0.952 | 0.974
 | | Average | 0.9854 | 0.9772 | 0.9622 | 0.9696 | 0.9406 | 0.9680
CBAM+Ref-UNet 3+ | 0.33 M | 1-fold | 0.988 | 0.975 | 0.974 | 0.975 | 0.950 | 0.975
 | | 2-fold | 0.979 | 0.949 | 0.964 | 0.956 | 0.917 | 0.959
 | | 3-fold | 0.990 | 0.995 | 0.964 | 0.979 | 0.959 | 0.977
 | | 4-fold | 0.988 | 0.990 | 0.961 | 0.975 | 0.951 | 0.971
 | | 5-fold | 0.991 | 0.994 | 0.967 | 0.980 | 0.961 | 0.979
 | | Average | 0.9872 | 0.9806 | 0.9660 | 0.9730 | 0.9476 | 0.9722
Table 4. Comparison of models in terms of computation time.

Methods | Parameters (M) | GFLOPs | Training Time (h) | Test Time (s)/Sample
UNet 3+ | 25.71 | 0.27 | 18 | 0.035
Ref-UNet 3+ | 21.00 | 0.22 | 6 | 0.015
CBAM+Ref-UNet 3+ | 21.02 | 0.22 | 7 | 0.020
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
