FAC-Net: Feedback Attention Network Based on Context Encoder Network for Skin Lesion Segmentation
Abstract
1. Introduction
- We propose a novel and efficient feedback fusion block (FFB) that captures multi-scale features and fuses information across scales to obtain richer feature maps.
- The attention mechanism block (AMB) is an improved Convolutional Block Attention Module applied to skip-connection fusion; it strengthens important information and suppresses irrelevant interference (a minimal illustrative sketch follows this list).
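As a point of reference for the AMB described above, the sketch below shows a minimal CBAM-style block (channel attention followed by spatial attention) applied to a skip-connection feature map. It reproduces only the generic CBAM structure; the paper's specific modifications (for example, the pooling combinations compared later in the ablation experiments) are not implemented here, and all class and variable names are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Channel attention: shared MLP over global avg- and max-pooled descriptors."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))   # B x C x 1 x 1
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))    # B x C x 1 x 1
        return torch.sigmoid(avg + mx)


class SpatialAttention(nn.Module):
    """Spatial attention: 7x7 conv over channel-wise avg and max maps."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = torch.mean(x, dim=1, keepdim=True)      # B x 1 x H x W
        mx, _ = torch.max(x, dim=1, keepdim=True)     # B x 1 x H x W
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class AttentionMechanismBlock(nn.Module):
    """CBAM-style attention applied to a skip-connection feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, skip: torch.Tensor) -> torch.Tensor:
        skip = skip * self.ca(skip)   # reweight channels
        skip = skip * self.sa(skip)   # reweight spatial positions
        return skip
```

In a U-Net-like decoder, such a block would typically be applied to the encoder feature before it is concatenated with the up-sampled decoder feature.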
2. Related Work
2.1. Segmentation Network
2.2. Feedback Mechanism
2.3. Attention Mechanism
3. Methods
3.1. The Overall Structure of FAC-Net
3.2. Feedback Fusion Block
3.3. Attention Mechanism Block
3.3.1. Channel Attention Mechanism
3.3.2. Spatial Attention Mechanism
3.4. Loss Function
4. Experiment
4.1. Datasets
- The skin lesions vary in size and shape, and their boundaries are fuzzy.
- The samples contain interfering factors such as hair, air bubbles and other obstructions.
- The contrast between the lesion and the surrounding normal skin is low, making the two difficult to distinguish.
- Some lesions show an obvious layered structure, which can lead to misjudgment of the lesion boundary.
4.2. Metrics
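The result tables below report accuracy (ACC), sensitivity (SE), specificity (SP), precision (PC), the Jaccard index (JA) and the Dice coefficient (DC). These are standard pixel-wise statistics over the predicted and ground-truth masks; the helper below is a minimal illustrative sketch of their usual definitions, not the paper's evaluation code.

```python
import numpy as np


def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> dict:
    """Pixel-wise metrics for a binary mask pair (1 = lesion, 0 = background)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return {
        "ACC": (tp + tn) / (tp + tn + fp + fn + eps),  # accuracy
        "SE": tp / (tp + fn + eps),                    # sensitivity (recall)
        "SP": tn / (tn + fp + eps),                    # specificity
        "PC": tp / (tp + fp + eps),                    # precision
        "JA": tp / (tp + fp + fn + eps),               # Jaccard index (IoU)
        "DC": 2 * tp / (2 * tp + fp + fn + eps),       # Dice coefficient
    }
```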
4.3. Experimental Setting
4.4. Ablation Experiment
4.5. Comparative Experiment
5. Discussion and Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Rogers, H.W.; Weinstock, M.A.; Feldman, S.R.; Coldiron, B.M. Incidence Estimate of Nonmelanoma Skin Cancer (Keratinocyte carcinomas) in the US Population, 2012. JAMA Dermatol. 2015, 151, 1081–1086.
- Pathan, S.; Prabhu, K.G. Techniques and algorithms for computer aided diagnosis of pigmented skin lesions—A review. Biomed. Signal Process. Control 2018, 39, 237–262.
- Siegel, R.L.; Miller, K.D.; Jemal, A. Cancer statistics, 2019. CA Cancer J. Clin. 2019, 69, 7–34.
- Balch, C.M.; Gershenwald, J.E.; Soong, S.-J.; Thompson, J.F.; Atkins, M.B.; Byrd, D.R.; Buzaid, A.C.; Cochran, A.J.; Coit, D.G.; Ding, S.; et al. Final Version of 2009 AJCC Melanoma Staging and Classification. J. Clin. Oncol. 2009, 27, 6199–6206.
- Yu, L.; Chen, H.; Dou, Q.; Qin, J.; Heng, P.A. Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks. IEEE Trans. Med. Imaging 2017, 36, 994–1004.
- Ünver, H.M.; Ayan, E. Skin Lesion Segmentation in Dermoscopic Images with Combination of YOLO and GrabCut Algorithm. Diagnostics 2019, 9, 72.
- Saez, A.; Serrano, C.; Acha, B. Model-Based Classification Methods of Global Patterns in Dermoscopic Images. IEEE Trans. Med. Imaging 2014, 33, 1137–1147.
- Jafari, M.; Karimi, N.; Nasr-Esfahani, E.; Samavi, S.; Soroushmehr, S.; Ward, K.; Najarian, K. Skin lesion segmentation in clinical images using deep learning. In Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; pp. 337–342.
- Saleh, S.; Kalyankar, N.V.; Khamitkar, S.D. Image segmentation by using threshold techniques. J. Comput. 2010, 2, 2151–9617.
- Muthukrishnan, R.; Radha, M. Edge Detection Techniques for Image Segmentation. Int. J. Comput. Sci. Inf. Technol. 2011, 3, 259–267.
- Ugarriza, L.G.; Saber, E.; Vantaram, S.R.; Amuso, V.; Shaw, M.; Bhaskar, R. Automatic Image Segmentation by Dynamic Region Growth and Multiresolution Merging. IEEE Trans. Image Process. 2009, 18, 2275–2288.
- Kim, J.; Çetin, M.; Willsky, A.S. Nonparametric shape priors for active contour-based image segmentation. Signal Process. 2007, 87, 3021–3044.
- Dang, N.; Thanh, H.; Erkan, U. A Skin Lesion Segmentation Method for Dermoscopic Images Based on Adaptive Thresholding with Normalization of Color Models. In Proceedings of the IEEE 2019 6th International Conference on Electrical and Electronics Engineering, Istanbul, Turkey, 16–17 April 2019; pp. 116–120.
- Militello, C.; Rundo, L.; Toia, P.; Conti, V.; Russo, G.; Filorizzo, C.; Maffei, E.; Cademartiri, F.; La Grutta, L.; Midiri, M.; et al. A semi-automatic approach for epicardial adipose tissue segmentation and quantification on cardiac CT scans. Comput. Biol. Med. 2019, 114, 103424.
- Ben-Cohen, A.; Diamant, I.; Klang, E.; Amitai, M.; Greenspan, H. Fully Convolutional Network for Liver Segmentation and Lesions Detection; Springer: Berlin/Heidelberg, Germany, 2016; pp. 77–85.
- Yuan, Y.; Chao, M.; Lo, Y.-C. Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance. IEEE Trans. Med. Imaging 2017, 36, 1876–1886.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
- Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
- Alom, M.Z.; Hasan, M. Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation. arXiv 2018, arXiv:1802.06955.
- Gu, Z.; Cheng, J.; Fu, H.; Zhou, K.; Hao, H.; Zhao, Y.; Zhang, T.; Gao, S.; Liu, J. CE-Net: Context Encoder Network for 2D Medical Image Segmentation. IEEE Trans. Med. Imaging 2019, 38, 2281–2292.
- Zhou, Z.; Tajbakhsh, N. UNet++: A Nested U-Net Architecture for Medical Image Segmentation; Springer: Berlin/Heidelberg, Germany, 2018.
- Huang, H.; Lin, L.; Tong, R.; Hu, H.; Zhang, Q.; Iwamoto, Y.; Han, X.; Chen, Y.-W.; Wu, J. UNet 3+: A Full-Scale Connected UNet for Medical Image Segmentation. In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 1055–1059.
- Guo, C.; Szemenyei, M.; Yi, Y.; Wang, W.; Chen, B.; Fan, C. SA-UNet: Spatial Attention U-Net for Retinal Vessel Segmentation. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 1236–1242.
- Phan, T.-D.-T.; Kim, S.H. Skin Lesion Segmentation by U-Net with Adaptive Skip Connection and Structural Awareness. Appl. Sci. 2021, 11, 4528.
- Salih, O.; Viriri, S. Skin Lesion Segmentation Using Stochastic Region-Merging and Pixel-Based Markov Random Field. Symmetry 2020, 12, 1224.
- Khan, M.; Sharif, M.; Akram, T.; Damaševičius, R.; Maskeliūnas, R. Skin Lesion Segmentation and Multiclass Classification Using Deep Learning Features and Improved Moth Flame Optimization. Diagnostics 2021, 11, 811.
- Tong, X.; Wei, J.; Sun, B.; Su, S.; Zuo, Z.; Wu, P. ASCU-Net: Attention Gate, Spatial and Channel Attention U-Net for Skin Lesion Segmentation. Diagnostics 2021, 11, 501.
- Hafhouf, B.; Zitouni, A.; Megherbi, A.C.; Sbaa, S. A Modified U-Net for Skin Lesion Segmentation. In Proceedings of the 2020 1st International Conference on Communications, Control Systems and Signal Processing (CCSSP), El Oued, Algeria, 16–17 May 2020; pp. 225–228.
- Saha, A.; Prasad, P.; Thabit, A. Leveraging Adaptive Color Augmentation in Convolutional Neural Networks for Deep Skin Lesion Segmentation. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 2014–2017.
- Tang, Y.; Fang, Z.; Yuan, S.; Zhan, C.A.; Xing, Y.; Zhou, J.T.; Yang, F. iMSCGnet: Iterative Multi-Scale Context-Guided Segmentation of Skin Lesion in Dermoscopic Images. IEEE Access 2020, 8, 39700–39712.
- Wu, H.; Pan, J.; Li, Z.; Wen, Z.; Qin, J. Automated Skin Lesion Segmentation Via an Adaptive Dual Attention Module. IEEE Trans. Med. Imaging 2021, 40, 357–370.
- Haris, M.; Shakhnarovich, G.; Ukita, N. Deep Back-Projection Networks for Super-Resolution. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1664–1673.
- Wei, H.; Chang, S. Image Super-Resolution via Dual-State Recurrent Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1654–1663.
- Wald, L.; Ranchin, T. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699.
- Bahdanau, D.; Cho, K.; Bengio, Y. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv 2014, arXiv:1409.0473.
- Wang, X.; Girshick, R.B.; Gupta, A.; He, K. Non-local Neural Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7794–7803.
- Jie, H.; Shen, L. Squeeze-and-Excitation Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
- Cao, Y.; Xu, J.; Lin, S.; Wei, F.; Hu, H. GCNet: Non-Local Networks Meet Squeeze-Excitation Networks and Beyond. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Korea, 27–28 October 2019; pp. 1971–1980.
- Woo, S.; Park, J.; Lee, J. CBAM: Convolutional Block Attention Module; Springer: Berlin/Heidelberg, Germany, 2018; pp. 3–19.
- Huang, Z.; Wang, X.; Huang, L.; Huang, C.; Wei, Y.; Liu, W. CCNet: Criss-Cross Attention for Semantic Segmentation. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 603–612.
- Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual Attention Network for Scene Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 3141–3149.
- Han, C.; Rundo, L.; Murao, K. MADGAN: Unsupervised Medical Anomaly Detection GAN using multiple adjacent brain MRI slice reconstruction. BMC Bioinform. 2020, 22, 1–20.
- Schlemper, J.; Oktay, O.; Schaap, M.; Heinrich, M.; Kainz, B.; Glocker, B.; Rueckert, D. Attention gated networks: Learning to leverage salient regions in medical images. Med. Image Anal. 2019, 53, 197–207.
- Lin, T.-Y.; Dollar, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 936–944.
- Zhen, M.; Tavares, J.M. A Novel Approach to Segment Skin Lesions in Dermoscopic Images Based on a Deformable Model. IEEE J. Biomed. Health Inform. 2016, 20, 615–623.
- Codella, N.C.F.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.A.; Dusza, S.W.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H.; et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 International symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC). In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 168–172.
- Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161.
Process | Operation | Input | Output |
---|---|---|---|
Part one | Conv3 × 3 | C × H × W | C × H × W |
| element-wise Multi | C × H × W & C × 1 × 1 | C × H × W |
| Concat | C × H × W & C × H × W | 2C × H × W |
Part two | Gap | 2C × H/2 × W/2 | 2C × 1 × 1 |
| Conv1 × 1 | 2C × 1 × 1 | C × 1 × 1 |
| Up-sampling | 2C × H/2 × W/2 | 2C × H × W |
Connection | Concat | 2C × H × W & 2C × H × W | 4C × H × W |
| Conv1 × 1 | 4C × H × W | C × H × W |
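The table above specifies the operations and tensor shapes inside the feedback fusion block (FFB), but the exact wiring between the two parts is not fully determined by the shapes alone. The PyTorch sketch below is therefore only one plausible reading: it assumes `x` is the current-scale feature (C × H × W), `x_low` is the lower-resolution feature from the next stage (2C × H/2 × W/2), and the C × 1 × 1 vector that reweights Part one is the channel descriptor produced in Part two (the feedback path). All names are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeedbackFusionBlock(nn.Module):
    """Minimal sketch of an FFB following the shape table above (assumed wiring)."""

    def __init__(self, channels: int):
        super().__init__()
        # Part one: 3x3 conv keeps C x H x W
        self.conv3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Part two: GAP + 1x1 conv turn 2C x 1 x 1 into C x 1 x 1
        self.squeeze = nn.Conv2d(2 * channels, channels, kernel_size=1)
        # Connection: 1x1 conv reduces 4C back to C
        self.fuse = nn.Conv2d(4 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor, x_low: torch.Tensor) -> torch.Tensor:
        # Part two: global context vector (2C x 1 x 1 -> C x 1 x 1) ...
        vec = self.squeeze(F.adaptive_avg_pool2d(x_low, 1))
        # ... and the up-sampled low-resolution map (2C x H/2 x W/2 -> 2C x H x W)
        up = F.interpolate(x_low, size=x.shape[-2:], mode="bilinear", align_corners=False)

        # Part one: conv, feedback multiplication, concat -> 2C x H x W
        feat = self.conv3(x)
        part_one = torch.cat([feat, feat * vec], dim=1)

        # Connection: concat both branches (4C x H x W) and reduce to C x H x W
        return self.fuse(torch.cat([part_one, up], dim=1))
```

For example, with `channels=64`, calling `FeedbackFusionBlock(64)(torch.randn(1, 64, 32, 32), torch.randn(1, 128, 16, 16))` returns a 1 × 64 × 32 × 32 tensor, matching the C × H × W output row of the table.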
Model | ACC (%) | SE (%) | SP (%) | PC (%) | JA (%) | DC (%) |
---|---|---|---|---|---|---|
CE-Net | 95.81 | 88.11 | 97.88 | 91.64 | 81.58 | 89.71 |
CE-Net+FFB | 96.18 | 88.40 | 98.21 | 92.00 | 82.83 | 90.49 |
CE-Net+AMB | 96.05 | 89.74 | 97.75 | 90.49 | 82.87 | 90.48 |
CE-Net+FFB+AMB | 96.41 | 89.92 | 98.16 | 92.74 | 83.99 | 91.19 |
Model | ACC (%) | SE (%) | SP (%) | PC (%) | JA (%) | DC (%) |
---|---|---|---|---|---|---|
Mode | 95.96 | 90.17 | 97.47 | 90.63 | 82.64 | 90.24 |
Mode+Avg | 96.10 | 89.62 | 97.92 | 91.86 | 82.94 | 90.55 |
Mode+Max | 96.07 | 89.54 | 97.82 | 91.64 | 82.79 | 90.45 |
Max+Avg | 96.17 | 90.02 | 97.64 | 91.23 | 82.83 | 90.45 |
Mode+Max+Avg | 96.41 | 89.92 | 98.16 | 92.74 | 83.99 | 91.19 |
Model | Year | ACC (%) | SE (%) | SP (%) | PC (%) | JA (%) | DC (%) |
---|---|---|---|---|---|---|---|
U-Net | 2015 | 94.66 | 86.03 | 97.10 | 88.72 | 77.43 | 87.13 |
R2U-Net | 2018 | 95.09 | 86.58 | 97.51 | 90.00 | 78.85 | 88.05 |
CE-Net | 2019 | 95.81 | 88.11 | 97.88 | 91.64 | 81.58 | 89.71 |
U-Net3+ | 2020 | 94.97 | 85.20 | 97.77 | 90.86 | 78.30 | 87.71 |
SA-UNet | 2021 | 94.78 | 84.87 | 97.59 | 90.29 | 77.63 | 87.25 |
Ours | - | 96.41 | 89.92 | 98.16 | 92.74 | 83.99 | 91.19 |
Model | Year | ACC (%) | SE (%) | SP (%) | PC (%) | JA (%) | DC (%) |
---|---|---|---|---|---|---|---|
U-Net | 2015 | 92.21 | 74.38 | 97.58 | 89.58 | 68.30 | 80.70 |
R2U-Net | 2018 | 92.28 | 75.37 | 97.45 | 89.38 | 69.04 | 88.05 |
CE-Net | 2019 | 93.49 | 80.51 | 97.33 | 89.92 | 73.83 | 84.55 |
U-Net3+ | 2020 | 92.08 | 72.95 | 97.87 | 90.69 | 67.79 | 80.29 |
SA-UNet | 2021 | 92.08 | 76.93 | 96.66 | 86.74 | 68.76 | 81.06 |
Ours | - | 93.63 | 81.06 | 97.43 | 90.07 | 74.27 | 84.91 |
Model | Year | ACC (%) | SE (%) | SP (%) | PC (%) | JA (%) | DC (%) |
---|---|---|---|---|---|---|---|
U-Net | 2015 | 94.69 | 91.30 | 96.01 | 89.32 | 82.18 | 90.12 |
R2U-Net | 2018 | 94.43 | 87.68 | 97.06 | 91.49 | 80.95 | 89.38 |
CE-Net | 2019 | 95.94 | 92.80 | 97.10 | 92.06 | 85.85 | 92.31 |
U-Net3+ | 2020 | 94.94 | 90.26 | 96.74 | 91.12 | 82.87 | 90.54 |
SA-UNet | 2021 | 94.11 | 89.46 | 95.90 | 88.82 | 80.14 | 88.82 |
Ours | - | 96.09 | 92.50 | 97.43 | 92.74 | 86.23 | 92.51 |
Model | Dataset | ACC (%) | SP (%) | JA (%) | DC (%) |
---|---|---|---|---|---|
Tang et al. [32]-2020 | ISBI2016 | 96.08 | - | 85.98 | 91.91 |
Hafhouf et al. [30]-2020 | ISBI2016 | 93.9 | 95.2 | 82.7 | 89.6 |
Khan et al. [28]-2021 | ISBI2016 | 92.69 | - | - | - |
Ours | ISBI2016 | 96.09 | 97.43 | 86.23 | 92.51 |
Tong et al. [29]-2021 | ISBI2017 | 92.6 | 96.5 | 74.2 | 83 |
Ours | ISBI2017 | 93.63 | 97.43 | 74.27 | 84.91 |
Salih et al. [27]-2020 | ISIC2018 | 89.47 | 95.09 | 72.45 | 80.67 |
Tang et al. [32]-2020 | ISIC2018 | - | - | 81.91 | - |
Saha et al. [31]-2020 | ISIC2018 | - | 93.2 | 81.9 | 89.1 |
Wu et al. [33]-2021 | ISIC2018 | 94.7 | 94.1 | 84.4 | 90.8 |
Khan et al. [28]-2021 | ISIC2018 | 92.69 | - | - | - |
Ours | ISIC2018 | 96.41 | 98.16 | 83.99 | 91.19 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).