U-Net-Based Semi-Automatic Semantic Segmentation Using Adaptive Differential Evolution
Abstract
1. Introduction
- (1) Our method enables novel semi-automatic ground-truth (GT) generation by combining clustering and combinatorial optimization methods.
- (2) Our method optimizes the windowing of CT images via their DICOM parameters using adaptive differential evolution (DE).
- (3) We evaluate the capability of the generated GTs as input data for the U-net on femur, spine, and artificial-bone images.
2. Proposed Method
2.1. Semi-Automatic GTs
- O-0:
- O-1: Divide C into M parts using DBSCAN (a clustering sketch in Python follows this list).
- O-2:
- O-3: While the termination condition is not met, solve the minimization problem (min … s.t. …); draw a line between the selected pair of points when the associated condition holds, and fill in the enclosed boundary.
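To make the clustering stage concrete, the following is a minimal Python sketch, assuming scikit-learn's DBSCAN and SciPy are available. Only step O-1 is implemented directly; the candidate-point extraction of O-0 is taken as a given binary mask, and the combinatorial boundary-connection of O-3 is replaced by a simple hole-filling stand-in, so this is an illustration rather than the authors' exact procedure. The helper names and the eps/min_samples defaults are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN            # clustering used in step O-1
from scipy.ndimage import binary_fill_holes   # simple stand-in for the boundary filling of O-3

def cluster_candidate_pixels(candidate_mask, eps=2.0, min_samples=5):
    """Cluster the coordinates of candidate contour pixels into M parts (step O-1).

    `candidate_mask` is a boolean image; eps/min_samples are illustrative defaults.
    Returns the pixel coordinates and one DBSCAN label per pixel (-1 marks noise).
    """
    coords = np.column_stack(np.nonzero(candidate_mask))   # (row, col) pairs
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(coords)
    return coords, labels

def fill_part(image_shape, part_coords):
    """Rasterize one clustered part and fill its interior (a stand-in for step O-3)."""
    region = np.zeros(image_shape, dtype=bool)
    region[part_coords[:, 0], part_coords[:, 1]] = True
    return binary_fill_holes(region)

# Usage sketch: build one GT label image from all non-noise parts.
# coords, labels = cluster_candidate_pixels(candidate_mask)
# gt = np.zeros(candidate_mask.shape, dtype=bool)
# for m in set(labels) - {-1}:
#     gt |= fill_part(candidate_mask.shape, coords[labels == m])
```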
2.2. Windowing by jDE
- D-0: Generate an initial population set $\{x_i\}_{i=1}^{N}$ with a random uniform distribution within the feasible region.
- D-1:
- D-2: Select three individuals $x_{r_1}$, $x_{r_2}$, $x_{r_3}$ for each individual $x_i$, where $r_1 \neq r_2 \neq r_3 \neq i$.
- D-3: Generate the mutant vector $v_i = x_{r_1} + F \cdot (x_{r_2} - x_{r_3})$, where $F$ is a scaling parameter.
- D-4: Select $j_{\mathrm{rand}}$ randomly from $[1, n]$, where $n$ is the dimension of each $x_i$.
- D-5: Cross over the mutant vector $v_i$ and the parent vector $x_i$ using the following equation: $u_{i,j} = \begin{cases} v_{i,j} & \text{if } \mathrm{rand}(0,1) \le CR \ \text{or}\ j = j_{\mathrm{rand}} \\ x_{i,j} & \text{otherwise,} \end{cases}$ where $CR$ represents the crossover rate.
- D-6: Apply D-1 to D-4 to generate a population of trial vectors $\{u_i\}$.
- D-7: Evaluate the trial vectors and update the solution set.
- D-8: Update $F$ and $CR$ by the following equations: $F_{i}^{G+1} = \begin{cases} F_l + \mathrm{rand}_1 \cdot F_u & \text{if } \mathrm{rand}_2 < \tau_1 \\ F_{i}^{G} & \text{otherwise,} \end{cases}$ and $CR_{i}^{G+1} = \begin{cases} \mathrm{rand}_3 & \text{if } \mathrm{rand}_4 < \tau_2 \\ CR_{i}^{G} & \text{otherwise,} \end{cases}$ where $\mathrm{rand}_1, \dots, \mathrm{rand}_4 \sim U(0,1)$ and $\tau_1, \tau_2$ control how often the parameters are re-sampled.
- D-9:
- D-10: Repeat D-1 through D-9 for a certain period (a minimal jDE sketch in Python follows this list).
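Steps D-0 to D-10 correspond to the standard jDE loop of Brest et al. The following is a minimal Python sketch under stated assumptions: the population size, generation budget, bound clipping of mutant vectors, and the placement of the F/CR self-adaptation just before mutation follow the original jDE description rather than anything stated explicitly in the list above, and the omitted contents of D-1 and D-9 are not reproduced.

```python
import numpy as np

def jde(objective, bounds, pop_size=30, generations=200,
        tau1=0.1, tau2=0.1, f_l=0.1, f_u=0.9, seed=0):
    """Minimize `objective` with self-adaptive DE (jDE): each individual carries its own F and CR."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]          # feasible region, one (low, high) row per dimension
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))   # D-0: uniform initial population
    fit = np.array([objective(x) for x in pop])
    F = np.full(pop_size, 0.5)                        # jDE initial control values
    CR = np.full(pop_size, 0.9)
    for _ in range(generations):                      # D-10: repeat for a fixed budget
        for i in range(pop_size):
            # D-8: self-adapt F and CR with probabilities tau1 and tau2 (applied before mutation, as in jDE)
            Fi = f_l + rng.random() * f_u if rng.random() < tau1 else F[i]
            CRi = rng.random() if rng.random() < tau2 else CR[i]
            # D-2: three mutually distinct partners, all different from i
            r1, r2, r3 = rng.choice([k for k in range(pop_size) if k != i], size=3, replace=False)
            v = np.clip(pop[r1] + Fi * (pop[r2] - pop[r3]), lo, hi)   # D-3: mutation
            j_rand = rng.integers(dim)                                # D-4: forced crossover index
            cross = rng.random(dim) <= CRi
            cross[j_rand] = True
            u = np.where(cross, v, pop[i])                            # D-5: binomial crossover
            fu = objective(u)                                         # D-7: greedy one-to-one selection
            if fu <= fit[i]:
                pop[i], fit[i], F[i], CR[i] = u, fu, Fi, CRi
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

Because this sketch minimizes its objective, the windowing objective of Section 2.2 (average IoU, which is maximized) would be passed in negated form, for example `jde(lambda p: -mean_iou(p, dicom_paths, gt_masks), np.array([[0.0, 1500.0], [0.0, 1500.0]]))`, using the hypothetical `mean_iou` helper sketched after the P-steps below.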
- P-0: Generate an initial parameter set with uniform random numbers U(0, 1500), and convert the DICOM images to PNG images using these parameters.
- P-1: If a pixel value of a PNG image is above the threshold (set to (200, 200, 200) in our experiment), that pixel is converted to a bone label. We refer to these converted PNG images as windowed images.
- P-2: Execute jDE. The objective function is set to the average IoU between each windowed image and its GT image.
- P-3: Repeat P-1 and P-2 for a certain period (a windowing and IoU sketch follows this list).
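As an illustration of P-0 to P-2, the sketch below windows a DICOM slice with a (level, width) pair, thresholds the grayscale result into a bone label (a single-channel analogue of the (200, 200, 200) RGB threshold in P-1), and returns the average IoU against the GT masks. The use of pydicom, the (level, width) parameterization of the DICOM window, and the helper names `window_dicom` and `mean_iou` are illustrative assumptions rather than the authors' exact DICOM-to-PNG pipeline.

```python
import numpy as np
import pydicom

def window_dicom(dicom_path, level, width):
    """Apply a window (level, width) to one DICOM slice and rescale to 8-bit gray values (cf. P-0)."""
    ds = pydicom.dcmread(dicom_path)
    hu = ds.pixel_array.astype(np.float32)
    hu = hu * float(getattr(ds, "RescaleSlope", 1.0)) + float(getattr(ds, "RescaleIntercept", 0.0))
    lo = level - width / 2.0
    img = np.clip((hu - lo) / max(width, 1e-6), 0.0, 1.0)
    return (img * 255.0).astype(np.uint8)

def mean_iou(params, dicom_paths, gt_masks, threshold=200):
    """jDE objective (P-2): average IoU between thresholded windowed images and their GTs (cf. P-1)."""
    level, width = params
    ious = []
    for path, gt in zip(dicom_paths, gt_masks):
        pred = window_dicom(path, level, width) > threshold   # bone label, as in P-1
        union = np.logical_or(pred, gt).sum()
        inter = np.logical_and(pred, gt).sum()
        ious.append(inter / union if union else 1.0)
    return float(np.mean(ious))
```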
2.3. U-Net
3. Evaluations
3.1. Experimental Settings
3.2. Results
3.2.1. GTs
3.2.2. Segmentation Results
4. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Bilal, A.; Zhu, L.; Deng, A.; Lu, H.; Wu, N. AI-Based Automatic Detection and Classification of Diabetic Retinopathy Using U-Net and Deep Learning. Symmetry 2022, 14, 1427.
- Bilal, A.; Sun, G.; Mazhar, S.; Imran, A.; Latif, J. A Transfer Learning and U-Net-based automatic detection of diabetic retinopathy from fundus images. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2022, 10, 663–674.
- Bilal, A.; Sun, G.; Mazhar, S.; Imran, A. Improved Grey Wolf optimization-based feature selection and classification using CNN for diabetic retinopathy detection. In Evolutionary Computing and Mobile Sustainable Networks: Proceedings of ICECMSN 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 1–14.
- Bilal, A.; Sun, G.; Mazhar, S. Diabetic retinopathy detection using weighted filters and classification using CNN. In Proceedings of the 2021 International Conference on Intelligent Technologies (CONIT), Hubli, India, 25–27 June 2021; pp. 1–6.
- Bilal, A.; Sun, G.; Li, Y.; Mazhar, S.; Khan, A.Q. Diabetic retinopathy detection and classification using mixed models for a disease grading database. IEEE Access 2021, 9, 23544–23553.
- Bilal, A.; Sun, G.; Li, Y.; Mazhar, S.; Latif, J. Lung nodules detection using grey wolf optimization by weighted filters and classification using CNN. J. Chin. Inst. Eng. 2022, 45, 175–186.
- Bilal, A.; Shafiq, M.; Fang, F.; Waqar, M.; Ullah, I.; Ghadi, Y.Y.; Long, H.; Zeng, R. IGWO-IVNet3: DL-Based Automatic Diagnosis of Lung Nodules Using an Improved Gray Wolf Optimization and InceptionNet-V3. Sensors 2022, 22, 9603.
- Bilal, A.; Sun, G.; Mazhar, S. Finger-vein recognition using a novel enhancement method with convolutional neural network. J. Chin. Inst. Eng. 2021, 44, 407–417.
- Havaei, M.; Davy, A.; Warde-Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Pal, C.; Jodoin, P.M.; Larochelle, H. Brain tumor segmentation with deep neural networks. Med. Image Anal. 2017, 35, 18–31.
- Bilic, P.; Christ, P.; Li, H.B.; Vorontsov, E.; Ben-Cohen, A.; Kaissis, G.; Szeskin, A.; Jacobs, C.; Mamani, G.E.H.; Chartrand, G.; et al. The liver tumor segmentation benchmark (LiTS). Med. Image Anal. 2023, 84, 102680.
- Pereira, S.; Pinto, A.; Alves, V.; Silva, C.A. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans. Med. Imaging 2016, 35, 1240–1251.
- Deniz, C.M.; Xiang, S.; Hallyburton, R.S.; Welbeck, A.; Babb, J.S.; Honig, S.; Cho, K.; Chang, G. Segmentation of the proximal femur from MR images using deep convolutional neural networks. Sci. Rep. 2018, 8, 16485.
- Chen, F.; Liu, J.; Zhao, Z.; Zhu, M.; Liao, H. Three-dimensional feature-enhanced network for automatic femur segmentation. IEEE J. Biomed. Health Inform. 2017, 23, 243–252.
- Chen, W.T.; Liu, W.C.; Chen, M.S. Adaptive color feature extraction based on image color distributions. IEEE Trans. Image Process. 2010, 19, 2005–2016.
- Radman, A.; Zainal, N.; Suandi, S.A. Automated segmentation of iris images acquired in an unconstrained environment using HOG-SVM and GrowCut. Digit. Signal Process. 2017, 64, 60–70.
- Wang, S.; Zhu, W.; Liang, Z.P. Shape deformation: SVM regression and application to medical image segmentation. In Proceedings of the Eighth IEEE International Conference on Computer Vision, ICCV 2001, Vancouver, BC, Canada, 7–14 July 2001; Volume 2, pp. 209–216.
- Simonyan, K.; Vedaldi, A.; Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv 2013, arXiv:1312.6034.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
- Zheng, S.; Jayasumana, S.; Romera-Paredes, B.; Vineet, V.; Su, Z.; Du, D.; Huang, C.; Torr, P.H. Conditional random fields as recurrent neural networks. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1529–1537.
- Pinheiro, P.O.; Collobert, R.; Dollár, P. Learning to segment object candidates. In Proceedings of the 28th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; Volume 28.
- Quan, T.M.; Hildebrand, D.G.C.; Jeong, W.K. FusionNet: A deep fully residual convolutional neural network for image segmentation in connectomics. Front. Comput. Sci. 2021, 3, 613981.
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848.
- Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587.
- Minaee, S.; Boykov, Y.; Porikli, F.; Plaza, A.; Kehtarnavaz, N.; Terzopoulos, D. Image segmentation using deep learning: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3523–3542.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
- Sekuboyina, A.; Husseini, M.E.; Bayat, A.; Löffler, M.; Liebl, H.; Li, H.; Tetteh, G.; Kukačka, J.; Payer, C.; Štern, D.; et al. VerSe: A vertebrae labelling and segmentation benchmark for multi-detector CT images. Med. Image Anal. 2021, 73, 102166.
- Huang, H.; Lin, L.; Tong, R.; Hu, H.; Zhang, Q.; Iwamoto, Y.; Han, X.; Chen, Y.W.; Wu, J. UNet 3+: A full-scale connected UNet for medical image segmentation. In Proceedings of the ICASSP 2020–2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 1055–1059.
- Kim-Wang, S.Y.; Bradley, P.X.; Cutcliffe, H.C.; Collins, A.T.; Crook, B.S.; Paranjape, C.S.; Spritzer, C.E.; DeFrate, L.E. Auto-segmentation of the tibia and femur from knee MR images via deep learning and its application to cartilage strain and recovery. J. Biomech. 2023, 149, 111473.
- Liukkonen, M.K.; Mononen, M.E.; Tanska, P.; Saarakkala, S.; Nieminen, M.T.; Korhonen, R.K. Application of a semi-automatic cartilage segmentation method for biomechanical modeling of the knee joint. Comput. Methods Biomech. Biomed. Eng. 2017, 20, 1453–1463.
- Flannery, S.W.; Kiapour, A.M.; Edgar, D.J.; Murray, M.M.; Fleming, B.C. Automated magnetic resonance image segmentation of the anterior cruciate ligament. J. Orthop. Res. 2021, 39, 831–840.
- Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Trans. Evol. Comput. 2006, 10, 646–657.
- Cancer Imaging Archive. Available online: https://www.cancerimagingarchive.net/ (accessed on 30 September 2022).
| Method | Artificial Bone | Spine | Femur |
|---|---|---|---|
| Deeplabv3 | 0.792 ± | 0.644 ± | 0.870 ± |
| U-net | 0.811 ± | 0.837 ± | 0.946 ± |
| U-net + jDE (Proposed) | 0.832 ± | 0.860 ± | 0.949 ± |

| Method | Artificial Bone | Spine | Femur |
|---|---|---|---|
| Deeplabv3 | 0.867 ± | 0.777 ± | 0.930 ± |
| U-net | 0.882 ± | 0.907 ± | 0.972 ± |
| U-net + jDE (Proposed) | 0.896 ± | 0.922 ± | 0.973 ± |

| Method | Artificial Bone | Spine | Femur |
|---|---|---|---|
| Deeplabv3 | 0.995 ± | 0.997 ± | 0.998 ± |
| U-net | 0.995 ± | 0.998 ± | 0.999 ± |
| U-net + jDE (Proposed) | 0.996 ± | 0.999 ± | 0.999 ± |