Enhanced Deep Neural Network for Prostate Segmentation in Micro-Ultrasound Images
Abstract
1. Introduction
- We propose a model for precise micro-ultrasound (micro-US) medical image segmentation, composed of dual encoders with a fusion module designed to capture both local details and long-range dependencies. A Hypergraph Neural Network (HGNN) is integrated into the skip connections to model non-pairwise correlations.
- To further enhance segmentation accuracy, a Mamba-based decoder is incorporated, utilizing VSSD blocks built upon Mamba-2 and NC-SSD.
- Experimental results demonstrate that our method achieves superior performance on a micro-US prostate segmentation dataset.
2. Materials and Methods
2.1. Overall Architecture
2.1.1. Dual Encoder
2.1.2. Fusion Module
2.1.3. Hyper GNN
2.1.4. Mamba Decoder and VSSD Block
2.2. Dataset
2.3. Augmentation
2.4. Loss Function and Evaluation
- $N$ is the total number of pixels,
- $w_i$ is the weight assigned to pixel $i$,
- $y_i$ is the ground truth label for pixel $i$,
- $\hat{y}_i$ is the predicted probability for pixel $i$.
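The variable definitions above correspond to a pixel-weighted binary cross-entropy. As a minimal NumPy sketch (not the paper's implementation; the `weighted_bce` helper and uniform example weights are illustrative assumptions), the per-pixel weighting might be applied as:

```python
import numpy as np

def weighted_bce(y_true, y_pred, weights, eps=1e-7):
    """Pixel-weighted binary cross-entropy over N pixels.

    Illustrative sketch only: w_i re-weights each pixel's loss
    contribution (e.g., to emphasize hard boundary pixels).
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    per_pixel = -(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
    return float(np.mean(weights * per_pixel))

# With uniform weights, this reduces to standard BCE.
y = np.array([1.0, 0.0, 1.0, 0.0])   # ground truth labels y_i
p = np.array([0.9, 0.1, 0.8, 0.2])   # predicted probabilities ŷ_i
w = np.ones_like(y)                  # weights w_i (uniform here)
loss = weighted_bce(y, p, w)
```

Increasing `w` at selected pixels scales their loss contribution linearly, which is the usual mechanism for boundary- or class-balancing weights.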
2.5. Implementation Details
3. Results and Discussion
4. Ablation Study
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Li, J.; Xu, C.; Lee, H.J.; Ren, S.; Zi, X.; Zhang, Z.; Wang, H.; Yu, Y.; Yang, C.; Gao, X.; et al. A genomic and epigenomic atlas of prostate cancer in Asian populations. Nature 2020, 580, 93–99. [Google Scholar] [CrossRef]
- Imran, M.; Nguyen, B.; Pensa, J.; Falzarano, S.M.; Sisk, A.E.; Liang, M.; DiBianco, J.M.; Su, L.M.; Zhou, Y.; Joseph, J.P.; et al. Image registration of in vivo micro-ultrasound and ex vivo pseudo-whole mount histopathology images of the prostate: A proof-of-concept study. Biomed. Signal Process. Control 2024, 96, 106657. [Google Scholar] [CrossRef]
- Wasih, M.; Ahmad, S.; Almekkawy, M. A robust cascaded deep neural network for image reconstruction of single plane wave ultrasound RF data. Ultrasonics 2023, 132, 106981. [Google Scholar] [CrossRef]
- Jiang, H.; Imran, M.; Muralidharan, P.; Patel, A.; Pensa, J.; Liang, M.; Benidir, T.; Grajo, J.R.; Joseph, J.P.; Terry, R.; et al. MicroSegNet: A deep learning approach for prostate segmentation on micro-ultrasound images. Comput. Med. Imaging Graph. 2024, 112, 102326. [Google Scholar] [CrossRef]
- Sun, Y.; Dai, D.; Zhang, Q.; Wang, Y.; Xu, S.; Lian, C. MSCA-Net: Multi-scale contextual attention network for skin lesion segmentation. Pattern Recognit. 2023, 139, 109524. [Google Scholar] [CrossRef]
- Al Qurri, A.; Almekkawy, M. Improved UNet with Attention for Medical Image Segmentation. Sensors 2023, 23, 8589. [Google Scholar] [CrossRef]
- Wang, C.; Xu, R.; Xu, S.; Meng, W.; Zhang, X. Automatic polyp segmentation via image-level and surrounding-level context fusion deep neural network. Eng. Appl. Artif. Intell. 2023, 123, 106168. [Google Scholar] [CrossRef]
- Huo, X.; Sun, G.; Tian, S.; Wang, Y.; Yu, L.; Long, J.; Zhang, W.; Li, A. HiFuse: Hierarchical multi-scale feature fusion network for medical image classification. Biomed. Signal Process. Control 2024, 87, 105534. [Google Scholar] [CrossRef]
- Gao, Q.; Almekkawy, M. ASU-Net++: A nested U-Net with adaptive feature extractions for liver tumor segmentation. Comput. Biol. Med. 2021, 136, 104688. [Google Scholar] [CrossRef] [PubMed]
- Ahmed, A.Q.; Almekkawy, M. Improved UNet++ Based on Kolmogorov-Arnold Convolutions. In Proceedings of the 2025 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 14–18 September 2025; pp. 905–910. [Google Scholar]
- Kang, J.; Al-Qurri, A.; Almekkawy, M. Fast and Resource-Efficient Ultrasound Segmentation Using FPGAs. In Proceedings of the 2025 IEEE International Ultrasonics Symposium (IUS), Utrecht, The Netherlands, 15–18 September 2025; pp. 1–5. [Google Scholar]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
- Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-local neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7794–7803. [Google Scholar]
- Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3146–3154. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Computer Vision—ECCV 2018, Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 3–19. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NIPS 2017), Proceedings of the Annual Conference on Neural Information Processing Systems 2017, Long Beach, CA, USA, 4–9 December 2017; Guyon, I., Von Luxburg, U., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates Inc.: Red Hook, NY, USA, 2017. [Google Scholar]
- Wang, H.; Xie, S.; Lin, L.; Iwamoto, Y.; Han, X.H.; Chen, Y.W.; Tong, R. Mixed transformer u-net for medical image segmentation. In Proceedings of the ICASSP 2022–2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 22–27 May 2022; pp. 2390–2394. [Google Scholar]
- Li, J.; Chen, J.; Tang, Y.; Wang, C.; Landman, B.A.; Zhou, S.K. Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives. Med. Image Anal. 2023, 85, 102762. [Google Scholar] [CrossRef] [PubMed]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-unet: Unet-like pure transformer for medical image segmentation. In Computer Vision—ECCV 2022 Workshops, Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 205–218. [Google Scholar]
- Zuo, S.; Xiao, Y.; Chang, X.; Wang, X. Vision transformers for dense prediction: A survey. Knowl.-Based Syst. 2022, 253, 109552. [Google Scholar] [CrossRef]
- Gao, B.B.; Huang, Z. CSTrans: Correlation-guided Self-Activation Transformer for Counting Everything. Pattern Recognit. 2024, 153, 110556. [Google Scholar] [CrossRef]
- He, D.; Zhang, Y.; Huang, H.; Si, Y.; Wang, Z.; Li, Y. Dual-branch hybrid network for lesion segmentation in gastric cancer images. Sci. Rep. 2023, 13, 6377. [Google Scholar] [CrossRef]
- Azad, R.; Kazerouni, A.; Heidari, M.; Aghdam, E.K.; Molaei, A.; Jia, Y.; Jose, A.; Roy, R.; Merhof, D. Advances in medical image analysis with vision transformers: A comprehensive review. Med. Image Anal. 2023, 91, 103000. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Y.; Liu, H.; Hu, Q. Transfuse: Fusing transformers and cnns for medical image segmentation. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2021, Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 14–24. [Google Scholar]
- Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. Transunet: Transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306. [Google Scholar] [CrossRef]
- Ahmed, A.Q.; Alqarni, A.; Almekkawy, M. Trifuse: Triplet Encoders Network for Medical Image Segmentation. In Proceedings of the 2025 IEEE 22nd International Symposium on Biomedical Imaging (ISBI), Houston, TX, USA, 14–17 April 2025; pp. 1–5. [Google Scholar]
- Chen, Y.; Wang, T.; Tang, H.; Zhao, L.; Zhang, X.; Tan, T.; Gao, Q.; Du, M.; Tong, T. CoTrFuse: A novel framework by fusing CNN and transformer for medical image segmentation. Phys. Med. Biol. 2023, 68, 175027. [Google Scholar] [CrossRef]
- Azad, R.; Al-Antary, M.T.; Heidari, M.; Merhof, D. Transnorm: Transformer provides a strong spatial normalization mechanism for a deep segmentation model. IEEE Access 2022, 10, 108205–108215. [Google Scholar] [CrossRef]
- Wang, B.; Wang, F.; Dong, P.; Li, C. Multiscale transunet++: Dense hybrid U-Net with transformer for medical image segmentation. Signal, Image Video Process. 2022, 16, 1607–1614. [Google Scholar] [CrossRef]
- Yin, Y.; Xu, W.; Chen, L.; Wu, H. CoT-UNet++: A medical image segmentation method based on contextual transformer and dense connection. Math. Biosci. Eng. 2023, 20, 8320–8336. [Google Scholar] [CrossRef]
- Al-Qurri, A.; Almekkawy, M. Ultrasound Image Segmentation using a Model of Transformer and DFT. In Proceedings of the 2024 IEEE UFFC Latin America Ultrasonics Symposium (LAUS), Montevideo, Uruguay, 8–10 May 2024; pp. 1–4. [Google Scholar]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
- Zeng, Z.; Hu, Q.; Xie, Z.; Zhou, J.; Xu, Y. Small but Mighty: Enhancing 3D Point Clouds Semantic Segmentation with U-Next Framework. arXiv 2023, arXiv:2304.00749. [Google Scholar] [CrossRef]
- Qin, X.; Zhang, Z.; Huang, C.; Gao, C.; Dehghan, M.; Jagersand, M. Basnet: Boundary-aware salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7479–7489. [Google Scholar]
- Liu, Y.; Tian, Y.; Zhao, Y.; Yu, H.; Xie, L.; Wang, Y.; Ye, Q.; Liu, Y. Vmamba: Visual state space model. arXiv 2024, arXiv:2401.10166. [Google Scholar] [PubMed]
- Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
- Han, Y.; Wang, P.; Kundu, S.; Ding, Y.; Wang, Z. Vision hgnn: An image is more than a graph of nodes. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 19878–19888. [Google Scholar]
- Feng, Y.; You, H.; Zhang, Z.; Ji, R.; Gao, Y. Hypergraph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; pp. 3558–3565. [Google Scholar]
- Peng, J.; Yang, J.; Xia, C.; Li, X.; Guo, Y.; Fu, Y.; Chen, X.; Cui, Z. Make U-Net Greater: An Easy-to-Embed Approach to Improve Segmentation Performance Using Hypergraph. Comput. Syst. Sci. Eng. 2022, 42, 319–333. [Google Scholar] [CrossRef]
- Chai, S.; Jain, R.K.; Mo, S.; Liu, J.; Yang, Y.; Li, Y.; Tateyama, T.; Lin, L.; Chen, Y.W. A Novel Adaptive Hypergraph Neural Network for Enhancing Medical Image Segmentation. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2024, Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Marrakesh, Morocco, 6–10 October 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 23–33. [Google Scholar]
- Dao, T.; Gu, A. Transformers are ssms: Generalized models and efficient algorithms through structured state space duality. arXiv 2024, arXiv:2405.21060. [Google Scholar] [CrossRef]
- Al-Qurri, A.; Almekkawy, M. Enhancing Medical Image Segmentation with Mamba and UNet++. In Proceedings of the 2025 IEEE 22nd International Symposium on Biomedical Imaging (ISBI), Houston, TX, USA, 14–17 April 2025; pp. 1–5. [Google Scholar]
- Shi, Y.; Dong, M.; Li, M.; Xu, C. VSSD: Vision Mamba with Non-Causal State Space Duality. arXiv 2024, arXiv:2407.18559. [Google Scholar] [CrossRef]
- Gu, A.; Dao, T. Mamba: Linear-time sequence modeling with selective state spaces. In Proceedings of the First Conference on Language Modeling, Philadelphia, PA, USA, 7–9 October 2024. [Google Scholar]
- Zhu, L.; Liao, B.; Zhang, Q.; Wang, X.; Liu, W.; Wang, X. Vision mamba: Efficient visual representation learning with bidirectional state space model. arXiv 2024, arXiv:2401.09417. [Google Scholar] [CrossRef]
- Tupper, A.; Gagné, C. Revisiting Data Augmentation for Ultrasound Images. arXiv 2025, arXiv:2501.13193. [Google Scholar] [CrossRef]
- Jiang, J.; Zhang, J.; Liu, W.; Gao, M.; Hu, X.; Yan, X.; Huang, F.; Liu, Y. Rwkv-unet: Improving unet with long-range cooperation for effective medical image segmentation. arXiv 2025, arXiv:2501.08458. [Google Scholar]
- Li, C.; Liu, X.; Li, W.; Wang, C.; Liu, H.; Liu, Y.; Chen, Z.; Yuan, Y. U-kan makes strong backbone for medical image segmentation and generation. In Proceedings of the AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, 25 February–4 March 2025; pp. 4652–4660. [Google Scholar]
- Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Proceedings of the 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Granada, Spain, 20 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 3–11. [Google Scholar]
- Heidari, M.; Kazerouni, A.; Soltany, M.; Azad, R.; Aghdam, E.K.; Cohen-Adad, J.; Merhof, D. Hiformer: Hierarchical multi-scale representations using transformers for medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 6202–6212. [Google Scholar]
- Azad, R.; Niggemeier, L.; Hüttemann, M.; Kazerouni, A.; Aghdam, E.K.; Velichko, Y.; Bagci, U.; Merhof, D. Beyond self-attention: Deformable large kernel attention for medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2024; pp. 1287–1297. [Google Scholar]
- Wu, R.; Liu, Y.; Liang, P.; Chang, Q. H-vmunet: High-order vision mamba unet for medical image segmentation. Neurocomputing 2025, 624, 129447. [Google Scholar] [CrossRef]
- Zhang, M.; Yu, Y.; Gu, L.; Lin, T.; Tao, X. VM-UNET-V2 Rethinking Vision Mamba UNet for Medical Image Segmentation. arXiv 2024, arXiv:2403.09157. [Google Scholar]
- Ruan, J.; Li, J.; Xiang, S. VM-UNet: Vision Mamba UNet for Medical Image Segmentation. arXiv 2024, arXiv:2402.02491. [Google Scholar] [CrossRef]
- Sun, S.; Cao, Z.; Zhu, H.; Zhao, J. A survey of optimization methods from a machine learning perspective. IEEE Trans. Cybern. 2019, 50, 3668–3681. [Google Scholar] [CrossRef]
- Li, Q.; Xiong, D.; Shang, M. Adjusted stochastic gradient descent for latent factor analysis. Inf. Sci. 2022, 588, 196–213. [Google Scholar] [CrossRef]
- Loshchilov, I.; Hutter, F. Decoupled weight decay regularization. arXiv 2017, arXiv:1711.05101. [Google Scholar]
- Roux, N.; Schmidt, M.; Bach, F. A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in Neural Information Processing Systems 25 (NIPS 2012), Proceedings of the Annual Conference on Neural Information Processing Systems 2012, Lake Tahoe, NV, USA, 3–6 December 2012; Pereira, F., Burges, C.J., Bottou, L., Weinberger, K.Q., Eds.; Curran Associates Inc.: Red Hook, NY, USA, 2012. [Google Scholar]
- Foret, P.; Kleiner, A.; Mobahi, H.; Neyshabur, B. Sharpness-aware minimization for efficiently improving generalization. arXiv 2020, arXiv:2010.01412. [Google Scholar]
- Xie, X.; Zhou, P.; Li, H.; Lin, Z.; Yan, S. Adan: Adaptive nesterov momentum algorithm for faster optimizing deep models. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 9508–9520. [Google Scholar] [CrossRef]
- Chen, X.; Hsieh, C.J.; Gong, B. When vision transformers outperform resnets without pre-training or strong data augmentations. arXiv 2021, arXiv:2106.01548. [Google Scholar]
- Qurri, A.A.; Almekkawy, M. Hybrid MultiResUNet with transformers for medical image segmentation. Biomed. Signal Process. Control 2025, 110, 108056. [Google Scholar] [CrossRef]
- Li, H.; Xu, Z.; Taylor, G.; Studer, C.; Goldstein, T. Visualizing the loss landscape of neural nets. In Advances in Neural Information Processing Systems 31 (NeurIPS 2018), Proceedings of Annual Conference on Neural Information Processing Systems 2018, Montréal, QC, Canada, 3–8 December 2018; Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R., Eds.; Curran Associates Inc.: Red Hook, NY, USA, 2018. [Google Scholar]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 10012–10022. [Google Scholar]
- Chen, J.; Duan, H.; Zhang, X.; Gao, B.; Grau, V.; Han, J. From gaze to insight: Bridging human visual attention and vision language model explanation for weakly-supervised medical image segmentation. IEEE Trans. Med. Imaging 2025. [Google Scholar] [CrossRef] [PubMed]
| Model | DSC↑ (150 Ep.) | HD↓ (150 Ep.) | DSC↑ (10 Ep.) | HD↓ (10 Ep.) | FLOPs (G) | #Params (M) | GPU Mem (GB) | Inference (s) |
|---|---|---|---|---|---|---|---|---|
| UNet [26] | 0.8897 | 5.94 | 0.8420 | 7.62 | 21.37 | 7.85 | 5.93 | 120 |
| UNet++ [51] | 0.9081 | 3.81 | 0.8894 | 4.82 | 53.1 | 9.16 | 11.39 | 134 |
| TransUNet [26] | 0.9293 | 2.39 | 0.9303 | 2.20 | 58.49 | 105.28 | 10.57 | 133 |
| Swin-UNet [20] | 0.9327 | 2.04 | 0.9218 | 2.49 | 17.4 | 41.38 | 19.46 | 165 |
| TransNorm [29] | 0.9214 | 2.63 | 0.9232 | 2.45 | 62.18 | 117.63 | 16.54 | 130 |
| HiFormer-B [52,53] | 0.8967 | 4.40 | 0.8784 | 5.34 | 8.045 | 25.51 | 11.40 | 141 |
| CoTrFuse [28] | 0.9266 | 2.74 | 0.9065 | 3.42 | 33.07 | 56.19 | 15.30 | 166 |
| RWKV-UNet [49] | 0.8866 | 4.62 | 0.7958 | 5.85 | 57.44 | 120.24 | 10.96 | 122 |
| H-vmunet [54] | 0.8817 | 3.34 | 0.8746 | 4.39 | 1140.6 | 8.97 | 3.79 | 167 |
| Seg. U-KAN [50] | 0.8918 | 5.09 | 0.8409 | 6.81 | 14.02 | 6.35 | 15.01 | 130 |
| VM-UNet [55,56] | 0.9042 | 2.89 | 0.8630 | 5.30 | 7.56 | 34.62 | 18.97 | 126 |
| MicroSegNet [4] | 0.9341 | 2.23 | 0.939 * | 2.02 | 58.49 | 105.28 | 10.96 | 131 |
| Ours | 0.9416 | 1.93 | 0.9380 | 2.02 | 70.72 | 93.13 | 11.93 | 202 |
| Methods | DSC↑ | HD95↓ |
|---|---|---|
| MicroSegNet with Augmentation (150 Epochs) | 0.9321 | 2.29 |
| MicroSegNet with Augmentation (10 Epochs) | 0.9356 | 2.04 |
| Ours | 0.9416 | 1.93 |
| Method | Mamba (VSSD) | DS | HyperGraph | Augment. | FLOPs (G) | #Params (M) | DSC↑ | HD↓ |
|---|---|---|---|---|---|---|---|---|
| Baseline | | | | | 42.4 | 66.79 | 92.84 | 2.28 |
| | ✓ | | | | 68.5 | 93.05 | 93.03 | 2.19 |
| | ✓ | ✓ | | | 68.5 | 93.05 | 93.73 | 1.95 |
| | ✓ | ✓ | ✓ | | 70.74 | 93.13 | 93.75 | 1.99 |
| Proposed Model | ✓ | ✓ | ✓ | ✓ | 70.72 | 93.13 | 94.16 | 1.93 |
| Methods | DSC↑ | HD95↓ |
|---|---|---|
| Without HyperGraph | 93.96 | 1.95 |
| With HyperGraph (Overall) | 94.16 | 1.93 |
| Methods | DSC↑ | HD95↓ |
|---|---|---|
| Swin-T | 94.16 | 1.93 |
| Swin-S | 93.96 | 1.95 |
Share and Cite
AL-Qurri, A.; Thaher, A.; Almekkawy, M.K. Enhanced Deep Neural Network for Prostate Segmentation in Micro-Ultrasound Images. Sensors 2025, 25, 6815. https://doi.org/10.3390/s25226815

