Interpretable and Performant Multimodal Nasopharyngeal Carcinoma GTV Segmentation with Clinical Priors Guided 3D-Gaussian-Prompted Diffusion Model (3DGS-PDM)
Simple Summary
Abstract
1. Introduction
2. Materials and Methods
2.1. Method Overview
2.2. Gaussian Initialization Module
2.3. Diffusion Segmentation Module
2.4. Dataset
2.5. Implementation Details
3. Results
3.1. Quantitative Evaluation
3.2. Qualitative Evaluation
3.3. Ablation Study
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Per-institution results for GTVp (primary gross tumor volume), GTVnd (nodal gross tumor volume), and their average; each cell is Mean ± SD. The first metric group is reported in %, the second and third in mm.

| Method | Institution | GTVp (%) | GTVnd (%) | Average (%) | GTVp (mm) | GTVnd (mm) | Average (mm) | GTVp (mm) | GTVnd (mm) | Average (mm) |
|---|---|---|---|---|---|---|---|---|---|---|
| MSU-Net | QEH | 75.10 ± 7.08 | 69.21 ± 7.99 | 72.16 ± 7.54 | 1.23 ± 0.35 | 0.96 ± 0.53 | 1.10 ± 0.44 | 3.87 ± 2.84 | 3.09 ± 2.64 | 3.48 ± 2.74 |
| MSU-Net | QMH | 78.27 ± 10.62 | 70.93 ± 7.74 | 74.60 ± 9.18 | 1.20 ± 0.49 | 0.87 ± 0.44 | 1.03 ± 0.47 | 3.46 ± 2.97 | 2.94 ± 3.22 | 3.20 ± 3.10 |
| MSU-Net | XJH | 75.31 ± 7.79 | 73.50 ± 7.34 | 74.41 ± 7.57 | 1.12 ± 0.61 | 1.18 ± 0.41 | 1.15 ± 0.51 | 3.75 ± 3.26 | 2.91 ± 2.65 | 3.33 ± 2.96 |
| MSU-Net | WWH | 79.24 ± 2.07 | 74.20 ± 7.17 | 76.72 ± 4.62 | 1.41 ± 0.43 | 1.19 ± 0.86 | 1.30 ± 0.65 | 3.40 ± 3.65 | 3.94 ± 3.45 | 3.67 ± 3.55 |
| MSU-Net | Average | 76.98 ± 6.89 | 71.96 ± 7.89 | 74.47 ± 7.39 | 1.24 ± 0.47 | 1.05 ± 0.56 | 1.15 ± 0.52 | 3.62 ± 3.18 | 3.22 ± 2.99 | 3.42 ± 3.09 |
| AD-Net | QEH | 80.88 ± 9.90 | 72.90 ± 9.67 | 76.89 ± 9.79 | 1.27 ± 0.56 | 1.20 ± 0.30 | 1.23 ± 0.43 | 3.74 ± 2.13 | 3.45 ± 1.62 | 3.60 ± 1.88 |
| AD-Net | QMH | 81.79 ± 4.24 | 71.78 ± 9.94 | 76.79 ± 7.09 | 1.11 ± 0.65 | 1.15 ± 0.45 | 1.13 ± 0.55 | 4.08 ± 1.96 | 3.40 ± 1.53 | 3.74 ± 1.75 |
| AD-Net | XJH | 82.03 ± 8.89 | 74.87 ± 13.02 | 78.45 ± 10.96 | 1.01 ± 0.57 | 0.92 ± 0.28 | 0.97 ± 0.43 | 3.82 ± 1.93 | 3.07 ± 1.66 | 3.45 ± 1.80 |
| AD-Net | WWH | 76.74 ± 13.81 | 79.53 ± 16.01 | 78.13 ± 14.91 | 1.65 ± 0.66 | 1.13 ± 1.09 | 1.39 ± 0.88 | 4.32 ± 3.02 | 3.52 ± 2.27 | 3.92 ± 2.64 |
| AD-Net | Average | 80.36 ± 11.66 | 74.77 ± 12.16 | 77.57 ± 11.91 | 1.26 ± 0.61 | 1.10 ± 0.53 | 1.18 ± 0.57 | 3.99 ± 2.26 | 3.36 ± 1.77 | 3.68 ± 2.01 |
| Multi-resU-Net | QEH | 80.57 ± 5.16 | 71.42 ± 9.59 | 76.00 ± 7.38 | 1.30 ± 0.42 | 1.01 ± 0.37 | 1.16 ± 0.40 | 3.84 ± 2.49 | 3.27 ± 2.28 | 3.56 ± 2.38 |
| Multi-resU-Net | QMH | 81.74 ± 5.85 | 73.00 ± 9.70 | 77.37 ± 7.78 | 1.37 ± 0.33 | 1.23 ± 0.47 | 1.30 ± 0.40 | 3.72 ± 2.78 | 3.02 ± 2.26 | 3.37 ± 2.52 |
| Multi-resU-Net | XJH | 84.00 ± 7.72 | 73.74 ± 8.05 | 78.87 ± 7.89 | 1.31 ± 0.39 | 1.20 ± 0.58 | 1.25 ± 0.49 | 3.82 ± 2.67 | 3.36 ± 2.06 | 3.59 ± 2.37 |
| Multi-resU-Net | WWH | 81.65 ± 19.83 | 74.72 ± 9.78 | 78.18 ± 14.81 | 1.02 ± 0.86 | 0.92 ± 0.66 | 0.97 ± 0.76 | 3.66 ± 2.74 | 2.87 ± 1.72 | 3.26 ± 2.23 |
| Multi-resU-Net | Average | 81.99 ± 9.64 | 73.22 ± 9.28 | 77.60 ± 9.46 | 1.25 ± 0.50 | 1.09 ± 0.52 | 1.17 ± 0.51 | 3.76 ± 2.67 | 3.13 ± 2.08 | 3.45 ± 2.38 |
| nnFormer | QEH | 82.43 ± 6.12 | 75.82 ± 7.56 | 79.13 ± 6.84 | 1.33 ± 0.70 | 1.16 ± 0.41 | 1.25 ± 0.55 | 4.66 ± 1.82 | 4.30 ± 2.00 | 4.48 ± 1.91 |
| nnFormer | QMH | 82.45 ± 9.09 | 74.33 ± 11.12 | 78.39 ± 10.11 | 1.07 ± 0.41 | 1.20 ± 0.25 | 1.14 ± 0.33 | 4.71 ± 1.70 | 4.00 ± 2.48 | 4.36 ± 2.09 |
| nnFormer | XJH | 74.81 ± 7.58 | 76.48 ± 11.52 | 75.65 ± 9.55 | 1.32 ± 0.48 | 1.22 ± 0.46 | 1.27 ± 0.47 | 4.28 ± 1.56 | 4.26 ± 2.16 | 4.27 ± 1.86 |
| nnFormer | WWH | 81.47 ± 12.37 | 72.93 ± 8.88 | 77.20 ± 10.62 | 1.36 ± 0.85 | 1.02 ± 0.84 | 1.19 ± 0.85 | 4.55 ± 1.16 | 4.16 ± 2.36 | 4.35 ± 1.76 |
| nnFormer | Average | 80.29 ± 8.79 | 74.89 ± 9.77 | 77.59 ± 9.28 | 1.27 ± 0.61 | 1.15 ± 0.49 | 1.21 ± 0.55 | 4.55 ± 1.56 | 4.18 ± 2.25 | 4.37 ± 1.91 |
| nnU-Net | QEH | 80.50 ± 8.13 | 75.77 ± 11.43 | 78.13 ± 9.78 | 1.05 ± 0.42 | 1.00 ± 0.61 | 1.02 ± 0.52 | 4.24 ± 1.72 | 3.93 ± 1.40 | 4.09 ± 1.56 |
| nnU-Net | QMH | 83.30 ± 6.17 | 74.65 ± 10.20 | 78.97 ± 8.18 | 1.38 ± 0.39 | 1.25 ± 0.47 | 1.32 ± 0.43 | 4.27 ± 1.56 | 4.27 ± 1.50 | 4.27 ± 1.53 |
| nnU-Net | XJH | 79.07 ± 4.27 | 77.94 ± 13.48 | 78.51 ± 8.88 | 1.25 ± 0.68 | 1.09 ± 0.59 | 1.17 ± 0.64 | 4.36 ± 1.85 | 4.34 ± 1.78 | 4.35 ± 1.82 |
| nnU-Net | WWH | 83.61 ± 14.31 | 72.52 ± 6.33 | 78.07 ± 10.32 | 1.44 ± 0.87 | 1.10 ± 0.93 | 1.27 ± 0.90 | 5.29 ± 1.95 | 4.38 ± 1.56 | 4.84 ± 1.76 |
| nnU-Net | Average | 81.62 ± 8.22 | 75.22 ± 10.36 | 78.42 ± 9.29 | 1.28 ± 0.59 | 1.11 ± 0.65 | 1.20 ± 0.62 | 4.54 ± 1.77 | 4.23 ± 1.56 | 4.39 ± 1.67 |
| Proposed | QEH | 82.51 ± 4.89 | 81.94 ± 9.40 | 82.23 ± 7.15 | 1.34 ± 0.52 | 1.22 ± 0.66 | 1.28 ± 0.59 | 4.86 ± 1.81 | 4.48 ± 1.68 | 4.67 ± 1.75 |
| Proposed | QMH | 82.08 ± 5.06 | 78.05 ± 12.55 | 80.07 ± 8.81 | 1.32 ± 0.45 | 0.94 ± 0.51 | 1.13 ± 0.48 | 4.58 ± 1.85 | 4.31 ± 1.70 | 4.45 ± 1.78 |
| Proposed | XJH | 84.86 ± 4.40 | 80.70 ± 10.48 | 82.78 ± 7.44 | 1.09 ± 0.42 | 0.98 ± 0.74 | 1.04 ± 0.58 | 4.95 ± 1.73 | 4.49 ± 1.51 | 4.72 ± 1.62 |
| Proposed | WWH | 87.71 ± 14.97 | 76.31 ± 7.97 | 82.01 ± 11.47 | 1.49 ± 1.13 | 1.62 ± 0.97 | 1.56 ± 1.05 | 4.65 ± 2.53 | 5.16 ± 1.95 | 4.91 ± 2.24 |
| Proposed | Average | 84.29 ± 7.33 | 79.25 ± 10.10 | 81.77 ± 8.72 | 1.31 ± 0.63 | 1.19 ± 0.72 | 1.25 ± 0.68 | 4.76 ± 1.98 | 4.61 ± 1.71 | 4.69 ± 1.85 |
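The (%) columns behave like a Dice-style overlap score, and every cell above pools per-patient results into a Mean ± SD pair. Below is a minimal sketch of that aggregation under stated assumptions: binary GTV masks stored as NumPy arrays, a Dice score in percent, and a sample standard deviation (ddof=1). `dice_percent` and the `scores` values are illustrative, not taken from the paper.

```python
import numpy as np

def dice_percent(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two binary masks, in percent (assumed metric)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 100.0  # both masks empty: treat as perfect agreement
    return 100.0 * 2.0 * np.logical_and(pred, gt).sum() / denom

# Pool per-patient scores into the Mean ± SD cells of the table.
# `scores` maps institution -> per-patient Dice values (hypothetical numbers).
scores = {
    "QEH": [82.1, 83.0, 81.9, 82.7],
    "QMH": [82.5, 81.7, 81.9],
}
for inst, vals in scores.items():
    v = np.asarray(vals, dtype=float)
    print(f"{inst}: {v.mean():.2f} ± {v.std(ddof=1):.2f}")
```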
Ablation over input modalities. Step 1: MRI-T1-CE only; Step 2: MRI-T1-CE + MRI-T2; Step 3: MRI-T1-CE + MRI-T2 + CT. Each step reports one metric in % and two in mm.

| Method | Institution | Step 1 (%) | Step 1 (mm) | Step 1 (mm) | Step 2 (%) | Step 2 (mm) | Step 2 (mm) | Step 3 (%) | Step 3 (mm) | Step 3 (mm) |
|---|---|---|---|---|---|---|---|---|---|---|
| MSU-Net | QEH | 71.04 | 0.94 | 3.41 | 71.63 | 1.02 | 3.45 | 72.16 | 1.10 | 3.48 |
| MSU-Net | QMH | 73.32 | 0.85 | 3.18 | 74.01 | 0.95 | 3.19 | 74.60 | 1.03 | 3.20 |
| MSU-Net | XJH | 73.26 | 1.01 | 3.29 | 73.87 | 1.10 | 3.31 | 74.41 | 1.15 | 3.33 |
| MSU-Net | WWH | 75.36 | 1.12 | 3.62 | 76.05 | 1.21 | 3.63 | 76.72 | 1.30 | 3.67 |
| MSU-Net | Average | 73.18 | 1.02 | 3.38 | 73.83 | 1.09 | 3.42 | 74.47 | 1.15 | 3.42 |
| AD-Net | QEH | 75.54 | 1.07 | 3.56 | 76.15 | 1.15 | 3.57 | 76.79 | 1.23 | 3.60 |
| AD-Net | QMH | 77.16 | 0.99 | 3.70 | 77.75 | 1.04 | 3.72 | 78.45 | 1.13 | 3.74 |
| AD-Net | XJH | 76.80 | 0.82 | 3.42 | 77.46 | 0.91 | 3.45 | 78.13 | 0.97 | 3.45 |
| AD-Net | WWH | 77.02 | 1.23 | 3.89 | 77.54 | 1.32 | 3.92 | 78.13 | 1.39 | 3.92 |
| AD-Net | Average | 76.37 | 1.05 | 3.62 | 76.96 | 1.12 | 3.65 | 77.57 | 1.18 | 3.68 |
| Multi-resU-Net | QEH | 74.77 | 1.02 | 3.50 | 75.39 | 1.08 | 3.52 | 76.00 | 1.16 | 3.56 |
| Multi-resU-Net | QMH | 76.03 | 1.13 | 3.33 | 76.69 | 1.21 | 3.33 | 77.37 | 1.30 | 3.37 |
| Multi-resU-Net | XJH | 77.86 | 1.08 | 3.56 | 78.37 | 1.15 | 3.58 | 78.87 | 1.25 | 3.59 |
| Multi-resU-Net | WWH | 76.93 | 0.86 | 3.20 | 77.53 | 0.92 | 3.25 | 78.18 | 0.97 | 3.26 |
| Multi-resU-Net | Average | 76.41 | 1.04 | 3.37 | 76.99 | 1.11 | 3.41 | 77.60 | 1.17 | 3.45 |
| nnFormer | QEH | 78.09 | 1.06 | 4.46 | 78.62 | 1.15 | 4.46 | 79.13 | 1.25 | 4.48 |
| nnFormer | QMH | 77.24 | 0.98 | 4.35 | 77.74 | 1.05 | 4.36 | 78.39 | 1.14 | 4.36 |
| nnFormer | XJH | 74.37 | 1.12 | 4.26 | 74.96 | 1.18 | 4.27 | 75.65 | 1.27 | 4.27 |
| nnFormer | WWH | 75.94 | 1.04 | 4.27 | 76.64 | 1.12 | 4.31 | 77.20 | 1.19 | 4.35 |
| nnFormer | Average | 76.20 | 1.05 | 4.29 | 76.89 | 1.14 | 4.33 | 77.59 | 1.21 | 4.37 |
| nnU-Net | QEH | 76.92 | 0.87 | 4.06 | 77.49 | 0.97 | 4.07 | 78.13 | 1.02 | 4.09 |
| nnU-Net | QMH | 77.60 | 1.15 | 4.22 | 78.29 | 1.24 | 4.26 | 78.97 | 1.32 | 4.27 |
| nnU-Net | XJH | 77.17 | 1.02 | 4.30 | 77.85 | 1.10 | 4.34 | 78.51 | 1.17 | 4.35 |
| nnU-Net | WWH | 76.78 | 1.09 | 4.80 | 77.47 | 1.18 | 4.82 | 78.07 | 1.27 | 4.84 |
| nnU-Net | Average | 77.12 | 1.07 | 4.32 | 77.72 | 1.14 | 4.37 | 78.42 | 1.20 | 4.39 |
| Proposed | QEH | 79.85 | 1.07 | 3.73 | 81.68 | 1.20 | 4.34 | 82.23 | 1.28 | 4.67 |
| Proposed | QMH | 76.91 | 0.90 | 3.39 | 79.19 | 1.04 | 4.05 | 80.07 | 1.13 | 4.45 |
| Proposed | XJH | 79.60 | 0.83 | 3.67 | 81.78 | 0.94 | 4.36 | 82.78 | 1.04 | 4.72 |
| Proposed | WWH | 79.38 | 1.32 | 3.85 | 81.34 | 1.46 | 4.41 | 82.01 | 1.56 | 4.91 |
| Proposed | Average | 78.99 | 1.02 | 3.81 | 80.97 | 1.16 | 4.34 | 81.77 | 1.25 | 4.69 |
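Each ablation step above adds one co-registered modality to the network input. The following is a minimal sketch of how such incremental multi-channel volumes could be assembled, assuming the three volumes already share a common grid after registration; `stack_modalities`, the array shapes, and the random data are illustrative rather than the paper's actual pipeline.

```python
import numpy as np

def stack_modalities(volumes: list[np.ndarray]) -> np.ndarray:
    """Concatenate co-registered 3D volumes into a (C, D, H, W) input array."""
    ref_shape = volumes[0].shape
    assert all(v.shape == ref_shape for v in volumes), "modalities must be co-registered"
    # Per-volume z-score normalization so no modality dominates the input scale.
    normed = [(v - v.mean()) / (v.std() + 1e-8) for v in volumes]
    return np.stack(normed, axis=0)

# Hypothetical co-registered volumes on a shared 3D grid.
t1ce = np.random.rand(64, 256, 256).astype(np.float32)  # MRI-T1-CE
t2   = np.random.rand(64, 256, 256).astype(np.float32)  # MRI-T2
ct   = np.random.rand(64, 256, 256).astype(np.float32)  # CT

step1 = stack_modalities([t1ce])          # Step 1: MRI-T1-CE only
step2 = stack_modalities([t1ce, t2])      # Step 2: + MRI-T2
step3 = stack_modalities([t1ce, t2, ct])  # Step 3: + CT
print(step1.shape, step2.shape, step3.shape)  # (1, ...), (2, ...), (3, ...)
```

Per-channel z-score normalization is one common choice for keeping modalities on a comparable intensity scale; the paper may normalize differently.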
Share and Cite
Zhu, J.; Ma, Z.; Ren, G.; Cai, J. Interpretable and Performant Multimodal Nasopharyngeal Carcinoma GTV Segmentation with Clinical Priors Guided 3D-Gaussian-Prompted Diffusion Model (3DGS-PDM). Cancers 2025, 17, 3660. https://doi.org/10.3390/cancers17223660