Robust Dual-Stream Diagnosis Network for Ultrasound Breast Tumor Classification with Cross-Domain Segmentation Priors
Abstract
1. Introduction
- We propose a cross-domain segmentation-prior-guided classification strategy and develop DSDNet, in which a frozen segmentation branch provides robust spatial priors to guide classification. This decoupled dual-stream design enhances classification robustness across heterogeneous ultrasound imaging conditions, including different devices and clinical centers.
- To fully exploit segmentation priors, we design two dedicated modules: the Dual-Stream Mask Attention (DSMA) module, which jointly models foreground–background dependencies to enhance lesion-aware representations, and the Segmentation Prior Guidance Fusion (SPGF) module, which injects multi-scale segmentation priors into the classification backbone. Together, they enable more accurate and stable tumor representation learning.
- We introduce a Mamba-Inspired Linear Attention (MILA) mechanism and build a Mamba-Inspired Linear Transformer (MILT) block as the core of the classification branch. MILA integrates the advantages of Mamba into a linear attention framework, enabling efficient modeling of long-range dependencies and fine-grained tumor structures in breast ultrasound images.
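The foreground–background gating idea behind DSMA can be illustrated with a minimal sketch. The paper's actual module layout is not reproduced here; this hypothetical version simply shows how a segmentation prior can split a feature map into lesion and background streams whose pooled descriptors re-weight channels:

```python
import numpy as np

def dsma_sketch(feat, mask):
    """Hypothetical dual-stream mask attention: the segmentation prior
    splits features into foreground/background streams; their pooled
    descriptors jointly re-weight the channels of the feature map."""
    m = mask[..., None]                          # (H, W, 1) prior in [0, 1]
    fg, bg = feat * m, feat * (1.0 - m)          # foreground / background streams
    d_fg = fg.mean(axis=(0, 1))                  # (C,) foreground descriptor
    d_bg = bg.mean(axis=(0, 1))                  # (C,) background descriptor
    w = 1.0 / (1.0 + np.exp(-(d_fg - d_bg)))     # emphasize lesion-dominant channels
    return feat * w                              # lesion-aware re-weighted features

rng = np.random.default_rng(1)
H, W, C = 8, 8, 4
feat = rng.normal(size=(H, W, C))
mask = (rng.random((H, W)) > 0.5).astype(float)  # stand-in segmentation prior
out = dsma_sketch(feat, mask)
print(out.shape)  # (8, 8, 4)
```

The key design point is that both streams contribute: background statistics are modeled explicitly rather than discarded, which is what "jointly models foreground–background dependencies" implies.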
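The efficiency claim for MILA rests on the linear-attention trick: replacing the softmax with a positive feature map lets attention be computed in O(N·d²) rather than O(N²·d). The sketch below is a generic linear attention with an input-dependent gate standing in for Mamba-style selectivity; MILA's exact formulation is defined in the paper, not here:

```python
import numpy as np

def gated_linear_attention(q, k, v, g):
    """Linear-attention sketch: a positive feature map phi replaces the
    softmax, so the key-value summary kv is a d x d matrix shared by all
    queries. g is a hypothetical input-dependent (Mamba-style) gate."""
    phi = lambda x: np.maximum(x, 0.0) + 1e-6    # ELU-like positive map
    qf, kf = phi(q), phi(k)                      # (N, d) mapped queries/keys
    kv = kf.T @ v                                # (d, d) global key-value summary
    z = qf @ kf.sum(axis=0)                      # (N,) per-query normalizer
    out = (qf @ kv) / z[:, None]                 # (N, d) attended values
    return out * g                               # gate modulates each token

rng = np.random.default_rng(0)
N, d = 16, 8
q, k, v = rng.normal(size=(3, N, d))
g = 1.0 / (1.0 + np.exp(-rng.normal(size=(N, d))))  # sigmoid gate
y = gated_linear_attention(q, k, v, g)
print(y.shape)  # (16, 8)
```

Because `kv` does not grow with sequence length, long-range dependencies across the whole image are aggregated at linear cost, which is the property MILA exploits.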
2. Methodology
2.1. Overall Network Architecture
2.2. Dual-Stream Mask Attention Module
2.3. Segmentation Prior Guidance Fusion Module
2.4. Mamba-Inspired Linear Transformer Block
3. Experiments
3.1. Dataset
3.2. Training Details
3.3. Evaluation Metrics
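The result tables report ACC, Recall, weighted precision (Prec_w), weighted F1 (F1_w), and Cohen's Kappa. A minimal reference implementation is sketched below; the paper's exact averaging choices are an assumption (recall is macro-averaged here):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """ACC, macro Recall, support-weighted Precision/F1, Cohen's Kappa."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    n = len(y_true)
    acc = np.mean(y_true == y_pred)
    prec, rec, f1, support = [], [], [], []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        p = tp / max(np.sum(y_pred == c), 1)     # per-class precision
        r = tp / max(np.sum(y_true == c), 1)     # per-class recall
        prec.append(p); rec.append(r)
        f1.append(0.0 if p + r == 0 else 2 * p * r / (p + r))
        support.append(np.sum(y_true == c) / n)  # class weight
    support = np.array(support)
    prec_w = float(np.dot(prec, support))        # weighted precision
    f1_w = float(np.dot(f1, support))            # weighted F1
    # Cohen's kappa: observed vs. chance agreement
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)
    kappa = float((acc - pe) / (1 - pe))
    return float(acc), float(np.mean(rec)), prec_w, f1_w, kappa

acc, rec, prec_w, f1_w, kappa = classification_metrics(
    [0, 0, 0, 1, 1, 1, 1, 1],   # ground truth (0 = benign, 1 = malignant)
    [0, 0, 1, 1, 1, 1, 0, 1])   # predictions
print(acc, rec, prec_w, f1_w, kappa)
```

Kappa is the most informative column under class imbalance (e.g., GDPH_SYSUCC has 886 benign vs. 1519 malignant images), since it discounts agreement expected by chance.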
4. Results and Discussion
4.1. Comparison with State-of-the-Art Methods
4.1.1. Quantitative Comparison
4.1.2. Qualitative Visualization
4.2. Ablation Studies
4.2.1. Ablation of Segmentation Prior Guidance
4.2.2. Ablation of SPGF and DSMA Modules
4.2.3. Ablation of MILA Within the MILT Block
4.3. Cross-Dataset Validation
4.4. Complexity Analysis
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Kriti; Virmani, J.; Agarwal, R. A review of segmentation algorithms applied to B-mode breast ultrasound images: A characterization approach. Arch. Comput. Methods Eng. 2021, 28, 2567–2606.
- Bray, F.; Laversanne, M.; Sung, H.; Ferlay, J.; Siegel, R.L.; Soerjomataram, I.; Jemal, A. Global cancer statistics 2022: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2024, 74, 229–263.
- Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2021, 71, 209–249.
- Sureshkumar, V.; Prasad, R.S.N.; Balasubramaniam, S.; Jagannathan, D.; Daniel, J.; Dhanasekaran, S. Breast cancer detection and analytics using hybrid CNN and extreme learning machine. J. Pers. Med. 2024, 14, 792.
- Chen, H.; Ma, M.; Liu, G.; Wang, Y.; Jin, Z.; Liu, C. Breast tumor classification in ultrasound images by fusion of deep convolutional neural network and shallow LBP feature. J. Digit. Imaging 2023, 36, 932–946.
- Luo, Y.; Huang, Q.; Liu, L. Classification of tumor in one single ultrasound image via a novel multi-view learning strategy. Pattern Recognit. 2023, 143, 109776.
- Ying, T.; Ya-Ling, C.; Yu, Y.; Rui-Qing, H. Breast ultrasound image despeckling using multi-filtering DFrFT and adaptive fast BM3D. Comput. Methods Programs Biomed. 2024, 246, 108042.
- Horsch, K.; Giger, M.L.; Vyborny, C.J.; Venta, L.A. Performance of computer-aided diagnosis in the interpretation of lesions on breast sonography. Acad. Radiol. 2004, 11, 272–280.
- Huang, Y.-L.; Chen, D.-R. Watershed segmentation for breast tumor in 2-D sonography. Ultrasound Med. Biol. 2004, 30, 625–632.
- Fan, H.; Meng, F.; Liu, Y.; Kong, F.; Ma, J.; Lv, Z. A novel breast ultrasound image automated segmentation algorithm based on seeded region growing integrating gradual equipartition threshold. Multimed. Tools Appl. 2019, 78, 27915–27932.
- Patra, D.K.; Si, T.; Mondal, S.; Mukherjee, P. Breast DCE-MRI segmentation for lesion detection by multi-level thresholding using student psychological based optimization. Biomed. Signal Process. Control 2021, 69, 102925.
- Huang, Q.; Zhang, F.; Li, X. A new breast tumor ultrasonography CAD system based on decision tree and BI-RADS features. World Wide Web 2018, 21, 1491–1504.
- Liu, Y.; Ren, L.; Cao, X.; Tong, Y. Breast tumors recognition based on edge feature extraction using support vector machine. Biomed. Signal Process. Control 2020, 58, 101825.
- Kumar, P.; Nair, G.G. An efficient classification framework for breast cancer using hyper parameter tuned Random Decision Forest Classifier and Bayesian Optimization. Biomed. Signal Process. Control 2021, 68, 102682.
- Caorsi, S.; Lenzi, C. Can a mm-wave ultra-wideband ANN-based radar data processing approach be used for breast cancer detection? In Proceedings of the 2017 International Conference on Electromagnetics in Advanced Applications (ICEAA); IEEE: New York, NY, USA, 2017.
- Hu, K.; Li, M.; Song, Z.; Xu, K.; Xia, Q.; Sun, N.; Zhou, P.; Xia, M. A review of research on reinforcement learning algorithms for multi-agents. Neurocomputing 2024, 599, 128068.
- Hu, K.; Xu, K.; Xia, Q.; Li, M.; Song, Z.; Song, L.; Sun, N. An overview: Attention mechanisms in multi-agent reinforcement learning. Neurocomputing 2024, 598, 128015.
- Ma, K.; Hu, K.; Chen, J.; Jiang, M.; Xu, Y.; Xia, M.; Weng, L. OSNet: An edge enhancement network for a joint application of SAR and optical images. Remote Sens. 2025, 17, 505.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015.
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
- Sirjani, N.; Oghli, M.G.; Tarzamni, M.K.; Gity, M.; Shabanzadeh, A.; Ghaderi, P.; Shiri, I.; Akhavan, A.; Faraji, M.; Taghipour, M. A novel deep learning model for breast lesion classification using ultrasound images: A multicenter data evaluation. Phys. Medica 2023, 107, 102560.
- Deb, S.D.; Jha, R.K. Breast ultrasound image classification using fuzzy-rank-based ensemble network. Biomed. Signal Process. Control 2023, 85, 104871.
- Alhussan, A.A.; Eid, M.M.; Towfek, S.K.; Khafaga, D.S. Breast cancer classification depends on the dynamic dipper throated optimization algorithm. Biomimetics 2023, 8, 163.
- Chen, F.; Wang, J.; Liu, H.; Kong, W.; Zhao, Z.; Ma, L.; Liao, H.; Zhang, D. Frequency constraint-based adversarial attack on deep neural networks for medical image classification. Comput. Biol. Med. 2023, 164, 107248.
- Manzari, O.N.; Ahmadabadi, H.; Kashiani, H.; Shokouhi, S.B.; Ayatollahi, A. MedViT: A robust vision transformer for generalized medical image classification. Comput. Biol. Med. 2023, 157, 106791.
- Gheflati, B.; Rivaz, H. Vision transformers for classification of breast ultrasound images. In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); IEEE: New York, NY, USA, 2022.
- Üzen, H.; Firat, H.; Atila, O.; Şengür, A. Swin transformer-based fork architecture for automated breast tumor classification. Expert Syst. Appl. 2024, 256, 125009.
- Zhou, Y.; Chen, H.; Li, Y.; Liu, Q.; Xu, X.; Wang, S.; Yap, P.-T.; Shen, D. Multi-task learning for segmentation and classification of tumors in 3D automated breast ultrasound images. Med. Image Anal. 2021, 70, 101918.
- Mishra, A.K.; Roy, P.; Bandyopadhyay, S.; Das, S.K. A multi-task learning based approach for efficient breast cancer detection and classification. Expert Syst. 2022, 39, e13047.
- Kang, Q.; Lao, Q.; Li, Y.; Jiang, Z.; Qiu, Y.; Zhang, S.; Li, K. Thyroid nodule segmentation and classification in ultrasound images through intra- and inter-task consistent learning. Med. Image Anal. 2022, 79, 102443.
- Zhu, C.; Chai, X.; Wang, Z.; Xiao, Y.; Zhang, R.; Yang, Z.; Feng, J. DBL-Net: A dual-branch learning network with information from spatial and frequency domains for tumor segmentation and classification in breast ultrasound image. Biomed. Signal Process. Control 2024, 93, 106221.
- Aumente-Maestro, C.; Díez, J.; Remeseiro, B. A multi-task framework for breast cancer segmentation and classification in ultrasound imaging. Comput. Methods Programs Biomed. 2025, 260, 108540.
- Jiang, T.; Guo, J.; Xing, W.; Yu, M.; Li, Y.; Zhang, B.; Dong, Y.; Ta, D. A prior segmentation knowledge enhanced deep learning system for the classification of tumors in ultrasound image. Eng. Appl. Artif. Intell. 2025, 142, 109926.
- Fan, Q.; Huang, H.; Chen, M.; Liu, H.; He, R. RMT: Retentive networks meet vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024.
- Dong, X.; Bao, J.; Chen, D.; Zhang, W.; Yu, N.; Yuan, L.; Guo, B. CSWin transformer: A general vision transformer backbone with cross-shaped windows. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022.
- Su, J.; Ahmed, M.; Lu, Y.; Pan, S.; Bo, W.; Liu, Y. RoFormer: Enhanced transformer with rotary position embedding. Neurocomputing 2024, 568, 127063.
- Al-Dhabyani, W.; Gomaa, M.; Khaled, H.; Fahmy, A. Dataset of breast ultrasound images. Data Brief 2020, 28, 104863.
- Yap, M.H.; Pons, G.; Martí, J.; Ganau, S.; Sentís, M.; Zwiggelaar, R.; Davison, A.K.; Martí, R. Automated breast ultrasound lesions detection using convolutional neural networks. IEEE J. Biomed. Health Inform. 2017, 22, 1218–1226.
- Mo, Y.; Han, C.; Liu, Y.; Liu, M.; Shi, Z.; Lin, J.; Zhao, B.; Huang, C.; Qiu, B.; Cui, Y.; et al. HoVer-Trans: Anatomy-aware HoVer-Transformer for ROI-free breast cancer diagnosis in ultrasound images. IEEE Trans. Med. Imaging 2023, 42, 1696–1706.
- Gómez-Flores, W.; Gregorio-Calas, M.J.; Pereira, W.C.d.A. BUS-BRA: A breast ultrasound dataset for assessing computer-aided diagnosis systems. Med. Phys. 2024, 51, 3110–3123.
- Liu, Z.; Mao, H.; Wu, C.-Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A ConvNet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022.
- Yang, J.; Li, C.; Dai, X.; Gao, J. Focal modulation networks. Adv. Neural Inf. Process. Syst. 2022, 35, 4203–4217.
- Li, K.; Wang, Y.; Zhang, J.; Gao, P.; Song, G.; Liu, Y.; Li, H.; Qiao, Y. UniFormer: Unifying convolution and self-attention for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 12581–12600.
- Zhu, L.; Wang, X.; Ke, Z.; Zhang, W.; Lau, R. BiFormer: Vision transformer with bi-level routing attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023.
- Shi, D. TransNeXt: Robust foveal visual perception for vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024.
- Yu, W.; Wang, X. MambaOut: Do we really need Mamba for vision? arXiv 2024, arXiv:2405.07992.
- Yun, S.; Ro, Y. SHViT: Single-head vision transformer with memory efficient macro design. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024.
- Meng, W.; Luo, Y.; Li, X.; Jiang, D.; Zhang, Z. PolaFormer: Polarity-aware linear attention for vision transformers. arXiv 2025, arXiv:2501.15061.
Number of images per dataset:

| Category | BUSI | BUS | GDPH_SYSUCC |
|---|---|---|---|
| Benign (Class 0) | 487 | 110 | 886 |
| Malignant (Class 1) | 210 | 53 | 1519 |
| Total | 697 | 163 | 2405 |
| Methods | ACC | Recall | Prec_w | F1_w | Kappa |
|---|---|---|---|---|---|
| ConvNeXt-B | 0.811 ± 0.030 | 0.771 ± 0.028 | 0.808 ± 0.030 | 0.809 ± 0.029 | 0.558 ± 0.064 |
| Focal-B | 0.782 ± 0.050 | 0.726 ± 0.062 | 0.776 ± 0.055 | 0.775 ± 0.053 | 0.475 ± 0.126 |
| UniFormer-B | 0.841 ± 0.022 | 0.799 ± 0.032 | 0.843 ± 0.020 | 0.836 ± 0.023 | 0.622 ± 0.054 |
| BiFormer-S | 0.858 ± 0.009 | 0.829 ± 0.019 | 0.857 ± 0.010 | 0.856 ± 0.010 | 0.670 ± 0.025 |
| TransNeXt-T | 0.858 ± 0.015 | 0.842 ± 0.019 | 0.860 ± 0.015 | 0.858 ± 0.015 | 0.678 ± 0.034 |
| MambaOut-B | 0.847 ± 0.014 | 0.815 ± 0.016 | 0.847 ± 0.012 | 0.845 ± 0.013 | 0.644 ± 0.028 |
| DSDNet (Ours) | 0.878 ± 0.028 | 0.866 ± 0.034 | 0.880 ± 0.029 | 0.878 ± 0.027 | 0.724 ± 0.063 |
| Methods | ACC | Recall | Prec_w | F1_w | Kappa |
|---|---|---|---|---|---|
| ConvNeXt-B | 0.687 ± 0.026 | 0.555 ± 0.099 | 0.578 ± 0.177 | 0.589 ± 0.085 | 0.109 ± 0.184 |
| Focal-B | 0.706 ± 0.039 | 0.580 ± 0.072 | 0.662 ± 0.140 | 0.637 ± 0.082 | 0.184 ± 0.159 |
| UniFormer-B | 0.816 ± 0.047 | 0.774 ± 0.052 | 0.813 ± 0.049 | 0.812 ± 0.048 | 0.569 ± 0.108 |
| BiFormer-S | 0.828 ± 0.019 | 0.782 ± 0.047 | 0.835 ± 0.025 | 0.821 ± 0.026 | 0.590 ± 0.066 |
| TransNeXt-T | 0.742 ± 0.045 | 0.644 ± 0.089 | 0.710 ± 0.153 | 0.697 ± 0.095 | 0.318 ± 0.188 |
| MambaOut-B | 0.718 ± 0.033 | 0.579 ± 0.050 | 0.716 ± 0.159 | 0.641 ± 0.064 | 0.195 ± 0.122 |
| DSDNet (Ours) | 0.836 ± 0.078 | 0.789 ± 0.098 | 0.839 ± 0.077 | 0.827 ± 0.085 | 0.606 ± 0.195 |
| Methods | ACC | Recall | Prec_w | F1_w | Kappa |
|---|---|---|---|---|---|
| ConvNeXt-B | 0.830 ± 0.016 | 0.826 ± 0.013 | 0.835 ± 0.013 | 0.832 ± 0.015 | 0.641 ± 0.030 |
| Focal-B | 0.717 ± 0.025 | 0.672 ± 0.033 | 0.716 ± 0.020 | 0.705 ± 0.029 | 0.361 ± 0.061 |
| UniFormer-B | 0.870 ± 0.015 | 0.865 ± 0.016 | 0.872 ± 0.014 | 0.871 ± 0.015 | 0.724 ± 0.031 |
| BiFormer-S | 0.871 ± 0.013 | 0.867 ± 0.012 | 0.873 ± 0.012 | 0.871 ± 0.013 | 0.725 ± 0.026 |
| TransNeXt-T | 0.866 ± 0.023 | 0.857 ± 0.029 | 0.868 ± 0.023 | 0.866 ± 0.024 | 0.712 ± 0.051 |
| MambaOut-B | 0.864 ± 0.011 | 0.856 ± 0.011 | 0.865 ± 0.010 | 0.864 ± 0.011 | 0.709 ± 0.022 |
| DSDNet (Ours) | 0.882 ± 0.014 | 0.878 ± 0.017 | 0.884 ± 0.015 | 0.883 ± 0.014 | 0.749 ± 0.031 |
| Segmentation Branch | Classification Branch | ACC | Recall | Prec_w | F1_w | Kappa |
|---|---|---|---|---|---|---|
| × | √ | 0.824 ± 0.010 | 0.797 ± 0.020 | 0.824 ± 0.009 | 0.820 ± 0.012 | 0.610 ± 0.029 |
| √ | √ | 0.882 ± 0.014 | 0.878 ± 0.017 | 0.884 ± 0.015 | 0.883 ± 0.014 | 0.749 ± 0.031 |
| SPGF | DSMA | ACC | Recall | Prec_w | F1_w | Kappa |
|---|---|---|---|---|---|---|
| × | × | 0.850 ± 0.018 | 0.837 ± 0.008 | 0.855 ± 0.009 | 0.851 ± 0.016 | 0.664 ± 0.029 |
| √ | × | 0.861 ± 0.024 | 0.840 ± 0.023 | 0.864 ± 0.020 | 0.860 ± 0.023 | 0.683 ± 0.049 |
| × | √ | 0.863 ± 0.023 | 0.841 ± 0.023 | 0.864 ± 0.023 | 0.862 ± 0.022 | 0.686 ± 0.050 |
| √ | √ | 0.878 ± 0.028 | 0.866 ± 0.034 | 0.880 ± 0.029 | 0.878 ± 0.027 | 0.724 ± 0.063 |
| Methods | ACC | Recall | Prec_w | F1_w | Kappa |
|---|---|---|---|---|---|
| MHRA | 0.869 ± 0.015 | 0.854 ± 0.022 | 0.871 ± 0.016 | 0.869 ± 0.015 | 0.703 ± 0.035 |
| SHSA | 0.872 ± 0.018 | 0.853 ± 0.036 | 0.875 ± 0.021 | 0.871 ± 0.019 | 0.706 ± 0.048 |
| Polalinear Attention | 0.862 ± 0.022 | 0.835 ± 0.025 | 0.865 ± 0.020 | 0.861 ± 0.021 | 0.681 ± 0.046 |
| MILA | 0.878 ± 0.028 | 0.866 ± 0.034 | 0.880 ± 0.029 | 0.878 ± 0.027 | 0.724 ± 0.063 |
Cross-dataset validation: trained on GDPH_SYSUCC, tested on BUSI (left block) and BUS (right block):

| Methods | ACC (BUSI) | Recall (BUSI) | Prec_w (BUSI) | F1_w (BUSI) | Kappa (BUSI) | ACC (BUS) | Recall (BUS) | Prec_w (BUS) | F1_w (BUS) | Kappa (BUS) |
|---|---|---|---|---|---|---|---|---|---|---|
| ConvNeXt-B | 0.655 ± 0.041 | 0.685 ± 0.037 | 0.727 ± 0.030 | 0.666 ± 0.041 | 0.321 ± 0.069 | 0.656 ± 0.062 | 0.590 ± 0.086 | 0.640 ± 0.076 | 0.642 ± 0.072 | 0.182 ± 0.171 |
| Focal-B | 0.525 ± 0.027 | 0.603 ± 0.022 | 0.679 ± 0.020 | 0.524 ± 0.030 | 0.160 ± 0.036 | 0.454 ± 0.096 | 0.545 ± 0.099 | 0.621 ± 0.156 | 0.429 ± 0.111 | 0.070 ± 0.149 |
| UniFormer-B | 0.621 ± 0.018 | 0.685 ± 0.019 | 0.748 ± 0.022 | 0.628 ± 0.019 | 0.300 ± 0.032 | 0.652 ± 0.119 | 0.670 ± 0.137 | 0.708 ± 0.122 | 0.662 ± 0.116 | 0.301 ± 0.241 |
| BiFormer-S | 0.617 ± 0.049 | 0.664 ± 0.048 | 0.717 ± 0.041 | 0.626 ± 0.049 | 0.274 ± 0.083 | 0.583 ± 0.084 | 0.614 ± 0.088 | 0.663 ± 0.081 | 0.593 ± 0.082 | 0.196 ± 0.154 |
| TransNeXt-T | 0.598 ± 0.044 | 0.658 ± 0.046 | 0.719 ± 0.044 | 0.605 ± 0.045 | 0.257 ± 0.077 | 0.613 ± 0.084 | 0.622 ± 0.061 | 0.670 ± 0.056 | 0.619 ± 0.079 | 0.224 ± 0.114 |
| MambaOut-B | 0.583 ± 0.039 | 0.656 ± 0.043 | 0.730 ± 0.046 | 0.585 ± 0.039 | 0.248 ± 0.069 | 0.503 ± 0.051 | 0.555 ± 0.038 | 0.620 ± 0.056 | 0.502 ± 0.064 | 0.089 ± 0.059 |
| DSDNet (Ours) | 0.737 ± 0.052 | 0.724 ± 0.059 | 0.753 ± 0.050 | 0.742 ± 0.051 | 0.428 ± 0.111 | 0.768 ± 0.056 | 0.758 ± 0.076 | 0.787 ± 0.068 | 0.770 ± 0.055 | 0.494 ± 0.125 |
| Methods | Params (M) | GFLOPs | Inference Time (ms) |
|---|---|---|---|
| ConvNeXt-B | 87.57 | 20.14 | 5.53 |
| Focal-B | 87.13 | 20.0 | 11.64 |
| UniFormer-B | 49.26 | 10.20 | 12.33 |
| BiFormer-S | 25.02 | 4.22 | 19.21 |
| TransNeXt-T | 27.68 | 7.35 | 22.11 |
| MambaOut-B | 81.75 | 20.72 | 6.37 |
| DSDNet | 21.88 | 98.64 | 31.50 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Jiang, X.; Ding, X.; Ma, J.; Liu, C.; Li, X. Robust Dual-Stream Diagnosis Network for Ultrasound Breast Tumor Classification with Cross-Domain Segmentation Priors. Sensors 2026, 26, 974. https://doi.org/10.3390/s26030974
