ADAM-Net: Anatomy-Guided Attentive Unsupervised Domain Adaptation for Joint MG Segmentation and MGD Grading
Abstract
1. Introduction
2. Methods
2.1. ADAM-Net Model Overview
2.2. The Segmentation Decoder
2.3. SGSA Module
2.4. The Label Predictor
2.5. Adversarial Domain Adaptation Process
2.6. Joint Loss Function
3. Experiments
3.1. Dataset Description
3.2. Evaluation Metrics
3.3. Experimental Setup
3.3.1. Implementation Details
3.3.2. Parameter Tuning Strategy
3.4. Experimental Results
3.4.1. Comparison of Domain Adaptation Methods
3.4.2. Ablation Study
3.4.3. Statistical Analysis Across Multiple Seeds
3.4.4. Visualization
3.5. Failure Case Analysis
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| MGD | Meibomian Gland Dysfunction |
| DL | Deep Learning |
| MG | Meibomian Gland |
| DANN | Domain-Adversarial Neural Network |
| UDA | Unsupervised Domain Adaptation |
| MCC-score | Matthews Correlation Coefficient |
| ML | Machine Learning |
| SVM | Support Vector Machine |
| ROI | Regions of Interest |
| XAI | Explainable Artificial Intelligence |
| DA | Domain Adaptation |
| SNR | Signal-to-Noise Ratio |
| ADAM-Net | Attention-guided Domain-Adaptive Multi-task Network |
| SGSA | Segmentation-Guided Spatial Attention |
| GRL | Gradient Reversal Layer |
| CBAM | Convolutional Block Attention Module |
| MSE | Mean Squared Error |
| BCE | Binary Cross-Entropy |
| t-SNE | t-distributed Stochastic Neighbor Embedding |
| IoU | Intersection over Union |
| CDAN | Conditional Domain Adaptation Network |
| DAN | Deep Adaptation Network |
| JAN | Joint Adaptation Network |
| MDD | Margin Disparity Discrepancy |
| BSP | Batch Spectral Penalization |
| MCC | Minimum Class Confusion |
| Grad-CAM | Gradient-weighted Class Activation Mapping |
| GT | Ground Truth |
| Pred | Predicted |
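Several of the abbreviations above name the core mechanism of DANN-style adversarial adaptation: a GRL inserted between the feature extractor and the domain discriminator passes features through unchanged but reverses (and scales) gradients during backpropagation, pushing the extractor toward domain-invariant features (Ganin and Lempitsky). A minimal PyTorch-style sketch is given below for reference only; the `grad_reverse` helper and the fixed λ are illustrative assumptions, not the authors' implementation.

```python
import torch


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed, scaled gradient for x; None for the non-tensor lambd argument.
        return -ctx.lambd * grad_output, None


def grad_reverse(x: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
    return GradientReversal.apply(x, lambd)
```

In such a pipeline the domain head would see reversed gradients, e.g. `domain_logits = domain_head(grad_reverse(features, lambd))`, while the label predictor receives the features directly.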
References
- Zhang, Z.; Lin, X.; Yu, X.; Fu, Y.; Chen, X.; Yang, W.; Dai, Q. Meibomian gland density: An effective evaluation index of meibomian gland dysfunction based on deep learning and transfer learning. J. Clin. Med. 2022, 11, 2396. [Google Scholar] [CrossRef]
- Gupta, P.K.; Venkateswaran, N.; Heinke, J.; Stinnett, S.S. Association of meibomian gland architecture and body mass index in a pediatric population. Ocul. Surf. 2020, 18, 657–662. [Google Scholar] [CrossRef]
- Arita, R.; Suehiro, J.; Haraguchi, T.; Shirakawa, R.; Tokoro, H.; Amano, S. Objective image analysis of the meibomian gland area. Br. J. Ophthalmol. 2014, 98, 746–755. [Google Scholar] [CrossRef]
- Pflugfelder, S.C.; Tseng, S.C.; Sanabria, O.; Kell, H.; Garcia, C.G.; Felix, C.; Feuer, W.; Reis, B.L. Evaluation of subjective assessments and objective diagnostic tests for diagnosing tear-film disorders known to cause ocular irritation. Cornea 1998, 17, 38. [Google Scholar] [CrossRef] [PubMed]
- Pult, H.; Riede-Pult, B. Comparison of subjective grading and objective assessment in meibography. Contact Lens Anterior Eye 2013, 36, 22–27. [Google Scholar] [CrossRef] [PubMed]
- Wang, J.; Yeh, T.N.; Chakraborty, R.; Yu, S.X.; Lin, M.C. A deep learning approach for meibomian gland atrophy evaluation in meibography images. Transl. Vis. Sci. Technol. 2019, 8, 37. [Google Scholar] [CrossRef] [PubMed]
- Llorens-Quintana, C.; Syga, P.; Iskander, D.R. Automated image processing algorithm for infrared meibography. In Proceedings of the Imaging Systems and Applications, Optica Publishing Group, Orlando, FL, USA, 25–28 June 2018; p. IM3B.3. [Google Scholar]
- Llorens-Quintana, C.; Rico-del Viejo, L.; Syga, P.; Madrid-Costa, D.; Iskander, D.R. A novel automated approach for infrared-based assessment of meibomian gland morphology. Transl. Vis. Sci. Technol. 2019, 8, 17. [Google Scholar] [CrossRef]
- Koh, Y.W.; Celik, T.; Lee, H.K.; Petznick, A.; Tong, L. Detection of meibomian glands and classification of meibography images. J. Biomed. Opt. 2012, 17, 086008. [Google Scholar] [CrossRef]
- Ciężar, K.; Pochylski, M. 2D fourier transform for global analysis and classification of meibomian gland images. Ocul. Surf. 2020, 18, 865–870. [Google Scholar] [CrossRef]
- Saha, R.K.; Chowdhury, A.M.; Na, K.S.; Hwang, G.D.; Eom, Y.; Kim, J.; Jeon, H.G.; Hwang, H.S.; Chung, E. Automated quantification of meibomian gland dropout in infrared meibography using deep learning. Ocul. Surf. 2022, 26, 283–294. [Google Scholar] [CrossRef]
- Luo, X.; Wen, W.; Wang, J.; Xu, S.; Gao, Y.; Huang, J. Health classification of Meibomian gland images using keratography 5M based on AlexNet model. Comput. Methods Programs Biomed. 2022, 219, 106742. [Google Scholar] [CrossRef]
- Prabhu, S.M.; Chakiat, A.; Vunnava, K.P.; Shetty, R. Deep learning segmentation and quantification of Meibomian glands. Biomed. Signal Process. Control 2020, 57, 101776. [Google Scholar] [CrossRef]
- Setu, M.A.K.; Horstmann, J.; Schmidt, S.; Stern, M.E.; Steven, P. Deep learning-based automatic meibomian gland segmentation and morphology assessment in infrared meibography. Sci. Rep. 2021, 11, 7649. [Google Scholar] [CrossRef] [PubMed]
- Zhu, W.; Liu, D.; Zhuang, X.; Gong, T.; Shi, F.; Xiang, D.; Peng, T.; Zhang, X.; Chen, X. Strip and boundary detection multi-task learning network for segmentation of meibomian glands. Med. Phys. 2025, 52, 1615–1628. [Google Scholar] [CrossRef] [PubMed]
- Wang, M.H.; Zhou, R.; Lin, Z.; Yu, Y.; Zeng, P.; Fang, X.; Hou, G.; Li, Y.; Yu, X.; Chong, K.K.L.; et al. Can explainable artificial intelligence optimize the data quality of machine learning model? Taking Meibomian gland dysfunction detections as a case study. J. Phys. Conf. Ser. 2023, 2650, 012025. [Google Scholar] [CrossRef]
- Yeh, C.H.; Yu, S.X.; Lin, M.C. Meibography phenotyping and classification from unsupervised discriminative feature learning. Transl. Vis. Sci. Technol. 2021, 10, 4. [Google Scholar] [CrossRef]
- Valiant, L.G. A theory of the learnable. Commun. ACM 1984, 27, 1134–1142. [Google Scholar] [CrossRef]
- Quiñonero-Candela, J.; Sugiyama, M.; Schwaighofer, A.; Lawrence, N.D. Dataset Shift in Machine Learning; MIT Press: Cambridge, MA, USA, 2022. [Google Scholar]
- Ben-David, S.; Blitzer, J.; Crammer, K.; Pereira, F. Analysis of representations for domain adaptation. In Proceedings of the NIPS’06: Proceedings of the 20th International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 4–7 December 2006; Volume 19. [Google Scholar]
- Guan, H.; Liu, M. Domain adaptation for medical image analysis: A survey. IEEE Trans. Biomed. Eng. 2021, 69, 1173–1185. [Google Scholar] [CrossRef]
- Wiestler, B.; Menze, B. Deep learning for medical image analysis: A brief introduction. Neuro-Oncol. Adv. 2020, 2, iv35–iv41. [Google Scholar] [CrossRef]
- Schlemper, J.; Oktay, O.; Schaap, M.; Heinrich, M.; Kainz, B.; Glocker, B.; Rueckert, D. Attention gated networks: Learning to leverage salient regions in medical images. Med. Image Anal. 2019, 53, 197–207. [Google Scholar] [CrossRef]
- Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention u-net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar] [CrossRef]
- Ganin, Y.; Lempitsky, V. Unsupervised domain adaptation by backpropagation. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 6–11 July 2015; pp. 1180–1189. [Google Scholar]
- Graham, S.; Vu, Q.D.; Jahanifar, M.; Raza, S.E.A.; Minhas, F.; Snead, D.; Rajpoot, N. One model is all you need: Multi-task learning enables simultaneous histology image segmentation and classification. Med. Image Anal. 2023, 83, 102685. [Google Scholar]
- Kendall, A.; Gal, Y.; Cipolla, R. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7482–7491. [Google Scholar]
- Peng, J.; Yi, J.; Yuan, Z. Unsupervised mitochondria segmentation in EM images via domain adaptive multi-task learning. IEEE J. Sel. Top. Signal Process. 2020, 14, 1199–1209. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
- Ren, J.; Hacihaliloglu, I.; Singer, E.A.; Foran, D.J.; Qi, X. Adversarial domain adaptation for classification of prostate histopathology whole-slide images. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2018; pp. 201–209. [Google Scholar]
- Zhang, J.; Liu, M.; Pan, Y.; Shen, D. Unsupervised conditional consensus adversarial network for brain disease identification with structural MRI. In Proceedings of the International Workshop on Machine Learning in Medical Imaging; Springer: Cham, Switzerland, 2019; pp. 391–399. [Google Scholar]
- Long, M.; Cao, Z.; Wang, J.; Jordan, M.I. Conditional adversarial domain adaptation. In Proceedings of the NIPS’18: Proceedings of the 32nd International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; Volume 31. [Google Scholar]
- Lee, J.M.; Jeon, Y.J.; Kim, K.Y.; Hwang, K.Y.; Kwon, Y.A.; Koh, K. Ocular surface analysis: A comparison between the LipiView® II and IDRA®. Eur. J. Ophthalmol. 2021, 31, 2300–2306. [Google Scholar] [PubMed]
- Markoulli, M.; Duong, T.B.; Lin, M.; Papas, E. Imaging the tear film: A comparison between the subjective keeler tearscope-plus™ and the objective oculus® keratograph 5M and LipiView® interferometer. Curr. Eye Res. 2018, 43, 155–162. [Google Scholar] [CrossRef]
- Townsend, J.T. Theoretical analysis of an alphabetic confusion matrix. Percept. Psychophys. 1971, 9, 40–50. [Google Scholar] [CrossRef]
- Sathyanarayanan, S.; Tantri, B.R. Confusion matrix-based performance evaluation metrics. Afr. J. Biomed. Res. 2024, 27, 4023–4031. [Google Scholar]
- Powers, D.M. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv 2020, arXiv:2010.16061. [Google Scholar] [CrossRef]
- Chicco, D.; Jurman, G. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genom. 2020, 21, 6. [Google Scholar] [CrossRef]
- Dice, L.R. Measures of the amount of ecologic association between species. Ecology 1945, 26, 297–302. [Google Scholar] [CrossRef]
- Long, M.; Cao, Y.; Wang, J.; Jordan, M. Learning transferable features with deep adaptation networks. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 6–11 July 2015; pp. 97–105. [Google Scholar]
- Long, M.; Zhu, H.; Wang, J.; Jordan, M.I. Deep transfer learning with joint adaptation networks. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; pp. 2208–2217. [Google Scholar]
- Zhang, Y.; Liu, T.; Long, M.; Jordan, M. Bridging theory and algorithm for domain adaptation. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 7404–7413. [Google Scholar]
- Chen, X.; Wang, S.; Long, M.; Wang, J. Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 1081–1090. [Google Scholar]
- Jin, Y.; Wang, X.; Long, M.; Wang, J. Minimum class confusion for versatile domain adaptation. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 464–480. [Google Scholar]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
- Varoquaux, G.; Colliot, O. Evaluating machine learning models and their diagnostic value. In Machine Learning for Brain Disorders; Springer: Cham, Switzerland, 2023; pp. 601–630. [Google Scholar]
- Tzeng, E.; Hoffman, J.; Zhang, N.; Saenko, K.; Darrell, T. Deep domain confusion: Maximizing for domain invariance. arXiv 2014, arXiv:1412.3474. [Google Scholar] [CrossRef]
Table 1. Capability comparison of ADAM-Net and competing approaches (✓ = supported; × = not supported; △ = partially supported).

| Method | Unsupervised Domain Adaptation | Joint MG Segmentation & MGD Classification | Attention Mechanism | ROI-Aware/Anatomy-Guided |
|---|---|---|---|---|
| Source-only Classifier | × | × | × | × |
| Source-only Segmenter | × | × | × | × |
| Mainstream UDA methods | ✓ | × | × | × |
| DANN-Head | ✓ | × | × | × |
| DANN-Head+Seg | ✓ | ✓ | × | △ |
| DANN-Head+Seg+CBAM | ✓ | ✓ | ✓ | △ |
| ADAM-Net | ✓ | ✓ | ✓ | ✓ |
Table 2. Composition of the meibography datasets by device and MGD grade (number of images).

| Dataset | Device | Total | Grade 0 | Grade 1 | Grade 2 | Grade 3 |
|---|---|---|---|---|---|---|
| MGD-1K | LipiView II Ocular Surface | 1000 | 180 | 612 | 168 | 40 |
| K5M | Keratograph 5M | 95 | 25 | 38 | 22 | 10 |
| CR-2 | CR-2 AF | 115 | 15 | 68 | 22 | 10 |
| LV II | LipiView II Ocular Surface | 264 | 30 | 100 | 66 | 68 |
Table 3. Classification performance of mainstream UDA methods and ADAM-Net on the three cross-device transfer tasks. Each cell pairs source-domain test performance (left) with target-domain test performance (right).

| Task | Metrics | DANN | CDAN | DAN | JAN | MDD | BSP | MCC | ADAM-Net |
|---|---|---|---|---|---|---|---|---|---|
| MGD-1K→K5M | Accuracy | 0.7662 / 0.4828 | 0.8458 / 0.4138 | 0.7811 / 0.3448 | 0.7512 / 0.3448 | 0.7512 / 0.3793 | 0.6866 / 0.3793 | 0.6915 / 0.3103 | 0.8358 / 0.7586 |
| | Precision | 0.6491 / 0.4874 | 0.7292 / 0.3828 | 0.6544 / 0.3263 | 0.6302 / 0.3056 | 0.6273 / 0.3750 | 0.5578 / 0.3618 | 0.5594 / 0.3173 | 0.7899 / 0.7431 |
| | Recall | 0.7589 / 0.5124 | 0.8172 / 0.3849 | 0.7357 / 0.3394 | 0.7129 / 0.2788 | 0.7129 / 0.3707 | 0.6331 / 0.4012 | 0.6351 / 0.3688 | 0.8427 / 0.7503 |
| | F1-score | 0.6850 / 0.4886 | 0.7604 / 0.3774 | 0.6823 / 0.3277 | 0.6585 / 0.2887 | 0.6568 / 0.3600 | 0.5795 / 0.3500 | 0.5824 / 0.3186 | 0.8102 / 0.7417 |
| | MCC-score | 0.6242 / 0.2864 | 0.7443 / 0.1904 | 0.6387 / 0.1033 | 0.5905 / 0.0945 | 0.5925 / 0.1711 | 0.5026 / 0.1492 | 0.5079 / 0.0738 | 0.7288 / 0.6695 |
| MGD-1K→CR-2 | Accuracy | 0.7512 / 0.4286 | 0.7363 / 0.4857 | 0.7214 / 0.4000 | 0.7363 / 0.3714 | 0.7463 / 0.4286 | 0.6667 / 0.3714 | 0.7015 / 0.3429 | 0.7910 / 0.7429 |
| | Precision | 0.6368 / 0.3860 | 0.6607 / 0.4469 | 0.6876 / 0.3750 | 0.6767 / 0.3274 | 0.6263 / 0.4250 | 0.6119 / 0.3396 | 0.5910 / 0.3389 | 0.7421 / 0.6949 |
| | Recall | 0.7532 / 0.4262 | 0.7418 / 0.5363 | 0.7463 / 0.4048 | 0.7418 / 0.3548 | 0.6994 / 0.4738 | 0.6901 / 0.3780 | 0.6778 / 0.4131 | 0.8138 / 0.8036 |
| | F1-score | 0.6720 / 0.3792 | 0.6914 / 0.4662 | 0.7046 / 0.3672 | 0.7017 / 0.3186 | 0.6478 / 0.3985 | 0.6326 / 0.3336 | 0.6168 / 0.3403 | 0.7692 / 0.7201 |
| | MCC-score | 0.6070 / 0.2161 | 0.5812 / 0.2695 | 0.5656 / 0.1504 | 0.5762 / 0.1197 | 0.5810 / 0.2447 | 0.4648 / 0.1329 | 0.5197 / 0.1084 | 0.6604 / 0.6240 |
| MGD-1K→LV II | Accuracy | 0.8358 / 0.5570 | 0.8458 / 0.5190 | 0.8507 / 0.5063 | 0.8159 / 0.5190 | 0.8010 / 0.5316 | 0.7910 / 0.4810 | 0.7662 / 0.4557 | 0.8408 / 0.8101 |
| | Precision | 0.7216 / 0.5437 | 0.7368 / 0.5125 | 0.7605 / 0.4824 | 0.7387 / 0.4914 | 0.6847 / 0.5596 | 0.6815 / 0.4640 | 0.6522 / 0.4298 | 0.8136 / 0.8187 |
| | Recall | 0.8276 / 0.5556 | 0.8172 / 0.5417 | 0.8738 / 0.4944 | 0.8293 / 0.5139 | 0.7985 / 0.5431 | 0.7842 / 0.4778 | 0.7486 / 0.4417 | 0.8895 / 0.7958 |
| | F1-score | 0.7544 / 0.5464 | 0.7675 / 0.5129 | 0.8027 / 0.4838 | 0.7732 / 0.4896 | 0.7237 / 0.5272 | 0.7197 / 0.4677 | 0.6850 / 0.4336 | 0.8448 / 0.8030 |
| | MCC-score | 0.7286 / 0.3881 | 0.7407 / 0.3529 | 0.7605 / 0.3218 | 0.6966 / 0.3468 | 0.6836 / 0.3856 | 0.6642 / 0.2884 | 0.6157 / 0.2514 | 0.7445 / 0.7357 |
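The MCC-score columns in Tables 3–5 denote the Matthews correlation coefficient (Chicco and Jurman). For reference, in the binary case it is computed from the confusion-matrix counts as below; the four-grade task presumably relies on the standard multiclass generalization of the same quantity:

```latex
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}}
```

Unlike accuracy, MCC ranges over [−1, 1], which is why the source-only baseline in Table 4 can score below zero on the target domain.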
Table 4. Ablation study on the three transfer tasks. Each cell pairs source-domain test performance (left) with target-domain test performance (right).

| Task | Metrics | Baseline | DANN | DANN-Head | DANN-Head+Seg | DANN-Head+Seg+CBAM | ADAM-Net |
|---|---|---|---|---|---|---|---|
| MGD-1K→K5M | Accuracy | 0.8557 / 0.1895 | 0.7662 / 0.4828 | 0.7811 / 0.5517 | 0.7960 / 0.5862 | 0.8060 / 0.6552 | 0.8358 / 0.7586 |
| | Precision | 0.7814 / 0.1702 | 0.6491 / 0.4874 | 0.7163 / 0.5762 | 0.7276 / 0.6081 | 0.7203 / 0.6470 | 0.7899 / 0.7431 |
| | Recall | 0.8899 / 0.1649 | 0.7589 / 0.5124 | 0.7756 / 0.5709 | 0.7764 / 0.6542 | 0.8058 / 0.7167 | 0.8427 / 0.7503 |
| | F1-score | 0.8246 / 0.1658 | 0.6850 / 0.4886 | 0.7415 / 0.5676 | 0.7493 / 0.6226 | 0.7505 / 0.6679 | 0.8102 / 0.7417 |
| | MCC-score | 0.7633 / −0.1648 | 0.6242 / 0.2864 | 0.6381 / 0.3833 | 0.6559 / 0.4340 | 0.6940 / 0.5313 | 0.7288 / 0.6695 |
| MGD-1K→CR-2 | Accuracy | 0.8557 / 0.1913 | 0.7512 / 0.4286 | 0.7363 / 0.5429 | 0.7015 / 0.5714 | 0.7612 / 0.6000 | 0.7910 / 0.7429 |
| | Precision | 0.7814 / 0.1706 | 0.6368 / 0.3860 | 0.6348 / 0.5028 | 0.7291 / 0.5199 | 0.7791 / 0.5325 | 0.7421 / 0.6949 |
| | Recall | 0.8899 / 0.1962 | 0.7532 / 0.4262 | 0.7471 / 0.5845 | 0.7363 / 0.5970 | 0.8054 / 0.6095 | 0.8138 / 0.8036 |
| | F1-score | 0.8246 / 0.1578 | 0.6720 / 0.3792 | 0.6735 / 0.5134 | 0.7195 / 0.5321 | 0.7810 / 0.5515 | 0.7692 / 0.7201 |
| | MCC-score | 0.7633 / −0.1259 | 0.6070 / 0.2161 | 0.5830 / 0.3708 | 0.5249 / 0.3996 | 0.6106 / 0.4247 | 0.6604 / 0.6240 |
| MGD-1K→LV II | Accuracy | 0.8557 / 0.5114 | 0.8358 / 0.5570 | 0.8159 / 0.5823 | 0.8209 / 0.6329 | 0.8507 / 0.6835 | 0.8408 / 0.8101 |
| | Precision | 0.7814 / 0.5430 | 0.7216 / 0.5437 | 0.7737 / 0.5689 | 0.7866 / 0.6191 | 0.8676 / 0.6696 | 0.8136 / 0.8187 |
| | Recall | 0.8899 / 0.5013 | 0.8276 / 0.5556 | 0.8088 / 0.5806 | 0.8397 / 0.6417 | 0.8719 / 0.6986 | 0.8895 / 0.7958 |
| | F1-score | 0.8246 / 0.5132 | 0.7544 / 0.5464 | 0.7880 / 0.5676 | 0.8068 / 0.6216 | 0.8626 / 0.6721 | 0.8448 / 0.8003 |
| | MCC-score | 0.7633 / 0.3145 | 0.7286 / 0.3881 | 0.6857 / 0.4285 | 0.6951 / 0.5001 | 0.7477 / 0.5714 | 0.7445 / 0.7357 |
Table 5. ADAM-Net performance over multiple random seeds (mean ± standard deviation). Each cell pairs source-domain (left) with target-domain (right) test performance.

| Task | Accuracy | Precision | Recall | F1-Score | MCC-Score |
|---|---|---|---|---|---|
| MGD-1K→K5M | 0.8388 ± 0.0051 / 0.7793 ± 0.0169 | 0.7979 ± 0.0099 / 0.7816 ± 0.0341 | 0.8497 ± 0.0157 / 0.7769 ± 0.0298 | 0.8175 ± 0.0051 / 0.7686 ± 0.0193 | 0.7330 ± 0.0088 / 0.6971 ± 0.0235 |
| MGD-1K→CR-2 | 0.7920 ± 0.0058 / 0.7486 ± 0.0114 | 0.7440 ± 0.0049 / 0.7009 ± 0.0142 | 0.8200 ± 0.0151 / 0.7720 ± 0.0405 | 0.7720 ± 0.0029 / 0.7248 ± 0.0221 | 0.6635 ± 0.0092 / 0.6176 ± 0.0188 |
| MGD-1K→LV II | 0.8428 ± 0.0024 / 0.8177 ± 0.0179 | 0.8038 ± 0.0120 / 0.8020 ± 0.0202 | 0.8903 ± 0.0060 / 0.8003 ± 0.0175 | 0.8398 ± 0.0076 / 0.7988 ± 0.0174 | 0.7451 ± 0.0052 / 0.7366 ± 0.0247 |
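The mean ± standard deviation entries in Table 5 summarize repeated runs. A minimal sketch of that aggregation is shown below, with hypothetical seed-level accuracies (not the paper's raw results) and the sample standard deviation assumed, since the paper's variant is not restated here:

```python
import numpy as np

# Hypothetical target-domain accuracies from five random seeds
# (illustrative values only, not the paper's raw results).
acc = np.array([0.7586, 0.7931, 0.7759, 0.7845, 0.7845])

mean = acc.mean()
std = acc.std(ddof=1)  # sample standard deviation (ddof=1)
print(f"Accuracy: {mean:.4f} ± {std:.4f}")
```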
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Fang, J.; He, X.; Jiang, Y.; Wang, M.H. ADAM-Net: Anatomy-Guided Attentive Unsupervised Domain Adaptation for Joint MG Segmentation and MGD Grading. J. Imaging 2026, 12, 50. https://doi.org/10.3390/jimaging12010050