Adaptive Contrastive Learning with Label Consistency for Source Data Free Unsupervised Domain Adaptation
Abstract
1. Introduction
- We propose an adaptive contrastive learning framework that operates at the class level for source-free unsupervised domain adaptation (SFUDA).
- The method maintains a memory bank of reliable samples whose pseudo labels stay consistent over training, and uses these label-consistent samples to push target-domain features toward class-level discriminability (a minimal sketch of such an objective follows this list).
- Comprehensive experiments show that our method is competitive with existing approaches across a range of SFUDA benchmarks.
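Since the paper's loss equations are not reproduced in this excerpt, the snippet below is only a minimal sketch of what a class-level contrastive objective driven by a label-consistent memory bank could look like. The function name, argument layout, and temperature value are illustrative assumptions, not the authors' exact formulation; their actual objective (Equation (8)) may combine several terms.

```python
import torch
import torch.nn.functional as F

def class_contrastive_loss(features, pseudo_labels, bank_feats, bank_labels, tau=0.07):
    """Illustrative class-level contrastive loss against a memory bank (InfoNCE-style).

    features:      (B, d) embeddings of the current target batch
    pseudo_labels: (B,)   pseudo labels of the batch
    bank_feats:    (K, d) embeddings of label-consistent samples in the bank
    bank_labels:   (K,)   their (consistent) pseudo labels
    """
    features = F.normalize(features, dim=1)
    bank_feats = F.normalize(bank_feats, dim=1)
    logits = features @ bank_feats.t() / tau                       # (B, K) cosine similarities
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Bank entries sharing the anchor's pseudo label act as positives.
    pos_mask = pseudo_labels.unsqueeze(1).eq(bank_labels.unsqueeze(0)).float()
    pos_cnt = pos_mask.sum(dim=1)
    valid = pos_cnt > 0                                            # anchors with >=1 positive
    loss = -(pos_mask * log_prob).sum(dim=1)[valid] / pos_cnt[valid]
    return loss.mean()
```

Bank entries that share an anchor's pseudo label serve as positives while all other entries serve as negatives, which is what pulls same-class target features together and pushes different classes apart at the class level.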
2. Related Work
3. Methodologies
Algorithm 1 LCCL algorithm for the SFUDA task.
Input: source model f_s, target data X_t, maximum number of epochs T, trade-off parameter β.
Initialization: freeze the final classifier layer h, and copy the parameters of f_s into the target model f_t.
for epoch = 1 to T do
    Obtain self-supervised pseudo labels via Equation (4).
    for each mini-batch do    # mini-batch optimization
        Sample a batch from the target data and fetch its corresponding pseudo labels.
        Update the parameters of f_t via the total loss in Equation (8).
        Select label-consistent samples and add them to the memory bank.
    end for
end for
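The same procedure, written as a hedged PyTorch sketch. The helpers assign_pseudo_labels (Equation (4)), total_loss (Equation (8)), and MemoryBank are placeholders for components defined in the full paper, and the model.classifier attribute standing for the frozen classifier h is likewise an assumption about the network's structure.

```python
import copy
import torch

def adapt_lccl(source_model, target_loader, epochs, beta, device="cuda"):
    """Rough translation of Algorithm 1; placeholder helpers stand in for
    the paper's Equations (4) and (8) and its memory-bank bookkeeping."""
    model = copy.deepcopy(source_model).to(device)     # f_t starts as a copy of f_s
    for p in model.classifier.parameters():           # freeze the final classifier h
        p.requires_grad = False
    optimizer = torch.optim.SGD(
        (p for p in model.parameters() if p.requires_grad),
        lr=1e-3, momentum=0.9, weight_decay=1e-3)
    bank = MemoryBank()                               # stores label-consistent samples
    for _ in range(epochs):
        # Self-supervised pseudo labels for the whole target set (Equation (4)).
        pseudo = assign_pseudo_labels(model, target_loader)
        for x, idx in target_loader:                  # mini-batch optimization
            x, y_hat = x.to(device), pseudo[idx].to(device)
            loss = total_loss(model, x, y_hat, bank, beta)   # Equation (8)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # Keep samples whose pseudo labels stay consistent across epochs.
            bank.update(model, x, y_hat, idx)
    return model
```

Freezing the classifier and adapting only the feature extractor mirrors the source-hypothesis-transfer setup of SHOT [9], which Algorithm 1 also follows.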
4. Experiment
4.1. Datasets
4.2. Implementation Details
4.3. Baselines
4.4. Overall Results
4.5. Experimental Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2015, arXiv:1409.1556.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Girshick, R. Fast R-CNN. arXiv 2015, arXiv:1504.08083.
- Ganin, Y.; Lempitsky, V. Unsupervised Domain Adaptation by Backpropagation. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 7–9 July 2015; pp. 1180–1189.
- Li, S.; Liu, C.H.; Lin, Q.; Wen, Q.; Su, L.; Huang, G. Deep residual correction network for partial domain adaptation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 2329–2344.
- Gretton, A.; Borgwardt, K.M.; Rasch, M.J.; Schölkopf, B.; Smola, A.J. A kernel method for the two-sample-problem. In Proceedings of the NeurIPS, Vancouver, BC, Canada, 4–8 December 2007; pp. 513–520.
- Shen, J.; Qu, Y.; Zhang, W.; Yu, Y. Wasserstein Distance Guided Representation Learning for Domain Adaptation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–8 February 2018.
- Liang, J.; Hu, D.; Feng, J. Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation. In Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, 13–18 July 2020; pp. 6028–6039.
- Li, S.; Song, S.; Gao, H.; Ding, Z.; Cheng, W. Domain Invariant and Class Discriminative Feature Learning for Visual Domain Adaptation. IEEE Trans. Image Process. 2018, 27, 4260–4273.
- Li, S.; Liu, C.H.; Xie, B.; Su, L.; Ding, Z.; Huang, G. Joint Adversarial Domain Adaptation. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019.
- Tzeng, E.; Hoffman, J.; Zhang, N.; Saenko, K.; Darrell, T. Deep domain confusion: Maximizing for domain invariance. arXiv 2014, arXiv:1412.3474.
- Li, S.; Xie, B.; Wu, J.; Zhao, Y.; Liu, C.H.; Ding, Z. Simultaneous Semantic Alignment Network for Heterogeneous Domain Adaptation. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020.
- Chen, Y.; Song, S.; Li, S.; Wu, C. A Graph Embedding Framework for Maximum Mean Discrepancy-Based Domain Adaptation Algorithms. IEEE Trans. Image Process. 2020, 29, 199–213.
- Li, S.; Xie, B.; Lin, Q.; Liu, C.H.; Wang, G. Generalized Domain Conditioned Adaptation Network. IEEE Trans. Pattern Anal. Mach. Intell. 2021.
- Saito, K.; Watanabe, K.; Ushiku, Y.; Harada, T. Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 3723–3732.
- Pan, Y.; Yao, T.; Li, Y.; Wang, Y.; Ngo, C.W.; Mei, T. Transferrable prototypical networks for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019.
- Kang, G.; Lu, J.; Yi, Y.; Hauptmann, A.G. Contrastive Adaptation Network for Unsupervised Domain Adaptation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019.
- Dai, S.; Cheng, Y.; Zhang, Y.; Gan, Z.; Liu, J.; Carin, L. Contrastively Smoothed Class Alignment for Unsupervised Domain Adaptation. In Proceedings of the Asian Conference on Computer Vision, Singapore, 20–23 May 2021.
- Li, R.; Jiao, Q.; Cao, W.; Wong, H.S.; Wu, S. Model adaptation: Unsupervised domain adaptation without source data. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
- Yang, S.; Wang, Y.; van de Weijer, J.; Herranz, L. Casting a BAIT for Offline and Online Source-free Domain Adaptation. arXiv 2020, arXiv:2010.12427.
- He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum Contrast for Unsupervised Visual Representation Learning. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
- Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A Simple Framework for Contrastive Learning of Visual Representations. arXiv 2020, arXiv:2002.05709.
- Sharma, V.; Tapaswi, M.; Sarfraz, M.S.; Stiefelhagen, R. Clustering based Contrastive Learning for Improving Face Representations. In Proceedings of the 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), Buenos Aires, Argentina, 16–20 November 2020.
- Dosovitskiy, A.; Fischer, P.; Springenberg, J.T.; Riedmiller, M.; Brox, T. Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 1734–1747.
- Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 815–823.
- Gutmann, M.U.; Hyvärinen, A. Noise-Contrastive Estimation of Unnormalized Statistical Models, with Applications to Natural Image Statistics. J. Mach. Learn. Res. 2012, 13, 307–361.
- Singh, A. CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation. arXiv 2021, arXiv:2107.00085.
- Li, S.; Liu, C.H.; Su, L.; Xie, B.; Wu, D. Discriminative Transfer Feature and Label Consistency for Cross-Domain Image Classification. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 4842–4856.
- Peng, X.; Usman, B.; Kaushik, N.; Hoffman, J.; Saenko, K. VisDA: The Visual Domain Adaptation Challenge. arXiv 2017, arXiv:1710.06924.
- Saenko, K.; Kulis, B.; Fritz, M.; Darrell, T. Adapting visual category models to new domains. In Proceedings of the ECCV, Crete, Greece, 5–11 September 2010; pp. 213–226.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
- Long, M.; Cao, Z.; Wang, J.; Jordan, M.I. Conditional adversarial domain adaptation. In Proceedings of the NeurIPS, Montréal, QC, Canada, 2–8 December 2018; pp. 1647–1657.
- Xu, R.; Li, G.; Yang, J.; Lin, L. Larger Norm More Transferable: An Adaptive Feature Norm Approach for Unsupervised Domain Adaptation. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 1426–1435.
- Lee, C.Y.; Batra, T.; Baig, M.H.; Ulbricht, D. Sliced wasserstein discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019.
- Jin, Y.; Wang, X.; Long, M.; Wang, J. Minimum Class Confusion for Versatile Domain Adaptation. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020.
- Kim, Y.; Cho, D.; Panda, P.; Hong, S. Progressive Domain Adaptation from a Source Pre-trained Model. arXiv 2020, arXiv:1811.07456.
- Saito, K.; Ushiku, Y.; Harada, T.; Saenko, K. Adversarial Dropout Regularization. arXiv 2017, arXiv:1711.01575.
- Hoffman, J.; Tzeng, E.; Park, T.; Zhu, J.Y.; Darrell, T. CyCADA: Cycle-Consistent Adversarial Domain Adaptation. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 1994–2003.
- Deng, Z.; Luo, Y.; Zhu, J. Cluster Alignment with a Teacher for Unsupervised Domain Adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019.
- Zhang, Y.; Liu, T.; Long, M.; Jordan, M.I. Bridging Theory and Algorithm for Domain Adaptation. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019; pp. 7404–7413.
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Chintala, S. PyTorch: An imperative style, high-performance deep learning library. In Proceedings of the NeurIPS, Vancouver, BC, Canada, 8–14 December 2019; pp. 8024–8035.
- Tzeng, E.; Hoffman, J.; Saenko, K.; Darrell, T. Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 7167–7176.
- van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
Classification accuracy (%) on VisDA-2017 (synthetic → real).

Method | Plane | Bcycl | Bus | Car | Horse | Knife | Mcycl | Person | Plant | Sktbrd | Train | Truck | Per-Class Avg. |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Source-only | 55.1 | 53.3 | 61.9 | 59.1 | 80.6 | 17.9 | 79.7 | 31.2 | 81.0 | 26.5 | 73.5 | 8.5 | 52.4 |
DANN [5] | 81.9 | 77.7 | 82.8 | 44.3 | 81.2 | 29.5 | 65.1 | 28.6 | 51.9 | 54.6 | 82.8 | 7.8 | 57.4 |
CDAN [33] | 85.2 | 66.9 | 83.0 | 50.8 | 84.2 | 74.9 | 88.1 | 74.5 | 83.4 | 76.0 | 81.9 | 38.0 | 73.9 |
SAFN [34] | 93.6 | 61.3 | 84.1 | 70.6 | 94.1 | 79.0 | 91.8 | 79.6 | 89.9 | 55.6 | 89.0 | 24.4 | 76.1 |
SWD [35] | 90.8 | 82.5 | 81.7 | 70.5 | 91.7 | 69.5 | 86.3 | 77.5 | 87.4 | 63.6 | 85.6 | 29.2 | 76.4 |
TPN [17] | 93.7 | 85.1 | 69.2 | 81.6 | 93.5 | 61.9 | 89.3 | 81.4 | 93.5 | 81.6 | 84.5 | 49.9 | 80.4 |
MCC [36] | 88.7 | 80.3 | 80.5 | 71.5 | 90.1 | 93.2 | 85.0 | 71.6 | 89.4 | 73.8 | 85.0 | 36.9 | 78.8 |
PrDA [37] | 86.9 | 81.7 | 84.6 | 63.9 | 93.1 | 91.4 | 86.6 | 71.9 | 84.5 | 58.2 | 74.5 | 42.7 | 76.7 |
SHOT [9] | 94.3 | 88.5 | 80.1 | 57.3 | 93.1 | 94.9 | 80.7 | 80.3 | 91.5 | 89.1 | 86.3 | 58.2 | 82.9 |
MA [20] | 94.8 | 73.4 | 68.8 | 74.8 | 93.1 | 95.4 | 88.6 | 84.7 | 89.1 | 84.7 | 83.5 | 48.1 | 81.6 |
BAIT [21] | 93.7 | 83.2 | 84.5 | 65.0 | 92.9 | 95.4 | 88.1 | 80.8 | 90.0 | 89.0 | 84.0 | 45.3 | 82.7 |
LCCL | 92.8 | 86.0 | 78.7 | 60.4 | 92.9 | 93.9 | 87.0 | 81.1 | 91.5 | 91.3 | 86.5 | 59.3 | 83.4 |
Classification accuracy (%) on the Digits benchmarks (S: SVHN, M: MNIST, U: USPS).

Method | S→M | U→M | M→U | Avg. |
---|---|---|---|---|
Source-only | 70.2 | 88.0 | 79.7 | 79.3 |
ADDA [43] | 76.0 | 90.1 | 89.4 | 85.2 |
ADR [38] | 95.0 | 93.1 | 93.2 | 93.8 |
CDAN + E [33] | 89.2 | 98.0 | 95.6 | 94.3 |
CyCADA [39] | 90.4 | 96.5 | 95.6 | 94.2 |
rRevGrad + CAT [40] | 98.8 | 96.0 | 94.0 | 96.3 |
SWD [35] | 98.9 | 97.1 | 98.1 | 98.0 |
SHOT [9] | 98.9 | 98.0 | 97.9 | 98.3 |
MA [20] | 99.4 | 99.3 | 97.3 | 98.6 |
LCCL | 99.4 | 98.8 | 97.9 | 98.7 |
Classification accuracy (%) on Office-31 (A: Amazon, D: DSLR, W: Webcam).

Method | A→D | A→W | D→W | W→D | D→A | W→A | Avg. |
---|---|---|---|---|---|---|---|
Source-only | 68.9 | 68.4 | 96.7 | 99.3 | 62.5 | 60.7 | 76.1 |
DANN [5] | 79.7 | 82.0 | 96.9 | 99.1 | 68.2 | 67.4 | 82.2 |
MCD [16] | 92.2 | 88.6 | 98.5 | 100.0 | 69.5 | 69.7 | 86.5 |
CDAN [33] | 92.9 | 94.1 | 98.6 | 100.0 | 71.0 | 69.3 | 87.7 |
MDD [41] | 90.4 | 90.4 | 98.7 | 99.9 | 75.0 | 73.7 | 88.0 |
CAN [18] | 95.0 | 94.5 | 99.1 | 99.6 | 70.3 | 66.4 | 90.6 |
MCC [36] | 95.6 | 95.4 | 98.6 | 100.0 | 72.6 | 73.9 | 89.4 |
SHOT [9] | 93.1 | 90.9 | 98.8 | 99.9 | 74.5 | 74.8 | 88.7 |
PrDA [37] | 92.2 | 91.1 | 98.2 | 99.5 | 71.0 | 71.2 | 87.2 |
MA [20] | 92.7 | 93.7 | 98.5 | 99.8 | 75.3 | 77.8 | 89.6 |
BAIT [21] | 92.0 | 94.6 | 98.1 | 100.0 | 74.6 | 75.2 | 89.1 |
LCCL | 94.5 | 92.2 | 98.9 | 99.9 | 75.3 | 75.1 | 89.3 |
Ablation study: average accuracy (%) as the components of the total loss in Equation (8) are added cumulatively to the source-only baseline.

Datasets | Digits | Office-31 | VisDA-2017 |
---|---|---|---|
Source-only | 79.3 | 76.1 | 52.4 |
+ first loss term | 95.2 | 87.3 | 80.4 |
+ first two loss terms | 98.3 | 88.6 | 82.9 |
+ all three loss terms (full LCCL) | 98.7 | 89.3 | 83.4 |
Accuracy (%) on Office-31 when the proposed objective is added to DANN.

Method | A→D | A→W | D→W | W→D | D→A | W→A | Avg. |
---|---|---|---|---|---|---|---|
DANN [5] | 79.7 | 82.0 | 96.9 | 99.1 | 68.2 | 67.4 | 82.2 |
DANN + proposed loss | 90.8 | 90.3 | 98.5 | 99.9 | 72.7 | 70.7 | 87.2 |
Method | A→D | A→W | D→W | W→D | D→A | W→A | Avg. |
---|---|---|---|---|---|---|---|
Source-only model | 92.4 | 91.3 | 96.5 | 99.9 | 74.8 | 73.9 | 88.1 |
Ours | 94.5 | 92.2 | 98.9 | 99.9 | 75.3 | 75.1 | 89.3 |