Proceedings:
Proceedings of the AAAI Conference on Artificial Intelligence
Volume:
36
Issue:
No. 1: AAAI-22 Technical Tracks 1
Track:
AAAI Technical Track on Computer Vision I
Abstract:
Source-Free Unsupervised Domain Adaptation (SFUDA) aims to adapt a pre-trained source model to an unlabeled target domain without access to the original labeled source domain samples. Many existing SFUDA approaches apply the self-training strategy, which iteratively selects confidently predicted target samples as pseudo-labeled samples and uses them to train the model to fit the target domain. However, the self-training strategy may suffer from sample selection bias and be impacted by the label noise of the pseudo-labeled samples. In this work, we provide a rigorous theoretical analysis of how these two issues affect the model's generalization ability when the self-training strategy is applied to the SFUDA problem. Based on this theoretical analysis, we then propose a new Denoised Maximum Classifier Discrepancy (D-MCD) method for SFUDA to effectively address these two issues. In particular, we first minimize the distribution mismatch between the selected pseudo-labeled samples and the remaining target domain samples to alleviate the sample selection bias. Moreover, we design a strong-weak self-training paradigm to denoise the selected pseudo-labeled samples, where the strong network is used to select pseudo-labeled samples while the weak network helps the strong network filter out hard samples to avoid incorrect labels. In this way, we ensure both the quality of the pseudo-labels and the generalization ability of the trained model on the target domain. We achieve state-of-the-art results on three domain adaptation benchmark datasets, which clearly validates the effectiveness of our proposed approach. Full code is available at https://github.com/kkkkkkon/D-MCD.
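To illustrate the strong-weak denoising idea described in the abstract, the following is a minimal Python/PyTorch sketch. The network names, the confidence threshold, and the agreement-based filtering rule are illustrative assumptions rather than the authors' implementation; the actual D-MCD code is in the linked repository.

    import torch
    import torch.nn as nn

    def select_denoised_pseudo_labels(strong_net: nn.Module,
                                      weak_net: nn.Module,
                                      target_batch: torch.Tensor,
                                      conf_threshold: float = 0.95):
        """Return indices and labels of target samples kept as pseudo-labeled data.

        The strong network proposes confident pseudo-labels; the weak network is
        used to drop "hard" samples on which the two networks disagree, since such
        samples are more likely to carry label noise (an assumed filtering rule).
        """
        strong_net.eval()
        weak_net.eval()
        with torch.no_grad():
            strong_prob = torch.softmax(strong_net(target_batch), dim=1)
            weak_prob = torch.softmax(weak_net(target_batch), dim=1)

        strong_conf, strong_label = strong_prob.max(dim=1)
        weak_label = weak_prob.argmax(dim=1)

        confident = strong_conf >= conf_threshold   # selection by the strong network
        agreeing = strong_label == weak_label       # denoising via the weak network
        keep = confident & agreeing

        return keep.nonzero(as_tuple=True)[0], strong_label[keep]

The kept samples would then be used as pseudo-labeled training data in the next self-training round, under whatever distribution-alignment objective the full method specifies.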
DOI:
10.1609/aaai.v36i1.19925