Proceedings: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 36
Issue: No. 6: AAAI-22 Technical Tracks 6
Track: AAAI Technical Track on Machine Learning I
Abstract:
Universal domain adaptation (UniDA) aims to transfer knowledge learned from a labeled source domain to an unlabeled target domain under both domain shift and category shift. Without prior knowledge of the category overlap, it is challenging to simultaneously align the common categories between the two domains and separate their respective private categories. Additionally, previous studies use the source classifier's predictions to assign target samples either one of the known labels or a single generic "unknown" label. However, over-reliance on the learned classifier is inevitably biased toward the source data and ignores the intrinsic structure of the target domain. Therefore, in this paper, we propose a novel two-stage UniDA framework called MATHS, based on the principles of mutual nearest neighbor contrast and hybrid prototype discrimination. In the first stage, we design an efficient mutual nearest neighbor contrastive learning scheme to achieve feature alignment, which exploits instance-level affinity relationships to uncover the intrinsic structure of the two domains. We introduce a bimodality hypothesis on the maximum discriminative probability distribution to detect possible target-private samples, and present a data-driven statistical approach to separate the common and private categories. In the second stage, to obtain more reliable label predictions, we propose an incremental pseudo-classifier for the target data only, driven by hybrid representative prototypes. A confidence-guided prototype contrastive loss is designed to reduce category allocation uncertainty via a self-training mechanism. Extensive experiments on three benchmarks demonstrate that MATHS outperforms previous state-of-the-art methods on most UniDA settings.
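The sketch below is a minimal, illustrative rendering (not the authors' code) of the mutual nearest neighbor pairing idea described in the abstract: a source sample and a target sample are treated as a positive pair only if each is the other's nearest neighbor in feature space, and the resulting pairs feed an InfoNCE-style contrastive loss. All function names, the temperature value, and the specific loss form are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def mutual_nn_pairs(src_feat, tgt_feat):
    """Return (i, j) pairs where source i and target j are mutual nearest neighbors."""
    src = F.normalize(src_feat, dim=1)           # (Ns, d) L2-normalized source features
    tgt = F.normalize(tgt_feat, dim=1)           # (Nt, d) L2-normalized target features
    sim = src @ tgt.t()                          # (Ns, Nt) cosine similarities
    nn_of_src = sim.argmax(dim=1)                # nearest target for each source sample
    nn_of_tgt = sim.argmax(dim=0)                # nearest source for each target sample
    return [(i, j.item()) for i, j in enumerate(nn_of_src)
            if nn_of_tgt[j].item() == i]         # keep only mutual matches

def mutual_nn_contrastive_loss(src_feat, tgt_feat, pairs, temperature=0.1):
    """InfoNCE-style loss pulling each mutual-NN pair together against all other targets."""
    src = F.normalize(src_feat, dim=1)
    tgt = F.normalize(tgt_feat, dim=1)
    logits = src @ tgt.t() / temperature         # (Ns, Nt) scaled similarities
    idx_src = torch.tensor([i for i, _ in pairs])
    idx_tgt = torch.tensor([j for _, j in pairs])
    return F.cross_entropy(logits[idx_src], idx_tgt)

# Toy usage with random features standing in for encoder outputs.
src_feat = torch.randn(32, 128)
tgt_feat = torch.randn(48, 128)
pairs = mutual_nn_pairs(src_feat, tgt_feat)
if pairs:
    loss = mutual_nn_contrastive_loss(src_feat, tgt_feat, pairs)
    print(f"{len(pairs)} mutual pairs, loss = {loss.item():.4f}")
```

Requiring the match to be mutual, rather than one-directional, is what makes the pairing conservative: samples without a reciprocal cross-domain neighbor (e.g., likely private-category samples) simply contribute no positive pair.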
DOI:
10.1609/aaai.v36i6.20574