Unsupervised Domain Adaptation Via Causal-Contrastive Learning
The Journal of Supercomputing (2025)
Hefei University of Technology
Abstract
Unsupervised domain adaptation (UDA) aims to reduce the discrepancy between source and target domains by mapping their data into a shared feature space and learning domain-invariant features. This study addresses the challenges that contrastive learning-based UDA methods face when handling domain discrepancies, in particular the spurious correlations introduced by confounding factors arising from data augmentation. In recent years, contrastive learning has attracted attention for its powerful representation learning capability: it pulls similar samples from the source and target domains closer together while pushing apart negative samples from different classes, which helps alleviate domain differences and improves the model's generalization ability. However, mainstream contrastive UDA methods often introduce confounding factors through the randomness of data augmentation, leading the model to learn spurious associations, especially when the target domain contains counterfactual data derived from the source domain. As the amount of counterfactual data grows, this bias and the resulting accuracy loss worsen significantly and are difficult to eliminate with non-causal methods. To address this, this paper proposes causal invariance contrastive adaptation (CICA), a causal-contrastive learning-based unsupervised domain adaptation model for image classification. The model feeds augmented labeled source samples and unlabeled target samples into a feature generator and quantifies the degree of confounding in the generated features using the backdoor criterion. Adversarial training then separates domain-invariant features from spurious features, reducing the interference of confounding factors with the domain adaptation task. Experiments on four domain adaptation image classification benchmark datasets and one counterfactual dataset show that the model achieves a significant improvement in average classification accuracy over state-of-the-art methods on the benchmarks while maintaining competitive performance on the counterfactual dataset.
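To make the training signal described above concrete, the following is a minimal, self-contained PyTorch sketch of the kind of objective the abstract outlines: a supervised loss on labeled source samples, a contrastive (InfoNCE-style) loss that aligns two augmented views within each domain, and an adversarial domain discriminator trained through a gradient-reversal layer to discourage domain-specific (spurious) features. All names here (`info_nce`, `GradReverse`, `training_step`, the toy `Encoder`) are illustrative assumptions rather than the authors' implementation, and the backdoor-adjustment weighting used by CICA to quantify confounding is not reproduced.

```python
# Hedged sketch of a CICA-style objective (not the authors' released code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, reversed (scaled) gradient backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def info_nce(z1, z2, temperature=0.1):
    """Symmetric InfoNCE between two augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


class Encoder(nn.Module):
    """Stand-in feature generator; a CNN backbone would replace this MLP."""
    def __init__(self, in_dim=3 * 32 * 32, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, 256),
                                 nn.ReLU(), nn.Linear(256, feat_dim))

    def forward(self, x):
        return self.net(x)


def training_step(encoder, classifier, domain_disc,
                  src_v1, src_v2, src_y, tgt_v1, tgt_v2, lambd=1.0):
    """One update combining source classification, cross-view contrastive
    alignment, and adversarial domain confusion via gradient reversal."""
    f_src, f_tgt = encoder(src_v1), encoder(tgt_v1)

    # Supervised classification on labeled source samples.
    cls_loss = F.cross_entropy(classifier(f_src), src_y)

    # Contrastive alignment of the two augmented views in each domain.
    con_loss = info_nce(f_src, encoder(src_v2)) + info_nce(f_tgt, encoder(tgt_v2))

    # Adversarial separation: the discriminator predicts the domain, while the
    # reversed gradient pushes the encoder toward domain-invariant features.
    feats = torch.cat([f_src, f_tgt], dim=0)
    dom_labels = torch.cat([
        torch.zeros(f_src.size(0), dtype=torch.long, device=feats.device),
        torch.ones(f_tgt.size(0), dtype=torch.long, device=feats.device)])
    dom_logits = domain_disc(GradReverse.apply(feats, lambd))
    dom_loss = F.cross_entropy(dom_logits, dom_labels)

    return cls_loss + con_loss + dom_loss
```

As a usage note, `classifier` and `domain_disc` can be plain linear heads (e.g. `nn.Linear(128, num_classes)` and `nn.Linear(128, 2)`); the paper's causal intervention would additionally reweight or adjust these losses according to the estimated confounders, which this sketch leaves out.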
Key words
Domain adaptation, Causal intervention, Confounding factors, Contrastive learning