Proceedings: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35
Issue: No. 12: AAAI-21 Technical Tracks 12
Track: AAAI Technical Track on Machine Learning V
Abstract:
The threat of data-poisoning backdoor attacks on learning algorithms typically comes from the labeled data. In deep semi-supervised learning (SSL), however, unknown threats mainly stem from the unlabeled data. In this paper, we propose a novel deep hidden backdoor (DeHiB) attack scheme for SSL-based systems. In contrast to conventional attack methods, DeHiB injects malicious unlabeled training data into the semi-supervised learner so that the trained SSL model outputs attacker-specified results. In particular, a robust adversarial perturbation generator, regularized by a unified objective function, is proposed to generate the poisoned data. To alleviate the negative impact of the trigger patterns on model accuracy and to improve the attack success rate, a novel contrastive data poisoning strategy is designed. Using the proposed data poisoning scheme, one can implant the backdoor into the SSL model from raw data alone, without hand-crafted labels. Extensive experiments on the CIFAR-10 and CIFAR-100 datasets demonstrate the effectiveness and stealthiness of the proposed scheme.
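To illustrate the general idea of poisoning unlabeled data described above, the following is a minimal, hypothetical Python sketch: a trigger pattern is stamped onto clean images, and a small targeted perturbation (computed here with a PGD-style loop on an attacker-held surrogate model) pushes them toward the attacker's target class so that SSL pseudo-labeling adopts that label. The names surrogate_model, trigger, epsilon, and the loss used are illustrative assumptions, not the paper's unified objective function or contrastive poisoning strategy.

# Hypothetical sketch of trigger stamping plus targeted perturbation on unlabeled data.
# All names and hyperparameters are illustrative, not the authors' implementation.
import torch
import torch.nn.functional as F

def poison_unlabeled_batch(surrogate_model, images, trigger, target_class,
                           epsilon=8 / 255, steps=10, step_size=2 / 255):
    """Craft poisoned unlabeled samples: stamp a trigger, then add a small
    perturbation that drives the surrogate model toward the target class."""
    # Stamp an additive trigger pattern (assumed to be the same shape as the images).
    poisoned = torch.clamp(images + trigger, 0.0, 1.0)
    delta = torch.zeros_like(poisoned, requires_grad=True)
    target = torch.full((images.size(0),), target_class, dtype=torch.long)

    for _ in range(steps):
        logits = surrogate_model(torch.clamp(poisoned + delta, 0.0, 1.0))
        # Targeted objective: minimize cross-entropy with the attacker's target label.
        loss = F.cross_entropy(logits, target)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()   # targeted PGD step
            delta.clamp_(-epsilon, epsilon)          # keep the perturbation small
        delta.grad.zero_()

    return torch.clamp(poisoned + delta.detach(), 0.0, 1.0)

The perturbation budget (epsilon) keeps the poisoned samples visually close to clean data, which is what makes such unlabeled poisoning hard to spot; the paper's actual generator and contrastive strategy refine this basic recipe.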
DOI: 10.1609/aaai.v35i12.17266