Distributionally Adversarial Attack

Authors

  • Tianhang Zheng, State University of New York at Buffalo
  • Changyou Chen, State University of New York at Buffalo
  • Kui Ren, State University of New York at Buffalo

DOI:

https://doi.org/10.1609/aaai.v33i01.33012253

Abstract

Recent work on adversarial attacks has shown that the Projected Gradient Descent (PGD) adversary is a universal first-order adversary, and that a classifier adversarially trained with PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution p(x), typically in the form of risk maximization/minimization, e.g., max/min E_{p(x)} L(x), where p(x) is some unknown data distribution and L(·) is a loss function. However, since PGD generates attack samples independently for each data point based on L(·), the procedure does not necessarily lead to good generalization in terms of risk optimization. In this paper, we address this gap by proposing the distributionally adversarial attack (DAA), a framework that solves for an optimal adversarial-data distribution: a perturbed distribution that satisfies the L∞ constraint yet deviates from the original data distribution so as to maximally increase the generalization risk. Algorithmically, DAA performs optimization over the space of potential data distributions, which introduces direct dependency among all data points when generating adversarial samples. DAA is evaluated by attacking state-of-the-art defense models, including the adversarially trained models provided by MIT MadryLab. Notably, DAA ranks first on MadryLab’s white-box leaderboards, reducing the accuracy of their secret MNIST model to 88.56% (with L∞ perturbations of ε = 0.3) and the accuracy of their secret CIFAR model to 44.71% (with L∞ perturbations of ε = 8.0). Code for the experiments is released at https://github.com/tianzheng4/Distributionally-Adversarial-Attack.
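
To make the contrast in the abstract concrete, the following is a minimal sketch in PyTorch (an assumed interface chosen for illustration; the authors' reference code lives in the repository above). The first function is standard L∞ PGD, which updates each sample independently using only its own loss gradient. The second function is a hypothetical illustration of DAA's key idea, that the update of each adversarial sample depends on the whole batch; here that coupling is a simple RBF-kernel smoothing of the batch gradients, whereas the paper derives its exact interaction terms from a variational formulation over data distributions.

import torch

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """Standard l_inf PGD: each sample ascends its OWN loss gradient,
    independently of the rest of the batch (as described in the abstract)."""
    loss_fn = torch.nn.CrossEntropyLoss()
    # Random start inside the l_inf ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # per-sample ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto l_inf ball
            x_adv = x_adv.clamp(0, 1)                 # stay in image range
    return x_adv.detach()

def coupled_step(x_adv, x, grads, eps=0.3, alpha=0.01, bandwidth=1.0):
    """Hypothetical, simplified stand-in for DAA's batch coupling: each
    sample's ascent direction mixes the loss gradients of ALL samples via
    pairwise kernel weights. This is NOT the paper's exact update rule,
    only an illustration of direct dependency among data points."""
    n = x_adv.size(0)
    flat = x_adv.view(n, -1)
    sq_dists = torch.cdist(flat, flat).pow(2)
    k = torch.exp(-sq_dists / (2 * bandwidth ** 2))      # pairwise weights
    mixed = (k @ grads.view(n, -1)).view_as(grads) / n   # batch-coupled direction
    x_new = x_adv + alpha * mixed.sign()
    return (x + (x_new - x).clamp(-eps, eps)).clamp(0, 1)

Because the kernel term ties every sample's update to the rest of the batch, the perturbation is shaped at the level of the (empirical) data distribution rather than one sample at a time, which is the distinction the abstract draws between PGD and DAA.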

Published

2019-07-17

How to Cite

Zheng, T., Chen, C., & Ren, K. (2019). Distributionally Adversarial Attack. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 2253-2260. https://doi.org/10.1609/aaai.v33i01.33012253

Section

AAAI Technical Track: Game Theory and Economic Paradigms