Proceedings: Proceedings of the AAAI Conference on Artificial Intelligence, 35
Issue: No. 18: AAAI-21 Student Papers and Demonstrations
Track: AAAI Student Abstract and Poster Program
Abstract:
Graph data has been widely used to represent data from various domains, e.g., social networks and recommendation systems, and graph neural networks (GNNs) have become powerful tools for learning over such data. With this power, GNN models, usually valuable properties of their owners, also become attractive targets for adversaries who seek to steal them. While existing works show that ordinary deep neural networks can be reproduced by so-called model extraction attacks, how to extract a GNN model has not been explored. In this paper, we investigate the threat of model extraction attacks against GNN models. Unlike ordinary attacks, which obtain model information via input-output query pairs only, we utilize both the node queries and the graph structure to extract the GNN. Furthermore, we consider the stealthiness of the attack and propose to generate legitimate queries so that the extraction can be applied discreetly. We implement our attack by leveraging the responses to these queries, as well as other accessible knowledge, e.g., the neighbor connections of the queried nodes. Evaluated on three real-world datasets, our attack effectively produces a surrogate model that agrees with the target model on more than 80% of its predictions.
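To make the extraction pipeline concrete, the following is a minimal, hypothetical sketch of the surrogate-training step the abstract describes: the adversary queries the target GNN on a set of legitimate nodes and fits a simple two-layer GCN surrogate to the responses, using node features and the accessible neighbor connections. All names, sizes, and the random stand-in data (SurrogateGCN, num_nodes, target_labels, and so on) are illustrative assumptions, not the authors' implementation; "fidelity" here means the fraction of predictions that match the target model.

```python
# Hedged sketch (PyTorch): fitting a surrogate GCN to a target GNN's query responses.
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize an adjacency matrix with self-loops: D^-1/2 (A+I) D^-1/2."""
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

class SurrogateGCN(nn.Module):
    """A two-layer GCN used as the surrogate (extracted) model."""
    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)

# --- Illustrative setup: random graph and features standing in for real data ---
num_nodes, in_dim, num_classes = 200, 32, 5
x = torch.randn(num_nodes, in_dim)                        # node features known to the adversary
adj = (torch.rand(num_nodes, num_nodes) < 0.05).float()   # accessible neighbor connections
adj = ((adj + adj.t()) > 0).float()                       # symmetrize
adj_norm = normalize_adj(adj)

# Responses of the target GNN to legitimate node queries.
# Here random labels stand in for what the victim model's API would return.
query_idx = torch.randperm(num_nodes)[:60]
target_labels = torch.randint(0, num_classes, (num_nodes,))

surrogate = SurrogateGCN(in_dim, 16, num_classes)
opt = torch.optim.Adam(surrogate.parameters(), lr=0.01)
for _ in range(100):
    opt.zero_grad()
    logits = surrogate(x, adj_norm)
    # Fit the surrogate only on the queried nodes' responses.
    loss = F.cross_entropy(logits[query_idx], target_labels[query_idx])
    loss.backward()
    opt.step()

# Fidelity: fraction of nodes where the surrogate's prediction matches the target's.
with torch.no_grad():
    fidelity = (surrogate(x, adj_norm).argmax(1) == target_labels).float().mean()
print(f"fidelity = {fidelity:.2f}")
```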
DOI:
10.1609/aaai.v35i18.17959