Less Is Better: Unweighted Data Subsampling via Influence Function

Authors

  • Zifeng Wang (Tsinghua University)
  • Hong Zhu (Noah's Ark Lab, Huawei)
  • Zhenhua Dong (Noah's Ark Lab, Huawei)
  • Xiuqiang He (Noah's Ark Lab, Huawei)
  • Shao-Lun Huang (Tsinghua University)

DOI:

https://doi.org/10.1609/aaai.v34i04.6103

Abstract

In the era of Big Data, training complex models on large-scale data sets is challenging, making it appealing to reduce data volume by subsampling in order to save computation resources. Most previous subsampling works are weighted methods, designed to help the performance of the subset-model approach that of the full-set-model; hence, weighted methods have no chance of acquiring a subset-model that is better than the full-set-model. However, we ask: how can we achieve a better model with less data? In this work, we propose a novel Unweighted Influence Data Subsampling (UIDS) method and prove that the subset-model acquired through our method can outperform the full-set-model. Besides, we show that being overly confident about a given test set when sampling is common in influence-based subsampling methods, which can eventually cause the subset-model to fail on out-of-sample tests. To mitigate this, we develop a probabilistic sampling scheme that controls the worst-case risk over all distributions close to the empirical distribution. Experimental results demonstrate the superiority of our methods over existing subsampling methods on diverse tasks, such as text classification, image classification, and click-through prediction.
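To make the core mechanics concrete, below is a minimal sketch, not the authors' exact UIDS algorithm, of influence-based probabilistic subsampling for an L2-regularized logistic-regression model. All helper names, the regularization strength, and the softmax-style keep-probabilities are assumptions introduced for illustration; the paper's sampling scheme and sign conventions may differ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss(w, x, y, lam=1e-2):
    # Gradient of the L2-regularized logistic loss at one point (x, y), y in {0, 1}.
    return (sigmoid(x @ w) - y) * x + lam * w

def hessian(w, X, lam=1e-2):
    # Hessian of the mean regularized logistic loss over the training matrix X.
    p = sigmoid(X @ w)
    return (X.T * (p * (1.0 - p))) @ X / len(X) + lam * np.eye(X.shape[1])

def influence_scores(w_hat, X_tr, y_tr, X_val, y_val, lam=1e-2):
    """Influence-function estimate of each training point's effect on validation loss.

    w_hat is assumed to be fit on the full training set. score_i is
    proportional to the change in validation loss if point i were removed
    (the classic influence-function formula of Koh & Liang, 2017), so
    negative scores flag "harmful" points whose removal is expected to help.
    """
    H_inv = np.linalg.inv(hessian(w_hat, X_tr, lam))
    g_val = np.mean([grad_loss(w_hat, x, y, lam) for x, y in zip(X_val, y_val)], axis=0)
    v = H_inv @ g_val  # shared inverse-Hessian-vector product
    return np.array([grad_loss(w_hat, x, y, lam) @ v for x, y in zip(X_tr, y_tr)])

def probabilistic_subsample(scores, keep_frac=0.7, temperature=1.0, rng=None):
    # Unweighted subsampling: draw a subset (no per-point weights) with
    # keep-probabilities that favor helpful points, while leaving every point
    # some chance of survival so the subset is not tuned too hard to one
    # particular validation set.
    if rng is None:
        rng = np.random.default_rng(0)
    z = (scores - scores.max()) / temperature  # shift for numerical stability
    p = np.exp(z)
    p /= p.sum()
    n_keep = int(keep_frac * len(scores))
    return rng.choice(len(scores), size=n_keep, replace=False, p=p)
```

The sketch mirrors the two ideas in the abstract: kept points are used without reweighting, and the randomness in `probabilistic_subsample` (a hypothetical softmax scheme here) hedges against committing too strongly to influence scores computed on one validation set.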

Published

2020-04-03

How to Cite

Wang, Z., Zhu, H., Dong, Z., He, X., & Huang, S.-L. (2020). Less Is Better: Unweighted Data Subsampling via Influence Function. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6340-6347. https://doi.org/10.1609/aaai.v34i04.6103

Issue

Vol. 34 No. 04 (2020)

Section

AAAI Technical Track: Machine Learning