Robust Federated Learning via Collaborative Machine Teaching

Authors

  • Yufei Han, NortonLifeLock Research Group
  • Xiangliang Zhang, King Abdullah University of Science and Technology

DOI:

https://doi.org/10.1609/aaai.v34i04.5826

Abstract

For federated learning systems deployed in the wild, data flaws on local agents are widely witnessed. On one hand, when a large fraction (e.g., over 60%) of the training data is corrupted by systematic sensor noise and environmental perturbations, the performance of federated model training can degrade significantly. On the other hand, it is prohibitively expensive for either clients or service providers to set up manual sanity checks to verify the quality of data instances. In our study, we address this challenge by proposing a collaborative and privacy-preserving machine teaching method. Specifically, we use a few trusted instances provided by teachers as benign examples in the teaching process. Our collaborative teaching approach jointly seeks the optimal tuning of the distributed training set, such that the model learned from the tuned training set predicts the labels of the trusted items correctly. The proposed method couples the processes of teaching and learning and thus directly produces a robust prediction model despite extremely pervasive systematic data corruption. An experimental study on real benchmark data sets demonstrates the validity of our method.
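The joint "tune the training set so the learned model fits the trusted items" idea can be pictured with a small, self-contained sketch. This is not the paper's exact formulation: it assumes a logistic-regression model trained by federated gradient averaging, and it approximates the joint tuning with a one-step, learning-to-reweight-style update of per-example weights driven by the trusted-set gradient. All names, rates, and the synthetic data are hypothetical and for illustration only.

```python
# Illustrative sketch (not the authors' exact algorithm): clients jointly
# re-weight their local, possibly corrupted training instances so that a
# model fitted on the re-weighted data performs well on a small trusted set.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logloss(w, X, y, sample_weights):
    """Weighted logistic-regression gradient on one client's local data."""
    p = sigmoid(X @ w)
    return X.T @ (sample_weights * (p - y)) / max(sample_weights.sum(), 1e-8)

# --- synthetic federated data with heavy label corruption on the clients ---
d, n_clients, n_per_client = 5, 4, 100
w_true = rng.normal(size=d)

def make_client(flip_rate):
    X = rng.normal(size=(n_per_client, d))
    y = (sigmoid(X @ w_true) > 0.5).astype(float)
    flip = rng.random(n_per_client) < flip_rate       # systematic corruption
    y[flip] = 1.0 - y[flip]
    return X, y

clients = [make_client(flip_rate=0.6) for _ in range(n_clients)]

# small trusted (benign) set supplied by the "teachers"
X_tr = rng.normal(size=(20, d))
y_tr = (sigmoid(X_tr @ w_true) > 0.5).astype(float)

# --- collaborative teaching loop: learn the model and per-example weights jointly ---
w = np.zeros(d)
weights = [np.ones(n_per_client) for _ in range(n_clients)]
lr_model, lr_weight = 0.5, 5.0

for _ in range(200):
    # 1) federated step: average the weighted gradients from all clients
    g = np.mean([grad_logloss(w, X, y, s)
                 for (X, y), s in zip(clients, weights)], axis=0)
    w -= lr_model * g

    # 2) teaching step: raise or lower each example's weight according to how
    #    well its gradient agrees with the trusted-set gradient, which tends to
    #    down-weight corrupted examples (a simplification of the joint tuning)
    g_tr = grad_logloss(w, X_tr, y_tr, np.ones(len(y_tr)))
    for (X, y), s in zip(clients, weights):
        per_ex_grad = X * (sigmoid(X @ w) - y)[:, None]   # gradient per example
        align = per_ex_grad @ g_tr    # positive when the example also reduces
                                      # the trusted-set loss
        s += lr_weight * align
        np.clip(s, 0.0, 1.0, out=s)

acc = ((sigmoid(X_tr @ w) > 0.5) == y_tr).mean()
print(f"trusted-set accuracy after teaching: {acc:.2f}")
```

In this toy run, examples whose gradients conflict with the trusted-set gradient are driven toward zero weight, so the averaged model is fitted mostly on the uncorrupted portion of each client's data; only gradients and weights are exchanged, never the raw local instances.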

Published

2020-04-03

How to Cite

Han, Y., & Zhang, X. (2020). Robust Federated Learning via Collaborative Machine Teaching. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4075-4082. https://doi.org/10.1609/aaai.v34i04.5826

Section

AAAI Technical Track: Machine Learning