Distributed Machine Learning through Heterogeneous Edge Systems

Authors

  • Hanpeng Hu, The University of Hong Kong
  • Dan Wang, The Hong Kong Polytechnic University
  • Chuan Wu, The University of Hong Kong

DOI

https://doi.org/10.1609/aaai.v34i05.6207

Abstract

Many emerging AI applications call for distributed machine learning (ML) among edge systems (e.g., IoT devices and PCs at the edge of the Internet), where data cannot be uploaded to a central venue for model training due to its large volume and/or security/privacy concerns. Edge devices are intrinsically heterogeneous in computing capacity, posing significant challenges to parameter synchronization in parallel training with the parameter server (PS) architecture. This paper proposes ADSP, a parameter synchronization model for distributed ML with heterogeneous edge systems. The core idea of ADSP is to eliminate the significant waiting time incurred by existing parameter synchronization models: faster edge devices continue training while committing their model updates at strategically decided intervals. We design algorithms that decide the time points at which each worker commits its model update, ensuring not only global model convergence but also faster convergence. Our testbed implementation and experiments show that ADSP significantly outperforms existing parameter synchronization models in terms of ML model convergence time, scalability, and adaptability to large heterogeneity.
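The commit-interval idea in the abstract can be made concrete with a short sketch. The Python snippet below is purely illustrative, not the paper's implementation: every name in it (ParameterServer, Worker, target_commit_rate, and so on) is a hypothetical stand-in. It shows heterogeneous workers training locally without ever waiting for one another, each pushing its accumulated update to the server after a per-worker number of local steps chosen so that fast and slow devices commit at roughly the same wall-clock rate.

```python
import numpy as np

class ParameterServer:
    """Holds the global model; workers push updates ("commits") to it."""
    def __init__(self, dim):
        self.weights = np.zeros(dim)

    def commit(self, update):
        # Fold one worker's accumulated local update into the global model.
        self.weights += update


class Worker:
    """An edge device that trains continuously and commits on its own schedule."""
    def __init__(self, ps, speed, target_commit_rate):
        self.ps = ps
        self.speed = speed  # local steps per second (heterogeneous across devices)
        # Commit every `interval` local steps, so that every worker commits
        # roughly `target_commit_rate` times per wall-clock second regardless
        # of its speed: fast devices simply do more local work per commit.
        self.interval = max(1, round(speed / target_commit_rate))
        self.accumulated = np.zeros_like(ps.weights)
        self.steps = 0

    def local_step(self):
        # Placeholder SGD step; a real worker would compute a gradient
        # on its local data shard.
        grad = 0.01 * np.random.randn(*self.ps.weights.shape)
        self.accumulated -= grad
        self.steps += 1
        if self.steps % self.interval == 0:
            self.ps.commit(self.accumulated)  # strategically timed commit
            self.accumulated[:] = 0.0


ps = ParameterServer(dim=4)
workers = [Worker(ps, speed=s, target_commit_rate=2.0) for s in (100, 40, 10)]

TICK = 0.1  # seconds of simulated wall clock per outer iteration
for _ in range(10):  # simulate one second: no worker ever waits for another
    for w in workers:
        for _ in range(int(w.speed * TICK)):  # faster devices take more steps
            w.local_step()

print(ps.weights)
```

The per-worker interval is the knob the paper's algorithms tune; here it is set naively from an assumed fixed target commit rate, whereas ADSP decides the commit points so as to guarantee global convergence and speed it up.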

Published

2020-04-03

How to Cite

Hu, H., Wang, D., & Wu, C. (2020). Distributed Machine Learning through Heterogeneous Edge Systems. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 7179-7186. https://doi.org/10.1609/aaai.v34i05.6207

Section

AAAI Technical Track: Multiagent Systems