Spiking-YOLO: Spiking Neural Network for Energy-Efficient Object Detection

Authors

  • Seijoon Kim Seoul National University
  • Seongsik Park Seoul National University
  • Byunggook Na Seoul National University
  • Sungroh Yoon Seoul National University

DOI:

https://doi.org/10.1609/aaai.v34i07.6787

Abstract

Over the past decade, deep neural networks (DNNs) have demonstrated remarkable performance in a variety of applications. As we try to solve more advanced problems, increasing demands for computing and power resources have become inevitable. Spiking neural networks (SNNs) have attracted widespread interest as the third generation of neural networks due to their event-driven and low-power nature. SNNs, however, are difficult to train, mainly owing to their complex neuronal dynamics and non-differentiable spike operations. Furthermore, their applications have been limited to relatively simple tasks such as image classification. In this study, we investigate the performance degradation of SNNs in a more challenging regression problem (i.e., object detection). Through our in-depth analysis, we introduce two novel methods: channel-wise normalization and signed neuron with imbalanced threshold, both of which provide fast and accurate information transmission for deep SNNs. Consequently, we present the first spike-based object detection model, called Spiking-YOLO. Our experiments show that Spiking-YOLO achieves remarkable results that are comparable (up to 98%) to those of Tiny YOLO on non-trivial datasets, PASCAL VOC and MS COCO. Furthermore, Spiking-YOLO on a neuromorphic chip consumes approximately 280 times less energy than Tiny YOLO and converges 2.3 to 4 times faster than previous SNN conversion methods.
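The channel-wise normalization mentioned above can be illustrated with a minimal sketch: in DNN-to-SNN conversion, weights are rescaled so that each output channel's activations fit the firing range of the spiking neurons, using per-channel (rather than per-layer) maximum activations. The weight layout `(out_ch, in_ch, kh, kw)` and the function name below are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def channel_wise_normalize(weights, prev_max, curr_max, eps=1e-8):
    """Sketch of channel-wise weight normalization for SNN conversion.

    weights:  conv weights, shape (out_ch, in_ch, kh, kw)  [assumed layout]
    prev_max: per-channel max activations of the previous layer, shape (in_ch,)
    curr_max: per-channel max activations of the current layer, shape (out_ch,)
    """
    # Undo the scaling applied to the previous layer's channels,
    # then divide by this layer's per-channel maxima so each output
    # channel's activation stays within the neuron's firing range.
    w = weights * prev_max[None, :, None, None]
    w = w / (curr_max[:, None, None, None] + eps)
    return w
```

Using the per-channel maximum (instead of a single layer-wide maximum) avoids under-scaling channels with small activations, which would otherwise fire too few spikes to transmit information quickly.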

Published

2020-04-03

How to Cite

Kim, S., Park, S., Na, B., & Yoon, S. (2020). Spiking-YOLO: Spiking Neural Network for Energy-Efficient Object Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11270-11277. https://doi.org/10.1609/aaai.v34i07.6787

Section

AAAI Technical Track: Vision