The softmax loss and its variants are widely used as objectives for embedding-learning applications such as face recognition. However, the intra- and inter-class objectives in Softmax are entangled: a well-optimized inter-class objective leads to relaxation of the intra-class objective, and vice versa. In this paper, we propose to dissect Softmax into independent intra- and inter-class objectives (D-Softmax) with a clear understanding. With D-Softmax as the objective, it is straightforward to tune each part to its best state. Furthermore, we find the computation of the inter-class part is redundant and propose sampling-based variants of D-Softmax to reduce the computation cost. Face recognition experiments on regular-scale data show D-Softmax is favorably comparable to existing losses such as SphereFace and ArcFace. Experiments on massive-scale data show the fast variants significantly accelerate the training process (e.g., 64×) with only a minor sacrifice in performance, outperforming existing acceleration methods for Softmax in terms of both performance and efficiency.
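To make the sampling idea concrete, the following is a minimal sketch of a sampled softmax cross-entropy: the target (intra-class) logit is kept exact, while the inter-class normalizer is estimated from a random subset of negative classes. This is an illustrative approximation only, not the paper's exact D-Softmax formulation; the function name and scaling scheme are assumptions for exposition.

```python
import numpy as np

def sampled_softmax_ce(logits, target, num_sampled, rng):
    """Approximate softmax cross-entropy for one example.

    The target logit (intra-class term) is computed exactly; the
    inter-class normalizer is estimated from `num_sampled` randomly
    chosen negative classes. Hypothetical sketch, not the paper's
    exact D-Softmax loss.
    """
    n = logits.shape[0]
    negatives = np.delete(np.arange(n), target)
    sampled = rng.choice(negatives, size=num_sampled, replace=False)
    # Rescale the sampled negative mass to estimate the full negative sum.
    scale = len(negatives) / num_sampled
    z = np.exp(logits[target]) + scale * np.exp(logits[sampled]).sum()
    return -(logits[target] - np.log(z))
```

When `num_sampled` equals the number of negatives, the estimate reduces to the exact softmax cross-entropy; smaller sample sizes trade a little bias and variance for proportionally less computation over the class dimension, which is where the speedup on massive-scale (many-identity) data comes from.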
Published Date: 2020-06-02
Registration: ISSN 2374-3468 (Online) ISSN 2159-5399 (Print) ISBN 978-1-57735-835-0 (10 issue set)
Copyright: Published by AAAI Press, Palo Alto, California USA Copyright © 2020, Association for the Advancement of Artificial Intelligence All Rights Reserved