Inter-Class Angular Loss for Convolutional Neural Networks

  • Le Hui Nanjing University of Science and Technology
  • Xiang Li Nanjing University of Science and Technology
  • Chen Gong Nanjing University of Science and Technology
  • Meng Fang Tencent AI Lab
  • Joey Tianyi Zhou Institute of High Performance Computing
  • Jian Yang Nanjing University of Science and Technology


Convolutional Neural Networks (CNNs) have shown great power in various classification tasks and have achieved remarkable results in practical applications. However, the distinct learning difficulties in discriminating different pairs of classes are largely ignored by existing networks. For instance, in the CIFAR-10 dataset, distinguishing cats from dogs is usually harder than distinguishing horses from ships. By carefully studying the behavior of CNN models during training, we observe that the confusion level of two classes is strongly correlated with their angular separability in the feature space. That is, the larger the inter-class angle is, the lower the confusion will be. Based on this observation, we propose a novel loss function dubbed “Inter-Class Angular Loss” (ICAL), which explicitly models the class correlation and can be directly applied to many existing deep networks. By minimizing the proposed ICAL, the networks can effectively discriminate the examples in similar classes by enlarging the angle between their corresponding class vectors. Thorough experimental results on a series of vision and non-vision datasets confirm that ICAL critically improves the discriminative ability of various representative deep neural networks and yields superior performance to that of the original networks with the conventional softmax loss.
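The abstract does not give the exact form of ICAL, but the core idea, penalizing small angles between class vectors so that confusable classes are pushed apart, can be illustrated with a minimal sketch. The sketch below is an assumption-laden stand-in, not the paper's formulation: it simply averages the pairwise cosine similarities of the class weight vectors, a quantity that shrinks as inter-class angles grow.

```python
import numpy as np

def inter_class_angular_penalty(W):
    """Illustrative penalty encouraging large pairwise angles between
    class vectors (rows of W).

    NOTE: this is NOT the ICAL formulation from the paper (which is not
    specified in the abstract); it is a hypothetical sketch that averages
    the off-diagonal cosine similarities between class vectors.
    """
    # Normalize each class vector to unit length.
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    # Pairwise cosine similarities between all class vectors.
    cos = Wn @ Wn.T
    num_classes = W.shape[0]
    # Exclude the diagonal (each class with itself).
    off_diag = cos[~np.eye(num_classes, dtype=bool)]
    # Smaller value <=> larger inter-class angles.
    return off_diag.mean()

# Orthogonal class vectors incur zero penalty; nearly parallel
# class vectors (hard-to-separate classes) incur a penalty near 1.
W_orthogonal = np.eye(3)
W_confusable = np.array([[1.0, 0.0], [0.99, 0.14]])
print(inter_class_angular_penalty(W_orthogonal))
print(inter_class_angular_penalty(W_confusable))
```

In a real network, such a term would be added to the softmax loss and minimized jointly, so that gradient descent both fits the labels and spreads the class vectors apart in angle.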

AAAI Technical Track: Machine Learning