Adaptive Activation Network and Functional Regularization for Efficient and Flexible Deep Multi-Task Learning

Authors

  • Yingru Liu, Stony Brook University
  • Xuewen Yang, Stony Brook University
  • Dongliang Xie, Beijing University of Posts and Telecommunications
  • Xin Wang, Stony Brook University
  • Li Shen, Tencent AI Lab
  • Haozhi Huang, Tencent AI Lab
  • Niranjan Balasubramanian, Stony Brook University

DOI

https://doi.org/10.1609/aaai.v34i04.5930

Abstract

Multi-task learning (MTL) is a common paradigm that seeks to improve the generalization performance of task learning by training related tasks simultaneously. However, it remains challenging to find a flexible and accurate architecture that can be shared among multiple tasks. In this paper, we propose a novel deep learning model, called Task Adaptive Activation Network (TAAN), that can automatically learn the optimal network architecture for MTL. The main principle of TAAN is to derive flexible activation functions for different tasks from the data, while the other parameters of the network are fully shared. We further propose two functional regularization methods that improve the MTL performance of TAAN. Comprehensive experiments demonstrate the improved performance of both TAAN and the regularization methods.
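
The abstract's core idea, task-specific activation functions on top of an otherwise fully shared network, can be sketched in a few lines. The PyTorch snippet below is an illustrative assumption of one such construction: each task's activation is a learned linear combination of fixed basis activations, so the per-task coefficients are the only task-specific parameters. The class name, basis set, and initialization are ours for illustration, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn


class TaskAdaptiveActivation(nn.Module):
    """Illustrative task-adaptive activation layer:
        f_t(x) = sum_k alpha[t, k] * basis_k(x),
    where the basis functions are shared across tasks and only the
    combination coefficients alpha are task-specific."""

    def __init__(self, num_tasks: int):
        super().__init__()
        # Illustrative basis choice; the paper's exact basis may differ.
        self.bases = [torch.relu, torch.tanh, torch.sigmoid, lambda x: x]
        # One coefficient vector per task, learned jointly with the network.
        self.alpha = nn.Parameter(
            torch.full((num_tasks, len(self.bases)), 1.0 / len(self.bases))
        )

    def forward(self, x: torch.Tensor, task: int) -> torch.Tensor:
        # Evaluate every basis function, then mix with the task's weights.
        stacked = torch.stack([b(x) for b in self.bases], dim=-1)  # (..., K)
        return stacked @ self.alpha[task]                          # (...)


# Usage: one shared layer whose activation specializes per task.
layer = TaskAdaptiveActivation(num_tasks=3)
h = torch.randn(8, 64)
out = layer(h, task=1)  # shape (8, 64)
```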

Published

2020-04-03

How to Cite

Liu, Y., Yang, X., Xie, D., Wang, X., Shen, L., Huang, H., & Balasubramanian, N. (2020). Adaptive Activation Network and Functional Regularization for Efficient and Flexible Deep Multi-Task Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4924-4931. https://doi.org/10.1609/aaai.v34i04.5930

Section

AAAI Technical Track: Machine Learning