Volume:
Proceedings of the AAAI Conference on Artificial Intelligence, 35
Issue:
No. 15: AAAI-21 Technical Tracks 15
Track:
AAAI Technical Track on Speech and Natural Language Processing II
Abstract:
Recent research has pointed to a limitation of word-level neural language models with softmax outputs. This limitation, known as the softmax bottleneck, refers to the inability of these models to produce high-rank log-probability (log P) matrices. Various solutions have been proposed to break this bottleneck, including Mixture of Softmaxes, SigSoftmax, and Linear Monotonic Softmax with Piecewise Linear Increasing Functions, and they were reported to achieve better perplexity on test data. A natural conclusion from these results is that there is a strong positive correlation between the rank of the log P matrix and the model's performance. In this work, we show via an extensive empirical study that such a correlation is fairly weak and that a high-rank log P matrix is neither necessary nor sufficient for better test perplexity. Although our results are empirical, they are established in part via the construction of a rich family of models, which we call Generalized SigSoftmax, that can produce log P matrices with diverse ranks. We also present an investigation into why the proposed solutions achieve better performance.
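
The following NumPy sketch (not code from the paper) illustrates the bottleneck the abstract describes: with hidden size d much smaller than the vocabulary size V, a plain softmax yields a log P matrix of rank at most d + 1, while a SigSoftmax-style output (probabilities proportional to exp(z) * sigmoid(z)) can exceed that bound. The matrix sizes, random embeddings, and helper name logsumexp are assumptions chosen only for illustration.

import numpy as np

def logsumexp(a, axis=-1):
    # numerically stable log-sum-exp along `axis`, keeping dims for broadcasting
    m = a.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))

rng = np.random.default_rng(0)
N, V, d = 200, 500, 16           # contexts, vocabulary size, hidden size (d << V)

H = rng.standard_normal((N, d))  # context (hidden-state) vectors
W = rng.standard_normal((V, d))  # output word embeddings
Z = H @ W.T                      # logits, shape (N, V)

# Plain softmax: log P = Z - logsumexp(Z), so rank(log P) <= d + 1.
log_p_softmax = Z - logsumexp(Z)

# SigSoftmax-style output: unnormalized log-score z + log(sigmoid(z));
# the extra element-wise nonlinearity lets rank(log P) exceed d + 1.
S = Z - np.logaddexp(0.0, -Z)    # z + log(sigmoid(z))
log_p_sigsoftmax = S - logsumexp(S)

print("d + 1                   =", d + 1)
print("rank(log P), softmax    =", np.linalg.matrix_rank(log_p_softmax))
print("rank(log P), SigSoftmax =", np.linalg.matrix_rank(log_p_sigsoftmax))

Running this prints a softmax rank of at most d + 1 and a substantially higher rank for the SigSoftmax-style matrix, which is the contrast the paper's empirical study examines.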
DOI:
10.1609/aaai.v35i15.17608