Proceedings: Proceedings of the AAAI Conference on Artificial Intelligence, 35
Issue: No. 16: AAAI-21 Technical Tracks 16
Track: AAAI Technical Track on Speech and Natural Language Processing III
Abstract:
Human judgments of word similarity have been a popular method of evaluating the quality of word embeddings, but they fail to capture geometric properties such as asymmetry. For example, it is more natural to say ``Ellipses are like Circles'' than ``Circles are like Ellipses''. Such asymmetry has been observed in word evocation experiments, where one word is used to recall another, yet these association data have been understudied as a measure of embedding quality. In this paper, we use three well-known evocation datasets for this purpose and study both static embeddings and contextual embeddings such as BERT. To account for the dynamic nature of BERT embeddings, we probe BERT's conditional probabilities as a language model over a large number of Wikipedia contexts to derive a theoretically justifiable Bayesian asymmetry score. The results show that asymmetry judgments and similarity judgments disagree, and that BERT's asymmetry judgments align with its strong performance on ``extrinsic evaluations''. This is the first time contextual embeddings' strength has been shown on an intrinsic evaluation, and the asymmetry judgment provides a new perspective for evaluating contextual embeddings and new insights for representation learning.
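
The probing idea in the abstract can be illustrated with a short sketch. The following Python listing is a minimal, hypothetical illustration, not the paper's exact Bayesian formulation: it uses the Hugging Face Transformers masked-LM interface to estimate how strongly one word predicts another under BERT, averaged over contexts. The model name bert-base-uncased, the two example context sentences standing in for Wikipedia text, and the simple probability-difference score are all assumptions made for illustration.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def avg_masked_prob(contexts, source, target):
    # Average P(target | context with `source` replaced by [MASK]),
    # assuming both words are single WordPiece tokens in BERT's vocabulary.
    target_id = tokenizer.convert_tokens_to_ids(target)
    probs = []
    for sentence in contexts:
        masked = sentence.replace(source, tokenizer.mask_token, 1)
        inputs = tokenizer(masked, return_tensors="pt")
        mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()
        with torch.no_grad():
            logits = model(**inputs).logits
        probs.append(torch.softmax(logits[0, mask_pos], dim=-1)[target_id].item())
    return sum(probs) / len(probs)

# Hypothetical stand-ins for Wikipedia sentences containing each word.
ellipse_contexts = ["an ellipse is a closed curve in the plane."]
circle_contexts = ["a circle is a closed curve in the plane."]

p_b_given_a = avg_masked_prob(ellipse_contexts, "ellipse", "circle")  # "circle" recalled in ellipse contexts
p_a_given_b = avg_masked_prob(circle_contexts, "circle", "ellipse")  # "ellipse" recalled in circle contexts

# A positive difference suggests "ellipse evokes circle" more than the reverse,
# mirroring the asymmetry example in the abstract.
print(p_b_given_a - p_a_given_b)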
DOI:
10.1609/aaai.v35i16.17701