Proceedings:
Proceedings of the AAAI Conference on Artificial Intelligence
Volume:
36
Issue:
No. 10: AAAI-22 Technical Tracks 10
Track:
AAAI Technical Track on Speech and Natural Language Processing
Abstract:
Progress in pre-trained language models has led to a surge of impressive results on downstream tasks for natural language understanding. Recent work on probing pre-trained language models has uncovered a wide range of linguistic properties encoded in their contextualized representations. However, it is unclear whether these representations encode the semantic knowledge that is crucial to symbolic inference methods. We propose a methodology for probing pre-trained language model representations for the knowledge that logical systems require for inference but often lack. Our probing datasets cover key types of knowledge used by many symbolic inference systems. We find that (i) pre-trained language models do encode several types of knowledge for inference, although some types are not encoded, and (ii) language models can effectively learn missing knowledge for inference through fine-tuning. Overall, our findings provide insights into which aspects of knowledge for inference language models and their pre-training procedures capture. Moreover, we demonstrate the potential of language models to serve as semantic and background knowledge bases for supporting symbolic inference methods.
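As a rough illustration of the probing setup the abstract describes, the sketch below trains a lightweight classifier on frozen pre-trained representations to test whether one type of inference knowledge is recoverable from them. This is not the paper's released code: the choice of bert-base-uncased, the [CLS]-vector features, the hypernymy-style example pairs, and the logistic-regression probe are all assumptions standing in for the paper's actual probing datasets and classifiers.

```python
# Minimal probing sketch (hypothetical, not the paper's implementation):
# a frozen pre-trained encoder provides sentence-pair representations,
# and a simple linear probe is trained on top to predict a knowledge label.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # representations stay frozen; only the probe is trained


def embed(premise: str, hypothesis: str) -> torch.Tensor:
    """Encode a sentence pair and return the [CLS] vector as the probe input."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0, :].squeeze(0)


# Hypothetical probing examples for one knowledge type (hypernymy-style
# substitution): label 1 if the second sentence follows from the first.
pairs = [
    ("A dog is barking.", "An animal is barking.", 1),
    ("A dog is barking.", "A cat is barking.", 0),
]

X = torch.stack([embed(p, h) for p, h, _ in pairs]).numpy()
y = [label for _, _, label in pairs]

probe = LogisticRegression(max_iter=1000).fit(X, y)
print("probe accuracy on training pairs:", probe.score(X, y))
```

In a full probing study, the probe would be evaluated on held-out pairs per knowledge type, and low probe accuracy would suggest that the knowledge is not encoded; the fine-tuning finding in the abstract corresponds to updating the encoder itself on such data rather than freezing it.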
DOI:
10.1609/aaai.v36i10.21294