Learning Conceptual-Contextual Embeddings for Medical Text

Authors

  • Xiao Zhang, Tsinghua University
  • Dejing Dou, University of Oregon
  • Ji Wu, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v34i05.6504

Abstract

External knowledge is often useful for natural language understanding tasks. We introduce a contextual text representation model called Conceptual-Contextual (CC) embeddings, which incorporates structured knowledge into text representations. Unlike entity embedding methods, our approach encodes a knowledge graph into a context model. CC embeddings can be easily reused for a wide range of tasks, in a fashion similar to pre-trained language models. Our model effectively encodes the very large UMLS (Unified Medical Language System) knowledge base by leveraging semantic generalizability. Experiments on electronic health records (EHRs) and medical text processing benchmarks show that our model gives a major boost to the performance of supervised medical NLP tasks.
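Since the abstract positions CC embeddings as reusable contextual features in the spirit of pre-trained language models, the sketch below illustrates that usage pattern only: a frozen contextual embedder whose outputs are concatenated with a downstream task's own embeddings. All names, dimensions, and architectural choices here (`CCEmbedding`, the bidirectional LSTM context model, `DownstreamClassifier`) are illustrative assumptions and are not the authors' actual implementation.

```python
# Minimal PyTorch sketch of consuming reusable contextual embeddings.
# Everything below is a hypothetical stand-in, not the paper's CC model.
import torch
import torch.nn as nn


class CCEmbedding(nn.Module):
    """Hypothetical frozen contextual embedder: token embeddings passed
    through a bidirectional LSTM context model (an assumption; the paper's
    context model may differ)."""

    def __init__(self, vocab_size=30000, emb_dim=200, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.context = nn.LSTM(emb_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        for p in self.parameters():      # frozen: reused, not fine-tuned
            p.requires_grad = False

    def forward(self, token_ids):
        out, _ = self.context(self.embed(token_ids))
        return out                       # (batch, seq_len, 2 * hidden_dim)


class DownstreamClassifier(nn.Module):
    """Task model that concatenates its own trainable embeddings with the
    pre-trained contextual features, then classifies each sequence."""

    def __init__(self, cc: CCEmbedding, vocab_size=30000,
                 task_emb_dim=100, num_classes=5):
        super().__init__()
        self.cc = cc
        self.task_embed = nn.Embedding(vocab_size, task_emb_dim)
        feat_dim = task_emb_dim + 2 * cc.context.hidden_size
        self.encoder = nn.LSTM(feat_dim, 128, batch_first=True)
        self.out = nn.Linear(128, num_classes)

    def forward(self, token_ids):
        feats = torch.cat([self.task_embed(token_ids),
                           self.cc(token_ids)], dim=-1)
        _, (h, _) = self.encoder(feats)
        return self.out(h[-1])           # sequence-level logits


# Usage: classify a toy batch of token-id sequences.
model = DownstreamClassifier(CCEmbedding())
logits = model(torch.randint(0, 30000, (4, 16)))
print(logits.shape)                      # torch.Size([4, 5])
```

The key design point the sketch mirrors is that the contextual embedder is trained once and then frozen, so any number of supervised tasks can reuse it as an additional feature extractor.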

Published

2020-04-03

How to Cite

Zhang, X., Dou, D., & Wu, J. (2020). Learning Conceptual-Contextual Embeddings for Medical Text. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9579-9586. https://doi.org/10.1609/aaai.v34i05.6504

Issue

Vol. 34 No. 05

Section

AAAI Technical Track: Natural Language Processing