JEC-QA: A Legal-Domain Question Answering Dataset

Authors

  • Haoxi Zhong, Tsinghua University
  • Chaojun Xiao, Tsinghua University
  • Cunchao Tu, Tsinghua University
  • Tianyang Zhang, Powerlaw Inc.
  • Zhiyuan Liu, Tsinghua University
  • Maosong Sun, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v34i05.6519

Abstract

We present JEC-QA, the largest question answering dataset in the legal domain, collected from the National Judicial Examination of China. The examination is a comprehensive evaluation of professional skills for legal practitioners. College students are required to pass the examination to be certified as a lawyer or a judge. The dataset is challenging for existing question answering methods because both retrieving relevant materials and answering questions require logical reasoning. Because answering legal questions demands multiple reasoning abilities, state-of-the-art models achieve only about 28% accuracy on JEC-QA, while skilled and unskilled humans reach 81% and 64% accuracy respectively, indicating a huge gap between humans and machines on this task. We will release JEC-QA and our baselines to help improve the reasoning ability of machine comprehension models. You can access the dataset from http://jecqa.thunlp.org/.

Published

2020-04-03

How to Cite

Zhong, H., Xiao, C., Tu, C., Zhang, T., Liu, Z., & Sun, M. (2020). JEC-QA: A Legal-Domain Question Answering Dataset. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9701-9708. https://doi.org/10.1609/aaai.v34i05.6519

Section

AAAI Technical Track: Natural Language Processing