Iteratively Questioning and Answering for Interpretable Legal Judgment Prediction

Authors

  • Haoxi Zhong Tsinghua University
  • Yuzhong Wang Tsinghua University
  • Cunchao Tu Tsinghua University
  • Tianyang Zhang Powerlaw Inc.
  • Zhiyuan Liu Tsinghua University
  • Maosong Sun Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v34i01.5479

Abstract

Legal Judgment Prediction (LJP) aims to predict judgment results according to the facts of cases. In recent years, LJP has rapidly drawn increasing attention from both academia and the legal industry, as it can provide references for legal practitioners and is expected to promote judicial justice. However, most existing research lacks interpretability, which may lead to ethical issues such as inconsistent judgments or gender bias. In this paper, we present QAjudge, a model based on reinforcement learning that visualizes the prediction process and gives interpretable judgments. QAjudge follows two essential principles found in legal systems across the world: Presumption of Innocence and Elemental Trial. During inference, a Question Net selects questions from a given set, and an Answer Net answers each question according to the fact description. Finally, a Predict Net produces judgment results based on the answers. Reward functions are designed to minimize the number of questions asked. We conduct extensive experiments on several real-world datasets. Experimental results show that QAjudge provides interpretable judgments while maintaining performance comparable to other state-of-the-art LJP models. The code is available at https://github.com/thunlp/QAjudge.
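The iterative question-answer-predict loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`select_fn`, `answer_fn`, `predict_fn`) and the toy components stand in for the trained Question, Answer, and Predict Nets, and the stopping logic is an assumption based on the stated principles.

```python
# Hypothetical sketch of QAjudge-style inference (all names are
# illustrative assumptions, not the paper's actual interfaces).

def qa_judge(fact, questions, answer_fn, select_fn, predict_fn, max_steps=10):
    """Iteratively question the fact description until a judgment is made.

    Follows Presumption of Innocence: the default result is "innocent"
    unless the accumulated answers support a charge.
    """
    answers = {}
    remaining = list(questions)
    for _ in range(max_steps):
        if not remaining:
            break
        q = select_fn(fact, answers, remaining)   # Question Net: pick next question
        answers[q] = answer_fn(fact, q)           # Answer Net: answer from the fact
        remaining.remove(q)
        result = predict_fn(answers)              # Predict Net: judge from answers so far
        if result is not None:                    # confident enough to stop early
            return result, answers                # fewer questions = higher reward
    return "innocent", answers                    # presumption of innocence


# Toy components for illustration only.
def toy_select(fact, answers, remaining):
    return remaining[0]

def toy_answer(fact, q):
    return q in fact  # "yes" iff the element is mentioned in the fact

def toy_predict(answers):
    # Elemental Trial: convict only once every element of the charge is confirmed.
    elements = {"took property", "without consent"}
    if elements <= {q for q, a in answers.items() if a}:
        return "theft"
    return None

result, asked = qa_judge(
    "The defendant took property without consent.",
    ["took property", "without consent", "used violence"],
    toy_answer, toy_select, toy_predict,
)
```

Note how the loop stops as soon as the Predict Net is confident, so the third question is never asked; this early stopping is what the reward functions in the paper are designed to encourage.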

Published

2020-04-03

How to Cite

Zhong, H., Wang, Y., Tu, C., Zhang, T., Liu, Z., & Sun, M. (2020). Iteratively Questioning and Answering for Interpretable Legal Judgment Prediction. Proceedings of the AAAI Conference on Artificial Intelligence, 34(01), 1250-1257. https://doi.org/10.1609/aaai.v34i01.5479

Section

AAAI Technical Track: Applications