In this study, we proposed an approach to automatically generating court views from the fact descriptions of legal cases. This is a text-to-text natural language generation problem, and solving it can facilitate automatic legal document generation. Owing to the specifics of the legal domain, our model exploits charge and law article information during generation rather than relying on the fact description text alone. A BERT model serves as the encoder and a Transformer architecture as the decoder; to integrate these two components smoothly, we train them with two separate optimizers. Experiments on two datasets of Chinese legal cases show that our approach outperforms competing methods.
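The two-optimizer training setup can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the paper's implementation: plain `nn.TransformerEncoder`/`nn.TransformerDecoder` modules stand in for the pretrained BERT encoder and the from-scratch Transformer decoder, and the learning rates and dimensions are illustrative assumptions.

```python
import torch
from torch import nn

# Stand-ins for the two components: a pretrained BERT-style encoder
# (here a plain TransformerEncoder for illustration) and a Transformer
# decoder that is trained from scratch.
d_model = 64
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2)

# Two separate optimizers: a small learning rate suits a pretrained
# encoder, a larger one the freshly initialized decoder (values assumed).
enc_opt = torch.optim.AdamW(encoder.parameters(), lr=2e-5)
dec_opt = torch.optim.AdamW(decoder.parameters(), lr=1e-4)

# One illustrative training step on dummy embeddings.
src = torch.randn(8, 16, d_model)   # fact-description inputs (dummy)
tgt = torch.randn(8, 20, d_model)   # court-view targets (dummy)

memory = encoder(src)               # encode the fact description
out = decoder(tgt, memory)          # decode the court view
loss = out.pow(2).mean()            # placeholder loss for the sketch

enc_opt.zero_grad()
dec_opt.zero_grad()
loss.backward()
enc_opt.step()                      # each optimizer updates its own part
dec_opt.step()
```

Keeping the optimizers separate lets the pretrained encoder and the randomly initialized decoder use different learning rates and schedules, which avoids destabilizing the pretrained weights early in training.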