Certifying the True Error: Machine Learning in Coq with Verified Generalization Guarantees

Authors

  • Alexander Bagnall, Ohio University
  • Gordon Stewart, Ohio University

DOI:

https://doi.org/10.1609/aaai.v33i01.33012662

Abstract

We present MLCERT, a novel system for practical, mechanized proofs of the generalization of learning procedures, bounding expected error in terms of training or test error. MLCERT is mechanized in that we prove generalization bounds inside the theorem prover Coq; thus the bounds are machine-checked by Coq’s proof checker. MLCERT is practical in that we extract learning procedures defined in Coq to executable code; thus procedures with proved generalization bounds can be trained and deployed in real systems. MLCERT is well documented and open source; thus we expect it to be usable even by those without Coq expertise. To validate MLCERT, which is compatible with external tools such as TensorFlow, we use it to prove generalization bounds on neural networks trained using TensorFlow on the extended MNIST data set.
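For reference, the kind of bound the abstract refers to can be illustrated by a standard one-sided Hoeffding inequality (a generic sketch, not necessarily the exact statement proved in MLCERT): for a fixed hypothesis h evaluated on m i.i.d. held-out samples, with probability at least 1 − δ,

    \[
      \mathrm{err}(h) \;\le\; \widehat{\mathrm{err}}(h) + \sqrt{\frac{\ln(1/\delta)}{2m}},
    \]

where err(h) is the expected (true) error and \widehat{err}(h) is the empirical (test) error. Bounds of this shape are what "certifying the true error" in terms of observed error means in this setting.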

Published

2019-07-17

How to Cite

Bagnall, A., & Stewart, G. (2019). Certifying the True Error: Machine Learning in Coq with Verified Generalization Guarantees. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 2662-2669. https://doi.org/10.1609/aaai.v33i01.33012662

Section

AAAI Technical Track: Knowledge Representation and Reasoning