Verifying Robustness of Gradient Boosted Models

Authors

  • Gil Einziger, Nokia Bell Labs
  • Maayan Goldstein, Nokia Bell Labs
  • Yaniv Sa’ar, Nokia Bell Labs
  • Itai Segall, Nokia Bell Labs

DOI:

https://doi.org/10.1609/aaai.v33i01.33012446

Abstract

Gradient boosted models are a fundamental machine learning technique. Robustness to small perturbations of the input is an important quality measure for machine learning models, but the literature lacks a method to prove the robustness of gradient boosted models.

This work introduces VERIGB, a tool for quantifying the robustness of gradient boosted models. VERIGB encodes the model and the robustness property as an SMT formula, enabling state-of-the-art verification tools to prove the model’s robustness. We extensively evaluate VERIGB on publicly available datasets and demonstrate its ability to verify large models. Finally, we show that some model configurations tend to be inherently more robust than others.
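For intuition, here is a minimal sketch (not the authors’ implementation) of the kind of encoding the abstract describes, written in Python against the Z3 SMT solver: a toy two-stump ensemble and an L∞ local-robustness query are expressed as an SMT formula, and the solver either returns an adversarial perturbation or proves that none exists inside the ball. The ensemble, reference point, and radius below are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: encode a tiny gradient boosted ensemble and a local
# robustness query as an SMT formula, then ask Z3 to decide it.
from z3 import Real, Solver, If, And, sat

# Perturbed input variables.
x1, x2 = Real("x1"), Real("x2")

def tree1(a, b):
    # Hypothetical stump: if a < 0.5 then leaf score 0.4 else -0.3.
    return If(a < 0.5, 0.4, -0.3)

def tree2(a, b):
    # Hypothetical stump on the second feature.
    return If(b < 1.0, 0.2, -0.6)

def score(a, b):
    # Gradient boosted score = sum of the trees' leaf scores.
    return tree1(a, b) + tree2(a, b)

# Reference point and perturbation radius (assumed values for illustration).
p1, p2 = 0.3, 0.8   # score(p1, p2) = 0.6 > 0, i.e. classified positive
eps = 0.25          # L-infinity ball around the reference point

s = Solver()
s.add(And(x1 >= p1 - eps, x1 <= p1 + eps,
          x2 >= p2 - eps, x2 <= p2 + eps))  # stay inside the eps-ball
s.add(score(x1, x2) <= 0)                   # ...and flip the predicted class

if s.check() == sat:
    print("Not robust; counterexample:", s.model())
else:
    print("Robust within eps =", eps)
```

With these toy numbers the solver finds a counterexample (pushing x1 past 0.5 and x2 past 1.0 flips the score to -0.9), illustrating how a satisfiable formula corresponds to a robustness violation and an unsatisfiable one to a proof of local robustness.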

Published

2019-07-17

How to Cite

Einziger, G., Goldstein, M., Sa’ar, Y., & Segall, I. (2019). Verifying Robustness of Gradient Boosted Models. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 2446-2453. https://doi.org/10.1609/aaai.v33i01.33012446

Section

AAAI Technical Track: Human-AI Collaboration