Abstract:
The evaluation of classifiers and learning algorithms is a topic that has, in general, received little attention in the fields of Machine Learning and Data Mining. More often than not, common off-the-shelf metrics such as Accuracy, Precision/Recall, and ROC Analysis, as well as confidence estimation methods such as the t-test, are applied without much attention being paid to their meaning. The purpose of this paper is to give the reader an intuitive idea of what could go wrong with our commonly used evaluation methods. In particular, we show, through examples, that because evaluation metrics and confidence estimation methods summarize a system's performance, they can, at times, obscure important behaviors of the hypotheses or algorithms under consideration. We hope that this simple review of some of the problems surrounding evaluation will sensitize Machine Learning and Data Mining researchers to the issue and encourage them to think twice before selecting and applying an evaluation method.
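As a quick illustration of the kind of problem the abstract alludes to (this sketch is not taken from the paper itself, and the class counts below are hypothetical), consider how Accuracy can mask a classifier that entirely ignores the minority class on an imbalanced test set:

```python
# Illustrative sketch (not from the paper): on an imbalanced test set,
# a trivial classifier that always predicts the majority class scores
# high Accuracy while its Recall on the minority class is zero.

# Hypothetical test set: 95 negatives (0) and 5 positives (1).
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100  # "always predict negative" classifier

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

true_pos = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
actual_pos = sum(1 for t in y_true if t == 1)
recall = true_pos / actual_pos if actual_pos else 0.0

print(f"Accuracy: {accuracy:.2f}")               # 0.95 -- looks excellent
print(f"Recall (positive class): {recall:.2f}")  # 0.00 -- behavior the summary metric hides
```

The summary metric (Accuracy) looks excellent, yet it obscures the fact that the hypothesis never detects a single positive instance; this is exactly the sort of hidden behavior the paper examines.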