Abstract:
Applications of machine learning have shown repeatedly that the standard assumptions of uniform class distribution and uniform misclassification costs rarely hold. Little is known about how to select classifiers when error costs and class distributions are not known precisely at training time, or when they can change. We present a method for analyzing and visualizing the performance of classification methods that is robust to changing distributions and allows a sensitivity analysis when a range of costs is known. The method combines techniques from ROC analysis, decision analysis, and computational geometry, and adapts them to the particulars of analyzing learned classifiers. We then demonstrate the analysis and visualization properties of the method.
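As a rough illustration of the geometric idea behind this kind of analysis (a sketch only; the paper's own formulation may differ), the snippet below computes the upper convex hull of a set of classifiers' points in ROC space. Only classifiers on that hull can be optimal under some combination of class distribution and misclassification costs, so the hull identifies the candidates worth keeping when those conditions are uncertain. The classifier names and ROC points used here are hypothetical.

```python
def roc_upper_hull(points):
    """Return the ROC points on the upper convex hull of ROC space.

    `points` is a list of (name, fpr, tpr) tuples.  The trivial
    'always negative' (0, 0) and 'always positive' (1, 1) classifiers
    are included as hull endpoints.
    """
    pts = sorted([(0.0, 0.0, "always-neg"), (1.0, 1.0, "always-pos")] +
                 [(fpr, tpr, name) for name, fpr, tpr in points])
    hull = []
    for fpr, tpr, name in pts:
        while len(hull) >= 2:
            (x1, y1, _), (x2, y2, _) = hull[-2], hull[-1]
            # Cross product test: keep only right turns (decreasing slopes),
            # i.e. the upper hull; pop points that fall on or below it.
            if (x2 - x1) * (tpr - y1) - (y2 - y1) * (fpr - x1) < 0:
                break
            hull.pop()
        hull.append((fpr, tpr, name))
    return hull

if __name__ == "__main__":
    # Hypothetical (FPR, TPR) operating points for four learned classifiers.
    classifiers = [("tree", 0.10, 0.60), ("nb", 0.30, 0.85),
                   ("knn", 0.25, 0.70), ("rules", 0.55, 0.95)]
    for fpr, tpr, name in roc_upper_hull(classifiers):
        print(f"{name}: FPR={fpr:.2f}, TPR={tpr:.2f}")
```

In this example the "knn" classifier falls below the hull, so it is dominated for every possible cost and class distribution, while the remaining classifiers each remain optimal over some range of conditions.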