A Method for Large-Scale l1-regularized Logistic Regression

Kwangmoo Koh, Seung-Jean Kim, Stephen Boyd

Logistic regression with l1 regularization has been proposed as a promising method for feature selection in classification problems. Several specialized solution methods have been proposed for l1-regularized logistic regression problems (LRPs), but existing methods do not scale well to the large problems that arise in many practical settings. In this paper we describe an efficient interior-point method for solving l1-regularized LRPs. Small problems, with up to a thousand or so features and examples, can be solved in seconds on a PC. A variation on the basic method, which uses a preconditioned conjugate gradient method to compute the search step, can solve large sparse problems, with a million features and examples (e.g., the 20 Newsgroups data set), in a few tens of minutes on a PC. Numerical experiments show that our method outperforms standard methods for solving convex optimization problems as well as other methods specifically designed for l1-regularized LRPs.
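To make the problem class concrete, the following is a minimal sketch of the l1-regularized logistic regression objective the abstract refers to, minimized here with simple proximal gradient (soft-thresholding) iterations rather than the authors' interior-point method; the data, regularization weight, and step size are illustrative assumptions, not from the paper.

```python
import numpy as np

def logistic_loss(w, X, y):
    # Average logistic loss over m examples; labels y are in {-1, +1}.
    z = y * (X @ w)
    return np.mean(np.log1p(np.exp(-z)))

def objective(w, X, y, lam):
    # l1-regularized LRP objective: logistic loss + lam * ||w||_1.
    return logistic_loss(w, X, y) + lam * np.sum(np.abs(w))

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1; produces exact zeros (feature selection).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad(X, y, lam, step=0.25, iters=500):
    # Plain proximal gradient descent (not the paper's interior-point method).
    m, n = X.shape
    w = np.zeros(n)
    for _ in range(iters):
        z = y * (X @ w)
        grad = -(X.T @ (y / (1.0 + np.exp(z)))) / m  # gradient of logistic loss
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Illustrative synthetic data: only the first three features are relevant.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 1.0]
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(200))
lam = 0.05
w = prox_grad(X, y, lam)
```

The soft-thresholding step is what drives many coefficients exactly to zero, which is the feature-selection effect of the l1 penalty motivating the paper.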

Subjects: 12. Machine Learning and Discovery; 15. Problem Solving

Submitted: Apr 22, 2007
