Proceedings: Thirty-First AAAI Conference on Artificial Intelligence
Volume: Proceedings of the AAAI Conference on Artificial Intelligence, 31
Issue: No. 1
Track:
Machine Learning Methods
Abstract:
Structured sparse learning has become a popular and mature research field. Among structured sparse models, we observe an interesting fact: most structured sparse properties can be captured by convolution operators, the best-known examples being total variation and wavelet sparsity. This observation naturally leads to a generalization we term Convolutional Sparsity. Because this generalization bridges convolution theory and sparse learning theory, we are able to propose a general, efficient, hyperparameter-free optimization framework for convolutional sparse models, thanks to the analysis theory of convolution operators. The convergence of the algorithm is comprehensively analyzed, with a non-ergodic rate of O(1/ε²) and an ergodic rate of O(1/ε), where ε is the desired accuracy. Extensive experiments confirm the superior performance of our general algorithm on various convolutional sparse models, even outperforming some application-specific algorithms.
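The convolution view is easy to illustrate for total variation: the TV of a 1-D signal is the l1 norm of its convolution with the difference kernel [1, -1]. A minimal NumPy sketch of this idea (illustrative only, not the authors' code; signal and kernel are chosen for demonstration):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.repeat(rng.normal(size=5), 20)  # piecewise-constant signal

    # Convolving with the kernel [1, -1] yields first differences x[n] - x[n-1].
    d = np.convolve(x, [1, -1], mode="valid")

    # Total variation is the l1 norm of that convolution output...
    tv_conv = np.abs(d).sum()
    assert np.isclose(tv_conv, np.abs(np.diff(x)).sum())

    # ...and the output is sparse for piecewise-constant signals: few nonzero jumps.
    print(f"TV = {tv_conv:.4f}, nonzeros: {np.count_nonzero(d)} of {d.size}")

Wavelet sparsity fits the same pattern, since a discrete wavelet transform can be written as a bank of convolutions followed by downsampling.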
DOI:
10.1609/aaai.v31i1.10880