Finding Relevant Subspaces in Neural Network Learning

Avrim Blum, Ravi Kannan

A common technique for finding relevant subspaces is principal component analysis (PCA). Our main result is a proof that, under an interesting set of conditions, a variation on this approach will find at least one vector lying (nearly) in the target subspace. A recursive approach can then be used to obtain additional such vectors.
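The abstract does not spell out the conditions or the exact PCA variant, so the following Python sketch is purely illustrative of the general pattern it describes: a PCA-style spectral step recovers one direction (nearly) inside a relevant subspace, which is then projected out before recursing to collect further directions. The function names (`top_direction`, `find_subspace`) and the label-weighted second-moment statistic are assumptions for illustration, not the authors' algorithm.

```python
# Illustrative sketch only -- not the paper's algorithm. It shows the
# generic "one PCA direction, then deflate and recurse" pattern that
# the abstract describes.
import numpy as np

def top_direction(X, y):
    """Leading eigenvector of a label-weighted second-moment matrix.

    M = sum_i y_i * x_i x_i^T is one common PCA-style statistic whose
    top eigenvector can land near a relevant subspace; the paper's
    actual variant and conditions may differ (assumption).
    """
    M = (X * y[:, None]).T @ X          # d x d weighted second moment
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, np.argmax(np.abs(eigvals))]

def find_subspace(X, y, k):
    """Recursively collect k (near-)relevant orthonormal directions."""
    basis = []
    for _ in range(k):
        v = top_direction(X, y)
        basis.append(v)
        # Deflate: project the data onto the orthogonal complement of v,
        # forcing the next spectral step to find a new direction.
        X = X - np.outer(X @ v, v)
    return np.column_stack(basis)       # d x k matrix of directions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 10))
    # Toy labels depending (evenly) on only the first two coordinates,
    # so the relevant subspace is span(e1, e2).
    y = np.sign(np.abs(X[:, 0] + 0.5 * X[:, 1]) - 0.8)
    print(find_subspace(X, y, 2))
```

The deflation step is what makes the recursion work: once a recovered direction is projected out of the data, the weighted second-moment matrix no longer has that direction as a dominant eigenvector, so each pass yields a genuinely new vector.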
