Proceedings:
Book One
Issue:
Proceedings of the AAAI Conference on Artificial Intelligence, 12
Track:
Student Abstracts
Abstract:
After a learning system has been trained, the usual procedure is to average the testing errors to obtain an estimate of how well the system has learned. However, averaging throws away a lot of potentially useful information. We present an algorithm that exploits the distribution of errors to find where the system performs badly and to partition the input space into parts that can be learned easily. We first show a simple example that gives the intuition behind the algorithm, and then a more complex one that brings out some of its details. Suppose we are trying to learn the absolute value function. Almost all learning algorithms perform well along the arms of the function but do badly around the cusp. If we notice the 'hill' of errors around x = 0, we can partition the space we are trying to learn into two parts that fall on either side of the hill. Those two partitions are not only linear but easily learnable. Each partition can be trained separately, and when tested separately it gives a better answer, since irrelevant and misleading training points from the other partition have not been included.
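
As an illustration of the idea on the absolute value example, here is a minimal sketch in Python with NumPy (our own reconstruction, not the paper's implementation). It uses a leave-one-out local linear fit as a stand-in learner, takes the peak of the resulting error 'hill' as the split point, and then fits each partition separately; the neighbourhood size k and the argmax split rule are illustrative assumptions.

    import numpy as np

    # Stand-in learner (assumption, not the paper's method): for each point,
    # fit a line to its k nearest neighbours and record the prediction error.
    def local_linear_errors(xs, ys, k=40):
        errs = np.empty_like(xs)
        for i, xi in enumerate(xs):
            # Exclude the point itself (index 0 of the distance sort).
            nbrs = np.argsort(np.abs(xs - xi))[1:k + 1]
            a, b = np.polyfit(xs[nbrs], ys[nbrs], 1)
            errs[i] = abs(a * xi + b - ys[i])
        return errs

    x = np.linspace(-1.0, 1.0, 201)
    y = np.abs(x)

    # Away from the cusp every neighbourhood lies on one arm and the fit is
    # exact; neighbourhoods straddling x = 0 produce a 'hill' of errors.
    errors = local_linear_errors(x, y)
    split = x[np.argmax(errors)]
    print(f"error peaks at x = {split:.3f}")

    # Train each partition separately: both sides are purely linear, and the
    # misleading cross-cusp training points are gone.
    for mask, name in ((x <= split, "left"), (x > split, "right")):
        a, b = np.polyfit(x[mask], y[mask], 1)
        resid = np.abs(a * x[mask] + b - y[mask])
        print(f"{name} partition: slope = {a:+.2f}, max error = {resid.max():.1e}")

On this toy data the error hill peaks at the cusp, and each partition is then fit almost exactly, matching the intuition described above.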