To learn from naive instructors, machines must learn more like humans do. We are organizing a "Bootstrapped Learning Cup" competition, in which competitors attempt to create the best learning agent for a curriculum whose focus is known but whose specifics are not. By focusing each competition on a particular factoring of the larger problem of human-like learning, we hope to simultaneously identify productive decompositions of that problem and components that can eventually be integrated to solve it. To this end, we seek to measure learner autonomy with "spectrum curricula": incrementally varied sets of curricula ranging from the extremely telegraphic to the overly detailed. If this program of competitions succeeds, it will lead to a revolutionary change in the deployability of machine intelligence, by allowing human curricula to be used to configure a system for an application area and by allowing machines to "culturally" adapt to the specifics of their deployment. The tools and models developed during this effort may also lead to significant improvements in our models of human learning and cognition.