Comparing Different Optimality — Theoretic Learning Algorithms: The Case of Metrical Phonology

Diana Apoussidou and Paul Boersma

We fed short overt Latin stress patterns to 100 virtual language learners whose grammars consist of a universal set of 12 Optimality-Theoretic constraints. For 50 learners the learning algorithm was Error-Driven Constraint Demotion (EDCD); for the remaining 50 it was the Gradual Learning Algorithm (GLA). The EDCD learners did not succeed: they converged on grammars that could not reproduce the correct stress pattern. The GLA learners did succeed: they arrived at an analysis close to one of the analyses proposed in the literature. These results add to previous findings that the GLA seems to be a more realistic ingredient than EDCD for models of actual language acquisition.

This page is copyrighted by AAAI. All rights reserved.