This paper presents an error-driven model of OT acquisition called Error-Selective Learning, which is both restrictive and gradual. It is restrictive in that it chooses grammars that generate the observed outputs but as few others as possible, enforced by using a version of the Biased Constraint Demotion algorithm (BCD: Prince and Tesar, 2004) to learn rankings. It is gradual, unlike a pure BCD learner, in that it accumulates many errors at each stage before re-ranking, and searches for a new grammar only when several errors point to the same problem with the current one. The author illustrates two intermediate stages frequently observed in L1 development, each captured by a ranking that sits between the initial and final states with respect to the position of Markedness constraints. The author then demonstrates how the error-selective learner derives such stages, relying on both error frequencies and the logic of OT constraint rankings, while still ensuring that learners eventually converge on a restrictive final grammar.
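The restrictiveness bias that BCD contributes can be sketched in code. The following is a simplified, hypothetical Python rendering, not the paper's implementation: winner-loser pairs are encoded with 'W'/'L'/'e' for whether each constraint prefers the winner, the loser, or neither, and constraints are tagged 'M' (Markedness) or 'F' (Faithfulness). One known simplification: when no Markedness constraint can be placed, this sketch places all currently placeable Faithfulness constraints, whereas Prince and Tesar's algorithm selects a smaller set chosen to free up further Markedness constraints.

```python
from typing import Dict, List

def bcd(constraints: Dict[str, str],
        pairs: List[Dict[str, str]]) -> List[List[str]]:
    """Sketch of Biased Constraint Demotion: build a stratified
    ranking, placing Markedness constraints as high as possible."""
    remaining = set(constraints)
    strata: List[List[str]] = []
    while remaining:
        # Constraints that prefer no loser in any unexplained pair
        placeable = [c for c in remaining
                     if all(p.get(c, 'e') != 'L' for p in pairs)]
        # Bias: use Markedness constraints for this stratum if any qualify
        markedness = [c for c in placeable if constraints[c] == 'M']
        stratum = markedness if markedness else placeable
        if not stratum:
            raise ValueError("winner-loser pairs are inconsistent")
        strata.append(sorted(stratum))
        remaining -= set(stratum)
        # Discard pairs now explained by a placed winner-preferring constraint
        pairs = [p for p in pairs
                 if not any(p.get(c, 'e') == 'W' for c in stratum)]
    return strata
```

On data where the winner satisfies Markedness (e.g. a deleted coda), the bias keeps Markedness over Faithfulness, yielding the restrictive ranking; only an error in which the winner violates Markedness forces a demotion below the relevant Faithfulness constraint.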
Proceedings of the 25th West Coast Conference on Formal Linguistics
edited by Donald Baumer, David Montero, and Michael Scanlon