Whamit!

The Weekly Newsletter of MIT Linguistics

MIT Linguistics Colloquium 4/9 - Elliott Moreton

Speaker: Elliott Moreton (University of North Carolina, Chapel Hill)
Time: Friday, April 9, 2010, 3:30pm-5pm
Location: 32-141
Title: Connecting paradigmatic and syntagmatic simplicity bias in phonotactic learning

Phonotactic patterns are easier to learn in the lab when they are simple and systematic in terms of phonetic features (e.g., LaRiviere et al. 1974, Saffran & Thiessen 2003, Kuo 2009, Wilson 2003, Moreton 2008). This is true in two ways: a category contrast is easier if it is defined by possession of a specific feature (paradigmatic simplicity, e.g., [p t k]/[b d g] rather than [p d k]/[b t g]), and also if it is characterized by within-stimulus dependencies between instances of the same feature rather than of different features (syntagmatic simplicity, e.g., height harmony rather than height-voice correlation). Both biases are important to linguists because of their possible impact on natural-language typology. This talk presents evidence for syntagmatic simplicity bias, and discusses the relationship between paradigmatic and syntagmatic simplicity bias, in connection with theories of general human and non-human category learning, and of phonotactic pattern learning.
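The paradigmatic contrast between the two class pairings can be made concrete with a small sketch. This is purely illustrative: the feature table below encodes standard phonological voicing values, not anything specific to the talk.

```python
# Illustrative [voice] values for the six stops in the abstract's example.
# (Standard phonological assumptions; not taken from the talk itself.)
VOICE = {"p": 0, "b": 1, "t": 0, "d": 1, "k": 0, "g": 1}

def defined_by_voice(class_a, class_b):
    """True if the single feature [voice] suffices to separate the classes."""
    voices_a = {VOICE[s] for s in class_a}
    voices_b = {VOICE[s] for s in class_b}
    return len(voices_a) == 1 and len(voices_b) == 1 and voices_a != voices_b

simple_contrast = ({"p", "t", "k"}, {"b", "d", "g"})   # paradigmatically simple
mixed_contrast  = ({"p", "d", "k"}, {"b", "t", "g"})   # not definable by one feature
```

Here `defined_by_voice(*simple_contrast)` holds, while `defined_by_voice(*mixed_contrast)` does not: the mixed classes each contain both voiced and voiceless stops, so no single value of [voice] characterizes either class.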

Although paradigmatic simplicity bias is consistent with what is known about human category learning in other domains (Shepard et al. 1961, Nosofsky et al. 1994), syntagmatic simplicity bias has not been similarly addressed. Paradigmatic simplicity bias in non-linguistic domains can be accounted for by error-driven learning in which constraints compete for influence on the basis of how well they explain unexpected data (the “delta rule”, Gluck & Bower 1988). The same learning rule is used in Maximum Entropy (Jaeger 2004), Harmonic Grammar (Boersma & Pater 2008), and Stochastic OT (Boersma 1997, Boersma & Hayes 2001), resulting in the same bias (Pater et al. 2008).
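The error-driven update can be sketched in a few lines. This is a minimal Harmonic Grammar-style illustration of the general delta-rule idea the abstract describes; the constraint set, violation vectors, and learning rate are our own illustrative assumptions, not the talk's model.

```python
# Minimal sketch of delta-rule (error-driven) weight learning in a
# Harmonic Grammar setting. Constraints and data are illustrative only.

def harmony(weights, violations):
    """Harmony of a candidate: negative weighted sum of constraint violations."""
    return -sum(w * v for w, v in zip(weights, violations))

def delta_update(weights, winner, loser, rate=0.1):
    """If the grammar does not yet prefer the observed winner, shift weight
    toward the constraints that the losing competitor violates:
    w_i += rate * (loser_i - winner_i), clamped at zero."""
    if harmony(weights, winner) <= harmony(weights, loser):
        weights = [max(0.0, w + rate * (l - v))
                   for w, v, l in zip(weights, winner, loser)]
    return weights

# Two hypothetical constraints: a simple one the winner satisfies,
# and a complex one the winner violates.
weights = [0.0, 0.0]
winner = [0, 1]   # observed form's violation profile
loser  = [1, 0]   # competitor's violation profile
for _ in range(50):
    weights = delta_update(weights, winner, loser)
```

After training, the constraint that explains the unexpected data (the one violated by the loser but not the winner) has gained weight, so the grammar now assigns the winner higher harmony than the loser.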

These results can be extended to account for syntagmatic simplicity bias, *if* there is a guarantee that the constraint set provides more-general constraints only for featurally-simpler within-stimulus dependencies. But evidence from both the lab and natural language suggests that constraints can also be induced from phonological data (Hayes et al. 2009). This talk will present a model of supervised phonotactic learning in which constraint induction is restricted by Feature-Geometric constraint schemas that support general constraints only for featurally-simple between- and within-stimulus dependencies, while still allowing great flexibility in the formulation of constraints. The model implements the delta rule in Harmonic Grammar as an evolutionary competition among constraints, which reproduce with variation and selection, so that constraint induction and ranking (weighting) happen simultaneously. The model correctly predicts superior acquisition of syntagmatically- and paradigmatically-simple patterns. Discussion will focus on alternative models of category learning and phonotactic learning.
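The evolutionary-competition idea can be caricatured as follows. This sketch is a generic genetic-algorithm loop under our own simplifying assumptions (a "constraint" is just a pair of linked features, fitness is a toy function rewarding fit to a height-harmony pattern); it is not the model presented in the talk, only an illustration of induction and selection operating together.

```python
# Schematic sketch: constraints reproduce with variation and are selected
# by how well they fit the data. All representations here are hypothetical.
import random

FEATURES = ["voice", "height", "place"]

def mutate(constraint):
    """Reproduction with variation: copy a constraint, altering one feature."""
    mutant = list(constraint)
    mutant[random.randrange(2)] = random.choice(FEATURES)
    return tuple(mutant)

def evolve(fitness, pop_size=20, generations=30):
    """Keep the better-fitting half of the constraint pool each generation
    and refill the pool with their mutated copies."""
    population = [("height", "height")] + \
                 [(random.choice(FEATURES), random.choice(FEATURES))
                  for _ in range(pop_size - 1)]
    for _ in range(generations):
        survivors = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        population = survivors + [mutate(c) for c in survivors]
    return population

# Toy fitness: a same-feature (syntagmatically simple) height constraint
# fits a height-harmony pattern; mixed-feature constraints do not.
fitness = lambda c: 1.0 if c == ("height", "height") else 0.0
final_pool = evolve(fitness)
```

Because selection and copying-with-mutation happen in the same loop, the pool's composition (which constraints exist) and its implicit weighting (how many copies each has) are settled together, which is the abstract's point about induction and ranking happening simultaneously.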