The Weekly Newsletter of MIT Linguistics

Phonology Circle - Edward Flemming, Adam Albright (10/1); Filipe Kobayashi (10/3)

This week we will have two meetings of Phonology Circle: Monday (10/1) and Wednesday (10/3). The Wednesday meeting will be held in the usual time slot (5pm-6:30pm) and location (8th floor seminar room). The Monday meeting will be held 12:30pm-2pm in the 4th floor seminar room. Details are below.

Monday meeting – two poster presentations:
Poster 1: Edward Flemming. Title: Systemic markedness in sibilant inventories (click here for abstract)
Poster 2: Adam Albright. Title: English vowel reduction is conditioned by duration, not stress (click here for abstract)
Date/Time: Monday, October 1, 12:30-2pm
Location: 32-D461 (4th floor seminar room)

Wednesday meeting – discussion of a paper:
Leader of discussion: Filipe Hisao de Salles Kobayashi (MIT)
Title: Hayes and Wilson’s (2008) A Maximum Entropy Model of Phonotactics and Phonotactic Learning
Date/Time: Wednesday, October 3, 5:00-6:30pm
Location: 32-D831

Abstract: The study of phonotactics is a central topic in phonology. We propose a theory of phonotactic grammars and a learning algorithm that constructs such grammars from positive evidence. Our grammars consist of constraints that are assigned numerical weights according to the principle of maximum entropy. The grammars assess possible words on the basis of the weighted sum of their constraint violations. The learning algorithm yields grammars that can capture both categorical and gradient phonotactic patterns. The algorithm is not provided with constraints in advance, but uses its own resources to form constraints and weight them. A baseline model, in which Universal Grammar is reduced to a feature set and an SPE-style constraint format, suffices to learn many phonotactic phenomena. In order for the model to learn nonlocal phenomena such as stress and vowel harmony, it must be augmented with autosegmental tiers and metrical grids. Our results thus offer novel, learning-theoretic support for such representations. We apply the model in a variety of learning simulations, showing that the learned grammars capture the distributional generalizations of these languages and accurately predict the findings of a phonotactic experiment.
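The scoring idea at the heart of the abstract — a word's well-formedness is determined by the weighted sum of its constraint violations — can be sketched in a few lines. The constraints and weights below are purely illustrative inventions, not the grammar Hayes and Wilson learn; in their model the constraints are discovered automatically and the weights are fit by maximum entropy.

```python
import math

# Hypothetical constraints: each maps a word (a list of segments)
# to a count of violations. These are made up for illustration.
def no_word_final_voiced_obstruent(word):
    return 1 if word and word[-1] in {"b", "d", "g", "z"} else 0

def no_sibilant_cluster(word):
    sibilants = {"s", "z"}
    return sum(1 for a, b in zip(word, word[1:])
               if a in sibilants and b in sibilants)

constraints = [no_word_final_voiced_obstruent, no_sibilant_cluster]
weights = [2.0, 3.5]  # illustrative; the model learns these by maximum entropy

def harmony(word):
    # Weighted sum of constraint violations
    return sum(w * c(word) for w, c in zip(weights, constraints))

def maxent_score(word):
    # Unnormalized well-formedness: higher = more acceptable
    return math.exp(-harmony(word))

print(maxent_score(["b", "l", "i", "k"]))  # no violations -> exp(0) = 1.0
print(maxent_score(["b", "l", "i", "g"]))  # one violation -> exp(-2.0), lower
```

A violation-free word gets the maximum score of 1; each weighted violation multiplies the score down exponentially, which is how the model captures gradient as well as categorical phonotactic patterns.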