**Speaker:** Edward Flemming (MIT)

**Title:** MaxEnt vs. Noisy Harmonic Grammar

**Time:** Monday, October 26th, 5pm – 6:30pm

**Abstract:** MaxEnt grammars have become the tool of choice for analyzing phonological phenomena involving variation or gradient acceptability. MaxEnt is a probabilistic form of Harmonic Grammar in which harmony scores (sums of weighted constraint violations) of candidates are mapped onto probabilities (Goldwater & Johnson 2003). However, there is a competing proposal for deriving probabilities from Harmonic Grammars: Noisy Harmonic Grammar (NHG, Boersma & Pater 2016), in which variation is derived by adding random ‘noise’ to constraint weights. NHG has a variant, censored NHG, in which noise is prevented from making constraint weights negative.
All of these grammar models can be formulated as making the outputs of a harmonic grammar random by adding random noise to the harmonies of candidates; the models are differentiated by the distribution of this noise. This formulation provides a common frame for analyzing and comparing their properties. The comparison reveals a basic difference between the models: in MaxEnt, the relative probability of two candidates depends only on the difference in their harmony scores, whereas in NHG it also depends on the number of unshared violations incurred by the two candidates. This difference leads to testable predictions, which are evaluated against data on the variable realization of schwa in French (Smith & Pater 2020). The evaluation turns out to have interesting complications, but ultimately provides some support for MaxEnt over censored NHG, while both of these models clearly outperform regular NHG on this data set.
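The contrast described above can be sketched computationally. In this illustrative Python sketch (not from the talk; the weights, candidates, and noise parameter are invented for the example), MaxEnt converts harmony scores to probabilities with a softmax, while regular NHG perturbs each constraint weight with Gaussian noise and picks the best-harmony candidate. Two candidate pairs with the same harmony difference but different numbers of unshared violations receive identical MaxEnt probabilities, yet different NHG output rates, because the noise variance scales with the unshared violations.

```python
import math
import random

def maxent_probs(weights, violations):
    """MaxEnt: P(candidate) proportional to exp(-harmony),
    where harmony = sum of weighted violations (lower is better)."""
    harmonies = [sum(w * v for w, v in zip(weights, cand))
                 for cand in violations]
    exps = [math.exp(-h) for h in harmonies]
    z = sum(exps)
    return [e / z for e in exps]

def nhg_sample(weights, violations, sigma=1.0, rng=random):
    """Regular NHG: add Gaussian noise to each constraint weight,
    then return the index of the minimum-harmony candidate."""
    noisy = [w + rng.gauss(0.0, sigma) for w in weights]
    harmonies = [sum(w * v for w, v in zip(noisy, cand))
                 for cand in violations]
    return min(range(len(violations)), key=harmonies.__getitem__)

# Pair 1: loser has 2 violations of a single constraint (harmony diff = 2).
# Pair 2: loser has 1 violation each of two constraints (same diff = 2).
pair1 = ([1.0], [[2], [0]])
pair2 = ([1.0, 1.0], [[1, 1], [0, 0]])

# MaxEnt assigns the same probabilities to both pairs (same harmony diff).
print(maxent_probs(*pair1), maxent_probs(*pair2))

# NHG does not: the loser's win rate differs across the two pairs.
rng = random.Random(0)
n = 20000
rate1 = sum(nhg_sample(*pair1, rng=rng) == 0 for _ in range(n)) / n
rate2 = sum(nhg_sample(*pair2, rng=rng) == 0 for _ in range(n)) / n
print(rate1, rate2)
```

In pair 1 the noisy harmony difference is 2 + 2ε with a single noise term ε, while in pair 2 it is 2 + ε₁ + ε₂, which has smaller variance; so under NHG the single-constraint loser wins more often, even though MaxEnt treats the pairs identically.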