Issue of Monday, October 26th, 2020

Experimentalist Meeting 10/30 - Cindy Torma (MIT)
Speaker: Cindy Torma (MIT)
Title: Prospective New Experiment
Time: Friday, October 30th, 2pm - 3pm
Abstract: Cindy will be discussing her ideas for a new language acquisition experiment and looking for feedback. Come and give your input on what might become the Language Acquisition Lab’s next new experiment!

Syntax Square 10/27 - Elise Newman (MIT)
Speaker: Elise Newman (MIT)
Title: Revisiting UTAH: an informal discussion! (Part 2)
Time: Tuesday, October 27th, 1pm - 2pm
Abstract: Put your lexical semantics hats on and come discuss argument structure with me! Some questions I have been pondering include:
- What do “theta positions” mean in the Y-model?
- Do heads “assign” theta roles in the syntax?
- What is the right division of labor between syntax/semantics when it comes to explaining structural generalizations about arguments?
I surely won’t have all the answers, but here are some readings I have found helpful for those who want to do some homework: Levin & Rappaport Hovav (2005), Harley (2011), Marantz (2013), and the recent round table discussion featuring Gillian Ramchand, Heidi Harley, and Artemis Alexiadou (https://www.youtube.com/watch?v=1_bjHrMunWo)!
Phonology Circle 10/26 - Edward Flemming (MIT)
Speaker: Edward Flemming (MIT)
Title: MaxEnt vs. Noisy Harmonic Grammar
Time: Monday, October 26th, 5pm - 6:30pm
Abstract: MaxEnt grammars have become the tool of choice for analyzing phonological phenomena involving variation or gradient acceptability. MaxEnt is a probabilistic form of Harmonic Grammar in which harmony scores (sums of weighted constraint violations) of candidates are mapped onto probabilities (Goldwater & Johnson 2003). However, there is a competing proposal for deriving probabilities from Harmonic Grammars: Noisy Harmonic Grammar (NHG, Boersma & Pater 2016), in which variation is derived by adding random ‘noise’ to constraint weights. NHG has a variant, censored NHG, in which noise is prevented from making constraint weights negative.
All of these grammar models can be formulated as making the outputs of a harmonic grammar random by adding random noise to the harmonies of candidates. The models are differentiated by the nature of the distribution of this noise. This formulation provides a common frame for analyzing and comparing their properties. The comparison reveals a basic difference between the models: in MaxEnt, the relative probability of two candidates depends only on the difference in their harmony scores, whereas in NHG it also depends on the number of unshared violations incurred by the two candidates. This difference leads to testable predictions which are evaluated against data on variable realization of schwa in French (Smith & Pater 2020). The evaluation turns out to have interesting complications, but ultimately provides some support for MaxEnt over censored NHG, while both of these models clearly out-perform regular NHG on this data set.
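The contrast the abstract draws can be made concrete in a small simulation. The sketch below is not from the talk: the constraints, weights, and candidates are invented for illustration. It computes MaxEnt probabilities as a softmax over harmony scores (Goldwater & Johnson 2003) and estimates NHG probabilities by Monte Carlo, adding Gaussian noise to the constraint weights on each evaluation (Boersma & Pater 2016).

```python
import math
import random

# Hypothetical two-candidate tableau: violation counts per constraint.
# Constraint names and weights are illustrative only.
weights = {"Max": 3.0, "Dep": 2.0, "NoCoda": 1.5}
candidates = {
    "cand_a": {"Max": 0, "Dep": 1, "NoCoda": 0},
    "cand_b": {"Max": 1, "Dep": 0, "NoCoda": 1},
}

def harmony(viols, w):
    """Harmony = negative sum of weighted constraint violations."""
    return -sum(w[c] * n for c, n in viols.items())

def maxent_probs(cands, w):
    """MaxEnt: probabilities are a softmax over harmony scores."""
    hs = {name: harmony(v, w) for name, v in cands.items()}
    z = sum(math.exp(h) for h in hs.values())
    return {name: math.exp(h) / z for name, h in hs.items()}

def nhg_sample(cands, w, sd=1.0, n=10000, censored=False):
    """Noisy HG: on each evaluation, perturb every weight with Gaussian
    noise, then pick the candidate with the highest harmony.
    With censored=True, noisy weights are clipped at zero."""
    wins = {name: 0 for name in cands}
    for _ in range(n):
        noisy = {c: wt + random.gauss(0, sd) for c, wt in w.items()}
        if censored:
            noisy = {c: max(0.0, wt) for c, wt in noisy.items()}
        best = max(cands, key=lambda name: harmony(cands[name], noisy))
        wins[best] += 1
    return {name: k / n for name, k in wins.items()}
```

The sketch also shows the difference described above: in `maxent_probs` the odds of `cand_a` over `cand_b` depend only on the difference in their harmonies, whereas in `nhg_sample` noise on a constraint that both candidates violate equally cancels out, so only the *unshared* violations contribute variance to the outcome.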
LingLunch 10/29 - Rafael Abramovitz (MIT)
Speaker: Rafael Abramovitz (MIT)
Title: Deconstructing Inverse Case Attraction
Time: Thursday, October 29th, 12:30pm - 1:50pm
Abstract: Inverse case attraction (ICA) in relative clauses (RC) is a phenomenon whereby the head of a seemingly externally-headed RC is marked with the case assigned to the gap inside the RC. This phenomenon has puzzled linguists and grammarians for the better part of the last two millennia, and has given rise to a number of unsatisfactory analyses. Based on data from Koryak, in this talk (practice for NELS) I’ll propose a new analysis of ICA whereby it involves the head of an RC surfacing in a left-peripheral position within the RC. This analysis is supported by data from all languages with ICA for which sufficient data exists in the literature, suggesting that it may provide a unified analysis for all instances of ICA across languages. Further, the type of RC that I posit (ex-situ but internally-headed) has been extensively argued to exist in the Gur languages (most prominently Buli) by Ken Hiraiwa and colleagues. ICA, I argue, therefore arises in languages with both Gur-style RCs and case-marked relative pronouns.
LFRG 10/28 - Yadav Gowda
Speaker: Yadav Gowda
Title: On the Existential Perfect reading of statives
Abstract: English perfect constructions involving stative predicates such as be in the attic are ambiguous between an (E)xistential-perfect and (U)niversal-perfect reading. The U-perfect reading is the most readily available reading, but the E-perfect reading can be forced with certain modifiers, such as three times.
(1)
- I have been in the attic since I moved in. (U-perfect, E-perfect)
- I have (only) been in the attic three times since I moved in. (E-perfect)
Previous accounts of such sentences (e.g. Mittwoch 1988, Giannakidou 2003) have suggested that E-perfect readings involve coercion of the stative predicate into an eventive predicate.
In this talk, I will argue that these sentences do not involve any eventive structure, and that such readings can straightforwardly be derived using the operation of topological closure, which Giorgi and Pianesi (1997, 2000) argue is integral to deriving the meaning of perfective verb forms.
As additional support for this account, I will provide a compositional semantics of Kannada sentences like (2), which exhibit a peculiar (and as far as I know, unattested) reading which is a combination of the E-perfect and U-perfect readings.
(2) 2001-rinda ī kōṇe-yalli mūru bari iddāne.
2001-ABL this room-LOC three times be.PRES.1SG
Lit: “I am in this room three times from 2001.”
≈ “This is the third time I have been in this room since 2001.”
I will argue that such a reading can be derived through topological closure, but cannot be derived through eventive coercion.