Archive for December 5th, 2016
Speaker: Adam Albright (MIT)
Title: Why do speakers try to predict the unpredictable?
Time: Monday, December 5th, 5:00–6:30pm
Generative phonology traditionally distinguishes two types of feature values: (1) unpredictable, or contrastive, values, and (2) contextually predictable values. Unpredictable values are listed in the lexicon as arbitrary properties of morphemes, whereas predictable values are assigned or enforced by the grammar. However, statistical studies of lexicons have revealed that contrastive feature values are often surprisingly predictable. For example, Ernestus and Baayen (2003) observed that although stem-final obstruent voicing is nominally contrastive in Dutch, it is actually fairly predictable based on the obstruent’s place and continuancy, and the preceding vowel’s quality. Furthermore, speakers are aware of this predictability, and can use it to judge likely voicing values for stem-final obstruents in nonce words. Similar results have been found for contrasts in numerous other languages, including Korean stem-final continuancy and laryngeal features (Jun 2010), Spanish mid vowel vs. diphthong contrasts (Albright et al. 2001), and others. These results support a model in which phonological grammars attempt to predict at least some contrastive feature values.
In this study, we ask why there is this redundancy between the grammar and the lexicon. One possibility is data compression (Rasin and Katzir 2015, and others); if the grammar can exploit statistical asymmetries to predict certain feature values, they need not be listed in the lexicon. Maximal compression is achieved if the grammar supplies all predictable feature values. An alternative possibility is that values must be predicted when there is neutralization. In Dutch, stem-final obstruents undergo final devoicing, so speakers must sometimes guess the voicing of a stem-final obstruent, based on the neutralized singular form. Under this account, the grammar must supply only those feature values that are neutralized in the singular. We test the predictions of these accounts by comparing the predictability of feature values that are subject to neutralization in different languages. We compare place, continuancy, and laryngeal contrasts in Korean, Dutch, and English. In English, all three features contrast word-finally (with numerous specific restrictions), whereas in Dutch, voicing is neutralized, and in Korean, continuancy and laryngeal features are both neutralized in this position.
In order to test predictability, we extracted the most frequent items in each language (5018 Korean nouns; 5151 Dutch nouns; 5085 English words). We then trained the Minimal Generalization Learner (Albright and Hayes 2002) to predict the values of various features based on the remaining features of the segment in question and the preceding context. We then wug-tested the resulting grammars to determine whether feature values become more predictable at lower frequencies. The reasoning is that, as with morphological regularity, low-frequency words should be less able to sustain exceptionality, and should therefore reflect grammatical preferences. The results show that although overall predictability does tend to be higher for neutralizing features, neutralizing and non-neutralizing features both become more predictable at lower frequencies, as predicted by the data compression model. Neutralization may increase the likelihood that a speaker will need to use their grammar to predict an ‘unpredictable’ feature, but it is not a prerequisite to learning and enforcing such generalizations.
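The core idea — estimating how predictable a nominally contrastive feature is from its context — can be illustrated with a toy sketch. This is not the Minimal Generalization Learner itself; it uses a hypothetical mini-lexicon and simple conditional frequencies, standing in for the much richer generalizations the actual model learns:

```python
from collections import defaultdict

# Hypothetical mini-lexicon (invented for illustration).
# Each entry: (preceding vowel, whether the stem-final obstruent is voiced).
lexicon = [
    ("a", True), ("a", True), ("a", False),
    ("i", False), ("i", False), ("i", False),
    ("o", True), ("o", False),
]

def voicing_probabilities(data):
    """Estimate P(voiced | preceding vowel) by relative frequency."""
    counts = defaultdict(lambda: [0, 0])  # vowel -> [voiced count, total]
    for vowel, voiced in data:
        counts[vowel][0] += int(voiced)
        counts[vowel][1] += 1
    return {v: voiced / total for v, (voiced, total) in counts.items()}

probs = voicing_probabilities(lexicon)

def predict_voicing(vowel):
    """'Wug test': guess the majority voicing value for a nonce stem
    ending in the given vowel plus an obstruent."""
    return probs.get(vowel, 0.5) >= 0.5

print(probs["i"])            # 0.0: voicing is fully predictable after /i/ here
print(predict_voicing("a"))  # True: voiced is the majority value after /a/
```

In a setup like this, a contrast that is neutralized in some surface form and a contrast that is merely statistically skewed are handled identically, which is the intuition behind testing the data compression account against the neutralization account.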
English someone can give rise to a speaker-ignorance implicature (Somebody arrived late → the speaker doesn’t know who). Some of its analogues in other languages, e.g. German ‘irgendein’ or Russian ‘kto-to’, have conventionalized this implicature into an inference: the ignorance inference has become part of their semantics. It is these elements that Aloni and Port call epistemic indefinites (EIs).
The ignorance inference is the main focus of Aloni and Port’s papers. They propose an analysis that derives this inference using Aloni’s (2001) theory of concepts and conceptual covers, arguing that EIs represent a special case of domain widening.
Enoch Aboh (University of Amsterdam) will be visiting our department this week and will give two talks.
- The role of vulnerable interfaces in language change: the case of the C-, and D-systems
- Date: Wednesday, December 7
- Time: 2—5pm
- Location: TBA
- Reading: Chapters 5–6 of Aboh (2015), The Emergence of Hybrid Grammars: Language Contact and Change
- The emergence of serial verb constructions
- Date: Friday, December 9
- Time: 1:30—3:15pm
- Location: TBA
- Reading: Chapter 7 of Aboh (2015)
For more information, please contact Michel DeGraff (firstname.lastname@example.org).
Speaker: Jenneke van der Wal (Harvard)
Title: The AWSOM and RANDOM in Bantu object marking
Time: Thursday, December 8, 12:30–1:50pm
Location: 32-D461
Abstract:
Many Bantu languages mark objects on the verb by a prefix agreeing in noun class:

(1) N-a-va-et-eaa anca mUhUmba. [Nyaturu, Hualde 1989]
1SG.SM-PAST-2OM-bring-APPL 2.girls 1.boy
‘I brought the girls a boy.’
However, object marking (OM) shows fascinating microvariation across Bantu, along the following parameters:
1. the nature of the OM: doubling / non-doubling (OM and DP can co-occur in the same domain in Nyaturu = doubling);
2. the behaviour in ditransitives: asymmetric / symmetric (only the benefactive and not the theme can be OM-ed in Nyaturu = asymmetric);
3. the number of object markers allowed: one / two / multiple (Nyaturu is restricted to one).
This talk maps the parameter settings of 50+ Bantu languages, revealing two gaps:

Asymmetry Wants Single Object Marking correlation (AWSOM)
→ Almost no language has multiple object markers that are doubling.

Relation between Asymmetry and Non-Doubling Object Marking (RANDOM)
→ No language has non-doubling asymmetrical object marking.
I argue that these gaps are in fact not random, but can be understood as obligatory marking of salience, in the form of a [Person] feature in either the non-clausal domain (doubling) or the clausal domain (symmetry).
Heidi Harley (University of Arizona, MIT PhD ‘95) will be visiting the department this week. In addition to her Colloquium talk on Friday, she will be offering a mini-course on head movement. Details below:
Speaker: Heidi Harley (University of Arizona)
Title: Report from the bleeding edge of the head movement debate
Time: Wednesday, December 7th and Thursday, December 8th, 2016, 5:00-6:30 pm
Place: 32-124 (Wed), 32-144 (Thurs)
I will review and discuss various models of head-movement and the evidence that has been brought to bear on them, including but not limited to conflation (Hale & Keyser 2002, Harley 2004), remnant movement (Zeller 2013), (phrasal movement +) m-merger (Matushansky 2004, Harizanov 2014, Harley and Folli ms), traditional head-adjunction (Keine and Bhatt 2016), and combinations of different mechanisms (Harley 2013, Gribanova and Harizanov 2016 handout). In doing so, I’ll discuss the idea that head-movement does or does not have syntacticosemantic (LF) effects, and if it does, what they are and why, borrowing heavily from a presentation by McCloskey, including some discussion of LaCara (2016), Hartman (2011), Gribanova (ms), Lechner (2007), and Keine and Bhatt (2016).
A reading packet is attached for people to browse at will, but I’m not going to assume attendees will have read any of it. The ones I most highly recommend for the interested are Keine and Bhatt (2016) on German verb clusters and Zeller (2013) on Zulu relatives; Harizanov (2014) on Bulgarian clitics and Gribanova (2016 ms) on Russian ellipsis and polarity-licensing are interesting too. Not to be discussed, but included because they are mind-blowingly weird, are the results of Lipták (2013, 2016 handout) on the (failure of the) verbal identity condition on VPE in Hungarian.
Speaker: Heidi Harley (University of Arizona)
Title: We don’t need word-internal phase boundaries (for Hiaki)
Time: Friday, December 9th, 3:30-5:00 pm
Hiaki verbs exhibit what looks like a word-internal phase boundary, with some, more derivational, affixes attaching to a ‘bound’ stem, which appears only with suffixal material attached, and other, inflectional, affixes attaching to a ‘free’ stem, which can also appear unsuffixed; a classic stem-attaching vs. word-attaching dichotomy. The mirror-principle boundary for stem-attaching suffixes is located more or less at VoiceP: only inflectional suffixes can attach outside the passive Voice marker, only derivational ones can attach inside it, and there can be only one Voice marker per verb complex. However, there are problems identifying the bound-stem/free-stem boundary with Voice, particularly having to do with the existence of embedded external arguments within the bound-stem complex in causatives and related forms.
In fact, I will argue that the correct analysis is in a sense precisely the opposite. The particular form taken by bound stems shows evidence of word-level morphophonological processes, such as word-final fortition of the voiceless affricate, and echo vowels that appear to extend monomoraic stems to satisfy minimal word requirements (or actually probably exhaustive footing requirements). The ‘bound’ stems which appear to the left of Voice morphology behave like independent morphophonological words with respect to these constraints. The ‘free’ stems, in contrast, all have a recently detected morphemic final vowel on them.
I propose that the whole complex verb word is simply a cluster of verbs, lined up on the right by the head-final nature of Hiaki. This cluster of verbs is subject to very quotidian inflectional requirements: the highest (rightmost) [+V] head in the domain is attracted to Voice and T (and sometimes C). It is that head-movement process which creates the ‘free’ forms. That is, the ‘bound’ forms are free, and the ‘free’ forms are all inflected; the only process we need to appeal to is the usual expectation that the highest eligible head in a verbal complementation sequence is the one that moves and inflects. The entire complex is pronounced (and spelled) as a unit, perhaps due to postsyntactic Morphological Merger, perhaps due to the prosodic rules of the language.
In short, the syntactic picture presented by the apparently complex agglutinative Hiaki verb word is actually most appropriately analyzed in the same way as auxiliary and light verb complexes in left-headed languages. No level-ordering-type of cyclicity hypothesis involving word-internal phase boundaries is motivated by this data. This is good, because the notion of a word-internal phase boundary in a structure created by syntactic head-movement is somewhat problematic, technically speaking. I’ll also exhibit cases from Cupeño and maybe Korean that seem to require analysis in similar terms.