Whamit!

The Weekly Newsletter of MIT Linguistics

Issue of Monday, October 18th, 2021

Ling-Lunch 10/21 — Ido Benbaji (MIT)

This Thursday we have another Ling-Lunch talk, presented by Ido Benbaji. Please find the details below.
 
Speaker: Ido Benbaji (MIT)
Title: An argument against V-stranding VP-ellipsis from only in polar questions
Date: Thursday, 10/21
Time: 12:30–1:50pm
Location: 32-D461

Abstract: This paper contributes to the debate over the (non-)existence of verb-stranding VP-ellipsis (VSVPE), providing a new argument against its existence from the behavior of focus particles in questions. Polar questions in Hebrew (as in many other languages (Holmberg 2016)) can be answered affirmatively by echoing the verb in the question. Hebrew verb-echo answers (VEAs) are often analyzed as declarative sentences whose arguments have been deleted by a combination of VSVPE and subject pro-drop (Doron 1990). We show that VEAs are unacceptable as answers to polar questions with the focus particle only, and argue that this remains a mystery on a VSVPE account, as the presence of only is compatible with both V-to-T movement and VP-ellipsis (the ingredients required for VSVPE). We then show that the data can be straightforwardly accounted for if VEAs are derived via Argument Ellipsis (AE); i.e. elision of the verb’s object based on parallelism with a linguistic antecedent, without verb-movement (which has been proposed for Hebrew in Landau 2018).
 
The presentation will be given in person; however, if you cannot make it to Stata on Thursday, you will be able to join via Zoom. Please contact the Ling-Lunch organizer (kukhto@mit.edu) for the Zoom link.

Linguistics and Social Justice Seminar 10/19 — Jo-Anne Ferreira

You are invited to participate in our discussion this week, Tuesday, October 19, 2-5pm EST, on “Linguistics and Social Justice: Language, Education & Human Rights” (MIT Linguistics, Graduate Seminar, 24.S96). Please contact Michel <degraff@mit.edu> for information about the Zoom link and readings. NB: We are committed to creating an inclusive and accessible environment in our seminar. If you need accommodations or accessibility assistance in order to fully participate, please email degraff@MIT.EDU so that we can work out adequate arrangements.

This Tuesday, Jo-Anne Ferreira will lead the discussion on:

Resistance and Revitalisation: French Creole in Trinidad & Tobago and Venezuela

Venezuela and Trinidad share a maritime boundary in the Gulf of Paria, and are only seven miles or eleven kilometres apart at the nearest point. The Gulf area has been a point of linguistic exchange between the two since pre-Columbian times, with speakers of Amerindian (especially Warao, to this day), European (Spanish, French, and English), and Caribbean Creole (French-lexified and English-lexified) languages going back and forth. Neither was ever colonised by France, yet both share a French Creole, and speakers and advocates in both spaces have been attempting to overturn past wrongs against sociolinguistically oppressed populations.

In multilingual but French- and French Creole-dominated Trinidad of the 19th century, speakers of French, and by extension French Creole, were the targets of a “full‐scale policy of ‘Anglicisation’” developed and implemented by the British government in the 1840s to govern and control Trinidad, which was seen as linguistically unruly (Brereton 1993: 37). French Creole was mostly ignored by Venezuela until the Chávez government’s attempts to document and protect minority languages and cultures (Indigenous, Creole, European), affording language rights to all.

This presentation will focus on French Creole in western Trinidad and eastern Venezuela (mostly Estado Sucre, where the Paria Peninsula is located, although French Creole is also spoken in El Callao in Estado Bolívar), on the acts of resistance that have led to the survival of this language in hostile spaces, and on recent and current efforts to save and revitalise the language in both places. I will discuss how an official English-only policy and an unofficial Spanish-only policy affected education in both places and represented a virulent and malevolent attack, with long-term effects, on the language rights and language justice of large sectors of two populations, and how revitalisation acts complement and fortify ancient acts of resistance against such injustice, planned or unplanned.

Tech industry workshop 10/20 — guest speaker David Q. Sun

Summary: tech industry workshop guest presentation — an NL engineering manager’s perspective
 
What: A short presentation on how voice assistants / Natural Language Understanding systems work and where different roles sit within an organization, followed by an open Q&A session
When: Wednesday 10/20, 2-3:30pm EST
Where: Zoom (event will be virtual; contact Hadas Kotek for details) + 32-D769
Who: Dr. David Sun is an engineering manager on the Siri Natural Language team in Apple’s AI/ML organization. His work leverages data science and machine learning to support the research, development, and implementation of natural language processing models that extend Siri’s understanding and functionality. Prior to Apple, he worked as a consultant in the San Francisco office of Monitor Government Venture Services (“Monitor 360”), the former political consulting practice of the Monitor Group, specializing in “narrative analysis & influencing”.
 
David received his Ph.D. in Systems Engineering from Penn, advised by Prof. Barry Silverman. His research interests include Network Science, Decision Theory, and the Agent-Based Simulation & Modeling approach to understanding the dynamics of coalitions in regions of conflict around the world.
 

MorPhun 10/20 - Luke Adamson (Harvard)

Speaker: Luke Adamson (Harvard)
Title: The locus of gender interpretation: A Reply to Yatsushiro and Sauerland (2006)
Time: Wednesday, October 20th, 5pm - 6:30pm

Abstract: Yatsushiro and Sauerland (2006) observe an ambiguity in German for a set of nouns with the feminine suffix -in, e.g. die beliebteste Politikerin, which can be interpreted (referring to a woman) as ‘the most popular female politician’ or ‘the most popular politician’. They suggest that the two interpretations should be derived through variable placement of an interpretable feature [FEM], with the latter interpretation derived when the noun’s gender is licensed through agreement with a higher instance of [FEM]. If correct, this type of agreement-based approach would have significant implications for the valuation of a noun’s gender. However, we provide four arguments against this approach by examining evidence from comparative deletion, nominal Right Node Raising, nP ellipsis, and intermediate scope, and we sketch an alternative semantic account of the ambiguity.

Phonology Circle 10/18 - Trevor Driscoll (MIT)

Speaker: Trevor Driscoll (MIT)
Title: Voicing as a Diagnostic of Foot Structure
Time: Monday, October 18th, 5pm - 6:30pm

Abstract: There is a substantial body of literature that indicates that fortition targets foot-initial position and lenition targets foot-medial and foot-final position (Pierrehumbert & Talkin 1992, Byrd 1994, Dilley et al. 1996, Cho & Keating 2001, Keating et al. 2003). Foot-medial consonants appear lenis due to the absence of fortition, or simply by virtue of being foot-medial. I argue that lenition processes can be used to determine whether a pair of syllables is parsed together in a foot much as fortition can be used to locate an initial foot boundary. This provides phonologists an additional tool to diagnose various aspects of foot structure that are not always readily identifiable by stress assignment.
Little is known about the metrical structure of Hidatsa, a Siouan language spoken in North Dakota. A recent phonetic study finds that words bear a single stress on a quantity-sensitive iamb at the left edge of the word (Metzler 2021).

(1)  Initial LL   meɁépi     ‘grinder’
     Initial LH   tsaɡáːɡa   ‘bird’
     Initial HL   máːtsu     ‘berry’
     Initial HH   kóːxaːti   ‘corn’

The remainder of the literature on Hidatsa makes no reference to foot structure whatsoever, and the stress data from Metzler are not particularly informative about feet beyond the initial iamb. It is necessary to turn to other cues to learn more about feet in Hidatsa.
Harris & Voegelin (1939) note that underlyingly plain stops and affricates become voiced intervocalically. Although all stops become voiced between vowels, the duration of voicing in intervocalically voiced stops is determined by a stop’s position in the foot; stops in foot-medial position are significantly more voiced than intervocalic stops in other prosodic environments:

(2)               Voicing (ms)   Voicing (%)   Fully Voiced (FV/Total)
     Initial LĹ   88             90            46/60
     Stray        72             69            18/53
     p-value      < .001         < .001        < .001

In addition to demonstrating that voicing interacts with feet, I show that a complete sketch of the metrical structure of Hidatsa can be given using voicing, with only limited assistance from more conventional indicators of foot structure such as stress.
Cues to foot structure other than stress are of particular interest in iambic languages. Kager (1993) and Hayes (1995) have famously debated whether asymmetrical iambs (LH) are grammatical, but foot typologies with and without (LH) make identical predictions for iambic stress. Foot-medial voicing in Hidatsa is able to distinguish between the two. Hayes’ foot typology predicts that there should be no contrast in voicing between LL and LH because both are acceptable iambs. The robust voicing found in foot-medial stops is absent in LH pairs, which suggests that LH is not a foot.

(3)           Voicing (ms)   Voicing (%)   Fully Voiced (FV/Total)
     LL       86             87            104/151
     LH       78             73            34/84
     p-value  .01            .001          < .001

These results provide evidence against the canonical iamb, in support of Kager’s typology of feet.
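
For readers who want to experiment with the reported counts, the following is a minimal Python sketch that compares the “Fully Voiced (FV/Total)” proportions in (2) and (3). It assumes Fisher’s exact test purely for illustration; the abstract does not state which statistical test produced its p-values, so this is not a reconstruction of the speaker’s analysis.

# Illustrative only: compare the "Fully Voiced (FV/Total)" counts reported in
# (2) and (3). Fisher's exact test is an assumption made for this sketch, not
# necessarily the test behind the p-values in the abstract.
from scipy.stats import fisher_exact

def compare_counts(label, fv_a, total_a, fv_b, total_b):
    """Run a 2x2 test: fully voiced vs. not fully voiced in two prosodic environments."""
    table = [[fv_a, total_a - fv_a],
             [fv_b, total_b - fv_b]]
    _, p = fisher_exact(table)
    print(f"{label}: p = {p:.4g}")

# (2): foot-medial stops (Initial LĹ) vs. stray intervocalic stops
compare_counts("Initial LL vs. Stray", 46, 60, 18, 53)

# (3): stops in LL pairs vs. stops in LH pairs
compare_counts("LL vs. LH", 104, 151, 34, 84)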

Colloquium 10/22 - Omer Preminger (UMD)

Speaker: Omer Preminger (UMD)
Title: Natural language without semiosis
Time: Friday, October 22nd, 3:30pm - 5pm
Location: Zoom (but those on campus can gather in 32-155 to attend the talk together)

Abstract: A traditional view holds that natural language is fundamentally semiotic, in that individual syntactic terminals are Saussurean “signs” (Saussure 1916; Hjelmslev 1943): they are the locus where individual units of form are paired with individual units of meaning.

It has always been clear, however, that more needs to be said to make such a view work. For one thing, it is likely that no two performances of the “same form” are literally identical. But even if we were to artificially abstract away from phonetic detail, there would still remain the issue of allomorphy, and, perhaps most pressingly, of alternations in which the resulting forms are not phonotactically predictable (e.g. the alternation of the Korean nominative marker between ‑i and ‑ka). On the meaning side, we must contend with things like systematic polysemy (cf. the artifactual and abstract senses of book in a sentence like This book is old and crumbling but will affect your life like no other). But it is quite widely assumed, in practice if not in theory, that given the right models of allomorphy and polysemy, a semiotic view of natural language can be salvaged. One can see this de facto assumption at work every time anyone asks, “What does the word (or morpheme) w in this language mean?” or “How do you say meaning m in this language?” These are questions that only make sense within a fundamentally semiotic framework. In other words, a common working assumption (if not a theoretical one) is that natural language is composed of signs after all; it’s just that the “forms” and “meanings” that are paired by these signs are more abstract than one might have initially thought—in a way that provides the necessary leeway to capture phenomena like allomorphy (up to and including suppletion) as well as polysemy.

In this talk, I present arguments that even this weaker semiotic characterization is incorrect. I argue that, with the possible exception of single-morpheme utterances (e.g. Ugh!), a proper model of the competence of a native speaker contains no pairings of form and meaning whatsoever. Instead, speaker competence involves: (i) an inventory of syntactic atoms, which are fully abstract (associated with neither form nor meaning); (ii) a list of mapping rules from sets of atoms to forms (“exponents”); (iii) a list of mapping rules from sets of atoms to meanings (“lexical meanings”). Importantly, lists (ii) and (iii) are disjoint objects; they have nothing to do with one another, except in the sense that the competence system associates derived structures (consisting of items from list (i)) with items from list (ii) as well as with items from list (iii). But the relation is necessarily indirect and mediated in this fashion.

It is worth noting that lists (ii) and (iii), in this proposed model, bear some resemblance to the “Vocabulary” and the “Encyclopedia” in Distributed Morphology (DM; see, e.g., Marantz 1997, and references therein). But DM is still a fundamentally semiotic theory: the unit associated with form, albeit in a context-sensitive way, is still the individual syntactic terminal; and the unit associated with meaning is again the individual syntactic terminal (again, with potential allowances for context-sensitivity, in particular when it comes to idiomaticity; see Harley 2014a,b). I present a collection of linguistic properties (some language-specific, and some quite general) that only make sense in light of a more radically non-semiotic model, one in which the relevant mappings are mappings from sets of syntactic terminals to units of form (“exponents”), and from sets of syntactic terminals to listed meanings (“lexical meanings”). I also show why a framework like Nanosyntax (Starke 2009, Caha 2019), which also maps sets of terminals to forms and meanings, falls short of these explanatory goals, due to its failure to properly dissociate syntax-form mappings from syntax-meaning mappings (cf. lists (ii) and (iii), above).
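
Purely as an illustration (not part of the talk), the three-component architecture described above can be sketched in a few lines of Python: a list of fully abstract atoms, a list of atom-set-to-form rules, and a separate, disjoint list of atom-set-to-meaning rules, with derived structures related to forms and to meanings only indirectly through the last two lists. Every atom name, entry, and function below is hypothetical and invented for this sketch.

# Toy illustration of the three disjoint components described in the abstract:
# (i) fully abstract syntactic atoms, (ii) rules mapping sets of atoms to forms
# ("exponents"), and (iii) rules mapping sets of atoms to meanings ("lexical
# meanings"). All names and entries are invented for illustration.

# (i) Inventory of syntactic atoms: neither form nor meaning is attached here.
ATOMS = {"ROOT_375", "n", "PL"}

# (ii) Atom-set-to-form rules. This list knows nothing about meanings.
EXPONENTS = {
    frozenset({"ROOT_375", "n"}): "book",
    frozenset({"PL"}): "-s",
}

# (iii) Atom-set-to-meaning rules, a separate object from (ii).
MEANINGS = {
    frozenset({"ROOT_375", "n"}): "BOOK (artifactual or abstract sense)",
    frozenset({"PL"}): "PLURAL",
}

def spell_out(structure):
    """Relate a derived structure (a sequence of atom sets built from (i)) to forms via (ii)."""
    return [EXPONENTS[frozenset(chunk)] for chunk in structure]

def interpret(structure):
    """Relate the same derived structure to meanings via (iii), independently of (ii)."""
    return [MEANINGS[frozenset(chunk)] for chunk in structure]

# Forms and meanings only "meet" through the derived structure itself:
derived = [{"ROOT_375", "n"}, {"PL"}]
print(spell_out(derived))   # ['book', '-s']
print(interpret(derived))   # ['BOOK (artifactual or abstract sense)', 'PLURAL']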