Whamit!

The Weekly Newsletter of MIT Linguistics

Issue of Monday, March 5th, 2018

CompLang 3/5 - Ishita Dasgupta (MIT)

Speaker: Ishita Dasgupta (MIT)
Title: Evaluating Compositionality in Sentence Embeddings
Date and time: Monday, March 5, 5:00-6:00pm
Location: 46-5156
Abstract: 

An important challenge for human-like AI is compositional semantics. Recent research has attempted to address this by using deep neural networks to learn vector-space embeddings of sentences, which then serve as input to other tasks. We present a new dataset for one such task, “natural language inference” (NLI), which cannot be solved using only word-level knowledge and instead requires some compositionality. We find that the performance of a state-of-the-art sentence embedding model (InferSent; Conneau et al. 2017) on our new dataset is poor. We analyze some of the decision rules learned by InferSent and find that they are largely driven by simple heuristics that are ecologically valid in its training dataset. Further, we find that augmenting training with our dataset improves test performance on our dataset without loss of performance on the original training dataset. This highlights the importance of structured datasets in better understanding and improving NLP systems.
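For readers outside NLP, here is a minimal sketch of the pipeline the abstract assumes, in Python. The feature construction [u; v; |u - v|; u * v] is the one InferSent’s classifier uses (Conneau et al. 2017); the toy vectors and names are invented for illustration.

    import numpy as np

    def nli_features(u: np.ndarray, v: np.ndarray) -> np.ndarray:
        """Combine premise embedding u and hypothesis embedding v into the
        feature vector [u; v; |u - v|; u * v] fed to the NLI classifier."""
        return np.concatenate([u, v, np.abs(u - v), u * v])

    # Toy 4-d vectors standing in for real 4096-d sentence embeddings.
    premise = np.array([0.1, 0.5, -0.2, 0.9])     # "The dog chased the cat."
    hypothesis = np.array([0.1, 0.4, -0.2, 0.8])  # "The cat chased the dog."
    features = nli_features(premise, hypothesis)
    print(features.shape)  # (16,); input to a 3-way softmax over
                           # {entailment, contradiction, neutral}

Note that two sentences differing only in word order can receive nearly identical embeddings under an insufficiently compositional encoder; that is exactly the failure mode a dataset of this kind probes.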

Syntax Square 3/6 - Colin Davis (MIT)

Speaker: Colin Davis
Title: Parasitic Gaps and the Structures of Multiple Movement
Date and time: Tuesday March 6, 1-2pm
Location: 32-D461
Abstract:

In this talk, I present work in progress on the structure of derivations in which multiple A’-movement chains overlap. These derivations show interesting complexities that do not (and could not) arise in derivations with only one A’-movement (Pesetsky 1982, Richards 1997). Towards deepening our understanding of this issue, I use Nissenbaum’s (2000) findings about parasitic gap licensing as a diagnostic for the multiple-specifier structures created by successive-cyclic movement through vP in these derivations.
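For readers unfamiliar with the diagnostic, a standard textbook illustration (not an example from the abstract):

(i) Which paper did you file __ without reading __?
(ii) *You filed the paper without reading __.

The second gap in (i) is “parasitic”: it survives only in the presence of the A’-movement that creates the first gap, as the ungrammaticality of (ii) shows. Nissenbaum (2000) ties this licensing to the moved phrase passing through a specifier of vP, which is what makes parasitic gaps usable as a probe into vP structure.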

This test reveals a puzzle: while Richards’ (1997) theory of specifier formation predicts tucking-in structures at vP in these scenarios, I show via parasitic gap licensing that (at least sometimes) tucking-in fails to occur. Nissenbaum’s observations about parasitic gaps in superiority-violating D-linked questions provide another instance of the same puzzle. It seems that the structure at vP, tucked-in or not, reflects the final order of the moved phrases. This is exactly what we predict under the hypothesis of Order Preservation (Fox & Pesetsky 2005). However, it remains mysterious how derivations can ‘know’ what vP configuration to form based on what the final result of the derivation will be. I do not have a good solution, but I hope discussing these puzzles will help.

LF Reading Group 3/7 - Itai Bassi (MIT)

Speaker: Itai Bassi
Title: Fake Indexicals without feature transmission
Date and time: Wednesday March 7, 1-2pm
Location: 32-D461
Abstract:

In a footnote, Partee (1989) observed that 1st person pronouns can be semantically bound (“fake indexicals”), pointing to sentence (1). That footnote generated a line of research (Kratzer 1998, 2009; Heim 2008; Wurmbrand 2017) according to which bound variables (can) enter the syntactic derivation lacking interpreted phi-features and inherit features from their binder at the PF branch, via some “feature transmission” mechanism(s).

(1) I am the only one around here who will admit that I could be wrong
→ the speaker is the only individual in {x: x is willing to admit that x could be wrong}

In this talk I offer a formal syntax-semantics for this construction that derives a bound reading for (1) while maintaining that the bound “I” has its person feature interpreted, rendering feature transmission unnecessary. My proposal is to reduce (1) to focus constructions like (2), for which there are alternatives to the feature-transmission story (Bassi and Longenbaugh 2017, a.o.). I will thus propose, building on a suggestion in Bhatt (2002), that the construction in (1) involves silent association with focus. In addition, I show how my proposal accounts for the contrast between (1) and the minimally different (3), which does not have a bound reading for “I” and constitutes a problem for existing feature-transmission analyses (Kratzer 2009; Wurmbrand 2017).

(2) Only I will admit that I could be wrong

(3) I met the only one around here who will admit that I could be wrong (no bound reading)
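As one illustrative way to cash out the focus-based analysis (a sketch under standard Roothian assumptions, not necessarily the speaker’s own formalization), the bound reading of (2) can be glossed as:

only(speaker)(λx. x will admit that x could be wrong)
≈ the speaker will admit that the speaker could be wrong, and no alternative y ≠ speaker will admit that y could be wrong

Here the embedded “I” covaries across the focus alternatives while still denoting the speaker in the asserted proposition, so its person feature can remain interpreted without any feature transmission at PF.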

Invited talk 3/8 - Athulya Aravind (MIT)

Speaker: Athulya Aravind (MIT)
Title: Principles of presupposition in development
Date and time: Thursday March 8, 12:30-2:00pm
Location: 32-D461
Abstract:

Natural language affords us the means to communicate not only new information, but also information that we are already taking for granted, our presuppositions. The proper characterization of presuppositions, both how they enter into the compositional semantics and how they fit into the exchange of information in communicative situations, has been at the center of a long-standing debate. One class of theories treats presuppositions as categorically imposing restrictions on the conversational common ground: presuppositions must signal information that is already mutually known by all participants. While principled and elegant, these theories are often thought to be empirically inadequate, as the common ground requirement is not always met in everyday conversation. A second class of theories therefore adopts weaker, less categorical approaches to the phenomenon that are nonetheless a better fit to the empirical facts.
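The categorical requirement at issue can be stated compactly (a standard Stalnakerian formulation, offered here as background rather than as the talk’s own definition): modeling the common ground as a context set c, the set of worlds compatible with what is mutually taken for granted,

a sentence S presupposing p is felicitous in c only if c entails p, i.e., c ⊆ p.

On this picture, apparently informative uses of presuppositions involve accommodation: the hearer quietly updates c to c ∩ p, so that the requirement is met after all.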

This talk compares these two classes of approaches to presupposition in terms of their implications for language acquisition. I argue that children initially adopt a view of presuppositions as uniformly placing restrictions on the conversational common ground, even in situations where these requirements may be bent. More tellingly, I show that children initially lack the ability to use presuppositions in ways that violate the common ground requirement. The observed two-step developmental trajectory supports a common ground theory of presuppositions, according to which the “rule of thumb” is that presuppositions are already common knowledge, and informative uses involve strategic violations of this rule. In turn, the acquisition data vindicate some of the theoretical idealizations whose empirical validity is masked in part by the pragmatic sophistication of adult language users.

MIT Colloquium 3/9 - Sandhya Sundaresan (Leipzig)

Speaker: Sandhya Sundaresan (Leipzig)
Title: An Alternative Treatment of Indexical Shift: Modelling Shift Together Exceptions, Dual Contexts, and Selectional Variation
Date and time: Friday March 9, 3:30-5:00pm
Location: 32-155
Abstract:

I present three types of evidence that challenge both context-overwriting and quantifier-binding approaches to indexical shift, the phenomenon whereby an indexical is interpreted not against the utterance context but against the index associated with an intensional verb:

(I) systematic exceptions to Shift Together (the constraint that all shiftable indexicals in a local intensional domain must shift together) in Tamil, varieties of Zazaki and Turkish, and potentially also Late Egyptian;

(II) novel evidence from imperatives in Korean, with supporting secondary data from imperatives in Slovenian, showing that the utterance context continues to be instantiated even in putatively shifted environments; and

(III) results from my own fieldwork on Tamil dialects and secondary data from 26 languages (from 19 distinct language families) showing that there is structured selectional variation in the intensional environments in which indexical shift obtains, and furthermore that such variation is one-way implicational.

Three desiderata emerge: 1. Shift Together holds whenever possible, but systematic exceptions may nevertheless obtain; 2. the utterance context is never overwritten; 3. indexical shift is an embedded root phenomenon that privileges speech predicates.

To capture these, I develop an alternative model of indexical shift with the following properties. The context-shifter is not a context-overwriting operator but a contextual quantifier. At the same time, unlike on standard quantificational approaches to shifting, this contextual quantifier (or “monster”) is a distinct grammatical entity severed from the attitude verb. Specifically, I present evidence from nominalization patterns and complementizer deletion showing that the monster is encoded on the complementizer selected by the attitude verb. I then propose that selectional variation for indexical shift ensues because the monster is encoded on structurally distinct types of complementizer head, each selected by a different class of attitude verb (as has also been recently proposed in the literature).
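As background for the contrast the abstract draws (a textbook formulation, not the speaker’s proposal): on a context-overwriting approach such as Anand & Nevins (2004), the monster operator replaces the evaluation context with the local index,

[[ OP φ ]]^{c,i} = [[ φ ]]^{i,i}

so that inside its scope “I” picks out the attitude holder rather than the actual speaker. Once the context parameter has been overwritten, nothing in φ can access the utterance context again; the Korean and Slovenian imperative facts in (II), where the utterance context remains visible even in shifted environments, are thus direct counterevidence, and they motivate recasting the monster as a contextual quantifier that binds context variables selectively rather than destroying the original parameter.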