Whamit!

The Weekly Newsletter of MIT Linguistics

Issue of Monday, September 26th, 2022

Experimental/Computational Ling Group 9/30 - Meg Gotowski and Forrest Davis (MIT)

Meg Gotowski and Forrest Davis will be giving presentations on their dissertation research. Join us on Friday (9/30) from 2-3:30 in the 8th floor conference room (32-D831). 

 

It is DAXY to learn! Bootstrapping in the Adjectival Domain (Meg Gotowski) 

Abstract: An influential theory in word learning is known as syntactic bootstrapping (Landau & Gleitman 1985), which claims that children are able to map structure to meaning. Most of the bootstrapping literature has focused on the ability of learners to rely on syntactic frames in order to deduce the meaning of verbs (see Gleitman et al. 2005). In this talk, I examine how syntactic bootstrapping extends to the adjectival domain, focusing on how learners are able to acquire different subclasses of subjective gradable predicates (e.g. fun, tasty, tough). I discuss the results of an experiment based on the Human Simulation Paradigm (Gillette et al. 1999), and argue that while learners are sensitive to individual adjectival frames, they are also dependent on seeing adjectives across multiple frames in order to effectively narrow down the hypothesis space of possible meanings (consistent with Mintz 2003 for verbs).

 

Neural Models of Language and the Limits of Superficialism (Forrest Davis) 

Abstract: A typical approach to evaluating neural models of language for linguistic knowledge will find instances of overlap between humans and models. This overlap is claimed to be evidence that our linguistic theories can be simplified. I will instead argue for a different approach to evaluating such models. I advance the position that neural models are models of “superficialism”, the worldview which asserts that all meaningful linguistic (and more broadly psychological) distinctions can be made on the basis of observing ordinary behavior. By assuming this worldview, the role of data in determining a neural model’s behavior is centered. I then show via two case studies (ambiguous relative clause attachment and implicit causality) that mismatches between neural models and humans follow from general properties of data. I conclude by suggesting that, to the extent that these really are general properties of data, models will always be sensitive to incorrect generalizations. 

MIT @ SuB27!

MIT had a huge turnout at Sinn und Bedeutung 27 in Prague, 14-16 September 2022! 



Current students:

+ Lorenzo Pinton (with Maria Aloni): Sluicing and Free Choice
+ Adèle Hénot-Mortier: A dynamic alternative-pruning account of asymmetries in Hurford disjunctions
+ Ido Benbaji: (with Omri) Adversative only is only only; (with Yash and Filipe) The Logic of Hindi Co-compounds
+ Enrico Flor: Questions in non-distributive belief ascriptions
+ Jad Wehbe: Against the lexical view of cumulative inferences and homogeneity
+ Omri Doron: (with Ido) Adversative only is only only
+ Anastasia Tsilia (poster): “Quasi-ECM” constructions in Modern Greek: Evidence for semantic lowering

Alums:

+ Itai Bassi: Strict readings of logophors and the LF of anaphoric dependencies (with others) 
+ Jonathan Bobaljik: (with Uli Sauerland) About ‘us’
+ Uli Sauerland (invited): An Algebra of Thought that Predicts Key Aspects of Language Structure; (with Jonathan Bobaljik) About ‘us’
+ Pritty Patel-Grosz (invited): The search for universal primate gestural meanings
+ Yasutada Sudo: Against simplification: free choice with anaphora
+ Sam Alxatib: Necessary Free Choice and its theoretical significance

Presenting but not in the picture:

+ İsa Kerem Bayırlı (alumnus): UM2: A Generalization over Determiner Denotations
+ Yash Sinha (with Ido and Filipe): The Logic of Hindi Co-compounds
+ Filipe Hisao Kobayashi (with Ido and Yash): The Logic of Hindi Co-compounds

Industry workshop 9/28 - Andy Zhang

Who: Dr. Andy Zhang (Analytical Linguist at Google, virtual talk)
When: Wednesday 9/28, 2-2:45
Where: 5-231 or on Zoom (contact Hadas Kotek for the link)
What: Andy is a linguist(/data scientist/PM) at Google. His team works on designing and building machine learning systems that protect kids’ safety on Google surfaces across the internet. Andy’s particular domain is designing ML systems for enhancing the safety of ads that serve in Search for underage users. Andy completed his PhD in linguistics at Yale in 2021, where his work focused on how the ways in which we are different (domain-general dimensions of individual-level cognitive variability) influence and constrain (a) the ways in which we use language (real-time comprehension, lexical semantics) and (b) the ways in which languages change over time (diachronic semantics, grammaticalization pathways). 
 
Andy writes: “I’m looking forward to sharing about my journey into tech and hopefully helping you out on yours!”

Syntax Square 9/27 - Peter Grishin (MIT)

Speaker: Peter Grishin (MIT)
Title: Passamaquoddy subordinative clauses are TPs
Time: Tuesday, September 27th, 1pm - 2pm

Abstract: Algonquian languages are known for their distinct inflectional paradigms that have different syntactic distributions, something that has received quite a bit of interest in the theoretical literature (Campana 1996, Brittain 2001, Richards 2004, Cook 2014, Bogomolets, Fenger, and Stegovec 2022, a.o.). However, one inflectional paradigm/clause type has not received much (if any) attention: the subordinative, an Eastern Algonquian innovation. In this talk I’ll present some fieldwork and corpus data on the subordinative in Passamaquoddy, proposing that the seemingly-unrelated syntactic contexts it appears in—clausal complements to certain verbs and modal particles, some clausal coordinations, and polite imperatives—can all be unified if we take subordinative clauses to be TP sized, lacking a CP layer. While I think the broad picture I sketch is compelling, there are some loose ends and problems that will emerge—I’m looking for help with figuring out how to deal with them.

LSA ballot open until November 5

The annual ballot of the Linguistic Society of America is now open, and members have until November 5, 2022 to cast their votes. There is a series of proposed amendments to the LSA Constitution and Bylaws, and the LSA's website provides some comments from members pro and contra. The ballot also includes the slate of candidates for various positions in the Society:

  • Marlyse Baptista (University of Michigan) for
    Vice-President/President Elect
  • Shelome Gooden (University of Pittsburgh) for Language
    Co-Editor
  • Four candidates for two at-large seats on the Executive Committee:
    • Melissa Baese-Berk (University of Oregon)
    • Michel DeGraff (Massachusetts Institute of Technology)
    • Sali A. Tagliamonte (University of Toronto)
    • Michal Temkin Martinez (Boise State University)

LingLunch 9/29 - Janek Guerrini (Institut Jean Nicod, ENS)

Speaker: Janek Guerrini (Institut Jean Nicod, ENS)
Title: Genericity in similarity
Time: Thursday, September 29th, 12:30pm - 1:50pm

Abstract: In this talk, I offer an account of similarity constructions involving ‘like’, such as ‘be like’ and ‘look like’. I argue that these constructions have two key properties. (1) The first is that similarity predication amounts to predication of overlap of salient properties: I analyse ‘John is like Mary’ as ‘John shares relevant properties with Mary’. This is motivated by the fact that there seem to be grammatical devices that single out precisely what properties are relevant, e.g. ‘With respect to personality, she’s just like her father’. (2) The second key feature of similarity talk is, I argue, that it involves inherent generic quantification. This explains a range of data: first, it accounts for the reading of indefinites embedded in ‘like’ Prepositional Phrases: ‘John looks like a lawyer’ is almost equivalent to ‘John looks like a typical lawyer’. Second, it accounts for narrow-scope and almost conjunctive readings of disjunction in the scope of ‘like’: ‘Mary looks like a lawyer or a judge’ is almost equivalent (on its most accessible reading) to ‘Mary looks like a lawyer and Mary looks like a judge’.

Wu defends!

A tardy report, but a happy one: on August 15, Danfeng Wu successfully and eloquently defended her dissertation, entitled “Syntax and Prosody of Coordination”. The dissertation focuses on what she calls “correlative coordination” — coordinate structures such as “either … or …” in which each element contains a coordinator. Danfeng defends the hypothesis that “the coordinator, traditionally considered to be the head of coordination (e.g., or and but), may not be the actual head, but just the daughter of a [conjunct]”. This idea in turn motivates analyses of situations in which the coordinator appears to be located in a surprising place as involving instances of ellipsis. The second half of her dissertation reports experimental research on the syntax-prosody interface that tests for the existence of some of these proposed ellipsis sites. It is an extremely interesting body of work that also suggests a new tool for ellipsis detection, above and beyond its usefulness to the central problems of the dissertation. As we mentioned in an earlier post, Danfeng’s next stop is Oxford University, where she takes up a three-year Fellowship at Magdalen College.
Congratulations, Danfeng!
 
And of course, after the defense, there was the usual gathering with food and champagne — jointly celebrating Danfeng’s defense and Christopher Baron’s (reported earlier here), which took place concurrently. The party photos below celebrate both events!