Whamit!

The Weekly Newsletter of MIT Linguistics

Experimental/Computational Ling Group 9/30 - Meg Gotowski and Forrest Davis (MIT)

Meg Gotowski and Forrest Davis will be giving presentations on their dissertation research. Join us on Friday (9/30) from 2-3:30 in the 8th floor conference room (32-D831). 


It is DAXY to learn! Bootstrapping in the Adjectival Domain (Meg Gotowski) 

Abstract: An influential theory in word learning, syntactic bootstrapping (Landau & Gleitman 1985), claims that children are able to map structure to meaning. Most of the bootstrapping literature has focused on learners' ability to rely on syntactic frames in order to deduce the meaning of verbs (see Gleitman et al. 2005). In this talk, I examine how syntactic bootstrapping extends to the adjectival domain, focusing on how learners acquire different subclasses of subjective gradable predicates (e.g. fun, tasty, tough). I discuss the results of an experiment based on the Human Simulation Paradigm (Gillette et al. 1999), and argue that while learners are sensitive to individual adjectival frames, they also depend on seeing adjectives across multiple frames in order to effectively narrow down the hypothesis space of possible meanings (consistent with Mintz 2003 for verbs).


Neural Models of Language and the Limits of Superficialism (Forrest Davis) 

Abstract: A typical approach to evaluating neural models of language for linguistic knowledge looks for instances of overlap between models and humans. This overlap is claimed to be evidence that our linguistic theories can be simplified. I will instead argue for a different approach to evaluating such models. I advance the position that neural models are models of “superficialism”, the worldview that asserts that all meaningful linguistic (and more broadly psychological) distinctions can be made on the basis of observing ordinary behavior. Adopting this worldview centers the role of data in determining a neural model’s behavior. I then show via two case studies (ambiguous relative clause attachment and implicit causality) that mismatches between neural models and humans follow from general properties of data. I conclude by suggesting that, to the extent that these really are general properties of data, models will always be sensitive to incorrect generalizations.