Whamit!

The Weekly Newsletter of MIT Linguistics

LingLunch 10/10 - Ethan Wilcox (Harvard)

Speaker: Ethan Wilcox (Harvard)
Title: Neural Network Models and the Argument from the Poverty of the Stimulus: The Case of Filler-Gap Dependencies
Time: Thursday, October 10th, 12:30pm – 1:50pm
Location: 32-D461

Abstract: Recurrent Neural Networks (RNNs) are one type of neural model that has been able to achieve state-of-the-art scores on a variety of natural language tasks, including translation and language modeling (which is used in, for example, text prediction). In this talk I will assess how these models might weigh in on linguistic debates about the types of biases required to learn syntactic structures. By treating these models as subjects in a psycholinguistic experiment, I will demonstrate that they are able to learn the filler-gap dependency, and are even sensitive to the hierarchical constraints implicated in the dependency. Next, I turn to “island effects”, or structural configurations that block the filler-gap dependency, which have historically played a role in the Argument from the Poverty of the Stimulus. I demonstrate that RNNs are able to learn some of the “island” constraints and even recover some of their pre-island gap expectation. These experiments demonstrate that linear statistical models are able to learn some fine-grained syntactic rules; however, their behavior remains un-humanlike in many cases.
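The "models as experimental subjects" methodology the abstract alludes to is, in published work on this topic, typically a 2×2 surprisal design: crossing the presence of a wh-filler with the presence of a gap, and checking whether the model's surprisal at the material after the gap site shows an interaction (a filler should make a gap less surprising). The sketch below is purely illustrative — the toy probabilities stand in for what a trained RNN language model might assign in each condition, and the example sentences are assumptions, not the speaker's stimuli.

```python
import math

def surprisal(p):
    """Surprisal in bits: -log2 P(region | context)."""
    return -math.log2(p)

# Hypothetical probabilities a trained language model might assign to the
# post-gap region (e.g. "yesterday") in each of the four conditions of the
# 2x2 filler-by-gap design. These numbers are invented for illustration.
p_region = {
    ("+filler", "+gap"): 0.20,  # "I know what the boy ate __ yesterday."
    ("+filler", "-gap"): 0.02,  # "*I know what the boy ate the cake yesterday."
    ("-filler", "+gap"): 0.01,  # "*I know that the boy ate __ yesterday."
    ("-filler", "-gap"): 0.15,  # "I know that the boy ate the cake yesterday."
}

def wh_licensing_interaction(p):
    """Difference-in-differences of surprisal: the effect of a gap when a
    filler is present, minus the effect of a gap when no filler is present.
    A negative value means the filler licenses (reduces surprise at) the gap."""
    effect_with_filler = (surprisal(p[("+filler", "+gap")])
                          - surprisal(p[("+filler", "-gap")]))
    effect_without_filler = (surprisal(p[("-filler", "+gap")])
                             - surprisal(p[("-filler", "-gap")]))
    return effect_with_filler - effect_without_filler

# A model that has learned the filler-gap dependency yields a negative
# interaction: gaps are expected where fillers license them.
print(round(wh_licensing_interaction(p_region), 2))
```

With these toy numbers the interaction is negative, the signature of a learned filler-gap dependency; island effects would then show up as this interaction disappearing when the gap sits inside an island configuration.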