Whamit!

The Weekly Newsletter of MIT Linguistics

CompLang 5/13 - Ethan Wilcox (Harvard)

Speaker: Ethan Wilcox (Harvard University)
Title: Neural Network Language Models as Psycholinguistic Subjects: The Case of Filler-Gap Dependencies
Time: Thursday, 5/13, 5-6pm
Location: 46-5165

Abstract:
Recurrent Neural Networks (RNNs) are one type of neural model that has achieved state-of-the-art scores on a variety of natural language tasks, including translation and language modeling (which is used in, for example, text prediction). However, the nature of the representations that these ‘black boxes’ learn is poorly understood, raising issues of accountability and control for NLP systems. In this talk, I will argue that one way to assess what these networks are learning is to treat them like subjects in a psycholinguistic experiment. By feeding them hand-crafted sentences that reveal the model’s underlying knowledge of language, I will demonstrate that they are able to learn the filler-gap dependency, and are even sensitive to the hierarchical constraints implicated in the dependency. Next, I turn to “island effects”, structural configurations that block the filler-gap dependency and have been theorized to be unlearnable. I demonstrate that RNNs are able to learn some of the island constraints and even recover some of their pre-island gap expectation. These experiments demonstrate that statistical models trained on linear sequences are able to learn some fine-grained syntactic rules; however, their behavior remains un-humanlike in many cases.
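
To make the paradigm concrete, below is a minimal sketch of the surprisal-based method the abstract describes. It assumes a pretrained GPT-2 model from the Hugging Face transformers library as a stand-in for the RNN language models discussed in the talk, and the stimuli are illustrative examples, not the talk's actual materials.

import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def total_surprisal(sentence: str) -> float:
    """Sum of per-token surprisals, -log2 p(token | prefix), in bits."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Log-probability that the model assigns to each actual next token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    picked = log_probs[torch.arange(ids.size(1) - 1), ids[0, 1:]]
    return -picked.sum().item() / math.log(2)

# A wh-filler licenses the object gap after "devoured"; "that" does not.
gapped_with_filler = "I know what the lion devoured at sunrise."
gapped_without_filler = "I know that the lion devoured at sunrise."
print(total_surprisal(gapped_with_filler))
print(total_surprisal(gapped_without_filler))

A model that has learned the filler-gap dependency should find the gap less surprising when a wh-filler is present, i.e. assign lower surprisal to the first sentence; crossing this contrast with island-forming structures is the logic behind the island experiments mentioned above.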