The Weekly Newsletter of MIT Linguistics

ESSL/LAcqLab 9/22 - Laurel Perkins (UMD)

Speaker: Laurel Perkins (University of Maryland)
Title: Perceiving Transitivity: Consequences for Verb Learning
Date and time: Friday, September 22nd, 2pm-4pm
Location: 32-D461

There is a paradox in language acquisition concerning the perception of the input. If learners can veridically parse the input, then there is nothing to learn from it; but if they cannot parse the input, then it is unclear how they avoid faulty inferences about structure, or even learn from it at all (Valian, 1990; Fodor, 1996). In this talk, I examine how children deal with their input given only partial knowledge of the target grammar. Specifically, I focus on the intersection of transitivity, wh-movement, and verb learning.
Infants can use a verb’s distribution in transitive and intransitive clauses to draw inferences about its meaning (e.g. Fisher et al., 2010) and its argument-taking properties (Lidz, White, & Baier, 2017). I will address two questions about the nature of these inferences. First, are infants’ inferences about verb meaning best characterized as one-to-one matching between arguments in a clause and participants in an event described by that clause (Naigles, 1990; Fisher et al., 2010)? To differentiate this participant-to-argument matching hypothesis from other possibilities, we investigate whether children think an intransitive clause could be a good fit for a two-participant event.

Second, at early stages of development, infants may not recognize transitivity in certain “non-basic” clauses, like What did Amy fix? (Gagliardi, Mease, & Lidz, 2016). If a learner does not yet recognize that what stands for the object of fix, might she erroneously infer that fix does not require an object? We probe when infants are able to recognize the transitivity of non-basic clauses like wh-object questions, and how infants who do not yet have that ability might learn to “filter” non-basic clauses from the data they use for verb learning. Thus, learners may be able to overcome the limits of partial knowledge by unconsciously filtering out data that could lead to faulty inferences about their grammar.