Whamit!

The Weekly Newsletter of MIT Linguistics

Issue of Monday, April 3rd, 2017

Phonology Circle 4/3 - Kevin Ryan (Harvard)

Speaker: Kevin Ryan (Harvard)
Title: Onset vs. rime effects in phrasal weight
Date/Time: Monday, April 3, 5:00–6:30pm
Location: 32-D461
Abstract:

Prosodic end-weight (PEW) refers to the specifically phonological aspect of end-weight, whereby prosodically heavier constituents tend to be preferred domain-finally, all else being equal (i.e. controlling for semantics, frequency, morphosyntactic complexity, etc.). This tendency can be seen in coordination (“X and Y” or “Y and X”?) among numerous other constructions, is widespread (though not universal) cross-linguistically, and is amply supported by experiments, including wug-tests. Several explanations have been put forth for PEW, including final lengthening, complexity deferral (for reasons related to processing), metrical optimization, phonotactic optimization, and (esp. in my own work) stress-weight alignment in sentential prosody. I maintain that the stress-weight interface best explains the core properties of PEW, while the other factors are either irrelevant or at least largely orthogonal to it. One area in which the stress-weight analysis illuminates PEW concerns its differing treatment of onset vs. rime segments. For instance, in the nucleus and coda, greater sonority correlates with greater weight, while in the onset, the generalization is reversed: Greater obstruency patterns as heavier. This reversal is also evident from other types of weight systems (with phonetic rationales in Gordon 2005, Ryan 2014). Thus, I propose that PEW instantiates the same stress-weight interface that is well-documented for stress, meter, etc., a generalization of Weight-to-Stress (Prince 1983 et seq.). The proposed generalization is formalized as a stringent weight hierarchy (e.g. moraic sonorant X > moraic X > X), partly to avoid monsters, but stringency can only be maintained if one recognizes a natural class that is the union of onset obstruents (which cannot be analyzed as moraic in English) and rime segments (which are moraic), among other issues.

Syntax Square 4/4 - Colin Davis

Speaker: Colin Davis (MIT)
Title: English Possessor Extraction, Pied-piping, and Cyclic Linearization
Date and time: Tuesday April 4, 1-2pm
Location: 32-D461
Abstract:

In my previous Syntax Square presentation, I introduced possessor extraction (PE) in English. This essentially undocumented possibility of colloquial speech stands in contrast to the canonical pied-piping wh-movement of possessors in English.
  1. Who do they say [[_’s cat] is cute]? (Possessor extraction)
  2. [Whose cat] do they say [_ is cute]? (Pied-piping)
English possessor extraction cannot happen all the time, however. In this talk, I go on to analyze the phenomenon’s restrictions. For example, non-subject DPs must be pied-piped to the edge of their clause for PE out of them to be licit, producing a unique instance of partial pied-piping. In a very general sense, pied-piping to an intermediate position provides a nice piece of overt evidence for successive-cyclic movement through intermediate specifiers of CP.
  3. *Who do they think [John likes [_’s cake]]? (No PE from object in-situ)
  4. Who do they think [[_’s cake] John likes _]? (PE from pied-piped object)
I argue that this pied-piping and a number of other details result from an adjacency condition between the possessor and the Saxon genitive (cf. Gavruseva & Thornton 2001), which interacts with phase-by-phase linearization of syntactic structure (Fox & Pesetsky 2005, Ko 2005, 2014). Along the way, this analysis finds an explanation for the fact that successive-cyclic movement through spec-vP cannot strand anything in English, a curious gap both in the paradigm of McCloskey’s all-stranding and in P-stranding in English generally. This finding leads to a number of broader predictions about stranding and its interaction with movement and the nature of specifiers (cf. Ko). The interaction of English possessor extraction with existential constructions also leads to a novel argument from linearization that expletive there originates in vP (Biberauer & Richards 2005, Deal 2009).

LFRG 4/5 - Colin Davis

Speaker: Colin Davis (MIT)
Title: English possessor extraction and LF pied-piping
Date and time: Wednesday April 5, 1-2pm
Location: 32-D461
Abstract:

The colloquial speech of many English speakers permits what looks like possessor extraction, in which a possessor A’-moves alone (1) instead of pied-piping the rest of the DP (2).

1. Who do they think [[_’s fat cat] is cute]? (Possessor extraction)

2. [Whose fat cat] do they think [_ is cute]? (Standard pied-piping)

This movement is interesting in light of the fact that English is a language that otherwise obeys the Left Branch Condition (Ross 1967), which describes a lack of extraction of the leftmost constituent of a nominal phrase. I argue that despite appearances, the possessum is in fact covertly pied-piped in (1), meaning that there really is no Left Branch Condition exception here. Some evidence for this comes from parasitic gaps, where a possessum stranded in an embedded clause can bind a parasitic gap in the matrix clause, as in (3).

3. [Who did you say [_’s haircut is awful] despite wanting help from PG]?

If who moved alone and didn’t carry haircut into the matrix clause, we expect the PG to be bound by who, and so to refer to a person. If there were full pied-piping, we predict whose haircut to bind the PG, giving a silly reading where you want help from a haircut. By the judgments of most speakers, it turns out that the silly reading is the most salient for sentences like this, with the non-silly reading being absent or difficult. Importantly, we only expect the silly reading to be available if the possessum was covertly pied-piped, binding the PG. von Stechow (1996) argues against Nishigauchi (1990) that covert pied-piping does not exist, or at least is not interpreted; in (3), however, covert pied-piping is interpreted. I also apply the logic of covert pied-piping to sluicing in answers to possessor-extracting questions, and to some puzzles regarding free relatives, which don’t pattern as expected.

Ling-Lunch 4/6 - Adrian Stegovec

Speaker: Adrian Stegovec (UConn)
Title: Two’s company, three’s a crowd: Strength implications in syntactic person restrictions
Date/Time: Thursday, April 6th, 12:30–1:50pm
Location: 32-D461
Abstract:

In this talk I argue for a novel approach to syntactic person restrictions (SPRs) such as the Person-Case Constraint (PCC) in ditransitives and analogous restrictions in transitives. I present data from a broad cross-linguistic survey of SPRs (101 languages), revealing a generalization on the distribution of SPRs across combinations of External-Internal and Internal-Internal arguments, the Strength Implication Generalization: “If a language has both an External-Internal argument SPR and an Internal-Internal argument SPR, the Internal-Internal one is never ‘weaker’ than the External-Internal one.” I propose that SPRs arise due to the inherent person feature underspecification of the relevant pronominal markers, which makes them dependent on phase heads for external person feature valuation. This is shown not only to derive the generalization from standard assumptions on argument structure, but also to capture the cross-linguistic variation in SPR types in terms of lexical (micro-)variation in pronominal markers and a contextual approach to phases.

Explanatory adequacy in formal semantics 4/7 - Irene Heim

Earlier this semester there were three LFRG presentations on topics that had to do with explanatory adequacy in formal semantics. Since there was interest in discussing these issues further, a separate reading group on explanatory adequacy in formal semantics will start this week.

The goal is to discuss theoretical, experimental, and computational work in formal semantics that addresses the question of how denotations of lexical items are acquired, with a special focus on 1) typological and experimental work that contributes to the characterization of the range of possible denotations available to the child, and 2) computational work on semantic learning.

The reading group will meet on Fridays at 2-3pm in 32-D831. The first meeting’s details are below.

Speaker: Irene Heim (MIT)
Title: Type Economy
Date/Time: Friday, April 7, 2:00-3:00pm
Location: 32-D831
Abstract:

Lexicalist and syntactic accounts of a given construction have often been pitted against each other in the linguistic literature. Proponents of either account ought to do more than argue that their favorite account derives better empirical predictions from simpler assumptions. They also should tell us how the language learner chooses this analysis. For example, a linguist who favors a raising-to-subject analysis of verbs like seem should formulate constraints or biases which may guide children to acquire this analysis and not a lexicalist one. Informally, a bias in favor of “simpler” semantic types could fill the bill in this case. But what exactly is the relevant metric of simplicity?

MIT Colloquium 4/7 - Ricardo Bermúdez-Otero (Manchester)

Speaker: Ricardo Bermúdez-Otero (Manchester)
Title: The phonological lexicon, usage factors, and rates of change: Evidence from Manchester English
Time: Friday, April 7, 3:30-5:00 pm
Venue: 32-155
Abstract:

This paper reports the results of research conducted jointly with George Bailey (University of Manchester), Maciej Baranowski (University of Manchester), and Danielle Turton (University of Newcastle upon Tyne).

In classical modular feedforward architectures of grammar, phonetic implementation does not have access to information about lexical items beyond the discrete properties encoded in phonological representations. This hypothesis accounts for fundamental facts of human language such as double articulation and the existence of neogrammarian change, but it fails to explain the fact that fine phonetic detail is also affected by gradient usage-related properties of lexical items such as token frequency and neighbourhood density.

Exemplar Theory seeks to explain the phonetic effects of usage factors by abandoning the classical hypothesis that lexical phonological representations consist solely of categorical information. Less radical approaches, however, continue to uphold this assumption: some, such as Baese-Berk & Goldrick’s (2009) account of neighbourhood density effects, rely on the notion of gradient symbolic computation, according to which phonological representations are made up of symbols that are discrete but exhibit continuously varying degrees of activation (Smolensky & Goldrick 2016).

These two approaches to the phonetic effects of usage factors differ in their diachronic predictions. In the case of lexical token frequency, in particular, it has been repeatedly observed that, synchronically, high-frequency words exhibit more lenition than low-frequency words. From this observation the proponents of Exemplar Theory infer that, during historical language change, high-frequency words undergo reduction at a relatively faster rate due to greater exposure to reductive phonetic biases, whose effects are claimed to be directly registered in phonetically-detailed lexical representations. Pace Hay & Foulkes (2016), however, this diachronic pattern has never been reliably observed, and these accounts fail to consider another logical possibility: namely, that high-frequency words are ahead synchronically but actually change at the same rate as low-frequency words.

In this talk I report the findings of an investigation into the effect of lexical token frequency on the glottal replacement of word-medial /t/ in Manchester English, using apparent-time data from 62 speakers born between 1926 and 1985 (2131 tokens). Two stringent tests (mixed effects logistic regression and comparison between curve-fitting models) show that lexical token frequency gives rise to a ‘constant rate effect’ in the sense of Kroch (1989): high-frequency words exhibit more glottalization at all points in apparent time, but the size of their advantage remains unchanged. This suggests that glottalization advances historically through an increase in the probability of application of a single process targeting both high- and low-frequency words, whilst the impact of frequency is produced by time-invariant orthogonal mechanisms, possibly involving gradient symbolic computation. Thus, the evidence is consistent with the classical assumption that lexical phonological representations consist solely of discrete categories and do not encode fine phonetic detail.
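The logic of Kroch’s constant rate effect can be illustrated with a toy simulation (an illustrative sketch only, not the study’s actual model; the slope and intercept values are invented): if high- and low-frequency words share a single slope over (apparent) time and differ only in intercept, high-frequency words show more glottalization at every point, yet their advantage on the logit scale never changes.

```python
import math

def p(year, slope, intercept):
    """Probability of glottal replacement as a logistic function of time."""
    return 1 / (1 + math.exp(-(slope * year + intercept)))

def logit(x):
    return math.log(x / (1 - x))

slope = 0.05               # shared rate of change across word classes
b_high, b_low = 1.0, 0.0   # high-frequency words start ahead

for year in (0, 20, 40, 60):
    gap = logit(p(year, slope, b_high)) - logit(p(year, slope, b_low))
    # raw probabilities rise over time, but the logit-scale gap
    # stays fixed at b_high - b_low = 1.0: a constant rate effect
    print(year, round(gap, 6))
```

Under the exemplar-theoretic alternative, by contrast, the two word classes would have different slopes, so the gap would grow over time.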

A PDF copy of the abstract (with references) is also available.


Aravind in NLLT

Good news from fourth-year student Athulya Aravind, whose paper “Licensing long-distance wh-in-situ in Malayalam” has been accepted for publication in Natural Language & Linguistic Theory. Congratulations, Athulya! Follow the link for a pre-publication draft.


Kotek to NYU

Congratulations to Hadas Kotek (PhD 2014), who has accepted a position as Visiting Assistant Professor at NYU’s Department of Linguistics next year!


Sugawara accepts Assistant Professor position

We are delighted to report that our 2016 alum Ayaka Sugawara has accepted a position as Lecturer (the equivalent of a tenure-track Assistant Professor) at Mie University. Ayaka’s dissertation concerned “The role of Question-Answer Congruence (QAC) in child language and adult sentence processing”. Fantastic news — congratulations, Ayaka!


DeGraff’s paper at PROSPECTS

Michel DeGraff published the article “Mother-tongue books in Haiti: The power of Kreyòl in learning to read and in reading to learn” in the UNESCO journal PROSPECTS (Comparative Journal of Curriculum, Learning, and Assessment). The article is available here.


MIT @ ACAL 48

While Whamit! was on hiatus for Spring Break, the 48th Annual Conference on African Linguistics took place at Indiana University, Bloomington. Third-year grad student Abdul-Razak Sulemana gave the talk “GETCASE is Violable: Evidence for Wholesale Late Merger”.


MIT @ Формальные подходы к русскому языку (Formal Approaches to Russian Linguistics)

Last Wednesday, Thursday, and Friday, the linguists of Moscow State University and Moscow State Pedagogical University hosted the second workshop on Formal Approaches to Russian Linguistics, with several MIT connections. The excellent conference was organized by frequent MIT visitor, former Fulbright Fellow, and former visiting faculty Sergei Tatevosov and his colleague Ekaterina Lyutikova. David Pesetsky and alum Ora Matushansky (PhD 2002) were the invited speakers. David’s talk was entitled “Clause size and Nominal size: towards a derivational theory of both”. Ora, well known for her work on semantics, syntax, and morphology, spoke about … Russian phonology, specifically “A problem in the Hallean approach to the Russian verb”. In addition, alum Natasha Ivlieva (PhD 2013) of Moscow’s Higher School of Economics presented a talk on to li…to li disjunctions, and Tatiana Bondarenko, a member of next Fall’s incoming class (!), spoke about “Russian applicatives and the lexical decomposition”. After FARL, David Pesetsky also presented a colloquium talk on his research concerning clause size at Moscow State University.

At the conference dinner, left to right: Ekaterina Lyutikova, David Pesetsky, Misha Knyazev, Natasha Ivlieva, Maria Vassilyéva, Tatiana Bondarenko, Sergei Tatevosov
