Whamit!

The Weekly Newsletter of MIT Linguistics

Archive for April 26th, 2010

Liuda, Sasha, and Natasha at the FASL conference

The world of linguistics this weekend saw not one, but two conferences with “Formal Approaches” in their names. At Formal Approaches to Slavic Linguistics (FASL), hosted by the University of Maryland, Liudmila Nikolaeva (Liuda) presented a paper entitled “On the Nature of Preverbal Internal Arguments in Russian”, and Sasha Podobryaev presented a paper coauthored with Natasha Ivlieva on “How Many Splits in Russian: A View From LF”. All three are second-year graduate students.


Written by pesetsky

April 26th, 2010 at 6:00 am

Posted in Student News

Formal Approaches to Mayan Linguistics last weekend

FAMLi sign
Meanwhile, our department hosted the first-ever conference on “Formal Approaches to Mayan Linguistics (FAMLi)” this weekend. We won’t repeat all the details of the conference, which we already described in last week’s WHAMIT, except to say that it was a tremendous success. The papers were excellent: many broke new ground in grammatical description, while others offered competing explanations for particularly puzzling phenomena in Mayan (especially Agent Focus, central to at least three of the talks). The discussion was lively and productive after each and every talk, the room was full, and the spirit of the conference was magnificent. The two official languages of the conference were English and Spanish, as about half the participants were native-speaker linguists. Combinations of English handout with Spanish presentation (and vice versa) were common and worked very well for those of us with less than perfect bilingual skills. From our own department, talks were given by Kirill Shklovsky (“Person-Case effects in Tseltal”); by Jessica Coon with Pedro Mateo (University of Kansas) (“Extraction and embedding in two Mayan languages”, on Chol and Q’anjob’al); and by Norvin Richards, who was one of the five invited speakers. A particular highlight of the conference (and a chance to get out of Cambridge) was a Friday dinner reception at the Mexican Consulate in Boston, hosted by Consul Fernando Estrada. Thank you, Jessica, Robert, Kirill, and Katie for this unforgettable workshop, and thanks also to everyone who helped!
FAMLi conversation
(photo credits: Mitcho Erlewine)


Written by pesetsky

April 26th, 2010 at 6:00 am

Posted in Student News

Shklovsky at GLOW

Fourth-year student Kirill Shklovsky is back from GLOW (Generative Linguists in the Old World) in Wrocław, Poland, where he gave two talks. The first, co-authored with third-year student Yasutada Sudo, was “No Case Licensing: Evidence from Uyghur”. The second, related to his presentation a week later at FAMLi, concerned “Person-Case Effects in Tseltal”. In between the two conferences, Kirill spent an unexpected five days in Berlin, thanks to a certain Icelandic volcano.


Written by pesetsky

April 26th, 2010 at 6:00 am

Posted in Student News

Phonology Circle 4/26 - Jae Yung Song

This week’s Phonology Circle presentation will be by Jae Yung Song, of Brown University.

Speaker: Jae Yung Song (Brown University)
Title: The development of acoustic cues to coda voicing and place of articulation
Time: Monday 4/26, 5pm
Location: 32-D831

Studies on young children’s speech perception and production suggest that voicing and place of articulation (POA) contrasts may be acquired early in life. However, most of these studies have focused on onset consonants; little is known about the development of cues to feature contrasts in codas. To this end, we investigated children’s representations of coda voicing (voiced vs. voiceless) and POA (alveolar vs. velar) by conducting detailed acoustic analyses of their speech. In particular, we examined longitudinal, spontaneous speech data from 6 American English-speaking mother-child dyads. The results showed that children as young as 1;6 exhibited many adult-like acoustic cues to coda voicing and POA contrasts, such as longer vowels and more voice bars before voiced codas, and more frequent releases, a greater number of bursts, and longer post-release noise duration for alveolar codas. In contrast, some cues, such as glottalization at the end of the vowel, were still not systematically produced by 2;6. In general, younger children used more exaggerated cues compared to mothers, but showed nearly adult-like patterns by 2;6. In conclusion, although 2-year-olds produced some adult-like acoustic cues to voicing and POA distinctions, others take time to become adult-like. Physiological and contextual correlates of these findings are discussed.

Upcoming talks:

  • May 3 Igor Yanovich and Donca Steriade
  • May 10 Donca Steriade
  • May 17 Ari Goldberg (Tufts)

Access real-time updates online via the web (click ‘agenda’ to see the schedule as a list) or through iCal


Written by albright

April 26th, 2010 at 5:09 am

Posted in Talks

Syntax Square 4/27: Kirill Shklovsky

Join us this week for Syntax Square. Kirill Shklovsky will lead the discussion with a report from GLOW.

TIME: Tuesday, April 27, 1-2PM
PLACE: 32-D461

If you would like to lead the discussion at next week’s Syntax Square, please email Claire (halpert@mit.edu).


Written by claire.halpert

April 26th, 2010 at 5:07 am

Posted in Talks

Ling-Lunch 4/29: Bane, Graff, & Sonderegger

Speakers: Max Bane (University of Chicago), Peter Graff (MIT), Morgan Sonderegger (University of Chicago)
Title: Longitudinal phonetic variation in a closed system
Time: Thurs 4/29, 12:30-1:45
Place: 32-D461

Previous work shows that in short-term laboratory settings, aspects of one’s speech can change under exposure to the speech of others (e.g. Goldinger 1998, Nielsen 2007), and that this change is mediated by social variables such as (speaker) gender (e.g. Namy et al. 2002, Pardo 2006). An implicit hypothesis is that these effects can help explain dialect formation and the social stratification of speech. However, it is not known whether such change occurs in natural interaction over a longer term.

This study shows longitudinal change in VOT (voice onset time) in a closed linguistic system, mediated by social interaction. Our corpus consists of speech from Big Brother UK (2008), a reality TV show in which 16 contestants live in a house for 93 days, subject to 24-hour audio/video recording. VOT was measured for 4 contestants over the season. We model the effect of time on VOT for different contestants, controlling for known confounds (e.g. speech rate, place of articulation), and allowing for intrinsic differences in the VOTs of different words. Each contestant’s modeled VOT time trajectory shows significant longitudinal change, and all trajectories appear to move closer together over time.
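The trajectory-comparison idea above can be sketched on invented data: fit a linear VOT trend per speaker and compare the gap between the fitted trajectories at the start and end of the season. The speakers, slopes, and noise level below are hypothetical, not the authors’ corpus or their (more sophisticated) statistical model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: two speakers whose mean VOT (ms) drifts toward a
# common value over a 93-day season. Not the authors' actual measurements.
days = np.arange(93)
vot_a = 70 - 0.10 * days + rng.normal(0, 2, size=93)  # speaker A starts high
vot_b = 50 + 0.10 * days + rng.normal(0, 2, size=93)  # speaker B starts low

# Fit a linear time trend per speaker (ordinary least squares).
fit_a = np.poly1d(np.polyfit(days, vot_a, 1))
fit_b = np.poly1d(np.polyfit(days, vot_b, 1))

# Distance between the modeled trajectories at the start vs. the end:
gap_start = abs(fit_a(0) - fit_b(0))
gap_end = abs(fit_a(92) - fit_b(92))
print(gap_start > gap_end)  # True: the fitted trajectories converge
```

A per-word random effect and speech-rate covariate, as in the abstract, would turn this into a mixed-effects model rather than two separate OLS fits.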

We then examine the effect of two measures of social interaction on differences between VOT trajectories for pairs of contestants. The more a pair interacts, as measured via live blog entries by a UK newspaper, the closer their VOTs become. When a pair is on the same side of an artificial divide in the house (present for half the season), their VOTs become closer than if they are on different sides.


Written by claire.halpert

April 26th, 2010 at 5:00 am

Posted in Talks

MIT Linguistics Colloquium 4/30 - Bruce Hayes

Speaker: Bruce Hayes (UCLA)
Time: Friday, April 30, 2010, 3:30pm-5pm
Location: 32-141 (Stata Center)
Title: Accidentally-true constraints in phonotactic learning

The phonotactic learning system proposed by Hayes and Wilson (2008) follows the principle of the inductive baseline: it tries to learn phonotactics using as few principles of Universal Grammar (UG) as possible. The leading idea is that one could learn from such a system’s failures just as much as from its successes. For instance, the simplest version of the system fails to learn patterns of vowel harmony or unbounded stress, but it becomes able to learn them when amplified with UG principles corresponding to classical autosegmental tiers and metrical grids—thus forming a new kind of argument for such representations.
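As background, the scoring scheme of a MaxEnt phonotactic model of this kind can be sketched in a few lines: a form’s penalty is the weighted sum of the constraints it violates, and its well-formedness score is the exponential of the negated penalty. The two constraints and their weights below are invented for illustration, not taken from Hayes and Wilson’s grammar.

```python
import math

# Toy sketch of MaxEnt phonotactic scoring: each constraint pairs a
# violation test with a weight. Both constraints are invented examples.
constraints = {
    "*#NG": (lambda w: w.startswith("ng"), 4.0),   # no word-initial [ng]
    "*CCC": (lambda w: any(all(c not in "aeiou" for c in w[i:i + 3])
                           for i in range(len(w) - 2)), 2.5),  # no CCC runs
}

def penalty(word):
    """Weighted sum of violated constraints (the form's 'harmony')."""
    return sum(weight for test, weight in constraints.values() if test(word))

def maxent_score(word):
    """exp(-penalty): 1.0 for a violation-free form, lower per violation."""
    return math.exp(-penalty(word))

print(maxent_score("blick"))   # 1.0: violates neither constraint
print(maxent_score("ngick"))   # much lower: violates *#NG (weight 4.0)
```

The learning problem the paper addresses is choosing which constraints enter this dictionary, and with what weights, given only a lexicon.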

There is a second way in which failures of the baseline system might be informative: it could learn too much rather than too little. The baseline system involved a rather permissive concept of what can be a phonotactic constraint: a constraint’s structural description is simply a sequence of feature matrices, each representing one of the natural classes of segments in a language. Where there are C natural classes and constraints are allowed to have n matrices, there will be C^n possible constraints. In actual practice, this can be a very large number.
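To get a feel for the size of that space: with illustrative numbers (these values of C and n are invented for the sketch, not taken from the paper), the count of candidate constraints of up to n matrices is already in the tens of thousands.

```python
# Size of the constraint hypothesis space: C natural classes, constraints
# of length 1 to n feature matrices. C and n here are illustrative only;
# real languages can have many more natural classes.
C, n = 30, 3
total = sum(C**k for k in range(1, n + 1))  # 30 + 900 + 27000
print(total)  # 27930 candidate constraints
```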

With such a large hypothesis space, it is imaginable that the system might find constraints that are “accidentally true”: they have few or no exceptions in the lexicon, but are not apprehended by native speakers and play no role in their phonotactic intuitions. Hayes and Wilson’s learning simulation for the phonotactics of Wargamay may have done this. While the 100 constraints the system learned included 43 that successfully recapitulate the known phonotactic restrictions of this language (Dixon 1981), a further 57 constraints were discovered that struck the authors as complex and phonologically mystifying. An example is *[–approx, +cor][+high, +back, –main][–cons], which forbids sequences of coronal noncontinuants ([d, ɟ, n, ɲ]), followed by unstressed or secondary-stressed [u, uː], followed by a vowel or glide. Almost any phonologist would agree that this is an unlikely configuration for a language to forbid.

Do real speakers apprehend constraints of this kind? I will report an experimental study now in progress that addresses this question for English. When trained on English data, the Hayes/Wilson system behaves just as it did with Wargamay, learning both sensible and accidental-seeming constraints. The experiment used 20 nonce-word quadruplets, each containing:

  1. a word that violates exactly one constraint, of the “accidental” type
  2. a word that is violation-free but otherwise similar to (1)
  3. a word that violates exactly one constraint that would be considered by phonologists to be natural (e.g. a sonority-sequencing constraint), and has a weight similar to the constraint in (1)
  4. a violation-free control word similar to (3).

The results of the experiment indicate that the (1)-(2) difference is considerably smaller than the (3)-(4) difference—i.e. that unnatural constraints really do have a weaker effect on native speaker judgment than natural constraints.

I will then explore two hypotheses that might explain the disparity: (a) a statistical approach based on comparing the explanatory power of added constraints (Wilson 2009); (b) a UG-based approach under which language learners are biased (Wilson 2006) to assign the natural constraints high weights relative to unnatural ones.

References

  • Dixon, Robert M. W. (1981) Wargamay. In Handbook of Australian languages, volume II, ed. Robert M. W. Dixon and Barry J. Blake, 1–144. Amsterdam: John Benjamins.
  • Hayes, Bruce and Colin Wilson (2008) A maximum entropy model of phonotactics and phonotactic learning. Linguistic Inquiry 39: 379-440.
  • Wilson, Colin and Marieke Obdeyn (2009) Simplifying subsidiary theory: statistical evidence from Arabic, Muna, Shona, and Wargamay. Ms., Johns Hopkins University.


Written by albright

April 26th, 2010 at 5:00 am

Posted in Talks

Katz accepts CNRS post-doc

Jonah Katz has accepted a post-doctoral position at the Institut Jean Nicod in Paris, to begin in the fall. Congratulations, Jonah!


Written by albright

April 26th, 2010 at 5:00 am

Posted in Student News