Whamit!

The Weekly Newsletter of MIT Linguistics

Course Announcements: Spring 2019

  • 24.942: Topics in the Grammar of a Less Familiar Language
  • 24.947: Language Disorders in Children
  • 24.954: Pragmatics in Linguistic Theory
  • 24.960: Syntactic Models
  • 24.964: Topics in Phonology
  • 24.966J: Laboratory on the Physiology, Acoustics, and Perception of Speech
  • 24.979: Topics in Semantics
  • 24.981: Topics in Computational Phonology

24.942 Topics in the Grammar of a Less Familiar Language
Kenstowicz, Richards
Students will work with a native speaker of Javanese, examining aspects of its syntax, semantics, and phonology. In the course of doing this, students will acquire techniques for gathering linguistic data from native speakers.

24.947 Language Disorders in Children
Flynn
Reading and discussion of current linguistic theory, first language acquisition and language disorders in young children. Focus on development of a principled understanding of language disorders at the phonological, morphological and syntactic levels. Examines ways in which these disorders confront theories of language and acquisition.

24.954 Pragmatics in Linguistic Theory
Fox, Levy
Formal theories of context-dependency, presupposition, implicature, context-change, focus and topic. Special emphasis on the division of labor between semantics and pragmatics. Applications to the analysis of quantification, definiteness, presupposition projection, conditionals and modality, anaphora, questions and answers.

24.960 Syntactic Models
Pesetsky
Comparison of different proposed architectures for the syntax module of grammar. Subject traces several themes across a wide variety of approaches, with emphasis on testable differences among models. Models discussed include ancient and medieval proposals, structuralism, early generative grammar, generative semantics, government-binding theory/minimalism, LFG, HPSG, TAG, functionalist perspectives and others.

24.964 Topics in Phonology
Steriade
The goal of this class is to understand some of the results of linguistic research using the Artificial Grammar paradigm (Reber 1967, 1993). The AG studies we consider compare learning outcomes between subject groups exposed to artificial languages designed to differ just in whether they present ‘natural’ or ‘unnatural’ patterns. The definition of ‘naturalness’ varies, and understanding what should be meant by this term will be one focus of discussion. Naturalness is commonly defined in terms of attestation, with natural taken to mean well-attested, but this needs rethinking.
Most of the large body of phonological AG work now available dates from the last 15 years. The hypothesis commonly tested is that the naturalness of the target pattern is a determinant of learning success. When this hypothesis is supported, subjects succeed in learning the natural pattern and fail with the unnatural one. In many studies, however, null results are obtained: subjects acquire patterns identified as natural and unnatural to the same extent. We will begin the course by exploring one interpretation of these mixed results, due to Moreton and Pater 2012a,b and Moreton 2008: the successfully learned natural patterns are not necessarily natural in a typological or phonetic sense, but they are simpler than the patterns that the subjects fail to learn. This idea comes with its own learning algorithm and with a distinct view of how phonological typology relates to competence, or doesn’t. It is a useful way to explore large parts of the AG literature, but we will see that it has limitations. A different idea to consider is that the studies that have successfully shown natural patterns to be easier to learn than unnatural ones have focused on faithfulness, or can be reinterpreted in those terms; the null results are found in the markedness domain. If this interpretation continues to look promising, we’ll look for an explanation.
There are other potentially interesting results in the phonological AG work, and we may expand the reading list and consider syntax if participants express an interest. I can’t promise answers to any of the questions raised above, but the best of this AG literature tackles two essential questions in the field: do aspects of a speaker’s grammatical knowledge reflect linguistic typology, and what are the sources of this knowledge?

24.966J Laboratory on the Physiology, Acoustics, and Perception of Speech
Braida, Shattuck-Hufnagel, Choi
Experimental investigations of speech processes. Topics include computer-aided waveform analysis and spectral analysis of speech; synthesis of speech; perception and discrimination of speech-like sounds; speech prosody; models of speech recognition; speech development; analysis of atypical speech; and others. Recommended prerequisite: 6.002, 18.03, or 24.900.

24.979 Topics in Semantics
Fox, Hackl, Schwarzschild
The seminar will focus on topics in degree semantics. Issues to be addressed include:

  • how degree operators interact with individual and modal quantifiers 
  • interpretation of degree modifiers of noun phrases (e.g. at least)
  • analysis of relative clauses modifying nominalized degree predicates (height, weight)
  • whether degrees are primitive or derived 
  • analysis of degree constructions using vectors or directed segments
  • external and internal syntax of comparative clauses

Our starting point will be Schwarzschild 2019, which we will later try to use as the basis for the discussion of older literature. Some of the sessions will be led by registered students in collaboration with the instructors.

24.981 Topics in Computational Phonology
Albright
Computational modeling can usefully inform many aspects of phonological theory. Implementing a theory provides a more rigorous test of its applicability to different data sets, and requires a greater degree of formal precision than is found in purely expository presentations. By training learning models on realistic training samples, we can test whether a posited analysis can actually be discovered from representative data, and we can observe what proportion of the data is actually accounted for by that analysis. Modeling also provides a direct means of testing whether a proposed formal device facilitates the discovery of generalizations, or whether it hampers learning by greatly increasing the size of the search space. In the most interesting cases, computational modeling uncovers facts about the language that would have been difficult to discover by eye, and forces us to ask which facts are treated as linguistically significant by speakers.
Topics will include (subject to revision):
  • Statistical “baseline” models (n-gram models, exemplar models; see the sketch after this list)
  • Algorithms for constraint ranking and weighting
  • Algorithms for constraint discovery
  • Integrating learned and innate constraints
  • Learning in the midst of variation and exceptions, and discovery of gradient patterns
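To give a concrete flavor of the first topic, here is a minimal sketch of an n-gram phonotactic baseline: a bigram model trained on a toy sample of segment strings, scoring new words by smoothed log probability. This is an illustration of the general technique, not course material; all names and data below are hypothetical.

    # Minimal bigram phonotactic baseline (illustrative sketch only;
    # names and toy data are hypothetical, not course materials).
    from collections import Counter
    from math import log

    def train_bigram(words):
        """Count segment bigrams over a toy training sample.
        Each word is a sequence of segments; '#' marks word edges."""
        bigrams = Counter()
        unigrams = Counter()
        for word in words:
            padded = ['#'] + list(word) + ['#']
            for a, b in zip(padded, padded[1:]):
                bigrams[(a, b)] += 1
                unigrams[a] += 1
        return bigrams, unigrams

    def score_word(word, bigrams, unigrams):
        """Log probability of a word under the bigram model,
        with add-one smoothing over the observed segment inventory."""
        vocab = len(unigrams) + 1  # observed segments plus one unseen slot
        padded = ['#'] + list(word) + ['#']
        return sum(
            log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
            for a, b in zip(padded, padded[1:])
        )

    # Toy usage: a model trained on CV-shaped words rates a CV word
    # higher than a word containing unattested clusters.
    training = ['pata', 'tapa', 'kapa', 'pika']
    bg, ug = train_bigram(training)
    print(score_word('papa', bg, ug))  # relatively high score
    print(score_word('ptak', bg, ug))  # penalized: 'pt', 'ak' unseen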
Requirements: readings, small regular problem sets, and a final project with presentation.