The Weekly Newsletter of MIT Linguistics

Issue of Monday, April 19th, 2021

Colloquium 4/23 - Jeff Heinz (Stony Brook University)

Speaker: Jeff Heinz (Stony Brook University)
Title: Making copies
Time: Friday, April 23rd, 3:30pm - 5pm

Abstract: In this talk, I explain how making copies is computationally simpler than recognizing copies (Dolatian and Heinz 2020). Computational perspectives on reduplication, and on copying more generally, have centered on the complexity of the “copy language”, the set of totally reduplicated strings: { ww | w is a string }. It is well known that no context-free grammar can recognize this formal language. This means that the kind of memory required to correctly identify copied strings grows, in the worst case, in a particular way. However, when viewed as a transformation { w → ww | w is a string }, copying can be shown to belong to the regular class of transformations. This regular class should not be confused with the rational class of transformations, on which many current computational morpho-phonological analyses are based. Furthermore, I discuss how attested patterns of reduplication can be classified with respect to subregular classes of transformations relevant to morphology and phonology, and how various insights from linguistic theories are likewise present within this formal system and can be derived from it. The general message is that, far from being a fossilized relic of a previous era, mathematical and theoretical computational linguistics continues to contribute much to our understanding of natural languages.
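The asymmetry in the abstract can be illustrated with a minimal sketch (not Dolatian and Heinz’s construction): a two-way finite-state device computes w → ww with only finite control (a pass counter and a head position) by reading its input twice, whereas recognizing the copy language { ww } requires comparing the two halves of the string, which is beyond any context-free grammar.

```python
def reduplicate(w: str) -> str:
    """Compute the map w -> ww in the style of a two-way transducer:
    two left-to-right passes over the input, copying each symbol,
    using only bounded state rather than storing w in memory."""
    output = []
    for pass_number in (1, 2):   # finite control: which pass we are on
        head = 0                 # the head rewinds to the left edge
        while head < len(w):
            output.append(w[head])  # emit the symbol under the head
            head += 1
    return "".join(output)

def recognizes_copy(s: str) -> bool:
    """Membership in { ww }: direct simulation of the half-by-half
    comparison that no context-free grammar can perform."""
    half, rem = divmod(len(s), 2)
    return rem == 0 and s[:half] == s[half:]
```

The transduction never needs to remember more than its position and pass number, while the recognition problem forces a comparison of unboundedly many symbols across the midpoint.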

Minicourse 4/21-4/22: Jeff Heinz (Stony Brook University)

Speaker: Jeff Heinz (Stony Brook University)

Dates: Wednesday, April 21, 12:30pm-2:00pm EDT, and Thursday, April 22, 12:30pm-2:00pm EDT

Title: Learning Constraints over Representations of Your Own Choosing 


In this 2-day workshop, I present a general method for learning constraints of different complexities over different kinds of representations (Chandlee et al. 2019, Lambert et al. in review). The representations can include whatever information you want (phonological/prosodic/morphological/syntactic features, autosegments, different ordering relations, trees, graphs, and so on) for linguistic objects in any linguistic subfield. I show generally that learning all but the simplest logical kinds of constraints requires prohibitively enormous resources. In contrast, for common representations invoked for phonological words, learning the simpler logical kinds of constraints (1) is feasible, (2) returns constraints of the kind found in phonotactic patterns in the world’s languages, (3) does not require statistics (contra Wilson and Gallagher 2018), and instead (4) succeeds because of the *structure* provided by the logic and the representational choices. These results surprised even me, and I conclude with a discussion distinguishing between *inductive* and *abductive* inference.
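As one hypothetical illustration of the general idea (this is a standard strictly 2-local learner over word-boundary-marked strings, not the workshop’s specific method), the simplest logical constraints can be learned from positive data alone: record the attested adjacent pairs of symbols, and forbid every unattested pair. The hypothesis space is the finite set of 2-factors, which is why no statistics are needed.

```python
from itertools import product

def learn_sl2(words, alphabet):
    """Learn a strictly 2-local constraint set from positive data:
    every 2-factor (adjacent symbol pair, with # marking word edges)
    not attested in the sample becomes a forbidden substring."""
    bounded = ["#" + w + "#" for w in words]  # add word-edge symbols
    attested = {w[i:i + 2] for w in bounded for i in range(len(w) - 1)}
    possible = {a + b for a, b in product(alphabet | {"#"}, repeat=2)}
    return possible - attested                # the learned *constraints

def well_formed(w, forbidden):
    """A word is well formed iff it contains no forbidden 2-factor."""
    w = "#" + w + "#"
    return all(w[i:i + 2] not in forbidden for i in range(len(w) - 1))
```

For example, from the sample ["ba", "bab", "baba"] over {a, b}, the learner forbids factors like "aa" and "bb" (and word-initial "a"), so "bab" is accepted while "ab" is rejected; the representational choice of boundary symbols is what makes edge constraints expressible at all.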