The Weekly Newsletter of MIT Linguistics

Colloquium 10/22 - Omer Preminger (UMD)

Speaker: Omer Preminger (UMD)
Title: Natural language without semiosis
Time: Friday, October 22nd, 3:30pm – 5pm
Location: Zoom (but those on campus can gather in 32-155 to attend the talk together)

Abstract: A traditional view holds that natural language is fundamentally semiotic, in that individual syntactic terminals are Saussurean “signs” (Saussure 1916; Hjelmslev 1943): they are the locus where individual units of form are paired with individual units of meaning.

It has always been clear, however, that more needs to be said to make such a view work. For one thing, it is likely that no two performances of the “same form” are literally identical. But even if we were to artificially abstract away from phonetic detail, there would still remain the issue of allomorphy and, perhaps most pressingly, of alternations in which the resulting forms are not phonotactically predictable (e.g. the alternation of the Korean nominative marker between ‑i and ‑ka). On the meaning side, we must contend with things like systematic polysemy (cf. the artifactual and abstract senses of book in a sentence like This book is old and crumbling but will affect your life like no other). But it is quite widely assumed, in practice if not in theory, that given the right models of allomorphy and polysemy, a semiotic view of natural language can be salvaged. One can see this de facto assumption at work every time anyone asks, “What does the word (or morpheme) w in this language mean?” or “How do you say meaning m in this language?” These are questions that only make sense within a fundamentally semiotic framework. In other words, a common working assumption (if not a theoretical one) is that natural language is composed of signs after all; it’s just that the “forms” and “meanings” that are paired by these signs are more abstract than one might have initially thought—in a way that provides the necessary leeway to capture phenomena like allomorphy (up to and including suppletion) as well as polysemy.

In this talk, I present arguments that even this weaker semiotic characterization is incorrect. I argue that, with the possible exception of single-morpheme utterances (e.g. Ugh!), a proper model of the competence of a native speaker contains no pairings of form and meaning whatsoever. Instead, speaker competence involves: (i) an inventory of syntactic atoms, which are fully abstract (associated with neither form nor meaning); (ii) a list of mapping rules from sets of atoms to forms (“exponents”); (iii) a list of mapping rules from sets of atoms to meanings (“lexical meanings”). Importantly, lists (ii) and (iii) are disjoint objects; they have nothing to do with one another, except in the sense that the competence system associates derived structures (consisting of items from list (i)) with items from list (ii) as well as with items from list (iii). But the relation is necessarily indirect and mediated in this fashion.

It is worth noting that lists (ii) and (iii), in this proposed model, bear some resemblance to the “Vocabulary” and the “Encyclopedia” in Distributed Morphology (DM; see, e.g., Marantz 1997, and references therein). But DM is still a fundamentally semiotic theory: the unit associated with form, albeit in a context-sensitive way, is still the individual syntactic terminal; and the unit associated with meaning is again the individual syntactic terminal (again, with potential allowances for context-sensitivity, in particular when it comes to idiomaticity; see Harley 2014a,b). I present a collection of linguistic properties (some language-specific, and some quite general) that only make sense in light of a more radically non-semiotic model, one in which the relevant mappings are mappings from sets of syntactic terminals to units of form (“exponents”), and from sets of syntactic terminals to listed meanings (“lexical meanings”). I also show why a framework like Nanosyntax (Starke 2009, Caha 2019), which also maps sets of terminals to forms and meanings, falls short of these explanatory goals, due to its failure to properly dissociate syntax-form mappings from syntax-meaning mappings (cf. lists (ii) and (iii), above).