Last week, Jon Rawski (Visiting Assistant Professor) gave an invited colloquium talk at Rutgers Linguistics. The talk was titled **Mathematical Linguistics & Cognitive Complexity**.

**Abstract:** Mathematics is the study of structures, and mathematical linguistics studies linguistic structures. Generative linguistics put the focus squarely on grammars: finite abstractions of the highly structured and complex mental computations of linguistic phenomena. The focus of this talk is the expressive power of a grammar, which directly measures the complexity of any cognitive system that instantiates it. Two key algebraic restrictions have emerged: regularity, and transduction (mappings between finite structures). I will use these notions to describe upper and lower bounds on the weak and strong capacity of human grammars (the kinds of phenomena they cover, and the kinds of structures they posit during a derivation), as well as mismatches between them. These bounds enable precise theory comparison, and I will show how many linguistic theories drastically overgenerate, using reduplication as a case study. I will then consider the expressivity of so-called ‘neural language models’. I will show, using finite model theory and multilinear algebra, how regular transductions can be embedded into the tensor representations used by neural computation and Optimality Theory. I will then present new theoretical bounds on the capacity of language models, connecting various transformer models to classes of first-order transductions. The overall picture that emerges is that, under the lens of mathematical abstraction, linguistic complexity is indeed a window into fundamental aspects of cognition and computation.
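For readers unfamiliar with the reduplication case study mentioned in the abstract, here is a minimal illustrative sketch (not from the talk itself): total reduplication maps a stem to two copies of itself, as in Indonesian *orang* 'person' → *orang-orang* 'people'. As a string-to-string map it is trivial to state, yet unbounded copying lies beyond the power of one-way finite-state transducers, which is the kind of expressivity gap that bounds on transductions make precise.

```python
def total_reduplication(stem: str) -> str:
    # Total reduplication: the output is the stem concatenated with
    # a full copy of itself. A one-way finite-state transducer cannot
    # compute this for unboundedly long stems, since it would need to
    # remember the entire input; a two-way (regular) transduction can.
    return stem + stem

print(total_reduplication("orang"))  # orangorang
```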