Kyle Gorman
Department of Linguistics
CUNY
Black box phonology
There has been recent interest in the ability of neural network sequence-to-sequence models to acquire "irregular" morphophonological generalizations. While it is not often made explicit, this body of work views such models as potential computational models of language acquisition. I argue that these models, as they currently exist, are implausible cognitive models of acquisition, since they require unprincipled hacks and repeated passes through large amounts of data during training. However, these models are powerful domain-general pattern-learning devices ("black boxes"), and their success (or failure) in inflecting unseen words argues against (or for) phonological abstractness. In two case studies—Polish declension and Spanish conjugation—I find that computational models struggle to predict "irregular" generalizations that are well modeled with abstract phonological devices.