Department of Linguistics
University of Maryland
Language, the world, and sustained neural activity
Even before evolving the capacity for language, we already had the ability to represent many aspects of the world: categories of entities, their perceptual and non-perceptual properties (e.g. dangerous, nutritious), and, crucially, the *novel relations* these entities enter into with locations in space, or with each other. A language ‘front-end’ was thus built on top of an existing neural architecture for comprehending the world and updating our knowledge of it; consider, for example, the dual-stream ventral/dorsal division of labor observed in vision for identifying types vs. tracking individuals. Yet psycholinguists and neurolinguists like me often don’t approach the problem of language comprehension with this background in mind. Our theories of the non-linguistic interface tend to focus too much on ‘concept activation’ and too little on the novel, on-the-fly representations of the world that those concepts enter into. In this talk I’ll argue that we can better understand neural ‘language comprehension’ responses by remembering that some of them must reflect the combinatorial, non-linguistic mental model (or situation model) generated in response to the sentence. Building on the visual working memory literature, I’ll suggest that the contribution of inferior parietal cortex during comprehension may reflect its non-linguistic role in indexing entities in this model, and that many cases of sustained neural responses in ERP studies of language could similarly reflect properties of the mental model that the sentence provides instructions for updating, rather than properties of the linguistic representation itself.