ILST Seminar: Soo-Hwan Lee

October 11, 2024
1:30 PM - 3:00 PM

111 Levin Building

Soo-Hwan Lee
Language and Cognition Lab
University of Pennsylvania

Do language models (LMs) know how to be polite? LM performance on linguistic dependencies sensitive to politeness

Politeness is often associated with a degree of formality that the speaker conveys to the addressee. Languages such as Korean and Hindi feature morphemes that are sensitive to politeness. Some of these morphemes depend on one another, forming a linguistic dependency. While language model (LM) performance on syntactic dependencies such as filler-gap dependencies (Wilcox et al., 2018), anaphor binding (Hu et al., 2020), and control phenomena (Lee & Schuster, 2022) has been explored in recent years, little work has been done on non-syntactic dependencies that reflect politeness, especially in languages other than English. This talk presents preliminary findings on how LMs perform on dependencies sensitive to politeness.