University of Kentucky
Distributional Models and Sound Change
Quantitative models of sound change (quantitative models of everything, really) mostly focus on shifts in the central tendency (the location parameter, or $\mu$) of pronunciations. However, the variance of distributions, both within and between speakers, is also a potentially time-varying factor, and one of some descriptive interest. For example, if sound change proceeds via a vanguard of speakers staking out an advanced position at the outset of the change, after which the rest of the community catches up, that would produce a large variance parameter at the start of the change, followed by a narrowing.
I have, in the past, explored these kinds of questions by writing my own autoregression models in Stan. While Stan offered the flexibility of specifying my own model completely, there were a few drawbacks. First, it was time intensive to write the initial models, and adding even one more predictor took nearly as long. Second, since my background is not in statistics, I couldn't be sure I was writing efficient models, and when there were convergence errors (or divergences), it was difficult to tell whether they stemmed from model complexity or from coding error.
In this talk, I'll be discussing how I've explored these "distributional models" using the brms package, which converts models specified in an expanded form of the R formula syntax into a Stan model, and allows smooth terms to be specified in the same way as in a generalized additive model. The results are somewhat surprising: between- and within-speaker variance parameters do not change considerably over the course of a sound change, even as the location parameter does.
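As a rough sketch of what such a brms specification looks like (the variable names `f1`, `dob`, and `speaker` are hypothetical, not from the talk), the location and the residual standard deviation can each be given their own formula, with smooths over time:

```r
library(brms)

# Illustrative sketch of a distributional model: both the mean and the
# (log-link) residual SD of a vowel measurement are modeled as smooth
# functions of speaker date of birth.
fit <- brm(
  bf(
    f1 ~ s(dob) + (1 | speaker),  # location (mu): changes over apparent time
    sigma ~ s(dob)                # within-speaker variance may vary too
  ),
  data = vowels,        # hypothetical data frame
  family = gaussian()
)
```

Here `bf()` bundles the two formulas together, and brms compiles the whole specification into Stan code behind the scenes, so adding a predictor is a one-line change rather than a rewrite of the model.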