Josef Fruehwald (2017)
Pronunciations play out across two radically different time scales. The first is on the order of milliseconds, from the beginning of pronouncing a speech sound to the end, during which your tongue and other articulators carry out carefully detailed and coordinated gestures. The second is across generational time, as the conventional pronunciation of some sounds very gradually shifts. In this talk, I’ll be presenting work I have recently been doing to try to model and understand these two time domains of pronunciation simultaneously using generalized additive mixed effects models (GAMMs). I’ll briefly cover how it is possible to engage in this kind of research through the use of archival recordings, how to specify a GAMM to account for the random effects structure and autocorrelation of measurements, and (if there’s time) how to simulate samples from the posterior to estimate credible intervals for parameters of interest.
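As a rough illustration of the modelling workflow described above, the sketch below fits a GAMM with `mgcv::bam()`, using a random effect smooth for speakers and an AR(1) term for autocorrelated residuals, then simulates from the approximate posterior of the coefficients to get credible intervals. The data and all variable names (`vowel_df`, `F1`, `year_of_birth`, `speaker`, `ar_start`) are hypothetical placeholders, not the talk's actual model or data.

```r
library(mgcv)
library(MASS)

# Simulated stand-in for formant measurements from speakers
# spread across 80 years of dates of birth (hypothetical data)
set.seed(1)
vowel_df <- data.frame(
  year_of_birth = rep(1900:1979, each = 10),
  speaker       = factor(rep(1:80, each = 10))
)
vowel_df$F1 <- 500 +
  0.5 * (vowel_df$year_of_birth - 1940) +          # slow generational drift
  rnorm(80, sd = 20)[vowel_df$speaker] +           # by-speaker offsets
  as.numeric(arima.sim(list(ar = 0.6), n = nrow(vowel_df)))  # AR(1) noise

# Flag the first measurement of each speaker so bam() knows where
# each AR(1) run of residuals begins
vowel_df$ar_start <- !duplicated(vowel_df$speaker)

mod <- bam(
  F1 ~ s(year_of_birth) +       # smooth generational trend
       s(speaker, bs = "re"),   # random intercepts for speakers
  data     = vowel_df,
  rho      = 0.6,               # assumed AR(1) residual correlation
  AR.start = vowel_df$ar_start
)

# Simulate from the approximate posterior of the coefficients to
# estimate a 95% credible interval for the generational trend
newd  <- data.frame(year_of_birth = 1900:1979,
                    speaker = vowel_df$speaker[1])
Xp    <- predict(mod, newd, type = "lpmatrix", exclude = "s(speaker)")
betas <- mvrnorm(1000, coef(mod), vcov(mod))
sims  <- Xp %*% t(betas)                      # 80 x 1000 fitted curves
ci    <- apply(sims, 1, quantile, probs = c(0.025, 0.975))
```

This is only a minimal sketch: in practice `rho` would be chosen by inspecting the residual autocorrelation of an initial fit, and the smooth and random effects structure would be richer than a single trend plus random intercepts.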
Presented at EdinbR: The Edinburgh R User Group