Validating and Optimizing a Crowdsourced Method for Gradient Measures of Child Speech

Document Type

Conference Proceeding

Publication Date



There is broad consensus that speech sound development is a gradual process, with acoustic measures frequently revealing covert contrasts between sounds perceived as identical. Well-constructed perceptual tasks using Visual Analog Scaling (VAS) can capture these gradient differences. However, this method has not seen widespread uptake in speech acquisition research, possibly because VAS data collection is time-intensive. This project tested the validity of streamlined VAS data collection via crowdsourcing. It also addressed a methodological question that would be difficult to answer through conventional data collection: when collecting ratings of speech samples elicited from multiple individuals, should those samples be presented in fully random order or grouped by speaker? One hundred naïve listeners recruited through Amazon Mechanical Turk provided VAS ratings for 120 /r/ words produced by four children before, during, and after intervention. Fifty listeners rated the stimuli in fully randomized order and fifty in grouped-by-speaker order. Mean click location was compared against an acoustic standard, and the standard error of click location was used to index variability. In both conditions, mean click location was highly correlated with the acoustic measure, supporting the validity of speech ratings obtained via crowdsourcing. Variability was lower in the grouped presentation condition.
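The validity and variability analyses described above can be sketched in a few lines. The following is a minimal illustration, not the authors' actual analysis: the data are simulated, the acoustic measure is assumed to be a single normalized score per token (for /r/, something like an F3-F2 distance is commonly used), and the listener counts mirror those reported in the abstract (50 ratings per token on a 0-1 VAS scale).

```python
# Hypothetical sketch of the two analyses in the abstract:
# (1) validity: correlate mean VAS click location with an acoustic standard;
# (2) variability: index each token's rating spread by the standard error
#     of click location. All data below are simulated, not real study data.
import math
import random
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
    return num / den

def standard_error(samples):
    """Standard error of the mean: sample SD over sqrt(n)."""
    return stdev(samples) / math.sqrt(len(samples))

random.seed(0)

# Simulated stimuli: 120 /r/ tokens, each with a normalized acoustic score
# (0 = clearly non-rhotic, 1 = clearly rhotic) and 50 listener clicks per
# token, modeled as the acoustic score plus Gaussian rater noise, clamped
# to the 0-1 VAS line.
acoustic = [random.uniform(0.0, 1.0) for _ in range(120)]
clicks = [[min(1.0, max(0.0, a + random.gauss(0.0, 0.1))) for _ in range(50)]
          for a in acoustic]

mean_clicks = [mean(token_clicks) for token_clicks in clicks]
r = pearson(acoustic, mean_clicks)          # validity check
ses = [standard_error(token_clicks) for token_clicks in clicks]  # variability

print(f"correlation with acoustic standard: r = {r:.3f}")
print(f"mean standard error of click location: {mean(ses):.3f}")
```

In the real study, the variability comparison would be run separately on the randomized and grouped-by-speaker listener groups; here a single simulated group stands in for both.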
