INTERSPEECH 2021
VocalTurk: Exploring Feasibility of Crowdsourced Speaker Identification
This paper presents VocalTurk, a feasibility study of crowdsourced speaker identification based on our worker dataset collected on Amazon Mechanical Turk.
Crowdsourced data labeling is now widely used in speech data processing, but empirical analyses that answer common questions, such as how accurately workers can label speech data and what a good speech-labeling microtask interface looks like, remain underexplored, which limits the quality and scale of dataset collection.
Focusing on the speaker identification task in particular, we conducted two studies on Amazon Mechanical Turk: i) we hired 3,800+ unique workers and tested their accuracy and confidence in answering voice pair comparison tasks, and ii) we additionally assigned more difficult 1-vs-N voice set comparison tasks to 350+ top-scoring workers and tested their accuracy and speed across N = {1, 3, 5}.
The results revealed several positive findings that should motivate speech researchers toward crowdsourced data labeling: the top-scoring workers labeled our voice comparison pairs with 99% accuracy after majority voting, and they were even capable of batch-labeling, which shortened their completion time by up to 34% with no statistically significant degradation in accuracy.
Research Paper
VocalTurk: Exploring Feasibility of Crowdsourced Speaker Identification
Authors: Susumu Saito, Yuta Ide, Teppei Nakano, and Tetsuji Ogawa
To appear in the INTERSPEECH 2021 Proceedings.
Datasets
All the collected worker responses can be downloaded here.
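The paper reports pair-level accuracy after majority voting over individual worker responses. As a minimal sketch of that aggregation step, the Python snippet below assumes a hypothetical CSV schema for the downloaded responses (columns `pair_id` and `label`; the actual file layout may differ) and picks the most frequent label per voice pair:

```python
import csv
from collections import Counter, defaultdict

def majority_vote(csv_path):
    """Aggregate per-worker labels into one label per voice pair.

    Assumes (hypothetically) that each row has:
      pair_id - identifier of the voice comparison pair
      label   - the worker's answer, e.g. "same" or "different"
    """
    votes = defaultdict(Counter)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            votes[row["pair_id"]][row["label"]] += 1
    # Keep the most frequent label for each pair (ties break arbitrarily).
    return {pair_id: counter.most_common(1)[0][0]
            for pair_id, counter in votes.items()}

if __name__ == "__main__":
    # "worker_responses.csv" is a placeholder file name, not the actual dataset name.
    labels = majority_vote("worker_responses.csv")
    for pair_id, label in sorted(labels.items()):
        print(pair_id, label)
```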