File:Achiving human parity in conversational speech recognition 1610.05256v1.pdf

Achiving_human_parity_in_conversational_speech_recognition_1610.05256v1.pdf (0 × 0 pixels, file size: 292 KB, MIME type: application/pdf)
  • W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stolcke, D. Yu and G. Zweig
  • Microsoft Research Technical Report MSR-TR-2016-71

Abstract

Conversational speech recognition has served as a flagship speech recognition task since the release of the DARPA Switchboard corpus in the 1990s. In this paper, we measure the human error rate on the widely used NIST 2000 test set, and find that our latest automated system has reached human parity. The error rate of professional transcriptionists is 5.9% for the Switchboard portion of the data, in which newly acquainted pairs of people discuss an assigned topic, and 11.3% for the CallHome portion where friends and family members have open-ended conversations. In both cases, our automated system establishes a new state-of-the-art, and edges past the human benchmark. This marks the first time that human parity has been reported for conversational speech. The key to our system’s performance is the systematic use of convolutional and LSTM neural networks, combined with a novel spatial smoothing method and lattice-free MMI acoustic training.

Index Terms — Conversational speech recognition, convolutional neural networks, recurrent neural networks, VGG, ResNet, LACE, BLSTM, spatial smoothing.
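
The abstract quotes word error rates of 5.9% and 11.3% and compares them with a human benchmark. As a minimal illustration of how a word error rate of this kind is typically computed (substitutions, deletions, and insertions found by a word-level alignment, divided by the number of reference words), here is a small Python sketch. It is only a toy under stated assumptions: the paper's actual scoring relies on the NIST evaluation tooling, with its own text normalization and alignment rules that this example does not reproduce, and the function and example sentences below are hypothetical.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Toy WER: word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                                  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                                  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # match or substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

if __name__ == "__main__":
    ref = "so what do you think about the topic"
    hyp = "so what you think about the topics"
    # one deletion ("do") and one substitution ("topic" -> "topics") over 8 reference words
    print(f"WER = {word_error_rate(ref, hyp):.1%}")   # WER = 25.0%

In this framing, "human parity" means that the system's word error rate on the same test set is at or below the error rate measured for professional human transcription of the same audio.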

File history

Click on a date/time to view the file as it appeared at that time.

Date/Time | Dimensions | User | Comment
current: 13:44, 22 December 2016 | 0 × 0 (292 KB) | Slikos (talk | contribs) |

The following page links to this file: