Attention-Based Models for Speech Recognition
Jan Chorowski, University of Wrocław, Poland, jan.chorowski@ii.uni.wroc.pl
Dzmitry Bahdanau, Jacobs University Bremen, Germany
Dmitriy Serdyuk, Université de Montréal
Kyunghyun Cho, Université de Montréal
Yoshua Bengio, Université de Montréal, CIFAR Senior Fellow
Abstract
Recurrent sequence generators conditioned on input data through an attention mechanism have recently shown very good performance on a range of tasks including machine translation, handwriting synthesis [1, 2] and image caption generation [3]. We extend the attention mechanism with features needed for speech recognition. We show that while an adaptation of the model used for machine translation in [2] reaches a competitive 18.7% phoneme error rate (PER) on the TIMIT phoneme recognition task, it can only be applied to utterances which are roughly as long as the ones it was trained on. We offer a qualitative explanation of this failure and propose a novel and generic method of adding location-awareness to the attention mechanism to alleviate this issue. The new method yields a model that is robust to long inputs and achieves 18% PER on single utterances and 20% on 10-times longer (repeated) utterances. Finally, we propose a change to the attention mechanism that prevents it from concentrating too much on single frames, which further reduces PER to 17.6%.
Keywords: Deep bidirectional recurrent network (BiRNN), Long short-term memory units (LSTM), Gated recurrent units (GRU), TIMIT corpus
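The abstract names two mechanisms without giving formulas: location-aware scoring, in which the attention energy also sees convolutional features of the previous alignment, and a normalisation that keeps the weights from collapsing onto single frames. The following is a minimal NumPy sketch of one step of such an attention, assuming the usual additive (MLP) scorer; all parameter names (`W`, `V`, `U`, `b`, `w`, `F`) and shapes are illustrative, not the paper's notation.

```python
import numpy as np

def location_aware_attention(s_prev, h, alpha_prev, W, V, U, b, w, F):
    """One attention step (sketch, not the paper's exact formulation).

    s_prev     : (d_s,)       previous decoder state
    h          : (T, d_h)     encoder states for T input frames
    alpha_prev : (T,)         alignment weights from the previous output step
    W, V, U    : projections into a shared attention space of width d_att
    b, w       : (d_att,)     bias and scoring vector
    F          : (k, n_filt)  1-D convolution filters applied to alpha_prev
    Returns the new alignment weights, shape (T,).
    """
    T, k = len(alpha_prev), F.shape[0]
    # Location features: convolve the previous alignment so the scorer
    # knows where the model attended at the last output step.
    a = np.pad(alpha_prev, (k // 2, k // 2))
    f = np.stack([a[t:t + k] @ F for t in range(T)])         # (T, n_filt)

    # Additive energies: one scalar score per input frame.
    e = np.tanh(s_prev @ W + h @ V + f @ U + b) @ w          # (T,)

    # Sigmoid-based normalisation instead of softmax: a bounded,
    # less peaky weighting that discourages attending to one frame only.
    sig = 1.0 / (1.0 + np.exp(-e))
    return sig / sig.sum()
```

Replacing the softmax's unbounded exponential with a bounded sigmoid before normalising is one way to realise the "smoothing" the abstract alludes to: the weights still sum to one, but no single frame can dominate as sharply.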