END-TO-END ATTENTION-BASED LARGE VOCABULARY SPEECH RECOGNITION (arXiv:1508.04395)

Dzmitry Bahdanau∗, Jan Chorowski†, Dmitriy Serdyuk‡, Philemon Brakel‡ and Yoshua Bengio‡1
∗Jacobs University Bremen, †University of Wrocław, ‡Université de Montréal, 1 CIFAR Fellow


Many of the current state-of-the-art Large Vocabulary Continuous Speech Recognition (LVCSR) systems are hybrids of neural networks and Hidden Markov Models (HMMs). Most of these systems contain separate components that deal with acoustic modelling, language modelling and sequence decoding. We investigate a more direct approach in which the HMM is replaced with a Recurrent Neural Network (RNN) that performs sequence prediction directly at the character level. Alignment between the input features and the desired character sequence is learned automatically by an attention mechanism built into the RNN. For each predicted character, the attention mechanism scans the input sequence and chooses relevant frames. We propose two methods to speed up this operation: limiting the scan to a subset of the most promising frames, and pooling the information contained in neighboring frames over time, thereby reducing the source sequence length. Integrating an n-gram language model into the decoding process yields recognition accuracies similar to other HMM-free RNN-based approaches.
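The two speed-up ideas from the abstract — restricting attention to a window of promising frames, and pooling neighboring frames to shorten the source sequence — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the fixed window width, and the mean-pooling stride are illustrative assumptions.

```python
import numpy as np

def pool_frames(frames, stride=2):
    """Shorten the source sequence by averaging `stride` neighboring
    encoder frames (illustrative stand-in for the paper's pooling)."""
    n = (len(frames) // stride) * stride  # drop a trailing remainder frame
    return frames[:n].reshape(-1, stride, frames.shape[1]).mean(axis=1)

def windowed_attention(scores, center, width=5):
    """Softmax attention weights restricted to a window of frames around
    `center` (e.g. the previous alignment position); frames outside the
    window get zero weight, so only a subset of frames is scanned."""
    lo, hi = max(0, center - width), min(len(scores), center + width + 1)
    weights = np.zeros_like(scores)
    w = np.exp(scores[lo:hi] - scores[lo:hi].max())  # numerically stable softmax
    weights[lo:hi] = w / w.sum()
    return weights
```

In a full decoder, the window center would track the alignment found for the previous output character, so the scan cost per character stays constant rather than growing with utterance length.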

Index Terms — neural networks, LVCSR, attention, speech recognition, ASR, Weighted Finite State Transducer (WFST), Encoder-Decoder network, Wall Street Journal (WSJ) corpus, Connectionist Temporal Classification (CTC), Gated recurrent units (GRU)
