File:Visualizing and understanding recurrent networks 1506.02078v2.pdf

Visualizing_and_understanding_recurrent_networks_1506.02078v2.pdf (file size: 2.85 MB, MIME type: application/pdf)

Andrej Karpathy∗ Justin Johnson∗ Li Fei-Fei Department of Computer Science, Stanford University {karpathy,jcjohns,feifeili}@cs.stanford.edu

ABSTRACT

Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. However, while LSTMs provide exceptional results in practice, the source of their performance and their limitations remain rather poorly understood. Using character-level language models as an interpretable testbed, we aim to bridge this gap by providing an analysis of their representations, predictions and error types. In particular, our experiments reveal the existence of interpretable cells that keep track of long-range dependencies such as line lengths, quotes and brackets. Moreover, our comparative analysis with finite horizon n-gram models traces the source of the LSTM improvements to long-range structural dependencies. Finally, we provide an analysis of the remaining errors and suggest areas for further study.
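To make the "finite horizon n-gram" baseline mentioned above concrete, here is a minimal sketch (not the authors' code) of a character-level n-gram model: it can only condition on the previous n−1 characters, which is exactly the fixed-horizon limitation the paper contrasts with the LSTM's ability to track long-range structure. The function names and the toy training string are illustrative assumptions.

```python
from collections import defaultdict, Counter

def train_char_ngram(text, n=3):
    """Count next-character frequencies for each (n-1)-character context.

    This is the 'finite horizon': the model sees only n-1 characters of
    history, so dependencies longer than that (quotes, brackets, line
    lengths) are invisible to it.
    """
    counts = defaultdict(Counter)
    for i in range(len(text) - n + 1):
        context, nxt = text[i:i + n - 1], text[i + n - 1]
        counts[context][nxt] += 1
    return counts

def predict_next(counts, context):
    """Return the most likely next character for a context, or None if unseen."""
    dist = counts.get(context)
    if not dist:
        return None
    return dist.most_common(1)[0][0]

# Toy example: a 3-gram model conditions on just the previous two characters.
model = train_char_ngram("the cat sat on the mat. the cat sat.", n=3)
print(predict_next(model, "th"))  # -> 'e' (most frequent continuation of "th")
```

Comparing such a model's per-character predictions against an LSTM's, as the paper does, isolates which errors stem specifically from the n-gram's bounded context.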

File history

Current version: 12:01, 22 December 2016 · 2.85 MB · Uploaded by Slikos