File:3112-efficient-learning-of-sparse-representations-with-an-energy-based-model.pdf


Marc’Aurelio Ranzato, Christopher Poultney, Sumit Chopra, Yann LeCun
Courant Institute of Mathematical Sciences, New York University, New York, NY 10003
{ranzato,crispy,sumit,yann}@cs.nyu.edu

Abstract

We describe a novel unsupervised method for learning sparse, overcomplete features. The model uses a linear encoder, and a linear decoder preceded by a sparsifying non-linearity that turns a code vector into a quasi-binary sparse code vector. Given an input, the optimal code minimizes the distance between the output of the decoder and the input patch while being as similar as possible to the encoder output. Learning proceeds in a two-phase EM-like fashion: (1) compute the minimum-energy code vector, (2) adjust the parameters of the encoder and decoder so as to decrease the energy. The model produces “stroke detectors” when trained on handwritten numerals, and Gabor-like filters when trained on natural image patches. Inference and learning are very fast, requiring no preprocessing, and no expensive sampling. Using the proposed unsupervised method to initialize the first layer of a convolutional network, we achieved an error rate slightly lower than the best reported result on the MNIST dataset. Finally, an extension of the method is described to learn topographical filter maps.
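The two-phase scheme described above can be sketched in a few lines of NumPy. This is a minimal illustration only: the logistic used as the sparsifying non-linearity, the squared-error energy terms, plain gradient descent for both phases, and all dimensions and step sizes are assumptions for the sketch, not the paper's exact formulation.

<pre>
# Hedged sketch of the two-phase (EM-like) learning scheme from the abstract.
# The logistic sparsifier, energy terms, and hyper-parameters are illustrative
# assumptions, not the exact model from the paper.
import numpy as np

rng = np.random.default_rng(0)

d, m = 256, 512                                # input dim (16x16 patch), overcomplete code dim
W_enc = 0.01 * rng.standard_normal((m, d))     # linear encoder
W_dec = 0.01 * rng.standard_normal((d, m))     # linear decoder

def sparsify(z):
    """Sparsifying non-linearity turning a code into a quasi-binary sparse
    code (a plain logistic as a stand-in)."""
    return 1.0 / (1.0 + np.exp(-z))

def energy(x, z):
    """Reconstruction error plus distance between code and encoder output."""
    recon = W_dec @ sparsify(z)
    return np.sum((x - recon) ** 2) + np.sum((z - W_enc @ x) ** 2)

def infer_code(x, steps=50, lr=0.05):
    """Phase 1: gradient descent on the energy w.r.t. the code vector z."""
    z = W_enc @ x                              # start from the encoder prediction
    for _ in range(steps):
        s = sparsify(z)
        recon_err = W_dec @ s - x
        grad = 2 * (W_dec.T @ recon_err) * s * (1 - s) + 2 * (z - W_enc @ x)
        z -= lr * grad
    return z

def update_params(x, z, lr=0.01):
    """Phase 2: one gradient step on encoder and decoder weights, z held fixed."""
    global W_enc, W_dec
    s = sparsify(z)
    recon_err = W_dec @ s - x
    W_dec -= lr * 2 * np.outer(recon_err, s)
    W_enc -= lr * 2 * np.outer(W_enc @ x - z, x)

def train(patches, epochs=10):
    """Alternate the two phases over a set of (centred) image patches."""
    for _ in range(epochs):
        for x in patches:
            z = infer_code(x)                  # (1) minimum-energy code
            update_params(x, z)                # (2) decrease the energy

# Usage with random stand-in data; real inputs would be image patches:
# train(rng.standard_normal((1000, d)))
</pre>

In the paper's setting the learned decoder columns act as the basis functions (stroke detectors on handwritten digits, Gabor-like filters on natural image patches), while the encoder provides a fast feed-forward approximation of the optimal sparse code.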
