Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (arXiv:1502.03167v3)

Sergey Ioffe, Google Inc. (sioffe@google.com)
Christian Szegedy, Google Inc. (szegedy@google.com)

Abstract

Training Deep Neural Networks is complicated by the fact that the distribution of each layer’s inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
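The core of the method is the batch-normalizing transform: for each feature, activations are normalized using the mean and variance computed over the current mini-batch, then scaled and shifted by learned parameters γ and β so the layer retains its representational power. Below is a minimal NumPy sketch of this transform in training mode, assuming a 2-D mini-batch of shape (examples, features); the function name `batch_norm_train`, the choice ε = 1e-5, and the toy inputs are illustrative, and the running-statistics bookkeeping used at inference time is omitted.

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Batch-normalizing transform over a mini-batch (training mode only).

    x:     (m, d) mini-batch of layer inputs, m examples, d features
    gamma: (d,) learnable per-feature scale
    beta:  (d,) learnable per-feature shift
    """
    mu = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize to zero mean, unit variance
    return gamma * x_hat + beta            # scale and shift the normalized values

# Usage: a random mini-batch of 32 examples with 4 features,
# deliberately offset and scaled to show the effect of normalization.
x = np.random.randn(32, 4) * 5.0 + 3.0
y = batch_norm_train(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0))  # ~0 per feature
print(y.std(axis=0))   # ~1 per feature (with gamma=1, beta=0)
```

Because the statistics are computed per mini-batch, each example’s normalized value depends on the other examples in the batch; this coupling is the source of the regularization effect mentioned in the abstract.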
