Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (Paper Notes)

Paper: https://arxiv.org/abs/1502.03167
MNIST classification with BN in TensorFlow: https://blog.****.net/weixin_43551972/article/details/102629329

This is a paper from Google published at ICML 2015, and the resulting network surpassed human-level accuracy on the ImageNet classification dataset, so even though I had already read many papers, I was especially curious about this one. It is not an easy read: after going through it several times there are still parts I am not sure I understand correctly. But now that my own paper has been submitted, I finally have time to work through it properly; wishing myself good luck first.

Preliminaries

  1. Internal covariate shift
    The difficulty of training deep models is that the distribution of each layer's inputs keeps changing as the parameters of the preceding layers change, and small changes get amplified as the network deepens, making the model unstable and hurting performance. The usual remedies are lower learning rates and carefully chosen initializations, but these are empirical tricks that are hard to get right, so a lot of time is wasted on tuning, as many of us know all too well. Saturating nonlinearities (e.g. sigmoid activations) make training even harder. The paper calls this change in the distribution of internal activations internal covariate shift.
  2. Vanishing and exploding gradients: in a deep network, tiny parameter changes can cause large changes in the gradients and push activations into the saturated regime of the sigmoid, stalling training.
  3. Whitening: the goal is to reduce redundancy in the inputs. After whitening, the features of the new data are only weakly correlated and all have the same variance (a minimal sketch follows this list).
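To make the whitening idea concrete, here is a minimal NumPy sketch of ZCA whitening (not from the paper; the function name `zca_whiten` and the epsilon value are my own choices): it centers the data, computes the covariance matrix, and multiplies by its inverse square root so that the output features are decorrelated with roughly unit variance.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten a data matrix X of shape (N, D): zero mean,
    decorrelated features with (approximately) unit variance."""
    X = X - X.mean(axis=0)                         # center each feature
    cov = X.T @ X / X.shape[0]                     # D x D covariance matrix
    U, S, _ = np.linalg.svd(cov)                   # eigendecomposition of cov
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T  # Cov^{-1/2}
    return X @ W

# Correlated toy data, then check the whitened covariance is ~ identity.
X = np.random.randn(1000, 3) @ np.array([[2.0, 0.5, 0.0],
                                         [0.0, 1.0, 0.3],
                                         [0.0, 0.0, 0.1]])
Xw = zca_whiten(X)
print(np.round(np.cov(Xw, rowvar=False), 2))
```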

Advantages of BN:

  1. Increase learning rate. We have been able to achieve a training speedup from higher learning rates, with no ill side effects.
  2. Remove Dropout. Batch Normalization fulfills some of the same goals as Dropout. Removing Dropout from Modified BN-Inception speeds up training, without increasing overfitting.
  3. Reduce the L2 weight regularization. While in Inception an L2 loss on the model parameters controls overfitting, in Modified BN-Inception the weight of this loss is reduced by a factor of 5.
  4. Accelerate the learning rate decay. In training Inception, learning rate was decayed exponentially.
  5. Shuffle training examples more thoroughly. We enabled within-shard shuffling of the training data, which prevents the same examples from always appearing in a mini-batch together.
  6. Reduce the photometric distortions. Because batch-normalized networks train faster and observe each training example fewer times, we let the trainer focus on more “real” images by distorting them less.

Some limitations of BN

  1. In domains where internal covariate shift and vanishing/exploding gradients are particularly severe, such as Recurrent Neural Networks, the paper does not show whether BN can still improve gradient propagation and model performance.
  2. Whether BN helps with domain adaptation is not studied. BN makes it easy to map data to a new distribution, but it only matches the mean and variance of that distribution, and such a transformation does not guarantee that the two data domains end up the same.

Batch Normalization (BN)

Stochastic Gradient Descent (SGD)

SGD optimizes the network parameters $\Theta$ by minimizing the following objective:

$$\Theta = \arg\min_{\Theta}\frac{1}{N}\sum_{i=1}^{N}\ell(x_i,\Theta)$$

To speed up training, a mini-batch of $m$ examples is used to approximate the gradient of the loss over the whole training set by computing:

$$\frac{1}{m}\sum_{i=1}^{m}\frac{\partial \ell(x_i,\Theta)}{\partial \Theta}$$

Although SGD is simple and effective, the model hyperparameters (the learning rate and the initial parameter values) must be tuned very carefully because of internal covariate shift, which in principle could be handled with the machinery of domain adaptation. The shift exists not only at the input of the whole network but also at the input of every sub-network. Consider a network computing

$$\ell = F_2(F_1(u,\Theta_1),\Theta_2)$$

where $F_1$ and $F_2$ are two different transformations. Letting $x = F_1(u,\Theta_1)$, we obtain a sub-network

$$\ell = F_2(x,\Theta_2)$$

whose gradient descent step is:

$$\Theta_2 \leftarrow \Theta_2 - \frac{\alpha}{m}\sum_{i=1}^{m}\frac{\partial F_2(x_i,\Theta_2)}{\partial \Theta_2}$$
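As a quick illustration of this update rule (not from the paper), here is a minimal NumPy sketch of one mini-batch SGD step on the sub-network parameters $\Theta_2$; the toy loss $F_2$ and the names `sgd_step` and `grad_F2` are assumptions made only for this example.

```python
import numpy as np

def sgd_step(theta2, x_batch, grad_F2, alpha=0.1):
    """One mini-batch SGD step on the sub-network parameters Theta_2:
    Theta_2 <- Theta_2 - (alpha/m) * sum_i dF2(x_i, Theta_2)/dTheta_2."""
    m = len(x_batch)
    grad = sum(grad_F2(x_i, theta2) for x_i in x_batch) / m
    return theta2 - alpha * grad

# Toy sub-network loss F2(x, theta) = (theta*x - 1)^2, so dF2/dtheta = 2*(theta*x - 1)*x.
grad_F2 = lambda x, th: 2.0 * (th * x - 1.0) * x
theta2 = 0.0
for _ in range(100):
    batch = np.random.uniform(0.5, 1.5, size=32)  # x_i produced by the first sub-network
    theta2 = sgd_step(theta2, batch, grad_F2)
print(theta2)  # converges near the minimizer of E[(theta*x - 1)^2]
```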

Towards Reducing Internal Covariate Shift

1. It is well known that training converges faster and is more stable when the inputs have zero mean and unit variance. So, similarly to whitening, we could modify the activations directly at every training step or at some interval. But these modifications are interleaved with the optimization steps: gradient descent updates the parameters, and if the normalization's dependence on those parameters is not part of backpropagation, the parameters can blow up. For example, suppose a layer adds a learned bias and we then subtract the mean of the activations over the training set:

$$\hat{x} = x - E[x], \qquad x = u + b, \qquad E[x] = \frac{1}{N}\sum_{i=1}^{N} x_i$$

If the gradient descent step ignores the dependence of $E[x]$ on $b$, it will still update

$$b \leftarrow b + \Delta b, \qquad \Delta b \propto -\frac{\partial \ell}{\partial \hat{x}}$$

but after normalization the output is unchanged:

$$u + (b + \Delta b) - E[u + (b + \Delta b)] = u + b - E[u + b]$$

So $b$ keeps being updated while the normalization cancels the update: the loss stays fixed and $b$ grows without bound. A toy numerical check of this argument is sketched below.
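The following toy check assumes a simple squared-error loss of my own choosing (nothing here is from the paper): the gradient step treats $E[x]$ as a constant with respect to $b$, so $b$ keeps drifting while the normalized output, and hence the loss, never changes.

```python
import numpy as np

# The layer computes x = u + b and then subtracts the batch mean. The update
# of b ignores the dependence of E[x] on b, so b drifts while x_hat is fixed.
rng = np.random.default_rng(0)
u = rng.normal(size=8)            # fixed inputs from the previous layer
t = rng.normal(size=8) + 1.0      # arbitrary targets with non-zero mean
b, lr = 0.0, 0.1

for step in range(3):
    x_hat = (u + b) - (u + b).mean()      # b cancels out here
    loss = np.sum((x_hat - t) ** 2)
    grad_b = np.sum(2 * (x_hat - t))      # treats E[x] as constant w.r.t. b
    b -= lr * grad_b
    print(f"step {step}: loss={loss:.4f}  b={b:.4f}")
# The loss is identical at every step while |b| keeps growing.
```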

2. To address this, consider a normalization that uses statistics of the whole training set:

$$\hat{x} = \text{Norm}(x, \mathcal{X})$$

For backpropagation we would then need both Jacobians:

$$\frac{\partial\,\text{Norm}(x,\mathcal{X})}{\partial x} \quad \text{and} \quad \frac{\partial\,\text{Norm}(x,\mathcal{X})}{\partial \mathcal{X}}$$

This looks like a solution, but full whitening requires computing the covariance matrix of the activations over the training set and its inverse square root (e.g. via an eigendecomposition/SVD), which is far too expensive to do inside gradient descent, and differentiating through that normalization is just as impractical.
3. The paper therefore makes two necessary simplifications:
a. Normalize each scalar feature independently, making it have zero mean and unit variance, instead of whitening the layer inputs jointly:

$$\hat{x}^{(k)} = \frac{x^{(k)} - E[x^{(k)}]}{\sqrt{\text{Var}[x^{(k)}]}}$$

b. This normalization works, but can the transformed data still represent the same information as before? As the sigmoid figure below shows, the normalization confines the inputs to the roughly linear region of the nonlinearity (the part marked in red), yet the saturating, nonlinear part is exactly what we rely on.

[Figure: the sigmoid function, with the normalized inputs confined to the near-linear region marked in red]

To make sure the transformation inserted in the network can still represent the identity (and thus not lose what the layer could originally express), a pair of parameters $\gamma^{(k)}, \beta^{(k)}$ is introduced for each activation $\hat{x}^{(k)}$:

$$y^{(k)} = \gamma^{(k)}\hat{x}^{(k)} + \beta^{(k)}$$

These parameters are learned jointly with the original model parameters. In particular, setting

$$\gamma^{(k)} = \sqrt{\text{Var}[x^{(k)}]}, \qquad \beta^{(k)} = E[x^{(k)}]$$

recovers the original activations.
The BN transform over a mini-batch $\mathcal{B} = \{x_{1\ldots m}\}$ (Algorithm 1 in the paper):
$$\mu_\mathcal{B} = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad
\sigma_\mathcal{B}^2 = \frac{1}{m}\sum_{i=1}^{m}(x_i - \mu_\mathcal{B})^2, \qquad
\hat{x}_i = \frac{x_i - \mu_\mathcal{B}}{\sqrt{\sigma_\mathcal{B}^2 + \epsilon}}, \qquad
y_i = \gamma\hat{x}_i + \beta \equiv \text{BN}_{\gamma,\beta}(x_i)$$
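Here is a minimal NumPy sketch of this forward pass for a fully-connected layer (my own code, not the paper's reference implementation; function and variable names are assumptions):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Algorithm 1: BN over a mini-batch x of shape (m, d).
    Returns y and a cache of intermediates for the backward pass."""
    mu = x.mean(axis=0)                     # per-dimension mini-batch mean
    var = x.var(axis=0)                     # per-dimension mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalize
    y = gamma * x_hat + beta                # scale and shift
    cache = (x, x_hat, mu, var, gamma, eps)
    return y, cache
```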
For backpropagation, the gradients flow through the BN transform via the chain rule:
$$\frac{\partial \ell}{\partial \hat{x}_i} = \frac{\partial \ell}{\partial y_i}\cdot\gamma$$
$$\frac{\partial \ell}{\partial \sigma_\mathcal{B}^2} = \sum_{i=1}^{m}\frac{\partial \ell}{\partial \hat{x}_i}\cdot(x_i-\mu_\mathcal{B})\cdot\left(-\frac{1}{2}\right)(\sigma_\mathcal{B}^2+\epsilon)^{-3/2}$$
$$\frac{\partial \ell}{\partial \mu_\mathcal{B}} = \sum_{i=1}^{m}\frac{\partial \ell}{\partial \hat{x}_i}\cdot\frac{-1}{\sqrt{\sigma_\mathcal{B}^2+\epsilon}} + \frac{\partial \ell}{\partial \sigma_\mathcal{B}^2}\cdot\frac{\sum_{i=1}^{m}-2(x_i-\mu_\mathcal{B})}{m}$$
$$\frac{\partial \ell}{\partial x_i} = \frac{\partial \ell}{\partial \hat{x}_i}\cdot\frac{1}{\sqrt{\sigma_\mathcal{B}^2+\epsilon}} + \frac{\partial \ell}{\partial \sigma_\mathcal{B}^2}\cdot\frac{2(x_i-\mu_\mathcal{B})}{m} + \frac{\partial \ell}{\partial \mu_\mathcal{B}}\cdot\frac{1}{m}$$
$$\frac{\partial \ell}{\partial \gamma} = \sum_{i=1}^{m}\frac{\partial \ell}{\partial y_i}\cdot\hat{x}_i, \qquad
\frac{\partial \ell}{\partial \beta} = \sum_{i=1}^{m}\frac{\partial \ell}{\partial y_i}$$
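And a matching backward pass that applies the chain-rule formulas above line by line (again my own sketch, continuing the `batchnorm_forward` example):

```python
import numpy as np

def batchnorm_backward(dy, cache):
    """Gradients of the loss w.r.t. x, gamma and beta, given dl/dy of shape (m, d)."""
    x, x_hat, mu, var, gamma, eps = cache
    m = x.shape[0]
    std_inv = 1.0 / np.sqrt(var + eps)
    dx_hat = dy * gamma                                            # dl/dx_hat
    dvar = np.sum(dx_hat * (x - mu), axis=0) * -0.5 * std_inv**3   # dl/dsigma^2
    dmu = np.sum(-dx_hat * std_inv, axis=0) \
          + dvar * np.mean(-2.0 * (x - mu), axis=0)                # dl/dmu
    dx = dx_hat * std_inv + dvar * 2.0 * (x - mu) / m + dmu / m    # dl/dx_i
    dgamma = np.sum(dy * x_hat, axis=0)                            # dl/dgamma
    dbeta = np.sum(dy, axis=0)                                     # dl/dbeta
    return dx, dgamma, dbeta
```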

Training and Inference with Batch-Normalized Networks

1. The only difference at this stage is that, at inference time, the mean and variance are those of the entire population of inputs (estimated from all the training data) rather than those of the current mini-batch. For fully-connected layers nothing else changes between training and inference, but for convolutional layers the statistics are computed differently (my own understanding, possibly not well put, but there is an illustration below). In practice, tracking these statistics with moving averages during training works well.
$$E[x] = E_\mathcal{B}[\mu_\mathcal{B}], \qquad \text{Var}[x] = \frac{m}{m-1}\,E_\mathcal{B}[\sigma_\mathcal{B}^2]$$

$$y = \frac{\gamma}{\sqrt{\text{Var}[x]+\epsilon}}\cdot x + \left(\beta - \frac{\gamma\,E[x]}{\sqrt{\text{Var}[x]+\epsilon}}\right)$$
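A sketch of the inference-time transform and the moving-average bookkeeping (my own NumPy code; the momentum value of 0.9 is an arbitrary assumption, not something the paper prescribes):

```python
import numpy as np

def batchnorm_inference(x, gamma, beta, running_mean, running_var, eps=1e-5):
    """Inference-time BN: a fixed linear transform built from population
    statistics (approximated here by running averages from training)."""
    scale = gamma / np.sqrt(running_var + eps)
    return scale * x + (beta - scale * running_mean)

def update_running_stats(running_mean, running_var, mu, var, momentum=0.9):
    """Moving-average update of the population estimates after each mini-batch."""
    running_mean = momentum * running_mean + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var
    return running_mean, running_var
```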
2. Batch-Normalized Convolutional Networks.
[Figure: how the per-feature-map mean and variance are computed over a mini-batch in a convolutional layer]
In a convolutional layer, the mean and variance are computed as shown above: each feature map gets its own mean and variance, and accordingly its own pair of parameters $\gamma^{(k)}, \beta^{(k)}$ (one "neuron" per feature map). The paper explains this as follows:
In Algorithm 1, $\mathcal{B}$ becomes the set of all values in a feature map across both the elements of the mini-batch and all spatial locations, so for a mini-batch of size $m$ and feature maps of size $p \times q$ the effective mini-batch size is $m' = |\mathcal{B}| = m\cdot pq$; a pair of parameters $\gamma^{(k)}, \beta^{(k)}$ is learned per feature map rather than per activation. (A sketch of this per-channel computation follows below.)
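A minimal NumPy sketch of this per-feature-map normalization (my own code; the NCHW layout and the names are assumptions):

```python
import numpy as np

def batchnorm_conv_forward(x, gamma, beta, eps=1e-5):
    """BN for a convolutional layer: x has shape (m, C, H, W) and the statistics
    are taken over the batch AND spatial axes, so each of the C feature maps
    gets one mean/var and one (gamma, beta) pair, i.e. an effective mini-batch
    of size m*H*W per channel. gamma and beta have shape (C,)."""
    mu = x.mean(axis=(0, 2, 3), keepdims=True)   # shape (1, C, 1, 1)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)
```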

Where to insert BN

$$z = g(Wu + b)$$
WWbb是要学习的参数,g()g()是类似于sigmoid的非线性函数。我们添加BN是在执行非线性之前,即通过归一化x=Wu+bx=Wu+b。我们也可以归一化u,但是因为u可能是另一个非线性的输出,它的分布在训练时可能会改变,而Wu+bWu+b更可能有系统的非稀疏的分布。

BN allows higher learning rates

Normally, a large learning rate can increase the scale of the layer parameters, and as the network gets deeper this effect compounds and amplifies the gradients during backpropagation, leading to explosion. With BN, however, backpropagation through a layer is unaffected by the scale of its parameters. The paper shows:
$$\text{BN}(Wu) = \text{BN}((aW)u)$$
$$\frac{\partial\,\text{BN}((aW)u)}{\partial u} = \frac{\partial\,\text{BN}(Wu)}{\partial u}, \qquad
\frac{\partial\,\text{BN}((aW)u)}{\partial (aW)} = \frac{1}{a}\cdot\frac{\partial\,\text{BN}(Wu)}{\partial W}$$
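A quick numerical check of the first identity (my own sketch; `bn` here is plain normalization with $\gamma=1,\beta=0$): scaling the weights by $a$ leaves the normalized output essentially unchanged, so it does not amplify what flows back through the layer either.

```python
import numpy as np

def bn(x, eps=1e-5):
    """Plain per-dimension normalization over the batch axis (gamma=1, beta=0)."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

rng = np.random.default_rng(0)
u = rng.normal(size=(128, 16))   # a mini-batch of inputs
W = rng.normal(size=(16, 8))     # layer weights
a = 10.0                         # scale factor on the weights

# BN(Wu) == BN((aW)u): the weight scale cancels out inside the normalization.
print(np.allclose(bn(u @ W), bn(u @ (a * W)), atol=1e-5))   # True
```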

Experiments

[Figures: experimental results from the paper]
The experiments section verifies, one by one, the advantages of BN listed at the beginning of this post, so I will not repeat them here.
Everything above is my own understanding; comments and discussion are welcome!