Deep learning II - III Batch Normalization - Normalizing activations in a network



We saw earlier that normalizing the inputs speeds up learning. In a deep network, however, each layer's output becomes the next layer's input, so the same idea applies inside the network: normalizing each layer's z[l] (the common choice, and a sensible default) or a[l] likewise accelerates learning.

How to compute Batch Norm
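The computation follows the standard Batch Norm equations from the course: for each unit, compute the mean μ and variance σ² of z[l] over the mini-batch, normalize, then rescale with learnable parameters γ and β. A minimal numpy sketch (the function name `batch_norm` and the layout of Z as (units, examples) are my own conventions):

```python
import numpy as np

def batch_norm(Z, gamma, beta, eps=1e-8):
    """Batch-normalize pre-activations Z of shape (n_units, m).

    mu, var are computed per unit over the m examples in the batch;
    eps guards against division by zero when the variance is tiny.
    """
    mu = Z.mean(axis=1, keepdims=True)           # mu = (1/m) * sum(z)
    var = Z.var(axis=1, keepdims=True)           # sigma^2 = (1/m) * sum((z - mu)^2)
    Z_norm = (Z - mu) / np.sqrt(var + eps)       # z_norm: zero mean, unit variance
    return gamma * Z_norm + beta                 # z_tilde = gamma * z_norm + beta
```

With γ = √(σ²+ε) and β = μ the transform is the identity, so the network can recover the un-normalized activations if that is what learning prefers; in general γ and β let each layer choose its own mean and variance.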


How to apply Batch Norm with mini-batches
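During mini-batch gradient descent, the mean and variance are computed from the current mini-batch only, and the normalization sits between the linear step z[l] = W[l]a[l-1] and the activation. A hedged sketch of one forward pass (the sizes, the name `bn_forward`, and the single ReLU layer are illustrative assumptions, not the course's exact code):

```python
import numpy as np

def bn_forward(Z, gamma, beta, eps=1e-8):
    # Normalize per unit using this mini-batch's statistics only
    mu = Z.mean(axis=1, keepdims=True)
    var = Z.var(axis=1, keepdims=True)
    return gamma * (Z - mu) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 256))       # 3 input features, 256 examples
W = rng.standard_normal((5, 3))         # one hidden layer with 5 units
gamma = np.ones((5, 1))                 # learnable scale, trained with W
beta = np.zeros((5, 1))                 # learnable shift, trained with W

batch_size = 64
for t in range(0, X.shape[1], batch_size):
    Xb = X[:, t:t + batch_size]         # the t-th mini-batch X^{t}
    Zb = W @ Xb                         # linear step z[l]
    Zb_tilde = bn_forward(Zb, gamma, beta)  # BN before the activation
    Ab = np.maximum(0.0, Zb_tilde)      # a[l] = ReLU(z_tilde)
```

Note that because each mini-batch is normalized with its own μ and σ², the statistics vary slightly from batch to batch; at test time the course replaces them with running averages collected during training.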
