Paper: "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks"
Contribution:
- We propose and evaluate a set of constraints on the architectural topology of Convolutional GANs that make them stable to train in most settings. We name this class of architectures Deep Convolutional GANs (DCGAN).
- We use trained discriminators for image classification tasks, showing competitive performance with other unsupervised algorithms.
- We visualize the filters learnt by GANs and empirically show that specific filters have learned to draw specific objects.
- We show that the generators have interesting vector arithmetic properties allowing for easy manipulation of many semantic qualities of generated samples.
Guidelines:
fractional-strided convolution: also called transposed convolution or (loosely) deconvolution; the generator uses it to upsample feature maps.
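The idea behind a fractional-strided convolution can be sketched in 1-D: insert (stride − 1) zeros between input elements, then run an ordinary convolution. This is a minimal illustrative sketch, not the paper's implementation; the function name and padding scheme are my own.

```python
def transposed_conv1d(x, kernel, stride=2):
    """Upsample x by zero-insertion, then convolve with kernel.

    Output length is (len(x) - 1) * stride + len(kernel), matching the
    transposed-convolution size formula with no output padding.
    """
    # Zero-insertion: [a, b] with stride 2 becomes [a, 0, b]
    up = []
    for i, v in enumerate(x):
        up.append(v)
        if i < len(x) - 1:
            up.extend([0.0] * (stride - 1))
    # "Full" correlation: pad with (k - 1) zeros on each side
    k = len(kernel)
    padded = [0.0] * (k - 1) + up + [0.0] * (k - 1)
    return [sum(padded[i + j] * kernel[j] for j in range(k))
            for i in range(len(padded) - k + 1)]
```

For example, `transposed_conv1d([1.0, 2.0], [1.0, 1.0], stride=2)` returns `[1.0, 1.0, 2.0, 2.0]`: the input is doubled in length, which is why DCGAN's generator stacks these layers to grow a small latent projection into a full-size image.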
Details:
- inputs are scaled to the range of the tanh activation function, [-1, 1]
- trained with mini-batch SGD, batch_size = 128
- weights are initialized from N(0, 0.02^2)
- In the LeakyReLU, the slope is 0.2.
- optimizer: Adam, lr = 0.0002, momentum term β1 = 0.5
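The preprocessing and initialization details above can be sketched as small helpers. This is an illustrative pure-Python sketch; the helper names are mine, not from the paper.

```python
import random

def scale_to_tanh_range(pixel):
    """Map a uint8 pixel value in [0, 255] to the tanh range [-1, 1]."""
    return pixel / 127.5 - 1.0

def init_weight():
    """Sample one weight from N(0, 0.02^2), as in the paper."""
    return random.gauss(0.0, 0.02)

def leaky_relu(x, slope=0.2):
    """LeakyReLU with the paper's slope of 0.2 on the negative side."""
    return x if x > 0 else slope * x

# Adam settings reported in the paper: lr = 0.0002, beta1 (momentum) = 0.5.
# The second beta is Adam's usual default; the paper does not change it.
ADAM_CONFIG = {"lr": 2e-4, "betas": (0.5, 0.999)}
```

In a framework such as PyTorch these would correspond to normalizing the dataset, initializing conv weights from N(0, 0.02²), and passing `betas=(0.5, 0.999)` to the optimizer.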
Evaluation:
The best way to validate unsupervised representation learning is to take the features produced by the model, feed them as input to a classifier, and measure classification performance on a labeled (supervised) dataset.
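The evaluation protocol above amounts to a "linear probe": freeze the feature extractor and train only a linear classifier on the labeled data. Below is a hedged sketch using a simple perceptron update on toy binary data; the actual paper trains an L2-SVM on discriminator features, so this only illustrates the protocol, not the exact method.

```python
def linear_probe(features, labels, lr=0.1, epochs=100):
    """Train a linear classifier on fixed features; labels are in {-1, +1}.

    The features stand in for frozen discriminator activations; only the
    linear weights w and bias b are learned.
    """
    dim = len(features[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # misclassified: perceptron update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    """Classify x with the trained linear probe."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

The probe's accuracy on held-out labels then serves as a proxy for how linearly separable, and hence how useful, the unsupervised features are.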
Conclusion and Future Work:
There are still some forms of model instability remaining - we noticed as models are trained longer they sometimes collapse a subset of filters to a single oscillating mode.