Image-to-Image Translation with Conditional Adversarial Networks_201611

P Isola, JY Zhu, T Zhou, AA Efros - arXiv preprint arXiv:1611.07004, 2016 - arxiv.org

Cited by: 349  https://arxiv.org/pdf/1611.07004

PyTorch implementation: https://github.com/sunshineatnoon/Paper-Implementations

Figure 1: Many problems in image processing, graphics, and vision amount to translating an input image into an output image. In every case the same architecture and objective are used, simply trained on different data.

Figure 4: Different loss functions yield different outputs. With the L1 term one can see that a good deal of low-frequency information (overall layout and color) is preserved.
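The trade-off in Figure 4 comes from the paper's combined objective, G* = arg min_G max_D L_cGAN(G, D) + λ·L_L1(G) with λ = 100: the L1 term pins down the low frequencies, while the adversarial term supplies high-frequency detail. A minimal NumPy sketch of the generator-side loss (an illustration only, not the authors' code; `d_out_on_fake` stands in for the discriminator's score on the generated image):

```python
import numpy as np

def l1_loss(fake, target):
    # L1 term: mean absolute error, responsible for low-frequency correctness
    return np.mean(np.abs(fake - target))

def gan_loss(d_out_on_fake):
    # Non-saturating generator term: -log D(x, G(x)); eps avoids log(0)
    eps = 1e-8
    return -np.mean(np.log(d_out_on_fake + eps))

def generator_loss(d_out_on_fake, fake, target, lam=100.0):
    # Combined objective: L_cGAN + lambda * L_L1 (lambda = 100 in pix2pix)
    return gan_loss(d_out_on_fake) + lam * l1_loss(fake, target)
```

Dropping the GAN term (λ → ∞, effectively) gives the blurry L1-only outputs shown in the figure; dropping L1 leaves outputs sharp but less faithful to the input.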

Figure 5: When the generator has U-Net-style skip connections, the outputs are of noticeably higher quality.
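The skip connections in Figure 5 concatenate each encoder layer i onto decoder layer n − i, letting low-level structure (edges, layout) bypass the bottleneck. A toy NumPy sketch of that data path (the real generator uses learned strided convolutions; `downsample`/`upsample` here are illustrative stand-ins):

```python
import numpy as np

def downsample(x):
    # 2x2 average pooling over an (H, W, C) feature map
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample(x):
    # Nearest-neighbour 2x upsampling
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_skip(x):
    # Toy U-Net data path: the encoder feature map is concatenated onto the
    # upsampled decoder feature map, carrying low-level detail straight across.
    enc = downsample(x)           # encoder level i
    bottleneck = downsample(enc)  # deeper level
    dec = upsample(bottleneck)    # decoder level n - i
    return np.concatenate([dec, enc], axis=-1)  # the skip connection
```

Without the final concatenation the decoder sees only the bottleneck, which is why the plain encoder–decoder in Figure 5 loses fine detail.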

Figure 6: The effect of varying the discriminator's patch size.
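The patch size in Figure 6 is the receptive field of a single PatchGAN output unit: the discriminator classifies each N×N patch as real or fake and averages the responses. The commonly cited 70×70 variant stacks three stride-2 and two stride-1 4×4 convolutions; a small sketch of the receptive-field arithmetic (the layer list below is that standard configuration, written from memory rather than copied from the paper's code):

```python
def receptive_field(layers):
    # layers: list of (kernel, stride) conv layers, first layer first.
    # Walking forward, each layer widens the input patch one output unit sees.
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump  # kernel extent at the current input spacing
        jump *= s             # spacing between adjacent units grows by stride
    return rf

# 70x70 PatchGAN: three stride-2 4x4 convs, then two stride-1 4x4 convs
patchgan_70 = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
```

Shrinking the stack toward a single conv gives the per-pixel 1×1 discriminator at one extreme; growing it until the patch covers the image gives a full-image GAN at the other, which is the axis Figure 6 sweeps.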

Figure 10: Applying the cGAN to semantic segmentation.
