1705. Person Re-Identification by Deep Joint Learning of Multi-Loss Classification (Paper Reading Notes)

Person Re-Identification by Deep Joint Learning of Multi-Loss Classification
This paper jointly trains with multiple classification losses to learn local (horizontal-stripe) pedestrian features and global features at the same time. Because the locally and globally learned features are complementary, the resulting representation is more discriminative. The overall framework is shown below:
[Figure: overview of the JLML model framework]
The proposed JLML (Joint Learning Multi-Loss) model works as follows:
1. Two branches: a local CNN sub-network with m sub-streams (one per horizontal body stripe) and a global CNN sub-network. The two branches also share a base network, since the low-level features they extract have much in common; sharing them is both reasonable and greatly reduces the number of parameters. The network is a trimmed and modified ResNet-50 (giving JLML-ResNet39). A toy sketch of the whole design (branches, losses and regularisers) appears after this list.

2. To optimise the feature representations of each branch's sub-streams simultaneously, and to make the global and local representations complementary (effectively a feature-selection process), every branch is trained under the same ID supervision but with its own loss function. Each branch is trained independently, yet the information it learns is complementary and discriminative; the authors call this the multi-loss design.

3. (Noise and data covariance) To further reduce fitting to noise and to learn more robustly from diverse data sources, the authors adopt regularisation methods from the papers below for additional feature de-redundancy.
They sparsify the global feature representation with a group LASSO [Wang et al., 2013] and enforce a local feature sparsity constraint with an exclusive group LASSO [Kong et al., 2014]. The two de-redundancy regularisation terms are shown below (standard forms of both are also sketched after this list):
[Equations: group LASSO (l2,1) regularisation for the global branch and exclusive group LASSO (l1,2) regularisation for the local branch]
References:
[Wang et al., 2013] Hua Wang, Feiping Nie, and Heng Huang. Multi-view clustering and feature learning via structured sparsity. In ICML, 2013.
[Kong et al., 2014] Deguang Kong, Ryohei Fujimaki, Ji Liu, Feiping Nie, and Chris Ding. Exclusive feature learning on arbitrary structures via l1,2-norm. In NIPS, 2014.
4. The classification loss is the cross-entropy (softmax) loss. The final global-branch and local-branch losses, including the regularisation terms, take the following form:
[Equation: per-branch cross-entropy classification loss plus the corresponding structured sparsity regularisation term]
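The exact regularisation formulas were in the original figures and are not reproduced here. For reference, the standard forms of a group LASSO and an exclusive group LASSO, written for a generic embedding-layer weight matrix W with my own notation and grouping (not necessarily the paper's), are:

```latex
% Group LASSO (l2,1 norm), applied to the global branch: the l2 norm of each
% row of W is summed, so whole rows (whole feature dimensions) are encouraged
% to become zero jointly, giving a global feature selection effect.
\[
  \mathcal{R}_{\mathrm{GS}}(W) \;=\; \lVert W \rVert_{2,1}
  \;=\; \sum_{i} \lVert W_{i,:} \rVert_{2}
\]

% Exclusive group LASSO (squared l1 norm per group, summed over groups),
% applied to the local branch: W_(g) denotes the block of weights assigned to
% group g (e.g. one body stripe), so sparsity competes within a stripe rather
% than across stripes.
\[
  \mathcal{R}_{\mathrm{ES}}(W) \;=\; \sum_{g} \lVert W_{(g)} \rVert_{1}^{2}
\]

% Each branch then minimises its own softmax cross-entropy ID loss plus its
% regulariser, weighted by a coefficient lambda:
\[
  \mathcal{L}_{\mathrm{glb}} = \mathcal{L}_{\mathrm{CE}}^{\mathrm{glb}}
    + \lambda_{\mathrm{glb}}\,\mathcal{R}_{\mathrm{GS}}(W_{\mathrm{glb}}),
  \qquad
  \mathcal{L}_{\mathrm{loc}} = \mathcal{L}_{\mathrm{CE}}^{\mathrm{loc}}
    + \lambda_{\mathrm{loc}}\,\mathcal{R}_{\mathrm{ES}}(W_{\mathrm{loc}})
\]
```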
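To make the whole design concrete, below is a minimal PyTorch-style sketch of the two-branch, multi-loss idea: a shared base, one global branch, m local stripe streams, one cross-entropy ID loss per branch, and one sparsity regulariser per branch. The layer sizes, the names (JLMLSketch, group_lasso, jlml_loss, ...), and the small conv stack standing in for the trimmed ResNet-39 are my own simplifications for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class JLMLSketch(nn.Module):
    """Toy two-branch model: a shared low-level conv base, a global branch and
    m local (horizontal-stripe) streams, each branch with its own ID classifier."""

    def __init__(self, num_ids, m_stripes=4, feat_dim=128):
        super().__init__()
        self.m = m_stripes
        # Shared base: stands in for the shared low layers of JLML-ResNet39.
        self.base = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
        )
        # Global branch: its own conv stack, embedding layer and classifier.
        self.global_conv = nn.Sequential(
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.global_fc = nn.Linear(128, feat_dim)        # target of the l2,1 (group LASSO) penalty
        self.global_cls = nn.Linear(feat_dim, num_ids)
        # Local branch: one sub-stream per horizontal stripe (no weight sharing here).
        self.local_convs = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            for _ in range(m_stripes)
        ])
        self.local_fc = nn.Linear(128 * m_stripes, feat_dim)  # target of the l1,2 (exclusive group LASSO) penalty
        self.local_cls = nn.Linear(feat_dim, num_ids)

    def forward(self, x):
        shared = self.base(x)                            # (B, 64, H/4, W/4)
        # Global stream on the whole shared feature map.
        g_feat = self.global_fc(self.global_conv(shared).flatten(1))
        # Local streams: split the shared map into m horizontal stripes.
        stripes = torch.chunk(shared, self.m, dim=2)     # assumes the map height is at least m
        l = torch.cat([conv(s).flatten(1) for conv, s in zip(self.local_convs, stripes)], dim=1)
        l_feat = self.local_fc(l)
        return self.global_cls(g_feat), self.local_cls(l_feat), g_feat, l_feat


def group_lasso(weight):
    """l2,1 norm: sum of the l2 norms of the rows (zeroes out whole feature dimensions)."""
    return weight.norm(p=2, dim=1).sum()


def exclusive_group_lasso(weight, m_stripes):
    """Squared l1 norm of each stripe's block of input weights, summed over stripes
    (a coarse exclusive-group-LASSO-style penalty: sparsity competes within a stripe)."""
    groups = torch.chunk(weight, m_stripes, dim=1)       # split the input dimensions by stripe
    return sum(g.abs().sum() ** 2 for g in groups)


def jlml_loss(model, images, labels, lam_glb=1e-4, lam_loc=1e-4):
    """Joint multi-loss objective: two independent ID classification losses on the
    same person labels, plus one structured sparsity regulariser per branch."""
    g_logits, l_logits, _, _ = model(images)
    loss = F.cross_entropy(g_logits, labels) + F.cross_entropy(l_logits, labels)
    loss = loss + lam_glb * group_lasso(model.global_fc.weight)
    loss = loss + lam_loc * exclusive_group_lasso(model.local_fc.weight, model.m)
    return loss
```

At deployment the learned global and local features are matched with a generic, non-learned L2 distance rather than a learned metric, which corresponds to the group (C) setting in the experiment tables below.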

Experiments
The authors compare and test JLML against a number of existing methods on the VIPeR [Gray and Tao, 2008], GRID [Loy et al., 2009], CUHK03 [Li et al., 2014], and Market-1501 [Zheng et al., 2015] datasets.

Experimental results:
The compared methods in the tables fall into three groups:
(A) Hand-crafted features with domain-specific distance (metric) learning; (B) deep-learned features with domain-specific deep verification (metric) learning; (C) deep-learned features with a generic, non-learned L2 distance (metric).
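Group (C), the setting under which the paper's own model is evaluated, ranks gallery images purely by a generic, non-learned L2 distance on the extracted feature vectors. Below is a minimal sketch of that ranking step and of a rank-k CMC score, ignoring per-camera filtering details; the array names (query_feats, gallery_ids, ...) are hypothetical.

```python
import numpy as np


def l2_rank(query_feats, gallery_feats):
    """Rank gallery entries for each query by a plain (non-learned) L2 distance.

    query_feats:   (num_query, d) array of re-id feature vectors
    gallery_feats: (num_gallery, d) array of re-id feature vectors
    Returns an index array of shape (num_query, num_gallery), closest match first.
    """
    # Pairwise squared distances via ||q - g||^2 = ||q||^2 - 2 q.g + ||g||^2
    q_sq = (query_feats ** 2).sum(axis=1, keepdims=True)        # (num_query, 1)
    g_sq = (gallery_feats ** 2).sum(axis=1)[None, :]            # (1, num_gallery)
    dists = q_sq - 2.0 * query_feats @ gallery_feats.T + g_sq   # (num_query, num_gallery)
    return np.argsort(dists, axis=1)                            # ascending distance


def cmc_rank_k(ranking, query_ids, gallery_ids, k=1):
    """Fraction of queries whose true identity appears among the top-k gallery matches."""
    hits = 0
    for i, order in enumerate(ranking):
        hits += int(query_ids[i] in gallery_ids[order[:k]])
    return hits / len(query_ids)


# Example usage (features and IDs as NumPy arrays):
#   rank1 = cmc_rank_k(l2_rank(query_feats, gallery_feats), query_ids, gallery_ids, k=1)
```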

[Tables: comparative results on VIPeR, GRID, CUHK03, and Market-1501]

In addition, the authors give further experimental analysis and discussion of several details:
1. Complementarity of Global and Local Features
2. Importance of Branch Independence
3. Benefits from Shared Low-Level Features
4. Effects of Selective Feature Learning
5. Comparisons of Model Size and Complexity

The authors' stated contributions:
1. Propose the idea of concurrently learning both local and global feature selections to maximise their correlated, complementary effects, by learning discriminative feature representations in different contexts subject to multi-loss classification objective functions in a unified framework; this is formulated as a novel Joint Learning Multi-Loss (JLML) CNN model.
2. The JLML model not only learns discriminative identity feature representations by optimising multiple classification losses on the same person label information concurrently, but also exploits their complementary advantages jointly in coping with local misalignment and optimising holistic matching criteria for person re-id.
3. Introduce a structured-sparsity-based selective feature learning mechanism (structured feature sparsity regularisation) to further improve joint feature learning.