【ReID based video】2017 AMOC: Accumulative Motion Context (supervised)

AMOC: Hao Liu, Zequn Jie, Karlekar Jayashree, Meibin Qi, Jianguo Jiang, Shuicheng Yan, Jiashi Feng. Video based person re-identification with accumulative motion context[J]. arXiv preprint arXiv:1701.00193, 2017.

No official code release.

ReID based on accumulative motion context.

Related work

[10]

[10] uses a recurrent neural network to learn the interaction between multiple frames in a video and a Siamese network to learn the discriminative video-level features for person re-id.

[9] uses the Long-Short Term Memory (LSTM) network to aggregate frame-wise person features in a recurrent manner.
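This recurrent aggregation idea can be sketched with a single hand-rolled LSTM cell in NumPy. The feature and hidden dimensions below are arbitrary illustrative choices, not the ones used in [9], and the random projections stand in for real CNN frame features:

```python
import numpy as np

rng = np.random.default_rng(1)
T, D_in, D_h = 8, 128, 64      # frames, per-frame feature dim, LSTM hidden dim

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Gate weights for [input, forget, cell, output], stacked into one matrix.
W = rng.standard_normal((4 * D_h, D_in + D_h)) * 0.01
b = np.zeros(4 * D_h)

feats = rng.standard_normal((T, D_in))   # frame-wise person features (CNN stand-in)
h, c = np.zeros(D_h), np.zeros(D_h)
for x in feats:
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)          # the four LSTM gates
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)

sequence_feature = h                     # final hidden state summarizes the sequence
```

The final hidden state plays the role of the aggregated video-level person feature.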

This paper's method

Accumulative Motion Context (AMOC) networks introduce an end-to-end two-stream architecture that has specialized network streams for learning spatial appearance and temporal feature representations individually.

Spatial appearance information from the raw video frame input and temporal motion information from the optical flow predicted by the motion network are processed respectively and then fused at higher recurrent layers to form a discriminative video-level representation.


Network architecture

The core idea of AMOC is that, in addition to extracting appearance features from the image sequence, the network also extracts motion features from optical flow; the architecture is shown in the figure below. AMOC consists of two sub-networks: a spatial information network (Spatial network, Spat Nets) and a motion information network (Moti Nets).

Each frame of the image sequence is fed into Spat Nets to extract the frame's global appearance features.

Each pair of adjacent frames is fed into Moti Nets to extract optical-flow motion features.

The spatial features and optical-flow features are then fused and fed into an RNN to extract temporal features.

Through the AMOC network, each image sequence yields a single feature that fuses appearance information and motion information.
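The pipeline above can be sketched end to end in NumPy. This is not the paper's implementation (no code was released): all dimensions are illustrative, random projections stand in for the Spat Nets / Moti Nets convolutions, and a plain tanh recurrence stands in for the paper's RNN:

```python
import numpy as np

rng = np.random.default_rng(0)

T, H, W = 8, 64, 32                      # frames, frame height/width
D_spat, D_moti, D_rnn = 128, 128, 256    # illustrative feature dimensions

# Placeholder "networks": random projections standing in for the CNN streams.
W_spat = rng.standard_normal((H * W, D_spat)) * 0.01       # Spat Nets stand-in
W_moti = rng.standard_normal((2 * H * W, D_moti)) * 0.01   # Moti Nets stand-in (frame pair)
W_in = rng.standard_normal((D_spat + D_moti, D_rnn)) * 0.01
W_rec = rng.standard_normal((D_rnn, D_rnn)) * 0.01

frames = rng.standard_normal((T, H * W))                   # a flattened frame sequence

h = np.zeros(D_rnn)
states = []
for t in range(T - 1):                   # T-1 steps: each uses an adjacent frame pair
    spat = np.tanh(frames[t] @ W_spat)                     # appearance feature of frame t
    pair = np.concatenate([frames[t], frames[t + 1]])
    moti = np.tanh(pair @ W_moti)                          # motion feature from the pair
    fused = np.concatenate([spat, moti])                   # fuse the two streams
    h = np.tanh(fused @ W_in + h @ W_rec)                  # recurrent accumulation over time
    states.append(h)

video_feature = np.mean(states, axis=0)  # temporal pooling -> video-level feature
print(video_feature.shape)               # -> (256,)
```

The key structural point the sketch captures is that motion features come from adjacent frame pairs, so a T-frame sequence produces T-1 fused recurrent steps before temporal pooling.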

The network is trained with a classification loss and a contrastive loss.
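These two training objectives can be written out as a small NumPy sketch. The margin and the equal weighting of the two terms are illustrative assumptions, not values from the paper:

```python
import numpy as np

def classification_loss(logits, label):
    """Softmax cross-entropy over identity logits for one sequence."""
    z = logits - logits.max()                 # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def contrastive_loss(f1, f2, same, margin=2.0):
    """Siamese contrastive loss on two video-level features.
    The margin value is illustrative, not taken from the paper."""
    d = np.linalg.norm(f1 - f2)
    return 0.5 * d ** 2 if same else 0.5 * max(margin - d, 0.0) ** 2

def joint_loss(logits, label, f1, f2, same):
    # Equal weighting of the two terms is an assumption for illustration.
    return classification_loss(logits, label) + contrastive_loss(f1, f2, same)
```

The contrastive term pulls video-level features of the same person together and pushes different identities apart up to the margin, while the classification term supervises each sequence with its identity label.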

Sequence features that incorporate motion information improve person re-identification accuracy.

Spatial information network (Spatial network, Spat Nets)


Motion information network (Moti Nets)


Fusion


Results

What I Need

Mining ideas from the spatial information network.