Paper Reading Notes (40): Learning Spatiotemporal Features with 3D Convolutional Networks (C3D)

We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large-scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning than 2D ConvNets; 2) a homogeneous architecture with small 3 × 3 × 3 convolution kernels in all layers is among the best-performing architectures for 3D ConvNets; and 3) our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with the current best methods on the other 2 benchmarks. In addition, the features are compact: they achieve 52.8% accuracy on the UCF101 dataset with only 10 dimensions, and they are also very efficient to compute thanks to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.

Inspired by the deep learning breakthroughs in the image domain [24], where rapid progress has been made in feature learning in the past few years, various pre-trained convolutional network (ConvNet) models [16] have been made available for extracting image features. These features are the activations of the network's last few fully-connected layers, which perform well on transfer learning tasks [47, 48]. However, such image-based deep features are not directly suitable for videos due to the lack of motion modeling (as shown in our experiments in Sections 4, 5, and 6). In this paper we propose to learn spatiotemporal features using deep 3D ConvNets. We empirically show that these learned features with a simple linear classifier can yield good performance on various video analysis tasks. Although 3D ConvNets were proposed before [15, 18], to our knowledge this work exploits 3D ConvNets in the context of large-scale supervised training datasets and modern deep architectures to achieve the best performance on different types of video analysis tasks. The features from these 3D ConvNets encapsulate information related to objects, scenes, and actions in a video, making them useful for various tasks without requiring fine-tuning of the model for each task. C3D has the properties that a good descriptor should have: it is generic, compact, simple, and efficient. To summarize, our contributions in this paper are:

• We experimentally show 3D convolutional deep networks are good feature learning machines that model appearance and motion simultaneously.
• We empirically find that a 3 × 3 × 3 convolution kernel for all layers works best among the limited set of explored architectures.
• The proposed features with a simple linear model outperform or approach the current best methods on 4 different tasks and 6 different benchmarks (see Table 1). They are also compact and efficient to compute.


Table 1. C3D compared to best published results. C3D outperforms all previous best reported methods on a range of benchmarks except for Sports-1M and UCF101. On UCF101, we report accuracy for two groups of methods: the first group uses only RGB frame inputs, while the second group (in parentheses) uses all possible features (e.g., optical flow, improved dense trajectories).

Despite its good performance, this method is computationally intensive and becomes intractable on large-scale datasets.

Although this method showed good results on action recognition, it is still computationally intensive at training and hard to scale up for testing on large datasets.

Among these approaches, the 3D ConvNets approach in [15] is most closely related to ours. That method used a human detector and head tracking to segment human subjects in videos, and the segmented video volumes were used as inputs to a 3-convolution-layer 3D ConvNet to classify actions. In contrast, our method takes full video frames as inputs and does not rely on any preprocessing, so it easily scales to large datasets. We also share some similarities with Karpathy et al. [18] and Simonyan and Zisserman [36] in using full frames for training the ConvNet. However, these methods are built only on 2D convolution and 2D pooling operations (except for the Slow Fusion model in [18]), whereas our model performs 3D convolutions and 3D pooling, propagating temporal information across all the layers in the network (further detailed in section 3). We also show that gradually pooling space and time information and building deeper networks achieves the best results, and we discuss the architecture search in more detail in section 3.2.

We believe that 3D ConvNets are well-suited for spatiotemporal feature learning. Compared to 2D ConvNets, 3D ConvNets can model temporal information better owing to 3D convolution and 3D pooling operations. In 3D ConvNets, convolution and pooling operations are performed spatio-temporally, while in 2D ConvNets they are done only spatially. Figure 1 illustrates the difference: 2D convolution applied to an image outputs an image, and 2D convolution applied to multiple images (treating them as different channels [36]) also results in an image. Hence, 2D ConvNets lose the temporal information of the input signal right after every convolution operation. Only 3D convolution preserves the temporal information of the input signal, resulting in an output volume. The same phenomenon applies to 2D and 3D pooling. In [36], although the temporal stream network takes multiple frames as input, because of the 2D convolutions the temporal information is collapsed completely after the first convolution layer. Similarly, the fusion models in [18] use 2D convolutions, so most of those networks lose their input's temporal signal after the first convolution layer. Only the Slow Fusion model in [18] uses 3D convolutions and average pooling in its first 3 convolution layers. We believe this is the key reason why it performs best among all networks studied in [18]. However, it still loses all temporal information after the third convolution layer.
Figure 1. 2D and 3D convolution operations. a) Applying 2D convolution on an image results in an image. b) Applying 2D convolution on a video volume (multiple frames as multiple channels) also results in an image. c) Applying 3D convolution on a video volume results in another volume, preserving temporal information of the input signal.
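To make the distinction in Figure 1 concrete, here is a minimal sketch in PyTorch (my choice of framework for illustration, not something used in the paper) showing how 2D convolution over a stack of frames collapses the temporal axis in one step, while 3D convolution keeps it as a dimension of the output volume:

```python
import torch
import torch.nn as nn

clip = torch.randn(1, 3, 16, 112, 112)  # batch, channels, frames, height, width

# (b) 2D convolution treats the 16 RGB frames as 16*3 = 48 input channels,
# so a single image comes out and the temporal axis is gone after one layer.
conv2d = nn.Conv2d(in_channels=48, out_channels=64, kernel_size=3, padding=1)
print(conv2d(clip.reshape(1, 48, 112, 112)).shape)  # torch.Size([1, 64, 112, 112])

# (c) 3D convolution slides the kernel over time as well, so the output is
# another volume and temporal information is preserved.
conv3d = nn.Conv3d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
print(conv3d(clip).shape)                           # torch.Size([1, 64, 16, 112, 112])
```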

We verify the findings on a large-scale dataset with a smaller number of network experiments. According to the findings in 2D ConvNets [37], small receptive fields of 3 × 3 convolution kernels with deeper architectures yield the best results. Hence, for our architecture search study we fix the spatial receptive field to 3 × 3 and vary only the temporal depth of the 3D convolution kernels.

Notations: For simplicity, from now on we refer to video clips by a size of c × l × h × w, where c is the number of channels, l is the length in number of frames, and h and w are the height and width of the frame, respectively. We also refer to 3D convolution and pooling kernel sizes as d × k × k, where d is the kernel temporal depth and k is the kernel spatial size.

Common network settings: In this section we describe the network settings that are common to all the networks we trained. The networks are set up to take video clips as inputs and predict class labels which belong to 101 different actions. All video frames are resized to 128 × 171, roughly half the resolution of the UCF101 frames. Videos are split into non-overlapping 16-frame clips which are then used as input to the networks. The input dimensions are 3 × 16 × 128 × 171. We also use jittering by taking random crops of size 3 × 16 × 112 × 112 from the input clips during training. The networks have 5 convolution layers and 5 pooling layers (each convolution layer is immediately followed by a pooling layer), 2 fully-connected layers and a softmax loss layer to predict action labels. The numbers of filters for the 5 convolution layers, from 1 to 5, are 64, 128, 256, 256, and 256, respectively. All convolution kernels have a spatial size of 3 × 3 and a temporal depth of d (we will later vary the value of d for these layers to search for a good 3D architecture). All of these convolution layers are applied with appropriate padding (both spatial and temporal) and stride 1, so there is no change in size from the input to the output of these convolution layers. All pooling layers are max pooling with kernel size 2 × 2 × 2 (except for the first layer) and stride 2, which means the size of the output signal is reduced by a factor of 8 compared with the input signal.

The first pooling layer has kernel size 1 × 2 × 2 with the intention of not merging the temporal signal too early and also to satisfy the clip length of 16 frames (e.g., we can temporally pool with a factor of 2 at most 4 times before completely collapsing the temporal signal). The two fully connected layers have 2048 outputs. We train the networks from scratch using mini-batches of 30 clips, with an initial learning rate of 0.003. The learning rate is divided by 10 after every 4 epochs. Training is stopped after 16 epochs.
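A minimal sketch of these common settings, written in PyTorch for illustration (the authors' implementation is not PyTorch); the `temporal_depths` argument is the knob varied in the architecture search described next, and the flattened size 256 × 1 × 3 × 3 is what these settings give for 112 × 112 crops:

```python
import torch.nn as nn

def search_net(temporal_depths=(3, 3, 3, 3, 3), num_classes=101):
    """Common 5-conv-layer setting: filters 64-128-256-256-256, each conv
    followed by max pooling, two 2048-d fully connected layers, and a
    classifier over 101 actions. temporal_depths[i] is the kernel temporal
    depth d of conv layer i (all spatial kernels are 3 x 3, stride 1)."""
    channels = (3, 64, 128, 256, 256, 256)
    layers = []
    for i, d in enumerate(temporal_depths):
        layers += [nn.Conv3d(channels[i], channels[i + 1], kernel_size=(d, 3, 3),
                             padding=(d // 2, 1, 1)),       # "same" padding, stride 1
                   nn.ReLU(inplace=True)]
        if i == 0:
            # pool1 does not pool over time, so the 16-frame clip survives
            # the remaining four temporal poolings (16 -> 8 -> 4 -> 2 -> 1).
            layers.append(nn.MaxPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2)))
        else:
            layers.append(nn.MaxPool3d(kernel_size=2, stride=2))
    return nn.Sequential(
        *layers,
        nn.Flatten(),
        nn.Linear(256 * 1 * 3 * 3, 2048), nn.ReLU(inplace=True),  # 256x1x3x3 after pool5
        nn.Linear(2048, 2048), nn.ReLU(inplace=True),
        nn.Linear(2048, num_classes),   # softmax is applied inside the loss
    )

# Example: the depth-3 net and the increasing-temporal-depth net.
depth3 = search_net((3, 3, 3, 3, 3))
increasing = search_net((3, 3, 5, 5, 7))
```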

Varying network architectures: For the purposes of this study we are mainly interested in how to aggregate temporal information through deep networks. To search for a good 3D ConvNet architecture, we vary only the kernel temporal depth di of the convolution layers while keeping all other common settings fixed as stated above. We experiment with two types of architectures: 1) homogeneous temporal depth: all convolution layers have the same kernel temporal depth; and 2) varying temporal depth: the kernel temporal depth changes across the layers. For the homogeneous setting, we experiment with 4 networks having kernel temporal depth d equal to 1, 3, 5, and 7. We name these networks depth-d, where d is their homogeneous temporal depth. Note that the depth-1 net is equivalent to applying 2D convolutions on separate frames. For the varying temporal depth setting, we experiment with two networks whose temporal depth increases (3-3-5-5-7) or decreases (7-5-5-3-3) from the first to the fifth convolution layer. We note that all of these networks have the same output size at the last pooling layer, so they have the same number of parameters in the fully connected layers. Their numbers of parameters differ only at the convolution layers due to the different kernel temporal depths. These differences are quite minute compared to the millions of parameters in the fully connected layers. For example, any two of the above nets with a temporal depth difference of 2 differ from each other by only 17K parameters. The biggest difference is between the depth-1 net and the depth-7 net, where the depth-7 net has 51K more parameters, which is less than 0.3% of the roughly 17.5 million total parameters of each network. This indicates that the learning capacities of the networks are comparable and the differences in number of parameters should not affect the results of our architecture search.


Figure 2. 3D convolution kernel temporal depth search. Action recognition clip accuracy on UCF101 test split-1 of different kernel temporal depth settings. 2D ConvNet performs worst and 3D ConvNet with 3 × 3 × 3 kernels performs best among the experimented nets.

We train these networks on train split 1 of UCF101. Figure 2 presents the clip accuracy of the different architectures on UCF101 test split 1. The left plot shows the results of the nets with homogeneous temporal depth and the right plot presents the results of the nets with changing kernel temporal depth. Depth-3 performs best among the homogeneous nets. Note that depth-1 is significantly worse than the other nets, which we believe is due to its lack of motion modeling. Compared to the varying temporal depth nets, depth-3 is still the best performer, but the gap is smaller. We also experiment with a bigger spatial receptive field (e.g., 5 × 5) and/or full input resolution (240 × 320 frame inputs) and still observe similar behavior. This suggests that 3 × 3 × 3 is the best kernel choice for 3D ConvNets (according to our subset of experiments) and that 3D ConvNets are consistently better than 2D ConvNets for video classification. We also verify that 3D ConvNets consistently perform better than 2D ConvNets on a large-scale internal dataset, namely I380K.

Spatiotemporal feature learning
Network architecture: Our findings in the previous section indicate that a homogeneous setting with 3 × 3 × 3 convolution kernels is the best option for 3D ConvNets. This finding is also consistent with a similar finding in 2D ConvNets [37]. With a large-scale dataset, one can train a 3D ConvNet with 3 × 3 × 3 kernels as deep as possible subject to the machine memory limit and computation affordability. With current GPU memory, we design our 3D ConvNet to have 8 convolution layers and 5 pooling layers, followed by two fully connected layers and a softmax output layer. The network architecture is presented in figure 3. For simplicity, we call this net C3D from now on. All 3D convolution filters are 3 × 3 × 3 with stride 1 × 1 × 1. All 3D pooling layers are 2 × 2 × 2 with stride 2 × 2 × 2, except for pool1, which has kernel size 1 × 2 × 2 and stride 1 × 2 × 2 with the intention of preserving the temporal information in the early phase. Each fully connected layer has 4096 output units.


Figure 3. C3D architecture. The C3D net has 8 convolution, 5 max-pooling, and 2 fully connected layers, followed by a softmax output layer. All 3D convolution kernels are 3 × 3 × 3 with stride 1 in both the spatial and temporal dimensions. The number of filters is denoted in each box. The 3D pooling layers are denoted pool1 to pool5. All pooling kernels are 2 × 2 × 2, except for pool1, which is 1 × 2 × 2. Each fully connected layer has 4096 output units.
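Below is a PyTorch sketch of this architecture (the released model is in Caffe, so this is my rendering, not the authors' code). The per-layer filter counts (64, 128, 256, 256, 512, 512, 512, 512), the spatial padding on pool5, and the dropout after the fully connected layers follow common C3D re-implementations and should be treated as assumptions rather than a verbatim reproduction:

```python
import torch.nn as nn

class C3D(nn.Module):
    """C3D sketch: 8 conv layers (all 3x3x3, stride 1), 5 max-pool layers,
    two 4096-d fully connected layers, and a softmax output over the 487
    Sports-1M classes. Input clips are (N, 3, 16, 112, 112)."""
    def __init__(self, num_classes=487):
        super().__init__()
        def conv(cin, cout):
            return nn.Sequential(nn.Conv3d(cin, cout, kernel_size=3, stride=1, padding=1),
                                 nn.ReLU(inplace=True))
        self.features = nn.Sequential(
            conv(3, 64),
            nn.MaxPool3d((1, 2, 2), (1, 2, 2)),       # pool1: spatial only
            conv(64, 128),
            nn.MaxPool3d(2, 2),                       # pool2
            conv(128, 256), conv(256, 256),
            nn.MaxPool3d(2, 2),                       # pool3
            conv(256, 512), conv(512, 512),
            nn.MaxPool3d(2, 2),                       # pool4
            conv(512, 512), conv(512, 512),
            nn.MaxPool3d(2, 2, padding=(0, 1, 1)),    # pool5; spatial pad so 7 -> 4 (assumed)
        )
        self.classifier = nn.Sequential(
            nn.Linear(512 * 1 * 4 * 4, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),  # fc6
            nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),              # fc7
            nn.Linear(4096, num_classes),             # softmax is applied inside the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```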

Dataset. To learn spatiotemporal features, we train our C3D on the Sports-1M dataset [18], which is currently the largest video classification benchmark. The dataset consists of 1.1 million sports videos. Each video belongs to one of 487 sports categories. Compared with UCF101, Sports-1M has 5 times the number of categories and 100 times the number of videos.

Training: Training is done on the Sports-1M train split. As Sports-1M has many long videos, we randomly extract five 2-second-long clips from every training video. Clips are resized to a frame size of 128 × 171. During training, we randomly crop input clips into 16 × 112 × 112 crops for spatial and temporal jittering. We also horizontally flip them with 50% probability. Training is done by SGD with a mini-batch size of 30 examples. The initial learning rate is 0.003 and is divided by 2 every 150K iterations. The optimization is stopped at 1.9M iterations (about 13 epochs). Besides the C3D net trained from scratch, we also experiment with a C3D net fine-tuned from a model pre-trained on I380K.
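A hedged sketch of this training schedule in PyTorch: `sports1m_loader` is a hypothetical data loader assumed to yield 30-clip minibatches of randomly cropped, randomly flipped 16 × 112 × 112 clips, the momentum value is assumed (not stated above), and `C3D` is the class from the earlier sketch:

```python
import torch
import torch.nn as nn

model = C3D(num_classes=487).cuda()          # C3D class from the sketch above
criterion = nn.CrossEntropyLoss()            # softmax + log-loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.003, momentum=0.9)  # momentum assumed
# Divide the learning rate by 2 every 150K iterations; stop at 1.9M iterations.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=150_000, gamma=0.5)

iteration, max_iters = 0, 1_900_000
while iteration < max_iters:
    for clips, labels in sports1m_loader:    # hypothetical loader: (30, 3, 16, 112, 112) clips
        loss = criterion(model(clips.cuda()), labels.cuda())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()                     # schedule is per iteration, not per epoch
        iteration += 1
        if iteration >= max_iters:
            break
```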


Table 2. Sports-1M classification result. C3D outperforms [18] by 5% on top-5 video-level accuracy. (*) We note that the method of [29] uses long clips, so its clip-level accuracy is not directly comparable to that of C3D and DeepVideo.

Sports-1M classification results: Table 2 presents the results of our C3D networks compared with DeepVideo [18] and Convolution pooling [29]. We use only a single center crop per clip and pass it through the network to make the clip prediction. For video predictions, we average the clip predictions of 10 clips randomly extracted from the video. Some setting differences between the compared methods are worth noting. DeepVideo and C3D use short clips while Convolution pooling [29] uses much longer clips. DeepVideo uses more crops: 4 crops per clip and 80 crops per video, compared with 1 and 10 used by C3D, respectively. The C3D network trained from scratch yields an accuracy of 84.4% and the one fine-tuned from the I380K pre-trained model yields 85.5% in video top-5 accuracy. Both C3D networks outperform DeepVideo's networks. C3D is still 5.6% below the method of [29]. However, that method uses convolution pooling of deep image features on long clips of 120 frames, so it is not directly comparable to C3D and DeepVideo, which operate on much shorter clips. We note that the difference in top-1 accuracy for clips and videos of that method is small (1.6%) as it already uses 120-frame clips as inputs. In practice, convolution pooling or more sophisticated aggregation schemes [29] can be applied on top of C3D features to improve video-level performance.

C3D video descriptor: After training, C3D can be used as a feature extractor for other video analysis tasks. To extract C3D features, a video is split into 16-frame-long clips with an 8-frame overlap between two consecutive clips. These clips are passed to the C3D network to extract fc6 activations. The clip fc6 activations are averaged to form a 4096-dim video descriptor, which is then L2-normalized. We refer to this representation as the C3D video descriptor/feature in all experiments, unless we clearly specify otherwise.
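A small sketch of this descriptor extraction, assuming the `C3D` class sketched earlier (its `classifier[:2]` slice is fc6 plus its ReLU; taking the post-ReLU activation is my assumption):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def c3d_video_descriptor(model, video):
    """Split a (3, T, 112, 112) video into 16-frame clips with an 8-frame
    overlap, extract fc6 activations per clip, average them over the video,
    and L2-normalize the result into a 4096-dim descriptor."""
    model.eval()                                            # disable dropout
    fc6 = model.classifier[:2]                              # Linear(8192, 4096) + ReLU
    feats = []
    for start in range(0, video.shape[1] - 15, 8):
        clip = video[:, start:start + 16].unsqueeze(0)      # (1, 3, 16, 112, 112)
        feats.append(fc6(model.features(clip).flatten(1)))  # (1, 4096) fc6 activation
    descriptor = torch.cat(feats).mean(dim=0)               # average over clips
    return F.normalize(descriptor, p=2, dim=0)              # L2-normalized, 4096-dim
```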

What does C3D learn? We use the deconvolution method explained in [46] to understand what C3D is learning internally. We observe that C3D starts by focusing on appearance in the first few frames and tracks the salient motion in the subsequent frames. Figure 4 visualizes deconvolutions of two C3D conv5b feature maps with the highest activations projected back to image space. In the first example, the feature focuses on the whole person and then tracks the motion of the pole-vault performance over the rest of the frames. Similarly, in the second example it first focuses on the eyes and then tracks the motion happening around the eyes while applying makeup. Thus C3D differs from standard 2D ConvNets in that it selectively attends to both motion and appearance. We provide more visualizations in the supplementary material to give better insight into the learned features.

Action recognition

Dataset: We evaluate C3D features on the UCF101 dataset [38]. The dataset consists of 13,320 videos of 101 human action categories. We use the three-split setting provided with this dataset.

Classification model: We extract C3D features and input them to a multi-class linear SVM for training models. We experiment with the C3D descriptor using 3 different nets: C3D trained on I380K, C3D trained on Sports-1M, and C3D trained on I380K and fine-tuned on Sports-1M. In the multiple-nets setting, we concatenate the L2-normalized C3D descriptors of these nets.
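A sketch of this classification step using scikit-learn's linear SVM; the file names and the regularization constant are illustrative placeholders, not values from the paper:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical precomputed C3D video descriptors (4096-dim per video, or
# 12288-dim when the descriptors of three nets are concatenated).
train_feats = np.load("ucf101_train_c3d.npy")
train_labels = np.load("ucf101_train_labels.npy")
test_feats = np.load("ucf101_test_c3d.npy")
test_labels = np.load("ucf101_test_labels.npy")

svm = LinearSVC(C=1.0)                 # multi-class linear SVM (one-vs-rest)
svm.fit(train_feats, train_labels)
print("accuracy:", svm.score(test_feats, test_labels))
```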

Baselines: We compare the C3D feature with a few baselines: the current best hand-crafted features, namely improved dense trajectories (iDT) [44], and the widely used deep image features, namely Imagenet [16], using Caffe's Imagenet pre-trained model. For iDT, we use the bag-of-words representation with a codebook size of 5000 for each feature channel of iDT, which are trajectories, HOG, HOF, MBHx, and MBHy. We normalize the histogram of each channel separately using the L1-norm and concatenate these normalized histograms to form a 25K-dim feature vector for a video. For the Imagenet baseline, similar to C3D, we extract the Imagenet fc6 feature for each frame and average these frame features to form the video descriptor. A multi-class linear SVM is also used for these two baselines for a fair comparison.


Table 3. Action recognition results on UCF101. C3D compared with baselines and current state-of-the-art methods. Top: simple features with linear SVM; Middle: methods taking only RGB frames as inputs; Bottom: methods using multiple feature combinations.

Results: Table 3 presents the action recognition accuracy of C3D compared with the two baselines and the current best methods. The upper part shows the results of the two baselines. The middle part presents methods that use only RGB frames as inputs. The lower part reports the current best methods using all possible feature combinations (e.g., optical flow, iDT).

The fine-tuned C3D net performs best among the three C3D nets described previously. The performance gap between these three nets, however, is small (1%). From now on, we refer to the fine-tuned net as C3D, unless otherwise stated. C3D using one net, which has only 4,096 dimensions, obtains an accuracy of 82.3%. C3D with 3 nets boosts the accuracy to 85.2% while the dimension increases to 12,288. Combining C3D with iDT further improves the accuracy to 90.4%, while combining it with Imagenet yields only a 0.6% improvement. This indicates that C3D can capture both appearance and motion information well, so there is little benefit in combining it with Imagenet, which is an appearance-based deep feature. On the other hand, it is beneficial to combine C3D with iDT as they are highly complementary to each other: iDT features are hand-crafted, based on optical flow tracking and histograms of low-level gradients, while C3D captures high-level abstract/semantic information.

C3D with 3 nets achieves 85.2%, which is 9% and 16.4% better than the iDT and Imagenet baselines, respectively. In the RGB-only input setting, compared with CNN-based approaches, our C3D outperforms the deep networks of [18] and the spatial stream network in [36] by 19.8% and 12.6%, respectively. Both the deep networks of [18] and the spatial stream network in [36] use the AlexNet architecture. While the net in [18] is fine-tuned from their model pre-trained on Sports-1M, the spatial stream network in [36] is fine-tuned from an Imagenet pre-trained model. Our C3D differs from these CNN-based methods in terms of network architecture and basic operations.

In addition, C3D is trained on Sports-1M and used as is without any fine-tuning. Compared with Recurrent Neural Network (RNN) based methods, C3D outperforms Long-term Recurrent Convolutional Networks (LRCN) [6] and the LSTM composite model [39] by 14.1% and 9.4%, respectively. C3D with only RGB input still outperforms these two RNN-based methods even when they use both optical flow and RGB, as well as the temporal stream network in [36]. However, C3D needs to be combined with iDT to outperform the two-stream networks [36], the other iDT-based methods [31, 25], and the method that focuses on long-term modeling [29]. Apart from the promising numbers, C3D also has the advantage of simplicity compared to the other methods.

C3D is compact: In order to evaluate the compactness of C3D features, we use PCA to project the features into lower dimensions and report the classification accuracy of the projected features on UCF101 [38] using a linear SVM. We apply the same process to iDT [44] as well as Imagenet features [7] and compare the results in Figure 5. At the extreme setting with only 10 dimensions, C3D's accuracy is 52.8%, which is more than 20% better than that of Imagenet and iDT, both around 32%. At 50 and 100 dimensions, C3D obtains accuracies of 72.6% and 75.6%, which are about 10-12% better than Imagenet and iDT. Finally, with 500 dimensions, C3D achieves 79.4% accuracy, which is 6% better than iDT and 11% better than Imagenet. This indicates that our features are both compact and discriminative, which is very helpful for large-scale retrieval applications where low storage cost and fast retrieval are crucial.

Figure 5. C3D compared with Imagenet and iDT in low dimensions. C3D, Imagenet, and iDT accuracy on UCF101 using PCA dimensionality reduction and a linear SVM. C3D outperforms Imagenet and iDT by 10-20% in low dimensions.
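A sketch of the compactness experiment behind Figure 5: project the descriptors with PCA and retrain the linear SVM at each dimensionality (the file names are the same hypothetical placeholders as in the earlier SVM sketch):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

train_feats = np.load("ucf101_train_c3d.npy")     # hypothetical precomputed descriptors
train_labels = np.load("ucf101_train_labels.npy")
test_feats = np.load("ucf101_test_c3d.npy")
test_labels = np.load("ucf101_test_labels.npy")

for dim in (10, 50, 100, 500):
    pca = PCA(n_components=dim).fit(train_feats)            # fit projection on training features
    svm = LinearSVC(C=1.0).fit(pca.transform(train_feats), train_labels)
    print(dim, "dims:", svm.score(pca.transform(test_feats), test_labels))
```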

We qualitatively evaluate our learned C3D features to verify whether they are good generic features for video by visualizing the learned feature embedding on another dataset. We randomly select 100K clips from UCF101, then extract fc6 features for those clips using both Imagenet and C3D. These features are then projected to 2-dimensional space using t-SNE [43]. Figure 6 visualizes the feature embeddings of Imagenet and our C3D on UCF101. It is worth noting that we did not do any fine-tuning, as we wanted to verify whether the features show good generalization capability across datasets. We qualitatively observe that C3D features are better separated than Imagenet features.


Figure 6. Feature embedding. Feature embedding visualizations of Imagenet and C3D on the UCF101 dataset using t-SNE [43]. C3D features are more semantically separable than Imagenet features, suggesting that C3D is a better feature for videos. Each clip is visualized as a point, and clips belonging to the same action have the same color. Best viewed in color.
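A sketch of the embedding visualization in Figure 6, using scikit-learn's t-SNE; the input arrays are hypothetical precomputed clip features and integer action labels:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

feats = np.load("ucf101_clip_fc6.npy")      # (num_clips, 4096) C3D or Imagenet fc6 features
labels = np.load("ucf101_clip_labels.npy")  # (num_clips,) integer action labels

embedding = TSNE(n_components=2).fit_transform(feats)   # project features to 2-D
plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, s=2, cmap="tab20")
plt.savefig("feature_embedding.png", dpi=200)
```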

Action Similarity Labeling
Dataset: The ASLAN dataset consists of 3,631 videos from 432 action classes. The task is to predict whether a given pair of videos belongs to the same action or to different actions. We use the prescribed 10-fold cross-validation with the splits provided with the dataset. This problem differs from action recognition in that the task focuses on predicting action similarity rather than the actual action label. The task is quite challenging because the test set contains videos of "never-seen-before" actions.

Features: We split videos into 16-frame clips with an overlap of 8 frames. We extract four types of C3D features for each clip: prob, fc7, fc6, and pool5. The video features are computed by averaging the clip features separately for each type, followed by an L2 normalization.

Classification model: We follow the same setup used in [21]. Given a pair of videos, we compute the 12 different distances provided in [21]. With 4 types of features, we obtain a 48-dimensional (12 × 4 = 48) feature vector for each video pair. As these 48 distances are not comparable to each other, we normalize them independently so that each dimension has zero mean and unit variance. Finally, a linear SVM is trained on these 48-dim feature vectors to classify video pairs as same or different. Besides comparing with current methods, we also compare C3D with a strong baseline using deep image-based features. The baseline has the same setting as our C3D, with C3D features replaced by Imagenet features.
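A sketch of this pair-classification pipeline; `aslan_pair_distances.npy` is a hypothetical array holding the 12 distances for each of the 4 feature types per video pair, and the handling of the 10 cross-validation folds is omitted for brevity:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

pair_distances = np.load("aslan_pair_distances.npy")  # (num_pairs, 12 * 4) distance features
same_labels = np.load("aslan_same_labels.npy")        # 1 = same action, 0 = different

# Normalize each of the 48 distance dimensions to zero mean and unit variance,
# then train a linear SVM to decide same / different.
scaler = StandardScaler().fit(pair_distances)
svm = LinearSVC(C=1.0).fit(scaler.transform(pair_distances), same_labels)
```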

Figure 7. Action similarity labeling result. ROC curve of C3D evaluated on ASLAN. C3D achieves 86.5% on AUC and outperforms current state-of-the-art by 11.1%.


Table 4. Action similarity labeling result on ASLAN. C3D significantly outperforms state-of-the-art method [45] by 9.6% in accuracy and by 11.1% in area under ROC curve.

Results: We report the results of C3D and compare them with state-of-the-art methods in Table 4. While most current methods use multiple hand-crafted features, strong encoding methods (VLAD, Fisher Vector), and complex learning models, our method uses a simple averaging of C3D features over the video and a linear SVM. C3D significantly outperforms the state-of-the-art method [45] by 9.6% in accuracy and 11.1% in area under the ROC curve (AUC). The Imagenet baseline performs reasonably well, just 1.2% below the state-of-the-art method [45], but 10.8% worse than C3D due to its lack of motion modeling. Figure 7 plots the ROC curves of C3D compared with current methods and human performance. C3D clearly makes a significant improvement, reaching roughly halfway from the current state-of-the-art method to human performance (98.9%).

Scene and Object Recognition

Datasets: For dynamic scene recognition, we evaluate C3D on two benchmarks: YUPENN [4] and Maryland [35]. YUPENN consists of 420 videos of 14 scene categories and Maryland has 130 videos of 13 scene categories. For object recognition, we test C3D on the egocentric dataset [32], which consists of 42 types of everyday objects. Note that this dataset is egocentric: all videos are recorded from a first-person view and have quite different appearance and motion characteristics from the videos in our training dataset.

Classification model: For both datasets, we use the same setup of feature extraction and linear SVM for classification, and we follow the same leave-one-out evaluation protocol described by the authors of these datasets. For the object dataset, the standard evaluation is based on frames. However, C3D takes a video clip of length 16 frames to extract a feature, so we slide a window of 16 frames over all videos to extract C3D features. We choose the ground-truth label for each clip to be the most frequently occurring label in the clip. If the most frequent label occurs in fewer than 8 frames, we consider the clip a negative clip with no object and discard it in both training and testing. We train and test C3D features using a linear SVM and report the object recognition accuracy. We follow the same split provided in [32]. We also compare C3D with a baseline using Imagenet features on these 3 benchmarks.
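A small sketch of the clip-labeling rule just described (the object names are made up for illustration):

```python
from collections import Counter

def clip_label(frame_labels):
    """Assign a 16-frame clip the most frequent frame-level label; if that
    label covers fewer than 8 of the 16 frames, the clip is treated as a
    negative clip with no object and discarded (returned as None)."""
    label, count = Counter(frame_labels).most_common(1)[0]
    return label if count >= 8 else None

print(clip_label(["mug"] * 11 + ["bowl"] * 5))               # -> "mug"
print(clip_label(["mug"] * 6 + ["bowl"] * 5 + ["cup"] * 5))  # -> None (discarded)
```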

Results: Table 5 reports our C3D results and compares them with the current best methods. On scene classification, C3D outperforms the state-of-the-art method [9] by 10% and 1.9% on Maryland and YUPENN, respectively. It is worth noting that C3D uses only a linear SVM with simple averaging of clip features, while the second-best method [9] uses different complex feature encodings (FV, LLC, and dynamic pooling). The Imagenet baseline achieves performance similar to C3D on Maryland and 1.4% lower than C3D on YUPENN. On object recognition, C3D obtains 22.3% accuracy and outperforms [32] by 10.3% with only a linear SVM, whereas the competing method uses an RBF kernel on strong SIFT-RANSAC feature matching.

Compared with the Imagenet baseline, C3D is still 3.4% worse. This can be explained by the fact that C3D uses a smaller input resolution (128 × 128) compared with the full-size resolution (256 × 256) used by Imagenet. Since C3D is trained only on Sports-1M videos without any fine-tuning, while Imagenet is fully trained on 1000 object categories, we did not expect C3D to work that well on this task. The result is very surprising and shows how generic C3D is at capturing appearance and motion information in videos.

Table 5. Scene recognition accuracy. C3D using a simple linear SVM outperforms current methods on Maryland and YUPENN.

Runtime Analysis
We compare the runtime of C3D with iDT [44] and the temporal stream network [36]. For iDT, we use the code kindly provided by the authors [44]. For [36], there is no public model available to evaluate. However, this method uses Brox's optical flow [3] as input. We evaluate the runtime of Brox's method using two different versions: the CPU implementation provided by the authors [3] and the GPU implementation provided in OpenCV.

We report the runtime of the three above-mentioned methods to extract features (including I/O) for the whole UCF101 dataset in Table 6, using a single CPU or a single Tesla K40 GPU. [36] reported a computation time (without I/O) of 0.06s for a pair of images. In our experiment, Brox's GPU implementation takes 0.85-0.9s per image pair including I/O. Note that this is not a fair comparison for iDT as it uses only the CPU. We cannot find any GPU implementation of this method, and it is not trivial to implement a parallel version of this algorithm on the GPU. Note that C3D is much faster than real time, processing at 313 fps, while the other two methods have a processing speed of less than 4 fps.

Table 6. Runtime analysis on UCF101. C3D is 91x faster than improved dense trajectories [44] and 274x faster than Brox’s GPU implementation in OpenCV.

Conclusions

In this work we try to address the problem of learning spatiotemporal features for videos using 3D ConvNets trained on large-scale video datasets. We conducted a systematic study to find the best temporal kernel length for 3D ConvNets. We showed that C3D can model appearance and motion information simultaneously and outperforms the 2D ConvNet features on various video analysis tasks. We demonstrated that C3D features with a linear classifier can outperform or approach current best methods on different video analysis benchmarks. Last but not least, the proposed C3D features are efficient, compact, and extremely simple to use.

C3D source code and pre-trained model are available at http://vlg.cs.dartmouth.edu/c3d.

Appendix A: Effects of Input Resolution

As part of the architecture study, we examine the effects of input resolution on 3D ConvNets. We use the same common network settings described in section 3. We fix all convolution kernels to 3 × 3 × 3 and vary the input resolution to study its effect. We experiment with 3 different nets with input resolutions of 64 × 64, 128 × 128, and 256 × 256, namely net-64, net-128, and net-256, respectively. Note that net-128 is equivalent to the depth-3 net in section 3.2. Because of the difference in input resolutions, these nets have different output sizes at the last pooling layer, leading to a significant difference in the number of parameters. Table 7 reports the numbers of parameters and the training times of these nets. Figure 8 presents the clip accuracy of these nets on UCF101 test split-1. Net-128 outperforms net-64 by 3.1% and attains accuracy comparable to net-256. This indicates that net-128 provides a good trade-off between training time, accuracy, and memory consumption. We note that, with the current GPU memory limit, one has to use model parallelism to train C3D with 256 × 256 input resolution.

Table 7. Number of parameters and training time comparison of 3D ConvNets with different input resolutions. Note that net-128 is equivalent to the depth-3 net in the paper.

Appendix B: Visualization of C3D Learned Features

For a better understanding of what C3D learned internally, we provide additional visualizations using deconvolution.

Deconvolutions of C3D: We randomly select 20K clips from UCF101. We group clips that fire strongly for the same feature map at a pre-selected convolution layer. We use deconvolution [46] to project the top activations of these clips back into image space. We visualize the gradients causing the activation together with the corresponding cropped image sequences. Note that we did not do any fine-tuning of the C3D model on UCF101.

Figures 9 and 10 visualize deconvolutions of C3D learned feature maps at the conv2a and conv3b layers. Visualizations of the same feature map are grouped together. For Figures 11, 12, 13, and 14, each figure presents the deconvolutions of one learned feature map of the conv5b layer. Finally, Figure 15 compares the deconvolutions of several C3D conv5b feature maps with optical flow. As shown in the visualizations, at the early convolution layer conv2a, C3D learns low-level motion patterns such as moving edges, blobs, shot changes, edge orientation changes, and color changes. At the higher conv3b layer, C3D learns larger moving patterns of corners, textures, body parts, and trajectories. Finally, at the deepest convolution layer, conv5b, C3D learns more complicated motion patterns such as moving circular objects and biking-like motions.

Figure 9. Deconvolutions of C3D conv2a feature maps. Each group is a C3D conv2a learned feature map. First two rows: the learned filters detect moving edges and blobs. The last row: the learned filters detect shot changes, edge orientation changes, and color changes. Best viewed in a color screen.

Figure 10. Deconvolutions of C3D conv3b feature maps. Each group is a C3D conv3b learned feature map. Upper: feature maps detect moving corners and moving textures. Middle: feature maps detect moving body parts. Lower: feature maps detect object trajectories and circular objects. Best viewed in a color screen.

Figure 11. Deconvolutions of a C3D conv5b learned feature map which detects the motions of circular objects. In the second-to-last clip it detects a moving head, while in the last clip it detects a moving hair curler. Best viewed in a color screen.

Figure 12. Deconvolutions of a C3D conv5b learned feature map which detects biking-like motions. Note that the last two clips have no biking but their motion patterns are similar to biking motions. Best viewed in a color screen.

Figure 13. Deconvolutions of a C3D conv5b learned feature map which detects face-related motions: applying eye makeup, applying lipstick, and brushing teeth. Best viewed in a color screen.

Figure 14. Deconvolutions of a C3D conv5b learned feature map which detects balance-beam-like motions. In the last clip, it detects hammering which shares similar motion patterns with balance beam. Best viewed in a color screen.

Figure 15. Deconvolutions of C3D conv5b learned feature maps compared with optical flow. Optical flow fires at all moving pixels while C3D pays attention only to salient motions. Best viewed in a color screen.