Anchor-Free Object Detection Models

FCOS: Fully Convolutional One-Stage Object Detection (open-sourced)
FoveaBox: Beyond Anchor-based Object Detector (not open-sourced)

FCOS

Abstract: We propose a fully convolutional one-stage object detector (FCOS) that solves object detection in a per-pixel prediction fashion, analogous to semantic segmentation. Almost all state-of-the-art object detectors, such as RetinaNet, SSD, YOLOv3, and Faster R-CNN, rely on predefined anchor boxes. In contrast, our proposed detector FCOS is anchor-box free, as well as proposal free. By eliminating the predefined anchor boxes, FCOS completely avoids the complicated computation related to anchor boxes, such as calculating overlaps during training, and significantly reduces the training memory footprint. More importantly, we also avoid all the hyper-parameters related to anchor boxes, which are often very sensitive to the final detection performance. With the only post-processing step being non-maximum suppression (NMS), our detector FCOS outperforms previous anchor-based one-stage detectors with the advantage of being much simpler. For the first time, we demonstrate a much simpler and more flexible detection framework that achieves improved detection accuracy. We hope the proposed FCOS framework can serve as a simple yet strong alternative for many other instance-level tasks.
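Since NMS is the only post-processing step FCOS relies on, it is worth seeing how little machinery that is. Below is a minimal greedy IoU-based NMS sketch (a generic illustration, not the paper's code; the box format `(x1, y1, x2, y2)` and the 0.5 threshold are my own assumptions):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2) format."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area1 = (box[2] - box[0]) * (box[3] - box[1])
    area2 = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area1 + area2 - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns indices of the kept boxes."""
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # Drop every remaining box that overlaps the kept one too much.
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep
```

In practice one would use an optimized implementation (e.g. `torchvision.ops.nms`), but the logic is exactly this greedy loop.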

Algorithm

Network: [Backbone] + [Feature Pyramid] + [Classification + Center-ness + Regression] heads
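Because there are no anchors, each feature-map cell itself acts as a training sample: FCOS maps a cell (x, y) on a feature map with stride s back to the image coordinate (s // 2 + x * s, s // 2 + y * s). A small sketch of that mapping (illustrative; the function name is my own):

```python
import numpy as np

def fcos_locations(feat_h, feat_w, stride):
    """Map every cell (x, y) of a feat_h x feat_w feature map back to image
    coordinates (stride // 2 + x * stride, stride // 2 + y * stride),
    i.e. roughly the center of the cell's receptive field on the image."""
    xs = stride // 2 + np.arange(feat_w) * stride
    ys = stride // 2 + np.arange(feat_h) * stride
    xx, yy = np.meshgrid(xs, ys)  # 'xy' indexing: row-major over (y, x)
    return np.stack([xx.ravel(), yy.ravel()], axis=1)  # shape (feat_h * feat_w, 2)
```

Each of these locations then directly predicts a class score, a center-ness score, and a 4-vector box regression, with no anchor matching step in between.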

(figure: FCOS network architecture)

Center-ness is a brand-new concept here and the main novelty of the paper. The key takeaway is the anchor-box-free approach.

Loss function:

$$
L(\{p_{x,y}\}, \{t_{x,y}\}) = \frac{1}{N_{pos}} \sum_{x,y} L_{cls}(p_{x,y}, c^{*}_{x,y}) + \frac{\lambda}{N_{pos}} \sum_{x,y} \mathbb{1}_{\{c^{*}_{x,y} > 0\}} L_{reg}(t_{x,y}, t^{*}_{x,y})
$$

where $L_{cls}$ is the focal loss, $L_{reg}$ is the IoU loss, $N_{pos}$ is the number of positive samples, and $\lambda$ balances the two terms. The regression target $t^{*}_{x,y} = (l^{*}, t^{*}, r^{*}, b^{*})$ is the distances from the location to the four sides of its ground-truth box.
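The regression branch predicts, for each positive location (x, y) inside a ground-truth box (x0, y0, x1, y1), the distances (l*, t*, r*, b*) = (x - x0, y - y0, x1 - x, y1 - y) to the box's four sides. A minimal sketch of that target computation (illustrative; not the paper's code):

```python
import numpy as np

def regression_targets(location, box):
    """FCOS regression target for a location (x, y) inside a ground-truth
    box (x0, y0, x1, y1): distances (l*, t*, r*, b*) to the four sides."""
    x, y = location
    x0, y0, x1, y1 = box
    return np.array([x - x0, y - y0, x1 - x, y1 - y], dtype=float)
```

All four targets are positive exactly when the location lies inside the box, which is what makes every in-box location a usable positive sample without any anchors.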

Center-ness:

$$
\text{centerness}^{*} = \sqrt{\frac{\min(l^{*}, r^{*})}{\max(l^{*}, r^{*})} \times \frac{\min(t^{*}, b^{*})}{\max(t^{*}, b^{*})}}
$$

At test time, the predicted center-ness is multiplied into the classification score, so low-quality boxes predicted by locations far from an object's center are down-weighted before NMS.

Highlights

  • Totally anchor-free: FCOS completely avoids the complicated computation related to anchor boxes and all hyper-parameters of anchor boxes.
  • Memory-efficient: FCOS uses 2x less training memory footprint than its anchor-based counterpart RetinaNet.
  • Better performance: Compared to RetinaNet, FCOS has better performance under exactly the same training and testing settings.
  • State-of-the-art performance: Without bells and whistles, FCOS achieves state-of-the-art performance. It achieves 41.0% AP (ResNet-101-FPN) and 42.1% AP (ResNeXt-32x8d-101) on COCO test-dev.
  • Faster: FCOS enjoys faster training and inference speed than RetinaNet. (Hmm... it doesn't actually feel faster, though.)

Reposted from: https://zhuanlan.zhihu.com/p/62198865

to be continued