The difference between Iteration loss and Train net output loss in Caffe log files

A Caffe training log contains two different loss values: one printed as "Iteration ####, loss = ####" and another printed as "Train net output #0: loss = ####", as in the log excerpt below.

[Log excerpt: an "Iteration ####, loss = ####" line followed by "Train net output #0: loss = ####"]
To understand how the two values differ, we need to find the code that prints these lines. It turns out to live in solver.cpp:


template <typename Dtype>
void Solver<Dtype>::Step(int iters) {
  const int start_iter = iter_;
  const int stop_iter = iter_ + iters;
  int average_loss = this->param_.average_loss();
  losses_.clear();
  smoothed_loss_ = 0;

  while (iter_ < stop_iter) {
    // Zero the parameter gradients (diffs) accumulated in the previous iteration
    net_->ClearParamDiffs();
    if (param_.test_interval() && iter_ % param_.test_interval() == 0
        && (iter_ > 0 || param_.test_initialization())
        && Caffe::root_solver()) {
      TestAll();
      if (requested_early_exit_) {
        // Break out of the while loop because stop was requested while testing.
        break;
      }
    }

    for (int i = 0; i < callbacks_.size(); ++i) {
      callbacks_[i]->on_start();
    }
    const bool display = param_.display() && iter_ % param_.display() == 0;
    net_->set_debug_info(display && param_.debug_info());
    // accumulate the loss and gradient
    Dtype loss = 0;
    for (int i = 0; i < param_.iter_size(); ++i) {
      loss += net_->ForwardBackward();
    }
    loss /= param_.iter_size();
    // average the loss across iterations for smoothed reporting
    UpdateSmoothedLoss(loss, start_iter, average_loss);
    if (display) {
      LOG_IF(INFO, Caffe::root_solver()) << "Iteration " << iter_
          << ", loss = " << smoothed_loss_;
      const vector<Blob<Dtype>*>& result = net_->output_blobs();
      int score_index = 0;
      for (int j = 0; j < result.size(); ++j) {
        const Dtype* result_vec = result[j]->cpu_data();
        const string& output_name =
            net_->blob_names()[net_->output_blob_indices()[j]];
        const Dtype loss_weight =
            net_->blob_loss_weights()[net_->output_blob_indices()[j]];
        for (int k = 0; k < result[j]->count(); ++k) {
          ostringstream loss_msg_stream;
          if (loss_weight) {
            loss_msg_stream << " (* " << loss_weight
                            << " = " << loss_weight * result_vec[k] << " loss)";
          }
          LOG_IF(INFO, Caffe::root_solver()) << "    Train net output #"
              << score_index++ << ": " << output_name << " = "
              << result_vec[k] << loss_msg_stream.str();
        }
      }
    }
    for (int i = 0; i < callbacks_.size(); ++i) {
      callbacks_[i]->on_gradients_ready();
    }
    ApplyUpdate();

    // Increment the internal iter_ counter -- its value should always indicate
    // the number of times the weights have been updated.
    ++iter_;

    SolverAction::Enum request = GetRequestedAction();

    // Save a snapshot if needed.
    if ((param_.snapshot()
         && iter_ % param_.snapshot() == 0
         && Caffe::root_solver()) ||
         (request == SolverAction::SNAPSHOT)) {
      Snapshot();
    }
    if (SolverAction::STOP == request) {
      requested_early_exit_ = true;
      // Break out of training loop.
      break;
    }
  }
}

In the listing above, the Iteration loss is printed by the LOG_IF statement that writes "Iteration " << iter_ << ", loss = " << smoothed_loss_, and the Train net output loss is printed by the LOG_IF statement inside the loop over output blobs, which writes "    Train net output #" followed by the output name, its raw value, and loss_msg_stream.
Reading the source shows that Step performs the actual iteration-by-iteration optimization. The Iteration loss is the average of the per-iteration loss over the most recent average_loss iterations, i.e. a smoothed loss. Its value is smoothed_loss_, which is updated by calling UpdateSmoothedLoss(loss, start_iter, average_loss):

template <typename Dtype>
void Solver<Dtype>::UpdateSmoothedLoss(Dtype loss, int start_iter,
    int average_loss) {
  if (losses_.size() < average_loss) {
    // Window not full yet: grow it and keep a running mean of everything seen so far.
    losses_.push_back(loss);
    int size = losses_.size();
    smoothed_loss_ = (smoothed_loss_ * (size - 1) + loss) / size;
  } else {
    // Window full: treat losses_ as a circular buffer, replace the oldest entry
    // and adjust the moving average incrementally.
    int idx = (iter_ - start_iter) % average_loss;
    smoothed_loss_ += (loss - losses_[idx]) / average_loss;
    losses_[idx] = loss;
  }
}
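To see what this smoothing actually does, here is a minimal standalone sketch (not Caffe code) that applies the same update rule to a made-up sequence of per-iteration losses, assuming average_loss = 3; all values are hypothetical:

#include <iostream>
#include <vector>

// Standalone re-implementation of the UpdateSmoothedLoss rule, for illustration only.
int main() {
  const int average_loss = 3;                                  // hypothetical solver setting
  std::vector<double> raw = {2.3, 1.9, 1.7, 1.6, 1.8, 1.5};    // made-up per-iteration losses
  std::vector<double> losses;                                  // window of recent losses
  double smoothed = 0.0;
  for (int iter = 0; iter < static_cast<int>(raw.size()); ++iter) {
    const double loss = raw[iter];
    if (static_cast<int>(losses.size()) < average_loss) {
      // Window not full yet: running mean of everything seen so far.
      losses.push_back(loss);
      const int size = losses.size();
      smoothed = (smoothed * (size - 1) + loss) / size;
    } else {
      // Window full: replace the oldest entry (circular buffer) and adjust the mean.
      const int idx = iter % average_loss;                     // start_iter == 0 here
      smoothed += (loss - losses[idx]) / average_loss;
      losses[idx] = loss;
    }
    std::cout << "Iteration " << iter << ", loss = " << smoothed << std::endl;
  }
  return 0;
}

Each printed value is the mean of the last (at most) average_loss raw losses, which is exactly what the "Iteration ####, loss = ####" line in the training log reports.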

The Train net output loss, by contrast, is the raw value of each individual output blob of the training net, printed together with loss_msg_stream, which appends the loss weight and the weighted contribution. The test phase prints an analogous "Test net output" line from Solver::Test, where each score is first averaged over the test iterations:

ostringstream loss_msg_stream;
const Dtype mean_score = test_score[i] / param_.test_iter(test_net_id);
if (loss_weight) {
  loss_msg_stream << " (* " << loss_weight
                  << " = " << loss_weight * mean_score << " loss)";
}
LOG(INFO) << "    Test net output #" << i << ": " << output_name << " = "
          << mean_score << loss_msg_stream.str();
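To make the mean_score computation in that snippet concrete, here is a minimal standalone sketch with made-up numbers (the per-batch values, the output index and the loss weight are all hypothetical): each test output is accumulated over test_iter batches and divided by test_iter before being logged.

#include <iostream>
#include <vector>

int main() {
  // Hypothetical per-batch values of one loss output during testing (test_iter = 4).
  std::vector<double> batch_loss = {0.82, 0.75, 0.91, 0.68};
  const double loss_weight = 1.0;                         // assumed loss weight

  double test_score = 0.0;
  for (double v : batch_loss) test_score += v;            // accumulated across test batches
  const double mean_score = test_score / batch_loss.size();

  // Mirrors the log line produced by Solver::Test for this output.
  std::cout << "    Test net output #0: loss = " << mean_score
            << " (* " << loss_weight << " = " << loss_weight * mean_score
            << " loss)" << std::endl;
  return 0;
}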

To get a more intuitive feel for Train net output loss, consider the training log of a GoogLeNet run.
[Log excerpt: a GoogLeNet training iteration with three Train net output lines]
You can see that there are three outputs, each with its own loss weight (0.3, 0.3, 1). Anyone familiar with the GoogLeNet architecture knows that it has three loss branches, which show up in the log as the blobs loss1/loss1, loss2/loss1 and loss3/loss3.
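Because the per-iteration loss returned by net_->ForwardBackward() is the loss-weighted sum over the loss-carrying blobs (assuming, as here, that they are all output blobs), GoogLeNet's raw iteration loss is roughly 0.3 * loss1 + 0.3 * loss2 + 1 * loss3, and that value is then smoothed into the Iteration loss line. A minimal sketch with made-up values:

#include <iostream>

int main() {
  // Hypothetical raw values of the three GoogLeNet loss outputs for one iteration.
  const double loss1 = 2.1;   // blob loss1/loss1, loss_weight 0.3
  const double loss2 = 1.8;   // blob loss2/loss1, loss_weight 0.3
  const double loss3 = 1.5;   // blob loss3/loss3, loss_weight 1.0

  // The three "Train net output" lines report the raw values above; the
  // weighted sum below is what feeds the smoothed "Iteration ..., loss" value.
  const double total = 0.3 * loss1 + 0.3 * loss2 + 1.0 * loss3;
  std::cout << "weighted total loss = " << total << std::endl;   // prints 2.67
  return 0;
}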

Summary:
Iteration loss is the smoothed loss: the average of the per-iteration loss over the most recent average_loss iterations.
Train net output loss is the raw, unweighted loss value of each individual output of the training net.

Author: GL3_24
Source: CSDN
Copyright belongs to the author. Please contact the author for authorization before reposting.