Softmax Regression in Detail

Basic Model

Softmax regression is the general form of logistic regression: it extends the logistic function to $K$ classes ($K$ being the number of outputs of the neural network model) rather than just two, making it a multi-class classifier. If $K = 2$, softmax regression reduces to logistic regression.

Logistic regression uses the sigmoid function to map the value of $wx+b$ into the interval $(0, 1)$; the output is the probability that the sample's label equals 1. Softmax regression instead uses the softmax function to map the values of $wx+b$ into $[0, 1]$; the output is a vector whose entries are the probabilities of the sample belonging to each label.

As shown in the figure below:

[Figure: softmax regression]

Assume the softmax model has $n$ inputs. Write $w_i = (w_{i1}, w_{i2}, \ldots, w_{in})^T$ for $i = 1, 2, \ldots, K$, and $x^{(j)} = (x_{j1}, x_{j2}, \ldots, x_{jn})$ for $j = 1, 2, \ldots, m$: there are $K$ classes and $m$ samples.

Let:

$$z_i = w_i x + b_i$$

$$h_w(x^{(j)}) = \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_K \end{bmatrix} = \frac{1}{\sum_{i=1}^{K} e^{z_i}} \begin{bmatrix} e^{z_1} \\ e^{z_2} \\ \vdots \\ e^{z_K} \end{bmatrix}$$
The class corresponding to the largest entry of the output vector above is the final predicted class.
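As a minimal sketch of this forward pass (assuming NumPy; the dimensions and weights here are made-up placeholders), prediction amounts to computing $z = Wx + b$ and taking the argmax of the softmax output:

```python
import numpy as np

def softmax(z):
    # Subtracting max(z) leaves the result unchanged but
    # prevents overflow in exp for large logits.
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Hypothetical sizes: K = 3 classes, n = 4 inputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # row i holds w_i
b = np.zeros(3)
x = rng.normal(size=4)

z = W @ x + b                 # z_i = w_i x + b_i
p = softmax(z)                # h_w(x): class probabilities
print(p, p.sum())             # entries sum to 1
print("predicted class:", p.argmax())
```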

Loss Function

The loss function for softmax classification is the negative log-likelihood, which we minimize:
$$\begin{aligned} L(w) &= -\log P(y \mid x; w) \\ &= -\log \prod_{k=1}^{K} \left(\frac{e^{z_k}}{\sum_{j=1}^{K} e^{z_j}}\right)^{y_k} \\ &= -\sum_{k=1}^{K} y_k \log\left(\frac{e^{z_k}}{\sum_{j=1}^{K} e^{z_j}}\right) \end{aligned}$$
Note: $y_k = I\{y = k\}$ is an indicator function: it equals 1 when the sample belongs to class $k$, and 0 otherwise. Equivalently, the label $y$ of a sample $x$ can be viewed as a one-hot vector $y = (y_1, y_2, \ldots, y_K)$ in which exactly one element is 1, e.g., $y = (1, 0, \ldots, 0)$.

Our objective is:

$$\min_w L(w)$$
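A sketch of this loss for a single sample, assuming NumPy and a one-hot label as described above (the probabilities below are made up for illustration):

```python
import numpy as np

def cross_entropy(s, y):
    # L(w) = -sum_k y_k * log(s_k); with a one-hot y this is
    # just -log of the probability assigned to the true class.
    return -np.sum(y * np.log(s + 1e-12))  # epsilon guards log(0)

s = np.array([0.7, 0.2, 0.1])   # softmax output
y = np.array([1.0, 0.0, 0.0])   # one-hot label: class 0
print(cross_entropy(s, y))       # -log(0.7) ≈ 0.357
```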

Solving for the Optimal Parameters

We solve for the optimal parameters via gradient descent.

For the $j$-th sample, let its $i$-th softmax output be:

$$s_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \quad i = 1, 2, \ldots, K$$
For a single sample:
$$\begin{aligned} \frac{\partial L}{\partial w_i} &= \frac{\partial L}{\partial z_i} \frac{\partial z_i}{\partial w_i} \\ \frac{\partial L}{\partial b_i} &= \frac{\partial L}{\partial z_i} \frac{\partial z_i}{\partial b_i} \end{aligned}$$
Clearly:

$$\frac{\partial z_i}{\partial w_i} = x, \qquad \frac{\partial z_i}{\partial b_i} = 1$$
So the core problem is computing $\frac{\partial L}{\partial z_i}$. Since $L$ depends on $z_i$ through every output $s_1, \ldots, s_K$, the chain rule gives:

$$\frac{\partial L}{\partial z_i} = \sum_{k=1}^{K} \left[\frac{\partial L}{\partial s_k} \frac{\partial s_k}{\partial z_i}\right]$$
First, compute $\frac{\partial L}{\partial s_k}$ (writing the summation index as $l$ to avoid clashing with the free index $k$):

$$\frac{\partial L}{\partial s_k} = \frac{\partial \left(-\sum_{l=1}^{K} y_l \log s_l\right)}{\partial s_k} = -\frac{y_k}{s_k}$$

Next, compute $\frac{\partial s_k}{\partial z_i}$:

First, recall the quotient rule for derivatives:

$$f(x) = \frac{g(x)}{h(x)}, \qquad f'(x) = \frac{g'(x)h(x) - g(x)h'(x)}{[h(x)]^2}$$
There are two cases to consider:

(1) When $k \ne i$:

$$\begin{aligned} \frac{\partial s_k}{\partial z_i} &= \frac{\partial}{\partial z_i}\left(\frac{e^{z_k}}{\sum_{j=1}^{K} e^{z_j}}\right) \\ &= \frac{-e^{z_k} \cdot e^{z_i}}{\left(\sum_{j=1}^{K} e^{z_j}\right)^2} \\ &= -\frac{e^{z_k}}{\sum_{j=1}^{K} e^{z_j}} \cdot \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}} \\ &= -s_k s_i \end{aligned}$$
(2) When $k = i$:

$$\begin{aligned} \frac{\partial s_k}{\partial z_i} &= \frac{\partial s_i}{\partial z_i} = \frac{\partial}{\partial z_i}\left(\frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}\right) \\ &= \frac{e^{z_i}\sum_{j=1}^{K} e^{z_j} - (e^{z_i})^2}{\left(\sum_{j=1}^{K} e^{z_j}\right)^2} \\ &= \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}} \cdot \frac{\sum_{j=1}^{K} e^{z_j} - e^{z_i}}{\sum_{j=1}^{K} e^{z_j}} \\ &= s_i(1 - s_i) \end{aligned}$$
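Together, the two cases say the Jacobian of the softmax is $\operatorname{diag}(s) - s s^T$. A quick finite-difference sketch of this claim (assuming NumPy; the test vector is arbitrary):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.random.default_rng(0).normal(size=4)
s = softmax(z)

# Analytic Jacobian from the two cases above:
# ds_k/dz_i = s_i(1 - s_i) if k == i, else -s_k * s_i.
J = np.diag(s) - np.outer(s, s)

# Central finite differences; column i approximates ds/dz_i.
eps = 1e-6
J_num = np.zeros((4, 4))
for i in range(4):
    dz = np.zeros(4)
    dz[i] = eps
    J_num[:, i] = (softmax(z + dz) - softmax(z - dz)) / (2 * eps)

print(np.max(np.abs(J - J_num)))  # ~1e-10: both cases check out
```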
Therefore:

$$\begin{aligned} \frac{\partial L}{\partial z_i} &= \sum_{k=1}^{K}\left[\frac{\partial L}{\partial s_k}\frac{\partial s_k}{\partial z_i}\right] = \sum_{k=1}^{K}\left[-\frac{y_k}{s_k}\frac{\partial s_k}{\partial z_i}\right] \\ &= -\frac{y_i}{s_i}\frac{\partial s_i}{\partial z_i} + \sum_{k=1, k \ne i}^{K}\left[-\frac{y_k}{s_k}\frac{\partial s_k}{\partial z_i}\right] \\ &= -\frac{y_i}{s_i}\, s_i(1 - s_i) + \sum_{k=1, k \ne i}^{K}\left[-\frac{y_k}{s_k}\cdot(-s_k s_i)\right] \\ &= y_i(s_i - 1) + \sum_{k=1, k \ne i}^{K} y_k s_i \\ &= -y_i + y_i s_i + \sum_{k=1, k \ne i}^{K} y_k s_i \\ &= -y_i + s_i \sum_{k=1}^{K} y_k \end{aligned}$$
Since the label $y$ of a sample $x$ is a one-hot vector $y = (y_1, y_2, \ldots, y_K)$ with exactly one element equal to 1, e.g., $y = (1, 0, \ldots, 0)$, we have $\sum_{k=1}^{K} y_k = 1$, and therefore:
$$\frac{\partial L}{\partial z_i} = s_i - y_i$$
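This compact result can also be verified numerically (a sketch assuming NumPy; the logits and label are arbitrary):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def loss(z, y):
    return -np.sum(y * np.log(softmax(z)))

z = np.random.default_rng(1).normal(size=5)
y = np.zeros(5)
y[2] = 1.0                            # one-hot label: class 2

grad_analytic = softmax(z) - y        # dL/dz = s - y

eps = 1e-6
grad_num = np.array([
    (loss(z + eps * e_i, y) - loss(z - eps * e_i, y)) / (2 * eps)
    for e_i in np.eye(5)
])
print(np.max(np.abs(grad_analytic - grad_num)))  # ~1e-10
```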
The final result is therefore:

$$\frac{\partial L}{\partial w_i} = (s_i - y_i)x, \qquad \frac{\partial L}{\partial b_i} = s_i - y_i$$
So the update rule is:

$$w_i = w_i - \eta (s_i - y_i)x, \qquad b_i = b_i - \eta (s_i - y_i)$$
Iterate until convergence.
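Putting everything together, here is a sketch of batch gradient descent with these update rules, assuming NumPy; the synthetic data and hyperparameters (learning rate, step count) are made up for illustration:

```python
import numpy as np

def softmax(Z):
    # Row-wise softmax for a batch of logit vectors.
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

# Hypothetical linearly separable data: m samples, n features, K classes.
rng = np.random.default_rng(0)
m, n, K = 200, 2, 3
X = rng.normal(size=(m, n))
labels = (X @ rng.normal(size=(K, n)).T).argmax(axis=1)
Y = np.eye(K)[labels]                 # one-hot labels

W = np.zeros((K, n))                  # row i holds w_i
b = np.zeros(K)
eta = 0.1

for _ in range(500):
    S = softmax(X @ W.T + b)          # s for every sample
    G = S - Y                         # per-sample dL/dz = s - y
    W -= eta * (G.T @ X) / m          # average of (s_i - y_i) x
    b -= eta * G.mean(axis=0)         # average of (s_i - y_i)

pred = softmax(X @ W.T + b).argmax(axis=1)
print("train accuracy:", (pred == labels).mean())
```

Averaging the per-sample gradients over the batch gives batch gradient descent; applying the update one sample at a time instead recovers the stochastic form written above.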