Chapter 7: Support Vector Machines (SVM)

Background

From simple to complex, support vector machines fall into three categories: the linear support vector machine in the linearly separable case, the linear support vector machine, and the non-linear support vector machine. When the training data are linearly separable, a linear classifier is learned via hard-margin maximization; this is the linearly separable SVM, also called the hard-margin SVM. When the training data are approximately linearly separable, a linear classifier is learned via soft-margin maximization; this is the linear SVM, also called the soft-margin SVM. When the training data are linearly inseparable, a non-linear SVM is learned via the kernel trick (kernel trick) together with soft-margin maximization.
1. Linearly Separable Support Vector Machine

1.1 Basic Definitions

Linearly separable support vector machine

Given a linearly separable data set, maximizing the margin (or, equivalently, solving the corresponding convex quadratic program) yields the separating hyperplane

$$w^* \cdot x + b^* = 0$$

and the corresponding classification decision function

$$f(x) = \mathrm{sign}(w^* \cdot x + b^*)$$
Functional margin

For a given data set $T$ and hyperplane $(w, b)$, the functional margin of a sample $(x_i, y_i)$ is defined as

$$\hat\gamma_i = y_i(w \cdot x_i + b)$$

and the functional margin of the hyperplane with respect to $T$ is the minimum over all samples:

$$\hat\gamma = \min_{i = 1, \ldots, N} \hat\gamma_i$$
Geometric margin

For a given data set $T$ and hyperplane $(w, b)$, the geometric margin of a sample $(x_i, y_i)$ is defined as

$$\gamma_i = y_i\left(\dfrac{w}{\|w\|} \cdot x_i + \dfrac{b}{\|w\|}\right)$$

and the minimum geometric margin is

$$\gamma = \min_{i = 1, \ldots, N} \gamma_i$$

Note: the functional and geometric margins are related by

$$\gamma = \dfrac{\hat\gamma}{\|w\|}$$
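A quick numerical sketch of the two margins and their relation; the hyperplane $(w, b)$ and the data points here are hypothetical, chosen only for illustration:

```python
import math

# Functional margin: gamma_hat_i = y_i (w.x_i + b)
# Geometric margin:  gamma_i = gamma_hat_i / ||w||
w, b = (0.5, 0.5), -2.0                       # hypothetical hyperplane
points = [((3.0, 3.0), 1), ((4.0, 3.0), 1), ((1.0, 1.0), -1)]

def functional_margin(w, b, x, y):
    return y * (sum(wi * xi for wi, xi in zip(w, x)) + b)

norm_w = math.sqrt(sum(wi * wi for wi in w))
gammas_hat = [functional_margin(w, b, x, y) for x, y in points]
gammas = [g / norm_w for g in gammas_hat]     # gamma = gamma_hat / ||w||

print(min(gammas_hat))   # minimum functional margin over the data set
print(min(gammas))       # minimum geometric margin
```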
1.2 The Maximum-Margin Separating Hyperplane

$$\max_{w, b} \ \gamma \qquad \text{s.t.} \ \ y_i\left(\dfrac{w}{\|w\|} \cdot x_i + \dfrac{b}{\|w\|}\right) \geq \gamma, \quad i = 1, 2, \ldots, N$$

The constraint states that the geometric margin of the hyperplane $(w, b)$ with respect to every training sample is at least $\gamma$.

By the relation between the functional and geometric margins, the maximum-margin problem can equivalently be written as

$$\max_{w, b} \ \dfrac{\hat\gamma}{\|w\|} \qquad \text{s.t.} \ \ y_i(w \cdot x_i + b) \geq \hat\gamma, \quad i = 1, 2, \ldots, N$$

The value of $\hat\gamma$ does not affect the solution, since rescaling $(w, b)$ rescales $\hat\gamma$ proportionally; we may therefore set $\hat\gamma = 1$. Maximizing $\dfrac{1}{\|w\|}$ is then equivalent to minimizing $\dfrac{1}{2}\|w\|^2$, giving

$$\min_{w, b} \ \dfrac{1}{2}\|w\|^2 \qquad \text{s.t.} \ \ y_i(w \cdot x_i + b) - 1 \geq 0, \quad i = 1, 2, \ldots, N \tag{*}$$

Problem (*) is a convex quadratic program, and the corresponding separating hyperplane $(w^*, b^*)$ exists and is unique. In general, a convex quadratic program has the form

$$\min_{w} \ f(w) \qquad \text{s.t.} \ \ g_i(w) \leq 0, \ \ i = 1, 2, \ldots, k; \qquad h_i(w) = 0, \ \ i = 1, 2, \ldots, l$$
1.3 The Dual Algorithm and the KKT Conditions

To solve (*), construct the Lagrangian by introducing a Lagrange multiplier $\alpha_i \geq 0$, $i = 1, 2, \ldots, N$ for each inequality constraint:

$$L(w, b, \alpha) = \dfrac{1}{2}\|w\|^2 - \sum_{i=1}^N \alpha_i y_i (w \cdot x_i + b) + \sum_{i=1}^N \alpha_i \tag{1}$$
where $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_N)^T$. By Lagrangian duality, the dual of the primal problem is the max-min problem

$$\max_\alpha \min_{w, b} L(w, b, \alpha)$$

To obtain the dual solution, first minimize $L(w, b, \alpha)$ over $w, b$, then maximize over $\alpha$.
(1) Compute $\min_{w,b} L(w, b, \alpha)$ by setting the gradients to zero:

$$\nabla_w L(w, b, \alpha) = w - \sum_{i=1}^N \alpha_i y_i x_i = 0, \qquad \nabla_b L(w, b, \alpha) = -\sum_{i=1}^N \alpha_i y_i = 0 \tag{2}$$

Substituting (2) into (1) gives

$$\min_{w,b} L(w, b, \alpha) = -\dfrac{1}{2}\sum_{i=1}^N\sum_{j=1}^N \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) + \sum_{i=1}^N \alpha_i$$
(2) Maximize $\min_{w,b} L(w, b, \alpha)$ over $\alpha$; this is the dual problem

$$\max_{\alpha} \ -\dfrac{1}{2}\sum_{i=1}^N\sum_{j=1}^N \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) + \sum_{i=1}^N \alpha_i \qquad \text{s.t.} \ \ \sum_{i=1}^N \alpha_i y_i = 0; \quad \alpha_i \geq 0, \ \ i = 1, 2, \ldots, N \tag{3}$$
Converting the maximization in (3) into a minimization gives the equivalent dual optimization problem

$$\min_{\alpha} \ \dfrac{1}{2}\sum_{i=1}^N\sum_{j=1}^N \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) - \sum_{i=1}^N \alpha_i \qquad \text{s.t.} \ \ \sum_{i=1}^N \alpha_i y_i = 0; \quad \alpha_i \geq 0, \ \ i = 1, 2, \ldots, N \tag{4}$$
Thus the original optimization over $(w, b)$ reduces to first optimizing over $\alpha$. Let $\alpha^* = (\alpha_1^*, \alpha_2^*, \ldots, \alpha_N^*)^T$ be the solution of (4); the optimal $(w^*, b^*)$ is then

$$w^* = \sum_{i=1}^N \alpha_i^* y_i x_i, \qquad b^* = y_j - w^* \cdot x_j = y_j - \sum_{i=1}^N \alpha_i^* y_i (x_i \cdot x_j)$$

where $j$ is any index with $\alpha_j^* > 0$.
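As a concrete sketch of the recovery formulas (assuming the classic three-point example: $x_1 = (3,3)$, $x_2 = (4,3)$ with $y = +1$, $x_3 = (1,1)$ with $y = -1$, whose dual solution is $\alpha^* = (1/4, 0, 1/4)$):

```python
# Recover (w*, b*) from a dual solution alpha* via
#   w* = sum_i alpha_i* y_i x_i,   b* = y_j - w* . x_j  (any j with alpha_j* > 0)
X = [(3.0, 3.0), (4.0, 3.0), (1.0, 1.0)]
y = [1, 1, -1]
alpha = [0.25, 0.0, 0.25]          # dual solution assumed in the lead-in

w = [sum(a * yi * xi[d] for a, yi, xi in zip(alpha, y, X)) for d in range(2)]
j = next(i for i, a in enumerate(alpha) if a > 0)   # a support vector index
b = y[j] - sum(wd * xd for wd, xd in zip(w, X[j]))

print(w, b)   # w* = [0.5, 0.5], b* = -2.0
```

One can check that every training point then satisfies the primal constraint $y_i(w^* \cdot x_i + b^*) \geq 1$, with equality for the two support vectors.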
1.4 KKT Conditions

$$\begin{aligned}
&\nabla_w L(w^*, b^*, \alpha^*) = w^* - \sum_{i=1}^N \alpha_i^* y_i x_i = 0 \\
&\nabla_b L(w^*, b^*, \alpha^*) = -\sum_{i=1}^N \alpha_i^* y_i = 0 \\
&\alpha_i^* \left(y_i(w^* \cdot x_i + b^*) - 1\right) = 0, \quad i = 1, 2, \ldots, N \\
&y_i(w^* \cdot x_i + b^*) - 1 \geq 0, \quad i = 1, 2, \ldots, N \\
&\alpha_i^* \geq 0, \quad i = 1, 2, \ldots, N
\end{aligned} \tag{5}$$
2. Linear Support Vector Machine

The learning method for the linearly separable case does not apply to linearly inseparable training data, because the inequality constraints no longer all hold. Hard-margin maximization must therefore be relaxed to soft-margin maximization: each sample receives a slack variable $\xi_i \geq 0$, and the corresponding convex quadratic program (the primal problem) is

$$\min_{w, b, \xi} \ \dfrac{1}{2}\|w\|^2 + C\sum_{i=1}^N \xi_i \qquad \text{s.t.} \ \ y_i(w \cdot x_i + b) \geq 1 - \xi_i, \quad \xi_i \geq 0, \quad i = 1, 2, \ldots, N \tag{6}$$
2.1 Definition

For a given linearly inseparable training data set, solving the convex quadratic program (6), i.e. soft-margin maximization, yields the separating hyperplane $w^* \cdot x + b^* = 0$ and the corresponding classification decision function $f(x) = \mathrm{sign}(w^* \cdot x + b^*)$; this is the linear support vector machine.
The dual of the primal problem is

$$\min_\alpha \ \dfrac{1}{2}\sum_{i=1}^N\sum_{j=1}^N \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) - \sum_{i=1}^N \alpha_i \qquad \text{s.t.} \ \ \sum_{i=1}^N \alpha_i y_i = 0; \quad 0 \leq \alpha_i \leq C, \ \ i = 1, 2, \ldots, N \tag{7}$$
The Lagrangian of the primal problem is

$$L(w, b, \xi, \alpha, \mu) = \dfrac{1}{2}\|w\|^2 + C\sum_{i=1}^N \xi_i - \sum_{i=1}^N \alpha_i\left(y_i(w \cdot x_i + b) + \xi_i - 1\right) - \sum_{i=1}^N \mu_i \xi_i$$

where $\alpha_i \geq 0$, $\mu_i \geq 0$.
The dual problem is the max-min problem of this Lagrangian:

$$\max_{\alpha}\min_{w, b, \xi} L(w, b, \xi, \alpha, \mu) \tag{8}$$
For the inner minimization $\min_{w, b, \xi} L(w, b, \xi, \alpha, \mu)$, set the partial derivatives to zero:

$$\nabla_w L = w - \sum_{i=1}^N \alpha_i y_i x_i = 0, \qquad \nabla_b L = -\sum_{i=1}^N \alpha_i y_i = 0, \qquad \nabla_{\xi_i} L = C - \alpha_i - \mu_i = 0 \tag{9}$$
Substituting (9) back into (8) and then maximizing $\min_{w, b, \xi} L(w, b, \xi, \alpha, \mu)$ over $\alpha$ gives the dual problem

$$\max_\alpha \ -\dfrac{1}{2}\sum_{i=1}^N\sum_{j=1}^N \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) + \sum_{i=1}^N \alpha_i \qquad \text{s.t.} \ \ \sum_{i=1}^N \alpha_i y_i = 0; \quad C - \alpha_i - \mu_i = 0; \quad \alpha_i \geq 0; \quad \mu_i \geq 0 \tag{10}$$

Using $C - \alpha_i - \mu_i = 0$ from (9) to eliminate $\mu_i$ from (10) leaves an optimization in $\alpha_i$ alone, subject to $0 \leq \alpha_i \leq C$; this is exactly problem (7).
Let $\alpha^* = (\alpha_1^*, \alpha_2^*, \ldots, \alpha_N^*)^T$ be a solution of the dual problem (7); then the primal solution $(w^*, b^*)$ is

$$w^* = \sum_{i=1}^N \alpha_i^* y_i x_i, \qquad b^* = y_j - w^* \cdot x_j = y_j - \sum_{i=1}^N \alpha_i^* y_i (x_i \cdot x_j)$$

where $j$ is any index with $0 < \alpha_j^* < C$.
2.2 KKT Conditions

$$\begin{aligned}
&\nabla_w L(w^*, b^*, \xi^*, \alpha^*, \mu^*) = w^* - \sum_{i=1}^N \alpha_i^* y_i x_i = 0 \\
&\nabla_b L(w^*, b^*, \xi^*, \alpha^*, \mu^*) = -\sum_{i=1}^N \alpha_i^* y_i = 0 \\
&\nabla_{\xi_i} L(w^*, b^*, \xi^*, \alpha^*, \mu^*) = C - \alpha_i^* - \mu_i^* = 0 \\
&\alpha_i^* \left(y_i(w^* \cdot x_i + b^*) + \xi_i^* - 1\right) = 0 \\
&\mu_i^* \xi_i^* = 0 \\
&y_i(w^* \cdot x_i + b^*) + \xi_i^* - 1 \geq 0 \\
&\xi_i^* \geq 0, \quad \alpha_i^* \geq 0, \quad \mu_i^* \geq 0, \quad i = 1, 2, \ldots, N
\end{aligned} \tag{11}$$
The resulting separating hyperplane is

$$w^* \cdot x + b^* = \sum_{i=1}^N \alpha_i^* y_i (x \cdot x_i) + b^* = 0$$

and the classification decision function is

$$f(x) = \mathrm{sign}(w^* \cdot x + b^*) = \mathrm{sign}\left(\sum_{i=1}^N \alpha_i^* y_i (x \cdot x_i) + b^*\right)$$

Note: under the linear SVM learning strategy, $w^*$ is unique, but $b^*$ may not be; in practice one takes the average of the values of $b^*$ computed over the support vectors.
2.3 Support Vectors

In the figure, positive examples are drawn as "$\circ$" and negative examples as "$\times$"; the distance from a margin-violating sample to its margin boundary is $\dfrac{\xi_i}{\|w\|}$.

A soft-margin support vector $x_i$ lies either on the margin boundary, between the margin boundary and the separating hyperplane, or on the misclassified side of the separating hyperplane:

- If $0 < \alpha_i < C$, then $\mu_i \neq 0$ and $\xi_i = 0$: the support vector lies exactly on the margin boundary.
- If $\alpha_i = C$ (so $\mu_i = 0$) and $0 < \xi_i < 1$: the sample is correctly classified and lies between the margin boundary and the separating hyperplane.
- If $\alpha_i = C$ (so $\mu_i = 0$) and $\xi_i = 1$: the support vector lies exactly on the separating hyperplane.
- If $\alpha_i = C$ (so $\mu_i = 0$) and $\xi_i > 1$: the support vector lies on the misclassified side of the separating hyperplane.
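The case analysis above can be encoded directly; this is a minimal sketch (the function name and return strings are ours, not from the text), mapping a support vector's multiplier $\alpha_i$ and slack $\xi_i$ to its position:

```python
# Classify where a soft-margin support vector lies, given alpha_i, xi_i and C,
# following the four cases listed above.
def sv_position(alpha_i, xi_i, C, tol=1e-12):
    if not (0.0 < alpha_i <= C):
        return "not a support vector"
    if alpha_i < C:                     # 0 < alpha_i < C  =>  xi_i = 0
        return "on the margin boundary"
    # alpha_i == C (so mu_i = 0): position depends on the slack xi_i
    if abs(xi_i - 1.0) < tol:
        return "on the separating hyperplane"
    if xi_i < 1.0:
        return "between the margin boundary and the hyperplane"
    return "on the misclassified side of the hyperplane"
```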
2.4 An Alternative Interpretation

Learning a linear support vector machine is equivalent to minimizing

$$\sum_{i=1}^N \left[1 - y_i(w \cdot x_i + b)\right]_+ + \lambda \|w\|^2 \tag{12}$$
The first term of the objective is the empirical loss (empirical risk): $L(y(w \cdot x + b)) = [1 - y(w \cdot x + b)]_+$ is the hinge loss, where $[z]_+ = \max(0, z)$. Hence the loss is 0 when a sample is correctly classified with functional margin $y(w \cdot x + b)$ greater than 1; note that a point on the correct side of the separating hyperplane may still incur a nonzero loss if its functional margin is less than 1. The second term, the $L_2$ norm of $w$ with coefficient $\lambda$, is the regularization term.

Setting $\xi_i = \left[1 - y_i(w \cdot x_i + b)\right]_+$, the primal problem (6) of the linear SVM is equivalent to problem (12) with $\lambda = \dfrac{1}{2C}$.
The hinge loss and the 0-1 loss are usually plotted together; in such a figure the dashed curve is the perceptron loss $[-y_i(w \cdot x_i + b)]_+$.

Since the 0-1 loss is not continuously differentiable, a loss function built from it is hard to optimize directly; the hinge loss of the linear SVM can be viewed as an upper bound of the 0-1 loss, and such an upper-bound loss is also called a surrogate loss function.
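A minimal sketch of the hinge loss $[1 - yf]_+$ and the objective (12), in plain Python (the function names are ours):

```python
# Hinge loss for label y in {+1, -1} and raw score f = w.x + b.
def hinge(y, f):
    return max(0.0, 1.0 - y * f)

# Regularized empirical hinge risk, eq. (12).
def objective(w, b, X, Y, lam):
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    risk = sum(hinge(y, dot(w, x) + b) for x, y in zip(X, Y))
    return risk + lam * dot(w, w)

print(hinge(1, 2.0))    # margin > 1: loss 0.0
print(hinge(1, 0.5))    # correct side but inside the margin: loss 0.5
print(hinge(-1, 0.5))   # misclassified: loss 1.5
```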
3. Non-linear Support Vector Machines and Kernel Functions

3.1 The Kernel Trick

For a given training data set $T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$, where each $x_i$ belongs to the input space and $y_i \in \{+1, -1\}$, if the positive and negative examples can be correctly separated by a hypersurface in $\mathcal{R}^n$, the problem is called a non-linearly separable problem. Such problems are solved by applying a non-linear transformation to the input, converting the non-linear problem into a linear one.

Solving a non-linear classification problem with a linear method takes two steps: first, a transformation maps the original space to a new space; then, in the new space, a linear classification method learns a model from the training data. The kernel trick belongs to this class of methods.

Applied to support vector machines, the basic idea of the kernel trick is to map the input space (a Euclidean space $\mathcal{R}^n$) to a feature space (a Hilbert space $\mathcal{H}$) via a non-linear transformation.
3.2 Kernel Functions

Let $\mathcal{X}$ be the input space (a Euclidean space) and $\mathcal{H}$ the feature space (a Hilbert space). If there exists a mapping $\phi(x): \mathcal{X} \rightarrow \mathcal{H}$ such that for all $x, z \in \mathcal{X}$ the function $K(x, z)$ satisfies

$$K(x, z) = \phi(x) \cdot \phi(z)$$

then $K(x, z)$ is called a kernel function and $\phi(x)$ the mapping function.

Note: $\phi$ maps the input space $\mathcal{X}$ to the feature space $\mathcal{H}$; the feature space $\mathcal{H}$ is generally high-dimensional, possibly infinite-dimensional, and for a given kernel $K(x, z)$ the feature mapping $\phi$ is not unique.
Example: let the input space be $\mathcal{R}^2$ and the kernel $K(x, z) = (x \cdot z)^2$. Find a feature space $\mathcal{H}$ and a feature mapping $\phi(x): \mathcal{R}^2 \rightarrow \mathcal{H}$.

Take $\mathcal{H} = \mathcal{R}^3$ and write $x = (x^{(1)}, x^{(2)})^T$, $z = (z^{(1)}, z^{(2)})^T$. Then

$$K(x, z) = (x \cdot z)^2 = (x^{(1)}z^{(1)} + x^{(2)}z^{(2)})^2 = (x^{(1)}z^{(1)})^2 + 2x^{(1)}z^{(1)}x^{(2)}z^{(2)} + (x^{(2)}z^{(2)})^2$$

Taking the mapping

$$\phi(x) = \left((x^{(1)})^2, \ \sqrt{2}\,x^{(1)}x^{(2)}, \ (x^{(2)})^2\right)^T$$

satisfies $K(x, z) = \phi(x) \cdot \phi(z)$. Similarly, the mapping

$$\phi(x) = \dfrac{1}{\sqrt{2}}\left((x^{(1)})^2 - (x^{(2)})^2, \ 2x^{(1)}x^{(2)}, \ (x^{(1)})^2 + (x^{(2)})^2\right)^T$$

also satisfies $K(x, z) = \phi(x) \cdot \phi(z)$; and with $\mathcal{H} = \mathcal{R}^4$,

$$\phi(x) = \left((x^{(1)})^2, \ x^{(1)}x^{(2)}, \ x^{(1)}x^{(2)}, \ (x^{(2)})^2\right)^T$$

satisfies it as well.
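The identity $K(x, z) = \phi(x) \cdot \phi(z)$ from the example can be checked numerically, here for the first ($\mathcal{R}^3$) mapping with a pair of sample vectors:

```python
import math

# Check that K(x, z) = (x.z)^2 agrees with the explicit R^3 feature map
# phi(x) = (x1^2, sqrt(2) x1 x2, x2^2) from the example above.
def K(x, z):
    return (x[0] * z[0] + x[1] * z[1]) ** 2

def phi(x):
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

x, z = (1.0, 2.0), (3.0, 4.0)
lhs = K(x, z)                                      # (1*3 + 2*4)^2 = 121
rhs = sum(a * b for a, b in zip(phi(x), phi(z)))   # inner product in R^3
print(lhs, rhs)
```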
3.3 The Kernel Method in Support Vector Machines

Replacing the inner product $x_i \cdot x_j$ in the dual problem (7) with a kernel $K(x_i, x_j) = \phi(x_i) \cdot \phi(x_j)$ gives the objective function

$$W(\alpha) = \dfrac{1}{2}\sum_{i=1}^N\sum_{j=1}^N \alpha_i \alpha_j y_i y_j K(x_i, x_j) - \sum_{i=1}^N \alpha_i \tag{13}$$

That is, the inner product is now computed in a high-dimensional feature space; when the mapping is non-linear, the SVM learned with the kernel function is a non-linear support vector machine.
3.4 Common Kernel Functions

The polynomial kernel corresponds to a polynomial classifier of degree $p$:

$$K(x, z) = (x \cdot z + 1)^p$$

The Gaussian kernel corresponds to a Gaussian radial basis function classifier:

$$K(x, z) = \exp\left(-\dfrac{\|x - z\|^2}{2\sigma^2}\right)$$
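Both kernels are one-liners; a minimal sketch in plain Python (function names and default parameter values are ours):

```python
import math

# Polynomial kernel (x.z + 1)^p.
def poly_kernel(x, z, p=2):
    return (sum(a * b for a, b in zip(x, z)) + 1.0) ** p

# Gaussian (RBF) kernel exp(-||x - z||^2 / (2 sigma^2)).
def rbf_kernel(x, z, sigma=1.0):
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq / (2.0 * sigma ** 2))

print(poly_kernel((1.0, 0.0), (1.0, 1.0)))  # (1 + 1)^2 = 4.0
print(rbf_kernel((1.0, 2.0), (1.0, 2.0)))   # exp(0) = 1.0
```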
The string kernel $k_n(s, t)$ measures the similarity of strings $s$ and $t$ as the inner product of feature vectors indexed by all substrings $u$ of length $n$:

$$k_n(s, t) = \sum_{u \in \Sigma^n} [\phi_n(s)]_u \, [\phi_n(t)]_u = \sum_{u \in \Sigma^n} \sum_{(i, j):\, s(i) = t(j) = u} \lambda^{l(i)} \lambda^{l(j)}$$
4. The Sequential Minimal Optimization Algorithm

The sequential minimal optimization (SMO) algorithm solves the convex quadratic dual problem

$$\min_\alpha \ \dfrac{1}{2}\sum_{i=1}^N\sum_{j=1}^N \alpha_i \alpha_j y_i y_j K(x_i, x_j) - \sum_{i=1}^N \alpha_i \qquad \text{s.t.} \ \ \sum_{i=1}^N \alpha_i y_i = 0; \quad 0 \leq \alpha_i \leq C, \ \ i = 1, 2, \ldots, N \tag{14}$$

The SMO algorithm has two parts: an analytic method for solving the two-variable quadratic subproblem, and a heuristic method for choosing the pair of variables.
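The analytic two-variable step can be sketched as follows. This is a minimal illustration of one update, not the full SMO algorithm; the function name and the toy numbers are ours, $E_i$ denotes the prediction error $g(x_i) - y_i$, and $\eta = K_{11} + K_{22} - 2K_{12}$:

```python
# One analytic SMO update for a chosen pair (alpha1, alpha2):
#   alpha2_unclipped = alpha2 + y2 (E1 - E2) / eta,  eta = K11 + K22 - 2 K12,
# then clip alpha2 to its feasible box [L, H] and move alpha1 so that
# the equality constraint sum_i alpha_i y_i stays fixed.
def smo_pair_update(alpha1, alpha2, y1, y2, E1, E2, K11, K12, K22, C):
    eta = K11 + K22 - 2.0 * K12             # curvature along the constraint line
    if y1 != y2:                            # feasible interval for alpha2
        L, H = max(0.0, alpha2 - alpha1), min(C, C + alpha2 - alpha1)
    else:
        L, H = max(0.0, alpha1 + alpha2 - C), min(C, alpha1 + alpha2)
    a2 = alpha2 + y2 * (E1 - E2) / eta      # unconstrained minimizer
    a2 = min(max(a2, L), H)                 # clip to [L, H]
    a1 = alpha1 + y1 * y2 * (alpha2 - a2)   # preserve sum_i alpha_i y_i
    return a1, a2

# Toy numbers: y1 = -1, y2 = +1, so L = 0 and H = C = 1 for this pair.
print(smo_pair_update(0.5, 0.5, -1, 1, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0))
```

Note that the update preserves $\alpha_1 y_1 + \alpha_2 y_2$, which is why only the two chosen multipliers need to move at each step.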