Clustering Models
1. Hierarchical clustering
2. Prototype-based clustering: K-means
3. Model-based clustering: GMM
4. EM algorithm: LDA topic model
5. Density-based clustering: DBSCAN
6. Graph-based clustering: spectral clustering
I. The EM Algorithm
The EM algorithm is an iterative algorithm for maximum-likelihood estimation of the parameters of probabilistic models with latent variables; it is the solver behind a large class of unsupervised learning methods. Each EM iteration consists of two steps. E-step: assuming the joint distribution $P(x,z;\theta)$ of the features and the latent variable, compute each sample's probability distribution over the latent variable $z$ (chosen so that Jensen's inequality holds with equality). M-step: with the joint distribution of $(x,z)$ now specified (the sample features together with their inferred class assignments), maximize the likelihood function to solve for the parameters $\theta$.
Before discussing the EM algorithm, we first introduce the Jensen inequality (derived from the properties of convex functions).
Let $f$ be a convex function defined on the reals. By the definition of convexity:
$$f(\lambda x^{(1)}+(1-\lambda)x^{(2)})\leq \lambda f(x^{(1)})+(1-\lambda) f(x^{(2)})$$
For a strictly convex function the inequality is strict. If $f$ is twice differentiable, convexity is equivalent to its Hessian being positive semidefinite. The property generalizes to:
$$f\Big(\sum_{i=1}^{m}\lambda_{i}x^{(i)}\Big)\leq\sum_{i=1}^{m}\lambda_{i}f(x^{(i)}),\qquad \text{s.t. }\sum_{i=1}^{m}\lambda_{i}=1,\ \lambda_{i}\geq 0$$
When $\lambda_{i}$ is interpreted as the probability of $x^{(i)}$ (and hence of $f(x^{(i)})$), this becomes:
$$f(E(x))\leq E(f(x))$$
Equality holds if and only if $p(f(x)=c)=1$ for some constant $c$, i.e. $f(x)$ is (almost surely) a constant.
Conversely, for a concave function the direction of the inequality is reversed.
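The inequality is easy to check numerically. Here is a minimal Python sketch using the concave-$\log$ direction that EM relies on below; the three-point distribution is an arbitrary illustrative choice:

```python
import math

# An arbitrary three-point distribution for a positive random variable x.
vals = [0.5, 2.0, 4.0]
probs = [0.2, 0.5, 0.3]

E_x = sum(p * v for p, v in zip(probs, vals))                # E[x]
E_log_x = sum(p * math.log(v) for p, v in zip(probs, vals))  # E[log x]

# log is concave, so Jensen's inequality reverses: log(E[x]) >= E[log x].
assert math.log(E_x) >= E_log_x
print(math.log(E_x), E_log_x)
```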
Now turn to the EM algorithm. Given training samples $\{x^{(1)},x^{(2)},\dots,x^{(m)}\}$, introduce a latent class label $z^{(i)}$ for each sample. In the supervised setting we would maximize a log-likelihood of the form $L=\log p(z\mid x;\theta)$; here we instead maximize the log-likelihood of $x^{(i)}$ alone, taking the full expectation over the latent variable $z^{(i)}$:
$$L(\theta)=\sum_{i=1}^{m}\log p(x^{(i)};\theta)=\sum_{i=1}^{m}\log\sum_{z^{(i)}}P(x^{(i)},z^{(i)};\theta)=\sum_{i=1}^{m}\log\sum_{z^{(i)}}Q_{i}(z^{(i)})\frac{P(x^{(i)},z^{(i)};\theta)}{Q_{i}(z^{(i)})}$$
Here $Q_{i}(z^{(i)})$ is a probability distribution over the sample's latent variable $z^{(i)}$, with $\sum_{z}Q_{i}(z^{(i)})=1$ and $Q_{i}(z^{(i)})\geq 0$. Different choices of $Q(z)$ give EM models for different settings, such as Gaussian mixtures, naive Bayes mixtures, and LDA.
Because $\log$ is a strictly concave function, Jensen's inequality gives $\log(E(g(x)))\geq E(\log(g(x)))$, i.e.:
$$\log\Big(\sum_{z^{(i)}}Q_{i}(z^{(i)})\frac{P(x^{(i)},z^{(i)};\theta)}{Q_{i}(z^{(i)})}\Big)\geq \sum_{z^{(i)}}Q_{i}(z^{(i)})\log\frac{P(x^{(i)},z^{(i)};\theta)}{Q_{i}(z^{(i)})}$$
where $g(x)=\frac{P(x^{(i)},z^{(i)};\theta)}{Q_{i}(z^{(i)})}$; equality therefore holds if and only if $g(x)=c$ for some constant $c$.
Since $\sum_{z}Q_{i}(z^{(i)})=1$ and $Q_{i}(z^{(i)})\geq 0$, the constant-ratio condition forces $Q_{i}(z^{(i)})$ to be the joint probability of $(x,z)$ normalized over $z$, i.e. the conditional distribution of the latent variable:
$$Q_{i}(z^{(i)})=\frac{P(x^{(i)},z^{(i)};\theta)}{\sum_{z}P(x^{(i)},z;\theta)}=\frac{P(x^{(i)},z^{(i)};\theta)}{P(x^{(i)};\theta)}=P(z^{(i)}\mid x^{(i)};\theta)$$
Hence the first step of the EM algorithm computes the conditional distribution of the latent variable $z$ under the given $\theta$.
Once $Q(z)$ is known and Jensen's inequality holds with equality, we return to maximizing the likelihood:
$$\theta:=\arg\max_{\theta}\sum_{i=1}^{m}\sum_{z^{(i)}}Q_{i}(z^{(i)})\log\frac{P(x^{(i)},z^{(i)};\theta)}{Q_{i}(z^{(i)})}$$
The maximum-likelihood estimation in EM can therefore be viewed as coordinate ascent: one step maximizes the likelihood $L(Q(z);\theta)$ over $Q(z)$ for the given parameters $\theta$; the other maximizes $L(\theta;Q(z))$ over $\theta$ for the current $Q(z)$.
Convergence: the two steps never decrease the objective,
$$L(Q(z),\theta^{(i)})\leq L(Q(z),\theta^{(i+1)})\leq L(Q^{*}(z),\theta^{(i+1)})$$
The first inequality holds because $\theta^{(i+1)}$ is obtained by maximum-likelihood estimation (maximizing the lower bound over $\theta$); the second follows from Jensen's inequality after re-setting $Q^{*}(z)=p(z^{(i)}\mid x^{(i)};\theta^{(i+1)})$. Note that the actual solution order is the reverse: first fix the lower bound on the likelihood via Jensen's inequality, then maximize that lower bound.
The general EM procedure:
E-step (with the parameters fixed, compute the conditional distribution of the latent variable):
$$Q_{i}(z^{(i)}):=P(z^{(i)}\mid x^{(i)};\theta)$$
M-step (maximize the lower bound on the likelihood):
$$\theta:=\arg\max_{\theta}\sum_{i=1}^{m}\sum_{z^{(i)}}Q_{i}(z^{(i)})\log\frac{P(x^{(i)},z^{(i)};\theta)}{Q_{i}(z^{(i)})}$$
The EM solution process is roughly as illustrated in the figure; whether it can converge to the global optimum depends on whether the objective function is convex:
As the figure suggests, the idea behind EM is again coordinate ascent: fix one group of variables, solve for the other, and iterate. What is specific is that EM targets maximum-likelihood estimation for probabilistic models with latent variables; the name "expectation maximization" comes from taking an expectation in the first step and maximizing it in the second. That expectation-then-maximization form is particular to latent-variable probabilistic models, not to the coordinate-ascent idea itself.
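The alternating structure above can be written as a generic loop. This is a hypothetical skeleton, not code from the text: the `e_step`, `m_step`, and `log_likelihood` callables, the tolerance, and the deliberately trivial usage example are all illustrative assumptions.

```python
def em(x, theta0, e_step, m_step, log_likelihood, tol=1e-6, max_iter=100):
    """Generic EM loop: alternate Q = P(z|x; theta) and theta = argmax of the bound."""
    theta, prev_ll = theta0, float("-inf")
    for _ in range(max_iter):
        q = e_step(x, theta)           # E-step: posterior over the latent z
        theta = m_step(x, q)           # M-step: maximize the Jensen lower bound
        ll = log_likelihood(x, theta)
        if ll - prev_ll < tol:         # the objective is monotone non-decreasing
            break
        prev_ll = ll
    return theta

# Degenerate usage just to show the plumbing: no real latent variable; the
# M-step is the closed-form mean, the "likelihood" a negative squared error.
theta = em([1.0, 2.0], 0.0,
           e_step=lambda x, t: None,
           m_step=lambda x, q: sum(x) / len(x),
           log_likelihood=lambda x, t: -sum((xi - t) ** 2 for xi in x))
```

In a real instance the E-step would return the responsibilities $Q_i(z^{(i)})$ and the M-step the closed-form weighted estimates derived in the sections below.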
II. EM Examples: Gaussian Mixture and Naive Bayes Mixture
Gaussian mixture model
Why use the EM algorithm to solve the Gaussian mixture model? Recall the model's objective: the $\log$ sits outside the sum over components. If the class labels were known, we could estimate the parameters directly, as in Gaussian discriminant analysis; in the mixture model, however, the latent labels are unknown, so the objective is hard to optimize directly. Applying Jensen's inequality as in the EM derivation, we move the $\log$ inside the sum (and make the bound tight), transforming the objective:
$$L(\theta)=\sum_{i=1}^{m}\log\sum_{z^{(i)}=1}^{k}P(x^{(i)},z^{(i)};\mu,\Sigma,\phi)$$
$$L(\theta)\geq\sum_{i=1}^{m}\sum_{z^{(i)}}Q_{i}(z^{(i)})\log\frac{P(x^{(i)},z^{(i)};\theta)}{Q_{i}(z^{(i)})}$$
where $Q_{i}(z^{(i)})$ is the conditional distribution $P(z\mid x;\theta)$.
Viewing the Gaussian mixture model through the lens of EM, we now derive its maximum-likelihood parameter estimates:
E-step (with the parameters fixed, compute the conditional distribution of the latent variable):
$$w_{j}^{(i)}=Q_{i}(z^{(i)}=j)=p(z^{(i)}=j\mid x^{(i)};\mu,\Sigma,\phi)=\frac{p(x^{(i)}\mid z^{(i)}=j;\mu,\Sigma)\,p(z^{(i)}=j;\phi)}{\sum_{l=1}^{k}p(x^{(i)}\mid z^{(i)}=l;\mu,\Sigma)\,p(z^{(i)}=l;\phi)}$$
where $p(x\mid z)$ is Gaussian and $p(z)$ is multinomial (Bernoulli when $k=2$).
M-step (maximize the lower bound on the likelihood):
$$\arg\max_{\mu,\Sigma,\phi}\sum_{i=1}^{m}\sum_{j=1}^{k}w_{j}^{(i)}\log\frac{p(x^{(i)}\mid z^{(i)}=j;\mu,\Sigma)\,p(z^{(i)}=j;\phi)}{w_{j}^{(i)}}$$
Differentiating with respect to $\mu_{j}$ (the terms in $\Sigma$ and $\phi$ that do not involve $\mu_{j}$ drop out):
$$\nabla_{\mu_{j}}\sum_{i=1}^{m}\sum_{l=1}^{k}w_{l}^{(i)}\log\Big[\frac{1}{(2\pi)^{n/2}|\Sigma_{l}|^{1/2}}\exp\Big(-\frac{1}{2}(x^{(i)}-\mu_{l})^{T}\Sigma_{l}^{-1}(x^{(i)}-\mu_{l})\Big)\Big]=\sum_{i=1}^{m}w_{j}^{(i)}\Sigma_{j}^{-1}(x^{(i)}-\mu_{j})$$
Setting the derivative to zero gives:
$$\mu_{j}=\frac{\sum_{i=1}^{m}w_{j}^{(i)}x^{(i)}}{\sum_{i=1}^{m}w_{j}^{(i)}}$$
For the parameter $\phi$, dropping the terms that do not involve it, the objective reduces to:
$$\sum_{i=1}^{m}\sum_{j=1}^{k}w_{j}^{(i)}\log\phi_{j}$$
Subject to $\sum_{j=1}^{k}\phi_{j}=1$, differentiate the Lagrangian:
$$\nabla_{\phi_{j}}L(\phi,\beta)=\nabla_{\phi_{j}}\Big[\sum_{i=1}^{m}\sum_{j=1}^{k}w_{j}^{(i)}\log\phi_{j}+\beta\Big(\sum_{j=1}^{k}\phi_{j}-1\Big)\Big]=\sum_{i=1}^{m}\frac{w_{j}^{(i)}}{\phi_{j}}+\beta$$
Setting the derivative to zero:
$$\phi_{j}=\frac{\sum_{i=1}^{m}w_{j}^{(i)}}{-\beta}$$
So $\phi_{j}\propto\sum_{i=1}^{m}w_{j}^{(i)}$; combined with $\sum_{j}\phi_{j}=1$ this gives $-\beta=\sum_{i=1}^{m}\sum_{j=1}^{k}w_{j}^{(i)}=m$.
Therefore:
$$\phi_{j}=\frac{1}{m}\sum_{i=1}^{m}w_{j}^{(i)}$$
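The E-step and M-step updates above can be sketched end to end. Below is a minimal pure-Python EM for a one-dimensional, two-component Gaussian mixture (so $\Sigma_{j}$ reduces to a scalar variance); the toy data, the initialization at the data extremes, and the small variance floor are illustrative assumptions, not part of the derivation:

```python
import math
import random

def gauss_pdf(x, mu, var):
    """Density of N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gmm_em_1d(xs, n_iter=50):
    """EM for a 1-D mixture of two Gaussians; returns (phi, means, variances)."""
    # Illustrative initialization: means at the data extremes, unit variances.
    phi, mu, var = 0.5, [min(xs), max(xs)], [1.0, 1.0]
    for _ in range(n_iter):
        # E-step: w[i][j] = Q_i(z=j), the posterior responsibility of component j.
        w = []
        for x in xs:
            p = [phi * gauss_pdf(x, mu[0], var[0]),
                 (1 - phi) * gauss_pdf(x, mu[1], var[1])]
            s = p[0] + p[1]
            w.append([p[0] / s, p[1] / s])
        # M-step: phi_j = (1/m) sum_i w_j^(i); mu_j, var_j are w-weighted MLEs.
        for j in range(2):
            wj = sum(wi[j] for wi in w)
            mu[j] = sum(wi[j] * x for wi, x in zip(w, xs)) / wj
            var[j] = max(sum(wi[j] * (x - mu[j]) ** 2
                             for wi, x in zip(w, xs)) / wj, 1e-6)
        phi = sum(wi[0] for wi in w) / len(xs)
    return phi, mu, var

# Toy data: two well-separated clusters around 0 and 5.
random.seed(0)
xs = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(5, 1) for _ in range(200)]
phi, mu, var = gmm_em_1d(xs)
```

On separated clusters the means converge to the cluster centers and $\phi$ to the mixing proportion, mirroring the closed-form updates derived above.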
Naive Bayes mixture
Consider text clustering with one-hot (binary word-occurrence) encoding. A document's class can be viewed as a latent variable $z$ following a Bernoulli distribution, and the occurrence of a word $x$ given class $z$ is also Bernoulli (if documents are represented by word positions in a dictionary of size $N$, this corresponds to an $N$-variate Bernoulli). So:
$$\phi_{z=1}=p(z=1),\qquad \phi_{z=0}=1-p(z=1)$$
$$\phi_{j=1|z=1}=p(x_{j}^{(i)}=1\mid z=1),\qquad \phi_{j=0|z=1}=1-p(x_{j}^{(i)}=1\mid z=1)$$
$$\phi_{j=1|z=0}=p(x_{j}^{(i)}=1\mid z=0),\qquad \phi_{j=0|z=0}=1-p(x_{j}^{(i)}=1\mid z=0)$$
Since each pair of probabilities sums to one, only half of these parameters need to be estimated, namely $\phi_{z}$, $\phi_{j|z=1}$, and $\phi_{j|z=0}$.
By the naive Bayes conditional-independence assumption over words (features):
$$P(x^{(i)},z^{(i)})=P(z^{(i)})\prod_{j=1}^{n}P(x_{j}^{(i)}\mid z^{(i)})$$
where $j$ indexes the features.
As before, maximize the log-likelihood (marginalizing over the latent class):
$$\sum_{i=1}^{m}\log\sum_{z^{(i)}}P(z^{(i)})\prod_{j=1}^{n}P(x_{j}^{(i)}\mid z^{(i)})$$
Plugging into the EM framework:
E-step (with the parameters fixed, compute the conditional distribution of the latent variable):
$$w^{(i)}=p(z^{(i)}=1\mid x^{(i)};\phi)=\frac{p(x^{(i)}\mid z^{(i)}=1)\,p(z^{(i)}=1)}{\sum_{l=0}^{1}p(x^{(i)}\mid z^{(i)}=l)\,p(z^{(i)}=l)}$$
Since this is a two-cluster problem, $w^{(i)}$ is the probability that the sample belongs to one class, and $1-w^{(i)}$ the probability that $x^{(i)}$ belongs to the other.
M-step (maximize the lower bound on the likelihood):
$$\arg\max_{\phi}\sum_{i=1}^{m}\Big[w^{(i)}\log\frac{p(x^{(i)},z^{(i)}=1;\phi)}{w^{(i)}}+(1-w^{(i)})\log\frac{p(x^{(i)},z^{(i)}=0;\phi)}{1-w^{(i)}}\Big]$$
where:
$$p(x^{(i)},z^{(i)}=1;\phi)=\phi_{z}\prod_{j=1}^{n}p(x_{j}^{(i)}\mid z^{(i)}=1),\qquad p(x^{(i)},z^{(i)}=0;\phi)=(1-\phi_{z})\prod_{j=1}^{n}p(x_{j}^{(i)}\mid z^{(i)}=0)$$
Differentiating the likelihood with respect to $\phi_{z}$:
$$\nabla_{\phi_{z}}L(\theta,Q(z))=\nabla_{\phi_{z}}\sum_{i=1}^{m}\big[w^{(i)}\log\phi_{z}+(1-w^{(i)})\log(1-\phi_{z})\big]=\sum_{i=1}^{m}\Big[\frac{w^{(i)}}{\phi_{z}}-\frac{1-w^{(i)}}{1-\phi_{z}}\Big]$$
Setting the partial derivative to zero:
$$\phi_{z}=\frac{\sum_{i=1}^{m}w^{(i)}}{m}$$
Taking the partial derivative with respect to $\phi_{j|z=1}$:
$$\nabla_{\phi_{j|z=1}}L(\phi_{j|z},\phi_{z})=\nabla_{\phi_{j|z=1}}\sum_{i=1}^{m}w^{(i)}\log p(x_{j}^{(i)}\mid z^{(i)}=1)=\nabla_{\phi_{j|z=1}}\sum_{i=1}^{m}w^{(i)}\log\Big[(\phi_{j|z=1})^{I(x_{j}^{(i)}=1)}(1-\phi_{j|z=1})^{1-I(x_{j}^{(i)}=1)}\Big]=\sum_{i=1}^{m}\Big[\frac{w^{(i)}I(x_{j}^{(i)}=1)}{\phi_{j|z=1}}-\frac{w^{(i)}(1-I(x_{j}^{(i)}=1))}{1-\phi_{j|z=1}}\Big]$$
Setting it to zero:
$$\phi_{j|z=1}=\frac{\sum_{i=1}^{m}w^{(i)}I(x_{j}^{(i)}=1)}{\sum_{i=1}^{m}w^{(i)}}$$
Similarly, setting the partial derivative with respect to $\phi_{j|z=0}$ to zero:
$$\phi_{j|z=0}=\frac{\sum_{i=1}^{m}(1-w^{(i)})I(x_{j}^{(i)}=1)}{\sum_{i=1}^{m}(1-w^{(i)})}$$
Notice that the naive Bayes mixture is strikingly similar to the Gaussian mixture, and that the M-step parameter estimates match the supervised maximum-likelihood estimates, with soft weights $w^{(i)}$ in place of hard labels.
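The two-cluster naive Bayes (multivariate Bernoulli) mixture above can likewise be sketched in a few lines of pure Python. The toy corpus, the random symmetry-breaking initialization, and the small smoothing constant `eps` are illustrative assumptions, not part of the derivation:

```python
import random

def nb_mixture_em(docs, n_iter=30, eps=1e-3):
    """Two-cluster naive Bayes (multivariate Bernoulli) mixture fitted with EM.

    docs: list of binary word-occurrence vectors. Returns (phi_z, phi, w) where
    phi[z][j] = p(x_j = 1 | z) and w[i] = p(z = 1 | x^(i)).
    """
    n, m = len(docs[0]), len(docs)
    random.seed(1)
    phi_z = 0.5  # p(z = 1)
    # Random initialization breaks the symmetry between the two clusters.
    phi = [[random.uniform(0.25, 0.75) for _ in range(n)] for _ in range(2)]
    for _ in range(n_iter):
        # E-step: w[i] = p(z=1 | x^(i)) via Bayes' rule + word independence.
        w = []
        for x in docs:
            p1, p0 = phi_z, 1 - phi_z
            for j in range(n):
                p1 *= phi[1][j] if x[j] else 1 - phi[1][j]
                p0 *= phi[0][j] if x[j] else 1 - phi[0][j]
            w.append(p1 / (p1 + p0))
        # M-step: the closed-form weighted estimates derived above
        # (eps is a small smoothing term keeping probabilities off 0 and 1).
        phi_z = sum(w) / m
        for j in range(n):
            s1 = sum(wi * x[j] for wi, x in zip(w, docs))
            s0 = sum((1 - wi) * x[j] for wi, x in zip(w, docs))
            phi[1][j] = (s1 + eps) / (sum(w) + 2 * eps)
            phi[0][j] = (s0 + eps) / (m - sum(w) + 2 * eps)
    return phi_z, phi, w

# Toy corpus: three documents using words {0,1}, three using words {2,3}.
docs = [[1, 1, 0, 0]] * 3 + [[0, 0, 1, 1]] * 3
phi_z, phi, w = nb_mixture_em(docs)
```

On this corpus the responsibilities $w^{(i)}$ separate the two document groups into opposite clusters (which cluster gets which label depends on the random initialization).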
III. LDA Topic Model
Before studying topic models proper, note that extending the naive Bayes mixture beyond the multivariate Bernoulli setting leads to probabilistic latent semantic analysis (pLSA). In pLSA the latent variable $z$ is interpreted as a topic: a document involves multiple topics, and different topics emit words with different probabilities. The probability of generating a (document, word) pair can then be written as:
$$P(w_{j},d_{i})=P(d_{i})\sum_{k}P(z_{k}\mid d_{i})P(w_{j}\mid z_{k})$$
where $p(d_{i})$ is the probability of selecting document $i$, $p(z_{k}\mid d_{i})$ the probability that document $i$ generates topic $k$, and $p(w_{j}\mid z_{k})$ the probability that topic $k$ emits word $w_{j}$; the latter two distributions are multinomial. With EM, we can again fix the parameters in the E-step to obtain the conditional distribution over topics, and maximize the likelihood lower bound in the M-step to update the parameters.
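For concreteness, the standard pLSA update equations take the following form (a sketch; $n(d_{i},w_{j})$ denotes the count of word $w_{j}$ in document $d_{i}$ and $K$ the number of topics):
$$P(z_{k}\mid d_{i},w_{j})=\frac{P(z_{k}\mid d_{i})P(w_{j}\mid z_{k})}{\sum_{l=1}^{K}P(z_{l}\mid d_{i})P(w_{j}\mid z_{l})}$$
$$P(w_{j}\mid z_{k})\propto\sum_{i}n(d_{i},w_{j})P(z_{k}\mid d_{i},w_{j}),\qquad P(z_{k}\mid d_{i})\propto\sum_{j}n(d_{i},w_{j})P(z_{k}\mid d_{i},w_{j})$$
The E-step is exactly the "posterior of the latent variable" computation from Section I, and the M-step renormalizes the expected counts.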
LDA is very similar to pLSA; the difference is that pLSA views the document-topic-word relationship from the frequentist perspective, while LDA views it from the Bayesian perspective. Frequentists assume the data follow a distribution $p(x\mid\theta)$ with fixed parameters; Bayesians estimate the parameter distribution $p(\theta\mid x)$ from the data, holding that the parameters themselves follow a prior $p(\theta)$. By Bayes' rule, they maximize the posterior:
$$p(\theta\mid x)=\frac{p(x\mid\theta)p(\theta)}{p(x)}\propto p(x\mid\theta)p(\theta)$$
In other words, LDA adds two prior distributions on top of pLSA:
$$P(w_{j},d_{i}\mid\vec{\alpha},\vec{\beta})=p(z_{k}\mid\theta_{(i)})\,p(\theta_{(i)}\mid\vec{\alpha})\,p(w_{j}^{(i)}\mid z_{k},\phi_{k})\,p(\phi_{k}\mid\vec{\beta})$$
where $i$ indexes documents, $k$ topics, and $j$ words.
The purpose of introducing these priors from the Bayesian angle is to construct conjugate priors for the likelihood, with $\alpha,\beta$ acting as hyperparameters that tune the model. LDA can be solved with Gibbs sampling or with variational EM; we will not go into the details here and leave them for a dedicated study of LDA.
IV. Factor Analysis
EM -> factor analysis -> probabilistic matrix factorization