$z \sim \mathcal{N}(\mu, \Sigma)$, where $z \in \mathbb{R}^n$, $\mu \in \mathbb{R}^n$, $\Sigma \in \mathbb{R}^{n \times n}$ — $z$ is the variable, $\mu = [\mu_1, \mu_2, \dots, \mu_n]^T$ is the mean vector, and $\Sigma$ is the covariance matrix.
In GDA, all the class-conditional Gaussian models share one covariance matrix $\Sigma$; only the means differ per class.
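The parameter estimation above can be sketched as follows — a minimal illustration (the function name `fit_gda` is my own, not from any library), estimating per-class means, class priors, and the single shared covariance matrix:

```python
import numpy as np

def fit_gda(X, y):
    """Estimate GDA parameters: per-class means mu_k, class priors phi_k,
    and ONE covariance matrix Sigma shared by all classes.
    (Illustrative sketch; fit_gda is a hypothetical helper name.)"""
    classes = np.unique(y)
    n, d = X.shape
    phi = {k: np.mean(y == k) for k in classes}        # class priors P(y=k)
    mu = {k: X[y == k].mean(axis=0) for k in classes}  # per-class means
    # Shared covariance: pooled over all classes
    Sigma = np.zeros((d, d))
    for k in classes:
        diff = X[y == k] - mu[k]
        Sigma += diff.T @ diff
    Sigma /= n
    return phi, mu, Sigma
```

Pooling the covariance across classes is exactly what makes the decision boundary linear.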
Based on the two Gaussian models, we can draw a decision boundary between the classes. (Image source)
Prediction
$$\arg\max_y P(y \mid x) = \arg\max_y \frac{P(x \mid y)\,P(y)}{P(x)} = \arg\max_y P(x \mid y)\,P(y)$$
(since $P(x)$ is a constant with respect to $y$)
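The prediction rule can be sketched directly: pick the class that maximizes $P(x \mid y)\,P(y)$, using log-scores for numerical stability. (A minimal sketch; `gda_predict` and the parameter dictionaries are my own illustrative names, assumed to follow the fitting step above.)

```python
import numpy as np

def gda_predict(x, phi, mu, Sigma):
    """Return argmax_k P(x|k) * phi_k.  P(x) is dropped because it is the
    same for every class.  (Hypothetical helper name, for illustration.)"""
    Sigma_inv = np.linalg.inv(Sigma)
    def log_score(k):
        # log N(x; mu_k, Sigma) + log phi_k, up to a constant shared by all k
        d = x - mu[k]
        return -0.5 * d @ Sigma_inv @ d + np.log(phi[k])
    return max(phi, key=log_score)
```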
GDA & Logistic Regression
(The picture is from my notes.)
The picture shows that when the data is 1D, the posterior $P(y=1 \mid x)$ looks like the sigmoid function. In fact, it is exactly the sigmoid function, and the same holds in higher dimensions; I won't prove it here.
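Instead of a proof, the claim can be checked numerically in 1D: the Bayes-rule posterior and the sigmoid of a linear function of $x$ coincide. (The parameter values below are assumed for illustration; with shared variance, $\theta = (\mu_1 - \mu_0)/\sigma^2$ and $b = (\mu_0^2 - \mu_1^2)/(2\sigma^2) + \log\frac{\phi}{1-\phi}$.)

```python
import numpy as np

# Assumed 1D GDA parameters, purely for illustration
mu0, mu1, sigma2, phi = 0.0, 2.0, 1.0, 0.5

def posterior(x):
    """P(y=1|x) computed directly from Bayes' rule."""
    p0 = np.exp(-(x - mu0) ** 2 / (2 * sigma2)) * (1 - phi)
    p1 = np.exp(-(x - mu1) ** 2 / (2 * sigma2)) * phi
    return p1 / (p0 + p1)

def sigmoid_form(x):
    """The same posterior rewritten as sigmoid(theta * x + b)."""
    theta = (mu1 - mu0) / sigma2
    b = (mu0 ** 2 - mu1 ** 2) / (2 * sigma2) + np.log(phi / (1 - phi))
    return 1 / (1 + np.exp(-(theta * x + b)))
```

Evaluating both functions on any grid of $x$ values gives identical results, which is the 1D case of the sigmoid claim.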
GDA makes a stronger assumption than logistic regression: the class-conditional data has to follow a Gaussian distribution.
When the data follows a Gaussian distribution, or when the dataset is very large (so that, by the central limit theorem, it is approximately Gaussian), GDA works better than logistic regression.
Also, because the data is modeled as Gaussian, the maximum-likelihood estimates of the parameters have a closed form, so the model has no local-optima problem.