Introduction to statistical machine learning, review notes 1: probability densities

Probability densities

  1. Expectation and mode: when outliers are severe, the expectation (mean) is strongly affected by them, while the mode (the peak of the density) is much more robust, as the sketch below illustrates.
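    A minimal numerical sketch of this point (the sample data, the outlier values, and the KDE-based mode estimate are my own illustration, not from the notes): adding a few extreme outliers pulls the mean away, while the mode barely moves.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    data = rng.normal(loc=5.0, scale=1.0, size=200)         # well-behaved sample centred at 5
    contaminated = np.append(data, [100.0, 120.0, 150.0])   # add a few severe outliers

    def kde_mode(x):
        """Estimate the mode as the peak of a Gaussian kernel density estimate."""
        grid = np.linspace(x.min(), x.max(), 2000)
        return grid[np.argmax(gaussian_kde(x)(grid))]

    for name, x in [("clean", data), ("with outliers", contaminated)]:
        print(f"{name:>14}: mean={x.mean():6.2f}  mode~{kde_mode(x):5.2f}")
    # The mean is dragged towards the outliers; the mode stays near 5.
    ```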

  2. Skewness and kurtosis
    Here $D[x]$ denotes the standard deviation (a numeric check is sketched below):
    $$\text{Skewness: } \frac{E\left[(x-E[x])^{3}\right]}{(D[x])^{3}} \qquad \text{Kurtosis: } \frac{E\left[(x-E[x])^{4}\right]}{(D[x])^{4}}-3$$
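    A quick numeric check of these definitions (the exponential sample and the helper code are my own, not from the notes): compute skewness and excess kurtosis directly from the formulas above, using the standard deviation for $D[x]$, and compare with scipy.stats.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.exponential(scale=2.0, size=100_000)    # a right-skewed distribution

    m = x.mean()
    d = x.std()                                     # D[x]: standard deviation (ddof=0)
    skewness = np.mean((x - m) ** 3) / d ** 3
    kurtosis = np.mean((x - m) ** 4) / d ** 4 - 3   # "-3" makes the Gaussian's kurtosis 0

    print(f"formulas   : skew={skewness:.3f}  kurt={kurtosis:.3f}")
    print(f"scipy.stats: skew={stats.skew(x):.3f}  kurt={stats.kurtosis(x):.3f}")
    # For an exponential distribution the exact values are skewness 2 and excess kurtosis 6.
    ```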

  3. Moment generating function
    In the limit, if the moments of all orders are specified, the probability distribution is uniquely determined.
    $$M_{x}(t)=E\left[e^{t x}\right]=\begin{cases} \sum_{x} e^{t x} f(x) & \text{(discrete)} \\ \int e^{t x} f(x)\,\mathrm{d}x & \text{(continuous)} \end{cases}$$
    Expanding the exponential and taking expectations term by term, with $\mu_{k}=E[x^{k}]$ the $k$-th moment,
    $$e^{t x}=1+t x+\frac{(t x)^{2}}{2 !}+\frac{(t x)^{3}}{3 !}+\cdots$$
    $$E\left[e^{t x}\right]=M_{x}(t)=1+t \mu_{1}+\frac{t^{2} \mu_{2}}{2 !}+\frac{t^{3} \mu_{3}}{3 !}+\cdots$$
    Differentiating with respect to $t$,
    $$\begin{aligned} M_{x}^{\prime}(t) &=\mu_{1}+\mu_{2} t+\frac{\mu_{3}}{2 !} t^{2}+\frac{\mu_{4}}{3 !} t^{3}+\cdots \\ M_{x}^{\prime \prime}(t) &=\mu_{2}+\mu_{3} t+\frac{\mu_{4}}{2 !} t^{2}+\frac{\mu_{5}}{3 !} t^{3}+\cdots \\ &\;\;\vdots \\ M_{x}^{(k)}(t) &=\mu_{k}+\mu_{k+1} t+\frac{\mu_{k+2}}{2 !} t^{2}+\frac{\mu_{k+3}}{3 !} t^{3}+\cdots \end{aligned}$$
    Setting $t=0$ gives $M_{x}^{(k)}(0)=\mu_{k}$, so the $k$-th moment is recovered from the $k$-th derivative at $t=0$ (a symbolic check is sketched below).
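    A symbolic sketch of this recovery step (the exponential example and the use of sympy are my own choice, not from the notes): differentiate a known MGF and evaluate at $t=0$ to read off the moments.

    ```python
    import sympy as sp

    t = sp.symbols("t", real=True)
    lam = sp.symbols("lambda", positive=True)

    # Known MGF of the exponential distribution Exp(lambda): M(t) = lambda/(lambda - t) for t < lambda.
    M = lam / (lam - t)

    # mu_k = k-th derivative of M evaluated at t = 0.
    for k in range(1, 5):
        mu_k = sp.simplify(sp.diff(M, t, k).subs(t, 0))
        print(f"mu_{k} =", mu_k)    # prints 1/lambda, 2/lambda**2, 6/lambda**3, 24/lambda**4
    ```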

  4. Characteristic function of a distribution
    It is the Fourier transform of the probability density (a numerical illustration is sketched below):
    $$\varphi_{x}(t)=M_{i x}(t)=M_{x}(i t)$$
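    A numerical illustration (the standard normal example and the sample size are my own): the empirical characteristic function $\frac{1}{n}\sum_{n} e^{i t x_{n}}$ approaches $E[e^{itx}]$, which for a standard normal equals $e^{-t^{2}/2}$.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.standard_normal(200_000)                        # samples from N(0, 1)

    ts = np.array([0.5, 1.0, 2.0])
    empirical = np.exp(1j * np.outer(ts, x)).mean(axis=1)   # (1/n) sum_n e^{i t x_n}
    exact = np.exp(-ts ** 2 / 2)                            # phi(t) of N(0, 1)

    for t, emp, phi in zip(ts, empirical, exact):
        print(f"t={t:3.1f}  empirical={emp.real:+.4f}{emp.imag:+.4f}i  exact={phi:.4f}")
    ```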

  5. Transformation of random variables
    Multiply by the Jacobian determinant: integration of a function $f(x)$ over $\mathcal{X}$ can be expressed using a function $g(r)$ on $\mathcal{R}$ such that
    $$x=g(r) \quad \text{and} \quad \mathcal{X}=g(\mathcal{R})$$
    as
    $$\int_{\mathcal{X}} f(x)\,\mathrm{d}x=\int_{\mathcal{R}} f(g(r))\left|\frac{\mathrm{d}x}{\mathrm{d}r}\right| \mathrm{d}r$$
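    A quick numerical check of this substitution formula (the particular $f$, $g$, and ranges are my own choice): integrate $f(x)=e^{-x}$ over $\mathcal{X}=[0,1]$ directly, and again over $\mathcal{R}=[0,1]$ via $x=g(r)=r^{2}$, for which $|\mathrm{d}x/\mathrm{d}r|=2r$.

    ```python
    import numpy as np
    from scipy.integrate import quad

    def f(x):
        return np.exp(-x)          # integrand on X = [0, 1]

    def g(r):
        return r ** 2              # substitution x = g(r); R = [0, 1] maps onto X = [0, 1]

    direct, _ = quad(f, 0.0, 1.0)                                    # integral of f(x) dx over X
    substituted, _ = quad(lambda r: f(g(r)) * abs(2 * r), 0.0, 1.0)  # integral of f(g(r)) |dx/dr| dr over R

    print(direct, substituted)     # both equal 1 - exp(-1) ~ 0.6321
    ```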