UA MATH566 Statistical Theory QE Practice: The Location-Shifted Exponential Distribution

January 2016, Problem 6

Part a
The joint likelihood is
$$L(\theta) = \exp\left(-\sum_{i=1}^n (X_i - \theta)\right) I(X_{(1)} \ge \theta) = \exp\left(n\theta - \sum_{i=1}^n X_i\right) I(X_{(1)} \ge \theta)$$

For two samples $\textbf{X}$ and $\textbf{Y}$, compute the likelihood ratio:
$$\frac{L(\theta|\textbf{X})}{L(\theta|\textbf{Y})} = \frac{\exp\left(n\theta - \sum_{i=1}^n X_i\right) I(X_{(1)} \ge \theta)}{\exp\left(n\theta - \sum_{i=1}^n Y_i\right) I(Y_{(1)} \ge \theta)} = \exp\left(\sum_{i=1}^n Y_i - \sum_{i=1}^n X_i\right) \frac{I(X_{(1)} \ge \theta)}{I(Y_{(1)} \ge \theta)}$$

To make the likelihood ratio independent of $\theta$, we need
$$X_{(1)} = Y_{(1)}$$

So $X_{(1)}$ is a minimal sufficient statistic.
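A quick numerical illustration (my own sketch, not part of the exam solution; the sample size, seed, and $\theta$ values are arbitrary choices): when two samples share the same minimum, the likelihood ratio evaluates to the same constant for every admissible $\theta$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta_true = 5, 2.0

x = theta_true + rng.exponential(1.0, n)
y = theta_true + rng.exponential(1.0, n)
y = y - y.min() + x.min()  # force Y_(1) = X_(1)

def likelihood(theta, sample):
    """L(theta) = exp(n*theta - sum(sample)) * I(min(sample) >= theta)."""
    return np.exp(len(sample) * theta - sample.sum()) * (sample.min() >= theta)

# With matched minima the ratio is constant in theta (all theta here <= X_(1))
for theta in [1.5, 1.8, 2.0]:
    print(theta, likelihood(theta, x) / likelihood(theta, y))
```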

Part b
$$EX = \int_{\theta}^{\infty} x e^{-(x-\theta)}\,dx = \theta + 1 = \bar{X} \Rightarrow \hat\theta_{MME} = \bar{X} - 1$$
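The value of the integral follows from the substitution $u = x - \theta$:
$$\int_{\theta}^{\infty} x e^{-(x-\theta)}\,dx = \int_{0}^{\infty} (u + \theta) e^{-u}\,du = 1 + \theta$$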

Part c
Notice that if $\theta > X_{(1)}$, then $L(\theta) = 0$, so we need $\theta \le X_{(1)}$ to make $L(\theta)$ as large as possible. Since $L(\theta)$ is increasing in $\theta$ on $(-\infty, X_{(1)}]$, the maximum is attained at the right endpoint: $\hat\theta_{MLE} = X_{(1)}$.
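As a sanity check, here is a minimal simulation sketch (mine, not part of the solution; the true $\theta$, sample sizes, and seed are arbitrary) showing both estimators concentrating around the true value:

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 3.0
for n in [10, 100, 10_000]:
    x = theta_true + rng.exponential(1.0, n)  # X_i = theta + Exp(1)
    print(n, "MME:", x.mean() - 1.0, "MLE:", x.min())
```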

Part d
Compute
$$E[\hat\theta_{MME}] = E[\bar{X} - 1] = \theta, \qquad Var(\hat\theta_{MME}) = Var(\bar{X}) = \frac{1}{n}$$
Since the MME is unbiased, $MSE(\hat\theta_{MME}) = Var(\hat\theta_{MME}) = \frac{1}{n}$.

By the properties of order statistics, the density of $X_{(1)}$ is
$$f_{X_{(1)}}(x) = n e^{-n(x-\theta)}, \quad x \ge \theta$$

Compute
$$EX_{(1)} = \int_{\theta}^{\infty} n x e^{-n(x-\theta)}\,dx = \frac{n\theta + 1}{n}$$
$$EX_{(1)}^2 = \int_{\theta}^{\infty} n x^2 e^{-n(x-\theta)}\,dx = \frac{n^2\theta^2 + 2n\theta + 2}{n^2}$$
$$Var(X_{(1)}) = EX_{(1)}^2 - (EX_{(1)})^2 = \frac{1}{n^2}$$
$$MSE(X_{(1)}) = bias^2 + Var(X_{(1)}) = \frac{1}{n^2} + \frac{1}{n^2} = \frac{2}{n^2}$$
Since $\frac{2}{n^2} \le \frac{1}{n}$ for $n \ge 2$, the MLE has MSE no larger than the MME's (strictly smaller once $n > 2$).
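The two MSE formulas can be checked by Monte Carlo; this is my own sketch (the values of `theta_true`, `n`, `reps`, and the seed are arbitrary), and the empirical MSEs should land near $1/n$ and $2/n^2$:

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true, n, reps = 3.0, 20, 200_000

x = theta_true + rng.exponential(1.0, (reps, n))
mme = x.mean(axis=1) - 1.0  # X-bar - 1
mle = x.min(axis=1)         # X_(1)

print("MSE(MME):", np.mean((mme - theta_true) ** 2), "theory:", 1 / n)
print("MSE(MLE):", np.mean((mle - theta_true) ** 2), "theory:", 2 / n**2)
```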

Part e
By the Rao-Blackwell theorem, $E[\bar{X} - 1 \mid X_{(1)}]$ will be at least as good as the MME. By memorylessness, conditional on $X_{(1)}$ the remaining $n-1$ observations are distributed as $X_{(1)} + Exp(1)$, which gives $E[\bar{X} - 1 \mid X_{(1)}] = X_{(1)} - \frac{1}{n}$; since $X_{(1)}$ is complete and sufficient, the Lehmann-Scheffé theorem makes this estimator the UMVUE.

May 2018, Problem 6

Part a
The joint likelihood function is
$$L(\lambda,\theta) = \prod_{i=1}^n f(X_i|\lambda,\theta) = \lambda^n \exp\left(-\lambda \sum_{i=1}^n (X_i - \theta)\right) I(X_{(1)} > \theta)$$
Consider two samples $\{X_i\}_{i=1}^n$ and $\{Y_i\}_{i=1}^n$ and compute the likelihood ratio:

$$\frac{L(\lambda,\theta|\textbf{X})}{L(\lambda,\theta|\textbf{Y})} = \frac{\lambda^n \exp\left(-\lambda \sum_{i=1}^n (X_i-\theta)\right) I(X_{(1)} > \theta)}{\lambda^n \exp\left(-\lambda \sum_{i=1}^n (Y_i-\theta)\right) I(Y_{(1)} > \theta)} = \frac{I(X_{(1)} > \theta)}{I(Y_{(1)} > \theta)} \exp\left(-\lambda \sum_{i=1}^n (X_i - Y_i)\right)$$

To make this likelihood ratio independent of the parameters, we need
$$X_{(1)} = Y_{(1)}, \qquad \sum_{i=1}^n X_i = \sum_{i=1}^n Y_i$$

Let $T_1(X) = X_{(1)}$ and $T_2(X) = \sum_{i=1}^n X_i$; then $(T_1, T_2)$ is a minimal sufficient statistic for $(\lambda, \theta)$.

Part b
If $\lambda = 1$,
$$f_X(x) = e^{-(x-\theta)},\ x > \theta, \qquad F_X(x) = \int_{\theta}^{x} e^{-(s-\theta)}\,ds = 1 - e^{-(x-\theta)},\ x > \theta$$

Compute
$$P(X_{(1)} \le x) = P(\min(X_1,\cdots,X_n) \le x) = 1 - P(\min(X_1,\cdots,X_n) > x) = 1 - [1-F(x)]^n$$
$$f_{X_{(1)}}(x) = n[1-F(x)]^{n-1} f(x) = n e^{-(n-1)(x-\theta)} e^{-(x-\theta)} = n e^{-n(x-\theta)}$$

This means $X_{(1)} \sim \theta + \Gamma(1, n)$ (shape $1$, rate $n$), i.e. $X_{(1)} - \theta \sim Exp(n)$.
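The distributional claim can be checked numerically; here is a minimal sketch of mine (arbitrary $\theta$, $n$, and seed) comparing simulated minima against the $Exp(n)$ distribution via a Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
theta_true, n, reps = 2.0, 10, 100_000

mins = (theta_true + rng.exponential(1.0, (reps, n))).min(axis=1)
# X_(1) - theta should follow Exp(rate=n), i.e. scipy's expon with scale 1/n
print(stats.kstest(mins - theta_true, "expon", args=(0, 1 / n)))
```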

Part c
Define $Q = 2n(X_{(1)} - \theta)$. By this location-scale transformation, $Q \sim \chi^2_2 \overset{d}{=} \Gamma(1,\frac{1}{2})$. For $0 \le y \le \alpha$, let $\chi^2_{y,2}$ and $\chi^2_{1-\alpha+y,2}$ denote the $y$ and $1-\alpha+y$ quantiles of $\chi^2_2$. Then
$$P(\chi^2_{y,2} \le Q \le \chi^2_{1-\alpha+y,2}) = 1-\alpha \Rightarrow P\left(X_{(1)} - \frac{\chi^2_{1-\alpha+y,2}}{2n} \le \theta \le X_{(1)} - \frac{\chi^2_{y,2}}{2n}\right) = 1 - \alpha$$

Notice that the length of the confidence interval is
$$L = \frac{\chi^2_{1-\alpha+y,2} - \chi^2_{y,2}}{2n}$$

Let $Z_p$ denote the $p$ quantile of the standard normal distribution. By the normal approximation of the chi-square distribution, $\chi^2_{p,k} \approx k + \sqrt{2k}\,Z_p$ (see UA MATH564 概率论VI 数理统计基础3 卡方分布的正态近似), so
$$L \approx \frac{(2 + 2Z_{1-\alpha+y}) - (2 + 2Z_{y})}{2n} = \frac{Z_{1-\alpha+y} - Z_{y}}{n}$$

Since the standard normal density is symmetric about zero, $Z_{1-\alpha+y} - Z_{y}$ is minimized when $Z_{1-\alpha+y} = -Z_{y}$, i.e. at $y = \frac{\alpha}{2}$. Hence the (approximately) shortest confidence interval is
$$P\left(X_{(1)} - \frac{\chi^2_{1-\frac{\alpha}{2},2}}{2n} \le \theta \le X_{(1)} - \frac{\chi^2_{\frac{\alpha}{2},2}}{2n}\right) = 1 - \alpha$$
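A coverage sketch (my own check; $\theta$, $n$, $\alpha$, and the seed are arbitrary): the interval above should cover the true $\theta$ in about a $1-\alpha$ fraction of repetitions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
theta_true, n, alpha, reps = 2.0, 10, 0.05, 100_000

mins = (theta_true + rng.exponential(1.0, (reps, n))).min(axis=1)
lo = mins - stats.chi2.ppf(1 - alpha / 2, df=2) / (2 * n)  # lower endpoint
hi = mins - stats.chi2.ppf(alpha / 2, df=2) / (2 * n)      # upper endpoint
print("coverage:", np.mean((lo <= theta_true) & (theta_true <= hi)))  # ~0.95
```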

Part d
The posterior kernel of $\theta$ is
$$\pi(\theta|\textbf{X}) \propto \exp\left(-\sum_{i=1}^n (X_i - \theta)\right), \quad 0 \le \theta \le 1$$
(the support $[0,1]$ comes from the prior; the likelihood indicator $I(X_{(1)} \ge \theta)$ is inactive here provided $X_{(1)} \ge 1$).

Compute
$$\int_{0}^{1} \theta \exp\left(-\sum_{i=1}^n (x_i - \theta)\right) d\theta = \frac{1}{n}\int_{0}^{1} \theta\, d\exp\left(-\sum_{i=1}^n (x_i - \theta)\right)$$
$$= \frac{1}{n}\theta \exp\left(-\sum_{i=1}^n (x_i - \theta)\right)\Big|_0^1 - \frac{1}{n}\int_{0}^{1} \exp\left(-\sum_{i=1}^n (x_i - \theta)\right) d\theta$$
$$= \frac{n-1}{n^2}\exp\left(n - \sum_{i=1}^n x_i\right) + \frac{1}{n^2}\exp\left(-\sum_{i=1}^n x_i\right)$$

The marginal density of $\textbf{X}$ is
$$m(x) = \int_{0}^{1} \exp\left(-\sum_{i=1}^n (x_i - \theta)\right) d\theta = \frac{1}{n}\exp\left(n - \sum_{i=1}^n x_i\right) - \frac{1}{n}\exp\left(-\sum_{i=1}^n x_i\right)$$

So the Bayes estimator (the posterior mean) is
$$\hat\theta = \frac{\frac{n-1}{n^2}\exp\left(n - \sum_{i=1}^n x_i\right) + \frac{1}{n^2}\exp\left(-\sum_{i=1}^n x_i\right)}{\frac{1}{n}\exp\left(n - \sum_{i=1}^n x_i\right) - \frac{1}{n}\exp\left(-\sum_{i=1}^n x_i\right)} = \frac{(n-1)e^n + 1}{n e^n - n}$$
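The closed form can be verified by numerically integrating the posterior mean $\int_0^1 \theta e^{n\theta}\,d\theta \big/ \int_0^1 e^{n\theta}\,d\theta$; a small sketch of mine ($n$ is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad

n = 5
num, _ = quad(lambda t: t * np.exp(n * t), 0, 1)  # integral of theta * e^{n theta}
den, _ = quad(lambda t: np.exp(n * t), 0, 1)      # integral of e^{n theta}

closed_form = ((n - 1) * np.exp(n) + 1) / (n * np.exp(n) - n)
print(num / den, closed_form)  # the two values should agree
```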