The log likelihood is
Since
the Lehmann-Scheffé theorem
proves that the UMVUE of
is
.
The random variable
has an F distribution so that
But
So the UMVUE is
You should check that for the smaller model the vector of complete
sufficient statistics is the 3 vector
According to Lehmann-Scheffé
is still the UMVUE of
.
Since
is independent of
WARNING: I haven't checked the algebra very carefully.
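To make the style of F-distribution calculation concrete, here is a sketch assuming (this is an illustration, not necessarily the exact quantity asked about) that the target is the variance ratio in the two-sample normal model, with S1², S2² the two sample variances:

```latex
% Assumed target: \sigma_1^2/\sigma_2^2 in the two-sample normal model.
\[
\frac{S_1^2/\sigma_1^2}{S_2^2/\sigma_2^2} \sim F_{n_1-1,\,n_2-1},
\qquad
E\!\left[\frac{S_1^2}{S_2^2}\right]
  = \frac{\sigma_1^2}{\sigma_2^2}\cdot\frac{n_2-1}{n_2-3}
  \quad (n_2 > 3),
\]
% using E[F_{a,b}] = b/(b-2) for b > 2.  Hence
\[
\frac{n_2-3}{n_2-1}\cdot\frac{S_1^2}{S_2^2}
\]
% is unbiased and, being a function of the complete sufficient
% statistic, is the UMVUE of \sigma_1^2/\sigma_2^2.
```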
We are analyzing the model
iid
and independently
iid
.
I suggested
studying the submodel
iid
and independently
iid
.
For this model
the complete sufficient statistics are
Now if there were a UMVUE
for the big model we would have
to have
For any choice of a
is unbiased. Thus
has expected value identically equal
to 0. Since
is a function of the minimal
sufficient statistics, we see that these statistics are not complete,
making Lehmann-Scheffé irrelevant.
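A concrete version of this incompleteness argument, assuming (for illustration) that the big model has a common mean mu for the two samples:

```latex
% For every real a,
\[
E\bigl[a\bar X + (1-a)\bar Y\bigr] = \mu ,
\]
% so \bar X - \bar Y has expectation identically 0 without being
% almost surely 0.  A complete statistic admits no nontrivial
% unbiased estimator of 0, so the minimal sufficient statistic
% cannot be complete.
```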
is complete and sufficient and unbiased for
.
Hence the UMVUE of
is
.
If
Remember that
is Poisson
and
that
is Poisson
to get
Conditioning on
is the same as conditioning
on
we see that the UMVUE of
is
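The conditioning step rests on a standard fact about independent Poisson variables:

```latex
% If X ~ Poisson(\lambda) and Y ~ Poisson(\mu) are independent, then
\[
X \mid X+Y = t \;\sim\;
\operatorname{Binomial}\!\left(t,\ \frac{\lambda}{\lambda+\mu}\right),
\]
% so conditional expectations given X+Y reduce to binomial moments.
```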
For n=1 if the UMVUE is g(X) then
For n=2 the complete sufficient statistic is X = X1 + X2. If g(X)
is the UMVUE of
then
From the power series expansion of the LHS we conclude
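A worked version of this power-series argument, assuming (for illustration) the target is e^{-lambda} with the Xi iid Poisson(lambda):

```latex
% For n = 1: E[g(X)] = e^{-\lambda} for all \lambda forces
% g(X) = 1\{X = 0\}.
% For n = 2: T = X_1 + X_2 \sim \operatorname{Poisson}(2\lambda), and
% unbiasedness requires
\[
\sum_{t=0}^\infty g(t)\,\frac{(2\lambda)^t}{t!}
  = e^{\lambda}
  = \sum_{t=0}^\infty \frac{\lambda^t}{t!},
\]
% so matching coefficients of \lambda^t gives g(t) = 2^{-t};
% the same argument for general n yields g(t) = (1 - 1/n)^t.
```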
To find the mean and variance of
(as opposed to
those of
which is what I meant to ask about)
use the tactics in question 1 of this assignment, remembering
that
has a
distribution.
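The moments needed come from the gamma integral; the following chi-squared moment formula (assuming a chi-squared is the distribution in question) covers fractional and negative powers as well:

```latex
% If Y ~ \chi^2_\nu then, for r > -\nu/2,
\[
E[Y^r] = 2^r\,\frac{\Gamma(\nu/2 + r)}{\Gamma(\nu/2)},
\]
% giving E[Y] = \nu, E[Y^2] = \nu(\nu+2),
% \operatorname{Var}(Y) = 2\nu, and moments such as E[\sqrt{Y}]
% or E[1/Y] by the same formula.
```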
Note that
Use the fact that
has a
distribution
and the change of variables formula along with the
independence of the Ti.
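A quick Monte Carlo sanity check of this change-of-variables conclusion, assuming (an assumption, since the model is not reproduced here) that the Ti are iid exponential with scale theta, so their sum is Gamma(n, theta):

```python
import random

# Assumed setup: T_1, ..., T_n iid Exponential with scale theta.
# Their sum has a Gamma(n, theta) distribution, so the sample mean
# and variance of the sums should be near n*theta and n*theta**2.
random.seed(0)
n, theta, reps = 5, 2.0, 200_000
sums = [sum(random.expovariate(1.0 / theta) for _ in range(n))
        for _ in range(reps)]
mean = sum(sums) / reps
var = sum((s - mean) ** 2 for s in sums) / reps
print(round(mean, 2), round(var, 2))  # close to 10.0 and 20.0
```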
Get
You have to minimize
We now have to minimize
You have to maximize the log of the likelihood in the first part of this
question, that is, the log of the joint density of
.
You get
The solution is
.
You can use the Lehmann-Scheffé theorem to prove it is UMVUE.
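A numerical check of this kind of likelihood maximization, using a hypothetical exponential model (the actual model of the question is not reproduced here; the names below are illustrative):

```python
import math
import random

# Hypothetical model: X_1, ..., X_n iid Exponential with rate lam.
# The log likelihood is sum_i (log(lam) - lam * x_i), maximized in
# closed form at lam_hat = 1 / xbar.
random.seed(1)
data = [random.expovariate(2.0) for _ in range(1000)]

def loglik(lam, xs):
    return sum(math.log(lam) - lam * x for x in xs)

lam_hat = len(data) / sum(data)
# the closed-form maximizer should beat nearby values of lam
print(loglik(lam_hat, data) >= loglik(1.1 * lam_hat, data))  # True
```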
are minimal sufficient where
The trick is to use the fact that the score function in this problem
has kth component
We have
This vector is MVN and the mean of Xij is
.
The variance of
any Xij is
.
We have
Simply multiply
To prove the determinant fact (which I didn't ask you to do) you can
use the fact that b11^t (with 1 the vector of p ones) has p-1 eigenvalues
equal to 0 and one equal to pb. Then if N is any symmetric matrix with eigenvalues
and corresponding eigenvectors
you see that
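The determinant fact then follows by listing the eigenvalues; a sketch (writing 1 for the vector of p ones, consistent with the eigenvalue claim above):

```latex
% aI + b\mathbf{1}\mathbf{1}^t has eigenvalue a + pb (eigenvector
% \mathbf{1}) and eigenvalue a with multiplicity p-1, and the
% determinant of a symmetric matrix is the product of its
% eigenvalues, so
\[
\det\bigl(a I + b\,\mathbf{1}\mathbf{1}^t\bigr)
  = (a + pb)\,a^{\,p-1}.
\]
```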
If X is the vector of all the Xij strung out in the order
then X has a
distribution
where
and
.
The log likelihood is
Here I wanted you to say that the ANOVA table has entries
Between SS, Within SS (or Error SS), and Total SS. The
expected mean square for error is
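For reference, the expected mean squares in the balanced one-way layout (the balanced case is an assumption here), with k groups, n observations per group, and the side condition that the alpha_i sum to 0:

```latex
% Model: X_{ij} = \mu + \alpha_i + \epsilon_{ij},
% \epsilon_{ij} iid N(0,\sigma^2), \sum_i \alpha_i = 0.
\[
E[\mathrm{MS}_{\text{error}}] = \sigma^2,
\qquad
E[\mathrm{MS}_{\text{between}}]
  = \sigma^2 + \frac{n\sum_i \alpha_i^2}{k-1}.
\]
```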
If you differentiate the log likelihood with respect to
you
get
Since
Since
h(LD50)/(1-h(LD50))=1 we find
.
Hence
This is a delta method question. If
then the gradient of f is
and
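The delta method being applied is the standard multivariate version:

```latex
% If \sqrt{n}(\hat\theta - \theta) \Rightarrow N(0, \Sigma) and f
% is differentiable at \theta with gradient \nabla f, then
\[
\sqrt{n}\bigl(f(\hat\theta) - f(\theta)\bigr)
  \;\Rightarrow\;
  N\bigl(0,\ \nabla f(\theta)^t\, \Sigma\, \nabla f(\theta)\bigr).
\]
```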
The log likelihood for the full model is
The
derivative of the log likelihood is
First take the data in /teaching/801/gamma. Use this data in the following questions.