Ch. 5 Hypothesis Testing
The current framework of hypothesis testing is largely due to the work of
Neyman and Pearson in the late 1920s, early 30s, complementing Fisher’s work
on estimation. As in estimation, we begin by postulating a statistical model
but instead of seeking an estimator of θ in Θ we consider the question of whether
θ ∈ Θ0 ⊂ Θ or θ ∈ Θ1 = Θ − Θ0 is most supported by the observed data. The
discussion which follows will proceed in a similar way to the discussion of
estimation, though less systematically and formally. This is due to the complexity
of the topic, which arises mainly because one is asked to assimilate too many
concepts too quickly just to be able to define the problem properly. This difficulty,
however, is inherent in testing, if any proper understanding of the topic is to be
attempted, and thus unavoidable.
1 Testing: Definition and Concepts
The Decision Rule
Let X be a random variable defined on the probability space (S, F, P(·)) and
consider the statistical model associated with X:
(a) Φ = {f(x; θ), θ ∈ Θ};
(b) x = (X1, X2, ..., Xn)' is a random sample from f(x; θ).
The problem of hypothesis testing is one of deciding whether or not some
conjecture about θ, of the form "θ belongs to some subset Θ0 of Θ", is supported
by the data x = (x1, x2, ..., xn)'. We call such a conjecture the null hypothesis
and denote it by
H0: θ ∈ Θ0,
where if the sample realization x ∈ C0 we accept H0; if x ∈ C1 we reject it.
Since the observation space X ⊂ Rn, but both the acceptance region C0 ⊂ R1
and the rejection region C1 ⊂ R1, we need a mapping from Rn to R1. The mapping
which enables us to define C0 and C1 we call a test statistic, τ(x): X → R1.
Example:
Let X be the random variable representing the marks achieved by students in
an econometric theory paper and let the statistical model be:
(a) Φ = {f(x; θ) = (1/(8√(2π))) exp{−½((x − θ)/8)²}, θ ∈ Θ ≡ [0, 100]};
(b) x = (X1, X2, ..., Xn)', n = 40, is a random sample from f(x; θ).
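To make the mapping τ(x): X → R1 concrete, the sketch below computes one natural test statistic for this example, the standardized sample mean, and applies a decision rule of the form "accept H0 if τ(x) ∈ C0". The particular threshold c = 1.96 defining C0 = [−c, c] is an illustrative assumption only; the text has not yet derived how C0 and C1 should be chosen.

```python
import math

def z_statistic(sample, theta0, sigma=8.0):
    """tau(x) = sqrt(n) * (xbar - theta0) / sigma: maps the n observations
    in R^n down to a single real number, as a test statistic must."""
    n = len(sample)
    xbar = sum(sample) / n
    return math.sqrt(n) * (xbar - theta0) / sigma

def decide(sample, theta0, c=1.96, sigma=8.0):
    """Decision rule: accept H0 when tau(x) lies in C0 = [-c, c],
    reject when it falls in C1 = R^1 - C0.
    The cutoff c = 1.96 is a hypothetical choice for illustration."""
    tau = z_statistic(sample, theta0, sigma)
    return "accept H0" if abs(tau) <= c else "reject H0"

# Hypothetical data: 40 exam marks, testing the conjecture theta = 70.
marks = [72, 68, 71, 69, 70, 73, 67, 70] * 5   # n = 40
print(decide(marks, theta0=70.0))
```

Note that the rule depends on the data only through τ(x); the regions C0 and C1 are defined on the real line, not on the 40-dimensional observation space.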