Chapter06 The Two-Variable Model Hypothesis Testing.ppt
Chapter 6 The Two-Variable Model: Hypothesis Testing

The Object of Hypothesis Testing

To answer: How "good" is the estimated regression line? How can we be sure that the estimated regression function (i.e., the SRF) is in fact a good estimator of the true PRF?

Yi = B1 + B2·Xi + μi

where Xi is nonstochastic, μi is stochastic, and therefore Yi is stochastic.

Before we can say how good an SRF is as an estimate of the true PRF, we must assume how the stochastic disturbance terms μi are generated.

The Classical Linear Regression Model (CLRM)

CLRM assumptions:

1. The explanatory variable(s) X is uncorrelated with the disturbance term μ.
2. Zero mean value assumption: the expected, or mean, value of the disturbance term μ is zero: E(μi) = 0.
3. Homoscedasticity assumption: the variance of each μi is constant, or homoscedastic: var(μi) = σ².
4. No autocorrelation assumption: there is no correlation between two error terms: cov(μi, μj) = 0, i ≠ j.

Variances and Standard Errors of Ordinary Least Squares (OLS) Estimators

These formulas let us study the sampling variability of the OLS estimators. Writing xi = Xi − X̄ for deviations from the mean, the variances and standard errors of the OLS estimators are:

var(b1) = (ΣXi² / (n·Σxi²)) · σ²
se(b1) = √var(b1)
var(b2) = σ² / Σxi²
se(b2) = √var(b2)

Since the true σ² is unknown, σ̂² = Σei² / (n − 2) is an estimator of σ².
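The estimators and standard-error formulas above can be sketched in code. This is a minimal illustration with made-up data (the X and Y values are hypothetical, not from the text):

```python
import numpy as np

# Hypothetical sample data: X is nonstochastic, Y is the observed response.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

n = len(X)
x = X - X.mean()                                  # deviations xi = Xi - X-bar

# OLS estimators of the PRF coefficients B1 and B2
b2 = (x * (Y - Y.mean())).sum() / (x**2).sum()    # slope
b1 = Y.mean() - b2 * X.mean()                     # intercept

# Residuals and the estimator of sigma^2 (n - 2 degrees of freedom)
e = Y - (b1 + b2 * X)
sigma2_hat = (e**2).sum() / (n - 2)

# Variances and standard errors of b1 and b2
var_b1 = (X**2).sum() / (n * (x**2).sum()) * sigma2_hat
var_b2 = sigma2_hat / (x**2).sum()
se_b1, se_b2 = np.sqrt(var_b1), np.sqrt(var_b2)

print(b1, b2, se_b1, se_b2)
```

Note that σ̂² divides by n − 2 rather than n, matching the degrees-of-freedom correction discussed next.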
Σei² = RSS (residual sum of squares) = Σ(Yi − Ŷi)²

n − 2 is the number of degrees of freedom.

The Properties of OLS Estimators

Why Ordinary Least Squares (OLS)? The OLS method is widely used because it has some very strong theoretical properties, which are summarized in the Gauss-Markov theorem.

Gauss-Markov theorem: Given the assumptions of the classical linear regression model, the OLS estimators, in the class of linear unbiased estimators, have minimum variance; that is, they are BLUE (best linear unbiased estimators).

That is, the OLS estimators b1 and b2 are:
1. Linear: they are linear functions of the random variable Y.
2. Unbiased: E(b1) = B1, E(b2) = B2, and E(σ̂²) = σ².
3. Minimum variance: among all linear unbiased estimators, they have the smallest variance.

The Sampling, or Probability, Distributions of OLS Estimators
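The unbiasedness property can be illustrated by simulation: generate many samples from a known PRF satisfying the CLRM assumptions and check that the slope estimates average out to the true B2. The parameter values below are assumptions chosen only for illustration:

```python
import numpy as np

# Monte Carlo check of E(b2) = B2 under the CLRM assumptions.
rng = np.random.default_rng(0)
B1, B2, sigma = 1.0, 0.5, 2.0          # "true" PRF parameters (hypothetical)
X = np.arange(1.0, 21.0)               # nonstochastic X, held fixed across samples
x = X - X.mean()
n_reps = 5000

b2_draws = np.empty(n_reps)
for r in range(n_reps):
    # Disturbances with E(mu)=0, constant variance, and no autocorrelation
    u = rng.normal(0.0, sigma, size=X.size)
    Y = B1 + B2 * X + u
    b2_draws[r] = (x * (Y - Y.mean())).sum() / (x**2).sum()

print(b2_draws.mean())   # averages close to the true B2 = 0.5
```

Each individual b2 estimate varies from sample to sample (that is its sampling distribution), but the mean across repeated samples settles near B2, which is what unbiasedness asserts.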