Basic Econometrics — English Lecture Slides 2


where $x_i = X_i - \bar{X}$, $y_i = Y_i - \bar{Y}$ denote deviations from the mean values, and $\bar{X} = \sum_{i=1}^{n} X_i / n$, $\bar{Y} = \sum_{i=1}^{n} Y_i / n$ denote the sample means of $X$ and $Y$.
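As a quick numerical illustration of these definitions (a sketch using NumPy and invented sample data, not figures from the slides), the deviations $x_i$ and $y_i$ always sum to zero by construction:

```python
import numpy as np

# Hypothetical sample, for illustration only
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Sample means: X_bar = sum(X_i)/n, Y_bar = sum(Y_i)/n
X_bar, Y_bar = X.mean(), Y.mean()

# Deviations from the mean values: x_i = X_i - X_bar, y_i = Y_i - Y_bar
x = X - X_bar
y = Y - Y_bar

# Each set of deviations sums to zero (up to floating-point error)
print(x.sum(), y.sum())
```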
• The estimators that we obtained are called OLS estimators, since they are derived from the least squares criterion.
• The OLS method is extensively used in regression analysis because it is intuitively appealing and mathematically simpler than alternative estimation methods (e.g. maximum likelihood estimation).
The estimators are chosen by minimising the sum of squared residuals (RSS):

$$\min_{\hat{\beta}_1, \hat{\beta}_2} \sum_{i=1}^{n} \hat{u}_i^2 = \min_{\hat{\beta}_1, \hat{\beta}_2} \mathrm{RSS}$$

where $\hat{u}_i = Y_i - \hat{Y}_i$ is the residual term and $n$ is the sample size.

$Y_i = \beta_1 + \beta_2 X_i + u_i$ : stochastic PRF → generates the actual $Y$

Step 1: First-order conditions: take the partial derivatives of RSS with respect to $\hat{\beta}_1, \hat{\beta}_2$ and set them equal to zero:

$$\frac{\partial \sum_{i=1}^{n} (Y_i - \hat{\beta}_1 - \hat{\beta}_2 X_i)^2}{\partial \hat{\beta}_1} = 2 \sum_{i=1}^{n} (Y_i - \hat{\beta}_1 - \hat{\beta}_2 X_i)(-1) = 0$$

$$\frac{\partial \sum_{i=1}^{n} (Y_i - \hat{\beta}_1 - \hat{\beta}_2 X_i)^2}{\partial \hat{\beta}_2} = 2 \sum_{i=1}^{n} (Y_i - \hat{\beta}_1 - \hat{\beta}_2 X_i)(-X_i) = 0$$
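These first-order conditions can be checked numerically (a minimal sketch with NumPy and invented data): at the OLS estimates, both sums vanish up to floating-point error.

```python
import numpy as np

# Hypothetical sample, for illustration only
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])

# Closed-form OLS estimates
x = X - X.mean()
yd = Y - Y.mean()
b2 = (x * yd).sum() / (x * x).sum()   # slope estimate
b1 = Y.mean() - b2 * X.mean()         # intercept estimate

u = Y - b1 - b2 * X                   # residuals u_hat_i

# First-order conditions: sum(u_hat_i) = 0 and sum(u_hat_i * X_i) = 0
foc1 = u.sum()
foc2 = (u * X).sum()
print(foc1, foc2)
```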
Step 2: Solve the corresponding system of simultaneous (normal) equations:
• Note that the OLS estimators are expressed solely in terms of the observable (sample) quantities of X, Y, hence they can be easily computed.
• The OLS estimators are point estimators, that is, given the sample, each estimator will provide only a single (point) value of the relevant population parameter.
• The OLS method was first developed by C.F. Gauss in 1821.
• Least Squares Criterion: specify the SRF (by choosing values for the estimators) so that it is as close as possible to the actual PRF, by minimising the sum of squared residuals (RSS).

$Y_i = \hat{\beta}_1 + \hat{\beta}_2 X_i + \hat{u}_i$ : stochastic SRF

$\hat{Y}_i = \hat{\beta}_1 + \hat{\beta}_2 X_i$ : estimator of the conditional mean of $Y$ (non-stochastic SRF)

$\hat{\beta}$ : estimator of $\beta$

Why not minimise (i) the sum of residuals, instead of (ii) the sum of squared residuals? Because (i) gives equal weight to 'small' and 'large' residuals, and positive and negative residuals can cancel each other out, while (ii) penalises 'large' residuals more heavily.
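The cancellation problem can be seen with a toy example (invented numbers): a badly fitting line can still have residuals that sum to zero, while the RSS criterion tells the two fits apart.

```python
import numpy as np

X = np.array([0.0, 1.0, 2.0, 3.0])
Y = np.array([0.0, 1.0, 2.0, 3.0])   # points lying exactly on Y = X

# Fit (a): the true line Y = 0 + 1*X -> every residual is zero
u_good = Y - (0.0 + 1.0 * X)

# Fit (b): a poor flat line Y = 1.5 -> residuals -1.5, -0.5, 0.5, 1.5
u_bad = Y - (1.5 + 0.0 * X)

# Both fits have a zero *sum* of residuals (positives cancel negatives)...
print(u_good.sum(), u_bad.sum())            # -> 0.0 0.0
# ...but only the sum of *squared* residuals distinguishes them
print((u_good**2).sum(), (u_bad**2).sum())  # -> 0.0 5.0
```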
The resulting normal equations and their solution:

$$\sum_{i=1}^{n} Y_i = n\hat{\beta}_1 + \hat{\beta}_2 \sum_{i=1}^{n} X_i$$

$$\sum_{i=1}^{n} Y_i X_i = \hat{\beta}_1 \sum_{i=1}^{n} X_i + \hat{\beta}_2 \sum_{i=1}^{n} X_i^2$$

$$\Leftrightarrow \; \ldots \; \Leftrightarrow \quad \hat{\beta}_2 = \frac{\sum_{i=1}^{n} x_i y_i}{\sum_{i=1}^{n} x_i^2} = \frac{\mathrm{Cov}(X, Y)}{\mathrm{Var}(X)}, \qquad \hat{\beta}_1 = \bar{Y} - \hat{\beta}_2 \bar{X}$$
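These closed-form expressions are easy to compute directly from the sample. The sketch below (with hypothetical data) checks them against NumPy's own least-squares fit, `np.polyfit`, which minimises the same RSS and therefore must agree:

```python
import numpy as np

# Hypothetical sample, for illustration only
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([1.2, 1.9, 3.2, 3.8, 5.1, 6.0])

x = X - X.mean()   # deviations from mean values
y = Y - Y.mean()

b2 = (x * y).sum() / (x * x).sum()   # beta_hat_2 = sum(x_i y_i) / sum(x_i^2)
b1 = Y.mean() - b2 * X.mean()        # beta_hat_1 = Y_bar - beta_hat_2 * X_bar

# np.polyfit with degree 1 returns [slope, intercept]
slope, intercept = np.polyfit(X, Y, 1)
print(b2, slope)
print(b1, intercept)
```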
[Figure: sample points plotted around the fitted line $\hat{Y}_i = \hat{\beta}_1 + \hat{\beta}_2 X_i$; the residual $\hat{u}_1$ for point $Y_1$ at $X_1$ is large, while the residual $\hat{u}_2$ for point $Y_2$ at $X_2$ is small.]
Minimisation problem:

$$\min_{\hat{\beta}_1, \hat{\beta}_2} \sum_{i=1}^{n} \hat{u}_i^2 = \min_{\hat{\beta}_1, \hat{\beta}_2} \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2 = \min_{\hat{\beta}_1, \hat{\beta}_2} \sum_{i=1}^{n} (Y_i - \hat{\beta}_1 - \hat{\beta}_2 X_i)^2$$
$$\sum_{i=1}^{n} (Y_i - \hat{\beta}_1 - \hat{\beta}_2 X_i) = 0$$

$$\sum_{i=1}^{n} (Y_i - \hat{\beta}_1 - \hat{\beta}_2 X_i) X_i = 0$$

$$\Rightarrow \quad \sum_{i=1}^{n} Y_i - n\hat{\beta}_1 - \hat{\beta}_2 \sum_{i=1}^{n} X_i = 0, \qquad \sum_{i=1}^{n} Y_i X_i - \hat{\beta}_1 \sum_{i=1}^{n} X_i - \hat{\beta}_2 \sum_{i=1}^{n} X_i^2 = 0$$
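The two normal equations form a 2×2 linear system in $(\hat{\beta}_1, \hat{\beta}_2)$; solving it directly with linear algebra (a sketch with hypothetical data) reproduces the closed-form estimates:

```python
import numpy as np

# Hypothetical sample, for illustration only
X = np.array([1.0, 2.0, 3.0, 4.0])
Y = np.array([2.2, 2.9, 4.1, 4.8])
n = len(X)

# Normal equations:
#   sum(Y_i)     = n*b1        + b2*sum(X_i)
#   sum(Y_i X_i) = b1*sum(X_i) + b2*sum(X_i^2)
A = np.array([[n,       X.sum()],
              [X.sum(), (X**2).sum()]])
rhs = np.array([Y.sum(), (X * Y).sum()])
b1, b2 = np.linalg.solve(A, rhs)

# Closed-form solution for comparison
x, y = X - X.mean(), Y - Y.mean()
b2_cf = (x * y).sum() / (x * x).sum()
b1_cf = Y.mean() - b2_cf * X.mean()
print(b1, b1_cf, b2, b2_cf)
```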
Lecture 2
Two Variable Regression Model: Estimation
I. The Method of Ordinary Least Squares (OLS)
• The OLS method is used to estimate the population regression function (PRF), which is not directly observable, on the basis of the sample regression function (SRF).