Wooldridge Econometrics Lecture Slides: Chapter 2


Why? Remember from basic probability that Cov(A,B) = E(AB) − E(A)E(B). Conditions such as E(u) = 0 and E(xu) = 0, stated in terms of population moments, are called moment restrictions.
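Applying the covariance identity to x and u makes the equivalence explicit:

```latex
% With E(u) = 0, zero covariance is equivalent to the second moment restriction:
\operatorname{Cov}(x, u) = E(xu) - E(x)\,E(u) = E(xu),
\qquad\text{so}\qquad
\operatorname{Cov}(x, u) = 0 \iff E(xu) = 0.
```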
2013/9/24
Deriving OLS using MOM
Derivation of OLS (cont.)

Note: from basic properties of the summation operator, the first condition,
$$n^{-1}\sum_{i=1}^{n}\bigl(y_i - \hat\beta_0 - \hat\beta_1 x_i\bigr) = 0,$$
can be rewritten as $\bar y = \hat\beta_0 + \hat\beta_1 \bar x$, so that
$$\hat\beta_0 = \bar y - \hat\beta_1 \bar x.$$

The second condition is
$$n^{-1}\sum_{i=1}^{n} x_i\bigl(y_i - \hat\beta_0 - \hat\beta_1 x_i\bigr) = 0.$$
Substituting $\hat\beta_0 = \bar y - \hat\beta_1 \bar x$ gives
$$\sum_{i=1}^{n} x_i\bigl(y_i - (\bar y - \hat\beta_1 \bar x) - \hat\beta_1 x_i\bigr) = 0,$$
which rearranges to
$$\sum_{i=1}^{n} x_i (y_i - \bar y) = \hat\beta_1 \sum_{i=1}^{n} x_i (x_i - \bar x).$$
If one uses calculus to solve the minimization problem for the two parameters, one obtains the following first-order conditions, which are the same as those we obtained before, multiplied by n.
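As a numerical sketch (with made-up data, not from the slides), one can confirm that the closed-form OLS estimates do minimize the sum of squared residuals: perturbing either coefficient can only increase it.

```python
# Illustrative sketch (hypothetical data): the closed-form OLS estimates
# minimize the sum of squared residuals Q(b0, b1).
import random

random.seed(1)
n = 100
x = [random.gauss(5.0, 2.0) for _ in range(n)]
y = [1.0 + 2.0 * xi + random.gauss(0.0, 1.0) for xi in x]  # true beta0=1, beta1=2

xbar, ybar = sum(x) / n, sum(y) / n
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
     / sum((xi - xbar) ** 2 for xi in x)
b0 = ybar - b1 * xbar

def ssr(b0_, b1_):
    """Sum of squared residuals Q for candidate coefficients."""
    return sum((yi - b0_ - b1_ * xi) ** 2 for xi, yi in zip(x, y))

q_min = ssr(b0, b1)
# Any perturbation of the OLS coefficients raises Q.
assert all(ssr(b0 + d0, b1 + d1) >= q_min
           for d0 in (-0.5, 0.0, 0.5) for d1 in (-0.5, 0.0, 0.5))
print(round(b1, 2))
```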
Examples
Δy = β1Δx if Δu = 0.
Graphical illustration and mathematical derivation …

[2.1] Soybean Yield and Fertilizer: yield = β0 + β1 fertilizer + u. Here β1 measures the effect of fertilizer on yield, holding other factors fixed: Δyield = β1 Δfertilizer.

[2.2] A Simple Wage Equation: wage = β0 + β1 education + u. Here β1 measures the change in hourly wage given another year of education, holding other factors fixed: Δwage = β1 Δeducation.
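For instance (with made-up coefficients, not estimates from the text), the wage equation's ceteris paribus interpretation says each extra year of education shifts the predicted wage by exactly β1:

```python
# Hypothetical coefficients for wage = beta0 + beta1*education + u.
beta0, beta1 = -0.9, 0.54

def predicted_wage(education):
    """Predicted hourly wage at a given years-of-education level."""
    return beta0 + beta1 * education

# One more year of education changes the predicted wage by beta1.
delta = predicted_wage(13) - predicted_wage(12)
print(round(delta, 2))  # 0.54
```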
So the OLS estimated slope is
$$\hat\beta_1 = \frac{\sum_{i=1}^{n} x_i (y_i - \bar y)}{\sum_{i=1}^{n} x_i (x_i - \bar x)} = \frac{\sum_{i=1}^{n} (x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^{n} (x_i - \bar x)^2}.$$
Summary of OLS estimators

For y = β0 + β1x + u and a sample {(xi, yi): i = 1, …, n}, the OLS estimators are
$$\hat\beta_1 = \frac{\sum_{i=1}^{n} (x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^{n} (x_i - \bar x)^2}, \qquad \hat\beta_0 = \bar y - \hat\beta_1 \bar x.$$
We can write our two restrictions just in terms of x, y, β0, and β1, since u = y − β0 − β1x:

E(u) = 0  ⟹  E(y − β0 − β1x) = 0
E(xu) = 0  ⟹  E[x(y − β0 − β1x)] = 0
Chapter 2  The Simple Regression Model

2.1 Definition of the Simple Regression Model

y = β0 + β1x + u

u: the error term, or disturbance
The linear relationship in SLR
Sample regression line, sample data points and the associated estimated error terms
[Figure: sample points (x1, y1), …, (x4, y4), the fitted line ŷ = β̂0 + β̂1x, and residuals û1, …, û4]
Alternate approach for deriving OLS
[Figure: densities f(y) at x1 and x2, each centered on the line E(y|x) = β0 + β1x]

〖Supplementary material 1〗 Regression and Galton
2.2 Deriving the Ordinary Least Squares Estimates
The basic idea of regression is to estimate the population parameters from a sample. Let {(xi, yi): i = 1, …, n} denote a random sample of size n from the population. For each observation in this sample, it will be the case that yi = β0 + β1xi + ui, i = 1, 2, …, n.
The method of moments approach to estimation imposes the population moment restrictions on the sample moments. What does this mean? Recall that for E(X), the mean of a population distribution, a sample estimator of E(X) is simply the arithmetic mean of the sample.
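A minimal sketch of this idea (with hypothetical data): the sample analog of a population moment such as E(X) or E(X²) is the corresponding sample average.

```python
# Sample analogs of population moments (illustrative data).
data = [2.0, 4.0, 6.0, 8.0]
n = len(data)

mean_hat = sum(data) / n                          # sample analog of E(X)
second_moment_hat = sum(v * v for v in data) / n  # sample analog of E(X^2)
print(mean_hat, second_moment_hat)  # 5.0 30.0
```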
We want to choose values of the parameters that ensure that the sample versions of our moment restrictions hold. The sample versions are as follows:
$$n^{-1}\sum_{i=1}^{n}\bigl(y_i - \hat\beta_0 - \hat\beta_1 x_i\bigr) = 0, \qquad n^{-1}\sum_{i=1}^{n} x_i\bigl(y_i - \hat\beta_0 - \hat\beta_1 x_i\bigr) = 0.$$
A Simple Assumption
【ASSUMPTION】The average value of u, the error term, in the population is zero. That is, E(u) = 0
Zero Conditional Mean

We need to make a crucial assumption about how u and x are related. We want it to be the case that knowing something about x does not give us any information about u, so that they are completely unrelated. That is, E(u|x) = E(u) = 0, which implies E(y|x) = β0 + β1x.
Derivation of OLS (cont.)
To derive the OLS estimates we need to realize that our main assumption of E(u|x) = E(u) = 0 also implies that Cov(x,u) = E(xu) = 0
Given the intuitive idea of fitting a line, we can set up a formal minimization problem: choose the parameters to minimize the sum of squared residuals
$$Q(\hat\beta_0, \hat\beta_1) = \sum_{i=1}^{n}\bigl(y_i - \hat\beta_0 - \hat\beta_1 x_i\bigr)^2.$$
More OLS
Alternative approach to derivation
$$\hat y = \hat\beta_0 + \hat\beta_1 x$$
Intuitively, OLS fits a line through the sample points such that the sum of squared residuals is as small as possible, hence the term least squares. The residual, û, is an estimate of the error term, u, and is the difference between the sample point and the fitted line (the sample regression function).
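A small numerical sketch (hypothetical data) of the residuals' defining properties: they sum to zero and are uncorrelated with x in the sample, which are exactly the two sample moment conditions.

```python
# Residuals from an OLS fit satisfy the two sample moment conditions
# (illustrative data, not from the slides).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]
n = len(x)

xbar, ybar = sum(x) / n, sum(y) / n
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
     / sum((xi - xbar) ** 2 for xi in x)
b0 = ybar - b1 * xbar

fitted = [b0 + b1 * xi for xi in x]
resid = [yi - fi for yi, fi in zip(y, fitted)]  # u-hat_i = y_i - y-hat_i

print(abs(sum(resid)) < 1e-12)                                # True
print(abs(sum(xi * ri for xi, ri in zip(x, resid))) < 1e-12)  # True
```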
This is not a restrictive assumption, since we can always use β0 to normalize E(u) to 0. y = β0 + β1 x + u
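A one-line sketch of why this normalization is harmless: any nonzero mean of u can be absorbed into the intercept.

```latex
% If E(u) = \alpha_0, rewrite the model with a recentered error term:
y = (\beta_0 + \alpha_0) + \beta_1 x + (u - \alpha_0)
  = \beta_0' + \beta_1 x + u', \qquad E(u') = 0.
```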
E(y|x) as a linear function of x, where for any x the distribution of y is centered about E(y|x)
$$\hat\beta_1 = \frac{\sum_{i=1}^{n} (x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^{n} (x_i - \bar x)^2}; \qquad \hat\beta_0 = \bar y - \hat\beta_1 \bar x,$$
provided that $\sum_{i=1}^{n} (x_i - \bar x)^2 > 0$.
The slope estimate is the sample covariance between x and y divided by the sample variance of x. If x and y are positively correlated, the slope will be positive; if they are negatively correlated, the slope will be negative. We only need x to vary in our sample.
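This can be checked directly (with illustrative data): computing the slope as sample covariance over sample variance gives the same number as the summation formula, since the 1/(n−1) factors cancel.

```python
# Slope as sample covariance divided by sample variance (illustrative data).
x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 4.0, 8.0]
n = len(x)

xbar, ybar = sum(x) / n, sum(y) / n
cov_xy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / (n - 1)
var_x = sum((xi - xbar) ** 2 for xi in x) / (n - 1)

slope_cov = cov_xy / var_x
slope_sum = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
            / sum((xi - xbar) ** 2 for xi in x)
print(abs(slope_cov - slope_sum) < 1e-12)  # True
```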
The two sample moment conditions are
$$n^{-1}\sum_{i=1}^{n}\bigl(y_i - \hat\beta_0 - \hat\beta_1 x_i\bigr) = 0, \qquad n^{-1}\sum_{i=1}^{n} x_i\bigl(y_i - \hat\beta_0 - \hat\beta_1 x_i\bigr) = 0.$$
Derivation of OLS (cont.)
Given the definition of a sample mean, and properties of summation, we can rewrite the first condition as
$$\bar y = \hat\beta_0 + \hat\beta_1 \bar x.$$
The simple linear regression model, y = β0 + β1x + u, addresses the question of the functional relationship between y and x. If the other factors in u are held fixed, so that the change in u is zero (Δu = 0), then x has a linear effect on y: Δy = β1Δx.
Define the sum of squared residuals as a function of the candidate coefficients:
$$Q(\hat\beta_0, \hat\beta_1) = \sum_{i=1}^{n}\bigl(y_i - \hat\beta_0 - \hat\beta_1 x_i\bigr)^2.$$

Setting the partial derivatives to zero gives the first-order conditions
$$\frac{\partial Q}{\partial \hat\beta_0} = -2\sum_{i=1}^{n}\bigl(y_i - \hat\beta_0 - \hat\beta_1 x_i\bigr) = 0,$$
$$\frac{\partial Q}{\partial \hat\beta_1} = -2\sum_{i=1}^{n} x_i\bigl(y_i - \hat\beta_0 - \hat\beta_1 x_i\bigr) = 0.$$

See Appendix 2A on pp. 66-67 for the detail of the derivation.