PKU Summer Course "Regression Analysis" (Linear-Regression-Analysis), Lecture Notes 1
Chapter 9: Regression Analysis

Chapter 11
Regression and Correlation
Techniques that are used to establish whether there is a mathematical relationship between two or more variables, so that the behavior of one variable can be used to predict the behavior of others. Applicable to “Variables” data only.
Simple Linear Regression

A simple linear relationship can be described mathematically by

Y = mX + b

where m is the slope and b is the intercept (the value of Y where the line crosses the Y axis, at X = 0).

[Figure: a line through the points (4, 3) and (10, 6), with the rise marked on the Y axis and the run on the X axis.]

slope = rise / run = (6 - 3) / (10 - 4) = 1/2
intercept = 1
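As a quick check of the rise/run arithmetic, here is a minimal Python sketch using the two points implied by the example, (4, 3) and (10, 6):

    # Slope and intercept of the line through two points.
    x1, y1 = 4, 3
    x2, y2 = 10, 6

    m = (y2 - y1) / (x2 - x1)   # rise / run = 3 / 6 = 0.5
    b = y1 - m * x1             # intercept = 3 - 0.5 * 4 = 1

    print(f"Y = {m}X + {b}")    # Y = 0.5X + 1.0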
Step 1: Scatter plot

[Scatter plot of Rent (vertical axis, 500 to 2500) against Size (horizontal axis, 500 to 2100).]

The scatter plot suggests that there is a 'linear' relationship between Rent and Size.
PKU Summer Course "Regression Analysis" (Linear Regression Analysis), Lecture Notes 1

Class 1: Expectations, variances, and basics of estimation. Basics of matrix (1)

I. Organizational Matters

(1) Course requirements:
1) Exercises: There will be seven (7) exercises, the last of which is optional. Each exercise will be graded on a scale of 0-10. In addition to the graded exercise, an answer handout will be given to you in lab sections.
2) Examination: There will be one in-class, open-book examination.
(2) Computer software: Stata

II. Teaching Strategies

(1) Emphasis on conceptual understanding.
Yes, we will deal with mathematical formulas, actually a lot of mathematical formulas. But I do not want you to memorize them. What I hope you will do is to understand the logic behind the mathematical formulas.

(2) Emphasis on hands-on research experience.
Yes, we will use computers for most of our work. But I do not want you to become a computer programmer. Many people think they know statistics once they know how to run a statistical package. This is wrong. Doing statistics is more than running computer programs. What I will emphasize is to use computer programs to your advantage in research settings. Computer programs are like automobiles. The best automobile is useless unless someone drives it. You will be the driver of statistical computer programs.

(3) Emphasis on student-instructor communication.
I happen to believe in students' judgment about their own education. Even though I will be ultimately responsible if the class should not go well, I hope that you will feel part of the class and contribute to the quality of the course. If you have questions, do not hesitate to ask in class. If you have suggestions, please come forward with them. The class is as much yours as mine.

Now let us get to the real business.

III(1). Expectation and Variance

Random Variable: A random variable is a variable whose numerical value is determined by the outcome of a random trial. Two properties: random and variable.
A random variable assigns numeric values to uncertain outcomes. In a common language, "give a number". For example, income can be a random variable. There are many ways to do it. You can use the actual dollar amounts. In this case, you have a continuous random variable. Or you can use levels of income, such as high, median, and low. In this case, you have an ordinal random variable [1=high, 2=median, 3=low]. Or if you are interested in the issue of poverty, you can have a dichotomous variable: 1=in poverty, 0=not in poverty.
In sum, the mapping of numeric values to outcomes of events in this way is the essence of a random variable.

Probability Distribution: The probability distribution for a discrete random variable X associates with each of the distinct outcomes x_i (i = 1, 2, ..., k) a probability P(X = x_i).

Cumulative Probability Distribution: The cumulative probability distribution for a discrete random variable X provides the cumulative probabilities P(X ≤ x) for all values x.

Expected Value of Random Variable: The expected value of a discrete random variable X is denoted by E{X} and defined:

E{X} = Σ_{i=1..k} x_i P(x_i)

where P(x_i) denotes P(X = x_i). The notation E{ } (read "expectation of") is called the expectation operator.
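To make the definition concrete, here is a minimal Python sketch computing E{X} and (anticipating the next definition) V{X} for a discrete random variable; the probabilities attached to the three-level income coding are made up for illustration:

    # E{X} = sum_i x_i * P(x_i) for a discrete random variable.
    # Hypothetical distribution over the ordinal income coding
    # [1=high, 2=median, 3=low] from the text.
    outcomes = [1, 2, 3]
    probs    = [0.2, 0.5, 0.3]   # must sum to 1

    EX = sum(x * p for x, p in zip(outcomes, probs))
    VX = sum((x - EX) ** 2 * p for x, p in zip(outcomes, probs))

    print(EX)  # 2.1
    print(VX)  # 0.49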
In common language, expectation is the mean. But the difference is that expectation is a concept for the entire population that you never observe. It is the result of an infinite number of repetitions. For example, if you toss a coin, the proportion of tails should be .5 in the limit. Or the expectation is .5. Most of the time you do not get the exact .5, but a number close to it.

Conditional Expectation: It is the mean of a variable conditional on the value of another random variable. Note the notation: E(Y|X).
In 1996, per-capita average wages in three Chinese cities were (in RMB): Shanghai: 3,778; Wuhan: 1,709; Xi'an: 1,155.

Variance of Random Variable: The variance of a discrete random variable X is denoted by V{X} and defined:

V{X} = Σ_{i=1..k} (x_i - E{X})² P(x_i)

where P(x_i) denotes P(X = x_i). The notation V{ } (read "variance of") is called the variance operator.
Since the variance of a random variable X is a weighted average of the squared deviations, (X - E{X})², it may be defined equivalently as an expected value: V{X} = E{(X - E{X})²}. An algebraically identical expression is: V{X} = E{X²} - (E{X})².

Standard Deviation of Random Variable: The positive square root of the variance of X is called the standard deviation of X and is denoted by σ{X}:

σ{X} = √V{X}

The notation σ{ } (read "standard deviation of") is called the standard deviation operator.

Standardized Random Variables: If X is a random variable with expected value E{X} and standard deviation σ{X}, then:

Y = (X - E{X}) / σ{X}

is known as the standardized form of random variable X.

Covariance: The covariance of two discrete random variables X and Y is denoted by Cov{X,Y} and defined:

Cov{X,Y} = Σ_i Σ_j (x_i - E{X})(y_j - E{Y}) P(x_i, y_j)

where P(x_i, y_j) denotes P(X = x_i ∩ Y = y_j). The notation Cov{ , } (read "covariance of") is called the covariance operator.
When X and Y are independent, Cov{X,Y} = 0.
Cov{X,Y} = E{(X - E{X})(Y - E{Y})}; Cov{X,Y} = E{XY} - E{X}E{Y}. (Variance is a special case of covariance.)

Coefficient of Correlation: The coefficient of correlation of two random variables X and Y is denoted by ρ{X,Y} (Greek rho) and defined:

ρ{X,Y} = Cov{X,Y} / (σ{X} σ{Y})

where σ{X} is the standard deviation of X, σ{Y} is the standard deviation of Y, and Cov is the covariance of X and Y.

Sum and Difference of Two Random Variables: If X and Y are two random variables, then the expected value and the variance of X + Y are as follows:
Expected Value: E{X+Y} = E{X} + E{Y};
Variance: V{X+Y} = V{X} + V{Y} + 2 Cov(X,Y).
If X and Y are two random variables, then the expected value and the variance of X - Y are as follows:
Expected Value: E{X-Y} = E{X} - E{Y};
Variance: V{X-Y} = V{X} + V{Y} - 2 Cov(X,Y).

Sum of More Than Two Independent Random Variables: If T = X_1 + X_2 + ... + X_s is the sum of s independent random variables, then the expected value and the variance of T are as follows:
Expected Value: E{T} = Σ_{i=1..s} E{X_i}; Variance: V{T} = Σ_{i=1..s} V{X_i}.
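A small numerical sketch (with made-up paired data, treated as equally likely outcomes) that checks the two covariance identities and the correlation formula above:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.0, 1.0, 4.0, 3.0])

    # Cov{X,Y} = E{(X - E{X})(Y - E{Y})} = E{XY} - E{X}E{Y}
    cov1 = np.mean((x - x.mean()) * (y - y.mean()))
    cov2 = np.mean(x * y) - x.mean() * y.mean()
    assert np.isclose(cov1, cov2)

    # rho{X,Y} = Cov{X,Y} / (sigma{X} sigma{Y}), population sigmas
    rho = cov1 / (x.std() * y.std())
    print(cov1, rho)  # 0.75, 0.6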
III(2). Properties of Expectations and Covariances:

(1) Properties of Expectations under Simple Algebraic Operations

E(a + bX) = a + bE(X)

This says that a linear transformation is retained after taking an expectation.
X* = a + bX is called rescaling: a is the location parameter, b is the scale parameter.
Special cases are:
For a constant: E(a) = a.
For a different scale: E(bX) = bE(X), e.g., transforming the scale of dollars into the scale of cents.

(2) Properties of Variances under Simple Algebraic Operations

V(a + bX) = b² V(X)

This says two things: (1) Adding a constant to a variable does not change the variance of the variable; reason: the definition of variance controls for the mean of the variable [graphics]. (2) Multiplying a variable by a constant changes the variance of the variable by a factor of the constant squared; this is easy to prove, and I will leave it to you. This is the reason why we often use the standard deviation instead of the variance: σ_x = √(σ_x²) is of the same scale as x.

(3) Properties of Covariance under Simple Algebraic Operations

Cov(a + bX, c + dY) = bd Cov(X,Y).

Again, only scale matters; location does not.

(4) Properties of Correlation under Simple Algebraic Operations
I will leave this as part of your first exercise:

ρ(a + bX, c + dY) = ρ(X,Y)

That is, neither scale nor location affects correlation.

IV: Basics of matrix.

1. Definitions

A. Matrices
Today, I would like to introduce the basics of matrix algebra. A matrix is a rectangular array of elements arranged in rows and columns:

X = [ x_11  x_12 ... x_1m ]
    [ x_21  ...          ]
    [ ...                ]
    [ x_n1  ...     x_nm ]

Index: row index, column index.
Dimension: number of rows x number of columns (n x m).
Elements: are denoted in small letters with subscripts.
An example is the spreadsheet that records the grades for your homework in the following way:

Name  1st  2nd  ...  6th
A      7   10   ...   9
B      6    5   ...   8
...   ...  ...  ...  ...
Z      8    9   ...   8

This is a matrix.
Notation: I will use capital letters for matrices.

B. Vectors
Vectors are special cases of matrices. If the dimension of a matrix is n x 1, it is a column vector:

x = [ x_1 ; x_2 ; ... ; x_n ]

If the dimension is 1 x m, it is a row vector:

y' = [ y_1  y_2  ...  y_m ]

Notation: small underlined letters for column vectors (in lecture notes).

C. Transpose
The transpose of a matrix is another matrix with the positions of rows and columns exchanged symmetrically. For example: if X (n x m) has element x_ij in row i and column j, then X' (m x n) has element x_ji there. It is easy to see that a row vector and a column vector are transposes of each other.

2. Matrix Addition and Subtraction
Addition and subtraction of two matrices are possible only when the matrices have the same dimension. In this case, addition or subtraction of matrices forms another matrix whose elements consist of the sum, or difference, of the corresponding elements of the two matrices:

X ± Y = [ x_11 ± y_11  ...  x_1m ± y_1m ; ... ; x_n1 ± y_n1  ...  x_nm ± y_nm ]

Examples:

A(2x2) = [ 1 2 ; 3 4 ],  B(2x2) = [ 1 1 ; 1 1 ],  C = A + B = [ 2 3 ; 4 5 ]

3. Matrix Multiplication

A. Multiplication of a scalar and a matrix
Multiplying a matrix by a scalar is equivalent to multiplying each element of the matrix by the scalar: cX = [ c x_ij ].

B. Multiplication of a Matrix by a Matrix (Inner Product)
The inner product of matrix X (a x b) and matrix Y (c x d) exists if b is equal to c. The inner product is a new matrix with the dimension (a x d). The element of the new matrix Z is:

z_ij = Σ_{k=1..c} x_ik y_kj

Note that XY and YX are very different. Very often, only one of the inner products (XY and YX) exists.
Example:

A(2x2) = [ 1 2 ; 3 4 ],  B(2x1) = [ 0 ; 1 ]

BA does not exist. AB has the dimension 2x1:

AB = [ 2 ; 4 ]

Other examples:
If A(3x5) and B(5x3), what is the dimension of AB? (3x3)
If A(3x5) and B(5x3), what is the dimension of BA? (5x5)
If A(1x5) and B(5x1), what is the dimension of AB? (1x1, a scalar)
If A(3x5) and B(5x1), what is the dimension of BA? (nonexistent)

4. Special Matrices

A. Square Matrix: A(n x n).

B. Symmetric Matrix
A special case of square matrix. For A(n x n): a_ij = a_ji for all i, j. A' = A.

C. Diagonal Matrix
A special case of symmetric matrix:

X = [ x_11 0 ... 0 ; 0 x_22 ... ; ... ; 0 ... x_nn ]

D. Scalar Matrix

cI = [ c 0 ... 0 ; 0 c ... ; ... ; 0 ... c ]

E. Identity Matrix
A special case of scalar matrix:

I = [ 1 0 ... 0 ; 0 1 ... ; ... ; 0 ... 1 ]

Important: for A(r x r), AI = IA = A.
F. Null (Zero) Matrix
Another special case of scalar matrix:

O = [ 0 0 ... 0 ; ... ; 0 ... 0 ]

From A to E or F, the cases are nested from being more general towards being more specific.

G. Idempotent Matrix
Let A be a square symmetric matrix. A is idempotent if A = A² = A³ = ...

H. Vectors and Matrices with elements being one
A column vector with all elements being 1:

1(r x 1) = [ 1 ; 1 ; ... ; 1 ]

A matrix with all elements being 1:

J(r x r) = [ 1 1 ... 1 ; ... ; 1 1 ... 1 ]

Examples: let 1 be a vector of n 1's, 1(n x 1). Then 1'1 = n (1x1) and 11' = J(n x n).

I. Zero Vector
A zero vector is 0(r x 1) = [ 0 ; 0 ; ... ; 0 ].

5. Rank of a Matrix
The maximum number of linearly independent rows is equal to the maximum number of linearly independent columns. This unique number is defined to be the rank of the matrix.
For example:

B = [ 1 2 3 4 ; 1 0 1 1 ; 2 2 4 5 ]

Because row 3 = row 1 + row 2, the third row is linearly dependent on rows 1 and 2. The maximum number of independent rows is 2. Let us have a new matrix:

B* = [ 1 2 3 4 ; 1 0 1 1 ]

Singularity: if a square matrix A of dimension (n x n) has rank n, the matrix is nonsingular. If the rank is less than n, the matrix is then singular.
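These operations map directly onto numpy. A minimal sketch reusing the matrices from the examples above (the all-ones matrix plays the role of B in the addition example, the column vector the role of B in the multiplication example):

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])                 # 2 x 2
    B = np.array([[0],
                  [1]])                    # 2 x 1, a column vector

    print(A + np.ones((2, 2), dtype=int))  # addition: [[2, 3], [4, 5]]
    print(A.T)                             # transpose: rows and columns exchanged
    print(A @ B)                           # inner product, 2 x 1: [[2], [4]]
    # B @ A does not exist: (2x1)(2x2) do not conform; numpy raises ValueError.

    # Rank: row 3 = row 1 + row 2, so the rank is 2. A square matrix with
    # rank < n is singular (it has no inverse).
    C = np.array([[1, 2, 3, 4],
                  [1, 0, 1, 1],
                  [2, 2, 4, 5]])
    print(np.linalg.matrix_rank(C))        # 2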
Basic Ideas of Regression Analysis and Their Preliminary Applications

With this method, the combined effect of all the predictor variables can be computed.
Mathematically, the squares of all the individual effects (each observed value minus the overall mean) are added up; that is, the total deviation is measured by Σ_i (y_i - ȳ)².
2. The scatter plot shows that height and weight have a fairly good linear correlation, so a linear regression equation can be used to characterize the relationship between them.
3. The scatter plot also shows that the sample points are scattered near a certain straight line rather than lying exactly on it, so the relationship cannot be described by the deterministic linear function y = bx + a.
We can instead represent it with the following linear regression model:
y = bx + a + e, where a and b are the unknown parameters of the model and e is called the random error.
1. Scatter plot;
2. Regression equation: ŷ = 0.849x - 85.712. For a female college student 172 cm tall, the predicted weight is ŷ = 0.849 × 172 - 85.712 = 60.316 (kg).

Case 1: Height and weight of female college students

Example 1. Eight female college students were randomly selected from a university; their height and weight data are shown in Table 1-1.

Table 1-1
Student no.:   1    2    3    4    5    6    7    8
Height/cm:   165  165  157  170  175  165  155  170
Weight/kg:    48   57   50   54   64   61   43   59

Find the regression equation for predicting a female college student's weight from her height, and predict the weight of a student who is 172 cm tall. (Is the weight of a female college student 172 cm tall necessarily 60.316 kg? If not, can you explain why?)

Solution: 1. Take height as the explanatory variable x and weight as the response variable y, and draw the scatter plot.
Properties of R²:
1. It is the proportion of the regression sum of squares in the total deviation sum of squares.
2. It reflects how well the regression line fits the data.
3. It takes values in [0, 1].
4. The closer R² is to 1, the better the regression equation fits the data; the closer R² is to 0, the worse the fit.
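A sketch of R² as the ratio of the regression sum of squares to the total sum of squares, continuing the height/weight example above (here R² is about 0.64):

    import numpy as np

    x = np.array([165, 165, 157, 170, 175, 165, 155, 170])
    y = np.array([48, 57, 50, 54, 64, 61, 43, 59])

    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    a = y.mean() - b * x.mean()
    y_hat = a + b * x

    SST = np.sum((y - y.mean()) ** 2)       # total sum of squares
    SSR = np.sum((y_hat - y.mean()) ** 2)   # regression sum of squares
    print(SSR / SST)                        # approx. 0.64: about 64% of the
                                            # variation in weight is explained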
Chapter 5: Multiple Linear Regression Analysis

For a sample of n observations, the model can be written as:

Y_1 = β_1 + β_2 X_21 + β_3 X_31 + ... + β_k X_k1 + u_1
Y_2 = β_1 + β_2 X_22 + β_3 X_32 + ... + β_k X_k2 + u_2
...
Y_n = β_1 + β_2 X_2n + β_3 X_3n + ... + β_k X_kn + u_n
... the correlation coefficient, i.e., the overall correlation coefficient when all the independent variables enter the regression; R_{m·xi} is the multiple correlation coefficient with x_i removed. The square of the partial correlation coefficient is therefore the part of the overall fit that remains after the combined fit of the other variables has been deducted.
The Multiple Linear Regression Model
• The multiple linear regression model and the classical assumptions
• Estimation of the multiple linear regression model
• Tests of the multiple linear regression model

§5.1 The Multiple Linear Regression Model and the Classical Assumptions
β_j measures the effect of a one-unit change in the j-th explanatory variable on the mean of the dependent variable.
Multiple linear regression
"Linear" refers to the regression coefficients: the model must be linear in the coefficients, while it may be linear or nonlinear in the variables. For example, the production function

Y = A L^α K^β u

becomes, after taking natural logarithms,

ln Y = ln A + α ln L + β ln K + ln u
The multiple population regression function
The population conditional mean of Y is expressed as a function of several explanatory variables.

Since X'e = 0, the normal equations are:

X'X β̂ = X'Y
OLS estimators
From the normal equations X'X β̂ = X'Y, and since X'X (k × k) is a full-rank matrix whose inverse exists, the multiple-regression estimator is

β̂ = (X'X)⁻¹ X'Y

In the two-regressor case (with lowercase letters denoting deviations from sample means):

β̂_1 = Ȳ - β̂_2 X̄_2 - β̂_3 X̄_3
β̂_2 = [ (Σ y_i x_2i)(Σ x_3i²) - (Σ y_i x_3i)(Σ x_2i x_3i) ] / [ (Σ x_2i²)(Σ x_3i²) - (Σ x_2i x_3i)² ]
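The closed-form estimator can be checked numerically. A minimal numpy sketch with made-up data for a two-regressor model (all values are illustrative):

    import numpy as np

    # Made-up data for Y = beta1 + beta2*X2 + beta3*X3 + u.
    rng = np.random.default_rng(0)
    n = 100
    X2 = rng.normal(size=n)
    X3 = rng.normal(size=n)
    u = rng.normal(scale=0.5, size=n)
    Y = 1.0 + 2.0 * X2 - 1.5 * X3 + u

    X = np.column_stack([np.ones(n), X2, X3])   # column of ones for beta1

    # Normal equations: X'X beta_hat = X'Y  =>  beta_hat = (X'X)^{-1} X'Y
    beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
    print(beta_hat)   # close to [1.0, 2.0, -1.5]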
Multiple Linear Regression Analysis (lecture slides)

Parameter estimates from the fitted model (DF = degrees of freedom; Parameter Estimate = partial regression coefficient):

DF   Parameter Estimate   Standard Error   t Value
22         5.94327             2.82859       2.10
22         0.14245             0.36565       0.39
22         0.35147             0.20420       1.72
22        -0.27059             0.12139      -2.23
When there are many independent variables, even if some of them contribute little to explaining the variation in the dependent variable Y, the coefficient of determination will only increase, never decrease, as more independent variables are added to the regression equation. The adjusted coefficient of determination is therefore computed to remove the influence of the number of independent variables. The formula (with n the sample size and m the number of independent variables) is:

R²_adj = 1 - (1 - R²)(n - 1)/(n - m - 1)
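A small sketch of this correction (the function name and the test values are mine, chosen to show that adding weak predictors can raise R² while lowering adjusted R²):

    def adjusted_r2(r2: float, n: int, m: int) -> float:
        """Adjusted coefficient of determination:
        R2_adj = 1 - (1 - R2) * (n - 1) / (n - m - 1)."""
        return 1 - (1 - r2) * (n - 1) / (n - m - 1)

    print(adjusted_r2(0.64, 26, 3))   # approx. 0.591
    print(adjusted_r2(0.65, 26, 6))   # approx. 0.539, despite the higher R^2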
Multivariate linear regression
Concept: multiple linear regression analysis studies how a group of independent variables directly affects a single dependent variable.
An independent variable is a variable that varies independently, denoted by the vector X; a dependent variable is a variable that is not independent and is influenced by other variables, denoted by the vector Y. Since the model involves only one dependent variable, multiple linear regression analysis is also called univariate linear regression analysis.
In the system of normal equations:

l_ij = l_ji = Σ(X_i - X̄_i)(X_j - X̄_j) = Σ X_i X_j - (Σ X_i)(Σ X_j)/n
l_iy = Σ(X_i - X̄_i)(Y - Ȳ) = Σ X_i Y - (Σ X_i)(Σ Y)/n

Constant term: b_0 = Ȳ - b_1 X̄_1 - b_2 X̄_2 - ... - b_m X̄_m
Regression Analysis

Brian Klinkenberg
Geob 479
January 13
What if there are several factors affecting the dependent variable?
For example, think of the price of a home as a dependent variable. Several factors contribute to the price of a home. Among them are the size (ft²), the number of bedrooms, the number of bathrooms, the age of the home, whether it has both central heat and air conditioning, and, of course, location (and all that that entails).
• Ordinary Least Squares (OLS) is the best known of all regression techniques. It is also the proper starting point for all spatial regression analyses. It provides a global model of the variable or process you are trying to understand or predict (obesity/biodiversity loss); it creates a single regression equation to represent that process.
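As a sketch of the kind of single global equation OLS produces, here is a minimal numpy example with entirely made-up home-price data (all variable names and numbers are illustrative, not from the slides):

    import numpy as np

    # Hypothetical data: price (in $1000s) as a function of size (ft^2),
    # number of bedrooms, and age of the home.
    size     = np.array([1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450])
    bedrooms = np.array([3, 3, 3, 4, 2, 3, 4, 4])
    age      = np.array([20, 15, 18, 10, 30, 12, 5, 8])
    price    = np.array([245, 312, 279, 308, 199, 219, 405, 324])

    X = np.column_stack([np.ones(len(size)), size, bedrooms, age])
    coef, *_ = np.linalg.lstsq(X, price, rcond=None)
    print(coef)   # one global equation: an intercept and one slope per factor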
Regression Analysis (course slides)

Using t-tests or z-tests, test the significance of each parameter in the model to determine which parameters have a significant influence on the model.
Goodness-of-fit test
Using residual analysis, the R² value, and similar methods, test the goodness of fit of the model to assess whether it describes the data well.
Prediction with nonlinear regression models
The importance of prediction
Predictions from nonlinear regression models help us understand future trends and make decisions.
Steps of prediction
A linear regression model is a predictive model that describes a linear relationship between the dependent variable and the independent variables.
Formula of the linear regression model
Y = β0 + β1X1 + β2X2 + ... + βpXp + ε
Scope of the linear regression model
It applies when a linear relationship exists between the dependent variable and the independent variables.
Parameter estimation for the linear regression model
Least squares
Least squares is the most commonly used estimation method; it estimates the parameters by minimizing the squared error between the predicted and observed values.
Maximum likelihood estimation
Maximum likelihood is a probability-based estimation method; it estimates the parameters by maximizing the likelihood function.
Gradient descent
Gradient descent is an iterative optimization algorithm; it minimizes the loss function by repeatedly updating the parameters, as in the sketch below.
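A minimal sketch of gradient descent for simple linear regression (the data, learning rate, and iteration count are arbitrary illustrative choices):

    import numpy as np

    # Fit y = b0 + b1*x by gradient descent on the mean squared error.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

    b0, b1 = 0.0, 0.0
    lr = 0.01                              # learning rate
    for _ in range(20000):                 # iterate until (near) convergence
        err = b0 + b1 * x - y              # residuals under current parameters
        b0 -= lr * 2 * err.mean()          # gradient of MSE w.r.t. b0
        b1 -= lr * 2 * (err * x).mean()    # gradient of MSE w.r.t. b1

    print(b0, b1)   # close to the least-squares solution (approx. 0.15, 1.95)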
Hypothesis tests for the linear regression model
Linearity test
Tests whether a linear relationship exists between the independent variables and the dependent variable.
Parameter significance tests
Test whether each parameter in the model is significantly different from zero.
Residual analysis
Ridge regression and lasso regression
Use ridge regression, lasso regression, and similar methods to deal with multicollinearity.
4. Application Scenarios of Regression Analysis
Economics
Studying relationships among economic indicators, such as the relationship of GDP to consumption and investment.
Marketing
Predicting product sales, customer behavior, and so on, to help formulate marketing strategies.
Biostatistics
Studying relationships between biological characteristics and disease or health status.
"Regression Analysis" course materials, Liu Chao — Regression Analysis teaching syllabus (hep)

Overview of the regression analysis syllabus: the main content and features of the book, the main chapter headings, and the accompanying syllabus. Following the cognitive principle of moving from induction to deduction, the book balances mastery of statistical theory against the ability to apply it, and arranges the chapters accordingly.
The textbook not only presents the basic theory of regression analysis and concrete application techniques, but also, in keeping with how students learn, broadens their thinking by introducing frontier regression methods.
The textbook adopts a seven-elements-in-one arrangement of content — introductory examples, solution ideas, solution models, concepts, cases, exercises, and statistical software — which helps cultivate students' statistical thinking and statistical ability.
The book consists of 14 chapters: introduction, simple linear regression, multiple linear regression, model diagnostics, problems with the independent variables, problems with the errors, model selection, shrinkage methods, nonlinear regression, generalized linear models, nonparametric regression, regression models from machine learning, artificial neural networks, and missing data.
Chapter 1 gives an overview of the subject matter of regression analysis and the modeling process. Chapters 2 and 3 cover parameter estimation, significance testing, and applications of simple and multiple linear regression. Chapter 4 covers regression diagnostics, giving remedies for the various problems with errors and observations that violate the basic assumptions of the regression model. Chapter 5 covers problems that may arise with the independent variables — measurement error, changes of scale, and collinearity — and how to handle them. Chapter 6 covers problems that may arise with the errors, including generalized least squares and weighted least squares. Chapter 7 covers model selection, both test-based and criterion-based methods. Chapter 8 covers shrinkage methods of estimation, including ridge regression, the lasso, the adaptive lasso, principal components, and partial least squares. Chapter 9 covers nonlinear regression, including transformations of the dependent and independent variables, polynomial regression, piecewise regression, and intrinsically nonlinear regression. Chapter 10 covers generalized linear models, including logistic regression, softmax regression, and Poisson regression. Chapter 11 covers nonparametric regression methods, including kernel estimation, local regression, splines, wavelets, nonparametric multiple regression, and additive models. Chapter 12 covers machine-learning methods applicable to regression problems, including decision trees, random forests, and AdaBoost. Chapter 13 covers applications of artificial neural networks in regression analysis. Chapter 14 covers common missing-data problems and treatments, including deletion, single imputation, and multiple imputation.
Regression Analysis Study Slides

To find the optimal parameter combination, grid search can be used to search the parameter space exhaustively or randomly, selecting the best parameters by comparing predictive performance under the different combinations.
Hypothesis testing and evaluation for nonlinear regression models
Hypothesis testing
As with linear regression models, nonlinear regression models require hypothesis tests to check whether certain statistical assumptions are satisfied, such as independence and homoscedasticity of the error terms.
3. Maximum likelihood method
Estimates the parameters from the maximum of the likelihood function; it allows parameter estimation and model selection to be carried out at the same time.
Hypothesis testing and evaluation for multiple regression models
Linearity test
Tests whether the linear relationship of the regression model holds, usually with an F-test or a t-test.
Heteroscedasticity test
Tests the residuals of the regression model for heteroscedasticity; common methods include graphical checks, White's test, and the Goldfeld-Quandt test.
Multicollinearity test
Tests for multicollinearity among the independent variables; common tools include the VIF and condition indices.
Model evaluation metrics
Include R², adjusted R², AIC, and BIC, used to evaluate the model's goodness of fit and predictive ability (a sketch follows below).
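A sketch computing these metrics from a fitted model's residuals. The Gaussian-likelihood forms of AIC and BIC are used here, and k counts the estimated coefficients including the intercept — a common convention, stated as an assumption:

    import numpy as np

    def fit_metrics(y, y_hat, k):
        """R^2, adjusted R^2, and Gaussian AIC/BIC for a fitted regression.
        k = number of estimated coefficients (including the intercept)."""
        n = len(y)
        sse = np.sum((y - y_hat) ** 2)
        sst = np.sum((y - np.mean(y)) ** 2)
        r2 = 1 - sse / sst
        adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k)
        aic = n * np.log(sse / n) + 2 * k
        bic = n * np.log(sse / n) + k * np.log(n)
        return r2, adj_r2, aic, bic

    # Toy check with made-up fitted values:
    y = np.array([3.0, 4.5, 6.1, 7.9, 10.2])
    y_hat = np.array([3.2, 4.6, 6.0, 8.0, 9.9])
    print(fit_metrics(y, y_hat, k=2))   # R^2 approx. 0.995 for these values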
5. Practical Applications of Regression Analysis
Case 1: Stock price prediction
Summary
Build a regression model from historical data to predict future movements in stock prices.
Details
Using historical stock market data such as opening prices, closing prices, and trading volume, build a model by regression analysis to predict future stock price movements.
(Nonlinear regression models) describe a nonlinear relationship between the dependent and independent variables, accommodating it through transformations or other methods.
3. Mixed-effects regression models
Consider fixed effects and random effects simultaneously; suitable for panel data or repeated-measures data.
Parameter estimation for multiple regression models
1. Least squares
Estimates the parameters by minimizing the residual sum of squares; the most commonly used estimation method.
2. Weighted least squares
Suitable for heteroscedastic data; adjusts the estimates by giving different observations different weights.
PKU Summer Course "Regression Analysis" (Linear-Regression-Analysis), Lecture Notes PKU7

Class 7: Path analysis and multicollinearity

I. Standardized Coefficients: Transformations

If the true model is

y_i = β_0 + β_1 x_1i + ... + β_{p-1} x_(p-1)i + ε_i        (1)

we can make the following transformation:

y_i* = (y_i - ȳ)/s_y,   x_ik* = (x_ik - x̄_k)/s_xk,

where s_y and s_xk are the sample standard deviations of y and x_k, respectively.

Thus, standardization does two things: centering and rescaling. Centering is to normalize the location of a variable so that it has a mean of zero. Rescaling is to normalize a variable to have a variance of unity. Location of a measurement: where is zero? Scale of a measurement: how big is one unit? Both the location and the scale of a variable can be arbitrary to begin with and need to be normalized. Examples: temperature, IQ, emotion. Some other variables have natural location and scale, such as the number of children and the number of days.

Standardized regression: a regression with all variables standardized:

y_i* = β_1* x_1i* + ... + β_{p-1}* x_(p-1)i* + ε_i*        (2)

Relationship between (1) and (2): average equation (1) and then take the difference between (1) and the averaged (1). This is equivalent to centering the variables in (1) (note that ε̄ = 0):

y_i - Ȳ = β_1 (x_1i - X̄_1) + ... + β_{p-1} (x_(p-1)i - X̄_{p-1}) + ε_i        (3)

Note: Ȳ = β_0 + β_1 X̄_1 + ... + β_{p-1} X̄_{p-1}.

Divide (3) by s_y:

(y_i - Ȳ)/s_y = (s_x1/s_y) β_1 [(x_1i - X̄_1)/s_x1] + ... + (s_x(p-1)/s_y) β_{p-1} [(x_(p-1)i - X̄_{p-1})/s_x(p-1)] + ε_i/s_y
             = β_1* x_1i* + ... + β_{p-1}* x_(p-1)i* + ε_i*

That is,

β_k* = β_k s_xk / s_y.

When the variables are standardized, X'X corresponds to r_xx (the correlation matrix of the x's), X'y corresponds to r_xy, and

b = (X'X)⁻¹ X'y = r_xx⁻¹ r_xy.

In the older days of sociology (1960s and 1970s), many studies published correlation matrices so that their regression results could be easily replicated. This is possible because correlation matrices contain all the sufficient statistics for path analysis.

II. Why Standardized Coefficients?

A. Ease of computation.
B. Boundaries of estimates: -1 to 1.
C. Standardized scale in comparison.

Which is better: standardized or unstandardized? Unstandardized coefficients are generally better because they tell you more about the data and about changes in real units. Rules of thumb:
A. Usually it is not a good idea to report standardized coefficients.
B. Almost always report unstandardized coefficients (if you can).
C. Read standardized coefficients on your own.
D. You can interpret unstandardized coefficients in terms of standard deviations (homework).
E. If only a correlation matrix is available, then only standardized coefficients can be estimated (LISREL).
F. In an analysis comparing multiple populations, whether to use standardized or unstandardized coefficients is consequential. In this case, theoretical/conceptual considerations should dictate the decision.

III. Decomposition of Total Effects

A. Difference between reduced-form equations and structural equations
Everything I am now discussing is about systems of equations. What are systems of equations? Systems of equations are equations with different dependent variables. For example, we talked about auxiliary regressions: one independent variable is turned into the new dependent variable.
1. Exogenous variables: variables that are used only as independent variables in all equations.
2. Endogenous variables: variables that are used as dependent variables in some equations and may be used as independent variables in other equations.

B. Structural Equations versus Reduced Forms
1. Structural Equations
Structural equations are theoretically derived equations that often have endogenous variables as independent variables.
2. Reduced Forms
Reduced-form equations are equations in which all independent variables are exogenous variables. In other words, in reduced-form equations we purposely ignore intermediate (or relevant) variables.

C. Types of Effects
Total effects can be decomposed into two parts: direct effects and indirect effects. A famous example is drawn from the Blau and Duncan model of status attainment:

[Path diagram of the Blau-Duncan model: father's education (V) and father's occupation (X) affect respondent's education (U); U and X affect first job (W); U, W, and X affect occupation in 1962 (Y). The path coefficients shown include .516, .310, .394, .859, .818, .753, .115, .440, .281, .279, and .224.]

1. Total Effect
A total effect can be defined as the effect in the reduced-form equations. In the example, what are the total effects of father's education and father's occupation on son's occupation? You run a regression of son's occupation on father's education and father's occupation. The estimated coefficients are total effects.
2. Direct Effect
Direct effects can be defined as the effects in the structural equations. In our example, the direct effect of father's education is zero by assumption, which is subject to testing. The direct effect of father's occupation on son's occupation is estimated in the model regressing son's occupation on son's education and father's occupation.
3. Indirect Effect
The indirect effect works through an intermediate variable. It is usually the product of two coefficients. In our example, the indirect effect of father's education on son's occupation is the product of the effect of father's education on son's education and the effect of son's education on son's occupation. This is the same as the auxiliary regression before. The total effect is the sum of the direct effect and the indirect effect. This result is consistent with our earlier discussion of omitted variables.
How do we calculate the total effect? It should be the direct effect plus the indirect effect. It has the same formula as the one we discussed in connection with auxiliary regressions:

Total effect = Direct effect + Indirect effect = α_k + β_{p-1} τ_k

IV. Problem of Multicollinearity

A. Assumption about the singularity of X'X
Recall that the first assumption for the least squares estimator is that X'X is nonsingular. What is meant by that is that none of the columns in the X matrix is a linear combination of other columns in X. Why do we need the assumption? Because without the assumption, we cannot take the inverse of X'X for b = (X'X)⁻¹ X'y.
Why do we use the word "multicollinearity" instead of collinearity? [joke] "multi" is a trendy word: multimillionaires, multi-national, and multiculturalism. Answer: linear combinations of several variables. We cannot determine whether there is a problem of collinearity from correlations.

B. Examples of Perfect Multicollinearity
1. If X includes 1, we cannot include other variables that do not change across all observations.
2. We cannot include parents' total education after we include mother's and father's education in the model separately.

C. Identification Problem
Contrary to common misunderstandings, multicollinearity does not cause bias. It is an identification problem.

D. Empirical Under-identification
Even though the model is identified theoretically, the data may be so thin that it is under-identified empirically.
Rather than "yes-no," under-identification is a matter of degree. Thus, we would like to have a way to quantify the degree of under-identification.
Root of the problem: less information.
The empirical under-identification problem can often be overcome by collecting more data.
Under-identification = less efficiency = a reduction in the effective number of cases. Thus, an increase in sample size compensates for under-identification.

E. Consequences of Multicollinearity
In the presence of multicollinearity, the estimates are not biased. Rather, they are "unstable," with large standard errors. If the computer output gives you small standard errors for the estimates, do not worry about the multicollinearity problem. This is important, but often misunderstood.

V. Variance Inflation Factor

Review of partial regression estimation. The true regression is

y_i = β_0 + β_1 x_1i + ... + β_{p-1} x_(p-1)i + ε_i,

or in matrix form:

y = Xβ + ε.

This model can always be written as

y = X_1 β_1 + X_2 β_2 + ε        (1)

where X = [X_1, X_2] and β = [β_1' β_2']'; X_1 and X_2 are matrices of dimensions n × p_1 and n × p_2 (p_1 + p_2 = p), and β_1, β_2 are parameter vectors of dimensions p_1 and p_2.

We first want to prove that regression equation (1) is equivalent to the following procedure:
(1) Regress y on X_1, obtain residuals y*;
(2) Regress X_2 on X_1, obtain residuals X_2*;
(3) Then regress y* on X_2*, and obtain the correct least squares estimates of β_2 (= b_2), the same as those from the one-step method.

Without loss of generality, say that the last independent variable is singled out. That is, make X_1 = [1 x_1 ... x_(p-2)] and X_2 = x_(p-1). From the above result, we can estimate β_(p-1) from

y* = x*_(p-1) β_(p-1) + ε,

where y* and x*_(p-1) are respectively the residuals of the regressions of y and x_(p-1) on [1 x_1 ... x_(p-2)] (both y* and x*_(p-1) have zero means). There is no intercept term because 1 was contained in X_1 (so that y* and x*_(p-1) are centered around zero).

From the formulas for simple regression:

b_(p-1) = Σ y_i* x*_(p-1)i / Σ x*²_(p-1)i

V(b_(p-1)) = σ² / Σ x*²_(p-1)i = σ² / [SST_(p-1) (1 - R²_(p-1))] = σ² VIF / SST_(p-1)

The Variance Inflation Factor (VIF) is defined as:

VIF = 1 / (1 - R²_(p-1))

Similar results apply to the other independent variables.

VIF is inversely related to the R² in the auxiliary regression of an independent variable on all the other independent variables. VIF measures the reduction in the information in an independent variable due to its linear dependency on other independent variables. In other words, it is the reduction factor associated with the variance of the LS estimator (σ²_β) after we include other independent variables in the model. If an independent variable were orthogonal to the other variables in the model, the sampling variance of the estimator of its coefficient would remain the same as in the case of simple regression. This is another reason why we cannot increase the number of independent variables infinitely.
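The VIF definition translates directly into code: regress each independent variable on all the others and use the auxiliary R². A numpy sketch with made-up, deliberately collinear data (function name and data are mine):

    import numpy as np

    def vif(X, j):
        """VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from the auxiliary
        regression of column j of X on the other columns plus a constant."""
        n = X.shape[0]
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(n), others])
        b, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ b
        r2 = 1 - resid @ resid / np.sum((y - y.mean()) ** 2)
        return 1 / (1 - r2)

    rng = np.random.default_rng(1)
    x1 = rng.normal(size=200)
    x2 = x1 + 0.1 * rng.normal(size=200)   # nearly collinear with x1
    x3 = rng.normal(size=200)
    X = np.column_stack([x1, x2, x3])
    print([round(vif(X, j), 1) for j in range(3)])  # large for x1 and x2,
                                                    # near 1 for x3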
2024-2025 School Year, High School Mathematics, Lesson Tracking Test (1): Regression Analysis (with solutions), Beijing Normal University Press edition, Elective 1-2

Lesson Tracking Test (1): Regression Analysis

1. Two variables with a linear relationship have correlation coefficient r. For which of the following values of r is the linear relationship strongest? ( )
A. -0.91   B. 0.25   C. 0.6   D. 0.86
Answer: A. Among the four values, |-0.91| is closest to 1, so the linear relationship is strongest.

2. From the sample data
x: 3, 4, 5, 6, 7, 8
y: 4.0, 2.5, -0.5, 0.5, -2.0, -3.0
a regression equation y = bx + a is obtained. Then ( )
A. a > 0, b > 0   B. a > 0, b < 0   C. a < 0, b > 0   D. a < 0, b < 0
Answer: B. The scatter plot of the data shows a downward trend with a positive intercept, so b < 0 and a > 0.

3. Suppose the weight y (kg) of female students at a university has a linear relationship with height x (cm), and the least-squares regression equation from a sample (x_i, y_i) (i = 1, 2, ..., n) is y = 0.85x - 85.71. Which of the following conclusions is NOT correct? ( )
A. y and x have a positive linear relationship.
B. The regression line passes through the sample center (x̄, ȳ).
C. If a student's height increases by 1 cm, her weight increases by about 0.85 kg.
D. If a student's height is 170 cm, her weight must be 58.79 kg.
Answer: D. Since the slope of the regression line is positive, A is correct; the regression line always passes through the sample center, so B is correct; C follows from the meaning of the slope; but regression gives only an estimate, so D is not correct.

4. To understand the relationship between family income and annual expenditure in a community, 5 households were surveyed:
Income x (10k yuan): 8.2, 8.6, 10.0, 11.3, 11.9
Expenditure y (10k yuan): 6.2, 7.5, 8.0, 8.5, 9.8
The regression line y = bx + a has b = 0.76 and a = ȳ - b x̄. The estimated annual expenditure of a household with annual income 15 (10k yuan) is ( )
A. 11.4   B. 11.8   C. 12.0   D. 12.2
Answer: B. x̄ = (8.2 + 8.6 + 10.0 + 11.3 + 11.9)/5 = 10 and ȳ = (6.2 + 7.5 + 8.0 + 8.5 + 9.8)/5 = 8, so a = 8 - 0.76 × 10 = 0.4, and at x = 15, y = 0.76 × 15 + 0.4 = 11.8 (10k yuan).

5. In a set of sample data (x_1, y_1), (x_2, y_2), ..., (x_n, y_n) (n ≥ 2, x_1, x_2, ..., x_n not all equal), if all the sample points (x_i, y_i) lie on the line y = x/2 + 1, the sample correlation coefficient is ________.
Answer: 1. By the definition of the sample correlation coefficient, when all sample points lie exactly on a line (with positive slope), the correlation coefficient is 1.

6. To study the relationship between hot drink sales y (cups) and temperature x (°C), a coffee shop recorded the sales and temperature on 4 days:
x: 18, 13, 10, -1
y: 24, 34, 38, 64
The regression line is y = -2x + a. When x = -4, the predicted sales are ________.
Answer: x̄ = (18 + 13 + 10 - 1)/4 = 10 and ȳ = (24 + 34 + 38 + 64)/4 = 40, so 40 = -2 × 10 + a, giving a = 60. At x = -4, y = -2 × (-4) + 60 = 68.

7. The advertising expenditure x and sales y of a product (both in 10k yuan) are as follows:
x: 2, 4, 5, 6, 8
y: 30, 40, 60, 50, 70
(1) Draw the scatter plot. (2) Find the regression equation. (3) Estimate the sales when the advertising expenditure is 10 (10k yuan).
Solution: (1) The scatter plot is drawn as shown.
(2) The sample points lie roughly along a straight line, so x and y have a linear relationship. x̄ = (2 + 4 + 5 + 6 + 8)/5 = 5, ȳ = (30 + 40 + 60 + 50 + 70)/5 = 50,
b = Σ(x_i - x̄)(y_i - ȳ) / Σ(x_i - x̄)² = 6.5,  a = ȳ - b x̄ = 50 - 6.5 × 5 = 17.5,
so the linear regression equation is y = 17.5 + 6.5x.
(3) At x = 10, y = 17.5 + 6.5 × 10 = 82.5 (10k yuan). That is, when 10 (10k yuan) is spent on advertising, the estimated sales are 82.5 (10k yuan).

8. To price a newly developed product reasonably, a factory test-sold it at several pre-set prices:
Price x (yuan): 8, 8.2, 8.4, 8.6, 8.8, 9
Sales y (units): 90, 84, 83, 80, 75, 68
(1) Find the regression line y = bx + a, where b = -20 and a = ȳ - b x̄.
(2) If sales continue to follow the relationship in (1) and the unit cost of the product is 4 yuan, what unit price maximizes the factory's profit? (Profit = sales revenue - cost.)
Solution: (1) x̄ = (8 + 8.2 + 8.4 + 8.6 + 8.8 + 9)/6 = 8.5 and ȳ = (90 + 84 + 83 + 80 + 75 + 68)/6 = 80, so a = ȳ + 20x̄ = 80 + 20 × 8.5 = 250, and y = -20x + 250.
(2) The profit is z = (x - 4)y = (x - 4)(-20x + 250) = -20x² + 330x - 1000 = -20(x - 33/4)² + 361.25, so z is maximized at x = 33/4 = 8.25, with z_max = 361.25 (yuan). The factory's profit is largest when the unit price is 8.25 yuan.

9. In a study of the effect of the carbon content of steel on its electrical resistance, the following data were obtained:
Carbon content x (%): 0.10, 0.30, 0.40, 0.55, 0.70, 0.80, 0.95
Resistance at 20 °C (Ω): 15, 18, 19, 21, 22.6, 23.6, 26
Solution: From the data, x̄ = (1/7) Σ x_i ≈ 0.543, ȳ = (1/7) × 145.2 ≈ 20.74, Σx_i² = 2.595, Σy_i² = 3094.72, Σx_iy_i = 85.45.
b ≈ (85.45 - 7 × 0.543 × 20.74) / (2.595 - 7 × 0.543²) ≈ 12.46,  a = 20.74 - 12.46 × 0.543 ≈ 13.97.
The linear regression equation is y = 13.97 + 12.46x.
We use the correlation coefficient to check whether the relationship is significant:
Σx_iy_i - 7 x̄ ȳ = 85.45 - 7 × 0.543 × 20.74 ≈ 6.62,
Σx_i² - 7 x̄² = 2.595 - 7 × 0.543² ≈ 0.531,
Σy_i² - 7 ȳ² = 3094.72 - 7 × 20.74² = 83.687,
r = 6.62 / √(0.531 × 83.687) ≈ 0.993.
Since r is close to 1, the linear relationship between the carbon content of steel and its resistance is significant.
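The arithmetic in Problem 7 can be checked in a few lines of Python (data taken from the problem):

    import numpy as np

    x = np.array([2, 4, 5, 6, 8])        # advertising expenditure (10k yuan)
    y = np.array([30, 40, 60, 50, 70])   # sales (10k yuan)

    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    a = y.mean() - b * x.mean()
    print(b, a)            # 6.5, 17.5
    print(a + b * 10)      # 82.5: predicted sales at x = 10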
PKU Summer Course "Regression Analysis" (Linear-Regression-Analysis), Lecture Notes PKU5

Class 5: ANOVA (Analysis of Variance) and F-tests

I. What is ANOVA
What is ANOVA? ANOVA is the short name for the Analysis of Variance. The essence of ANOVA is to decompose the total variance of the dependent variable into two additive components, one for the structural part and the other for the stochastic part of a regression. Today we are going to examine the easiest case.

II. ANOVA: An Introduction
Let the model be

y = Xβ + ε.

Assuming x_i is a column vector (of length p) of independent variable values for the i-th observation,

y_i = x_i'β + ε_i.

Then ŷ_i = x_i'b is the predicted value. The total sum of squares is:

SST = Σ(y_i - Ȳ)²
    = Σ(y_i - x_i'b + x_i'b - Ȳ)²
    = Σ(y_i - x_i'b)² + Σ(x_i'b - Ȳ)² + 2 Σ(y_i - x_i'b)(x_i'b - Ȳ)
    = Σe_i² + Σ(x_i'b - Ȳ)²

because Σ e_i (x_i'b - Ȳ) = 0. This is always true by OLS. So SST = SSE + SSR.

Important: the total variance of the dependent variable is decomposed into two additive parts: SSE, which is due to errors, and SSR, which is due to regression. Geometric interpretation: [blackboard]

Decomposition of Variance
If we treat X as a random variable, we can decompose the total variance into the between-group portion and the within-group portion in any population:

V(y_i) = V(x_i'β) + V(ε_i)        (1)

Proof:

V(y_i) = V(x_i'β + ε_i)
       = V(x_i'β) + V(ε_i) + 2 Cov(x_i'β, ε_i)
       = V(x_i'β) + V(ε_i)

(by the assumption that Cov(β_k x_k, ε) = 0 for all possible k).

The ANOVA table is to estimate the three quantities of equation (1) from the sample. As the sample size gets larger and larger, the ANOVA table will approach the equation closer and closer. In a sample, the decomposition of estimated variance is not strictly true. We thus need to separately decompose sums of squares and degrees of freedom. Is ANOVA a misnomer?

III. ANOVA in Matrix
I will try to give a simplified representation of ANOVA as follows:

SST = Σ(y_i - Ȳ)²
    = Σy_i² + nȲ² - 2Ȳ Σy_i
    = Σy_i² + nȲ² - 2nȲ²        (because Σy_i = nȲ)
    = Σy_i² - nȲ²
    = y'y - nȲ²
    = y'y - (1/n) y'Jy          (in your textbook, the monster look)

SSE = e'e

SSR = Σ(x_i'b - Ȳ)²
    = Σ(x_i'b)² + nȲ² - 2Ȳ Σ(x_i'b)
    = Σ(x_i'b)² + nȲ² - 2Ȳ Σ(y_i - e_i)
    = Σ(x_i'b)² + nȲ² - 2nȲ²    (because Σe_i = 0 and Σy_i = nȲ, as always)
    = Σ(x_i'b)² - nȲ²
    = b'X'Xb - nȲ²
    = b'X'y - (1/n) y'Jy        (in your textbook, the monster look)

IV. ANOVA Table
Let us use a real example. Assume that we have a regression estimated to be

y = -1.70 + 0.840x

ANOVA Table:

SOURCE       SS     DF   MS     F
Regression   6.44    1   6.44   6.44/0.19 = 33.89, with (1, 18) df
Error        3.40   18   0.19
Total        9.84   19

We know Σy_i = 50, Σy_i² = 134.84, Σx_i = 100, and Σx_i² = 509.12. If we know that the DF for SST is 19, what is n? n = 20, and Ȳ = 50/20 = 2.5.

SST = Σy_i² - nȲ² = 134.84 - 20 × 2.5 × 2.5 = 9.84

SSR = Σ(-1.7 + 0.84 x_i)² - nȲ²
    = 20 × 1.7 × 1.7 + 0.84 × 0.84 × 509.12 - 2 × 1.7 × 0.84 × 100 - 125.0
    = 6.44

SSE = SST - SSR = 9.84 - 6.44 = 3.40

DF (degrees of freedom): demonstration. Note: we discount the intercept when calculating SST. MS = SS/DF.
p = 0.000 [ask students]. What does the p-value say?

V. F-Tests
F-tests are more general than t-tests; t-tests can be seen as a special case of F-tests. If you have difficulty with F-tests, please ask your GSIs to review F-tests in the lab. An F-test takes the form of a ratio of two MS's:

F_{df1,df2} = MSR/MSE

An F statistic has two degrees of freedom associated with it: the degrees of freedom in the numerator and the degrees of freedom in the denominator. An F statistic is usually larger than 1. The interpretation of an F statistic is whether the variance explained by the alternative hypothesis is due to chance.
In other words, the null hypothesis is that the explained variance is due to chance, or that all the coefficients are zero. The larger an F statistic, the more likely that the null hypothesis is not true. There is a table in the back of your book from which you can find exact probability values. In our example, the F is 34, which is highly significant.

VI. R²
R² = SSR/SST: the proportion of variance explained by the model. In our example, R² = 65.4%.

VII. What happens if we add more independent variables?
1. SST stays the same.
2. SSR always increases.
3. SSE always decreases.
4. R² always increases.
5. MSR usually increases.
6. MSE usually decreases.
7. The F-test usually increases.
Exceptions to 5 and 7: irrelevant variables may not explain the variance but take up degrees of freedom. We really need to look at the results.

VIII. Important: General Ways of Hypothesis Testing with F-Statistics
All tests in linear regression can be performed with F-test statistics. The trick is to run "nested models." Two models are nested if the independent variables in one model are a subset, or linear combinations of a subset (子集), of the independent variables in the other model.

That is to say: if model A has independent variables (1, x_1, x_2) and model B has independent variables (1, x_1, x_2, x_3), A and B are nested. A is called the restricted model; B is called the less restricted or unrestricted model. We call A restricted because A implies that β_3 = 0. This is a restriction.

Another example: C has independent variables (1, x_1, x_2 + x_3); D has (1, x_2 + x_3). C and A are not nested. C and B are nested, with one restriction in C: β_2 = β_3. C and D are nested, with one restriction in D. D and A are not nested. D and B are nested, with two restrictions in D: β_2 = β_3 and β_1 = 0.

We can always test hypotheses implied in the restricted models. Steps: run two regressions for each hypothesis, one for the restricted model and one for the unrestricted model. The SST should be the same across the two models. What is different is SSE and SSR. That is, what is different is R². Let SSE_r = SSE(df_r) and SSE_u = SSE(df_u); then

df_r - df_u = (n - p_r) - (n - p_u) = p_u - p_r > 0.

Use the following formulas:

F_{df_r - df_u, df_u} = [(SSE_r - SSE_u)/(df_r - df_u)] / (SSE_u/df_u)

or

F_{df_r - df_u, df_u} = [(SSR_u - SSR_r)/(df_r - df_u)] / (SSE_u/df_u)

(proof: use SST = SSE + SSR). Note that df(SSE_r) - df(SSE_u) = df(SSR_u) - df(SSR_r) = Δdf, the number of constraints (not the number of parameters) implied by the restricted model. Or:

F_{Δdf, df_u} = [(R²_u - R²_r)/Δdf] / [(1 - R²_u)/df_u]

Note that

t²_df = F_{1,df}

That is, for 1-df tests, you can either do an F-test or a t-test. They yield the same result. Another way to look at it is that the t-test is a special case of the F-test, with the numerator DF being 1.

IX. Assumptions of F-tests
What assumptions do we need to make an ANOVA table work? Not much of an assumption. All we need is the assumption that X'X is not singular, so that the least squares estimate b exists. The assumption that E(ε) = 0 is needed if you want the ANOVA table to be an unbiased estimate of the true ANOVA (equation 1) in the population. Reason: we want b to be an unbiased estimator of β, and the covariance between b and ε to disappear. For reasons I discussed earlier, the assumptions of homoscedasticity and non-serial correlation are necessary for the estimation of σ². The normality assumption, that ε_i is distributed in a normal distribution, is needed for small samples.

X. The Concept of Increment
Every time you put one more independent variable into your model, you get an increase in R². We sometimes call the increase the "incremental R²."
What it means is that more variance is explained, or SSR is increased and SSE is reduced. What you should understand is that the incremental R² attributed to a variable is always smaller than the R² obtained when the other variables are absent.

XI. Consequences of Omitting Relevant Independent Variables
Say the true model is the following:

y_i = β_0 + β_1 x_1i + β_2 x_2i + β_3 x_3i + ε_i.

But for some reason we only collect or consider data on y, x_1, and x_2. Therefore, we omit x_3 in the regression. That is, we omit x_3 in our model. We briefly discussed this problem before. The short story is that we are likely to have a bias due to the omission of a relevant variable in the model. This is so even though our primary interest is to estimate the effect of x_1 or x_2 on y. Why? We will have a formal presentation of this problem.

XII. Measures of Goodness-of-Fit
There are different ways to assess the goodness-of-fit of a model.

A. R²
R² is a heuristic measure of the overall goodness-of-fit. It does not have an associated test statistic. R² measures the proportion of the variance in the dependent variable that is "explained" by the model:

R² = SSR/SST = SSR/(SSR + SSE)

B. Model F-test
The model F-test tests the joint hypothesis that all the model coefficients except for the constant term are zero. Degrees of freedom associated with the model F-test: numerator, p - 1; denominator, n - p.

C. t-tests for individual parameters
A t-test for an individual parameter tests the hypothesis that a particular coefficient is equal to a particular number (commonly zero):

t_k = (b_k - β_k0)/SE_k,

where SE_k is the square root of the (k, k) element of MSE(X'X)⁻¹, with degrees of freedom n - p.

D. Incremental R²
Relative to a restricted model, the gain in R² for the unrestricted model: ΔR² = R²_u - R²_r.

E. F-tests for Nested Models
This is the most general form of F-tests and t-tests:

F_{df_r - df_u, df_u} = [(SSE_r - SSE_u)/(df_r - df_u)] / (SSE_u/df_u)

It is equal to a t-test if the unrestricted and restricted models differ by only one single parameter. It is equal to the model F-test if we set the restricted model to the constant-only model. [Ask students] What are SST, SSE, and SSR, and their associated degrees of freedom, for the constant-only model?

Numerical Example
A sociological study is interested in understanding the social determinants of mathematical achievement among high school students. You are now asked to answer a series of questions. The data are real but have been tailored for educational purposes. The total number of observations is 400. The variables are defined as:
y: math score
x1: father's education
x2: mother's education
x3: family's socioeconomic status
x4: number of siblings
x5: class rank
x6: parents' total education (note: x6 = x1 + x2)

For the following regression models, we know:

Table 1
                              SST     SSR     SSE     DF    R²
(1) y on (1 x1 x2 x3 x4)      34863   4201
(2) y on (1 x6 x3 x4)         34863                   396   .1065
(3) y on (1 x6 x3 x4 x5)      34863   10426   24437   395   .2991
(4) x5 on (1 x6 x3 x4)                        269753  396   .0210

1. Please fill in the missing cells in Table 1.
2. Test the hypothesis that the effects of father's education (x1) and mother's education (x2) on math score are the same after controlling for x3 and x4.
3. Test the hypothesis that x6, x3 and x4 in Model (2) all have a zero effect on y.
4. Can we add x6 to Model (1)? Briefly explain your answer.
5. Test the hypothesis that the effect of class rank (x5) on math score is zero after controlling for x6, x3, and x4.

Answers:
1.
Table 1 (completed)
                              SST     SSR     SSE     DF    R²
(1) y on (1 x1 x2 x3 x4)      34863   4201    30662   395   .1205
(2) y on (1 x6 x3 x4)         34863   3713    31150   396   .1065
(3) y on (1 x6 x3 x4 x5)      34863   10426   24437   395   .2991
(4) x5 on (1 x6 x3 x4)        275539  5786    269753  396   .0210

Note that the SST for Model (4) is different from those for Models (1) through (3).

2. The restricted model is y = b_0 + b_1(x_1 + x_2) + b_3 x_3 + b_4 x_4 + e; the unrestricted model is y = b_0' + b_1' x_1 + b_2' x_2 + b_3' x_3 + b_4' x_4 + e'.

F_{1,395} = [(31150 - 30662)/1] / (30662/395) = 488/77.63 = 6.29

3. F_{3,396} = (3713/3) / (31150/396) = 1237.67/78.66 = 15.73

4. No. x6 is a linear combination of x1 and x2, so X'X is singular.

5. F_{1,395} = [(31150 - 24437)/1] / (24437/395) = 6713/61.87 = 108.50

t = √108.50 = 10.42
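The nested-model F statistic used in these answers can be wrapped in a small helper; a sketch, checked against the numbers from questions 2 and 5:

    def nested_f(sse_r, df_r, sse_u, df_u):
        """F-test for nested models:
        F = [(SSE_r - SSE_u) / (df_r - df_u)] / (SSE_u / df_u)."""
        return ((sse_r - sse_u) / (df_r - df_u)) / (sse_u / df_u)

    # Question 2: restricted model (2) vs. unrestricted model (1).
    print(nested_f(31150, 396, 30662, 395))   # approx. 6.29

    # Question 5: restricted model (2) vs. unrestricted model (3).
    print(nested_f(31150, 396, 24437, 395))   # approx. 108.50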