Factor Analysis and Latent Structure: IRT and Rasch Models


Principal Component Analysis

Factor analysis

Like principal component analysis, factor analysis is an exploratory analysis technique. Its main application is to explain, in a reasonable way, how several directly measurable and somewhat correlated observed indicators are governed by a small number of relatively independent factors that cannot be measured directly.

Example

Example 5.1. To assess the knowledge and ability of middle-school students, 100 students were sampled; each answered 40 questions, yielding a score for each question. The question:
I. Basic principles of principal components
Find a suitable linear or nonlinear transformation that converts a number of mutually correlated variables into mutually independent new variables; then, according to the variances of the new variables, select a few with large variances to replace the original variables, so that a small number of new variables comprehensively reflect the main information contained in the original variables, while each carries its own distinct substantive meaning.
The new variables (composite variables) are called the principal components of the original variables.
h_i² = Σ_j a_ij²  (the communality of variable i: the sum of its squared loadings)
Principal components are linear combinations of the original variables; they are an extraction of the information in the original variables. Principal components neither increase nor decrease the total amount of information; they only redistribute the original information. Users can select the important information (the first few principal components) according to the actual situation for further analysis.
2. Determining the number of principal components
① Rule of thumb: the cumulative contribution rate of the principal components reaches 70-80% or more;
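As a sketch of this rule, the cumulative contribution rate can be computed from the eigenvalues of the correlation matrix; the data below are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 100 examinees x 6 correlated scores (hypothetical, for illustration).
base = rng.normal(size=(100, 2))
X = base @ rng.normal(size=(2, 6)) + 0.3 * rng.normal(size=(100, 6))

R = np.corrcoef(X, rowvar=False)       # 6 x 6 correlation matrix
eigvals = np.linalg.eigvalsh(R)[::-1]  # eigenvalues, largest first
contrib = eigvals / eigvals.sum()      # contribution rate of each component
cum = np.cumsum(contrib)               # cumulative contribution rate

# Rule of thumb: keep the first m components whose cumulative
# contribution rate reaches at least 80%.
m = int(np.searchsorted(cum, 0.80)) + 1
print(m, np.round(cum, 3))
```

Because the trace of a correlation matrix equals the number of variables, the contribution rate of a component is simply its eigenvalue divided by the number of variables.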
Factor Analysis
曹明芹, Department of Epidemiology and Health Statistics

In medical research, the variables we study often cannot be measured directly, or are difficult to measure directly. For example, consider studying how family environment, social environment, and school environment influence the development of children's IQ: none of these variables can be measured directly or easily. Variables that cannot (easily) be observed directly are called latent variables or latent factors.

Principal Component Analysis and Factor Analysis

Principal component analysis (PCA) is an unsupervised dimensionality-reduction technique that projects the original data onto a new orthogonal coordinate system so that the projected data have maximal variance.

Specifically, PCA uses the covariance matrix or correlation matrix of the data to generate a set of new variables called principal components, each of which is a linear combination of the original variables.

These principal components are ordered by decreasing variance, so the first few principal components explain most of the variance in the original data.

By choosing how many principal components to retain, the dimensionality of the data set can be reduced, making further analysis and visualization easier.

The main applications of PCA include data preprocessing (e.g., removing redundant information and noise), feature extraction, data visualization, and pattern recognition.

In feature extraction, selecting the first k principal components transforms the original data into a k-dimensional subspace, achieving dimensionality reduction.

In addition, by computing the correlations between the original variables and the principal components, PCA can identify the key features in the data.
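The projection onto the first k components, and the variable-component correlations used to identify key features, can be sketched as follows; the data are simulated and not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)  # make two variables correlated

Xc = X - X.mean(axis=0)          # center the data
C = np.cov(Xc, rowvar=False)     # covariance matrix
w, V = np.linalg.eigh(C)         # eigenvalues/eigenvectors (ascending order)
order = np.argsort(w)[::-1]      # reorder: largest variance first
w, V = w[order], V[:, order]

k = 2
Z = Xc @ V[:, :k]                # scores on the first k principal components

# Correlation of each original variable with each retained component shows
# which variables drive each principal component.
load = np.array([[np.corrcoef(Xc[:, j], Z[:, i])[0, 1] for i in range(k)]
                 for j in range(Xc.shape[1])])
print(np.round(load, 2))
```

Because the eigenvectors of the covariance matrix are orthogonal, the component scores in `Z` are uncorrelated with each other, which is exactly the "mutually independent new variables" property described above.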

Factor analysis is a statistical method for exploring the relationships between multiple observed variables and their underlying latent factors.

Latent factors cannot be observed directly, but they can be measured indirectly through the common variability of several correlated variables.

The goal of factor analysis is to find the smallest number of latent factors that explains the common variation in the original data.

Unlike PCA, factor analysis assumes a linear relationship between the observed variables and the latent factors, and that the correlations among the observed variables can be explained by these latent factors.

Through the factor loading matrix, we can see the strength of the correlation between each observed variable and each latent factor.

Through the variance contribution rate of each factor, we can see how much of the variability in the data each factor explains.

Factor analysis can also be used to explore the main latent factors and to build latent factor models for further analysis and prediction.

The main applications of factor analysis include: confirmatory factor analysis (CFA) for testing the fit of theoretical models; in psychology and education, building latent factor models and validating the reliability and validity of psychological scales; and in market research, building factor models of brand image to analyze consumers' perceptions of different brand attributes.

In summary, both principal component analysis and factor analysis are multivariate methods used to explore data and reduce the dimensionality of a data set.
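A minimal sketch of common-factor extraction follows, using principal-axis factoring with iterated communalities on a hypothetical correlation matrix. This is one standard estimation method chosen for illustration; the text does not prescribe a specific one, and the correlation values are invented.

```python
import numpy as np

def principal_axis(R, m, iters=50):
    """Principal-axis factoring sketch: extract m factors from correlation
    matrix R by iterating communality estimates on the reduced matrix."""
    h2 = 1 - 1 / np.diag(np.linalg.inv(R))   # initial communalities (SMCs)
    for _ in range(iters):
        Rr = R.copy()
        np.fill_diagonal(Rr, h2)             # reduced correlation matrix
        w, V = np.linalg.eigh(Rr)
        idx = np.argsort(w)[::-1][:m]
        L = V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))  # factor loadings
        h2 = (L ** 2).sum(axis=1)            # updated communalities
    return L, h2

# Hypothetical correlation matrix for 4 items loading on one common factor.
R = np.array([[1.0, 0.6, 0.5, 0.4],
              [0.6, 1.0, 0.5, 0.4],
              [0.5, 0.5, 1.0, 0.4],
              [0.4, 0.4, 0.4, 1.0]])
L, h2 = principal_axis(R, m=1)
print(np.round(L, 2), np.round(h2, 2))
```

The returned `L` is the factor loading matrix discussed above, and `h2` holds the communalities (the share of each item's variance explained by the common factor).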

Notes on Factor Analysis

Factor Analysis with Categorical Indicators: A Comparison Between Traditional and Latent Class Approaches. Jeroen K. Vermunt, Tilburg University; Jay Magidson, Statistical Innovations Inc.

1 INTRODUCTION

The linear factor analysis (FA) model is a popular tool for exploratory data analysis or, more precisely, for assessing the dimensionality of sets of items. Although it is well known that it is meant for continuous observed indicators, it is often used with dichotomous, ordinal, and other types of discrete variables, yielding results that might be incorrect. Not only may parameter estimates be biased, but goodness-of-fit indices also cannot be trusted. Magidson and Vermunt (2001) presented a nonlinear factor-analytic model based on latent class (LC) analysis that is especially suited for dealing with categorical indicators, such as dichotomous, ordinal, and nominal variables, and counts. The approach is called latent class factor analysis (LCFA) because it combines elements from LC and traditional FA. This LCFA model is one of the LC models implemented in the Latent GOLD program (Vermunt & Magidson, 2000, 2003).

A disadvantage of the LCFA model is, however, that its parameters may be somewhat more difficult to interpret than the typical factor-analytic coefficients: factor loadings, factor-item correlations, factor correlations, and communalities. In order to overcome this problem, we propose using a linear approximation of the maximum likelihood estimates obtained with a LCFA model. This makes it possible to provide the same type of output measures as in standard FA, while retaining the fact that the underlying factor structure is identified by the more reliable nonlinear factor-analytic model.

Bartholomew and Knott (1999) gave a four-fold classification of latent variable models based on the scale types of the latent and observed variables: factor analysis, latent trait (LT) analysis, latent profile (LP) analysis, and latent class analysis. As shown in Table 1, in FA and LT models, the latent variables are treated
as continuous, normally distributed variables. In LP and LC models, on the other hand, the latent variable is assumed to be discrete and to come from a multinomial distribution. The manifest variables in the FA and LP models are continuous. In most cases, their conditional distribution given the latent variables is assumed to be normal. In LT and LC analysis, the indicators are dichotomous, ordinal, or nominal categorical variables, and their conditional distributions are assumed to be binomial or multinomial.

[INSERT TABLE 1 ABOUT HERE]

The distinction between models for continuous and discrete indicators is not a fundamental one, since the choice between the two should simply depend on the type of data. The specification of the conditional distributions of the indicators follows naturally from their scale types. A recent development in latent variable modeling is to allow for a different distributional form for each indicator. This can, for example, be a normal, Student, log-normal, gamma, or exponential distribution for continuous variables, binomial for dichotomous variables, multinomial for ordinal and nominal variables, and Poisson, binomial, or negative-binomial for counts. Depending on whether the latent variable is treated as continuous or discrete, one obtains a generalized LT (Moustaki & Knott, 2000) or LC (Vermunt & Magidson, 2001) model.

The more fundamental distinction in Bartholomew's typology is the one between continuous and discrete latent variables. A researcher has to decide whether to treat the underlying latent variable(s) as continuous or discrete. However, Heinen (1996) demonstrated that the distribution of a continuous latent variable can be approximated by a discrete distribution, and that such a discrete approximation may even be superior¹ to a misspecified continuous (usually normal) model. More precisely, Heinen (1996; see also Vermunt, 2001) showed that constrained LC models can be used to approximate well-known unidimensional LT or item response theory (IRT) models², such as the
Rasch, Birnbaum, nominal-response, and partial credit models. This suggests that the distinction between continuous and discrete latent variables is less fundamental than one might initially think, especially if the number of latent classes is increased. More precisely, as shown by Aitkin (1999; see also Vermunt and Van Dijk, 2001; Vermunt, 2004), a continuous latent distribution can be approximated using a nonparametric specification; that is, by a finite mixture model with the maximum number of identifiable latent classes. An advantage of such a nonparametric approach is that it is not necessary to introduce possibly inappropriate and unverifiable assumptions about the distribution of the random effects.

¹ With superior we refer to the fact that misspecification of the distribution of the continuous latent variables may cause bias in the item parameter estimates. In a discrete or nonparametric specification, on the other hand, no assumptions are made about the latent distribution and, as a result, parameters cannot be biased because of misspecification of the latent distribution.

² We will use the terms latent trait (LT) and item response theory (IRT) interchangeably.

The proposed LCFA model is based on a multidimensional generalization of Heinen's (1996) idea: it is a restricted LC model with several latent variables. As in exploratory FA, the LCFA model can be used to determine which items measure the same dimension. The idea of defining an LC model with several latent variables is not new: Goodman (1974) and Hagenaars (1990) proposed such a model and showed that it can be derived from a standard LC model by specifying a set of equality constraints on the item conditional probabilities. What is new is that we use IRT-like regression-type constraints on the item conditional means/probabilities³ in order to be able to use the LC model with several latent variables as an exploratory factor-analytic tool. Our approach is also somewhat more general than Heinen's in the sense that it can deal not only with
dichotomous, ordinal, and nominal observed variables, but also with counts and continuous indicators, as well as any combination of these.

Using a general latent variable model as the starting point, it will be shown that several important special cases are obtained by varying the model assumptions. In particular, assuming 1) that the latent variables are dichotomous or ordinal, and 2) that the effects of these latent variables on the transformed means are additive, yields the proposed LCFA model. We show how the results of this LCFA model can be approximated using a linear FA model, which yields the well-known standard FA output. Special attention is given to the meaning of the part that is ignored by the linear approximation and to the handling of nominal variables. Several real-life examples are presented to illustrate our approach.

³ With regression-type constraints on the item conditional probabilities we mean that the probability of giving a particular response given the latent traits is restricted by means of a logistic regression model, or another type of regression model. In the case of continuous responses, the means are restricted by linear regression models, as in standard factor analysis.

2 THE LATENT CLASS FACTOR MODEL

Let θ denote a vector of L latent variables and y a vector of K observed variables. Indices ℓ and k are used when referring to a specific latent and observed variable, respectively. A basic latent variable model has the following form:

f(θ, y) = f(θ) f(y|θ) = f(θ) ∏_{k=1}^{K} f(y_k|θ),

where the primary model assumption is that the K observed variables are independent of one another given the latent variables θ, usually referred to as the local independence assumption (Bartholomew and Knott, 1999). The various types of latent variable models are obtained by specifying the distribution of the latent variables f(θ) and the K conditional item distributions f(y_k|θ). The two most popular choices for f(θ) are continuous multivariate normal and discrete nominal. The specification for the
error functions f(y_k|θ) will depend on the scale type of indicator k.⁴ Besides the distributional form of f(y_k|θ), an appropriate link or transformation function g(·) is defined for the expectation of y_k given θ, E(y_k|θ). With continuous θ (FA or LT), the effects of the latent variables are assumed to be additive in g(·); that is,

g[E(y_k|θ)] = β_{0k} + Σ_{ℓ=1}^{L} β_{ℓk} θ_ℓ,   (1)

where the regression intercepts β_{0k} can be interpreted as "difficulty" parameters and the slopes β_{ℓk} as "discrimination" parameters. With a discrete θ (LC or LP), usually no constraints are imposed on g[E(y_k|θ)].

The new element of the LCFA model is that a set of discrete latent variables is explicitly treated as multidimensional, and that the same additivity of their effects is assumed as in Equation 1. In the simplest specification, the latent variables are specified to be dichotomous and mutually independent, yielding what we call the basic LCFA model. An LCFA model with L dichotomous latent variables is, actually, a restricted LC model with 2^L latent classes (Magidson & Vermunt, 2001). Our approach is an extension of Heinen's work to the multidimensional case. Heinen (1996) showed that LC models with

⁴ The term error function is jargon from the generalized linear modeling framework.
Here, it refers to the distribution of the unexplained or unique part (the error) of y_k.

certain log-linear constraints yield discretized versions of unidimensional LT models. The proposed LCFA model is a discretized multidimensional LT or IRT model. With dichotomous observed variables, for instance, we obtain a discretized version of the multidimensional two-parameter logistic model (Reckase, 1997).

A disadvantage of the (standard) LC model compared to the LT and LCFA models is that it does not explicitly distinguish different dimensions, which makes it less suited for dimensionality detection. Disadvantages of the LT model compared to the other two models are that it makes stronger assumptions about the latent distribution and that its estimation is computationally much more intensive, especially with more than a few dimensions. Estimation of LT models via maximum likelihood requires numerical integration: for example, with 3 dimensions and 10 quadrature points per dimension, computation of the log-likelihood function involves summation over 1000 (= 10³) quadrature points. The LCFA model shares the advantages of the LT model, but is much easier to estimate, which is a very important feature if one wishes to use the method for exploratory purposes. Note that a LCFA model with 3 dimensions requires summation over no more than 8 (= 2³) discrete nodes. Of course, the number of nodes becomes larger with more than two categories per latent dimension, but will still be much smaller than in the corresponding LT model.

Let us first consider the situation in which all indicators are dichotomous.
In that case, the most natural choices for f(y_k|θ) and g(·) are a binomial distribution function and a logistic transformation function. Alternatives to the logistic transformation are probit, log-log, and complementary log-log transformations. Depending on the specification of f(θ) and the model for g[E(y_k|θ)], we obtain a LT, LC, or LCFA model. In the LCFA model,

f(θ) = π(θ) = ∏_{ℓ=1}^{L} π(θ_ℓ)

g[E(y_k|θ)] = log[π(y_k|θ) / (1 − π(y_k|θ))] = β_{0k} + Σ_{ℓ=1}^{L} β_{ℓk} θ_ℓ.   (2)

The parameters to be estimated are the probabilities π(θ_ℓ) and the coefficients β_{0k} and β_{ℓk}. The number of categories of each of the L discrete latent variables is at least 2, and the θ_ℓ are the fixed category scores, assumed to be equally spaced between 0 and 1. The assumption of mutual independence between the latent variables θ_ℓ can be relaxed by incorporating two-variable associations in the model for π(θ). Furthermore, the number of categories of the factors can be specified to be larger than two: a two-level factor has category scores 0 and 1 for the factor levels, a three-level factor scores 0, 0.5, and 1, etc.

The above LCFA model for dichotomous indicators can easily be extended to other types of indicators. For indicators of other scale types, other distributional assumptions are made and other link functions are used. Some of the possibilities are described in Table 2. For example, the restricted logit model we use for ordinal variables is an adjacent-category logit model. Letting s denote one of the S_k categories of variable y_k, it can be defined as

log[π(y_k = s|θ) / π(y_k = s−1|θ)] = β_{0ks} + Σ_{ℓ=1}^{L} β_{ℓk} θ_ℓ,  for 2 ≤ s ≤ S_k.

[INSERT TABLE 2 ABOUT HERE]

Extensions of the basic LCFA model are, among others, that local dependencies can be included between indicators and that covariates may influence the latent variables and the indicators (Magidson & Vermunt, 2001, 2004).
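A small numeric sketch of Equation (2) for a single dichotomous item and two dichotomous factors: given the coefficients, the conditional response probability at each combination of factor scores follows from the inverse logit. The β values below are made up for illustration and are not estimates from the paper.

```python
import math
from itertools import product

# Hypothetical intercept ("difficulty") and slopes ("discrimination").
b0, b1, b2 = -2.0, 3.5, 1.0

for t1, t2 in product([0, 1], repeat=2):
    logit = b0 + b1 * t1 + b2 * t2
    p = 1 / (1 + math.exp(-logit))   # pi(y_k = 1 | theta_1, theta_2)
    print(t1, t2, round(p, 3))
```

With fixed category scores of 0 and 1 for each factor, only the four combinations enumerated here are possible, which is why the LCFA likelihood sums over so few nodes.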
These are similar to extensions proposed for the standard latent class model (for example, see Dayton & McReady, 1988; Hagenaars, 1988; Van der Heijden, Dessens & Böckenholt, 1996).

Similarly to standard LC models and IRT models, the parameters of a LCFA model can be estimated by means of maximum likelihood (ML). We solved this ML estimation problem by means of a combination of an EM and a Newton-Raphson algorithm. More specifically, we start with EM and switch to Newton-Raphson when close to the maximum likelihood solution. The interested reader is referred to Vermunt and Magidson (2000: Appendix).

3 LINEAR APPROXIMATION

As mentioned above, the proposed nonlinear LCFA model is estimated by means of ML. However, as a result of the scale transformations g(·), the parameters of the LCFA model are more difficult to interpret than the parameters of the traditional FA model. In order to facilitate the interpretation of the results, we propose approximating the maximum likelihood solution for the conditional means E(y_k|θ) by a linear model, yielding the same type of output as in traditional FA. While the original model for item k may, for example, be a logistic model, we approximate the logistic response function by means of a linear function.

The ML estimates E(y_k|θ) are approximated by the following linear function:

E(y_k|θ) = b_{0k} + Σ_{ℓ=1}^{L} b_{ℓk} θ_ℓ + e_{k|θ}.   (3)

The parameters of the K linear regression models are simply estimated by means of ordinary least squares (OLS). The residual term e_{k|θ} is needed because the linear approximation will generally not be perfect.

With 2 dichotomous factors, a perfect description by a linear model is obtained by

E(y_k|θ₁, θ₂) = b_{k0} + b_{k1} θ₁ + b_{k2} θ₂ + b_{k12} θ₁θ₂;

that is, by the inclusion of the interaction between the two factors. Because the similarity with standard FA would otherwise get lost, interaction terms such as b_{k12} are omitted in the approximation.

Special provisions have to be made for ordinal and nominal variables. Because of the adjacent-category logit model specification,
for ordinal variables it is most natural to define E(y_k|θ) = Σ_{s=1}^{S_k} s · π(y_k = s|θ).⁵ With nominal variables, analogous to the Goodman and Kruskal tau-b (GK-τb), each category is treated as a separate dichotomous variable, yielding one coefficient per category. For each category, we model the probability of being in the category concerned. These category-specific coefficients are combined into a single measure in exactly the same way as is done in the computation of the GK-τb coefficient. As is shown below, overall measures for nominal variables are defined as weighted averages of the category-specific coefficients.

The coefficients reported in traditional linear FA are factor loadings (p_{θℓ y_k}), factor correlations (r_{θℓ θℓ'}), communalities or proportions of explained item variance (R²_{y_k}), factor-item correlations (r_{θℓ y_k}), and, in the case that there are local dependencies, also residual item correlations (r_{e_k e_k'}). The correlations r_{θℓ θℓ'}, r_{θℓ y_k}, and r_{y_k y_k'} can be computed from π(θ), E(y_k|θ), and the observed item distributions using elementary statistics computations. For example, r_{θℓ θℓ'} is obtained by dividing the covariance between θℓ and θℓ' by the product of their standard deviations; that is,

r_{θℓ θℓ'} = σ_{θℓ θℓ'} / (σ_{θℓ} σ_{θℓ'})
           = Σ_{θℓ} Σ_{θℓ'} [θℓ − E(θℓ)][θℓ' − E(θℓ')] π(θℓ, θℓ')
             / ( √(Σ_{θℓ} [θℓ − E(θℓ)]² π(θℓ)) · √(Σ_{θℓ'} [θℓ' − E(θℓ')]² π(θℓ')) ),

where E(θℓ) = Σ_{θℓ} θℓ π(θℓ).

The factor-factor (r_{θℓ θℓ'}) and the factor-item (r_{θℓ y_k}) correlations can be used to compute OLS estimates for the factor loadings (p_{θℓ y_k}), which are standardized versions of the regression coefficients appearing in Equation 3. The communalities or R² values (R²_{y_k}) corresponding to the linear approximation are obtained with r_{θℓ y_k} and p_{θℓ y_k}: R²_{y_k} = Σ_{ℓ=1}^{L} r_{θℓ y_k} p_{θℓ y_k}. The residual correlations (r_{e_k e_k'}) are defined as the difference between r_{y_k y_k'} and the total correlation (not only the linear part) induced by the factors, denoted by r_{θ y_k y_k'}.

⁵ The same would apply with other link functions for ordinal variables, such as with a cumulative logit link.

The linear approximation of E(y_k|θ) is, of course, not perfect. One error source is
caused by the fact that the approximation excludes higher-order interaction effects of the factors. More specifically, in the LCFA model presented in Equation 2, higher-order interactions are excluded, but this does not mean that no higher-order interactions are needed to get a perfect linear approximation. On the other hand, with all interactions included, the linear approximation would be perfect. For factors having more than two ordered levels, there is an additional error source caused by the fact that linear effects on the transformed scale are nonlinear on the nontransformed scale. In order to gain insight into the quality of the linear approximation, we also compute the R² treating the joint latent variable as a set of dummies; that is, as a single nominal latent variable.

As was mentioned above, for nominal variables we have a separate set of coefficients for each of the S_k categories, because each category is treated as a separate dichotomous indicator. If s denotes one of the S_k categories of y_k, the category-specific R² (R²_{y_k^s}) equals

R²_{y_k^s} = σ²_{E(y_k=s|θ)} / σ²_{y_k^s},

where σ²_{E(y_k=s|θ)} is the explained variance of the dummy variable corresponding to category s of item k, and σ²_{y_k^s} is its total variance, defined as π(y_k = s)[1 − π(y_k = s)]. The overall R²_{y_k} for item k is obtained as a weighted sum of the S_k category-specific R² values, where the weights w_{y_k^s} are proportional to the total variances σ²_{y_k^s}; that is,

R²_{y_k} = Σ_{s}^{S_k} [σ²_{y_k^s} / Σ_{t}^{S_k} σ²_{y_k^t}] R²_{y_k^s} = Σ_{s} w_{y_k^s} R²_{y_k^s}.

This weighting method is equivalent to what is done in the computation of the GK-τb, an asymmetric association measure for nominal dependent variables. We propose using the same weighting in the computation of p_{θℓ y_k}, r_{θℓ y_k}, and r_{e_k e_k'} from their category-specific counterparts. This yields

p_{θℓ y_k} = √( Σ_{s=1}^{S_k} w_{y_k^s} p²_{θℓ y_k^s} )
r_{θℓ y_k} = √( Σ_{s=1}^{S_k} w_{y_k^s} r²_{θℓ y_k^s} )
r_{e_k e_k'} = √( Σ_{s=1}^{S_k} Σ_{t=1}^{S_k'} w_{y_k^s} w_{y_k'^t} r²_{e_k^s e_k'^t} ).

As can be seen, the signs are lost, but that is, of course, not a problem for a nominal variable.

4 EMPIRICAL EXAMPLES

4.1 Rater Agreement

For our first example we
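The variance-weighted overall R² for a nominal item can be illustrated numerically. The category probabilities and category-specific R² values below are invented purely for the sketch.

```python
import numpy as np

# Hypothetical nominal item with three categories.
p_cat = np.array([0.5, 0.3, 0.2])       # marginal probabilities pi(y_k = s)
r2_cat = np.array([0.40, 0.25, 0.10])   # category-specific R^2 values (made up)

var_cat = p_cat * (1 - p_cat)           # sigma^2 = pi(1 - pi) per dummy variable
w = var_cat / var_cat.sum()             # weights proportional to total variances
r2_item = (w * r2_cat).sum()            # overall R^2 for the item
print(round(r2_item, 4))
```

Each category is treated as a dummy variable whose total variance is π(1 − π), and the overall measure is simply the variance-weighted average of the per-category values, matching the GK-τb construction described above.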
factor analyze dichotomous ratings made by 7 pathologists, each of whom classified 118 slides as to the presence or absence of carcinoma in the uterine cervix (Landis & Koch, 1977). This is an example of an inter-rater agreement analysis. We want to know whether the ratings of the seven raters are similar or not, and if not, in what sense the ratings deviate from each other.

Agresti (2002), using standard LC models to analyze these data, found that a two-class solution does not provide an adequate fit. Using the LCFA framework, Magidson and Vermunt (2004) confirmed that a single dichotomous factor (equivalent to a two-class LC model) did not fit the data. They found that a basic two-factor LCFA model provides a good fit.

Table 3 presents the results of the two-factor model in terms of the conditional probabilities. These results suggest that Factor 1 distinguishes between slides that are "true negative" or "true positive" for cancer. The first class (θ₁ = 0) is the "true negative" group because it has lower probabilities of a "+" rating for each of the raters than class two (θ₁ = 1), the "true positive" group. Factor 2 is a bias factor, which suggests that some pathologists bias their ratings in the direction of a "false +" error (θ₂ = 1) while others exhibit a bias towards "false −" error (θ₂ = 0). More precisely, for some raters we see a too-high probability of a "+" rating if θ₁ = 0 and θ₂ = 1 (raters A, G, E, and B), and for others we see a too-high probability of a "−" rating if θ₁ = 1 and θ₂ = 0 (raters F and D). These results demonstrate the richness of the LCFA model in extracting meaningful information from these data. Valuable information includes an indication of which slides are positive for carcinoma,⁶ as well as estimates of "false +" and "false −" error for each rater.

[INSERT TABLE 3 ABOUT HERE]

The left-most columns of Table 4 list the estimates of the logit coefficients for these data. Although the probability estimates in Table 3 are derived from these quantities (recall Equation 2), the logit parameters are not as easy to interpret as
the probabilities. For example, the logit effect of θ₁ on A, a measure of the validity of the ratings of pathologist A, is a single quantity, exp(7.74) = 2,298. This means that among those slides at θ₂ = 0, the odds of rater A classifying a "true +" slide as "+" is 2,298 times as high as classifying a "true −" slide as "+". Similarly, among those slides at θ₂ = 1, the corresponding odds ratio is also 2,298.

[INSERT TABLE 4 ABOUT HERE]

We could instead express the effect of Factor 1 in terms of differences between probabilities. Such a linear effect is easier to interpret, but is not the same for both types of slides. For slides at θ₂ = 0, the probability of classifying a "true +" slide as "+" is .94 higher than classifying a "true −" slide as "+" (.99 − .06 = .94), while for slides at θ₂ = 1, it is .59 higher (1.00 − .41 = .59), markedly different quantities. This illustrates that for the linear model, a large interaction term is needed to reproduce the results obtained from the logistic LC model.

Given that a substantial interaction must be added to the linear model to capture the differential biases among the raters, it might be expected that the traditional (linear) FA model also fails to capture this bias. This turns out to be the case, as the traditional rule of choosing the number of factors to be equal to the number of eigenvalues greater than 1 yields only a single factor: the largest eigenvalue was 4.57, followed by 0.89 for the second largest.

⁶ For each patient, we can obtain the posterior distribution for the first factor. This posterior distribution can be used to determine whether a patient has carcinoma or not, corrected for rater bias (the second factor).
Despite this result, for purposes of comparison with the LCFA solution, we fitted a two-factor model anyway, using maximum likelihood for estimation. Table 5 shows that the results obtained from Varimax (orthogonal) and Quartimax (oblique) rotations differ substantially. Hence, without theoretical justification for one rotation over another, FA produces arbitrary results in this example.

[INSERT TABLE 5 ABOUT HERE]

The three right-most columns of Table 4 present results from a linearization of the LCFA model, using the following equation to obtain "linearized loadings" for each variable y_k:

E(y_k|θ₁, θ₂) = b_{k0} + b_{k1} θ₁ + b_{k2} θ₂ + b_{k12} θ₁θ₂.

These 3 loadings have clear meanings in terms of the magnitude of validity and bias for each rater. They have been used to sort the raters according to the magnitude and direction of bias. The logit loadings do not provide such clear information.

The loading on θ₁ corresponds to a measure of validity of the ratings. Raters C, A, and G, who have the highest loadings on the first linearized factor, show the highest level of agreement among all raters. The loading on θ₂ relates to the magnitude of bias, and the loading on θ₁θ₂ indicates the direction of the bias. For example, from Table 3 we saw that raters F and B show the most bias, F in the direction of "false −" ratings and B in the direction of "false +". This is exactly what is picked up by the nonlinear term: the magnitude of the loadings on the nonlinear term (Table 4) is highest for these 2 raters, one occurring as "+", the other as "−".

Table 4 also lists the communalities (R²_{y_k} values) for each rater, and decomposes these into linear and nonlinear portions (the "Total" column contains the sum of the linear and nonlinear portions). The linear portion is the part accounted for by b_{k1} θ₁ + b_{k2} θ₂, and the nonlinear part concerns the factor interaction b_{k12} θ₁θ₂. Note the substantial amount of nonlinear variation that is picked up by the LCFA model. For comparison, the left-most column of Table 5 provides the communalities obtained from the FA model,
which are quite different from the ones obtained with the LCFA model.

4.2 MBTI Personality Items

In our second example we analyzed 19 dichotomous items from the Myers-Briggs Type Indicator (MBTI) test: 7 indicators of the sensing-intuition (S-N) dimension, and 12 indicators of the thinking-feeling (T-F) personality dimension.⁷ The total sample size was 8,344. These items were designed to measure 2 hypothetical personality dimensions, which were posited by Carl Jung to be latent dichotomies. The purpose of the presented analysis was to investigate whether the LCFA model was able to identify these two theoretical dimensions and whether results differed from the ones obtained with a traditional factor analysis.

We fitted 0-, 1-, 2-, and 3-factor models for this data set. Strict adherence to a fit measure like BIC or a similar criterion suggests that more than 2 latent factors are required to fit these data, due to violations of the local independence assumption. This is due to similar wording used in several of the S-N items and similar wording used in some of the T-F items. For example, in a three-factor solution, all loadings on the third factor are small except those for S-N items S09 and S73. Both items ask the respondent to express a preference between "practical" and a second alternative (for item S09, "ingenious"; for item S73, "innovative"). In such cases, additional association between these items exists which is not explainable by the general S-N (T-F) factor. For our current purpose, we ignore these local dependencies and present results of the two-factor model.

⁷ Each questionnaire item involves making a choice between two categories, such as, for example, between thinking and feeling, convincing and touching, or analyze and sympathize.

In contrast to our first example, the decomposition of communalities (R²_{y_k} values) in the right-most columns of Table 6 shows that a linear model can approximate the LCFA model here quite well. Only for a couple of items (T35, T49, and T70) is the total communality not explained
to 2 decimal places by the linear terms alone. The left-most columns of Table 6 compare the logit and linearized "loadings" (p_{θℓ y_k}) for each variable. The fact that the latter numbers are bounded between −1 and +1 offers easier interpretation.

[INSERT TABLE 6 ABOUT HERE]

The traditional FA model also does better here than in the first example. The first four eigenvalues are 4.4, 2.8, 1.1 and 0.9. For comparability to the LC solution, Table 7 presents the loadings for the two-factor solution under Varimax (orthogonal) and Quartimax (oblique) rotations. Unlike the first example, where the corresponding loadings showed considerable differences, these two sets of loadings are quite similar. The results are also similar to the linearized loadings obtained from the LCFA solution.

[INSERT TABLE 7 ABOUT HERE]

The right-most column of Table 7 shows that the communalities obtained from FA are quite similar to those obtained from LCFA. In general, these communalities are somewhat higher than those for LCFA, especially for items S27, S44, and S67.

Figure 1 displays the two-factor LCFA bi-plot for these data (see Magidson & Vermunt, 2001, 2004). The plot shows how clearly differentiated the S-N items are from the T-F items on both factors. The seven S-N items are displayed along the vertical dimension of the plot, which is associated with Factor 2, while the T-F items are displayed along the horizontal dimension, which is associated with Factor 1. This display turns out to be very similar to the traditional FA loadings plot for these data. The advantage of this type of display becomes especially evident when nominal variables are included among the items.

[INSERT FIGURE 1 ABOUT HERE]


appraisal lagged far behind the development of asset appraisal practice, and the serious imbalance between the two is a crucial obstacle to the development of Chinese asset appraisal. Since the second half of 2000, intangible asset appraisal has suffered from many problems, such as unreasonable appraisal procedures, imperfect disclosure of appraisal results, improper use of appraisal reports, and moral hazard in practice. In recent years, more and more civil action cases concerning asset appraisal have appeared in China, and the asset appraisal industry is usually put at a disadvantage. Asset appraisal deviation has become the focus of argument, and this phenomenon has attracted widespread attention. To improve asset appraisal quality, research on asset appraisal deviation is essential. The definition of asset appraisal deviation depends on the situation and environment, so its concept is the basis of the research. In this paper, asset appraisal deviation is defined as the ratio of the appraisal increment to the adjusted book value in the appraisal results; it reflects the degree to which the appraisal results deviate from the book value. Furthermore, the average increment rate for each year can be defined by the following formula: average increment rate = (total appraised assets − total assets after adjustment) / total assets after adjustment.
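A tiny numeric illustration of the average increment rate formula, with hypothetical totals:

```python
# Hypothetical figures, for illustration only.
total_appraised = 1_250_000.0   # total assets per the appraisal result
total_adjusted = 1_000_000.0    # total assets after book-value adjustment

# average increment rate = (appraised total - adjusted total) / adjusted total
avg_increment_rate = (total_appraised - total_adjusted) / total_adjusted
print(avg_increment_rate)
```

A value of 0.25 here would mean the appraisal result exceeds the adjusted book value by 25%.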

SPSS_11FA Principal Component Analysis (English version)

SPSS Factor Analysis: recommended steps

1. Factor analysis without factor rotation
   - KMO test of sampling adequacy: should exceed 0.7
   - Bartlett's Test of Sphericity: significance should be below 0.05
   - Find eigenvalues and latent factors by an appropriate method
2. Factor analysis with orthogonal factor rotation
   - Find the Factor Pattern Matrix F (K × J) to draw the latent structure
   - Find the Factor Scoring Matrix S (K × J) to yield factor scores for all cases
   (Steps 1 and 2 can be done together in SPSS.)
3. Factor analysis with oblique factor rotation, if necessary
   - Find the Factor Pattern Matrix F (K × J) to draw the latent structure
   - Find the Factor Scoring Matrix S (K × J) to yield factor scores for all cases
   - Find the Factor Structure Matrix Rxy (K × J) to name the latent factors
   - Find the Factor Correlation Matrix Ryy (J × J) to explore correlations among the factors

Preparing the data for factor analysis (see SPSS_11FA.sav): open the original data file survey.sav and keep the sample variables NO and C01~C15.

Factor analysis with orthogonal factor rotation:
1. Analyze / Data Reduction / Factor…
2. Move variables [C01]~[C15] from the left window into Variables.
3. Descriptives…: under Statistics check Initial solution; under Correlation Matrix check KMO and Bartlett's test of sphericity; Continue.
4. Extraction…: Method: Principal components; Analyze: Correlation matrix; Display: check Unrotated factor solution; Extract: Eigenvalues over 1; Maximum Iterations for Convergence: 25; Continue.
5. Rotation…: Method: Varimax; Display: check Rotated solution; Maximum Iterations for Convergence: 25; Continue.
6. Scores…: check Display factor score coefficient matrix; Continue.
7. OK.
Change the number of digits after the decimal point in some results to compare with the textbook's results (KMO > 0.7; Bartlett's significance < 0.05). The matrix S is shown after factor rotation by Varimax.

Factor analysis with oblique factor rotation:
1. Analyze / Data Reduction / Factor…
2. Move variables [C01]~[C15] from the left window into Variables.
3. Descriptives…: check Initial solution and, under Correlation Matrix, KMO and Bartlett's test of sphericity; Continue.
4. Extraction…: Method: Principal components; Analyze: Correlation matrix; Display: check Unrotated factor solution; Extract: Eigenvalues over 1; Maximum Iterations for Convergence: 25; Continue.
5. Rotation…: Method: Promax, Kappa: 3; Display: check Rotated solution; Maximum Iterations for Convergence: 25; Continue.
6. Scores…: check Save as variables (Method: Regression) and Display factor score coefficient matrix; Continue.
7. OK.
Change the number of digits after the decimal point in some results to compare with the textbook's results. The matrices F and S are shown after factor rotation by Promax; the rotation by Promax with Kappa = 4 yields the results shown.
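The screening criteria in step 1 (Bartlett's sphericity test and the "eigenvalues over 1" extraction rule) can also be computed outside SPSS. A minimal sketch in Python with NumPy, using the standard chi-square approximation to Bartlett's statistic; the function names are illustrative, not part of any SPSS API:

```python
import numpy as np

def bartlett_sphericity(R, n):
    """Bartlett's test that the p x p correlation matrix R is an
    identity matrix, for a sample of n cases.  Returns the chi-square
    statistic and its degrees of freedom."""
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df

def kaiser_factors(R):
    """Eigenvalues of R in descending order, plus the number of factors
    retained by the 'eigenvalues over 1' (Kaiser) rule."""
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
    return eigvals, int(np.sum(eigvals > 1.0))
```

For an identity correlation matrix the Bartlett statistic is 0 (no common variance to extract); for strongly intercorrelated items the statistic grows and the Kaiser rule retains few factors.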

The Theory and Application of Structural Equation Models (结构方程模型的理论与应用.pdf)


2012/4/16
This kind of analysis uses the covariance matrix for an integrated analysis of the model, comparing the covariance matrix implied by the researcher's hypothesized model with the covariance matrix derived from the actually collected data.
LISREL (Linear Structural Relationship)

LISREL is a computer program for covariance structure analysis, developed by the statisticians Karl G. Jöreskog and Dag Sörbom using matrix-model analysis techniques.
Total Variance Explained (Extraction Method: Principal Component Analysis)

            Extraction Sums of Squared Loadings         Rotation Sums of Squared Loadings
Component   Total    % of Variance   Cumulative %       Total    % of Variance   Cumulative %
1           3.754    46.924          46.924             3.207    40.092          40.092
2           2.203    27.532          74.456             2.217    27.708          67.800
3           1.208    15.096          89.551
From the names above, several essential characteristics of structural equation modeling can be seen: it is structural, covariance-based, and linear.
Covariance structure analysis (covariance structure analysis)

Covariance structure analysis is essentially a confirmatory model analysis: it attempts to use the empirical data collected by the researcher to confirm the hypothesized relationships among latent variables and the degree of consistency between the latent variables and their manifest indicators.
KMO and Bartlett's Test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy: .620
Bartlett's Test of Sphericity: Approx. Chi-Square = 231.285, df = 28
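The KMO statistic reported above can be computed from the correlation matrix alone, via the anti-image (partial) correlations. A minimal Python/NumPy sketch; the function name is mine, not from any package:

```python
import numpy as np

def kmo(R):
    """Kaiser-Meyer-Olkin measure of sampling adequacy for a
    correlation matrix R: off-diagonal squared correlations relative to
    squared correlations plus squared partial correlations."""
    Rinv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(Rinv), np.diag(Rinv)))
    partial = -Rinv / d                  # anti-image (partial) correlations
    np.fill_diagonal(partial, 0.0)
    r_off = R - np.eye(R.shape[0])       # off-diagonal simple correlations
    num = np.sum(r_off ** 2)
    return num / (num + np.sum(partial ** 2))
```

When partial correlations are small relative to the raw correlations, KMO approaches 1 and the data are well suited to factoring; values below about 0.7 (as in the .620 above) signal weaker adequacy.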

Factor Mixture Models: Integrating Latent Class Analysis and Factor Analysis (因子混合模型_潜在类别分析与因子分析的整合, Chen Yushuai)


$$P(y_j = b_j) = \sum_{k=1}^{K} P(c = k)\, P(y_j = b_j \mid c = k) \tag{2}$$

In Eqn. (2), the subscript j denotes the j-th indicator, k denotes the k-th latent class, and b_j takes the value 0 or 1. P(c = k) and P(y_j = b_j | c = k) are the two main parameters in an LCA. The former, the latent class probability, describes the proportion of the population in the k-th class, which is also the probability that any given respondent belongs to that class; the latter, a conditional probability, describes the probability that respondents in the k-th class take a given value on the j-th indicator
… et al. (2009) described five variants of the FMM according to the degree of parameter restriction (see Table 1), such as the latent class factor analysis model.

When the local independence assumption is hard to satisfy, LCA can cope by increasing the number of classes, but the added classes may then fail to correspond to true subpopulations; the FMM instead adds factors to explain the remaining associations among items, avoiding this problem. On the other hand, even when an added class is a true subpopulation,
Received: 2014-05-26. * Supported by National Natural Science Foundation of China projects (31271116, 31400909). Corresponding author: Wen Zhonglin, E-mail: wenzl@
Advances in Psychological Science (心理科学进展), Vol. 23
(… and M2), but the two differ in essence. First, the scale of the latent variable differs: in FA the factor is a continuous variable, whereas in LCA it is replaced by the latent class variable c. Second, their focus differs: FA focuses on classifying variables, while LCA focuses on classifying respondents. The statistical principles of LCA are mainly conditional probability and Bayes' formula. Taking M2 in Figure 1 as an example, suppose the model contains r indicators, all scored 0-1, with the associations among the indicators explained by the latent class variable c. If there are K classes, then (Clark et al., 2009):
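The formula the source leads into here is truncated, but under local independence the per-item marginal of Eqn. (2) above extends to the probability of a full response pattern, and Bayes' formula then gives the class-membership posterior. A minimal Python sketch; the two-class probabilities `pi` and conditional probabilities `cond` are hypothetical values invented for illustration:

```python
import numpy as np

# Hypothetical 2-class LCA for r = 3 binary items.
pi = np.array([0.4, 0.6])                 # P(c = k), latent class probabilities
cond = np.array([[0.9, 0.8, 0.7],         # P(y_j = 1 | c = k), one row per class
                 [0.2, 0.3, 0.1]])

def pattern_prob(y, pi, cond):
    """Marginal probability of a full 0/1 response pattern y under local
    independence, P(y) = sum_k P(c=k) prod_j P(y_j | c=k), together with
    the Bayes posterior P(c = k | y)."""
    like = np.prod(cond ** y * (1 - cond) ** (1 - y), axis=1)
    marginal = np.sum(pi * like)
    return marginal, pi * like / marginal

p, post = pattern_prob(np.array([1, 1, 0]), pi, cond)
```

The posterior assigns each respondent to classes probabilistically; classification then typically takes the class with the largest posterior.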

Structural Equation Models

Compared with multiple regression analysis, structural equation modeling also faces fewer restrictions in application. Its key strengths include the following. In path analysis, even if the independent variables show clear multi-collinearity, SEM can accommodate them all without any loss of interpretive validity. Using SEM for confirmatory factor analysis, measurement error can be reduced at its source by assigning several observed variables to a single latent variable. SEM is especially convenient in handling residuals: few statistical methods allow one to directly inspect the residual of every observed variable, or even to manipulate the correlations among those residuals. SEM is also superior in handling path coefficients: it can estimate the relationships of multiple independent variables to multiple sets of dependent variables simultaneously, and it can compare coefficients across multiple samples and multiple models. Most important of all, SEM estimates not only individual parameter coefficients but also the fit of the overall model, something far beyond the reach of many traditional statistical methods.
What researchers really want is "over-identification": over-identification means that the number of known variances and covariances among the variables exceeds the number of unknown parameters to be estimated, so that the model's degrees of freedom are positive; only then can SEM software estimate the parameters and compute the model's various fit indices. From the standpoint of reliability, more observed variables usually also means better construct reliability, as is clear from the computation of the Cronbach's alpha coefficient: within a single construct, the more similar measurement items we include, the more easily alpha rises.
Therefore, when constructing measurement items, it is best to adopt observed variables broadly, from multiple dimensions and perspectives, and not to be stingy about how many observed variables enter the research instrument. After all, items may already be deleted when the instrument undergoes validity and reliability checks in the pretest, and still more may be deleted after the field survey, when the measurement model's validity and reliability are checked against the large-scale data. If the original items are insufficient, "under-identification" or "just-identification" may well arise at the final structural-model stage, bringing needless trouble to the research process.
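The identification count described above can be written down directly: with p observed variables there are p(p+1)/2 distinct variances and covariances, and the degrees of freedom are that count minus the number of free parameters t. A small illustrative Python helper (the function name is my own):

```python
def sem_identification(p, t):
    """Identification status of a model with p observed variables and
    t free parameters: df = p(p+1)/2 - t."""
    known = p * (p + 1) // 2          # distinct variances and covariances
    df = known - t
    if df > 0:
        status = "over-identified"
    elif df == 0:
        status = "just-identified"
    else:
        status = "under-identified"
    return df, status
```

For example, six observed variables give 21 known moments; a model with 13 free parameters is over-identified with df = 8, so fit indices can be computed.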

$$Y_i = B_0 + B_1 X_{i1} + B_2 X_{i2} + \dots + B_p X_{ip} + \varepsilon_i$$

Here ε_i is the residual, the part of the dependent variable that cannot be explained by the independent variables. In the measurement model it is measurement error; in the structural model it is the disturbance or residual term, the part of an endogenous variable not explained by the exogenous variables and the other endogenous variables.
$$\eta_1 = \gamma_{11}\xi_1 + \gamma_{12}\xi_2 + \zeta_1$$
Notation

Latent variables: exogenous variables assumed to be causes are denoted ξ (xi/ksi); endogenous variables assumed to be effects are denoted η (eta).

The observed indicators of the exogenous variables ξ are called X variables; the observed indicators of the endogenous variables η are called Y variables.

Their relationships are as follows: ① ξ is unrelated to Y, and η is unrelated to X; ② the covariance matrix of ξ is denoted Φ (phi); ③ the relation of ξ to η is denoted γ, i.e., the regression matrix of the endogenous on the exogenous variables; ④ the relation between ξ and X is denoted Λx, the measurement error of X is δ, and the covariance matrix of the δ terms is Θδ; ⑥ relations among the endogenous latent variables η are denoted β.
Observed variables

Observed variables, as indicator variables reflecting latent variables, come in two kinds: reflective indicators and formative indicators.

A reflective indicator, also called an effect indicator, is one for which one or more latent variables are the cause of the observed or manifest variable; such an indicator reflects its corresponding latent variable, so the indicator is the effect and the latent variable is the cause.

By contrast, with formative indicators the indicator variables are the cause and the latent variable is defined as a linear combination of the indicators, so the latent variable becomes an endogenous variable and the indicator variables become exogenous variables without error terms.
SEM encompasses many different statistical techniques

SEM fuses the two statistical techniques of factor analysis and path analysis. It can simultaneously accommodate many endogenous and exogenous variables, the measurement errors of endogenous variables, and the indicator variables of latent variables, and it can assess the reliability, validity, and error of the variables as well as the disturbances of the overall model.

SEM emphasizes the use of multiple statistical indices

SEM deals with the degree of fit of the overall model and focuses on comparisons among whole models, so the indices consulted are plural: researchers must refer to several different indices before making an overall judgment of model fit. Whether an individual parameter is significant is not the focus of SEM.

The Factor Analysis Method (因素分析法)


Overview of factor analysis

Factor analysis is a statistical method for studying the relationships among multiple variables. It helps us identify a set of latent factors that can explain the common variation observed in the data. Factor analysis is a very important multivariate analysis method, widely applied in the social sciences, psychology, market research, and other fields.

Application scenarios

Factor analysis is commonly used for the following purposes:

1. Dimensionality reduction: when facing a large number of observed variables, factor analysis can summarize them into a few factors, lowering the dimensionality of the data.

2. Variable screening: factor analysis can determine which variables contribute most to explaining the variation in the data, so that the most relevant variables can be selected.

3. Data compression: in some situations we want to compress a large number of signals into a few latent factors, to reduce storage and computation costs.

4. Hypothesis testing: factor analysis can help verify certain hypotheses, providing support for or against a proposed factor structure when exploring the structure hidden in the data.

5. Variable interpretation: factor analysis can help explain the complex relationships among observed variables by finding the common factors behind them.

Basic principles

Communality: in factor analysis, the communality is the squared correlation between an observed variable and the latent factors; it represents the portion of the observed variable's variance that the latent factors can explain. By computing each observed variable's communality, we can determine how much each variable contributes to the latent factors.

Factor loadings: a factor loading measures the strength of the relationship between an observed variable and a latent factor. An observed variable may be related to several latent factors, with one loading for each factor. Through the factor loadings, we can understand the relationship between each observed variable and each latent factor.

Factor extraction: factor extraction means discovering the latent factors from the observed data by means of a mathematical algorithm. Principal Component Analysis (PCA) or Maximum Likelihood Estimation (MLE) is typically used for factor extraction.

Factor rotation: factor rotation rotates the extracted factors so that the loadings on each factor become clearer and easier to interpret. Common rotation methods include orthogonal rotation and oblique rotation.
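Varimax, the most common orthogonal rotation, can be implemented in a few lines with the standard SVD-based iteration. A hedged Python/NumPy sketch; the function name and defaults are mine, not taken from any particular package:

```python
import numpy as np

def varimax(L, tol=1e-8, max_iter=500):
    """Varimax rotation of a p x k loading matrix L (Kaiser's criterion,
    solved by the standard SVD-based iteration).  Returns the rotated
    loading matrix L @ R for an orthogonal rotation matrix R."""
    p, k = L.shape
    R = np.eye(k)
    obj = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() < obj * (1 + tol):   # objective no longer improving
            break
        obj = s.sum()
    return L @ R
```

Because the rotation is orthogonal, each variable's communality (row sum of squared loadings) is unchanged; only the distribution of loading across factors changes, and the recovered factors are determined only up to column order and sign.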

Structural Equation Modeling Techniques and AMOS Applications

Ryan, A. M., West, B. J., & Carr, J. Z. (2003). Effects of the terrorist attacks of 9/11/01 on employee attitudes. Journal of Applied Psychology, 88(4), 647-659.
Fit function: F = F(S, Σ(θ))

Maximum likelihood:

$$F_{ML} = \log\lvert\Sigma(\theta)\rvert + \operatorname{trace}\!\left[\Sigma(\theta)^{-1} S\right] - \log\lvert S\rvert - p$$

where p is the number of measured variables.
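F_ML above is straightforward to evaluate numerically; a small Python/NumPy sketch (the function name is mine). It is zero exactly when the model-implied matrix Σ(θ) reproduces the sample covariance matrix S, and positive otherwise:

```python
import numpy as np

def f_ml(S, Sigma):
    """ML fit function for SEM:
    F_ML = log|Sigma| + tr(Sigma^{-1} S) - log|S| - p,
    where p is the number of measured variables."""
    p = S.shape[0]
    _, logdet_sigma = np.linalg.slogdet(Sigma)
    _, logdet_s = np.linalg.slogdet(S)
    return logdet_sigma + np.trace(np.linalg.solve(Sigma, S)) - logdet_s - p
```

Minimizing F_ML over θ is what ML estimation in AMOS or LISREL does; (N − 1) times the minimized value gives the familiar model chi-square statistic.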
Structural Equation Modeling Techniques and AMOS Operation

Outline:
- Background of structural equation modeling
- Basic theory of structural equation modeling
- The structural equation analysis process
- AMOS operation

I. Background of Structural Equation Modeling

Origins of SEM:
- Psychometric roots: Spearman (1930): factor analysis
- Biological roots: Wright (1918): path analysis
Naming the latent variables

AMOS drawing result: [path diagram: latent factor ks1 with indicators x1-x3 and error terms e1-e3, and latent factor ks2 with two indicators and error terms e4-e5; the marked loadings and error paths are fixed to 1]
AMOS estimation methods: View → Analysis Properties → Estimation.

Selecting the statistics to appear in the AMOS output: View → Analysis Properties → Output.

Factor Loading Coefficients (in English)


Factor loading coefficients, also known as factor loadings, are essential statistical measures used in factor analysis. They quantify the relationship between observed variables and latent factors in a model. In this article, we will explore the concept of factor loading coefficients in the field of statistics.

Introduction to Factor Loading Coefficients

Factor loading coefficients play a crucial role in understanding the underlying structure of a set of observed variables. They indicate the strength and direction of the relationship between the observed variables and the latent factors. The coefficients can range from -1 to 1, with values closer to -1 or 1 indicating a stronger relationship.

Interpretation of Factor Loading Coefficients

To interpret factor loading coefficients, it is important to consider both the magnitude and the sign of the coefficients. A positive coefficient indicates a positive relationship between the observed variable and the latent factor, while a negative coefficient indicates a negative relationship.

The magnitude of the coefficient represents the strength of the relationship: higher magnitudes suggest a stronger association between the observed variable and the latent factor. Note that for judging strength it is the absolute value of the coefficient that matters, not its sign.

Importance of Factor Loading Coefficients

Factor loading coefficients are used to assess the quality of the factor model. They help researchers determine which observed variables are most strongly related to each latent factor. By examining the coefficients, researchers can identify the key variables that contribute most to a specific factor.

Moreover, factor loading coefficients can be used to assess the reliability and validity of a measure. A measure is considered reliable when its observed variables load highly on their designated factors.
Conversely, measures with low factor loading coefficients may indicate measurement errors or weak variables.

Calculation of Factor Loading Coefficients

Factor loading coefficients can be calculated using various methods, such as principal component analysis (PCA) or maximum likelihood estimation (MLE). These methods aim to estimate the associations between observed variables and latent factors based on the data collected.

PCA is a commonly used method for factor analysis. It transforms the observed variables into an orthogonal set of factors. The factor loading coefficients in PCA represent the correlations between the observed variables and the factors.

MLE, on the other hand, estimates the parameters of a statistical model by maximizing the likelihood function. In factor analysis, MLE is used to estimate the factor loading coefficients by maximizing the likelihood of the observed data given the latent factors.

Limitations of Factor Loading Coefficients

While factor loading coefficients provide valuable insights, they have certain limitations. First, they are sample-specific and may vary across different samples. Therefore, caution should be exercised when generalizing findings based on specific factor loading coefficients.

Second, the interpretation of factor loading coefficients depends on the context and the underlying theory. A coefficient considered substantial in one study may not be significant in another. Thus, it is essential to consider the specific research question and theoretical framework when interpreting the coefficients.

Conclusion

Factor loading coefficients are fundamental statistical measures in factor analysis. They help researchers understand the relationship between observed variables and latent factors. By interpreting these coefficients, researchers can identify key variables, assess measure reliability, and evaluate the quality of a factor model.

It is crucial to consider the magnitude and sign of the coefficients when interpreting their meaning.
However, it is important to recognize the limitations of factor loading coefficients, as they are sample-specific and context-dependent. Overall, factor loading coefficients provide valuable insights into the underlying structure of a set of observed variables.
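In the PCA case described under "Calculation" above, the loadings are the eigenvectors of the correlation matrix scaled by the square roots of the corresponding eigenvalues, and each variable's communality is its row sum of squared loadings. A minimal Python/NumPy sketch (function names are mine):

```python
import numpy as np

def pca_loadings(R, k):
    """Unrotated PCA factor loadings for a correlation matrix R:
    eigenvectors scaled by the square roots of the k largest eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1][:k]       # largest eigenvalues first
    return eigvecs[:, order] * np.sqrt(eigvals[order])

def communalities(L):
    """Communality of each variable: sum of its squared loadings."""
    return (L ** 2).sum(axis=1)
```

Retaining all components reproduces the correlation matrix exactly, so every communality is 1; with fewer components the communalities show how much of each variable's variance the retained factors explain.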

A Brief Introduction to the LISREL Software


LISREL (LInear Structural RELations) is structural equation modeling (SEM) software developed by K. G. Jöreskog & D. Sörbom. LISREL is widely regarded as the most specialized SEM analysis tool, and its authoritative status has been hard for comparable packages to displace.

It currently runs on almost every platform, including Windows, Mac OS 9/X, Solaris, AIX, RISC, OpenVMS, and Linux. LISREL's capabilities include multilevel analysis, two-stage least-squares estimation, principal component analysis, and more. LISREL 8.71 was updated in October 2004.

The newest features include maximum likelihood estimation for missing values, multilevel structural equation modeling, formal inference based on recursive modeling, multiple imputation, nonlinear multilevel regression models, and a variety of interface improvements, including support for long data and file names.

The main features of LISREL 8.7 include the following: 1. It can analyze multilevel structural equation models with both complete and incomplete data, as well as nonlinear multilevel models (two-level nonlinear regression models), a clear technical lead over comparable software.

2. It is the only package to provide an efficient Full Information Maximum Likelihood (FIML) method for handling missing data in SEM, giving models the strongest explanatory power.

(VI) Applications of SEM

[Flow diagram:]
- Theoretical model construction: literature review; model building; determination of variables; research hypotheses
- Research design: measurement of the variables; questionnaire design
- Data collection: research methods and research instruments
- Data analysis: descriptive statistics; reliability analysis; EFA; CFA; SEM
- Hypothesis testing: latent-variable hypotheses; mediation hypotheses; moderation hypotheses
SEM is an extension of the general linear model (GLM). The general linear model is the analysis-of-variance framework for testing means, and it covers regression analysis, analysis of variance and covariance, multilevel models, and other specific statistical models. These linear models include path analysis, canonical correlation, factor analysis, discriminant analysis, multivariate analysis of variance, and multiple regression analysis; each of these analyses is merely a special case of structural equation modeling.

Chapter 2: The Components of SEM. The components of SEM can be simply summarized as

… Exploratory Factor Analysis (EFA) can be used to obtain

Confirmatory Factor Analysis


III. A Worked Example

Steps for confirmatory factor analysis in Amos:
1. Draw the hypothesized model
2. Select the data file
3. Select the variables
4. Name the latent variables
5. Choose the output statistics
6. Check the relevant settings
7. Run the analysis
8. Inspect the results
…
All parameter estimates reach significance; the factor loading of recall2 is the highest, and that of cued1 is 0.610.

Chi-square value; chi-square/degrees of freedom (should be < 2).

RMSEA:

Model           RMSEA   LO 90   HI 90   PCLOSE
Default model   .832    .584    1.110
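The RMSEA reported above is computed from the model chi-square, its degrees of freedom, and the sample size. One common form of the point estimate, sketched in Python (the function name is mine; some programs divide by N rather than N − 1):

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of the root mean square error of approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
```

When the chi-square does not exceed its degrees of freedom, the estimate is truncated at 0, indicating no detectable lack of fit per degree of freedom.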
Statistical models

- Structural model: the relations among the theoretical constructs.
- Measurement model: the relations between the theoretical constructs and their observed indicators.

II. Basic Concepts

Latent and manifest variables:
- Manifest (measured) variables can be measured directly.
- Latent variables cannot be measured directly.

Job satisfaction: how should it be measured? One could ask "How satisfied are you with your work environment?" on a 1-7 scale, or use a set of questions to measure it and build a measurement model.

Common diagram symbols:
- latent variable or factor
- observed variable or indicator
- one-way influence/effect
- arc: a correlational relationship
- the unexplained part of an endogenous latent variable
- measurement error
Model components

[Diagram: the measurement model, with indicators X1-X4 and measurement errors δ1-δ4 loading on the latent variables ξ1 and ξ2; the structural model linking the latent variables; and indicators Y1-Y2 with errors ε loading on a further latent variable ξ3]

Structural Equations


Defining the unit of measurement of a latent variable

Reason: because a latent variable cannot be observed, its scale is undetermined; we must define its origin and unit of measurement before the variances of, and path coefficients involving, the latent variable can be estimated, so that the structural model is an identified model.

Method (only one of the two may be chosen):
- Select the observed variable that best represents the latent variable and fix its loading (usually set to 1, which gives the factor the same variance scale as that observed variable); the regression coefficient of the error term is likewise set to 1. Only then can the remaining parameters be estimated.
The structural model aims to test the causal path relations among the latent variables, mainly by performing path analysis on the latent variables to test the fit of the structural model. In theory, if the structural equation model is correct and the population parameters are known, the population covariance matrix will equal the theory-implied covariance matrix. The implied covariance matrix is the covariance matrix reassembled from the parameters in the regression equations, in which the vector contains all the parameters to be estimated in the model. For example,
Li Mao-neng, 2006

The definition of a structural equation model
AMOS Graphic Mode execution steps (2)

Running the AMOS/SEM analysis:

AMOS path-diagram output: choose Copy under the Edit menu to export the path diagram.

The statistics in the AMOS output: under View/Set, open Analysis Properties, click Output, and select the required statistics; the estimation method can also be chosen from the same dialog.
Percentage of variance extracted:

$$\text{AVE} = \frac{\sum (\text{standardized loadings})^2}{\sum (\text{standardized loadings})^2 + \sum_j \varepsilon_j}$$

Preferably greater than .50 (this is also an index of convergent validity).
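With standardized loadings and error variances taken as 1 − λ², the variance-extracted formula above reduces to the mean squared loading. A hedged Python sketch (the function name is mine):

```python
def ave(loadings):
    """Average variance extracted from a construct's standardized
    loadings, with each error variance taken as 1 - loading**2."""
    num = sum(l ** 2 for l in loadings)
    err = sum(1 - l ** 2 for l in loadings)
    return num / (num + err)
```

For example, loadings of .8, .7, and .6 give an AVE just under the .50 guideline, suggesting convergent validity is marginal for that construct.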
Goodness-of-fit tests: the structural model

- SEM programs provide the standard error and the t statistic for every estimated coefficient. When the sample is small and the MLE estimation method is used, adopt a more conservative significance level (.025 or .01).

SEM statistical models: the test of the measurement model must precede that of the structural model. AMOS statistical models.

The Symptom Structure of Posttraumatic Stress Reactions among Survivors of the Wenchuan Earthquake

[Funding] This research was supported by the subproject "Post-disaster stress processes and the screening of and intervention for groups at high psychological-disorder risk" of the Chinese Academy of Sciences earthquake-relief emergency research program (O8CX112011), the CAS Institute of Psychology development fund project "Ethnic differences in post-earthquake stress reactions and their social-psychological model" (O9CX154015), and a National Natural Science Foundation of China youth project (30900402). Corresponding author: Wu Kankan.
… had different effects on the avoidance symptoms but no differential effect on numbing; treatment targeting the numbing symptoms is comparatively more difficult; and it was found that the avoidance and numbing symptoms each have distinct underlying physiological mechanisms [15].

[The scale's] Cronbach's α coefficient was 0.822, and its validity indices against the SCL-90 were also high [21].

1.3 Statistical analysis

Simms et al. [examined] the PTSD of 3,695 Gulf War soldiers …

Chinese Journal of Clinical Psychology, 2011, Vol. 19, No. 3

[Figure: item loadings of the four-factor numbing model of PTSD]

Confirmatory factor analysis of the PCL results of 723 residents of the 5·12 Wenchuan earthquake area found that the fit indices of the four-factor numbing model all reached acceptable levels and were the best among the models compared, indicating that survivors of the 5·12 Wenchuan earthquake showed four symptom clusters: intrusion, avoidance, numbing, and hyperarousal. This does not match the three-factor model of DSM-IV-TR, but it is consistent with King et al.'s study of veterans and with numerous subsequent studies [4,7-12], as well as with Wang Mengcheng et al.'s domestic findings on PTSD among primary and secondary school students after the Wenchuan earthquake [20], indicating that among survivors in our disaster areas, minors and adults alike showed, beyond the three clusters of intrusion, avoidance, and hyperarousal, a fourth symptom cluster after the Wenchuan earthquake: numbing. In addition, the high correlations among the symptom clusters in this four-factor numbing model further demonstrate that PTSD is a syndrome uniting multiple symptoms, and again indicate the need for deeper study of the different subtypes of PTSD [7].

Jo reskog K G1967Some contributions to maximum likelihood factor analysis.Psychometrika32:443–82Jo reskog K G,So rbom D1996LISREL8.Scientific Software International,ChicagoLoehlin J C1998Latent Variable Models:An Introduction to Factor,Path,and Structural Analysis,3rd wrence Erlbaum Associates,Mahwah,NJMarsh H W,Balla J R,McDonald R P1988Goodness-of-fit indexes in confirmatory factor analysis:The effect of sample size.Psychological Bulletin103:391–410Miller M B1995Coefficient alpha:A basic introduction from the perspectives of classical test theory and structural equation modeling.Structural Equation Modeling:A Multidisciplinary Journal2:255–73Mueller R O1996Basic Principles of Structural Equation Modeling:An Introduction to LISREL and EQS.Springer-Verlag,New YorkMulaik S A1972The Foundations of Factor Analysis.McGraw-Hill,New YorkMulaik S A,James L R,Van Alstine J,Bennett N,Lind S, Stilwell C D1989Evaluation of goodness-of-fit indexes for structural equation models.Psychological Bulletin105:430–45 Mulvey P W,Miceli M P,Near J P1992The Pay Satisfaction Questionnaire:A confirmatory factor analysis.Journal of Social Psychology132:139–42O’Grady K E1989Factor structure of the WISC-R.Multi ariate Beha ioral Research24:177–93Raykov T1997Estimation of composite reliability for con-generic measures.Applied Psychological Measurement21: 173–84Steiger J H,Lind J M1980Statistically Based Tests for The Number of Common Factors.Paper presented at the annual meeting of the Psychometric Society.Iowa City,IA Stevens J J1995Confirmatory factor analysis of the Iowa Test of Basic Skills.Structural Equation Modeling:A Multidisciplinary Journal2:214–31Tanaka J S1993Multifaceted conceptions offit in structural equation models.In:Bollen K A,Long J S(eds.)Testing Structural Equation Models.Sage,Newbury Park,CA,pp. 
10–39Windle M,Dumenci L1999The factorial structure and construct validity of the Psychopathy Checklist-Revised(PCL-R) among alcoholic inpatients.Structural Equation Modeling:A Multidisciplinary Journal6:372–93R.O.Mueller and G.R.Hancock Factor Analysis and Latent Structure:IRT and Rasch ModelsIn education and the social sciences,we often ask subjects to respond to a set of items(questions, statements,or tasks)on survey forms,self-report inventories,and mental tests,that are coded as discrete—often dichotomous—variables.In many settings it is natural to think,in analogy with factor analysis(see Factor Analysis and Latent Structure: O er iew;Factor Analysis and Latent Structure,Con-firmatory),that there is one or more continuous latent variables for each subject—such as political agency, extrovertedness,or ability in some area of mathe-matics—that can be measured or estimated using the positive and negative responses to these items.Item Response Theory(IRT),and specializations such as the Rasch Model,refer to a set of statistical modeling and estimation tools for making inferences about one or more continuous latent variables from this kind of multivariate discrete data.1.Introduction:the Rasch ModelAs an example,consider a test of transitive reason-ing in school children(e.g.,Sijtsma and Verweij 1999).An item may consist of three rods,A,B, and C,of differing lengths.The child is given evi-dence that Length(A) Length(B),and Length(B) Length(C),and is asked to deduce a relationship between Length(A)and Length(C);his or her answer is scored1if it is correct and0if it is incorrect.Let us say that J such items,involving different types of objects and attributes(balls of varying weight,disks of varying area,etc.)and different numbers of objects, make up a test given to N school children.If the scored responses were continuous we might build a two-way additive ANOVA model in which child i’s response to item j,Xij,is the sum(or difference)of the child’s achieved transitive 
reasoning level,and the item’s innate difficulty,as determined by the attribute on which objects are compared and the number of objects being compared.However,since the responses are dichotomous Xijl0or1,we need a model specifying the probability that Xijl1,in terms of the child’s transitive reasoning ability and the item’s difficulty.One of the earliest such models,developed by the Danish mathematician Georg Rasch(Rasch1960),specifiesPj(θi) P[Xijl1Qθi,βj]lexp(θikβj)1j exp(θikβj),i l1,…,N,j l1,…,J(1)whereθirepresents child i’s transitive reasoning level, andβjrepresents the difficulty of the item:as the latent variableθiincreases,the probability of getting the item right increases,and as the difficulty parameterβj increases,the probability of getting the item rightdecreases.The Rasch model,together with similar models using different forms for Pj(θi)developed by Jane Loevinger(Loevinger1948)and Frederic M Lord(Lord1952),became the core of what is now item response theory.On the logit scale,the Rasch model has exactly the two-way additive ANOVA structure we sought:logit Pj(θi)l lnPj(θ)1k Pj(θi)lθikβj.We further assume local independence among item responses,i.e.,all Xijare conditionally independent,5244Factor Analysis and Latent Structure,Confirmatorygiven θs and βs.Letting x ij denote the observed values of X ij ,we may write the likelihood for the Rasch model asP [X ij l x ij ,i l 1,…,N ,j l 1,…,J Q θ",…,θN ;β",…,βJ ]l Ni =" Jj ="P j (θi )x ij [1k P j (θi )]"−x ij .(2)An up-to date account of the Rasch model and its various extensions is given by Fischer and Molenaar (1995),to whom we also refer for historical notes and references.The Rasch model in Eqn.1has the interesting invariance property that the odds ratio comparing two items j and k isP j (θi )1k P j (θi ):1k P k (θi )P k (θi )l exp βk k βjindependent of θi .Hence in our example,the ‘dif-ficulty’of two items involving different transitive reasoning settings can be compared,in principle,using any convenient 
sample of subjects regardless of their transitive reasoning level (in practice,subjects still contribute at least ‘small sample bias’to the estimated βs).Similarly,the odds ratio comparing two subjects is independent of the item difficulty parameters.These two invariance properties are instances of ‘specific objectivity,’a property Rasch considered important in defining a measurement model and from which Eqn.1can be derived (see Fischer and Molenaar 1995).The Rasch model is also closely related to the Brad-ley–Terry model of paired comparisons.Consider two items j and k for which βj βk .For any subject i who got only one item right,it follows from Eqn.1that the probability that the easier item was correct is P [X ij l 1Q X ij j X ik l 1,θi ,βj ,βk ]lexp(βk k βj )1j exp(βk k βj )(3)independent of θi ;this is exactly the Bradley–Terry model with parameters φj lk βj .The lack of de-pendence on θi on the right can also be used to construct empirical tests of the Rasch model,since the probability in Eqn.3can be estimated without assuming the Rasch model and should be the same in any subpopulation of subjects in which the Rasch model applies.2.Estimation in the Rasch ModelTwo estimation methods are traditionally associated with the Rasch model,joint maximum likelihood (JML)and conditional maximum likelihood (CML).One can maximize Eqn.2jointly in the θs and βs,by numerically solving the ‘normal equations’obtained by setting all partial derivatives of the logarithm of Eqn.2equal to zero;the maximizing values θi and βj are called joint maximum likelihood estimators (JMLEs).However,the JMLEs turn out to be in -consistent ,or asymptotically biased:suppose we are primarily interested in estimating βj s;it need not be the case that each βj converges to βj as we increase the subject sample size N ,keeping the number of items J fixed.Indeed,as N increases,more θi s are also added to the model:intuitively,some information must be expended on estimating them,and there is not enough 
information left over to improve the estimates of the βi s.One solution to this problem is to exploit the fact that in the Rasch model,a sufficient statistic for each θi is subject i ’s total number correct score,X i +l ΣJ j ="X ij.If the likelihood in Eqn.2is divided by the joint likelihood for the observed x i +s,the resulting conditional likelihood contains only the βj parameters,and the conditional maximum likelihood estimators (CMLEs)βj obtained by maximizing this new like-lihood are consistent (asymptotically unbiased)as N grows and J stays fixed.For this reason,CML estimates are usually preferred over JML estimates.Additional details are provided by Andersen (1980),whose earlier work demonstrated the inconsistency problem with JML,and established consistency of CML for the Rasch model.Another approach to solving the inconsistency problem with JML is to assume the θi are independent random effects following a common (discrete or continuous)distribution f (θQ λ).Integrating over θi for each subject yields the marginal likelihoodNi ="&Jj ="P j (θi )x ij [1k P j (θi )]"−x ij f (θi Q λ)d θi (4)and maximizing this with respect to βi s and λ(using an E-M algorithm,for example)yields what are called maximum marginal likelihood (MML)estimates βj and λ.These MML estimates are also consistent (asymptotically unbiased).The method based on Eqn.4can be interpreted as an empirical Bayes method;it also links the Rasch model directly with other latent variable approaches such as factor analysis,where the latent variable θi is treated as an unobserved random variable,or equivalently as completely missing data (see Factor Analysis and Latent Structure:O er iew ).Finally,it can be shown that under Eqn.4,the marginal distribution of the data follows a log-linear model for the 2J table of the formln p (x i ",…,x iJ )l αk Jj ="βj x ij j Jk =!γk χ(x i +l k )where χ(S )l 1if statement S is true,and 0otherwise.This is the log-linear model of quasisymmetry;the connection between the 
Rasch model and the quasi-5245Factor Analysis and Latent Structure:IRT and Rasch Modelssymmetry model has been independently discovered innumerable times in the literature.Lindsay et al.(1991)give a reasonably complete account of the connection with quasisymmetry and consequences for the identifiability of the distribution f (θQ λ)in a semiparametric formulation of Eqn.4.3.Parametric IRTOther parametric IRT models have been developed for situations in which the Rasch model does not fit,while retaining the ideas of dichotomous (X ij ?o 0,1q )or at least polytomous (X ij ?o 0,1,…,K k 1q )item scores,a unidimensional (or low-dimensional)latent variable θi characterizing subjects,a low-dimensional parameter βj characterizing items,and local (conditional)in-dependence of all item responses X ij given all par-ameters.An accessible review of developments in parametric IRT in the 1960s,1970s and 1980s is given by Hambleton et al.(1991).A simple generalization of the Rasch model that is often useful for dichotomously-scored items,P j (θi )lexp[αj (θi k βj )]1j exp[αj (θi j )](5)is called the two-parameter logistic (2PL)model.In the 2PL model,the slope parameters αjk are called ‘discrimination parameters,’and they control the Fisher information for estimating the θs.Again,the location parameters βj play the role of ‘difficulty parameters’;they also determine the value of θmaximizing the Fisher information.The three-par-ameter logistic (3PL)model adds a nonzero lower asymptote to Eqn.(3),to model random or exchange-able response behavior by low-θsubjects.Further generalizations,discussed in the next several para-graphs,are treated at length by Fischer and Molenaar (1995)and van der Linden and Hambleton (1997).Additional covariates and other structures can be incorporated into these models as regression terms.For example,in the linear logistic test model (LLTM),the difficulty parameters βj in the Rasch model are rewritten as linear combinations of K basic parameters ηk with 
weights q jk ,andlogit P j (θi )l θi k Kk ="q jk ηk .The matrix Q l [q jk ]is usually obtained as a regression model matrix from a priori analysis of the items;for example,in the transitive reasoning example described in Sect.1,the q jk might indicate what kind and how many objects were used for each item.By contrast,multidimensional compensatory IRT models decom-pose the unidimensional θi parameter into an item-dependent linear combination of underlying traits,e.g.,logit P j (θi )l Dd ="αjd θid k βj .For example,mathematics word problems may in-volve both a math component,θi ",and a verbalcomponent θi #in different proportions in differentproblems,as determined by the discrimination par-ameters αjd .In some settings,the αjd may be fixed apriori ,like the q jk in the LLTM model,and in other settings they may be estimated analogously to factor loadings in Factor Analysis.Multiplicative or con-junctive IRT models combine unidimensional models for components of response in a product P j (θi )l ΠD dl 1P jd (θid )where P jd (θid )are parametric unidimen-sional dichotomous response functions.The usual interpretation is that the P jd (θid )are probabilities of correctly performing component skills or subtasks,which must be done in conjunction in order to generate a correct response to the item itself;this model is sometimes referred to as the multicomponent latent trait model (MLTM).Probit versions of all the above models,in which the logit function is replaced with Φ−"(p ),the inverse standard normal distribution function,are also in use.Aside from a trivial rescaling of the parameters,there is little difference in practice between the probit and logit models;the choice depends mostly on computa-tional expedience.IRT model can also be extended to handle polyto-mous or multicategory items,X ij ?o 0,1,2,…,K k 1q ,in much the same way that logistic regression models are extended polytomous data:in the ‘graded response model’(GRM)the cumulative logit,logit P [X ij c Q θi ,βj 
],is linear in θi ;in the ‘sequential model’(SM)the continuation-ratio logit,logit P [X ij c Q X ij c k 1,θi ,βj ],is linear in θi ;and in the ‘partial credit model’(PCM)the adjacent-category logit,logit P [X ij l c j 1Q X ij ?o c ,c j 1q ,θi ,βj ],is linear in θi .Again,probit reformulations of these models are straightforward.When the discrimination parameters are all non-negative,the cumulative response curves in any of these models is nondecreasing (Hemker et al.1997),a condition referred to as ‘monotonicity.’By contrast,IRT formulations of direct-response prob-abilistic unfolding models,also known as proximity models or parallelogram models,have unimodal response curves that peak near the location βj of the item.As was the case for the Rasch model discussed in Sect.2,JML estimates for IRT models are relatively easy to set up but generally inconsistent (asymp-totically biased);CML methods can be used when simple sufficient statistics are available.MML meth-ods based on applying an E-M algorithm to Eqn.4,with P j (θi )redefined according to one of the above5246Factor Analysis and Latent Structure:IRT and Rasch ModelsFigure 1Two-way hierarchical structure for N individuals and J dichotomous response variables.Factors in the first level are independent and are multiplied together to form a likelihood for the response data matrix [X ij ].Factors in the second and third levels are alsoindependent,and covariates and special structure for the latent variables and item parameters may be introduced in these levelsparametric forms,are most common,however.The recent development of Markov Chain Monte Carlo (MCMC)integration methods for Bayesian statistics (see Marko Chain Monte Carlo Methods )provides an alternative to these methods.Patz and Junker (1999)sketch a general MCMC methodology for parametric IRT models,and Johnson and Albert (1999)survey a range of related applications from a working Bayesian statistician’s perspective.The IRT toolbox has been pushed forward in 
recent years by the needs of both large-scale educational assessment surveys (e.g.,Johnson et al.1994)and cognitively based assessment models (e.g.,Nichols et al.1995).To accommodate finer structure in the latent space,as well as to incorporate subject and item covariates in the model,it is useful to reformulate the IRT framework as a two-way hierarchical Bayes structure for N individuals and J response variables,as in Fig.1.In addition to the item response functions P j (θi )l P (θi ;βj )in Fig.1,f i and g j are prior distribu-tions on the latent variables θand item parameters βrespectively (both of which may be multidimensional),and λf and λg represent sets of hyperparameters needed to specify these prior distributions.The model in Fig.1is expressed for dichotomous items but can be generalized easily to polytomous items or combina-tions of item types,incomplete data,and so forth (e.g.,Patz and Junker 1999).The MML formulation dis-cussed in the previous paragraph is obtained by taking f i (θ)to be a single population distribution not de-pendent on j for the latent trait,and taking g j to be flat priors not depending on j for the item parameters.4.Nonparametric IRTNonparametric IRT refers to several related method-ologies for working with item response data without fully committing to any particular well-known family of parametric item response theory models.Some topics in nonparametric IRT are considered in van der Linden and Hambleton (1997)and Boomsma et al.(2001),and several current issues are surveyed in a special issue of Applied Psychological Measurement (Junker and Sijtsma 2001).Modern nonparametric scaling methods include the ‘essential independence’(EI)approach of Stout and his students and colleagues (e.g.,Stout et al.1996),and the Mokken scaling approach (e.g.,Hemker et al.1995);both methods are related to Cronbach’s alpha bound on reliability.Mokken scaling is very good at finding groups of items that are highly discriminating among subjects;Stout’s 
methods tend to identify groups of items that narrowly satisfy LI \EI,M,and U better.Ramsay (e.g.,Ramsay 1996)has developed computational tools for estimating item response functions nonparametrically,and for visualizing the dimensionality of θin terms of the surface in the J -dimensional unit cube generated by the joint like-lihood for X i ",…,X iJ as θi varies.5.Some ApplicationsParametric and nonparametric IRT methods are routinely used to assess the quality of individual items and sets of items in educational measurement work (Linn 1989)and to elucidate experimental design and statistical estimation issues when performances of examinees who sit for different versions of the same exam must be compared;such work goes under the name test equating.A similar problem arises in the scoring of sequentially-designed,computerized adap-tive tests (e.g.,Sands et al.1997).IRT modeling has also elucidated research into the assessment of so-ciological bias (differential item functioning)on stan-dardized test items (Holland and Wainer 1993)and the detection and diagnosis of subject outliers (Meijer 1996).Related work includes the study of rater effects (e.g.,Patz et al.2000)and matrix-sampling and the incorporation of group effects and other item and subject covariates in educational assessment survey work (Johnson et al.1994).In the broader social sciences and related areas,IRT has also played a substantial role.The Rasch model has long been applied in social survey work;see,for example,Duncan (1985).The Rasch model and other5247Factor Analysis and Latent Structure:IRT and Rasch ModelsIRT models can also be applied to model subject heterogeneity in estimating a closed population from multiple-recapture census data(see Censuses:History and Methods);Fienberg et al.(1999)recently surveyed this methodology and compared Bayesian IRT models with standard log-linear models in this context.Appli-cations to multiple outcomes of designed experiments and to panel data are indicated in 
the volumes by Fischer and Molenaar (1995) and van der Linden and Hambleton (1997). Sijtsma and Verweij (1999) apply nonparametric IRT to scale construction in developmental psychology; and Nichols et al. (1995) collect many applications of IRT and related Bayesian inference network models (see Bayesian Graphical Models and Networks; Mislevy 1996) in cognitive diagnosis. Applications of parametric and nonparametric IRT to psychiatric scales include Gibbons et al. (1985), Santor et al. (1995) and Kim and Pilkonis (1999). In biostatistics, Legler and Ryan (1997) use IRT to model multiple physical outcomes in the study of birth defects.

6. Summary

Item response theory (IRT) has grown from its roots in postwar mental-testing problems, through intensive use in educational measurement in the 1970s, 1980s, and 1990s, to become a mature statistical toolkit for modeling of multivariate discrete response data using subject-level latent variables. Applications of IRT can be found throughout the social sciences and related areas, from education, psychology, economics, and demography to medical research. Most parametric IRT models would be recognizable by modern statisticians as mixed-effects multivariate generalized linear models, but IRT has benefited from interaction with all parts of the statistical community.
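As a concrete illustration of that parametric toolkit (a sketch of the standard technique, not code from this article), the Rasch item response function is P(Xij = 1 | θi, βj) = exp(θi − βj) / (1 + exp(θi − βj)), and an MML-style likelihood marginalizes θ against a population distribution. The following minimal Python sketch evaluates one examinee's marginal response-pattern probability under a standard normal population distribution via Gauss–Hermite quadrature; the item difficulties and responses are invented for illustration.

```python
import numpy as np

def rasch_irf(theta, beta):
    """Rasch item response function: P(X = 1 | theta, beta)."""
    return 1.0 / (1.0 + np.exp(-(theta - beta)))

def marginal_likelihood(responses, betas, n_nodes=41):
    """MML-style marginal probability of one examinee's 0/1 response
    vector, integrating theta over a standard N(0, 1) population
    distribution by Gauss-Hermite (probabilists') quadrature."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    weights = weights / weights.sum()  # normalize to a proper density
    like = np.ones_like(nodes)
    for x, b in zip(responses, betas):
        p = rasch_irf(nodes, b)
        like *= p**x * (1.0 - p) ** (1 - x)  # conditional independence given theta
    return float(np.sum(weights * like))

betas = np.array([-1.0, 0.0, 1.5])  # hypothetical item difficulties
resp = np.array([1, 1, 0])          # one examinee's invented responses
print(marginal_likelihood(resp, betas))
```

Because the responses are conditionally independent given θ (the LI assumption discussed earlier), the marginal probabilities of all 2^J response patterns sum to one, which makes a convenient sanity check on such code.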
Almost any assessment phenomenon—from between-subject dependence due to institutional or sociological factors, to behavioral aspects of raters, to the analysis of item responses in terms of requisite subject or item features—can be expressed in the hierarchical mixture/Bayes modeling framework of Fig. 1, because of its conceptual simplicity. Recent advances in computation, and MCMC methods in particular, have made it possible to estimate a vastly wider variety of these models than would have been imaginable even in the early 1990s; however, the more complicated models typically also make higher sample-size demands. Nonparametric IRT approaches also rely on computationally intensive methods, including spline and kernel smoothing and bootstrap techniques, to estimate and test probability inequalities, stochastic ordering properties, and similar features of the models. Questions motivating both parametric and nonparametric IRT modeling inevitably involve identifying the phenomena that are worth detailed modeling, and seeing if the computational and data collection machinery can be pushed to be informative about reasonable models of these phenomena.
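The kernel-smoothing idea mentioned above can be made concrete with a short sketch (an illustration of the general technique, not of any particular published software): rank examinees by their rest score (total score excluding the item of interest) as a crude proxy ordering for θ, then estimate the item response function by a Nadaraya–Watson kernel average of that item's 0/1 responses along the ordering. The data below are simulated from a Rasch model; the bandwidth and grid are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate Rasch-type 0/1 responses: 500 examinees, 10 items.
n, J = 500, 10
theta = rng.normal(size=n)
beta = np.linspace(-1.5, 1.5, J)
probs = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
X = (rng.random((n, J)) < probs).astype(int)

def kernel_irf(X, j, bandwidth=0.2):
    """Nadaraya-Watson estimate of item j's response function.
    Examinees are placed on [0, 1] by the rank of their rest score
    (total minus item j), a proxy ordering for the latent trait."""
    rest = X.sum(axis=1) - X[:, j]
    pos = rest.argsort(kind="stable").argsort() / (len(rest) - 1.0)
    grid = np.linspace(0.0, 1.0, 21)
    # Gaussian kernel weights of each examinee at each grid point.
    w = np.exp(-0.5 * ((grid[:, None] - pos[None, :]) / bandwidth) ** 2)
    return grid, (w * X[:, j]).sum(axis=1) / w.sum(axis=1)

grid, p_hat = kernel_irf(X, j=0)
```

Under the monotonicity (M) assumption, such an estimated curve should be roughly nondecreasing along the proxy ordering, which is exactly the kind of feature the nonparametric methods discussed here are designed to examine.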
Hambleton et al. (1991) provide a fairly straightforward introduction to IRT in educational testing; Andersen (1980) presents the Rasch model in the context of other statistical models for the social sciences; and Fischer and Molenaar (1995) and van der Linden and Hambleton (1997) provide modern technical accounts. A broad range of current research issues in IRT is collected in Boomsma et al. (2001).

Bibliography

Andersen E B 1980 Discrete Statistical Models with Social Science Applications. North-Holland, New York
Boomsma A, Snijders T A B, van Duijn M A J (eds.) 2001 Essays on Item Response Theory. Springer-Verlag, New York
Duncan O D 1985 Some models of response uncertainty for panel analysis. Social Science Research 14: 126–41
Fienberg S E, Johnson M S, Junker B W 1999 Classical multilevel and Bayesian approaches to population size estimation using multiple lists. Journal of the Royal Statistical Society: Series A 162: 383–405
Fischer G H, Molenaar I W (eds.) 1995 Rasch Models: Foundations, Recent Developments, and Applications. Springer-Verlag, New York
Gibbons R D, Clark D C, von Ammon Cavanaugh S, Davis J M 1985 Application of modern psychometric theory in psychiatric research. Journal of Psychiatric Research 19: 43–55
Hambleton R K, Swaminathan H, Rogers H J (eds.) 1991 Fundamentals of Item Response Theory. Sage, Newbury Park, CA
Hemker B T, Sijtsma K, Molenaar I W 1995 Selection of unidimensional scales from a multidimensional item bank in the polytomous Mokken IRT model. Applied Psychological Measurement 19: 337–52
Hemker B T, Sijtsma K, Molenaar I W, Junker B W 1997 Stochastic ordering using the latent trait and the sum score in polytomous IRT models. Psychometrika 62: 331–47
Holland P W, Wainer H 1993 Differential Item Functioning. Erlbaum, Hillsdale, NJ
Johnson V E, Albert J 1999 Ordinal Data Modeling. Springer-Verlag, New York
Johnson E G, Mislevy R J, Thomas N 1994 Theoretical background and philosophy of NAEP scaling procedures. In: Johnson E G et al. (eds.) Technical Report of the NAEP 1992 Trial State Assessment Program in Reading. OERI, US Dept.
of Ed., Washington, DC, Chap. 8, pp. 133–46
Junker B W, Sijtsma K 2001 Nonparametric IRT in action: An overview of the special issue. Applied Psychological Measurement 25:
Kim Y, Pilkonis P A 1999 Selecting the most informative items in the IIP scales for personality disorders: An application of item response theory. Journal of Personality Disorders 13: 157–74
Legler J M, Ryan L M 1997 Latent variable models for teratogenesis using multiple binary outcomes. Journal of the American Statistical Association 92: 13–20
Linn R L (ed.) 1989 Educational Measurement, 3rd edn. Macmillan, New York
Lindsay B, Clogg C C, Grego J 1991 Semiparametric estimation in the Rasch model and related exponential response models, including a simple latent class model for item analysis. Journal of the American Statistical Association 86: 96–107
Lord F M 1952 A theory of test scores. Psychometric Society, New York
Loevinger J 1948 The technique of homogeneous tests compared with some aspects of 'scale analysis' and factor analysis.
Psychological Bulletin 45: 507–30
Meijer R R 1996 Person-fit research: An introduction. Applied Measurement in Education 9: 3–8
Mislevy R J 1996 Test theory reconceived. Journal of Educational Measurement 33: 379–416
Nichols P D, Chipman S F, Brennan R L (eds.) 1995 Cognitively Diagnostic Assessment. Erlbaum, Hillsdale, NJ
Patz R J, Junker B W 1999 Applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses. Journal of Educational and Behavioral Statistics 24: 342–66
Patz R J, Junker B W, Johnson M S submitted The hierarchical rater model for rated test items and its application to large-scale educational assessment data. Journal of Educational and Behavioral Statistics
Ramsay J O 1996 A geometrical approach to item response theory. Behaviormetrika 23: 3–17
Rasch G 1960 Probabilistic models for some intelligence and attainment tests. University of Chicago Press, Chicago
Sands W A, Waters B K, McBride J R 1997 Computerized Adaptive Testing: From Inquiry to Operation. American Psychological Association, Washington, DC
Santor D A, Zuroff D C, Ramsay J O, Cervantes P, Palacios J 1995 Examining scale discriminability in the BDI and CES-D as a function of depressive severity. Psychological Assessment 7: 131–9
Sijtsma K, Verweij A C 1999 Knowledge of solution strategies and IRT modeling of items for transitive reasoning. Applied Psychological Measurement 23: 55–68
Stout W, Habing B, Douglas J, Kim H R, Roussos L, Zhang J 1996 Conditional covariance-based nonparametric multidimensionality assessment. Applied Psychological Measurement 20: 331–54
van der Linden W J, Hambleton R K (eds.) 1997 Handbook of Modern Item Response Theory. Springer-Verlag, New York

B. W. Junker

Factor Analysis and Latent Structure: Overview

The terms factor analysis and latent structure refer to two aspects of essentially the same problem. Both are concerned with statistical problems in which some of the variables are latent, meaning that they are unobservable. Information about the latent variables has, therefore, to be obtained indirectly through indicators, also known as manifest or
observable variables. The terminology of the subject reflects the diverse origins of the subject accumulated over almost a century. 'Factor' in this context is a synonym for latent variable. From a theoretical perspective, the only distinctive thing about a latent variable model is the presence of unobservable variables. In all other respects the normal methods of statistics apply. Indeed, a latent variable problem can be regarded as a standard statistical problem in which the data on the latent variables are missing. There is no essential difference, for purposes of analysis, between a problem in which some of the data are never obtained and one in which some are lost.

1. Introduction

Latent variables arise mainly, but not exclusively, in the social sciences. This is because social science often deals in concepts which are constructs rather than the directly measurable variables which are typical of the physical sciences. The earliest example, and still one of the most important, is general intelligence or 'g.' This goes back to Spearman (1904) and was constructed to describe the variation among individuals in what appeared to be common to a wide range of mental tests. Psychology and sociology abound in such latent variables. Attitudes as well as abilities are all spoken of in the discourse of these subjects as things which occur in varying amounts and which, therefore, appear in the theory as quantitative variables. In economics, variables like business confidence play a similar role. It could be justly claimed that the aspiration of these subjects to be regarded as sciences depends on the success with which latent variables can be quantified. Latent variables can be classified into several types.
Many, like intelligence, are conceived of as continuous, in which case we are looking for a scale on which individuals can be located. In other contexts it is more appropriate to think of the latent variable as categorical. In that case individuals are supposed to belong to one of several classes which may or may not be ordered. What is true of the latent variables is, of course, true for the manifest variables, and the only essential difference between the various methods is in the types of variables for which they are appropriate. A convenient way of displaying the relationship between the methods is to introduce the fourfold classification of Table 1. We classify the manifest and the latent variables as metrical or categorical. In the former case we mean that they take values on some

Table 1
Classification of latent variable methods

                           Manifest variables
  Latent variables   Metrical                  Categorical
  Metrical           Factor analysis           Latent trait analysis
  Categorical        Latent profile analysis   Latent class analysis

Copyright © 2001 Elsevier Science Ltd. All rights reserved. International Encyclopedia of the Social & Behavioral Sciences ISBN: 0-08-043076-7
